What is Happening 7

Follow @PistonDeveloper at Twitter!

This blog post is a brief summary of what happened the past 8 months in the Piston project.

Piston (Core)

List of contributors (72)

Conrod

Conrod is a UI framework that makes it easy to program UIs in Rust.

List of contributors (89)

Image

Image is a very popular image library with pure-Rust encoders and decoders.

Versions 0.20 and 0.21 were released with changes you can read here (too long to list in this post).

List of contributors (134)

Imageproc

Imageproc is a library for image processing.

List of contributors (26)

Dyon

Dyon is a scripting language with a lifetime checker instead of garbage collection, an object model similar to JavaScript's, and lots of other features useful for gamedev.

  • Added 4D matrices
  • In-loops added for easier communication between threads
  • Improved docs
  • Improved edge cases in syntax, type checker and runtime
  • Faster parsing overall due to Piston-Meta upgrade
  • Internal refactoring of intrinsics

List of contributors (10)

Turbine

Turbine is a long-term project to develop a game engine with a built-in editor.

Currently, some components are developed and tested separately, to be used later in the game engine.

AdvancedResearch

Part of the Piston project is research, which moved to its own organization since it is not directly related to game development.

The reading sequences have been re-organized into two parts:

On the software side, the most exciting news is:

The work on the Control Problem of AI and Zen Rationality resulted in LOST (Local Optimal Decision Theory). The environment in which this algorithm performs optimally falls outside classical Instrumental Rationality, and it might therefore be used to study Zen Rationality (extended reasoning about higher order goals). This decision theory describes an analogue of a “panic” state for formal agents. One mathematical property of this algorithm is that there is no expected utility loss from changing the decision theory, which means it might serve as a building block in Operator Triggered Intermediate Decision Theories. Such agents might have a higher level of mathematically provable safety. It was also proven that AI boxing is only decidable if the world is divided into finite states (paper).

Some other areas of progress:

Example of ongoing topic: Semantics of choices

Due to the many new papers and ideas floating around, I chose to focus here on a particular topic to give you a taste of what kind of research is going on: the semantics of making a choice, and how it might be grounded physically.

In one way a choice is very simple, in another way it is very deep and complex.

The basic problem of understanding what choices are is that they are very easy to understand starting from the axiom of path semantics, but very hard to understand as something grounded in physical reality. According to path semantics, this grounding must exist in order for mathematics to make sense, but we don’t know what it is.

A choice might be thought of as an action where one forgets something (or erases a resource), and then “feeds in” some new information required to obtain a new resource, in order to make new choices. Ideas from a discussion with Adam Nemecek resulted in the paper about Adversarial Paths.

Much about adversarial paths is not easy to understand, but they are very useful as a building block for other mathematical languages. For example, this syntax can be used to formalize Adversarial Discrete Topology, which might be used to understand, e.g., game theory and agents in environments of unknown complexity.

An important property that is known informally about choices is the following:

The major difference between Turing machines and informal theorem proving is that Turing machines eliminate choices.

The word “decidable” often appears in relation to Turing machines, but this might be misguiding, as it suggests some enabling property. Instead, “decidable” might be better understood as a disabling property, in the sense of eliminating choices. This goes much deeper than just distinguishing decidable algorithms from undecidable ones.

Much of the semantics that makes mathematics useful is based on some concept of choice in one way or another, but any realization of an automatic procedure for such semantics requires a reduction of choice into a decidable computable process. Hence, viewing choices as a form of resource, there exists some kind of game (as in game theory) intrinsic to the nature of mathematics and computers, which does not arise from the choices of an opponent, but from the semantics of expressions in various mathematical languages.

The semantics of choices relative to other parts of mathematics is hard, but the physical grounding of choices is even harder.

In trying to understand extreme anthropic observer selection effects, it is speculated that this constraint of eliminating choices is an emergent phenomenon. We might be living inside a universe that has strange underlying physical laws that permit some violation of the principle of “making a choice”, while all semantics that we use in practice (including computers and physical machines) is a result of viewing this physical reality through the lens of an extreme observer effect.

While the foundations of choices are a mystery at this point, the concept serves as a useful one for arguing for the uniqueness of emergent mathematical semantics such as the unit interval on the real numbers. The unit interval is fundamental for fields such as geometry, where it appears as the domain of homotopy maps. We want to know whether basic assumptions about the physical universe are unique in some sense, such that there cannot exist fundamentally different universes with some kind of “strange semantics of geometry”. A better understanding of this topic could bring us closer to understanding how to think about physical laws where choices are not eliminated.

Improved Organizational Security

After detecting suspicious behavior on Christmas Eve 2018, I (bvssvni) reviewed security policies and made the following organizational security improvements:

  • Reduced attack surface by separating write access to repositories from crates.io publishing permissions
  • Restricted conditions of getting write access to repositories
  • Restricted conditions of getting crates.io publishing permissions
  • Recommended maintainers to add collaborators to specific repositories when needed

Since everything goes through PRs, most people do not need write access.

When reviewing the logs of all projects, I concluded that most contributions are made by a handful of people with specialist knowledge. It seems that most work in recent years requires specialist knowledge or familiarity with specific codebases, such that the amount of available work for people who want to contribute in general, but are not able to specialize, is shrinking. Most people would be better off just making PRs to specific projects, and the likelihood that they need repeated access shrinks over time.

Due to the shifting focus from general contributions toward specialization, there is less need to increase the number of members with write access. The majority of projects are now entering a maintenance and stabilization period.

Important Organization Notice: Expecting Decreased Available Work Without Specialist Knowledge During Next 5 Years

The two major activities of the Piston project - maintenance and research, where research has always required specialist knowledge - plus the choice of a modular architecture, mean that over the next 5 years more and more low-hanging fruit will likely be picked, and the amount of available work not requiring specialist knowledge is expected to decrease.

Most active contributors are heavily invested in specific projects, and new contributors are often interested in existing specific projects or in creating new ones that require specialist knowledge. This means we might benefit from making it easier to do specialist work over time, and general organizational development is expected to have less benefit beyond serving this role.

Even version updates have been done in batches without the need for major changes. This is done most efficiently by one person using Eco’s output and passing the information on to projects that might use it to integrate updates with plans for new releases. This is expected to happen approximately once a month or less in the future.

The need for architectural software design is practically zero, as people seem to find it economically efficient to combine their own set of preferred libraries to solve a specific problem, instead of making major design changes to an existing one to solve a more general class of problems. Most design changes are focused on keeping backward compatibility, to keep the same set of problems solvable through modularity.

The need for bug fixing is very low despite the size of the project. The excellent language design of Rust and good developer tools likely deserve the credit.

This means that the Piston project must either:

  1. Focus more on special knowledge and projects
  2. Increase ambitions for the project overall

The economically efficient trend seems to be 1), judging from what people have chosen to focus on in the past. It is therefore unlikely that we will increase the overall ambition; instead, we will focus more on maintenance requiring specialist knowledge, plus research.

Research has the most potential for opening up new available work, but this is a slow-going process.

The characteristics of the need for specialist knowledge are the following:

  1. Less need for marketing the project or attracting new members
  2. More need for training people interested in becoming specialists

The organization might therefore benefit from working on documenting and presenting topics in ways that make it easier for new people to specialize.

What is Happening 6

Follow @PistonDeveloper at Twitter!

This blog post is a brief summary of what happened the past 7 months in the Piston project.

Piston (Core)

List of contributors (69)

Piston-Examples

I forgot to mention in the last blog post that there is a new example using rs-tiled and Piston together.

List of contributors (34)

Conrod

Conrod is a UI framework that makes it easy to program UIs in Rust.

List of contributors (84)

Image

Image is a very popular image library with pure-Rust encoders and decoders.

Recently we made a collective effort to reduce the burden of maintenance and got rid of the PR queue. Thanks to the people making PRs and helping each other with reviews! We started an image working group where you can chat with people working on Rust image libraries.

List of contributors (118)

Imageproc

Imageproc is a library for image processing.

List of contributors (21)

Dyon

Dyon is a scripting language with a lifetime checker instead of garbage collection, an object model similar to JavaScript's, and lots of other features useful for gamedev.

Dyon-Interactive is now upgraded with many new features.

You can now install dyongame on your computer:

cargo install piston-dyon_interactive --example dyongame

To run, type dyongame <file.dyon>

Other news:

List of contributors (7)

Turbine

Turbine is a long-term project to develop a game engine with a built-in editor.

Currently, some components are developed and tested separately, to be used later in the game engine.

AdvancedResearch

Part of the Piston project is research, which moved to its own organization since it is not directly related to game development.

I have been busy the past half year working on the control problem of artificial super-intelligent agents (ASI). Basically, no one yet knows how to solve this important research problem, but we believe we are getting closer.

There will only be a few bullet points here; the rest you have to start exploring here.

  • A “Polite Zen Robot” (PZR) might be made safely extensible by using neutral judgements (Link to paper)
  • Granular judgments indicate that rational agents might cooperate if their future identity is uncertain (Link to paper)
  • @forefinger suggested a mechanism for bounded utility functions in the human brain linked to the role of serotonin (Link to paper)
  • @forefinger made a mathematical model of the cerebellum (!) (Link to repository)
  • We will start combining Piston and AdvancedResearch to create simulated environments for agents (and use other Rust projects as well!) (Link to repository)

What is Happening 5

Follow @PistonDeveloper at Twitter!

Shush! It has been 4 months since the last blog post; how time flies by when you do not notice!

In this post I will give a summary on some projects, and then go into more details about some new research!

Piston-Tutorials

List of contributors (32)

Conrod

Conrod is a UI framework that makes it easy to program UIs in Rust.

  • New triangles primitive widget
  • Improved touch experience
  • Lots of bugs fixed

List of contributors (66)

Image

Image is a very popular image library with pure-Rust encoders and decoders.

  • Improved BMP support
  • Lots of bugs fixed

List of contributors (95)

Imageproc

Imageproc is a library for image processing.

  • Support seam carving for color images
  • Sobel gradient for color images
  • Improved performance
  • More tests and documentation
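As a rough illustration of the Sobel gradient mentioned above, here is a minimal sketch on a single-channel grayscale buffer. The function name and data layout are illustrative assumptions, not imageproc's actual API:

```rust
// Minimal sketch of a Sobel gradient magnitude on a grayscale image stored
// row-major as a flat byte buffer. Illustrative only; not imageproc's API.
fn sobel_magnitude(pixels: &[u8], width: usize, height: usize) -> Vec<u16> {
    // 3x3 Sobel kernels for horizontal and vertical gradients.
    let gx: [[i32; 3]; 3] = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]];
    let gy: [[i32; 3]; 3] = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]];
    let mut out = vec![0u16; width * height];
    for y in 1..height - 1 {
        for x in 1..width - 1 {
            let (mut sx, mut sy) = (0i32, 0i32);
            for ky in 0..3 {
                for kx in 0..3 {
                    let p = pixels[(y + ky - 1) * width + (x + kx - 1)] as i32;
                    sx += gx[ky][kx] * p;
                    sy += gy[ky][kx] * p;
                }
            }
            // Approximate the magnitude with |gx| + |gy| to stay in integers.
            out[y * width + x] = (sx.abs() + sy.abs()) as u16;
        }
    }
    out
}

fn main() {
    // A 3x3 image with a vertical edge: dark left columns, bright right column.
    let img = vec![0, 0, 255, 0, 0, 255, 0, 0, 255];
    let grad = sobel_magnitude(&img, 3, 3);
    println!("{}", grad[4]); // strong response at the center pixel
}
```

Extending this to color images, as imageproc did, means running the same kernels per channel and combining the responses.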

List of contributors (16)

VisualRust

  • Fixed incremental build

List of contributors (14)

Dyon

Dyon is a scripting language with a lifetime checker instead of garbage collection, an object model similar to JavaScript's, and lots of other features useful for gamedev.

Starting a new project to make a Dyon to Rust transpiler: https://github.com/pistondevelopers/dyon_to_rust

List of contributors (4)

Piston-Music

  • Support for playing sounds in addition to music
  • Change volume on both music and sound

List of contributors (3)

AdvancedResearch

AdvancedResearch is a collection of projects that explore new ideas and concepts. It moved to its own organization to avoid spamming PistonDevelopers with emails.

Here are some things that happened since last blog post:

Homotopy maps are functions whose input is normalized between 0 and 1 and which generate points that are continuously connected with each other. I find this idea very cool because you can use them for rendering directly without any extra knowledge. The challenge is to find the right API design so you get the best of both worlds: graphical editors and programming.
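The idea of rendering directly from a homotopy map can be sketched in plain Rust. The names below are illustrative only, not an actual AdvancedResearch API; the example uses a quadratic Bezier curve as one concrete homotopy map.

```rust
// Sketch of the homotopy-map idea: a curve is a function from the unit
// interval [0, 1] to points, and rendering is just sampling it densely.
// Names here are illustrative, not an actual API.

/// A quadratic Bezier curve as a homotopy map from t in [0, 1] to (x, y).
fn bezier(p0: (f64, f64), p1: (f64, f64), p2: (f64, f64)) -> impl Fn(f64) -> (f64, f64) {
    move |t| {
        let u = 1.0 - t;
        (
            u * u * p0.0 + 2.0 * u * t * p1.0 + t * t * p2.0,
            u * u * p0.1 + 2.0 * u * t * p1.1 + t * t * p2.1,
        )
    }
}

/// Sample any homotopy map into n+1 connected points, ready for a line strip.
fn sample(map: impl Fn(f64) -> (f64, f64), n: usize) -> Vec<(f64, f64)> {
    (0..=n).map(|i| map(i as f64 / n as f64)).collect()
}

fn main() {
    let curve = bezier((0.0, 0.0), (1.0, 2.0), (2.0, 0.0));
    // Five connected points along the curve; a renderer only needs this.
    let pts = sample(curve, 4);
    println!("{:?}", pts[0]); // the first endpoint, p0
    println!("{:?}", pts[2]); // the curve's midpoint
}
```

Because any function with the same `Fn(f64) -> (f64, f64)` shape plugs into `sample`, a graphical editor could manipulate control points while the programmer side composes maps freely.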

At perfect intelligence, problems get solved at information-theoretically optimal performance. I used the tools of path semantics to reason about how this might work, but have not formalized it yet (I lack the right conceptual tools!). Surprisingly, it is kind of like binary search, but instead of sorting, the algorithm needs to arrange sub-types. You can order a T-shirt with the symbols of the first steps ∃f{} (it is called a “universal existential path”).

Probabilistic paths: A new discovery

formula for probabilistic paths

Here is a thought experiment designed to help you understand what it is about:

  1. Take a lot of monkeys
  2. Make them type randomly on a keyboard
  3. What is the chance one of them recreates Shakespeare (or Harry Potter)?

Using standard probability theory, it is easy to compute this chance, even though we will never get the opportunity to test it out in practice, because it is very, very tiny.
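The computation for the typing monkeys really is easy: the chance of reproducing a specific n-character text on a k-key keyboard in one attempt is (1/k)^n. A minimal sketch (the key and length numbers are illustrative assumptions):

```rust
// The chance that a monkey typing uniformly at random on a k-key keyboard
// reproduces a specific n-character text in one attempt is (1/k)^n.
// The number is far too small for f64, so we work with its base-10 logarithm.
fn log10_chance(keys: u32, text_len: u32) -> f64 {
    // log10((1/k)^n) = -n * log10(k)
    -(text_len as f64) * (keys as f64).log10()
}

fn main() {
    // Assume roughly 130,000 characters for a play and a 50-key keyboard.
    let log_p = log10_chance(50, 130_000);
    // The probability is around 10^(-220,000): easy to compute, hopeless to test.
    println!("chance = 10^{:.0}", log_p);
}
```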

monkey typing on keyboard

In principle, there is a correct probability for any similar question we can ask, no matter how complex the experiment is or how long it takes to complete.

If you have the same monkeys play Super Mario, what is the chance one of them will win? We do not know that yet, because the code of Super Mario is much more complex than the first example. Using standard formulas for probability distributions will not get you very far. What we need is a different way of thinking about probabilities, one that can be interpreted from programs.

A probabilistic path is a transform of the source code of e.g. Super Mario, such that you can compute how likely a monkey is to win the game.

In addition you need:

  1. A function describing how likely a given input is
  2. A function describing what is a winning condition from the output
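For a finite toy "game", these two ingredients can be combined by brute force: sum the weight of every input whose output satisfies the winning condition. This is what a probabilistic path would let you compute without enumerating inputs; the function names below are illustrative only.

```rust
// Brute-force winning probability over a finite input set: the sum of the
// weights of inputs whose program output satisfies the winning condition.
// Names are illustrative; this is not path semantics notation.
fn win_probability<I: Copy>(
    inputs: &[I],
    input_weight: impl Fn(I) -> f64, // how likely a given input is
    program: impl Fn(I) -> i32,      // the program under study
    wins: impl Fn(i32) -> bool,      // winning condition on the output
) -> f64 {
    inputs
        .iter()
        .map(|&i| if wins(program(i)) { input_weight(i) } else { 0.0 })
        .sum()
}

fn main() {
    // Toy game: roll a fair six-sided die; you win if the square of the
    // roll is greater than 10. Rolls 4, 5, 6 win, so the probability is 0.5.
    let rolls: Vec<i32> = (1..=6).collect();
    let p = win_probability(&rolls, |_| 1.0 / 6.0, |r| r * r, |out| out > 10);
    println!("{:.3}", p);
}
```

For Super Mario the input set is astronomically large, which is exactly why a transform of the source code, rather than enumeration, is needed.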

A huge breakthrough in path semantics happened by extending the theory to probabilities on finite sets. Now I have a higher order path semantical function that solves problems similar to the one above. It is called a “probabilistic path” in the language of path semantics.

I have tested it on very simple things, because it is very hard to use on complex algorithms. One open problem is how to describe in a meaningful way why the algorithm is allowed to sum positive and negative numbers while always ending up in the valid probability range between 0 and 1.
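A familiar case where signed sums still land in [0, 1] is inclusion-exclusion, which may help build intuition for the open problem (this is an analogy of mine, not a result from the papers):

```rust
// Inclusion-exclusion adds and subtracts probabilities, yet the result is
// always a valid probability in [0, 1]. Probabilistic paths exhibit a
// similar signed-sum behavior.
fn union_probability(p_a: f64, p_b: f64, p_both: f64) -> f64 {
    // P(A or B) = P(A) + P(B) - P(A and B)
    p_a + p_b - p_both
}

fn main() {
    // Rolling one fair die: A = "even" (1/2), B = "greater than 3" (1/2),
    // both = {4, 6} (1/3). The union is {2, 4, 5, 6}, probability 2/3.
    let p = union_probability(0.5, 0.5, 1.0 / 3.0);
    println!("{:.3}", p); // prints "0.667"
}
```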
