Follow @PistonDeveloper on Twitter!

Shush! It has been 4 months since the last blog post; how time flies by when you do not notice!

In this post I will give a summary of some projects, and then go into more detail about some new research!

Piston-Tutorials

List of contributors (32)

Conrod

Conrod is a UI framework that makes it easy to program UIs in Rust.

  • New triangles primitive widget
  • Improved touch experience
  • Lots of bugs fixed

List of contributors (66)

Image

Image is a very popular image library with pure-Rust encoders and decoders.

  • Improved BMP support
  • Lots of bugs fixed

List of contributors (95)

Imageproc

Imageproc is a library for image processing.

  • Support for seam carving on color images
  • Sobel gradient for color images
  • Improved performance
  • More tests and documentation

List of contributors (16)

VisualRust

  • Fixed incremental build

List of contributors (14)

Dyon

Dyon is a scripting language with a lifetime checker instead of garbage collection, an object model similar to JavaScript's, and lots of other features useful for gamedev.

We are starting a new project to make a Dyon to Rust transpiler: https://github.com/pistondevelopers/dyon_to_rust

List of contributors (4)

Piston-Music

  • Support for playing sounds in addition to music
  • Change volume on both music and sound

List of contributors (3)

AdvancedResearch

AdvancedResearch is a collection of projects that explore new ideas and concepts. It has been moved to its own organization to avoid spamming PistonDevelopers members with emails.

Here are some things that happened since last blog post:

Homotopy maps are functions whose input is normalized between 0 and 1 and which generate points that are continuously connected to each other. I find this idea very cool because you can use them for rendering directly, without any extra knowledge. The challenge is to find the right API design so you get the best of both worlds: graphical editors and programming.
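To make this concrete, here is a minimal sketch in Rust, not the actual Piston API: a quadratic Bézier curve written as a homotopy map in one parameter, sampled directly for rendering. The type alias, function name and control points are made up for illustration.

```rust
/// A homotopy map in one parameter: takes an input normalized
/// to [0, 1] and returns a point on a continuous curve.
/// (Hypothetical type, not a Piston API.)
type HomotopyMap = fn(f64) -> [f64; 2];

/// A quadratic Bézier through three fixed control points,
/// written as a homotopy map.
fn bezier(t: f64) -> [f64; 2] {
    let (a, b, c) = ([0.0, 0.0], [0.5, 1.0], [1.0, 0.0]);
    let u = 1.0 - t;
    [
        u * u * a[0] + 2.0 * u * t * b[0] + t * t * c[0],
        u * u * a[1] + 2.0 * u * t * b[1] + t * t * c[1],
    ]
}

fn main() {
    let map: HomotopyMap = bezier;
    // Sampling the map is all a renderer needs to draw the curve;
    // no extra knowledge about its shape is required.
    for i in 0..=10 {
        let t = i as f64 / 10.0;
        let p = map(t);
        println!("t = {:.1} -> ({:.3}, {:.3})", t, p[0], p[1]);
    }
}
```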

At perfect intelligence, problems get solved at information-theoretic optimal performance. I used the tools of path semantics to reason about how this might work, but have not formalized it yet (I lack the right conceptual tools!). Surprisingly, it is kind of like binary search, but instead of sorting, the algorithm needs to arrange sub-types. You can order a T-shirt with the symbols of the first steps, ∃f{} (it is called a “universal existential path”).

Probabilistic paths: A new discovery

[Image: formula for probabilistic paths]

Here is a thought experiment designed to help you understand what it is about:

  1. Take a lot of monkeys
  2. Make them type randomly on a keyboard
  3. What is the chance one of them recreates Shakespeare (or Harry Potter)?

Using standard probability theory, it is easy to compute this chance, even though we will never get the opportunity to test it out in practice, because the chance is very, very tiny.
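For the keyboard experiment the standard computation really is easy. Here is a rough sketch; the keyboard size, text length and number of monkeys are made-up numbers. With an alphabet of `keys` symbols, one monkey typing `text_len` characters reproduces a fixed text with probability `(1/keys)^text_len`, so working in log space avoids floating-point underflow:

```rust
fn main() {
    let keys: f64 = 50.0;     // assumed number of keys on the keyboard
    let text_len: f64 = 1e6;  // rough character count of a long novel
    let monkeys: f64 = 1e9;   // a lot of monkeys

    // One monkey types the exact text with probability p = (1/keys)^text_len,
    // so log10(p) = -text_len * log10(keys).
    let log10_p = -text_len * keys.log10();

    // With m independent monkeys, P(at least one succeeds) = 1 - (1 - p)^m,
    // which is approximately m * p when p is tiny.
    let log10_any = monkeys.log10() + log10_p;

    println!("log10 P(one monkey)   ~ {:.0}", log10_p);
    println!("log10 P(at least one) ~ {:.0}", log10_any);
}
```

With these numbers the chance is on the order of 10^(-1,700,000), which is why the experiment will never be tested in practice.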

[Image: monkey typing on a keyboard]

In principle, there is a correct probability for any similar question we can ask, no matter how complex the experiment is or how long it takes to complete.

If you set the same monkeys to play Super Mario, what is the chance one of them will win? We do not know yet, because the code of Super Mario is much more complex than the first example. Using standard formulas for probability distributions will not get you very far. What we need is a different way of thinking about probabilities, one that can be interpreted from programs.

A probabilistic path is a transform of a program's source code, e.g. Super Mario's, such that you can compute how likely a monkey is to win the game.

In addition you need the following (sketched in code after this list):

  1. A function describing how likely a given input is
  2. A function describing what counts as a winning condition on the output
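I can only guess at the exact formula here, but for a finite input set the number a probabilistic path computes can be checked by brute force. Here is a hypothetical sketch, not path semantics notation; `win_probability` and the toy example are mine: sum the probability mass of all inputs whose outputs satisfy the winning condition.

```rust
/// Brute-force reference for what a probabilistic path computes
/// over a finite input set: the probability that `f` applied to a
/// random input satisfies the winning condition.
fn win_probability<T, U>(
    inputs: &[T],
    input_prob: impl Fn(&T) -> f64, // 1. how likely a given input is
    f: impl Fn(&T) -> U,            //    the program, e.g. Super Mario
    wins: impl Fn(&U) -> bool,      // 2. winning condition on the output
) -> f64 {
    inputs
        .iter()
        .filter(|&x| wins(&f(x)))
        .map(|x| input_prob(x))
        .sum()
}

fn main() {
    // Toy example: roll a die; you "win" if the square of the roll exceeds 20.
    let rolls: Vec<u32> = (1..=6).collect();
    let p = win_probability(
        &rolls,
        |_| 1.0 / 6.0, // uniform input distribution
        |x| x * x,     // the "program"
        |&y| y > 20,   // winning condition
    );
    println!("P(win) = {:.3}", p); // 5*5 = 25 and 6*6 = 36 win, so 2/6
}
```

A probabilistic path should give the same answer by transforming the program itself, presumably without enumerating every input, which is what would make cases like Super Mario tractable.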

A huge breakthrough in path semantics happened by extending the theory to probabilities over finite sets. Now I have a higher-order path semantical function that solves problems similar to the one above. It is called a “probabilistic path” in the language of path semantics.

I have tested it on very simple things, because it is very hard to use on complex algorithms. One open problem is how to describe in a meaningful way why the algorithm is allowed to sum positive and negative numbers while always ending up in the valid probability range between 0 and 1.