Dyon 0.13 is released!

Dyon is a rusty, dynamically typed scripting language, borrowing ideas from JavaScript, Go, mathematical notation and, of course, Rust!

In this post I will go in depth on two features of Dyon, and discuss them from the perspective of how I use and need them.

Dynamically loaded modules

One of the things that makes Dyon special is the way code is organized using dynamically loaded modules. When scaling up big projects in Rust, I noticed that each project required a configuration file to declare its dependencies. This is useful, but it introduces extra work when moving code around. Another problem is that when I work on a game and want to refresh the game logic while it is running, there are many ways to do this, and the right one depends on the specific project.

Sometimes I want to experiment with different algorithms and compare them side by side. While a version control system is great, it does not allow for the playful coding style I like. The projects I work on tend to focus on exploring some idea, and this leads to a dozen files spread across folders that do not fall into a neat namespace hierarchy. I do not want to think too much about how to name stuff; it gets in the way of productivity.

A Dyon script file does not know which other files it depends on, so you need to specify them for each project.

Setting up dependencies is the job of a loader script, which loads modules and connects them together.

For example:

fn main() {
    find := unwrap(load("src/find.dyon"))
    unop_f64 := unwrap(load("../dyon_lib/unop_f64/src/lib.dyon"))
    unop_f64_lib := unwrap(load(
        source: "src/unop_f64_lib.dyon",
        imports: [unop_f64]
    ))
    set_alg := unwrap(load("../dyon_lib/set_alg/src/lib.dyon"))
    main := unwrap(load(
        source: "src/main.dyon",
        imports: [unop_f64_lib, set_alg, find]
    ))
    call(main, "main", [])
}

Horrible, right? Yet, there are some upsides:

  1. Easy to change the loader script to reload code on the fly
  2. Easy to generate code at loading and running time
  3. Easy to control preloading of modules that do not need reloading
  4. Easy to rewrite and refactor code from Dyon to Rust
  5. One configuration file per project

In most languages, you need some complicated system for these things, but in Dyon you learn how to do it as a beginner.

The 0.13 version adds some important changes:

  1. Modules are now represented as Arc<Module> behind an Arc<Mutex<Any>> pointer
  2. External functions (Rust code) are resolved directly instead of depending on the module

The reason for 1) is that closures can escape the module boundary. Variables in Dyon are 24 bytes on 64-bit architectures, which makes it possible to fit in more stuff, like 4D vectors and secrets. I squeezed in a pointer to the closure environment, stored beside the closure pointer. This keeps track of the information required to find the relative location of loaded functions. The Arc<Module> pointer makes it possible to create the closure environment faster, instead of cloning the whole module for each closure.
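The sharing idea can be illustrated with a small Rust sketch (my own simplification with made-up type layouts, not Dyon's actual internals): each closure environment holds an Arc<Module>, so creating an environment is a cheap reference-count bump instead of a deep clone of the module.

```rust
use std::sync::Arc;

// Simplified stand-ins for illustration, not Dyon's actual types.
#[derive(Clone)]
struct Module {
    functions: Vec<String>,
}

struct ClosureEnv {
    module: Arc<Module>,
}

// Build n closure environments that all share one module allocation.
// Each `module.clone()` copies a pointer and bumps a counter; the
// Vec<String> inside the module is never duplicated.
fn make_envs(module: &Arc<Module>, n: usize) -> Vec<ClosureEnv> {
    (0..n).map(|_| ClosureEnv { module: module.clone() }).collect()
}
```

With `Module` behind a plain value instead of an `Arc`, every environment would carry its own copy of the function table.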

The reason for 2) is when I use another pattern:

fn main() {
    window := load_window()
    image := load_image()
    video := load_video()
    input := load_input()
    draw := unwrap(load(
        source: "src/draw.dyon",
        imports: [image]
    ))
    sample := unwrap(load(
        source: "src/sample.dyon",
        imports: [draw]
    ))
    animation := unwrap(load(
        source: "src/animation.dyon",
        imports: [sample]
    ))
    main := unwrap(load(
        source: "src/programs/checkerboard.dyon",
        imports: [window, input, animation, video]
    ))
    call(main, "main", [])
}

In the example above, load_window is an external function (written in Rust) that returns a module. The returned module contains other external functions, which then get imported into the other modules.

To make this logic work, I decided to use a direct function pointer and wrap it in FnExternalRef. The downside is that you cannot track down the location of the external function when something bad happens. I am crossing my fingers and hoping this will not be a big problem (famous last words).

Try expressions

A new feature in 0.13 is the ability to use try to turn failures into err and success into ok.

Link to design notes

Some background of why I care about this so much: I am working on path semantics, which is a big dream I have.

A path is a function that relates one piece of a function to another function in a space called “path semantics”, because it is a space of meaning (semantics). Normally there are a bunch of paths for a function, which might differ for arguments and return value, but they all need to work together to get to the other function.

When the same path is used everywhere, it is called a “symmetric path”.

Symmetric paths are extremely mathematically beautiful and rare.


For example:

and[not] <=> or

which is equal to, but does not mean the same as, the equation:

not(and(X, Y)) <=> or(not(X), not(Y))

It just occurred to me that this is one of De Morgan’s laws.
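The equation form of this symmetric path can be checked exhaustively. Here is a quick sketch in Rust (my illustration, not Dyon code):

```rust
// Exhaustive check of the equation form of `and[not] <=> or`
// (De Morgan's law) over both boolean values for each argument.
fn de_morgan_holds() -> bool {
    let bools = [false, true];
    bools
        .iter()
        .all(|&x| bools.iter().all(|&y| !(x && y) == (!x || !y)))
}
```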

Here is another example:

concat[len] <=> add

which is equal to, but does not mean the same as, the equation:

len(concat(X, Y)) <=> add(len(X), len(Y))

You can use these laws to do transformations like these:

add(X, Y)
concat[len](X, Y) // using a symmetric path to reach the other function
// there exists `Z` and `W` such that `X = len(Z)` and `Y = len(W)`
concat[len](len(Z), len(W))
len(concat(Z, W))
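The concat[len] law behind this transformation is easy to check numerically. A small Rust sketch (my illustration, not Dyon code):

```rust
// Check of the equation form of `concat[len] <=> add`:
// the length of a concatenation equals the sum of the lengths.
fn law_holds(x: &[i32], y: &[i32]) -> bool {
    let concatenated: Vec<i32> = x.iter().chain(y.iter()).cloned().collect();
    concatenated.len() == x.len() + y.len()
}
```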

I want to create an algorithm to explore the space of path semantics efficiently. There is currently no proof technique I know of to derive paths, except to “make a hypothesis and falsify it”, kind of like how scientists work. The only way to find out which functions work with others is by trying.

Here is a function that tests for a symmetric path candidate:

/// Infers whether a path can be used for a function
/// in a symmetric way for both arguments and result.
sym(f: \([]) -> any, p: \([]) -> any, args: []) =
    all i {is_ok(try \p([args[i]]))} &&
    is_ok(try \p([\f(args)]))

Let me take this apart and explain the pieces:

  • sym is the function name
  • f: \([]) -> any is an argument f of type closure taking [] (array) and returning any
  • p: \([]) -> any is an argument p of type closure taking [] (array) and returning any
  • args: [] is an argument args of type [] (array)
  • \p([args[i]]) calls the closure p
  • \p([\f(args)]) calls the closure f and then p
  • all i { ... } is a mathematical loop that automatically infers the range [0, len(args)) from the body
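For readers more comfortable with Rust, here is a rough analogue of sym (my own sketch, not Dyon code), modeling a path candidate as a closure returning Result:

```rust
// Rough Rust analogue of Dyon's `sym`: `p` is a symmetric path
// candidate for `f` over `args` if it succeeds on every argument
// and on the result of `f`.
fn sym<T>(
    f: impl Fn(&[T]) -> T,
    p: impl Fn(&T) -> Result<T, String>,
    args: &[T],
) -> bool {
    args.iter().all(|a| p(a).is_ok()) && p(&f(args)).is_ok()
}
```

With f as a sum function and p as a check that fails on negative numbers (both hypothetical examples), sym returns true only when the path succeeds on all arguments and on the result.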

The \ notation is used in Dyon because type checking happens in parallel with AST construction, plus some other design choices, such as closures always returning a value. I needed a design that was easy to refactor and that distinguished normal functions from closures.

The problem was that this function gets called many times with different closures, and sometimes it fails and crashes because the types are wrong.

try converts all failures into err, such that the function sym can detect a path candidate.

Now, you might think, why not use try and catch blocks?

Dyon already has a way of dealing with errors: res, which is like Result in Rust but with a trace and any as the error type. There is no need to add a second way of dealing with errors when the first one works just fine.

For Rust programmers, the try keyword can be a bit confusing, since they might be thinking of the try! macro. In Dyon, you use the ? operator instead of try!, just like in Rust. I decided to use the try keyword because it feels closer to what we mean by “trying” in natural language.

For example, when a function foo returns res, you can write:

fn foo() -> res { ... }

fn bar() -> res {
    a := try foo()?
    ...
}

This is executed as try (foo()?), returning err early so you can handle the case where the call failed. Interestingly, it gives the same result as (try foo())?.
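In Rust terms, the pattern looks roughly like this. This is a loose analogue under the assumption that Dyon failures are modeled as panics; try_run, foo and bar are hypothetical stand-ins, not Dyon's semantics exactly:

```rust
use std::panic;

// A loose analogue of Dyon's `try`: run a computation, turning a
// hard failure (panic) into Err and success into Ok.
fn try_run<T>(f: impl FnOnce() -> T + panic::UnwindSafe) -> Result<T, String> {
    panic::catch_unwind(f).map_err(|_| "failure".to_string())
}

// Hypothetical function mirroring `fn foo() -> res`.
fn foo(ok: bool) -> Result<i32, String> {
    if ok { Ok(2) } else { Err("foo failed".to_string()) }
}

fn bar(ok: bool) -> Result<i32, String> {
    // Like `a := try foo()?`: `?` propagates foo's Err early,
    // then the remaining computation is guarded against failures.
    let a = foo(ok)?;
    try_run(move || 100 / a)
}
```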

Solving Economic Inequality in MMOs

Piston is a modular game engine written in Rust. The project was started in 2014, when Rust was pre-1.0, and over the years since we have worked on various libraries and collaborated with other projects.

The Piston project is primarily about two things: maintenance and research. By sharing the maintenance burden between many developers, we get more time to focus on the projects we like to work on. This saves time for the people involved.

This post is about the research part. In particular: Economic inequality.

Why do research on economic inequality?

Economic inequality is one of the world’s biggest problems, besides climate change, poverty and disease. It is also interlinked with all the others. One could say that economic inequality partly causes all the other big problems we have. Despite it being such an obvious and huge problem, so far nobody has come up with a good solution. Why?

I do not know for certain, but after thinking a little, I can point to at least two major reasons:

  1. The lack of an environment where you can test ideas
  2. The difficulty of measuring effects precisely

So I was thinking: as games increase in size and complexity, maybe you could use them as a test environment?

Another motivation: Balancing economics in MMOs is hard. Perhaps we could do something about it?

Reducing the problem to a simpler case: MMOs

I am not an economist, and I have no intention of touching the real world economy (yuck!).

So, I came up with a simple plan, which might work:

  1. It turns out that economic inequality in MMOs is worse than anything ever measured in the real world
  2. If we can solve the problem for MMOs, then it might be possible to solve it in the real world

I also had to come up with a concept for trying out this idea, and did a lot of background research. If successful, this could be a mind-blowing experience!

I won’t go into the benefits of using this in MMOs, because the Mix-Economy project explains them in its readme.

The Mix-Algorithm: Combine Ideas!

Last year I came up with an algorithm that seems to work OK, using a combination of:

  1. Universal Basic Income
  2. Progressive Negative Fortune Tax
  3. Burning Money

Each of those 3 ideas on its own would crash an economy without some form of taxation. However, if you combine them in a clever way, they balance each other out and stabilize the economy.

Since the algorithm uses normalized currency, you can mix it on top of an existing economic system. This also makes it possible to turn it on and off to measure effects on the economy.

Link to project

So, how does it work?

Imagine a factory that leads its pipes back onto the factory floor. If people do not work hard enough, they die of the toxic smoke coming out of the pipes. This is how a free market with 0% tax would work, because by random chance somebody ends up with too little money and is excluded from the economy until somebody gives them something. When nobody wants to hire them or provide for them, they die.

I think the smoke analogy fits because people die as a direct result of living inside such a system, which becomes systematically biased and impersonal as it grows in size. Because of this effect, no country has ever run successfully without some form of wealth distribution, and needs to plaster many systems all over it to keep things running.

Severe economic inequality is the result of a systematic bias of a system that does not scale

The way I see it: it is not a problem with people, it is a SOFTWARE BUG. Swap people with genetically identical ants, everyone equally smart and capable, and you get the same problem!

OK, back to the imaginary factory:

One day an engineer visits the factory and rebuilds the pipes to let the smoke go up into the air. Wow! It leads to a large improvement for the workers. They are happier and more productive, which increases the efficiency of the factory.

Basically, that is what the mix-algorithm does. It makes a simple regulatory design choice that makes the economy more efficient at getting rid of “waste”.

Every economy needs an outlet, whether it is the poor, the rich, or a mix of both. If you print money, somebody has to lose value. However, if everyone loses value, then prices will increase and eat up the gain.

A way to solve this is as simple as it sounds horrifyingly terrible: to control inflation, burn the money in the pockets of the rich!

Frightening, right?

However, when you consider a universal basic income, it is not that simple, and perhaps not that horrifying: the rich get more because people have more money. Since the rich have fewer needs, that money is not spent, so you have to get rid of it! Burn it!

In the mix-algorithm, the money that is burned comes from the fact that you are inflating the currency by distributing it through the negative fortune tax. You are not “taxing” the rich to “pay for” the poor. The money is printed and distributed, and the outlet is money burned at the top. In sheer amounts of capital, the rich become far richer (up to 6x) when the mix-algorithm is turned on. So it is not unfair to get rid of that money at all!

The money at the top is burned independently of the needs of the many. This is how the factory lets the smoke up into the air instead of onto the floor!

Who “pays” for the economy?

It is the lack of money under the soft limit that charges the inflation. Those at the bottom charge the rewards for those above, so it is the BOTTOM who carries the economy.

This is strangely intuitive, because common sense in the real world says that workers DO THE WORK. Yet now there is an algorithm that supports this argument! (I think that’s cool!)

Incentivize people to work

In a 0% tax environment (remember, this refers to the progressive negative fortune tax, not an ordinary tax), people work primarily for the money they earn at the instant of a transaction. Secondarily, they work for interest by putting money in the bank. However, this is not true for people at the bottom, who almost never have a capital surplus.

Therefore, you end up in a situation where people at the bottom are less motivated to work. With a universal basic income, this situation gets worse, because with all needs covered, people can just stay home. How do we fix this problem?

The mix-algorithm does something clever: it gives you more money the more money you have, up to a soft limit. When people can earn just a little bit, they are more motivated, because they gain on it over time. The gains are larger at the bottom and follow the square root function up to the soft limit.
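A toy sketch of this mechanism in Rust (my own illustration with made-up parameters and function names, not the actual mix_economy code):

```rust
// Toy version of the mechanism described above: below the soft limit,
// the payout follows the square root of fortune (so relative gains are
// largest at the bottom, while having more still pays more); money
// above the soft limit is burned to control inflation.
fn mix_step(fortunes: &mut [f64], soft_limit: f64, rate: f64) {
    for f in fortunes.iter_mut() {
        if *f < soft_limit {
            // negative fortune tax: payout grows with sqrt up to the limit
            *f += rate * soft_limit * (*f / soft_limit).sqrt();
        } else {
            // burn the excess above the soft limit
            *f -= rate * (*f - soft_limit);
        }
    }
}
```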

The Mix-Algorithm is not enough: You need a Gini solver

Economic inequality is measured by the Gini coefficient.
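For reference, the Gini coefficient is half the mean absolute difference between all pairs of fortunes, divided by the mean fortune. A straightforward computation (the textbook formula, not code from the project):

```rust
// Gini coefficient of a fortune distribution: 0.0 means perfect
// equality, values approaching 1.0 mean extreme inequality.
fn gini(fortunes: &[f64]) -> f64 {
    let n = fortunes.len() as f64;
    let mean = fortunes.iter().sum::<f64>() / n;
    // sum of |a - b| over all ordered pairs
    let abs_diff: f64 = fortunes
        .iter()
        .flat_map(|a| fortunes.iter().map(move |b| (a - b).abs()))
        .sum();
    abs_diff / (2.0 * n * n * mean)
}
```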

The problem with the mix-algorithm is that it is incredibly hard to reason about. How do you set the tax rate as new players join, quit and change transaction patterns?

First I tried studying the long term behavior of Gini, which took 45 minutes of simulation per data point. After two weeks, I started thinking this approach would never get me anywhere. I estimated that gathering enough data to find a reasonable approximation of the mathematical inverse would require a supercomputer with 100_000 cores running for 1000 hours.

There had to be another way!

So I came up with the idea of using a solver, which decides the tax rate directly, and near perfectly.

The Gini solver uses convergent binary search, because inequality might sometimes increase when increasing the tax. When people have more money, they spend more, so rich people get a lot more money! As a rule of thumb, distributing money decreases inequality, but this is not always the case.
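For the well-behaved case, the idea of solving for the tax rate can be sketched as a plain binary search (my illustration, not the project's actual solver; simulate is an assumed black box mapping tax rate to measured Gini, assumed decreasing here, unlike the general non-monotone case described above):

```rust
// Binary search for the tax rate whose simulated Gini hits the target,
// assuming the simulated Gini decreases as the tax rate increases.
fn solve_rate(simulate: impl Fn(f64) -> f64, target: f64) -> f64 {
    let (mut lo, mut hi) = (0.0_f64, 1.0_f64);
    for _ in 0..60 {
        let mid = 0.5 * (lo + hi);
        if simulate(mid) > target {
            lo = mid; // inequality still too high: raise the tax rate
        } else {
            hi = mid; // at or below the target: lower the tax rate
        }
    }
    0.5 * (lo + hi)
}
```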

Here is the result of an early test with random transactions. It shows that the algorithm works pretty well for various start fortunes (the money players get when joining):

[Graph: Mix Gini]

The important thing about this graph is that you can set a Gini target across a broad range, one that covers the range of inequality measured in real world economies.

Don’t be optimistic

Even though this algorithm seems to work in early tests, I expect much tweaking is needed before it can be used in a game. Perhaps this is an overconstrained problem, or there are other effects that are not apparent under random transactions.

What is happening 3

Another update on the Piston project!

Follow @PistonDevelopers on Twitter!

This year we have focused on upgrading, improving docs and stabilizing libraries. The overall architecture of the core is finished, and minor features are being added to flesh out the API. A lot of libraries that can be used independently also saw improvements and are getting closer to stable.

The most complex upgrade in the history of Piston is done! This was due to Gfx getting an overhaul to support next-gen APIs. It took up most of my time this year, and the rest I spent developing a new scripting language (more about this later!). This is why there have not been many blog posts about the Piston project in general.


Image

A library for encoding and decoding images.

In-tree JPEG decoder was replaced with jpeg-decoder. This adds support for progressive JPEGs.

The image library now has Radiance HDR and ICO support.

Docs were improved, performance was improved, and a lot of bugs were fixed!

List of contributors (78)


Conrod

A backend agnostic UI framework.

Widgets are now composed from a small set of primitive widgets. This makes it easier to add custom backends for Conrod.

A lot of new widgets were added, a lot of bugs were fixed, and performance was improved!

Conrod now has a guide to help you get started.

For more information, see the previous blog post.

List of contributors (51)


Imageproc

A library for image processing.

Features for contrast adjustment, Canny edge detection and other algorithms were added.

List of contributors (8)


Graphics

Performance was improved significantly (6x) in the Gfx and Glium backends.

RustType is now replacing FreeType as the library for font rendering.

Colors are now standardized to use sRGB, to get the same behavior across platforms.

You can now override triangulation in the graphics backend for faster or higher quality rendering.

List of contributors (25)


Dyon

One big surprise this year was the creation of a whole new dynamically typed language, which, like Rust, provides safety without a garbage collector!

It takes the object model from JavaScript, some nice syntax from Rust and Go, replaces null with opt and res, and adds an optional type system with support for ad-hoc types. A new concept for dynamic scoping called “current objects” does the job of globals, but better. 4D vectors, vector algebra with swizzling, and HTML hex colors are built-in features. For template programming it uses an optimized link structure (faster than arrays, using less memory), which is useful when generating web pages or code. It has ? like Rust, with tracing error messages on unwrap. The syntax is standardized through Piston-Meta’s meta language.

It is a language that focuses on “scripting experience” and borrows ideas from mathematics. Indexed and packed loops, inference of ranges from loop bodies, and something called “secrets” help solve problems with less typing. These features were tested on game prototyping, machine learning and linear algebra.

Current performance (interpreted) is in the same ballpark as Python, sometimes faster, sometimes slower. Loading a “hello world” program takes 0.3 milliseconds (type checking and AST construction happen in parallel), and dynamically loaded modules make interactive programming environments easy to set up.

The 0.8 release was epic, and the 0.9 release adds some important polish. Like a baby, it took 9 months to get out of the womb, and it is now usable! We will try to keep breaking changes minimal from now on.

List of contributors (3)

Other projects

Some libraries that are not stable yet:
