Happy New Year from the Piston Project!

Welcome 2017, may the earth make another beautiful ellipse around the sun! I wish you all Happy New Year!

I just published Dyon 0.14.0, now with a new feature “link loop”.

Making code generation and templating easier

You can read more details about the design of the link loop here.

Yesterday, I was trying to figure out how Dan Amelang’s Nile programming language works. It is part of the effort of the Viewpoints Research Institute (VPRI) to reinvent and revolutionize computing. VPRI does a lot of exciting stuff, and currently I am looking into Nile to get some ideas about 2D graphics.

The Piston project already uses a lot of meta parsing, inspired by OMeta2 used to create Nile. Piston-Meta makes it easy to build custom text formats, and the meta documents can also be shared and reused between projects.

For example, Eco is a tool for analyzing breaking changes in Rust ecosystems, and it uses the same meta data converter code on two different formats: Cargo.toml and a JSON dependency graph. Meta parsing does not have to be a fancy technique; it can be useful in bread-and-butter programming too!

Dyon also uses the same meta language to describe its language syntax. The benefit of using this technique is that you get closer to the intrinsic complexity of the domain you are working in. Piston-Meta is a very dense language, just useful enough to be used for parsing, and nothing else. Nile was designed to describe how 2D graphics works, but it has some very promising ideas: what about 3D, sound, etc.?

Here is an example of a Nile function/stream (source):

CalculateBounds : Bezier >> (Point, Point)
    min =  999999 : Point
    max = -999999 : Point
    ∀ (A, B, C)
        if ¬(A.y = B.y ∧ B.y = C.y)
            min' = min ◁ A ◁ B ◁ C
            max' = max ▷ A ▷ B ▷ C
    >> (min, max)

Functions in Nile are quite different from those in a typical programming language. Every function can have input and output streams in addition to arguments. Because these are processed in parallel, they do not deal with state the same way typical functions do. One Nile function corresponds to 3 C functions, divided into “prologue”, “body” and “epilogue”.

I am trying to figure out how to translate this technique to Rust, but I also want to see what it is like to write a small compiler in Dyon. So far I had to change the syntax a little bit, since Piston-Meta has no support for whitespace-sensitive syntax. If somebody wants to port OMeta2 to Rust, please do it!

Dyon has a link type: a list that can only contain bool, f64 and str. It is used for code generation and template programming.

For example, here is some Dyon code that prints out Rust code:

fn main() {
    println(gen_struct("Foo", [{name: "bar", type: "f64"}]))
}

gen_struct(name, fields: [{}]) = link {
    "pub struct "name" {\n"
    link i {
        "    pub "fields[i].name": "fields[i].type",\n"
    }
    "}\n\n"
}

In the example above we have a link { ... } block and a link i { ... } loop (new in 0.14). The block creates a link, and the link loop joins all link items created inside the body.

In case you wonder what link i { ... } means, it is sugar for link i len(fields) { ... }. Dyon looks inside the body of the loop and figures out which list uses i.

The performance is pretty good. On my laptop this program runs in 25 seconds:

fn main() {
    x := link i 100_000_000 {i", "}
}

It creates 0, 1, 2, 3, ... up to 100 million.

Other things happening

There is a lot of discussion going on in the Conrod project. You are welcome to join!

People are now starting to work more on the image library. If you want to help, see if you can solve some of the issues. oivindln now has a working deflate encoder in Rust, so perhaps we can get rid of C entirely?

Dyon 0.13 is released!

Dyon is a rusty, dynamically typed scripting language, borrowing ideas from JavaScript, Go, mathematical notation and, of course, Rust!

In this post I will go in depth about two features of Dyon, and discuss them from the perspective that I use and need them.

Dynamically loaded modules

One of the things that makes Dyon special is its way of organizing code using dynamically loaded modules. When scaling up big projects in Rust, I noticed that each project required a configuration file telling which dependencies it has. This is useful, but it introduces extra work when moving code around. Another problem: when I work on some game and want to refresh the game logic while it is running, there are many ways to do this, and the right one depends on the specific project.

Sometimes I want to experiment with different algorithms and compare them side by side. While a version control system is great, it does not allow the playful coding style I like. The projects I work on tend to focus on exploring some idea, and this leads to a dozen files spread across folders that do not fall into a neat namespace hierarchy. I do not want to think too much about how to name stuff; it gets in the way of productivity.

A Dyon script file does not know which other files it depends on, so you need to tell it for each project.

Setting up dependencies is part of a loader script that loads modules and connects them together.

For example:

fn main() {
    find := unwrap(load("src/find.dyon"))
    unop_f64 := unwrap(load("../dyon_lib/unop_f64/src/lib.dyon"))
    unop_f64_lib := unwrap(load(
        source: "src/unop_f64_lib.dyon",
        imports: [unop_f64]
    ))
    set_alg := unwrap(load("../dyon_lib/set_alg/src/lib.dyon"))
    main := unwrap(load(
        source: "src/main.dyon",
        imports: [unop_f64_lib, set_alg, find]
    ))
    call(main, "main", [])
}

Horrible, right? Yet, there are some upsides:

  1. Easy to change the loader script to reload code on the fly
  2. Easy to generate code at loading and running time
  3. Easy to control preloading of modules that do not need reloading
  4. Easy to rewrite and refactor code from Dyon to Rust
  5. One configuration file per project

In most languages, you need some complicated system for these things, but in Dyon you learn how to do it as a beginner.

The 0.13 version adds some important changes:

  1. Modules are now represented as Arc<Module> behind an Arc<Mutex<Any>> pointer
  2. External functions (Rust code) are resolved directly instead of depending on the module

The reason for 1) is closures escaping module boundaries. Variables in Dyon are 24 bytes on 64-bit architectures, which makes it possible to fit in more stuff like 4D vectors and secrets. I squeezed in a pointer to the closure environment, stored beside the closure pointer. This keeps track of the information required to find the relative location of loaded functions. The Arc<Module> pointer makes it possible to create the closure environment faster, instead of cloning the whole module for each closure.

The reason for 2) is when I use another pattern:

fn main() {
    window := load_window()
    image := load_image()
    video := load_video()
    input := load_input()
    draw := unwrap(load(
        source: "src/draw.dyon",
        imports: [image]
    ))
    sample := unwrap(load(
        source: "src/sample.dyon",
        imports: [draw]
    ))
    animation := unwrap(load(
        source: "src/animation.dyon",
        imports: [sample]
    ))
    main := unwrap(load(
        source: "src/programs/checkerboard.dyon",
        imports: [window, input, animation, video]
    ))
    call(main, "main", [])
}

In the example above, load_window is an external function (written in Rust) that returns a module. The returned module contains other external functions, which then get imported into the other modules.

To make this logic work, I decided to use a direct function pointer and wrap it in FnExternalRef. The downside is that you cannot track down the location of the external function when something bad happens. I am crossing my fingers and hoping this will not be a big problem (famous last words).

Try expressions

A new feature in 0.13 is the ability to use try to turn failures into err and success into ok.

Link to design notes

Some background of why I care about this so much: I am working on path semantics, which is a big dream I have.

A path is a function that relates one piece of a function to another function, in a space called “path semantics”, because it is a space of meaning (semantics). Normally there is a bunch of paths for a function, which might differ between arguments and return value, but they all need to work together to get to the other function.

When the same path is used everywhere, it is called a “symmetric path”.

Symmetric paths are extremely mathematically beautiful and rare.

Example:

and[not] <=> or

which is equivalent to, but does not mean the same as, the equation:

not(and(X, Y)) <=> or(not(X), not(Y))

It just occurred to me that this is one of De Morgan’s laws.

Here is another example:

concat[len] <=> add

which is equivalent to, but does not mean the same as, the equation:

len(concat(X, Y)) <=> add(len(X), len(Y))

You can use these laws to do transformations like these:

add(X, Y)
concat[len](X, Y) // using a symmetric path to reach the other function
// there exists `Z` and `W` such that `X = len(Z)` and `Y = len(W)`
concat[len](len(Z), len(W))
len(concat(Z, W))

I want to create an algorithm to explore the space of path semantics efficiently. There is currently no proof technique I know of to derive them, except to “make a hypothesis and falsify it”, kind of like scientists do. The only way to find which functions work with others is by trying.

Here is a function that tests for a symmetric path candidate:

/// Infers whether a path can be used for a function
/// in a symmetric way for both arguments and result.
sym(f: \([]) -> any, p: \([]) -> any, args: []) =
    all i {is_ok(try \p([args[i]]))} &&
    is_ok(try \p([\f(args)]))

Let me take this apart and explain the pieces:

  • sym is the function name
  • f: \([]) -> any is an argument f of type closure taking [] (array) and returning any
  • p: \([]) -> any is an argument p of type closure taking [] (array) and returning any
  • args: [] is an argument args of type [] (array)
  • \p([args[i]]) calls the closure p
  • \p([\f(args)]) calls the closure f and then p
  • all i { ... } is a mathematical loop that automatically infers the range [0, len(args)) from the body

The \ notation is used in Dyon because type checking happens in parallel with AST construction, plus some other design choices, such as closures always returning a value. I needed a design that was easy to refactor and that differed between normal functions and closures.

The problem was that this function gets called many times with different closures, and sometimes it fails and crashes because the types are wrong.

try converts all failures into err, such that the function sym can detect a path candidate.

Now, you might think, why not use try and catch blocks?

Dyon already has a way of dealing with errors: res, like Result in Rust but with a trace and any as error type. There is no need to add a second way of dealing with errors, when the first method works just fine.

For Rust programmers, the try keyword can be a bit confusing, since they might be thinking of the try! macro. In Dyon, you use the ? operator instead of try!, just like in Rust. I decided to use the try keyword because it feels closer to what we mean by “trying” in natural language.

For example, when a function foo returns res, you can write:

fn foo() -> res { ... }

fn bar() -> res {
    a := try foo()?
    ...
}

This is executed as try (foo()?), returning err early, so you can handle the case where it failed. Interestingly, it gives the same result as (try foo())?.

Solving Economic Inequality in MMOs

Piston is a modular game engine written in Rust. The project was started in 2014 when Rust was pre-1.0, and over the last few years we have worked on various libraries and collaborated with other projects.

The Piston project is primarily about two things: maintenance and research. By sharing the maintenance burden among many developers, we get more time to focus on the projects we like to work on. This saves time for the people involved.

This post is about the research part. In particular: Economic inequality.

Why do research on economic inequality?

Economic inequality is one of the world’s biggest problems, besides climate change, poverty and disease. It is also interlinked with all the others. One could say that economic inequality partly causes all the other big problems we have. Despite being such an obvious and huge problem, so far nobody has come up with a good solution. Why?

I do not know for certain, but after thinking a little, at least I can point to two major reasons:

  1. Lack of an environment where you can test ideas
  2. Difficult to measure effects precisely

So I was thinking: as games increase in size and complexity, maybe you could use them as a test environment?

Another motivation: Balancing economics in MMOs is hard. Perhaps we could do something about it?

Reducing the problem to a simpler case: MMOs

I am not an economist, and I have no intention of touching the real world economy (yuck!).

So, I came up with a simple plan, which might work:

  1. It turns out that economic inequality in MMOs is worse than ever measured in the real world
  2. If we can solve the problem for MMOs, then it might be possible to solve it in the real world

I also had to come up with a concept for trying out this idea, and did a lot of background research. If successful, this could be a mind-blowing experience!

I won’t go into the benefits of using this in MMOs, because the Mix-Economy project explains them in its readme.

The Mix-Algorithm: Combine Ideas!

Last year I came up with an algorithm that seems to work OK using a combination of:

  1. Universal Basic Income
  2. Progressive Negative Fortune Tax
  3. Burning Money

Each of those 3 ideas on its own would crash an economy without some form of taxation. However, if you combine them in a clever way, they balance each other out and stabilize the economy.

Since the algorithm uses normalized currency, you can mix it on top of an existing economic system. This also makes it possible to turn it on and off to measure effects on the economy.

Link to project

So, how does it work?

Imagine a factory that leads its exhaust pipes back onto the factory floor. If people do not work hard enough on the floor, they die of the toxic smoke coming out of the pipes. This is how a free market with 0% tax would work: by random chance, somebody ends up with too little money and is excluded from the economy until somebody gives them something. When nobody wants to hire them or provide for them, they die.

I think the smoke analogy fits because people die as a direct result of living inside such a system, which becomes systematically biased and impersonal as it grows in size. Because of this effect, no country has ever run successfully without some form of wealth distribution; every country has had to plaster many systems all over it to keep things running.

Severe economic inequality is the result of a systematic bias of a system that does not scale

The way I see it: it is not a problem with people, it is a SOFTWARE BUG. Swap people with genetically identical ants, everyone equally smart and capable, and you get the same problem!

OK, back to the imaginary factory:

One day an engineer comes and visits the factory and rebuilds the pipes to let the smoke go up in the air. Wow! It leads to a large improvement for the workers. They are happier and more productive, which increases the efficiency of the factory.

Basically, that is what the mix-algorithm does. It makes a simple regulatory design choice that makes the economy more efficient at getting rid of “waste”.

Every economy needs an outlet, whether it is the poor, the rich, or a mix of both. If you print money, somebody has to lose value. However, if everyone loses value, then prices will increase and eat up the gain.

A way to solve this is as simple as it sounds horrifyingly terrible: to control inflation, burn the money in the pockets of the rich!

Frightening, right?

However, when you consider a universal basic income, it is not that simple, and perhaps not that horrifying: the rich get more because people have more money. Since the money is not spent when the rich have less need for it, you have to get rid of it! Burn it!

In the mix-algorithm, the money that is burned comes from the fact that you are inflating the currency by distributing through negative fortune tax. You are not “taxing” the rich to “pay for” the poor. The money is printed and distributed, and the outlet is money burned at the top. In sheer amounts of capital, the rich become far richer (up to 6x) when the mix-algorithm is turned on. So it is not unfair to get rid of that money at all!

The money at the top is burned independently of the needs of the many. This is how the factory lets the smoke up into the air, instead of onto the floor!

Who “pays” for the economy?

It is the lack of money under the soft limit that charges the inflation. Those at the bottom charge the rewards for those above, so it is the BOTTOM who carries the economy.

This is strangely intuitive, because common sense in the real world says that workers DO THE WORK. Yet now there is an algorithm that supports this argument! (I think that’s cool!)

Incentivize people to work

In a 0% tax environment (remember, this is about progressive negative fortune tax, not ordinary tax), people work primarily to get the money they earn at the instant of the transaction. Secondarily, they work for interest by putting the money in the bank. However, this is not true for people at the bottom, who almost never have a capital surplus.

Therefore, you end up in a situation where people at the bottom are less motivated to work. With a universal basic income, this situation gets worse, because with all needs covered, people can just stay home. How do we fix this problem?

The mix-algorithm does something clever: it gives you more money the more money you have, up to a soft limit. When people can earn just a little bit, they are more motivated, because they gain on it over time. The gains are larger at the bottom, and follow the square root function up to the soft limit.

The Mix-Algorithm is not enough: You need a Gini solver

Economic inequality is measured by the Gini coefficient.

The problem with the mix-algorithm is that it is incredibly hard to reason about. How do you set the tax rate when new players join, quit and change transaction patterns?

First I tried studying the long-term behavior of Gini, which took 45 minutes of simulation per data point. After two weeks, I started thinking this approach would never get me anywhere. I estimated that gathering enough data to find a reasonable approximation of the mathematical inverse would require a supercomputer with 100_000 cores running for 1000 hours.

There had to be another way!

So, I came up with the idea of using a solver that decides the tax rate directly, and near perfectly.

The Gini solver uses convergent binary search, because inequality might sometimes increase when increasing the tax: when people have more money, they spend more, so rich people get a lot more money! As a rule of thumb, distributing money decreases inequality, but this is not always the case.

Here is a result of an early test with random transactions. It shows that the algorithm works pretty well for various start fortunes (the money players get when joining):

Mix Gini

The important thing about this graph is that you can set a Gini target across a broad range, one that covers the range of inequality measured in real-world economies.

Don’t be optimistic

Even though this algorithm seems to work in early tests, I expect much tweaking is needed before it can be used in a game. Perhaps this is an overconstrained problem, or there are other effects that are not apparent under random transactions.
