Announcing Dyon-Snippets

The Piston project is pleased to announce Dyon-Snippets, a place to share Dyon source code and discuss library design!

Dyon is a scripting language started in 2016 by myself (Sven Nilsen, bvssvni).

Dyon started as an experiment during a period when I had a lot of time, while waiting for some important Gfx redesigns. After a week of coding, I discovered that it was possible to use a lifetime checker (like Rust), but without borrow semantics (unlike Rust), instead of a garbage collector. Combined with copy-on-write for non-stack references, this creates a very limited memory model, but one sufficient for many practical applications. It is difficult to write object-oriented code in Dyon, but it is very nice for iterating over arrays.

Some of its features:

  • Built-in support for 4D vectors and HTML colors
  • Packed loops for mathematical index notation
  • Secrets (more about this later)

The language uses dynamic loading of modules to organize code, where you have full control over module dependencies. It is very common to write a loader script, a program that runs before you run the actual program.

Here is an example:

fn main() {
    foo := unwrap(load("foo.dyon")) // Load `foo`.
    bar := unwrap(load(source: "bar.dyon", imports: [foo])) // Load `bar` with `foo` as dependency.
    call(bar, "main", []) // Run the function `main` on the `bar` module.
}

This allows a kind of programming where you easily control how stuff gets loaded, e.g. checking for updates or refreshing a module every Nth second.

I often use Dyon for problem solving, because the language has a feature called “secrets”. A secret is a hidden array of values associated with a bool or f64. The type is sec[bool] or sec[f64]. The indexed loops in Dyon are integrated with secrets.

For example, say you have a 2D array `v` and want to compute its maximum value:

m := max i, j {v[i][j]}
println(m) // Prints maximum value of `v`.

Dyon infers the range from the loop body. The code above is equivalent to:

m := max i [len(v)), j [len(v[i])) {v[i][j]}

This is a packed loop which is equivalent to:

m := max i [len(v)) {max j [len(v[i])) {v[i][j]}}
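As a rough sketch of the semantics (in Rust rather than Dyon, with made-up data), the unpacked form corresponds to plain nested loops:

```rust
// Sketch: the packed loop `max i, j { v[i][j] }` unrolled into
// explicit nested loops over the inferred ranges.
fn max_2d(v: &[Vec<f64>]) -> f64 {
    let mut m = f64::NEG_INFINITY;
    for i in 0..v.len() {          // outer loop: `max i [len(v))`
        for j in 0..v[i].len() {   // inner loop: `max j [len(v[i]))`
            if v[i][j] > m {
                m = v[i][j];
            }
        }
    }
    m
}

fn main() {
    let v = vec![vec![1.0, 5.0], vec![3.0, 2.0]];
    println!("{}", max_2d(&v)); // prints 5
}
```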

The notation max i, j {v[i][j]} is inspired by mathematics. In mathematics and physics it is very common to use indices and custom loops. It is easy to translate back and forth between equations and Dyon code, and it helps you learn mathematics as well!

The type of m is sec[f64]. You can write the following:

where_max := where(m)
println(where_max) // Prints `[i, j]`.

This is how it works:

  1. The inner max loop returns the maximum value with a secret [j].
  2. The outer max loop finds the maximum inner value and changes the secret to [i, j].

A secret propagates from the left argument in binary operators. This means you can combine the `any` and `all` loops with `max` and `min`:

// Is there any list whose maximum value is larger than 10?
m := any i {max j {v[i][j]} > 10}
if m {
    println(why(m)) // Prints `[i, j]`.
}
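To make the mechanics concrete, here is a Rust sketch (not Dyon's actual implementation; the data and helper names are made up) that carries the winning indices alongside the answer, the way the secret does:

```rust
// Inner loop: `max j { v[i][j] }`, returning the value and its secret [j].
fn row_max(row: &[f64]) -> (f64, usize) {
    let mut best = (f64::NEG_INFINITY, 0);
    for j in 0..row.len() {
        if row[j] > best.0 {
            best = (row[j], j); // secret updated to [j]
        }
    }
    best
}

// `any i { max j { v[i][j] } > 10 }`: the secret of the first row whose
// maximum exceeds the limit becomes the `why` of the answer.
fn any_max_gt(v: &[Vec<f64>], limit: f64) -> Option<Vec<usize>> {
    for i in 0..v.len() {
        let (m, j) = row_max(&v[i]);
        if m > limit {
            return Some(vec![i, j]); // the `why`: which [i, j] answered yes
        }
    }
    None
}

fn main() {
    let v = vec![vec![1.0, 2.0], vec![4.0, 12.0]];
    match any_max_gt(&v, 10.0) {
        Some(why) => println!("{:?}", why), // prints [1, 1]
        None => println!("no such list"),
    }
}
```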

In problem solving this is very convenient, because many problems can be thought of as formulating a question. When you know the right question to ask, the answer is often easy to find.

Announcing Binpool

This blog post is about a new experimental uniform binary format for particle physics: Piston-Binpool.

Background

Currently I am diving deep into marine cloud brightening to understand how humanity can avoid some severe effects of climate change by using some form of clean and reversible geoengineering. If you did not know already, there have been some very worrisome observations in 2016-2017, so I thought it was a good idea to educate myself a bit and perhaps get some input on what technological challenges there are. The scientists and engineers working on this have been quite open and even sent me papers to read! Thanks guys.

While reading about this, which seems to require an understanding of climate modelling (it is understandable if you take your time), I figured doing some basic virtual physics experiments could help my brain along the way. Nothing advanced yet, just refreshing some of the things I have played around with over the years.

One thing I discovered is that it is not obvious what kind of analysis you want to do with a virtual experiment. Instead of running simulations over and over, I would like to collect the raw data and later run algorithms that compute some functional relationship.

The advantage of this method is that you can exploit some physical similarities between systems. For example, it is commonly known that certain physical behaviors transfer knowledge between a model and the full-size system. So, collecting data about a few particles colliding in a slow-motion simulation can give you knowledge about particles colliding at high velocities over a huge area. This way you can save both time and energy, because you do not have to simulate every detail of a full system.

This is also interesting because I came across similar ideas when researching path semantics. Studying physics could help design algorithms that predict approximate behaviors of other algorithms. One long term goal is to develop a constraint solver that can assist with engineering tasks.

Motivation and design

I could not find a standard format for storing and reading particle data in such a way that you can reuse algorithms between projects, so I made my own experimental one.

Here is the format:

type format == 0 => end of stream
type format: u16, property id: u16
|- bytes == 0 => no more data, next property
|- bytes: u64, offset instance id: u64, data: [u8; bytes]

The first flag is a type format, which supports 10 common Rust numerical primitives, with vector and matrix dimensions up to 80x80.

The second flag is a unique property id that identifies a property of the particle system. This is also used to define time, e.g. as an f32 scalar.

Next the format stores the number of bytes in the data block and an offset instance id, such that you can read and write parts of the particle system (e.g. only the part that is dynamic or needed for analysis).

The number of particles in the block is derived from the number of bytes and the type format. You can also specify a custom binary format, in which case a general purpose analysis tool can just skip the block.

For each frame in a simulation, you simply write out a set of properties, and when reading back a frame you use a loop that breaks when all properties are set. More information about the usage is in the README file.
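To illustrate the layout, a property block could be written like this in Rust. This is not the official Binpool API: the type-format constant, the exact field order, and the little-endian byte order here are assumptions made for the sketch.

```rust
use std::io::{self, Write};

// Sketch: write one property block in the layout described above,
// assuming little-endian encoding for all integer fields.
fn write_property<W: Write>(
    w: &mut W,
    type_format: u16,       // hypothetical non-zero type format
    property_id: u16,       // property of the particle system
    offset_instance: u64,   // offset instance id for partial writes
    data: &[u8],
) -> io::Result<()> {
    w.write_all(&type_format.to_le_bytes())?;
    w.write_all(&property_id.to_le_bytes())?;
    w.write_all(&(data.len() as u64).to_le_bytes())?;
    w.write_all(&offset_instance.to_le_bytes())?;
    w.write_all(data)?;
    w.write_all(&0u64.to_le_bytes())?; // bytes == 0 => no more data, next property
    Ok(())
}

fn main() -> io::Result<()> {
    let mut buf: Vec<u8> = Vec::new();
    // Three f32 position components for one particle (12 bytes).
    let positions: Vec<u8> = [1.0f32, 2.0, 3.0]
        .iter()
        .flat_map(|x| x.to_le_bytes())
        .collect();
    write_property(&mut buf, 1, 0, 0, &positions)?;
    buf.extend_from_slice(&0u16.to_le_bytes()); // type format == 0 => end of stream
    println!("wrote {} bytes", buf.len());
    Ok(())
}
```

The number of particles in a block then falls out of `bytes` divided by the size implied by the type format.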

Future work

My hope is that I am able to reuse algorithms across projects, for example a command line tool that computes the energy or the number of collisions.

Existential Paths

In this post I will report a big breakthrough in the ongoing research on path semantics.

Link to paper: Existential Paths (work in progress)

Since people asked me what the heck path semantics is, I made an Illustrated History of Path Semantics with Stick Figures.

I also made an introduction to the notation in the last blog post about non-existence of monoid symmetric paths. Since then I have figured out how to expand the technique to asymmetric paths.

Path semantics is the idea that some functions have secrets, and those secrets can be expressed as functions, which in turn can have other secrets, and those secrets can also be expressed as functions etc.

This is very different from how most computers treat functions, because they only use them for computation.

For example, if you define a function:

add(a, b) = a + b

In most programming languages, the computer only sees this function as a computational procedure. You give it some values as arguments and you get a value out.

However, in path semantics, you can write things like this:

add[even] <=> eq

This means that add secretly hides the behavior of eq, which is unlocked using the even function.

When you have functions connected to other functions by functions, it gets confusing, so you say add has a path to eq by even.
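As a sketch of what the path claims (with Rust stand-ins for the functions; the tested range is arbitrary), the correspondence can be checked by brute force: applying even to the output of add behaves like eq applied to even of each input.

```rust
fn add(a: u32, b: u32) -> u32 { a + b }
fn even(x: u32) -> bool { x % 2 == 0 }
fn eq(a: bool, b: bool) -> bool { a == b }

fn main() {
    // `add[even] <=> eq`: even(add(a, b)) == eq(even(a), even(b)) for all a, b.
    for a in 0..100 {
        for b in 0..100 {
            assert_eq!(even(add(a, b)), eq(even(a), even(b)));
        }
    }
    println!("add[even] <=> eq holds on the sample");
}
```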

A path is a function in a mysterious space called “path semantical space”, where all functions are connected. When you explore and learn about this space, you can “backproject” paths as patterns that tell you something about a large number of functions.

It also happens that these paths are predictors, so you can use them in machine learning.

Path semantics is actually closely related to type theory, because you can use the notation to define sub types:

fn concat(a: [], b: []) -> [] { ... }
fn len(a: []) -> usize { ... }

// When concatenating two lists, you can think of it as sub types `[len] n` and `[len] m`
// returning a sub type `[len] n + m`.
concat(a: [len] n, b: [len] m) = [len] n + m

// Short form.
concat[len] <=> add
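As a quick sanity check of this path (a Rust sketch, with a hypothetical concat helper): the len of a concatenation equals the sum of the inputs' lens.

```rust
// Hypothetical helper standing in for Dyon's `concat`.
fn concat(a: &[i32], b: &[i32]) -> Vec<i32> {
    a.iter().chain(b.iter()).cloned().collect()
}

fn main() {
    let a = vec![1, 2, 3];
    let b = vec![4, 5];
    // `concat[len] <=> add`: len(concat(a, b)) == add(len(a), len(b)).
    let c = concat(&a, &b);
    assert_eq!(c.len(), a.len() + b.len());
    println!("{}", c.len()); // prints 5
}
```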

Creating a type checker for path semantics turned out to be a seemingly impossible problem…

…until today!

The new breakthrough is the discovery that there are other kinds of paths out there, and one particular kind called “existential path” is closely connected with normal paths.

In fact, it is so closely connected, that it might be possible to make a type checker using them some day.

Simply put, if you have e.g. a: [len] 0 then there exists a function len' such that:

a: [len] 0

0: [len'] true

This checks the sub type for all a. In this case, we know that there are lists of all lengths:

len'(x) = x >= 0

// Alternative notation.
0: (>= 0)

Since we have a concrete value 0, we can just evaluate len'(0) and check if it returns true.
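A minimal sketch of that check (in Rust, with len' written as an ordinary predicate):

```rust
// Sketch: the existential path `len'` as a plain predicate.
// It answers whether some list with the given length can exist.
fn len_prime(x: i64) -> bool {
    x >= 0
}

fn main() {
    // Type-checking `a: [len] 0` reduces to evaluating len'(0).
    assert!(len_prime(0));
    println!("0: [len'] true");
}
```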

Actually, this f' function corresponds to the post-condition of f, and a lot of smart people have already worked on this. This means we can reuse current tools for theorem proving!

You might notice that there should also be a function len'' such that:

0: [len'] true

true: [len''] true

This is correct! It could go on forever, right? Luckily this leads to a repeating sequence:

// There are only two options for all functions.
f'' <=> {id?, true_1?}

id' <=> true_1
true_1' <=> id

So, after 2 steps, we can just hard-code into the type checker everything there is to know about these functions.

This is a very recent breakthrough, but I am already thinking about how to set up the inference rules.

For example, if you type this:

a: [len] 0 & [len] (< 2)

Since these sub types are using the same function len, and 0 is a concrete value, one can reduce it to this:

0: (< 2)
0: [len'] true