Scientific Computing with Jai

I spend a fair bit of my time being angry at Fortran code while wishing it could be replaced with significantly less Jai code. Jai, if you don’t know, is a language under active development that I’ve been beta testing for the past two years. It is an amazingly powerful language, blowing pretty much everything else out of the water, but since it’s quite young there aren’t many libraries available for it yet, which means a lot of things take a fair amount of manual effort in practice – for better or worse.

It occurs to me that relatively few libraries would be needed to solve most scientific computing problems in Jai – faster, more efficiently, and with significantly less code than the equivalent Fortran (or what have you) program.

What’s needed?

The reason people use Fortran is the impression that it is somehow blazingly fast, when in reality it was blazingly fast a few decades ago; now it’s just blazingly fast compared to the bullshit languages people write most code in these days. Setting interpreted and virtualized languages aside, Fortran still appears, superficially, to be faster than, say, C, because it has very good matrix primitives and super nicely optimized linear algebra libraries. Your mileage may vary.

My contention is that Jai can do anything Fortran can do, potentially faster, but it needs some nice libraries before it can be regarded as the language for scientific programming. At minimum these would be:

- linear algebra routines on the level of BLAS/LAPACK,
- some machine learning primitives, and
- automatic differentiation.

The good news is that work is already underway to port LAPACK to Jai. There’s also somebody doing some work on machine learning type stuff, although I don’t know how far along that is.

Using the new bindings generator, it might not actually be that hard to get some of this going. Perhaps I’ll take a swing at one of these over Christmas.
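
For a sense of what such bindings would wrap, here’s a sketch of solving $Ax = b$ through LAPACK’s C interface, LAPACKE. The code sketches in this post are C++, since Jai isn’t publicly available yet, and this one assumes a system LAPACKE install (link with -llapacke):

#include <cstdio>
#include <lapacke.h>  // C interface to LAPACK (assumes it is installed)

int main() {
    // Solve the 2x2 system A x = b in place with LAPACK's dgesv.
    double A[4] = { 3.0, 1.0,
                    1.0, 2.0 };  // row-major
    double b[2] = { 9.0, 8.0 };  // overwritten with the solution
    lapack_int ipiv[2];          // pivot indices from the LU factorization

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 2, 1,
                                    A, 2, ipiv, b, 1);
    if (info != 0) {
        std::printf("dgesv failed: %d\n", (int)info);
        return 1;
    }
    std::printf("x = (%g, %g)\n", b[0], b[1]);  // expect (2, 3)
    return 0;
}

A Jai binding would wrap exactly this kind of call, ideally behind nicer array types.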

Automatic differentiation

The rabbit hole here will be automatic, or algorithmic, differentiation. It goes by various names, but let’s just say ‘AD’ here. In short, it’s a way of generating a derivative for a given function, preferably at compile time. The details mostly come down to walking the parse tree and applying the chain rule all over the place, but that description hides a lot of complexity.

Basically, imagine you have $f(x) = x^2$, represented by

f :: (x: float) -> float { return x*x; }

Then you would want to be able to get $\frac{df}{dx} = 2x$ by saying something like:

df :: D(f, x);

And the compiler would expand that appropriately (by the product rule, $\frac{d}{dx}(x \cdot x) = 1 \cdot x + x \cdot 1 = x + x$) to

df :: (x: float) -> float { return x+x; }

The reason AD is interesting is that it’s faster and more precise than numerical differentiation techniques: a finite-difference quotient suffers both truncation error from the step size and round-off error from subtracting nearly equal numbers, while AD evaluates the exact derivative expression, leaving only ordinary floating-point round-off. And while you can certainly compute some derivatives by hand, having a computational method to solve this type of problem is super powerful and takes a lot of the pain out of doing more advanced stuff, like physics simulations.
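
To make that concrete, here’s a small demonstration of how the finite-difference error bottoms out no matter how you pick the step size:

#include <cmath>
#include <cstdio>

int main() {
    // The derivative of sin at x = 1 is exactly cos(1).
    double x = 1.0, exact = std::cos(x);

    // Forward differences: truncation error shrinks with h, but
    // round-off error grows as h shrinks, so accuracy is capped.
    for (int k = 4; k <= 12; k += 4) {
        double h  = std::pow(10.0, -k);
        double fd = (std::sin(x + h) - std::sin(x)) / h;
        std::printf("h = 1e-%02d  error = %.1e\n", k, std::fabs(fd - exact));
    }
    // AD would compute cos(x) directly via the chain rule, so its
    // only error is ordinary floating-point round-off (~1e-16).
    return 0;
}

Around $h = 10^{-8}$ the two error sources balance; push $h$ smaller and the answer gets worse, which is exactly the trade-off AD sidesteps.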

While some languages implement this in libraries, there is apparently first-class support for this in Swift and some other languages. There’s also a neat proposal to add AD support to C++.
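
The library flavour typically works by operator overloading on “dual numbers” that carry a value and a derivative through every arithmetic operation. A minimal sketch (the Dual type and f are just illustrative names):

#include <cstdio>

// A dual number carries a value and its derivative together, and
// arithmetic on duals applies the chain rule step by step.
struct Dual {
    double v;  // value
    double d;  // derivative
};

Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
Dual operator*(Dual a, Dual b) {
    // Product rule: (fg)' = f'g + fg'
    return {a.v * b.v, a.d * b.v + a.v * b.d};
}

// f(x) = x^2, written generically so it also works on duals.
template <typename T>
T f(T x) { return x * x; }

int main() {
    Dual x = {3.0, 1.0};  // seed with dx/dx = 1
    Dual y = f(x);
    std::printf("f(3) = %g, f'(3) = %g\n", y.v, y.d);  // prints 9 and 6
    return 0;
}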

It looks like most of what those languages are doing could be done using Jai’s metaprogramming features. I haven’t actually tested this out at all yet, but it would be cool!
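
For a rough idea of what that could look like, here’s the same trick using C++ templates as a stand-in for Jai’s metaprogramming: the expression tree for f lives in the types, and D rewrites it with the chain and product rules at compile time, which is essentially the parse-tree walk described above. Everything here is a sketch, not how Jai would actually spell it:

#include <cstdio>

// Tiny expression-template sketch: the expression tree is encoded in
// types, and D rewrites it by the chain/product rules, so the
// derivative function is built entirely at compile time.
struct Var {  // the variable x
    static double eval(double x) { return x; }
};
template <int N> struct Const {
    static double eval(double)  { return N; }
};
template <typename A, typename B> struct Add {
    static double eval(double x) { return A::eval(x) + B::eval(x); }
};
template <typename A, typename B> struct Mul {
    static double eval(double x) { return A::eval(x) * B::eval(x); }
};

// D<E>::type is the derivative expression of E with respect to x.
template <typename E> struct D;
template <> struct D<Var>           { using type = Const<1>; };
template <int N> struct D<Const<N>> { using type = Const<0>; };
template <typename A, typename B> struct D<Add<A, B>> {
    using type = Add<typename D<A>::type, typename D<B>::type>;
};
template <typename A, typename B> struct D<Mul<A, B>> {
    // Product rule: (AB)' = A'B + AB'
    using type = Add<Mul<typename D<A>::type, B>,
                     Mul<A, typename D<B>::type>>;
};

int main() {
    using F  = Mul<Var, Var>;  // f(x)  = x * x
    using dF = D<F>::type;     // f'(x) = 1*x + x*1
    std::printf("f(3) = %g, f'(3) = %g\n",
                F::eval(3.0), dF::eval(3.0));  // prints 9 and 6
    return 0;
}

In Jai one would presumably operate on the actual syntax tree of f at compile time rather than encoding it in types, which is what would make the D(f, x) syntax above plausible.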