Contents

Four Years of Jai

I’ve been programming for long enough to be righteously cantankerous about a lot of things. The list of languages, frameworks and libraries I’ve worked with professionally or on personal projects is too long to list – but it includes everything from C and assembly languages through C++, Pascal and Delphi, through Java and Clojure, through Perl, PHP, Python, Javascript, Typescript and so on. I’ve tinkered with Rust, APL, Uiua, Erlang and Haskell. I’ve been around the block a few times.

So let me just open this discussion by saying: I’ve seen the good, the bad and the ugly, and I know how big the “ugly” category is – how ridiculously small the “good” category is.

Which is why, when I first heard of Jai, I was intrigued. The central focus made sense: a language of comparable performance to C, but with modern idioms, conveniences and tooling – a language aiming to be a powerful alternative to C++ in environments where performance is everything, while emphasizing the importance of programmer morale. This seemed like music to my ears, and I followed the occasional updates for years.

Then, in early 2020, I got invited to join the Beta. From what I could tell I was about the 20th-or-so person to join it.

Now it’s four years in, and even though the language is still in closed Beta (I’ll come back to that), I have been using Jai professionally for the last three years – a decision some might consider controversial, but I’ll address that in this essay.

What I’m going to cover here is:

  1. The high level overview
  2. Things I like
  3. Things I don’t like
  4. Using Jai professionally
  5. The status of the Beta
  6. The road forward

But, this essay will also be 0-indexed:

0. The lost era of good software

Software has been getting slower at a rate roughly equivalent to the rate at which computers are getting faster.

The causes of this are manifold. Some of it is attributable to the maxim that became popular in the 90’s, that software developers were more expensive than faster hardware, so they should be encouraged to not waste their time writing good code if bad code written quickly will suffice.

Part of it is due to architectural changes made to CPUs, which introduced a layered CPU cache system and broke a lot of assumptions that had held true for decades up until that point. In particular, CPUs and RAM used to have roughly equivalent clock speeds, and fetching data from RAM would take 1-2 CPU cycles, so it wasn’t very expensive. But as clock speeds diverged and RAM grew in size, that stopped being true – now it can take up to 200 CPU cycles for a memory read to return. As a result, caching is more important than ever, and many patterns that weren’t harmful before, or were even actively encouraged, such as linked lists where each item is allocated at a random location on the heap, are now insanely expensive.

Some of it is compounded by the development, in particular since the early 90’s, of scripting languages that are meant to be used for everyday software development – the likes of Perl, Ruby, Python, PHP and Javascript. What these brought to the table was a sense of “high level”-ness; that you could write code in a way that was abstracted away from the particulars of the underlying architecture by way of a bytecode interpreter or a virtual machine, or in some cases a just-in-time compiler. This presented many benefits, in particular a reduced need to be concerned with resource management, a task relegated to a garbage collector or such. Suddenly programmers were being encouraged to pretend they knew nothing about the hardware their program was destined to run on. Just assume that resources are abundant, that allocation and deallocation are practically free, and that if things start getting sluggish you can just ask management for more servers.

This came at a cost, but in many cases the efficiency hit was easily justified. Some languages went as far as to make the case that the efficiency tradeoff itself was a feature: that the existence of a virtual machine that hogged memory and reduced your code execution speed made it more “portable”. “Write once, run anywhere,” they claimed, except in practice it was never quite all that.

Some of the slowdown comes from various ideologies and methodologies – including but not limited to Object Oriented Programming, RAII (Resource Acquisition Is Initialization) and Clean Code – having turned out to be conceptual traps. They provided a sense of achievement and righteousness, justified by terms like “zero cost abstraction”, “encapsulation” and “cleanliness”, which in practice carried huge performance hits and a host of other issues.

And some of the slowdown comes because programming paradigms have largely failed to keep up with the rapid expansion from single-core to multi-core and multi-threading. Many of the general purpose concurrency and parallelism models being pushed turn out to have unintuitive performance bottlenecks or brain-breaking gotchas, and for some reason most languages simply haven’t gotten this right. An example is async/await, a pattern increasingly polluting Javascript and Python codebases in the name of performance, which will sometimes yield weird compute stalls because of how their task queue is built, and further lead to “function coloring” effects which bifurcate codebases and cause massive headaches. Go’s channels model is pretty great if and only if you’re moving around small amounts of data, but the moment you start sending large assets it breaks pretty badly, and that warning isn’t on the tin.

Fast forward to 2024, and the vast majority of CPUs in the world spend the vast majority of their time in one form of resource starvation or another. This can be due to low level issues like data cache stalls, instruction cache stalls, branch prediction misses, dining philosophers and the like, or higher level effects like Javascript engines going into massive garbage collection runs every now and then, because most Javascript programmers are entirely unaware of the memory allocation cost associated with each anonymous function call, which they litter around their codebases with the false presumption of zero overhead.

The net effect of this is that the software you’re running on your computer is effectively wiping out the last 10-20 years of hardware evolution; in some extreme cases, more like 30 years. The impact of this on things like server costs, environmental footprint, user experience, and overall outcomes for everybody is staggering.

I’m not even going to start talking about the negative effects of weak and dynamic typing. That would be a rant. One of the perennial problems I see, especially when working in interpreted languages like Python and Javascript, is that in the absence of static, compile-time type checking, many bugs cannot be discovered except at runtime, and even then only if the correct code path is lit up with the incorrect type of data. This alone amplifies the need for test coverage by an order of magnitude: you need to test not only for success under correct input conditions, but also actively for weird breaking conditions when the program gets input of the wrong type – or even of the correct type in a loose sense but the wrong shape (such as with dictionary/object member assumptions) – all of which a static type checker would catch nearly for free.

In short: modern software is slow and buggy.

The good news is that people seem to be wising up to all of this. Slowly, new languages, often based on the LLVM toolchain (for better or worse), are creeping into the public consciousness, bringing to the table the idea that “low level” performance and “high level” semantics aren’t necessarily mutually exclusive. That it’s possible to build robust, maintainable, fast software without selling your soul. Rust leads the charge in terms of popularity, but is annoying to work with and increasingly it’s becoming clear that refactoring large Rust codebases can be a minor nightmare because of the types of “ownership plumbing” required. Other languages have emerged with smaller agendas: Zig, Nim, Odin and V are all worth mentioning as “newlangs” which try to bridge the void. Zig in particular is a language I’d like to spend more time with.

But all of this brings us back to Jai, the language I want to “review” here.

1. Jai, the language for good programmers

What appealed to me immediately about Jai was Jonathan Blow’s observation that most languages were in one way or another marketed as being “easy for beginners”. Which in practice means that the syntax and semantics are specifically built in order to prevent obvious footguns, often by way of obscuring details like memory management or types, which might be important. The idea of building a language specifically around the needs of experienced programmers was intriguing.

To be clear, that doesn’t mean that the language needs to be unwelcoming to less experienced programmers. Nor does it mean that the language is intentionally made dangerous, with sharp edges and scary bits. It just means that there isn’t a promise that you can’t get yourself into trouble if you’re not careful.

But what particularly stands out when you use Jai isn’t the sense of absolute power that you do in fact have. No, it’s something more subtle than that. It’s that the language is actually simple. It doesn’t have many bells or whistles, it doesn’t have weird syntactic sugar, it doesn’t have many edge cases. In fact, there’s a stated goal of explicitly not having any undefined behavior. Either the behavior is defined and intentional, or it’s a mistake which will be fixed.

But this simplicity doesn’t mean a lack of power. On the contrary. As one user put it, “the real amazement starts once you see how well all the features work together.”

Take something as simple as struct. In most languages, you’ll have some way of declaring a data structure of some kind. At their most basic, Jai’s structs are very simple. For instance:

Character :: struct {
    name        : string;
    age         : u32;
    super_power : SuperPowers;
}

But you don’t have to look far before the power of combining using, #as, struct polymorphism, and notes starts to manifest. And that’s before we even talk about features like type restrictions, macros and metaprogramming. I won’t go into the semantics of these here, because there are better places for that, but just understand: you can get more power than C++’s classes and templates allow, with significantly less conceptual overhead, less architectural effort, and less code overall.
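As a rough illustration of how these combine – hypothetical names, and syntax as I understand it from publicly shared snippets, so details may vary between beta versions – `using` flattens a member’s fields into the outer struct, `#as` additionally allows implicit conversion, and notes attach metadata that metaprograms can read:

```jai
Entity :: struct {
    position : Vector2;   // Vector2 assumed to come from the Math module
    name     : string;
}

Player :: struct {
    // `using` makes Entity’s members directly accessible on Player;
    // #as additionally lets a *Player pass where a *Entity is expected.
    #as using base : Entity;

    health : u32 = 100;  @NoSerialize  // a note, inspectable at compile time
}

damage :: (e: *Entity, amount: u32) { /* works on anything #as-castable to Entity */ }

// p : Player;
// p.position.x = 10;    // no p.base.position needed, thanks to `using`
// damage(*p, 5);        // *Player converts to *Entity, thanks to #as
```

The names `Vector2`, `@NoSerialize` and `damage` are illustrative, not from the original.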

I’ve sometimes rewritten old C/C++ or Python code in Jai, and almost always see around a 30% reduction in code, and often a reduction in complexity as well. Just the other day I wrote an algorithm in Jai that I knew I’d need to rewrite in TypeScript, because in Jai it would be easier to visualize its internals and I’d spend less time arguing with the type system. The Jai version ended up being 20% less code and ran 178 times faster than the TypeScript version under Deno. But that’s just TypeScript for you.

Of course, there is a price to pay. For the most part, Jai is intuitive, easy to work with, and fabulously powerful. But when you do get yourself into trouble, debugging can be tricky in any language. Thankfully, Jai’s compiler error messages are almost always easy to read, easy to reason about, and the errors are almost always located correctly. It’s a small thing, but it has so much effect on usability.

It also comes with useful facilities like a memory debugger which make tracing memory issues quite easy. It has good debugging symbols (and no name mangling!), and behaves quite nicely. It has one (and only one) first-class string concept, and the type system is overall quite clean.

So, put simply, yes, you can shoot yourself in the foot, and the caliber is enormous. But you’re being treated like an adult the whole time, and provided with a solid array of tools with which to almost instantaneously heal the wound if you are so inclined.

It is a strange feeling to experience software that feels like it’s made by adults, for adults.

2. Things I really like about Jai

There are many things to like about Jai. Some are technical, some cultural, but all matter.

Simplicity

The syntax and the semantics are minimalist and predictable. The flourishes that exist are largely for relatively rare things. Special notation is reserved for common actions. If things need to be spelled out for clarity, they are. There’s very little syntactic sugar, and even then it’s part of a coherent repeated pattern.

Most code is straight-line. While there are lambdas and such, and you can go crazy functional, there’s nothing syntactically encouraging you to do so. The specifics of what is happening are never hidden. There aren’t magical overloads that will bite you in the ass – the one exception being limited operator overloading, which comes with a stern warning not to overdo it.

I’ve had people look at Jai code for the first time and remark about how straightforward it looks. Which is because it is straightforward. Clean by default. No gotchas. The way it should be.

Speed

The Jai codebase I work on the most currently consists of around 44000 lines of code. It consistently builds in about 1.3 seconds on my main computer, which is a reasonable computer by 2018 standards. That’s including running my build script and metaprograms, and building three different executables. More than 2/3 of that time is spent in the LLVM backend. It could certainly be faster, but it beats the living bejeepers out of anything else. In terms of quality of life for a programmer, this one is way up there. It’s shocking how many languages consider this to be unimportant. Yesterday I compiled a much smaller C++ program (~19000 lines of code) on the same machine, and it took about 45 minutes. No joke.

Build system

If you’ve worked with most any other compiled language, or even some interpreted languages (I’m looking at you, npm), you’ll have had to learn a completely separate and often significantly worse second language in order to tell your compiler/interpreter what to do to build your program. In Jai, the compiler can run Jai code at compile time, and code executed at compile time has control over the compiler and full read-write access to the abstract syntax tree of the program. What this means is that you write your build scripts in the same language as your code. It’s hard to overstate how big a deal this is. In the simplest case, it means you don’t have to learn Make, CMake, Automake, Autoconf, Ant, Gradle, Buildtools, npm, bun, cargo or whatever other inane thing a language throws at you. But beyond that, you can start to do more advanced things like embedding different levels of testing into your build process, enforcing house rules, or even swapping out codepaths at compile time depending on your intended platform (I have used this in reality).
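To make that concrete, here is a minimal build-script sketch. The Compiler module names are as they’ve appeared in publicly shared examples, and may have shifted between beta versions, so treat this as approximate rather than authoritative:

```jai
#import "Compiler";

build :: () {
    w := compiler_create_workspace("Main program");

    options := get_build_options(w);
    options.output_executable_name = "my_app";
    set_build_options(options, w);

    add_build_file("main.jai", w);

    // This file is itself just a metaprogram; don't emit a binary for it.
    set_build_options_dc(.{do_output = false});
}

#run build();
```

Because this is ordinary Jai running at compile time, anything the language can do – reading config files, checking the target platform, inspecting the syntax tree – can happen right here.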

Metaprogramming

Relatedly, metaprogramming in general. This is supposedly one of the least developed parts of the language, but it’s already so goddamn powerful. This includes arbitrary code that modifies the behavior of a procedure, for example based on the types of the passed parameters, the ability to introspect the code at compile time to enforce house rules or such, and the ability to make macros that operate within the context of the calling location.

Aside from the super glamorous stuff like #modify and #expand, one neat little convenience is #run, which can be used to run any code written in the language at compile time. This is actually how the build system works. But you can use it for all sorts of other things.

I have, in one bit of code, a lookup table with names of intrinsic functions, like so:

intrinsic_functions :: Intrinsics.[
    .{ "sin", "sin", 1, null},
    .{ "cos", "cos", 1, null},
    .{ "tan", "tan", 1, null},
    ...
];

Then I have a #run directive:

#insert #run build_intrinsics();

This runs the build_intrinsics function at compile time; the string it returns is then inserted as code at the location we call from. This allows the compile-time baking of a dispatch table, which is read-only at runtime. This kind of thing needs to be used very sparingly so as to not litter your codebase with all sorts of confusion, but it works well.

Cross platform capabilities

Have you ever written a program for mobile, and wished you could have the same program on desktop, or vice versa? Me too. Most of the time, the solution to this is to write the program as essentially a web page, and then package it as a chromeless browser or webview. This is so slow, buggy and dumb. With Jai, I have written programs that build simultaneously for Android and Linux, across multiple combinations of CPU architectures. There’s also support for Windows and MacOS, and supposedly also iOS and a bunch of different games consoles and such, but I haven’t looked at that stuff.

Either way: this is way better and easier than I’ve seen in any other language.

Type system

Many languages have good type systems, and Jai’s isn’t particularly remarkable beyond being simple and coherent. Coherence of a type system is a weirdly rare thing. I like it. It’s easy to make new types, both composite types (as structs) and derived types. Types also serve as interfaces without having to be defined separately, since the concept of an interface is handled through pattern matching on polymorphic types.

defer

It’s a simple keyword, but it singlehandedly eliminates the need for any kind of RAII. You can stick defer ahead of any code block (or function call) to make it execute when the current scope exits, including on early returns. Need to guarantee cleanup? Defer. Want to print debugging info regardless of how you error out? Defer. Want to remember to free some memory or close a file? Defer. Simple. More and more languages have this feature because it’s a good idea.

init_subsystem();
defer deinit_subsystem();

Foreign Function Interface

Having worked with FFIs in a number of languages, such as Perl, Python and Javascript, it was a straight up surprise to me when I discovered that Jai’s FFI was really simple. You simply declare the function you want, and indicate with #foreign which library it comes from. Done. It’ll even attempt to demangle C++ names for you on the fly. So refreshing.

Here’s a real example:

libdbus :: #library,system "libdbus-1.so";

dbus_connection_open :: (address: *u8, error: *DBusError) -> *DBusConnection #foreign libdbus;

Actually, this isn’t hand-written code: I generated these bindings using the automatic bindings generator. Because who wants to hand-roll DBus bindings anyway?

Procedural polymorphism

This is pretty common, but there are nice bits about how it’s implemented. In particular, instead of the C++-esque <> syntax, one marks a type in the function signature as a compile-time type variable with a $, which serves as a compile-time type bake. It’s possible to restrict the type to conform to a different type using the $T/OtherType syntax, or get duck typing (!!) by doing $T/interface OtherType. The difference is that in the former, whatever type is provided as T must conform strictly in shape to OtherType, while with interfaces it’s sufficient to have the same members – a much simpler way of getting “traits” or whatever they’re called.

Here’s an example function with a type restriction. T is a polymorphic type that must conform to the Parser type. Here it could make sense to allow duck typing, but it’s from a codebase where I apparently decided not to for some reason.

lexer_advance :: inline (p: *$T/Parser, amount: int = 1) {
    assert(amount >= 0);
    if (p.remaining.count < amount) return;
    p.remaining.count -= amount;
    p.remaining.data  += amount;
}

Structure polymorphism

One of the most overrated concepts of recent decades has been object oriented programming, and anybody who’s used C++ has experienced the inanity of its templates mechanism. Jai’s structure polymorphism serves as a much simpler and yet extremely versatile approach, allowing for compile-time determination of types. It is quite similar in syntax to procedural polymorphism, and allows the same kind of type restrictions.

Here’s an example from real code. In it, Status is a type, which we pass into the definition. This way, we make the parser struct more generic. This is probably the most weak-sauce use of this possible.

Parser :: struct(Status: Type) {
    input           : string;
    remaining       : string;
    status          : Status;
    error_message   : string;
    error_context   : string;
}

Context

Much like how object oriented programs carry around a this pointer all over the place when working with objects, in Jai each thread carries around a context stack, which keeps track of cross-functional stuff, such as which default memory allocator to use and which logging function to use. Additionally, you can programmatically expand this context for the purposes of your own program, in case you have, say, a specific swappable i18n library or whatever. Sometimes you could do this with globals, but the power of the context, and the context stack in particular, is that it allows you to change things based on… well, context. There’s even a convenient shorthand syntax for pushing a modified context for the purposes of a single function call (and any descendant calls).
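A sketch of what this looks like in practice, per publicly shared examples – `my_quiet_logger` and `chatty_subsystem_update` are hypothetical names, and the exact details may differ between beta versions:

```jai
// Push a modified context for a block of code:
new_context := context;
new_context.logger = my_quiet_logger;   // hypothetical logger procedure
push_context new_context {
    chatty_subsystem_update();          // it, and everything it calls, sees the quiet logger
}

// The shorthand: trailing `,,` overrides context fields for a single call,
// here swapping in the temporary allocator just for this one allocation:
scratch := array_copy(some_array,, temp);
```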

Allocators

Unlike most languages, memory allocation is neither hidden away nor considered low level magic. The language comes with a range of allocators available to users, and a basic Allocator concept, of which the context carries two instances – one for general heap allocations, and one default “temporary” allocator. One observation that’s baked in is that any program (or at least thread) that isn’t short lived is going to have a natural tempo – a connection, batch, task, frame or other chunk of work – at the end of which it’s natural to simply forget anything that is temporary. The net result is that with only a tiny amount of extra thought you get garbage collection for free. Extend this to other concepts in your program with the appropriate use of Pool allocators, Bucket allocators and the like, and suddenly you get impressive performance improvements and a fair amount of memory safety for very little effort. Oh, and there’s a memory debugger included in case you get lost.
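Here’s roughly what that per-frame (or per-request) rhythm looks like with the temporary allocator. `tprint` and `reset_temporary_storage` are standard-module names as I know them from public material, and `should_quit` is a hypothetical placeholder, so consider this a sketch:

```jai
frame :: (frame_index: int) {
    // Scratch data for this frame goes through temporary storage...
    label := tprint("frame %", frame_index);   // formats into the temporary allocator
    log("%", label);

    // ... do the frame's actual work, allocating temporary data freely ...
}

main_loop :: () {
    frame_index := 0;
    while !should_quit() {
        frame(frame_index);
        reset_temporary_storage();   // the whole frame's scratch vanishes in one cheap call
        frame_index += 1;
    }
}
```

The point is the rhythm: allocate freely within the chunk of work, then forget it all at once at the natural boundary, with no per-object bookkeeping.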

Philosophy

This might sound weird, but I do think that one of Jai’s most appealing features is the philosophy baked into its design. There is a reasonably rich body of expressed ideology that the language is explicitly attempting to reflect.

There is a how_to folder that comes with the language distribution that contains, among other things, small essays on the state of computer technology, and a wonderful little bit of zen in a file called 999_temperance.jai. It would be tempting to try to explain the philosophy of the language, but I don’t feel like it’s something I can do justice to for now. But it’s not lost on me that “jai”* is the Hindi word for something between “glory” and “victory”. Coincidence?

I’ll stub this observation out by saying: It’s fascinating how much better your code gets when you stop trying to do the fancy shit that some dude wrote a book to convince you to do, and just start doing the simplest possible thing. A lot of software development nowadays is based on a cargo cult mentality, both towards low-effort libraries and performative practices. The number of times even in recent months I’ve seen code that was too complicated by a large factor simply because somebody was “following best practices” in a blind way is… disappointing.

(* A note on the name: There’s nothing wrong with it, except that it is apparently meant to be a placeholder before another name is decided. Except, we’re over 10 years into the development and a lot of people now know it as “Jai”. Very few people speak of it as “the language” or whatever other term is considered okay. Realistically, even if there’s a new name decided, it is known as “Jai”, and that will be hard to change.)

3. Things I don’t like about Jai

I initially wrote a list of grievances a year ago, and since then it appears that most of them have been addressed. Oh well. So, most of these are nitpicks, and none of them are dealbreakers in the language.

  • Static #if needs support for switches.
  • The bindings generator needs to be way more plug-and-play. It’s down to a handful of lines of code for most basic bindings, but it’s not quite where Zig’s is.
  • #complete should be #incomplete; as in, switch statements on enums should be required to be complete by default. Maybe this is a bad idea?
  • The S128 and U128 types are defined in a library, so they’re not intrinsic, but I still feel like they should be named s128/u128 to conform to the intrinsic types. There is overhead associated with using them (which you don’t get with intrinsic types), which is probably what the uppercase letters are there to remind you of, but it’s easier to know that than to remember the uppercase S and U for this one pair of integer types.
  • There is limited testing support – you’re expected to roll your own. In particular, I feel like it’s lacking a #test_scope or some other way of putting a main() function in a module that allows running the module’s tests in situ. It’s not so much of a problem as a lack of a convenience.
  • enum_flags should be called flags or maybe bitflags or something.
  • There are a few places in the standard modules where CamelCase, snake_case, and Camel_Snakes are mixed. Having more coherence would be nice. There is a cleanup pass planned, so this won’t be an issue.
  • Documentation has drifted a bit, with many features not really being documented.
  • It could, honestly, be faster. Kidding. :-)

… the fact that after four years of day-to-day use, this is all I can come up with, should tell you something.

4. Using a beta language in a real environment

As I mentioned before, there are a bunch of new languages – “newlangs” – that bring different things to the table. Rust is very popular, but is an incredibly high-friction language which I have not enjoyed playing with. I’ve been meaning to give it another outing, since it is super popular, but my first experiences were pretty negative so it’s hard to feel motivated to do so. Rust’s promise of memory safety sounds great, but I’d be much more excited about that promise if the compiler provided that safety, rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.

Zig, Nim, Odin and even V are quite interesting, but aside from Zig I haven’t been able to spend much time with them. Zig and Odin in particular echo Jai somewhat, and there are ideas in them that I’d like to see more commonly available, such as built in syntax for tests and the ability to require error handling from the callsite (ideally without exceptions though).

But none of these languages has felt as right as Jai.

Three years ago, I started my company, Ecosophy. It’s a company doing very large scale spatiotemporal data processing, using a pretty standard software stack for the most part. The frontend is Typescript/React, the backend is mostly Python. But the core component, a spatiotemporal database engine, needs to be ridiculously performant.

So I was faced with choices. Obviously I could rule out anything interpreted, anything with shitty performance characteristics, and anything too weird. That left me with a few languages that were rejected for practical reasons (including Fortran, seriously), and a few serious contenders:

  • C++. A solid, uncontroversial choice, but one which from experience I knew I would end up annoyed with. Modern C++ has improved the language significantly, but it’s still “just a garbage heap of ideas that are mutually exclusive” (to quote Ken Thompson). Every C++ program is written in a different bizarre subset of the language, and they are all slow to compile, annoying to debug, and difficult to reason about. Plus, holy hell are the error messages useless.
  • Rust. Increasingly popular, and comes with many nice features. In particular, a certain promise of a not-entirely-well-defined type of memory safety which seems to be eliminating entire classes of bugs. But as I say, my experiences of it have been unsatisfying, and I do not get a sense of it being a language I can maintain high motivation or momentum in.
  • Go. Solid choice, but has some features that can lead to bad performance, in particular the garbage collector. I have good reasons to want full control.
  • Zig. Probably a good enough option; no notes, except…

At the time, I had spent over a year writing Jai code in my free time alongside my duties in the Icelandic Parliament, and had gotten to know it well. I may even have written some Jai code during a boring plenary session once. There’s one for the books. I’d enjoyed it a lot more than any other language, ever. In a number of ways it makes me feel the sense of solidness and reliability that I got from Pascal, back when I was younger, that most modern languages lack sorely. And my sense was that if I was going to prototype this system quickly and have fun doing so, I’d better use the most powerful language available to me. Which meant using Jai.

The main downsides were that it was a) still in beta, and b) a closed beta. Which means that quality and stability might be issues, and it might be practically impossible to hire anybody with any experience of the language. But I reasoned that if either of those became a problem, I could reasonably quickly rewrite the program in Zig or C++ or even Go. This remains true to this day. While the codebase has gotten reasonably big and complicated, I’ve intentionally maintained it in such a way – through documentation and test coverage (with runtime tests largely written in Python) – that doing a full rewrite in another language shouldn’t be an impossible undertaking. It would take a bit of time, but most of the architecture would carry over to Zig or C++ easily. The hardest part of most software projects is figuring out how to solve the problem; the actual work of writing the code in a particular language is comparatively easy. Getting the same performance might take some work, but that’s to be expected.

There is of course the possibility of something happening that causes development of Jai to terminate before it’s publicly released as an open source project. This is a risk I was willing to take three years ago, and while the cost of that risk does go up as my codebase matures, it’s still an acceptable cost at the moment.

If anything, the fact that Jon and his team are people who have a well documented track record of serious long term commitments to big complicated software projects, and that they are known to deliver, and that they’re in fact building their next big game in this language, gives me a lot of assurance that this is a safe choice.

Regarding code quality: The Jai compiler has given me problems a few times. I’ve filed some bug reports. But frankly, it has given me fewer problems as beta software over the four years I’ve used it than I’ve had with substantially more “mature” compilers and interpreters. It just works. There has never been a show-stopping bug. It’s actually insanely impressive how well it works, and such a stark departure from the sense of instability that comes with most software these days.

As for hiring people, I refer to the previous statement about how easy the language is to learn. Of course, the closed beta complicates things a bit – not just anyone can get a compiler – but because my company is super small this hasn’t become a problem yet. In fact, the beta chat group has actually provided me with access to some really talented programmers, and I’ve hired from that pool for some of our work.

5. Why is it still beta?

But that brings us to a separate issue: if it’s so robust, why is it still in beta? I have a number of friends who know of my affinity for Jai and occasionally ask me what gives. Some are genuinely interested in playing with the language once it’s publicly available, but don’t want to sign up for a closed beta – fair enough.

The official answer appears to be that it’ll be released when it’s ready to be released. And sure enough, there are still some sharp edges, limitations and shortcomings to be addressed. There are occasional syntax and feature-set changes. The standard library isn’t fully fleshed out, although it contains quite a lot of good stuff. It’s definitely “batteries included” at this point, but occasionally something will turn out to be strangely absent.

In a recent public update, Jonathan Blow said there were three main outstanding issues before a public beta release:

  1. Limitations of the macro system, where the goal is to allow easy and efficient Lisp-style code rewriting at compile time that is strongly typed and easy to debug. The language is mostly fine without this, but it is one of the original ambitions.
  2. Some issues around cross-compiling – targeting different combinations of operating systems and CPUs, not necessarily from the same OS or architecture. In particular, it’s tricky to figure out the boundary between, say, dynamic libraries used during compilation and dynamic libraries to be linked against by the resulting binary.
  3. And finally, issues around how the context mechanism works when you’re calling into a dynamic library that also has a Jai-style context, which may differ from one library to another.

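To illustrate why that third item is subtle, here is a minimal sketch of the context mechanism, based on how it has been shown in public Jai talks and demos – exact names and syntax may differ from the current beta, and the allocator in the comment is hypothetical. Every procedure implicitly receives a `context`, so two separately compiled dynamic libraries can each carry their own notion of what that context contains:

```
#import "Basic";

allocate_something :: () {
    // alloc routes through context.allocator implicitly.
    buffer := alloc(256);
    defer free(buffer);
}

main :: () {
    new_context := context;  // copy the current context
    // Hypothetically swap in a different allocator here, e.g.:
    // new_context.allocator = my_tracking_allocator;
    push_context new_context {
        // Everything called in this block sees the modified context –
        // but a dynamic library compiled elsewhere may expect a
        // different Context layout, which is exactly the open issue.
        allocate_something();
    }
}
```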
My understanding is that all of these are coming along reasonably well.

Mostly, I think the answer is: It’s a serious language being made by adults, for adults. For serious programmers, by serious programmers, who want to make sure things actually work before they ship.

I also know that the team behind Jai aren’t just settling for the already audacious goal of making a new programming language, but are also building a game engine in the language, and a game using that engine. This both proves the language’s power in a real-world setting and inevitably means that things might take a while. Releases tend to come in bursts, which probably reflects an internal tendency to switch from language to game and back, alongside other projects. There is nothing even remotely chill about their goals, and I love following this madness.

Because of this, I don’t really have much basis to criticize the fact that it’s still a closed beta. I get it. But I do sincerely hope that it at least becomes open as soon as possible.

That said, because I’m building a serious piece of software in this language in a business environment, there are pressures relating to this choice that will eventually have to be resolved, one way or another. Hopefully the language will be public before I need to make that call. It probably will.

6. What’s next?

For me, it’s simple: I’m really happy with Jai. I’m going to continue to write and publish modules for it, contribute to the growing community, support its development in whatever way I can, and use it both for personal and professional projects.

I’ve met a few people from the community, either in person or virtually, and had very productive conversations with many about the issues we’re running into. Shout outs to Dylan, OStef, Mim, Raphael, Daniel, Kuju, Matija, and all the others I’ve interacted with through the years. I’m enjoying seeing the community grow, and I hope more people will adopt the language and realize that not only can programming be fun and rewarding, but that software can be high quality and efficient.

For you, it might be a bit more complicated. Unless you already have access to the beta, your next step might be to request it. Unless, of course, you’re one of the people who don’t want to use it until it’s public or open source – if so, I hear you. But either way, I do encourage you to keep track of what’s going on. I assure you: if it’s exciting enough for me to write a long essay on the subject, it’s probably pretty damned exciting.

And if you are already in the beta, say hi. There’s a lot we need to do: many, many libraries that need to be built, and a whole new culture to construct.

Let’s fucking go.

Appendix: My contributions

I didn’t really know where to put this, but here’s a short list of things I’ve done in or in relation to the language:

… and incidentally, I have a few libraries that I’ll hopefully get around to publishing soon enough. But that’s for later.