> a big chunk of these vulnerabilities would not exist if C and C++ [...] simply didn’t have zero-terminated string, initialized values by default, had a proper pointer+length type thus replacing 90% of pointer arithmetic with easily bounds-checkable code, and had established a culture that discouraged the prevalent ad-hoc style of memory management.
This is Rust's calling card, so I find this plea for a better language/ecosystem rather jarring right after dismissing Rust for somehow "making the wrong tradeoffs".
No, because Rust's bigger calling card is the borrow checker, which adds a lot of complexity on top of everything else in Rust, and even ends up justifying unsafe (because some optimized, correct data structures are simply not possible under it).
Second no: if that is Rust's calling card, you can get it even in much-hated unsafe C++ if you limit yourself and commit to doing it right. And if you quote that sentence, you must also quote his actual calling card, which is about culture and language complexity:
> In addition to this, I think the most important reason we have so many vulnerabilities (and bugs in general) is completely disregarded in the hunt for “safe” code: culturally tolerated and even encouraged complexity. In conclusion, putting up with Rust's compile times and submitting to the borrow checker seems like an extreme solution that doesn't address the most important problem, which is a cultural one. Jai on the other hand is extremely concerned with complexity and tries to get the cultural part right.
And in that regard I agree with him: definitely better there than C/C++, BUT NOT MUCH!
That's why I fully agree: Rust may not be it, and something like Zig, Jai, Carbon, or even Herb Sutter's cppfront/C++2 thing may shine brighter one day.
Rust is overfocusing on the memory safety part, which adds too much complexity while not even being able to get fully rid of unsafe..
> not even being able to get fully rid of unsafe..
It makes no sense to "get fully rid of unsafe", and this suggests you've gravely misunderstood the problem. Which puts you in good company: Herb Sutter doesn't seem to understand this on his "CppFront" wiki, and Bjarne doesn't seem to grasp it in his recent paper about safety either.
Rust's unsafe keyword marks code which programmers intend to be safe but whose safety the machine can't verify. For example, the Rust compiler can't see why the Linux implementation of Mutex<T> is correct: why would we give out mutable references to anybody who calls this function named "lock"? The programmers (in this case mostly Mara) know how the Linux futex system call works, and their reviewers have concluded that the resulting unsafe stanzas, with their commentary, are correct. There will in fact only ever be one mutable reference at a time, even though the machine can't see why.
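A minimal sketch of the pattern (not the Mutex code itself, just an illustration of what `unsafe` claims): the programmer states an invariant in a comment that reviewers can check, even though the compiler takes it on trust.

```rust
// Return a reference to the middle element without a compiler-visible
// bounds check. `get_unchecked` is the canonical "trust me" API.
fn middle(v: &[u32]) -> Option<&u32> {
    if v.is_empty() {
        return None;
    }
    let i = v.len() / 2;
    // SAFETY: i < v.len(), because len >= 1 implies len / 2 < len.
    // The reviewer verifies this claim; the compiler does not.
    Some(unsafe { v.get_unchecked(i) })
}

fn main() {
    assert_eq!(middle(&[1, 2, 3]), Some(&2));
    assert_eq!(middle(&[]), None);
}
```

The `unsafe` block doesn't mean "this is unsafe"; it means "the justification for safety lives in the comment, not in the type system", which is the Mutex<T> situation scaled down.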
The reason to care so much about memory safety is that you can't have type safety without memory safety, and when you lose type safety most of your other guarantees are destroyed. Languages which claim to care less about memory safety often have a caveat (even if unstated) that all bets are off once you abuse their lack of memory safety to destroy type safety because all their other promises assumed type safety and now they don't have that.
No, the point about unsafe is not just those few fancy low-level implementations that the Rust language has no concept for, but also proper high-level data structures you cannot realize safely.
But even then, what's different about limiting yourself to the safe subset of C++ (haha, yeah, I have to chuckle a bit) and declaring the same for the necessarily unsafe parts there? I really don't get it, it seems ;)
Absolutely agree, it’s one thing to get terrified at the complexity of the borrow checker, and another thing to get terrified by the complexity of Unsafe Rust (“the-thing-that-must-not-be-mentioned” in the Rust community).
I think memory safety (where the fundamental problem stems from us having to deal with a linear address space) can only be fully tackled with a combination of compile-time and runtime features, but in my opinion Rust goes too far overboard on the former and sacrifices too much actual language usability.
I'd really like to see new experiments like generational references (https://verdagon.dev/blog/generational-references) being researched as an alternative to Rust's 'type-system approach' towards memory safety.
Or maybe someday we might finally have thorough tagged-pointer support in hardware (like what CHERI is doing) and system-level programmers will rejoice.
If I'm reading OP right, their issue with Rust is the borrow checker - none of those features require one. They're asking for a language that's safer than C, not as safe as Rust, but simpler and easier to ram things through. I don't necessarily agree, but I think that's what they're saying.
Yeah, I think Bevy[1] is also a great example of something that fits in well with a lot of Rust's strengths while also showing that it's possible to lean in heavily to very gamedev centric approaches with how it approached ECS.
I think there's some truth to trying to avoid unsafe; that said, the times I've dropped down into it I've found myself chasing heap corruption or use-after-free on more than one occasion :).
Some of Jai's AoS/SoA transforms look neat, and I'm certainly interested to see what it looks like once it starts opening up more.
But remember all C++ defaults are wrong, and so of course std::span isn't bounds checked
While your Rust slice will yell at you (at runtime if it can't figure it out at compile time) when you try to index into the fifteenth item in a ten item slice, C++ has Undefined Behaviour in this case.
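A small demonstration of the contrast from the Rust side (the C++ half can't be demonstrated, since reading `span[15]` there is Undefined Behaviour by definition):

```rust
fn main() {
    let v = vec![0u32; 10];
    let i = 15; // imagine this index comes from user input
    // v[i] panics with "index out of bounds: the len is 10 but the
    // index is 15" rather than silently reading garbage:
    assert!(std::panic::catch_unwind(|| v[i]).is_err());
    // The checked accessor turns the failure into a value instead:
    assert_eq!(v.get(i), None);
    assert_eq!(v.get(3), Some(&0u32));
}
```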
True. I love C++ but think “don’t pay for what you don’t use” should err on the side of safety over speed when it comes to what “pay” means. I’d rather `operator[]` be bounds-checked and occasionally have to call `v.data()[i]` or `v.unchecked_at(i)` when profiling justifies it.
Spans/slices are becoming incredibly common as a fundamental building block in modern (or modernized) PLs. C# also has Span<T>, Go and Rust both have slices, etc. We are at the point where they should be standardized at the ABI level, IMO, before things get too messy compatibility-wise.
There are interesting differences between these types worth thinking about if you're imagining trying to standardize them somehow.
C++ std::span pulls double duty. One flavour of std::span, the one you might see more often, is like Rust's slice type [T] in that it consists of zero or more values of some type T. The other, though, is more like Rust's array type [T; N], where the size N of the span is actually part of the type itself.
Rust's slice is specifically that [T] type, the type system doesn't see any more difference between a [u32] with 1000 entries and a [u32] with 0 entries than it would between a string with "DOG" in it and a string with "CAT" in it, their types are identical.
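A quick illustration of that: a 1000-element array and a 0-element array have different types, but both coerce to the very same slice type.

```rust
// The slice type &[u32] erases the length from the type; the length
// becomes a runtime property instead.
fn len_of(s: &[u32]) -> usize {
    s.len()
}

fn main() {
    let a: [u32; 1000] = [0; 1000]; // length is part of the array type
    let b: [u32; 0] = [];
    // Both coerce to the same type, &[u32]:
    assert_eq!(len_of(&a), 1000);
    assert_eq!(len_of(&b), 0);
}
```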
C# Span<T> deliberately can't live on the heap. The CLR doesn't want to cope with this type, and by ensuring it's part of your program's stack, any questions about the lifetime of the Span are obviated and tricky-to-reason-about garbage collection problems don't arise.
Go's slices are very strange because Go's arrays are like those in Rust, their size is part of their type, and yet Go's slices can append. This is achieved by actually creating a new backing array and copying all the slice's data into it whenever an append exceeds the slice's capacity.
For a C API, a slice would be a simple { void*, size_t } or { void*, void* } struct, but I guess for memory managed languages this isn't enough information to pin the underlying data into memory (for instance a reference to the underlying 'object' - don't know how such language-specific details could ever be expressed in a 'standard ABI').
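For illustration, here is that `{ void*, size_t }` pair written as a `#[repr(C)]` Rust type; `RawSlice`, `to_raw`, and `from_raw` are made-up names for this sketch, not an existing API.

```rust
// The C-ABI shape of a slice: a pointer plus an element count.
#[repr(C)]
#[derive(Clone, Copy)]
struct RawSlice {
    ptr: *const u8,
    len: usize,
}

fn to_raw(s: &[u8]) -> RawSlice {
    RawSlice { ptr: s.as_ptr(), len: s.len() }
}

// SAFETY contract: `r.ptr` must point to `r.len` bytes that stay valid
// for the lifetime 'a. This is exactly the extra information a managed
// language's GC would need in order to keep the underlying data pinned.
unsafe fn from_raw<'a>(r: RawSlice) -> &'a [u8] {
    std::slice::from_raw_parts(r.ptr, r.len)
}

fn main() {
    let data = [1u8, 2, 3];
    let raw = to_raw(&data);
    let back = unsafe { from_raw(raw) };
    assert_eq!(back, &[1, 2, 3]);
}
```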
Oh, it's simple enough, even with managed languages in the picture - it just needs to be decided once, and then everybody uses it.
The problem is that many existing ABIs don't optimize for small structs as function arguments particularly well, so just bolting it on like that can mean poor performance compared to old-school separate arguments for pointer and length. You want a hard guarantee that something like foo(slice1, slice2) will be passed entirely in the registers, not as two pointers-to-stack.
Is a C++ std::span object like std::string_view in that it can outlive the data it points to? If yes, that's hardly an improvement over a raw C pointer/size pair.
That's just by convention though, right? C++ doesn't prevent me from storing the std::span somewhere so that it outlives the scope of the called function? IMHO it's disappointing that C++ adds new memory management footguns without first fixing the basics (like at least some rudimentary lifetime tracking to help with such situations).
I keep considering writing a `unique_span` and `shared_span`. Really, `span` (or a type it's based on) should have been templated on the pointer type, allowing e.g. a `span<shared_ptr<const T[]>>`.
I'm really interested to see Jai in the wild, but I have to say I find the Jai vs. Rust comparison here... unconvincing, to say the least. It's particularly funny to be excited about "being able to do anything during compilation" in Jai and then complain that the #1 problem in programming is "culturally tolerated and even encouraged complexity."
I'm also frustrated at a language in closed beta being perpetually compared to a language that is open source and in production. It's a complete apples to oranges comparison. By being in closed beta, there is no way of verifying anything said about Jai. Instead you have to take the word of people in the Jai community, which is a very biased source to say the least.
People aren't used to seeing languages (or products) developed in a "visible but closed" beta. It is, initially, quite confusing.
It makes more sense to see it like game development, which I think is Blow's POV. He's not an open source developer; he's a game developer, and he's making a product he wants to be complete and correct before release.
I think there's quite a lot of good-will towards Blow, and I imagine he has some clout in the game dev. community. My sense is that when he releases his game written in this, and gives it its final syntax-and-semantics pass -- he will get buy-in from some places (esp. indies).
Very good point, which partially alludes to the hype. It is amazing how much publicity Jai is getting despite being in a closed beta. It raises the question: if it were open to the public, with more eyes and critics on it, would the "hype train" crash? Odin (https://odin-lang.org/), a language in the same category that is influenced by Jai and is open to the public, does not appear to be getting anywhere near the same recognition.
To what extent does Jonathan Blow's status as a celebrity programmer play into all of this? As in, people wanting Jai to be the "next big thing", versus the actual merits of the language.
One of the things people get excited about is meta-programming, which Jai has much more of than Odin.
Jai has a lot more weird stuff than Odin. Idiosyncratic features like being able to delete items from a sequence as we iterate through it, via a dedicated keyword and a swap-and-shrink mechanism. Perhaps some of that weird stuff will be smoothed off during the beta, or perhaps Jon will double down on it. But it does make Jai more interesting to talk about in the meantime than Odin, which is a more "normal" language design.
I suspect I will never have reason to use either language "in anger" so to speak, but Jai is more interesting to me for the reasons I stated.
Pop and swap while iterating is definitely a useful feature. You can also queue removals up and do the pop and swap all together on a cleanup pass (snippet below). That said, most people don't code this way, so yeah, useful for sure.
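A sketch of that queue-then-cleanup pattern, using Rust's `Vec::swap_remove` as the pop-and-swap primitive (the function names here are illustrative):

```rust
// Remove the queued indices in one cleanup pass after iteration.
fn cleanup(items: &mut Vec<i32>, mut dead: Vec<usize>) {
    // Process queued indices from highest to lowest so that earlier
    // swap_removes don't invalidate the later indices.
    dead.sort_unstable();
    for i in dead.into_iter().rev() {
        // Moves the last element into slot i: O(1), order not preserved.
        items.swap_remove(i);
    }
}

fn main() {
    let mut items = vec![10, 20, 30, 40, 50];
    // Suppose indices 1 and 3 (values 20 and 40) were marked dead
    // while iterating; clean up afterwards:
    cleanup(&mut items, vec![1, 3]);
    items.sort_unstable();
    assert_eq!(items, vec![10, 30, 50]);
}
```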
I'm also slightly puzzled at saying "compilation is too slow!" followed by "I want to do anything during compilation". I think these are somewhat opposed to each other.
Obviously it's not the only reason for slowness (stuff like templates vs vtables for example are also a huge impact), but at some point I tried to do some more advanced comptime stuff in Zig (it's pretty long ago, around Zig 0.6 or 0.7), and it became incredibly slow to compile. I'd assume it's gotten better with the newer and faster self-hosting compiler, but it still shows that with full comptime programming you can make your compiler as slow as your comptime program.
It depends on how you implement compile time execution.
Creating a VM to run the code directly, rather than reusing the full compile+run cycle, can speed things up. The VM executes code more slowly, but the whole process takes less time overall, especially with a complex language like Rust.
Compile-time code restriction seems dumb. Programmers can write any program they wish; stopping them for some ideological reason isn't effective, as all the half-baked metaprogramming people resort to anyway shows.
Template metaprogramming is the idea of executing arbitrary logic at compile time using the template system as your programming language. The replacement for template metaprogramming is a solid compile-time execution model, not a traits system.
It's worth mentioning that DMD is a reference compiler and not meant as a production tool. DMD is for fast compile time during dev cycles and testing new language features early.
You've got the LLVM and GCC compilers for production.
The problem with the lack of debug info on Windows in D is solved by using LDC with LLDB/GDB to debug.
You can also ask the compiler to emit CodeView debugging info if you want to stick with Microsoft stuff
The author's chief complaint appears to be that dmd produces incorrect debug info on Windows, something I wouldn't expect a reference compiler to do. A reference compiler doesn't have to be fast, but it does need to be correct.
Debug info is one of the backend things that does get priority with dmd. It has gotten some performance increases in recent years but these are mostly just a nicety.
Personally I have fixed some debug info generation problems so it is being worked on.
It wasn’t clear to me: is the debug info wrong or is it just missing because stuff is getting optimized out? Both?
I know in the past I had some difficulty in C++ on x64 because variables were getting optimized out (so depending on where you were in the function the variable literally didn’t exist anymore).
I don't think "DMD is a reference compiler and not meant as a production tool" is a great way to characterize it. There are certainly valid reasons for choosing LLVM and GCC for "production", but also valid reasons for choosing DMD.
DMD does do all the basic data flow analysis optimizations, along with register allocation by coloring, etc.
The problem is there are just a ton of special cases needed to generate good code, especially with the x86 instruction set which is nothing but special cases <g>. Doing these takes a lot of developer time, one by one.
He was complaining about the long compile times of LDC2 even in debug mode, and said that it uses too much memory up to the point that the laptop can't handle. So I understand why he was using DMD for development.
I find the "slow compile time" complaint of D interesting; overall I find ldc2 to be one of the faster compilers out there. I suppose the combination of features and metaprogramming used by the author is responsible for the 3-minute compile times in debug mode. While jai is evidently expected to be substantially faster (author estimates 5 sec for this codebase???), other choices like Rust would be far, far slower than D.
This expectation comes from jai compile times for the game Jonathan Blow is developing: https://www.twitch.tv/j_blow It takes about 2.5 seconds to compile and link over 168k lines of code in debug mode.
I'm not sure why, if moving from a widely used language to a marginal one wasn't a success, moving from a marginal language to a totally obscure one would be.
An obscure language custom built for the purpose of writing computer games and by a developer with razor sharp focus on making it better who is very responsive about bugs submitted by the people with access to the beta.
Okay, great. That totally makes sense, but on the other hand Mr Blow seems to have some rather unusual ideas regarding how to develop this new language (and is a game developer, which sometimes brings its own set of secrecy-first ideas [1]).
I don't understand why they are not working in the open, this is a tool, not a game, input for a large community is extremely valuable.
But as brilliant as Jon Blow is, he is at least equally as stubborn.
Fast compilation and nice, easy-to-read syntax with good defaults is exactly what I am looking for.
I can understand the hype about metaprogramming and its potential usefulness, but I also think it is a Pandora's box.
Pretty much like C macros and C++ templates, overuse can lead to a very messy codebase, hard to read, hard to debug, with a lot of unexpected side effects.
I don't see why a language can't be one guy's passion project. Even if he releases it and it turns out he's the only person that likes it, so what? It's not like he's had millions in VC or government investment dumped into it or something. He's got some ideas and he's implementing them. I'm interested to see how it turns out.
Golang has had success and was primarily made in private by 3 guys and hasn't strayed too far from the original founding principles they came up with.
Unless it's being designed by committee from the start, most languages start off as a 'passion project.' But there is a point where they have to bring in more people to handle the work and mitigate the single point of failure.
I think he brought in more people at one point, then sent them away again when he realized he couldn't keep the project on track with that many cooks in the kitchen. The problem with more people is that the project can very easily degenerate into design by committee.
If there's enough actual interest, some group of people will just clone the language in an open manner since enough is known about it. But that doesn't seem to be happening at the moment.
I thought Odin was one of the clone languages at the start (although it later diverged from Jai in many aspects, and it is far ahead of Jai in that it's open source and actually being used in production).
But really I would like Jai to take some time to mature over the years, instead of rushing out for an immediate public release. It has some very ambitious ideas that would absolutely be killer features, but I think it needs ample time to get polished. (Off to writing C++ code during the time then...)
Odin has similar syntax, but one of the main reasons I am interested in Jai is its strong metaprogramming support[0], whereas Odin doesn't have that and likely never will, due to the author of the language not wanting to go in that direction.
Yeah, I have about the same opinions. Odin made a conscious decision to not invest much in compile-time metaprogramming, and as a result was able to actually ship things without spending too much time on language design. But it’s a bit less ambitious than what Jai was trying to do.
> input for a large community is extremely valuable
In the early stages, everyone has a thousand opinions and it’s not clear even to those working in jai daily what exactly the language wants to become and which patterns should be supported. From what I understand, Blow intends to gradually open things up as the project matures, which I think is 100% the right approach for something fairly experimental and with a lot of new ideas.
> input for a large community is extremely valuable
it’s also an extreme time suck. Having everyone opine on their pet feature would just take away from the focused work the people at Thekla are doing.
Honestly, I don’t even see why Jon should make things open source (apart from it being a major incentive for people to adopt it). The performance improvements they are offering compared to other languages 100% warrant charging for it.
From a glance at the language description, this is very different from what he is looking for in Jai. I don't see any mention of it having metaprogramming facilities as good as D's or Jai's (or even C++'s!).
I think that even if Jai didn't directly inspire Odin and Zig, seeing Jon's enthusiasm for his new language influenced GingerBill and Andrew Kelley in their work. This is only the conjecture of someone on the side, but the timeline fits pretty well in my opinion.
I’m curious whether the gaming industry will switch languages; it feels like, with the current game engines, it’s heavily entrenched in C and C++. Feels like something like Carbon has the best chance to break in.
A chunk has already switched to C# with Unity. Most other engines (proprietary, Unreal, etc) have decades of C++ that would require too much effort to migrate to anything new.
Unity is still written in C++. It's fairly common to write the core engine in C++ and use a language like Lua, or as is more common in AAA, some proprietary scripting language for gameplay code.
This kind of thing in general is very common and goes back decades, it's just that the languages used to be custom (QuakeC, UnrealScript etc) before the industry largely standardized around Lua. Some older games also experimented with Python for this, and I recall even seeing Tcl once.
In terms of what the language allows you to do, yes. However, if you're a C programmer, you can pretty much just keep writing the same code you've always written (minus the preprocessor, thankfully). You can even compile C code with the D compiler and call those functions from your D code without doing anything further. That's definitely not the case with C++.
.. you forgot to also mention that (in the context of a D module) D stops treating a user-defined class type as a real type. Additionally, a D class is a reference type, whereas in C++ a class type is a real type (treated like any built-in type) and a value type (by default).
Nor does D have a concept of C++ friend.
So even when it comes to classes, C++ and D are miles apart.
D structs are value types, D classes are reference types. This was done to avoid mistakes like having a type that is sometimes used as a value type and sometimes as a reference type.
Would have been much better in my opinion, if D had maintained a C like struct (not a C++ like struct), and a C++ like class. That would have solved the problem you identified, while still allowing for the class to be what it was designed to be. The mistake C++ made was the mistake you identified. But the solution you provided in D is its own mistake.
The C++ class would require the user to constantly remember it has to be passed by reference, not by value. D making the class a reference type means the user cannot mix them up.
It has worked well from the beginning, it is not a mistake.
Honest question: as a GC’d language, D seems to occupy the same niches as Go, JVM-based, and CLR-based languages rather than that of C++. Wouldn’t it be more accurate to compare to them than C/C++? Am I misunderstanding something?
Ah! Okay, that’s what I was missing. I found the blog posts on using malloc and free from D. I knew the GC could be disabled, but I wasn’t aware of how practical it was to manually manage memory. Thanks!
Just curious, but when you say genuine open-source license, what do you include? Going back to the earliest talks on a potential language, Blow cites a non-GPL, permissive license as his probable license. I know some folks tend to not count BSD, Public Domain, etc as open source.
> I know some folks tend to not count BSD, Public Domain, etc as open source.
This isn’t true, unless you’re talking about some niche opinion held by a few eccentric people.
Even Richard Stallman, probably the most ardent and uncompromising copyleft activist in the world (elevating the GPL to nearly religious status), accepts that non-copyleft licenses like BSD are open source (and free software, the term he prefers).
Certainly anything OSI is acceptable. GPL isn't my cup of tea, but it's fine, too. Owner/authors are doing the work and if you don't like what license they choose, you can go elsewhere.
As long as no entity can retroactively revoke your ability to use the source code, it's probably sufficient.
As things currently stand, that is NOT true for JAI.
I wrote an extremely hard and complex program in which I had to use C++ metaprogramming throughout. Not my decision. I do not understand why you could possibly need more metaprogramming than C++ provides. It is a problematic feature to begin with: not worthless, but not worth much. The debugging was difficult, and many problems show up buried in literally thousands of compiler error messages.
That's the issue with C++ metaprogramming. It's so unwieldy and complex that the benefits are difficult to see and the occasions where it's worthwhile are limited. When a language has better (i.e. simpler) facilities for metaprogramming, like D, it opens up big avenues where it makes sense to use it.
It is like object-oriented programming in C: possible to do, but so unwieldy, and requiring so much discipline and verbose boilerplate, that there's rarely an occasion where it would be interesting to use. It is much easier in C++ or D or Java (etc.), as they provide the abstractions that make it usable. C++'s metaprogramming facilities are not good enough.
I disagree. Object-oriented programming is very easy in C even though the syntax itself doesn't guide you. Essentially, all C code I write is object-oriented. Perhaps you are thinking about dynamic dispatch which indeed is very annoying to implement correctly in C?
The problem with C++ metaprogramming is not what you can do with it, it's how you do it; which, IME, is always the most convoluted, unsafe way possible.
The “siamese brothers”-ization of templating & metaprogramming is especially painful.
“In the beginning I hinted at expecting the porting process to go somewhat smoothly (which I’m sure I won’t regret later). The reason for this is that I have two systems in my game that will hopefully be immensely helpful in this endeavor:
- The game records the inputs (HID, loaded files, network, …) of a play-session into a file and replay them later. When replaying, feeding recorded input into the deterministic game loop leads to the exact same state, down to the bit.
- The game hashes game state at various points during execution and saves these hashes to a different file. When replaying, the contents of this file can be used to check that the execution of the replayed session exactly matches the original execution.”
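The hash-checkpoint idea from the quote can be sketched roughly like this; the state struct and function names are hypothetical stand-ins, not the game's actual code.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the game's deterministic simulation state.
#[derive(Hash, Clone)]
struct GameState {
    tick: u64,
    player_x: i64,
}

fn state_hash(s: &GameState) -> u64 {
    let mut h = DefaultHasher::new();
    s.hash(&mut h);
    h.finish()
}

// Recording pass: hash the state at checkpoints and save the hashes.
// Replay pass: recompute the hashes and compare against the recording.
fn replay_matches(recorded: &[u64], replayed: &[GameState]) -> bool {
    recorded.len() == replayed.len()
        && recorded.iter().zip(replayed).all(|(&h, s)| h == state_hash(s))
}

fn main() {
    let session: Vec<GameState> =
        (0..3).map(|t| GameState { tick: t, player_x: t as i64 * 2 }).collect();
    let recorded: Vec<u64> = session.iter().map(state_hash).collect();

    // A bit-identical replay matches the recorded hashes ...
    assert!(replay_matches(&recorded, &session));

    // ... while any divergence in state is caught at a checkpoint.
    let mut diverged = session.clone();
    diverged[2].player_x += 1;
    assert!(!replay_matches(&recorded, &diverged));
}
```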
Can't wait to see how this progresses. So far the only source of knowledge have been Blow's streams, but he's the author, and he knows his language inside and out. It will be immensely interesting to see other people's experiences.
Tsoding, who codes on Twitch with archives on YouTube, has a few streams from after he got into the beta. They give a decent overview of using the language in its current state, and I find Tsoding pretty entertaining as well.
> there’s also the general feeling that the D creators’ vision of what a fixed C++ looks like is just vastly different from mine
I have emphasized for years that D is not C++. It can be used as a replacement for C++, in the sense that it does the same things, but it is not accurate to call it a "fixed C++". So many times I have seen C++ programmers disappointed that D is not C++. On the other hand, if you're a C programmer, you'll probably be comfortable writing D code. I think of D and C++ as incompatible forks of C.
> So many times I have seen C++ programmers disappointed that D is not C++.
More generally, every time I learn a new language, it is always frustrating, because I cannot write the same thing in the new language that I was used to writing in the old. It takes time until one starts thinking in terms of the new language, and only then will the frustration fade.
Yes, we can agree that D is not like C++ - not at all.
The reason C++ programmers are usually disappointed, is because D gets marketed as such (it's the bait you need to get them to look at D afterall).
In fact, even on the basic class type, C++ and D are really miles apart. Make no mistake about this: they are miles apart even on this one construct (the very construct that was the reason for there being a C++ in the first place).
It is clear when you use D, that it wasn't designed to be like C++.
D does have a subset that is C-like, and that's primarily because it is, in essence, C.
So D is D.
D is not at all like C++.
A subset of D is like C (cause it is - more or less - C).
Fast compilation seems very appealing. It is one of the main reasons I am interested in Go and Zig.
I recently started working with Rust by contributing to projects like Rome/tools [1] and deno_lint [2]. The compilation and IDE experience is frustrating. Compilation is slow. I am afraid that this is rooted in the inherent complexity of Rust.
> The largest elephant in the room to address is probably Rust. ...
Breaking this down, I can only find two practical problems the author has with Rust:
- long compile times
- the ownership model ("the borrow checker")
The rest of this paragraph appears to be much more general in nature.
Given that the project is only 58,000 lines of D/C++, it's hard to believe that compile time alone is so bad as to drive a decision toward an experimental language like Jai.
So it appears that the main problem the author has is the ownership model ("the borrow checker"). It would be interesting to know more, but the author does not elaborate.
AFAICT, the Rust compiler can be viewed as enforcing the good practices that C++ developers already recognize. So how can this be an issue at all, especially given the ability to break out of the ownership model into unsafe Rust (or use other tricks) if the situation calls for it?
"That said, D definitely has some advantages over C++, like more powerful metaprogramming, no need for headers, no uninitialized values and more. But unfortunately, these upsides are outweighed by the downsides."
Yes, after working with D for far too long, I have decided to delete it off my computer and I'm going back to C++, where a class is still a first-class 'type', and a 'value' type at that (at least by default).
In D, a class is a reference type only and worse, the D language has no means of declaring, let alone enforcing at compile time, a perimeter around such a type, within the so called 'D module'. The entire D module is within the perimeter of your class type, at all times!
> In my eyes, the most important ones are faster compilation and allowing metaprogramming via unrestricted compile time execution.
I'm fascinated by languages that are adding more compile-time programming features, in the context of a low-level performance-oriented use case. Would love to know more about the history and state of the art of this area.
That actually makes a lot of sense to improve performance on specific parts. Think about generating lookup tables at compile time for example. Instead of having a separate script generating them, you can keep the generation close to the place you use it, written in the same language and always up to date.
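As a sketch of the idea, Rust's `const fn` supports a restricted form of this (Jai's compile-time execution is unrestricted, but the payoff is the same: the table is generated in the same language, next to its use site):

```rust
// Build a small lookup table of squares at compile time.
const fn build_squares() -> [u32; 16] {
    let mut table = [0u32; 16];
    let mut i = 0;
    while i < 16 {
        table[i] = (i as u32) * (i as u32);
        i += 1;
    }
    table
}

// Evaluated entirely at compile time; the table is baked into the
// binary, with no separate generation script to keep in sync.
const SQUARES: [u32; 16] = build_squares();

fn main() {
    assert_eq!(SQUARES[5], 25);
    assert_eq!(SQUARES[15], 225);
}
```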
Very interesting! Apparently a lot of information on the Internet about the Jai language is outdated which makes me curious about the follow up post!
I don't know if OP is the blog owner. If yes, I'd love an option to subscribe to an email list. I've added the RSS feed, but email is still my preference. Thanks!
> All information for building a program is contained within the source code of the program. Thus there is no need for a make command or project files to build a Jai program.
I heard about this "promising" language as a youngster and never heard of it again. It doesn't seem to have taken off after all these years, and it's probably too late for it to ever take off.
All I keep finding are youtube videos and twitch streams, which are the worst way for me to learn about a programming language. I guess it’s still not generally available?
I don’t understand how this language is planning on ever seeing adoption when it’s been restricted from open use since its inception. Seems like vaporware.
I think Jonathan Blow's main goal is to make his dream game programming language so that he can be happier while developing all of his future games. Widespread adoption by others is only a secondary concern. He also doesn't want to push it out to the general public until he feels it's ready, and wants to take the time to get it right first.
Jonathan Blow is a skilled craftsman with stuff like this, and he knows he's going to get a bunch of Opinions About His Language from people who think they have a complete superset of his knowledge, so he is making his language without involving those people in any way.
That’s fair, but if you look at his posts about the language, he also makes a big point about how he doesn’t want any input from people with an academic background in language & type theory. This smells like anti-intellectualism to me.
I’m a bit concerned that some of the decisions that he’s making in the language are leading towards traps that have caught other language designers in the past, but because he’s rejecting that expertise, he has to learn from experience, at greater cost. These concerns are motivated by specific things I’ve seen in Jai code, but I don’t really want to dive into the specifics, since Jai is unavailable.
I think his disdain towards academia isn't really anti-intellectualism (he's not ignorant of compiler theory, otherwise he wouldn't have been able to write a compiler from scratch in the first place! And his interest in programming languages seems to span decades, from what I remember from one of his streams). I think he's critical of the fact that the primary focuses of academia and industry in computer science have diverged so much over the years that most academic CS work hasn't really helped developers build better programs and tools. Thinking about the countless hours compiler academics have spent on all sorts of esoteric parser theory in the past, while all the production compilers today use recursive descent for pragmatic reasons... maybe this kinda makes sense.
I’d have to say that academic research into programming languages has provided immense benefits for actual programmers over the years, it just has such a long lead time that people forget that the research came out of academia in the first place. People also don’t really understand the process of how these ideas come from academia into mainstream programming languages.
It only seems like academic research has diverged because the benefits of current research haven’t materialized as real, usable features in programming languages that people use to get work done. But if you look at features that we use in day-to-day programming right now, you can trace the heritage of these features back to research programming languages (like Self or Haskell) and then farther back to more abstract research into esoteric subjects like category theory and substructural logic.
The esoteric parsers that people invented in the past were, in a sense, necessary because people were ignorant of how to design languages in such a way that they could support a rich syntax without using a complicated parser. It took a lot of academic research for us to figure out that, say, you could probably use an LALR parser for lots of existing languages, and you could stick to LL(1) for new designs.
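To make the point concrete, here is a toy recursive descent parser for an LL(1) arithmetic grammar (a hypothetical sketch, not anyone's production code): each nonterminal becomes one ordinary function, and a single character of lookahead is enough to pick the next production.

```cpp
#include <cctype>
#include <cstddef>
#include <string>

// Grammar: expr -> term (('+'|'-') term)*
//          term -> factor (('*'|'/') factor)*
//          factor -> number | '(' expr ')'
class Parser {
public:
    explicit Parser(std::string s) : src_(std::move(s)) {}
    double parse() { return expr(); }

private:
    std::string src_;
    std::size_t pos_ = 0;

    // One character of lookahead, skipping whitespace.
    char peek() {
        while (pos_ < src_.size() && std::isspace((unsigned char)src_[pos_])) ++pos_;
        return pos_ < src_.size() ? src_[pos_] : '\0';
    }
    double expr() {
        double v = term();
        for (char c = peek(); c == '+' || c == '-'; c = peek()) {
            ++pos_;
            double r = term();
            v = (c == '+') ? v + r : v - r;
        }
        return v;
    }
    double term() {
        double v = factor();
        for (char c = peek(); c == '*' || c == '/'; c = peek()) {
            ++pos_;
            double r = factor();
            v = (c == '*') ? v * r : v / r;
        }
        return v;
    }
    double factor() {
        if (peek() == '(') {
            ++pos_;
            double v = expr();
            if (peek() == ')') ++pos_;
            return v;
        }
        peek();  // position at the first digit
        double v = 0;
        while (pos_ < src_.size() && std::isdigit((unsigned char)src_[pos_]))
            v = v * 10 + (src_[pos_++] - '0');
        return v;
    }
};
```

For example, `Parser("2*(3+4) - 5").parse()` yields 9. Precedence and associativity fall directly out of the grammar's shape, with no parser generator or table machinery involved, which is a large part of why production compilers settled on this style.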
That's a fair point. Maybe researchers needed to delve into all sorts of weird parser theories to later figure out that they weren't really needed. Practitioners mainly know the final result, but aren't usually interested in the full history of how people got to that conclusion.
But I think during recent decades there has been a consensus that perverse incentives in academia are degrading research quality and preventing papers from becoming actually usable in real-life applications (mainly due to the focus on paper metrics and the constant need to apply for grants). So although I think academia is still important long-term, I understand why some people would think it's becoming less useful.
Actually that's probably under-selling the problem. It's not just that Jon doesn't value the opinions of academics, he doesn't really value the opinions of anyone except Jonathan Blow.
Which is probably a healthy way to attack your first video game project, it's not as though Braid would be more likely to be a success if Blow stopped believing in it himself. But I would be surprised if that's true of a programming language.
> most academic CS works haven't really helped developers in building better programs and tools
It's doubtless possible to measure "most works" and "really helped" in ways which allow you to either draw this conclusion, or not, as you prefer, but I don't think that's a useful way to think about it at all.
it isn't anti-intellectualism at all, it's that he gets asked questions by people in college as well as college grads continuously and the questions they ask are indicating that the things being taught in higher education are absolutely not the kinds of things that he sees in his day to day work.
to be fair to Jon, he works in a subset of software development that most do not: video games.
the kinds of problems that Jon sees do encompass the things that we all see, since he uses the same operating systems that we do, the same compilers, and just generally the software available to him is the same as what is available to all of us.
where the experience of a game developer really differs from that of, say, an enterprise software developer is the complexity of the problem being solved and the speed at which the problems in games must be solved. additionally, it is trivial to compare two games of the same genre and determine which looks better and which feels better. so, performance and quality are of prime importance to a game developer.
game performance and playability directly correlate to game sales in many cases, and game sales directly correlate to employment as a game developer. game developers want to create games specifically, so they want to continue working as game developers. so, they want to create successful games, so they want to create games that perform their best and that look their best so that more players purchase the game.
Enterprise and commercial software developers simply do not have the same types of pressure on them. It is perfectly fine for an enterprise software developer to use object oriented code which consumes 8 bytes of network capacity to transfer a single boolean value because the performance and latency of enterprise applications does not impact their use except in extreme circumstances.
game developers will redesign lots of their types to net a 2 byte savings on a data structure if that is what it takes to keep a full multiplayer game update in a single 1492-byte network packet and avoid packet fragmentation. game developers will spend 200 hours changing their data structures so that they fit more efficiently into CPU cache lines and they will change how game logic is processed so that they miss the CPU data cache as little as possible, because CPU cache efficiency directly relates to performance on almost all modern platforms. these are problems that simply do not exist within most enterprise's software development teams.
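The kind of type redesign described above often comes down to struct layout. A hypothetical sketch (the field names are made up) of how simply reordering the same fields removes alignment padding:

```cpp
#include <cstdint>

// Same four fields, two orderings. With typical alignment rules,
// the compiler inserts padding so each field sits at an offset
// divisible by its alignment.
struct EntityPadded {
    std::uint8_t  team;    // offset 0, then 3 bytes of padding
    std::uint32_t id;      // offset 4
    std::uint8_t  health;  // offset 8, then 1 byte of padding
    std::uint16_t ammo;    // offset 10
};                         // typically sizeof == 12

struct EntityPacked {
    std::uint32_t id;      // offset 0
    std::uint16_t ammo;    // offset 4
    std::uint8_t  team;    // offset 6
    std::uint8_t  health;  // offset 7
};                         // typically sizeof == 8

static_assert(sizeof(EntityPacked) < sizeof(EntityPadded));
```

Across an array of thousands of entities, or a struct serialized straight into a network packet, that per-instance saving is exactly the difference between fitting in one packet (or one cache line) and spilling into the next.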
and because those are problems that do not exist for enterprises, those are problems whose nature and whose solutions are not taught at University.
Jon has been a game developer for almost his entire career, so he sees things differently than people on this website. people who work at startups and seek angel investment so they can scrape by long enough to deliver an MVP and be purchased have wildly different priorities than a game developer who wishes to succeed as a game developer.
in general I think the wider software development community could learn a great deal from game developers. the software written by developers who are not game developers is almost universally unacceptably slow.
Most general purpose software developers simply do not have the experience to understand how egregiously bad most general purpose software is. Jon does. and his complaints regarding academia reflect the reality he sees.
I think I’ve never heard someone romanticize a profession as hard as what you’ve done here. This comment paints a truly distorted and unrealistic picture of game developers.
Game developers are not radically different from other developers. You see game developers leave the game industry and become programmers somewhere else, or you see programmers in another industry become programmers in the game industry. It is not a big deal.
While you could find some game developers who care about saving a couple bytes to fit something in a single network packet, you can equally find developers elsewhere who care about the same thing. Shave some time off your latency numbers and people stay on your website or app, they buy things or watch ads, your company gets money, you put it in your performance review. That’s just the most boring example I could think of. There are more interesting examples. Most programmers are simply not interested in saving a few bytes or a few cycles because they have features to work on. That includes game developers.
We fetishize low-level programming too much. Low-level programming is, in a sense, easy, because you are working with components that are simpler and have better documentation.
never in my 30-year career have I witnessed a single non-game developer give a damn about the latency or responsiveness of any application they've written.
never in my 30-year career have I witnessed a single game or emulator developer STOP caring about these things.
> I think I’ve never heard someone romanticize a profession as hard as what you’ve done here.
Go fuck yourself. it isn't me romanticizing, it's you thinking you know more than others by default, and outright dismissing the viewpoint of others. go away from me and stay away.
I work in telecoms, we do care a lot about latency, HFT guys are the same.
Some parts use FPGAs instead of normal hardware due to the low latency requirements!
OTOH I remember when one man doubled the framerate of a Nintendo game: apparently not all game developers care so much if they leave so much performance unused..
I have been a game developer for the last 7 years and I disagree. Most game developers do not have that much expertise to start with, because it has been a long time since low-level optimization made or broke your game. They tend to lean on proven techniques to the point that they have consistent aches, like your aging grandparents, but keep using them because complaints alone don't break their games. The ratio of game developers capable of optimizing things at that level is roughly the same as (or possibly even lower than) the ratio among comparable non-game developers.
Over the past decade, I watched HN's and Reddit's hype waves cycle through server-side JavaScript, Golang, Haskell, and now Rust. With plenty of secondary favorites in the mix like Julia, Nim, Zig, etc.
A lot of these things do find some modest niche to survive in. But none really take over the world as originally forecast by the hype wave.
Ultimately, you either enjoy tinkering with programming languages for personal enrichment, or you do not. Hardly anyone is actually using anything else at our Java/C#/Python/C++ day jobs.
I don’t think Go was ever especially over-hyped, and it has been pretty wildly successful in its “niche” (server side programming, CLI tools, daemons, etc). There was some controversy because someone said it was for “systems programming” by which the author seems to have meant “distributed/networked systems” rather than “operating systems” or whatever most people mean when they use the (very imprecise) term “systems programming”.
And to that extent, Go has been very successful (virtually the entire container ecosystem and a good chunk of the broader cloud ecosystem). It is doing a lot of stuff that would have previously been in Java or C# or Python or Node or Ruby (contrary to your “hardly anyone is actually using anything else…” remark).
Of course, older languages are naturally going to have more jobs because there’s an enormous volume of legacy code that can’t be cheaply translated to a new language, but so you have to look at the distribution of languages among new projects to be able to even begin making reasonable comparisons between languages, and even then a historically Java shop is probably going to give a ton of preference to Java, so here too we see a lot of weight given to older languages irrespective of their merit.
Of course, this is nonsense and people use containers because they deliver real value, not because of Google marketing or name recognition (most don’t even know containers come from Google, but would likely attribute them to Docker). I’m not a container purist—eventually micro VMs and unikernels will eat a lot of their market share (that’s my prediction, anyway), but until we get there containers are invaluable and criticism like yours is devoid of substance.
While I agree with most of your points, I am not sure what this has to do with the parent you're replying to.
The one part I disagree with is:
> A lot of these things do find some modest niche to survive in. But none really take over the world as originally forecast by the hype wave.
One of Blow's critique of C++ is specifically that it's trying to be the everything language. Jai's goal is to be for video game development, and that's it. Not for embedded systems, not kernels, not drivers, not high performance computing, not operating systems. Just video games.
I was surprised to recently learn that embedded isn't on Jai's roadmap. I respect the decision re scope and feature-creep. It was surprising, given that usually languages with the performance, and low-level capabilities of C are also suited for embedded. Ie: The overlap with existing languages is almost 100% when you look at A: Languages that are fast/LL. B: Languages that are suitable for embedded. (The usual suspects of C, C++, Rust, Zig, and ADA)
I imagine that part of that is that he's a video game developer and not an embedded systems developer. I have no experience with embedded systems programming so idk how similar the two are.
But:
1) Lots of things have high overlap with fast/LL languages.
2) I imagine there are nice returns to focusing the language on a specific use-case, even more so when the core developers are not actively using it for the other use cases (I doubt he or his team will be doing embedded work anytime soon).
I mean, this is a little pedantic. It's true he never said that only games are able to use his language, as if it were some kind of law, but he did say that his language is being designed to address issues that video game developers face, and that this was his main, and almost exclusively his only, concern.
If other domains benefit from that, he's not going to actively bar them from using his language, but he also won't give them much consideration.
It's just false. I'm in the closed beta, and there are a lot of people who use it for things other than games.
When they find a bug or have a suggestion, he gives them the same consideration as anyone.
Either the word "only" is doing a lot of heavy lifting in your sentence, or you're simply wrong [0].
Yes, you could program anything in the language, just like you could program anything in any language. But he's designing the language for video game development. The whole reason he's making it is specifically for video game development, and he's been very explicit that he believes a language should not try to be designed for use in every application.
Javascript absolutely took its position. Speaking of those waves, where did Ruby and RoR go? Seems completely dead on HN whereas 'back in the day' it was top most talked thing. LISP would be a guest of honor - always there, never here.
I think Ruby and RoR just didn’t quite survive the transition to much heavier client-side JavaScript apps. Ruby peaked sometime around 2009 and back then, the browser landscape was much more diverse, and it was common to support IE6. It’s easy to forget how much of a burden IE6 was on web developers.
Ruby also suffered from a proliferation of ill-advised programming practices (monkey patching) and there was also some drama in the Ruby community (Rails Is a Ghetto). These were fixable problems and the Ruby community took steps to stop monkey patching everything and maybe address the other problems, but in the end, I think would-be Rails developers started using Node.js, and Ruby fell from the public spotlight.
As far as I can tell, Python survived by virtue of tools like NumPy, SciPy, Pandas, PyTorch, OpenCV, etc. Kind of a universal glue language for people who don’t want to write C or C++. Otherwise, I think of Python and Ruby (as languages) as nearly interchangeable. Python had its own issues to work through (2 -> 3) and its own drama, but it settled in some more stable niches and seemed to have fewer mercurial personalities at the center of it all.
Ruby and Python are not interchangeable, and Ruby is far more powerful than Python; that's the real reason there is no equivalent of RoR in the Python ecosystem [1].
Python became very popular not by virtue of its tools, but by virtue of its intuitive and beginner-friendly syntax. Because of this essential trait, useful tools and libraries have flourished in the Python ecosystem.
You are right that RoR is like a ghetto, and RoR is not considered representative of the Ruby language. In this respect, I think D has done a good job of ensuring that any D-based library and framework will still resemble the D language. Like they say, with great power comes great responsibility, and I'm afraid that Jai will follow Ruby and Lisp in becoming untouchable by mere mortals, used only by a select few domain-expert programmers maintaining very niche applications.
I transitioned from Ruby to Python for scripting tasks, with a heavy heart, for the simple reason that linuxes typically have Python installed by default, but not Ruby, which made working with and sharing Ruby code in diverse and often locked-down environments too painful.
Ruby/RoR values form over function. Over the past few (five or so?) years, as teams went through the transition from aggressive feature development to devoting more effort to maintenance and operations, they realized that relying on conventions as a guiding principle has a substantial cost burden that you don't pay when you prefer rigor and specification instead.
The disparity between Rust jobs and C/C++/C#/Java/Golang jobs on Indeed is staggering. Even worse, most of the Rust jobs are blockchain-related and may not survive the coming blockchain downturn.
It may be the case that some people use Rust but without Rust jobs, there will be no pool of experienced developers to later draw on.
I think there is a space for other alternative low-level languages that aren't that strict about compile-time safety though (Zig, Jai, Odin), that Rust cannot capture.
Many proclaim that compile-time safety using type theory is the only way to create reliable low-level software, but I think it can alternatively be done with good data structure design and various compiler tooling that instead catch these errors at runtime (generational indices/references, Address Sanitizer, and recently Zig's safety mode). We need to explore multiple directions to really solve the memory safety problem, and I don't think Rust is the only way (although it is a viable way, proven by some recent successful applications).
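A minimal sketch of the generational-index idea mentioned above (all names here are made up for illustration): a handle stores a slot plus a generation counter, and freeing a slot bumps the generation, so stale handles are detected at runtime instead of silently aliasing reused memory.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

struct Handle {
    std::uint32_t slot;
    std::uint32_t gen;
};

template <typename T>
class Pool {
    struct Entry {
        T value{};
        std::uint32_t gen = 0;
        bool alive = false;
    };
    std::vector<Entry> entries_;

public:
    Handle insert(T v) {
        // Reuse a dead slot if one exists; its generation was already
        // bumped on removal, so old handles to it stay invalid.
        for (std::uint32_t i = 0; i < entries_.size(); ++i)
            if (!entries_[i].alive) {
                entries_[i].value = std::move(v);
                entries_[i].alive = true;
                return {i, entries_[i].gen};
            }
        entries_.push_back({std::move(v), 0, true});
        return {std::uint32_t(entries_.size() - 1), 0};
    }

    void remove(Handle h) {
        if (h.slot < entries_.size() && entries_[h.slot].gen == h.gen) {
            entries_[h.slot].alive = false;
            entries_[h.slot].gen++;  // invalidate all outstanding handles
        }
    }

    // Returns nullptr for stale handles instead of a dangling pointer.
    T* get(Handle h) {
        if (h.slot < entries_.size() && entries_[h.slot].alive &&
            entries_[h.slot].gen == h.gen)
            return &entries_[h.slot].value;
        return nullptr;
    }
};
```

After `remove`, a held handle yields `nullptr` even if the slot has been reused for a new object: a use-after-free becomes an observable, checkable condition rather than undefined behavior, which is the runtime counterpart to what the borrow checker proves statically.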
You can even get safety using logic without the compiler. I think resource cleanup responsibility tracking as a static analysis tool is likely to happen with some of these languages. I think zig is a good candidate, once they lock down the intermediate representation.
It's already the case that people are using "logic add-ons" for additional rust static analysis, so one wonders exactly why is it generally speaking that borrow checking itself must occur at the compilation step.
Trivially, one could create an annotation layer on top of Zig or C that exactly replicates the Rust syntax and performs borrow checking in the same way. It wouldn't be exactly the same because there isn't RAII, but you can make correct inferences about what is happening in the body of the functions.
Rust will not accomplish this without significant language changes as well as changing cargo into something a lot more cooperative with external ecosystems.
Zig is getting an absolutely enormous boost from the fact that it is a self-contained C ecosystem that can cooperate with others. Zig has tripped into a very powerful niche--a lot of people LOATHE the build systems of the C/C++ world. If Zig gains very much more traction there, it's going to be extremely hard to dislodge.
I suspect that there are FAR more users of "Zig as C build system" than there are of "Zig as a language".
But cargo already isn’t in Rust. There are AFAIK no cargo-specific concepts that have leaked into Rust, the language. It is very straightforward to build Rust code without using cargo, for example with a makefile.
The reason other tools are slow to add support for rust is because cargo is so ubiquitous in the rust ecosystem that there is little point (I’d estimate that >99% of Rust code is built using Cargo), and not because of any technical impossibility.
I think cutting through the hype-train is something every engineer learns with time, but some times there are diamonds to be picked out.
At least in my own anecdotal experience, Rust has lived up to a lot of the hype for the time-critical low-level projects I formerly wrote in C/C++.
Agreed, though I’m sad it isn’t more often used. It’s certainly picking up steam, but it seems to have a ways to go before it’s going to be a serious contender for new projects in the embedded, video game, etc spaces where C/C++ still reign (by which I mean something like “before the majority of new projects are implemented in Rust”).
I’ve also observed these waves, but never about a language that you can’t actually go pick up a compiler or interpreter for. Jai is unique in that sense.
Your links back up the statement.
The first says the compiler is proprietary and unavailable outside of some beta testers, but may be open in the great and nebulous future.
Or Pi, or iOS, or practically all Android hardware… Kind of a non-starter IMO. Maybe this was marginally acceptable when the language started in '14 but it becomes less so every year.
Blow is designing and implementing a language for making video games, and in particular, for making Thekla’s video games (Jon’s game company). Blow does not make mobile games, nor does he make small, retro-style games that would typically be run on a Pi. His concerns are modern Windows gaming PC’s and the big three consoles. While Jai is Turing-complete and is ‘general-purpose’, the implementation serves Blow’s needs first and foremost, so the chosen targets make sense in light of that.
Given that he and his team are developing a language and a game in that language at the same time it makes sense to focus on the primary platforms they work on, and expect their game to currently run.
Their primary platform is Windows, and the game in its current very early state is also running on Windows.
What's the value in porting it to Raspberry Pi or Android?
From the streams it looks like a Linux port is usually mostly up to date, and MacOS understandably lags behind. But neither of these platforms have any high priority for now.
Yeah, I guess that’s true as far as Blow goes. But there are other people interested in this thing (hence why the OP exists and why we’re commenting on it). And I presume not all of them want to lock themselves in to the apparent dead end that is the x86 platform in the current year when both cross-platform languages and cross-platform game engines are all over the place.
Design by committee vs. design by singular vision is an age old debate. We have plenty of the former, why not give the latter a try? There are plenty of good languages out there already to use, why does this language have to aim for wide adoption?
I'm not sure Blow cares that much about widespread adoption; he seems more concerned with quality. Personally I'd love an open beta, but I can understand why he might want to reserve the right to make drastic changes to the language before he releases it.
If you watch JB's videos (actually they're posted by someone else on YouTube I believe, taken from his Twitch streams), the language definitely exists.
The other niche languages are either buggy, slow, or both. Jai won't be. Ultimately it's about taste and quality. If the language gets enough right then people will switch to it, because people are desperate for a c++ alternative. Open sourcing or early access won't help here. Somebody just has to do the hard work of writing the compiler internals, and it will be done when it's done. And really, I wouldn't bet against somebody who has delivered on big projects twice before.
I agree that it's marketing but not for the 'exclusivity' reason, for the 'first impression' reason.
You'll still hear quite often that DMD is proprietary or that there is no free Ada compiler, even though neither has been true for many years.
I think for the type of game that he's working on ("a 2D physics sandbox game that lets you explore the galaxy"), a custom handmade engine might be far better than using one of the mainstream game engines. His game is in 2D, so the shiny 3D renderers that Unity and Unreal have won't really matter that much. And his game seems to contain lots of performance-sensitive physics and geometry code that needs to be tailor-made to his game, so the default solutions in Unity and Unreal will probably be unusable (and would need to be written from scratch anyway). The goal he's aiming for is very ambitious even for a 2D game, and Unity/Unreal would probably limit him too much.
That said, I think many of the decisions leading to his rewrite in Jai seem to come from him picking the wrong language (D) at the start. He should have stuck with C++ in the first place, even with all its warts and complexities. It's a proven language that has shipped countless games, has tons of tooling developed around it, and provides one of the biggest ecosystems available to a game developer (and unlike D, has debuggers that actually work!). And you can certainly improve compile times a lot by using unity builds, managing your header dependencies well, and not going too overboard with templates (though I agree this can become a major pain point, especially if you're using a laptop).
But at what cost? Using Unity/Unreal incurs the cost of lower flexibility in your design, more time spent learning "the Unreal way"/"the Unity way", and a finished product that looks and feels like every other Unity/Unreal game.
Of course the argument for unity/unreal is also true: Building your own game engine takes lots and lots of time.
Ironically, Unreal Engine was in large part born from Tim Sweeney's "screwing around with tooling and languages".
UnrealScript has long been dropped in favour of C++, but I'm pretty sure Unreal Engine wouldn't exist today, or would be a very different beast, if it hadn't been for UnrealScript and the Unreal Editor being bundled with the original Unreal and its sequels/follow-ups, and Tim's personal interest and research into how programming languages might improve game development, in much the same way that Jonathan Blow is doing now.
Can I assume your use of the phrase "this person" means you are unfamiliar with Mr Blow, and his track record in game development? You might want to look him up -- he's an interesting character and has developed some interesting and influential games.
> Why not just use the best ecosystem like Unreal/Unity
laughable. absolutely laughable.
I'm not even talking about politics of using an engine versus writing one; unity and unreal are complex tools with their own problems. if you value what you are producing, and if you value quality and you want your game to be just the way you want it, Unity and Unreal will fight you just as hard as D or C++ have been for the blog author.
Unity/Unreal are also simply not capable of a lot of things. They are definitely not a pure win for someone making a game.
I’ve made games with off-the-shelf engines and I’ve made games using just code and libraries. Sometimes not even libraries.
The main concern here is that you have a limited amount of time to work on your game. Some people try to sidestep this concern by saying that they’ll “spend as much time as it takes” or something like that, but since these projects often fail due to attrition, I’m skeptical.
Engines like Unity and Unreal have plenty of constraints and they fight you, but you don’t have to fight them unless you have powerful, immovable, inflexible opinions about how the game code should be implemented. Otherwise, these engines give you a surprising amount of freedom. This freedom is not apparent to casual users of the engines and it’s not apparent if you read forums explaining how these engines are used.
> They are definitely not a pure win for someone making a game.
Sure. Not a pure win. In some sense, however, time saved is interchangeable with quality. By choosing an engine like Unity or Unreal, you save some time in some areas of the project, and that time can be reinvested in other parts of your project. You end up with a higher-quality game in the same amount of time, in typical scenarios. Or you end up with a similar-quality game in a smaller amount of time, in typical scenarios.
If the game was a more typical 3D action-adventure or FPS/TPS shooter, then using one of the two engines might have been much better for development time. But this is a complex 2D sandbox simulator with procedural generation and a custom physics engine, so I'm not sure how Unity or Unreal might aid in developing this game in any way.
I’ll pick Unity to talk about here, because I have the most experience with Unity.
I’ve gone in more depth discussing Unity “off the beaten path” before and I think people really overestimate how much you are constrained by the way Unity works. This applies both to seasoned Unity developers and to people who only take a quick look at Unity.
I claim,
- You can use your own physics engine with Unity,
- You can model entities however you want with Unity, even not modeling them as GameObject instances containing MonoBehaviour components,
- It is completely reasonable to develop an actual, real game this way, under realistic staff / expertise / schedule / budget constraints. (In fact, it is known that certain successful commercial games do this.)
Just to focus on a more specific example—let’s say you need your own physics engine. What’s an easy way to do that? Create your physics engine, and have it control the positions of Unity GameObject instances. This way you can easily set up test scenes in the Unity editor and see the results by hitting “play”.
Unity gives you this fantastic GUI for setting up these test scenes and a renderer you can use to visualize your physics engine behavior.
This is really not “abusing” the Unity engine in any way. The engine provides physics simulation, but it does not force you to use it.
Likewise—I’ve written games in Unity that do a lot of procedural generation, and I’ve written games without an off-the-shelf engine that do procedural generation. There are a lot of things that make it easier in Unity, and I’m not spending as much time fussing about with builds, or dealing with input, or figuring out how to port my game to other systems. Unity provides an API for me to create a mesh at run-time. During procedural generation, I generate the meshes for generated chunks of terrain, and the data structures look very similar to the way I would have written the data structures in my own engine.
Since you seem to have much more experience in developing games, I'll take most of your word here.
Though from my four years of experience with Unity (albeit at a non-professional level), I was always fighting the engine when trying to build new things. Trying to build complex UI with Unity's built-in system was a mess, the serialization system always had weird errors and didn't play well with version control, adding custom rendering to the rendering engine was full of hacks, and the documentation was quite poor for the more obscure/hidden aspects of the engine, to the point that I was thinking "I could just write these in C++/OpenGL, why do I have to go through all of this crap?". Nowadays I do gamedev in a lesser capacity (and more general graphics programming in C++ instead), and I haven't looked too deeply into the engine for several years. I think doing all of these in Unity isn't impossible, just that I'd expect a lot of friction while doing it.
There always seem to be some really good indie developers stretching Unity to its limits, though (such as Manifold Garden). I always wondered how much they had to fight the engine to implement some specific features.
Yes, I think about Manifold Garden a lot when talking about limitations in the Unity engine.
There definitely are limitations in the render pipeline, and I’ve spent time frustrated because I know how to do something in OpenGL but can’t figure out how to do it in Unity. But the rendering pipeline in Unity has become far more flexible in recent years, and you can make your own custom rendering pipeline. Look up "custom SRP" videos on YouTube if you want to see what that’s like. Here’s one such video: https://www.youtube.com/watch?v=91zUwJwkXNQ
I do remember serialization + version control problems long ago, but these days, serialization uses a text format by default and if you are sufficiently adventurous you can solve merge conflicts in your serialized data. Better to avoid merge conflicts in the first place, though.
I've worked on several. You haven't heard of them because none of them shipped due to issues with commercial engines (in particular Unreal Engine 4).
They are very difficult to use if the game play semantics are complicated and require lots of interaction with world state or world geometry. If you're making a common FPS, they are great.
> They are very difficult to use if the game play semantics are complicated and require lots of interaction with world state or world geometry.
Could you give an example? I’m trying to understand what the limits look like. It appears to be trivial to you, but for someone outside of game design, I can’t imagine what those might be.
We almost never hit technical limits in the renderer, streaming systems, etc. Instead, we found that pushing gameplay systems beyond the prototype stage would require more and more effort, as we'd encounter deep engine bugs, or the tooling simply did not cater to our use case.
We ended up implementing more and more tooling outside the engine, and there came a point where UE4 was reduced to an I/O and rendering layer. We'd have been happier if the engine had been modular in design from the get-go.
> we found that pushing gameplay systems beyond the prototype stage would require more and more effort
I understand that you're saying that some systems exist that can't work. I'm trying to understand what those systems would look like, and how the user would see it as being different. Do you have an example of the system/mechanic that can't work?
> Could you give an example of a game that used a custom game engine because it couldn't achieve what it wanted in Unreal or Unity?
A game that doesn't require a multi-gigabyte download and a top-tier GPU and CPU to render stuff in 2D? ;)
There are many reasons people might want to opt out of existing game engines: full control over rendering pipelines, assets and dev. experience might be some of them.
Sure, but what does that mean, to the actual end product? Do you have any examples? I’m not a game dev, so to me, I would naively assume that these wouldn’t be limiting, or laughable, as the original comment suggested.
Jai's author, Jonathan Blow, always creates his games from scratch. His games: Braid (highest-rated Xbox Live title when released, "Xbox Live Arcade Game of the Year", https://www.youtube.com/watch?v=YgGeBOC0PX4) and The Witness (while not winning significant awards, it regularly appears in best-of-decade game lists, https://www.youtube.com/watch?v=URjb3RBIe7c).
Besides that there are multiple games where people use their own engines:
In general, once you know what you're doing, it sometimes pays to develop your own engine that is specific to what you're building. Unity and Unreal are generalist engines that you still have to bend to your will: they may have opinionated setups that are hard to work around, may omit things you need, or may just plain not allow you to do them.
Rewrites in whatever language or tool catches your eye are a common pitfall (although you often learn a lot in the process), but moving to Unity would also be a rewrite. It might be a harder rewrite, in fact, because the author would have to adapt to a new language and a new engine. They might be able to use Unity features instead of rewriting some code but they’d still have to figure out how to use those features.
"Kitchen Sink" engines like Unreal and Unity come with their own set of assumptions which may or may not help to deliver a given game. If you want to make a game which is radically different than the industry standard, you're going to have to spend a lot of time working around those baked-in assumptions, and it may be more efficient to choose another approach.
Putting together the available "Jai Primer" document (which may be obsolete) with the blog post, Jai seems to be a language designed by somebody who was fed up with C++ and decided to write their own, tailored language; this looks great when seen through the lens of C++, but there isn't anything particularly innovative in the context of modern languages.
In particular, I'm personally neutral on Zig, but there seems to be little reason to prefer Jai over it.
From the primer:
> Arbitrary Compile-Time Code Execution
This is the big selling point, but, brought to the extreme, it's not necessarily a good thing. The examples in the primer are intended to look great:
- Insert build time data
- Download the OpenGL spec and build the most recent gl.h header file
- Contact a build server and retrieve/send build data
but they're the type of things that turn a build into a monster.
I guess comptime execution is big in the gamedev area (I can't say, I have little experience), but I suppose Zig fits the typical gamedev use cases (curious to hear devs with hands-on experience).
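For comparison, C++'s `constexpr` offers a deliberately restricted version of the same idea: code runs at build time, but it is pure computation, so it can't download specs or contact build servers. A minimal sketch (the names are mine, not from the primer):

```cpp
#include <array>
#include <cstddef>

// Build-time table generation: the compiler executes this loop and
// bakes the resulting array into the binary. Jai's compile-time
// execution is *arbitrary* (network access included), which is
// exactly what can turn a build into a monster.
constexpr std::array<unsigned, 16> makeSquares() {
    std::array<unsigned, 16> t{};
    for (std::size_t i = 0; i < t.size(); ++i)
        t[i] = static_cast<unsigned>(i * i);
    return t;
}

constexpr auto kSquares = makeSquares();
static_assert(kSquares[5] == 25);  // checked by the compiler, not at run time
```

The restriction is the point: a `constexpr` build is reproducible by construction, while "contact a build server during compilation" is not.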
> Code Refactoring
The example presented seems to be "extract to function", which sufficiently advanced IDEs should support. It's also unclear if it's currently implemented.
> Integrated Build Process
I don't see this as a good thing. It's good from the perspective of old programming languages, whose build tools are a mess. But having a separate tool is actually an advantage, as long as it's standardized and well integrated (I suppose modern languages have this support).
> SOA AND AOS
This seems to be a very niche feature.
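For readers outside gamedev: the feature lets you flip a data structure between the two memory layouts with a keyword, while in C++ you maintain both by hand. A minimal sketch, with particle fields chosen purely for illustration:

```cpp
#include <cstddef>
#include <vector>

// Array-of-structs (AoS): each particle's fields sit adjacent in memory.
struct ParticleAoS { float x, y, vx, vy; };
using ParticlesAoS = std::vector<ParticleAoS>;

// Struct-of-arrays (SoA): each field gets its own contiguous array,
// which is friendlier to SIMD and cache when a pass touches few fields.
struct ParticlesSoA {
    std::vector<float> x, y, vx, vy;
};

// A position-only pass streams through just the four arrays it needs
// instead of striding across whole structs.
void integrate(ParticlesSoA& p, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += p.vx[i] * dt;
        p.y[i] += p.vy[i] * dt;
    }
}
```

"Niche" is fair for most software, but hot loops over thousands of entities are the bread and butter of game code, which is presumably why Jai bakes the switch into the language.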
> Reflection and Run-Time Type Information
This is very convenient, but again, nothing unique.
> FUNCTION POLYMORPHISM
It seems to be an odd (flexible/inferred) generics implementation. One of the language objectives is to never perform automated type casting, but in this example, it is performed.
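A rough C++ analog, to show what inferred generics means here (`max_of` is a made-up name for illustration):

```cpp
// The compiler stamps out one version of max_of per deduced argument
// type, with no explicit instantiation at the call site. Tying both
// parameters to a single T also blocks silent mixed-type conversion:
// max_of(1, 2.5) fails to deduce rather than casting the int.
template <typename T>
T max_of(T a, T b) { return a < b ? b : a; }
```

Which makes the primer's example odd: if the language's stated objective is to never perform automated casts, performing one inside its polymorphism example undercuts the rule.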
> THE ANY TYPE
I guess this is a polarizing feature.
> STRUCT POINTER OWNERSHIP
Is this syntactic sugar for a C++ destructor?
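For context, the C++ pattern the question alludes to: ownership expressed as "the destructor frees what the struct points to" (RAII). A minimal hand-rolled sketch; `std::unique_ptr` is the idiomatic standard-library form:

```cpp
// A struct that owns the memory behind its pointer: the destructor
// releases it, and copying is deleted to prevent double-frees.
struct Buffer {
    int* data;
    explicit Buffer(int n) : data(new int[n]) {}
    ~Buffer() { delete[] data; }          // owned pointer released here
    Buffer(const Buffer&) = delete;       // no accidental second owner
    Buffer& operator=(const Buffer&) = delete;
};
```

If Jai's feature only ever runs a free at scope exit, then yes, it is close to destructor sugar; the interesting question is whether it also restricts aliasing the way deleted copies do here.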
> Other Cool Stuff
> Specific data types for 8, 16, and 32 bit integers
No 64/128? :^)
Regarding the post, the non-trivial selling points are described as:
> Reducing compile times from about 60s right now to under 5s, hopefully around 1s
This is certainly very appealing.
> having debuggers work
Uh? That's based on the bad D experience.
> Replacing build-scripts with jai code
> Catching more errors by introducing custom compilation checks using metaprogramming
> Replacing complex metaprogramming code with simpler, imperative code
I’m in the private beta and things are changing constantly. I’m not sure why anyone would pick a language that won’t ever have a real ecosystem of libraries. Coming from C++ and need to remain low-level? Pick Rust, done.
With the almost total incompatibility in mindset and preferences between Jai and Rust, I'm not sure this is actually practical 'advice'. I can't help but think most devs who want to use Jai are avoiding Rust purposefully, so advising them to skip Jai and use Rust for their low-level development needs is just not going to happen.
Who is avoiding Rust (an open source, publicly accessible, and relatively widespread language) for Jai (a closed vapourware language that is apparently not anywhere near release) for anything but toy projects?
I was not implying that people were specifically avoiding Rust in favor of Jai. My comment was more to the point that people who would be interested in Jai are most likely the same devs that are not interested in Rust.
The reasons devs give for avoiding Rust are mixed, but they tend to focus on the restrictiveness inherent in Rust's chosen memory model, the seemingly continual expansion of the language and its complexity, or a strong aversion to its manner of dealing with (and heavy usage of) third-party dependencies, along with some other topics (syntax, functional-adjacent styling, insistence on idiomatic code at any cost, etc.).
While some or all of those reasons may be overblown, some devs just do not want Rust but are looking for an alternative to C or C++.
> I’m not sure why anyone would pick a language that won’t ever have a real ecosystem of libraries
I think Jon is pretty hard in the camp of handmade game programming. My intuition about the decisions he's making about the language is that the way he's thinking about the "library ecosystem problem" is A) use a code generator to create library bindings from C++ (Raphael Luba, another dev contracted to work on Jai, has done some extensive work there), or B) write it yourself.
Please choose more descriptive titles for your submissions here, people.
Some dude I don't know is porting a game I know nothing about to a language I know nothing about and his reason is "because I feel like it". He has grievances about the current state of things and the port hasn't gotten anywhere yet.
I have a hard time thinking of a blog post that would be less useful to me.
No lessons learned, no "why you should consider this obscure programming language you have never heard of". Also, you need to go to Wikipedia to find out that the programming language is targeted at games. Programming games is not one of my interests.
You'd think they'd at least explain why Jai is good for programming games.
But they don't. The blog post can't, in fact, because he hasn't gotten far enough yet to draw any conclusions.
At least make it look like you are making an effort to not waste my time.
You seem to just not be the target audience for this blog post. I enjoyed it because it fits in with a lot of the subjects that I've been looking into and it resonated with me. Your comment is the equivalent of saying that some research paper about a mathematical topic is terrible quality because it assumes you already know Calculus.
Edit: Also, the author mentions right at the start of the article:
> In this series of blogposts, I will document my experience of porting the game that I am currently working on... I’ve planned on doing this for a long time and want to keep some sort of record of my expectations, the journey and the result.
The point of this article is very clear from the beginning. It's simply documenting the author's journey. They don't need to provide any value to you. You didn't pay to read this article, and they may want to keep a record for personal reasons. Putting it out there for the world is great since it allows other people to glance at how the experience went, and I hate when people leave discouraging comments because some article/video/entertainment doesn't cater to their every need.