
Seeing how many billions have been poured into JIT runtimes such as CLR/JVM/V8, I'm questioning whether or not it is realistic to create a production-ready JIT as a side project?



It's important to note that the goals of this BEAM JIT are a lot more modest than those other JITs.

There's no goal of heroic optimization; the optimization goal is really just to remove the overhead of interpretation.

Because the JIT is fairly simple, it's fast enough to run when code is loaded, which removes the need to apply it only to some code: all code is JITed to native as it's loaded. Or you're on an unsupported platform and all code is interpreted.
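
Very roughly, a load-time template JIT just walks the loaded bytecode once and pastes a fixed native snippet per instruction. A toy sketch of the idea in C (my own illustration, not anything from the actual BEAM implementation; it assumes x86-64 and a POSIX system that still permits an RWX mapping):

    /* Toy template JIT: translate a tiny accumulator bytecode to x86-64
       at "load time", then call the result. Illustration only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    enum { OP_LOAD, OP_ADD, OP_RET };          /* toy bytecode opcodes */

    typedef int (*jit_fn)(void);

    static jit_fn jit_compile(const int32_t *code, size_t len) {
        uint8_t *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return NULL;
        uint8_t *p = buf;

        for (size_t i = 0; i < len; ) {
            switch (code[i++]) {
            case OP_LOAD:                      /* mov eax, imm32 (B8 id) */
                *p++ = 0xB8;
                memcpy(p, &code[i++], 4); p += 4;
                break;
            case OP_ADD:                       /* add eax, imm32 (05 id) */
                *p++ = 0x05;
                memcpy(p, &code[i++], 4); p += 4;
                break;
            case OP_RET:                       /* ret (C3) */
                *p++ = 0xC3;
                break;
            }
        }
        return (jit_fn)buf;
    }

    int main(void) {
        /* "load" a module: LOAD 40; ADD 2; RET  -> returns 42 */
        const int32_t prog[] = { OP_LOAD, 40, OP_ADD, 2, OP_RET };
        jit_fn fn = jit_compile(prog, sizeof prog / sizeof prog[0]);
        if (fn) printf("%d\n", fn());
        return 0;
    }

The real thing (BeamAsm is built on the asmjit library) does the same single pass over the BEAM instructions, just with far richer templates and proper memory handling.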

Because it's all or nothing, testing the OTP release should uncover any JIT bugs; you won't have the hard-to-track bugs sometimes seen in other systems, where a function's correctness depends on whether or not it was JITed, which in turn depends on runtime state. That doesn't mean no JIT bugs, of course, but they should be easier to track down.


It's really a case of the 90/10 rule: you get the first 90% of the benefits from 10% of the work, and then it's many, many incremental changes that get progressively more expensive.

Also, in the case of Erlang, the language model will reward these first steps even more than most subsequent optimizations would, because Erlang code in general doesn't have many tight loops of the kind that are prominent in benchmarks, where a good register allocator would provide huge wins.

More on the 90/10 rule: we need to remember that these expensive JITs are very complicated, with multiple optimization levels and interpreters combined with tons of GC options, whereas here they explicitly dropped JIT-interpreter cross-calling to simplify the design, along with adopting a more straightforward internal memory model with fewer complicated edge cases.


It sounds to me like “JIT” here just means “not a batch compiler”? This seems more like the way Common Lisp compilers work than what I think of as a JIT.


I'm not super familiar with LISPs, but what the JIT is doing here is transforming bytecode into native code, which is what other JITs do.

What it's not doing, but is commonly done in other JITs, is any sort of runtime profiling and choosing which code to transform; all modules are transformed when loaded.
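
To make the contrast concrete, here's a hypothetical sketch of the usual "count calls, compile when hot" trigger that this design skips entirely (all names and the threshold are made up, and this isn't how any particular engine does it):

    /* Hypothetical hot-spot trigger: run through the interpreter until a
       call counter crosses a threshold, then switch to a compiled version.
       Names and threshold are invented for illustration only. */
    #include <stdio.h>

    #define HOT_THRESHOLD 3               /* absurdly low, to trip quickly */

    typedef int (*impl_fn)(int);

    static int interp_square(int x)   { return x * x; }  /* "slow" path */
    static int compiled_square(int x) { return x * x; }  /* "fast" path */

    typedef struct {
        impl_fn  interp;                  /* always available               */
        impl_fn  native;                  /* NULL until the function is hot */
        unsigned calls;
    } function_entry;

    /* stand-in for handing bytecode to a real compiler */
    static impl_fn compile_to_native(const function_entry *f) {
        printf("compiling to native after %u calls\n", f->calls);
        return compiled_square;
    }

    static int call_function(function_entry *f, int arg) {
        if (f->native == NULL && ++f->calls >= HOT_THRESHOLD)
            f->native = compile_to_native(f);   /* only hot code pays for JIT */
        return f->native ? f->native(arg) : f->interp(arg);
    }

    int main(void) {
        function_entry square = { interp_square, NULL, 0 };
        for (int i = 1; i <= 5; i++)
            printf("square(%d) = %d\n", i, call_function(&square, i));
        return 0;
    }

Dropping that machinery, and the interpreted/native cross-calls that come with it, is a big part of why this design stays simple: per loaded module there is only ever one executable form.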

As described in the article, previous JIT attempts for BEAM did have that functionality, and they didn't meet the project goals: profiling cost too much, the compilation step was too expensive, and mixing modes between interpreted modules and native modules added too much complexity.

I haven't looked at the code behind this, but from the articles I haven't seen anything that would preclude running the bytecode-to-native-code transformation ahead of time (or caching the just-in-time transformation for future use); it's just not part of the implementation as of now.


Reading through the article, it sounds similar to what most lisp compilers do: read source files into a data-structure, transform that structure in various ways to some kind of IR and then generate machine code from that IR. They generally do this the first time a file is loaded and generate a “fast load” file to speed up later attempts to load the file.

This sounds pretty similar, except there’s no fast load file.


I guess a difference is that Lisp implementations generally cache the generated code between runs.


The BEAM is mostly concerned with correctness and horizontal scalability. I think a little vertical scalability from a JIT raises the throughput of the system without really changing how it works or what you use it for. If anything, maybe you write less stuff in native code.

A great deal of research has been published in the last 25 years, and some of it invalidates earlier wisdom due to changes in processor design. Just following this trail and applying the 80/20 rule could get a lot done for a little effort. And a simple JIT has half a prayer of being correct.


A tracing JIT is no small feat. You could go simpler and have a template JIT, which has a lot less complexity. Guile went with that, and the speedup from the already quite fast 2.2 to 3.0 was significant.

Andy has been working at Igalia on the JS engines of Chrome and Firefox, which makes me believe it might not be easily reached by mere mortals, but looking at the source, it is quite easy to follow, even though I would not trust myself to make any significant changes.


LuaJIT?


You beat me to it.

If you're aiming to compete with HotSpot and .NET then you'll need to invest millions, but not all JITs are this ambitious. GNU Lightning is another example of a JIT with few people behind it.


What's "production ready"?

Works, is stable and faster than an interpreter? Sure, that is achievable for a motivated and skilled developer working on their own. At least for a reasonably simple language, maybe not for a beast like C++.

Competitive with one of those bigcorp-funded ones? Nope.



