> However, the Ruby community seems to want to have their own JIT written in C more than they want performance.
YJIT is written in Rust, not C, but it's also not just a matter of wanting to write our own JIT for fun. There are a number of caveats with TruffleRuby which make a production deployment difficult:
1. The memory overhead is very large; it can be as much as 1-2 GB, IIRC.
2. The warm-up/compilation time is much too long (it can be up to minutes for large applications). In practice, this can mean that latency numbers go way up when you spin up your app. In the case of a server application, that can translate into lots of requests timing out.
3. It doesn't have 100% CRuby compatibility, so your code may not run out of the box.
There's a reason why you don't see that many TruffleRuby (or TrufflePython, TruffleJS, etc.) deployments in the wild. Peak execution speed after a lengthy warm-up is not the only metric that matters.
W.R.T. memory usage, that's true, but I think they've been making big improvements there lately with things like node inlining, so it may no longer be true in the near future.
W.R.T. 2, what are you comparing against here? Is the TruffleRuby interpreter really that much slower than the CRuby interpreter, and is that still the case with the native image version? It seems like once it starts compiling the hotspots, the program must get faster than a purely interpreted version. Whilst it may take minutes for that engine to reach peak performance, how long does it take to reach the same performance as YJIT?
W.R.T. 3, yes, but is it easier to fix that or to develop a new JIT from scratch? What is your solution to the C extensions problem, for example? My understanding is that this is a major limit to accelerating Python and Ruby without something like Sulong and its ability to inline across language boundaries.
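To make the C extensions point concrete, here is a minimal sketch of what a typical native extension looks like (the `FastSum` module, the `fast_sum.c` file name, and the `Init_fast_sum` entry point are hypothetical, chosen only for illustration):

```c
/* fast_sum.c -- a hypothetical, minimal CRuby native extension. */
#include <ruby.h>

/* Sums an array of Integers in C. Note that it talks directly to CRuby's
 * C API and object representation (VALUE, RARRAY_LEN, NUM2LONG), not to Ruby. */
static VALUE fast_sum(VALUE self, VALUE ary)
{
    long len = RARRAY_LEN(ary);
    long total = 0;

    for (long i = 0; i < len; i++) {
        total += NUM2LONG(rb_ary_entry(ary, i));
    }
    return LONG2NUM(total);
}

/* Called by CRuby when `require "fast_sum"` loads the compiled library. */
void Init_fast_sum(void)
{
    VALUE mod = rb_define_module("FastSum");
    rb_define_module_function(mod, "sum", RUBY_METHOD_FUNC(fast_sum), 1);
}
```

Because extensions like this are compiled against CRuby's C API and object layout, an alternative runtime has to either reimplement that API as a compatibility layer or, as TruffleRuby does with Sulong, run the C itself on the same JIT so calls can be optimized across the Ruby/C boundary. A JIT that lives inside CRuby, like YJIT, sidesteps the problem: the extensions keep running against the exact API they were built for, even if the JIT doesn't accelerate them.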