A two-tier JIT. Interesting to see tiered JIT compilation catch on the way it has. I seem to remember a few years ago reading that the Java HotSpot team had given up on tiered JIT compilation as being not worthwhile.

How far we've come. A whirlwind tour of today's JITs (apologies for the million links):

.Net Core seems not to use tiered compilation. It never interprets the IR; everything is run through the same JIT compiler. https://github.com/dotnet/coreclr/issues/4331

HotSpot uses three tiers these days (counting direct interpretation as a tier) - https://docs.oracle.com/javase/8/docs/technotes/guides/vm/pe...

JavaScriptCore/Nitro seems to use four - https://webkit.org/blog/3362/introducing-the-webkit-ftl-jit/

Edge's Chakra engine has two - https://blogs.msdn.microsoft.com/ie/2014/10/09/announcing-ke...

V8 seems to use two - https://v8project.blogspot.co.uk/2017/05/launching-ignition-...

Firefox's SpiderMonkey JS engine uses two - https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Sp...


I'm not aware of any effort to retire tiered compilation. It was even promoted to be the default in Java 8 (2014). http://www.oracle.com/technetwork/articles/java/architect-ev...

The only downside I'm aware of is that it increases the pressure on the code cache. If your code cache is not large enough, it will thrash as methods are discarded and then recompiled. We had significant performance problems with a server, and it took quite a while until we realized that was the cause. A cache of 256 MB was more than enough for us running a 2 million LOC monolith under Tomcat, so the absolute memory use isn't that significant. (Reference we found while researching: http://engineering.indeedblog.com/blog/2016/09/job-search-we...).

Once you know this is an issue, it's easy to monitor, but it is one more thing that can go wrong in the JVM.
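
For example, something along these lines works as a rough monitoring sketch from inside the JVM, using the standard MemoryPoolMXBean API (the pool-name filter is just an illustration since the names vary by JVM version; the cache itself is sized with -XX:ReservedCodeCacheSize):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class CodeCacheMonitor {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                // Before JDK 9 the pool is just "Code Cache"; with segmented code
                // caches it is split into "CodeHeap 'non-nmethods'",
                // "CodeHeap 'profiled nmethods'" and "CodeHeap 'non-profiled nmethods'".
                if (pool.getName().contains("Code")) {
                    MemoryUsage usage = pool.getUsage();
                    System.out.printf("%s: %d of %d bytes used%n",
                            pool.getName(), usage.getUsed(), usage.getMax());
                }
            }
        }
    }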


Oops, I wasn't clear. I'd meant that, if I recall correctly, the HotSpot team initially experimented with combining the 'client' and 'server' JITs for tiered compilation, but decided it was a lot of complexity for little gain, and didn't commit.

Only a couple years later did they re-attempt it and stick with it.

I could be mistaken here, and I wasn't able to find anything online to support me.


.NET's focus was always native code, either AOT with NGEN or JIT at load time.

The only variants of .NET with interpreter support came from third-party implementations and the .NET Micro Framework, used in Netduino.

And now their focus seems to be to improve their AOT story.

Another interesting evolution was Android: from Dalvik with its basic JIT, to ART with AOT compilation at install time, to the rebooted ART with an interpreter written in assembly, followed by a JIT and a PGO-driven AOT-compiled code cache.


On .NET Core's focus, that's not actually true:

http://mattwarren.org/2017/12/15/How-does-.NET-JIT-a-method-...

Android optimizes for battery life, but it's also worth noting that Dalvik's JIT was really rudimentary, with none of the benefits of JIT compilation and all of the drawbacks, so ART with AOT was a good upgrade.

But tiered compilation is in a different league, because it's about speculating on what's going to happen based on what the process has witnessed so far. The point of tiered compilation is to profile and guard things at runtime and to recompile pieces of code when conditions change. That's how you can optimize virtual call sites and other dynamic constructs, and it's something you can't do ahead of time, because the missing piece there is the deoptimizer that can revert optimizations once their assumptions are invalidated.

It's really interesting, actually: you can profile a C++ app and use that profile to optimize your AOT compilation, but the compiler is still limited by what it can prove ahead of time, because otherwise the result would be memory unsafe.
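
To make the virtual-call example concrete, here's a hypothetical Java sketch (Shape/Circle are made-up names, and the second method is source-level stand-in code for what the optimizing tier generates internally, not actual compiler output):

    // Hypothetical sketch, not real compiler output.
    interface Shape { double area(); }

    final class Circle implements Shape {
        final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    class Speculation {
        // What you write: a plain virtual call in a hot loop.
        static double totalArea(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) sum += s.area();   // virtual dispatch
            return sum;
        }

        // Roughly what the speculative tier produces once the profile says
        // "every receiver so far was a Circle": a cheap type guard plus the
        // inlined body, with a deoptimization path back to the lower tier
        // (modelled here as an exception) if the guess ever turns out wrong.
        static double totalAreaSpeculated(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) {
                if (s.getClass() == Circle.class) {        // speculation guard
                    Circle c = (Circle) s;
                    sum += Math.PI * c.r * c.r;            // inlined area()
                } else {
                    throw new IllegalStateException("deoptimize and recompile");
                }
            }
            return sum;
        }
    }

An AOT compiler can't emit the guarded version unless it can prove that no other Shape implementation will ever show up, which is exactly why the deoptimizer is the missing piece ahead of time.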


I wrote "And now their focus seems to be to improve their AOT story.", I didn't say anything about .NET Core.

Should have been more explicit, as I was referring to CoreRT and .NET Native.

> But tiered compilation is in a different league, because it's about speculating on what's going to happen based on what the process has witnessed so far.

Just as ART was refactored on Android 7 and 8. ART with pure AOT is only for Android 5 and 6.

https://source.android.com/devices/tech/dalvik/jit-compiler


I think tiered and speculative optimisation are independent concepts.

Tiered is specifically that you have a fast compiler and a slow compiler (or further tiers). Speculative is as you describe.


Is it actually a JIT? It's just compiling everything unconditionally. I guess the fact that the second tier replaces previously compiled functions with more optimized versions makes it a JIT? Or does the definition of JIT require recompiling in response to information about which code would benefit most?


> Is it actually a JIT? It's just compiling everything unconditionally.

Still counts as JIT in my book, but you're right that it's a bit subtle.

Unix-style configure/build/install isn't considered JIT.

Installing a .Net application is pretty similar, but we don't consider it JIT.

In the usual .Net model, what's distributed is IR rather than source code. Compilation to native code happens at install time. The build-and-install process is less explicit than the Unix way, and less error-prone (fewer dependency issues, and no risk of the compiler not liking your source code).

Really it's a very similar model to the Unix one, but we call one JIT and not the other.

Oracle Java, of course, only ever compiles to native code at runtime, and never caches native code. 'Proper' JIT. (This may be set to change in the near future though.)

Interestingly, .Net seems to be moving in the direction of full static compilation, or they wouldn't be asking devs to rebuild UWP apps to incorporate framework fixes - https://aka.ms/sqfj4h/


It might be fun to make a source-based distribution where every binary in /usr/bin started off as a link to a script that built and installed the requested executable (over the top of the link), before executing it.


Source-based distros essentially do that; they just cache the binaries.

Various research OSs are JIT-based, of course. It looks like JX (a Java operating system) caches its native code, so it's not 'pure JIT' https://github.com/mczero80/jx/blob/5fbeae79/libs/compiler_e...

It looks like Cosmos (a C# operating system) does the same https://en.wikipedia.org/wiki/IL2CPU


Or how about a FUSE filesystem on Linux to do the same? That sounds like an interesting idea. Just don't make the mistake of accidentally typing the name of some obscenely large binary like firefox, chrome, or clang...

I think it would need to be integrated into the package management system pretty tightly (or have one of its own) to get all of the shared library dependencies.


Wouldn't be difficult to modify FreeBSD to do that. /usr/ports is just a little more than one indirection away.


Do you think for something to be a JIT it must only compile code immediately before it's used?

In that case the only real JIT I know of is basic block versioning. I think almost all JITs compile branches or whole methods to some extent before they are actually needed.

Yours is therefore probably not a reasonable definition. I think a JIT is just a compiler that can compile as the program is running.


> Do you think for something to be a JIT it must only compile code immediately before it's used?

I mean, that's more or less what the name "just-in-time compiler" implies. I'm aware that the name is not necessarily a precise definition, but I'm not sure how far the definition stretches. Does JIT have a precise agreed-upon definition, or is it somewhat more vaguely defined?


No, these terms never have precise meanings, and trying to debate them too much doesn't achieve much. But if your definition doesn't actually work for any examples of the thing you're defining except one, then it's probably wrong.


Ok, fair enough. I was worried that there was some precise definition I had missed, but if that's not the case, I agree there's no point in debating it.


Well, there's at least one definition that's pretty noncontroversial, if not terribly satisfying or precise: it's not a JIT if you compile well in advance of any indication the program needs to be run.

How fine-grained that lazy-compilation strategy has to be isn't clear-cut, I believe. I think if you distribute a C program with a bash bootstrapper that calls plain old gcc to compile and run the C code only when needed, even gcc might be considered a (coarse-grained, rather rudimentary) JIT in that context.



To me it's a JIT if the compiler is needed to run the code.

Even if that means compiling everything at startup, it still requires the compiler to be present at load time.

Non-JIT would mean you can distribute the code without the compiler. If you can't do that, it's JITed, or interpreted if what has to be present is an interpreter rather than a compiler.


I like this definition.


SpiderMonkey has an interpreter too, in addition to the two JIT tiers.


Unsurprising, considering that browsers load code on demand, while the JVM and CLR, despite the original vision for Java, tend to be used for apps where a slow startup time is acceptable.


What is "Nitro"?


Safari's JS engine, as far as I know.
