Hacker News

It looks like LLVM was designed for JIT compilation from the start.



LLVM doesn't make a great JIT compiler and I don't think gcc will either. The problem is that, most of the time, your JITed code won't be used very much and so the amount of time that it takes to generate it is a significant proportion of the total execution time.

LLVM and gcc are optimized for producing the fastest possible code and will do so at the expense of spending more time doing it. That's not to say that they can't become great JIT compilers in the future, but I don't think we're there yet.


LLVM understands this issue. That is why it has both "regular" instruction selection passes and "fast" instruction selection passes. The JIT uses FastISel. But you are definitely right: it is nowhere near as lightweight as, say, HotSpot.


There is also the issue of FastISel not supporting the full LLVM IR, so if you depend on FastISel for a fast JIT, you basically need to design your LLVM IR generation around that, limiting yourself to an undocumented, unspecified, and ever-changing subset of LLVM IR. It can be done, but it's not very pleasant.


Right, but unlike GCC, this is not an architectural issue, but a simple implementation issue that could be fixed.


It's an implementation issue, but it is far from simple.


So I guess I disagree. There are known good solutions to this problem. Yes, it definitely takes some work to actually write the code correctly, but it's not like this is a problem that requires engineering brand-new solutions. It just requires a good engineer and some time.

I consider that "simple", as on the scale of "engineering complexity", it would be simple, even though on the scale of "engineering time" it may take longer.


The JIT doesn't always use FastISel. Last time I checked, you had to enable it manually or set the optimization level to -O0.


There are many optimizations in GCC and LLVM. If you turn them all off you will compile fast and execute slow, and if you turn them all on you will compile slow and execute fast. You can do this on a per function / trace / translation unit / whatever basis.

Production JIT compilers are the same way. For the hottest code paths, all the optimizations get turned on. The coldest ones are interpreted. The first level of jitted compilation has very few optimizations enabled.
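The tiering described above can be sketched in a few lines. This is a toy model, not any real engine's policy: the thresholds, tier names, and the `TieredFunction` class are all invented for illustration. Cold code stays interpreted, warm code gets a cheap compile, and only hot code pays for full optimization.

```python
WARM_THRESHOLD = 100     # invocations before a cheap baseline compile (made up)
HOT_THRESHOLD = 10_000   # invocations before the optimizing compile (made up)

class TieredFunction:
    """Tracks how often a function runs and promotes it through tiers."""

    def __init__(self, source):
        self.source = source
        self.calls = 0
        self.tier = "interpreter"

    def invoke(self):
        self.calls += 1
        if self.tier == "interpreter" and self.calls >= WARM_THRESHOLD:
            self.tier = "baseline"   # few optimizations, quick to produce
        if self.tier == "baseline" and self.calls >= HOT_THRESHOLD:
            self.tier = "optimized"  # all optimizations on, slow to produce
        return self.tier

f = TieredFunction("def hot_loop(): ...")
tiers = [f.invoke() for _ in range(20_000)]
# over its lifetime, f passes through all three tiers
```

Real engines make the promotion decision on loop back-edges and call counts rather than a single counter, but the shape of the policy is the same.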

The main thing that doesn't cooperate with JIT compilation yet is whole program analysis.


That's not actually true, at least with regard to LLVM. The vast majority of the time in LLVM is spent in instruction selection and legalization, which you can't just turn off.


Well, you can use FastISel, which is way way faster (it's a simple non-pattern-matching instruction emitter, like the Plan 9 compilers).
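The contrast between the two styles can be illustrated with a toy example. The IR op names, mnemonics, and the fused `MADD` pattern here are all invented; the point is only the difference in shape: a FastISel-style emitter maps each IR op to one instruction in a single linear pass, while a pattern-matching selector searches the op stream for sequences it can fuse.

```python
# Invented one-op-to-one-instruction table for the direct emitter.
DIRECT = {"add": "ADD", "mul": "MUL", "load": "MOV"}

def fast_emit(ir_ops):
    """FastISel-style: one IR op -> one instruction, no matching."""
    return [DIRECT[op] for op in ir_ops]

def pattern_emit(ir_ops):
    """Pattern-matching style: can fuse mul+add into a single MADD,
    at the cost of scanning for matches."""
    out, i = [], 0
    while i < len(ir_ops):
        if ir_ops[i:i + 2] == ["mul", "add"]:
            out.append("MADD")
            i += 2
        else:
            out.append(DIRECT[ir_ops[i]])
            i += 1
    return out

simple = fast_emit(["load", "mul", "add"])     # ['MOV', 'MUL', 'ADD']
fused = pattern_emit(["load", "mul", "add"])   # ['MOV', 'MADD']
```

LLVM's SelectionDAG matcher is table-driven and far more elaborate than this, but the cost asymmetry is the same: the direct emitter is a single cheap pass, the matcher is not.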


But you can't turn off codegen in a JIT compiler either, unless you're interpreting code, so that requirement doesn't fundamentally make GCC and LLVM impractical. It sounds like LLVM either doesn't have very many optimizations or its backend needs work.


LLVM's instruction selection/legalization infrastructure is very sophisticated, very generic, table-driven, etc. Most JIT compilers use more ad-hoc and quicker mechanisms to get machine code out of IR.


Yeah okay. Someone else pointed out FastISel.cpp. It's probably true that prettiness competes with performance.


FastISel will quite often kick out to the regular instruction selector because it can't handle particular IR constructs.


People at Apple working on JavaScriptCore (Safari's JS engine) have recently added a fourth-tier JIT using LLVM, so it's very much run only for very-very-very-hot code (of course, by waiting for it to become that hot, you've already lost time you could've gained). I can see LLVM making its way into more JITing compilers in such ways: as an ultimate solution where it is clearly worthwhile to spend a lot of time compiling code.

The node.js server case seems worthwhile to bring up here: node.js servers are typically run for days or weeks on end. An extra few milliseconds (or even a second or two) isn't significant if it makes a noticeable difference in performance.


That's all relative, of course, to how long your application is going to run for and how much has to be JIT compiled.


Sure, hence the qualifier "most of the time".

A good example of where this is important is in a tracing JIT. Do you want to compile after 1000 iterations or 10,000 iterations? Faster JIT compilation means it can kick in much earlier.
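The trade-off above reduces to simple break-even arithmetic. All the numbers below are made up for illustration: the question is just how many iterations it takes for the time saved per iteration to repay the compile cost.

```python
def break_even_iterations(compile_ms, interp_ms_per_iter, jit_ms_per_iter):
    """Iterations after which total time with the JIT beats staying
    in the interpreter."""
    saved_per_iter = interp_ms_per_iter - jit_ms_per_iter
    return compile_ms / saved_per_iter

# A heavyweight compiler (say 200 ms of compile time) vs. a fast one
# (say 5 ms), for a loop body that drops from 1.0 ms interpreted to
# 0.1 ms compiled:
heavy = break_even_iterations(200.0, 1.0, 0.1)  # ~222 iterations
fast = break_even_iterations(5.0, 1.0, 0.1)     # ~5.6 iterations
```

With the fast compiler, a trace that runs only a few dozen times is already worth compiling; with the heavyweight one, it has to run hundreds of times, which is exactly why a slow backend forces higher trigger thresholds.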


AFAIK, you can manually specify all of the optimizations LLVM runs during JIT compilation. I think it's not hard to disable most of them for fast compilation, but yes, it may still be too heavy for instant use.


The Unladen Swallow project (faster Python, including an LLVM-based JIT) found that LLVM wasn't really designed as a universal JIT compiler, or at least not one that was useful for languages such as Python. That was one of the reasons they eventually stopped work on the project (although a lot of the non-LLVM work they did was still very good and useful, so the project as a whole was not wasted).

The PyPy developers (another Python-with-JIT project) looked very, very carefully at every available JIT out there, and finally ended up writing their own. It was the only way they could get something that would do what they needed. The result was performance far superior to what the Unladen Swallow project got using LLVM.

It sounds very simple when you look at it from a distance. "We need a JIT, here's a JIT, let's use it." Then you find out that just because something is called a "JIT" doesn't mean that it will be of any use to what you are trying to do. The subject area is very complex and there will probably never be a universal solution.

What I could see the GCC JIT mode being useful for is things like generating certain critical portions of a program under programmer direction. That is, it would make a nice library that you call to generate very optimized code for specific functions or modules. A good example is how GCC is currently called in the background by a number of Python libraries which dynamically generate C code and compile it for faster execution. Being able to do this more directly via a JIT process could be very convenient. This is perhaps the sort of application that the author has in mind.
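The "generate C, compile, load" pattern described above can be sketched as follows. This is a hypothetical illustration, not any particular library's API: the function name, the flags, and the generated C snippet are all made up, and by default the gcc invocation is only constructed, not run.

```python
import ctypes
import subprocess
import tempfile

def compile_and_load(c_source, func_name, run_gcc=False):
    """Write generated C to a temp file and build the gcc command for it.

    With run_gcc=True (requires gcc on PATH), actually compile the source
    to a shared library and load the named symbol via ctypes; by default,
    just return the command line, for illustration.
    """
    src = tempfile.NamedTemporaryFile("w", suffix=".c", delete=False)
    src.write(c_source)
    src.close()
    lib_path = src.name[:-2] + ".so"
    cmd = ["gcc", "-O2", "-shared", "-fPIC", src.name, "-o", lib_path]
    if run_gcc:
        subprocess.check_call(cmd)
        return getattr(ctypes.CDLL(lib_path), func_name)
    return cmd

cmd = compile_and_load("int add(int a, int b) { return a + b; }", "add")
```

A library-based GCC JIT would replace the subprocess round-trip (write file, fork gcc, dlopen the result) with direct API calls, which is precisely the convenience the comment is pointing at.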


It was. I think it's likely this effort was motivated by LLVM's relative strength in JIT compilation.



