
Currently this is generally true, though not always. The compiler only has so much information available at compile time; the run-time potentially has more. The most aggressive optimizations of compiled code rely on profiling data gathered from actually running the program: you construct representative usage scenarios, run the program under them (usually automated), use instrumented binaries to record how frequently different code blocks are hit, and then feed that data back into the compiler to produce highly optimized output.
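As a toy sketch of the instrumentation step described above (the block names and threshold are illustrative, not any real compiler's internals), a profiler just counts how often each branch is taken during a representative run, and the optimizer later favors the hot path:

```python
from collections import Counter

# Hypothetical instrumentation: count how often each basic block
# executes during a representative run. Real PGO tooling does this
# inside instrumented binaries; here we simulate it by hand.
block_counts = Counter()

def hit(block_id):
    """Record one execution of a basic block."""
    block_counts[block_id] += 1

def process(values):
    for v in values:
        if v >= 0:
            hit("nonnegative-branch")
        else:
            hit("negative-branch")

# A representative workload: mostly non-negative inputs.
process([3, 1, -2, 7, 0, 5, 9, -1, 4, 6])

# The profile tells the optimizer which branch is hot, so it can
# lay out code to favor it (branch layout, inlining, etc.).
hot_block, count = block_counts.most_common(1)[0]
print(hot_block, count)  # nonnegative-branch 8
```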

However, not all software engineering groups have the capability to produce such highly optimized binaries, and there is always the risk that a user's actual usage patterns will differ enough from the expected ones to lose the benefit of all that optimization. In an interpreted or byte-code language, much of the same information needed for optimization is available to the run-time, and a run-time designed for optimization can take advantage of it, creating highly efficient code paths based on actual usage. This model is harder to implement but potentially more robust than statically optimized compilation. It can also take greater advantage of differences in hardware: a statically compiled native binary can't morph its optimizations depending on whether it's running on a single-core Atom, a 6-way Core i7, or some 100-core monster of the future, but a run-time potentially can.
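A minimal sketch of that hardware-adaptation point, assuming nothing beyond the standard library: a runtime can inspect the machine it actually landed on and size its strategy accordingly, where a static binary must bake one choice in.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# A runtime decision a static binary can't make: query the actual
# core count at startup and size the worker pool to match.
def make_executor():
    cores = os.cpu_count() or 1
    if cores == 1:
        workers = 1      # single-core machine: avoid thread overhead
    else:
        workers = cores  # multi-core machine: scale out
    return ThreadPoolExecutor(max_workers=workers), workers

executor, workers = make_executor()
results = list(executor.map(lambda x: x * x, range(8)))
executor.shutdown()
print(workers, results)
```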

In the average case most of this is just theory, but the potential is very real.

Some worthwhile background reading:

Trace Trees: http://www.ics.uci.edu/~franz/Site/pubs-pdf/ICS-TR-06-16.pdf

A blog post / talk from Steve Yegge on dynamic language performance and other topics: http://steve-yegge.blogspot.com/2008/05/dynamic-languages-st...




Very interesting. Can this kind of technology yield further speedups for languages that are already fast, as opposed to just bringing dynamic-language performance closer to that level?

BTW, these technologies are not interpreters but compilers (albeit runtime compilers).


Interpreters are not JITs.


But there is no reason why an interpreter could not contain a JIT to compile on demand. In fact, the definition of interpreter in common use is simply that it takes raw source code and executes it; nowhere have I seen anybody state that the implementation cannot compile that source on demand as it's interpreted (perhaps to speed up future calls to the same code). This is still distinct from VM-based implementations, which compile the source code to byte code that is then executed, and from natively compiled languages, where the code is compiled directly to the host processor's instruction set.
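To make the "interpreter containing a JIT" idea concrete, here is a minimal sketch (class name and threshold are hypothetical): the interpreter evaluates raw source each time, but once a snippet crosses a hotness threshold it compiles the source to a cached code object, so subsequent calls skip the parse step.

```python
HOT_THRESHOLD = 2  # illustrative: compile after this many calls

class MiniInterpreter:
    """Interprets raw source, but on-demand compiles hot snippets."""

    def __init__(self):
        self.call_counts = {}
        self.compiled = {}

    def run(self, source, env):
        if source in self.compiled:
            # Fast path: reuse the previously compiled code object.
            return eval(self.compiled[source], env)
        n = self.call_counts.get(source, 0) + 1
        self.call_counts[source] = n
        if n >= HOT_THRESHOLD:
            # "JIT" step: compile once so future calls skip parsing.
            self.compiled[source] = compile(source, "<hot>", "eval")
        # Slow path: parse and execute the raw source directly.
        return eval(source, env)

interp = MiniInterpreter()
env = {"x": 10}
for _ in range(3):
    result = interp.run("x * 2 + 1", env)
print(result)  # 21
```

The third call hits the cached code object; the observable result is identical either way, which is the point: the JIT is an internal speedup, not a change in semantics.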



