
You'd have to work pretty hard to make a compiled algorithm slower than an interpreted one.



Not true. The runtime has access to information that the compiler doesn't, such as which branches are actually taken.


This makes no sense. Interpreters add indirection at every single statement in the program. The fact that they may know the direction of the occasional branch cannot possibly make up for this.
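For a concrete sense of what that indirection looks like, here is a minimal made-up sketch in C of the dispatch loop at the heart of a simple bytecode interpreter; every single instruction pays for the fetch and the switch before any real work happens:

    /* Toy bytecode interpreter: every instruction goes through the
       fetch + switch below, which is the per-statement indirection
       compiled code avoids entirely. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code) {
        int stack[64], sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {                      /* dispatch overhead */
            case OP_PUSH:  stack[sp++] = code[pc++];            break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];    break;
            case OP_PRINT: printf("%d\n", stack[sp - 1]);       break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(program);   /* prints 5 */
        return 0;
    }

A compiler emits the add and the print directly, so that per-instruction overhead simply isn't there.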


JIT is a valid strategy for a scripting language interpreter.


No. A JIT is a valid strategy for a scripting language implementation. An interpreter is a different thing. Many JIT compilers use an interpreter for running uncompiled code and profiling ("mixed-mode interpreter"), but "interpreter" and "JIT" are in no way synonymous.


-1 for that? Wow.


Can you give an example of this? In my experience compiled code is often 100x faster than interpreted.


Currently this is generally true, though not always. The compiler only has so much information available at compile time; the run-time potentially has more. The most aggressive optimizations of compiled code rely on profiling data gathered from actually running the code: you generate representative usage scenarios, run your program against them (usually automated), use instrumented binaries to record how frequently different code blocks are hit, and then feed that data back to the compiler to produce a highly optimized build.
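For anyone who hasn't seen that workflow (profile-guided optimization), it looks roughly like this with GCC; the file name and loop below are invented for illustration, and Clang's flow is similar but merges the raw profiles first:

    /* Sketch of a profile-guided optimization (PGO) build with GCC.
       The source file name and workload are made up for illustration.

       1. Build an instrumented binary:
              gcc -O2 -fprofile-generate pgo_demo.c -o pgo_demo
       2. Run it on representative input so it records branch counts:
              ./pgo_demo
       3. Rebuild using the recorded profile:
              gcc -O2 -fprofile-use pgo_demo.c -o pgo_demo
       The compiler now knows which branches were actually taken and can
       lay out, inline and unroll the code for the common paths. */
    #include <stdio.h>

    int main(void) {
        long sum = 0;
        for (long i = 1; i <= 10000000; i++) {
            if (i % 1000 == 0)     /* rare branch: the profile tells the  */
                sum -= i;          /* compiler to favour the common path  */
            else
                sum += i;
        }
        printf("%ld\n", sum);
        return 0;
    }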

However, not all software engineering groups have the resources to produce such highly optimized binaries, and there is always the risk that a user's actual usage patterns will differ enough from the expected ones that the benefit of this extensive optimization is lost. In an interpreted or byte-code language, much of the same information needed for optimization is available to the run-time, and a run-time designed for optimization may be able to take advantage of it, creating highly efficient code paths based on actual usage. This model is more difficult to implement but potentially more robust than statically optimized compilation. It can also take greater advantage of differences in hardware: a statically compiled native binary can't morph its optimization depending on whether it's running on a single-core Atom, a six-core Core i7, or some 100-core monster of the future, but a run-time potentially can.
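To make the hardware point concrete: in a static native binary you have to write that kind of dispatch yourself, by hand, for each decision; a run-time can in principle make it for all of your code. A crude, made-up sketch of the hand-written version (POSIX-only, in C):

    /* Hand-rolled runtime dispatch in a static binary: the decision about
       how much hardware parallelism to use has to be written explicitly.
       A run-time/JIT could make this kind of choice for every hot path
       without the programmer spelling it out.  (POSIX-only sketch.) */
    #include <stdio.h>
    #include <unistd.h>

    static void process_serial(void)          { puts("single-threaded path"); }
    static void process_parallel(long cores)  { printf("splitting work across %ld cores\n", cores); }

    int main(void) {
        long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* cores visible at run time */
        if (cores <= 1)
            process_serial();         /* e.g. a single-core Atom */
        else
            process_parallel(cores);  /* e.g. a six-core i7 or bigger */
        return 0;
    }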

In the average case most of this is just theory, but the potential is very real.

Some worthwhile background reading:

Trace Trees: http://www.ics.uci.edu/~franz/Site/pubs-pdf/ICS-TR-06-16.pdf

A blog post / talk from Steve Yegge on dynamic language performance and other topics: http://steve-yegge.blogspot.com/2008/05/dynamic-languages-st...


Very interesting. Can this kind of technology yield further speedups for languages that are already fast, as opposed to just bringing dynamic languages closer to that level?

BTW, these technologies are not interpreters but compilers (albeit runtime compilers).


Interpreters are not JITs.


But there is no reason why an interpreter could not contain a JIT to compile code on demand. In fact, the definition of interpreter that seems to be in common use is that it takes raw source code and executes it; nowhere have I ever seen anybody state that it cannot compile that source code on demand as it is interpreted (perhaps to speed up future calls to that code). This is still distinct from VM-based implementations, which compile the source code to byte code and then execute the byte code, and from natively compiled languages, where the code is compiled directly to the host processor's instruction set.


It's probably a tangent, but the theory is used in the OpenGL implementation of OS X [ http://lists.cs.uiuc.edu/pipermail/llvmdev/2006-August/00649... ].


Modern CPUs have branch prediction.



