"The state of the art in jit compilation has advanced quite a bit since then"

Since when? Since the '70s? Sure.

But since the '90s, not really in terms of techniques, only in terms of engineering and the feasibility of advanced techniques. It's not that there is no research, mind you, but it's definitely more engineering than research. That doesn't make it any less cool or exciting, though.

"another difference between the days of old and new is that this is being targeted at runtime code generation, which naturally trades off throughput of generated code for speed-of-generation."

This actually was true then too, FWIW.

"These issues, as well as developer velocity in translating VM features into optimized VM features, figure more prominently in our problem set than I would expect it historically did."

While I'm not sure how much it matters, I'd just point out that these are not different concerns from the ones history had.

:)

I certainly hope you succeed, FWIW.




> This actually was true then too, FWIW.

Ah, you were referring to an era before my time, and it seems I made some false assumptions about the motivations back then. Thanks for the correction.

> But since the '90s, not really in terms of techniques, only in terms of engineering and the feasibility of advanced techniques. It's not that there is no research, mind you, but it's definitely more engineering than research. That doesn't make it any less cool or exciting, though.

Are you referring to meta-compilation techniques here, or the techniques for runtime type modeling developed to drive type-specialization of dynamic code?

If you are referring to the latter, I agree completely. If the former, I'd argue that the runtime type modeling work brings something new to the table that changes the dynamic. But in general I agree with your point: the main difference between now and then is the sheer level of engineering effort by multiple parties, the cross-pollination of ideas, and other prosaic matters.

In terms of research, my exposure has been to two main pedigrees of thought in runtime type modeling driving type-specialization of dynamic code: the Self work by Ungar and friends, and the Type Inference work by Hackett and Guo (both of whom I have the pleasure of working closely with).
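
To make that concrete, here's a toy sketch of the idea (Python, with purely illustrative names; nothing here is an actual VM API): the generic path records the operand types observed at a call site, and once the site looks monomorphic, a guarded fast path is produced for it.

    class Deoptimize(Exception):
        """Guard failure: a real VM would fall back to the generic path."""

    class CallSiteFeedback:
        """Records the tuple of concrete argument types seen at one site."""
        def __init__(self):
            self.seen = set()
        def record(self, *args):
            self.seen.add(tuple(type(a) for a in args))

    def generic_add(a, b, feedback):
        # Unspecialized path: handles anything, slow in a real VM.
        feedback.record(a, b)
        return a + b

    def specialize_add(feedback):
        # Monomorphic (int, int) site: emit a guarded fast path.
        if feedback.seen == {(int, int)}:
            def int_add(a, b):
                if type(a) is int and type(b) is int:  # type guard
                    return a + b                        # fast path
                raise Deoptimize                        # back to generic code
            return int_add
        return None  # polymorphic site: stay generic

    feedback = CallSiteFeedback()
    for i in range(100):
        generic_add(i, 1, feedback)    # warm-up: collect type feedback
    fast = specialize_add(feedback)    # only (int, int) was observed
    assert fast(2, 3) == 5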

> While I'm not sure how much it matters, I'd just point out that these are not different concerns from the ones history had.

It always helps to understand the motivations and efforts of what came before, so thanks for the clarification. Any further insight, information, or references would be welcome.

> I certainly hope you succeed, FWIW.

This is nbp's baby, but yeah, I hope it succeeds as well. I will never get bored of working in this space :)


Where do you draw the line between engineering and research?

Graal has shown that you can use interpreter specialization to create a JavaScript engine that's essentially as fast as V8. It relies on a lot of clever tricks to do that, like partial escape analysis. Graal has generated quite a few published research papers. It seems like both engineering and research to me.
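
For the curious, here's a toy sketch of the interpreter-specialization idea (illustrative Python, not Graal/Truffle's actual API): an AST node observes its operands and rewrites itself in place to a specialized version, which a partial evaluator can then compile into straight-line code.

    class AddNode:
        def execute(self, a, b):
            # First execution: observe operand types and rewrite in place.
            if isinstance(a, int) and isinstance(b, int):
                self.__class__ = IntAddNode
            else:
                self.__class__ = GenericAddNode
            return self.execute(a, b)

    class IntAddNode(AddNode):
        def execute(self, a, b):
            if isinstance(a, int) and isinstance(b, int):
                return a + b                 # specialized fast path
            self.__class__ = GenericAddNode  # guard failed: generalize
            return self.execute(a, b)

    class GenericAddNode(AddNode):
        def execute(self, a, b):
            return a + b                     # fully generic fallback

    node = AddNode()
    print(node.execute(1, 2))      # rewrites to IntAddNode, prints 3
    print(node.execute("a", "b"))  # guard fails, generalizes, prints "ab"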


This is obviously a hard line to draw.

I tend to draw it at "research produces new things that were not previously known; engineering may produce new insights or improvements of things that were already known".

(But again, I admit this is not a very bright line.)

I consider Graal to be good engineering: it is a new arrangement and engineering of existing techniques. That will, in fact, often produce new papers.

For example, I built the first well-engineered value-based partial redundancy elimination in GCC. Before that, there were zero production implementations, and it was considered "too slow to be productionizable" until I took a whack at it. I helped with some papers on it. It's not research, just good engineering: the theory was known, and I just made it practical.
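
For readers unfamiliar with the technique: the "value-based" part means redundancy is detected on value numbers rather than on syntactic expressions. A minimal local sketch (illustrative Python; real GVN-PRE works across control flow and also inserts computations to remove partial redundancies):

    def value_number(block):
        """block: list of (dest, op, src1, src2) tuples in SSA-like form."""
        value_of = {}   # variable -> value number
        table = {}      # (op, vn1, vn2) -> canonical variable
        next_vn = [0]
        out = []

        def vn(var):
            if var not in value_of:
                value_of[var] = next_vn[0]; next_vn[0] += 1
            return value_of[var]

        for dest, op, s1, s2 in block:
            key = (op, vn(s1), vn(s2))
            if key in table:
                # Same op on the same values: reuse the earlier result.
                out.append((dest, "copy", table[key], None))
                value_of[dest] = value_of[table[key]]
            else:
                table[key] = dest
                value_of[dest] = next_vn[0]; next_vn[0] += 1
                out.append((dest, op, s1, s2))
        return out

    block = [("t1", "add", "a", "b"),
             ("t2", "add", "a", "b"),   # redundant with t1
             ("t3", "mul", "t1", "t2")]
    print(value_number(block))
    # [('t1','add','a','b'), ('t2','copy','t1',None), ('t3','mul','t1','t2')]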

Another example: LLVM now has the first shipping implementation of an efficient incremental dominator tree updating scheme (that I'm aware of; GCC has a scheme to do it for some things, but it's not efficient). Again, the theory had been published, but no implementation was efficient; making it work well is just good engineering.
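
For context, the non-incremental baseline such schemes avoid re-running after every CFG edit is something like the classic iterative dominator algorithm of Cooper, Harvey, and Kennedy; a compact sketch (illustrative Python, not LLVM's implementation):

    def immediate_dominators(succ, entry):
        """succ: node -> list of successors. Returns node -> idom."""
        order, seen = [], set()        # reverse-postorder numbering
        def dfs(n):
            seen.add(n)
            for s in succ.get(n, []):
                if s not in seen:
                    dfs(s)
            order.append(n)
        dfs(entry)
        rpo = list(reversed(order))
        index = {n: i for i, n in enumerate(rpo)}

        preds = {n: [] for n in rpo}
        for n in rpo:
            for s in succ.get(n, []):
                preds[s].append(n)

        idom = {entry: entry}
        def intersect(a, b):           # walk up until the walks meet
            while a != b:
                while index[a] > index[b]: a = idom[a]
                while index[b] > index[a]: b = idom[b]
            return a

        changed = True
        while changed:
            changed = False
            for n in rpo:
                if n == entry: continue
                done = [p for p in preds[n] if p in idom]
                new = done[0]
                for p in done[1:]:
                    new = intersect(new, p)
                if idom.get(n) != new:
                    idom[n] = new
                    changed = True
        return idom

    cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(immediate_dominators(cfg, "A"))  # D's idom is A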

Another example: LLVM's phi placement algorithm is a linear-time algorithm based on Sreedhar and Gao's work. If you read later research papers, they pretty much crap on this algorithm as very inefficient.

It turns out they were just bad at implementing it effectively, and LLVM's version is way faster than anything else out there. Is it research because our results are orders of magnitude better than everyone else's? No. It may be cool, it may be amazing, but it's still engineering.
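
For contrast, here's the classic Cytron et al. approach to phi placement via iterated dominance frontiers (illustrative Python); Sreedhar and Gao's algorithm computes the same set in linear time without materializing full dominance frontiers:

    def place_phis(defs, df):
        """defs: set of blocks defining variable v; df: block -> dominance
        frontier. Returns the set of blocks needing a phi for v."""
        phis, worklist = set(), list(defs)
        while worklist:
            b = worklist.pop()
            for d in df.get(b, ()):          # iterated dominance frontier
                if d not in phis:
                    phis.add(d)
                    if d not in defs:        # a phi is itself a definition
                        worklist.append(d)
        return phis

    # v defined in B and C; D is in both frontiers, so D gets the phi.
    df = {"B": {"D"}, "C": {"D"}, "D": set()}
    print(place_phis({"B", "C"}, df))        # {'D'}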

Remember that conferences like PLDI and CGO accept papers not just on research but also on implementation engineering.

All that said, I also don't consider trying to differentiate heavily between research and engineering to be all that interesting (though I know some prize one over the other).



