Hacker News
HolyJit: A New Hope (blog.mozilla.org)
627 points by bpierre on Oct 20, 2017 | 199 comments



For those wondering, it's a specializer, applied to an interpreter, to specialize the interpreter into a jit.

This is a pretty well-explored technique; it's rarely done these days because, historically, people could not get good enough performance.

(I am not trying to knock them, just put it in context)

For more context on how you'd do something like this, read this:

https://en.wikipedia.org/wiki/Partial_evaluation#Futamura_pr... and http://blog.sigfpe.com/2009/05/three-projections-of-doctor-f...
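To make the first Futamura projection concrete, here is a hand-worked sketch (illustrative Rust, not code from HolyJit): partially evaluating an interpreter with respect to a fixed program evaluates the dispatch loop away, leaving a residual straight-line program.

```rust
// A tiny stack-machine interpreter, and the hand-"specialized" residual
// program that the first Futamura projection would produce for one fixed
// instruction sequence. All names here are illustrative.

#[derive(Clone, Copy)]
enum Op { Push(i64), Add, Mul }

// The general interpreter: dispatches on every opcode at run time.
fn interpret(prog: &[Op]) -> i64 {
    let mut stack: Vec<i64> = Vec::new();
    for op in prog {
        match *op {
            Op::Push(n) => stack.push(n),
            Op::Add => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a + b);
            }
            Op::Mul => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a * b);
            }
        }
    }
    stack.pop().unwrap()
}

// First projection, done by hand for prog = [Push 2, Push 3, Add, Push 4, Mul]:
// the dispatch loop and the stack have been evaluated away.
fn specialized() -> i64 {
    (2 + 3) * 4
}

fn main() {
    let prog = [Op::Push(2), Op::Push(3), Op::Add, Op::Push(4), Op::Mul];
    println!("{} {}", interpret(&prog), specialized());
}
```

A specializer applied to the interpreter would produce `specialized` mechanically rather than by hand; applying it to itself gives the second and third projections (a compiler, and a compiler generator).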


The state of the art in jit compilation has advanced quite a bit since then, and another difference between the days of old and new is that this is being targeted at runtime code generation, which naturally trades off throughput of generated code for speed-of-generation. Especially for web pages, speed-of-generation, and the stability of the resulting code under polymorphism (so that jitcode with type assumptions doesn't become invalid), is paramount.
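For readers unfamiliar with the "stability under polymorphism" point, here is a minimal, hypothetical sketch of a monomorphic inline cache (none of this is SpiderMonkey code): the fast path is only valid while a cached shape guard holds, and polymorphic call sites keep invalidating it.

```rust
// Hypothetical sketch of a monomorphic inline cache. Jitted code for a
// property load guards on the object's shape and reads a fixed slot; a
// different shape invalidates the assumption and takes the slow path.

#[derive(Clone, Copy, PartialEq)]
struct ShapeId(u32);

struct Object {
    shape: ShapeId,
    slots: Vec<i64>, // property storage; the slot index depends on the shape
}

struct InlineCache {
    cached_shape: Option<ShapeId>,
    cached_slot: usize,
}

impl InlineCache {
    fn new() -> Self {
        InlineCache { cached_shape: None, cached_slot: 0 }
    }

    // `lookup_slot` stands in for the generic (slow) property lookup.
    fn load_x(&mut self, obj: &Object, lookup_slot: impl Fn(ShapeId) -> usize) -> i64 {
        if self.cached_shape == Some(obj.shape) {
            // Fast path: the type assumption held; fixed-offset load.
            obj.slots[self.cached_slot]
        } else {
            // Guard failed: do the generic lookup, then re-seed the cache.
            // Under heavy polymorphism this keeps happening, which is why
            // stability of the guarded assumptions matters so much.
            let slot = lookup_slot(obj.shape);
            self.cached_shape = Some(obj.shape);
            self.cached_slot = slot;
            obj.slots[slot]
        }
    }
}

fn main() {
    let mut ic = InlineCache::new();
    let lookup = |s: ShapeId| if s.0 == 0 { 0 } else { 1 };
    let a = Object { shape: ShapeId(0), slots: vec![7] };
    let b = Object { shape: ShapeId(1), slots: vec![0, 9] };
    println!("{}", ic.load_x(&a, lookup)); // miss, then cached
    println!("{}", ic.load_x(&a, lookup)); // fast path
    println!("{}", ic.load_x(&b, lookup)); // shape changed: guard fails
}
```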

These issues, as well as developer velocity in translating VM features into optimized VM features, figure more prominently in our problem set than I would expect it historically did.

We sat down with nbp and went through his proposal in some detail yesterday. I'm reasonably sold on the theoretical soundness of the idea (actually I was excited about it from the first time he proposed it - hand-optimizing every new JS feature is soul-sucking, and Graal has already demonstrated the feasibility of a variant of the concept).

As with any far-reaching idea, though, there are risks associated with it. And we definitely can't rewrite Spidermonkey from scratch.

Personally, I think prototyping the tech on top of a small toy language (objects, properties, proto chains, primitive types, functions), proving out the toy implementation, and then examining how we can incrementalize our transition to HolyJIT is the way to go.

I think a prerequisite is a good codegen backend that we can target. Cretonne is a good candidate, but we need features that aren't on its roadmap - primarily support for on-stack-invalidation.

There's a bit of a road ahead of us on this concept. I think we have a good rough idea of a viable path from here to there, but we are still in the early stages with this.


"The state of the art in jit compilation has advanced quite a bit since then"

Since when? Since the 70's? Sure.

But since the 90's, not really in terms of techniques, only in terms of engineering and feasibility of advanced techniques. It's not that there is no research, mind you, but it's definitely more engineering than research. That also doesn't make it any less cool, exciting, etc.

"another difference between the days of old and new is that this is being targeted at runtime code generation, which naturally trades off throughput of generated code for speed-of-generation."

This actually was true then too, FWIW.

"These issues, as well as developer velocity in translating VM features into optimized VM features, figure more prominently in our problem set than I would expect it historically did."

While I'm not sure how much it matters, I guess I'd just point out that these are not different concerns than history had

:)

I certainly hope you succeed, FWIW.


> This actually was true then too, FWIW.

Ah, you were referring to an era before my time, and it seems I made some false assumptions about motivations back then. Thanks for the correction.

> But since the 90's, not really in terms of techniques, only in terms of engineering and feasibility of advanced techniques. It's not that there is no research, mind you, but it's definitely more engineering than research. That also doesn't make it any less cool, exciting, etc.

Are you referring to meta-compilation techniques here, or the techniques for runtime type modeling developed to drive type-specialization of dynamic code?

If you are referring to the latter, I agree completely. If the former, I'd argue that the runtime type modeling work brings something new to the table which changes the dynamic. But in general I agree with your point - the main difference between now and then is the sheer level of engineering effort by multiple parties, cross-pollination of ideas, and other prosaic matters.

In terms of research, my exposure has been to two main pedigrees of thought in runtime type modeling serving to drive type-specialization of dynamic code: the Self work by Ungar and friends, and Type-Inference by Hackett and Guo (both of whom I have the pleasure of working closely with).

> While I'm not sure how much it matters, I guess I'd just point out that these are not different concerns than history had

It always helps to understand the motivations and efforts of what came before, so thanks for the clarification. Any more insight or information or references would be welcome.

> I certainly hope you succeed, FWIW.

This is nbp's baby, but yeah, I hope it succeeds as well. I will never get bored of working in this space :)


Where do you draw the line between engineering and research?

Graal has shown that you can use interpreter specialization to create a Javascript engine that's essentially as fast as V8. It relies on a lot of clever tricks to do that, like partial escape analysis. Graal has generated quite a few published research papers. It seems like both engineering and research, to me.


This is obviously a hard line to draw.

I tend to draw it at "research produces new things that were not previously known, engineering may produce new insights or improvements of things that were already known".

(but again, I admit this is not a very very bright line)

I consider Graal to be good engineering. It is a new arrangement and engineering of existing techniques. That will, in fact, often produce new papers.

For example, I built the first well-engineered value-based partial redundancy elimination in GCC. Before that, there were zero production implementations, and it was considered "too slow to be productionizable" until I took a whack at it. I helped with some papers on it. It's not research, just good engineering. The theory was known, etc. I just made it practical. That wasn't research.

Another example: LLVM now has the first shipping implementation ever of an efficient incremental dominator tree updating scheme (that I'm aware of; GCC has a scheme to do it for some things, but it's not efficient). Again, previously not efficient. The theory had been published. Again, making it work well is just good engineering.

Another example: LLVM's phi placement algorithm is a linear-time algorithm based on Sreedhar and Gao's work. If you read further research papers, they actually pretty much crap on this algorithm as very inefficient.

It turns out they were just bad at implementing it effectively, and LLVM's version is way faster than anything else out there. Is it research because our results are orders of magnitude better than anything else out there? No. It may be cool, it may be amazing, etc, but it's still engineering.

Remember that conferences like PLDI and CGO accept papers not just on research, but on implementation engineering.

All that said, I also don't consider trying to differentiate heavily between research and engineering to be that horribly interesting (though I know some prize one or the other).


> people could not get good enough performance

PyPy and Graal show good performance. They can be considered special cases of partial evaluation, where the first Futamura projection works. They are not generic enough for the second and third projections, though. They even require some help (e.g. annotations) for the first.


Isn't this what PyPy does? PyPy does the specialization at runtime; maybe this does it at (interpreter) compile time?


It sounds very similar to what RPython and PyPy do. You can even use RPython to create a VM for a language besides Python. I read a good blog post from an author who had done that 5 years ago.

http://tratt.net/laurie/blog/entries/fast_enough_vms_in_fast...


Given how HolyJIT is implemented (with compiler plugins etc), I believe that it does this work at compile time, yes. I haven't actually confirmed this though.


The goal is to optimize the code both at compile-time and at run-time. We need to optimize it at compile time to prevent slow start-up, and we need to optimize it at run-time to benefit from profile guided optimizations.


It also doesn't help that they're specializing an interpreter generated by a compiler. General-purpose compilers generate pretty bad code for interpreters. If the interpreter was written by hand and had some associated metadata, I think there would be a slightly higher chance of success here.


Are they using the generated IR, or using the AST of the interpreter code? The latter would probably be more amenable to JITification.


Rust has several layers of IR: AST -> HIR -> MIR -> LLVM IR

This operates at the MIR layer. You can sort of think of MIR as "core Rust", in that it's the final, desugared form of everything.

This is why the non-lexical lifetime stuff has taken a while; the precursor to that is "port the borrow checker to MIR". MIR/HIR are also fairly new; using MIR is only a year old.


> You can sort of think of MIR as "core Rust", in that it's the final, desugared form of everything.

Though I don't want anyone to get the impression that MIR is a source-compatible subset of Rust; it's a pretty different thing in its own right. (Worth clarifying because one could imagine a "fully-desugared" maximally-explicit subset of Rust, where e.g. all method calls are maximally disambiguated via UFCS, all types are explicitly annotated, no lifetimes are elided, all macros are expanded, etc.)


That is what I imagined when the Karger/Thompson attack came up with regard to Rust. The simplest cheat would be mapping low-level Rust to a safe subset of C. Automatically or hand-convert the source of the Rust compiler, then run that through CompCert. A Csmith-style program run through both versions of the Rust compiler might also catch errors in one or both. One might also use the C tooling to find errors in the Rust compiler or apps. And so on.

The first step that was necessary would be converting the Rust to its lowest-level form. That sounds like the "fully-desugared" form you describe.


I admit that I'm quite curious about determining exactly what features could be left out of maximally-explicit Rust, because it would determine the obvious MVP for an alternative compiler that could technically compile all Rust code, with the caveat that you would need to first losslessly and automatically transform the source (which theoretically shouldn't be too hard to add to the Rust compiler; e.g. it already has the capability to print Rust source post-macro-expansion). Without macros, you'd need neither an implementation of pattern macros nor syntax extensions; without inferred types, you'd need neither a trait resolution engine nor anything of Hindley-Milner; with every identifier fully qualified, you wouldn't need any name resolution rules... and we've already established that a borrow checker is unnecessary to implement if your goal is merely to compile Rust code that has already been typechecked. All that together means that you could have a legitimately useful alternative backend with so, so much less work on behalf of the implementor!
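As a rough illustration of what such a maximally-explicit subset might look like, here is a hand-written sketch (not actual compiler output; the exact qualified paths are my own choices): methods disambiguated via fully-qualified (UFCS) calls, closure argument and return types written out.

```rust
// Illustrative only: a "sugared" function and a hand-desugared equivalent
// in a hypothetical maximally-explicit subset of Rust.

fn sugared(v: &Vec<i32>) -> i32 {
    v.iter().map(|x| x + 1).sum()
}

// Same computation with no method resolution, autoderef, or closure-type
// inference required at the call sites.
fn explicit(v: &Vec<i32>) -> i32 {
    let slice: &[i32] = <Vec<i32>>::as_slice(v);
    let iter = <[i32]>::iter(slice);
    let mapped = Iterator::map(iter, |x: &i32| -> i32 { *x + 1 });
    Iterator::sum::<i32>(mapped)
}

fn main() {
    let v = vec![1, 2, 3];
    println!("{} {}", sugared(&v), explicit(&v));
}
```

A backend consuming only the `explicit` form would need far less of the front-end machinery the comment above enumerates.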


This is why I wish that there were a higher-level, maintained API for the compiler internals, to make it easier to do these kinds of experiments.


This is kind of what mrustc does? It's a Rust-to-C compiler in C++; one which assumes the Rust program is correct (i.e. passes type/borrow check).

But you still need to resolve types to do dispatch, so that's a nontrivial amount of work.

mrustc can currently compile rustc, but the produced rustc doesn't pass the entire rustc testsuite (yet).

A colleague of mine was considering writing a Rust-to-C++ compiler that was similar, but offloading most of the dispatch/resolution work onto C++. This is actually possible: you can turn method dispatch and autoderef into template resolution. You can do stuff to fake type inference too if you know the program is correct already. This is much harder, however, and I'm not yet sure if it's 100% possible without doing some typechecking in the Rust-to-C++ compiler itself.

You could however take rustc --unpretty=typed (or whatever that option is these days) output and transform that really easily.

This requires you to be able to independently verify that the two ASTs are equal if you resugar, because you can't trust rustc's output for this. In fact, the PoC trusting-trust attack I wrote[1] would still go under the radar here.

Verifying ASTs as semantically equal is much less work. But there's a loophole here: it is possible to add type annotations to type-inferred Rust to get different behavior. This is because (among other things) integers are inferred more loosely (an uninferable integer type defaults to i32). It's possible that a trusting-trust attack would be able to propagate itself merely by flipping around the results of inference.

Even if we looked out for that, you still have the problem of dispatch, where a backdoored rustc could change the method being dispatched to be a different trait. Now this isn't something that would work if you assume that the original code was code which compiled fine with rustc (because rustc complains when there's unqualified ambiguity). But we can't actually trust the original compiler to have handled this correctly either!

In both cases there would need to be traces of weird code in rustc for this to work, but it might be possible to hide this.

(This is also somewhat a problem for mrustc, but to a much lesser degree.)

[1]: http://manishearth.github.io/blog/2016/12/02/reflections-on-...


No offense, but the blog post is seriously missing a better (or any?) explanation of what the new JIT is all about. At first I even thought my browser had loaded only half the page...


It does link to https://github.com/nbp/holyjit which explains pretty much everything there is to explain so far...


That's all well and good, but github is blocked where I work.


That's all well and good, but you need to raise that with your IT department if it is an inconvenience rather than complaining to us.

It seems odd that GitHub would be blocked, especially given HN isn't, unless your company has some sort of pathological fear of accidental IP dilution, so perhaps it is a mistake that will be quickly corrected once pointed out?


My coworker's previous employer blocked all code-sharing and question-and-answer sites for programmers because of a pathological fear of accidental IP dilution. It's a real thing.


That it is, although what they end up doing is making people work on their work machine as well as their cellphone. Source: me, working for a defense contractor with the same insane rules, but no "hand in your phone" rules on entry.


Having worked in defense, I generally had stricter rules around phones than around website access. That is really a bit surprising to me.

I guess contractors are all over the map in how they try to make sure to comply with the "rules", though.


I had the same issue at one company. That's the company where I first learned about SSH tunneling. All the smart devs did it.


What’s wrong with him asking for assistance? You could’ve just ignored his comment lol, no need to be so rude. I’m disappointed anyone upvotes you. This is bad behavior, and shouldn’t be promoted.


> What’s wrong with him asking for assistance?

I didn't read it as a request for assistance, more a complaint that we weren't doing enough to assist by providing a link that would work in the specific circumstance that the poster finds themselves in (that we could not have known about ahead of time even if it was something we should be responsible for fixing or working around).

And I did offer assistance by suggesting the only practical way forward (unless you count HN banning github links because some of its readers can't access them as a practical way forward!): discussing the matter with the IT department. Especially as it _could_ be a mistake (externally sourced block lists being overly aggressive unbeknownst to them?) rather than a deliberate action. And if it is a deliberate action the poster may need to investigate what policy the block is part of to make sure they are not accidentally breaching it by other actions.

> no need to be so rude.

I used the exact same tone in my reply as I was replying to. A little passive-aggressive maybe, but if I was rude then so was what I replied to. I know two wrongs don't make a right, but then again neither does the first one on its own so I've not made the situation any worse.


If you don't mind me asking, why is github blocked?


Government site. There doesn't have to be a reason.


I hope they are paying you very well.


Government site. They aren't paying James well.

Added: if you think I'm joking or being unfair, just look at the compensation tables for just about any government outfit. They top out around a salary that is considered average for software folks in some places.


He didn't say he was government. Contractors can make out like bandits still ;)


I know that that is typical, but there are exceptions, and if you work for the right agency you can be paid well, so I don't want to make any assumptions.

If he's not paid well, then I hope he realizes that merely being aware of HN and GitHub puts him in the top 10% of developers, and he can do a lot better than working at a place that restricts his ability to educate himself.


Oh, I'm a contractor and I'm paid decently, but the working conditions are terrible. I've been trying to find a job outside of defense/govt work for a long time, but either the pay isn't as good (the one or two times I managed to land an offer) or I'm not a "cultural fit." Not having a network outside of the govt bubble makes it hard.


Hi there, James. I'm a software engineer at Google. I've worked with many former government and defense contractors here whom I would consider some of the best engineers. Please send me your resume (or anyone else with a similar situation!); my email is in my profile. We have offices all over the US (but not Florida) so the bay area wouldn't be a requirement.


Well, I finally got tired of this place and went and got an offer from a local aerospace company, I suppose I ought to see how that goes :)


Man, I grew up in Brevard, and I wish I could work there, but unfortunately all the software jobs are for defense contractors. You'll probably have to work remote if you want to get out of the defense bubble.


He said government site, not government employee. Contractors are sometimes paid very well.


In a lot of places (if you are not a developer) everything non-work related is blocked. It's stupid, but I've seen this often with friends.


> everything non-work related is blocked. It's stupid

Why is it stupid?


1) "Not work related" typically really means "not whitelisted," so often sites that would be beneficial to the employee completing their job will also be inaccessible.

2) Since everyone has smartphones, they will just access the same sites with their personal devices, which is more time-consuming.


Because the ones who block stuff often don't keep up with what is work related. So in the end you have to write lots of e-mails until they notice that you are one of the few users who should be allowed to use the whole internet.

Eventually, one day they will find out that their blocking is too strict and that they should restrict the blocking to sites which actively try to attack the user's computer...

So it is more of an execution problem, but as it fails quite often, you could call it stupid to invest in such a feature.


I once had to fight with IT to unblock MSDN, and we were a Microsoft dev shop. Glad I do not work there anymore.


I have had issues before getting Channel 9 whitelisted because it was a "video site".


That sounds bad enough. Whitelisting sites on demand should be a simple matter.


Because trying to enumerate everything that could possibly be relevant to your work probably misses some valuable and relevant things.


I'm truly sorry that GitHub is blocked where you work. Here's the pitch, taken from the repo's README [1]:

1) holyjit aims to be easy:

> As a user, this implies that to inline a function in JIT compiled code, one just needs to annotate it with the jit! macro.

    jit!{
        fn eval(script: &Script, args: &[Value]) -> Result<Value, Error>
        = eval_impl
        in script.as_ref()
    }

    fn eval_impl(script: &Script, args: &[Value]) -> Result<Value, Error> {
        // ...
        // ... A few hundred lines of ordinary Rust code later ...
        // ...
    }

    fn main() {
        let script = ...;
        let args = ...;
        // Call it as any ordinary function.
        let res = eval(&script, &args);
        println!("Result: {:?}", res);
    }

> Thus, you basically have to write an interpreter, and annotate it properly to teach the JIT compiler what can be optimized by the compiler.

> No assembly knowledge is required to start instrumenting your code to make it available to the JIT compiler set of known functions.

2) holyjit aims to be safe:

> Security issues from JIT compilers are coming from:
>
> * Duplication of the runtime into a set of MacroAssembler functions.
> * Correctness of the compiler optimization.

> As HolyJit extends the Rust compiler to extract the effective knowledge of the compiler, there is no more risk of having correctness issues caused by the duplication of code.

> Moreover, the code which is given to the JIT compiler is as safe as the code users wrote in the Rust language.

> As HolyJit aims at being a JIT library which can easily be embedded into other projects, correctness of the compiler optimizations should be caught by the community of users and fuzzers. Thus leaving less bugs for you to find out.

3) holyjit aims to be fast

> Fast is a tricky question when dealing with a JIT compiler, as the cost of the compilation is part of the equation.

> HolyJit aims at reducing the start-up time, based on annotation made out of macros, to guide the early tiers of the compilers for unrolling loops and generating inline caches.

> For final compilation tiers, it uses special types/traits to wrap the data in order to instrument and monitor the values which are being used, such that guard can later be converted into constraints.


[1]: https://github.com/nbp/holyjit/blob/master/README.md


Thanks! That should be the blog post. Sounds like a very cool project.

It's funny that the current tiny non-informative blog post starts with a "tl;dr". Maybe it should be "ts; du" instead!


For what it's worth, the name is obviously a pun and a small wink to the Graal VM [1]. Not sure there's any intention to reference TempleOS's HolyC.

[1] https://github.com/graalvm/


It could also be a play on the phrase "Holy shit!"


Both the reference to the GraalVM and a pun are intentional.


Maybe make its slogan: "HolyJit, it's fast!" -- with the double meaning and all.


As long as the performance of this new engine isn’t jitty.


“Could”? If that’s not intentional then this browser will fail lol


This isn't a browser.


Ummm... it’s for Firefox... no?


I mean, maybe, in theory, someday. But the browser would still be "Firefox", that is, very few people would even be aware of this name, regardless of what the name is.


lol, how much of the Firefox today is still Firefox of the beginning? :D


You'd be surprised. When working on Firefox I regularly run into files with "Copyright 1999 Netscape Communications..." file headers.


How much of NSPR is left in there? It's been a long time since I looked at the codebase.


NSPR is still all over the tree. Off the top of my head there's some log functionality, the print format strings, and the environment variable support in use. Searchfox turns up a bunch more: https://searchfox.org/mozilla-central/search?q=PR_&case=true...


I think you must’ve taken my comment too seriously... I agree that Firefox’s success does not depend on the meaning behind the name HolyJit lol. It was just a joke.

The main point was that if they came up with this name without the intention of punning holy shit, I have low confidence in their abilities to succeed in general and with Firefox in particular. But also don’t take this too seriously.


Last I heard, Terry Davis got kicked out of his parents' house and is now living out of his car. Something about an upcoming court case he's involved in as well.

He has a small following on Reddit and users post updates of his whereabouts from time to time. He still sporadically streams live video to the TempleOS site. Last broadcast was from an internet cafe.


Does Terry still post here? I just went to my settings to turn on dead posts.


I saw a couple of dead comments from him here on HN earlier today or yesterday evening actually. First time in a long while I'd come across his comment so that was kind of random.

Edit: The comments were posted 6 hours ago, check his user page https://news.ycombinator.com/threads?id=TempleOS


mods, thanks for not permabanning him. it’s nice to read his posts and see he’s doing okay


Really tragic case, Terry. Doubtful that TempleOS would exist without his schizophrenia, but everyone would be much better off emotionally if he just took his medication...


> everyone would be much better off emotionally if he just took his medication

You say that as if the schizophrenic person, someone who lives in an alternate reality due to psychosis, has any kind of agency over taking his medication. You can only make that choice when you're sane, and even then the drugs aren't perfect and people routinely decide to come off them for some psychotic reason - yes, you can become psychotic while on anti-psychotics - and failing that they come off them because they're so ashamed of being mentally ill from all those condescending and "well-meaning" (read: superior) people telling them what to do and how defective they are all the time that they want to prove they can handle it. Plus there's the issue that the alternate reality is way more interesting than this one.

Your suggestion that it's somehow Terry's fault is about as helpful as telling a homeless person - who is much more likely to have a psychotic illness by the way - to "just get a job".


It’s generally a bad and dangerous idea, especially for someone who is schizophrenic, but antipsychotics can have very serious side effects, and I think it’s painting with a brush that’s a bit too broad to suggest that someone would only stop taking them for “some psychotic reason”. People who take these medications are often facing some very difficult trade offs.


Please don't put words in my mouth.


>people routinely decide to come off them for some psychotic reason

Sometimes it's for a non-psychotic (but maybe ill-advised) reason like "they made me obese and diabetic."


> people tell you you can't handle being off the meds

> you'll show them

> go off the meds

> you can't handle it

> man fuck those people tho, it's all their fault


"Voices hearer" here. I've been mostly off antipsychotics for 7 years now (over 30 years old). If antipsychotics just cut the "voices" down I would be happy to take them, but unfortunately most of them have sedative and motivational side effects that are harder to bear than the "voices" themselves, which I also don't like. However, it comes in handy to have them at hand during those occasional periods of insomnia, since my health priority is keeping a solid routine in study, sports, and sleep pattern.


There is absolutely nothing wrong with you varying your medication in order to manage side effects. That's a very good reason. However, parent said:

> and failing that they come off them because they're so ashamed of being mentally ill from all those condescending and "well-meaning" (read: superior) people telling them what to do and how defective they are all the time that they want to prove they can handle it.

Which is a really bad reason.


>For what it's worth, the name is obviously a pun and a small wink to the Graal VM

It might be that, but not "obviously". In fact it could also be totally unrelated.

One can imagine devs naming something "Holy" without wanting to reference TempleOS, the Graal VM etc.

Religious inspired references are perfectly common in themselves.


http://nbp.github.io/slides/HolyJit/JitTeamIntro/ pretty much proves it. Mouse over page 1 and you read:

"The name is a reference to another project named GraalVM, except that the goal of this project is not to make a VM, but only to make a JIT as a library."


This settles it then!


Hate to be a downer but I don't get the pun at all.


Graal is an archaic spelling of grail.


The pun: J -> Sh


https://en.wikipedia.org/wiki/Holy_Grail , "Graal" is an archaic spelling of "Grail".


FWIW, it's the current spelling in some languages (e.g. French).


Actually it is speculated that the origin is "Sang(re) Real" (royal blood), which metamorphosed into "San Graal". Looking for it meant searching for the (possible) lost bloodline of Jesus, I think.


“According to the Catholic Encyclopedia this is a false etymology.” — https://en.wikipedia.org/wiki/Holy_Grail#Etymology

It’s been around for a while but was most recently popularised by Dan Brown in The Da Vinci Code.


Ah interesting, thanks! I'll be more explicit about "in English" next time.


On a similar note, I have been watching this[0] project which provides an IR to target and optional optimizations. But I like the idea that the only difference between a JIT and an AOT compiler is optimization choice (assuming we don't include tracing as part of the JIT features).

Also, this[1] blog series is a must-read for interested beginners.

0 - https://github.com/stoklund/cretonne/
1 - https://eli.thegreenplace.net/2017/adventures-in-jit-compila...


It sounds more than likely that HolyJit will use Cretonne as a backend.

For the moment it uses dynasm, just to get the prototype working, but I expect to change that in the upcoming months.

(I am the author of HolyJit)


"This means more time to implement JavaScript features" - yes, because JavaScript isn't getting enough features fast enough :-p

Jokes aside, I love this kind of work from the Mozilla team. This and the bits going into Quantum are really amazing pieces of software engineering in my opinion.


I'm often overwhelmed by the size of the JavaScript language; it's a big language (and I'm coming from mostly working in Perl, which is also a very big language... but it is dwarfed by modern JS in terms of TIMTOWTDI and in terms of core language features). But every time I learn about a new feature, I can't fault them for adding it. It's usually a clear improvement in terms of readability and capability. And, because it's such a widely used language, there tends to be a lot of input before new features are added. They are rarely half-baked once they reach the standard.

So, I agree it's overwhelming; books about JavaScript from two years ago are already out of date on a lot of fronts. But, it's also resulted in a really powerful and concise language.


Well, except WebAudio. That standard omits some essential low level audio primitives while featuring an incomplete scatter of high level APIs.


Agreed. I've only given it a cursory glance, but I've read several people bemoaning how awful it is. And, I think it's an example of what happens when APIs are built by folks who aren't actually building things with them. Audio always seems to get screwed up by engineers, sometimes for years. Linux had absolutely shitty audio up until...like yesterday.

It's a feedback loop, I think. Almost nobody uses the web for serious audio because the web sucks for serious audio, and thus almost nobody who uses the web for serious audio is working on the standards for web audio. I admire anyone who can make the web platform work at all for anything audio related.


> Linux had absolutely shitty audio up until...like yesterday.

Is that just hyperbole, or has there been a recent (within the past 6 months) development that has made Linux audio better?


Hyperbole. It really was fixed a couple years ago, maybe as much as three or four, if you were running a cutting-edge distribution like Fedora. But, it was entirely fair to say Linux absolutely sucked at audio for decades.


> "This means more time to implement JavaScript features" - yes, because JavaScript isn't getting enough features fast enough :-p

AFAIK Mozilla does not drive the evolution of ECMAScript (not on their own, anyway), so it would be a way to provide new features faster, give better feedback during phase 3 (and possibly even phase 2), and free up time to implement features they could not so far (e.g. ES6 TCO)


If they did, we might have something closer to AS3 with optional typing, and we might have had it years earlier than the ES6/7 route has taken us.


I'm not sure what is fundamentally different about this JIT compiler (except for the awesome name, of course).

Is it basically just a Rust rewrite which also tries to reduce the complexity of their current just-in-time compiler?

Edit: By calling it "just" a Rust rewrite, I'm not implying that's a simple undertaking, even moreso considering the complexity of modern JS engines.


Basically, instead of manually writing assembly fragments, they want to reuse annotated interpreter code, thus ensuring both correctness and safety for JIT-generated code, while reducing the redundancy. The rustc compiler is used to generate assembly fragments for the JIT, directly out of the interpreter code. It seems like a worthwhile endeavor.
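A rough sketch of that workflow, with invented names (`pretend_jit` is a hypothetical stand-in, not HolyJit's actual API; in the real project a macro captures the compiler-generated IR of the interpreter so it can be specialized at runtime):

```rust
// Illustrative only: the names and the "JIT" here are made up.
// The point is that the developer writes one interpreter function,
// and the meta-JIT derives the compiled code from that same function,
// instead of from hand-written assembly fragments.

type Jitted = fn(&str) -> i64;

// Hypothetical stand-in for the meta-JIT: a real one would regenerate
// and specialize machine code at runtime; here we just return the
// interpreter itself, which is the correctness baseline.
fn pretend_jit(interp: Jitted) -> Jitted {
    interp
}

// A trivially small "language": the program is a string, and its
// meaning is the sum of the digits it contains.
fn interp(src: &str) -> i64 {
    src.bytes()
        .filter(|b| b.is_ascii_digit())
        .map(|b| i64::from(b - b'0'))
        .sum()
}

fn main() {
    let compiled = pretend_jit(interp);
    // Interpreted and "jitted" code agree by construction, which is
    // exactly the redundancy-removal argument above.
    assert_eq!(interp("1a2b3"), 6);
    assert_eq!(compiled("1a2b3"), 6);
    println!("ok");
}
```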


Your explanation is much better than what I can read in the blog post.


Doesn't rustc rely on LLVM to generate its assembly? I watch the webrender repo and there have been several issues about poor codegen that LLVM could not properly handle. Some were also related to absent optimizations at the MIR level.

Also, would that mean that the asm snippets could change as LLVM changes and possibly cause security bugs if not carefully hand-audited/tweaked anyhow?


Rustc does, but it doesn't appear to me that holyjit does, that is https://github.com/nbp/holyjit/blob/master/lib/src/compile.r... and https://github.com/nbp/holyjit/blob/master/lib/src/lib.rs#L1...

Basically, it seems (and I haven't fully digested the code yet) that it uses https://crates.io/crates/dynasmrt for codegen.


Looks like dynasmrt was written for use in holyjit.


Quite possibly! It's the first I'm hearing of it. It's very interesting...


The blog post doesn't explain it very well, but I believe the tool code-generates the assembly at runtime, and not at compile time, so LLVM is not involved in code generation for the JIT.


Since it sounds very similar to RPython it probably does both: it generates a regular (native) interpreter, and generates and merges a JIT inside that interpreter (with all that implies of e.g. tracing and runtime code generation).


Thank you, jchw! Impressively concise and helpful summary!


I believe the fundamental difference is that instead of hand-writing snippets of assembly which their JIT can append, they are having the Rust compiler generate the assembly snippets from Rust code. This lets them be much more confident that they wrote the snippets correctly, and lets them spend much less time writing the snippets so they can focus on optimizations.


... and the snippets are used for the interpreter, which gives you more confidence that interpreted and JITed code behave the same (if I understood it correctly).


I believe you’re correct. Unless there is a compiler bug, the JIT and interpreted code will have the same semantics.


…but (barring a sufficiently advanced compiler) that also means the JITted code can’t use data structures that are wildly different from those used in the interpreter. For example, if integers are boxed in the interpreter, the JITted code would use boxed integers, too.
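A tiny Rust sketch of that point (illustrative types, not HolyJit's): if the interpreter's value representation is a tagged enum, code generated from that interpreter manipulates the tagged form too.

```rust
use std::mem::size_of;

// A "boxed" interpreter value: every integer carries a tag, and the
// enum is sized for its largest variant. Illustrative only.
#[derive(Clone, Debug, PartialEq)]
enum Value {
    Int(i64),
    Str(String),
}

// The interpreter's addition: dynamic dispatch on the tags.
fn add(a: &Value, b: &Value) -> Value {
    match (a, b) {
        (Value::Int(x), Value::Int(y)) => Value::Int(x + y),
        _ => panic!("type error"),
    }
}

fn main() {
    // The tagged representation is strictly bigger than a raw i64;
    // JIT code derived from this interpreter inherits that layout
    // unless the compiler is clever enough to unbox it.
    assert!(size_of::<Value>() > size_of::<i64>());
    assert_eq!(add(&Value::Int(2), &Value::Int(3)), Value::Int(5));
    println!("ok");
}
```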


> Is it basically just a Rust rewrite which also tries to reduce the complexity of their current just-in-time compiler?

My understanding is it's rather more similar to RPython: the developer does not write the JIT, the developer writes the interpreter and the meta-jit generates a JIT from that. The developer can further add various annotations to guide JIT generation for improved performances.

See Laurence Tratt's Fast Enough VMs in Fast Enough Time on rewriting their Converge VM in RPython: http://tratt.net/laurie/blog/entries/fast_enough_vms_in_fast...


Name reminds me of TempleOs's HolyC language...


Inb4 HolyOS, HolyBrowser, HolyKernel


Me too.

I wonder why they don't mention it.


Because it is obscure enough (outside HN) that they might even not have heard of it?


He hadn't. I just explained the Terry/HolyC thing to nbp (the author) this evening over coffee. He had never heard of it before.


s/J/Sh/


This is amazing! Write a regular old interpreter in a fully featured language, then just wrap it in a macro and poof you magically have a jit that will compile and execute your interpreted code on the fly.

Seriously. Look at the example for brainfuck [0], it's less than 70 lines of completely normal interpreter loop, including a one-line macro on the top. What the heck.

[0]: https://github.com/nbp/holyjit/blob/master/examples/brainfuc...
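For a sense of what "completely normal interpreter loop" means here, a plain Brainfuck interpreter looks like this (a minimal sketch in the same spirit as the linked example, but without the macro annotation; it assumes the program's brackets are balanced):

```rust
// A minimal Brainfuck interpreter loop. Not the HolyJit example
// itself, just the same shape of dispatch loop.
fn run(program: &str, input: &[u8]) -> Vec<u8> {
    let prog: Vec<u8> = program.bytes().collect();
    let mut tape = vec![0u8; 30_000];
    let (mut pc, mut ptr, mut read) = (0usize, 0usize, 0usize);
    let mut out = Vec::new();
    while pc < prog.len() {
        match prog[pc] {
            b'>' => ptr += 1,
            b'<' => ptr -= 1,
            b'+' => tape[ptr] = tape[ptr].wrapping_add(1),
            b'-' => tape[ptr] = tape[ptr].wrapping_sub(1),
            b'.' => out.push(tape[ptr]),
            b',' => {
                tape[ptr] = *input.get(read).unwrap_or(&0);
                read += 1;
            }
            // On '[' with a zero cell, skip forward to the matching ']'.
            b'[' if tape[ptr] == 0 => {
                let mut depth = 1;
                while depth > 0 {
                    pc += 1;
                    match prog[pc] {
                        b'[' => depth += 1,
                        b']' => depth -= 1,
                        _ => {}
                    }
                }
            }
            // On ']' with a nonzero cell, jump back to the matching '['.
            b']' if tape[ptr] != 0 => {
                let mut depth = 1;
                while depth > 0 {
                    pc -= 1;
                    match prog[pc] {
                        b']' => depth += 1,
                        b'[' => depth -= 1,
                        _ => {}
                    }
                }
            }
            _ => {} // any other byte is a comment
        }
        pc += 1;
    }
    out
}

fn main() {
    // 8 * 8 = 64, then one more '+' gives 65, i.e. b'A'.
    let out = run("++++++++[>++++++++<-]>+.", &[]);
    assert_eq!(out, vec![b'A']);
    println!("{}", String::from_utf8_lossy(&out)); // prints "A"
}
```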


The BF example was pretty cool and was sweet to see in action. That said, the language is extremely simple, and the real meat of the problem with most dynamic languages is not in the compilation per se, but the runtime type model and specializing code on the basis of type-input fed from that model.

I think a simple next step is to take a small toy language (there's a small rust scheme implementation written by another team member that might serve as a good candidate, if extended with js-style prototype-based objects), and prove this out for an actual type-driven specialization.

But yeah, the possibilities are certainly exciting.
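A hand-written sketch of what "specializing code on the basis of type-input" means (just the guard-plus-fast-path pattern, nothing HolyJit-specific):

```rust
// Illustration of type-driven specialization: after profiling shows a
// call site only ever sees Int + Int, a JIT emits a guarded fast path
// and falls back ("deoptimizes") to the generic path otherwise.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Value {
    Int(i64),
    Float(f64),
}

// Generic (interpreter) path: full dynamic dispatch on every call.
fn add_generic(a: Value, b: Value) -> Value {
    match (a, b) {
        (Value::Int(x), Value::Int(y)) => Value::Int(x + y),
        (Value::Int(x), Value::Float(y)) => Value::Float(x as f64 + y),
        (Value::Float(x), Value::Int(y)) => Value::Float(x + y as f64),
        (Value::Float(x), Value::Float(y)) => Value::Float(x + y),
    }
}

// Specialized path: a cheap type guard protects a dispatch-free body.
fn add_int_specialized(a: Value, b: Value) -> Value {
    if let (Value::Int(x), Value::Int(y)) = (a, b) {
        Value::Int(x + y) // fast path: the assumed case
    } else {
        add_generic(a, b) // guard failed: fall back to the slow path
    }
}

fn main() {
    assert_eq!(add_int_specialized(Value::Int(2), Value::Int(3)), Value::Int(5));
    assert_eq!(
        add_int_specialized(Value::Int(2), Value::Float(0.5)),
        Value::Float(2.5)
    );
    println!("ok");
}
```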


36000 lines of handwritten assembler in v8? Not since the transition to ignition and turbofan...

$ git clone https://github.com/v8/v8.git
$ cd v8
$ find . -name '*.S'

There's still some macro-assembler code in the built-ins, but it's emitted rather than being hand-written assembler.


For both SpiderMonkey and v8, this is counting the number of calls to the MacroAssembler. SpiderMonkey commonly uses the prefix "masm", while v8 uses the macro "__ " to alias the MacroAssembler.

The MacroAssembler is basically what is used to produce assembly code in both JavaScript engines.


This seems really cool! I know it's been mentioned before that there are no plans to rewrite SpiderMonkey in Rust, but this sort of feels like a tiny first step in that direction?

Almost reminds me of early on in Servo's history: https://news.ycombinator.com/item?id=6268521

(Although I may just be projecting what I want to hear :) it's exciting hearing about new Rust projects, especially new stuff going into Firefox.)


Yes, I think it's safe to compare this to the early days of Servo for now, in that one should not necessarily expect to see this in Firefox for the next decade, though the research and expertise gained from creating it might work its way in before then. Still exciting nonetheless!


How is this different from PyPy, other than the fact that one uses RPython and the other uses Rust?


HolyJIT seems to be similar to RPython rather than Pypy: RPython is both the restricted language and the tooling around it, Pypy is a Python interpreter implemented using RPython.


PyPy is a tracing JIT, and Mozilla abandoned their tracing JIT efforts (TraceMonkey) a few years ago.


Is this basically the same transition as V8 did with crankshaft to ignition+turbofan?


I think that was the exact opposite - they generated an interpreter from their JIT. This generates a JIT from their interpreter, like PyPy and Truffle do.


It seems to be to generate JIT snippets from the Rust compiler, not an interpreter.


As I understand it you write an interpreter in Rust then you generate a JIT compiler from that interpreter.


Smells like Graal/Truffle


When I first saw the name "HolyJit" On Reddit I thought it was going to be something new from the TempleOS guy.


By my understanding:

HolyJit : Rust :: RPython (the toolkit) : RPython (the language)

RPython being the language and toolkit used to create PyPy.


Gotta say... this is pretty badass.


What is a jit?

Why would you want one?

Can it help you write in a jot?

Can it help you write in a dot?

Why do I feel like I’m trapped in The Land of Doctor Seuss?


I was expecting TempleOS getting a new jit


I confess, I actually chuckled out loud.

I haven't seen any of their posts for a while. I hope they are okay.


I find it amazing what he has done. I did say new jit, as HolyC is compiled on the fly and has some amazing features that no other system has (embedded images). Yeah, there are some esoteric features, but that is the benefit of DIY.


You have inspired me. I'll torrent his most recent ISO and spin it up in a VM over the weekend. I will give it an honest look and be objective.

I should set a blog up before that.


You will need to ignore anything non-technical. He suffers from mental illness and says offensive things.


I was online in the 80s. I have an account on voat. I visit Slashdot. I should be ok. ;-)


Ah Slashdot. I was there when it was cool :) I do have a 5 digit id, so there is that.

How is voat? Are the programming/tech comments worth reading?


Kinda reminds me of holyC ;)


First thought was: "Is this another Terry Davis language/framework?"[1]

No, no it is not.

I would suggest changing the name from HolyJit to anything else.

[1] https://en.m.wikipedia.org/wiki/TempleOS


I don't know, I'm a fan of the name to be honest, I wouldn't change it just because of the possibility of accidental association.


What about just avoiding needless offence to people who happen to regard the 'holy' morpheme as indicative of ... well ... something holy and not to be trifled with.

I don't have religion, but given an essentially infinite number of alternatives to this not-very-funny one, why do this?

People also have a right to be idiots in other ways, such as poking hornets' nests because it's funny.


This is a good point, despite the downvotes. (Although, on the subject of avoiding offensive terminology, I'd perhaps have avoided the "people also have a right to be idiots" part).

I've recently been reading Ogilvy on Advertising[1] by David Ogilvy, one of the most successful 20th-century figures in the industry, and while it's a bit dated (it was written 30 years ago) and, obviously, the subject matter is advertising, it's filled with excellent advice that's just as useful in all professional and personal contexts.

One aside that jumped out at me was "While we are on the subject of taste, I deplore the current fashion of using clergymen, monks and angels as comic figures in advertising. It may amuse you, but it shocks a lot of people."

I had honestly never really thought of it that way, but it's true. Religious people (that is, most people) find it hurtful and disturbing when you mock their religious faith. That might not be your intent, but it's kind of like 10 or 20 years ago when people used to explain that "by f-- I don't mean gay, just stupid." Just because you don't think or don't know what you're saying is hurtful does not mean that others aren't hurt by it.[2]

If Mozilla had used terminology in their software that was offensive to women, or gay people, or non-Western religions, I'm sure they would alter it, and rightly so. If they called a copy-on-write library HolyCOW, and Hindus said they were offended by this name because it mocks their religion, I'm sure Mozilla would, rightly, change it. And they probably know enough not to use such a name in the first place.

Of course, "holy" is not a specifically-Christian term, but a concept shared by all religions, Western and non-Western alike. A devout Catholic, Muslim, Buddhist or Jew is equally likely to feel hurt and excluded when they see you comparing their beliefs to shit.[3]

As for the other half of the name, the profanity ship has sailed in the broader culture and especially hacker culture, but it's also true that many people don't feel the same way, and nearly all of these people are deeply religious. I think it's reasonable to expect that religious people in the open-source community ought to accept that you or I will sometimes say "shit" for humor or emphasis, but it's also reasonable for them to expect we'll meet them halfway by not making them the butt of our jokes.

There are plenty of equally-funny "JIT" puns that don't come at anyone's expense; "GoodJIT," "JITHappens," "JIT'sTheBomb," and so on. I recommend using one of those.

I'm not a prude. I wouldn't scold you for saying "holy shit" (or "holy cow") in a conversation with me. I haven't scrubbed those phrases from my own casual vocabulary either. But, if someone told me I'd offended them, I would apologize and probably feel bad the rest of the day, just as I imagine almost any of us would. When you're participating in the open-source community, your audience is mainly strangers, with many different beliefs and backgrounds, whose first impression of you[4] is formed by what you've written on the internet. So it doesn't hurt to be a little more careful.

[1] https://smile.amazon.com/gp/product/039472903X/

[2] Hearing that speech a few hundred times, and maybe even giving it a couple, certainly didn't make high school easy for closeted me.

[3] Not that we shouldn't try our best to avoid needlessly causing offense whether it's to one group or many.

[4] And not only you, but the organizations, projects and communities you are involved with.


Some good points, thank you.

I hear what you say about my use of "idiots", but someone stirring the sh*t (yes, I can use (light-to-full) industrial language too, in its place) without any need, casually and semi-deliberately offending (or worse) many many others, is I think behaving idiotically.

BTW on your point [2]: I am enraged when I observe people slandering two groups at once, casually: a direct target (A), by noting them as obviously as bad/gross/etc as assumed obvious horrible out-group (B). Gahhh!


Exactly, the name is not funny at all. Is the author still 15 years old?


Interestingly my comment above is still being voted down! I find it a bit sad that avoiding this sort of needless annoyance is apparently a minority sport.

Maybe that's why as a start-up, though I have very much done the 'disruptive' thing in the past when needed, I didn't do it to be 'disruptive' for the sake of it, and I always try to find a gentle path to what I want to achieve...


Well, now that you know you were mistaken, it won't happen again, and the rest of us can safely enjoy this rather excellent pun.


I'm wondering if Terry Davis is aware of the potential double-entendre (HolyJit->HolyShit)? which was probably the reason this name was chosen for the mozilla project.

Holy jit, that JS is running quickly. (Is it fast in that case? Quickly doesn't feel right...)


I am absolutely convinced that the god of Terry A. Davis. does not approve of JIT compilation.


I haven't looked closely at the architecture, but isn't JIT compilation pretty much how Holy C works?


I was making an uninformed joke. You are right, it's right there in the wikipedia article: "Code can be compiled JIT.".

Let me repeat: God approves of JIT compilation.


But it sounds like "holy shit" LOL. You know what else is funny? Farts!


Well you know, as Louis C.K. once said, you don't have to be smart to laugh at farts, but you have to be stupid not to.


Mozilla's image library was called libpr0n back in the day, so you might say there is some form.


[flagged]


This is not a civil comment by Hacker News' standards. Please don't post like this.

https://news.ycombinator.com/newsguidelines.html


[flagged]


Checking other comments, it seems like this account has been hacked and used to post offensive comments.


Then the original owner should send an email to hn@ycombinator.com to get it back.


I guess the mods know about it already now and have removed the offending comments. The profile page could also do with some scrubbing however.


I admire how Mozilla is so passionate about what they develop, but still, in 2017, Firefox needs a restart every couple of hours for performance reasons. I kind of feel like they have the proof of concept working, but it's never good enough for the wider audience.


In class there are at least 30 out of 40 people using Firefox on Ubuntu with no such issues. On my laptop I use Firefox on Debian with no such issues. Also previously on Windows XP, 7 or 8, both in virtual machines and on hardware, I never had such issues, but that's a while ago so not really applicable anymore.

You might want to submit a bug with your specific hardware to see if anyone can identify the cause, because that's not normal.


Do they use tree style tabs though?


I do. Works fine.


Have you tried the 57 beta?


Get Quantum. Serious improvement


In the meantime, the performance of Firefox's Tree Style Tabs plugin has gone downhill. I'd be interested to know more about new browsers that put tabs on the side by default.


For a moment I hoped this had something to do with HolyC


Can we just drop the JS VM and embed some nicer VM (JVM, DartVM) instead?


Yes, once you convince everyone to drop the thing we have that works and implement the thing we don't have that doesn't work. In short: you complain as though it were an easy change to make.


He/she is right though; just wait until WebAssembly gets more mature.

I bet all those plugins will be back.


Shipping an entire Java/ruby/python/whatever VM with your code doesn't make much sense (not to mention, wasm doesn't have a good GC story). Then there's the DOM API issue.

If you aren't making games or crunching big numbers, wasm isn't for you yet.

Even so, let's assume wasm added all those features today. History shows it would still be a decade before you could ship to all your users. That may not matter for fancy startup X, but it certainly matters for the biggest and most important businesses.

I love the potential of wasm, but I think it's way too soon to be preaching about it being the end-all be-all of the web.


I don't remember ARM, MIPS, x86 or x64 having a good GC history either.

If WebAssembly is good enough as a C and C++ target, it is as good a target as any of those processors.

As for the potential of WebAssembly, there are already ongoing efforts to port .NET and Java runtimes to it, and I am looking forward to Adobe porting Flash to it as well.

So it will come; WebAssembly + Canvas + WebGL is already quite usable.


> I don't remember ARM, MIPS, x86 or x64 having a good GC history either.

Typical client programs targeting the early versions of those architectures were not delivered and installed very often. In contrast, web pages might get changes deployed to production multiple times a day. Needing to deliver a compiled runtime solely in order to run your client-side code is going to be a nonstarter for the overwhelming majority of developers.

Now, at best, we can hope that many developers will collaborate to make caching easier by agreeing to only use one specific version of each runtime, delivered from a single well-known source (though experience with e.g. jQuery means that we shouldn't hold our breath). In the meantime, extending WASM to obviate the need for delivered runtimes will even the field between Javascript and every other managed language.


Runtimes are already being delivered by the majority of languages that compile to JavaScript.

A GC runtime for a language with Oberon-like semantics is just a few hundred KB, way less than something like minified jQuery.


Those ISAs are much lower-level than wasm. If you want to keep things secure, you have to restrict things. For example, wasm is going to optimize a lot of things and isn't necessarily going to treat your pointers the way you expect. Good garbage collectors need pretty fine-grained control. LLVM has been the bane of GC creators for the same kinds of reasons.


Only from those that want to rely on LLVM for their compiler backends.

Using LLVM to target WebAssembly is not a requirement.

Also, the point isn't implementing the best GC algorithm, but rather a good enough one.


WebAssembly has no good story for the DOM.


We have canvas and WebGL.

And just because it isn't there today, it doesn't mean it won't be there tomorrow.


I feel like that's on purpose... we can engineer all these crazy things, but getting good DOM access? Naw, not possible.


wasm is an MVP, they fully intend to add DOM support: http://webassembly.org/docs/future-features

It's just that implementing the DOM requires a lot of other (complex) things:

1. Stable JS object ABI

2. Integration with the garbage collector (GC)

3. Better integration with modules

Personally, I'm hoping that we actually get a lower-level subset of the DOM APIs that doesn't rely on as many OO features, so we can bind to it more easily and avoid more DOM-manipulation overhead (though I have no idea what this would look like).


Personally I'd like to see an experimental mode in Quantum/Servo with an RPC interface to render directly to the display list, possibly with an intermediary like a vector of VDOM operations.


WebAssembly has a lot of potential. It would be nice to write web apps in a language other than JavaScript that can compile down to WebAssembly and run in the browser.

Will there be a day when we can make desktop-class apps that run in the browser without having to wade through the insanity of what is out there now... Should I use ReactJS, VueJS, Flow, Svelte, AngularJS, EmberJS, NextJS, on and on and on?

Pick an approach and standardize? Or just let people ship large WebAssembly binaries with their own runtimes and UI frameworks.


Thanks to WebAssembly there are already porting attempts for quite a few runtimes, just let it mature a bit more and we will get the revenge of plugins.


I agree this would be better, but backward compatibility constraint makes it impossible. WebAssembly is an attempt to add a nicer VM in addition to JS VM.


I thought one of the points of WebAssembly was to not create a second VM. https://developer.mozilla.org/en-US/docs/WebAssembly/Concept...

> With the advent of WebAssembly appearing in browsers, the virtual machine that we talked about earlier will now load and run two types of code — JavaScript AND WebAssembly.


Sure! But do you still want a backwards compatible web browser wrapped around that? Seems like a good opportunity for a clean break from that too.


I know! All this work for Javascript. How many precious developer hours have been committed to this crappy language???


It's fun to mock God ... until he judges you for it.

Daniel 5


Why does the Cargo.toml[1] file in the source repo for HolyJit have the word "brainfuck"[2] in it?

Does that have some significance to Rust or Mozilla? Or is this a case of copy pasta?

[1]: https://github.com/nbp/holyjit/blob/master/Cargo.toml

[2]: Permanent link to line: https://github.com/nbp/holyjit/blob/1f20eb41de2dae14179815c7...



Because the first example that uses holyjit is an implementation [1] of the brainfuck [0] language.

[0]: https://en.wikipedia.org/wiki/Brainfuck

[1]: https://github.com/nbp/holyjit/blob/master/examples/brainfuc...


This is a reference to examples/brainfuck.rs in the repository, which seems to implement a JIT interpreter for Brainfuck, a very basic esoteric programming language.


Because it has an example that JITs Brainfuck: https://github.com/nbp/holyjit/blob/master/examples/brainfuc...

(They actually don’t even need that line, as Cargo already infers this via convention)


There's a brainfuck [1] interpreter example in the examples/ directory, which I assume is the reason for this allegedly temporary name.

[1] https://en.wikipedia.org/wiki/Brainfuck


>the source repo for HolyJit have the word "brainfuck"[2] in it?

It is a reference to the best programming language ever created.



