Asm.js AOT compilation and startup performance (blog.mozilla.org)
162 points by bzbarsky on Jan 14, 2014 | 83 comments



Author of the mentioned Nebula3 demos here. I must say it was extremely impressive to watch how quickly the AOT compilation time for asm.js code in Firefox improved within only a couple of weeks. I think when I first tried to compile the demos to asm.js in a very early OdinMonkey build, Firefox froze for 10-15 seconds on start. Today it takes about half a second, and everything is done asynchronously (no freeze or stuttering). This is especially impressive compared to the time PNaCl currently requires when spinning up the demo for the first time (after that first compilation pass the result is cached, though, and the next start is basically instant). Here's a bit more info on the demos (lines of code, compiled binary size on various platforms, etc.): http://www.slideshare.net/andreweissflog3/gdce2013-cpp-onthe...


It's funny how, time and time again, the MIT approach loses to the Hacker approach. Worse is better, so to say ;)

On purely theoretical grounds, having LLVM in the browser sounds like an amazing thing. It elegantly solves all the problems of using different languages in the browser, gets you the heavy-duty optimizations of the native LLVM project, etc.

Then you look at JavaScript. It was written in a week. It's a sloppy mess of Java, Self and Scheme merged into a single horrible entity. But it just works™. And now it works fast :D

Good job Mozilla.


Let's make no mistake about it: JavaScript has been a multi-billion-dollar focus for several top-tier engineering companies for almost two decades. It's more accurate to say that JavaScript has succeeded despite its limitations.

There's no denying that PNaCl is a superior approach, and if we were starting from square one it would be the smarter design as well. That said, a well-entrenched language supported (and, crucially, maintained) by multiple vendors, with loads of developer intellectual investment, should win this competition.


> JavaScript has been a multi-billion-dollar focus for several top-tier engineering companies for almost two decades.

No, that's an exaggeration in time and dollars.

JS was one full-time person, me, till fall 1996, then two. Growth after that was to about eight max, and less than the Java crew at Netscape, until Netscape folded on Java and laid off a bunch of people in late 1997. Call the Netscape investment 8 years x 8 people = 64.

On the IE side, the JScript engine reused a common assembly-coded COM-dispatching indirect-threaded (I think) interpreter, and seemed from what I could tell to take about 6 people, but I'll round up to 8. Having no LiveConnect, plus the COM-based infrastructure, saved them work compared to Netscape. Ignoring JScript.net in the Y2K era, the investment until the IE6 stagnation (skeleton crew) looks like at most 6 x 8 = 48.

Apple did not start with KJS till 2001, I believe. They've always had at most 3 people on JavaScriptCore, they run a tight ship: 14 x 3 = 42.

Chrome started V8 in 2006, as far as I can tell. Lars and a team from U. Aarhus and Google, on his farm, wrote four versions I hear (good idea: prototype, learn, tear up and rewrite). Call that 2 x 10 = 20 for the first two years; since then I think it is more like 5 x 20 = 100.

64 + 48 + 42 + 120 = 274 person-years. Even at $1M/person/year, well high even with the Google wealth effect of RSUs (for most; I'm guessing, but an outlier like Lars or Kasper getting rich does not count; really we should stick to base salary + benefits/burden overhead), that's well shy of "multi-billion".

You can say JS learned from all the JIT research before it. You'd be right, but your statement was exclusive to JS and so excluded that upstream, and fair's fair: that helped other languages who could read the literature and code.

EDIT: Eric Lawrence on twitter reminds me that Chakra has a big team, I hear ~60 heads. That seems to be since 2009. Still well south of "multi-billion", even with my absurdly high $1M/year burdened engineer price.

/be


This kind of post and response, meaningless in the grand scheme of things, is why I come to HN. You never know who will be next to say X sucks and then have someone personally involved in creating X speak up unexpectedly. Makes you want to apologize for saying X sucks in the first place, except you're not sorry, because without it you wouldn't get the resulting conversation. ;-)

It does make me wonder: what things have I said about somebody's creation, only to have them come along and see it? Sometimes it's just hard to remember there are real humans on the other side of this screen.

... And with that, I know I'm up too late. Good night all, and thanks for the post, Brendan. I did want to point out, though: you forgot IE 10 and especially 11, plus the fun Opera Mini must have had remotely executing JS for a brief time. But these wouldn't revise the numbers up significantly. I'd add Adobe to the mix, but ActionScript and AIR are maybe too far from a browser to count for much evolution in JS terms.


Right, see EDIT for Chakra, and yes, I forgot Opera. Lars Thomas Hansen, then Chris Pine, on LinearB and Futhark; then Jens Lindstrom and (I heard) a small team in Sweden on Carakan.

Also (ahem), I forgot Mozilla in the post-2003 era.

Still, no matter the addenda and the crazy $1M/person-year, nowhere near even $1B.


You never know who will be next to say X sucks and then have someone personally involved in creating X speak up unexpectedly.

So HN is now the /. of the mid-201Xs? /. was at its best when people in the know were finally pissed off enough to chime in and set the hooligans straight.


It does make me wonder, what things have I said about somebody's creation and then have them come along and see it

In passing, I wrote some disparaging remarks in a GitHub issue about the ES6 binary data strawman spec written by Dave Herman, pretty much out of my own ignorance, only to get a message from him a few months later asking for feedback on it and what I thought was wrong.

By this point I had learned more about the topic and about reading spec language, and knew enough to know I had just been an idiot. I ashamedly responded with an apology and basically told him that my criticism could be ignored because it came from my lack of understanding and not a real problem with the spec.


It's pretty interesting to see it laid out there in person years. It really is a small number (especially compared to the much larger number of bodies working on other parts of those browsers).

I'll still take umbrage at the "MIT approach over Hacker approach" comment by the OP. "Worse is better" is fairly independent of that, and I think it does a disservice not only to the academic shoulders that the modern JS engines stand on, but also to the academic brains packed into most of the JS engine teams (even just starting with the backgrounds of the editors of the asm.js spec).

It's also amusing, of course, that the submitter of this article is bzbarsky, who has done amazing work for the web and whose mailing list correspondence, at least, all comes from an address ending with "@mit.edu" :)


In fairness, that's just the address I use for most of my email in general. My only affiliation with MIT at this point is being an alum.

That said, I do have a tendency to err on the "not worse" side, I think. And yes, we did read "Worse Is Better" in http://ocw.mit.edu/courses/electrical-engineering-and-comput... (or rather its decade-older incarnation). ;)


It's not my name: http://www.jwz.org/doc/worse-is-better.html

And I don't feel one bit sorry about it. I guess I could call it the Stanford(/MIT) way, but MIT is the nicer, juicier target.

Also, asm.js is Worse is Better compared to PNaCl. It's simpler, it's a 'hack' on top of the JS compiler, and it's not as fast as PNaCl's native performance. But it's the epitome of the said approach.

C is also the Worse is Better approach compared to Lisp, and it had some amazing people working on it.


> There's no denying that PNaCl is a superior approach, and if we were starting from square one it would be the smarter design as well.

Not necessarily. That has been discussed at length many times here and elsewhere. PNaCl's approach is interesting and technically has much merit, but it also has significant downsides (startup speed, complexity, size of implementation, reliance on LLVM for something it was not intended for, risks of undefined behavior, PPAPI, etc.). It's technically an impressive technology but also one with fundamental compromises.

Instead, an undeniably superior approach could be to start entirely from scratch, not JS nor LLVM nor anything else, and work to design something truly optimal for the use case we are talking about here (code shipped over the network, to run securely inside a browser, at near-native performance, with fast warm and cold startup). That would look very different from both JS and PNaCl, and could avoid the compromises that both have.


Most of the negatives are heavily focused on implementation rather than design, which I don't disagree with. However, the ability to target any language at a stable bytecode is incredibly valuable... including an implementation of JavaScript itself. A bytecode approach provides a superset of our current status quo.

That said, as browsers act more like operating systems, it makes me wonder if we've somewhat missed the point.

I agree with you about starting from scratch. I think if history is any indication, ultimately we'll end up having to write a new 'web' with very different semantics and design philosophies; goodness knows the old metaphor is starting to creak in a number of problematic ways.


> However, the ability to target any language at a stable bytecode is incredibly valuable... including an implementation of JavaScript itself. A bytecode approach provides a superset of our current status quo.

Not necessarily; it depends on which bytecode. For example, the bytecode in PNaCl, which is based on LLVM IR, is excellent for C and related languages, but not for many other important languages.

Worth reading this about the limitations of LLVM IR as a bytecode: http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/0437...

I've also written a post about the limitations of any single bytecode to achieve all the goals the web needs: http://mozakai.blogspot.com/2013/05/the-elusive-universal-we...


Java has a bytecode for its client embedding; so does Flash ActionScript. This led to trouble. From http://brendaneich.github.io/Strange-Loop-2012/#/27, some pros for JS and cons for bytecode:

* Dynamic typing ⇒ no verification

* Type inference ⇒ delayed optimization

* Would bytecode compress as well?

* Bytecode standardization would suck

* Bytecode versioning would suck more

* Low-level bytecode is future-hostile

Remember Java bytecode backward compatibility hampering language evolution in the generics (erasure) debate and its result. Then they broke bytecode compat anyway.

Flash has two language implementations in it, one for AS2 and the other (Tamarin) for AS3. Only way to be sure about AS2 compat!

In many ways, with JS you have one problem; add bytecode and now you have two.


But asm.js is a bytecode, isn't it? Just with a clever-but-weird encoding that allows backward compatibility.

In the same manner, one could deliver, for example, an x86 bytecode in JS-encoded form. Just encode opcodes as, say, "eax = 1" instead of "\xB8\x01\0\0\0".


No, asm.js is a JS subset. "bytecode" as boosted here would be a non-subset, like JVML to Java source.

Sure, bits is bits. Doesn't matter if you're after gzipped good results. But bytecode hopes spring eternal and the hopers do not want gzipped, minified, Emscripten-produced asm.js. They want a different syntax.


In all the ways that matter, asm.js is a bytecode with a funny encoding and peculiar semantics related to that. Denying it doesn't really help.

We're all wishing for a sane bytecode for a change. It's not just syntax.


You didn't respond to the "now you have two problems" point.

Keeping asm.js a subset of JS avoids all the back-compat-locking/future-hostile-lowering problems. And engines have only one parser to make super-fast. (Already there.)

This is a significant win. What your "sane" means is mostly aesthetics. asm.js already has int32 and uint32 conversions and casts. There is no big semantic gap for vanilla C/C++ source to JS. Typed array views help a lot here; JS's built-in operators and a few helpers in ES6 (Math.imul, polyfillable) do the rest.
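To make that concrete, here is a minimal sketch (the HEAP32/HEAPF64 names follow Emscripten's convention but are otherwise illustrative; the values are made up):

    // One ArrayBuffer models the C heap; typed-array views give int/double access.
    var buffer = new ArrayBuffer(0x10000);
    var HEAP32 = new Int32Array(buffer);    // view for int32 loads/stores
    var HEAPF64 = new Float64Array(buffer); // view for double loads/stores
    HEAP32[8 >> 2] = 42;                    // like *(int*)(heap + 8) = 42 in C
    // Math.imul supplies C-style 32-bit multiply; polyfillable on older engines:
    Math.imul = Math.imul || function (a, b) {
        var aHi = (a >>> 16) & 0xffff, aLo = a & 0xffff;
        var bHi = (b >>> 16) & 0xffff, bLo = b & 0xffff;
        return ((aLo * bLo) + (((aHi * bLo + aLo * bHi) << 16) >>> 0)) | 0;
    };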

The non-vanilla gaps of note are mostly gaps in JS (e.g., int64, uint64, SIMD), which we're filling in ES7 for many reasons.

Shared memory threads are indeed a gap to confine to a checked subset, not push into JS along with data races (VM-level and usercode-level -- this is fatal). We're working on that too, but it's not a "bytecode" issue _per se_.

If you continue to believe that "it's not just syntax", and you have something in the way of semantics other than the above in mind, please state it explicitly.


I was thinking of int types, SIMD, shared memory with an explicit model, and arbitrary jumps. It's nice that some of those are getting fixed.

I would also like to see a bytecode with structures (and alignment control), nice code generation and execution and some form of virtual memory for resources.

What I don't understand is why Mozilla didn't define a bytecode and a to-JS compiler for it. Browsers without support would have been just as slow, but there would have been much more room for evolution.

I'm almost expecting Mozilla to pull a Trojan any day now: define a bytecode that compiles to asm.js and declare asm.js deprecated.


> What I don't understand is why Mozilla didn't define a bytecode and a to-JS compiler for it

If nothing else, Mozilla does not have nearly as much money as Google, and really cannot afford a "tear down everything and rebuild it" approach like PNaCl. An advantage of OdinMonkey is that it is able to reuse a huge chunk of the SpiderMonkey infrastructure. If you look at the diagram in the article with the red arrow, the "Ion-compile" step is the same size as the others, but in reality it is a huge number of lines of code, representing multiple engineer-years of ongoing work.

Much of the work the article describes, such as parallel Ion compilation, was carried out mostly for the benefit of non-asm.js compilation, but because OdinMonkey shares the infrastructure, it was able to benefit from it without having to create a new implementation from scratch.

Beyond the engineering of the implementation itself, asm.js itself is a natural evolution of Emscripten-style JS, which had already successfully demonstrated that it can be used for large applications, and that other non-Mozilla browsers are interested in it enough to have done optimization work for it. This reduces the risk that the design itself is technically broken in some way that wouldn't be apparent until people try to run large production-ready applications, as well as the risk that only Firefox will ever be able to run asm.js code at a decent speed.


> If nothing else, Mozilla does not have nearly as much money as Google, and really cannot afford a "tear down everything and rebuild it" approach like PNaCl

The rest of your post is spot-on, but this part isn't really true. The barriers to "tear down everything and rebuild it" are much more systemic and less monetary (just look at the small JS engine teams that got us where we are today).

Meanwhile, Mozilla Research certainly continues to grow, and Rust is a perfect example of a "tear down everything and rebuild it" project now run by them, albeit in a different domain than JS.


Rust's domain differs from that of JS or PNaCl. The latter are on the web. Rust is AOT-compiled and for systems programming. It's competing with C++, not JS -- and not for safe C++ on the Web (not yet, anyway; Rust2JS and Rust on the GPU are of course on radars due to LLVM and eholk & co. at IU, respectively).

Even with Rust, we don't tear down Unix or DWARF, to pick two examples -- we use 'em for maximum leverage.


int types are in JS and have been since 1995, due to the bitwise logical and shift ops.

arbitrary jumps are a deoptimizer and verifier hazard (Java's verifier had O(n^4) complexity DoS attack demo'd by Michael Franz and his group at UCI). Do not want.

SIMD is coming and wanted in hand-coded JS too, not a bytecode issue per se.

> What I don't understand is why Mozilla didn't define a bytecode and a to-JS compiler for it. Browsers without support would have been just as slow, but there would have been much more room for evolution.

You mix speed ("just as slow" -- wait, we're fast at asm.js input, faster on startup than PNaCl -- did you read the post?) with "room for evolution". I just argued above that having two syntaxes hurts evolution. Please separate speed from evolution and address my argument.

Mozilla is in no position, philosophically or market-share-wise, to "pull a Trojan". Also, my argument stands no matter who is making it. No ad hominem fallacies, please!


> int types are in JS and have been since 1995, due to the bitwise logical and shift ops.

That's like saying int64 is a subset of float64 because you can use two floats to encode it.

> arbitrary jumps are a deoptimizer and verifier hazard

True, this would be one of the decisions to be made when designing a bytecode format.

> You mix speed ("just as slow" -- wait, we're fast at asm.js input, faster on startup than PNaCl -- did you read the post?)

You misunderstood. Asm.js running on a browser that doesn't have support for it is just as slow as the output of a bytecode-to-JS compiler would be. And for browsers that do have support, both asm.js and a hypothetical bytecode would behave the same.

The major differences with a bytecode would be requiring two "executables" and a better semantic model.

Also, I'm not necessarily defending PNaCl itself, nor did I even bring it up.

> No ad hominem fallacies, please!

I'm not sure where you got that, the Trojan comment was meant positively.

I think it would be nice if Mozilla introduced a bytecode (superset of asm.js semantics), once asm.js itself was accepted.


> That's like saying int64 is a subset of float64 because you can use two floats to encode it.

No. (x>>>0) is uint32(x), (x|0) is int32(x). Please read http://asmjs.org/spec/latest/.
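To illustrate with made-up values (plain JS, runnable in any console):

    var x = -1;
    x >>> 0;                // 4294967295 -- uint32(x)
    x | 0;                  // -1         -- int32(x)
    3.7 | 0;                // 3          -- ToInt32 truncates the double
    (0xffffffff + 1) | 0;   // 0          -- and wraps mod 2^32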

> True, this would be one of the decisions to be made when designing a bytecode format.

Indeed, no clean-slate design, we have learned from the past. Dropping goto leaves less to change in asm.js. You need to produce the missing semantics or else we're just arguing syntax.

> Asm.js running on a browser that doesn't have support for it is just as slow as the output of a bytecode-to-JS compiler would be.

No, asm.js on browsers that do not use AOT compilation is already faster than non-asm.js for the workloads of interest (compiled from C/C++, etc.).

Anyway, speed was not the issue. => evolution is harder with two syntaxes.

Trojan is usually pejorative -- beware Greeks bearing gifts and all that ;-). No worries, but really, it does not make sense to do a second syntax for asm.js, at least not yet. Maybe in the farther future when JS has reached some fixed point.

/be


> No, asm.js on browsers that do not use AOT compilation is already faster than non-asm.js for the workloads of interest (compiled from C/C++, etc.).

That would be a bytecode-to-asm.js compiler. Hence, no difference besides distribution.

I was not aware so many features are getting added to JS for the sake of asm.js. Other than structure/array abstractions (like LLVM's), which largely only improve debugging, I can't think of important missing features that can't be fixed with compilers or extra APIs.

The only major objection remaining is lack of elegance (which is indeed largely a syntax/decoding argument). I guess browser environments are doomed to be ugly and quirky.


> That would be a bytecode-to-asm.js compiler. Hence, no difference besides distribution.

Your "That" referred to something in my sentence "No, asm.js on browsers that do not use AOT compilation is already faster than non-asm.js for the workloads of interest (compiled from C/C++, etc.)" -- but I have no idea what. Please use a noun. What "That" did you mean?

I was not talking about a bytecode-to-asm.js compiler. I said asm.js code (output of Emscripten, hand-coded -- whatever typechecks) runs faster in other browsers such as Chrome than non-asm functionally equivalent code, even without AOT compilation. But as the blog post shows, AOT in Firefox is even faster at startup (and see the link on "throughput" for other wins).

Missing features are not being added to JS for the sake of asm.js. I clearly wrote we are adding SIMD, int64, etc. for hand-coded JS users. Ecma TC39 is not only concerned with "compile to JS" use-cases, we look at apps, libraries, and compilers.

For some reason, at least twice now, when I've written X, you've read Y or !X. Not sure why, but I hope this message is clear, at least!

/be


> Please use a noun. What "That" did you mean?

If a hypothetical bytecode were designed, a compiler from this bytecode to asm.js would be just as fast in browsers without support for this bytecode (or for that matter asm.js) as asm.js is currently.

One would compile C/C++ to this bytecode and either ship it directly (to browsers that support it) or compile to asm.js and ship that (to browsers that don't support it).

This process I described is precisely how Dart works and while I don't particularly like Dart itself, I think its compilation/distribution mechanism is nice.

It's possible to do this later, after (and if) asm.js becomes popular. And it would even be possible to eventually compile JS itself to this bytecode.

> But as the blog post shows, AOT in Firefox is even faster at startup (and see the link on "throughput" for other wins).

I am not immediately concerned with the merits of asm.js as implemented in Firefox at the moment.

> Missing features are not being added to JS for the sake of asm.js. I clearly wrote we are adding SIMD, int64, etc. for hand-coded JS users. Ecma TC39 is not only concerned with "compile to JS" use-cases, we look at apps, libraries, and compilers.

Sure, I guess. These features just seemed to me more important for asm.js than generic application JS (as opposed to, say, macros).


SIMD is important to hand-coders, we get this (so does Dart, and sauce for the goose...).

> It's possible to do this later, after (and if) asm.js becomes popular

+1, and also when JS is more "done" (ES7 or 8; http://sweetjs.org/ + SIMD + value objects + minutiae is the nirvana).


JVML is not a bytecode. The bytecode syntax is just a disguise. JVML is a high-level language that prescribes a certain object / method / inheritance model. Methods are associated with objects according to specific vtable / vinterface rules.

OTOH, asm.js is defined in terms of value types + function pointers. Just call the function pointer with the right arguments. Bring whichever objects / closures you like.
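A minimal sketch of that style (module and function names are made up; in asm.js the table length must be a power of two and call sites mask the index):

    function Mod(stdlib) {
        "use asm";
        function inc(x) { x = x|0; return ((x + 1)|0); }
        function dec(x) { x = x|0; return ((x - 1)|0); }
        function call(i, x) {
            i = i|0; x = x|0;
            // indirect call through the function table; & 1 masks the index
            return funcs[i & 1](x)|0;
        }
        // table declared after the functions it references, length a power of two
        var funcs = [inc, dec];
        return { call: call };
    }
    var m = Mod(this);
    m.call(0, 41);  // 42 (inc)
    m.call(1, 41);  // 40 (dec)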

PS. Gripe of the day: 64-bit computing is here (even ARM supports it) and asm.js doesn't seem to be prepared.


Lack of 64-bit ints is a JS problem, asm.js gets them via ES7. See

https://bugzilla.mozilla.org/show_bug.cgi?id=749786

and the value objects strawman under construction for ES7.


> No, asm.js is a JS subset. "bytecode" as boosted here would be a non-subset, like JVML to Java source.

Sorry for quoting Wikipedia, but bytecode is just a form of instruction set designed for efficient execution by a software interpreter.

Maybe I'm mistaken on this, but from reading about asm.js I got the impression that asm.js-aware browsers use a different approach to asm.js code and treat it more like a weirdly-encoded bytecode, not as ordinary JS source. Or am I misunderstanding things?

If so, asm.js is a bytecode. Whether there's a correspondence between it and other languages doesn't matter for determining if it's a bytecode or not; that's another (useful, but unrelated to being a bytecode) property.

> They want a different syntax.

I don't think syntax matters that much, it's mostly semantics. Probably.


Actually, asm.js is just JavaScript. Basically, Mozilla looked at what sort of JavaScript code the different JITs already handle really well, and made a specification out of it. So even in Chrome, asm.js will run very efficiently. Mozilla figured out a way to write JavaScript code that makes type information easy to extract, which in turn makes it easy to AOT-compile. For instance, the following code:

    function asmjs(i) {
        i = i|0;
        return (i + 1)|0;
    }
is valid JavaScript, and you can easily write this in your own programs. The "|0" means that the value will be converted to a 32-bit integer, because that is what the JavaScript standard specifies for the operator. As an optimization, a compiler can use this as a type annotation, kind of like writing "int i;". This is what asm.js is in a nutshell, and why it's so easy to implement a special compiler for it.
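For completeness: a function only validates as asm.js when it sits inside a module that opens with the "use asm" directive. A minimal sketch (the module name is made up):

    function MiniModule(stdlib) {
        "use asm";
        function add1(i) {
            i = i|0;              // parameter type annotation: int
            return ((i + 1)|0);   // return type annotation: int
        }
        return { add1: add1 };
    }
    var mini = MiniModule(this);
    mini.add1(41);  // 42 -- plain JS everywhere, AOT-compiled where asm.js is recognized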


You are replacing the "bytecode" objection, which is about syntax, with your own non-objection equating asm.js with a bytecode like JVML. I'm happy you're ok with asm.js, but those who are not, and who demand "bytecode", do care about syntax first.


What about the JVM, if Oracle can solve the security issues? Back to the future?


Forget it. JS VMs in browsers are required, JVMs are dead weight. Reversing the trend against the Java plugin, which accelerated due to malware (Brian Krebs said Java was the #1 paid-for malware vector some years ago) but which began with declining plugin share re: Flash, is very unlikely. #1 reason: mobile -- plugins don't fly there, not just because Jobs was mean to Flash.


That's kinda my point as well. Say we wipe the slate clean: nothing for scripting exists in the browser.

Some kid from Zambia develops JavScript 2.0 in two weeks.

An entire Google team works out a specification for LLWM (Low Level Web Machine), and it's pretty close to LLVM. They take how long to implement it?!

People need to get their browsers scripted, so they look around shopping for a new language. Oh, cool, the awesome LLWM spec is out there. Wow. It's got all the things they want. Let's wait....

A month passes. People look again but no LLWM. On the other hand there is this JavScript 2.0 that kind of works. It's ugly, but Mark took a look at it and he uses it to make dancing kittens. In 3D (i.e. the picture just rotates around an axis, using CSS).

Another month passes. Is LLWM done yet? Hmm, the clients are itchy, they want their browser scripted. Maybe dabbling in that JavScript 2.0 doesn't sound so bad. A third month passes. LLWM is still being worked on. Your clients have employed Mark and dumped you. Yeah, life is cruel, and JavScript 2.0 is crueler - integers overflow when adding two numbers with more than six digits each, it confuses 0 and o, and there are no local variables, just globals.

...

A fourth month passes. LLWM is still being worked on. JavScript 2.0 sucks but everyone tolerates it. Also there is JavScript 2.1 coming out that allows variables to not be in UPPERCASE. And there is a nice library for dancing kittens called dance.jv2

...

A year passes. LLWM ships. JavScript 2.123 is out and it's about the same in terms of speed and features. Sure there are a few warts here and there, like lack of static typing, but overall it's quite solid.

Compare this situation with many other examples of Worse is Better.


How long from JavaScript's "it only took 2 weeks" until now, though?

Is this an example of "worse is better" innovating at a faster pace? In fact, JavaScript performance moved at a glacial pace until recently, and it took enormous investment to get there.

I think it is fair to say that if someone started with today's web/mobile requirements and designed a language from scratch to meet performance, latency, and memory requirements as well as portability/cross platform execution, it probably would not take as long as Javascript did to reach the current levels of performance.

That is, you're comparing 15+ years of Javascript JIT engineering activity with what, 2-3 years of PNaCL activity by a much smaller team?


Are you sure it's a much smaller team? Do you have any data on how many people are working on PNaCl or have been?

Opera had fewer than 5 people working on their JIT, I believe. I don't think Apple has had a particularly huge JIT team either...


>I think it is fair to say that if someone started with today's web/mobile requirements and designed a language from scratch to meet performance, latency, and memory requirements as well as portability/cross platform execution

True, but it's also true that developing a simpler, cruder, and slower but Fast Enough alternative to said language would take less time, and by the time said alternative was developed, the two would be at feature parity.

> That is, you're comparing 15+ years of Javascript JIT engineering activity with what, 2-3 years of PNaCL activity by a much smaller team?

No, I'm comparing asm.js engineering (if you can call it that), to (P)NaCl.

JS performance improvement was a weird road to take, but the JS of the old days wasn't the JS of the new days. Its use case was significantly different, and Google wanted to enable new use cases so it could run its Gmail program. So they did.


Enormous investment? See my estimate (generous) above.

Contrast with the Dart (nee Dash) investment, where a team of at least 60 now labors on a language+VM that cannot cross the chasm to other browsers in any foreseeable future where there are multiple browsers with disjoint source that have significant market share.

Evolution doesn't care about aesthetics (my pelican slide from JSConf.{us,eu}). It doesn't care about wealth-effect follies that can't be standardized among developers without a compile-to-JS plan that undermines the native-VM plan. It does not care about roads-not-taken in the past, so long as the road from here is clear.

Bet on evolution.

/be


Bet on punctuated equilibrium. Both PNaCl and asm.js could be follies if you consider mobile games. There is no incentive for someone developing games for consoles or phones to use either of those technologies.


Game devs code to metal, use C++ and OpenGL. This is why Jobs went from web apps to allowing native in iPhone 1 era.

Both PNaCl and Emscripten (or Mandreel -- the relevant comparison, not asm.js which is a different category) work on such source. This is how Firefox OS runs games (Disney Where's My Water, many others). Cross-compilation works.

(Punctuated equilibrium is a bio-crock.)


Coding to metal isn't just about source language, it's about optimizing system level performance. Top tier game devs use vTune, GPU profilers, system level analyzers, to maximize overall system performance for a given workload. Developers at studios like Naughty Dog, Bungie, DICE, Infinity Ward, et al don't just delegate to the C compiler and call it a day.

Leaving aside casual games, which are mostly not performance sensitive, top tier game development is very high risk and expensive. One reason why developers like consoles and the iPhone/iPad is absolute predictability when it comes to target platform. Even branching over into the Desktop PC, game tuning requires a ton of testing on a huge matrix of platforms, chipsets, drivers, and other configurations, all of which is a big expense, as well as a big support cost.

What is the motivation for say, Infinity Ward to port Call of Duty to asm.js or PNaCL? Most of the people who would actually buy it will do so on a console, or through something like Steam. You'd be asking them to add an immature and unproven technology into the mix that sacrifices multithreading, or drops a big chunk of performance on the floor, and in return, add back millions of frustrated web users who will be making customer support claims.

I love the web. I have tons of "native" apps on mobile that don't really deserve to be native apps at all and would work equally well as a URL to a web site that doesn't force an install. But --

Games are not webby. They are not the web, and treating the browser like a C virtual machine that throws away pretty much most of the browser's machinery in favor of just OpenGL bindings is not what the browser was designed to do, and the fact that it does it at all is amazing, but it is not helping the web.

I think both Chrome and Firefox would do the world a much bigger favor concentrating on making the other parts of the browser rendering engine a lot faster. JS performance is not the primary reason that Web apps feel janky compared to native ones, the entire development model for web development is a minefield full of performance hazards.

For years, the JVM had huge performance advantages over JS. It had native code interfaces, it had off-heap non-GCed memory allocation capability, it had high performance, and yet Minecraft is pretty much the only success story. Now why is that? And why can't NaCl, which has a rich father (Google) behind it and the world's largest browser marketshare of hundreds of millions, convince many developers to port to it? You think Emscripten ports are easier than NaCl? The best explanation is that the return on investment in making a game that is shoehorned into the browser is not worth it.

If given the option to buy a game on Steam, or buy it via Chrome Web Store or Firefox store, I would buy the Steam version. I bet the majority of people reading this would do the same.

I see that casual, simple mobile games might be suitable for this, but that's a problem for Firefox OS and ChromeOS to solve. Most game devs will continue to target iOS and Android until Firefox OS gets a non-trivial marketshare. Crossing that chasm is going to be hard.

To make a long post short: I am frustrated that native is taking over in the non-games space, and that the efforts by both Google and Mozilla to get at the real heart of the matter (making jank-free, buttery-smooth mobile apps easy for web developers to build) are moving far slower than they need to.

All this games stuff is a huge distraction.


> concentrating on making the other parts of the browser rendering engine a lot faster.

The fallacy of the excluded middle, yawn.

We're working on perf all over, so are Chrome folks. But unlike Google, we're not building three VMs requiring their own toolchains. We can't afford to, nor can others building browser engines, and developers are not buying it from what I can tell.

> All this games stuff is a huge distraction.

True in deep ways (have to keep my kids away from it or their brains turn to mush).

We agree that hand-coded JS needs pause-free GC and JITting. Pause-free GC is doable, I've advocated it. Top engines' GCs are still optimized for throughput.

Pause-free JITting is harder, and less theoretically tractable. Whack-a-mole is unwinnable, AOT is beating JIT here.

We follow our nose on this one. If AOT == JIT in some utopian future, great. Otherwise, misspeculations and phase changes happen, and using background threads on other cores to recompile can help, but without precog JITting there will be jank and startup pain.

/be


Well, I agree on the performance (speed) side: VM out, subset in. But I feel abandoned on the memory footprint side.

Objects are just huge! Asm.js promises something more like structs, I think. Now making THAT available to JS programmers would be a great help.

I really don't want to go back to C just for faster JS. Let's use the knowledge of the last year to put some of the asm.js goodness into JS hands. LLJS is likely not the answer, judging from a recent jlongster post. Rather than a new language, simply exposing parts of asm.js would be better received by the JS community.

   -- Owen


Typed objects (ES7) offer struct-like packing and heap repr efficiency, independent of asm.js:

http://wiki.ecmascript.org/doku.php?id=harmony:typed_objects
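Roughly what the strawman sketches (StructType and the float64 descriptor are the proposal's names; none of this has shipped):

    // Packed struct-like storage instead of one heap object per point
    var Point = new StructType({ x: float64, y: float64 });
    var Line  = new StructType({ from: Point, to: Point });
    var l = new Line({ from: { x: 0, y: 0 }, to: { x: 3, y: 4 } });
    l.to.x; // 3 -- fields live inline in one contiguous allocation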


* The fallacy of the excluded middle, yawn.

You're an executive of Mozilla. I think you can join threads without being condescending.

Whether developers are buying PNaCl, asm.js, Dart, or even JavaScript as a major platform for game development remains to be seen. The majority of game-dev resources are being spent on native, and those that are developing for mobile are doing so natively. If JavaScript is lucky, it might one day be as popular as Flash for doing games; it hasn't even reached 1/100th of the success of the Flash VM in that regard.

There is a lot of time spent on HN arguing over JS vs Dart vs NaCl; meanwhile, the consumer experience is being taken over by native, and it is not because of JIT performance. Failure to see why developers choose native, and why consumers choose it, is I think a tragic tunnel vision.


More executive charm school needed, clearly.

You wrote a tl;dr piece with a whopper in the middle asserting that we should all go work on other perf than JS perf that is attracting game publishers, big ones -- while at the same time, Google is pushing *NaCl (both with and without the P) / PPAPI as the way to get native code including those very games ported to Chrome and Chrome OS.

I think that's at least the false dilemma fallacy I identified, if not something more like a conflict of interest, and I'm gonna say so. Bluntly.

I frankly don't care whether game devs ever hand-code in JS or Dart. They'll continue to use C++ for good reasons (until the Rust-pocalypse). GC is never free, nor is JIT. C++ and other AOT languages endure.

Really, it's not for us to over-sell JS and require rewrites.

And again, Google apparently gets this market need because they have been selling NaCl against it (with limited success). So it's not out of bounds for JS suddenly, in spite of your tl;dr protests.

/be


It's incorrect to view Google as a single entity. There are a large number of engineers, each with their own ideas of what should be done, and Google gives people a lot of resources to explore things. A claim that Google is pushing something can only really be understood in the context of how much company-level priority and head count is put on it vs. the individual loudness and passion of the teams doing their thing.

Some very small teams at Google who work on stuff as 20% side projects are so vocal and active with the external community that they give the impression of huge focus and investment. Others, while mostly quiet, give the impression of almost no investment while large groups of people churn away.

My interests are aligned with preserving and maintaining what is good about the Web: federation, transparency, frictionlessness, indexability, composability, linkability, etc. I am more focused on avoiding this future (http://idontwantyourfuckingapp.tumblr.com/). To me, the trend towards appification and the install-everything model is something we should not be chasing. Ideally, web apps should trend towards less code execution, not more.

The first two decades of the Web were mostly about content. Now we're turning it into an entertainment device and a gaming platform; it's this generation's television. Everyone is chasing off into the bandwagon/rathole of trying to be more like iOS apps, because of the perception that that's how you make money. That gravy train will eventually saturate and become a dead end. It's short-term thinking. Skate towards where the puck will be or should be, rather than chasing Apple down the rink trying to catch their lead skater.

Long term I want the Web to "win", but not by changing into a platform of binary opaque blobs like native.


Great comment -- with you all the way on this one.

I use "they" for Google on purpose, it's a fleet of ships for sure. But I also confer with my VP Engineering, Chrome counterpart regularly and know (mostly) what projects are official and intentional. :-)

/be


> We're working on perf all over, so are Chrome folks.

So where are the blog posts about buttery smooth touch interactions on Firefox OS? Where are the blog posts about saving/loading large arraybuffers into indexeddb without it crashing the browser / taking a week and a half on mobile? Because there are asm.js blog posts every other day. I'd love to read about how regular browser stuff is getting leaps and bounds better, but I don't believe that it is.


You are complaining about bugs to fix, in particular typed arrays and indexeddb (I hate typing that). We did fix XHR vs. typed array bug here:

https://bugzilla.mozilla.org/show_bug.cgi?id=866431

but you have a point about indexxxeedddbbb :-D.

Smooth touch interactions are supported better by Firefox OS with touch events, do you have a particular bug in mind?

Fixing bugs and blogging about them would take all blogging energy for little gain. We fix bugs without enough positive noise, but I think the main thing is to fix bugs.

OTOH, Emscripten/asm.js work (which, silly exaggeration, is not blogged about every other day) is not a bug fix. It's a bigger deal, and it needs explication (especially because some FUD about how it entailed new VMs still lingers in the twittersphere).

Also, we're fastest on Emscripten output due to AOT. That is news, news! I say ;-).


I think that's the thesis of the "worse is better" approach.

http://www.jwz.org/doc/worse-is-better.html


Not to pick nits -- it's not the thrust of your argument -- but web browsers have a complex relationship with worse is better today.

Yes, browsers started as a hack and it's reasonable to attribute a lot of their success to worse is better "survival characteristics". But having been in the trenches for a few years, I can say that these days, we operate primarily on the MIT approach. For example, WebSQL is worse-is-better; it was rejected in favor of IndexedDB, which is the "right way" (and was a ton more work). At least at Mozilla we were /obsessed/ with making the "right" api; a hack that we felt placed burden on users, like Unix's solution to the lusering problem, would be seriously looked down upon.

Indeed I'd argue that the fact that so much work happens in standards bodies may be browsers' biggest concession to the "right way".

There are, of course, plenty of modern counterexamples to this too. Like I say, it's a complex relationship...


True. I'm not making it look like Worse is Better ALWAYS wins. But it is funny how often it does, contrary to common conceptions.

The 'fight' between (P)NaCl and asm.js is by no means over. It hasn't even begun. I have no clue if NaCl is popular for games; I've seen some on the Chrome Store, but they always felt weird.

I just like to remind myself that making something is often more important than making it right.


I'd say that there isn't any relevant business model that could bring developers to do impressive work and easily reach their audience.

That's how hackers can win in the end. Business models get the market stuck somewhere, but you can't prevent hackers from finding a difficult, hard path to their goals.

asm.js is an impressive tech, yes, but as always, I'm curious whether it can really attract more devs into releasing gaming software.

The real advantages of asm.js are security and not requiring the installation of something new like NaCl.


"The real advantages of ASM.js is security" - isn't NaCl sand-boxed?


I'm not sure, but I think it's not a real sandbox: the code is still executed like any native code; it's just thoroughly checked for instructions that would try to access things that are not supposed to be accessed. It's a kind of compilation that does very strict security checks, and maybe rechecks that the executable file hash matches.

NaCl runs at true native speed, but the fact that it's still a "beta product" shows that it's not 100% safe. Well, nothing really is 100% safe, but if NaCl isn't entirely safe, a NaCl vulnerability could be quite disastrous. That's where the Google beta culture lags behind.

Security is still the most important feature of any massively distributed product. If asm.js manages to reach a mere 2x slowdown, that's great, because it does so in the context of an interpreted, thoroughly tested JS engine, which is very much more secure. NaCl introduces new standards and compilation methods, which would require a lot of testing and reviewing before being deemed really secure.

There is a balance of pros and cons, but in the grand battle of big innovations, keeping things secure is a hard problem, because you can't always introduce new things without creating new risks. asm.js relies heavily on the fact that it uses an old, crap language, but since JS engines are heavily optimized, it makes the security problem disappear completely because JS is an old language, so all security problems are already known.

It's a problem of how developers practice their job, what they use, and how it is deployed and run by users. Innovation on a massive scale and security never mix well. Also, execution speed and security don't mix very well. The fact that Android uses Java for its apps is another demonstration of the easiest path to security.


> I'm not sure, but I think it's not a real sandbox: the code is still executed like any native code

It is actually quite different, read: http://static.googleusercontent.com/media/research.google.co...

There are in fact several layers of sandboxing, which dramatically limit both the ability to create exploits and the damage that a successful exploit can cause.

To be clear: I work for Mozilla, and I think asm.js is a great project with a lot of potential for developers and the web; however, Google did a lot of innovative work to build a great security foundation for NaCl, which was not a real goal for asm.js AIUI.

> a "beta product" shows that it's not 100% safe

There is no correlation here.

> in the context of an interpreted, thoroughly tested JS engine, which is very much more secure.

> but since JS engines are heavily optimized, it makes the security problem disappear completely because JS is an old language, so all security problems are already known.

The continuing drive for JS performance has led to new avenues for serious exploits, some of which are poorly understood and have only been partially mitigated by widely used engines. For just one example, see http://www.matasano.com/research/Attacking_Clientside_JIT_Co...

> The fact that Android uses Java for its apps is another demonstration of the easiest path to security.

I lol'ed.


Thanks for the clarification.

> I lol'ed.

Then I don't understand their choice. I know there are many Java devs, but I think delivering an API through Java shields against many attack vectors.

I know Java has its vulnerabilities, but it's easier to cover those than to design an OS that is secure from a C/C++ point of view.

Using old techs that many devs already use has many advantages; one is that the security problems won't be new.

And to be clear: I hate java.

> There is no correlation here.

I was just saying it's better to march into known territory, i.e. existing techs, than to create new techs where you don't know who will find an exploit first: the white hat or the black hat. Existing techs are like old guys you can trust because they've been around for a long time. I guess the sandboxing is very nicely done, but a tech is not mature until it has seen at least a little mainstream use, so that security people can look at its parts more closely.

I'm not a security expert anyway; I'd love to see NaCl used more, but computer security will always make things suck one way or another.


The sole reason for basing Android on Java was its popularity, which granted it good tools and familiarity. The creators said as much several times.


"And to be clear: I hate java."

Most people probably like the JVM rather than Java.


The JVM contains decades worth of knowledge, some of which was developed for languages like Self.


I don't see this as "good job Mozilla"; I see it as Mozilla using their muscle to keep everyone stuck on JavaScript. If Mozilla had been willing to adopt the elegant approach then it would have succeeded. This is a triumph of politics, not of hacking - or, to be more charitable, of hacking around politically-imposed constraints rather than fixing them.


From my throne of skulls (https://t.co/CJUs7E3x43), I'm flexing and posing.

Muscle, that's a laugh! Who is the big company with all the money in the world to throw at developers and publishers to use a plugin API supported only in their browser, and still the devs and pubs say no? We heard from another such developer just this week.

s/politically-imposed/physics-imposed/ -- fixed it for you.


I know very little about compilers, low-level optimization, or any of these topics beyond a rudimentary understanding of basic computer systems. It speaks volumes that Mozilla is able to explain some of these concepts in ways that I sort-of grasp, even if the specifics mostly go over my head.

Excellent, excellent article. I look forward to more improvements to asm.js and the future of Javascript. Maybe one day I will actually learn this shit.


> Maybe one day I will actually learn this shit.

I'd strongly discourage it. asm.js is the most horrible hacky thing. It gets the job done, but there are no good lessons to learn from how it works. Low-level and compiler-backend optimization are the most painful programming problems. It's optimization for a ton of bad reasons. It's like sending a car into space. There's no way it's a technology you can reuse later for other purposes.

I'm sure asm.js must be quite cool to just use, but as for understanding how it works, I fear it could get hairy very quickly.

That's why I have my doubts about it. If you run into a problem with asm.js and there are not enough guidelines, you might run into 10 walls, work your way through them, and still run into walls that no one else has solved before you.


> asm.js is the most horrible hacky thing. It gets the job done, but there are no good lessons to learn from how it works.

There's no denying this: asm.js is an awful hack, but it is so for a good reason.

You're just going to have to accept the fact that JavaScript will be the only language supported by all major web browsers for the near future. No-one really likes the situation but there's nothing that can really be done about it, at least in the short term.

So if you're doing software for the web browser, it must be compatible with JavaScript. That is a fact you can't escape. And as much as asm.js is a dirty hack, it retains JavaScript compatibility while being a more sensible intermediate representation for compilers.

You're not supposed to like (or dislike) asm.js, it's intended to be written and consumed by computer programs, not humans.


>No-one really likes the situation

That isn't true. Mozilla likes the situation. That is why they are pushing Javascript so damn hard: as the only language for the web, as the only language for apps, as the only language your phone can run, as the only language your tablet can run and so on. If Mozilla disliked Javascript why would they create an operating system that forces you to use Javascript? None of the backwards compatibility concerns apply there.

It's just politics and a power grab. If all user-facing software has to run inside the JavaScript sandbox, then the people that control that sandbox are given massive power. That is why Mozilla and Google want a web-app future. It hands them the keys to everything. In a web-app world, if, for example, Microsoft brings out a new input device (like Kinect), nobody will actually be able to develop software for it until the gatekeepers of the 'web standards' decide to design an API for it. Since those gatekeepers are mostly Microsoft's competitors, they would likely block it if they saw it as any kind of threat, or at least until they had developed their own competing version.

Javascript/web standards based OSs will be a huge roadblock to innovation.


> That isn't true. Mozilla likes the situation. That is why they are pushing Javascript so damn hard: as the only language for the web, as the only language for apps, as the only language your phone can run, as the only language your tablet can run and so on. If Mozilla disliked Javascript why would they create an operating system that forces you to use Javascript? None of the backwards compatibility concerns apply there.

What options does Mozilla or any other individual company have here? If they were truly happy with JavaScript, would they be putting effort into asm.js and Rust?

Mozilla isn't really pushing JavaScript as the "only language for the web", but they probably are recognizing the fact that they are unable to kill JavaScript or even provide an alternative, because that would require all major browser vendors to co-operate (which is not impossible) and all the users to upgrade their browsers (which seems to be impossible, judging by the popularity of old browsers like IE6 and IE7).

As for Firefox OS, it's perhaps a slightly awkward choice, but they are trying to leverage existing web-based technologies. Starting from scratch and introducing a new environment with a new programming language would set them back in time to market and in the money required to develop the system.

I haven't seen anyone from Mozilla saying out loud that they love JavaScript and they think it's the way forward.

> It's just politics and a power grab. If all user-facing software has to run inside the JavaScript sandbox, then the people that control that sandbox are given massive power.

While this is not entirely impossible, I think the real reason is that there is a JavaScript legacy and abandoning it would be expensive. And Mozilla has invested years of effort and tons of money in making the "web platform" better.

Your impression of this situation is different from mine, and neither of our opinions really matter. You should probably refrain from pushing your personal views as if they were facts.


We work well with Microsoft on Web standards, they are at least as engaged now as their competitors. We are actually having problems getting the missing device and system APIs standardized because Google wants to wait and do its own Chrome-only thing for a while, and Apple is as usual missing in action.

Samsung for Tizen was our best bud, and added battery status, network info, and a few others to WebKit, but then Blink forked and set the cat among the pigeons.

Browser game theory makes strange bedfellows.


"In a web app world if, for example, Microsoft brings out a new input device (like kinect), nobody will actually be able to develop software for it until the gatekeepers of the 'web standards' decide to design an API for it."

What stops Microsoft (in this example) from implementing Kinect support in IE as a new API, proposing it as a standard, and evolving the interface like every other web API - that is, in parallel with the standardization process?

If it's any good, Google and Mozilla will scramble to implement it to prevent users from defecting to IE.


Firstly they would be pilloried for having a "proprietary" browser and being "anti-standards".

More importantly, IE can't run on your Firefox OS phone or your ChromeOS chromebook or your iOS tablet. 'Defecting' to IE would mean buying entirely new hardware (and in the case of phones, probably a new multi-year contract). That isn't an accident. The best way to ensure your browser remains relevant (i.e. that you retain veto power over any new standards) is to create OSes that can only run a single browser. You then have millions of users locked into your browser to use as bargaining chips.

A world where all user-facing software has to target a single standard gives vast power to the owners of that standard. Very few people seem to be considering this problem. No matter how benevolent you think Google or Mozilla are today, it is madness to place so much trust in them. Their interests do not align with yours, and there are very few checks or balances to the system: end users or independent developers have basically zero say in the web standards process.


You are absolutely correct in your analysis, but you seem to have ended up with a very strange conclusion. The entities who participate in the formulation of open standards do have immense power, this is correct. As opposed to...people who write proprietary software having immense power. Are you actually arguing we return to a closed-door model where individual companies like Apple or Microsoft are the gatekeepers of innovation, as opposed to a process that is at least semi-public and can theoretically be contributed to by anybody?

Very strange conclusion.


> No-one really likes the situation

I actually like it. Write once, run everywhere has become true. Of course there is always room for improvement, but that too is happening (and at a really fast pace, too).


> You're not supposed to like (or dislike) asm.js, it's intended to be written and consumed by computer programs, not humans.

The fact that it's executed through layers of an OS, a web browser, a JavaScript parser, and an asm.js parser is just worrying. What about libraries? What about opening files? What about access to the hardware?

> There's no denying this, ASM.js is an awful hack but that is for a good reason.

The whole idea is to get around the fact that companies will lock their systems and force certain technologies and languages down the throats of programmers who have other solutions and ideas. You can't completely avoid the restrictions and not run into problems. I don't call that a "good reason". You can solve those problems by finding alternative techs and platforms to reach an audience.


If you want to reach everyone who has a modern web browser or equivalent (modern web engines are in many devices now), then the rolling web standards party is the only "alternative tech" outside of native stacks.

Businesses that can afford to port or rewrite for every platform out there are rare. The web replaced Visual C++ / RogueWave windows apps for a reason.

Mobile pulls back toward per-OS apps, but only to a low degree, really: iOS for "elite consumers" (I didn't make that up), Android in several ways: native for games, some Java apps, and a long tail of PhoneGap-wrapped web apps.

Common among the mobile app approaches are web(view)-based apps and native apps. Emscripten/asm.js handles the latter by cross-compilation. The plugins escape valve for "alternative tech" is gone on mobile, so this is likely to be "it" for a while, IMHO.

The web+mobile composite is an ecosystem. Energy is never free, so apart from enthalpy beating entropy locally, tech that flows downhill usually wins with developers and publishers. Game devs/pubs porting their C++ catalogs to JS via Emscripten is an example of "flowing downhill".

What about opening files? The Web has lots of APIs, more all the time. Firefox OS has local storage, and it has only the web as its platform.

/be


I think asm.js is the greatest runtime design that I have seen in the past 20 years. It gets the job done cheaply. That is the ultimate goal for any runtime system. It is not designed for humans to write asm.js code manually.

On the other hand, most programming languages are full of hacks. Even a simple language like C has all kinds of weird behaviors on different platforms. Someone needs at least 1-2 years of real-world experience before becoming even a minimally professional C developer. asm.js was developed faster than any other system I can remember.


People are using C#, Java and C++ just fine without understanding MSIL, bytecode, or assembly. asm.js might be a hacky thing, but it may be the only way to defeat the dominance of a certain hacky scripting language.


Regardless of how you feel about the political implications of asm.js, this is a fascinating technical article on the challenges of implementing a world-class Javascript interpreter.


It's funny that, after all, native code is still what devs want. Even on top of a super JS engine JIT I-don't-know-what.

There was an HTML browser, then a scripting language, and it turned out to be the easiest, hackiest way to massively deploy native-fast software, and it's done through a browser.



