Asm.js is intrinsically superior to Native Client (and yes, I do see them as competing technologies) and will inevitably push NaCl out of the market for one simple reason: even if the browser doesn't explicitly optimize for asm.js, the code will still run. It may not be usably fast, but something will happen. Native Client, by contrast, leaves you dead in the water if your user base doesn't support it. And the trend so far is JavaScript getting faster and closer to native speed. Google must eventually build asm.js optimizations into Chrome. What's their alternative strategy? Ignore asm.js and let Mozilla tout Firefox's JavaScript engine as faster? NaCl's writing is on the wall and Google knows it.
I'm not sure "intrinsically superior" is the right way to think about this. If you were asked for a mechanism to get web clients to execute stuff "fast" that was generated from a traditional C-like compiler front end, you surely wouldn't have designed asm.js. You'd probably have ended up with something more like the JVM or PNaCL.
It's true that asm.js wins for compatibility, for obvious reasons. And that's a powerful advantage. And it works in the same runtime that existing web apps do, which is likewise a win; Java and especially NaCl have complicated interoperability paradigms. They are not just "a faster web page," which is what 90% of developers really wanted to begin with.
That said, the remaining 10% really do want something more. They want to write targeted, architecture-specific assembly, perhaps. They want access to syscall-level abstractions like threads and true sockets (buffer sizes, Nagle settings, timeouts, etc.). And asm.js has nothing for these people.
And despite its simplicity, asm.js is still mostly just a toy. It's got one working compiler backend that isn't packaged sanely anywhere. I've three times now decided to get serious and build Emscripten from source, and three times given up: it lags LLVM releases and it has glitchy behavior. Compare that to clang and especially gcc, which build robustly and cleanly everywhere and come with elaborate test suites. This isn't a toolchain I'm about to start betting a company on, for sure.
Give it time. I like asm.js too. But it's never going to be The Answer to remotely deployable "native like" code any more than Java was.
I'm not sure "intrinsically superior" is the right way to think about this.
It's fast enough. It's compatible with the existing web. It's platform independent. It runs in any standards-compliant web-browser. That's four wins right there.
It's fundamentally superior. End of discussion.
> That said, the remaining 10% really do want something more. They want to write targeted, architecture-specific assembly, perhaps.
If they do, they can go do it somewhere which isn't the open web.
When I use a web browser, my devices and I aren't going to wait for you to fix your webpage to support the architecture I happen to be running. We're going to go somewhere else.
Imagine how impossible it would have been for early Macs and ARM devices to enter the market if the web had already been entrenched in platform-specific Intel x86 code. How many of those websites out there do you think would have been "fixed" to work on other platforms, years after the authors abandoned them? I'm guessing a tiny fraction.
They didn't have to though, because the web is cross-platform. This is by design.
We should thoroughly ignore Google's best efforts to reverse this with their proprietary NaCl. Why on earth would we want to change the best quality of the web now that it has already proven its value?
> Imagine how impossible it would have been for early Macs and ARM devices to enter the market if the web had already been entrenched in platform-specific Intel x86 code.
Are you aware that NaCl (the platform-specific version) is only enabled for applications installed through the Chrome Web Store? Chrome will not load NaCl modules from the web. All that is supported for the web is PNaCl, which is platform-independent. See: https://developers.google.com/native-client/dev/nacl-and-pna...
So your hypothetical is not in danger of happening.
That said, that's not what the post I was replying to was suggesting. It was suggesting writing architecture-specific assembly code and putting it on the web, and citing that as a legitimate need which should be addressed by browser vendors.
Needless to say, as someone who supports an open, cross-platform web, I strongly oppose any such measure.
There is also such a thing as context. Mentioning JS and HTML as languages in a news article about asm.js/PNaCl almost automatically means "programming languages".
So you are factually right, but a lot of people will go and say "ah, yes, writing HTML is programming," which is just untrue.
> That said, the remaining 10% really do want something more.
Yes, but those 10% are better served by going native. What's the problem with going native using the platform's native SDK?
Oh, if you're going to say that native apps don't run in the browser, well, the browser's inherent advantages over native are that (1) it runs everywhere, on every platform, and (2) it's based on standards with multiple implementations.
But NaCl for all intents and purposes is native to Chrome and not part of the web (it also doesn't run on ARM chips). NaCl will never be a standard accepted by Microsoft, Apple, or Mozilla, because it's not portable enough and because it's so freakishly complicated that writing a spec for it is really, really hard, and the jury is still out on its technical soundness. And sure, it's open source, but that's not enough.
So you know, those 10% that want more can only find comfort in NaCl if they only want to target Chrome's Web Store, which isn't different from targeting an OS. But that's not the open web. For the open web, those 10% have no choice other than JavaScript, which is why asm.js is so awesome: because it exists.
True enough, though NaCl does. I was mostly listing a bunch of stuff asm.js doesn't do to address the "intrinsically superior" comment, not giving a pros and cons list of specific technologies. Feel free to ignore that point if you like, I don't see that it meaningfully affects my argument.
> [emscripten] lags LLVM releases, it has glitchy behavior. Compare to clang and especially gcc which build robustly and cleanly everywhere and come with elaborate test suites. This isn't a toolchain I'm about to start betting a company on, for sure.
Are you comparing emscripten to clang? Using clang alone won't get you something you can execute in a web browser — unless you include a CPU emulator in JS.
If in fact you are comparing the asm.js workflow with the PNaCl workflow, they each rely on clang, sure, but PNaCl diverges much further from clang trunk. It is a plain fork.
> PNaCl is essentially a modified version of Clang. This is done for two reasons. First, it eliminates an install-time dependency. Second, Clang was historically not supported on Windows (but this is beginning to change).
Also, PNaCl has a unique, specific API (Pepper), which is very different from the way emscripten works.
Finally, compare the workflows for getting your app to work on the Web.
PNaCl: write HTML/CSS/JS, write a manifest, write PNaCl code, compile to a pexe; then, for the JS fallback, rewrite the code to work with Emscripten and compile to JS via pepper.js.
Emscripten: write HTML/CSS/JS, write native code to work with Emscripten, compile to JS.
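To make the Emscripten path concrete, here is a minimal sketch of the browser-side glue; the file name "game.js" and the callback bodies are illustrative, but the Module-object convention is Emscripten's:

    // Hypothetical page glue for an Emscripten build: the compiled output
    // ("game.js", assumed) reads its configuration from a global Module
    // object defined before the script loads.
    var Module = {
      canvas: document.getElementById('canvas'),      // render target
      print: function (text) { console.log(text); },  // stdout from native code
      onRuntimeInitialized: function () {
        console.log('compiled code is ready to call');
      }
    };
    // A <script src="game.js"></script> tag after this would then run main().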
I think there's room for both to exist. The strategies are hugely different and will depend on the performance payoff, but asm.js has the simplicity to make it happen while PNaCl has the resources of Google, and since resources are what PNaCl needs (docs, dev environments, marketing), I can see both coming to maturity.
I felt threatened by PNaCl for a while, until I realized that it integrates with JavaScript through messaging. I think it could be a really solid complement to the Web platform.
I'm kind of glad PNaCl is being researched, and it's really impressive engineering. I personally won't invest in it, but it will be very interesting if they pull it off, especially if other browsers get interested in it. There has been zero interest so far, so I'm not betting on it.
The thing that bothers me about PNaCl is the potential for Chrome to become siloed and for Google to start pushing its own ecosystem over the web, if they can't get other browsers to join in. I don't think they'll actually do that, but the chance is still kind of scary.
SPDY is an open protocol which has been implemented by Firefox, IE, Opera, and Amazon Silk, and is being used as the basis of the HTTP 2.0 spec. I'm not sure how it could be any more open.
When Google owns a very popular browser and extremely popular websites, they are in a unique position to drive the future of the web. Now it's clear with SPDY that they are willing to abuse this position to make de facto standards come to fruition.
It's why I will never support any more of these unilateral efforts. Whether something is open or not is irrelevant when, in the case of Native Client, every committer works at Google.
What about SPDY was "abuse?" How is it any different than asm.js, which is likewise a single-vendor standard that they are pushing other browsers to adopt?
Web Workers are true threads; it's just the shared-memory part that's missing. I think we will have to introduce some form of it at some point, and there are ways to do it, but there's definitely a lot more research required (think about only allowing a typed array to be shared, with atomic reads/writes). I don't really know what's going to happen with this feature, and it is the one single thing that's going to be hard to do in JS, but I think we can solve it, and there's really nothing else missing (or that isn't planned to be implemented).
> Web Workers are true threads; it's just the shared-memory part that's missing.
And thank god for that. I've never understood the fetish C/C++ programmers have for shared memory, and locks, and mutexes. Erlang has a much saner approach to concurrency/parallelism, and Web Workers aren't terribly far away from Erlang's actor model.
EDIT: That said, I thought about the idea of only allowing typed arrays to be shared, and I actually really like it. I think that's a sane way to approach shared memory in JS, and it could cover some of the use cases that web workers aren't a big help with.
A lot of languages that claim to have "borrowed" from Erlang often borrow the "concurrent unit running independently, communicating by a channel or mailbox" idea. Very few borrow the most amazing part: isolated heaps for each of the lightweight processes. Web Workers and Dart isolates have that as well. I am not claiming they borrowed from Erlang, just saying they have that nice feature.
See, fault tolerance was actually the #1 priority when building Erlang, not high concurrency. High concurrency was there too, but it was #2. Also, the tough part is selling fault tolerance. Only those who have implemented large concurrent systems using shared-memory models, plagued by dangling pointers and large mutable state moving around in the middle of everything, and have seen them fail will appreciate fault tolerance. Most will shrug it off: "hey look, I can run the language shootout's Mandelbrot 10x faster!" There is a trade-off there, of course, just like in a real OS: if you fork a separate process for each request, you'll pay for fault tolerance with some performance, compared to just calling a callback in the same process.
> I've never understood the fetish C/C++ programmers have for shared memory, and locks, and mutexes.
There are some problems which are more amenable to shared mutable memory, especially when performance or memory space is a concern. And for those problems where the sharing of mutable data is not needed, forking, pointer ownership, thread-local data, and const correctness are all old concepts.
Fair criticism. But the idea of a shared typed array is appealing, just for the simplicity. If you must have shared memory in JS, this is not a bad way to do it.
Web Workers can actually let you pass messages at the speed of shared memory, thanks to an optimization, when using typed arrays with "transferables". So you can fling around buffers straight from a worker to the main thread without copying, pass it to WebGL without conversion etc.
(The sender loses access to the buffer after it's sent, so it's still safe)
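A minimal sketch of that transfer pattern, assuming a hypothetical worker script "worker.js" and message shape:

    // Transfer an ArrayBuffer to a worker instead of structured-cloning it.
    var worker = new Worker('worker.js');           // hypothetical worker script
    var buffer = new Float32Array(1 << 20).buffer;  // 4 MB of sample data

    // Listing the buffer in the transfer list moves ownership; no copy is made.
    worker.postMessage({ cmd: 'process', data: buffer }, [buffer]);

    // The sender's buffer is now detached, which is what keeps this race-free:
    console.log(buffer.byteLength); // 0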
"...at some point, and there are ways to do it but there's definitely a lot more research required..." Maybe it's better to abandon the whole legacy JS platform and move on to something that doesn't slow down innovation?
Competing platforms/ecosystems slow down innovation? I would say we need at least a couple of strong JS contenders to get the most benefit. The Central Bureaucracies such as W3C make change (granted positive or negative) slower.
>Competing platforms/ecosystems slow down innovation?
Yes. That's what killed the vision for massive adoption of desktop Linux for one thing.
That's what hampered a good Java web story (tons of competing frameworks).
It's why there are several slow scripting languages (Python, Ruby, Perl, etc.), whereas JavaScript got crazy fast as the only game in town when it comes to the web.
The reverse of that (Rails as the "one true framework" instead of tons of competing stuff) is what helped Ruby make waves.
Especially when the "competing platforms/ecosystems" are needlessly competing. It's a waste of effort, people duplicating features and stuff.
>The Central Bureaucracies such as W3C make change (granted positive or negative) slower.
The central bureaucracies are actually "competing platforms/ecosystems" too, only they are competing for inclusion in the one same standard. MS, Apple, Google, Oracle, IBM etc each wanting their own APIs and changes to the final spec (which is slow).
Most progress has been made by ONE SINGLE company going at it and inventing something new on their own that others then adopt more or less wholesale (e.g. Apple with Canvas, MS with AJAX, Mozilla with asm.js, etc.).
Contrast that to each of those companies having its own competing technology for the same thing (e.g. different canvas drawing APIs) instead of adopting one and being done with it.
You are describing competition within a platform, which is happening in JS as well: EmberJS vs AngularJS vs KnockoutJS etc. Do you really think this kind of competition is bad?
I am talking about competition on a higher level: e.g. Ruby on Rails vs ASP.NET vs JSP. RoR was not the first one. Do you really think this kind of competition is bad? Was inventing Linux a bad thing because we already had other OS's?
"company going at it and inventing something new on their own" - the very definition of competition: generating something new to get a competitive advantage (new platform, new technology, new framework).
Is this accurate? My understanding was that the V8 team wasn't explicitly optimizing for asm.js, and just optimized V8 in general (and that often asm.js hit fast-paths even without explicit optimization).
It sounds from the article like Epic announced that Chrome/V8 is fast enough to run the Unreal 3 engine, and Mozilla's trying to spin it as V8 explicitly optimized for a "spec" invented by Mozilla when actually they just made V8 really, really fast.
That said, they do include an asm.js benchmark in Octane 2, so it's clearly on their radar. But as far as I've heard they haven't actually implemented special asm.js typecheckers / AOT compilers like Mozilla has.
As you say, Google included asm.js code in Octane 2, and you can see progress on asm.js benchmarks as Google optimizes in ways that help that style of code on arewefastyet.
This is also not something new; Google has optimized for asm.js code since the I/O keynote much earlier this year, where Google reported a 2.4x speedup on an aggregate asm.js benchmark.
There are many ways to optimize for different types of code. Firefox uses the asm.js type system for asm.js, and does AOT compilation, while Chrome chose a different approach. The two approaches have different benefits. But both browsers certainly optimize for asm.js, they both announced doing so and the numbers are proof that they are succeeding.
Even before asm.js there was emscripten (the primary generator of asm.js and inspiration for much of the spec), which has been used for a number of high profile demos and likely got optimization attention for years before asm.js.
When we released gwtquake it ran poorly on FF and the FF team seems to have spent some time optimizing for it. Imagine a press release boasting how FF optimizes for GWT.
This press release is a little heavy on spin trying to market asm.js. While it does appear V8 is speeding up asm.js-style code, until we know specifically what they've done in Chrome 31/32, this seems mostly like speculation. What if in fact asm.js was causing a degenerate case in their JIT and what they actually did amounted to fixing some bugs?
I ran the demo in chrome not that long after it was released and recall the frame rate being almost as good as in the recommended firefox nightly builds. There was some visual weirdness (polygons being missing/transparent).
I expect Epic's announcement is more about these glitches being resolved than about performance increases. Mozilla's is pure spin.
That's not true, if you ran it in Chrome it was extremely laggy, but you must have caught it after they implemented a lot of asm.js optimizations. You can see when they focused on asm.js-style code here in these graphs: http://arewefastyet.com/#machine=11&view=breakdown&suite=asm.... Just look at the drop off of the green line!
I also suggest zooming into those graphs to see which changes led to drops (at least those that are currently visible). Then you could try to explain to people what is asm.js-specific in changes like:
[hint: nothing; they have more to do with how Emscripten has a tendency to produce extremely big functions which no human would write, and the JIT has to accommodate that to hit them at the correct moment and not give up on them. Once it hits them, however, the optimizations that were there for ages, optimizations purely based on JavaScript semantics, start doing what they are intended to do: make JavaScript run fast]
Not sure what computer you ran the demo on, but the Unreal Engine demo on top of Chrome was unusable on my laptop (talking about 2-5 fps), whereas in Firefox it was extremely smooth and this was when Firefox 22 was released (so it was the stable release, not the nightlies).
EDIT: I just ran that demo in Chrome, on the same computer as before and it did improve a lot (getting 50-60 fps in full screen HD), though it still seems smoother in Firefox. Competition is great.
It's frustrating to see incorrect information regarding NaCl in top comments.
NaCl is not going to make the web x86-dependent. Recognizing the danger of an architecture-specific web, Google has disabled NaCl for everything except the Chrome Web Store. So unless you believe that the Apple App Store has been making the web ARM-specific for the last five years, Native Client is not the x86 lock-in that you may fear.
And even PNaCl, which requires browser support to run "normally", has a JS fallback called pepper.js. So applications targeting PNaCl do not have a hard dependency on a PNaCl runtime: http://trypepperjs.appspot.com/
Basically, there is a whitelist of domains in Chrome where NaCl is enabled for non-web-store apps; the basic criterion, according to Google, is that (like the web store) these are places where Google has a lever to move people using NaCl from NaCl to PNaCl over time.
Obviously, this applies to Google's own properties.
Correct me if I'm wrong, but it does sound like NaCl is about "making the web x86-dependent", though Google also has architecture-agnostic implementations in PNaCl and pepper.js.
This isn't right: the Chrome team have explicitly said they're going after general optimizations that just happen to help asm.js performance greatly, but are applied regardless of whether asm.js is being used or not.
What you say is true, Chrome is using a general approach to optimize asm.js code, but that does not contradict what the post says. It says that both Firefox and Chrome have optimized for asm.js, and that's correct. For example as reissbaker's comment mentions, Google even added an asm.js benchmark to Octane 2.0. That shows it is optimizing for that style of code, among many other styles of course.
That's one tortured and slanted way of looking at it. By your logic, every single dev who has ever made a JS lib could write an article titled "Chrome and Opera Optimize for Whoever-Pioneered whatever.js", since the changes improve their lib.
That title is making it sound like the changes were aimed at making that one case faster, when they weren't.
And their adding a library to their benchmark doesn't in any way mean that they aim to optimize for it, only that they want to keep a reference on how their code performs.
Just because I regularly check how my website renders in IE 6 doesn't mean I'm optimizing for it or that I consider it a target.
> That title is making it sound like the changes were aimed at making that one case faster, when they weren't.
No, the changes were certainly aimed at making the case of asm.js code faster. As in the links I posted above, Google has announced they are optimizing for asm.js and that they achieved large speedups on it; they mention it very specifically.
> And them adding a library to their benchmark doesn't in any way mean that they aim to optimize for it
No, it exactly means that: Adding it to Google's main benchmark means that they consider it representative of important real-world code, and hence that optimizing for that benchmark means you are optimizing for the right stuff. As the Octane FAQ says,
>> " Octane aims to be representative of actual workloads and execution profiles of real web applications. Octane's goal is to be a proxy for the JavaScript application that you'll encounter when running browser games, highly-interactive web pages or online productivity tools." https://developers.google.com/octane/faq
I think there's probably a subtle distinction that's hard to show. I'd imagine the line is drawn at interpreting the
"use asm";
which signals the asm.js optimizations should be used. Instead, I'd bet the Chrome team are just watching for JS which matches the asm.js spec and optimizing for those cases, like they do for other code paths & structures. That would mean they are optimizing for it, but not supporting it directly.
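For reference, here is a minimal asm.js-style module (hypothetical, but following the shape of the published spec). An engine that recognizes the pragma can validate and AOT-compile it; any other engine runs it as ordinary JavaScript with identical results:

    function MiniModule(stdlib, foreign, heap) {
      "use asm";            // the pragma in question; JS semantics are unchanged
      function add(x, y) {
        x = x | 0;          // parameters coerced to int32, per asm.js type rules
        y = y | 0;
        return (x + y) | 0; // result coerced back to int32
      }
      return { add: add };
    }

    var m = MiniModule(this, {}, new ArrayBuffer(0x10000));
    m.add(2, 3); // 5, whether or not the engine treats the module specially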
> Instead, I'd bet the Chrome team are just watching for JS which matches the asm.js spec
No. All the optimizations V8 is doing follow from the JavaScript specification itself, not from the asm.js specification. V8 does not use the asm.js specification in any way (including the type rules that asm.js prescribes). If you take asm.js code and slightly change the typing so that it will no longer pass the asm.js type check but is still reasonable, V8 will continue to optimize what it can.
I'd say that if you make changes to your web site because you see that your site is slow in IE6 in order to make IE6 faster, you are optimizing for IE6. This is directly analogous to what Google is doing with asm.js.
If you make changes to your site that make both IE6 and all other browsers fast, then what are you optimizing for?
Because this is what is happening.
asm.js is a monolith: "use asm" at the foundation and a pyramid of typing rules on top; take one out and everything crumbles. Right now V8 does not use "use asm", nor does it rely on any static typing (beyond what can be derived from the JavaScript specification).
For example, when V8 improves its optimization pipeline to handle coercion of boolean to integer inline, this helps asm.js code, but it also helps anyone doing this operation outside of asm.js-compliant code. This coercion rule is just a part of the language, so I really see this as optimization of JavaScript as a whole.
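A concrete (hypothetical) instance of that boolean-to-integer coercion; the pattern is plain JavaScript whether or not the surrounding code would validate as asm.js:

    // (x > 127) produces a boolean; | 0 coerces it to 0 or 1. An engine that
    // compiles this coercion inline speeds it up for asm.js and non-asm.js
    // code alike.
    function isHighByte(x) {
      return (x > 127) | 0;
    }
    isHighByte(42);  // 0
    isHighByte(200); // 1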
> If you make changes to your site that make both IE6 and all other browsers fast, then what are you optimizing for?
This is really a philosophical debate at this point. All one side is saying is that if a team of engineers focuses on making a certain type of code run fast (they look at that code, have benchmarks for it, find stuff that makes it run faster, etc.), then it is fair to say they are "optimizing for that type of code." The other side is saying that if they write general optimizations that speed up not only that code but other code as well, then they are "not optimizing for that type of code."
Both sides are correct in what they mean. The only disagreement is in how to define the term "optimizing for." But as long as we all understand what we mean, there is no disagreement here at all.
Yes, you are exactly correct that v8 does not notice "use asm", but yes, it is also correct that the v8 team has been optimizing for asm.js code, just as they optimize for typescript code (which like asm.js, Google deemed important enough to be put in Octane).
It's the compiler itself that is in Octane (and the compiler is written in TypeScript). This particular benchmark pressures the GC a lot, and the GC shares the top spot with a load-from-dictionary stub.
TypeScript is on the opposite side of the scale compared to asm.js: it produces mostly what a normal human would write. This means normal prototypical inheritance, normal JS objects, and so on. There is nothing really special about that. Though of course the TypeScript compiler itself has a performance profile atypical for a web application or game; it is a compiler, after all.
The hottest JavaScript function in this benchmark (if you skip the GC and the dictionary load) is Scanner.innerScan, with 1% of ticks. If you look at it you will see a typical scanner function that most compilers have. Any human would write it the same way, actually, and the fact that it was written in TypeScript is irrelevant (the compiler just stripped the type annotations when it compiled it).
If you sum all these things together, you will see why it is incorrect to say "X optimizes for Y" here.
Ok, I see your point, TypeScript is not a good example then, I did not analyze it in depth as you have. But my point still stands in general: Stuff in Octane is code that is intended to be optimized for, by definition. So recursive code, numerical computation code, asm.js code, Mandreel code, functional code, etc. etc. - browsers are optimizing for all of those, and different parts of Octane test each of those.
edit: perhaps an even more specific example is the regex stuff in Octane and SunSpider, they test something very narrow. Likewise GC tests that pretty much require a generational GC to be fast, they also test something very specific. While the TypeScript test, as you said, is more general.
> Stuff in Octane is code that is intended to be optimized for, by definition. So recursive code, numerical computation code, asm.js code, Mandreel code, functional code, etc. etc.
Yep. Now look at the list of these things. Do you see what they sum to? In sum, it is real-world JavaScript. Which precisely reveals the goal: JavaScript should run fast.
I think we are looping on this issue because we see different connotations in the sentence "optimize for asm.js". I see the connotation "optimize for asm.js as described by its specification (with AOT and stuff)", while what actually happens is that the underlying JavaScript that serves as the foundation for the asm.js specification is being optimized. That is why I always prefer to say "asm.js-style code".
> Likewise GC tests that pretty much require a generational GC to be fast
Actually with Splay it is easier to be fast if you don't have a moving generational GC :-)
Ok, then I think we do understand each other. The only question then is what words to use to avoid confusion. So you are fully ok with the phrase "optimize for asm.js-style code"?
I didn't mean splay necessarily ;) It is easy to make a benchmark that relies heavily on GGC or escape analysis.
> So you are fully ok with the phrase "optimize for asm.js-style code"?
I would say I am more OK with it as it highlights the fact that it's about spirit not about complete adherence to asm.js spec.
Unfortunately there is no compact and at the same time clear way to say what V8 does right now, e.g. saying "optimizing for type stable code" is most accurate but does not help to understand what V8 actually does and how it relates to improvements on Emscripten output.
In this case I would rather say that IE6 was a motivational example that caused me to find a performance bottleneck affecting all sites. I optimized the bottleneck away.
> If you make changes to your site that make both IE6 and all other browsers fast, then what are you optimizing for?
If you specifically make changes to your site in order to make it faster on IE6, and those changes also make it faster in all other browsers, the "all other browsers" part is best described as a side effect.
> If you specifically make changes to your site in order to make it faster on IE6, and those changes also make it faster in all other browsers, the "all other browsers" part is best described as a side effect.
So did the v8 team do something special to make asm.js faster? As mraleph said, the answer is no. So the fact that asm.js is now faster in Chrome is a side effect.
Again, the question is what you mean by "specifically".
The v8 team look at asm.js benchmarks and make changes in v8 to make those benchmarks run faster; that's clear from their announcements. At the same time, those optimizations can also make other code faster, as they try to make them as general as possible.
So it's not true that it's a "side effect", since some of the time they literally start the process by looking at asm.js code and finding how to make it run faster on v8. So asm.js is in a sense directly responsible for the optimization, and asm.js is sped up by it. But at the same time, the optimization added is, unlike Firefox's approach, relatively general, and helps other code too.
In the end, I don't think there is any disagreement between us here.
To add to the comments here: asm.js seems to be big on bitwise computing. In fact, it relies totally on bitwise computing. So asm.js puts bitwise computing at the forefront of programming.
Which means that Chrome will now optimize for bitwise computing without holding any bias for or against asm.js. asm.js developers can claim the mantle of bitwise computing, and anybody who uses bitwise computing or optimizes for it is really endorsing asm.js.
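For readers unfamiliar with the idioms in question, these are the ordinary JavaScript bitwise operators asm.js leans on (the values here are just illustrative):

    var heap = new Int32Array(new ArrayBuffer(0x1000));
    var x = 3.7;

    var i = x | 0;        // truncate to int32 -> 3
    var u = -1 >>> 0;     // reinterpret as uint32 -> 4294967295
    var d = +x;           // unary plus marks a double in asm.js -> 3.7
    var v = heap[8 >> 2]; // byte offset 8 becomes index 2 for 4-byte elements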
Please put these asm.js vs PNaCl arguments to rest. We are all going to win in the end, as both asm.js and PNaCl run on pure JavaScript (PNaCl through a pepper.js fallback, and I've heard asm.js is already a PNaCl target) [1]
In the end, some hugely popular cross-compiled games will drive adoption of both technologies to the point where they are ubiquitous and supported by all major vendors. Apple doesn't want people saying "Wah, look at how poorly Safari handles that game compared to Chrome/Firefox".
Transpilers are just a work-around; the tooling is not mainstream yet. At the moment, debugging the script is easier despite being unpleasant. I work with JS almost daily and understand its strengths and weaknesses. I do not consider it a silver bullet, though that seems to be a shared sentiment around HN. A language designed to make documents more interactive should be just that, especially if it is evolving in 5-year cycles (http://en.wikipedia.org/wiki/ECMAScript#Versions).
I believe you're missing the point: with asm.js, JS isn't being used as a "silver bullet", but merely as a kind of "assembly" of web browsers. It's not that we love JS so much; it's that it's pretty much part of every browser, and as such code compiled to asm.js will run in any browser, albeit without any asm.js-specific speedup if a given browser does not support it.
Can I run the CLR/JVM/Mono in asm.js today? Are there any clear plans for it? Will the performance of this scenario equal that of NaCl? I have my hopes for asm.js, but it's good to see that there is an alternative evolving in parallel.
You could try compiling Mono/Java to asm.js and see for yourself. Of course performance will not be equal to that of NaCl, but portability will.
I don't see NaCl as a viable alternative; I see it as an attempt to lock the user into a specific browser, and also as a potential security worry.
The thing is that NaCl lets e.g. game developers use languages and tools _they_ like _today_ to generate great products that run with reasonable performance and are deployable to a decent app-store, here is a poster child: http://www.tested.com/tech/web/3263-how-bastion-can-run-in-a...
You can do the same with Emscripten + asm.js. You don't code in JavaScript; you just compile to JS. The products will run with less performance than native, I'm sure, but at least they're not dependent on Google's app store.
And, for me personally if I'm going to run a native app, I see no reason at all to have it run inside a web browser.
Is there any concrete data to support "You still get the best performance in Firefox"? I get around 26fps for 1920x1200 fullscreen on the latest Firefox nightly, but 40fps on the latest Chromium. Ubuntu precise.
Comparing a pre-alpha quality nightly build is never a good way to compare real products.
Comparing the current stable Chrome 31.0.1650.57 and Firefox 25.0.1 on my Windows box running at 1920x1200 yields 57.8 fps for Chrome and 81.7 fps for Firefox. Note that Firefox is pegged at a maximum of 60fps by default which will affect benchmarks. You can disable that limit with the information in the FAQ.