Google has done a number of optimizations that enhance asm.js performance while not specifically targeting asm.js. This results in real-world performance that sometimes beats Firefox, which has asm.js support built in. While I would like to see more effort put into the areas of asm.js that are slow, I can't discount the Chrome/V8 team's approach.
But the Chrome/V8 team is still benefiting from the existence of asm.js, because the subset of JS that people can use and expect good performance from is now well-defined. Prior to asm.js, this kind of certainty wasn't available to either JS engine developers or JS authors.
I'm not sure I understand the purpose of your comment. Google's optimizations can achieve asm.js-like performance outside the well-defined subset, so it doesn't necessarily benefit Google. If compile-to-JS language authors target the subset ubiquitously and exclusively, it may actually hurt Google.
> This results in real-world performance that sometimes beats Firefox
While Chrome's results have been impressive, in my experience Firefox does much better on asm.js code - at least for the games I've been playing. So what benchmarks have you seen?
Also, Google not getting involved in asm.js to make it better, but instead developing and promoting PNaCl and Dart, smells to me like Microsoft's lock-in tactics with Internet Explorer in the nineties.
I would really love to see Native Client on Android as an alternative to the dreaded Android NDK, and I really don't understand why this hasn't already happened, it's so incredibly obvious. Just give us the ability to deploy and run a PNaCl executable directly as a normal Android application, and without all the Java and JNI shenanigans. The Pepper API has a lot more to offer than what's exposed through the NDK headers, and the SDK itself is much easier to get up and running than the NDK.
Why? Last I heard, Chrome is going long on their JIT and trying to get it to automatically squeeze the same performance out of asm.js code as the others, without special shortcuts. These "real" optimizations can then seamlessly be applied outside of asm.js code.
Isn't that something that benefits us all? I'm happy they're trying, at least.
While you are correct that the actual runtime speed should, in theory, be achievable with the JIT, there are other advantages to officially supporting asm.js. One of the main things is ahead-of-time (AOT) compilation, which compiles the asm.js code directly to assembly immediately after it's parsed. This gives you predictability (you literally get warnings in the console if it couldn't compile), which is really big for understanding performance.
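To make the AOT part concrete, here's a minimal sketch of an asm.js module (the names MiniModule and add are made up for illustration). The "use asm" directive is what triggers validation and up-front compilation:

    function MiniModule(stdlib, foreign, heap) {
        "use asm";                  // opts this function into asm.js validation
        function add(a, b) {
            a = a | 0;              // annotates parameter a as int
            b = b | 0;              // annotates parameter b as int
            return (a + b) | 0;     // coerces the result back to int
        }
        return { add: add };
    }

If everything validates, Firefox compiles the whole module ahead of time; break a rule (say, drop the final | 0) and you get a validation warning in the console and the code silently falls back to the regular JIT path.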
> One of the main things is ahead-of-time (AOT) compilation, which compiles the asm.js code directly to assembly immediately after it's parsed. This gives you predictability (you literally get warnings in the console if it couldn't compile), which is really big for understanding performance.
Related, and you've sort of mentioned this: having your browser be able to validate asm.js is useful for web developers, because then they know if their code is broken (and so will run slower).
Exactly. When I ported my software manually, it was the warning messages that helped. After that I could even remove the "use asm" directive and still achieve almost the same performance (around 80% of max speed).
asm.js code currently runs through a foreign function interface in Firefox, which makes calling into the asm.js code slower.
asm.js represents LLVM bytecode, which is already past the compilation step. The programmer sees errors when generating the asm.js, not when executing it. Whether it is then executed by a JIT or not is, in this case, irrelevant.
I absolutely agree there are advantages to taking the asm.js -> LLVM bytecode shortcut (cue every single benchmark showing FF on top). But compiler warnings are not one of them.
EDIT: To clarify: my point is that this is not classical compilation, but rather "interpretation of the generated code". Nobody writes asm.js by hand, no matter how it is executed. If there are errors in there, something is seriously wrong with the tooling, and when the errors show up will be the least of your worries. It's comparable to errors in a .jar file or a .pyc: this is just not something we generally need to be concerned with.
EDIT2: I don't mind the downvotes but if I'm wrong, please explain so at least I understand. Otherwise I won't learn.
I think I see what you are saying, but your statements aren't completely right, which might be why you are being downvoted. "asm.js represents LLVM bytecode" is not right; just because you can compile one thing to another does not mean it represents it.
You make a good point though; it's not like we are seeing helpful warnings when writing our frontend JS apps. But those warnings are still crucial when actually building & deploying stuff; with various tooling and browsers it's really nice to see a "successful asm.js compilation" message and you know it's working. If you upgrade a tool, you can be sure that it's still working, etc.
Also, it's not uncommon for people to write new languages or compilers and having that feedback that you are on the fast path is really nice.
I think your point has some merit though.
The other benefit to AOT is simply that, when the app starts running, it immediately starts running in the fast path.
You're misunderstanding jlongster's point, or putting too much emphasis on the compiler warning part.
When you load some asm.js code in Firefox and it compiles without errors or warnings, you then know for sure your code was fully compiled and you will not see parsing/compilation happening half-way through a game frame. This means it's slightly easier to reason about the performance of your code.
AFAIK, with the Chrome strategy, you have to think about JIT compilation kicking in at any point, which is unpredictable and completely out of your control.
It's likely because asm.js isn't at all related to LLVM. Emscripten and the like use LLVM IR to compile to asm.js, but that doesn't mean that asm.js is LLVM IR. asm.js is actually just a restricted subset of JavaScript that avoids a lot of things that invoke the garbage collector, forces certain type constraints (variables have static types that can be inferred at parse/compile time), and emulates pointers using a block of memory, among a host of other things. LLVM just gets used the most because Emscripten makes for a really easy way to target the platform.
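As a rough sketch of what that pointer emulation looks like (the Mem module and load function are illustrative names, not from any real codebase):

    function Mem(stdlib, foreign, heap) {
        "use asm";
        var HEAP32 = new stdlib.Int32Array(heap);  // the one big block of memory
        function load(ptr) {
            ptr = ptr | 0;                 // a "pointer" is just an int byte offset
            return HEAP32[ptr >> 2] | 0;   // >>2 turns the byte offset into an index
        }
        return { load: load };
    }

You'd instantiate it with something like Mem(window, null, new ArrayBuffer(0x10000)); all of the program's data lives inside that one buffer.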
No, emscripten compiles asm.js from LLVM IR, but asm.js itself is more like a portable bytecode for a 32-bit register machine.
> The programmer sees errors when generating the asm.js, not when executing it.
Why would that be the case? The point of asm.js is so browsers can optimise it. It doesn't matter what your tooling thinks, in the end what matters is whether browsers accept it.
> Whether it is then executed by a JIT or not is, in this case, irrelevant.
No, whether it is executed by a JIT is quite important. asm.js is a subset that browsers can validate and then compile ahead-of-time. If your code is failing validation and falling back to the usual JavaScript mode, there's a big performance penalty, and your code isn't asm.js-compliant!
> Nobody writes asm.js by hand, no matter how it is executed.
Actually, some people do. It's not the nicest of languages, but there are some people who do.
But even if you don't, what if you're using buggy or outdated tooling producing incorrect output? What if you're targeting a browser that doesn't support some new asm.js feature? You need to know if your code didn't validate!
Yes, more or less.
Every operation must be written so that the type is obvious to the compiler. And you have one array, the heap, with some special index rules. For example:
r[((ins >> 9) & 0x7C) >> 2] | 0
The >>2 and the |0 are necessary.
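Fleshed out, that expression might live in a module like this (a hypothetical sketch; I'm assuming r is an Int32Array view of the heap and ins is an instruction word):

    function Decode(stdlib, foreign, heap) {
        "use asm";
        var r = new stdlib.Int32Array(heap);       // register file lives in the heap
        function operand(ins) {
            ins = ins | 0;                         // int parameter annotation
            // (ins >> 9) & 0x7C extracts an aligned byte offset from the
            // instruction word; >>2 makes it an Int32Array index; the final
            // |0 tells the validator the loaded value is an int
            return r[((ins >> 9) & 0x7C) >> 2] | 0;
        }
        return { operand: operand };
    }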
No, it doesn't. Where in IodineGBA do you see any hand-written asm.js code?
Here are some quotes from the author of that very library:
"asm.js requires a style of coding only compilers can output. A person writing actual asm.js code by hand would need to be insane as asm.js code required style of coding is horribly disorganized"[1]
"Unfortunately asm.js requires one giant array to put things on. No one in their right mind codes like that by hand."[2]
That's cool that they are trying, but what's the opportunity cost of not doing it? How much effort would it be to simply optimize asm.js now and then undo the optimization later when the core JIT gets smarter?
Seems to me THAT is the way to go, but this is not my area of expertise.
Looks like this will be the year of WebGL + asm.js across all browsers. If so, next year could be big for WebGL and web gaming again. And one day it could also make an impact on mobile, but mobile seems to be moving ahead with its own things like Apple Metal and Khronos Vulkan (the OpenGL successor: https://www.khronos.org/vulkan).
While I'd like to see asm.js implemented for compatibility, I greatly prefer native code over compiling to a JavaScript subset in the hopes of getting something vaguely resembling the original code back.
I understand why other browsers don't implement the Pepper API, because it's highly Chrome-specific; however, I'd like to see other browsers implementing the native-code sandbox, at least.
Asm.js has several benefits over NaCl. It's readily compatible with just about every browser on the market, and manufacturers just need to add some asm.js-specific optimizations to their JS runtime to fully unleash its execution power (though Chrome already runs asm.js code very fast with just its generic JIT optimisations).
Secondly, it's based on a self-contained open source spec that constitutes a logical subset of another widely-supported open spec (ECMAScript) - no convoluted, versioned APIs coordinated by large, possibly competing and mutually incompatible engineering efforts. The asm.js spec is actually so simple it fits on a single web page: http://asmjs.org/spec/latest/
Yet it pretty much manages to achieve everything NaCl does, while also being forwards-compatible with all the DOM-based extensions like HTML5, without needing any additional APIs (many asm.js demos, for example, bind to WebGL; this requires no additional "asm.js API", as the browser's existing WebGL implementation suffices).
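For illustration, here's roughly how that binding works: the browser's normal WebGL API gets wrapped in ordinary JS and handed to the asm.js module through its foreign-imports parameter (the Render module and clear function are made-up names):

    function Render(stdlib, foreign, heap) {
        "use asm";
        var clear = foreign.clear;    // plain JS function imported into the module
        function frame(t) {
            t = +t;                   // +t annotates the parameter as double
            clear(+t);                // FFI call out to regular JavaScript
        }
        return { frame: frame };
    }

    var gl = document.createElement("canvas").getContext("webgl");
    var mod = Render(window, {
        clear: function (t) {         // ordinary JS closing over the WebGL context
            gl.clearColor(Math.abs(Math.sin(t)), 0, 0, 1);
            gl.clear(gl.COLOR_BUFFER_BIT);
        }
    });
    mod.frame(0.5);

No asm.js-specific browser API is involved anywhere; the module only ever sees plain functions it was handed.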
As mentioned in my previous comment, NaCl and Pepper are not the same thing. Browsers could support NaCl without committing to support Pepper.
> Secondly, it's based on a self-contained open source spec that constitutes a logical subset of another widely-supported open spec (ECMAScript) - no convoluted
I'm going to have to stop right there. asm.js is a lot of things, but "not convoluted" is certainly not one of them. asm.js involves compiling native code to JavaScript in the hopes that browsers will translate it back to some semblance of the same native code.
The level of "convoluted" depends on which level you want to look at. The asm.js spec is dead-simple, almost purely "mathematical" if you like. But if you require that whatever GCC/clang would have outputted for a specific C file is the exact same binary code that gets executed in the browser, then yes - you are not getting that. But then again, asm.js just - on a high level - specifies the primitive low-level constructs that C/C++ programs employ. Mapping them to instructions is a pretty straightforward job, but of course you are not getting 100% exact same results on every browser - just like you are probably not getting 100% exact same results from clang and GCC for the same C source code.
It's more that I'd like browsers not doing an absurd amount of unnecessary work and overhead. Leaving aside opinions on JavaScript as a scripting language for humans, it's not a sensible intermediate format for compiled binaries. It happens to work as a (very impressive) hack.
Even if it doesn't appear that way to those who don't consider the whole context, it's a reasonably effective intermediate format. Its current success among the vendors proves that.
Have to start somewhere. In the interim, people will use it to build compatibility layers for the Pepper APIs, such as a better pepper.js (https://trypepperjs.appspot.com/).
It runs, but running it in the normal JS engine is slow. If code validates as asm.js, it can be compiled with fewer checks and no garbage collection, so it runs a lot faster. https://en.wikipedia.org/wiki/Asm.js#Performance
There's no need to be disparaging: NaCl is more ambitious, but asm.js has a backwards-compatibility path. Both are valid engineering decisions, particularly since it's really easy to imagine a world where, if NaCl delivers some major benefits, the same toolchain could generate both for a seamless upgrade.
In the end there's not much difference between PNaCl and asm.js (see for yourself: http://floooh.github.io/oryol/). Both compile down to 'immutable' machine code, either via JIT or AOT compilation; both call into the same browser API backends; both are based on LLVM (PNaCl and emscripten actually use the same modified LLVM frontend); and both have somewhat similar restrictions on what APIs can be called in threads. The only real difference is whether a pthreads-style threading model is supported (PNaCl supports one, asm.js does not, but work is underway at Mozilla to implement a true pthreads-style model via SharedArrayBuffer).
If I want to implement NaCl execution, I only need to support a (sane) bytecode and a reasonable "syscall" surface (Pepper, and only if I want to support Pepper; I may not want to for things like server-side sandboxing).
I can AOT compile a complete binary, and can run on (just about) any target.
If I want to implement asm.js, I have to implement a full JavaScript JIT (if I want decent performance). This is Hard. I can't AOT compile, by the nature of JavaScript.
So, no, there is a difference. One of these technologies brings the entire bloated browser technology stack along with it, and one finally cleans up that bloat.
> If I want to implement asm.js, I have to implement a full JavaScript JIT (if I want decent performance). This is Hard.
Writing a compiler that gets decent performance for LLVM is also Hard. (Go look at the size of the x86 backend alone in LLVM.) If you were writing an asm.js engine from scratch with no support for any JS other than Emterpreter bytecode, asm.js is probably even a bit easier than supporting PNaCl, due to the lack of types and SSA. But honestly, the amount of work you need to parse a different bytecode doesn't matter much compared to the amount of work you need to do to write a good compiler, which you would have to do either way.
> One of these technologies brings the entire bloated browser technology stack along with it, and one finally cleans up that bloat.
Except that, as I mentioned above, you're never going to "clean up" the stack. HN, which you're using to post this comment, hasn't even moved beyond the <font> and <center> tags; what chance is there for browsers to drop all that technology when many sites haven't even adopted CSS1? The difference isn't between "HTML + CSS + JS" and "alternative stack", it's "HTML + CSS + JS + asm.js" and "HTML + CSS + JS + alternative-stack-that-duplicates-the-features-of-the-previous-three". You have to take that into account when talking about the complexity calculus.
> I can't AOT compile, by the nature of JavaScript.
Firefox AOT compiles asm.js. All you have to do is validate it (like you would any other bytecode) and then you can in fact AOT compile it. You don't even need a full JS parser, since asm.js only uses a subset of the syntax allowed in JS.
Asm.js bypasses the JIT, using AOT compilation directly to machine code (after verification). You just need a JS parser, not the JIT. At least not if you only want to support asm.js.
There's a very big difference. NaCl and PNaCl don't use the standard, multi-vendor web APIs. They use a proprietary single-vendor (Google) API, Pepper. This not only makes implementation by competitors difficult, it's needless duplication of effort (why maintain multiple APIs for the same thing?).
Also, both are based on single-vendor, single-implementation technology. NaCl used actual native code, so if you're not using x86-64 or ARM, too bad! And PNaCl uses LLVM, so there's only one implementation.
Compare this to asm.js. It's a strict subset of ECMAScript/JavaScript, which has several high-quality implementations, and is portable across platforms. It uses the existing standard web APIs, which also have several high-quality implementations.
Sure, but under the hood both the HTML5 APIs and the Pepper APIs call into the same code. At least for WebGL, the performance behaviour on Chrome between WebGL and Pepper's GL wrapper is basically identical, and most other exposed API features are so similar to their HTML5 counterparts that it is almost certain there's the same code underneath.
> Sure, but under the hood both the HTML5 APIs and Pepper APIs call into the same code
It's still wasted effort, though. Maybe they share some code, but Pepper is an unnecessary extra API. One that's non-standard and results in vendor lock-in.
The general consensus among non-Google browser vendors was that just using the existing browser APIs was a more desirable approach than Pepper. Since then, Pepper has remained a Chrome-specific technology.
That doesn't make it not proprietary! It's open-source, sure, but it's not something easy for other browser vendors to implement and only has a single implementation.
This kind of remark is getting pretty childish. On some level, all technology is black magic or "hacks". With a little bit of understanding, you learn all "hacks" are petty card tricks.
I want fewer things emulated via JavaScript and more stuff done directly via native code. Everything that is built on top of HTML/JS/CSS is basically a hack that is trying to emulate much better native platforms, and it sucks. Why advocate staying there? Let's go in the other direction, in my opinion.
On the other hand, for the first time in history, you have true "write once, run everywhere". I can take my native C/C++ GL demos that normally run on the desktop, compile them to asm.js, distribute them via a simple URL, and users just click on a link and run the demo on every OS. No download and installation, no browser warnings about dangerous executables, no virus scanner scare popups. Compare this with iOS, where I need to be a certified Apple developer, need to sign all my code, and can only distribute through the App Store (and since they're only small graphics demos without real use, the gatekeepers would never let them through).
I don't think it will be all that hard from a security standpoint. It mostly means fixing the major mess that security at the OS level has been, a mess that isn't changing any time soon.
We're not supposed to be giving applications all the power they want. We just want to let them use the GPU and take input on focus, and if they misbehave, we kill them. If they need anything else, they'll have to ask the user. It took Microsoft, what, two decades to realize this? And it's only slightly better now. The mobile OSes were the first to grasp this, but it's still imperfect.
Java almost got there but I believe it lost traction because of UI, lack of clear leadership and being tied to a language.
We're going to have to go with browsers because, although they're a big pile of hacks, they are finally realizing the obvious (in hindsight) way we should develop most applications. I have no doubt it will continue to catch on.
The choice isn't between "support HTML/CSS/JS" and "don't support HTML/CSS/JS". No Web browser can drop backwards compatibility. We haven't even been able to get rid of backwards compatibility for much worse things than CSS, such as <center>, because sites you need to use (such as Hacker News) won't update their markup to use CSS for layout.
asm.js is attractive because it is a small extension to the Web platform, so the overall additional complexity of an HTML/CSS/JS engine (which we are likely never going to be able to drop) doesn't go up much by implementing asm.js optimizations.
We're talking about Web browsers. Technologies that try to integrate into Web browsers specifically but don't have a good compatibility story with the platform haven't had a lot of success.
In some ways they do, but they are also nonstandard, with the downsides that that brings. Neither overall approach is perfect, it's good that we have a combination of both in our field.
Why get hung up on the fact that something is non-standard? Things don't just pop into existence as a standard way of doing things. Some core group of people has to agree on them and then away we go - we have a standard!
That's why everybody should actively use non-standard things that they want to become standards.
If you want to go native, then go native. If HTML/JS/CSS is a hack, then don't target the browser. It's that simple. Oh, you want to target the browser because it's ubiquitous? Well, that's because HTML/JS/CSS.
And on "emulating much better native platforms", well those native platforms are also trying (unsuccessfully) to emulate the web and the web is still winning.
While I fully support the development of alternatives to the web, I do not support that we should stop working on legacy technologies. Do you think people should have stopped working on gas lighting when electric lighting came along?
Speaking of which, wasn't Microsoft supposed to add WebRTC/ORTC support in its new browser, too? I don't think I've heard anything about it in the recent official announcements other than last year's rumors.
WebRTC was in development even for IE (and it requires Opus), but Opus was never enabled for the audio tag. I don't see Opus mentioned anywhere in that list (while WAV is).