Asm.js in Firefox Nightly (blog.mozilla.org)
297 points by aeosynth on March 21, 2013 | 130 comments



Actual numbers from the Groups link at the bottom:

    So I discovered from Alan today - asm.js was everything I hoped it would
    be. 

    One of the more intensive examples in OpenSCAD
    (https://raw.github.com/openscad/openscad/master/examples/example006.scad)
    gave me these metrics:

    Native: 402 seconds, 
    asm.js version: 605 seconds, 
    asm.js version in Chrome: 3724 seconds

And the exciting part? Backwards compatibility even with IE6, forwards compatibility with optimizations that haven't been invented yet – and no 80s-era blob+VM architecture in sight. All without inventing any half-baked new languages, file formats, virtual machines, or whatever else, and building on a language spec that's already an ISO standard. Eat your heart out, Native Client!


Taking a comment from someone named xal on this thread:

asm.js is the hack of the year.

Because it makes it simple to get good performance.

Probably the biggest roadblock in software development is the cost of complexity. Many great projects fail to be realized because of development costs, and development costs rise exponentially with complexity. Software development needs to become simpler. The success of simple and agile programming languages like JavaScript proves that. Want to learn it? Just open your browser and type some code.

This is one huge step in the right direction. It makes it very easy to get good performance on the most ubiquitous platform, the web. It makes many things possible, at lower cost. And this is the same big reason why a platform like Firefox OS has the potential to be awesome: because it makes things simpler.

Now I wish that the same thing done here for JavaScript were also done for HTML, CSS, the DOM, etc. The web needs simpler foundations, and modularity on top of them.


"Now I wish that the same thing done here to javascript, were also done to html, CSS, the dom, etc."

I agree 100%. If only we could see similar progress in the speed of DOM updates/manipulation, life would be great. The fact is that most webapp performance issues aren't JS-dependent; the bottleneck is the DOM.


I'm not sure why you think asm.js isn't a blob: in order to achieve those performance levels that you see in Mozilla, you have to interpret JavaScript as something that is not JavaScript, which requires having a spec in hand, dealing with potentially ambiguous encodings, and interpreting the asm.js as a bytecode, rather than as standard JavaScript.

At that point, the only difference between "asm.js" and a "blob" is that asm.js is unreadable ASCII, while the blob is unreadable binary.


But it's not; by your logic, a compiler producing machine-generated K&R C in the late 70s would also be a blob.

Only, as anyone mining the web for esoteric code will attest, anything written in standard C since probably the mid-80s and using only the ANSI C standard library can still be fed to a compiler today and produce perfectly functioning code, taking advantage of every advancement in compiler technology in the 30 years that have followed.

Those C source files do not, for example, make assumptions about Harvard/von Neumann architecture, stack layout, calling conventions, the presence or absence of SIMD instructions, and so on. C is called "high-level assembler" for a reason, just as "asm.js" is called "asm.js" for a reason.

Even if we had some standard binary format for describing software available since the 80s, it would almost certainly lack sufficient descriptive power to capture constructs that could be vectorized, or otherwise transformed using more modern compilation techniques that simply didn't exist at the time. Lowering to a binary form throws all that semantic information away. And the more you add back the more the binary form starts to look like a pointless transformation of the original source code.

Then there is the idea that such a format could ever have been designed in the 80s and lasted us to the present day. Native Client proponents and similar such "VM fundamentalists" are effectively claiming that we can invent such a format, when it has never been done before in the history of computer science.

(Pre-emptively rejecting boring VM counterexamples like Java for reasons that can be extrapolated from what I just described. Every comment on this thread brings it closer to HN's "controversiality" scoring penalty, and I'd very much like for this link to stay on the front page all day)


> But it's not; by your logic, a compiler producing machine-generated K&R C in the late 70s would also be a blob.

Seeing as generated C is essentially unmaintainable and unsuitable to serve as a source language, then yes.

> Native Client proponents and similar such "VM fundamentalists" are effectively claiming that we can invent such a format, when it has never been done before in the history of computer science.

I'm confused. Asm.js is an ASCII bytecode. We have plenty of other examples of abstract bytecode throughout the history of computer science, but they tend to be binary, not ASCII.

Other than asm.js using a silly encoding for backwards compatibility purposes, what's the difference?

> (Pre-emptively rejecting boring VM counterexamples like Java for reasons that can be extrapolated from what I just described)

Huh? Now I'm even more confused. Are you arguing against Java's virtual machine specification, the constraints of its bytecode as defined, or what?

Notably, Android converts stack-based Java bytecode to register-based Dalvik bytecode; it's not as if the JVM bytecode strictly defines all conventions in an immutable way, or even that asm.js is all that different in practice.

I wanted to address this one last, since it's a bit of a tangent, but it demonstrates the issues of "architecture agnosticism" fairly well:

> Those C source files do not, for example, make assumptions about Harvard/von Neumann architecture, stack layout, calling conventions, the presence or absence of SIMD instructions, and so on.

That's not exactly true. If pure C is output, then yes, this is the case. In reality, especially with Harvard and modified Harvard architectures, one must take into account architecture differences to achieve reasonable performance, especially on the hardware as developed in the late 70s.

For example, there is often limited RAM, in which case one might want to store data in program space. However, C pointers do not, in any standardized way, support annotating the memory space (program, data) in which they live, and code cannot necessarily portably extract data from program space so as to operate on it as if it were in data space.

So, for a Harvard architecture, you often either wind up writing non-portable C, or you deal with inefficiencies -- some of which may not be possible to overcome, as there's only so much RAM or data storage to work with.


> Only, as anyone mining the web for esoteric code will attest, anything written in standard C since probably the mid-80s and using only the ANSI C standard library can still be fed to a compiler today and produce perfectly functioning code, taking advantage of every advancement in compiler technology in the 30 years that have followed.

Have you ever actually tried this? It doesn't work.


I have. When I started at Arbor Networks in the early 00's, we needed a fine-grained timer library that was millisecond-granular and could efficiently manage thousands of scheduled events. I stole mine from the MIT-licensed Athena codebase, which dates back to the '80s.


But was it generated code? I've had pretty bad luck with old scheme/lisp implementations that compile to C.


>I'm not sure why you think asm.js isn't a blob

Because it isn't? It's a perfectly normal set of semantics, in language form, layered on top of regular JS.

>in order to achieve those performance levels that you see in Mozilla, you have to interpret JavaScript as something that is not JavaScript, which requires having a spec in hand, dealing with potentially ambiguous encodings, and interpreting the asm.js as a bytecode, rather than as standard JavaScript.

All of those are observations that are BESIDE the point of whether asm.js is a blob or not.


Actually, asm.js is JavaScript. That is, in fact, much of the point. It's kind of like the PyPy project's RPython: a restricted subset of Python that lends itself well to ahead-of-time compilation and optimization.

This leads to the question of whether (when?) someone will try to port Narcissus to it, potentially leading to a practical self-hosted JavaScript engine. "MonkeyMonkey", they could call it.


It's a blob that has excellent integration with a very well-known interpreted language with good tool support, including some nice debuggers. It looks like it will have a nice foreign function interface. Although not terribly readable, at least you don't have to learn a new assembly language when you really need to examine the object code, and you could easily edit the binary if it helps you debug something. Perhaps you can see the appeal?


> Backwards compatibility even with IE6

Only as long as runtime performance is ignored, though. If Chrome gets a 6x perf hit, I can't even imagine how unusably slowly it would run in IE6.


Perhaps for your OpenGL games, but for some business application originally written for VB4 that basically draws a bunch of forms and puts stuff in a database, that's more than fast enough for some compiler that took VB4 and turned it into JS.

This is basically the raison d'être for NaCl as far as I can tell: it was at various points advertised as being for games and other 'high performance' stuff, but the more probable commercial reason for it is to leverage companies onto a non-Microsoft platform through an easy migration path. So I think the hypothetical VB4 compiler comparison (or a million similar "business app migration" examples) is pretty fair.

[edit: per subsequent comment, the notion that GC somehow can't be implemented is nonsensical]


>the notion that GC somehow can't be implemented is nonsensical

http://asmjs.org/faq.html

"The asm.js model provides a model closer to C/C++ by eliminating dynamic type guards, boxed values, and garbage collection."

Sure, nothing stops you from emulating a complete CPU. Of course you can do anything.

However, how large will that JavaScript file be? Also, will it be fast enough to be usable?


I think the difficulty of implementing GC would be the lack of access to stack primitives, something that would be impossible to provide in a way that was backward compatible.

By that I mean the ability to inspect the stack and determine the values present there and determine whether or not they point to 'heap' allocated memory.


From the asm FAQ, it seems like GC will happen:

Q. Can asm.js serve as a VM for managed languages, like the JVM or CLR?

A. Right now, asm.js has no direct access to garbage-collected data; an asm.js program can only interact indirectly with external data via numeric handles. In future versions we intend to introduce garbage collection and structured data based on the ES6 structured binary data API, which will make asm.js an even better target for managed languages.


I took that to be actually talking about the other way around -- accessing JS GC'd objects from the asm.js code's heap and having it work correctly (which is currently onerous). I don't think that helps implement GC of the asm.js heap.


VB uses garbage collection. You can't compile it to the JS subset of asm.js.


That's like saying you can't implement python in x86... if it can run code, it can run code.


You were able to do the same without asm.js.

Problem is, it's way too large and way too slow. People aren't doing these things because they simply aren't feasible.

If you could compile Python directly into that asm.js subset of JavaScript, things would be okay. However, emulating a whole CPU and running some Python interpreter on top of that won't be okay. Even if you would magically reach 100% of the regular speed, it just would be way too fat.


V8 is about 100 times (seriously) faster than IE6's JS engine. So, yes, "backward compatible with IE6" doesn't really mean anything.

After executing 5 million statements, IE will ask you if you want to continue. It will ask you many times. You'll probably sit there all day long clicking that button.


When the 'competition' is being advertised as an easy migration path for old software, then "backward compatible with IE6" means everything - it's just another way of measuring how much pain is involved in migrating some software over, only here instead of having to switch your entire corp and customers to Chrome, the change is mostly restricted to the developer toolchain.

You're continuing to perpetuate the notion that asm.js only has implications for high performance software, despite replying to a comment where I gave you a very real use case that has nothing to do with performance. And as I suspect you already know, since you seem knowledgeable, GC can easily be implemented in about a million different ways using an emulated heap on top of a typed array, so yes, even a naive implementation of VB4 could easily be made to work.
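(As a minimal sketch of what "an emulated heap on top of a typed array" could look like - purely illustrative, not anything from the asm.js spec, and a real collector would be far more elaborate:)

    // Hypothetical illustration: a bump allocator over a typed-array "heap",
    // plus a compiler-maintained shadow stack of roots (since, as noted elsewhere
    // in the thread, the real JS stack can't be inspected).
    var HEAP = new Int32Array(1 << 20);   // 4 MB emulated heap
    var SHADOW_STACK = [];                // generated code pushes/pops root pointers here
    var next = 0;                         // bump pointer
    // Object layout: [size, mark bit, ...payload]; returns the payload address.
    function alloc(words) {
      var ptr = next;
      HEAP[ptr] = words;
      HEAP[ptr + 1] = 0;
      next = next + words + 2;
      return ptr + 2;
    }
    // A collector would mark from SHADOW_STACK, sweep HEAP and build a free list:
    // all plain JS over the typed array, so it runs even in engines that know
    // nothing about asm.js.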


It doesn't make IE6 any faster, but the code will still run. I suspect that's what they mean by "backward compatible": it can't work miracles, but will at least do no harm.


"Run". Yes, just ~600 times slower. At that speed it probably won't be very useful. You have ~50 msec for an operation which is perceived as instantaneous. If it takes 30 seconds (600 times as long), people will not use it. It simply isn't feasible.

This stuff is only ES3 compatible because it didn't need anything from ES5. A low-level compiler target doesn't need any of these things: http://kangax.github.com/es5-compat-table/


>if Chrome gets a 6x perf hit

Who said Chrome gets a 6x perf hit?

You just have the numbers with asm.js.

If Chrome hasn't been optimized for asm.js-style code, then those numbers would either be what Chrome already achieves with JS (e.g. plain old V8 speed), or slightly faster than regular JS (because of the more streamlined, easier-to-JIT asm code).


You're missing the point. The current optimized-for-normal-JavaScript Chrome takes a 6x hit on asm.js code, compared to OdinMonkey. That's kind of crappy but it's not a killer, and it could get asm.js support (maybe, who knows). The fear I was talking about was browsers which are already much slower than Chrome, and have no chance in hell of ever getting asm.js support, hence the mention of IE6.


Can you go into more detail about why you are glad there isn't a blob + VM approach? I know the long history of Java vulnerabilities, etc... but I'd be curious to hear what someone knowledgeable about the subject would say.


It's easy to show. Imagine Gmail was written for and compiled with an IE3 or IE4 era Javascript parser/interpreter/compiler, and the result was distributed as a blob to run natively. Shortly thereafter Google goes out of business.

Now imagine the browser vendors finally figure out how to make languages like Javascript fast, only there is the small problem of 100 zettabytes of blobs on the web, written by companies that no longer exist, and that can't be optimized at all because it's all distributed in a form that resists any inspection.

We don't even use stacks any more – they were some daft hold-over of the 60s that lasted far too long into the 2040s. Only all those zettabytes of blobs were compiled with optimizations that target stack-based VMs.

That's the promise of Native Client.


A highly obfuscated and minified JS application like GMail has about the same level of introspectability as Java bytecode. I have little problem consuming Java bytecode written 18 years ago by companies that went out of business.

Leaving aside binary format representation of portable VM code, there's also the issue that NaCl supports multithreading and asm.js doesn't. WebWorkers/Isolates are a poor substitute for high performance games.


"WebWorkers/Isolates are a poor substitute for high performance games."

Have you tried? (genuinely curious)


WebWorkers have also been recently enhanced so that it is possible to pass a message with no copy (by moving it out of the current WebWorker; it loses its reference to the data).

I suspect taking advantage of this would greatly help in a game situation.
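(For the curious, a quick sketch of that zero-copy hand-off; the worker script name here is made up:)

    // Transfer (rather than copy) an ArrayBuffer into a worker. After postMessage
    // returns, `frame` is neutered on this side; ownership has moved to the worker.
    var worker = new Worker('physics-worker.js');   // hypothetical worker script
    var frame = new ArrayBuffer(1024 * 1024);
    worker.postMessage({ frame: frame }, [frame]);  // second argument is the transfer list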


What do you mean by that? Java bytecode is leaps and bounds more introspectable, and by that I mean that programs can actually garner useful information from it; otherwise tools like Eclipse's code completion wouldn't work. The source won't be preserved, but the class, field and method names will. That's much better than what you get from minified JS. I also mean that, despite its inconvenience, Java's reflection API is still quite powerful.

I don't get the fixation with the fact that the filename ends in a .js extension; just because a bunch of code is in a familiar interpreted language doesn't make it readable or maintainable in any other manner.


Native Client also supports SSE on x86, and NEON on ARM.


And will binaries compiled for Native Client today support SSE6 (released Q1 2014), ARMv15 (Q3 2016), and so on? Of course, if the authoring party still exists to recompile it, and they have a commercial reason for doing so..


This is a problem no matter what your target is.

Web apps written in 1997 do not take advantage of modern browser features.

Browsers written in 1997 can not render modern web pages.

At least with NaCL, you have the option of taking maximum advantage of the available hardware. I'd love to see NaCL grow into having actual on-chip support for NaCL-style sandboxing, the way we've seen CPU vendors add support for virtualization-specific extensions.

At the end of the day, I don't see this as a death blow for NaCL. Rather, it pushes us farther into a post-JS/HTML/CSS platform. We'll start seeing new languages take off, and the underlying platform target itself (browser, native, or otherwise) will matter less and less.

asm.js can be just another compilation target, alongside NaCL and iOS. Once you go in that direction, the HTML/JS/CSS monstrosity starts to become less relevant outside of hypertext documents.


That would be true if asm.js were just a faster interpreter, but it isn't. It's not even really a separate language: "use asm" is just a hint to the JavaScript runtime that the code follows a particular set of conventions that, it so happens, are very easy to optimize.

An asm.js-supporting runtime which sees this hint can run that code through a different compiler that takes maximum advantage of the available hardware (i.e. whatever it's running on, which the asm.js code doesn't need to know or care about). A runtime that doesn't support asm.js will just run the code like it did before.

My point is that asm.js is every bit as capable of fully optimizing for the host as NaCl. It has two big advantages over NaCl, though. One is that it's architecture-independent: the host knows what architecture it uses, and (assuming it has the compiler, which an asm.js-supporting runtime has by definition) can act accordingly, taking full advantage of its own features. The other advantage is that there's a fallback even in case the runtime doesn't support asm.js: it's no faster than ordinary JavaScript in this case, but at least it does no harm.
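(A tiny sketch of what that looks like in practice: "use asm" is just a directive prologue, like "use strict", so an engine that doesn't recognize it sees an inert string and ordinary JavaScript.)

    function MyMath(stdlib) {
      "use asm";             // an asm.js-aware engine hands this module to its AOT compiler
      function halve(x) {
        x = +x;              // annotation: x is a double
        return +(x / 2.0);
      }
      return { halve: halve };
    }
    // Any other engine runs exactly the same code, just without the extra optimizations.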


> I'd love to see NaCL grow into having actual on-chip support for NaCL-style sandboxing

A bit off topic, but this already exists, in the sense that there is no difference between NaCl-style sandboxing and what every OS kernel already provides for its processes, except that NaCl is generally locked down tighter. All that would need to happen is for OSes to provide support for containers with arbitrarily restricted syscalls (as Linux already does) and NaCl could give up on the software fault isolation without any real loss in security.


Kernels have provided this feature since forever. SELinux, TrustedBSD, you name it, provided this. seccomp on Linux is just an "in-code, per-process" version, while SELinux, or earlier things like RSBAC or LIDS, provide more flexibility (but apparently nobody understood how to use any of this in the past :)


The major difference is that I can run NaCL processes inside my own userspace process without kernel support, and that a failure in my sandboxing won't also be a failure inside the kernel.


I also believe this to be true mostly for real apps and games. I don't see a lot of benefit for normal web development, though.



ARMv15? Is that a new architecture after ARMv8? Do you have a link on it?


It relies on typed arrays, so unfortunately it's not backwards compatible with IE6 :(


If I'm understanding things correctly, the JS produced is a subset of ES3. The use of typed arrays simply gives supporting platforms an additional speed boost.

According to Brendan Eich: "all of JS (ES3) is 'universally supported' [by asmjs]"


I'm reading Eich's tweet differently. Here's the twitter exchange:

https://twitter.com/BrendanEich/status/302897797827530752

=====

Alias Cummins @no_other_alias

@BrendanEich So would I be right in thinking that the point of asm.js is to tie together all the universally supported bits of JS?

BrendanEich @BrendanEich

@no_other_alias No, all of JS (ES3) is "universally supported". http://asmjs.org/ is a no-GC-pause well-typed compiler target language.

=====

@no_other_alias is asking if asm.js is about formalizing a JS subset that's universally supported (in all browsers). Eich says no, all of JS (ES3) is universally supported (so no subsetting is required for cross-browser compatibility). Asm.js is about formalizing a JS subset that can be compiled to much faster machine code.

So I don't think Eich is making a claim about asm.js backward-compatibility here.


Typed arrays are a background thing in this case. The code one writes in asm.js (or compiles to asm.js) doesn't mention them at all. IE6 will see the code and use normal arrays, because as far as it knows, that's the only kind of array there is.


Sadly not, I overshot the mark slightly here (what worse advocate for your stuff than an incorrect advocate!). Typed arrays allow for things like casting, where the same chunk of 'heap' can be seen through multiple views, and interpreted as char* by one and float* by the other.
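(To illustrate, using nothing beyond the standard typed-array API:)

    var heap = new ArrayBuffer(16);       // one chunk of memory...
    var bytes = new Uint8Array(heap);     // ...viewed as char*
    var floats = new Float32Array(heap);  // ...and as float*
    floats[0] = 1.5;
    // The same four bytes reinterpreted: 0 0 192 63 on a little-endian machine.
    console.log(bytes[0], bytes[1], bytes[2], bytes[3]);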

It can't even be shimmed on IE6, since you can't overload the array indexing operator there.

It's entirely possible and easy to implement an extension/variant of Asm.js that could legitimately target these browsers, by providing shimmable names for the array indexing operation, but diminishing returns and all that..

Note nnethercote (parent comment) is one of Mozilla's engineering ninjas and author of Valgrind, it's pretty likely he knows what he's talking about here.


That's not what backward-compatibility means here. IE6 sees ordinary JavaScript code and executes it. It's true that it can't do the typed-array magic in the background, but it doesn't have to. The JavaScript code that IE6 sees has the same result as compiled asm.js code would; it just does it in a less-optimal way.

That's all that asm.js really is: a set of JavaScript coding conventions that (when followed) prove to be easy to optimize to near-native levels of speed, along with a compiler hint to tell the runtime that these conventions are being used. IE6 doesn't understand the compiler hint, but because the asm.js code is still valid JavaScript, it'll still run.

I suspect that either nnethercote and I are using different definitions of backward-compatibility, or nnethercote is mistaken as to what exactly asm.js is. Authorities are less likely to make mistakes, but they can still make them.


Typed arrays are an explicit JS feature: https://developer.mozilla.org/en-US/docs/JavaScript/Typed_ar.... There's nothing "background" about them.

None of IE 6, 7, 8 and 9 support them: http://caniuse.com/typedarrays. Ergo, asm.js won't work on IE 6, 7, 8 or 9.

(Speaking of authorities: I got this info from azakai, who's the author of Emscripten and one of the designers of asm.js.)


But it's terrible! Bytecode + VM would be so much better ...


I am so excited about it. It's worth noting that I'm porting LLJS to compile to asm.js already, and I have basic code already working with it: https://github.com/jlongster/LLJS

I plan on polishing it up tonight and publishing/blogging about it tomorrow.


Wow, your project looks amazing. My evening's plans just got cancelled.


It's worth noting that LLJS itself came from a few smart guys from the Mozilla Research team. I work for Mozilla but as a web developer and I'm just doing the port.

The asm.js branch is highly unusable as is, but it should be somewhat stable tomorrow!


I remember reading about this on HN a while back and wondering why anyone would want to use it, but now it totally makes sense. I don't want to go back to writing C/C++ but if you could write your app in JS, and then do the tighter "inner loop" parts in LLJS... that sounds really enticing. I'll be following the project closely.


I wasn't convinced of it either at first. You can get around a 5% speed increase with compiled code (because it uses typed arrays and other things which are heavily optimized), and I played around with it for a game, comparing it to emscripten and raw js: https://github.com/jlongster/js-lljs-c-benchmarks/

When I heard of asm.js I immediately realized that I wanted LLJS though. You're absolutely right that it could turn into something which helps you write only parts of your app in asm.js.

Everyone should note, however, that you usually don't need this. Javascript is still highly performant. This is just helpful for things like incredibly complex 3d games, number crunching, etc.


This post mentions IonMonkey, which is the newest enhancement to Firefox's Javascript engine. According to http://arewefastyet.com/ , it appears to be doing pretty well so far (red is old Firefox, purple is current Firefox, black is future Firefox, green is Chrome, lower is always better). Excited to see what'll happen once OdinMonkey gets added to the mix.


Nothing, at least not for the benchmarks currently on awfy. You need to opt in to OdinMonkey optimizations with "use asm", and of course these benchmarks don't do it.


Small correction: purple is current Firefox, not red. Red is not shipped in any build.


Ah, I had no idea that IonMonkey had made its way to the stable release. Very cool.


Worse really is better. Asm.js fulfills the promise of formats like ANDF, except by going through one of the shittiest intermediate languages you might imagine.


Already adopted is better than not already adopted.



> except by going through one of the shittiest intermediate languages you might imagine.

Visual Basic?


VB's pcode was actually not horrible ....


Quick test on the BananaBread benchmark.

Without asm.js:

  preload : 53.698 seconds
  startup : 11.904 seconds
  gameplay: 369.86 total seconds
  gameplay: 196.538 JS seconds
With asm.js:

  preload : 55.513 seconds
  startup : 7.093 seconds
  gameplay: 75.848 total seconds
  gameplay: 54.318 JS seconds


Hoisting my comment up: Executable != useful. Saying asm.js runs in VMs that have no special support for it breeds a false sense of portability.

If Firefox is the only browser that implements asm.js, it'll have about the same issues as Dart or NaCL. That is, Dart also compiles to JS, but if Dart2JS were 6x slower on Firefox than Chrome, no one would be cheering; and in fact, this is a complaint Mozilla themselves raised in the beginning (that if the DartVM had an enormous performance advantage compared to Dart2JS, it would make other browsers look bad and fragment things).

If I'm writing a game in C and compiling it to asm.js, and it's gonna run 6x slower everywhere else, I'm effectively developing it for Firefox only no differently than if I had compiled C to NaCL/PNaCL.

For this to be effective, it has to be cross browser. At least, Mozilla and WebKit/Chrome. (honestly, without WebGL support, I don't think it helps IE with games anyway)


I agree with your point; while functionality portability is important, we really want performance portability too.

A big challenge we seem to have with any of the available options is convincing all the major browser makers to agree on something. And even though asm.js won't have perfect performance portability at first, it looks like it should be a much easier sell to browser vendors than Dart or NaCl were.

With asm.js, browser vendors won't need to add a new VM. They'll just need new optimizations within their existing VM. There's no new security model to understand, no API from someone else's browser to emulate, and no large set of features which will be redundant between VMs.

And asm.js has a much lower risk of becoming a legacy nightmare if it doesn't catch on, or if the specification changes. It's just optimizations, so there's room to maneuver when customers with old code demand that you support them forever.


On security, I'm not sure I agree. Part of achieving optimizations is making enough assumptions to perform them, and that leaves you open to holes in your validator. If validation isn't correct, then you could potentially trigger the optimizer to create bad code that would be exploitable.

We've basically moved from Java bytecode verification to asm.js verification. This is just speculation, but I'm actually concerned about the potential security implications.


"Browser vendors" is a bit abstract when you are talking about a group of four, two of which have horses in the race.

In theory there were good reasons to think browser vendors would support Ogg; in practice MS and Apple showed zero interest.


I don't think you understand asm.js. It's cross browser. If it happens to run slowly in v8 that's just because v8 is poor at optimizing JS that makes heavy use of typed arrays (which is already the case in some scenarios, demonstrated by Chrome's performance being inferior to Firefox's on many non-asm.js benchmarks).


Did you fail to understand my point? Dart is cross browser too. You write a Dart app, it runs in the DartVM on Chrome, and runs as JS on Firefox. If Firefox runs it slowly, it's because Firefox is worse at recognizing the patterns of transpiled Dart code, right? I could define my own spec, "dartjs.js" which has specific style of writing JS that a VM could be tuned to recognize. Same issue.

I'm not attacking ASM.JS here, I'm just pointing out that if it isn't supported by other browsers, no one seeking high performance can depend on its portability in that regard.

None of this makes any difference to a developer who is trying to ship a game, he has to deal with the world as it is. And if the world is Firefox has asm.js, and Chrome has NaCL, then they will write their games in C/C++, and use Emscripten+Asm.js for Firefox, and NaCL for Chrome, so NaCL still won't go away.

The point is, rather than just dumping a new spec and VM module on the community, it has to be a cooperative effort so that there's a stable platform to target that isn't a pathologically bad performer elsewhere.


When you say 'dumping a new spec and VM on the community' are you referring to NaCL and Dart? Because asm.js isn't a new vm, and it already works in other browsers, like I said.

If a particular device can only run Java applications in an interpreter, that doesn't mean Java doesn't work on that device, or that Java isn't cross-platform. It just means the vendor hasn't done the work to implement a high-performance JIT (or can't).

In this case, nothing is stopping other vendors from providing high performance for ASM.js applications: The spec is being developed in the open with involvement from interested parties - not by a bunch of stakeholders at one private company - and it produces JS that works in any JS runtime. You could, if you chose, simply make ASM.js applications faster by making all JS applications faster; many of the same optimizations apply.

Any suggestion of 'support' is entirely missing the point: It's Just JavaScript. REALLY. That's it. The point is that it's JavaScript that conforms to rules (enforced on the ASM.js compiler, not the JS runtime) that enable runtimes to apply additional optimizations. Some of these rules can (and probably already do) enable a runtime to apply additional optimizations even if it is unaware of asm.js.

The most obvious example of this is the 'n | 0' pattern. When you're doing some integer arithmetic in JS, you can and probably will write this:

    var z = x * y;
But an ASM.js compliant compiler will, if x and y are both integral, write something like this [1]:

    var z = (x * y) | 0;
These both achieve the same result, but the latter provides more information to any JS runtime - not just an ASM.js one - that enables more optimizations. And in fact if you benchmark properly right now, the | 0 hint produces better performance for integer arithmetic in builds of Firefox that do not have asm.js.

[1] Technically I think it might actually spit out a call to Math.imul, and not an actual use of the mul operator. But I digress.


>These both achieve the same result, but the latter provides more information to any JS runtime - not just an ASM.js one - that enables more optimizations. And in fact if you benchmark properly right now, the | 0 hint produces better performance for integer arithmetic

Could you elaborate please? I am not a JS expert, so I am confused why ORing with an int would help in a multiplication. I'd understand if it was (x + y) | 0, but the star operator is - to the best of my knowledge - not overloaded in JS, so the JS compiler can already assume x and y to be numeric.

What kind of optimization does it trigger in your example?


It's not about overloading.

The JavaScript language doesn't have integers, only floats. A JS engine has to use a floating-point add for (x + y). That's slower than an integer add. But (x + y) | 0 truncates any float values that could theoretically come up, allowing the JS engine to safely optimize it by using an integer add directly.
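(A small sketch of how these coercions double as type annotations in asm.js-style code; simplified, not spec-exact:)

    function intAdd(x, y) {
      x = x | 0;             // x is treated as an int32
      y = y | 0;
      return (x + y) | 0;    // the engine can emit a plain integer add
    }

    function dblAdd(x, y) {
      x = +x;                // unary + plays the same role for doubles
      y = +y;
      return +(x + y);
    }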


Dart has been developed in the open as well. It's not even released in Chrome yet; it's been fully in the open for 2 years, which is most of its existence, and has undergone massive revision since the original seeding of the project, based on community contributions. Yes, my comment was snarky.

No matter how you slice it, adding a pragma, and requiring code to be written in an idiomatic format to gain maximum advantage by a conforming VM implementation is essentially introducing a new set of semantics to the VM. The fact that you can sneak it into the existing grammar (x|0) instead of needing to extend the grammar (int x), is a neat trick.

I fully support what asm.js is doing; I don't have any objections to it, and once the garbage collection support happens, I will look very hard at making GWT support it as an output format (primarily to support PlayN). But I wish Mozilla would stop peppering every one of their press releases with a thinly veiled snipe at competitors.


The Dart people should make dart2js generate asm.js code.


asm.js is currently only a usable target if you use C.

You also can't compile JS to asm.js, for example.

You get C-like performance by writing C.


That is true, but remember that VMs for languages like Python, Lua, etc. are written in C. That means you can compile them to asm.js. There are demos of this (using emscripten, not asm.js) from a while back.

If you ported a VM with a JIT, and added an asm.js backend for that JIT, it could be a very fast way to run basically any language in asm.js.


Not sure what the point of hoisting/copying your comment up is? Now all of our replies must be duplicated as well?

Original replies are under the original post, https://news.ycombinator.com/item?id=5419154


I should have deleted the original, my bad.


ASM.js is easily the hack of the year. Every hacker should walk around with a smile on their face today.


I think that compile-to-JS languages are going to explode in popularity even more than they already have when they can compile to an optimized asm.js. I might be misunderstanding its impact, but I'm hopeful.


It doesn't really change the game for traditional web development, though. Things like GWT might be faster with asm.js, but GWT didn't take off for reasons other than speed.


This is huge. Never heard of Asm.js until now but this is the first time I'm really excited about Javascript.

I feel like dropping everything right now and starting to play with this.


Ok, everybody, Javascript is over. Now everybody will be able to use their favorite language and chances are that it will be faster.


Sorry I'm late to the party.

Could someone explain what this is? All I've found in search is talk about how much faster it is. Devs write code in C/C++ and target js? Is that it?


asm.js is a low-level statically typed subset of javascript, roughly equivalent to C (that's not exactly it but close).

As a result, it can be AOT-compiled to machine code rather efficiently and lead to excellent performance with special support, while still being 100% compatible with browsers providing no special support (as it's a subset).

The idea is to make it an efficient (if low level) language to use out of the box, and an efficient compilation target for higher-level non-javascript languages (C and C++ are used for comparisons but e.g. clojurescript or elm could compile to asm.js as well instead of their current javascript target).

Basically, as the name implies asm.js tries to be an assembly for the web, but a backwards-compatible one (as opposed to NaCl)


It's a subset of Javascript that constrains the feature-set so that optimizations can be applied. This post is announcing that Firefox nightly has added those optimizations, so you can use the spec and get the speed bump. It's particularly good for transpiled languages, as they can easily target this subset.

http://asmjs.org/faq.html


tl;dr Asm.js is a subset of JavaScript that is easy to optimize. It's sort-of a competitor to NaCl, but its basic premise is that a specialized runtime is just a bonus, not a requirement, to run asm.js files.

Any and all asm.js code has the same semantics as regular JavaScript. This is the most important thing about it--it already runs, everywhere. If a browser gets an asm.js runtime, the only thing that changes is speed.

Asm.js code is very, very low-level. Pretty much the only operations supported are statically-typed arithmetic and array loads/stores. It doesn't use any dynamic behavior; the closest it has is arrays of functions that must have identical signatures, so that you can implement vtables yourself. Its resemblance to assembly is intended to make it a better compiler target.
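(To make "statically-typed arithmetic and array loads/stores" concrete, here's a rough sketch of the shape of an asm.js module; the names are invented and the details are simplified from the spec draft:)

    function SumModule(stdlib, foreign, heap) {
      "use asm";
      var HEAP32 = new stdlib.Int32Array(heap);    // all memory access goes through typed-array views

      function sum(ptr, n) {
        ptr = ptr | 0;                             // int parameters
        n = n | 0;
        var i = 0;
        var total = 0;
        for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
          total = (total + (HEAP32[(ptr + (i << 2)) >> 2] | 0)) | 0;
        }
        return total | 0;
      }

      return { sum: sum };                         // exports
    }

    // Callable from ordinary JS in any browser; only asm.js-aware engines AOT-compile it.
    var mod = SumModule(window, null, new ArrayBuffer(0x10000));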


For traditional web development this won't change too much, does it? I mean, you could write the business logic of your app in whatever compiles to asm.js (which in practice is mostly done server-side), but in the end you would still do DOM manipulation using JS, or not?

Makes a lot of sense for rich apps and games, though.


Yes, I would love to see an adaptation of MVC frameworks like Ember and Angular to use asm.js - not really sure if it's necessary though, since they are already that fast.


There are some speed comparisons in the comments here; it would be really nice to see some compiled-code-size comparisons between native and asm.js.


See http://mozakai.blogspot.com/2011/11/code-size-when-compiling...

Comparable with native after you gzip.

Note that this wasn't asm.js, however, just vanilla Emscripten output. asm.js output will be somewhat larger due to the strict type system.


> asm.js output will be somewhat larger due to the strict type system.

That's the part I was wondering about.

I'm hoping that we'll see rust compile to asm.js! ISTM that there's some similarity between web worker "transferable object" semantics and rust unique pointer semantics. If ES "binary data" objects end up being "transferable objects", maybe rust unique pointers can map onto transferable objects in rust tasks that are compiled to asm.js and run as web workers.


I approve of this link, just for the Amon Amarth reference!

That said, I'm somewhat excited about this, but also discouraged by the lack of commonality between major browsers in this regard. Now Firefox has asm.js, Chrome has NaCl, and Microsoft has ??? and Opera has ???. Uugghh...

Still, anything that stands to make the web experience faster and more useful is a good thing, I guess..


The only reason that I'm excited for asm.js is the fact that it's cross-browser, since it's literally just a subset of JS. Theoretically something transpiled to asm.js would be executable even in IE6, though I'd really like to see someone try.


Executable != useful though. This breeds a false sense of portability.

If Firefox is the only browser that implements asm.js, it'll have about the same issues as Dart or NaCL. That is, Dart also compiles to JS, but if Dart2Js were 6x slower on Firefox than Chrome, no one would be cheering, and in fact, this is a complaint Mozilla themselves raised in the beginning.

If I'm writing a game in C and compiling it to asm.js, and it's gonna run 6x slower everywhere else, I'm effectively developing it for Firefox only no differently than if I had compiled C to NaCL/PNaCL.

For this to be effective, it has to be cross browser. At least, Mozilla and WebKit/Chrome.


I agree that if that were the case, it would be a problem.

However, I don't think we will see a 6x slowdown here. JavaScript engines are already very fast on compiled code, because they have already been optimizing for typed arrays and so forth for a while now. Google even added a compiled C++ benchmark to Octane, for example.

In current benchmarks, asm.js gives you a speedup of anywhere from 1x (where normal JS optimizations already got us very close to native) to 6x. So 6x is the worst case, and there are plenty of cases in the middle. A 2x-3x difference is not big enough to create a false sense of portability; the web has tolerated 2x-3x speed differences for a long time now, and these things also change a lot in terms of who is fastest, on what, and by how much.

Furthermore, I would not expect a 2x-3x difference to last very long, again, because browsers have been competing on performance of this type of code for a while now (Octane, etc.), and will continue to do so.


Much of the discussion of asm.js focuses on Emscripten-compiled code. Would asm.js be useful for other tools that generate JS, such as CoffeeScript or Dart2JS?


Possibly yes, depending on the language. For CoffeeScript likely not, it is very close to normal JS, and lacks types.

For Dart, I don't know enough about how the VM works.

But in general, a language like Dart could work. For example, LuaJIT or PyPy could be ported, the C parts compiled into asm.js, and the JITs would need a new backend that emits asm.js instead of x86 or ARM. This could potentially make a language like Lua, Python, Dart, etc. very fast on the web. However, how feasible it is would depend on the VM architecture.

I hope to work on this kind of stuff later this year.


The problem is, you'll end up porting those languages' garbage collectors as well. I don't think it's that practical to compile GC'ed languages to asm.js until GC support is added. The GCs that are in Firefox and Chrome are very efficient low-pause collectors, and IMHO you really don't want to reimplement a collector in JS, especially since you can't take advantage of OS/MMU features, can't do multithreaded collection, etc.

Also, if you consider something like the Java Virtual Machine, it's practical to make it work on NaCL, but JS doesn't support blocking/synchronous calls, and how on earth would you make all of those synchronous APIs and multithreaded APIs work in that context?

asm.js is not isomorphic to NaCL in terms of feature set, but asm.js + something like OpenCL could probably work well for games.


GCing is a concern, yes. With the Binary Data API and some extensions, it might be possible down the road to have GC'd objects in asm.js compiled code. Definitely an interesting area to investigate.

Multithreading is another concern, if you want shared state (which the JVM would need). It can't currently be done, but again, it is worth thinking about, and perhaps new web APIs could enable it some day.


> Executable != useful though. This breeds a false sense of portability.

That's already a problem with many things on the web, though; different browsers have different performance profiles already, and things leading to a smooth 60fps in one browser may not do so in another one.

The flip side for asm.js might be this: you manage 60fps on Chrome; Firefox is also at 60fps but uses fewer resources (since it's got special support), so you save wee birdies. That's cool.


Well, probably one could make the Dart compiler output asm.js. Also, when you have an incompatible VM, you need to serve different content to different browsers. This, at the very least, eliminates that.


Aah, OK. I actually didn't realize that, my bad. I was thinking of it as something analogous to NaCl.

That's good to know, then. Hopefully this makes things better for everybody!


The good thing of asm.js though: it's valid JS, so it runs in V8, JavaScriptCore/Nitro, Carakan and Chakra out of the box. Slower. But it still runs, as opposed to NaCl.


I believe asm.js is a proper subset of JavaScript, is it not?

So Firefox has JavaScript, Chrome has JavaScript, Microsoft has JavaScript, etc. Firefox can just run it a hell of a lot faster if you add "use asm" to the top of your function.


Any browser can run asm.js code, that's the whole point; it doesn't need special support.


This is exciting. I hope Google (and eventually Microsoft, Opera and Safari) jump on this.


Slightly off-topic: if anyone installed Nightly to have a look at this and the UI is scaled strangely (might just be a Windows thing related to HiDPI), you can set devPixelsPerPx in about:config to 1.0 to get the scaling back to normal.


Tried the demo in Chrome and FF Nightly like everyone, and like everyone, I guess, I was really impressed. It's visually twice as fast in Firefox. Firefox seems to have come a long way.


From the comments: "This can give a second life to the JS backend of PyPy."

That would be incredible. Any PyPy devs care to comment on this possibility? :3


I'd like to see a python->asm.js compiler. Then I throw that in my resource generation pipeline and I am a happy programmer.


But could Python really take advantage of asm.js? I think that this is more useful for compiling statically typed languages (C/C++/... maybe TypeScript?) to JavaScript.


It's been said elsewhere, but you mustn't forget that the Python interpreter itself must be compiled. The PyPy project is a Python interpreter written in a restricted subset of Python (RPython), and it would be doable to make the RPython compiler output asm.js.


How does this affect us "normal" JavaScript developers who don't know anything about low-level stuff?


From what I can see, it means you write your code in another language, and then it is "compiled" into performant JavaScript. Basically it would create much more efficient JavaScript than you could write yourself. Nothing to stop you from continuing to write regular JS, though =]


Some of your libraries might get a little faster.


This could be very useful for getting numerical and other high performance code into the browser.


Why can't we just have a universal bytecode that can be interpreted by any browser?


because politics, control, money, power.


Chrome compiles JavaScript into an intermediate language already; they just need to allow that intermediate code directly from a script tag.


Nice! I really need an environment to test asm.js.


Just learned about asm.js, awesome! But other browser vendors need to support it! Great work again, -moz ;)


Friendly reminder that Google's Dart dev team has made claims that Dart is already 30% faster than JS, and they plan to release VMs capable of surpassing 100% faster than JS, so yeah.

Keep in mind that if asm.js is half of native speed, Dart is capable of bridging the gaps of JS.


How will a fast Dart VM help Firefox and IE users?


You can't blame Google if Firefox and IE have no interest in supporting Dart natively.


I don't understand how that's supposed to make sense. A competing product doesn't have to take care of its other competitors.


You seem to really care about standards! So basically, ActiveX is faster than Dart. You should probably use that. You know, it's actual native performance :)



