Bringing asm.js to the Chakra JavaScript engine in Windows 10 (msdn.com)
299 points by bpierre on Feb 18, 2015 | 107 comments



Prior to asm.js, I spoke out against Mozilla's hard stance that making JS faster was the only option, as opposed to making a whole new VM or at least working to make PNaCl more portable.

But, this here is exactly why I changed my tune. The primary advantage of asm.js is not technical. It's political. It would work without requiring Apple or Microsoft to take any stance at all. The big, slow, stubborn players did not need to be convinced to make any investment of any form. And once it worked, it would not be embarrassing or risky for them to join the party late.

And, here is the evidence that asm.js is working where no other option would. I look forward to the day when we can move on from JS to a better-designed VM. But while I wait, I'll be satisfied for a while with asm.js (once the SIMD support goes mainline).


> The primary advantage of asm.js is not technical. It's political.

One could also argue that asm.js's advantage is technical, and it's called "backward compatibility" :).


I have doubts about asm.js. I'm not super informed so I might be wrong, feel free to correct me :)

1. I don't know how helpful open source is if that source is inscrutable. The JS generated by Emscripten isn't super useful.

2. The only reasonable way to use a non-JS language with asm.js is to use Emscripten to compile its runtime or compile a "native" binary. One of the arguments against PNaCl is that you basically need to use Google's implementation. The tie-in with asm.js is for developers, whereas the tie-in with PNaCl is for users, but I think it's tie-in nonetheless.

3. asm.js is billed as an 'open platform', but it's still limited to platforms that can run browsers with asm.js optimizations implemented. The number of these platforms, when you consider phones, consoles, appliances, and legacy platforms, is pretty small. They can also ill-afford a 50% speed decrease. Since this is basically Windows, Mac OS X, and Linux, your native applications are going to work just fine on those platforms. Sure you might have to do some platform abstracting, but not tons, and especially little if using something like Unity.


These are real concerns. Asm.js is not a perfect solution. I am not Alon Zakai, but I'll try to answer to the best of my understanding.

1. Asm.js is not an alternative for a hypothetical high-performance, human-readable language in the browser. Ex: it's not competing with the idea of a C#-in-HTML JIT compiler. It is competing with Silverlight, where the browser is already downloading inscrutable binaries.

2. Emscripten has so far received the most man-years of effort, but the idea of other systems using asm.js as a compilation target is very attractive, and it's not so difficult as to be a huge barrier for competition. For example https://rfk.id.au/blog/entry/pypy-js-faster-than-cpython/ (for a specific benchmark, after JIT warmup). Personally, I'm excited about the idea of hooking the Emscripten backend into http://terralang.org/. Terralang is a system for building custom, high-performance languages on top of LLVM.

3. Asm.js runs on any standards-compliant JS interpreter with no special support required. Even if the interpreter ignores the "use asm" directive, you can expect to see speed benefits simply from the fact that asm.js code is JS in a style that is easy to JIT very well. In order for phones, consoles, smart TVs, etc. to fully benefit, it will of course require the platform holders to put in the effort to respond to "use asm". "Open Platform" doesn't mean "Works perfectly before the platform holder even implements it" ;) Also, Asm.js is not targeted to compete with native, installed executables. It is just trying to get the capabilities of HTML apps on all platforms to be closer to those of native executables.
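To make that concrete, here is a minimal sketch (mine, not from the article or the thread) of what an asm.js module looks like. The names are illustrative; the point is that an engine honouring "use asm" can validate and ahead-of-time compile it, while any other engine just runs it as ordinary JavaScript:

    // Hypothetical minimal asm.js-style module, for illustration only.
    function AddModule() {
      "use asm";
      function add(a, b) {
        a = a | 0;            // parameter type annotation: 32-bit integer
        b = b | 0;
        return (a + b) | 0;   // return type annotation: 32-bit integer
      }
      return { add: add };
    }

    var add = AddModule().add;
    console.log(add(2, 3));   // 5, whether or not the engine recognises "use asm"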


1. The point I was making is that a lot of asm.js proponents argue that compared to PNaCl, asm.js isn't a binary blob, and is easier to examine and modify. While I agree this is true, I'm not sure how much it matters; it's still very hard to read and work with Emscripten-generated asm.js code. You can write it by hand for improved readability, but that's not a lot of fun either.

I disagree that asm.js isn't trying to implement '<script type="text/lua">', and is only interested in unseating Silverlight/Flash. Alon Zakai argues for asm.js becoming the bytecode format of the web, and advocates for compiling the runtimes of other languages to JS using Emscripten.

2. I read about PyPyJS a while ago, the way it works is by compiling the PyPy interpreter into JavaScript using Emscripten. I don't think you'll ever see TerraLang ported because it relies on LuaJIT. AFAIK, every large user of asm.js uses Emscripten, because asm.js is such a bear to work with manually. That's fine, but to me (again), lock-in is lock-in. I'd be really surprised to see a non-LLVM solution crop up.

3. I think "runs" implies some level of acceptable performance. Try running asm.js code in IE, for example. Even in browsers that have a lot of optimizations (asm.js-specific and not), performance varies between 1.5x-2x of Firefox', and 3x-4x of native code (http://arewefastyet.com/#machine=29&view=breakdown&suite=asm...). This is still impressive theoretically, but pragmatically that's a huge performance hit. It's also important to remember these are very current browsers on very current platforms, and you're assuming a lot of other things, a fat network pipe to download a huge JS blob among them.

For example, I could run a PlayStation emulator on my phone and barely get a proper framerate. If you put this same emulator in asm.js using Emscripten, it would run half as fast at best (in Firefox), and we haven't addressed all the other problems with HTML5: skipping audio, high latency on input (mouse, controllers), networking performance, etc.

Or for another example, let's say I can play Call of Duty 5000 on my new PC at 60 FPS. Handwaving a lot, let's also say I can play Monster Madness on the same PC at 30 FPS, because of the 50% drop due to asm.js. But let's also say I bought a laptop a couple years ago that runs CoD 5k at 30 FPS, but Monster Madness runs at an unplayable 15 FPS. You can argue, "just run the native version!". But at that point, why am I bothering with asm.js at all? To avoid installations? While that can be important, I'm not sure it's worth millions of man hours and dollars.

(I know Monster Madness is nowhere near as demanding as a hypothetical Call of Duty 5000 would be, but I hope you get the point I was trying to make)

To summarize, asm.js enables zero-install, sandboxed, cross-platform applications at a huge technical price (even though you're technically installing the app every time you load it up, caching excepted of course). I'm not at all convinced we can't achieve the same goals in a much simpler way.

---

Finally, we haven't even begun to discuss the security implications of running untrusted software that can be modified in transit. I know it's kind of a rabbit hole (computers aren't born with all the software you need installed, it has to come from somewhere). But I do think when we're talking about communications applications, banking applications, the potential insecurity of zero-install apps is troubling.


> 1. I don't know how helpful open source is if that source is inscrutable. The JS generated by Emscripten isn't super useful.

People ship nonfree, minified JavaScript all the time, and "open source" doesn't mean "only distributed in the form preferred for modification". I don't see how this is unique to asm.js, or even that important: you can just put a link back to the "real" (e.g. C++) source in a comment on the page you serve.


I agree. I've seen asm.js proponents argue that, compared with PNaCl for example, asm.js isn't a binary blob. Maybe not technically, but I think it's hard enough to work with Emscripten-generated asm.js that it may as well be, and it's hard enough to write a large application in asm.js that you probably want to use C/C++ and transpile it to JS with Emscripten. That's the only point I was trying to make.

I do think you've hit on something though:

> People ship nonfree, minified JavaScript all the time

I know, and I think it's dangerous for software freedom if the apps we start depending on are all closed-source web services, where we're at the mercy of the owners. Vendor lock-in is vendor lock-in, no matter the distribution channel, and zero-install isn't worth it.


I am getting the sense that you think asm.js has some troubling interaction with software freedom and I just don't agree with that at all. Whether or not the code you receive is free software is orthogonal to whether or not it's been run through a compiler.

Emacs is free software and people typically download a binary to use it. Is that dangerous somehow? The source code is in a well known location.

I agree that it is troubling that nonfree JavaScript is shipped all the time, but again, that happens every day without asm.js being involved, and asm.js does not help or hinder this.


I get what you're saying. My counter-argument is that because asm.js is being pushed as a distribution platform for "desktop" apps, and web services are so rarely open source, software freedom stands to take a hit.

The main reason free software saw such adoption was that it was very high quality software available for free. Most people aren't even aware of the philosophy and history behind it, nor would they base their decision on what software to use on that philosophy and history.

Web services aren't covered by free software licenses, but they generally have no cost, are sandboxed, and you don't have to install them (even though you're actually "installing" them a lot, just in the background). For almost all users, that's a huge win, but for free software, that's a huge loss.

asm.js enabled many more classes of software to run in this environment, and I think that's a big looming negative for software freedom. I love Mozilla and I think what they do is amazing on almost every level. But the web is more than just a browser and JavaScript. There's 65k ports and dozens (hundreds?) of protocols. There's no need for the battle for the open web to take place only in the browser space.


> Since this is basically Windows, Mac OS X, and Linux, your native applications are going to work just fine on those platforms.

But, you need people to download and install your app in that case. The reason major game companies are investing in asm.js ports, even though their engines work great natively, is because the web is a very good distribution platform. That's really all there is to it - no one controls the web, there's no charge to ship a game there.


Very true, and so very close to what I think is Microsoft's true motivation: Office. Viewed through the prism that Office is Microsoft's cash cow, and given the criticism that the browser version of Office is limited in the way Google Docs is limited (with a large enough set of features from Word, there's at least one that doesn't work...), asm.js makes perfect sense.

Perhaps Microsoft will tier their products in a different way. An Office 365 subscription (or whatever it is branded) will get you access to Office, running in a browser, under asm.js. Perhaps they'll do something different. Sooner or later, however, the Microsoft embrace of this technology will be monetized.

There are side-effects. Microsoft can sell tools that now run a transpiler to put executables on the web. They might or might not use LLVM to that end.

This development means that asm.js has become a de facto standard. It is too soon to tell if this will be more widely adopted, as de facto standards fail all the time. I think, however, that this one will stand. Quite likely we are all now going to be freed and constrained by small design decisions made long before this adoption came to pass.


The web is a great distribution platform for one subset of applications (news, email, social, etc.) and a bad distribution platform for basically all others. Pixlr is a great example; GIMP will do everything Pixlr does, faster, and without a network connection (yeah load time is bad, but that's a GIMP-specific problem). GIMP doesn't have to pay any money, other than hosting (which an asm.js application also would have to pay) to distribute their app. I can use GIMP without downloading it every time. I can also continue to use GIMP if/when its developers decide to give up. When Pixlr's developers give up, it's gone for good.

I also think pouring millions of man hours into a transpiler and JavaScript engine optimizations is probably the least efficient and most convoluted way to take advantage of the web as a distribution platform. I think the way this should all be framed is, "it would be better if browsers supported a lower-level bytecode, or more languages, or both, but that's politically impossible to achieve, so we have asm.js". That's the only thing that makes sense to me here.


1. Open Source is about licensing of the original source code, not about compiled binaries - the JS generated by Emscripten isn't any more useful than the JS generated by Uglifier or Google Closure. And for debugging, that's why we have Source Maps.

2. ASM.js is for low level code, the kind of code that doesn't do garbage collection. As far as high-level languages go, Javascript has pretty good performance, is very flexible and can be used as a target by compilers without necessarily needing ASM.js. Examples of languages that are very different from Javascript, but with compilers targeting Javascript and doing it quite well - http://www.scala-js.org/ ; https://github.com/clojure/clojurescript

I personally do not get why developers would want PNaCl, as PNaCl will never be a standard, for the same reason SQLite never made it as a standard storage in HTML5 either. If you want anything other than what the browser standards are capable of, then you're free to go native - because here we can have a discussion about why the browser as a platform has gotten so ubiquitous in the first place ;-)

3. You're mistaken - Asm.js-compiled code can run even in browsers that do not have special support for Asm.js. Yes, that code will be slower, less efficient, but it still runs. Also, what limitations does Asm.js have that prevent it from being implemented on all platforms?


1. I agree, but one of the arguments for "use JavaScript and not ActiveX/Flash/Java" is that those are binary blobs. I think Emscripten-generated asm.js is bad enough that it doesn't gain you a lot compared to a blob.

2. Well, if they don't target asm.js, I'm not sure how relevant they are to our discussion. They're going to get faster as JavaScript engines become more optimized. I tried the Scala.js benchmark, and at 1000x1000 and 1 pixel, it took over 10 seconds to render the scene. I don't know what that means, but it felt slow. I can't find comparisons of Scala/Clojure in JavaScript to Scala/Clojure on the JVM; if you know of some, can you point me there?

PNaCl won't be a standard because of politics. Technologically, it's superior.

3. It "might" run. If you depend on WebGL you can kiss IE support goodbye, for example. Also "run" ought to come with some sort of baseline performance. Let's say you can run Windows XP via Emscripten. Don't you agree that the attendant speed decrease means that, while it may "run", it's not very useful if you can't really use it?


Actually, on your point #3, it will most likely run faster than regular JS on those non-asm.js optimizing browsers, as most browsers have JS engines that perform some kind of optimization, optimizations that serendipitously work quite well on asm.js code.

Slower than an asm.js-optimizing browser, faster than idiomatic JS.
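As a rough illustration (my sketch, not a complete validating asm.js module), the coercions are what make this possible: every value is pinned to int32 or double and data lives in a typed-array heap, so even a JIT that ignores "use asm" sees monomorphic code it can compile well:

    // Idiomatic JS: values could be ints, doubles, or anything else,
    // so the JIT has to guard and may deoptimise.
    function sumIdiomatic(arr) {
      var total = 0;
      for (var i = 0; i < arr.length; i++) total += arr[i];
      return total;
    }

    // asm.js-style: explicit | 0 coercions and a typed-array heap keep
    // everything as 32-bit integer arithmetic.
    var HEAP32 = new Int32Array(new ArrayBuffer(1 << 16));
    function sumAsmStyle(n) {
      n = n | 0;
      var i = 0, total = 0;
      for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
        total = (total + (HEAP32[i] | 0)) | 0;
      }
      return total | 0;
    }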


It's great that MS is working on asm.js support. Now that most syntax ideas from TypeScript have been added to ES6, it will come in handy for Skype and Office web libraries.

Suggestion:

WebGL 2 and better WebGL support - Vote in the "IE Suggestion Box": https://wpdev.uservoice.com/forums/257854-internet-explorer-...


> Now that most syntax ideas from TypeScript have been added to ES6.

By 2012, there was rough consensus on most of what's in ES6; TypeScript was just implementing what was going to be in ES6, rather than ES6 taking it from TypeScript.


I still feel like the better alternative to asm.js's approach is a VM with a static javascript interpreter/transpiler. It would have worked just as well politically, and been a much more direct benefit.


Apparently Google's toying with the idea of using "use asm" as a signal to opt into TurboFan[1]. While not the AOT compilation envisioned by the asm.js creators, it speaks to the importance asm.js has already gained.

That said, personally I'd rather see time spent optimising the new ES6 features so they can be used without a big performance hit.

[1] https://code.google.com/p/v8/issues/detail?id=2599#c77


Mozilla tracks TurboFan performance on its "Are We Fast Yet?" benchmarks. TurboFan is clearly a work-in-progress because it is not scoring well on the common JS benchmarks:

http://arewefastyet.com/


To be fair, the arewefastyet benchmarks are intentionally very broad, and stress test many different parts of a JS engine. It may be the case that TurboFan has a very fast sweet spot for numeric code (which would explain why they'd want to use it for asm.js), but is slower for the more general cases.

I'd be interested to see how TurboFan performs on individual test cases; for example, Octane has several asm-specific tests, and based on their planning I wonder whether TurboFan is faster than Crankshaft for those individual tests even if it's slower for the rest of them.



Huh super interesting. It looks like currently TurboFan is competitive w/Crankshaft on the asm.js benchmarks, and much, much slower on everything else. I wonder if the V8 team expects TurboFan to someday beat Crankshaft on the asm benchmarks, given current perf differences.


Nice, now that Firefox, Chrome and IE will support it, I think asm.js will become available in most serious browsers, which is a great thing.

It will be interesting to see what kind of impact it will have. For example, will asm.js eventually take over traditional web development? Theoretically, you can compile any compiled language to asm.js, so you'll have a lot more choice for the language you want to use to create your webapps. It won't really be web though: no markup, no links, but with today's heavily JavaScript-based apps that's debatable anyway. Also, asm.js still has a lot of limitations and disadvantages that keep it from being just as easy yet.


"For example, will asm.js eventually take over traditional web development? Theoretically, you can compile any compiled language to asm.js, so you'll have a lot more choice for the language you want to use to create your webapps."

I've outlined this progression before, which seems obvious to me, but I haven't seen anyone else discuss it.

    1. Get asm.js into every browser.
    2 or 3. Observe that asm.js is very verbose, define a simple
            binary bytecode for it.
    3 or 2. Figure out how to get asm.js decent DOM access.
The last two can come in either order.

And the end result is the language-independent bytecode that so many people have asked for over the years. We just won't get there in one leap, it'll come in phases. We in fact won't be using Javascript for everything in 20 years [1], but those of you still around will be explaining to the young bucks why certain stupid quirks of their web browser's bytecode execution environment can be traced back to "Javascript", even when they're not using the increasingly-deprecated Javascript programming language.

[1]: https://www.destroyallsoftware.com/talks/the-birth-and-death...


> Observe that asm.js is very verbose, define a simple binary bytecode for it.

I suspect that is never going to happen, for two reasons:

1) Verbosity, on its own, does not make a difference on the web - HTML and Javascript are both generally super verbose, and haven't had any accepted "simple binary encoding" designed for them in those 20 years. What does get implemented is minifiers and compressors (gzip encoding or otherwise), both of which provide benefits to asm.js comparable to what a bytecode would, and neither of which requires any buy-in from browser makers (the same attribute that has made asm.js successful and PNaCl unsuccessful so far).

2) Historically, anything that is not backwards compatible and does not degrade gracefully is NOT easily adopted by browser makers, or by websites, unless it provides something that cannot be achieved without it (e.g. WebGL gets some adoption because there is no alternative; but ES6 will get little to none in the next 3 years except as a source language translated to ES5).


Well, if such a bytecode were being standardized, someone would surely write a shim JS library to convert it to JavaScript on browsers that didn't have native support yet. And I think the idea of a binary format would be more popular for a sublanguage that is essentially guaranteed to be machine-generated and inscrutable (especially given the knee-jerk reaction I've seen from a lot of commenters, here and elsewhere, against the idea of using JavaScript syntax) than for HTML/CSS/JavaScript, which have a long history of being written and read manually without any (de)compilation steps, even if most big webapps are minified.


But it works the other way around: it will not be standardized before there's an implementation. If it ever happens (which I think is unlikely), the standardization will follow the shim.

And the knee jerk reactions are meaningless. The people who ship stuff don't seem to mind, and they are the ones who make things matter.


Hey, you're good at this. ;)

It's funny to see that people so opposed to JS are too short sighted to see that asm.js could enable what they've wanted all along. Look at that!


asm.js is a cross-platform bytecode; it just happens to be ASCII and look like JS. :) Arguments against asm.js as a bytecode are mostly about aesthetics and "elegance".

http://mozakai.blogspot.com/2013/05/the-elusive-universal-we...


I think that eventually we might ditch DOM and use WebGL or canvas or something instead of it, like on the desktop.


Maybe, as the DOM becomes loaded with more abstractions, people will start re-implementing the abstractions the DOM already provides; just the subset of abstractions they want. Whether their implementation can beat the native code of the DOM, and whether the bandwidth cost of reshipping the same logic is worth it, is another story.


Yea, it's already happening with React and other frameworks which are using virtual DOM.
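For anyone unfamiliar with the idea, here is a toy sketch of the virtual-DOM approach (not React's actual API; all names are made up): describe the UI as plain objects, diff the old and new descriptions, and touch the real DOM only where something changed:

    // Toy virtual-DOM sketch; illustrative only.
    function h(tag, text) { return { tag: tag, text: text }; }

    function create(vnode) {
      var el = document.createElement(vnode.tag);
      el.textContent = vnode.text;
      return el;
    }

    // Patch the real DOM to match the new virtual node, reusing the
    // existing element when only its text changed.
    function patch(parent, el, oldV, newV) {
      if (!oldV) return parent.appendChild(create(newV));
      if (oldV.tag !== newV.tag) {
        var fresh = create(newV);
        parent.replaceChild(fresh, el);
        return fresh;
      }
      if (oldV.text !== newV.text) el.textContent = newV.text;
      return el;
    }

    var root = document.body, el = null, prev = null;
    function render(count) {
      var next = h("div", "clicked " + count + " times");
      el = patch(root, el, prev, next);
      prev = next;
    }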


And throw accessibility out the window. Sorry visually-impaired people. The web of the future isn't for you. :(


You can have accessibility without the DOM, and really the DOM is not such a great way to do this anyway. Just do things like write explicit audio-only interfaces.


That's a pretty narrow view of "accessibility". For example, you just assumed that your user can't see (or can't see very well?) but can hear.

Users who are deaf and blind? Out of luck. Users who are deaf and not blind but need your thing zoomed to a larger size? Maybe out of luck maybe not (depends on whether the WebGL app detects browser zoom and actively works to defeat it like some do). Users who are deaf and not particularly blind but happen to not be able to tell apart the colors you chose to use in your WebGL? Also out of luck.

What the DOM gives you is a semantic representation that the user can then have their user agent present to them in a way that works best for them. Reproducing that on top of WebGL or canvas really is quite a bit of effort if you really want to target all users and not just a favored few groups.


I can't reply to bzbarsky for some reason, but:

I assumed we were talking about vision impairment, because that's what the comment I replied to mentioned. Of course you can implement whatever else you want as well.

I question this "semantic DOM" idea: the trend has been towards filling the DOM with tons of crap in order to make applications, not documents. Do accessibility agents even work well on JavaScript heavy sites today?

Accessibility can and will be had without the DOM; while it is a concern, it shouldn't prevent things like WebGL + asm.js apps on the web.


No idea why you couldn't reply to me, but....

My point is that visual impairment is not mutually exclusive with other impairment, even though people often assume it is, consciously or not. This is an extremely common failure mode, not just in this discussion.

And while of course you _can_ implement whatever else you want, in practice somehow almost no one ever does. Doubly so for the cases they don't think about or demographics they decide are too small to bother with.

How well accessibility agents work on JS heavy sites today really depends on the site. At one end of the spectrum, there are JS heavy sites that still use built-in form controls instead of inventing their own, have their text be text, and have their content in a reasonable order in the DOM tree. At the other end there are the people who are presenting their text as images (including canvas and webgl), building their own <select> equivalents, absolutely positioning things all over the place, etc. Those work a lot worse.

You are of course right that accessibility can be had without the DOM, but "webgl" is not going to be it either. Accessibility for desktop apps typically comes from using OS-framework provided controls that have accessibility built in as an OS service; desktop apps that work in low-level GL calls typically end up just as not-accessible as your typical webgl page is today. So whatever you want use instead of the DOM for accessibility purposes really will need to contain more high-level human-understandable information than which pixels are which colors or what the audio waveform is. At least until we develop good enough AI that it can translate between modalities on the fly.


Speaking of AI, is it really that hard to do the OCR'ing of the images? I'm no expert, but I was under the impression that this was a solved problem.


> I can't reply to bzbarsky for some reason

there's a rate limit to stop threads exploding


You use WebGL to create standard GUI applications on the desktop? WebGL and canvas are in no way replacements for the DOM.


E.g. Mac OS X uses OpenGL to render GUI, I guess I should have made myself more clear.

> WebGL and canvas are in no way replacements for the DOM.

That's kind of debatable. If you have access to a fast graphics layer from the browser, you can build a DOM replacement of sorts. I think that famo.us works kind of like that.


It's true that OS X uses OpenGL for GUI compositing, but that's only the lowest level. Above, there's a very important piece of the GUI stack called Core Animation which provides layer compositing.

Core Animation is used by both the native GUI as well as the browser DOM. When you use layer-backed compositing on a web page (e.g. CSS 3D transforms), WebKit implements it with a Core Animation layer. So DOM-based rendering enjoys the same benefits of GPU-accelerated compositing as native apps -- although obviously with rather different semantics since HTML+CSS doesn't map directly to Core Animation.
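For example (my sketch, assuming a WebKit-style engine as described above), a CSS 3D transform is one of the things that gives a DOM element its own compositing layer, so later movement is handled by the compositor rather than by re-laying-out and repainting the page:

    // Hedged illustration: translate3d typically promotes the element to
    // its own (Core Animation-backed, in WebKit) compositing layer.
    var panel = document.createElement("div");
    panel.textContent = "composited panel";
    panel.style.transform = "translate3d(0, 0, 0)";
    panel.style.transition = "transform 0.3s";
    document.body.appendChild(panel);

    // Moving it later only changes the layer's transform; the compositor
    // does the work rather than a full relayout and repaint.
    setTimeout(function () {
      panel.style.transform = "translate3d(200px, 0, 0)";
    }, 100);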

If you implement your own GUI framework on top of WebGL or Canvas, you're not getting Core Animation compositing for free, so you need to replicate that functionality in your custom framework. (This applies equally to native apps: a WebGL app is equivalent to a Cocoa app that renders everything into a single OpenGL view, and a HTML Canvas app is equivalent to using a single CoreGraphics view.)

I don't think the WebGL/Canvas route makes sense for most apps other than games and highly visual 3D apps. You'll just spend a huge amount of time building your own implementations of all high-level functionality that is already provided by the OS and/or the browser: layer compositing, text layout, view autosizing, and so on. If you're doing standard GUIs, why go to all that trouble?


> You'll just spend a huge amount of time building your own implementations of all high-level functionality that is already provided by the OS and/or the browser

Not only that, but you can't make a 100% guarantee that your implementation will look and work exactly the same as the native one on the underlying OS. For instance, I can re-create all the native Windows UI controls and re-implement all their behavior in exactly the same way, but what if the user has a custom theme installed? Everything breaks. (WPF has a similar problem.)


I agree that it would be a lot of effort to pull off since you'd have to duplicate a lot of the standard OS features in the browser but if eventually the DOM becomes an even bigger bottleneck, it might be a viable solution.


To some degree, yes. You just have to be able to re-use system UI controls like fields. So you wouldn't be able to just use WebGL/canvas/whatever in place of the DOM, you'd need to come up with a new API.


I know. I was thinking that there'd be something like Qt that would render the widgets using WebGL.


Until the web becomes the dominant operating system, I don't think that's reasonable because you'd have to implement an entire UI kit (with all UI components, behaviors, animations, etc) but can't guarantee that it will behave at all like the underlying OS. There's only so much you can re-create in the browser.


Perhaps it's to make it backward compatible, so that, for example, the traditional DOM is implemented as a Javascript library that parses HTML and renders it onto a WebGL surface?


>You use WebGL to create standard GUI applications on the desktop?

Increasingly, high-end apps do.


Graphically intensive apps do it for performance. But the tradeoff is you lose all the native UI controls that the OS gives you--form fields, text selection, animation, etc. So I don't think replacing the DOM with OpenGL or similar is a good solution for general-purpose apps.


All of those would be implemented in a framework.


Yes, they already are. That framework is called the DOM. People keep complaining about it and trying to come up with replacement frameworks that end up slower and less capable...

The DOM definitely has its problems, mind you. Investing some time in designing a proper replacement for it is worth it, as long as people understand going in that the project might well fail.


>Yes, they already are. That framework is called the DOM. People keep complaining about it and trying to come up with replacement frameworks that end up slower and less capable...

The ones I've seen are actually faster -- Flipboard for one gets to 60fps scrolling on mobile, IIRC. And of course all WebGL based interfaces on native mobile apps that re-implement parts of Cocoa Touch et al, are not that shabby either.

Sure, it doesn't have as much accessibility, but that's something that can be fixed in the future (and of course people needing more accessibility can always use more conservative alternatives).


Which framework has tried to replace the DOM?


None have tried to replace all of it that I know of.

People have tried to replace things like text editing (with canvas-based editors), CSS layout of the DOM (with various JS solutions involving absolute positioning), native MathML support (MathJax; this one has of necessity done better than most, because of so many browsers not having native MathML support). There are a bunch of things out there that attempt to replace or augment the built-in form controls, with varying degrees of success.



That's my point. Currently, you cannot really replace the DOM since that's kind of the extent of the exposed APIs.

None of the projects that you mention are really relevant to the discussion. I agree that they didn't change shit but it's precisely because they are still built on the DOM and cannot really go below that in the abstraction layer.


>I agree that they didn't change shit but it's precisely because they are still built on the DOM and cannot really go below that in the abstraction layer.

There exist UIs done in Canvas and WebGL that are arguably below the DOM in the "abstraction layer" and don't need many more DOM nodes besides opening a canvas/gl panel...

(Most full-screen 2D/3D web games fall into this, for starters, and those are in the tens of thousands. But there are also lotsa apps.)


Flipboard's from the recent story? The resulting site breaks accessibility and the layout obviously has less capability, but it's certainly not slower.


Huh, that's actually seeming not that dissimilar from what I had in mind.


You can use HTML-DOM and WebGL together (overlays, or render as texture).
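A minimal sketch of the overlay variant (mine; element names and sizes are arbitrary): a WebGL canvas at the back, ordinary DOM elements absolutely positioned in front of it, so text, form controls and accessibility keep working for the UI layer:

    // WebGL canvas underneath.
    var canvas = document.createElement("canvas");
    canvas.width = 800;
    canvas.height = 600;
    canvas.style.cssText = "position:absolute; top:0; left:0; z-index:0;";
    document.body.appendChild(canvas);

    var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
    if (gl) {
      gl.clearColor(0.1, 0.1, 0.1, 1.0);
      gl.clear(gl.COLOR_BUFFER_BIT);   // the 3D scene would be drawn here
    }

    // Plain DOM layered on top: keeps native text, selection and a11y.
    var hud = document.createElement("div");
    hud.textContent = "Score: 0";
    hud.style.cssText =
      "position:absolute; top:10px; left:10px; z-index:1; color:white;";
    document.body.appendChild(hud);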

The WebGL support could be improved in Internet Explorer.

Please vote: https://wpdev.uservoice.com/forums/257854-internet-explorer-...


Is it possible to mix them, while showing, for example, videos, which are clipped by paths or partly obscured by overlaying elements?


I would say yes, e.g. with CSS Regions and WebGL. The other way around, you have to render the HTML and the video onto textures.


Shared memory multi threading will still be a big barrier to porting over many native applications, like games. Unless asm.js fixes that?



Nice! I'll finally be able to implement Glitch for the web (glitch is like react but uses replay to shake out data races).


I share your vision. However, I think getting asm.js good DOM access is definitely coming as 2), because it's an easy way to get visual feedback from anything running in a browser.

Oh, and thanks for the very enjoyable link :)


JS seems to actually be picking up a lot of speed outside the browser?


First, that's not particularly relevant to the question of what happens to the browser itself. Second, the field of "things you can run that are not Javascript" when not in the browser is already incredibly rich, so we already live in a flexible world. Third, frankly I'm not particularly overwhelmed by the prospect of Javascript's longevity in the server space being a long-term phenomenon... an awful lot of what gets linked on HN is less "cool things to do with JS" and more "how to deal with the problems that come up when trying to use JS on the server".

And fourthly, and why this reply is worth making, bear in mind that if the browser becomes feasibly able to run any language, rather than having Javascript occupy a privileged position by virtue of being the only language, the biggest putative advantage that Javascript-on-the-server has goes poof in a puff of smoke. If Javascript has to compete on equal footing, it really doesn't have a heck of a lot to offer; every other 1990s-style dynamically typed scripting language (Perl, Python, Ruby, etc.) is far more polished, by virtue of being able to move forward without getting two or three actively fractious browser vendors to agree on whatever the change is (just look at how slow ES6 has been to roll out, when I'd rate it as containing roughly as much change as your choice of any two 2.x Python releases). And it has no answer to the ever-growing crop of next-gen languages like Clojure or Rust. Without its impregnable foothold in the browser, Javascript's future is pretty dim. (In fact I consider the entire language category of "1990s-style dynamic scripting language" to be cresting right about now in general, so Javascript's going to be fighting for a slowly-but-surely ever-shrinking portion of the pie.)


Depends on how JS evolves. It got a pretty serious setback when ES4 blew up and everyone went back to the drawing board on ES5 and ES6; the ES6 launch makes it (I think) better for most use cases than Python/Ruby/et al, because the VM is an order-of-magnitude faster than most of the popular choices and ES6 is a reasonably usable language even for someone unused to Javascript's current quirks: it has a real module system, the class syntax is sane and similar to how every other lang does it, the confusing `this` binding is fixed with arrow-functions, generators and Promises get rid of deeply-nested callback chains, `let` and `const` get rid of confusing variable hoisting, etc.
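For readers who haven't looked at ES6 yet, a short illustrative sketch of the features mentioned above (the "./api.js" module and fetchUser are hypothetical, and the import syntax assumes a module loader or transpiler):

    // ES6 sketch, illustrative only.
    import { fetchUser } from "./api.js";     // real module system

    class Greeter {                           // sane class syntax
      constructor(name) { this.name = name; }
      greet() {
        // Arrow function keeps `this` bound to the Greeter instance.
        setTimeout(() => console.log(`Hello, ${this.name}`), 100);
      }
    }

    const greeter = new Greeter("world");     // const/let: block scope, no hoisting surprises
    greeter.greet();

    // Promises (with generators underneath libraries that flatten them)
    // replace deeply nested callback chains with sequential-looking code.
    fetchUser(42)
      .then(user => console.log(user.name))
      .catch(err => console.error(err));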

Google and Microsoft are both very seriously experimenting with typed variants of JS (TypeScript from Microsoft and SoundScript from the V8 team), and Mozilla had in fact already proposed adding static typing back in the ES4 days, so I wouldn't be surprised if the next couple of versions of the ES spec include static types. The future for JS is brighter than you think — although it's brighter only because it looks like JS will become a better language, not because of JS in its current, mostly-still-ES5 state.


> Theoretically, you can compile any compiled language to asm.js, so you'll have a lot more choice for the language you want to use to create your webapps.

Asm.js is meant for low level code, i.e. the kind of code that doesn't do garbage collection. High-level languages are better off compiling straight into Javascript.

Solid examples that I know about:

- http://www.scala-js.org/ (we are actually using this one)

- https://github.com/clojure/clojurescript

- http://funscript.info/

And there are other implementations as well.


I don't know why, but I am starting to like Microsoft. Things seem to have changed quite a lot in the past 6-12 months: .NET going open source, the new web browser Spartan, and the CM acquisition are some of the things.


Microsoft are reportedly investing in Cyanogen, Inc. They have not acquired them to the best of my knowledge. Is there a source that confirms it?


Cyanogen, the commercial version of the OS is not open source, like at all I think. And I bet that if Cyanogen becomes much more successful, within 5 years they'll completely give up on officially supporting the open source version for other phones, too. They'll just see it as an extra cost that gives them little benefit (in comparison to their success then) and as harming of their "exclusivity" for certain phones. So they'll kill it. Cyanogen seems like a very self-interested company to me, and is willing to backstab anyone as long as it puts them at a higher level in the market.


Never forget that the core product of Microsoft is Windows, and most people are forced to pay for a new license with each new computer. So any decision MS makes has, as its ultimate goal, selling Windows licenses. Today there is a consensus on Open Web techs. Nobody can tell if it will still be the case 5 years from now.


I think this is far less true. Many of Microsoft's recent decisions arguably have no net benefit to Windows, and in some cases could be seen as counterproductive. The lock-in is no longer for Windows, but for Azure and Microsoft's tools.


Latest financials here: http://www.microsoft.com/Investor/EarningsAndFinancials/Earn...

It's a little tricky to read the Windows impact since (as I read it) it falls into both Commercial Licensing (for business sales) and Devices and Consumer (for consumer sales). Some articles which break it down a bit:

http://www.anandtech.com/show/8936/microsoft-q2-fy-2015-fina...

http://arstechnica.com/business/2015/01/hardware-surprisingl...

So while the exact percentages aren't obvious, the big picture is that Windows is definitely significant, but Microsoft has become a lot more than Windows.

[I work for Microsoft until they notice I'm wasting time on Hacker News and "fix the glitch".]


The core products of Microsoft are now Office and Azure.

(And having a fast web helps Office 365.)


Would be cool to see TypeScript compile to asm.js, would open up some cool optimizations for hybrid development.

Seems like there was some discussion[1].

[1] http://typescript.codeplex.com/discussions/438243


I've been playing around with compiling small subsets of JS to C++, then running it back through emscripten.

It works, but you have to keep the subset small, otherwise you end up implementing a meta circular interpreter. The problem will be that the subset is never the subset that pleases everyone. When your implementation doesn't support one person's code, it's labeled 'crap', and the negative cheerleading begins.


Who's putting their bets on Microsoft going open source with Trident and Chakra next? Spartan is an opportunity for them to clean up the mess before releasing the code, I think.


At least for Chakra, this would make a lot of sense - I can't think of a downside for them. For Trident there is the theoretical downside that IE becomes fully independent of Windows. This currently doesn't matter, but might decrease lock-in in the future.


Windows dependency of IE is one of the worst things about IE - like having to restart your PC to update your browser. What year is this again?


The fact that it was stagnant, ubiquitous, limited to Windows, and encouraged ActiveX were the major issues, since it made it more difficult and expensive to develop and transition to web applications for about a decade.


Windows will automatically reboot every 6 weeks anyway once evergreen Windows 365 comes out.


I've been thinking that Spartan will probably go cross platform at some point, an opportunity for them to get a few more Microsoft accounts created and it's not like it being Windows only is going to convince people to switch.


I don't know about "Trident" but I think EdgeHTML and Chakra might become open source. I remember reading some where that they were looking into the pros and cons of doing that.


I like IE's dev tools, but my main platform is OSX. Their memory profiler stuff is pretty sweet. If the community could help them maintain builds for other platforms, I would rock that.


My bet is on Chakra, once open-sourced, becoming the JS backend for Node enterprise.


Nice to see Mozilla provide some good, portable standards, as opposed to Google with their own "native client" ActiveX-like nonsense.

I honestly just see Google retiring NPAPI plugins as a way to push their own non-standard, at the cost of all other browsers. I'm glad to see that's not winning them any wars.


All of Google's development of NaCl and PNaCl is open source. The only thing stopping other browser vendors from working on it is themselves.

It's also worth pointing out (being maybe just a bit too pedantic) that asm.js isn't a standard. It's a subset of ECMAScript, which is a standard, and Mozilla has published a spec for it. The spec is more how to use asm.js, not how to implement asm.js optimizations. I'm not saying this is bad, but if you're arguing that Google's stance is, "hey here's the code for PNaCl and how to use it, good luck!", you can't argue that Mozilla's stance is any different.


While what you're saying can't be said to be "incorrect", it isn't the full story either.

What Google did was in-house, behind closed doors, develop a solution, embed it in Chrome, push it out in production, and start using it according to their own specs right away.

Then they told other browser makers: "Hey. Here's a neat ActiveX-like idea, which kinda makes the web platform-specific again, which you will have no say in how it is implemented, because it's already in production, and unless you implement it as we see fit (and will continue to in the future), exactly as fits our browser model and code (although it may not fit yours). Take it as is or we will be discriminating against your browser on our web services." And so they did.

It may not be proprietary by definition, but it's not "open" by a mile either.

Contrast that with what Mozilla did: they proposed a way to make highly optimizable code sections even faster, machine-code fast, in a backwards-compatible, web-friendly and portable way, and invited people to join in. Those who didn't would not suffer a lock-out, but those who joined could benefit from the work already done.

I don't think there's any point even pretending that these two actors are playing on the same moral level here. Google is acting scumbaggy and everyone but apologists knows it.


Does this mean we'll soon have mobile web apps that rival native apps in terms of performance or will DOM manipulation still slow everything down?


> Does this mean we'll soon have mobile web apps that rival native apps in terms of performance or will DOM manipulation still slow everything down?

asm.js has nothing to do with the DOM. So if the browser engine is slow, a fast JavaScript engine won't make any difference. It only makes a difference for pure JavaScript computation, as asm.js basically recreates an "assembly" on top of JavaScript, which allows compiled C/C++ code to be executed.


JS is already plenty fast, it's just the DOM manipulation that screws it up. But asm.js with direct access to a <canvas> tag, WebGL or not, would probably be interesting.


Which is why React Native is so compelling.



The latter. asm.js is interesting if a part of your app is computationally intensive. For example, if you want to create ZIP files in the browser asm.js is awesome.


Other languages' compilers/interpreters may be compiled to asm.js, letting us program in something other than Javascript (e.g. https://github.com/replit/empythoned ). Is this realistic?


I feel like I must be missing something but I fundamentally don't get why people are so excited about asm.js.

Does the web really need more people manually managing their memory? Do they not get that that's what asm.js is? It has no GC.
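To make the "no GC" point concrete, here is a toy illustration (mine, and a heavy simplification of what Emscripten-compiled code actually does): all data lives in one flat typed-array heap, and "allocation" is just handing out integer offsets into it, with nothing collected behind your back:

    // Simplified stand-in for the malloc/free that compiled code brings along.
    var heap = new ArrayBuffer(1 << 20);      // 1 MB of linear memory
    var HEAP32 = new Int32Array(heap);
    var heapTop = 0;

    function alloc(bytes) {                   // bump allocator: just advance a pointer
      var ptr = heapTop;
      heapTop = (heapTop + bytes + 7) & ~7;   // keep 8-byte alignment
      return ptr;
    }

    // A C "struct { int x; int y; }" is simply two int32 slots at an offset.
    var point = alloc(8);
    HEAP32[point >> 2] = 10;                  // point->x
    HEAP32[(point >> 2) + 1] = 20;            // point->y

    // Nothing frees this automatically: lose the offset and those bytes are
    // wasted until the whole heap is thrown away.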


I mean, you must know that applications with particularly high performance requirements can't afford a GC.

asm.js is really about trying to turn the "web" in to a true runtime for applications and a direct competitor to "closed" ecosystems like iOS.


It opens a way to the Web Platform for the developers programming in compiled languages. For example, those developing casual games in Unity or Unreal Engine, can simply compile their games for the Web with as little effort and penalty in performance as possible.


Yes.


asm.js + WebGL allow you to completely skip the web stack, making the browser just another execution platform.

Strangely or not, C/C++ strike back.


I think since IE6 Microsoft should be banned from touching anything related to the internet.



