Partial WebAssembly backend for the GNU toolchain (sourceware.org)
211 points by ingve on March 4, 2017 | 101 comments



And thus the era of METAL[1] begins...

[1]: https://www.destroyallsoftware.com/talks/the-birth-and-death...


This is super enjoyable thank you ^_^


And compelling. Can you really make that work securely in the kernel?


It's a nice idea, but it goes against layered security [1]. Perhaps computer-assisted proofs can make that a non-issue.

In any case, I would very much like to see an implementation of this.

[1] https://en.wikipedia.org/wiki/Layered_security


I sincerely hope this gets a good reception, and doesn't suffer the fate of previous attempts to add similar backends to the GNU toolchain.

In the past, backends for anything usable as an intermediate language received pushback as possible "escape hatches" by which one could glue proprietary code (whether frontends, backends, or optimization passes) into the GNU toolchain. That pushback mostly went away when the GCC Runtime Library Exception came out, which makes such proprietary combinations much less viable.

So, hopefully this will get merged, or at least get on the path to merging after further development.


gcc already has frontends (GIMPLE), backends (LTO and HSAIL) and plugins (DragonEgg) that do this, so you've had nothing to worry about for like five years now.


I don't really follow. Do you mean, for example, BigCorp making a backend that compiles to proprietary BigCorpLang? Why would that be such a bad thing?



I think they mean it compiles to BigCorp IR, which is then used with BigCorp's secret-sauce optimizing compiler.


That's already happening with the Nvidia PTX backend, so I guess that's accepted too nowadays.


This isn't really a generic issue of "previous attempts" but more specifically LLVM, right?


No, actually.

There have been a bunch: Java backends, among others.


Yes, Stallman killed the JVM backend. He did not want to promote a proprietary technology.


In retrospect (Oracle v. Google) that doesn't seem like such a bad decision!

Admittedly, the concern about the JVM backend was different from what got Google in trouble; indeed, GNU had their own reimplementation of Java (GCJ / GNU Classpath) which was equally potentially infringing (and was the main reason bytecode output was a problem in the first place - because GCC supported bytecode input already, it could allow for round tripping). But the overall message of avoiding proprietary programming languages/environments seems prescient.


There is actually another voice from Free Software advocates on the Oracle v. Google case, basically saying that Google could have been free from Oracle's trolling in the first place had they based Android on the GPL-licensed OpenJDK.

A sample of such statements, from https://github.com/biergaizi, can be found at http://weibo.com/1219205751/Dov1ynGLk (login-walled).


OpenJDK didn't exist when Android first started; Dalvik pre-dates the initial release of OpenJDK, let alone the viable versions of OpenJDK that came after further development in the community.


Maybe it's a foolish thought, but it occurred to me that maybe in 10/20 years AAA game titles will have a 'web' version, as well as the PC and game console ones. Or maybe they let you play a demo for free on the browser, with real gameplay mechanics and graphics made with wasm or similar tech.


In 10 years there won't be any difference between a web-app and a desktop app precisely because of WebAssembly-like tech. Imagine WebAssembly being the new JRE or .NET runtime (or maybe more like JavaFX, but supported by everyone). You can run it in a browser, but you don't have to. All native stuff will be accessible to you, like Vulkan, OpenAL etc.


Sounds like my worst nightmare. Browsers won't and should never have the full capabilities of native APIs. It's certain that many more applications will shift into web apps, but there are limits to what I'll want my browser to do.


This battle was lost decades ago, along with semantic markup and needing to enable Javascript to see any text. I don't like it either, but the market took the shortest available path to what it was seeking and now there is no way anyone will roll it back.


Yes there is, iOS, Android and UWP.


How well are Android and iOS running on your desktop?


You left out UWP from my list, and that runs just fine.

As for desktop as such, my tablets are also fine with an external keyboard and the right set of native apps.


I deliberately didn't ask about UWP because then I would have had to ask "how well does UWP work on a Unix desktop".


Not everyone cares about UNIX desktops, especially looking at the changes at Apple and their sales.


When the topic discussed is software that can run on any platform, you have to care about UNIX desktop, otherwise you're cheating - or at least not providing a solution to the problem as was stated.


No, the topic being discussed is

"Browsers won't and should never have the full capabilities of native APIs. It's certain that many more applications will shift into web apps, but there are limits to what I'll want my browser to do."

Which, given the rise of mobile OSes across the planet and their use of native apps, is something I hope will turn the tide.


Once you run WebAssembly as a virtual machine providing a core API without a windowing system, it barely counts as a browser anymore.

There's nothing in its essence that makes it different from the virtual machines in Java or C# or Python, other than having several alternate working implementations, available in every device that runs a modern browser (true, the latter wouldn't count as "native", but nothing stops you from running a headless WebAssembly instance directly on the metal without the intermediate application layer).

The solutions you proposed (iOS, Android and UWP) may count as native, but they definitely don't count as "universal" in the same way that WebAssembly would.


There is a big difference from all other VMs: a lowest-common-denominator UI/UX across all devices. Even the JVM does better in regard to integration with the host OS.

This has not changed since the "HTML5 is going to change the world" days, and I doubt WebAssembly will change that.

Simple stuff like Web Components are still pretty much a Chrome thing and WIP everywhere else, let alone more specific OS APIs.


Uh? Why would that make it any different from providing an "official" GUI API with the virtual machine, like Java Swing and MS XAML (or whatever Microsoft calls its current View stack nowadays)?

Are you talking about implementations not being there (yet, though that could change with time) or is there a fundamental difference in the platform architecture that I'm unaware of?


I am talking about the full stack from the lower layers all the way up: integration with peripherals, audio engines, GPGPU, 3D that doesn't look like Glide demos, the ability to fully utilize any kind of wireless communication, the composition engine, GUI components, security, 2D acceleration that is guaranteed to always work, ...


Doesn't UWP also try to provide all of that? The only difference I see is that HTML5 comes with a formally defined standard, so it is possible to build more than one implementation by different parties, while UWP is proprietary.

I don't understand why you think providing all those services in UWP is a good thing, but extending HTML5 + the Javascript VM to cover them would be a terrible idea.


> I don't understand why you think providing all those services in UWP is a good thing, but extending HTML5 + the Javascript VM to cover them would be a terrible idea.

It is not that it is a terrible idea, it is that it will never happen, thus always being a 2nd-class UI/UX.

I am still waiting for a version of WebGL that can actually fully use my graphics card, WebGL 2.0 surely isn't it yet.

Or for web components to be ready across devices and multiple browsers. How many years has Google been talking about them at Google IO now?


Never happen? Unreal Engine has been ported to WebAssembly. Sure it's not part of the standard, but it runs on the platform. I'm sure Vulkan could get the same treatment.

Once you have a development stack with a good toolchain that is widely available on many platforms, many people will be interested, and it will grow. Having a standardization body that will incorporate the best is a good thing to have, but it's not a requirement for the platform to get the tech supported by third parties (see what happened with Flash or Silverlight, which required a separate unsupported engine as a plugin). Having a single target will surely accelerate development.


It's more like you won't need to develop for two-three targets, like today with desktop, web and mobile. Instead you will use the same runtime, though there might be some restrictions on each platform you'd need to take into account (like handling permissions properly) and make sure your code would work on any platform that provides the capabilities you require.


Java promised that, but we got write-once, debug anywhere. Those different features, bugs, permissions, etc, mean you'll still have to test the platforms, build up different UIs for each platform, etc. About all you'd end up gaining is a stylesheet-based rendering engine, but you'd get that with something like XAML as well.


Java doesn't support compiling C/C++ to their runtime. So that's not really comparable to this.


There is a currently dead project that does that: https://github.com/davidar/lljvm . So I'd argue it is comparable.


Like you said it's dead. Hypotheticals don't matter here only real tools we can use today.


WebAssembly is much simpler than Java, and harder to get wrong. The DOM isn't, but I would hope non-web targets for WebAssembly would use different (and simpler) APIs.


WebAssembly in browsers hasn't progressed all that much over the last 5-6 years (~2011 was The Big HTML5 Hype Year =) .. if it's out of preview in another 10 that'll be plenty!

The current spec is extremely limited in capabilities anyway for the full RIA spectrum of needs (strings, objects, DOM); at this stage it's practically useful mostly for numeric calculations and such. Certainly there are loftier goals and a lot of optimism with regards to implementation timelines, "buuut"...


Well, that's just not true. WebAssembly (not to be confused with asm.js) only started in 2015. Browser preview period started in late 2016 and ended very recently, and we're now on the way to stabilization.


Are you talking about asm.js or something other than WebAssembly? WebAssembly was announced in 2015.


Crikey, seems like my off-the-top-of-my-head memory did fail me here!


If you have a fast enough internet connection, you can right now stream games to your PS4 just like YouTube. In 10-20 years' time, we'll be able to stream games running directly on a publisher's server farm somewhere.


Latency will always be a consideration though; the fidelity might get there in terms of audiovisuals, but you can't engineer around the speed of light.


I use a LiquidSky cloud gaming PC as my main games machine. For Overwatch, I have a ping of 1ms, as it's next to the OW servers, and 40ms total ping from my home to LiquidSky. It's kind of amazing and works wonderfully.


> Latency will always be a consideration though

And it will always be way too high for VR.


They could stream a spherical texture (with depth map). Locally you would only have to render the sphere from the view point of the eyes.


If we can get closer though, I suspect most people wouldn't be bothered.


It would basically double the current latency (about 100ms) by adding a 50ms trip to both receiving and sending. (And that's being generous -- it could easily add 100+ms to each trip.)


Oh. Nice. How do we plan on increasing the speed of light to make the latency low enough?


> In 10-20 years time, we'll be able to stream games running directly on publisher's server farm somewhere.

Nvidia has a game streaming service that does this already. I've only tried it briefly but it worked great.

http://www.geforce.com/geforce-now


Onlive went bankrupt.


I don't think this will ever happen, for most games. AAA games don't generally have much headroom for waste. And even if they do, supporting yet another platform is expensive.


AAA games are much more about content than the code that displays the content.


As someone who used to work on AA/AAA stuff that is way off the mark.

One supports the other and both need to be incredibly efficient and compact.


And that content would have to be downscaled and probably converted to another texture format, too, unless you can get people to be okay with waiting for gigabytes of data to be downloaded. Along with this comes extra costs for managing the additions to the asset pipeline that this incurs. Supporting another platform isn't just about code, it's about content creation, asset management, and testing, too.


At 1Gbps (125MB/s), a gigabyte takes 8 seconds to download, less than most AAA games take today just to load from disk. There are videos of Google Fiber users getting ~40MB/s downloads from Steam, so I wouldn't be surprised if at least some users reach that in the near future. And if you rely on a fast internet connection, you only have to send the assets necessary for one level at a time.


That exists, with unity and unreal engine in the browser.


It doesn't exist at the AAA level he's describing; to fit the limitations of WebGL, the web versions of those engines are based on their stripped-down mobile renderers.

We need a next generation WebGL with features comparable to DX11 before web graphics can catch up with the state of the art in native PC/console graphics.


Those things are achievable once you get a stable platform with wide adoption.


Which AAA game titles are using that right now?


I don't know if this qualifies as "AAA", but the BananaBread demo is a must-see:

https://kripken.github.io/BananaBread/wasm-demo/index.html


That's a goalpost-shifting, subjective distinction, isn't it?


How is this different than using the emsdk to build?


Emscripten is a cross-compiler tool chain. This is a back end for a compiler.

Currently, Emscripten uses a fork of Clang/LLVM called fastcomp with an asm.js backend.

This is a fork of GCC/Binutils with a WebAssembly and asm.js backend.


To add to that, in principle Emscripten could use gcc and this backend instead of clang and an LLVM backend. Emscripten itself abstracts over the backend details (it's supported 3 different backends in its history, and currently supports 2). Most of the code in Emscripten is in the libraries, like OpenGL=>WebGL, etc., and JS integration code, which shouldn't depend on the backend.

It would be interesting to get that working and do some code comparisons on the output.
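
To make the comparison concrete, here is a minimal sketch. The same C source could in principle go through either toolchain; the Emscripten invocation below is the usual one, while the GCC driver name is only a guess, since the backend in the post is still a work in progress:

    /* hello.c -- trivial program to compare toolchain output on.
     *
     *   emcc hello.c -s WASM=1 -o hello.html    (Emscripten, Clang/fastcomp based)
     *   wasm32-gcc hello.c -o hello.wasm        (hypothetical driver for the GCC backend)
     */
    #include <stdio.h>

    int main(void) {
        printf("hello from WebAssembly\n");
        return 0;
    }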


So what does the future hold? Will this replace JavaScript?


The more imminent concern is that publishers are beginning to bundle browser runtimes that completely replace native browser functionality, along with its established privacy and fair-use expectations. Say hello to unskippable interstitials, unblockable analytics and targeted advertising, content that can't be linked, saved/archived, mirrored, cached, shared, translated or made otherwise accessible.

Developers mostly discuss JavaScript shortcomings and new possibilities offered by WebAss, but seem otherwise happy to completely throw the fundamental architecture of the web under the bus.

Technically, what WebAss can do has long been possible with native applications; the only thing added here are new software licensing models, e.g. pay-per-use, but mostly tracking/ad-financed ones.


I have the same worry. Page downloads content as some encrypted binary blob, displays it using <canvas>, impossible to block advertising or copy text.

However, at the end of the day, content providers have to make their content available in plain HTML format in order for search engines to index it. If they did the above without offering a plain HTML alternative, they'd lose traffic from search engines. But, if they are offering that format to search engines, they have to offer it to us too.

(Okay, they could lock it down so the plain HTML is only available if User-Agent = Googlebot/Bingbot/etc and source IP is coming from Google/Bing/etc network. But, will the search engines let them get away with that? I would hope Google would refuse to index it on the grounds that their crawler is getting something very different from what the real user sees, but I guess that's their decision.)


You can already do all this. In practice, I've never seen a site render its text and ads to a canvas to make things difficult for me.

IMO, the bigger obstacle to "content that can't be linked, saved/archived, mirrored, cached, shared, translated or made otherwise accessible" is the "fundamental architecture of the web": HTTP. You can't link it (who knows if the person who clicks the link will get the same page?), you can't cache it (who knows when to invalidate it?), you can't mirror it (who can enumerate the dependencies?), and so on. Something like IPFS would fix these things (and should fix these things). But in practice, a fairly cacheable, linkable, mirrorable, sharable web has been built on HTTP and I expect nothing much will change when wasm is thrown into the big vat of web technologies too.


Google has (for years now) been indexing "apps", i.e. they execute code found in web pages and index the generated DOM (within bounds and with additional heuristics, obviously). In fact, not deepening Google's search monopoly is another excellent reason not to go all-in on the procedural web.

Mind you, I have nothing against Google nor ads in general (but I do have an issue with tracking, even though it has legitimate uses).


This isn't a dystopian future we can avoid, it is a dystopian present that already happened and we are stuck with it.

Almost every page needs JS to be usable. Almost all JS is thoroughly munged beyond readability. It's normal for even page navigation (surely a browser function) to be handled in JS, and for resource loading (again a browser function) to be done through JS making AJAX requests to URLs generated deep in a minified blob. The accessibility nightmare is real.

Ads are common as ever and adblocking is an arms race never won. DRM has been here for years, and HBO isn't putting .mp4s on an FTP server any time soon. YouTube is almost unwatchably saturated with ads and is run by Google, the search and analytics company. Targeted advertising and analytics are a bell that can't be un-rung.

The web you are trying to save became a victim of its own success, somewhere around when everyone decided it was going to be HTTP and HTML instead of gopher, the browser wars started and AOL users started to flood USENET.


>Ads are common as ever and adblocking is an arms race never won.

What are you talking about? I think I saw 2-3 ads, tops, through all my browsing for the last year. The arms race is being won by the adblockers. There was a slight upset when adblock detectors appeared, but Anti-Adblock Killer took care of them.


"Companies are doing shitty things with browsers nowadays. Instead of shaming them and calling them out on it, lets just create a system that shows us how best to grab our ankles"

This is going to ruin so much about the web, and this 'let's just give up' attitude is one of the reasons why.


Or... you could continue avoiding websites that do what you deem unacceptable things. I won't visit a Forbes website intentionally - I have no intention of supporting a site that wants to throw ads quite that blatantly in my face. There'll always be a very large number of sites that behave respectably.


What sites do you frequent? Almost every site is infested with advertising.


Sounds like Flash-based sites. Nothing really new. WebAssembly means I can use a safe language like Rust, and deliver to any platform with zero install pain for the user.

If publishers are replacing browser functionality, it's probably because the current way to reach browser functionality from wasm is bad/slow. See the second stick man graphic here:

https://hacks.mozilla.org/2017/02/where-is-webassembly-now-a...


That will also put a barrier in front of the current practice of borrowing code and techniques from other sites. How much of that goes on today is pretty opaque, so I'm not sure what effect it would have.


I don't get why people are so excited about WebAssembly, at least on the whole "replace Javascript"/"remove web bloat" aspect.

Right now we have Javascript and its assorted libraries, which, as bad as they are, come with the browser or at least are generally reused/cached very frequently.

People seem to fail to realize that everyone and their mother will dump their runtimes and their VMs on top of WebAssembly as soon as they're able to. Java Applets 1, 2, ..., 100.

Yes, they will present it in a very fancy and attractive way, but fundamentally that's what will happen, because who wants to rewrite everything for Javascript-land when they're not forced to? Competitive pressure will always push towards bloat; just look at Facebook and their Android-runtime-busting app for an example.

I, for one, am looking forward to the day where, besides the 500MB of Javascript my 40 tabs are loading, I'll also load 1.5 GB of WebAssemblied formerly native runtimes.


JavaScript got at least part of its popularity because it was the only thing that ran in all clients (Flash and Java were close, but had the disadvantage of being owned by a single big player)

Similarly, node.js got at least part of its popularity because it was the only thing that used _the_ language you can also run in the client.

This levels that playing field at least a bit (completely, if it can manipulate the DOM with little friction)

I expect many server-side languages will fairly soon compile to WebAssembly, so that they can position themselves as an alternative to node (for example, I think Apple would be stupid if it wasn't working on a Swift-to-WebAssembly compiler).

In my book, that is good; competition in that field will improve all tools.


I doubt it will be used for average brochure sites, but I have used it in two situations: one to encode a series of captured canvas frames into Ogg video on the client using ffmpeg, to save bandwidth on a telehealth web application, and another to embed a mobile game engine with samples so users can interact in the browser without having to download, build and run http://moaiforge.com/samples/sample-browser/player/index.htm...


As I understand it, wasm is not asm, but an AST format for the JS VM. You're essentially compiling your C, Rust, whatever, directly into the same thing JS is compiled to on the fly. But you do it in advance, so no parse time is needed. Also, no GC, so good luck getting Java code on there for now.

https://hacks.mozilla.org/2017/02/what-makes-webassembly-fas...


I'm excited at the prospect of 64-bit integer support, which will significantly speed up many applications, making even web-based proof-of-work mining feasible.
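
As a rough sketch of why native 64-bit integers matter for this kind of workload (the hash and difficulty below are purely illustrative, a toy FNV-1a-style loop rather than any real coin's proof of work; without wasm's i64 type this arithmetic has to be emulated with pairs of 32-bit values):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy 64-bit FNV-1a hash of an 8-byte nonce, standing in for a real PoW hash. */
    static uint64_t hash64(uint64_t nonce) {
        uint64_t h = 0xcbf29ce484222325ULL;      /* FNV offset basis */
        for (int i = 0; i < 8; i++) {
            h ^= (nonce >> (i * 8)) & 0xff;
            h *= 0x100000001b3ULL;               /* FNV prime: a full 64-bit multiply */
        }
        return h;
    }

    int main(void) {
        const uint64_t target = 1ULL << 44;      /* toy difficulty: hash must fall below this */
        for (uint64_t nonce = 0; ; nonce++) {
            if (hash64(nonce) < target) {
                printf("found nonce %llu\n", (unsigned long long)nonce);
                break;
            }
        }
        return 0;
    }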


Wait. Why would you want web-based proof-of-work mining? Does this have a legitimate purpose, or is it all about forcing unsuspecting visitors to mine Bitcoin for you?


The web page should only start the miner if the user is well informed and clicks a button to start it.

Legitimate purposes include spam-proofing message boxes, as well as cryptocurrencies whose PoW is somewhat CPU friendly, such as those bottlenecked by DRAM latency.


For things that are DoS-sensitive


In the short term: probably not for small glue scripts, but it'll provide a serious alternative for larger web applications.

In the long term, we might see WebAssembly used as a means to implement JavaScript, such that you can count on the same set of features in every browser.

That'll depend on how well WebAssembly can interface with the DOM and with other browser APIs without needing JavaScript shim layers.


The point about having a nice API is huge; otherwise we'd still be writing Java applets.


Intel x86 assembler is arguably not a nice API, but most of us are not working at that level any more.

The requirements for an instruction set aren't really the same as the requirements for a language well suited to writing, say, video poker games.


> Will [Wasm] replace JavaScript?

For bigger games, maybe. For applications, not so much.

However, it will give you the ability to mix and match different languages more freely. E.g. you can write processing-intensive modules in Rust/C/C++ and continue to script the UI with JS/TS/Dart.

This way you won't have to recompile tens or even hundreds of thousands of lines of code for minor UI changes.
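
A minimal sketch of that split, assuming Emscripten-style glue (the function name and the JS call in the comment are illustrative, not part of any particular project):

    /* Compute-heavy code written in C and compiled to wasm; the UI stays in JS
     * and calls the export through generated glue, e.g. something like:
     *
     *   const dot = Module.cwrap('dot_product', 'number', ['number', 'number', 'number']);
     *
     * passing pointers into the wasm heap.
     */
    #include <emscripten/emscripten.h>

    EMSCRIPTEN_KEEPALIVE
    double dot_product(const double *a, const double *b, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            sum += a[i] * b[i];    /* hot loop runs in wasm; UI logic stays in JS */
        }
        return sum;
    }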


WebAssembly does have GC, polymorphic inline caches, and direct DOM integration on its roadmap.

So, at some point, you should be able to compile any language to WASM, including Javascript. The only real barrier would be the size of the runtime / standard library for the language. If WASM also supports some kind of hashed cache where many sites can share a language runtime, that's not even much of a barrier.

I suspect once it's stable, you'll see all sorts of languages being used in the browser.


This is precisely what I was waiting for! Finally a set of reasonable languages for web! Many thanks!!!


Is the closer C family of languages really reasonable for the web?


Much better than the mess that is JS, IMO.


WASM does have the pieces needed for most languages on their roadmap. GC, polymorphic inline caching, direct access to DOM, threads, and so forth. No idea of what the timeline would be though.


In many ways yes, as C can be much more straightforward than JavaScript, which enforces spaghetti-programming by design (via callback hell). So anyone used to developing desktop apps, facepalming at trivial problems in JS like loading resources, can rejoice. And I expect other languages will be ported soon, like Java, C#, Scala, Python, Go, Rust etc. Imagine getting the whole Java stack of libraries, .NET, or the ML stack for Python running on top of WebAssembly with almost zero differences and at native speeds. Why not?


> In many ways yes, as C can be much more straightforward than JavaScript, which enforces spaghetti-programming by design (via callback hell).

I don't think this is a fair criticism of JavaScript. In JavaScript, if you want to, you can define all your callbacks as top-level functions and just pass a function pointer through, the same way you would in classic C or C++. But nobody does that because it's an awful way to write your code. If you want to do async programming you need callbacks of some form, and putting the callback inline is better. (Hence lambda functions being added to C++ and Java.) Arguably await/defer is better still, and unlike C, JavaScript now supports await if you want to write code that way.

The only other decent way to write high-performance code is to use lightweight threads and message passing (Go or Erlang). But again, (as far as I know) C and C++ make this sort of code really hard to write correctly because of the memory model. I'm looking forward to seeing that battle played out in Rust, where both models can work side by side.

As for resource loading in JS, promises aren't perfect but they make deferring code execution on resources loading work great. Just bundle your resource loading into a promise and then require('./resources').then(whatever => {...}).
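
For reference, the "top-level callback passed as a function pointer" style mentioned above looks roughly like this in classic C (the names here are purely illustrative):

    #include <stdio.h>

    /* The callback, defined at top level, far from the call site. */
    static void on_loaded(const char *resource) {
        printf("loaded: %s\n", resource);
    }

    /* An async-style API that takes a function pointer to invoke when done. */
    static void load_resource(const char *name, void (*done)(const char *)) {
        /* ... pretend the I/O happens here ... */
        done(name);
    }

    int main(void) {
        load_resource("./resources", on_loaded);
        return 0;
    }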


JavaScript doesn’t have “spaghetti-programming by design via callback hell”. Callbacks are a common approach to dealing with events and asynchronicity because nothing was built into the language (until it added promises and async/await, like how nothing was built into C# until it added tasks and async/await). There certainly isn’t anything built into C.


Rust has basic support today. We're working on making it even easier.

https://users.rust-lang.org/t/compiling-to-the-web-with-rust...

(This post is a bit old, it works on stable nowadays, but it gives you the basic idea)



