I worry it means "the end of browser view source", and I'm frankly concerned this will be (ab)used to implement DRM for information whose source we can easily view and copy today.
JavaScript minifiers are already in wide use by anyone making serious use of JS, and JavaScript obfuscators exist.
I think the cause of computing freedom is likely better served by building high-quality wasm disassemblers (radare has an open ticket, for instance) and by making sure that wasm code is so tightly sandboxed that DRM can't work, i.e., that you have the equivalent of an "analog hole" because you can write a browser extension / patch that taps all the data and the inside code can't tell. Hoping that technologies won't get developed has historically not been a productive approach for software freedom; the folks who want to take our freedom have enough resources that they'll do it whether or not there's a standards process involved.
> that you have the equivalent of an "analog hole" because you can write a browser extension / patch that taps all the data and the inside code can't tell.
Even if it isn't, it's not like the browser itself is just some binary blob these days. Taking the source for Firefox or Blink or WebKit and compiling your own version with slight changes is not only possible, it's already done in many instances. What's Mozilla's response when you take their browser and provide a fork whose sole change is to provide more freedom and choice? Not that it even matters, since you don't need mindshare from the public for this: developers who want to see the source will actively search for and find solutions, or make their own. There are simple extensions to bypass CORS controls for most (all?) browsers. If they didn't exist, browser variants disabling those same security mechanisms would exist.
It's all opt-in, gentlemen's-agreement-style security. Both ends have to buy in for it to work, and you control your end...
Why do I get the feeling that even though it will be sandboxed, there will be some sort of DRM API/standard that allows the website to play encrypted audio/video directly through Intel's Protected Audio Video Path?
Yes, that's a very good worry - that content will be end-to-end encrypted between the provider and an approved monitor, and not even the compiled client-side code will see the cleartext content.
But if the compiled code doesn't see the cleartext content, there's no reason to obfuscate it; the provider can safely do the same thing in unobfuscated JS, by just making an XHR to retrieve the encrypted media and passing it to a <video intel-drm-crap="yes"> element.
Arguing that no such API should exist in the browser is going to be a way easier fight than arguing that no compiled code should exist in the browser. There are plenty of good use cases for compiled code, but this one API is only useful for DRM.
There are decompilers and disassemblers for pretty much every compiled language, and WebAssembly will be no different. The gains in performance, security, and flexibility for the masses greatly offset any increased effort for the tiny percentage of people who want to view the source.
Most likely these tools will just be built in, the same way HTTP/2 isn't a text-based protocol but you don't notice anything different when using devtools to look at network requests.
I'm sympathetic to that argument, but this isn't about demanding access, this is about incentives / soft encouragement.
Many websites today have readable JavaScript because that's the natural thing to do; you just send down the JavaScript source in the original form and it runs.
Many native applications today have unreadable source because that's the natural thing to do; you compile your C or C++ code, and you only need to ship the binary. Your binary even gets smaller if you remove debugging symbols.
You can do otherwise in both cases (obfuscators in the former, providing source in the latter), but it requires an active decision. Far fewer software authors consciously decide a priori whether their source code should be readable or unreadable based on what they actually want. Same with server- vs. client-side development: you can easily hide all your source by keeping it server-side, but for the sake of some technical goal people decide to move parts client-side, and decide that having it be world-readable is okay.
OP is advocating for a world where people continue to default to providing their source, not one where people are compelled to.
If you render to a canvas rather than generate plain text, then I have to use screen readers with built-in OCR to perform "copy," which is a pain. It doesn't protect you, but it makes my experience worse.
What we're learning from music and movies is that any move to restrict users just leads to user flight; any move that opens up and lets users have a great experience with your IP leads to user delight.
"With this, developers can start shipping WebAssembly code. For earlier versions of browsers, developers can send down an asm.js version of the code. Because asm.js is a subset of JavaScript, any JS engine can run it. With Emscripten, you can compile the same app to both WebAssembly and asm.js.Even in the initial release, WebAssembly will be fast. But it should get even faster in the future, through a combination of fixes and new features."
I still don't understand: can this be a general-purpose replacement for JavaScript or not? Because if it is, then every language, even Dart and TypeScript, should switch to producing WebAssembly. Right?
No, WebAssembly is not currently a general-purpose replacement for JavaScript. Maybe in the future it will be, but right now WebAssembly doesn’t have any APIs to access the DOM. This makes it only good for quickly doing calculations with numbers or data that can be easily represented as numbers, such as implementing a physics engine for a browser game.
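For a concrete sense of the split, here's a rough sketch of calling into a wasm module from JS today (the file name "physics.wasm" and its "stepSimulation" export are made up): the module only sees numbers, and anything touching the DOM still happens on the JS side.

    // Sketch: fetch compiled wasm, instantiate it, call a pure-numeric export.
    // "physics.wasm" and "stepSimulation" are placeholder names.
    fetch('physics.wasm')
      .then(response => response.arrayBuffer())
      .then(bytes => WebAssembly.instantiate(bytes, {}))
      .then(({ instance }) => {
        // Arguments and results are plain numbers; no DOM access inside wasm.
        var energy = instance.exports.stepSimulation(0.016);
        document.title = 'energy: ' + energy; // DOM work stays in JS
      });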
Even if WebAssembly had some API to access the DOM, I have a feeling it would always end up sort of like GWT: you can do stuff that touches the UI in it, but it probably isn't the best tool for that.
I think the big milestone for me is going to be when LLVM has first-class support for a wasm backend. I get that you can already get similar behavior using emscripten through asm.js to wasm, but it still feels clunky.
Still, it's great to see that things are moving along smoothly (and the new logo looks really nice!).
That's in the works. Here is a patch that provides some of the foundation for getting first-class support for WASM: https://reviews.llvm.org/D26722
Interestingly enough, the "clunky" Emscripten compilation path is currently quite a bit faster than the WASM backend, because it bypasses all the cruft in the LLVM backend, which can be pretty slow.
The logo isn't as nice as the most-voted one, but I get the philosophy behind it, which is probably the biggest reason it was chosen.
But my heart still remains with the other logo. It looks so modern and can go with different color palettes.
I believe that some of the core people who proposed this are actually NOT getting what they really intended, because ALL APIs are still only accessible through JavaScript.
I think this could be much more useful if at least some APIs didn't have to go through JS. Not saying that's easy.
Part of the problem is the whole idea that every program is supposed to test for the existence of every feature it might need. I think that's ridiculous. I suggested on GitHub that what actually needs to happen eventually is to decompose the web platform into a bunch of semantically versioned modules. One big problem with that is that modern modularization is not really first class in C++, because of its legacy worldview.
First DRM and now this. I'm getting the feeling the web is not interested in my priorities anymore.
So if I wanted my server to support a standard that prioritizes easy, accessible exchange of information, openness, and user control, where would I look?
All WebAssembly does is further obfuscate the code that runs in your browser. Have you ever tried to reverse-engineer Gmail or any modern web app? It's already basically impossible.
A part of the web has simply become an application distribution system. That's not necessarily a bad thing; many other important websites are still very much open and accessible, like Wikipedia for example.
The web just became bigger than it was before 2001.
This is a further step in the reduction of user control, the thing that I think made the web so successful in the first place. I suspect it will kill the web, but it will at least make it unacceptable for me. I would like to find and support the replacement as soon as possible.
I'd love to know what the current advantages are over running asm.js. I understand that it will definitely be faster eventually, but if I have a project that uses asm.js today, would it make sense to run it as WebAssembly instead? (Ignoring the fact that not all browsers support it.)
One potential issue:
"If you have lots of back-and-forth between WebAssembly and JS (as you do with smaller tasks), then this overhead is noticeable."
As far as I'm aware, asm.js code doesn't have this issue, since it's just JS code. Is that correct?
(edit: I should have mentioned that I'm primarily interested from an electron.js point of view at the moment, where Firefox asm.js optimizations are unavailable)
asm.js code is "just" JS code in the sense that it is a subset of JS: a non-asm.js-aware JS implementation can treat it the same way as normal JS, and it will execute correctly. But in implementations where it's fast, asm.js is usually handled separately.
Re your last question: when I wrote http://wry.me/hacking/Turing-Drawings I found that "each call into an asm.js function takes about 2 msec (at this writing)" (on Firefox). That was back in the week of asm.js's release, and I'd bet the situation's better now. But it was the same kind of issue.
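To make "subset of JS" concrete, here's a toy hand-written asm.js-style module (just a sketch; real asm.js is compiler output and this may not please every validator, but it runs as ordinary JS either way):

    // A minimal asm.js-style module: typed via |0 coercions, exports one function.
    function ToyModule(stdlib, foreign, heap) {
      "use asm";
      function add(a, b) {
        a = a | 0;            // declare int parameter
        b = b | 0;            // declare int parameter
        return (a + b) | 0;   // int result
      }
      return { add: add };
    }

    // Any JS engine can just call this; asm.js-aware engines compile it ahead of time.
    var toy = ToyModule(window, {}, new ArrayBuffer(0x10000));
    console.log(toy.add(2, 3)); // 5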
Emscripten should produce an asm.js fallback if wasm support is missing. The biggest wins in the short term will be smaller code size, and faster first startup time.
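The loader side is basically a feature check, something like this sketch (file names are placeholders; in practice Emscripten's generated glue code handles fetching and instantiating the wasm for you):

    // Sketch: pick which Emscripten build to load based on wasm support.
    var script = document.createElement('script');
    script.src = (typeof WebAssembly === 'object')
      ? 'app-wasm.js'    // JS glue that loads and instantiates app.wasm
      : 'app-asmjs.js';  // plain asm.js fallback build
    document.body.appendChild(script);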
Opening the web to non-Javascript languages will make it a more general platform. Imagine if Linux only let you run programs written in Perl! That's largely the situation the web has been stuck in for 20 years. I think the concerns about "view source" are ill-founded, as minified Javascript is already essentially an uninterpretable (by humans) assembly language. This essentially retains the status quo regarding human access to the underlying code, while broadening the number of ways you can build web applications. I support it.
This is great news, if nothing else because it actually allows programs to grow their heap. Previous Emscripten-based solutions have suffered severely from either running out of RAM, or requesting more than they need (and then sometimes not even being able to start because of fragmentation).
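For what it's worth, the growth story at the JS level looks roughly like this (a sketch; the page counts are arbitrary, and wasm memory is sized in 64 KiB pages):

    // Sketch: wasm linear memory can grow at runtime, up to an optional maximum.
    var memory = new WebAssembly.Memory({ initial: 16, maximum: 256 });
    console.log(memory.buffer.byteLength); // 16 pages * 65536 = 1048576 bytes

    memory.grow(16); // request 16 more pages; throws if the engine can't grow
    console.log(memory.buffer.byteLength); // 32 pages * 65536 = 2097152 bytes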
The web browser is still a second-rate user interface toolkit compared to a native app, but at least this gives us a slight step forward. Whether that's enough, or whether most application development will be made using native toolkits in walled gardens, remains to be seen.
I'm afraid that without DOM access and GC it won't get much traction or testing as far as the community is concerned. Did anyone take wasm seriously? (i.e., run their main external/internal app on wasm as a beta version)
I know... but when? 2030? There is no timeline. My point is that GC and DOM access should have been part of the MVP if it were to gain any traction as the assembly of the web. Otherwise it remains an experiment/niche like asm.js, PNaCl, etc. I haven't seen many (if any) "todo" apps in wasm yet.
There are several SDL2 Emscripten examples for compiling to asm.js. You follow the same process, except you direct Emscripten to generate WebAssembly using LLVM's "vanilla" WebAssembly backend [1]. You will probably first need to compile LLVM yourself with the experimental WebAssembly target enabled, as explained in the link.
A reference implementation of something is not end-user software; it's meant to be part of the specification, codifying the semantics. An interpreter is more useful in this case (what would even be the compile target?).
http://webassembly.org/getting-started/developers-guide/