WebAssembly support now shipping in all major browsers (blog.mozilla.org)
574 points by subir on Nov 14, 2017 | 339 comments



A good occasion to watch Gary Bernhardt's talk "The Birth & Death of JavaScript" [0] again, where he talks about the precursor of WebAssembly, asm.js, and the future implications it "could" have, in a really humorous way. A few years old but still relevant.

You want Gimp for Windows running in Firefox for Linux running in Chrome for Mac? Yeah sure.

[0] https://www.destroyallsoftware.com/talks/the-birth-and-death...


And for a somewhat more practical but at the same time more exotic example, the Internet Archive has a ton of old minicomputers and arcade games running in MESS/MAME, each compiled to webasm. One click and you can boot anything and play it in your browser. https://archive.org/details/softwarelibrary

https://archive.org/donate/


That is gorgeous. I just booted Win 3.1 to Minesweeper in less than a minute on my phone's browser.

Too bad makers of the original Minesweeper did not think to build touch input support.


This is amazing. If only it would also work on my phone. Probably for the best, to stop me “wasting” time ;).


Are you sure that's actually using WASM? It sounds to me like it's currently using ASM.js compiled via Emscripten. (Though in theory there's no reason why it _couldn't_ be WASM, since Emscripten supports WASM as a compiler target.)


I thought they switched over back in July. https://twitter.com/textfiles/status/884084207688892416


>You want Gimp for Windows running in Firefox for Linux running in Chrome for Mac? Yeah sure.

I actually do. I want all code ever written and every environment it was ever written for to have a URL that will let me run it in the browser. Everyone else seems to want the web to go back to being whitepapers but I want actual cyberspace already!


There's another reason why I want JavaScript in the browser to die:

We haven't had a new browser engine written from scratch since KHTML. Firefox is a descendant of Netscape; Chrome and Safari are descendants of WebKit, which is itself a descendant of KHTML; Edge is closed source, but I'm almost sure there's some old IE code in there.

Why?

It's simply too expensive to create a fast (and compatible) JS engine.

If WebAssembly takes off, I hope that one day we'll have more than three browser engines around.


The real problem is CSS. Implementing a compliant render engine is nearly impossible, as the spec keeps ballooning and the combinations of inconsistencies and incompatibilities between properties explode.

Check out the size of the latest edition of the book "CSS: The Definitive Guide":

https://twitter.com/meyerweb/status/929097712754098181

Until CSS is replaced by a sane layout system, there's not going to be another web browser engine created by an independent party.


I think Blink's LayoutNG project and Servo both show that you can rewrite your layout implementation (and Servo also has a new style implementation, now shipping in Firefox as Stylo). I think both of those serve as an existence proof that it's doable.


It's doable if you already have a large team of experienced web engine developers, a multi-million-dollar budget and years to spend on just planning.

Implementing an open standard shouldn't be like this. Even proprietary formats like PDF are much simpler to implement than CSS.


TCP/IP is an open standard, yet I suspect you would have the same problems implementing it (Microsoft famously copied the BSD stack at first).

You could probably say the same thing about any complex open standard, like FTP etc.


A minimal, 80% PDF, maybe. A complete PDF? No.


It's not doable, apart from when it is.

'Open standard' says nothing about something being simple and straightforward. What CSS is trying to do is complicated because of a whole bunch of pragmatic reasons.

Last time a browser tried to 'move things forward', ngate aptly summed it up as

    Google breaks shit instead of doing anything right, as usual.
    Hackernews is instantly pissed off that it is even possible to
    question their heroes, and calls for the public execution of
    anyone who second-guesses the Chrome team


I never claimed that the existing browser vendors can’t do it incrementally — they certainly can. What I wrote was: “... there's not going to be another web browser engine created by an independent party.”


Isn't Grid and Flexbox supposed to be that sane layout system? At least that's what I've heard from those who have used them.


Even if Grid and Flexbox are awesome and perfect and the solution to all our problems, they don't make everything else in CSS suddenly disappear; a new layout/render engine still has to implement every bit of it, quirks included.


React Native has what seems like a pretty sane css-like layout system. Maybe this could become the basis for a "css-light" standard that could gradually replace the existing css, and offer much faster performance for website authors who opt in.


I presume parent's point is about how they then interact with other layout modes (what if you absolutely position a flex item, for example), along with the complexity of things that rely on fragmentation (like multicol).


What do you base the assumption on that Javascript is the critical piece of complexity here? (it might very well be, but it's not obvious to me)

At least some of the JS engines are used in non-browser projects (V8 and Microsoft's), which at least superficially would suggest you could write a new browser engine and tie it to one of those existing JS interpreters. WebAssembly will gain interfaces to the DOM as well, so the complexity of that interaction will remain.


> Edge is closed source, but I'm almost sure there's some old IE code in there

EdgeHTML is a fork of Trident, so yes. That said, I'm led to believe there's about as much commonality there as there is between KHTML and Blink: they've moved quite a long way away from where they were.

> It's simply too expensive to create a fast (and compatible) JS engine.

I don't think that's so clear cut: Carakan, albeit now years out of date, was ultimately done by a relatively small team (~6 people) in 18 months. Writing a new JS VM from scratch is doable, and I don't think that the bar has gone up that significantly in the past seven years.

It's the rest of the browser that's the hard part. We can point at Servo and say it's possible for a comparatively small team (v. other browser projects) to write most of this stuff (and break a lot of new ground doing so), but they still aren't there with feature-parity to major browsers.

That said, browsers have rewritten major components multiple times: Netscape/Mozilla most obviously with NGLayout; Blink having their layout rewrite underway, (confusingly, accidentally) called LayoutNG; IE having had major layout rewrites in multiple releases (IE8, IE9, the original Edge release, IIRC).

Notably, though, nobody's tried to rewrite their DOM implementation wholesale, partly because the payoff is much smaller and partly because there's a lot of fairly boring, uninteresting code there.


> Notably, though, nobody's tried to rewrite their DOM implementation wholesale

Depending on your definition of "wholesale", the Edge team claims it took them 3 years to do exactly that:

https://blogs.windows.com/msedgedev/2017/04/19/modernizing-d...


Oh, yeah, that definitely counts. I was forgetting they'd done that. (But man, they had so much more technical debt around their DOM implementation than others!)


I completely disagree that the issue is JavaScript here.

In my opinion, the issue is the DOM. Its API is massive, there are decades of cruft and backwards compatibility to worry about, and its codebase is significantly larger than the JS engine's in all major open source browsers out there.


Naw, the DOM is fairly small. Go here for a summary http://prettydiff.com/guide/unrelated_dom.xhtml

HTML is far larger than the DOM. Like comparing an ant to Jupiter.


I'm not sure I agree that the DOM is that bad (more that people are using it improperly and for the wrong things), but yeah, modern JavaScript is hardly to blame for anything. The closest thing to an argument I've heard is "mah static typing".

Asking WebASM to be everything, including a rendering engine, is asking for problems at such an atrocious level.


I'm not implying that the DOM is bad (IMO it's one of the most powerful UI systems I've ever used, both in what it's capable of and in the speed with which I'm able to develop and iterate something with it), just that it's BIG.

There's a lot of "legacy" there, a lot of stuff that probably should be removed, or at least relegated to a "deprecated" status.


> I'm not sure I agree that the DOM is that bad(more that people are using it improperly and for the wrong things)

If an API makes it easy to make mistakes it's a bad API. Blaming "people" is a cop-out.


So what you're saying is that people are stupid?

If people are misusing the DOM API at a fundamental level (to do non-DOM related things), that's not a fault of the API. It's as if everyone has forgotten that DOM means Document Object Model. The vast majority of websites and web apps are not very complicated on the client-side, so I'd say that the DOM API generally does its job well. Trying to do anything that's not about constructing a document, or that does heavy amounts of animation or node replacement, using a document-building API is asking for a bad time. It's quite implicitly attempting to break fundamentals of computer science.

Making the API one uses to render pages in a browser a free-for-all doesn't solve the problem, and you end up losing many of the advantages of having actual standards. What would be better is for the browser to provide another set of APIs for things beyond the "expertise" of the DOM. This is kind of the case right now in some regards, but there's a reason why React and Glimmer use things like virtual DOMs and compiled VMs. I'd argue that a standardization of such approaches could be a different API that probably shouldn't even be referred to as a DOM, because they are meant to take a lot of shortcuts that aren't required when you are simply building a document. In a sense, WASM is intended to fulfill this purpose without replacing the DOM or JavaScript.

Object-oriented programming is quite often misused/misunderstood. Does that mean it's a "bad" concept? I wouldn't say so. Education tends to suck, and people are generally lazy and thus demand too much from the tools they use.

I'm not copping out just because I'm not putting the DOM on a pedestal. Calling it a bad API because it doesn't work well for a small minority of cases is a total mischaracterization. If it were an objectively bad API, it wouldn't have seen the astounding success it has.

EDIT: I'm not saying that programmers are stupid... but that their expectations are sometimes not congruent with reality.


If an API is meant for documents and you're using it for applications, that's not the API's fault


WebAssembly has nothing to do with JavaScript. When people make this association it is clear they are painfully unaware of what each (or both) technologies are.

WebAssembly is a replacement for Flash, Silverlight, and Java Applets.


Flash, Silverlight and Java Applets all provided APIs and functionality above and beyond what JavaScript or the browser DOM provides. WebAssembly is the opposite, as it is much more restricted than JavaScript. WASM is a sandbox inside a sandbox.


At the moment it is, because the only performant visual output is the canvas.

They are adding native DOM access, which changes things.


and js. eventually!


JS is a language and not a bytecode media. Perhaps chocolate will replace cars and airplanes. I love me some chocolate.


I am extremely unclear on whatever point you're trying to make, here, because it really does seem to come from a place of ignorance on WASM and JS. It makes no sense.

It seems like you're claiming, in a really roundabout way, that WASM will never have DOM access, even though it's planned[1]. There are even VDOMs[2] for WASM already. Future WASM implementations that include DOM access can absolutely, and for many folks will, replace Javascript.

[1]: https://github.com/WebAssembly/design/blob/master/FAQ.md

[2]: https://github.com/mbasso/asm-dom


> It seems like you're claiming, in a really roundabout way, that WASM will never have DOM access

I am not going to say never. It does not now and will not for the foreseeable future though. I know DOM interop is a popular request, but nobody has started working on it and it isn't a priority.

Part of the problem in implementing DOM access for an unrestricted bytecode format is security. Nobody wants to relax security so that people who are JavaScript challenged can feel less insecure.


Which of the security concerns browser Javascript deals with do you think are intrinsic to the language, as opposed to the bindings the browser provides the language? If the security issues are in the bindings (ie: when and how I'll allow you to originate a request with credentials in it), those concerns are "portable" between languages.


Not sure if this is directly relevant, but there have been all sorts of type confusion bugs when resizing arrays, etc. Stuff in the base language. They exist independent of API, but merely because the language is exposed.


It isn't due to the language but to context of known APIs provided to the language that can only be executed a certain way by the language.

How would a browser know to restrict a web server compiled into bytecode specifically to violate same-origin? The browser only knows to restrict this from JavaScript because such capabilities are allowed only through APIs the browser provides to JavaScript.


I really don't understand your example. Are you proposing a web server running inside the browser as a WebAssembly program, and the browser attempting to enforce same origin policy against that server? That doesn't make much sense.


Yep, it doesn't make sense and that is the problem. There is no reason why you couldn't write a web server in WASM that runs in an island inside the browser to bypass the browser's security model.


This does not make any sense, sorry.


> I know DOM interop is a popular request, but nobody has started working on it and it isn't a priority.

I linked to the latest proposal downthread; people are absolutely working on this.


It will take more than just DOM access to replace Javascript. Just off the top of my head you'd also need access to the event loop, XHR, websockets, audio & video, webRTC, Canvas, 3D, Filesystem, cookies & storage, encryption, Web Workers and more.


JS is a language and not a bytecode media.

That's an arbitrary distinction that's driven by developer group politics, not a meaningful technical distinction. (Much like the old Rubyist, "It's an interpreter, not a VM.")

Machine languages were originally intended to be used by human beings, as were punch cards and assembly language. There's no reason why a person couldn't program in bytecode. In fact, to implement certain kinds of language runtime, you basically have to do something close to this. Also, writing Forth is kinda close to directly writing in Smalltalk bytecode. History also shows us that what the language was intended for is also pretty meaningless. x86, like a lot of CISC ISAs, was originally designed to be used by human beings. SGML/XML was intended to be human readable, and many would debate that it succeeded.


> That's an arbitrary distinction that's driven by developer group politics

Not at all. JavaScript is a textual language defined by a specification. Modern JavaScript does have a bytecode, but it is completely proprietary to the respective JIT compiler interpreting the code and utterly unrelated to the language's specification.

> There's no reason why a person couldn't program in bytecode.

True, but that isn't this discussion.


The point is that a "textual language defined by a specification" can serve the exact same purpose that a bytecode does. And JavaScript is very much on this path.

That is this discussion, because the fact that people program directly in JavaScript does not prevent it from being in the same class of things as a bytecode.


>JS is a language and not a bytecode media

Does it even matter when most people are using it as a compiler target (even just from newer versions of the language)?


Yes, but the point is that JavaScript as the "one true way to do client side scripting" can be replaced by webassembly in that capacity.


It cannot. WebAssembly bytecode does not have an API to web objects or the DOM. It is literally a sandbox for running bytecode only.


How about Servo?


Servo doesn't render major websites properly (last I checked). Their UI is a placeholder. Their network/caching layer is a placeholder. There are no updates, configuration, add-ons, internationalization.

Servo is not meant to be a real browser. That's not a bad thing, but I don't think you can use it as an example of a browser built quickly by a small team.


yeah, wasn't this (or related to it) at the top of HN just yesterday?

https://news.ycombinator.com/item?id=15686653


IIRC Servo uses quite a bit of Firefox code

Edit: Looking at the project it seems like it uses SpiderMonkey, but is otherwise new code


Chrome's V8 engine was actually written from scratch, unlike Webkit's JavaScriptCore (which descended from Konqueror/KJS, as you say). Google made a big deal about marketing this fact at the time. (1)

And while yes, Mozilla's Spidermonkey comes from the Netscape days, and Chakra in Edge descends from JScript in IE, plus the aforementioned JavaScriptCore, each of those engines still evolved massively: most went from simple interpreters to multi-tier JITs over the years. I suspect that no more than interface bindings remain unchanged from their origins, if even. ;-)

(1) I can't currently find the primary sources from when Chrome released on my phone, but here's a contemporary secondary one: https://www.ft.com/content/03775904-177c-11de-8c9d-0000779fd...)


If the issue is JavaScript, what explains the explosion of JavaScript engines? I agree that JavaScript is a cancer whose growth should be addressed, but implementation complexity isn't a reason.


If these proposed browsers don’t ship with a JS engine [1], do you also hope to have more than one internet around?

[1] Such as V8, Chakra, JavaScriptCore, SpiderMonkey, Rhino, Nashorn: there is a variety to choose from, plus experimental ones such as Tamarin. They are almost certainly not the critical blocker for developing a browser.


IE/Edge heritage goes back to Spyglass Mosaic.


Also relevant: callahad a few weeks ago: https://twitter.com/nybblr/status/923569208935493632

Netscape navigator on DOS in Firefox via WebAssembly.


Or how about a live coding environment for an Atari VCS (2600) emulator ;)

http://8bitworkshop.com/?platform=vcs&file=examples%2Fhello


My CS background is a bit weak... is the hypothetical Metal architecture he describes supposed to be satire or actually a good idea?


Some say that joking is a socially acceptable way to say socially unacceptable ideas.

I think it's a great idea, though many disagree. It's basically ChromeOS but to the next level.


Implement a WASM JIT in kernel space and you don't need a userspace, while still having hot code hopefully optimized to remove bounds checking. Now all your programs are WASM modules, and we can replace your CPU with some random architecture that doesn't have to care about supporting more than ring 0. Why not implement a nearly-WASM CPU? Probably just change branches to GOTO. Then the only program people care about, their browser, can have a dead-simple JIT for this architecture, with WASM-in-the-browser being nearly as fast as any other program.


There’s prior art for this too: Microsoft ran a research project called Singularity that was essentially a kernel that only executed .NET bytecode, and had similar advantages (everything in ring 0, no syscall overhead, etc.)

It died pretty unceremoniously though.


We had Joe Duffy talk about it at RustConf this year! https://www.youtube.com/watch?v=CuD7SCqHB7k


It died because it couldn't become an actual product and had a lot of very smart engineers spending a lot of time on something that had no future. Some of the core tech was reused and turned into other products.


Mostly satire because the math doesn't really work out in such a way.


It doesn't? How so? I was under the impression that the performance-savings calculations he used were at least plausible. (Though obviously just a back-of-the-napkin estimate.)


I'm more excited about the prospects of running V8 inside ChakraCore inside Quantum.


I'm more excited about the prospect of running all of FF or Chromium inside of Edge so I can cut my workload down by 50%


Unfortunately SIMD is still not supported on any browsers and with the move away from SIMD.js it looks like this might take a while.

We've been working on porting over our fairly large barcode scanner library to WebAssembly. While the performance is close to what we have on other platforms ( http://websdk.scandit.com ), the major bottleneck for now is not being able to use optimized code relying on SIMD (and not having an existing C fallback as all other platforms we target have SIMD support)


My understanding is that SIMD support has a proposal that most people are happy with, and will be landing in 2018.


Right now there are SIMD prototypes in 3 engines (SpiderMonkey, ChakraCore, and V8) and the remaining work is standardization between them, tool support, and performance tuning. There will be an official SIMD proposal for WASM in 2018 and it should move through the standardization process pretty quickly.


This is precisely what I'm waiting for as well. I want to run the Nengo neural simulator in the browser, so I can share my research easily, but it looks like I'll have to wait a few years.


www.nengo.ai

wow, this is really cool! thx for sharing :]


> Unfortunately SIMD is still not supported on any browsers

Is SIMD portable though? Will it run well on mobile devices? And what about other non-Intel architectures in general?


Yes. Phones have SIMD. Hell, the cheapo MIPS in my router has SIMD. You use the portable simd instructions and they get compiled to native simd instructions for your platform. Just like any other part of webassembly.


Not really portable. Even x86 has lots of variation between supported operations and vector length. ARM similarly has variations between ARM versions. At least for the abandoned JS SIMD effort, they stuck to the least-common-denominator SIMD (SSE1/NEON-armv7), but it was still good for a noticeable speedup in many applications.
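
For reference, this is roughly what the (now abandoned) SIMD.js API looked like; Float32x4 was about the level of that common denominator:

    // Sketch of the abandoned SIMD.js API: 128-bit vectors only,
    // roughly the SSE1/NEON-armv7 common denominator mentioned above.
    var a = SIMD.Float32x4(1, 2, 3, 4);
    var b = SIMD.Float32x4(5, 6, 7, 8);
    var c = SIMD.Float32x4.add(a, b);  // four float adds in one operation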


The barcode scanner is really impressive, it worked well on my G5 plus running FF.

I'm excited for WebAssembly.


Have you tested performance on older and lower spec devices? Does it still hold up?


On older mobile devices it can decrease significantly, but it's still much better than any known Javascript alternatives (e.g. quaggaJS) and works reasonably well even when only passing a handful of frames/sec to the scanner library (which happens on slower devices).

OTOH having SIMD would speed it up significantly and probably get them all up to speed.


That's interesting. We actually use quaggaJS quite heavily, and haven't found any large performance issues even on lower spec devices.

That being said, it did take quite a lot of time to get to that point (tuning scan frequency, resolution, and a bunch of other things). We looked into Scandit recently (our company does use Scandit in other products) and even though your library can detect and scan barcodes in worse conditions, it didn't really improve scan times or fix our biggest issues on web, which by the looks of it are the same issues you are running into (iOS 11 being a pain in WebViews and sites added to the homescreen, and the lack of focus management APIs meaning you are at the mercy of the autofocus). Plus the requirement for a license check at runtime is shooting our big use case in the foot (some of our users run the webapp offline for days at a time!)

But we have looked into some webassembly for some of this code, and I'm glad to hear it holds up about as well as expected on lower spec devices. I was worried about lower spec devices not being able to handle the larger "binaries" that WASM tends to produce.

And this is really off topic, but I'd love to pick your brain on how you "solved" the constraints issues on some devices with the getUserMedia stuff (flipping the aspect ratio requested, some devices rejecting all constraints except 2 or 3 "blessed" resolutions, problems with orientation affecting the returned resolution regardless of what was asked, etc...). I understand if you can't talk about it due to company secrets, but I figured I'd give it a shot! If you are open to it, my email is on my user page.


Unfortunately I can't get into details regarding the getUserMedia stuff, not only due to trade secrets but also being from the product team and not engineering :)

> plus the requirement for a license check at runtime is shooting our big use case in the foot

That should only be a problem with test licenses, ping us at support@scandit.com and we can help.


SIMD is like the kids version of GLSL, which works today :)


Running anything with the GPU introduces a huge amount of latency, it only makes sense when you need high throughput and have large enough workloads to justify the latency. SIMD code can be interleaved with normal native code with zero latency.

And then there's the fact that WebGL is so much behind the state of the art that it's not even funny. Sticking to an old version of GL/GLSL severely limits what you can do with it.


>have large enough workloads to justify the latency

Hashsum bruteforcing? It already appears to be picking up; I'm having quite frequent browser crashes these days because of shoddily written JS miners trying to do stuff on the GPU


> shoddily written JS

Blame the OS, graphics card drivers, or the browser, but not JS!


Not sure what you mean (maybe I'm missing your point) but those two things are not very comparable. GLSL is an uncompiled GPU language and SIMD is a class of CPU instructions that exploit parallelism opportunities at the block level (as opposed to the core level, like a GPU).


Both can be used to accelerate the image processing application in question. GLSL is compiled on the fly to GPU instructions that exploit parallelism opportunities, but more so than SIMD, because GPUs have greater internal parallelism.


Yes, they share "parallelism" in the most abstract sense, but beyond that, they are completely different approaches in almost every other way.

You might try to use them to accelerate the same specific task, but they have very different capabilities and performance limitations/advantages. Also, one is fairly generalised and one is intended to be very domain specific, so they don't share all the same types of potential application.


I'm still not sure why you insist they are not comparable. GPU compute vs CPU SIMD is a very standard comparison and image recognition applications, like the one discussed, frequently support both.


Well, you've added some context in which they are comparable (some types of image processing), and I agree they are comparable given that specific context. I think that probably was your point that everyone else didn't get, but your original comment was void of context and implied some level of interchangeability. Keep in mind that they are used for far more than image processing (even GLSL is), and will have very different results and very different implementations for the subset of tasks that can be accelerated by both.


If you're looking for an introduction to WebAssembly my "WebAssembly 101: a developer's first steps" post had some success here: https://blog.openbloc.fr/webassembly-first-steps/

HN discussion: https://news.ycombinator.com/item?id=14495893

The awesome-wasm list is also a good start: https://github.com/mbasso/awesome-wasm


For people wondering about features like SIMD and GC support, here is a good status page: http://webassembly.org/docs/future-features/


Specifically for GC, all that page does is to refer to a GitHub issue that is invisible to anyone but WASM contributors :-(


Invisible? You mean locked? https://github.com/WebAssembly/design/issues/1079

I'm not a collaborator on the WebAssembly Org, but I can still see that issue just fine.


Yes, I see that there is an issue there, but the discussion is invisible. Hidden. Locked.


> invisible. Hidden

It's not. It's just that there are no other posts on that issue. GitHub does not have a way to make an issue visible to the public but hide all discussion on it.

> Locked

That's not the same thing. Locked just means you can't add your own comments, not that you can't see comments from other users.


Ah ok, thanks. That there is no discussion is even more disappointing.


The math site they linked to in the examples of sites powered by WebAssembly is awesome: http://mathstud.io/

I almost wish I was a student again so I could relearn math for the first time with all of these great tools...


Wow, that is really cool actually. Impressive performance on my OG Pixel XL too.


I'm not as excited for this as I used to be. In most user applications JavaScript is good enough or better. If it wasn't, then we wouldn't be using the browser to make desktop applications. Recently I decided to make a desktop app and asked around about the different UI libraries. The answer I keep getting is "just use Electron and JavaScript". Why? Because, love it or hate it, the DOM is fantastic and simple for making UIs that are interactive and reliable. And you can't beat JavaScript for manipulating the DOM.

The only things I can think of that benefit are specialized software like games or scientific analysis/simulation. But for what most users want, JavaScript is fast enough. The example I keep hearing is "imagine Gimp in the browser", but it's already possible to make a Gimp-like application in the browser using things like canvas and the File API.

So by the time WebAssembly is ready for prime time and has needed features like memory management and DOM access, will it even be worth it beyond a few specific applications?


I'll agree with you that modern HTML and CSS for presentation is best-of-breed. I'll accept that the DOM API is sufficient.

But neither of those necessitate JavaScript; JS is just a language that happens to run in the browser and has DOM API bindings (and the other browser APIs too). There's no reason those identical bindings couldn't be provided in any other language.


> will it even by worth it beyond a few specific applications?

I'm sure if you sampled all developers, the number that would say Javascript is their favorite programming language would be in a substantial minority. So if it makes it easier for developers to write client side code in their preferred language, it's worth it.

That being said, I'm worried a bit. Javascript being awful has traditionally kept developers doing as much runtime work as possible server side. In the last 5 years the rapid growth of JS libraries, client side frameworks, and recently ES6 have all reduced the pain points and correlated with a dramatic rise in client side code being loaded on users' computers, with corresponding huge page size bloat.


> I'm sure if you sampled all developers the number that would say Javascript is their favorite programming language would be in a substantial minority.

I wouldn't be so sure of that.


Agreed, it'd likely be one of the largest shares


You should check some of the SO developer surveys. I think you'd be surprised...

https://insights.stackoverflow.com/survey/2017#technology-mo...


One interesting thing to me is that you don't have to write an entire app using only WebAssembly or JS. You can take the classic approach of benchmarking to find hotspots, and then optimizing those. Migrating these performance sensitive sections to more performant code can be a win.
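
For instance (a sketch; "hotspot.wasm" and its exported sum function are made up here):

    // Keep the app in JS; load only the hot numeric kernel as wasm.
    fetch('hotspot.wasm')
      .then(response => response.arrayBuffer())
      .then(bytes => WebAssembly.instantiate(bytes))
      .then(({ instance }) => {
        // everything else stays plain JS; only this call hits wasm
        console.log(instance.exports.sum(10000000));
      });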


>you can't beat JavaScript for manipulating the Dom.

Exactly. Only JavaScript can access attributes and call functions so well on exported document and window objects.


JS is enough for most application code. There are libraries that have been and always will be in C, C++, or something similar, and it would make the most sense to wrap and expose them for different environments. JS makes no sense for shared libraries.


I know that Figma is built on WebAssembly now, and has quite impressive performance.


There are a lot of image compression formats better than JPEG and PNG. With wasm you could have very fast client-side decompression and save a ton of bandwidth.


Can anybody ELI5? The documentation is pretty fluffy.

Does WebAssembly actually open up any new API hooks? I get that it's a clever way of transpiling existing programs to JavaScript, but surely we could do that already?

What new avenues of development is WebAssembly expected to open up? Is the whole point just to enable an easy way to compile games made on other platforms (Unity) to the web?


WASM is used to run native code in the Web browser without going through JavaScript. With it, it should be possible to run code at near-native performance in the browser.

WASM is an intermediate representation which is output by your compiler (of your favorite language) and consumed by the browser's compiler to emit native code.

WASM is a bit similar to LLVM IR, but it's architecture independent.

Compare this to, say, LLVM and Clang. Clang (the C compiler front end) will read C code and emit LLVM IR. LLVM (the compiler backend) will read LLVM IR and emit assembly code for your CPU. With WASM, the developer will run the "front end" and distribute the WASM code over HTTP and the web browser will run the backend and turn WASM into native assembly code.
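
In browser terms the backend step looks like this (a minimal sketch; wasmBytes and the exported main are assumptions):

    // The browser acts as the compiler backend: architecture-independent
    // wasm bytes in, native code out.
    WebAssembly.compile(wasmBytes)                      // bytes -> Module
      .then(module => WebAssembly.instantiate(module))  // Module -> Instance
      .then(instance => instance.exports.main());       // assumed export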

> I get that it's a clever way of transpiling existing programs to JavaScript, but surely we could do that already?

No, WASM code is not JavaScript at any point. ASM.js is a predecessor to WASM that was a compiler-friendly variant of JavaScript that can be compiled to native code.


But is this how it is implemented by actual browsers? If memory serves, V8 is pretty much reusing most of its JS VM for WASM?


Yes, the browsers do share parts of their JavaScript execution engine with WASM. Which makes sense because WASM still needs to interact with JS. That doesn't mean it can't be fast or start quickly (or at least quicker than asm.js).

From the developer's point of view that doesn't matter. You're delivering WASM and not C or JS code over to the clients.


WASM requires a much simpler VM than JS. It doesn't take most of a JS VM.


Oh I see- that makes sense.


Today, browsers' virtual machines can work only with Javascript code. That means that if you want to use another language you have to convert it to Javascript, and you are obviously limited by the features of Javascript. The goal of WASM is to provide a language similar to assembly (or Java bytecode) that can be understood by the browsers' virtual machines, and so to have a new way to compile languages and use them in the browser. WASM can also be optimized better, having more features than Javascript (for example, Javascript uses only doubles as its number type, while WASM has integers and floats).
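
You can see the JavaScript side of that limitation directly in the console:

    // Every JS number is an IEEE-754 double; integers are only emulated.
    0.1 + 0.2;                                // 0.30000000000000004
    Math.pow(2, 53) + 1 === Math.pow(2, 53);  // true: integer precision ends at 2^53
    (0x7fffffff + 1) | 0;                     // -2147483648: int32 only via the |0 coercion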


WebAssembly is a specification of a small assembly language. This language can call into functions in its host environment. The "web" part is that all evergreen web browsers have implemented this language, so you can use it from inside your browser and pass JavaScript functions into it.

This means that WebAssembly is actually broader than just the web; for example, the Ethereum folks have been discussing using wasm as their language to script their blockchain.

It doesn't currently open up any real new API hooks; it's mostly about being an efficient language for computation. You get near-native performance in the browser. In the future, it may or may not grow more hooks directly into the web platform, rather than needing to call into JS to do so.


WebAssembly is (currently) the same as asm.js, but with smaller file size and faster parse times. Asm.js is just a subset of JavaScript that can be optimized because it doesn't use certain features (like strings or garbage collection).

WebAssembly and asm.js are both intended to be compile targets - you don't write them by hand, but instead write your code in another language (C, Rust, etc.) and then compile to them.
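
For illustration, hand-written asm.js (which, again, you'd normally never write yourself) looks roughly like this:

    function AsmModule(stdlib, foreign, heap) {
      "use asm";             // opts into the optimizable subset
      function add(x, y) {
        x = x | 0;           // parameter types are declared via coercions
        y = y | 0;
        return (x + y) | 0;  // int32 result
      }
      return { add: add };
    }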

The main benefits are 1) JavaScript is no longer the only language of the web, and 2) it's possible to get better performance than JS ever allowed for.

There is a lot of C code out there that probably isn't worth rewriting from scratch in JS, but may well be worth recompiling to run as a web app.

Also, in the future, WebAssembly should get DOM access, optional garbage collection and other features that will allow it to be a compile target for other languages such as Python and Ruby. So then you can use a single language for all of your development without that language being JavaScript.


I can address at least the first misconception here - this isn’t transpilation to JS. This is native code that has browser APIs exposed to it, so in theory there should be massive performance wins.


This is also a misconception; it is just a bytecode. asm.js already runs at around 50% of native speed, so there are no massive performance wins to find. What you get with WebAssembly is reduced startup time, because it doesn't have to be parsed first.

The really exciting thing people should be talking about is not WebAssembly but the general availability of SharedArrayBuffer[0], which finally makes it possible to run "foreign" multi-threaded code efficiently.

[0]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
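
A minimal sketch of what that unlocks ("worker.js" here is a hypothetical worker script):

    // main.js: the buffer is shared with the worker, not copied.
    var sab = new SharedArrayBuffer(1024);  // 1 KiB visible to both threads
    var view = new Int32Array(sab);
    var worker = new Worker('worker.js');
    worker.postMessage(sab);
    Atomics.store(view, 0, 42);             // race-free cross-thread write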


WebAssembly is not worth the effort unless it is finally supported by the LINK tag. Seriously, using JS to load a JS alternative is ridiculous.


Before HTML5 deprecated it, it was assumed that browsers would just supply plugins to support various scripting languages or whatever, which is why the <script> tag had a type attribute.

Unfortunately, integrating a new runtime like <script type="text/lua" src="main.lua" module="lua.wasm"/> will probably never happen. A link tag, to me, specifies a static resource rather than executable code (although since CSS supports animations now that's probably a distinction without a difference.) <object> might be a good candidate but I don't know if it's still supported.

But, we can't get rid of JS altogether. Browsers will have to support javascript indefinitely, otherwise most of the web becomes unreadable. That being the case, using JS to load WebAssembly seems like the most reasonable backwards-compatible compromise available.
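
E.g. the usual feature-detection dance (a sketch; the two loader functions are hypothetical):

    // Serve wasm where available, fall back to the plain JS build otherwise.
    if (typeof WebAssembly === 'object' &&
        typeof WebAssembly.instantiate === 'function') {
      loadWasmBuild();  // hypothetical loader for the .wasm version
    } else {
      loadJsBuild();    // hypothetical asm.js / plain JS fallback
    }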


I consider it to be broken web design if a website requires JS to display text. I could live with a hard cut.


I can't agree more... but I believe the issue is that WASM sold a lie. Right now, if you read between the lines, it is argued that WASM is there just to help JS do more, not to replace it.


> WASM sold a lie

What? The original WebAssembly announcement[1], which can be viewed as the manifesto for how WASM was envisioned, clearly says "once browsers support the WebAssembly syntax natively, JS and wasm can diverge". Eich's goal with WebAssembly is not replacing JS, it's providing a better compilation target for other languages. WebAssembly is a replacement for asm.js, not JavaScript. No one is selling a lie here.

[1]: https://brendaneich.com/2015/06/from-asm-js-to-webassembly/


> WASM is there just to help JS do more not to replace it

That certainly is a shame, but it's not like we have to accept that as the future. HTML is a document markup language, but we've hijacked it to build interactive UIs. It might be that WASM is just a native-ish FFI system for javascript today, but tomorrow it could be something completely different.


I played a bit with WASM and I like its approach, but the one thing that annoys me most with web development is the limited number of client-side languages.

WASM can be great if they allow it to.


Aside: how is WebAssembly's binary form currently shipped? In a demo I saw yesterday it was written out inline as a Uint8Array of integers between 0-255. Even with gzip enabled this can't be very efficient.

For completely unrelated reasons, I was already thinking of adapting (my fork of) the LZ-string library[0] to directly compress/decompress typed arrays to "websafe" UTF16 strings, similar to the already existing `compressToUTF16` functionality in LZ-string. Essentially, the strings would represent LZ-compressed bitstreams, using 15 bits per string character.

Could this be useful for reducing the size of WASM binary when encoded in plain JavaScript? (the minified library would probably be around 1kb after gzip)

[0] https://github.com/JobLeonard/lz-string/tree/array-lookup

[1] http://pieroxy.net/blog/pages/lz-string/index.html#inline_me...


wasm files are already a binary format; an explicit Uint8Array should only be necessary if you want to inline the wasm directly with the JS that instantiates it.


Right, I thought it was odd; I remember reading that part of the big benefit of WASM was more compact, faster-to-parse source code. Going all the way up to text form and back down to binary data is at odds with that.

> an explicit Uint8Array should only only necessary if you want to inline the wasm directly with the JS that instantiates it.

Are there any realistic scenarios where this is the more sensible option?

Anyway, I should have searched MDN first, the relevant bit of documentation is pretty clear:

    fetch('simple.wasm').then(response =>
      response.arrayBuffer()
    ).then(bytes =>
      WebAssembly.instantiate(bytes, importObject)
    ).then(results => {
      results.instance.exports.exported_func();
    });
https://developer.mozilla.org/en-US/docs/WebAssembly/Loading...


I suppose you could avoid having the extra request roundtrip by inlining a Uint8Array (or a base64 encoded string or something), and it also lets you keep the wasm and its associated JS together, which would automatically avoid cache/version mismatches without any extra work, but I would be surprised if that makes sense in any normal scenario.
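
If you did want to inline it, getting back to bytes is a one-liner (a sketch; wasmBase64 and importObject are assumptions):

    // Decode an inlined base64 string back into wasm bytes.
    var bytes = Uint8Array.from(atob(wasmBase64), c => c.charCodeAt(0));
    WebAssembly.instantiate(bytes, importObject);  // same API as the fetched path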


For those in the know, what doors is WebAssembly expected to open? Better cross-platform apps via a browser wrapper? Will new types of browser applications be possible that aren't possible now in the JS-only world? What are they?


Most exciting for me is really that this opens up the web for true cross-platform development. The same code runs natively on iOS, Android, Raspi, Linux, Windows, Mac etc with the smallest possible footprint and highest performance, and at the same time I can take the exactly same code (minus a few hundred lines platform-specific stuff) and have it run in the browser at near-native speed, without installation, code-signing, walled-garden app stores, 'smart-screen' scare dialogs, etc...

You just compile, upload and share the URL. Like in the old days ;)


Do you mean that WebAssembly will be more consistently implemented or that the web will become a generally a more viable platform?


Imagine something like the Java VM, available without the need for 3rd party installations, that interfaces actual HTML (instead of being locked down into a frame), and has no limitation on language support.

That's the plan. That "interfaces actual HTML" thing isn't available yet, but you can already hack around it.


Expect every language out there to have a WebAssembly implementation, including Flash.


Until wasm supports proper tail calls, only the ones with uninteresting control structures.


Couldn't the compiler take care of optimizing those when necessary? After all I don't know of any "real" assembly with tail call support...


Most assembly languages have a "jump" instruction, though -- wasm supports only structured control flow.


It has br and loop, good enough.


I don't think it's sufficient for full-fledged generalized tail call elimination. You can certainly optimize a self-recursive call down to a loop, but consider a case where a function accepts another function as a pointer and invokes that pointer in a tail call position (i.e. anything written in continuation-passing style).
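
The shape of the problem, sketched in JS (wasm faces the same issue):

    // Mutual recursion in tail position: neither call collapses into a
    // self-loop, and the callee may not even be known statically.
    function even(n, k) { return n === 0 ? k(true) : odd(n - 1, k); }
    function odd(n, k) { return n === 0 ? k(false) : even(n - 1, k); }
    even(100000, r => r);  // deep enough to overflow the stack without TCO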


JMP is the original TCO.


Tail calls can easily be replaced by br or loop instructions; nothing special for anyone with compiler design knowledge.


That's true as far as it goes - but wasm lacks general "goto" instructions (across procedures), and loops only work for self-tail-calls.

There's an expressiveness gap between languages with and without proper tail calls. (Unless you're willing to accept the unsafe-for-space solution.)


I think it would still be possible to implement Scheme with the current specification, which is on my snowy weekends to do list for the upcoming Winter.

How far I would get before losing interest or facing those issues, I don't know.


It's possible, yes, but it isn't possible to do particularly efficiently. If you want separate compilation, it gets even worse. (Separate compilation is where proper tail calls come in very handy.)


Ah you got a point there, forgot about separate compilation.


There is a proposal, though it's taken some work due to ABI issues. My understanding is that it's fairly close though.


I'm really looking forward to it.

For others who are curious about it, here it is: https://github.com/WebAssembly/tail-call/blob/master/proposa...


Is there really demand for the resurgence of Flash? In its final years it was used to provide capabilities that the browser couldn't, like video playback. Web Assembly will not provide that same advantage.


Lots of nice games to keep alive.

Video playback can be done via WebGL shaders, just like rendering video in the good old days with SIMD.

And most important, why not?

Someone will eventually do it, there are already JVM and CLR ongoing ports.


There is even an ActionScript to Javascript compiler https://as3js.org/


ActionScript is really the easy part of making a Flash player: the hard part is being bug-compatible with almost every version of Flash ever shipped (the Adobe implementation has lots of branches depending on version of Flash that created the file!).

That, as far as I'm aware, is why Shumway died.


Archival purposes of old flash videos? It would be sad if it became impossible to watch those old pieces of Internet culture unless someone had exported them to an mp4 video file.

Same with games, for which there isn't even an alternative format to save them in.


There is still nothing better for (online cartoon) animation.


I'm excited for all of the ports of C++ projects the web will get.

For example, imagine being able to run a full self-hosted C++ compiler from a browser on any device with a browser.


I don't really understand this. Is a compiler in the browser all that different to a compiler on a server?


yes.

In the browser it can run offline and does not need the server at all. Whether a C++ compiler really makes sense, I doubt, but it will definitely come.


Give me Qt over JS+CSS any time.


Do any mobile browsers support it yet? The article is unclear.



But does a WebView?

I recently discovered that while Safari supports WebRTC, a "WebView" or a safari view that is added to the home screen does not.


WKWebView and SFSafariViewController both support Web Assembly in iOS 11.


It works in Chrome on iOS.


Without DOM it's not really that exciting.


There is a lot of processing that doesn't inherently need the DOM, and there's also server-side stuff as well. I think you'd be surprised!


Yeah, but WASM was supposed to democratise web development, not just do server-side stuff. Without the DOM and Web APIs you can't do much Web related.


> not to do just server-side stuff

He didn't say anything about 'only server side stuff'. There is plenty that has already been demonstrated with webasm - video editing and filtering, audio editing and filtering, advanced computer graphics, direct porting of games, etc.


>> There is plenty that has already been demonstrated with webasm - video editing and filtering, audio editing and filtering, advanced computer graphics, direct porting of games, etc.

Even for that, it doesn't work without some JS. The promise was that you could use your favorite language for web development, and that's not the case. WASM is now more about the integration with JS.


The point also isn't that you don't need any javascript. It seems like people really aren't getting that webasm enables new things to be possible. Using javascript to bridge to certain APIs really hasn't been a problem. WebGL already works well, even with pure javascript. Between JIT compiling and bulk transfer functions, there aren't actually that many calls happening per frame.

> The promise was that you can use your favorite language for web development

I can see where you are confused. The promise was never about using your favorite language for web development. It was about being able to compile languages to a synthetic ISA that will run at near native speeds.

> WASM is now more about the integration with JS.

It was never about getting away from javascript, it was always about being able to run software at near native CPU speeds in the browser.


There's tons of tooling written in JS for JS that doesn't need DOM access to work, for example.


If you're talking about Node, I would say that only proves my point that people go to great lengths to use their preferred language whenever possible. Tell them to use JS only for DOM stuff and wasm for things that don't need the DOM. Nobody would prefer an FFI between wasm and js over plain wasm if given the option to choose.


How do I compile/target stuff for WebAssembly and integrate it with the "usual web"? (I need to use some advanced math & real-time WebGL that is too slow in JS.) Is there any good tutorial? Thanks!


Currently, the best supported languages are C, C++, and Rust. Languages that have runtimes require their runtimes to also be compiled in, and so are much heavier weight.

For C and C++, you have emscripten. For Rust, we have an emscripten-based toolchain that works today, but have a PR open for using LLVM's built-in wasm backend.

For emscripten: https://kripken.github.io/emscripten-site/docs/#

For Rust: https://hackernoon.com/compiling-rust-to-webassembly-guide-4... (slightly older but still should work afaik) And the new backend https://github.com/rust-lang/rust/pull/45905


The "awesome wasm" link collection has many good starting points: https://github.com/mbasso/awesome-wasm


All the excitement around wasm seems quite puzzling to me. Yes, I understand that you can program in whatever language you love, but what new language are you hoping will be supported? C? C++? Fortran?

As far as I am concerned, all the languages I care about already have compilers that target JS.


Rust is the big one for me, personally.

The idea of being able to write in one language for embedded software, system software, CLI tools, web applications, distributed applications, and now the web client?

That's a very enticing idea. It might finally be the realization of what Java failed to accomplish (technically it did accomplish it, but it never gained a lot of traction on the client because it was too slow at the time).


There is a massive performance difference between compiling to js and compiling to wasm.


JS imposes a memory/object model that doesn't map to other languages particularly well, so you either have to play fast and loose with semantics (effectively creating a dialect of the language that you're transpiling), or pay extra overhead for faithfully emulating the original behavior.

WASM is much lower level, and should allow compilers and VMs to use best-fitting data structures and layout.
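
Concretely, compilers targeting plain JS end up emulating flat memory on top of typed arrays (a sketch below), which is exactly what wasm's linear memory provides natively:

    // Emulating struct { int32 x; int32 y; } in a JS "heap".
    var heap = new Int32Array(65536);
    function writePoint(addr, x, y) { heap[addr] = x; heap[addr + 1] = y; }
    function pointX(addr) { return heap[addr]; }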


Another Rustacean checking in. I'd love to write Rust and run it in the browser. Rust uses LLVM so it can use emscripten already, but having closer to native speeds would be great.


„...all major browsers“

Is Internet Explorer not a major browser anymore? I’m talking relevant as in your webpage needs to work fine in Internet Explorer.

For example, my company still has Internet Explorer configured as default browser as Edge is not compatible with some internal pages.


IE is not a major browser anymore

http://gs.statcounter.com/

Look at your Google Analytics, it's really not that important anymore.


I would have agreed until looking at desktop browsers in the US[1] which would make Firefox not a major browser, too.

    Chrome   56.69%
    IE       12.69%
    Firefox  11.14%
    Safari    9.84%
    Edge      7.86%
[1] http://gs.statcounter.com/browser-market-share/desktop/unite...


In corporate env it is.


Either the computers in these mysterious corporate environments are exposed to the internet, in which case they'd be reflected in these public stats collected by them visiting public websites from IE, or they aren't, in which case they don't matter in the slightest because they're not going to visit webpages using WebAssembly.


IE can account for a fraction of total traffic, but it still can affect a large percentage of users. The same person can use several devices (work, home, mobile) to browse the web at different times. I can't count how many times I saw some page at work only to send myself the link to read it later. I'm not forced to use IE anywhere, but if I were, losing that initial visit would also lower the number of visits from other browsers.


Or, as is absolutely the case, because of the aggressive blocking and monitoring policies and restrictive usage policies in place in the kind of enterprises where IE remains common, they are exposed to the public internet and use public webpages, but with usage patterns which leave them underrepresented on the kind of sites that use third-party stat counters. OTOH, for certain other public websites, they are very large numbers of users.


It's important because of internal websites stuck in the past. I doubt you'll ever need web assembly support for those.


Yeah, you could just run IE6 inside Chrome/Firefox via wasm :)


We had a large customer that for years wouldn't drop a requirement for IE6. It made for horrendous backward-compatibility requirements in code. Personally, I hope to never be in that situation again.

I anticipate that IE will never gain support for WebAssembly, and your company will eventually change its default.


I don't think internal corporate pages count as supporting a major browser. ;)

Most development I see nowadays supports only IE11 and is about to drop that one too in favor of Edge. IE is on its way to deprecation right now.


That doesn't mean IE is a major browser, just like Windows XP is not a major OS simply because some old hospital equipment and ATMs still rely on it.


A few years ago, we started thinking "maybe if we ignore IE long enough, it will go away?" Looks like we aren't quite there yet...

But in the meantime, it's very helpful to encourage companies to base their browsers on standards, and move away from outdated software -- at least as a security precaution.


Ok, now we need GC and DOM interop. Because reinventing layouts is a pain.


Cheerp is our solution to emit mixed mode WebAssembly/JavaScript from the same C++ codebase. It generates bridges to JS code transparently to work around the interoperability limitations of WebAssembly. https://github.com/leaningtech/cheerp-meta/wiki/Cheerp-Tutor...


In order to port other languages (Haskell, Go, Python) to target WASM, we definitely need the possibility of writing a GC in WASM. But to build an efficient GC, we need multithreading and synchronization/memory barrier primitives.



Good to see. By the way, we also need multithreading and shared memory to efficiently support structurally shared immutable data structures across multiple cores.


Shared memory is part of the threading tracking issue.


You can already compile your language's runtime to wasm; this includes the GC. The downside is that now you're adding all that to your binary size.

"GC support" means being able to integrate with JS's GC, basically, so that your language's wasm runtime could use it instead of your own, saving bytes.


> The downsides is that now you're adding all that to your binary size.

The GHC runtime (which is huge) isn't much larger than a web font. It is not much outside the range of the JS libraries out there.


Saving bytes and improving interop, I imagine.


Multithreading brings in all kinds of security issues


But without multithreading, you can only use 1/4th or 1/8th of the computation power available on your average consumer device.

What kind of issues are you thinking about? Multithreading is not easy, but your typical race conditions and multithreading problems aren't really opening doors to any new security exploits.

Browsers and WASM have quite decent sandboxing to mitigate security issues. There's WebWorkers already which brings a limited form of threads to the browser.

I'm not seeing a problem with multiple threads in a browser.
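
To illustrate the "limited" part: today's workers share nothing by default, since plain values posted between threads are structured-clone copies (crunch.js is a made-up worker script):

  const w = new Worker('crunch.js');
  w.onmessage = (e) => console.log('result:', e.data);
  w.postMessage({ numbers: [1, 2, 3] }); // copied, not shared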


What kinds of security issues? I really can't think of any besides some issues with too many threads spawned (that can be trivially blocked).


Could be a new source of undefined behavior and memory exploits, if threaded code is allowed to access collections that allocate memory without taking all the locks they should?


Inside the VM, yes. But not a security threat for the host.

The VM should be sandboxing the pages from each other, so those are potential security threats against targets on the same page only. Not something to be too concerned about.


Help me understand: What's the difference between that and process separation?


even in the very sandboxed context of WebAssembly?


IIRC you can do GC within WebAssembly (though it's less nice and more awkward, as I've been told).

But DOM interop is needed. Can't wait to get my fingers on the DOM via native C code :)


It doesn't sound like that will be possible. DOM interop is baked into the Garbage Collection task[0]. It sounds like there's a dependency there, somewhere.

[0]: https://github.com/WebAssembly/design/issues/1079

edit: Scratch all of that above. I stand corrected. Here's the DOM proposal: https://github.com/WebAssembly/host-bindings/


Historically, people have thought that the GC and DOM stuff is intertwined, but the latest DOM proposal doesn't require the GC proposal.


Do you have a link to the separated proposals? It certainly seems entwined on the official roadmap.



Awesome. Thanks!


> Because reinventing layouts is a pain.

You could simply compile (relevant parts of) Firefox or Chromium to WASM :)


I wish 'lol' were an appropriate response on Hacker News


Can't wait for a port of Angular2+ to C#.


Soon we will all have JDK, .NET runtime and WAsm runtime running on our machines.


Based on the unbridled capabilities, I can't wait for a way to block WASM. Does NoScript do this yet?


What "unbridled capabilities" are you referring to?


Any news on access to DOM progress?


I commented elsewhere in the thread about the latest proposal.


Who wants to write a faster CryptoNight miner in wasm?


Did they improve external stack trace support? The last time I checked, debugging was impossible due to bad stack traces.


My understanding is that the current debugging tools can show you the text format of wasm, but showing the original code you compiled to wasm doesn't work quite yet. It's under discussion.


Nice advertisement


Great. We will soon have an entire JVM running inside each browser's tab, just to run an animated slider. Maybe even more than one. Plus a .NET VM, several Python and Ruby interpreters, and so on.

Because webmasters will keep loading ready-made plugins from CDNs, exactly like they are doing now, except that the new generation of web software will carry its own interpreter or runtime with it, because the developer wanted to use Python or whatever.

The web is getting better by the day.


I'd like to interject for a moment here, and pitch my idea about what I call the "Web 4.0". It's basically a binary equivalent of HTML, without the many inconveniences of HTML for content providers.

Web 4.0 pages are obfuscated WebAssembly payloads that draw their contents via WebGL. This way, adblocking becomes impossible, as the actual content and ads are within the same opaque WebGL framebuffer. Stealing copyrighted text is also made significantly harder, because the Web 4.0 platform leaves it up to the publisher to enable glyph selection and copying. You have concerns about accessibility? Web 4.0 is 100% accessible thanks to WebAudio-based realtime speech synthesis!


It'll also be harder to learn how things work... I wonder how much we've learned from reverse engineering the easy-to-read inline script tags... I definitely remember teaching myself JavaScript by reading blizzard.com's late-90s markup... figuring out that mouseover effect with onmouseover and onmouseout... works in IE4... but what's that strange bug in Netscape 4..
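
For nostalgia's sake, a sketch of the kind of view-source lesson I mean (the image names are made up):

  <!-- the whole "effect" was right there in the markup -->
  <img src="btn.gif"
       onmouseover="this.src='btn_hover.gif'"
       onmouseout="this.src='btn.gif'">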


You can't really do that today anymore though


> adblocking becomes impossible

No, it'd just take a sufficiently smart AI.


Just like the good old days with sites built in Flash :)


wasm is modular, so in theory, you could produce a jvm9.wasm file, put it on a CDN with caching, and every site that wanted to do that could share it, and you'd only download it once.
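
Roughly like this, with plain HTTP caching doing the de-duplication (jvm9.wasm, the CDN URL, and the main export are all hypothetical):

  // Every site referencing this URL shares one cached download.
  fetch('https://cdn.example.com/jvm9.wasm')
    .then((res) => res.arrayBuffer())
    .then((bytes) => WebAssembly.instantiate(bytes, { /* imports */ }))
    .then(({ instance }) => instance.exports.main());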

I do agree that runtime-less languages have an advantage here though.


We're still a long ways off from garbage-collected languages inside wasm.

That said, it's not the technology's fault that people abuse or otherwise make poor use of it. I see no point in limiting it on that basis.


> That said, it's not the technology's fault that people abuse or otherwise make poor use of it.

I felt like I should respond to this, too.

It's not, but it is not unreasonable to object to implementing new ways for browsers to hog resources pretty much at the hosts' discretion. In this case it can be argued that it's poor use of technology to implement it in the browser.

On a similar basis, one might argue that it's poor use of technology to kill people with dynamite, but it is also reasonable to take a step back and argue that it is poor use of technology already at the point at which you gave a toddler the detonator. Regardless, dynamite is an excellent and useful technology.


I understand your point, and admittedly I have many of the same concerns.

That said, some of us have legitimate uses for said dynamite, pyromaniac infants be damned.


> We're still a long ways off from garbage-collected languages inside wasm.

I'm not sure what you mean by that. Is there some limitation inherent to WebAssembly that makes implementing garbage collection particularly difficult?

For what it's worth, here's Lua (which implements a garbage collector in its runtime) in WebAssembly: https://github.com/vvanders/wasm_lua


I'm impressed. It seems to be in total working order when trying out collectgarbage, __gc and __mode[0].

Code:

  print(_VERSION)
  local a = setmetatable({}, {__mode = 'v'})
  local b = setmetatable({}, {__gc = function(x) print("Deleting: " .. tostring(x)) end})
  a[1] = b
  a[2] = 2
  print(table.unpack(a))
  print(collectgarbage'count')
  b = nil
  collectgarbage()
  print(table.unpack(a))
Result:

  Lua 5.3
  table: 0x50a748 2
  21.4033203125
  Deleting: table: 0x50a748
  nil 2
[0] - Three features tied to the Lua GC. To simplify, they are (respectively): forcefully running the GC, finalizers that run when an object is collected, and weak refs that don't keep an object alive on their own.

Perhaps http://www.lua.org/demo.html could be replaced with that for users who have JS enabled.

Again - just wow, that's really nice, Lua in a browser.


>I'm not sure what you mean by that. Is there some limitation inherent to WebAssembly that makes implementing garbage collection particularly difficult?

Yes, in that you'd have to implement the GC inside the wasm module itself (as in your example). In contrast to other languages, Lua is very lightweight. Many GCs are non-trivial.

Moreover, proper GC support is a piece of the puzzle for the wasm/JS/DOM interop story to get better.

See: https://github.com/WebAssembly/design/issues/1079


> Yes, in that you'd have to implement the GC inside the wasm module itself (as in your example).

That is essentially the same problem people implementing garbage collectors on real machines are facing. You have some memory and you have to implement the algorithms and data structures necessary to allocate and free it as necessary.

> Lua is very lightweight. Many GCs are non-trivial.

Yes, but don't confuse the problem of implementing a garbage collector with the problem of implementing a system that can support a garbage collector. The former may be non-trivial, while the latter only supposes an architecture where you can arbitrarily manage memory "manually". From what I understand, you just hand WASM an array of memory and it can do whatever it wants with it, since it's a linear bounded automaton.
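
The embedder's view of that model really is that simple; a minimal sketch using the standard WebAssembly JS API:

  // wasm memory is one flat, growable byte array ("linear memory");
  // a compiled runtime implements malloc/free/GC as bookkeeping over it.
  const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
  const heap = new Uint8Array(memory.buffer);
  heap[0] = 42;   // any offset can be read or written "manually"
  memory.grow(1); // request another 64 KiB page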

> Moreover, proper GC support is a piece of the puzzle for the wasm/JS/DOM interop story to get better.

The link seems to be addressing the use of VM-managed GC objects within WASM programs. That would certainly be a nice feature, especially when it comes to interoperability with JS, but the lack of it is not a showstopper for implementing your own garbage collector as it has always been done.


>Yes, but don't confuse the problem of implementing a garbage collector with the problem of implementing a system that can support a garbage collector. The former may be non-trivial, ...

In most cases it isn't trivial, and that's further compounded by the fact that the GC currently has to be part of the module payload.

If we're talking strictly implementation difficulty, then interop with VM-managed GC objects may in fact be more difficult; I'm certainly not an expert so I couldn't tell you.

>... but the lack of it is not a showstopper for implementing your own garbage collector as it has always been done.

Per the very first sentence of mine you quoted, I never said it was. It certainly is likely to be far from practical, however.

Also, I suggest reading the sibling comment that Steve Klabnik replied with earlier. The wasm host bindings proposal is now separate from the GC proposal.


> In most cases it isn't trivial, and that's further compounded by the fact that the GC currently has to be part of the module payload.

My point here is that it is beside the point whether it's trivial or non-trivial to implement a garbage collector. With regards to it having to be part of the module payload, that's how garbage collectors are typically implemented today, meaning it's very practical in the sense that you can just re-target that portion of your language runtime implementation like any other part without making the web a special case. Just target the new instruction set architecture and you have your garbage collector exactly as intended, tuned for the use case you designed it for.

> If we're talking strictly implementation difficulty, then interop with VM-managed GC objects may in fact be more difficult; I'm certainly not an expert so I couldn't tell you.

It may be more difficult, but in the end also an entirely different problem. If you have implemented a garbage collecting language runtime for your language, you've already solved the problem of garbage collection.

> Per the very first sentence of mine you quoted, I never said it was. It certainly is likely to be far from practical, however.

My question from the start was about how exactly it is impractical. Your reply focusing on the hardships of implementing a garbage collector and using the VM GC is an interesting side note, and I appreciate the response and discussion, but it ultimately doesn't answer the question. The evidence to my point is a fully functioning garbage collected language runtime compiled seemingly without modification (it's written in ANSI C, after all) for the WebAssembly platform. You can do that now, without additional hurdles and without considering the garbage collection semantics of the browser runtime. That seems very practical to me.

It is also very practical to be able to use objects allocated and managed by the browser instead of a contiguous array of memory, but that doesn't somehow make the obvious approach less practical.


>My point here is that it is beside the point whether it's trivial or non-trivial to implement a garbage collector.

>My question from the start was about how exactly it is impractical. Your reply focusing on the hardships of implementing a garbage collector and using the VM GC is an interesting side note, and I appreciate the response and discussion, but it ultimately doesn't answer the question.

The hardships of the implementation are central to why it's impractical. That said, I think we may be using different definitions of the word impractical here.[0] I'm talking strictly in a general sense, that packaging an entire language's runtime with code is—for most uses right now—not sensible.

You are correct in that it's entirely possible to use GCed languages inside wasm, and indeed it has been done—I'm not disputing that.

I also understand your point about the multi-platform advantages of keeping the GC portion of the runtime inside of wasm. Funny enough that may turn on its head in the distant future once GC support lands, assuming wasm ends up being a popular compilation target outside of the web. In that case you'd have wasm-managed GC objects on desktop and mobile.

>The evidence to my point is a fully functioning garbage collected language runtime compiled seemingly without modification (it's written in ANSI C, after all) for the WebAssembly platform. You can do that now, without additional hurdles and without considering the garbage collection semantics of the browser runtime. That seems very practical to me.

Lua is very lightweight. The story changes if you try to compile the .NET, JVM, Go or even V8 runtimes into wasm. All possible, but a hell of a lot more difficult.

One case study is Blazor.[1] The FAQ in its own readme states that it's not practical currently due to binary sizes, albeit not for GC-related reasons. But it will be practical. That was the gist of what I was trying to say, that GCed languages (i.e. ones that include their own runtime) are currently not terribly practical relative to the compiled alternatives, most of which are production-ready now.

Of course, having seriously evaluated Unreal Engine 4 for web use with its 50MB runtime, I view even Blazor's current 4MB binary size as quite practical, but it depends on the use case. For general web use, 4MB isn't practical.

In support of your point, I concede that whether the GC resides in the module or VM is largely immaterial to practicability as a whole. My point was that packaging the runtime with the payload is usually not sensible relative to the alternatives. I regret that I incorrectly generalized my original statement in terms of garbage collection rather than runtime overhead.

[0] https://en.oxforddictionaries.com/definition/impractical

[1] https://github.com/SteveSanderson/Blazor


> The hardships of the implementation are central to why it's impractical.

That's a very general statement. There are many hurdles that will make porting some things to WebAssembly a pain, but I don't think GC in particular is one of them.

> That said, I think we may be using different definitions of the word impractical here.[0] I'm talking strictly in a general sense, that packaging an entire language's runtime with code is—for most uses right now—not sensible.

I don't think we have a different idea of what impractical means. I am not talking so much about porting an entire language run-time for a language with a very system dependent run-time, and I am not talking about useful libraries of these languages, I am talking about garbage collection.

> I also understand your point about the multi-platform advantages of keeping the GC portion of the runtime inside of wasm. Funny enough that may turn on its head in the distant future once GC support lands, assuming wasm ends up being a popular compilation target outside of the web. In that case you'd have wasm-managed GC objects on desktop and mobile.

I doubt that using a single opaque garbage collector is a good option for many languages. It's interesting and would certainly make the implementation of new languages easier, but a garbage collector that performs well will probably always be quite language dependent. I also don't see the advantage of WebAssembly as a compiler target in general. We already have LLVM IR and the tooling that makes it a (relative) breeze to work with independent of the target.

> Lua is very lightweight. The story changes if you try to compile the .NET, JVM, Go or even V8 runtimes into wasm. All possible, but a hell of a lot more difficult.

Don't ignore that it's a very broadly applicable and quite popular language. There are plenty of languages not particularly dependent on large runtimes that use garbage collectors. It is not fair to conclude that a language with a garbage collector necessarily has a very complex runtime when implementations of the concept have existed since the late 50s. CLR (I assume that's what you mean by .NET), JVM and V8 are all JIT compilers. Porting that likely dwarfs the complexity of porting their garbage collectors. As for Go, I'm not sure why porting its run-time would be that much of a hassle. For JVM, there are already plenty of relatively tiny implementations.

> One case study is Blazor.[1] The FAQ in its own readme states that it's not practical currently due to binary sizes, albeit not for GC-related reasons. But it will be practical. That was the gist of what I was trying to say, that GCed languages (i.e. ones that include their own runtime) are currently not terribly practical relative to the compiled alternatives, most of which are production-ready now.

So it's impractical in some cases because of large binaries, not for any of the reasons you've pointed out so far. Giving WebAssembly programs a way to integrate with the VM GC will change this how, exactly? In Blazor's case the large file size can likely be attributed to things not strictly related to the run-time, like the core libraries.

> Of course, having seriously evaluated Unreal Engine 4 for web use with its 50MB runtime, I view even Blazor's current 4MB binary size as quite practical, but it depends on the use case. For general web use, 4MB isn't practical.

What is general web use, and how does it relate to WebAssembly? By jamming this technology into a browser in the first place, we've already conceded that the web is a generic application platform for which using plain documents or simple JavaScript is sometimes unsuitable. I don't necessarily agree that it should be, but here we are, and there are plenty of uses, given those terms, where large binary sizes may be justifiable. A large download may be worthwhile if it ends up in my browser cache and I am likely to use the application often. That doesn't mean it's suitable for serving ads in novel ways or implementing trivial web applications that you can easily implement in JS.

> In support of your point, I concede that whether the GC resides in the module or VM is largely immaterial to practicability as a whole. My point was that packaging the runtime with the payload is usually not sensible relative to the alternatives.

What are the alternatives? As far as I am concerned, it's either using regular non-web applications or using plain JS. If that's what you mean, I agree that those alternatives are better in a large majority of cases. The best case scenario is that WebAssembly will end up being "here's something closely resembling the CPU, here's some memory, here's some way to access and manage VM objects, and here's a way for JS to call your code". That's not going to make it much easier to port a language run-time or its libraries than it is for any other platform. It will make it easier to write code that interacts with the existing browser environment.


>I am not talking so much about porting an entire language run-time for a language with a very system dependent run-time, and I am not talking about useful libraries of these languages, I am talking about garbage collection.

I'm talking about the former as pretty much the entirety of my previous comment tried to convey.

>I also don't see the advantage of WebAssembly as a compiler target in general.

The advantage is it would be a single target.

>So it's impractical in some cases because of large binaries, not for any of the reasons you've pointed out so far. Giving WebAssembly programs a way to integrate with the VM GC will change this how, exactly?

It's impractical primarily because it's not production-ready. In the interests of pedantry however, reliance on a VM GC would likely result in a somewhat smaller binary size.

>In Blazor's case the large file size can likely be attributed to things not strictly related to the run-time, like the core libraries.

Per its readme: "This is because Mono on WASM doesn't yet have any IL stripping or minification, and bundles a large runtime that includes many desktop-related features that are irrelevant on the web."

>What is general web use, and how does it relate to WebAssembly?

Most people use the internet to visit websites. Most major websites optimize their page loads as much as possible to fit as many ads in as they can without completely tanking the user experience. Using the previous example, a 4MB binary doesn't fit that use case. It would be more the realm of specialized applications.

>What are the alternatives?

Javascript ecosystem or production-ready compiled languages that target wasm.


> We already have LLVM IR and the tooling that makes it a (relative) breeze to work with independent of the target.

http://webassembly.org/docs/faq/#why-not-just-use-llvm-bitco...


> Moreover, proper GC support is a piece of the puzzle for the wasm/JS/DOM interop story to get better.

I commented with a link elsewhere, but it seems that people have found a way around that!


Relevant discussions in case anyone's interested:

https://news.ycombinator.com/item?id=15694292

https://news.ycombinator.com/item?id=15694289

I didn't see those, thanks. That's good news. Hopefully host bindings land sooner now than they otherwise would have if tied to the GC proposal.

The wasm32-unknown-unknown LLVM backend target for Rust is also quite exciting. I say this after having just spent a minor eternity compiling the Emscripten toolchain. :)


And to have different Pythons running together, we need Docker with Ubuntu.


I hope to have at least 128GB of ram in my phone by that time :)


Then there should be a huge market opportunity to provide identical services with far smaller and faster downloads.


Can't wait for DOM interop and to get away from JS on frontend. Seriously JS should not be the lingua franca of the web :)


Currently DOM interop is a distant wishlist item, so I wouldn't hold my breath. JavaScript isn't a perfect language, but it does have native lexical scope, which makes it a good fit for the architecture of web technologies.

> Seriously JS should not be the lingua franca of the web

What would you recommend as a replacement? When I see this, it is typically from people who can't figure out JavaScript, as opposed to any rational technological reasoning. I cringe at the idea that people would offer forth really bad things that don't fit well merely to satisfy their own insecurities.


If you have the ability to compile <insert your favorite language> to WASM, and a little glue in the form of an isomorphic application framework (something to glue together server-side with client-side processes), I'd say it's quite a compelling alternative to JS.

If you look at all the large Javascript frameworks, the increase in hardcore software engineering approaches taken to the front-end are an interesting signal. We have MVC in the front-end, we have MVC in the backend. Why have two MVCs if you can write an isomorphic application and have one?

I'm thinking, for example, something like this... write C#, use a framework (fictional "Isomorph#") that can determine whether code should run on the client or server, and if you have interfaces to UI, you have the full solution. C# (client/server) + WASM/JS + remoting/web services + HTML/CSS/SVG/etc UI = real "apps".

And that could just as easily have been Python (client/server) + WASM/JS and the rest of the stack, or Go + WASM/JS... etc. I was shocked when I learned of Skulpt.org, which is a Python interpreter running in the browser - so we can't be far off from the same thing running on WebAssembly.


> If you look at all the large Javascript frameworks, the increase in hardcore software engineering approaches taken to the front-end are an interesting signal. We have MVC in the front-end, we have MVC in the backend. Why have two MVCs if you can write an isomorphic application and have one?

That is not necessarily a language oriented concept. JavaScript can run on the front-end and back-end having an MVC architecture at both points. A better question is why a developer should wish to impose MVC at all at either point?

These concepts become easier to reason about when the software runs at a single location using an HTTP service (on localhost) to talk amongst its clusters. It is just a multiprocess application with the bits running asynchronously from each other.

When one (or more) ends of the environment are distant factors change drastically. First, you don't control the remote environment. You are completely at the mercy of what the user is willing to execute and they may modify your application in ways you do not anticipate. Secondly, there is a delay between distant ends.

Regardless of the application or language in question the environmental and security considerations at hand will continue to impose a separated JavaScript-like web application.


JS is not particularly accessible to beginners and makes it easy to write bad/dangerous code. A good programmer can write in anything, but it's the not-so-good programmers that you have to worry about.


Agreed.

To be a strong candidate for a web scripting language you need three things:

* native lexical scope. This is how all web technologies work.

* Minimal reliance on white space as syntax. The language needs to be portable. Overloading white space as syntax makes a language more brittle, particularly in distribution.

* Immediate interpretation without a prior static compile step. This keeps the code open to examination and minimizes compile time delays.

Find another language that doesn't have all the stupid crap that JavaScript has and yet still excels in those three points and I will agree upon a replacement.


While I don't agree with your second point, which seems to be there purely to exclude Python, I'll run with it so as not to devolve into religious war territory.

How about Lua?


A line terminator on *nix isn't a line terminator on Windows, which is a big deal if line termination ends a statement, and that assumes the file isn't modified in transit.


Yet somehow, Python doesn't have this problem - it simply treats both types of line terminators as valid. So do all other modern languages that rely on whitespace.


And semicolons are optional in JavaScript, which makes it whitespace-sensitive, I suppose.


Yes and no.

The language specification requires the use of semicolons. If a semicolon is not supplied, the interpreter will insert it for you. That magical insertion step is referred to as ASI. I don't remember if ASI is mentioned in the spec (as I don't think it is), but at the very least it is a de facto standard, as removing ASI would break the web.

As stupid as all that sounds... semicolons are actually required to terminate statements in JavaScript.
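
The canonical demonstration, runnable in any JS console:

  // ASI inserts a semicolon right after `return` because of the
  // line break, so this returns undefined, not the object literal.
  function broken() {
    return
      { answer: 42 };
  }
  console.log(broken()); // undefined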


And yet somehow the web doesn't break when one programmer uses \r\n while another uses \n. What was your original point about line terminators in this context?


I suppose that goes to how the language defines line termination and whether that language is OS aware. JS is completely independent of its execution context. Python may achieve this as well, and if so, then I am wrong in my thoughts about Python.

Here is how JavaScript does this:

* https://www.ecma-international.org/ecma-262/#sec-source-text

* https://www.ecma-international.org/ecma-262/#sec-line-termin...

ASI is defined in the spec here: https://www.ecma-international.org/ecma-262/#sec-automatic-s...


> JS is not particularly accessible to beginners and makes it easy to write bad/dangerous code

That describes just about every programming language ever.


Alternate universe: JVM was more secure and had better DOM interaction, so Applets won against JavaScript.

The world goes around in circles.


IMO we would still need WebAssembly in such a world, because the JVM still enforces a heavyweight object model and GC.

The "wasm is just the JVM all over again" meme needs to die.


> get away from JS on frontend

Yes, it's just about time for another generation to be old enough to think that we need to rewrite everything in a new language.


But now we get to rewrite everything in every language. Everybody wins!


When DOM access comes, all my js will hit the round file.


As much as I dislike JS, my experiences with React Native (JS but largely native UI elements) and Electron (JS with some native elements, but largely HTML+CSS) have convinced me that HTML+CSS is my real (performance and battery life) enemy, with JS a significant but distant second. I look forward to better UI kits that just draw to a Canvas-like surface or something, ideally without needing an HTML+CSS engine at all.

I'm not thrilled with that because I really like the old-school lightly-styled-HTML web, but I have to admit it's really really bad at "app" style layouts and interactivity, so if we must do that with the web (and apparently we must) it's gotta go, or it's gonna continue to suck.


What do you consider a better alternative? A lot of people love to hate on HTML+CSS, but it's actually fairly powerful. I've been trying out a lot of native UI toolkits, and they're all as quirky and confusing as web tech. I'd agree that there's tons of room for improvement, but any UI toolkit will need to make certain tradeoffs and deal with complex layouts and compositions.

Flexbox and grid help a lot with app-style layout.


I dunno... React and React Native are nice. I would prefer to use those for UI but do everything else in C++14 or some other real language. The web stuff could just be the view.


Sure. But it would be nice to be able to cheese between go and python for the UI, as well.


I always cheese my UI. Mobile is the foot rub.


>> Seriously JS should not be the lingua franca of the web

But neither should this. Really, keep your code off my computer as much as possible.


But what is a computer for if not running other people's code?

I can't imagine there are many people in this world who have a computer with more than even 1% their own code running on it.


Red herring. People install or explicitly download code they want to run. The general public doesn't even realize that most web pages are full of code (JS, not HTML), and most developers have gotten so used to it they feel entitled. The goal should be to try harder to use less code, not enable native execution. I understand, engineers like to think about what is possible (wouldn't it be cool if..!), but sometimes they need to check their ego and consider context.


99%+ of people don't think like you and don't care about the distinction between code on the browser and code on the OS.

"A red herring is something that misleads or distracts from a relevant or important issue. It may be either a logical fallacy or a literary device that leads readers or audiences towards a false conclusion."

I don't see how my comment is a red herring. Computers are for running code. That's what they do. You said you don't want other people's code running on your computer. Bad news for you, that's all your computer does.


I definitely don't disagree with your statements, but I think OP meant something more akin to "the web is not for running other people's code". I.e., the web is for content, not applications.

This may not be what OP meant.. but it's something I can agree with to a degree. I love web apps, don't get me wrong, but I wish we had meaningful fallbacks for those who want content without features.


> I.e., the web is for content, not applications.

Ideally yes, but that horse left the barn years ago. The browser becomes more capable, chock-full of more APIs over time, not less.


The more capable the browser gets the more people will turn off these capabilities. I love JS and the web, yet I have JS off by default! Mostly because it gets rid of a lot of crap, but I also don't trust every single web site I visit.


Computers are for running code as much as cars are for consuming fuel. They both need them to function in the manner that we currently use them, but that's not what they are for.

Both computers and cars exist to perform tasks that enable humans to do other things. It's possible to have a useful car that does not consume fuel and it's possible to have a useful computer that runs no code (hint: hardware's not just for mounting those pretty lights in your computer case).


Web pages have included code in them since 1996. Less than 1% of browser users have ever experienced an Internet without code included in the pages they visit (or run NoScript).

While there is issue with how ill informed the general public is in general on how web browsers / http / html operate in the first place, there is no disinformation campaign here - users don't think the browser does or doesn't run code because they don't understand how any of the system they are interacting with operates at all. The average extent of knowledge when it comes to computers is that the Chrome has the Facebook and you need the Wifi logo lit up for it to work, and even that last one is often way beyond the knowledge scope of your average Internet user.


> People install or explicitly download code they want to run

Not really; explicit installation is not the real difference: you ask for one package in some package manager, you will commonly get a bunch of other dependencies installed implicitly, and you will likely never be able to feasibly audit all those implicitly installed packages personally if we are talking about e.g. OS repositories...

The real differences are:

1. Trust of the authority that maintains a collection of repositories.

2. Execution permissions.

I.e., the package manager for your OS can install code that can run with root privileges if it wants, but you have trust in the authority that maintains the package lists. With the web there isn't any curation of package lists, but the code is sandboxed.


The last time that a package I installed proceeded to install code that the vendor didn't know about was never.

The last time a web page caused my browser to download and run js that the page owner didn't know about was five minutes ago.


So every package author understands each of their dependencies and all of their respective sub-dependencies, recursively on down?

This is probably the best bit of programming humor I've read all morning.


>So every package author understands each of their dependencies and all of their respective sub-dependencies, recursively on down?

Have they personally audited every dependency? Probably not. Is the list of dependencies known? Yes. Is the list fixed? Yes.

On the webpage side:

Does the content provider know what will be served by their ad network? No. Does the ad network provided content change? Yes, constantly. Does the content provider even know who ultimately will be putting crap on their web page via the ads? No.


> Does the ad network provided content change? Yes, constantly. Does the content provider even know who ultimately will be putting crap on their web page via the ads? No.

Whoa, hold on a sec... code inside a browser != ad network. When people insert ads into programs outside of web browsers, you will have the same issue, only potentially worse, because you won't know if they properly sandboxed them.


:D ... obligatory xkcd https://xkcd.com/797/


> The goal should be to try harder to use less code

The goal is to deliver content and experiences that people actually want, it has nothing to do with the amount of code at all.

At best, using less code might be a performance optimisation (though not always).


>The goal is to deliver content and experiences that people actually want

Wrong. 99% of the time the goal is to sell ads. The tricky stuff is to bundle it with something that people actually want (or think they want).


So we're both agreed then that the goal isn't to "use less code"? I get your point though, so I'll rephrase:

> The goal should be to try harder to use less code

Why is this the goal? At best it's a performance optimisation that some (probably most) sites could use to speed up time to render. But at the same time, there are many other sites where this doesn't make sense, or for which the purpose isn't the traditional document-based web that is sped up by removing JS. Why should sites that actually benefit from something like WebAssembly be limited just because other sites will use it and (continue to) be slow, bug-ridden monsters?


Is it relevant what people are aware of? If you drive a car do you need to be aware of its internals?

WebAssembly looks like a safe, portable, intermediate bytecode. What's wrong with giving people more options (e.g. running native applications in their browsers)?


> If you drive a car do you need to be aware of its internals?

You walked yourself into an obvious answer there: yes, to a limited extent you do. That includes understanding a bare minimum about tires; the engine (what noises are and aren't normal); oil changes (why and when); windshield washer fluid; basic indications of electrical problems; when brakes need changing (signs of brakes going bad, and why it matters); the parking brake (and why not to drive with it engaged); the usefulness of different types of tires (for example, snow tires); why you shouldn't needlessly over-rev an engine (or do stupid things like over-rev it for an hour while parked); how turn signals work, and the need to make sure the lights on your vehicle are functioning; how high beams work (or at least how to use them); and how gear shifting works, and why it's important not to thrash your transmission (what abnormal shifting sounds like, and why that matters). The list keeps going, and I've really only covered primitive things that most or all drivers should know within their first year or so of driving.


Continuing the comparison, that means people need to understand not to cold-power-off the computer, not the specifics of CSS, JS, and HTML. Note that cars are also going to be more and more abstracted away with automatic gear shifting (now I don't need to understand how gear shifting works). Do people really need to understand how CPUs work to do their banking?


Most drivers don't concern themselves with the alloy composition of their catalytic converter, or the exact air-to-fuel ratio in their cylinders, or the pressure needed in the brake lines during an emergency stop.

Or the composition of the road below them in exact ratios of chemical components.


> "Red herring. People install or explicitly download code they want to run."

When you visit a website, and you have given your browser permission to run JS, you're giving your permission to run JS. If you want to block scripts by default, use an extension like uMatrix:

https://addons.mozilla.org/en-GB/firefox/addon/umatrix/


>> When you visit a website, and you have given your browser permission to run JS

My browser never asked me about JS. Even if it did, browser developers would switch the preference rather than annoy the user with a prompt every time a site wanted to run some JS. The end result is that whatever is common practice, the users will end up accepting by default. In such cases it is the responsibility of the people setting standards to keep users' safety in mind. The responses in this thread really make my point: so many making excuses for running excess code, and even advocating more APIs.


Take some personal responsibility for the tools you use and how you use them. If a tool has some behaviour I don't like, but makes it easy to change it to something I do, there's no point in moaning that the option exists.


I think it's darkly humorous that I have to install browser plugins to stop sites from mining monero in my browser. It's like websites are those creepy spider things in the Matrix, except instead of sticking humans in pods to harvest their biochemical energy, they're just running up our home electricity bills by maxing our CPUs.


>maxing our CPUs.

And the GPU too.

I recently noticed that I get frequent crashes on certain pages; it appears that somebody is putting up empty ads that compute SHA sums in the background with WebGL, using vec4s and shaders!


Let's just move all those 80s cyberpunk novels to the nonfiction section and be done with it.


Yeah web designers will hate me for this but at least 15% of every site is crap that I block.

I'm on a 5GB data plan for mobile, and even on the excellent 4G networks we have here, sites are still too slow and unresponsive.


What's the difference in using your electricity to mine crypto-currencies and using it to advertise to you?

I don't really see what's dark about it.


What plugins do you recommend for this?


uBlock Origin with the default choice of lists plus "Fanboy's Annoyance" makes the web 100s of times faster.

It's now a default install on every browser of every computer I touch. Even on Firefox on Android, which works really well btw.


uBlock has a list to prevent "resource abuse"; I believe it's enabled by default (it blocks things like Coinhive and friends).


I don't understand this. What are you afraid of? Are you like Richard Stallman, only downloading webpages from email clients or something? Do you have filters that strip all javascript from the web pages you browse?

If you're not one of these guys that refuse to execute any binary code that you have not compiled yourself, you should welcome the possibility of running binary code from a sandboxed web environment.

I, for one, wish I could run all my proprietary software on the web, and restrict all my native applications to FOSS. That's a prospect WASM enables.


HTML, CSS, SVG, and other things your browser will interpret might be more complex than JavaScript, and surely their implementations contain bugs and exploits. Hell, I recently learned about CSV injection (comma-separated values). You can't realistically discard JavaScript and think that you're safe. The world is full of complex data formats ready to be exploited. The proper solution is to embrace sandboxes and put many walls around them, so inevitable bugs won't be exploited.


This is wrong. The focus will simply switch to sandbox escapes.

On x86, the only real "sandbox" you have is what your MMU gives you. As long as the executable has access to the browser's address space, it can do anything the browser can, including reading your webcam, mic, sensors, GPS, etc.


>> This is wrong. The focus will simply switch to sandbox escapes.

Thank you. We have a winner here! And the people trying to escape them will have the full capability of native code running on your CPU.

In the meantime, permissions will be granted for ever-increasing parts of the system. Users will not be prompted to "allow" for every site they visit, because that would be tedious, so browsers will start to enable permissions by default. But either way, we now have the browser acting as the keeper of permissions that our OSes are not able to enforce at the granularity we need for such things.

We've been continually migrating the browser to the role of an OS. It's just insane.


How is it insane? It does a lot of things better than our current OSes. As long as we go into it knowing that's what we're doing, it's an inherently better application and security model in a lot of ways.


So, feel free to lock both JS and WebASM in it.


For now, couldn't someone build a JS<-->wasm bridge so that you could have DOM access? This[1] seems to suggest it's possible.

[1] https://github.com/WebAssembly/design/issues/126
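
Something like the following is the shape I have in mind; note that everything named here (bridge.wasm, set_title, run, the exported memory) is hypothetical:

  let memory; // set once the module is instantiated
  const imports = {
    env: {
      // wasm passes a pointer and length into its linear memory;
      // the JS trampoline decodes the string and touches the DOM.
      set_title: (ptr, len) => {
        const bytes = new Uint8Array(memory.buffer, ptr, len);
        document.title = new TextDecoder().decode(bytes);
      },
    },
  };
  fetch('bridge.wasm')
    .then((res) => res.arrayBuffer())
    .then((bytes) => WebAssembly.instantiate(bytes, imports))
    .then(({ instance }) => {
      memory = instance.exports.memory;
      instance.exports.run();
    });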


You can, it's just very slow.

See this issue, for example: https://github.com/rust-webplatform/rust-webplatform/issues/...


Are they writing their own DOM parser in Rust though?

I should clarify what I meant. I was thinking more of the way Cordova works, where the JS side handles the DOM but the wasm side handles computations, and they talk to each other via a "bridge". So maybe React could offload its heavier computations (diffing and whatnot) to a C lib compiled to wasm that just returns data back to it.


I beg to differ. The DOM is the problem, not JavaScript. Once other languages can interface with the DOM everyone will finally realise this.


People have been saying that forever. But it's so entrenched now that it'll take a gargantuan effort to get rid of it.


I want to see a WASM+react framework where the application state/logic is fully in WASM and merely the rendering is done by react.


I would want the inverse of that, where the application logic is in some high-level language, and the rendering is done in WASM, i.e. react-dom and the diffing parts of react are in WASM.


Great, time to port SWF to WebAssembly.


Java applets 2.0, we are back to square one, yet again.

People don't learn. The web dev community has just managed to force browser makers to unify web development on browser-side JS and throw Java applets, ActiveX, and ActionScript in the bucket, just to have a kind of JVM made part of the web standard and forced upon us yet again.


There are some clear and defining differences between Java applets and WebAssembly. The core problems with Java were that the security sucked, and that applets were non-native and didn't use the DOM. I doubt there is any desire among web developers to overuse canvases and do layout in wasm, and the security model should hopefully be as successful as JS's has been over the years.


> The core problems with Java were that the security sucked

The core problem with Java applets was that they were Java applets

An option to keep the source of the web app closed will fragment the web dev community yet again.


I don’t think there is a single web site with human-readable JavaScript on the web anymore.

With all the bundlers, minifiers, manglers, etc., delivered JS is already as closed as a wasm binary would be.


> I don’t think there is a single web site with human-readable JavaScript on the web anymore.

Meaningless hyperbole - there are plenty. You're posting on one right now.


Sorry, I forgot Hacker News, the representative of modern web development practices.


Ok fair enough.

Hacker News and at least every Wordpress site. So that covers a significant portion of the existing web.


> An option to keep the source of the web app closed will fragment the web dev community yet again.

But this changes nothing. It was always possible to obfuscate JavaScript, just like every other programming language.


That option is about the same as it is today: you can access it, but it will be hard to follow. Just like minified Javascript.


WebKit only has ~10k LOC specific to WebAssembly because, like every other browser, it reuses the JavaScript JIT. That's an incredibly tiny part of the entire browser.


Having a unified VM with established sandboxing requirements is nothing like JVM. It's more like JavaScript itself, only lower level. What's bad about this?


> People don't learn.

Do you? The comparison with Java has been made before, including on HN.

"While Java and Flash [2] came early to the Web and offered managed runtime plugins, neither supported highperformance low-level code, and today usage is declining due to security and performance issues. We discuss the differences between the JVM and WebAssembly in Section 8. "

https://github.com/WebAssembly/spec/blob/master/papers/pldi2...



