What I'm most excited about is the potential for WebAssembly (along with WebGPU) to pave the way for a whole new era of browser games and provide an alternative to developers who don't want to fork over 30% to walled gardens.
My team has built out WASM/WebGPU support for Unreal Engine 4, with Unreal Engine 5 (and other engines) in progress, along with a suite of optimization tools like asset streaming (so you don't have to download a whole game at once) and advanced texture compression (necessary for low-powered devices like mobile/Chromebooks).
More info on us in this Venturebeat article we were featured in:
Was watching your update in the Khronos Group meetup a few days ago, very impressive work. I'm excited for what this will mean for the web overall; it might finally reach that tipping point where more than .io games become standard to play on the web. That's huge, and will drive demand for more powerful web APIs too.
WebGPU in general is very nice. We're building a game engine[0] that uses Zig to build Dawn (Google Chrome's WebGPU implementation) from source for running on desktop / Steam Deck, and we're working on browser support via WASM now.
Those 30% walled gardens enjoy 2022 3D APIs for 2022 hardware, with a great development experience. WebGPU, if it even ships this year, will be an MVP 1.0 after several years of trying to get everyone to agree on something, and it still doesn't have a debugging story.
What would you like to see instead, everyone stick with WebGL? Expose Vulkan directly to the web with a "this is dangerous" prompt? Go with WebGPU+SPIRV, but have no Apple support?
It is already dangerous anyway, as you have no control over driver bugs handling shader code.
As for blaming Apple: the reason WebGL 2.0 lost compute shaders, a GL ES feature from 2014 (!), is that Google dropped them from Chrome after two failed attempts by Intel to bring them in, because WebGPU was just around the corner. That was two years ago!
This is what happens with committee APIs, everyone messes up and we end up stuck with MVPs forever.
As for what we are getting instead, it is quite clear from console vendors, server side rendering with pixel streaming.
(a) WebGPU could be dangerous anyway, so why not just expose Vulkan entirely with no safety?
(b) Committee-designed APIs are always doomed to fail, equally Google/Apple/Intel's fault, they won't ever ship a non-MVP anyway, 'so why bother' I presume?
(c) Console vendors are pushing server-side rendering pixel streaming too, so we won't have any control over our devices ultimately in the future anyway - so none of this matters.
If those are the arguments, I'd rather have WebGPU than not have it, personally.
Most game engines don't require you to download the entire game at once. Loading assets dynamically is always an option. However, it's pretty nice to be able to do something like `engine.getAsset("blah")` and have it immediately return something.
With shared array buffers it’s not actually a big problem. We’re building a multithreaded game that targets wasm and we just spin up workers up front which all use the same shared memory, and then just use standard concurrency primitives to schedule work on them. Just have to be careful not to block on the main thread, and you have to serve all your assets from the same domain for browsers to allow you to make shared array buffers.
What is the minimum weight of a UE4 project compiled to WASM? The article says something about a menu being 10 MB, but is that a menu done in UE4, or something else that's somehow connected to an actual game? To my knowledge an Unreal executable weighs at minimum around 80 MB? That would compare poorly to Godot, which is very light in comparison...
Yes, in a limited form since it sticks to the lowest common denominator that can be implemented across most popular CPUs with decent performance and while producing consistent results. There's a follow-up proposal in progress to relax the "consistent results" part and allow for some slight precision wiggle room with instructions that couldn't be implemented otherwise (e.g. they could add an FMA instruction that compiles to native FMA where available or separate MUL-ADD otherwise, even though they won't produce exactly the same results).
> If the device performs safety-critical functions, like actuating power, medical equipment, or a connected car, the firmware and software cannot be updated without rigorous testing. If the device requires certification, updating it may require recertification. Updates are disruptive and need to be scheduled when the device is not performing its critical functions. Even just performing the update presents risks: if the update ends up bricking millions of devices and requires manual intervention, it could prove an existential risk to the business or the safety of individuals.
WebAssembly does nothing to address any of this. The least important part about this is the language the code is written in. I think this person has thoroughly confused code running in a VM with code being secure, safe and performing to specification.
The author is missing another huge area: plugins/modding/scripting/etc. Think IDE plugins, Minecraft mods, or running user-provided untrusted scripts.
The benefits WASM brings are:
1. Sandboxing - security
2. Isolation - if a plugin crashes, it won't bring down the whole program
3. Interoperability - write a plugin in lots of different languages, not just lua or js
4. Speed
I'm hoping to write my thesis for my master's degree on this topic this year. I'm also in the process of writing a game like screeps, where users provide a WASM script to control units for an RTS-style game (without combat though) https://github.com/JMS55/botnet.
It's amazing how simple it is to constrain memory usage and runtime duration, and to secure the functions exported to a WASM VM. Performance is also great: currently about ~6 microseconds per tick per unit, up to ~200 microseconds when doing expensive pathfinding. All that while letting you program your units in Rust, the same language the server is written in, being able to share code with the server, and not having to use something more script-y like Lua.
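As a sketch of how little the host has to expose, here is a hand-assembled wasm module (not the actual botnet code) instantiated with an empty import object; the guest can only call what the host passes in, and the host can only call what the guest exports:

```javascript
// Hand-assembled minimal module that exports add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

const module = new WebAssembly.Module(bytes);
// Empty import object: the sandboxed code gets no ambient capabilities at all.
const instance = new WebAssembly.Instance(module, {});
console.log(instance.exports.add(2, 3)); // 5
```

The same principle is what makes plugin sandboxing tractable: every capability the guest has is one the host handed over explicitly at instantiation time.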
IMO the main selling point of WASM is its predictable performance.
It's no secret that you can write JS that will be JIT-compiled into extremely efficient machine code. But how to write that JS is the "secret". You would need seriously advanced hackers who can study V8's assembly output and correlate it with the JS features used. And do it all the time, whenever someone changes that code. Or maybe even unrelated code changes will change the way V8 compiles that particular snippet. JS compilation is black magic.
On the other hand, writing C is boring and a solved problem. Compiling C to wasm works. It's predictably fast. You can use it for performance-critical code and it'll probably work without any adventures into V8 internals.
Making fast JS isn't some super-secret thing. Sure, there are weird cases like `x = y > z ? y : z` being 100x faster than `x = Math.max(y, z)`, but most performance improvements come from three simple rules:
1. Create an object with a fixed number of keys and NEVER add keys, remove keys, or change the data type of a key's value.
2. Make arrays of a set length and only put ONE data type inside. If that data type is an object or array, all the objects/arrays must have the same type.
3. Functions must be monomorphic (always called with parameters of the same types, in the same order).
Do this and your code will be very fast. Do something else and it will get progressively slower.
Running the profiler in Chrome or Firefox is very easy and it will show you which functions are using up most of your processing time. Focusing on applying these rules to just those functions will usually get you most of the way there.
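A toy sketch of all three rules together (names are illustrative):

```javascript
// Rule 1: fixed shape. Every point is created with exactly these two keys,
// so the engine can give them all the same hidden class.
function makePoint(x, y) {
  return { x: x, y: y };
}

// Rule 2: homogeneous array. Only points go in, all with the same shape.
const points = [];
for (let i = 0; i < 1000; i++) {
  points.push(makePoint(i, i * 2));
}

// Rule 3: monomorphic call site. lengthSq only ever sees that one shape,
// so the property loads can compile down to fixed-offset reads.
function lengthSq(p) {
  return p.x * p.x + p.y * p.y;
}

let total = 0;
for (const p of points) {
  total += lengthSq(p);
}
console.log(total);
```

Break any one rule (push a string into `points`, add a `z` key to one object, call `lengthSq` with a different shape) and the same code deoptimizes to slower generic paths.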
... Until it's not. If your code needs to be predictably fast, it can't be fast the majority of the time, it needs to be fast all the time. It's not about painting your updates in a tenth of a second, it's about painting your video game scene in under 16ms, every time.
There's nothing you can do to guarantee your JS isn't passed inputs that trigger pathological cases. There's no linter that can guarantee that you're writing code in a way that is the fastest it can be (even with type checking!). Asking developers to be a human linter for the sake of consistent performance is a bad developer experience no matter your skill level.
Kinda. As long as you guard the boundaries of the parts that need performance, you can be a lot more flexible everywhere else. You see this in libraries like React where the external API is polymorphic, but they tend to pass through to calls that are monomorphic internally so performance is better.
I wish TypeScript helped more here. I'd prefer if it had a performance option that disallowed or at least warned about these kinds of things.
I wish we just ditched the JS legacy and had a properly statically typed language, with dynamism as a layer on top (like e.g. "dynamic" in C#), rather than underpinning anything.
Which is what wasm will, hopefully, give us in the long run. And ensure that said PL will have to remain competitive against the new contenders, since they can always replace it.
I think the reverse option is better. Add a `use types` directive to functions or modules. Inside those modules, use a very strict, structurally-typed ML-style type system that ensures good performance. If an untyped piece of code calls a typed one, coerce primitives and throw on objects that aren't a structural match.
> IMO the main selling point of WASM is its predictable performance.
Not once in my entire life have I heard anyone say this
Almost always it's either about it being faster or about being able to use a language that isn't JS. And both of those have dubious value, because I've seen wasm be slower, and people complain a lot about the lack of tool support. Which is why the other day I claimed very few people use it. I've seen many try it once or twice and not want to go through it again.
The other thing is that the semantics of JS force some constraints on the JIT that make it harder to optimize code aggressively. Specifically, JIT compilers for JavaScript need to implement dynamic de-optimization for when optimized code paths turn out to be wrong (because JS can do things like overwrite a method, meaning inlined calls to that method are now invalid).
Afaict it's much easier to write a high performance JIT for WASM because those cases aren't possible. And consequently, it's easier for something compiling to WASM to get high performance out.
Sure, but that's the problem of the hosted language's compiler and not the JIT. The optimization pass then turns into compiling the redefined method to WASM, which can then be JIT-compiled and inlined, so deoptimization isn't as bad (at least only that compilation unit has to be ditched).
I agree with this. One strong use case I’ve seen is number crunching. Doing complex math via WASM is fast and predictable and supports a wider variety of float / integer types.
Another use case I’ve toyed with is date-time handling. Specifically, trying to figure out whether something like the Rust chrono crate is a better fit for crunching and calculating dates than something like date-fns or Luxon. Not sure about this one yet.
This assumes a couple of things, though. And this is another point I just realized I like about WASM: (most) modern browsers have asm.js / WASM support, and that support goes back much farther than Temporal. So with Temporal we have to consider the following:
1. Browser support - it's not there yet; you'd have to polyfill. A production-level polyfill is 16 KB, is still very nascent, and, on top of that, also requires support for BigInt[0]. The polyfill that TC39 put out is decidedly marked as non-production-ready[1].
2. Polyfilling - as mentioned above, we have to deal with polyfilling the API, and that isn't a clear and easy story yet. WASM support goes back farther than this.
3. Size - it's entirely possible to get WASM builds under 16 KB, and the support is better, especially for operations on strings and numbers (dates fit this category well). The only complications I haven't quite solved yet are:
A) Can I validate that a WASM build will stay under 16 KB? This is crucial. I'd even accept 20 KB because of the wider browser support[2].
B) Can I fall back to asm.js if needed? (There is a slim range of browsers that support asm.js but not WASM, mostly pre-Chromium Edge[3].)
C) Is it performant compared to something like Luxon or date-fns? WASM excels at string / numerical operations, so my sneaking suspicion is yes, at least for the WASM operations themselves. The complexity will be serializing the results to a JS Date instance; Luxon & the Intl API might be most useful here.
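For (B), the usual approach is to validate a trivial module before choosing the WASM path; something along these lines (illustrative sketch):

```javascript
// Returns true if the engine can at least validate an empty wasm module
// (8 bytes: the "\0asm" magic number plus version 1). Engines without
// WebAssembly at all fall through to the asm.js build instead.
function wasmSupported() {
  try {
    return (
      typeof WebAssembly === "object" &&
      WebAssembly.validate(
        new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00])
      )
    );
  } catch (e) {
    return false;
  }
}

console.log(wasmSupported()); // true in any modern engine
```

This only answers "can the engine run wasm at all"; the 16 KB budget question still has to be enforced at build time.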
Yeah, if you need something ready for prod by literally next month, Temporal definitely isn't it, as the API isn't locked yet.
Don't forget WASM doesn't provide direct access to any OS time APIs (timezone info, current time, regional time change modifications) so the solution will still basically boil down to "call Date() and polyfill a better library" except now you have extra code to ferry the data back and forth to do a few string and math ops. Unless the use case is processing very large datetime datasets in one call the JS<->WASM function call overhead for all of this will probably take the majority of the execution time.
Not to mention, after you get all of this solved, tested, and deployed, you know that as soon as Chrome starts shipping Temporal, the cool custom solution becomes 50% slower for the average user despite all the effort, because you didn't just use something like Luxon, which would automatically update to use Temporal on release. This may just be me being lazy though :p.
Even if Temporal ships tomorrow, it's a minimum of 5 years before most applications can take advantage of it, so you're either polyfilling with feature detection or waiting it out using libraries like `date-fns` or `luxon` to fill the gap.
Strings & numbers are WASM's strong point, so if you can pack the locale information tightly in a binary format, you might actually win out in the medium term. This shouldn't be a years-long project by any means. And frankly, with the way enterprises move, you'll always have some client (at least in my business) where I need to support some modern-ish browser that may not have Temporal, so if this is more performant (we do a lot of date-time datasets, so yes, that's partly why I'm looking at this), why not?
It could also be the wrong solution. I'll find out one way or another.
Just the IANA timezone database is ~400 KB gzipped for the data only and it's really quite the project to actually parse correctly. With that you'll still need to ferry Date(), Intl(), and friends for getting the current info about the user into the WASM module. Only then can you actually start talking about the code which competes with the 20 KB JS polyfill which started with all of the above as precompiled native code and data.
WASM's strong point isn't necessarily "strings and numbers"; it's running large amounts of compiled code on large amounts of data. Video processing, PDF readers, video games. As an example, even computing a large image of the Mandelbrot fractal (a pure math workload) and then passing back an ArrayBuffer of the pixels was faster in JavaScript until WASM SIMD + threads finally landed and JavaScript's poor parallelism finally factored in. Doing it with a function call per pixel, JavaScript is still ahead of even WASM with SIMD due to the function-call overhead.
But all that said I think it's a really cool project to try and I hope you're able to build what you're seeking. If you do be sure to post it to HN so I can check out how you managed to pull it off :).
I’m gonna rephrase some of a previous comment[1] I made when someone else posted benchmarks like that:
That is comparing WASM+JavaScript vs pure JavaScript. Unsurprisingly there’s some interop overhead. Those benchmarks are not relevant if you’re not using JavaScript (e.g. WASI stuff) or you’re doing the bulk of your calculations in WASM and not rapidly jumping back and forth between WASM and JavaScript.
The first benchmark isn't measuring the performance of WASM, but the calling overhead from JS to WASM (which is pretty small, but it's still an optimization barrier for the JS engine).
The second benchmark isn't measuring the performance of Javascript, but the performance of the Javascript sort() call (which most likely is implemented as native code).
In general, WASM should be both in the same ballpark as portable natively compiled code (e.g. not using SIMD), as well as Javascript which has been written for performance (which also means that well written - but non-idiomatic - Javascript can be in the same ballpark as portable native code).
The main advantage of WASM over JS isn't raw performance but predictable performance (because the GC is taken out of the picture and memory is one linear block), and that WASM is a better compilation target than JS.
The main benefits are being able to use the same codebase for native and web builds and also not needing to use a GC and being able to do memory management yourself. The latter stuff really makes a positive difference for Wasm in eg. graphics / games. Can be less pronounced if the app mainly does DOM manipulation potentially due to the API boundary.
This might be a fact, but it certainly isn’t fun or interesting. Anybody who is framing WebAssembly as some sort of competition with JavaScript is completely missing the point of its existence. The major problem with WebAssembly right now is that the major point of its existence is murky. It will take time and effort from everyone using it to figure out its true calling, but I am hopeful and optimistic this gets discovered. In the meantime it doesn’t help to pit it against JavaScript.
Java introduced a language and library… but the real innovation was that it introduced a cross platform VM that was supported by some organization
WebAssembly is now doing the same thing… but just the cross-platform VM part
Now you can run Python and JS on the JVM these days but these are not de-facto implementations and so their adoption is pretty low. I wonder if the same issue will apply to these alternate WASM implementations of existing languages.
If you can use Emscripten to compile all that stuff to JavaScript, then what’s the point of WASM? A smaller instruction set? It’s a serious question. The generated JS seems to have all the same properties as WebAssembly. The runtime implementers can reason about it better?
You can use chopsticks to eat Jello but that doesn't mean you will be happy with it (well maybe that would actually be fun but that's beside the point ;)). Emscripten to JavaScript works by taking what's available in JavaScript and turning it into a base to compile to. Then the hope is the JavaScript engine can figure out what you were trying to do originally and optimize accordingly. Emscripten to JavaScript was never a "this is a great way to do this!" more a "this is the only way to do this!".
WASM was an evolution that said: rather than doing all that, why not just have a way to tell the browser's VM what we want to do directly? Now instead of having to parse JS syntax to find type hints and so on, the browser can just parse pre-encoded bytecode. Instead of having to recognize that certain logic is trying to emulate functionality like 64-bit integer multiplication and optimize it accordingly, the browser can be told to do a 64-bit integer multiplication directly. Since this is a separate interface from JavaScript, it allows work on things like threads, SIMD, and garbage collection to proceed without worrying about how JavaScript struggles with those concepts, since JavaScript is no longer the base.
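The "type hints hidden in JS syntax" idea is easiest to see in asm.js, the compile target that preceded wasm. A hand-written sketch (real Emscripten output is machine-generated, but it uses the same coercion idioms):

```javascript
function AsmAdd(stdlib, foreign, heap) {
  "use asm"; // asks the engine to validate and AOT-compile this module
  function add(a, b) {
    a = a | 0;          // parameter coercion: a is an int32
    b = b | 0;          // parameter coercion: b is an int32
    return (a + b) | 0; // return coercion: the result is an int32
  }
  return { add: add };
}

// The heap must be an ArrayBuffer; 64 KiB satisfies the asm.js size rules.
const { add } = AsmAdd(globalThis, {}, new ArrayBuffer(0x10000));
console.log(add(2, 3)); // 5
```

Even on engines that never special-cased asm.js, this is still plain JavaScript and runs correctly; wasm replaced the pattern-matching step with real bytecode.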
Sure, but the way browsers' Javascript engines get their high performance is complex.
JIT-compilation with optimisation (and de-optimisation!) is costly, so browsers tend to only interpret Javascript the slow way at first, enabling each (higher) tier of compilation only after run-time profiling.
With higher complexity comes higher risk of errors, and there have been a number of serious vulnerabilities in browsers' Javascript JIT-compilers in the last decade.
Not many companies have the resources to develop a high-performance Javascript engine that can compete with the best.
Also, writing optimised Javascript code so that it gets made into fast JIT-compiled code is a black art.
WASM on the other hand, has been designed so that it could be assembled into machine code straightforwardly in a single pass using little CPU time.
You'd get native performance straight away. (Not that optimising WASM runtimes don't exist)
Performance against JavaScript for specific use cases (i.e. being a general purpose VM for traditional application code) is a factor. Performance against JavaScript in general (e.g. calling the same simple function in a loop) is not a factor.
E.g. if you had only done rendering on a CPU and someone came by and said "we can do all sorts of stuff we couldn't do before with this GPU check it out!" it'd be easy to say "I could do all that on a CPU" and you could even show the exact same benchmarks presented here and then say "see, the CPU even runs the single threaded factorial function many more times per second than this new GPU". Everything you said would be absolutely correct in the most literal form yet it'd still be completely missing the point of why the GPU was made and how to assess if it fits that purpose better.
Then someone shows you the GPU doing rendering it was designed to do well better than the CPU and the reply is "So it is about the GPU being faster than the CPU?". Yes. No. It depends what context you're asking from. Traditional use cases no, what it was designed to do well yes.
The original article which talked about WASM being slower itself specifically notes this relation of purpose, functionality, and performance it's just tucked away in the conclusion:
> definitely don’t go converting all your websites’ JavaScript to WebAssembly! However, that’s not really the aim of WebAssembly. Its aim is to enable richer experiences on the web that require higher performance, for example machine learning, virtual reality, or gaming.
In a nutshell, WASM is slightly more efficient than Emscripten + asm.js because it has a binary format and circumvents the need to parse a blob of JS. That's about it, really.
Whereas Asm.js had (has?) perfect backwards-compatibility with unsupported browsers and JS interpreters, WASM requires users to remain on the bleeding edge of new browser features as it continues to evolve, and introduces a whole host of fantastic new bottlenecks as the designers puzzle over how to interface WASM modules with the rest of the facilities JS can already access.
The whole thing is a hilarious boondoggle- an insane amount of effort and complexity for mild bandwidth and page-load time savings- made all the more hilarious for the fact that a remarkable number of people seem unaware that Asm.js ever existed in the first place.
Aren't they though? Is it not possible, that in the future web pages will be scripted in literally every programming language, from python to rust, and javascript will be just one of many ways to write a webapp?
Any general purpose VM is fine, including the VMs behind JavaScript, but just double check that's actually what's being compared instead of inter JavaScript and VM performance. How many times per second you can call 1-5 line functions from a webpage's execution context is comparing the latter and as the article notes at the end:
> definitely don’t go converting all your websites’ JavaScript to WebAssembly! However, that’s not really the aim of WebAssembly. Its aim is to enable richer experiences on the web that require higher performance, for example machine learning, virtual reality, or gaming.
WASM functions aren't meant to replace small JS functions on your standard website. It's meant to be a general purpose VM you can target large amounts of non-webpage code to.
I think that article needs some work - for one, it admittedly didn't account for the startup time WASM needs to compile the bytecode. Second, I'm not familiar with AssemblyScript - but I wouldn't be surprised if its performance wasn't up to something like C++, and some of the benchmarks test stuff like the builtin sort which depends massively on the quality of the standard library.
The article is relatively old, but I have the most complaints about the performance measurements. WebAssembly should be tested in a way that eliminates the interop time, which is very expensive. That's why sorting and simple multiplication will always look slower when timed from outside WebAssembly. Sorting in AssemblyScript is actually faster than in Rust or JavaScript:
https://twitter.com/MaxGraey/status/1414867216676368384
You can compile C, C++ and a bunch of other languages to Wasm, while you can't do that with Java as a target. A lot of existing C code compiles unchanged. And Wasm also actually runs in all browsers nowadays without the user noticing a difference (eg. iOS Safari too).
Yes, because it's learned from their experience. WASM has been designed with security in mind from the very start, and implemented by experienced browser teams with a very deep understanding of the security risks of doing so.
WASM's sandboxing as implemented in practice is different from that of JS JITs. It works by reserving a 4 GB region of virtual memory and treating the region's base address as wasm address 0. Pointers in WASM are 32-bit, so they cannot point outside the region.
The big win is the runtime doesn't need to check pointers for validity. However there are some downsides relative to native code:
1. Can't address more than 4GB of memory
2. Can't be efficiently implemented on 32 bit systems
3. Can't share memory between WASM modules
4. NULL dereferences don't trap (I think)
I would not be surprised if future CPUs had hardware support for this stuff, e.g. load/stores with a mask to confine the pointers.
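The model is visible from the host side in the JS API: a wasm "pointer" is just an index into one linear-memory ArrayBuffer, so there is no value the guest can form that refers to host memory (illustrative sketch):

```javascript
// One 64 KiB page of wasm linear memory, exposed to JS as an ArrayBuffer.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 1 });
const heap = new Uint8Array(memory.buffer);

// A wasm i32 "pointer" with value 0 is simply heap[0]; every possible
// 32-bit offset stays inside (or is bounds-checked against) this buffer.
heap[0] = 123;
console.log(heap[0], memory.buffer.byteLength); // 123 65536
```

The 4 GB reservation trick described above is how engines make that bounds check free on 64-bit hosts: any 32-bit offset added to the base lands inside reserved (if not accessible) address space.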
The other way it ensures isolation is by separating code and data. All executable code lives in a separate address space that is not accessible from within wasm. The call stack is also a separate area, making it impossible to muck around with or even look at the return pointer. Function pointers are opaque: they can live in global or local variables and in table entries (which are also completely separate from memory), but not in memory; when such a pointer is needed as part of a data structure (e.g. vtables), an index into a table is used instead.
One of the things that killed Flash was not being supported on iOS while Wasm runs fine (and actually pretty well!) on iOS which is pretty good for reach.
On the other side, not being able to run on iOS didn't kill any server side language (think Java.) And Java client side (Android) is very different from Java server side, down to the sets of developers.
Currently WASM is both a client side and server side runtime. It's not clear where it will be in 5 or 10 years. I don't see a compelling server side story. Why WASM and not C#, Java, Node, Python, Rails (I intentionally don't write Ruby) or whatever any of us is using now with its standard runtime?
> Why WASM and not C#, Java, Node, Python, Rails (I intentionally don't write Ruby) or whatever any of us is using now with its standard runtime?
I don't understand why you use a framework instead of a language there, but it seems to be in the same category of mistake as asking “why <compilation target> and not <thing that can be compiled to that target>”. They aren't mutually exclusive alternatives.
Say you are a C# developer and there is a C / C++ / Rust thing you want to use as a dependency.
Well, WASM is your interop layer. Same with Node.js, Deno, Go, etc. You can start to share a lot more code with a solid interop layer like the one WASM presents.
wasm does not have a solid interop layer, especially compared to past attempts such as CLR. What you get at the moment is more or less the C FFI, but more awkward to use because of the sandbox.
Personally I'm not super familiar with its benefits if any on the server and would actually not use it on the server myself and just build binaries directly, probably using Go. But I've seen some references to Wasm on the serverside for something similar to containerization or loading plugins. It does seem less obvious to me than the client side.
What makes you say Wasm is a server side runtime / imply that it's meant to be one?
With go as an example, you know the saying “cgo isn’t go”? Well, you could use C, C++, Rust or anything else that compiles to wasm from any other language.
There have been a few people who say that if wasm (WASI on the server) had already existed, Docker wouldn’t have needed to exist. Docker runs a whole OS just to run your binary; imagine the benefits of Docker but with only your binary running.
It’s all early days so I am slightly waving my hands, but a lot of this works now. Check out _wasmtime_.
For the client I use a simple go -> c++ compiler I wrote and compile to wasm from that actually, on my side projects. It had zero overhead interfacing to / calls to C/C++ (including generics<->templates) since it's just generating that. Example web game made with that: https://github.com/nikki93/raylib-5k
I think I've seen wasmtime before. If I needed to interface to any C/C++ things on the server I would probably just write in C/C++ (or Gx) yeah.
iOS was the final nail in the coffin, but Flash had been having years of an endless flood of severe security problems. It was having major problems hanging on prior to Apple playing their hand.
Losing Flash's excellent authoring tools is still a hard blow though.
That's like saying that the Watt steam engine [1] was basically Newcomen's atmospheric engine [2], not to mention the Aeolipile [3] from ancient Greece.
Yeah, if you squint hard enough, everything new is just the reinvention of the wheel [4]. And yet – sometimes small, incremental improvements are what it takes to push a concept (steam powered machines, or bytecode for execution in the browser) from niche applications to being a breakthrough technology. I don't know if WebAssembly will be that incremental improvement, but claiming that it won't because Java tried and failed is a lazy, fallacious argument.
[4] Speaking of reinventing the wheel: Those radial tires, eh, who needs them? They're basically just like cross ply tires. Not to mention the spoked wooden wheels that have been around since forever.
I think you missed the part about it running in the browser? But yes, some previous technologies are similar to some new technologies, that's not really an insight at this point. Especially not about bytecode interpreters which seems like a standard practice.
The extent of Wasm's availability on browsers is quite big right now. Both iOS and Android and all major desktop browsers. That sort of reach is what I meant by the term "the browser" used generically. Different from being an extension to one or a few browsers or something like that.
Yes, in the sense that everything that shares some history or idea is the same thing.
Like C and O are both chemical elements, so basically interchangeable, or like horse carriages and oil tankers are both methods of transporting things.
> What can wasm accomplish in that way that Java couldn't? I'm really confused about the hoopla.
Provide relatively efficient support for languages other than JavaScript that is reliably available in major browsers without user action, an insecure plugin model, etc.
What killed Java in the browser was in large part loading times. First the plugin had to load, then the bytecode had to run. On many systems you immediately knew when Java was in use, because the browser got slow and hung for a while.
Flash loaded a lot faster (also systems were better, generally) however Flash apps completely messed with user experience.
Nowadays JavaScript can do a lot of things better (say changing URL, history support, back button) which can be integrated with a wasm tool for having a way more seamless integration.
As a user you simply don't notice if something is using JS or wasm.
From there it, IMO, carries over to the server side and other places. Java simply got a bad reputation as a resource hog used for bad applet UIs, and many people looked elsewhere.
And then Wasm supports C and C++ (and more) with their huge ecosystem of libraries, applications, and so on. (While of course these days a Java VM (incl. Android) is often used with non-Java languages as well.)
Well, unlike Java, Wasm is not the product of a for-profit company. It's a W3C standard that actually does solve many problems most people didn't realize they had. It essentially obsoletes virtualization (the CTO of Docker famously said "If WASM+WASI existed in 2008, we wouldn't have needed to created [sic] Docker. That's how important it is"). It will allow the creation of a unified software ecosystem across languages (Wasm "components" are designed to allow you to, e.g., import numpy into a JavaScript project. See https://hacks.mozilla.org/2019/08/webassembly-interface-type...). The virtual machine is designed from the ground up to make many types of vulnerabilities, such as stack smashing, impossible. And, as an open standard, it's not beholden to the whims and lawyers of Oracle.
I'm not 100% certain the W3C working group won't end up fumbling it, but if you're not excited about wasm then you probably just don't know much about it.
The founders know very clearly what preceded it as they're working with the authors of these previous attempts (JVM, CLR, etc) to integrate their stacks with Wasm.
Oracle was the company that completely open-sourced OpenJDK, and even their own paid-support JDK is just a minor modification of the former. Java and the JVM are also among the few languages/platforms with separate specifications and multiple completely independent implementations.
If anything, Java is a much much safer bet than the oligopoly of WebKit/Blink.
It provides a security model that actually works. WASM code can't access the outside world, except for channels you explicitly provide to it.
Everything before it eventually let you have full access to the host file system, if you asked nicely, were given permission, or leveraged a bug in the system.
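The "channels you explicitly provide" point can be sketched with the JavaScript embedding API. A minimal illustration (the module and its `env.log` import are made up for this example, not from the thread): the import object passed at instantiation is the *only* way the module can reach the outside world.

```javascript
// Hand-encoded Wasm binary equivalent to:
//   (module
//     (import "env" "log" (func $log (param i32)))
//     (func (export "run") i32.const 42 call $log))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // type section
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import "env" ...
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         // ... "log" (func)
  0x03, 0x02, 0x01, 0x01,                                     // function section
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b, // code: log(42)
]);

const seen = [];
const instance = new WebAssembly.Instance(
  new WebAssembly.Module(bytes),
  // The import object is the module's ONLY channel to the host.
  // Omit it, and instantiation fails; the module cannot reach anything else.
  { env: { log: (x) => seen.push(x) } }
);
instance.exports.run();
console.log(seen); // [42]
```

The module can compute whatever it likes inside its linear memory, but file system, network, or DOM access only exists if the host wires in a function that provides it.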
wasm is only as secure as the runtime that you're using, and I would be extremely surprised if none of them have sandbox-breaking bugs in the long term.
Can you elaborate? To my understanding, you compile code into opcodes, and it's as safe as the stack machine going through the opcodes. To my understanding, the runtime is part of the compiled code, not the stack machine going through it.
Wasm outright forbids things like goto and other potentially insecure features, opting instead for structured control flow. Java has been the subject of countless security vulnerabilities.
Wasm also has the ability to stream bytecode and validate/compile as it streams (fast parsing was a major design goal), resulting in much faster startup times.
Wasm is easier to integrate into the JIT/VM already shipping in browsers so they don't have to ship two massive engines.
Wasm is a bit lower level which should result in faster execution than the JVM in the future.
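The streaming point above is visible in the JS API. A minimal sketch ("app.wasm" and `imports` are placeholders, not anything from this thread): in a browser, `instantiateStreaming` compiles while the bytes are still downloading, and validation is a cheap single pass over the binary.

```javascript
// Browser-side streaming instantiation (compiles during download):
//
//   const { instance } =
//     await WebAssembly.instantiateStreaming(fetch("app.wasm"), imports);
//
// Validation is cheap enough to run as bytes arrive. Even the 8-byte
// empty module (magic number "\0asm" + version 1) is a complete,
// valid binary:
const empty = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
console.log(WebAssembly.validate(empty)); // true

// Corrupt the version field and validation rejects it immediately,
// without any execution:
const bad = Uint8Array.from(empty);
bad[4] = 0x09;
console.log(WebAssembly.validate(bad)); // false
```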
goto or lack thereof has nothing to do with wasm's security. It could have added goto tomorrow without breaking the sandbox. Indeed, there are a couple proposals (funclets, multiloop) that are basically dressed up goto.
The Java language doesn't have a goto statement, and so all code is properly structured. A labeled break statement is almost a goto statement, but it still obeys structuring rules -- i.e., you cannot jump into the middle of a loop.
At the bytecode level, all structured constructs get compiled into forms that rely on goto statements. Is this inherently insecure? Should the bytecode require structured programming too? How does this guard against malicious use any more than verified bytecode that relies on gotos?
That doesn’t make code insecure; it’s just extremely powerful and very useful for compilers and optimizations, even if it wouldn’t be desired in a modern language, for reasons that have nothing to do with security.
They are talking about Java as it was back then. In case you don’t know, the primary use of Java when it came out was to run “applets” inside the browser. And it was terrible. Only later did it become mostly a server-side thing (a space that at the time was dominated by languages like Perl, PHP, etc.)
To be precise, applets were an early application of Java, but Sun Microsystems actually developed it in pursuit of the long-standing dream of the “universal binary.”
When I wrote "good access," I really meant that Java would have needed to provide an API at least as good as jQuery, not just a low-level way to walk nodes in a tree.
Nothing. People have been able to compile other languages into Java bytecode for decades already. That didn't turn Java into the one true runtime to rule them all, and WebAssembly will not be any different.
I wish browser makers would focus on making the browser user and developer experience actually work instead of chasing the latest shiny feature.
In the early days of Java, it was bundled with the browser. But because Sun didn't make their own (popular) browser, they couldn't dictate the terms for what features a browser was expected to include.
How is any of this relevant if we're talking about a hypothetical future in which Java had continued to be used in browsers?
Browsers would include a JRE (which they actually did at one time, but that’s just a tiny technicality either way). There weren’t even mobiles capable of browsing the net at the time, but there is nothing inherently unsolvable here. It’s not like there is no partitioning between the Wasm and JS worlds; in this alternative reality there simply wouldn’t be JS, and Java could have had access to the DOM.
I'm building a static analysis CLI tool in Rust. It takes 30 seconds to build the WASM version, which I then upload to a website, providing a web-based version that other developers can use to demo the tool, and isolate bugs in its analysis.
> If the device performs safety-critical functions, like actuating power, medical equipment, or a connected car, the firmware and software cannot be updated without rigorous testing. If the device requires certification, updating it may require recertification.
But can we trust car manufacturers to do this testing and recertification before they push an update?
Generally yes. The delta tests for several classes of changes are baked in heavy processes and their execution is regularly audited. Maybe some startup cowboys see that differently, but then in their cars people are harmed while doing whatever...
Are you sure? I can't help but think of the VW emissions scandal. A "quick update" might seem a better choice to some managers than grounding the fleet.
I am actually surprised that scandal did not put more people behind bars, because given the number of processes involved, there must have been quite a few people who knew what was going on and covered for each other.
> While tremendously valuable, using WebAssembly in a web browser is not what excites me.
The article should have been titled "Why Am I Excited About WebAssembly Outside The Browser?"
The reasons the author gives don't seem that exciting.
I'm excited about WebAssembly in the browser because, as other commenters have pointed out, this allows a new area of delivering executables to run in a sandboxed browser tab with just a click. Convenience, speed and safety. I think it fulfills the early promise of the internet before the malicious hackers got to it. Some might want to say here that this is still insecure, but it's a lot more secure than downloading and running a binary on the main OS.
There's literally nothing wrong with the title. The author doesn't have to qualify their excitement because it might not be for the same reasons as others.
I have not been able to get excited about WASM due to the poor memory management situation. It’s been years since the MVP and we are still in limbo about freeing memory.
Until this is solved WASM is dead in the water for a huge variety of applications. I am rooting for WASM, but it has been discouraging to watch this go unsolved over the years.
I'm not sure I fully understand the issue here, but reading through it, it seems to indicate that this only affects people using 32-bit browsers? Which, I assume (depending on the environment, of course), is a very small share of users nowadays.
asm.js was a clever hack, but still a hack, and starting over with a clean slate let them fix various limitations inherent to using JavaScript as a compilation target.
Wasm is smaller and faster than JavaScript and asm.js. It is faster on average, has far better predictability (far fewer performance cliffs or pitfalls), and it starts up faster (faster to decode, plus no need to warm up the JIT with types at runtime).
You can test this pretty easily, since Emscripten still supports JavaScript output with a flag (for environments that lack wasm support for whatever reason). Comparing Emscripten's default wasm output to JS output for the same benchmark will show those benefits in most cases.
Ever since I read that the key WASM person quit her job after burnout, I've been quite pessimistic about WASM. I wouldn't be surprised if it dies in the coming years.
To be completely honest, WASM enables cross-platform things, which is ALWAYS against the interests of companies, who divide up their market territories to guarantee steady revenues. I'm also a bit curious how they managed to make WASM happen in the first place.
I'm also still waiting for a C++ toolchain that directly outputs WASM. I haven't touched Binaryen since, but it was not a great experience.
> Since I've read that the key WASM person quit her job after a burnout, I'm quite pessimistic about WASM. Not surprised if it dies in future years.
WebAssembly is so big at this point it's too big to fail (famous last words, maybe?). All major browsers support it (Firefox, Chrome, Safari, and Edge), runtimes are available for non-browser usage, and a large number of people are involved in moving it forward.
If it was early days, then maybe losing one key person could have changed the fate of WebAssembly. But at this point, there are multiple key people both inside and outside the WebAssembly organization.
'The key wasm person' is nonsense. It's a spec, with multiple implementations of interpreters, jits, and other execution environments. Even outside of browsers there's other runtimes.
Simply put, missing DOM manipulation.
Here [0] is quite a good summary with plenty of links to currently open proposals/issues on WA's GitHub that would allow it to do that.
Hmm, wouldn't it suffice to just export a createHandle() function to WASM, which could then use said handle to reference the underlying C++ object of the HTML element? V8's involvement isn't even necessary.
The DOM element would then be collected when there are no JS references to it, its handle is disposed of by the WASM code, or the WASM runtime itself is destroyed.
Seems like a very similar problem to how game engines integrate scripting languages like Lua - with very similar solutions.
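The handle scheme proposed above can be sketched on the JS side. This is a hypothetical design, not an existing API; `createHandle`, `setText`, and `dropHandle` are made-up names for functions the glue code would expose to the Wasm module as imports.

```javascript
// The JS glue keeps the only real references to DOM objects;
// the Wasm side only ever sees opaque integer handles.
const handles = new Map();
let next = 1;

// These would be passed to the Wasm module as imports (names invented):
function createHandle(obj) {
  const h = next++;
  handles.set(h, obj); // glue holds the real reference
  return h;            // Wasm just gets an integer
}
function setText(h, text) {
  handles.get(h).textContent = text; // glue dereferences on Wasm's behalf
}
function dropHandle(h) {
  handles.delete(h); // Wasm signals it's done; the object can now be GC'd
}

// Usage sketch (a plain object stands in for a DOM element outside a browser):
const el = { textContent: "" };
const h = createHandle(el);
setText(h, "hello");
dropHandle(h);
console.log(el.textContent); // "hello"
```

This is essentially the same registry pattern game engines use for Lua userdata: the host owns the objects, the embedded language owns only indices into a table.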
Game engines are aware of the scripting languages embedded in them. Browsers are largely unaware of anything but Javascript.
On top of that a lot of DOM manipulation is smoke and mirrors. While the exposed DOM APIs may provide you with some object, internally it's likely to be a collection of weird things in a trench coat due to all the optimisations that browser engines are doing.
And no one in their right mind will give you raw access to the underlying C++ object for many reasons, security being number one. And 30 years of assumptions that browsers have about these objects being number two.
But other than that, there are OpenGL bindings (e.g. in Emscripten) and things for audio like SoLoud, etc. You can usually put off writing JS for a while.
WASM on the browser runs inside the Javascript engine 'context', so you will always need to deal with some Javascript (for instance to load and start the WASM blob, but also for calling out into web APIs). Emscripten hides most of that from the programmer (depending what web APIs you need to talk to), but there's still plenty of Javascript running under the hood.
So, to clarify, "WEBassembly" was a terrible name in retrospect -- the majority of WASM use (that I have encountered) is server-side, not client-side.
The super tl;dr of WASM is that it's a universal bytecode format, an idea sort of like the JVM or the CLR. There are a few major benefits to this:
1. Languages can pick WASM as a compile target, and then run anywhere that WASM is supported. This includes the browser, the server, embedded devices, wherever.
2. WASM acts as a "Lingua Franca" for interop between languages, sort of like a C ABI. Any language that supports importing WASM bundles immediately gains support for calling code from any language that supports compiling to WASM.
It's trivial to write a program that calls functions from, e.g., Rust, Go, Zig, and C# in the span of 10 lines, because of WASM.
On the client side, you still need JS because WASM needs to interact with the browser's DOM APIs. I'm not convinced of the benefits of WASM for writing web apps.
(One exception is maybe Blazor for .NET, which is exceptionally well-done)
When I was learning assembly in university, there were lots of materials teaching how to build sophisticated apps by hand. WebAssembly (not the format itself, but the ecosystem) lacks tutorials, books, and toy projects; all the materials are about how to compile from language X. So I'm not so excited — there's not much relevant activity around.
I am currently writing a TypeScript project with Deno. I need to resize lots of very large images (20 MB+) in a reasonable timeframe, so I was considering using WebAssembly for it. Does that seem like a good fit? I have never used WASM before, so I am wondering if it's worth it.
Nice that the author is enthusiastic, but everything he praises has been around for many years or has a rather hobbyist touch (e.g. what he writes in "The Edge" section). WebAssembly has undoubted potential, but in the original use case (browser applications) it is still clearly too slow and inflexible; and interestingly, adoption seems to take place more in areas that are already well occupied by Node, CLIs, or the Java VM. I just recently looked again at the current state and studied current literature with the intention of writing a WASM backend for Oberon+, but the technology probably needs a few more years to be worthwhile. At the very least, a built-in GC (or at least a means to scan the stack) should be available; otherwise the implementation would become incredibly inefficient.
GC implementations on top of Wasm definitely have perf issues; e.g. Go compiled to Wasm has these with its GC. The main approach when compiling to Wasm now is to not use a GC, which is reasonable depending on the application in question. It's quite reasonable for games or graphics applications.
WebAssembly is like Macromedia Flash for Generation Z, just like Flash was Java Applets for Generation Y. History repeats itself. However unlike Flash, you can't just uninstall wasm because there's no official way to disable it. https://developers.slashdot.org/story/22/07/16/0450218/ask-s...
Also, you know, it's safe unlike both of those. And, you know, it's not deeply connected to representing a single language like both of those. And, I guess unlike those it's built by different companies following a standard rather than one company just pushing whatever they feel like that day of the week.
But you're spot on otherwise. It's like they repeated the good parts of history and avoided repeating the bad parts. Weird.
I don’t really get this: JVM bytecode itself has zero access to anything outside of some memory given to it. It can’t even print to stdout by itself; it needs a method with a native implementation for everything. So there really is no “safer” here.
Of course, libraries can be written that expose more and more functionality, but I don’t see anything inherently unsafe in the JVM as opposed to WASM, which works the exact same way. If anything, the JVM can’t even crash itself, while WASM is free to make all the old memory errors, just this time constrained to its virtual memory space.
The only thing I've ever seen webassembly used for online, was a hobby project where someone used it to get clang to run in the browser, which is cool, but kind of gimmicky, since no one wants to use a web page that needs to download hundreds of megs of dependencies. With Flash you could see the value like the Newgrounds animations it made possible, which would not have been possible otherwise. What has webassembly done for us besides turn the open web into opaque binary? Why is there no opt-out? It's the lack of consent that really gets to me. How many times do we have to say no?
What difference does it make why people want to say no? The issue is we're not being given a choice. WebAssembly is very new, it's controversial, it reduces transparency, and it increases the risk of exposing bugs in your hardware to every news site you visit as well as their ad exchanges. There should be a config option that lets people who are concerned about these things disable it.
Do these browser options, which I will note are currently disabled by default, not work for you?
| Browser | Option | Enabled by default |
|---------|-----------------------|--------------------|
| Firefox | dom.webgpu.enabled | Not yet! |
| Chrome | #enable-unsafe-webgpu | Not yet! |
I imagine this has something to do with why browser vendors have taken so long to ship WebGPU (another complaint I hear from WebGPU skeptics frequently), but what do I know.
I talk about how Flash and Java applets can compromise the host OS arbitrarily, and you respond with a paper about how WebAssembly cannot do that but may lead to the program itself running in unexpected, isolated ways. From the paper:
> The standard has been designed with security in mind, as evidenced among others by the strict separation of application memory from the execution environment’s memory. Thanks to this separation, a compromised WebAssembly binary cannot compromise the browser that executes the binary.
More reinventing the JVM/Flash VM, but yes, extremely similar concepts. This was not done ignorantly, though; WASM was much easier to build inside existing browser engines and gave a bit more freedom in design.
Nobody wants their water heater to run anything, except maybe water heater manufacturers, who can squeeze a penny out of you, and then you like that your water heater is cheaper.
Don't get me started on TVs, mine worked fine until it was updated and started hanging and refusing to turn off. I had a TV that worked well and then they made it stop working. This is the future of devices.
It's cool but they're going to use it to lock the web down and infest everything with DRM. You won't be able to scrape anything and we'll be in a worse position than when IE6 was #1.
I think that a combination of the Americans with Disabilities Act and the fact that websites need to be scrape-able to show up in search results should spare us the worst of this, hopefully...
I understand the excitement of the author, but I think they've simply added to the attack surface for very little ROI. IoT security is a HUGE risk, and securing boot and firmware updates is nontrivial. Even the supply-chain can be attacked, where the JTAG programmers are targets. The idea of being able to tune an ML model in the field is something that should be either built into the firmware with a dedicated HTTP port (like most devices do, by adding a lightweight LWIP server for config that is severely locked down), or a specialized App, which is what 99% of user-configurable IoT devices already do.
TL;DR - Webassembly is completely unrelated to IoT provisioning and configuration.
I'm not completely sure your TL;DR is a well-founded and fair distillation of your comment.
And your comment itself reads as if it's a skim of the post itself. Yes, IoT security is obviously a huge risk, yet Wasm would dramatically reduce the possibility space of many (but not all) types of attacks.