Author of wasmex[1] here (an elixir package which allows running WASM).
Next to the mentioned wasmtime, wasm3, and wamr, I would also add wasmer[2] to the list of capable WASM runtimes.
The great thing these runtimes offer is how easy they are to integrate into other programming language environments. E.g. with wasmex you could easily run WASM from your elixir application (or with wasmer from ruby[3] or many other languages if you want).
Imagine building shopify, but without needing to invent your own template language for customers to extend their shop-template[4]. They could provide a WASM extension API where shop-owners could let sandboxed WASM files render their shop. This would allow shop-owners to write templates in their favorite language instead of e.g. liquid and still be secure.
One area that I'm hoping WebAssembly will help with is running native extensions in a portable sandbox. That way libraries like nokogiri can be compiled to wasm ahead of time, then executed on a wasm runtime. There will be a perf hit but not having to wait for nokogiri to compile native extensions will be worth it.
I think the ability to run unknown code in a sandbox is probably the most interesting use case. This is particularly compelling in P2P computing projects.
I am really hoping the mobile platforms like iOS offer pure wasm sandbox APIs for this use case. This could be a powerful way to support programmability, app extensions, etc outside of the appstore. It could appease both Apple for security and developers for flexibility.
Proponents of wasm claim that the wasm sandbox was designed with security in mind from the very start. From Apple's point of view, they could offer a runtime with very specific and limited permissions and trust that plug-ins will respect them.
Interested to see a number of people mention one of the attractions of WebAssembly being the sandbox & customisation angles.
Sandboxed in-game avatar customisation was one of the motivations behind the "WebAssembly Calling Card" (WACC) project I released within the past couple of weeks, posted a Show HN here: https://news.ycombinator.com/item?id=24072304
"WACC: Like an avatar but WASM!"
The WACC specification defines how three 32-bit ints (returned via 3 calls to a function in a WASM module) get turned into triangles:
* 256x256 coord system.
* 15-bit color + 1-bit alpha per triangle vertex.
* 1 FPS!
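A minimal sketch of how such a packed vertex might be decoded, assuming a hypothetical field layout (8-bit x, 8-bit y, 15-bit RGB555 color, 1-bit alpha adds up neatly to 32 bits) — the actual WACC bit layout may differ, so treat this as illustration only:

```python
# Hypothetical decoding of one WACC vertex from a 32-bit int.
# Field layout (x:8 | y:8 | color:15 | alpha:1) is an assumption
# for illustration, not taken from the actual spec.
def decode_vertex(v: int) -> dict:
    x = (v >> 24) & 0xFF        # 0..255 coordinate
    y = (v >> 16) & 0xFF
    color = (v >> 1) & 0x7FFF   # 15-bit RGB555
    alpha = bool(v & 0x1)       # 1-bit alpha
    return {
        "x": x, "y": y,
        "r": (color >> 10) & 0x1F,
        "g": (color >> 5) & 0x1F,
        "b": color & 0x1F,
        "alpha": alpha,
    }

# One triangle = three such ints, one per call into the WASM module.
tri = [decode_vertex(v) for v in (0x00000001, 0xFF00FFFE, 0x80807C00)]
```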
In part, WACC & the WACC Viewer app is a demonstration of a WIP add-on for the Godot game engine that wraps the libwasmtime WebAssembly runtime so desktop Godot games/apps can call functions in WebAssembly modules. (The other demo app is a tool to help explore `.wasm` files: https://gitlab.com/RancidBacon/wasm-exploder)
The other main goal with WACC is to provide more visually interesting "first run" experiences for people embedding WASM runtimes and/or provide a creative outlet for starting to play around with generating/writing WebAssembly.
So, if you're interested in playing around with some non-browser WASM you might like to have a play with WACC--maybe implement the spec for a different host or create your own Calling Card. (I would particularly like to see it running on standalone hardware like a badge or similar.)
> This is powerful because you [can] use Rust or C++ as your scripting language.
Do people really want to use such complex, heavyweight languages for scripting?
> WebAssembly enables predictable and stable performance because it doesn’t require garbage collection like the usual options (LUA/JavaScript).
Do garbage collection pauses cause significant issues when embedding Lua or JavaScript? If so, I would expect implementations of those languages to switch to reference counting, but none of them are doing so.
> Extremely fast programs in environments where you can’t JIT for reasons.
To get good performance, don't WebAssembly implementations need to use JIT-compilation internally?
> Do people really want to use such complex, heavyweight languages for scripting?
This is a surprising reaction to me, given the history of Javascript and NodeJS in general, but you're not the only person I've seen bring this up.
Ask yourself, 'do people want to use such a strange, high-level language as Javascript for programming servers?' Of course they do, because the primary factor in choosing a language isn't whether or not it's perfectly suited for a specific task, it's how familiar you are with it and how much tooling you already have built up and available around it.
Being able to program your back-end and front-end in the same language is a massive productivity win. It allows you to do all kinds of cool things with architecture and code reuse, and most importantly you don't need to switch mental contexts as often while you're programming.
So if it was reasonable for JS devs to bring their scripting language to the server, it is just as reasonable for Rust/C++ devs to want to bring their server language to the browser. It's not about the language semantics; if someone is primarily familiar with Rust then they'll be faster building web apps in Rust than they would be in Javascript.
> Do people really want to use such complex, heavyweight languages for scripting?
The way I'm reading this, "scripting" should be taken to mean "embedding." Sometimes you do want to run significant amounts of code in a context where traditionally you'd want an embeddable language like Lua or JavaScript, but the fact that such languages are built for scripting (i.e., more optimized for connecting other high-performance components together in a flexible way than for being a high-performance component itself) becomes a limitation.
Some historical examples of where people have wanted complex languages in embedding contexts where scripting languages would have been an easy choice: Google Web Toolkit (write webapps in Java compiled to JS), ActiveX, eBPF, the very idea of loadable native-code modules itself. All of those approaches to the problem have their downsides.
> Do garbage collection pauses cause significant issues when embedding Lua or JavaScript?
> If so, I would expect implementations of those languages to switch to reference counting
Reference counting and tracing garbage collection are duals of each other (Bacon et al. 2004, "A Unified Theory of Garbage Collection," which is a great read). You're going to do the same amount of work in both cases if you manage to clean up the same garbage (i.e., if you deal with cycles), it's just a matter of when/how you do it. Technically, yes, reference counting gets you predictable performance (unless you solve the cycle problem with a backup tracing GC), but it may not have the characteristics you want. Knowing in advance that you're going to decref a bunch at an inconvenient time doesn't really help the underlying problem of GC pauses. :)
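This is easy to observe in CPython, which is itself reference counted: dropping the last reference to a large structure pays the whole deallocation cost synchronously, at the exact moment the decref happens — a tracing GC would pay comparable total cost, just scheduled differently:

```python
import time

# CPython is reference counted, so the "pause" is easy to see:
# freeing a million small objects happens synchronously, right
# when the last reference goes away.
big = [[i] for i in range(1_000_000)]  # many small heap objects

start = time.perf_counter()
big = None  # the decref cascades through every element right here
pause = time.perf_counter() - start

print(f"deallocation took {pause * 1000:.1f} ms")
```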
> To get good performance, don't WebAssembly implementations need to use JIT-compilation internally?
I think only if you are running wasm that changes dynamically / independently of the application, which on at least one "for reasons" platform (iOS) you can't do anyway. If you're willing to ship the wasm with your application, you can AOT-compile it.
> To get good performance, don't WebAssembly implementations need to use JIT-compilation internally?
No. JITs are great when you're executing code which is very dynamic in nature (either because of dynamic types, or because you're using a lot of interfaces) because the JIT can make use of speculative optimizations based on runtime information.
WASM, however, doesn't really support the notion of compound types, and the few types that are supported (int32, int64, float32 and float64) are required to be specified up front. WASM doesn't really have the notion of ad-hoc polymorphism either. In other words, there's very little for a JIT to speculate about, and so there isn't much insight a JIT can gain from having runtime information available. This might change as newer features are added to WASM in the future.
It’s like Java bytecode in the sense that it’s a bytecode spec for a virtual machine, but otherwise quite different. Wasm doesn’t include a GC or support for OO out of the box. It doesn’t even have strings.
That’s great for compiling languages like C++ or Rust, but it makes compiling higher-level languages like python or java to it much harder.
It is like Java bytecode, but with a design geared more towards supporting C.
Wasm in many ways is dumb. They could have easily adopted CLR or JVM bytecode. But no, the web is too "cool" for that. It's better to have a brand new VM that's slower and missing GC and a standard library.
The WASM instruction set is a tiny fraction of the size and complexity of the JVM and CLR instruction sets. One of WASM's primary design goals was to be able to prove safety and soundness properties for the language spec from the very beginning, and they've accomplished that [1]. Subjectively, I'd also argue that it is a much cleaner and more regular instruction set compared to JVM/CLR.
Because it has a lower level of abstraction than JVM/CLR (which, for example, have a built-in notion of what an "object" is, and every other language that you compile to it must have its own semantics shoehorned into that), WASM was able to become a good target for C, C++, and Rust.
The ultimate promise of WASM is that (given the right bindings) you can bring any C/C++/Rust codebase into the browser. E.g. see their demo of Doom running in the browser, utilizing WebGL bindings.
Any popular VM inevitably becomes as complex as those. Look at V8, it's grown into a JVM sized beast.
It would save countless hours of human effort to just adapt what's out there. It will be a decade before WASM has the kind of support the JVM and CLR do. By then it will have the same huge codebase that causes vulnerabilities.
> It's better to have a brand new VM that's slower and missing GC and a standard library.
I can understand a desire to build something new and better suited for a particular goal.
Didn't Java execution in the browser (loosely speaking) use to be a thing? I remember it causing security issues, and JVM/Java seems like a large enough thing that trying to mold it into something it isn't seems strange to me.
As for the missing GC, I mostly work with Rust so that doesn't really bother me, although I could see how it would be an issue with other languages. Also, I believe WASI mentioned in the article is the standard library.
WASM needs to move beyond MVP state and get first-class support for DOM APIs in the browser and some kind of first-class OS-agnostic APIs for outside the browser.
They were developed with different goals. WebAssembly is a stripped down version of asm.js that does not need to be compatible with Javascript semantics and is slightly closer to the hardware. In particular it shares with Javascript a strong security model.
from website: WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.
asm.js
an extraordinarily optimizable, low-level subset of JavaScript
http://asmjs.org/
so, IMO WebAssembly is not a stripped down version of asm.js even if they sold it as that to js devs to foster adoption
Yes, you are correct. What I wanted to say is that webassembly is a continuation in that direction.
At the time asm.js was ideated, the alternatives were browser plugins like NaCl, Flash, or Java.
The problem with asm.js is that the requirement to be a subset of javascript meant that some optimizations were impossible. wasm is another step in that direction.
I doubt that WASM is good as an embedded language, unless performance requirements outweigh the inconvenience.
Dynamic languages are great for experimentation and exploring the system, e.g. a console in the browser is a fantastic tool and web dev people tend to try things there first. By embedding wasm, you are making tinkering with the system you provide less enjoyable for your users.
One could argue that since many languages compile to wasm, you can pick whichever suits you best. But in reality, you are probably limited to languages with thin runtimes, e.g. Rust or C. Otherwise you will end up with a huge wasm blob. Imagine there are 2 Java extensions, a C# one, and something in Python, all running simultaneously. That means 3 different runtimes with a footprint by far exceeding that of the application logic in an extension.
Another burden is the bridging between the host and an extension. Unlike lua or js you can't pass and inspect objects; the only option is to marshal data as a byte blob. So if you were to pick a language no one has used to write extensions for the particular application before, the very first thing you have to do is write marshalers.
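As a sketch of what such a marshaler looks like on the host side (the `marshal_point` layout here is invented for illustration): structured arguments get packed into a byte blob, which would then be copied into the module's linear memory and handed over as a pointer + length pair:

```python
import struct

# Hypothetical host-side marshaling for an extension call.
# WASM imports/exports only exchange numbers, so structured data
# must be flattened into bytes the module can read from memory.
def marshal_point(x: float, y: float, label: str) -> bytes:
    encoded = label.encode("utf-8")
    # little-endian: two f64 coordinates, a u32 length prefix, then the bytes
    return struct.pack("<ddI", x, y, len(encoded)) + encoded

def unmarshal_point(blob: bytes):
    header = struct.calcsize("<ddI")
    x, y, n = struct.unpack_from("<ddI", blob)
    label = blob[header:header + n].decode("utf-8")
    return x, y, label
```

In a real embedding this blob would be written into the instance's linear memory and the extension would be given its offset and length; the extension needs a matching unmarshaler on its side.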
Last but not least, I disagree with the article calling S-expressions ugly and strongly believe in the opposite.
IMHO WASM is the perfect replacement for native plugin DLLs for DCC tools like Maya or Photoshop (but not a replacement for a scripting interface which such applications usually offer too in addition to DLL plugins).
Instead of having to distribute a multitude of DLLs for each OS/CPU combination, you only distribute a single WASM module, and the WASM runtime that's integrated into the DCC tool takes care of the platform-specific details. And WASM is also easier to sandbox than DLLs, so a crash in the WASM plugin can't take down the entire application.
> WebAssembly enables predictable and stable performance because it doesn’t require garbage collection like the usual options (LUA/JavaScript).
GC is a small part of that - the much bigger factor is that dynamic optimization is necessary for fast Lua or JavaScript. Wasm is designed to not need that (like most languages that are not dynamically typed).
Some wasm runtimes, e.g. V8 and SpiderMonkey, do tiered compilation, where all code is first compiled with a high-throughput streaming compiler, and then only the hot paths get recompiled with an optimizing compiler.
Tiered compilation in general does dynamic optimizations, definitely in JavaScript (runtime type collection, etc.) but also to a lesser extent in C# or Java (inlining, etc.). In wasm none of the tiers do dynamic optimizations AFAIK, but tiered compilation definitely helps there too, mostly with startup times.
I'm not sure I understand the use case of the author. He mentions languages (Rust, C++ etc.) that can be natively compiled to (almost) any platform, so what's the point of running them in WebAssembly? Like Java's write once, run everywhere?
My speculation: WASM will be used as default for edge computing and further cloud based containers. Cloud vendors will develop custom silicon optimized for WASM that won't be available for general public, maybe on ARM or RISC-V.
I doubt it. Like Java, WASM is intended to be used with an optimising JIT compiler. It doesn't make sense to try to implement Java or WASM directly in hardware.
ARM used to offer Jazelle, which could run Java bytecode directly on the CPU. It's now long dead. It makes more sense to use a sophisticated optimising JIT to produce efficient machine-code, than to make hardware to directly run an unoptimised IR.
Azul systems made custom hardware for around a decade, tuned for running Java and highly parallel (IIRC they had 256 core systems when Intel was selling 4 core systems). Also had hardware accelerated GC, so people with large heap problems would talk to them.
With the rising ubiquity of Intel VT extensions, they figured out how to put the VM into the JVM, particularly for accelerated GC, and it looks like they got out of the hardware business. All I see on their product page is runtimes. And now that Oracle wants to charge people for their JVM, I suspect they have more customers than ever.
I'm sure it is already in the works. WASM is the key to being independent of the ISA and allows switching to custom silicon without customers even knowing. That's how cloud vendors will drive the future in the race to get independent from x86 and ARM (it's very likely that a future owner of ARM will limit the licensing business). So even if there is no performance gain possible due to the reasons you have pointed out, it's probably the abstraction layer of the future for pretty much everything.
Why? As I just said, this kind of hardware approach doesn't make technical sense.
> WASM is the key to be independent of the ISA and allows to switch to custom silicon without having the customers to know.
We already have ISA-independence with JavaScript, Java bytecode, .Net, Python, and other high-level languages.
Again, WASM isn't intended for direct execution on hardware, it's intended to be fed to an optimising JIT compiler. Direct execution on hardware isn't how you get good performance out of an IR like this.
> That's how cloud vendors will drive the future in the race to get independent from x86 and ARM
I don't think cloud vendors care all that much who they buy their CPUs from. AWS offer instances on Intel, AMD, and ARM CPUs. If they really want ISA-freedom, their best bet is RISC-V.
> even if there is no performance gain possible due to the reasons you have pointed out, it's probably the abstraction layer of the future for pretty much everything.
There will never be a single abstraction layer for everything. Compiler engineering doesn't work that way.
These last few days I've been tinkering with wasm3 and Rust as a way to run Rust on the ESP32. The xtensa architecture has still not been merged into upstream LLVM, so it's quite hard to run Rust on these MCUs.
I think WASM can do hard-realtime, but that's not really the goal of wasm3; they advertise a 14x slow-down compared to "native". However, I think a lot of usages of MCUs are not actually "hard-realtime". My connected light bulb, for example, uses the ESP32; I don't think you would get a noticeable difference in perf by using wasm3, but you probably get a lot of benefits from running an interpreter, OTA updates for example.
Hard real-time is fundamentally about never agreeing to start something you can’t finish.
It has nothing at all to do with throughput. What matters is whether there are any unbounded or loosely bounded operations in your call tree. Like memory management bookkeeping.
I worked with Aicas’ hard real-time VM for three years, although we were not using those features. Among other things, they had to implement their own amortizing garbage collector to do it. You’d need something similar for Rust to handle object life cycles. It’s also common to divide memory into regions so that different classes of traffic can’t exhaust system memory.
Without proper security audits on wasm runtimes, they can only be used as hypothetical tools. The main purpose of wasm is a sandbox that's well defined, but that doesn't matter if its implementations are vulnerable.
You could use it to just "deploy software anywhere", and that's a neat idea, but it's not 'web' assembly; there are no protections that make it fit to run arbitrary code from the web.
I don't see how this is a problem. A lot of good came with the bad. This is like when people complain about Visual Basic or PHP. They were awful in so many ways, but they got a lot of people into programming, and those people did a lot of cool things.
If you are going to target WebAssembly outside of a browser environment, uh, why not just target LLVM, GNU, or uh, you know, Assembly... X86_64 or ARM are pretty standardized. Another part 1/1 "tutorial" with some brilliant insight as per usual.
There seems to be a section in the article called "Why would we want to run WebAssembly outside of a browser?" that I think addresses what you're asking?
LLVM bitcode is, I believe, not portable. Neither is assembly. Nor are the resulting binaries of either run within a sandbox.
This. I think WebAssembly is the sweet spot. It has portability but doesn't have any opinion about how the code should be structured. No opinion about GC or objects, etc.
Yes. Because WebAssembly provides a higher level of abstraction (safety, isolation, portability, manageability etc — you know, the “enterprise” stuff) over native code. See here: https://www.secondstate.io/articles/why-webassembly-server/
[1] https://github.com/tessi/wasmex/ [2] https://github.com/wasmerio/wasmer [3] https://rubygems.org/gems/wasmer [4] https://github.com/Shopify/liquid