"The main thing holding back wider adoption is a lack of system interfaces. File access, networking, etc. But it's just a matter of time before these features get integrated."
But then you've got to figure out and prevent all the security holes that can be introduced by adding file access, networking, etc. That's what killed the Java write-once, run-anywhere promise. Maybe put the whole thing into a container? Oops, looks like the container wasn't replaced after all (though perhaps it could be simplified).
True, but I suspect it'll be a lot easier to virtualise all those APIs through WASM than it is for a regular native binary. I mean, half the point of docker is that all syscalls are routed into an LXD container with its own filesystem and network. It should be pretty easy to do the same thing in userland with a wasm runtime.
And the nice thing about that is you can pick which environment a wasm bundle runs in. Want to run it on the browser? Sure! Want to give it r/w access to some particular path? Fine! Or you want to run it "natively", with full access to the host operating system? That can work too!
We just ("just") need wasi, and a good set of implementations which support all the different kinds of sandboxing that people want, and allow wasm containers to talk to each other in all the desirable ways. Make no mistake - this is a serious amount of work. But it looks like a very solvable problem.
I think the bigger adoption problem will be all the performance you leave on the table by using wasm instead of native code. For all its flaws, docker on linux runs native binaries at native speed. I suspect big companies running big cloud deployments will stick with docker because it runs code faster.
As somebody who's in the process of building a sandbox for RISC-V 64 Linux ELF executables, even I'm still on the fence.
The problem is that in WASM-land we're heading towards WASI and WAT components, which is similar to the .NET, COM & IDL ecosystems. While this is actually really cool in terms of component and interface discovery, the downside is that it means you have to re-invent the world to work with this flavor of runtime.
Meaning... no, I can't really just output WASM from Go or Rust and it'll work, there's more to it, much more to it.
With a RISC-V userland emulator I could compile that to WASM to run normal binaries in the browser, and provide a sandboxed syscall interface (or even just pass-through the syscalls to the host, like qemu-user does when running natively). Meaning I have high compatibility with most of the Linux userland within a few weeks of development effort.
But yes, threads, forking, sockets, lots of edge cases - it's difficult to provide a minimal spoof of a Linux userland that's convincing enough that you can do interesting enough things, but surprisingly it's not too difficult - and with that you get Go, Rust, Zig, C++, C, D etc. and all the native tooling that you'd expect (e.g. it's quite easy to write a gdbserver compatible interface, but ... you usually don't need it, as you can just run & debug locally then cross-compile).
> The problem is that in WASM-land we're heading towards WASI and WAT components, which is similar to the .NET, COM & IDL ecosystems. While this is actually really cool in terms of component and interface discovery, the downside is that it means you have to re-invent the world to work with this flavor of runtime.
At the application level, you're generally going to write to the standards + your embedding. Companies that write embeddings are encouraced/incentivized to write good abstractions that work with standards to reduce user friction.
For example, for making HTTP requests and responding to HTTP requests, there is WASI HTTP:
It's written in a way that is robust enough to handle most use cases without much loss of efficiency. There are a few inefficiencies in the WIT contracts (that will go away soon, as async lands in p3), but it represents a near-ideal representation of a HTTP request and is easy for many vendors to build on/against.
As far as rewriting the world, this happens to luckily not be quite true, thanks to projects like wasi-libc:
Networking is actually much more solved in WASI now than it was roughly a year ago -- threads is taking a little longer to cook (for good reasons), but async (without function coloring) is coming this year (likely in the next 3-4 months).
The sandboxing abilities of WASM are near unmatched, along with it's startup time and execution speed compared to native.
I'm really eager to see what happens in the near future with WAT & WASI, but I'm also very aware of seeing a repeat of DLL hell.
There are a few niches where standardization of interfaces and discoverability will be extremely valuable in terms of interoperability and reducing the development effort to bring-up products that deeply integrate with many things, where currently each team has to re-invent the wheel again for every end-user product they integrate with, with the more ideal alternative being that each product provides their own implementations of the standard interfaces that are plugged into interfaces.
But, the reason I'm still on the fence is that I think there's more value in the UNIX style 'discrete commands' model, whether it's WASM or RISC-V I don't think anybody cares, but it's much more about self-describing interfaces with discoverability that can be glued together using whatever tools you have at your disposal.
> I'm really eager to see what happens in the near future with WAT & WASI, but I'm also very aware of seeing a repeat of DLL hell.
I think we can at least say WebAssembly + WASI is distinct from DLL hell because at the very least your DLLs will run everywhere, and be intrinsically tied to a version and strict interface.
These are things we've just never had before, which is what makes it "different this time". Having cross-language runnable/introspectable binaries/object files with implicit descriptions of their interfaces that are this approachable is new. You can't ensure semantics are the same but it's a better place than we've been before.
> But, the reason I'm still on the fence is that I think there's more value in the UNIX style 'discrete commands' model, whether it's WASM or RISC-V I don't think anybody cares, but it's much more about self-describing interfaces with discoverability that can be glued together using whatever tools you have at your disposal.
A bit hard to understand here the difference you were intending between discrete commands and a self-describing interface, could you explain?
I'd also argue that WASM + Component Model/WASI as a (virtual) instruction set versus RISC-V are very different!
DLLs already run everywhere since CLR became cross platform.
Really this is walking an already trailed path, multiple times, we can even notice the parts grass no longer grows, how much it has been walked through.
The "universal compile target" facet of wasm is much less focal than the "universally embeddable" one.
The sandboxing is the keystone holding up the entire wasm ecosystem, without it no one would be interested in it same as nobody would run javacript in browsers without a sandbox (we used to, it was called flash, we no longer do).
I am curious why you focus so much on "universal runtime/compile-target do fail" rather than its actual strenght when at least in the case of java applet they failed because their sandbox sucked (and startup times).
Because WASM sandbox only works, to the extent hackers have not bothered attacking existing implementations to the same level as they did to Java applets, which is anyway one implementation among many since 1958 UNCOL idea.
Additionally, it is a kind of worthless sandbox, given that the way it is designed it doesn't protect against memory corruption, so it is still possible to devise attacks, that will trigger execution flows leading to internal memory corruption, possibly changing the behaviour of an WASM module.
> Nevertheless, other classes of bugs are not obviated by the semantics of WebAssembly. Although attackers cannot perform direct code injection attacks, it is possible to hijack the control flow of a module using code reuse attacks against indirect calls. However, conventional return-oriented programming (ROP) attacks using short sequences of instructions (“gadgets”) are not possible in WebAssembly, because control-flow integrity ensures that call targets are valid functions declared at load time. Likewise, race conditions, such as time of check to time of use (TOCTOU) vulnerabilities, are possible in WebAssembly, since no execution or scheduling guarantees are provided beyond in-order execution and post-MVP atomic memory primitives :unicorn:. Similarly, side channel attacks can occur, such as timing attacks against modules. In the future, additional protections may be provided by runtimes or the toolchain, such as code diversification or memory randomization (similar to address space layout randomization (ASLR)), or bounded pointers (“fat” pointers).
> The sandboxing abilities of WASM are near unmatched, along with it's startup time and execution speed compared to native.
Could you expand on this? I think everyone would agree with the first two of these - sandboxing is the whole point of WASM, so it would be excellent at that. And startup latency matters a great deal to WASM programs, again not surprised that runtimes have optimised that.
But execution speed compared to native? Are you saying WASM programs execute faster than native? Or even at the same speed?
Ah this could have been clearer -- the context is userland emulation (and I expand that to broadly mean emulation/VMs and even containers -- i.e. the current group of options). It's not that Wasm is likely to run faster than native, it's that it runs reasonably close to native speed when compared to the other options.
Separately, it also matters what you consider "native" -- it is possible to write programs in a more efficient language (ex. one without a runtime), apply reasonable optimizations, and with AOT/JIT be faster than what could be reasonably written idiomatically in the host language (e.g. some library that already exists to do X but just does it inefficiently).
> the downside is that it means you have to re-invent the world to work with this flavor of runtime.
This is at least one of the reasons we've been building thin kernel interfaces for Wasm. We've built two now, one for the Linux syscall interface (https://github.com/arjunr2/WALI) and one for Zephyr. A preliminary paper we wrote a year or so back is here (https://arxiv.org/abs/2312.03858), and we have a new one coming up in Eurosys 25.
One of the advantages of a thin kernel interface to something like Linux is really low overhead and low implementation burden for Wasm engines. This makes it easier to then build things like WASI just one level up, compiled against the kernel interface and delivered as a Wasm module. Thus a single WASI implementation can be reused across engines.
A thin kernel interface isn't a reimplementation of a kernel. The WALI implementation in WAMR is ~2000 lines of C, most of which is just pass-through system calls.
It does not throw away the Wasm sandbox. Sandboxing means two things: memory sandboxing and system sandboxing. It retains the former. For the latter you can apply the same kinds of sandboxing policies as native processes and achieve the same effect, or even do it more efficiently in-process by the engine, and do interposition and whitelist/blacklisting more robustly than, e.g. seccomp.
Alright, selectively forwarding the syscalls, now you're approaching the problem again where you need to reimplement parts of Linux to understand the state machine of what fd 432 means at any given point in time etc; basically you're implementing the ideas of gVisor in a slightly different shape, without being able to run preexisting binaries. Doesn't seem like a useful combination of features, to me.
Again, no. The security policies we have in mind can be implemented above the WALI call layer and supplied as an interposition library as a Wasm module. So you can have custom policies that run on any engine, such as implementing the WASI security model as a library. As it is now, all of WASI has to be implemented within the Wasm engine because the engine is the only entity with authority to do so. That's problematic in that engines have N different incompatible, incomplete and buggy implementations of WASI, and those bugs can be memory safety violations that own the entire process.
Thin kernel interfaces separate the engine evolution problem from the system interface evolution problem and make the entire software stack more robust by providing isolation for higher-level interfaces.
To filter out syscalls for complex policies, you need to understand the semantics of prior syscalls. For example, you need to keep track of what the dirfs in an unlinkat call refers to. And to keep track of FDs you need to reimplement fcntl. And so on.
This is why gVisor contains a reimplementation of parts of Linux.
Yes, but the engine doesn't need to do this, you can do this on your own time as a library. As there are literally dozens of Wasm engines now, thin kernel interfaces are a stable interface that they can all implement in exactly the same way[1] (simple safety checks + pass through) and then higher-level, more safe, and in some way better policies and APIs can be implemented as Wasm modules on top.
[1] This makes the interface per-kernel, not per-kernel x per-engine. It's also not per-kernel x per-kernel; engines would not be required to emulate one kernel on another kernel.
> let's delegate the hardest part back to the caller!
Obviously, an expert would write the security policies and make them reusable as libraries. Incidentally, that is what WASI is--it's not only a new security model, but a new API that requires rewrites of applications to fit with the new capability design.
> Try writing a seccomp policy for filesystem access
Try implementing an entire new system API (like WASI) in every engine! You have that problem and a whole lot more.
For comparison, implementing WASI preview1 is 6000 lines of C code in libuvwasi--and that's not even complete. Other engines have their own, less complete and broken, buggy versions of WASI p1. And WASI p2 completely upends all of that and needs to be redone all over again in every engine.
Obviously, WASI p1 and p2 should be implemented in an engine-independent way and linked in. Which is exactly the game plan of thin kernel interfaces. In that sense, at the very least thin kernel interfaces is a layering tool for the engine/system API split that enhances security and evolvability of both. Nothing requires the engine to expose the kernel interface, so if you want a WASI only engine then only expose WALI to WASI and call it a day.
WASM approach to injecting the host-interaction API seems to me to be similar to what EFI does. You are provided with a table full of magical functions on startup, and that's how you can interact with the host. Some functions weren't provided there? Tough luck.
As someone who has written a RISC-V sandbox for that purpose, I say stay the course. We need more competition to WASM. At the end you'll find that register machines make for faster interpreters than Harvard architectures. You can have a look at libriscv or message me if you need any help.
This assumes that everyone implements the same set of APIs that work in the same way.
More likely, the browser will implement some that make sense there, some browsers will implement more than others, Cloudflare workers will implement a different set, AWS Lambda will implement a different set or have some that don't work the same way... and now you need to write your WASM code to deal with these differing implementations.
Unless the API layer is, essentially, a Linux OS or maybe POSIX(?) for Docker, which I doubt it would be as that's a completely different level of abstraction to WASM, I don't have a lot of faith in this being a utopian ideal common API, given that as an industry we've so far failed almost every opportunity to make those common APIs.
Good point! This is the hard work that people are undertaking right now.
Things are going to change a little bit with the introduction of Preview3 (the flagship feature there is async without function coloring), but you can look at the core interfaces:
This is what people are building on, in the upstream, and in the bytecode alliance
You're absolutely right about embeddings being varied, but the standard existing enforces the expectations around a core set of these, and then the carving out of embeddings to support different use cases is a welcome and intended consequence.
WASI started as closer to POSIX now, but there is a spectacular opportunity to not repeat some mistakes of the past, so some of those opportunities are taken where they make sense/won't cause too much disruption to people building in support.
It isn't the fault of the group that suggests and standardizes protocols, it is everyone thinking they are smarter and they can do it better is the problem.
> the flagship feature there is async without function coloring
Correct me if I’m wrong, but that’s only possible if you separate runtime threads from OS threads, which sounds straightforward but introduces problems relating to stack-lifetimes in continuations so it introduces demands on the compiler and/or significant runtime memory overhead - which kinda defeats the point of trying to avoid blocking OS threads in the first place.
I’m not belittling the achievement there - I’m just saying (again, correct me if I’m wrong) there’s a use-case for function-colouring in high-thread, high-memory applications.
…but if WASI is simply adding more options without taking anything away then my point above is moot :)
Further to this, my (very basic) understanding is that the actual threading implementation will be left up to the integrator, so some implementations may not actually implement any concurrency (a little like the Python GIL in a way), while others may implement real concurrency, therefore meaning that subtle threading bugs could be introduced that wouldn't be seen until you run in other environments.
> Correct me if I’m wrong, but that’s only possible if you separate runtime threads from OS threads, which sounds straightforward but introduces problems relating to stack-lifetimes in continuations so it introduces demands on the compiler and/or significant runtime memory overhead - which kinda defeats the point of trying to avoid blocking OS threads in the first place.
Correct -- note that the async implementation does not address parallelism (i.e. threading) -- it's a language +/- runtime level distinction.
The overhead is already in the languages that choose to support -- tokio in rust, asyncio in python, etc etc. For those that don't want to opt in, they can keep to synchronous functions + threads (once WASI threads are reimagined, working and stable!)
You can actually solve this problem with both multiple stacks and a continuation based approach, with different tradeoffs.
> I’m not belittling the achievement there - I’m just saying (again, correct me if I’m wrong) there’s a use-case for function-colouring in high-thread, high-memory applications.
>
> …but if WASI is simply adding more options without taking anything away then my point above is moot :)
Didn't take it as such! The ability to avoid function coloring does not block the implementations of high-threads/high-memory applications, once an approach to threading is fully reconsidered. And adding more options while keeping existing workflows in place is definitely the goal (and probably the only reasonable path to non-trivial adoption...).
How to do it is quite involved, but there are really smart people thinking very hard about it and trying to find a cross-language optimal approach. For example, see the explainer for Async:
There are many corners (and much follow up discussion), but it's shaping up to be a pretty good interface, and widely implementable for many languages (Rust and JS efforts are underway, more will come with time and effort!).
True. Its probably worth creating a validation suite for wasi which can check that any given implementation implements all the functions correctly & consistently. Like, I'm imagining a wasm bundle which calls all the APIs it expects in every different configuration and outputs a scorecard showing what works properly and what doesn't.
I suspect you're right - unless people are careful, it'll be a jungle out there. Just like javascript is at the moment.
I understand your point but sadly I think it's too idealistic (not that we shouldn't strive for these goals). We already have those sorts of tests for browsers, and browser compatibility is still a problem. We have acceptance tests for other areas, like Android's CTS tests, but there are still incompatibilities.
That also assumes that everyone involved wants compatibility, and that's unlikely. Imagine a world where every WASM implementation is identical. If one implementation decides to change something to implement an improvement to differentiate themselves in the market, they'll likely win marketshare from the others.
Most companies implementing WASM will tend to want to a) control the spec in their own favour, and b) gain advantages over other implementations. And this is why we can't have nice things.
> I understand your point but sadly I think it's too idealistic (not that we shouldn't strive for these goals). We already have those sorts of tests for browsers, and browser compatibility is still a problem. We have acceptance tests for other areas, like Android's CTS tests, but there are still incompatibilities.
>
I think the browser problem is a marketshare/market power problem, and Wasm doesn't have that problem.
Also, I'd argue that compat tests for JS engines and browsers are an overall positive thing -- at least compared to the world where there is no attempt to standardize at all.
> That also assumes that everyone involved wants compatibility, and that's unlikely. Imagine a world where every WASM implementation is identical. If one implementation decides to change something to implement an improvement to differentiate themselves in the market, they'll likely win marketshare from the others.
This is a good thing though -- as long as it happens without breaking compatibility. Users are very sensitive to changes that introduce lock-in/break standards, and the value would have to be outsized for someone to forgo having other options.
> Most companies implementing WASM will tend to want to a) control the spec in their own favour, and b) gain advantages over other implementations. And this is why we can't have nice things.
I think you can see this playing out right now in the Wasm ecosystem, and it isn't working out like you might expect. There are great benefits in building standards because of friction reduction for users -- as long as there is a "standards first" approach, people overwhelmingly pick it if functionality is close enough.
Places that make sense to differentiate are differentiated, but those that do not start to get eaten by standards.
I think organizations that are aware of this problem and attempt to address is directly like the Bytecode Alliance are also one of the only forms of bulwark against this.
> I think the browser problem is a marketshare/market power problem, and Wasm doesn't have that problem.
No, it really isn’t.
For more than the last two decades every browser bar IE looked towards compatibility and only included differences as browser-specific extensions.
And even when Microsoft eventually caved and started the Edge project to create a compatible browser, they ended up admitting defeat and pivoted to Chromium themselves.
Maybe I'm just not understanding, but I'm not sure how this precludes it being a marketshare problem -- the thing is that the marketshare leader doesn't have to worry about compatibility/being interoperable.
> And even when Microsoft eventually caved and started the Edge project to create a compatible browser, they ended up admitting defeat and pivoted to Chromium themselves.
This can be interpreted as a problem of marketshare not staying balanced. It may have shifted hands, but the imbalance is the problem -- if Chrome had to deal with making changes that would be incompatible with half the users that visit sites on Chrome, they'd be forced to think a lot more about it.
This doesn't mean they can't add value in the form of non-standardized extensions -- that's not a desirable goal because it would stifle innovation. The point is that at some point if users are on browser Y and they get a "this site only runs on browser X", they're just not going to visit that site, and developers are going to shy away from using that feature. In a world with lopsided marketshare, there's not much incentive for the company with the most marketshare to be interoperable.
IE hasn’t been the market share leader in a long time and couldn’t even retain compatibility with itself, let alone any ACID tests nor wider formalised standards.
And these days the problem is simply that the specifications are so complex and fail mode so forgiving that it’s almost impossible for two different implementations to output entirely the same results across every test suite.
Neither of these are market leader problems. The former is just Microsoft being their typical shitty selves. While the latter is a natural result of complex systems designed for broad use even by non-technical people.
> We already have those sorts of tests for browsers, and browser compatibility is still a problem. We have acceptance tests for other areas, like Android's CTS tests, but there are still incompatibilities.
Yeah - but its barely a problem today compared to a few decades go. I do a lot of work on the web, and its pretty rare these days to find my websites breaking when I test them on a different web browser. That used to be the norm.
I think essentially any time you have multiple implementations of the same API you want a validation test suite. Otherwise, implementation inconsistencies will creep in. Its not a wasm thing. Its just a normal compatibility thing.
Commonmark is a good example of what doing this right looks like. The spec is accompanied by a test suite - which in their case is a giant JSON list containing input markdown text and the expected output HTML. Its really easy to check if any given implementation is commonmark compliant by just rendering everything in the list to HTML and checking that the output matches:
> Most companies implementing WASM will tend to want to a) control the spec in their own favour, and b) gain advantages over other implementations. And this is why we can't have nice things.
Your cynicism seems miscalibrated. We have hundreds of examples of exactly this kind of successful cross-company collaboration in computing. For example, at the IETF you'll find working groups for specs like TCP, HTTP, BGP, Email, TLS and so on. The HTTP working group alone has hundreds of members, from hundreds of companies. WhatWG and the W3C do the same for browser APIs. Then there's hardware groups - who manage specs like USB, PCI / PCIe, Bluetooth, Wifi and so on. Or programming language standards groups.
Compatibility can always be better, but generally its great. We can have nice things. We do have nice things. WASM itself is an example of that. I don't see any reason to see these sort of collaborations stopping any time soon.
> half the point of docker is that all syscalls are routed into an LXD container with its own filesystem and network. It should be pretty easy to do the same thing in userland with a wasm runtime.
This is a serious misunderstanding of how containers work.
Containers make syscalls. The Linux kernel serves them. Linux kernel has features that let one put userspace processes in namespaces where they don't see everything. There is no "routing". There is no "its own filesystem and network", just a namespace where only some of the host filesystems and networks are visible. There is no second implementation of the syscalls in that scenario.
For WASM, someone has to implement the server-side "file I/O over WASI", "network I/O over WASI", and so on. And those APIs are likely going to be somewhat different looking than Linux syscalls, because the whole point is WASM was sandboxing.
> True, but I suspect it'll be a lot easier to virtualise all those APIs through WASM than it is for a regular native binary. I mean, half the point of docker is that all syscalls are routed into an LXD container with its own filesystem and network. It should be pretty easy to do the same thing in userland with a wasm runtime.
All of this sounds too good to be true. The JVM tried to use one abstraction to abstract different processor ISAs, different operating systems, and a security boundary. The security boundary failed completely. As far as I understand WASM is choosing a different approach here, good. The abstraction over operating systems was a partial failure. It succeeded good enough for many types of server applications, but it was never good enough for desktop applications and system software. The abstraction over CPU was and is a big success, I'd say.
What exactly makes you think it is easier with WASM as a CPU abstraction to do all the rest again? Even when thinking about so diverse use-cases like in-browser apps and long running servers.
A big downside of all these super powerful abstraction layer is reaction to upstream changes. What happens when Linux introduces a next generation network API that has no counterpart in Windows or in the browser. What happens if the next language runtime wants to implement low-latency GC? Azul first designed a custom CPU and later changed the Linux API for memory management to make that possible for their JVM.
All in all the track record of attempts to build the one true solution for all our problems is quite bad. Some of these attempt discovered niches in which they are a very good fit, like the JVM and others are a curiosity of history.
docker and lxd are competing projects. Docker does not use lxd to launch containers. lxd was written by the lead dev (at canonical) of lxc which was not as polished as docker but sort of kind of did the same thing (ran better chroots)
They both use Linux kernel features such as control groups and namespaces. When put together this is referred to as a container but the kernel has zero concept of “a container”.
So basically virtual machines, those we can spin up with lxd or firecracker. Not that they don't have file access, it's just that's finnicky compared to containers (I'm thinking docker/podman)
One can use something like https://github.com/google/gvisor as a container runtime for podman or docker. It's a good hybrid between VMs and containers. The container is put into sort of VM via kvm, but it does not supply a kernel and talks to a fake one. This means that security boundary is almost as strong as VM, but mostly everything will work like in a normal container.
E.g. here's I can read host filesystem even though uname says weird things about the kernel container is running in:
$ sudo podman run -it --runtime=/usr/bin/runsc_wrap -v /:/app debian:bookworm /bin/bash
root@7862d7c432b4:/# ls /app
bin home lib32 mnt run tmp vmlinuz.old
boot initrd.img lib64 opt sbin usr
dev initrd.img.old lost+found proc srv var
etc lib media root sys vmlinuz
root@7862d7c432b4:/# uname -a
Linux 7862d7c432b4 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux
Gvisor let's one have strong sandbox without resorting to WASM.
Yes, but note the difficulty of building a specialized I/O or drivers for controlling access in a virtual machine versus the WASI model.
Also, startup times are generally better w/ availability of general metering (fuel/epochs) for example. The features of Wasm versus a virtual machine are similar but there are definitely unique benefits to Wasm.
The closer comparison is probably the JVM -- but with support for many more languages (the list is growing, with upstream support commonplace).
Except developers have consistently chosen not to embed the JVM, CLR, or IBMi.
wasmtime (the current reference runtime implementation) is much more embeddable than these other options were/are, and is trivially embeddable in many languages today, with good performance. On top of being an option, it is being used, and WebAssembly is spreading cross-language, farther than the alternatives ever reached.
These things may look the same, but just like ssh/scp and dropbox, they're not the same once you zoom/dig in to what's different this time.
> As if developers are consistently chosing to embedd WASM, just wait after the hype cycle dies.
>
> What we have now is lots of hype, mostly by folks clueless of their history, in the venture to sell their cool startup idea based on WASM.
I don't think there's much of a hype cycle -- most of the air has been sucked out of the room by AI.
There aren't actually that many Wasm startups, but there are companies leveraging it to great success, and some of these cases are known. There is also the usefulness of Wasm as a target, and that is growing -- languages are choosing to build in the ability to generate wasm bytecode, just as they might support a new architecture. That's the most important part that other solutions seemingly never achieved.
The ecosystem is aiming for a least-changes-necessary approach -- integrating in a way that workflows and existing code does not have to change. This is a recipe for success.
I think it's a docker-shaped adoption curve -- most people may not think it is useful now, but it will silently and usefully be everywhere later. At some point, it will be trivial to ship a small WASM binary inside (or independent of) a container, and that will be much more desirable than building a container. The artifact will be smaller, more self-describing, work with language tooling (i.e. a world without Dockerfiles), etc.
I believe that the two most likely futures for the wasm-as-puglin-engine are mod in games and applications with a generic extension interface.
IMO in games developers would prefer something with a reasonable repl like lua or javascript (as a game is already assumed to be heavy if the mods are not performance critical running a V8 should not be a problem) for extensions in generic complex applications (things like, VSCode, Blender, Excel, etc.) I would posit that the wasm sandbox could be a really good way to enable granular-permission secure extenstion.
I don't understand WASM, but I read that a big draw of WASM is it's ability to provide portability to any language. This would mean Python libraries that depend on an unpopular C library (which could be lost to time) could instead be a single WASM blob.
Assuming equivalent performance, which I understand might not be the case, is there merit to this idea? Or is there nothing new WASM provides?
It does currently present a lot of restrictions as compared to what you could do in a container. But it's good enough to run lots of real world stuff today.
I think that looking at it in terms of Embeddability is more useful compared to portability.
In the sense that compiling C to any language is easily done without too many problems, what wasm allow is to have a secure and performant interface with that language.
For example IIRC one of the first inclusions of wasm was to sandbox many of the various codecs that had regular security vulnerabilities, in this Wasm is neither the first nor the only approach, but with a combination of hype and simplicity it is having good success.
> I don't understand WASM, but I read that a big draw of WASM is it's ability to provide portability to any language. This would mean Python libraries that depend on an unpopular C library (which could be lost to time) could instead be a single WASM blob.
Yes, this is a key value of WebAssembly compared to other approaches, it is a relatively (compared to a container or a full blown VM) lightweight way to package and distribute functionality from other languages, with high performance and fast startup. The artifact is minimal (like a static/dynamic library, depending on how much you've included), and if your language has a way to run WASM, you have a way to tap into that specialized computation.
I could be wrong, but I can't find anything about how to include your C dependencies with IronPython when you compile. Instead I see that IronPython has limited compatibility with the Python ecosystem because of Python libraries using C.
Contrasted with WASM where you can write in any language and bring the ecosystem with you, since it all compiles down.
Fully agree with your point here, but wanted to point out that including C dependencies is actually one of the biggest reasons why Python support is hard for WebAssembly too.
Bolstering your point -- smart-and-hardworking people are working on this, which results in:
It's a fun ecosystem -- the challenge is huge but the work being done is really fundamentally clean/high quality, and the solutions are novel and useful/powerful.
Java's SecurityManager was cool at the start, but over the years there was a steady series of ways to side step it. And now Oracle are wholly deleting it in JDK 25 - https://openjdk.org/jeps/486. It was a stand out feature IMO, and I'll miss it.
That's run-on-the-other-device-I-manage, not run-anywhere. Run-anywhere was used in the meaning of run-by-untrusting-parties, like javascript on the web.
> But then you've got to figure out and prevent all the security holes that can be introduced by adding file access, networking, etc. [...] Maybe put the whole thing into a container?
Since this is an emerging ecosystem, why not take a different spin on security, and instead try e.g. capabilities? Instead of opening a connection to the DB, or a listening socket, you get FDs from your runtime. Instead of a path where you can read/write files, such as assets or local cache, you get a directory FD from openat (not sure right now if that could be bypassed with "..", but you get the idea).
Bonus: you can get hot code reloading for very cheap.
Java write once run anywhere is fine. Java people don't generally bother with containers because there's no point, the JVM already solves the same problem.
Those fat runtimes are part of the portability lie. Runs everywere ... where the runtime is installed.
You can make the same argument for any compiled language if you call QEMU your runtime.
And this isn't just theoretical. Games written in compiled languages know to bundle all their dependencies. Games written in Java often expect you to have a JRE. And more often than not "a JRE" means the official Sun JRE (maybe even a specific version range) because too many Java applications use non-portable interfaces.
> You can make the same argument for any compiled language if you call QEMU your runtime.
Only if you ship a QEMU-compatible image, and I don't think anyone does. The usability and integration with the host system is too poor.
> Games written in compiled languages know to bundle all their dependencies. Games written in Java often expect you to have a JRE.
You can't get away from having to have some interface between the host system and the program, but so far the JVM is the least bad one. When laptops started shipping with ARM processors, both docker images and games that had compiled in their dependencies broke, while programs that were shipped as JARs worked fine.
- was designed with a lot of tight system integration foremost, sand boxing being secondary
- a ton of the sandbox enforcement where checks run in the same VM/code as the code they where supposed to sandbox, like you Java byte code decided if Java byte code should be able to access file IO etc.
- Java Applets are, at lest for somewhat more modern standard, a complete security nightmare _in their fundamental design_, not just practically due to a long history of failure.
- a lot of the security design was Java focused, but the bytecode wasn't limited to only representing possible Java code
- Java "sandboxing" targeted a very different use-case/context the "WASM replace containers" blog is speaking about, mainly the blog is about (maybe micro-) servies while Java sandboxing was a lot about desktop application. I.e. more comparable with flatpack and their sandboxing is also (sadly) more about compatibility then security (snap does that better, but has other issues).
And especially the last point one is one to really important as we are not speaking about WASM replacing sandboxing e.g. for dev tools and similar but sandboxing for deployment of micro services written with it in mind. In such context
1. you (should))always run semi trusted code, not untrusted code
2. when giving access to other resources (e.g. file system) it's often in a context where you normally don't need any form of dynamic access management (like you need on a desktop) which means _all the tech underlying to containers can be used with WASI_. Like there is no reason not to still use cgroups, dropping privileges and co integrated in your WASI VM in the same way docker and co uses them.
3. (I kinda thing) there is a (subtle/very slow) trend to not rely only on container isolation but e.g. have a firecracker micro vm run multiple closely coupled containers (a pod/side care container) but place not closely coupled containers in different micro VMs.
The true challenge isn't WASI, but that it's competing with docker->kubernets where docker is "one thing fit's all (badly)" solution which can not only run your services but can also run all kind of dev tooling, legacy applications etc. without requiring any changes to them and can (badly but often good enough) simulate your deployment locally with compose. Then to make the competition hard kubernets has been become somewhat of a "standard" interface to deployment, especially in the cloud, this might suck, but also mean you use OIC images both locally in in production. And that is what the WASI for service sandboxing use-case is competing with OIC images and software running them, nut just docker.
> - was designed with a lot of tight system integration foremost, sand boxing being secondary
> Java sandboxing was a lot about desktop application
I think this is a false history. Java was designed for interactive television. As in cable television set top boxes receiving apps broadcast over the cable and executing them.
Javas focus has shifted multiple times through history and at least starting with it becoming generally available hasn't really been a single purpose thing, so I don't think this is really making any difference in the argument.
But I could have formulated some thing better:
> sand boxing being secondary
sand boxing for _security_ being secondary, for compatibility it was primary (e.g. like flatpack) and even if it wasn't what was seen as acceptable security for the 90th isn't anywhere close to it for today in most cases
Also "tight system integration" was in context of "highly sandboxes" things, which isn't necessary quite the same. E.g. in context of "highly sandboxes" things rusts standard library and support for C-API libraries is tightly system integrated. But if you speaking in a context of e.g. windows you need to add the official bindings to the various MS specific system libraries to count as "tightly integrated" and even then you could argue it's not quite there due to not having first class COM support.
Anyway I think the most important takeaway form Java sandbox security is "never run the code enforcing your sandbox as part of the inside of your sandbox" because a huge amount of security issues can be traced back to that (followed by the way Java applets have been embedded in the browser wrt. "privileged" applets being really really bad designed in a ton of ways).
Check out the WASI repository. For people not understanding what WASI is, I always tell them it's something like a reference/specification of cross platform syscalls that have to be implemented in WASM VMs.
Of course, access to such things always come with assumptions of control and policies that rely on behavioral analysis. So I hope that something similar to host and web application firewall rules will come out of this, similar to how deno does it.
Yes. Wasm does to my knowledge not have an answer for this, even if some projects patch in their bridge logics. Networking and file systems, and the permission model, is "the rest of the fucking owl". Linux isn't standardized, but at least it's Linux. Without consensus on the fundamental APIs I don't see how we can get to a platform agnostic experience. Even an https call isn't simple: you need TCP and you need to pull root certs from the env, at the very least. Where's the API for that?
I hope for a much more near term bright future for WASM: language interop. The lowest common denominator today is C, and doing FFI manually is stone age. If you can leverage WASM for the FFI boundary, perhaps we can build cross language applications that can enjoy the strengths of all libraries, not just those outside of our language silos.
WASM solves a different problem to containers. Where WASM does well is in running sandboxed code efficiently, because that's where it started out. I think WASM will likely take over as the standard for shipping things like Functions-as-a-Service implementations, and other forms of plugins, where one host application/server of some kind wants to efficiently run pieces of untrusted logic.
Containers don't solve that problem. They aren't a particularly good security boundary, and they are much heavier weight, in terms of bytes and startup costs, than WASM binaries, because they are deeply integrated into the OS for networking, etc. However, when what you need to do is ship a binary with a bunch of accoutrements, dependencies, files, etc, and then run multiple processes, multiple threads, and use more of the OS primitives, containers are an ergonomic way to do that, and that suits Infrastructure-as-a-Service much more closely.
Container security boundary can be much stronger if one wants.
One can use something like https://github.com/google/gvisor as a container runtime for podman or docker. It's a good hybrid between VMs and containers. The container is put into sort of VM via kvm, but it does not supply a kernel and talks to a fake one. This means that security boundary is almost as strong as VM, but mostly everything will work like in a normal container.
E.g. here's I can read host filesystem even though uname says weird things about the kernel container is running in:
$ sudo podman run -it --runtime=/usr/bin/runsc_wrap -v /:/app debian:bookworm /bin/bash
root@7862d7c432b4:/# ls /app
bin home lib32 mnt run tmp vmlinuz.old
boot initrd.img lib64 opt sbin usr
dev initrd.img.old lost+found proc srv var
etc lib media root sys vmlinuz
root@7862d7c432b4:/# uname -a
Linux 7862d7c432b4 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux
FWIW the performance loss got a lot better in ~2023 when the open source gVisor switched away from ptrace. (Google had an internal non-published faster variant from the start.)
You've shown that wasm solves a sandboxing/security concern that containers don't.
But wasm also has done a lot to enable a portable easy to ship blob/inage, and is rapidly tackling the distribution problems that has been key to container's success.
I'm not sure what we can really put firmly on containers side here, what makes them distinct or lasting as compared to what wasm is shaping up to be.
The various wasm runtimes already offer an array of low level threading and networking capabilities. Higher level services like http servers can be provided by runtimes with very optimized native backends, that have excellent performance characteristics but still "appear" as regular wasm. There may be niche long term users of containers for these reasons, but it's unclear to me that the systems programming capabilities will be much of a moat for containers: if anything, I think we'll see wasm implementations competing to provide the fastest performance for the same well specified interfaces in a hot war competition.
The main counter-factor against wasm that I see is just interia. Change is slow. These runtimes are still early. Wasm language targets have been around for a while but at various quality/polish levels. Wasm components are still effectively brand new, a year and a month after release, and the specs are still undergoing major updates to figure out how exactly async is going to work & there's very few runtimes keeping pace with this radical upshift. Its happening & I believe in wasm, but this is a long change and the capabilities aren't really truly here yet.
Docker-like containers will likely exists as long as there linux servers, maybe the trendiness will leave them behind, but they are a plainly good technology to deploy and distribute application with many fantastic usecases.
I too do see a future where Wasm is way more widespread but as of now we are deploying a small app with a MySQL in single docker container. I suspect that we won't be running MySQL compiled to a Wasm container anytime soon*
* I suspect that we could with the right transpilations/interpreters but for sure we won't
I think WASM will likely take over as the standard for shipping things like Functions-as-a-Service implementations, and other forms of plugins, where one host application/server of some kind wants to efficiently run pieces of untrusted logic.
I really hope WASM takes over as a plugin mechanism. I don't think it will lead to fragmentation because communities will form around their preferred language, it will
just not be enforced anymore. And forcing a plugin language did not work so successfully to prevent fragmentation anyway, see GNU Guile or vimscript.
Yeah and that's where it makes a lot of sense. Less sense is replacing containers where you're using much more of the stack. FaaS hasn't replaced VMs, Kubernetes, Containers, etc, so it seems unlikely that WASM would for similar reasons.
Well, not quite, that's an oversimplification. As I mentioned I think there's a place for WASM as the common unit of business logic, for things like plugins.
Implementing a FaaS runtime like Lambda is actually quite hard to do in a way that is both safe and multi-language. Compiling down to a safe, sandboxed bytecode, is not a bad idea, regardless of whether you're doing that to run it in a user's browser, or to run a FaaS function on some cloud infra, or to write a server-side plugin to a SaaS product.
This is what CUE is exploring for WASM, the ability to have custom functions or plugins that can be shipped with modules and run anywhere. It will be super cool, but they are blocked on upstream changes to the WASM runtime
Yeah sure, there was a time people believed that about javascript, and node.js ended up being a tool used to build website middle-ends and BFF to connect to actual backends.
The user's device which can be a smartphone or a latpop and the developer's device which can be a 24 hour online server or a 24 hour online server, are just two completely different devices that cannot be abstracted away.
WASM is a technology that was born for the frontend, and js was a technology that was born for the frontend, you can for sure metastasize the web front towards the back, but then you end up with a web bias in the frontend (why not make the backend in swift, or Java for phones?, or whatever language will be used for new tech like LLM voice assistants?) and of course a frontend bias in the backend.
Not a good look, let's stop it with the full stack thing, let's specialize, half of use do backend, half of you do frontend, and then we keep on specializing, if everyone does everything we'll cover no ground.
> WebAssembly is a true write-once-run-anywhere experience. (Anywhere that can spin up a V8 engine, which is a lot of places these days.)
This is true if your wasm code is purely computational, but if it interacts with the outside world it’s a different story. Each V8 runtime has a subtly different interface, so code that runs on cloudflare V8 might not run in Bun or Deno. Not to mention if you want to support WASI too, which is a different set of bindings even though it’s still WebAssembly.
Love or hate Docker, part of its success was that POSIX was already a fairly established standard and there weren’t a lot of vendors driving it in different directions.
As PlatformOps (formerly DevOps (formerly SRE (formerly Ops))), either this was hilarious satire or ChatGPT Ketamine trip. I'm not sure.
> In the year 2030, no one will remember Kubernetes.
So what's going to handle out rolling out new versions of your WASM, setting up whatever Reverse Proxy you pick and other stuff involved getting. A bunch of scripts you wrote to do this for you? https://www.macchaffee.com/blog/2024/you-have-built-a-kubern...
> The promise of DevOps has been eroded by complicated tooling and tight coupling of program-container-linux. In my experience, developers want to write code and ship features to hit their quarterly goals.
Here is why we ended up here: In my experience, developers want to write code and ship features to hit their quarterly goals.
Sure, and my angry PlatformOps (formerly DevOps (formerly SRE (formerly Ops))) is stuck picking up the pieces because we are getting crap while we desperately paging you because we have no clue what "duplicate street key" in your logs mean. On top of that, InfoSec dropped us 10 tickets about code level library vulnerabilities in your docker container but all the Developer's Managers got together to convince them it was our problem somehow.
So we are forced to write this bundle of terribly written Ops type software in attempt to keep this train on the tracks while you strap rockets to cafe car.
WASM replacing containers is just a solution looking for a problem. Containers solved a problem of "How do we run two different versions of PHP on a single server without them colliding." Most of the containers problem is higher level DevOps problems that we haven't been able to solve and WASM isn't going to change that. I deal with a team that writes 100% Golang so their code is like WASM as it's ship binary and done. Yea, they begged for Kubernetes because it works a ton better then custom Ansible they wrote to keep these VMs/Load Balancer in sync.
WASM is powering Cloudflare workers in pretty much the fashion the guy describes and it does solve the problem of big latencies for cold starts with Lambda stuff
Instead of spinning up a container on-demand you spin up what is essentially a chrome tab in your V8 instance. Startup time is nil
In terms of solutions looking for a problem, that one seems to have fixed at least one problem for at least one person
This is obviously not true when the application itself can take arbitrary amount of time to initialize.
The overhead of WASM startup is small. But Firecracker is also very quick to start the virtualization; a lean compiled app in Firecracker would start quicker than a bloaty interpreted app in WASM. Both approaches can also freeze a pre-initialized application, and even clone those.
Reality is most "serverless" stuff is slow to start because the application itself is slow to start. WASM isn't going to change that, Enterprises are going to Enterprise. The hype field is just very strong.
WASM has select uses. Cold startups for functions is one of them and glad Cloudflare was able to use it.
I however don't have that problem. Every environment I've been in has banished Lambdas for most things because writing ourselves into knots to keep the lambdas going wasn't worth just having a small container listening to SQS/Kafka was easier and if we needed to scale, we could.
The impression I got initially was WASM is a way to ship and app into the browser as a blob of bytecode. No more random pile of js files that are largely unreadable due to trying to shave every last extra byte off them.
While I would lament the age that I came up in, where if some hot company down the street had just a killer javascript dropdown menu well you could just view source and maybe learn a few things. I think what that initial impression I got was a great idea.
But yeah, the concept has kind of expanded. I want to say you can write WASM 'plugins' for istio. Which I also think is pretty cool.
Something is going to replace containers, I say let them take a swing at it but I think at the end of the day you get something that ends up looking a lot like containers.
When? We’ve been talking about wasm for years. When are we actually getting this future? It’s been 8 years since wasm 1.0, and we still don’t have a stable, easy-to-use toolchain. Rust has maybe the best support, and I still can’t get a basic async application with tokio to work on wasm.
To put it into context, Rust was released in 2012. 8 years later it was stable, had a solid toolchain, and had plenty of people using it in production. Wasm still feels like a toy compared to that.
Lots of people use wasm in production right now, and toolchain support is in a good place across several languages. In that sense we are already there.
You likely visited a website using wasm today without realizing it, and major apps like Photoshop have been ported to wasm, which was the original dream behind it all. That has all succeeded.
But if you want to replace containers specifically, as this article wants, then you need more than wasm 1.0 or even 2.0. I don't know when that future will arrive.
There's a difference between a handful of sites that use wasm and it being the mainstream way in which we write web applications and run hosted software. It's still a very, very niche platform that has not fulfilled its promise of being either a first-party web tool or a universal runtime.
Like, how easy is it to write a web application in Wasm? Or how easy is it to compile your average program written for a native platform to Wasm without hand-picking your dependencies to work on the platform?
You're right, but wasm's goals were never to be a mainstream way to write web applications. It was designed very specifically to allow things like Photoshop, Unity, and other high-end applications to run on the Web, things that just didn't run at all, and were really important. But despite their importance, those applications are a small fraction of total websites.
Wasm succeeded at its initial goals, and has been expanding into more use cases like compiling GC languages. Perhaps some day it will be common to write websites in wasm, but personally I doubt it - JavaScript/TypeScript are excellent.
WASM is another VM, and it reminds me of the difficulty that the JVM has faced. You can run a lot of languages like Ruby or JavaScript on the JVM, and some of the runtimes are pretty good.
But the rest of the community prefers using the original implementation. If some library that you want to use doesn’t work, no one is going to help you. You’re using the second-class implementation.
If you skip the async stuff, Rust in WASM works quite well. I also found that Go has quite good WASM support.
I think the "WASM as containers" and "WASM as .jar" approaches are rather silly, but language support is good enough to use the technology if you think it's a match. I don't think it will be for most, but there are use cases where pluggable modules with very limited API access are necessary, and for those use cases WASM works.
Plus, if you want to run any kind of game engine in the browser, you're going to need WASM. While I'm not replacing my Steam install with a browser any time soon, I have found that WASM inside itch.io runs a lot faster and more stable than Winlator when I'm on my Android tablet.
I tried out WASM for a very small project once. This was built from a freestanding C source file (no libraries at all, not even the standard library). Zig was the C compiler used to build the program.
And I was able to get things working with an understanding of the whole system. You could instantiate the WASM from either a file, or from a byte array, pass in the byte array that held 'system memory' for the WASM program, call the WASM functions from the JavaScript code, and see results of making the function calls. The WASM binary was under 3KB in size.
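For illustration, the JavaScript glue for a setup like this can be about this small (a hedged sketch; the `env.memory` import name and the `add` export are placeholders for whatever the module actually declares):

    // One 64 KiB page serves as the module's entire "system memory".
    const memory = new WebAssembly.Memory({ initial: 1 });
    const bytes = await fetch("app.wasm").then((r) => r.arrayBuffer());
    const { instance } = await WebAssembly.instantiate(bytes, {
      env: { memory },
    });
    // Call an exported function from JS and read back the result.
    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3)); // 5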
Now once you want to use libraries, everything light and small about WASM goes out the window. You're at the mercy of how efficiently the library code was implemented, and how many dependencies they used while doing so.
Blazor is a great developer experience, but when you put it side by side with the other technology solutions (React, Angular, anything really) it's so slow that you will quickly be told to use something else...
As long as people use their computers like PC towers from 2000, without hardware video decoding, sleep states, or modern UEFI features.
There is naturally the version that works: keeping the Linux kernel and replacing the userland with managed language frameworks. I hear they are a huge success in mobile devices and throwaway laptops.
Hardware video decoding has worked perfectly for decades.
Sleep worked perfectly until Microsoft decided that device manufacturers should replace sleep with overheating in your bag (a much better sleep mode than, y'know, actual SLEEP).
Not sure what "modern UEFI features" means. Whenever something is described as "modern" that screams to me that someone is trying to conflate recentness with quality which is a red flag. UEFI itself has worked fine for as long as it has existed as far as I know?
Why you would replace the userland with "managed language frameworks" is quite beyond me.
As things are, all browser developers - including Firefox! - disable hardware acceleration on most video cards in their browsers on Linux because it is "too unstable". The result is a 20% difference in battery life between Linux and Windows if you mostly do browsing.
Never experienced this myself, and I have used discrete and integrated graphics cards from a variety of manufacturers.
Meanwhile on Windows I am not exaggerating when I say that every computer I have owned and every peripheral device I have ever used has had serious issues. Wireless headphones randomly disconnect, microphones require frequent unplug-replug cycles, rebooting is often required, reinstalling is common. Mice and keyboards have weird compatibility issues with software drivers. This experience is shared with most people I know that I have discussed it with. People are just used to it.
Maybe it isn't Linux that is the problem. Maybe the problem is that consumer hardware is designed and built on the cheap and is not designed to last, and they get away with it because most people (1) have no idea it could be so much better and (2) have no insight into these issues before buying because they are rarely covered in reviews.
For some reason when this happens on Windows, the hardware is to blame, but when it happens on Linux, Linux is to blame.
Because on Windows, it is the OEMs that provide the support, while on Linux (sadly) even after 30 years, it is mostly reverse engineered unless we are talking about OEM custom distros with their own blobs, like Android, ChromeOS and WebOS.
One benefit of being an old engineer is watching how excited people get when they rediscover something that has gone around the bend over and over again. I swear if you fuckers reinvent DCOM I will shit in your hats.
I actually didn't mind COM and DCOM. I didn't overuse it, so it never bit me.
I guess it's why I love using Microsoft Orleans. The virtual actor model is enough for me to solve almost every problem. If Cloudflare Durable Objects (https://developers.cloudflare.com/durable-objects) can reduce latency they might have the winning product.
On that note, do you mind helping me understand something I haven't been able to glean from Microsoft's docs? Does Orleans give you a way to globally address a thread?
With Durable Objects, two clients on either side of the world can both request a websocket connection to an object with the same unique identifier, and all the bytes from those clients will land in one single process somewhere inside a CloudFlare data centre.
I am pretty sure the answer is yes, but the docs seem a bit less direct than CloudFlare's web focused use cases.
The author seemingly has not considered the vastly different networking in wasm: you don't have networking. These environments serve an entirely different purpose; containers are meant to host applications, while wasm is an application. Don't even get me started on disk access, env handling, etc. WASI is great for the places it does well, but it is not a replacement for writing a pure Golang/Rust/C/Julia app and running it in a container; it doesn't have the facilities for that task.
WASM does not run on real hardware. At best, WASM can be considered a virtual machine (in the way that the JVM and the .NET CLR are virtual machines). I guess we can call that a "runtime".
Containers package applications that run directly on real hardware (well, directly on a real kernel that is running on real hardware). There is no runtime. I am talking about OCI containers here (Docker and Kubernetes). At least they can run directly; in practice, most containers are probably running on a Linux kernel that is itself running in a virtual machine (in the way that KVM, EC2, and VirtualBox are virtual machines).
WASM needs a runtime. That is, it is going to run inside an application. That application needs to run on a kernel. So, WASM will always be further from the hardware than a container is.
WASM solves the same "portability" problem that the JVM and .NET do. So, maybe WASM wins against those environments.
That is not the problem that containers solve though. Containers bundle applications with their dependencies. They replace "installation and configuration" with instantiation (deployment). WASM does not magically eliminate dependencies or the differences between environments (not even the difference between V8 implementations).
If anything, the technologies are complementary. Maybe, in the future, all our containers will be running WASM applications.
Or maybe we will run a different kind of container that ONLY runs WASM applications and then WASM can replace the Linux kernel running in a VM that hosts all our OCI containers today. Perhaps that is what the author really envisions. Even then, it sounds like more of a complement than a true alternative.
Yes. It doesn't provide the roughly 20 years of advancements in JVM technology either. Modern observability and JVM scale are at a different level. The trend was to get maximum use out of the hardware, specifically to move away from virtualization to containers. This bucks the trend for absolutely no tangible benefit.
As if you need to learn anything: you get your Dockerfile and that's it; what else is there to learn? Your WASM app still needs Kubernetes to run, so it's not adding any value.
The complexity is not in running your app in Docker, the complexity is running your container somewhere, and WASM does not help at all with that.
WebAssembly is not going anywhere; it's pretty clear it won't grow much in the next 5 years.
It's not trivial to manage a running container, or a group of them, with firewalls and filesystems and whatnot.
My biggest gripe is that it's quite redundant with the OS and tends to reinvent stuff. You end up needing to learn, document, and build towards both the OS-layer firewall and the container-layer firewall, for example.
Wasm is getting merged in and designed in a way that makes it "drop-in". As in, the standard libs are written to do WASI calls instead of libc (or whatever else) for standard I/O concerns.
This is represented in some languages better than others -- for many languages you just switch the "target" -- from x86_64-unknown-linux-gnu to wasm32-wasip2 (ex. Rust). Some things won't work, but most/many will (simple file access, etc), as they are covered under WASI[0]
That's not to say that nothing will change, because there are some technologies that cannot be simply ported, but the explicit goal is to not require porting.
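As a hedged sketch of how little host code that retargeting needs, here is a WASI preview-1 module run under Node's built-in (still experimental) WASI implementation; `app.wasm` stands in for something built with e.g. `cargo build --target wasm32-wasip1`:

    import { readFile } from "node:fs/promises";
    import { WASI } from "node:wasi";

    // The WASI context is where sandbox policy lives: args, env vars,
    // and (not shown here) pre-opened directories.
    const wasi = new WASI({ version: "preview1", args: ["app"], env: {} });
    const mod = await WebAssembly.compile(await readFile("app.wasm"));
    const instance = await WebAssembly.instantiate(mod, wasi.getImportObject());
    wasi.start(instance); // runs main(); WASI imports stand in for libc syscalls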
gRPC has gone places; the question is whether this similar thing has actually learned the lessons of the past, or is the same thing repackaged for a new generation.
I'm always reminded of Gary Bernhardt's "The Birth and Death of JavaScript" when wasm gets discussed. While it's a bit tongue-in-cheek, I think it really drives home that it's just another layer of abstraction that may or may not be useful for a given problem, and might not be the silver bullet that anyone is looking for. I reckon that whether or not wasm takes over everything will mostly come down to trade-offs between it and the other solutions.
Containers have two goals: reproducibility/portability, and encapsulation. WASM could replace the reproducibility but it can't replace the encapsulation.
> My money is on WebAssembly (WASM) to replace containers. It already has in some places. WebAssembly is a true write-once-run-anywhere experience. (Anywhere that can spin up a V8 engine, which is a lot of places these days.)
Luckily a container is a place that can spin up a V8 engine. If you want to bet on WASM my bet would be on containers running WASM.
Can you explain your thoughts here? WebAssembly is sandboxed and effort must be expended to provide a mechanism for getting data through that boundary. How does that differ from “encapsulation?”
I'm referring to a different kind of encapsulation. Dependencies, tools, version management, configurations, environment variables, etc. Even if you can fully compile your code into WASM and host it on V8 you need to ship it with configuration files, set specific environment variables and so on. Containers allow you to bundle all of that together into a single unit you can share with others.
Note that it is possible to ship containers with configuration files and environment variables. And because Wasm imports can be virtualized (i.e. you could choose to fulfill a file-fetching interface completely or partially by composing two components together), it is possible to build a WebAssembly binary with what you need bundled.
Also, just because you could does not mean you should -- most of the time you don't want to inject environment variables or configurations that could contain secrets until runtime.
WASM might replace processes, but the idea that people will take the stuff they can't manage to put in a native process, and somehow manage to cram it into WASM... ridiculous.
There's not even a single argument in there to support the clickbait title. We have containers, but "containers are annoying". WASM won't be annoying? Pray tell, how do you surmise that?
Docker too complicated? Build times too long? You believe WASM tools will be simpler and faster... why?
In practice the problem containers solve is to bundle an application with its environment so that it will work the same on the developers machine and in production and in five years time when the servers are replaced with new ones running another distro.
The WASM world doesn't have most of the pieces of that puzzle, and WASM itself is quite irrelevant. Say we standardized on a sandbox running x86_64 VMs under Firecracker; with proper sandboxing, that would work just as well as running WASM. You might say that WASM is portable and x86_64 assembler is not; to that I would counter that ARM (and probably RISC-V too) can emulate x86_64 faster than they can run WASM. So what's the point of the WASM piece of the puzzle?
"The main thing holding back wider adoption is a lack of system interfaces. File access, networking, etc. But it's just a matter of time before these features get integrated."
Funny. The most obvious place for WASM is a web browser, and yet WASM STILL cannot access the browser DOM. It's only been, what? At least 8 years of promises about it coming soon and how people are working on it.
Who exactly has promised it's "coming soon"? People like to make arguments against imaginary arguments for some reason.
Besides, you can already send data from/to the Browser<>WASM context, which seems to solve at least most of the use cases people were imagining for WebAssembly back when it was just asm.js.
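A hedged sketch of what that data passing looks like in practice: the module cannot touch the DOM directly, but it can call imported JS functions that do (the `set_title` import and the `env` module name are hypothetical conventions, not a standard):

    const wasmBytes = await fetch("app.wasm").then((r) => r.arrayBuffer());
    let memory: WebAssembly.Memory;
    const imports = {
      env: {
        // The guest passes a (pointer, length) pair into its own linear
        // memory; the JS side decodes the bytes and updates the DOM.
        set_title(ptr: number, len: number) {
          const bytes = new Uint8Array(memory.buffer, ptr, len);
          document.title = new TextDecoder().decode(bytes);
        },
      },
    };
    const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
    memory = instance.exports.memory as WebAssembly.Memory;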
An OCI container (what people call a Docker container) is just an application that runs on a Linux kernel.
That is, you need a Linux kernel underneath for the containers to run on. More often than not, that Linux kernel is running in a virtual machine.
When you run Docker Desktop on your Windows or macOS machine, how do you think it runs that Alpine Linux container? It works because there is a virtual machine running Linux that all the Docker containers run on top of.
If you are running Linux directly on real hardware, your containers do not need a VM. Everywhere else, they do.
Oh, I have seen this proposed, in real life. It was like 2019 or 2020, so it's not even new. Honestly, it was a good idea as it was proposed then, and I wish more of the tooling I interacted with had adopted it.
WASM to an API is essentially the `Fn(…) -> …` type. E.g., you have
POST /some/api
And it can take JSON, but what if it needs to do "something", where something depends on the consumer?
And across the board, what APIs/aaS's do is that some PM goes "I think these are the only 2 things anyone will ever want", those get implemented, and you get an enum in the API, the UI, the service, etc. And that's all it is ever capable of, until someone at the company envisions something bigger.
If I could pass WASM, I could just substitute my own logic.
Like webhooks, but I don't have to spin up a whole friggin' HTTP server.
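A minimal sketch of what the receiving side could look like (the `transform` export and its signature are hypothetical; a real service would pin down an interface contract plus CPU/memory limits):

    // Accept caller-supplied logic as a wasm blob instead of a webhook URL.
    async function runCallerLogic(blob: ArrayBuffer, input: number): Promise<number> {
      // Instantiated with no imports, the module can only compute: it has
      // no ambient access to the filesystem, network, or clock.
      const { instance } = await WebAssembly.instantiate(blob, {});
      const transform = instance.exports.transform as (x: number) => number;
      return transform(input);
    }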
WASM can, but does not, replace containers. What is different is that instead of 20 applications running in 20 containers, 1000 applications will comfortably fit in one container, with better sandboxing at the application level than the Linux process model provides.
I really wish you knew how silly this sounds & is. Why would you even run 1 container for 1000 applications if you're saying that WASM provides better sandboxing? Why wouldn't you just run a single WASM process? Also, containers are generally meant for single processes to provide isolation for each app, not to have all your apps running together with the same access.
What many people miss about WASM vs. containers is that containers don't need software developers to make changes to support containers. WASM, however, relies on software developers to make changes to their apps. Otherwise, you have to emulate an entire architecture in WASM, which doesn't perform well. It is the difference between VMs, which emulate physical hardware, and containers, which don't need to emulate the hardware because they provide the sandboxing using kernel features.
> I really wish you knew how silly this sounds & is. Why would you even run 1 container for 1000 applications if you're saying that WASM provides better sandboxing? Why wouldn't you just run a single WASM process? Also, containers are generally meant for single processes to provide isolation for each app, not to have all your apps running together with the same access.
Better sandboxing does not mean completely foolproof sandboxing, and defense-in-depth is a practice for a reason. The idea is that a vulnerability in your Wasm runtime would mean escaping into whatever contains it; a vulnerability in the container runtime underneath (a different layer of security) would mean escaping again; and a VM (if there is one underneath) is yet another layer to break through. This means that to get "root access" on a machine, there are now 3 layers of security to escape.
Running a single wasm process offers better isolation because it is deny-by-default; running a program as a plain process, or as a container with cgroups + namespaces, has a wide attack surface. You can also achieve greater density of applications with Wasm than you can with containers, because of the lighter footprint compared to a userspace process.
Containers are a hack that packages the assumption of an operating system, plus a bunch of other files and dependencies, into essentially a tarball to make apps run. You must deal with isolation at the OS level (seccomp, etc). Wasm gives you greater control -- you don't have the "same access" for every app; you can vary access infinitely and dynamically, much more easily and without worrying about OS primitives.
It's OK if you think this is silly -- no one is forcing you to adopt the technology, it'll either come around or it doesn't.
> What many people miss about WASM vs. containers is that containers don't need software developers to make changes to support containers. WASM, however, relies on software developers to make changes to their apps. Otherwise, you have to emulate an entire architecture in WASM, which doesn't perform well. It is the difference between VMs, which emulate physical hardware, and containers, which don't need to emulate the hardware because they provide the sandboxing using kernel features.
This is not necessarily true -- WebAssembly support is being added in languages upstream, and the goal (and reality for some programs today) is that compiling to WebAssembly does not require drastic changes. It's not perfect, but this is a stated goal, and is what is playing out in reality. The WebAssembly ecosystem is working very hard both internally and with upstreams to work with use cases/patterns that exist today, and make using WebAssembly close to a no-op/config change.
Any sysadmin/devops person can tell you that the move to containers was/is not pain free. I'm not promising Wasm will be pain free either, but the idea here is that change is happening upstream -- the ecosystem is working to make it pain free. It will be more like changing a few flags (e.g. building for ARM rather than x86) and following the errors. Some languages will be easier to do this in than others.
You'll just wake up one day and your python toolchain will be able to compile to WebAssembly natively with no extra tooling if you want. Maybe you don't have a stack that can make use of that yet, and maybe Django won't be fully supported early on, but Flask more likely will be.
One advantage of containers is that you can run lots of existing software with them. Take an old Perl application, wrap it in a container, and then run it in the cloud. Keep running that old binary application whose source somehow got lost.
Also, I think most uses of containers lose the advantage of WASM. WASM is about running on any platform, which is great for browsers and serverless. But containers are usually run in a controlled environment where you can compile once and not pay the penalty of compiling each time.
It would be so great if WASM gives us the paradise that Java promised thirty years ago. Being able to get fast, "write once, run anywhere" would be awesome.
I wonder if someone could make a decent cross-platform GUI toolkit to save us from the horribly slow Electron-hell we've carved out for ourselves.
We could name it the Abstract Window Toolkit and it would render the same on every platform. But then, someone would get butthurt about it having a "distinct look" and decide to make the Standard Widget Toolkit that uses native bindings. Fantastic that it stops that distinct look, with the small asterisk that you now have to ship .dll/.so/.dylib shims in your "cross platform" app
I'm no wasm expert, but I find it just fantastically unlikely that they're going to beat the decades of research that have gone into the JIT in the JVM anytime soon. But, I guess if the objective is just "run this bytecode in every browser on Earth," that ship has sailed and I will look forward to more copies of node infiltrating my machines
> I'm no wasm expert, but I find it just fantastically unlikely that they're going to beat the decades of research that have gone into the JIT in the JVM anytime soon
Probably not, but that's sort of orthogonal to my point.
Java started as "write once run anywhere", but it has almost become the opposite of that: "write once, run it on your specific server".
"Portability" is not nearly the same concern with Java as it was thirty years ago; I don't have direct numbers on this, but I would guess that a vast majority of Java code written in 2025 is either running on a server or running an Android app, neither of which are nearly as "portable" what was kind of promised in the 90's, at least not on the desktop.
You say that, and yet for your cited Android apps, they are built using Gradle, which is written for the JVM, or using Maven, which is written for the JVM, and likely even typed inside IntelliJ (aka Android Studio), which is written for the JVM. Also, this may be splitting hairs, but Android is actually Dalvik, not the JVM.
That's why I deliberately said "Java", not JVM (though I think Dalvik has been deprecated and it's ART now).
I'm sure you can list any number of programs that are written in Java, but it certainly has not been the cross-platform standard that everyone was promised; it feels like Electron has more or less taken its mantle in the world of desktop land.
I guess what I'm trying to say is that you're not typically deploying JAR files outside of an extremely controlled environment.
For a server environment, I set all the parameters I want and then I code around them. Obviously there's a lot of variation between different servers, but you typically develop your server code against a specific set of servers.
Containers are not going away anytime soon. I feel like this is nothing but a clickbait post. There are crazy amounts of innovation happening in the cloud-native space, such as bootc, Kata Containers, etc. Hundreds of CNCF projects (Envoy and Istio, just off the top of my mind) are being used worldwide, most of which have been built upon k8s. Why do you think people would drop all that to use an immature runtime whose goal isn't even the same as containers'?
I just don't see it. WASM requires throwing away all the decades of x86/ARM compiler work in ecosystems like Java, .NET and C++ and placing all trust in the WASM runtime/V8 to perform as well as them.
Yes, but now you are going through another translation layer (i.e., V8) in order to get down to x86 or ARM. You just have to hope that the optimizer of this translation layer is at least as smart as what you had before.
COBOL just had a new release, COBOL 2023 is the most recent standard.
Also does objects nowadays.
Additionally, given all the AI prompts being used nowadays writing long English texts instead of proper programming, COBOL was actually a language ahead of its time.
> WebAssembly is a true write-once-run-anywhere experience.
Except not. The wasm ISA is really quite limited in the types of operations. A full-blown RISC/CISC ISA will have way more opportunities for optimization. To say nothing of multithreading and pipelining. JITing also has overhead.
> You can compile several languages into WebAssembly already.
But if you can compile them, why not just compile natively and run in a container, and get free performance?
Wasm will have a hard time with anything low level: networking, I/O, GPU, multimedia codecs.
I actually tried doing this a couple weeks ago. Rust on Cloudflare Workers. It didn't go well -- compiling to wasm disables a lot of Rust features, to the point where it just didn't make sense anymore. I gave up after trying to get some crypto stuff working. I eventually switched to using raw JS because Cloudflare Workers exposes the Web Crypto API, but next time I want to run Rust serverlessly, I'm just going to put it on Lambda, compiled to a regular binary.
Personally, I don't think that Cloudflare is the best provider for Wasm at the edge, as everything needs to go through a JavaScript layer that eventually hurts performance and prevents further optimization, but it is a strong one nonetheless (note: take everything with a grain of salt; even though I try hard not to be biased, I'm also the founder of Wasmer).
> "The main thing holding back wider adoption is a lack of system interfaces. File access, networking, etc. But it's just a matter of time before these features get integrated."
Wasmer launched WASIX [1] a few years ago which fulfills the vision that the article describes. With WASIX you can have sandboxed access to:
I found WASM slower than expected. I wrote some WASM logic functions recently which I thought would perform better than their native JS equivalents. For example, take a large array and "pivot" it in 10ms instead of 100ms.
What I found was the JS version was a bit faster than the compiled WAT. Yikes.
At $WORK we use SQLite in WASM via the official ES module, running read-only in browser.
The performance is very poor, perhaps 100x worse than native. It's bad enough that we only use SQLite for trivial queries. All joins, sorting, etc. are done in JavaScript.
Profiling shows the slowdown is in the JS <-> WASM interop. This is exacerbated by the one-row-at-a-time "cursor" API in SQLite, which means at least one FFI round-trip for each row.
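Roughly what that cost looks like, assuming an sql.js-style cursor API (a hedged illustration, not our exact code):

    declare const db: any; // an opened SQLite-in-WASM database handle

    // One JS <-> WASM boundary crossing per step(), plus more per row
    // read: a 100k-row result pays well over 100k round-trips.
    const stmt = db.prepare("SELECT id, name FROM users");
    const rows: Record<string, unknown>[] = [];
    while (stmt.step()) {
      rows.push(stmt.getAsObject());
    }
    stmt.free();

    // Batch-style APIs amortize the crossing: one call returns all rows.
    const all = db.exec("SELECT id, name FROM users");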
This blog post's promise is all "pink fluffy unicorns", but for a single use case: "Long running containers powering microservices, probably in bigger installations, for consumer facing applications and APIs".
Containers are much more than that. They're service providers for small installations, short-running VeryFatBinaries, close to the metal, yet isolated complex applications in HPC environments.
WASM will be all well and good; it'll be a glorified CGI, and might be a good one at that. We'll see, but it'll not and can't snipe containers with one clean headshot and be done with it.
It'll replace Kubernetes? Don't be silly. Not everyone is running K8S for the same reason, and I'm saying that as a person who doesn't like K8S.
Are WASM and containers not fundamentally different things?
At their heart, modern containers are a clever way to create something that looks like a Linux VM without the overhead of actual virtualization. Your application still executes "natively," just inside of a Potemkin environment (modulo whatever holes you poke in the veneer.) The latter bit is why we use containers.
WASM is a bytecode format. It doesn't carry around the environment it needs to execute correctly like a container does. In fact, it (by definition) needs an environment with certain properties (interpreter/JIT present) to work!
No it won't. People primarily deploy containers for full fat native server applications containing tons of proprietary code and libraries and sometimes specialized hardware access (GPU, etc.).
It would be immensely silly to run full x86 emulators in WebAssembly and go through 2 layers of transpiling / interpreting for what can run natively on the host's CPU.
This post is really puzzling. How do I use libraries with WASM? Let's say I wrote a nodejs app; seemingly I will need to bundle not just node_modules but also the entire nodejs runtime to make it run. Why would I do this? How does another developer make changes? If there is some way to specify what dev tools to download, what is the difference between that and docker? If you aren't doing anything complicated, docker is just a good way to set up your dev dependencies. If you are doing something complicated, I doubt WASM is going to help you.
> You can compile several languages into WebAssembly already. Languages that can't be compiled will eventually have their own interpreters compiled to WebAssembly.
A big constraint here is the memory model. Languages include a specification for memory allocation, deallocation, lifetime, and garbage collection, and WASM has its own, engine-dependent way of going about that. The performance lost from reimplementing the memory model within WASM could only be regained by going back to something that looks like containers.
Disagree: while this might be the case for a handful of languages such as Rust or Go, many languages need a whole lot of other stuff to run (e.g., Python needs a whole bunch of dependencies).
Ah interesting! Looks like that uses a WASM build of CPython to run your Python code - similar to how you can run python in your browser. Would you still have a heap of Python files?
There is one thing that makes WASM very awkward: projecting APIs into a sandbox.
If this was done in a way that works in mechanical sympathy with a wide range of languages I think WASM would be more successful. Making an API available inside a sandbox is painful. You have to manually roll something that can deal with memory, marshal calls and parameters etc. I'm not suggesting it is easy. I'm merely pointing out that this is what I see as the main obstacle to adoption.
I am not interested in "generic" access to system resources (filesystem, network etc) at all. In fact, I can't think of any scenario where I'd actually want to do that. I want to provide APIs that deal with external resources in narrowly defined ways.
It is much easier to do this securely when you explicitly have to provide functionality rather than say "here's the filesystem" and then try to clamp down access.
I want to use WASM on servers. Primarily in Go. I want to be able to create a bunch of APIs that provide narrow access to persistence and communication and project those APIs into the sandbox. In a manner that is interoperable and easy to use from various languages. I don't want to have to craft some application specific marshalling scheme.
If projecting APIs into sandboxes in a (mostly) language agnostic way that is natural, safe and easy to use, it'd be easy enough to write system interfaces. I have no idea why so few people see this as important since I feel it should be self-evident.
(And yes, being able to offer concurrency would be nice, but that's a much smaller issue than the problem of there being no good way to marshal APIs into WASM containers.)
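To illustrate the marshalling chore: even getting a single string into a guest today means hand-rolling something like the following (the `alloc` export is a per-module convention, not a standard):

    function passString(instance: WebAssembly.Instance, s: string): [number, number] {
      // Ask the guest for space in its linear memory, copy the bytes in,
      // and hand back a (ptr, len) pair. Every API reinvents this dance.
      const alloc = instance.exports.alloc as (n: number) => number;
      const memory = instance.exports.memory as WebAssembly.Memory;
      const bytes = new TextEncoder().encode(s);
      const ptr = alloc(bytes.length);
      new Uint8Array(memory.buffer, ptr, bytes.length).set(bytes);
      return [ptr, bytes.length];
    }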
Go development is ill-documented but just barely tolerable. Skip the BytecodeAlliance docs. Install WasmCloud's 'wash', write your WIT file, run 'wash build', and good luck with the error messages.
The article starts out "In the year 2030, no one will remember Kubernetes", but focuses mostly on containers. Kubernetes solves a lot of problems that a runtime alone doesn't, like rolling upgrades, load balancing, etc. However, focusing on the thrust of the article, which is replacing containers with code compiled to WASM running in V8, it's still hard to agree. For example, apps commonly need a bunch of dependencies; WASM doesn't solve that problem, so you still benefit from building images in multiple cacheable layers.
I mean I don't enjoy Docker either, but I think it's more that there are many problems k8s + Docker help you solve, and WASM alone wouldn't solve a lot of them.
The second paragraph is perhaps something you experience on a very small team in a very small company, but it is definitely not everyone's day-to-day experience, and it sounds like the author would like to impose their own limited experience onto the whole industry. In a lot of companies, DevOps usually jumps in and handles the entire management of the pipeline, including the build and release of containers. Most developers still get to simply ship their features; I've rarely met classic developers 'optimising Docker build times'. Secondly, great technologies have intrinsic qualities and potential which get recognised instantly. There may be some hype following them, for sure, but it is these intrinsic qualities which silently drive the adoption of the new tech, not the hype. People start building extensions and add-ons and more complex systems, and suddenly you see it everywhere. WASM has been around for a few years now; has anyone seen it quietly make its way into our working processes yet?
I think WASM is unfortunate because, for most websites (forget backend for a moment!), anything it can do can be done better and faster using a JS and server-side split. JS gets heavily optimised by V8. Add in the download time for the WASM and the learning curve, and it isn't too attractive.
It needs something to make it a must have for some area of adoption. I just don't see it yet.
I vividly remember how I was young and stupid, and some people claimed that web components would replace everything. There was a guy in another team who kept bashing us for building things on top of React because "you'll see, very soon, in just a few months...just like you're having to rebuild from Angular, and someone before you had to rebuild from Backbone/jQuery..."
I've grown less young since then, and I can probably count numerous other claims for some great idea to replace "all this crap very soon." Turns out, "one-size-fits-all" solutions are almost always hype, sometimes not even backed up with pragmatic arguments. There are simply no silver bullets in our industry, especially in relation to web tech. Guess what? Some websites are still being built with jQuery, and maybe there's nothing wrong with that.
No, WASM is not going to replace containers. At best, it will likely find its specific niches rather than becoming a universal solution. That's all.
For example, SQL is Turing-complete and expresses a simulation of the data-flow paradigm [2]. It is effectively executable on current massively parallel hardware and even scales horizontally, enabling safe transactional "microservices." Why should we embrace WASM, but not SQL?
It's mentioned in passing in the article, but I'm intrigued by the author's mention of how Cloudflare Workers interoperate. Besides the official documentation, are there any articles anyone can recommend on this topic?
Cloudflare Workers run in V8 isolates, which are much lighter-weight than containers, with the ability to run thousands in the same process, and start up new ones quickly on-demand. For Cloudflare it's usually easier to start your application on the machine where it is requested, than to try to route to a machine where it's already running: https://developers.cloudflare.com/workers/reference/how-work...
This is one type of "binding" or "live environment variable" or "capability". You configure it at deploy time, and then at runtime you can just do `env.SOME_SERVICE.fetch(request)` to call to your other worker: https://blog.cloudflare.com/workers-environment-live-object-...
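The calling side really is as small as it sounds; a TypeScript sketch, where SOME_SERVICE is whatever name you bound at deploy time (the Fetcher type comes from Cloudflare's workers-types package):

    interface Env {
      SOME_SERVICE: Fetcher; // bound in wrangler.toml at deploy time
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        // Looks like a network call, but is routed directly to the other
        // Worker without a public round-trip.
        return env.SOME_SERVICE.fetch(request);
      },
    };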
I don't think that WASM will replace containers. They will continue to move closer to each other but still have their advantages in different fields.
Right now I can run containers and WASM workloads in the same k8s clusters. I don't even have to think about it with runtimes like crun/youki or WasmEdge. The OCI image format is the universal package format: it always carries everything it needs, and the tooling is broad and mature.
With containers I can put basically any app in any language in there and it just runs. WASM support is far from that; there is migration toil. Containers are, and will remain, more flexible than WASM.
> In the year 2030, no one will remember Kubernetes.
I highly doubt that. Maybe there will be an evolution to k8s but fundamentally it solves a whole host of challenges around defining the environment an application runs in.
"In the year 2030, no one will remember Kubernetes."
I feel like this prologue missed an important point. Kubernetes abstracts data centers: networking, storage, workload, policy. Containers implement workloads. They're at different layers.
And back to the article's point: WASM may well replace containerized workloads, indeed there are already WASM node runtimes for Kubernetes. Something else may well replace Kubernetes in five years, but it won't be something at the workload layer.
I came here to say this. Kubernetes is an ecosystem that represents a better way of running production workloads at scale for many orgs. People already use Kubernetes for VMs, and various different container runtimes are used by different cloud providers. If WASM does replace containers (unclear to me), Kubernetes would just support WASM, and stay largely similar.
Can I run ephemeral Oracle databases and JBoss instances and Tibco EMS as WASM as part of a CI/CD flow? No? Then it's not comparable to containers.
I sure hope "developing on Cloudflare" is not "what the future looks like".
There are many, many VMs and programming languages that are more or less easy to compile to many architectures and/or possible to run straight on a hypervisor or on metal. JavaScript, Python, Lua, V, and so on. None of them are seen as container competitors.
No it won't. It's incredibly hard to build out a fully useful version of the Linux APIs, as shown to us by Cygwin and WSL. Even if you built out a similar set of APIs, Linux itself offers a ridiculous set of interaction points where applications can tie together (for example, I can use inotifywatch to copy files out of my container as they're written). I feel like what you'll end up with is something like gVisor running on top of WASM. In which case, what did we gain over VMs at all?
He researched ChromeOS process virtualization, Moby (Docker upstream), and Chrome tab isolation. It's all eBPF magic on top of seccomp at its core.
Statically compiled portable binaries will replace containers.
Maybe not, but one can dream at least.
What do we need containers for, if we can build a service in a binary that you can throw on any Linux of the past 20 years and it just starts to serve network requests?
What do we need to support other platforms if the server world is one big Linux monoculture and the next best platforms are just a cross-compile away?
Why wouldn't I compile everything statically if containers don't share libraries anyway?
Forgive me, but I've heard that the heralded new age, nay revolution, of wasm is coming any day now, and it hasn't arrived and probably won't.
I don't doubt that wasm has potential, but personally I imagine more esoteric use cases as the go-to, rather than it necessarily replacing containers (where my money is more on unikernels).
...you're essentially turning the host OS into both a resource manager and isolation boundary enforcer, which is... kind of what hypervisors were specifically designed to do, just at a different level. When the container companies were all starting to come out, I never thought it was a good idea, but given what I was building I never said anything, because "of course the VM guy would not like containers". I thought many times about what an ISO+VM "container" product would look like, but at the time it would have been hard to match the performance of containers even if we could have gotten the developer experience super good.
VM: cold start ~10 seconds with an optimized ISO; management overhead ~256MB baseline; consistent performance profile.
K8s: cold start ~30-50 seconds (control plane decisions + networking setup); management overhead 1-2GB for the control plane alone; more variable performance due to overlay networking.
imo the real question is: at what scale/complexity does k8s overhead get amortized by its management benefits? For a number of services, I suspect it never does. I will dutifully accept all my downvotes now.
I haven't played around with hypervisors much but the whole point of k8s is not just isolation but all the primitives the control plane gives you which you don't need to implement. Things like StatefulSet, ReplicaSet, Volumes, HorizontalPodAutoscaler, Service, DNS, ConfigMaps, Secrets, Accounts, Roles, Permissions etc.
Also, the container runtime (containerd by default, I believe) can be switched out for micro-VMs like Firecracker (never done this though; not sure how painful it is).
More like return of Applets, Flash, Silverlight, ActiveX.
And that is great. Thanks to it being turtles all the way down, I can have my plugins back, now running on WebAssembly. That is the only thing I care about.
Right. Moore's law has made too much progress. We must claw back some slowness! Let's take code that could run securely at full speed and run it in emulation at half the speed instead. That'll keep those menacing hardware performance improvements at bay a little longer.
What sort of tooling is required for wasm? Let's say I wanted to deploy a middle tier in our app, consisting of some nodejs code that talks to an external database.
We'd use a Dockerfile, install nodejs or use a source image to build from. How does that work for wasm? Does it have layers to reuse?
lol I initially thought dylibso was the author, I was mistaken.
That being said - WASM has been steadily improving over time, yet it hasn't quite achieved mainstream adoption.
I'm curious what's holding it back?
It seems like despite its technical advancements, WASM hasn't captured the public's interest in the same way some other technologies have. Perhaps it's a lack of easily accessible learning resources, or maybe the benefits haven't been clearly articulated to a broader audience. There's also the possibility that developers haven't fully embraced WASM due to existing toolchains and workflows.
as a Dylibso employee, I am wondering what made you think that :D at Dylibso we advocate for Wasm for software extensions, rather than an alternative to containers!
>A very obvious argument against WASM succeeding is the Java Virtual Machine (JVM). It's almost exactly the same promise: write once, run anywhere. [...] The biggest limitation is that JVM bytecode cannot run in a web browser
The draw of WASM is being able to have your code run in a browser tab exactly as it runs on your local hardware, and exactly as it runs if you embed it in your own application, with the only thing changing being the custom syscalls between the three.
The biggest limitation of the JVM was that it's closed
You can spin up your own WASM interpreter and integrate it anywhere you like. It wouldn't be an impossible bridge to cross: it's RISC, it's open, and there are many open implementations. Is it even possible to write your own JVM from scratch?
WASI Design Principles
Capability-based security
WASI is designed with capability-based security principles, using the facilities provided by the Wasm component model. All access to external resources is provided by capabilities.
There are two kinds of capabilities:
Handles, defined in the component-model type system, dynamically identify and provide access to resources. They are unforgeable, meaning there's no way for an instance to acquire access to a handle other than to have another instance explicitly pass one to it.
Link-time capabilities, which are functions which require no handle arguments, are used sparingly, in situations where it's not necessary to identify more than one instance of a resource at runtime. Link-time capabilities are interposable, so they are still refusable in a capability-based security sense.
WASI has no ambient authorities, meaning that there are no global namespaces at runtime, and no global functions at link time.
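As a concrete (hedged) example of no ambient authority, Node's WASI implementation makes directory access an explicit grant; a guest cannot even name paths outside what the host pre-opens (this shows the older preview-1 flavor of the same principle, not the component model itself):

    import { readFile } from "node:fs/promises";
    import { WASI } from "node:wasi";

    const wasi = new WASI({
      version: "preview1",
      // Capability grant: guest path "/data" maps to host "./sandbox".
      // Nothing else exists in the guest's filesystem namespace.
      preopens: { "/data": "./sandbox" },
    });
    const mod = await WebAssembly.compile(await readFile("guest.wasm"));
    wasi.start(await WebAssembly.instantiate(mod, wasi.getImportObject()));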
While I don’t doubt the utility of WebAssembly, I do have to kind of roll my eyes at the ignorance of history going on.
Servers were the future of code after mainframes: simplicity and write-once-run-anywhere code without all the complexity; they just needed networking and storage solutions similar to what mainframes had in order to be viable.
Virtual machines would be the future of bare metal servers, allowing code to be written once and run anywhere, eliminating the complexity of bare metal servers. VMs just needed better networking and storage first to be viable.
Containers would replace the complexity of VMs and finally allow code to be written once and run anywhere, once orchestration for storage and networking was figured out.
Serverless would replace containers and allow code to be…
You get the idea.
The only thing holding back code from truly being “write-once, run anywhere” is literally everything that keeps it safe and scalable. The complexity is the solution. WebAssembly will start with the same promise of a Golden Path for development, until all the ancillary tooling ruins the joy (because now it’s no longer a curiosity, but a production environment with change controls and SOPs) and someone else comes along with an alternative that’s simpler and easier to use, because it lacks the things that make it truly work.
I don’t particularly care how it’s packaged in the end, so long as it runs and has appropriate documentation. Could be a container, or a VM template, or an OS package, or a traditional installer, or a function meant for a serverless platform.
Just write good code and support it. Everything else is getting lost in the forest.
We’re still using COBOL mainframes in 2025 in places so I’m sure whatever is the next trend in computing will be great, but I’m sure we’ll still be using k8s (and COBOL) in 2125.
> but I’m sure we’ll still be using k8s (and COBOL) in 2125.
If you want a vision of the future, imagine a bare metal hypervisor hosting Linux hosting K8S hosting V8 hosting a WASM-based IBM mainframe emulator running COBOL.
kubernetes is an endless employment engine for operations staff (it's kinda the opposite of "anybody can write React, so React devs are now cheaper"; instead, you're constantly maintaining a piece-of-junk k8s stack).
no way someone would engineer their way out of not over-engineering a kubernetes stack.
The other bit of this is that FNaaS pricing is crazy expensive. Unless someone goes an order of magnitude cheaper than Cloudflare's Wrangler offering on the edge, I don't see it happening. You get none of the portability when writing yourself into a FNaaS cage like Cloudflare or Fastly.
Companies start off with business logic that solves a problem. Then they scale it (awkwardly or not) as it balloons with customers. Then they try to optimize it (sometimes by going to the cloud, sometimes by going back to their own VMs). VCs might not like "ending growth", but once you're on a stable customer base you can breathe, understand your true optimal compute, and make a commitment to physical hardware (which is cheaper than cloud scaling, and FAR cheaper than FNaaS).
The piece that might travel the entire way with you? Containers and Kubernetes.
WASM just moves vulnerabilities to the wrapping code. If you use the FS, then you still have to prevent the wrapper from stealing your /etc/passwd, so container it is.
Containers, VM's, physical servers, WASM programs, Kubernetes, and countless other technologies fill niches. They will become mature, boring technologies, but they'll be around, powering all the services we use. We take mature technologies, like SQL or HTTP for granted, but once upon a time, they were the new hotness and people argued about their suitability.
It sounds like another way to distribute software has entered the chat, and it'll be useful for some people, and not for others.
My boomeritis flares up when I hear someone trying to sell me on 10x-ing instructions. I think we need a new unit of inefficiency called Matrioschkas. The Swedes call this Torta-på-Torta, or a cake in a cake (in a cake...).
The author's argument is "because it is easier today and will be as powerful as containers in the future".
Well, what if it gets as powerful but three times more complex? Frankly, I find it quite messy to develop WASM in C++ without downloading an Emscripten... container. Yeah, AFAIK, there is no WASM compiler in WASM.
Oh, and there is the *in the browser* part, also. Yeah, but the truth of the matter is that most WASM frameworks have mediocre performance when compared to JS (because of the memory isolation).
In this job we love new projects. We like them so much that we keep forgetting that the vast majority of them fail.
You noticed this post is about WASM replacing containers, right?
So, I'd use it the same way we use compilers in containers (i.e. one single download, no installation), and I would run it with a runtime like Wasmer, Wasmtime, WasmEdge, etc.
Or else I could run it sandboxed in a browser as a PWA. Then you could build things in Chromebook, phone, etc.
Sticking a bunch of docker containers on a box behind a reverse proxy or load balancer felt like the sweet spot of complexity and scalability for most apps.
"Config files? What are those? To change the system configuration, use these assembler macros, reassemble this module, relink the operating system and then reIPL it". So, systems programmer, because knowing assembler was part of the job description.
Plus, at mainframe sites in the 1960s/1970s, it was common for sysprogs to write custom code to hook into the operating system and change its behaviour (user exits), or even to actually patch the operating system code (SYSMOD) – and assembler was the language used to do all that
If we are talking about IBM mainframes specifically, by the 1970s, a lot of the operating system was written in a high level language (a PL/I dialect), but although they shipped customers the source code, they didn't ship them any compiler for the special PL/I dialect, so customers couldn't modify it by changing the source, only by disassembling the binary, modifying the assembly, then reassembling it. Plus, commonly, IBM shipped only source for the initial release, not later patches, so the customer copy of the source would gradually get out of sync with the binary. Some other mainframe vendors weren't quite so primitive, and so there was significantly less use of assembler by customers for OS customisation (e.g. I think, Burroughs, Multics)
Back in the day you would have used .NET plugin for browsers, which got replaced by Silverlight plugin, nowadays it is WebAssembly, really nothing new per se.
From a consumer/user perspective, almost none of the benefit of containers is available to the end user. Feature sets vary wildly by operating system. macOS doesn't even have real containerization, and Apple has not signaled moving in that direction (not even going to bother taking Windows seriously). Jails in FreeBSD work in a completely different way from cgroups. Our phones should effectively be containerizing apps so we can e.g. control who is allowed to contact the internet, but no such functionality is offered to the user. Apps instead are simply not allowed to look at each other, but they can contact whomever they want. (Maybe a rooted Android has a slightly better feature set in this regard, but that sounds miserable to me to have to figure out.)
For writing services, yes, they're quite useful. We've only tapped a tiny part of the potential though. These could be easily repurposed to allow the end-user who uses graphical interfaces to lock down their computer.
This webdev notion of abstracting 30 layers of complexity just to run bytecode is borderline lobotomizing, and it should be blamed for 8,000,000,000 bytes of RAM not sufficing for a slightly-above-average desktop computer nowadays.