A world to win: WebAssembly for the rest of us (wingolog.org)
297 points by nsm on March 20, 2023 | 182 comments



>the initial version of WebAssembly is a terrible target if your language relies on the presence of a garbage collector.

Lack of GC was one of the appealing parts of WASM to me. Keep it simple. It is good to be careful about utilizing memory on your visitors' machines, so you better spend a lot of thought on memory management.


Counter-intuitively, I feel as though GC results in better memory usage in this situation.

Option 1: you rely on every single developer to be careful and make no memory-leaking mistakes.

Option 2: you take this problem out of the hands of the specific application devs and give it to experts in memory management, who write a GC.

If this platform is going to be the future of distributed computing, just imagine the number of terrible devs who will be releasing their code upon you.


> Option 1: you rely on every single developer to be careful and make no memory-leaking mistakes.

Nobody is expecting every single developer to carefully write their own garbage collector. WASM is a compilation target for other languages. The language you're compiling defines how memory management works, be that C / Zig (do it yourself), Rust (compile time borrow checker), Swift/ObjC (Rc / arc) or Go/JS/Java/C# (full garbage collector built into the language).

For GCed languages like Go or C#, the design choice WASM makes is between:

1. The compiler injects its own garbage collector into the created webassembly modules, just like it does when compiling native binaries.

or 2. The webassembly virtual machine provides a garbage collector that any compiled code can use.

The benefit of (1) is that it makes webassembly virtual machines simpler and safer. (This is what the GP comment wants). (Also compilers can do language-specific optimizations to their own GC.)

But (2) has a bunch of benefits too:

- The GC can be higher performance (it's not written in WebAssembly).

- Interoperability between WASM bundles written in different languages is much better. E.g. it'd be easier for Go in a WASM bundle to talk to JS in the browser. Or for Go and C# code to interoperate via WASM.

- The .wasm modules for GCed languages will be much smaller, since they don't need to compile in a garbage collector.

The Go/Java/C# you write will be the same. The difference is who provides the garbage collector. Your compiler, or the WASM virtual machine?


This comment repeats a fundamental error and misunderstanding: WASM is (currently) not able to be a compilation target for an efficient GC implementation. It lacks features needed to be able to implement GCs reasonably, like memory barriers.

So 1. is out of scope if you don't want a bloated super slow language runtime running on top of WASM (like e.g. Blazor).

The issue has been known since the very beginning. Nobody in charge is willing to solve it.

Also, 2. is not happening, even though it has been discussed for many years.

But that actually makes "sense" from the WASM people's perspective: WASM was never meant to by concurrency to JS in the browser!

WASM is merely a performance booster for where JS sucks (like numerical code).

That WASM will allow other languages (and their ecosystems) to be used in the browser is just a daydream of some. The companies behind "the web platform" won't give up on their billions' worth of investment in the JS ecosystem.

So no, GC languages other than JS won't ever run "natively" in the browser in a meaningful way. WASM is crippled on purpose in this regard, and as long as the current stakeholders continue to control "the web platform" this won't change.


> WASM is (currently) not able to be a compilation target for an efficient GC implementation.

Even if it was, there are other problems that mean the dream of being able to use alternative languages and runtimes on the web is far off, if it even ever happens at all.

As a hobby I'm writing a design doc for an alternative non-web system design. It enumerates some of those problems and proposes solutions, along with other dissatisfying aspects of the web (e.g. the unimplementably large size of the web specs).

https://docs.google.com/document/d/1oDBw4fWyRNug3_f5mXWdlgDI...

It's designed to be a very lightweight set of layered specs and projects that are way cheaper to implement than HTML5, can be developed and deployed incrementally whilst providing value from the start and which places other runtimes on a level playing field vs HTML. It also addresses many of the details you need to tackle for any serious web-like system such as transiency, sandboxing, tabbed WM, cache privacy, portability and hyperlinking.

I sent it to Ian Hickson, who found it interesting, but of course the sticking point is funding models. Being way cheaper to implement than the web doesn't mean it costs nothing to implement. The web benefits from the largesse of rich patrons; any alternative would need to either find a patron or find some business model that let it grow in quiet corners until it was strong enough to be fully competitive.


> This comment repeats a fundamental error and misunderstanding

Unless I'm mistaken, you seem to be vigorously agreeing with me about performance:

I said that implementing a GC inside WASM right now is possible (eg Blazor, wasmer-go) but slower and bigger than if a GC was built into the wasm virtual machine.

You said:

> WASM is (currently) not able to be a compilation target for an efficient GC implementation. It lacks features needed to be able to implement GCs reasonably, like memory barriers. So 1. is out of scope if you don't want a bloated super slow language runtime running on top of WASM (like e.g. Blazor).

... Which reads to me like, "it's possible (e.g. Blazor) but doing it the current way makes it bloated and super slow". I agree!

> Also, 2. is not happening, even though it has been discussed for many years.

As another commenter pointed out, wasm-GC is in the implementation phase. It's already supported in Firefox and Chrome, though in both cases behind a feature flag.

https://github.com/WebAssembly/proposals

> It lacks features [...] like memory barriers. ... WASM was never meant to by concurrency to JS in the browser!

What does concurrency have to do with any of this?


> I agree!

Sorry, looks like I've misunderstood this part.

What I've understood was that GC in WASM "is totally possible right now". Which it isn't.

> As another commenter pointed out, wasm-GC is in the implementation phase. It's already supported in Firefox and Chrome, though in both cases behind a feature flag.

I don't believe anything meaningful will happen there. The "GC support" was "announced" right when WASM was introduced. Half a decade later, nothing has happened. A high-end GC is even right there, namely in the JS runtime, and all that would be needed to use it would be handing over a handful of API wrappers.

I read a little on that topic last year, as I wanted to know the current state and why it takes forever to implement this triviality. But all I can see, everywhere, are stalling tactics… People keep coming up with endless "but"s. For years now. More or less since the day WASM was introduced.

I therefore came to the conclusion: this multi-language promise of WASM just won't happen (in any meaningful way). The people behind "the web platform" (Google) are mostly not interested in making web applications just a poor man's "Java WebStart", and "the web platform" just an arbitrarily replaceable language runtime. The moment you could run any language (and its ecosystem) on the web, there wouldn't be any real incentive to invest in web apps on "the web platform", and Google's empire would fall apart.

> What does concurrency have to do with any of this?

Nothing I guess. :-D

I am not a native speaker. I fell for a "false friend"...

I wanted to say: "WASM was never meant to be competition to JS in the browser."


> I wanted to say: "WASM was never meant to be competition to JS in the browser."

I hear what you’re saying, but there’s no evil javascript lobby group running around trying to stop other languages from becoming viable in the browser. Google doesn’t care - they don’t make less money from advertising if Go becomes a viable language for frontend web applications. Google, probably more than any of the other big tech companies, is led by engineers. And I think lots of googlers really dislike javascript and would love to have other viable options.

So why has it taken years to get GC in wasm? After attending a few IETF meetings, I'm increasingly convinced that decisions take time proportional to the number of people in the room. The wasm working group includes all the browser vendors - Google, Microsoft, Apple, Mozilla - and a lot of other companies and individuals. That's going to make any big change to wasm take longer than everyone wants, even if everyone is on board with the proposal.

Rust is suffering from the same thing. Their inclusive decision making process has made language evolution slow to a crawl in the last few years as more and more people have put up their hands to get involved. There’s too many cooks in the kitchen and they’re getting in each other’s way.

Another take on this is Hanlon’s Razor: Never ascribe to malice what can be explained by stupidity. Nobody is trying to undermine the GC proposal. It’s just slow going. Implementations exist already (behind feature flags). Hopefully wasm-gc is released before the end of the year. We’ll see.


> The benefit of (1) is that it makes webassembly virtual machines simpler and safer

Why would that be safer at all?


It lowers the surface area for bugs (and thus vulnerabilities). WASM is gloriously simple right now. Adding a garbage collector dramatically increases the security surface area.


I do believe that the tradeoff is well worth it though. We can write very high quality software (think of JVM, V8, etc) when the incentives are different than the 163748th CRUD app.

Otherwise we could just as well use a trivial brainfuck interpreter as target, that won’t have a vulnerability ever.


do you have an opinion on which direction is more likely? naively I'd say 2 would be better since it'd require less language-specific toolchain work for a better devexp for the end developer.


Option 1 - Webassembly without a built-in garbage collector - exists today. You can compile Go or C# to WASM today, and it will output a WASM module with an embedded garbage collector.
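
For a concrete sense of what that looks like today, this is roughly the loader snippet Go's own docs suggest for the browser - the compiled main.wasm carries Go's runtime and garbage collector inside it, and the page only needs the small wasm_exec.js shim that ships with the Go toolchain:

    // Sketch following Go's documented wasm setup (requires wasm_exec.js).
    const go = new Go();                          // constructor defined by wasm_exec.js
    WebAssembly.instantiateStreaming(fetch("main.wasm"), go.importObject)
      .then((result) => go.run(result.instance)); // Go's runtime + GC start here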

There's a draft proposal to add a built in garbage collector (option 2) to WASM. I have no idea what the current status is - maybe someone involved can chime in. I suspect some version of the WASM GC spec will land eventually.

When it does, languages like Go, C# and Java should start to become competitive with javascript as frontend web languages.

https://github.com/WebAssembly/gc


WASM GC is already in the implementation phase[1], which means, IIUC, that it's only a matter of time before it makes it into the official standard (i.e. the spec itself is mostly done now)... the implementation is behind a feature flag in Chromium[2], and Firefox seems to be far along in making it available; see the current list of WASM-related issues they're working on[3]:

[1] https://github.com/WebAssembly/proposals

[2] https://chromestatus.com/feature/6062715726462976

[3] https://bugzilla.mozilla.org/buglist.cgi?product=Core&compon...


How does it find the GC roots on the stack?


AFAIK garbage collected 'objects' are completely separate from regular WASM heap allocations and tracked directly by the WASM VM (I may be wrong though, but details are here: https://github.com/WebAssembly/gc/blob/main/proposals/gc/Ove...)


I meant in e.g. the C# runtime...


My experience with Option 2 is that unless memory is plentiful, you'll always reach a point where you need to be GC-aware. Those objects you kept a reference to? The GC can't collect them. This hot path that does allocations? Better implement an object pool.


You can use reference counting memory management


Which is usually slower than a real gc and must punt to a real gc when you run out of counter space or have cycles.
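
A toy sketch of the cycle problem (the counting here is illustrative, not any real library's API):

    // Manual reference counting: a cycle keeps both objects alive forever.
    function release(obj) {
      obj.rc -= 1;
      if (obj.rc === 0) console.log("freed", obj.name);
    }

    const a = { name: "a", rc: 1, other: null };
    const b = { name: "b", rc: 1, other: null };
    a.other = b; b.rc += 1;   // a -> b
    b.other = a; a.rc += 1;   // b -> a (cycle)

    release(a);               // 2 -> 1, never reaches 0
    release(b);               // 2 -> 1, never reaches 0: both objects leak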


> Option 1: you rely on every single developer to be careful and make no memory-leaking mistakes.

Nope. You use a language that doesn't have this problem and handles memory safety within the compiler, just like Rust does.


Rust doesn't attempt to guarantee no memory leaks. Memory leaks are allowed in safe Rust.


Sigh, no GC language guarantees no memory leaks either.


So when someone comments "Or we could use a language that doesn't let you write memory leaks using the type system, like Java" then we can reply that that's not true about Java. But in this comment thread people only wrote that about Rust.


Read the thread. The implication was that non-GC languages leads to memory leaks.

Rust was then presented as an option that isn't worse than a GC language in that regard.

Countering that with "Rust doesn't attempt to guarantee no memory leaks." is just silly...


I read it. What problem is Rust claimed not to have, which C and C++ have?

Which GC language makes no attempt to collect unreachable reference cycles, so that Rust is no worse?


Rust is a tiny language compared to like the top 5 most popular languages, why do you assume that everyone should switch boats for that?


I don't. I just suggested one language that deals with memory safety without a GC. There are multiple other languages besides Rust to choose from.


Rust was just one suggestion....

Modern C++ is another. Swift is another, etc.


>so you better spend a lot of thought on memory management.

The more probable outcome is stuff gets shipped with memory leaks since it isn't your server's memory it's eating.


A GC doesn't prevent memory leaks though (if you keep holding references to objects that are not actually needed anymore).
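
A minimal illustration in plain JS (the cache here is made up):

    // A GC can only reclaim what the program no longer references.
    const cache = new Map();               // lives as long as the page does

    function handleMessage(id, payload) {
      cache.set(id, payload);              // never evicted, so always reachable
    }
    // Every payload ever seen is still "in use" as far as the GC is concerned.
    // Fix: evict explicitly, or use WeakMap/WeakRef so the collector may drop entries.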


A GC does prevent leaking any unreachable objects, which is as good as it gets.

Sure, if you store strong references to everything, nothing can be done, but that’s a trivially debuggable problem.


...which is also the most likely reason for a memory leak when you manage your memory through refcounting (or rather: with refcounting, memory leaks are just as unlikely as with a GC).

But even without both, these days (with memory debuggers and profilers integrated into IDEs) pretty much all memory related problems are 'trivially debuggable', this stuff isn't as scary anymore as it was 20 years ago (e.g. Xcode's Instruments has a leak detector which lets you click on the line of code which caused a memory leak, and Visual Studio has similar features).


Memory leaks have never been scary with tracing GCs, and they are not too scary with RC either, though I would guess that leaks from cyclic references are a top-10 bug for Swift and the like. Memory corruption has been/is the real issue.


I'm one of the biggest GC curmudgeons, but as long as it's an opt-in thing that you do not pay for when you do not use, I am happy.

I speak as the kind of developer who wants to optimize the memory management.


As a developer, absolutely!

But as someone who uses the internet for browsing, strong disagree.


As someone who uses the internet for browsing, I don't want to download a separate garbage collector for every web app I use.


Most of what makes browsing slow is network latency. Each website is in essence a 'streamed' app.


I've definitely seen janky webapps with the jank persisting even after all resources are fetched. Modern JS engines are very fast, but developers are faster (in pulling in quadratically more dependencies, negating all performance improvements instantly).


Are you sure everything has been fetched and processed? Sometimes this happens dynamically. Compare the jank to an Electron app, where all artifacts are preloaded and preprocessed.

To balance that, I've also seen jank in native mobile apps where developers haven't properly used the background thread, or done something strange in interface builder where the constraints don't solve properly.

> in pulling in quadratically more dependencies, negating all performance improvements instantly

Adding a lot of dependencies will affect memory consumption which may have knock-on effects due to paging if your system is low on memory, but that aside it's what and how much code is executed per frame that matters to speed.

Dynamic memory allocation and garbage collection are probably the culprits you're searching for, but improvements in JavaScript engines and framework libraries have largely negated this issue for most websites.


Not by a long shot. Slow network latency also isn't taking up gigabytes of ram either.


> Not by a long shot

If I compare electron apps like Slack/Spotify/VS Code/etc, where all the assets are preloaded and processed, the app runs smooth. No FOUC or glitches as the page changes layout during loading.

> Slow network latency also isn't taking up gigabytes of ram either

Can't argue with that, given they are 2 different metrics.


Oh god, electron apps have terrible performance. The apps do not run smooth by any metric.

Even static pages served over the internet are faster than Spotify/Slack running locally.


Can’t say I’ve ever seen them jank. In my experience, compute doesn’t seem to be the limiting factor for Spotify etc., it only ever seems to be waiting for API calls to return. There really isn’t a whole lot of compute going on in a typical front end, and modern browser engines have JIT virtual machines and parallel processing with the final rendering pushed into the GPU. Even the processing power in a low end mobile is not struggling on compute for these applications


I don't see how it would be possible to have good DOM interop from WASM without some form of GC accessible from WASM.


If the WASM module has no GC'able objects, then there'd be no reason for the GC to run. At least in theory. Certainly the 0 GC'able object case seems like an easy optimization.


Yeah, but once GC is an option, 99% of the sites you visit will utilize it and use as many resources as they can ever "get away" with.


Maybe. Everybody assumes that once WASM gets a GC framework that their favorite language, e.g. Python, will become a first-class citizen. But I suspect this won't come to pass for two related reasons:

1) There's no such thing as a universal GC. GC semantics differ across languages because the semantics matter; language developers make different choices. For example, some GCs support finalizers, others don't. Some with finalizers support resurrection, some don't; likewise, some languages specify a well-defined order for finalization (e.g. Lua defines it as the reverse order of allocation).

2) Similar to GC, there are other aspects that will prevent popular languages with complex runtimes from being compiled directly to WASM without altering language semantics or the runtime behavior relative to the standard, native environment. For example, eval.

So even with GC, the choices will likely remain the same as they are now: if you want the full experience of your favorite rapid-development language within the WASM virtual machine, you must incur runtime overhead, up to and including double virtualization. Or, alternatively, you must contend with a bifurcation in a language's ecosystem: native semantics vs WASM semantics. This will all be compounded by how programmers typically treat even the slightest differences, compromises, or concessions in behavior or performance as ritual impurities to be shunned. Ultimately, I don't expect the current status quo to change: languages other than statically compiled, strongly typed, non-GC'd languages like C, C++, Rust, or similar won't see much more usage in WASM environments than they already do, either browser-side or server-side.


I think people don't distinguish between static languages with GC and dynamic languages. Static languages will run well; they already compile to binary or bytecode. My guess is that Java and C# could compile to WebAssembly, and the JIT or AOT overhead would be relatively small.

Dynamic languages need to read source and either interpret or JIT it. The result is much larger runtime and worse performance. It will probably always be worse than running JavaScript. A few people will want to use Python but it won’t be common.


GraalVM’s Truffle project seems to get away quite well with a universal GC implementation.

With it, one can create a competitively fast language-specific runtime simply by creating an AST-interpreter: so far the more complete ones being JS, Ruby, R, Python and even LLVM bitcode — all of them mapping to the JVM’s state-of-the-art GCs. This allows easy polyglot programs as well, where the JIT compiler can even optimize across language boundaries!

I think that languages with more niche control flow/GC semantics should just accept the tradeoffs and find some other way around - which is not unheard of in case of WASM, if I’m not mistaken things like stack pointers also have limitations in WASM vs the native C world.


That's possible partly because the Java GC featureset is nearly a superset of what other GC runtimes offer, and partly because Truffle/Graal integrate deeply with the JVM compilers for things like barriers.


Would this mean that we might end up with a new language that compiles to WebAssembly with GC? Logically, it would be statically typed, since anyone wanting a dynamically typed language could just use Javascript.


There's already one ready and waiting: https://www.assemblyscript.org/


Existing compilers for GC'd languages that target WebAssembly today have to use inefficient schemes to make their runtimes work, making the website slower than they would be otherwise, which is pretty much the entire point of OP.

And anyway, regarding "lack of GC in WASM made it appealing" -- support for high level languages with GC semantics was always a long-term goal for WebAssembly and GC was thought about by the relevant parties long before the 1.0 spec was even ratified (the initial placeholders were added as early as 2017, 2 years before 1.0 ratification); it was just not within scope for the earlier versions because it's a pretty big topic, including many things that aren't even wholly GC related e.g. value types. But this isn't surprising either; most of the earliest implementations and concerned parties were browsers, and the interactions between WebAssembly and JavaScript's memory models, along with the popularity of Javascript-targeting compilers (which suffer from many similar contortions), meant that GC was always a pretty obvious "This is a thing we need to think about in the long run" feature.


Which might/should prompt a rethink...

You are correct, but I disagree with the premise. Javascript and the entire ecosystem are horrible and only rose to prominence because there was no other option.

For the first time ever we have a shot at making something better, and our goal is to adapt it to the lowest common denominator? We most certainly don't make it easy for ourselves...


> Which might/should prompt a rethink

It shouldn't. Its goal is to continue running in browsers and to have proper interactions with browser objects like the DOM. And those are garbage-collected by the browser. So you need at least some GC support in wasm to tell it that an object is released and to check whether the object is still held by the wasm runtime.

> and our goal is to adapt it to the lowest common denominator

How is garbage collection the lowest common denominator?


Isn't it more likely that if there never was first-party GC, everyone who wants GC is just going to bundle a bad one instead, making those sites _slower_?


That’s already happening with .NET, yeah


As opposed to running the litany of GCd javascript they do now?


I don't understand this logic. If you take GC away, the people who would leak using GC will leak using malloc instead. Unless you're actually proposing that those people shouldn't be allowed to ship software?


As opposed to now, where they might be using even more memory because bring-your-own-GC solutions are that much more inefficient, as outlined in the article?


Frequently the software running inside WASM sandboxes has its own garbage collector anyway, and it's not going to perform as well as a native host collector in some cases. Especially if you start doing cross-language GC in order to talk to the host (web browser) and manipulate its objects. Of course, exposing a WASM GC API that can actually meet the needs of real software is tough, which is why it didn't happen...

If you think WASM software is going to be efficient with its memory usage, you should look closer at the memory model and take note of details like 'you can't ever shrink the heap' and 'you can't do read-only shareable mappings'


Force people to think a lot more about memory than they want to, and they'll just keep using whatever they currently use, which is probably not optimal for performance.

It's not like there is any chance of a WASM-only future any time soon, and as long as it's competing with JavaScript (and Emscripten targeting it), there's a case to be made for it to be more accommodating.


Not sure how that follows since JS exists and a lot of people are going to just build on a runtime anyway.


> It is good to be careful about utilizing memory on your visitors' machines, so you better spend a lot of thought on memory management.

Which is why Javascript has a gc.


>Lack of GC was one of the appealing parts of WASM to me.

Same for me, especially paired with Zig, where memory management is far more central to development. It's a shame the JS interfacing overhead is so high, and I think energy would be better spent improving that rather than adding bloat to WASM.


Andy Wingo's blog is awesome. A great resource for those interested in PL design and development. I learned a bunch of stuff about delimited control from this blog, among other things.

Also, Andy implements delimited control in this post! What a legend


My question is: when will WebAssembly be able to access the DOM without needing to call Javascript to jump back and forth?


This depends chiefly on the component-model work shipping. https://github.com/WebAssembly/component-model

At the heart is a longstanding wasm "interface types" idea (WIT), defined by an IDL, https://github.com/WebAssembly/component-model/blob/main/des...

Once we know how bits of wasm talk to each other we can have browsers expose web platform "host objects" via this standard inter-module means.

While an interesting topic to most app developers, this is pretty off-topic for language nerds. It will greatly level the playing field between implementations when it becomes available, as opposed to today, where only those willing to pour enormous effort into writing their own serializing bridges get there... but it doesn't really alter what a language can do and how it will work in wasm. This talk is, to me, more about languages. WIT & host objects will be an enormous boon, but overall they won't much affect language design.


I disagree that the Component Model or WebAssembly Interfaces (WAI) is what's holding up standardization of DOM access from Wasm.

All those things can already be done via WebIDL (which, in fact, is the standard that wasm-bindgen in Rust already uses to generate the Rust bindings).


None of that is standard though.

wasm-bindgen goes away after this. Objects just become usable directly. wasm-bindgen is a monstrously complex layer, and it doesn't allow wasm direct access to the DOM. It very cleverly installs a complex bridge that translates on the fly between JS and wasm, making wasm think it has transparent access, but it's quite clear, if you actually start debugging what's happening in the browser, that there is a huge middle layer of code wasm-bindgen has built for you to present that illusion.
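
For readers curious what that middle layer looks like, here is an illustrative sketch of the general handle-table pattern such glue code uses (this is not wasm-bindgen's actual generated code; the module name, import names and readUtf8 helper are made up): JS objects stay on the JS side, and only integer handles and pointers cross the wasm boundary.

    // Host-side glue: JS objects live in a table, wasm only sees indices.
    const heap = [document];                      // handle 0 = document
    const addObject = (obj) => heap.push(obj) - 1;
    let memory;                                   // set to the module's exported memory after instantiation
    const readUtf8 = (ptr, len) =>
      new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));

    const importObject = {
      env: {
        // Hypothetical imports a DOM-using module might declare.
        get_element_by_id: (ptr, len) =>
          addObject(document.getElementById(readUtf8(ptr, len))),
        set_text: (handle, ptr, len) => {
          heap[handle].textContent = readUtf8(ptr, len);
        },
      },
    };
    // WebAssembly.instantiateStreaming(fetch("app.wasm"), importObject)
    //   .then(({ instance }) => { memory = instance.exports.memory; /* ... */ });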

And again, none of it is standard. There may, I think, be a creative language or two that has built atop Rust's painstaking cross-serialization... but these techniques are far from standard.

The actual standard makes a raft of these ugly unsightly slow intermediary translation layers go away. It makes an actual standard for cross language communication. Not just rust<->js, but rust<->go or scheme<->f# or whatever else.


> None of that is standard though.

WebIDL is pretty much the standard for defining the ABI for the web APIs. All browser engines use it, as far as I'm aware.

I agree that there is no standard way to do it from any language other than JS, but from JS that's pretty much set. I wonder why we need to recreate rather than reuse. It would be way simpler if Wasm simply adopted WebIDL and the language bindings just adapted to it with slight modifications (such as DOMString or others).

https://webidl.spec.whatwg.org/


Have you, CEO of Wasmer, suggested, worked on, or seen efforts to make WebIDL an ABI for wasm to work across?

I think of WebIDL as a very abstract high-level interface that browsers can use to generate some engine-specific stubs. I feel like with wasm there ought to be a stable ABI so modules can be interoperable across implementations. It's an interesting suggestion to imagine trying to extend WebIDL to fit the job.

As ceo of wasmer you must certainly be aware of at least some of the sundry low level details & duties that WIT has to perform. I feel like it's pretty likely we'll end up with some WebIDL-to-WIT generators or tooling that can help get the web platform wasm-sized. It is a bit self-serving to go off & create a new thing to serve wasm's use case first, but the low-level-ABI vs high-level-IDL nature of WIT vs WebIDL does seem like a pretty different set of fundamental objectives, at least to me. But maybe, and wouldn't it be great, if we could have a simpler unified IDL like you suggest?


> Have you, CEO of Wasmer, suggested, worked on, or seen efforts to make WebIDL an ABI for wasm to work across?

Me, as CEO of Wasmer and son of my mother, have seen efforts in many contexts to try to reuse standards rather than recreate them. Jokes aside, I think it's useful to think on how things that already exist can fit the current panorama rather than creating new standards that might take longer to adopt.

> As ceo of wasmer you must certainly be aware of at least some of the sundry low level details & duties that WIT has to perform

Of course I think that WebAssembly Interfaces can fulfill some part of the picture. In fact, at Wasmer we have been working on that front to make sure it fulfills our company's needs [1]. But in the context of the browser, I think WebIDL would have fulfilled the goal of DOM access from Wasm far faster than any other "standard" that is not yet standardized. I'm advocating here for use-case first (product-led development), rather than tech-first. I wrote about this in more detail here, in case you're interested! [2]

[1] https://github.com/wasmerio/wai

[2] https://wasmer.io/posts/is-not-about-wasm-is-about-what-you-...


That's how all browser APIs are accessed from WASM; the DOM is nothing special in that regard. The only way for WASM to interact with the 'outside world' is through import and export function tables, and at least in the browser all functions in the import table will be Javascript functions.
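
A minimal sketch of that import/export mechanism (demo.wasm and its import/export names are hypothetical): everything the module can reach in the outside world is handed to it here, and in the browser those imports are JS functions.

    const importObject = {
      env: {
        // The module's only window onto the browser: JS shims like this one.
        set_title: (n) => { document.title = "counter: " + n; },
      },
    };

    // (inside an async function or module script)
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("demo.wasm"),        // hypothetical module importing env.set_title
      importObject
    );

    instance.exports.run();      // and the host calls back into the module's exports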

The only alternative would be for all browser Javascript APIs to get a separate "native C-like API" which could be plugged into the WASM import table and circumvent Javascript. That's basically how NaCl worked, with the downside that it only exposed a very small slice of browser APIs; for everything else one had to interact with Javascript via messaging, and this was a massive royal PITA. WASM does everything right in that regard.


Isn't Wasm just stuff transpiled to javascript and then pre-interpreted to bytecode anyway? The ole' Emscripten console port.


No. WASM is its own byte-code, which can be either interpreted directly, or JIT'd to machine code. There's no transpile to JS step.

(There's a text representation too.)
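
You can check that it's a standalone binary format directly from JS - the eight bytes below are the smallest valid module, just the magic number and version, with no transpile-to-JS step anywhere:

    // Smallest possible wasm module: magic "\0asm" + version 1.
    const bytes = new Uint8Array([
      0x00, 0x61, 0x73, 0x6d,   // "\0asm"
      0x01, 0x00, 0x00, 0x00,   // version 1
    ]);
    console.log(WebAssembly.validate(bytes));    // true
    const mod = new WebAssembly.Module(bytes);   // compiled directly, no JS step involved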


Wait was that asm.js then? It's so confusing, I swear that each time I check I find people claiming different things about this.


asm.js is something of a spiritual predecessor to WASM.

The neat thing about it is that it's both syntactically and semantically valid JavaScript, but also compiles to native code much more directly than JavaScript generally does.

WASM is essentially a successor to asm.js that is not valid JavaScript anymore – browsers do need to provide a different runtime for it.

I believe some early WASM runtimes did reuse parts of their JS stack for JIT compilation, but I believe this is no longer the case, since WASM can efficiently be AOT compiled (unlike JS, which does need the JIT to be efficient due to its much more liberal type system, among other things).


it's less important than you probably think: https://news.ycombinator.com/edit?id=35240914


Rust targets WebAssembly, which apparently works well because Rust is not garbage-collected.

When will WebAssembly get real threads, not just shared memory between processes? I've seen articles from years ago talking about it as a future feature. Current status?


There is a WebAssembly proposal called "Threads".

https://github.com/webassembly/threads

"This proposal adds a new shared linear memory type and some new operations for atomic memory access. The responsibility of creating and joining threads is deferred to the embedder."

As for the current status, it is listed on the proposals page: https://github.com/WebAssembly/proposals

The "Threads" proposal is in "Phase 3 - Implementation Phase" which means it's starting to be implemented, but it hasn't reached "Phase 4 - Standardize the Feature" where 2+ browsers implement it. The final stage would be "Phase 5 - The Feature is Standardized. See https://github.com/WebAssembly/meetings/blob/main/process/ph...


Useful info is here:

https://github.com/WebAssembly/threads/blob/main/proposals/t...

Atomics are well covered. The "wait" and "notify" primitives map efficiently to mutexes.

"Blocking on the main thread is not allowed, so we can't call lockMutex."

The main thread is special in WebAssembly land. Is it still the only one that can do graphics?
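
A small sketch of those primitives from the JS side (worker.js is a hypothetical worker script): workers may block in Atomics.wait, while the main thread may only notify - calling Atomics.wait there throws.

    // Main thread: share a buffer with a worker, wake it later.
    const sab = new SharedArrayBuffer(4);
    const flag = new Int32Array(sab);
    const worker = new Worker("worker.js");   // hypothetical worker script
    worker.postMessage(sab);
    // ... later ...
    Atomics.store(flag, 0, 1);
    Atomics.notify(flag, 0);                  // wakes anything waiting on flag[0]

    // worker.js - blocking is allowed off the main thread:
    // onmessage = (e) => {
    //   const f = new Int32Array(e.data);
    //   Atomics.wait(f, 0, 0);               // sleeps while f[0] === 0
    //   // ...now take the "mutex" built on these primitives...
    // };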


Never, because real threads can be used to break out of the sandbox using Spectre.


SharedArrayBuffer had been disabled for exactly that reason but has been enabled again for cross-origin isolated pages (https://developer.chrome.com/blog/enabling-shared-array-buff...) - which in turn allows having 'proper' pthreads in Emscripten: https://emscripten.org/docs/porting/pthreads.html
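
A quick way to check whether a page qualifies (the header values are the cross-origin isolation requirements; the memory sizes below are arbitrary):

    // The server must send:
    //   Cross-Origin-Opener-Policy: same-origin
    //   Cross-Origin-Embedder-Policy: require-corp
    if (self.crossOriginIsolated && typeof SharedArrayBuffer !== "undefined") {
      // Shared wasm memory - the basis for pthreads-style threading - is available.
      const memory = new WebAssembly.Memory({ initial: 16, maximum: 256, shared: true });
    }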


Can what now? I was under the impression spectre gave you access to data, not arbitrary code execution.

Also, I imagine that web assembly is a bytecode format and that should be _less_ susceptible to spectre.

Can you expand? This seems wrong to me.

Edit: read some papers, I'll be damned, it can. I don't understand how tho'. Would love to try playing with a POC that does that.



What is a "real" thread? A POSIX thread?

The entire notion of processes and threads is a bit weird inside of a WASM module. Arguably a WASM module shouldn't care about them at all - the host runtime should be managing parallelism.


The host runtime might provide the ability to spawn threads, but WASM would presumably still need to establish a memory model. I don't know if it has one, or not.

(Or, I suppose it can just completely isolate the memory between threads, and say nothing is shared. But I don't think that's what people would want?)


I'm confused. certainly as an application developer in most languages today I have to be aware of and manage any execution concurrency


Half of the article is entirely about GC.

> Support for built-in GC set to ship in Q4 2023


Parent is asking about threads not GC.


Add to that that the GC implementation is different for each language, so having a generic garbage collector might not be desirable. Consider Erlang, which supports millions of processes per node:

> Each Erlang process has its own stack and heap which are allocated in the same memory block and grow towards each other. When the stack and the heap meet, the garbage collector is triggered and memory is reclaimed. If not enough memory was reclaimed, the heap will grow.

That said, there is a WebAssembly compiler for Erlang:

https://github.com/GetFirefly/firefly


What do you need for "real" threads functionality, beside shared memory? Shared mutexes? Fair; Rust somehow implements them, but I don't know how. Shared file descriptors, sockets, etc? They don't seem to exist, and if they did, I think sending (moving) them between threads in a message would do the trick.

What else am I missing?


More compute power, for games. I need about a half dozen CPUs running flat out.


Could one just treat WebAssembly as virtual bare metal, implement a multithreaded OS in WebAssembly and then spawn threads inside that?


That's basically how the Go WASM runtime works[1], except when calling outside the WASM virtual machine (e.g. invoking a JavaScript function) the entire Go runtime must block.

[1] The Go compiler inserts yield calls (preemption opportunities) at various points in the emitted code, such as at function invocations and within loops.


Technically you could, but they'd all be run on the same CPU core.

You could spawn a Web Worker for each core, in theory, although some browsers will lie about the number of cores.
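
Roughly what that looks like (compute-worker.js is hypothetical, and hardwareConcurrency is only a hint that some browsers clamp or fuzz):

    // One worker per reported core, all sharing a single wasm memory.
    const memory = new WebAssembly.Memory({ initial: 64, maximum: 1024, shared: true });
    const cores = navigator.hardwareConcurrency || 4;
    for (let i = 0; i < cores; i++) {
      const w = new Worker("compute-worker.js");   // hypothetical worker script
      w.postMessage({ memory, workerId: i });      // a shared memory can be posted to workers
    }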


> Technically you could, but they'd all be run on the same CPU core.

Right. The goal for games is to get more CPUs working on the problem.


What's the difference between real threads and shared memory between processes? I don't really follow. A Memory object is all you have in webassembly for your process. If it's shared between multiple workers, you basically get threads.


Among other problems, each worker has to instantiate all the host javascript objects and instantiate its own copy of the wasm module, which has its own unique copy of the imports and the function pointer table. It's definitely way less elegant than real threads and potentially creates performance and stability issues.

You also now have fun thread affinity problems, like if you call setTimeout your timer IDs now have thread affinity, which is nonsense.


what do you see as the important difference between real threads and interprocess shared memory?


When time skews and you get sent back a decade or two.


I would refer to Kelsey Hightower's random twitter space[1]. The last 15 min will do.

On paper it's nice and all, but it's just not there yet. I've seen so much hype around WASM since '20, but the promised vision is always "any time soon"; this triggers my hype alarm a lot.

Say what you will about the frontend/JS ecosystem, at least most JS projects tell you exactly what value they will bring within three screen scrolls. I believe the rest of the SWE community would do well to take a page or two from JS.

[1]: https://twitter.com/kelseyhightower/status/16367865726738145...


WASM delivers, the only problem is that it is unreasonably hyped by people who either don't quite know what it actually is, or who want to push their WASM startup.


cloudflare did a piece on how WASM supports it. ~18% of all websites on the web run through cloudflare.

https://blog.cloudflare.com/big-pineapple-intro/

(I'm not a software developer)


Blazor had to ship the whole .NET runtime to have support for garbage collector and threads.


How big is it?


I think it's currently around 2 MB, they've been working on shrinking the download with each new .NET release


It's great that they're trying to reduce the .net browser runtime size. But I'm really hoping this works out well:

https://www.youtube.com/watch?v=48G_CEGXZZM

One of the biggest pain points of Blazor WASM is the initial load times. So if what's described in the above video works, then devs can use Blazor Server pages for initially rendering the layout while the Blazor WASM part downloads in the background.


It seems this is the current doc about the GC:

https://github.com/WebAssembly/gc/blob/main/proposals/gc/MVP...

Looking at the "instructions" section... is it going to be slow again, bringing the speed of interpreted code back to WASM?


The new Kotlin WASM compiler uses this. You currently have to set some feature flags to unlock this in browsers:

https://kotlinlang.org/docs/whatsnew-eap.html#how-to-enable-...


This is awesome, always great to read Andy Wingo blogposts. Eager to see what the future of Scheme in Wasm looks like!

Here's a bit more info on the Spritely side about why they want Scheme in Wasm (it's both funny and great to see how the money from decentralized/web3 projects is leaking back into the real world!): https://spritely.institute/news/guile-on-web-assembly-projec...


All I hear when someone writes "WebAssembly is coming" is "more RCE exploits are coming - via the thousands of lines of new code I have to hook up to the Internet by using a browser".

The industry has barely finished debugging the monstrosity that was browsers before - XML, JavaScript, CSS, WebGL, WebRTC, ... so now let's add another giant source of security issues to them!

When will this madness stop? When will browsers actually be capable of doing enough and be moved into maintenance-only mode where only security issues are fixed and no new code is added?

Surely some will say "well, WebAssembly will deliver that precisely - browsers now can run all the code".

But wasn't this the promise with JavaScript already, a Turing-complete language in the browser to end the need for more features of HTML?

Anyway, to deliver some value by this comment:

To disable WASM in Firefox, set "javascript.options.wasm = false" in about:config.

Some websites say you also need to set "javascript.options.wasm_baselinejit = false" and "javascript.options.wasm_ionjit = false" but I don't understand what the point of disabling JIT would be if the whole of WASM is disabled anyway?


The current webdev paradigm is "treat JS as bytecode". We have enormous build processes that compile high-level languages (TS, JSX, SASS etc.) into "low-level" and unreadable JS/CSS/HTML. The latter were supposed to be the high-level language interfaces to the browser. It's a mess.

We'll be much better off with an actual compilation target i.e. WASM, full stop.


There’s a cycle where something gets big because people use it. Say, a mushroom picker puts up a document about picking mushrooms. Now he can enjoy his hobby most excellently!

Next the programmers get interested. They help the mushroom picker upgrade his site with maps and a spreadsheet you can search and everything. Now the programmers can enjoy their hobby most excellently!

Then the overengineers take over and insist that it run Linux. Now you can compile Linux to Wasm, and it works if you just configure the endpoints according to an elegant scheme! The overengineers can enjoy their hobby most excellently!

But the mushroom picker who started this thing isn’t going to come back. No one ever picks up the thread of development again, and there’s nothing to do except “be online.” The ride is over.


There's nothing preventing people from creating a regular HTML/CSS/js site, just like they can do right now, even though all major browsers already support wasm.


WASM makes browsers simpler, not more complex. It's much easier to get the implementation of a 20 page spec correct than the combinatorial monster that is 200 high-level language specs and APIs. The more we can push into a small formal core, the better. Formal verification tools call this the de Bruijn criterion. You create a small formal core for your proof system, and everything high-level just compiles to that.


> It's much easier to get the implementation of a 20 page spec correct

The JS spec was also 20 pages, once. Then we got modern Javascript.

wasm spec will grow. Just look at the roadmap: https://webassembly.org/roadmap/


Feature creep is an orthogonal issue that I 100% agree with. We need to stay vigilant and push back against bloat wherever possible. Exceptions are a mistake imho, memory64 is an absolute must have otoh.


Surely that is a noble goal!

But isn't it a case of XKCD 927? https://xkcd.com/927/

I.e. I would say the probability is zero that, because WASM exists, other existing complexity will be removed from browsers.

Because the web is so vast, if you once add a feature to browsers you can never remove it because that would break an unknown amount of websites, or even intranet sites.

So now we have wasm + N other subsystems, so N+1, and the security of N+1 systems is less than that of N.


We break/deprecate the web all the time. Web backwards compatibility is a myth; it's just that we don't get any calls from the 90s complaining that their dog's website broke.


How long have you been using the Web?

ActiveX? DYNSRC? Frames? Flash? How about Gopher support? Capabilities being removed is a time-honored tradition in the browser world.


I feel like you're pretending like there's no value to it? If there was no value then yeah it would be stupid to do it. But it is valuable, because people want within-10%-of-native performance.


If people want native performance, they could just publish native software instead of websites :)


And lose all the value of distributing their software via websites! Again, if you ignore all the value of solutions, then yes, the solutions seem bad.


Perhaps that is just the tax they ought to pay for wanting to squeeze out more performance? :)

The alternative of not paying that tax by using WASM means "I want native performance but I don't want to pay the price of having to do native development."

So what developers are doing here is creating an externality - external cost which other people have to pay:

Browsers for the average internet user who just wants to read some news get worse in terms of security because some people want to distribute their software more conveniently at zero cost.

The cost is paid by the users who all now have WASM in their browser, even if they don't need it.


The web is the best distribution platform we currently have and increasing performance via WebAssembly means a wider variety of programs are now viable on the web.


Distributing native software is a PITA nowadays. You're either at the mercy of a random App Store review process, or you can't run the software you just downloaded outside an app store because the operating system doesn't allow it.


Yes they can, if they have the time and skills to port it to the many platforms desired. But maybe you do see the point that it is a bit easier to develop and test for only one platform, as opposed to... many?

(Have you ever released something cross-platform?)

Point being, the web has been a platform for quite a while now, and is no longer just a static site displayer. Provide a technologically better alternative and people will use that.


native gui dev sucks, and the web does a way better job of sandboxing than native things do.


Then I'm running their code outside of the world-class sandbox the browsers provide.


> But wasn't this the promise with JavaScript already, a Turing-complete language in the browser to end the need for more features of HTML?

The problem is javascript sucks. We want to be able to write any kind of application, but we don't want to have to do it in javascript. Ideally, this would mean that you pick Java or C# instead and use one of several cross-platform UI frameworks, but I've never found a native UI framework that was as easy to work with as HTML. If C# would just let us write native UI with HTML and CSS (and not just using electron), then I would never write a webapp again


> We want to be able to write any kind of application, but we don't want to have to do it in javascript.

To make "any kind of application" on the web you need the web to provide sensible APIs for those applications. And Javascript has nothing to do with it. E.g. lack of controls listed over at https://open-ui.org/ has nothing to do with Javascript.

> If C# would just let us write native UI with HTML and CSS

Good luck implementing anything beyond the most basic controls with HTML and CSS.


You're just misunderstanding what he's asking for.


I don't. His concern is completely misdirected. The problem with the web as a UI/app platform isn't Javascript


Would it not be feasible to turn electron inside out, and have chromium as a library, with bindings for various languages?


I don't think so, no, or rather, I think you might lose out on "the web" part of it. That is, that the web really is a bunch of stuff, all accessible in one "thing", and stuff can and does "link" to various other stuff.

E.g., consider an OIDC log in. It's really one app (the relying party), redirecting to a whole different app (your SSO of choice). You can't exactly do that in another app without, I think, really running into issues of "is it my SSO, or a phish?". The browser provides that trusted layer of "I am looking at this app" (via the URL bar). And even then … that's fraught with absolutely immense tons of peril.

It's also a distribution mechanism: I don't have to download Slack, Discord, Postman, etc. — I just go to a URL, and the browser downloads the code needed. (I can and do download some of these, and there are some advantages to do so. But then extend it to every app I use on the web: my bank, Turbotax, my email, my three different loan payment sites, my landlord's payment site … that'd be far too many downloads.)


Rather, wasn't that the promise with Flash, Java, Silverlight?


The improved sandboxing model is supposed to be why this one's going to turn out better (and why it's worth losing some of the ease of development of the old ones...)


Silverlight lives on! I started in SL 1.0 and moved on to Xamarin and now to .NET 7. It's the same code, same concepts, just different conventions around the APIs.

For me, WebAssembly is another target. I've been building webview-based app UIs with (usually) native backends, cross-platform, for at least 10 years now. I still use libraries I made for SL because PCL was just the first evolution of dotnet! Life is good on the MS gravy train! I'm running the same code literally everywhere: backend, frontend, UI, mobile, tablet, cloud. THE SAME CODE, just thin bootstrappers and OS-specific impls of various services.

The best part of WASM, I think, is that I can write services in different languages, for specific purposes, if I ever need to. Yep, I can use Scheme or Haskell if the need fits, but C# has been evolving too and I can write functional code anyway. I'm not in an industry where a thin abstraction causes me scaling issues, but I am in one where it's hard to find good devs, and usually they can read C# and pick it up quickly, no matter their preferred poison.


They were all proprietary though


Yes, except OpenJDK that provided IcedTea, so community patches were possible


Oh right, and ActiveX!


> All I hear when someone writes "WebAssembly is coming" is

"... coming to your insecure browser; not mine."

> To disable WASM in Firefox, set "javascript.options.wasm = false" in about:config.

Exactly. Here's hoping the banks fail before they can start requiring WebAssembly to log in to their websites. ;)


>" Where are the F#, the Elixir, the Haskell compilers? "

I think it makes a lot of sense to target WebAssembly with high-performance "native" languages like C/C++/Rust/Zig/etc. for certain types of apps looking for high-performance computations. As for the rest, it is simpler to just use JavaScript, as the browser already makes a great platform for it.


What about them? Ask the maintainers why they don't support targeting wasm, because that's who's responsible for that part of it. That isn't on the browser or wasm itself.


WASM isn't just for the browser though.


Where else is WASM being run? Is WASM the eventual holy grail solution to writing cross platform desktop applications?


The "only" thing missing for this is to extend WASI beyond POSIX APIs (e.g. you'd need at least 3D rendering, audio and input APIs), but don't hold your breath I guess ;)


On the server, of course.


Haskell already transpiles to C or JS anyway.


A place where this is relevant that hasn't been mentioned so far is in proxies using the ProxyWASM spec. Instead of loading random libraries into the main code of say, Envoy, you instead load it as a WASM module (so you must target WASI or wasm32-unknown) and all you're allowed to do is specify a set of pre-determined entry points for when traffic connects, sends headers, sends body, and on the return path responds with headers, then body and then disconnects. This means that you can do practically all the things you need to do in a pretty portable way, without breaking into the actual program.


It's been so long since webassembly was announced, I don't remember why I was ever excited about it. I don't think it serves much of a purpose in 2023, as Javascript JITs are quite fast.


TIL the assembler for the web will have a garbage collector and can't access the web.


How do browser standards work w.r.t. ensuring things like wasm are still here and functional in 10 years' time? What if our friends at Google, controllers of the most popular browser, decide they can't be bothered with WebAssembly anymore? I'm extremely wary of investing resources in things they control. I know they don't technically have control here, but is there any cause for concern?


Well, ultimately if Google decides that Chrome won't support WASM anymore then it's over, same as when Apple decided that Flash was over (or Google decided that NaCl was over and put their effort into supporting WASM instead).

But:

WebGL1 is much more niche than WASM and it's still around and well supported after 12 years.

WASM was announced in 2015 and released in 2017, so it's also not exactly "new" anymore.

WASM is such a small part of browsers (for instance compared to a JS engine) that dropping it would really not be worth it. It just delivers too much bang for the buck.


WebAssembly is so widely used at this point that it would need a replacement first. Once that replacement shows up and becomes adopted widely you can start worrying.

"Just use JavaScript and WebGL/Canvas" was a big part of the justification for why it was OK to shoot Flash and Unity in the head.


One question that really bugs me is the sandbox security model. I'm sure the model protects us from memory and IO hazards. But what about CPU-time sharing protection? I have yet to learn whether WebAss (or any abstract machine out there) could provide protection from a rogue module running at 100% CPU in an infinite loop. I'm sure a true virtual machine could limit such a thing.

Edit: spelling


Yes, they can. You instrument the module/byte code/JIT'd code to consume a virtual "gas", and when it's consumed, execution returns to the host. E.g., in wasmtime: https://docs.rs/wasmtime/latest/wasmtime/struct.Config.html#...

(I presume the browsers implement something similar, but I've not tested it to see.)


No browser I'm aware of implements that under normal circumstances.

If you run with the debugger open they do tend to insert a lot of instrumentation to enable pausing/breakpoints/etc but that comes at a massive performance hit - in some cases 'debugger open' WASM is slower than JS.


Interesting. I figured they would, given that "normal" JS will happily get aborted if it uses 100%. ("This site is slowing your machine down", etc.)


My guess is that the performance overhead of inserting it into wasm was judged unacceptable. It's a little odd to me, I was surprised when I discovered I couldn't realistically pause a tab running wasm benchmarks - often when I try to close the chrome tab it just hangs.

The runtime I work on (.NET) inserts safepoints at back branches in some cases specifically to enable pausing for GC. It wouldn't surprise me if using WASM GC would cause the runtime to start inserting safepoints as well.


Can you perhaps rephrase your question? It seems like you're asking if the language will handle scheduling priority of a library?

Isn't this the job of the browser/OS? Isn't this already handled in most browsers where each tab is its own process?


I still stand by my question, though. I consider WebAss a VM, which I think, to some extent, it already is, given that it lets the process owner control/limit the memory and IO sandbox. But my understanding is that a true VM also needs to provide a CPU sandbox, right? To answer another question in this thread related to JS "while (true) {}": most browsers would already warn about the script's CPU time and allow us to terminate/stop the long-running script. I could say the same for a true VM that would limit the number of CPUs to use or even set an execution timeout. Even with the JVM, where there is no mechanism to limit the execution of the entire process, I can run JVM plugins/modules in a separate thread and set an execution time limit in that contained thread.


It's the same as running a Javascript infinite loop. The browser will kill the tab if a WASM module doesn't yield back to the browser's event loop after a few seconds.


What's the difference between

    <script>WebAssembly.instantiateStreaming(fetch("cpu_waster.wasm")...</script>
and

    <script>while (true){};</script>

?


The former is asynchronous.


It's still gonna waste a whole CPU (at least)


It isn't built in, but you could mitigate this by rewriting the wasm module to only execute for a certain number of iterations and then return to the outer context. This isn't currently handled in JS either. For Wasm engines that run outside of the browser, some do have the notion of cycle counting.

Wasm is a true virtual machine.


Sorry for the noob question, but what can you do with WebAssembly that you wouldn't otherwise do with other web frameworks? What are people using WebAssembly for, generally?


WebAssembly brings absolutely zero new features to the web. You can get predictable performance for some code and that's about it. Think about it as an optimisation. There's much more hype than essence there. You could compile C++ or Java to JavaScript 10 years ago. Bellard's jslinux was implemented with JavaScript originally and it was enough to emulate x86 in the browser.


It's hard to overstate how much faster WebAssembly is compared to what we had before the advent of asm.js. It's so much faster that it really does enable you to do things you couldn't do before because it was too slow.

Also, WASM does expose things that (afaik) aren't available in JS, like int64, SIMD, atomics, etc.


> You could compile C++ ... to JavaScript 10 years ago.

That's true, but to get any sort of 'near-native' performance out of that approach, browsers had to add special support for asm.js; running asm.js as regular Javascript would lose 3..10x performance compared to 'special cased' asm.js (if I remember right - for instance iOS Safari never added this special asm.js support, and once it supported WASM the performance jump from asm.js to WASM was dramatic and much bigger than on desktop browsers, which had already added special asm.js support before).

Compared to an asm.js special case in a Javascript engine, the WASM approach just makes a lot more sense, because Javascript can now focus again on being a programming language and not also being a good compile target.


You can't compile existing C++ or Rust programs to React or other frameworks.

It's used to compile your app in a format that can run on any machine with a web browser.


I use it for my home computer emulation stuff, the same code also compiles to minimal native applications (but with WASM there's not much of a point except during development and debugging - still it's good to have a "native fallback"):

https://floooh.github.io/tiny8bit/

Also see: https://github.com/floooh/sokol and https://floooh.github.io/sokol-html5/index.html.

(e.g. for me it's a way to easily get my C/C++ cross-platform hobby stuff to people without requiring them having to jump through the hoops of downloading and running an untrusted native executable or messing around with C/C++ toolchains and build systems to build the stuff locally).

PS: WASM in the browser is also a good watchdog which punishes you heavily if your code gets too bloated ;)


It's useful for getting the MediaPipe Selfie-Segmentation model running in the browser - https://codepen.io/kaliedarik/pen/PopBxBM


Can we bring back Java applets? It was unsafe back in the day, but if we can do wasm, surely we can do applets? Lots of good languages and libraries for the JVM.


If you need that, you can: https://leaningtech.com/cheerpj/

Can't vouch for their solution but looks like they do what you ask for.



WASM was cool but then all the trouble came. I am more reluctant to use WASM in production today than a couple of years ago. Seems like the biz agrees...


> What about Haskell, Ocaml, Scheme, F#, and so on – what about us?

I think the industry will get by…



