I'm fascinated by WebAssembly and love that it exists but if anyone tells you they need to use WebAssembly to make the UI snappy I'd advise you interrogate that assertion thoroughly.
I don't want to speak to this example too deeply because I don't know it (I see they're doing all sorts of stuff with audio so maybe they do need WebAssembly) but modern JavaScript VMs are very, very fast. 99% of webapps are absolutely fine using JavaScript.
The far more important part of making snappy UIs is the Web Worker aspect. To my mind it's one of the key reasons native apps feel so much better than web ones: it's trivial to move an operation off the main thread, do an expensive calculation and then trivial to bring it back to the main thread again. Unfortunately the API for doing so on the web is extremely clunky, passing messages back and forth between a page context and a worker context. I'd recommend anyone thinking about this stuff to take a look at Comlink:
https://www.npmjs.com/package/comlink
it lets you wrap a lot of this complication up in simple promises, though you still have to think hard about what code lives inside a worker and what does not. In my ideal world all the code would live in the same place and we'd be freely exchanging stuff like you can in Swift. "Don't do it on the main thread" feels like a mantra taught to every new native developer as early as possible; the same discipline simply doesn't exist on the web. But neither do the required higher level language features/APIs.
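To give a flavour of it, here's a minimal sketch (the file names and the heavyCalculation function are invented for illustration; expose and wrap are the real Comlink entry points):

    // ---- worker.js: runs off the main thread ----
    import * as Comlink from "comlink";

    const api = {
      // hypothetical expensive function, purely illustrative
      heavyCalculation(samples) {
        let sum = 0;
        for (const s of samples) sum += s * s;
        return Math.sqrt(sum / samples.length); // RMS of the input
      },
    };

    Comlink.expose(api);

    // ---- main.js: UI thread (a module, so top-level await is fine) ----
    import * as Comlink from "comlink";

    const worker = new Worker(new URL("./worker.js", import.meta.url), { type: "module" });
    const remote = Comlink.wrap(worker);

    // Reads like a normal async call, but the loop above runs in the worker.
    const rms = await remote.heavyCalculation(new Float32Array([0.1, 0.5, -0.3]));
    console.log(rms);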
I think they’re vastly understating/oversimplifying how they use WASM. For audio analysis and operations on audio binary/encoded data, it’s quite possible that WASM is a very good fit. Maybe offloading that work to a JS worker would be sufficient—you’re right that JS VMs are extremely fast—but I wouldn’t necessarily discount this WASM use case either. And I say that as someone who has explored these kinds of optimizations enough to generally recommend against using WASM unless you have strong reason to believe you’d benefit from it (ideally with benchmarks to prove it).
I’d also caution that using Web Workers isn’t always so obvious either (and the same applies to server-side threading, e.g. on Node). There’s significant runtime overhead in spinning up a worker, and in every communication between threads—enough to negate their benefit for many use cases. Both WASM and workers have roughly the same perf downsides, and both should generally be justified by real measurement of their impact.
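If anyone wants to sanity-check that overhead for their own workload, a crude measurement only takes a few lines. This is a sketch, not a benchmark; echo-worker.js is a made-up file whose entire body would be `self.onmessage = (e) => self.postMessage(e.data);`:

    // Rough, illustrative numbers only: spawn cost + message round trips.
    const t0 = performance.now();
    const worker = new Worker(new URL("./echo-worker.js", import.meta.url));

    worker.onmessage = () => {
      console.log(`spawn + first round trip: ${(performance.now() - t0).toFixed(1)}ms`);

      // structured clone cost scales with the payload you pass back and forth
      const big = Array.from({ length: 100_000 }, (_, i) => ({ x: i }));
      const t1 = performance.now();
      worker.onmessage = () => {
        console.log(`round trip with 100k objects: ${(performance.now() - t1).toFixed(1)}ms`);
      };
      worker.postMessage(big);
    };

    worker.postMessage("ping");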
Those are good recommendations, and they confirm my own findings and experience trying to improve performance for spectrogram and waveform generation in a heavily audio-focused web app.
AssemblyScript and Rust/WASM implementations of these relatively simple but computationally heavy algorithms didn’t result in any meaningful improvement over their JS counterparts. In the end, moving computations that took 10ms or more (some up to 700ms) off the main thread and using tooling to simplify the Web Worker API was the bigger win: simpler, better for code hygiene, and in most cases as fast or faster than the WASM implementations.
>I'm fascinated by WebAssembly and love that it exists but if anyone tells you they need to use WebAssembly to make the UI snappy I'd advise you interrogate that assertion thoroughly.
The Figma team would say differently. For 99% of CRUD app use cases you are absolutely correct. But WASM has enabled functionality on the web we could have only dreamed of 5 years ago.
I'd argue that's part of the interrogation. "We should use WebAssembly because Figma does and Figma is fast" is faulty logic. Are you making a product with the level of complexity of Figma? OK but are you really? Are you sure WebAssembly is what makes Figma fast? If so, what are they using WebAssembly for? Is there an overlap with what you're doing?
> WASM has enabled functionality on the web we could have only dreamed of 5 years ago
Like what? I ask the question genuinely. WebAssembly makes it dramatically easier to do a lot of performance-sensitive things but to my mind it doesn't actually enable a whole lot of new functionality. I don't mean to write off making things easier, it's a huge deal. But if anyone asserts that we absolutely must use WebAssembly for Project X I'd really want to dive into exactly why. "It's fast" is not a good enough answer.
WebAssembly doesn't provide any new functionality much in the same way cars did not when compared to horse-drawn carriages. You can argue that everything you can do with Wasm can be done with JavaScript, but often it involves significantly greater effort.
Where Wasm is a massive improvement:
* Ability to use existing C++/C/Rust code without a rewrite.
* Performance consistency through languages with manual memory-management and more straightforward performance characteristics.
* Performance of working with multithreaded code by using languages that can pass pointers and avoid message-passing overhead.
* This last point is somewhat unproven as it's related to my own personal work, so take it with a grain of salt! Wasm has unique properties that allow it to be augmented to run complex, seemingly unnetworked, code perfectly synchronized between computers with very little latency in the UX. I'm convinced it can eliminate a whole class of complex networking / sync issues. I first demonstrated the concept earlier this year with tanglesync.com, but I'm currently working on a follow-up.
Still waiting for those wonderful WebAssembly + WebGL games that can match Infinity Blade for iOS from 2010, the game Apple used to show off iOS GL ES 3.0 capabilities.
Or something better than that Citadel demo, beyond the asm.js port done by Mozilla.
It's an impressive app but, as the parent said, I'd expect JS would have worked just fine & let you do all the same things, at essentially the same speed.
The key differentiation, as the parent says, is using Web Workers to make sure you're not doing work on the main thread.
> I'd expect JS would have worked just fine & let you do all the same things, at essentially the same speed
I think you would be wrong about that. Certainly, it's not just WASM that makes Figma fast, but it's an important piece. Look at it this way: why would they have built out the product with WASM five years ago if Javascript was just as good for their use case? It's a huge investment in a new technology with relatively few people you can hire for it, versus quite possibly the most well-known, well-supported language in the world. They identified some significant advantage that made the cost-benefit analysis line up.
Normally, I don't like the argument that because somebody did it, it must be the right choice. But, Figma made so many correct decisions out of the gate that, in this case, I'd give them the benefit of the doubt.
> They identified some significant advantage that made the cost-benefit analysis line up.
I don't know if this is true or not but I don't rate the fact that they used it as much evidence that they should have used it. IME decisions like this are because some early engineer wanted to use that particular bit of technology.
> They identified some significant advantage that made the cost-benefit analysis line up.
No doubt. But we don’t know what that advantage was. You could assume it was for performance related reasons. But maybe they wanted cross-platform compatibility, something WASM allows far more than JS. Or maybe they hired a load of developers experienced in making a graphic design tool rather than making a webapp and decided to use WASM to provide those developers with a more familiar environment.
Anyway, without knowing any of this it’s very difficult to say “Figma did it so we should too”.
> Unfortunately the API for doing so on the web is extremely clunky, passing messages back and forth between a page context and a worker context.
This is one of the advantages to using WebAssembly: you can use a language like Rust that makes multithreaded code easier to write and faster to run.
Out of the box with JavaScript if you want to share an object with a worker you need to send a message (which internally serializes / deserializes the object) or manually manage serializing / deserializing bytes to a SharedArrayBuffer.
With Rust/Wasm + Web Workers you can use Rust's synchronization primitives and just pass a pointer to the worker. You pay no cost for serialization / deserialization.
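For contrast, the manual JavaScript route looks roughly like this (a sketch with an invented worker file; note that SharedArrayBuffer is only available when the page is cross-origin isolated via COOP/COEP headers):

    // ---- main thread ----
    const sab = new SharedArrayBuffer(1024 * Float32Array.BYTES_PER_ELEMENT);
    const shared = new Float32Array(sab);
    const worker = new Worker(new URL("./worker.js", import.meta.url));
    worker.postMessage(sab); // the underlying memory is shared, not copied

    // ---- worker.js (hypothetical) ----
    self.onmessage = ({ data }) => {
      const view = new Float32Array(data);
      for (let i = 0; i < view.length; i++) view[i] *= 2; // in place, visible to the main thread
      self.postMessage("done"); // signalling is still hand-rolled (or done with Atomics.wait/notify)
    };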
> if anyone tells you they need to use WebAssembly to make the UI snappy I'd advise you interrogate that assertion thoroughly.
As you pointed out the JavaScript VM is incredibly optimized but where Wasm shines is consistency. JavaScript requires some care to not produce garbage every frame, otherwise you may unpredictably trigger a lengthy garbage collection. With most Wasm languages you can easily avoid allocating any JavaScript garbage.
The lower-level Wasm languages also tend to be easier to reason about performance. When assessing Rust code performance you typically want to look for Big-O and excessive memory allocations. If you do find a performance bottleneck typically you know where to try to improve. With JavaScript you'll sometimes hit situations where you fall off the VM's golden path, or where one JavaScript VM performs fine and another does not.
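To illustrate the "no garbage per frame" point, a sketch (the work inside processFrame is invented; the only thing that matters is that the buffer is allocated once and reused):

    // Allocate once, outside the render loop, and reuse it every frame.
    const scratch = new Float32Array(2048);

    function processFrame(buf, t) {
      // stand-in for real per-frame work; it writes into `buf` in place
      for (let i = 0; i < buf.length; i++) buf[i] = Math.sin(t / 1000 + i);
    }

    function frame(now) {
      // Doing `new Float32Array(2048)` here instead would create garbage 60+
      // times a second and invite an unpredictable GC pause mid-animation.
      processFrame(scratch, now);
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);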
This is not a Javascript problem, this is a Javascript ecosystem problem. A standard web page will load megabytes of JS code that was packaged from hundreds of dependencies for the reason that "this is how everyone does it".
I just finished a UI that requires a few HTTP requests to the API and a bit of dynamic behavior (but not a ton), and it's done inline with ES6, with no transpilers or minifiers, using ArrowJS. It does the job, and it rips.
Also developers' problem. I just saw code that imports a 1M library to use one function that would take anyone no more than 10 minutes to implement. Maybe it is not too bad in the end because of tree shaking etc, but this kind of thing happens all the time.
We’re building an "IDE for notes/tasks" [1], so as an editor of sorts, UI snappiness matters a lot for us too. The approach we’re taking is to basically split up the app in two parts (we refer to these parts as "frontend" and "backend", but they are both on the client).
The frontend does all the rendering for the editor, which we want to stay within the frame budget. That's why we offload all data synchronization work (applying CRDT deltas, encrypting/decrypting data to/from websockets, IndexedDB caching, search, parsing JSON and so on) to the "backend" thread.
I think for apps like this, splitting the UI and data part up (kind of like a frontend and a backend in a classic client/server web app) is very useful to prevent blocking the main thread. In our case this "backend" is actually a SharedWorker, which has the added benefit that it's very easy to keep all state in sync with multiple open tabs/windows, and we just multiplex all incoming websocket events to all connected "frontends".
I agree passing messages is a bit clunky, but we’ve taken a similar approach to comlink so we can simply "await" a function call on the backend from the frontend. Besides setting that up once (or just using something like comlink), it's not much work. Okay, except I found debugging on Chrome a bit clunky: I think Firefox allows you to see all SharedWorker console output from within the main thread console, for example, whereas with Chrome you open the inspector for shared workers separately. Plus we also get all kinds of other neat features with modern browsers these days, like the SharedWorker itself, and zero-copy communication using Transferable objects.
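To make that concrete, here's a stripped-down sketch of the shape of that frontend/backend split (not the actual code; the handler and method names are invented):

    // ---- backend.js (the SharedWorker) ----
    const handlers = {
      // stand-ins for the real work: applying CRDT deltas, search, decryption, ...
      search: async (query) => [`result for ${query}`],
    };

    const ports = [];
    self.onconnect = (e) => {
      const port = e.ports[0];
      ports.push(port);
      port.onmessage = async ({ data: { id, method, args } }) => {
        port.postMessage({ id, result: await handlers[method](...args) });
      };
    };
    // incoming websocket events would be multiplexed to every open tab:
    // ports.forEach((p) => p.postMessage({ event }));

    // ---- frontend (per tab): a tiny awaitable RPC wrapper, comlink-style ----
    const backend = new SharedWorker(new URL("./backend.js", import.meta.url));
    const pending = new Map();
    let nextId = 0;
    backend.port.onmessage = ({ data: { id, result } }) => pending.get(id)?.(result);

    function call(method, ...args) {
      return new Promise((resolve) => {
        const id = nextId++;
        pending.set(id, resolve);
        backend.port.postMessage({ id, method, args });
      });
    }

    // const hits = await call("search", "todo");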
> if anyone tells you they need to use WebAssembly to make the UI snappy I'd advise you interrogate that assertion thoroughly.
Prepare to be blown away by Makepad [0]. I have no affiliation with them, but just watched their most recent conference presentation [1]. The slides were made with Makepad itself and included, embedded, a full-blown IDE, a synthesizer app, a Mandelbrot set you can zoom into endlessly, and more. All running at 120fps. The presentation is for the most part live-coding with this setup.
What they want to do is bring coders and designers closer together, and while some code is in Rust they developed a DSL for the GUI parts that is close to how Figma works. These GUIs can run anywhere.
And I couldn't help thinking "Why would people have complicated stacks to create Web 2.0 apps for the Google Web, when they have this?", in other words an opportunity to break out of the browser straitjacket.
Btw, WebAssembly/WebGL isn't the only way in which Makepad is available. And while it ran well in the browser for a time, there were issues to be solved there (addressed in the presentation). And tbh this isn't a real answer to your assertion. Greg Johnston, creator of Leptos, has made a video with performance comparisons [2].
Edit: Adding a link to the synthesizer app I just found [3].
> And I couldn't help thinking "Why would people have complicated stacks to create Web 2.0 apps for the Google Web, when they have this?", in other words an opportunity to break out of the browser straitjacket.
Perhaps because they still believe in the promise of Web 1.0, where their app is a graceful-enhancement over an initial server-rendered document, that can be easily worked with at a DOM level by any HTML scraper, easily baked to a PDF and printed, easily re-laid-out for better readability just by changing the browser font size, easily text-to-speech'ed (including ARIA roles, alt text, etc), easily re-styled with a user-agent stylesheet, easily intermediated by browser extensions, and so forth.
I've yet to see a WASM-driven web application that's any less opaque to these technologies than a Flash or ActiveX applet would be.
Browser apps are not only about these use cases, which are mostly a document-use-of-DOM feature. Which is fine for certain cases. However, for a synthesiser UI, an IDE, design tools or even an SDXL explorer flowgraph I don't really care about that. You want a fast UI, with threads available so workloads don't hiccup the system. And that's what we're building with Makepad.
I found the stranglehold that HTML put on application developers over the years so demotivating I almost quit building applications that ran in a browser entirely. But luckily now we have wasm+rust+webgl/gpu and things can happen again.
All fair points. But there's no reason these things, like the ARIA standard, can't be added. That is independent of the WebAssembly standard. I did not mention "Google Web" for no reason. The "promise of Web 1.0" in terms of being open is near dead with Google's DRM and the browser oligopoly and all that jazz.
Also, I mentioned Web 2.0 for a reason. The "cram a full-blown app into the browser" web, the one that uses Rube Goldberg machines behind the scenes. Joking aside, these (btw, also opaque) dynamic applications can hugely benefit from a new paradigm, while the Web itself can go back more to its original 1.0 roots of hypermedia, allowing more and simpler browsers to wield its content.
That demonstrates that you can use WebAssembly to make a snappy UI, it does not demonstrate that you must use WebAssembly.
It's nearly ten years old now but I remember being absolutely blown away by React Canvas, a UI toolkit that leveraged React but instead of rendering DOM nodes it rendered to a <canvas/> tag. Beautiful, 60fps stuff. All written in JS. Unfortunately all the demos seem to have disappeared since but here's a blog post about it:
Point is, both Makepad and React Canvas have something in common: they ditched the DOM. There are both advantages and disadvantages to doing so but the relevant point is that you don't need to use WebAssembly to do it.
I strongly recommend NOT doing this. It sounds like a fantastic idea in theory (who wouldn't want pixel-perfect control over all client viewports?), but there are so many caveats and edge cases that make it a nightmare in practice. You don't even have to get into accessibility or internationalization stuff to find the kinds of sharp edges that scared me away. Simple things like HiDPI, resizing viewports and drawing text.
I spent a solid 3-4 months of time on this path. I was so tired of fighting platform/browser quirks that it made sense. Now, I accept those quirks as battle-tested features and don't try to fight them anymore.
The only reason you should actually ditch the DOM is if you are doing something like Overwatch in the browser. Even then, I'd argue you would be a complete dumbass if you skip out on the power of CSS, etc. for purposes of handling your Menu/HUD elements.
Saying "Oh, canvas tag. Been there, done that" doesn't do this project justice. It is not new ideas that matter here. It is the execution of them into workable solutions.
People have been there, done that and been sued for it. This is an ADA Title 2 minefield, not a workable solution. They haven't solved that issue, so it remains a nonviable approach that will not make it to mainstream adoption just like the decade of previous examples.
Very disappointing to see that there are almost no benchmarks quoted in this thread, only words like "snappy" or hypothetical questions. I have been reading articles to understand whether converting some of our JS code to WebAssembly could lead to significant performance improvements and other benefits, and I have yet to come to a conclusion. I have seen dumbed-down examples that don't actually mean anything for real projects, some anecdotal numbers, and other performance comparisons based on proprietary codebases which don't really help. I would advise that people be cautiously optimistic about WebAssembly -- the benefits may not be worth the effort.
WASM and Web Workers - unless carefully used - won't magically make your UI snappy.
There are three reasons (for the vast majority of apps) that a UI feels sluggish:
1. The network! Requesting data from a server is slow, by far the slowest aspect of any app. As a start, prefetch and cache, use a CDN, and try edge platforms that move data and compute closer to the user. Better still, if you can, explore Local First (http://localfirstweb.dev); for web apps it's the way we should be looking to build in future.
2. They are doing work on the UI thread that takes longer than 16ms. This is where Web Workers are perfect. The DX around them isn't, though; as another comment suggested, Comlink helps, but there is a lot of opportunity here to build abstractions that help devs.
3. Excessive animations that delay user interaction - there is so much bad UX where animations have been added that only make things slower. Good animations have a purpose, showing where something came from or is going, and never get in the way.
Finally, we are well into diminishing returns with front end frameworks optimising the way they do templating and update the DOM. The key thing there now is DX; that's how you pick a framework. Benchmarks are almost always useless.
> Good animations have a purpose, showing where something came from or is going
Many websites will slowly slide a newsletter sign-up form up from the bottom. We already know that it comes from the underworld, so the animation is absolutely pointless there.
Web Workers are a really great feature for performance. The way they want the worker code to live in a separate file makes them slightly annoying if you're using a bundler, but each bundler has a loader or similar feature for this, so all is mostly well.
But the thing I haven't found a solution for is, the case where you want to use web workers inside a library that other people will be importing into their own project, and you don't know what bundler they'll use (or transpiler, minifier, etc). I can think of hairy ways to do it, that involve pre-building the worker JS and storing that text in your library, piping it into a file blob at runtime, or the like. But does anyone know a clean way of handling this?
> pre-building the worker JS and storing that text in your library, piping it into a file blob at runtime
I just did that today, to be able to bundle the worker script with the client that controls it in a single file. It's convenient but feels hacky, and I wonder about its impact on page performance.
There's a similar trick for bundling a base64-encoded WASM binary with the host JS that controls it in a single file. That saves effort for the consumer of the script, so they don't need to bundle or copy the binary into their static assets folder, for the script to load.
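That WASM trick is essentially a base64 decode plus WebAssembly.instantiate. A sketch (the string below is the 8-byte empty module, standing in for a real build artifact; the trade-offs are roughly 33% size inflation and losing streaming compilation):

    // In practice this string is generated at build time by base64-encoding your .wasm;
    // here it's just the minimal empty module so the sketch actually instantiates.
    const wasmBase64 = "AGFzbQEAAAA=";

    const bytes = Uint8Array.from(atob(wasmBase64), (c) => c.charCodeAt(0));
    const { instance } = await WebAssembly.instantiate(bytes, { /* imports, if any */ });
    // a real module's exports would be callable here via instance.exports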
I think the common (best?) practice is to let the consumer handle the static file (like worker script, WASM binary), and then for the client script to provide an option to set the URL path where the static file is served.
Layer 0: Strategically separate core logic while assuming as little about the environment as possible. Function Y generates something, function X handles the result somehow. Maybe there’s a postMessage somewhere between, or maybe not—you don’t care. Maybe Y is slow, but that doesn’t mean it must assume it runs in a worker. Maybe X serializes output in some way, but it doesn’t need to assume that DOM exists yet. However Y and X are wired up later is none of their concern.
Layer 0.5: Document intended or just practical ways to invoke those APIs. Y is slow, call it from a worker. X formats stuff, so if you’re in a browser you’ll want to hook it up to DOM somehow.
Layer 1: Provide glue functions to wire your core logic up in different environments. Worker message handlers? React components? These things could require more specific environments to be called in, and they would use Layer 0 APIs—but, crucially, your layer 0 won’t fail at its core task if there’s no DOM or postMessage. Maybe your user doesn’t want Y to run in a worker, or manages own web worker pool, etc.
Layer 2: Provide last-mile facilities and helpers. This outer layer is technically outside of your actual library implementation. Bundler configuration templates for esbuild? Webpack? Example projects? Template repositories? Single-file bundle that spawns a worker for simplest use cases or demos? Anything’s great here—though note that if you support too many options there’s a good chance some of them will become stale, which can hurt adoption, and you don’t want to spend too much time on this layer as it’s probably the least important and the most flaky as specs, environments, build tools and trends evolve. (That’s also the reason why commingling this stuff, with all of its runtime/environment concerns, and your actual library is probably a very bad idea. If your library always spawns a worker at runtime, someone may certainly curse.)
Such a design should maximise your library’s utility. Somebody doesn’t want Y to run in a worker for some crazy reason? They are always free to wire up the core functions in whatever way they want. Another user has a complex project that manages its own worker pool? They’ll probably eject after layer 1. Ensuring as much as possible lives in the lower layers, strategically separated, means you will have an easier time iterating on higher layers to support different environment scenarios or bundlers, and you (or your users!) can add support for any new runtime configurations that appear in future without touching the core parts.
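A toy version of that layering, with invented names, might look like:

    // layer 0: pure, environment-agnostic core. No DOM, no postMessage assumed.
    export function analyze(samples) {          // stands in for the slow "Y"
      let sum = 0;
      for (const s of samples) sum += s * s;
      return { rms: Math.sqrt(sum / samples.length) };
    }

    // layer 1: optional glue for one specific environment, shipped separately.
    // A user who manages their own worker pool simply never imports this.
    export function attachWorkerHandler(scope = self) {
      scope.onmessage = ({ data }) => scope.postMessage(analyze(data));
    }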
Hi, I appreciate the advice in general, but in my case it's not an architectural matter. Basically, I just have one internal, encapsulated function that ought to run in a worker, and I'd like to implement that without doing anything hairy - and without imposing requirements on the end-user to make their own worker. It seems like this isn't possible, but if you know of a way I'd love to hear it!
Hey, so we're doing something similar to the solution given above, but without compiling the worker at bundle time.
Basically what we're doing is putting the worker code in a string. When you need the worker, you can `import myWorker from './worker'`. At runtime you can create a `Blob` from the string, then create a URL for it using `window.URL.createObjectURL`.
It's certainly far from ideal. Since the code lives in a string, there are no compile time errors at all (though you could probably develop without the string form and put it in a string after). But it kinda works. Hope it's what you're looking for.
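For reference, a sketch of the whole pattern (the worker body here is a trivial echo; in reality you'd bundle your real worker code into that string at build time):

    // ---- worker-source.js: the worker code as a string, ideally generated by your build ----
    export default `
      self.onmessage = (e) => {
        // ...heavy work goes here...
        self.postMessage(e.data);
      };
    `;

    // ---- library code ----
    import workerSource from "./worker-source.js";

    export function createWorker() {
      const blob = new Blob([workerSource], { type: "text/javascript" });
      const url = URL.createObjectURL(blob);
      return new Worker(url); // consider revoking the object URL once the worker has started
    }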
Yes, data URI & blobs is a way. You can author worker code as you would normally do (in TypeScript, for example) and bundle (with type checks) it into a string as part of your own build process. Ideally, though, you would want to keep the worker wrapper separate from core library so that users with complex projects can integrate it in their own build however they want…
You can create workers at runtime. Take the fact that it is not straightforward, and hence you need to be asking this question, as a hint that it’s almost certainly the wrong thing to do for your library—unless it is specifically about something like worker management, in which case you would not be asking this question. Don’t mess with the environment, users (and future you) will thank you for it.
JS can be faster in enough cases that it's worthwhile to test.
WASM is mostly being used for code that has already been written and is now being integrated into a web site. I wouldn't suggest jumping right to WASM simply for performance.
I would caution anyone looking to make their UI "snappy": while webassembly performance for things like raw compute - especially with simd - is superior, any time you need to interface with browser APIs like the DOM to update your UI you're now going to pay a bunch of interop costs that aren't present in native JavaScript. For some workloads this will make wasm meaningfully slower - especially ones that use strings.
I have to ask, because people frequently make claims about performance based upon unmeasured assumptions that are wrong more often than not. Worse than that, many of these imagined claims about performance tend to be off by one or more orders of magnitude.
Even just JavaScript Web Workers can be really helpful for doing heavy compute outside of the UI thread. JavaScript is pretty fast, it just needs to be unblocked. I used them once to sort an array with a six-digit number of objects client side, while keeping the UI snappy (it took a couple of seconds to process, but the UI was responsive the whole time).
Of course for some tasks you'll still need more than that, which very well may have been true for the OP, but benchmarking is good etc
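For anyone curious, a sketch of that pattern (sort key and array contents invented; note that postMessage copies the array via structured clone, which has its own cost):

    // ---- sort-worker.js ----
    self.onmessage = ({ data: items }) => {
      items.sort((a, b) => a.score - b.score); // invented sort key
      self.postMessage(items);
    };

    // ---- main thread: the UI stays responsive while the worker churns ----
    const items = Array.from({ length: 300_000 }, () => ({ score: Math.random() }));
    const worker = new Worker(new URL("./sort-worker.js", import.meta.url));
    worker.onmessage = ({ data: sorted }) => console.log("smallest:", sorted[0].score);
    worker.postMessage(items);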
Can we have structural sharing between web workers please? Because serializing everything as a message stream is not the most efficient way to go about many things. Also not the most programmer-friendly.