I can’t stand Figma; it’s limited, slow, and in general demonstrates nothing at all for its wasm use, so that’s kind of in support of my point. Something like Figma would get a lot more performance out of using WebGL, which is only accessible through JS, and the emscripten bindings for it go through JS calls. So WASM is kind of a dead weight in this scenario.
The company was last valued at $20B, so they are doing something right.
They are using WebGL. Using WebGL from WebAssembly has some nice advantages; with Wasm you can use languages that are more accommodating to the sort of densely packed data that you need to send to GPU buffers.
Accommodating how? Everything goes through JS; what they share is basically an ArrayBuffer (if we're talking command buffers), which you can build directly in JS with the same JIT backend that powers WASM. I went through your article, but it doesn't describe how WASM is more "accommodating" in this respect.
One way or another, producing the command buffers is an encoding step. They're not WASM code or JS code; they're GPU code. You're building them on one platform for another. There's no direct correspondence between WASM and WebGPU, let alone WebGL.
Accommodating in the sense that in a language where you control the memory layout of objects, you don’t need an explicit encoding step: you can just specify a memory layout that the GPU can understand. This makes it a lot nicer to work with, in my experience.
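For example, a minimal Rust sketch (the Vertex struct and its fields are made up for illustration, and the unchecked cast is the sort of thing a crate like bytemuck normally wraps):

    // With #[repr(C)] the struct's in-memory layout matches what you'd
    // declare via vertexAttribPointer, so a Vec<Vertex> can be handed to
    // bufferData as raw bytes, with no per-field encoding pass.
    #[repr(C)]
    #[derive(Clone, Copy)]
    struct Vertex {
        position: [f32; 2], // maps to a vec2 attribute
        color: [u8; 4],     // maps to a normalized u8x4 attribute
    }

    fn as_bytes(vertices: &[Vertex]) -> &[u8] {
        // Sound here because Vertex is #[repr(C)] with no padding or
        // pointers; bytemuck provides this cast with the checks built in.
        unsafe {
            std::slice::from_raw_parts(
                vertices.as_ptr() as *const u8,
                vertices.len() * std::mem::size_of::<Vertex>(),
            )
        }
    }

    fn main() {
        let verts = vec![Vertex { position: [0.0, 1.0], color: [255, 0, 0, 255] }];
        // In the browser this slice lives in Wasm linear memory and can go
        // straight to gl.bufferData, no field-by-field serialization.
        assert_eq!(as_bytes(&verts).len(), 12);
    }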
You could create a JS object that stores an offset and exposes accessor functions, but the object itself still takes up heap space. In Rust, it would just be a (typed) pointer to the position in the buffer and wouldn't require an allocation.
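Concretely (a sketch reusing the hypothetical Vertex struct from above):

    // Indexing into the byte buffer yields a typed borrow, not a new heap
    // object; a JS wrapper with accessors would itself be GC-allocated.
    fn vertex_at(buffer: &[u8], index: usize) -> &Vertex {
        let size = std::mem::size_of::<Vertex>();
        let bytes = &buffer[index * size..(index + 1) * size];
        // Size and alignment must hold for this cast to be sound;
        // bytemuck::from_bytes does the same thing with those checks included.
        unsafe { &*(bytes.as_ptr() as *const Vertex) }
    }

The borrow is just an address into the existing buffer, so walking millions of vertices produces nothing for a garbage collector to track.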
Sure, but it was meant to refute a pretty weak point that basically boiled down to “I don’t like it so it’s bad”. Enough people do like it and find it fast, and while I can match anecdotes, pointing out the success of the company seemed more compelling.