I love WebAssembly, but it is only as good as how easy it is to pass raw bytes between its modules and the rest of the browser environment.
For example, if you want to do anything involving strings, you have to convert them to a typed array. That means invoking charCodeAt for every character and saving the result to an index in such an array. If you want to get a string out, you have to convert uint charcodes to single-char strings and append them together one at a time. The overhead of these things alone probably makes WASM useless for almost all code involving string manipulation (that also interacts with the rest of the browser).
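A minimal sketch of that hand-rolled conversion (assuming plain ASCII/Latin-1 input for simplicity; real code would need proper UTF-8 handling):

```javascript
// Per-character string -> bytes conversion, one charCodeAt call each.
function stringToBytes(str) {
  const bytes = new Uint8Array(str.length);
  for (let i = 0; i < str.length; i++) {
    bytes[i] = str.charCodeAt(i);
  }
  return bytes;
}

// Per-character bytes -> string conversion, one append each.
function bytesToString(bytes) {
  let str = "";
  for (let i = 0; i < bytes.length; i++) {
    str += String.fromCharCode(bytes[i]);
  }
  return str;
}

const roundTripped = bytesToString(stringToBytes("hello wasm"));
```

Two full passes over the data for every string that crosses the boundary, which is exactly the overhead being complained about.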
That is not to say that WASM is useless for strings either: if you have one huge string, only need to convert it to bytes once, and then do repeated operations on the resulting data within WASM, there is a bigger chance that it works out. For example, parsing source code: one conversion to WASM is enough, the rest can be handled from within WASM. Fast search using a prefix trie also sounds plausible: the trie can be built up within WASM, and for searching one only needs to return the indices of matching substrings within the original string. That is a lot less overhead.
Anyway, my point is: yes, we definitely need performant JavaScript. And I wouldn't be surprised if faster WebAssembly leads to faster JavaScript as well: sorting Typed Arrays used to be so slow that a custom radix sort implemented in JavaScript beat the built-in sort by 4-10x. Somewhere in the last five versions of Chrome, Uint32Array and Int32Array got an enormous speed boost. My guess is: because of the demand for faster interop with WASM, something under the hood was improved, directly or indirectly leading to faster sorting as well.
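For reference, here is a sketch of the kind of custom radix sort meant here (a plain LSD radix sort over four 8-bit digits, not the exact implementation referenced):

```javascript
// LSD radix sort for a Uint32Array: four stable counting-sort passes,
// one per byte, ping-ponging between the input and a scratch buffer.
function radixSortUint32(arr) {
  let src = arr;
  let dst = new Uint32Array(arr.length);
  for (let shift = 0; shift < 32; shift += 8) {
    const counts = new Uint32Array(256);
    for (let i = 0; i < src.length; i++) {
      counts[(src[i] >>> shift) & 0xff]++;
    }
    // Turn bucket counts into starting offsets (prefix sums).
    let offset = 0;
    for (let b = 0; b < 256; b++) {
      const c = counts[b];
      counts[b] = offset;
      offset += c;
    }
    // Stable scatter into the destination buffer.
    for (let i = 0; i < src.length; i++) {
      dst[counts[(src[i] >>> shift) & 0xff]++] = src[i];
    }
    [src, dst] = [dst, src];
  }
  return src; // after four passes this is the original array, now sorted
}
```

Each pass is a linear scan with no comparisons, which is why it could beat a comparison-based built-in sort on large integer arrays.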
The TextDecoder/TextEncoder APIs are a much easier and faster way to handle WASM strings, and most browsers already support them (Edge doesn't but can be polyfilled).
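The basic usage is a single call in each direction instead of a per-character loop:

```javascript
// Standard TextEncoder/TextDecoder: one call replaces the manual loop.
const encoder = new TextEncoder();          // always encodes to UTF-8
const decoder = new TextDecoder("utf-8");

const bytes = encoder.encode("hello wasm"); // returns a Uint8Array
// ...hand `bytes` to a WASM module and operate on the data there...
const text = decoder.decode(bytes);         // back to a JS string
```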
TextDecoder looks fine, but I'm a bit wary of using TextEncoder naively. It requires allocating a Uint8Array for each string. Each Uint8Array comes with 200 bytes of overhead[0]. Depending on your algorithm you can end up with thousands if not millions of tiny strings. While even phones will probably have enough RAM to handle a few hundred megabytes these days, that will slow things down simply from an allocation/garbage collection perspective.
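One way around the per-string allocation is to reuse a single preallocated buffer via TextEncoder.encodeInto. Note this is an assumption on my part that it fits the use case, and encodeInto is a newer addition than TextEncoder.encode itself, so support should be checked:

```javascript
// Encode many small strings into one reused buffer instead of
// allocating a fresh Uint8Array (and its ~200 bytes of overhead)
// per string. The 64 KiB scratch size is an arbitrary choice here.
const encoder = new TextEncoder();
const scratch = new Uint8Array(64 * 1024);

function encodeIntoScratch(str) {
  const { written } = encoder.encodeInto(str, scratch);
  return scratch.subarray(0, written); // a view, no new backing buffer
}

const view = encodeIntoScratch("tiny string");
```

The returned subarray is only valid until the next call, so it has to be consumed (e.g. copied into WASM memory) immediately.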
Furthermore, typed array allocation used to be incredibly slow compared to array allocation. The gap is smaller now, but it is still slow enough to be a bottleneck in my code.
The quote below is from a V8 dev replying to someone who opened an issue about this slow allocation. It is from 2012, but it still applies:
> The reason is simply that while constructing Array is done exclusively in the Javascript virtual machine and an allocation in the VM's heap, constructing a TypedArray involves the browser binding (at least in Chrome) and allocation outside the VM's heap. The former can be properly optimized, while the latter cannot. Future optimizations in this area are already on our radar, but have rather low priority.
> Generally speaking, typed arrays are good if you want to allocate a long-living dense array with a certain type and size. Array objects are better if you want to create temporary arrays often.
I hope things will improve now that Typed Arrays finally get some proper love thanks to WASM.
That would fit my experience of TypedArrays being a lot faster in Firefox in general. Although I guess that given their early asm.js push it would make sense if they have a head-start in this domain.