
The paper waits until page 2 to mention that their scheme would not help standard WebAssembly implementations at all: those implementations already eliminate all bounds checks using a virtual memory trick, so reducing the number of required bounds checks makes no difference.

However, it would help implementations that can't use the virtual memory trick, for example because they need to support a 64-bit WebAssembly address space, run on a 32-bit host, or host a huge number of WebAssembly VMs in the same process.

And if I understand their (extremely hard to read) graph correctly, the scheme reduces the bounds-checking overhead in that case to quite close to zero, which is pretty impressive.

On the other hand, any approach based on software bounds checking is ripe for speculative-execution attacks. Maybe that doesn't matter because in the browser it's a lost cause already?
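
Concretely, the per-access check being discussed looks something like this (a minimal C sketch with illustrative names, not taken from the paper); the compare-and-branch is both the overhead and the classic speculation gadget:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical per-instance state; names are illustrative. */
    struct wasm_memory {
        uint8_t *base;    /* start of linear memory */
        uint64_t length;  /* current size in bytes  */
    };

    _Noreturn void wasm_trap(void);  /* hypothetical trap routine */

    /* Software-bounds-checked i32 load: every access pays this
       compare-and-branch, which the virtual memory trick avoids. */
    static inline uint32_t load_u32(const struct wasm_memory *m,
                                    uint32_t addr, uint32_t offset)
    {
        uint64_t ea = (uint64_t)addr + offset;   /* effective address, < 2^33 */
        if (ea + sizeof(uint32_t) > m->length)   /* out of bounds -> trap */
            wasm_trap();
        uint32_t v;
        memcpy(&v, m->base + ea, sizeof v);
        return v;
    }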




The article also mentions other instructions besides memory accesses that can trap, so statically proving when those can't trap could still help.
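
For example, integer division traps; without a static proof that the trap conditions can't occur, a runtime that checks in software has to emit something like this (illustrative sketch, not from the article):

    #include <stdint.h>

    _Noreturn void wasm_trap(void);  /* hypothetical trap routine */

    /* i32.div_s: traps on division by zero and on INT32_MIN / -1
       overflow; proving statically that neither case can occur
       would let these checks be dropped. */
    static inline int32_t i32_div_s(int32_t lhs, int32_t rhs)
    {
        if (rhs == 0)
            wasm_trap();
        if (lhs == INT32_MIN && rhs == -1)
            wasm_trap();
        return lhs / rhs;
    }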

BTW, the trick that many WASM runtimes use is to reserve 8 GB of address space for each linear memory, which a 32-bit pointer plus a 32-bit offset can't exceed. While fast, that can be limiting: on x86-64 you may only get 47 bits of user-space addresses (half the virtual address space is reserved for the kernel), and on 64-bit RISC-V the virtual address space can be as narrow as 39 bits. 39 - 1 - 33 = 5 bits, i.e. only 2^5 = 32 such 8 GB reservations fit in user space, limiting you to at most 31 instances in a runtime.
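
Roughly what that reservation looks like on a Linux host (a sketch with hypothetical names; real runtimes also catch the resulting SIGSEGV and turn it into a WASM trap):

    #include <stdint.h>
    #include <sys/mman.h>

    #define WASM_PAGE  (64 * 1024)   /* WASM page size */
    #define RESERVE_SZ (1ULL << 33)  /* 8 GB: covers 32-bit addr + 32-bit offset */

    /* Reserve 8 GB of address space per linear memory, but make only the
       memory's current size accessible; any access past it page-faults,
       so no explicit bounds check is needed. */
    uint8_t *reserve_linear_memory(uint64_t initial_pages)
    {
        void *base = mmap(NULL, RESERVE_SZ, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (base == MAP_FAILED)
            return NULL;
        if (mprotect(base, initial_pages * WASM_PAGE,
                     PROT_READ | PROT_WRITE) != 0) {
            munmap(base, RESERVE_SZ);
            return NULL;
        }
        return base;  /* base + addr + offset never leaves the 8 GB reservation */
    }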


That sounds harsh: the mention on page 2 is the opening phrase of the third paragraph of the introduction.


This sort of static verifiability is important for applications like smartcard programs, though, where the runtime environment cannot afford a dynamic MMU.

Additionally, this lets you inline verifiable code into your own protection domain instead of forcing it into a separate module somewhere else.



