Java failed to deliver a safe sandbox. Browsers finally booted it because of numerous vulnerabilities. To me, wasm looks exactly like another JVM attempt, but that's a good thing: the idea is sound, we just need a better implementation.
Now, if I don't need a sandbox, it's going to be a tougher sell. But who knows, maybe it'll outperform the JVM on bare metal some day.
Technically I agree, but nowadays with 5G and edge computing, where low-latency application requirements are essential, there are performance constraints not tackled by Java that need new solutions. Previous JVM attempts mostly focused on "write once, run anywhere". But now "anywhere" means "anywhere, and quickly" :)
That is exactly the point: don't advertise WebAssembly as safe bytecode if unsafe languages are part of the picture without any kind of control.
Secondly, neither ISO C nor ISO C++ forbids implementations that do bounds checking. In fact, that is what most modern compilers do by default in debug mode.
Finally, look at memory tagging in Solaris SPARC ADI, on Apple iOS, or the upcoming ARM memory tagging extension support on Android for how bounds checking can be enforced at the hardware level while still supporting unsafe languages.
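As one concrete illustration (a minimal sketch; the mechanism here is glibc's fortify machinery rather than a debug-mode check, and the exact flags vary by toolchain):

    /* overflow.c -- glibc's _FORTIFY_SOURCE retrofits runtime bounds
     * checks onto common libc calls when the destination size is known
     * at compile time. Build on a glibc-based system with:
     *   gcc -O2 -D_FORTIFY_SOURCE=2 overflow.c -o overflow
     */
    #include <string.h>

    int main(void) {
        char dst[8];
        /* With fortify enabled, strcpy is routed through __strcpy_chk,
         * which aborts ("*** buffer overflow detected ***") instead of
         * silently smashing the stack. */
        strcpy(dst, "this source is far longer than eight bytes");
        return dst[0];
    }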
The point of the bytecode being safe is that no operation performed by WASM code can cause a memory-unsafety bug from the perspective of anything outside the sandbox. If your code violates its own memory rules, it will only mess up the logical state of its chunk of VM memory, and probably produce an incorrect result. The same thing can happen in any safe language if you access the wrong index in an array because of a logical error in your code.
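A minimal sketch of what that means in practice, assuming a C module compiled to wasm32 (e.g. with clang's wasm32 target; the exact layout of the globals is toolchain-dependent):

    /* sketch.c -- illustrative only: "unsafe inside, safe outside".
     * Compiled to wasm32, both globals live somewhere in the module's
     * single linear memory. */
    #include <stdint.h>

    int32_t secret = 42;   /* unrelated module state */
    int32_t buf[4];

    int32_t poke(int32_t i, int32_t v) {
        /* Logic bug: no bounds check on i. An i of 4 may silently
         * overwrite `secret` (assuming the compiler placed it next to
         * buf) -- the module now computes garbage, but only its own
         * linear memory changed. A wildly out-of-range i makes the
         * engine trap. In neither case can the store reach the host's
         * address space. */
        buf[i] = v;
        return secret;
    }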
Those incorrect results might lead to security exploits, the same way that browsers now get exploited by taking advantage of how their VMs work.
So hand-waving away such security issues is rather strange, when they should be the top concern when selling infrastructure for running code from unknown sources.
> the same way that browsers now get exploited by taking advantage of how their VMs work
The difference is whether the exploit runs in the browser or in the VM. If the VM has a logic error and gives back the wrong result, and the browser decides to trust that result and gets exploited, then it is a browser bug, not a sandbox issue.
The other situation is when a browser spins up a VM to add 2 and 2 but then the VM starts downloading malicious files from the internet.
No safe language can avoid the first class; wasm avoids the second.
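A sketch of that distinction on the host side (hypothetical code; the module_result_* functions are made-up stand-ins for values an embedding would read out of a wasm instance):

    /* host_sketch.c -- the sandbox guarantees the module can only hand
     * back bytes and integers; whether the host *trusts* those values
     * is entirely the host's decision. */
    #include <stdint.h>
    #include <string.h>

    /* Made-up stand-ins for output produced inside the sandbox. */
    static uint8_t result[256];
    static uint32_t module_result_len(void) { return 200; } /* miscomputed */
    static const uint8_t *module_result_ptr(void) { return result; }

    void buggy_host(uint8_t out[64]) {
        /* BUG: blindly trusting a length computed inside the sandbox.
         * The host overflows its *own* buffer -- a host bug, not a
         * sandbox escape. */
        memcpy(out, module_result_ptr(), module_result_len());
    }

    void careful_host(uint8_t out[64]) {
        uint32_t len = module_result_len();
        if (len > 64) return;          /* validate untrusted output */
        memcpy(out, module_result_ptr(), len);
    }

    int main(void) {
        uint8_t out[64];
        careful_host(out);             /* safe: length is validated */
        return 0;
    }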
I do understand what sandboxing means, and how WebAssembly advocates keep overselling its security capabilities by ignoring issues that other bytecode formats were already taking more seriously back in the mid-60s.
I believe that wasm is not focused on functional security so much as on embeddable security.
Essentially the promise here is that you can download a random wasm module from anywhere, run it with little to no privileges, and be sure nothing bad can happen.
There was an article here many months ago detailing how wasm on the server makes it harder to mitigate attacks, due to the lack of wasm-inspecting tooling compared to the system utilities available for processes and native binaries.
But in part that is because the attack model of wasm is "literally executing malicious code".
Pointer authentication ≠ memory tagging. The former requires ARMv8.3 and is in the A12 processor; the latter is not in any hardware that Apple is currently shipping.
Any typical C memory corruption you can think of where code fails to validate the buffer sizes given as parameters.
WebAssembly's memory-access bytecodes only bounds-check against the linear memory block that gets allocated to the module.
You then do your own memory management on top of that, carving subsets out of the block and assigning them to the respective internal heap allocations or global memory regions.
So you just need a couple of structs or C strings/arrays living alongside each other, and you can overwrite one of them by writing too much data, thanks to a miscalculation of the size of the memory segment holding it.
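Concretely, something like this sketch (compiled to wasm32; whether the two allocations actually end up adjacent depends on the module's allocator):

    /* corrupt.c -- malloc here is the module's own allocator carving
     * pieces out of the one linear-memory block, so neighbouring
     * allocations can sit back to back. */
    #include <stdlib.h>
    #include <string.h>

    struct session { char name[16]; int is_admin; };

    int demo(const char *untrusted) {
        struct session *s = malloc(sizeof *s);
        char *scratch = malloc(16);
        s->is_admin = 0;

        /* Miscalculated size: copies into the 16-byte `scratch` with no
         * bound. The engine only checks that each store stays inside
         * linear memory, so the copy happily runs over whatever the
         * allocator placed next -- possibly s->is_admin. */
        strcpy(scratch, untrusted);

        return s->is_admin;   /* may now be attacker-influenced */
    }

    int main(void) {
        return demo("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"); /* 32 bytes */
    }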
Except that you load code from multiple sources, and one could eventually have a piece of JavaScript code that makes use of such behaviour to gain access to some feature that is not accessible by default.
I'd rather let WebAssembly turn out to be the next Flash, once black hats start looking at it with the same care they gave Flash; no need to waste cycles myself, as it is a lost battle against WebAssembly advocacy.
> Except that you load code from multiple sources, and one could eventually have a piece of JavaScript code that makes use of such behaviour to gain access to some feature that is not accessible by default.
With all due respect, that doesn't make WebAssembly unsafe in any way.
By the same logic, any program that takes any kind of user input is unsafe because the program could trust data it should not trust and then execute incorrectly.
If a program does not validate (untrusted) input, then that is the program's fault, not the input's fault or the fault of whatever produced the input.
I agree with where you're coming from, though. People are going to make mistakes, and if the average developer has to interpret blobs of bits as meaningful data structures just to get things done, then we are going to see a lot of these types of problems. However, there are already projects in the works that automate the $LANGUAGE-to-Wasm glue code, which should completely mitigate this issue.
I don't understand what you mean by "a piece of JavaScript code that makes use of such behaviour". Webasm is accessed by JavaScript, not the other way around.
This really sounds like you have an axe to grind with webasm for some reason; the things you're saying seem like grasping at straws. It already works in browsers, so if there were something to exploit, you could demonstrate it with a few files on GitHub.
Are you saying webasm will crash, or that it is insecure? These are two different things, and you seem to be conflating them. In your posts you have said that it is insecure, but when you talk about specifics it just seems to be about crashing. Then, when pressed, you avoid giving any examples.
I meant working examples. Saying "it's insecure!" and then calling your own words evidence doesn't count.
If there are actual exploits or security flaws, then demonstrate them with a working implementation. You seem to be trying to turn a technical discussion into an emotional one.
But they are addressed differently; in particular, internal memory corruption is hard to address at the framework level, while external memory corruption can be almost entirely eliminated.