Hacker News

The security here is based on sandboxing code and providing limited capabilities. If you're embedding wasm, you choose what capabilities to give the sandbox. For instance, if a game wants to support mods via wasm, it could give the mods APIs to the game world but not to the network or filesystem. A database plugin might have access to interpret a database object handed to it but not exfiltrate data over the network.

We're providing mechanisms here, not identity-based policies.
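The capability model described above can be sketched in a few lines of Python (all names here are hypothetical, and a real embedding would use a wasm runtime's linker rather than a dict): the host instantiates the sandbox with exactly the functions it is allowed to call, and anything not granted simply does not exist inside it.

```python
# Hypothetical sketch of capability-style embedding: the host decides
# which functions the sandboxed module can see. A game mod gets
# game-world calls but no network or filesystem capability.

class Sandbox:
    def __init__(self, capabilities):
        # The module can only call what the host explicitly granted.
        self._caps = dict(capabilities)

    def call(self, name, *args):
        if name not in self._caps:
            raise PermissionError(f"capability not granted: {name}")
        return self._caps[name](*args)

# Host-side game API (hypothetical names).
world = {"player_hp": 100}

def heal_player(amount):
    world["player_hp"] += amount
    return world["player_hp"]

# The mod's sandbox receives the game-world API and nothing else.
mod = Sandbox({"heal_player": heal_player})

print(mod.call("heal_player", 10))        # allowed: granted capability
try:
    mod.call("open_socket", "example.com", 443)
except PermissionError as e:
    print(e)                              # denied: never granted
```

In a real wasm embedding the same shape appears as host functions registered with the instance's linker; the point is that denial is the default, not a policy check bolted on afterwards.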




> For instance, if a game wants to support mods via wasm, it could give the mods APIs to the game world but not to the network or filesystem.

This is, IMO, pretty huge. I'm building a game right now that supports client-side NodeJS mods, and figuring out sandboxing has been a huge pain. Similarly, I've been trying to figure out how to sandbox some of our dependencies at work and in personal projects.

I want to be able to let someone mod my games in any language, and distribute them however they want, while still providing guarantees to my users that the worst a mod can possibly do is maybe freeze your computer or something.

So many of the problems the OP describes ring true to me; it's a very exciting project.


What about fixing the lack of bounds checking when multiple data elements are mapped into the same linear memory block?

This leaves the door open to influencing the behaviour of WebAssembly modules generated from C and C++ by corrupting their internal state via invalid data.
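The hazard can be sketched without a real wasm runtime: model the module's linear memory as one flat byte buffer holding two adjacent objects (the layout below is invented for illustration). Wasm bounds-checks the edges of linear memory, but there is no trap between objects inside it, so an overflow in one silently corrupts its neighbour.

```python
# Sketch of the intra-module hazard: wasm traps on access outside
# linear memory, but not on crossing object boundaries *inside* it.
# Two adjacent structures in one flat buffer (hypothetical layout):
linear_memory = bytearray(16)

NAME_OFF, NAME_LEN = 0, 8      # char name[8]
ADMIN_OFF = 8                  # a flag laid out right after it

linear_memory[ADMIN_OFF] = 0   # is_admin = false

def unchecked_copy(offset, data):
    # Mimics an unchecked C strcpy compiled to wasm: no length check,
    # but every write stays inside linear memory, so the runtime
    # never traps.
    linear_memory[offset:offset + len(data)] = data

unchecked_copy(NAME_OFF, b"AAAAAAAA\x01")  # 9 bytes into an 8-byte field

print(linear_memory[ADMIN_OFF])            # the flag was silently flipped to 1
```

This is the sense in which memory-unsafe languages carry their classic data-corruption bugs into wasm: the sandbox boundary holds, but the module's own invariants do not.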


If you give a sandbox a capability and there's a bug in the sandboxed code, there's always a chance that the bug will be exploited to misuse those privileged resources. The only way I can see of protecting against logic bugs like these is better tooling.


Yeah, but then one should acknowledge those issues, and not advocate WebAssembly as if there hadn't been hundreds of other attempts since the late '50s.


They don't mention them because their focus is on other aspects of safety.


Either one is actually serious about security across the whole stack, or not.


It's not like what you propose hasn't been tried before. The main practical issue that I don't see this post address is the combinatorial explosion that stems from fine-grained sandboxing of any complex application. There are bound to be executable paths through the state space that both pass initial muster and can be used by an attacker to craft a sandbox bypass.

In other words, fine-grained sandboxing does not solve the problem. It may be an improvement on the current (dismal) state of affairs as far as ecosystems like PyPI or NPM are concerned, but I don't see how it addresses the main issues in any sort of practical, real-world environment.

Something that definitely works is that which security-conscious orgs/teams/persons currently do: ownership and curation.

Ownership implies minimization of 3rd party dependencies.

Curation implies strict quality (incl security) reviews and relentless culling of code that fails them.

The distributed engineering model that you advocate for where code is being pulled-in from hundreds of disparate sources outside of one's control is _fundamentally broken_.


You seem to be arguing that sandboxing is not a security benefit. On the contrary, sandboxing is maybe the security success story of the past decade.


You missed my point, which is not sandboxing.

WebAssembly, through fine-grained sandboxing, promotes software decoherence by amplifying the number of dependencies (since the major downside of working in this fashion is now advertised as being reined in).

When the number of dependencies goes up, combinatorial explosion ensures that the state-space is full of possible attacks. Fine-grained sandboxing does not solve this anti-pattern but can in fact make it a lot worse. You can examine each and every dependency and make sure that its sandbox is kosher but that does not guarantee anything about the interactions and transitive relationships between dependencies. The metasystem is now an amplified (by sheer number of dependencies) state-space that attackers can seek to manipulate.

Since security is a systemic rather than an isolated affair, the model that the OP advocates for is broken.


You might have to give specific examples with wasm in mind instead of talking about "combinatorial explosions in the state space of the amplified metasystem".


System sandboxing (virtualization) yes. "OS" sandboxing (containers) yes. Process sandboxing yes. None of those need or benefit from webasm.

In-process sandboxing, where wasm competes, is, if anything, the security failure of the past decade. JS in browsers has been a constant, never-ending battle. And it just hard-failed thanks to Spectre.

The idea of everyone rolling their own, hardened syscall interfaces is a straight up terrible idea if security is your goal.



