Hacker News new | past | comments | ask | show | jobs | submit login

>An OS that “can’t” get viruses or be hacked sounds pretty desirable. Cynically it makes things like “jail breaking” a google home much more difficult.

I don’t think these claims hold. It is still written in a memory unsafe language, so exploitation is totally possible. As well, for malicious software you’re just looking for a process handing out high privilege handles.




> for malicious software you’re just looking for a process handing out high privilege handles

In the end, that is true.

But the key is how Fuchsia implements the 'capability security model'. The capabilities a process (or a 'component', in Fuchsia's model) uses are explicitly granted to it, and the scheme is implemented in a way that makes it easy to see and account for where each capability is routed from and to. A process can do nothing beyond what the capabilities it was given at creation allow.

Of course, components might be buggy or malicious and leak capabilities. But security holes are bottlenecked by this capability-routing scheme, so even with buggy or malicious components, the system is much easier to audit and fix. And from an attacker's perspective, a component is much harder to reach given the routing path its capabilities travel through.
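To make the "explicitly granted, nothing ambient" idea concrete, here is a toy Python sketch of capability routing. This is not the real Component Framework API, and the capability names (`fuchsia.logger.Log`, `fuchsia.net.Socket`) are illustrative placeholders: the point is only that a component can reach exactly the set of capabilities its parent routed to it, and nothing else.

```python
# Toy model of capability routing: a component can only use capabilities
# explicitly granted to it at creation time. (Illustrative sketch, not
# Fuchsia's actual API.)

class Capability:
    def __init__(self, name):
        self.name = name

class Component:
    def __init__(self, name, granted):
        self.name = name
        # The complete authority of this component: nothing outside
        # this set is reachable (no ambient authority).
        self._granted = {cap.name: cap for cap in granted}

    def use(self, cap_name):
        if cap_name not in self._granted:
            raise PermissionError(f"{self.name} was never granted {cap_name!r}")
        return self._granted[cap_name]

log = Capability("fuchsia.logger.Log")
net = Capability("fuchsia.net.Socket")

# The parent decides the routing: this component gets logging, nothing else.
comp = Component("my_component", granted=[log])

comp.use("fuchsia.logger.Log")       # allowed: explicitly routed
try:
    comp.use("fuchsia.net.Socket")   # denied: never routed
except PermissionError as e:
    print(e)
```

Auditing then reduces to inspecting the `granted` lists: the routing graph itself is the attack surface, which is what makes a full-system audit tractable.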


I expect it will be very much the same as drivers we have today, where you have some game anti-cheat rootkit that has a bug in it.

In Fuchsia's case it will be like that, but the exploitation either gives you access to that driver's capabilities, or simply that driver is giving out handles with permissions insufficiently removed from them.
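The "permissions insufficiently removed" failure mode can be sketched as follows. This is a toy Python model loosely inspired by Zircon's handle rights (where duplication can only reduce rights, never add them); the constant names and `duplicate` method here are illustrative, not the real syscall interface.

```python
# Toy model of rights-carrying handles: a holder may duplicate a handle
# with the same or fewer rights, never more. The bug described above is
# a driver forwarding a duplicate without stripping rights first.

RIGHT_READ = 1 << 0
RIGHT_WRITE = 1 << 1
RIGHT_DUPLICATE = 1 << 2

class Handle:
    def __init__(self, rights):
        self.rights = rights

    def duplicate(self, rights):
        if not self.rights & RIGHT_DUPLICATE:
            raise PermissionError("handle is not duplicatable")
        # Rights can only be reduced on duplication, never amplified.
        if rights & ~self.rights:
            raise PermissionError("cannot add rights on duplication")
        return Handle(rights)

driver_handle = Handle(RIGHT_READ | RIGHT_WRITE | RIGHT_DUPLICATE)

# Careful driver: hands out a read-only duplicate.
safe = driver_handle.duplicate(RIGHT_READ)

# Buggy driver: forwards the handle with all rights intact,
# leaking write access to whoever receives it.
leaky = driver_handle.duplicate(driver_handle.rights)
```

Note the asymmetry: the kernel enforces that rights never grow, so the only way to leak privilege is for a component to hand out more than the recipient needs, which is exactly an auditable routing decision rather than a memory-corruption bug.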

It will be cool to see a full system audit of capabilities, but I don't think that analysis exists yet.


> In Fuchsia's case it will be like that, but the exploitation either gives you access to that driver's capabilities, or simply that driver is giving out handles with permissions insufficiently removed from them

Yes, and then you would have to own a component somewhere along the route that received the driver-exposed capability. Either way, the tight sandboxing and compartmentalization of functionality make things difficult.

There's an example of analysis here: https://blog.quarkslab.com/playing-around-with-the-fuchsia-o...


FWIW, use of memory-safe languages doesn't preclude exploitability. It's "just" another way to reduce attack surface.

(disclosure: i work on fuchsia, big rust fanboy)


I think you are implying this by putting 'just' in scare quotes, but it is one of the most impactful things we can do for the security of native code.


Absolutely agree. The other thing I guess I was implying is that Fuchsia uses a number of tools to reduce attack surface that work regardless of language.

(edited to make the other half explicit)



