
SELinux is for users who have a software package and want to restrict what that package can do. Systems like Landlock are meant for developers who want to voluntarily restrict what their own software can do.

So SELinux is meant to say "I am going to run this program, and I want to limit it to these things." Landlock / pledge / unveil are meant to say "I am writing this software, and I want my software to never be able to do X." This way developers can reduce the impact of any vulnerabilities in their own software. It is defense in depth on the development side.

SELinux can be used if you don't trust a piece of software. But it is generally applied by people who do not know about the internals of the software. In some sense it is treating the software as a black-box, and limiting what the black-box can do.

Landlock / pledge / unveil are implemented by the developer. If you don't trust the developer's intent, then these are not going to help. However, this is a great way for developers with good intentions to limit the impact of any mistakes they make. Notably, since this system is implemented by the developers, the implementers generally have deep knowledge of how their software works. They can consider the internals of their own software.
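
To make that concrete, here is a minimal sketch of what this can look like on OpenBSD, assuming a hypothetical daemon that only needs stdio, network sockets, and read-only access under /var/www (the path and promise strings are just illustrative):

    #include <unistd.h>
    #include <err.h>

    int main(void) {
        /* ... read config, open the listening socket, etc. ... */

        /* Expose only /var/www, read-only; the rest of the filesystem disappears. */
        if (unveil("/var/www", "r") == -1)
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)   /* lock the unveil list */
            err(1, "unveil");

        /* Promise to only use stdio, read-only file access, and inet sockets. */
        if (pledge("stdio rpath inet", NULL) == -1)
            err(1, "pledge");

        /* ... main request loop; a later exploit cannot lift these limits ... */
        return 0;
    }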

I expect that, on balance, systems like these are probably more useful. Black-box approaches are inherently limited, and notably expensive to implement. Moreover, it seems like exploiting bugs is a bigger problem than software that is intentionally malicious. Both of these points mean it makes sense to help developers limit the impact of bugs. It seems cheaper, and it targets an apparently bigger problem.




> the implementers generally have deep knowledge of how their software works. They can consider the internals of their own software

This is the weird part to me: if you're a developer, you already control how your app works. You can put any kind of restriction you want in place without Landlock. You just program it. Don't want the user to access a character device? Test whether the file you're about to open is a character device, and refuse to open it. Don't want your user to open something in /etc? Check the file path and don't open that file.
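
For example, the character-device check is only a few lines of ordinary code (the open_regular_file() helper here is hypothetical):

    #include <sys/stat.h>
    #include <fcntl.h>
    #include <errno.h>
    #include <unistd.h>

    /* Open a path, then refuse it if it turns out to be a character device.
     * Enforced purely by this program's own logic. */
    int open_regular_file(const char *path) {
        int fd = open(path, O_RDONLY);
        if (fd == -1)
            return -1;

        struct stat st;
        if (fstat(fd, &st) == -1 || S_ISCHR(st.st_mode)) {
            close(fd);
            errno = EPERM;
            return -1;
        }
        return fd;
    }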

My guess is that Landlock and its ilk are basically abstractions to allow programmers to be lazy? I'm still trying to understand the use case because it feels very much like a "webapp firewall", where the developer doesn't want to be bothered with understanding security, so they slap a thing on their app and tell themselves it's secure, when in reality it's still far from secure. Sandboxes get popped all the time.


The unstated assumption in conversations about computer security is that the software industry is bad at its job. Incredibly bad at its job. Suggesting that developers write correct code will get you (rightly) laughed out of conversations about security.

The problem that solutions like Landlock are trying to solve is "given that this piece of software is going to be compromised, what can we do to mitigate the damage?".

Solutions like Landlock (unlike SELinux/AppArmor/traditional UNIX) make the additional assumption that if the software gets compromised, the compromise will happen after some set point in its execution (such as after it begins processing untrusted data). Once the program is compromised, the attacker controls the program, not the original programmer. This allows them to bypass any checks the programmer put in, since those checks were only ever enforced by the program itself. Landlock solves this by moving enforcement outside of the program and into the kernel. Now, once a program has set up its restrictions, it is no longer possible for that program to bypass them (unless the attacker can also find a kernel exploit).
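
To make that concrete, here is a rough sketch of what that kernel-side setup looks like using the Landlock syscalls on Linux 5.13+; the /var/www path and the specific access rights are placeholders, and real code should check every return value:

    #define _GNU_SOURCE
    #include <linux/landlock.h>
    #include <linux/prctl.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* Declare which filesystem accesses this ruleset governs. */
        struct landlock_ruleset_attr ruleset_attr = {
            .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                                 LANDLOCK_ACCESS_FS_READ_DIR |
                                 LANDLOCK_ACCESS_FS_WRITE_FILE,
        };
        int ruleset_fd = syscall(SYS_landlock_create_ruleset,
                                 &ruleset_attr, sizeof(ruleset_attr), 0);

        /* Grant read-only access beneath /var/www (placeholder path). */
        struct landlock_path_beneath_attr path_beneath = {
            .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
                              LANDLOCK_ACCESS_FS_READ_DIR,
            .parent_fd = open("/var/www", O_PATH | O_CLOEXEC),
        };
        syscall(SYS_landlock_add_rule, ruleset_fd,
                LANDLOCK_RULE_PATH_BENEATH, &path_beneath, 0);
        close(path_beneath.parent_fd);

        /* Enforce the ruleset: from here on the kernel denies any handled
         * access outside /var/www, and the process itself cannot undo it. */
        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        syscall(SYS_landlock_restrict_self, ruleset_fd, 0);
        close(ruleset_fd);

        /* ... now start processing untrusted input ... */
        return 0;
    }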


That makes sense now. It's sort of like an app bringing along its own SELinux policy. My ideal would be a formal specification that dictates to the system what it should allow the program to do, and also how other parts of the system should interact with the program. "I only want to be able to open a file, and the system/other programs should only send me file data from local disks." (I expect Android sort of does this?) It would also be handy to have 'taint mode' at the system level.


It's a second safety net. It probably can be used by lazy programmers, but that's not what it is intended for.

The idea is "make it impossible for my app to misbehave in certain ways". It is a lot easier to enforce this at the kernel level (you don't need to check every usage of e.g. open()). Moreover, there isn't really a way to screw this up through a logic mistake.

But most of all, to say with absolute certainty your software does not allow unintended remote code execution is nigh impossible for anything complex. By adding this, you reduce the possible downsides of any such exploit. It also helps you detect such exploits earlier.

It's similar to the kernel using address space layout randomization, compiling with stack guards, having a non-executable stack, etc. It is defense in depth.



