Why improving kernel security is important (mjg59.dreamwidth.org)
146 points by luu on Nov 10, 2015 | 53 comments



Perhaps the promotion of SELinux and its integration in some major distros is impeding the adoption of more practical kernel security measures like AppArmor or grsecurity. At least that's what Chris Siebenmann, a long-time sysadmin for the University of Toronto's CS department, thinks.

https://utcc.utoronto.ca/~cks/space/blog/linux/SELinuxWhyICa...

What I don't like about SELinux is that it doesn't integrate easily with existing tools. It requires special filesystem attributes, and tools like ls and cp have to be modified to support it. Third-party tools like Ansible also have to go out of their way to support SELinux. It seems to me that AppArmor and grsecurity don't have that problem nearly as much.
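For anyone who hasn't bumped into this, the "special filesystem attributes" are SELinux security contexts stored in extended attributes, and coreutils grew extra options to deal with them. A rough illustration (the paths and the httpd type here are just examples, nothing canonical):

$ ls -Z /var/www/html/index.html                    # show the file's SELinux context
$ cp --preserve=context index.html /srv/www/        # keep the label when copying
$ chcon -t httpd_sys_content_t /srv/www/index.html  # set a type by hand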


I feel like what SELinux is missing is an extra layer of abstraction for the configuration, to make it easier for humans to work with.

The "Just disable SELinux!" sysadmin answer strikes me as coming from the same place as "Just disable IPv6!" and "Just disable PulseAudio!" - a lot of sysadmins don't like having to learn new things. I've done sysadmin work for about 20 years now, and I don't share that view - learning new things is a necessity in this line of work. You either learn a new thing you can apply in your work, or you find out it's not a good solution and now you have an informed opinion on why it's not good.

You mention AppArmor as a "more practical security measure" being held back by SELinux. But Chris's blog post states that they disable that too. In essence, his argument seems to be "I don't really care about all this security stuff, I just want it to work the same way it always has". I understand the viewpoint, but that's not the way to progress.


Our viewpoint is ultimately pragmatic: at the moment, both SELinux and AppArmor appear to be too much work for the potential benefit they offer in our environment. We could spend a great deal of time configuring both of them in order to make things work, and they would still probably not do anything much for us in practice.

(Remember, both SELinux and AppArmor are secondary defenses, not primary defenses; they potentially limit the damage if your system is already partially compromised.)

In part this is because at least SELinux only really works easily if you put everything in what the distribution considers its standard location and run things in the standard way. The moment you deviate from this, you wind up having to research an increasingly large number of file and executable contexts and (re)label an increasingly large number of files. And I'm ignoring NFS here, which we use heavily (I doubt NFS files can easily have SELinux attributes, especially when they live on non-Linux NFS servers).
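To make the relabeling point concrete: moving, say, a web root to a non-standard path typically means teaching the policy about the new location and then relabeling, roughly like this (the path and type are purely illustrative):

$ semanage fcontext -a -t httpd_sys_content_t "/srv/www(/.*)?"
$ restorecon -Rv /srv/www

Multiply that by every daemon and every non-standard location and the time adds up quickly.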

Security is always ultimately about pragmatics. You have X amount of time to spend on security in all its aspects, and you need to use this time efficiently, to gain as much security from it as possible. Our judgement is that use of our security time configuring SELinux does not have a particularly high payoff.

(For clarity: I'm the author of the entry that mwcampbell linked to.)


Hi! Thanks for your ZFS entries, they've been very useful to me in the past.

I understand the pressure of not having enough time to do things perfectly. But the kernel is a project with widespread use in many domains, and the security features it implements have to provide strong security. We can't let kernel security requirements be dictated by environments that don't believe they require them. It's possible to set up a much less restrictive SELinux environment if you don't need or want to get the full benefit.

This also ties into my first point, which is that we really need a better abstraction for SELinux configuration. Something that would be simpler to administer for environments that don't require the utmost in security isolation. But it should build on the strong in-kernel MAC implementation that already exists.


The general SELinux issue is a complex subject. My short form take is that regardless of its potential in theory if implemented nicely, in practice SELinux as deployed has consistently prioritized mathematical perfection (and yelling at people) over practical usability in the field. The real result of this has been less security than would have been achieved with a less perfect but more usable system because in the field SELinux does not degrade gracefully and so many people turn it off entirely. Some number of systems are quite secure (assuming no leaks in SELinux itself); many other systems are not secure at all. This is a bad outcome (unless you decide that only people who are dedicated enough to use SELinux really matter and everyone else is 'unprofessional' or the like), and I don't like it. I want a better outcome, one with more security that I can actually justify deploying, one where more daemons and programs are hardened to some degree even if it's not a huge amount.

(At this point the OpenBSD pledge() work is looking attractive, although there are real organizational issues that would make it hard to do in Linux.)

Perhaps one can get to a better future with SELinux by having people build and ship tools that generate small SELinux configurations for daemons, or systems that read daemon configuration files and automatically label the relevant directories and files, or any number of other user-friendly ideas. But we've had something like a decade of SELinux and its usability problems at this point, and it hasn't happened yet. It's hard to avoid the obvious collection of conclusions.


SELinux came from the defensive side of the NSA, pre 9/11, and was not written to make Linux more secure. It was written to force application developers to re-architect their applications so they could run under a mandatory security environment. The original idea was that, once enough applications could operate under a mandatory security model, a secure OS could be put in underneath them.

NSA had had several multi-level secure operating systems developed for them in the past, but they didn't have any applications other than ones specifically written for them. SELinux was intended to remedy that.

Unfortunately, nobody seemed to understand that, and NSA doesn't do much outreach.


> PulseAudio

I abhor PulseAudio, because it just doesn't work as smoothly as JACK or ALSA. Sometimes learning a new thing isn't necessary, as the old ways of doing something still hold.

I agree with the BSD way of doing this: force it on the users and try to help them come to terms with your changes. If there's a revolt and/or it's too much work for you, revert. Otherwise your entire ecosystem is safer.


In my experience, most of the problems people have with PulseAudio come down to buggy ALSA drivers not handling buffer resizing correctly. Adding tsched=0 as a module option "fixes" that by making PulseAudio fall back to interrupt-driven scheduling instead of relying on the buggy interfaces. The result is much the same as playing directly through ALSA.
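(For reference, the usual form of that workaround - assuming your setup uses the stock udev detection module - is changing the corresponding line in /etc/pulse/default.pa to

load-module module-udev-detect tsched=0

and then restarting PulseAudio.)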

The reason PulseAudio wants to use these interfaces is that it has to make a tradeoff between CPU utilization+battery usage versus low latency. If you use ALSA directly, the first application to open the device controls the size of your buffer. The first client was a music player requesting a large buffer? Then your SIP client will get high-latency audio. PulseAudio tries to fix this by dynamically resizing the buffer.

I'm sure there are actual issues with PulseAudio, but somehow most of the problems I see people complain about lie further down the stack.

Of course, discussion of PulseAudio is rather off-topic here; it was just one more example of how "just shut down the new thing!" is often easier than diagnosing the actual problem.


-1 on PulseAudio. I used to be ambivalent on the entire thing until I experienced a proper JACK setup (KXStudio).

Having a visual interface to plug/sort sound inputs and outputs seems trivial, until you see someone running their Skype conversation through FL Studio (in Wine!) to distort their voice using autotune, and then back out to the inputs. Or recording vocals over Skype, filtering them with reverb, and sending them back over Skype to the singer. In realtime. This stuff is _trivial_ in KXStudio.

PulseAudio is still struggling to get "Front Left" and "Front Right" both playing audio whilst getting in a fistfight with ALSA. In 2015.

I'm building a (private) Linux distribution for my corporation's Chromebooks, based on Gentoo/Calculate KDE. I plan to avoid everything Lennart Poettering has touched. It'll be simultaneously vindicating and frustrating to hear my users saying "wow, you can do that in Linux?" again.


The best experience I have with PulseAudio is by layering it on top of JACK.


I believe that what Matthew Garrett is talking about is mostly different from SELinux and AppArmor and so on. Those are all kernel features to harden user-level software in the face of vulnerabilities. Garrett is (mostly?) talking about internal kernel features to limit the damage of kernel vulnerabilities.

(Many of the grsecurity changes are kernel hardening, for instance; they don't directly affect user level code.)


Exactly. We understand the benefit of mitigation mechanisms to protect against userspace bugs, but there's still pushback against mitigation mechanisms that protect against kernelspace bugs.


Tomoyo Linux is an option too: https://en.wikipedia.org/wiki/TOMOYO_Linux


> practical kernel security measures like AppArmor or grsecurity

Don't forget Tomoyo - I always thought the interface/UX at least (the side facing the sysadmin) looked much more sane than SELinux: http://tomoyo.osdn.jp/

That said, I'm not sure how much adoption Tomoyo has seen, and an untested security measure isn't really great either.


Currently, most users of computers running Linux own those computers, but do not have root access to those computers, because they are cellphones running Android. Instead, Samsung, Google, HTC, and random human rights abusers like Hacking Team have root access to them. Those users can currently obtain root access to their own computers only because of vulnerabilities in the kernels, hardware, and occasionally applications being used.

Given this very bad situation, I would argue that improving kernel security is at best ambiguously beneficial and possibly actively harmful to users. Maybe we should wait to improve kernel security — probably by switching to a different kernel — until we have somehow managed to escape this very dangerous situation in which people do not have root on their own computers.


> I would argue that improving kernel security is at best ambiguously beneficial and possibly actively harmful to users

Kernel security is inevitably going to improve - the argument is over the length of time that's going to end up taking. If we continue to rely on the incompetence of those standing against us, we not only weaken those who don't have the knowledge or expertise to take advantage of their freedoms (and who end up with phones that are compromised by even more overtly evil actors than the phone vendors themselves), we eventually end up in a situation where the holes are blocked off and we have no method to achieve freedom at all.


This is a pointless philosophical debate: leaving kernel bugs unfixed will not make Android a more open platform. That problem needs to be fixed at the vendor level but there's no reason to make the rest of the Linux-using world less secure while waiting for that to happen, to say nothing of the likely much greater number of Android users who will see those exploits used by malware.


This is as far from a pointless philosophical debate as it could possibly be. We are discussing whether people should be able, in practice, to get root on their own phones. Leaving kernel bugs unfixed is the way that happens today. When people get root on their phones, they can then uninstall the malware that is already installed on the phone when they buy it.

At this point, Android user-owners are the vast majority of the Linux-using world, and indeed the computer-using world. Their needs and interests would outweigh those of everyone else if they were actually in opposition.


Here's why I find this argument unproductive:

1. The vast majority of Android users don't know or care about it. They use what's on the device when they get it and if anyone's getting root, it's more likely to be a malware author than the user.

2. If the manufacturer locks you out, the benefit from a root exploit is time-limited until they ship an update and you're faced with losing root or having known security problems.

3. Using root when the manufacturer doesn't want you to is likely to be used as an excuse to deny support requests later.

It's not helpful to anyone to suggest a computing experience which can change at any time and requires them to delay things like security updates until a new exploit can be found. #1 and #3 combine, in that people are either going to avoid using an exploit if it could mean eating the cost of a new phone at any time or, worse, their buddy will do it for them but not help with the ongoing sysadmin work needed to keep the rooted phone secure. It's worth remembering that the only iPhone users at risk of the Hacking Team exploits were the ones who'd jailbroken the device; I'd bet anyone affected reconsidered how much installing an ad blocker or a pirated game was really worth.

If you want root, the only viable solution is to refuse to give money to vendors which don't let you have it, which currently means contacting your government to express your support for legal restrictions on devices which cannot be customized by the user. If you continue to support restrictive vendors with your wallet, absolutely nothing will change.


Aha. This is a nice motive for the no-more-Linus-in-Linux conspiracy theory, if one thinks that it takes a Linus-level principled stand to keep openness on this issue.


It is philosophically pointless, as you are arguing from an untenable position: if you don't trust your system provider, you shouldn't trust your system. Leaving kernel bugs unfixed does not make your system any more trustworthy.

From a practical point of view, it's a different matter: the exploits may allow you to regain some control over an untrusted device. But you still have the fundamental disparity of you trying to gain (software) control over an untrusted (hardware) device. That makes it a pointless philosophical debate regardless of practical use.


Oh, I hadn't thought of that interpretation of "pointless philosophical debate". You're right: considered as a purely philosophical debate, it's pointless. But considered as a debate over what practical measures to improve our contingent human reality in the face of non-ideal and poorly understood circumstances, it's far from pointless.


In the real world of actual humans with everyday problems, control is often more valuable than trust, and trust without control can lead to situations where there is no value at all :/.


People running Nexuses have root on their smartphone; even better, they can replace the copy of Android with a modified one (which is really what you want, not people running around with /sbin/su on their smartphone). Granted, there are blobs everywhere, running a more recent kernel is hard (impossible?), and there isn't a sensible mechanism like Secure Boot on x86 to add the user's key to the bootloader, but it's not that bad a situation. The problem is the rest of the Android ecosystem, and Microsoft phones, and of course Apple.

Besides, if free software can't provide security, people won't run free software. End of story.


> People running Nexuses have root on their smartphone; even better, they can replace the copy of Android with a modified one

I don't understand: Nexus products don't ship with root access for users, and while users can obtain root access through various methods, they can do the same on almost any mobile device. They can also replace Android with various alternatives on almost any mobile device. How are Nexus products special in these regards?


On a Nexus phone, the pathway to unlocking the bootloader (and thus being able to install an alternative OS) is this:

$ fastboot oem unlock

On the vast majority of other Android phones, one must perform a customized attack against the phone in order to unlock the bootloader... and on some phones, that attack has not yet been discovered.


The choice between Google and Hacking Team seems super easy to me.

I also wonder if making root more available ends up enabling the bad actor at the phone store more than it ends up helping end users.

(I do get that many people see it as a matter of principle)


It's not a choice between Google on one hand and Hacking Team on the other. It's a choice between Google and probably Hacking Team, on one hand, and the user, whoever they decide to trust to help them root their phone (the actor at the phone store, who may be good or bad), and probably Hacking Team anyway on the other.


I meant someone at the phone store intervening without the user's knowledge. Improving user control without also enabling that tampering is a challenge. I will insist that there is a large group of users that will do exactly what the employee of the phone store tells them to do, even if it is easy to discover that this is a bad idea.

I also didn't thoughtlessly make that first point. Fixing 100 bugs does not have zero impact on whether Google or Hacking Team ends up with control.


"The number of people using Linux systems is increasing every day, and many of these users depend on the security of these systems in critical ways. It's vital that we do what we can to avoid their trust being misplaced."

This, a thousand times over. Security and privacy concerns are ultimately among the make-or-break points of any major platform, and Linux needs to pass this with flying colors, especially as it grows ever closer to the ordinary user who is finally considering replacing Windows.


Even without the point about more users wanting to replace Windows with a Linux distro, it makes it harder to argue that "ATMs should run Linux instead of Windows" if Linux is falling behind in security. Also, ATMs should probably run OpenBSD.


Mainly, the PC inside an ATM should not do anything remotely security critical. PC-based ATM platforms are actually designed around compartmentalization, and in the ideal case the PC should only draw the UI on screen, handle some subset of input events, and route encrypted messages between other components of the ATM (whether it's implemented this way in actual deployments is another issue).


You can follow some of the security work the kernel community is doing via the kernel hardening mailing list: https://lwn.net/Articles/663361/


> So they took advantage of the fact that many Android devices shipped a kernel with a flawed copy_from_user() implementation that allowed them to copy arbitrary userspace data over arbitrary kernel code, thus allowing them to disable SELinux.

If the problem was the flawed copy_from_user(), why is he talking about improving the kernel? The manufacturer was at fault for tampering with that safety.

> If we could trust userspace applications, we wouldn't need SELinux.

I won't argue that SELinux is a bad solution to our security problems and that a cleaner kernel-native solution would be better. But the very fact that we are still having this discussion (https://news.ycombinator.com/item?id=10537268) shows that we haven't yet agreed upon a definitive solution to the code isolation problem. The Docker container model is definitely a much better approach to limiting system calls, but is anyone seriously considering integrating Docker into the kernel source code?

Another problem is that, while we may have a lot of proposals for these problems, implementations (like grsecurity), and even a satisfactory number of developers, we have only so many trusted reviewers to audit that code, so we will always lose out one way or another.

Furthermore, I'd like to note that with neither Shellshock nor Heartbleed was the kernel the problem; in fact, I don't even remember the last kernel-related CVE I've seen. So is kernel security as horrible as the recent uproar seems to suggest?

Finally, we need to be honest with ourselves here. Most security problems we face today are not directly related to the software we run; they're related to how we build, ship, and evaluate our software. We don't have a security-focused mindset in the corporate world, and security does not add product value for the average user, so we end up shipping code with obvious security flaws. As a tech-savvy community we cannot just pretend that security is everybody else's fault.

All that said, I don't think we should turn a blind eye to the security limitations of the kernel, but I also don't think the kind of narrow-minded discussion that has been floating around the problem will improve anything.


> If the problem was the flawed copy_from_user(), why is he talking about improving the kernel? The manufacturer was at fault for tampering with that safety.

The flawed copy_from_user() came from the upstream Linux kernel. The malware vendor just took advantage of a security bug which the device vendor had not patched. Proper SELinux rules could have prevented this from being exploited.

> Furthermore, I'd like to note that with neither Shellshock nor Heartbleed was the kernel the problem; in fact, I don't even remember the last kernel-related CVE I've seen. So is kernel security as horrible as the recent uproar seems to suggest?

Do you follow any of the bug disclosure lists? Kernel security issues are common. If your primary source for security vulnerability news is mainstream media, you should be aware that that's a horrible way to stay informed. Most bugs do not get a catchy name like "ShellShock" or "BEAST"; they just get a nondescript CVE number and are patched. Media coverage is a terrible indicator of the severity of security bugs.

Just as an example, search for "linux" on this page. This is not a complete list; I'm sure there are better resources, but I didn't have time to find them. https://www.debian.org/security/2015/

Most aren't remotely exploitable (most of those that are in recent memory are SCTP-related), but they're serious nonetheless.


> So they took advantage of the fact that many Android devices shipped a kernel with a flawed copy_from_user() implementation that allowed them to copy arbitrary userspace data over arbitrary kernel code, thus allowing them to disable SELinux.

Why is it even possible to disable SELinux at runtime? That should be a compile time option. Either you want the added security, or you don't. It's useless if someone can turn it off.

It required a flawed copy_from_user() implementation, which is now fixed. But that's not the only bug in the Linux kernel; there will be new ones, ones that may or may not allow an attacker to disable SELinux. The problem is that it is modifiable.

Earlier today there was the story about OpenBSD's pledge(). One of the basic ideas of pledge is that you can't turn it off.


> Why is it even possible to disable SELinux at runtime?

If you're able to overwrite arbitrary kernel memory you can just replace the SELinux code with code that says you can do anything. The idea is to make it more difficult to take advantage of kernel bugs such that you can modify arbitrary kernel state.


> Why is it even possible to disable SELinux at runtime?

As with all things SELinux, it's a policy setting. You can configure your policy to disallow disabling SELinux.
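(With the reference policy that most distributions ship, I believe the relevant knob is a boolean:

$ setsebool -P secure_mode_policyload on

which locks out policy reloads and switching to permissive mode until the next reboot.)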

That wouldn't have made a difference in this case though, when you have unprivileged applications accessing kernel memory directly.


> Why is it even possible to disable SELinux at runtime?

Because policy configuration happens from userspace. And it doesn't really matter if root users are disabling it, or just doing the equivalent of `chmod -R 777 *` without technically "disabling" it.
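(Concretely, for root that's just

$ setenforce 0

which drops SELinux into permissive mode without technically "disabling" anything, unless the policy has been set up to forbid that.)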


No. Because bugs.

If you can write to arbitrary kernel memory you can do anything you like. Disable SELinux; write funny messages to the console - it's all yours for the taking.


Contrarian stance here: if kernel security is important, the game is over, the attacker has root; they are running code on your server and you've lost because of one of the innumerable issues that the attacker knows about and you don't. Restated, the kernel is defined to be insecure; now play the game. :-)

Design-wise, the principle I've adopted is that each server is running an application set; when one of those applications is compromised, the server is immediately rooted, game over, all applications in the set are compromised, and the attacker controls those resources and the server. In general, the layer of security I rely on to prevent server rooting is an application's security, preceded by network-level security. Separation of security regions (domains, whatever you want to call them) is accomplished by having different ephemeral servers; exploitation of one region is not ratchetable to the next region unless it's prespecified that way.


> If kernel security is important, the game is over, the attacker has root

No - one of the primary mechanisms attackers use to gain root is to exploit kernel vulnerabilities.


Right.... if you're relying on the kernel to protect you, you're dead in the water as it is, today. So the reasonable defense is to assume the kernel security doesn't exist.


The reasonable long-term defence is to improve kernel security.


Honestly... no, I don't think so. Security is much too much of a cat and mouse game at this level, where you're specifically fixing things to deal with attacks. There will always be another vulnerability, another bad coding practice that someone comes in through. It's not provably secure (it's in C and has five zillion config options, among other things); it's not secure by design (it wasn't designed for security); it's not even secure by effort (Linus is not a religious fanatic about security).

So the sane approach until a fully redesigned system comes by is to assume that it only provides a thin layer in the defense game and partition access levels and security controls appropriately.


I don't want to sound overly hostile here, but have you actually read the post? Many (if not most) kernel bugs can be mitigated with existing technology, and there's ongoing research that will bring this down even further. There are certainly scenarios where assuming that any level of compromise may be significantly deeper than you imagined is the correct response, but that's not a supportable response in the majority of cases.

Looking at it another way - if application security is important, the game is over, the attacker has already got in via the network. We're bad at writing applications, so we shouldn't expose them to the internet.


All of the above. Do the best you can at network security, and try to get better. Do the best you can at application security, and try to get better. Do the best you can at kernel security, and try to get better. And do the best you can at intrusion detection, and try to get better.


The name-calling referenced in this article also influenced a talk by Theo de Raadt on a new security feature for OpenBSD:

http://www.openbsd.org/papers/hackfest2015-pledge/mgp00003.h...


What name calling? And how is that related?


The image is of a plastic masturbating monkey, probably in reference to Linus Torvalds. They call him "Loudmouth Linus" because he reportedly called them names, and reference the recent WaPo article.

Look, honestly? What the BSDs are doing is great for those who want to take advantage of it. The mitigations they are adding will probably eventually be merged into the Linux kernel. And if not, unless users stop using Linux, Linux will exist without them.

And thus, I think this is an immature reaction to having a pull request refused. This is many things, beginning with "not nice."

The BSDs have their own stuff, the Linux users have theirs. Dunno what the hubbub is about.


> They call him "Loudmouth Linus" because he reportedly called them names, and reference the recent WaPo article.

He did call the OpenBSD folks that: http://article.gmane.org/gmane.linux.kernel/706950


Huh. Masturbating monkeys.

Well, I guess he had that one coming, then. Forget everything I said :)


I think the poster perhaps meant to link to the presentation generally[1], which goes over the introduction of BSD's pledge() feature[2].

[1]: http://www.openbsd.org/papers/hackfest2015-pledge/mgp00001.h...



