Fixing the Desktop Linux Security Model (whonix.org)
90 points by adrelanos on March 28, 2020 | 98 comments



All desktop OSes assume virtually all code that's run on them (natively) is trusted. This is in contrast to the mobile security model (at least iOS's), and arguably the web's, where all apps are assumed to be potentially hostile. Apple is trying to retrofit macOS with the more modern model, but so far the efforts seem mostly clunky and futile. Windows has UWP and Linux has "Snap", which sandbox individual apps, but they aren't widely used (and of course any malicious app wouldn't use them).

When installing anything on your desktop system, keep the above in mind. Never run anything you don't 100% trust.

I do wonder if package managers actually make this problem worse. As another commenter pointed out, they encourage people to install random packages liberally. It's an interesting exception to the "Windows is insecure" trope.


There is a bigger difference than just security: in the web/mobile app model, user data/state and the app are tightly coupled and in many cases inseparable. In contrast, in the traditional desktop model, applications are mostly stateless and the data lives independently as files.

Personally I'm kinda saddened to see the file-centric model fade away, but maybe the interoperability-by-default model it drove was always a pipe dream. It was a beautiful dream nonetheless.


Same model on Android.

And yes, while the Linux kernel has a bunch of issues, glibc has a bunch of issues, etc., the app model, as you pointed out, is the major problem. Even if you use snapd and others, the apps aren't built to ask the user for things like storage access, which makes things difficult.

Ex: "Darktable, an image editor, needs to import pictures from your /home/blah/Pictures directory." Well, then it really needs full read access to /home/blah for things to work. On Android and other similar OSes, at minimum it has its own storage space and asks for additional access, or goes through the specific call to get pictures from the pictures app.


Yes, Android puts on some security theater to pretend it has the isolationist model, but in reality random apps can do things like draw content over each other at any time, without any special permissions.


Actually, there is a setting per-app in Android to disable apps drawing over other apps. And at least on my phone, it defaults to not allowing apps to draw over each other.


This is why I don't understand why devs think it's OK to advise installing by piping curl into a sudo shell.

In what world is that acceptable security?


Since MSIX, Windows has brought UWP sandbox to Win32 as well.

The upcoming Windows 10X sandboxes everything by default, including Win32 applications.


Windows 10 doesn't work the way you claim all desktop OSs do.


Last I checked (I suppose this was in Windows 7), it's trivial to write a Windows desktop app that can directly control the mouse and keyboard input without even requiring admin permissions.


Yes which is very useful in videoconferencing. We don't want to run Microsoft Teams or Skype elevated. You won't be able to control elevated applications though. You're still on the other side of the watertight hatch.


> Yes which is very useful in videoconferencing. We don't want to run Microsoft Teams or Skype elevated.

In macOS, applications do not have permission to capture the screen, control the mouse pointer, use the mic, use the camera, etc. But the user can choose to elevate the permission (similar to iOS).


which, as a user, infuriates me daily


> You won't be able to control elevated applications though

Which doesn't matter to an attacker if their target is the user's data and that data is all stored in non-elevated applications.


I was surprised and alarmed when I learned my games, made with Unity, have so much file system access. I didn't test it too much, but for some reason I just assumed there were more restrictions.


Thank you! The Linux desktop is far from secure, even with all these suggested solutions. Its lack of popularity with attackers means defenders and coders alike base mitigations on assumptions or a small amount of historical data.

From a bad guy's perspective, they just need a foot in the door and move laterally if possible.

In my opinion, while all these hardening features are great, nothing beats behavioral detection and prevention. This isn't easy on Linux, to be honest, but a way for admins (power users?) to define rules like "a process tree with firefox as a parent is running unusual commands or processes", more or less what is done with Sysmon, except perhaps with the audit subsystem, would be of tremendous use. Assuming you can't prevent initial access or exploitation, what can be done to detect attacker activity on the Linux desktop?


It doesn’t have to be; desktop Linux users tend to behave.

Go installing random blobs and running scripts from people that download other scripts? Yes, you'll get owned; don't do that. Even if all your friends are doing it, don't do that.


No, see, that's the assumption I talked about. That's not a targeted attack. For example, if someone is targeting you or your org, they will not wait for you to open a random malicious script somewhere.

Example: let's say you are not very technical or fairly new to Linux. You visit a hotel and their legitimate captive portal asks you to install a package, with a download link, which when installed allows "secure access"; most people in the category I described will install it. Or let's say you visit UbuntuForums or LinuxQuestions a lot, and you receive an email from them (you don't know email can be spoofed, like 99% of people) about an emergency security advisory for some package: distros are working on a fix, but here's a link to a patched package if you want to help test its stability. There's a good chance the majority of users in the category I described would install that package, since it came from a trusted source. And FYI, I just described two TTPs of the APT DarkHotel: https://attack.mitre.org/groups/G0012/

My point is, if someone studies you well enough to know what you trust and don't trust, their attacks will be tuned to work against you. There are many ways to get pwned, and attackers need only one to work. Your expectation that everyone should know all the attack vectors well enough to defend themselves is impractical. Even seasoned pros will overlook verifying downloads with GPG here and there, for example.

This isn't just my opinion, you have to assume if targeted, they will at least get initial access.


It doesn't work that way. You're assuming. People are people; you can't assume "they'll behave". Take China or Russia, who switched to Linux desktops for their operations: will their users be any less susceptible to social engineering just because they use Linux? What you have to understand is that the world of Linux desktops is much bigger than the echo chambers we're used to, where the only other users are tech workers and enthusiasts.


You mean behave like the usual curl ... | sh advice that is so prevalent nowadays?


Disclaimer: I'm the author of Grapl[0], which aims to do what you're talking about. Normally I don't really comment on the product on HN, but you're describing Grapl's raison d'être.

So, audit is one subsystem people use. ebpf is another.

OSQuery[1] is a tool that'll make it pretty easy to hook into audit and collect things like process executions.

Once you have instrumentation you want to start writing those rules. This is where systems like ElasticSearch, Splunk, or in the case of my product, Grapl, come in.

What Grapl makes particularly easy, and what other systems make quite hard, is the type of detection you're talking about: process tree analysis, or generally any kind of analysis that spans many events/entities. This is because Grapl takes the instrumentation output and generates a graph, which makes joins across the data quite efficient, and it exposes the graph via Python, so you can express pretty arbitrary logic. Other systems make joins quite painful and slow.[2][3]
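Outside any of these systems, the raw shape of that parent/child question can be asked with nothing but procps. This is a rough, point-in-time sketch only; real pipelines use auditd/eBPF/osquery precisely so they also catch short-lived processes.

```shell
# Naive version of the "firefox spawned something unusual" rule:
# list every live process whose parent is a firefox process.
for pid in $(pgrep -x firefox); do
  ps --ppid "$pid" -o pid=,comm=,args=
done
```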

Grapl's not at the point where a power user could just get it up and running - though it will be very soon. I've written detection logic very much like what you're describing though[4].

Linux is actually doing fairly well here, in my opinion. OSX was probably the worst, but the ESF will change this. Windows, with Sysmon, is still probably the simplest and best solution for instrumentation (though ETW is pretty strong too).

[0] https://github.com/insanitybit/grapl

[1] https://osquery.readthedocs.io/en/stable/

[2]https://docs.splunk.com/Documentation/Splunk/8.0.2/SearchTut... (ctrl + f 'In large production environments, it is possible that the subsearch in this example will timeout before it completes.')

[3]https://www.elastic.co/guide/en/elasticsearch/reference/curr...

[4] https://github.com/insanitybit/grapl-analyzers/blob/master/a... , https://github.com/insanitybit/grapl-analyzers/blob/master/a...


Very interesting, I hope to see your work mature.

Expensive EDRs already do this well. The UI, the rules, and having a place to put the data are the hardest part imho. Linux makes things difficult because so many components are, by design, reused extensively.

I will look into Grapl, thanks for sharing.


> From a bad guy's perspective, they just need a foot in the door and move laterally if possible.

How does the bad guy get a foot in the door?


> How does the bad guy get a foot in the door?

If you were using Ubuntu Linux in 2007-2008 you will remember the compiz/beryl craze. Everybody and their dog wanted the latest version of compiz.

Adding random repositories was not a problem if that got you the latest version of compiz/beryl.

Until one of the maintainers of a popular repository tricked everybody: in the install scripts he added code to change the desktop background to an image with text reading "don't add random repositories from the internet".

Think about all the random repositories and PPAs that people add to their systems without thinking. That is an example of how you get a foot in the door.


> Think about all the random repositories and PPAs that people add to their systems without thinking.

This doesn't seem like a "desktop Linux security" problem. It seems like a "some users do dumb things" problem. You can't fix that with an OS. But I can "fix" it easily by not adding random repositories and PPAs.


Lots of security issues are fixed not by “fixing” them but by adding mitigations which include user approval, surfacing otherwise hidden activity, and other crutches. At the end of the day it is an automated system and must do things without the human in the loop, so the best you can do is make “the loop” visible to the user.


> user approval

Adding a repository/PPA does require user approval. You can't do sudo without knowing you're doing it.

> surfacing otherwise hidden activity

Yes, and Linux gives you plenty of tools to do that, like, say, "ps aux" and "top" (or the many GUI equivalents in various desktop environments).

> At the end of the day it is an automated system

Remember we're talking about desktops here, not servers. Desktops are not "automated" in any relevant sense for this discussion.


Agree that isn't a great example. Linux package repos are still far in advance of the mish mash of stores and direct install that is Windows and MacOS.

Programs like Thunderbird and every GNOME program with an embedded web browser are a big problem. It's really hard to prevent random web stuff from happening on your box, and I don't want anything except a fully patched Chrome/Chromium or Firefox talking HTTP(S) to anything, unless it is audited, isolated, and constrained to do a small number of tasks.

Office suites and file viewers are ripe for exploits. I'd argue that OpenOffice has an essentially unlimited supply. PDF readers... ugh. Even simple image tools have occasional oopses.

Confining the small Linux tools that do simple things is actually a good match for containment technology; where it fails is big complex GUIs like office suites, or tools that need broad file system access and network comms.

If Linux had any standard for the file dialog, it would be much simpler to retrofit access controls. Even Android is so far ahead of desktop Linux that it is hard to see the catch-up happening. I almost can't believe that desktop Linuxes still don't support running Android apps.


> Programs like Thunderbird and every GNOME program with an embedded web browser are a big problem.

Yes, which is why I don't use any of them.

> It's really hard to prevent random web stuff from happening on your box

No, it isn't. You simply don't run programs that aren't trustworthy. If you want to know what network activity your programs are doing, that's what "netstat" is for.
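For the network side, that inspection looks roughly like this (a sketch; `ss` has replaced netstat on most current distros, and `-p` needs root to show other users' processes):

```shell
# Live TCP/UDP sockets with numeric addresses and the owning process.
ss -tupn

# Same idea with classic net-tools, where netstat is still installed.
netstat -tupn 2>/dev/null || true
```

Note this is after-the-fact observation, not enforcement: a malicious program can exfiltrate data in the milliseconds between your checks.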

> Office suites and file viewers are ripe for exploits.

Yes, which is why I don't use them except in particular cases (open source PDF viewers are generally OK, and for the times when I'm forced to open an office document, LibreOffice has good defaults for not running embedded macros or resolving embedded links).

> I almost can't believe that desktop linuxes still don't support running Android apps.

These people seem to be taking a stab at it:

https://anbox.io/

In any case, the divergence here is due to Android, not desktop Linux; Google deliberately made Android enough of a "not Linux" to make its apps incompatible with desktop Linux.


I would like to have per-application egress control on linux without containerising everything myself or writing baroque filtering that is undebuggable. I shouldn't need to trust every line of code in some massive package just to run it safely -- and I shouldn't have to tcpdump it to see what it is doing on the network.
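A blunt version of this exists today without writing a full sandbox: run the program in an empty network namespace so it has no egress at all. A sketch, where `some-untrusted-app` is a placeholder and `unshare -rn` relies on unprivileged user namespaces (enabled on most current distros):

```shell
# No network at all: the app sees only a down loopback interface.
unshare -rn some-untrusted-app

# For selective egress, the usual trick is a dedicated uid plus an
# owner-match firewall rule, e.g.:
#   sudo useradd -r appuser
#   sudo iptables -A OUTPUT -m owner --uid-owner appuser -j REJECT
#   sudo -u appuser some-untrusted-app
```

It's all-or-nothing and clumsy compared to real per-app policy, which is rather the parent's point.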


> This doesn't seem like a "desktop Linux security" problem.

It is a desktop Linux security problem, because people will want to install applications that are not available in a distribution. So, you should provide a mechanism to let them do that relatively securely.


I once discovered a botnet running on my Ubuntu system. I always assumed a PPA was the culprit. The IP addresses were Gazprom, some stuff in China, a ton of others, etc. Too long ago to remember details. I also remember when I first started with Arch, which at that time had unsigned packages.

Edit: comment truncated by sleeping aid. I've gone dumb, but also remember the router suffering too, even after a hard reset, westell, I think.


> I also remember when I first started with Arch, which at that time had unsigned packages.

Very much against the urging of Arch maintainers, people are using package managers to install stuff from the AUR, where literally anyone can upload PKGBUILDs.

Another easy vector (for developers) is language package managers. E.g. Rust's Cargo runs build scripts (small Rust programs for configuration) for many crates, unsandboxed.


Using malformed PDFs, documents, mp3s, etc. Exploiting browser, ssh, php, mysql, mail server vulnerabilities, etc, etc. By no means a trivial task, but the attack surface is large enough that sec bugs keep being introduced.

https://www.cvedetails.com/product/47/Linux-Linux-Kernel.htm...

https://www.cvedetails.com/product/36/Debian-Debian-Linux.ht...

https://www.cvedetails.com/product/20550/Canonical-Ubuntu-Li...

https://www.cvedetails.com/product/79/Redhat-Enterprise-Linu...


Depends on the bad guy. Social engineering is the best way on desktops. And this is what I meant by assumptions: on Windows, I can tell you phishing and removable drives are the easiest ways. The Linux desktop simply isn't targeted enough to be confident which attack vectors are common.

But you're missing the point: there will always be vulnerabilities. Even if everything is written in Rust, logical vulns will exist, default-config issues will exist (MongoDB and internet exposure, for example), and even the smartest people can be phished. Even with a YubiKey, if I can trick you into opening a link that has you download and open a file that executes code on your machine, I don't need your login, because I have your session cookies and I can reverse-tunnel traffic through your IP. And I may not even care about that; maybe I just want your SSH private key, and I will be on my way.


> Social engineering is the best way on desktops.

Nobody uses my Linux desktop/laptop computers but me, and I have no reason to tell anyone else anything about them, so I don't. So I'm not vulnerable to this attack vector.

> if I can trick you into opening a link that will have you download and open a file that will execute code on your machine

I never click on links unless I know their source. So I'm not vulnerable to this attack vector either.

If all that is wrong with the desktop Linux security model is "some users do dumb things", then that's not fixable by any OS. And even with that said, Linux is much better at containing the damage from doing dumb things than, say, Windows.


I guarantee you, you will fall for a clever phish. You can't outsmart a good spearphish. You're wrong to think stupidity is the vuln; the vuln is basic human psychology. But even if you're right, it's not like the Linux desktop is made only for people like you, right? It wouldn't be "some people are dumb", it would be "almost all people are dumb", even if being dumb were the issue.


> I guarantee you, you will fall for a clever phish.

You can guarantee no such thing. Not getting caught by phishing just isn't that hard. It requires a little self-discipline, yes. So do most worthwhile things.

> It wouldn't be "some people are dumb" it would be "almost all people are dumb" even if being dumb was the issue.

That's still not an OS issue. No amount of clever programming will help if people are that determined to do something dumb.


> That's still not an OS issue. No amount of clever programming will help if people are that determined to do something dumb.

And yet, a malicious iOS app typically cannot take over the OS or other applications, or exfiltrate data from other applications. Sure, someone's bank account can be phished. But systems like iOS show that it is possible to protect users against a large number of attacks.

One of the goals of an OS is to provide meaningful isolation boundaries. This is why we have a separation between ring 0 and ring 3, isolation of process address spaces, and a separation between UID 0 and UID > 0. There is no reason why we should not introduce new forms of isolation invented after the '70s.


> a malicious iOS app typically cannot take over the OS or other applications, or exfiltrate data from other applications

Neither can a malicious Linux application.


This is nonsense.

A malicious Linux application can exfiltrate all the data in your home directory. Taking over the OS is not hard either, just run a keystroke logger or put a rogue sudo in the user's path.

Of course, this does not apply if the application is properly sandboxed, which is sadly not the default on Linux yet.


You ever run a pip install? Maybe an apt-get install?


> You ever run a pip install?

On my regular desktop, not of anything that wasn't Python code I had written myself.

In development containers, yes, sometimes I've installed third-party Python programs to test them (and of course in Python it's easy to read the source code since what you install is the source code), but that's isolated from my regular desktop.

> Maybe an apt-get install?

Never of anything that wasn't from my distro's signed package repository.


In summary: make Linux roughly as annoying as Catalina.

e.g. constantly having to allow applications access to files outside the initial default sandbox.

e.g. constantly having to grant permissions to applications that need "unusual" system capabilities (e.g. raw I/O)

It also seems clear that this concept ultimately heads towards signing application binaries, like macOS.

Maybe this is a win for some set of Linux users (maybe even most); it's far from clear that it's appropriate for all.


> Maybe this is a win for some set of Linux users (maybe even most); it's far from clear that it's appropriate for all.

...and that's okay. What really bothers me about these security conversations is the attitude that hardening has to be enforced on all users. That's ridiculous—everyone has their own threat models.

The problem with the prompts in Catalina isn't that they exist or even that they're the default setting, but that they're mandatory for everyone.


I mean, there are ways to get rid of them…


How? Please share, because I would very much like to turn them off!


I think disabling SIP does it? I'm not completely sure. Either that or GateKeeper.


That's the thing, it doesn't. I had both SIP and Gatekeeper turned off.

Presumably, without SIP you could in theory either edit the tcc database directly or patch the checks in memory, but implementing that kind of hack is beyond my skill level. And a hack shouldn't be required—there should be at minimum a command line switch for SIP-disabled machines.


I'm not sure, then; I know I don't see them…


> e.g. constantly having to allow applications access to files outside the initial default sandbox.

Any system that has ordinary users trying to do this for ordinary software will fail.

Things like apparmor can work well because the author or distributor of a package knows that it shouldn't ever do a particular thing, so if it attempts to, that is a bug and it should be refused. The user shouldn't even have to be aware that this is the case because the program shouldn't even attempt to do the thing the author/distributor asserted it will never do.
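A distributor-shipped profile of that kind might look like the following. This is a hedged sketch: the binary path is hypothetical, though the syntax follows AppArmor's profile language.

```
# /etc/apparmor.d/usr.bin.some-viewer -- "this program never talks to the
# network and never reads your SSH keys", asserted by the packager.
/usr/bin/some-viewer {
  #include <abstractions/base>
  deny network,
  deny @{HOME}/.ssh/** mrwkl,
  @{HOME}/Documents/** r,
}
```

A well-behaved viewer never trips the deny rules, so the user never sees a prompt; only a compromised or buggy one does.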

> It also seems clear that this concept ultimately heads towards signing application binaries, like macOS.

Linux has had that for years already. Packages are signed by the distributor. Nobody complains about it there because the device owner can authorize additional parties to sign packages for their machine, as it should be. Have a look at the install instructions for Signal Desktop for Debian on signal.org -- "sudo apt-key add" etc.

You can also install binaries outside the package manager, and then the filesystem executable bit serves a similar purpose -- disabled by default but the user can override it.


> Things like apparmor can work well because the author or distributor of a package knows that it shouldn't ever do a particular thing, so if it attempts to, that is a bug and it should be refused. The user shouldn't even have to be aware that this is the case because the program shouldn't even attempt to do the thing the author/distributor asserted it will never do.

AppArmor + vendor-provided profiles don't work, because they mean you have to give most applications access to the user's full home directory. If you have, e.g., an office suite and don't permit full home directory access, users will bail out because they cannot open files in their random folder structure. If you permit full home access, it can be exploited to exfiltrate data, write malicious files, etc.

macOS solves this by using a privileged service for Open/Save dialog boxes. If you ask for an open dialog in a sandboxed application, it is handled by this privileged service, which makes the file available in the application's sandbox. It is quite elegant, because it does not require the user to follow any additional steps (it works as any other file dialog), but restricts an application to its own sandbox. Flatpak also follows this approach with its portal mechanism (which is unfortunately not supported by every application yet).
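Flatpak's override mechanism makes that trade-off concrete. A sketch, using the LibreOffice Flatpak ID just as an example of a portal-aware app:

```shell
# Revoke blanket home access for one app; file opens then go through the
# documents portal, which maps only the files the user actually picked
# into the sandbox.
flatpak override --user --nofilesystem=home org.libreoffice.LibreOffice

# Inspect what the app can currently touch.
flatpak info --show-permissions org.libreoffice.LibreOffice
```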


> AppArmor + vendor-provided profiles don't work, because they mean you have to give most applications access to the user's full home directory.

It works fine. Its purpose is to solve a different problem than this.

> macOS solves this by using a privileged service for Open/Save dialog boxes.

There are circumstances where this can be useful, and it would be fine to implement this on Linux for applications where it's appropriate, but it doesn't work as a universal rule.

For example, suppose I have a music player app, I choose open, I get the dialog and choose a playlist. The playlist file is just a list of paths to music files which were never selected in the dialog. The app still needs to open all the music files. And the same thing for office documents with links to images, source code project files etc.

And what about background services? If I install an app to make regular backups of my home directory then it needs access to the whole thing, and not just once. Likewise anything that does indexing, malware scanning etc.

So it's still really the same thing. You have an app that asserts it never needs access to files outside the open dialog and then it can be restricted to just that, but other apps need to do different things. You need somebody to identify which apps can do without certain capabilities, the average user is not equipped for that, so it falls to the author or the packager.


I'm the original/lead author of a large cross-platform application that runs on Linux, macOS and Windows.

Trust me when I say that there is no such thing as application signing on Linux.

We don't care what Linux distributions do - we would prefer if they didn't package our application, because 99% of them are incapable of doing it right. We can't stop them, because GPL.

The situation on Catalina is completely different, and cannot be compared with the executable bit.


Packages have signatures on Linux. They're validated during install rather than execution, but validating them on execution wouldn't cause meaningful problems as long as the device owner can install additional signing keys. Not being able to do so is what causes your consternation with Catalina.

Whether the applications are well-packaged is a different matter, and that's the place where apparmor does get into trouble as well. In theory the packager describes things the application should never do, but if they're sloppy then it actually requires some privileges they didn't grant it, and then you have users mashing every available button to turn off all the security because their application isn't working.

This is the constant battle in security. You have false positives and false negatives. You can't err on the side of false positives or people will stop using your system because it constantly gets in their way. You can do a lot of hard work to reduce both false positives and false negatives, but that's a lot of hard work.

Or you can err on the side of false negatives, which is nice and easy and doesn't interfere with people using the system, except that it's insecure and then systems eventually get compromised. This turns out to be the one that people tend to prefer in practice. Oops.


> Packages have signatures on Linux. They're validated during install rather than execution, but validating them on execution wouldn't cause meaningful problems as long as the device owner can install additional signing keys.

How does an average computer user know how to verify a public key (does it belong to who it claims to belong to)? They won't. I know many long-time Linux users, and they will just copy and paste all installation commands, including apt-key add commands. If a server is compromised, they will just copy and paste an attacker's PGP key. There are many criticisms I can make of a centralized certificate authority such as Apple, but by default Apple's code signing provides far more security for third-party applications. It is far less likely that Apple's key is compromised than the key of some random third-party APT repository maintainer. Furthermore, if a developer's key is compromised, Apple can easily revoke it.

A centralized authority has its problems (the authority becomes the gatekeeper). But for most users, it is much more secure.

So, how do you suggest how to verify a public key for a third-party repository at scale? Most users will not be part of the PGP web of trust and we cannot all meet the author in real life to compare PGP key fingerprints.


> So, how do you suggest how to verify a public key for a third-party repository at scale?

We already have infrastructure for this. Here's that first line for the Signal installer:

> curl -s https://updates.signal.org/desktop/apt/keys.asc | sudo apt-key add -

You're getting the key from signal.org over TLS. It's an authenticated channel, the TLS certificate is subject to revocation if compromised. Key distribution isn't the problem.

You also don't inherently need a central authority for revocation. There is already a line in sources.list where you're checking for package updates; it would be straightforward to have the package manager check the same location for key revocations, or specify another URL to check when the key is installed.
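For what it's worth, apt-key has since been deprecated, and the current idiom goes a step further: keep the key in its own keyring file and pin it to the one repository, so a compromised third-party key can't vouch for anything else. A sketch using the same Signal example:

```shell
# Store the key as a standalone keyring rather than in the global trust store.
curl -fsSL https://updates.signal.org/desktop/apt/keys.asc \
  | gpg --dearmor | sudo tee /usr/share/keyrings/signal.gpg >/dev/null

# signed-by= scopes the key to this repository only.
echo 'deb [signed-by=/usr/share/keyrings/signal.gpg] https://updates.signal.org/desktop/apt xenial main' \
  | sudo tee /etc/apt/sources.list.d/signal.list
```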


That verifies that the download is as it should be.

Apple's app signing verifies that the app is approved by them.

Substantively different things (though related).


A package signed with the Debian signing keys proves it is approved by them. A package signed with the Signal Foundation signing keys proves it is approved by them. It's substantively the same thing, you're just enabled to trust multiple independent parties, or yourself if you install something outside of the package manager.


It's not the same thing at all. Neither Debian nor Signal audit 3rd party applications in the way that Apple does, which is why neither Debian nor Signal play any runtime role in denying the ability to run a 3rd party application.

Apple's app handling process is more than just attaching signing keys. Which is partly why they want you to pay for it.


> Neither Debian nor Signal audit 3rd party applications in the way that Apple does

How do you mean? Almost everything Debian packages is authored by third parties. Debian aren't to my knowledge the original authors of LibreOffice or GIMP or Firefox. And they certainly don't just sign anything without looking at it.

They do also sign some packages they have authored themselves, as does Signal, as does Apple.


The point is that downloading a key from a website does not prove that the key belongs to that organization. If the server is compromised, the attacker could replace the public key and TLS would not barf at it.

Your reactions perfectly underline my point: most users do not understand trust or if they understand trust at some level, they do not know how to verify trust.


> If the server is compromised, the attacker could replace the public key and TLS would not barf at it.

Equivalently, how do you know the public key that came with your iPhone wasn't compromised? It could have been if it was imaged in the factory from a compromised Apple server. It also could have been if Apple's signing server was compromised and used to sign an OS update your device installed, or their development servers that store the next version of the OS to be released. What makes you think that is any less likely than the Signal Foundation having their servers compromised?

If the place you get the public key from is compromised then the public key is compromised. TLS can't save you from that, but what can? (Actually, some kind of web of trust might. If signal.org started telling you its package signing certificate is different from what all your friends say it said last week, that would be an obvious red flag.)


We are an OEM/ISV, effectively. Even though our software is GPL'ed, we distribute binaries of it for a name-your-own price. There is no signing system when a user installs our builds.

Contrast with Catalina, which will not run our application by default (and makes it actually quite tricky for naive users to run at all).

I am fully aware of how linux distro packaging works, but that's not the only way applications are distributed, even on linux.


With SELinux or AppArmor you can achieve that. But you have to specify such permissions.

And btw, SELinux is maintained by NSA, it is secure.
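For example, a hypothetical AppArmor profile for an image viewer might look something like this (the binary path and rules are illustrative only, not a drop-in profile):

```
#include <tunables/global>

# Hypothetical profile: let an image viewer read ~/Pictures and nothing
# sensitive, even though it runs as your user.
/usr/bin/someviewer {
  #include <abstractions/base>
  #include <abstractions/X>

  owner @{HOME}/Pictures/** r,

  deny @{HOME}/.ssh/** rwklx,
  deny @{HOME}/.gnupg/** rwklx,
  deny @{HOME}/.bashrc w,
}
```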


> SELinux is maintained by NSA, it is secure.

I’m not sure I see a correlation.


They employ smart people who know how to write secure software as well as break others' insecure software. They just prefer that you keep running insecure software unless you are Uncle Sam.


I would say they would like you to run "secure enough" software, that can only be broken by possessing strategic knowledge and computing assets.

https://en.wikipedia.org/wiki/NOBUS

https://www.youtube.com/watch?v=ulg_AHBOIQU


Catalina is not annoying by my standards, at all.


Agreed, I don't really see the problem. It's not like Catalina is bugging you over and over again with permissions. The first time you start e.g. Skype, it will request access to the mic and camera, you either allow it or deny it, and that's pretty much it.

Both alternatives suck much more: (1) give applications no access at all, which makes a system unusable; (2) give applications access to everything, which makes a system insecure.

I'd rather know if an application wants to start using the mic or webcam.


I like that madaidan is thinking deeply about these issues, and I think this is a great intro to the subject, along with some very interesting comments on the page. madaidan is also behind Obscurix, a hardened Arch variant [1]. I tend to think the containerization of apps is the lowest-hanging-fruit approach, but there is also a larger underlying issue touched upon here: the monolithic style of the current Linux kernel. That's why I have also hoped HURD would rise some day, and why I continue to find MINIX 3 very interesting.

All that said, I see one glaring omission: HIDS. One thing I like to emphasize with people is that, especially in an age of APTs, if someone sophisticated enough is targeting you, you will probably suffer a compromise. One of the most vital parts of a security model is therefore reducing the amount of time it takes to discover a compromise. This is highly undervalued in modern orgs imho.

There are a few different ways to do this, and none are perfect, but I tend to prefer a good HIDS like OSSEC, sometimes custom scripts checking file hashes, etc. There is one large weakness here though, which is how often I see a HIDS deployed but 1) not configured well, and 2) the logs never get monitored/ingested/alerted on! If you can combine a good HIDS and actually monitor the log outputs, you can discover most compromises fairly quickly from telltale system actions.

Lastly, this is also why versioned backup is so important: you need to be able to trust old data from before the compromise. The longer you go without knowing you are comped, the more likely you are restoring data that is comped too.

[1] https://github.com/Obscurix/Obscurix


I am surprised no one has mentioned Qubes OS yet on this thread. Are there many Whonix users not also using Qubes?


Qubes is just the hypervisor. The security within the VM still matters. Whonix supports being run in Qubes.

https://www.whonix.org/wiki/Qubes


Why no mention of firejail or similar tools?

Sandboxing applications is an excellent way to minimize risks. And firejail comes with a long set of default rules that make it easy to use without any configuration.


Firejail has far too large an attack surface and is suid root, which has resulted in plenty of privilege escalation vulnerabilities.

https://seclists.org/oss-sec/2017/q1/25

https://www.cvedetails.com/vulnerability-list.php?vendor_id=...

Also see this thread https://github.com/netblue30/firejail/issues/3046

Instead, we're going to use bubblewrap which is similar but with minimal attack surface. See the sandbox-app-launcher section of the post.
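For a flavor of what bubblewrap looks like, here is a bare-bones invocation ("someapp" and the sandbox home path are placeholders, and this is not sandbox-app-launcher's actual command line; it's printed rather than executed so the sketch runs even where bwrap isn't installed):

```shell
#!/bin/sh
# Sketch of a minimal bwrap sandbox: read-only system dirs, a private
# home and /tmp, and all namespaces unshared. "someapp" is a placeholder.
cmd="bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --bind \$HOME/.sandboxes/someapp /home/user \
  --unshare-all \
  --new-session \
  someapp"

# Printed instead of executed so this runs anywhere; pass it to sh -c
# (and create the private home dir first) to actually launch the app.
echo "$cmd"
```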


> Why no mention of firejail or similar tools?

They do mention bubblewrap in the sandbox-app-launcher section.


Dumb question: why would somebody prefer apparmor over selinux? Why isn't selinux cited?

From my (basic) understanding of selinux some limitations could be put in place using selinux (I'm thinking of, for example, preventing Firefox and it's children from accessing filesystems outside a predefined path, or labelling files downloaded using Firefox from accessing certain paths if executed -- using labels and policies)
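Something like the following, I imagine (the SELinux type name is a guess on my part, and the commands are printed rather than executed):

```shell
#!/bin/sh
# Rough sketch of the labelling idea: assign downloaded files a dedicated
# type, then write policy keyed on that type. "user_download_t" is a
# guessed type name for illustration.
cmds="semanage fcontext -a -t user_download_t '/home/me/Downloads(/.*)?'
restorecon -Rv /home/me/Downloads"
echo "$cmds"
```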


AppArmor is easier to understand, learn and use. SELinux is definitely more powerful, but the learning curve is steep. It is not just the security model that is complex. The documentation surrounding it isn't that good, and AVC logging is not enabled by default (you need to install some package), so when something goes wrong I always struggle to figure out how to debug it. Finally, there's also this library of existing SELinux policy provided by the operating system that you need to integrate with. That one is largely undocumented, and what little documentation exists is scattered. Heck, it took me a week just to find its source code (hint: it is split across two different repos, not obvious at all).

In contrast, apparmor made sense and worked the first time I used it.

So yeah, SELinux is more powerful, but by being so difficult to use it encourages people to go 'hey, something is broken, fsck this, I'll disable SELinux'.


SELinux is just so complicated. I'm sure there are B&E tools out there that will just search to see where you have misconfigured it and attack there.

With a really minimal system (like an IoT box) you can lock things down very tight with SELinux, but it's not practical for a standard desktop to have full coverage; it just protects some standard components.


This is all great work, undoubtedly. I do wonder if the app sandboxing story can be improved further, though. For example, I suspect you can further harden the defense with a gVisor style approach to sandboxing syscalls and filesystem access, while still using the same kernel mitigations to protect the sandbox itself (if not more). But, I don’t know for sure.


There already seem to be seccomp filters in place.


gVisor is a step further: it acts a bit like a user-mode kernel, which handles syscalls and carefully passes them to different components that are individually insulated with seccomp, etc. The idea is that even a root escape gets you nowhere, because root in the sandbox can never be root in the host. You'd first need to exploit the "usermode kernel", and even then you are just an unprivileged Linux user with a very strict security profile, depending on which component you have exploited. Exploiting the actual kernel would theoretically be a lot harder because the syscall surface is no longer directly exposed.


Why not just run android on the desktop? Seems easier than changing all this.


Because there is lots of desktop software available for Linux and virtually none for Android and the user experience of Android as a desktop is crap.

If I had to choose between Android and windows I would most certainly pick windows.


Are there distros that have these 'fixes' built in? Alpine Linux?


Chrome OS is a very secure Linux distro. While the base system is very restricted, they also support Linux sandboxes, making it secure for you to use the computer both for online banking and for downloading random code from the internet. Check out the talk "Linux for Chromebooks: Secure Development": https://www.youtube.com/watch?v=pRlh8LX4kQI


This short post doesn't talk about a lot of things that are happening, like Wayland to address X security issues or Flatpak for sandboxing. The way it's written looks more like marketing than anything else... If you want to build your own hardened apps and kernel, you can look into Gentoo.


> like wayland to address X security issues

The post does mention X's security issues. We are discussing switching to wayland but XFCE doesn't support it yet.

If we don't switch to wayland, I might add X sandboxing via a nested X server such as Xpra to sandbox-app-launcher. It's already on the TODO list.

> flatpak for sandboxing

Flatpak is not a good sandbox. It fully trusts the applications and the permissions are far too vague to be meaningful. For example, many applications come with "filesystem=home" which means read-write access to the entire home directory so to escape, they just need to write to .bashrc.
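To make that concrete, here's the .bashrc escape mechanism demonstrated against a throwaway file instead of a real dotfile:

```shell
#!/bin/sh
# Why "filesystem=home" defeats the sandbox: an app that can write your
# dotfiles can schedule code to run *outside* the sandbox the next time
# you open a shell. Demonstrated on a throwaway file, not a real ~/.bashrc.
fake_bashrc=$(mktemp)

# What a malicious app with home access would append to ~/.bashrc:
echo 'payload_ran=yes' >> "$fake_bashrc"

# What your next (unsandboxed) login shell would then do:
. "$fake_bashrc"
echo "payload ran outside the sandbox: $payload_ran"

rm -f "$fake_bashrc"
```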

We're using sandbox-app-launcher instead.

> The way it's written looks more like marketing than anything else

Sorry for talking about our recent projects then?


It is interesting that Fedora Silverblue wasn't mentioned in the discussion at all. It aims for an immutable root filesystem with transactional updates. Like Fedora, it uses SELinux to isolate processes with security policies. It aims to be a minimal base system, where users install applications through (ideally) sandboxed Flatpaks and do development in containers [1]. Fedora has also historically been more proactive than upstreams in enabling hardening features.

Another good example (as mentioned by a sibling) is ChromeOS. It, of course, has privacy problems, but ChromiumOS is available in source form AFAIK.

[1] I know, containers != security.


Alpine is far from a desktop distro


Why? I've got it on a laptop and it's fine


I agree that it's good for the desktop (I would have used it if Void Linux didn't exist) but that isn't really its goal.


That's fair. I'm happy to agree that desktop was never Alpine's main goal, having initially targeted network appliances and growing a lot after being picked up for Docker images.


Aren't so many packages on it out-of-date?

Though I'm not sure that makes it "not-a-distro", but still...


Not that I've noticed (when following the latest release), and I doubt that it's worse than Debian.


I just looked at their package repos and it seems it's gotten better at least for GCC and Clang. It used to be pretty bad. See my older discussion here: https://news.ycombinator.com/item?id=18731913


I am developing a proper application firewall in the form of a Linux kernel module, but I have encountered issues with Linux "kernel protection". What Linux is doing is shipping half-baked solutions that prevent real solutions from being implemented. Instead of a simple way of defining what an application can do (which would help 99.999% of users), they implement protections against the highly advanced threats affecting the other 0.001% who are targets of APTs, and in doing so prevent protecting everyone else. It is just crazy.


These ideas are mostly for people that want to run prepackaged, closed, commercial software. Community maintained software with short dependency lists doesn’t have this problem.

I run Linux on my desktop for a number of reasons, and one of them is not dealing with stuff like this. If I really don't like something I dump it in a container (I know it can get out, but at some point you either trust the software or decide it's unusably malicious; I mostly only do this when I have to run node.js crap anyway).


> Community maintained software with short dependency lists doesn’t have this problem.

That’s just not true. One of the first things mentioned in the post is vulnerabilities. An image viewer shouldn’t be able to put your SSH keys at risk no matter how many memory safety issues it has.

Plus there’s all sorts of software that doesn’t have short dependency lists that I’m sure you’d still like to use. A web browser, for example?




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: