I'm actually surprised that Mac, Windows, and Linux are not way more locked down by default by now.
We already see the problem on Android and iOS: apps often ship binaries doing bad things (spying, if not worse). Often it's not the app developer themselves, it's some library they included.
Then we have Steam users (maybe that's mostly a Windows thing), where basically every game they download could be installing anything, reading anything: rootkit, keylogger, network monitor, sending your private keys somewhere, reading your email database, downloading your documents. There is absolutely nothing stopping that at the moment except trust. A few games have already been caught spying via 3rd-party libraries they included.
macOS appears (I'm not a security expert) to have gotten slightly better as of Mojave: several folders in ~/Library are not readable by default, and permission is granted per app (try `ls ~/Library/Mail`).
But basically I think I'd like things to be way more locked down by default (with, of course, ways to open them up per app). I'd like to be able to run an app (not just an App Store app but any app) and be relatively sure it can't do anything bad. That includes command-line apps. I wish every app were run in a sandbox. I'd like to be able to run any app or script and know it's not uploading ~/.ssh/id_rsa, for example.
> I'd like to be able to run an app (not just an App Store app but any app) and be relatively sure it can't do anything bad.
Mac application sandboxing is independent of the App Store. There’s no reason third parties can’t ship applications from their own websites that use the sandbox.
The reason that many don’t is that the sandbox tends to be so restrictive to even common and reasonable tasks that it often makes the UI noticeably worse.
I get that the restrictions are severe. I think work should be done to fix that rather than giving up and just having everything be massively insecure by default.
Did you check out Snap, AppImage and Flatpak? Okay, Snap is store-based, AppImage needs to be paired with Firejail for sandboxing, and I'm not sure about Flatpak. But I think they're going in the direction you're looking for.
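As a rough sketch of what Firejail gives you for the "run anything, keep ~/.ssh safe" case (the `--net=none` and `--private` flags are real Firejail options; the binary name is made up):

```shell
# Run an untrusted binary with no network access and a throwaway,
# empty home directory. Nothing in the real $HOME (e.g. ~/.ssh/id_rsa)
# is visible inside the sandbox, and it can't phone home either.
firejail --net=none --private ./untrusted-app
```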
I'll take advantage of this thread to ask a related question: do you think AddressSanitizer (ASan) should be applied to harden binaries in release builds? It does have a cost, but it can detect and crash on use-after-free or out-of-bounds accesses. Do you think the price can be worth it in certain cases?
I asked that question myself before[1], and others, too [2].
But the answer is a clear no. Read this explanation [3] on oss-security where someone lays out in detail in which ways ASAN creates security problems.
It ultimately is just not built as a protection mechanism, but as a debugging tool. While you'd catch some bugs (use-after-frees, buffer overreads), you'd introduce others.
Maybe there could be a "like ASan but designed for production" tool. I think Softbound-CETS was trying to go in that direction, but it never became production-ready and is, as far as I know, no longer developed. Right now such a tool does not exist.
What absolutely should be done, though: every C/C++-written tool should be tested with ASan. This doesn't happen (not a single Linux distro uses ASan testing as a default practice before adding packages), and that is very sad. (I know this because I still find major packages that have ASan-findable bugs exposed by their own test suites.)
Although having a Linux kernel is an implementation detail on Android, and they might as well replace it with something else tomorrow, it is probably the only case where the Linux kernel gets deployed in the wild with most security knobs turned on.
SELinux, seccomp, CFI, libc with FORTIFY, forced initialization of kernel data structures, ASAN as required testing, ...
That could be read as: even the most secure Linux environment currently deployed, arguably security best practice (which your post helpfully enumerates), uses ASan for testing only.
This is an excellent article with a lot of original research, unlike the typical security guides I see, which are pretty much copy-paste jobs (look up anything about Apache HTTPD for a very good example of that).
A couple of things about Linux security that really trip people up / that I don't think are promoted enough:
- People forget to secure vim. Even if you've secured everything else, it's very easy to use vim's built-in mini-shell to escape to an actual shell.
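For instance, `:!sh` or `:shell` inside vim drops straight to a shell, which matters wherever vim runs with elevated privileges. One common mitigation, sketched as a hypothetical sudoers fragment (the username and path are made up): grant `sudoedit` instead of `sudo vim`, so the editor process runs as the invoking user and its shell escape gains nothing.

```
# /etc/sudoers.d/editors (hypothetical)
# sudoedit copies the target file, opens the copy in an editor running
# as the invoking user, and writes it back with privileges afterwards --
# so vim's :!sh escape only yields the user's own shell, not root's.
alice ALL=(root) sudoedit /etc/nginx/*
```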
- Look at / focus on your SSH key fingerprints. These things matter. I didn't pay attention to them nearly as much as I should have in the first two years of my career, but it's so easy for someone to intercept your connection, grab your private key, and then just pass you on to your regular server without you even knowing.
- Please, please secure your web servers. The default configuration is very difficult to argue is secure: out of the box, every web server reveals the exact version of Apache/nginx it's running, and there's no automatic HTTPS redirection, which would be useful for 90% of modern deployments. Check out Caddy, which helps with some of this.
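The two defaults mentioned above can be sketched as a hypothetical nginx fragment (`server_tokens` is a real nginx directive; the server block is a minimal assumption):

```nginx
http {
    server_tokens off;   # stop advertising the exact nginx version

    # Redirect all plain-HTTP traffic to HTTPS.
    server {
        listen 80 default_server;
        return 301 https://$host$request_uri;
    }
}
```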
Your private key is not sent to the ssh server. They could do something else nasty like collect keystrokes from your session on the fake server though.
Out of the box your SSH client will only use its private key to prove that it knows the key (it signs a message specific to the SSH connection) during login.
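The same "prove knowledge by signing, never transmit the key" idea can be sketched offline with OpenSSH's signing mode (`ssh-keygen -Y`, available since OpenSSH 8.0; the file names and the `demo` namespace are arbitrary):

```shell
# Generate a throwaway key pair.
ssh-keygen -t ed25519 -N '' -q -f demo_key
# "Client" signs a connection-specific challenge with the private key.
echo "session-specific challenge" > challenge
ssh-keygen -Y sign -f demo_key -n demo challenge   # writes challenge.sig
# "Server" verifies using only the public key -- the private key is
# never transmitted, only a signature over this one challenge.
awk '{print "demo@example " $1 " " $2}' demo_key.pub > allowed_signers
ssh-keygen -Y verify -f allowed_signers -I demo@example \
    -n demo -s challenge.sig < challenge
```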
I think this is properly designed so that a bad guy can't live-proxy it: if the bad guy gives the victim parameters he can decrypt, those won't match the real server's; if he passes through the real parameters, he's no longer able to read the session, so why bother?
For environments where you want proxy behaviour (e.g. "jumpboxes") you can tell the client to volunteer to sign on behalf of further clients down the chain. Bad guys could use that, but they still don't get the actual key, so they must conduct any attack live, and clients can tell you about, or even ask you to explicitly authorise, every such request.
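For the jumpbox case, a hypothetical ~/.ssh/config fragment (host names are made up): `ProxyJump` tunnels through the intermediate host without ever placing key material on it, while `ForwardAgent` is the "volunteer to sign for hosts further down" option described above.

```
# ~/.ssh/config (hypothetical host names)
Host inner
    # Tunnel through "jump"; authentication to "inner" is end-to-end,
    # so no key material ever lives on the jump host.
    ProxyJump jump

Host legacy-inner
    # Agent forwarding: the jump host may ask our local agent to sign
    # on its behalf. The agent can be set to confirm each request.
    ForwardAgent yes
```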