Although it's possible to disable the feature, the steps are crazy complicated, and probably impossible for anyone who isn't a developer. (Apart from anything else, regular users should never be advised to disable SIP.)
A year ago, a Chrome update (its Keystone auto-update agent) corrupted system files on Macs that had SIP disabled [1]. As a result, those machines no longer booted. Mac users who had SIP enabled were not affected.
I won't disable SIP and I'll avoid installing Google Chrome on my new Macs, if possible.
What stops a Linux program from altering the system? I guess you need root access to change things outside of /usr/local. The same could easily be done on macOS too, but Apple had to reinvent the wheel in a way that is probably less trustworthy.
With SIP you can't change some things even as root. SIP has definitely made macOS a harder target, though it is still lagging Windows in some areas. Linux is almost comically unprotected.
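For anyone who hasn't seen it in action, the effect is easy to demonstrate (output paraphrased from memory, but csrutil is the real tool name):

    $ csrutil status
    System Integrity Protection status: enabled.
    $ sudo touch /System/test
    touch: /System/test: Operation not permitted

Even with root, the kernel refuses the write while SIP is on.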
You can check binary signatures on disk (e.g., Tripwire), but that is extremely tiresome to maintain and does not prevent loading shellcode straight into memory.
/usr/bin/vim was installed by my package manager, but there's no guarantee the version I'm running matches the version that was installed. Debian does ship a file with checksums of the files each package installed, but those aren't checked on execution, nor is the file itself signed (so the process that replaced vim could just as easily replace the checksum, or the process that checks the checksum).
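To illustrate (paths are Debian's; exact output varies by release, and dpkg prints nothing when everything matches):

    # The stored, unsigned checksum list for a package:
    $ cat /var/lib/dpkg/info/vim.md5sums
    # One-off verification of installed files against that list:
    $ dpkg --verify vim

Nothing in that path is signed or enforced at exec time, which is exactly the problem.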
You actually need root on macOS to modify anything under /System, and this was the case even before SIP. This is why some installers ask for the root password.
The issue is defaults. Personally, I prefer using an open source alternative OS where generally everything is disabled by default. (NetBSD is the best example I have found.) Commercial OSes like the ones created by Apple, Microsoft, Google, etc. have default settings that are opinionated, i.e., some users might not wish to choose those settings. This puts a burden on the user to disable or work around them somehow. Apple iOS and macOS by default generate a considerable amount of network traffic to Apple servers as soon as they are powered on. When I power on a computer running NetBSD, there is by default no traffic to a corporate mothership.
Given the choice between (a) a corporate OS that requires me to perform some amount of work to "turn off" some "features" the corporation has enabled and (b) a non-corporate OS that requires me to perform some amount of work to "turn on" the "features" that I want to use, I prefer (b).
It seems to me that the issue with this approach is that those commercial OSs have to deal with a way more diverse audience than NetBSD and even Linux.
While most of Linux's audience (and probably practically all of NetBSD's) is rather technically inclined and could possibly be expected to turn on the security features as they need them, most of Windows' and macOS's audience will very likely have no idea that there is even an option to do this.
Also, software companies would probably take the easy route and just assume that since those features aren't enabled by default, most people don't enable them, and develop their software in a way that could be incompatible with them.
So I think that for an OS like macOS, where most people flock "because it just works and has no viruses", strict defaults are a sane choice. Having people jump through hoops and click through warning messages would probably also push companies to design their software better.
In the end, I think the best approach is for such features to be on by default. But those OSes need an "escape hatch" for someone who actually wants them disabled and actually understands the risks of disabling them. While macOS does (for the moment) have this hatch, it looks maybe /too/ complex. But then, I think the difficulty of the exercise is in setting the "correct" level of complexity for this operation.
Well said. And frankly, this is true even of FOSS -- or why are there so many flavors of Linux?
Even technical users are going to differ in the sets of opinions they hold and are qualified to hold. I care about which Python I have installed. The virtual memory manager? Not so much. Someone else might, though.
The problem is that people who work in certain fields (music, cinema, graphics) have almost no choice when choosing an OS and computer.
Most of them won't even care about sending too much data to a company if that's the price to have the same device everyone else is using in their industry...
I have a G4 I bought for use as a DAW. It still has OS9; I never installed OSX as I knew it would probably slow things down. I never needed to connect the Mac to the internet. If I needed to send/receive files via internet I moved them via crossover cable to a laptop or PC that was connected to the internet.
People today, even more so than in the 2000s, have multiple computers. Would it still be feasible to have a Mac used for {music, cinema, graphics} that is not connected to the internet? One would certainly have other computers that were connected to the internet, and moving files between computers on the local network, preferably via Ethernet, is much faster.
But the point of me telling personal stories is not to suggest anyone could/should do the same things; on the contrary, it is to illustrate that "one size does not fit all". Today's Apple chooses for the user, rather than letting the user choose.
> The problem is that people who work on some specific fields (music, cinema, graphics) have almost no choice when choosing OS and computer.
Came here to say that.
> Most of them won't even care about sending too much data to a company if that's the price to have the same device everyone else is using in their industry...
I do, I truly do care. So much that I'm looking at open-source/Linux options, at least for my home projects. It doesn't look very bright on the video side, but DaVinci Resolve is at least available for Linux. RawTherapee is getting there with local adjustments as we speak. Darktable has lots of power but terrible UX.
The problem is not the defaults. The problem is choice. When the motto is "just works", the system needs defaults that work. But savvy users need to be able to change these things.
A more common use case for this kind of choice is corporate laptops. Companies usually set up their own policies on the laptop before handing it to employees, for good reason. Firewalls are especially necessary to avoid leaking confidential information.
> Given the choice between (a) a corporate OS that requires me to perform some amount of work to "turn off" some "features" the corporation has enabled and (b) a non-corporate OS that requires me to perform some amount of work to "turn on" the "features" that I want to use, I prefer (b).
Actually, default-on is not a problem in that case, if it is easy to turn off.
If Apple had provided a button to turn it off, we would be fine.
Defaults are opinionated by definition. You might agree or disagree with them.
I don't believe the defaults related to this issue are a problem; it's the lack of transparency about this, coupled with it being difficult to change. You probably have to redo that fix after every update. That's akin to running a Hackintosh. And we all know macOS is moving towards iOS, not Hackintosh/PC.
You get industrial grade security solutions out of the box with many Linux distributions. You get namespaces, firewalls and seccomp for free with any Linux kernel, and any Linux system with systemd gets unprivileged containers and sandboxes for free, too. AppArmor exists for MAC, and there are userspace sandboxes.
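For a taste of what ships in the box (property names per systemd.exec(5); a rough sketch, not a hardened config):

    # Throwaway sandbox via systemd: read-only OS directories,
    # private /tmp, no privilege escalation, filtered syscalls.
    $ sudo systemd-run --pty \
        -p ProtectSystem=strict -p PrivateTmp=yes \
        -p NoNewPrivileges=yes -p SystemCallFilter=@system-service \
        bash

    # Or raw namespaces via util-linux:
    $ unshare --map-root-user --net --mount bash   # no network, private mounts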
How many user applications actually fashion a sandbox that is non-trivial to escape with those protections? I struggle to think of any outside of the more popular browsers. The Snap and Flatpak sandboxes are good case studies in the practical limits of Linux sandboxing: it’s rarely effective without designing your entire app around it because the way most applications interact with the system was never designed for it. X11 probably being the most egregious limiter, followed by no standard trusted file access UI.
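You can see the holes for yourself (the app ID here is just an example; output varies per app):

    # What a Flatpak app is actually allowed to touch:
    $ flatpak info --show-permissions org.mozilla.firefox
    # Typically lists things like the x11 socket and broad filesystem access.

    # You can tighten it manually, but most apps weren't designed for that:
    $ flatpak override --user --nosocket=x11 org.mozilla.firefox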
On the server, there’s a reason Amazon built Firecracker and Google built gVisor instead of just using the Linux sandboxing primitives. I think calling them “industrial grade” is pushing it when they’re rarely used as the first line of defense against code that is expected to be actively hostile.
I agree with the thrust of your comment, though: proper sandboxing is barely used outside web browsers. However, I think the problem is as much social as technical. Every time Flatpak comes up, it mostly gets hostile reactions.
The situation is rather unfortunate. A lot of people believe that Linux is more secure than other operating systems, but in practice the Linux desktop is far less secure than e.g. macOS, iOS/iPadOS, or Android.
And no, you aren't safe because it is open source software. Sandboxing also protects against unknown vulnerabilities in open source software.
> On the server, there’s a reason Amazon built Firecracker and Google built gVisor instead of just using the Linux sandboxing primitives.
The reason is that Firecracker is a virtual machine, and Linux containers and sandboxing primitives are not meant to be used for virtual machines.
Pointing at Snap and Flatpak's "sandboxes" is disingenuous when they're notorious for having sandboxing as an afterthought to app distribution.
When I say industrial grade, I mean that the sandboxing and isolation primitives that are used in industry are those that are either provided in the kernel, or are deployed as part of a standard Linux server deployment.
Firecracker is a virtual machine that exists to provide a container-level interface. It is not designed for, nor capable of, running a full virtual machine. gVisor is even less virtual-machine-like: in fact, it originally only worked via a ptrace sandbox and added a KVM-based interface (without a real Linux guest kernel) later to improve performance.
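You can see that boundary for yourself: gVisor ships as an OCI runtime (runsc), and guests see its reimplemented kernel surface rather than the host's (assuming runsc is registered with your Docker daemon):

    $ docker run --rm --runtime=runsc alpine uname -a
    # reports gVisor's emulated kernel version, not the host's
    $ docker run --rm --runtime=runsc alpine dmesg
    # prints gVisor's own boot messages instead of the host kernel log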
I point to Snap and Flatpak because, outside of browsers, they are AFAICT the only attempts to sandbox user applications on Linux. It would be disingenuous if there were some other apps or distribution channels doing a better job that I hadn't mentioned; I'd love to hear of some.
> On the server, there’s a reason Amazon built Firecracker and Google built gVisor instead of just using the Linux sandboxing primitives. I think calling them “industrial grade” is pushing it when they’re rarely used as the first line of defense against code that is expected to be actively hostile.
Actually, Firecracker is not a sandbox. It's basically qemu/libvirt with a minimal implementation of devices: qemu-kvm with an HTTP interface and far fewer devices.
The reason they rewrote qemu-kvm is that it contains a lot of code that is not needed and is more bug-prone. Also, loading a kernel is much faster in Firecracker, since they optimized the kernel-loading code.
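To make "qemu-kvm with an HTTP interface" concrete, configuring and booting a microVM is a handful of REST calls over a Unix socket (payloads abbreviated from the Firecracker docs; you'd normally also attach a rootfs drive):

    $ curl --unix-socket /tmp/firecracker.socket -X PUT \
        http://localhost/boot-source \
        -d '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0"}'
    $ curl --unix-socket /tmp/firecracker.socket -X PUT \
        http://localhost/actions -d '{"action_type": "InstanceStart"}'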
I'm not sure of the distinction you're drawing here. Originally Amazon used Linux namespaces, cgroups, etc. for isolating Lambda invocations, but for security reasons they only did this at the AWS account granularity (i.e., your Lambdas only shared the same VM with other Lambdas from your account). They built Firecracker so they could run Lambdas freely on the same bare-metal machine as others without having to group by tenant VM in this way. Obviously they found the Linux primitives insufficient for maintaining isolation when dealing with hostile native code.
All you need to do is to know about the linked page and have it open on another computer, disable some initial disk protections, reboot into recovery while holding down some unmentioned key combinations, disable further restrictions by typing in cryptic Terminal commands that don't match the public names of the features they affect, reboot again, type in more cryptic commands as root to modify deeply nested system files and perform filesystem voodoo, and then reboot yet again with an optional prayer. Repeat for every OS update.
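For flavor, the Big Sur-era recipe looked roughly like this (reconstructed from guides of the time; exact commands, device names, and steps vary by macOS version, so treat it as illustration only):

    # In Recovery (reboot holding Cmd-R):
    $ csrutil disable
    $ csrutil authenticated-root disable
    # After another reboot: mount the underlying system volume
    # (not the sealed snapshot) writable, make your edits,
    # then bless a new snapshot and reboot yet again.
    $ sudo mount -o nobrowse -t apfs /dev/diskXsY /mnt
    $ sudo bless --folder /mnt/System/Library/CoreServices \
        --bootefi --create-snapshot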
Yeah but as a counterpoint, most Hacker News users have used Linux at one point, which lets you run commands like `rm -rf /` as root. I think it's fair to allow power users to easily disable these protections.
Ironically, coreutils introduced[1] a requirement to add `--no-preserve-root` to `rm -rf /` way back in 2003, so that particular example doesn't really support the counterpoint you're trying to make.
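GNU rm's refusal looks like this (wording approximate, from memory):

    $ sudo rm -rf /
    rm: it is dangerous to operate recursively on '/'
    rm: use --no-preserve-root to override this failsafe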
The easier it is to opt out, the more likely it is for sketchy developers to guide tech-novice users through that process, the more likely it is for malicious actors to take advantage of those who opted out.
https://tinyapps.org/blog/202010210700_whose_computer_is_it....
And a humorous guide on disabling protections like code signing and notarization:
https://www.naut.ca/blog/2020/11/13/forbidden-commands-to-li...