Beware of sudo on OS X (rongarret.info)
223 points by lisper on Aug 16, 2015 | 92 comments



Honest question:

Why does anyone still care about root escalation on workstations? When do we stop pretending our MacBooks are multi-user UNIX mainframes?

App sandbox to full user privilege escalation may be scary. But if someone can run arbitrary code as my user, then by all means, have root as well.


I was part of the original UNIX porting team at NeXT, and we were just trying to get the most popular non-proprietary operating system (UNIX) modernized (by using Avie's CMU Mach) as the modern core to a standard OS. We didn't even think about any server type applications. And Steve told us that he was tired of being beaten up at Apple for being closed source - so he wanted UNIX. (As an aside, he refused to allow any object-oriented tools in the system - but in Engineering we realized that Steve wouldn't know an object-oriented system from a hole in the ground, so we ignored his direct orders to not put in anything object oriented. The rest is history.)


Really interesting. What do you make of videos like these?

https://www.youtube.com/watch?v=2kjtQnPqq2U

https://youtu.be/SaJp66ArJVI?t=2m04s


I remember those days, and I remember Steve saying many of those things. Steve didn't have any deep understanding of what he was saying, and he spent a lot of time working on the wording of those sentences - practicing them over and over with a few of us in Engineering and Product Management. He created those sentences based on what engineers were telling him - reformulating them into his personal style. Really, Steve would often say that he couldn't do what we were doing, but he could help sell it. He described himself as a marketing guy - and that is what he was.


I wonder how he compared with John Sculley at this time.


That is classic Jobs -- sort of techie but not understanding what any of the words mean, very demanding, and sensitive to the customers' biases.


Thus proving that all our nerdy knowledge isn't as important for being incredibly successful as we think it is.


Very true. Our biases sometimes even get in the way of creating things that people want.


At the time, and for a long time after, a lot of computer science types were arguing that everything should be implemented from the ground up in very high level OO languages and abstracted runtime environments, and that this was the inevitable way things would be. Just look at the discussion on HN right now about Smalltalk. Java also came out of this mindset. But Steve came from a highly practical, nuts-and-bolts engineering mindset where perfect is the enemy of the good.

So I think Steve's attitude has to be put into the context of that conflict. But as MrTonyD shows, there were people at NeXT who understood how to translate that attitude into practical terms. And to be fair to Steve, that's because he knew how to hire absolutely tip-top talent and trust them to do their jobs right, even when they disagreed with him.


> At the time, and for a long time after, a lot of computer science types were arguing that everything should be implemented from the ground up in very high level OO languages and abstracted runtime environments and this was the inevitable way things would be.

I don't really see that they were wrong. After all, C and POSIX are also very high level and abstracted compared to the assembler-based OSes which existed before them.

The big problem now is that to be a successful alternate OS one needs to be bug-for-bug compatible with POSIX, and have a C compiler, in addition to one's own language and OS. Perhaps recent years' developments in virtualisation and containerisation will make it easier for a non-Linux host to run a Linux kernel and hand containers to it as needed.

Maybe one day OS and systems research will once again pay off.


Why was he against object orientation?


My guess: back then, it was harder to afford the extra resources OO required. It wasn't until around the time I graduated from college that machines, and the VMs of managed environments, got faster than "embarrassingly slow" in mainstream hands. This is also a big part of the reason there were languages like Eiffel, and why C++ got so entrenched. It also explains the existence of Objective-C.


> the extra resources for OO

Thinking like this annoys me, because it assumes a specific implementation for OO. Are people conflating OO with Java? Are they conflating it with Smalltalk? Are they unaware you can write C in an OO style?

> It's not until the time I graduated college that machines and VMs of managed environments started to get faster than "embarrassingly slow" in mainstream hands.

Yes. People are apparently conflating OO with Java or Smalltalk.


OO tends to imply things like dynamic dispatch (for method lookup), implicit extra parameters to functions (this pointer and so on), accessors (potentially a lot of extra function calls), placing variables in memory based on grouping within objects rather than where lookup might be most efficient. Not to mention possibly run time type info, which takes up more memory. Those are all things that are mostly fast enough to be a non-issue now, or are optimized by compilers, but they are generally slower.


Seems like you're mentioning a few features that are either optional or inlinable in C++. It took me a long time to appreciate this aspect, but C++ in particular is all about not making you pay for costs you didn't explicitly ask for.


Once you start removing features, you are only speaking of an object-oriented-like programming language, not a true object-oriented programming language.

Inheritance, abstraction, and encapsulation are features you can achieve merely through syntax. What requires extra resources at runtime is polymorphism, and that's one of the most important functional aspects of object-oriented programming.


And they say religion is dying out in the civilized world.

Consider if you will that the things you are describing as slow - chiefly the indirection of a virtual call - are slow relative to other things because of relatively recent CPU advancements. If a CPU does no branch predicting or caching, I think the overhead of a virtual call doesn't sound so bad. So, if we're talking about why to avoid OO in the late 80s, I'm not as convinced the cost of a virtual call is the barrier.

That said, my point was that C++ lets you do these things, but you have to ask for them. If you want virtual calls, go nuts. But if you don't, you're not going to suffer from mandatory bloat - your object code will still look good, as if you had written it in a language without such mandatory frivolities.


Well, talking about optimizations and all of that, the reason to be against this is not that big sections of code go unoptimized; it's death by a thousand paper cuts. Every small object requires its own (extra) information besides its fields. Even if that's just one double word identifying a virtual function table, on a 16-byte object that's 25% more storage. A well-implemented object-oriented scheme would work nicely, but a naïve or overblown approach wouldn't.

In any case, modern CPUs and compilers can optimize aggressively, sure, and runtime speed is only as important as ease of development and reliability concerns.

On an older system, though, using a liberal OOP design in an operating system would be akin to creating structs that have arrays of function pointers associated with them, and having every function call routed through those pointers. Looking at the machine executing this code, you would reliably see jumps to similar codepoints in-between function calls, and a lot of wasted time or space. And I'm not sure anyone has mentioned this, but exception handling especially can add a lot of overhead.

Obviously, the benefits significantly outweighed the cons, especially in this case (NeXTSTEP and Objective-C), but I think that was helped substantially by the fact that projects Steve worked on always had much better computer hardware.


The GP is talking about what OO was like back at the time. What the heck are you complaining about?

There were very few fast OO languages, and those were only just emerging at the time. The most prominent examples are the languages the GP names, and they did not exactly live up to the promises of even contemporary OO research, much less modern research.


I share the poster's sentiment. OO is a mental box and a lot of people end up with overly narrow definitions or have trouble escaping the box thinking. You can apply the same ideas without language support. It is orthogonal to bytecodes, VMs, GC or even a "class" keyword. But a lot of people have very specific expectations and will have a hard time seeing this for what it is.


Actually I think this is the beautiful thing about Perl 5's OOP support.

It was clearly an afterthought and a bit of a hack, but it works, and it makes it transparent how everything works. I had been taught Java at university before that, and the whole OOP thing was a bit of a hidden mystery. Perl gave me a far better understanding of OOP.


> What the heck are you complaining about?

That mindset. Like I said. Did you not read my post?


We are talking about computing in 1985. A 25 MHz workstation would be roughly 100× slower than a modern computer, with about 1,000× less memory available. The raw speed difference isn't even the main issue in this context; adding 4 to 16 bytes of overhead to every data structure, or every object, could in the worst case mean twice as much memory usage.

Steve Jobs wasn't technically an engineer, so he wouldn't have known the specifics here, but at the time object-oriented programming had a stigma attached to it because programmers did know of these drawbacks. Using Objective-C was obviously a fundamental choice by the software engineers.

Writing C in an object oriented style is what a lot of programmers did then. But there's a difference between associating data with methods and supporting polymorphism. Hence the talk of virtual function tables.


The Mac OS introduced Object Pascal to the world and was later rewritten in a mix of C and C++, so I doubt that was the reason.


OO had been around since the late '60s. The reason it went away has less to do with compute power than people realize.


And indeed, even when Macs had faster hardware (early PowerPC), Objective-C crippled performance. Windows crashed a lot (like the Mac did before OS X), but it was fast because it had no isolation between components.


There was no Objective-C in Apple software in the days of "when Macs had faster hardware (early PowerPC)." The state of the art at OS X's release was the fourth-generation PowerPC, and there was only one more major iteration of the PowerPC used in Macs. I would call this "later PowerPC." "Early PowerPC" Macs couldn't run OS X.


But there was Object Pascal and C++.


Steve didn't understand OO at the time. Not in the least. Steve was completely non-technical, and he knew that UNIX was standard and didn't want us putting something new and untested in it. I can't tell you how many times I tried to explain OO concepts to him (lots of us tried, over and over.) So, to him, an OO toolkit was something that would make us too complex for developers looking for an easily understood UNIX system. He told us that he was relying on people's existing understanding of UNIX - that would allow them to adopt our system.


There are always casualties when managers make decisions and policies based on things they have absolutely no understanding of. At least in this particular case the programmers were able to do it anyway, as Steve had no way of knowing it was happening.


Just a guess here, but true object-orientation is really hard to get right. It's not like OS X was going to be built on Smalltalk. And if you're going to half-ass OO with an implementation like structs+functions, you're usually better off sticking with a functional system. This is especially true when it comes to high availability services, like operating systems.


http://c2.com/cgi/wiki?DefinitionsForOo

Given the above link, I deny there's any single useful definition for OO: It's been defined and redefined over and over again for decades, to the point people can't even agree on what languages are capable of writing software in an OO style.


Probably because of the hype. He was promoting OO and thinking about it as a competitive advantage over the competition. He probably didn't want such "powerful" (?) software to be reused by them.


For a while now I have been wondering what would have happened if Apple had taken the Blue Box-on-NuKernel path instead of the Rhapsody/Mac OS X path.


I don't know if all of this applies to OS X, but I'll answer from a Linux perspective (some of it is common to both, I'm sure). There are restrictions on your user that you can set up as root but that your user can't touch: for example firewall rules, process restrictions, security labels, etc.

What does this mean in practice? If you set up appropriate restrictions and run (for example) `curl http://...`, and curl gets exploited via a malformed response:

- it cannot write to system configuration

- it cannot write to user configuration

- it cannot spawn shell

- it cannot bind port to get remote commands

- even if it could, it cannot receive traffic, because it's configured as outbound-only

- while it can send data on the same connection, there's not much to send, because:

- it cannot read your browser saved items (password, history, etc.)

- it cannot read your ssh configuration / keys

...

So yes, superuser access is still important, because it can set up defences which only superuser can override. Not many systems use it so far, but the frameworks are available.
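To make the firewall items above concrete, here is a rough iptables sketch of the "outbound-only" idea; the account name `fetcher` and the port list are made up for illustration, and the file-access items in the list would come from something like SELinux or AppArmor rather than the firewall:

    # as root: default-deny inbound, allowing only loopback and replies to
    # connections we opened, so nothing an exploited process binds is reachable
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -P INPUT DROP
    # and only let the restricted account send web traffic at all
    iptables -A OUTPUT -m owner --uid-owner fetcher -p tcp -m multiport --dports 80,443 -j ACCEPT
    iptables -A OUTPUT -m owner --uid-owner fetcher -j REJECT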

I think the biggest impact currently in this area is the Chrome sandbox, but this one can be actually user-activated.


If an attacker successfully compromises my workstation and can masquerade as me, and the most useful thing they can think of doing is to pivot and rewrite my SELinux rules, as opposed to grabbing my password database (or just keylogging my banking sessions), then I will be a very happy man.

I think mobile actually leads the way in this area, with applications restricted in the actions they can take, regardless of which user they're running as.


My point was - before they can read the password database (or even find that a password database exists), they need to break out of the SELinux-enforced rules. It's not an end goal - it should be a prerequisite for any further data collection. Keylogging should not be possible in an exploited application either. Actually if you've got some healthy paranoia, you're maybe running QubesOS and your banking doesn't touch any other work environments.

Of course this is tricky in case of browsers. But that's also why I don't keep my password in the browser ;)


I've always wondered this. root on my laptop gives them... all the things I can reinstall from a vendor image.

My user, however, gives them my email, my photos, my tax returns, etc.

Separation still sounds like a good theory, but I do wonder if we're putting a lot of effort into defending the wrong user.


Exactly... sandboxing helps with this, not Unix-style permissions.

You still see a ton of comments saying things like "OS X and Linux are secure because they use unprivileged accounts". But the security on UNIX is primarily to secure user alice from something that user bob does. It doesn't secure user alice from something that alice does. If alice gets infected with a cryptolocker trojan, it can't touch bob's files, but it can encrypt all of hers.

On a single user system that only has an alice account, what is needed is to secure 'tax program' from something that 'web browser' does.


It's simple to create multiple users on OS X. It's quick to switch between them.

E.g. I have a separate user that I switch to just to contact a handful of credit card / bank sites. Same with tax returns. I don't do them as my normal user. They are not readable from my main account. But I don't do enough. E.g. email and photos are still accessible.

Fortunately I lead an uninteresting life. The main threat to/from my photos is accidental deletion, filesystem corruption, drive crashes, etc. TMZ isn't going to pay anyone to steal them!

The problems are not simple to solve. What I've done is simple half-measures. It's very hard to do more than that.

Edit: and, of course, my normal user is "Standard", not "Admin". I switch to the admin user whenever I do sys admin. I never elevate privileges from my normal account.


>It's simple to create multiple users on OS X. It's quick to switch between them.

It's also something 95% of OS X users don't do or care about.


Unfortunately, and this goes for pretty much every UNIX out there (with hardened (grsec) Linux as a possible exception), not just OS X: once someone has a local user on your machine, it's trivial to get root. On OS X you don't even need a local privilege escalation exploit; there are so many other ways in, based on the way the subsystems interact.

If you want robust security, you will need to forget about OS X/BSD/Linux and look at Qubes OS. It's the best we have right now and nothing else comes close. Alternatively, compartmentalize and segregate at the hardware level (virtual machines all have plenty of guest-to-host escape bugs) and accept the fact that you will get owned.


Can you give examples of how, on Unix systems, you can get root access from a local user account?



Maybe we should have easy-to-use sandbox commands controlled by the user. Currently software must be deliberately designed and built with sandbox restrictions in mind (e.g. Chrome). It would be better if we could conveniently sandbox arbitrary programs.


CLI is one thing, but for GUI apps, OS X has supported sandboxing for years, enforcing it on all Mac App Store apps against developer outcry.

On Linux, GNOME is working on something that looks quite a bit like how OS X did it:

https://wiki.gnome.org/Projects/SandboxedApps

So I think we're actually moving in the right direction, albeit extremely slowly and imperfectly.


The problem with Mac App Store sandboxing is that it is not configurable, and therefore many apps are simply unusable, or extremely annoying, when sandboxed.


Don't we have that? With SELinux, sandboxing is literally a command away (and the command is, aptly, named `sandbox`).

There are certain inconveniences when it comes to sandboxing applications, especially applications that require an X server, which is why sandboxing is not done by default on any popular Linux OS.
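Roughly, per sandbox(8) from policycoreutils: the command confines the process to the sandbox_t domain (no network access by default), and -M gives it a throwaway home and /tmp. The script name below is just a placeholder:

    sandbox -M ./untrusted-script.sh          # no network, private home and /tmp
    sandbox -X -M -t sandbox_web_t firefox    # GUI app in a nested X server, web ports allowed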


Why does it need to be user controlled?

Someone creates a sandbox profile for a program and then distributes it to others.


This is my view as well; not sure why it was downvoted. If I have access to ~/.mozilla and ~/.chrome in userspace, I probably have access to your email password.


> When do we stop pretending our MacBooks are multi-user UNIX mainframes?

This is actually a good point, even if I disagree with your minimization of privsep's importance.


I think this is completely wrong -- look at UAC on Windows; that works very well. Even if the machine is completely single-user, you don't want any program you run to be able to modify system files, etc. It is useful to have to manually grant elevated access when a program needs it, as otherwise any downloaded code could install spyware, etc. On Windows (and on Linux), bypassing these mechanisms is certainly possible, but not trivial.

And that is good for the user.


I hope we can fix this with sandboxing (maybe with containers, but maybe just per-process).

This is going to be the most exciting thing about SGX, which is allegedly coming out in a month, after many, many years of anticipation.


Take it a step further: why is there even still a root user? Separation of privileges doesn't require the user abstraction.


For the same reason we did 25 years ago. If I somehow get network access to your machine I'll blow out a stack and have root. Yes it seems difficult but it's easy to get wifi passwords and people use the same passwords and ....


> When do we stop pretending our MacBooks are multi-user UNIX mainframes?

My computers at home have multiple users, because each person has their own preferences and sets up their desktop and browser the way they like.

My computers at work, laptops and otherwise, have other devs' accounts on them, so they can ssh in and grab stuff, or play with setups I have on my machine. It's a particularly easy way for me to distribute VPN keys... and of course, users can't read each other's private files. If I'm away and someone needs a computer to work, they can use my machine without each of us messing with each other's setup.

Multiple-meatspace-user machines certainly aren't as popular these days, but they're not as dead-and-buried as you're implying. Just because mainframes are no longer the big cheese doesn't mean that there's no call for multiuser machines.


> if x happens, we should stop caring about y

That kind of argumentation usually ends badly, especially when it's about security.


>What this means is that if you use sudo to give yourself root privileges, your sudo authentication is not bound to the TTY in which you ran sudo. It applies to any process you (or malware running as you) start after authentication. The way Apple ships sudo it is, essentially, a giant privilege escalation vulnerability.

But even if you enable TTY tickets, a malicious process on your system can still elevate itself by patching the shell (in memory, using /proc/<pid>/mem) to inject commands alongside the original sudo command. For example:

User types:

    sudo apt-get update
shell executes:

    sudo bash -c "apt-get update; evil.sh"


> a malicious process on your system can still elevate itself by patching the shell (in memory, using /proc/<pid>/mem)

On most desktop operating systems, that functionality has been disabled. Ubuntu, Fedora, etc. all have Yama, which disables the ability of one process to access another process's memory via ptrace or ptrace-equivalent mechanisms (like /proc/*/mem) without special permissions. OS X has no procfs, but the equivalent functionality, using task ports, has had similar restrictions since the 10.5 days -- see `man taskgated`. You can't call task_for_pid() without having a particular code-signing entitlement or having administrator privileges.

This is why Xcode prompts you for administrator privileges when you first run it; gdb / lldb needs to be able to trace other processes and access their memory, and a normal process can't do that.
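On Linux you can inspect (or tighten) the Yama setting directly; the value meanings below are from the kernel's Yama documentation, and 1 is the usual Ubuntu default:

    cat /proc/sys/kernel/yama/ptrace_scope
    # 0 = classic ptrace rules, 1 = descendants only,
    # 2 = admin (CAP_SYS_PTRACE) only, 3 = no attaching at all
    sudo sysctl -w kernel.yama.ptrace_scope=2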


Or simply append `alias sudo="sudo evil.sh; sudo"` to .bash_profile.

Although, OS X is a bit like Windows. A bad program running in userspace can essentially ruin your system as much as a program running as root. Privilege escalation is not that big a deal when you can `shred -u ~/` without root.


>Although, OS X is a bit like Windows. A bad program running in userspace can essentially ruin your system as much as a program running as root.

Erm, OS X has sandboxing. So, no. Except if you use unsigned third party stuff.


I think the only sandboxed apps are the ones from the App Store; signing is orthogonal to sandboxing. Even signed apps that you get from anywhere other than the App Store aren't sandboxed.


You can sandbox non-App Store apps the same way as App Store ones, you're just not forced to. And so developers don't.


Which is pretty much everyone reading HN.


In which case it's no different to Linux, OpenBSD or whatever.

As long as it's your account files that matter, and you're running an app un-sandboxed, then it has access to them.

Nothing Windows or OS X particular about it.

As for "ruining your system": no, without root privileges it cannot, in either OS X or Windows. Of course it can if there's a privilege escalation bug, but there are tons of those for GNU/GNOME/KDE packages too.


    ...like Windows. A bad program running in userspace can essentially
    ruin your system as much as a program running as root.
How so?

Are you talking about pre-Vista - i.e. versions of Windows in which the user created during first run was an Administrator, and UAC didn't exist - though you could certainly create non-administrator accounts? Vista came out eight years ago...

Or do you mean pre-NT, when there was no separation of any kind? NT came out 22 years ago...


I mean that most of the "important stuff" is running in user space. You can shut down the computer, spread viruses, and delete the user's most important files. The only administrator-owned files on a typical Windows installation happen to be the replaceable ones. The ones modifiable by a user are the custom, personal, and sometimes irreplaceable files.

This is mostly true on personal Linux machines as well. I'm just expressing my opinion that you don't need root access to pwn someone's machine.


Why would an unprivileged app be allowed to make such changes?


It's not (see my other reply). As for why it used to be allowed: the traditional UNIX security boundary has been the user account. Any process running as user id 1000 has the same permissions as any other process running as user id 1000, and so it's permitted to mess with those processes. It can't mess with user id 1001 or (of course) 0. But Minecraft running as uid 1000, Safari running as uid 1000, and bash running as uid 1000 are all considered the same entity.

In this model, the real problem here is sudo, which bridges a uid-1000 session to a uid-0 one. If you're administering a security-conscious, multi-user UNIX system, you should not be using sudo from your regular account. Either make a separate account and log in as that, or log in as root directly. (But if you want your Minecraft and your bash to not be able to interact with each other, the traditional approach doesn't have an answer for you.)

Of course, most UNIX deployments today are not multi-user remote-access systems, the way they were 30+ years ago when this policy was set. Most desktop UNIX deployments are effectively single-user systems, and the UNIX isolation model doesn't make much sense there. As a stopgap measure, direct access to another process's memory has been disallowed, but there are other ways for processes that share UID to mess with each other.

However, the most common single-user UNIX systems today are smartphones, either Android (Linux) or iOS (BSD + stuff). Android takes advantage of the UNIX model by assigning each app its own user ID. Angry Birds running as uid 1000 cannot mess with the Chase mobile app running as uid 1001. iOS technically runs all apps with the same uid, but applies extensive kernel-level sandboxing to limit the ways that apps can interact with the rest of the system, which essentially eliminates the rest of the leakiness.

There's a Chromium security document that expands on the traditional "1-part principal" model (identity of the human) and how we need to get to "2-part principals" (identity of the human + identity of the app), which I hope that someone will figure out how to extend to desktop UNIX someday:

https://www.chromium.org/Home/chromium-security/prefer-secur...


Does OS X have a procfs?


OS X doesn't have a procfs. I'd presume there's a syscall and/or debug feature that offers similar functionality.


Funny, I always thought this was supposed to be a feature. It remembered your authentication for a few minutes after using sudo. I assumed it was part of the OSX auth system and would forget if you locked the screen.


Enabling `tty_tickets` still remembers your authentication for a few minutes, but it restricts use to the terminal it was run on.
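For anyone wanting to flip that on, it's a one-line addition to sudoers; edit the file with `sudo visudo` so a syntax error can't lock you out:

    Defaults tty_tickets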


Yeah, I was thinking the same thing; that's probably why they ship those defaults: better UX.


Don't call a security flaw a UX feature please.


Passwords are both security flaws and UX features -- they're inherently flawed, cannot be fixed, and are the only authentication system most people can use successfully.

Security is always in tension with usability.


I think the flaw marcoamorales intended to point out is how sudo doesn't always re-prompt for a password.


It is a feature that it caches your authentication for a few minutes. It's not a feature that other processes running as your user, such as something that manages to do a sandbox escape in your browser, are also able to acquire credentials.


And this is a good reason to lock sudo down to a single application that is allowed to be run. In my case I only allow su for my user. Now even if an attacker were to try and use sudo they would also have to know that they can only use su, and most automated attacks will fail.
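A sketch of what such a sudoers entry might look like (the username is hypothetical, and the path to su can differ by system):

    # alice may use sudo only to run su, nothing else
    alice ALL = (root) /usr/bin/su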


They'd just have to "sudo -l". No logging of executed commands either, that way.


Apple is going through the same awkward phase as Microsoft did with WinXP. Except unlike what Microsoft did, Apple has not started a decisive internal process to change things for the better.


How do you know this to be the case? Just curious.


Is there some kind of advantage to this option not being set by default?


Sure, it makes things more convenient. Convenience and security are always a trade-off.


The parent comment was probably asking about some specific advantages.


Convenience is a specific advantage. It also is a great boon when you are used to working in multiple terminals or are running a lot of remote sudo commands over ssh (say testing an ansible setup).


If I set this option, I wonder if OS X third-party app installers will be affected? They often ask for the admin password, presumably to install software in directories owned by the system.


The "advantage" is the convenience mentioned in the (short, succinct) article: I can enter my password for sudo in one terminal window, and escape having to type it again in the other terminal window(s).


I can imagine that some scripts which go deep into the system, e.g. driver installation on OS X, might not work anymore after you've patched OS X yourself.


Are you suggesting that changing documented settings for a system component is "patching OS X yourself"?

Anecdotally, I ran OS X for years with timestamp_timeout set to 0 so that sudo always prompted for a password. This broke absolutely nothing.
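For reference, that's also a single sudoers line (set via visudo):

    Defaults timestamp_timeout=0    # prompt for a password on every sudo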


Accidentally downvoted you (instead of upvoting).


Beware of sudo, always, everywhere. Seriously.



