This was how I knew OSX was a real Un*x... I was messing around with... I dunno... brew's predecessor? A package manager that installed Linux apps on OSX... everything was based under /opt, so it was easy to just nuke and restart to clean up the cruft.

typed sudo rm -rf / opt

And let it rip.

Then I got a weird error message, like a library couldn't load... then the icons in the Dock went away, then the only things still working properly were the web browser windows that were still fully loaded in memory.

Then I noticed the SPACE between '/' and 'opt'.

rm -rf was rm -rf-ing from root.

Valuable lesson learned that day.


My "best" rm -rf story was when I created a system user account on a production system (a router) to test something. What I didn't know is that a system user's home directory is set to / by default; this will become important later.

I finished what I was doing and wanted to clean up after myself, so after undoing my changes I ran userdel -r on that user (-r removes its home directory). The command took way longer than expected, so at some point I terminated it to investigate, assuming something unrelated was going on (high CPU or IO contention).

At this point you've probably figured out what happened. By the time I stopped the command, the damage was done and a good part of the system had been nuked. Surprisingly (and thankfully, I guess) the actual firewalling and routing is done at the kernel level, so it continued doing its job totally fine despite userspace being nothing more than a smoldering wasteland.


Back in 2002 I joined a software company where we each had personal folders on a shared 2TB volume. 2TB was a lot back then, but it was needed because we were working with gene and protein sequences, so you end up with quite a bit of data.

People used these folders for builds of our systems, which could be accessed from any of our various supported environments (basically every flavour of UNIX under the sun - no pun intended there). Lots of people would also use them for development work, since many simply remoted into a convenient UNIX box and fired up emacs or vi. I was one of the few using my local machine for development, because I was working on a Java application and running an IDE locally was simply very convenient.

We also had our own CI system that built everything for every supported system overnight, and ran huge suites of automated tests, which also used this 2TB volume.

The key word here is shared. I had my own folder but I could do `cd ..` and see everybody else's folders, and then go poking around in them with full read/write access.

You can see where this is going, can't you?

A handful of weeks before I joined the company, somebody had updated a script in a test case (I forget whether it was a pre or post) that did some cleanup. The clean-up was basically an `rm -fR *` in the current directory. What they hadn't spotted before committing the script is that they'd `cd`ed up one or two directories too far, meaning that they ran an `rm -fR *` in the root folder of the volume.

Everything was gone. Nobody could get anything done, and it took them a day or two to restore the volume from backups (which, fortunately, they had).

Some people lost a day or two's work, so fortunately it wasn't a business-ending event or anything like that. More a cautionary tale and an object lesson about the dangers of running commands like this with unrestricted access to volumes.


The day I started work, we had the ability to browse and restore backups on our Solaris system via a Windows GUI. It was useful to retrieve archived data and job state like old logs.

Within a month another graduate developer had accidentally restored the whole FS. We got to go home early, but had browse-only access from then on.

Of course, we still retained write access to the whole FS because prod and dev were just different root directories and our deployment process was "cp" if you had CLI skills, or copy/paste in Windows Explorer if you like GUIs. We got an rm -rf runaway a little later, though only on home directories IIRC. The early 2000s were wild.


> The early 2000s were wild.

They certainly were. In my prior job my Windows 98SE development PC had a public IP address.


Ouch. Even with shared storage, access control and permissions should have saved the day there!


I bet they discussed those kinds of options afterwards.


Did the same thing once and wiped out a server (it was running stuff like Nagios, nothing 'production').

No big deal, restore from backup, right?

It turned out nobody had connected the backup drive to the new server. It was still connected to the server it replaced, which was still in the rack, but turned off.


Hey, at least your backup server wasn't SSHFS-mounted on your server and wiped as well!


MacOS's BSD roots are fascinating. You can still run 'leave +0005' to remind yourself to leave your Terminal session after 5 minutes, to avoid using too much mainframe time.


How long ago was this? AFAIK the `rm` command in almost every Linux distro these days will NOT let you delete `/` unless you add the `--no-preserve-root` parameter.


I think the key takeaway in the parent comment is:

"This was how I knew OSX was a real Un*x"


Will it still start recursing and deleting most of the stuff owned by your user? That's arguably more important than the system files!


No:

    # rm -rf /
    rm: it is dangerous to operate recursively on '/'
    rm: use --no-preserve-root to override this failsafe
There are still lots of ways to screw up an rm, like "rm -rf /*" or deleting your entire home directory, but a space after the initial / was apparently common enough that they eventually put a big failsafe for it in the GNU version.


More serious than users fatfingering the command were scripts with code like

  rm -rf /$MYVAR
If you don't set $MYVAR and don't set bash to error on unset variables, you are in for trouble.


I don't recall which one it is (u for unset, perhaps?) but I cover this by starting every bash script with `set -eEuo pipefail` (E and pipefail are not POSIX, so this is bash-specific). All of them should be the default IMO; other than backwards compatibility, I don't understand why you wouldn't want them.
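For reference, a minimal sketch of what each of those flags does (per the bash manual):

    #!/usr/bin/env bash
    # -e: exit immediately if any command fails
    # -E: make ERR traps fire inside functions and subshells too
    # -u: treat expansion of an unset variable as an error
    # -o pipefail: a pipeline fails if any stage fails, not just the last one
    set -eEuo pipefail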


It's -u.

You can also prevent this by never using $VAR, and instead always using ${VAR:?} to exit with an error if VAR is unset (or use one of the other expansion forms to provide a default).
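A quick sketch, reusing $MYVAR from upthread (the default path is just a placeholder):

    # Aborts with "MYVAR: parameter null or not set" instead of expanding to /
    rm -rf "/${MYVAR:?}"
    # Or substitute a default when MYVAR is unset or empty
    rm -rf "/${MYVAR:-some/harmless/default}"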


A lesson I learnt several years ago when I discovered that mktemp behaves differently on macOS versus the GNU version in Linux.

From that day onward I always make sure the first line of my bash scripts contains at least "set -e".
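One portable pattern, as a sketch (passing an explicit template with trailing Xs works with both the BSD/macOS and GNU versions; "myscript" is just a placeholder name):

    # An explicit template sidesteps the BSD vs. GNU differences in defaults
    tmpfile=$(mktemp "${TMPDIR:-/tmp}/myscript.XXXXXX") || exit 1
    # Clean up the temp file when the script exits
    trap 'rm -f "$tmpfile"' EXIT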


I don't think OSX uses GNU tools unless you install them through brew.


macOS doesn't use the GNU versions, though.


I've done something like this, dunno, may have been around 2000.


I think on Linux it will refuse to run by default and you have to explicitly disable that protection. Of course, running rm -rf /opt /* by accident will still delete everything.


Depends on the distribution I think. The ones I tried here just started deleting things right away: https://bellard.org/jslinux/


Seems to be specific to GNU rm then: the Fedora Linux on that page refuses rm -rf / unless you add the --no-preserve-root flag, while the other two have a BusyBox-based rm.


If you want to run rm -rf I encourage you to run rm -rf *within Gitpod* as many times as you want!

https://www.github.com/gitpod-io/rm-rf


https://archive.md/5lmc9

rm is disallowed from removing . and .. under POSIX, so, for the same reason, / needs to be treated specially too.
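You can see the first half of that rule in action (GNU rm's wording, approximately):

    $ rm -rf .
    rm: refusing to remove '.' or '..' directory: skipping '.'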


I did something similar working at the South Pole about 20-25 years ago:

rm -rf * .some-extension

Missed the extra space, hit Enter, lost several days of work. Much smaller blast radius, but still an expensive mistake for a trip like that. No, it wasn't in source code control - a few of us might have been using CVS at that time (this was before Git's time), but apparently I wasn't, or I wouldn't remember the episode decades later.


That's why, when using potentially destructive shell commands, I always make sure to autocomplete the path using TAB.


sudo zfs rollback mypc/ROOT/ubuntu@tuesday


Would the zfs command even still be installed on such a system? I'd imagine that `rm -rf /` would remove /usr/bin and /usr/local/bin (or wherever the command was installed).


I find that making sure -v is always passed is pretty helpful. The one or two times I've noticed what was being printed and immediately interrupted were well worth the cost of my console being polluted in all the other instances.
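One way to set that up, assuming an interactive bash shell:

    # In ~/.bashrc: make rm print each file as it deletes it
    alias rm='rm -v'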


MacPorts, most likely! Even after Brew took over I still ran it for ages.


I believe Apple forked rm to provide protection for this case?

Also, if you're willing to lean into npm a bit, there are tools that add a layer of protection over rm, such as https://github.com/sindresorhus/trash-cli


You might be thinking of the GNU version of rm (the version on any modern linux). There you need `rm -rf --no-preserve-root /` to delete everything, which prevents GP's typo (`rm -rf /*` might still work though).


Can confirm the /* notation still works. It comes down to how things are interpreted: the shell expands the glob before rm ever sees its argv.
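A safe way to see this for yourself is to put echo in front, since the glob expands identically either way (the listing here is illustrative; output varies by system):

    $ echo rm -rf /*
    rm -rf /Applications /Library /System /Users /bin /etc ...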


Not under Catalina. Don't ask me how I know that.


I just tried it in Big Sur and it wouldn't allow it.


Nah, it totally works. Y'all should try it.


I just did it and it seems fi


Did it give you the 'it is dangerous' message or a permissions error? If it's the latter, you probably got stopped by System Integrity Protection.


You can't mkdir there either.


A fun experiment is deliberately running sudo rm -rf / on a decommissioned machine or a throwaway virtual machine from a shell on it, and then seeing what you still have access to. Bash has a surprising number of builtins, and there are still things floating around under /proc.
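A taste of what keeps working, since these are all shell builtins and need no external binaries (a sketch):

    # ls is gone, but globbing and echo are built into bash
    echo /proc/*
    # cat is gone, but the shell can still read files itself
    while read -r line; do echo "$line"; done < /proc/version
    # check which names are builtins vs. (now missing) external binaries
    type echo read ls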


Always specify in order "sudo rm -rf opt /" so that opt is fully deleted before the / deletion causes too many failures?

