This was how I knew OS X was a real Un*x... I was messing around with... I dunno... brew's predecessor? A package manager that installed Linux apps on OS X... everything was based under /opt, so it was easy to just nuke and restart to clean up the cruft.
typed sudo rm -rf / opt
And let it rip.
Then got a weird error message, like a library couldn't load... then the icons in the Dock went away, then the only thing still working properly was the web browser windows that were already fully loaded in memory.
Then noticed the SPACE between '/' and 'opt'.
rm -rf was rm -rf-ing from root.
Valuable lesson learned that day.
My "best" rm -rf story was when I created a system user account on a production system (router) to test something. What I didn't know is that for system users their home directory is set to / by default; this will become important later.
I finished what I was doing and wanted to clean up after, so after undoing my changes I ran userdel -r on that user (-r removes its home directory). The command took way longer than expected; at some point I terminated it to investigate, assuming something unrelated was going on (high CPU or I/O contention).
At this point you've probably figured out what happened. By the time I stopped the command the damage was done and a good part of the system had been nuked. Surprisingly (and thankfully, I guess) the actual firewalling and routing is done at kernel level, so it continued doing its job totally fine despite userspace being nothing more than a smoldering wasteland.
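The cheap guard I've used since (a sketch; `tempuser` stands in for whatever account you created):

    # See where the home directory actually points before -r deletes it;
    # field 6 of the passwd entry is the home directory.
    getent passwd tempuser | cut -d: -f6

    # Only run `userdel -r tempuser` if that prints something sane,
    # not / or another shared path.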
Back in 2002 I joined a software company where we each had personal folders on a shared 2TB volume. 2TB was a lot back then, but it was needed because we were working with gene and protein sequences, so you end up with quite a bit of data.
People used these folders for builds of our systems which could be accessed from any of our various supported environments (basically every flavour of UNIX under the sun - no pun intended there). Lots of them would also use them for development work, since many people simply remoted into a convenient UNIX box and fired up emacs or vi. I was one of the few people using my local machine for development because I was working on a Java application, and running an IDE locally was simply very convenient.
We also had our own CI system that built everything for every supported system overnight, and ran huge suites of automated tests, which also used this 2TB volume.
The key word here is shared. I had my own folder but I could do `cd ..` and see everybody else's folders, and then go poking around in them with full read/write access.
You can see where this is going, can't you?
A handful of weeks before I joined the company, somebody had updated a script in a test case (I forget whether it was a pre or post step) that did some cleanup. The clean-up was basically an `rm -fR *` in the current directory. What they hadn't spotted before committing the script is that they'd `cd`ed up one or two directories too far, meaning they ran the `rm -fR *` in the root folder of the volume.
Everything was gone. Nobody could get anything done, and it took them a day or two to restore the volume from backups (which, fortunately, they had).
Some people lost a day or two's work, so fortunately it wasn't a business-ending event or anything like that. More a cautionary tale and an object lesson about the dangers of running commands like this with unrestricted access to shared volumes.
The day I started work, we had the ability to browse and restore backups on our Solaris system via a Windows GUI. It was useful to retrieve archived data and job state like old logs.
Within a month another graduate developer had accidentally restored the whole FS. We got to go home early, but had browse-only access from then on.
Of course, we still retained write access to the whole FS because prod and dev were just different root directories and our deployment process was "cp" if you had CLI skills, or copy/paste in Windows Explorer if you like GUIs. We got an rm -rf runaway a little later, though only on home directories IIRC. The early 2000s were wild.
Did the same thing, wiped out a server (it was running stuff like Nagios, nothing 'production').
No big deal, restore from backup, right?
It turned out nobody had connected the backup drive to the new server. It was still connected to the server it replaced, which was still in the rack, but turned off.
macOS's BSD roots are fascinating.
You can still run 'leave +0005' to remind yourself to leave your Terminal session after 5 minutes, to avoid using too much mainframe time.
How long ago was this? AFAIK the `rm` command in almost every Linux distro these days will NOT let you delete `/` unless you add the `--no-preserve-root` flag:
# rm -rf /
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe
There are still plenty of ways to screw up an rm, like `rm -rf /*` or deleting your entire home directory, but a stray space after the initial / was apparently common enough that they eventually put a big failsafe for it in the GNU version.
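For the curious, the failsafe only guards the literal `/` operand (don't run these anywhere you care about):

    # NOT caught: the shell expands the glob before rm runs,
    # so rm sees /bin /boot /etc ... and never a bare "/".
    rm -rf /*

    # NOT caught either: any path *under* / is fair game.
    rm -rf /opt /etc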
It's `u` (treat unset variables as an error); I cover this by starting every script with `set -eEuo pipefail` (bash only; `-E` and `pipefail` are not POSIX). All of these should be the default IMO; other than backwards compatibility, I don't understand why you wouldn't want them.
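Annotated, for anyone who hasn't memorized the flags (the comments are just my gloss on standard bash behaviour):

    #!/usr/bin/env bash
    set -e          # exit as soon as any command fails
    set -E          # ERR traps also fire inside functions and subshells (bash-only)
    set -u          # expanding an unset variable is an error instead of ""
    set -o pipefail # a pipeline fails if any stage fails, not just the last one

    # With -u, a typo aborts the script right here...
    dir="$BULD_DIR"     # unbound variable -> bash exits with an error
    # ...instead of $dir silently becoming "" and this becoming rm -rf "/tmp":
    rm -rf "$dir/tmp"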
You can also prevent this by never writing bare $VAR, and instead always using ${VAR:?} to exit with an error if VAR is unset (or one of the other expansion forms to provide a default).
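A sketch, with a hypothetical STAGING_DIR:

    # ${STAGING_DIR:?} makes the shell abort with an error if the variable
    # is unset or empty, so this can never degenerate into rm -rf /*
    rm -rf "${STAGING_DIR:?}"/*

    # Or fall back to a default instead of failing:
    log_dir="${LOG_DIR:-/var/log/myapp}"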
I think on Linux it will refuse to run by default and you have to explicitly disable that protection. Of course, running rm -rf /opt /* by accident will still delete everything.
Seems to be specific to GNU rm then; the Fedora Linux on that page refuses rm -rf / unless you add the --no-preserve-root flag, while the other two use a BusyBox-based rm.
I did something similar working at the South Pole about 20-25 years ago:
rm -rf * .some-extension
Missed the extra space, hit Enter, and lost several days of work: the shell parsed that as two arguments, `*` (everything in the directory) plus `.some-extension`, instead of the intended `*.some-extension`. Much smaller blast radius, but still an expensive mistake for a trip like that. No, it wasn't in source code control - a few of us might have been using CVS at that time (this was before Git's time), but apparently I wasn't, or I wouldn't remember the episode decades later.
Would the zfs command still even be installed on a system? I'd imagine that `rm -rf /` would remove /usr/bin and /usr/local/bin (or wherever the command was installed)
I find that making sure -v always runs is pretty helpful. The one or two times I've noticed what was being printed and interrupted it immediately were well worth the cost of a polluted console the rest of the time.
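The lazy way to get that (an alias only affects interactive shells, so scripts are untouched):

    # In ~/.bashrc
    alias rm='rm -v'    # every interactive rm narrates each file it deletes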
You might be thinking of the GNU version of rm (the version on virtually any modern Linux). There you need `rm -rf --no-preserve-root /` to delete everything, which prevents GP's typo (`rm -rf /*` would still work, though).
A fun experiment is deliberately running sudo rm -rf / on a decommissioned machine or throwaway virtual machine from a shell on it and then seeing what you still have access to. Bash has a surprising number of builtins and there are still things floating around under /proc.
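A taste of what's still possible with nothing but bash builtins once /bin is gone (assuming the shell itself stays resident in memory):

    # ls is gone, but pathname expansion lives in the shell:
    echo /*           # what's left at the root
    echo /proc/*      # the kernel's view is still populated

    # cat is gone, but read plus a redirect does the job:
    while IFS= read -r line; do echo "$line"; done < /proc/mounts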