Heh. We used syslog at one place, with it configured to push logs into ELK. The ingestion into ELK broke … which caused syslog to start logging that it couldn't forward logs. Now that might seem like screaming into a void, but that log went to local disk, and syslog retried as fast as the disk would allow, so almost instantly every machine in the fleet started filling up its disks with logs.
It's wild how easy it is to misconfigure (or not configure) logrotate and have a log file fill up the disk. Out of memory and out of disk are the two error cases that have caused the most pain in my career. I think most people who started with docker in the early days (long before there was a docker system prune) had this happen: old docker containers and images filled up the disk and wreaked havoc when they least expected it.
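For what it's worth, the fix that has saved me is always bounding rotation by size, not just by time. A minimal sketch of such a stanza (the app name, path, and limits here are made up, adjust for your own service):

    # /etc/logrotate.d/myapp -- hypothetical app; path and limits are examples
    /var/log/myapp/*.log {
        maxsize 100M     # rotate as soon as the file exceeds 100 MB, even before the daily run
        daily            # otherwise rotate once a day
        rotate 7         # keep at most 7 rotated files, delete the oldest beyond that
        compress         # gzip rotated logs
        delaycompress    # leave the newest rotation uncompressed for easy tailing
        missingok        # don't error if the log file is absent
        notifempty       # skip rotation when the file is empty
        copytruncate     # truncate in place so the app doesn't need a reload signal
    }

Without maxsize (or size), a log that suddenly goes from quiet to screaming can still fill the disk between scheduled rotations.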
I used to joke that if VMware engineers couldn't figure out the logrotate configuration for their own product for a few releases, what chance do I have?
(You can guess how we noticed the problem…)
Also logrotate. (And bounded by size.)