Eh, sort of. I do DevOps/infrastructure exclusively, and while the tedious stuff is automated away, I've seen too much shit break on the other side of the fence to trust it fully (the AWS API throwing errors exactly when you absolutely need it to work, etc.).
If something is broken, you should have enough automation to get it back into a known-good state without a human involved, while maintaining data consistency. Everything else should be automated too, but you should still be around to babysit it while it's going through the motions.
If you think you can automate everything and always trust it to work flawlessly, you haven't been around long enough for the edge cases.
> If you think you can automate everything and always trust it to work flawlessly, you haven't been around long enough for the edge cases.
And then you have some "clever" coder coming along with a fix for said edge case (which invariably creates new edge cases just outside the domain of the fix).
That's a fallacious argument, really. It's not as if admins can't properly script environments and changes. Monitoring, updating, and scripting of changes happened long before the term "devops" came into existence.
The "just add more to the pool" solution seems akin to the "solution" in the strip above.
And I have seen similar arguments for when a daemon or server goes down. Hook it up to a script that reboots it automatically (or fires up a new instance over and over and over) and go back to coding.
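The "over and over and over" failure mode is easy to sketch. Here's a minimal, hypothetical supervisor loop in Python (the `run` callable stands in for starting the daemon; names and numbers are made up for illustration) showing the one thing those fire-and-forget restart scripts usually lack: a cap that stops the thrashing and hands the problem back to a human.

```python
import time

def supervise(run, max_restarts=3, base_delay=0.01):
    """Restart loop with a cap and exponential backoff.

    Without the cap this is the 'over and over and over' anti-pattern:
    a crash-looping service gets restarted forever while the real fault
    (bad config, full disk, dead dependency) goes unfixed.
    `run` is a hypothetical callable that returns True on clean exit.
    """
    attempts = 0
    while attempts <= max_restarts:          # initial run + max_restarts retries
        if run():                            # exited cleanly: we're done
            return attempts
        attempts += 1
        time.sleep(base_delay * (2 ** attempts))  # back off between restarts
    # Cap hit: stop thrashing and page a human instead of looping forever.
    raise RuntimeError(f"gave up after {max_restarts} restarts")
```

A crash-looping `run` gets one initial attempt plus `max_restarts` retries, then the loop gives up and raises, which is the point: the restart script buys you time, it doesn't replace the babysitter.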