If someone can make these kinds of mistakes, IMHO it is usually not their fault and instead is the fault of the systems that failed to prevent it from happening.
It is still their fault -- they did something they shouldn't have done. But the mistake exposes a bigger problem in the underlying infra, which is a good thing.
I think it's harmful to ascribe fault as a binary thing.
Assuming, of course, that this wasn't some deliberate act (because that would be weird):
The person who ultimately pressed the button that ran the code that sent this email bears only a portion of the fault. Maybe that person even wrote and deployed the code.
There are many other deficient processes that led to this even being possible:

- Why did test code run in a place that had access to production credentials?
- What caused the code to run in the first place? Was it accidentally triggered by some other bug, or deliberately run by somebody who didn't realize they were in production? If the latter, why are their systems built in a way that makes it hard to tell you're in production?
- Why is the system architected such that large quantities of email can be sent inadvertently, without some sort of approval? You could always delay large batches and send an alert so a human on-call could be in the loop to detect and stop such emails (a sketch of that guardrail follows this list).
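For illustration, here's a minimal sketch of what that delay-and-approve guardrail could look like. Everything here is hypothetical: `deliver`, `alert_oncall`, and the threshold are stand-ins for whatever transport, paging hook, and limits the real system would use.

```python
# Sketch of a send-side guardrail: small batches go out immediately,
# anything over the threshold is held until a human approves it.
import time
from dataclasses import dataclass, field

BATCH_APPROVAL_THRESHOLD = 1_000  # assumed limit; tune per system


@dataclass
class PendingBatch:
    batch_id: str
    messages: list
    queued_at: float = field(default_factory=time.time)


pending: dict[str, PendingBatch] = {}


def deliver(message) -> None:
    ...  # hypothetical transport (SMTP, provider API, etc.)


def alert_oncall(text: str) -> None:
    ...  # hypothetical pager/chat hook


def send_batch(batch_id: str, messages: list) -> str:
    """Send small batches immediately; hold large ones for approval."""
    if len(messages) <= BATCH_APPROVAL_THRESHOLD:
        for m in messages:
            deliver(m)
        return "sent"
    # Too big to send silently: park it and page a human.
    pending[batch_id] = PendingBatch(batch_id, messages)
    alert_oncall(f"Batch {batch_id} ({len(messages)} messages) held for approval")
    return "held"


def approve_batch(batch_id: str) -> None:
    """Called by the on-call human after reviewing a held batch."""
    batch = pending.pop(batch_id)
    for m in batch.messages:
        deliver(m)
```

The point isn't this particular code; it's that the expensive, irreversible action has a choke point where a human can intervene before the blast radius grows.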
I've definitely seen incidents where the engineers at the keyboard that day weren't at fault at all; they were doing exactly what was asked of them, and systemic issues caused something like this. You can blame poor tech hygiene across the whole team, or a lack of foresight by the manager, but most of that is 20/20 hindsight.
This is why blameless postmortems are a good thing: humans are simply awful at correcting for hindsight bias.
The best thing to do is just figure out how to prevent it in the future.