Telling developers that 100% test coverage is an anti-pattern is like telling a room full of Americans that running too much is harmful to your health: true, but not helpful.
But then there's that one guy who tells you that you need to run 12 miles per day, otherwise it's not worth existing. False, and not helpful.
The reply was great; it acknowledged the value of test coverage metrics while also recognizing the danger of treating the number as a goal in itself and pushing it to an extreme. True, and helpful.
The trick is that one of those unhelpful suggestions is more dangerous than the other. If you consider them opposite sides of the same coin, that coin doesn't have a fair flip.
Being told that running too much is harmful costs you little; you can nod and keep jogging. But thinking that running 12 miles a day is the only useful way to run will discourage you from attempting to run at all.
To put it another way, saying "50% test coverage is fine, sheesh" should encourage more testing, because it lowers the bar you have to clear to consider yourself "doing it right", and therefore makes it easier to talk yourself into trying at all.
Even better, what we're really saying is "50% test coverage is a good start; keep writing good tests, and if the number goes up, that's even better." Targeting 100% coverage turns the metric itself into the goal, which is the wrong incentive.
It is a subtle, but extremely important, distinction.
I've seen codebases that suffered from too much testing. The testing overcomplicated the code and made it brittle without catching actual bugs. Higher-quality tests, the kind that would have actually been useful and allowed refactoring, would have produced lower coverage numbers, and that wasn't acceptable.
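To make that concrete, here's a small hypothetical sketch (the function and test names are invented for illustration, not taken from any codebase I'm describing) of the difference between a coverage-chasing test that pins down implementation details and a behavior-focused test that survives refactoring:

```python
from unittest.mock import patch

# Invented example code, purely for illustration.
def normalize_email(raw: str) -> str:
    return raw.strip().lower()

def register_user(raw_email: str, db: dict) -> str:
    email = normalize_email(raw_email)
    db[email] = {"active": True}
    return email

# Brittle, coverage-chasing style: this test asserts *how* register_user
# works (it must call normalize_email exactly once). Inlining or renaming
# that helper breaks the test even though behavior is unchanged.
def test_register_user_calls_normalizer():
    with patch("__main__.normalize_email", return_value="a@b.com") as mock_norm:
        register_user("  A@B.COM ", {})
        mock_norm.assert_called_once_with("  A@B.COM ")

# Behavior-focused style: this test checks the observable contract, so the
# implementation is free to change underneath it.
def test_register_user_stores_normalized_email():
    db = {}
    assert register_user("  A@B.COM ", db) == "a@b.com"
    assert db["a@b.com"]["active"] is True

if __name__ == "__main__":
    test_register_user_calls_normalizer()
    test_register_user_stores_normalized_email()
    print("both tests pass -- until you refactor")
```

Both tests count identically toward coverage, but only the second one keeps paying for itself once the code starts changing.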
Granted, too little testing is probably the more common error.