I read most of the guy's article, I think, but I might have missed this last point you're making. I'm not sure I understand. Are you saying it's too technically difficult (via machine learning or whatever), or that there would be too many false positives?
If there's one company that doesn't usually use humans to do any kind of spam filtering, it's probably Google. Also, I'm pretty sure they'd work out the kinks eventually.
Both of those, and also: the amount of effort it takes, for no benefit whatsoever. They're doing it because they don't know what they're doing yet. It's not a moral outrage. It's a "tell".