Point taken; it "works" for certain values of "work."
> They could hire an army of reviewers. They just don’t.
They may actually do that too, but perhaps there are thresholds that must be met before something reaches a reviewer. I have some sympathy for Google here, as I work on email security in a high-volume environment. ML is one tool in the box, and human reviewers are another. Everything is a tradeoff between resources, false positives, and false negatives.
At least my organization's customers can contact support if something is going wrong, but for people trying to legitimately use Google Ads, it can be an extremely frustrating situation of shouting into the void. (And getting boilerplate support answers back from the void.)