
>If you give people who violate policy detailed information about what they did wrong and how you know then it is much easier for them to figure out how to abuse the system without getting caught.

I don't really buy this.

The most effective adversaries already have a deep understanding of detection mechanisms and are typically just tweaking parameters to find thresholds of detection. Other companies mitigate this by delaying bans and doing "ban waves", or even randomizing the thresholds (I have done both for certain types of automated bans for attacks on my systems).
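
To illustrate what I mean by randomized thresholds and batched ban waves, here is a minimal sketch. The names, numbers, and structure are hypothetical, not anything Google (or my systems) literally use; it only shows the general technique.

    import random
    import time

    # Randomize the detection threshold per check so an adversary probing for
    # the exact cutoff only learns a noisy estimate, and batch confirmed bans
    # into periodic "waves" so a ban cannot easily be tied back to the specific
    # request that triggered it.

    BASE_THRESHOLD = 100           # nominal abuse-score cutoff (made-up number)
    JITTER = 0.2                   # +/- 20% random jitter on the cutoff
    WAVE_INTERVAL = 7 * 24 * 3600  # flush pending bans roughly once a week

    pending_bans = []              # accounts flagged but not yet acted on
    last_wave = time.time()

    def is_abusive(abuse_score):
        # Each evaluation uses a slightly different cutoff, so repeated probing
        # reveals a range rather than a single exact threshold.
        threshold = BASE_THRESHOLD * random.uniform(1 - JITTER, 1 + JITTER)
        return abuse_score > threshold

    def handle_account(account_id, abuse_score):
        global last_wave
        if is_abusive(abuse_score):
            # Delay enforcement instead of banning immediately.
            pending_bans.append(account_id)
        if time.time() - last_wave > WAVE_INTERVAL:
            ban_wave()
            last_wave = time.time()

    def ban_wave():
        # Ban everything accumulated since the last wave in one batch.
        for account_id in pending_bans:
            print(f"banning {account_id}")
        pending_bans.clear()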

More to the point, adversaries already know what they did wrong, so telling them isn't going to make much difference. People hit by false positives do not know what they did wrong, and telling them will make a tremendous difference.

Full disclosure: I have been the victim of a false positive flag. My app, with over 50M downloads on Google Play, was removed and then reinstated only after my Reddit complaint post got human attention (thank g̶o̶d Google).




> adversaries already know what they did wrong so telling them isn't going to make much difference

I'm not sure that's true. Adversaries know they are intentionally trying to game Google's system, but that is not the same as knowing all of the internal parameters of Google's system. Telling them what they did wrong in specific cases might well give them useful additional information about those internal parameters that Google does not want to give them.

> False positives do not know what they did wrong and telling them will make a tremendous difference.

While this is true, it is also not actionable: the whole point is that Google does not know which positives are false positives and has no cost-effective way of finding out, since that would require actual human review, and at the scale of its ad business the number of humans required would be unaffordable.


I don't for a moment believe Google couldn't afford to pay more humans. Do you?


Not the number of humans it would take to replace their automated fraud detection algorithms and dramatically reduce false positives at scale, no. The whole point of their business model is that all of those processes have to be automated; otherwise they aren't profitable.


I had a similar experience and haven't seen it put this way before. Great point about false positives.



