>they could automate flagging for human review instead.
I'm not sure this would save them in many cases. Most of the abusive content on Facebook is not easily identifiable, and I don't think current AI systems are up to the task. They may be able to identify that a human and a dog are in a video, but not tell the difference between petting the dog and punching it.
Similarly, they likely cannot tell the difference between someone at a shooting range and someone committing a mass shooting.
Just as it is too much to expect police to stop all crime before it happens, I think it is too much to expect platforms to remove violating content before it is reported.