
One terrible aspect of online content moderation is that, no matter how good AI gets and no matter how much of this work we can dump in its lap, to a certain extent there will always need to be a "human in the loop".

The sociopaths of the world will forever be coming up with new and god-awful kinds of content to post online, content that current AI moderators haven't encountered before and therefore don't know how to classify. It will fall to humans to label that content so the models can be retrained to handle it, which means humans have to view it (and suffer the consequences, such as PTSD). The alternative, where the AI labels the new images itself and those machine-generated labels are used to update the model, famously leads to "model collapse" [1].
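
For intuition on why that feedback loop degrades a model, here's a minimal toy sketch of my own (a Gaussian fit standing in as an analogy for the moderation classifier, not anything from the linked article): each generation is fit only to samples drawn from the previous generation's fit, and with finite samples the estimated distribution steadily loses variance until it collapses.

    import numpy as np

    # Toy model collapse: every generation trains only on the
    # output of the previous generation's model.
    rng = np.random.default_rng(0)
    n = 20                # small sample size per generation
    mu, sigma = 0.0, 1.0  # generation 0: the real data distribution

    for gen in range(101):
        samples = rng.normal(mu, sigma, size=n)    # model's output
        mu, sigma = samples.mean(), samples.std()  # refit on it
        if gen % 20 == 0:
            print(f"gen {gen:3d}: mu={mu:+.3f} sigma={sigma:.3f}")

Run it and sigma drifts toward zero: each refit keeps a little less of the original variance, until the "model" only reproduces a shrunken caricature of the real data. A classifier retrained on its own pseudo-labels degrades in an analogous way, baking in its previous errors each round.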

Short of banning social media at a societal level, or abstaining from it at an individual level, I don't know that there's any good solution to this problem. These poor souls are taking a bullet for the rest of us. God help them.

1. https://en.wikipedia.org/wiki/Model_collapse



