
Users do report posts, from what I understand quite regularly. It still requires manual moderation, though; otherwise the reporting process can be abused (think anti-competitive purposes).

I've yet to hear of a way to solve this without hiring thousands of people for what are horrible jobs (see Adrian Chen's excellent reporting on the issue [0][1]).

I have no doubt that this will eventually be solved, or at least assisted, by ML, and that the solution is likely to come out of FB or G.

[0] https://www.wired.com/2014/10/content-moderation/

[1] http://www.newyorker.com/tech/elements/the-human-toll-of-pro...




Those Chen articles hit close to home when I read them, because I've had to verify and act on child pornography complaints at a hosting service. Almost eight years later, I still have nightmares about the (thankfully) little I've seen. When I got into hosting I didn't realize that dealing with stuff like that is table stakes; it was my first, and last, hosting job, but I'll carry that part of it with me until I die.

That experience really makes me feel bad for the 3,000 new hires, honestly. I couldn't imagine moderating the human condition, which one could argue Facebook basically is. I hope they don't just clean out Adecco, and that they actually pay those people with the long-term damage of the job in mind, but that won't happen. It would actually make for a good union...


Completely sympathize. I had an experience in my black-hat days: I broke into a server and found folder upon folder of JPGs. Stupidly, I downloaded some of them and opened the first, to find an image so disturbing that I can't even begin to describe it.

We were a bit conflicted about what to do (more about how to do it), and ended up reporting it to both the US and Australian feds (which I suspect may have given me a free pass on one of the crazier things I later did).

I really didn't take it well, but one of the guys in our group was inspired to start a vigilante group that would hunt these distribution networks down, and it achieved some success in the 90s.

Hopefully these employees can eventually be protected by some basic level of ML that filters out the worst of the worst (apparently Microsoft Research has a well-developed fingerprinting system for child-exploitation images; see the sketch below), because I'd really hate to imagine the scenario you and Chen describe, and what I briefly experienced, becoming more common.
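
For anyone curious, here is a minimal sketch of the general idea behind such fingerprinting: a perceptual hash, where visually similar images produce hashes that differ in only a few bits, so known images can be flagged automatically even after re-encoding or resizing. This is a simple "average hash" for illustration only; Microsoft's actual system (PhotoDNA) is proprietary and far more robust. The known_bad_paths and quarantine names in the usage comments are hypothetical.

    # Sketch of perceptual-hash matching (assumes Pillow: pip install Pillow).
    from PIL import Image

    HASH_SIZE = 8  # 8x8 grayscale thumbnail -> 64-bit hash

    def average_hash(path: str) -> int:
        """Downscale to grayscale 8x8, set a bit for each pixel above the mean."""
        img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming_distance(a: int, b: int) -> int:
        """Number of bits in which two hashes differ."""
        return bin(a ^ b).count("1")

    def is_match(candidate: int, known_hashes: set[int], threshold: int = 5) -> bool:
        """Flag a candidate within `threshold` bits of any known fingerprint."""
        return any(hamming_distance(candidate, h) <= threshold for h in known_hashes)

    # Hypothetical usage: fingerprints of known images live in a database, and
    # uploads are hashed and compared, so no human has to view a known image again.
    # known = {average_hash(p) for p in known_bad_paths}
    # if is_match(average_hash("upload.jpg"), known):
    #     quarantine("upload.jpg")

The point of hashing the downscaled thumbnail rather than the raw bytes is that a cryptographic hash changes completely on a one-pixel edit, while a perceptual hash survives minor transformations, which is what makes pre-filtering feasible at all.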


>Users do report posts, from what I understand quite regularly.

Yes, they do - even when there is nothing wrong with the post.

Perfectly legitimate accounts writing reasonable posts, without any "fake news" content or any "hate", get reported, and the accounts get blocked.

This is a result of political activism by users, and a very hard thing for someone like Facebook to solve.



