
What's weird is that Facebook cannot rely on their users to report blatantly criminal acts witnessed by thousands of people. It probably says more about Facebook users than the platform and makes me doubt that doubling this or that team size can make a meaningful difference.

Especially with this approach of manual monitoring, which will probably just result in more of the questionable deletions Facebook is already known for.




Users do report posts, from what I understand quite regularly. It still requires manual moderation, though, otherwise the reporting process can be abused (think anti-competitive purposes).

I've yet to hear of a way that this is solved without hiring thousands of people in what are horrible jobs (see Adrian Chen's excellent reporting on the issue[0][1])

I have no doubt that this will eventually be solved or assisted with ML, and that the solution is likely to come out of FB or G.

[0] https://www.wired.com/2014/10/content-moderation/

[1] http://www.newyorker.com/tech/elements/the-human-toll-of-pro...


Those Chen articles hit close to home when I read them, because I've had to verify and act on child pornography complaints at a hosting service. Almost eight years later, I still have nightmares about the (thankfully) little I've seen. When I got into hosting I didn't realize dealing with stuff like that is table stakes; that was my first, and last, hosting employment, but I'll carry that aspect until I die.

That experience really makes me feel bad for the 3,000 new hires, honestly. I couldn't imagine moderating the human condition, which one could argue Facebook basically is. They'd better not clean out Adecco, and actually pay those people with the long-term damage of the job in mind, but that won't happen. Would actually make for a good union...


Completely sympathize. I had an experience in my black hat days - broke into a server and found folder upon folder of JPGs. Stupidly downloaded some of them and opened the first to find an image so disturbing that I can't even begin to describe it.

We were a bit conflicted about what to do (more how to do it), and ended up reporting it to both the US and Australian feds (which I suspect may have given me a free pass on one of the crazier things I later did).

I really didn't take it well, but one of the guys in our group was inspired to start a vigilante group that would hunt these distribution networks down and it achieved some success in the 90s.

Hopefully these employees can eventually be protected with some basic level of ML that would filter out the worst of the worst (apparently Microsoft Research has a well-developed fingerprinting system for child exploitation images) - because I'd really hate to imagine the scenario you and Chen describe, and what I briefly experienced, becoming more common.
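
For what it's worth, the Microsoft system (PhotoDNA) is proprietary, but the basic fingerprint-matching idea can be sketched with an open-source perceptual hash. Everything below (the hash list, the filename, the distance threshold) is made up purely for illustration:

    # Rough sketch of fingerprint-based filtering with a generic perceptual hash.
    # PhotoDNA itself is proprietary; this only illustrates the matching idea.
    # Requires: pip install pillow imagehash
    from PIL import Image
    import imagehash

    # Hypothetical list of hashes of known-bad images, maintained elsewhere.
    KNOWN_BAD_HASHES = [imagehash.hex_to_hash("f0e4c2d7a1b39586")]

    MAX_DISTANCE = 5  # Hamming-distance threshold; tolerates minor re-encoding or resizing

    def matches_known_bad(path):
        """Return True if the image's perceptual hash is close to a known-bad hash."""
        candidate = imagehash.phash(Image.open(path))
        return any(candidate - bad <= MAX_DISTANCE for bad in KNOWN_BAD_HASHES)

    if matches_known_bad("upload.jpg"):
        print("Blocked automatically; no human ever has to look at it.")

That only catches images already in the database, of course, but it would at least spare moderators the repeat exposure.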


>Users do report posts, from what I understand quite regularly.

Yes, they do - even when there is nothing wrong with the post.

Perfectly legitimate accounts writing reasonable posts, without any "fake news" content or any "hate", get reported, and the accounts get blocked.

This is a result of political activism by users. A very hard thing to solve for someone like Facebook.


In many places in the western world, Facebook users are some 90 percent of everybody in a certain age group. So if it says anything about Facebook users, it says something about everybody.

Facebook is approaching 2B users. Let's say that 1 in every 1,000 users per day posts something that somebody else flags or that the system flags because of the words used. That's some 2M posts or comments flagged per day.

Facebook currently has 4,500 people in its community operations department ("moderators"). Each moderator then has to screen around 450 posts, which is difficult in an 8-hour shift. So obviously, that's not the way it works. They surely have algorithms that sort flagged posts by priority: more people flagging means it's more urgent; certain profiles are more urgent, for example people who have used violent language before; users with certain friends are more urgent; users of a certain age are more urgent; certain hours of the day are more urgent; posts with certain words associated with violence or suicide are more urgent than posts with racist or sexist slurs or nude material; and so on.
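
In rough pseudocode, that kind of triage might look something like the sketch below. To be clear, this is not Facebook's actual system; every field name, weight, and keyword here is invented just to make the idea concrete:

    # Toy triage sketch; all fields, weights, and keywords are hypothetical.
    URGENT_TERMS = {"kill", "suicide", "hurt myself"}

    def priority(report):
        """Score a flagged post so the scariest reports reach a human first."""
        score = 2.0 * report["flag_count"]                  # more flaggers, more urgent
        if report["author_prior_violence"]:
            score += 10.0                                   # history of violent language
        if report["author_age"] < 18:
            score += 5.0                                    # minors get priority
        if any(term in report["text"].lower() for term in URGENT_TERMS):
            score += 20.0                                   # violence/self-harm outranks slurs or nudity
        return score

    reports = [
        {"flag_count": 3,  "author_prior_violence": False, "author_age": 35, "text": "offensive joke"},
        {"flag_count": 40, "author_prior_violence": True,  "author_age": 17, "text": "I want to hurt myself"},
    ]
    # Moderators then work the queue highest score first.
    for report in sorted(reports, key=priority, reverse=True):
        print(priority(report), report["text"])

Something along those lines would explain how 4,500 people can appear to cover 2M daily flags: most of the queue simply never reaches a human in time, or at all.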

But even if they correctly identify a threatening suicide or hate crime, how do they prevent it? Shutting down a live video is one way, but contacting authorities would probably also be necessary. How do you do that when your users are spread over 100,000 different jurisdictions? It's a big task.


> Facebook is approaching 2B users.

Only if you also count their sister properties, including FB Chat, WhatsApp, and Instagram. Facebook the social network doesn't have 2B monthly active users, as many have left or rarely return.


Oddly enough, Zuckerberg posted this[1] about an hour ago saying they have 1.9B users on Facebook each month. Rehosted image for non-FB users[2].

"Our community now has more than 1.9 billion people, including almost 1.3 billion people active every day."

[1] https://www.facebook.com/photo.php?fbid=10103696225640981&se...

[2] https://i.imgsir.com/qePz.jpg


"report blatantly criminal acts"

The marketing campaign will be against murderers, the implementation will be against people who voted for Trump.


Why would they do that? I'm sure Trump voters click on adverts as much as anyone else.


I imagine that plenty of people 'report' people after disagreeing with them politically.


Zuckerberg has political ambitions; a little help from the Facebook moderation team might earn him a few favors. He certainly wants that infrastructure in place once he runs.


Easier to sell ads if users don't have to deal with differing opinions. Arguments are hard to avoid and disrupt the casual consumption experience.

Conservative voters probably represent a much smaller portion of Facebook's user base than liberal voters, given Facebook's age demographics.

There is definitely an asymmetry in which kinds of hard-line political pages are likely to get banned. They have relaxed a bit in the last few months, but many political pages constantly have to worry about "getting zucced".


[flagged]


I come from significant privilege, so I won't be hurt by people dividing (and fighting among) themselves by ambiguous lines drawn in the sand. But, I don't understand how so many people that don't come from such a background think of globalism as a bad thing.


Because globalism means the extremely poor elsewhere in the world get less poor, while the relatively rich in the US get relatively less rich, and the extremely rich in the US and elsewhere get even richer. This does not sit well with a lot of people.


Observation shows the first part is promised but does not happen. All the money goes into one pocket, and as a hint as to whose, it's not the lower or middle classes of any country.


It's not something I have thought about before, but I wonder how prevalent the bystander effect [0] is in online media streams. Say, if there are 1000 people watching a live stream on Facebook of a horrendous crime, will no one report it because everyone thinks someone else will report it? If that's the case, Facebook is up against human psychology in hoping people will point out these acts.

[0]: https://en.wikipedia.org/wiki/Bystander_effect


I'd expect the lack of actually seeing the bystanders to weaken the bystander effect.

Lack of physical proximity to what is happening might lessen the urge to act. Pressing 'report abuse' is really easy, but calling the police based on online material is a lot harder.


Based on my experience reporting security issues to them and their condescending and irrelevant responses, I would just call the police. I wouldn't bother with Facebook.


>What's weird is that Facebook cannot rely on their users to report blatantly criminal acts witnessed by thousands of people...and makes me doubt that doubling this or that team size can make a meaningful difference.

Then wouldn't it make sense to add more people whose job specifically is to report criminal acts instead of relying on the users to do it?


Are the reviewers going to browse random content? Somewhat possible with trend detection (make a human look at quickly rising stuff) but very wasteful since most of that will be either benign or incomprehensible to the outsider.

Will they watch private streams and read private messages? That sounds like a privacy disaster even Facebook would prefer to avoid.

Which leaves them with reports from users. Basically, reviewing millions of "hate speech" reports and either effectively instituting a very strict speech code or ignoring the vast majority of them, leading to further complaints.


Maybe I'm missing something but I don't quite see the point of rebutting issues you raise through your own speculation.


It's a design problem too. If it's not obvious that you can/should report something then you're never going to do it. Plus if there isn't enough feedback about your help then you won't feel like you've made a difference.



