I always wondered why they don't have a system where the videos or images are shown filtered first, like blurred, and then, at the discretion of the operator, they can take "several levels of blurring" off until they see the original image. That way, for some images/videos you can tell right away if it's violent/inappropriate, and the operator may see "less" of whatever is disturbing to them.
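Roughly what I have in mind, as a minimal sketch using Pillow; the number of levels and the blur radii are made-up values, not anything any platform actually uses:

```python
# Minimal sketch of progressive unblurring; the level count and radii are
# arbitrary assumptions.
from PIL import Image, ImageFilter

BLUR_RADII = [32, 16, 8, 4, 0]  # level 0 = heaviest blur, last level = original

def render_for_review(path: str, level: int) -> Image.Image:
    """Return the image at the requested review level.

    The operator starts at level 0 and removes one level of blurring at a
    time, stopping as soon as they can make a call.
    """
    img = Image.open(path)
    radius = BLUR_RADII[min(level, len(BLUR_RADII) - 1)]
    return img.filter(ImageFilter.GaussianBlur(radius)) if radius else img
```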
I've also wondered how they deal with long videos that seem fine for the first few minutes, but where the real inappropriate content is spliced in for just a few frames somewhere in the middle or later in the video.
the janitors at my office don't get paid enough, and don't get valued enough. AFAIK, we contract out our janitorial work, so that we don't have to pay benefits, etc. it's shameful.
> Janitorial work always has been under-appreciated. But cleaning up shit tends not to be high skill work.
The latter is a non-sequitur, IMO. Work that causes that sort of personal injury should be well compensated, regardless of the skill level. It takes a lot out of a person. We should try our best to make that person whole again, or to not take so much from them.
What makes you think that? How can you say that with such confidence? Are you letting your pre-existing negative feelings color your assumptions about what people actually think at the company?
I used to work there, and I built tools for these workers. Every one of the other engineers that I knew had nothing but respect for them, and everybody recognized that it was important work.
I kind of hate how they play up how little these folks are being paid in Arizona relative to how much FB employees are being paid in California. Yes, it's not a lot of money regardless, but it's frustrating that people complain about centralizing jobs in an unaffordable area, and then, when companies contract out to cheaper places, complain that they're underpaying people.
Even in Phoenix, yes, I think $28k is being underpaid for this job, but you can't tell me they wouldn't also call them underpaid at twice the money, and take the opportunity to compare that salary to a Menlo Park salary.
Well, yeah. Why should all the money Facebook makes from slipping ads into every conversation it can go only to SF techies? These people are doing a pretty important job for Facebook, and yet they're getting paid close to minimum wage, in a highly Taylorist environment that barely gives them time to go pee, let alone take a break and calm down after seeing something really horrible.
You have to remember that those are policies put in place by the contracting company, not Facebook or the engineers that make the tools that these people use.
Facebook takes a hands-on approach to vendor management: many of these policies are agreed with vendors at the contract negotiation stage, but even beyond that, many are set and enforced day-to-day by Facebook employees working as vendor managers.
The other thing to note (and this is true of most outsourcing arrangements) is that, while engineers may respect these people and wish to provide good tooling, they're kept at arm's length by management in terms of communication channels. Outsourced workers don't have access to internal ticketing tools and aren't party to any internal meetings on tooling (even those at management level in the contracted company). Engineers can mean well, but that means little if they don't have any means to understand the needs of these indirect employees.
As a purchaser, you're not bound to simply abdicate responsibility if your contractor misbehaves. You have the right and ability to choose contractors whose values are aligned with yours, and to refuse to do business with those whose values aren't. And you can reflect those values in the terms and conditions of the contract.
Consider for yourself: if you hired a contractor to remodel your house, and you found out that the contractor wasn't giving their employees bathroom breaks, would you simply shrug your shoulders?
And yet if Facebook really valued these contractors as much as their regular employees, they would be hired as regular employees and not contractors. The fact such crucial staff are contractors shows how much Facebook values them.
Yeah I don't disagree, ultimately though, it's a scaling problem. It's unrealistic to hire thousands of people in a high churn context. Cognizant, Accenture, et al are companies designed to do that and they do it well.
So sad that Facebook was forced to contract with such a terrible company that treats its workers so badly. Unfortunately a company like Facebook doesn't have any leverage when it comes to dealing with suppliers, they're just a helpless consumer of services in an unfair and inequitable world.
Perhaps if it were FB itself doing the hiring, rather than a 3rd party company, there would be better metrics being sent back to management about things that needed to change.
I don't think OP meant to disparage the good, honest work, but really, if FB (as a company) did value the work, it'd be in-house, wouldn't it? There's too much opportunity for management to 'tone down' issues when metrics are shared between companies, whereas in-house there'd be no hiding from the problems.
That said, it's great to hear that you devs were very mindful of the problems that face these people. I certainly didn't realise how bad it was, so kudos to all involved really.
Also maybe save hashes of "known terrible" images so they don't get moderated by a human at all. It's done with child abuse imagery, it could easily be expanded to gore, blatant hate memes, etc.
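A crude stand-in for that kind of matching, using an open-source perceptual hash rather than PhotoDNA; the imagehash library and the distance threshold here are my own assumptions:

```python
# Near-duplicate matching against already-confirmed content, so it can be
# auto-removed without another human ever seeing it. The threshold is a guess.
from PIL import Image
import imagehash

known_bad = set()  # perceptual hashes of content a human already confirmed

def mark_bad(path: str) -> None:
    known_bad.add(imagehash.phash(Image.open(path)))

def is_known_bad(path: str, max_distance: int = 4) -> bool:
    """True if the image is a near-duplicate of previously confirmed content."""
    h = imagehash.phash(Image.open(path))
    return any(h - bad <= max_distance for bad in known_bad)
```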
The blurred images would convey the concept and the concept in many cases would be psychologically damaging, as you say. However I strongly suspect that the vivid details would only compound that damage. Progressive unblurring wouldn't prevent psychological damage, but surely it should help alleviate it to some degree.
It may even be possible to selectively blur only parts of images for some classes of content. For instance, a filter that blurs portions of the image that are vivid red. If the red blur is on a dinner plate, then it's probably just a plate of pasta with tomato sauce. If the red blur is where somebody's head should be, that's probably gore and witnessing even that could be disturbing. But I know I'd rather see the blurred version.
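A very rough version of that red-region filter, assuming Pillow and NumPy; the "vivid red" thresholds are completely arbitrary and would need real tuning:

```python
# Blur only the regions where red strongly dominates the other channels,
# leaving the rest of the image untouched.
import numpy as np
from PIL import Image, ImageFilter

def blur_vivid_red(img: Image.Image, radius: int = 25) -> Image.Image:
    base = img.convert("RGB")
    px = np.asarray(base).astype(int)
    r, g, b = px[..., 0], px[..., 1], px[..., 2]
    mask = (r > 150) & (r - g > 60) & (r - b > 60)  # "vivid red" heuristic
    blurred = base.filter(ImageFilter.GaussianBlur(radius))
    mask_img = Image.fromarray(mask.astype(np.uint8) * 255, mode="L")
    # composite: take the blurred pixels where the mask is set, else the original
    return Image.composite(blurred, base, mask_img)
```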
If such a system were created, Facebook would no doubt instrument it. Or at least the employees would suspect that Facebook had instrumented it. Then you'd have employees worrying whether they are deblurring "too much" or "too little".
I remember when "user generated content" was a common phrase. It was great, users generate the content for the site and you just have to provide a platform for it. It seemed like a positive thing.
Now it seems clear that if you open yourself up to hosting something for your users, you also need to be prepared to accept that you're going to be hosting the worst of humanity, and exposing others to it, including people who work for you.
It's such an extreme contrast between what it seemed user content could be and what it actually is.
This isn't particularly surprising to anyone who has been online for a while - sometimes you'd see gruesome stuff even on forums, and then you also had the culture on sites like 4chan.
Moderation isn't something that one could really escape from even about 20 years ago; it's just that now we're more cognizant of the effects of poor moderation, and users are more aware of how people behave when moderators/admins are pushed to the boundaries of the set rules.
The scale of the platform is the differentiator. If you're moderating a few hundred or thousand people on a forum, especially a site with a constrained vision and purpose, you're simply not encountering content and behavior like this on a regular basis, if ever.
Now, look at Facebook. Billions of users. No constrained vision or purpose to being there; if you're human, you're a customer, and human behavior can be pretty gross at the fringes. And maybe most importantly: they can't just ban whatever people or content they want, because of both the profit motive and the public outrage their preeminent position attracts.
This isn't a problem that existed before, because the problem is the scale, not the behavior.
Yeah, the problem today compared to BBSes/forums 10-20 years ago is that there are "smart" algorithms that amplify the spread of fake news and shit content, compared to having everything equally exposed to everyone.
I think the big difference is how oversensitive society is now. Before, your moderators just had to deal with outright illegal content, maybe a flame war or two (or just move the discussion to the flame war section). If you ran into anything as a user, it was understood that you should report it and that the moderators would get to it with varying degrees of speed. There was no expectation of a perfectly safe bubble.
Now, if even the most milquetoast comment, which could possibly be offensive if I assume the absolute worst about the poster, isn't moderated post-haste, you'll be reported to the media, who will almost certainly archive it and use it to fill the next 1-24 hours of their outrage media cycle. I honestly see a future where people file lawsuits against web sites for not moderating enough, for a failure to keep users safe or some sort of negligence. "Your web site gave me PTSD or enabled cyber-bullying." What's worse, they'll win.
I think this argument has been made for two thousand years (this generation is too soft! Uphill both ways in the snow!), but I've yet to see the causation demonstrated.
Just watch an old movie from about the 90s and you will be surprised how politically incorrect some of those movies were. They are just a reflection of what was acceptable in those days.
And in 20 years we'll watch media from this decade and be shocked at what we put in our media. Society advances, culture changes. Viewing the past through the lens of the present is always going to be a different exercise than viewing the present through the lens of the present.
I agree with the second part of that. Not necessarily the first.
> Viewing the past through the lens of the present is always going to be a different exercise than viewing the present through the lens of the present.
Unfortunately, we live in a culture of intolerance. A few months ago there was a race car driver who lost his sponsorship because of something his father said before the driver was even born.
This is quite the theory - do you have any backing other than the single instance you've just quoted? And what is "intolerance", and why is it "bad"?
Excellent diving in! I believe you are accusing me of hypocrisy - could you expand on this? I'm very interested if I can be caught out in fallacy, given how ardently I challenge fallacy when I find it.
To note - literally, I did not use the word "tolerant" in the previous post, and someone made an argument based off a strange assumption that I did.
I don't believe racism should be tolerated. Does that thus mean we live in an intolerant society? If I claimed my opinion set the cultural Zeitgeist, would you not accuse me of great arrogance?
I don't think that you're a hypocrite; I think that you're honest about your intolerance. But you're not the only one, and I think that you are an example of our current Zeitgeist, which is — I believe — one of remarkable intolerance for opposing points of view.
Granted, we're all intolerant of something; the question is where we draw the line. I think that for quite a while in the 80s and 90s the line was high, but now it's very, very low in comparison.
Maybe it would help to define "intolerant"? For example, I don't believe racists should be harmed for their beliefs, but I do think their beliefs should be called out as harmful and immoral at every opportunity. That is what I mean by "intolerant." Do you have a different idea?
You're asking for a lot in a very small space. HN isn't the proper forum to discuss this, and I can't take that much time away from work to educate you. If you can't see it for yourself, then there's nothing I can do for you.
That makes a lot of sense to me, so I'm curious why you lobbed the theory over the wall at a public forum with no intention of defending it.
Would you believe me if I told you "we live in the greatest age of free speech in human history - our societies are more tolerant than ever before?" No? Well, I've given as much argument as you, so I suppose we are at an impasse.
Yep. Society has changed a lot in recent history. Things that were considered OK in the 80's and 90's will end your career today.
Example: The 1985 Dire Straits song "Money for Nothing" is shortened when played on radio in Canada, and on non-rock stations on Sirius Satellite Radio because it has a verse that is now considered unacceptable.
Example: In the 1990's sit-com Frasier, it was mentioned at least once that people were being referred to psychological counseling for homosexuality.
Agreed - it's quite remarkable. In conversations with non-white friends of mine, it's fairly eye-opening how recently they feel they've gotten a fair shake in films. Like, remember Mr. Yunioshi from Breakfast at Tiffany's? It's insane what they got away with.
This surprises nobody who has any experience with hosting. Even as far back as when I ran my own BBS, it was immediately clear that some people only wanted to leverage the content hosting for nasty things. If you were lucky, it was just perfectly legal pornography.
I get a certain amount of secrecy for their own protection, but this:
> They are pressured not to discuss the emotional toll that their job takes on them, even with loved ones, leading to increased feelings of isolation and anxiety.
Just doesn't make any sense. Not only does it not protect the moderators, I can't even figure out what the cynical corporate interest would be.
I'm guessing these moderators are outsourced through another company and that allows for a lot of unreasonable policies.
I think it is generally understood that if you outsource, you can get another company to enforce policies you'd otherwise be embarrassed by / legally responsible for if the workers were employed directly. So if something goes wrong: "oh, that wasn't us, we told that crazy outsourced company not to do it". The other company doesn't care, as they're not selling a product and it doesn't hurt them.
I worked for an outsourced customer service company when I was in college. The policies for the in-house service and the outsourced one were explicitly different, but the customers were exactly the same. This was no mystery to anyone in the company. Policies for the outsourced companies were much, much less generous to the customers. So much so that at one point they retroactively changed their warranty policy as enforced by outsourced customer service. In-house stuck to the original rules; outsourced declared X, Y, Z weren't covered. When they got caught after a year of saving money, they declared that the company they outsourced service to did it wrong. Then after a time they'd switch back....
I actually know a Facebook moderator who lives in Austin. I found it very interesting that they do a lot of moderating of illegal political content in countries that have strict rules about it. She works directly for Facebook and gets the same perks as the other employees in ATX.
It makes sense in a corporate-reptiloid sort of way. If you don't discuss this with people outside of the system, nobody will tell you "Dude, you are being abused! It's not worth it! Look at what this job is doing to you - and for what, for minimum wage?" If you can only discuss it with people in the same boat, it looks like a normal, acceptable situation - everybody has it the same way, so it must be just how things work, nothing to be done about it, no use trying to improve it or look for something better.
If they end up suing Facebook for the mental harm they suffer from the job (and things like content moderators getting PTSD is a recognized thing), then lack of communicating their distress to their friends and family undercuts their case.
"It couldn't have been that bad, you didn't even mention it to your mother or your husband."
Locations are chosen subject to a wide number of factors and influences. But even if local laws prevented such a lawsuit, a cautious manager might still issue the same directive: laws get overturned, etc. They might also be aiming to avoid bad PR (like EA Spouse's 15 minutes of fame).
I think most people prefer not to think too much about this underworld. Like when somebody throws themselves in front of a train, one of the first things the police do is put up barriers.
And how do you even talk with your loved ones about this kind of work?
This policy is probably to do with the company trying to ensure folks do not share stories that may end up on the internet/TV and damage the company (it's easier for a third party, like a family member, to raise the issue on the internet/TV without risking financial or employment damage). I would be surprised if they couldn't speak to a medical professional about it.
It would be interesting to see whether the same policy exists for work such as 911 dispatchers.
Indeed, I'd think the cynical corporate interest would be to watch out for the well-being of their workers and to prevent the PR backlash from workers developing mental issues.
That's the behavior of an enlightened psychopath who has realized how useful good will is. Facebook-the-company is an unenlightened psychopath who just wants money and power and doesn't care what anyone thinks about it.
Zuck has been around long enough to have experienced goatse, rotten.com and all the other shock sites; back then, I'd argue, it was easier to find and be 'accidentally' exposed to than today.
But that's the odd, one-off experience. Having to deal with new levels of depravity every day is different.
Anyway, what do you believe would change if Zuck did it for a day? They need moderators, as long as the technology for detecting it automatically isn't good enough yet. He knows it's a problem.
>Anyway, what do you believe would change if Zuck did it for a day?...He knows it's a problem.
Well, for starters, he might stop outsourcing the work through a contractor who pays its employees 1/8 of what the average Facebook employee is paid. Maybe the powers that be would accept and acknowledge that these low-paid contractors are developing PTSD-like symptoms and are clearly not getting services internally, much less the financial compensation to get such services externally... not to mention the contractual arrangement which seems to keep the contractors from seeking help externally.
I don't think the point OP is making, about Zuckerberg spending a day doing the work of one of these positions, is about determining the fair market wage for this position in Arizona...
After all what is the local market rate for a job that has shown the tendency to trigger PTSD without sufficient benefits to treat said PTSD? You would hope at least enough to cover PTSD treatments...Would you continue to use Facebook or allow your kids to use Facebook if there was a good chance of them developing PTSD symptoms? Would you take a job where there was a good chance you would develop PTSD and the job wouldn’t cover it and didn’t pay enough for you to cover it?
Yea, but seeing someone spread his asshole in a self-pleasuring manner isn't really the same as the daily parade of conspiracies and snuff videos that content moderators have to deal with.
That they would gain knowledge of what the job is like, and so realise that their employees doing the work need a lot more support than they currently have.
If it were Facebook employees that might even be the case. As the article states it's conveniently outsourced so they can just label it as a dollar expense without a guilty conscience.
And the secret is that there is no current solution.
There is no automation to stop the sheer amount of dumb, sick, sad, horrifying, malignant and monstrous stuff being put up every minute from around the world.
Frankly it could probably break social media the same way cancer could kill the tobacco industry.
And this is simple moderation - ignoring newsworthiness, censorship, speech, propaganda and other issues this throws up.
The best they can probably hope for is that the users forget about it.
I mean seriously - what are publicly owned firms supposed to do?
Open up the graph of people who post such content. The more offensive content you post, the less privacy on your Facebook account.
Verification checkmarks would allow one to screen out all content posted by unverified users; verification would be available in return for a street address to which FB mails a letter as a low-bandwidth second factor. That's not so hard to fake, but it's also not hard to detect as fake when people decide whether or not to filter out your content using shared blocklists.
>I would be impressed if Zuck and Sheryl decided to spend a day per year doing this job. I suspect that as a billionaire and the CXO of one of the largest companies of the world, you can get disconnected from the details.
Considering that Mark Zuckerberg enjoyed personally using a stun gun on a goat and then slicing its throat[1], and spent a year similarly personally killing any animals he ate, after his day as an undercover boss he may enjoy the work and declare the complaints baseless.
I wouldn't be surprised. Zuck slaughtered his own meat for a year; he seems willing to expose himself to a harsh reality to remind himself that it exists.
For those used to shock sites from 10-20 years ago, this is very different, I'm not going to link to anything but the disturbing content is on a few whole new levels.
I can't even imagine doing this job. This should be 100% automated, with only a 0.01% error rate for the first-pass filter (i.e. before anything reaches humans).
They are willing to spare their users the horrors but not their own employees/contractors...
I developed the first version of the screening system for a corporation about 10 years ago, when UGC was still in its infancy. We had the help of a Microsoft tool that would match the visual fingerprint of a photo against a known-images database, which prevented human operators from re-seeing an already-tagged image in a different cloud, the hope being to reduce the psychological scarring of the human workers who screened the content. We also evaluated automatic classification of images, but it's very hard to distinguish soft-core porn from shock images even with today's algos, never mind 10 years ago.
One piece of advice from a legal counsel that I received at that time still sticks in my memory; she said: "Never look at the images, no matter how curious you are. It's not just that it's illegal and can land you in prison; it's that it's permanent. You cannot un-see something you've seen."
One thing I hope they're doing is advanced detection of duplicate inappropriate content - e.g. by splitting a confirmed, properly blocked video into frames and identifying matching videos based on matching frames (or even key parts of frames).
You'd still have to investigate/review partial matches, but something like that could cut down a lot on duplicate effort and could auto-identify a lot of things that would need manual review.
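If I were sketching it, it might look something like this, with OpenCV plus a perceptual hash standing in for whatever fingerprinting they actually use; sampling every 30th frame and the match threshold are guesses:

```python
# Frame-level matching of a new upload against frames of already-blocked videos.
import cv2
import imagehash
from PIL import Image

def sampled_frame_hashes(path: str, every_n: int = 30):
    """Yield a perceptual hash for every Nth frame of a video."""
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            yield imagehash.phash(Image.fromarray(rgb))
        idx += 1
    cap.release()

def matching_frames(candidate: str, blocked_hashes: set, max_distance: int = 4) -> int:
    """Count sampled frames of a new upload that match any blocked frame.

    A high count could trigger auto-removal, or at least prioritized review.
    """
    return sum(
        any(h - b <= max_distance for b in blocked_hashes)
        for h in sampled_frame_hashes(candidate)
    )
```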
I read somewhere that it's actually a hybrid approach!
For some things, like nudity, ML-based solutions work fine.
But other things, such as hate speech, are more nuanced and require human oversight.
That said, ML is used everywhere possible to flag content for moderators.
For some categories, the work of human moderators is used to train the ML models.
Facebook and other companies like YouTube are well incentivized to do the right thing here and automate as much as possible, for all the reasons outlined in the OP.
The only one that got to me was Funkytown. I've got to imagine if they increased pay and hunted through those users, they could find people that can withstand sifting through this stuff.
> Every moderator I spoke with took great pride in their work, and talked about the job with profound seriousness. They wished only that Facebook employees would think of them as peers, and to treat them with something resembling equality.
> “If we weren’t there doing that job, Facebook would be so ugly,” Li says. “We’re seeing all that stuff on their behalf. And hell yeah, we make some wrong calls. But people don’t know that there’s actually human beings behind those seats.”
Some respect and acknowledgement, and more than $15/hour, seems like it would be a good start.
Also: High-quality no-copay mental health services, that are also contractually guaranteed for, say, a year after severance (for any termination reason).
It also occurs to me that even if this work arguably needs to be done, it doesn't need to be done by anyone 8 hours/day or 40/week. (Let alone in conditions where your job security depends on maximizing your "efficiency"). Anyone needing to do this should do it as a part of a job with a balanced set of work duties, with this being something they do only some of their work time.
These things would hurt Facebook's bottom line of course. These people are being chewed up and spit out for Facebook's profit and reputation.
I agree that this could be a part of a bigger position instead of being the bread and butter, but we are talking about sub $30k/year jobs that nobody wants to do.
We could just as easily make the argument that it's not safe for garbage men to hang off the back of the garbage truck, but at the end of the day a) nobody cares and b) I've got my own job that nobody wants to do that I'd love for someone to come along and make less shitty.
> but we are talking about sub $30k/year jobs that nobody wants to do.
Yes, that's the problem. There is no law of nature that says we need to pay people with the jobs that are worst for their health the worst and give them no respect or acknowledgement.
If there are other jobs that is true for, yes, those too.
I wouldn't tell you what to care about, but I would say that we all would benefit by taking care of those of us whose jobs harm them. Your job should be less shitty too; it doesn't take away from that to point out other people whose jobs are harming them and who aren't getting what they need. It is reasonable to begin by focusing on the jobs that are the _worst_, mentally or physically. Maybe yours is one of those too.
It's also reasonable, in the HN venue, to focus on the worst of the jobs that make possible the economy many of us reading this benefit from, given that we're paid a lot more than $15/hour to contribute to it and our jobs probably aren't as harmful to our health. If your situation is especially dire, then, yes, you too.
> There is no law of nature that says we need to pay people with the jobs that are worst for their health the worst and give them no respect or acknowledgement.
It's not a law of nature, no; but it is a law of human psychology/economics.
The jobs that nobody wants to do aren't competed for—there are always plenty of such jobs to go around. Therefore, there's nobody kept out of the field. Therefore, the field doesn't ever build any social cachet of exclusivity. And that's a problem, because exclusivity is a prerequisite for the general halo-effect of respect that a field's members can earn.
Example: doctors must know a lot to do their jobs—but we wouldn't default to thinking of doctors as "knowing a lot" if anyone could be one without proving that they know a lot. We'd distinguish "well-trained doctors" from "badly-trained doctors" and we'd certainly give some props to the well-trained ones, but "being a doctor" would no longer command respect on its own. It'd be like... being a programmer, for instance.
That general halo-effect of respect is essentially what drives the ability of a class of workers to collectively bargain—it's what makes the other side of the negotiation treat them seriously. They know that people that are respected by society have a voice in that society, that they can use to denounce bad employers. So they actually sit down and bargain, rather than just firing all their old employees and getting a new batch.
Microecon says that jobs with more employer-demand than employee-supply should pay more, because employers must compete for employees. But because of this failure-of-negotiating-power, this doesn't happen; all the employers effectively are in implicit collusion to keep the wages of such employees down, by just not even taking seriously the idea that they should be competing for such employees. (It's kind of the same effect as employers "implicitly colluding" to fail to compete for talented female employees—but instead of being driven by employers' biases about demographics, it's driven by employers' biases about the assumed social-power that an employee must have if they are willing to take a particular job.)
But to get the members of the field to be respected—rather than just sympathized with (see e.g. sanitation workers—they have "hard jobs", everyone knows this, and people try to not make their lives harder than it is—but being one still doesn't make you any more likely to be listened to at a town council meeting), the field must first become more exclusive. That can work for fields where most of the work occurs in one place—see Hollywood and their actors' and writers' guilds. But if the work is distributed (like with community moderators)—how is the exclusivity going to happen?
I don't think it's so much credential-based gatekeeping that gives professions bargaining power. You don't have to look any further than game development vs. financial services development. The former get exploited to the point of lunacy; the latter are extraordinarily highly paid and respected, despite doing basically the same job with a different coat of paint. And it's exclusively because there are more people qualified for both jobs trying to go into the former than the latter. It's the same effect as how SpaceX can underpay its employees.
Fundamentally, all an employer cares about, and all that will determine whether you are paid more, is whether there aren't many other people applying for the same job. Even for high-skilled jobs requiring extensive qualifications - an opening for a CS PhD will always pay way more than one for a History PhD, because there is so much less competition for the latter pool of candidates, even if there is a greater performance disparity between a "bad" and a "good" CS PhD than between history ones in a similar capacity.
Appraising candidates definitely factors in, but even in highly skilled disciplines like game development, as long as the employer can assume a strong likelihood of getting good, qualified candidates to replace you, they will exploit you. It's when the prospect of replacing you is in any way questionable that suddenly your wages and respect will rise.
Unfortunately, being a very good moderator vs just good enough doesn't make a big difference for the employer. That's a very big difference with doctors, programmers or actors.
IMHO, construing that this happens because of a conspiracy between the employers isn't going to help.
Conspiracy isn't the right word. It's a societal bias. "Implicit collusion" as in nobody ever has to talk to one-another for the "collusion" to happen—everyone just ends up thinking the same way as one-another, as if there was collusion.
There is a law of capitalism for it though, which we have, as a species, put well above nature.
These jobs are only so shitty because Facebook can get people willing to put up with them.
We sit in ivory towers, having spent decades learning to program to be qualified as software engineers making... what I would really call just a reasonable wage for where we, as humanity, are in terms of wealth creation, technology, and productivity. But those are skills built over extreme lengths of time that most people don't have, and they will suffer their entire lives for it on subsistence wages.
There are so few fields today where there is both demand for labor and a requirement of skill to gatekeep the "unwashed masses" from driving wages to the floor. Probably the most damning information I know of is that the Bureau of Labor Statistics shows that, one, the largest employment sectors are all unskilled, and two, they are the ones that will grow in the next decade [1]. US society is built on a foundation of despondent poor workers with no future, and that foundation is set to expand faster than fulfilling careers will... which make up only about 20-40% of actual employment, depending on how you interpret the data.
The garbage man metaphor is a good one, since those are actually pretty cushy gigs (at least around the northeast). They're a unionized workforce with opportunities for overtime, and the work is pretty mindless, albeit skilled.
Of course, that only happened after years of organizing and strikes. These are actions that I'm sure Cognizant could quell in a pinch - if Phoenix makes demands, they temporarily outsource to India and set up a new center in Albuquerque or Tucson.
This may sound out of touch, but if Facebook wants to maintain this workforce and avoid more bad PR, they may need to restructure this as a flexible or incentive-driven environment. I'm trying really hard not to suggest "uberifying" or "gigging" the work, but allowing moderators to choose their hours or content type for the day could turn this into a desirable role.
And paying them more and improving their care, without a doubt. Subjecting people to that kind of trauma for next to minimum wage sounds like a class action suit in the making.
There's an argument that it's unethical not to automate away the jobs nobody wants to do.
Like... sentient beings putting themselves in harm's way to pick up garbage? Or just spending huge amounts of time taking orders for hamburgers?
Yikes!
I assume the major motivator in most of these cases is money: essentially, you may just kind of have to work all day, possibly at significant risk to yourself, to get paid minimal amounts so you can eat and have a roof over your head.
Ultimately, humanity is deeply undervalued. There are jobs which would absolutely be a net benefit to society if eliminated- the catch is you'd put those currently employed out of work.
Aren't garbage men paid like $80k/year US with great benefits? I could be wrong, but it seems like they have a much better situation overall than these people, and the mental health issues seem like a much bigger problem than potentially getting dirty or hurt on the job, but maybe I have it wrong.
> If it isn't already the case in the US, psychologically hazardous jobs should come with wage premiums.
It's not. It's typically in the interests of shareholders that those costs be externalized onto employees, especially if those employees lack the economic leverage to demand better treatment.
Question: is there a legal prerogative for Facebook to proactively moderate content in the US?
Don't get me wrong: the article is incredibly hard to read. When humans become untethered from social mores, they are capable of shocking and evil behavior.
But...can't people mostly police this themselves? If I had a "friend" posting video of a dog being stabbed, I wouldn't be their friend for much longer. It seems that the result would be relatively small cesspools of highly antisocial behavior. Cesspools which will continue to exist regardless of any moderating effort.
Alternative to moderation: introduce "public content ratings" for the content users post publicly. Machines automatically assess content and rate the user based on the maturity level of their public discussion. I would be "teen-mature" because I sometimes say "shit" online and perhaps another user would be "highly mature - disturbing violence and sexual content". Their ratings would weight groups.
Then, it's up to me to decide who I associate with online.
Caveat: nothing is a panacea. We're all trying to figure out how to handle the fact that we gave every maniac a megaphone to the masses.
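As a toy sketch of that rating idea (classify_post below stands in for some assumed ML classifier; nothing here reflects how any real platform works):

```python
# A user inherits the worst maturity rating among their recent public posts,
# and each viewer chooses the highest rating they're willing to see.
from enum import IntEnum

class Maturity(IntEnum):
    EVERYONE = 0
    TEEN = 1          # occasional profanity
    MATURE = 2        # graphic but legal content
    DISTURBING = 3    # disturbing violence / sexual content

def classify_post(text: str) -> Maturity:
    raise NotImplementedError("stand-in for a hypothetical ML content classifier")

def rate_user(recent_posts: list) -> Maturity:
    """A user's public rating is the worst rating among their recent posts."""
    return max((classify_post(p) for p in recent_posts), default=Maturity.EVERYONE)

def visible_to(viewer_ceiling: Maturity, author_rating: Maturity) -> bool:
    """Each user decides the highest maturity level they want to associate with."""
    return author_rating <= viewer_ceiling
```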
> If I had a "friend" posting video of a dog being stabbed, I wouldn't be their friend for much longer.
It's less about it being for you, and more about it being hard to sell ad space when it sits next to content of that type. Your friend might not post those materials, but what about Pages and Groups, which is where I'd say most FB users spend their time these days?
In my opinion, Facebook is trying to maintain the illusion that this kind of content isn't being posted at all, that they present a "safe" environment for their customers. If people started to see this kind of content, they might be motivated to stop using Facebook all together. It might also prevent de-friending; I strongly suspect that Facebook doesn't want people to "de-friend" each other. Note how many alternatives they provide to actually "de-friending" (i.e. the "snooze" option).
> Question: is there a legal prerogative for Facebook to proactively moderate content in the US?
Not that I'm aware of, but there's a commercial one because advertisers don't want to be associated with certain kinds of content and FB is a public company. Usenet worked great as a distributed social communication protocol, but fell apart under commercial pressure, and it's not obvious how to compete with the huge infrastructural advantages of commercial platforms using voluntary protocols.
> It seems that the result would be relatively small cesspools of highly antisocial behavior. Cesspools which will continue to exist regardless of any moderating effort.
One approach is to monitor the cesspools and discriminate against content matchable to those originating points.
Is it just me, or should we expect that some small population will be naturally equipped to deal with this job? We don't expect everyone to want to be a soldier or a medical doctor. These people deal with situations that would make most people throw up. If you can't handle an hour of browsing 4chan, maybe you should know that content moderation is not a good fit for you.
I'm not sure if that solves the core issue of long term mental effects, even on the "hardest" people.
Stepdad is a retired firefighter, has kind of that old-timey tough guy thing going on (although deep down he's a real softy) - he rarely likes talking about the hardest parts of the job, but the few times it has come up he mentions seeing some things that haunt him.
I remember one time him mentioning seeing something so grotesque when he pulled up to the scene that he started laughing at the absurdity. Sort of in a way that he could do nothing but throw his hands up and say "well fuck this."
He jokes about some of that stuff, but I can sense pain beneath the humor.
It seems to me (and I'm serious about this) that a classic sociopath would in fact be willing to do this kind of thing for money, if I understand correctly, with no ill effects. They apparently have no conscience and are something like 1% of the population. But I think they tend to be smart, and can probably get higher-paying jobs.
I am probably one of those people naturally equipped to deal with this job, but because I don't find this content shocking, I find it hard to empathize with the people who want it to be censored, so I would not want to take the job.
Which might be a good argument for gigification of these types of content moderation positions, since the machine learning tech simply isn't there yet and might never be in some cases.
Someone with a strong constitution/stomach (i.e. the people who can browse 4chan while eating) might find this work relatively leisurely/tame for decent pay and flexible hours if adapted to a gig-economy model. I mean, there's a whole class of sick fucks who watch the stuff on reddit's /r/watchpeopledie for no compensation.
Might end up with fewer mental health problems if this type of desensitized personality were leveraged for this nasty-but-necessary job, instead of recruiting naive 19-year-old kids or whoever in between retail jobs and exploiting them in atrocious work conditions until they get PTSD.
"relatively leisurely/tame for decent pay and flexible hours"
Uhh, this doesn't sound at all like the stories of driving Uber that I've heard.
I don't think any of the people in the article would be doing this job if they had other options that paid as well, so I don't exactly see how your idea would improve their lives.
> When I ask about the risks of contractors developing PTSD, a counselor I’ll call Logan tells me about a different psychological phenomenon: “post-traumatic growth,” an effect whereby some trauma victims emerge from the experience feeling stronger than before.
Wow. Wow. Is this really a thing, excusing forced trauma by claiming it makes people better?
Personally, I found the implication that performing this job was actually making the contractors "stronger" to be stomach churning. I certainly wouldn't want to approach Logan for help with my own mental health.
People respond in varying manners to traumatic stress. Post traumatic growth is simply the other extreme.
For example, some people take traumatic stress really poorly (PTSD). Some people develop a world outlook to the tune of "well, no matter what happens, life is looking up from here" (post-traumatic growth).
The counselor is looking at the glass half full. You kind of need to be a glass half full person to persist in that line of work.
PTSD isn't universal. Not everybody that experiences trauma will develop PTSD. So it's not a matter of "you have PTSD but at least your outlook on life has improved."* Rather, sometimes with some people, trauma won't induce PTSD but will cause post-traumatic growth. Research suggests there is a correlation between PTSD and post-traumatic growth, but it's not a hard and fast rule.
Furthermore there are many factors in play. The nature of the trauma and the predisposition of the person experiencing the trauma both seem to play a big role in whether or not post-traumatic growth is likely to occur. People with social support networks or spirituality are more likely to experience post-traumatic growth. Perhaps for related reasons, trauma that is systematic or collective (such as being a prisoner of war) is more likely to induce post-traumatic growth than trauma which is personal or individual (e.g. sexual assault.)
(Furthermore, no matter how distasteful the possibility may seem, it is possible that post-traumatic growth can take people past whatever their baseline was prior to the inducement of PTSD.)
The point is, people with PTSD suffer and their lives are affected significantly. They have a harder time keeping jobs due to symptoms, their relationships suffer as they are harder to be around, they are more likely to abuse alcohol or drugs, and they generally need help.
Other people experiencing post-traumatic growth does not offset all that. It does not make up for the losses, and the answer to "is it overall beneficial for people to go through that?" is still no.
The counselor's answer suggests yes, and that is what people take issue with. Whether one can have PTSD and post-traumatic growth at the same time is a different question.
If I think this is unacceptable[1] and I'm skeptical of the nebulous "AI will do this job in the future" claims, are there conclusions to be drawn other than "UGC isn't sustainable on a global, public platform"? That is, are there serious alternative options, or anybody working on ideas in this space?
I think it's readily apparent that "just show everything" doesn't work if you want to attract a mainstream audience, but I'm reluctant to just give up on the global public platform that FB was originally idealized as.
[1] I think I'd still find it unacceptable if the moderators were being paid 6 figures, had extensive 1:1 counseling, or any other perks - selling mental health for money is something I'm happy saying a utopian society wouldn't include.
My only thought is "scale only through federation". It's impossible to moderate the content of a billion users. It's pretty easy to moderate the content of 1000 users. And if you're moderating the content of 1000 users who mostly come from an actual community (physical or subcultural) that shares the same values, you don't have to have your moderation rules enforced worldwide to the lowest common factor of different value systems, or by outsourced wage slaves from a different culture without any context.
Source: I'm a moderator on a Mastodon instance with about 1000 users (connected to the larger Fediverse of about 2 million users). We've got 5 moderators, and we respond to reports (either by or about our users) promptly and well. We don't have to police the behavior of the whole Fediverse (just our users), and we don't have to protect the whole Fediverse (just our users).
I haven't used Mastodon, but this is kind of the reddit model, right? Obviously some subreddits are >>> 1k subscribers, but they typically scale the quantity of moderators up accordingly.
Do Mastodon admins share common blocklists or anything? If a bad actor decided to start posting offensive content to random instances, I assume you can ban that {username | IP} from the instance used by you and your users, but would they then be able to just iterate through the other n Mastodon instances? Is there anything in place to prevent them from creating a new account and repeating ad nauseam? (not that there is on Facebook, necessarily)
(I don't know anything about Mastodon, which I'm sure is obvious from some of my questions - if they're incoherent in the context of Mastodon that's totally fine)
It's not very like that – subreddits are all on one server, but posts only appear on one subreddit, whereas Fediverse instances are separate servers, but posts propagate between different instances.
Mastodon instance administrators have the ability to block whole instances, and this is usually done because of bad/nonexistent moderation policies. There is some sharing of instance blocklists, more as a matter of convenience than of policy.
This is a really good question. I'm surprised how rarely we even entertain the possibility of these systems growing beyond our capacity for effective, low-suffering review and moderation.
(As for your footnote - people will rightly point out that any media will be used for some awful content, but that doesn't mean every system has an acceptable rate. "Can we get the rate low enough to tolerate?" is still a legitimate question.)
Every time Youtube makes the news for having unsavory content, their responses seem to have the same underlying tone: "we really are trying, but this is impossible." Every day, 24,000 days of video are uploaded to Youtube. Live human review for every new upload would take >100,000 full-time viewers, plus whatever is needed for comments, review appeals, and copyright notices. I can't even find good estimates on how much existing content there is. Some analysts have suggested Youtube might produce $15B/year in revenue. That review team would cost $3B a year at $15/hour, before payroll tax, benefits, counseling, etc; so we're talking about numbers in the ballpark of determining whether Youtube can be profitable. And yeah, you can do playback at triple speed, automate some basic content-filter removals, prioritize content posted under dubious keywords, de-prioritize major channels unlikely to upload something awful. That all lowers costs and makes it more plausible that troubling content will actually get caught.
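For the curious, the back-of-the-envelope math behind those numbers; the 40-hour week, $15/hour wage, and real-time viewing are my assumptions:

```python
hours_uploaded_per_day = 24_000 * 24                  # "24,000 days of video" per day
full_time_viewers = hours_uploaded_per_day * 7 / 40   # real-time viewing, 40 h/week
annual_wages = hours_uploaded_per_day * 15 * 365      # $15/hour, every day of the year

print(f"{full_time_viewers:,.0f} full-time viewers")  # ~100,800
print(f"${annual_wages / 1e9:.2f}B/year in wages")    # ~$3.15B
```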
But you're right: even if we wanted to run this as a public good, all those cost-savings only serve to concentrate the human toll. Law enforcement has been dealing with this issue since photography became widespread, and hasn't found a way past traumatizing people who have to sort through abhorrent content. Now, the same digitalization that makes distributing media nearly free makes it possible to create that experience a thousand times as often. (Shock sites, after all, are just a version of the same pattern consciously centered on the unwanted views.)
So what to do?
Robust tagging systems can help reduce unwanted exposure to content without requiring moderator involvement, the same way it's helped fanfiction communities reach a detente over protecting readers while allowing mature content. But it's telling that the shouting matches over insufficient tagging and robust age controls continue, and of course this only works for content that's acceptable on the site - a "take this down immediately" flag doesn't get used.
Federated approaches like Mastodon and WhatsApp groups might reduce legal liability and help people find spaces they're comfortable in and diminish unwelcome surprises. But that sacrifices both oversight and unity - even if we accept that some instances will be used to trade illegal content, plot violence, etc., we still lose the network effects of "searching Youtube" or "being on Facebook".
At the other end, I suppose ever-more-aggressive centralization is possible; the ability to multiply harm depends on being able to act repeatedly. If Google demanded your SSN to leave a Youtube upload or comment and banned bad actors for years, or the government ran Facebook with warrantless access to every message, the frequency of this sort of content would decline and the moderation burden might shrink greatly. (After all, anyone can tack a grotesque picture to a physical bulletin board, but there's not much of an issue with obscene UGC there.) Of course, the issues with that are screamingly obvious. Identity and the risk of consequences have a chilling effect, legitimate behavior can still be embarrassing when such a store is predictably breached, usage creep for this sort of information is not just a risk but an invariant, and governments are not only inclined but often required by law to act on all sorts of not-actually-bad content like "promoting drug legalization".
Beyond that... I don't actually know. Allowing even minimal privacy and UGC while trying to maintain a palatable space seems like a genuinely unsolved problem. There might be a lot of unexplored value in trying to reduce psychological cost without taking humans out of the loop. Most 'digital natives' have seen some horrible shock content without lasting harm, so perhaps the winning approach is to reduce the time/frequency/vividness of exposure to a manageable level. At the very least, it seems like a less exhausted space than the technical side of things.
This is very thoughtful and gets at the heart of the matter.
It's a little discouraging (considering how influential it is in the evolution of tech) how quickly HN reaches for "rah rah decentralization" as a panacea for anything social media.
The public platform needs to be decentralized. We need a social protocol where the data lives in the protocol and not in some corporate server where it is subject to their whims.
We currently live in a dystopia where your Twitter or Facebook could be banned at their whim leaving you a digital outcast.
I’m not sure I follow. Almost any time there is an issue with a large social network folks on HN say decentralization needs to happen to fix it. But how do you “fix it” while still having the same or better user experience?
It’s a very difficult problem to solve and I don’t think saying “make it decentralized! Make it live on a protocol!” is useful without the extensive “how” that everyone seems to ignore.
So utopia is a completely decentralized uncensorable network where basically anything goes? Not wholly achievable probably because of DNS etc. but you could come close. Not an unsupportable position but you probably amp up the underlying issues Facebook is trying to address by 10x.
Are there examples of decentralized/unmoderated platforms with similar popularity (= success for the purpose of this conversation) to Facebook?
I suspect reddit is the best example of blending "something for everybody" with "default users don't see offensive things", while also being able to remove things like the content in the story, but obviously they still have human moderators.
I think the ultra-open / libertarian model is fundamentally incompatible with broad acceptance, personally [1], but I'm happy to be proven wrong. Early IRC/BBSes aren't very convincing to me because even if they were unmoderated the barrier to entry (knowledge, hardware) was high enough to limit adoption.
[1]: I think sites like Gab indicate that even if you clone a successful product and market it as "<x> but with free speech", the free speech part winds up being a negative factor, concentrating elements that will scare off mainstream users.
What a joke this response is. The article mentions A, B, and C, with pictures and first-hand anecdotes. FB's press release issues a statement that they have contracts that do not allow for A, B, and C.
As I see it, the TL;DR is: we needed a lot of people, cheap, and mega-contractor shops are the only way to get the headcount, cheap. We also wrote nice contracts that require them to treat workers well, so if they don't, it's clearly not our fault. Also, we hired somebody who will be responsible for dealing with such things. So everything is going fine, except for rare exceptions, which we take very seriously. Thank you for providing feedback, it was very valuable to us.
This job sounds more horrific than crime scene cleanup. 4,500 foot soldiers to clean up after a billion users. Murder, rape, assault, fraud. Burn it all with brimstone.
I honestly don't know how moderators work, and I actually don't use Facebook. But 1,750,000,000 / 4,500 is roughly 389,000 users per moderator. I think they need five armies of moderators instead of just a brigade trying to find the worst. They need to step in on private messages or public posts to keep discussions from going rancid fast. I see it as being logistically impossible, unfortunately, but to keep discussions from sinking to that depth of evil they need to step up moderation pressure by a couple of orders of magnitude. I've said things here that moderators told me aren't allowed (actually said things here that were hurtful and unnecessary blanket statements), and I gripe to myself and limp back. If this pressure weren't there, YC would turn into kuro5hin in short order. I've seen discussion groups fall into the toxic slime pit more than a few times, never to return. I think at least Facebook groups can self-moderate, I have been told. I really doubt good behavior can be coded into a forum.
Now that pit is billions of people not thousands. I have little hope it’s going to ever get better. Is this the destiny of all internet discussions? I worry about this a lot.
A huge amount of content moderation has been offshored to the Philippines, where salaries are considerably less than $15/hour. Combination of access to good internet connections, modern office space, an English speaking workforce, and low salary levels.
I have not seen this documentary, but compared to the verge article, I think a major difference is that the filmmakers did not get cooperation from Facebook or the third-party content moderation company. In fact their photos were posted and employees were told to stay away from them.
(Until recently at least, most of this content moderation work was done by contractors overseas, largely in the Philippines. Can you imagine what the working conditions are like there?)
The presentation you linked to by Sarah Roberts was very interesting to say the least. Highly recommended. She adds a lot of information about the condition of content moderators in other countries.
If I was running a marketing team or PR firm working on this problem, I'd keep pushing the nonsense of AI magically solving everything someday. The actual nuts and bolts of content moderation are pretty horrific, and they are obviously going to want to hide this reality.
At least pay these people more than a dishwasher, please.
Yep. And this is what you get when, collectively, "we" demand moderation. It's the other side of the fence... Usenet was the unmoderated wasteland, and Facebook is the moderated hell-landscape of "OK / not OK".
Usenet still exists and is moderated in a similar way against exploitative content in partnership with The Internet Watch Foundation (https://www.iwf.org.uk), depending on your provider. Giganews, the largest Usenet provider and owner of other affiliated providers, is a member of the IWF: https://www.iwf.org.uk/member/giganews-inc.
The only thing remarkable here is that these people are not hired by Facebook directly. Someone needs to ask Zuck why he thinks these hires aren’t worthy of catered lunch and @fb.com emails.
Oh so "in the future" she will be able to watch it without sound or to pause the video?
Seriously, who thought blocking this was a good idea? Do you really need to watch an entire video to see that it's against the rules? Not having to watch the whole thing would also save (a lot of) time.
"Oh, but then they can just do a shallow evaluation of the video" - yeah, I don't think that is the case with most of the videos.
I wonder how far we are from automating this with image recognition. The interesting thing is the policy aspect of the learning problem: does a post/image/video violate a collective set of rules? They are certainly building a huge labelled dataset of images, decisions, and policies. I wouldn't be surprised if this were automated in 5-10 years!
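For a sense of what that automation might look like, here is a minimal sketch in Python, assuming a classifier fine-tuned on the platform's own labelled moderation decisions. The label set, the confidence threshold, and the ResNet-50 backbone are placeholders (not anything Facebook is known to use), and the classification head below is deliberately left untrained:

    # Sketch of automated policy screening with an image classifier.
    # ResNet-50 is only a stand-in backbone; a real system would be
    # fine-tuned on the platform's own labelled moderation decisions.
    import torch
    from PIL import Image
    from torchvision import models
    from torchvision.models import ResNet50_Weights

    # Hypothetical policy labels (not Facebook's actual taxonomy).
    POLICY_LABELS = ["ok", "graphic_violence", "nudity", "hate_symbol"]

    weights = ResNet50_Weights.DEFAULT
    preprocess = weights.transforms()

    model = models.resnet50(weights=weights)
    # Replace the ImageNet head with a policy head (untrained in this sketch).
    model.fc = torch.nn.Linear(model.fc.in_features, len(POLICY_LABELS))
    model.eval()

    def screen(path, threshold=0.9):
        """Return (label, confidence); low-confidence items go to a human."""
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(img), dim=1)[0]
        conf, idx = probs.max(dim=0)
        if conf.item() < threshold:
            return "needs_human_review", conf.item()
        return POLICY_LABELS[idx], conf.item()

Even with a well-trained model, the hard part is exactly the policy aspect mentioned above: encoding the rules, not recognizing the pixels.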
Wow, that was eye-opening.
Maybe a suggestion would be to have certain filters: anyone below the age of 18 wouldn't be able to view posts by groups/people that have been flagged as "NSFW" posters. I know Tumblr implemented a similar idea a while ago.
That would be many times worse than all the chans combined: imagine a chan exclusively used by middle-aged mothers, where anyone you talk to is doxable at the click of a button.
These contractors work for different agencies. Sometimes they become full-time if they don't end up quitting. To make things worse, toxic culture like workplace harassment is something they also experience.
Underpaid moderators are a side effect, not the main issue. If the network were federated, each node wouldn't grow to such an unmanageable size, at least not without other nodes severing ties.
The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”
I have a hope that people will eventually develop immunity to conspiracy theories after falling for enough of them and eventually seeing them disproved. But with things like pizzagate/Qanon it seems some people just keep going deeper.
This story even suggests the _opposite_ of immunity--the more you see of them, the more likely you are to buy into them.
It's not quite a good analogy, but it also reminds me of how I noticed that building/construction contractors I know almost invariably believe that the risks of asbestos and lead paint are highly overrated -- I've thought it's kind of for their own cognitive self-preservation, since they unavoidably get a lot of exposure, and don't want to believe their job is killing them. But maybe it's also just got something to do with a sort of "exposure therapy" effect where whatever you are exposed to becomes normalized.
Please realize that brains are hackable, and that there is always a set of content that will be absorbed by a brain which appears plausible to the brain, but in reality is false.
Ideas can be configured to get past an audience’s mental defenses. It will happen to you, and you will see it happen to your children, your educated parents, your well mannered neighbors and more.
Maybe there is a mental type which is immune to conspiracies, but that kind of immunity will come with some other mental cost.
I'm seeing gentle people in my family repeat sentences that always predicted a user was being radicalized/polarized online.
Intelligence analysts who know the material they are analyzing is propaganda still need active intervention to prevent them adopting the views they are analyzing critically. It’s just an issue with how our minds work.
> I have a hope that people will eventually develop immunity to conspiracy theories after falling for enough of them and eventually seeing them disproved.
Conspiracy theories don't appeal to the rational. A conspiracy theory is built on an emotional foundation, and the 'logic' is just scaffolding to hold the emotional payload. A lot of people (I don't know what proportion) are susceptible to emotional addiction and contagion.
It's not wise or safe to assume that people can/will/should just develop cognitive resistance (though I totally understand your hope that that would happen). Sadly we have abundant and growing evidence that it is relatively easy for bad actors to cultivate antisocial behavior up to and including murderous rampages.
It's as if these people are cannon fodder. I never imagined that they would deserve pity, but they clearly do.
I always figured the Facebooks of the world needed more moderation. But this makes me think that maybe the only effective move is to target further upstream and somehow constrain submission or account creation.
I can't see how this ends any way other than with network authentication of computers with signed bootloaders that can stipulate the operator's identity, paired with criminal statutes prohibiting operating a computer under someone else's identity or defeating the authentication/encryption that makes it possible.
When it comes to neuroplasticity and change, all it takes is time. If you spend more time in the shoes of those flagged for moderation than their opposites, maybe it's inevitable that you develop a form of Stockholm syndrome.
It's a bit cult-like. Add in isolation from family/friends, tight control over what you can and can't do, and having no trusted person (the therapist there works for the company).
It doesn't surprise me that it can lead to outcomes similar to being in a cult.
I think the intent of the comment was that some things start as conspiracy theories but end up being true, e.g. MK-ULTRA.
Although what I'm curious about in my own example is how much of a conspiracy theory this was before it was confirmed; as in, were there many people saying that the CIA was doing mind control experiments?
Less important but related: how much do people distinguish between that reality and 'corollary' conspiracy theories that are not proven (for example, that RFK's assassin was a subject of CIA mind control through MK-ULTRA; this is unproven, but people might assume it's true when they find out that MK-ULTRA was a real program)?
The internet is one great big libertarian experiment. It illustrates all of the creativity, and all of the depravity, that result when you just put millions of people together and tell them to do whatever they want without consequences.
Tons of people watch these kinds of videos by choice over on /r/watchpeopledie (Warning: Very NSFW Subreddit if the name of it doesn't make that immediately obvious. Contains many videos of accidental deaths and murders, including those of children.)
I'm sure some of them are jobless and would love to be paid $28k a year to do something they already do as a bit of a hobby.
How many people who browse /r/watchpeopledie do so for a length of time equivalent to an entire work day, Monday to Friday? And how many of those users, who watch videos of people dying all day every day, for pleasure, are qualified to judge what is offensive and what is not? These people should be referred to mental health professionals, not be assessing what is suitable for the world.
You're also entirely missing my original point. The people currently doing the job don't like it. They don't want to be doing it. And yet they are. Why do you suppose that is? Perhaps... because there are no other jobs available? Saying "you should be looking for a new job" to these people is like telling a homeless man he really ought to be looking for a rental property in his budget range.
Have you considered that a lot of people who already watch that stuff all day do so compulsively and are incubating a crippling mental illness? Also, I'm not sure that people who consume that content with enthusiasm are necessarily going to do a good job of screening it out for other people, because there's a significant overlap between enjoyment of such material and enjoyment of the reactions of people who find it distasteful.
What about people who work in slaughterhouses? Or a coroner or a mortician? Are they inherently "unhealthy" people because they have the personal temperament to handle a job that involves a lot of close contact with death? All nasty but necessary jobs with probably a lot more PTSD risk/scale than occasionally watching some gore and having to identify if someone's use of a racial slur violated community guidelines or not.
> It's like having someone with poor sense of smell take out the garbage.
No, it's like having someone with a poor sense of smell detect which things smell and which things don't.
Not to mention, having them do this work is taking advantage of a dysfunction rather than helping to address it. In fact it would be likely to make the dysfunction worse.
So watching the destruction of human life and/or incredible suffering of other people is not problematic to you? If someone has a dysfunction is it not cruel to take advantage of the dysfunction, whatever benefit might be derived by others? Seems like the very definition of dehumanizing rather than caring whether or not your fellow man is thriving.
I'm not saying force these people to work, simply that they're people who lean more to the callous side of the emotional spectrum and they're better suited for this kind of work.
What you see as dysfunction, their apathy when viewing bad content, is their normal state of being.
> "Why do you think some folk are drawn to habitually view such content?"
It's paradoxically unusual but at the same time incredibly relatable. Everybody dies, so there will be broad spectrum curiosity for the subject, inhibited only by natural squeamishness.
Years ago, when high schools still had shop classes, the teacher showed my class a binder full of color photographs of what a lathe accident looks like. The lesson was that lathes aren't toys. I'd never seen anything like those pictures before. They were disgusting, and fascinating. Probably psychologically damaging, but not as damaging as getting caught in a lathe. Did I mention the pictures were fascinating? For most people, seeing that sort of thing is rare. Some people are drawn to novelty, particularly when they can relate to it. Very little in life is as relatable as death; death is even more universally relatable than eating.
Is the suppression of squeamishness a form of mental illness, or a form of psychological damage? I think it definitely can be. But I'm far from convinced it necessarily is.
Almost everyone has a degree of morbid curiosity, or there wouldn't be a market for horror movies and true crime media. But my question was specifically about why that would develop into a habitual preference for specifically gruesome ends. I'm gonna go out on a limb and guess /watchpeopledie hasn't recently been taken over by video of people in comas flatlining.
Meh. Sounds like a run of the mill shitty job. You can destroy your body laying bricks and unloading trucks or you can destroy your mental health doing something like this. Of course they're not paid well enough to compensate for the long term damage. Nobody ever is.
There's all sorts of shit you might have to put up with in a shitty job: the work, the environment, the customers. This job is shitty, but some people are bothered less by this particular kind of shitty than they are by angry customers yelling at them or by laying bricks in the Arizona weather. Sure, you might have to see some really screwed up shit among the sea of nip-slips and racial slurs, but you get to do it in an air-conditioned office without killing your body like the guys in the Amazon warehouse. Some people prefer that.
Different people have different degrees to which they'll tolerate the various ways shitty jobs are shitty. At least one person the author interviewed said "well, it sucks less than Walmart". Back at the point in my life when I was doing shitty jobs I would have preferred brick laying or custodial work but I wouldn't have turned down this job if I needed it. Of course it's terrible for you and turnover is high but all the equally shitty jobs are this way. There really is no winning in the "minimum wage or thereabouts" income bracket.
Edit: Since apparently this opinion is unpopular can anyone tell me why this particular implementation of shitty job is worse than all the other implementations?
Disregarding for a second that you're dismissing this entirely without having done so much as one minute of the job yourself...
A lot of manual labour jobs are protected by unions, which establish safety standards, limits to shift length and healthcare to help employees when they need it. Sounds like these employees could do with the same.
>A lot of manual labour jobs are protected by unions, which establish safety standards, limits to shift length and healthcare to help employees when they need it. Sounds like these employees could do with the same.
I'm not comparing to those jobs. The union dishwasher in some public university cafeteria gets time off, healthcare, etc, etc. that makes his or her job much less shitty than the infinite-term temp that works beside them. I'm comparing to the temp.
You can make a job better/worse by adding/subtracting pay, benefits or working conditions. If I could get tech pay and benefits for construction work I'd be doing construction.
If we're gonna compare this job to others then we should compare to other jobs that are "equally shitty" not jobs that are similar but actually compensated more highly if you roll the benefits into total comp.
A job which gives you PTSD in a few months by having you sit in a chair and look at images is, by most human standards, astonishingly awful.
People go to war zones and put their lives on the line to have their minds harmed in this way; in contrast, these moderators don't even get hazard pay.
The way you phrase your statement doesn't end on a note of solution or hope, and "the perfect is the enemy of the good."
"Nobody ever is," is also an absolute dripping with sadness. Just my 2 cents on the reactions, I doubt it's because they think the working poor aren't systemically disadvantaged.
Let me take a stab at it - perhaps it makes you feel disingenuous to some degree to propose large-scale, "silver-bullet", beaten-to-death solutions/strategies.
Reason being that, from a practical/cynical perspective, these proposals can too easily be co-opted into red-herring cudgels intended to end discussions before they delve into the realm of the more granular, nuanced, uncomfortable details.
Not because of the exposure to "bad things" like the article pushes, but because I've always felt totally fine (even bored) looking at that stuff, and I don't agree with censoring it.
It's just "meh", another day of reality on Earth; the Internet is just a mirror held up next to it.
Will we ever collectively realize this? It seems younger crowds are more normalized to this stuff because they grew up on the Internet. But add in the hypersensitivity in today's public social network sphere, and we're all freaking out over anything even potentially flammable.
I think you're on the far side of the "desensitization" process. It's not entirely normal to have a flat emotional response to everything, including images of human suffering.
Certainly a good point. I'd add that when it comes to suffering, I certainly feel discontent with that happening:
In the article, the example of a stabbing video is given. I would strongly feel the need for retribution or justice for that victim. Same for all content that shows someone being hurt.
I guess what I mean is that it's wrong to try and purge all this stuff as if it just doesn't exist. This is real content, real people, being hurt in reality. I'd feel disgusting trying to censor it, not disgusted by the content.
Is not publicizing (in a system like Facebook, designed to proactively promote and spread attention-grabbing content to any audience it can) the same thing as censorship?
In other words - I sympathize with the hard-ACLU/EFF stance on free speech. But there's a difference between government censorship and societal moderation - the latter has always existed, and the scale and automation of modern platforms in publicizing content that normally wouldn't spread so far is what's new.
Should people be aware of bad things that happen in the world? Absolutely. Is broadening the audience for disturbing videos the right way to raise awareness? Maybe not.
And if folks really feel such content needs to be published, they still have more options and reach than they did in years past even if "mainstream" places like Facebook moderate them. Granted this last point is getting a bit trickier, as people go after registrars and web hosts themselves for political reasons occasionally (i.e. if this was Cloudflare we were talking about I'd be in full agreement with you - then again Cloudflare doesn't run a recommendation service that automatically causes new unexpected content to appear in front of billions of people).
Maybe Facebook should just let all videos be posted, except when the video itself (not the act) is illegal. Pretty much the only thing that would fall into that category is child porn.
I'm not sure I really understand why big companies feel it's their job to censor the world. Just let the floodgates open: allow violence, allow porn, allow whatever else. Who cares? Maybe it would actually be good for people to see. I feel like we are living in an outrage culture that wants to censor everything. It's not healthy for individuals or society.
> I'm not sure I really understand why big companies feel it's their job to censor the world
Because that's what the users and the customers both want. Facebook is popular and profitable BECAUSE it provides a mostly-safe (compared to eg 4chan) place to communicate, not despite that.
This is exactly right, and is the correct answer whenever the free speech argument comes up.
If people see offensive content on Facebook, but not on a competitor, then both the users and the advertisers are going to abandon Facebook for the competitor. Every company tries to give their customers what they think the customers want.
Lol. In the real world, "child porn" is just about the most difficult thing to define and enforce. Which country's law? The most restrictive (no bare skin on anyone)? Or the most permissive (nude children OK)? What if the person looks underage but isn't? Or is underage but looks older? How do we handle art? What about existing content (Hollywood)? What about political expression (naturists/nudists)? For the people who actually deal with these decisions, "child porn" is a useless term. That term is thrown around as if everyone knows what it means. In reality, there is no real agreement.
It's not difficult, though. Just take the highest common denominator, then apply your own rules; in this case, Facebook's own rules. 18+ is common in most countries, so that's a safe guideline. FB disallows porn/nudity, so that's also easy enough (they should be able to filter out 99% via image recognition). It's not difficult to set guidelines there, and I don't know why you're trying to make it sound like it is. FB doesn't need to be tolerant or skirt the edge in this regard. Nobody does. There is no reason anyone would post nudity of any sort on Facebook.
I figured that if I didn’t mention child porn I would be downvoted. But it looks like that’s happening anyway.
If I had things my way, no piece of information would be illegal. Child porn, nuclear weapon plans, whatever, I don’t care, it should all be legal.
But I accept that it’s a fringe position, though a completely tenable and consistent one. It’s all information and I think it’s ridiculous to try and censor something that only exists in the abstract.
Moreover, once you start censoring you will never be able to stop. Even though I’m sure people will do bad things with information, I believe that history has shown that the good will outweigh the bad.
Your position is consistent, but for most people it is not tenable. It's hard to claim that the "information", as you put it, is abstract when it exists in video format that can be watched. And when you make a statement like
> Moreover, once you start censoring you will never be able to stop.
your position becomes less credible. You need to argue that the censoring currently being proposed or done is in fact bad for society, or that we'd be better off without it. Engaging in slippery-slope reasoning is not helpful.
> But I accept that it's a fringe position, though a completely tenable and consistent one.
Acceptance of child porn isn't tenable; it literally creates a market for rape. Sure, you may believe markets are an unstoppable force in human nature and that suffering is inevitable, but those are just the axiomatic beliefs you've chosen to subscribe to. If you don't care about the suffering this imposes on others, are you willing to accept the same degree of insecurity for your own person?
You think child porn is pretty much the only thing that is illegal to put on video? I think you should do further research on this issue. There's a lot more than just child porn that is illegal to record and distribute.
All societies have regulated speech. I don’t know anyone who doesn’t believe this should be so. A desire to censor some things should not be spoken of as a desire to censor everything. Such hyperbole undermines your position.
With Facebook and Google speech is easier to disseminate. It’s also way easier to target said speech. No one can be vigilant at all times in terms of checking sources and accuracy of what we see/read/watch. People are easily duped. People are easily led to believe things that are obviously false. The anti-vax movement is an example of this.
We have entered an information age unlike any other in the past. This ability to sway, target, and reach so many at scale for so little cost has implications that may cause society to re-examine the nature of free speech. If Facebook lets the floodgates open, as you say, then this re-examination will come much sooner, will likely be the product of outrage, and will thus end up hurting Facebook in the long run. Facebook would like to prevent this from happening, hence the steps it takes toward mitigating the public perception that Facebook enables bad actors.
> Who decides what specific ideas should be outlawed?
The politicians who write laws. Lawmakers decide what should be outlawed.
I'm not sure why people paint this process as complicated. We live in a society, that society decides what is acceptable and what is not. You are welcome to disagree with the conclusion, but it's not some shrouded mystery how we make the choices we do.
"I'm not sure why people paint this process as complicated."
Reading between your lines here, it seems like you have never tried to define what is allowed behavior and what is not, and, more importantly, tried to apply that to specific issues.
It's easy for everyone to agree that pornography, hate speech, etc. are bad. But what is pornography, and what is hate speech? Where do you draw the line...
On the other hand, the political process is unwieldy and unresponsive in this age of instant communication, which is why it's not working that great lately.
Well, society decides. Sometimes societies have a framework for deciding the grey area between individual rights and the right of society to regulate (in the form of government). That’s what political and legal battles are all about. There is nothing unfortunate about this though. What is good at an individual level may not be good at a societal level.
Ignoring for a second what should or shouldn't be allowed...
now the moderators are just subjected to constant child porn or potential child porn, which, let's be honest, is still essentially the same issue re: their mental health.
Child porn is a criminal offense, something the police get involved in, with repercussions for the person who posted it and the people who viewed it. Ban people who abuse the report button and you can staff the job with just a few people, who, like the police, can get the appropriate environment and training. Plus, if I remember correctly, Microsoft already has a database to automatically match known images and videos. The number of cases left should be minimal compared to TOS violations.
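The Microsoft system the parent is presumably thinking of is PhotoDNA, which is proprietary, but the general idea of matching against a database of known images can be illustrated with perceptual hashing. Here is a rough sketch using the open-source imagehash library; the hash value and distance threshold are made up for illustration:

    # Known-image matching via perceptual hashing (illustrative only;
    # this is not PhotoDNA). Incoming images are hashed and compared
    # against hashes of previously confirmed material, so nobody has to
    # look at exact or near-duplicate re-uploads again.
    import imagehash
    from PIL import Image

    # Hypothetical database of hashes of previously confirmed violations.
    KNOWN_BAD_HASHES = {imagehash.hex_to_hash("d4d4d4d4d4d4d4d4")}

    MAX_DISTANCE = 5  # arbitrary Hamming-distance threshold for this sketch

    def matches_known_bad(path):
        h = imagehash.phash(Image.open(path))
        return any(h - known <= MAX_DISTANCE for known in KNOWN_BAD_HASHES)

This only catches re-uploads of material that has already been identified once, which is part of why humans are still in the loop for new content.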
That depends on whether the very first thing mentioned, a graphic video of a murder, is illegal in the relevant jurisdiction. The legal status of snuff films is pretty complicated.
The alternative is paying people to moderate it all day, every day, with the mental damage this causes to people who are desperate for a job.
Just blocking it yourself seems like a better solution. You don't have to be fair in what you block for yourself, and (surprise surprise) you don't need to watch it in the first place.
"Just blocking it yourself seems like a better solution. You dont have to be fair in what you block yourself and (surprise surprise) you dont need to watch it in the first place."
How does that work if nobody watches it, how do you know what to block?
>She knows that section 13 of the Facebook community standards prohibits videos that depict the murder of one or more people
>It’s a place where employees can be fired for making just a few errors a week
As I understand it, an employee has to make sure it's extremely likely that it is a murder. She is told by the psychiatrist that she can pause it, not close it. You as an individual don't need to find out. Just close it immediately if you assume it's gore.
Not everyone processes things the same way as you and many people (children, poorly educated folk) don't necessarily have the self-awareness to make such decisions reliably. And we both know there are people who delight in propagating such material and causing discomfort to others. Perhaps consider that the problem is not as simple as it appears to you.
>Perhaps consider that the problem is not as simple as it appears to you.
I don't think it's a simple problem. It's one without an optimal solution. I just think the downsides of having this stuff around are preferable to having people ruin their mental health out of financial desperation.
I also believe that reliability would go up with time; people are able to learn quite a lot.
> as users can take care of what they watch themselves.
I don't know what you mean by that. If you visit facebook, you don't get to pick what you see. Certainly a few seconds of a murder is going to be less than desirable.
You can pick which videos you play. If it looks like it's going to be a gore clip, don't continue playing it. You don't need to find out whether the example above, with the guys with machetes, ends badly.
> Certainly a few seconds of a murder is going to be less than desirable.
While not desirable, outsourcing your few seconds every few days to some poor guy who has to watch it all day long doesn't sound like a good solution to me.
That's you and me. What about my kids, if I had any? Do they have the sense to remove it, or does their curiosity get the better of them? What if they aren't in a position they can stop it themselves? What if it's an innocuous enough video until the stabbing begins, and only a second is enough to cause shock?
Which can already happen: there is no prior approval, just people reacting to reports. And realistically, kids can and do simply google for it. You don't need Facebook to find videos of people getting murdered; there are sites dedicated to that all across the internet.
Not a popular opinion, but it's not reasonable to let kids have access to the current version of the internet, and childproofing it is not a viable option. A lot of the issues we currently have with the internet boil down to the contradiction between kids using it and the inability to childproof it. If we think that kids shouldn't have access to everything the internet allows access to, it should be feasible to get a second, childproofed internet online. With childproofing in mind during design, maintenance wouldn't be that bad if you focus on access control. Kids are not allowed to walk into a sex shop or a bar, yet they get the whole internet right on their phones.