I wonder if they will push some political agenda or if they will manage to stay neutral.
Even though I have my own opinions on the subject, I think the whole "fake news" thing seems politically motivated. That doesn't mean they are wrong to do this, but it's hard to claim they are still somehow neutral.
Neutrality is a mirage. Journalism can and should be fair without trying to appear 'balanced' or neutral. For example, there aren't really two sides to climate change when 99% of scientists and experts agree. It's really only the news media that try to position issues as if they are competing arguments like sports teams. This seems to have had a negative impact overall by giving a false equivalency, for politically motivated reasons, to issues that are by and large not contestable or debatable.
There is a debate, it's just not on the questions you're looking at.
Climate change is by no means 'settled' science; one of the most important questions scientists must answer is what the climate sensitivity to CO2 is. There's a wide range of answers to this question, depending on which climate model you use, as well as many other factors.
The only debate is over the degree of how bad CO2 is, but we know it is 'bad enough' to be avoided. Whether it's really bad or just bad, we should move on to alternatives without that problem. As someone else said below, this is roughly like asking if there will be millions of climate refugees or billions?
Whether we are looking at 1.5C or 8C makes an absolutely enormous difference in terms of what policies we should be looking to enact. Everything in this discussion is shrouded in a cloud of uncertainty due to the complexity of weather, and the thousands of negative and positive feedback loops which can take thousands of years to stabilize.
Popular science articles misrepresent the science and ignore the variance/complexity, in the end hurting the cause in the eyes of those who don't agree with the prescribed policies to address it.
There is really no way to control 3,000 people and have them all share the same understanding of the moderation criteria and apply them in the same way.
You'll be at the mercy of whichever out of 3,000 moderators ends up passing judgment on your post, whether they are liberal or conservative, whether they are having a good day or not, etc.
I think the networks way over-adopted the spirit of the FCC's Fairness Doctrine (1) and the equal time rule (2). It's so easy to manufacture controversy and call it "fair and balanced", which drives up ratings while suppressing any meaningful progress in the affected areas.
I think a lot of it has to do with CNN and cable news presenting politics as sport. Jon Stewart's take down of Crossfire is still relevant and if anything, CNN doubled down on what Jon was criticizing: https://www.youtube.com/watch?v=aFQFB5YpDZE
When someone writes "we're fucked", the only completely wrong response is to assume "fucked" has a precise and technical definition. As a bit of language, it merely means "things will be shitty". What does "shitty" mean? C'mon… yeah, this is all relative.
Can you be more specific? For example, I really doubt anyone thinks climate change is not always "happening" or that human activity has exactly zero influence on it.
A certain commander in chief has in public espoused that:
"The concept of global warming was created by and for the Chinese in order to make U.S. manufacturing non-competitive."[0].
There are plenty of misinformed people who will gladly tell you there is no such thing, and that even if there were, human activity has no effect on it. Pretending that there aren't, and that even on a complex issue there isn't a set of clearly wrong opinions, is disingenuous in the extreme.
Sorry, I have no idea how this is relevant. I thought I was asking a pretty simple question: "What is it that the average person concerned about climate change actually believes is going on?"
It appears this is the wrong place to get an answer to that question.
You're asking questions that could be answered with 10 seconds of Google use and you're being kind of impolite about it. That doesn't seem like a good strategy to engage with someone usefully.
Well, I mean, the alternative is that it's a complex issue often touted as black-and-white to allow for taking a moral high ground against rational people that reasonably disagree on some of the conclusions and proposed methods for dealing with the issue.
And if that's true, then it's possible there's no grand conspiracy and at the end of the day, either the evidence isn't strong enough or the case isn't being made well enough to get everyone on board. If the debate is being lost before they even start telling people how dramatically their lifestyles will have to change to have a meaningful impact on the issue, what possible hope could they have when those realities are laid out?
But that's probably not true, so don't worry about it. Now are you with us or a denier?
> I wonder if they will push some politic agenda or if they will manage to stay neutral.
I don't. They will, of course, push a particular agenda. In the worst possible way.
As we've seen with Twitter and other organizations, the ToS don't matter. Or at least, don't matter for the right people. The "bad people" get punished even if they don't violate the ToS. The "good people" don't get punished even if they do violate the ToS.
I'd be OK with even egregious ToS, if they were applied evenly. But as we've seen too often, they're not.
Whatever agenda FB and friends are peddling, it's hidden in the guise of "safety", and "fairness", and "community standards".
That's because the real incentive FB and Twitter have is to avoid PR fallout - and one side of the American political spectrum has the upper hand in that battle.
For the demographic FB cares about, yes, they lean left (which is what I am assuming you are getting at, and I agree with you that most of the time it's safe to just do left-leaning PR). But Target openly supported bathroom access rights (a left-leaning position) and has been met with a huge backlash from many right-leaning markets, which has materially impacted its bottom line through defections to Walmart and e-commerce.
What is "neutral"? Neutral as in halfway between political extremes, even if that does a disservice to the truth? Or neutral as in only reporting fact-checked stories, even if they're incredibly biased to one side?
There is obviously false, and then there is everything else. I guess the best facebook can do is weed out the obviously false. If they do that, it is a guarantee that they will get attacked for being biased, because with the shifting of the political winds usually one party or the other spreads more misinformation.
> I think the whole "fake news" thing seems politically motivated.
Sure, branding real news as fake news is politically motivated, whereas labeling what is clearly the equivalent of the most absurd supermarket tabloids as "fake news" is accurate and appropriate.
It is curious that many people are somehow oblivious to tabloid headlines when they appear on the internet, whereas nearly everyone discards the supermarket checkout stand "The Pope is a space alien!" stories.
The WMDs that Cheney, Condoleezza Rice, Donald Rumsfeld, and Colin Powell were referring to were nuclear weapons [1] though, not chemical weapons. We already knew he had chemical weapons:
> President Bush, Vice President Cheney, Condoleezza Rice, Donald Rumsfeld, Colin Powell and George Tenet, to name a few leading figures, built support for the war by telling the world that Saddam Hussein was stockpiling chemical weapons, feverishly developing germ warfare devices and racing to build a nuclear bomb. Some of them, notably Mr. Cheney, the administration's doomsayer in chief, said Iraq had conspired with Al Qaeda and that Saddam Hussein was connected to 9/11.
That's the first time I hear about nuclear weapons in this context - I was always under impression that this was about chemical weapons, which they found a lot of evidence of after the invasion.
Read the article I linked above. It's from that time period and Cheney and others specifically tried to link Iraq to nuclear weapons and Al Qaeda. It was fear-mongering and lying to justify the war and the Bush administration knew it: http://www.newsweek.com/2015/05/29/dick-cheneys-biggest-lie-...
I specifically recall LOTS of talk about "mushroom clouds" from the Bush administration leading up to the war. We knew they had chemical weapons because we sold them to them.
Wow! This is interesting. I thought the "WMDs in Iraq" was fake news, turns out the "no WMDs in Iraq" is fake news! So much fake news, hard to sort everything out.
The interesting part to me is that it tends to be the lesser educated who sense that there's something wrong with the news they are fed on TV & in newspapers. I think this is likely because college-educated people tend to use more pure intellectual horsepower (which is easy to fool), whereas the lesser educated use more "intuition".
That said, the lesser educated can simultaneously be far more easily led astray by laughably obvious fake propaganda (usually right wing) that an educated person would never fall for. But the thing is, those whose job it is to manipulate public opinion are experts in where the weaknesses are in each type - different tactics for different audiences.
Ultimately, I see the root cause of this dichotomy as a side-effect of The Death of The Expert.
Edumucated Folk live in one Universe, and Working Joe on Main St lives in another Universe.
By the time you finish 10 years of secondary education, you think you're smart, but mostly all you've done is chain together countless memorized assumptions about how various model parameters should work to over-fit your mental interpolator of the Map of the World.
You become so educated that you filter out more than you realize. You don't even notice, like a fish never notices the ocean in which it swims.
The Academically credentialed refer to others just like themselves, because Authority only works if you mutually vouch for each other, by adhering to the same rules. Outsiders are Othered, smugly laughed at and ignored like Morlocks.
Look at any big example where The Experts not only were off, but were Not Even Wrong, turning a Moonshot into a Neptune shot, then trying to cover up their mistakes to save face so they won't be de-credentialed and forced to go live in the sewers under Metropolis.
Pick any major World event, and you'll find The Smartest Guys in the Room, who earn more in 1 year than Joe on Main St will in his entire life, and who COMPLETELY BUNGLED whatever they claimed they knew with near absolute certainty, because they are Scientists who use Evidence and Math that nobody can proofread, but trust us buddy, you didn't get a PhD from MIT like all of us did.
1. Iraq WMD: look how spectacularly wrong the several thousand Experts from hundreds of Professional Fields were.
2. 2008 Depression.
20,000 Genius Quants and Nobel Prize Winners and top billionaires and the Gods Upon Mount Olympus all bet their reputations that a 1-in-60 trillion Outlier event could never, ever happen.
This Time It's Different.
The price of houses will never go down, so it's an AAA double-certified bet to use it as collateral to take out huge loans by increasing the multiple by which you are leveraged in your hedged bets.
And it all blew up.
3. Syria. All the idiot Experts yet again assembled to recommend the real solution is covertly arm ISIS to topple Assad, then send in the Marines to whack ISIS and install some new puppet.
All the experts yet again were so blindly certain that their models were the Territory that they stopped looking at the newest changes on the Map.
Oh, now ISIS declared War on the West? Well that fact can't possibly fit into our Narrative that we Experts know is scientifically true, so let's just ignore it.
Oh, what's this, hundreds of thousands of social media accounts are joining ISIS and threatening to overthrow us in our own Nations?
Well that's just preposterous, we know with absolute certainty that 99.9999999% of Muslims are Peaceful. Therefore, let's invite 10, then 20, then 100 million of them to cross our borders and live next door, all so we can retain our Cognitive Dissonance that we are The Experts, we can't be wrong because otherwise why should I have a PhD and earn 20x more than you, a mere Uber driver with no credentials?
4. As Programmers, we are the Enlightened Class, we build the future while Joe Blow on Main St just lives in the Minecrafted Utopia which we sculpt for him. We are Urban Liberal Atheist/Pagans with refined intellectual pursuits, we love Truth and Equality and by golly we really are going to CHANGE THE WORLD, hold hands, sing Kumbaya We Are The World around the SpaceX AI Rocket that will take us all to our new life awaiting us on Mars.
Meanwhile, out of 18,000,000 or so code monkeys on Earth, not even a handful have any idea that all of our computer security is fake, the NSA quite literally records and decrypts everything, using 10,000 new 0day per year which they purchase from Respectable Programmers like you who have a bright future and just want to do the right thing by cooperating with the NSA for a well earned comfortable upper-middle-class salary.
Then Snowden appears and pops your Truman Show bubble like Copernicus deprecated Ptolemy.
All The Experts assemble like Avengers yet again. They all recommend: everything is fine, Mr Joe Blow, we paid a vendor millions for a pretty dashboard of Big Data Analytics charts to tell us that everything's fine. We're in The Cloud, we're patched, we use 2FA on E2E with Axolotl Ratchets stacked to the Moon, and we're Lifetime Gold Donors to the EFF and several Houses of Fierce, Intrepid Journalism.
5. Guccifer the Shadowbroker hacked the shit out of everything.
You name it, it's compromised. VPN, ssh, every web, email, file, chat, dns, ntp, database, all of it is now known to be hijacked.
Can we still trust the banks, the utilities, the telcos, the defense contractors, the media conglomerates, the entertainment industry, the software, the OEMs, etc.?
You say Trust Us, we're the Experts.
See how this works? Processions of Expert Pied Pipers lead us off a cliff into the ocean like lemmings.
Is Joe Blow on Main St a dumb racist redneck for taking one look at that ol' Bull you're selling, and instead flipping the Series of Tubes to check out what Alex Jones and /pol/ and UFO chasers and Chemtrails kooks and Reptilian Agenda paranoids and monetized Youtubers have to say about The Real News?
From where I'm at, living in a cardboard box on Main St with no future, I'm going with what Joe says he saw Alex say about the Pizzagate-Soros-Globalists. I may not be an Expert beyond Main St, but I do know the only person who never lied to me was the hacker who leaked all the Experts' shit to Wikileaks. It's not a perfect Map of the World, but it's a helluva lot more accurate than the total disasters we got by listening to The Experts.
Complete tangent, assertion: if Equation Group had, like, 10k 0day per year, they would have already run out of "good" code names. We wouldn't see EPICBANANA or EXTRABACON at all, because all the fun names for sploits would already have been taken. The only way you get those names is by combining words from an approved list, maybe rolling the dice until you get something that doesn't suck.
I wonder if one can do a kind of "german tank problem" analysis on the likely size of the "cyber arsenal" (LOL) based on the frequency of code words and awesome codenames...
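For fun, here's a minimal sketch of that "German tank problem" estimate, assuming (a big assumption) that each leaked codename could be assigned a rank on some finite approved list; the ranks and numbers below are entirely invented for illustration:

    # Frequentist minimum-variance unbiased estimator for the German tank
    # problem: given k samples drawn without replacement and the highest
    # observed rank m, the population size is estimated as m + m/k - 1.
    def estimate_list_size(observed_ranks):
        m = max(observed_ranks)   # highest rank we've seen
        k = len(observed_ranks)   # number of leaked codenames
        return m + m / k - 1

    # Hypothetical ranks of five leaked codenames on the approved list:
    leaked_ranks = [14, 57, 92, 130, 211]
    print(estimate_list_size(leaked_ranks))  # ~252 names on the list

The catch, of course, is that codenames aren't numbered, so you'd need some proxy for rank (say, alphabetical position against the word list), which is where the analysis gets hand-wavy.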
One thing in addition to everything you mentioned is that there is NO ACCOUNTABILITY. How many news people got fired for the poor reporting around WMDs and Iraq? I only know of one prominent news person who got fired: Phil Donahue, for questioning the status quo. We caught Donna Brazile giving Hillary the debate questions (via seemingly the only reputable news source around: Wikileaks). Sure, she got fired from CNN, but now she still has a cushy job at the DNC. I will NEVER vote for a DNC party member while she's still on the payroll. FWIW I voted Obama his first term.
Oh, didn't you hear, things looked bad but they couldn't prove anything, just like the 2008 banking fraud, they couldn't find evidence to prove anyone did anything wrong! (wink wink, nudge nudge)
I feel honored to read such an epic rant as a reply to my comment!! It truly is amazing what they can get away with essentially in broad daylight, but then when you have the entire media machine manufacturing reality for the masses, it's much easier.
It's hard to prove a negative, but I'm open to being proven wrong about the top 5 newspapers. Here they are, please provide an example of an opposition article to the recent Syrian attacks from the day after the attacks.
USA Today
The New York Times
The Wall Street Journal
Los Angeles Times
New York Post
We are all human and we bring with us all sorts of biases.
I was listening to a podcast on my commute today, and the person mentioned a book called Think Twice, about decision making and how we make decisions with these different biases. I think I even remember a great post on HN about all the most popular biases.
I guess what I am getting at is that it's not at all clear-cut how someone's post will be judged, given all the baggage each one of us carries.
They're doing this because people were calling for tougher penalties and an independent watchdog for social media. They don't want oversight, so they pay extra for moderation.
They will definitely push their own agenda, because that's the agenda of most of their market.
We democrats tend to stick to tech pretty hard, and we buy things. Billy joe bob will post his pro-trump rants on Facebook all day, but will still go to a walk in store to buy things.
Of course I'm generalizing and oversimplifying it, but the leadership at FB is strongly left leaning, and they know how to cater to their audience, keep them happy and make money from them.
The issue is of course branding everything that might help the right "fake news". Sure, there's a lot of it (from both sides really) but the idea that they have the power to shape people's opinions in a political direction, and are willing to do it is dangerous for all of us.
All of their actions should be transparent, and a forum should exist for people to discuss those actions and specific moderators (who can, and likely should, have a pseudonym associated with their actions); that's a little harder for moderation decisions that aren't made public.
All that speculation seems kind of weird to me. Zuck is already president of facebook (pop. 2bn), why would he want to be president of America (pop. 300m)? As the article you linked to says, the Chan-Zuckerberg initiative is the sort of thing retired presidents do - he's not angling to become a world leader, he already is one.
If you want the world to be a certain way, president of the USA does a lot for letting you make that happen.
If you want America to be a certain way, that holds even more strongly. How 'murica moves forward will depend heavily on the political parties' stances. Being able to move (I'm guessing) the Democrats will have a massive effect on politics.
Not being beholden to a share price also really helps.
The guidelines don't matter if enforcement, non-enforcement, and target selection aren't audited for compliance. Give me a clear and fair guideline and an agenda to push / person to punish, and they'll get suppressed.
Moderators on Reddit 1) have full control of their subreddit and only a few basic guidelines/rules to follow, but can basically do what they want and ban whomever they want, and 2) are not paid.
This is correct. I started r/cubancigars for example. It's mine. I can kick every single moderator off the team, shut the sub down, or change the rules without care. I don't, but I can, and nobody would stop me.
No, my comment illustrates the basic facts of being the top moderator on a subreddit.
If I'm not breaking reddit rules, then as long as I've been active any time within the past 6 months, I own the subreddit and can make changes to the direction of the sub and its moderation team as I see fit.
That subreddit was created, by the way, because I disagreed with rules regarding the discussion of Cuban cigar sources which ship to the US in the main cigar subreddit. They're free to administer their subreddit as they see fit, and I'm free to do the same.
The moderators on Reddit and Stack Overflow aren't paid, and broadly set their own guidelines.
Anyone with enough points on SO becomes a mod automatically, and gets to deal with "how dare you close my shitty off-topic question, I have every right not to follow the rules and homogenise SO into every other discussion forum". Anyone who starts a subreddit becomes the mod of it and can promote other mods at will.
What's weird is that Facebook cannot rely on their users to report blatantly criminal acts witnessed by thousands of people. It probably says more about Facebook users than the platform and makes me doubt that doubling this or that team size can make a meaningful difference.
Especially with this approach of manual monitoring which will probably just result in more questionable deletions Facebook is already known for.
Users do report posts, from what I understand quite regularly. It still requires manual moderation though, otherwise the reporting process can be abused (think anti-competitive purposes).
I've yet to hear of a way that this is solved without hiring thousands of people in what are horrible jobs (see Adrian Chen's excellent reporting on the issue[0][1])
I have no doubt that this will eventually be solved or assisted with ML, and that the solution is likely to come out of FB or G.
Those Chen articles hit close to home when I read them, because I've had to verify and act on child pornography complaints at a hosting service. Almost eight years later, I still have nightmares about the (thankfully) little I've seen. When I got into hosting I didn't realize dealing with stuff like that is table stakes; that was my first, and last, hosting employment, but I'll carry that aspect until I die.
That experience really makes me feel bad for the 3,000 new hires, honestly. I couldn't imagine moderating the human condition, which one could argue Facebook basically is. They'd better not clean out Adecco, and actually pay those people with the long-term damage of the job in mind, but that won't happen. Would actually make for a good union...
Completely sympathize. I had an experience in my black hat days - broke into a server and found folder upon folder of JPG's. Stupidly downloaded some of them and opened the first to find an image so disturbing that I can't even begin to describe it.
We were a bit conflicted about what to do (more how to do it), and ended up reporting it to both the US and Australian feds (which I suspect may have given me a free pass on one of the crazier things I later did).
I really didn't take it well, but one of the guys in our group was inspired to start a vigilante group that would hunt these distribution networks down and it achieved some success in the 90s.
Hopefully these employees can eventually be protected with some basic level of ML that would filter out the worst of the worst (apparently Microsoft Research has a well-developed fingerprinting system for child exploitation images) - because I'd really hate to imagine the scenario you and Chen describe, and what I briefly experienced, becoming more common.
In many places in the western world, facebook users are some 90 percent of everybody in a certain age group. So if it says anything about Facebook users, it says something about everybody.
Facebook is approaching 2B users. Let's say that 1 in 1,000 users per day posts something that somebody else flags, or that the system flags because of the words used. That's some 2M posts or comments flagged per day.
Facebook currently has 4,500 people in the community operations department ("moderators"). Each moderator then has to screen around 450 posts, which is difficult in an 8-hour shift. So obviously, that's not the way it works. They surely have algorithms that sort flagged posts by priority: more people flagging means it's more urgent; certain profiles are more urgent, for example people who have used violent language before; users with certain friends are more urgent; users at a certain age are more urgent; certain hours of the day are more urgent; posts with certain words associated with violence or suicide are more urgent than posts with racist or sexist slurs or nude material; etc.
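A rough back-of-the-envelope check of those numbers (the flag rate is just my assumption above, not a Facebook figure):

    users = 2_000_000_000
    flag_rate = 1 / 1_000        # flagged posts per user per day (assumed)
    moderators = 4_500
    shift_hours = 8

    flagged_per_day = users * flag_rate               # 2,000,000
    per_moderator = flagged_per_day / moderators      # ~444 posts per shift
    seconds_per_post = shift_hours * 3600 / per_moderator

    print(f"{flagged_per_day:,.0f} flags/day, {per_moderator:.0f} per moderator, ~{seconds_per_post:.0f}s each")
    # -> 2,000,000 flags/day, 444 per moderator, ~65s each

About a minute per item, which is why prioritization (and eventually automation) is the only way the raw queue becomes workable.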
But even if they correctly identify a threatening suicide or hate crime, how do they prevent it? Shutting down a live video is one way, but contacting authorities would probably also be necessary. How do you do that when your users are spread over 100,000 different jurisdictions? It's a big task.
Only if you also count their sister properties, including FB Chat, WhatsApp, and Instagram. Facebook the social network itself doesn't have 2B monthly active users, as many have left or rarely return.
Zuckerberg has political ambitions; a little help from the Facebook moderation team might earn him a few favors. He certainly wants that infrastructure in place once he runs.
Easier to sell ads if users don't have to deal with differing opinions. Arguments are hard to avoid and disrupt the casual consumption experience.
Conservative voters probably represent a much smaller portion of facebook's user base than liberal voters, given facebook's age demographics.
There is definitely an asymmetry in which kinds of hard-line political pages are likely to get banned. They have relaxed a bit in the last few months, but many political pages constantly have to worry about "getting zucced".
I come from significant privilege, so I won't be hurt by people dividing (and fighting among) themselves by ambiguous lines drawn in the sand. But, I don't understand how so many people that don't come from such a background think of globalism as a bad thing.
Because globalism means the extremely poor elsewhere in the world get less poor, while the relatively rich in the US get relatively less rich, and the extremely rich in the US and elsewhere get richer still. That does not sit well with a lot of people.
Observation shows the first part is promised but does not happen. All the money goes into one pocket, and as a hint to whose: it's not the lower or middle classes of any country.
It's not something I have thought about before, but I wonder how prevalent the bystander effect [0] is in online media streams. Say, if there are 1000 people watching a live stream on Facebook of a horrendous crime, will no one report it because everyone thinks someone else will report it? If that's the case, Facebook is up against human psychology in hoping people will point out these acts.
I'd expect the lack of actually seeing the bystanders to weaken the bystander effect.
Lack of physical proximity to what is happening might lessen the urge to act.
Pressing 'report abuse' is really easy, but calling the police based on online material is a lot harder.
Based on my experience reporting security issues to them and their condescending and irrelevant responses, I would just call the police. I wouldn't bother with Facebook.
>What's weird is that Facebook cannot rely on their users to report blatantly criminal acts witnessed by thousands of people...and makes me doubt that doubling this or that team size can make a meaningful difference.
Then wouldn't it make sense to add more people whose job specifically is to report criminal acts instead of relying on the users to do it?
Are the reviewers going to browse random content? Somewhat possible with trend detection (make a human look at quickly rising stuff) but very wasteful since most of that will be either benign or incomprehensible to the outsider.
Will they watch private streams and read private messages? That sounds like a privacy disaster even Facebook would prefer to avoid.
Which leaves them with reports from users. Basically, reviewing millions of "hate speech" reports and either effectively instituting a very strict speech code or ignoring the vast majority of them, leading to further complaints.
It's a design problem too. If it's not obvious that you can/should report something then you're never going to do it. Plus if there isn't enough feedback about your help then you won't feel like you've made a difference.
3,000 people on top of the current 4,500 is a big addition. If all these people are dedicated to promptly handling complaints and ToS violations, it might indeed make the difference.
I won't ask whether it is economically viable; I guess he knows what he is doing. Facebook is not losing money anytime soon.
That's 7,500 people who will be "forced", as much as free employment can constitute force, to do nothing but look at questionable, often offensive content.
What a funny way our economy has evolved. 100 years ago many of these people might have worked in farms, 50 years ago they might have worked in factories. Today they sit in an office looking at ostensibly offensive material that facebook deems the rest of society should not be exposed to.
>We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery because, according to Malthusian Darwinian theory he must justify his right to exist. So we have inspectors of inspectors and people making instruments for inspectors to inspect inspectors.
I think machine-learning solutions for quality control of posts and screening of offensive content will be possible soon enough. We are not there yet, but the inroads are significant.
Well that's a big problem. Once you mechanize it, then there isn't even anyone to appeal to when you are subject to an unjust outcome. I wrote at length on this post above, but do we really want computers deciding who is or is not allowed to take their clothes off?
OK, 7,500 people dealing with (according to the article) millions of reports every week. So let's assume 2M reports, the smallest number that counts as plural "millions". Assuming a 40-hour week and no breaks, this gives FB 300K man-hours to handle 2M reports, meaning each person has an average of at most 9 minutes to view the content, investigate, make a judgment and take an action. Anybody who's worked in such a role: is that enough time?
This is a tremendous amount of time per post. A 6-7 posts/hr throughput is incredibly slow, unless their guidelines require them to produce specific, rather than generated, feedback per post.
However, the actual number of people reviewing is unlikely to be 7500. A decent percentage of that headcount will be dedicated management, HR, analytics and administrative staff.
As another poster pointed out, you're assuming the reports are always unique, where in reality the number of unique instances/articles facebook has to investigate is going to be less than the number of times people report something.
Also 2 million isn't "the smallest number that counts as plural millions" - anything over 1 million is considered "millions".
Lastly you seem to be assuming that this is 100% manual and that each report needs to be treated as seriously as each other report. I'd imagine that at the very least these reports will be prioritized and if something is reported once by a user who reports 100 things a week, it will probably be ignored - giving more time for higher priority things.
The question is what '2M reports' means. If that means 2M reports are made by users, but they can be made on the same page/post/user, then they can be easily grouped. So, if you get 500 reports on a single post, the reports can be investigated in bulk and that would make the number of investigations smaller. I believe this is the case.
Nine minutes per piece of content would be _significantly_ more than any other company I've seen has given to reporting of user-generated content. I would also agree that unless it's a video, nine minutes is enough time.
What worries me is how these gatekeepers will be chosen. Will there be criteria that makes sure they aren't just applying their own biases? Will they come from all sides of the political spectrum, or will one political leaning be better represented? Will political content be reviewed by more than one reviewer?
I can't find it anymore, but there was a leaked pdf of their review guidelines. It was about 50 powerpoint slides and it's certainly more specific than "delete what you don't like". They had quite a few examples, usually in pairs (one to delete, one not to) to highlight the demarcation. I seem to remember stuff like "Sean Connery is stupid" (allowed) vs. "Sean Connery is stupid, like all scots" (delete, because it insults him based on his nationality).
(Not sure about the specific example–"stupid" may be too weak an insult)
They also devoted a surprising number of pages to when to delete pictures of people urinating. Must be a much bigger problem than I would have thought.
Randomly. They'll just be the content janitors, and I doubt they will be subjected to anything more than the most rudimentary ethical screening or training.
I won't be a bit surprised if there's a systematic effort to encourage applications from within socially conservative entities such as religious groups, and I'm going to be carefully monitoring their corner of the web for signs of it.
Biased "junior level inquisitors" might not be all that bad as long as they represent a spectrum of biases and are reliably kept from specializing on opinion bubbles relevant to their personal bias. In any case, Facebook would be well advised to put some greying neckbeards in charge who remember everything about /. metamoderation.
He is keenly aware (cf the privacy bugs fiascos of mid 2010) that trust in the system is hard to build up and easy to lose. It takes enormous sincere and public effort to pull out of loss-of-trust spirals.
Oh great, like I want my feed curated by someone from the other side of the world with radically different sensibilities. What could possibly go wrong?
Since social media use itself contributes to lower self-image and depression, how closely are they going to look at their own product as a contributor to an individual's problem getting worse? It would seem that the work being done to drive engagement is most problematic for those at risk.
Probably not at all. Facebook's collective desire for higher engagement and profit will likely outweigh any concern for individual or collective users.
I hear what you're saying, but from a slightly different angle. I wonder if the profit motive creates a conflict of interest the same way that corporate news tends towards bias to satisfy advertisers.
I am thinking of exiting Facebook for at least a couple of months because my posts/shares (which tend to have a political slant or at least broader perspective to them) don't seem to get any reaction or be shared anymore. Neither do music, alternative culture, or sustainability/environmental posts.
If Facebook is unable to give people the dignity to fail at debating one another and be challenged by new ideas, then that may not be compatible with democracy. I hope they fix whatever is going on with their feed algorithm, and maybe 3000 people training AIs will help, but I wonder if the problem isn't technology.
Your premise is overbroad. Some kinds of social media use patterns can certainly have such a result. For others it's a great help. I had catastrophic problems with self image and depression long before there was such a thing as the public internet and my online social networks are absolutely essential to my health and wellbeing.
I didn't claim it was the only cause/contributor. I only pointed out that by now, there is a growing body of research demonstrating that it contributes to such problems.
I suspect the long term plan is to create a training dataset labelled by the 3000 people and, when they have sufficient training data, let machine learning / AI take over.
Sorry, my comment was ambiguous -- I was referring to the Google result. Thanks for pointing it out. The commenter that I was responding to made a great point that I agree with and was highlighting as a particularly good example of ML gone awry in the wild.
Other than the subjective statement that quarters are "useful", every fact in the google response is wrong.
Quarters are:
- Worth $0.25 (not $0.50)
- Not the largest. (Dollar coins and half-dollars are larger)
- Made of a blend of copper and nickel. (No silver content for circulating coins)
The picture depicts a Standing Liberty quarter, which is no longer a circulating coin.
It's amazing that Google would put that in production.
I thought this screenshot may have been from an early and outdated rollout of the Rich Snippets, but no, I just searched and Google still says a quarter is worth 50 cents.
Easily explained as an artifact of just how accurate Google's ML systems really are.
What you're seeing is a hyper-accurate snapshot of the exchange rate between "a shave and a hair cut" and "two bits"[1]. Typically you hear about the value over a much longer period of time and the average value of "a U.S. Quarter Dollar" == $0.25.
/snark ;)
[1] "Two bits" -- Noun. Slang term used predominantly by Neo-Victorian communities living in California's Bay Area megalopolis.
That information is so bad for such an easy question, I can only wonder if google is more interested in using their userbase as a training ground than actually providing a useful service.
This report explicitly describes blacklisting and injecting news people weren't interested in. No mention of college students, so that is probably some bullshit I picked up somewhere; I wasn't following the story too closely.
> These reviewers will also help us get better at removing things we don't allow on Facebook like hate speech and child exploitation. And we'll keep working with local community groups and law enforcement who are in the best position to help someone if they need it -- either because they're about to harm themselves, or because they're in danger from someone else.
The workers aren't there to be a basis for machine learning (though that is a nice benefit). They are there to communicate with local community groups / law enforcement on an ongoing basis.
This way you will have even less recourse when you are hurt by a false positive.
I mean, the GAFAs are already not known for their stellar customer service, but if we get rid of all humans in the process, it's going to be Brazil on steroids pretty soon.
You are describing the past. That work is already done. All of these "community moderation" jobs at big companies are just checking the AI's work. If you have a billion posts an hour you're not hand triaging that with a team of 4500 people.
Buried somewhere deep inside their 1,000 page employment agreement is probably something along the lines of "you willingly agree to subject yourself to this obvious psychological torture and we are in no way responsible for the mental problems which will inevitably result from your job duties."
I was just saying last week that the last thing any of these large internet companies wants to do is hire a large room full of low-paid workers to do anything, especially here in the US.
If they are making this move they must see some large liability looming on the horizon.
Seems to me that keeping the labor cost reasonable isn't really that difficult. The only real problem is concealing the process well enough that it doesn't provoke outrage. Understand as you read what follows and the hairs on your neck stand up that this is --- after enough layers of BS are applied --- very close to what will evolve.
Just yesterday we read about how Facebook 'helps' marketers 'track when people were feeling overwhelmed, worthless or insecure' [1]. Focusing on people that exhibit this frequently, particularly if they're being bullied, would catch a bunch of these online suicide cases without squandering labor on everyone else.
Then all you need to do is invent enough euphemisms to enable a degree of profiling; the rape streams that go unreported for days are almost always gangbangers. Should be a trivial job to factor that group out, by their grammar alone.
There you go; the chronically depressed types and the gangbangers account for probably 95% of the headline making cases Facebook has to worry about. Focus a small workforce on those two groups (and possibly a few others; extremists, etc.) and Bob's your uncle.
Compare to Bangladesh, Vietnam, or the Philippines. Low-wage American workers are not very competitive in a global marketplace. If the cost of shipping and importing goods and services < the cost of complying with U.S. regulations and paying American workers, there's every economic reason to have the work sent out to less regulated, lower-paid regimes.
The way to fix this is to even the playing field by ensuring that it's not cheaper to pay to have this done overseas. Otherwise, the "American-made" companies just have to hope that's enough to convince customers to help them make up the gap in profits ... and it's usually not.
That would require a thoughtful examination of our housing policies in the US that are designed first and foremost to drive up property as an asset class. It's probably the single biggest driver of inflation in the US that necessitates increases in wages.
Capitalism operates on the basis of greed, not need. If a desired result can be accomplished at lower cost, it generally will be. Firms largely concern themselves only with first-order costs, leaving second-order costs to government, not least because the more abstract the nature of the cost, the fewer people are able to comprehend its existence.
If you're interested in this topic you may enjoy Ronald Coase's economic essay The Problem of Social Cost (which brought him a Nobel prize) and his body of work in general.
I think it's a fairly common recent strategy to hire a big pool of low cost workers with the intention of gradually phasing them out as ML tech improves.
I wonder if this will reduce the problem of fake accounts. I regularly get such friend requests, and it starts to get annoying.
Also, those seriously affect the attractiveness of ad campaigns. I dipped my toes into it once, but it looks like a large percentage of gained "users" are just fake ones...
My wife made an account for our dog, with an appropriate headshot as the profile photo, about 9 years ago. He posts regularly, and we tag him in photos.
My guess is that they only block fake accounts if they are manually reported by someone.
You would think this, but as a counter example Xbox (back in the 360 days) had a "report gamertag as offensive" (aka offensive username).
Users could pay $ to change their gamertag (~$10).
It was fairly common to see people publicly asking "please report my name as offensive", as it was known that if you got enough reports, you would be "forced" to change your name for free. People who wanted to change their name anyway would solicit reports against themselves to avoid the $10 fee.
Clearly no human was reviewing these reports... and that was a paid service!
That's true. I know a lot of people with a lot of fake accounts for entirely legitimate reasons, as well as some trolls who abuse the ease with which they can be created. A good number of people in the first group maintain multiple sock puppets because they're subject to frequent attacks by trolls who report accounts under false pretences, knowing that FB's response is as likely to be wrong as right.
Really, what's needed is a decentralized approach where no central authority exists, only mutual reinforcements. I have found a marvellous solution for the next generation of social and informational networking but alas this comment field is too small to contain it.
Long overdue. I sort of get these online social media companies skimping on moderation while they are growing and don't have cash.
Facebook is rolling in cash, and this has clearly hurt their brand. Hiring moderators to take out the worst of Facebook could go a long way toward dealing with the utter bullshit that goes on.
Reddit and Twitter have similar problems, they want to either farm out moderation to volunteer users (Reddit) or automate everything and only step in when the NY Times gets a hold of something (Twitter).
Either way, their moderation leaves a lot to be desired.
Reddit is particularly strange. I don't know how this could be true, but they claim that without the volunteer mods, they couldn't exist. Either they are lying or are just awful at running a business, neither would surprise me.
Does anyone know if Reddit actually makes money? And if they do, how? Ads seem sparse and selling "gold" just doesn't seem like much.
Slashdot’s moderation and meta-moderation system (https://slashdot.org/moderation.shtml) always comes to mind. Could something like that work for Facebook self-moderation?
Yes, I always think of this. Why is this system not in use at Reddit or HN? Is it patented? I would pay 1,000 euros if they could implement it for my Facebook group; with 2,000 members it takes an hour a day, which would be halved with such a system, saving 14 hours a month.
Even without the meta-moderation aspect, just having multiple dimensions for evaluating any given post is so useful and seemingly unique.
"Upvote/Downvote" is not nearly as useful for ranking as the range of, say "+1 Funny, +1 Insightful, +1 Agree, -1 Disagree, -1 Spam, -1 Flamebait/Troll" (I've forgotten exactly what set /. uses)
In parts of Eastern Europe, Asia, South America, and Africa, Facebook could hire 3,000 college-educated employees for less than $3M monthly, including taxes. So while this is, from a human point of view, a very decent thing to do, it's not necessarily as costly as one might think, especially when considering the possible liability or regulatory backlash they may run into if they do nothing and Facebook becomes the place for suicides and violence.
Now, if they add all those jobs in SF, it's another thing entirely.
This is sort of ridiculous, they can't seriously be expected to be held responsible for every video served on their platform. Nor should there be anything inherently worse about live-streaming of violence than that violence occurring in the first place.
I can only see this as a good thing if they manage to catch people before the act and intervene. Is this a primary goal of the program?
Yeah, random acts of violence getting exposure as they're live-streamed on Facebook has been interesting, to say the least. Without the criminal's willful publication of their act, the crimes may have gone unsolved and ignored for a very long time.
I'm not sure that we really need to stop people from publishing videos of themselves executing old men at random. It certainly seems to lead to a speedy resolution and really heightens the impact over the impersonal "Another shooting today..." on the evening news. It's virtually the same as walking yourself into the police station, with the added bonus that the whole country now hates you and is looking for you.
That seems highly unlikely to me, given the abundant history of covert violence. Obviously it's hard to assess for individual cases, but across a distribution people seem to avoid notoriety.
Perhaps we can make inferences by parallel; trolls engage in obnoxious behavior when granted anonymity or insulation from direct consequences. Is it likely that absent an audience for their trolling, they would be nice all the time?
Seems unlikely, and the historical record suggests that cruelty, criminality, and impunity are a common combination.
I don't know, these types of crimes have happened for a long time sans audience. Perhaps some are motivated by the audience, but there are definitely some who would've committed the crime anyway, and their broadcast of their criminal activity only speeds up their arrest.
I can see that politicians and businesspeople would not like this development, because it makes a) their cities seem more violent, since we can actually see someone get killed in it; b) their businesses seem less positive and soft and fluffy, since we can actually see someone get killed on it.
However, I don't see why those concerns should override the public safety benefit of allowing criminals to broadcast their crimes to thousands of witnesses and simultaneously preserving an indisputable copy of the evidence for posterity.
Steve Stephens was caught only because the exposure from his broadcast allowed him to be recognized by a McDonald's clerk hundreds of miles away from the crime scene. Without his broadcast, assuming the execution would've occurred anyway, it would've just been another routine story that wouldn't have made it past the local news.
There would probably have to be some real research conducted to find out if this is a net loss by spurring new crime or a net win by allowing us to easily capture the most oblivious criminals (who are not necessarily least dangerous).
Can't help but think of Deming's famous advice on how to deal with quality issues: "Eliminate the need for inspection on a mass basis by building quality into the product in the first place."
If Mark Zuckerberg and his product teams could travel back in time a decade or so, would they still have built everything the way they did?
Market share was likely always more important than quality.
Besides, do app developers even understand statistical quality control? I tend to think this kind of software grew up with a very black/white, functional/broken, bug-tracker mindset, a very narrow view of quality as a concept.
I've reported countless spammy comments like "free ladies!" and "free $50 per click!"
But FB's replies are 100%: "...but our community deemed that comment to be OK with FB guidelines"
FB should definitely see what Korean FB is like recently...
I predict that Facebook's current PR problem related to this is going to be replaced by a swatting problem before long. I.e. someone reports someone on Live as suicidal, SWAT/police show up, shoot their dog etc, public outrage against Facebook ensues.
Western social media seems to be one step behind the Chinese counterparts in terms of moderation strategies.
The live-streaming scene in China already went through this entire discussion, and various forms of implementation of moderation of offensive and inappropriate content, sometime last year when it was growing rapidly.
Then again, with government intervention it is so much faster and easier to enforce standards of moderation for private companies to follow.
> Western social media seems to be one step behind the Chinese counterparts in terms of moderation strategies.
I totally agree. We have the NSA so the total information awareness network is there, but we've really fallen behind China on using it to address suicide intervention.
I understand your sarcasm but I believe you have misunderstood my point.
I'm not talking about secret operations, but rather some transparent guidelines or laws issued by the government (legislative branch in the case of US? I'm not sure.) that are enforceable by private entities like Facebook.
Well it really depends on how you define the "appropriate scope of moderation". If you think controversial topics related to politics should not be subjected to moderation, then of course government censorship is not good.
I'm not saying it's good or bad, just that "a step ahead" doesn't really make sense to me. The path Facebook is heading down does not lead to government censorship, and the path American society is heading down doesn't either. That would require a huge shift in public opinion and philosophy.
They already had 4500 people. And use of the moderated features is likely still growing rapidly.
It's great that Facebook is increasing its moderation numbers, but it's unclear whether this was already planned and simply used (successfully) as a PR response to recent events.
I wonder if Facebook made much money on the ads displayed alongside this type of content (or perhaps pre- or mid- roll ads). Do they have a responsibility to treat this income differently?
Does anyone else think it is dangerously Orwellian to describe speech as people 'hurting themselves and others'?
We seem to have a serious problem resulting from people living in bubbles of information sources that only confirm their own viewpoint.
How can the solution to that problem be to have a single corporation design the bubble for everyone?
(Note: I know he's talking about actual videos of violence taking place. However, my point is that the violence is already happening, and hiding it from public view is 'out of sight, out of mind'.)
He may be referring to speech in part, but he's also referring to people killing themselves and others on camera on Facebook. I think that's what has triggered this.
Unfortunately, it kind of is becoming that way. Facebook is becoming a de-facto public utility.
The phone and telegraph companies started out as "optional" services. If you didn't like the telegraph company arbitrarily blocking journalists that criticized them, you were free to not use telegraphs.
Facebook is being tied to credit worthiness, and job applications, whether we like it or not.
And in some parts of the developing world, it is more important than a phone number. It is recognized as being so vital, that providing Facebook access is subsidized.
We can no longer pretend that Facebook is some sort of toy, and dismiss criticism by saying that if you don't like it you shouldn't use it.
> Where, how? I've never heard of someone needing a Facebook account for credit and any employer asking for your Facebook profile is a huge red flag.
Here[1]. Saying "wasted" as a status update can affect a credit score. And even if it isn't disclosed, employers, and other entities, may use your Facebook activity. This is a totally valid issue. So, yes, Facebook does matter, whether you like it to or not. It isn't just a toy, and you can't just stop using it, without it potentially harming you. I think it's totally unfair, but people now have to consider their online presence when hunting for a job.
And they don't have to ask for your Facebook account specifically. Companies like ru4.com can tie your financial identity to your social media identity[2]. Just like how, even though you never submitted anything to the 3 credit bureaus, they will find data on you, the same is becoming true with social media. I can't log into chase.com without allowing scripts from ru4.com. Luckily, I use a sandboxed Firefox profile that has never touched Facebook.
Your younger coworkers are wrong. By definition, violence is physical.
There is, however, a push within educational institutions to redefine terms like "violence" as part of a wider "social justice" programme. The crux of it is that offending someone else is a very bad thing to do, regardless of whether it's intentional or your words are true.
Is it true that violence is physical, by definition?
It is one definition, but there are other more abstract definitions that mean "an unjust exertion of force or power", "rough or immoderate vehemence, as of feeling or language", "damage through distortion", etc.
For example, the phrase "violent scolding" is stuck in my mind from a fairy tale I read as a child, and indeed, if you search Google Books for that phrase, you'll see many examples dating back to at least the 1800s, examples that do not mean physical violence.
So, I don't think it's some new redefinition of violence to use it more abstractly. I think of it similar to the term abuse. Sure, abuse often means a physical act, but it often means verbal/emotional acts.
Maybe we need a word that is more powerful than harassment, that isn't physical violence.
I've heard people say things like "virtual rape". I think it's problematic to say that something that occurs in text can compare to having your body physically violated.
Why doesn't Facebook give people a way to have input to their 'community standards'? Basically it's a black box that's presumably stuffed with lawyers, marketers, and some analytics people. I see zero evidence that there is any actual input from the people who use FB. It's essentially a dictatorship dressed in a costume of democracy, and I would far prefer it if the 'community standards' were called what they are, 'Rules of Mark's Club.'
This is a sore point for me as an artist. It's tedious when posts are removed because they depict or seem to depict nudity and you have to go through and assure some anonymous and wholly unaccountable person that they're not. One of my friends teaches art history at UCLA and - surprise - he posts lots of fine art on his wall. He has to have 8 or 10 accounts because he is constantly getting temp banned for posting famous paintings of people with no clothes.
It also bothers me on a more general level, e.g. it's fine if I take a picture of myself with my shirt off, but if one of my female friends does the same thing she risks being restricted from posting or having her account terminated, because her breasts are apparently worse than extreme, gory, graphic violence, which comes with a warning but is nevertheless acceptable to post.
That's sexist bullshit that turns women into second-class citizens. I utterly fail to understand how it's OK to share pictures of just about any violent subject matter, but any kind of nudity, sexual or not, is grounds for having your account terminated.
Here's a list of some of the things I've seen on FB over the last year, some with an automatic clickthrough content warning (which is a good idea and mostly well implemented) and some not. As far as I'm aware, none of these have resulted in account terminations for the people who posted them:
Beheadings (video, multiple examples); hangings; people being shot or having been shot; serial killers and their refrigerators stuffed with human meat; disembowelments; autopsy photos. In each of these cases I don't mean grainy thumbnails where you can sort of imagine what was going on, but photos and video of sufficient clarity to be used in a news broadcast were it not for the disturbing nature of their subject matter.
I'm leaving out other stuff that I found sufficiently disturbing that I prefer not to even describe it. I'm not into gore, beyond watching a few horror movies in a given year. But I'm pretty open with my friends list and allow people to add me to groups, so I'm exposed to a certain amount of this from trolls, and of course there are episodes of violence in the real world that are newsworthy, and I prefer my news without censorship of any kind.
You'll notice that I'm not calling for this stuff to be removed or banned from FB. I think the 'graphic content, are you sure?' warning strikes a sensible balance between protecting people's sensibilities and allowing free discussion and information. We live in a world that is often violent, and I believe that concealing the ugliness of violence often allows it to proceed unchecked. It's also true that some people become obsessed with or celebrate violence, and that admitting it as cultural currency risks desensitization or normalization of violence. Those are tricky questions to which I do not believe any one person, firm, or society has a perfect answer, but given that the instinct of criminal persons and regimes is generally to conceal rather than reveal transgressions, exposure and condemnation are probably more effective responses than obscurity and censorship.
After that unpleasant detour into the pits of human awfulness, I really want to hear from someone at Facebook:
a) why it's OK to engage with the reality of people inflicting horrible violence on others, but it's not OK to let people engage with the reality of sexual or aesthetic expression, and
b) why the 51% female majority of the population are subject to tighter restrictions than the male minority, and
c) why the 'community standards' don't offer any formal mechanism for community input and decision-making.
Think about it, folks. A picture of a healthy naked body is grounds for account suspension or a ban, but it's totally OK to show that same body hacked to pieces? That's some grade-A bullshit, and platitudes about how 'we try to reflect the prevailing standards of society' aren't going to cut it.
Automation intensifies whatever process you choose to automate, and if you automate a standard whereby erotic desire and self-expression are constrained but extreme violence and interpersonal aggression are less constrained, guess which you'll end up with more of? Likewise if men are allowed freedoms that are systematically withheld from women, guess whose freedoms are going to be expanded and whose are going to be reduced?
I demand answers on this. Facebook is one of the most powerful political entities on the planet and those who own it need to explain why, within Facebook, there is greater tolerance for violence than nudity or sexuality, and why one half of the population is subject to greater restrictions than the other half.
It doesn't seem to fit the definition of astroturfing. [1]
>Astroturfing is the practice of masking the sponsors of a message or organization (e.g., political, advertising, religious or public relations) to make it appear as though it originates from and is supported by a grassroots participant(s).
Are they concealing the source of the messages? It seems like they are just adding more people who have the authority to remove content that violates the TOS.
Astroturfing is building opinions by faking or artificially mis-weighting some opinions, like using fake accounts on social networks to vote and contribute with a particular sentiment, making it look like more people feel that way than actually do.
What Facebook does is allow and disallow certain types of content based on public guidelines. Even if those rules are nonsense, unfair, or politically motivated, it's at least not done secretly, unlike real astroturfing.
1. That this "moderation" is probably not a very good thing
2. In other countries, highly opaque organizations like "correctiv" are already involved in these "fact checking" and "counter news" operations. https://correctiv.org/en/
3. To make things worse, people (like Soros) donate money to these organizations, which have their own agendas.
(not to mention the tabloid-style exposure of this politician, who is unfortunately from the wrong political party)
The issue with work like this is that it is very distressing. Having to look at videos and pictures of horrific acts (suicides, child porn, etc.) is not a pleasant job. Many of those who do this in law enforcement (mainly child porn cases) have high rates of PTSD and other issues.
Mr. Zuckerberg's involvement signals that it is a huge problem, one they are unable to spin away.
People always figure out a way around the latest attempt at control, and these measures will be overcome by those with bad intentions.
So if FB is admitting it has a community safety and fake news problem by hiring 3,000 additional enforcement agents, why would one stay with FB if there were an alternative? Which there isn't.
I would like a FB lite. Photo sharing and comments and discovery of old friends. No news. No menu of features.
I wish there could be an alternative. ("It's just like FB, but without the news and crap.")
I think some alternatives do exist, but I do not believe a near-exact clone of Facebook with fewer of its problems exists.
Maybe it's just infeasible to properly monitor such a massive social network? I don't know for sure, but I left Facebook long ago and have not regretted it at all.