Tech Companies Are Deleting Evidence of War Crimes (theatlantic.com)
587 points by anfilt on May 9, 2019 | 222 comments



The article centers around activity in Syria, but the problem is much more endemic. In the United States, Apple has consistently taken down apps providing information about US government airstrikes - including those that strike wedding processions, children, bystanders, etc. Recent efforts to identify and scrub foreign propagandists in the United States have silenced legitimate voices of domestic dissent, as foreign influence campaigns typically attempt to magnify grassroots dissenting opinions (this is true also of US foreign intelligence efforts). There are many reasons why these kinds of apps, comments, and conversations are taken down because of their content.

Fundamentally, being a gateway to information in a legal environment where hosting content as a curator puts you at risk creates deep incentives to whitewash content. Those forces are already present for companies that might accidentally sustain 9gag/4chan-type cultures or commentary that contradicts how they are productizing their platforms.

Reddit is another example of a company that has recently scrubbed itself of most controversial content, including content critical of, or dangerous to, its host country's political and national security narratives. Twitter has started down that journey as well.

It feels like we're experiencing the growing pains of the social network and hosted content boom, reinvented on web technology over the past couple of decades.

It's kind of amazing in retrospect how controversial some of the "mundane" applications of technology to society have been, whereas a couple of decades ago most of the moral panic centered on concepts like "online dating", which to date has actually been relatively free of controversy.


Australia lacks the safe harbour laws which the US has had for some time. This has a chilling effect on certain types of platforms. Even something like a photo hosting site is very risky in Australia, as the content on your servers is your responsibility.

I feel like those safe harbour laws in the US are at risk, at least in practice, as the idea of a content "conduit" becomes less plausible as platforms moderate more and more.

Now, I am all in favour of moderation. Real-life communities are moderated, and all the strongest online communities have some kind of moderation. But there need to be protections for companies that are moderating as best they can and still end up with evil content on their site. Otherwise these platforms won't be sustainable and will carry heavy legal risk to boot. They won't make much sense anymore.


> I feel like those safe harbour laws in the US are at risk, at least in practice, as the idea of a content "conduit" becomes less plausible as platforms moderate more and more.

The way the US got the safe harbor to begin with was as follows.

There was a court decision that essentially said that you weren't liable if you were just carrying bits, but if you did moderation then you were.

The problem with this is obviously that you then either have to have no moderation at all or it has to be 100% perfect because you're liable for everything you get wrong, and getting everything 100% perfect isn't really possible. So the result would have been that nobody would do any moderation and everything would be overrun by trolls and spam. To prevent that, Congress passed a safe harbor that allowed platforms to do moderation without immediately ending up in court.

The problem now is that the Constitution and jurisdictional issues make it difficult for governments in the US to do the kind of censorship that a lot of people now want somebody to do. So they're trying to get in through the back door by creating laws that will force the tech companies to do it, because on the one hand the companies have minimal stake in hosting any given piece of information and will execute just about every takedown, no matter how ridiculous, if it will reduce their liability, and on the other hand they're not bound by the First Amendment when they over-block protected speech. So imposing any kind of liability on them that will cause them to execute spurious takedown requests is basically the censor's birthday wish, and even better if you can get them to over-block things ahead of time.

But there are solid reasons for the First Amendment to be in effect, and "governments shouldn't be able to erase evidence of their crimes" is pretty far up there on the list. So this ploy to put the national censorship authority into the offices of Facebook and Twitter really needs to get shut down one way or another, or we're in for a bad future.


>The problem with this is obviously that you then either have to have no moderation at all or it has to be 100% perfect

I'm not sure. You can let the users moderate themselves; then you're still just carrying bits. That's where a pluralistic organisation of the platform comes in handy. Things like Reddit or image boards are not just one community, but a plurality of communities. None of those suit you? Go ahead and open your own subreddit, splitter! Then you can moderate there as you please. The problem, of course, is that the bit carrier cannot expect an advertiser to agree with all the subcommunities. But that's a different, and solvable, problem. You need better targeting for ads, and you need to accept that some subcommunities will just not be attractive to any advertisers at all.


It wasn't long ago that it was shown that being host to a hateful subcommunity led to overall higher levels of hate in unrelated communities.

You don't actually have this imagined separation you imply: hosting fatpeoplehate means you impose a higher moderation burden on unrelated communities. And while the reason for this might be the principle of free speech, in practice you are only defending the act of hating fat people.


> You can let the users moderate themselves, then you're still just carrying bits.

But then without a safe harbor the users doing the moderation would be liable, no?


> There was a court decision ...

Out of curiosity, what was the court decision? I've been thinking about this recently from the same perspective, namely that once you start allowing moderation you should lose the shield from liability, but I wasn't aware that this had been articulated by the courts previously.


The EFF has a good page on CDA 230 here [0]. The relevant court cases are the 1991 case 'Cubby, Inc. v. CompuServe, Inc' [1] and 1995's 'Stratton Oakmont, Inc. v. Prodigy Servs. Co.' [2]. In 'Cubby', CompuServe was found to have no liability for content hosted on their site as they did no moderation themselves, had no knowledge of specific content on their site, and were thus not a publisher. In 'Prodigy', Prodigy Services was found to have liability as their moderation practices were judged to make them the publisher of the content.

CDA 230 was the legislative response to this, allowing companies to not assume liability for moderating content.

0: https://www.eff.org/issues/cda230/legislative-history

1: (https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.)

2: (https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod...)


I think DMCA in 1996?

[edit]

DMCA was passed in 1998, based on WIPO treaties from 1996.


The DMCA had the safe harbor provisions, which the GP claimed were added in response to a court case. It's that court case that I'm interested in learning more about, because it sounds like the logic used in that case mirrors my own thoughts on the issue and I'm curious to see if that is the case.


Well put.

The big social media players underappreciate the territory they've found themselves in: they are now a major part of the news media.

Dealing with fake news, intelligence-backed propaganda, and censorship of certain content are very difficult tasks. They aren't just "security" or "platform abuse" issues. These are editorial and fact-checking issues.

Ultimately, it will be impossible to deal with them via clear policies and software. The only way to do it is with human intelligence, decision making and judgement.

A video of an ISIS execution could be snuff, critical news, ISIS propaganda, Baathist propaganda...

The types of organisations that do these kinds of jobs tend to have it baked into their DNA. Journalistic organisations, for example, have editorial philosophies embedded in their DNA the way Facebook has software development embedded in its.

In terms of online media, Wikipedia is the best example, imo. Dealing with this stuff is at its core. I can't see FB ever becoming adequately capable of dealing with these types of issues.

Even bans get into tricky territory once you reach FB/Twitter scale.


Not a surprise. Many of us grew up believing we as Americans are exceptional, that we are inherently the good guys in any international conflict.

The reality is our government has interests. Morality and interests are orthogonal, with the latter being influenced primarily by profits or some perceived direct or indirect threat to America’s elite.


"Many of us grew up believing we as Americans are exceptional, that we are inherently the good guys in any international conflict."

Sounds like the USSR, only we got rid of this delusion almost 30 years ago. Now it's slowly coming back thanks to the Russian government's propaganda, but it is still far from the present US level.


Sounds like any dictatorship ever.

What is unsettling is that the US is clearly not a dictatorship (or at least, not like "any dictatorship ever"). It manages to stay reasonably democratic even with this and other dictatorship-enabling mechanisms (a powerful executive, heavy sectorial financing of the government, etc.). On this point, they are exceptional.


It's definitely not a dictatorship, it's something a lot more insidious. It's a self-sustaining informal plutocracy that masquerades as a paragon of democracy.

I don't understand how people still believe the US is an exceptionally democratic country even after learning about how democracy is implemented here. The whole thing with the Electoral College and the two-party system is just the tip of the iceberg. For me, it was the revelation about the existence of riders [0] that was the final nail in the coffin of my belief in democracy.

[0]: https://en.wikipedia.org/wiki/Rider_%28legislation%29


> dictatorship-enabling mechanisms (a powerful executive, heavy sectorial financing of the government, etc.)

+ significant increases in secret courts, endless budgets, and free rein for intelligence agencies that report to the executive branch.

All shifts that have been happening over the last two decades.

It only becomes a problem to most people when someone they don't like is given those powers. Which is why they should never have those powers in the first place. Many have warned about this for a long time, and the US has had a unique history of pushing back against it, though that now seems long ago.


Curiously, I had much the same experience despite being from the UK — Americans were always the heroes. British too (e.g. colonialism, empire, and commonwealth presented as an unmitigated good thing), but if there was a conflict of any sort between the UK and the USA, the USA were almost always the good guys.


Well, for one, all the major movies and TV series are American, so they will present any war, conflict, etc. from American viewpoints, to an international audience (including politically naive, historically ignorant and impressionable young ones).

Even when the plot will cast doubt on the legitimacy of the conflict and the actions of the heroes, those will still be the protagonists (and with the humanizing superiority of at least being able to be morally conflicted about their actions), whereas the nuances of the other side are either hidden altogether, or obscured by some "we're all human after all/both sides are equally at fault" message.



I had the same experience being from and growing up in Poland.

I think I can give you a simple reason: Hollywood. Through movies and TV shows, America has been defining the mass culture for the past few decades, for almost the entire world.


This is very deliberate. Between 1911 and 2017, 800+ films and 1,100+ television shows received backing from the Pentagon and the CIA [1]. Several films received major script changes, primarily to create a certain perception of the US military and intelligence agencies.

[1] https://theconversation.com/washington-dcs-role-behind-the-s...


As did Jackson Pollock [1] and other artists ...

[1] https://www.independent.co.uk/news/world/modern-art-was-cia-...


That's an interesting link you have there, but it feels like a limited hangout. "Yeah, we admit we routinely manipulated film and more, but only to paint ourselves in a favorable light."


>I think I can give you a simple reason: Hollywood. Through movies and TV shows, America has been defining the mass culture for the past few decades, for almost the entire world.

I think this conflates cause and effect. Hollywood creates these movies and stories because the people producing them believe the narrative.

It's interesting to me which narratives are questioned, and which aren't. The Vietnam War is widely considered controversial, but World War 2 was about defeating the evil Nazis, period, right? Adam Curtis has some interesting interviews with soldiers in The Living Dead about this topic; often the men on the ground couldn't tell good from bad.


The issue isn't about Americans believing America is exceptional. Every nation believes themselves to be the protagonists. The difference between the US and everyone else is that the US managed to export its movies to the point where they became an integral part of the culture of half the world. In any Western nation X, for the past decades, you only had two categories of movies of importance: local movies and American movies. And usually the latter were better / more interesting.


Yet in reality British colonialism, much like that of other European countries (France, Spain, Portugal, the Netherlands, Belgium, Germany a bit), was one of the greatest evils in mankind's history, probably the worst if you count all direct and indirect casualties. I would be properly ashamed of my ancestors if I were from one of those countries, yet people simply don't give a nanofraction of a fuck.

Much of the mess we have faced from the late 20th century until now is due to that. Countries literally raped for centuries, all their wealth stolen. Generations raised as servants to powerful overlords. More submissive minorities promoted into ruling classes. Afterwards some idiots in their tea club drew semi-random lines on maps when creating new countries, creating a perfect storm for future conflicts.

And what do we have today? Colonialism with a modern face - a continuous brain drain so that only the poor, uneducated or lazy stay in their home countries. And of course the resource drain never stopped; look at companies like Royal Dutch Shell or DeBeers and many, many others.


> I would be properly ashamed of my ancestors if I were from one of those countries, yet people simply don't give a nanofraction of a fuck.

I think it's pretty obvious why people aren't ashamed: because they didn't do the act. Take your expectation of shame to its conclusion: it's very selective to only call out European colonialist countries as ones where shame should be expected. If we are looking at crimes across history, everyone on the planet must end up ashamed of their ancestry, as there are few innocents. It's not as if the current borders and genetic make-up of countries in Europe have been static throughout history, let alone every other plot of land in the world.


It is important to learn from history, but assigning blame on the living -- yourself or others -- for what dead people did is a certain path for repeating many of those same historical mistakes.


An exceptional superpower would have to be one which does not consider itself exceptional.


The British Empire up to the early 18th century could be considered an example of this, as it was entirely a private-sector phenomenon and not considered hugely relevant to government policy.


>that we are inherently the good guys in any international conflict.

How else could any human or organization function if not by acting in ways they believe are right?

>The reality is our government has interests.

The reality is everyone has interests. The "government" isn't some homogeneous entity pushing towards a single plan; it's an incredibly complex network of individuals.

The nation isn't perfect, but does anyone really believe that, on net, America hasn't been a force for good in the world? What is your benchmark?


"The nation isn't perfect, but does anyone really believe that, on net, America hasn't been a force for good in the world?"

Wow. You really never moved out of your bubble, did you?

There are many people (not just IS extremists) who consider the US empire plain evil.

And many more who just think of the US as arrogant, ignorant, self-righteous bricks who try to enforce the American way of life on the whole world. And covertly or openly destroy anyone who opposes it.

Now, I am not that extreme in my view; as to whether America has on the whole been good or bad, I actually think probably good, but there is too much truth in the accusation. And you proved the ignorant part very well.

(btw, technology and science are the things that do the net good for me ... but politically, hell no. Just another empire posing as nice while going after selfish motives. Nicer than other empires maybe, but still an empire.)


I see the word "empire" bandied about when discussing this subject a lot, and I'm never really sure what people mean by it. I mean, sure on the face of it you're correct, an "empire" according to various dictionaries is just an expansive amount of territory under a single rule, but it doesn't seem like that's what anyone really means when they say it. Otherwise, why aren't most countries empires? They have an "expansive" amount of territory by some definition. Or what about the EU? China? Russia?

What seems especially confusing to me is that "empire" has a very specific negative connotation but it's rarely out and out said what people who say "empire" actually think the US's crime is. I could imagine it to mean the way we treated Native Americans, but it doesn't seem like that's actually what's being talked about most of the time. Furthermore if we're talking just about global economic and cultural influence, that's voluntary(?) so the connotation meant doesn't work there.

What's the real deal here? What are we actually talking about?

And just to be clear, I'm very much against US interventionism, but that doesn't constitute an empire... Or does it?

Also, openly destroying people for disagreeing with the "American Way Of Life" isn't something I've seen before. Are you referring to the Cold War, or to stuff with the Middle East?


The way I understand it, an empire is a state that tries to expand its influence and dominate wherever it can.

And yes, for people being dominated by it, it has a negative connotation.

It's different from a democracy and a federation. Because a democracy fighting for freedom and ... well, democracy, would not do this:

https://en.m.wikipedia.org/wiki/United_States_involvement_in...

"During the Cold War in particular, the U.S. government secretly supported military coups that overthrew democratically elected governments in Iran in 1953, Guatemala in 1954, the Congo in 1960, Brazil in 1964 and Chile in 1973. "

This is what empires do.

Dominating weaker states by all means: sanctions, assassinations, supporting, suppressing, blocking, sending weapons and ultimately troops. There is a reason why there are US bases all over the world.

And yes, sometimes that can be a net gain, as with Nazi Germany, but usually all the human rights and democracy talk is just a facade.

And also yes, smaller states do the same and act like empires, but the US has been by far the largest power since the USSR vanished.

But China is on the rise, and Europe would like to be an empire as well ... though with Europe, for example, I have hope that we stay more of a democratic federation and don't go down that path as well.


Okay that does make sense. I think that there are good and bad sides to the US' actions though, and the good actions are not limited to WW2.

It's important to note that, at some size, any nation becomes an "empire" by that definition; it really isn't possible not to do that and survive. Beyond a certain amount of influence, there will be constant threats to security and prosperity and people trying to harm you, and rightly or not, the solution any government would look to for these problems is violence. Because government is fundamentally based on force, inside and out (government is enforced at the point of a gun). So to say that the US's government is uniquely terrible is not really true.

Also, for the most part the actions of the US are in service to what they say they are in service to. On this I disagree with you slightly: the values the US talks about are usually not a facade. And the times that they are, those become highly contentious actions, even in the US, whereas in other countries I don't know that they would be; so to act as if the US's people are uniquely terrible is again very disingenuous.

In the example that you quote, those actions were clearly in the interest of toppling an even greater evil. I don't think this rationale is morally right, but you could say we went with the "lesser evil".

So I agree with you, that the US does act like an empire, and that other countries would too if they had to or could.

However I see a lot of specific condemnations of the US as if it were unique. That's more what I was responding to. I don't think either of us think that's right.


"So to say that the US's government is uniquely terrible is not really true."

I never said that. What is unique about the US is just that you are the biggest power today. That's why so much of the negativity is focused on your empire. If China were dominant, it would be them. (And I know their human rights standards are much lower.)

But with this I disagree very much:

"It's important to note that, at some size any nation becomes an "empire" by that definition-- it really isn't possible to not do that, and survive"

If you are powerful because your people work hard and efficiently, why do you have to invade other countries?

"Beyond a certain amount influence, there will be constant threats to security and prosperity"

Because they are a threat to your prosperity, because they don't sell you their resources at the price you want?

Note that many people think that terrorism is the answer because their land was exploited.


> I never said that.

Understood. (: I was mostly going on a tangent there, where I was responding to the kind of comments my OG comment in this thread responded to.

> If you are powerful, because your people work hard and efficient, why do you have to invade other countries?

Perhaps because a country is funding terrorist attacks against you and/or preparing nuclear weapons or ICBMs to attack you with?

> Because they are a threat to your prosperity, because they don't sell you their ressources at a price you want?

I'm very much against retaliating with force for things that are not an initiation of force. That's not what I meant, so perhaps I should elaborate, although it's a red herring away from my main point: what is included in acting like an empire is anything that expands a nation's influence and domination. That doesn't specify the use of force. So something like having trade wars with countries that refuse to stop ignoring copyright and that subsidize their own products in a dubious way would be included in acting like an empire. And since those countries are leveraging state power to control the market, why shouldn't the US be allowed to retaliate in kind, since that's apparently OK?

(Side note: I don't think we should have started the trade wars, personally. I think it's an unfortunate circumstance for the economy. But I think doing that kind of thing, once again, is an inevitable necessity for a state because it has to prove it can't be taken advantage of).

> Note that many people think, that terrorism is the answer because their land was exploited.

I'm not exactly sure what you mean here.


I agree with you.

It seems like American Exceptionalism has been shifting from a definition of "America is unique" to "America is perfect". Obviously no country can meet the second definition.

America has definitely been a force for good in the world, but just like every other organization run by humans, it makes mistakes and sometimes succumbs to human weakness like greed and overconfidence.


>>that we are inherently the good guys in any international conflict.

>How else could any human or organization function if not by acting in ways they believe are right?

Being convinced that you're always right is not the same as doing the right thing or acting the right way.

>The nation isn't perfect, but does anyone really believe that, on net, America hasn't been a force for good in the world? What is your benchmark?

Hoo, boy. No, America hasn't been "a force for good in the world", because that's a really naive, simplistic idea. America has done many things in (and to) the world. Some of them have been good for certain groups of people at certain times.


> The nation isn't perfect, but does anyone really believe that, on net, America hasn't been a force for good in the world? What is your benchmark?

Lots of people all over the world who have been on the receiving end of US military or economic aggression enacted in the interests of the US plutocrats.

African Americans, Native Americans, most of South America, pretty much all of the Middle East and North Africa, anyone who has suffered structural violence as a result of the US's insistence on a morally bankrupt Friedman hack-job neoliberal economic ideology.

The US has done a lot of good things, but on balance I think it's done more damage and has merely replaced the British Empire as the current Anglo-sphere imperialist power.


Abusing its power to exceptionally shield itself from international justice, while at the same time practicing extraterritorial judicial reach, is also not a good sign of being a net force for good.

https://www.reuters.com/article/uk-usa-icc/u-s-imposes-visa-... https://www.csmonitor.com/World/Europe/2009/0213/p05s01-woeu...


Plus, as a nation it meets its own definition of a rogue state.


On balance has done more damage?

Do you believe that in the countries where the US has intervened that without US involvement things would be that much better? Seems to me that when the US supported a dictatorship, the alternative wasn’t a thriving democracy but just another dictatorship aligned to a different power.


[flagged]


Please don't do political or nationalistic flamewar on HN. The conversation fell way below the acceptability line by this point.

https://news.ycombinator.com/newsguidelines.html

p.s. We've had to warn you repeatedly about using HN for political or ideological battle. At some point we ban accounts that keep doing that—regardless of what they're arguing for or against—because it's highly destructive of the intellectual curiosity this site exists for. Would you please fix that?


You've proven my point exactly. Your standard for America is perfection - a country that has never done anything wrong or made mistakes. Under your standard, every country is evil. It's a silly way to look at things.

And I see you've swallowed the Mossadegh narrative hook, line and sinker. Iran was in chaos prior to the coup. Mossadegh was not uniformly popular; there were riots in the streets. He was on his way out regardless, as he had failed to deliver on many of his promises.

As for Iraq, sure, that didn't turn out well.

But why did you ignore my comment about Taiwan, Japan, Western Europe, etc.? You trot out examples of failures, but completely ignore the success stories.


Please don't do political or nationalistic flamewar on HN. The conversation fell way below the acceptability line by this point.

https://news.ycombinator.com/newsguidelines.html


Don't forget "economic warfare", where the US crushes its "enemy countries" through painful sanctions, which always leave the poorest portions of those countries in shambles. Not to mention the refugee crises it creates, which in turn have the "pleasant" effect of lowering wages across the globe.


Wait, so the US is evil for not giving the benefits of its economy to states it doesn't like/that dislike it? That seems really backwards-- how does anyone else have a right to the US's economy?

Even more, (if I understand your point correctly) the US is also evil for having a good economy and society that refugees want to flee to?

So the US is both evil for having a great economy available to people and also for not making that economy available to people? Is there any way to win besides just not having a good economy (and society)?

Is this just hating on the strong and successful because they are strong and successful?


Lots of accusations here with zero citations.


[flagged]


It's fine; there is a certain collective on HN that does not believe, or want to even google, the United States' past dealings with the Middle East and South America.

America is not the world - Morrissey


>The article centers around activity in Syria, but the problem is much more endemic. In the United States, Apple has consistently taken down apps providing information about US government airstrikes - including those that strike wedding processions, children, bystanders, etc. Recent efforts to identify and scrub foreign propagandists in the United States have silenced legitimate voices of domestic dissent

Well, Apple is not some international beacon of progress and activism.

It's a US-based company, run by people who see themselves as US patriots and who, their differences with this or that sitting president aside, will do what's best for their country's "national interests". Same as Google, Facebook, etc.

Which is why countries with any major claim to sovereignty and with their own interests and stakes, build their own equivalents of major search, social networks, etc (e.g. Baidu).

>Recent efforts to identify and scrub foreign propagandists in the United States have silenced legitimate voices of domestic dissent

I wouldn't say the latter was not part of their purpose. You can't have your country's own citizens dissent to the elite/bipartisan consensus. Trillions are at stake...


Things are much simpler, I'd say. Those "algorithms" are very dumb, and the human flaggers aren't any better.

Even very innocent family videos are being flagged all the time; you just have to speak a foreign language and look like a Middle Easterner.

Some say that just speaking in an elevated tone in Arabic and having a beard is enough to get flagged on the big dotcoms.

And if you are openly a member of just about any social organisation, your account's deletion is pretty much a matter of time if you are a Middle Easterner, no matter whether you are pro-regime, anti-regime, or have nothing to do with any of that. The Egyptian national symphonic orchestra was once banned by Facebook as a "terrorist organisation".

This is ironic given that all the big social networks, including Facebook, outsource content censorship to companies in obscure countries like Algeria or in the Balkans.


"The Egyptian national symphonic orchestra was once banned by Facebook as a "terrorist organisation""

I'd love to see a source for that bit :)


> Reddit is another example a company that has recently scrubbed itself of most controversial content,

I agree with you on most points; however, Reddit takes no action against extremists and fringe groups proliferating on its platform until substantial pressure is applied from either advertisers or journalists. They go as far as shadowbanning users who bring up their lack of action to shine a light on their culpability, presumably to keep it off mainstream journalists' radar; thankfully it's now too late, and enough good journalists have picked up the scent. Expect to see many great pieces about this in the near future.

Reddit also scrubs the profiles and post history of mass shooters who were indoctrinated on its platform. One example is Elliot Rodger, the incel and Santa Barbara shooter from several years ago. Truly disgusting and frightening behavior that prevents researchers and journalists from understanding the reach and impact of the spread of ideologies and hatred on these platforms.

Just last week in the wake of the Poway synagogue shooting the #1 post on /r/conspiracy was a holocaust denial post, https://pbs.twimg.com/media/D6HG_LMUUAEXYJy.png:large that received 2x reddit gold.


Reddit has banned Russian [0] and Iranian [1] "propaganda accounts", now lots of people over there are calling to ban "Chinese propaganda" [2] aka anybody who gets tired of anti-Chinese submissions constantly being pushed on the front-page.

Gee, one might wonder where that might be coming from? [3] That's the absurdity of the situation: Russian, Iranian and Chinese "propaganda" gets banned, while US propaganda [4] isn't even recognized as such, because being pro "democracy, freedom, US" is considered the default "good" position and anything not in line with it must be evil or at least bad.

[0] https://www.engadget.com/2019/02/04/russia-spam-account-prob...

[1] https://www.dailydot.com/layer8/reddit-accounts-iranian-infl...

[2] https://www.buzzfeednews.com/article/craigsilverman/reddit-c...

[3] https://www.reddit.com/r/Blackout2015/comments/4ylml3/reddit...

[4] https://www.theguardian.com/technology/2011/mar/17/us-spy-op...


Reddit has been a declining trash heap for quite some time now. It definitely has to do with its emergence into mainstream popularity (the problem gets worse the larger the subreddit). It's the perfect platform for every armchair expert with a bigoted view to gain admiration and affirmation of their views from other uninformed bigots. It happens to be much easier to be a loud and belligerent bigot than to express an informed and nuanced opinion.

The only thing stopping HN from completely morphing into that cesspool is the diligent and fair moderation and the diversity and quality of opinions expressed. For the most part, hackers tend to be more critical and suspicious of authoritative sources than other groups.


>anybody who gets tired of anti-Chinese submissions constantly being pushed on the front-page

It's not really anyone who gets tired; it's the accounts that revert to whataboutism whenever the Communist Party is criticised, flat out deny things like the Tiananmen Massacre, and spread a lot of hate when something happens that the CCP doesn't like, such as a Tibetan woman being elected president of a student union.

>anti-Chinese submissions

Anti-CCP. It is important to note that criticism of a government does not constitute hate speech towards the citizens. This is another tactic used by propagandists.


> flat out deny things like the Tiananmen Massacre

I'm fairly active on Reddit and I've literally never seen that happen. That's because the Chinese people deep enough in historical revisionism often don't even care to begin with and can't even access Reddit in the first place, at least not "easily".

It's usually people with an "anti-China" position who bring up Tiananmen Square, to conflate it with whatever a given submission is about. As in: "Huawei must be spying because the Chinese government is authoritarian and evil because of Tiananmen Square".

> This is another tactic used by propagandists.

Just like conflating issues to give the impression of fighting for the only "just cause" [0].

[0] https://en.wikipedia.org/wiki/Falsehood_in_War-Time#Receptio...


I've never seen it happen either.

Redditors will gleefully jump on (by voting to the front page) anything even remotely critical of China. I suppose it fulfills some deeply-rooted sense of 'defiance' in them and gives them some sense of satisfaction. It's basically the childish game of "oh, you don't want me to touch this? I'm going to touch this over and over because of that".

The net effect is that they will dutifully post, for example, the anniversary of Tiananmen Square every year without fail, but fail to mark the dates of equally tragic but unpatriotic or unflattering events. When was the last time the anniversary of the My Lai Massacre or No Gun Ri was upvoted to the front page? How about the anniversary of the invasion of Iraq?

Hint: Never.

It's ironic (and very telling of their motives) that the events they have complete freedom to discuss never receive equal attention or criticism.


> They go as far as shadowbanning users

Never mind that these users really don't do anything constructive, since the problem at hand is censorship.

Holocaust denial got worse because of efforts to silence the crazy people who engaged in it. Not slightly worse; it got bad enough that you have to think about it again. And the people complaining aren't even Jews or over the age of 50.

The self-righteousness, while the intent might be good, really doesn't help at all.

And yes, denial of the Holocaust is a form of censorship, so adjust your talking points accordingly.


> I agree with you on most points; however, Reddit takes no action against extremists and fringe groups proliferating on its platform until substantial pressure is applied from either advertisers or journalists.

They banned IRA_memes because IRA, I guess, and quarantined waterniggas because it had "niggas" in the title, even though the whole sub was just about drinking water. They're definitely taking an approach of culling anything that advertisers might not like.


> They're definitely taking an approach of culling anything that advertisers might not like.

This could not be further from reality; see https://old.reddit.com/r/AgainstHateSubreddits/comments/a3q6... to get a small preview of the attitudes of their executive team and the kind of content they allow to proliferate and spread on their platform.


The internet was founded on a military backbone, then grew into an information-is-everywhere backbone; now the fences of the garden have been contracting into a walled garden again.

There is a scene in Firefly where the doctor character gets a box. It was implied to be an open-ish internet. Pretty sure that's where we're headed.

Information is free until the folks in the back of the courtroom make sure it's not.


Online dating’s purpose is to facilitate ‘positive’ real-world relationships.

The intent of these ‘mundane’ applications is much less admirable.


The problem is that we've equated some platforms with a certain kind of media. YouTube is video, Twitter is news, Facebook is... reality TV, I dunno. It's pretty debatable whether those platforms are the right place for the coverage of war crimes.

I'm not going to start an argument about the media not doing their job right. However, this discussion would be moot if there was a news platform on par with the above, and I don't mean Twitter. I'd love to see someone build or modernize a news outlet that isn't driven by attention or clicks as a currency.


These platforms aren’t being used to cover war crimes by an entity consciously trying to deliver news.

These posts are first-hand accounts from individuals currently living through an event. It is as raw as it gets. I don't think you can very easily direct where content like that gets created. People will use the platform they're already using for the other parts of their life.

I understand the desire to not traumatize people by causing them to accidentally view horrible events. But at the same time I think deleting the content is a disservice to humanity in the long run for a lot of reasons this article highlights.


We would all love to see that but it is - within the context of the current economy - impossible for the majority of people.

This is a scenario where the regular incentives of the market result in perverse outcomes.

Firstly - it's not tech which did this. Tech mutated and accelerated a pre-existing trend for the worse.

The news cycle effect was well known, driving more and more sensational news reporting at the cost of reason.

At the same time newspapers were constantly going under or being bought up.

With information distribution at scale, describing reality increasingly became the playground of nations (the BBC, and lately various government-owned news channels) or major firms (the Murdoch empire).

There’s a deep problem at the nexus of human behavior, factual reporting and income.

Attention and fear are easier and more reliable levers to pull in the human consumer.

Humans are easily distracted by sex, violence, gossip and easy-to-consume content - our brains are wired that way for some of those things, and it's always more pleasing to consume mental sugar than mental vegetables.

The only model which survives this is paying lots of money for specific information - usually linked to your profession.

However, general news reporting is unlikely to recover, because of the news cycle effect which creates a tendency to compete on attention, a race to the bottom.


> I'd love to see someone build or modernize a news outlet that isn't driven by attention or clicks as a currency.

There are two motivations for running a news outlet, both with extensive historical precedent:

- You want money.

- You want to spread your view of the world, inflicting your culture on other people.

There isn't a lot of immediate financial gain associated with the second approach, but the gains are real enough, long-term, that there tend to be plenty of such outlets receiving subsidies from people who believe in the message.

But in the modern US, while we still have outlets of that type, there is a very strong belief that they shouldn't count as "news", and that they are less legitimate than "neutral" journalism. That reserves moral legitimacy to the money-oriented approach, and that approach is necessarily driven by attention. News that nobody reads can't sell.


> But in the modern US, while we still have outlets of that type, there is a very strong belief that they shouldn't count as "news", and that they are less legitimate than "neutral" journalism.

At least one of those outlets was admitted, in a court of law, by its creator/owner/founder at the time, to be "entertainment" and not "news": a channel that called itself "news" in name only, whose main purpose was orthogonal to its stated purpose.

This small tidbit of information is often forgotten about, because it is inconvenient to the narrative.

That doesn't mean that other outlets aren't doing the same, or have elements of it - but only one that we know of has admitted it under oath. That counts for something.

It also doesn't help that this "news" organization also has (or had, until its sale) an actual entertainment arm on the side that it could hide behind. It was a curious "sleight of hand", all the more shadowy considering that the owner of the channel was not a citizen of the United States, and so could push not only his own views, but views that may be in direct contradiction to the values of our country.

Unfortunately, there are no laws against this - and I am honestly not sure there should be, for that matter.


The money-oriented and "public service" outlets are also entertainment, and are also happy to admit it in public. For example, the BBC has explicitly made the statement that it is a form of entertainment.



Back in the 90s, I recall much talk about "citizen-based news". But the reality is far more complicated. I could build a Tor onion site with throwaway clearnet proxies. But how would anyone know that it existed? And even if they did, how would they use it from phones? You'd need apps for that, and so that requires approval from Google and/or Apple.


Android apps don't require approval from Google to be installed.


From Google Play they do, right? And how many people use alternate sources for Android?


It is my understanding that Fortnite precipitated a large swathe of apk installs without the Play Store [0], though I'd agree it's mostly atypical:

[0] - https://9to5google.com/2018/09/07/fortnite-android-fragmenta...


You're right. The arrival of 'a news outlet that isn't driven by attention or clicks as a currency' is exactly what should happen and surely it will. Or will we have Facebook around in 2119?


If it were possible: an untied peer-to-peer platform. The problem is then shifted to trust and veracity.

Once you include ratings you're back to square one with just a tiny bit less external control and harder discovery.

Yes, this is Freenet again.


This is the core problem.

And it's just natural. These platforms have a tendency to be natural monopolies. That means they require government interference.


This.

If your evidence for war crimes is Twitter, there's a problem. And the problem is not Twitter.

Twitter and Facebook posts should not be evidence of anything. That stuff is so easily faked that courts would be entirely right to laugh such "evidence" out of the courtroom.

Evidence should be gathered by appropriate authorities and kept in accordance with international standards on evidence storage.

Hint: That's not "look at this tweet I got!"


This makes no sense.

Pretty much all evidence starts out in some nonprofessional hands.

By this logic you could also say that you shouldn't record evidence of war crimes on your Android smartphone, because Android footage "shouldn't be evidence of anything" and "Evidence should be gathered by appropriate authorities and kept in accordance with international standards on evidence storage."

Videos from Facebook or Twitter can and have often been used as evidence in court.

> Twitter and Facebook posts should not be evidence of anything. That stuff is so easily faked that courts would be entirely right to laugh such "evidence" out of the courtroom.

What courts usually do when such things may be disputed is they ask the platform to confirm that the content in question is real. Source: I have been asked to appear in front of a judge (usually in his office to sign a written statement) to confirm the accuracy/authenticity of data from one of my platforms in the past.

I don't think there often is a dispute though, because both parties are aware this will happen and are also aware of the consequences of falsifying evidence.

Obviously that screenshot/video together with electronic traces and your statement would then be stored wherever it is they store evidence, not just on FB/Twitter.


>Obviously that screenshot/video together with electronic traces and your statement would then be stored wherever it is they store evidence, not just on FB/Twitter...

Money quote.

Twitter is not the evidence store.

Twitter is the place the evidence is gathered from. Along with numerous other sources. And no, they don't simply take your word that the footage on your platform was recorded at place X at time Y and is verified to document persons of interest A, B, and C. They take your word that a Twitter post was made at time W containing video N. That's all. I think you may have misunderstood what you were signing if you thought that your signature validated and verified the content of a Twitter post. (Or any internet post for that matter.)

That's what the investigations are for, investigations that I can assure you entail far more than twitter posts when you're talking about war crimes.

Crimes require evidence. Real evidence, not twitter posts.


You're completely ignoring the public pressure needed to bring about enforcement of laws against connected figures.

If you give your evidence against government officials to other government officials, good luck getting them to bring a case that could hurt their career prospects by pissing off the wrong people.

To get that rolling under those circumstances, you need public pressure. Which means you need to distribute the source material far and wide, so that nobody can claim they don't believe it because they don't have access to it, and so the whole internet can try to pick it apart, use clues in the information to dig up more information, etc.

None of that has anything to do with digital signatures or anything like that. It's a matter of seeing a video with a particular location and timestamp and then finding other documentation from other people for the same place and time, and looking up public records to verify that what you see in the video matches, etc. That's how investigations work. You don't verify by having the original and some kind of signature; you verify with corroboration and the additional evidence that the original evidence leads you to.


The problem, though, is that a video goes viral and then gets forgotten in less time than it takes anyone to verify its authenticity. Public pressure may end up getting applied incorrectly.


People get upset about dumb stuff continuously. It's the job of the public officials to verify the information for themselves. If it's false then they can publish the information disproving it and the public pressure subsides.

Better to have the occasional tempest in a teapot than that people do nothing even when something needs doing.


It seems like you're implying that twitter posts cannot contain evidence of crimes?


GP is arguing that Twitter posts are of the wrong colour [0]. No matter what a tweet contains, by itself it's not evidence of anything other than its own existence.

Imagine you open Twitter and see a video of someone committing a crime. That could very well be real footage of a real event. But the event might not have been a crime. Or it may not have been real at all. Or the footage might be from something else entirely. Remember that time a few years ago, during an air crash investigation, when people believed they had seen footage of the plane breaking up, only it turned out someone had just posted a scene from LOST? Or recall those one or two heart-breaking pictures of people suffering that surface on Facebook every time there's some fighting going on, always labeled "evil side $X did this in city $C"? Those are always the same pictures of some event that happened a long time ago, but people read them as "evidence" of new atrocities.

Now obviously courts don't view social media posts in a binary way. Enough investigators and expert witnesses can work on a social media post until it has the colour of evidence.

--

[0] - https://ansuz.sooke.bc.ca/entry/23


I think you are completely missing the point of why these end up on these platforms...

Try to imagine a world where "appropriate authorities" do not exist, as a first step.

Places where these materials get captured are not first world countries with respectable, recognised law enforcement.

Most of the people sharing these atrocities are aware only of the platforms we call social media.

For them, these are the only ways they know to make it public, for the world to see, in the hope that we will act.


>Try to imagine a world where "appropriate authorities" do not exist, as a first step.

>Places where these materials get captured are not first world countries with respectable, recognised law enforcement...

I've lived in these nations, and if you can send via internet to Twitter, you can send to the UN. In fact, you can send to both at the same time.


And you know the UN address for submitting war crimes evidence by heart?

Adding to that, does the UN have a conveniently preinstalled app on every phone just for that purpose?

Just because it's reachable doesn't mean that a random witness will know how...


I guess the reason people put these things on social media is to raise awareness of it. The problem is that they fail to also send a copy to the relevant authorities.


Do you know who Rodney King is?

Which appropriate authorities investigated his beating?


Excellent example. When we're blowing the whistle on our own authorities, we turn to our community.

Private firms have coopted our communities, but they are not beholden to our welfare.


How naive. We should be using any and all methods at our disposal to document first-hand experience of atrocity.

The Library of Congress does not have an app for uploading biographical accounts of genocide.

We live under Capitalism; all niche value is exploited by cynical firms.


This article seems mostly concerned with Facebook's and YouTube's opaque filtering systems, but the authors are looking in the wrong place. These platforms aren't and shouldn't be responsible for hosting violent and disturbing content, regardless of its purpose or utility. The authors themselves discuss the can of worms that opens when you start accommodating the interests of some groups and not others; it isn't sustainable and it isn't possible.

Instead, this article should have focused less on the decisions of AI systems and neural networks, which can be difficult to decipher and interpret, and more on the decisions of the humans behind these tech companies. The latter is much easier to scrutinize and dissect.

I think the truly concerning initiatives are projects like Google's Dragonfly. There's absolutely nothing opaque about a search engine that will actively assist the Chinese government with whitewashing history for a billion individuals. These are the decisions that should be examined.


>Instead, this article should have focused less on the decisions of AI systems and neural networks, which can be difficult to decipher and interpret, and more on the decisions of the humans behind these tech companies. The latter is much easier to scrutinize and dissect.

I agree with your comment overall, but this common assumption bothers me: we can put a debugger in a neural net; we can't put one in a human brain.

People get angry when a policy decision is made that targets them, and they are told by the people responsible that the decision was implemented with a neural net. Then they get mad at the neural net, which shifts blame away from the humans who control it.


Tech companies are deleting evidence of war crimes and everything else at the behest of media companies like The Atlantic.

YouTube, Facebook, Reddit, Google Search, etc. have all been targeted by The Atlantic, the NYTimes, the Washington Post, CNN, etc. and bullied into scrubbing content and directing their users to "authoritative sources". Ironically enough, in China, Russia and most countries, "authoritative sources" are state propaganda organs. But we are different, or so I'm told. Our "authoritative sources" are independent news organizations who, strangely enough, push the same message with the same talking points. What a coincidence. For "independent" "news" organizations, they sure are united in the same message.

This censorship has been going on for at least 5 years now. Maybe The Atlantic's journalists should investigate their editors as to why The Atlantic has supported censorship? Or maybe The Atlantic's journalists should start investigating the NYTimes, the Washington Post, CNN, etc. Why are so many government/intelligence-agency people working in media? Why do so many children of politicians (Bushes, Clintons, McCains, Cuomos, etc.) have prominent positions in the media?

Or is The Atlantic only interested in war crime evidence that suits its agenda (pushing for war in Syria, Venezuela, etc.)?

This article is so weird. It's like an arsonist setting a house on fire and telling everyone that the house is on fire.


What exactly are you claiming is being censored?

The media regularly investigates and reports on itself.

The press is far from perfect, but you’re alleging a vast and deep conspiracy when reality is likely far simpler:

The right hand doesn’t know what the left hand is doing.

Sales and business have little impact on editorial. Editorial flies by the whims of a wide variety of strong personalities.

Politicians and relevant players get columnist and editorial roles because of their inside status and insight (of whatever level of quality that might be).

Suggesting those outlets produce a single unified perspective or ideology is a bit much. Which would explain why you might see contradictory ideas.

I continually stress media literacy as crucial, especially these days. It's not a mystery to be solved. Most of it is pretty straightforward humdrum.

Edited to add:

The Atlantic is a very different machine from WP or NYT. It’s a magazine whose content is analysis and ideas, and always has been. Their work will always have perspective woven into it. It’s not new to them. But it’s not a conspiracy; it’s part of their business model.


These do not have to be mutually exclusive accounts. There does not have to be a secret cabal of shadowy deep state operatives pushing a unified message, there just needs to be a propensity for the rich and well-connected (by definition a small group) to find their way into positions of influence in rough proximity to major media outlets. As you've said:

Politicians and relevant players get columnist and editorial roles because of their status and insight.

It's not difficult to see that wealth, status, and connection are self-compounding. I also don't think it's unreasonable to say that many people in this group hold views that are out of step with the majority of Americans, particularly when you are talking about the pro-military intervention crowd.


Granted, but what’s being implied and not said outright is that those people are somehow influencing entire organizations of strong-willed career professionals and not just given their own post to say what they want alongside the rest.

I’m asserting the latter is usually what’s happening.

Over the past 100 years there have been many times when North American governments have attempted to rule over the press. That never went well for them (save the inception of Fox News).

OP seemed to suggest the large number of contradictory articles was the result of a larger unified conspiracy. I’m suggesting that is highly unlikely.

For what it’s worth- people don’t get into working for the media because it pays really well. Not usually anyway. So in that way I’d rule out wealth as a primary motivating factor to sacrificing one’s principles in that business.


It seems unlikely to me that there is no coordination between left wing media institutions. They say the same things at the same time using the same language. I don't read much right-wing media.

I'm not saying there's a shadowy conspiracy (though I'm not ruling out people talking about "messaging" via email). It may be that they read each other's stuff and the demands of the 24 hour news cycle dictate that there's a certain amount of cross pollination and regurgitation.

> For what it’s worth- people don’t get into working for the media because it pays really well. Not usually anyway. So in that way I’d rule out wealth as a primary motivating factor to sacrificing one’s principles in that business.

Another perspective on this, courtesy of Noam Chomsky in conversation with a journalist: "I’m sure you believe everything you’re saying. But what I’m saying is that if you believed something different, you wouldn’t be sitting here."

> Over the past 100 years there have been many times where North American governments have attempted to rule over the press. That never went well for them (save the inception of Fox News).

I think it's unknown to what degree American intelligence agencies influence the media but I'm willing to bet it's "more than not at all". It may be relatively benign stuff ("Don't use the term 'Islamic terrorism,' Al Qaeda uses that stuff to propagandize") or it may be not so benign.

I agree with the original poster. It's absurd that the left-wing media has been agitating for censorship of "hate" and then complaining when "evidence of war crimes" is deleted. There's been shockingly little mainstream questioning of just how quixotic it is to try to "remove hate".

I'm fairly concerned about the move toward censorship by internet platforms. The only way tech companies can realistically do this is with algorithms. Lots of people seem to think it's a given that these companies can algorithmically remove "hateful things". No one seems to be saying that rules and laws need to have precedent in order to be intelligible. For example, in the US, there are incitement laws but what actually constitutes "incitement of imminent lawless action" is something that has been defined in the courtroom. You can't just say "delete hate" without precisely defining hate. It's a blank check.

I find censorship via algorithms extremely scary. There will be false positives and false negatives and there is absolutely zero recourse. Censorship by algorithms is a perfect expression of bureaucracy. In perfect bureaucracies, responsibility is spread so thinly that it's impossible to determine who is responsible for a mistake. With algorithms, there's actually no human on the hook at all. Are you going to blame the programmers?

By all means, use machine learning to flag posts so humans can look at them. But automating the removal of content and the banning of human users is a road I strongly suspect we will regret going down. If the volume of content is sufficient so that you need to use algorithms to remove human-generated content then I'd say it's time to reconsider whether that content should be removed at all.


In many cases the censorship is carried out by a 3rd party company set up in a low wage location, where someone with their own biases, religious convictions and the influence of their own society moderates content for the west.

The tech giants don't want anything to do with it because it means employing 1000's of extra people and the costs that go with that. So it just gets contracted out to the lowest bidder.

It can't all be automated yet, nor is some machine learning algorithm dreamt up by Facebook any better of a prospect.

This is the problem with the black box of the multinational corporation deciding what is moral or acceptable with no oversight by the government.


> It can't all be automated yet, nor is some machine learning algorithm dreamt up by Facebook any better of a prospect.

Some of it is automated. Youtube removes/demonetizes stuff based on automated checks. I suspect other platforms do this stuff too or are headed in that direction.


Surely they have enough income to hire thousands of US based moderators at a living wage, and moderators for other countries, too.

The relative ethics is the biggest issue, to me.


> It seems unlikely to me that there is no coordination between left wing media institutions. They say the same things at the same time using the same language. I don't read much right-wing media.

> I'm not saying there's a shadowy conspiracy (though I'm not ruling out people talking about "messaging" via email). It may be that they read each other's stuff and the demands of the 24 hour news cycle dictate that there's a certain amount of cross pollination and regurgitation.

There is a difference between being a part of a zeitgeist, and actively conspiring to brainwash people.

The original implication was that a group behind the scenes of large organizations was coordinating them—not that journalists might communicate and share ideas.

They also do regularly read each other's work. Many articles are written in response to others. That is not a new feature in North American press.

> I think it's unknown to what degree American intelligence agencies influence the media but I'm willing to bet it's "more than not at all". It may be relatively benign stuff ("Don't use the term 'Islamic terrorism,' Al Qaeda uses that stuff to propagandize") or it may be not so benign.

That just opens the door for anyone to insert whatever they like, though. That's the sensationalist take I was trying to tone down with my post. It's coming out of thin air.

> I agree with the original poster. It's absurd that the left-wing media has been agitating for censorship of "hate" and then complaining when "evidence of war crimes" is deleted. There's been shockingly little mainstream questioning of just how quixotic it is to try to "remove hate".

Again, to this point, the media/press (and in your words specifically the "left-wing" media) is not a uniform entity and doesn't operate as one. What you're seeing are effects, not causes.

As for the rest—I'm not sure what the solution is yet. It's a tough problem, and definitely a nuanced one. I was only commenting on the conspiracy theories coming out of nowhere on a forum that usually prides itself on empiricism and facts.


"The original implication was that a group behind the scenes of large organizations was coordinating them—not that journalists might communicate and share ideas."

Before claiming this is impossible or unlikely, you might want to google the term "Journo-list". It is a thing that is known to have happened, and the idea that one exists again today does not require too much suspension of disbelief, especially since much the same symptoms that made people suspect it at the time are fully visible in the news again. They were absolutely coordinating on stories and doing all the bad things that independent journalists aren't supposed to be doing.


I did look it up because I hadn't heard of that group. Thanks for the reference.

It sounds like there is a larger story to tell about that than "see here is a secret hidden cabal of forces pulling the strings of journalists."

The very Wikipedia article on the subject outlines the stories that outed the list for some of its less-scrupulous conversations, in publications like the Atlantic[0].

That sort of goes along with what I had been saying—there is no unified voice. I didn't say there were no bad actors—I clearly stated the press is far from perfect—which includes overzealous rabbling to the point of pressing professional ethics to a breaking point, and making an ugly display of it.

This group also was known to each other—they were not a massive conspiracy pulling the strings behind the outlets from the top-down.

These weren't professionals having their pockets lined to break ethical standards or push pre-decided stories. They were rabble-roused in isolated online communities. If anything, that's the part that sounds familiar now.

[0] https://www.theatlantic.com/national/archive/2010/07/meet-th...


"These weren't professionals having their pockets lined to break ethical standards or push pre-decided stories."

I don't think you have the evidence to come to that conclusion. If they were, how would you know?

Once you find out someone has lied to you in real life about something serious, generally the right answer isn't to slightly adjust down your estimation of the truthfulness. You should generally drop it like a rock, and require a lot of evidence to bring it back up again. I don't mean that politically; I mean that in "real life", as a general life rule. Lies are highly correlated, and it is rational to do this, and indeed, actively irrational not to. (Though many people who are naturally inclined to thinking the best of people have to learn this the hard way, over and over, before getting to this point.)

Similarly, though not identically, you now have strong evidence that what you considered the ethical firewall was breached. You have good reason to suspect that the breach wasn't contained there; what may have started out as putting a toe over the line almost certainly escalated to totally ignoring the line, because in a group that large, that's always how it goes. (Again, general life rule; in groups dedicated to breaking the rules, there's always someone pushing the line just a little bit more than yesterday. Not always the same person, but always someone.)

No, I don't have absolute proof of this... my point is more that you don't have anywhere near proof of your belief that the transgression is limited to just what you know about, and that you have a lot of rational reasons to very strongly suspect that to not be the case. Or, at the very least, I have a lot of rational reason to strongly suspect that is not the case, since I don't consider these people my team or my tribe. Additionally, as I cited earlier, I think that for the same reason people suspected the "journo-list" existed in the first place by watching news stories that something like it exists again today. If this is true, not only has the ethics firewall been breached once again, it means it's been done completely knowingly, at which point any arguments about meaning well go out the window.

(This is all in addition to the general argument that control of the media narrative is literally one of the most valuable things on Earth and I find the idea that literally nobody with any power has ever successfully found a way to manipulate it and it's all just innocent people naively doing their job without a thought as to how it might affect society and that nobody anywhere would even dream of pressuring such people to be such an extraordinary claim as to be absurd. The degree of ethicality on the part of human beings necessary to pull that off strikes me as almost inexpressibly higher than the degree of ethicality that I observe.)


At most I have anecdotal evidence of my work with such people.

As well, if you want to be wealthy that badly you’re better off leaving journalism and doing something else.

I think this rabid kind of speculation is unhealthy. You’re basing your rationale on further assumptions.

At the core you seem to assume everyone is so vulnerable that such a situation is more than just plausible, but the most likely reality.

I disagree. So I guess it just boils down to philosophical differences.

For what it’s worth, again, my work has shown me the greatest extent of coordinated government/powers “interference” has been the request for XML copies of published articles. The greatest extent of corporate/financial powers interference has been to cut the legs out from under journalists to maintain margins.

I have no evidence those journalists were coerced into the actions they took outside of the effect of a social media echo chamber. They’re not the only ones.

Add that together and I feel I can make the assumption there is no grand conspiracy otherwise I likely would have seen some traces by now.

This isn’t directly related, but I see far worse coming from corporate executive business management. And they don’t give a damn about the stories.

And you know, I led off with “the press is not perfect”. But it is supposed to be free. Freedom often comes with the tag line that you have to take the crunch with the smooth and work through problems.

But coordinated grand conspiracies—no.


> It's not difficult to see that wealth, status, and connection are self-compounding. I also don't think it's unreasonable to say that many people in this group hold views that are out of step with the majority of Americans, particularly when you are talking about the pro-military intervention crowd.

Yes, the wealthy certainly rig the system to stay wealthy and get more wealth. That's an easy thing to assert. But to then make vague connections to intelligence and relatives of presidents as though they posed some unseen censorship apparatus on media with some specific goals is where it gets into conspiracy theory zone.


Journalists and the press have their own self-interest to protect too. That is why it is hard to take criticism of Facebook, Google, etc from them, because their criticism is tainted by conflict of interest. One does not go to the Pope to get an impartial view about atheism. I do not say that what Atlantic/NY times/etc says on this topic is without value. Only that what they say should not be regarded as final or definitive word, rather only as claims in argument.


> Only that what they say should not be regarded as final or definitive word, rather only as claims in argument.

I'd agree with that.

It's the claim that they're in on a massive conspiracy because of a select set of articles on the subject that I've taken issue with.


Stuff like this is why I still see value in owning encyclopedias, history books, etc. I simply don't trust resources that can be edited/deleted at any time.


This is exactly why archive.org was created. They actually referenced Orwell's 1984 changing history theme when they created it.

Note that archive.org is run on donations, and people often overlook its importance and need for funds.


> Tech companies are deleting evidence of war crimes and everything else at the behest of media companies like the atlantic.

Don't underestimate governments forcing these companies to hide evidence of their own misdeeds.

Even governments that you wouldn't traditionally think of as abusive. Estimate for yourself what a realistic number is for police abuse, and then how much of it gets filmed and uploaded. Then search on Youtube. Clearly, someone's asking for removals in the US. Same for Europe. I doubt very much it's the Atlantic doing that.


This is entirely in your mind until you put some evidence to it, you know, like journalists tend to do.


Great post, I think you’d get a lot out of reading this old blog post on Unqualified Reservations — Is journalism official?

https://www.unqualified-reservations.org/2007/09/is-journali...


I was curious who could write so well with such poor reasoning... unsurprisingly, it's a world-famous mental gymnast:

> Yarvin, attacking the accepted "World War II mythology" in a speech to the 2012 BIL Conference, claimed that Hitler’s invasions were forgivable acts of self-defense, and that this historical fact was suppressed by America’s ruling communists, who invented political correctness as an "extremely elaborate mechanism for persecuting racists and fascists." "If Americans want to change their government," he said, "they’re going to have to get over their dictator phobia."

> Yarvin's opinions have been described as racist, with his writings interpreted as supportive of slavery, including the belief that whites have higher IQs than blacks for genetic reasons. Yarvin himself maintains that he is a racist because he doubts that "all races are equally smart," and supports the notion "that people who score higher on IQ tests are in some sense superior human beings". He also describes himself as being an "outspoken advocate for slavery", and has argued that some races are more suited to slavery than others.


All I can suggest is to think for yourself, read his long format writings and make up your own mind. Don’t just read what people write about him.


>Or is the atlantic only interested in war crime evidence that suits their agenda ( pushing for war in syria, venezuela, etc )?

I'm with you 90% of the way, but I don't understand this argument.

It's like Hillary's emails or WikiLeaks; "But it came from hackers!" Who cares, if it reveals legitimate criminal activity. There's no equivalency.


It's like trusting all the evidence presented by the prosecutor during a court case. Yes, likely none of that is faked or outright lies, but until you give the defense a chance to respond it shouldn't be trusted as an accurate representation.

If we wouldn't trust the evidence from a prosecutor by itself, why would we trust evidence from an entity fulfilling the same role in the court of public opinion?


The rules of evidence are different for the court of public opinion.

Take the OJ Simpson criminal trial, for instance.

...or the Lori Loughlin college bribery and money-laundering scandal.

...or the Aaron Swartz travesty.


It is the selective condemnation that makes it highly suspicious, and context matters. It isn't like hacking, where if someone surfaces evidence of, say, fraud, the accused is guilty regardless of what anyone else did.

War crimes for one can be far more "transactional" and horrific with two sides. Side A is shooting surrendering soldiers! Now censor the false surrender killings of side B that prompted this.

Or Side C arming child soldiers and teaching them to shoot invaders until they stop moving - except it happened after Side D started waging no holds barred genocide and horrifically arming children was /the right thing to do/ lacking other alternatives. Censor Side D's atrocities as "inappropriate" content and Side C trying to survive now looks villainous instead of the desperate and unseemly but technically moral move it is.


So, here's an idea. Let's take the case of YouTube, instead of its algorithm deleting this content, how about it is marked with a flag. Normal users are completely unable to view, or even know of the existence of content marked with this flag. It doesn't come up in searches and even a direct link will just show some "content unavailable" message.

Human rights groups and the like can request permission for a designated user to have a "special administrator" permission. Anyone with that permission will see a permission toggle when they look at someone's user account. That toggle will allow them to give or revoke permission to view these "forbidden" videos.

That would then allow these organisations to give access to anyone they deem suitable to help them police these videos. There would need to be some sort of safeguard to make sure that permission is not accidentally given to a minor or something.

This should solve a lot of the problems I reckon. Thoughts?
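To make the idea concrete, here is a minimal sketch of the data model I'm picturing, with invented names throughout - this is not any real YouTube API, just an illustration:

    from dataclasses import dataclass

    @dataclass
    class Video:
        video_id: str
        restricted: bool = False          # flagged instead of deleted

    @dataclass
    class User:
        user_id: str
        is_org_admin: bool = False        # the "special administrator" role for vetted orgs
        can_view_restricted: bool = False

    def grant_access(admin: User, reviewer: User) -> None:
        # Only a designated admin from a human-rights group can toggle the permission.
        if not admin.is_org_admin:
            raise PermissionError("not a designated administrator")
        reviewer.can_view_restricted = True

    def visible_to(user: User, video: Video) -> bool:
        # Restricted videos never surface in search or via direct link
        # unless the viewer has been explicitly granted access.
        return (not video.restricted) or user.can_view_restricted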


Why would Human rights groups be entitled to have this censorship privilege? So they'd decide now what's good or bad to display on Youtube?

If anyone needs this as evidence, it is justice prosecutors, and that is likely already possible. Being deleted from user-facing Youtube and Facebook and the like doesn't mean it's deleted from their storage...

Plus the title is misleading: they're not deleting evidence of war crimes, they're deleting upsetting and distressing videos from their open platform, and that's a good thing! Nothing proves they don't already pass these videos to law enforcement.


> Why would Human rights groups be entitled to have this censorship privilege?

Indeed, who watches the watchmen? The SPLC was recently embroiled in sexist and racist controversies. No human organization is exempt from corruption. We all have to watch each other while simultaneously being charitable and willing to forgive.

> Plus the title is misleading: they're not deleting evidence of war crimes, they're deleting upsetting and distressing videos from their open platform, and that's a good thing!

Debatable. There's no such thing as a right to be "not distressed" or "not upset". Being a member of any human society means being exposed to stresses of all kinds, and we arguably all have the stressful duty of guarding our human and civil rights.


I am in absolute agreement with you that whether or not such material should be removed is subject to debate. There are good arguments on both sides. I am generally against censorship, but I can recognise some merit in protecting children from videos of people being shot, hanged, beheaded, etc. Not that I particularly want to go down the "think of the children" route.

However, all the debate aside, the simple truth is that governments are ordering that content be removed, so until we can have the debates it would probably be a good idea to have some way to allow the content to be viewed by some people. Arguments that it's probably already available to law enforcement are all very well, but law enforcement has limited resources and it's impossible for them to review all the flagged content, so if there are people who are willing to volunteer their time to review it looking for evidence of crimes then we should probably have a way to enable that.

As for there being some bad actors in these organisations - that's pretty much unavoidable; even law enforcement has some bad apples.

It's about compromising and making the best of what we have.


I fear I may not have explained myself properly as you appear to have misunderstood. I was not proposing that these groups be able to decide what does and does not get shown on YouTube. I was proposing that they have the facility to allow selected people to view content that has been flagged so those people can review that content for evidence of war crimes, etc... The content would still be unavailable to the general public.

The idea is that the platform providers can comply with their legal obligation to remove such content from the general public but it can still be made available to lawmakers and armchair detectives.


How do you sell this to YouTube or enforce this on them, to first accept then prioritise this functionality? What benefits do they gain or what alternatives do they avoid, as a company for profit?


Yes, some sort of tiered access seems better than mass deletion.

However, I'm sure all these videos still exist somewhere in backup. I doubt they are really deleted.


The podcast episode "Post No Evil", about Facebook's moderation problems, covers something very similar: during the Mexican border drug wars, people were posting images of shootouts and of dead bodies hung from overpasses.

Facebook had internal debates over whether this counted as news or breached content guidelines.

I recommend the podcast greatly.

But perhaps we need a simpler approach - a funded "journalism" archive where the tech firms can shift the posts to online storage (i.e. not delete them) but section them off for later analysis.

Seems the only likely compromise.
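As a minimal sketch of what that "shift to an archive instead of deleting" step might look like - the storage interface and field names here are invented, not anything Facebook actually exposes:

    import time

    def retract_post(post_id: str, reason: str, live_store: dict, archive_store: dict) -> None:
        # Remove the post from public view but keep it, with provenance,
        # in a sectioned-off archive that investigators could later query.
        post = live_store.pop(post_id)
        archive_store[post_id] = {
            "post": post,
            "removal_reason": reason,
            "removed_at": time.time(),
        }

    # Example: a flagged video leaves the public feed but is not destroyed.
    live = {"p1": {"author": "user42", "media": "footage.mp4"}}
    archive = {}
    retract_post("p1", "graphic violence", live, archive)
    assert "p1" not in live and "p1" in archive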


Something like the Internet Archive?


Except nobody goes to the Internet Archive unless they're looking for something extremely specific. The benefit of Facebook as a platform is that you can reach a wide audience in a place where often going to the authorities means either jack squat or your incarceration or worse.


Sure.

I meant to say the Internet Archive can be used as the 'funded "journalism" archive' that GP mentioned.


I did think that was one possibility - but this needs to be funded and supported by the tech firms themselves not handed off and forgotten.

I can imagine a way of referencing these redacted images where Facebook just puts up a big red sign and a warning - but that still allows people to find it, and it acts as a full web citizen.


And now Facebook is banning random harmless idiots like Paul Joseph Watson - for 'being a dangerous individual' even though he's clearly not a terrorist or whatever. It's insane.


Social media have successfully created the illusion that they are (to some extent, at least) public spaces. This is another reminder that they are not.

OTOH, truly public spaces on the internet ... where reasonable speech can truly be free ... are in very short supply. By accident or intention, this is a crappy state of affairs.

This isn't a new topic. 20 years ago, Lessig wrote, "We can build, or architect, or code cyberspace to protect values that we believe are fundamental, or we can build, or architect, or code cyberspace to allow those values to disappear." https://www.nytimes.com/2001/06/02/arts/adding-up-the-costs-...

If our free speech is limited only to the inside walls of walled gardens, that's not their fault.


> OTOH, truly public spaces on the internet ... where reasonable speech can truly be free ... are in very short supply.

Who decides what is "reasonable"? One person's or group's "reasonable" is another's call for regulation, if not censorship by various means.

How can this conundrum be solved on an international network composed of both adults and children accessing the same content?

I'm not expecting you or anyone else to chime in with a solution; I obviously don't have one myself. But ultimately, this information hydra we've created will need taming in some manner, or it may eat the world alive.


Who decides? Good question. But IMO, democracy is not the strong suit of most corporations.

Here in the US, back in the fabled wild west, the solution was not to expect the law to be upheld by grocers, blacksmiths, or bankers. The public hired professionals. So, I reckon, maybe we need some laws to enforce.


We couldn't have started the discussion of systematic censorship in a better way. It's hard to come up with one rule that works for everyone. It's normal to have a knee-jerk response to censor something if what you're looking at is terrible and you wouldn't want other people to see it, but it is also important that we as a race try to learn from our mistakes; censorship straight to deletion is burning the evidence.

This may fall short of crazy talk around here, but maybe we need a human element in the curation of our digital library, and a way for society as a whole to have input on this process - not just in removing the offending content, but also in keeping track of it to help people be accountable for their actions.

Full disclosure: I'm also a resident of Christchurch, New Zealand, where the mosque shooting happened recently. I saw our government block multiple websites via DNS, in a country where this kind of censorship is otherwise unheard of: we honestly didn't know if the systems were in place for this kind of censorship. So my 2c is that if you are in a five-eyes nation, they almost certainly have the capacity to conduct this kind of attack.


Damned if they do, damned if they don't.


This. It is simply not possible to have these platforms host this kind of content and it boils down to money. If you want Twitter, Facebook, YouTube and the likes you need ads. Advertisers don't want to be displayed alongside gruesome content.

If people were ready to pay for a platform with "raw" footage of others' lives there could be a place for it. So far the market has proven otherwise.


Not even that. Half of the people want heavy-handed moderation to save the children. The other half are yelling about censorship. Every decision ends with the messenger getting shot.


Also, the two groups don't even seem to be on different sides, not really. Notice how this article doesn't have a single word of criticism for either the media articles decrying how terrible it is that tech companies allow violent and terrorist content to remain up, or the campaigns by major advertisers to pressure them to mass-censor all that content. It just pretends they didn't exist. The coverage of those campaigns by major media outlets was almost universally supportive. It's like the press are intentionally setting up a catch-22 for major tech companies, who coincidentally are eating into their profits and power.


> Advertisers don't want to be displayed alongside gruesome content.

There are ways to mitigate this. For instance, ad spots in non-controversial content are a premium marketing tier. No one wants ad spots in very controversial material, so that would be a cheap tier. Advertisers only get associated with safe/pablum content if they're willing to pay for that assurance, or they get more widespread coverage in more controversial content if they're unwilling to pay.

Controversy is inferred from like/dislike ratios and/or content reports, all of which is user-driven. Some threshold of user ratings would be needed before content is monetized, a threshold that crosses echo chambers, which is a classification amenable to machine learning.

So this results in user-driven content classification and pricing tier driven ad placements, all without requiring any moderation or censorship on Google's part.
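A toy version of the scoring and tiering I'm describing, just to show the shape of it - the formula and thresholds are made up for illustration, not anything Google actually does:

    def controversy_score(likes: int, dislikes: int, reports: int) -> float:
        # Higher when reactions are evenly split and reports pile up.
        total = likes + dislikes
        if total == 0:
            return 0.0
        split = 1.0 - abs(likes - dislikes) / total    # 1.0 = perfectly split
        return split + min(reports / 100.0, 1.0)

    def ad_tier(likes: int, dislikes: int, reports: int, min_ratings: int = 500) -> str:
        # Require a minimum amount of user signal before monetizing at all.
        if likes + dislikes < min_ratings:
            return "unmonetized"
        score = controversy_score(likes, dislikes, reports)
        if score < 0.3:
            return "premium"       # safe inventory, priced highest
        if score < 1.0:
            return "standard"
        return "discount"          # controversial inventory, priced lowest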


> Controversy is inferred from like/dislike ratios and/or content reports, all of which is user-driven.

I can't quite put my finger on it, but I don't think it will be as simple as you think.

For instance, it's possible for a community to develop that thinks certain content is just fine, while another community does not, but is unaware of the first community or its content.

Then they suddenly through some mechanism become aware - and brigade the other community, stirring up controversy.

Heck, they could do this even if they weren't against the content, all for the purpose of having the content removed or otherwise censored, in order to drive their own narrative (which they may not be followers of either - ie, imposing some fake narrative or content on a community they don't believe in, in order to remove content they have no problem with, because of the lulz - seriously, there are some very weird, very toxic people out there who have found each other and do this all the time).

That's just one example I've thought of - the ways to game the system and cause problems with it are innumerable. I don't think this problem is something we will be able to fix as easily as that.


> Heck, they could do this even if they weren't against the content, all for the purpose of having the content removed or otherwise censored

But that's just it, I'm suggesting that legal content is never censored, merely that the monetization is proportional to the controversy, because advertiser skittishness is the real problem causing censorship on these free platforms. So brigading to raise the controversy score would move its ad revenue to a lower tier, but it wouldn't get removed.

Perhaps brigading would become a problem, but you leave it to the person whose channel was impacted to request a manual review if they suspect brigading. The review requirements have now shrunk dramatically.

There may be ways to abuse this too, but I think the spirit of this approach is right.


An interesting take on a place where they did their best to accommodate is Slate Star Codex's Culture War thread on Reddit [0]. I've wrestled with this for a while but as Youtube, Facebook and Twitter become ubiquitous, forcing views contrary to internal ones off the platforms is wildly irresponsible. On the other hand, I don't see why I, as a platform, should host content I find objectionable. I do agree it's money-driven but it doesn't work long-term; it's an interesting problem and I hope a phase of maturity comes soon because it's not as if alternative views can just set up their own platforms [1].

[0] - https://slatestarcodex.com/2019/02/22/rip-culture-war-thread...

[1] - https://www.cnet.com/news/gab-is-down-as-godaddy-paypal-pull...


> On the other hand, I don't see why I, as a platform, should host content I find objectionable.

So you shouldn't have to provide services to people you find objectionable, even if those people are in a protected class? It's a common but mistaken belief that we have no obligations to society or to our fellow citizens.

For hosting specifically, it depends on multiple factors: your ubiquity, the demographics of your audience, the intended/stated purpose, etc. For instance, the Supreme Court recently decided unanimously that social media is the public square, and so no law can restrict access to social media (they were trying to ban sex offenders from social media).

Considering Twitter's and Facebook's stated purposes are to become the place for all online conversation and debate, if they achieve that goal then censoring content arguably violates people's rights. Doubly so because people like Jack Conte have literally stated that access to social media is a human right.


It will be interesting to see how this plays out over time.

In this case, the Supreme Court made it clear that governments cannot prevent people from using social media. However other Supreme Court decisions have it that property owners don't have to allow others to use their property for speech, except in unusual cases like company towns.

It's not obvious to me that they will find these two ideas to be in conflict.


They might not. Then again, aren't Twitter and Facebook essentially company towns in the online world? If they are so ubiquitous that you must join these platforms to participate in modern political debate, that seems pretty darn close to qualifying.


These platforms have become a sort of super-state with citizens from all around the world. A single nation's definition of what is acceptable or not is hard to apply to the platform as a whole.


Does it need to be applied to the platform as a whole? I imagine they already target content based on geolocating the IP requesting the content. Even the ad tier status can be geo-specific.


There is a market for a lot of kinds of content, but companies will gladly exchange some of their profit to silence certain kinds of messages and to keep competitors down.

There's clearly a market for an uncensored social media network, but Gab has been deplatformed from payment processors, web hosts, registrars... Companies that refuse service to Gab will gladly host degenerate pornography and turn a blind eye to illegal activity.

Much of this censorship cannot be chalked up to profit motive, and the only conclusion is ideological drive or collusion with other tech companies to help ensure their respective monopolies and duopolies.


I think it's just simpler than that: it's risk. Some content has a known level of "bad" risk associated with it, legal pornography for instance, but other content potentially has a massive and unquantifiable downside, almost to the point of being an existential threat to a platform's business.

You can't force people to host content or be associated with hosting content which they fear will blow up in their faces.


So far almost all private censorship I've seen can be chalked up to profit motive. Payment processors don't like adult content not because they're prudes, but because there's a significantly increased risk of chargebacks, so why bother. Enabling illegal activities while maintaining plausible deniability and making easy profit off it? Why wouldn't companies do that? On the other hand, being a neutral platform is only fun until the Twitter mob notices you, after which you can choose between losing neutrality or getting painted as a bad actor on the front page of newspapers (sometimes you get the latter anyway, which incentivizes others to abandon neutrality preemptively). And even if you don't care, advertisers do care, because getting a mass-market product ad next to politically incorrect content may cause them issues with the Twitter mob and front page of the news.


Yep. What do people want? First everyone goes crazy that people can find stuff on these platforms, then they go crazy that you can't.

I find all this pinning of the world's problems on Facebook/Twitter/Google irritating. Sure they have a lot to answer for, but it's not like propaganda, censorship, or media manipulation did not exist before. Sigh. I guess they are an easy target to pick on.


“First everyone goes crazy that people can find stuff on these platforms, ”

I don’t.

I get crazy when they effectively censor at the behest of (any) government under the “we’re private” excuse.

I’m perfectly consistent about this. That and our imperative need to break apart the big tech companies (and banks, but that’s a different question).


> I get crazy when they effectively censor at the behest of (any) government under the “we’re private” excuse.

It's depressing how quickly people are willing to throw away the civil and human rights that have taken millennia to win. Let's hope it doesn't take actual violence to remind people of the difference between speech and violence. We haven't had real physical strife and a world war in our generation to keep this distinction fresh in our minds.


> We haven't had real physical strife

Perhaps not in the west. There has been significant physical strife for millions of people in Africa and the middle east for some time that shows no real sign of ending any time soon.


> We haven't had real physical strife and a world war in our generation to keep this distinction fresh in our minds.

Don't worry, we're doing our best to make it happen, give it time, we'll get there!

/the nice thing is, the next world war after that will only involve sticks and stones...


What do people expect?

Isn’t this uncharted territory for everyone?

Everyone can report, but the public sphere where this happens is owned by a company which needs profit.

Humanity was always insulated from the worst of what was happening at any given minute by the difficulty of transmitting information over older mediums.

This means incentives are not aligned here - speaking the truth, keeping you engaged, and keeping Disneyland clean.

A person in the UK who wakes up in the morning doesn’t want to see evidence that South American violence has resulted in an entire village being eradicated.

As a species, we were insulated from thinking at species scale. We could do village and perhaps nation scale on an average human basis. A rare few people can think at a planetary or greater scale regularly.

So how precisely should this be structured ?


That's an interesting point regarding human cognition limits.

Thanks to this comment I found "Capacity limits of information processing in the brain", by René Marois and Jason Ivanoff[1].

Intuitively I would think the gap in organizational difficulty between a village and a nation to be far more important than the one between a nation and the whole of mankind. I mean that I would expect a national organizational process to require far more logistic sophistication and far more explicitly reproducible/scalable processes than what you need in a village. Of course, some of these processes can be argued to conflict directly with a "mankind-scale mindset", but that doesn't really seem to make a greater gap than the one that isolates a mind at a smaller human scale.

[1] http://www.psy.vanderbilt.edu/faculty/marois/Publications/Ma...


Excellent point. Arguably content targeting should consider geographic location (and it probably does to some extent).

If we have to start categorizing content so we can preserve journalism with objectionable content, geographic scoring should get boosted even more.


That should be the top comment on this thread.

> A person in the UK who wakes up in the morning doesn’t want to see evidence that South American violence has resulted in an entire village being eradicated.

And really, why should they? It's not like they can do anything over the issue. They can get the abstract news and go live their lives.

Yet, it's extremely important that some people get the evidence, and those are specific people, but distributed geographically and culturally so that I have no idea how to identify them.


Couldn't they take down the content but keep it available for neutral third parties to evaluate and collect?

There has to be a better way to collect evidence of war crimes than leaving decapitation videos visible on social media for all to see.

Maybe the ICC could even launch their own platform where this kind of content can be submitted.


Removed videos need not be graphic. War crimes take many forms. Bombings of hospitals and schools. Mass shellings of cities. Destruction of vital infrastructure, etc.

You'll not see any blood or torn bodies. It will just expose lies of some nation state's propaganda.

These kinds of videos get removed too. Either directly, or indirectly by banning the entire account if it posts more videos that are questionable - which is likely. Or because of targeting by trolls using false complaints.

Though I'm not that worried. Many people already know the perils of YouTube, and archive everything at first sight.

It just serves the nation state's propaganda, which is usually left untouched because it's served via more official media channels.


You don't think there's value in people seeing what horrible things are being committed? If we reduce "news" to the words on a newspaper page or the heavily curated images in a TV news report, who's ever going to give a shit about anything?


This sounds like a perfect reason why sites like LiveLeak exist. I don't know if I care that Facebook doesn't host videos that go against its TOS, but if sites like LiveLeak are forbidden from hosting these videos even though they want to, it would make me worried.


This.

Censor the mall (facebook, twitter), where families and your grandma hang out, from stuff they would never want to see.

But let the wild west (liveleak, 4chan, etc) stay wild.


A thought experiment:

If 9/11 had happened today, what would their policies mean for that footage? What should they do to censor it, if anything? Consider too the increase in camera density over the past two decades.


These platforms can do no right. If Facebook/Youtube leave a video online of a beheading of John Smith from <Country X> then <Country X> inevitably calls them out for promoting and hosting this "illegal" content. Now, they're being chastised for taking it down too.

Maybe, just maybe, these companies have internal processes for handing over such materials to the relevant authorities, and don't need to rely on armchair detectives to investigate war crimes.


OK, what are these processes, and where are they documented? Because I seriously doubt they exist. I've certainly not read any articles about tech companies having any such thing, and actually helping researchers and archiving organizations.

Also there's no jurisdiction or relevant authorities for many war zones. So archiving and documenting is the only option now, to help potential investigations and trials in the future.


> Maybe, just maybe, these companies have internal processes for handing over such materials to the relevant authorities

Trusting authorities is not a solution, which should be obvious from an analysis of any country's recent history of scandals. Journalism is critical for keeping abuse of authority in check, and so censorship is a very serious problem.


The actual question is whether the process is used to silence lies or uncomfortable truths. Or, more Machiavellian, to shape the discourse in ways favorable to rulers without regard to either.

These processes are run by greedy people, not out of anyone's goodwill.


>These platforms can do no right.

Yes they can. Easily. Separate the part of the business that does content hosting and other technical services from the part of the business that's effectively a scaled up editorial board (curated recommendation service) into separate companies. Allow anyone to access the same services/data for the same fees so that people can get their own data and create alternative presentations. This will disentangle responsibilities, remove conflicts of interest and allow for real competition.


Maybe, just maybe, we can decide if content platforms are responsible for user content or not... and stick to that decision.

Quite frankly, I am against any form of censorship.


Let's say you make users responsible for the content they create; how do you intend to enforce that?

Country A's User B uploads Content C that User D in Country E finds objectionable, or that is illegal to view or possess in Country E, while in Country A it is perfectly legal.

How do you enforce that? Labels or tags aren't the answer, because Content C could be labeled/tagged as X,Y,Z where X,Y,Z are orthogonal to the content, and bear no relation to it.

Do you allow it to be viewed by User D, and if it isn't legal, then put that on User D to be responsible to...what? Unview it?

More than a few of these problems are all because we humans have decided without consensus or agreement that some information is "bad" while other information is "good", and even that "good" information is only viewable by certain people of that group who meet a certain age requirement.

All of it is very arbitrary, and at odds with itself on the world stage.

This issue won't be solved easily, if at all. Instead, we'll likely just continue to make more convoluted laws that can't be followed and/or all descend into totalitarian walled garden states isolated from each other while atrocities continue behind those walls, telling each other within that "we're the free and prosperous ones; those over there are not".

Why does this narrative seem so damn familiar...?


Very interesting article. Essentially: there is a legitimate need for such content, namely to hold the perpetrators accountable. So the question might be: how can this content still be accessed later by e.g. researchers/prosecutors/...? On the other hand that then opens the door for a dictator (we seem to have plenty of them nowadays) to ask for deletion of content and then use the stored materials still to identify e.g. regime critics. Really difficult issue.


I just wish it was easy to turn their algorithms off. All these companies were good before those algorithms started. Like feed for Facebook & Twitter, recommendations for YouTube.

That would solve a lot of problems. It would make sure people aren’t exposed to things unwillingly, and these companies won’t be on the hook for it, but it would also mean things aren’t getting reported and taken down as much as they should be (for law enforcement tracking purposes).


These tech companies decide, often in unison, to deplatform people, types of legal content, or political views. There is clearly coordination among news outlets, social media companies, and tech companies (everything from payment processors to web hosting).

Who is doing this coordination? To what end?

Is there a common thread in the deplatforming, smearing, and silencing campaigns?


There need not be a single "who" behind it in as much as a set of convergent processes with the same goal of control and power.

It is coordination of the "keeping up with the Joneses" kind: the Chinese get away with it and it gave useful results, so can we, ethics and long-term effects be damned.


> to deplatform people, types of legal content, or political views.

Who determines what is legal? What is legal in my country may not be in yours, and vice-versa. How do you decide what to show on an international media platform in such a case? Neither? Both? Let the user decide? What if my (your) government won't let me (you) decide?

On political views, I'm not sure how to answer or correct this.

History has shown humanity that there are certain political views which are very "sticky", in the sense that they answer to something deeper in the human psyche in a way that with the right leader(s), they can create and sustain mass followings that can play host to violence and atrocity of a frightening scale against those who don't adhere to such political (or social) views.

Such political views have proven so horrifying that certain parts of humanity have sought to stamp them out before they take hold again, to "nip them in the bud" before they even gain any kind of ground.

But the internet has proven to be fertile soil in which such views can hide and proliferate.

This is at odds with the desire to allow freedom of speech and information. Such views are akin to "fighting words" or "yelling 'fire' in a theater" where such a danger doesn't exist. But they are honestly worse than that; in certain ways, such views can come to border on cultish and religious behaviour.

I could perhaps make parallels between a certain religion and a certain government of ancient times, but I'm sure you can see that (it was all politics, and it continues to play out to this day).

Is it better if we just let these views proliferate, let their political voice get louder, let them influence others until what was an atrocity committed by them becomes a lawfully executed act, as they gain political power and control? Do we just let these views and the people behind them grow and grow, until another world war ensues?

Because that's where it leads. We know this. We've seen it play out multiple times in the past. Which is why such wide "censorship" occurs.

At the same time, I don't know whether I really agree with it or not. Right now, it's considered legal because the internet is not considered a "place" or a "public sphere". It's just a bunch of interconnected privately owned networks for the most part, and we've voted in politicians (not everywhere or worldwide of course) that want and have kept it that way.

It's strange, here in the United States with the current federal government; the party mostly in power, or who came into power, put in place people in the FCC (Pai) who virtually killed off net neutrality, and also (Pai and long before him, via ALEC laws pandered to city/state governments, most in control by the same party) made it virtually impossible for "public internet", or even third-parties outside the almost monopoly of cable and telephone providers to give better or public service of the internet to people.

In other words, this party has lobbied and pulled the strings hard in order to keep the internet tightly and corporately controlled, fully private, blocking public access or "transport" on the lines.

Yet certain people of that same party are now complaining when views they espouse (or maybe silently agree with) are being "suppressed" due to market forces and such on that same private network.

It's a classic (and dumb) case of "wanting your cake and eating it too".

If they really wanted their views to be allowed freer rein, they should have pushed for full net neutrality and for public and open infrastructure for carrying broadband for everyone. Make it cheap, make it equitable, make it easy to access.

Unfortunately, they know this will also allow other views, potentially views they don't agree with, to also be voiced, and they don't want that, either.

Sigh.


To be frank tech companies are a second order symptom of fucked up societal attitudes pushing the censorship.

The puritan mass-market push towards "child friendly" as a default leaves no place for the ugliness of reality. They don't care about suffering - they just don't want to see it. It is narcissistic sociopathy as a norm - how dare they run out of a burning building in their underwear! Children might see them!

That it is the middle of the night is no concern - they don't want "inappropriate content" in a back alley. They want it gone.

The tech companies tend towards doing nothing until the masses of "think of the children" complaints about "objectionable" content. The absence of those masses of people is what I miss the most about the internet not being mainstream. I personally believe what we need is to offend those busybodies as much as possible until they fuck off into their own bubble.


>The absence of those masses of people is what I miss the most about the internet not being mainstream.

I've seen this sentiment before, but I don't get it. Sure, the internet's mainstream and there are parts of it that have a mainstream culture and that's probably not going anywhere in the foreseeable future.

But the internet itself has no culture (heh). It's just a network. There are still corners of the internet that have whatever culture you're looking for. And if you can't find it, you can make it. It probably won't make much money or make the news, but the option is there.

That said, I agree with the main thrust of your argument that tech companies are just reflecting the mainstream preference to whitewash reality.


The sentiment is based on the transformative and displacing effects of the mainstream and "old internet" refers to culture as opposed to the network.

I am well aware that niche chasing is the way to achieve it for now. What the sentiment longs for as ideal is for influence to have worked more in the opposite direction. Instead we get advertisers trying to be the tail wagging the dog again when people left their last venture over the blandness they imposed.


Aren’t sites like LiveLeak better for this kind of content? Won’t this kind of content make it there anyway?


It could be ordinary people using what they're familiar with. If an emergency suddenly happens, who's going to spend half an hour dicking around researching what site to use and going through their signup procedure and installing their app and all that?


Stratechery had an article that I think provides a reasonable compromise for displaying and taking responsibility for content some might find objectionable but which others would argue has value. It involves the poster being responsible and owning the consequences for the content, as opposed to some third party platform which has to try to please opposing sides.

https://stratechery.com/2019/a-regulatory-framework-for-the-...


The issue with doing things at huge scale is all the edge cases. If Facebook wants to have all the benefits and money it gets from owning its huge network, it also has the responsibility of having to treat different content with nuance. These kinds of issues are one reason why Facebook wants to move towards a more private network with more private interactions, so it can avoid having to deal with the endless complexity of human interaction in public.


Surely there's a difference between not publishing (to the general public) on their platform and not making such content available to the relevant authorities.


I think the whole notion of war as a whodunnit crime is pretty moronic at a fundamental level. #1, you don't have jurisdiction. #2, if you hoped to gain jurisdiction, the most important thing is not piecing together the facts of the case, but winning the war.

If you want to go after the Libyan National Army for doing bad things, you have to go to war with Libya and defeat their army.

The trouble with going to war with Libya, if you are the United States, is the Atlantic Ocean and the absence of a real driving interest, such as self-defense, that would lead Americans to support such a war.

In order to goad Americans into a war with Libya, the best hope for doing so is to wave bloody shirts at them and somehow convince them that Libyans are going to blow them up at any time now unless we go to war with Libya ASAP.

Facebook and the rest of them don't have a duty to simultaneously prevent the wrong people from viewing Ogrish-style execution videos while also preserving them.

So they want Americans to be radicalized in favor of war against Syria and Libya (causing far more deaths than the typical radicalization which just results in teens posting lots of pepe memes), but not radicalized in favor of Islamic terrorists. That is just an impossible propaganda requirement that cannot be met.


I’m sorry, but this article lost my interest by the opening sub-heading.

Algorithms that take down “terrorist” videos could hamstring efforts to bring human-rights abusers to justice.

“hamstring”? WTF.


Of course the point I forgot to make is that we suddenly have access to all the world's evidence - every good and bad issue around the globe now passes through these firms' data centres ... and we are not used to that. Our brains seem wired to weight bad news much more heavily.

We need to remember Douglas Adams' advice - "It's vitally important not to have a sense of perspective"


It's not evidence, at best it is hearsay. We have no way to validate truth from fiction.


I'm not interested in seeing war crimes on facebook. If you want to disseminate that stuff, use a public forum where it is welcome. There are countless places. That stuff on facebook, and the spin users give it, is not welcome by me and no doubt by others.


I'm not interested in seeing your baby pics on facebook. There are countless other places you can share those.


Facebook is millions, if not billions, of tiny public forums. The problem is that people seem to want them all to have the same standards. If you don't want stuff like this on your wall or feed, it's easy to keep out. If the admins of a group don't want this showing up, they too can filter it out. But people seem to be demanding not only that they don't have to see content they dislike, but that no one else can see it either.


It's the nature of censorship that people want to censor what other people see, not what they see themselves. We believe we're able to make our own decisions and we're in control of our minds, but those other people who aren't us might not be and are in danger of bad influences.


Speak for yourself. People who support censorship, sure. "We"? No.


By "we", I mean most people, as evidenced by all democratic governments censoring various things, presumably because their voters want to prevent the rest of their country from seeing them. Same goes for restrictions on drugs, alcohol, cigarettes, etc. People who're concerned about their health won't be doing those things themselves and don't need a law to stop them, but they want to protect what they see as irresponsible other people. That might be a fair judgement but it's still a judgement about other people being somehow weaker and needing to be protected from themselves.


I don't care what you do in your own groups. If you show me pictures of dead people I'm almost certainly banning you from showing up in my feed. And if my feed keeps showing me dead people or other crimes then I would just leave facebook.


I've noticed that myself as I'm analyzing influential public statements with regard to the Rohingya for my Bachelor's thesis on their discrimination and genocide. Unfortunately, Facebook was one of the biggest channels for dehumanizing and incendiary content, but this is all gone now. I think FB should at least provide researchers access to an archive of removed content.


This is a very serious issue, by far not limited to Facebook and YouTube. The problem goes much further than private companies not wanting the content on their sites. It's very much motivated by a push by governments to purge terrorism propaganda and material that could be used to radicalize people.

The best example is, or rather was, Reddit's /r/syriancivilwar. The subreddit collected each and every piece of information available on the conflict, from every side. This brought it into direct conflict with Reddit admins, who banned multiple users for linking to terrorism propaganda, namely ISIS videos and AMAQ statements (AMAQ being the ISIS news agency). The subreddit is now a shell of its former self because it has been reduced to a propaganda platform for the factions which are socially acceptable, instead of a place to document the war and its atrocities.

It doesn't stop there, however. The content wasn't hosted on Reddit but literally anywhere there was a hope it might not be immediately taken down, from streaming services favored by pirates to archive.org mirrors. All of which get taken down for distributing terrorism propaganda rather than for violating the TOS.

When you hear about terrorism propaganda, there is a good chance we are talking not just about evidence of war crimes, but also about material that is important for understanding how people get radicalized, and how to counter that propaganda. People at risk of radicalization can still access the material, while the public as a whole loses the insight into the minds of those people entirely. And the people at risk drift off into echo chambers. For lack of information we are doing nothing to counter that propaganda; quite the opposite. I doubt the media at large could have done ISIS a bigger favor than their coverage did by focusing simply on the barbarism of the videos. They missed the point entirely: the barbarism was very much a part of ISIS propaganda. The reporting was great for getting views, but it only reinforced the people who had already entered the echo chambers. And hoping for a purely military solution sounds absolutely naive to me. It is very much a task for society as a whole to watch out that people don't drop into those echo chambers and get radicalized.

Framing this issue as "AI gone wrong" misses the point entirely. The content would also have been deleted by a human. And given the political developments, it's not overly pessimistic to assume that people publicly hosting content will be forced to remove any such content immediately or face prosecution. Which only helps with getting re-elected by signaling a fight against terrorism, instead of actually doing anything against terrorism.

Not to sound overly pessimistic, but the situation looks really dire to me. I had hoped that we as a society had learned our lesson about book burning. And the issue is not a new one. Mein Kampf is still not readily available in Germany for "fear that it might turn people into Nazis". Serdar Somuncu did the only reasonable thing here and started touring Germany reading from Mein Kampf as well as famous propaganda speeches, to show how ridiculous they are. There is nothing special about them. There isn't some hidden insight in those books that turns people into Nazis. But our treatment of them might convince people at risk that there has to be something to them, or they wouldn't be banned in the first place. I fear very much the same development with the array of far-right terrorists and their manifestos. Instead of being treated as the ramblings of lunatics they are, they get treated as something dangerous that might actually convince people.


The numbers look very small compared to Russia's backing. Russia has whole TV channels backed by the FSB, whole troll factories, and its own agents even in major Western media.

Why does an effort a few orders of magnitude greater not help Russia, while a much smaller effort helps the USA a lot?


You have a long history of using HN for political and nationalistic battle, and we've asked you before to stop. This sort of flamewar is not welcome here.

We detached this subthread from https://news.ycombinator.com/item?id=19866310 and marked it off-topic.


Quantity isn’t always the most important quality. The US (and the UK?) seems to present propaganda as something the bad guys do, which is easier to believe if the changes are limited. Limited doesn’t mean unimportant, and it being limited forces focus on important changes.

(I’m not assuming competence here: I assume that’s random over time)


So, you are saying Russia counter-presents USA propaganda as something the good guys do? That looks stupid. Maybe I misunderstood you. Can you say explicitly who does good propaganda and who does bad propaganda? Who is good and who is bad?


Mainly I’m saying that the propaganda produced by the USA says “Propaganda? Us? Please put away your tinfoil hat, only bad guys do that.”

I don’t know what Russia’s self-image is, but the impression I have from the outside is that it might as well be “<green frog emoji> Работа для нас в государственной пропагандистской компании <Russian flag emoji>” (“A job for us at the state propaganda company”).


America has the same things, besides national TV? Every country has them; I believe America is no exception.


"Number looks very little, if compared to Russia backing. Russia has whole TV channels backed by FSB, whole troll fabrics, and their own agents even in major Western media."

[Citation needed]


See https://euvsdisinfo.eu/

About EU vs Disinformation campaign

This website is part of a campaign to better forecast, address and respond to pro-Kremlin disinformation. The ‘EU versus Disinformation’ campaign is run by the European External Action Service East Stratcom Task Force. The team was set up after the EU Heads of State and Government stressed the need to challenge Russia’s ongoing disinformation campaigns in March 2015.


You could just as well have given a link to google.com.

You have made several specific russophobic claims, and I challenge you to substantiate them rather than leave that as "an exercise for the reader".


These claims are not russophobic. We have been at war with the Russian Federation for 5 years; that clearly demonstrates we have no phobia. They are anti-Russian.

Please list the claims you are interested in digging into.


Are you aware that in 2018 Russia was still the main trade partner of Ukraine, despite all the russophobic noise coming from Kiev?

It clearly contradicts your statement that Russia and Ukraine are at war.

I quoted the sentence that needs factual proof.


The major trade partners of Ukraine in 2018 were: EU (import ~55%, export 45.1%), RF (import 14.3%, export 7.3%), China (import 12.2%, export 4.3%), USA (import 5.4%, export 2.2%), and so on.

> It clearly contradicts your statement that Russia and Ukraine are at war.

No, it displays the inertia of two big and complex systems.

> I quoted the sentence that needs factual proof.

You got it. The whole site was created by the EU and is dedicated to disproving Russian propaganda, with links to propaganda narratives and factual material.

For example: https://euvsdisinfo.eu/figure-of-the-week-136000/


The EU is not a country.

"it displays inertia of two big and complex systems"

Yeah, right, I can imagine the US trading with Japan during the whole war. Oh, wait, no, actually, I cannot.

"You got them"

That is a lie. Even the link you gave doesn't have much to do with your statement.


> EU is not a country.

The EU is a weird special case, as it’s more tightly integrated than anything else that isn’t a country, despite not being a country. This was one of the points used by both sides in Brexit.

As for the general argument the two of you are having, I no longer think “war” is a useful boolean state, just a political decoration sometimes associated with the use of force against the political interests of another nation (“nation” being another surprisingly fuzzy concept the closer I look).


Can you give an example of such a use of force which is not a war, besides Russian help to the Ukrainian rebels?

Turkish, American, Israeli and Arab actions in Syria come to mind, but what else?


[flagged]


They're not in the war crime evidence storage market.


Facebook disseminates propaganda: war crime!

Facebook deletes evidence of atrocities: war crime!


Yes, that's the point.

Make it impossible for them to operate in this market.


Is this really a technology problem? Are we so naive that we believe these things were not happening in the past?

This problem cannot be solved in the consciousness we currently exist in.

The people running the world and those who ought to run it are entirely different.



