"Third, we will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find. We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints."
I cannot help but think that this will not be applied evenly - that some political content will be allowed and some will not.
Big US companies, and all US companies, big and small, have always chosen, very carefully, what sorts of content they want to distribute, what sort of image they want to portray, what sorts of causes they want to publicly support, and what their public image will be. You were never able to find atheist content in a Christian bookstore, you could only buy the censored version of CDs from Walmart, you couldn't find pornography at K-Mart, The Disney Channel never broadcast any politically incorrect material, and you couldn't buy t-shirts with "inflammatory religious or supremacist content" at Old Navy. Even the original Geocities had strong content restrictions.
Are you loudly complaining Old Navy doesn't sell a "Hitler was right" shirt? Are you complaining about the "censorship" going on at the Museum of Fine Arts since they don't have a white supremacist exhibit?
In fact, pornography is legal and YouTube does not allow pornography. Why aren't you already up in arms about that "censorship"?
>Are you loudly complaining Old Navy doesn't sell a "Hitler was right" shirt? Are you complaining about the "censorship" going on at the Museum of Fine Arts since they don't have a white supremacist exhibit?
Facebook is not Old Navy (one of thousands of competing clothing stores), it's a ubiquitous service with over a billion people in it, almost everybody on the internet.
Like Google, it's more of a basic internet service than a mere website. And its content (and content policies) are a factor in political discourse, both in the US and outside of it.
Secondly, to restrict the discussion to examples that your audience will clearly dislike ("Hitler was right", "white supremacist exhibit") is misleading, because the problem is with items that are not that clear cut but will be censored anyway.
E.g. "Iraq doesn't have WMDs", "CIA is involved in drug trafficking", "US supports death squads in Latin America", "Dodge the Vietnam draft" and so on -- to limit the examples to such items from the past. What would a mainstream company who "censors" stuff allow from those back in the day when they were hot issues?
Or let's take it to today, how about pro/anti-Trump, or pro-anti Assad, or pro-anti Black Lives Matter, pro-anti Manning, pro-anti Assange, etc?
Even stuff that the majority in the US might disagree with, the majority in another culture/country might legitimately agree (and not want it censored) -- but they'd have no say. A single country (and one from which many countries have scars from) will control a large part of the internet discussions (through Facebook, and similar policies in Google, etc) of other countries.
Except in a huge stretch of the notion, that doesn't justify invasion, war, hundreds of thousands dying, and trillion spent -- some degraded barrels of mustard gas and the like from 30+ years ago, the era of Iran-Iraq war...
They aren't installing a filter into the browser; they're OPENLY addressing a problematic issue on YouTube, one of many, many video hosting websites. Nothing stops a terrorist from getting a computer, an internet connection, and hosting his own damn video calling for the murder of women and children.
When you call for violence against non-combatants you're breaking the law in every single western country. If there were only one web browser and the company behind it were implementing universal blocking measures, maybe I'd agree with you, but honestly I'd have to think long and hard first. Radicalisation is impossible to survive in the long run as the power the average individual wields keeps going up.
They didn't say they were going to start removing videos with illegal content, they already do that. They said that they were going to start removing videos that don't break any rules, yet the company deems them unsavory. Which is incredibly frustrating since 1) YT has become the center of our changing culture, and 2) not everyone lines up with the PC Californian culture that dominates large multinational corporations.
They didn't say they would remove the videos, instead they will display an "interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements." Which is not even on the level of a shadow ban, as practiced e.g. on HN.
I wonder what effect Google's wagging finger and implied scolding from an interstitial will have on people who stumble across a video they like but is branded as naughty.
I find it an interesting question because:
A) Not every video branded as culturally unacceptable actually will be. Not every video is as bad as the worst-case hypothetical used to justify the content classification.
The landscape of cultural attitudes differs from that of the California-based content minders. The categorization can be flat-out wrong; there will undoubtedly be a small percentage of videos that even the minders see as misclassified.
B) Social interventionist policies can - and often do - backfire.
e.g. teens who deliberately seek out the taboo. The allure of R-rated movies, M-rated games, Explicit Lyrics, and underage binge drinking can cause them to live through a period of their life less well-adjusted than if that content hadn't been aggressively filtered from their lives in the first place.
If they do that, they might as well remove the videos, since they have the same goal in mind. Look at the quarantined subreddits on Reddit. While the company gets to say it allows free speech, it basically removed those subreddits from existence, thus successfully controlling the narrative. Do we really want large corporations to intentionally guide the direction of our culture? Personally, I don't. In the end, a corporation would guide it in a direction that favors itself and its donors.
> This will be highly subjective, which is the problem
Also, they have to determine these norms for the whole planet (minus North Korea). Every attempt so far to set cultural norms for the whole world has failed; let's see if they do better.
I think that once upon a time Facebook and Google wouldn't do such things for fear of losing customers to the competition. Now it's different: they got us hooked, and now they are behaving as if they were a real government. This consolidation around 'platforms' and lack of competition is not good for the internet.
>When you call for violence against non-combatants you're breaking the law in every single western country.
That's not entirely true. Abstract advocacy of illegal violence is protected speech under Brandenburg v. Ohio, 395 U.S. 444 (1969). Only when the incited violence is imminent (as opposed to at some indefinite future time) does the speech fall outside the bounds of the First Amendment.
Don't worry, once their opinion is enshrined in a neural net somewhere, they'll no longer be in charge. Then it will just be a cold metallic box that will silently, and efficiently judge you.
Besides, my local librarian doesn't decide what books the library will have (what's this, USSR?). They do the initial ordering management, but library members can request any book and have it ordered.
There's a mechanism for us curating our own content: we decide to which pages/friends we subscribe. How about that?
Well, most libraries where I live/know have central boards / panels that decide those things -- not necessarily staffed by professional librarians (e.g. the state appoints members there from the academic community, etc., some are voted, etc.). There are also deals to get copy(ies) from each published work (by specific or all publishers).
Librarians do handle the organization and everyday operation, archiving projects, curated collections open to the public, etc.
> Besides, my local librarian doesn't decide what books the library will have
vs
> curated collections open to the public
What's the difference between Google removing a terrorist video from public view (they never delete anything), and a library having a book but the librarian not making it available to the public?
I mean, you rebut me when I say Google curates its own content, then turn around and say librarians are different, their duties include curating content.
Edit: How come people don't call Google some sort of evil censoring overlord when it comes to child pornography? You'll get the good ol' "I defend to the death your right to speak" when it comes to terrorist videos, but not child exploitation ones. Where are the people angrily demanding that Google put child pornography back into their search results, out of a demand for freedom of speech for all? Why is that topic treated differently from terrorist recruitment videos?
>Where are the people angrily demanding that google put child pornography back into their search results, out of a demand for freedom of speech for all? Why is that topic treated differently to terrorist recruitment videos?
Because most people act irrationally when it comes to related issues, and because other people (still a minority) don't want to be branded negatively by hysterical public/pundits.
One might as well ask where were the vocal proponents of black rights in 1920 Alabama?
Responding to your edit: some people might disagree with child pornography laws and policies but recognize that any protest will be branded as supporting pedophiles/hurting children, and decide that they'd rather die on a hill upon which they have a chance of winning.
I have mixed feelings about this and I share your concern about it not being applied evenly.
I think what bothers me the most, though, is that we're too reliant on one company to be the gatekeeper to the vast majority of video content online. I'd feel better if there were more competitors in this space, and if YouTube was overly restrictive of certain content, you could just move to a different service and have a similar experience.
I can't really blame YouTube for being "too successful", but this is one of those cases where being a near-monopoly can put a company into an awkward position.
It won't be, because "inflammatory" (esp. juxtaposed to "religious"), and "supremacist" are subjective. Unpopular viewpoints, especially regarding religion and race, can easily be branded and thus squelched.
That's the status quo. One could argue nearly anything is political content, or art. They're announcing that they're going to draw the line differently, not that they're drawing a line for the first time.
There are Leftist movements in the US that are arguably just as violent as those on the alt-Right. What's the chance that these policies apply to both sides equally? Given the history, I'd say not much. And I don't think that's a good thing.
EDIT: Unless someone is saying that AntiFa ISN'T violent?
The recent controversies at Evergreen College provide an illustrative example. Videos of student protestors bullying professors, à la the Maoist Cultural Revolution, were taken down due to issues with "bullying".
It's worth noting that these were original videos uploaded to YT by the protestors themselves!
IMO, the entire situation with Evergreen College is closer to being an alt-Right propaganda piece than a leftist one. Screaming at and demanding the resignation of a professor essentially for saying that "On a college campus, one's right to speak - or to be - must never be based on skin color" looks pretty bad, you know? The fact it was produced by leftists who need a far better grasp of optics really doesn't change that.
Removing things that make one side look bad - but not similarly doing the same for the other side - is bias, yes. Unless you think YouTube would have removed videos taken by the protesters if the situation was reversed, and it was a neo-Nazi protest that was looking horrible?
Or are you claiming that the people at Evergreen are all secret members of the Pepe Brotherhood and that they are not holding their leftist views honestly?
No, DuskStar is claiming that the people at Evergreen are leftist college students who did something stupid, and the thing they did makes leftists as a group look stupid. So other leftists are trying hard to hide the thing that was done, to avoid the PR fallout.
This was pretty clear in what DuskStar wrote. See the "the fact it was produced by leftists who need a far better grasp of optics" bit.
The idea that hundreds of non-white alt-righters could gather at a college and all act like extremist liberals while not leaking the fact they're alt-righters is pretty absurd to me.
He didn't say that. He said it was playing in the media the way a right-wing propaganda piece would have played, even though it wasn't one. I thought that was pretty clear, especially given the last sentence of his first paragraph.
How about doing some old-fashioned data gathering for the benefit of the community? Document all violent events, how many YouTube videos are posted of each, how many are removed and how long it takes for each to be pulled down.
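For what it's worth, a minimal sketch of what that community dataset could look like (Python; every name here is hypothetical, and since there's no public takedown feed, the posted/removed timestamps would have to be collected by hand or by periodically polling the video URLs):

    # Hypothetical tracking sketch -- the record layout and summary statistics
    # are assumptions for illustration, not an existing tool or API.
    from dataclasses import dataclass, field
    from datetime import datetime
    from statistics import median
    from typing import List, Optional

    @dataclass
    class VideoRecord:
        url: str
        posted_at: datetime
        removed_at: Optional[datetime] = None  # None while the video is still up

    @dataclass
    class ViolentEvent:
        name: str
        videos: List[VideoRecord] = field(default_factory=list)

        def takedown_hours(self) -> List[float]:
            """Hours from posting to removal, for videos already taken down."""
            return [(v.removed_at - v.posted_at).total_seconds() / 3600
                    for v in self.videos if v.removed_at is not None]

    def summarize(events: List[ViolentEvent]) -> None:
        for e in events:
            hours = e.takedown_hours()
            line = f"{e.name}: {len(e.videos)} videos tracked, {len(hours)} removed"
            if hours:
                line += f", median takedown {median(hours):.1f}h"
            print(line)

    # Made-up example: one video pulled after ~30 hours, one still up.
    event = ViolentEvent("example attack", [
        VideoRecord("https://youtube.com/watch?v=AAAAAAAAAAA",
                    datetime(2017, 6, 1, 9, 0), datetime(2017, 6, 2, 15, 0)),
        VideoRecord("https://youtube.com/watch?v=BBBBBBBBBBB",
                    datetime(2017, 6, 1, 10, 0)),
    ])
    summarize([event])

With takedown latency per event on record, claims about uneven enforcement could at least be argued from numbers instead of anecdotes.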
So far, no, they aren't. Not even close. Not that there isn't any violence from the left, but not nearly as much.
But using the simplified left/right political spectrum is a terrible metric in any case. If there was ever a time to at least use the political-compass metric, to differentiate authoritarian/libertarian opinions as well as left/right, it is now. It can be hard to realize as an American, but judged globally you are comparing far-right authoritarians to centre or even centre-right authoritarians as if they were opposites.
> Not that there isn't any violence from the left, but not nearly as much
Are you sure? I'd love to see credible sources for this data, for the US. Note that sources talking about "hate crimes" are unfortunately not usable here simply because they by definition exclude violence towards groups that violent left-wing groups target. I am aware that after this last election there was an upsurge in hate crimes; there was also an upsurge in violent attacks on Trump supporters. I have had little luck finding good numbers on what's going on, and would appreciate pointers.
Also, are we talking about recent history, or historically? Because again, for the US, there was a good deal of left-wing violence in the 70s that has been successfully whitewashed out of history. For example, some (but not all) of the leaders of https://en.wikipedia.org/wiki/Weather_Underground moved right on to positions like "Clinical Associate Professor of Law at the Children and Family Justice Center at Northwestern University School of Law"[1] and "communications director at Environmental Advocates of New York" (and for the same one, "heads up his own consulting firm called Jeff Jones Strategies that specializes in media expertise, writing, and campaign strategies that help grassroots and progressive groups achieve their goals")[2]. Going further down the list we have people becoming high school teachers and then academics specializing in education[3], and mathematics instructors [4].
At least as of 20 years ago, none of this was discussed in high school history classes that cover the period. Most people who didn't personally live through it aren't aware that any of this ever happened.
> or even centre-right authoritarians as if they were opposites
The centre-right authoritarians, by that definition, are not the ones involved in what would be considered "leftist violence" in the US.
Oh, definitely recent; historical would be different... although you'd have to pick your dates very carefully to include the blip of leftist violence and exclude, say, the Klan. Not that you don't have a point about the lack of education about the leftist violence of the 70s, but the moral victors tend to write the history.
Sure, there was an uptick in attacks on Trump supporters. But you take all of those and I'll take only violence against Muslims or trans people, your choice. Hell, if I took only violence against non-Muslim brown people mistaken for Muslims, the numbers would still be higher.
Well, most of the abuse is from the right, so most of the corrective action will be toward the right. It won't be equal in terms of the amount of action against a given side, but it will be equal in terms of blocking dodgy content.
Given that the costs will still be present but Google won't get any benefit for hosting these videos, how likely are they in the future to simply delete them?
You seem to simplify quite a lot. Costs should go down, with videos not being recommended and thus watched less. Not excessively oppressing their users should be a benefit. Sure, we could find more of both costs and benefits, and I wouldn't be sure about the resulting score.
I also wonder how much it will cost to classify a video versus just hosting one of those average and barely viewed videos.
This is not really new. Looking for "biases" that give better search results is kind of the point; it goes back to PageRank. (A preference for more popular content is going to rank less popular content lower.) Similarly for many other search ranking signals.
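To illustrate with a toy model (not Google's actual ranking, which is unpublished; the two signals and the 0.5 weight below are invented): any score that blends relevance with a popularity signal demotes less popular content by construction.

    import math

    # Toy ranking: relevance blended with log-views. The signals and weight
    # are invented for illustration; real ranking mixes many signals.
    def score(relevance, views, popularity_weight=0.5):
        return relevance + popularity_weight * math.log10(views + 1)

    videos = [
        ("niche, highly relevant",    0.9, 1_000),       # scores ~2.40
        ("mainstream, less relevant", 0.7, 10_000_000),  # scores ~4.20
    ]
    for title, rel, views in sorted(videos, key=lambda v: -score(v[1], v[2])):
        print(f"{score(rel, views):5.2f}  {title}")

The mainstream video outranks the more relevant niche one; that "bias" is the signal doing exactly what it was added to do.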
Right, quite honestly I'd like to see them just make a mostly uncensored spinoff and set up censorship on vanilla youtube. Lax, uneven censorship gives you the worst of both worlds really.
Already happening with alternative media. Also, anti-"hate speech" laws are being passed while the term isn't even defined.
The establishment and left extremism figured out that the internet, as the greatest platform for real free speech, undermines their prerogative of interpretation in the public sphere.
This is a fight to control public opinion, and we will lose it if we continue to be jellyfish.
In the West, terrorism works through media exposure. Fighting it in the media is a good strategy. The violence itself is ridiculously low-volume per capita.
If traditional media stepped in, it would be a major win. Instead of having a symbiotic relationship with terrorism, where they hype up the terror, they could stick to reporting the facts and the situation rather than providing non-stop terror-porn entertainment to the audience.
Once people realize that terrorism is just one small risk among other much bigger risks (like high-rise buildings with flammable material), they stop fearing it and the terrorism stops being effective. It will never completely vanish, but it will not attract radicals as it used to.
>If traditional media stepped in, it would be a major win. Instead of having a symbiotic relationship with terrorism, where they hype up the terror, they could stick to reporting the facts and the situation rather than providing non-stop terror-porn entertainment to the audience.
>Once people realize that terrorism is just one small risk among other much bigger risks (like high-rise buildings with flammable material), they stop fearing it and the terrorism stops being effective. It will never completely vanish, but it will not attract radicals as it used to.
x2. Traditional media outlets are a massive force multiplier for anyone attempting to terrorize through violence. I understand they have an economic interest in hyping up that sort of thing but maximizing the visibility and publicity of a terrorist act is helping them, not hurting them.
I don't agree that traditional media outlets "hype up the terror". If you compare the coverage of, for example, the recent attacks in London and the fire you're referring to, they seemed to get about equal billing.
On the smaller scale, violence will get more prominent coverage, yes. But that's just a reflection of public interest, and of how much the intent behind an act makes a subjective difference. News outlets like the BBC or the Guardian seem to be far away from "terror porn", and in the case of the BBC it's obviously wrong to suggest they're doing it for money, seeing as they're not financed by ads.
How often do you hear about shootings in South Chicago in world news? 294 people have died there so far this year![1]
Terrorist events should be treated like any other murders and people will very quickly stop caring. They are just hyped up by the media to drive views and the governments love it because it allows them to expand their power.
I used to hold this very view but now I don't watch the news. The reason why I don't hold this view is in the very name and nature of the news, it's "new". Everyday commonplace events even horrific ones are not new and therefore are not part of the news. Choose to not watch.
Catastrophe porn sells in general; fires, plane accidents, natural disasters, etc. don't increase if the reporting increases.
Terrorism works through media, so it should get less media exposure. Ongoing video feeds, constant speculation, etc. should be treated as enemy propaganda even if they go through a neutral channel.
If there is an ongoing situation, short messages from the authorities are good, with deeper reporting at a later time. TV cameras on the scene and all the graphical drama-building are working for the enemy.
> and in the case of the BBC, it's obviously wrong to suggest they're doing it for money, seeing as they're not financed by ads.
The BBC and others are doing what everyone else does, and doing it for viewers. They honestly think that creating drama is being neutral. It's not.
Yes. This is most obvious when comparing, say, an "ISIS-inspired gun attack that killed 3 people" vs. a random mass shooting where 20 died. The former is reported on far more heavily, while for the latter it's recommended "not to politicize the event." But after every terror attack, the war nuts and surveillance nuts come out to call for bigger military budgets and more surveillance.
This makes me wonder whether it'd help if media sources stopped publishing the names of terrorists in their articles. Basically, don't profile them. Don't say where they came from or what their personal situation was. Don't interview their friends or family members.
Just call them 'evil monster' or something similarly vague and hope history forgets their existence. Otherwise, you end up with a lot of criminals and terrorists causing great amounts of suffering to have their names recorded in the history books.
Or terrorists (for whatever definition of terrorist) feel they need to "step up their game". I'm already afraid of whatever gruesome thing ISIS will think up next, now that carving off someone's head is not considered gruesome enough for immediate media coverage anymore...
> We have also recently committed to working with industry colleagues—including Facebook, Microsoft, and Twitter
All of those platforms that are notoriously bad at curbing terrorism and great at promoting the most asinine "I can't believe it's not a troll" social justice concerns.
I truly hope these changes will include US and Allied forces' combat footage.
I've seen videos on YouTube containing some of the most shockingly needless violence one can imagine, with accompanying cheering and derogatory language from British and US soldiers (often directed toward foreign people). Disturbing.
It seemed like footage aimed at radicalising people to me.
Do you mean to say that the footage should be censored? I think their crimes should be seen by all.
A better stop-gap measure against the radicalization of American terrorists would be to prevent the DoD from recruiting at high schools, at sporting events, in movies and video games, etc.
This is a terrifying step for the internet. These kinds of steps lay the groundwork for censorship but fail to consider their impact in the event that the bad guys come into power or gain influence over these management systems.
It's less "terrifying" than the current alternative, where people can openly support various forms of extreme violence, get away with it, and monetize on the way.
Mind you this "internet" is private biz. This is a pretty obvious "don't vomit on my rug" case. Don't like it? Find another party.
I really don't care if people "support" various forms of extreme violence by watching youtube movies. That is WAY better than supporting it by committing various forms of extreme violence.
Is there any actual evidence that these youtube videos are in fact driving people to commit acts of terrorism?
In the hope they'll make people become terrorists. That is not the same as observing that those videos actually do it.
Also, some of them are probably posted just so that the Western media can find them and scare the population shitless with them, pumping the huge autoimmune response we have wrt. terrorism here.
People are quick to condemn certain states for their censorship, but give a pass for Google easily. Don't get me wrong, in this particular case there are very few who would argue with Google, but not long ago there were discussions about search results bias during the election campaigns for example.
Indeed, if Google thinks they are capable of shaping the narrative, what is stopping them from redirecting people, e.g., to the Coalition for Better Ads when they search for a content blocker? It is not hard to imagine scenarios where Google's and individuals' incentives are not aligned, and that slope is slippery.
We already have the groundwork. Obscene images are illegal, and morally I've yet to see a justification for why (a justification for some cases, yes, but for most cases I've yet to see a good justification that is consistently applied in other areas). Government could just declare any extremist material obscene and ban it under the existing framework.
It probably costs like $20 and a WordPress plugin to set up a video hosting/social network site.
If that's impossible to achieve, maybe there is some analogue of Darwin's law about whether your content deserves an audience.
But the real thing isn't the platform. These groups are after the audience. And that is something I see Google as having no moral responsibility to provide.
They aren't locked out of the internet at all, or prohibited from creating/distributing content (to which I would object). They just can't use a private company's distribution channel at the same level as other users to get eyeballs. Cry me a river. It's not the end of the internet.
While your statement is quite vague, it is nonetheless completely inaccurate. "Supporting violence" is subjective and ambiguous. Are action movies and video games supportive of violence? How about war documentaries?
Furthermore, while "hate" speech is not the best use of our ability to communicate it does fall under the protection of free speech - at least in the United States. In some groups hate speech (however it is you describe it) is acceptable.
Just as you are free to not associate with those groups the internet should be free to express the ideas you disagree with.
You don't have the freedom to interfere or disrupt the freedoms of others.
It absolutely is acceptable. There's a lot of hate being stirred towards ISIS, and violence is supported quite openly in the media, and also used in the real world.
The West has enemies, and it stirs up hate and uses violence to target them, adding to the mayhem in the Middle East and killing tens of thousands of people every year.
Violence is very acceptable to the Western democracies. You have to be willfully ignorant not to see this after the last two years and the 70,000 people killed by the coalition during that time.
You're confusing is with ought. Violence is only acceptable in very reasoned, measured manners, and only to achieve clear and decisive goals. Hate is not acceptable anywhere. Outrage, maybe.
edit: Anyway. There's a lot of hate toward real or believed enemies of the West. It's easily visible online in news websites' comment sections, on social media, etc. It is somewhat rare to see people standing up against it. Even on platforms where you need to provide government ID to be able to discuss, and your name is visible to others, people feel perfectly fine spewing hate against Muslims, ISIS, or whatever in very non-measured ways.
My point with the previous comment is that it is important to recognize Western hate and violence too, and not brush it away, because it is very significant in its effect on people's lives.
I'm not from the US, so it might be different in different countries.
Snail mail letters are the oldest somewhat comparable format. Privacy of your envelope of hate speech is secured by law in most western nations.
However, we have recently seen that principle degrade with all kinds of exceptions. It's weird given how little potential Islamist terrorism has in the West compared to what communism, republicanism, or Protestantism had in the past.
The basic state of human nature seems to be struggle. I suspect the engineers at Google understand quite deeply how serious this is. But I am afraid we are approaching a state of war.
As with all weapons, there is no doubt a risk that enemies will get the weapons. Open source will in fact make it easier. Network control will be more important.
And there are many, many fronts. Trump. ISIL. Any number of dictators (the Chinese Communist Party, Putin's oligarchy). And only a few bright lights of democracy.
I'm not sure terrorism has ever accomplished its goal to effect radical change. But a powerful entity ostensibly protecting us from a group perceived as a common evil that's not worth debating?
I am vehemently opposed to babysitting society, but violence is not speech, it's action. That much seems fair. I do worry about step 3. It sounds like it's going to silence a lot of minority opinions or screw over groups who post not-so-liberal commentary videos, etc. Anything hiding behind an interstitial is just crackpot conspiracy theory... Google is a negative influence on society. Bleep. Zap.
Google will cut out minority political opinions that would negatively impact Google if they became popular. The easy solution is to become the sheriff of the Internet and silence those people ASAP.
It's surely better than the current situation of literal neo-Nazis openly talking on YouTube about “white genocide” and how the “Jewish question” must be “solved”, without Google seemingly so much as batting an eyelid.
I'd rather have the neo-Nazis spewing their bullshit out in public, where everyone can see how stupid it is. Driving something underground only makes it more attractive in the end - after all, if it was wrong The Man wouldn't need to suppress it, right?
The problem with allowing neo-Nazis to spew their bullshit alongside the Diet Coke and Mentos videos is that it creates a normative appearance: "this isn't being removed, so there is something tolerable about it, so maybe it's not so bad."
> Driving something underground only makes it more attractive
See, I think you're wrong, and that the secret to attraction is visibility. As an online advertising company would know.
Many people experience political suppression nearly everywhere online simply because they do not know how to communicate in a civil and honest manner. The backlash is not due to political suppression by companies, it is due to the basic fact that they are not welcomed in most online communities. Official censorship policies tend to reinforce in them a belief in the conspiratorial nature of such rejection, but it is not the cause of the rejection. Nor is it actual evidence of conspiracy.
This is clearly in response to the call from several world leaders for more participation from Google et al. in combating terrorism online.
Theresa May, as well as some Australian leaders, have made very general statements that the big tech companies are not doing enough to fight terrorism.
I expect that these statements are merely a prelude to some announcement of legislation seeking even greater government powers and access to Google's data.
I imagine law enforcement would rather have detailed identity information on the people consuming the content, rather than simply having the content removed more efficiently.
So while removing propaganda and violent videos from YouTube is just going to drive the traffic elsewhere, at least Google's action may help resist further government encroachment on our online privacy.
To fight online terror:
1. identify extremist and terrorism-related videos
2. increase the number of independent experts
3. a tougher stance on videos
4. expand its role in counter-radicalisation efforts
But I'm not terrified by things that happen online. Least of all videos of events in the past.
And, to be honest, I've seen plenty of NSFW videos.
Let's not kid ourselves: terrorism has pretty much nothing to do with videos. Radicalization has more to do with the social realities faced on the ground, day after day, as we live our lives. Alienation, the misery of slums, and the hopelessness of monotony without progress are what lead to terror.
The true effects of propaganda are over-stated and over-imagined.
Yes, but that's a less useful story for the people and organizations built on gathering ever more power, so the alternative will be repeated until it sticks. Given the number of people in this thread who take the official narrative at face value, it looks like it's working.
Democracy has its flaws, but I don't know any other system that works better for society today. One of democracy's flaws is the balance between freedom and safety: you can't have both.
You can easily resolve this (terrorism) problem in totalitarian regimes (look at Kadyrov and the Chechen Republic), but the whole society will pay the price for that.
On the other hand, it's easy to criticize any censorship-like moves, but radical Islamist ideology spreads like a cancer through Twitter, YouTube, and other social networks. You can fight terrorists, kill them, but you can't kill the ideology. If you can't kill an ideology, you need to control its spread. What tools do we have to do that aside from moderating content?
Lately I was watching a live stream on YouTube about the UK terrorist attack, to get information about what was happening. I saw what a couple of thousand people wrote in the chat below the video. I had never in my life seen so much hate, intolerance, racism, and fascism from so many people at the same time. I closed it because I couldn't read it; the problem is that many young people probably didn't...
This is a very hard problem without any satisfying solution. On one hand, we don't want to give up our freedom of speech, fearful that some evil actor will use it to manipulate our society; on the other hand, ask the parent of a child who died in the last UK attacks or any other attack, ask yourself: what would you give up to get your dead child back?
Frankly speaking, my observations of the actions and expectations of various "democratic governments" around the world lead me to conclude that the world as we know it is fast moving toward "totalitarian democratic regimes": instead of one totalitarian leader/pack controlling and oppressing people for generations (as with dictatorships), we have the illusion of the freedom to vote different parties/leaders in and out, who eventually toe the lines of the previously elected governments and end up making things even worse overall. Maybe I was naive and ignorant, and all this was happening all along, over decades and centuries.
The increasing tilt toward the state in the balance of power between the state and the individual is quite alarming. It seems to me that, the world over, we're allowing monopolistic views and policies to grow, to the detriment of common people. Terror and terrorism are very useful for those in power to gain even more power. Terrorists who claim to fight for freedom are winning, by showing in practice that those in power are continually curbing our freedoms. With such policies, tech companies are just joining forces with "democratic oppressive governments", making them more powerful.
I do not see a clear and easy solution for these problems.
> You can fight terrorists, kill them, but you can't kill the ideology. If you can't kill an ideology, you need to control its spread. What tools do we have to do that aside from moderating content?
I'm somewhat hopeful about the "Redirect Method" mentioned in the article. If the other content is not ridiculing or inflammatory, but reasonable, fact-based, and balanced, it may actually work and be a good thing for some people.
> UK attacks or any other attack, ask yourself: what would you give up to get your dead child back?
There's no getting them back. Questions should be asked by regular people who haven't experienced any harm yet. And they should not only be selfish questions, but also questions like "What harm am I willing to inflict on others (inside and outside of my country) in the name of my supposed protection?"
Let's see: this is the same Google/YouTube that marks dozens of PragerU videos as "restricted", despite three Wall Street Journal editorials protesting that. The same YouTube that has been demonetizing Paul Joseph Watson, Alex Jones, and other conservatives. In summary, Google has long proven that it has an unapologetic Leftist agenda.
Radicalization is a very real issue as we're getting ever more entrenched in our curated echo-chambers - I'm glad they're taking a stand.
Unfortunately I fear the damage has already been done, and these policy changes will almost certainly push newly-popularized radicals onto platforms Google has no meaningful influence over.
I should start an over/under pool on how soon we'll start seeing the "OMG they banned my channel for THIS, and there's no person within 1000 miles I can find to fix it" type of posts.
Interesting that they didn't address specific steps they'd take to protect their content-reviewing teams... Sounds like that would be appreciated, especially in the context of the recent issue at Facebook.
In short, Facebook accidentally revealed the identities of the moderators responsible for taking down stories related to terrorism, to potential terrorists. The HN story is here. [0]
That's excellent news. I was absolutely disgusted by the amount of pro-Daesh content on YouTube 2-3 years ago, where you could see their videos that became famous for their high production value. YouTube absolutely contributed to the spread of fundamentalism, no question. And there are still a bunch of hateful nasheeds easily accessible, but YouTube is still not removing them because "it's music"? Outrageous...
I'm rather skeptical when terrorism (or Lovejoy) is invoked as a reason for more censorship, hope it turns out well this time.
> we are working with Jigsaw to implement the "Redirect Method" more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits,
It is not really clear why Europe specifically is targeted.
> It is not really clear why is Europe specifically targeted
If you mean "targeted by attacks" I'm not sure why.
If you mean "targeted with this method" is because most of these kinds of acts in the West happened in Europe and not somewhere else. Also, IIRC, most of the attacks came from people born here, not immigrants, so it wouldn't make much sense to target wannabe attackers directly in the Middle East.
>>> Participation in the Program by some participants may be restricted (e.g., no or limited perks), including persons who are government officials, including (i) government employees; (ii) candidates for public office; and (iii) employees of government-owned or government-controlled companies, public international organizations, and political parties.
It shows that Google is completely opaque about who gets hired and who doesn't (because the sentence is fuzzy). Google is a private company, so it seems logical that they have the last word. However, given the size of Google, I'd be happy if the ethical argument about what is an acceptable video came to the table too.
The fact that Google tries to do something about terrorism does not at all mean that it is effective at it. For us to know that, we must be able to review their results (and how do we even define a result in this area :-/ ) as well as their methods (which requires transparency).
I don't believe that there is a problem with terrorism online or that such a thing as online terror exists. Only the most feeble can be terrorised through the screen.
The problem is that there are terrorists among us who are willing to bomb, stab, or run people down with a truck. That there are entire communities where this either receives support or is at least tolerated.
It has very little to do with technology. They don't do it because they saw an ISIS video on YouTube or a post on Facebook. I doubt this is even a minor factor. Hodgkinson watched The Rachel Maddow Show. On cable.
I wouldn't say that it's about individuals being terrorized through the screen. I'd say that it's at-risk individuals who are persuaded to radicalize through the internet. By at-risk, I refer to individuals who may be suffering personal issues in their life, and are susceptible to the messaging used by terrorist recruiters.
If there is one thing at-risk people respond positively to, it's ham-fisted social intervention that DGAF about them and openly seeks to control their behavior.
"Third, we will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content...."
They go on to say that this type of speech won't be exactly blocked, but basically made harder to find.
It's not quite Big Brother, but I suspect nothing like that happens all at once. "Inflammatory religious" in particular feels like a pretty loose description... almost every religion has some bit that's inflammatory to someone else.
Yes, but something that sounds innocuous to members of one religion can be inflammatory to another.
I didn't say all religious speech was inflammatory either. I said most religions have some speech that could be considered so. I really don't get your point.
There are also a lot of political views that would have been considered completely mainstream 5 or 10 years ago that now seem to be considered inflammatory, one example being open borders.
Google said they will cooperate with governments and third-party organisations, which means they will have a voice in what government 'thinks' is good or bad, and they will use that power to shape our political, social, and legal regulations and standards the way Google sees it. At some point, AI created and provided by Google will be writing laws and regulations in some cities or states...
HN is a niche website for mostly tech people. It has a very nice way of moderating content, and I haven't really heard of any misuse of power yet (anecdotal evidence).
They say they are fighting "violent extremism online" but how can you be violent online? That's a physical thing. Instead, they outline four points for greater censorship of videos.
Censorship is a firewall for your brain: crude, obvious, flawed, often in the way, yet still better than nothing.
Which is rather a pity, given this metaphor makes most of our cultural baggage into unwanted and unremovable bloatware that has its own set of problems.
Let ISIS or the White Nazi Rebirth Club set up their own video hosting services instead of piggybacking on something they are ultimately trying to destroy.
Ok, fair enough. It's Google's sandbox, they can do what they like.
So let's say ISIS and the neo-Nazis get their own video hosting services, since there are probably enough of each to make that feasible, and they already have plenty of venues for hosting their own material. Then what?
What problem does removing that content from Google actually solve? The internet detects censorship as damage and routes around it, remember - stopping the propagation of anything people want to publish is impossible by design.
Google removing or otherwise taking steps to suppress this content very obviously (partially) solves the "most important" problem.
And that "most important" problem is Google being an accessory, or providing a platform, which is a legal and public-relations liability.
Notice who the author of the article is. Not some engineer. Not a public relations person. Nope. It's Kent Walker, General Counsel. And that tells you a lot about the "why".
Personally I'm opposed to this, assuming Google's goal with these steps is to protect open societies. Not because I think you should be able to post content without restrictions, but because I think it serves society better to get desensitised to this type of violent content.
In the context of 'protecting' society:
IMO it's better to protect your society by removing or blocking blatant propaganda, but I don't think that ban should be extended to violent/gruesome content. The less you see of it, the harder it will hit once it gets past the blockade. I consider it a healthy human response to be able to see such images, think "Fuck those <INSERT_GROUP_OR_STATE_HERE> assholes", and then move on with your day. Shielding people will have the opposite effect, since people will not be familiar with the possibility of seeing such things, and it will therefore retain its shock value/utility for the <INSERT_GROUP_HERE>.
No matter how much (Western?) society might fool us, humans can deal with this. We've lived as savages, constantly tested by nature and the cruelty of life, for most of our existence on earth. I'm not saying that we should return to such a state, but we also shouldn't needlessly coddle responsible adult people either.
As I understand it, human psychology is such that repetition makes things seem true. We don't get desensitised, instead we start to believe whatever we see is the way the world is, that <group x being executed> means <group x deserved it>.
It's not the blood and gore that we can't handle. It's the psychological manipulation. They're using the same psychological tricks that Trump used, that Colgate uses. That every advertiser uses. But now it's war. This is the new war: the war for the mind of man itself.
That's all the more reason to harden it, not to weaken our collective defences. If you accept my stance that repeated exposure leads to resistance/strengthening of your mental barriers.
Of course, in the case of more subtle manipulation, I guess you also have to be aware of the fact that the manipulation is happening. In such a case, I imagine one cannot rely on gradual hardening through exposure alone.