YouTube is trying to reward “quality” content (bloomberg.com)
128 points by okket on April 12, 2019 | 277 comments



Frankly, I hope YouTube (and Google in general) dramatically overdo it with the censorship, "curation", and deplatforming of "irresponsible" content, purely so that it forces an independent, non-corporate model to be developed.

Having one company be the sole gatekeeper for the overwhelming majority of the web's video content is not a solution.


> Frankly, I hope YouTube (and Google in general) dramatically overdo it with the censorship, "curation", and deplatforming of "irresponsible" content, purely so that it forces an independent, non-corporate model to be developed.

Just like people got what they wished for when Voat was spawned from Reddit?


Voat isn't just what you get with no censorship. It's what you get when you're a new alternative to a large incumbent: you are populated almost 100% by people banned by the incumbent. Nobody else has a reason to join your platform. It's a phenomenon that makes it even harder to compete with an incumbent social network.


What's interesting to me is that when I've visited Voat, it is such an obviously disgusting and terrible place that it actually makes me way less sympathetic toward those who were banned from Reddit than I might have been when it happened. I don't mean to imply I support many of the banned communities, just that I was somewhat open to the complaints about censorship.


I used Voat when Reddit's servers were going down all the time, and didn't stay because the community is disgusting.


Voat also isn't what you get with no censorship, because Voat censors illegal content.

Every site draws a line somewhere.


There's a great video that goes through this phenomenon (in the context of Youtube vs VidMe) [0]. It points out exactly that: the people who adopt a new platform tend to be the people most toxic for the original. I have no idea how you get around this.

[0] https://www.youtube.com/watch?v=r3snVCRo_bI


> I have no idea how you get around this.

You start by courting people with an actual real, personal, counterculture message that doesn't get played in the mainstream media, and start with THAT community.

The Dave Rubins of the world don't care about free speech. You can tell because they don't care about women's voices that are suppressed in workplaces. They don't care about Muslim or even atheist voices that are suppressed in political campaigns. They only care about white male voices and the occasional woman or person of color who happens to also care about guns or the scourge of feminism or some other primarily white male topic.

If you really want to create a bastion for free speech you would need to care about women's voices and black lesbian voices and disabled voices and fat voices and all of that.

But the Dave Rubins of the world don't believe those voices are suppressed. You can tell because they don't invite anyone like that on their shows. And as long as they keep thinking that white men are somehow uniquely suppressed due to "politically correct culture" and that women and fat people and trans people are just universally adulated and celebrated and welcome to speak at every occasion... As long as they keep believing that, the Voats of the world will continue to be cesspools.

I fear the reason they can't do that is they'd realize that the Dave Rubins and Ben Shapiros of the world are already well enfranchised, and that their speech is suppressed far, far less than that of literally every other demographic.

And to be totally clear: I am talking about, for example, white women at tech companies in San Francisco who are fired for honestly talking about their work, because the men around them interpret any assertiveness from them as an attack. Just to set the terms of the debate: women literally fired for talking about the work they were hired to do, in a way any man could talk and receive only praise.

As an example.


Actually it acted as a grease trap[1] that had the effect of sanitizing reddit for the rest of us. Maybe not "what they wished for", but a positive effect nonetheless.

1. https://en.wikipedia.org/wiki/Grease_trap


Yeah, I'm all about the idea behind letting people express themselves without being censored, but the stark reality is that the people pushing the envelope here are saying some pretty damaging things about other people in a hurtful and escalating way.

I don't know how to square these two thoughts.


> I don't know how to square these two thoughts.

There's nothing wrong with boxing matches. There's also nothing wrong with people wanting to walk city streets without having to fight.

The solution is to provide places for boxing, and places where boxing people is forbidden.


'Sticks and stones may break my bones, but words will never hurt me.'

Granted, it's aspirational.


Have you thought about the possibility that our language has been constructed to give people like you more ammunition against others? And to give others less ammunition to use against people like you? Words like "bitch", "bossy", etc. Also cultural norms for what kinds of emotions you interpret as "too much" if it’s a woman but “oh crap, I should listen” if it’s a man, etc.

I have no idea who you are, so just a question about where you’re at in thinking about this.

And do I understand correctly your proposal? That we should all aspire to be unaffected by the words of others. And if someone is negatively affected by words it is their failure, not the failure of the speaker or the failure of the group they are in together.


|the possibility that our language has been constructed to give people like you more ammunition against others?

The idea that the language was constructed is pretty silly. It's such a twisted hodge-podge of borrowed ideas. Besides, other languages with far different heritages hold very similar words and terms to ours. As for the idea of having more ammunition, you'd have to actually provide some evidence that there's more plentiful / harsh language available to 'people like me', particularly when I've seen both of the words you cite used plentifully toward a broad swathe of people.

|That we should all aspire to be unaffected by the words of others.

Yes.

|And if someone is negatively affected by words it is their failure, not the failure of the speaker or the failure of the group they are in together.

Not exactly, no. A person being an asshole is an asshole. Your reaction to it doesn't absolve them of that behavior, even if you responded positively for some reason. However, letting some random asshole ruin your day by saying something you don't like is just setting you up for constant failure, as you are so easily driven into a negative emotional state. I'm suggesting a coping strategy, one among many, to guide a person in navigating a world full of adversity.


Definitely harder than it sounds if I’ve been told that as a kid and am still cultivating it as an adult.


I don't think any new platform is going to be able to compete with YouTube at all under the current regulations. How will any new platform comply with the new EU copyright laws, or remove unlawful videos as quickly as YouTube does? (e.g. the unfortunate New Zealand terrorist attack)


1) below a certain scale, they don't have to comply

2) if they make a good faith effort, at whatever level of expertise and resources they have, I believe they will be within the law. I don't think there is any specific legal requirement to do machine learning or any other specific implementation of Content ID.

I'd start with checksums, whatever level of flagging by hand I could afford, and go from there.

As you scale you'll attract the attention of rights holders and need to spend more. But I would hope the courts would accept a good faith effort.
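The checksum starting point above can be sketched in a few lines. This is a minimal illustration, not how YouTube's Content ID works: the digest list and helper names are hypothetical, and exact hashing only catches byte-identical re-uploads (any re-encode or trim defeats it, which is why large platforms move to perceptual fingerprinting).

```python
import hashlib

# Hypothetical takedown list: SHA-256 digests of files already flagged
# by rights holders or moderators. (The entry below is the well-known
# digest of the four bytes b"test", used purely as a placeholder.)
BLOCKED_DIGESTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so large videos never sit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_blocked(path):
    """Flag an upload if its digest matches a known-bad file exactly."""
    return sha256_of_file(path) in BLOCKED_DIGESTS
```

At upload time you would check `is_blocked()` before publishing, then fall back to the hand-flagging queue the comment mentions for everything the exact match misses.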


Your first point is objectively incorrect. The EU copyright laws apply to every website/company, the only difference is small companies/websites have 3 years to bring themselves into compliance.


My first point is also objectively correct IF size and age are strongly correlated. Do you agree with that premise?


They won't. I suspect that that kind of regulatory capture is the point of these laws in the first place.


I don't think so. Most people (and a lot of people on HN) really believe that online platforms should be able to suppress such unlawful videos. And it's not a completely unreasonable position.


> Most people (and a lot of people on HN) really believe that online platforms should be able to suppress such unlawful videos.

"Such unlawful videos." So basically, the government should be able to declare content "unlawful" and it's okay so long as public sentiment goes along with it?

Basically, you've simply thrown out Free Speech. I can understand why someone would find the video of the shooting shocking. Personally, I've chosen not to view it. However, if Free Speech isn't an absolute right, then it's simply broken.

Habeas corpus, the Presumption of Innocence, and due process -- what if the government could just suspend those things when it felt like it? Those human rights would simply be broken.

The difference between a just society and an unjust one is that the just society extends the same rights and protections to all its citizens, no infringements, no exceptions. A society with Free Speech extends the right of speech and the right to read/hear that speech to everyone without infringement as well.

The interesting thing about the Weimar Republic, is that almost all of the laws covering human rights had an "out" for the government, allowing them to suspend the right, "for the common good" in emergency situations. As a result, the subsequent regime didn't have to change many laws to operate as a totalitarian regime.


The Christchurch shooter had no more right of access to a privately-owned live-streaming platform with an audience in the millions than he had right of access to a broadcast TV network. Saying so (which I suspect is what OP intended) is not "throwing out Free Speech"

You can tease apart the right to say something from the right to have it signal-boosted along the way, and if you want to save Free Speech you're going to have to, lest we throw the baby out with the bathwater here.


> no more right of access to a privately-owned live-streaming platform with an audience in the millions than he had right of access to a broadcast TV network

If the government now has the right to say, "piece of content X -- not allowed" arbitrarily, with no process and no rational standard, then Free Speech is simply broken. An event of that magnitude is now history. A free society should not be rewriting history. People are still free to not-view and not-read. That right can't practically be infringed. What can be infringed is the right to view and the right to read.

> You can tease apart the right to say something from the right to have it signal-boosted along the way

As the internet is currently formulated, "signal boost" is equivalent to discovery. Being able to manipulate discovery for 90% of the population is tantamount to the power of censorship.

> if you want to save Free Speech you're going to have to, lest we throw the baby out with the bathwater here

In a fair system which truly supports Free Speech, in the end, signal boosting a bad message becomes a good thing. If it's really bad and full of nonsense, then there will be a reaction and backlash which will bury the bad message. What we have in the years leading up to 2019, is a small group of activists supported by a cultural minority (academia+media+tech) who signal boosted messages, including those of bad actors who hijacked activist messages. Then, when the backlash came against the bad actors, there was an artificial campaign of suppression, which encouraged more backlash against the suppression.

Now, we are in a vicious cycle of censorship/suppression supported by government and corporate power versus the backlash to that suppression. If we want it to end, then we can let Free Speech do what it's supposed to, so the backlash can go to where it belongs: Against bad actors hijacking good messages and bad actors with really bad messages.


>If the government now has the right to say, "piece of content X -- not allowed" arbitrarily and with no process and rational standard, then Free Speech is simply broken.

The key part of this is the word arbitrary. You can't take anything down 'for no reason', but you certainly can take things down. Even the most robust version of free speech protections has carve-outs.

The government can prevent you from branding your email product 'Gmail', they can censure you for shouting fire in a crowded theater, they can go after you if you issue a tweet which lies about how many cars your car company is going to build in the next quarter, etc.


> The key part of this is the word arbitrary. You can't take anything down 'for no reason'

> The government can prevent you from branding your email product 'Gmail', they can censure you for shouting fire in a crowded theater, they can go after you if you issue a tweet which lies about how many cars your car company is going to build in the next quarter, etc.

All of those things have due process and objective standards attached to them. None of those are arbitrary. Those are all following established law. That's a far cry from only saying, "Okay, these messages are 'bad.' We're now de-platforming them."


> they can censure you for shouting fire in a crowded theater

Or lock you up for spreading anti-war rhetoric, which is what the actual case this quote came from was about.


This risks making about as much sense as “free guns” or “free nuclear weapons”. We are tolerant about free speech in New Zealand, and indeed we have amazing access to our politicians, even to the point where the dildo-throwing protestors who targeted the 3rd most powerful politician here were let go immediately.

We are, however, rightly intolerant of hate speech, especially speech against a segment of the population. (We do have our share of racist fools, and our media let their own standards slide too far.) Almost every country has laws on the books like this, including the USA.


> This risks making about as much sense as “free guns”

I think the US 2nd Amendment is a great thing. People everywhere should have that absolute degree of freedom. That's certainly not the same thing as "free nuclear weapons."

> We are, however, rightly intolerant of hate speech, especially speech against a segment of the population.

The problem we've seen with this, is that rather tame bog-standard Republican positions in the US are being tarred as "hate speech" by bad actors on the Far Left. (Accompanied by Far Leftist mobs chasing bog-standard Republicans in the street and beating them up. There's video of this to be found on YouTube.) I'm fine with direct incitement being illegal. That's pretty much the same thing as prohibiting explicit threats of physical violence. But there should be a wide zone around any kind of political opinion which isn't direct incitement. Otherwise, without objective standards, everything just devolves into namecalling, trying to get everyone else de-platformed.

In 2019, every enforcement mechanism needs to be vetted with the question: Can this be exploited? Because in 2019, you can rest assured that some bad actor will exploit it.

> Almost every country has laws on the books like this, including the USA.

There is no such thing as "Hate Speech" in the US. However, we do have laws against things like direct incitement.


The 2nd amendment isn't absolute though, which you can see through the different gun laws in each state.


> The 2nd amendment isn't absolute though, which you can see through the different gun laws in each state.

Those are often states being naughty and infringing. Those are often backdoor attempts to circumvent the 2nd amendment.


Not according to the Supreme Court, which has upheld the states' right to regulate firearms.


>While we are tolerant about free speech in New Zealand,

China is also tolerant of free speech when it suits them. I wouldn't consider them a role model, though. You can get 10 years in prison for possessing objectionable material, and one criterion for objectionable material basically makes BDSM illegal.

>Almost every country has laws on the books like this, including the USA.

Countries like the USA have specific protected categories of people that you're not allowed to discriminate by in certain activities, but publications in the US don't have to go through an office of censorship. You can release a movie without the government seeing it first.


You conflate regulatory expectation with consumer expectation. Most people don't want to see things they don't like, so they don't tune into channels (YT or otherwise) that play things they don't like to see. The market corrects itself over time.

Broadcast stations aren't refusing to show that content because of a law, but because they are making a business decision.


I think the problem is exactly the opposite: people are now getting way too much of the content they can’t help but like, and that in way too many cases turns out to be deliberately polarising and extreme content.

The marketplace of ideas no more leads to good content choices than the marketplace of food leads to healthy eating.


(EDIT:

> I think the problem is exactly the opposite: people are now getting way too much of the content they can’t help but like, and that in way too many cases turns out to be deliberately polarising and extreme content.

In the long run, that works out like the children's story where the young girl decides she'll eat nothing, ever, but peanut butter and jelly. If it's stupid nonsense, then most people eventually figure that out, and you're left with a fringe and ironic onlookers. Hell, if they had left Alex Jones alone, he'd just have wound up being the 2019 version of The Weekly World News. Eventually, his viewership would have devolved to sundry weirdos plus college students watching him while playing a drinking game.

As it is now, big tech has fueled his conspiracy narratives by engaging in behavior that looks an awful lot like a cabal of big tech colluding, as in a sci-fi dystopian movie.)

> The marketplace of ideas no more leads to good content choices than the marketplace of food leads to healthy eating.

Over time, the marketplace of ideas has led to wealth, freedom, and public health unprecedented in all of human history.


Attributing all of that to the 'marketplace of ideas' is ultimately a fallacious remark. It cannot be proven or disproven because it casts a net so wide that it reaches absurdity.

It's placing two points far apart in history and trying to point at one thing as being responsible for all of it.


We give content personalization far too much credit for the modern issue of both social and political polarization; it is much more fundamental. We've had "filter bubbles" since the dawn of civilization -- in fact, that's where those concepts come from: tribal thinking / group thinking.

The mistake I see made time and time again is when we believe we are more rational than we really are! It's an evolutionary battle we are fighting, and evolution had a big head start.


You can argue that when given a choice, many people make the wrong one. However this is better than having the government make the wrong choice for you.


Nowhere did I say the Government should be able to throw out free speech. I am just acknowledging what I have observed. If you think my observations are incorrect, please feel free to go through the relevant HN threads (e.g., https://news.ycombinator.com/item?id=19414591). You will find a lot of comments saying such censorship is needed. I agree with you that free speech is important, and I think this contradiction between free speech and society trying to prevent the spread of radicalisation is an important question of our times.


Suppressing Free Speech creates more radicalization. If certain radicals are really bad, then more Free Speech ultimately gets rid of them. Suppression just gives them more ammunition. This is the lesson from across just about all of history.

In 2019, governments and big tech have essentially been tricked into replacing societal backlash against nonsense with societal backlash against suppression. What governments and big tech should be doing is protecting the integrity of speech. On a level playing field, Bad Actors will reveal themselves. It's on a tilted playing field where things dysfunction, and the crowd cries foul and declares the game invalid.

In short, in 2019, for governments and big tech to have the Bad Actors fail, they need to stop doing end-runs around the law and principles of Human Rights. They need to stop giving ammunition to Bad Actors by being bad actors themselves.


>Suppressing Free Speech creates more radicalization.

Most quantitative reviews of deplatforming and other social moderation efforts have shown the opposite.

Maybe in the long-term social view of the issue you're right but on a platform by platform basis this doesn't seem to be the case.


>Most quantitative reviews of deplatforming and other social moderation efforts have shown the opposite.

Do they though? You don't have to look at social media to get a sense of this; look at what has happened in the world. You can look at the Soviet Union and the GDR instead. Did censorship there end up killing ideas of capitalism and democracy? Because I can assure you that the Soviet Union's censorship was much more thorough than Twitter's or YouTube's will ever be.

In fact, you could even move away from censorship and ask yourself how effective banning things that people want to do is in general. Bans on alcohol, drugs, books, movies, porn etc have all failed to get rid of them. In cases such as alcohol, it seems that it only increased the consumption of it.


Sure, censoring racists on YouTube means fewer racists on YouTube. But it does NOT mean fewer racists in the world.


>But it does NOT mean fewer racists in the world.

I don't think we have the data to make a conclusion on this point, but why is the lack of success in respect of the second point sufficient to make us want to stop the first point?


Because it creates echo chambers and it weaponizes the term hate speech. Label everyone you disagree with a nazi and then demand platforms ban them. If the platforms do that, then people will slowly move on to other platforms. It's not going to change their beliefs one bit.


>Because it creates echo chambers and it weaponizes the term hate speech.

Okay, so that's the downside of curating your platform. You need to contrast that downside with the upside of taking action.

Reasonable limits on Free Speech are plentiful, and there's an easy false dichotomy in believing that you can either censor everything or censor nothing. Is there a middle ground where we establish a solid rule set, most of the beneficial effects are retained, and a plurality of voices remains?

Also a note: >It's not going to change their beliefs one bit.

This isn't supported by research, and it's antithetical to your first line. If deplatforming isn't going to change beliefs, then why care about it? Just move over to Gab, Voat or any of the other offshoots created for deplatformed groups. Have you visited them? They're cesspools.

Overt changes in social acceptability and norms cause populations to reconsider their views, which is why people are upset by being deplatformed in the first place.


> Most quantitative reviews of deplatforming and other social moderation efforts have shown the opposite.

If certain vile people were still on YouTube, then they could be demolished by argument. By deplatforming people off YouTube, you actually manage to give the bad people moral ammunition (the people you'd think the least likely to achieve that) and reduce the legitimacy of the platform. What's been accomplished is merely to create even viler, even more insular groupthink.


> If certain vile people were still on YouTube, then they could be demolished by argument.

If arguments actually demolished positions, we'd see a lot fewer fools, and their adherents.

People broadcasting outright lies don't care about arguments. They care about exposure. When they are deplatformed, they quantitatively lose exposure.

They get moral outrage, but it's hard to turn moral outrage into viewers... Especially when your views are unsympathetic.


> If arguments actually demolished positions, we'd see a lot fewer fools, and their adherents.

We've seen precisely this happen on YouTube. "Unfortunately," from their point of view, the way it worked out went against the political narratives of the people who run YouTube.

> People broadcasting outright lies don't care about arguments.

But onlookers who are normal, sensible people do.

> They care about exposure. When they are deplatformed, they quantitatively lose exposure.

This is the POV of people who don't believe in objective truth or meritocracy. They think everything is a rigged game, and they think to win, it's just that they have to rig the game their own way.

What you call "exposure" is actually discovery. Biasing discovery by refusing it to certain people is tantamount to censorship in this day and age.

> They get moral outrage, but it's hard to turn moral outrage into viewers... Especially when your views are unsympathetic.

Actually, it's very easy to turn Moral Outrage into viewers.

https://www.youtube.com/watch?v=rE3j_RHkqJc

It's funny, but here you are actually precisely describing some of the groups who were discredited and saw their engagement drop on YouTube -- in exactly the opposite situation to people like Sargon of Akkad. His opponents in the "SJW skeptic" circles dropped off, while people like him prospered. Then, instead of arguing, the SJW crowd resorted to namecalling, mis-characterizing what was actually said, and de-platforming.

YouTube is losing credibility with the people who are paying attention, precisely by deplatforming people who are popular (whose political views they don't like, even though those views are not extreme and are fairly mainstream) and by trying to pick and promote "winners" of their own that people do not like.

In the end, it just amounts to dishonesty. In the end, people will be able to see this, and I hope YouTube eventually pays the price for it.


"Free speech" as you are using that term (vs. it's actual meaning as a guard against government suppression of speech) was suppressed for decades by the media industry.

You couldn't publish the extreme content you see on the internet today in traditional newspapers and television. Mass radicalization didn't occur for all those decades.

The freedom to publish anything you want over a mass medium is something that is only as old as the internet has been generally available to the public, and the tools to amplify and target its reach are only as old as social media.

Whatever right to publish on mass social media that you think is threatened today is only a few decades old at best, but really only since lay people figured out how to use the technological tools of amplification.

You can still shout whatever you want in the town square, but you aren't entitled to the audience of any particular publisher. If those who want to see your content are displeased with that publisher's editorial decisions, they can move to a publisher that has content they want. If there are enough of them to matter, the publisher will either have to change its tactics or lose relevance.


> (vs. its actual meaning as a guard against government suppression of speech)

False, False, False! This is not how it works in US jurisprudence. Freedom of speech has to do with fundamental human rights and society. Government, being an important part of this, must therefore abide by Free Speech. However, private parties are subject to it as well. Free Speech is fundamental, and even supersedes other rights, like property rights.

https://en.wikipedia.org/wiki/Marsh_v._Alabama

> You can still shout whatever you want in the town square,

This is such a dishonest conceit. It's like some dictatorship declaring they have freedom of speech, because while they control the radio, TV, and newspapers, the opposition can still post pamphlets. No, I'm sorry, but the scales of relative power just don't work in a practical sense, either in my examples, or in the one you just proposed!

In 2019, it's like a farmer trying to assert property rights against airplanes overhead. Sorry, but laws do have to change in response to new technological realities.

> but you aren't entitled to the audience of any particular publisher.

This is a conceit. The audience relationship is not with people and YouTube. It's with people and a particular creator. It's not the publisher refusing access to its audience. It's the publisher throwing up barriers to the audience.

If there were a monopoly on printing presses, the printing press company wouldn't be allowed to say, "It's my audience, you have no right to it. No Pro Democrat words get printed!" There was a monopoly on telephone wires. It's not as if they would have been allowed to say, "The people connected to my network are my audience! You are not entitled! No Democrat communications over my network!"

What you're defending is just a technological end-run around freedom of speech, which you support because it's your side that would hold all the power.


> https://en.wikipedia.org/wiki/Marsh_v._Alabama

The holding in that case is quite narrow: "Constitutional protections of free speech under First and Fourteenth Amendments still applicable within the confines of a town owned by a private entity."

You would have to stretch the definition of "town" to include "website" to have it apply.

>> You can still shout whatever you want in the town square,

> This is such a dishonest conceit.

Fine, then you can go on Gab, 4chan, 8chan or any of a number of other platforms or sites that accept hate speech to varying degrees. There are digital platforms available for that kind of speech.

> The audience relationship is not with people and YouTube. It's with people and a particular creator.

The editorial aspects of Youtube (including recommendations) are certainly within Youtube's rights to control. Youtube is a brand, not an ISP. Creators must establish a relationship with Youtube in order to publish content on it. There are terms for any such relationship, in the case of Youtube, they are described at https://www.youtube.com/yt/about/policies/#community-guideli...

> If there were a monopoly on printing presses,

There is no monopoly on the technical mechanisms of distributing video online. There are plenty of websites that self-host video content today, and there is no reason that a new provider couldn't attract the creators and audience that Youtube doesn't want.


The problem is that a lot of our (political) communication now happens on those private platforms. There is no digital town square, where people could openly share ideas. That's a massive problem for society, because effectively those digital platforms become the gatekeepers of political discussion. We give a lot of free passes to those platforms by not making them liable for allowing content that everyone else would get in trouble for (eg YouTube isn't held liable for copyright infringement), so perhaps society could demand freedom of speech in return from them?


> There is no digital town square, where people could openly share ideas

There are many of them. We're conversing on one right now, in fact. I enjoy civil conversations here with people with whom I disagree deeply about many issues.

However, if mine or anyone else's comments veer to the hateful or inciteful, you can be sure that the moderators will address it by banning our comments or accounts.

> YouTube isn't held liable for copyright infringement

The existence of Youtube Content ID [1] directly contradicts this statement. The whole point of developing Content ID was to enforce copyright on the behalf of, and under pressure from copyright owners.

1. https://support.google.com/youtube/answer/2797370?hl=en


You are right.

Personally, I question how many people would support authoritarianism if we didn't have the regulatory capture environment we currently have. There is such a powerful system of indoctrination in the US.

But then I think of experiments that deal with human nature. Of how ~65% of people in the Milgram experiment gave the dangerous shocks even while hearing people plead... It paints a picture of human nature I guess I want to blame on something 'out there' and not 'in here'.


A 3-step plan to achieve regulatory capture:

1. Fail to moderate and engineer your platform until it turns into a complete cesspool

2. Government is forced to step in

3. Easily comply with government regulations because you're massive, now it is impossible for anyone to compete with you


I think this might actually be a good thing long run. Regulation forces decentralization until you reach the equilibrium where playing whack-a-mole ceases to be cost effective. If YT didn't have regulatory capture, the easy answer would be another YT clone startup. Now the solution is a protocol rather than a platform. The protocol can't be made illegal because it's not inherently against the law but if someone uploads something illegal to their node/whatever, that's on them.


With very few exceptions the web has moved from decentralised to centralised, because centralised is an easier user experience and allows monetisation for content creators; decentralised does not (or at least hasn't so far).

Plus, the people bitching about their content being removed from youtube don't just want to be able to upload to youtube; they want youtube's audience. They in fact think they're entitled to youtube's audience.


> they in fact think they're entitled to youtube's audience.

What they're objecting to, is that their audience continues to exist, and YouTube and other tech companies are throwing up barriers between them and their existing audience -- even to the point of refusing to process payments and otherwise doing things which look like possibly illegal collusion between companies.

YouTube has captured the mainstream of all vlog commentary across all society, but they insist that they can continue to act like they're just a niche tech player. It's no different than a company in 1985 that owned 85% of all the TV stations and newspapers in the US. At some point, media companies get too big and too dominant over human discourse. At some point, media companies can get big enough to become a threat to Free Speech.


> The protocol can't be made illegal because it's not inherently against the law but if someone uploads something illegal to their node/whatever, that's on them.

Instead, such protocols can be marginalized through social means and through the media. On top of that, centralized alternatives can be hounded out of business and given the same treatment. YT doesn't have regulatory capture. They have effectively captured the lion's share of mainstream video commentary, with no apparent way for them to ever be replaced.


Is it a bad thing to require all new platforms to have the ability to remove violating content as a base part of the platform? It means more investment in developing the tool, but I'd love to discuss why we think it's okay that a "hot new" platform should be supported purely because it violates laws that the community deems important.

E.g. Lyft and Uber are an example where violating existing laws was the only way to create innovation.

Would love to hear more thoughts on how this might work with content/video companies


I already prefer bitchute to youtube overall. Somehow they're staying alive. There's no need to get completely black pilled on the future of video hosting. We have hope.


Well, the answer depends on what kind of system it is.

If its federated, by being split into such small pieces that the regulations don't really affect any particular one of them, since they're either unaffected or can remove content easily while washing their hands of what other people/communities say.

If it's decentralised, by being literally impossible to shut down, block or go after users for using. That's always been the end goal for many of these systems.


... by said platform not being based on the broken-ass dns/http naming system, and thus not relying on centralized servers that provide juicy targets for legal nastygrams. Then when governments want to censor, they'll have to go after the errant individuals themselves rather than getting centralized platforms to perform categorical a priori enforcement for them.

What's happening to Youtube et al right now was inevitable - this was apparent to anyone with half a brain back when AJAX was a novel thing. But "it is difficult to get a man to understand something, when his salary depends upon his not understanding it", and that VC funded advertising pop culture party was a hell of a time.

Unfortunately this centralized detour has made it so that the move to p2p methods is going to be a lot more violent. Lawmakers have now been primed to believe they can wave a magic wand to implement their whims, and will expect that ability to continue. When reality hits and it stops working again, they'll be hunting for individuals to blame and persecute.


>Having one company be the sole gatekeeper for the overwhelming majority of the web's video content is not a solution.

I struggle with this line of thinking because it's not specific enough. YouTube doesn't act as a gatekeeper for most video content. You can upload almost anything you want as long as it's not grotesque or copyrighted. What most people mean when they say YouTube acts as a gatekeeper is "will YouTube pay me for the content I upload". And the problem with that is the advertisers, not YouTube - a handful of large advertisers decided they didn't like a lot of the content on YouTube and refused to pay for it. The beef is with the advertisers, not YouTube.

I do have a problem with the videos YouTube decides to promote and surface, but when it comes to the topic of censorship and deplatforming, the argument almost always boils down to "Procter & Gamble need to pay me to upload content whether they want to or not", which I find asinine. YouTube is not obligated to provide you with a sales team.


> a handful of large advertisers decided they didn't like a lot of the content on YouTube and refused to pay for it. The beef is with the advertisers, not YouTube.

No, the beef is with YouTube. They could offer advertisers the option to pay more and only advertise on “high-quality” content, or pay less and advertise everywhere.

Make it a slider with example videos and let them select what they’re comfortable with.

Instead, they went with no choice and arbitrary demonetization cutoffs.

If P&G doesn’t like my shaky repair videos on niche products that nobody else has made, fine. But P&G shouldn’t dictate what’s worthy of advertising or not for every other advertiser AND publisher.



Then why demonetize my super niche repair videos because they aren’t “subscribable”???


> But P&G shouldn’t dictate what’s worthy of advertising or not for every other advertiser AND publisher.

The advertisers on YouTube bargain as a collective. Sure 3-4 multinationals have a larger voice than the others, but they all took collective action to force YouTube to change their ways. It wasn't just P&G. It's no different than if a group of people staged a boycott - sure there might be some people who don't agree with the boycott, but the point of the boycott is to make a big enough voice to force another entity's hand.

If you don't like it, do your own sales. If you don't want to do that, then start publishing the sterile, family friendly, broad content that the advertiser-class is willing to pay for.


I mean, I regularly publish to youtube but I have explicitly avoided monetization of my account. I am not trying to get paid for my content, yet I know that I am at the mercy of youtube's algorithm. I want people to see my content and don't like the idea of a single group being in charge of how it is distributed. I am willing to use it for now, but I long for a decentralized replacement.


You don't have to look any farther than Gab and Dissenter to see what happens when someone tries to create free speech respecting versions of things.


There's a bit of bias in the commentary on Gab, especially considering everyone at YC hates Andrew Torba. Not that he doesn't deserve it...


I get it on all sides... however, it's worrisome to me that the support of censorship is moving so far as to ban access to such (opt-in) tools.


It's starting to look like we're getting more of a regulatory moat. (In the Warren Buffet sense of "moat", as barrier to competitors.)

An additional concern, at least in US tradition, is when once-again-centralized monopoly venues are being mandated (or given permission?) by politicians to censor or "curate" everyone's speech. I'm pretty sure I was taught in school that we consider that a bad idea here.

I generally favor a healthy amount of decentralization, that the remaining big guys be more common carrier (and stop claiming rights to people's communications), and a renewed individual sense of responsibility towards society (not gaming social media, playing to metrics, manipulating our lesser selves, etc.).


Because it worked so well for radio, tv and newspapers.


... Yes?


> Having one company be the sole gatekeeper for the overwhelming majority of the web's video content is not a solution.

Anecdotal experience: Most people discover most videos they watch through hyperlinks/word of mouth, not YT recommendations, and nothing is stopping you from self-hosting a video.

YT is not like a social network (which becomes quadratically more valuable, and exclusive, with its number of users). There's no walled garden, it's consumed passively, and its competition is, in fact, a click away.


Exactly. The ideal alternative to YouTube is the internet itself. Given the choice, I'd rather visit worldclownhistory.com than visit the worldclownhistory YouTube channel.


Solutions are being worked on like peertube: https://joinpeertube.org


Ok, but what happens when somebody uploads a video that (by accident or not) contains copyrighted content?

Who is liable? The original uploader? All the peers who are hosting it? The viewers?

And how quickly can the content be removed?


The accelerationist model of web centralisation. It depends on how much of the populace actually cares, or whether YouTube draws most of its revenue from casual viewers watching trending videos that are not likely to fall foul of these practices.


A site like youtube requires so much CPU/GPU power, storage capacity and bandwidth that it's highly unlikely that a new competitor[*] will appear, and completely impossible to decentralise.

[*] If we don't count the likes of Facebook, Microsoft, etc as competitors, since they would manage the site in the same way Google does.


Plenty of other video streaming sites exist. Competition certainly isn't impossible, and competitors don't have to operate at Youtube's scale to be viable, assuming they target a niche demographic.


But do those video streaming sites offer comparable quality to YouTube, while also giving away half of the ad revenue they receive? Because YouTube does that. If it didn't give away half of the ad revenue to creators then YouTube would probably never have become as big as it is now.

We know that YouTube has trouble making money under these terms. What do you think is the chance that another company with far weaker economies of scale will somehow make it work better?


Youtube is so capricious with demonetizing user content that it's become a meme, and content creators have to resort to merchandise and Patreon to survive. They're definitely not giving away anywhere near half of their ad revenue.


YouTube takes 45% of the ad revenue on a video; the creator takes 55%. If a video is demonetized then it won't have ads on it. This means that YouTube won't make any money on that video. It's in YouTube's best interests not to demonetize videos. They do it because they want to avoid bad PR and to keep advertisers from staging yet another boycott.

I don't see how YouTube is not giving away half of their ad revenue. Note that even when a video is "unsuitable for advertisers" the creators still get paid by YouTube Red.


> They do it, because they want to avoid bad PR and avoid advertisers from doing yet another boycott.

A part of YouTube has been captured by activists, and a lot of that policy is warped to their ends. Often, this is political. Often, it's arbitrary abuse of power in service of this or that "Internet slap fight." Either way, it looks bad to someone who can see the big picture.


> Youtube is so capricious with demonetizing user content that it's become a meme,

Yeah, and this meme is absurd. In what other ad business are advertisers obligated to pay you? Normally when you are trying to sell ads, you have to have a sales team to try to get advertisers to pay you, and it's a constant effort. Creators do not have a right to monetization. If they want monetization, nothing is stopping them from finding their own advertisers and striking their own deals (many creators do exactly this).


A competitor would need to operate at youtube's scale for it to be considered a serious competitor. Otherwise, we wouldn't be having this conversation, given that websites like Dailymotion and Vimeo exist, but no "youtube stars" use them.


Not every competitor has to be "serious" to be successful.

And what about Twitch? Plenty of "personality" driven content there.


Twitch is not a competitor of youtube. Twitch is specialised in live shows. They don't let you upload videos, and they delete old videos of past broadcasts pretty often.


If user attention is split between Twitch and Youtube then they are a competitor. Same for Netflix. Same for Pornhub. Same for Vine when that was a thing.


> If user attention is split between Twitch and Youtube then they are a competitor. Same for Netflix. Same for Pornhub. Same for Vine when that was a thing.

But if you focus attention to political commentary specifically, then YouTube is clearly the dominant giant. In that context, they are a defacto monopoly.


It really wouldn't. YT makes a big deal about vanity metrics like "10,000 hours of video uploaded per hour" (or something like that), but the 80-20 rule still applies.

A giant chunk of uploaded content is never even watched.


A site for YouTube videos that have never been watched before: http://www.petittube.com/


I don't really think so. A turnkey "host your own video site" would pretty much qualify a competitor -- I'm far more loyal to the people I watch on YouTube/Twitch than the platforms themselves. If they moved to their own site it wouldn't matter too much to me assuming I still had a way to get it on my TV.


But in the long term, virality, discovery, and monetization matter a great deal. YouTube could simply wait for current creators to become stale, and absolutely control the future of discourse.


Bandwidth has nothing to do with it. If a site was super-popular, I'm sure they'd get the VC money. That's really not the issue. The issue is getting people to put their videos on other sites, too.


And you don’t think the VCs would drive them to try to keep their content ‘clean’ the way YouTube is?


The Trending page has been listing the same people over and over and over again for the last 3 years. It's shameful, pathetic, and makes the platform feel extremely biased.

Seriously though. This is unacceptable and overly controlled. Not to mention that those people over the last 3 years post borderline brainwash content without any connection to a real audience.

It's pretty sad to see YouTube turn into something like this.


The Trending page makes me feel really old, and pretty "get off my lawn" w/r/t what's popular, or being promoted, right now.

A lot of the channels I've followed have gone fallow. No new content.

That said, I think the arguments around this topic are pretty disingenuous.

1) There's nasty content being uploaded to Youtube -- YouTube isn't making the nasty content.

2) YouTube promotes some content, people are pissed - it's all about the moneti$ation.

3) YouTube creates a system to stop promoting some kinds of content, people are pissed - censorship, slippery slope to de-platforming, etc.

There's no real alternative to YouTube (Vimeo isn't it, and the blockchain YouTube isn't it) for:

a) creators,

b) creators who want to make money.

And looking at what Facebook, Twitter and YouTube have to deal with in the social sphere, who'd want to inherit those problems? This is why we're still going to end up with aggregated media empires, because they'll accept the overhead of running the network, and it will simply get even more shallow in terms of who supplies "content".


> b) creators who want to make money.

The only platform I've seen so far for this one to compete with YT outside of porn is Patreon, and the videos that get monetized there without also being posted and/or monetized on YT tend to be... less than wholesome in a lot of cases.


Patreon only really works for people with an audience so enchanted with the creator that they would follow them anywhere.


> The Trending page has been listing the same people over and over and over again for the last 3 years.

Skytube (android app for YT that I like because it allows shutting off the screen while playing) does call it "Recommended", and that feels more accurate. I agree that the Trending page is terrible. When I was more active on youtube, I hid it from my browser because I often couldn't resist the temptation to open it and then regretted it soon afterwards.


Not sure what you mean by "Skytube [...] does call it ('Trending') 'recommended'", because YouTube have both "Recommended" and "Trending" and they're two different things.

Does Skytube not distinguish them?


No, Skytube doesn't use YT's subscription and recommendations. You locally set your subscriptions and the app checks for updates occasionally or when forced, no account required, no server-side profile (well, they could, I'm sure you stick out like a colorful bird using that client), no YT-defined sorting of what might interest you. And it has a bookmark & download function built in. I'm somewhat of a fan as you can tell, there's more on https://skytube-app.com/ and you can install it via F-Droid.

They do have a Trending page as well, but I don't know how that's calculated. It's very random, has nothing in common with YT's Trending page, and I don't use it.


Thanks for the explanation.

I thought it's just yet another "YouTube with no ad" app, guess I was wrong and it's more than that!


Seriously!

Sorry to de-rail the conversation here, but I feel this exact same way.

How many freaking videos can one person produce about doing their make-up?!

And GMM, my lord, they've eaten every animal's boiled genitals by now, 5 times over.

It's all the same Swedish produced, quick-cut, pseudo-rap music videos, some-guy-with-a-bucket-on-his-head's latest attempts to grasp at 2009's dubstep, or yet another KPop group that comes in with the afternoon's breeze.

Oh look, another movie trailer! It's the 'exclusive' 7th trailer for Marvel's 'Doctor Bong 3: The Mellower'

Harhar, here's yesterday's clips of Late-Nite 'celebrity' interviews via old SNL stars that are 'functional' now.

And the 'Youtube Face' that's on every single freakin' thumbnail. Oh look! Human face making an emotion! Better click!

Gah!


These companies focus so much on algorithms, when the best solution is simply to use actual humans.

I've done a fair amount of work on trying to detect and deal with spammers, fraudulent activity and such. Nothing beats a human eye.

My most successful approach has been to filter out the super obvious stuff, then send the rest to a queue that is kept an eye on by humans. A small staff can easily fight back against bullshit. Yes, YouTube is orders of magnitude above what I've dealt with, but I'm sure their smart engineers can filter out the majority of stuff.

It's painfully obvious that YouTube lacks human eyes on things. That, or they lack direction and empowerment of their employees.


YouTube already has a huge staff of human moderators, probably in the tens of thousands. Moderating content in a consistent and fair manner is essentially impossible, largely because nobody agrees on what should be kept off the platform.

Is the image of a female nipple pornographic or empowering? Is the image of a murder victim terrorist propaganda or reportage? Is a political rant hate speech or free speech? There is no universal answer to any of these questions, least of all when we consider the huge difference in cultural attitudes across an international user base.

I can highly recommend the documentary The Internet’s Dirtiest Secrets: The Cleaners, which illustrates these paradoxes with brilliant clarity by intercutting between the moderators and the moderated.


> Nothing beats a human eye.

Well, thousands of videos sure beat a human eye.

Then you'll need lots and lots of people that watch those videos and make judgement calls, and there will be edge cases, and you will have to give people a chance to challenge those decisions, etc. For a forum with a few hundred posts a day, that's certainly an option. At Youtube's scale with hundreds of hours of video uploaded every minute of every day, I have my doubts.


Not to mention the cost of necessary therapy.


> the best solution is to simply use actual humans. I've done a fair amount of work on trying to detect and deal with spammers, fraudulent activity and such. Nothing beats a human eye.

Except when those humans are captured ideologically, then start to warp an even handed policy towards their own ends.


Humans are expensive, and reviewing horrible content causes mental health issues. I don’t blame them for trying but I agree they aren’t as good and have a long way to go.


> best solution is to simply use actual humans

Gee, how many humans might it take to review the >400 hours of video uploaded every minute? (source: https://www.tubefilter.com/2015/07/26/youtube-400-hours-cont...)


400 hours/minute is 24,000 minutes of video per minute, or 241,920,000 minutes of video per week. Assuming a human can review 8 hours of video per day, 5 days a week (too high, but not by much), and that YouTube engineers can create an algorithm that automatically marks 95% of video (probably low, 95% correct labelling is easy to get; 99% is usually necessary to outdo humans), then you would need 5,040 humans.

Automatically labelling 97.5% would halve that. If employees can only review 6 hours of video per work-day, then it would increase by a third. With both, you'd need 3,360 humans.
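To make the arithmetic concrete, here's the same back-of-the-envelope estimate as a short Python sketch. All inputs are the assumptions from this comment (400 hours/minute uploaded, hypothetical filter rates and reviewer workdays), not real YouTube figures:

```python
# Rough reviewer-headcount estimate: how many humans are needed to watch
# whatever an automatic filter doesn't catch. Assumed inputs only.

UPLOAD_HOURS_PER_MINUTE = 400  # ~400 hours of video uploaded per minute
# Minutes of video uploaded per week: 400 h/min * 60 min/h * (60*24*7) min/week
UPLOADED_MIN_PER_WEEK = UPLOAD_HOURS_PER_MINUTE * 60 * 60 * 24 * 7

def reviewers_needed(auto_filter_rate, review_hours_per_day, days_per_week=5):
    """Humans required to review everything the algorithm can't auto-label."""
    needs_review = UPLOADED_MIN_PER_WEEK * (1 - auto_filter_rate)  # minutes/week for humans
    capacity = review_hours_per_day * 60 * days_per_week           # minutes one reviewer watches/week
    return needs_review / capacity

print(round(reviewers_needed(0.95, 8)))    # 8 h/day, 95% auto-labelled -> 5040
print(round(reviewers_needed(0.975, 6)))   # 6 h/day, 97.5% auto-labelled -> 3360
```

Changing either assumption by a couple of points swings the headcount by thousands, which is the real takeaway here.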


And multiply that by, let's say, $50K/year fully burdened cost per employee (assuming locating them in low-cost areas) and we've now added a minimum of $125 million per year in overhead for a service that already probably doesn't make much money.


> we've now added a minimum of $125 million per year in overhead

Which is <1% of the revenue that YouTube generated last year? Still a meaningful number. But, not unmanageable.


At what profit margin, though? I've never seen actual YouTube numbers broken out in a financial statement.


> you'd need 3,360 humans

Likely this number would decline rapidly over time - i.e. months vs. years - as the ML would use the human tagging to improve itself to the point of reaching the 99% threshold fairly quickly.


>and that YouTube engineers can create an algorithm that automatically marks 95% of video (probably low, 95% correct labelling is easy to get; 99% is usually necessary to outdo humans)

I think a crucial question is whether the numbers quoted are before or after the current set of automated filtering (content id and any existing mechanisms to catch bad, illegal or offensive content.)

They may already be catching the vast majority of easy and fingerprintable things. They may be at the barrier where false positives increase at a high rate.


Why are you falsely assuming it all needs to be reviewed? It only needs to be reviewed if it's flagged. Each additional flag on the same vid should double the likelihood that the material is objectionable. The question is how much video is actually flagged, and what percentage of those reports are false? Even that's an easy problem to solve, because if the same accounts keep flagging clearly-not-objectionable content they can just be muted.
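A minimal sketch of that flag-weighting idea (all names and weights are made up for illustration; this is not anything YouTube actually does): each flag multiplies a video's review priority, and flags from accounts whose past flags were never upheld count for nothing, which mutes mass-flaggers automatically.

```python
from collections import defaultdict

# Fraction of each user's past flags that reviewers upheld; unknown users start at 0.5.
flagger_accuracy = defaultdict(lambda: 0.5)

def review_priority(flags):
    """Each trusted flag roughly doubles the odds the video is objectionable."""
    score = 1.0
    for user in flags:
        score *= 1.0 + flagger_accuracy[user]  # accuracy 1.0 doubles the score, 0.0 adds nothing
    return score

flagger_accuracy["serial_false_flagger"] = 0.0  # account that keeps flagging fine content

print(review_priority(["alice", "bob"]))               # two average flaggers -> 2.25
print(review_priority(["serial_false_flagger"] * 10))  # mass-flagging by one bad actor -> 1.0
```

Videos would then go to the human queue in descending priority order, so reviewer time is spent where the flag evidence is strongest.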


This is why it's great to have an ecosystem of many small players.


I like the intention, and I hope to see more of this... but it's kind of hard to imagine that the end goal here is "Responsibility", when the business goal is "how many ads can we sell?"


Valid concerns. What does Google gain from putting genuine effort into stopping the spread of these kinds of video? Perhaps more "responsible" content and an eventual turn-around of the platform... But that's a big gamble - may take years, may never happen.

In the meantime, they're bleeding man hours, money, ad impressions, perhaps advertisers targeting the audiences which are being culled.

Now examine the question: What does Google have to gain by appearing to put a genuine effort into this while not actually addressing the problem?

They gain a whole lot more than in the first scenario - new users hopeful of an increase in quality, support from privacy and attention-span advocates, insight into how they can influence, suppress, or promote certain themes going forward, more metrics to become more entrenched in the data harvesting machine learning game...


The purpose of ads is not just eyeballs. It's to get people to buy a product. If a lot of kids that watch YouTube all day see a Nike ad, it doesn't mean they'll buy sneakers.

There are more and more people that do nothing but be on the internet all day. Advertisers don't want those people, they want social people that watch YouTube on the side, and are likely to buy products too.


Valid concern, at the same time as users.... most aren't willing to pay. So I guess we get what we pay for.


I wonder if they could look at groups of users who upvote "bad" videos (as marked by their own internal reviewers), see what else they upvote, cluster those videos, and demote them in their algorithm.

IOW, use the users who like "bad" videos to help find other "bad" videos via an algorithm.
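One hedged way to sketch that (purely illustrative, with toy data): treat each video as the set of users who upvoted it, and score candidates by their audience overlap (Jaccard similarity) with reviewer-labelled "bad" videos. As the replies point out, a raw signal like this is easy to game, so it would need flagger/fraud trust weighting on top.

```python
def badness_score(video, likes, known_bad):
    """Average Jaccard overlap between a video's upvoters and known-bad videos' upvoters."""
    fans = likes[video]
    overlaps = []
    for bad in known_bad:
        union = fans | likes[bad]
        overlaps.append(len(fans & likes[bad]) / len(union) if union else 0.0)
    return sum(overlaps) / len(overlaps)

# Toy data: video -> set of users who upvoted it.
likes = {
    "labelled_bad": {"u1", "u2", "u3"},
    "candidate":    {"u2", "u3", "u4"},   # shares most of its audience with the bad video
    "unrelated":    {"u9"},
}
print(badness_score("candidate", likes, ["labelled_bad"]))  # 0.5 -> candidate for demotion
print(badness_score("unrelated", likes, ["labelled_bad"]))  # 0.0 -> leave alone
```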


That sounds easy to abuse. Just create some users, have them like obvious bad videos, then have them like videos of <insert competitor>.


Literally any algorithm that simple is easy to abuse, you need to integrate any algorithm with fraud detection and user trustworthiness assessment.


Either Twitter figured that they don't care enough, or it's not that easy. Mass-flagging is still a thing they haven't solved, and what are you going to do? Kick out your users because they're trying to manipulate your powerful algorithm to do their bidding? There's easier ways to make your audience find a new platform.


It's not that easy, the cost of doing it right is astronomical and most users don't care.


This really only works in a low volume situation. There's about 500,000 hours of video uploaded to Youtube every day, so employees would only be able to actually watch an insignificant fraction of that per day. Additionally, they're unlikely to be qualified to judge whether a video is actually good or bad. They can judge the sound quality, or videography, but unless you're well versed in the field the video is about, it's pretty much impossible to determine if a video is actually offering novel or good information.

Additionally, the people botting upvotes are going to have several thousand, possibly hundreds of thousands of bots. So detecting the circle is going to be very difficult. Even more so if you consider that a lot of legitimate users will have the same tastes and watch a lot of the same videos.


Quality content takes time to create; many of the past changes in YouTube algorithms explicitly pushed for churning out low-quality content frequently and rewarded it, while seriously disadvantaging irregular quality content (that moved elsewhere in the meantime). You get what you reward.


This can be seen easily if you go to Youtube and search "fortnite", with every single top video being just over 6 minutes long, full of vapid nothingness related to the title, and perhaps 20-30 seconds of content related to the title it originally baited you into watching. The majority of what can be found through searching/filtering on YouTube is also hot steamy garbage, or large corporate channels. The recommendation engine sometimes throws in some gems, but rarely. The only video length filters are "short <4mins" and "long >20mins". Why on earth? It's like they randomly tried those values in 2008 and never looked at it again.


Kids watch that stuff and don't yet know how to sift through the trash to get to the good videos. They're like us in 2007 when we'd all get Rick Roll'd by blindly clicking on YT links placed on forums.

By setting their KPIs around "view time", YT created a product which caters almost exclusively to kids and conspiracy nuts, aka the only 2 demographics that have seemingly unlimited time to watch cartoons or people talking into a camera.


I noticed this a long time ago. Content creators were being rewarded for "daily" videos. So many of the channels I loved had videos where they came out and said "I'm going daily and, yes, the videos are going to suck until I daily upload good stuff." A few channels I watched just disappeared because of the daily upload reward/promote requirement. Algos don't work with creative works and human tastes.


Not just YouTube algorithms either. Almost all social media site algorithms and search engine ones (like say, Google's) seem to put a heavy emphasis on regular content rather than quality content.

And it's the main cause behind both journalism going through a massive decline in quality and so much low effort content showing up everywhere online.


Please just give us the ability to blacklist channels. That’d increase usability by 10x for me.


AFAIK you can do that? Hit the menu on a video recommendation, "Not Interested" -> "Tell Us Why" -> "I'm not interested in this channel".

IMO, though, blacklisting has been mostly proven to be insufficient. If you block, say, noted chud Sargon of Akkad, that doesn't mean that another half-dozen "Rationalist" "Thinkers" Who Just Happen To Be Chuds won't be recommended to you as well.


Not sure if my experience is typical or not but that hasn't helped me one bit. When I tried the various signals you listed, that just seemed to encourage YouTube to show me even more videos/channels similar to the ones I wasn't interested in. It seemed to be more a signal of 'hey, this person will help us identify more videos like this' than 'let's not show this person any more of this type of content'


Allow channels to categorise themselves (which I believe already exists but is underutilised?) and allow users to express positive or negative interest towards categories. There is little incentive to self-categorise incorrectly, as it would lower your visibility to those who want to find you.


If you are getting suggested these things it's because people with similar viewing habits as you watched them. I get suggested riposte videos to such content without ever seeing the original "bad" video. I'm assuming this is based on my own habits on the site. I would prefer getting neither, but here we are.

To claim that blacklisting would be ineffective because there is always more trash is disingenuous. If everyone blocked (e.g.) PewDiePie, surely they would get less YouTube talking-head content.

It's like saying that we shouldn't/can't avoid candy because people are always making new candies.


It's well-documented at this point that YouTube's recommendation algorithms enjoy feeding fascist chuddery to people who like things like video games or history, yeah. But I didn't say you shouldn't do it. I do it, because one fewer garbage channel is one more chance for something less garbage. But it certainly seems like it's bailing against the tide.


I second that personal blacklisting is insufficient.

I did just that, clicked on "not interested" -> "tell us why" -> Not interested in this channel"

Yet I'm still getting video suggestions from "Charisma on Demand" and the like.

Also this mechanism doesn't account for changing tastes.

I was once interested in a certain gun channel, but after a shooting and some particularly daft comments about it by the channel owner, I decided to take up interest in literally anything else. Yet videos from it and its ilk followed me around for about a year.


That 'feature' does nothing. I've tried everything, but getting YouTube to stop suggesting videos from Jimmy Fallon or Conan O'Brien or any of those other 'lame obsolete-format "late night" shows for tepid boomers who miss Johnny Carson' seems to be totally impossible.


It doesn’t work for me. I still get channels I never want to see constantly coming up everywhere.

It’s quite annoying because I’m trying to help actively train my preferences and it’s as if YouTube will not let me participate in that.



Video Blocker extension for chromium and firefox keeps me sane on YouTube. It is surprisingly quick to block the main offenders and have actually decent recommendations on the sidebar.


I wish, but I doubt it will ever come / be useful.

Google and YouTube want their AI or algorithms pointed AT you, not working for you.


Honestly, it's pretty ridiculous that YouTube is being pressured to police the content on its platform. What worries me far more than extremist videos on YouTube, is the idea of a mega-corporation exerting outsized influence over the opinions we are exposed to. Is our democracy really so fragile that it cannot survive an open marketplace of ideas?


> Honestly, it's pretty ridiculous that YouTube is being pressured to police the content on its platform.

It's not "ridiculous", it's table stakes. If you ever run an open community, you'll find that bad actors are inevitable.


What are these bad actors doing that requires top-down censorship and marginalization?


If you have to ask that question, then you've never been a moderator or run a platform before.

There is a lot of content that is invisible to you, the user, because it ends up being filtered out by moderators: overt racism, calls to violence, outright trolling, stalking and more. Even as a moderator of a small private platform, there was a user who was stalked by someone who would repeatedly re-register accounts solely for the purpose of harassing them or digging up any personal details they could find about them.

Eventually you have to establish a certain level of moderation for your platform or those bad actors can and will chase off all of the other users for one reason or another. This gets worse as a platform scales and the level of malicious content your platform is exposed to grows exponentially.


None of the problems you've raised are related to YouTube's proposed changes in the linked articles. They already have rules against most of the things you mentioned. The proposed changes go far beyond them.

https://www.youtube.com/yt/about/policies/#community-guideli...

> Starting in 2012, YouTube rebuilt its service and business model around “watch time,’’ a measure of how much time users spent viewing footage. A spokeswoman said the change was made to reduce deceptive “clickbait” clips. Critics inside and outside the company said the focus on “watch time” rewarded outlandish and offensive videos.

YouTube's engine is currently ranking content based on the amount of time other users are spending actively watching that content. Unwanted videos are already de-prioritized by the current algorithm. The proposed changes are explicitly intended to de-prioritize videos that people are actively watching.

It remains to be seen how they will measure "quality". If they find a bias-free way to measure it, I'm all for it. Most likely though, it will be driven by top-down notions of "outlandishness" and "offensiveness", as opposed to bottom-up user engagement.
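To make the "watch time" bias concrete, here is a toy illustration of ranking purely by aggregate watch time, the 2012-era metric described above. The video names and numbers are invented for the example; the point is only that high-volume shallow views beat deep engagement under this metric.

```python
# Toy model: rank videos by total watch time (views * average watch seconds).
# All data below is made up for illustration.
videos = {
    "calm_documentary": {"views": 1_000, "avg_watch_seconds": 1_800},
    "outrage_clip":     {"views": 50_000, "avg_watch_seconds": 240},
}

def watch_time(v: dict) -> int:
    """Total seconds watched across all views."""
    return v["views"] * v["avg_watch_seconds"]

ranked = sorted(videos, key=lambda name: watch_time(videos[name]), reverse=True)
# The outrage clip wins on total watch time despite far shorter per-view
# engagement, which is exactly the bias critics describe.
```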


You asked what kind of bad actors an open platform might have to deal with and I gave you an example with a very small platform.

Now scale that up to match YouTube and it becomes a nightmare. YouTube is being pressured to take action by its users because what's happening is exactly what I was talking about: they have bad actors taking advantage of the platform. That's where Elsagate came from, and the more recent revelation of pedophiles using the platform to groom kids.

There is no unbiased way of solving this problem, because you have to establish certain things as being bad for your platform, which means you are going to be biased against them. YouTube's reaction, in turn, is an attempt to solve these issues at scale when they clearly didn't consider how the platform would scale in the first place.


The "bad actors" that you're referring to on YouTube aren't harassing anyone or inciting violence or breaking the YouTube community guidelines in any way. They are simply espousing opinions that you dislike and disagree with.

Sure, YouTube is allowed to do anything they want. But suppressing an open marketplace of ideas isn't in society's best interests. If unpopular opinions were suppressed in the past, the movements for women's rights and gay rights and civil rights would have faced a massive setback. And let's not even get into the question of whether it's in society's best interests for a handful of corporations to arbitrarily decide how to censor the marketplace of ideas.


I'd like you to take a minute and go visit sites like Voat or Gab. Just hover around there for a bit if you haven't. Those are sites that are exactly what you want: a pure and open marketplace of ideas without any sort of censorship or rules. This is to prove a point.

Which is that an open marketplace of ideas has no value in itself, because the marketplace can be very easily taken over by bad actors if you don't exert some control over your userbase.


Clearly YouTube has been extremely valuable even before they felt the public pressure to penalize "low quality content".

Your comment about an "open marketplace of ideas having no value in itself" is very curious. The very idea of freedom of speech being a good thing is predicated on the idea that an open marketplace of ideas is a good thing. If it isn't, you may as well lobby the government to ban any and all speech which you consider corrosive.

"When men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas--that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes can be carried out. That, at any rate, is the theory of our Constitution. It is an experiment, as all life is an experiment."

― Oliver Wendell Holmes, Jr.


Did you actually visit Gab or Voat? I'm asking you this because if you haven't, then you can't actually argue in favor of the 'marketplace of ideas' very well. Either that or you're not willing to argue in the defense of such sites. Or you're possibly being disingenuous, considering we've switched from corporate curation/censorship of their content towards government censorship which are two very different topics.

As I mentioned, places with zero censorship and zero moderation can and are very quickly overtaken by malicious actors. Either way, if you're willing to defend Gab and Voat on their merits of being an 'open marketplace of ideas' after having gone there then we can continue our argument.


The concept of "marketplace of ideas" goes far beyond Gab and Voat. Two bad apples and cherry picked examples do not make for a counter argument.


Ask @dang and/or your favorite forum admins why any open platform needs moderation.


Posting videos of death, gore, kiddie porn, targeted bullying, fascist propaganda, trying to incite genocide, rape and other violence.

The idea that we should allow sites that are marketed to the general public to be unmoderated is completely ridiculous.


The only things on your list that they should be prohibited by law from showing is kiddie porn, rape, and targeted bullying. People should be able to police themselves from the rest as they see fit, but there's nothing wrong with posting it online.


Yeah, those should be prohibited by law. I think that genocide should also be on that list. And youtube can (and IMO should) continue to prevent the rest. Death, gore, genocide, fascism, violence, etc. do not belong in general public discourse. There's always some other website to post snuff porn.

I don't agree with the way that youtube has applied their guidelines for removing content in general, but I do generally believe that they should moderate their content.


> Yeah, those should be prohibited by law

Then try to get it passed. Write to your representative. If it's so obvious that it should be prohibited by law, then you should have no trouble finding support.

I suspect what you'll find instead is that people don't think it should be prohibited by law, and prefer to instead threaten and pressure a more vulnerable and less democratic entity like YouTube into carrying out their unpopular censorship desires.


I think the things I've outlined are pretty popular TBH.


Evidently not popular enough to become prohibited by law. Granted, I'm not as familiar with non-US laws.


I think it's fairly reasonable that anybody who runs a recommendation service is held responsible for the content they recommend. YouTube isn't just a video hosting platform, it's a homepage providing recommendations, a service that automatically queues up a new video after the one you're currently watching, and an advertising network that directly funds the creation of the videos they host. YouTube is already putting their thumb on the scale - the least they can do is be responsible about what they recommend.

It's been pretty clear over the last couple years that our democracy really is so fragile that it can't survive one of the dominant media companies of the day actively promoting extremist and hateful content.


There's no real reason that "recommended" can't also mean "curated." They could definitely limit recommendations to +1's by trusted users who are getting feeds by entropy.


It's not an open marketplace, and each side is fighting a different fight:

>Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert.


What makes it not an open marketplace? The very bias introduced by platform like YouTube which you seem to be defending? Is the average person so dumb that they cannot distinguish between "sound argument" vs "intimidate and disconcert"? If that's how you feel about the average person, why do you still favor democracy?


[flagged]


> People aren't dumb, but I think people have poor information literacy skills. The lack of these skills leads to people being unable to distinguish between "sound argument" vs "intimidate and disconcert"

I actually agree with you. I think that the best way to deal with the problem isn't to censor the marketplace of ideas, but to fix the political system

https://outlookzen.com/2014/01/21/democracy-by-jury/


[flagged]


I don't consider myself from "the humanities", but the lack of applied rationality skills in the general population is dangerously severe. (Look at how populism spread by convincing people via the cheapest of rhetorical appeals to ego and arrogance.)


Given your hostile opening, I'll start by asking what you think that evidence would look like and what you would find convincing.


So not dumb-dumb, but dumb for all intents and purposes of making decisions in a democracy? What's the difference? At the core of your argument is the idea that people in a democracy aren't able to do their singular duty: make a reasonable choice based on available information, so they need to be coddled and only fed information that is "fair", "fair" as determined by liberal arts majors I guess.

And if anti semitic arguments are so unfairly effective, why hasn't it won? All the major platforms have a lot of anti semitic speech that they haven't been able to filter out, and it's remarkably easy to find on the internet, so why hasn't it entranced the majority of non-liberal-arts majors? Why are the masses of average people calling for the removal of anti semitic content - an act built on a rejection of anti semitic ideas - instead of being enthralled by it?


>so why hasn't it entranced the majority of non-liberal-arts majors?

Perhaps a possible explanation is that such speech is already disallowed on many popular networks - Reddit, Twitter or Facebook, for instance. This leads to a secondary effect in which people who have never actually seen antisemitic arguments believe (rightly) that it's something they wouldn't have wanted to see anyway, so they are fine getting rid of it. That's not a great reason on their part, but we can't say that the way antisemitism proliferates has not itself met counter-balancing forces since Sartre wrote the section GP quoted.

In a similar way, antisemitism is very healthy on forums that do not block it; if you open any thread on 4chan's /pol/ board you'll find many instances of "unironic" antisemitic content, and even if you go to other boards like /g/ or /int/ you'll find plenty of "ironic" antisemitism.

I'd be cautious of saying that antisemitism does not have the ability to grip people anywhere, though; a recent issue was the proliferation of antisemitic ideas on a popular meme subreddit[0] and the circulation of extremely popular antisemitic myths and propaganda[1] - the support shown for this conspiracy theory shown through posts and upvotes on those posts serves to show this is not as marginal of an opinion.

[0] https://www.reddit.com/r/virginvschad/comments/bb1es2/the_vi...

[1] This is a debunking of the conspiracy theory rather than the conspiracy theory itself since I don't want to drive people to the original video, though to view it you can check the description of the video: https://www.youtube.com/watch?v=2o3pkoZyYRA


> Perhaps a possible explanation is that such speech is already disallowed on many popular networks - Reddit, Twitter or Facebook, for instance.

Antisemitism isn't new, it predates Reddit and Facebook and Twitter and we've already been through periods of history where antisemitism had widespread exposure in the public discourse yet democracy still defeated it. Antisemitism (and discrimination in general) is a bad idea for reasons that ordinary people can understand, and practically every democracy agrees. Evidently, we didn't need YouTube to save us from the magical techniques of persuasion that Sartre describes.

> the support shown for this conspiracy theory shown through posts and upvotes on those posts serves to show this is not as marginal of an opinion.

I don't doubt that certain places have a "healthy" amount of antisemitism, but it has no serious grip on democracy, mainstream society, or its chosen laws. There will always be people who believe in things that others consider harmful. I find it difficult to believe that you can or should realistically stop that with censorship.


>we've already been through periods of history where antisemitism had widespread exposure in the public discourse yet democracy still defeated it.

Sartre was writing right after France was liberated from the Nazis, so from his own position in history he'd laugh if he were to hear you say that. Arguably anti-semitism has not enjoyed a popular resurgence since the downfall of the Nazis, and I'd hazard a guess that it's because its effects were so evident, as was its proliferation among ordinary "thinking" people in the Nazis' rise to power. Perhaps with new generations who have no memory of the effects of anti-semitism, who are only acquainted with it through a school system they may be (rightly) distrustful of, there is much less of a buffer, and perhaps that's a buffer YouTube can aid in providing.

>There will always be people who believe in things that others consider harmful. I find it difficult to believe that you can or should realistically stop that with censorship.

Arguably it can be minimized; some researchers point out that speech itself, in its very function as speech (what makes it such a powerful concept), can harm[0].

[0] https://global.oup.com/academic/product/speech-and-harm-9780...


The "buffer" you want YouTube to raise at the cost of freedom of speech is...ignorance? I can quote famous people too: Those who cannot remember the past are condemned to repeat it.

And of course speech can harm, but that doesn't mean it can or should be stopped with censorship. Who decides what to censor? How can you realistically censor ideas in a free society with advanced communication tools? What kind of society will you create when its main defence against bad ideas is ignorance? I hope that citizens ask these questions before resorting to the childish solution of hiding bad things away.


>So not dumb-dumb, but dumb for all intents and purposes of making decisions in a democracy? What's the difference?

The difference I tried to explain is the difference between dumb and unskilled. We as a society don't value the skills of information and media literacy and those skills are withering in the population at large. We could easily address this problem if we chose to do so.

>And if anti semitic arguments are so unfairly effective, why hasn't it won?

It is winning; hate crime rates are climbing: https://www.hrc.org/blog/new-fbi-statistics-show-alarming-in...

>Bias-motivated crimes based on race, religion, disability and gender all increased. The FBI reported that anti-Black hate crimes increased by 16 percent, from 1,739 incidents in 2016 to 2,013 incidents in 2017. Hate crimes targeting Black people represented 28 percent of all reported hate crimes in 2017. Every other racial and ethnic group also saw increases in the number of reported hate crimes in 2017.

>Additionally, hate crimes motivated by anti-religious bias increased 23 percent, largely driven by a 37 percent increase in anti-Jewish hate crimes, which constituted the majority of religion motivated hate crimes. Hate crimes motivated by bias against people with disabilities increased by a disturbing 66 percent and hate crimes motivated by gender bias increased by 48 percent.


> The difference I tried to explain is the difference between dumb and unskilled. We as a society don't value the skills of information and media literacy and those skills are withering in the population at large. We could easily address this problem if we chose to do so.

I understand the distinction you were trying to explain, but I don't see how it's relevant. You're still saying "we as a society" can't do the critical democratic task of making good choices based on available information, because we're "unskilled" at information or something.

To repeat noego's original question: If that's how you feel about the average person, why do you still favor democracy?

> It is winning, hate crime rates are climbing

That's a pretty broad definition of "winning". Antisemitic ideas and actions have gone from a small minority far from the mainstream, to a slightly less small minority far from the mainstream. This is the catastrophe that you're willing to sacrifice freedom of speech to avert?


whatever gets them to stop recommending me BEN SHAPIRO [EVISCERATES / DEMOLISHES / LEAVES SPEECHLESS] LIBERAL [PROFESSOR / SMART PERSON / SPEAKER] OVER [SOMETHING BEN IS CLEARLY WRONG / TROLLING ABOUT] videos.


Majority of my YT view history is music. 90% of my recommended videos are similar genres of music, but there is nearly always some political/culture war/conspiracy video sitting there as well. The other day it was "The REAL truth about the Vietnam War". Super relevant in 2019.

Fortunately, I can block channels from my Recommended, but there are hundreds of similar channels out there, so killing one just means it'll be replaced by the next nutjob channel in the queue.


That is pretty much why I quit Facebook: I was sick of ultraconservative relatives tagging me in political shitposts linking to Shapiro-style videos and conservative political memes. The only reason I still have an account is so my grandmother and sister can see baby pictures of my kid.


I think when critics accuse YouTube of "stepping on the scale" here, they under-consider how warped that scale is already. The existing YouTube algorithm isn't neutral or fair, and is laughably easy to game. It's already biased—it's just biased towards extremes.

YouTube waded into this moral quagmire a long time ago. Being biased towards "engagement" is just as troubling as having a more human-relatable bias.

Now they're starting to take responsibility for the recommendations this algorithm spits out. I think that's a good thing.


How about metrics that do NOT automatically take down videos on request, no questions asked?


There are so many simple rules that would improve their recommended content. Examples:

1. Lower rank for clickbait titles

2. Lower rank for listicle / compilation / clip videos

3. Delist listicle / compilation videos with stolen content (most of them)

4. Lower rank for clickbait thumbnails

5. Lower rank for YT drama videos

6. Increased rank for 100% original videos
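Rules like these could be layered on top of an existing ranker as simple multiplicative adjustments. A minimal sketch of that idea follows; the `Video` fields, classifier scores, and all the weights are invented for illustration, not anything YouTube actually uses.

```python
# Hypothetical re-ranking pass applying the six rules above as score
# adjustments. Every field name and weight here is made up for the example.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    is_compilation: bool       # rules 2/3: listicle / compilation / clip video
    has_stolen_content: bool   # rule 3
    clickbait_score: float     # rules 1/4: 0.0..1.0 from some title/thumbnail classifier
    is_drama: bool             # rule 5
    is_original: bool          # rule 6
    base_rank: float           # score from the existing engagement-based ranker

def adjusted_rank(v: Video) -> float:
    if v.has_stolen_content:
        return 0.0                              # rule 3: delist entirely
    score = v.base_rank
    score *= 1.0 - 0.5 * v.clickbait_score      # rules 1 & 4: penalize clickbait
    if v.is_compilation:
        score *= 0.7                            # rule 2: lower rank
    if v.is_drama:
        score *= 0.8                            # rule 5: lower rank
    if v.is_original:
        score *= 1.3                            # rule 6: boost original content
    return score
```

The hard part, of course, is not the arithmetic but reliably producing signals like `clickbait_score` or `has_stolen_content` in the first place.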


Channel blocking, with recommendation weighting.

If I get recommendations from a channel producing shit, I should be able to block the entire channel.

Sufficient good-faith blocks dock that channel's recommendation weighting.

The fact this isn't available even when logged in removes all incentive I have to log in to YT.

(Which is already near nil.)
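A sketch of what "good-faith blocks dock the channel's weighting" could look like; the data structures, damping constant, and function names are all invented for the example.

```python
# Illustrative sketch: per-user blocks plus a global weight penalty that
# grows with the number of good-faith blocks a channel has received.
from collections import defaultdict

block_counts: dict[str, int] = defaultdict(int)       # channel -> total blocks
user_blocks: dict[str, set] = defaultdict(set)        # user -> blocked channels

def block_channel(user_id: str, channel_id: str) -> None:
    """Record a block; each user counts at most once per channel."""
    if channel_id not in user_blocks[user_id]:
        user_blocks[user_id].add(channel_id)
        block_counts[channel_id] += 1

def recommendation_weight(user_id: str, channel_id: str, base: float) -> float:
    if channel_id in user_blocks[user_id]:
        return 0.0                                    # never recommend to the blocker
    # Global damping: each block shaves a little off the channel's weight.
    return base / (1.0 + 0.01 * block_counts[channel_id])
```

In practice the "good-faith" qualifier is the hard part: without some check on who is blocking, brigades could dock a channel they simply dislike.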


But then how would virtually every YouTuber make a living? You’ve described all of their tricks :)


This seems like a re-implementation of television ratings and survey (Nielsen) methods. In this way, it's interesting how strategies on the commercial internet gravitate toward business models with a century of momentum (and student loans) behind them. Business chips away at the revolution.


> The video service generates most of its revenue through advertising and the business works best when as many people as possible are spending as much time as possible on YouTube. Hence executives’ obsession with engagement stats. Adding murkier metrics to the mix could crimp ad revenue growth.

This is the core point of it. I've been critical of YouTube for some time, but if they are willing to forgo some revenue growth in order to deal with the current problems, I applaud them.

> The YouTube channel Red Ice TV broadcasts political videos with a “pro-European perspective,” which critics label as promoting white supremacy.

Bloomberg, you can call them Nazis. This is not a racist uncle that everyone needs to deal with, who can sometimes be reasoned with. These people are literally Nazis.

> YouTube declined to share details on how it uses metrics to rank and recommend videos. In a January blog post, the company said it was hiring human reviewers who would train its software based on guidelines that Google’s search business has used for years.

It's going to be tricky to get this to work, and I'm sure google will make many mistakes by automating decision making to a degree where it will prove to be meaningless, but finally there's going to be some human moderation. Thank you.


I think a key thing being overlooked is "human reviewers who would train its software...", which usually means farming the work out as human intelligence tasks (e.g. via Mechanical Turk), retrieving the results, and feeding that data into the machine learning algorithm's input. This is not the same as having a human look at the videos.


Well, the reality is that you generally don't get quality content when you allow people to publish whatever they want. Everyone has an opinion and viewpoint. That doesn't make them qualified opinions or viewpoints, and definitely doesn't make them worth considering. People have significantly mistaken and waaay overestimated the benefits of mass media in the hands of your everyday person.

It's becoming pretty clear the cost is not worth any perceived benefits (because I don't think you can really point to any benefits such models have given us).

And no, moderating and editorializing content is not dystopian. Whenever someone has that talking point it's really a case of someone read 1984 (and sometimes didn't actually read it) and decided to apply it to everything with absolutely no discretion and absolutely no respect for what the book actually said.


>Creating the right metric for success could help marginalize videos that are inappropriate, or popular among small but active communities with extreme views.

So, deplatform what's not mainstream, accepted by the establishment, favorable to Google...


I don't think that deplatform is the right idea here. Presumably, the videos will still be present and watchable. However, the algorithms used to sort and present videos must be seen as playing an editorial role, and that brings a different set of questions.


That is the equivalent of deplatforming in a lot of ways, especially on youtube where millions of videos are uploaded daily. If your videos are not suggested, and do not show up high in trending or search results, then they practically don't exist except from word of mouth and outside promotion.


Only if you rely on YouTube to give you exposure instead of merely using it as a video hosting site that you link to from somewhere else (personal website, other social network, mailing list, whatever). In that case YouTube's algorithm doesn't really matter, and even if they ban your account you can just switch to some other host (or even self-host).

But if you rely solely on YouTube for exposure (and, generally, monetization), you're setting yourself up for this kind of abuse. You need YouTube, but unless you're a huge channel commanding millions and millions of views, Google doesn't care about you. Worse, if you start bringing controversy, you're actually a liability.

I think Youtube's behavior is perfectly rational here. I don't understand on what grounds they should be expected to host controversial channels (and on top of that, give them money) if they think it's hurting the website.

Google is not the internet (yet), just host somewhere else. Find like-minded people and create your own platform. Make the web decentralized again.


Then by definition most videos on Youtube are already "deplatformed," since the space for recommendations and the front page is limited. Why is this suddenly a problem?


False. The trending and suggested pages are filled based on Google's algorithm, which determines which videos the user will like/watch the most. That is the only fair way to determine trending and suggested videos.


How is that fair? Google's algorithm is biased towards what brings revenue for google. Content that isn't easily monetizable is suppressed, and content creators are punished for not conforming their content to what the algorithm wants.


I never said Google should rank videos based on monetization value, only based on watch time and how well they predict a user will like a video. That is the only fair way, IMO. YouTube mostly ranks based on watch time and machine learning that predicts how well a user will like a video; monetization seems to play only a small part.


>If your videos are not suggested, and do not show up high in trending or search results, then they practically don't exist except from word of mouth and outside promotion.

Sounds like how the Web should work. If you're searching for it, you can find it. Why should it be spoonfed to users based on the decisions of some algorithmic black box?


Yeah. After all, novel ideas and iconic communities usually start small and annoy the heck out of the status quo.

It's so easy to put them in the same bag as the stereotypes of bad people.


Deplatforming via obscurity?


I would actually be glad if some of the more extreme and irresponsible opinions were deplatformed. YouTube can't be expected to platform the full range of opinions.

There is however a strong likelihood of this being exploited by Youtube for its own purposes.


Deplatforming, blocking, labelling and dismissing as ways of dealing with `extremes` is not only a bad solution, it only fuels those extremes, driving them into their own confirmation bubbles and toward more dangerous outlets for their opinions.

What needs to happen is solid debate: constructively listening, then dissecting and showing what they are wrong about and why, with proof.

As for platforms, deplatforming gives them PR, fuels their anti-establishment mentality, and (ironically) hands them the discrimination card as a social form of crying, winning over people sitting on the fence and driving them to hop off on the wrong side. It would be far better to listen, debate and handle it; heck, even if everybody just laughed and pointed, that would be unconstructive but still far more constructive than the way it is currently handled.

But the whole rise of label-and-dismiss has become a tool that many use to push their own extreme narratives and group-think mentalities without rationalising what they are actually doing.

Imagine if somebody said 2+2 is 5: you would point out what is wrong with that, and why, and show that it is 4. But society today seems to prefer to label, name-call and ignore them, without telling them how they are wrong or debating it so they learn. Hence it only fuels and angers those who believe the answer is 5, as they feel ignored, dismissed and, in effect, bullied. Then the whole victimhood mentality plays out, and the only debate happens in a confirmation vacuum, by both sides in isolation, giving rise to name-calling etc. All rather sad and ugly.


What you are discounting are the bad faith actors that use networks provided by YouTube, FB, Twitter etc to disseminate their message. I think a lot of us believe in marketplace of ideas where bad ideas are disproven and people update their beliefs and move on. But we overlook those who are so driven by ideology/emotions that they spread false data willingly to push their agenda.

They can recruit like-minded and impressionable (often young) people and build nationwide movements if left unchecked. Most of us don't have the time to go and verify every piece of information for ourselves. We rely instead on our notion of normalized opinion, which is informed by our cultural norms, media narratives, etc. The bad-faith actors want to push this moving window of opinion toward their end through a constant barrage of gaslighting, strawmanning and general FUD. I think rational discourse does not work in such situations.

2 + 2 = 5 does not cleanly translate into our political situation, because I think a lot of it is driven by more basic emotions (for instance hating a particular group of people intrinsically) rather than something objective like this particular equation. There might be ways of having dialogue even in such situations. But I believe it is not through algorithmically boosted viral videos on YouTube.


All very true. Take newspapers and some headline story that turns out to be false: the retraction later on is always hidden away. Same with likes and thumbs up on viral stories/videos - even when, a few hours later, it's proven to be fake beyond any doubt, the likes and thumbs up on the debunking will always fall short of the attention the original fake got. That is human nature at play more than technology, as the old word "rumours" (among many others) attests. So a technological solution is not clear cut.

Sure, there are upsides to banning and deplatforming; maybe they outweigh the negative aspects.

Really, it gets down to constructive debate and critical thinking being instilled into our education systems more than they are today. Which alas still seems lacking in many avenues of education.


> is not only a bad solution, but only fuels those extremes

Where is the proof for this? This is an often repeated talking point when it comes to "censorship" or what not, but if anything we've seen that not doing anything leads to very predictable, very preventable, bad outcomes. We're just on the shore of unrestricted self-publishing and yet we're seeing the effects that these channels of information have had on our society.

I think this is often a case of people assuming that everyone is a good-faith, rational actor. And that's simply not the case in any sizable society or community.


Whilst there are no studies in these instances (it's early to tell), blocking and deplatforming don't instantly stop people being themselves. Human nature is about fairness: people who feel they are unfairly treated - as they would by being blocked and deplatformed - will become more driven. Also, banning anything has been proven not to work as well as intended. Look at music as one example of human nature: songs that wouldn't even chart get banned by radio stations and boom, they suddenly chart without any radio or TV play. That was the case before the internet, and such mentalities in humans still exist. Something that nobody is interested in gets banned, and suddenly people get curious about it; even if they didn't want it, they will show more interest than they did prior to the ban. It's those people who are on the fence that will seek out those banned items, and if that happens in isolated bubbles without balanced input, then you see how it can be counterproductive.

But alas, proof in these instances will only become apparent down the line, and things will go unchecked and unbalanced until then. That may well be a problem, maybe not. But it all gets down to wait and see, and that most often never works out well at all. More so when the ban hammer has been played.

But you are totally correct: there is no simple solution, though I do feel banning is an extreme one.

Yes, the whole good-faith aspect is key. We presume the best, and any bad actor has a field day stamping all over that good-faith default - which sadly can change it into a cynical, pessimistic mentality, and that is just depressing. But then we know that one bad apple can spoil the bunch. Just as true with humans.


And yet I've barely heard a peep about Alex Jones since he was deplatformed. It seems to have worked pretty well there.


Hasn't stopped him. Just because you don't see him on your timeline or on the mainstream platforms doesn't mean he's gone away or changed. He's just shifted into a confirmation bubble with others, and with that, he's less likely to encounter any debate of his mentalities and ideas - which, as such, only get reinforced in his own mind.

This in a way prevents opposing-minded people from challenging them in debate, and sweeping problems under a carpet doesn't solve them. It just shifts them until they grow large enough that the carpet is just not big enough anymore.

However, being exposed to such opposing views can be distressing, and whilst sometimes you may be in the mood to debate them, other times you are not. So IMHO, put better controls in the hands of the users; sure, default to settings suitable for children. But banning is not the way in the end. Like films, you have ratings; today we can and should have a more granular rating system, akin to dating app choices, and give users the power to do their own censoring above and beyond the laws of the land.

But for many a default of not having to see the ugly sides of life and dealing with those issues, does seem to be a choice many make, albeit for various reasons.

Maybe one day we will all walk around with augmented reality goggles and filter out everything we don't like. But then, just because you can't see a problem doesn't mean it's not still a problem. Which is kinda the crux of many problems in the World today.


> What needs to happen is solid debate, constructively listening and then disseminating and showing what they are wrong about and why, with proof.

And also the tougher part, actually being prepared to admit that they might be right about some parts of what they say, interspersed with the things you disagree with. That is what makes a conversation about it fruitful.


Yes, it may be the case that they are not totally wrong about all aspects of the issues they raise; the problem is how they present them, and with debate those issues can be looked at to find the root cause. Many issues are often the effect of some overlooked issue, and sadly, by dismissing the first layer for whatever reasons - not listening to and dissecting what they say so it can be broken down and addressed fairly - you only fuel them into an even more extreme direction, isolating groups in society into political tribalism that ignores the issues through dismissal of any message other political tribes may gravitate towards.

This sadly only widens the gap between the political tribes and fuels the extremist aspects on both sides.

But putting across the right message, one that encapsulates and rings true with both sides without biasing it towards one side or another, is a very nuanced skill. Take climate change - people can argue one side or another all day long. But ask them "do you like clean water and air?" and you will find both sides agreeing. From there you can push through what is needed on the back of that point, rather than saying we are doing this to tackle climate change. Same result, just politically more easily absorbed, and it equally shifts away from the whole "well, they're not doing anything, so why should we" mentality. If anything, you shift that self-serving mentality onto your side, as they will then think about the quality of the air and water that they and their families breathe and drink. You also moot the historical-data argument - people arguing that it was warmer during the dinosaur period, and other counterarguments to climate change - as it is the here and now's air and water they breathe. It is also easy to point out that if you run a car engine in a closed garage, the air quality is not good for you - something anybody in denial will comprehend and have to accept.

But without listening to and debating what they said, pulling it apart, getting to the root cause and addressing the real issues in a way that all sides can accept, you just see the extremes growing wider and further apart, and that is not healthy for either side.

Perhaps we should worry more about "Social Change", as the current approach is akin to sweeping under the carpet and pretending there's no dirt and everything is rosy, even though the distorted landscape hints that it is not - and that carpet will bulge and bulge over time if this direction is left unchecked.


>I would actually be glad if some of the more extreme and irresponsible opinions are deplatformed

As long as they are ones your culture doesn't share, and/or you happen to disagree with.

Which, of course, will make them "objectively" extreme and irresponsible.


What a dystopian nightmare the internet is turning into. Anything that isn't in line with "right thought" isn't Responsible and is therefore hidden away? How did we end up here? How did Silicon Valley go from a place focused on ideas and building the future to generating systematic censorship and endless puritanical moral panic? We badly need some new blood guiding tech development.


>How did we end up here? [...] systematic censorship and endless puritanical moral panic?

The reason is advertising money is what funds Youtube. Ultimately, this means Google has to make sure their videos are mostly "advertiser-friendly"[0]. This unavoidable editorial tension is what the prior boycott incident (Adpocalypse[1]) was all about.

>We badly need some new blood guiding tech development.

It's not just a new programming team with different ideals. You need a new funding model.

The obvious alternative is to create a video site that doesn't depend on advertisers. With no ad sponsorships as a revenue constraint, what options does that leave us?

(a) paid subscription or donation websites. But most viewers don't have money to pay for a youtube-clone with non-ad-friendly videos. Even if they did have discretionary income, they wouldn't pay for it anyway. (Example of users overwhelmingly preferring ads over paying an insignificant 8 cents per month.[2])

(b) volunteer self-funded decentralized video service like Peertube

Both those options have massive economic and technical disadvantages. Storing petabytes of videos is expensive and so far, nobody else has figured out a non-advertising model to pay for the hosting costs to serve them to millions.

Keep in mind that the other video websites that some people do pay for such as Netflix, HBO Go, Hulu, etc are also not bastions of free speech that allow extreme political views.

[0] https://en.wikipedia.org/wiki/Censorship_by_Google#Advertise...

[1] https://www.google.com/search?q=Adpocalypse+advertisers+boyc...

[2] https://www.tune.com/blog/mobile-ads-70-of-smartphone-owners...


How come this isn't a problem for Facebook? I could see some crazy political post right next to an ad, and nobody would say "oh well, the brand from that ad is now associated with <fringe political idea>".

I don't see how this is the case for YouTube, but not Facebook. Why are brands so concerned about association on one platform, yet not the other?


For the same reason that brands aren't concerned that the bus carrying their ad may be waiting for the light to change... In front of a strip club.


Why is it a dystopian nightmare for Youtube to curate its content according to "quality", but it wasn't a dystopian nightmare for Youtube to curate its content to maximize engagement for ad revenue?


They give an example in the article of Logan Paul's YouTube channel being troubling. Apparently his fans enjoy his videos as measured by watch time, the like button, views, and subscribers, but his content also lacks a certain je ne sais quoi that Google calls "quality".

The impulse to tell people what the quality of their entertainment is strikes me as dystopian and deeply authoritarian. In addition, it's also patronizing - suppose I like watching Logan Paul videos, why should dear Mother Google decide they aren't good enough for me and decide that I should watch advertiser friendly content instead?


It seems a bit contradictory to me, though. Here we are on a forum whose entire premise is aggressive curation of "quality", talking about a video streaming platform considered by many here to be a vapid cesspool of inane nonsense and clickbait. That platform wants to improve the quality of what its users see, and the immediate reaction is to cry censorship and wonder whether "quality" even means anything, rather than to say "it's about time."

And if you like Logan Paul videos, you're probably already subscribed to his channel. Be sure to click the bell to be notified. There's also a search bar you can type "Logan Paul" into. He's not quality, though.

Assuming Youtube isn't secretly pushing some left-wing agenda of propaganda and mind-control (I don't believe they are, but a lot of people do) and that their goals are as stated, then I don't see how adding a "quality" metric to all of their other metrics is a bad thing.


Everything has a place. I might find it odd if the technical library where I work started carrying the National Enquirer. Yet, that publication is perfectly appropriate where people expect entertainment magazines.

YouTube was intended as a place for people to share their videos. Logan Paul and others who create and share videos are using YouTube for its intended purpose and it's perverse for YouTube to try and figure out how to stop them from doing what their service was built to enable.

I think there should be a place for high quality curated content and a place for people to share their videos regardless of content.

YouTube is, of course, free to make whatever curation decisions they like just like I'm free to complain about them. This would be like if hacker news claimed to be a place for tech news but also banned all submissions from authors or views they didn't like, even if that content was high quality tech news.


The latter puts the users in control of what is trending. The former lets YouTube manipulate the content for arbitrary purposes.


Youtube was always manipulating the platform for arbitrary purposes (and sometimes the content, for copyright purposes.) Users have never been in control of anything.

see: the Youtube "Adpocalypse" and the various other changes to their algorithm that destroyed the profitability of entire genres of content (such as animation.)


Enforcement of copyright is following the law not manipulation. I'm sure their automated takedowns aren't perfect, but that's due to technical limitations.

This rewarding of "quality" content is straight up picking winners and losers. But you're right, it's not like YouTube hasn't done this before. Plenty of very rapidly viewed videos (e.g. PewDiePie's "Congratulations" and rewind videos) mysteriously didn't appear on trending.


They do sometimes manipulate the audio channel by muting it because of a copyright claim, that's what I was referring to.


People can agree on the definition of 'ad revenue'.


And perhaps more importantly revenue can come from anybody regardless of views - money has no smell. “Quality” will easily turn into “whatever the site maintainers or majority consider good”.

This being said I pretty much agree that so far no big tech company found the recipe for properly and “fairly” curating content.


The problem is that "quality" is going to be enforced by the same algorithms that mark whitenoise as a DMCA copyright violation.


both are pretty bad options. Better to let users curate and allow popularity to dictate visibility, surely?


User curation and popularity are, arguably, just as much forms of censorship as any other, although a "trusted user" curation system similar to Steam might be interesting. The only truly unbiased system would be random.


that only really holds if you consider "not promoting" to be a form of censorship - at the end of the day, as long as they're not actively hiding content that YouTube finds to not be "quality" (while still being reasonable content), it can still be found by someone looking for the subject. I don't see an issue with that - there will always be limited space for "promoted" content.


Then YouTube risks things being popular that aren't aligned with its collective political views.


That's already the case, libertarian and conservative content is more popular than progressive content on YouTube.


the latter is demand driven


One could argue that the former is as well, given the platform's low reputation for quality, and the bad press it's received for allowing extremist and algorithm-generated content. Let's not pretend Youtube is pushing this entirely down the public's throat against their will.


The internet has operated that way forever. Most hosts haven't allowed porn since the beginning of time. You're still free to make your own video hosting website if you'd like. Liveleak still exists.


> You're still free to make your own X

It's becoming plainly clear that isn't the case.

Liveleak was recently banned in Australia and NZ.

Gab extension Dissenter has been banned from both chrome and Firefox app stores despite it self-censoring far more than twitter.

Paypal, Stripe and Patreon all deplatform people based on words they have said.

I don't agree in the slightest with conservative or far right thought but this direction is scary if it continues. Once they have "fixed" this problem, who is next?


Any content creator takes certain risks when they choose to be a sharecropper on someone else's platform.

Websites being banned in Country X have no impact on my ability to spin up a home web server and publish whatever I want. If I reach a certain volume of users, and let the site become host to people fomenting hate or violence, there's a chance someone might look to take the site down. Depending on the type of website you run, that may be the cost of doing business.

If you think you might be a target for that kind of action, your game plan needs to have fallbacks:

- take personal cheques, gift cards or crypto instead of Patreon/Paypal/Stripe.

- Spend ~2 hours of your time learning how to setup a home server w/ a Raspberry Pi or an old laptop.

- Start transitioning your users towards the decentralized P2P web (Beaker Browser being an example)


Sharecropper is a great parallel. A sharecropper has no right to buy his own land and is forced into a pseudo-slavery, getting all of the burdens and none of the rights to his labor. Sounds unethical.


that's a separate conversation. Youtube promoting or pushing down content based on their idea of "quality" is the concern here.


>You're still free to make your own video hosting website if you'd like.

Websites like Gab started going down that path. Suddenly it becomes "you're free to make your own payment processor" and then it'll be "you're free to make your own bank". Do you think it'll stop there or do we need to make our own country too just to run a website that's not interfered with because they allow opinions some people don't like?


[flagged]


That's the problem: people that have nothing to do with nazis are labeled nazis and then get deplatformed.

How many damn nazis do you think there are that we need entire services demonetized because of them?


Two good ways to make a dystopia: 1) by trying to create a utopia, which always ends up in dystopia, or 2) by removing all rules and concept of society.

Maybe we can unwind the dystopian internet by adding a little of the real world responsibility back onto corporations that have been running wild without any rules “because it’s the net”.

YouTube, Facebook, etc., which are all behaving as publishers but without the responsibility real-world publishers have, are now facing that reality. It appears they are having to choose between taking on the responsibility of being a publisher, or doing away with that by becoming just a platform. At the moment they seem to be trying to hold onto their publisher behavior while doing as little as they can.

In response everyone cries foul. Instead you should be arguing that YouTube must drop its publisher behavior and design aspects and reorganize around being a pure video hosting platform, if that's what you want. But not both.


I'm starting to feel less and less concerned. The people are starting to revolt. All the YouTubers I follow are migrating to alternative platforms for both hosting and $. Even PewDiePie, long the biggest YouTuber, has put his weight behind another platform and will only be doing his live streams on DLive, a streaming platform with a cryptocurrency tied to it. Very few are totally abandoning YouTube, but I think the writing is on the wall. Everyone is trying to figure out a different path forward.


"I'm starting to feel less and less concerned. The people are starting to revolt. All the YouTubers I follow are migrating to alternative platforms for both hosting and $."

Several streamers I watch have moved, or at least started posting their work to an alternative site. I've followed them to avoid Google/Youtube whenever possible.

Sorely needed competition has emerged.


It’s very clearly a scenario where money drives the narrative. They can literally lose advertisers overnight because of a single rogue user tainting the ad dollar pool.

Business wise it makes absolute sense.


Youtube is a brand like any other, like a TV channel.

If they want to have a theme or specific content ... that's their call.


I don’t think based on recent history there’s any real risk of people not finding a community that shares their crazy ideas.


I share some of your concerns, but we should take care that we don't simply condemn any attempt at promoting mainstream content as 'censorship' and any attempt to promote highly specific content as 'bubbles'.

If both are dystopian nightmares we're in deep trouble.


It never was.

Silicon Valley was/is filled with smart people who knew how to get on top.

They realized that using ideals to sell their cold hard reality was a great way to get useful minions to put their hearts and souls into projects.

The useful minions came and helped, and are now SHOCKED that the power hungry people they supported would do things related to power.

Most people can't even accept the open fact that the intelligence community helped fund many of these projects (see In-Q-Tel and others)

Now everyone is 'shocked' at them gathering data and sending it off to intelligence organizations. They could have 'never seen this coming' and 'how oh how did this happen?' - The only way those questions are asked is through massive indoctrination and doublethink.


I don't understand the thought process that a content host has an inherent obligation to equally promote all the content hosted on their service.

It's a very anti-consumer and anti-capitalism position, and I hear it most often from people that in other contexts are very pro-consumer and pro-capitalism.


Advertisers and their demands are cancer.


Like comments here the others simply disagree with?

Four of us disagree? Your words are invisible! Truth be damned.


It’s not hidden away, I just don’t think most people care for it.


At this point, they should just be regulated by the various world governments. Though I don't like a world where Google can override a given country's laws, or where Google tries to solve the issue for the union of all the world's governments' laws. I also don't like the possibility of hidden metrics driving things that have such huge social implications.


Exactly. If things keep going at this rate Google and Facebook will be shaping and distorting reality for a large portion of the population.

If 95% of your internet traffic goes through a search box or algorithm... And you spend 6 hours a day on the internet... And you're not savvy to these issues (as our readers are)... That's almost 50% of waking life spent consuming manufactured data which your brain (on some level) interprets as "real" without question. Terrifying.


I think 2019 is when this attitude started to get more mainstream within tech. I’m not opposed to it but I hope people understand that following local laws means that companies may end up enforcing laws from other cultures.


In theory this sounds a little better. Obviously there is some distance to cover in implementing it in reality.


On a side note, the creators on YouTube ought to [at least informally] unionize. There's no reason Adam Neely should be getting copyright "strikes" for explaining music to the masses, or Cody's Lab should be forced to take down videos on mining with explosives [on his family's ranch]. YouTube needs to have a hotline for problem solving for their most profitable content providers, and until the creators collectively hit the pocketbook, they'll face these disruptive yet easily solvable problems.


As a tangent to your tangent, seeing a name like Adam Neely, whose videos I've been watching all week even though I know nothing about jazz or playing music, really strikes me as the magic of YouTube. It's being able to watch talents you'd never normally be interested in, but who prove to be genuinely entertaining, even if you can't tell a quarter note from a crotchet, and it's amazing to realize these people are gaining fame in such a huge market, despite being incredibly niche.


> whose videos I've been watching all week, even though I know nothing about jazz or playing music, really strikes me as the magic of YouTube

100% agree about that being the magic of YouTube, and it's something I feel that most people just don't get. There is so much amazing and incredibly professionally produced content, as good or far better than what you get on TV, on YouTube that simply would never, ever have made it to TV or movies.

And not just quality content, but someone in a third world country with a relatively tiny budget, can capture a huge audience which can show the masses what "others"[1] are really like. I happen to believe YouTube could go a long way to healing some of the growing divisions we're experiencing in modern society. Well, that is if you could actually get individual people to develop a little curiosity, and get YouTube to put a little genuine effort into discoverability and curation of videos, and get politicians to not want to keep people dumb and divided. So it will probably remain yet another lost opportunity to make the world a better place, like so many other opportunities we've squandered.

[1] http://www.otheringandbelonging.org/the-problem-of-othering/


MCN's are basically the private version of this.

Content creators join MCN's "Multi-Channel Networks" with the idea that the MCN will handle some of the more administrative and legal tasks while the content creators focus on content.

MCN's try to help with the copyright striking and often do have a direct line to youtube due to their size and importance.


Except MCNs sit as a middleman to you getting paid. The Defy Media debacle taught everyone a hard lesson.

For those unaware, Defy Media went bust and had taken out substantial loans to operate. A large number of big creators are still unpaid while Defy Media's creditors carve up the company's assets.

All those creators are going to get left unpaid.


Fair enough, but to bring this case up without mentioning mandatory union dues and the costs of unionization feels disingenuous.

Unions very much are a middle man between labor and management, and the collective bargaining agreement that this potential Creators Union would sign would hamstring a creators freedom in this space pretty significantly as well.

Is what it is, both choices have positives and negatives from my perspective.


Yeah, but at least unions don't take your entire paycheck up front before giving you your cut.


I haven't paid attention to this in a few years, but haven't MCNs gone through a pretty bad stretch? I seem to recall a lot of people leaving theirs (sometimes acrimoniously) because they weren't handling those administrative/copyright issues.


YouTubers won't unionize. There's no way that's going to happen after what MCNs did. People aren't going to trust new organizations coming along promising the same things.


This is an interesting idea. The nature of youtubers' work parallels that of Hollywood actors; someone might want to organize a Screen Actors Guild for youtube creators.


So wait, they're trying to be Vimeo now?


How is designed responsibility not censorship?


Yeah, we'll see how well that goes. I can't help but suspect that YouTube's definition of "quality" and mine are not quite the same.


YouTube's (and Google's) management has been very openly left-leaning, so it's not a stretch to imagine that "quality" as defined by their editors will also be left-leaning or at least not right-leaning... so, yeah.


From my perspective "quality" is unrelated to political bent.

