Alex Stamos: Asking tech companies to police hate speech is “a dangerous path” (technologyreview.com)
154 points by mkm416 on Oct 23, 2018 | 264 comments



"Tech companies" is just way too broad a designation to use here. No one is seriously asking Apple to police hate speech in iMessage or Facetime, for instance, or Verizon to police hate speech in SMS. No one expects AT&T to police hate speech in a phone call.

What people are concerned about are the newsfeeds and timelines, specifically. Companies like Facebook and Twitter and YouTube love to pretend that their newsfeed/timeline products are just like chat apps or phone calls--neutral messaging platforms.

They're not. And the specific reason they are not, is the algorithmic timeline and content suggestions.

It's silly to worry about giving these products "the power to determine what people can—and can’t—say online." They've already seized it for themselves--by deciding for me which content will show up in my newsfeed/timeline/suggested list. They decide which content gets promoted to me.

Yes they use an algorithm to do so, instead of human decisions. But guess who built the algorithm?

Companies that run algorithmic newsfeeds and timelines need to own their role as a publisher and a gatekeeper of content.

Instead of pretending they don't make choices, they should be introspective and thoughtful about the criteria they are using to make those choices. "Engagement" is not a neutral criterion, because emotions are not symmetrical. Engagement is higher on topics of fear, anger, rage, violence. That's down to our evolution; that's down to the amygdala.

So if you build a publishing system designed solely to maximize engagement, it's going to become a system that preferentially serves content that feeds negative emotions. There are articles and case studies where a person starts with a fresh account and sees what kind of content gets pushed to them; inevitably they get horrible conspiracy theories and fear-oriented content.
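
To make that concrete, here is a toy sketch (purely illustrative; the posts, scores, and function are all invented, not any company's real ranking code) of how an engagement-only ranking rule behaves:

    # Toy illustration only: invented posts and scores, nobody's real code.
    # The sort rule looks "neutral", but if fear and outrage reliably score
    # higher on predicted engagement, they reliably rise to the top.
    posts = [
        {"text": "Local bake sale raises $400", "predicted_engagement": 0.02},
        {"text": "THEY are coming for your children", "predicted_engagement": 0.31},
        {"text": "New park opens downtown", "predicted_engagement": 0.05},
    ]

    def rank_feed(posts):
        # Engagement is the only criterion; no human editor in the loop,
        # but a human-designed choice all the same.
        return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

    for post in rank_feed(posts):
        print(post["text"])  # the fear-bait post prints first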

Making decisions about what content your audience sees is an act of publishing, even if it's executed via complex algorithm. The companies doing this need to accept their responsibility for what they decide to serve and promote.


> They're not. And the specific reason they are not, is the algorithmic timeline and content suggestions.

I think the more pertinent reason here is that these platforms have broadcast capability (immediate communication with many people) as opposed to p2p capability (traditional SMS or phone calls). Even if Twitter were strictly chronological, without any algorithmic mutation, we'd still presumably be insisting they police content, right? I agree with your conclusion that they're publishers, but to me, what makes a publisher a publisher is not content curation or mutation, but simply broadcast capability. And so our drive to regulate follows quite naturally from similar drives to regulate the press and media.


Not really.

One involves a neutral role, in which subscribed feeds are delivered to users without modification or filtering.

The other involves an active role on the part of the platform for any number of reasons: increased engagement, removal of voices that may cause perceived damage or lack of trust in the platform itself, or other, more ideological reasons.

I've noticed an intentional avoidance of distinction, lately, between active and passive behavior on a number of fronts, from sexual activity, to medical advice and intervention, to social media publishing. It's a pretty crucial component in ethical analysis that I suspect is being intentionally blurred.


The public wouldn't buy the 'neutral' aspect, as every mechanism that a platform provides is in some way biasing the type of content that is broadcast. Twitter's RT feature, which requires no curation or modification of actual content by Twitter, still biases content toward, perhaps, whatever is most divisive or simplistic. Broadcast technology, even without algorithmic bells and whistles, is already a biased technology. I think we might in fact agree, but to me, there is no crucial ethical difference between a chronological Twitter and an algorithmically mutated one, for in either case, the very platform itself (its existence, its design) pre-biases the type of content that will flow through it, and thus we'd end up with content that would make us consider policing it.

So, in your words, I would say, it is somehow fundamentally impossible for a technology to be "passive".


Retweeting doesn't bias content; it's rather the opposite: making it easy for people to share content they like removes a longstanding bias. The old models of broadcast media selected content to fit the biases of a few powerful media executives.

You may find the choices of the average person "divisive or simplistic" but the Retweet button doesn't dictate their choices.

"Policing" content, however, is all about dictating people's choices, motivated by the thought that the "police" know better than less powerful people, and reestablishing the biased filter controlled by the powerful (new-)media executives.


To an extent, but if they provide the information through a set of transparent protocols and mechanisms that users can comprehend, they are remaining as neutral and passive as possible. And that's key: Twitter, at its core, started out as something extremely comprehensible, and grew less so as it began to paint over its transparent abstraction.


> No one is seriously asking Apple to police hate speech in iMessage or Facetime, for instance

Many people are demanding that Facebook do exactly this.


It’s even worse: many people are demanding Facebook monitor Messenger chats for abuses in Myanmar while at the same time not wanting Facebook to monitor Messenger chats. This is a perfect example of the constant catch-22s I see in people’s expectations of tech companies. You can’t expect Facebook to both not monitor chat messages AND prevent chat messages that do harm. You can have one or the other.


This is the doublethink that most people like to indulge in. The free speech I like must be protected at all costs, and the free speech I do not like should be handled via glib statements: 'free speech is not free reach', 'private companies are not obliged to give a platform to anyone', and so on.


> You can’t expect Facebook to both not monitor chat messages AND prevent chat messages that do harm.

Not monitor, or not use for advertising targeting?

I'm fine with spam/malware/virus prevention in my Messenger or Gmail. I'm not enormously comfortable getting an ad for baby clothes after I send a private message to someone that I'm pregnant.


Wasn't it WhatsApp and message forwarding that people had a problem with in Myanmar? Not p2p.


India, but yes. And again, not exactly p2p, because of large group chats (meaning 100+ people) that spread false alarms.


I was reading something about Myanmar[1], but I got it conflated with another article I can't find talking about WhatsApp message forwarding.

[1]https://www.nytimes.com/2018/10/15/technology/myanmar-facebo...


Can you please show one such instance of an impossible demand? I thought those were two very distinct groups of advocates.


> Instead of pretending they don't make choices, they should be introspective and thoughtful about the criteria they are using to make those choices.

They're stuck, however, because many of the people who bought into the myth of objective, non-biased algorithms have gone down the rabbit hole of the garbage recommended by those algorithms. To those users, attempts to cull the garbage will be interpreted as censorship.

Add to that there is no way to improve the recommendation systems (from an ethical standpoint of "improve") without hurting engagement.

Add to that HN's allergy to government regulation.

It's going to be quite a rollercoaster ride over the next few years. :)


I’m not entirely up to date, but I believe there are calls for regulating WhatsApp communications


Not one-on-one communications, as far as I've seen. However, there are some calls to limit group chats:

"There are good ideas floating around for how Facebook could make life harder on WhatsApp propaganda artists. In an op-ed published in the Times this week, Brazilian researchers Cristina Tardáguila, Fabrício Benevenuto and Pablo Ortellado offered three ideas: restrict the number of times a message can be forwarded from 20 to five, which Facebook has already done in India; dramatically lower the number of people that a user can send a single message to, from its current limit of 256; and limit the size of new groups created in the weeks leading up to an election, in the hopes that it will stop new viral misinformation mobs from forming." https://www.theverge.com/2018/10/19/17997516/facebook-electi...


In countries like India, people are asking for WhatsApp to censor content.

10-15 years ago, SMS forwarding and bulk SMS were marked as a problem. The Indian government put laws in place under which it can, and does, ask telecom operators to jam mobile or even internet signals in sensitive areas.

In which case, asking technology companies to toe the line is the next logical step.


>No one is seriously asking Apple to police hate speech in iMessage or Facetime

Facebook DOES censor Messenger.


Chats and phone calls aren't neutral in the same way as the content of a newsfeed or timeline. Who calls you or texts you is hardly a random population sample.


>Making decisions about what content your audience sees is an act of publishing, even if it's executed via complex algorithm.

This is true. And it is more difficult than most people want to even come close to dealing with. Let's say that you wave a wand and you have a bot that will 100% eliminate every racist claim ever made on the service, burying it and showing it to no one. You have just made it impossible to talk about racism. By killing the discussion, you will enlarge the groups of people who would never think to use a racial epithet, but who harbor deep convictions that different races have genetically-driven differences in capability and some need coddling. That is racism. But you can't even point that out, let alone discuss why it is factually wrong, on a sanitized platform.

On a sanitized Twitter, Megan Phelps-Roper and her sister would still be members of Westboro Baptist Church, picketing funerals and spewing vitriol. They might not be able to do it on Twitter, but they'd be doing it elsewhere. Because Twitter was NOT sanitized, and because it WAS possible to confront people with total refutation and challenge to their most closely-held beliefs, Megan Phelps-Roper was convinced that her own position was wrong and destructive. And because of that lack of censorship, that permission to offend and call out, Westboro Baptist has 2 fewer people working daily to hurt others. Anyone calling for a sanitized online platform is calling for the death of discussion, the death of social progress, and the death of any opportunity for the ignorant to learn.

In the 1960s, it was profane, disgusting, and obscene to suggest that interracial marriage should be allowed. It wasn't just 'a different opinion.' It was a view that made people sick, that riled up violence, that led to name-calling and hate. And it was only because the public forum was able to bear that hate, those insults, etc, that progress eventually happened.

Eric Schmidt in his book 'The New Digital Age' makes the argument that people like himself should take the reins and kill discussion so that he might make the decisions for society. Were he around in the '60s, he would be fighting to lock down discussions about interracial marriage. He, and many like him, see the public having heated discussions and roiling in conflict and conclude they are mindless and incapable of policing themselves. This is a view as old as time. It's Conservatism. The old kind. The kind that backed kings, pharaohs, chieftains, etc. The kind that said some people are simply Better and destined to lead, while others are Lesser and destined to follow. Don't be surprised, but many are comfortable accepting that role as a follower if it means less responsibility or need to think. That Conservatism died out near the end of the 18th century and through the 19th, but there is no reason it couldn't re-establish itself with a fresh coat of paint and maybe with the help of some automation.


Beyond that, they need to be paying people, micropayments of course, for the content. That way they truly are considered a publisher and they employ independent contractors to create content.


> "Tech companies" is just way too broad a designation to use here. No one is seriously asking Apple to police hate speech in iMessage or Facetime, for instance, or Verizon to police hate speech in SMS.

Sure they are. Microsoft is even monitoring their service for "bad words".

https://boston.cbslocal.com/2018/03/27/microsoft-ban-offensi...

> What people are concerned about are the newsfeeds and timelines, specifically.

No. That's a small part of what primarily the left want to censor.

> It's silly to worry about giving these products "the power to determine what people can—and can’t—say online." They've already seized it for themselves--by deciding for me which content will show up in my newsfeed/timeline/suggested list. They decide which content gets promoted to me.

Which you can choose to ignore or bypass.

> So if you build a publishing system designed solely to maximize engagement, it's going to become a system that preferentially serves content that feeds negative emotions.

Then why aren't you demanding CNN or the NYTimes be censored?

> The companies doing this need to accept their responsibility for what they decide to serve and promote.

They are. They are serving what their customers want.

The only people who are complaining about it are authoritarian and selfish individuals who want to control what people see and say. It's no different than a prude whining about the porn people watch.


> What people are concerned about are the newsfeeds and timelines, specifically. Companies like Facebook and Twitter and YouTube love to pretend that their newsfeed/timeline products are just like chat apps or phone calls--neutral messaging platforms.

> They're not. And the specific reason they are not, is the algorithmic timeline and content suggestions.

They're not because they're public, akin to broadcasting. One to many. In the past such channels have always been more or less under careful control. Public broadcast TV and radio were under the control of dogma and morality, and it wasn't feasible to make your own. Publishers could opt not to release a manuscript if it didn't fit their ideology.

Consider the following thought experiment: "Twitter and Facebook are exactly as popular as they are now, but they show everything only chronologically (latest on top). Do you reckon the control problem would be solved at that point?"

Now consider the following thought experiment: "Twitter and Facebook are only private 1:1 conversations. Do you reckon the control problem would be solved at that point?"

In example #2 (regardless of whether it's shown chronologically or via an algorithm) the communication -whatever it might be- only goes to one person, not the general public. This greatly limits the strength of propaganda (such as fake news or hate speech).

Also, remember that there are all kinds of biases [1], even when we're not aware of them, and we are prone to fall for them.

[1] It would be worth summing them all up, but I am by no means an expert on this subject. I'm currently reading the book "The Confidence Game" by Maria Konnikova and it explains several of them in detail.


It absolutely is not broadcast as long as an algorithm is selecting posts for you. Over-the-air TV is broadcast. The radio is broadcast. Those platforms do not selectively choose their audience to maximize engagement. There's no innocent one-to-many relationship here. Facebook actively gives extremist material to extremists because their robot thinks it will make them use the platform more. It's not the same message going to all subscribers. That is why they are responsible.


My point was that neither phenomenon is exactly new.

One-to-many relationships aren't innocent to begin with; they've always been under public scrutiny, a magnifying glass. Algorithms might make it easier to find what you seek (I can assure you they do not always as I've witnessed on Facebook, Google, Amazon, Netflix, Apple -- you name it).

Bubbles are also not new. If you were a Catholic in The Netherlands in 1950 or 1960, then you watched Catholic TV and listened to Catholic radio and went to a Catholic church on Sunday and listened to a Catholic preacher and a Catholic pope telling you what to think about atheism, abortion, HIV, anti-conception, marriage, homosexuality and what have you, and you went to a Catholic dance club. You came home with a Catholic partner of the opposite sex. Oh, and you went to a Catholic school. Protestants? They exist, somewhere, but not in your myopic world. [Full disclosure: I grew up as an atheist child of Protestant parents in a Catholic area.]

It's a matter of choosing your overlord(s)...


> Making decisions about what content your audience sees is an act of publishing, even if it's executed via complex algorithm. The companies doing this need to accept their responsibility for what they decide to serve and promote.

Repealing Digital Safe Harbour would be a good first step. If you are responsible for what people see, you are responsible for the content.


This strikes me as a timely and important sentiment.

When we demand that Twitter ban anti-semitic tweets, or that Cloudflare block white supremacist websites, or that Youtube deplatform Alex Jones, we are taking the power to limit speech (which the founders felt was too important to be wielded by the government) and handing it to middle managers at software companies. The de jure rule is "Freedom of Speech shall not be infringed" but the de facto rule is "Don't say anything that would upset the advertisers."

This seems like a Bad Idea (tm) but until/unless a decentralized Mastodon/Scuttlebutt style platform gets traction, I don't know what the solution is. It's a natural result of relying on private apps as a primary method of communication.


    > we are taking the power to limit speech (which the
    > founders felt was too important to be wielded by the 
    > government) 
Someone spray-paints a swastika on your car. Do you think the founders would mind if you painted it over?


My point was that American free speech is not based on the belief that all speech is good or that no speech is bad. It's based on the idea that there's no one we trust to distinguish the good speech from the bad speech. Pointing out examples of obviously-bad speech doesn't disprove this, because it's not the existence of bad speech that's in question, it's the exact location of the borderline separating good from bad.


It'd be more like Ford trying to ban people from putting Hillary or Trump bumper stickers on their cars. In your scenario, an individual had their property vandalized. That is not comparable to platforms censoring certain views.


    > That is not comparable to platforms 
    > censoring certain views.
Historically, it has been. If someone had sent a letter to The Pennsylvania Chronicle, containing a recipe for baking a turd pie, I don't think Ben Franklin would have felt the need to print it.

Facebook, Twitter, Google... they're the ones footing the bill to host their users' content.


The Pennsylvania Chronicle isn't a platform. It's a publisher. Readers' letters getting published is the exception, not the norm.

The ideas discussed here would be more like a telecoms provider specifically refusing to do business with someone because they disagree with their politics.


What's the difference? AFAICT the idea that when a website gets big enough it becomes de facto infrastructure and gets governed by different rules is pure imagination.


There are rules that treat telecoms differently precisely because there is opportunity for market failure.

The argument is that some software companies have crossed into becoming telecom-like entities. A market failure exists where consumers may need protecting.

Obviously, current laws don't treat Facebook, Google, or Microsoft that way.

Do we feel the same way about Gmail/Outlook starting to censor emails that Google/Microsoft don't approve of?


Do you think they would mind if robots owned and operated by Ford patrolled the city at night and painted over swastikas on neo-Nazi's cars?


Even Mastodon instances can get in on the banning.


I'm a lot more sanguine about communities muting someone because they don't like that person than about companies muting someone because it's profitable to do so.

You can hypothesize a future in which Mastodon gets very popular, and in which a single for-profit node or a coordinated group of such nodes monopolize it and end up wielding censorship power similar to traditional centralized social media sites, but that's not what it's designed to do and there's no reason to assume it's a fait accompli.



Yep.

In the end, even most of these hypothetical distributed social network instances would have the de facto rule:

>Don't say anything that would upset the advertisers...


What advertisers? I haven't seen any on mastodon. In fact, I think the network would be actively hostile to their presence.


We're talking about a world where distributed social network instances replace the facebooks and twitters of the world.

And be assured, in such a world, the advertisers would move to the distributed social network instances.


All I really want from a social network is to be able to share small files with my friends and see small files that my friends have shared with me. It's convenient to have a for-profit handle all the hard parts (checking whether people are who they say they are, storing the stuff when one of us is offline, etc) but it's not an actual requirement. Especially if some portion of the members of the network have a $5 VPS, which is certain to be true.

(As an aside, it mystifies me that some VPS provider hasn't already built this. A FOSS decentralized social network that requires a small dedicated server would be like a whalefall for that industry. If I were a PM at Amazon, I would have a team contributing 'store my files on AWS instead of my phone' to Scuttlebutt right now, and ditto for every other promising-looking decentralized social app.)


Putting aside that it's a slippery and malleable label to apply to undesirable speech, there is an elitism hiding in these ban-hate-speech arguments.

The core assumption is that while _I_ am able to see these vile ideas for the lies they are, the unsophisticated masses must not be allowed to hear them, lest they fall prey.

This is problematic in ways that used to be obvious to people in free societies, but for some reason seems lost now.


>This is problematic in ways that used to be obvious to people in free societies, but for some reason seems lost now.

I don't think this is true. I think some people always got it, and some people still do. The difference is that the internet allows anyone to post their opinions, but it used to be a lot harder to reach other people.

We still have people that we hold up on pedestals for saying the right things, and the people that we remember from the past are the ones that said things that were incredibly right, or incredibly wrong. We don't remember what every common Joe used to say daily.


This might be true.

When I was a kid, I saw a full hooded KKK march on TV, and I said "why do we let them march?" and my mom said "because you have to hear from people you don't want to hear from to know free speech is working"

I cannot fathom that this conversation would even be considered good parenting today.


We let them march because they are using public streets to do so. A KKK march through a shopping mall would be quickly broken up.

You don't have to allow the KKK to use your private property, and neither does anyone else.





I think you probably don't have kids.

I don't know many parents and children who would be sitting around watching TV with each other. Let alone watching a Klan march?

Way too many screens. Way too much choice. Way too much personalization. The scenario itself would only arise in VERY conservative or traditional families. Most other families just don't work like this.

Blame Netflix I guess?


I think you're missing the forest for the trees. It's about instilling the principles of the free marketplace of ideas - not just the specific medium involved.


I'm going to gently suggest that the proliferation of screens, the proliferation of choice, the proliferation of personalization, in and of themselves, demonstrate that the principles of the free marketplace of ideas are well understood. The kids just choose ideas that perhaps we would not choose. (Or in most cases, ideas that we would definitely not choose.) But this is the essence of the free market of ideas that you subscribe to. That children are not interested in the ideas that we are interested in does not mean that we need to go and make sure they consume our ideas instead. That's kind of the opposite of a free market.

So kids live in a free market, and they generally choose things we, as parents, don't like. This doesn't, always, make the kids wrong. And it doesn't mean that they are in need of our guidance to see things "correctly". (Which invariably seems to mean, "You're wrong kid, this is what you should believe." And then we wonder why they call us hypocrites when we at the same time talk about a "free market of ideas".)


You're still missing the above commenter's point.

> When I was a kid, I saw a full hooded KKK march on TV, and I said "why do we let them march?" and my mom said "because you have to hear from people you don't want to hear from to know free speech is working"

The point is, exposure of reprehensible ideas is part of free speech. The fact that we consume media differently today does not change this. Maybe the 21st century analogue is a kid asking their parents why Alex Jones is on YouTube (well, that example isn't possible anymore). Or why we let the KKK use Twitter. The medium is a detail, the point is explaining the value of free speech to kids.

The fact that kids may be interested in different ideas also has no relevance to this point. The situation here is that when kids ask why certain ideas aren't banned and suppressed, the answer is that free speech entails tolerating the existence of said ideas.


I think you're still not understanding the dynamic between children and parents these days.

Things are not how they were 30 years ago. Children are exposed to a myriad of information on a myriad of ideas in a myriad of ways every day of their lives. Here's the reality of being a parent today: your children will have set ideas about many, many topics long before they would ever speak to you about them. So the idea that a child would come out of their room, or come home from their friend's house, or from practice or whatever, and ask you about the weighty issues of the day, is fundamentally flawed. Children will Google it. Whatever more they need to know will come from their friends via Snapchat.

Here's the bad news for all the new parents out there: YOU will be the last person they will ask about anything like that. And they will attach the least importance to your opinion. (And they will attach no importance to your opinion if your opinion deviates from information on Google or Wikipedia.)

Does this mean you will have no impact on your child's development of ideas? No. But it does mean that you have to set your expectations reasonably. Combative and argumentative parent-child relationships, in my own opinion, usually arise due to parents not having a realistic set of expectations in this regard.

As a parent, you have to adapt to this new reality. I'm not going to tell people how to parent in the new reality, or even in the old reality. But I will say this: launching into a lecture about the importance of free speech is a REALLY good way to lose the room when dealing with kids. (A lecture-sounding speech on anything is a good way to lose the room.)

Or maybe lectures do work for some kids. And maybe there are some kids out there who talk to their parents about these things instead of simply googling them. But my last kid is going to college next year, and that was never my experience.

As I said initially, I really don't think that the scenario that you are envisioning, would happen very often in today's America. Kids today will already have these sorts of ideas set long before you even think to talk with them about it.

(And if that's too much for prospective parents to think about, I won't even let you in on the fun that awaits with respect to the subject of sex, or drugs.)


For any new or hopeful parents sweating this description, I do in fact have kids (the oldest is a teenager) and my experience with them is pretty different from all of this.


The issue is that the wealthy and also foreign adversaries are exploiting the algorithms to amplify speech that serves their interests. That typically does not serve the interests of average people.

The issue is how to avoid exploitation and manipulation.

When the KKK marched several decades ago, it got coverage in newspapers and media proportional to its influence in society. Today, the wealthy and foreign opponents can weaponize hate speech like this to fan flames of division for their own purposes. That is the problem.


So, one common thing I see: I don't think left-leaning people right now realize how similar to the extreme fringes of the right they sound.

Your first two paragraphs would be a huge hit on /pol/ right up until you got to the point of resolving which adversaries and interests you're talking about.

I don't know where it takes us when an authoritarian, silencing approach is what both sides agree on, and they just haggle over where to point it.


"Interests of nations" is a very dangerous phrase that justifies horrifying things like overthrowing a democratically elected government for the sake of resources or because it reduces the price of an export by 10%. I believe the proper iconoclastic quote is "States don't have rights - they have limits on powers."

Preventing manipulation and exploitation and doing them are really completely identical from a practical standpoint. Even nobility preserving woods was itself a form of exploitation since remaining untouched was a divergence from the status quo.


Personally, I think it's probably less likely that the world has somehow lost comprehension of an entire "obvious" concept, and more likely that people would have always reacted more or less as they do now, and what has increased is the quantity, extremity, and sincerity of hateful expression (in general, not just legally defined hate speech), partially because some people are becoming more loudly hateful, and partially (mostly, I hope?) because mass communication is so dramatically different now.


> Putting aside that it's a slippery and malleable label to apply to undesirable speech, there is an elitism hiding in these ban-hate-speech arguments.

It's also about control. The biggest whiners about "hate speech" are the media and news companies. Because they want to control what people see and hear. They don't care about hate speech, they just want to be able to control which hate speech the masses get to see.

It's patronizing, it's paternalistic and it's also puritanically authoritarian. It's evil.


The essence of the article, IMO: "If democratic countries make tech firms impose limits on free speech, so will autocratic ones"

Free speech is what defines a democratic country.

Terms like 'Hate Speech' and 'Fake News' are buzz-phrase distractions that get in the way of the core of this reality. We already have a legal system in place that defines libel, threats, etc. We don't need a new layer of corporate jurisdiction over our ability to speak online, or monitoring of what we can or can't say.


Cockroaches thrive in dark corners. Sunlight is antiseptic. And forcing ideas/speech into those dark corners doesn't keep them from growing; it merely allows them to grow unobserved and un-countered.

Ideas are the only counter to other ideas, and how we communicate those ideas is via speech. Suppression only invites martyrdom on behalf of those suppressed, increasing their credibility.


Ideas aren't cockroaches, they're plants. And sunlight helps poison ivy grow just as much as it helps tomatoes and flowers.

There's no plausible mechanism by which talking about an idea more and more, makes it disappear. That's all an idea is: something to talk about.

Now, if you want to argue that no ideas should ever disappear, go ahead, but don't cloak it in "antiseptic" language. Antiseptic kills things. If you're talking about antiseptic for ideas, you're talking about killing ideas.


That's essentially John Stuart Mill's argument in On Liberty. Suppression of bad speech and thought only leads to a tyranny of the majority and can eventually lead to us forgetting what is bad versus good, just as we can't tell what darkness is if we only live in the sun.


Google/Facebook/Twitter et al. should just remove the algorithmic approach to timelines, stop allowing them to be gamed/bought, and go straight-up user-wall, time-based ordering.

The algorithm will always favor manipulation...

honestly, I just wish users would go back to small communities and indie publishers... It's easy for me to find a music forum or Jeep forum with people I want to hang out with, but on Facebook those that can game/manipulate get the most views and it's ALWAYS selling controversy...
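
In code terms the proposed change is tiny; a purely illustrative sketch (no real API here, just the shape of the two rules):

    # Purely illustrative. A chronological feed is a rule anyone can verify;
    # an engagement-ranked feed depends on an opaque, gameable score.
    def chronological_feed(posts):
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)  # newest first

    def engagement_feed(posts):
        return sorted(posts, key=lambda p: p["score"], reverse=True)  # highest score wins

    posts = [
        {"text": "older, high-engagement post", "timestamp": 100, "score": 0.9},
        {"text": "newer, quiet post", "timestamp": 200, "score": 0.1},
    ]
    print(chronological_feed(posts)[0]["text"])  # "newer, quiet post"
    print(engagement_feed(posts)[0]["text"])     # "older, high-engagement post"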


There is evidence that deplatforming works (https://motherboard.vice.com/en_us/article/bjbp9d/do-social-...)


For some definition of "works" since the analysis doesn't consider unintended effects or potential adverse long-term consequences. This all seems joyful while the technique is being applied to those with whom we disagree, less so once the technique has been turned on you yourself.

Defending free speech means defending those we disagree with, and maybe even hate.


I think it's pretty safe to say that child porn would run a lot more rampant if it could be easily shared without consequence?

I understand that there's a different level of consequence we're talking about here (legal vs comment deletion), but how we treat child porn is the extreme of intolerance, and I think it's safe to say child porn is less prevalent than it would be in the more "tolerant" alternative. And I actually do mean think; I don't have evidence, and I don't know, so if there's evidence otherwise, that some society accepted child porn with the caveat of "this is bad" always being placed beside it and then saw its usage and distribution drop, I would find that a pretty damn compelling case.

I would point to other comparables though. What causes a decrease in smoking? Did putting health warning labels on cigarette boxes decrease smoking? How did that compare to banning commercials and removing indoor smoking?


Ideological convictions are not the same as smoking or child porn. Proving that smoking is wrong is easy. Proving that an entire ideology is wrong is not even possible, because "right" and "wrong" in this sense are subjective - they are relative to a specific person with specific interests. You might think "well obviously 'right' is whatever is best for the most people", but should you really always prioritize the interests of the group as a whole over its constituent parts? For example, should you suppress Tibetan/black/white/Uyghur/etc nationalism because secession movements are against the interests of their host nations? Your condescension is mind-boggling; putting a system of ideas on the same level as satisfying base desires in an antisocial way is ridiculous.


Deplatforming is about deciding which ideas are granted resources (pushed above the cultural baseline), not who is punished for speaking (pushed below the cultural baseline).

People without platforms are still free to speak. If dang banned me from HN I could still go stand on the corner and read my posts aloud and no one would arrest me.

Free speech does not mean I'm entitled to someone else's platform.


You're changing the argument, though (in fact, you're completely inverting it). The discussion _here_ is whether or not tech companies should be forced to police hate speech. You're trying to conflate that into whether or not they should be allowed to.


This sub-thread is about whether de-platforming (voluntary or otherwise) works to limit objectionable content. That's what the Motherboard article above me is about.


I have to admit it's pretty funny that I'm getting downvoted to gray.

Am I being censored or deplatformed? ;-)

I'll see myself to the corner...


Free speech does not mean I'm entitled to someone else's platform.

But if a platform wants to be a "common carrier" and not a publisher with all the responsibilities thereof, it doesn't get to make the distinction, so actually in a very real sense, you are entitled to use the platform of anyone who wants to be a common carrier, by definition.

If you send something controversial via USPS, they don't have any liability for it, and only in very exceptional circumstances would it be intercepted in transit. If they were responsible for everything they carried, the postal service would look very, very different. If it's illegal you can be busted at either end but the postal service itself doesn't care.


> Free speech does not mean I'm entitled to someone else's platform.

You're thinking of the first amendment, there is a nuanced difference between that and the general principle of free speech.

I'll assume you happened to not be aware of this, but there are lots of people online who refuse to acknowledge that difference and continue to spout the "entitled to someone else's platform" persuasion meme, which is kind of what this whole discussion is about: power, or altering the course of future events. If one's ideas & principles are sound, disingenuously censoring opposing ideas shouldn't be necessary. I believe many pro-censorship people know this explicitly (but would never speak it out loud) and others "sense" it subconsciously.


(Edit: I agree with everything else you're saying.)

Is the difference between the First Amendment and the moral principle of free speech really that nuanced?

It seems to me one has to be thinking strictly in terms of a single country, a single period of human history, a single document in order to conflate the First Amendment with the moral principle of freedom of speech. This is a very narrow view.


> Is the difference between the First Amendment and the moral principle of free speech really that nuanced?

It depends if you're trying to win elections or internet arguments, or to ensure a healthy marketplace of diverse ideas exists, because that is the best way we know of to find the best solutions to problems. Twitter, Facebook, YouTube are where modern people "congregate" and get their "information". Cutting someone off of these platforms dramatically lowers the chances that anyone will hear them, that's the entire point of the action. Claims that dismiss this as no problem "because someone else's platform" are not just saying nothing illegal has happened, they are also essentially saying a free marketplace of ideas is not valuable. This is a very new development in western countries, and it's pretty easy to see how far gone most people are already, even on more intelligent forums like this. Partisan politics trump almost everything on certain topics, even one as important as this.


> though it may have some unintended consequences that have not been fully understood yet.

I think no one disagrees that it "works" insofar as it stops the bad person from getting their bad ideas out there. Opponents of deplatforming generally argue that the long-term reaction to the deplatforming is worse than the problem the bad person's ideas were causing. Better to counter the bad ideas with good ideas to do long term good.


"Opponents of deplatforming generally argue that the long-term reaction to the deplatforming is worse than the problem the bad person's ideas were causing."

Not just that the reaction is worse, that the deplatforming erodes the spirit of free speech. I believe free speech is an important part of democracy.


Even apart from the Streisand effect, conspiring to suppress crappy ideas gives them undeserved credibility. "They didn't want you to hear this."


That's a reasonable concern and may even be true for some instances, but in the case of Milo Yiannopoulos for example, he practically just went away. Milo himself says he spent all of his savings and lost his friends. Even his most ardent fans stopped speaking out about him.

Nobody is listening to his ideas enough to give them undeserved credibility.


How do you define "bad" person? Ungood? Doubleplus ungood?


The last one. Doubleplus ungood.


Has Alex Jones stopped his podcast? Have his sites shut down? Have the people who believe what he believes changed their minds? Sure, fewer random people are exposed to him, but so what? They'll find someone else to follow who says the things they want to hear.

I suppose it depends on what your goal with deplatforming is. If it's convincing people that the ideas of people like Alex Jones and organisations like Black Lives Matter (they're in the article as having been similarly affected) are wrong or hateful, then I think it will fail.


I would not consider that "evidence that deplatforming works", in the context of the parent comment.


I don't think this sort of analysis can measure the impact of deplatforming on people's opinions. While deplatforming may suppress the open dissemination of hateful ideas, it's erroneous to assume these ideas become less prevalent as a consequence. This has happened in my workplace, for one. After the company made it clear it would not tolerate things like Damore's memo and Stuart Reges' article on women in computing, I and other people who agreed (or at least, aren't actively averse to those views) shut up, and joined the chorus of people condemning them. But in private, it didn't change our views. If anything, it's made me even more skeptical of that sort of leftism.

Consider the fact that deplatforming (primarily of right-wing speakers) in the US became significantly more prevalent around 2014 and 2015. It didn't help in the 2016 election.


Deplatforming works until new "free speech" platforms like Gab arise to replace the old.


Not without the old taking measures to keep that from happening.

Gab itself has been deplatformed to a degree: Google and Apple banned its mobile apps, and its domain registrar threatened to yank its domain over certain user posts.


Only if you assume that everything that a given person says is bad/unwelcome. But, people aren't like that, and hold all kinds of different views on different subjects, some of which they may have extensive background in, and others that they are just pulling out of their nether regions and are subject to all kinds of biases.

For example, despite what anyone thinks about what Trump says (I think he's an idiot and a clown), he does occasionally say something that isn't complete BS (please don't make me try to find such an example).

You just can't go down that route and expect a good outcome. In fact, I would argue that you're going to end up with the opposite result that you intended because you'll, inadvertently, end up giving more attention to idiotic ideas than the ideas actually deserve.


> Ideas are the only counter to other ideas and how we communicate those idea is via speech.

A lot of holders of bad ideas have no interest in rational debate. For those people, sunlight is an energy source, not an antiseptic.

When ISIS posts a video of a beheading, there isn't a lot of reasonable discussion going on in the comment section.


Those aren't just ideas- those are vicious actions. I don't think giving access to those we are actively at physical odds with is within bounds of debating ideas.

I doubt the founding fathers were inviting Quislings to their debating societies during the revolutionary war.


Well, if you don't have a problem with ostracizing groups like ISIS or white supremacists, then maybe we don't disagree that much.


Are we at war with white supremacists? ANTIFA? Venezuela?

Equating ISIS with supremacists is equating a nuclear bomb with an M80 firecracker.


Well that brings up an interesting question: is the US at war with ISIS? I know Congress authorized action against the Taliban and Al Qaeda and maybe that's all that's needed to go after other groups.


>Cockroaches thrive in the dark corners

Viruses thrive through exposure to new hosts.

See, you can prove anything using an analogy.


I like to think of Facebook-type algorithms as amplification, not suppression. It's not what content gets deranked, it's what content gets promoted to the top.

The question is, should "bad speech" be amplified, promoted, propagated, broadcast, surfaced, and repeated, ON PURPOSE, just so it can get rebuked, debunked, dismissed, and exposed?

(I agree with you, the answer is closer to "The Remedy to Bad Speech is More Speech ... Marketplace of Ideas". However, the question being discussed is more akin to "There's a limited amount of space on the front page, and people have limited amounts of attention to give: WHO gets the megaphone, and for how long?". It's the inverse of "the robust debate principle recognizes that sometimes in a crowd of speakers it is necessary to turn down the volume of certain loud and clamorous speakers in order to give others a chance to speak." Facebook and the algorithm DO DECIDE who to turn the volume up on, who to promote to the top. They already aren't neutral; they already exhibit preference and bias for certain ideas.)

Others are arguing that this is flipping the argument, BUT we are talking about algorithmic placement more so than true censorship. If someone is allowed to post something but it NEVER makes it into someone else's newsfeed, is it as good as censored?


The primary issue with this argument is the imbalance of the situation. There is an enormous difference in the amount of effort it takes to make a false claim (that sounds nice enough for people to believe without evidence) and the amount of effort it takes to refute said claim, especially if the refutation hinges upon mechanics that most people do not understand. In the unlikely scenario that most of the people who listened to both arguments happened to accept and understand the right one, there is still another huge hurdle. The fact is, the short, wrong claim is optimized for remembering and regurgitation. It is easier to remember that than to remember the complicated refutation, or even remembering that there was a refutation. Even in the ideal scenario where all other parties accept and remember the good argument, the person espousing the false claim can simply leave and continue to make the claim. When also taking into consideration the fact that most people do not have the knowledge and eloquence to make the good argument in the first place, it seems intuitive that the bad argument would still easily make its rounds.


Bleach is a better antiseptic than sunlight.

There's some evidence deplatforming works:

https://motherboard.vice.com/en_us/article/bjbp9d/do-social-...

> “We’ve been running a research project over last year, and when someone relatively famous gets no platformed by Facebook or Twitter or YouTube, there's an initial flashpoint, where some of their audience will move with them” Joan Donovan, Data and Society’s platform accountability research lead, told me on the phone, “but generally the falloff is pretty significant and they don’t gain the same amplification power they had prior to the moment they were taken off these bigger platforms.”

> There’s not a ton of research on this, but the work that has been done so far is promising. A study published by researchers at Georgia Tech last year found that banning the platform's most toxic subreddits resulted in less hate speech elsewhere on the site, and especially from the people who were active on those subreddits.

https://mashable.com/article/milo-yiannopoulos-deplatforming...


I wish this were true. Sadly, evidence may not bear this out. If it did, then e.g. YouTube wouldn't be in a constant battle to take down ISIS recruiting content...


Not only is this analogy bad (sunlight helps plenty of bad things grow, things even detrimental to our survival as a species), but it assumes that you're dealing with rational actors. Stop naively assuming this. It's a bad assumption. There are people out there who will argue in bad faith and have no interest in reasoning or truth. And there are people who will fall victim to those bad ideas. Religion, terrorist factions, general cults, hell we can barely keep our own actions in check while we actively harm the environment.

The idea that if you just say the right, True (TM) thing then people will flock to it is not only naive, it's obviously wrong if you look at a layman's perspective on basically any subject. It's also just a waste of time for the subjects typically in question. The communication of ideas has changed. It's a chaotic free-for-all. We've overdone it. It's time to have a serious and reasonable conversation about the current state we find ourselves in, lest we shoot ourselves in the foot with blind, headstrong optimism about ideals that don't match the reality of human nature.

In before someone references 1984 blindly and displays the ever-popular dystopia-prediction fetish that's so prevalent in these conversations.


I think the onus is on those who would silence debate to prove the immediate harm they are advocating isn't the worse alternative. There is a reason why prior restraint on speech is almost impossible to compel in American courts.

I find it curious that those who apparently are in such a rush to suppress Bad (TM) ideas adopt the worst ideas of fascists in order to do so.


>I think the onus is on those who would silence debate to prove the immediate harm they are advocating isn't the worse alternative.

We've always relied on editorial control for the most part in all of our mediums to make sure that the information being disseminated is reasonably accurate and fit for public consumption. It's not a fascist idea and has absolutely nothing to do with fascism or any type of propaganda for any political party. There's no need to be so absurdly hyperbolic over what is quite frankly, a common sense mechanism when dealing with the proliferation of ideas.

We know that disinformation spreads faster than the truth on several new internet media platforms these days. The onus is on the people who would so eagerly disregard common-sense filters that have been time-tested to prove that the danger and harm currently being caused by this new wave of disinformation in every single subject matter will be worth it now, tomorrow, and for the foreseeable future. Take the following:

* Vaccines, health.

* Environment

* Economics

* General politics.

In which subject are the ideals you're espousing helping us? Because in each of those subjects I can point you towards real-life, damaging consequences that have come about because of unfettered Bad (TM) ideas that spread over modern mediums.

This isn't a theoretical problem. It's happening right now, today.


How does this argument differ from the following one:

(setup: 1950, but we have facebook/twitter/youtube/instagram)

Most of the population thinks that people promoting same-sex relationships are just hell-bent on destroying good and wholesome America, and demands that the leaders of the LGBT movement of the time be deplatformed from twitter/facebook/youtube/etc.


How is the argument the same? You can't just ask me to do all the work for you while you just presume your statement is factual with no supporting evidence. Though I've gone ahead and responded because I feel very passionate about this issue.

To answer the heart of your point: any kind of editorial/screening system isn't perfect. We'll get things wrong. We always have, always will. It's still better than what's happening right now on these mediums where we know some portion of the population are "getting things wrong" and we're just allowing it to happen in a free fall fashion. "Do nothing" is never the answer, never has been and probably never will be.

To address your specific example: It's not based on what most of the population thinks. Qualified opinions are a thing. We can talk about what makes a qualified opinion, with your example or any other subject. Notably we see these types of opinions in your example gaining momentum because we're allowing certain groups of people unfettered access to an audience. It's really a bad example in my opinion, but I understand and sympathize with the point you're trying to make.

Unfiltered, unstructured information from unqualified sources tends to prey on our darker nature more often than it appeals to our better senses as people. To kind of expand on implementing practical systems around how we actually function as humans, in a lot of ways the "bad" part of human nature is why we've structured many western governments with explicit separation of powers with checks and balances. There are just some things we gravitate towards as humans that just aren't "good." James Madison in fact said the state was just a reflection of human nature, for the previous reasons. In a lot of ways, we have historically treated information dissemination in the same manner, checking and preventing the worst of our nature when broadcasting and consuming information via editorial controls and trusted, proven sources.


> To address your specific example: It's not based on what most of the population thinks. Qualified opinions are a thing. We can talk about what makes a qualified opinion, with your example or any other subject. Notably we see these types of opinions in your example gaining momentum because we're allowing certain groups of people unfettered access to an audience. It's really a bad example in my opinion, but I understand and sympathize with the point you're trying to make.

Who qualifies the opinion? The opinions of those who wanted gays muzzled in the 1950s were very qualified opinions.

Hell, during Obama's first term in office he publicly opposed gay marriage as the President of the United States.

Notice that your comment is being slowly greyed out and my question is already in the negative with you being the only person who actually responded rather than attempt to downvote it into oblivion. And this is on HN, not Reddit.


Prove to me that you argue in good faith and are a rational actor.


I thought it'd at least be a few responses before someone went for the typical "got'em" style of response that inevitably relies on Missing the Point, but I guess not.

I don't need to prove that to you. Even in your own odd request, you didn't even make the feeble attempt to pretend that you're running a popular media platform. We are talking about extremely popular platforms that inherently give credence to any view points espoused on those platforms, view points that can have far reaching effects. To give you a very specific area that we're touching on, it's internet demagoguery.

Editorialization is not censorship, nutter.


I don't in any way expect that Twitter has reviewed a tweet for content and approves of the content therein. Equally so for Facebook et al.

The platforms' ubiquity makes them the Hyde Park / public square of the modern age- if anything they should be regulated to be content-neutral rather than be encouraged to silence certain viewpoints.


That's just not true though when you consider that these platforms have algorithms that will determine which posts get more visibility than others (trending is the most obvious of these). I don't think this would be even half as much of a debate if we had an electronic platform that could actually be the modern public square of the US, but instead we have only private entities attempting to provide this, and they are decidedly not protectors of free speech (that would be our government). The extension of this is that even if the government did provide such a platform, there would likely still need to be rules because as much as you like to think we have unfettered free speech, you still can't stand in the middle of the public square and call for the death of another individual or you can't shout "FIRE" in a movie theater when there is none.

If we actually had a government-run system, we could ensure things like accountability for the ideas you self-publish in such a public square, because the government itself would have the servers that contain the data. The Constitution would limit the government from censoring this platform, but it wouldn't limit the government from implementing more effective methods of processing abuses of free speech, such as libel, by having an immediate record of what was said. My main point here is that it's easier to agree on what, if any, free speech limitations should apply if there isn't this proxy layer of "Well, corporations can do whatever they want with their servers" and "Well, these laws don't apply to what they said because nobody is speaking in the domain that freedom of speech applies to."


> If we actually had a government run system

We do! Buy a domain, point to your home server, and go to town. Any unlawful messages you spew will be met with seizure of your domain and/or servers, but otherwise you're free to promote or discuss any ideas you see fit.

If you're talking about a government-run social network, that's an interesting idea. I was actually talking about this with someone a few days ago, but with regard to a government-run (or government-funded) news/journalism publication that reported on facts (as opposed to feelings and clickbait).

I think these government-run services would eventually fall victim to people's calls for their removal... either they are too biased, or not biased enough. A social network in particular... I cannot imagine a social network built and run by the government that ANYBODY would actually want to use.


Cockroaches are also hardy and reproduce at an incredible rate. We've seen some cockroaches, like white nationalism, that would have atrophied as demographics changed instead spread more rapidly. We now need a solution for a problem that we wouldn't have without the "solution." Great.


There are still too many unsolved philosophical questions here. What is hate speech? What should the limits of free speech be? How do we contend with the multitude of religious, legal, and cultural differences and anomalies when policing news and thought across the world? How do we react to people weaponizing the policing of hate speech to remove free speech?

I have yet to hear compelling answers to this problem, and I am not that optimistic that it can be solved in the next few decades. I do agree that trust busting is the wrong approach. At least the problem is currently centralized.


Speech is either free or not free; there is no middle path.

If you want free speech, you accept the consequences. If you want "regulated" speech, there are consequences.

That's it. I would argue that the level of satire a society can cope with is directly proportional to the quality of democracy the society has.


Ignoring nuance doesn't make it disappear. Every developed country on earth regulates speech to varying degrees.


> the level of satire a society can cope with, is directly proportional to the quality of democracy

You're contradicting yourself: first, you deny that there are gradations in "freedom," saying it's all-or-nothing.

But then, a democracy's quality is apparently proportional to its freedom of speech, implying that there are, indeed, nuances.


Free speech is a principle (one that existed before the United States) and a goal. You can have a goal, operate on principle, and still have universally agreed-upon edge cases and exceptions.


> I would argue that the level of satire a society can cope with, is directly proportional to the quality of democracy the society has.

So where does that put America :)


>There's still too many unsolved philosophical questions here.

It's only unsolved among people who don't understand what free speech is.

There are no "compelling answers" because the problem at hand is how to maintain the positive branding of free speech while removing what it means for speech to be free.


Personally, I think hate speech should be given a far narrower and less catchy term that captures the actual issue: "<ethnic> intimidation," with "ethnic" substituted for the relevant group or axis. That's the whole point of hate crime charges: not to make the group sacrosanct, but to recognize that the act isn't just the crime of vandalism or murder; it's violence against the entire group, akin to sundown towns lynching ethnic minorities. That kind of suppression is dangerous to freedom and can itself be suppressed to freedom's benefit, akin to the paradox of tolerance.


I don’t understand why anyone needs to censor anything in the first place. If a user finds another user's posts offensive, the first user can unsubscribe/unfollow/block/... the second user. If everyone thinks the same way, then the offensive user will just be speaking to her/himself.
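
To make that model concrete, a minimal sketch (Python, with invented data): each reader applies their own block list, so no central censor decides for anyone else:

    # Hypothetical posts and a personal block list, invented for illustration.
    posts = [
        {"author": "alice", "text": "cat pictures"},
        {"author": "troll42", "text": "something offensive"},
        {"author": "bob", "text": "election news"},
    ]

    my_blocked = {"troll42"}  # maintained by me, not by the platform

    # My feed drops blocked authors; everyone else's feed is untouched.
    my_feed = [p for p in posts if p["author"] not in my_blocked]
    for p in my_feed:
        print(p["author"], "-", p["text"])  # prints alice and bob only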


That's how you end up with echo chambers and parallel online universes.


I think censorship creates echo chambers even faster.


Well, these unnamed "tech companies" are responsible for the proliferation of absurd lies that will elect a far-right authoritarian candidate in my country, Brazil. The spread of these lies happens with the support of a well-financed organization.

I always thought that the Internet would be a democratic platform that would improve the debate in society. Maybe we would go back to a democracy without intermediaries.

I was wrong.

We are entering a dystopian world where the profits of a handful of companies are more important than the rest of society.


Modest proposal: The government should appoint a Department of Truth to review and approve all social media posts in a country to eliminate election misinformation.


Minitrue!


What's really depressing is that I can't tell if you're joking or not.


(s)he's joking, but the only reason I'm sure is the use of "modest proposal", not the actual content.


>Well, these unnamed "tech companies" are responsible for the proliferation of absurd lies that will elect a far right authoritarian candidate in my country, Brazil.

There was no reason why the opposition to that candidate couldn't have put up their own absurd lies, or, dare I say, disproved the far right candidate's lies, thereby achieving the same success via the same platform. The elections are all about effective campaigning. You can't blame the platform because the candidate you don't like is too effective at using it. Instead learn from your mistakes and use this platform to be just as effective.


No, anyone with a minimum of ethics and responsibility can't spread those kinds of lies.

Then they spend all their time refuting absurd lies instead of explaining their proposals.


That is a strange definition of responsibility - the voters aren't responsible? They lack agency completely? The propagandists who would and have used other means like newspapers, radio broadcasts and even just whispered malicious rumors? They completely lack agency as well?

Tech is just the easy scapegoat for society.


I just read an article today about a named tech company banning marketing accounts associated with said far-right candidate. But I don't think that will stop him from a landslide win. The knife was the opposition's best hope.


It was too late and hardly made a dent in their fake news network.

One of the blocked accounts was that of the candidate's son.


What should these tech companies have done to prevent this?


This NYT article https://www.nytimes.com/2018/10/17/opinion/brazil-election-f... has some tips (a rough sketch of the forwarding cap follows the list):

- limit the number of people in a group

- limit the number of people you can forward a message to

- restrict broadcasts
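
As a rough sketch of the forwarding cap (Python; the limit of 5 and every name here are invented for illustration, not any real platform's API):

    MAX_FORWARD_RECIPIENTS = 5  # assumed cap, in the spirit of WhatsApp's limits

    def deliver(message, recipient):
        # Stand-in for the platform's actual delivery mechanism.
        print(f"to {recipient}: {message}")

    def forward(message, recipients):
        # Refuse forwards that would broadcast to too many chats at once.
        if len(recipients) > MAX_FORWARD_RECIPIENTS:
            raise ValueError(f"can only forward to {MAX_FORWARD_RECIPIENTS} chats at a time")
        for r in recipients:
            deliver(message, r)

    forward("check out this article", ["alice", "bob"])         # delivered
    # forward("viral rumor", [f"user{i}" for i in range(20)])   # raises ValueError

The point isn't the exact number; it's adding friction to mass rebroadcast while leaving one-to-one speech alone.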


161 comments and no one has mentioned Glenn Greenwald and the Intercept's prolific coverage on this issue? I'll do a quick websearch and fix that.

Should Twitter, Facebook and Google Executives be the Arbiters of What We See and Read? August 21 2014 - https://theintercept.com/2014/08/21/twitter-facebook-executi...

Facebook Is Collaborating With the Israeli Government to Determine What Should Be Censored September 12 2016 - https://theintercept.com/2016/09/12/facebook-is-collaboratin...

Then: Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments December 30 2017 - https://theintercept.com/2017/12/30/facebook-says-it-is-dele...

"hate speech" from:ggreenwald on Twitter - https://twitter.com/search?q=%22hate%20speech%22%20from%3Agg...


Tech companies are using algorithms to prioritize the messages we see, which makes them incredibly valuable as advertising platforms. It seems like Mr. Stamos wants these companies to have all the rewards and none of the complicated responsibilities to match it. If they don't want that responsibility, then they need to get out of the business of sorting and recommending. Let them be like Craigslist.


The choice is: police hate speech or promote hate speech.

Observation: promoting is cheaper (even profitable). But they can promote it with plausible deniability.

Which is a more "dangerous" path? And to whom? Society? Shareholders?


Speech isn't "dangerous". You know what you do about people that say stupid things: you call them out on it. The answer to hate speech is more speech.


> Speech isn't "dangerous". You know what you do about people that say stupid things: you call them out on it. The answer to hate speech is more speech.

This worked to marvelous effect when legitimate, unique net neutrality concerns were buried under an avalanche of duplicate anti-net-neutrality letters sent on behalf of people who were very much deceased.

The solution to this challenge likely needs a much more nuanced approach to it than just burying hate speech in more speech, because look how well that turned out.

History bears many more examples of your proposal being inverted to hideous ends. The loudest have a very pervasive tendency to win.


Your example has nothing to do with hate speech. Your example is fraudulent speech.


In the end, what does hate speech do other than to defraud an audience into an unjustified view of a subject?

It's a relevant analogy. You're free to dispute it, but we might all benefit if you invest some effort explaining why this and other examples aren't relevant.


Unless of course you're the side of the debate that was opposed to Net Neutrality. Then free speech worked out quite well, didn't it?


This is a good example of why free speech is necessary. I've always considered myself a strong supporter of "net neutrality." Yet there were instances in the last five years where the policies or positions labeled by many, particularly in the media, as net neutrality were not ones I would support. It was only through robust debate and the free expression of contrarian views that I encountered critical facts exposing the misleading use of the phrase "net neutrality."


Like many tools, speech can be dangerous because speech can be useful.


Do you have evidence for this statement? What if you're wrong? There's evidence that you're wrong.

https://en.wikipedia.org/wiki/Paradox_of_tolerance


Calling out something that one could refer to as "hate speech" isn't tolerating the behavior, it is criticizing it.


That's a Wikipedia link, not evidence.


Speech is dangerous. Hate speech promotes and incites violence.


We already have laws that deal with explicitly inciting people to violence.

So called "hate speech" generally is not an explicit endorsement of violence.


Wrong. Hate speech, at least in the US, has no formal definition. All it takes for something to be considered hate speech is to call it hate speech.

Incitements of violence are already prohibited by law.


To the extent that speech is dangerous, it's pretty easy to pivot to defending it on 2nd Amendment grounds, in addition to the 1st. (Relevant: https://xkcd.com/504/)


Speech can absolutely be dangerous. Much of the run-up to the Holocaust was in dehumanizing and characterizing the Jews as money-sucking leeches bleeding Germany dry.


WW2 era Germans were dangerous. Speech didn't kill anyone.


Much of the run up to the Holocaust was in establishing and propagandizing an authoritarian worldview while silencing the opposition with whatever tools were available.


Why are those the only two options available?


Indeed. Also who defines what 'hate speech' is and isn't?


If recent history is any example, it's an ever-expanding and often arbitrary list depending on an individual's sensitivity level (particularly when it comes down to the individual level, like mods on subreddits or employees at Twitter).

It seems many very vocal people on the internet are essentially pushing the idea that the most sensitive people should define those boundaries for everyone, that the list of things people can be outraged by could change at any time, and that ignorance of these boundaries is no excuse.


A Google presentation titled "The Good Censor" was recently leaked, in which they discuss how they, Twitter, and Facebook should censor free speech. They say that they want to move to a "European model" where civility is valued over freedom.

https://www.theverge.com/2018/10/10/17961806/google-leaked-r...


I cannot speak for the whole of Europe, but the German model certainly isn't "civility over freedom", but practical concordance: you have several basic rights which unfortunately almost always collide in some form with each other, and you find a way to give maximum effect to all of them.

The difference between America and Germany — very broadly speaking — is that America goes for the local maximum (free speech trumps everything) and Germany goes for a global maximum.


Mod does. Don't like the mod, get out!

Social networks don't control the police and jails. They have always had the right - though they neglected it because they are lazy and delusional - to moderate content on their own servers.

It's so clear to me that "one size fits all" social networks will go away. They are inherently dysfunctional. Putting everyone from terrorists to toddlers on the same forums was never a good idea.


No, they neglected it because they originally wanted protections afforded to “common carriers”. Now they are trying to have their cake and eat it too.


The dead, at the very least.


Yes, there's a third: that the balance they strike is infinitely perfect and the net benefit is zero in all dimensions.

It struck me as childishly pedantic to address, so I didn't.


Why doth [the suppression of uncomfortable opinions] never prosper, what's the reason? For if it prosper, none dare call it Treason.


Hate speech is made in full faith by a group that has dominance and power to influence others to harm groups who live under that dominance and power. How many gradations do you think there are between the two?


I don't agree with that definition at all. It is an atrocious and immoral definition of 'hate speech' (and dare I say probably racist in practice).


What can be done to correct this definition? What should be adjusted?


For one thing, this definition eschews a common standard. Instead, violations are based on the arbitrary criterion of "dominance," a concept popular with certain ideologues but with very little evidence that it is actually a good lens for understanding our society. Anyway, in practice (and probably in your mind), this definition would entail grouping people by their skin color and labeling one group as extra susceptible to violating YOUR hate speech code while another group is completely exempt.


>Rather, violations are based on arbitrary criteria of "dominance" - which is popular with certain ideologues but has very little support in showing that this is actually a good metaphor for understanding our society.

I'm fascinated with why people don't acknowledge power structures as having any influence on how one part of society views another; as if there were currently no hierarchy in society.

You just haven't ever needed to interface with this idea and therefore believe it's not a defined concept.


>I'm fascinated with why people don't acknowledge power structures as having any influence in how one part of society views another; that there is currently no hierarchy in society.

I didn't say anything of the sort. What I meant to communicate to you was that I don't see any evidence that the SPECIFIC (and arbitrary) power structures you choose to use have any actual explanatory power. Typically, ideologues of your persuasion attempt to explain any disparity in society this way. Those claims are almost always unfalsifiable and can be twisted and contoured to fit any data point.

Do you understand my point? There are many power structures and many hierarchies in our society. If you were to explain the particular status of a specific individual, here are some characteristics that might factor in:

- race

- ethnicity

- gender

- sexual orientation

- height

- IQ

- extroversion vs. introversion

- physical disability

- mental disability

- two-parent vs. single-parent household

- household income

- geographic location of upbringing

- geographic location of present residency

- education level

- sense of humour

- religion

- political alignment

- marital status

- number of dependents

- hair color

- attractiveness level

- language

- athleticism

- etc., etc.

Ideologues of your persuasion tend to argue, with ZERO evidence, that immutable genetic characteristics dominate all others, even though most evidence suggests the opposite. So it isn't about denying that power structures and hierarchies exist, but rather denying that the SPECIFIC power structures and SPECIFIC hierarchies you choose have any explanatory value.


Every reply of yours so far has hinged on my definition of terms being arbitrary.

I'm strictly talking about well-defined terms.

Maybe you have never examined them because you never needed to interface with them in practice, or maybe you do understand them and are just arguing in bad faith to defend a more abstract belief. I won't make any assumptions about that.


If anyone's arguing in bad faith here, it's you. Power dynamics may lead to hate speech but they are not an intrinsic part of hate speech.


Hate speech specifically comes from a position and perspective of power. Stop making up your own terms.


>Hate speech specifically comes from a position and perspective of power.

That's not a universally accepted definition. You're simply asserting it to be true. It's very similar to the way the definition of 'racism' was contorted from a conceptually simple "somebody who hates someone based on their race" to "somebody who discriminates (usually in some abstract ill-defined, overbroad way) and who is part of some power dominance hierarchy (also arbitrarily defined) against a marginalized sub-group" or paraphrased: "If you're non-white, you cannot be racist". Sorry, you don't get a pass on that and you don't get to just assert this to be true. I also believe that this is a deeply immoral way of looking at the world.


This here is a perfect example of a bad faith argument; asserting that your definition of hate speech is the only acceptable definition is disingenuous.


>Every reply of yours so far has hinged on my definition of terms being arbitrary.

Not quite. I am fully aware that ideas like "privilege" or "dominance/power hierarchies" have well-defined meanings. But though they are well-defined, their applicability in explaining our society is controversial. Proponents of these concepts tend to be ideologically driven. I criticized these concepts because you used them to create a definition of 'hate speech'.

>Maybe you have never examined them because you never needed to interface with them in practice

I have no idea what you're talking about here.

>maybe you do understand them and are just arguing in bad faith to defend a more abstract belief

I disagree. I attempted to be very clear in why I disagree with your point of view and I tried to capture your position fairly. Where is this 'bad faith'?


Counterpoint: no it isn’t. US law isn’t global anyway, and companies already do it.


Policing “hate speech” will just create niches where you won't be censored. It's only a question of time until someone comes up with an idea to monetise that. Imagine a platform where “moderate speech” will be banned because it's against the house rules...


I've been social on the internet since the early 90s, and it's been a wonderful place for most of it. Before it became so egalitarian, the people adept at socializing on it were pretty left leaning or downright libertarian. Now everyone is in on it, so we're getting confronted by parts of society we could pretend didn't exist 10 years ago. I don't know why I didn't see it coming.

There's a can/should debate hidden in here. Tech companies totally can police hate speech (or any kind of speech) on their platforms, thanks to handy things like a ToS. Whether they should is a cultural question about what kind of a society we want to have. If history has taught me anything, it's that the can side of the debate wins in the long run.


> Now everyone is in on it, so we're getting confronted by parts of society we could pretend didn't exist 10 years ago. I don't know why I didn't see it coming.

Did you have the same biases I did? At that time and age (my teens, mostly), I just assumed people with different values than mine were ignorant, and so naturally they wouldn't be capable of using advanced technology.

I'm not proud of that, but there's still a _lot_ of that sentiment kicking around, including in the form that giving people additional access to technology and knowledge will educate the masses into the "correct" set of values held by whoever's pushing for greater technological adoption.


There was some truth to that assumption when the internet was more niche and the old ecosystems were in place - it required a certain degree of curiosity and willingness to learn and explore when there were established "reputable" ways.

Greater access does help /if they are willing to use it in self-improving ways/ in the first place. If they just use it for tabloids and gossip, it won't be a library to them but tabloids and gossip.


Alex Stamos doesn't realize that the political tides have shifted. The technolibertarianism that has been prevalent in Silicon Valley since at least the 1990s is on its way out. Governments around the world are increasingly asserting their sovereignty, and that's not going to change. The internet is not a wild west where a bunch of tech people are free to do whatever they want, ignoring all the consequences and negative externalities they create.

It's a coincidence that John Perry Barlow died at the height of all this, but I think it's extremely symbolic that governments are asserting their power just as technolibertarianism's radical cleric passed away.


[flagged]


We detached this subthread from https://news.ycombinator.com/item?id=18285698. Speaking up is fine, but if you don't make it substantive enough you start a flamewar like this one.


I just want to take a moment to point out the irony of this, and my subsequent posting ban.


>If you're not against it, you're supporting it.

I can be 'against' hate speech, but still support someone's right to utter it. Also, whose definition of 'hate speech' are we using?


If you're supporting an attacker's right to denigrate, abuse, or harass other people, you're supporting violating the victim's rights.

One of these has a real social cost, and the other is just literal lip-service.


> If you're supporting an attacker's right to denigrate, abuse, or harass other people...

Did that person's comment say that they support those things? Weird, I didn't read that...

When it comes down to the fundamental rights of an individual you can't pick and choose when something is an unalienable right and when it is not. We can either have free speech (warts and all) or we can live in a world that dictates what a person can and cannot say (a world I want nothing to do with).

Back to the previous point, I support someone's right to say something, not what they say. I think white supremacists are morons who sadly misunderstand their position in the world. I support their right to SAY what they want, but I don't support the contents of their speech.

The discussion around free speech requires nuance; none of it is even remotely close to as black and white as you paint it.


But you can't yell fire in a crowded theater. And if stoking panic in the easily-panicked is something you ARE okay with forbidding, why is it then ok to stoke hatred in the easily-manipulable?

I guess that wart makes this a world you want nothing to do with, because of slope's slipperiness...


Of course you can yell fire in a crowded theater. Perhaps you smelled smoke from the popcorn machine (or were having a stroke) and everyone would get up pretty calmly and leave the theater (or tell you to shut up and watch the movie/play). If someone twisted their ankle or even managed to get trampled to death few lawyers would even consider a case against you.

Now, if you had clear intent to cause injury, blocked exits to ensure harm, and started the fire you'd be up on several charges related to fire safety codes & arson laws long before you had a prior restraint on speech claim (i.e. 1st amendment).

The original Supreme Court case (Schenck v. US [1]) from which the rhetorical flourish about fires and theaters came [2] has been pretty much abandoned in favor of later rulings like Brandenburg v. Ohio [3].

[1]: https://www.law.cornell.edu/supremecourt/text/249/47

[2]: https://www.popehat.com/2012/09/19/three-generations-of-a-ha...

[3]: https://www.law.cornell.edu/supremecourt/text/395/444

One more interesting read on this topic: https://www.cnn.com/2017/04/27/politics/first-amendment-expl...


> But you can't yell fire in a crowded theater.

This cliche comes from a WWI era Supreme Court ruling and was part of the argument for how imprisoning anti-war protestors for “sedition” wasn’t technically a violation of the First Amendment.


No law or censor is stopping anyone from yelling fire.

Censorship is prior restraint: stopping speech before it is uttered, read, or heard, because reasons.


I honestly would have to read up more on the arguments for the restrictions that are currently in place.

A problem with this whole discussion is that no one has the same definition of "hate speech"; it is too subjective of a term. We all (the majority) agree when it is wrong to declare imminent danger.

But like I said, I need to read up more on the rationale for those restrictions.


> If you're supporting an attacker's right to denigrate, abuse, or harass other people, you're supporting violating the victim's rights.

Can you diagram how you arrived at this travesty of a logical leap? You're mixing in at least five suppositions completely outside the parent's stated post, ignoring the other foundations that come along by implication.


Let me simplify further, in the face of your ad hominem:

If you're not actively trying to make the world a better place for everyone, you're passively making it worse for everyone.


> Let me simplify further, in the face of your ad hominem:

Sorry? I attacked your phrasing and logic, not YOU. It's like you're looking for a reason to get angry at nothing.

I'm done here if you don't want to act civilized.


Okay, I clearly got a bit too passionate. Let me try and clarify, civilly:

Hate speech is always an attack.

Whether it's overt (Westboro, the N-word, Swastikas, homophobic slurs, etc.) or more subtle ("the 14 words", "88", etc). Supporting the "right" of someone to use hate speech is supporting their right to attack (and thus violate the rights of) others.

Whereas, by preventing those attacks, nothing of value is lost, and real harm is prevented.


Absolutism is also dangerous. You're dividing the world into 'good' and 'bad' and giving no quarter to the bad.

That can go sideways. You think any of the monsters of history thought they were the bad guys?


Thus, Popper's Paradox of Tolerance.[0]

[0]https://www.goodreads.com/work/quotes/6492090-the-open-socie...


By these rules, you just committed an attack and violated the rights of others by saying "88". If you were banned to prevent this attack, nothing of value would be lost.

Oh, "but I wasn't SAYING it" you say? Who judges whether hate speech is excused by context? Facebook interns? AI algorithms? Sorry, the algorithm has pronounced you guilty, you are now banned.

That is the world you would create.


Okay, nevermind then. Nothing to be done. Thinking about this is too hard!

Let's just throw in the towel folks!


Or maybe social media companies get rid of any kind of algorithmic selection. Feeds are completely ordered by date. You can follow/unfollow people who you like or dislike. If you don't want to be offended by jackasses, don't go on the public feeds.

That said, you've shot down a lot of people's arguments but I haven't seen you promote a sensible alternative. Given your personal belief framework and given the first amendment, what do you see as a solution to this problem?


I don't claim to know the answer, but I firmly believe that if all the energy being poured into protecting hate were instead put into eradicating it, the world could only become a better place.


> I firmly believe that if all the energy being poured into protecting hate were instead put into eradicating it, the world could only become a better place

HOW? How do you eradicate hate speech? What is hate speech? Anything that makes you feel bad? So do you single-handedly decide what speech is ok or not?

You're lambasting people's protection for free speech as "protecting hate speech" (which is a false dichotomy) and not offering any alternatives. I don't really get what the point of what you're saying is other than "This upsets me." And that's fine if hate speech upsets you, but understand that it upsets other people too, even if those people support free speech as defined by the US bill of rights and legal framework.

I guess what I'm saying is...stop complaining unless you start examining the problem critically (ok, you don't like hate speech... who enforces what hate speech is? how is it enforced? etc. right now you're attacking free speech but not offering ANY compelling or thought-out alternatives) and for the love of everything stop thinking in absolutes. You seem to have forced yourself into black and white thinking. That's not only counterproductive, it's dangerous. People who think in absolutes are targets to become tools of hate. If you don't start seeing nuance in things, you will be easily swayed and manipulated by anyone who has a Nice Shiny Solution to "end hate speech."


Asking "who defines hate speech?" is a cop-out. No one ever asks "who defines privacy?" in discussions about that. Hate speech is just as obvious to anyone with a modicum of empathy.

It's plenty easy for people in general (except HN, where no one seems to have any notion of what that could possibly be without a grand arbiter to define it down to the spin of each quark) to recognize hate speech.

Platforms already have frameworks for dealing with bad actors. Hate speech is just like any other abuse of a platform, and should be treated as such.

In short, I have come to realize that I'm never going to convince this highly-privileged audience to genuinely care about actual marginalized people over some imagined, theoretical bogeyman. So this will be my last wasted effort on the subject, here.


You're right, you haven't convinced me. Every opportunity everyone has given you to define hate speech, you say "I don't need to!! It's obvious what hate speech is!!" That's not an acceptable definition. Apparently it's anything you highly disagree with. From talking with you, I'm convinced you would classify a discussion on freedom of speech as hate speech. That scares me. I get the sense you want complete control of all speech just so hate speech dies. In other words, a casual stroll toward tyranny.

I don't think you're examining the problem (and censorship IS a problem) critically. Forgive me, but you are the one copping out. And actually, people do ask "what is privacy." It is being redefined all the time and there are people actively fighting for their definition of it. They at least have a definition.


You think anything that is an attack should be illegal?


>So, I see I've angered the privileged, white, tech-bro, capitalists.

What? Why are you saying white people specifically are downvoting your opinion? And aren't you manifesting hate towards a group of people with a physical characteristic?


I'm not hating, only observing.

The only ones who see fault in protecting the marginalized are the privileged. The only ones who advocate the right for hate speech are those who are functionally immune to it.

And the demographics of HN are young, single, white, males. It's the only explanation I can find for someone downvoting "Hate is an absolute."


You are illustrating the hypocrisy of arbitrary definitions of hate.

You're literally expressing "hate" towards a group of people.

When called out you call it observing things. Do not those with opposing points of view observe things? So now it's about debating objective observations? Or is it about hate?

How can you not realize this is a perfect encapsulation for the defense of free speech and open debate? Can you not fathom that many people do not share your specific point of view on what is hate, or your specific observations and may have some of their own? Nah.. you just want to railroad your views over theirs with no opportunity for a rebuttal.


There's another possibility that you're overlooking, and that is that your subjective view is far more polarized than the average person's. The language you've used so far shows you are thinking about all of this in very absolutist terms. Most people recoil from absolutism.


There is no "grey" area for hate. It's absolutely binary. If you hate something or someone, there's no half-way. Hate was the force behind the Holocaust; hate was the force behind the Iraq Wars. Hate has torn the Middle East to shreds for millennia. Hate was the Khmer Rouge's genocide. Hate was Apartheid.

Giving hate any leeway is to give it 100% of its power. It must MUST MUST be fought as an absolute.


Viewing the world in absolute terms IS one of the main causes of hate, in my opinion. The idea that a group of people can be convinced to view something as absolute makes them malleable to do things they wouldn't otherwise do.

For instance, you are absolutely against hate. Now all someone has to do is change your definition of hate. Even in a subtle way. Now you are absolutely against something else you may not have been before. And this absolute disgust for something can morph and become something completely different than it was before. And you will never question it because you have absolutely made up your mind and there is no other way.

The world is not black and white, in any way. There are only shades of gray. There is no such thing as absolute good, and there is no such thing as absolute bad. Please examine this in yourself before you become a tool for someone else's agenda.


So would you say you hate 'hate'? If you don't hate 'hate' and it's "absolutely binary" then doesn't it follow you love hate? If you do in fact hate 'hate' then it must also mean you hate yourself.

Absolutism like this often leads to logical traps. Even when it doesn't it's still unhealthy. Besides, the only way to exterminate an idea is to exterminate the people that might embrace that idea which in itself sounds pretty wrong.


>Giving hate any leeway is to give it 100% of its power. It must MUST MUST be fought as an absolute.

Apparently this means yelling at white people for your downvotes on the internet.


Do you hate Nazis?


I really don't care what you have to say, and I don't like you, but I don't particularly hate you. Happy?


I've read through 3 or 4 of your comments, and you are completely incapable of making an argument without resorting to slogan-spouting and offensive slurs against those you disagree with. I don't think you're an honest person, and you come off as very unpleasant.


Also, since when is using the demographic of a behavior a reason to blame all behavior on that demographic? If demographic X commits crime Y more than anyone else, and I see crime Y, isn't it bigoted to blindly say "it's those damn Xs again"? That's exactly what you're doing here.


What is "hate speech" changes both over time and with who identifies it; even if everyone is against it, they won't all agree on what specifically is included in the definition.


People keep saying that hate speech isn't defined clearly. It is, but these people have just never had to interface with it and have never cared to learn the boundaries.

Why haven't they had to interface with it? Because they have privileged positions in society, so hate speech has never been directed at them. Their lives have never been made harder to live by someone else in an influential position saying they don't deserve to exist.


> Because they have privileged positions in society, so hate speech has never been directed at them.

Really? As a minority and member of other protected groups, I have had to endure what you characterize as "hate speech" many times and yet I remain in complete opposition to the goals of your movement. The restrictions on speech and thought championed by your movement are contrary to the ideals of classical liberalism and, in my opinion, are a hazard to the continuation of a free society.

Your movement does not speak for me and I wish nothing to do with them. I suspect that I am far from alone in this regard among minorities.


That's terrific. I'm sincerely happy for you that you're fortunate enough to be in a comfortable enough place in society that hate speech isn't an obstacle in your life.

However it seems as though your idea of a free society doesn't extend freedom to people for whom hate speech is a much greater obstacle.


> However it seems as though your idea of a free society doesn't extend freedom to people for whom hate speech is a much greater obstacle.

Does extending them freedom from one problem by removing a greater freedom from both them and the rest of us sound like a good bargain? It doesn't to me nor does it seem so, thankfully, to a great many others.


I'm glad to see someone here gets it.


Sounds like you hate white tech bros


[flagged]


Sounds like you have a lot of internalized self-hate. You certainly have a lot of guts saying this on an account attached to your name and resume.


We've banned this account for posting unsubstantive comments and repeatedly breaking the guidelines. Would you please stop creating accounts to do that with?

https://news.ycombinator.com/newsguidelines.html


This is already going down the path of "what is hate speech". Here is what I think is a pretty simple definition:

Speech that explicitly or direct line implicitly dehumanizes anyone.

Then the arguments become pretty simple: does a statement dehumanize someone? Does it indicate that they are any less human than another? That's a much easier discussion to have.


Yes. It's very easy to create a personal definition of what's offensive. If you were the King of the world, I'm sure it would be very easy for you to tell everyone what is and isn't good.

>Speech that explicitly or direct line implicitly dehumanizes anyone.

Except, you've gone nowhere. You've substituted the intrinsic ambiguity and subjectivity inherent in 'hate speech' with the same intrinsic ambiguity and subjectivity inherent in 'dehumanization' .. in fact, I have a better idea of what 'hate speech' is than what 'dehumanization' actually entails. I've seen people argue that a model in a bikini is dehumanizing.


I really don’t think so. You just replaced hate speech by dehumanizing speech but have the exact same situation. Same question then, what is dehumanizing speech, who defines it? The definition that you offer doesn’t seem too different to me than saying hate speech is content that you find hateful.


Speech that outrages, offends, or angers another person or group is actually among the most humanizing concepts available. Anger, being offended, and being rude are human traits. No, they are not very positive qualities, but that doesn't make them "de-humanizing." De-humanizing is a completely different concept that can happen in a variety of situations, one of which is where one group dominates society and prevents individuals from holding concepts (wrong or not) that are contrarian or against its agenda. For example, falsely claiming that offensive speech is de-humanizing, and rejecting others' ability to hold "offensive" models of the world, is in fact de-humanization itself, wrongly re-wrapped as its opposite. There are other aspects of de-humanization that are beyond the scope of this. But censoring speech or rejecting models (positive or negative) in the name of fighting de-humanization is itself de-humanizing. Being angry is a human right. Imagine if we passed legislation making it against the law to be angry. That would be outrageous and against everything we know to be human.


"hate speech" and "dehumanizing" are implicitly political terms. Will you will agree with me that one's politics are simply deeply cherished opinions and nothing more?

It appears to me that we are excluding voices from public discourse because, in the opinion of a powerful group, they have been rude.

One person's rude criticism can easily become another person's hate speech.


Unfortunately, politics directly affects the safety, health, and livelihood of everyone.

People get sick and die because of political decisions, and people gain great amounts of wealth because of political decisions.

People maintain comfortable places in society or remain in fully employed poverty because of political decisions.

There is very little in people's lives that isn't affected by politics.

If you're in a position for politics to not affect you because it's just an abstract topic of conversation, you're part of a privileged class of people who aren't being scrutinized and blamed because of your inherent, born qualities. Congratulations.


Everything in the world could be framed in terms of power struggles, yes. Class warfare could blanket all of reality, and ultimately tearing down perceived unjust structures and individuals might usher in the utopia. But you've got to be a simpleton sheep to think that way. You invite the dictator-with-solutions in with that line of thinking.

I am more tolerant and wise than you because I can separate political thought and speech from political action. You, apparently, cannot. I am being charitable by calling you a control freak who cannot handle having their deeply cherished opinions challenged.


I feel that this comment dehumanized me. Therefore, you must remove it. See the issue?


How do you feel dehumanized? Let's have that discussion.

(see the answer?)


With the appropriately-tuned hate-filter you wouldn't get to defend your idea and have a discussion, you would just be de-platformed.


I think the point was that any comment or remark that you'd try to police with that statement can be flipped around towards you.

First of all: who is to decide what's dehumanizing, or even hate speech?

It's also possible to make a comment that one party reads as hate speech, even though it was never intended as such. The quick attack on the poster who made the "I feel that this comment dehumanized me" remark makes it seem like someone really is offended by that very comment. If I accidentally made a comment that someone decided was hate speech and tried to have removed, I would certainly feel dehumanized. Having the meaning of my words ripped apart and banned by a system that decided I must hate someone; that's hurtful.

Hate speech isn't what worries me, though; it's the notion that my speech somehow needs policing. First we ban the obvious stuff, like attacking people on the basis of gender or race; we can all agree on that. Next it's a ban on hateful speech against religion... but we don't all agree on that one, because some religions need to be criticised. Anyway, screw the anti-religion people, who's next? Those who go against the government, or how about those opposing certain parties? In the end I'm kicked off the Internet for saying something negative about Disney.


The people who think "hate speech" should be deplatformed are actively seeking to stop the discussion from happening.


How is your life any harder to live because the parent comment was made? How has any positive human quality you innately have been refuted by the comment?

There is a clear definition of hate speech. It's not as amorphous as you need it to be in order to protect your ego.


Is there a clear definition? Who gets to define it? Who gets to choose what definition we use?


There is. Many people use it. Have you ever gone out of your way to examine it? Or do you just infer it based on your internal beliefs and limited experiences?


There's no need to assume that anyone who questions it hasn't had experiences with it. The very fact that your response implies a lack of experience may change someone's definition of "hate speech" makes it amorphous. There may be a definition, but its being "clear" is questionable. Obviously the answer is to leave it to the courts, but then you have to consider the definitions of "attack," "intimidate," or "discomfort," whether any amount of these constitutes "violence," and to what extent it is punishable.

So I still don't think it's "clear".


In order to make your points, you keep skating over the first principle: there is a clear definition.


Your definition makes no sense. "We should kill people from country X" isn't hate speech by your definition - it doesn't imply anyone is any less human than anyone else.


"should kill people" = "those people aren't deserving to live" = 1) either all humans are not deserving to live or 2) those specific humans are not deserving to live = if 2) country X are being dehumanized.

if 1) do you consider yourself a human? A) yes, then why haven't you killed yourself? Since you haven't you think you're more deserving of life relative to everyone else and therefore are dehumanizing everyone but yourself, if 2) ok, you've publicly proclaimed that you don't think you're human which then let me add a second rule. Only humans have right to speech.


According to this interpretation, arguments supporting capital punishment are hate speech and should be banned.


These interpretations can also be flipped around. One could argue that not supporting capital punishment dehumanizes murder victims and their families. Then some tough on crime politician gets elected and speech against capital punishment is banned.

Remember that whenever you create a tool to silence people, you are building a weapon that can be used against yourself. As recent elections have shown, sane politicians aren't always in power. The only long term solution is to build a system that works even when your enemies are in charge.


I'm ok with that.


Then you have such an extremely non-mainstream definition of hate speech that it was borderline intellectually dishonest of you not to come out with that example from the beginning.

Even in Europe, which has much stricter bans on hate speech and xenophobia than the US does, and where capital punishment is banned, claiming capital punishment should be unbanned is legal.


> if 1) do you consider yourself a human? A) yes, then why haven't you killed yourself? Since you haven't you think you're more deserving of life relative to everyone else and therefore are dehumanizing everyone but yourself,

This is circular reasoning. If we've already established that I might not think all humans deserve to live, you can't claim that thinking I deserve to live means I think I'm more human than everyone else.

It's perfectly possible to think someone deserves to live less than I do while still thinking they're human.

Such tortured logic is hardly "direct line implicit", which is exactly why your definition doesn't work.


If we follow this reasoning, a statement like "Nazis deserve to die." would be hate speech.


It does indeed imply just that. You're abstracting the impulse to eliminate other groups away from the fact that it comes from a perspective of supremacy: the idea that your type of human is more deserving of existing than a foreign human.


Yes, but thinking you deserve to exist more than someone else has nothing to do with whether you think they're human.

My girlfriend's dog is very sweet and loyal. I think it deserves to exist more than a (human) child molester does. This doesn't mean I think dogs are human.


You're defining dehumanization biologically, when in usage the term is overloaded with another definition that refers to depriving others of dignity, individuality, and other positive human qualities.

To say that another doesn't deserve to be recognized with those positive human qualities is at the root of saying that they don't deserve to exist.


If you're defining it like that then it's just as vague and subjective as "hate speech" is, so we haven't made any progress with this definition.


You're being confronted with a truth about the clear definition of hate speech, and you're resisting moving forward because you intuitively sense the contradiction that your own belief might pose to that. This is where you need to be brave and get past that.


Huh? That's totally out of left field and not even related to the discussion we were having. How can you possibly have any idea what my motives are for this discussion or what my own beliefs are about anything else?


We're discussing the clear and defined understanding of what hate speech is.

You lost the thread of the conversation the moment someone confronted your belief that it doesn't have a clear definition.

The reason you lost the thread is because you're struggling with the internal contradiction of your belief.


So, (if the definition is so clear) do you think claiming capital punishment should be legal is hate speech, as the person I was originally replying to has claimed elsewhere in this subthread?


In a vacuum, I would not say that's hate speech. But there are complexities in the advocacy for capital punishment in the current context of the US's criminal justice system.

Our carceral system is widely recognized to be deeply racist in practice and in philosophy. This being the case, it's not an unbelievable stretch to say that when someone holds a belief in capital punishment, there is a good chance they also have parallel beliefs that rationalize capital punishment as a functional part of that racist system.

In that case, such advocacy is part of their rhetorical framework of hate speech. The clarity of the definition of hate speech lets us see the context of that.


Who said anything about the US?

Let's imagine you work at a social media company. You see the comment: "capital punishment should be legal". You don't know anything about the carceral system of the jurisdiction the commenter lives in.

Do you remove the comment for being hate speech, or not?

But the main point is that since the other commenter reached the conclusion that this would be hate speech, and you reached the conclusion that it might be but not necessarily, it follows that the definition he or she proposed is not clear.


Remove it or not based on your company's rules and values.

I don't know how to make this any more explicit. Hate speech having a clear and defined concept (which both you and that other commenter can research on your own) gives us a way to determine whether the comment and its context are harmful. I'm not going to hold your hand here; you need to read about it.


Okay, let's back up because I think you are misreading me.

I am not discussing whether or not hate speech is clearly defined. I was discussing whether "Speech that explicitly or direct line implicitly dehumanizes anyone" is a useful definition. Do you think that's a useful definition, or not?

If that wasn't the point you were arguing against, then we're going nowhere.



