I don't know if I know the solution, but I know one thing that isn't the solution: silencing everyone who disagrees with you.
A lot of the people complaining about "toxicity" on the internet seem to be under the impression that if we "deplatform" the so-called toxic people, that will fix things. But on the contrary, that makes things worse.
If someone says something awful on the internet, and everyone either ignores them or politely presents a counterargument, they either move on because they feel they've been heard, or they engage in a polite discussion. Maybe they change their mind, maybe they don't. If they really can't engage in polite discussion, their behavior just makes their ideas look worse, so they aren't doing much harm.
If someone says something awful on the internet, and everyone rails about how awful it is and gets them banned, then that person is angry, and that anger motivates them to keep posting about it everywhere and spreading their idea. Meanwhile, they will integrate that idea into their identity, which makes it far harder to change their mind. And if you actually manage to get them to go away, they will go to cesspools like Voat, where they are even less likely to be exposed to ideas that change their mind, and where in fact they are likely to be exposed to even worse ideas.
Let's get some perspective: what you're complaining about is people saying things you don't like on the internet. Yes, what they are saying spreads ignorance, but the solution to ignorance isn't silencing the ignorant, it's education.
MLK and Harvey Milk both recognized that the source of the bigotry they fought against was fear born of ignorance. But the average left-leaning person today doesn't see bigots even as people any more. All it takes nowadays is for someone to say one of a list of banned phrases and they're completely written off as even human. If we're going to bridge the gap here, we on the left have got to consider that we might be the toxic ones.
I've found that when I actually talk to so-called "toxic" people politely, they are willing to listen. It's not them that are causing the polarization.
The data suggests that deplatforming works. Milo; Katie Hopkins; InfoWars et al have all lost their former agenda-setting influence following deplatforming. And studies have shown that banning hate sub-Reddits does _not_ cause that content to “pop up elsewhere”, it causes it to decline overall.
It’s important to remember, too, that there are incentives for people to argue and behave otherwise: FB, Twitter, hate-speech mongers who want easy access to large audiences — all have commercial cause to act in favour of more and more extreme speech.
People like that benefit from more and more extreme speech. They permit or encourage it on their platforms. This in turn causes the white blood cell count of the body politic to spike as it tries to counteract the bile and hate. This angry counter-speech then gets presented as “polarisation”.
The solution to ignorance _can often be_ silencing the ignorant, yes, in order that the educators can be heard.
Would we still have an anti-vax problem if FB banned it across its properties? Really?
Banning disruptive speakers works. Every pub landlord knows it.
> The data suggests that deplatforming works. Milo; Katie Hopkins; InfoWars et al have all lost their former agenda-setting influence following deplatforming. And studies have shown that banning hate sub-Reddits does _not_ cause that content to “pop up elsewhere”, it causes it to decline overall.
This is simply not an accurate representation of history.
Milo Yiannopoulos fell out of popularity because of repeated issues with pedophilia, which even the right doesn't condone.
Katie Hopkins was financially ruined by Monroe v. Hopkins[1].
InfoWars still has a larger readership than The Economist or Newsweek[2], so I'm not sure where you get the idea that they have lost influence. It sounds like you might be inside of an echo chamber.
You may be looking at a different study, but the only study[3] I know of on the banning of hate sub-Reddits showed that content didn't pop up elsewhere on Reddit. It doesn't show that the hate didn't just move over to Voat, which is my theory as to what happened. Yes, the study was publicized as "deplatforming works", but only because people didn't know where the people who were deplatformed went.
The rest of your post follows suit with claims that aren't based in reality.
Voat is discussed elsewhere here, too. In short, the point was precisely to get the speech off Reddit because it was poisoning Reddit. People said it wasn’t worth them trying because it would move to other subreddits. It didn’t, it moved offsite entirely.
De-platforming worked for Reddit. OP asked for tech contributions to reducing the amount of polarised speech in online spaces. There is one.
Thank you for responding with actual data. I'll have to read the article.
> De-platforming worked for Reddit. OP asked for tech contributions to reducing the amount of polarised speech in online spaces. There is one.
I'm not sure how one can look at Reddit being more consistently liberal, and Voat being... Voat, and see this as "reducing the amount of polarised speech in online spaces".
Reddit also delisted the Chapo Trap House subreddit, it's not exclusively conservatives that are toxic. And Reddit's clean up wasn't about liberal vs conservatives, it was about teen-ogling perverts, overt racism, and threats of violence.
I’m with you on this. After the big Reddit hate sub quarantine a few years back, I started including /pol/ in my list of websites to check when a big geopolitical event is happening because I don’t trust Reddit et al. to give me the “complete” story anymore. If you read with a skeptical eye, it can be surprisingly informative.
I actually find these alternate sites useful for monitoring the latest foreign propaganda because my spouse is from a country with a hostile relationship to the U.S. and consumes most of her news in her native language, which is full of misinformation. I can now more clearly understand where she gets her opinions from sometimes and we can talk it out.
When they ban the "extremists" the "moderates" thrive making youtube/reddit/etc posts about the obvious double standards, and the impossible to pass purity tests created to ban those "extremists". The logical inconsistency of both sides of the current "culture" war parties in the US create a near unending supply of "gotcha" moments, which are just fuel for the fire.
If anything, banning extremists makes these ideologies more palatable to a larger group of people by creating a more gradual on-ramp at the entry level. I believe this is becoming more obvious as we see the middle ground position in the current "culture" conflict (which would still probably get you banned from facebook) rapidly becoming acceptable. The pendulum keeps swinging, and not many people involved seem very interested in stopping the broken cycle and finding a more sustainable solution.
Deplatforming was a big part of that. Deplatforming didn't stop Milo. His base dropped him because he started defending hebephiles and pedophiles, i.e. men being attracted to teens/pre-teens/boys (there's actually something sadder here, with Milo not realizing he was himself abused... there's an entire tragic story there that people don't seem to understand or pick up on because they're too busy hating him).
They might leave the platforms you like, but they move over to Voat, Gab or start up their own Pleroma/Mastodon instances (that get banned from everywhere). Deplatforming doesn't really work in the way you think it does. It literally gives people more drive to stand up for and behind whatever Capital-T "truth" they think got them banned.
Deplatforming doesn't work by silencing people, and the environment that you'd have to create to make it otherwise is not an enticing one. So yes, you can often cause the creation of spaces like Voat as a result of deplatforming.
Sure, something like Voat pops up. But Reddit improves. And Reddit is where all the people are.
There will always be dive bars, there will always be rough online neighbourhoods. But start by cleaning up the civic square.
More and more, as time goes on, people are leaving the MSM (including reddit) for these off-shoots, and that risks creating separate bubbles for every group-think out there. This is what de-platforming and cancel culture does.
It's also not black and white. Sure, 99% of us can agree that banning or de-platforming horrendous stuff like NAMBLA is good. But when the MSM starts to push non-extremists out, grouping them into the same bucket and banning them all, you get where we are today: a good chunk of society that doesn't listen to or trust the MSM.
If what you mean to say is that "the subset of spaces that does not include Voat and Gab is an echo chamber", you are probably saying something about your own views that you might not mean to say.
It's blatantly true that Reddit still holds echo chambers though, and the majority are far-left leaning. r/politics is as much of an echo chamber as r/The_Donald (but one is a default sub, with a deceptive name).
That's why I was particular about what I was taking them to be saying. I'm not looking to litigate whether there are echo chambers on Reddit. There clearly are. The question, now tacitly answered, is whether the person I replied to believes any space that excludes Voat and Gab is definitionally an echo chamber.
> The question, now tacitly answered, is whether the person I replied to believes any space that excludes Voat and Gab is definitionally an echo chamber.
If that's what you think I was saying, let me clarify: no, that's not what I was saying.
Having complete information before forming any opinion is an impossibly high standard. My opinion is based on what I know, and I've been pretty open about what I don't know. If you feel there's some relevant information I'm missing, I'd be really happy to hear it, but pointing out that I don't know everything is fairly uninformative.
Let's keep the discussion on the topic, not on me.
What standard am I being held to? Where am I failing to meet that standard? Do you actually have any disagreement with anything I've said, or are you just going to make snide remarks from the sideline?
You aren't making any concrete statements on where you think I'm wrong, and you are evading any questions I've asked to try to get clarity, so to be honest, I have no idea what you actually think I'm wrong about, or even whether you think I'm wrong.
If it were someone else, I'd just dismiss you as a troll, but I've read and respected your opinions in the past, which makes your current method of arguing a bit disappointing for me. You're well-positioned to change my mind if you ever decide to share what you actually think.
Voat is tiny. Sure, there is always going to be a population of die-hard edgelords who are determined enough to move to a new site, but in general the vast majority of people are lazy. And it most likely takes the bot / disinformation account owners a while to move all their activity over to a new community.
This is akin to an argument that you shouldn't fire your nazi-sympathizing coworker when they discuss their support for an ethnonationalist state because they may join neo-nazi groups and "radicalize" as a result.
Yes, they may. But while they do the workplace where most people interact is a livable place for the rest of us. We shouldn't make it easier to be heard if you have despicable views just because of the implied threat it might get worse.
Yes, and for truly horrendous viewpoints it's ok to ban them outright. But society isn't black and white, and when you start lumping non-extremists into the same bucket and banning them all you end up where we are today: distrust of the MSM by a big chunk of society.
Haha the idea that incumbent news firms can do anything to Rogan is kind of hilarious. What are they going to do, shut down the internet and UFC? Close all the comedy clubs? He is completely insulated from anything they can do.
I don't think Joe Rogan or Ben Shapiro should get fired from the average workplace (I'm not familiar with Jeanine Cummins) unless they insist on trying to impose their views. I don't think they should be banned from presumably neutral platforms (reddit, Facebook, Twitter).
On the other hand, I don't think they should be given a column in other media publications and they're more than fair game for ridicule and criticism. I don't think they have a moral right to publish in any subgroup within ostensibly neutral platforms (Facebook group, subreddit).
Oh I bet the Woke Twitterati (tm) would if they could, but Joe Rogan is in the same stratosphere as Ricky Gervais and Dave Chappelle where they are effectively impervious to cancellation barring some Harvey Weinstein/Bill Cosby-level misconduct.
A workplace is pretty different from a public communications platform.
Surely you can tell the difference between talking about odious opinions on the internet, and making decisions as a representative of a company based on those odious opinions.
We can disagree about the wisdom of ethno-nationalist states in general (in my opinion they lead to deciding that some group is a different ethnicity and using that to justify their ouster, see Myanmar, China or recent Indian laws).
Either way advocating for an ethno-nationalist state in the U.S. is akin to supporting the forced removal of at least a third of the population, which is a despicable view to have.
That's OK. People do move to platforms like Gab, but those platforms tend to be echo chambers in their own right, which has a limiting effect on their popularity in addition to the limiting factor of their initial toxicity.
They're not really good for recruitment (because they're echo chambers that only appeal to people already invested in a point of view), nor do they provide much amusement from baiting other people with very different opinions; these two factors are in stark contrast to mainstream platforms, which provide an abundant supply of both.
Naturally, you get cross-platform raiding where people organize on one platform to attack another and then enjoy the spoils if their attack results in upset, outrage, bans of the target etc. But off-platform raids are relatively easy for platform operators and target populations to monitor, detect, and repel. Likewise, the extremist or highly toxic platforms that people do congregate on function well for a policy of managed containment.
I'd argue that in this case, deplatforming absolutely worked in the way intended. There are two groups at play here: people who are willing to argue in good faith, and people who are simply trying to put forth an agenda regardless of what anyone says.
Those who moved to Gab and Voat and 4/8chan likely fit the latter description. There is no need to even have discussions with people like these on the internet. The best outcome is for them to be deplatformed and continue their self-flagellation. They are simply too far gone. If anything, these sites will probably radicalize them further, but it will also be easier for authorities to keep tabs on particularly dangerous accounts.
The concept of deplatforming is to protect the integrity of good faith discussion. The key idea there is "good faith". Persons like the aforementioned Milo clearly do not engage in any concept of good faith discussion. IMO there is no need for any discussion base like reddit or facebook to protect these types.
Deplatforming can't work — otherwise the world would now be as white-bread homophobic and misogynistic as it was in the '50s, when blacks, gays, and women were deplatformed.
That kind of implies those social groups all had full participation on the social platform and were then kicked off it, no? I don't think you've thought this analogy through fully.
Please tell me you recognize the irony of your comment.
Your last sentence is essentially indistinguishable from the position held 50 years ago by people who also felt they were "morally right" in deplatforming.
> People who are willing to argue in good faith, and people who are simply trying to put forth an agenda regardless of what anyone says.
I think that it would be wise to doubt one's own ability to differentiate between these two groups.
I also think that the purpose of public debate is not to persuade the person you are debating with: that's almost never possible. It's to persuade the audience. Bad faith arguments, if they really are bad faith arguments, are usually pretty easy to shoot down, so I don't think that we have anything to fear from bad-faith arguers.
In fact, deplatforming benefits bad-faith arguers because their bad-faith arguments look more reasonable when nobody confronts them.
At another level, deplatforming is a bad faith argument, by your own definition. Aren't you trying to put forth your agenda, regardless of what anyone says, by deplatforming people? Your agenda may be good, but that doesn't justify bad faith actions to support it.
Deplatforming totally works: it quarantines the toxic people/ideas and doesn't allow them to spread further. It's fine if they move to voat or whatever; voat has orders of magnitude fewer people who use it casually, so there is a much lower opportunity for their message to spread.
What you're describing is "that heap of Nazis over there". It doesn't matter if the deplatformed people pop back up in marginal, poorly connected hate-tolerant sites. That actually helps. They have been successfully "sent to Coventry" in a place where their cultural impact is nil.
Well of course. That's like a prohibitionist using the reduced alcohol consumption overall as justification. Data often suggests that prohibiting <thing> means less of <thing>.
The question is not whether the goal of the silencing is mostly achieved, the question is whether that goal is worth the precedent. If your goal is to silence at all costs, it will always seem worth it. You'll find the less forceful actions (e.g. ignoring, dissuading, etc), when possible, have less slippery trade offs.
A lot of people are skeptics of "anti-racism" and "feminism". Not everyone thinks that racism is causing black people to be poor (IMO a lot of current anti-racists are like people who see a snake-bite victim and start screaming that we have to kill the snake - the snake might have caused the problem, but treating a snake bite is not just chasing after snakes), and not everyone thinks that essentially every difference between men and women is part of some vast conspiracy to keep women down (this is reductionist - both because it tries to make everything a stupid "on balance, stuff is worse for women" argument, and because it's reductionist to say biology is not a factor).
If relatively mild critics of these things are shut out of the mainstream media, then they get fans on mainstream social media platforms. If they're banned from mainstream social media, they'll find some other place. These fans haven't all been bitten by reactionary zombies, they often come up with their own doubts about the mainstream left, and look for people discussing the questions they have.
Here's an article suggesting that 'alt lite' speakers or skeptics of social justice aren't necessarily creating some 'rabbit hole' effect, but may actually sap views away from hardline extremists - https://www.wired.com/story/not-youtubes-algorithm-radicaliz...
Even if you think that critics of social justice are wrong, they exist (in large numbers). If you don't want the moderates on mainstream platforms, a number will go to darker corners of the internet where there is a genuine danger they will be radicalised.
I guess maybe that's OK? Extreme right-wing radicals might tend to hurt the right. Maybe the goal is to drive away moderate anti-feminists or moderate anti-social justice types (e.g. Milo who seems to me to be a bit of a jerk, but is hardly radical), and if a few end up becoming alt-right then maybe that's OK (and "it's their fault" anyway).
The overarching issue isn't the snakebite incident, it's the loss caused by having people repeatedly being bitten by snakes. So chasing down snakes is a prescribed if not demanded response.
Not everyone leans hard left/right. Most people are centrists. So say you want to be a progressive leftist, but on issue "x" you don't stand with the left. You can't vocalize this, because doing so will have people call you "not really left" or "a bigot" or "not seeing past your privilege."
You can go two ways. You can not want to leave your political home and double down and go harder left ... or you find dissenting voices. If those voices aren't centrists, they're going to be hard right, and people will start to shift their entire political leanings because they want a political home.
People need to be okay with others believing things they don't agree with. Even controversial things. I've noticed people on the left who refuse to be friends with or hang out with people who have different beliefs about certain things... and that's insane! How do you learn and grow if you cut off everyone with a different opinion than yourself? We're probably all wrong about something.
I’m generally pretty respectful of the idea that people are entitled to make their views public. This is healthy. I don’t think this means that everybody is entitled to have their views broadcast on every platform. Presumably we could agree there’s a line somewhere, at which point it’s acceptable for me, as a company, to refuse to allow views which I consider to be over that line to be broadcast on my platform?
It’s a position of some security to be able to complain that “people on the left who refuse to be friends with or hang out with people who have different beliefs about certain things... and that's insane”. I belong to at least one minority group. I’m not even what you’d call “left”, but I’d certainly refuse to be friends with someone whose belief was that a group to which I belong should be denied some rights. I’d probably refuse to be friends with people who had certain other views I find particularly offensive too. Is that strange or unhealthy? I’d find it really weird to be friends with someone who held beliefs I find to be grossly offensive.
> I’m generally pretty respectful of the idea that people are entitled to make their views public. This is healthy. I don’t think this means that everybody is entitled to have their views broadcast on every platform. Presumably we could agree there’s a line somewhere, at which point it’s acceptable for me, as a company, to refuse to allow views which I consider to be over that line to be broadcast on my platform?
"Acceptable" is a pretty vague word. What actions are we talking about here? Lobbying and petitioning companies? Boycotting them? Demanding governments regulate speech on their platforms?
It's my opinion that forcing people to propagate ideas they disagree with is just as bad as preventing people from propagating ideas they do agree with: these are two sides of the same coin: free speech. So at the level of government, I would never support laws that forced platforms to allow any sort of speech on their platform.
However, Reddit/Facebook/etc. respond to the demands of their users, so the question I'm trying to answer is: what should we, as users, be demanding?
I think the correct response to hatred isn't to shut them out: that's just responding to hatred with hatred. I think the correct response to hatred is to respond with love: assuage fears, correct ignorance, and help the hateful person to find a better way.
> I’d probably refuse to be friends with people who had certain other views I find particularly offensive too. Is that strange or unhealthy?
This certainly isn't going to be a popular opinion, but yes, I would say this is unhealthy.
Communication is a two way street. If you want to communicate your ideas into the world, you have to be willing to hear the ideas of others. In this sense, connection is power: people who disconnect themselves from people give up the ability to change those people. And the people who you can change the most are those the most unlike you.
You have to realize that people come to the beliefs they have due, at least partly, to circumstance and education. Maybe there's some degree of nature in it--maybe some people are inherently hateful. But if that is the case, I haven't met any of those people. When I talk to bigoted people at length, I often discover that they have led difficult lives, and based on their experience, blame some other group they don't understand for the difficulties they have experienced. They aren't doing this out of malice: on the contrary, they are usually doing it because they care about their own families and communities. The problem isn't that they are naturally bigoted, it's that they're afraid and ignorant, and both of those things are fixable.
By choosing to cut someone out of your life, you're choosing to treat them not as a normal person with fears and gaps in their experience, but as an inherently bad person. I don't think that is a healthy position to take.
There's no problem with wildly differing views. The problem is dehumanizing speech. Taking reddit as an example, there's a thriving r/Libertarian and r/guns together with Sanders supporting or communist subreddits.
Don't conflate strong disagreements with toxic speech.
No it doesn’t. I’m just totally comfortable with a world in which people are educated enough to no longer have to consume these industrial lie factories.
On the one hand it is a reality that leftist ideologies gained a comfortable ascendancy in the major cultural engines of America and that is going to be leveraged as political power. Particularly education, the tech industry and Hollywood.
On the other hand, there is the ongoing background corruption of power afoot. Taking power, utilising it and branding anyone who disagrees as 'toxic' is going to cause a great deal of unnecessary suffering, as it has in the past every time a group uses power in an unchecked fashion.
Revolutions are often started by people who were deplatformed; it doesn't help when the situation is serious. If I recall correctly, Hitler, Stalin, Galileo and Gandhi were all deplatformed and imprisoned. Jesus was deplatformed rather violently. Deplatforming is not a tool with a particularly proud history of success and prudent implementation, and its failures tend to be spectacular. Two sides sticking to reasoned debate, explanation, moderation and compromise has a much better track record of not-so-bad outcomes.
And then something like Trump or Brexit happens, and the left had no idea it was coming because a lot of people's viewpoints were silenced / deplatformed....
Do you think that the average person sharing false stories on Facebook is capable of (a) finding and (b) sharing a website created by some random anti-vaxxer?
The ubiquity of the platform and the ease of sharing massively amplifies the problem.
> Do you think that the average person sharing false stories on Facebook is capable of (a) finding and (b) sharing a website created by some random anti-vaxxer?
You only say that because you don't agree with the agendas of the people you've listed. Would you trust me to choose when to deplatform the people who drive the agendas you agree with?
> But the average left-leaning person today doesn't see bigots even as people any more.
I spend the bulk of my time studying extremist discourse and communities. Extremist bigots take it as read that people of the wrong ethnic/religious/political group are either not people or at best inferior people, and direct their efforts to promoting those views, planning how to eliminate those they dislike, and normalizing that behavior.
You are, essentially, projecting the characteristics of bad actors onto other actors who are warning and complaining about said bad actors.
> I've found that when I actually talk to so-called "toxic" people politely, they are willing to listen. It's not them that are causing the polarization.
That's a tactic known as 'entryism' designed to leverage your nice, conflict-averse personality as a vehicle for normalization and possible future recruitment.
Why is this suddenly a discussion about extremism? Not everyone who is bigoted is an extremist, but maybe it's easier for you to argue as though this is the case.
> Extremist bigots take it as read that people of the wrong [...] group are either not people or at best inferior people, and direct their efforts to promoting those views, planning how to eliminate those they dislike, and normalizing that behavior.
Funny, this is a pretty good description of cancel culture. "Warning" and "complaining" is one thing, but advocating for the complete social and professional ostracization of people for increasingly arbitrary reasons is happening, and has been happening. _That_ sounds like extremism to me.
Assuming constant bad faith on the part of people, _most_ of whom don't have any control over the views they hold, is not how this problem gets solved.
The trouble is, not everyone is saying that. A few people here and there are, but what about more nuanced positions? For example, what if someone says they're fine with trans people and think everyone should live their lives however they want, but trans M2F athletes shouldn't participate in professional sports because it's not fair?
That's an incredibly polarizing statement, and anyone who makes it is immediately labeled a bigot. Some go as far as to say such an individual is "denying trans people exist."
There is a lot to unpack there, and people can have reasonable debates about both sides of that statement. Adam Conover and Joe Rogan have had such a conversation (it's a really good episode of the Rogan podcast), but many people refuse to even listen to it because they consider Rogan a bigot/alt-right-adjacent/etc.
You have to be careful because there is so much room in the details and someone not accepting 100% of x world view doesn't immediately make them a terrible human being.
In the past, controversial views converged over time. Hegel talked about extreme ideas as a thesis, those trying to keep things as they are as the antithesis, and society eventually moving together toward some kind of synthesis. All that feels like it's been thrown out the window for extreme left or extreme right ideology.
> That's an incredibly polarizing statement, and one who makes it is immediately labeled as a bigot. Some go as far as to say such an individual is "denying trans people exist."
Great point. A huge part of the problem is language. Subtlety and nuance gets stripped away (especially online), so that everything you disagree with is an unconscionable violation. It's the language of clickbait.
I'm sure the idea is to break through the noise and mobilize people against perceived injustice, but all it does is make the noise that much louder.
> You have to be careful because there is so much room in the details and someone not accepting 100% of x world view doesn't immediately make them a terrible human being.
Like what? What could the "not a terrible human being" part be about someone who wants the state to disacknowledge someone's humanity?
Who do we trust to be the arbiters of extreme positions? I would consider the dismissal of using biological science to define male and female as an extreme position, and yet I could be banned from Twitter by saying as much.
While that's true, I think it's worth pointing out that it is a mirror of your own statement that "the average left-leaning person today doesn't see bigots even as people any more." Neither statement is true. They are the same kind of hyperbolic/uncharitable extrapolation of a group's inner thoughts.
> That's true, it is a mirror, which should be disturbing to anyone on the left who doesn't want the left to be a mirror of the right.
I mean, sure, but I was meaning to make you think about your own tendency to fall into the same trap.
Also, the left can scarcely avoid being a mirror of the right in many ways, given that both groups are made up of humans. The influence of ideology on social dynamics is overstated.
> What does saying, "I don't think you deserve to be heard" say about the person saying it?
But see, you're putting words in people's mouths here. This saying is far from the average left-leaning position.
Even if we were to look at the subsection of the left that wishes to de-platform certain individuals, they generally wouldn't say something as crude and caricatural as "I don't think you deserve to be heard." This is, perhaps, what you think that they think, but this is uncharitable and dismissive.
Generally, I wouldn't. People may want to deplatform other individuals for a variety of reasons.
A common one is to impede the spread of ideas that they consider dangerous, because they believe that these ideas will cause human suffering down the line. Whether these ideas truly are dangerous, whether their obstruction is effective, or whether it may backfire, that's another debate. The point is that they genuinely believe these ideas to be mind-viruses of sorts, and they genuinely believe that deplatforming is an effective way to impede them. If they are correct on these counts, then their actions are justified.
There are other possible rationales. One would be concern about bandwidth: platforms have limited bandwidth and can only spread a limited number of ideas, and people also have limited bandwidth and can only be aware of a limited number of ideas. The spread of bad ideas may therefore harm us by "clogging" the system (the resurgence of flat Earthism may be the most egregious example of wasted bandwidth).
I don't mean to debate the merits of these reasons. I just mean to point out that the "deplatformers" do have well thought out rationales. And if they are wrong, which is certainly possible, they are not obviously wrong.
Why is this suddenly a discussion about extremism?
Because that is my area of expertise and I wish to contribute to the discussion. While not all bigots are extremists, all extremists are bigots.
"Warning" and "complaining" is one thing, but advocating for the complete social and professional ostracization of people for increasingly arbitrary reasons
I'm not making an argument for cancel culture, which has flaws of its own. When I talk about extremists I have narrow and specific criteria for inclusion in that category, most importantly the advocacy of genocide.
Assuming constant bad faith on the part of people, _most_ of whom don't have any control over the views they hold
You seem to be assuming that most people are mindless, and further that I'm talking about the broad mass of people.
> I'm not making an argument for cancel culture, which has flaws of its own. When I talk about extremists I have narrow and specific criteria for inclusion in that category, most importantly the advocacy of genocide.
No one would disagree with that definition. Your comment's parent made the point that exclusionary language is used in many in-groups, not just bigoted ones.
Your contribution was to say that the parent is talking about left-leaning people like they are - in your words - genocide advocates. I don't know that this is a quality contribution.
> You seem to be assuming that most people are mindless, and further that I'm talking about the broad mass of people.
Are you saying that people generally have control over their opinions and predispositions? I know I can't change my views at will, Lord knows if I could I'd become a bit more woke to fit in better.
The parent to my comment was complaining about the attitude of left-wing people towards bigots. I'm leveraging my knowledge of extremism to explain why left-wingers feel such antipathy towards bigotry: because of the genocidal outcomes associated with it, which nowadays are actively promoted by some folk.
Of course, extremism is by no means limited to one particular ideology. I'm not a fan of Maoists, for example.
I believe one can certainly change their views, although it's hard work.
> Extremist bigots take it as read that people of the wrong ethnic/religious/political group are either not people or at best inferior people, and direct their efforts to promoting those views, planning how to eliminate those they dislike, and normalizing that behavior.
Yes, extremist bigots do tend to favor deplatforming: they just want to deplatform a different group.
Surely we can behave better than the bigots?
> You are, essentially, projecting the characteristics of bad actors onto other actors who are warning and complaining about said bad actors.
Yes, but it's not just projection: the shoe fits.
If we behave like them we are no better than them. We on the left can't claim tolerance if we only tolerate people we agree with.
> That's a tactic known as 'entryism' designed to leverage your nice, conflict-averse personality as a vehicle for normalization and possible future recruitment.
LOL at me being conflict-averse. Check out my post history.
Yes, extremist bigots do tend to favor deplatforming: they just want to deplatform a different group.
I'm not talking about deplatforming, I'm talking about the sort of extremist bigots who advocate, organize, or engage in murdering people. You are free to present deplatforming as an equal ill to that if you wish.
If we behave like them we are no better than them. We on the left can't claim tolerance if we only tolerate people we agree with.
Tolerating disagreement and tolerating murder are not really equivalent. You're not obliged to give someone a hug if they're trying to stab you, for example.
> You are free to present deplatforming as an equal ill to that if you wish.
Sigh. Can we not do the obvious straw man arguments?
> Tolerating disagreement and tolerating murder are not really equivalent.
That's true.
Bigots and murderers are also not really equivalent. Nobody is talking about tolerating murderers, so again, let's try to not toss around straw man arguments.
> Yes, extremist bigots do tend to favor deplatforming: they just want to deplatform a different group.
> Surely we can behave better than the bigots?
In a vacuum, I'd agree, and in the real world I do agree: in a true human-contact-based forum, the voice of many regular, busy people will always trump the voice of a few raging bigots. Culturally, though, we've moved past that, at least in the urban centres where this sort of discussion could actually happen.
On the internet, it is different. Posting on the internet is gamified. The rules are simple. To get more influence, you need to be upvoted/favorited/hearted. If your opinion sucks, you are downvoted/blocked/etc.
It's simple, right? But it's also very easily gamed via astroturfing/botting/upvote-downvote farming/influence manipulation. Case in point, any political subreddit prior to the general election in 2016.
Because of this, good-faith discussion in popular forums simply doesn't happen anymore. Just look at how many garbage posts sit at the top of any popular subreddit versus actually insightful posts.
Politicians who use these platforms to gain grassroots support have learned to game the system, and enterprising individuals from all over the world are flocking to them. There is big business in upvote/downvote farms, botnets, and influence manipulation via social engineering. Clearly none of this is done in good faith.
Places that have been deplatformed are not always simply harboring alternative opinions. They are often places, or groups of people, that wilfully try to debase discussion via the aforementioned methods.
There are bad actors on all sides of any discussion, but it seems to me like organized bad faith is always at the core of the most toxic, polarized places on the internet.
To fix polarization, we must fix the gaming mechanics of these places. More moderation for cheaters is priority number 1.
I think you are right to identify the gamification of social media as part of the problem, but we need to be careful not to lump every opinion we disagree with in with astroturfing/botting/farming/etc. Manipulating the gamification systems is clearly not in good faith, but there are plenty of real people with odious opinions that they hold out of fear and/or ignorance, yet hold in good faith.
Yes, I spend a great deal of time thinking about that. In general, the best antidotes to extremism are getting a girlfriend (most extremists are male and heterosexual), having a kid (more so a daughter), settling down, and ageing out. Socioeconomic conditions are a major driver of extremism, which is one reason it tends to flourish in adversity, when there is a large supply of pessimists available for recruitment.
There are of course many other approaches to deradicalization. Personal contact and bridge-building is ideal, but it's slow, expensive, and scales poorly for much the same reasons as why it's not practical to solve all social problems by telling everyone to go to therapy.
>planning how to eliminate those they dislike, and normalizing that behavior
This sounds like de-platforming to me.
When I say 'eliminating' people I specifically mean killing them, and no I don't think that's the same thing as de-platforming people.
What you are doing here is called "vilifying." Where you identify that someone is disagreeable to you and attribute everything they do to bad faith.
It is not. I haven't identified anyone in particular as being disagreeable or practicing entryism, but rather pointed out the existence of such a rhetorical tactic that can be used in bad faith. You seem to be confusing my suggestion that the person I was responding to is a victim of such a tactic with their being a user of it.
> Yes, I spend a great deal of time thinking about that. In general, the best antidotes to extremism are getting a girlfriend (most extremists are male and heterosexual), having a kid (more so a daughter), settling down, and ageing out. Socioeconomic conditions are a major driver of extremism, which is one reason it tends to flourish in adversity, when there is a large supply of pessimists available for recruitment.
How are you defining extremism?
> When I say 'eliminating' people I specifically mean killing them, and no I don't think that's the same thing as de-platforming people.
This is a motte-and-bailey argument[1]. You're arguing on a topic about bigots talking on the internet. I'm not arguing that we should allow murder or calling for murder, and I'm not aware of anyone who is arguing that, so you're just presenting your opinion as "I'm against murder" which is not controversial. But in the larger argument you're supporting the deplatforming of a pretty large group of people, and very few of those people are actually calling for murder.
If you're anti-letting-people-call-for-murder, great, we're in agreement on that. But that has literally nothing to do with the overall discussion.
> It is not. I haven't identified anyone in particular as being disagreeable or practicing entryism, but rather pointed out the existence of such a rhetorical tactic that can be used in bad faith. You seem to be confusing my suggestion that the person I was responding to is a victim of such a tactic with their being a user of it.
> Extremist bigots take it as read that people of the wrong ethnic/religious/political group are either not people or at best inferior people, and direct their efforts to promoting those views, planning how to eliminate those they dislike, and normalizing that behavior.
Uh, well, isn't that the same as what some left-wing activists advocate for those they disagree with? Deplatforming, censorship, criminalisation?
I don't think it's fair to call this phenomenon "leftwing". My perception is that those who advocate deplatforming/censorship are centrists who call themselves "progressives". They do this mostly in order to support existing power structures that they fear are threatened by free speech. Very few actual leftists in the American context are against free speech.
I wanted to stress that I didn't mean it as a left-wing phenomenon. It's just that the GP seemed to describe it as a peculiarly right-wing one, and I think we've seen instances on the left as well (particularly recently). And while the left is generally much more averse to physical violence than the right, it is also more socially accepted and mainstream, so its outrage storms are more pronounced and its calls for censorship, and sometimes criminalisation, meet much less resistance.
I didn't agree with your first comment above, as I mostly focus on violent extremism, but you do raise a good point about the general intensity of polarization and how there is certainly support on the left for social sanctions against those we disagree with.
I don't exactly subscribe to the proposals in your example article, but I do think that climate denial is a sort of fraud with real externalities, and I have suggested that people should stop listening to or engaging with known climate deniers. It seems to me that there's something fundamentally wrong with the idea that it's OK to lie about products or policy for the sake of profit and then blame the victims of predictable externalities for their credulity or lack of preparedness.
One thing we've learned in the age of social media is that false information is considerably more likely to go viral than true information: good news for the entertainment industry, less good in areas like public health or policymaking. The ease of transmission is likely due to false information leveraging easy prior assumptions over difficult, unintuitive ones, such that cognitive bias could be said to yield a "liar surplus" with economic value to the deliberate proponent of untruth. This may eventually lead to the development of weighting tools that assess the reliability of information by analyzing the rate and direction of its transmission, but as long as it's profitable to sell false information, we will continue to get more of it.
I really like your comment, but I don't know what the end goal is. Does that mean we should never try to give our opinion?
Maybe I don't understand... I think it would be a good thing if every individual, with their beliefs, slowly normalized others to their feelings and beliefs.
Exactly. Extremists who espouse intolerance count on the tolerance of the opposing groups to get themselves accepted and their extremist viewpoints normalized. When they do get deplatformed, marginalized and discredited, they complain about it, which itself serves to recruit people, especially those who are unfamiliar with such tactics and mistakenly think that their side is being unfair.
My problem with this "intolerance-is-okay-against-the-intolerant" thing is I'm lumped in with some extremist politics because of my religion. I don't feel at all treated as an individual (or even treated fairly) by either the left or the right. Some people who identify as the same religion have been homophobic. Since I was of voting age, I have never been against legalizing gay marriage, or treating homosexuals differently in the law at all. But I get flak for my religion from the tolerant left. They stereotype me every bit as much as the right stereotypes Islam. But they sure think they're being tolerant, by supposedly shunning intolerance.
You say that "some people who identify as the same religion [as me] have been homophobic" but I wonder if that's not minimizing the truth. If you're part of an opt-in demographic that's known for negative characteristic X, even if you yourself don't exhibit that characteristic, you're going to receive some transitive association -- and it's at least somewhat warranted because your participation is voluntary.
It's only known for that, because of a few high-profile examples. It's like saying that Muslims should just expect some transitive association with terror groups. I mean, they choose to still be Muslim, right?
One reason why it is difficult for me to tell is that I personally know a lot of people who say the same thing, i.e. that they are in favor of gay rights, but then they donate thousands of dollars a year to an extremely powerful and wealthy religious corporation that consistently preaches against homosexuality, drives gay teenagers to suicide, and puts their financial and political weight toward preventing gay rights in multiple states.
The counterpoint to that is that allowing platforms for toxic views to exist and grow (a) "normalizes" these views and (b) makes it very easy for them to recruit new members, since they have a "legitimate" platform to spread their hate from.
Personally I am fine with them isolating themselves in cesspools like Voat. While they are not being exposed to contrary views there, they are also not exposing large amounts of other people to their views.
Another factor is that toxic views are massively amplified by bots (and the bad actors behind them) and not providing these with an easy-to-exploit platform is a good thing in my opinion.
Isolation / segregation / echo chamber (whatever you want to call it) is amplifying toxicity.
We used to socialize in the streets, in the marketplace, in towns. Maybe the town's butcher was known to be a bit of a racist hick, but he was still a decent shopkeeper and his kids went to the same school as yours, so you tolerated his antics. There was the town slut, the town's drunk, the guy who was a known communist sympathizer, etc. People with their flaws, and qualities, and somehow we managed to make it all work.
What happens when those people now each socialize on their own online social network with people exactly like them? That can't be good.
As I mentioned in another comment, what happens today is that they already socialize in their own online social networks with people exactly like them. They do not tolerate one iota of dissent or open discussion, and will ban anyone with contrary views or criticism.
If we allow them space on our open platforms, without demanding that they tolerate our views, that does not result in any kind of real conversation, it just allows them to spread their toxic views around.
Have you ever actually talked to people who are unlike you, or who hold extremely different viewpoints? Because this whole "they do not tolerate one iota of dissent or open discussion" is very far from my experience.
It's true, you usually can't engage people when they are pumped up on adrenaline shouting at each other on Twitter, but that's like jumping into the middle of a brawl and saying "those people will never be able to have a conversation" when they don't immediately stop fighting and listen to you. I've talked to people from far left to far right, from vegans to conspiracy theorists, and with very few exceptions, all of them were happy to explain their ideas and viewpoints and were pretty accepting when I expressed a different opinion. They may well believe that I'm wrong, but they didn't mind talking to me after learning where I disagree with them, and they didn't become combative either.
I believe a very important part about "open discussion" is to be honest and compassionate. Don't talk down to people, don't mock them, don't tell them they are scum, or evil, or destroying humanity etc and then say "they don't tolerate dissent" when really they just don't react kindly to hostile behavior.
> I believe a very important part about "open discussion" is to be honest and compassionate. Don't talk down to people, don't mock them, don't tell them they are scum, or evil, or destroying humanity etc and then say "they don't tolerate dissent" when really they just don't react kindly to hostile behavior.
I feel like this is the goal online communities would like to see of their users. Be polite, accept good faith, etc. HN tries to have it in its own guidelines.
The startup that manages to motivate users to, while disagreeing vehemently on a given argument, remain (civil|good faith|each side remembering that the other side IS human) on target without descending into flame wars or otherwise, would stand to make a TON of money.
I feel like perhaps that is an impossible ideal to pursue though.
I don't know whether they'd make a lot of money, but they would provide a valuable service.
I tend to see things in a functional way, and I often think that the people that fight on Twitter are using that as a way to blow off steam. It's not healthy as in "it will make them have a better tomorrow", but it might manage the pressure before they explode. If your life is shitty and constantly screaming isn't socially accepted, screaming silently by telling somebody to fuck off might be the next best thing, similarly to mental health issues and substance abuse.
I remember when reddit was the cesspool that trolls banned from digg and slashdot went to. Be careful with that line of reasoning, because social platforms rise and fall all the time, and the ones that tend toward free speech, engagement, and controversy tend to become popular quickly.
The problem is the loudest groups of people are people with the strongest political opinions. Regular people are busy. They have neither the time nor desire to police internet arguments or play the counter argument game. And even if someone does, regular people don't have time or desire to wade through it.
One thing I noticed when I had a Facebook account was that, apart from a few exceptions, most of the people I would consider well-adjusted rarely participated in political discussions. Sure, people on average might be apolitical, but there is no shortage of political content on Facebook; it just predominantly seems to be driven by people using pseudonyms or with much less stake in their social identity.
Another issue is that both political parties in the US seem to believe they can achieve their goals by shifting the Overton window, and they are incentivized to polarize people and hijack online platforms to do so.
This leaves a huge black hole of representation for any sort of average political view online, one that cannot be countered with the logic above.
Toxicity is when a site devoted to people mourning a synagogue massacre is overwhelmed by spamming neo-nazis.
Toxicity is annihilating a site's function: a site where climate scientists discuss findings needs to be free of a flood of spam from climate-change deniers and paid petroleum sock puppets. Such a site exists so people can do their job, not waste every day repeating the same basic information, easily available elsewhere, to people who came intentionally to disrupt.
> Toxicity is when a site devoted to people mourning a synagogue massacre is overwhelmed by spamming neo-nazis.
> Toxicity is annihilating a site's function: a site where climate scientists discuss findings needs to be free of a flood of spam from climate-change deniers and paid petroleum sock puppets. Such a site exists so people can do their job, not waste every day repeating the same basic information, easily available elsewhere, to people who came intentionally to disrupt.
You only have to look around on this thread to see that "silencing everyone who disagrees with you" is exactly what's being suggested.
Reddit/Facebook/Twitter are being held up as examples of places where deplatforming has worked, and where deplatforming should be more broadly applied. These aren't single-function sites.
Moderation to keep discussions on a topic existed long before deplatforming, and is not the same thing.
A big problem with giving hate mongers a platform is that they do not return the courtesy.
Try posting a well-written and well-thought-out criticism of Trump on /r/thedonald and see how many seconds it takes before the post is taken down and you are banned. They are not interested in disagreement, tolerance and diversity. They are only interested in having a platform to spew their hate from.
So on the one hand we have a group that wants to have one (1) subreddit for themselves. And on the other we have a group that wants all subreddits for themselves. Seems like two different things entirely.
That is why moderation and balance are important. r/thedonald is an echo chamber lacking balance, and therefore lacking tolerance, diversity, and mutual respect.
Well, to be fair, r/thedonald exists because this exact thing happens to Trump supporters in r/politics.
People love echo chambers, it turns out. It's no surprise that all the big platforms are evolving to provide them. I think the market for well-reasoned argument is smaller than OP thinks.
That's a problem with reddit, whose narrow-focused subreddits encourage groupthink. It's not specific to Trump, I'm sure doing the same in pro-Sanders or pro-Warren subreddits will have the same effect. Or indeed pro-Rihanna or pro-Metallica subreddits.
You'll have better luck with 4chan. People will call you a lot of bad words for sure, but they will also debate your point to death.
I remember when /r/politics was full of conservatives, right-wingers, the tea party, what have you. People would ask them to elaborate on their position, ask for references, or point out the flaws in their logic. The response? Moving the goalpost. Or "both sides do it." After a while, I ignored them because it wasn't worth my time to debate them. We won in a way; conservatives eventually stopped bothering to comment in /r/politics and retreated to their safe spaces.
> I don't know if I know the solution, but I know one thing that isn't the solution: silencing everyone who disagrees with you.
I'm not a fan of trying to SILENCE anyone, however as a private entity, no corporation is required to allow anyone to use their platform for (nearly) any reason. Twitter can ban me today because they don't like my shoes. Facebook can ban me because Mark Zuckerberg thinks I look funny. Google can ban me because I prefer my Alexa to Google Home. Youtube can ban Alex Jones because he spews hate. Cloudflare can ban whoever those white supremacists were for being racist. Taking away their right to choose their customers is just as much an affront to free speech as government censorship.
Several years ago, Sprint kicked off a small number of customers because their behavior toward the company was costing it a lot of resources. I absolutely agreed with them, and that's their right. Same with social media, hosts, etc. Nothing stops these people from finding another way to get their message out, but neither my company nor anyone else has to help them.
The fact that corporations can silence whoever they want doesn't change that silencing everyone who disagrees with you is a bad solution that won't work, sorry.
The rhetoric now is even worse because some people maintain that “words = violence” and real violence is needed to shut down this “wrong think”.
I think some people would be amazed if they realized that it’s OK when people disagree[0] and doubly OK when people have different opinions on how to fix something[1].
Finally it’s become hard or impossible to stand up for the RIGHT of people to express marginalized opinions lest you yourself get accused of having those opinions. Wonder if the ACLU could ever “get away” with defending free speech even for bigoted groups in 2020.
[0] - Some people have genuine and compelling (for them, at least) reasons to be pro life despite society as a whole disagreeing with them.
[1] - Plenty of people agree that global warming is a problem, but disagree with the popular narrative on how (or even how fast) they need to fix it.
If InfoWars says Sandy Hook never happened, and they have millions of followers, ban them. That’s a really low bar for banning. Same with Milo. Not only did he repeatedly lie in a very verifiable way, but he did it with the vigor of a bully.
These are toxic people and institutions that don’t care about the facts. They’re after your amygdala, and we are, to some extent, slaves to our amygdala. You can get a PhD and still retain a lot of your most basic instincts, for better or worse.
If we are slaves to our amygdala, how do we have civilization at all? We externalize cognition, just like we’ve done since the dawn of time. Culture is externalized cognition. Standards are externalized cognition. They are ways of protecting us from ourselves. If you let anyone get on the world’s largest platform, with the world’s largest megaphone, and shout whatever they want, what do you expect other than the devolution of society? And why should we throw our hands up and say “we built the platform, we built the megaphone, anyone can use it”? That’s not a mythological embrace of freedom. That’s stepping back from your responsibility to set standards.
Who decides who's lying and who's telling the truth? Who is the arbiter of an obvious lie? Declaring something an "obvious lie" has historically been used to keep other people down as well.
This goes down a very dangerous road. Banning speech is not the right way to deal with this. When you do, you give the speaker more energy. When people in the civil rights movement in the US were slapped down, it gave speakers like MLK a source of inspiration for their people: to stand tall in the face of adversity.
The line should be violence. Speech is speech. Speech is not violence (unless it specifically advocates violence, which US court cases have defined as unprotected speech). Diablo Valley College ethics professor Eric Clanton was basically let off the hook with a slap-on-the-wrist conviction for hitting someone in the head with a bike lock, largely because the person he hit was alt-right. That is fucked up and totally not acceptable. All violence, no matter who it is against, should be the hard line, and it's not, and that's the real issue.
"Who is the arbiter of an obvious lie?" Easy. The final arbiter is the owner of the platform. It is just one platform. If a platform becomes ban-happy, you can simply walk away, even if you weren't personally banned. Hacker News has plenty of rules and standards, and it will quickly mute and ban someone who violates even a subset of them. That is absolutely not a problem, and the platform is better off for it. It probably has a much larger userbase as a result: the population of Hacker News users is dynamic, and HN can make decisions that benefit many future users at the expense of a few current users. Why shouldn't HN be viewed negatively for that while other platforms are? Isn't that arbitrary?
Thought exercise: Let's say you were the mayor of a large city. There are multiple areas in the city where you can shout whatever you want during certain hours. Would you let someone hook up an internet-connected megaphone in each area and pipe in hate speech everywhere simultaneously? And what if we were in a future where physical bots indistinguishable from humans crowded into those public spaces, vociferously indicated their agreement, and cheered him on?
That future is now.
In reality, Facebook owns those public spaces. No one actually goes to that spot in the park anymore. Mark Zuckerberg is the mayor. Oh, and it's not just one city, he's everyone's mayor, everywhere. If you don't use Facebook but you're on Whatsapp, you're still in his jurisdiction. As for the "during certain hours" part? Nope, those internet-connected megaphones are blasting 24/7. Those bots are nodding their heads 24/7. Our mayor also collects megaphone usage fees for himself. He gives you dopamine points, likes, and social validation each time you come back for more. Freedom, yeah.
If you're going to argue that truth is relative, fine. Let each person decide their own truth and broadcast it. Let each person decide what to believe and decide what is good and what is evil and let them constantly fight. Constant misery.
Science was a good idea for civilization to adopt as defining truth.
Hey, I did this, and then THAT happened! You try! Wow, it happened again! Let's see if it always happens! It does! Don't believe me? Try it yourself!
Eventually, yes, you have to trust specialists, because there's too much to try on your own. So you vet the specialists, and the specialist specialists. You don't just throw everything away and listen to the first fat idiot who says everything that happens is a lie and paid actors staged Sandy Hook.
Banning speech is not the right way to deal with this. When you do, you give the speaker more energy.
Questionable; long-term traffic ranking data doesn't seem to bear out your hypothesis. In the case of Infowars, one might note that people put up with its increasing toxicity for a really long time. You don't seem to give any weight to the impact on the Sandy Hook families of Jones' long-running campaign of calling the entire thing a hoax.
All violence, no matter who it is against, should be the hard line... and it's not, and that's the real issue.
You seem to be arguing that the law gives left-wing people a pass while not doing the same for the alt-right, but a very similar case with the politics reversed took place around the same time and had a very similar outcome (i.e., probation).
Agreed. As I have said previously in a similar discussion, I don't think censorship works very well beyond a few specific cases (yelling "Fire!" in a theater and things like that). I would rather racists and misogynists and all sorts of other assholes were free to express their opinions and engage in debate with the rest of us than that they were hiding in some dark-web, invite-only circle jerks. Because however distasteful I find the former, I find the latter outright scary.
Bad news: Both are happening at the same time, which is colloquially called 'hiding your power level.' See note 11 in the linked article for an example, though you'll want to use Tor or at least Incognito mode.
https://knowyourmeme.com/memes/hide-your-power-level
> I don't know if I know the solution, but I know one thing that isn't the solution: silencing everyone who disagrees with you.
I found that ignoring people on IRC who would sprinkle any conversation with racist or misplaced remarks helped me quite a bit. Of course it only affects me, but even after polite attempts to steer the conversation, it never changed. Same on Twitter: at some point there is one of me and many of them, and the cost to engage is just too high.
> but I know one thing that isn't the solution: silencing everyone who disagrees with you
This assumes that the people on the internet are arguing in good faith. They aren't in a lot of cases. The problem I have is willful ignorance and the associated culture that promotes the people who spread lies.
I don't think this is that important because there are far more people reading than posting. Even if the person you're responding to isn't arguing in good faith, there are people reading the discussion with an inquisitive mind.
But isn't the number of people reading a problem per se? The "deniers", those who keep shifting their arguments until they've gone full circle to the starting point, are not trying to convince you, nor are you trying to convince them. They're competing with you for the attention of the reader, trying to fill the space with the illusion of an argument.
Sure, in person I think treating people politely will often allow you to come to some kind of middle ground, or at the very least help you understand each other.
The anonymity of the internet makes that impossible. This isn't even taking into account that some people are actively just trolling, or are state-sponsored accounts/bots/whatever trying to muddle a discussion.
Silencing/deplatforming is a form of signaling to one's in-group rather than an action for the greater good. In general, the more people know they are being watched, and the bigger the audience, the more likely they are to choose their actions based on signaling value, not effect.
It's scary to read the number of people responding to you with "actually, the solution IS to silence everyone who disagrees with me, we should do that as quickly as possible."
Especially the number of them who don't even seem to have read past your first sentence.
The "toxic" people are trying to be honest about real issues, and they tend to get shouted down for it. Then the real issues persist, decade after decade, but they cannot be talked about, it's too politically incorrect. A few more people try, get shouted down, rinse and repeat the process.
Then people wonder why a segment of the population is extremely cynical and bitter and polarized when it comes to any sort of debate on controversial issues...
>I've found that when I actually talk to so-called "toxic" people politely, they are willing to listen. It's not them that are causing the polarization.
Hardly; toxic isn't a word for 'people speaking their minds' or 'speaking up about problems'. It's someone who is abusive, unsupportive, or emotionally unhealthy towards others, and in some cases themselves. Or so the dictionaries say.
Or... toxic might be someone who is oversensitive, unable to accept other people have different viewpoints, and retreats to their safe space whenever someone disagrees with their world view...
This might sound mean because it is text, but it isn't, it's just answering your question. But the word (actually term) you're looking for is social justice warrior.
I can't find a good definition from credible sources, but Wikipedia and the cited references (https://en.wikipedia.org/wiki/Social_justice_warrior) refer to it as a pejorative term for an individual who promotes socially progressive views, not
> someone who is oversensitive, unable to accept other people have different viewpoints, and retreats to their safe space whenever someone disagrees with their world view
But the article does mention:
> they are pursuing personal validation rather than any deep-seated conviction, and engaging in disingenuous arguments
and
> unreasonable, sanctimonious, biased, and self-aggrandizing
which does seem to have plenty of overlap with the general definition and notion of someone who would exhibit toxic behaviour.
Ironically, it seems to be a term (SJW as well as 'toxic') that was invented (SJW) or re-applied (toxic) just so one group of people could label another. Which is doubly ironic, because neither the labeling nor the behaviour validates one's own point or invalidates the other's point, and it doesn't further any conversation or discussion. It's practically just inflammatory filler.
(by the way, no, it didn't sound mean and I do like to review the popular labels and circular discussions from time to time -- much healthier for me than to be in the middle of it all the time)
Communication and language work on shared concepts and wording, which can be collected as a set of definitions. If we were to not do that, language and communication would not work very well, and with that there would be no foundation for sensible discourse. Or at least, that is what I have seen and experienced so far.
> If we were to not do that, language and communication would not work very well, and with that there would be no foundation for sensible discourse.
You're right, but we misunderstand each other and disagree all the time because language can be subjective.
Take your definition of toxicity: "abusive, unsupportive, or unhealthy emotionally towards others and in some forms themselves". All of those words are subjective.
Abusive is in the dictionary as "harsh or insulting". Harsh is defined as "excessively critical". Excessive means "going beyond the usual, necessary, or proper limit". Now define "usual". Language is circular; at some point you just have to know for yourself what you mean.
When someone labels someone else as "toxic", they're using their own subjective interpretation of "normal", "harsh" and "abusive". And in these situations language doesn't work very well like you mention.
What can help is something less subjective like "I felt angry when you did X". It's more vulnerable and takes more responsibility, but it is less subjective than calling someone "toxic" and can't really be argued, which leads to better understanding and better discussions.
As someone who had a seven year account banned from YC previously for debating against socialism, you're wrong. Regular debate is labeled toxic all the time if you are anywhere to the right of pure Marxism on the political scale.
The "toxic" people are trying to be honest about real issues
Sadly there are lots of reasonable-seeming right-leaning people whose discourse about 'real issues' sooner or later circles around to 'race realism' or 'the jewish question' or some other ideological Pandora's box. It's often later; a common rule of thumb among far-right extremists is that it takes at least 2 years to reliably indoctrinate a person online.
Here is an example of how these persuasion techniques are deployed, albeit a slightly offensive one. This particular example (spread across 3 pages) aims toward recruitment of people for the 'Qanon' conspiracy which is popular among some right-leaning people. While it's orthogonal to other far-right extremist movements, it is a useful example of a developing militant ideology.
It makes you wonder what cool stuff we could have done with the time and resources that are currently being put in to that. But I suppose that the people that are putting their time and resources in to that are categorising it internally as 'cool stuff' already.
> Sadly there are lots of reasonable-seeming right-leaning people whose discourse about 'real issues' sooner or later circles around to 'race realism' or 'the jewish question' or some other ideological Pandora's box. It's often later; a common rule of thumb among far-right extremists is that it takes at least 2 years to reliably indoctrinate a person online.
I think an equally large problem is reasonable-seeming left-leaning people who hold some opinions so spectacularly absurd in the name of ideology that it generates very fertile ground for right wing commentary to resonate.
Haven’t been banned yet, but there is definitely a brigade that will instantly downvote you if you try to explain libertarian/free market ideas. Never thought that would happen, since Silicon Valley was founded with similar ideas.
Just because people mislabel things doesn't mean the meaning of the word suddenly changes. While languages evolve, it's not as simple as having a bunch of people on a website re-declare a word at will.
Regarding debate or YC or socialism: unless people get abusive, I find it hard to believe that it would be called toxic. If you could link to it or provide an excerpt of content that would be called 'toxic' without that content being 'abusive', that would be helpful.
I debated against socialism / expanding government in a thread about UBI or something similar. I was told by a mod that my comments were deleted and that they were "tedious" and "ideological flamebait".
When I complained about his moderation by saying this:
>A moderator just a hair left of your current moderation approach on the authoritarianism scale would "flag and remove" PG's entire recent article about income inequality with a note calling it "tedious ideological posturing."
For that comment, my entire six-year account was banned and I was told to email ycombinator to have it restored.
When I emailed to have my account restored, I was told this:
>We're happy to unban accounts when people give us reason to believe that they won't repeat the things that led us to ban them. In your case it's a little less clear what that reason might be, since you've expressed how much you dislike a bunch of things about HN, our moderation, and me. But if there's something we could do to help you have a change of heart about those things, let me know.
So basically, since I complained about his moderation, he took it personally, and only if I was willing to retract my criticism of his moderation could I be unbanned? I was never able to get clarification on what exactly needed to happen for me to get unbanned. The gist was: if you argue consistently in favor of free-market approaches to anything economic, you will be accused of "tedious ideological flamebait" and eventually be banned.
Take it for what it's worth, but I was never what you would call toxic by any standard definition, and those were my results.
I can't find a recent article on HN containing debates on/against/about/for socialism with banned accounts in them, so it's hard to see the content/context that would then be called toxic. But I do see in your reply that you mention "tedious" and "ideological flamebait", which is not mutually exclusive with toxic behaviour. I suppose someone could write flamebait or be tedious without being toxic.
How would you know? You can't see any of the comments because the mods generally remove them all.
The reason why YC seems more left-leaning nowadays compared to a few years ago is exactly what the grandparent of this thread suggested: anyone right-leaning (or libertarian) has been harassed off the site / strongly discouraged from sharing their perspectives.
I wouldn't know in an absolute sense, which is why I wrote "I suppose". To determine if some text could be described as toxic you do have to read it at some point, and without that (if they are deleted as you wrote earlier) it becomes much more theoretical.
I find this hard to believe to be honest.
HN is owned by a Silicon Valley venture capital investment firm. If anything they lean liberal/libertarian (which indeed can be a bit woke on social aspects, but certainly not when it comes to capital).
I've found that when I actually talk to so-called "toxic" people politely, they are willing to listen. It's not them that are causing the polarization.