I agree with most of the article, but the author faltered in exactly the area I predicted he would before I began to read. He expresses fear that without some restraint, sites might become "like 8chan". To which I have to ask: why is that a problem? Because it's a hotbed for hate speech? What is hate speech? I'm sure I don't need to elaborate further on what I see wrong with that. What people seem to forget/ignore is that hate speech and other forms of taboo content a) existed before 8chan, 4chan, and other sites with similar cultures, b) would continue to exist even if these sites were removed, and c) currently exist on every other digital platform where people may create communicative content. Taboo content will always exist. You may not like it. I may not like it. But even if it were made illegal to say anything stronger than "I do not accept <x>.", people would still go on the same way as before, albeit likely with more subtlety.
With that in mind, I'd also argue that there should be spaces for "anything goes" content. If people are going to do it anyway, why not give them their own space? Let it be an outlet for those who choose to partake in such content. If we aren't willing to accept certain kinds of content everywhere, let's at least allow it somewhere. It will still exist regardless of what we do, but at least if it has its own space we can separate ourselves if we so desire.
Lastly, to make sites like 8chan out to have no value other than content some might find deplorable is, frankly, as closed-minded as those whom the author apparently fears. If you hadn't guessed already, I use 4chan regularly, and it's brought much more positive value to my life than negative. The technology board is what got me into programming in the first place. It's through them that I learned about Linux, the benefits of FOSS, the history of computing, and skills that have led me to be successful in my life. They are even the ones who led me to Hacker News. Maybe the messages they gave me were worded in a way that some deem undesirable. But when looking at the message itself and not the envelope it's wrapped in, there is value to be found in these spaces. If the culture is not for you, that's fine! I understand. But please, don't see that as a reason to condemn those of us who do use these sites. We have a right to exist too.
As you say, the problem is not 8chan, or "the daily stormer" for that matter. But it _is_ Facebook. Anything-goes sites like 8chan have so much reality-free and bizarre content that they have only the tiniest impact. Yet they provide insight into those mindsets and their trends.
But Facebook and Twitter inject extremely harmful falsehoods into reams of fluff and corporate posts. This gives them far more legitimacy, weight and exposure than they would otherwise have. And since those two have made agitating posts part of their business model, regardless of historic levels of societal harm, the problem is not fixed.
I would be appalled if anything I built were used to do such harm; I would use every ounce of my being to stop it. If necessary, I would shut the whole thing down rather than have my name remembered forever as such a facilitator. Obviously Zuckerberg has a different mindset and is clearly no Alfred Nobel.
I really wish people could find a different expression to convey what they mean by this, because what they/you mean by this is probably worlds apart from what this phrase actually meant when it was formulated.
> ""Shouting fire in a crowded theater" is a popular metaphor for speech or actions made for the principal purpose of creating panic. The phrase is a paraphrasing of Justice Oliver Wendell Holmes, Jr.'s opinion in the United States Supreme Court case Schenck v. United States in 1919, which held that the defendant's speech in opposition to the draft during World War I was not protected free speech under the First Amendment of the United States Constitution."
I think we can all agree that protesting the draft should be protected by the first amendment.
> I really wish people could find a different expression to convey what they mean by this, because what they/you mean by this is probably worlds apart from what this phrase actually meant when it was formulated.
A bit, yeah. It's right there on Wikipedia [0]:
> The paraphrasing differs from Holmes's original wording in that it typically does not include the word falsely, while also adding the word "crowded" to describe the theatre. The original wording used in Holmes's opinion ("falsely shouting fire in a theatre and causing a panic") highlights that speech that is dangerous and false is not protected, as opposed to speech that is dangerous but also true.
Additionally:
> The First Amendment holding in Schenck was later partially overturned by Brandenburg v. Ohio in 1969, which limited the scope of banned speech to that which would be directed to and likely to incite imminent lawless action (e.g. a riot).
Isn't it understood that the person yelling "fire!" is by assumption doing it falsely? The "[falsely] yelling fire in a crowded theatre" is immediately relatable by everyone and conveys the idea. What is the problem?
The problem is it conveys the wrong idea. Falsity of expression is no grounds for persecution. Moreover, the word "fire" has been yelled many, many times in many, many crowded theaters (during films, plays, concerts, etc.) without anything bad happening. The problem is that it tried to describe a very narrow situation (falsely causing panic which leads to people being hurt by making them believe there's imminent danger) by describing a much wider situation, and then drawing conclusions as if the wider situation were the narrower one.
That is exactly what happened when this phrase was first used (it was used when a person was prosecuted for speaking against the war under the guise that he was "hurting the war effort" - you can see how this trick works) - and it has been used many times since to validate persecution of speech that hurt nobody, but that you could construe, in some byzantine logical way, as speech that could in theory hurt somebody, if you really, really want to believe it. And that's exactly the problem.
> The sheer scale of social networks makes the effect fundamentally unlike prior scenarios.
Is this even true? There are a billion people on Facebook, but it's not like a given post ever gets seen by more than a small fraction of them. Nearly the entire human race has been reachable by postal mail for over a hundred years; the size of the network really has nothing to do with anything.
What has changed isn't the size of the network, it's the culture. The CDC and a thousand independent doctors say that vaccines don't cause autism, but too many people don't believe them over the mother of a kid who both has autism and was vaccinated, even if that mother has no real evidence that one caused the other.
The problem there isn't that someone is saying something which is wrong, it's that other people are too credulous of unverified claims. Nobody should believe outrageous claims from some random stranger with no domain knowledge who is contradicted by multiple domain experts, unless they're stricken with enough curiosity to actually replicate the experiment themselves. And even then they shouldn't believe based on the word of the stranger but rather their own observations or those of someone they themselves find trustworthy.
The solution isn't censorship, it's to teach people how to properly evaluate evidence instead of being credulous and easily manipulated fools.
I think your parent is basically referencing the fact that you can find large groups of people who share your views, no matter what they are. Nervous about vaccines? Great, you can find literally thousands of people who affirm your fears, and you needn't even speak to anyone who doesn't. That's what's new, that's how scale changed things.
That's not scale, it's the opposite of scale, and it's very much more the way things worked a long time ago.
When communication was expensive you could only talk to people in your physical vicinity. If they had stupid ideas, you had stupid ideas. A town even a few dozen miles away would have an entirely different set of stupid ideas.
What's different now is that the clusters aren't based on geography, so there is more overlap and you encounter people whose set of stupid ideas is different from yours. That's actually a good thing -- it's how a lot of them get stamped out, because they're so obviously bad that they can only survive when no one is even attempting to challenge them.
The problem now is, instead of there being a third world village full of people who think crazy things and whose children die of preventable causes but nobody else has reason to care that they exist, now there's a few of those people in every town getting infected with measles and then showing up to school with your kid. So now we have to care more about idiots being wrong because said idiots live closer to home than they did before.
But the way to fix wrongness isn't censorship. That's how you make wrongness increase, because as soon as you erroneously censor something true, nobody can say it in order to argue that it shouldn't be censored.
The way to fix wrongness is education and debate. Which is real and continuous work instead of a switch you can flip and make the problem go away. Sorry.
There certainly are anodyne parts of 4chan--but they're obviously, to the point where your post comes off as disingenuous, not the problem. Hell, they could migrate back to Something Awful, like a lot of us who grew up ended up doing. (Which, just recently, torpedoed FYAD, the last vestige of the edgelord era. The FYAD types seem to have fucked off to 4chan and elsewhere. Nobody was surprised.)
Hate speech is abhorrent and its purveyors should be ostracized from decent society. But you can deal with the crazy screamer on the streetcorner. The organizational facet is not merely distasteful but threatening. These are places to air doxxed information. They're places to gin up raids. Remember "4chan is not your personal army"? It isn't real anymore, if it ever was. For the last five years, 8chan in particular (largely because 4chan has an ownership group that maybe-just-maybe is scared of real consequences, which tells me that real consequences work) has been the personal army of a lot of sadsacks who, just to name one facet, really really really want to make the lives of women who have the temerity to make or talk about video games really horrible.
These places are literally-not-figuratively hives for fascists to radicalize the clueless, and that is a threat at a societal level.
One of the more, ah, interesting things about this is that a lot of the key activists who've been pushing for this crackdown on online speech literally are the Something Awful edgelords. If I recall rightly, at least one of them is now on the NYT editorial board making sure that the proper SA-social-justice ideas are disseminated from its pages. There's a good few journalists out there who graduated from SA too.
But they already are. The crazy screamer on the street corner causes actual harm to you. Avoiding 8chan is trivial; just don't go there.
That people get radicalized going there seems like a poor argument. If I understand you correctly, the principle is the following:
1. Someone holds acceptable views.
2. They visit 8chan.
3. They now hold abhorrent views.
Why should 8chan be blamed for this, or rather, what did they do wrong here? Don't you agree that people have the right to hold opinions without interference, and to seek, receive and impart information and ideas of all kinds, regardless of frontiers?
Radicalization is not about opinions; it is about convincing the radicalized to undertake acts of violence and threats thereof for a political purpose. You get, yeah, that these are also places where these little shitheads gather and decide whom to brigade (the aforementioned "oh, hmm, that lady has opinions about video games, let's try real hard to destroy her life" dreck that the various chans indulge in when they're bored), and geek each other up to the point where somebody thinks sending a SWAT team at somebody is a great idea? And that the management is shockingly hands-off about letting them do it and have been for years now?
Swatting is quite illegal anyway - and so is plotting a 'swatting' online, by the "clear and present danger" standard. Free speech is irrelevant to this. Brigading might be a problem but both sides are doing it, so why blame only 4chan and the like while letting Twitter and Tumblr off the hook?
> Radicalization is [...] about convincing the radicalized to undertake acts of violence and threats thereof for a political purpose
I haven't heard it used that way. Wikipedia, abbreviated:
> Radicalization is a process by which an individual comes to adopt increasingly radical political, social, or religious ideals and aspirations that reject or undermine the status quo or contemporary ideas and expressions of the nation.
As in, adopting extreme views. And that raises the question: if you have 100'000 people and tell them all to go read Mein Kampf, then some of them will undeniably go, "yeah that Hitler guy seems like a pretty smart fellow". What is the solution? It can't be to say that extremist speech should be banned. If so, we have to burn most of Western philosophy.
And it seems like "it is morally justifiable to use violence to achieve political ends" is just another political belief, correlating strongly with the strength of your political views. So all you are saying is that some political views should be banned. I don't like this.
> And that the management is shockingly hands-off about letting them do it and have been for years now?
The management did nothing wrong. Each site has a different vibe. In the case of imageboards, it's usually 'anything which isn't explicitly prohibited by law'. It's worth noting that 4chan bans the behavior which you describe.
Even if they didn't, it raises an important question: where should the line be drawn?
It seems easy to think of 4chan or 8chan as a discrete entity, which should be shut down. But say that we had some physical entity, like a pub, which for some reason attracted an extremely racist clientele who liked to go have a few drinks and then beat up the immigrants.
Should the pub be held liable for this? What about serverless mailing lists, where I manually put everyone on the list in the "reply to" field, or Usenet? Should that be 'shut down' too?
The thing I don't like about this is that both counterparties are willing. If you're spamming someone, you have person A who sends to person B, but person B doesn't want to hear from person A.
With an imageboard, you have person A who sends to B, C, and D, but B, C, and D all want to hear from A. Should they be stopped in this endeavor?
Let's take a concrete example: how about people swapping recipes for chemical weapons and discussing their efficient deployment against unknowing civilians? Or people discussing and developing (largely anonymously) the logistics of mass shootings, with a view to maximizing the body count?
They're just having a chat. It's unethical to censor them. Perhaps they should be brought in on some other charges, but there's nothing illegal about talking. They're not threatening to kill anyone, are they?
In practice, you see this as well. There's a lot of people posting on the Internet about how they're going to do this and that, but in practice most of them are just blowing off steam.
I get why a site like Facebook, which wants to have community standards, would like to enforce them, but it should be up to the proprietor.
>the aforementioned "oh, hmm, that lady has opinions about video games, let's try real hard to destroy her life" dreck that the various chans indulge in when they're bored
What are you referring to when you say this? Because if you're referring to Gamergate then that's quite a mischaracterization. I'm not saying that bad things weren't done during it, but it wasn't about hating on women in video games.
And aren't most swatting incidents unrelated to chans? Mostly it seems to have been about people getting angry at one another playing video games or related to streamers of video games. Does that make Twitch complicit?
To some degree? Sure. But if it truly was about hating women talking about video games for being women, then how come some of the idols of the movement were women themselves? If it's all about misogyny then shouldn't they hate people like mombot just the same? This is why I said that it's a mischaracterization.
Having been on the internet since a young age, I've always found it strange that the internet is, well, so "srs business" these days. The internet has never been, and as you say never will be, a harmonious place where everyone is nice to each other. We can't even achieve this in physical spaces, which can actually enforce these kinds of rules!
Maybe I'm lucky, but I find it really easy to just dissociate from online comments. Whenever I read something hurtful or mean directed at me on the internet, I can very easily shrug it off as coming from "some internet person" and move on, although admittedly I've never been the target of harassment like some people have, which I'm sure is distressing. I think part of this is that, growing up, I was always told: "Don't use your real name online! Don't give out your personal info!", and so for me, being online was always about being someone else, a different persona if you will. These days that seems completely out of the window: everyone uses their real names everywhere, uploads actual pictures of themselves, and generally their real life seems directly integrated into their online life, which is just unthinkable for me. This probably makes online abuse that much more personal for them?
I suppose another big change, of course, is that all the small forums that used to make up the internet are now increasingly being consolidated into just one or two sites. Again, back when I was a child trawling the internet, if you didn't like one forum, you just hopped to another, and there were always plenty more (side note: remember the 'affiliate' tags on sites? I miss those. They made the internet feel more like a big group of communities, but I digress). Of course now, if someone is being harassed on Twitter or Facebook or something, well, they're not going to go somewhere else, because a lot of the competition no longer exists.
I don't really know where I'm going with this comment honestly; I'm just rambling. What I do know is that regulating the internet is, and always has been, completely futile. So long as someone, somewhere, can spin up a forum on a VPS or on a computer at home, and people can connect to it, there are going to be sites hosting every kind of content imaginable. If they can't even bring down The Pirate Bay with the music industry's power and money, they'll never stop people saying nasty things.
Expressing opinions vs making specific threats is a very clear thick line for me, and yet somehow whenever I read about takedowns and such it seems many people have trouble making the distinction.
“I hate x and wish they had never been born”
Is not
“I am coming to your house tomorrow with a weapon”
Yet people treat them similarly.
From there it just becomes “anything that scares me is a threat.”
Then, it becomes obvious that people are in general scared of cultures and attitudes very different from their own.
And so there’s no tolerant chaos anymore.
It’s a motte-and-bailey argument.
Very few people support doxxing and/or incitements to violence.
But it is amazing how much censorship happens in the pursuit of curbing such threats.
In some psychology class long ago, I learned that one of the most self destructive things someone can do is perceive emotions as information. As in "That made me mad so what they did was clearly wrong". A lot of therapy is geared to shifting people away from that cognitive error.
But that concept is thrown out the window when it comes to Twitter/Facebook/etc.
I can't count how many times I've read "I'm literally shaking now" used as an indictment against someone else's perceived transgression. I think out of all the twitter cliches, that one makes me roll my eyes the most.
Unfortunately this is kind of how the human brain works. It presumably evolved from a simpler design that only had "emotions", adding a higher level processing function that began as something very simple, evolving over a long time into the complex neocortex we have now. The primitive emotional brain is still present, with an imperfect "interface" to the newer brain.
I would like to echo your statement. I've always found it bizarre how people conflate insults with threats. In the vast majority of cases we can draw a very distinct line between the two. On top of that, it's not like this has any less ambiguity in meatspace.
Well, I mean, like I say, I'm pretty careful not to let out personal information online (although I'm sure someone could probably track me down if they really tried). Without being in that situation it's pretty hard to say. People say lots of things on the internet, but it very rarely translates into reality, not without some kind of actual reason to do so. It's easy to say while not being in that situation, but unless I believed there was an actual reason to target me, someone wouldn't put their own life at risk just to follow what some other random person on the internet said. Luckily I live in a pretty safe place, so I'd probably inform the police and let them take it from there.
If it were a credible threat of physical harm (it rarely is), then I'd make a police report, be more vigilant with my self-defence preparation, and tell my loved ones to do the same.
Attempting to censor the information would be pointless at best, and could easily make it worse.
I completely agree and am glad I am not the only one who feels like that.
I believe a lot of the internet of that era changed with Facebook: it was the first prominent social media site that encouraged (mandated, even) you to use your real name and information, and a lot of people went with it. Myspace, MSN, and similar before that were all aliases, even if you knew the people in real life. This removed a level of privacy and made attacks more personal.
> Let it be an outlet for those who choose to partake in such content
I normally don't argue for censorship but I will throw out that if you believe the following things:
1. There exist thoughts and beliefs which are dangerous to (my) society
2. These thoughts and beliefs are normally tempered through social pressure in the dominant culture
3. Through the internet you can create your own subculture which reinforces your thoughts and beliefs to become the new normal
Then it makes sense to try to stamp out any places where you can form these groups, since you (the dominant culture) want to maintain the status quo.
This isn't necessarily unique to present-day politics; there was a time when civil/economic rights advocates were a radical faction in a broader culture which wanted them stamped out too.
You certainly may _want_ to eliminate such places and/or such beliefs. But stamping out places or beliefs doesn't really work. People find or create new places, and what is left is just ill-will and further political division. Beliefs become martyred, and the Streisand effect draws more people in to investigate.
The way to reduce bad beliefs is to, IMHO:
1. Ingratiate and investigate. Respect the people, become their friend, listen to their full arguments (don't imagine straw men), understand their motivations. You might even discover the belief isn't so bad. If you don't, continue on to:
2. Suggest alternate ways of viewing the situation. People will always argue their point and appear to not accept your view. But I've found they almost always actually move slightly towards your point, oftentimes after the conversation, even if they won't admit it. It takes time for a worldview to shift.
Arguing makes no sense prior to ingratiating yourself, because if you are an enemy your view will not be appreciated.
One very important way to counter hate is to try to have everyone in society involved in it somehow. People are much less willing to wish ill on others in society (or on society itself) if they feel that they are part of it. A lot of stories about people being radicalized start with a person feeling at the end of their rope and alone.
This isn't something that you can really do online, but it is something that can be done in real life and should probably be done more. Friends are a powerful force.
This is exactly the way that the broader society changes the beliefs of individuals. I think the danger is that individuals can now create their own subcultures which are more significant (to them) than the dominant culture. At some point you don't need to talk to anyone outside of your subculture.
One example is subreddits: over time they develop a subculture, and the mechanics of downvoting and upvoting mean that people with more moderate beliefs can end up leaving after a while, leaving the subreddit to become more and more polarized over time.
This makes it hard to have these types of empathetic conversations.
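To make that concrete, here's a toy simulation of that attrition dynamic (entirely my own construction - the population size, the engagement model, and the stay-probabilities are illustrative assumptions, not anything measured from Reddit). It only shows that differential attrition, with nobody changing their mind, is enough to make the surviving population more extreme:

    import random

    random.seed(42)

    # Opinions live in [-1, 1]; 0 is moderate, +/-1 is extreme.
    members = [random.uniform(-1.0, 1.0) for _ in range(10_000)]

    def mean_extremity(opinions):
        return sum(abs(x) for x in opinions) / len(opinions)

    for rnd in range(10):
        survivors = []
        for x in members:
            # Assumed engagement model: more extreme members draw more
            # upvotes/replies, so they are more likely to stick around,
            # while moderates get buried and tend to drift away.
            p_stay = 0.5 + 0.5 * abs(x)
            if random.random() < p_stay:
                survivors.append(x)
        members = survivors

    # Mean extremity climbs from ~0.5 toward ~0.9 as moderates leave.
    print(len(members), round(mean_extremity(members), 2))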
What people in general want is exactly that type of censorship. Just look at the way the public in general, media/companies in particular, react when you say Eric Ciaramella's name.
1. From a technical point of view, it's poorly designed software for 2020. For example: you can't measure how many of "we" are refrigerators. You apparently don't reflect on how that may open up exploits to a wide variety of attackers in "their own space" for the "anything goes" crowd. (Same for HN, for that matter.)
2. From a practical point of view this type of software serves as a poor entry point for a persuasive free speech argument. I much prefer using E2EE messaging apps as the entry point for a strong defense of free speech in the digital era. We know bad actors have used them and will likely use them in the future. Nonetheless, the rest of us use them and will fight to continue to do so on principle (e.g., for security in a weaponized internet, for privacy, for research purposes, etc.). Those principles are at the forefront, as is the fear that will be used in the counter-argument and which we must resist. We know the value, we know the cost, and we can argue the point clearly.
If we instead start the argument with poorly designed, buggy software that requires a bird's-eye view of the network in order to properly assess when and how it's being exploited, we're going to be fighting on two fronts (as well as ignoring the risks of the design, as I believe your comment does).
Also, if you ask for (legal) spaces for "anything goes" content, you need to campaign not just against "hate speech" laws, but all the other laws that prohibit various kinds of information.
Maybe, but probably not. I encourage people to read the full Karl Popper quote, which contains this:
In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant.
The claim isn't that we should denounce those who hold opinions we find abhorrent as intolerant and fail to tolerate them; it's more specific than that, it says that we may need to physically fight those whose 'opinions' are backed by force or the credible threat of it.
I don't think vile, anonymous Internet racists rise to this measure. Your mileage may vary. Most of the Interneterati who invoke the paradox of tolerance are saying that, e.g., being intolerant of the LGBTQ, or racially chauvinist, or less than enthusiastically feminist, is grounds for censorship and deplatforming.
I don't agree, and don't get the impression Karl Popper would either.
>it says that we may need to physically fight those whose 'opinions' are backed by force or the credible threat of it.
"Internet radicals" have come to life in the real world, and organized such that they threaten violence, either racist ("race war now") or anti-semitic. Popper's point specifically talks about "rational argument", of which the pure rhetoric (as pointed out by Sartre) is the only tool anti-semites have at their disposal. They do not argue with rational argument.
If my mileage does vary, what then?
Marcuse wrote about the requirement of "rational argument" in 1965:
"Now in recalling John Stuart Mill's passage, I drew attention to the premise hidden in this assumption: free and equal discussion can fulfill the function attributed to it only if it is rational expression and development of independent thinking, free from indoctrination, manipulation, extraneous authority. The notion of pluralism and countervailing powers is no substitute for this requirement. One might in theory construct a state in which a multitude of different pressures, interests, and authorities balance each other out and result in a truly general and rational interest. However, such a construction badly fits a society in which powers are and remain unequal and even increase their unequal weight when they run their own course. It fits even worse when the variety of pressures unifies and coagulates into an overwhelming whole, integrating the particular countervailing powers by virtue of an increasing standard of living and an increasing concentration of power."
I've been trying to avoid an object-level discussion here, because that gets boring pretty quickly.
But the criterion you're proposing applies just as well to Antifa as it does to the Three Percenters, unless you add in a heaping helping of "has opinions I don't like".
Which I'm unwilling to do, because there are people who don't like my opinions, and this kind of thinking gives them ground to persecute me when it's my turn.
So where do I draw the line? Somewhere between the Patriot Prayer crowd and Atomwaffen, basically. The Proud Boys are skirting right along that line, and catching lengthy prison sentences for their trouble. So I think the system we have is working, and don't feel a need to give it more teeth.
I think the kids who show up in black and start street fights are a gift to the far right. I wish they'd hang it up; their hearts are mostly in the right place, but it's strategically stupid.
I'm not sure at what level the tolerance would end: if there's a group on the Internet which occasionally launches or encourages random acts of violence such as beating up LGBTQ people or a bombing or truck attack, does it deserve to be left in peace? Should only the people who commit the real-world violence be suppressed, and site itself left alone to prepare the next attack? Or does a group need to rise to the level of outright rebellion, like ISIS creating its own state in the Middle East, before it needs to be shut down?
If someone hangs out and shitposts on a marginal forum, what is the responsibility of the other people on that forum for their criminal actions? If they're co-conspirators, that's one thing, but making shitty memes and shooting up a synagogue are very different things.
I've noticed that every single one of the recent mass murderers from the white supremacist camp has a Facebook account, but no one seems to want to shut down Facebook, just 8chan or whatever.
You mentioned truck attacks; someone recently drove a van through a tent full of people canvassing for Trump in Florida. Do we deplatform wherever he was hanging out online, or is it just the combination of "did a terrible thing" and "has opinions that we don't like" that justifies censorship?
I don't know what the line is, exactly, only that a lot of people are really eager to draw it way before the point I'm prepared to.
>You mentioned truck attacks; someone recently drove a van through a tent full of people canvassing for Trump in Florida. Do we deplatform wherever he was hanging out online, or is it just the combination of "did a terrible thing" and "has opinions that we don't like" that justifies censorship?
This would make the paradox of tolerance laughably restricted - it would require that we think only on the level of individual action, and that there would need to be nothing short of completely unanimous support for an action. This misses out on the idea of stochastic terrorism. From another angle, let's say there were a society of Nazis in the real world. Most, but not all of the group argue that the Jews should be murdered. One of the group murders a Jew. By your metric, we would only be allowed to revoke tolerance of the group if every member murders a Jew. Isn't that a very high price to pay? Would we only be justified in revoking tolerance from a child porn forum if every member has abused a child?
Ok, but please don't take HN threads further into ideological flamewar. It's against the site guidelines because it takes discussion away from curiosity and into smiting enemies. Also, it's repetitive and therefore tedious and therefore off topic.
"What is hate speech? I'm sure I don't need to elaborate further on what I see wrong with that."
Absolutely you need to elaborate.
I think your comment implies a misunderstanding of populism and the kinds of things that go on in this world. By that I mean: because we actually do live in a world with a lot of rules, I suggest most people don't realize just how bad it can get.
"Anything goes" means ISIS showing beheadings of civilians in order to recruit more young men to the cause which works very well as a recruiting tool.
I'd encourage you to look into the Rwandan genocide - it was done through media. Radio broadcasters on either side denounced the others as 'cockroaches' and 'vile creatures' who needed to be 'exterminated', passing lies and false information and then, at one point, calling for outright murder. The 'war' was not between armies; it was a lot of regular people taking to the streets, massacring each other with machetes en masse. Like a 'zombie' film come to life.
And of course, the idiot teaching people to make sarin gas in their basement for fun, or to make sure that those 'cockroaches in group XYZ' can be slaughtered.
This is what 'anything goes' means.
The label 'hate speech' can surely be over-applied; for example, in Quebec a comedian was fined $30K for making a joke about a person in a wheelchair. Sometimes uncomfortable arguments are labeled 'hate'. This is obviously ridiculous.
But - there's absolutely no doubt that 'hate speech' exists and can twist particularly young minds into some really devious world views.
Particularly in the context of 'fake information', the Holocaust, for example, could be 'erased' as a legitimate element of history.
Some of the stuff that is 'normalized' among some kids is pretty shocking, even just some casual use of language.
The 'internet' is now the commons, and like any other commons, though we need to err very much on the side of freedom of expression, there will be limits.
I see at least two thresholds: 1) the civil commons, like most areas where you can't go around screaming about this or that - a private office, or even a public square where you can't harass people; and 2) more private spaces, where people should be able to do almost anything they want (i.e. 'almost anything goes') and the threshold should be really high, but true 'hate speech calling for violence', for example, should be a problem. ISIS beheading videos should be taken down. People calling for the mass murder of 'some group' can be taken down. If you're trying to teach people to make sarin gas in their basements, this can be taken down. Etc.
> Lastly, to make sites like 8chan out to have no value other than content some might find deplorable is, frankly, as closed-minded as those whom the author apparently fears.
It's nice that someone actually came out and said this.
1) How do we deal with the fact that an Internet mob can horribly damage a random individual?
This is a really serious issue. Even a single stalker can cause a nightmare for a person and they will have difficulty fighting back. An Internet mob can be completely devastating. There needs to be some way for an individual to cost effectively fight back against online "assault"--especially when it's a mob.
2) How do we deal with the whole "stochastic terrorism" problem?
Stirring up a bunch of people until finally one of them does something horrific and then sitting back and saying "But I didn't say to go shoot them, I only posted their picture and name with cross-hairs." needs to have some penalty. We didn't used to care about this because only the truly powerful could say "Will no one rid me of this meddlesome priest?" and have grievous harm come about. Now, with the Internet, you can be much less powerful and have your words reach far more people--some of whom will be sufficiently disturbed to carry things out.
3) Facebook and Google are effectively monopolies. How do we deal with that?
We have historically dealt with monopoly by forcing breakups. The lack of monopoly actions in the last several decades has brought us to where these behemoths have no fear.
I see a problem with points one and two. That problem is partially what spurred the creation of the First Amendment of the Constitution (USA). Trying to control the speech of people is both impossible and forces escalation. When people desire to communicate, they will do so, about whatever topic they choose. Trying to stifle or manipulate this communication will fail eventually; either those who see themselves as part of the "ruling body" will accept this failure, or they will escalate with more robust controls. Rinse and repeat this through a few iterations, and you end up with the Great Firewall of China. The only way to deal with communication that turns hostile or negative is to increase the communication skills and tools of those involved. Conflict resolution, empathic reasoning, detailed vocabulary, and strong argument skills will allow people to communicate clearly and reasonably. A communicating group with many members who have these skills will deal deftly with these issues, and carry a thick enough skin and strong enough values not to be swayed by the moral poisoning that tends to happen in the forums in question.
1. No, there should not. People should have the freedom to congregate and to discuss. If people are hurt by this, too bad. There is no solution.
2. There isn't one. If I say something, there are some insane people who will hear it, and some of them might kill someone. Too bad, but it doesn't make me a terrorist.
3. Facebook and Google are natural monopolies, and they should be regulated like telecom carriers.
> 3. Facebook and Google are natural monopolies, and they should be regulated like telecom carriers.
Malarkey. If Facebook was a natural monopoly they wouldn't have to keep buying competitors like Instagram and WhatsApp because the barrier from the natural monopoly would prevent the then-competitors from gaining significant market share to begin with. The failure there was on the part of antitrust in allowing Facebook to buy them, and even that isn't fatal as long as they don't continue to do it. Social networks only have a popular lifespan of a decade or two and exist in parallel to multiple other competitors. That is unless one company is allowed to buy them all, and keep buying them as newer ones become popular.
And Google doesn't even have network effects. All they have is a better search engine than anybody else, which only lasts until that stops being the case. It's hard to complain about it as long as they're giving the results you actually want -- especially when competing search engines do exist and can easily be used by anyone who wants to.
> 1. No, there should not. People should have the freedom to congregate and to discuss. If people are hurt by this, too bad. There is no solution.
Hogwash. We have harassment, stalking, libel, defamation, and slander laws in meatspace for a reason. The fact that doing it on the Internet exempts you from these laws because enforcement isn't "cost effective" is a problem.
The Libertarian Internet(tm) was tried and failed. People suck; groups of people suck worse; individuals need to be protected at some level.
The question at this point is whether we put laws in place such that individuals can seek cost-effective justice, or whether we wait until so many people get burned that they start clamoring for the government to step on the Internet with jackboots.
> Hogwash. We have harassment, stalking, libel, defamation, and slander laws in meatspace for a reason. The fact that doing it on the Internet exempts you from these laws because enforcement isn't "cost effective" is a problem.
All those are illegal online too. But if I go to the pub and discuss with my friends some harassment that I'm going to do later, the pub clearly isn't held liable for this. Why? Because even though he owns it, the pub owner has no responsibility for what is said inside the tavern walls.
So, what is taking place on these forums is rarely illegal.
> The question at this point is whether we put laws in place such that individuals can seek cost-effective justice, or whether we wait until so many people get burned that they start clamoring for the government to step on the Internet with jackboots.
And then they move to mailing lists, or FMS. Then what? You going to shut down Tor as well?
My opinion is this: people should grow thicker skin. Those who fail to do so pose a threat to society, insofar as they call for further censorship. Therefore, anyone who is harmed by these people can post-hoc be considered to have deserved it.
It used to be the case that people were recommended to not use their real name on the Internet for basic safety reasons. Now they are encouraged to do so, and whoever doesn't is branded a 'troll'. Who is really at fault here?
> A motivated harasser or mob can basically force you completely offline. You cannot answer that with "grow thicker skin".
If they have observed proper precautions, the psychological issues are the only factor that remains. Those are solved by growing a thicker skin.
> And what about if your business is online? Most states require that your public business filing include your real name.
If so, they should vote with their feet and observe proper precautions, such as incorporating in a jurisdiction with strong privacy laws, or not incorporating at all (Bitcoin). Examples of the former include New Mexico, Wyoming, Nevada, Seychelles, British Virgin Islands, and more.
> Your glib solutions betray your lack of experience with the problem.
No, your glib solutions betray your lack of experience with the problem. Censorship is never an answer as it is, unto itself, a moral failure.
Freedom is the right to be wrong. If I am free as long as I only do it "The Right Way", I'm not free. People need to try different things, make mistakes, be foolish. Without that freedom, we become a(n at least metaphoric) theocracy.
Society's disapproval cannot be incorporated into law in a free society.
> If I am free as long as I only do it "The Right Way", I'm not free
This is a shallow definition of freedom - the sort of idea of freedom that children instinctively have when they're told to go to bed and start throwing a tantrum. It is largely what the internet was built on, maybe not accidentally, by people who in many ways were juvenile adults.
There's a deeper version of freedom, a more adult one, that recognises not only one's capacity to do what one wants, but responsibility for the social good at large. Freedom doesn't just consist of running against the current; it also consists of maintaining enough order so that authentic expression of freedom is possible. A society where everyone is in danger of being robbed while crossing the street is not free, it's just in chaos.
Likewise, an internet without hierarchies, without controls, and without enforcement of social taboos - be that by law, by gatekeepers, or through exclusion - incentivizes the very worst actors to run amok. Social approval should be incorporated into law (either literally or culturally), because places that don't do it become uninhabitable (as the current public internet increasingly is).
> that children instinctively have when they're told to go to bed and start throwing a tantrum
Would you be happy if right now someone forced you to go to bed at arbitrary times when you are not tired and potentially had work to do?
> who in many ways were juvenile adults
Are you calling them immature, youthful, or something else?
> a more adult one
No, a more authoritarian/collectivist one that considers abstract concepts such as society and morality as more important than the personal freedoms of a human. It takes the individualism out of a person and only considers them as a cog in a bigger machine which can be replaced at will. This mentality is what led us to absurd laws such as the ones that forbid sodomy, or alcohol, or any other victimless crime really.
> it also consists of maintaining enough order so that authentic expression of freedom is possible
What is this supposed to mean in the context of the internet?
> Social approval should be incorporated into law
Which goes back to the point about lgbt rights. Until recently most people were against them in the west and most people are still against them outside of the west. You should not need your village's permission to commit a victimless act.
> because places that don't do it become uninhabitable
Should it not be up to the "rulers" (that is, the server owners) of these services to decide their rules then? It is not like it is the business of someone outside the community whether said community becomes uninhabitable.
> as the current public internet increasingly is
You say increasingly, yet it is only recently that rules have been adopted to satisfy the wider social pressure to censor.
> Would you be happy if right now someone forced you to go to bed at arbitrary times
the issue is of course - and that's what children don't realise - that seemingly arbitrary rules exist for good reason. In this particular case it's because children are not able to delay immediate gratification. And I, as an adult, am in fact still constantly obligated to participate in fairly arbitrary rituals, at work or otherwise.
>Are you calling them immature, youthful, or something else?
I'm saying they often exist in an environment that is very isolated from larger obligations to a community. The hacker and tech CEO as a middle-aged man in a hoodie, often obsessed with youth and in a sort of state of permanent teenage rebellion, persists for a reason.
> that considers abstract concepts such as society and morality as more important than the personal freedoms of a human
society is not an abstract concept; the need for cooperation, order, and safety is present in every decision, every day. It's individual freedom that is an abstract concept. In the case of the cyber-libertarianism that is prominent in the tech world, it's arguably not even 30 years old, and it only functions within a severely restricted space, both physically (say, in the valley) and digitally (in some corner of the internet).
>What is this supposed to mean in the context of the internet?
it means that, if the public internet is supposed to be a healthy space, individuals who harm the well-being of the larger community need to be able to be held accountable for their anti-social behaviour.
>Which goes back to the point about lgbt rights.
it's true that communities and social cohesion can sometimes unjustly punish people; however, well-functioning systems are capable of reform, and this is no reason to abolish social order altogether. To put it another way: do you think that within the LGBT community, laws against hate speech on online platforms are popular or not popular? Marginalised groups in particular see the value in collectively maintaining order; they least of all are going to believe in mythical individualism.
>the issue is of course - and that's what children don't realise - that seemingly arbitrary rules exist for good reason
Maybe you were a lucky child for whom that was always the case. For myself and many people I know, a lot of the rules/commands leveled at us were genuinely arbitrary, and the presence of arbitrary rules like that has a way of delegitimizing ones that only seem arbitrary.
> Likewise, an internet without hierarchies, without controls, and without enforcement of social taboos - be that by law, by gatekeepers, or through exclusion - incentivizes the very worst actors to run amok. Social approval should be incorporated into law (either literally or culturally), because places that don't do it become uninhabitable (as the current public internet increasingly is).
"Uninhabitable?" To me it seems like the web is plenty full of people, especially in the gated communities of the social networks. Small community forums are still running as they always have. I would even say that things have improved noticeably since 2016, which was a low point.
When you say "uninhabitable", I suspect that is actually code for "uninhabitable to me." I also find certain parts of the web distateful, and I avoid them. There are other places where I spend quite a bit of time, and I don't find them uninhabitable at all, nor do I find them becoming generally worse off.
The problem with your statement about taboo enforcement is the question of which taboos do you select? There is no such thing as an impartial viewpoint here. Creating a moral arbiter with actual power is just creating another cultural battlefield that will oscillate between left-wing and right-wing control. In that, it will further entrench the current two-party duopoly and prevent meaningful change. It will also force businesses to be dragged further into the conflict, and may eventually lead to explicitly partisan companies--something I hope to never see.
Frankly I find the mores of contemporary society to be awful. I wouldn't want their taboos enforced on me. In fact, I would actively resist it using whatever measures I could muster. But since I have the option of moving to spaces where dissent and nonconformism is allowed, I choose to do so.
>especially in the gated communities of the social networks. Small community forums are still running as they always have
exactly my point - focus on "gated" and "small". What do you think distinguishes gated communities from the generic Facebook newsfeed? The right to censor, to judge who gets in or not, to be able to exclude bad actors, to have stake and identity when commenting, and to set common rules for conduct. The internet is transforming from a public space into an internet of private gated communities.
In this the internet perfectly mirrors declining cities that do not maintain their commons. A lot of fractured, tribal communities emerge who are not in contact with each other, the marketplaces and houses degrade, and a huge amount of resources is spent on simply insulating oneself from everyone else. It is not a pretty picture.
As to who gets to decide what taboos exist: there is no need for a moral arbiter. In communities that function well, taboos and rules reflect the consensus of their constituent members. The fact that Mark Zuckerberg gets to make institutional decisions for 2 billion people is another sign of dysfunction.
This sounds like more of an argument for smaller, niche communities. I agree with this. How can you contain all of humanity's diversity of opinion and experience into a single "platform" with coherent norms? You can't. It's a fool's errand. Small communities do manage their own norms in a way that is consistent with the values of their members, and they develop their own methods for doing so; there is no need to create a legal enforcement mechanism.
> This is a shallow definition of freedom - the sort of idea of freedom that children instinctively have when they're told to go to bed and start throwing a tantrum. It is largely what the internet was built on, maybe not accidentally, by people who in many ways were juvenile adults.
You sound like a very silly child to me, with arguments such as this.
Please don't cross into personal attack. It breaks the rules and damages the commons, so we ban accounts that do it. But also, it discredits what you're saying, so it's not in your interest to post like this.
> If I am free as long as I only do it "The Right Way", I'm not free.
I don't think punching you in the face because I disagree with your opinion is a legitimate form of free expression, so I think your claim is overly broad. Clearly there are some right and wrong ways to express your freedom. Absolute freedom is simply incoherent.
That said, I agree with the general thrust of your point that we need to be forgiving with people's expression. People are imperfect, should be able to make mistakes and they should not suffer for the rest of their lives for earnestly expressing an unpopular opinion or engaging in debate on a contentious issue.
I think this will become a civil rights issue in coming years.
One thing that I find amusing is that the same people worried about the government becoming too authoritarian, want to give the government the power to stifle “hate” speech, which can easily be defined to mean what the government wants it to mean.
It's almost like, on some level, they don't actually believe that they're the downtrodden ones sticking it to those with power. Like this is just a talking point that's used to justify the actions they're trying to take with the power they claim not to have, and to dismiss any concerns about the morality and consequences of those actions as concern trolling - after all, if you really cared you'd be worrying about what the people with power were doing, not trying to police the powerless - and then thrown aside when it's not useful.
What makes this particular approach really dangerous, of course, is that the easiest - and perhaps only - way to enact this censorship is if the people demanding it actually do have power within the existing system and aren't particularly oppressed. There's some truth to the oft-repeated argument that this can't be oppressive because of power structures - if, say, a bunch of women of colour who're prominent for their activism within their community think this way, that affects no-one. If a chunk of the existing white male political establishment latches onto this for their own goals (like, say, defeating the Republicans), on the other hand...
Both the left and right (and centrist!) authoritarians are only annoyed when things that they agree with get censored. This is the whole point of authoritarianism, after all: that you do not tolerate others' opinions.
I don't really like the premise of the article. It seems to revolve around "how Wikipedia remains so information-rich and useful when the rest of the internet (in his view) is filled with divisive, corrosive misinformation." I think Wikipedia has a lot of problems, and coming at it from a perspective of "how can we repeat Wikipedia's success" cuts us off from talking about a lot of topics that are important.
The first examples I could remember off the top of my head:
The internet does need regulation; what I'd like to see (but don't expect to) is a regime where platforms have to be transparent about their moderating systems.
I have no objection to Facebook, Twitter, Google/Youtube, Hacker News or WeChat aggressively moderating their platform, or even running social engineering programs. Their platform, their problem, humans have lived with that sort of thing for centuries. My issue is that content recommendation systems are not transparent about why things don't appear.
Zerohedge got booted from Twitter recently. I don't mind that sort of thing, because it is obvious what happened; go to their Twitter page and it says the account is suspended. If Google tried something similar and removed them from the index, there isn't a tool (that I know of) saying "Zerohedge has been removed from Google's index", and that is upsetting. I'd support legislation requiring people to provide that sort of information in a public and accessible manner.
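For what it's worth, a crude outside version of such a tool can be built today. Here's a minimal sketch using Google's Custom Search JSON API (the endpoint and its key/cx parameters are real; the credentials and monitored domain below are placeholders, and I'm assuming a site: query count is an acceptable proxy for index presence):

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder: issued via Google Cloud Console
    CSE_ID = "YOUR_CSE_ID"    # placeholder: a Programmable Search Engine id

    def indexed_result_count(domain):
        """Approximate number of results Google returns for a site: query."""
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"key": API_KEY, "cx": CSE_ID, "q": "site:" + domain},
            timeout=10,
        )
        resp.raise_for_status()
        # totalResults arrives as a string in the JSON payload.
        return int(resp.json()["searchInformation"]["totalResults"])

    for domain in ["zerohedge.com"]:  # domains to monitor over time
        count = indexed_result_count(domain)
        print(domain, count)
        if count == 0:
            print("warning:", domain, "may have been de-indexed")

A zero count can't distinguish "removed" from "never indexed", which is exactly why a public disclosure from the search engine itself would be better than this kind of outside monitoring.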
I suppose my big complaint is that if the big companies were organised against a group (maybe Google has an irrational dislike of Russia-based IT companies; it is possible) then that group has no way of identifying that they are being treated differently. This will always be a concern for political minorities.
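To make the proposal concrete, here is a minimal sketch of what a mandated, machine-readable removal-disclosure feed might look like. Everything here is hypothetical - the endpoint URL, the JSON schema, and the `removals_for` helper are invented for illustration; no search engine actually publishes such a feed:

```python
# Hypothetical sketch of a public "index removal" disclosure feed.
# The URL and record schema are assumptions for illustration only.
import json
from urllib.request import urlopen

DISCLOSURE_FEED = "https://search-engine.example/transparency/removals.json"

def removals_for(domain):
    """Return disclosed removal records affecting `domain`."""
    with urlopen(DISCLOSURE_FEED) as resp:
        # Assumed format: [{"domain": ..., "date": ..., "reason": ...}, ...]
        records = json.load(resp)
    return [r for r in records if r["domain"] == domain]

if __name__ == "__main__":
    for r in removals_for("zerohedge.com"):
        print(f"{r['date']}: removed ({r['reason']})")
```

The point is not this specific format but that anyone could query such a feed: a site owner (or a suspicious third party) could check programmatically whether a domain has been removed, rather than having to infer it from vanishing traffic.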
I agree with you, but I don't see that as "regulating the Internet" (which is the idea I don't like at all), because:
1) you're really just aiming to regulate selective censorship (which relates directly to existing monopoly law)
2) why limit it to the Internet? TV and newspapers played exactly the same game for decades, and they still do on some subjects. When no major newspaper mentions a certain story, it has pretty much the same effect as if Google had removed it from the index. You need to first hear about something to know what to google for, and if it's not on a few mainstream media sites, for most people it just didn't happen.
It isn't that (2) is a bad idea on the face of it, just that a further legislative push seems unnecessary as the tools are already in place. I assume the libraries have an index of newspaper articles dating back a few decades that can be used to chart out what stories get mentioned. A similar index of TV programming is probably already available. If it isn't it should be.
> I hear this too-much-free-speech argument a lot these days, but I can’t get used to it.
That has been my feeling for the last decade. When I grew up, there was all this talk about how the Internet would be a truly free medium of information exchange, and what immense benefits this would bring to humanity. Usually there are generations between a new thing appearing and its effects being seen, and one can only read in books what one's ancestors thought about the new thing and marvel at how wrong they were. Now we live in the happy time where it takes mere decades.
And so, 30 years later, the Internet did change the world. And it didn't. We got the awesome freedom of information - and we got the usual suspects gnawing at it from all directions, and unlike what the early enthusiasts predicted, they largely succeeded. Some expected that. What they - and I - didn't expect at all was that the formerly "progressive" movement, academia, and large segments of highly educated and cultured people would join the most oppressive governments in calling to rein in free speech, restrict the free flow of information, and deny people with non-mainstream opinions access to free speech, threatening with vicious harassment and prosecution anybody who dares (or ever dared) to voice a controversial opinion.
I remember when the freedom-of-speech debate was about "should we allow the (actual) Nazis to express themselves publicly" and the answer was "we don't like them, but yes". Now the debate is "should we allow people who support a mainstream political candidate to listen to what she has to say within 10 miles of where we are" and the answer seems to be, again and again, "we don't like them, so no". And it's even worse on the Internet, because it's not 10 miles, it's everywhere. What happened to all those free speech supporters? Did they get old and die? Did they only support free speech because they didn't think anybody would say anything they don't like? I can't believe they were so stupid as to think that. So what happened?
> But the reality is that no matter how much money and manpower (plus less-than-perfect “artificial intelligence”) Facebook throws at curating hateful or illegal content on its services, and no matter how well-meaning Facebook’s intentions are, a user base edging toward 3 billion people is always going to generate hundreds of thousands, and perhaps millions, of false positives every year.
In re: FB, there was an FB engineer saying the other day that it was impractical to have humans review every post.
I almost responded (but refrained, so as not to be snarky) that the word "impractical" here actually means "uneconomical".
No one's forcing Facebook to be what they are.
> no matter how well-meaning Facebook’s intentions are
They want money at the end of the day.
I know I'm beating a dead horse here, y'all know this already, but the service is free to entice the "users" who are then sold to the ad/marketing folks, the REAL users of FB.
It's possible to set up a non-exploitative social network (and indeed, many people are doing just that) but FB/Twitter/etc. have sucked the air out of the room and normal people do not care that they are digital cattle. They just don't, not in any effective operational way.
If somehow FB shut down tomorrow, everything their servers do could be done by other servers. (The data would be a problem - archive it, yeah, but then who gets to access what? That's just more data, though, eh?) FB doesn't fundamentally add anything to the underlying capabilities of the Internet, eh? There are no "magic angel babies" working there.
I think if you somehow went back in time to, say, the 1950s and tried to tell people about FB, they just wouldn't believe it. They wouldn't believe that so many hundreds of millions of people would voluntarily give a private company every detail of their private lives (more than the Stasi worked hard to collect) for no more than the Internet dressed up in a crappy UI.
But people do crazy things as long as everyone else is doing it too and there we are.
I don't really have anything constructive to say. Sorry.
That right there never ceases to amaze me: how everybody willingly joins, knowing they're hurting themselves, out of the need to be in the loop with everybody else.
I’d say you hit the nail on the head. Crazy things indeed.
TLDR: $6.25M is all it would cost to implement Orwellian censorship across Facebook.
TFA says there are "hundreds of thousands, and perhaps millions, of false positives every year." Let us assume there are a million false positives that require human intervention every year. With 50 work weeks in the year, resolving 1,000,000 false positives means resolving an average of 20,000 per week; at 5 days per week and 8 hours per day, that is 4,000 per day, or 500 per hour, or 125 every 15 minutes.
Therefore, a team of 125 people could theoretically solve this problem by each resolving 1 false positive every 15 minutes. At a $50,000 annual salary, Facebook would need to pay out $6.25 million to do so.
Therefore, given "a user base edging toward 3 billion people is always going to generate hundreds of thousands, and perhaps millions, of false positives every year," it would cost FB ~$6.25M to hire a team of 125 people to resolve the issue, per million false positives.
That seems economical for a multi-billion dollar company.
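For what it's worth, the back-of-the-envelope numbers do check out under the stated assumptions (1M false positives, 50 work weeks of 5 eight-hour days, one case per 15 minutes, $50k salaries - all the commenter's assumptions, not Facebook's actual figures):

```python
# Back-of-the-envelope check of the cost estimate above.
# All inputs are the commenter's assumptions, not real Facebook figures.
false_positives_per_year = 1_000_000
work_hours_per_year = 50 * 5 * 8          # 50 weeks * 5 days * 8 hours
minutes_per_case = 15
salary_usd = 50_000

cases_per_hour = false_positives_per_year / work_hours_per_year   # 500
team_size = cases_per_hour / (60 / minutes_per_case)              # 125 people
annual_cost = team_size * salary_usd                              # $6.25M

print(f"{cases_per_hour:.0f} cases/hour -> {team_size:.0f} people -> ${annual_cost:,.0f}/year")
```

(As the replies below point out, the weak link is the assumptions, not the arithmetic.)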
Facebook already employs 15,000 content moderators, both directly and via contractors. By many accounts, they're overworked and don't get enough time to deal emotionally with what they have to see [1] - and they only review a fraction of the content on Facebook. A team of 125 people would clearly be orders of magnitude too small to review everything.
The "hundreds of thousands, and perhaps millions, of false positives" quote is referring to the rate after human moderation (and is also just speculation by the author).
I think people tend to underestimate the scale that would be required to solve this problem because they tend to underestimate just how terrible people are.
A lot of those false positives are going to be things that look very much like a true positive to an underpaid, overworked moderator. I don't think 15 minutes would cut it. Actually, I'm not convinced that any amount of time would, because there are too many occasions when the distinction between hate speech that must be banned and social justice speech that only a bigot would ban rests entirely on the social standing of the person making it within the appropriate circles.
Also, I strongly suspect that Google, Facebook, etc are already applying this kind of manual scrutiny to potential false positives behind the scenes - as you say, it's relatively cheap - and this explains a lot of the problems people have been having with their moderation decisions and why appealing decisions so rarely works out.
The submitted title was "Goodwin: Did the early internet activists blow it?", but we've switched to the article's own title, as the HN guidelines request.
He can discuss multiple things at once. You think the world is single-threaded? He has a life outside of his board role. (I also think the board stuffed up, btw.)
A man who styles himself an internet visionary while abdicating his responsibility to the future of the internet is likely to have an ulterior motive when talking about the internet.
However, I also recognize that our community, like other communities, tends to forgive high-status community members, as long as their lies aren't too obvious.
Well done for simultaneous criticism of him and me. This feels like "if you are not against him, you must be for him" reductionism. I merely note he is capable of being seen, like all of us, as multithreaded.
That criticism wasn’t against you; I was actually thinking about how the community handled Hans Reiser 10 years ago. If you feel personally attacked, though, then that might warrant some introspection.
I think you inverted the logic. His case is: "Hans Reiser was an outsider, but when he turned out to be a killer (which does not strictly relate to his FS work) we dropped him like a hot potato. Contrast that with Mike Godwin, who did a thing the respondent thinks is reprehensible, but because he is a big name, we give him leeway."
He's making a compare-and-contrast argument, as I understood it.
The difference is, Hans Reiser was a murderer. Mike Godwin is just torn between his belief in a greater-good outcome and his conflicted role at PIR/ISOC. I think it's a bit apples-and-fishcakes as comparisons go, but I sort of get it.