Trolls are winning the internet, technologists say (theatlantic.com)
230 points by smacktoward on March 31, 2017 | 407 comments



The internet is as it always was; the only difference is that more people, and more importantly more people without technological prowess or forum/BBS/IRC experience, are accessing it and moving increasingly large chunks of their lives onto the internet.

As someone who fell in love with the internet when I was young because nobody knew I was young, it constantly surprises me when people put intimate details online and then become outraged that they are targeted or harassed. The internet didn't magically make everyone equal; it made everyone equally anonymous (to at least their fellow users, privacy down the stack notwithstanding)... that's why it was/seemed like an egalitarian paradise. Everyone was equal, including the trolls, because nobody knew who you were. Is it that surprising that once people started telling others who they truly were, discrimination, harassment, and invective started showing up again? The internet doesn't make social, political, religious, and ideological rifts magically disappear; it just seemed like it did because nobody could associate users with their backgrounds. Now that we know who everyone is, the internet just makes you encounter more of these rifts due to its global nature.

But when your first experience with the internet is Twitter, Facebook, or any other social media, this lesson is not only never learned but outright inverted. The internet is synonymous with identity to those people. They come completely unprepared and completely open, expecting the internet of the '90s. Obviously things are going to go sideways.

In my ever so humble opinion, this is the disconnect that is causing a lot of the fervor around "trolls" and "harassment". Those who were battle-hardened by the early internet learned how to treat trolls: you ignore them ("don't feed the trolls") because they didn't actually have any ammo. We've given them ammo now, so here are the consequences.


The majority of people who say "don't feed the trolls" have never been the focus of a trolling campaign, I've found. It is easy to claim this when you aren't under constant violent harassment and scrutiny. It is not fair to expect everyone else to have a "battle-hardened" stoicism when you've probably never experienced such a life.

I've never experienced such harassment myself and used to sing the "ignore" mantra too, until I tried to imagine myself in the shoes of those who receive frequent harassment, and I'm not sure I would be able to ignore it at the level they are expected to.

Also, I'm not sure whether you are arguing that a cause of the issue is identity being coupled to your internet presence, but I'm not sure we can go back to the internet of the early 2000s or late '90s, so we need to deal with the world as it exists now instead of assuming an appeal to tradition will solve the issues we have today.


Trolls are a completely different thing than criminal harassment, and it is disingenuous to equate the two.

The primary danger is that by lumping people with legitimate criminal complaints in with people who are sad because someone said some mean words to them on the internet, you don't make others more concerned about rudeness, you make them less concerned when someone has a real problem.


Also worth noting that criminal harassment via the internet is largely a failing of law enforcement, not a failing of the infrastructure of the internet or a sudden arrival of a new species of really mean humans. There are some privacy/free speech issues down this road, but by and large LE have abdicated their responsibilities in this area.

And while it's a different discussion altogether, I think examining why conflating criminal harassment and "trolling" has become in vogue would teach us a lot about how power is used in the modern age.


> LE have abdicated their responsibilities in this area.

On the contrary. One of my side-projects involves running an Internet-to-PSTN wake-up-call service. The service is free to use in the US (because domestic calling is so cheap), which also means it gets used by criminal harassers who send wake-up calls at 4am to their targets (we aggressively rate-limit, which helps mitigate these issues). We are frequently contacted by local police departments across the country (we don't spoof our Caller ID) seeking information about who made the call (roughly 25% of the time we get a subpoena or court order), and almost always we see the call request came from an IP address hiding behind a simple PHP web proxy script or a Tor node.
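
To make the rate limiting concrete, here's a minimal sketch of the kind of per-destination sliding-window check involved (the class name, limits and numbers are made up for illustration; this is not our actual code):

    import time
    from collections import defaultdict, deque

    class WakeUpRateLimiter:
        """Toy sliding-window limit on wake-up calls per destination number."""

        def __init__(self, max_calls=2, window_seconds=86400):
            self.max_calls = max_calls
            self.window = window_seconds
            self.history = defaultdict(deque)  # destination number -> recent call timestamps

        def allow(self, destination_number, now=None):
            now = time.time() if now is None else now
            calls = self.history[destination_number]
            # Drop timestamps that have aged out of the window.
            while calls and now - calls[0] > self.window:
                calls.popleft()
            if len(calls) >= self.max_calls:
                return False  # too many recent calls to this number; reject
            calls.append(now)
            return True

    # Usage sketch: only queue the call if the limiter allows it.
    limiter = WakeUpRateLimiter(max_calls=2, window_seconds=86400)
    if limiter.allow("+15551234567"):
        pass  # schedule the wake-up call
    else:
        pass  # reject the request (or require payment / verification)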

While my story is an anecdote, it does demonstrate that it is very easy to hide your identity online for quick hit-and-run harassment. It's not that LE have abdicated; it's that they've done all they can do, because what can you do with an IP address allocated to Egypt?

While the issue I've described is a societal problem, it does have a technical solution: if the phone numbering system were redesigned so numbers worked more like email addresses, then people could change their phone numbers or use disposable numbers and aliases, which would put an immediate end to unwanted calls... but that's not going to happen.
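
Roughly what I have in mind, as a toy sketch only (no real numbering plan works like this): a directory of revocable aliases sitting in front of the real number, so you burn the alias instead of changing your number:

    import secrets

    class NumberAliasDirectory:
        """Toy model of disposable phone aliases: callers only ever see an alias,
        and the owner can revoke an alias without changing their real number."""

        def __init__(self):
            self._alias_to_number = {}

        def create_alias(self, real_number):
            alias = "alias-" + secrets.token_hex(4)
            self._alias_to_number[alias] = real_number
            return alias

        def resolve(self, alias):
            # The switch would do this lookup; None means the alias was revoked
            # and the call simply cannot be completed.
            return self._alias_to_number.get(alias)

        def revoke(self, alias):
            self._alias_to_number.pop(alias, None)

    # Usage sketch:
    directory = NumberAliasDirectory()
    work_alias = directory.create_alias("+15551234567")
    assert directory.resolve(work_alias) == "+15551234567"
    directory.revoke(work_alias)                  # a harasser got hold of this alias
    assert directory.resolve(work_alias) is None  # their calls now go nowhere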


More phones should just come with whitelist support, where calls automatically go to voicemail if the caller isn't on your list. I have this on my phone due to lots of wrong numbers and it works great.


Details puhlease.


Android, I'm fairly sure, has some relevant things built in. On my Motorola I can set a do-not-disturb time overnight where only alarms and calls from selected people will come through (and some other options and combinations). This is Android 7: Settings > Sound > Do not disturb.

There seem to be a lot of apps for whitelisting generally.


I'm late to the party, sorry. But I use an Android app called Extreme Call Blocker. Nothing gets through that I don't explicitly allow to get through. And in the case of unknown/anonymous callers, they don't even get my voicemail: There's a setting to pick up the phone and immediately hang up for whichever callers you want. It's basically the equivalent of just not feeding the trolls. It only costs a couple of bucks, but it's easily the best five bucks (or whatever) I've ever spent for an app.


iOS 10 doesn't seem to offer a built-in Contacts whitelist feature, but iOS does expose an API allowing third-party applications to filter out incoming calls and messages: https://support.apple.com/en-us/HT207099 - someone could easily write an app that would use your Contacts as a whitelist, assuming one doesn't already exist.


For iOS, I leave DND mode on full time. Contacts that are favorited serve as the whitelist and are not screened by DND. Any second call within 3 minutes will override DND instead of being directed to voicemail.


For iOS, I'm currently checking out Hiya Caller ID and Block by Hiya https://appsto.re/us/cXg16.i

I've had it installed for about an hour now and indeed have not received a single spam call. Experiment at your own discretion.


I have a contact on my phone named "Spam" which currently has about 25 numbers associated with it. I have it set to "Route to Voicemail" (option in the Google Contacts app).


If that were my business I'd charge $.01 to a credit card per call. A bit harder to hide anonymously then.

Or even charge a one-time $1 to sign up, and then free wake-up calls for the life of the account.


Or don't accept requests from known Tor exit nodes and open proxies. Lists of these do exist, though you'll have to pay.
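
The Tor part is actually free: the Tor Project publishes its exit list, so the paid lists are mainly for open proxies. A rough sketch of checking a requester's IP against that list (assuming the public bulk exit list endpoint; caching, error handling and the proxy lists are left out):

    import urllib.request

    # Public list of current Tor exit node IPs, one address per line
    # (as published by the Tor Project; refresh it periodically, not per request).
    TOR_EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

    def fetch_tor_exit_ips():
        with urllib.request.urlopen(TOR_EXIT_LIST_URL, timeout=10) as resp:
            text = resp.read().decode("utf-8", errors="replace")
        return {line.strip() for line in text.splitlines() if line.strip()}

    # Usage sketch:
    exit_ips = fetch_tor_exit_ips()
    if "203.0.113.42" in exit_ips:
        pass  # reject the request, or add friction (CAPTCHA, small payment, delay)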


I agree completely with you. Framing the harassment as "trolling" has been a propagandistic victory for the harassers.

Actual trolling is way less worrying. I don't like it and I certainly wouldn't partake in it, but you can actually ignore it. Harassment, not so.


No, it's been a victory for those peddling the fear articles and soliciting victim bucks.

It's really simple. First you make outrageous claims, then you define everyone who disagrees as a harasser. Find the one tweet or comment that's a bit off color, and pretend that's representative. Or just false flag it.

It's the digital version of how to discredit an opposing view or protest, you just need one person smashing a bus stop or looting a store. Once you have enough fear, it becomes self justifying.


Deciding who's the victim and who the harassers seems quite subjective. Let's say that I hit the beach sidewalk wearing a leather-sack-style thong jockstrap, an 18" Bowie knife, and roller blades. Right out of Wild Boys. But totally legal. And let's say that I attracted a crowd of critics, who felt threatened, and who threatened me. And supporters.

Who would be the victim(s), and who would be the harasser(s)?


When I think of harassment, I think of it as an intentional action. If you did that with the intent to disturb people, I think it would then fall under you being the harasser and others being the victim. But then on the other hand if they intentionally threaten or make you feel threatened, they are harassing as well. My definition based on intent may be incorrect, but I feel as though it makes the argument slightly less subjective.


Well, some of the stuff that people do to make a point about discrimination and harassment is expressly done "with the intent to disturb people". I pulled my example from William Burroughs' Wild Boys. But some of the costumes in gay pride marches come pretty damn close. And hey, it's all good fun. But people need to play safe, avoiding needless risk.


Do you honestly believe that free speech is less valuable than law enforcement?

How does converting the internet into a police state help combat anonymous harassment? Privacy is a tool to prevent harassment; not the other way around.


I have multiple internet presences.

My "real" presence is empty of anything except professional presence. LinkedIn, github etc etc. My "real" presence never engages, never debates and is effectively not there except when it engages with the real world via resumes etc.

My actual presence is scattered across the internet in multiple pseudonymous social networks and eagerly discusses topics I am passionate about.

It occasionally attracts harassment etc., but I don't care because they cannot reach into my real life and actually harm me.


>they cannot reach out into my real life and actually harm me

I would be careful with that assumption. The reason you are not hurt is mostly because people don't want to hurt you. The reward is not high enough.

I'm not trying to diminish your importance, for all I know you could be facebook employee #17 or have gold bars buried in your yard.

But any of the top security guys will tell you safety is a function of an attacker's motivation and resources.

It's not theoretical either; you can google people who took lots of time building up their protections and then challenged a pen test. We're talking people with above-average security practices who got owned within 24 hours. Social media, banking accounts, everything.


> banking accounts

How would they do this? Access to my bank account requires a password I never use for any other service, and a physical non-internet-connected 2FA token I keep on my person.


Using other information they have acquired they would phone the bank’s customer service line, convince them that they were you & gain control of the account. How hard this is depends on the bank. The fact that you have a strong password & two factor authentication is irrelevant.


I guess it's plausible that they could phone my bank and convince them they were me, even though the bank asks several control questions like "what was the amount on your recent bill to Company X?". But assume they've gotten past that, how do they then "gain control of my account"? If they ask for a new 2fa chip or a new debit card, that gets sent to my address by registered letter, so I have to go to the local post office and present photo ID to get it. I don't have any other ways of accessing my account than by internet bank or debit card.


You should not assume an attacker would walk in through the front door like you do. There are more components in the stack, all of which could be a worthy target.


All it takes is a couple opsec slip-ups to give a dedicated group of attackers the clue they need to link your identities together. Maybe you'll never make a mistake or say something that the wrong kind of person takes umbrage with, but is it really fair to put that task on every single Internet user?


Back in the day, it was standard advice for n00bs.


I don't think the main worry should be about being harmed. Yes, we can all avoid taking it personally and being harmed. We do that by avoiding, by withdrawing, by not engaging. That is the real damage - it is no longer safe to discuss things. People talk about hyper-partisanship but a large part of this is that there are actors who are just in it for the lulz and people that have emotional problems in their lives who use political discussion as a way to exercise their personal anger.

Sure, we can personally stay safe, but public discourse cannot.


I don't say this to troll you but googling your HN name reveals more than enough to know your real name, where you live and start linking it all to other non-programming hobbies (piano!).


Yes, that can be problematic. But hey, maybe it's misdirection ;)

Who do you think Mirimir is?


I'm sorry but I'm not here to check your opsec for you.


Probably Maximillian Holm. Am I close?


There's an amazing number of people who will carefully separate their identities online, but still reuse one of those names on an auction or property site, where it's easy to deduce someone's real name.


I've tried this approach too. Though it's challenging to maintain the constant vigilance that's sometimes required. I doubt it's a workable solution for most of the people I know.


Yes, I do exactly that. Also, all of my meatspace professional writing is private, so the chances of stylometric outing are minimal. Mirimir is currently my online professional persona, and I generally don't get into arguments using it. There are other personas for that.


"Don't feed the trolls" was a useful thing to say back when the word meant something far more specific than just "someone behaving badly on the Internet."


Yeah, the meaning has shifted. It used to be that if people knew you were trolling, you weren't a good troll.

Now a troll is just an asshole on the internet.


Some of the newbies don't understand that "troll" refers to "trolling" as in "fishing troller": evoking the metaphor of casting a net to collect responses like fish.

They imagine "troll" as a metaphor invoking the humanoid creature from Scandinavian folklore, which, in the Three Billy Goats Gruff story, is a nasty, unpleasant, violent creature who wants to eat everyone who crosses his bridge:

https://en.wikipedia.org/wiki/Three_Billy_Goats_Gruff

Hence the asshole angle.


Adding to that - all the cool kids these days are saying "baiting" instead of "trolling". The concept (and metaphor) hasn't been lost.


Trolling is scraping the bottom, stirring the muck up into a net; not quite analogous to luring/baiting, a much more sporting tactic.


No, that's trawling. Trolling is also called longline fishing, with many baited hooks.


I suspect that when troll refers to dragging anything, it's just a spelling and regional pronunciation variant on trawl.

According to etymonline.com, the etymology of "trawl" is:

1560s, from Dutch tragelen, from Middle Dutch traghelen "to drag," from traghel "dragnet," probably from Latin tragula "dragnet." Related: Trawled; trawling.

Now, "troll" verb:

late 14c., "to go about, stroll," later (early 15c.) "roll from side to side, trundle," probably from Old French troller, a hunting term, "wander, to go in quest of game without purpose" (Modern French trôler), from a Germanic source (compare Old High German trollen "to walk with short steps"), from Proto-Germanic *truzlanan.

It's rather related to "stroll". Nothing in the semantics about fishing.

I suspect that "troll" in longline fishing in fact refers to the long line with many hooks being pulled around; i.e. "trawled".


Ohhhhh... it should be called trawling then! : P


You measure the success of a troll by the ratio of the number of characters the troll types to the number of characters typed in response. It wasn't a term describing disagreeing with or threatening people, it described annoying and distracting people.

The ultimate troll is a well-timed and placed piece of copypasta, which at ctrl-c ctrl-v is impossible to beat.


As always with human language, the word ends up meaning whatever a substantial plurality believe it to mean.


(Disclaimer: Not a fisherman)

There is no such thing as a "fishing troller".

I assume you are thinking of a "fishing trawler" which catches fish by trawling.

Trawling: to fish (the sea, an area of sea, etc) using a trawl-net. Trawl(-net): a large bag-shaped net with a wide mouth, used for catching fish at sea.

That said, to troll is also a method of fishing, but involving a baited trailing line rather than a net.

Personally I have always interpreted 'troll' (the noun) in the mythical under-bridge-dweller sense, and 'trolling' in the 'to stroll or saunter' sense. That is, the troll has emerged from his bridge and is now strolling around looking for victims.


Here's an example from 1994 that uses it in the fishing sense: https://groups.google.com/forum/#!topic/rec.scuba/PuZh2Hck_C...

> > The shop told me that Dacor and Scubapro have good reputations for
> > stocking spare parts for all their products, no matter how old.
> > This does cause their prices to run a little higher.

> (Gosh--this looks like a troll for a flame-war. I couldn't help
> but visualize myself standing in a used car lot.)

Here it's pretty clear that it's being used in the sense of someone dropping a lure, a bit of tasty bait.


I'm confused, are we talking about trolls or harassment?


These terms tend to get conflated, especially in parts of the web where people use their real names.


Yeah. Total OpSec failure, as I see it. I learned that the hard way on Usenet.


Harassment is arguably a kind of trolling. And trolling is arguably a kind of harassment. But then there's the ideal of free speech.


I've always seen it like this: trolling is silly arguments, maybe attacking a person over their viewpoint (ad hominem) within a thread, while harassment is insulting a person everywhere, regardless of the thread going on, as well as attacking them while they are not participating in a/the discussion.


Yeah, I get that. But what happens is that threads on one site become trolling topics on other sites. So harassment becomes part of the trolling game.


Your first sentence points out the problem. Those people allowed themselves to become the focus of a trolling campaign by putting themselves out there in the first place. It would be fairly easy to avoid that if your identity online was just a name not linked to anything else. If you put your life online it's expected that someone who has nothing better to do and is hidden behind anonymity will make it harder for you.

I prefer it this way to be honest. So many people want more rules to protect them. We have enough, leave the internet alone please. Any rule that someone comes up with is most likely going to be a power grab and make it more difficult for those that know what they're doing.


Most people aren't willing to do the actual act that takes away the meat for trolls: delete and start a new identity without association with the old one.

I changed my IRC identity dozens of times when assholes decided they wanted to make me miserable, the solution is to be the person who walks away without looking back.

We keep thinking the places we congregate and the identities we put out there are the important parts, when really the important parts are the relationships you migrate when someone makes the current location for discussion untenable.

Just pick a new name in a new place, and invite your friends.


This is junk.

'Don't feed the trolls' has nothing to do with harassment, it's about the quality of forums and user content.

But to be honest not sure if you are being meta? This is a bit of a troll statement.


Targeted harassment is only a fraction of all harassment. Some harassment is not easily avoidable, but some of it is easily avoidable.


Well, one can dox "violent harassers".


1. 'Trolling' is not harassment. You can harass someone by trolling them incessantly, but harassing people is not 'trolling' them. Don't use them as synonyms.

2. They've never been the focus of a harassment campaign because they're not stupid enough to put their personal details out there on the internet and then be surprised that people will use that to harass them. They also don't feed the trolls in the first place.

>I've never experienced such harassment myself and used to sing the "ignore" mantra too, until I tried to imagine myself in the shoes of those who receive frequent harassment, and I'm not sure I would be able to ignore at the same level they are expected to.

99.9% of people that complain about being harassed would be well-served by just not using Twitter. Twitter sucks, and has always sucked. It's the epitome of short attention spans and soundbites. Don't use Twitter.

>Also, I'm not sure you are arguing whether a cause of the issue is identity being coupled to your internet presence, but I'm not sure we can go back to the internet of the early 2000's or late 90's, so we need to deal with the world as it exists now instead of thinking appeal to tradition will solve issues we have today.

Nobody is saying we have to go back to the internet of the late 90s. We just need to understand that there aren't more trolls now than there were then. There are more people in general, and when you are anonymous, they don't have anything to harass you with.


> 99.9% of people that complain about being harassed would be well-served by just not using Twitter.

This is a cop-out. You may not need Twitter for your work, but a lot of us do. If I can't use Twitter, I lose an important way to connect to others in my profession (specifically in open source). And I'm an unimportant software developer. What about women in journalism? What about women in entertainment? They need Twitter, and they need it connected to their real-world identity. You might as well tell them "quit walking on the street, or wear a burka, and you won't get harassed or assaulted." It would be the same cop-out. We have police to keep the streets safe. We need Twitter to be safe too.


> We need Twitter to be safe too.

In theory I agree that keeping things safe seems great, but I struggle with the idea that we can define "safe" in a way that is reasonable and makes sense, and where the punishment is effective and reasonable.

The vast majority of times I have seen claims of harassment, the usual process is that someone makes a controversial statement, then realises the group that objects is larger than they imagined as they receive hundreds or thousands of objections, some of which are almost certainly awful. I understand how it can feel like harassment when 10,000 people tweet at you, and in 10,000 tweets half a dozen are guaranteed to sound batshit insane and downright scary, but it isn't harassment to be disagreed with, nor to have ideas that you promote subjected to scrutiny.

Social media massively amplifies what that scrutiny looks like, but pure numbers don't constitute harassment. Nor does people watching everything you say and commenting on it. And this is where it gets tough: nor does having people say mean things about you, because "mean" is just so subjective.

Any definition of harassment is really tricky, and a definition of "safe" is even more difficult, and we haven't even got to the punishment part yet.

Ultimately, the idea that we can stop people disagreeing, and disagreeing passionately enough to border on troubling, is impossible; what is needed is something proactive an individual can do to protect themselves.

And I have no idea what that is!


No it is not the same, it's not the same at all. There's a massive, massive difference between someone on the other side of the world sending a nasty message on Twitter and being harassed in the real world by someone on a dark street at night. These are so completely different that I can't believe that you are so tied up in your views that you forget that. Anyone that would suggest they are the same has clearly not been the recipient of the latter.

People have a right to be safe in the real world. Harassing someone physically or verbally as they're walking down the street for wearing religious clothing or a short skirt is completely unacceptable and makes people not feel safe in their own neighbourhood. It's not possible to anonymise yourself in real life.

That's completely different from having someone send you a nasty message on the internet. Completely different.

Twitter is safe. Nobody can harm you through Twitter. It's simply impossible to harm you through Twitter. What people can do is send you nasty messages through Twitter. You have two solutions:

1. Stop using Twitter, an utterly worthless dis-service that has never produced anything of value, or

2. Get over yourself and grow a thicker skin.

>You may not need Twitter for your work, but a lot of us do.

No, you don't. You think you do, but you don't. The only people that 'need Twitter' are 'social media' people and quite frankly they're the worst part of Twitter.

>If I can't use Twitter, I lose an important way to connect to others in my profession (specifically in open source).

No you don't. Twitter is not a good way to connect to anyone. Nobody can have any sort of technical discussion through Twitter without completely subverting Twitter's model through hacks like multi-tweet messages, self-replying, twitlonger, etc. Twitter has never been, is not and never will be a good platform for discussing anything. Use email, like the adult you claim to be.

>What about women in journalism? What about women in entertainment? They need Twitter, and they need it connected to their real-world identity.

Don't use Twitter. Female journalists have existed for a very long time, much (much) longer than Twitter, and the vast majority of them don't use Twitter. Twitter is a shitty, toxic platform. It's not some sort of official place. It's not a public service. It's not part of our culture. It's a shitty social media site that you should stay away from.

Twitter is not useful to journalists. Twitter is not useful to entertainers (and you'd be a fool to think that hate mail to celebrities didn't exist before twitter. They employed people to filter and reply to their fan/hate mail before, they can employ people to filter and reply to their fan/hate tweets now).


> It's simply impossible to harm you through Twitter.

That's really ignorant. When somebody with epilepsy receives a strobing image, that can cause real harm. When somebody who's recovering from PTSD receives some gruesome message that makes them relive whatever awful experience they had, that causes real harm too. Online bullying can push people to suicide. It's no joke.

It's absurd to claim that it's impossible to harm people through Twitter (or any other online conduit) when victims claim they are harmed and there is clear evidence of harm.

> Twitter is not useful to journalists.

Journalists say the opposite: they couldn't have gotten their foot in the door without twitter.

> It's not part of our culture.

Okay, this is just too silly. Twitter is everywhere in our culture. In advertisements. On TV. Songs are written about it. Books. All politicians use it. Businesses use it. Twitter is huge for sports. Activists use it to organize. If Twitter isn't part of our culture, then neither are cars or t-shirts.


> Nobody can harm you through Twitter.

Milesrout (real name X, location Y) is molesting children and sapping our precious bodily fluids! He has corrupted the police so they won't stop him. Unless someone takes action we're doomed.

I agree with your general arguments, but it is incorrect to say that harassing people on the internet cannot endanger your life.


Your example is arguably a bug in human society.

But there's no doubt that the Internet increases the risk, given the massive scale, relative anonymity, cultural differences, and so on.

Hannu Rajaniemi imagines an intriguing solution in The Quantum Thief. In Martian society, aka the Oubliette, individuals cryptographically regulate who knows, and who can remember, what about each other. So everyone is fundamentally anonymous to everyone else, except for whatever they choose to share with another person, or a group.

I can imagine something like Facebook or Twitter doing that, even using GnuPG.
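
As a toy sketch of the GnuPG version (using the third-party python-gnupg wrapper; the fingerprints are placeholders, and a real platform would still need key distribution, revocation, search, and so on):

    import gnupg  # third-party "python-gnupg" wrapper around the gpg binary

    gpg = gnupg.GPG(gnupghome="/home/alice/.gnupg")  # assumes gpg and a keyring exist

    # Hypothetical circle: fingerprints of the friends allowed to read this post.
    CLOSE_FRIENDS = [
        "0123456789ABCDEF0123456789ABCDEF01234567",
        "89ABCDEF0123456789ABCDEF0123456789ABCDEF",
    ]

    def share_with_circle(post_text, circle):
        """Encrypt a post so only the listed key holders can read it;
        everyone else, including the platform, sees only ciphertext."""
        result = gpg.encrypt(post_text, circle, always_trust=True)  # always_trust just for the sketch
        if not result.ok:
            raise RuntimeError("encryption failed: " + result.status)
        return str(result)  # ASCII-armored ciphertext to store or publish

    ciphertext = share_with_circle("Off to the Oubliette this weekend.", CLOSE_FRIENDS)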


> 2. They've never been the focus of a harassment campaign because they're not stupid enough to put their personal details out there on the internet and then be surprised that people will use that to harass them. They also don't feed the trolls in the first place.

This is a) simple victim blaming and b) impossible to fix in some situations. Illustration of (b) is that the companies of which I'm a director are a matter of public record, and for very little money indeed you can buy online a copy of that record including my home address and telephone number.


Can't you use a POB and special business number?

It's not "victim blaming". It's just what's so. With the Internet, every obnoxious jerk on the planet can come after you, if you attract enough attention.


No, here in Australia a director's registered address must be their place of residence; although there is a process for suppression or replacement based on personal safety needs, for most that'd be closing the stable door after the horse has bolted. What's more, the information is still in the database; it's just masked in standard extracts, and the data is one weak password away from exfiltration.

I believe it is most certainly victim blaming. The existence of terrible people may well induce one to recommend countermeasures, but the phrasing "stupid enough to put their personal details out there" is first-class (and thoroughly obnoxious) assignment of responsibility for bad behaviour to the victim of that behaviour. It's the same logical construction as blaming rape victims for their clothing.

As for "attract enough attention", the bar for that has been lowered in recent times to "one tweet".

These are not happy times to be a private citizen communicating online in anything but the most closed fora.


OK, but as director of whatever, perhaps one shouldn't be tweeting controversially. And if one is tweeting controversially, perhaps as part of one's job, then one ought to practice good OpSec.

This isn't a new issue: http://www.learnliberty.org/blog/anonymity-and-doxing-in-the...


You've conflated "controversial" with "attracting harassment", which I perceive as pretty much in line with your earlier defence of victim blaming. This is something I reject outright as simply another angle chilling the speech of the victims of online harassment, instead of attenuating the bullies.

I'm responsible for my own OpSec, but we both know that public registries have their own vulnerabilities. Again, I think I detect a misattribution of responsibility. Nevertheless that was a single example, not the entirety of the means by which personal information can be gleaned by the unscrupulous.

My counter-position is that it is essential for those of us who have influence over systems design and policy to avoid unintended negative social consequences in the services we build.


If you want walled gardens, by all means build walled gardens. But don't imagine that you can prevent people from using other services. And talking about you.


Yeah, I don't really understand why people want to be anonymous on the internet - and if you don't want people to use your personal information against you, then you do want to be anonymous - but hate the idea that anyone else is anonymous.


Huh? You don't understand why people want anonymity? Or at least, pseudonymity?


No, he is pointing out irony. He is saying that a lot of people feel like they should be anonymous, but that others, for whatever reason, shouldn't be allowed to be. Celebrities, pundits, etc.

It's akin to celebs calling for gun control, stating they hate guns and would never have one themselves, while they are guarded by several armed bodyguards.


No, I don't understand why people want anonymity while also demanding that people can't anonymously send them messages on a public platform the entire point of which is the sending of messages.

People want to be anonymous, but don't want just anyone to be able to send them messages. Okay, so who can send you messages? Who can post things you can see? What is your solution to this?


I guess I misinterpreted your comment as well.


It is victim-blaming and it's shitty to say that people shouldn't use services like twitter to protect themselves from harassment while omitting that people shouldn't be assholes online in the first place.


Yes, of course, "people shouldn't be assholes online".

But there are at least two problems. One is subjectivity. As in "terrorist" vs "freedom fighter". The other is that there are lots of assholes out there. Billions of them. It's like spam. You just gotta filter it out.
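
On the filtering point, the mechanics are the easy part. A crude sketch (the word list and regexes are placeholders; real systems use trained classifiers, sender reputation, account age, and so on):

    import re

    # Placeholder patterns only; a real filter would be far more sophisticated.
    BLOCKED_PATTERNS = [r"\bidiot\b", r"\bmoron\b", r"\bkys\b"]

    def is_probably_abusive(message):
        text = message.lower()
        return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

    def filter_replies(replies):
        """Split incoming replies into ones worth reading and the rest (a 'spam folder')."""
        keep, muted = [], []
        for reply in replies:
            (muted if is_probably_abusive(reply) else keep).append(reply)
        return keep, muted

    # Usage sketch:
    keep, muted = filter_replies(["great thread!", "you absolute moron"])
    # keep  -> ["great thread!"]
    # muted -> ["you absolute moron"]  (never shown, like spam)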


That's stupid and you know it. No matter how much you might want it to be otherwise, people exist. People disagree with you. To varying degrees, they might be hostile about it. Some would characterise my comments in this thread as hostile and worthy of being censored. I think that's ludicrous, and yet there we are.

I think everyone can agree that the most egregious examples of harassment are bad. Sending someone a new picture of gore from a newly created Twitter account every hour for years (or whatever example is trotted out every time this comes up) or scouring DNS records for their address and swatting someone? Yeah obviously that's bad, we all know that's bad. Those are harassment to the level of being criminal offences anyway, there are ways to deal with them.

And I think that everyone agrees that telling someone that their ideas are 'stupid and naive' is perhaps a little hostile, but not to the extent that it needs to be censored.

So what's the middle ground? Are we allowed to swear? Are we going to apply the Americans' ideas of what is offensive to the internet? Because I sure as fuck don't come from America and I sure as fuck don't appreciate being told what is and is not acceptable from a country where it's illegal to say the word 'shit' on TV but it's legal for the Westboro Baptist Church to harass people mourning at the military funerals of gay soldiers. You can sort your own shit out before you start policing the language and behaviour of people in other countries talking to other people from other countries over the international medium that is the internet.

If you want to filter offensive language from your twitter replies, go ahead. Twitter and Facebook are already incredibly filtered thought bubbles, that ship has sailed. It's not going to get any worse if you can automatically filter out everything with language you find 'offensive'.

But this 'let's build a future internet without assholes' stuff is pathetic. What I consider an arsehole, you might consider someone just defending their pronouns or whatever it was that kid stabbed his teacher for the other day.


You've gone into flamewar and incivility in this thread. That's an abuse of HN and not allowed here. Other people broke the site guidelines too, but that's no reason to respond in kind and certainly not to keep perpetuating a tedious spat. We ban accounts that do these things, so please fix this. Specifically, please eliminate name-calling and incivility from your comments and if you can't post thoughtfully, don't post until you can.

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html


If you get harassed by an asshole at the bar every time you go, you pick a new bar.

It takes 5 seconds to make a new account and send the info to the few people you want to bring along.

By the time the troll figures out who you are again, they've usually stopped caring.

Repeat as required.


On one hand you could blame the victims for making themselves available targets and tell them to hide. On the other you could blame the perpetrators looking for targets and try to stop them. The former abdicates societal responsibility for the problem and the latter seeks justice. With more of society now online we should be thinking towards the latter rather than cowering in fear?


Firstly, don't extrapolate and call everyone that's had something rude said to them on the internet a 'victim' just because a very few people have been on the receiving end of genuine harassment.

Secondly, nobody is 'telling them to hide'. You can filter out your twitter replies, you can just not read that material. You can read it and say 'wow there are a few nasty people on the internet'. That's up to you. You can also, yes, remove yourself from toxic, shitty sites like Twitter. But what you shouldn't do is read all the nasty shit and then go 'hey this shit is nasty'. Like, yeah, of course it is. What do you expect? That everyone likes you? Stop hitting yourself.

TotalBiscuit, a minor YouTube celebrity, has said a few times that he's disabled YouTube comments not because he wants to isolate himself from criticism or because he is so offended and thinks those people should be banned from the internet, but because he had a morbid obsession with reading through all the comments even though he knew some things in there would make him feel hurt. He had a problem (he couldn't stop himself from reading things he knew he'd not like) and so he found a solution: disable the comments on YouTube, and only interact with people on a subreddit that he has people he trusts moderate. That's not 'cowering in fear', it's being an adult and solving problems. He's doing basically the same thing celebrities have been doing with fanmail for centuries: getting someone to filter it out for him.

I'm sure many people could get someone to do the same thing with their Twitter replies, and I'm sure many people do. That's your choice, that's their choice. It's not 'abdicating social responsibility', it's not hiding, it's being pragmatic and creating a practical solution to a problem. It doesn't restrict the actions or freedom of others, it doesn't get people arrested or banned from Twitter, it stops you from seeing things you don't like. It's like installing AdBlock instead of legislating that internet ads should not contain pornography because your kids use the computer.

How do you propose to 'try to stop them' anyway? I definitely think that real sources of real personal information, like whois records, need an overhaul. It needs to be possible to register a domain, register a company, etc. in a way that still involves your identity (so that people can dispute copyright, Government can collect taxes, you can be arrested if you're hosting child pornography, etc.) without letting any random person on the internet type in a shell command and getting my home address. And it shouldn't just be 'the Government can access it and nobody else can', that's a shitty solution (although probably better than the current one).

Ignoring that (which is a real problem, unlike 'OMG I got a nasty message on twitter telling me to kill myself, whatever shall I do? Shall I just ignore it, like an adult? No, I'm going to throw a tantrum about it despite being a 26-year-old professional journalist'), what do you propose that 'we' as 'society' should 'do' about the 'problem' of people sending rude messages on Twitter?


The problem as I see it is social and solved through policing as we do with other anti-social behaviour.

All of the best online communities I've been part of have had strong moderation including here on HN. So that's one example of how people can take social responsibility at the level of the platform holder. Twitter is a great example of inadequate time and money spent on that problem.

Where things are complicated right now is in the legal sphere where we have a global entity but many fractured jurisdictions. This prevents the problem being tackled at the right level causing it to persist. This isn't even an issue unique to online harassment.

Mollifying and modifying behaviour as an individual is an admission to being in hostile territory. I'd rather we acted together to bring justice and freedom instead.


> short attention spans and soundbites. Don't use Twitter.

It does what it says on the tin-- "Twit", right there in the name.


>The internet doesn't magically make social, political, religious, and ideological rifts magically disappear, it just seemed like it did because nobody could associate users with their backgrounds.

But one could argue that our modern "Social Media Internet" gives exactly that impression. Whether it's Google, Facebook, Twitter or whatnot: People are being funneled into "ideological bubbles" based on their preferences and being surrounded just by people who share their views.

Thus many people assume they are on the "right side" because everybody in their Internet-social circles thinks exactly like them. That's why many people act so surprised/outraged if they happen to come across somebody from another "social media echo chamber" with completely opposing views.


"People are being funneled into "ideological bubbles""

I'm not as convinced they are being funneled. I remember back in high school, my friends and I would sit around in a circle, in meat space.

"War is wrong"

And we would go around the circle: "yup", "absolutely", etc

"Some other liberal talking point"

yup, of course, so true....

I feel they are self selecting. Look at the election, when liberals were 3x more likely to unfriend people. [1] And I think they have been self selecting since before the internet. Also there is a geographic component. Rural areas vote red, urban vote blue. We have had geographic filter bubbles for decades.

1. http://fortune.com/2016/12/19/social-media-election/


Keep in mind that in meatspace people consciously chose to socialize with like-minded people; that's why you called them "friends" and why only "liberal talking points" were discussed.

On social media, this happens on a less conscious level because with every choice you make, some algorithm "personalizes" everything you get to see a little bit more, increasingly surfacing things/people that you are more likely to agree with while pretty much hiding anything/anybody that you wouldn't like/agree with. This is a slow process and it's mostly hidden from the user; as such, most people are not even aware of being funneled into different "circles" in this way, so they think their personalized social media reality represents the reality of everybody as a whole.

The big difference from geographic filter bubbles is that those still leave you aware of there being an "other side to the story"; case in point: you know there are red states and there are blue states. On social media that other side gets filtered out in such a way that it might just as well not exist at all, because it won't be surfaced to you unless you make a conscious effort to look for it.


"Keep in mind that in the meat space people consciously chose to socialize with alike people"

"some algorithm slowly "personalizes" everything you get to see a little bit more"

You're right, I wish I had written: they are not exclusively being funneled. That being said, they are still friending like-minded people, and not liking stuff which is diverse. I realize I am sort of victim blaming here :(

Basically I have seen a lot of: "the internet causes x", where x existed before. I just wanted to make the point that this is an evolution of a previous phenomenon.


Not blaming the Internet here, rather social media. Pre-social media this didn't really exist in such a crass way. 20 years ago it was kind of a novel thing to "meet" another human online. Most people didn't really care about the other person's political beliefs or whatnot, it was already enough to have found somebody else who was also on this weird Internet thing and shared the enthusiasm for it.

I remember thinking back then: This is the thing! This is what's gonna unite humanity across all kinds of borders, as people can't be discriminated against based on superficial attributes like age, skin color or political beliefs. After all, back then we didn't have "social media profiles" and most people still cared about their privacy, so barely anybody blasted out their personal details for the whole world to see. Sadly, that didn't become a reality.

Instead, it feels like we are in the exact opposite place, what a sad turn of events.


> Rural areas vote red, urban vote blue.

Don't let the electoral map fool you; there are very few places that skew overwhelmingly to one side. Many places may have a majority favoring one major political party, but the minority isn't that small. I live in the northeast, which is reliable electoral votes for Democrats, but there's no shortage of Republicans around.


The only thing that has changed is the ability to demand opinion segregation and discussion protection without being called out as an undemocratic hyper-critter.


I remember reading "The Filter Bubble" about 6 years ago or so and thinking it was the dumbest thing since purple ketchup. Fast forward to the past 24 months and the concept has become more real than the internet itself...


Well, for a lot of the world, Facebook is synonymous with the internet as an idea. In India, FB has negotiated with a lot of mobile carriers to exempt themselves from being counted towards data usage. This means that some double digit percentage of the world thinks that FB is the internet. How can you expect people to not be in a filter bubble when their idea of the net is FB and FB only?


Just in case anyone wants to see the numbers on this, I can't find a better collation.

https://qz.com/333313/milliions-of-facebook-users-have-no-id...


Honest question: how would you reconcile privacy-centric communities like Reddit and definitely-not-privacy-centric communities like Facebook? They both have the same patterns of information bubbles and generally feed content in similar ways (broadly speaking), and they both have strong troll subgroups. If anything, Facebook might have a lower overall incidence of trolling because you run the risk of having your profile picture on the evening news.


it's not about quantity of trolls, it's about the effect of trolls. No one cares if you're a troll on reddit, because there's no personal investment. You're just an account. There's an upper bound on the damage that can be inflicted on you. That is not the case with Facebook. Reddit doesn't have to do anything about trolls except give people a downvote button.


Fair point. It is worth noting that they occasionally stalk users and try to de-anonymize them though this is a fairly "niche" practice.


That would be harassment, no?


Does Facebook actually have less trolling? I don't know how anyone could quantify it but subjectively there seems to be a lot. It's easy to create fake accounts, and some people just don't care.


In an absolute sense I imagine they probably have more, given that they have something like 1.5 billion users. I think the type of trolling might be more mild in general though, back when I used it regularly it was mostly just 20 page political debates hinging on gross misunderstandings by both parties...

Obviously this is just my experience though.


Enough so that the retiring US Attorney for Chicago specifically called it out for the city's murder problem.

https://archive.fo/203tP


Criminal street gang members threatening and dissing each other doesn't qualify as trolling. If we're calling that trolling now then the term has lost all meaning. And that retiring US Attorney didn't cite any evidence of one social media system being worse than the others.


"Trolling: A person who posts inflammatory or otherwise unwanted material on an electronic forum, especially anonymously."

http://www.thefreedictionary.com/trolling

"Troll: One who posts a deliberately provocative message to a newsgroup or message board with the intention of causing maximum disruption and argument."

https://www.urbandictionary.com/define.php?term=troll&defid=...

Provoking gangland warfare to the tune of http://www.chicagotribune.com/news/local/breaking/ct-open-le...

There's been an absolute epidemic within this thread of people attempting to redefine trolling in such a way as to minimise either the problem or the source article. It's beyond stupid, beyond credible, and beyond sustaining any credibility of those doing it.

Ya trollin' me, bra?


For some reason, most of what I'd posted to the above comment (quotes from the letter) don't appear to have made it through. Sigh.


Reddit's attempting to bridge this gap already by adding a Facebook-like profile where you can build an identity or following based on your screen name.


>the internet is as it always was

I remember buying computer parts via e-mail. Occasionally they arrived before I sent money. Think the odds are fairly low of that happening now.

Sadly, I guess a lot of it is no more complicated than road rage. Being kind, empathetic, a good citizen is positively correlated with emotional proximity.

Add tons of people? Less emotional proximity = trolls. Add thousands of miles of distance? No public will to help child war refugees in Syria, and so on.

Such a large percentage of the nastiness would never happen face to face or with a neighbor.


Take, for example, the referee of the KY game whose business got trolled and who received death threats. I don't think he did anything to "feed" the trolls; however, he may be irreparably damaged by such people. I don't think you can ignore them and hope they just go away when your business is threatened as well as your life.


Back when I first started using USENET (late 1980s) it seemed like most people used their real names. Maybe that was just the newsgroups I read.

I didn't really notice the pervasive pseudonyms until web forums and chatrooms started taking off.


Not true. It depended a lot which newsgroups you frequented. There were some where pseudonyms were common and conduct was civilized. There were some where flaming trolls roamed freely.

Generally things get worse - ask your parents. Crime is at an all-time high - just ask Sessions. It was always getting worse - ask people from antiquity. If someone asked me these days, I'd say the getting worse is getting worse.


Keep in mind that username policies were often specified by the institutions people were connecting through, largely major research universities.

So that may not have indicated personal choice.

(My own early / Usenet handles were ... pseudonyms.)


What about reddit or 4chan, tho? Everyone's anonymous there, and it ain't like the brightest proof for your example. They're mostly the sewer of the Web.


That's why people go there. Nobody's forcing you to.


I've been on the internet since the 80s even, so your line of thinking resonates with me.

But what's different now is that a troll actually got elected president, which shows how far we've come from trolls being toothless. It's probably always been the case that perception makes reality, but never before has a pseudonymous forum such as the internet held so much sway over perception.


Wasn't "Celebrity Apprentice" rather trollish?


If we define "troll" as anyone who posts inflammatory, bad faith, needlessly aggressive, insulting, false, or otherwise unproductive content, then the entire class of authors of the genre of news article decrying trolling and harassment are themselves trolls.

>Of course, this is already happening, just out of sight of most of us

Why would they consider that to be such a big problem, if it's not bothering those who are wont to be bothered? The true motivation for these comments is clear, but remains unsaid: what "trolls" are posting is outside the control of these would-be authorities.

The technology community must pick between enabling universal surveillance and censorship, or protecting the privacy of the vulnerable, including those who hold minority opinions and express themselves in ways that could be deemed "unpleasant".


> If we define "troll" as anyone who posts inflammatory, bad faith, needlessly aggressive, insulting, false, or otherwise unproductive content, then the entire class of authors of the genre of news article decrying trolling and harassment are themselves trolls.

Isn't that a bit of a tautology? "If I expand the definition of the word 'troll' to include 'authors decrying trolling' then authors who decry trolling become 'trolls' themselves."

It's not an exceedingly convincing statement to me personally.

Though you've used one of the adjectives I think best describes the nature of 'trolling' and that's "bad faith". Personally, I feel the most useful definition of 'troll' is someone who argues/antagonizes/insults others not because they are trying to make a point or start a conversation, but merely to provoke a negative response.


>Isn't that a bit of a tautology? "If I expand the definition of the word 'troll' to include 'authors decrying trolling' then authors who decry trolling become 'trolls' themselves."

No, it's not tautological.

The parent doesn't propose to "expand the definition of the word 'troll' to include 'authors decrying trolling'". He proposes defining trolls as "as anyone who posts inflammatory, bad faith, needlessly aggressive, insulting, false, or otherwise unproductive content".

Which is the same as (or close to) how it has always been defined, and how the authors of TFA use it.

So what the parent suggests is not some "expansion of the definition of trolling", much less one to include "those who complain about trolling", and in no sense a tautology.


> The parent doesn't propose to "expand the definition of the word 'troll' to include 'authors decrying trolling'". He proposes defining trolls as "as anyone who posts inflammatory, bad faith, needlessly aggressive, insulting, false, or otherwise unproductive content".

If you assume that the word troll does not mean "anyone who posts inflammatory, bad faith, needlessly aggressive, insulting, false, or otherwise unproductive content" then someone claiming it does is pretty clearly expanding the definition of the word.

> Which is the same as (or close to) how it has always been defined, and how the authors of TFA use it.

I actually disagree with both of these assertions. Sometimes people expand the word troll to encompass too large a group, but that doesn't necessarily change the definition and I definitely don't see how the article uses it in that manner.

Though, I can definitely understand the desire to expand the definition of the word 'troll' into usefulness. It does make it easier to dismiss others' complaints.


> If you assume that the word troll does not mean "anyone who posts inflammatory, bad faith, needlessly aggressive, insulting, false, or otherwise unproductive content" then someone claiming it does is pretty clearly expanding the definition of the word.

I don't think the OP is advocating for any particular definition, but pointing out that the authors of these articles seem to be using a definition of "troll" (whether or not it's expanded is irrelevant) which would encompass said authors.


Most of these authors label as trolls people who are not trolls, but rather just disagree with them.


That's why "flame war". There's positive feedback.


Yes, trolls=keyboard kowboys/girls=flame-baiters.

Astro-turfers, shills and disingenuous actors are an entirely separate class. And they are winning.


It would be possible for "authors decrying trolling" to do so in a way that isn't "inflammatory, bad faith, needlessly aggressive, insulting, false, or otherwise unproductive". So it's not a tautology unless you insert an empirical assertion that every single one of the "authors decrying trolling" is exhibiting one or more of the abovementioned qualities. And that assertion sounds like a substantial point, not logically empty at all.


In the trolling game, there's the concept of triggering the "inner troll" of one's opponent. So they start trolling too, rather like a puppet. Puppet count is part of scoring.


I think it's more of a signal to noise problem. We've had that issue before in society (e.g. the Middle Ages), when information flow was not constructive, and therefore not _additive_. The net result was that society moved sideways, not forward.

Contrast that with Victorian England, where large parts of society had "manners", which allowed for information to be constructive, and society (technology, etc) could evolve.

When the signal-to-noise ratio gets low enough, people leave groups, and progress cannot occur at any reasonable rate.


Well, a large part of our daily communication is still what the weather's like, gossip, and what's for dinner.

The internet may look like it's all lolcats and trolls if you look hard enough, but that's like dropping in on a group of teenagers and complaining that all they talk about all day is tits and soccer*

At any time one can still go to a gated, moderated community and have meaningful, noise-free communication with little interference from trolls and attention seekers.

This whole piece reads like "the problem of wetness in swimming" - a non problem that has been created by putting oneself into that condition in the first place.

*actual sport may vary geographically


I wish people understood this. It's so incredibly frustrating to deal with (controlling?) people trying to control the way communities they have just joined operate. Here's an alternative to joining a community you don't like: don't join the community!

People will act like they understand this, but if they did, they simply wouldn't have a problem. There's really no issue here if you actually understand this advice.


Right, Mirimir likes moderated communities. So HN, Wilders and some subreddits. Never Twitter. People have tried to get me on it, but I find it maddening.


While I agree absolutely with this, the problem is that there is a growing base, what almost feels like a mainstream majority, that feels a better solution is to have someone (the government, LE, some invisible hand) take control and deem what is and isn't acceptable thought and speech.

You can see it even on here. Instead of a thicker skin, or not using something like Twitter, people espouse ideas such as enforcing non-'harassing' language. But in my mind there is a clear difference between 'harassment' online and real harassment, e.g. being swatted at 4am, having your pets poisoned, being stalked, not just once but for weeks or months or years on end.

At the risk of sounding like an old curmudgeon, I do understand that now more than ever, we have an entire generation that has never been taught anything except that their online persona is as important as, if not more important than, who they are in meatspace. It's easy to understand: they have really become a brand, their own brand, which is both good and unfortunate. The reality is that life will continue on regardless of whether the data pipes are working or not.

The growing outcry for safety and homogeneous thought is larger than ever, in almost every aspect. But with regard to the internet, the issue is that as this group becomes the majority, their calls carry more and more weight for moving the internet into the same world as many US universities: every campus now has a Free Speech zone, an XX-meter square where you can freely speak your mind and even hand out Constitution pamphlets. Step outside that, speak your mind, and you are charged with 'harassment'.

I'm not that hopeful though. As we move further into a world where the government takes an ever larger role in making the internet fair, we relinquish certain other things. Of course, it starts benign, but it ends up quite different.


Regarding manners, it really depends on what those manners are.

Chinese medieval society was very orderly and mannered; there were clear standards of behavior (at least for the elites). But the manners in question were conservative and did not encourage innovation, so China did not advance rapidly and ended up far behind Europe despite having a head start. They enforced homogeneity of thought and thus destroyed opportunities to advance or get past their own biases. Read about the Imperial exam system for some reference on how this was achieved, and note that China's most fertile regions are geographically highly centralized.

In the west we developed this system of manners based on freedom, where we let people do things we think are wrong, as long as they're not hurting us. This permits intellectual diversity, which means that when the mainstream experts have it wrong, at least someone else can come and knock them over with reality. Rationalism, not authority. Objective evidence, not subjective feelings.

My main worry is that the modern mainstream morality seems to be all about intensive efforts to homogenize thought, opinion, and policy as widely as possible, to exalt collective justice over individual rights, and to privilege subjective "lived experience" over objective evidence. Freedom of conscience and freedom of speech seem to be "out" these days and it's gonna create some real unpleasant results. It already is.

---

(Slightly off topic as well:

It's kind of a myth that society didn't move forward through the Middle Ages. While western medieval society remained in the Malthusian trap up until the 1800's, in other ways it did advance continuously over that time.

For example, studies of signed documents reveal that literacy increased continuously from the 1200s all the way until it hit nearly 99% in ~1900.

Similarly, studies of recorded ages reveal that most people didn't know their age in 1200, but more and more people knew it up until the present day. (You can tell this because people who don't know their age tend to round it to end in a 0 or 5, so you can count the over-representation of this in the data, or even by just looking at tombstones).

Something was changing through the whole medieval era.

And there are other markers (e.g. interest rates went down continuously, rationalism advanced, etc etc).)


I'm assuming by "manners" you are talking about societal norms and regulations. And you're correct in your diagnosis that China has the most fertile regions in the world. However, these manners didn't really affect Chinese "development" as much as you imply.

The imperial exam is a good example of just how advanced the Chinese nation was for most of history. China as a country remained politically united for most of its known history, quite a remarkable feat if you think about it. One of the enablers was a professional bureaucracy that allowed the state to remain strong without being challenged by regional usurpers.

This is a huge topic of course, but one of the reasons why China fell behind is because certain isolationist policies took hold just when the modern era dawned, pushing China farther and farther behind simply due to the breakneck speed of modern development.

Another reason why I don't think "manners" matter is Japan; isolationist until the Meiji Restoration, rapidly modernized after despite the "manners" remaining intact.


> Another reason why I don't think "manners" matter is Japan; isolationist until the Meiji Restoration, rapidly modernized after despite the "manners" remaining intact.

Japan abolished social classes (or pretended to) and nobody gets killed for insulting a samurai these days. The voluntary-then-forced Westernization was a pretty major change.


"In the west we developed this system of manners based on freedom, where we let people do things we think are wrong, as long as they're not hurting us."

I agree that the West is more free than anywhere else. However, that freedom in the general sense is a relatively new thing for large parts of the population. And for that matter, marijuana is still illegal and harms no one.

I think that the West ultimately created a system where groups were able to successfully win the fight for their right to do things that harm no one. But it took a fight, and abuses happened every single time.


Lived experience is all any of us has got, in terms of experimental data. If someone thinks they are having a bad time, it's difficult and dangerous to discount that: right now there is no way to assess the validity of someone else's subjective judgement. To rule it out completely is going too far, almost denying the reality (pleasant or unpleasant) of one's own life...


One of the norms of Victorian society was that once you acquired wealth, you were meant to behave in a way that was befitting of that status - including in terms of morality. And indeed middle class and wealthy Victorians poured money into / agonized over the condition of the working class.

What I believe you are seeing now is the entrenchment of a significant minority of people who not only feel they have not 'won', but that the rich and well connected have broken that social contract. They, unsurprisingly, don't feel like playing the 'manners' game if they feel the system is abusing them.


There was an earlier article on HN related to engagement

https://www.theatlantic.com/technology/archive/2017/03/how-t...

I think some of this may play into it as well. If everything is written to be inflammatory, it becomes an echo chamber of sorts.


I disagree with your list. You put "bad faith" posts as one of the things a troll might do, but I think trolling must always be bad faith. I post aggressive, insulting, inflammatory content with some regularity (not proud of it), but it's never bad faith. I'm never doing it "for the lulz". I don't necessarily think I'm any better than a troll in those moments, but it's not trolling.

It's only trolling if you think what you're posting is funny.


>The technology community must pick between enabling universal surveillance and censorship, or protecting the privacy of the vulnerable, including those who hold minority opinions and express themselves in ways that could be deemed "unpleasant".

They chose a long time ago. A lot of the issues that are slowly coming to the surface for most people (police/surveillance state, endless war driven by military-industrial complex and politics, climate disaster, etc...) are old issues. They're finally surfacing because most of them are so far beyond criticality that they're actually detonating in our midst.

These choices have been made again and again by the few people who really bother to engage with these issues, and we lost. This is what losing all of these battles over decades looks like when the bill starts coming due.


I don't believe it is a choice made once, but a series of choices. When Drupal threw out one of their leaders because he was into BDSM, they made exactly the choice you suggest we don't make. It was not made by some mysterious “other”, it was not made by “them”, it was a choice made by a few named people.

It's a choice we all make every day, and it's the sum of those choices that makes the world we live in. If you want to change the world we live in, act accordingly and convince others to act accordingly.


Most people simply lack the ability to understand reality to any reasonable approximation, and are too easily led by pathos. For them, it's not what you say, but who you are. People like those on HN are a tiny minority, and that's the reality.


Interesting definition of a troll in a recent New Yorker [article][1]:

> …young, understimulated men whose main goal is to be the chaos they wish to see in the world.

[1]: http://www.newyorker.com/magazine/2017/04/03/trolls-protest-...


I don't understand this article at all. I participate in two large professional forums (one for poker, one for development). The professionals are winning the internet there. There are dedicated subforums for trolls and random bs, which are pretty funny sometimes, but serve as an outlet for the garbage.

I browse hacker news... It is what it is, I'm happy with it as a content sharing platform. My biggest gripe is the obvious advertorial/astroturfing that I think is probably something hn offers as a service, but I'm not stupid, so I ignore it.

I get news through economist.com and npr.org and scattered other sources. I don't feel overwhelmed by trolls, except when I read yet another fucking headline about gluten.

My social media is so boring and predictable I'll probably delete it all shortly. I've never had a problem there, but I respect that some people do.

I shop on the internet frequently. I get trolled by counterfeit goods sometimes! Also by fake reviews, but that's what professional forums are for in the post cluetrain world.

I get scholarly articles on the internet every other day as part of a professional masters program. Sometimes open comments on academic articles read like troll fests, but not really.

I participate in open source on github, that's not being trolled to death (Although big os project maintainers may disagree).

And on and on... The internet (as I know it) isn't being won by trolls at all. So what is the article actually about? Twitter? Comments on news sites (lol)? The pessimistic attitudes of "technologists"?


Oh man, have you ever tried to have a political discussion on Facebook? Even citing sources, trying to be diligent... And then, the only response you get is... "that's not right!". And that response is somehow given equal weighting?

I actually wish Facebook, the place where a majority of Internet discussion probably occurs, WOULD institute a dislike button and suppress those comments that get downvoted too much (like Hacker News or Reddit).

Right now, it's either a "thumbs up" or a neutral stance, and that's just not enough to accurately describe the level of discourse occurring.
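
For what it's worth, the mechanism being wished for is simple enough to sketch. A toy version in Python, with made-up field names and an arbitrary cutoff, not how HN, Reddit or Facebook actually implement it:

    # Hide comments whose net score falls below a cutoff, roughly the way
    # HN greys out and Reddit collapses heavily downvoted comments.
    # The Comment fields and HIDE_BELOW are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Comment:
        text: str
        upvotes: int
        downvotes: int

        @property
        def score(self) -> int:
            return self.upvotes - self.downvotes

    HIDE_BELOW = -5  # arbitrary cutoff

    def visible_comments(comments):
        # Drop anything under the cutoff, then rank the rest by score.
        kept = [c for c in comments if c.score >= HIDE_BELOW]
        return sorted(kept, key=lambda c: c.score, reverse=True)

The hard part isn't the code, it's deciding whether a downvote means "this is wrong" or just "I disagree", which is exactly the brigading problem raised elsewhere in this thread.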


The problem with services like Facebook is they give everyone equal footing on any topic no matter the person's expertise on the subject, their relationship to the matter at hand, or their overall attitude towards discussions in general.

This means a black person who's been repeatedly victimized by police has no more credibility than some white dude who thinks everyone's making it up and it's a bunch of cry-babies going on about nothing.

It means a cryptography expert can get shouted down by some idiot that's just parroting something they read on CNN that's dangerously misinformed.

There's a lot of factors in play here but the "like" button skews the conversations towards things that earn votes which are not necessarily things that are true, accurate, or meaningful.

Sometimes the truth is ugly and painful and that's hard to like.


I mean, I can hop on the gripe-train about Facebook, but the only way I've found to have meaningful political conversations is over a very narrow topic with the expectation that parties come well-researched and willing to continue researching. Politics is excruciatingly complicated, trying to have a cogent conversation about it on Facebook is like trying to have a deep talk about religion while riding bumper cars.


The problem with downvotes is they are susceptible to brigading and in political discussions, there are essentially 2 hivemind brigades. In the end, any point that doesn't agree with the majority hivemind on the platform gets buried. /r/politics is a case study in this.


Political discussions with the general public have always been like that, in every country and every age. It's not Facebook's problem; that's just how people are.

Any topic that indiscriminately concerns almost any person (e.g. politics, religion, weight loss and bodybuilding, how to parent and educate children, cooking, healing common pains/sicknesses), when discussed with random people, is automatically doomed to turn into a mess of uninformed opinions, misinformation, trivial logical slips and ordinary quarrel.

Tech/science discussions are usually constructive because they imply a harsh entrance filter on education and intelligence: most of those who upvote "that is not right!" cannot join a public discussion of the pharmacokinetics of a certain organic molecule no matter how welcome they are. When filters are absent, you get what you have described.

The real problem is that over 95% of people do not even value being constructive, checking facts, citing sources, etc. That is not a technical problem, you can't solve it by tweaking voting buttons.


Political discussions in any forum have rarely been productive, and no number of tools designed to build echo chambers will change that.


"And then, the only response you get is... "that's not right!". And that response is somehow given equal weighting?"

I noticed this long before the internet became this popular. It's more about anti-intellectualism than it is about technology.


The author is a prominent woman journalist, so her experience of the internet probably involves more harassment than most people's.


sure, but here her voice is at the top of a widely read internet forum, and there's a polite conversation going on about it. This seems like a win for the internet. I can't stress enough that I respect the pain of public personalities in the age of the internet, to the extent that they suffer harassing emails, fake police calls, stalking, and lord only knows what else. That's not what the conversation is about though (those aren't examples of 'trolling on the internet' those are examples of illegal harassment).

The internet is and continues to grow as a powerful tool for people who want to learn, converse, share ideas, entertain themselves, build things, etc. The idea that 'the trolls have won' seems misplaced and strange.


It's also a heavily moderated forum. Let's see what the response to the article is on Reddit or one of the Chans.


So should we model the internet after a Victorian society where prominent women journalists could drink tea, play the piano and have meaningful conversations about marriage?


Yes. Those are the only two possible options.


Harassment or corsets. Pick your poison!


> I participate in two large professional forums (one for poker, one for development)

"Large" is subjective, and I'm inclined to argue that anywhere "non-trolls are winning" simply doesn't have the size that forums which fight against 'actual' trolling at its scale do.

A simple BBS system or IRC channel can "let the professionals win" simply by being so niche that the professional : troll ratio is well within the manageable bounds for a handful of mods to be able to casually moderate every so often. Hell, this pattern is visibly demonstrable with smaller 'niche' subreddits and subcommunities. It's the whole "there goes the neighborhood" pattern in its digital form.

"By obscurity" doesn't isn't really an anti-troll 'win', it's just escaping trolls' attention. It only works so long as trolls don't hear about or aren't interested in the forum.


You're in the top 1% or 0.1% of filtering for quality, IMO. Go spend some "quality time" on /r/the_donald for a more typical experience.


I agree that /r/the_donald is a poor quality experience, but it's analogous to and in response to the poor quality experience of /r/politics. They're both "circlejerks" that don't allow competing views to be posted. So while you may need to take the_donald with a huge grain of salt, if you only get your news from left-wing sites like /r/politics, NPR, HuffPo, the New York Times, and even Hacker News for the most part, you could do worse than checking it out. (Though I would recommend Daily Wire, Daily Caller, and Reason as counterpoints.)


politics might downvote something they don't like to oblivion, but TD will ban you for violating the narrative.

And that's if by "take it with a grain of salt" you mean that nothing in there is actually relevant or related to reality.


TD isn't meant to be a bastion of free speech. It's a candidate-support website. They say it in the sidebar: "This is a forum for supporters of Trump ONLY". They obviously won't allow anything which does NOT support Trump. /r/politics is supposed to be a neutral subreddit, but it is not even close to it.


It's a forum dedicated to a certain political candidate and their views. By definition, such a place will be an echo chamber with a (usually) low regard for neutrality.

It's no different from a hobby forum simply not tolerating people who don't like that hobby. You don't go to, say, a Microsoft-specific forum and expect to hear why Apple or Linux operating systems are better. And if you did keep saying that, you'd probably be chucked out or flamed to a crisp.


Check a Usenet archive and see what the alt.* groups looked like in 1997. Lunatics and assholes have always been present on the internet; the difference is that in 1997, we saw the necessity of creating private spaces that excluded undesirable people.

Facebook and Twitter are virtually unmoderated. As long as your comment is legal and non-pornographic, it's probably staying up. The "communities" used by the majority of people the majority of the time are less well moderated than a lot of 4chan boards.

Of course the debate on Facebook is uncivil. Expecting otherwise is a peculiar kind of madness.


And if you have civil debates on Facebook, your social circle is extraordinarily strange.

Also, Facebook is moderated by the person whose wall you are posting on. I've seen effective moderation in this way. If you have friends that value good discourse norms, and you shitpost in the discussion around their posts, they can and will delete your shit and tell you to knock it off.


>Go spend some "quality time" on /r/the_donald for a more typical experience.

/r/the_donald has almost 400k subscribers. It is what it is. Do you assume these people are stupid, or do their beliefs and interests (and sense of humour) differ from yours?


>>Do you assume these people are stupid

I dunno... every time Trump fucks up in a big way, the_donald folks genuinely think he is playing "five-dimensional chess" and that he fucked up on purpose as a way to outmaneuver his opponents without them realizing it.

So yes, they are absolutely, definitively stupid. I mean how else do you explain the attribution of superhuman intelligence to someone who is an incredible cretin?


The exact same thing could be said about the people celebrating the DNC and Hillary's race to the White house on reddit. An obviously flawed actor with few redeeming qualities, yet still held in high regard in the minds of people on r/politics. The only difference I see on the left side of the divide is lack of humour (which tends to invite trolling).


I don't want to get into a political argument, especially with someone who thinks Hillary Clinton and Donald Trump are even remotely comparable. And I say that as someone with a deep dislike of Hillary.

Interesting that you think the left side lacks humor. Last time I checked, all parody shows on TV were liberal. What is the right-wing equivalent of SNL?


> What is the right-wing equivalent of SNL?

Bullied off the air long ago. Remember people tried to get people like Thiel fired from FB for daring to donate campaign funds to a mainstream candidate.


Name a single good TV parody show.


>implying late night shows are funny


Just checked it out. I see a meme with the text: "[..] There is no bigotry here. We do not look down on snowflakes, cucks, neckbeards or landwhales. Here, you are all equally worthless".

> Do you assume these people are stupid

probably, although it's their morality that's more of a problem than their IQ.

> do their beliefs and interests (and sense of humour) differ from yours?

yes.


Yeah, but you don't have to go there. That's the point.

Trolls aren't winning the Internet, for the most part they're loitering in those corners of the web you'd expect to find them.


If the internet is supposed to be a truly public platform, that is to say a space where different opinions can collide productively, then the internet at the moment is clearly losing.

It's not that the trolls occupy the corners, it's that they occupy the big, visible public spaces of the internet, while everybody else is driven into digital sanctuaries. That clearly is a failure if the stated goal is to genuinely democratise knowledge and discourse.

I'd even go further than the author and say that we're way past a 'potemkin internet with a nice facade'. The facade is pretty darn ugly as well. The kind of stuff people are willing to post under their actual name is pretty scary.


What if the so-called trolls are actually just the majority of people?


big, visible spaces of the internet - you mean the media?


Sure, those are affected too. The comment sections originally intended for discussion on most big media sites are toxic even with moderation, or have become so bad that some news sites have ended the experiment altogether.

That's a pretty sad result given that the whole idea was to break down the barrier between journalists and readers.


Not only the comments, the actual articles are full of trolling, too.


For something like /r/the_donald, you're right. But for something more general, like general Twitter, you're not.


Vote with your feet/filter/time. Don't go to /r/the_donald.

I just use HN and twitter mainly. I keep Facebook because many friends can only be kept in touch with (out of the blue) via facebook. Otherwise, just block a lot of it.


I don't feel any need to do that on any kind of regular basis, any more than I feel the need to browse pro-anorexia or pro-suicide forums. I certainly take the view that one should keep one's eyes open, but the presence of human garbage (which is a cultural constant) has never bogged down the internet for me. (The worst of the internet - snuff porn, child porn, violent hate groups, people selling people, or murder, or weapons, or whatever - is as old as the internet. /r/the_donald is just like, I don't know... whatever).

Like, why wouldn't I think that the contents of the internet would be a reflection of the contents of our society?


Nobody is forced to go to "the_donald". What would "the_donald" without trolling supposedly be? I don't know "the_donald", but doesn't the name itself already say "this is for trolling"?


I'd rather go bathing in a volcano.


> My biggest gripe is the obvious advertorial/astroturfing that I think is probably something hn offers as a service, but I'm not stupid, so I ignore it.

I've been warned that I would be banned for implying that astroturfing was happening on this site without proof.


This was exactly the point of the article. You seem to have built your perfect Elysium, and you surrounded yourself with like-minded, enlightened users. This is not representative of the state of the Web, tho. Plus: forums are still great because they're one of those places where you can still have full anonymity while discussing something you care about.


Just because you don't have cancer doesn't mean cancer isn't a problem.


No one would have cancer on the internet if they didn't visit cancerous sites. You're hitting yourself.


I'm merely pointing out that "just say no" isn't a real solution to the fact that a large swath of the population falls for propaganda cancer (which is a hell of a lot bigger than which sites you go to on the internet).


"I don't feel overwhelmed by trolls, except when I read yet another fucking headline about gluten."

You may not feel overwhelmed by trolls, but if you live in the United States, you have been overwhelmed by trolls.


Really it all boils down to a lack of community. Yeah I know, it sounds like something the dad from Leave it to Beaver would say.

My favorite example goes back to the days of Quake(World). It was a surprisingly bright and sunny place. You'd pop on to a few community run servers to say "hi". Maybe play a few rounds of whatever mod you were into.

When bad actors showed up, they were punted. If they came back they were literally gunned down (whoever had admin access would put them on a team by themselves and everyone would finish them off). As a result of having access to self-policing, the number of bad actors was generally low in a healthy community.

As time wore on the goal became customer retention. I can remember publishers selling this as "curated team building" around the time the first X-Box came out. It all went downhill from there. You weren't a member of a community anymore, you were a license holder and a subscriber, and soon the very concept of community largely dissolved.

A likely "better" internet isn't a world of several thousand descending on a single comment thread. It's a hundred at most aimed at the goal of useful interaction. The internet as it stands doesn't need more pan-continental megalopolises, it needs neighborhoods of people seeking common interaction.


Hey steauengeglase!

>> A likely "better" internet isn't a world of several thousand descending on a single comment thread.

I cannot agree with you more! We have been thinking about this problem for a while, and recently started https://www.commonlounge.com/ because we genuinely believe a community-based approach is the right way to win the battle against trolls.

Most people don't realize how much care a community moderator needs to put in a community to keep it a healthy place, and building tools to make their job easier usually takes a back seat at most "community platforms".


the "community" solution only works at small scale/for niches. Given any degree of mainstream attention, 'just gunning them down' is like fighting the hydra.

Niche sites thus chug along fine, but any generalized effort to form communities falls victim to this. For niche communities on reddit, this has basically become a repeatable lifecycle (not including the countable-on-one-hand subreddits with leadership large and dedicated enough to wrangle their topics into submission.)


I don't like the loose language used in this article and others that uses "trolling" as a catch-all for online harassment, violence, privacy violations, and more. It's misleading and gets people riled up as they talk past each other. Conflating 'baiting someone into looking stupid online' with 'sending threats to their family members' muddles the conversation. Worse, it erodes the meaning of the term. I'd call it cultural appropriation but that'd be too obvious.


Unfortunately (because I agree with you), that's the common meaning of the word nowadays. I guess we have to deal with it, and speak of "old-school trolls" for the original meaning from now on.


The real problem is that bytes are effectively free. Here I am, using up your bytes right now. Using up the bytes on Hacker news. The only limit is my own time, effort, and energy.

You think this means that we're all equal; we all have equal access to sharing bytes. But the problem is that trolls have infinite time and energy to post bytes. They have significantly more power and influence than you do because of that simple fact. Trolls could, if they wanted, completely bring down hacker news with just endless content and it would be near impossible to stop.

I've had to deal with real determined trolls on some of my own sites that simply have infinite time to post, to write code to post, to obfuscate themselves -- a single troll can shout down dozens to hundreds of other people. This is why the trolls are winning.
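
To make the asymmetry concrete, the usual first countermeasure is a per-account posting cap, e.g. a token bucket. A rough Python sketch, with hypothetical numbers and names (and it obviously does nothing against a troll with hundreds of accounts):

    import time

    class PostRateLimiter:
        """Allow at most `capacity` posts per `period` seconds per user (token bucket)."""

        def __init__(self, capacity: int, period: float):
            self.capacity = capacity
            self.refill_rate = capacity / period   # tokens regained per second
            self.tokens = {}                       # user_id -> remaining tokens
            self.last_seen = {}                    # user_id -> last timestamp

        def allow_post(self, user_id: str) -> bool:
            now = time.monotonic()
            tokens = self.tokens.get(user_id, self.capacity)
            elapsed = now - self.last_seen.get(user_id, now)
            tokens = min(self.capacity, tokens + elapsed * self.refill_rate)
            self.last_seen[user_id] = now
            if tokens >= 1:
                self.tokens[user_id] = tokens - 1
                return True
            self.tokens[user_id] = tokens
            return False

    limiter = PostRateLimiter(capacity=10, period=3600)  # e.g. 10 posts per hour
    if not limiter.allow_post("some_user"):
        pass  # reject the post, or queue it for human review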


It is an irrational belief that trolls have infinite time and energy. This is, in fact, the very idea of irrationality, whose concept indicates additional work must be done to determine the outcome of a choice, either by an individual or by a group. Note that I did not disagree with your assertions here, but simply seek to restate them in slightly different terms so they may be visualized by others more effectively.

Trolls may encapsulate irrational ideas, which may be visualized as memes, inside multiple illogical statements which may be designed to directly conflict with each other's truths. Evaluating one of these statements as a "truth" may cause the other statement to evaluate to false. Trolls may utilize statements in which evaluation of truth is not desirable to the entity visualizing the truth. Any attempt by the recipient to evaluate truth or non-truth of these statements may lead to additional work being conducted to achieve a desirable outcome, such as "winning an argument".

Additionally, these "double bind" memes, or internal visualizations, may effectively spread the irrationality to other actors if they are constructed correctly. These viral memes take root and spread when the new irrational actor posts more irrationality in the form of similar or modified logic patterns.

This process manifests as a viral meme (or what I hypothesize to be an entity): https://www.youtube.com/watch?v=rE3j_RHkqJc

Trolls remove choice from individuals and groups by causing the individual and groups to work against each other. The only way to prevent this is to educate people on the concept of double bind statements, their use in removing choice for others, and the possibility that these concepts may spread virally, on their own accord, in a population which may or may not be conscious.

My primary suggestion for combating trolls in a quick and easy way is to simply not visualize removal of choice for another in your own internal frame. In other words, visualize choice for yourself in the future if you must, but avoid visualizing the actions of others! Stating your refusal to be illogical, or irrational, is usually enough to send the trolls scurrying away.

Note: A double bind and a double BLIND are two different concepts. Double binding someone has been likened to "crazy making" due to its effect of blocking decision making processes from occurring in a given entity: https://en.wikipedia.org/wiki/Double_bind


> It is an irrational belief that trolls have infinite time and energy.

Infinite is certainly a hyperbole; but trolls typically have more time and energy to devote to destruction than you do. I have experienced it. Some trolls are mentally ill -- I have had to deal with that as well.

I honestly don't know what else you're trying to say here but you did use a lot of bytes to do it.


It's what I call rational irrationality. ;)


> It is an irrational belief that trolls have infinite time and energy.

Unless they are well-paid, state-sponsored trolls. They still don't have infinite time or energy, but neither do we.

> The only way to prevent this is to educate people on the concept of double bind statements, their use in removing choice for others, and the possibility that these concepts may spread virally, on their own accord, in a population which may or may not be conscious.

Education is the solution. And I hope that we choose it.


State-sponsored and company-sponsored trolls with near infinite money are a problem.


Look how much more you expended than the person trolling you though. If you are willing to be manipulative, you can hassle people into wasting their time with pretty short comments.


Visualize others' truths at the expense of your own.


What do you mean by this?


Not the "if you are willing to be manipulative" statement. That is what defines a troll. A "troll" is simply an entity (of race human) who is willing to remove choice from others by tricking them into removing choice for themselves.

So, what is choice? I hypothesize it is work. Work done and saved on one hand. Work to be done on the other. Some of our choices are based on what has come before regarding the choice we must make. I like German beer, so I drink one each day. Some comes from other humans, however, who choose to remove our choice for us. Advertising does this, for example. Sometimes choice is removed from use of logic, or rationality.

It is my belief that our choices are, in part, governed by our visualization systems. I'm talking about that thing in your head that allows us to visualize images similar to what we see while using compute devices, such as this monitor, graphics card, computer I am typing on now.

If someone else, besides you, can get you to visualize something, I'm asserting that this is actually a choice of removal of choice by yourself.


Some examples would help illustrate your points.


"Be spontaneous" is the quick and dirty example. ;) It's hard to have intent to be spontaneous, given it's the inverse of intent.

Other examples of internally held double-binds would be jobs, or relationships. Holding a job you do not like in order to live in a place which makes you happy is a double bind. Quitting the job can be imagined to increase happiness, but cause loss of place to live, which itself makes you happy. Loss of happiness is not desired, so quitting is not an option given moving may result in loss of happiness.

Let's look at a troll's use of a double bind, with intent to spread the double bind:

> Only by enlisting the full potential of women in our society will we be truly able to #MakeAmericaGreatAgain - @potus

This is a recent tweet by the President of the United States, Donald Trump. Trump presents two concepts, which are both shown to be irrational: 1. "We" do not enlist the full potential of women in society, and 2. "Our" society, America, is not currently great. Linking them together is achieved by saying "only", which implies that #1 must evaluate to truth in order for #2 to evaluate to truth. i.e. If women, as a group, do not attain their "full potential", and do so by choice for all of us, America will not be great again.

"Enlisting" is considered engaging someone in their support of a cause, by either willing or unwilling removal of choice. This statement is, technically, unwilling removal of choice from the group women AND unwilling removal of choice of an individual woman, given she is a member of group women. In this case, the removal of choice is actually done via the hypothesized, and forced, attainment of "full potential" of a woman and women, by "us" (which is really Trump speaking for all of us at this point). This is an impressive play on Trump's part, given this statement, in and of itself, is a double bind. Women, or any other entity or group for that matter, may only achieve their full potential by having internal choice for themselves. No other entity may speak for the potential in another without applying judgment, or removal of choice, from that individual AND be capable of staying rational while doing so.

By tacking on the idea of "making America great", any dismissal of the first concept, which again is itself a double bind, enables the use of blame for not wanting to make America great again. As with potential, America is only "great" if people think it's great. If it is to be made "great again", that implies it is not currently great, which itself is an irrational statement.


Thanks! I suppose that makes sense.


I think the counterpoint here is that it's much easier to make a script to automate the process of spreading misery than it is to make quality contributions to an online community.

See https://xkcd.com/810/


What are some good tips for controlling or at least mitigating trolling on sites that depend on user generated content?

Moderation, flagging, downvoting, IP address rate limits, keyword blacklisting: are these effective?
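
For concreteness, the cheapest of these, a keyword blacklist, is only a few lines; a Python sketch along these lines (word list and names are placeholders, and determined trolls route around word filters quickly):

    import re

    # Placeholder word list; a real one would be maintained by moderators.
    BLACKLIST = {"badword1", "badword2"}

    def looks_like_troll_post(text: str) -> bool:
        # Flag the post if it contains any blacklisted word.
        words = set(re.findall(r"[a-z']+", text.lower()))
        return bool(words & BLACKLIST)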


Anil Dash actually wrote a great article about this. The short summary is you need real humans doing the moderating. All the tooling you create should be about enhancing those humans.

http://anildash.com/2011/07/if-your-websites-full-of-asshole...
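
A hypothetical example of what "tooling that enhances the humans" can mean (a sketch, not Anil Dash's design): don't auto-delete anything, just decide what the moderators look at first.

    import heapq

    # Flag-driven review queue: nothing is removed automatically, the tool
    # only prioritizes what human moderators see. Field names are invented.
    def review_queue(posts, max_items=50):
        """posts: iterable of dicts like {"id": 1, "flags": 12, "age_hours": 3.5}"""
        def priority(post):
            # Heavily flagged, recent posts float to the top of the queue.
            return post["flags"] / (1.0 + post["age_hours"])
        return heapq.nlargest(max_items, posts, key=priority)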


I banned an entire country once -- not the best way to go but after all other mitigations failed it was the last choice available. And it wasn't China or Russia.

It's almost impossible to give everyone the same number of bytes to work with -- if you can solve that problem you've solved it all.


Please give me my bytes back.


The fact is, there are no more trolls than before. No really. There aren't. Not even since the time of Socrates, and we all know what happened to him.

There are however a lot of people who'd like to silence pesky individuals and organizations who have recently started winning even more pesky debates, such as all those anti-establishment rednecks, patriots, anti-immigrationists and right wingers who got Trump into power.

Can't have that. Must put pressure on Google and Facebook to censor (wait, I'm trying to come up with something suitable - hm... Yes, let's call it...) "hate speech" and "fake news". Yeah, that's it!

Now we have the means to shut them up once and for all. Let's softly censor all people saying bad words on YouTube with things like (hm, find new ideas here) demonetization and restricted mode, and let's cow these people once and for all so our narrative can win. Meanwhile, let's use our corporate power, let's say on behalf of the people, to put pressure on governments to back down on the things they were voted in for in the first place.

There's only one problem. When you make peaceful dissent impossible, you make violent opposition inevitable. Not my words. JFK's. We absolutely must have democracy and freedom of speech. One doesn't work without the other. If you can't win an argument, then accept it and move on. Or try to find another angle. Limiting free speech to win debates is not the answer.


> We absolutely must have democracy and freedom of speech.

You're conflating "Freedom of speech" with "The right to be listened to" and "The right to have a platform". The government isn't going to hunt these people down because they have expressed Foo views, nor is, for example, Google. Google, YouTube, Facebook, etc. as private companies are allowed to decide what they do and do not want on their platform (Excluding certain illegal materials, of course). When they do this, they are not violating the freedom of speech of the individuals whose views they remove.

EDIT: "anti-establishment rednecks, patriots, anti-immigrationitsts and right wingers who got Trump into power."

Would you really call those people anti-establishment? They voted in favour of the corruption of the political system and the succumbing of the tattered remains of it to the forces of lobbying and large businesses. That certainly doesn't sound "anti-establishment" to me. Usually the term "anti-establishment" is used to refer to people who are opposed to large corporations, rather than the demographic who voted the CEO of several of them into office.


> You're conflating "Freedom of speech" with "The right to be listened to" and "The right to have a platform".

Those concepts are inherently fuzzy and related in messy ways. Back in the day, AT&T claimed that it had built its long-distance network, and that it was in its rights to exclude others from using it. The effect of this was monopoly, however, so regulators stepped in and new laws were passed.

If "de-platforming" has the effect of censorship, then it might as well be censorship. If de-platforming resulted in no-one wanting to listen to certain speakers, then it would be private citizens convincing other private citizens in a way consistent with democratic values. But in actuality, de-platforming as practiced on college campuses often results in a group of people too intimidated to talk and too intimidated to attend, contrary to their desires. It's not convincing people. It's threatening and bullying them while just avoiding running afoul of law enforcement. (Or, in some cases, showing up in large enough numbers to frustrate law enforcement.)

> Would you really call those people anti-establishment? They voted in favour of the corruption of the political system

Do you actually believe that all of them knew that's what they were doing? I know for a fact that some subset of them want the system to get more corrupt, so it will collapse sooner. David Pakman recently interviewed an ex-white supremacist who claimed that subgroup voted for Trump so the system would fail sooner.


"Those concepts are inherently fuzzy and related in messy ways."

No, they're really not. The one has a strict legal definition and is enshrined in law, and the others don't and aren't.

"But in actuality, de-platforming as practiced on college campuses often results in a group of people too intimidated to talk and too intimidated to attend, contrary to their desires. "

And what about the people who have felt intimidated by the hate speech and haven't attended because of that?

"If "de-platforming" has the effect of censorship, then it might as well be censorship."

But it isn't censorship. That person can speak their views outside of that building. They are only prohibited from having a platform to spread their views. They can still publish a book of them and market it. Nobody has lost any of their rights, unless you're stating that you automatically have a right to enter any building and preach your views. Nobody would get upset if $person was denied entrance to a funeral home for preaching their views there, because we all understand that that is inappropriate behaviour.

A church can expel a person for preaching atheistic views, and nobody gets annoyed. But as soon as someone does it for a white supremacist, a nazi, and/or a person preaching for the extermination of certain groups, it's suddenly "against their free speech".

Or, for an even better example, think of twitter as a publishing company. If I go to twitter with a book (tweet), they are not legally obliged to publish it. The fact that they do not wish to publish it, or that they stop publishing it (removal of the tweet), is not an infringement of my free speech. I can start my own publishing company (with pretty much every website host), or go to another publishing company (like /pol/) to have my ideas heard. In addition, people writing to the publisher and stating that my book (tweet) should not be published, because of the views that are held in my book (tweet), that is also not an infringement of my free speech.

If the government decides that what I say is unseemly, and they purge it from all forms of media, from libraries, etc. and state that my views are not to be published, then that is censorship.

[An orthogonal point here is that people like yourself often argue that these people should not be able to write protests to the publisher (twitter). As per your own definition, wouldn't that count as infringement of their free speech? It's interesting how you readily support that views should not be removed, but condone that people should not be allowed to protest against views, or that people should not be allowed to write their point of views in emails and posts about twitter. Surely the right to advocate what you call 'censorship', falls under what you call 'free speech'?]


> No, they're really not. The one has a strict legal definition and is enshrined in law, and the others don't and aren't.

Sorry, but it is a historical fact that "strict legal definitions" are fuzzy, and the definition of a "right" changes with technology and cultural changes. If such changes didn't happen, then wiretapping wouldn't be illegal and it would still be illegal to overfly someone's farm in an airplane. Hate speech existing as a legal term is evidence against what you say.

> And what about the people who have felt intimidated by the hate speech and haven't attended because of that?

Can you point to specific instances of this happening?

> A church can expel a person for preaching atheistic views, and nobody gets annoyed. But as soon as someone does it for a white supremacist, a nazi, and/or a person preaching for the extermination of certain groups, it's suddenly "against their free speech".

This is breathtaking intellectual dishonesty here. Preaching for the extermination of certain groups is hate speech. Are you saying that all de-platforming is only targeted at hate speech? In that case, you are simply lying. If not, then this is the dishonest tactic of trying to associate your opponent with a villainous group.

If de-platforming were only focused against hate speech, there would be no debate, as hate speech is not protected speech. De-platforming, as practiced, is used to silence anything certain people don't like. As practiced, it's a form of group intimidation that uses the same mechanism of effect as hate speech. Hate speech only became a legal term recently. De-platforming on college campuses should also become something legally prohibited.

> [An orthogonal point here is that people like yourself often argue that these people should not be able to write protests to the publisher (twitter). As per your own definition, wouldn't that count as infringement of their free speech? It's interesting how you readily support that views should not be removed, but condone that people should not be allowed to protest against views, or that people should not be allowed to write their point of views in emails and posts about twitter. Surely the right to advocate what you call 'censorship', falls under what you call 'free speech'?]

You do realize that you made a false assumption about my views here for weak probabilistic reasons, then proceeded to ascribe them to me? As a test: which logical fallacy is this? I've often seen this tactic online. There's even the same (il)logical structure, down to count and ordering of sentences. It's almost like it's deliberately taught to people!


Private companies are the current gatekeepers to free speech, to propaganda, to censorship, to surveillance. They cannot disregard human rights and the principles of a free society any more than a government can - just as they cannot violate human rights. Just because it's an oligarch and not a politician, the rules of a free and just society don't suddenly change.

Private companies have some property. Their property rights do not supersede human rights.

Then, another solution would be to dismantle the oligarchic model of economic management and provide the mass of people with the property rights of the platforms of their own free speech. But I suspect that our society isn't ready for anything that radical and must in the meantime merely hold private property accountable for its partnership with government to circumvent sovereignty restrictions.


There's an inherent contradiction in the arguments here.

You can't on the one hand say "people have the right to use Twitter free of harassment" because it's ubiquitous, but then on the other hand say Twitter has the right to censor or ban people whose views it as a company doesn't like. These are contrary positions.

Either Twitter is a "common carrier" and thus has the responsibility to ensure its users have a common standard of experience and ability to access regardless of their political views, or it's not and thus it can both censor users _and_ not respond to their specific concerns because nobody has an inherent right to access it.

You need to choose one of those two.


That's wrong though. Even if Twitter were a common carrier, the standard of experience and ability to access would still be open to rules and interpretation. Under common carrier rules, some content can be and is censored for certain groups of individuals.

Common carrier rules also specifically list certain communications as prohibited, namely those that are obscene or intended to abuse, threaten, or harass another person.

At that point, it becomes a question of interpretation: what is harassment? Is it being swatted or stalked, or, as seems to be the trend now, can language or the crime of dissimilar thought be considered harassment?


You're enforcing a false binary here.

Twitter as a private company has the right to remove things that it finds abhorrent, such as hate speech (i.e. stating that jews should be killed, black people should be bred out of existence, etc.).

People can put pressure on twitter to change their policies on what they find abhorrent.

The people with those views can find or create other places to express them.

The argument with 'Twitter as a common carrier' is null and void, because twitter is not an ISP. Twitter clearly falls under "public forum", and is therefore able to state what sort of content it wants to host, and what sort of content it does not. This does not preclude the neonazis finding somewhere else to host their views, like /pol/. Because they can host their views somewhere else, it is not censorship.

Think of it like publishing. If a publishing company does not want to publish my book, that is not censorship. I can go to another publishing company, or start my own. If the government states that what I say is wrong, and enforces a purge of it from all libraries, servers, bookshops, media, etc. then that is censorship.


A monopoly is not a private company. That is all. Anti-trust or freedom of speech. Take your pick.


When you define anyone who does not agree with you as a troll, the phrase kind of becomes meaningless.

I regularly meet mainstream journalists who complain about trolls winning on the internet, but talking longer makes me realize that these people actually hate the fact that their "card-carrying journalism" is being made totally worthless by ordinary individuals on the street writing a blog from their bedroom.

I am from India and I know specific examples of media personalities trying to shut down totally legitimate blogs by calling them "trolls". Example: http://mediacrooks.com/ A bunch of them even came to Silicon Valley and tried to lobby social media giants for their opinions to get more visibility than those of other internet users.

Personally I don't think the bad guys are winning. I think the internet has truly helped marginalized people express themselves without being judged for their opinions and without giving more importance to any authority.


Troll used to mean someone who just tries to provoke reactions online. Nowadays almost every time I hear the word, it's actually used to describe a person who expresses an unpopular opinion. It's an easy way to ignore criticism and disagreement: anyone who disagrees with you is just a troll.


No, the internet was always like this (even worse); there were just far fewer people on it, and expectations were different. Back in the days of USENET there were a lot of quite unpleasant people you'd run into occasionally, but we saw it as part of the scene. Like going to a punk concert and someone spills a beer on you or a fight breaks out. It's not a nice experience, but it's part of the show; partially it's what makes it what it is. In my view it's not trolls who are destroying the internet, but those who are trying to regulate every single aspect of it, instead of teaching people more self-respect and defiance as a way to deal with assholes. And the author is right about that one: it will turn all human interaction into a superficial social equivalent of Potemkin villages where we'll be afraid to speak up without 2 pages of legal disclaimers first.


It's not true that nothing has changed. Facebook and Twitter will show you posts that they think you're likely to interact with, and hide other stories. That didn't happen on Usenet. Those services will also recommend that you follow/friend other users with similar interests, which automatically builds up mobs. That's not how the Internet used to work.


Right, but if people are never going to click on stories they're not interested in, then does it really matter? Can we really blame increasing silo-mind on certain demographics not seeing links that they demonstrably wouldn't have opened anyway?


Yes, I would say that just browsing a table of contents gives you an idea of what's being talked about and what's getting the most attention, even if you never click through anything. And I don't think the algorithms are that good. It's probably just sorting people into a few discrete buckets.


What I see is that some people don't like foolish words on a screen, so they want to clamp down on open discourse and control dialogue because in their opinion it's fruitless or mean or whatever they feel.

Some silly troll spamming idiotic statements is not nearly as big a threat as controlling dialogue on the net and deciding what is trolling and what isn't.

Socrates would be labeled a troll these days if he was on social media.


My screen is of a fixed size; my days are of a fixed length. If you want to put foolish words on your screen more power to you, but if they're on my screen or taking my time, that impedes me doing whatever I want to do with my screen or my time.

That is, the more dialogue that I'm uninterested in that I lack tools to filter, the more my dialogue is being controlled by DoS. That is the fundamental paradox of open discourse; I don't know what the right answer is, but naive open discourse isn't it.


I think there are two fundamentally different kinds of dialogue on the internet.

1. Dialogue in communities directed towards a purpose. 2. Dialogue in public forums.

In my mind, the only place where filtering should take place is in #1. If you want to see information about specific kinds of basket-weaving, then the communities directed towards basket-weaving should give you the tools you need to see that information, and should police their communities such that there isn't too much non-basket-weaving content. But in public forums, I think that people should actively resist filters. I see filters as a case of trying to make something straight out of the crooked timber of humanity. Not only is it a goal that may be fundamentally impossible, it has negative impacts on the people who use the filters. To the well-filtered person, the general public may look more civilized and reasonable than they actually are. This can manifest itself, for example, as the well-filtered people being baffled by the success of populist candidates because "everyone I talked to online could see right through them!"

I think online communities would be better off if they made an explicit choice up front about which type of dialogue they were hosting, and communicated that choice to all visitors.

I think the apparent "rise" of trolls is mostly borne from sites with a disconnect between their stated goals and their actual rules. News organizations are likely to say they want free-speech forum in their comment section, but people mostly expect conversation directed towards the topics being reported on. Reddit was formerly a free-speech platform, but subreddits started behaving more like directed-communities, and the admins encouraged this site-wide. Hacker News is clearly a directed community, and so has fewer trolls, while 4chan is clearly a free-speech community and therefore ALSO has fewer trolls (i.e. most people on 4chan conform to 4chan norms, even if that would be considered trolling elsewhere.)


So your solution is to be given tools to enforce your cognitive-bias-reinforcing filter bubble? That's not a solution, that's a technology-provided lobotomy.

A dialogue takes two - if you don't like the dialogue taking place, instead of attempting to quiet the other voices, perhaps just remove yourself from the dialogue.


As a counterpoint, the biggest gripe of "trolling" that I have is that it prevents dialogue rather than adding to it. It's one thing to argue for say Nazi ideology in a relevant conversation with the purpose of expressing a genuine point (this would be a rare event indeed), but that is not what trolls are doing. Instead, they are invading a conversation with ad hominem attacks as well as distracting and attention-seeking behaviors.

Preventing trolling does not inherently involve silencing points of view. All it should be doing is removing unhelpful, unsolicited, and off topic nonsense. Even the most evil views that I can think of should not be censored, under the condition that they are actually being discussed with the interest of discourse, for that truly is what "dialogue" is. But again, in that event it isn't really "trolling".


Filtering out thousands of "Obama is a monkey" type comments is not reinforcing anyone's cognitive bias...


The conflation of "Obama is a monkey" type comments with people who legitimately dislike the man's policies is not a good faith definition of trolling.


That's what they're trying to do.

But I'm getting really, really, really tired of people conflating the "other side" stuff with the trolling stuff. Look at all the stuff sent to Leslie Jones last summer. Do you honestly think that was meant to be a dialog? Of course not. That was trolling. That was harassment. And that is what people are trying to filter out.


I would agree with you there. I think the problem is that we are talking about two separate issues, but due to intentional conflation of the two by ideologically motivated factions, we're crossing the streams by using the now-overloaded term "troll".


If i say "leslie jones sucks" am I trolling or do I think she actually sucks at her job. If a million people one after another go on leslie jones profile and say "leslie is a horrible actress" is that trolling ? harrassment ?

If someone says "Trump sucks" should they be filtered out ?

Do I think it's a dialogue? Heck no. Do I think it should be filtered? Double heck no.


If you send her dick pics, dead animal pics, or photos of apes with "this is what you look like" under them, all of that from hundreds of different accounts so that she cannot effectively filter it, then yeah, you are trolling and harassing.

Likewise, if you flood someone's employer and customers with the above material to get him or her fired, you are harassing.

Let's not pretend this was just about "slightly impolite words". Or misunderstood dialog.


No. I am rejecting your reply. Go back and look at what happened to Leslie Jones, then try again. This is not simply "Leslie Jones Sucks".


You are conflating. I simply asked a question. I did not say that is what happened to Leslie. I do notice you avoid the question, like the other person who responded. That to me is very telling.


It looks like your account is teetering on the edge of using HN primarily to argue about politics and ideology. That's not what this site is for, and we ban accounts that use it this way—we have to, in order to preserve the intended use of the site, which is stories and conversation that gratify intellectual curiosity.


Hmm this kind of surprises me, but do what you feel is best. I am not attached to ideology or politics.

The post you responded to was me wanting to find out where respondents would draw the line between what is harassment and what is open discourse online. I think this is extremely on topic, relevant, and devoid of ideology.

As far as arguing goes, I don't see it in my posts. I usually make one comment and then read responses. I will take your warning to heart, but I don't really see my posting style changing.


When I look at your account history, it's mostly civil, but it does look like you're (bordering on) commenting primarily on political issues. The key word here is 'primarily': that's our present criterion for whether the site is being abused or used as intended. I realize the reason for that takes some explaining, so let me try again.

HN is for stories and conversations that gratify intellectual curiosity (https://news.ycombinator.com/newsguidelines.html). That implies a wide range of things that are interesting just because they're interesting. We want people to comment here because their "oh!" circuitry gets activated—not their "burning issue" circuitry. (I'm using that word metaphorically.) It's not primarily a question of the topics since these things overlap, but of the spirit of the discussion.


Ideally there are larger lessons to be learned here than "there are assholes on the internet".

Like I would hope people now recognize that stoking a culture war to market a shitty movie is a bad idea.


Very much this, and the realisation that attention is finite and very highly rivalrous is one that needs a great deal of amplification.

I'm finding Herbert Simon's work and Alvin Toffler's Future Shock to be pretty fascinating reading in this context; both are from the early 1970s. Toffler cites significant earlier work.

I'm also starting to wonder what the long-term effects of high-level information overload are. Toffler's giving me considerable pause on that account.


This seems to be borrowing from the 'your right to swing your fist ends at my nose' cliche. So, let's put it in those terms. 'Your right to post a comment on an open internet forum ends when it meets my right to only have things I value on my screen.' Surely most folks can see why that doesn't work at all. Moderation is already ubiquitous, and that is not the same thing at all. That's the forum's owner exercising their right to boot your account and delete your comment. I'd be interested to hear any sensible proposals for dealing with trolls that we don't already have everywhere, i.e. moderation and user-centric filtering tools.


If you're using Twitter to communicate with some people, and you're getting a hundred stomach-turning tweets in your feed every day, then you can't communicate with those people any more. Abuse isn't about discourse.


I stopped reading after the third paragraph when I realized the entire basis for this article is a survey given to 1500 "technologists" with a loaded question that basically equates to "is trolling getting better or worse?". First of all, not everyone who could be described as a "technologist" is qualified to answer that question, and people are naturally biased to say things are getting worse. I have seen non-opinionated ways to research trolling, e.g. at a hackathon I saw a couple high schoolers use IBM Watson's sentiment analysis API with Twitter feeds.

In a sense this article is just a more educated form of trolling - someone with an opinion grasps at fallacies and bad statistics to back it up, and uses the Internet to disseminate it.


I think online journalism as a whole has turned into a form of trolling.

Or maybe we are in a post Journalist world now.

The value of a story is based on how many clicks it gets. That's what makes these sites money. It does not matter if you click because the story makes you angry or because you think it's insulting, just so long as you click and get served the advertising. And as far as standards go, there are none. Just make up some stuff that riles people up, come up with a clickbait headline, and move on to the next story.


I think that depends on your sources. The Intercept doesn't fit into that bracket to me. Bias? Maybe, but bias isn't a problem if you read a diverse set of sources. Bias isn't a problem, opinion isn't a problem; reading one source and complaining it's "fake news" because its bias isn't the same as yours is.


I agree that there is still a spectrum of quality. And certain journalists and publications can still establish a reputation for reliability and quality.

But the problem is the business model. No one has been able to come up with a solution to make money on quality. It's all volume advertising. The NYT is competing with BuzzFeed for eyeballs. It's a competition to come up with the most clever hook and target people's emotions to drive traffic.

In the print days there was a small number of publications, and it was very expensive to start one. The more serious publications published monthly. It's totally different now.

Think about that: up until just a few years ago, serious journalists had to produce about one piece a month, and they had supporting staff and a budget.

There is very little now that comes close to the quality and analysis that there used to be.


I'm not so sure print is dead yet :)

I read the FT Weekend, Private Eye, and sometimes Monocle as my fill of physical periodicals, but I think throwing all journalism into the same box as your average Daily Mail/BuzzFeed AP copy-paster isn't really accurate. These aren't journalists, they are 'content curators' or whatever the term is these days.

I'll use some of those sites to see what's happening quickly -- the bullshit around a story isn't too important when I'm just looking for headlines. Things like 'terrorist attack in Paris' are reported everywhere, which is about all you can expect from such places; for more in-depth stuff I go long-form, and sometimes even podcasts or books written later.

Instead of waging war on stuff that's been going on in the press for years, why don't we just balance out our input ourselves instead of playing into this drive to delegitimise the press, which only seems to benefit the government? We need a press to hold things accountable, since the judiciary and senate have already been steamrolled.


Trolls were always winning the internet, from "RTFM" attitudes to anonymous user accounts. It's a troll playground. Don't get me wrong, the internet was built to be (and can be) an incredible tool, but let's not misconstrue this as a new trend.

EDIT: I find the issue with candor on the internet is a lack of context 90% of the time.


> From "RTFM" attitudes

While I agree some internet users are a bit too gung-ho with RTFM, I think it's unfair to lump them in with trolls. While veterans can be harsh and unfriendly, the RTFM response is often warranted (though, again, it could be phrased more nicely -- and the default assumption should be that the person seeking help is well-intentioned). Otherwise, many online communities would be completely overrun by people demanding help with minimum effort on their part.

"RTFM" is not trollish behavior, because trolls have a different goal: to disrupt communities, start flamewars and generally annoy the hell out of people. It's not merely being "unfriendly"; in fact some trolls fake cluelesness instead of anger.


I disagree. I think that hand-holding beginners can be one of the best services a community can provide, and it can go a long way to growing the community.

That doesn't mean everybody needs to be teacher/mentor/educator. But the people who don't want to be can also just keep out of the conversation, instead of rudely pointing to some 300 page manual and telling someone with passion and interest to screw off.


Hmm, RTFM is not aimed at beginners (and note: I do agree with the sibling comment that the aggressiveness of the response leaves a bad taste, and that pointing to a 300-page manual, without any guidance at all, is unhelpful).

RTFM is aimed at help vampires (recommended reading: http://www.skidmore.edu/~pdwyer/e/eoc/help_vampire.htm). Help vampires do not grow a community; they destroy it. These are people you do not want in your community anyway. Unlike actual beginners, who learn and then proceed to become valuable contributors (unless of course they are treated harshly by veterans, in which case they simply leave), help vampires are never helpful. They ask the same question again and again, don't pay attention to any cues, don't read anything, and seldom provide context or follow-ups to their problem. And when you've wasted time with one (more often than not fruitlessly), the next one comes in. And they are legion. The end result, like the first article I mentioned claims, is that your community of well-intentioned volunteers is destroyed. So a polite RTFM is often the best response: if the person was an actual beginner, they'll read TFM -- and follow-up questions are often welcome, since they show the asker has actually made an effort. If not, they'll leave and you haven't wasted any valuable time.

In fact, help vampires are often indistinguishable from trolls, in that they have the same community-disrupting effect.

RTFM has always been part of the hacker culture (also see: How To Ask Questions The Smart Way, http://www.catb.org/esr/faqs/smart-questions.html). It has nothing to do with trolls.


Sure, I could see that being a problem, but if the question that the "help vampire" is asking is a valid one, then you have to remember that all online conversations have an audience, and that your response is as much to the audience as it is to the person you're directly responding to.

Even just politely linking a similar question and response is better than responding with RTFM.

> RTFM has always been part of the hacker culture

That doesn't necessarily make it a good thing, or anything worth keeping.


Help vampires do not ask valid questions very often. It's very unlikely a community will have RTFM as a standard response to valid questions (i.e. those that provide context and show effort on the part of the asker), with one exception: when the question has been asked countless times already, and the asker could have seen it if they had made the minimum effort of reading the community's FAQ. In which case, RTFM -- or rather, a link to the FAQ -- is the best response.

> That doesn't necessarily make it a good thing, or anything worth keeping.

But it does make it not trollish behavior, which we were discussing. Hacker culture was not trollish, this was always part of hacker culture, therefore it's neither new nor trollish behavior.

I think a variation of RTFM, let's call it GentleRTFM, is both worth keeping and necessary.


>...one exception: when the question has been asked countless times already, and the asker could have seen it if they had made the minimum effort of reading the community's FAQ. In which case, RTFM -- or rather, a link to the FAQ -- is the best response.

These are actually the RTFM-ish situations I hate the most; specifically, as someone searching a forum and finding a thread or threads close to my own question, only to find that the only responses are rudely unhelpful.

I get not answering an insipid post, or a mod locking/deleting one, but replying to a question you find dumb or repetitive just to say so is a waste of time for everyone involved.

Just like "Don't feed the Trolls," I feel one shouldn't raise the post count of a time sink.


I suppose not answering at all is an acceptable solution; an RTFM response does imply a certain irritation directed at the asker (for not doing their homework). In the case of help vampires it does nothing more than signal irritation to other community regulars, since the clueless undead -- by definition -- won't RTFM or even know what that means.

Answering with a link to the canonical question doesn't seem rude to me, however.


> I think a variation of RTFM, let's call it GentleRTFM, is both worth keeping and necessary.

I'm in agreement on that, although I think the Socratic method should also be employed in those cases.


RTFM'ers are usually jerks acting like jerks, and it has nothing to do with seniority or skill. Half of those "harsh" people don't know all that much.

Overworked seniors simply don't respond on forums due to lack of time. Seniors with the time and willingness to answer do so as best they know how. RTFM'ers are doing what they do just to stroke their own egos.

Being a jerk does not imply skill, nor a lack of it. It's just being a jerk.


I disagree that RTFM'ers are jerks, for reasons I've already explained in other posts (though, again, there are ways and ways of telling someone to RTFM, some nicer than others).

Other than that, I agree they are not necessarily more senior or experienced. That's why I used the term "veterans": they are old timers in the community in question, have seen countless newbies come and go, and must suffer the onslaught of help vampires with needy requests and an ungrateful, instant-gratification attitude every day. They may not necessarily be seniors, but they are the people who actually form the community, because help vampires ask their poorly-formulated questions and go away forever, only to be replaced by newer ones. They do not tend to "grow" the community, only destroy it by attrition. How would you deal with them?

Again, "How To Ask Questions The Smart Way" ( http://www.catb.org/esr/faqs/smart-questions.html ) should be mandatory reading for online participation in online communities. Some of it is even common sense. And if you don't have the time to read it or TFM or the FAQ, why should volunteers take the time to help you, anyway?


To answer your question, I either ignore requests for help I don't want to answer, or say that this section of the forum is not intended for beginners' questions, if that is the case.

The overwhelming majority of RTFM'ers are not trying to manage the community. That involves more than just posting RTFM with some insults whenever you feel like it.

I agree that some RTFM'ers consider themselves the community and are trying to put off newcomers and keep their own social status within the forum high. That is just a closed circle jerk, exactly like any other. Most of these communities have many other members who shape them just as much, or more, and don't behave this way.

An elaborate FAQ on how to ask questions is great, and I would treat it seriously, assuming the rest of the discussion has similarly high standards. Except the rest of the discussion is often neither precise nor well phrased, nothing like that. So yeah, where it is consistent with the general quality of discussion, cool. Where it represents a double standard, meh.


Agreed. I guess I'm not saying RTFM isn't needed (I RTFM all the time). However, it's the bad taste that's left behind and the potential miscommunication from the simple act of telling someone to read the fucking manual. There are better ways to get the original message across.


Trolls, in a broad sense, win life in the same way. "This is why we can't have nice things," is a practically universal sentiment, and much of our human endeavor is geared to managing the activities of its edge cases. Technology only empowers people, it doesn't make them better people... so this is the result.

The response will be a combination of social quarantine, and an upgraded intellectual immune system on the part of netizens. I think Trump's election and the damage his presidency is likely to inflict will be a bit of a turning point in the collective attitude to trolls in general.


I agree. It seems to me that the biggest difference is the extent to which the average internet user gets his content from other internet users (reddit, facebook, twitter, etc) instead of designated content creators with reputations to uphold.


Isn't this the applied form of the Global Village nonsense from back in the '90s? I'm sure I've seen positively-spun descriptions of the process you describe.


>It seems to me that the biggest difference is the extent to which the average internet user gets his content from other internet users (reddit, facebook, twitter, etc) instead of designated content creators with reputations to uphold.

This is as old as humanity, and has always been the case for the majority. See the chapter "Social Proof" in the book "Influence":

https://www.amazon.com/Influence-Psychology-Persuasion-Rober...


Let's not forget that '03 to '06 era of "Just F*ing Google It".


I think the scope of this trolling concern needs to be expanded to include the latest science on confirmation bias and cognitive dissonance.

The fake news of Hillary running a child prostitution ring out of a pizza parlor is an example of where people are creating protected spaces for confirmation bias online. Such a story is ludicrous. After Trump won, the alt-right created a coded language where common words are substituted for slurs. It is human nature for birds of a feather to flock together, and part of this includes creating a negative environment to weed out all but the most dedicated. This human behavior can be found at executive levels at Fortune 500 companies, where cussing and swearing that would not be tolerated at the employee level represent barriers for people, most notably women, who don't appreciate the trollish, obnoxious behavior. It is well known that climbing the corporate ladder requires an ever-increasing capacity to put up with caustic trolling behavior.


Trolling is the new spam. Except trolls want your attention, not your money.

People said the same thing about spam. And it's largely been defeated (or institutionalized, whichever way you want to look at it).

We're all still figuring out the dynamics of the attention economy... and I imagine that as we do, we'll find ways to minimize the impact of trolling.

I'd venture to say that that's the big lesson we have to learn from the current presidential cycle (aka troll-in-chief).


There are folks like us working on this: https://www.cipheredtrust.com/

The trick is a scalable means of doing identity verification on the internet... basically a way to do it that preserves user privacy and security and is dirt cheap, so everyone can leverage it.


I'm curious about something... I've seen reports that when sites started using Facebook comments, they found that the lack of anonymity didn't prevent trolling; in fact, it made everyday people more hostile.

Is it just about identity? Would that do anything more than just removing the occasional death threat, instead of actually addressing the more pervasive trolling?


I consider myself mostly reasonable.

I might disagree wildly with you in a number of cases but I try to be reasonable in how I argue my points.

I stay away from pages that use Facebook comments because I don't want to be part of the problem with fake profiles and I don't want everyone to be able to trace everything I ever said back to me.

Not because I intend to say something unreasonable. Not because I fear law enforcement, but because I will apply for jobs in the future. And I might want to visit Arab countries in the future. Or Israel, who knows?

I think this is a reasonable position. I expect and respect it if you do the same - even if I disagree wildly with your opinions, I don't want you to get in trouble for them next time you fly into a country where your ideas aren't considered so correct.

Or next time a president goes rogue like Erdogan.

I think full name policies scare away lots of reasonable people.

The ones who stay either don't think that far, or don't care, or use a fake Facebook profile. I guess none of these attributes correlate well with being insightful and polite.


There are going to be people who are terrible and will continue misbehaving without concern for others. The basic problem with the internet as it currently exists is that it has almost entirely modeled our physical world's societal processes but unfortunately lacks a way to easily hold people accountable online for their behavior. Most of the stuff people feel free to do online that is detrimental to others, they would never do in the physical world. Distance obviously gives safety to bad behavior; I can sit in NYC and insult you in SF without worrying about a physical altercation.

One way to address this problem is to give service providers an easy way to enforce their terms of service. This would also have the effect of making those service providers honest: in other words, if you claim to want to create a welcoming space for all your users but refuse to take effective action against those who violate your terms, then your hypocrisy is laid bare. Today Twitter can claim they can't verify everyone's account, but if that were not an excuse anymore, then they'd have to effectively deal with those who violate their policies, even if it means losing certain users.


I have found the opposite to be the case. A newspaper I read had a problem with their comments. The comment section on every story was pretty active but almost always quickly devolved into partisan trolling and bickering and name-calling. They switched to Facebook comments to show real names and there was a dramatic change. Now, most stories get no comments. The few that do are civil.


Given that identity is the basis for most trolling, I'd say identity is a poor proxy for trust. Despite what Facebook and the CIA / NSA may want us to believe, there's no evidence that a lack of anonymity/privacy leads to greater accountability.


I'd argue that it's reputation, the prevalence of impunity with present systems, and the problem of introducing consequences for misbehaviour, which is the core.

These can be changed without requiring full identification and disclosure. And there's a very long history of principled (and some unprincipled) pseudonymous discussion.


Co-sign 100%... in fact, anonymous identities are part of the service we're offering:

https://www.cipheredtrust.com/doc/#anonymized-identities

It is the perfect balance between privacy and accountability.



Identity can prevent abuse if it can be used as an effective way to keep the trolls out. In the real world if you know behaving badly can get you permanently banned from a place you'd like to have access to, that keeps you in check.

Similarly if it was possible for service providers to effectively penalize (from suspension to permanent banning) abusers in a way that they can't easily circumvent (ie create a new account), that'll be effective.


Problem is the current system is the worst of both worlds: spammers will continue to create new fake accounts.

Honest people get shut up - or outed.


Sigh. Trolls have been with the internet from the start. I would say that they are actually mostly contained (why? Because it's really fucking difficult to be anonymous, plus you are more than likely to be prosecuted for whatever you've done).

So what's changed? More people are on platforms like Twitter, and seem to think that fake internet points are real.

The vast majority of "trolling" is pretty low grade "die", "sexual slur", "lol rape", "exploitation of personal detail that you've shared" Thats not trolling, thats people just being arseholes.

They are anonymous people, they have no impact on your life, ignore them.

Edit: I should point out that yes, swatting and doxxing still happen, and the impact is massive, but they are still rare.

The advantage of a Twitter profile is that you can delete it and start again. I wish I could have done that in real life; it would have helped me escape the daily torment of my bullies at high school.


On the contrary... these trolls have SWATted, stalked, ddos'd, called, doxxed, and harassed normal people.

They've spammed business, social media accounts, email...

For some people the impact on their life is very real.

I can't use my name publicly on the internet because of trolls. This impacts my life in a huge way.


I hate this attitude of "blame everything on the trolls". You can't use your real name on the internet because using your real name on the internet is basically asking for trouble. You have to defend yourself, in the same way you have to view ads and media critically and not believe everything someone says on the phone or in the mail. It's really nothing new, it's just on the internet this time. Scamming and spectacle have always been a good play.

People are simply not trustworthy, especially in large numbers, the internet just provides more exposure, so "don't be dumb" applies even more. Defending yourself is simply part of living in society.


I had to take down the website for my business and can only accept brick & mortar transactions because of trolls... but sure I guess you can win at your commenting game by claiming everything is shit.

I've been running a business with multiple locations for 10 years and I've never had a problem more serious than trolls on the internet doxxing me.


See, this is the problem. Don't blame it on "trolls", analyze where you screwed up and how you could have defended yourself better so you know what to do next time. Then open up shop under a different alias.

If you're not vigilant about your own security and safety, no one else will do it for you. Take responsibility for your own actions.


This is pretty egregious victim-blaming. If someone is getting abused it is the fault of the abuser and no one else.


You're as much a victim here as someone who downloads a tiny executable off a filesharing network and thinks it's the latest movie or game. Or falls for a Nigerian scam. These are accepted facts of the internet. We defend against them. Just as much as publishing your address or information which can be associated to it should be.

Precautions are important. You can never rid the world of evil, but you can protect yourself from it, especially trivially in this case.


We made Nigerian scams illegal. We defend against them partly by prosecuting them. We work hard to reduce the frequency and the impact of them. But you're not saying that we should do any of that for trolls. I don't accept that trolling is just something that necessarily happens and that we need to accept. We can hold trolls accountable and work to reduce trolling overall.


> We made Nigerian scams illegal. We defend against them partly by prosecuting them. We work hard to reduce the frequency and the impact of them.

And guess what? They still happen. We already have the laws to attack the kind of "troll" that shows up at your door in many cases - but guess what? It still happens. Ultimately the law only gets you so far, so we must defend against them in more effective ways - like individual knowledge.

It's a lot easier and more effective to guarantee safety with online matters when you behave safely online than when you take risks and expect the law, the website, or whatever else to deal with the consequences. That's just naively wishful thinking.

Yes, in an absolute sense, all harm should be blamed on the abuser; I don't mean to suggest otherwise. But the prevention potential is so much higher if you can attack it at its destination than at its source - like using a firewall on your server instead of just saying "hacking is illegal, therefore no one will hack me".


I tell my kid not to run in the street for a reason. On the Internet, you can run in the street all day, and if someone hits you, it's not your fault because "victim blaming is wrong".


Yeah, it's one thing to cry victim-blaming about a defense that's difficult or expensive (in money, freedom, or time); it's quite another to use it to discourage individual defense.


There are people who simply cannot effectively defend themselves.

They don't have the knowledge. They're visually disabled. They've got cognitive decline, for whatever reasons.

And the scams and attacks keep coming.

Generally not trolls, so much, though that can be a thing as well.

I'm not completely against there being some wild corners of the Internet (though you might want to take a look at some of danah boyd's recent writing on 4chan and /b/, and what grew from them, and why, and how), but there's a rather large part of it that really has no business being like that.

People get hurt. Money and life savings are lost.

Not everyone's a street-wise, healthy and hale 24 year old.


Then you buffer them. You limit input and output to trusted family, friends and support staff.


Try that some time and tell me how it goes.


Yeah, I get that.

But damn, if you could later change permissions, and then they would forget, that would reduce your risk. It could get confusing, however.


Someone literally came to my house, where I live.


Why was that information available on the internet?

That's the kind of risk you take if you make that kind of information available on the internet. Think of all the possible things someone could do with your address and assume those will be done. That's the type of precaution you should be taking.


My city publicly lists assessing information if you own property. Many do.

Why are you so set on blaming me for this? Do you want copies of restraining orders I had to file on the two people who came to my house?

When do you think someone should start being responsible for behaving dangerously?


Why were you selected over any other business owner? Just seems odd you would be selected at random and then two different people would make an effort to drive out to your house "for the lols".


Sorry, if I go into details it'll be pretty obvious who I am. My entire point is that internet trolls aren't just some sort of harmless fun for a lot of people.


> My city publicly lists assessing information if you own property. Many do.

People tend not to just browse city property listings for fun.

So why were they able to get to that stage - did they have your name? Why? What did you do to piss them off? Could you have done it anonymously?


Doxxing a business owner is generally very straightforward even if they take careful steps to hide their identity - there's a huge paper trail behind any real business.

If you think you can't be doxxed, you're probably just lucky.


It is pretty hard to operate a business and not have this information be available for a sufficiently interested person.


I do it myself. It's completely possible.

Get a PO box or pay a corporate privacy company to forward your mail.

Depends how you operate though I suppose. Unless you're running a physical business from your house you probably don't need to disclose.


What did they want?


You say:

> People are simply not trustworthy, ...

In my experience the vast majority of people are trustworthy. The problem is that the tiny minority of people who are not trustworthy are winning the internet because they have time, and shout loudly. More, some of them do obnoxious and potentially dangerous things.

That's why people get upset - because it is a tiny minority that spoil the whole thing. If only we could stop that tiny minority ...


> In my experience the vast majority of people are trustworthy.

Sure, but in large enough numbers there will always be some people, even some of those normal, trustworthy people, who feel particularly strongly about something and decide to go on the offensive.

There's nothing you can do about that, that "tiny minority" of abusers will always exist in any given situation involving enough people. Ignoring this effect is stupid. Trying to counter it is a waste of time and potentially harmful to speech. Defending yourself is the cheapest option, just do it. Tell other people how to do it.

Maybe I'm just a cynic, but I'm beginning to feel like the trolls are smarter than the people who whine about trolling in this day and age - they're able to defend themselves better at least.


[flagged]


I think this actually explains a lot of what's considered "trolling" when it comes to political topics.

Many individuals stretching the truth for their side just a little bit adds up to a warped picture in total and gets blamed as if it's a goal.


> ... citing your experience with people is equivalent to trolling.

That's an interesting point of view - thank you for sharing it. I will think hard on what you say.


Yes, I should have made that clear.

You were trolled, and it was clearly devastating. What most people think of as trolling is just someone shouting WANKER at the screen.


Well, I can't use my name publicly on the internet either, because I enjoy trolling too much.


I'd say it has a pretty profound impact on your life: share personal information online, get 'trolled', and then watch that encounter become permanently associated with your name.


I could have explained it better, but what people call trolling, and actual trolling are two different things.

Apparently posting a link in reply to someone is now "trolling".


> So whats changed?

Eternal September? As access got more and more widespread, the average savviness of new entrants dropped and they became more and more susceptible to the ever-present trolling? In turn, the new entrants have a bigger voice outside the internet, and their word is given more weight than that of those who roamed it from the beginning, unmoved by the usual trolling?


I'm sorry, but this "ignore the trolls" meme has been proven wrong time and again these past few years. It's not a solution.


I'm going to get specific here.

I'm not talking about a bullying campaign that's branched out into the digital world, that is what I would class as actual trolling.

I run a blog that talks about the specifics of labour law. A number of posts resonated with a profession that was going through a tough time. As time drew on, what was professional unity devolved into infighting along two lines: accept a new contract, or reject it and carry on industrial action.

I advocated the new contract. It was the best offer (in my opinion): it went from a salary with a pay cut and increased hours to effectively an hourly rate with punitive overtime pay (punitive for the employer).

Now, this is where I supposedly got trolled. I was inundated with comments saying that I had "betrayed an entire profession", that "I couldn't add up", that "I'd never worked as X, what the fuck did I know".

In my posts I swear a lot. Instantly I went from "an amusing post with some political points that really hit the man" to "a dirty troll whose foul language betrays a profession".

I just expressed an opinion.

People engaged with me, and I with them, on Twitter. When I didn't change my opinion, I was a troll. When I used high-quality sources, I was a shill.

Unless someone doxxes you (either in the public press or on 4chan), you can just turn it off. Twitter/Facebook/email can all be erased.

You do not have to respond. They want a response, to reinforce their own opinion.


"Twitter/facebook/email can all be erased."

That can mean professional or social isolation for quite a lot of people - or at least disadvantage. The moment your blog, GitHub account, Facebook account or Twitter matters to employers, peers, business, conferences you would like to talk at, or negotiations is the moment when walking away affects you.


That's a little exaggerated I think. I don't have a blog, an active GitHub account, and don't use Facebook or Twitter at all. Yet, by some miracle, I have a robust social life and am gainfully employed. I think people overstate the importance of these services (most of which, by the way, did not even exist 15 years ago). I choose to post here but could stop tomorrow and it would have zero effect on my personal or professional life.


B-b-but how will I growth hack my way to a 4-hour-workweek selling ebooks about how to growth hack, without constant, obnoxious self promotion under my real name on all corners of the Web?! God's sake, I can't possibly be a thought leader anonymously! You savage.


What you just described is the normal capitalist economy: people selling and promoting stuff. If that person is able to make a living with just 4 hours of work a week like that, they are pretty skilled. Are you saying it is OK to harass them out of that business just because you are jealous?

It is a nice example of someone who would be really harmed by leaving.


No, I'm saying there's an inherent conflict between those for whom the "No One Can Tell You're a Dog" Internet is and should be how things work, and those for whom making money under their real name by polluting it with garbage is part of how they do business or build their ridiculous "personal brand" or whatever. For the former, the latter complaining that they're being harassed because they plastered their real world details on every virtual surface they could find comes off as 1) n00btastic and/or 2) entitled whining, and the continued pushes for increased real-name use and harsh moderation from that corner (and the continued intrusion of commercial activity, especially ads and the OMG-we-actually-live-in-a-sci-fi-dystopia spy economy but also marketing trash masquerading as content) represent attacks on the quality and freedom of the Internet.

Just because people get to run around filling the real world with junk to make a buck doesn't mean they're entitled to feel welcome everywhere they might want to do that, or that they can expect to be given a pass on common sense or the norms of the media they seek to degrade. They can console themselves with the fact that they're almost certainly going to win in the long run, I guess, since money always does.

That said: yeah, prosecute actual harassment when possible. "NEVER POST YOUR REAL WORLD DETAILS PUBLICLY ON THE INTERNET" remains, and should (but won't) remain, excellent advice, however, and I wish more people would follow it; and if they've chosen to ignore it to make money, I wish they'd quit trying to make everything worse rather than saying "oh, gee, I ignored that and bad things happened, I guess I should stop ignoring that" (but they won't, because money).


Congrats. And everyone else is expected to be exactly like you and have nice real world community and job that does not involve real identity on the Internet. Gotcha.


Ignoring trolls works when they are motivated by attention. It does not work when they hate your guts and seek to harm you however they can.

That is like saying "just igore bullies" to bullied kid. It does not work.


I find there's, in general, a huge tragedy-of-the-commons issue. Not just trolls, but bad incentives in general.

One example is people actively mis-tagging things. Go on soundcloud.com and look at the charts. Pick Techno and notice that at least 5 of the top 50 songs are not techno in any way, shape, or form. Instead the artists just wanted to get onto any chart possible, so they tagged their tracks with as many tags as they could dream up.

I've seen a similar thing on other sites.

I have no idea how to solve that. I suspect SoundCloud is not interested in hiring people to police it and somehow negatively incentivize the behavior.

They aren't the only site that has this problem. Stack Overflow solves it with their point system: above a certain level you can edit/change tags, but it requires volunteers to do the editing and a way to incentivize those volunteers to do a good job (and I know many people are not happy with those volunteers' judgments).
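
To make the mechanism concrete, here is a minimal sketch of that kind of reputation-gated tag editing. The threshold, data shapes, and function names are all made up for illustration; this is not Stack Overflow's actual logic.

    TAG_EDIT_THRESHOLD = 2000  # hypothetical reputation needed before retagging is allowed

    def retag(post: dict, new_tags: set[str], editor_reputation: int) -> dict:
        """Apply a tag edit only if the editor has earned enough reputation."""
        if editor_reputation < TAG_EDIT_THRESHOLD:
            raise PermissionError("not enough reputation to edit tags")
        updated = dict(post)            # don't mutate the caller's copy
        updated["tags"] = set(new_tags)
        return updated

    song = {"title": "Some track", "tags": {"techno", "pop", "ambient", "edm"}}
    fixed = retag(song, {"ambient"}, editor_reputation=3500)
    print(fixed["tags"])                # {'ambient'}

The point of the gate is just to make bad edits cheap to revert and good editors cheap to trust; the incentive problem of getting volunteers to do the work is a separate question.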


I used to laugh reading about politics in Abraham Lincoln's time: distribution of news was, at best, iffy and journalists made up stories to suit their political/social viewpoint. Remarkably like today. The Internet is indeed wondrous. Thanks to the trolls, we're back to where we were in 1860 again.

"Fake news isn’t a recent problem in the US — it almost destroyed Abraham Lincoln":

https://qz.com/842816/fake-news-almost-destroyed-abraham-lin...


I remember when the internet was primarily shock videos and Newgrounds flash animations, and really crappy Expage raw-HTML webpages with flaming GIF animations.

That internet was horrible. It was filled with goatse, tubgirl, lemonparty, crush videos and much more.

The internet today doesn't have that. But it is politicized. And just as with mainstream media, control of politics has created a race to control platforms and communities.

The internet has fractured communities that break down into camps of people who are collectively gaslighting one another. And in spaces where there is an established groupthink, anything that questions that groupthink is immediately a threat to the entire ecosystem of that space. And so the term "troll" has come to mean "anyone that doesn't agree with the groupthink."

Yesterday someone suggested that I was a traitor to my country and would be hunted down because I fact checked a group hysteria. I've been called everything under the sun and have received all sorts of threats. But in that community I'm the one who is labeled a troll.

(I've been criticized, and even temporarily banned from HN on the premise that I was a troll. In that case, I had been explaining and defending the Chinese case for their sovereignty over territory in the South China Sea. It was not bearable on this forum.)


I'm getting pretty worried about all this stuff. When did we lose the ability to discuss things with people who have a different stance?


I think it's the opposite: I hardly see any truly malicious trolling these days, especially since 2013. What the author attributes to trolling may just be someone expressing a view that goes against the grain, but that is not deliberately intended to provoke a negative reaction or made in 'bad faith', which is closer to the definition of what trolling is. The reason there is less trolling is that many forum communities and social networks have filters, much stricter moderation, and much less tolerance for trolls than a decade or more ago. From anecdotal experience, in 2004 trolling was much more prevalent on forums, but now admins and users have much less tolerance for it.


I'll agree the tolerance for extreme trolling is down, but I envy you your shelter: from trash talk to insults to casual "I'll drop this outrageous-but-believable statement and quietly leave", I see non-extreme trolling as mainstream these days.

And I define "trolling" as "taking actions or communication with the sole goal of gaining pleasure from someone's misery and/or outrage"


Is this really a problem of technology, though? It seems more the artifact of a modern culture so predicated on con-artistry and spectacle that abuse and manipulation have become the dominant form of expression.

Technology makes this easier for sure. But I think it's the futureless world of hucksterism being articulated to everyone that makes behaving this way attractive to so many people.


Yeah, it pretty much is. If you're going to credit the development of the web and related technologies with the economic benefits it has delivered, we need to take it on the chin for the damage it has caused to our society.

Every time we disrupt an industry, especially media, we are by definition shaping society. It's about time we as technologists started being honest about that.


That's a fair observation. You break it, you bought it.


The reason trolls do what they do is people with the mindset of this author. Trolls cannot enjoy the internet because they are browbeaten by someone willing to type an entire article about their own thoughts. It's a deluge of terrible opinions and people with too much time to type them all out. The weapon to defeat that is to give them something ridiculous to rail against, so that their efforts are wasted.

The fact that this has worked for such a long time, and that just about nobody has seemingly caught on, means that yes, now there is an entire community of people who do it. It's just part of the internet now, and it's growing and will continue to get bigger.

Sorry people. I'm not a troll myself, I largely just get off the internet now. But the reason it's bad is because of this author, not trolls.


If I understand correctly what has happened recently, state-sponsored media disruption is the problem.

While the Internet was a place for nerds, it was uninteresting to mainstream powers. Nowadays it is a good target for states wanting to influence elections, for companies that want to sell their products, and for organized religion. Old tricks in a "new" medium.

If a power can mobilize millions of troops, or spend millions on discrediting climate change, why should it respect the Internet?


It's the paradox of epistemic systems: as they become larger (or the audience more attractive), they attract those who would use them for manipulation.

https://www.reddit.com/r/dredmorbius/comments/5wg0hp/when_ep...


I'm surprised at the concern over trolling compared to shilling.

If someone wants to be an ass, whatever, but CTR and similar operations seem like a much more serious problem to me.


I haven't found sources or people being confirmed as CTR shills after the election was over. If you could provide some, I'd appreciate it. I agree shills can be a problem, but I haven't found conclusive evidence of their influence.


Sure, here's a compilation post from Reddit.

https://np.reddit.com/r/conspiracy/comments/56u8vz/new_evide...


Good read


How is CTR on your radar, but Russia employing thousands of online trolls isn't?


It is; but they're employing shills, not trolls.

It doesn't even have to be political, I'm sure we've all seen the forced corporate memes that totally aren't an advertisement.


> they're employing shills, not trolls.

I disagree. CTR was known for promotion, shilling, just as advertisers are, but people in Russian employ were known for disruption. They aren't pushing the merits of Donald Trump, Marine LePen, etc., as much as they're fomenting discord through fake news and aggressive bias.


Linguistics aside I think we're in agreement. :)

The article doesn't outright define trolling, but gets pretty close with:

>humans like trolling. It’s easy for people to stay anonymous while they harass, pester, and bully other people online

All I'm getting at is, people being assholes on the internet isn't a real problem compared to the astroturfing and forum sliding that goes on.


I guess where we're in disagreement, though, is over something that was discussed at the Senate hearings in terms of "unwitting agents" or, put less delicately, "useful idiots": people who will follow along with dangerous patterns that might have their source in extremely negative astroturfing/forum-sliding/shilling, but which take on a life of their own.

We have clear evidence that disinformation campaigns and active measures are being used, but the reason they work isn't because initially a bunch of Russian teenagers are employed to push false and dangerous narratives. They work because they get picked up by totally sincere people who have an emotional reason for perpetuating them, but which reject any kind of rational discussion around them.

As a use case: An army of (shills/trolls) pushes a completely fake story titled, "Police Find 19 White Female Bodies In Freezers With ‘Black Lives Matter’ Carved Into Skin." (this was an actual, viral fake news headline), and times it properly to get it to certain key environments.

Once it's there, it gets perpetuated by people who are sincere, who are continuing the narrative for emotional, irrational reasons ("this fits my world view, it angers me, it makes me feel good to bring it to light"). When people respond that it is a fake story, those people are responded to with hostility, name-calling and anger.

At this point, you have a shift from shilling to trolling, but the original source hits a particular emotional note that posting things like "Hillary Clinton is the most experienced and qualified candidate for president EVER" or "Hey guys! I just learned about this great new product, totally randomly! Let's talk about it!" simply doesn't.

Both are problems. The latter are annoying, and with a little bit of savvy, easily identified and ignored. The former is outright dangerous and attacks one of the foundations of a democratic, free society.


>I guess where we're in disagreement, though, [...] "useful idiots" that will follow along with dangerous patterns that might have their source in extremely negative astroturfing/forum-sliding/shilling, but which take on a life of their own.

I'm not sure where the disagreement is; gaining sincere support is the whole point. Shills spread plenty of false anti-Bernie things.


I think you should take a second look at what I said. Yes, there were anti-Bernie things, and I say this as someone whose politics are much closer to Bernie's than they will ever be to Hillary's.

But the nature of what Putin is doing is radically different, for the reasons I outlined. Misrepresenting an essay that Bernie wrote 50 years ago, for instance, is dirty politics for sure, but I don't put it on the same level as spreading a fake story about Muslim immigrants or black people, especially in a country like ours that has a very dangerous history of racial populism.

Add to this that those whipping up these deep social divisions aren't even American, and you have yourself a problem with a much greater significance than playing dirty in a primary battle.


I don't think we can cut off trolls, because of free speech. It's not even a good idea, as some trolls aren't even trolls, they are just loud. Instead, if personal filters were controlled via browser settings or something independent of the actual provider, you could maintain privacy yet have nice content as well. An AI filter would be a good candidate for that.


Wrong. Most internet forums are privately owned, and thus the concept doesn't apply. Plus, trolling and harassment isn't free speech. Conflating the two is insulting to those who are fighting for actual free speech, in places where they will be arrested for advocating that.


I have a hard time knowing who is a troll, as there are times when people become one even though it isn't their normal behavior. I remember a recent study on that listed here, which was interesting. Also, your negative tone, for example, could be interpreted as trolling if you posted lots of those types of responses (not saying you are a troll, of course), but some AI might give you a false positive on extreme opinions.

What I meant was not filtering the person trolling but the person viewing content being able to have their own filter, where the filter is not controlled by some third party but is under your own control, so the third party couldn't suddenly remove the option. It's more about control than privacy in that case.
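
As a rough illustration of what such a viewer-side filter could look like, here is a minimal sketch. The blocklist, threshold, and scoring heuristic are all made up; the phrase-count check is just a stand-in for whatever "AI filter" you would actually plug in, and the settings would live with the user (e.g. in a browser extension's local storage), not with the platform.

    BLOCKLIST = {"idiot", "moron", "die in a fire"}  # user-editable, stored locally
    THRESHOLD = 1                                    # hide a post at this score or above

    def score(text: str) -> int:
        """Crude toxicity score: count blocklisted phrases in the text."""
        lowered = text.lower()
        return sum(phrase in lowered for phrase in BLOCKLIST)

    def filter_feed(posts: list[str]) -> list[str]:
        """Keep only the posts the local filter considers acceptable."""
        return [p for p in posts if score(p) < THRESHOLD]

    feed = ["Interesting point about moderation.", "You're an idiot."]
    print(filter_feed(feed))  # ['Interesting point about moderation.']

Because everything here runs on the viewer's side, the provider can neither weaken the filter nor quietly remove the option, which is the point being made above.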


I think the problem now is that too many people are on the internet.

Back in the 90's, it was a great place to hang out. In the 2000's, you had fun sites with great user commentary like Fark, Digg, Slashdot, and Reddit.

Now everybody and their kid has a cellphone with internet access. Your average person is an asshole, so that's reflected online because of how saturated it all is now.


It's just an extension of the Eternal September.

Any community that comes in contact with a larger community that does not respect its customs has problems, be it Usenet, email conventions, Burning Man, victims of imperialism, or a local book club.

We have lots of evidence of this. The unfortunate thing is that I can't come up with examples of anything positive coming about as a result: The smaller community is either destroyed or weakened, and the larger community rarely replicates the strengths of the smaller ones.

We can talk about the strength of diversity, and I'm all for it! But this isn't a melding to create something new, this is tearing apart the "other" and continuing on barely altered.

Humans are poorly built to handle large communities, and we've not really come up with an artificial means to deal with such situations.


"the smaller community is either destroyed or weakened, and the larger community rarely replicates the strengths of the smaller ones"

this is a great article on the stages of this phenomenon: https://meaningness.com/geeks-mops-sociopaths


That was a very interesting read, thanks! It's more focused on a "scene" and monetizing than I was thinking of, so it gives a different take on the same basic problem.

My general thought was influenced by various communities (RPGs, IF, Usenet, coding, etc.) I've been part of or witnessed, and how they were unable to survive the swarm of the slightly interested. A few interested newcomers are good and healthy: you need the fresh blood both to replace those that leave and to keep old-timers engaged. Too many, and your norms and conventions are crushed; at a minimum you lose the sense of community, and most often you lose the ability to communicate effectively.

There are plenty of historical parallels as well, or at least there seem to be - history is not my strong suit, so while I suspect the parallels line up, I won't go asserting it too far and will stick with the communities.

Usenet in the face of AOL is one example (I was there, it was tragic, and though we live in a wonderful age with a lot more information available, I think it's harder to FIND an in-depth discussion). Another example is, say, Slashdot. Originally it was the haven for techies, which made it popular, which made it NOT the haven for techies. After /., I moved through a few attempted communities, and currently HN is one of my sources for discussion. While every community has its nostalgics, I definitely feel like the last 2-3 years have seen an increase in people talking about the decline in comment quality. At this point, I've come to expect that eventually HN will fall prey to popularity and I'll have to find somewhere else to go.

In light of the article you linked, I'm not really sure how it links up. I guess too many "mops" are attracted, and even if you don't get any "sociopaths", just throwing off the ratio of people who want it close-to-the-way-it-was to people for whom this-is-nice-on-my-terms too much can drown the community in noise.

The disturbing thing is that I feel like the tail of the "mops" there. I'm not one of the creators, and I'm usually not a fanatic. For example, while I'm _willing_, I've never gone to the trouble of contributing to any open source project beyond the thinnest of bug reporting or the rare cash contribution. OTOH, I don't think I qualify as entitled, and I try to learn and abide by the norms of each community, so maybe this is a variation of Imposter Syndrome, maybe I just know my limitations, or maybe I'm a mooch.

I will ponder this. It dovetails in disturbing ways with some thoughts I've had about societal futures in a Fifth Wave sort of way. Thanks for the link, I'll have to look into "The Gervais Principle" he mentioned as well.


This in spades.

Usenet was tiny. Probably ~50k users, maybe 500k, back in the late 1980s / early 1990s. (Email discussion with Gene "Spaf" Spafford, one of the Usenet Cabal (TINC) members.)

I remember Slashdot ... before there were userIDs. I ... had a pretty low-digit-count ID, and remember when the site hit 100k and 1 million users.

Today, an online service with 5 million to 10 million active users is ... considered a failure. (I was ... kind of responsible for giving a good count of Google+ public user activity.) Facebook has over 2 billion users, and something like 500m regular users.

Those are big networks. And scale matters.

This isn't just a matter of the Eternal September, though acculturation is certainly a big part. I don't think Usenet could have sustained a network of this size, even with four weeks of mandatory on-boarding every September. The scale is simply too large.

One of my hobbies has been to look at how dynamics within various groups change with group size. That matters much more than the group composition (though that also matters). No matter what, a group of 5 people, or 20, or 100, or 1,000, or 10,000, has a fairly distinct feel.

Even before I'd estimated G+'s size, I had a fairly decent intuitive sense (generous, as it turned out) of the userbase size (I'd thought perhaps 20-50 million).

Reddit's various subs also give good examples of how community varies from sizes of ~1 to about 15 million (the largest subs).

My suspicion is that ~50 - 500 users is probably a relatively good sweet point, though I'm not sure how that can be utilised to make scalable discussions possible -- collecting the best from many groups, but keeping the discussions generally smaller.


"One of my hobbies has been to look at how dynamics within various groups change with group size"

Do you have a link? I went to your ello page, and now I have 7 tabs open, lol. I saw "Conversation doesn't scale very well", but that was only a tiny conversation; unless I missed something.

"My suspicion is that ~50 - 500 users is probably a relatively good sweet point, though I'm not sure how that an be utilised to make scalable discussions possible"

Just a thought after reading a couple of your ello posts. I wonder if the sweet point is related to "The basis of trust should be local reputation by peers who actually know who you are and that you play fair." [1]. Meaning it might be beneficial to limit discussions to those who are considered experts by their peers? Yet sometimes an outside perspective can be beneficial, as mentioned in [2]. I guess what I'm saying is: are scalable discussions possible? They seem to branch off into many tangents. Just brainstorming here :)

1. https://ello.co/dredmorbius/post/iex9l-hxfwop9kdyrvugvq

2. https://ello.co/dredmorbius/post/lu30wakaoho3guaguxrasw


A lot of this is still ideas floating around my skull and various notes (electronic or paper), not fully organised.

Ello is, as I've just noted in a post there, very search-resistant, which is becoming a major pain-point. My post index (see my profile page there) is very sadly out of date, though it has a few items on it.

https://ello.co/dredmorbius/post/dlz9c2z6x-tvkd7rxtxwpw

I'd put some thoughts together on Reddit and community and what seems to work, or not: https://www.reddit.com/r/dredmorbius/comments/20yhxc/reddit_...

In the spirit of Thomas Edison, a long list of things that don't work: https://www.reddit.com/r/dredmorbius/comments/1w73g4/on_comm...

You've found two of the better items I'd put together on trust and dialectic discussion at Ello.

Though not directly related to size, I've also suggested standardising topical "channels" of discussion: https://ello.co/dredmorbius/post/ftpx7lwnfjtopygu5xrkhg

Somewhere amongst my writing is a suggestion for a tiered set of vaguely Dunbar's-Number-sized network clusters. Say, 50-150 people per group, themselves organised into clusters of perhaps 50 groups. A small number of tiers gets you to current global population size (5-7 levels as I recall). If each group surfaces, say, 10 - 100 items per day of choice public content, and the filtering is applied upwards, then ... maybe there's a way to have both local and global discussion.

Topical interests might be overlays on that, based on net interest. A topic with a 1:50 interest level would draw from all 50 neighbouring clusters for its own base level. A topic with a 1:2500 interest would scale to two levels up -- this retains some level of "locality" whilst also allowing for broader discussion.
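
For what it's worth, a back-of-the-envelope sketch of that scaling, using only the illustrative numbers above (150-person groups, 50-way fan-out), lands on about 5 tiers to world scale and the same one-level-up / two-levels-up behaviour for 1:50 and 1:2500 topics:

    # Back-of-the-envelope sketch of the tiered-cluster idea above; the
    # group size, fan-out, and population are illustrative assumptions only.

    GROUP_SIZE = 150        # people per base group (Dunbar-ish)
    CLUSTER_FANOUT = 50     # groups (or clusters) combined at each tier
    WORLD_POPULATION = 7.5e9

    def levels_to_cover(population: float) -> int:
        """Tiers above the base groups needed to span `population` people."""
        covered = GROUP_SIZE
        levels = 0
        while covered < population:
            covered *= CLUSTER_FANOUT
            levels += 1
        return levels

    def topic_level(interest_ratio: float) -> int:
        """Tier at which a 1-in-`interest_ratio` topic gathers roughly a
        base-group's worth of interested people."""
        return levels_to_cover(GROUP_SIZE * interest_ratio)

    if __name__ == "__main__":
        print("tiers to world scale:", levels_to_cover(WORLD_POPULATION))  # 5
        print("1:50 topic lives", topic_level(50), "level up")             # 1
        print("1:2500 topic lives", topic_level(2500), "levels up")        # 2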

And branching out between clusters would be possible, but effectively with some additional cost -- UI/UX-imposed or other. Not large, but present.

There's an existing (or was) network that was organised somewhat on these terms, though I believe it's not done much. Building initial interest is difficult.


This is a good point.

You know Sturgeon's Law? Something along the lines of "90% of science fiction is crap, because 90% of everything is crap?"

Well, if you more or less agree with that, there is just so much more of everything that there is so much more crap.

Another commenter expressed this as a signal-to-noise ratio problem. It is that, but it is also that O(N) is bad news for very large values of N.


Sturgeon's Law is poorly behaved at Web Scale:

https://www.reddit.com/r/dredmorbius/comments/1yzvh3/refutat...


*Sturgeon's Revelation. Sturgeon's Law was actually something else Ted said: "Nothing is always absolutely so."


I think what you said is true. I also think that it is further proof of the Greater Internet Fuckwad Theory[1].

[1] https://www.penny-arcade.com/comic/2004/03/19


The internet can still be a great place to hang out. It's just a matter of finding a community, rather than one of the big sites. https://lobste.rs is one example that I've found (mostly, anyway); I'm sure there are various fora as well.


The term "troll" is meaningless these days. It's an umbrella for a number of different things, from traditional trolling to harassment to posting stuff other people disagree with.

With the rise of Reddit and Facebook, it's possible to live in a bubble, an echo chamber, and anything that takes you out of that bubble is considered trolling.

I remember in the late 90s everyone thought that the openness of the internet was going to tear down walls, since you got to expand your bubble. All it's done is reinforce the bubbles people live in.

I think this post outlines what is happening http://slatestarcodex.com/2014/09/30/i-can-tolerate-anything...


I would be interested in seeing more places try posting limits. It's easy for someone to shoot off hundreds of thoughtless messages, but it takes time and effort to make a single thoughtful post. A minority trying to do the former is going to inundate those doing the latter. And for those (probably most of us) who do a bit of both, it's likely that our less thoughtful, more rapid-fire output is going to dwarf our thoughtful output (just because the latter requires so much more time and effort).

Even in the cases where someone provides a lot of thoughtful output, a minority of active users can often drown out the majority of visitors. It would be interesting to see what kinds of communities would develop if there were efforts to encourage more even output.
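
As a strawman, a per-user daily posting quota is only a few lines; the limit, the storage, and the user IDs below are hypothetical placeholders rather than any real site's mechanism:

    # Minimal sketch of a per-user daily posting quota. The limit and the
    # in-memory storage are illustrative only; a real site would need
    # persistence and some resistance to sockpuppets.

    from collections import defaultdict
    from datetime import date

    class PostingLimiter:
        def __init__(self, limit: int = 10):   # 10 posts/day, purely illustrative
            self.limit = limit
            self._counts = defaultdict(int)    # (user_id, day) -> posts made

        def try_post(self, user_id: str) -> bool:
            """Record and allow the post if the user is under today's quota."""
            key = (user_id, date.today())
            if self._counts[key] >= self.limit:
                return False
            self._counts[key] += 1
            return True

    if __name__ == "__main__":
        limiter = PostingLimiter(limit=3)
        print([limiter.try_post("alice") for _ in range(5)])
        # [True, True, True, False, False]

The obvious weakness, as noted elsewhere in this thread, is that any fixed number also punishes legitimately prolific users and invites throwaway accounts.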


>I would be interested in seeing more places try posting limits.

This was proposed years ago on Reddit: Limit # of votes/posts per user per day.

Never implemented...


I like the concept, though I very strongly suspect it would be gamed.

Creating a system for imposing a cost on persistent identity -- not "real name" identity, but some form of valued persistence -- and then imposing rate-limiting, could be useful.

Reddit does put posting limits on new and unverified accounts, by subreddit. To the point of only one post every 5-10 minutes, if I recall (I've tripped that filter a few times).

Or as I've said a few times: "Who are you?" is the most expensive question in information technology. No matter how you get it wrong, you're screwed.
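
In the same spirit, a cooldown keyed to a persistent identity might look something like the sketch below; the tier thresholds and the use of account age as the "cost" are illustrative assumptions, not Reddit's (or anyone's) actual policy:

    # Sketch of cooldown-based rate limiting tied to a persistent identity.
    # The cooldown lengths and age-based tiers are illustrative guesses only.

    import time

    # (minimum account age in days, cooldown in seconds between posts)
    COOLDOWN_TIERS = [
        (0, 600),    # brand-new identity: one post per 10 minutes
        (30, 120),   # a month old: one post per 2 minutes
        (365, 10),   # a year old: effectively unthrottled
    ]

    class IdentityRateLimiter:
        def __init__(self):
            self._last_post = {}  # identity -> timestamp of last accepted post

        @staticmethod
        def _cooldown_for(account_age_days: float) -> int:
            cooldown = COOLDOWN_TIERS[0][1]
            for min_age, secs in COOLDOWN_TIERS:
                if account_age_days >= min_age:
                    cooldown = secs
            return cooldown

        def try_post(self, identity: str, account_age_days: float) -> bool:
            """Allow the post only if this identity's cooldown has elapsed."""
            now = time.monotonic()
            last = self._last_post.get(identity)
            if last is not None and now - last < self._cooldown_for(account_age_days):
                return False
            self._last_post[identity] = now
            return True

    if __name__ == "__main__":
        limiter = IdentityRateLimiter()
        print(limiter.try_post("new_account", 1))    # True
        print(limiter.try_post("new_account", 1))    # False, still in cooldown
        print(limiter.try_post("old_account", 400))  # True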


The problem there is that it is certainly possible to legitimately post hundreds of times a day. I mean yes, someone who posts or votes too much could be a troll. Or they could be a perfectly good, yet exceedingly dedicated community member with a bit too much free time and a lot of passion for a subject. Limits like those tend to screw over more active users in general.


> People who i don't agree with are winning the internet

fixed


I think the only way that trolling can be defeated is if people are educated in productive discussion, including why it is beneficial and how to carry it out, and also in how to respond to trolls without simply attacking back.


Acculturation is a component, hence Netiquette, the September Effect, and then of course, the Eternal September.

Acculturation only scales so far.


Welp, I'd rather have trolls "winning the internet", whatever that means, than have mainstream journalism "win the internet". All this autistic screeching from journalists these days is because they are losing their foothold; lying didn't help win the election and it won't "win the internet" either.


The internet is a reflection of the people using it.


The content of this article, and the reason for such articles, has been covered before:

http://web.archive.org/web/20140625080831/http://thelastpsyc...

As ever, trolls are not who you should be worried about.


I'd argue that bots and the ad-driven revenue model are more destructive. They're responsible for clickbait articles like this.


If you give people a big wall and free spray paint, of course they'll fill it with dicks. Unlike real life, there is nobody to paint over them. Trolling is a natural part of the internet; it's the price for having relative freedom, and I don't have a problem with it. If that puts off people who don't have the capacity to tune it out, all the better.


That's not true, of course. Most graffiti, murals, etc. are not of dicks. And if you have a community bulletin board that gets spray-painted with dicks, someone is possibly going to get fined or jailed.


> Unlike real life, there is nobody to paint over them.

In the real world having your tagging painted over is one of the saddest feelings. I wonder if there's a way to digitize that sensation but I think people are desensitized to things being deleted. It would have to be beyond that.


I've seen people respond similarly or analogously to having their posts or comments removed.

Which makes me cautious about invoking that particular remedy in any but particularly egregious or blatant cases. If I think there's any reasonable option for remediation, I'll try to avoid it.

Spam and blatant trollery, not so much.


Sure, but if the big wall is a map, and you're trying to find something on that map, all the dicks make it so much harder to find it that giving up in despair is often the reaction.

It's hard to tune out a cubic mile of dicks when you're looking for a pearl.


Well, in that case the problem with the Internet "map" is less the dicks drawn here and there than its being covered on every square centimeter by ads. Oh wait, I think I see the part of the map I needed... oh, no, just another ad designed to make me think it was part of the map.


If you walk in a room, and a bunch of teenagers are being noisy, do you leave, or call the cops?

If your favorite social network doesn't give you enough privacy to avoid trolls, or even abuse, go somewhere else.

People complain that trolls have the advantage of privacy, and recommend a police state as a solution.

Privacy is a tool. Use it. Don't let anyone take it away.


In my opinion, we have treated trolling too much as a technical problem and looked for technical answers. It had to fail.


https://www.youtube.com/watch?v=Co5jTvGlt6Y

Whenever I hear "troll" I think of this video from Kitchen Nightmares. Puts the picture in perspective.



They say this after years of studying FB and Twitter, but anyone who participated actively in USENET already knows that this is the inevitable outcome.


Or the early days of The WELL. From its own admins.


It seems that trolls are also winning leadership positions across the world, so why wouldn't they win the internet as well?


Perhaps there's a related sociological or psychological dynamic?


I think many of those trolls are actually bots or human-bot combinations.


Well the Atlantic trolled HN, so it's possible...


We should build and implement TrollTrace.com to use tracking and metadata to expose trolls on the Internet. Don't worry, it would only be used for trolls, not for regular "good" people.


A troll is anyone with an incompatible worldview. That's why when /r/democrats starts posting in /r/republicans everyone says everyone else is trolling.


If I'm having a rational discussion with someone who disagrees with me then I'm automatically a troll? I tend to feel the trolls are the hordes of people calling for me to be executed or using nothing but insults instead of contributing meaningfully.

Everyone is free to hold their own worldview, but shutting down discussions with "I hope you get raped in hell n*" isn't what I call a different worldview; it's what I call being an arsehole.


The ability to have a rational discussion comes from a place of [typically white, male] privilege, though. If you are being sexually harassed, racially discriminated against, or are otherwise in a psychologically unsafe position, you're not going to be in a position to "back up your opinion with facts" or "explain why you feel the way you do" or even listen to someone who calmly, rationally explains that "he's never made me feel uncomfortable", or "why didn't you report [being raped to the police] / [being discriminated against to HR]" or "I don't see race".


Well, I'm just talking about not being spammed with loads of shite, as opposed to the genuine prejudices you point out. I have a hard time really understanding the race thing as a Brit, though. I think the colour thing is a predominantly American problem; while I'm sure there is discrimination in the UK, I've never witnessed black folks here getting a harder time than anyone else, and I've lived with, loved, and employed people of a different colour to me most of my life, and it's not something I've ever had to really even contemplate.


[flagged]


That's a personal attack, and those are not allowed here. If you have concerns about other users breaking the site guidelines, flag their comments or email hn@ycombinator.com. Breaking the guidelines yourself just makes the site worse for everyone.

We detached this subthread from https://news.ycombinator.com/item?id=14007347 and marked it off-topic.


Once upon a time, "troll" meant someone trying to provoke a reaction for fun, from a type of fishing called "trolling", which involves dragging baited hooks through the water.

Lately, as in this article, the term seems to be expanding into a general purpose insult for anyone whose posts annoy someone, sometimes only because they express a different opinion.

Under the old definition, bad faith was a requirement. Having an honest argument or even being an honest jerk wasn't the same as being a troll. That seems to be changing.


A quick scan indicates plain-spoken and incisive, but good-natured. Maybe funny sometimes, but not in a mean way.


[flagged]


Maybe they're downvoting because of the belligerent tone, even though the thrust of your argument has some validity.

I believe the downfall of the weird wide web has been greatly exaggerated. There are still plenty of interesting corners of the internet.

Don't let Facebook and its kind prevent you from finding the interesting parts of the web.


What if most of them aren't "trolls" at all, but rather people who have something to say that is important to them and they choose a method to say it that upsets some people?

The base assumption is that many of them are of the old-school variety: they seek disruption for its own sake. Suppose for a moment that this base assumption should not be taken as axiomatic. What else might motivate such a person? Some of you are quick to jump to the conclusion that money is the only other motivation. But money isn't what motivates everyone. Lots of people create art, for example, and receive little to no money for what they do.

What if some people are trying to motivate others to think about the things they believe, say, and do? Their medium need not be literature, paintings, music, sculpture, and so forth. A sarcastic tongue can teach too. It can spread ideas and challenge assumptions. Writing a lengthy essay that nobody will read may be a counterproductive use of time.

People roll their eyes if one brings up philosophy in a casual setting. I see a lot of things people call "trolling" as small performances of philosophy. Not all of them are effective, virtuous, or well-informed. But some are. The ones that really get under our skin are the ones who actually understand us best; they know our weaknesses and exploit them. There is too much focus on who they are and whether or not they're getting paid (because they probably aren't) and not enough on what is being said. In a lot of cases, what isn't said is just as important. If the "troll" wants you to think about your own positions, he may just omit several critical details on purpose and let you fill in the blanks.

Diogenes didn't make people happy when he walked about in broad daylight with a lantern looking for "an honest man"; he made some of them laugh and made others angry. Socrates made enemies by arguing with people and showing them that they didn't really know why they believed as they did. I've had more productive conversations with people in my life when I took an adversarial position toward them under the veil of anonymity, even though I agreed with their position. If I can argue a position I don't agree with and hold my own, the other guy comes out of it with new ammunition to defeat it too (or more ambition to find some), and they often teach me a few tricks along the way.

It is entirely too easy to dismiss a person if all you think they want is to make you mad with what they've said.


"What if most of them aren't "trolls" at all, but rather people who have something to say that is important to them and they choose a method to say it that upsets some people?"

If the only way you can express your dislike of a black actress is to compare her to an ape, maybe just shut the fuck up.

"It is entirely too easy to dismiss a person if all you think they want is to make you mad with what they've said."

And it's entirely too easy to ignore the problem of real trolling and harassment if you think it's just people with "differing opinions".


>If the only way you can express your dislike of a black actress is to compare her to an ape, maybe just shut the fuck up.

??? You got me, yes, this is exactly what I do. There's a secret Nazi hiding in every bush and behind every corner ready to pounce and I'm part of the great Internet conspiracy of secret Nazis.

>And it's entirely too easy to ignore the problem of real trolling and harassment if you think it's just people with "differing opinions".

Harassment is totally the same thing. And if a hundred people show up to make fun of the dumb thing you said, it was an organized conspiracy to hurt your feelings and not myopia about potentially millions of people who normally see stuff and don't interact with it until it's too dumb to ignore.

If you've got people sending you death threats or violating sites' TOS, obviously report them. If they say rude things that have absolutely no value at all, ignore them. But if they're actually trying to pick an argument with you in good faith, they may not be a "troll" after all. FAR too many people have slid into thinking that anyone who has a different opinion and expresses it in an unpleasant way is out to "troll" or "harass" them.

I mean, look, you just attacked me, a complete stranger who has never done any of the things you brought up, because I dared to suggest that some belief you hold is wrong. How's that for proving a point?



