Suspension, Ban or Hellban? (codinghorror.com)
134 points by vijaydev on June 4, 2011 | 85 comments



BTW, HN has a hybrid hellban+slowban in case some still don't know.

Not only do you become invisible to others, but there is a long sleep delay on pages.

My old "_ck_" account somehow got flagged that way for reasons I don't understand and cannot find anyone to undo (and still would please like it back if possible?)


It has hellbanning, and, apart from, obviously, pg, I don't know a single person who thinks it's a good idea. Since I have showdead on I've seen numerous comments by people who have been hellbanned for years and still don't realise. The really horrible thing is, most of their comments are perfectly legit - sometimes all of their comments.

I remember reading a science fiction story where the punishment for certain crimes was that you got a special tag which meant that everyone would simply ignore you and refuse to acknowledge your existence, for a year. You became "invisible". Other "invisible" people also had to ignore you, iirc, otherwise their sentence would be extended. It was painted quite vividly as an extremely cruel form of punishment.

Personally, I find it extremely distasteful to simply make people invisible for the sake of not having to deal with them complaining about being banned.


I have no problem with hellbanning and am particularly glad to have 'pg and the HN illuminati managing it without community input. It's kind of horrifying to think about the level of drama that would come with transparency and "accountability" over who's hellbanned.

We have no inherent right to participate here.

There is, I think, a large extent to which this community survives because it's not an open, transparent democracy.


This is notably the first time I've found myself disagreeing with you, in many kilocomments over the past several years.

I once thought it was completely terrible, but changed my mind after the losethos guy showed up — secret hellbanning is a great tactic against active trolls, whether they're intentional or clueless.

Unfortunately it and individual deletions are the sole metamoderation mechanism on Hacker News — they work abysmally for managing good-faith users having a bad day or combatting spam, and there's an order of magnitude more of that than there are trolls.

The worst part is that HN's hellbanning isn't secret — you can just turn [showdead] on, defeating the point (see the "king of the shitpile" problem: http://metatalk.metafilter.com/1786/LPP-wall-of-shame). You'll see plenty of spam on the /new page, but for almost every [dead] comment you see you'll be baffled as to what happened. Sometimes it's just a duplicate post, and occasionally for hellbanned users you can page back through their history of good but [dead] comments and find a brief fit that could have been disciplined easily with a week's time out or even a simple admonishment to stop.

What really concerns me is all the individually-deleted comments and hellbanned users I've seen where there's no context at all as to why they're [dead] — the only thing I can think of is that they were eaten by the news.arc grue.

I understand what you're saying, that pg has every right to publish or [dead] or /dev/null our posts as he sees fit — but that's no reason for him to be cruel and capricious.


That's one thing SomethingAwful (which I believe coined the term "hellbanning" and popularized it) did better than most subsequent implementations imo. It's one of several kinds of bans there, and typically used when the person in question has in fact already been normal-banned before (sometimes several times), and come back with new accounts. Reduces the false-positive rate that way. (It was also more effective for that purpose back in the days when it was used very rarely, and not widely known about.)


You should disagree with me more often. It's more fun, and even I only agree with about 97% of what I write.

HN is poorly metamoderated. Yep. But I don't buy the argument that HN is run by cruel or capricious people. I have criticisms of HN management, but those aren't among them.

So, given that, if you've got a small group of people you trust not to be random jerks about it, who are putting in the time to keep the site running and keep the community at least somewhat functional, I think it's better for everyone that stuff like banning isn't an open transparent process for all of us to get wrapped up in.


Of course pg isn't intentionally cruel or capricious, but he does not assume good faith in others.

I think he sees in users here the naughtiness that he prizes in founders, and that leads him down the path of writing off everyone slightly combative as a troll.


I certainly agree. People can have off days, for which the punishment of permanent non-inclusion in the community (so long as they don't find out) is really harsh and unfair, worsened by the fact that there seems to be no method of recourse for anyone it's inflicted upon.

Alas, I find myself agreeing with tptacek in that I have no idea how a democracy of this would work, or even whether we have any right to demand such a thing.

(Opens Safari and checks that this comment is visible.)


There actually is a method of recourse: email pg and have a conversation about it. I am aware of quite a number of people who have at one time been hellbanned and have had that ban reversed.

And as long as I'm commenting, I support hellbanning as well. I haven't seen anybody mention the fact that pg does this as a part-time thing. He does not have time to hold everybody's hand, explain to them what they did wrong, and deal with the inevitable heaps of abuse. It is instead incumbent on every participant to figure out the mores and rules of this mini-society, and adhere to them.


I support hellbanning.

The reason you use hellbanning is not because you don't want to deal with complaints, but to prevent people from simply making new accounts and coming back to troll more.

Does it completely prevent this? No, a very determined griefer can easily run a second account on a second IP and check to see if/when they've been hellbanned, or just make new accounts all the time on principle.

There's certainly collateral damage, and I can see the argument that cost/benefit is off. But it's not like the policy is set out of laziness.


> Does it completely prevent this? No

But, of course, it's only a part of a larger system. Along with hellbans, we have downvotes, flags, comments from high-karma users showing up higher on the page, algorithms to detect voting rings and sockpuppets, and so on.

All of these things combine to make it very difficult to be an effective troll on HN. Trollish comments normally end up with poor visibility and therefore low engagement from other users, which essentially starves the trolls.


>I remember reading a science fiction story...

That story was "To See The Invisible Man" [1] by Robert Silverberg. There's apparently an adaptation in an episode of The Twilight Zone.

[1] http://en.wikipedia.org/wiki/To_See_the_Invisible_Man


Thanks for the reference. That is, indeed, the story!


I find myself in a double bind about the idea. On one hand, it's a pretty passive-aggressive way of dealing with problem users. On the other hand, I can't think of any better ideas. The thing about problem users is that if you don't take care of them quickly enough, they tend to gather a following that ruins things for everyone else.


Why is it passive-aggressive? I see people here talking as if it was the responsibility of the community to engage with and resolve the problems of challenging and unproductive members. How many hours in the day are there? Why is it our expectation that everyone is owed attention, forbearance, and even satisfaction? These aren't social services we're talking about. There's no "due process" clause.

Is the problem that the community isn't telling the problem person that they're banned? There's a reason they don't: because by and large, when you tell someone they've been banned, they react (particularly in the heat of the moment) by throwing a temper tantrum with a new account. Again: why is it the obligation of the community to absorb that kind of abuse?


It isn't. It's just that as a general rule of thumb, I believe in openness and transparency as the morally correct thing to do, and I suspect pg agrees with me. Don't you agree with that at least to some extent?

And therein lies my double bind: how does one balance the need for a civil community with the hacker's dislike of things done in the shadows? Are you happy with the idea that users are secretly banned all the time?


If someone comes in and brazenly violates the rules and values of a community, they can't turn around and expect to benefit from those rules and values later on. They've broken the social contract. Hellbanning is perfectly transparent to everyone in the community except people who are hellbanned.


A problem arises, however, when what appears “brazen” to people who have already interacted with a community for a long period is not so obvious to anyone else. Many community standards (such as “nice”) are nebulous and have vastly differing thresholds and signaling methods, and this is not something that can be easily resolved up-front. Applying subterfuge early in the process makes it possible for a well-meaning entrant to randomly lose by misinterpreting things early on, then being denied feedback that would allow them to make a more informed decision about their behavior. They are further punished by throwing their participatory resources into an invisible hole while thinking all is well. This is easily nontrivial collateral damage, depending on the set of visitors.


This. I've seen some long, serious, insightful [dead] comments which turned out to be from users hellbanned quite a while ago. If they aren't deliberately trolling, tricking them into continually wasting their time this way is a terrible thing to do.


Parking your car so that it blocks the alley is a terrible thing to do. Eating all the skin off a bucket of fried chicken is a terrible thing to do (as is buying a bucket of fried chicken). Serving warm beer is a terrible thing to do.

Failing to welcome the comments, well intentioned or otherwise, of someone who had to be explicitly driven out of a community is not a terrible thing to do.

Not all of those [dead] comments are actually from hellbanned users, are they?


Then tell them to leave. Don't waste their time.


Why? So they can yell at us? Because we owe them?

How about this: for the ones who care enough to put contact info in their profiles, you can reach out to them and say "you may not have noticed, but the HN admins seem to have banned you".


We don’t owe them. Lying to people is just wrong.

I have contacted dead accounts in the past; a problem is that it's hard to get contact info.

Hellbanning spambots I can understand. Hellbanning people?


I have sent dozens of those emails over the years. It's only a small fraction of the [dead] productive comments I find, but not that many people have contact info or googleable usernames.


Whenever I've turned on showdead, the only hellbanned comments I've seen were either patent nonsense or worse. Sometimes it was a user that alternates between legitimate comments and patent nonsense. Do you have an example to share?


I submit my previous account, "bbot", as well.

I only found out I was hellbanned when someone who had showdead on noticed that a (fairly long) technical comment I left on a Bloom Energy post was killed. The comment itself wasn't to blame; rather, a comment left a hundred days earlier ended up in negative karma and nuked my entire account.


If I may toot my own horn, I submit my old account, chbarts, for your consideration.


Your comments weren't nonsensical. They just had a bad habit of calling people names.


Fair enough. You were a bit of an asshole around the time you were hellbanned, but you had a fair number of decent contributions afterwards.


The biggest problem I have with this is the wasted energy by the hellbanned.


I'm finding myself not having trouble with wasting the time of people who previously sunk their time into deliberately wasting everyone else's time.


The old eye for an eye chestnut, eh? It's fine for a time, but if the account remains active there ought to be some form of review.


"An eye for an eye" is about revenge. This is about valuing someone else's time no more highly than they value yours.


You seem to think that the hellbanned masses are all trolls, and that is absolutely not the case.

Many of them were just too passionate about a particular issue one day. Quite a few seem to have done nothing untoward at all.


Rubbish. It's about inflicting a punitive measure on those who violate the community's rules.

I don't really have a problem with that right up until the point where you wittingly waste someone's time because they previously wasted yours.


I didn't hellban anybody. But I have an idea who did. And, you know what? If you're gonna ask me, "who's more likely acting in good faith, the hellbanners on HN or the hellbanned?" it's not even a little challenging for me to answer that. The people managing HN are not mean-spirited.


> I didn't hellban anybody. But I have an idea who did. And, you know what? If you're gonna ask me, "who's more likely acting in good faith, the hellbanners on HN or the hellbanned?" it's not even a little challenging for me to answer that. The people managing HN are not mean-spirited.

In asking that I'd be turning this into a discussion about deontology and I doubt anyone's interests are served thus.

However, I do take your point. It just happens that I also have a considerable problem with the ethics of reciprocal justice.


Being "invisible" for a year surrounded by people would be an interesting experience, I think.

Besides, of course, the practical problems, such as how you buy food. Do you have to steal it? (Easy, as nobody "sees" you.)


Yeah, they had to steal it, and it was easy, but that was considerably outweighed by the mental consequences of having no social interactions for a year.


When I first read what a "hellban" is, my first thought was "The Sixth Sense".


I imagine the hellbanned users' comments get some random upvotes to keep them extra-clueless, no?


I've also been hellbanned before. I totally deserved it. I decided to shape up and play nicer with others. I also tempered the controversial, wildly politically incorrect conversation pieces. Now I think I'm a model HNer! Hopefully this account does not become hellbanned. Given my invitations to startup school I think I'm doing OK.


It's worth noting that the author's non-transparent heavy-handedness towards "niceness" in meta has driven away at least one of the self-governing, power-to-the-people moderators he advocates.

http://news.ycombinator.com/item?id=2473029

I'm disturbed by the idea that concealment is considered an appropriate response to social problems you don't know how to solve (I'm aware pg does it also).


Just to clarify, all that was over a single removed unconstructive comment (not a question or answer) on a meta. Literally. Does it get more trivial than that? The topic of the blog post was about removal of users from the main site, which is a far more serious matter.


I think it's up to Dr. Clark to decide how "trivial" a matter it is to see his words unilaterally scrubbed, as if they never existed.

And as far as the meta sites go, I've found meta.SO to be an echo chamber; ideas aren't voted up or down so much on how they contribute to the discussion (as it goes on HN), but on how much the cool clique agrees.


Spolsky's response to that was, to me, so authoritative that I found myself physically nodding my head. I am not always nice, but I find it hard to argue about the value of that norm. So I read your comment as a bit of a "thrown elbow". What you call "non-transparent heavy handedness", I think many reasonable people could call "active nurturing and cultivation of a functioning community". You don't have to like that community. Start your own.


My first reaction was that this is terrible: just giving someone the silent treatment does nothing to clue them in to what they did to deserve it. Assuming they are not a troll who knows what they're doing is wrong, the person may not understand that what they are doing is not acceptable; perhaps it's a cultural difference, perhaps social issues they have, perhaps simple ignorance. Giving them the silent treatment will do nothing; they obviously already have issues with socializing, so "getting the hint" probably isn't their strong point.

Then I started thinking about what was said about this being "reality altering", which I disagree with. If you are not good at socializing, or are annoying, no one is going to want you involved in conversations or to participate in activities with you. Much like ratings or web hits, if people don't like you or what you have to say, you effectively get the silent treatment, or are completely ignored, or worse. If you do have relevant things to say and are socially well behaved, people aren't going to ignore you; they will want you to participate in their activities, because you have something to offer and it's in their best interest to let you participate. That is, of course, assuming they don't have some prior misconception about you because of race, sex, jealousy, or other social failures of their own. They will accept you instead of doing all the things they do to hint that you should "go away", like ignoring you or not telling you about social gatherings. This is all very natural, and mimics the real world quite well.


I have considered the "hellban" in the past for supertrolls on the Dark Mists forums, but I think our players are not so easily duped, and it could lead to worse fires to put out. The problem is that they do not just have one account on the forum, and they do not participate in isolation from other players; people talk to each other.

"Did you see my post where I told off Xurinos? No? Censorship! Big brother! I'll make another account and spam it."

In other words, hellbanning is not really different from obvious bans. And frankly, players will communicate via AIM outside of the forum anyway.

It is a lot more difficult, but there are ways that moderators can work with a community to establish the social rules and expectations, such that the entire community is in support of putting down the trolls. And the easiest way to drop a troll is to ignore them, to not validate them. After years of trying different techniques in DM, this has been the most effective.

On occasion, we still ban or delete posts, but we make the reasons clear and consistent with the ethics of the community. There is always some kind of drama of the month. Most of the time, we can just move on to doing more important/fun things like playing the game.

One addition to this: The players understand that if we had to ban them on the forum, they also have their character (associated with that account) banned. So banning is a bit more painful than a simple rejection from public communication and has to be done with due care.


Interesting... what about a progressive hellban, where there's a probability that the hellbanned user's participation is seen by others? The probability would exponentially decay after each offense and be slowly restored as a function of time without offense.
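
A minimal sketch of what that might look like, purely as illustration; the per-offense decay factor and recovery half-life are made-up parameters, not anyone's actual policy:

    import random

    # Sketch of the "progressive hellban" idea above: each offense exponentially
    # decays the probability that the user's comments are shown to others, and
    # the probability slowly recovers with offense-free time.
    DECAY_PER_OFFENSE = 0.5        # assumed: each offense halves base visibility
    RECOVERY_HALF_LIFE_DAYS = 30   # assumed: offense-free days to regain half the gap

    def visibility_probability(offense_count, days_since_last_offense):
        penalty = DECAY_PER_OFFENSE ** offense_count
        recovery = 1 - 0.5 ** (days_since_last_offense / RECOVERY_HALF_LIFE_DAYS)
        return min(1.0, penalty + (1 - penalty) * recovery)

    def show_comment_to_others(offense_count, days_since_last_offense):
        # Roll the dice once per viewer (or once per comment, depending on design).
        return random.random() < visibility_probability(offense_count, days_since_last_offense)

With zero offenses the probability is 1.0, and each additional offense halves it; a month of good behavior then closes half the remaining gap back toward full visibility.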


That could just lead to fragmented conversations that serve to further confuse the average user.


What if replies to a progressively hellbanned user were themselves affected by the ban (tainted)? That would address the fragmentation.


It would also result in a lot of good content written by respected community members being hidden.


How about a ban that progressively hides the account from high-karma users? In the first circle of hell, only 10k+ users don't see the banned; in the second circle, 5k+; and so on.


I think the other way around might actually be better. Newer, lower-karma users tend to be influenced by the comments they see; older, higher-karma users tend to be better about upholding the old HN values. If you make problematic users invisible to the old guard, newer users will grow to accept certain forms of behavior and even upvote it. If you make problematic users visible only to the old guard, they'll get better feedback.


I wonder if anyone has tried slowing down some posts - sort of like a "hellban," except that the user's comments do show up for others after a few hours, making back-and-forth arguments less likely.
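
Something like the following sketch, assuming comment and viewer records with the obvious fields; the three-hour delay and the field names are illustrative, not from any real site:

    import time

    # Sketch of the delayed-visibility idea: a "slowed" user's comments appear to
    # everyone else only after a fixed delay, which makes heated back-and-forth
    # exchanges hard to sustain.
    VISIBILITY_DELAY_SECONDS = 3 * 60 * 60   # assumed: three hours

    def comment_visible_to(comment, viewer, now=None):
        now = now if now is not None else time.time()
        if viewer["id"] == comment["author_id"]:
            return True                       # the author always sees their own comment
        if not comment["author_is_slowed"]:
            return True
        return now - comment["created_at"] >= VISIBILITY_DELAY_SECONDS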


I think this would be an interesting twist on the "slowban" from the article. It would certainly impede argumentative behavior.


Power without transparency inevitably leads to power that is abused. In the west we have built our civil systems around that theory.

What are we risking by not having this transparency on HN? By not having transparency on what accounts are banned, and why? Only the value of the discourse that occurs here.

If the value of the communications that happen here is high enough, our interest in having protections for that communication should be just as high.

If the value of this discourse is low, why are we putting on airs and pretending we need harsh measures, such as slowbanning, to protect it?

Sooner or later, any human, even one as benevolent as pg, will abuse their power over others. We need access to the reasons that accounts are secretly banned or slowbanned, so that we can pressure moderators (or whoever has that power) to wield the punishments fairly, and petition for unfair punishments to be reversed.


"Power without transparency inevitably leads to power that is abused" is true only for some value of "inevitably". You can have a benevolent dictator; you can't have three consecutive benevolent dictators. So long as we don't expect HN to outlive pg, we don't need to be all that concerned about serious abuse. The question is whether the cost of potential abuse is greater than the cost of trolls being able to create new accounts when they get banned.


>"You can have a benevolent dictator"

Or in the case of HN a philosopher king.


I'd never even heard of this practice, hellbanning, slowbanning. I gotta say, and this may not be a popular sentiment, that I view using such tactics as dishonest. You got a problem with someone, deal with it head-on. Don't resort to passive-aggressiveness, and don't yield to the oh-so-human desire for revenge, the delicious poison of helping someone get what you feel they have coming to them.

Deal with problem users directly and honestly. Warn them, temp-ban them if it doesn't work, permaban them if they force you to, but not without telling them why.

I'm not speaking without experience. I've been the moderator of political threads in the off-topic section of a college football board for about six years. We've had to deal with some crazies.


It turns out that if you ban a certain set of users they will react strongly, often causing many more problems. At Chesspark I was just amazed at the lengths people would go to to get revenge for some perceived wrong when they got reprimanded for violating community standards.

Making them think nothing happened but removing them from the rest of the site is extremely effective.

Blackholing IPs is a poor solution in practice, since many users can change IPs on a whim, and some ISPs run DHCP out of large subnets that may contain respectable users.


I'm not sure how I feel about this suggestion but it seems worth pondering: If you have an advertising-supported site and advertising is what is supporting the community then it seems reasonable to show more or less advertising based on the contribution of individual members of the community.

The most valuable contributors would see little (or possibly no) advertising in exchange for the value they are contributing with their time. People who are disruptive have a cost and the compensation for that would be to gradually increase the level of advertising shown to them.

Clearly, this isn't perfect, but no system is and there is at least a rationale for it.


I think Slashdot allows high karma users to disable advertising actually.


Stack Overflow and its sister sites actually do reduce the advertising in response to your level of karma.


The problem with this democratic moderation is... it's a bit self-affirming and oblivious to its hidden costs. For instance the moderators on Programmers.stackExchange constantly close topics for any reason they can find, even if it was interesting reading and people were providing answers. Not to mention a lot of the topics which are 'legit' are much less interesting. Result? Puts people off, site gets gradually more boring, but they think everything's going great.


I'm a guest, so I like listening in (and participating at times) to the community. As a guest, it's a privilege to be here.

But I have a very simple rule about websites: the site should do things that I naturally expect. If I provide a credit card, I don't expect that credit card to be used for other purchases. If I provide an email, I don't expect to be spammed. If I cancel a membership, I don't expect to be able to access the site.

And if I provide a comment that appears to be legit, I expect other people to be able to read it. When I vote something up, I expect that vote to make a difference. In short, if you provide what appears to be a way of communicating with others, it had damn well better be a way of communicating.

When website owners violate that standard of fairness, sorry to say, I find it unethical. They are presenting me with a picture of the world that they know not to be true. Not as bad as using my credit card to make other purchases, but bad.

The "needs of the many outweigh the needs of the few" argument is fundamentally flawed. It's the obnoxious protester who turns out to be right every now and then. It's the guy pleading for an unpopular cause that manages to sway public opinion. In short, we desperately need diversity of opinion and manners.

What happens is, people change and sites no longer serve them. Whatever the intent of the "feature", the effect seems to be preventing folks from changing. To enforce group homogeneity.

Like I said, I'm happy to be here and follow whatever rules there are. But this hellbanning shit is way fucked up. I don't care how many millions of users you have, screwing over folks for the greater good -- and lying to their face about it -- isn't a good thing at all.

How many hours of people's lives do you get to rob them of, pretending to let them publicly comment, before it's a bad thing? 10? 100? 1000? If a thousand people were actively commenting and nobody could read them, where do they go to get that part of their life back? Who says it's okay to lie like that? Just because you are a guest in somebody's house, they should treat you this way? I don't think so.

We act as if people are simply cogs in some great machine, the machine of the site. Not precious humans.

My opinion, for what it's worth. A bit over the top and theatrical, sure, but I exaggerate to make a point. Hopefully folks are able to read and understand it. There's no bright line between "I hellbanned this guy for being that .01% of folks who are impossible to deal with" and "I didn't like Joe, so let's just let him think he's contributing". Once you start down this road, there is no turning off. It's either acceptable or not. To me it's not.

These are tough problems, yes. But simply because you have something that works doesn't mean that it is the right thing to do. EDIT: Note slowbanning is fine. Nobody says you have to have a responsive site, just an honest one.


Fair enough. This is the way everybody feels before they try to run a community site.

But the thing is, as soon as you have a big community that you're personally responsible for, you find that there are people who you just really wish would go away. They're making the place worse, possibly without even realizing it. You need to get rid of them, but if you kick them out explicitly they'll just make a new account and come back. The most important thing is to stop them making things worse right now. Anything else is secondary.

It's important to consider that online communities are in no way democratic and do not have the concept of "free speech". That's tough to swallow, having spent your whole life knowing how free speech is the most important thing ever.

"Free Speech" is you being able to write whatever you want on your blog without the government taking it down. It's very different from being able to write whatever you want on HackerNews without the spam filter taking it down.

If you need any explanation as to why, turn "showdead" on for a few days and try to pick out the content from in between the real-estate spam.


Unfortunately, banning abusive or spamming users is basically providing feedback that they have been caught. Letting them keep expending effort, instead of just creating a new account that will also have to be caught and banned, creates more difficult terrain for the abuser.

"hellbanning" in my mind, though, ought typically be applied to automated spammers and other miscreants, not actual humans with opinions, etc.


Automated spammers wouldn't care if their input was ignored or not. Hellbanning would have a good chance of working only with real humans (miscreants included).


The point of hellbanning is keeping the spam coming from one username or IP, and then not displaying it. If you throw a "You have been banned" message back to a bot, then eventually a human is going to read it, make note of it, and recreate the account or connect from a different IP.


Huh? Of course they want their input not to be ignored.


> It's the obnoxious protester who turns out to be right every now and then. It's the guy pleading for an unpopular cause that manages to sway public opinion. In short, we desperately need diversity of opinion and manners.

It's the obnoxious users who destroy that diversity by driving out everyone else. Getting rid of them restores it.

> How many hours of people's lives do you get to rob them of, pretending to let them publicly comment, before it's a bad thing?

How many hours of your and other people's lives are you willing to let a disruptive, vitriolic user steal before it's worse than the alternative?

> There's no bright line between "I hellbanned this guy for being that .01% of folks who are impossible to deal with" and "I didn't like Joe, so let's just let him think he's contributing" You start down this road, there is no turning off.

Slippery slope arguments are a pretty poor excuse for an ethical system in the face of moral grey areas and conflicting issues.


This topic reminds me somewhat of DRM -- specifically, #2 and #3 remind me of Titan Quest's blocks that caused pirated copies of the game to malfunction and crash without warning.

The problem is, many believe this actually backfired for Titan Quest: people reviewed the game and said it was buggy and would crash often, so people didn't buy the game, even when these bugs only affected the pirates.

Couldn't this happen too, with websites that are trying to rise in popularity? If you have users that are getting slowbanned or errorbanned, they will move away from your site. They might also tell other users that your site is slow and glitchy, and to stay away as well. In the end, this could very well hurt your site more than it is helping it.

And also, what about legitimate users? If a user ever gets a 404, a slow-loading page, or no one responding to their queries, they will wonder if they've been hit by a ban. Do you want legitimate users (the majority) to have to worry about something that affects only a small number of problem users?


% of game pirates >> % of hellbanned users


Another more traditional name for hellban is shadow ban. Just saying, for those who remember the modems and BBSes :)


Other users should simply stop responding to the bad guys. It works like hellbanning, but it's more acceptable. I have seen this working in some of my forums. Trouble is, "legitimate" users have a bad habit of keeping the conversation alive, so they are responsible too. Ignore the bad guys and they leave; respond to them and accept the consequences.


That's exactly what hellbanning does. It enforces the no-feeding-the-trolls rule. You can't keep a conversation alive with a troll if the troll is hellbanned. The site ignores them for you.


Yes, but the hellbanned guy does not know it, which makes it completely different. The "government" is not acting transparently here, so it's not democratic. Anyway, my proposition is not applicable to most situations; it's just an ideal one.


#2 and #3...every reddit user just became paranoid.


The Stack Overflow model seems very similar to the successful models of European prison reform systems: gradually reward inmates with many incremental levels of privileges for good behavior, and dial back otherwise.


One drawback to the various forms of invisible bans is that they aren't visible to the rest of the community. The HN "showdead" feature allows the banned comments to be seen, but relatively few people (I assume) use that feature.

One good way for community members to learn what is or isn't acceptable is by example.

Approaches like disemvoweling (and like HN's graying of down-voted comments) have the advantage that the offending post is still available as an example to other members of the community, quite clearly saying "don't be like this".

Another distinction is that disemvoweling is an action taken against comments, not users. Ideally, a user's first offensive post is disemvoweled, but they're allowed to come back and make a second attempt at civil conversation.
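
For anyone unfamiliar with the technique, disemvoweling is literally just stripping the vowels out of a comment, which leaves it on display as a warning while draining most of its punch. A trivial sketch (the function name is just for illustration):

    import re

    def disemvowel(text):
        # Remove all vowels from an offending comment, leaving it legible
        # enough to serve as a visible example of what not to do.
        return re.sub(r"[aeiouAEIOU]", "", text)

    # disemvowel("You are all wrong and I will not be silenced")
    #   -> "Y r ll wrng nd  wll nt b slncd"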

Of course, you still need a way to deal with the persistent offenders who refuse to learn from their experiences. For them, banning of whatever flavor seems perfectly reasonable to me.

On http://nielsenhayden.com/makinglight/, where I've seen the most use of disemvoweling (and several discussions of it and other techniques for defending community and rational conversation), disemvoweling of a comment is often followed by a moderator's brief explanation to fellow community members what about the comment was objectionable. There's also often an invitation to the commenter to try alternative approaches to the discussion, or to join in other conversations on the site that might be easier to discuss politely.


Open discussion, democracy and transparency is certainly the goal of the internet at large, but individual sites have different incentives. The moderator has curating as one of their responsibilities and they probably have a general sense of the kind of discussion they'd like to foster.

Moderators and site-owners should take all measures at their disposal so that they can shape the overall experience of the site they envision. In the end, whatever mix of authoritarianism and democracy achieves that is fine.

That being said, hellbanning seems cruel. I think that all punishment should be explicit, because the trick is to cultivate reasonable users rather than to pick and choose. I'd prefer some sort of comment rate adjustment as punishment (1 per day, 1 per hour, 1 per quarter hour, etc.). From personal experience, most obnoxious comments derive from a mixture of caffeine and passion.
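
A rate limit like that could be as simple as the sketch below; the tier values mirror the ones suggested above, while the strike counter and function names are assumptions for illustration:

    from datetime import timedelta

    # Sketch of the comment-rate punishment: rather than a ban, each "strike"
    # lengthens the minimum interval between a user's comments.
    RATE_TIERS = [
        timedelta(0),             # no strikes: unrestricted
        timedelta(minutes=15),    # one comment per quarter hour
        timedelta(hours=1),       # one per hour
        timedelta(days=1),        # one per day
    ]

    def may_comment(strikes, time_since_last_comment):
        limit = RATE_TIERS[min(strikes, len(RATE_TIERS) - 1)]
        return time_since_last_comment >= limit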


Judging from the comments from folks here who were hellbanned and then subsequently modified their behavior, the banning options in the software should include an easy way to review periodic samples of a banned user's posts, so they can be unbanned if the issue was a short-term anomaly.


I love the hellban idea. It's like the light and dark world from The Legend of Zelda.


Hellbanning? Wow. Talk about paternalism, judgment-elitism and power-madness.

Hellbanning really has to be a method of last resort for users who repeatedly come back with problems for the community, and with at least two admins agreeing on the ban. Not as an "I don't agree with this guy, byee".

I was once hellbanned from reddit for two months before I noticed it. I had never posted any really offensive posts and nothing even close to trolling; I had many top posts in discussions on the programming subreddit, and then suddenly the account went dark. After I noticed the ban I asked an admin about it, and they did not even have a record of why it was banned; he took a look at my posts and concluded there had to be some mistake. WTF?! "Some mistake". During the ban I had spent tons of time replying to posts, thinking it would be of help or interest to someone, all just going down into some black hole.

The positive I got out of that experience was that posting on internet forums isn't worth my time, and I stopped posting on almost all internet forums I had previously been active in. (And yet still I am here writing this shit ^_^ )

If I did something wrong I want to know about it. Not everyone is an active troll, and I find it hard to believe the trolling problem is so big that you have to throw hellbans around left and right without even thinking. There are many stories similar to mine, especially on reddit.



