I find myself in a double bind about the idea. On one hand, it's a pretty passive-aggressive way of dealing with problem users. On the other hand, I can't think of any better ideas. The thing about problem users is that if you don't take care of them quickly enough, they tend to gather a following that ruins things for everyone else.
Why is it passive-aggressive? I see people here talking as if it were the responsibility of the community to engage with and resolve the problems of challenging and unproductive members. How many hours in the day are there? Why is it our expectation that everyone is owed attention, forbearance, and even satisfaction? These aren't social services we're talking about. There's no "due process" clause.
Is the problem that the community isn't telling the problem person that they're banned? There's a reason they don't: because by and large, when you tell someone they've been banned, they react (particularly in the heat of the moment) by throwing a temper tantrum with a new account. Again: why is it the obligation of the community to absorb that kind of abuse?
It isn't. It's just that as a general rule of thumb, I believe in openness and transparency as the morally correct thing to do, and I suspect pg agrees with me. Don't you agree with that at least to some extent?
And therein lies my double bind: how does one balance the need for a civil community with the hacker's dislike of things done in the shadows? Are you happy with the idea that users are secretly banned all the time?
If someone comes in and brazenly violates the rules and values of a community, they can't turn around and expect to benefit from those rules and values later on. They've broken the social contract. Hellbanning is perfectly transparent to everyone in the community except people who are hellbanned.
A problem arises, however, when what appears “brazen” to people who have interacted with a community for a long time is not so obvious to anyone else. Many community standards (such as “nice”) are nebulous, with vastly differing thresholds and signaling conventions, and that isn't something a newcomer can easily resolve up front. Applying subterfuge early in the process means a well-meaning entrant can lose simply by misreading things at the start, and then be denied the feedback that would let them make a more informed decision about their behavior. They're further punished by pouring their effort into an invisible hole while thinking all is well. Depending on who your visitors are, that's nontrivial collateral damage.
This. I've seen some long, serious, insightful [dead] comments which turned out to be from users hellbanned quite a while ago. If they aren't deliberately trolling, tricking them into continually wasting their time this way is a terrible thing to do.
Parking your car so that it blocks the alley is a terrible thing to do. Eating all the skin off a bucket of fried chicken is a terrible thing to do (as is buying a bucket of fried chicken). Serving warm beer is a terrible thing to do.
Failing to welcome the comments, well intentioned or otherwise, of someone who had to be explicitly driven out of a community is not a terrible thing to do.
Not all of those [dead] comments are actually from hellbanned users, are they?
How about this: for the ones who care enough to put contact info in their profiles, you can reach out to them and say "you may not have noticed, but the HN admins seem to have banned you".
I have sent dozens of those emails over the years. They cover only a small fraction of the productive [dead] comments I find, because not that many people have contact info or googleable usernames.