
Don't worry, the moderator is very sensitive to this issue. There are going to be fairly explicit standards about what should be endorsed and what shouldn't. Roughly, things shouldn't be endorsed if they are gratuitously uncivil or content-free, but there will presumably be a fully detailed version of that policy. And there will probably also be a version of showdead for pending comments. (Already anyone who can endorse can see them at /pending.)

We're hoping that this feature will make the site more hospitable to women. That wasn't the only reason for doing it, because users get stupid and/or nasty about lots of topics (or any topic, if they happen to start insulting one another), but certainly some of the most cringe-worthy threads we've seen have involved female programmers. So it would be hard to imagine a version of "working" in which pending comments worked but didn't thereby make HN more welcoming to women.




I'm glad this is being considered, but I'm worried that it's easy for non-majority opinions to be interpreted as "gratuitously uncivil", especially on emotion-generating topics.

For example, if a person has had a bad experience being harassed by a coworker and writes a clear and honest comment about this as a problem, that can easily be interpreted (for example, by someone unfamiliar with being harassed) as an uncivil thing to say about a coworker. There is a known pattern called "tone policing": asking less-privileged people to sound "not so angry" when they talk about something problematic or upsetting from their lives. I think of this pattern as a defense mechanism for protecting yourself from understanding somebody else's pain (especially if it challenges something about your status), and it seems likely to be a bias here, even an unconscious one.

Instead, I'd love to see explicit standards for comments on a visible community guidelines page, and an efficient flagging system that reinforces these guidelines.


The sort of incivility we're worried about is the more explicit type, where someone replies to a comment with "You are an idiot. Don't you realize that x y z?" when they could have simply said "x y z." I'm pretty sure that if HN cuts down on that type of comment, people who are earnest but upset about something will come out ahead of the people they might be arguing with, who are merely thoughtless jerks.


So, a lot of people's first introduction to all of this was the comment Sam made in his article "What I've Learned From Female Founders So Far" about how "we're working on something to improve the quality of Hacker News comments". It is therefore not surprising that people are trying to analyze whether this feature will help that specific issue ;P.

FWIW, I think that this is a different problem than the "you are an idiot" problem: I think a lot of the threads that people take issue with when it comes to attitudes towards under-represented groups are not (at least not entirely) filled with this "explicit type" of negative comment; it is instead the more insidious, implicit type that is under scrutiny.

Regardless, I don't see why the endorsement system would somehow work against either of these problems when the voting mechanism hasn't: the comments that start with "you are an idiot, don't you realize that" are currently getting upvotes, so why would we presume they won't also get endorsements? Are high-karma users voting differently?


I would love to see that kind of comment cut out too, but I am concerned that pending comments won't actually achieve that goal - that biases (unconscious and not) will sneak into the patterns of what gets approved.


The moderator is concerned about that too. But if it did start to happen it would be pretty obvious. So it seems worth trying to see if pending comments can be tuned to cut obvious crap without eliminating stuff that's merely controversial.


That sort of thing needs a flag mechanism, not an a priori vetting of each comment. Enough flags = lock replies, a few more flags = hide comment.


Exactly! And really, we already have a flagging system: the downvote. My understanding is that comments on HN should only be downvoted for the same reasons that pending comments would not be approved. So why not improve on that existing system? When a comment is downvoted, the downvoter could be asked for the reason for the 'flag', with choices representing the various commenting rules. This should help ensure that people are using downvotes correctly, which in turn would allow downvotes to be acted on more strongly. Sufficient downvotes (or sufficient downvotes citing the same reason) could hide the comment entirely.
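
Something like this, roughly (just a sketch in Python; the reason list and thresholds are made up for illustration, not anything HN actually implements):

    # Hypothetical downvote-as-flag mechanism with reasons and thresholds.
    from collections import Counter

    REASONS = ("personal attack", "off topic", "flamebait", "content-free")
    HIDE_TOTAL = 8        # hide after this many reasoned downvotes overall
    HIDE_SAME_REASON = 5  # or after this many citing the same rule

    class Comment:
        def __init__(self, text):
            self.text = text
            self.flags = Counter()   # reason -> count
            self.hidden = False

        def downvote(self, reason):
            if reason not in REASONS:
                raise ValueError("downvotes must cite a commenting rule")
            self.flags[reason] += 1
            if (sum(self.flags.values()) >= HIDE_TOTAL
                    or max(self.flags.values()) >= HIDE_SAME_REASON):
                self.hidden = True

    c = Comment("You are an idiot. Don't you realize that x y z?")
    for _ in range(5):
        c.downvote("personal attack")
    print(c.hidden)  # True: five downvotes citing the same rule hide it

The point of tracking reasons separately is that moderation could then act on a pattern (several "personal attack" flags, say) rather than on raw unpopularity.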


Mmm. This smells a bit like the League of Legends flagging and Tribunal system, which has worked very well for them. I wonder if HN has enough active readers to make a Tribunal work well.


I've decided to start using a throwaway for political discussions...

I would personally approve any comment that discusses a lived experience in concrete terms.

That being said re: tone policing, there is a segment of the social justice crowd on HN that will make inflammatory generalizations about large groups of people. Generally this will take the form of "X happens because of goddamn white cishet men." Inevitably someone will call these comments out and the response will be that it is justified to speak this way about the "oppressor class" and that anyone who "tone polices" these comments is ignoring the "power dynamics" of the situation.

In my opinion these comments do nothing but add fuel to the fire and I will never approve them. They are wrong for the same reasons gross generalizations about minorities are wrong.

And no, I'm not white, but yes, I am a man.


My take on tone policing is that it is a mechanism (whether conscious or unconscious) for telling people they shouldn't be as passionate and concerned about something as they are.

As such, it's relative to the tone of the forum (be it HN, tumblr, irl, wherever). If it's acceptable for me to say "goddamn NSA fucking ruining the internet", then it's acceptable for me to say "goddamn cis people who fucking ruin the internet". [1]

There are plenty of people on HN who speak passionately about many things, and who do so with expletives, curses, and blunt, punchy statements. Personally I hope they can continue, but either way I hope it is explicitly allowed or discouraged in the guidelines.

Speaking to over-generalisations: it's a hard one. There is undoubtedly a need to talk about the cumulative effect of oppressive groups, and there is definitely a need to talk about subsections of oppressive groups who are actively doing oppressive things (again, whether consciously or unconsciously, knowingly or unknowingly).

Unfortunately we haven't yet figured out or settled on language that differentiates between the two without involving a whole bunch of clauses and caveats. And in a forum where people are permitted to express their passion, extra clauses and caveats weaken the impact. When you are trying to communicate how fucked up a situation is to someone, it's exhausting and derailing to have to continually validate them and say "of course I don't mean everyone here; implicitly, from the context, I mean everyone who is engaging in this behaviour".

[1] I had to think a bit then for the phrasing where I could most easily argue "of course I don't mean every cis person" without also weakening the statement with extra clauses.


Excellent points. Are there any examples of guidelines you think would be good starting points?


It's interesting to look at the format of the Django Code of Conduct, although the content would be different since this is for a project instead of a forum: https://www.djangoproject.com/conduct/ - including detailed explanations of what it considers important, examples of types of problems, a FAQ explaining the code of conduct, a reporting guide, an enforcement manual, and a changelog.

The Flickr Community Guidelines are also interesting in format: https://www.flickr.com/help/guidelines/ - with many very specific details about acceptable and unacceptable behavior.


Thanks!



