Hacker News
why I cancelled my ETech presentations (headrush.typepad.com)
6 points by bootload on March 27, 2007 | 10 comments



And here's another post, by Ronni Bennett, taking a slightly different view.

'... As far as can be determined from the few facts she relates, the attacks on Kathy were made anonymously. However, she has tried and convicted Chris Locke, Jeneane Sessum, Allen Herrel, Frank Paynter and, to a lesser extent, Doc Searls without a shred of proof that they were involved. As Chris notes in his rebuttal post, ...' [0]

Reference

[0] Ronni Bennett, BlogHer.org, 'The Matter of Kathy Sierra', http://blogher.org/node/17339


One thing we can't tell since the original websites are down: Were these attacks made in the comments or were they first-class posts?

Bloggers aren't responsible for things said in their comments.


Valid Identity:

'... Were these attacks made in the comments or were they first-class posts? ...'

Good point. Would a solution be to only allow comments if the commenter supplies an OpenID? [0] The idea being that you have to identify yourself to comment. Would validation of identity be one way to help solve this?

'... Bloggers aren't responsible for things said in their comments. ...'

I wonder how the lawyers would view that? How much due care do you have to take? The thing with comments is, if you allow anyone to comment without limits, it's your reputation that will be smeared. If you have control or ownership of a system, you are responsible for it. So it pays to control and restrict who can comment and what comments are made. This is one of the key points allegedly made: why do reputable sites allow such posts to persist?

Post Restriction:

I've seen a variety of techniques, from timed posts, or posts held until authorised, to the extreme of no comments at all. They work to some degree but do not scale well. My personal idea is a blog with no comments that points to a separate system requiring login (overhead, time, effort & identity) in order to comment.
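
Roughly what I have in mind (just a sketch; the class and method names are made up, not from any real blog software): every comment lands in a pending queue and nothing appears on the page until the owner approves it.

    # Minimal sketch of a hold-until-authorised comment queue.
    from dataclasses import dataclass

    @dataclass
    class Comment:
        author: str   # identity supplied at login
        body: str

    class ModeratedPost:
        def __init__(self):
            self.pending = []    # comments held for review
            self.published = []  # comments visible to readers

        def submit(self, author, body):
            # Every comment is held first; nothing goes straight to the page.
            self.pending.append(Comment(author, body))

        def authorise(self, index):
            # The owner explicitly approves a comment before it appears.
            self.published.append(self.pending.pop(index))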

Reference

[0] No, because OpenID is not a trust system. Maybe I should have read more about it before I posted ~ http://openid.net/about.bml


I want to separate what a website owner should do from what he is socially held accountable for. Consider the goatse trolls on Slashdot. Slashdot introduced moderation to improve the experience of its users; people don't go around accusing Slashdot's creators of writing offensive comments. There is a difference. [1]

That is just what reasonable people think, IMO. I can't comment on the legal implications. I'm not aware of a legal precedent online, whereas in the old media there was stringent enough control of the medium to hold the proprietors responsible for what appeared in it.

Presuming guilt by association is irrational and often unfair, but that doesn't stop it from happening.

[1] If anything, accusing a website owner of writing anonymous comments on his own website would be a little bizarre (I'm not saying anybody made such an accusation).


"Bloggers aren't responsible for things said in their comments."

Sure they are! It's their site, & they have moderation control. Bloggers delete comments all the time, whether spam, off-topic or otherwise.


Hello, I am Aur Saraf.

Early French gendarmes earned their bread capturing criminals and charging the populace for the service (as freelance professionals, not employees of the government).

That model worked for them.

Could it work for Web2.0? Could an online "gendarme" find a way to police all social networks?

This is definitely something users would want, as long as we only filter for really bad spam/trolls/bullies that are hated by everyone (otherwise flames will rise over net neutrality and free speech).

The problem is that the net doesn't have jails. I can think up a few solutions.

This is an idea for the taking, offered as a public service since it doesn't match my taste and criteria.

If you DO take it, please track down my email and tell me, so that I can smile over it.

Aur Saraf


The craigslist community is pretty good at dealing with inflammatory and abusive behavior. Too many flags and your post gets pulled. Easy enough.


Included this for a discussion on social software, large user groups & their implications: read bullies, abusive users etc. Any place where you get large numbers of people, you're going to get undesirables. The question is: how do you deal with them as developers?

The link is a post by Kathy Sierra.


What's amazing to me is how many "reputable" people, people "who should know better" and people who "appear smart" were part of this lunacy... I guess their true colors finally came out.


Part of the problem is we really don't know yet; the lack of information is the problem. My point was not to pre-judge the named persons but to think about how the software systems developers write deal with these situations.

For example, Flickr: at the bottom of each page there is an abuse flag. Flickr has extensive reporting, and if enough of these flags get tripped they investigate. This case seems to involve an external blog, which I doubt would have these measures.
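
Roughly how that flag-trip mechanism could look (just a sketch; the threshold value and names are made up, not Flickr's or craigslist's actual rules): each item counts the distinct users who flag it, and once enough flags accumulate it is hidden and handed over for investigation.

    # Minimal sketch of threshold-based abuse flagging.
    FLAG_THRESHOLD = 5  # assumed value, for illustration only

    class FlaggableItem:
        def __init__(self, content):
            self.content = content
            self.flaggers = set()   # users who flagged, counted once each
            self.hidden = False

        def flag(self, user_id):
            self.flaggers.add(user_id)
            if len(self.flaggers) >= FLAG_THRESHOLD:
                # Hide the item and queue it for human investigation.
                self.hidden = True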



