
'Social media' in the web 2.0 sense is post-1996, but there were plenty of websites with comments sections and online forums before that. Cases over intermediary liability for online content have some antiquity; in Cubby v. CompuServe (776 F. Supp. 135 (S.D.N.Y. 1991)), for example, CompuServe were found non-liable because they had no first-hand knowledge of the defamatory posting, whereas in Stratton Oakmont v. Prodigy (No. 31063/94, 1995 N.Y. Misc. LEXIS 229 (N.Y. Sup. Ct. May 24, 1995)) Prodigy were found liable as the publisher because they'd set content rules and run a filter over users' contributions. Congress thought the latter result was unhelpful (because it incentivised people to run cesspools rather than to actively moderate them) and legislated.

And sorry, I should have been clearer on good faith. The section preventing providers from being liable as a publisher (which is the core of s.230's value to social media platforms) has no good faith requirement. "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (s.230(c)(1)) is the whole clause. Platforms don't acquire intermediary liability even if they delete every post praising the Yankees while laughing maniacally and falsely claiming it's a result of profanity use. They simply aren't "treated as the publisher or speaker", full stop.

The good faith language comes from (c)(2), which separately limits liability (to the speaker whose content was removed) for good faith removals on the grounds that the speech might be offensive. That's not an intermediary liability issue as such, though.




>The good faith language comes from (c)(2), which separately limits liability (to the speaker whose content was removed) for good faith removals on the grounds that the speech might be offensive. That's not an intermediary liability issue as such, though.

We are arguing over a moot point. If section 230 or whatever does not provide for free speech, then that is what needs to be improved upon. Perhaps make it clearer that free speech is guaranteed.


> If section 230 or whatever does not provide for free speech, then that is what needs to be improved upon.

The problem is that people have wildly different takes on how to "fix" section 230.

One group wants to eliminate the liability protections entirely, regardless of how much moderating you do. The concern is that this makes hosting user-generated content at any sort of scale impractical as a business: you can't scale competent human review well enough to bring the legal risk per user below the value per user on any modern social media platform.

Another group wants to eliminate section 230 so that, as under the pre-230 case law above, only companies that do no moderation have liability protection, forcing social media companies to stop moderating altogether. The concern here is that some level of moderation of abuse and spam seems necessary to keep platforms from degrading into wastelands that no-one wants to use.

The moderate middle ground is reforming section 230 to limit the types of moderation activity that can be performed without losing liability protection.

This last option seems politically unlikely, as it doesn't provide either side a political win, despite being good for society.





