Making decisions like that is why we have courts. They are responsible for deciding whether a particular act was murder or self-defense, whether something is pornography or art, whether something is protected or forbidden by the Constitution, etc.
They're far from perfect, but is it better to give the power to control digital communications to a few corporations?
It does though. For example, it gives them the power to refuse to remove false information without being held responsible for it, which the first amendment does not give to publishers like the NYT.
Newspaper publishing is opt in; that is, everything published is something the paper chose to publish.
Websites that allow third parties to post content are opt out; that is, anything published appears without initial moderation.
If a website operator posts their own statements, those can theoretically be found libelous. They can't be held accountable for posts by other people. Newspapers potentially can (though I've never seen a court case where a newspaper was sued for something in the Opinion section), but they -chose- to publish that item.
Realistically websites should be thought of as a public bulletin board. Should you be able to sue the person who put up the bulletin board, for content that was posted to the bulletin board by other people?
I understand the nuance you describe, but the situation I described is, at times, a problem with Section 230. For example:
> When a US Army reservist found herself at the center of a conspiracy about the coronavirus earlier this year, her life was upended.
> Hoax peddlers on the internet falsely claimed that Maatje Benassi was somehow the world's COVID-19 patient zero. Over time, conspiracy theorists posted at least 70 videos across multiple YouTube channels claiming that Benassi had brought the virus into the world. Along with those videos came death threats, which Benassi and her husband, Matt, took seriously.
> But at first, the couple did not know how to respond. Trolls hiding behind aliases on the internet were almost impossible to find, and the Benassis could not sue YouTube for allowing the content to be posted because of a now-controversial law known as Section 230.
My understanding is that you would sue the person who originally posted the content. You can sue "John Doe" and subpoena the social media companies and internet service providers for information to identify the poster.
Which removing Section 230 wouldn't change. No matter what the law says, no matter what culpability exists, if you can't afford a lawyer, you're not getting anything. An issue with the law in the US, but hardly relevant to the issue at hand.
But YouTube has deep pockets, so if you could sue YouTube, lawyers would work on contingency. What lawyer would take a John Doe case on contingency?
Worse, what if the defamer is able to hide their identity, or is in a jurisdiction that doesn't care about an order from US courts? In that case, even paying for a lawyer won't help.
That’s not true. The liability shield only covers content produced by other entities, e.g. tweets. Twitter is still liable for content it produces itself, such as fact checks and trend summaries.
Likewise, the New York Times is liable for the articles published by its own writers, but it bears no liability for the comments section.
But the NYT can carry liability for letters to the editor published in its dead tree format -- see https://www.rcfp.org/supreme-court-will-not-hear-letter-edit... as an example of a local newspaper being held liable for letter-to-the-editor-published defamation.
The CDA draws a bright line between content "authored" by a firm and content "made available." In practice, that line is fuzzy.
As a hypothetical example, Twitter probably should face liability if it took a random tweet (say) accusing Bezos of pedophilia and made an editorial decision to promote that tweet to all its users, but it could still plausibly claim that it was just making the content available.
It's a complicated topic, and I don't know where the best balance lies.
The tweet promotion is an interesting point, but the letter to the editor is easier IMO. It's assumed that a human has read and selected the letter to the editor, which is why they'd have liability. For the promoted tweet, my first reaction would be to say, if a human affirmatively promoted it, they'd be liable. If it's pure algorithm, they wouldn't be if they took it down when served a notice.
That’s not the current situation under Section 230. You can even re-tweet or forward content posted by someone else and not be liable. Only the original author is liable. This is sensible because otherwise all sorts of innocuous relaying, trending and categorisation activity normal for forums and social media, which affects the scope and visibility of posts, could trigger liability.
> The liability shield only covers content produced by other entities
That's what I meant, but you're right, I wasn't entirely clear. Thanks.
That's a protection that neither social media nor the NYT (for comments) would have without Section 230 if they do any moderation (at least according to Stratton Oakmont, Inc. v. Prodigy Services Co.)
> the power to refuse to remove false information without being held responsible for it, which the first amendment does not give to publishers like the NYT
Yes, the First Amendment does protect speech that gives false information. We had a recent HN thread on just this topic:
Please don't post flamewar comments to HN. The site is overrun with this kind of thing at the moment, and we're banning accounts who do it. I'm not going to ban yours because it doesn't look like you've been making a habit of it, but please review https://news.ycombinator.com/newsguidelines.html and stick to the intended use of the site from now on.