> if you're in a real rush, just read Section (c)(1), which is only 26 words, and is the part that basically every single court decision (and there have been many) has relied on.
> (c) Protection for “Good Samaritan” blocking and screening of offensive material
> (1) Treatment of publisher or speaker
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Wouldn't repealing Section 230 mean more censorship and restrictions, since the liability of being on the hook for what some random account might decide to publish would be too great?
You're right, repealing section 230 would result in more censorship, not less. But I think "repeal section 230" has been a shorthand slogan for a more detailed set of proposals (much like "defund the police" doesn't mean $0 budgets for all law enforcement).
For example, see the DoJ's proposal to amend and clarify section 230 [1] [2]. Among other changes, they suggest limiting the extent to which a platform can moderate content while retaining the protection. Under this rule, if Twitter chooses to moderate aggressively then they lose the 230 protection, while a competitor that limits its moderation to illegal content and viagra spam could still receive the 230 protection.
Wait, so is the idea of those proposals that the federal government or the courts or the lawyers are going to be in charge of deciding what counts as aggressive moderation as opposed to regular moderation? Is that supposed to be better?
Also, your comment is confusing because "limiting moderation to illegal content and viagra spam" was the legal limit before section 230, if I understood correctly, so it's not clear what the difference is between "clarifying" and "repealing".
Judges are the ones we all officially trust to make judgments. It isn't better as such to get them involved, but it might become necessary.
I won't worry about technicalities, because I have no idea what they are. But it is becoming abundantly clear that the social media companies have political power and are willing to wield it if they think that it is appropriate, similar to newspapers and television. It is reasonable that the regulations for social and non-social media converge.
The alternative is leaving it up to corporations so the current status quo doesn't seem that bad. (See everyone that wants to repeal Chevron deference)
So getting rid of it would let me sue Google for letting people forward conspiracy theories in emails? Huh.
Sorry, I think you actually do want a new law, not to modify this one. I have no idea how to define the websites that you want this to even apply to -- size? revenue? number of users? does twitter count? mastodon server instances? gab? reddit? texas instruments support forums? IRC servers?
This feels super arbitrary when almost every website to some extent republishes third party content.
No, I don't think any of that. My starting point would be:
(1) Someone has a long-established relationship with an internet service.
(2) That service makes an arbitrary and unexpected change to the nature of that relationship.
(3) That change causes material damage.
Then the site can be sued.
I don't think that would cover the current Trump business; they've been telegraphing for years that he would be banned eventually. But it might close the door to more damaging abuses of power where they're strategic about timing, and open up some review of how clearly they're giving notice. Might help with the app store complaints.
That specific idea isn't going to be good - I've probably missed a bunch of relevant stuff - but maybe the idea can be salvaged and implemented with only mild damage to the internet ecosystem.
> Sorry, I think you actually do want a new law, not to modify this one.
There isn't really a difference, but sure. I don't care if changes are implemented as an amendment or a new bill.
The proposal still lets platforms moderate on certain grounds, a repeal would mean that a platform can't moderate for any reason other than "illegal content" if it wants to keep the protection. The law currently says that a platform can moderate material that is "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." That "otherwise objectionable" has been interpreted as a catch-all that lets platforms moderate on any grounds at all.
The DoJ proposal removes the catch-all and adds "unlawful" and "promotes terrorism" to the list.
So if you moderate for a reason on that list, you keep the protection. If you moderate for other reasons, you lose it. And yes, I'm sure we'd see plenty of litigation on what constitutes harassment, terrorism, etc.
Does this mean if a sports fan comes into my forum about programming and starts making sports threads, I cannot moderate that away, or else face legal action? Does it also mean HN could face legal action according to its current rule set?
This is the definitive question that has to be asked of anybody proposing a repeal of 230. I personally dislike 230, and have from the beginning, but anybody who doesn't have a direct and complete answer to this question isn't being serious.
edit:
I personally think that the answer is to standardize terms-of-service documents and to make them into real legal statements that both the website visitor and owner can make claims regarding, and that those statements be registered with a federal agency. The standardization should be so tight that filing one is a matter of picking from drop-down menus on a form, and the implications of each choice should be common knowledge and taught in school. The form should be able to be filled out anonymously (as "owner of [X] site"); a rough sketch of what such a filing might look like is below.
If you run a site without having filed one, though, you would lose legal protections as a publisher based on modding behavior. If somebody filing suit can point out any decisions that have been defined by law as constituting a specific degree of editorial control (examples: spam-control, abuse-control, subject-control, position-on-a-subject-control, arbitrary-control, etc.) then they will be responsible for libel/tort claims as a site for individual posts. A site that has filed a terms-of-service will get the benefit of the doubt, and the ability to push liability completely onto posters or maybe even individual mods.
Those requirements would either be too onerous to run the walled gardens of Web 2.0 at scale, and small players will have a profitable business opening again, or the big dogs would be forced to really bureaucratize or democratize, and small players will have a profitable business opening by providing them modding services at the forum level.
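To make the drop-down-menu idea concrete, here is a minimal sketch of what such a standardized filing could look like as data. Everything in it -- the category names, the fields, the helper function -- is hypothetical and only mirrors the examples above; it is not any real legal or regulatory schema.

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical moderation categories a site owner could declare up front.
    # The names mirror the examples in the comment above.
    class ModerationScope(Enum):
        SPAM_CONTROL = "spam-control"
        ABUSE_CONTROL = "abuse-control"
        SUBJECT_CONTROL = "subject-control"
        POSITION_CONTROL = "position-on-a-subject-control"
        ARBITRARY_CONTROL = "arbitrary-control"

    @dataclass
    class TermsOfServiceFiling:
        site: str                          # e.g. "owner of [X] site" -- can be a pseudonym
        declared_scopes: set               # which kinds of editorial control the site claims
        liability_passes_to_posters: bool  # whether posters (or individual mods) carry liability

    # Example filing: a hobby forum that only removes spam and abuse.
    filing = TermsOfServiceFiling(
        site="owner of example-forum",
        declared_scopes={ModerationScope.SPAM_CONTROL, ModerationScope.ABUSE_CONTROL},
        liability_passes_to_posters=True,
    )

    # The question a plaintiff would ask under this proposal: did the site exercise
    # a kind of control it never declared? If so, it loses the benefit of the doubt.
    def exercised_undeclared_control(f: TermsOfServiceFiling, action: ModerationScope) -> bool:
        return action not in f.declared_scopes

    print(exercised_undeclared_control(filing, ModerationScope.POSITION_CONTROL))  # True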
I was a founding member of the egcs steering committee. When we were creating the rules, we had a debate over whether we should have a specific list of offenses that would allow us to ban someone: some wanted a specific list, others just wanted a supermajority to vote the person out. We settled on a 3/4 vote of steering committee members. I wrote the justification, saying that we shouldn't be stuck if someone came up with a new and creative way to cause damage.
And sure enough, we only ever banned one person. We banned him because he made a series of scary threats against the release manager, including "I know where you live". These threats were made off-list; it was a campaign of personal harassment because no one wanted this idiot's patches. Suppose we had forgotten to include off-list threats of violence in our expected set of rules? And remember, we were talking about our mailing lists, which are forums according to section 230. So, would we lose our section 230 protections under your proposal?
> I personally think that the answer is to standardize terms-of-service documents and to make them into real legal statements that both the website visitor and owner can make claims regarding, and that those statements be registered with a federal agency.
In other words, you want to create more regulations that websites have to abide by. All that does is favor the big players even more than they're favored now.
> Those requirements would either be too onerous to run the walled gardens of Web 2.0 at scale
No, they would be too onerous for any websites other than the huge walled gardens to meet. All you're doing is setting up another form of regulatory capture.
> or the big dogs would be forced to really bureaucratize or democratize
Or they'll just buy the regulations that favor them, the same way they do now.
Yes, except for the fact that 1st amendment protections would still exist. How much protection that actually provides is unknown and it would need to be litigated to find out.
Suffice to say, a redraft that means you'll have to go to court to determine what types of content you are and aren't allowed to moderate without the side effect of becoming responsible for everything people say on your web property is not an improvement.
What does that mean? Off-topic discussion does not fall outside of the 1st Amendment, and a restriction of the 1st Amendment to try and classify that as somehow less deserving of protection -- that would be a much greater curtailment of 1st Amendment rights than anything Facebook has done.
Bluntly, "would need to be litigated to find out" isn't an acceptable answer given that the people giving this answer are the same people proposing the legislation.
Courts don't decide what the law is; they interpret it. You have to have an idea of what you want the law to be; you can't kick that off to the courts. They're trying to interpret your intention and the letter of the law you propose, not the other way around.
This is getting downvoted, probably because it's pushing an unrelated agenda, but there's an important principle here that both Democrats and Republicans don't seem to understand: at some point, the other party just 100% is going to seize control of the government. It is going to happen.
Donald Trump's administration openly considered declaring BLM a terrorist group. I don't want someone like that deciding what does and doesn't count as neutral speech. That is a thousand times more dangerous to me than Facebook making the same decision.
It is startlingly naive for people not to understand that terrorism laws and 'neutral-speech' laws will be used against causes they agree with. Republicans in particular are kind of jumping the gun by encouraging the government to get involved with speech literally during a period where they no longer control the presidency, or the house, or the senate.
It's pretty revealing that many of the replies to this (and the general response whenever anyone brings up content that we obviously want to be moderated but that doesn't fall outside the 1st Amendment) can be summed up as, "well, we'll make an exception for that." In practice, this constitutes the government deciding what communities themselves are able to moderate and filter. That should terrify 1st Amendment advocates, for multiple reasons:
----
A) It's wildly abusable by corrupt governments, and both parties in the US believe that their opponents are corrupt. You don't even need to outright censor -- you can just classify one political group's speech as harassment and make your own speech un-moderatable.
And importantly, when you do that, it doesn't just apply to one platform like Facebook, it applies to everyone. It applies to independent forums that are owned by the other party. Do you really want to give either Republicans or Democrats the ability to basically at-will strip 230 protections from opposition-run forums whenever they control the House, the Senate, and the White House?
----
B) It places a burden on the courts that courts are not designed for. Courts interpret laws, they don't create them. The implication behind that is that the people making the laws have a general understanding of their own intent, and the Courts try to figure out what that intent is and what the implication of the letter of the law is. It is unreasonable for people to propose legislation that they themselves don't understand, and then ask the courts to retroactively come up with an interpretation and a set of guidelines that make the laws work. It's backwards design that will lead to a lot of unintended consequences and dumb, harmful rulings.
----
C) Many of these exceptions being proposed even in this comment section are themselves contentious. It is not clear to me, at all, why pornography should have fewer 1st Amendment protections than hate speech. Both are harmful to children, both can gross people out, both cause discussions to degrade, both are things that people don't want on their platforms. Yet, there is a general assumption that of course platforms will be able to filter lewd content, they just won't be able to filter low-key racists and nazis.
This is exactly the kind of content-favoritism that the 1st Amendment was designed to prevent. It is inherently problematic for the government to privilege certain categories of speech over others. The distinction between pornography and hate speech is one of the more blatant examples of hypocrisy, but the problem also comes up in more subtle ways when people talk about banning spam, as if comment spam, self-promotion, and off-topic discussion are some kind of clear categories that everyone agrees with -- they're not.
Reddit, Hackernews, and Facebook all have very different definitions of what self-promotion and spam are. Which definition is correct? Which one should be baked into law? And why? Why is it more reasonable for us to come up with an ad-hoc definition of spam than it is for us to come up with an ad-hoc definition of hate speech? Why should spam deserve less protection under the 1st Amendment?
----
D) Finally, baking these exceptions and moderating decisions into law creates an environment where we're constantly chipping away at the 1st Amendment. The way that the US works right now is that we have extremely broad legal protections, but that the systems built on top of those laws provide additional subjective moderation controlled by the free market, by individual choices, and by communities themselves. This is a good system because universal moderation decisions are practically impossible. In short, it gives us breathing room and allows us to acknowledge that moderation is both important and subjective.
When people tear away independent, private moderation decisions, the only thing that's left to protect people from harassment online is now the law. You lose that subjectivity and that acknowledgement that different forums need to be moderated differently.
People who advocate that private platforms shouldn't be able to make their own moderation decisions should not be surprised to see increased calls for the government to get involved in making certain speech illegal. Even if you could force Twitter or HN to turn into 4Chan, people aren't going to tolerate that. They are going to go to the government and ask the government and the courts to make more speech illegal -- because that's the only avenue they'll have left to protect themselves.
----
I have not seen any proposal, from anyone, not just in general in comments, but from political activists, from bloggers, from lawyers -- I have not seen one single proposal for Section 230 reform that addresses those problems. They all propose an incredibly ambiguous set of restrictions on moderation and then fall back on, "we'll work the details out later." But when you're talking about restricting someone's fundamental Right to Filter[0], the details really matter.
Making decisions like that is why we have courts. They are responsible for deciding whether a particular act was murder or self-defense, whether something is pornography or art, whether something is protected or forbidden by the Constitution, etc.
They're far from perfect, but is it better to give the power to control digital communications to a few corporations?
It does though. For example, it gives them the power to refuse to remove false information without being held responsible for it, which the first amendment does not give to publishers like the NYT.
Newspaper publishing is opt-in; that is, anything published, they chose to publish.
Websites that allow third parties to post content on them are opt-out; that is, anything published went up without initial moderation.
If a website operator posts their own statements, they can theoretically be found to be libelous. They can't be held accountable for posts by other people. Newspapers potentially can (though I've never seen a court case where a newspaper was sued for something in the Opinion section), but they -chose- to publish that item.
Realistically websites should be thought of as a public bulletin board. Should you be able to sue the person who put up the bulletin board, for content that was posted to the bulletin board by other people?
I understand the nuance you describe, but the situation I described is, at times, a problem with Section 230. For example:
> When a US Army reservist found herself at the center of a conspiracy about the coronavirus earlier this year, her life was upended.
> Hoax peddlers on the internet falsely claimed that Maatje Benassi was somehow the world's COVID-19 patient zero. Over time, conspiracy theorists posted at least 70 videos across multiple YouTube channels claiming that Benassi had brought the virus into the world. Along with those videos came death threats, which Benassi and her husband, Matt, took seriously.
> But at first, the couple did not know how to respond. Trolls hiding behind aliases on the internet were almost impossible to find, and the Benassis could not sue YouTube for allowing the content to be posted because of a now-controversial law known as Section 230.
My understanding is that you would sue the person who originally posted the content. You can sue "John Doe" and subpoena the social media companies and internet service providers for information to identify the poster.
Which the removal or not of Section 230 doesn't change. No matter what the law says, no matter what culpability exists, if you can't afford a lawyer, you're not getting anything. An issue with the law in the US, but hardly relevant to the issue at hand.
But YouTube has deep pockets, so if you could sue YouTube, lawyers would work on contingency. What lawyer would take a John Doe case on contingency?
Worse, what if the defamer is able to hide their identity, or is in a jurisdiction that doesn't care about an order from US courts? In that case, even paying for a lawyer won't help.
That’s not true. The liability shield only covers content produced by other entities, e.g. tweets. Twitter is still liable for content it produces itself, such as fact checks and trend summaries.
Likewise, the New York Times is liable for the articles published by its own writers, but it bears no liability for the comments section.
But the NYT can carry liability for letters to the editor published in its dead tree format -- see https://www.rcfp.org/supreme-court-will-not-hear-letter-edit... as an example of a local newspaper being held liable for letter-to-the-editor-published defamation.
The CDA draws a bright line between content "authored" by a firm and content "made available." In practice, that line is fuzzy.
As a hypothetical example, Twitter probably should face liability if it took a random tweet (say) accusing Bezos of pedophilia and made an editorial decision to promote that tweet to all its users, but it could still plausibly claim that it was just making the content available.
It's a complicated topic, and I don't know where the best balance lies.
The tweet promotion is an interesting point, but the letter to the editor is easier IMO. It's assumed that a human has read and selected the letter to the editor, which is why they'd have liability. For the promoted tweet, my first reaction would be to say, if a human affirmatively promoted it, they'd be liable. If it's pure algorithm, they wouldn't be if they took it down when served a notice.
That’s not the current situation under Section 230. You can even re-tweet or forward content posted by someone else and not be liable. Only the original author is liable. This is sensible because otherwise all sorts of innocuous relaying, trending and categorisation activity normal for forums and social media, which affects the scope and visibility of posts, could trigger liability.
> The liability shield only covers content produced by other entities
That's what I meant, but you're right, I wasn't entirely clear. Thanks.
That's a protection that neither social media nor the NYT (for comments) would have without Section 230 if they do any moderation (at least according to Stratton Oakmont, Inc. v. Prodigy Services Co.)
> the power to refuse to remove false information without being held responsible for it, which the first amendment does not give to publishers like the NYT
Yes, the First Amendment does protect speech that gives false information. We had a recent HN thread on just this topic:
Please don't post flamewar comments to HN. The site is overrun with this kind of thing at the moment, and we're banning accounts who do it. I'm not going to ban yours because it doesn't look like you've been making a habit of it, but please review https://news.ycombinator.com/newsguidelines.html and stick to the intended use of the site from now on.
There already exist laws that act very similarly to what people are proposing.
Specifically these laws are called common carrier laws, as they apply to phone systems, for example.
Do you believe that the existing laws that apply to phone networks are tyrannical, or otherwise bad?
I don't think it is unreasonable to consider how our existing laws, which most people find reasonable, such as the ones that already apply to phone companies, could be updated for the modern era so as to apply to newer communication networks.
FWIW, Section 230 was added primarily so that ISPs and hosting providers wouldn't be liable for content on their sites given that social media as we know it today basically didn't exist.
My understanding was that the origins of section 230 were a couple of lawsuits that targeted ISPs for content that users posted on their services.
One company moderated its content and was found liable in court for libel in user content. The other company did not moderate its content and was not found liable for user content. Anecdotally, one of these services was seen as the good one and the other as the crappy one because of their moderation policies.
While social media didn't exist at the time in its current form, I'd argue this is still very relevant today. Without section 230, websites would only be able to moderate for illegal content, and there's a mountain of stuff you wouldn't want to see online that isn't technically illegal, and a lot of people who will try to put it in front of you if given the opportunity.
Right. Compuserve and Prodigy. Compuserve didn't moderate and Prodigy did.
The concern is that absent Section 230, a provider needs to either be (almost) totally hands-off or they have to strictly moderate (which isn't really viable for the modern web and social media).
That's definitely a concern I share. I manage a couple of work-related servers that are minuscule in scale compared to social media companies, and the quantity of attempted spam/FUD/astroturfing they receive would render them pointless cesspools if we couldn't just block it.
To be clear we're talking about the Barr era justice department in this context. The proposed changes would give the justice department a stick to shake at platforms they didn't like for political reasons.
We hope that the justice department would act in good faith, without political influence, but we're fresh out of an intensive course on how naive it is to rely on that.
For what it's worth, "Defund the police" does literally mean "reduce funding to what we currently refer to as the police force" whereas "repeal section 230" couldn't (in your interpretation) literally mean "strike section 230 from the US code".
The reality is what this article outlines towards the end; this is already covered by the 1st Amendment in large part, Section 230 does the bulk of its work in simplifying court proceedings. If section 230 were changed, it'd likely be unconstitutional anyway.
>For what it's worth, "Defund the police" does literally mean "reduce funding to what we currently refer to as the police force"
I think the relevant part is that one definition of "defund" is (from Google answer box/featured snippet, only provided definition):
>prevent from continuing to receive funds.
So some people upon first hearing the phrase "defund the police" understand it to mean "prevent the police from continuing to receive funds". Without any qualification, this is naturally understood to mean "all funds", which is not the concrete policy proposal behind (what seems to be most uses of) the phrase.
I myself had this understanding of the phrase "defund the police" when I first heard it (which is not something that I would support), but I do support most of the concrete policy proposals behind the "defund the police" movement that involve reallocating funds from the police to social services and other programs. I would be hesitant to use the phrase myself, though, because this interpretation seems quite common among people not "extremely online".
"Defund the police" is such a bad catchphrase, it does all sorts of disservices to the actual proposed measures. It sounds vengeful and petty, not thoughtful. It's also misleading - the purpose is not (or should not) be primarily to make sure police has less funds; it's to make sure social services and other programs have funds. Why not call it "fund social services" instead of "defund the police", instead? ("fund social services" is a bad name, but the spirit is right - focus on the positive outcome, not on how you'll "hurt" the police. If you want to focus on police, focus on how you'll improve their mission and help them do a better job at reducing violence in communities)
The left has been historically bad at messaging. They use terms like “wealth redistribution” or “defund the police” which triggers a lot of people. The right is much better with terms like “tax relief” or “election integrity”. That’s definitely something the left should work on.
Whether you agree with it or not, the purpose is to remove funds from police, which many believe is a positive outcome. Look at police budgets, how that money is spent, how it affects communities to have 60-80% of their tax money go to police. Again, you may disagree, but removing money from the police is the point of the slogan.
The point is that "defund" means "remove all funding from", not "remove some funding from". The person carrying the sign in this photo doesn't want 20% fewer abortions, they want abortions to stop altogether:
I live round the corner from a UK housing estate with a drugs problem. I very often deeply wish that 60-80% of my local tax money would go to the police.
I often also frankly wish that 10% of it would fund a Sharia-style hangman, but perhaps that is my inner misanthrope.
Here's a simple test: if half of the funds that go to the police today were simply wasted (say, they go to a "Trump foundation" where Trump may use them as he pleases) - would you consider that a "win"? If not, your primary goal is not "defund the police"; it is better allocation of funds.
There are decades of academic work on police abolition, books upon books and papers upon papers. Not everyone who says "defund the police" is an abolitionist, but many are.
As far as I can tell, there's no concrete replacement proposed for section 230, just a general wish for less protection for these platforms. Complete police and prison abolition is a much more thorough and considered philosophy.
Fair, which is what I assumed was the case, rather than the idea that we wake up one morning to literally zero recourse for someone stealing our stuff.
That assumption of cohesiveness of argument is what I’m trying to get at with my question. The argument we should be having is in the terms of the academic work, rather than the one-line phrase.
My point overall is that no equivalent academic work has been presented that actually attempts to remove Section 230 protections (as the protections stem ultimately from the first amendment), so it’s not fair to compare the two.
I may have gotten separated a bit from that point along the way, so I apologize for my lack of clarity!
I think it’s hard to claim your definition for “defund the police.”
Perhaps some people view it as “reduce police funding” but many who use it mean “$0 to police”, and that’s the definition of defund.
Words matter and language matters, and trying to add nuance to a pretty simple statement like “defund” is hard for people to understand. Especially when people, friends of mine to be specific, literally mean to defund and remove the police.
I know no one involved with the Seattle CHAZ [0] but, for example, this is a group that wants to defund police.
If the intent is to reevaluate police funding then a label like “police reform” makes more sense. Calling something “defund police” and meaning reform is dumb, it’s foolish when there are also people using the label who want to defund the police.
I had a conversation with someone using the phrase ACAB who then explained that not really all cops are bad. This is really confusing to me: why make a statement that you don’t believe in?
I think we need serious improvement in how we fund and operate police. I want reform and work toward reform. But my small, suburban, peaceful city has 100 police officers for 50k people, and we have military vehicles and whatnot. I want to work on this and make it better, but conversations currently seem divided between people who want to increase the budget and people who want to zero it out, and people trying to improve things are getting shouted down or ignored. So we’re sticking with the normal 1% increase a year.
Do you think you’ll ever move past the initial shock of disagreement with a movement’s slogan that may be an oversimplification or misleading, or will you always need someone to clarify the argument before you can take it seriously (and not take it seriously until then)?
I’ve “moved on” but I think my issue is with the people I’m trying to talk to who haven’t moved on.
I don’t think I need someone to clarify, but it does take me extra time to figure out if someone is talking about the literal or figurative “defund” idea.
You can’t limit the extent to which a company moderates content, as the right to moderate is based in the First Amendment.
All you can do is adjust the shield from liability that 230 provides. If you decrease the size of that shield (as the DOJ proposal does), private companies will further restrict user activity (not expand it) until it fits under the shield again. That is true whether you are talking about a full repeal or only an incremental change.
"repealing section 230 would result in more censorship, not less. "
- No more centralized social media platforms; everyone has to be responsible for their own speech. Alternative technology will emerge to make it easy to create one's own microblog, and everyone can subscribe.
This is simply wrong. It would mean platforms that don’t moderate have no liability. The law was created because a website that didn’t moderate was found to be not liable for the comments its users posted, and a website that did was found to be liable. This is even on the Wikipedia page for section 230.
I guess the conclusion that repealing it would result in more censorship is based on the idea that platforms would moderate more aggressively to try and avoid all liability. When really, that’s simply not possible, and they would have to abandon the practice altogether. Well, the larger platforms would have to at least.
> When really, that’s simply not possible, and they would have to abandon the practice altogether.
Can't do that, it'll hurt the bottomline. Most users aren't keen on using platforms filled with spam, porn, trolls, and hate speech. And if you don't believe this, ask yourself why a lot of these sites don't allow porn, spam, trolls, or hate speech right now. It's partly moral, but mostly because it's bad for business.
Stricter moderation will be the more likely outcome, and we'll have even less free speech than we have now.
It creates an incentive to improve moderation automation and reduce moderation costs. One way to do that is to verify users, which will help with our sock puppet and deliberate misinformation problems.
If you are concerned about the chilling effect of sites arbitrarily moderating user-generated content they find objectionable, just think about how much worse identity verification would be. Here on HN, people regularly create throwaway accounts so they can provide valuable insider accounts of things happening at companies where they work; that would never happen if they had to prove their identities first.
In fact, Twitter already does verify users for some accounts—that’s where the blue checks come from—and many of them are the most prolific misinformation peddlers. I think Twitter also requires new users to register with a phone number, and Google+ and Facebook both had/have real name policies, and none of that has done a thing to stem the flood of misinformation online.
It’s a problem Facebook is working on, presumably using signals like user reporting, but yes, it’s not easy. If someone figures it out, they’ll make a lot of money.
Defund the police does mean $0 for law enforcement though? The only people I've ever seen claim it doesn't are people that aren't claiming they want to defund police but do want to side with those who do. "They can't really mean that, you must have misunderstood." - nope.
Just like when I say repeal 230, I'm seriously saying get rid of the thing, or at least rework it so people aren't being censored by non-government but still very real monopolies on speech.
Basically the issue prior to 230 was that the major precedents worked out like this: if you didn't moderate at all, you were not treated as a publisher and were not liable for the speech of your users. But if you did do some moderation/blocking, you could be seen as a publisher of the things you didn't block. So you have a kind of "all or nothing" type of situation.
The problem is: in terms of legal risk, "just allow everything" is safer, since it's quite expensive or even infeasible to completely moderate the platform. But Congress wanted to encourage internet companies to do moderation for things like pornography and other "indecent" content (this is part of the Communications Decency Act, after all), which is why this "Good Samaritan" section exists: to remove the specter of liability that could arise from even light-touch moderation. (But note that despite that title, the language is broad and not conditional on the presence or absence of any moderation scheme.)
Repeal has a few possibilities, one of which is as you said, just eliminate user-controlled content completely from some spaces, or moderate it extremely heavily (like requiring manual review before something is posted). But, the other extreme is also possible, of totally removing moderation to escape liability the other way around.
>But, the other extreme is also possible, of totally removing moderation to escape liability the other way around.
No it's not a possibility. Social media is paid for by advertising and no large advertiser wants to have their ads show up next to child pornography. Just look at what happened with Youtube.
Unmoderated forums also lose users to moderated forums, because nobody wants to sift through a pile of trolls, nazis, pedophiles, etc. just to find something on-topic. I have seen it happen with Usenet, with any number of loosely moderated mailing lists, websites, and so forth. Users want some degree of moderation, and advertisers are going to gravitate toward the platforms with the most users.
> Wouldn't repealing Section 230 mean more censorship and restrictions, since the liability of being on the hook for what some random account might decide to publish would be too great?
Yes, no one is going to run a service that hosts user created content if doing so means that they can be raided by the FBI in the middle of the night should a user find it funny to upload illegal content to a company's servers.
Only entities that can throw millions or billions of dollars at censoring illegal content would take on the risk of hosting user uploaded content on their own servers.
> Yes, no one is going to run a service that hosts user created content if doing so means that they can be raided by the FBI in the middle of the night should a user find it funny to upload illegal content to a company's servers.
Which ends up in regulatory capture territory that prevents new competition and innovation.
Taken beyond digital content, making the producer of a service or good liable for what a user/consumer may do is a horrible precedent. If any crime is committed or problematic content uploaded, the user is committing the crime and is responsible for the content, not the whole service.
There are already massive moderation services here, with more coming. Is this something we really want, beyond just removing content that incites violence or is illegal, taken down when notified? Trying to pre-empt everyone's user-generated content and take it down is problematic for many reasons, including competition and allowing smaller/medium players in the game.
Most systems that just have reporting are sufficient, with content removed if users report it for violating rules like inciting violence, illegal content or copyright infringement.
We all know how automated, AI-driven moderation, or overzealous competitors falsely reporting content, can end up with systems like YouTube's copyright strike system: essentially a harshly tuned system easily exploited by bad actors.
We have to be very careful, and it is safer to err on the side of less moderation and blame users, not the service. If we don't, big players that are already established could easily use this for regulatory capture and even stifle competition by merely uploading content to competing services that will then get them in trouble.
Wholesale shutting down of services, or making the barrier to entry too high and costly with massive moderation services that amount to regulatory capture, will prevent competition and stifle freedoms; blaming the services instead of the users who are creating the problems is the wrong way. If we blame the services, we can end up with a chilling effect on sharing ideas, information and content, besides just obliterating competition.
We are going to turn the internet and freedom into an authoritarian Vogon-tastic bureaucratic Borg if we aren't careful. A hands-off system, with reporting or blocking systems for content that is offensive/illegal which are already common, is the best way. We don't want any of that Vogon poetry.
Any system of sufficient size will have bad things, you deal with those bad things individually not shut the whole thing down.
Karl Popper's take on this:
> The paradox of tolerance states that if a society is tolerant without limit, its ability to be tolerant is eventually seized or destroyed by the intolerant. Karl Popper described it as the seemingly paradoxical idea that in order to maintain a tolerant society, the society must be intolerant of intolerance.
This is the very sort of thing that led to the creation of common carrier status, with the tradeoffs involved. It seems to me to be the solution with ISPs and net neutrality, and there probably should be something similar with sites like Twitter (which is probably the point of Section 230).
I personally feel like moderation has gone way too far with social media, even though I'm far from politically conservative and definitely not a supporter of Trump. I think sites should lose some protected status if they start moderating excessively, as an incentive to moderate even less. Conversely, I think they should be granted a degree of immunity if they do moderate less.
A difference between common carriers like the phone company and the post office vs. internet websites is that your communication on common carriers is usually private. There's no reason for any moderation to take place.
Whereas on a web forum such as this, moderation is necessary for conversations to not descend into chaos.
Frankly I don't understand why people think moderation has gone too far. Is it that e.g. Twitter shouldn't be allowed to silence unpopular opinions because silencing unpopular opinions is always bad? Or is it that it's bad because Twitter is huge and influential? Would you have a problem with someone silencing a gun control fanatic who shows up, unwelcome, on their private little gun collectors' website?
> Wouldn't repealing Section 230 mean more censorship and restrictions, since the liability of being on the hook for what some random account might decide to publish would be too great?
I can't tell you what it means for US law, as I don't know what laws surround this. I can tell you that the lack of safe harbor laws has stopped me from making certain technology products in my country though, as the risk is too great.
Give people an upload mechanism and inevitably it will receive material that could land you in jail. Maybe it comes from users, maybe it comes from bots, it doesn't matter. If you're on the hook, it's a no brainer to not get involved if there are no laws protecting your platform from liability.
Yes, I believe that's why it was introduced in the first place because that used to be a problem. The EFF has a summary of its history [1] and the case was [2].
Yep, that was the original fear after a couple of landmark cases against CompuServe (no filters, so they were ruled not liable for what people said), and Prodigy (they had a team of editors filtering content, so it was ruled they were liable for what ended up on their platform).
No, I don't believe so. Any individual can easily create a website and publish whatever they like. It costs maybe $100/year and takes a couple of hours for an inexperienced user to set up a Wordpress blog with manually-moderated comments. Nobody's speech would be censored if Section 230 were repealed. (A rough sketch of that manual-moderation setup is at the end of this comment.)
Instead, platforms would be treated as though they had consciously decided to publish every publicly-available comment/file upload/post/etc. That would probably mean manually moderating everything, and rejecting a lot of it for capricious reasons.
But honestly, I am not convinced that that would be a bad thing. It would certainly encourage more people to create their own publishing mediums, and it might help break up the titanic oligarchs that currently occupy the market.
Put it this way: I would happily give up sites like HN to remove the business model that makes mass-scale social media viable. Sites like Twitter and Facebook have inflicted unconscionable harm on our collective humanity, and they have demonstrated a complete inability to self-regulate.
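As an aside on the "manually-moderated comments" point above: WordPress already has a built-in "comment must be manually approved" setting, and pending comments can then be worked through by hand. Below is a rough sketch of that workflow using the WordPress REST API; it assumes a reasonably recent WordPress install with an Application Password set up, and the site URL and credentials are placeholders.

    import requests

    # Placeholders -- substitute your own site URL and a WordPress Application Password.
    SITE = "https://example-blog.invalid"
    AUTH = ("admin", "application-password-here")

    # Fetch comments currently held for moderation (listing the 'hold' status requires auth).
    pending = requests.get(
        f"{SITE}/wp-json/wp/v2/comments",
        params={"status": "hold", "per_page": 20},
        auth=AUTH,
    ).json()

    for comment in pending:
        preview = comment["content"]["rendered"][:80]
        print(comment["id"], comment["author_name"], preview)
        # Only approve after a human has actually read it -- that is the point of manual moderation.
        if input("approve? [y/N] ").strip().lower() == "y":
            requests.post(
                f"{SITE}/wp-json/wp/v2/comments/{comment['id']}",
                json={"status": "approved"},
                auth=AUTH,
            )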
>It costs maybe $100/year and takes a couple of hours for an inexperienced user to set up a Wordpress blog with manually-moderated comments.
It would also cost however much a lawyer charges for helping to moderate every submission. Because I doubt the average WordPress blogger is familiar with all possible laws the submissions could violate. Or can make correct judgements based on precedent for things like copyright works.
Honestly though, the blogger would probably only need to pay for one hour of a lawyer's time. That lawyer would say "Turning off the comments is your best bet" and then round up to an hour for billing.
Setting up your own Wordpress blog can be done with a domain name and server hosting outside the United States, and with some effort this can be done anonymously. In that case, you don't need to worry about Section 230, you only have to worry about the (possibly laxer, possibly stricter) local laws. The reason that big social media sites are vulnerable to this particular US law is because they are businesses that chose to incorporate in the USA.
Should it bother us that the solution to a free speech problem in the US under your proposed legislative change can be rephrased as, "hide from the government and piggy-back on another country's better speech laws?"
This strikes me as a solution to the problem of censorship in the same way that "just buy your medical supplies from Mexico" is a solution to US healthcare costs.
Repealing 230 would affect the site's host and registrar because the host would be found liable for illegal content on their servers.
That means that you aren't going to pay $100 for a WordPress site, because allowing you to upload content without the protection of Section 230 is too much of a risk to bear for only $100.
Can't the US require performing DNS blocking of sites that violate American law? Can't they sanction entities that do business with your American law-breaking website?
>No, I don't believe so. Any individual can easily create a website and publish whatever they like.
Absent Section 230, the hosting provider could potentially be liable for anything published. They already do have to take down illegal content if they're informed about it. Section 230 predates modern social media.
If Section 230 were repealed the business model of selling ads around user generated content would still exist, but it would cost enough in lawyers to kill off the smaller forums that actually wanted to moderate properly, and you could be sued over the manually moderated comments in your WordPress blog.
I have a WordPress blog and a separate forum for comments.
If I were responsible for comments, I would do either of two things:
a) just shut down the forum,
b) use a paid commenting service that would come with moderation. This means that people who would want to comment would probably pay some fee for that privilege.
Incidentally, b) is unusual for us, but a quite sustainable model. Surely more sustainable than the current "show ads for free stuff" model, which seems to be hitting its limits.
> use a paid commenting service that would come with moderation.
Note that a side effect of this is that you would no longer have moderation control yourself. And how many of those paid services would exist, given the legal costs of running them?
What makes you think there wouldn't in practice be the same centralization that we see today -- one or two paid services that handle moderation for the entire Internet because only they have the lawyers necessary to do so. The only difference would be that every independent forum that didn't use them would be opening themselves up to significant legal risk.
At least right now if I set up a forum with Open Source software on a Linode server, I don't need to worry that turning off Google Captcha or avoiding a 3rd-party centralized service for my comments will get me sued.
"What makes you think there wouldn't in practice be the same centralization that we see today -- one or two paid services that handle moderation for the entire Internet because only they have the lawyers necessary to do so."
One important difference is that the network effect is not as big as in the case of FB et al.
But yes, the general trend in maturing markets is toward some kind of oligopoly. It is, I believe, time to reactivate antitrust laws; we are well beyond the point where they should have been used.
I think the rationale is rather to hurt big tech if they are perceived to suppress conservative opinions. The business model of any social media would likely collapse if they were liable for everything said on their platform. Or at least I believe that is the assumption.
I really wanted Mastodon to take off, but none of the people who discuss anything of consequence moved there (because they didn't want to lose their reach). Instead, I only found people in the fediverse that post smalltalk about their daily lives. Which isn't bad! But these people will not be affected by section 230 either way.
Intuitively I expected political dissidents would move off Twitter first, but I can see how the opposite happened instead.
I tried the fediverse, only to find just as many constant reminders of the culture wars there as on Twitter, though the bulk of fediverse servers have selectively federated in order to ensure that views represent one end of the ideological spectrum. If you are the sort of person who prefers to avoid discussions about political and social issues, then the fediverse is no escape.
The idea, I think, is that certain platforms would emerge that have to allow all content or get into legal trouble, even if those are just base layers such as ISPs.
Dismissive article that misrepresents both why "normal" people out there are unhappy about section 230 and what the higher levels of conversation about changing it really look like right now[1]. Articles like this are red meat for Hacker News. You endlessly reassert your political bias while pretending you are "informing" some imaginary group of stupid people (who in reality isn't even here to be "informed").
[1] Here is a recent discussion about 230 from a few lawyers and a YouTuber who used to work for the mainstream media:
As you will see if you watch the video, it is much more nuanced than the aggregate straw-man set up by the article.
...
The crux of the issue is that Section 230 was drafted with a lot of assumptions in mind. At the time, the Web was a fledgling thing, and Section 230's practical effect was to empower emerging platforms to do some common-sense moderation without fear of being held liable for all content they distributed.
Does the law still have this effect? Yes, it does. But that is no longer the primary effect in terms of how this law affects most of the people out there. Today its primary effect is to give an impenetrable legal shield to giant tech corporations that engage in deliberate society-wide manipulation of information. That is what most people out there are unhappy about, even if they don't quite know how to phrase it.
Can you explain the society-wide manipulation of information, and what positive results can be obtained by allowing the kinds of actions that are currently blocked by 230 and could be brought were it to be repealed?
Most users who want 230 repealed seem to believe that Facebook would have to give up moderation of their offensive comments/posts to be treated as a common carrier somehow to avoid liability. This is of course completely imaginary. Liability wouldn't be avoided by simply publishing everything.
Others assert that Facebook has had a shitty net effect on society and could be punished if not for 230. This is a little closer to reality but provides no conceptual framework for deciding what obligations Facebook has or how it meets them.
I don't believe that any reasonable solution starts with repealing 230, because fixing the internet one lawsuit at a time is horribly expensive, intractable, plagued by challenges as to jurisdiction and anonymity, and unavailable to 90% of the planet.
Instead, if you do not like the current state, figure out how to articulate Facebook's obligations to society.
The right wants to repeal 230 because they think it will make FB/Twitter keep up all their Q-anon insanity. The left wants to repeal 230 because they think it will punish/harm/shut down the FB/Twitter cesspool.
The reality is probably closer to regulatory capture where only the existing large platforms can afford to moderate leaving us with a heavily moderated, but stronger/permanent FB/Twitter. /shrug
I believe the understanding is that without 230, facebook would be liable just like the rest of the press is, and thus be essentially forced to disallow free-for-all posting.
This is a different mode of manipulation, where the people who own the media can shape the public discourse, but isn't that what the whole world had for decades and decades before social media?
In general, the understanding (right or wrong) is that without 230, a platform has to choose one of two actions: No moderation (modulo moderating for illegal content--whatever that is exactly) or moderating all posts (which is clearly not an option in general).
It may or may not make a big difference as a practical matter. Most things Twitter would moderate today are arguably threats, incitement to riot, etc., all of which could in principle be prosecuted as crimes. But it's at least a difference in principle.
ADDED: On the other hand, there's all the spam, link spam, astroturfing, etc., which isn't actually illegal. Arguably some things could be handled through Terms of Service. But it's worth remembering that Section 230 didn't come about to address purely theoretical concerns, given that Prodigy did lose a legal case after all.
> Today its primary effect is to give an impenetrable legal shield to giant tech corporations that engage in deliberate society-wide manipulation of information.
I work at a small (compared to Google and Facebook) classifieds site. I don't think we could exist without section 230, because it would have been too expensive to start without immunity for what our users posted. So, it still serves its initial purpose.
I would also like to start a social network for my local community, but I can't afford the legal liability of what my neighbors might say on the platform I built. If 230 goes away, maybe I'll never be able to build that community. And don't forget, FB started at a single college.
I'm also sure that the initial individuals posting things are exactly the correct individuals to be liable for the things they post.
I'm not sure this is true. Before Section 230 was a thing, companies were not liable for what users wrote on their platforms so long as they didn't moderate legal content [illegal content would be like housing ads with landlords specifying preferred race of tenants, or threats against the President].
A classifieds website could definitely exist post-230, and many did pre-230.
It is very much legal to threaten the President. You might get a visit from the Secret Service if your threat is deemed credible enough, but otherwise, threatening the President is exactly the same as threatening anyone else (actually perhaps even less likely to result in sanction: I could eventually accuse you of harassment for repeatedly making threats against me, the President likely cannot).
>Today its primary effect is to give an impenetrable legal shield to giant tech corporations that engage in deliberate society-wide manipulation of information.
I think this would be better phrased as "... legal shield to websites that only want to show certain content to their users."
"[S]ociety-wide manipulation of information" would imply that everyone gets their news and information from Facebook, Twitter, etc which gets linked directly into their brain regardless if they want it. These giant tech corporations aren't holding a gun to anyone's head - they all started with zero users and they will end when they have zero users.
If this is what "most people" are unhappy about, just wait until they read the TOS and realize what else they agreed to.
If one is following current events, it isn't possible to avoid Twitter, Facebook and even Reddit. Mainstream news outlets often report on content from those social media sites, so even if I'm not a user, social media still affects me.
“memeorandum is an auto-generated summary of the stories that US political commentators are discussing online right now. Unlike sister sites Techmeme and Mediagazer, it is not a human-edited news outlet, but rather a media-monitoring tool for sophisticated news consumers.“
You complain of one thing in your first paragraph and proceed to do the exact same thing in your subsequent paragraphs :) .
The incentive for big corporations is to make money, and they create pockets of bubbles that align with users' existing political views.
This has created the polarised society we see today.
If big corporations were trying to create a society with a unified political view, as conspiracy theorists on the right and left complain, then looking at the current state of the political landscape in the US, I would say they have utterly failed.
I concur with your complaints about techdirt. Masnick comes across as self-righteous in most of the pieces on the web site that should be called the Daily Outrage. I think your first paragraph really describes techdirt and not HN.
Interesting. I find Masnick's writing quite accomplished, and he is quite correct that there is a lot of misinformation (which he links to!) about Section 230. I agree he writes confidently about technology's intersection with the law, but I don't see that as a flaw: the Techdirt team seems to have a much better handle on these issues than the average journalist, and certainly a better understanding than the average blogger or internet commenter.
>If you said "Once a company like that starts moderating content, it's no longer a platform, but a publisher"
I regret to inform you that you are wrong.
That’s not the argument. Not really. The argument is that these companies are removing entirely legal content they just don’t like. Their TOS pages are interpreted flexibly depending on how much they dislike the poster of the content. They do this intentionally and target specific views. They curate their bubble and that is publishing. The issue isn’t moderating content, it’s the selective enforcement.
He starts with a complete strawman.
>It protects all websites and all users of websites when there is content posted on the sites by someone else.
That is the exact crux. I created the content, but EVERY one of these sites is clear they own it. Once I wrote it and hit send, I gave it to them and they assumed ownership; they are clear about this. So they ARE publishing THEIR content.
> That is the exact crux. I created the content, but EVERY one of these sites is clear they own it. Once I wrote it and hit send, I gave it to them and they assumed ownership; they are clear about this. So they ARE publishing THEIR content.
This isn't the distinction the law makes though. The distinction is about the provider, not the owner. If I write a book, and you post an excerpt from that book on my website, I'm not liable under 230, even though I'm the original author of the content! The distinction the law makes is between a first party provider and a third party provider.
That is, imagine I got a license to use an image in my book, but not for distribution on the internet. You then post that page from the book as a comment on my website. I didn't violate my license, even though the content posted on my site was originally mine.
On the contrary, they tend to be clear you’re assigning them the right to publish worldwide in any medium in perpetuity, including making money off it, not ownership.
For instance, you don’t have to seek their permission to also publish your thoughts somewhere else, or to use your own photo commercially, etc.
As such it seems this law makes sense, but it also seems to do little about some other problems. I have started to think lately that what is best is a law forbidding espionage on people. Like, you cannot collect and/or store large portions of people's browsing history. Also, it should be forbidden for the data of one service to be shared with another, like the example of Facebook and WhatsApp sharing their data.
So if 230 were repealed, and you posted about wanting to invade congress or how you want to send black people back to africa and moderators removed it, what legal recourse is now open to you, with 230 gone?
Arrgh, you are repeating almost verbatim like half the nonsense that is debunked in the linked article!
If, say, Twitter doesn't want republicans to talk on the Internet (to pick one ridiculous example), repealing section 230 would do nothing to stop them, nothing. No modification of section 230 would do anything, because nothing in section 230 deals with content moderation at all. It's only about liability.
I mean, just to be blunt: if 230 was repealed right now Parler would be sued into jelly given all the violent right wing garbage on that site, but Twitter (which is trying to remove it) would still be around, and still (as per my ridiculous example) censoring republicans.
Now, maybe you want some kind of new fairness doctrine law or something. And that would be interesting to discuss. But it's got nothing to do with liability protections for internet companies, nothing at all.
I do not like people like you who are intelligent enough to be intentionally intellectually dishonest while being extremely malicious underneath.
You fool people for sure. I do not know why you do it, though.
There is no manipulation going on by “Big Tech”. Your fear mongering is unfounded. Pushing this sort of rhetoric is what created the insurrectionist events a few days ago.
> There is no manipulation going on by “Big Tech”. Your fear mongering is unfounded. Pushing this sort of rhetoric is what created the insurrectionist events a few days ago.
There's definitely manipulation going on by "big tech". Social media has radically distorted political discourse and amplified some trends far beyond what Fox News style media alone could have accomplished.
What's debatable (and perhaps straying into crackpot conspiracy theory territory) is the idea that "Big Tech" has done this deliberately, collectively, and with goals of steering society at large in any particular direction. This hypothesis is a lot more far-fetched than the default assumption that big tech has smaller, more localized concerns about maximizing profit, and cares little for the broader consequences of the interaction between their algorithms and society. (Analogously, it is absurd to suggest that US food companies have a goal of making everyone obese and giving them diabetes and heart disease. They simply want to make more money, and have found ways to optimize their product until it becomes addictive, and they don't care about the long-term or large-scale impact on the public because it's not their job to care.)
> There's definitely manipulation going on by "big tech".
We do it to ourselves. I made this site [1] to review moderation on reddit. It works for accounts, subreddits, threads, and there is also a "top posts/comments" section [2]. When a userbase can review the moderation it becomes harder for manipulation to occur.
I agree that the manipulation is automated and intended to increase their profit. The effect seems to be that it creates bubbles where you only see views that align with your existing world view.
My own experience - I have my own YouTube profile filled with videos which reflect my political opinion and the feed of my friend who leans towards the other side of the political spectrum had videos reasserting his world views.
I don't think big tech is trying to steer society towards one unified political view.
Nonsense. A former Democratic staffer employed by Facebook censored the breaking news story about Hunter Biden's laptop on the eve of the election which we now know to be factually accurate. This is a literal and recent example of what the person you accused of being malicious talked about. We all know that if it was a smear against the Trump campaign instead, that this would've never happened. You must be able to see the double standard at work.
No idea why this is downvoted, there's no better example of censorship than this.
We would be in a better situation right now if voters had the facts about these four FBI investigations back in 2018, so perhaps the Democrats could have nominated a candidate that isn't a criminal.
I still voted for Biden, already aware of his family's influence peddling, because Trump is a repulsive moron in addition to being corrupt... just to be clear.
The NYPost was removed from the platforms for releasing this info. After the election, it was revealed that yes, all the pictures of Hunter smoking crack, fucking prostitutes, and the communications about being inappropriate with his 14yo niece were not fabricated and he is under investigation for tax fraud... not a peep from the orgs that silenced them for this. Memory holed.
I started reading this in the smug knowledge that I knew it already, that I wasn't wrong about 230.
But then I saw this passage:
> The "user" protections get less attention, but they're right there in the important 26 words. "No provider or __user__ of an interactive computer service shall be treated as the publisher or speaker...." That's why there are cases like Barrett v. Rosenthal where someone who forwarded an email to a mailing list was held to be protected by Section 230, as a user of an interactive computer service who did not write the underlying material that was forwarded.
I didn't know that! I guess I was wrong about Section 230 :)
I feel like I’ve said this a hundred times in the last 30 days but you should read the book “The Twenty-Six Words that Created the Internet” for a really good history of why we have Sec. 230 and what it was intended to “solve”. It’s an interesting topic that’s the subject of intense debate amongst people on both sides who don’t fully understand the subject they’re arguing about but are convinced they do.
I was struck by a thought while reading the article- are the feeds of social media companies actually protected under section 230? My Facebook feed is more than just a simple serving of my friends’ content, it’s a work in and of itself (like a collection of essays).
It seems like a stretch given that the law states that they can’t be considered the publisher of “information” provided by others, but possible as these sites are the ones engaged in picking content.
The article mentions the roommates.com racial discrimination case. I wonder what a court would do if a site had some black box algorithm that ended up consistently showing a feed of rooms in one zip code to whites and rooms in another zip code to blacks, never overlapping.
Edit-
My point being- it might be possible to hold Facebook/Twitter/YouTube liable for the information in their feeds, particularly that which is surfaced out of order or from people a user isn’t subscribed to. But I’m not a lawyer, nor do I play one on TV.
I've also wondered about this. I suspect that, since Facebook hired 75% of their lawyers in the last 3 months, we will find out soon.
I could see one argument about how an algorithm doesn't really editorialize.
A metaphor that is often used is that social media companies are more like a newsstand than the editorial board of a newspaper. A newsstand isn't under any obligation to sell all the papers available in its stand or place them in a specific order. I would expect the social media companies to say that their algorithms are closer to placing newspapers on a stand based on market conditions, not based on their personal opinions/eye. Whether that holds up to scrutiny, we'll see.
Yes, in one sense an algorithm doesn't editorialize, not being intelligent, but humans can write an algorithm to editorialize for them. The editorial decisions are made while designing the algorithm.
And if Facebook, knowing that outrage drives engagement, chooses to run an algorithm that displays the most engaging content, aren't they also choosing to display the most outrageous content?
That is a good point: you not only demonstrated that choosing which comments to display is editorializing, but also that an algorithm which displays the most outrageous content to provoke engagement is, in itself, one that incites violence.
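To make the idea concrete, here is a minimal, hypothetical sketch of an engagement-first ranker (the field names and weights are invented for illustration, not taken from any real platform). The editorial choice lives in the scoring function someone at the company wrote, not in any per-post human decision at serving time:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Weighting replies and shares heavily tends to favor provocative posts,
    # because arguments generate far more of both than quiet agreement does.
    return post.likes + 3 * post.comments + 5 * post.shares

def build_feed(posts: list[Post], limit: int = 50) -> list[Post]:
    # Sorting by engagement_score is where the "editorializing" happens:
    # whoever chose these weights chose what users see first.
    return sorted(posts, key=engagement_score, reverse=True)[:limit]
```

Swapping in a different scoring function (say, strict chronological order) would be a different editorial policy, which is exactly the distinction being drawn above.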
These websites are like the proverbial 10000 monkeys generating random symbols.
Tech companies can form any narrative by picking the parts they promote; does that still make them not responsible for the result?
Couldn't you make the same argument about stickied posts on old forums?
I don't think how the content is displayed changes whether they are covered by section 230, and I think the anti-discrimination exceptions would need to be more explicit to matter.
It looks to me like you can. When does a moderation choice become an editorial choice? Shouldn't there be some difference between the kind of moderation that removes content and the kind of moderation that promotes content?
It makes you responsible for what your user sees and doesn’t see. If you’re not presenting info transparently by time or popularity, you’re editorializing.
Perhaps the reason there hasn’t been a big, precedent-setting court case yet is the difficulty of getting the data or seeing the editorializing. It seems like that veil is slowly coming down, though, through some of these court cases.
What comment on a local news article does Facebook decide to show me to drive engagement? Why was it picked instead of another?
I suppose one could get timestamps on their friends profiles and compare that to what was displayed to them.
The idea that Facebook can make these editorial decisions at the coding level yet not be liable like a newspaper is one thing that needs to change.
First, your opinion on how things should work is separate from the discussion I was taking part in. "Editorializing" isn't relevant to the protection section 230 provides.
On your point, I'm not entirely sure what you want. Do you think Facebook should be able to face libel charges for how they present somebody else's post? Or do you want their ranking algorithm to be transparent? Should Google search results be considered the same way?
That depends on who is stickying them. If it’s a user, then it would still just be users deciding what you see and storing the content. A newsfeed on Facebook is a curation of user content. I believe that it is still covered as anyone choosing to forward content or repost content is also a curation, but it is something that could be re-examined and changed.
Thinking about the concept of these centralised platforms 10-15 years ago, it would have been pretty obvious that any kind of arbitration would not be scalable, and therefore there wouldn't be any. I mean, the business model depends on it.
I'm not a fan of them being arbiters of truth, and also not a fan of there being tens of thousands of blatant copyright infractions on their platforms daily which they monetise.
I don't think there's any other answer than decentralising groups on the web.
I think there needs to be a distinction between information seeking and passive interaction. If I go onto twitter and go to DPRK News (https://twitter.com/DPRK_News , quite a good parody account) it's clear that DPRK News is the publisher of all the information and twitter is the hoster. I'm seeking the information in the DPRK News feed.
When I'm just flipping through twitter's infinite scroll, I'm not really seeking anything in particular; rather, twitter is acting like a traditional editor- choosing the content I see. It's a different activity.
I think platforms that choose content on behalf of the user need to have a different set of obligations than those that serve content based on user requests, and those obligations go up with scale. It's one thing to have a fake story or illegal content be seen by 100 friends, it's another to feed it to 100,000 people.
That’s an interesting train of thought. I also wonder about content that’s been modified in some significant way (fact check labels not clearly separated from the content itself for example) by the provider: is there a point where the provider could become liable? I.e. if I made a joke website where you could post comments and it would auto-replace any mention of “topkai22” with “the notorious scoundrel topkai22”, could you sue me directly?
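A minimal sketch of that hypothetical transform (the username and phrase are the ones invented in the comment above, not a real site). The point would be that the inserted words originate with the site, not the commenter:

```python
import re

def transform_comment(comment: str) -> str:
    # The site, not the user, supplies the added phrase, so the site is
    # arguably the "information content provider" for that portion.
    return re.sub(r"\btopkai22\b", "the notorious scoundrel topkai22", comment)

print(transform_comment("Had lunch with topkai22 yesterday."))
# Had lunch with the notorious scoundrel topkai22 yesterday.
```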
I think that one is pretty clear- you are providing the phrase "notorious scoundrel", so you are the provider of that information.
A subtler one is a site that only surfaces snippets about a subject. It wouldn't be that hard to write an algorithm that surfaces only negative segments about a subject, or that automatically quotes out of context.
Surely that is transformative, but the information is still coming from the original authors.
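As a rough illustration of how such selective surfacing could work (the word list and scoring below are invented for the example; a real system would be far more sophisticated), a sketch might look like:

```python
# Hypothetical "negative snippets only" selector: every word shown is the
# original author's, but the selection policy belongs to the site.
NEGATIVE_WORDS = {"corrupt", "failed", "scandal", "lied", "worst"}

def negativity(sentence: str) -> int:
    words = {w.strip(".,!?;:").lower() for w in sentence.split()}
    return len(words & NEGATIVE_WORDS)

def surface_snippets(article: str, limit: int = 3) -> list[str]:
    sentences = [s.strip() for s in article.split(".") if s.strip()]
    # Keep only the most negative-scoring sentences, discarding any
    # balancing context that appeared in the same article.
    return sorted(sentences, key=negativity, reverse=True)[:limit]
```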
Information that a provider creates itself isn't covered at all by 230, so they're potentially liable for things that don't originate from users like fact check labels no matter how they're applied.
That's one of the many silly things about the usual 230 discourse: one of the earlier flashpoints was around things like Facebook and Twitter flagging "misinformation" and the like, when they don't actually get any protection under 230 for those labels, since they're writing them themselves.
Have the "misinformation" flags been tested in court? It seems like that could possibly be covered by the "Good Samaritan" provision, although it seems it would hinge on if the definition of "restrict access to or availability of material" would including additional labeling rather than flat out blocking.
If a site has a black box algorithm that uses unlawful criteria (race) to limit the scope of searches, the site will not be immune under Section 230(c); it is liable if and only if it requires users to provide that information (race) as a condition of accessing its service, in which case it acts as an “information content provider”.
To my understanding fb would just need a number of your likes or shares to guess your gender, ethnicity, age, education level.
If you give away your data this way, would that count as providing the information used to control the information stream?
And a follow-up: this reminds me of the story of that data-mining store chain which predicted pregnancy with high accuracy. A dad of a teenage daughter was very pissed when the chain started, in his perception, a personalized coupon-mailing campaign pushing his teenage daughter "to become pregnant". When she actually already was. (The dad later apologized.) Nonetheless, the chain then concealed their knowledge, burying the targeted baby-stuff and organic-everything coupons among just enough noise of tires, tools and men's something-or-other that it was no longer obvious to the innocent eye.
Would those tactics be sufficient to whitewash away Section 230(c) obligations?
Right- it's well known that discriminatory algorithms can evolve unintentionally. It's quite possible to accidentally create a feed that almost never shows black people white potential roommates and vice versa.
I don't think "the algorithm did it" would necessarily be a shield against liability.
And, back to my original point, I think hosting content on a person's profile page is one thing, and making the (probably automated) editorial decision to show that content to 100,000 people in their streams is another.
Section 230 certainly encouraged centralization of the internet. Without it, platforms would be liable for content they hosted, which would be financially unviable.
Instead, people would have to self-host their own content on their own infrastructure. (230 provides no protection for people that do that.)
In addition to preventing the big monopolies from arising, this would have made censorship and surveillance much more difficult.
As it is, we’re in a situation where even the speech of the POTUS is beholden to corporate censors.
The rest of the DMCA drastically shifted the balance of power toward copyright holders (and was also designed to increase consolidation of the industry); section 230 was necessary to offset that. I’d be happy to see the whole law repealed, though it would have the side effect of shattering the business models of the big tech monopolies. Repealing 230 but not the rest of the DMCA would be madness.
> As it is, we’re in a situation where even the speech of the POTUS is beholden to corporate censors.
We've always been in that situation. Aside from the presidential alert system there is no government-controlled communication channel from the president to individuals, and there never has been.
It's always been up to citizens to inform themselves of governmental goings-on.
Finally, the president of the United States is a citizen holding a political office. No more, no less. The sitting president deserves no special treatment in a country of equals. If a citizen breaks the rules of common decency then fellow citizens are less likely to repeat those words for others to hear.
Trump had a 47% approval rating on the 6th and a 49% approval rating on the 7th.
Nobody is asking for special treatment, they're asking that he and the 50% of Americans that voted for him get the same uncensored ability to share their views as the other 50% who commonly express extreme views in the other direction without such censorship.
> we’re in a situation where even the speech of the POTUS is beholden to corporate censors
This is not true. The POTUS has the most prominent platform in the world, he could walk out to any random street corner and anything he says will be broadcast to every corner of the planet, he is not in any way beholden to "corporate censors".
That’s not true any more though. All of the media networks have shown examples of cutting away from the President’s speeches when they detect he’s going to say something they don’t like.
If you were Trump right now and wanted to directly address the nation, you know calling CNN and Fox and NBC is going to be a mixed bag at best. FAANG has given the indication right now that for two weeks no one should be giving the President a platform. I fully expect him to use the national broadcast system.
The narrative you're trying to present is completely false. Any "cutting away" of Trump is utterly inconsequential compared to the complete domination of the media Trump has commanded since the beginning of his presidential campaign. Trump is literally the most famous man in the world, you're simply in denial if you can't admit that.
>when they detect he’s going to say something they don’t like.
Give me a break. You act like they have been cutting away from him for years. It's only been recently, this year, when he was literally lying and calling covid a hoax, telling people to do dangerous things like gather for his rallies during a pandemic / actively saying not to wear masks, and finally this election bullshit.
The networks have put up with his BS for years - there is a time to take a stand and the last few months have proven that.
> "The rest of the DMCA drastically shifted the balance of power... section 230 was necessary to offset that... Repealing 230 but not the rest of the DMCA would be madness."
Note that Section 230 is part of the Communications Decency Act (CDA). It is not part of the DMCA.
>Without it, platforms would be liable for content they hosted, which would be financially unviable
I don't think that's true, is it? YouTube could continue to operate exactly like today EXCEPT they couldn't moderate beyond the legal minimum (no CP etc). That would mean they were still shielded, as your options WITHOUT s230 are Hard Moderate (and accept liability) or Not Moderate.
> Instead, people would have to self-host their own content on their own infrastructure. (230 provides no protection for people that do that.)
Without Section 230, hosting providers, registrars and ISPs would be held liable for your "self-hosted" content. It would be too much of a risk to host your content, unless you had deep enough pockets to assuage that risk.
I don't believe that repealing or modifying 230 is right, all websites deserve the protections it provides.
However, I do believe that social media feeds and the companies or people that thrive on them are a cancer on society that need be to regulated into hell and never brought out again.
If people are going to get upset about recent events and how companies are reacting, I would really prefer it if they focus on what the root cause is, instead of trying to get a one up on their enemies.
IMHO the main issue is kind of orthogonal to Section 230 : It's that some companies have grown too big, and have too much power, which is a direct threat to democracy, especially in smaller countries.
Where it does come around to Section 230 is that the way that they can (ab)use their power is tied to what they do and laws like it : an ISP with several billion customers but under net neutrality and common carrier laws would probably still be less problematic than current Facebook/Twitter/YouTube.
Right. 230 is probably fine as is for a small message board or discord server. All of the objections (or at least the non-trolling ones) are about giant global services with algorithms for optimizing engagements.
This is a great summary of what the law's function is.
To address those trying to repeal it:
• Some people think that if 230 was repealed, websites would moderate less. That's false. Because of the liability for users' posts, big websites would be forced to moderate more. And small websites would shut down out of fear of lawsuits.
• Some people think that 230 must be repealed because websites are moderating too much. Well, the great thing about 230 is you can create your own websites. If you want to make your own free-speech forum and watch it be filled with far-right insurrectionists like all the other ones, you can, and you won't assume much liability for it. On the other hand, if 230 was repealed, you would not have this option, and the big tech companies you don't like would censor your friends for the same reasons they do today.
I don't think everyone using Twitter and Facebook is good but this can't be solved by making it hard to run small forums.
I also think it's dangerous to try to force people to run their forums with “fair” moderation. Free speech is important as a right to stop the state using violence against you. Being kicked out of a private group because you're disruptive, mean or faked your speedrunning scores, is on the other hand no great injustice that the law should interfere with.
IMHO "And small websites would shut down out of fear of lawsuits" is misleading - it would be more appropriate to say "And small websites would refuse all user-generated content out of fear of lawsuits".
They would still operate, but without the immunity the only safe way of moderation is to either have very strict moderation that ensures that anything even remotely questionable can't be posted; or (if you can't afford to have high quality manual moderation) simply not allow users to post anything at all.
I think the legal defense would be whether the site acted reasonably and liability would be difficult to establish without harm. If people use your site to coordinate doxxing someone and you don’t take reasonable actions to prevent, stop or legally identify the users behind it, then you’re going to have a tough time getting out of liability. AFAIK doxxing isn’t illegal but harassment can be.
Side note: I wonder what the new requirements that beneficial owners of legal entities be disclosed will do to 4chan, 8chan, etc.
If there is harm, then without section 230 "acting reasonably" is not a legal defence - you (as the website/business/service allowing user-generated content) will be targeted by lawsuits even if you act reasonably, simply because it may be easier to extract monetary compensation from you than from the random teenager who actually posted the harassing content. Even if you identify that poster and ban them, that does not undo the harm and liability, so you'd be the target of expensive lawsuits. You would likely have grounds for a counterclaim against that user to compensate you for the losses, but that won't help you much, as your costs would be the total claim of the harassment plus the lawsuit costs, and your compensation would be limited to what you can recover before the user files for bankruptcy.
There are no grounds for them to sue a platform, because of the immunity granted by Section 230, which was passed 16 years before the events of Sandy Hook as part of the Communications Decency Act of 1996. The above comment was about a hypothetical situation if that immunity were repealed - then we absolutely should expect every platform to be sued into the ground unless (and even if) they censor everything.
Without Section 230, who is going to host these small websites and why would they take on the risk of having one of the sites' owners posting something illegal?
Honest question: is it possible that some of the repercussions of Section 230 are not just reliant on a strict reading of the law in question, but also the case law and precedents built around it in the form of court decisions? Precedents in the US are sometimes more important than a literal reading of the law.
The idea behind repealing 230 is not essentially about tackling censorship.
It's more the idea that the right wing believes Social Media is unfairly applying their moderation rules, targeting the right side of the political spectrum. They want these rules (even if it's more censorship) applied evenly to both right and left speech.
While what we have right now is not "free" speech, even moderated speech has a bias and both sides point to the other when it comes to these rules being applied unfairly.
Repealing 230 is a way to hurt who they see as responsible - the companies running the services as they are the one who apply these moderation rules unfairly.
A lot of these are straw men arguments. In particular it's taking a lot of statements about how Section 230 ought to be amended or replaced, and falsely portraying those statements as being about how Section 230 currently works. Obviously Section 230 doesn't work that way right now, which is precisely why people are proposing changes.
I regularly read HN threads that end up in discussion of Section 230, and find many posts that contain these same wrong ideas about what the law means.
To that extent, I think you're giving the author of the article too little credit, and the general discourse too much.
Clearly that's my subjective experience, and YMMV.
It’s hard to have a conversation when some (most even?) don’t understand how the law (both specifically this one and in general) work. I wouldn’t throw out “straw man” here — you need to have common (semi-accurate) ground, especially before proposing changes.
I like this law, and I don't think it needs reform. If anything, it encourages competition: if you don't like some company's moderation you are free to go make your own damn website without moderation (or with the moderation policy you prefer), or use a different one that will take you instead.
This entire thing is just because of a pissing match between Twitter/FB and Donald Trump. Why doesn't he just use an alternative? I thought Parler was supposed to be the "free speech platform"? I'm sure his audience would follow him, and I'd wager a lot of them are already there.
This entire "fight" about S230 is completely bonkers.
> If you said "Section 230 is a massive gift to big tech!"
> Once again, I must inform you that you are very, very wrong.
If section 230 is no big deal, let's just repeal it. What's the hoopla about it, then? Why are companies so adamant to protect it?
The section itself is very broad, but the impetus for the passing of Section 230 was to protect free speech by removing liability concerns for what others say online.
> On July 23, 2013, the attorneys general of 47 states sent Congress a letter requesting that the criminal and civil immunity in section 230 be removed. The ACLU wrote of the proposal, "If Section 230 is stripped of its protections, it wouldn't take long for the vibrant culture of free speech to disappear from the web." [1]
So, apparently big tech has killed that vibrant culture and in 2021, I think we can declare it dead. Tens of thousands of people are getting purged from social media, others alienated, ostracized, and shadow banned. There is a concerted effort to manufacture consent through targeted algorithmic hijacking of free speech. And so, we find ourselves in a situation where the current situation is UNTENABLE.
A free society cannot have its discourse completely dominated by a handful of corporations. This is a civil rights issue. I'm sorry, some people want to weasel out of this with arguments of technicality.
Technically, before labor laws were passed in the 20th century, there was nothing wrong with asking someone to work 60-70 hours a week and paying them the same rate. There was nothing illegal about paying someone 10 cents an hour prior to minimum wage laws. But we realized that some broad protection for certain workers rights were worth intervening in the free market for.
That is the issue of the 21st century. We are the content producers on social media. We should have rights to not be deplatformed. And no, it's not acceptable to arbitrarily empower a handful of corporations to decide what is acceptable speech and what is not. If the speech is not illegal, it should be protected. And in the U.S. we have broad, broad protections for speech for an extremely important reason. We cannot forget that. Too many societies have fallen to tyranny as a result of free speech restriction. Tens of millions perished for the right to speak freely. That can't be overstated or forgotten. Ever. The 21st century deserves a movement for a digital bill of rights.
> If you said "Section 230 is a massive gift to big tech!" > Once again, I must inform you that you are very, very wrong.
>If section 230 is no big deal, let's just repeal it. What's the hoopla about it, then? Why are companies so adamant to protect it?
It's not a massive gift to big tech. It's a gift to everyone. It's a big deal to remove it. Certainly, it would affect lots of big tech companies; but it would affect lots of small sites too. Case law from before section 230 is pretty clear: either moderate perfectly, or only moderate to the minimums required to avoid criminal liability. I wouldn't participate in very many discussion forums without best effort moderation; either it would be a cesspool or sterile.
Section 230 allows sites to do whatever level of moderation they want. It makes no requirements of consistency or reason, whatever you do as a moderator is fine. The users who post are responsible for their conduct.
Certainly, there's a lot of moderation going on these days. And there's a real question of if there is a public square left; but section 230 wasn't about having a public square, it was about letting people run their squares however they like.
Yes, absolutely. Section 230 doesn't really take a position on that.
The problem is, if Twitter is a private square; are there any public squares (online); and if we have no public squares, what good are our rights in the public square.
On the other hand, if Twitter is a public square, that places a burden on a private company that they don't necessarily want. Although if you look at quotes from 2013, they were saying things like "We're the free speech wing of the free speech party." [1] I don't think we can necessarily hold a company to the words of a seven year old interview of a former CEO; but it's certainly no longer holding free speech as important above moderation of reprehensible speech.
There is legal precedent here: a company town jailed a Jehovah's Witness who was preaching on their sidewalk, and the company was still found to have violated the preacher's free speech rights, even though it was their sidewalk.
No, the question is whether twitter has a monopoly on private squares that almost anyone can access and peek into. They don't, and there are dozens of alternatives.
It's not a publicly owned square, that is obvious. It may be a publicly accessible square.
Ultimately, its right to kick people out for lighting dumpster fires still seems perfectly reasonable, even if it is a private square that is publicly accessible.
That said, I could see civil suits arguing responsibility for keeping that square safe for guests (i.e. allowing ice to build up on the private but accessible square so that someone slips and falls).
But if twitter doesn't want someone in its square, it should be free to kick them out as long as they aren't a protected class, just like every private but publicly accessible business we patronize.
"the impetus for the passing of Section 230 was to protect free speech by removing liability concerns for what others say online."
No, it was not meant to protect free speech. It was meant to protect the market for Internet services, from ISPs up to user-facing services. The Internet ecosystem would be crushed either by trolls/nazis or by waves of lawsuits against companies that dared to moderate such things without some form of liability protection for services that perform some degree of moderation.
You don't have a right to an audience anywhere in the world and you will never have one here.
People are finally being deplatformed because they are using someone else's megaphone to spread bigotry hate lies and violence. I have a hard time seeing the problem.
The approximately 1 million American military casualties died for many causes. So far as I'm aware, enabling the spiritual successors of the Confederate traitors and Nazis to have a voice in someone else's house was not one of them.
One might say that if a law is so misunderstood by so many, and has an outcome that so many would disagree with, then it should be replaced on that basis alone; what is more dangerous to a society than being ruled by laws that are misunderstood to such a great degree? (This might be said of many other laws too.) You can blame the people, but we install lawmakers to produce good laws, not bad ones (I wouldn't blame the users for not using my app the way I envisaged because they're ignorant; I'd hope to improve the app).
But what should replace it? Perhaps something that does make the distinction between what is a platform and a publisher. That could be difficult.
I'd most likely go for something tapered, so that smaller sites had more protections / less responsibilities and larger sites less protection/more responsibility, in order to promote competition because I think lack of competition is the main problem. So if small sites retained section 230 protection and larger sites didn't, perhaps that would produce better outcomes. Thus, I'm completely against stamping down on Parler and Gab, and any other upstart in the space. The more competitors the better, in my view.
> One might say that if a law is so misunderstood by so many, and has an outcome that so many would disagree with, then it should be replaced on that basis alone; what is more dangerous to a society than being ruled by laws that are misunderstood to such a great degree? (This might be said of many other laws too.)
I agree that laws need to be coherent to the general public to work, but the problem here is that what people misunderstand is the pre-230 conditions that led to its creation, which we would return to if it were repealed and not replaced with something else. It’s a Chesterton’s fence.
I agree that Chesterton's fence is a nice thing to have in any discussion of law but I'd disagree that we'd return to the same conditions as pre-230, the internet landscape is very different now which is why the law appears to favour larger companies. In fact, repealing it altogether might favour them, it wouldn't have back then.
That’s fair, I think it’s safe to say nobody really knows what the second- and third- order effects of 230 repeal would be.
An optimistic take is that we would revert to a ~2008 internet where Twitter and Facebook are much less dominant and “the feed” is a self-curated list of RSS feeds (or ActivityPub subscriptions). A more pessimistic one is that we all run for cover in private self-moderated Slacks and WhatsApp groups.
Is it fair then to say that the only way to “lose section 230 protection” is to cease having content partners by paying them, or employing them, or something like that? I don’t have a political point with this question and apologies if it was covered in the article and I missed or misunderstood it.
Through [1] I found this article [2], and it seems even if you pay people for content (Blumenthal v. Drudge, page 7; AOL was paying somebody for writing articles and somebody sued both that person and AOL for defamation) you're still explicitly immune under Section 230. (I'm not a lawyer, I find this kind of confusing sometimes.)
I'm not a lawyer, and I'm not your lawyer, but allow me to play one on the Internet.
Looking at two parts of the law:
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
> The term “information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.
I think employing people to create or develop information under your direction would likely cause you to "lose 230 protection," but paying them for content they created or developed independently probably doesn't. If you contract them to independently create something new, it might be your responsibility in part; I dunno.
Edit: reviewing the article linked by conistonwater's sibling comment, it seems pretty clear from Blumenthal v. Drudge that a contractor's work is not the site's responsibility. I imagine there could be circumstances where the site specifically solicited sanctionable content, and they might be held responsible.
I largely agree with the specific legal analysis in this guide. And it's nice to see a lawyer encouraging people who are not lawyers to read a law for themselves. But freely mixing legal analysis and political opinion, in a snarky format with debunker tone, isn't responsible. Law degrees don't turn our policy views into anointed fact.
The section on "big tech" isn't legal analysis. The fact that a law doesn't call out a specific group doesn't mean that such a group won't specifically benefit, or suffer, in practice. The fact that Section 230 doesn't have any language about company size or market share doesn't mean the law doesn't benefit them, benefit them more than smaller players, or benefit them in ways competitors in other industries don't enjoy. There are arguments both ways there. It's not all the lawyers on one side, and everybody who hasn't read the law on the other.
We know which side Mike's on. Reducing Section 230 claims to "a narrow set of frivolous lawsuits" was really cringey. Have a look at some of the cases that had folks reconsidering Section 230, back before Trump was even a candidate for president. The whole section where that jab appears is a straw man. I've not met anybody who thinks companies are totally immune from all laws.
To add an example, the YouTube landing page is full of curated content from Google. I have not subscribed to any of it. In fact, my subscribed videos are on a completely separate page.
Google has chosen what they want me to see. Therefore, Google is a publisher. The fact that they hide this behind an algorithm has no bearing on the outcome. Google is in charge of the content that I see. If Google is in charge, they are publishers.
Donald Trump is still in control of one of the most effective communication channels in the world.
He could schedule a press conference at any time and have his words plastered all over Twitter within minutes by journalists.
Twitter as a platform made a choice, as a private entity. This choice is legally protected. Likewise Apple and Google both have the right to ban apps like Parler from their app stores under the same protections.
Nobody is banning "conservative" speech. What they are banning is outright lies, racist rhetoric, and calls for organized violent action.
If that has become synonymous with conservatism and the Republican party, then I would say that their "political movement" needs to be dismantled.
Nobody is arguing that these platforms don't have a legal right to do what they've done. The only debate is about what they ought to do, and whether the laws should be changed as a result of those preferences.
Regarding Parler: it is a large platform with a mixture of conservative speech and outright violent rhetoric. Does that justify its ban? The same is true for Twitter and FB (ie there exists a large amount of unmoderated extremist speech and calls for violence on these platforms). What makes it right to ban only Parler? Is there evidence that the density of such speech is higher there?
Section 230 is a great thing and the consequences of repealing it or adding extra requirements to qualify for its protection would be unequivocally bad.
From a pure logic standpoint, there is something that bugs me with the argument that moderation isn't the same as producing content. A website's moderation policy could be that, on any given day, any message other than "The archduke is a corrupt autocrat responsible for the assassination of members of the political opposition" -- which might be libel -- will be deleted by the content moderation team. The next day, the moderation policy might be that the only message that will not be deleted by the moderation team should be "Furthermore, the archduke is having a secret affair with his first cousin".
The platform doesn't publish any content in this scenario; it merely waits for some random anonymous account, against which no liability can be assessed, to post the unique string of characters that is allowed by the content moderation policy on that day.
Of course, what I'm setting up here is a bit of a beard fallacy [1]. Laws are not enforced by algorithms and a human (in particular a judge) is perfectly capable of distinguishing between a content moderation policy that bans profanity and one that bans all but one string of character.
Beard arguments are everywhere, human-made categories always have fuzzy boundaries. However, some categories have sharper boundaries than others. There is very little ambiguity as to what "running a red light" means for a motor vehicle, even though Buridan's paradox [2] tells us that there could be. There may be a continuous curve, but the slope is very steep, creating a sharp distinction between "running a red light" and "not running a red light".
The slope from moderation policy to content publisher is not as steep of a slope. While everyone might agree that banning, say, profanities is not really a form of speech, and while everyone might agree that my cute hack above is really just speech, there are many intermediate points where reasonable people might disagree.
Generally these uncharted areas get progressively cleared up through lawsuits creating precedent, a process which creates uncertainty along the way and introduces layers and layers of complexity.
My own feeling is that when the delineation of a category requires an accumulation of special cases, exceptions, distinctions and clarifications, it generally means that it does not map to something fundamental or important. A large part of software engineering consists in finding the right abstractions to think about a problem, and when one leaks that badly, it's generally an indication that one is thinking about the problem wrong. My hunch is that the fundamental issue lies in the concept of liability for libel in the first place.
>A large part of software engineering consists in finding the right abstractions to think about a problem, and when one leaks that badly, it's generally an indication that one is thinking about the problem wrong.
Any law about content moderation is ripe for being gamed, so I don't think a hard definition can ever suffice. Think of libel: very hard to prove because it requires clear falsehoods, intent, and expression. So plenty of people, especially politicians, have learned to walk the line. They can legally convince people that their opponents are space aliens who stomp on apple pies and interrupt baseball games.
A better example for this "adaptive" kind of law is online gaming. If a player uses an exploit to gain an advantage, the developer will often correct that exploit. When developers stop pushing bugfixes, the game often ends up being dominated by whatever exploits were left. For law, legal precedent is like a bugfix.
I don't necessarily disagree with anything you said. To continue your analogy about games: if your game requires constantly coming up with new rules and the rule book ends up being thousands of pages long, it's a poorly designed game. How many bug fixes have been made to chess? (The answer is not zero, but not many either.)
The point is that a model which requires constant bug-fixes is one that generalizes poorly which is a symptom that you are capturing the wrong abstraction.
This is an informative article, but it is missing a couple key things. Maybe the article was strictly aiming to be factual, but it felt like it was taking the position that 230 is good and should stay in place.
With the assumption that it means to argue for keeping 230, they failed to convince me that it was a necessary piece of regulation. Yes, they corrected many misconceptions, but at the end of the day there is still a key argument against 230, and it goes like this:
There are a lot of things posted by 3rd parties on websites that lead to harmful outcomes. Companies that leave that content up are able to make profit from the views/clicks generated. They have already demonstrated the ability to moderate content, they chose not to do so when it’s not favorable to their profit. Why should I protect platforms or give them the option to not moderate harmful content? If you’re making a profit from it, that’s part of your product and you should hold some liability. This wouldn’t absolve 3rd parties that post harmful content from also having liability because the courts regularly rule on splitting liability and deciding which people hold which portion of blame. Repealing 230 allows us to hold companies accountable for leaving harmful content up when they could have taken it down, it would not require the courts to fine them every time something bad happens- it would just give the courts the option to do so when it’s the right call. As of now, 230 is too strong of protection.
I know they can do it. We have real-time profanity filtering in video game chat, social media automatically detects faces for tagging, and I know that with a few automated solutions and a team of mods (paid and/or volunteer), an online platform can do a reasonable job. HN does a great job with surprisingly few mods. Reddit does it and has a pretty robust process for addressing harmful subreddits. YouTube polices non-advertiser-friendly content, and I know they could moderate more if required. Sure, some things slip through the cracks, but maybe a competent defense and a reasonable court can decide that a company isn’t liable for one-off failures, provided it has good processes in place to catch most of the issues. Like Ford isn’t held liable every time someone crashes a car and dies; they are only held liable when they are aware of their cars being dangerous and don't do anything about it- but after issuing a recall, the liability is back on the car owner for not getting it fixed. Anyway, the courts and lawyers are well equipped to decide questions of liability without Section 230.
I guess the other thing I want to come back around to, which the article didn’t really address, is that right now the protection seems unbalanced/one-sided because it protects platforms that choose not to moderate, but it doesn’t force them not to moderate. IMHO, if the Facebooks and Twitters are allowed to say “we don’t have the ability to police our content” then they shouldn’t be deplatforming controversial people like Alex Jones. It’s hypocritical to say that tech companies need protections and then be in favor of kicking people off. I’m not actually sad when someone like Jones or Trump gets the ban hammer. But I have a hard time justifying allowing companies to have moderation power and then allowing them to choose only to use it when a situation gets so bad that it affects their profit. Like, you can’t be watching a house burn while holding a fire hose and not expect me to be mad when you decide to water your garden instead. If you’re actually unable to moderate, you had better not be making headlines by moderating. Because it seems to me like you’re begging the question- oh, so you clearly can moderate some things, let’s figure out how much money you made while ignoring the harm you were causing up until that point.
What I’m actually in favor of by the way, is repealing the wholesale protection of Section 230 and replacing it with something that requires a reasonable level of moderation, or provides a little bit of protection for companies that have made a good attempt at moderation. That would be a much better incentive for companies than the incentive 230 currently provides.
It sounds like you're really just mad because the big companies don't moderate evenly enough for you.
> replacing it with something that requires a reasonable level of moderation, or provides a little bit of protection for companies that have made a good attempt at moderation. That would be a much better incentive for companies than the incentive 230 currently provides.
I'll bite. I think TW/FB have done a reasonable job of moderating. Where do you set the bar? Where should the politically appointed judge hearing the case set the bar? Your solution is just moving the problem directly into the sites of the political party in power.
> It sounds like you're really just mad because the big companies don't moderate evenly enough for you.
You're actually not too far off. I'm mad that companies are protected unevenly relative to normal people, which allows them to moderate unevenly and have no risk of downside resulting from their decisions. A law which protects a company when they do no moderation, but sets no restrictions on who they can moderate is lopsided. My personal viewpoint is in favor of having some standard of moderation, but the inconsistency is frustrating. I lean towards moderation, but I think we ought to be choosing between the options of "some moderation standard/required" or "no moderation allowed," and not the current option which privatizes the gains and forces losses onto the public. If Facebook decides not to moderate a hate group, they get ad revenue from all that traffic. But if that hate group, which Facebook is enabling, organizes an event where they march through a city and beat someone up, then I think that person should be able to sue Facebook (as well as pressing criminal charges and suing the people that performed the beating). The person that got beaten up would still have to demonstrate to the court that Facebook played some role in enabling the beating, even without Section 230.
So where would I set the bar? I would set it at the level where a company isn't at fault if their platform has the right kind or amount of moderation such that the bad event that happened was unusual or couldn't have been expected. If someone gets hurt because Facebook didn't police an openly Nazi group, they should be liable. If someone gets hurt because Facebook has decent moderation procedures in place but the Nazi group was sneaky and posed as a sports fan club and used coded messages, then I think Facebook would have a pretty easy defense even without Section 230.
And you make it seem like judges are wildly political and the legal system is unreliable. There are a few bad judges, but 99% of cases that would be brought in the absence of Section 230 are so open and shut that they would be settled before they went to court. And in the cases that went to court, most judges are good people and want to do the right thing, regardless of their political leanings. The political affiliation of a judge really only matters when an issue is close and tough to call. Let's not give wholesale protection to tech companies from all of the obvious and easily decidable cases where they should be held responsible just because we're worried about a few bad calls.
> So where would I set the bar? I would set it at the level where a company isn't at fault if their platform has the right kind or amount of moderation such that the bad event that happened was unusual or couldn't have been expected. If someone gets hurt because Facebook didn't police an openly Nazi group, they should be liable. If someone gets hurt because Facebook has decent moderation procedures in place but the Nazi group was sneaky and posed as a sports fan club and used coded messages, then I think Facebook would have a pretty easy defense even without Section 230.
I like this idea in theory. How does this work if the platform promises no moderation? Is that allowed? I'm thinking of a mastodon-type setup where an instance can choose not to do any type of filtering/moderation.
I just started heddit.com, I am one engineer, I have 2 million users, I am making 10 dollars a month from ads and losing 10 from hosting. How do I regulate my content to protect me when 230 is taken down?
Maybe the solution is partial exceptions to 230 for $10 billion+ corps? It seems a more antitrust-style approach would solve your concerns without removing protections for small upstarts.
I don't think a blanket repeal is going to help the proliferation of free speech, only the proliferation of private speech.
To me heddit.com looks like a parked domain that isn't serving any content, 3rd party or not, so isn't protected by Section 230 and doesn't need to be. Hypothetical examples aren't a great argument against real world situations that have already happened, but a fake example makes me suspect you aren't arguing in good faith.
This is a perfectly sound hypothetical. Do you only allow discussion with people who have experienced a direct hardship? Is that your line for moderation? I'm being antagonistic, but that's part of your 230 fix right? You want judges to draw the magical cutoff line.
So what do you do about companies that run on a shoestring budget? They can't play in this game?
No, but the previous comment looks just specific enough that it seems like they are trying to make an example look like it was a real life example (which would give it a lot of weight) when in reality it was a hypothetical example (which gives it less weight).
But to address the argument as a hypothetical, I don't think it's fully developed. Fledgling companies with small user bases have very little liability by virtue of their small communities. A platform with a small community can't be used to do harmful things like incite riots and encourage hate crimes unless it is targeting a specific demographic, and if you're targeting a specific demographic that is prone to doing harmful things, then you should be moderating from day 1. If you aren't targeting a dangerous demographic, you have to get pretty big before your platform becomes dangerous by size alone, and I think it's reasonable to expect companies to have some moderation figured out before their platform gets that big. A small budget also doesn't excuse founders/developers from knowing what their platform is being used for. Hypothetically, if your platform was small and you didn't realize you were catering to Nazis, then you are liable if those Nazis use your platform to organize an armed march/protest that turns into a riot and people get killed. Or at least I think you should be, and that's why I think Section 230 should be repealed and replaced with something in the middle: not blanket protection for companies that don't even try to moderate their content, but some protection as long as they are actually trying.
Also remember that not moderating your platform doesn't mean that bad things will definitely happen or that you will get sued. Section 230 only applies to civil cases anyway, and there must be some actual grievance before someone can sue you. The cops couldn't proactively shut someone down for lack of moderation. So even without Section 230, a company could roll the dice while on a shoestring budget if it decided it was worth delaying a moderation system in favor of a different feature. That's within their rights; they just ought to accept that they might get sued if something bad happens because they prioritized work on shiny features.
So I guess what I'm saying is yes: I don't think a shoestring budget is a reason to give a company protections. A new car company can't avoid a lawsuit over a known safety defect just by saying they didn't have the budget to fix it. A startup that can't afford to do something right either shouldn't be doing it or should accrue some risk.
This last summer, during the BLM protests, the local town FB group got flooded with people behaving badly, and eventually the moderators had to step in, effectively ban a bunch of people, and shut down discussion of the topic for a time.
This wasn't a group that catered to Nazis. It was a group that asked about the best pizza in town and pointed out when the high school was putting on a play. To say it caught the mods off-guard would be an understatement.
Mods had to step in and save the day, but it basically requires someone to watch the service 24/7 during heated times as the threads blow up quickly. They don't take days or weeks to warm up. It just takes an active community and some catalyst event.
This was effectively just a forum. It happened in a FB group, but could have just as easily been some local dude hosting a phpbb forum. Same thing.
So this is the thing I keep coming back to: why would anyone ever dip their toes into user-created content if 230 is changed or repealed? It's all risk at that point, especially for folks who aren't quitting their day job for it.
Or to put another way, with 230 as it is, Trump can't use twitter/fb/etc, but he can spin up his own phpbb and start his own community there. Why isn't that ok?
This is completely an argument in good faith, based on a readily accessible hypothetical that almost everyone on HN can put themselves in the place of. This isn't a court of law; I don't need to be personally affected in order to offer an opinion.
What part of being banned from Twitter prevents someone from being heard?
Also, who is getting banned for talking about the politicians? #2, #7, and #8 (obliquely) on trending right now are about the main politician who was banned...
Section 230 is not supposed to promote free speech. It is supposed to promote a sustainable market for Internet services at various levels. An unmoderated Internet is not sustainable because it is overrun by trolls and nazis, and various kinds of unlawful activities; some degree of moderation is needed, but without liability protection nobody would take on that risk.
Parler has not been kicked off the Internet; it has only been removed from app stores. It can be hosted as a website if it meets the demands of those tech companies -- which amount to doing a better job of preventing terrorists from using the app to organize a violent overthrow of the US government -- and users can just visit that website.
>But if a few tech monopolies can just kick a platform off the internet because they don't like its user generated content then what the hell is the point?
... and yet somehow user-generated pornographic content (c.f. OnlyFans), which is never ever going to be in anyone's app store, somehow exists and by all accounts is exceedingly popular.
The demand from Apple was that Parler start moderating the content that was trying to plan an armed version of what happened at the Capitol, this time for Jan 19th. Which, after seeing what happened recently, seems like a fair demand.
I continue to see plenty of right- and left-leaning views on YouTube, Twitter, and Facebook.
> Let me guess, you’re the type of person who will complain incessantly about the Koch brothers and money in politics and super PACs...
Well, no, not the way you mean.
That is, I think all those things are corrosive within the structure of the present US political system; OTOH, I also think that the ways people propose to constrain them tend to both have side effects as bad or worse than the problems they solve, and violate the First Amendment.
> and yet here you are gleeful about giving tech oligarchs the effective power to choose the next President.
I disagree that that is the result, and I'm not at all gleeful.
I checked, and even after the Twitter ban, President Trump can still post on the White House web site and on his own Trump brand web sites. He just can't use Twitter, Facebook, etc. right now.
This doesn’t seem that complicated to me. What’s all the I-am-smarter-than-you smugness about?
> Law text: “No provider or user of an interactive computer service shall be treated as the publisher.”
People who want Section 230 repealed or modified want to be able to make the legal argument that by actively banning certain speech while elevating other speech, social media companies ARE acting as publishers. The law doesn’t need to define publisher again. We already have an understanding of what publishers do: they selectively choose and publicize OTHER people’s content.
What am I missing? Whether or not it’s justified, the reason some people want it repealed seem obvious to me.
The reasons are obvious but the outcome is the important thing. Largely the sentiments behind wanting to repeal section 230 seem to be about the big platforms.
The point that often gets missed is that without 230 ONLY the big platforms could afford to exist (and probably not even them in their current form).
So while the goal to limit the power of the tech giants may be good, the proposed method will likely accomplish the opposite by removing all competition present and future.
You may be aware of this, if so consider this post to be for someone who isn't.
The outcome of the current situation is clear: people's minds are influenced by unchecked, politically slanted, big corporate entities under a guise of social interaction.
I struggle to see how additional measures aimed at said entities (which is the kind of amendment typically sought) would remove competition.
Perhaps we can keep 230, but add a sentence that it no longer applies once you reach a certain number of users. So the small guys are protected, but not the biggest players.
Or perhaps you lose the protection if you show advertising on your site.
Or perhaps make it so that it only applies to nonprofits.
The snark isn't directed so much at people who want repeal (though techdirt pretty clearly would disagree with them on that point also) but rather at the very large contingent of people who express with total conviction that the law right now draws lines between neutral "platforms" who are protected and biased "publishers" who are not, when the law in fact does the exact opposite.
Yes. That’s exactly the problem people who want it repealed are pointing out: the current law offers broad blanket protection to social media companies to moderate content however they want, even if their moderation practices are highly editorial and partisan, akin to traditional publishers.
Techdirt seems to think people don’t understand that the law doesn’t distinguish between platforms and publishers. On the contrary, that’s precisely why people want it repealed or modified. Either attach platform “neutrality” requirements to these broad protections, or eliminate the protections and allow social media companies to be subject to the old legal regime attached to publishers.
There are an enormous number of people who say things like, "by doing X, Twitter (replace as needed) is no longer a platform and loses its protection."
Not in the sense of, that's how it ought to be or that a change in the law will bring it about, but under a belief that that's how things already are.
The balance has shifted a bit as calls for its repeal have become louder, so "repeal 230" is now a much more common response to disliked moderation actions, but you still don't have to look far to find people saying that a ban or notice or whatever "violates section 230."
Sure, but are you comfortable with the fact that private companies with greater political reach than many nation states can arbitrarily censor speech and push their own politics?
I'm not, regardless of whether I agree with their politics or censorship.
I remember reading an article about a correlation between IQ and prejudice. It stuck with me because it seems prevalent, and it's likely the key ingredient in bending opinions via hypertargeting of people on social media, such as the supposed Russian involvement in Brexit. Propaganda becomes cost-effective when you can find all the people of a group on social media.
Interesting that the first couple of paragraphs talk about a meme, a viral idea he can assume we'll know of because we're all so connected online. Also refers to another popular online article in the same format he liked and adopted. Disconnecting someone from social media feels like it may only be helping their cause.
I think our social networks are too centralised, and that's a problem, particularly because people can be accurately targeted with misinformation and because the social media algos display things out of order (i.e. their order). Unfortunately, it's a truism that there are plenty of people who are not empirically sceptical of what they read online, and they're only too happy to share it.
As much as I disagree with Trump on pretty much everything, I think he has a point that social media acting as arbiters of truth is a problem. I'm also a Brit, so I say very little about another country's parties and policies. If only we could accurately measure the reasons people stormed Capitol Hill. My hunch is that information read online was the primary motivator, combined with the fact that information discovery is heavily entrenched in social media.
I would make platforms liable for their content (or at least place greater emphasis on them taking an active role), and yes, I appreciate that would make them untenable as they currently stand, but I'm sure there are plenty of strong counterpoints that would make me reconsider. At the very least, viral content's virality could perhaps be limited. I can't recall the last viral thing I saw that wasn't trivial but was positive. Social media has a lot of power at the moment.
> if you're in a real rush, just read Section (c)(1), which is only 26 words, and is the part that basically every single court decision (and there have been many) has relied on.
I'm in a rush, so no time to follow a link and scroll. Just put a citation of the law in that paragraph.
But it's also kind of funny that the resource does a great job explaining what 230 is not, while doing a terrible job of explaining what it is.
At this point, I have no idea what Section 230 is.
I’m not a lawyer and I don’t really care for the details of this. Hell, I barely understand how a bill is actually turned into law.
Though, I do know that the POTUS vs Twitter showdown today has put a bittersweet taste in my mouth.
I hope they pass a new law whatever it is.
I would even bet Jack is hoping they do, to take the pressure off. He's extremely measured every time I hear him speak, and I would bet he secretly wants a law explicitly forbidding him from being the content cop.
Section 230 protects websites from liability arising from content that a user created. It also protects users from liability arising from content another user created that they shared.
It protects sites from liability from moderating or not moderating content.
Some liabilities are not exempted. Federal criminal law, intellectual property, privacy, and sex trafficking law are not affected. Its effect on state law is complex (as described in the article).
What law would that be? Internet service providers must provide their service to everyone? Great, now it’s illegal to kick people off your minecraft server and the new york times MUST publish my op-ed. Twitter gave orange-julius cesar a platform, they can take it away.
There’s a general anxiety about a big company taking away a big chunk of your life on a whim. It’s not fun to be banned by Google or Amazon. I’m not doing anything wrong that would get me banned, but that kind of reasoning seems backwards somehow. (“Don’t break the law and you have nothing to worry about” gives off the same creepy vibe.)
It’s a similar feeling to driving next to a cop while doing nothing wrong, or dealing with a government bureaucracy. They have all (or a lot) of the power, and decision making is never perfect.
So perhaps this is an argument for stronger antitrust or whatever. More competition would reduce this anxiety for the regular person.
Driving in front of a cop is worrisome because of the imbalance of force. Section 230 specifically bars certain force from being applied— it doesn't add more to the equation. Repealing it would, in practice, require Google and Amazon to scrutinize the content they host for you. It would be like if Target could be sued for the conversation you're having with a friend while shopping. Like if you could be sued for someone else's conduct on the sidewalk in front of your house (for the sake of argument, "on your property, without protection").
The only outcome that can invite is more frequent "driving in front of a cop" moments, because companies would need their own cops, and need them everywhere.
I admittedly don’t know much about this space, but there must be a reason these solutions didn’t take off (I don’t know whether it’s fundamental scaling issues, lack of a centralized group driving product design, or just lack of user interest as existing solutions are “good enough” + network effects). But if the market is failing to select for these approaches it seems that the only way forward is to shape the market rules somewhat. Outright banning everything centralized seems heavy-handed to me, so it might be that the only practical outcome is less centralization we have now, but not none.
Decentralized solutions are hard, harder to monetize, complicated for users to deploy.
Historically, users don't care much about their privacy, are incapable of hosting themselves because their ISP sucks and they are technically illiterate, and have little chance of being banned anyway.
I am not a lawyer, but I think it depends on whether a Minecraft server is the kind of space covered by non-discrimination laws. That seems a bit dubious to me, as it feels more like a community gathering place (where you can have, e.g., women-only spaces), but I suspect it's never been tested.
He says multiple times what it is, doesn't he? It prevents services and users from being held responsible for content authored by others. And thus it was a reversal of the original Prodigy ruling.