DOJ unveils proposed Section 230 legislation (justice.gov)
395 points by carry_bit on Sept 23, 2020 | 373 comments



Here is the DoJ's breakdown and red-line changes:

1. Incentivizing Online Platforms to Address Illicit Content

The first category of potential reforms is aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230’s immunity for defamation.

a. Bad Samaritan Carve-Out. First, the Department proposes denying Section 230 immunity to truly bad actors. The title of Section 230’s immunity provision—“Protection for ‘Good Samaritan’ Blocking and Screening of Offensive Material”—makes clear that Section 230 immunity is meant to incentivize and protect responsible online platforms. It therefore makes little sense to immunize from civil liability an online platform that purposefully facilitates or solicits third-party content or activity that would violate federal criminal law.

b. Carve-Outs for Child Abuse, Terrorism, and Cyber-Stalking. Second, the Department proposes exempting from immunity specific categories of claims that address particularly egregious content, including (1) child exploitation and sexual abuse, (2) terrorism, and (3) cyber-stalking. These targeted carve-outs would halt the over-expansion of Section 230 immunity and enable victims to seek civil redress in causes of action far afield from the original purpose of the statute.

c. Case-Specific Carve-outs for Actual Knowledge or Court Judgments. Third, the Department supports reforms to make clear that Section 230 immunity does not apply in a specific case where a platform had actual knowledge or notice that the third party content at issue violated federal criminal law or where the platform was provided with a court judgment that content is unlawful in any respect.

2. Clarifying Federal Government Enforcement Capabilities to Address Unlawful Content

A second category of reforms would increase the ability of the government to protect citizens from harmful and illicit conduct. These reforms would make clear that the immunity provided by Section 230 does not apply to civil enforcement actions brought by the federal government. Civil enforcement by the federal government is an important complement to criminal prosecution.

3. Promoting Competition

A third reform proposal is to clarify that federal antitrust claims are not covered by Section 230 immunity. Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.

4. Promoting Open Discourse and Greater Transparency

A fourth category of potential reforms is intended to clarify the text and original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users.

a. Replace Vague Terminology in (c)(2). First, the Department supports replacing the vague catch-all “otherwise objectionable” language in Section 230(c)(2) with “unlawful” and “promotes terrorism.” This reform would focus the broad blanket immunity for content moderation decisions on the core objective of Section 230—to reduce online content harmful to children—while limiting a platform's ability to remove content arbitrarily or in ways inconsistent with its terms of service simply by deeming it “objectionable.”

b. Provide Definition of Good Faith. Second, the Department proposes adding a statutory definition of “good faith,” which would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation, unless such notice would impede law enforcement or risk imminent harm to others. Clarifying the meaning of "good faith" should encourage platforms to be more transparent and accountable to their users, rather than hide behind blanket Section 230 protections.

c. Explicitly Overrule Stratton Oakmont to Avoid Moderator’s Dilemma. Third, the Department proposes clarifying that a platform’s removal of content pursuant to Section 230(c)(2) or consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service.

and

https://www.justice.gov/file/1319331/download

I highly recommend looking at the redline. It's approachable, and doesn't fall into the interpretation biases of the reporter.


> It's approachable, and doesn't fall into the interpretation biases of the reporter.

I agree with this, but don't forget that interpretation biases will still come into play as the law is enforced — the biases of police, lawyers and judges. So it still makes sense to read others' interpretations of what this might mean in practice.


I think this is a systemic issue on hackernews: people forgetting reality exists, and always pointing at the rules and acting as if they're infallible. You'd think /hackers/ would understand that rules are for show.

I've seen this regarding serious legislation like this or even something as mundane as Apple's app store "guidelines".


> You'd think hackers would understand that rules are for show.

I may be misinterpreting your wording here (my apologies if I am), but I'd assume the opposite. I'd think people who operate in a realm where text becomes action executed by a machine designed wholly around faithful, reliable execution of text fed to it would come to learn the reality-defining power of rules.


Hackers are specifically interested in exploring/exploiting the way human nature makes all machines vulnerable, i.e., taking a reasonably well-defined system of rules built with good intentions and using them for something entirely unintended.

In this sense, law is similar to code, but far easier to exploit.


> In this sense, law is similar to code, but far easier to exploit.

I would disagree. I feel like programmers see logical contradictions or loopholes in laws and think that if they make the argument in court, the court will segfault and they will go free.

In reality, courts use more inductive reasoning than computers, and aren't as easily tricked.


Courts are easily tricked, the hacks needed are just different from what most programmers assume them to be. The whole job of lawyers is hacking the court to get a more favorable outcome. If this wasn't true it wouldn't matter which lawyer you have since the court would always interpret law in exactly the same way.


> If this wasn't true it wouldn't matter which lawyer you have since the court would always interpret law in exactly the same way

Have you ever litigated?

Litigation isn’t about “hacking” the court. If that’s how a lawyer is selling themselves, you’re being taken for a ride.

Cases are about resolving novel ambiguities in the law. The vast majority of disputes never make it to court. The two sides lawyer up and one of them is advised that based on the facts the precedent is in the opponent’s favor. As such, settlement is advisable. In a minority of cases, precedent is mixed or not applicable—the facts and circumstances are truly novel with respect to the law. Given the law is finite and reality is infinite, this happens more often than you’d think.

Lawyers thus argue how the law should be extended. Remember, case law is law in common law countries. Judges' opinions aren't interpretations per se, but acts of rule-making.


Courts are made up of humans. Good lawyers put time into jury and judge/jurisdiction selection. Good firms hire former court staffers that know the judges presiding over the case. It's not meant to be easy to game the courts because it's a privilege.


> In this sense, law is similar to code, but far easier to exploit.

I see way too few computer engineer criminal masterminds to accept this hypothesis at face value. ;)


They're just that good ;)


A better measuring stick might be lawyers and judges regularly applying the law in inconsistent ways.


The biggest thing I've learned from good lawyers is that with the law there's always an appeal, a different interpretation, and a challenge or request for exemption from a rule.

Applying to be a lawyer in and of itself can be a process of figuring out the correct paperwork to fill out and asking for a special exemption on a piece of missing information or a missed deadline.


Police discretion is a great example imho. Watch everywhere in your day to day as laws are applied based on a subjective real time evaluation of the situation.


And most cops barely even know the laws themselves. There have been multiple times I’ve heard cops incorrectly cite state laws.


Tax loopholes, crazy defense strategies, corporations doing blatantly illegal things (or things that should be illegal) and getting by on technicalities. All of them are essentially legal-system hacking.


> I'd think people who operate in a realm where text becomes action executed by a machine designed wholly around faithful, reliable execution of text fed to it would come to learn the reality-defining power of rules.

Ah, but don't forget how often the code that gets written doesn't do exactly what the writer expected! Or is exploited by another party...


Indeed. I wouldn't say the law is for show; I would say the written law isn't the whole story. Reality is a three-edged sword: the law, the intent, and the implementation.

"For show," to me, implies you can ignore it and charge forward, bull-in-a-china-shop-style. That doesn't work in law or computers; naive invalid input gets rejected by the first-stage parser, and a court complaint completely ignorant of the law can get tossed by the clerk before it even sees a judge's desk. Rather, hacking is understanding and exploiting the consequences of, and nuances within, the rules.


This is my first time seeing a Babylon 5 reference in the wild. Well said.


Username strangely appropriate


I can highly recommend the discography of They Might Be Giants. ;)


That is always a solid recommendation


Laws aren’t like code. Laws are like specifications...everyone has their own interpretation.


Yet code is meant to conform to specifications. Removing the specifications doesn't make the code any more clear.


I understand and agree with what you mean, but that's what I'd associate with Engineers. I was thinking Hackers understand that the code running on the machine can be exploited.


Hackers don't read news on Hacker News. Programmers do.

So, quite ironically, you are doing precisely what you are talking about when you take the words "hacker news" too literally.


Sure, I know. HN is about tech news, maybe with a startup bent. But, I thought what a lot of programmers would have in common were innovative or uncommon angles, "hacking" of a sort.

I don't think rulebook following coders are particularly interested in debating like this on HN.


I don't really have the pulse of programmers but I'd imagine this is precisely the sort of thing they'd prefer doing over hard work. It may change, but there is still quite a bit of DIY culture in programming for better and worse. Just look at all the daily new syntactic sugar you can find. What are those if not innovative angles to get things to work more literally?


We're acutely aware, I think, of the vast gap between computer interpreted rules and human interpreted ones. The computer's interpretation may be hard to fathom but is ultimately rigorous. Human interpreted rules are basically vacuous: a human can make anything mean anything, no matter how rigorous it may seem.

All it requires is for other humans to accept your interpretation. That depends on the text, your position in society, and that of the other person. The text will convince a few people when the difference in power is small. When the difference in position is large, no text will support the person in the subordinate position.

Lawyers pretend that this is not the case, and that they have mastered objective interpretation of the rules. It is unclear whether they say that because they are stupid, or because they think you are stupid. As programmers, who really are trained in objective interpretation of rules, it's laughable either way.


I believe you are underestimating the work lawyers do.

I try to avoid the error of confusing my lack of knowledge of the grammar of a system with the system itself being stupid. Law and its application have flaws, but it's not an "anything goes" system as you seem to be describing it here.


Can't social engineer without understanding how human agents differ from mechanical ones. Can't hack without understanding the difference between specification and implementation. To know C is to understand the true meaning of "undefined behavior."

From my social experience in high school, the board game rules nerds were more likely to become programmers than lawyers. The worst of them went on to be a pro poker player.


Riffing on the Principle of Charity, Uncle Bob Martin's grumpiness, Mark Twain's many observations, and William Gibson's insight about the uneven distribution of the future:

The cone of ignorance expands farther and faster than the cone of understanding.

Unlearning is even harder than learning, which contributes to the durability of ignorance.

"You'd think /hackers/ would understand that rules are for show."

Ya. I'd like someone to explain cognitive certainty. The opposite of "strong opinions, loosely held". How is everyone else so sure of what they know? The only certainty I have is knowing that I'm probably wrong.


People aren't forgetting reality exists; they know full well how the rules will actually be enforced, but also know that the only morally defensible position to take is the rules as written.


> /hackers/

When man found Truth, a worried demon went to the Devil who nonchalantly said; "Eh, I'll get them to institutionalize it".


Hacker news is a very big site. It’s unreasonable to expect that the average commenter is an upper-percentile person in any dimension.


Hackernews users take the principle of charity too far. I have seen too many people here take Trump in good faith, for example. When you give authoritarians the benefit of the doubt, they'll take it and RUN.


> Replace Vague Terminology in (c)(2). First, the Department supports replacing the vague catch-all "otherwise objectionable" language...

This disallows the common practice of open-ended moderation criteria such as "Be kind. Don't be snarky". Proposed section (c)(1)(b) removes the safe harbor except for moderation criteria on the list in proposed (c)(2)(a), which is: "obscene, lewd, lascivious, filthy, excessively violent, promoting terrorism or violent extremism, harassing, promoting self-harm, or unlawful"

> Provide Definition of Good Faith. Second, the Department proposes adding a statutory definition of "good faith" which would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation...

Open-ended moderation criteria such as "Be kind. Don't be snarky" are at risk of not being judged "particular" enough, which is a requirement of proposed section (g)(5)(a), meaning that sites with open-ended criteria could be judged as "not in good faith" according to this definition, and lose their safe harbor.


Isn't this the point of the legislation?

If you want to be deemed a public-square type of space, that is fine, but you don't then get to impose arbitrary rules about what is said; that would mean it is not a public place but a private space, editorialized by your private rulings on what speech should be allowed.

This is a very good move and clarifies everything for everyone.


That has nothing to do with Section 230 protection. Nor am I aware of any large Internet site that has ever pretended to be a public square (beyond Reddit, who announced the change).

I don't see why this is a good move, other than opening up a bunch of sites to liability. And it's going to make the ToS even longer so that they can make it "particular". Who wins here exactly?


Oof. That's a good source.

Section 5c is particularly problematic. It disallows shadow-bans if I'm reading correctly.

Section f9 is...suspicious.

Section d4 seems painful for small providers. You lose good samaritan status if there isn't a good way to contact you.

Section c2A is the important one. It basically means that a site cannot remove content unless it is objectively obscene or extreme. Section g5A suggests that you can have a ToS that explains your moderation decisions. But c2A seems to say that you aren't a good faith actor unless you use the objectively reasonable moderation standards defined by the law.

It's not clear that this would even have an effect since clearly you can moderate based on your ToS. A sports site can remove non-sports content even if it isn't obscene.

It seems like the goal would be to say that content moderators are biased in some way and drop Section 230 protection based on that pretext, but that'd require a court to find that the moderation is being done inconsistently, and inconsistently on purpose. I have a feeling there'll be difficulty proving that.


Why do you say "clearly you can moderate based on your ToS"? I don't see anything here that would permit the ToS to have moderation criteria such as "off-topic" while retaining the safe harbor. Section c1b is what leads to the conclusion that only the criteria in c2a are allowed, and section c1b doesn't mention "good faith" at all. Furthermore, for many sites such as Hacker News which define their topical focus broadly, "off-topic" could be judged to lack the "particularity" necessary to qualify as good-faith moderation criteria under section g5A.


You're right, I'm making the assumption that the law isn't completely nonsense and banning any form of moderation whatsoever.

I could be wrong.


"objectively obscene or extreme" - is there a clear definition for obscenity? I feel like there was a famous quote from a judge on the SCOTUS to effect of "I'll know it when I see it", but law is not an area that I have expertise in.


It’s evolved over time. The basis was content which was deemed to “deprave and corrupt those whose minds are open to such immoral influences". The next step was to remove protection status only for speech which is “patently offensive" and "utterly without redeeming social value".

Maybe it’s time for another step.

https://en.m.wikipedia.org/wiki/Roth_v._United_States


I agree; based on legal precedent ("know it when I see it"), it seems it would be very challenging to enforce an objective definition of obscenity.


It seems that under this reform, to maintain the Section 230 safe harbor, you would either have a clear and consistently enforced ToS or, if not relying on a ToS, remove only material that is objectively obscene or extreme.


This seems like a fantastic reform. It may lead to Twitter finally applying the same moderation standards to Trump, i.e., banning his account.


Slim chances. My source close to Twitter's firehose tells me Trump's tweets and retweets are worth a few million dollars in ad display daily. If they were to ban him, a stockholder lawsuit would soon follow, arguing that Twitter is willfully throwing revenue away.


Notably, there is no carve out for factually untrue information. So you can't remove posts claiming that silver solution will cure Covid...


But isn't that covered in the new last section 'good faith'?

Where, so long as it's clearly defined in the ToS and moderation policy, you can remove it. E.g., Facebook has a clearer public policy that says no false claims that can cause harm.

I'm 100% skeptical of this administration and the (run-on rant ahead) insanely religious zealot by the name of Barr, who is terrifying in trying to force his crazy beliefs on all of us whilst creating a monolithic fascist-esque unitary executive whose whim (or that of whoever has his ear last...) controls virtually all aspects of our lives...

but this seems reasonable, and it seems like it would answer a lot of the problems we see complained about on HN all the time?

Right now one of the top posts is about a YouTube takedown with no info/recourse.


I don’t understand why Barr is hated by so many. I assume purely partisan reasons? He is very well spoken, principled, and in retrospect his summary of the Mueller Report was completely accurate (despite news articles I read at the time that had me convinced he was some kind of evil mastermind).

The only thing I don't like about Barr is that it's hard to see him curtailing the surveillance state, but frankly that applies to almost everyone in politics, since those who speak out against the CIA/NSA tend to get into unfortunate high-speed car accidents.


Yeah, somehow I doubt you're actually asking in good faith.

Barr himself is horrifically partisan, which, as the AG of your entire country, is scary as all hell. Just read his speech in November 2019 to The Federalist Society (which is itself horrifying; the society's goals are simply at odds with what the USA has claimed to be).

It's as partisan as it gets.

Or is that okay, because it's in line with your own opinions?


Beyond partisan. He is a truly devout Catholic, driven by religious doctrine. He argues that religion is necessary in American democracy and that the decline of religion has caused what he views as a morally corrupt welfare state. His views are truly insane, far beyond Republican politics.

He thinks that religion is needed side by side with our individual-liberty style of government.

Read this whole speech. It's truly insane. I can't believe someone so intelligent (and effective), who was given so much power by Trump, holds these views. It's truly scary.

"But today – in the face of all the increasing pathologies – instead of addressing the underlying cause, we have the State in the role of alleviator of bad consequences. We call on the State to mitigate the social costs of personal misconduct and irresponsibility.

So the reaction to growing illegitimacy is not sexual responsibility, but abortion.

The reaction to drug addiction is safe injection sites.

The solution to the breakdown of the family is for the State to set itself up as the ersatz husband for single mothers and the ersatz father to their children.

The call comes for more and more social programs to deal with the wreckage. While we think we are solving problems, we are underwriting them.

We start with an untrammeled freedom and we end up as dependents of a coercive state on which we depend."

https://www.justice.gov/opa/speech/attorney-general-william-...


d4 could be satisfied by setting up an email address to report complaints to. Doesn't seem especially onerous.

f9 is suspicious how? This document is not intended in any way to prevent anti-trust actions, so why is a specific exclusion suspicious? There's a lot of public push right now to act on anti-trust issues for large tech companies, so there's no secret there.

5c... is more complex, and I'm not sure it means what you think it does.

c2A does seem particularly problematic to me in the context of trying to combat disinformation which is neither illegal nor obscene, nor violent.


>Section 5c is particularly problematic. It disallows shadow-bans if I'm reading correctly.

Can you explain more, I'm not really seeing it...

Also, how is disallowing shadowbanning problematic? I never liked it.


As a user I don't like it but as an app/website provider, it significantly delays spammers, giving them the illusion that what they're doing is working so they should just maintain the status quo.

If instead you outright ban them, they adjust or learn about your anti-spam algorithms and just keep going.


Wonder if one answer would be to move potential spam, etc. to a separate-but-equal area. So if you're in /r/Politics and start advertising your get-rich-quick scheme your post is moved to /r/PoliticsSandbox.


That removes the "shadow," unless you show only the submitter that the content is still in the intended space.


What's the legitimate government interest in disallowing shadowbanning?


Well "legitimate" is open for interpretation.

I'd say this administration has resonated with its base by treating bans as politically-motivated.


They are politically motivated. For example, I’ve been banned from Twitter for defending Kyle Rittenhouse, who engaged in lawful self defense against a number of violent assailants. Why? Because the left has decided it serves their interests to paint him as a mass murderer (indeed Facebook has an official policy that he is a mass murderer).

Facebook also regularly censors viral pro-trump content and project veritas has exposed this.

I wish it weren’t so, but it is. I’ve experienced it firsthand.


I've seen a lot of retellings of why someone was banned that end up being a bit generous, particularly if experienced first hand.

It's tough to take these at face value because we trust and believe in the things that define our ideology and take any challenge to our presentation of that as an attack on the underlying ideology.


I'm guessing that's because "viral pro-Trump content" is very frequently xenophobic and racist content, includes calls to violence, and is full of conspiracy theories. If a site doesn't moderate that sort of stuff, it frequently goes to shit; I know inadequate moderation on that front has made Reddit much, much worse.

Also, this sort of stuff can lead to real-world violence, like when Facebook let violent vigilantes organize on their site, which led to Kyle Rittenhouse killing two people.


"To promote free and open discourse online and encourage greater transparency between platforms and users." That seems legitimate to me; the mechanisms of democratic government rely on free speech and the marketplace of ideas existing in practice as well as in theory. If undesirables were not only kept out of the town square, but kept out by a secret system that not even they were told about, that would be obviously damaging to democracy; yet that's what's happening online.


The "undesirables" can make their own websites, which are totally protected from the government by the First Amendment. The owners of the shadow-banning websites should also have the right to decide what thoughts are exposed on their website.

I get to take down any flyers on my fence, even if I leave up the ones I like. Or I can vandalize the ones I dislike, maybe entirely changing their message. My fence.


> The "undesirables" can make their own websites, which are totally protected from the government by the First Amendment.

No they're not; the government has already shown itself able and willing to do an end-run around the First Amendment by e.g. pressurising payment processors to refuse to do business with such websites.

> The owners of the shadow-banning websites should also have the right to decide what thoughts are exposed on their website.

> I get to take down any flyers on my fence, even if I leave up the ones I like. Or I can vandalize the ones I dislike, maybe entirely changing their message. My fence.

If you're hosting a private website you can privately decide what goes on it. If you're holding yourself out as a public communications provider and want the benefits of section 230, you're being granted special privileges by society and you need to hold up your end of the bargain by hosting the kind of uncomfortable discussion that society needs.


>No they're not; the government has already shown itself able and willing to do an end-run around the First Amendment by e.g. pressurising payment processors to refuse to do business with such websites.

I cannot see how a change to a law that's proposed by the enforcement arm of government will actually protect people from abuses done by that same enforcement arm. If they act in bad faith now, why assume good faith will follow?

>you need to hold up your end of the bargain by hosting the kind of uncomfortable discussion that society needs.

I don't think Amazon's user reviews are where uncomfortable discussions need to happen. But I do think the government demanding what should be discussed on servers owned by private citizens is a clear violation of the First Amendment.


> I cannot see how a change to a law that's proposed by the enforcement arm of government will actually protect people from abuses done by that same enforcement arm. If they act in bad faith now, why assume good faith will follow?

This seems like an argument that no government will ever reduce its own power, and so constitutional protections, due process etc. are all pointless. Even a single arm of the government is far from a monolith, and the boundaries of what government should and shouldn't do are always evolving. I'm sure this proposal isn't coming purely from the good of this administration's heart; part of it is public pressure, and part of it is the consideration that they may no longer be in power come November. But realpolitik is always a factor; good laws are still good laws.

> I don't think Amazon's user reviews are where uncomfortable discussions need to happen.

I'm sure a lot of vital IRL political discussion happens in the checkout queue at the supermarket (I'm sure that sounds like a joke, but I'm completely serious). As life moves online, we need a corresponding public sphere.

> But I do think the government demanding what should be discussed on servers owned by private citizens is a clear violation of the First Amendment.

Private citizens acting in their capacity as private citizens are still free to discuss whatever they like, or ban whatever discussion they like. If you want to have a capriciously moderated website that's fine, but such a website will not and cannot be a Section 230 public communications provider.


Shadowbans make it harder for users who have been affected to report these outcomes.


You could break up moderation issues into 2 types:

Type 1: Errors, ignorance, human foibles. Primarily annoying/asshole behavior.

Type 2: Malice and malformed content. Spam, propaganda, trolling.

I can argue against shadow bans for the first type. For the second type, any evidence of your operational method is a data point to break the moderation barrier and infect/manipulate users.

If a spammer knows it's been banned, it switches over to another account. If a troll knows where your ban lines are, they come back and stay exactly on the edge to trigger someone or make them fall over the line.


> This reform would focus the broad blanket immunity for content moderation decisions on the core objective of Section 230—to reduce online content harmful to children—while limiting a platform's ability to remove content arbitrarily or in ways inconsistent with its terms of service simply by deeming it “objectionable.”

Holy shit I did not expect this government to get something so nuanced and difficult so perfectly right.

Also, the clarification of the "unlawful" and "promotes terrorism" language is a nice touch.

Kudos.

Thanks for the write up btw!


> b. Carve-Outs for Child Abuse, Terrorism, and Cyber-Stalking. Second, the Department proposes exempting from immunity specific categories of claims that address particularly egregious content, including (1) child exploitation and sexual abuse, (2) terrorism, and (3) cyber-stalking. These targeted carve-outs would halt the over-expansion of Section 230 immunity and enable victims to seek civil redress in causes of action far afield from the original purpose of the statute.

Can we please just do this one on its own either way? This has been a real problem online with companies like Cloudflare offering hosting to websites engaging in these areas. These three are explicitly illegal and yet sites that harbor this content, especially cyber-stalking sites like Kiwi Farms (47 U.S.C. § 223), are still somehow online.


Conversely, this is the one section that I don’t want added. The “Bad Samaritan carve-out” seems to cover cases where the platform is aware of the content pretty well, and I’d support adding that. This is about making them liable even when they aren’t aware of it, which is problematic for any site that handles anything user-generated, especially if you don’t have an army of moderators like Facebook does.


> This is about making them liable even when they aren’t aware of it…

There are two sides to this though. If you make a carve-out for ignorance you incentivize ignorance.

I think the argument is that if you’re not able to moderate your user-generated content at the most basic levels like running image hashes against the CP database then you shouldn’t be hosting it.
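For illustration, here's a minimal sketch of what that kind of hash matching could look like, assuming a hypothetical exact-match blocklist of SHA-256 digests (real systems like Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, but the workflow is similar):

    import hashlib

    # Hypothetical blocklist of hex digests of known illegal images,
    # e.g. as distributed to providers by a clearinghouse like NCMEC.
    # Exact-match SHA-256 is used here only to keep the sketch short.
    KNOWN_BAD_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def is_known_bad(image_bytes: bytes) -> bool:
        """Return True if the upload matches the blocklist."""
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

    def handle_upload(image_bytes: bytes) -> None:
        if is_known_bad(image_bytes):
            # Reject (and, in practice, report) rather than host it.
            raise ValueError("upload rejected: matches known-bad hash list")
        # ...otherwise store the image as usual...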


> I think the argument is that if you’re not able to moderate your user-generated content at the most basic levels like running image hashes against the CP database then you shouldn’t be hosting it.

Then surely some minimum level of CP detection should be part of this section, right? If the requirements here are not defined well enough, then any company, from the smallest startup to a behemoth like FB, could be liable for some CP shared through the platform in a novel way that would have been impossible to detect.


As you mentioned, there are ways to detect some pre-existing illegal content but having a notification mechanism in place can take care of the knowledge part.

Force providers to have a reporting system that feeds back a unique case code that can be quoted as evidence of knowledge. Then they have x days to investigate and respond.
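A minimal sketch of what that could look like, with a hypothetical file_report helper and an in-memory dict standing in for a real ticketing system:

    import uuid
    from datetime import datetime, timezone

    # Hypothetical in-memory store; a real provider would persist these
    # so the case code can later serve as evidence of actual notice.
    REPORTS: dict[str, dict] = {}

    def file_report(content_url: str, reason: str) -> str:
        """Record an abuse report and return a unique case code."""
        case_code = uuid.uuid4().hex
        REPORTS[case_code] = {
            "content_url": content_url,
            "reason": reason,
            "received_at": datetime.now(timezone.utc).isoformat(),
            "status": "open",  # the x-day response clock starts here
        }
        return case_code

The complainant keeps the case code, and the provider can no longer claim it lacked knowledge of that specific content once a code has been issued.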


Maybe needing an army of moderators is just the cost of doing business when it comes to hosting user content. I'm wary of that change being made, but really, I don't see a viable alternative, or a good argument against it. If you can't moderate content at some scale, then maybe you just shouldn't host content at that scale.

I think there may be a light, or a deeper darkness, that comes out of this though. We may see a lot of investment in automation for catching this kind of content.


So you'd be okay with HN shutting down?

I'm not involved with HN, but it seems likely that many smaller venues, especially ones that aren't big money makers, including most mailing lists and small sites like HN, would be advised to discontinue operating if exposed to this kind of extraordinary liability over content of which they had no knowledge.

"Actual knowledge" should be your preferred approach to your concern... but nothing will probably solve your problem because extremely well funded platforms are substantially immune to the law in any case, just as they're immune to common decency.


Wait, hacker news is moderated quite actively and skillfully. Not trying to be a smart ass, I just don’t know what point you’re making.


And instantly every chat and email service without millions of dollars to throw at content filtering is forced to shut down.

No, they’re not social network type services like Facebook or Twitter, but... section 230 doesn’t discriminate between types of online services!


Section 230 has been used to defend a library when a child used its computers to access pornography.

It's not all chatrooms and social media. Restricting the internet to be run by people able to manage their own websites would hurt. eBay would have to manually review every account and listing. Good luck finding a user review website like Rotten Tomatoes. No more Straw Polls. No more GitHub.


> Restricting the internet to be run by people able to manage their own websites would hurt.

It goes deeper than that, I think. Do VM hosting companies also rely on Section 230 immunity? Do ISPs? (They’re not Title II common carriers anymore!) Would providers of this nature be required to monitor what their users do as well?

I don’t believe either of these have been tested in court, but I think there’s at least the potential here to make it much, much harder to manage a website.


So the big platforms get to have an even larger moat. Surely you can see an issue with that?

And automated systems have their own huge problems.


I think maybe you got the wrong lesson out of reading 1984?


I think conjuring that book is hyperbole. I don't think making people responsible for the legality of content they host is some crazy oppressive overreach of government power. The laws about what content is legal or illegal are pre-existing, these rules just shift the burden of enforcement in recognition that the government being the sole source of rule on massive distributed platforms isn't feasible. We tried, we failed. The bad guys ruined a good thing like usual.


The laws about what actions are legal or illegal are pre-existing, the telescreens and mandatory neighbor reporting just shift the burden of enforcement in recognition that the government can't effectively police the distributed masses.

Fixed that for you.


Thanks, that seems like the point I was trying to get across.


Moderation is the secret sauce of user-generated content. You shouldn't allow for user generated content if you aren't prepared to moderate it. It seems many people still don't understand that this is the biggest problem facing social media. Not scaling, not engagement, but moderation.


If we started with this approach from the beginning, we wouldn't even have social media or sites like Wikipedia. Enshrining it in law just means only the existing players can play the game.


How so? Wikipedia is a bad example, as it is probably one of the most heavily moderated sites on the web. Social media is still 100% possible; it'll just be impossible to scale overnight, as you'll have to scale moderation efforts with your userbase.


Wikipedia is heavily moderated, but primarily by volunteers. Do you think that would be sufficient if the Wikimedia Foundation was liable for anything bad that users might post? That’s putting a lot of trust in your community, and the consequences of volunteer moderators slipping up or missing something are pretty big.


1b seems problematic and unnecessary in light of the other things.


This is pretty much in line with how I would want to reform section 230.


>Bad Samaritan carve-out

You mean...Samaritan? The Samaritans were the Nazis of their day and the legend is about a Samaritan that rose above his race and did what was good - thus, the one Good Samaritan.

You wouldn’t say Bad Nazi. You just say Nazi and Good Nazi.


That is not what the parable of the Samaritan was implying, nor the context of it. Samaritans were and are an ethnic group that historically has had friction with other Semitic groups (notably, Jews). They were not in any way "Nazis of their day," and the parable was not about that particular Samaritan being better than other Samaritans, but about morality being a stronger tie than culture.

https://en.wikipedia.org/wiki/Samaritans

https://en.wikipedia.org/wiki/Parable_of_the_Good_Samaritan


> that rose above his race and did what was good

Someone already corrected you on facts, but I just want to say that the premise of "rising above his race" is nonsense to me. Why would you think this way?


I don't care for most of it except this:

>> Carve-Out for Actors Who Purposefully Blind Themselves and Law Enforcement to Illicit Material

>>...it makes little sense to apply “Good Samaritan” immunity to a provider that intentionally designs or operates its services in a way that impairs its ability to identify criminal activity occurring on (or through) its services, or to produce relevant information to government authorities lawfully seeking to enforce criminal laws. A Good Samaritan is not someone who buries his or her head in the sand, or, worse, blinds others who want to help.

>>One important way to confront the grave and worsening problem of illicit and unlawful material on the internet is to ensure that providers do not design or operate their systems in any manner that results in an inability to identify or access most (if not all) unlawful content. Such designs and operation put our society at risk by: (1) severely eroding a company’s ability to detect and respond to illegal content and activity; (2) preventing or seriously inhibiting the timely identification of offenders, as well as the identification and rescue of victims; (3) impeding law enforcement’s ability to investigate and prosecute serious crimes; (4) and depriving victims of the evidence necessary to bring private civil cases directly against perpetrators.

>>We propose making clear that, in order to enjoy the broad immunity of Section 230, an internet platform must respect public safety by ensuring its ability to identify unlawful content or activity occurring on its services. Further, the provider must maintain the ability to assist government authorities to obtain content (i.e., evidence) in a comprehensible, readable, and usable format pursuant to court authorization (or any other lawful basis).

Is this the end of online privacy as we know it?

From here: https://www.justice.gov/file/1286331/download


Wow, that seems to be written with Apple (at least) in mind, and is terrifying.

I want operators to intentionally design and operate their services in a way that impairs their ability to identify any activity, because any exceptions are exploits. Just saying "criminal activity" doesn't make it not spying on Americans.


If it truly can't identify activity, then does it need Section 230 in the first place? The immunity granted by Section 230 is for people who want the ability to moderate content; if you are merely a dumb pass-through for a bunch of encrypted information, you aren't using it and are more of a common carrier, right?


Exactly. They are also basically saying you don't need to moderate most illicit activity (if it's an unmoderated format), but you essentially can't operate such a platform in a form where it is "unmoderatable"/untraceable, because you should be liable for any unlawful content that goes up, and for the fact that you facilitated it.


The law provides them immunity from civil suits. Apple as a provider can say they comply with section 230 and avoid any legal liability from people that may be harmed by their actions. In other words, you can't sue Apple because someone used iCloud to distribute child porn of you.

The essential bargain struck for 230 was that providers get immunity in exchange for policing their system and helping law enforcement. Apple et al removing their ability to police their systems breaks that bargain.


Ok, so I was under the impression that if you are, say, a telephone network, people can't sue you for stuff people do during phone calls, under the premise that the phone company doesn't moderate phone calls. My understanding of the "bargain struck" is that online web forums felt that centralized moderation was absolutely critical to spam prevention and search/discovery services (I disagree on the practicalities here, but will admit that with 90s-era tech, and under the premise of that specific model of website, this seems reasonable to have believed and legislated for), and so wanted the ability to "have their cake and eat it too": to be able to decide almost on a whim to take down content--and thereby to have people actively looking at content to decide what to take down--but not to be responsible for those editorial decisions, since they wanted to do their supposedly-required moderation tasks with limited staff and "after the fact". So, if they "decide" (quite likely by negligence) to leave something up that they should not have published, Section 230 grants them immunity, with some provisions. It doesn't seem like a truly end-to-end encrypted service--such as iMessage or iCloud Keychain (but absolutely not iCloud Backup, which has a key-escrow backdoor: it should be end-to-end encrypted, but explicitly isn't)--needs this immunity, right?


That impression is correct; the precedent is here (https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.).

A lot of people don't really seem to understand that about S. 230: services that exist basically to pass messages have never needed the protection, and that won't change. The best example outside of messaging services is probably content delivery networks, and I'll pick on CloudFlare because they're a great example.

As long as CloudFlare is fine with just taking money and serving content, no matter whose it is, they have the legal immunity as intended. As soon as they decide (on a whim) to deny service based on content, as they did a while back, that legal immunity vanishes. It would be catastrophic to their business to make that decision (especially considering everything else CloudFlare hosts), and that's the point.

It's also very interesting considering that the changes also appear to affect Patreon (and similar) and, perhaps a bigger deal, PayPal, since it's an interactive computer service used to fund the creation of information (among other things). In fact, I think that's an even bigger deal, because it's not necessarily traditional platforms banning certain people that causes them to be denied a platform these days; it's their inability to get funding in a way that isn't "send cash to this PO Box".


PayPal was already screwed in the moderation department because there is no equivalent of common carrier for financial services. In fact, operating under the common carrier model as a financial service is strictly illegal. You have to vet your customers and perform best effort blocking of sanctioned individuals.


> As long as CloudFlare is fine with just taking money and serving content, no matter whose it is, they have the legal immunity as intended. As soon as they decide (on a whim) to deny service based on content, as they did a while back, that legal immunity vanishes. It would be catastrophic to their business to make that decision (especially considering everything else CloudFlare hosts), and that's the point.

Except that they already do moderate based on content (malware, CP, etc) and don't have this immunity. The only reason they haven't been sued is because no lawyer or legal team is going to take on 8chan or random malware websites, and state-level DOJs don't have bad actors knocking on their door with the public's support behind them.


Does CF actively look for malware, CP etc to throw them out, or are they responding to law enforcement in that regard?

My understanding is that the hurdle to get something booted from CF is extremely high, which is why it's very well liked both by copyright infringers (torrents) and by general scammers trying to hide their server locations.


They do periodically check Google Safe Browsing (or simply have a webhook for Safe Browsing) and disable websites that are flagged there. You can also submit the abuse report form, and CF regularly disables websites for malware that hasn't been detected by Safe Browsing.
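For the curious, that kind of check can go through the Safe Browsing Lookup API (v4). A rough sketch based on the public docs, with a placeholder API key, so treat the details as approximate:

    import requests  # third-party: pip install requests

    API_KEY = "YOUR_API_KEY"  # placeholder
    LOOKUP_URL = ("https://safebrowsing.googleapis.com/v4/"
                  f"threatMatches:find?key={API_KEY}")

    def url_is_flagged(url: str) -> bool:
        """Return True if Safe Browsing reports a match for the URL."""
        body = {
            "client": {"clientId": "example-cdn", "clientVersion": "1.0"},
            "threatInfo": {
                "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
                "platformTypes": ["ANY_PLATFORM"],
                "threatEntryTypes": ["URL"],
                "threatEntries": [{"url": url}],
            },
        }
        resp = requests.post(LOOKUP_URL, json=body, timeout=10)
        resp.raise_for_status()
        # An empty response body means no matches were found.
        return bool(resp.json().get("matches"))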


> [iCloud Backup] should be end-to-end encrypted, but explicitly isn't

Just keep in mind that, when you say it should be, what you're saying is that Apple should make iCloud Backup work in such a way that, if the user loses the keys, Apple is completely unable to help them recover their data.

And, I don't agree. I can support it as an opt-in feature, but wouldn't use it; I expect sensitive applications, such as Signal, to encrypt their backups as indeed they do, and would prefer as a last resort to retain my precious exocortex, even if I lose my entropy.

And if I wouldn't use it, most people wouldn't, and Apple is right not to do it that way.


Fun story bro, but in the real world Apple agreed with my position and only didn't go through with it because of the FBI.

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

The reality is that there are tons of obvious alternative solutions to user secure key escrow other than "lose access to your data".


Ok, point taken, TIL.

Serves me right for not checking the user name before replying...


They may or may not. A victim of child pornography can argue that Apple is liable because they provide a secure platform to distribute the images. Or maybe a victim of a crime might sue because Apple refused to help. Section 230 prevented those types of lawsuits. Without it, who knows?


You nailed it.


> and so wanted the ability to "have their cake and eat it too": to be able to decide almost on a whim to take down content

Not really. Sure, the telcos can't censor individual phone calls, but if they don't like what you are doing they will disconnect you. For example, my telco explicitly prohibits me from sending spam over SMS. If I break that, they disconnect me. It costs me money to reconnect, which is a huge disincentive to spammers, but more than that, they insist on knowing who I am; I have a "true name", in other words. So not only can they disconnect me, they can ensure I never reconnect.

Two differences with an internet bulletin board are that the users are anonymous and posting is free. Thus the owners can never effectively ban people who deliberately set out to harm their business; they can only deal with the posts as they occur. Or to put it another way: until you read his post, you can never know if a new user is a spammer, and it costs him nothing to post his spam.

The difference between a telco and an internet web site is really competition, and it's crucial. It costs billions (trillions?) to set up a telco. Worse, there is typically no local competition for landline service (and really just a handful of mobile providers too). So if your local landline provider bans you, it's as if you've lost your town water supply, or your electricity connection. In that environment, a telco shutting you down is indeed total censorship.

That's not true for the internet. There are literally thousands if not millions of outlets. If one does not like your post, another one surely will. Since there is no monopoly, you are always able to publish somewhere, so no one person or company can shut you up. In fact, it's not difficult to build your own publishing forum and it costs peanuts, so in fact _no one_ except the government can prevent you from having your say.

230 struck a new bargain for this new environment. What arose under its protection was curated forums targeting particular consumers. Stuff the consumers didn't like (such as spam, or bullshit, or leftist or abusive content) is removed. Because the cost of creating a forum is so cheap, the competition for eyeballs is fierce, and what ends up dictating the success of these forums is not whether they agree with posts or their political views; it is how well their curation matches their target market. This equation has been spelt out here on HN time and time again, with comments like "Twitter must have judged the cost of letting Trump post bullshit higher than the readership it gained".

So, 230 solved the "free speech" problem using capitalist competition in its purest form, where "pure" means near-zero friction and a market with near-perfect information flow between participants. It seems to work pretty well to me.

It's pretty clear these amendments, g.5(c) in particular, will destroy that balance.


I thought the only reason the telephone company bans you for SMS spam is because of an FCC rule that says they have to do that; very different from them wanting to, or taking any agency in the idea.


Except Apple doesn't encrypt iCloud-uploaded content with client-side-stored encryption keys. It has been and will continue to be subpoena-able. Section 230 concerns internet services, not all technology.


The question is where do you draw the line. Apple consciously decides not to collect or track certain things about users. Yet that data could also reveal illicit activity, which means Apple is knowingly turning a blind eye to illicit activity.

Further, any data about users could potentially uncover illicit activity, meaning a provider could essentially be required to track everything possible in order to avoid ignoring potential illicit content.


I wonder if it would be applicable to end-to-end "messengers" like Telegram/Signal/WhatsApp etc., though I don't see any language there to suggest it is just for online "platforms" and not "messengers".

Then again, it could be that this entire proposal is intended just for so-called "platforms," not messengers.


> Such designs and operation put our society at risk by: (1) severely eroding a company’s ability to detect and respond to illegal content and activity; (2) preventing or seriously inhibiting the timely identification of offenders, as well as the identification and rescue of victims;

To me, that says if a company writes something that prevents or blocks illegal content from being accessed by law enforcement, any immunity or protection is removed.


That’s a very narrow theory of it. I read it as saying “if the system is designed to keep out external viewers of the content, then it is illegal”.

That is, E2E encryption, because it is impossible for someone to eavesdrop usefully by design, is intended to be made illegal.


I think you're agreeing with the parent comment:

"...if a company writes something that blocks... content from being accessed by law enforcement [like e2e encryption], protection is removed."

That's exactly how I read this. This is a head-on attack on all types of encrypted applications that would block government from (legally) accessing the user's data whenever they want.

This would effectively remove protections from Signal, iOS, WhatsApp, Keybase, or any other platform offering e2e encryption. It doesn't rule encryption illegal per se, but now the platforms may be held liable for the crimes that happened through their services, which would force them to either take their chances, or shut down, or implement some sort of backdoor.

(disclaimer: IANAL).


>> “if the system is designed to keep out external viewers of the content, then it is illegal”

Does that mean that GEO-restricting content will be made illegal?


No. What? This is entirely about law enforcement. If you want to design your system so that the only external viewers are police then w/e — that's up to you.

What the parent is saying is that you can’t use “the system is designed so that nobody can access it” as an excuse for why law enforcement can’t access it.


I’m not sure that I see why Apple would need Section 230 protection... why do you think they do?


More like messengers such as Signal, rather than Apple. Which Apple product is designed to be unable to identify its customers?


I think it's written with 8chan in mind.


Yes it is, of course ushered in by the party of “personal liberties” and “individual freedom.”


An anti-8chan interpretation. "If everyone is Anonymous, we're just going to treat it like the actions coming from your service are your actions."

Fascinating.


Would WebRTC become practically illegal? Because any peer-to-peer communication might be argued to result "in an inability to identify or access most (if not all) unlawful content."

If a website has a feature like voice channels or video calls that are peer-to-peer, then for "most (if not all)" of it the website has an inability to identify unlawful content. e.g. Zoom, Slack, Google Voice, and Discord would have to monitor your calls so they have the ability to identify unlawful content.


Does that impact VPN providers?


Most certainly. Probably ISPs too since now police can't wiretap and listen in on connections to forums.att.com to stop people from posting illegal content there. Except that this law will be selectively enforced to reduce freedom and spy on the public's communications.


This sounds like the EARN IT Act. Very scary. It would de facto outlaw e2e encryption.


This is so stupid. It's completely ignoring the fact that programs like GPG have existed for decades and allow you to transmit whatever illegal content you want over any medium supporting arbitrary plain text.
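To make the point concrete, here's a toy sketch using the cryptography library's Fernet (standing in for GPG, whose --armor mode likewise produces ASCII output): strong encryption whose ciphertext is plain text that can be pasted into any comment box, email, or forum post:

    from cryptography.fernet import Fernet  # pip install cryptography

    # The key is exchanged out of band -- the part no platform ever sees.
    key = Fernet.generate_key()
    f = Fernet(key)

    # The ciphertext is URL-safe base64: ordinary ASCII text that can be
    # posted anywhere plain text is accepted, and is opaque to the host.
    token = f.encrypt(b"any payload at all")
    print(token.decode("ascii"))

    # The recipient, holding the same key, recovers the plaintext.
    assert f.decrypt(token) == b"any payload at all"

No platform-design mandate changes the fact that this kind of tool is a few lines of code away.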


Did they just make Tor illegal?


Only for sites that would want/need S230 protections right?


> Is this the end of online privacy as we know it?

That would imply there was online privacy at some point, or at least that it was a thing that all users could reasonably understand and achieve.

If users understood that Facebook's business model might eventually require what (in hind-sight) appeared to be multiple privacy violations, but continued to use the service anyway because they couldn't help themselves, they really never had online privacy when using the service.

For those of you down-voting me, I'm pointing out this is a fallacious argument talking about online privacy. The companies who built these huge platforms didn't bake in online privacy when they built and evolved their systems. Talking about it like we've "lost it" is pointless, but I do think it's worth exploring how we can make it better!


One thing this effort tells us indirectly but unambiguously is that federal law enforcement and surveillance does not have a technical solution for end-to-end encrypted services or Apple's device encryption. If they did, there would be no need to create new legal penalties (and make no mistake, the federal law enforcement and surveillance agencies are the real force behind this effort to defeat online privacy).


>Purposefully

How is that going to be determined?


I'm not seeing that in the actual proposed legal text (https://www.justice.gov/file/1319331/download), and I'm wondering if I'm overlooking it, or if this is just posturing that they weren't able to write up in a reasonable way.

The closest thing I see is subsection (d)(2), which says that the platform can be prosecuted (A) for a "specific instance of material or activity" (B) if it had "actual notice of that material's or activity's presence on the service," unless (C) they remove/block "the specific instance of material," report it to law enforcement, and "preserve evidence related to the material or activity for at least 1 year."

I believe the major commercial E2E platforms generally have the ability to notice specific hashes of known-bad material (think, e.g., child sexual abuse material) and block it / alert the platform through a client-side filter, which I think would make it pretty easy to comply with these requirements.
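(For the curious, the matching step itself is trivial; here's a hedged sketch of the idea. Real deployments use perceptual hashes like PhotoDNA rather than exact digests, and the blocklist below is hypothetical.)

  # Toy sketch of a client-side known-bad-content check.
  # Real systems use perceptual hashing, not plain SHA-256.
  import hashlib

  KNOWN_BAD_HASHES = {"0123abcd..."}  # hypothetical hash blocklist

  def matches_known_bad(attachment: bytes) -> bool:
      # Hash the attachment locally, before it is encrypted and sent.
      return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_HASHES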

Alternatively, it would be enough, I think, to remove and ban the accounts involved.

The only difficult part is that you need to "preserve evidence," but my understanding is that this phrasing doesn't generally compel you to create evidence where none existed. Privacy-focused platforms have for years avoided keeping logs that they do not want turned over to the government, and it's generally much more onerous for the government to make you start keeping logs than to punish you for deleting/purging logs you already collected.

So I don't think this actually imposes any requirements on design, or gets in the way of E2E or non-logging platforms. If you are informed of specific illegal content, you need to take action. But if you operate the service in a way that you don't have "actual notice" or "evidence" of anything people send with it, I think that's still fine.

The other carve-outs don't seem to be relevant. (d)(4) might be, if you squint hard enough: it says the platform has to make itself able to receive notification of illegal content, and that a platform doesn't get immunity "if it designs or operates its service to avoid receiving actual notice of Federal criminal material on its service or the ability to comply with the requirements under Subsection (d)(2)(C)." I suppose you could argue that not keeping logs means that you've designed your service in a way where you can't "preserve evidence," which would run afoul of this. But I don't think that's the right interpretation: if you're not creating unnecessary logs in the first place, and you keep the logs you do have for a year, you've preserved all the evidence that exists.

Am I being too optimistic here? (I do agree that the plaintext summary you quoted is very concerning.)


This law would make it illegal to host discussion forums such as Hacker News.

Subparagraph (c)(1)(B) says that the only safe harbor for removing content is (c)(2). (c)(2)(A) restricts the criteria that can be used to remove content to the following: "obscene, lewd, lascivious, filthy, excessively violent, promoting terrorism or violent extremism, harassing, promoting self-harm, or unlawful".

The current Hacker News Guidelines contain stuff like "Be kind. Don't be snarky". These are broader than the text in (c)(2)(A). Therefore the safe harbor may not apply.

Worse, section (c)(1)(C) implies that removing ANY content by a user could make the forum liable for ALL OTHER content posted by that user, unless there is "good faith", but "good faith" is defined in section (g)(5)(A) to require all moderation criteria to be defined with "particularity". "Be kind. Don't be snarky" could be construed to lack "particularity".

The only alternative provided by this law would be to only remove content according to extremely legalistic moderation criteria. In my personal experience, all high-quality open forums require moderation with a degree of subjectivity with open-ended criteria similar to our Hacker News Guidelines. Given the legal risks of having to go to trial to argue about whether you are a "publisher" of other people's forum comments, it would be foolish for anyone to continue to employ open-ended moderation under this law.


With such proposed laws on the books, I don't see how anyone would start a new company that does anything at all with user generated content, as the risk for significant criminal liability is just too high. Only billion dollar companies would be able to afford to curate and make sure not a single law was broken ever, at any time, by one of their users.

Imagine having your home raided in the middle of the night because someone thought it would be funny to upload illegal content to your startup's servers. Now imagine being bankrupted and sent to prison afterwards.


Maybe that means only peer to peer systems without central servers are safe. There is no "server" and the data is everywhere but nowhere.


So now the police raid your house in the middle of the night because you accidentally seeded some bad content. /s

I joke, of course what would actually happen is your ISP would turn off your connection.


People are raided all the time for sharing illegal content, whether the act of sharing violates IP laws or the content itself is illegal.


> This law would make it illegal to host discussion forums such as Hacker News.

Not illegal, but extraordinarily legally risky. Perhaps YC has pockets deep enough, connections strong enough, and derives enough benefit to take that risk.

Something that is even more of a volunteer-run labour of love? Less likely.


The changes to the existing “Good Samaritan” section basically do what they said they were going to do: change it so that the section about providers not being publishers and the section about liability for blocking are independent, and clarify what “good faith” in blocking means. Basically the standard for civil immunity is slightly higher for blocking content than providing content. This is weird and still a big change by itself, but it is a reasonable topic for debate, I think.

The new ridiculously named “Bad Samaritan” section is a disaster that wipes out the point of the “Good Samaritan” section and is basically a combination of the mandatory CSA reporting law and SESTA/FOSTA for all federal and state laws. It’s so bad it almost seems like a poison pill.


Can you explain more why the section is particularly disastrous? I agree that it's a dumb name.

  The section: (1) “BAD SAMARITAN” CARVE-OUT. Subsection 
  (c)(1) shall not apply in any criminal prosecution under 
  State law or any State or Federal civil action brought 
  against an interactive computer service provider if, at 
  the time of the facts giving rise to the prosecution or 
  action, the service provider acted purposefully with the 
  conscious object to promote, solicit, or facilitate 
  material or activity by another information content 
  provider that the service provider knew or had reason to 
  believe would violate Federal criminal law, if knowingly 
  disseminated or engaged in.


If someone is selling drugs, distributing copyrighted material, publishing defamatory things, or participating in any other crime on your platform and you know about it, you lose 230 immunity for all content. Maybe this would encourage E2E encryption, but... If you read further, every single provider that allows user content also has to have a tipline for criminal activity and keep records on it.


Having a tipline isn't so bad on its own, but then the problem becomes: User leaves unverifiable tip, what do you do with it? It's not like you can break the encryption...


Sounds like we are turning our entire website infrastructure into a 1984 monitoring system


> Sounds like we are turning our entire website infrastructure into a 1984 monitoring system

It already is one for all intents and purposes, just not one that is used ubiquitously (except in the sense of getting us to buy stuff). All that is changing is that the panopticon is becoming slightly more explicit than implicit, making it that much easier to (eventually) flip a policy switch that designates groups wholesale as "enemies of the people" requiring active scrutiny and interference.

I'm reminded of this cartoon:

http://2.bp.blogspot.com/-x2BP9Xd7xfo/T7bvuweLPfI/AAAAAAAACj...

Twenty years later and it is still fresh as a daisy.


Ingsoc didn't monitor proles, and didn't always monitor inner party members, so I'd claim its monitoring reach was only somewhere between half and a quarter of our entire website infrastructure.

Bonus clip: https://www.youtube.com/watch?v=UQ6LGrr8iEg


Turn it over to law enforcement. A single chat message is rarely a crime, but can be used to find further evidence of crime.


Turn what over to law enforcement? The ciphertext and timestamp?


If we’re talking about content that you literally don’t have access to and can’t see because it’s encrypted, like iMessage or Signal, then you don’t need Section 230 protection, because you aren’t moderating content.


A statement from the user who submitted it, including their unverifiable copy of the offensive message. The law is not computer code. They don't need to break ciphertext to get a warrant.


You can do this thing called "message franking" to make it more verifiable, if you want.

The downside is, context matters. If you report messages out-of-order or with important context deleted, you can trick investigators into thinking something was said or implied that actually wasn't. It needs to be carefully designed.

(I blog about cryptography, but you should ask a cryptographer if you want to design something like this.)
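If you're curious what that looks like mechanically, here's a toy sketch of the commitment at the heart of such a scheme; heavily simplified, with made-up names, and it deliberately ignores the ordering/context problems above, so treat it as illustration only.

  # Toy message-franking sketch. The sender commits to the plaintext under
  # a one-time franking key carried inside the encrypted payload; the
  # server only ever stores the opaque tag at delivery time.
  import hashlib, hmac, os

  franking_key = os.urandom(32)
  message = b"the offending message"
  tag = hmac.new(franking_key, message, hashlib.sha256).digest()

  # On report, the recipient reveals (message, franking_key) and the
  # platform checks them against the tag it saw at delivery.
  def verify_report(msg: bytes, key: bytes, stored_tag: bytes) -> bool:
      expected = hmac.new(key, msg, hashlib.sha256).digest()
      return hmac.compare_digest(expected, stored_tag)

  assert verify_report(message, franking_key, tag)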


I don't design message apps. I just know that law enforcement did just fine before the internet when most communication (face to face, over the phone) wasn't recorded or verifiable.


Interesting. Thanks.


It is also a dumb name because the phrase "Good Samaritan" comes from the unexpectedly helpful behavior of the original Samaritan: Samaritans and Jews despised each other, so the man who was ignored by a priest and a Levite, but helped by, of all people, a Samaritan, found cause to remark on this.

(This is all from memory...)

Going by the original parable, this should probably just be "Samaritan carve-out," or maybe better yet just "Bad actor carve-out".

Anyway...


The obtuseness of the wording definitely stood out.


Any actual Samaritans out there? How do you feel about that language?


I had no idea Samaritans were an extant group of people, but Wikipedia tells me that there are around 1,000 people who self-identify as Samaritan, mostly around Mount Gerizim and Tel Aviv. Fascinating.


No doubt they all frequent this forum


It’s a reference to the Biblical parable of the Good Samaritan. It’s not some racist jab.


I well understand that. My question is whether modern-day residents of, or emigrants from, Samaria feel comfortable with the Bad Samaritan characterization. If it were a large demographic in the U.S., there might be a reaction. Then the question becomes: how large a population do you need before you have to speak carefully about a group?


Yeah, I feel like it’s really weird to have such dissonance about this on HN. Like can you imagine if the phrase was something like Good Chinaman? There would be an uproar.

To me this is like learning that the word for getting scammed is just a dig at the Romani.


To some folks out there, those weigh the same.


Neither are most other terms, but that counts for little.


Unsurprisingly, this is pretty horrifying for anyone who cares about either the 1st or 4th Amendments (and the 5th Amendment implications of the "you can't blind yourself to bad content" stuff are amazingly bad). It blows my mind that it isn't a separation-of-powers concern to even allow these people to draft language or make any statements on the record of what they think the law ought to be.


> It blows my mind this isn't a separations of powers concern to even allow these people to draft language or make any statements on the record of what they think the law ought to be.

That seems to be an untenable position. Legislation is drafted with input from lots of different groups, including the agencies that might be enforcing it. Proposing legislative changes isn't infringing on the powers of the Legislature.


You want to pay a cop to write the law? Put it under their seal, the one they use to exercise police powers?

Sure, other people vote on it. But it stinks.


> You want to pay a cop to write the law? Put it under their seal, the one they use to exercise police powers?

> Sure, other people vote on it. But it stinks.

Wait, do you think that police departments and police unions don't participate in drafting laws?


Participating and writing are a bit different things.


If you wish, feel free to tack on "(up to, and including, writing them wholesale)" to my comment.


Not in the case of police unions


Their point is that it's just about as old as the laws here, and that you can still have effective legislation even if it's primed for corruption.

You could make the same argument about most American systems of government, but pointing and saying that corruption could exist is not the same as showing that it does.


It blows my mind that anyone who purports to care about the first amendment would argue that we shouldn’t “allow these people to draft language or make any statements on the record of what they think the law ought to be.”


There is a difference between an individual acting in their individual capacity and an entire department of the federal government.


The rights protected in the first amendment don't apply to a department of the executive branch drafting language for the legislative branch.


What are you getting at here? That the Constitution doesn't prevent Congress from passing a law that prohibits the DOJ from participating in the formulation of public policy?

This just seems like a nonsensical rabbit hole to explore to me.


It seemed to me that the claim was: You can't support the first amendment and be against DOJ participating in legislation. My point was that you in fact can hold that position. The first amendment doesn't extend so far as to make this position illogical.

To be clear, I'm not claiming I hold that position. In fact, I fully agree with your comment in reply to the top level comment we're under.


The comment was about “these people,” not the DOJ qua DOJ.


I guess I don't get it. I thought (and it still seems) that "these people" refers to the DOJ. To be clear, I don't agree with the person you replied to, so we probably agree in the end, I just didn't agree with your comment as I read it either. But I guess I'm not parsing something correctly here...


Phrases like "has an objectively reasonable belief" give me chills and is RIPE for abuse.

Also, modifying the language of Civil Liability to include good faith efforts for language that could be deemed "unlawful" is .... sneaky, and again, ripe for abuse.


The irony is that phrases like that are actually the least ripe for abuse because there is a ton of case law concerning them. I.e. they have a very specific meaning in regard to the law/statute/code. They're legalese, and legalese is your (lawyer's) friend.


The lack of imagination for how Republican court packing could flip that around is dire.


You couldn't pack courts to change the meaning of a phrase like this. It's not like I can come in as a new judge, say that I interpret these words a certain way, and plow my way forward. The wording has to be accepted within the technical jargon of the field.

You know how we have words like compile, build, "binary", or executable? It's the same thing. Expanding the interpretation of the law is expanding the interpretation of a highly technical definition and takes much more skill than saying "I interpret the words this way".


> The wording has to be accepted within the technical jargon of the field.

Huh? If a majority Supreme Court decides a phrase "really" means X in a certain context, it means X for the rest of the courts. The "technical jargon of the field" notwithstanding. They're often the source of that jargon.

The Slaughter-House Cases famously by a 5-4 vote reduced the Privileges or Immunities Clause of the 14th Amendment to a dead letter only 5 years after its enactment. A handful of years later it specifically held that despite the 14th Amendment, the First and Second Amendments didn't apply to the states. But then, despite no actual relevant change in the Constitution itself, the Bill of Rights began to be applied to the states by the Supreme Court in the 1900s, through the somewhat roundabout method of the Due Process Clause instead.

Or choose some other example, if you prefer. The "reasonable expectation of privacy" standard that has formed the basis of Fourth Amendment law for decades rests on the court's novel interpretation of the stubbornly unchanged words of the Fourth Amendment in the 1960s.

The Supreme Court wields huge power to interpret the law untethered to any pre-existing rule, if it so chooses.


The court that turned "Shall not be infringed" into "No really, it's okay to infringe on this particular combination of parts because it wasn't an arm at the time" should not in any way be underestimated for capacity to torture language.


> The court that turned "Shall not be infringed" into "No really, it's okay to infringe on this particular combination of parts because it wasn't an arm at the time"

Is there a case you’re referring to? Because in reality, it’s not uncommon for two “shall not be infringed” sections of the law or Constitution to come into conflict.


Pretty much the entirety of the National Firearms Act as projected through Miller.

I fully understand the social context of the time (shortly after the events of the Valentine's Day massacre), but I find the entire logic behind it flawed, and open to challenge on the grounds that it's essentially a poll tax (an unreasonable barrier to entry on the exercise of a constitutionally guaranteed right) predicated on the federal power of taxation of interstate commerce, which is its own bag of shakiness.

I was reluctant to even post it, because it almost always devolves into a whining match that no one is infringing anything, at least until you add the "closing" of the machine gun registry in '86 into the picture. There, the registration requirement creates a de facto ban on civilian ownership/production of automatic firearms for lawful purposes, because Congress has mandated that no money be spent updating or maintaining the registry; it is left open yet non-functional, constraining the supply of legally transferable automatic weapons to those produced and registered prior to 1986. That notwithstanding, a lot has been hung on the coat rack of that entire vein of politicking, and it smells to high heaven to me.

But we aren't talking about that; we're talking about Miller, the one case in which the Supreme Court leaned so heavily on a qualification that an arm must be kept and used consistent with the prefatory clause of the Second Amendment. That cemented the next 50-odd years of slow, methodical encroachment on firearm owners' rights to keep and bear arms, until Heller reversed the stance and explicitly acknowledged that the prefatory clause does not modify the operative clause of the United States' 2nd Amendment.

Note that I'm not opposed to some level of tracking/registration of certain firearms in general; just not when it's combined with the wild gesticulations that have been employed to create de facto bans and excessively high barriers to entry for possessing, fabricating, or doing business in firearms. To me, the "keep" part of the 2nd Amendment covers the right to fabricate replacement parts as needed, even receivers; but in the eyes of the law, the act of fabricating or producing is separate from the act of keeping (possessing). Hence, to meet my standard of keeping, you not only have to pay a $200 tax, you have to pay an appropriate recurring SOT ($2000-ish last I checked), which effectively requires you to do business as an FFL of some flavor, and to structure your life around what should be as frictionless and routine an interaction as humanly possible. Otherwise the federal government will unilaterally decide you don't really need that right to "keep" (to my standard, remember) those arms, because you're not engaging in enough interstate commerce (or intrastate commerce, thanks to a grain-farming case in which SCOTUS held that the interstate commerce clause grants federal regulatory authority over intrastate commerce if that activity has a reasonable chance of affecting the interstate market) for it to be a slam-dunk case that federal law enforcement can dunk on you for effecting or attempting tax evasion under jurisdiction granted by the interstate commerce clause. That jurisdiction is really being employed as a workaround to clamp down on the number of automatics or other undesirable firearms, and to strip the right to vote, via a felony firearm charge, from any poor sod who doesn't sweat the details closely enough.

I've spent entirely too much time thinking on this sort of thing. Especially since I only own a Mossberg, but it's the principle of the thing. I downright object to any implementation of something that requires an average person to navigate that many layers of indirection for something that should just be straightforward.


I thought we were referring to packing the federal courts. As judicial review is a power solely reserved for the Supreme Court, I don't think it applies to this discussion.

The phrase in question has a Wikipedia page: https://en.wikipedia.org/wiki/Subjective_and_objective_stand...

And a bar exam study page: https://www.quimbee.com/keyterms/subjective-standard-of-reas...

You don't get to just change the meaning of such phrases on a whim, Supreme Court or not. You'd be re-interpreting hundreds of years of case law for the sake of extending a single decision, which is easier and more subtle to do in a myriad of ways.

It's like worrying a programmer is going to redefine the meaning of the word compiler or something.


> As judicial review is a power solely reserved for the Supreme Court,

It is not. “Judicial review” is just applying the hierarchy of laws top-down from the federal Constitution, and every court in the US federal system does it. Orders striking down federal laws as violating the Constitution often originate from District or Circuit Courts. Supreme Court involvement is not necessary for judicial review.


As usual for Section 230 stuff, Mike Masnick at Techdirt has the thoughtful, sourced analysis, including all the players and a history of their past actions:

https://www.techdirt.com/articles/20200921/22345845352/trump...


I used to be a very regular reader of Techdirt, but it's become clear to me that their quality has declined and they've been going for clickbait headlines full-time since changing their monetization strategy. Masnick still makes some valid points, but they're few and far between, and the other site writers are generally not a beneficial read.


Techdirt has always been that way, you just started noticing. Mike is all about the clicks. Mike seems to never be wrong, according to himself. Mike will distance himself from his own words when convenient. Techdirt should be called The Daily Outrage.


I am impressed with how far off-base your analysis is. I mean, I'm struggling to frame the state of mind where someone might assemble those conclusions.

Techdirt's headlines reflect exactly what's in the article. Instead of click-baity, a reasonable person might call that accurate.

Mike Masnick's coverage of complex legal issues is extraordinarily good. He's one of a small number of journalists who make complex legal matters understandable without butchering or omitting relevant details.

As for the Daily Outrage, well, okay. Techdirt covers outrageous behavior. "The Daily Outrage" is a little simplistic, but it's not off base.

A relevant side note: I've been calling out biased reporting for 30 years. Not because it makes the "bad" team look bad, but because addressing bad behavior unequally provides nurturing spaces for it to thrive. Techdirt is one of the few publications that consistently called out bad behavior by the Obama administration; sometimes it was the ONLY publication doing so.

I didn't want to see Obama vilified or lionized. I wanted corruption outed and problems fixed and I really don't give a damn who the PotUS is.

Feel free to respond here with other news publications that don't change their national coverage methods, depending on who's holding the White House.


The title and by-line of this article don't make me think that this analysis is at all thoughtful or sourced. It may be, but it also reads like a 13-year-old boy trolling another kid at school.


An interesting tidbit I found after some digging:

"Carve-Out for Actors Who Purposefully Blind Themselves and Law Enforcement to Illicit Material" The recommendations suggest that sec. 230 protections not be extended to platforms that intentionally structure themselves in a way to make giving information to law enforcement difficult or impossible. This probably bodes poorly for private by design forums with aggressive log flushing policies (I'm specifically thinking of things like 4chan, which claims to permanently and irrevocably delete data aggressively).


This is not in the current proposal. It would basically ban end-to-end encryption. The current proposal is still pretty bad, though.


There are other areas that discuss a ban on E2E encryption.

Specifically this:

>>One important way to confront the grave and worsening problem of illicit and unlawful material on the internet is to ensure that providers do not design or operate their systems in any manner that results in an inability to identify or access most (if not all) unlawful content. Such designs and operation put our society at risk by: (1) severely eroding a company’s ability to detect and respond to illegal content and activity; (2) preventing or seriously inhibiting the timely identification of offenders, as well as the identification and rescue of victims; (3) impeding law enforcement’s ability to investigate and prosecute serious crimes; (4) and depriving victims of the evidence necessary to bring private civil cases directly against perpetrators.


Almost guaranteed to be used to selectively prosecute.


All laws in the USA; as is tradition.


All laws everywhere.


This is all so true. Law enforcement depends on the individual doing the enforcement at a particular time and place; the actual law and your interpretation don't matter. Justice depends on groups of individuals coming together to make decisions, as does legislating. All of these people are going to make mistakes, at best.

To bring it back around to the topic at hand, I think this enforcement action is a mistake.


The US does have a small amount of protection against this. Selective prosecution can be a reason for a judge to dismiss a charge.

I'm not aware of how often it's used/successful, though.


GDPR was explicitly stated to be intended for selective enforcement.


Extraordinary claims need extraordinary proofs.


Laws are always enforced by humans, so this is always true. What is different about these proposals is that they are explicitly prompted by partisan (and personal, i.e. Trump) motives. That's true of other laws too, but it's supposed to be the quiet part not said out loud.

The whole premise of investigating for "bias" is clearly designed to be abused - similar to HUAC asking you to prove that you aren't a communist


On the other hand, they're a reaction to that exact thing in e.g. Twitter's enforcement of their rules. They'll decide what's kind and what's not.

If you're on Twitter's good side and retweet somebody with "get em!", you're good. If you do the same while not aligned with Twitter politically, you're asking your followers to harass individuals and will be punished.


What is partisan about antitrust action?

Don't we want an open market of ideas?


Authoritarian laws wrapped in a 'save the children' package make a mockery of our legal system.


> ... hold online platforms accountable both when they unlawfully censor speech and when they knowingly facilitate criminal activity online.

If these proposed changes are enacted, I await the catch-22 where an "online platform" is sued in relation to the same content; first where they "censored" something and then had to put it back online, and second as "knowingly facilitating criminal activity" because it's online.


I was just about to say...

> when they knowingly facilitate criminal activity online

Does this include the criminal misinformation campaigns the GOP loves to quote as free speech? What qualifies as criminal? And WTF is the Justice Department doing writing laws? That's outside their jurisdiction; Congress writes laws. But I guess when your political party owns Congress, the White House, and the Justice Department... you can do whatever you want, regardless of the Constitution.


They are proposing legislation. Congress would have to adopt it into a bill. Doesn't mean it's going to happen, or that it will pass. But anyone can write a letter to Congress and propose legislation. You and I would likely get a nice pat on the head and an auto-signed letter, but we can do it.

The DoJ is on step 1: https://www.usa.gov/how-laws-are-made They simply have a louder podium to announce their idea.


Yeah, I know how laws are made, it just shocks me that it's coming direct from the DOJ, quoting the administration.


If Fox News is streaming over YouTube TV, is it an online service that can be sued for moderating content in bad faith? Who gets sued? Fox News, YouTube TV, my ISP?


Everyone.


in the united states, this is the correct answer. sadly.


Sue 'em all and let the judge sort them out.


What law separates "lawful" from "unlawful" censorship?


> What law separates "lawful" from "unlawful" censorship?

The law of the jungle?


Indeed.


Yeah, that’s not suspicious at all.

Also, the DOJ really should back off here. They're an Article II department; their job is enforcement, not legislation. If the law should change, that is emphatically Congress's responsibility. They can recommend all they want; it should be valueless, though.


> DOJ really should back off here

> They can recommend all they want

Isn't that what they are doing, recommending? Your statements seem contradictory.


I think when the person above you said recommendations should be "valueless", they meant it should be a neutral, objective recommendation. Like you've tasked us with enforcing laws A, B, and C, and if you gave us powers D, E, and F it would make that easier in these ways.

Not like Barr's words: "For too long Section 230 has provided a shield for online platforms to operate with impunity. Ensuring that the internet is a safe, but also vibrant, open and competitive environment is vitally important to America." Because those are Trump/Barr opinions or value judgments about how good or bad the recent state of affairs has been and about what is supposed to be important to America.

Law enforcement isn't supposed to set the policy objectives. They're just supposed to implement them.


> I think when the person above you said recommendations should be "valueless", they meant it should be a neutral, objective recommendation.

No, I mean that the DOJ's opinion on what the law should be is literally without merit, and possibly is worth even less than that. The creation, implementation, and adjudication of the laws are separated into distinct branches of government by design. I do not want the group that is responsible for enforcing the laws weighing in on what they think the law should be; one does not ask the group that will wield the power what powers they ought to have if one wants it to end well for everyone else.

I would be much happier if the DOJ stuck to enforcing the laws as written and would prefer if they would kindly shut up and go away on the issue of what laws they think should be written.

Edit: To be clear, I also think that Barr is full of it. He's blown any credibility as anything but a partisan hack. But even with a different AG and a different administration, the idea of the DOJ recommending to Congress what powers it should have strikes me as very much a bad idea, and I would say that even if I were otherwise happy with the administration suggesting it.


> I mean that the DOJ's opinion on what the law should be is literally without merit

Same for the CDC, FDA, EPA, IRS, HHS, etc.?

I suppose the President should also be forbidden from proposing legislation, and from asking any of his agencies for proposals?

This is an untenable position, IMHO.


I never said “forbidden”; that’s obviously untenable. Even people I don’t like have the right to say things I don’t want them to say; I’m speaking of norms and social pressure.

I’m saying that we should tell those who will wield the power to pound sand when it comes to what powers they should get. Ideally the president should not be proposing legislation; that is not their role in this democratic republic. One could argue that the centralization of legislative agenda-making into the executive is one part of why presidential elections are such high-pressure situations these days; ideally that should be handled by the deliberative body, not the executive.

That being said, there is a huge, massive, unbridgeable chasm between the DOJ, which is not only capable but expected to send men with guns to either detain you or legally shoot you if you resist, and every other organization you listed. The risks of abuse of power within that specific organization are massive, which is why traditionally there’s supposed to be a bit of a gap between the president and the DOJ, to reduce the risk of politicization of the latter.


It seems completely infeasible to me to think that Congress and its staff are capable of drafting reasonable legislation without considerable input from the executive agencies.

I understand the general conflict of interest you are concerned about but that is why there are separate branches. The executive branch can only propose and/or respond to inquiries, they can't actually introduce legislation.

(I'm just going to ignore the rabbit hole of regulatory construction here, as that is another can of worms.)


Congress is free to consult the other branches, but that is substantively different from the DOJ proposing legislation. Congress should be in the driver's seat, imho.

And yeah, the devolution of quasi-legislative ability to regulatory bodies is a serious ball of wax that I haven’t managed to formulate a coherent solution to.


And we should tell them to <bleep> off.


Fairly sure the US DOJ has gone well beyond ‘recommending’ moves and actions outside of norms or the status quo in recent months...


Drawing the line on this is one heck of a huge gray area

It says providers cannot purposely "turn a blind eye" to potentially illicit traffic, i.e., cannot choose not to track and record such data. Yet any data about individual users and their behavioral patterns has the potential to help reveal illicit activity.

Meaning a provider could essentially be required to track everything possible in order to avoid ignoring potential illicit content. Keystrokes? Absolutely they could reveal illicit activity. By deciding not to record them, a provider is turning a blind eye to that possibility.


Having a legislative affairs office is one thing, but publishing draft legislation? That seems WAY out of line for an executive department. Why are my tax dollars being used for the executive to write laws?


Why are there so many people here surprised that executive departments participate in proposing legislation?

It seems like completely normal activity to me.


It’s not normal to write draft legislation. It is normal to have a legislative affairs office that works with Congress on all sorts of things but to propose draft legislation is weird. Agencies typically try to avoid doing this as it could adversely impact their funding moving forward. Why fund an agency that already has enough money to fund a lobbying arm? It’s a norm more than a rule but it’s a thing in my experience.


This report suggests that this sort of thing is not uncommon but that it rarely takes the form of a complete bill, which is closer to your term "draft legislation".

Redlining an existing statute with changes is what the DOJ published and is closer to what I was getting at when I said "participate in proposing legislation".

https://www.acus.gov/sites/default/files/documents/technical...


Thanks, that’s a great paper. Much obliged.


I guess that depends - is it a frequent occurrence, historically?


It happens all the time. It is a completely mundane part of the daily functioning of government.


In my experience this is not at all normal. Any more examples off-hand of agencies writing draft legislation? Not talking rulemaking, talking actual law. I'd happily be proven wrong here, but it's something I have actively seen agencies avoid in the past, so I'm curious where you've seen this kind of overt draft legislation written.


See my response to your other comment above.


Thanks. Seems like I’m mistaken about this or my experience is not as common. Appreciate the links.

I’d be interested whether they always came with a press release and public quotes / pressure, but maybe that’s in the ACUS report. Haven’t gotten all the way through it yet.


That does seem a bit unusual but then again the amount of attention that social media (and media in general) is getting lately is basically off the charts.


Further, there are often laws that basically state "here's some general guide; executive agency X will figure out the specifics and publish that in the Federal Register as regulations."


The law instructs them to write the rules/guidance. It’s their responsibility to then do that, and we fund them to do that.

Agencies don’t get funded to write the law that gives them the mandate to write the regulation. That’s a classic bureaucratic self-licking ice cream cone.


> Having a legislative affairs office is one thing, but publishing draft legislation? That seems WAY out of line for an executive department. Why are my tax dollars being used for the executive to write laws?

How common is that already? It seems kinda similar to the president preparing and proposing a budget to Congress.


I think it’s fine to ask social media companies to choose between being a publisher or a platform.

It’s perfectly fine to push your point of view online, as it’s protected by the First Amendment. But you should bear the consequences for the cases not protected by it.


1A protects you from the government. It has no relevance to free speech among persons and corporations.


Why is speech the only part of 1A that gets this treatment? You wouldn't say the same thing about religion


The same is 100% true about religion.

1A does not give you protection from religious discrimination among persons and corporations.

It is the Civil Rights Act that does.


Don't be semantic; my point is: why isn't speech codified at the same level?

Clearly at some point someone said "freedom of religion should mean protection from persons and corporations", why can't we say the same about speech?


It isn't semantics. We give protected classes extra protections because they are attributes of who a person is. To give the same protections to actions that people do would be wildly different.

A salesperson should be able to be fired for shit-talking their own product. A customer service representative should be able to be fired for treating a customer inappropriately. The editor of a magazine should be allowed to edit contributors' articles. Putting speech on the same level as a protected class is ridiculous.


Alright, so let's take you literally and seriously: A boss can't fire a homosexual for being homosexual, because that's something he is.

But can definitely fire him for anything he does, right? Like having a pride flag on his backpack? Or committing the speech act of saying "I'm gay"?


Courts aren’t machines that evaluate a series of IF statements in a vacuum; they look at the entirety of the situation. They’re going to be looking for the root cause of the reason that management has landed on their decision.

Someone mentioning they’re gay after being asked if they have a wife would be a very different situation from someone who, for instance, is engaging people in inappropriate and unwanted discussions of sexuality.

The law is quite simple: was the person fired because of their membership in a protected class?


What do you mean “why isn’t speech codified at the same level”? You mean, why haven’t legislators made a law that protects free speech against discrimination by employers and businesses, in the same way the Civil Rights Act does for religion?

I mean, what other answer is there than what’s implied by that very question: it’s because “legislators haven’t made a law”.

Now you can believe all you want that there’s been a centuries-old conspiracy by legislators to not ever draw up and approve this kind of law. But maybe you should consider that the other things mentioned in the 1st amendment, such as freedom of press, lobbying, and assembly, also aren’t “codified at the same level” as religious protections. And maybe you’ll realize that there are obvious differences in how religion is perceived to be different than the 4 other things protected by the 1A.


> I mean, what other answer is there than what’s implied by that very question: it’s because “legislators haven’t made a law”.

Imagine if someone asked you "Why is this apple red?", and you responded by saying "Well obviously, it is red, because it is red!"

Would this response be considered by anyone to be anything other than a pretty extreme bad faith response?

I think it is pretty obvious what the other person was asking. And yet, you responded... in the way that you did.


If I had intended a "pretty extreme bad faith response", I would've done it in a sentence, not several paragraphs.

To use your analogy, it'd be as if u/deadmik3 were asking, "Why must apples be called 'apples', when oranges are called 'oranges'?" It's a question that isn't coherent enough for a single answer, as it seems to be based on flawed foundational assumptions on the part of the asker, e.g. "Who says you 'must' call apples 'apples'" and "Do you actually think English people named oranges after the color, 'orange'?"

Here's deadmik3's original question/assertion [0]:

> Why is speech the only part of 1A that gets this treatment? You wouldn't say the same thing about religion

I would've thought u/kube-system's response was clear enough (e.g. it's the Civil Rights Act, not the 1A, that protects religion), but apparently it hasn't been. So I genuinely don't know what deadmik3's issue is. Do they think laws (and/or the process of making them) are merely a "semantic" concept? Do they think that religion and speech, being in the "First" amendment, confers to them a special overriding priority (i.e. in the way that being in Amendments 2-27 do not)? And if so, have they considered that the 1A explicitly mentions 3 other freedoms – press, assembly, and petition/lobbying – that, like speech, do not have the protection for religion?

Without knowing the presumptions behind their confusing question, it's hard to answer or otherwise debate it. I mean, the natural rebuttal would be to point out that the CRA's protections for religion is far from clear cut and indisputable – has deadmik3 never heard of the gay wedding cake case, which after 6 years ended in a narrowly defined Supreme Court decision? [1] – which means that similar protections for free speech would be even more contentious and logistically complicated, which is likely a key factor why that legislation doesn't exist/has never passed.

But why get into that if someone believes lawmaking is a semantic designation, rather than an actual process that requires considering how a law (and its enforcement) will actually operate in reality?

[0] https://news.ycombinator.com/item?id=24570504

[1] https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...


> lawmaking is a semantic designation

The semantics, in this case, is when someone is trying to ask "why have lawmakers not done X yet?"

And then the response to this question is "it's not the law!"

The original question was quite clearly asking for a justification or reason as for why lawmakers have not done something.

And then the semantic response, which ignored the very obvious question, was to say "this is what the law is!"


> why can't we say the same about speech?

We can. We just haven't yet, and it is not clear that it would result in a world that most Americans would prefer to the one we live in. And like religion it would be subject to lots of tension and litigation about the speech of the corporation's owner vs. the speech of the corporation's customer.


When discussing the nuance of legislation and constitutional law, there is no such thing as being too pedantic. The difference between something being enshrined in law, something being protected by the constitution, and something being protected by jurisprudence based on the constitution are really big, important differences.


What do you mean? The first amendment absolutely does not allow me to practice my religion in Macy's.


... unless they pass the law that's referenced in the link, you mean?


The 1st amendment may only protect us from the government, but there is nothing stopping us from taking away immunity from "platforms" that are actually acting as publishers.

If a platform wants the liability protections for being a platform, then we can force them to not act like a publisher, or we can take away those platform protections.


> I think that’s fine to ask social medias to choose between being a publisher or a platform.

Why should we force websites to choose between being a publisher and a completely unmoderated platform? Why do people keep parroting that line with zero justification, as if it's self-evident? You are commenting right now on a website that is heavily moderated, a website that could not possibly exist if the admins faced personal legal liability for any illegal content an anonymous commenter posts on here. If you don't think Hacker News should be legally allowed to exist, why are you posting here?

Nobody outside of a fringe group of edgelords wants their favorite Internet communities to turn into 8chan. But the legal regime you are suggesting would make any other kind of website that hosts user generated content effectively illegal.


When HN's demographic imagines such legislation, they are thinking about how racist views get quickly kicked off of other platforms, and they want those other platforms to suffer. You won't hear any of these proponents complain about someone getting banned on HN, because it's an environment where their line of thinking is dominant.


I think that most of them should remain platforms, but that recommendation algorithms should be treated as publishers. So a reverse-chronological feed would be protected by 230, while the platform would be liable, as a publisher, for the contents of an algorithmic feed. People can subscribe to whatever sketchy content they want, but it won't be pushed to people who didn't ask for it.
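In code terms, the line I'm proposing is roughly this (an illustrative sketch with made-up names, not anyone's actual statutory test):

  # Illustration of where the platform/publisher line would fall.
  def chronological_feed(posts):
      # Pure reverse-chronological ordering: no editorial judgment,
      # so Section 230 protection would still apply.
      return sorted(posts, key=lambda p: p["created_at"], reverse=True)

  def recommended_feed(posts, score):
      # Algorithmic ranking: the platform chooses what to amplify,
      # so under this proposal it would carry publisher-style liability.
      return sorted(posts, key=score, reverse=True)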


I don't agree with the downvotes: it's a concept that should get more discussion, in the sense of "where should the line be drawn between common carrier behavior and publisher behavior?"


That’s a fantastic distinction. I would love to see that happen.


Heck, I think it's fine for entities to be both, but recognize the need for special rules allocated to individual companies that reach a certain level of subjective public-space-y-ness

If our concerns are facebook, instagram, reddit, and maybe 20 others, let's not constrain ourselves with fundamental rules that try to group them in with an obscure independent forum site that houses a few hundred members


I don't think you understand the first amendment.


Enlighten us.

For example, calling for the murder of specific people based on their political views or race doesn't fall under the First Amendment. And social media platforms allow the spread of some of these messages with no consequences at the moment.


That's not exactly true. "Speech is not protected by the First Amendment if the speaker intends to incite a violation of the law that is both imminent and likely."

Someone randomly spouting off that people of a race or ideology should be wiped out doesn't always/exactly pass this legal test.


Great point. Visit any US neo-Nazi website and you’ll see language that one could argue is “threatening to a specific group”. But unless it’s “hey, everyone gather at 5 pm on Main St so we can start shooting people”, it’s still protected language.


That's actually a point that I don't think has been decided yet, specifically "imminent". It's been ruled that "at some unspecified point in the future" doesn't count, but I can't find a case (with an admittedly short search) about where the exact boundaries are.

I don't know what the exact legal definition of imminent is, but the layman's definition involves the thing happening soon.

It may well depend on how close to 5 it is (but what timezone?).

The same sentence with the words "right now" would almost certainly meet the test though (assuming the action was actually likely to occur).


The First Amendment does protect that though. As long as the speech isn't intended to incite imminent lawless action, calling for murders is absolutely protected.

E.g. "We should go harm X" is arguably illegal, since it's an immediate call to action.

However, "It'd be great if X died" or "All Y should die" are certainly protected.

In the same vein, this is why "Punch a Nazi" is totally legal: assault is illegal, but you're not immediately inciting a lawless action. "Let's go punch that Nazi", less so.


The essential thing missing from all of these discussions (unless I am misunderstanding people) is the immediacy of the incitement. The context in which the speech occurs matters. That is where the imminent and likely parts properly arise from.

If you write a book advocating for violence against x or y groups or individuals, that is permissible; but if you were in a crowded square and advocated the same thing while those targets were also in the square, and it is likely that your incitement will lead to violence, then it is not. That's incitement, it's imminent, and it is for a lawless act. But again, if you did it at home on your blog, in some nebulous sense that isn't likely to cause some specific event, then it is protected speech.

An important distinction here is that "true threats" are a separate category from what we are talking about. A true threat doesn't have a "likely" or "imminent" component and so is even broader in scope than violent speech in general. That is, true threats are not protected.

https://www.mtsu.edu/first-amendment/article/1025/true-threa...


It's not a matter of what kind of speech 1A governs, but what kind of entities.

The government is the entity that is not allowed to restrict legal free speech. Private corporations are not bound by the same rule; they can restrict all they want.


So if I sent a note with the content you just described to everyone I know via UPS, and you define this content as a crime, then should UPS be held liable for facilitating this crime? So therefore UPS needs to inspect the content of every package that it delivers to avoid culpability in crimes like this?

Or somehow UPS is different from a social media entity. Then what legally is a social media entity?

These are interesting times. The rules will certainly change; it remains to be seen if they will ultimately change for the better.


If UPS were already opening everyone's mail to add its own pamphlets when you discuss x or y, then yes, they should be responsible for facilitating crimes that they know of.


Interesting, so the implication is that web companies are generating advertising from user-submitted content, therefore it's been read by the company, and so any cross-user communication needs to be reviewed by a person or a sufficiently accurate AI for endorsement through publication.

So in your framework, transmission of user content + advertising based on that content = culpability. I wonder if social media companies would figure out a way to legally prove transmission of a message without viewing its contents as a way to avoid culpability and maintain some level of profitability.
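One way that might work, purely speculatively: the platform records only a digest of the opaque ciphertext, which later lets either party prove a particular blob transited the service without the platform ever seeing the plaintext. (The function name and receipt format here are made up for illustration.)

  # Speculative sketch: a delivery receipt over opaque ciphertext.
  import hashlib, json, time

  def delivery_receipt(ciphertext: bytes) -> str:
      # The platform never sees plaintext; it logs only hash + time.
      return json.dumps({
          "sha256": hashlib.sha256(ciphertext).hexdigest(),
          "delivered_at": time.time(),
      })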


This is like suggesting that if you build a spam filter, you need to build a filter that scans content and catches and reports every type of crime it is possible to commit, ever.


how is that not protected under the first amendment?


also, private speech is freer than public speech


I'm not here to educate you.


Could you please not post in the flamewar style to Hacker News? We're trying for something different here.

https://news.ycombinator.com/newsguidelines.html


I'm aware.


This could destroy the tech oligopolies. It would tangle them in unlimited court cases, allowing smaller sites with smaller pockets [1] to flourish. I imagine they will fight this to the death, or at least drag it out in court until the November elections, when a pro-oligopoly (Biden) camp could take over.

[1] Lawyers hate small pockets.


> [1] Lawyers hate small pockets.

Judging from patent trolls (technically their lawyers) suing small startups, I don't see how that's the case, unless each website was serving <100 people and making 0 revenue.


It could also have the reverse effect, because the prospect of unlimited court cases represents a very high financial bar for small newcomers to deal with. The oligopoly already has the funds to deal with it. And/or they comply, creating an ossified, state-controlled set of social networks. Yay.


If this does destroy tech oligopolies, they'll have nobody to blame for it but themselves. They've had over a decade to prove that they're fair and impartial and, if anything, have worked overtime to prove the opposite.


Private organizations have zero obligation to be fair and impartial. If you want a company to be run differently, start your own.

It's funny how conservatives were totally on board with deregulation and perfectly fine with corporations steamrolling everyone else as long as they were aligned with conservative objectives. Then the moment a powerful corporate faction with liberal-ish sensibilities emerges, they freak out, abandon all their fake principles, and run crying to the nanny state to save them from the big evil corporations.


> the nanny state

That doesn't make much sense in this context - the "nanny state" refers to the government taking care of your physical needs, like a nanny. But even if you sweep all government regulation under the blanket of "the nanny state", that's still not what's being proposed here: what they're proposing is removing protections previously afforded by the government because they've been abusing them for so long. In essence, all that's being proposed is that everybody plays by the same set of rules.


What censorship-related rules are they not playing by now that everyone else is?


1. Please note that ‘obligation’ is not the only standard at play. Companies exist in a social and economic context. (I have previous comments on this topic that can be easily found.)

2. The ‘if you want a company to be run differently, start your own’ argument is tiresome and weak. There are (and should be) many mechanisms to influence corporate behavior.

3. Even the pre-Trump Republican Party has long taken flack from libertarians who essentially argue that one core principle should guide their political philosophy.

3B. Personally, I have not found a strong philosophical grounding to claim that political philosophies should be reducible to one core thought from which everything neatly derives. (That would be nice, wouldn’t it?) In my experience, figuring out public policy decisions is fundamentally more complex than that due to the interplay of conflicting values and moralities.


I’m interested in the reactions and feedback; next time, I’ll split the comment into distinct parts. Would any voters care to detail their POV by #?


> libertarians who essentially argue that one core principle should guide their political philosophy.

this isn't really true of libertarianism even. the word "aggression" from the NAP does a lot of heavy lifting and is subject to a lot of different interpretations.


I've always heard aggression defined as roughly:

Initiating or threatening any forceful action against an individual or their property

Is your complaint that the boundaries of "threatening" are too squishy?


yes, "threatening" and even "forceful" are subject to interpretation.

suppose you see me walking around town with a rifle. is that threatening? maybe not if you're comfortable with open carry, but what if I do it on the sidewalk in front of your house?

is it a violation of NAP to not wear a mask during a pandemic? what if I've already tested positive for covid and am refusing to quarantine? or what if I know that I have a detectable viral load for HIV and have sex without informing my partner?

another interesting example: if you accept the claim that racist speech is an implicit threat of violence, you can use NAP to justify deplatforming.

you can make NAP imply almost any position you want, depending on how you interpret it. only a very specific and narrow interpretation implies the typical positions held by (US) libertarians.


That's fair. When it comes up, it's usually limited to explicit threats.

Basically if you don't flat out say "I'm going to do X", that's not really counted.

But I can see why it's confusing for an outsider.


There is a range of libertarian thought, but all share one principle: liberty.

The ‘non-aggression principle’, in my experience of libertarianism at least, is not as central / common across libertarian writings.

Wikipedia:

> Libertarianism (from French: libertaire, "libertarian"; from Latin: libertas, "freedom") is a political philosophy and movement that upholds liberty as a core principle.[1] Libertarians seek to maximize autonomy and political freedom, emphasizing free association, freedom of choice, individualism and voluntary association.[2] Libertarianism shares a skepticism of authority and state power, but libertarians diverge on the scope of their opposition to existing economic and political systems. Various schools of libertarian thought offer a range of views regarding the legitimate functions of state and private power, often calling for the restriction or dissolution of coercive social institutions. Different categorizations have been used to distinguish various forms of libertarianism.[3][4] This is done to distinguish libertarian views on the nature of property and capital, usually along left–right or socialist–capitalist lines.[5]


doesn't that wikipedia quote kinda make my point in a different way though? if the concept "liberty" is central rather than the NAP, isn't it just as subjective? the pursuit of freedom from coercion at the point of a gun vs freedom from coercion by economic necessity leads one toward very different conclusions.

to be clear, I certainly don't intend to shit on libertarianism. I'm far from an expert on the philosophy, and I do feel libertarians make a lot of valuable contributions to political discussions. I wouldn't want to live in a world where a libertarian got every single thing on their wishlist, though.


My #3 point, in case it wasn’t clear, is this: any particular form of libertarianism, in comparison with many other political philosophies (such as the platform of progressives in/near the Democratic Party), has considerably fewer principles that must be traded off. Libertarianism is widely regarded as ‘intellectually simpler’.

My #3B point emphasizes this question: ‘Is simplicity best? Or simply the easiest?’ (to quote a song)

In my view, the respectability of private moralities is not strongly correlated with the simplicity of their core principles. (For background on what I mean by public and private moralities, see writings by Robert Kane, such as ‘Through the Moral Maze’.)


Hard to not view this as dead on arrival. The House won't take up any major legislation emanating from the White House/DOJ in an election year, especially given Barr's corruption as AG.


Section (d)(2)(B) makes it illegal NOT to "expeditiously" remove illegal content once a provider has "actual notice" of illegality. If someone has a blog that they don't really maintain, someone else posts an illegal comment, a third person flags the illegal comment, and the blog owner doesn't get around to moderating their blog for a few months, do they go to jail?

That same section would make it difficult for web hosting providers and DNS providers not to take down other people's websites immediately if they receive any complaint about any illegal comment on those websites (they already do this sometimes, but this would make the problem worse). This would make it easy for trolls to take down small forums or blogs without the means to employ a 24/7 moderation team; just post something illegal then immediately complain about your own comment to the DNS provider before a moderator even has a chance to see it. The only solution allowed by this law would be for small forums to pre-moderate all content.

https://www.justice.gov/file/1319331/download
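
For illustration, here is a minimal, hypothetical sketch (invented names, not from the proposal) of the pre-moderation posture described above: nothing a user submits goes live until a human approves it, so there is never a window where a troll's own comment is publicly visible.

    from dataclasses import dataclass, field

    @dataclass
    class PreModeratedForum:
        """Nothing is published until a human approves it."""
        pending: list = field(default_factory=list)
        visible: list = field(default_factory=list)

        def submit(self, author: str, text: str) -> None:
            # Nothing goes live at submission time, so a troll's post is
            # never publicly visible before a moderator sees it.
            self.pending.append((author, text))

        def review(self, is_acceptable) -> None:
            # Every single submission needs a human decision before it
            # appears -- the staffing burden described above.
            while self.pending:
                post = self.pending.pop(0)
                if is_acceptable(post):
                    self.visible.append(post)

The cost, of course, isn't the code; it's paying humans to run review() around the clock, which is exactly what small forums can't afford.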


It looks like this makes service providers liable for illegal content if they do not keep logs for 1 year? Am I reading that correctly?


It’s not just illegal content, it includes “facilitation” of any illegal activity. The logs are only required when law enforcement notifies you of the activity. But if they think you know about the activity and are consciously promoting it, they don’t even need to notify you. My guess is they will probably revise this part.


Reddit has a subreddit whose main purpose is helping you shoplift without getting caught. I always assumed it was free-speech laws protecting it. Does this mean there will be another incoming subreddit purge?


Discussion about using marijuana in a medicinal context would also need to be purged, because it is a Schedule I substance, illegal at the federal level.

People who use any drug are free to talk about it because of the First Amendment, but the proposed legislation would prevent anyone from being able to share their experiences online, because their service providers might be liable for facilitating illegal activity.

That's a scary precedent.


After hosting militia organization and planning pages, would Facebook be liable for the Kenosha shootings under this legislation?


Well, Facebook is "friends" with (or at least friendly to) those currently in power, so in practice, probably not.


This is a terrible law and hopefully will not get passed. Unfortunately, both the left and the right love censorship and control.


Wouldn’t it be better to keep the existing 230 for companies with fewer than, say, 100 million users? And remove 230 protection altogether for companies with more than 100 million users? This would let the startup ecosystem flourish. It would also seriously hamstring large companies. Would this be a bad thing, though?


It's unknown whether disparate startups would do a better job with things like attacks from state actors compared to the giants. Obviously, as things become more consequential, people would rather have entities they can hold accountable. However I think our mechanisms of holding entities to account are ineffective, big or small.


I don't think startups would really get a boost: when YouTube, Tumblr, or Reddit put new restrictions on what people could post, I didn't see any users leaving for somewhere else. So I would guess that YouTube would have to remove many videos, but users would stay on YouTube.


I don’t think so, because section 230 can be genuinely used for pretty nasty stuff (eg forums for targeted harassment where moderators and management do not participate but do moderate).


The argument is always made that the First Amendment doesn't protect speech hosted on corporate servers. I don't agree - I believe that if you're operating without individual content review, you're hosting speech in the same way a telephone company is - but sure, I'll let you have it.

Regardless, that's where Article 19 of the UN's Universal Declaration of Human Rights comes in.

> "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers"

- "any media"

- "regardless of frontiers"

Seems pretty cut and dry to me.


The US, as with most States, cares little about the UDHR. The only human rights you have are those that the government would pay too high a price for disrespecting, and there is no real penalty for running afoul of the UDHR.

It's sad, but it is what it is.


Treaties in the United States only have force through ratification. The USA ratified this declaration only with a suite of reservations that "the provisions of Article 1 through 27 of the Covenant are not self-executing" and they were quite clear that "the Covenant will not create a private cause of action in U.S. Courts."

So the UDHR doesn't mean anything in the US. Even if it did, the broad protections for private property rights could be interpreted to mean that UDHR allows internet companies to drop anything from their private property that they don't feel like hosting for any reason at all.

> (1) Everyone has the right to own property alone as well as in association with others.
>
> (2) No one shall be arbitrarily deprived of his property.


Well, the First Amendment isn't Article 19 of the UN's Universal Declaration of Human Rights. Not sure where you're going with that.


The point is that speech hosted on "new frontiers" is protected by the Universal Declaration of Human Rights, regardless to whether or not it's protected by the First Amendment.


Hey, wanna guess what international human rights declaration the US is not a signatory of?


Section (g)(3) says that "Being responsible in whole or in part for the creation or development of information includes, but is not limited to, instances in which a person or entity... comments upon ... or affirmatively and substantively contributes to, modifies, or alters information provided by another person". This implies that one editor of a wiki could be held responsible for not noticing and removing illegal content posted by another user while they edit a page.

https://www.justice.gov/file/1319331/download


I think the bigger issue with the dominant social/publishing platforms is not free speech, but their ability to ruin a business with no way for the business to appeal or go to court, sometimes without even a right to learn the reason. They are increasingly becoming a marketplace and, as such, they have to be legally forced to keep it transparent and fair. Utility companies cannot turn your electricity off just because some clerk inside PG&E decided that you use this electricity in a way he personally does not like. YouTube, Facebook and Twitter do it many times a day.


Just ran into this on Twitter earlier today, seems relevant. Very useful primer on what Section 230 is and clarifications of common misconceptions: https://www.techdirt.com/articles/20200531/23325444617/hello...


Whatever happened to simple stuff like:

- You go IPO, you lose the 230 exemption.

- Your revenues reach 2000x the average US per-capita income, you lose the 230 exemption.

Done.

This was always meant as a check on online players that had reached monopoly status via network effects.

All the ugly stuff (terrorism etc.) can and should be dealt with the same way the NSA, ATF, FBI, etc. have been doing since the dawn of the internet. Section 230 should stay clear of that.


If they would like to investigate social media for anti-conservative bias, does that mean it'd be okay for the attorneys general of other states to investigate Fox News for bias, or Breitbart? Of course not.


Fox News and Breitbart are both publishers, and do not claim to be platforms to shield themselves from liability. Totally different.


Both of them run comments sections and websites where they publish content from users.


The barely concealed subtext here seems to be that this is not at all about monopolies, and is really about forcing Google to return search results favorable to conservative politicians and ideologies.


The legislation asks them to take their finger off the scales, not put it on the other side.

> The current interpretations of Section 230 have enabled online platforms to hide behind the immunity to censor lawful speech in bad faith and is inconsistent with their own terms of service. To remedy this, the department’s legislative proposal revises and clarifies the existing language of Section 230 and replaces vague terms that may be used to shield arbitrary content moderation decisions with more concrete language that gives greater guidance to platforms, users, and courts.

>

> The legislative proposal also adds language to the definition of “information content provider” to clarify when platforms should be responsible for speech that they affirmatively and substantively contribute to or modify.

From the proposal:

> a. Replace Vague Terminology in (c)(2). First, the Department supports replacing the vague catch-all “otherwise objectionable” language in Section 230(c)(2) with “unlawful” and “promotes terrorism.” This reform would focus the broad blanket immunity for content moderation decisions on the core objective of Section 230—to reduce online content harmful to children—while limiting a platform's ability to remove content arbitrarily or in ways inconsistent with its terms or service simply by deeming it “objectionable.”

That cuts both ways, you know... like it's supposed to.


https://news.ycombinator.com/item?id=24089823 which side of the scale is the finger on?


> The legislation asks them to take their finger off the scales, not put it on the other side.

This is fair. Though why conservatives would want this is a mystery. For example, Trump is allowed to post on Twitter and Facebook despite breaking their rules time and time again. Being consistent would mean removing a lot of conservative voices, voices that have been protected by these big tech companies.

https://www.nbcnews.com/tech/tech-news/sensitive-claims-bias...


> The legislation asks them to take their finger off the scales, not put it on the other side.

Yes. But the notion that tech companies are putting a "finger on the scales" in favor of liberals is the conservatives' perspective. Another perspective is that they're banning categories of speech, such as misinformation and hate speech, which just happen to be more prevalent on the conservative side.


"which just happen to be more prevalent on the conservative side."

Ahem... So when one side asks and says to burn things, it is not "hate speech", it is "peaceful protesting".

But when the other side says the same, it is "hate speech".

It was never about "hate speech".

It was about who is speaking.


This!


> The legislation asks them to take their finger off the scales

What do you mean? In what way is the DOJ's finger on the scale? Either facebook is free to manage itself as it sees fit, or it isn't. But if it isn't, then that's the finger being on the scale. That's the government controlling facebook. If you want the government controlling facebook's editorial powers, that's all well and good, but it's putting its finger on the scales.


Not the DOJ, the tech companies. Their fingers are on the scale.


That's a claim, yes. The government is [implicitly] claiming that tech companies are being inconsistent. To fix this, the government is stepping in and regulating speech. A perfect solution.


Of course it is. Conservatives have raced so quickly to the right that they've outrun the Overton Window. So rather than snap back, they're trying to push Big Tech to accelerate the movement of the window.


Big Tech tries to play for both teams: writing the legislation that regulates Big Tech, while making money from large government contracts and making public statements of support for more regulation, less regulation, or certain kinds of regulation, depending on the company and the context.

The personal is political, in life and in business. Regulation could make currently profitable companies less profitable, and would thus make compliance a foregone conclusion. If the public wills that business and data collection practices should change, and such changes are mandated by law, Big Tech will change, and the status quo will change. Those that currently benefit from the status quo make noises about changing it in an effort to guide and forestall such change as long as it is profitable to do so, to ensure the possibility of increasing profits in the future.


> Conservatives have raced so quickly to the right ...

I'd like to suggest that we use care with the term "conservative". Is it possible that the parent post conflates "conservative" with "Republican"?

I'm no authority on the U.S. Republican party, but I know it's sometimes said that the party has shifted hard to "the right". I could believe that's true, which would line up with the parent post.

However, some of my own political views would be considered "conservative" in the traditional sense, and I don't think they've undergone a hard shift to the right.


Studies have shown that rank-and-file Democrats have moved leftward much more than rank-and-file Republicans have moved rightward[0], but at least one study shows that Republican representatives have tacked rightward much more dramatically[1].

[0]: https://www.pewresearch.org/politics/interactives/political-...

[1]: https://www.washingtonpost.com/news/wonk/wp/2015/06/02/this-...


I've seen [0] before, and you can sort of explain [0] and [1] together by adjusting the frame of reference. As [0] shows, Republicans have pretty much been unmoved for 30 years. At first view you might conclude it's the Democrats who are doing the moving, but if you consider the rest of the 1st world as some kind of "average", I think you would find Republicans are moving farther right. Issues like health care, social safety nets, and worker protections are pretty much close to "center" in Europe, but are "left" in the US.

One could argue Republicans moved much farther rightward of the 1st-world mean than Democrats have, which reconciles with [1]: in order to stay mostly politically unchanged for 30 years, you must be more and more steadfast.


See also https://www.pewresearch.org/politics/interactives/party-id-t...

Democrats have always outnumbered Republicans, but the gap was much wider mid-century. The gap has narrowed considerably, but mostly because of declining Democratic party affiliation. Independents now outnumber Democrats.


Where is this wealth of principled conservatives people keep pretending exists, but is nowhere to be seen in politics or political discussions? Like it or not, Republicans are the "conservative" party; around 95% of them support Trump, and the vast majority consider themselves conservatives. Attempts by conservatives to distance themselves from the Republican party just reek of No true Scotsman.


> Like it or not, Republicans are the "conservative" party and around 95% of them support Trump and the vast majority of them consider themselves to be conservatives.

I apologize but I may be missing your point.

In the GP, I was suggesting that the term "conservative" is being used in competing ways by different groups, and we should be clear on that.

Are you saying that it's not worth making such a distinction, because the vast, vast majority of the public use the term "conservative" to mean "whatever the Republican party's platform is"?


If the vast majority of the public use the term conservative to mean Republican, that's pretty much what the word means. That's how language works. Conservatives of today have the same hallmarks as the original conservatives. It has little to nothing to do with sound fiscal policy, responsible spending, or the size of government, and everything to do with maintaining social hierarchies. From that perspective, Trump supporters are definitely conservative.


> That's how language works.

I agree. It sounds like we have disagreeing estimates regarding how popular each definition is. We may be at an impasse on this, unless you know of a good source of reliable data for it.


What's stopping web sites from getting around all this with some basic changes to their terms of use?


Subparagraph (c)1(B) says that the only safe harbor for removing content is (c)(2). (c)(2)(A) restricts the criteria that can be used to remove content to the following: "obscene, lewd, lascivious, filthy, excessively violent, promoting terrorism or violent extremism, harassing, promoting self-harm, or unlawful".

Furthermore, section (c)(1)(C) implies that removing ANY content by a user could make the forum liable for ALL OTHER content posted by that user, unless there is "good faith", but "good faith" is defined in section (g)(5)(A) to require all moderation criteria to be defined with "particularity".

Most open-ended, subjective moderation criteria (such as Hacker News's "Be kind. Don't be snarky") could be construed to lack "particularity", and are often about things not on the list in (c)(2)(A).
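
To make the contrast concrete, here is a minimal, hypothetical sketch (invented names, not from the proposal) of what moderation tooling restricted to the (c)(2)(A) list quoted above might look like:

    from enum import Enum

    class RemovalReason(Enum):
        # The only criteria (c)(2)(A) would permit, per the draft text above.
        OBSCENE = "obscene"
        LEWD = "lewd"
        LASCIVIOUS = "lascivious"
        FILTHY = "filthy"
        EXCESSIVELY_VIOLENT = "excessively violent"
        PROMOTES_TERRORISM = "promoting terrorism or violent extremism"
        HARASSING = "harassing"
        PROMOTES_SELF_HARM = "promoting self-harm"
        UNLAWFUL = "unlawful"

    def remove_post(post_id: int, reason: RemovalReason, rationale: str) -> dict:
        # Every removal must cite one enumerated criterion with
        # "particularity"; an open-ended house rule like "be kind"
        # has nowhere to fit in this scheme.
        return {"post": post_id, "reason": reason.value, "rationale": rationale}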


I'm talking about abandoning the 230 safe harbor protections (since this bill would render them worthless), and instead requiring in their TOU that posters agree to assume full liability for content they post, and also agree that anything they post can be removed for any reason. Existing users would be prompted to agree to these terms, and if not their accounts would be wiped.


Terms of use are not legally binding where they contain something that contradicts existing law, just like any other contract.

For example, imagine an employer wrote an employment contract signed by all employees that said something obviously against the law, like "if you get pregnant, we are letting you go". Even though employees signed this contract, that specific clause was against the law, so there is no way to legally enforce it. If the employer tries to legally enforce it, they will be in for a rude awakening.

Another, more realistic, example. Let's say you are about to get married and decide to get a pre-nup, and both you and your partner sign a clause that says "any future earnings made during this marriage will be split 90/10 in case of divorce, not 50/50". If a divorce actually happens, this clause will be entirely useless, despite both parties willingly signing it: each will be able to claim a 50/50 split, even though they initially agreed to a different one.

tl;dr: the law overrides anything one can put into their terms of use or any other contract.


And the problem here is that it is inseparable from the political issue.

The core is 4a, replace vague terminology. The point is to end 230 unless social media is compelled to reproduce right wing content. Nothing else. The other stuff about illegal material doesn't really matter.

Why this? Because this is all right wing media talks about with respect to 230. They want social media punished for removing their material. They publish their vicious lies which are removed by the website. Then they loudly proclaim the constitution is the reason.

So we can see through the politics. The rest of the proposed bill is fluff. The part that matters is the compelled speech.

So we have this: "Publish our lies or you are liable for all other user content that might be illegal."


Right-wing 'material' is often refused for inciting violence, targeted harassment, and now, increasingly, explicit lies on matters of significant civic importance - such as when/where to vote, false health info in the middle of a pandemic, etc.

It's bad-faith content, and then when their crap is reasonably removed, there's endless crying. In fact, a lot of the bad-faith content is deliberately mixed up with political speech to incite action, which allows them to claim martyrdom. If you do a long YouTube video that is mostly political speech and then mix in five minutes of targeted violent harassment in the middle, YouTube is going to remove the whole thing. This is total bad faith.

The ironic thing is that the right-wing "alternative forums" to Reddit/YouTube/Twitter all have extensive moderation because, surprise surprise, trolling behavior degrades the value of online discussion boards. Don't be surprised if, when all you do is stand in a theater yelling, you are shown the door. Or worse, when you start yelling threats at another moviegoer. That's the real analogy for 'right-wing content' being banned.


Something like this will probably pass eventually because there is bipartisan support. Biden said in January that he thinks that Section 230 should be "revoked... immediately."


"Promoting Transparency and Open Discourse

First, the draft legislation has a series of reforms to promote transparency and open discourse and ensure that platforms are fairer to the public when removing lawful speech from their services."


[flagged]


It's second from the top right now and it's only an hour old.


There is new proposed internet legislation every week, an EFF donation drive every two weeks, Reddit shuts down every month for some protest.

At this point, I'm just beyond fatigued by it all.


And when you finally run out of energy, wham, every website will require approval by a committee chaired by the speaker of the house and several tech CEOs. :)


Not really. They can ask Congress nicely, but unless both the House and Senate agree to pick it up, it's just another dumb letter on fancy stationery.


Details schmetails, this is clearly Trump's promised retaliation against Twitter for marking his posts as dubious. The man is so fabulously self-obsessed.


I think the title is correct, but it is not very descriptive of what this is about for people just having a look at it.

TL;DR quote from it:

"We therefore urge Congress to make these necessary reforms to Section 230 and begin to hold online platforms accountable both when they unlawfully censor speech and when they knowingly facilitate criminal activity online.”


Online platforms "censoring" speech was never unlawful. What section 230 did was protect them from liability over speech that they did not censor. E.g., if I posted a libelous comment about you in the comments section of an online platform, section 230 protected the online platform from being sued by you for libel over my comment.

Weakening the 230 protections will do nothing to encourage platforms to censor less. Indeed, it will have the opposite effect.

I haven't read the proposal, but if it goes beyond just weakening 230 and instead tries to say platforms cannot censor speech, then it won't get past the First Amendment.


It's completely crazy; I don't get why my previous comment was downvoted so much.

To me it does not make any sense at all... Did anyone understand it as if I was giving a pro or con opinion?

I just tried to clarify, in a short text, what this is about for people not aware of what 'section 230' is. Also, there are now a lot of longer comments, which can be more explicit.

Would anyone have a logical explanation to share with me?


Congress will not likely accept this law without major changes, and the DoJ - unless we wind up tossing out the Constitution entirely (which, sadly, could still happen) - cannot create law.


"The current interpretations of Section 230 have enabled online platforms to hide behind the immunity to censor lawful speech in bad faith and is inconsistent with their own terms of service. To remedy this, the department’s legislative proposal revises and clarifies the existing language of Section 230 and replaces vague terms that may be used to shield arbitrary content moderation decisions with more concrete language that gives greater guidance to platforms, users, and courts.

The legislative proposal also adds language to the definition of “information content provider” to clarify when platforms should be responsible for speech that they affirmatively and substantively contribute to or modify."

Can someone competently defend this not being used directly as a tool by Trump to prevent his deeply, deeply harmful misinformation about the virus and other things from being censored?


Seeing Barr on the proposal makes it look DOA. He also wanted to mandate compromised encryption for everyone's security benefit.


This is essentially putting penalties in place if social media companies use dark patterns in their algorithms behind the scenes. Long term, this is good for everyone.


This is impressively well put. Section 230 protections should not apply if you censor people for saying things when, two months back, you were censoring people for saying the opposite.


“BAD SAMARITAN” CARVE-OUT. Subsection (c)(1) shall not apply in any criminal prosecution under State law or any State or Federal civil action brought against an interactive computer service provider if, at the time of the facts giving rise to the prosecution or action, the service provider acted purposefully with the conscious object to promote, solicit, or facilitate material or activity by another information content provider that the service provider knew or had reason to believe would violate Federal criminal law, if knowingly disseminated or engaged in.

Color me surprised this wasn't already in the law.


(5) GOOD FAITH. To restrict access to or availability of specific material “in good faith,” an interactive computer service provider must— (A) have publicly available terms of service or use that state plainly and with particularity the criteria the service provider employs in its content moderation practices;

This is going to be very interesting for Reddit. Not that I disagree with the text.



