The current comments seem to say this rings the death knell of social media and that it just leads to government censorship. I'm not so sure.
I think the ultimate problem is that social media is not unbiased — it curates what people are shown. In that role they are no longer an impartial party merely hosting content. It seems this ruling is saying that the curation being algorithmic does not absolve the companies from liability.
In a very general sense, this ruling could be seen as a form of net neutrality. Currently social media platforms favor certain content, while down weighting others. Sure, it might be at a different level than peer agreements between ISPs and websites, but it amounts to a similar phenomenon when most people interact on social media through the feed.
Honestly, I think I'd love to see what changes this ruling brings about. HN is quite literally the only social media site (loosely interpreted) I even have an account on anymore, mainly because of how truly awful all the sites have become. Maybe this will make social media more palatable again? Maybe not, but I'm inclined to see what shakes out.
I'm probably misunderstanding the implications but, IIUC, as it is, HN is moderated by dang (and others?) but still falls under 230, meaning HN is not responsible for what other users post here.
With this ruling, HN is suddenly responsible for all posts here specifically because of the moderation. So they have 2 options.
(1) Stop the moderation so they can be safe under 230. Result, HN turns to 4chan.
(2) enforce the moderation to a much higher degree by say, requiring non-anon accounts and TOS that make each poster responsible for their own content and/or manually approve every comment.
I'm not even sure how you'd run a website with user content if you wanted to moderate that content and still avoid being liable for illegal content.
> With this ruling, HN is suddenly responsible for all posts here specifically because of the moderation.
I think this is a mistaken understanding of the ruling. In this case, TikTok decided, with no other context, to make a personalized recommendation to a user who visited their recommendation page. On HN, your front page is not different from my front page. (Indeed, there is no personalized recommendation page on HN, as far as I'm aware.)
> The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment.
I don't see how this is about personalization. HN has an algorithm that shows what it wants in the way it wants.
So, yes, the TikTok FYP is different from a forum with moderation.
But the basis of this ruling is basically "well the Moody case says that curation/moderation/suggestion/whatever is First Amendment protected speech, therefore that's your speech and not somebody else's and so 230 doesn't apply and you can be liable for it." That rationale extends to basically any form of moderation or selection, personalized or not, and would blow a big hole in 230's protections.
Given generalized anti-Big-Tech sentiment on both ends of the political spectrum, I could see something that claimed to carve out just algorithmic personalization/suggestion from protection meeting with success, either out of the courts or Congress, but it really doesn't match the current law.
"well the Moody case says that curation/moderation/suggestion/whatever is First Amendment protected speech, therefore that's your speech and not somebody else's and so 230 doesn't apply and you can be liable for it."
I see a lot of people saying this is a bad decision because it will have consequences they don't like, but the logic of the decision seems pretty damn airtight as you describe it. If the recommendation systems and moderation policies are the company's speech, then the company can be liable when the company "says", by way of their algorithmic "speech", to children that they should engage in some reckless activity likely to cause their death.
It's worth noting that personalisation isn't moderation. An app like TikTok needs both.
Personalisation simply matches users with the content the algorithm thinks they want to see. Moderation (which is typically also an algorithm) tries to remove harmful content from the platform altogether.
The ruling isn't saying that Section 230 doesn't apply because TikTok moderated. It's saying Section 230 doesn't apply because TikTok personalised, allegedly knew about the harmful content and allegedly didn't take enough action to moderate this harmful content.
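To make that two-stage distinction concrete, here's a minimal sketch; the function names and the policy flag are made up for illustration. Moderation prunes the pool for everyone, personalisation then orders what's left for one particular user.

    def moderate(posts: list[dict]) -> list[dict]:
        # Platform-wide filter: drop anything flagged as violating policy.
        return [p for p in posts if not p.get("violates_policy", False)]

    def personalise(posts: list[dict], user_interests: dict[str, float]) -> list[dict]:
        # Per-user ordering: rank by how well each post's topic matches inferred interests.
        return sorted(posts, key=lambda p: user_interests.get(p["topic"], 0.0), reverse=True)

    def build_feed(posts: list[dict], user_interests: dict[str, float]) -> list[dict]:
        return personalise(moderate(posts), user_interests)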
>Personalisation simply matches users with the content the algorithm thinks they want to see.
These algorithms aren't matching you with what you want to see; they're trying to maximize your engagement. In other words, it's what the operator wants you to see, so you'll use the site more and generate more data or revenue. It's a fine but extremely important distinction.
What the operator wants you to see also gets into the area of manipulation, hence 230 shouldn't apply: by building algorithms around manipulation or paid-for boosting, companies move from impartial, unknowing deliverers of harmful content into committed distributors of it.
"harmful content" is such a joke word.
if a piece of text or media could harm people, the military would have weaponized it long ago.
Even monty python made a satire of such a "harmful content" scenario:
https://www.youtube.com/watch?v=Qklvh5Cp_Bs
Big tech's hyperbole around the word is even more severe than Monty Python's absurdist satire. Sadly it's not a joke.
Doesn't seem to have anything to do with personalization to me, either. It's about "editorial judgement," and an algorithm isn't necessarily a get out of jail free card unless the algorithm is completely transparent and user-adjustable.
I even think it would count if the only moderation you did on your Lionel model train site was to make sure that most of the conversation was about Lionel model trains, and that they be treated in a positive (or at least neutral) manner. That degree of moderation, for that purpose, would make you liable if you left illegal or tortious content up i.e. if you moderate, you're a moderator, and your first duty is legal.
If you're just a dumb pipe, however, you're a dumb pipe and get section 230.
I wonder how this works with recommendation algorithms, though, seeing as they're also trade secrets. Even when they're not dark and predatory (advertising related.) If one has a recommendation algo that makes better e.g. song recommendations, you don't want to have to share it. Would it be something you'd have to privately reveal to a government agency (like having to reveal the composition of your fracking fluid to the EPA, as an example), and they would judge whether or not it was "editorial" or not?
[edit: that being said, it would probably be very hard to break the law with a song recommendation algorithm. But I'm sure you could run afoul of some financial law still on the books about payola, etc.]
> That degree of moderation, for that purpose, would make you liable if you left illegal or tortious content up i.e. if you moderate, you're a moderator, and your first duty is legal.
I'm not sure that's quite it. As I read the article and think about its application to Tiktok, the problem was more that "the algorithm" was engaged in active and allegedly expressive promotion of the unsafe material. If a site like HN just doesn't remove bad content, then the residual promotion is not exactly Hacker News's expression, but rather its users'.
The situation might change if a liability-causing article were itself given 'second chance' promotion or another editorial thumb on the scale, but I certainly hope that such editorial management is done with enough care to practically avoid that case.
Specifically, NetChoice argued that personalized feeds based on user data were protected as the platforms' own First Amendment speech. This went to the Supreme Court, and the Supreme Court agreed. Now precedent is set by the highest court that those feeds are an "expressive product". It doesn't make sense, but that's how the law works: by trying to define as best as possible the things in gray areas.
And they probably didn't think through how this particular argument could affect other areas of their business.
It absolutely makes sense. What NetChoice held was that the curation aspect of algorithmic feeds makes the weighting approach equivalent to the speech of the platforms and therefore when courts evaluated challenges to government imposed regulation, they had to perform standard First Amendment analysis to determine if the contested regulation passed muster.
Importantly, this does not mean that before the Third Circuit decision platforms could just curate any which way they want and government couldn't regulate at all -- the mandatory removal regime around CSAM content is a great example of government regulating speech and forcing platforms to comply.
The Third Circuit decision, in a nutshell, is telling the platforms that they can't have their cake and eat it too. If they want to claim that their algorithmic feeds are speech that is protected from most government regulation, they can't simultaneously claim that these same algorithmic feeds are mere passive vessels for the speech of third parties. If that were the case, then their algorithms would enjoy no 1A protection from government regulation. (The content itself would still have 1A protection based on the rights of the creators, but the curation/ranking/privileging aspect would not).
I misunderstood the Supreme Court ruling as hinging on per-user personalization of algorithms and thought it made a distinction between editorial decisions shown to everyone vs. individual users. I thought that part didn't make sense. I see now it's really the Third Circuit ruling that interpreted the user-customization part as editorial decisions, without excluding the non-per-user algorithms.
This ruling is a natural consequence of the NetChoice ruling. Social media companies can't have it both ways.
> If that were the case, then their algorithms would enjoy no 1A protection from government regulation.
Well, the companies can still probably claim some 1st Amendment protections for their recommendation algorithms (for example, a law banning algorithmic political bias would be unconstitutional). All this ruling does is strip away the safe harbour protections, which weren't derived from the 1A in the first place.
> TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.”
That's the difference between the case and a monolithic electronic bulletin board like HN. HN follows an old-school BB model very close to the models that existed when Section 230 was written.
Winding up in the same place as the defendant would require making a unique, dynamic, individualized BB for each user tailored to them based on pervasive online surveillance and the platform's own editorial "secret sauce."
The HN team explicitly and manually manages the front page of HN, so I think it's completely unarguable that they would be held liable under this ruling if at least the front page contained links to articles that caused harm. They manually promote certain posts that they find particularly good, even if they didn't get a lot of votes, so this is even more direct than what TikTok did in this case.
It is absolutely still arguable in court, since this ruling interpreted the Supreme Court ruling to pertain to “a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,”
In other words, the Supreme Court decision mentions editorial decisions, but no court case has yet established whether that covers editorial decisions in the HN front page sense (where mods make some choices but nothing is personalized). Common sense may say mods making decisions is editorial decisions, but it's a gray area until a court case makes it clear. Precedent is the most important thing when interpreting law, and the only precedent we have is that it pertains to personalized feeds.
> Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech.
HN is _not_ a monolithic bulletin board -- the messages on a BBS were never (AFAIK) sorted by 'popularity' and users didn't generally have the power to demote or flag posts.
Although HN's algorithm depends (mostly) on user input for how it presents the posts, it still favours some over others and still runs afoul here. You would need a literal 'most recent' chronological view and HN doesn't have that for comments. It probably should anyway!
@dang We need the option to view comments chronologically, please
Writing @dang is a no-op. He'll respond if he sees the mention, but there's no alert sent to him. Email hn@ycombinator.com if you want to get his attention.
That said, the feature you requested is already implemented but you have to know it is there. Dang mentioned it in a recent comment that I bookmarked: https://news.ycombinator.com/item?id=41230703
To see comments on this story sorted newest-first, change the link to
> HN is _not_ a monolithic bulletin board -- the messages on a BBS were never (AFAIK) sorted by 'popularity' and users didn't generally have the power to demote or flag posts.
I don't think the feature was that unknown. Per Wikipedia, the CDA passed in 1996 and Slashdot was created in 1997, and I doubt the latter's moderation/voting system was that unique.
Key words are "editorial" and "secret sauce". Platforms should not be liable for dangerous content which slips through the cracks, but certainly should be when their user-personalized algorithms mess up. Can't have your cake and eat it too.
Dangerous content slipping through the cracks and the algorithms messing up is the same thing. There is no way for content to "slip through the cracks" other than via the algorithm.
You can view the content via direct links or search, recommendation algorithms isn't the only way to view it.
If you host child porn that gets shared via direct links, that is bad even if almost nobody sees it, but it is much, much worse if you start recommending it to people as well.
Everything is related. Search results are usually generated based on recommendations, and direct links usually influence recommendations, or include recommendations as related content.
It's rarely if ever going to be the case that there is some distinct unit of code called "the algorithm" that can be separated and considered legally distinct from the rest of the codebase.
Moderating content is explicitly protected by the text of Section 230(c)(2)(a):
"(2)Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or"
Algorithmic ranking, curation, and promotion are not.
The text of the Third Circuit decision explicitly distinguishes algorithms that respond to user input -- such as by surfacing content that was previously searched for, favorited, or followed -- from those that push content on the platform's own initiative. Allowing users to filter content by time, upvotes, number of replies, etc. would be fine.
The FYP algorithm that's contested in the case surfaced the video to the minor without her searching for that topic, following any specific content creator, or positively interacting (liking/favoriting/upvoting) with previous instances of said content. It was fed to her based on a combination of what TikTok knew about her demographic information, what was trending on the platform, and TikTok's editorial secret sauce. TikTok's algorithm made an active decision to surface this content to her, despite knowing that other children had died from similar challenge videos, they promoted it and should be liable for that promotion.
But something like Reddit would be held liable for showing posts, then, because you get shown different results depending on the subreddits you subscribe to, your browsing patterns, what you've upvoted in the past, and more. Pretty much any recommendation engine is a no-go if this ruling becomes precedent.
TBH, Reddit really shouldn't have 230 protection anyways.
You can't be licensing user content to AI as it's not yours. You also can't be undeleting posts people make (otherwise it's really reddit's posts and not theirs).
When you start treating user data as your own; it should become your own and that erodes 230.
From my reading, if the site only shows you based on your selections, then it wouldn't be liable. For example, if someone else with the exact same selections gets the same results, then that's not their platform deciding what to show.
If it does any customization based on what it knows about you, or what it tries to sell you because you are you, then it would be liable.
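A minimal sketch of that line, with entirely made-up data structures and weights: the first feed is a pure function of the user's explicit selections, so identical selections give identical results; the second also folds in signals the platform inferred about the individual (age, dwell time), which is the part this reading would treat as the platform's own choice.

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        post_id: int
        topic: str
        created_at: float  # unix timestamp
        upvotes: int

    @dataclass
    class Profile:
        age: int
        watch_seconds: dict[int, float] = field(default_factory=dict)  # post_id -> dwell time

    def selection_based_feed(posts: list[Post], subscribed_topics: set[str]) -> list[Post]:
        # Depends only on explicit user choices: identical subscriptions give identical feeds.
        chosen = [p for p in posts if p.topic in subscribed_topics]
        return sorted(chosen, key=lambda p: p.created_at, reverse=True)

    def personalized_feed(posts: list[Post], profile: Profile) -> list[Post]:
        # Also weights by signals the platform inferred about the individual user.
        def score(p: Post) -> float:
            dwell = profile.watch_seconds.get(p.post_id, 0.0)
            demo_boost = 1.5 if profile.age < 18 and p.topic == "challenges" else 1.0
            return (p.upvotes + dwell) * demo_boost
        return sorted(posts, key=score, reverse=True)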
Yep, recommendation engines would have to be very carefully tuned, or you risk becoming liable. Recommending only curated content would be a way to protect yourself, but that costs money that companies don't have to pay today. It would be doable.
> For example, if someone else with the exact same selections gets the same results, then that's not their platform deciding what to show.
This could very well be true for TikTok. Of course "selection" would include liked videos, how long you spend watching each video, and how many videos you have posted.
And on the flip side a button that brings you to a random video would supply different content to users regardless of "selections".
It could be difficult to draw the line. I assume TikTok’s suggestions are deterministic enough that an identical user would see the same things - it’s just incredibly unlikely to be identical at the level of granularity that TikTok is able to measure due to the type of content and types of interactions the platform has.
An account otherwise identical made two days later is going to interact with a different stream. Technically deterministic but in practice no two end up ever being exactly alike, (despite similar people having similar channels.)
The "answer" will turn back into tv channels. Have communities curate playlists of videos, and then anyone can go watch the playlist at any time. Reinvent broadcast tv / the subreddit.
>Pretty much any recommendation engine is a no-go if this ruling becomes precedent.
That kind of sounds... great?
The only instance where I genuinely like to have a recommendation engine around is music streaming. Like yeah, sometimes it does recommend great stuff.
But anywhere else? No thank you
If one were to subscribe to such a distinction between algorithmic ranking and algorithmic suggestions I would liken it with a broad paintbrush to:
Ranking: A group of people share a ouija board, and together make selections.
Suggestion: A singular entity clips together media to create a new narrative, akin to a ransom note.
If the sum of the collection of content is more than its parts, if it is different in strength not kind, or self-reinforcing, it's really hard to distinguish where the algorithm ends and the voters begin.
Per the court of appeals, TikTok is not in trouble for showing a blackout challenge video. TikTok is in trouble for not censoring them after knowing they were causing harm.
> "What does all this mean for Anderson’s claims?
Well, § 230(c)(1)’s preemption of traditional publisher liability
precludes Anderson from holding TikTok liable for the
Blackout Challenge videos’ mere presence on TikTok’s
platform. A conclusion Anderson’s counsel all but concedes.
But § 230(c)(1) does not preempt distributor liability, so
Anderson’s claims seeking to hold TikTok liable for
continuing to host the Blackout Challenge videos knowing
they were causing the death of children can proceed."
As in, dang would be liable if, say, somebody started a blackout challenge post on HN and he didn't start censoring all of them once news reports of programmers dying broke out.
The credulity of kids, who believe and are easily influenced by what they see online, had a big role in this ruling; disregarding that is a huge disservice to a productive discussion.
Personally, I wouldn't want search engines censoring results for things explicitly searched for, but I'd still expect that social media should be responsible for harmful content they push onto users who never asked for it in the first place. Push vs Pull is an important distinction that should be considered.
Did some hands come out of the screen, pull out a rope, then choke someone? Platforms shouldn’t be held responsible when 1 out of a million users wins a Darwin award.
I think it's a very different conversation when you're talking about social media sites pushing content they know is harmful onto people who they know are literal children.
Trying to define "all" is an impossibility; but, by virtue of having taken no action whatsoever, answering that question is irrelevant in the context of this particular judgment: Tiktok took no action, so the definition of "all" is irrelevant. See also for example: https://news.ycombinator.com/item?id=41393921
In general, judges will be ultimately responsible for evaluating whether "any", "sufficient", "appropriate", etc. actions were taken in each future case judgement they make. As with all things legalese, it's impossible to define with certainty a specific degree of action that is the uniform boundary of acceptable; but, as evident here, "none" is no longer permissible in that set.
Any good-faith attempt at censoring would have served as a reasonable defense even if they technically didn't catch 100% of them, such as blocking videos with the word "blackout" in their title or manually approving such videos, but they did nothing instead.
> TikTok is in trouble for not censoring them after knowing they were causing harm.
This has interesting higher-order effects on free speech. Let's apply the same ruling to vaccine misinformation, or the ability to organize protests on social media (which opponents will probably call riots if there are any injuries)
I don't doubt the same court relishes the thought of deciding what "harm" is on a case-by-case basis. The continued politicization of the courts will not end well for a society that nominally believes in the rule of law. Some quarters have been agitating for removing §230 safe harbor protections (or repealing it entirely), and the courts have delivered.
The personalized aspect wasn't emphasized at all in the ruling. It was the curation. I don't think TikTok would have avoided liability by simply sharing the video with everyone.
"I think this is a mistaken understanding of the ruling."
I think that is quite generous. I think it is a deliberate reinterpretation of what the order says. The order states that 230(c)(1) provides immunity for removing harmful content after being made aware of it, i.e., moderation.
Section 230 hasn't changed or been revoked or anything, so, from what I understand, manual moderation is perfectly fine, as long as that is what it is: moderation. What the ruling says is that "recommended" content and personalised "for you" pages are themselves speech by the platform, rather than moderation, and are therefore not under the purview of Section 230.
For HN, Dang's efforts at keeping civility don't interfere with Section 230. The part relevant to this ruling is whatever system takes recency and upvotes, and ranks the front page posts and comments within each post.
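For what it's worth, the kind of system being described is usually approximated in public write-ups as something like the sketch below; the exponent and offsets are the commonly quoted figures, not HN's actual code, and the real site layers manual moderator adjustments on top. Nothing user-specific goes in, so every visitor sees the same ordering.

    def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
        # More upvoted floats higher; older items decay by the gravity exponent.
        return (points - 1) / (age_hours + 2) ** gravity

    def front_page(items: list[dict]) -> list[dict]:
        # items look like {"title": ..., "points": ..., "age_hours": ...}
        return sorted(items, key=lambda i: rank_score(i["points"], i["age_hours"]), reverse=True)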
Under Judge Matey's interpretation of Section 230, I don't even think option 1 would remain on the table. He includes every act except mere "hosting" as part of publisher liability.
Yeah, no moderation leads to spam, scams, rampant hate, and CSAM. I spent all of an hour on Voat when it was in its heyday and it was mostly literal Nazis calling for the extermination of undesirables. The normies just stayed on moderated Reddit.
Were there non-KKK/Nazi/QAnon subvoats (or whatever they called them)? The one time I visited the site, every single post on the front page was alt-right nonsense.
There was a whole lot of stuff that very judgemental people just never got to see; blinded by rage at wrongthink, they would gladly sweep all the others away to get some victory over something that will still just be somewhere else.
I enjoyed the communities around farming, homesteading, homeschooling of kids, classic literature, etc... but oh no, someone said some naughty words? Let's shut it down.
Yeah, I think my comment implied I'll cry and turn away if I see bad words. It was more that I'm personally not a Nazi, don't want to read their stuff, and it appeared to be a Nazi site. Why wouldn't farming, homesteading, homeschooling, etc. just use Reddit?
It was the people who were chased out of other websites that drove much of their traffic so it's no surprise that their content got the front page. It's a shame that they scared so many other people away and downvoted other perspectives because it made diversity difficult.
Not sure about the downvotes on this comment; but what parent says has precedent in Cubby Inc. vs Compuserve Inc.[1] and this is one of the reasons Section 230 came about to be in the first place.
HN is also heavily moderated with moderators actively trying to promote thoughtful comments over other, less thoughtful or incendiary contributions by downranking them (which is entirely separate from flagging or voting; and unlike what people like to believe, this place relies more on moderator actions as opposed to voting patterns to maintain its vibe.) I couldn't possibly see this working with the removal of Section 230.
Theoretically, your liability is the same because the First Amendment is what absolves you of liability for someone else's speech. Section 230 provides an avenue for early dismissal in such a case if you get sued; without Section 230, you'll risk having to fight the lawsuit on the merits, which will require spending more time (more fees).
I'd probably like the upvote itself to be considered "speech". The practical effect of upvoting is to endorse, together with the site's moderators and algorithm-curators, the comment to be shown to a wider audience.
Along those lines, an upvote, i.e. an endorsement, would be protected up to any point where it violated one of the free speech exceptions, e.g. incitement.
Nuff said. Underneath the everlasting political cesspool from /pol/ and the... _specific_ atmosphere, it's still one of the best places to visit for tech-based discussion.
2) Require confirmation you are a real person (check ID) and attach accounts per person. The commercial Internet has to follow the laws they're currently ignoring and the non-commercial Internet can do what they choose (because of being untraceable).
4chan is moderated, and the moderation is different on each board, with the only real global moderation rule being "no illegal stuff". In addition, the site does curate the content it shows you using an algorithm, even though it is a very basic one (the thread with the most recent reply goes to the top of the page, and threads older than X are removed automatically).
For example, the QAnon conspiracy nuts got moderated out of /pol/ for arguing in bad faith / just being too crazy to actually have any kind of conversation with, and they fled to another site (8chan and later 8kun) that has even less moderation.
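For the curious, that "very basic" bump-order behaviour is roughly the following; the field name and the cutoff value are assumptions for illustration, not 4chan's actual implementation.

    import time

    def catalog_order(threads: list[dict], max_age_days: float = 3.0) -> list[dict]:
        # Bump order: the most recently replied-to thread comes first,
        # and threads whose last reply is older than the cutoff are pruned.
        cutoff = time.time() - max_age_days * 86400
        live = [t for t in threads if t["last_reply_at"] >= cutoff]
        return sorted(live, key=lambda t: t["last_reply_at"], reverse=True)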
Yep, 4chan isn't bad because "people I disagree with can talk there", it's bad because the interface is awful and they can't attract enough advertisers to meet their hosting demands.
All of these have their algorithms specifically curated to try to keep you angry. YouTube outright ignores your blocks every couple of months, and no matter how many people dropping n-bombs you report and block, it unendingly pushes more and more.
These company know that their algorithms are harmful and they push them anyway. They absolutely should have liability for what their algorithm pushes.
I don't understand your explanation. Do you mean just voting itself? That's not controlled or managed by HN. That's just more "user generated content." That posts get hidden or flagged due to thresholding is non-discriminatory and not _individually_ controlled by the staff here.
Or.. are you suggesting there's more to how this works? Is dang watching votes and then making decisions based on those votes?
"Editorial control" is more of a term of art and has a narrower definition then you're allowing for.
The HN moderation team makes a lot of editorial choices, which is what gives HN its specific character. For example, highly politically charged posts are manually moderated and kept off the main page regardless of votes, with limited exceptions entirely up to the judgement of the editors. For example, content about the wars in Ukraine and Israel is not allowed on the mainpage except on rare occasions. dang has talked a lot about the reasoning behind this.
The same applies to comments on HN. Comments are not moderated based purely on legal or certain general "good manners" grounds, they are moderated to keep a certain kind of discourse level. For example, shallow jokes or meme comments are not generally allowed on HN. Comments that start discussing controversial topics, even if civil, are also discouraged when they are not on-topic.
Overall, HN is very much curated in the direction of a newspaper "letter to the editor" section rather than something more algorithmic and hands-off like the Facebook wall or TikTok feed. So there is no doubt whatsoever, I believe, that HN would be considered responsible for user content (and is, in fact, already pretty good at policing that in my experience, at least on the front page).
> The HN moderation team makes a lot of editorial choices, which is what gives HN its specific character. For example, highly politically charged posts are manually moderated and kept off the main page regardless of votes, with limited exceptions entirely up to the judgement of the editors. For example, content about the wars in Ukraine and Israel is not allowed on the mainpage except on rare occasions. dang has talked a lot about the reasoning behind this.
This is meaningfully different in kind from only excluding posts that reflect certain perspectives on such a conflict. Maintaining topicality is not imposing a bias.
> This is meaningfully different in kind from only excluding posts that reflect certain perspectives on such a conflict. Maintaining topicality is not imposing a bias.
Maintaining topicality is literally a bias. Excluding posts that reflect certain perspectives is censorship.
There are things like 'second chance', where the editorial team can re-up posts they feel didn't get a fair shake the first time around, and sometimes if a post gets too 'hot' they will cool it down. All of this is understandable, but it unfortunately does mean they are actively moderating content and thus are responsible for all of it.
Dang has been open about voting being only one part of the way HN works, and that manual moderator intervention does occur. They will downweigh the votes of "problem" accounts, manually adjust the order of the frontpage, and do whatever they feel necessary to maintain a high signal to noise ratio.
Every time you see a comment marked as [dead] that means a moderator deleted it. There is no auto-deletion resulting from downvotes.
Even mentioning certain topics, such as Israel's invasion of Palestine, even when the mention is on-topic and not disruptive, as in this comment you are reading, is practically a death sentence for a comment. Not because of votes, but because of the moderators. Downvotes may prioritize which comments go in front of moderators (we don't know) but moderators make the final decision; comments that are downvoted but not removed merely stick around in a light grey colour.
By enabling showdead in your user preferences and using the site for a while, especially reading controversial threads, you can get a feel for what kinds of comments are removed by moderators exercising their judgement. It is clear that most moderation is about editorial control and not simply the removal of disruption.
This comment may be dead by the time you read it, due to the previous mention of Palestine. Hi to users with showdead enabled. Its parent will probably merely be downvoted, because it's wrong but doesn't contain anything that would irk the mods.
Comments that are marked [dead] without the [flagged] indicator are like that because the user that posted the comment has been banned. For green (new) accounts this can be due to automatic filters that threw up false positives for new accounts. For old accounts this shows that the account (not the individual comment) has been banned by moderators. Users who have been banned can email hn@ycombinator.com pledging to follow the rules in the future and they'll be granted another chance. Even if a user remains banned, you can unhide a good [dead] comment by clicking on its timestamp and clicking "vouch."
Comments are marked [flagged] [dead] when ordinary users have clicked on the timestamp and selected "flag." So user downvotes cannot kill a comment, but flagging by ordinary non-moderator users can kill it.
Freedom of speech is not freedom of reach for their personal curation preferences, or for narrative shaping driven by confirmation bias and survivorship bias. Tech is in the business of putting thumbs on the scales, boosting some signals and suppressing others, based on some hokey story of academic and free-market genius.
The pro-science crowd (which includes me, FWIW) seems incapable of providing proof that any given scientist is that important. The same old social-politics norms inflate some and deflate others, and we take our survival as confirmation that we're special. One's education is vacuous prestige given that physics applies equally; oh, you did the math! Yeah, I just tell the computer to do it. Oh, you memorized the circumlocutions and dialectic of some long-dead physicist. Outstanding.
There's a lot of ego-driven, banal, classist nonsense in tech and science. At the end of the day we're all just meat suits with the same general human condition.
(1) 4chin is too dumb to use HN, and there's no image posting, so I doubt they'd even be interested in raiding us.
(2) I've never seen anything illegal here; I'm sure it happens, and it gets dealt with quickly enough that it's not really ever going to be a problem if things continue as they have been.
They may lose 230 protection, sure, but that's probably not really a problem here. For Facebook et al, it's going to be an issue, no doubt. I suppose they could drop their algos and bring back chronological feeds, but my guess is that wouldn't be profitable given that ad-tech and content feeds are one and the same at this point.
I'd also assume that "curation" is the sticking point here; if a platform can claim that they do not curate content, they probably keep 230 protection.
I don't frequent 4cuck, I use soyjak.party which I guess from your perspective is even worse, but there are of plenty of smart people on the 'cuck thoughbeit, like the gemmy /lit/ schizo. I think you would feel right at home in /sci/.
Certain boards most definitely raid various HN threads.
Specifically, every political or science thread that makes it is raided by 4chan. 4chan also regularly pushes anti-science and anti-education agenda threads to the top here, along with posts from various alt-right figures on occasion.
Seems pretty sparse to me, and from a casual perusal, I haven't seen any actual calls to raiding anything here, it's more of a reference where articles/posts have happened, and people talking about them.
Remember, not everyone who you disagree with comes from 4chan, some of them probably work with you, you might even be friends with them, and they're perfectly serviceable people with lives, hopes, dreams, same as yours, they simply think differently than you.
lol dude. Nobody said that 4chan links are posted to HN, just that 4chan definitely raids HN.
4chan is very well known for brigading. It is also well known that posting links for brigades on 4chan, as well as a number of other locations such as Discord, is an extremely common thing the alt-right does to try to raise the "validity" of their statements.
I also did not claim that only these opinions come from 4chan. Nice strawman bro.
Also, my friends do not believe these things. I do not make a habit of being friends with people that believe in genociding others purely because of sexual orientation or identity.
Go ahead and type that search query into google and see what happens.
Also, the alt-right is a giant threat if you categorize everyone to the right of you as alt-right, which seems to be the standard definition.
That's not how I've chosen to live, and I find that it's peaceful to choose something more reasonable. The body politic is cancer on the individual, and on the list of things that are important in life, it's not truly important. With enough introspection you'll find that the tendency to latch onto politics, or anything politics-adjacent, comes from an overall lack of agency over the other aspects of life you truly care about. It's a vicious cycle. You have a finite amount of mental energy, and the more you spend on worthless things, the less you have to spend on things that matter, which leads to you latching further on to the worthless things, and having even less to spend on things that matter.
It's a race to the bottom that has only losers. If you're looking for genocide, that's the genocide of the modern mind, and you're one foot in the grave already. You can choose to step out now and probably be ok, but it's going to be uncomfortable to do so.
That's all not to say there aren't horrid, problem-causing individuals out in the world, there certainly are, it's just that the less you fixate on them, the more you realize that they're such an extreme minority that you feel silly fixating on them in the first place. That goes for anyone that anyone deems 'horrid and problem-causing' mind you, not just whatever idea you have of that class of person.
These people win elections and make news cycles. They are not an “ignorable, small minority”.
For the record, ensuring that those who wish to genocide LGBT+ people are not the majority voice on the internet is absolutely not “a worthless matter”, not by any stretch. I would definitely rather not have to do this, but then, the people who dedicate their lives to trolling and hate are extremely active.
> I think the ultimate problem is that social media is not unbiased — it curates what people are shown.
This is literally the purpose of Section 230. It's Section 230 of the Communications Decency Act. The purpose was to change the law so platforms could moderate content without incurring liability, because the law was previously that doing any moderation made you liable for whatever users posted, and you don't want a world where removing/downranking spam or pornography or trolling causes you to get sued for unrelated things you didn't remove.
The CDA was about making it clearly criminal to send obscene content to minors via the internet. Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content. It does have a subsection to clarify that attempting to remove objectionable content doesn't remove your common carrier protections, but I don't believe that was a response to pre-CDA status quo.
> The CDA was about making it clearly criminal to send obscene content to minors via the internet.
Basically true.
> Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content.
No, it wasn't, and you can tell that because there is literally not a single word to that effect in Section 230. It was to enable information service providers to exercise editorial control over user-submitted content without acquiring publisher-style liability, because the alternative, given the liability decisions occurring at the time and the way providers were reacting to them, was that any site using user-sourced content at scale would, to mitigate legal risk, be completely unmoderated, which was the opposite of the vision the authors of Section 230 and the broader CDA had for the internet. There are no "common carrier" obligations or protections in Section 230. The terms of the protection are the opposite of common carrier, and while there are limitations on the protections, there are no common-carrier-like obligations attached to them.
> The CDA was about making it clearly criminal to send obscene content to minors via the internet.
That part of the law was unconstitutional and pretty quickly got struck down, but it still goes to the same point that the intent of Congress was for sites to remove stuff and not be "common carriers" that leave everything up.
> Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content. It does have a subsection to clarify that attempting to remove objectionable content doesn't remove your common carrier protections, but I don't believe that was a response to pre-CDA status quo.
If you can forgive Masnick's chronic irateness he does a decent job of explaining the situation:
Yeah, but they're not just removing spam and porn. They're picking out things that make them money even if it harms people. That was never in the spirit of the law.
It's also classic commercial activity. Because 230 exists, we are able to have many intentionally different social networks and web tools. If there were no moderation -- for example, if you couldn't delete porn from LinkedIn -- all social networks would be the same. Likely there would only be one large one. If all moderation were pushed to the client side, it might seem like we could retain what we have, but it seems very possible we could lose the diverse ecosystem of Online and end up with something like Walmart.
This would be the worst outcome of a rollback of 230.
> The purpose was to change the law so platforms could moderate content
What part of deliberately showing political content to people algorithmically expected to agree with it, constitutes "moderation"?
What part of deliberately showing political content to people algorithmically expected to disagree with it, constitutes "moderation"?
What part of deliberately suppressing or promoting political content based on the opinions of those in charge of the platform, constitutes "moderation"?
What part of suppressing "misinformation" on the basis of what's said in "reliable sources" (rather than any independent investigation - but really the point would still stand), constitutes "moderation"?
What part of favouring content from already popular content creators because it brings in more ad revenue, constitutes "moderation"?
What part of algorithmically associating content with ads for specific products or services, constitutes "moderation"?
> What part of deliberately showing political content to people algorithmically expected to agree with it, constitutes "moderation"?
Well, maybe it's just me, but only showing political content that doesn't include "kill all the (insert minority here)", and expecting users to not object to that standard, is a pretty typical aspect of moderation for discussion sites.
> What part of deliberately suppressing or promoting political content based on the opinions of those in charge of the platform, constitutes "moderation"?
Again, deliberately suppressing support for literal and obvious fascism, based on the opinions of those in charge of the platform, is a kind of moderation so typical that it's noteworthy when it doesn't happen (e.g. Stormfront).
> What part of suppressing "misinformation" on the basis of what's said in "reliable sources" (rather than any independent investigation - but really the point would still stand), constitutes "moderation"?
Literally all of Wikipedia, where the whole point of the reliable sources policy is that the people running it don't have to be experts to have a decently objective standard for what can be published.
The rise of social media was largely predicated on the curation it provided. People, and particularly advertisers, wanted a curated environment. That was the key differentiator to the wild west of the world wide web.
The idea that curation is a problem with social media is always a head-scratcher for me. The option to just publish directly to the world wide web without social media is always available, but time and again, that option is largely not chosen... this ruling could well narrow things down to that being the only option.
Now, in practice, I don't think that will happen. This will raise the costs of operating social media, and those costs will be reflected in the prices advertisers pay to advertise on social media. That may shrink the social media ecosystem, but what it will definitely do is raise the drawbridge over the moat around the major social media players. You're going to see less competition.
>The option to just directly publish to the world wide web without social media is always available,
Not exactly. You still have to procure web hosting somewhere, and that hosting provider might choose to refuse your money and kick you off.
You might also have to procure the services of Cloudflare if you face significant traffic, and Cloudflare might choose to refuse your money and kick you off.
>that option is largely not chosen...
That's because most people have neither the time nor the will to learn and speak computer.
Social media and immediate predecessors like WordPress were and are successful because they brought the lowest common denominator down to "smack keys and tap Submit". HTML? CSS? Nobody has time for our pig latin.
> You still have to procure web hosting somewhere, and that hosting provider might choose to refuse your money and kick you off.
Who says you need to procure a web hosting provider?
But yes, if you connect your computer up to other computers, the other computers may decide they don't want any part of what you have to offer.
Without that, I wouldn't want to be on the Internet. I don't want to be forced to ingest bytes from anyone who would send them my way. That's just not a good value proposition for me.
> That's because most people do not have neither the time nor the will to learn and speak computer.
I'm sorry, but no. You can literally type into a word processor or any number of other tools and select "save as web content", and then use any number of products to take a web page and serve it up to the world wide web. It's been that way for the better part of 25 years. No HTML or CSS knowledge needed. If you can't handle that, you can just record a video, save it to a file, and serve it up over a web server. Yes, you need to be able to use a computer to participate on the world wide web, but no more than you do to use social media.
Now, what you won't get is a distribution platform that gets your content up in front of people who never asked for it. That is what social media provides. It lowers the effort for the people receiving the content, as in exactly the curation process that the judge was ruling about.
>You can literally type in to a word processor or any number of other tools
Most people these days don't have a word processor or, indeed, "any number of other tools". It's all "in the cloud", usually Google Docs or Office 365 Browser Edition(tm).
>select "save as web content"
Most people these days don't (arguably never) understand files and folders.
>and then use any number of products to take a web page and serve it up to the world wide web.
Most people these days cannot be bothered. Especially when the counter proposal is "Make an X account, smash some keys, and press Submit to get internet points".
>If you can't handle that you can just record a video, save it to a file, and serve it up over a web server.
I'm going to stop you right here: You are vastly overestimating both the will and the computer-aptitude of most people. There is a reason Youtube and Twitch have killed off literally every other video sharing service; there is a reason smartphones killed off personal computers (desktops and to a lesser degree laptops).
Social media became the juggernaut it is today because businesses figured out how to capitalize on the latent demand for easy sharing of information: Literal One Click Solutions(tm) that anyone can understand.
>what you won't get is a distribution platform that gets your content up in front of people who never asked for it.
The internet and more specifically search engines in general have always been that distribution platform. The only thing that changed in the last 30 years is how easy it is to get your stuff on that platform.
> Most people these days don't have a word processor or, indeed, "any number of other tools". It's all "in the cloud", usually Google Docs or Office 365 Browser Edition(tm).
Read that again. ;-)
> Most people these days don't (arguably never) understand files and folders.
We can debate on the skills of "most people" back and forth, but I think it's fair to say that "save as web content" is easier to figure out than figuring out how to navigate a social media site (and that doesn't necessarily require files or folders). If that really is too hard for someone, there are products out there designed to make it even easier. Way back before social media took over, everyone and their dog managed to figure out how to put stuff on the web. People who couldn't make it through high school were successfully producing web pages, blogs, podcasts, video content, you name it.
> I'm going to stop you right here: You are vastly overestimating both the will and the computer-aptitude of most people.
I disagree. I think they don't have the will to do it, because they'd rather use social media. I do believe if they had the will to do it, they would. I agree there are some people who lack the computer-aptitude to get content on the web. Where I struggle is believing those same people manage to put content on social media... which I'll point out is on the web.
> There is a reason Youtube and Twitch have killed off literally every other video sharing service
Yes, because video sharing at scale is fairly difficult and requires real skill. If you don't have that skill, you're going to have to pay someone to do it, or find someone who has their own agenda that makes them want to do it without charging you... like Youtube or Twitch.
On the other hand, putting a video up on the web that no one knows about, no one looks for, and no one consumes unless you personally convince them to do so is comparatively simple.
> there is a reason smartphones killed off personal computers (desktops and to a lesser degree laptops)
Yes, that reason is that smartphones were subsidized by carriers. ;-)
But it's good that you mentioned smartphones, because smartphones will let you send content to anyone in your contacts without anything that most would describe as "computer aptitude". No social media needed... and yet the prevailing preference is for people to go through a process of logging in, shaping content to suit the demands of social media services, attempting to tune the content to get "the algorithm" to show it to as many people as possible, and putting their content there. That takes more will/aptitude/whatever, but they do it for the distribution/audience.
> Social media became the juggernaut it is today because businesses figured out how to capitalize on the latent demand for easy sharing of information: Literal One Click Solutions(tm) that anyone can understand.
I'd agree with you if you said "distribute" instead of "sharing". It's really hard to get millions of people to consume your content. That is, until social media came along and basically eliminated the cost of distribution. So any idiot can push their content out to millions and fill the world with whatever they want... and now there's a sense of entitlement about it, where if a platform doesn't push that content on other people, at no cost to them, they feel they're being censored.
Yup, that does really require social media.
> The internet and more specifically search engines in general have always been that distribution platform. The only thing that changed in the last 30 years is how easy it is to get your stuff on that platform.
No, the Internet & the web required you to go looking for the content you wanted. Search engines (at least at one time) were designed to accelerate that process of finding exactly the content you were looking for faster, and get you off their platform ASAP. Social media is kind of the opposite of search engines. They want you to stay on their platform; they want you to keep scrolling through whatever "engaging" content they can find, regardless of what you're looking for; if you forget about whatever you were originally looking for, that's a bonus. It's that ability to have your content show up when no one is looking for it where social media provides an advantage over the web for content makers.
You're free to to make your own site with your own moderation controls. And nobody will use it, because it'll rapidly become 99.999% spam, CSAM and porn.
Actually, it seems like with these recent rulings, we will be free to use major social media platforms where the choice of moderation is given to the user, lest those social media platforms be held liable for their "speech".
I am fully fine with accepting the idea that if a social media platform doesn't act as a dumb pipe, then their choice of moderation is their "speech" as long as they can be held fully legally liable for every single moderation/algorithm choice that they make.
Fortunately for me, we are commenting on a post where a legal ruling was made to this effect, and the judge agrees with me that this is how things ought to be.
You said this: "People, and particularly advertisers, wanted a curated environment."
If moderation choices are put in the hands of the user, then what you are describing is not a problem, as the user can have that.
Therefore, your point that this choice exists means there isn't a problem for anyone who chooses not to have the spam, and your original complaint is refuted.
> You referencing what people "want" is directly refuted by the idea that they should be able to choose whatever their preferences are.
>
> And your opinion on other people's choices doesn't really matter here.
I think maybe we're talking past each other. What I'm saying what people "want" is a reflection of the overwhelming choices they make. They're choosing the curated environments.
The "problem" that is being referenced is the curation. The claim is that the curation is a problem; my observation is that it is the solution all the parties involved seem to want, because they could, at any time, choose otherwise.
Ok, and if more power is given to the user and the user is additionally able to control their current curation, then that's fine and you can continue to have your own curated environment, and other people will also have more or less control over their own curation.
Problem solved! You get to keep your curation, and other people can also change the curation on existing platforms for their own feeds.
> The claim is that the curation is a problem
Nope. Few people have a problem with other people having a choice of curation.
Instead, the solution that people are advocating for is for more curating powers to be given to individual users so that they can choose, on current platforms, how much is curated for themselves.
> Instead, the solution that people are advocating for is for more curating powers to be given to individual users so that they can choose, on current platforms, how much is curated for themselves.
When you say "on current platforms", I presume you mean on existing social media.
No, that isn't a solution. There's a reason advertising dollars eschew platforms without curation, and they're the one paying for it. Similarly, the people choosing the platform are choosing it because of the curation.
If you don't like it, you're going to have to pay for your own platform and convince other people to participate in it. Good luck.
Yes, it is a solution for a user who wants to control their own moderation and curated environment.
You get your environment, and other people get theirs. The problem for the user who wants their own curated environment is solved.
> the people choosing the platform are choosing it because of the curation.
They would be free to have that curation for themselves in this circumstance where the curation choice is given to the user.
> If you don't like it
You are literally commenting on a thread where the judge ruled in a way that forces liability onto social media companies for not doing this.
So, the choice that social media companies now have is to either (1) give the moderation choice to the user, or (2) suffer very serious and significant liability.
I am fine with either of those situations, and instead it is you and those social media companies who will have to deal with the "If you don't like it" questions, because that's what the judge said.
> you're going to have to pay for your own platform
No, we can instead laugh when social media companies have to pay a bunch of money due to their liability, as this judge just ruled.
don't let children use?
In TN that will be illegal Jan 1 - unless social media creates a method for parents to provide ID and opt out of them being blocked, I think?
Wouldn't that put the responsibility back on the parents?
The state told you XYZ was bad for your kids and it's illegal for them to use, but then you bypassed that restriction and put the sugar back into their hands with an access-blocker-blocker..
Age limitations for things are pretty widespread. Of course, they can be bypassed to various degrees but, depending upon how draconian you want to be, you can presumably be seen as doing the best you reasonably can in a virtual world.
I'm not sure about video, but we are no longer in an era when manual moderation is necessary. Certainly for text, moderation for child safety could be as easy as taking the written instructions currently given to human moderators and having an LLM interpreter (only needs to output a few bits of information) do the same job.
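To make that concrete, here's a minimal sketch in Python of what I mean. The complete() wrapper is a hypothetical stand-in for whatever LLM API you'd actually call, and the guideline text and labels are made up for illustration, not taken from any real policy:

    # Minimal sketch, not a real moderation system. complete() is a hypothetical
    # placeholder for whatever LLM API you actually call; the guidelines and
    # labels are illustrative, not taken from any real policy document.

    MODERATOR_GUIDELINES = """
    Apply the written instructions normally given to human moderators.
    Flag content that encourages self-harm or dangerous challenges.
    Answer with exactly one word: ALLOW, REVIEW, or REMOVE.
    """

    def complete(prompt: str) -> str:
        """Placeholder: send the prompt to your LLM provider and return its text."""
        raise NotImplementedError

    def classify_post(post_text: str) -> str:
        prompt = f"{MODERATOR_GUIDELINES}\n\nPost:\n{post_text}\n\nDecision:"
        decision = complete(prompt).strip().upper()
        # The model only needs to emit a few bits of information; anything
        # unexpected falls back to human review rather than silent approval.
        return decision if decision in {"ALLOW", "REVIEW", "REMOVE"} else "REVIEW"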
That's great, but can your LLM remove everything harmful? If not, you're still liable for that one piece of content that it missed under this interpretation.
There are two questions - one is "should social media companies be globally immune from liability for any algorithmic decisions" which this case says "no". Then there is "in any given case, is the social media company guilty of the harm of which it is accused". Outcomes for that would evolve over time (and I would hope for clarifying legislation as well).
At the scale social media companies operate at, absolutely perfect moderation with zero false negatives is unavailable at any price. Even if they had a highly trained human expert manually review every single post (which is obviously way too expensive to be viable) some bad stuff would still get through due to mistakes or laziness. Without at least some form of Section 230, the internet as we know it cannot exist.
Media, generally, social or otherwise, is not unbiased. All media has bias. The human act of editing, selecting stories, framing those stories, authoring or retelling them... it's all biased.
I wish we would stop seeking unbiased media as some sort of ideal, and instead seek open biases -- tell me enough about yourself and where your biases lie, so I can make informed decisions.
This reasoning is not far off from the court's thinking: editing is speech. A "For You" page is edited, and is TikTok's own speech.
That said, I do agree with your meta point. Social media (hn not excluded) is a generally unpleasant place to be.
"Social media" is a broad brush though. I operate a Mastodon instance with a few thousand users. Our content timeline algorithm is "newest on top". Our moderation is heavily tailored to the users on my instance, and if a user says something grossly out of line with our general vibe, we'll remove them. That user is free to create an account on any other server who'll have them. We're not limiting their access to Mastodon. We're saying that we don't want their stuff on our own server.
What are the legal ramifications for the many thousands of similar operators which are much closer in feel to a message board than to Facebook or Twitter? Does a server run by Republicans have to accept Communist Party USA members and their posts? Does a vegan instance have to allow beef farmers? A PlayStation fan server host pro-PC content?
> I think the ultimate problem is that social media is not unbiased — it curates what people are shown.
It is not only biased but also biased for maximum engagement.
People come to these services for various reasons but then have this specifically biased stuff jammed down their throats in a way to induce specific behavior.
I personally don't understand why we don't hammer these social media sites for conducting psychological experiments without consent.
I'll have to read the third circuit's ruling in detail to figure out whether they are trying to draw a line in the sand on whether an algorithm satisfies the requirements for section 230 protection or falls outside of it. If that's what they're doing, I wouldn't assume a priori that a site like Hacker News won't also fall afoul of the law.
That's how I read it, too. Section 230 doesn't say you can't get in trouble for failure to moderate, it says that you can't get in trouble for moderating one thing but not something else (in other words, the government can't say, "if you moderated this, you could have moderated that"). They seem to be going back on that now.
Real freedom from censorship - you cannot be held liable for content you hosted - has never been tried. The US government got away with a lot of COVID-era soft censorship by just strong-arming social media sites into suppressing content because there were no first-amendment style protections against that sort of soft censorship. I'd love to see that, but there's no reason to think that our government is going in that direction.
If it is a reckoning for social media then so be it. Social media net-net was probably a mistake.
But I doubt this gets upheld on appeal. Given how fickle this Supreme Court is, they’ll probably overrule themselves to fit their agenda, since they don’t seem to think precedent is worth a damn.
No more recommendation/personalization? This could go either way, I'm also willing to see where this one goes.
No more public comment sections? Arstechnica claimed back in the day when section 230 was under fire last time that this would be the result if it was ever taken away. This seems bad.
I'm not sure what will happen, I see 2 possible outcomes that are bad and one that is maybe good. At first glance this seems like bad odds.
Actually there's a fourth possibility, and that's holding Google responsible for whatever links they find for you. This is the nuclear option. If this happens, the internet will have to shut all of its American offices to get around this law.
The underlying hosted service is nearly completely unmoderated and unpersonalised. It's just streams of bits and data routing. You can scan for/limit the propagation of CSAM or DMCA content to some degree as an infrastructure provider but that's really about it and even then you can only really do so to fairly limited degrees and that doesn't stop other providers (or self hosted participants) from propagating that anyways.
Then you provide custom feed algorithms, labelling services, moderation services, etc on top of that but none of them change or control the underlying data streams. They just annotate on top or provide options to the client.
Then the user's client is the one that directly consumes all these different services on top of the base service to produce the end result.
It's a true, unbiased, Section 230-compatible protocol (under even the strictest interpretation) that the user can then optionally combine with any number of secondary services and add-ons to craft their personalised social media experience.
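A rough sketch of that layering (every name here is hypothetical, just to make the shape concrete): the hosting layer is a dumb stream, feeds and labelers are optional annotating services, and the client composes only what the user has opted into.

    from dataclasses import dataclass
    from typing import Callable, Iterable, List, Set

    @dataclass
    class Post:
        author: str
        text: str
        timestamp: float

    # The "dumb pipe": an unmoderated, unpersonalised stream of posts.
    def firehose() -> Iterable[Post]:
        ...

    # Services layered on top. They annotate or reorder; they never mutate
    # or delete anything in the underlying stream.
    FeedAlgorithm = Callable[[List[Post]], List[Post]]
    Labeler = Callable[[Post], Set[str]]

    def render_timeline(posts: List[Post],
                        feed: FeedAlgorithm,
                        labelers: List[Labeler],
                        hide_labels: Set[str]) -> List[Post]:
        """The client combines only the services the user has opted into."""
        visible = []
        for post in feed(posts):
            labels: Set[str] = set()
            for labeler in labelers:
                labels |= labeler(post)
            if not labels & hide_labels:
                visible.append(post)
        return visible

    # Example: a user who wants a plain chronological feed and a spam labeler.
    chronological: FeedAlgorithm = lambda ps: sorted(ps, key=lambda p: p.timestamp, reverse=True)
    spam_labeler: Labeler = lambda post: {"spam"} if "viagra" in post.text.lower() else set()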
I think HN sees this as just more activist judges trying to overrule the will of the people (via Congress). This judge is attempting to interject his opinion on the way things should be versus what a law passed by the highest legislative body in the nation says, as if that doesn’t count. He is also doing it on very shaky ground, but I wouldn’t expect anything less of the 3rd circuit (much like the 5th).
So the solution is "more speech?" I don't know how that will unhook minors from the feedback loop of recommendation algorithms and their plastic brains. It's like saying 'we don't need to put laws in place to combat heroin use, those people could go enjoy a good book instead!'.
Yes, the solution is more speech. Teach your kids critical thinking or they will be fodder for somebody else who has it. That happens regardless of who's in charge, government or private companies. If you can't think for yourself and synthesize lots of disparate information, somebody else will do the thinking for you.
You're mistaken as to what this ruling is about. Ultimately, when it comes right down to it, the Third Circuit is saying this (directed at social media companies):
"The speech is either wholly your speech or wholly someone else's. You can't have it both ways."
Either they get to act as a common carrier (telephone companies are not liable for what you say on a phone call because it is wholly your own speech and they are merely carrying it) or they act as a publisher (liable for everything said on their platforms because they are exercising editorial control via algorithm). If this ruling is upheld by the Supreme Court, then they will have to choose:
* Either claim the safe harbour protections afforded to common carriers and lose the ability to curate algorithmically
or
* Claim the free speech protections of the First Amendment but be liable for all content as it is their own speech.
Algorithmic libel detectors don't exist. The second option isn't possible. The result will be the separation of search and recommendation engines from social media platforms. Since there's effectively one search company in each national protectionist bloc, the result will be the creation of several new monopolies that hold the power to decide what news is front-page, and what is buried or practically unavailable. In the English-speaking world that right would go to Alphabet.
The second option isn’t really meant for social media anyway. It’s meant for traditional publishers such as newspapers.
If this goes through I don’t think it will be such a big boost for Google search as you suggest. For one thing, it has no effect on OpenAI and other LLM providers. That’s a real problem for Google, as I see a long term trend away from traditional search and towards LLMs for getting questions answered, especially among young people. Also note that YouTube is social media and features a curation algorithm to deliver personalized content feeds.
As for social media, I think we’re better off without it! There’s countless stories in the news about all the damage it’s causing to society. I don’t think we’ll be able to roll all that back but I hope we’ll be able to make things better.
If the ruling was upheld, Google wouldn't gain any new liability for putting a TikTok-like frontend on video search results; the only reason they're not doing it now is that all existing platforms (including YouTube) funnel all the recommendation clicks back into themselves. If YouTube had to stop offering recommendations, Google could take over their user experience and spin them off into a hosting company that derived its revenue from AdSense and its traffic from "Google Shorts."
This ruling is not a ban on algorithms, it's a ban on the vertical integration between search or recommendation and hosting that today makes it possible for search engines other than Google to see traffic.
I actually don't think Google search will be protected in its current form. Google doesn't show you unadulterated search results anymore, they personalize (read: editorialize) the results based on the data they've collected on you, the user. This is why two different people entering the same query can see dramatically different results.
If Google wants to preserve their safe harbour protections they'll need to roll back to a neutral algorithm that delivers the same results to everyone given an identical query. This won't be the end of the world for Google but it will produce lower quality results (at least in the eyes of normal users who aren't annoyed by the personalization). Lower quality results will further open the doors to LLMs as a competitor to search.
And newspapers decide every single word they publish, because they’re liable for it. If a newspaper defames someone they can be sued.
This whole case comes down to having your cake and eating it too. Newspapers don’t have that. They have free speech protections but they aren’t absolved of liability for what they publish. They aren’t protected under section 230.
If the ruling is upheld by SCOTUS, Google will have to choose: section 230 (and no editorial control) or first amendment plus liability for everything they publish on SERPs.
Solutions that require everyone to do a thing, and do it well, are doomed to fail.
Yes, it would be great if parents would, universally, parent better, but getting all of them (or a large enough portion of them for it to make a difference) to do so is essentially impossible.
Government controls aren't a solution either though. The people with critical thinking skills, who can effectively tell others what to think, simply capture the government. Meet the new boss, same as the old boss.
I think we've reached the point now that there is more speech than any person can consume by a factor of a million. It now comes down to picking what speech you want to hear. This is exactly what content algorithms are doing -> out of the millions of hours of speech produced in a day, it's giving you your 24 hours of it.
Saying "teach your kids critical thinking" is a solution but it's not the solution. At some point, you have to discover content out of those millions of hours a day. It's impossible to do yourself -- it's always going to be curated.
EDIT: To whoever downvoted this comment, you made my point. You should have replied instead.
I agree with this. Kids are already subject to an agenda; for example, never once in my K-12 education did I learn anything about sex. This was because it was politically controversial at the time (and maybe it still is now), so my school district just avoided the issue entirely.
I remember my mom being so mad about the curriculum in general that she ran for the school board and won. (I believe it was more of a math and science type thing. She was upset with how many coloring assignments I had. Frankly, I completely agreed with her then and I do now.)
I was lucky enough to go to a charter school where my teachers encouraged me to read books like "People's History of the U.S" and "Lies My Teacher Told Me". They have an agenda too, but understanding that there's a whole world of disagreement out there and that I should seek out multiple information sources and triangulate between them has been a huge superpower since. It's pretty shocking to understand the history of public education and realize that it wasn't created to benefit the student, but to benefit the future employers of those students.
K so several of the most well-funded tech companies on the planet sink literally billions of dollars into psyops research to reinforce addictive behavior and average parents are expected to successfully compete against it with...a lecture.
We have seen that adults can't seem to unhook from these dopamine delivery systems and you're expecting that children can do so?
Sorry. That's simply disingenuous.
Yes, children and especially teenagers do lots of things even though their parents try to prevent them from doing so. But even if children and teenagers still manage to get them, we don't throw up our hands and sell them tobacco and alcohol anyway.
Open-source the algorithm and have users choose. A marketplace is the best solution to most problems.
It is pretty clear that China already forces a very different TikTok ranking algo for kids within the country vs outside the country. Forcing a single algo is pretty un-American though and can easily be abused; let's instead open it up.
"Open-source the algorithm" would be at best openwashing. The way to create the type of choice you're thinking is to force the unbundling of client software from hosting services.
80% of users will leave things at the default setting, or "choose" whatever the first thing in the list is. They won't understand the options; they'll just want to see their news feed.
I'm not so sure, the feed is quite important and users understand that. Look at how many people switched between X and Threads given their political view. People switched off Reddit or cancelled their FB account at times in the past also.
I'm pretty sure going from X to Threads had very little to do with the feed algorithm for most people. It had everything to do with one platform being run by Musk and the other one not.
Unfortunately, the biases of newspapers and social media sites are only diverse if they are not all under the strong influence of the wealthy.
Even if they may have different skews on some issues, under a system where all such entities are operated entirely for-profit, they will tend to converge on other issues, largely related to maintaining the rights of capital over labor and over government.
Seems like the bias will be against manipulative algorithms. How does TikTok escape liability here? By giving control of what is promoted to users to the users themselves.
I look at forums and social media as analogous to writing a "Letter to the Editor" to a newspaper:
In the newspaper case, you write your post, send it to the newspaper, and some editor at the newspaper decides whether or not to publish it.
In Social Media, the same thing happens, but it's just super fast and algorithmic: You write your post, send it to the Social Media site (or forum), an algorithm (or moderator) at the Social Media site decides whether or not to publish it.
I feel like it's reasonable to interpret this kind of editorial selection as "promotion" and "recommendation" of that comment, particularly if the social media company's algorithm deliberately places that content into someone's feed.
I think if social media companies relayed communication between its users with no moderation at all, then they should be entitled to carrier protections.
As soon as they start making any moderation decisions, they are implicitly endorsing all other content, and should therefore be held responsible for it.
There are two things social media can do. Firstly, they should accurately identify their users before allowing them to post, so they can countersue that person if a post harms them, and secondly, they can moderate every post.
Everybody says this will kill social media as we know it, but I say the world will be a better place as a result.
Yeah, pretty much. What's not clear to me though is how non-targeted content curation, like simply "trending videos" or "related videos" on YouTube, is impacted. IMO that's not nearly as problematic and can be useful.
I always wondered why Section 230 does not have a carve-out exemption to deal with the censorship issue.
I think we'd all agree that most websites are better off with curation and moderation of some kind. If you don't like it, you are free to leave the forum, website, etc. The problem is that Big Tech fails to work in the same way, because those properties are becoming effectively the "public highways" where everyone must pass by.
This is not dissimilar from say, public utilities.
So, why not define how a tech company becomes a Big Tech "utility", and therefore cannot hide behind the 230 exception for things that it willingly does, like censorship?
Wonder no longer! It's Section 230 of the communications "decency" act, not the communication freedoms and regulations act. It doesn't talk about censorship because that wasn't in the scope of the bill. (And actually it does talk about censorship of obscene material in order to explicitly encourage it.)
> In a very general sense, this ruling could be seen as a form of net neutrality
In reality this will not be the case and instead it will introduce the bias of regulators to replace the bias companies want there to be. And even with their motivation to sell users attention, I cannot see this as an improvement. No, the result will probably be worse.
Refusal to moderate, though, is also a bias. It produces a bias where the actors who post the most have their posts seen the most. Usually these posts are Nigerian princes, Viagra vendors, and the like. Nowadays they'll also include massive quantities of LLM-generated cryptofascist propaganda (but not cryptomarxist propaganda because cryptomarxists are incompetent at propaganda). If you moderate the spam, you're biasing the site away from these groups.
You can't just pick anything and call it a "bias" - absolutely unmoderated content may not (will not) represent the median viewpoint, but it's not the hosting provider "bias" doing so. Moderating spam is also not "bias" as long as you're applying content-neutral rules for how you do that.
These are some interesting mental gymnastics. Zuckerberg literally publicly admitted the other day that he was forced by the government to censor things without a legal basis. Musk disclosed a whole trove of emails about the same at Twitter. And you’re still “not so sure”? What would it take for you to gain more certainty in such an outcome?
Haven’t looked into the Zuckerberg thing yet but everything I’ve seen of the “Twitter Files” has done more to convince me that nothing inappropriate or bad was happening, than that it was. And if those selective-releases were supposed to be the worst of it? Doubly so. Where’s the bad bit (that doesn’t immediately stop looking bad if you read the surrounding context whoever’s saying it’s bad left out)?
Means you haven’t really looked into the Twitter files. They were literally holding meetings with government officials and were told what to censor and who to ban. That’s plainly unconstitutional and heads should roll for this.
The government asking you to do something is like a dangerous schoolyard bully asking for your lunch money. Except the gov has the ability to kill, imprison, and destroy. Doesn’t matter if you’re an average Joe or a Zuckerberg.
So it's categorically impossible for the government to make any non-coercive request or report for anything because it's the government?
I don't think that's settled law.
For example, suppose the US Postal Service opens a new location, and Google Maps has the pushpin on the wrong place or the hours are incorrect. A USPS employee submits a report/correction through normal channels. How is that trampling on Google's first-amendment rights?
This is obviously not a real question, so instead of answering I propose we conduct a thought experiment. The year is 2028, and Zuck had a change of heart and fully switched sides. Facebook, Threads, and Instagram now block the news of Barron Trump’s drug use and of his lavishly compensated seat on the board of Russia’s Gazprom, and ban the dominant electoral candidate off social media. In addition they allow the spread of a made-up dossier (funded by the RNC) about Kamala Harris’ embarrassing behavior with male escorts in China.
What you should ask yourself is this: irrespective of whether compliance is voluntary or not, is political censorship on social media OK? And what kind of a logical knot one must contort one’s mind into to suggest that this is the second coming of net neutrality? Personally I think the mere fact that the government is able to lean on a private company like that is damning AF.
All large sites have terms of service. If you violate them, you might be removed, even if you're "the dominant electoral candidate". Remember, no one is above the law, or in this case, the rules that a site wishes to enforce.
I'm not a fan of political censorship (unless that means enforcing the same ToS that everyone else is held to, in which case, go for it). Neither am I for the radical notion of legislation telling a private organization that they must host content that they don't wish to.
This has zero to do with net neutrality. Nothing. Nada.
Is there evidence that the government leaned on a private company instead of meeting with them and asking them to do a thing? Did Facebook feel coerced into taking actions they wouldn't have willingly done otherwise?
"But by the time Nylah viewed these
videos, TikTok knew that: 1) “the deadly Blackout Challenge
was spreading through its app,” 2) “its algorithm was
specifically feeding the Blackout Challenge to children,” and
3) several children had died while attempting the Blackout
Challenge after viewing videos of the Challenge on their For
You Pages. App. 31–32. Yet TikTok “took no and/or
completely inadequate action to extinguish and prevent the
spread of the Blackout Challenge and specifically to prevent
the Blackout Challenge from being shown to children on their
[For You Pages].” App. 32–33. Instead, TikTok continued to
recommend these videos to children like Nylah."
We need to see another document, "App 31-32", to see what TikTok "knew". Could someone find that, please? A Pacer account may be required. Did they ignore an abuse report?
See also Gonzalez v. Google (2023), where a similar issue reached the U.S. Supreme Court.[1] That was about whether recommending videos which encouraged the viewer to support the Islamic State's jihad led someone to go fight in it, where they were killed. The Court rejected the terrorism claim and declined to address the Section 230 claim.
IIRC, TikTok has (had?) a relatively high-touch content moderation pipeline, where any video receiving more than a few thousand views is checked by a human reviewer.
Their review process was developed to hit the much more stringent speech standards of the Chinese market, but it opens them up to even more liability here.
I unfortunately can't find the source articles for this any more, they're buried under "how to make your video go viral" flowcharts that elide the "when things get banned" decisions.
I don't think any of that actually matters for the CDA liability question, but it is definitely material in whether they are found guilty assuming they can be held liable at all.
TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.” One video depicted the “Blackout Challenge,” which encourages viewers to record themselves engaging in acts of self-asphyxiation. After watching the video, Nylah attempted the conduct depicted in the challenge and unintentionally hanged herself. -- https://cases.justia.com/federal/appellate-courts/ca3/22-3061/22-3061-2024-08-27.pdf?ts=1724792413
An algorithm accidentally enticed a child to hang herself. I've got code running on dozens of websites that recommends articles to read based on user demographics. There's nothing in that code that would or could prevent an article about self-asphyxiation being recommended to a child. It just depends on the clients that use the software not posting that kind of content, people with similar demographics to the child not reading it, and a child who gets the recommendation not reading it and acting it out. If those assumptions fail should I or my employer be liable?
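For illustration, here's a stripped-down sketch of roughly that kind of demographic recommender (hypothetical names and fields, not my actual code). Note that nothing in it ever inspects the article content itself, which is exactly the point: the safety of the recommendations rests entirely on what gets posted and who reads it.

    from collections import Counter

    def recommend(user_demographics: dict, articles: list, read_log: list, k: int = 5) -> list:
        """Rank articles by how often users with similar demographics read them."""
        def similarity(other: dict) -> int:
            return sum(1 for key, value in user_demographics.items() if other.get(key) == value)

        scores = Counter()
        for event in read_log:  # each event: {"demographics": {...}, "article_id": ...}
            scores[event["article_id"]] += similarity(event["demographics"])

        by_id = {a["id"]: a for a in articles}
        # No age gate and no content filter: if similar readers read it, it gets shown.
        return [by_id[aid] for aid, _ in scores.most_common() if aid in by_id][:k]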
Or you do things that give you rewards, and do not care what they otherwise result in, but you want to be saved from any responsibility (automatically!) for what they cause, just because it is an algorithm?
Enjoying the benefits while running away from responsibility is a cowardly and childish act. Childish acts need supervision from adults.
You seem to be overlooking the fact of the late plaintiff being 10 years old. The case turns on whether it's reasonable to expect that Tiktok would knowingly share content encouraging users to attempt life-threatening activities to children.
Installing a mechanism that will output harmful results if certain inputs are provided is negligence! We are talking about pushing random crap into the faces of readers here, for f's sake, not a cure-for-cancer algorithm with side effects. Please be smart!
You want to bake cookies yet refuse to take responsibility for the possibility of somebody choking on them, or sell cars without making crashes impossible!
Impossible goals are an asinine standard and "responsibility" and "accountability" are the favorite weasel words of those who want absolute discretion to abuse power.
If it is operating mechanically, then it is following a process chosen by the developers who wrote the code. They work for the company, so the consequences are still the company's responsibility.
The car is following a process chosen by Mercedes' engineers (to go forward when the user presses the accelerator.) The newsfeed is likewise following a mechanistic process driven by user input (they wouldn't be showing misinformation if users weren't uploading and sharing them.)
If the Mercedes infotainment screen had shown you a curated recommendation that you run them over, prior to you doing so, they very possibly would (and should).
What happened to that child is on the parents, not some programmer who coded an optimization algorithm. It’s really as simple as that. No 10 year old should be on TikTok; I’m not sure anyone under 18 should be on it, given the garbage, dangerous misinformation, intentional disinformation, and lack of any ability to control what your child sees.
Do you feel the same way about the sale of alcohol? I do see the argument for parental responsibility, but I'm not sure how parents will enforce that if the law allows people to sell kids alcohol free from liability.
We regulate the sale of all sorts of things that can do damage but also have other uses. You can't buy large amounts of certain cold medicines, and you need to be an adult to do so. You can't buy fireworks if you are a minor in most places. In some countries they won't even sell you a set of steak knives if you are underage.
Someone else's response was that a 10 year old should not be on TikTok. Well then, how did they get past the age restrictions? (I'm guessing it's a check box at best.) So it's inadequately gated. But really, I don't think it's the sort of thing that needs an age gate.
They are responsible for a product that is actively targeting harmful behavior at children and adults. It's not ok in either situation. You cannot allow your platform to be hijacked for content like this. Full stop.
These 'services' need better ways to moderate content. If that is more controls that allow them to delete certain posts and videos or some other method to contain videos like this. You cannot just allow users to upload and share whatever they want. And further, have your own systems promote these videos.
Everyone who makes a product(especially for mass consumption), has a responsibility to make sure their product is safe. If your product is so complicated that you can't control it, then you need to step back and re-evaluate how it's functioning. Not just plow ahead, making money, letting it harm people.
Alcohol (the consumption form) serves only one purpose: to get you buzzed. Unlike algorithms and hammers, which are generic and serve many purposes, some of which are positive, especially when used correctly. You can’t sue the people who make hammers if someone kills another person with one.
> Alcohol (the consumption form) serves only one purpose to get you buzzed.
Since consumable alcohol has other legitimate uses besides getting a buzz on, I don't think this point stands. For example, it's used quite often in cooking and (most of the time?) no intoxicating effects remain in the final product.
We're not talking about "all algorithms" any more than the alcohol example is talking about "all liquids". Social media algorithms have one purpose: to manipulate people into more engagement, to manoeuvre them into forgoing other activities in favour of more screen time, in the service of showing them more ads.
You could sue a hammer manufacturer if they regularly advertised hammers as weapons to children and children started killing each other with them, though.
You said sue the hammer manufacturer. Why didn’t you say to sue the newspaper that ran the ads? The fact that you couldn’t keep that straight in your analogy undermines your argument significantly imo.
Except right now YouTube has a self-advertisement in the middle of the page warning people not to trust the content on YouTube. A company warning people not to trust the product they built and the videos they choose to show you... we need to rethink 230. We've gone seriously awry.
It's more nuanced than that. If I sent a hateful letter through the mail and someone gets hurt by it (even physically), who is responsible, me or the post office?
I know youtube is different in important ways than the post, but it's also different in important ways from e.g. somebody who builds a building that falls down.
The Post Office just delivers your mail, it doesn't do any curation.
YouTube, TikTok, etc. differ by applying an algorithm to "decide" what to show you. Those algorithms have all sorts of weights and measures, but they're ultimately personalized to you. And if they're making personalized recommendations that include "how to kill yourself"... I think we have a problem?
It's simply not just a FIFO of content in, content out, and in many cases (Facebook & Instagram especially) the user barely gets a choice in what is shown in the feed...
Contrast with e.g. Mastodon where there is no algorithm and it only shows you what you explicitly followed, and in the exact linear order it was posted.
If the post office opened your letter, read it, and then decided to copy it and send it to a bunch of kids, you would be responsible for your part in creating it, and they would be responsible for their part in disseminating it.
It'd be more akin to buying a hammer and then the hammer starts morphing into a screwdriver without you noticing.
Then when you accidentally hit your hand with the hammer, you actually stabbed yourself. And that's when you realized your hammer is now a screwdriver.
Yes, I thought that’s what I said: no one knows the shape of the danger social media currently poses.
It’s like trying to draw a tiger when you’ve never seen an animal. We only have the faintest clue what social media is right now. It will change in the next 25+ years as well.
Sure we know some dangers but… I think we need more time to know them all.
We already know that broadcasting (a.k.a. the fourth estate) comes with responsibility. We've known that for several hundred years by now.
And social media (with editorialized or even curated-for-specific-user feeds) have been around for more than a decade now (Facebook's big Timeline update for instance dates to 2011).
Yes, because a mechanical tool made of solid metal is the same thing as software that can change its behavior at any time and is controlled live by some company with its own motives.
It sounds like your algorithm targets children with unmoderated content. That feels like a dangerous position with potential for strong arguments in either direction. I think the only reasonable advice here is to keep close tabs on this case.
Does it specifically target children or does it simply target people and children happen to be some of the people using it?
If a child searches Google for "boobs", it's not fair to accuse Google of showing naked women to children, and definitely not fair to even say Google was targeting children.
«Had Nylah viewed a Blackout Challenge video through TikTok’s search function, rather than through her FYP, then TikTok may be viewed more like a repository of third-party content than an affirmative promoter of such content.»
I'd say explicitly allowing children on your site (such as TikTok and Google) is targeting children. Any company doing so should be careful with the content on their front pages and feeds.
Part of the claim is that TikTok knew about this content being promoted and other cases where children had died as a result.
> But by the time Nylah viewed these videos, TikTok knew that: 1) "the deadly Blackout Challenge was spreading through its app," 2) "its algorithm was specifically feeding the Blackout Challenge to children," and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31-32. Yet TikTok "took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their [For You Pages]." App. 32-33. Instead, TikTok continued to recommend these videos to children like Nylah.
Do you think this should be legal? Would you do nothing if you knew children were dying directly because of the content you were feeding them?
Yes, if a product actively contributes to child fatalities then the manufacturer should be liable.
Then again, I guess your platform is about article recommendation and not about recording yourself doing popular trends. And perhaps children are not your target audience, or an audience at all. In many ways the situation was different for TikTok.
I think it depends on some technical specifics, like which meta data was associated with that content, and the degree to which that content was surfaced to users that fit the demographic profile of a ten year old child.
If your algorithm decides that things in the 90th percentile of shock value will boost engagement for a user profile that can also include users who are ten years old, then you may have built a negligent algorithm. Maybe that's not the case in this particular instance, but it could be possible.
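Something like this toy ranker is the failure mode I mean (the weights and field names are invented): shock value gets rewarded because it predicts engagement, and the viewer's age is never consulted at all.

    def rank_for_user(user_profile: dict, videos: list, k: int = 10) -> list:
        """Toy engagement ranker with no age awareness whatsoever."""
        def predicted_engagement(video: dict) -> float:
            # Shock value correlates with watch time, so it gets rewarded...
            return 0.7 * video["shock_percentile"] + 0.3 * video["topical_match"]

        ranked = sorted(videos, key=predicted_engagement, reverse=True)
        # ...and nothing here ever asks whether user_profile["age"] makes
        # the top of that list appropriate for the person seeing it.
        return ranked[:k]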
Like if I'm a cement company, and I build a sidewalk that's really good and stable, stable enough for a person to plant a milk crate on it, and stand on that milk crate, and hold up a big sign that gives clear instructions on self-asphyxiation, and a child reads that sign, tries it out and dies, am I going to get sued? All I did was build the foundation for a platform.
That's not a fair analogy though. To be fairer, you'd have to monitor said footpath 24/7 and have a robot and/or a number of people removing milk crate signs that you deemed inappropriate for your footpath. They'd also move various milk crate signs in front of people as they walked and hide others.
If you were indeed monitoring the footpath for milk crate signs and moving them, then yes, you may be liable for showing one to someone it wouldn't be appropriate for, or for not removing it.
That's a good point, and actually the heart of the issue, and what I missed.
In my analogy the stable sidewalk that can hold the milk crate is both the platform and the optimization algorithm. But to your point there's actually a lot more going on with the optimization than just building a place where any rando can market self-asphyxiation. It's about how they willfully targeted people with that content.
"I have a catapult that launches loosely demo-targeted things, without me checking what is being loaded into it. I only intend for harmless things to be loaded. Should I be liable if someone loads a boulder and it hurts someone?"
Of course you should be. Just because an algorithm gave you an output doesn't absolve you of responsibility for using it. It's not some magical, mystical thing. It's something you created, and you are 100% responsible for what you do with its output.
"""The Court held that a platform's algorithm that reflects "editorial judgments" about "compiling the third-party speech it wants in the way it wants" is the platform's own "expressive product" and is therefore protected by the First Amendment.
Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too."""
I've agreed for years. It's a choice in selection rather than a 'natural consequence' such as a chronological, threaded, or even 'end-user upvoted/moderated' (outside the site's control) weighted sort.
If I as a forum administrator delete posts by obvious spambots, am I making an editorial judgment that makes me legally liable for every single post I don’t delete?
If my forum has a narrow scope (say, 4×4 offroading), and I delete a post that’s obviously by a human but is seriously off‐topic (say, U.S. politics), does that make me legally liable for every single post I don’t delete?
What are the limits here, for those of us who unlike silicon valley corporations, don’t have massive legal teams?
> If my forum has a narrow scope (say, 4×4 offroading), and I delete a post that’s obviously by a human but is seriously off‐topic (say, U.S. politics), does that make me legally liable for every single post I don’t delete?
No.
From the court of appeals [1], "We reach this conclusion specifically because TikTok’s promotion of a Blackout Challenge video on Nylah’s FYP was not contingent upon any specific user input. Had Nylah viewed a Blackout Challenge video through TikTok’s search function, rather than through her FYP, then TikTok may be viewed more like a repository of third-party content than an affirmative promoter of such content."
So, given (an assumption) that users on your forum choose some kind of "4x4 Topic" they're intending to navigate a repository of third-party content. If you curate that repository it's still a collection of third-party content and not your own speech.
Now, if you were to have a landing page that showed "featured content" then that seems like you could get into trouble. Although one wonders what the difference is between navigating to a "4x4 Topic" or "Featured Content" since it's both a user-action.
>then TikTok may be viewed more like a repository of third-party content than an affirmative promoter of such content."
"may"
Basically until the next court case when someone learns that search is an algorithm too, and asks why the first result wasn't a warning.
The real truth is, if this is allowed to stand, it will be selectively enforced at best. If it's low enough volume it'll just become a price of doing business: sometimes a judge has it out for you and you have to pay a fine; you just have to work it into the budget. Fine for big companies, game ender for small ones.
> Now, if you were to have a landing page that showed "featured content" then that seems like you could get into trouble. Although one wonders what the difference is between navigating to a "4x4 Topic" or "Featured Content" since it's both a user-action.
> If my forum has a narrow scope (say, 4×4 offroading), and I delete a post that’s obviously by a human but is seriously off‐topic (say, U.S. politics), does that make me legally liable for every single post I don’t delete?
According to the article, probably not:
> A platform is not liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
"Otherwise objectionable" looks like a catch-all phrase to allow content moderation generally, but I could be misreading it here.
I'm guessing you're not a lawyer, and I'm not either, so there might be some details that are not obvious about it, but the regulation draws the line at allowing you to do[1]:
> any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
I think that allows your use case without liability.
That subsection of 230 is about protecting you from being sued for moderating, like being sued by the people who posted the content you took down.
The "my moderation makes me liable for everything I don't moderate" problem, that's what's addressed by the preceding section, the core of the law and the part that's most often at issue, which says that you can't be treated as publisher/speaker of anyone else's content.
It's not a loophole. That's the intended meaning, otherwise it would be a violation of freedom of association.
That doesn't mean anyone is free to promote content without liability, just that moderating by deleting content doesn't make it an "expressive product."
What the other replies are not quite getting is that there can be other kinds of moderator actions that aren't acting on posts that are offtopic or offensive, but that do not meet the bar for the forum in question — are they considered out of scope with this ruling?
As an example, suppose on a HN thread about the Coq theorem prover, someone starts a discussion about the name, and it's highly upvoted but the moderators downrank that post manually to stimulate more productive discussions. Is this considered curation, and can this be no longer done given this ruling?
It seems to me that this is indeed the case, but in case I'm mistaken I'd love to know.
Wouldn't it be more that you are responsible for pinned posts at the top of thread lists? If you pin a thread promoting an unsafe on-road product, say telling people they should be replacing their steering with heim joints that aren't street legal, you could be liable. Whereas if you just left the thread among all the others, you aren't. (Especially if the heim joints are sold by a forum sponsor or the forum has a special 'discount' code for the vendor.)
If you discovered a thread on the forum where a bunch of users were excitedly talking about doing something incredibly dangerous in their 4x4s, like getting high and trying some dangerous maneuver, would you let it sit on your forum?
How would you feel if somebody read about it on your forum and died trying to do it?
Update: The point I'm trying to make is that _I_ wouldn't let this sit on my forum, so I don't think it's unethical to ask others to remove it from their forums as well.
Not the OP, but if I thought we were all joking around, and it was the type of forum that allowed people to be a bit silly, I would let it stand. Or if I thought people on the forum would point out the danger and hopefully dissuade the poster and/or others from engaging in that behavior, I would let it stand.
However, if my hypothetical forum received a persistent flood of posts designed to soften people up to dangerous behaviors, I'd be pretty liberal removing posts that smelled funny until the responsible clique moved elsewhere.
I think you're looking for the kind of precision that just doesn't exist in the legal system. It will almost certainly hinge on intent and the extent to which your actions actually stifle legitimate speech.
I imagine that getting rid of spam wouldn't meet the bar, and neither would enforcing that conversations are on-topic. But if you're removing and demoting posts because they express views you disagree with, you're implicitly endorsing the opinions expressed in the posts you allow to stay up, and therefore are exercising editorial control.
I think the lesson here is: either keep your communities small so that you can comfortably reason about the content that's up there, or don't play the thought police. The only weird aspect of this is that you have courts saying one thing, but then the government breathing down your neck and demanding that you go after misinformation.
A lot of people seem to be missing the part where, if it ends up in court, you have to argue that what you removed was objectionable on the same level as the other named types of content, and there will be a judge you'll need to convince that you didn't re-interpret the law to your benefit. This isn't like arguing on HN or social media; you being "clever" doesn't necessarily protect you from liability or consequences.
Even if you are not shielded from liability, I cannot imagine a scenario in which this moderation policy would result in significant liability. I'm sure someone would be willing to sell you some insurance to that effect. I certainly would.
For anyone making claims about what the authors of Section 230 intended or the extent to which Section 230 applies to targeted recommendations by algorithms, the authors of Section 230 (Ron Wyden and Chris Cox) wrote an amicus brief [1] for Gonzalez v. Google (2023). Here is an excerpt from the corresponding press release [2] by Wyden:
> “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” the members wrote. “That interpretation enables Section 230 to fulfill Congress’s purpose of encouraging innovation in content presentation and moderation. The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Section 230’s protection remains as essential today as it was when the provision was enacted.”
This statement from Wyden's press release seems to be in contrast to Chris Cox's reasoning in his journal article [1] (linked in the amicus).
> It is now firmly established in the case law that Section 230 cannot act as a shield whenever a website is in any way complicit in the creation or development of illegal content.
> ...
> In FTC v. Accusearch,[69] the Tenth Circuit Court of Appeals held that a website’s mere posting of content that it had no role whatsoever in creating — telephone records of private individuals — constituted “development” of that information, and so deprived it of Section 230 immunity. Even though the content was wholly created by others, the website knowingly transformed what had previously been private information into a publicly available commodity. Such complicity in illegality is what defines “development” of content, as distinguished from its creation.
He goes on to list multiple similar cases and how they fit the original intent of the law. Then he further clarifies that it's not just about illegal content, but all legal obligations:
> In writing Section 230, Rep. Wyden and I, and ultimately the entire Congress, decided that these legal rules should continue to apply on the internet just as in the offline world. Every business, whether operating through its online facility or through a brick-and-mortar facility, would continue to be responsible for all of its own legal obligations.
Though, ultimately the original reasoning matters little in this case, as the courts are the ones to interpret the law. In fact Section 230 is one part of the larger Communications Decency Act that was mostly struck down by the Supreme Court.
EDIT: Added quote about additional legal obligations.
The Accusearch case was a situation in which the very act of reselling a specific kind of private information would've been illegal under the FTC Act if you temporarily ignore Section 230. If you add Section 230 into consideration, then you have to consider knowledge, but the knowledge analysis is trivial. Accusearch should've known that reselling any 1 phone number was illegal, so it doesn't matter whether Accusearch knew the actual phone numbers it sold. Similarly, a social media site that only allows blackout challenge posts would be illegal regardless of whether the site employees know whether post #123 is actually a blackout challenge post. In contrast, most of the posts on TikTok are legal, and TikTok is designed for an indeterminate range of legal posts. Knowledge of specific posts matters.
Whether an intermediary has knowledge of specific content that is illegal to redistribute is very different from whether the intermediary has "knowledge" that the algorithm it designed to rank legally distributable content can "sometimes" produce a high ranking to "some" content that's illegal to distribute. The latter case can be split further into specific illegal content that the intermediary has knowledge of and illegal content that the intermediary lacks knowledge of. Unless a law such as KOSA passes (which it shouldn't [1]), the intermediary has no legal obligation to search for the illegal content that it isn't yet aware of. The intermediary need only respond to reports, and depending on the volume of reports the intermediary isn't obligated to respond within a "short" time period (except in "intellectual property cases", which are explicitly exempt from Section 230). "TikTok knows that TikTok has blackout challenge posts" is not knowledge of post PQR. "TikTok knows that post PQR on TikTok is a blackout challenge post" is knowledge of post PQR.
Was TikTok aware that specific users were being recommended specific "blackout challenge" posts? If so, then TikTok should've deleted those posts. Afterward, TikTok employees should've known that its algorithm was recommending some blackout challenge posts to some users. Suppose that TikTok employees are already aware of post PQR. Then TikTok has an obligation to delete PQR. If in a week blackout challenge post HIJ shows up in the recommendations for user @abc and @xyz, then TikTok shouldn't be liable for recommendations of HIJ until TikTok employees read a report about it and then confirm that HIJ is a blackout challenge post. Outwardly, @abc and @xyz will think that TikTok has done nothing or "not enough" even though TikTok removed PQR and isn't yet aware of HIJ until a second week passes.
The algorithm doesn't create knowledge of HIJ no matter how high the algorithm ranks HIJ for user @abc. The algorithm may be TikTok's first-party speech, but the content that is being recommended is still third-party speech. Suppose that @abc sues TikTok for failing to prevent HIJ from being recommended to @abc during the first elapsed week. The First Amendment would prevent TikTok from being held liable for HIJ (third party speech that TikTok lacked knowledge of during the first week). As a statute that provides an immunity (as opposed to a defense) in situations involving redistribution of third-party speech, Section 230 would allow TikTok to dismiss the case early; early dismissals save time and court fees.
Does the featured ruling by the Third Circuit mean that Section 230 wouldn't apply to TikTok's recommendation of HIJ to @abc in the first elapsed week? Because if so, then I really don't think that the Third Circuit is reading Section 230 correctly. At the very least, the Third Circuit's ruling will create a chilling effect on complex algorithms in violation of social media websites' First Amendment freedom of expression. And I don't believe that Ron Wyden and Chris Cox intended for websites to only sort user posts by chronological order (like multiple commenters on this post are hoping will happen as a result of the ruling) when they wrote Section 230.
I'm skeptical that Ron Wyden anticipated algorithmic social media feeds in 1996. But I'm pretty sure he gets a decent amount of lobbying cash from interested parties.
I'm not at all opposed to implementing new laws that society believes will reduce harm to online users (particularly children).
However, if Section 230 is on its way out, won't this just benefit the largest tech companies that already have massive legal resources and the ability to afford ML-based or manual content moderation? The barriers to entry into the market for startups will become insurmountable. Perhaps I'm missing something here, but it sounds like the existing companies essentially got a free pass with regard to liability of user-provided content and had plenty of time to grow, and now the government is pulling the ladder up after them.
The assertion made by the author is that the way these companies grew is only sustainable in the current legal environment. So the advantage they have right now by being bigger is nullified.
The parent said "grew", but I think a closer reading of the article indicates a more robust idea that tboyd47 merely misrepresented. A better sentence is potentially:
are able to profit to the tune of a 40% margin on advertising revenue
With that, they're saying that they're only going to be able to profit this much in this current regulatory environment. If that goes away, so too does much of their margin, potentially all of it. That's a big blow no matter the size, though Facebook may weather it better than smaller competitors.
Section 230 isn't on its way out; this happened because the court found that TikTok knowingly headlined dangerous content that led to someone's death.
By turning them all into expensive tarpits of time and money, through the power of strategic spite: making it so expensive that plaintiffs cannot come out ahead even if they prevail in a lawsuit. It is a far harder standard to get legal costs covered, and if it costs tens of millions to possibly get a few million in a decade, interest dries up fast.
> In other words, the fundamental issue here is not really whether big tech platforms should be regulated as speakers, as that’s a misconception of what they do. They don’t speak, they are middlemen. And hopefully, we will follow the logic of Matey’s opinion, and start to see the policy problem as what to do about that.
This is a pretty good take, and it relies on pre-Internet legal concepts like distributor and producer. There's this idea that our legal / governmental structures are not designed to handle the Internet age and therefore need to be revamped, but this is a counterexample that is both relevant and significant.
Fantastic write-up. The author appears to be making more than a few assumptions about how this will play out, but I share his enthusiasm for the end of the "lawless no-man’s-land" (as he put it) era of the internet. It comes at a great time too, as we're all eagerly awaiting the AI-generated content apocalypse. Just switch one apocalypse for a kinder, more human-friendly one.
> So what happens going forward? Well we’re going to have to start thinking about what a world without this expansive reading of Section 230 looks like.
There was an internet before the CDA. From what I remember, it was actually pretty rad. There can be an internet after, too. Who knows what it would look like. Maybe it will be a lot less crowded, less toxic, less triggering, and less addictive without these gigantic megacorps spending buku dollars to light up our amygdalas with nonsense all day.
Judge Matey's basic point of contention is that Section 230 does not provide immunity for any of TikTok's actions except "hosting" the blackout challenge video on its server.
Defining it in this way may lead to a tricky technical problem for the courts to solve... Having worked in web development, I understand "hosting" to mean the act of storing files on a computer somewhere. That's it. Is that how the courts will understand it? Or does their definition of hosting include acts that I would call serving, caching, indexing, linking, formatting, and rendering? If publishers are liable for even some of those acts, then this takes us to a very different place from where we were in 1995. Interesting times ahead for the industry.
You're reading it too literally here - the CDA applies to:
>(2) Interactive computer service The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
"hosting" isn't actually used in the text of the relevant law - it's only shorthand in the decision. If they want to know what the CDA exempts they would read the CDA along with caselaw specifically interpreting it.
I’d imagine one that reasonable people would understand to be the meaning. If a “web hosting” company told me they only stored things on a server with no way to serve it to users, I’d laugh them out the room.
The ruling itself says that this is not about 230; it's about TikTok's curation and collation of the specific videos. TikTok is not held liable for the user content but for what they do with their 'For You' section. I guess it makes sense: manipulating people is not OK, whether it's for political purposes as Facebook and Twitter do, or whatever. So 230 is not over.
It would be nice to see those 'For You' feeds and YouTube's recommendations gone. Chronological timelines are the best, and will bring back some sanity. Don't like it? Don't follow it.
> Accordingly, TikTok’s algorithm, which recommended the Blackout Challenge to Nylah on her FYP, was TikTok’s own “expressive activity,” id., and thus its first-party speech.
>
> Section 230 immunizes only information “provided by another[,]” 47 U.S.C. § 230(c)(1), and here, because the information that forms the basis of Anderson’s lawsuit—i.e., TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.
How did you find it in the first place? A search? Without any kind of filtering (that's an algorithm that could be used to manipulate people), all you'll see is pages and pages of SEO.
Opening up liability like this is a quagmire that's not going to do good things for the internet.
«Had Nylah viewed a Blackout Challenge video through TikTok’s search function, rather than through her FYP, then TikTok may be viewed more like a repository of third-party content than an affirmative promoter of such content.»
The question, though, is how do you do a useful search without having some kind of algorithmic answer to what you think the user will like. Explicit user queries or exact-match strings are simple, but if I search "cats" looking for cat videos, how does that list get presented without being a curated list made by the company?
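As one possible answer, here's a minimal sketch that ranks purely on query match and recency, consulting nothing about the viewer. Everything in it (the Video fields, the scoring weights) is invented for illustration, not taken from any real platform:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Video:
        title: str
        tags: list
        uploaded: datetime

    def search(videos, query):
        # Rank by how well the query matches title/tags, then by recency.
        # No user profile, watch history, or demographics are consulted.
        terms = query.lower().split()
        now = datetime.now(timezone.utc)

        def score(v):
            text = (v.title + " " + " ".join(v.tags)).lower()
            matched = sum(term in text for term in terms)     # exact term matches only
            age_days = (now - v.uploaded).total_seconds() / 86400
            return (matched, -age_days)                       # newer videos break ties

        return sorted((v for v in videos if score(v)[0] > 0), key=score, reverse=True)

    # Usage sketch
    results = search(
        [Video("cute cats compilation", ["cats", "pets"], datetime(2024, 8, 1, tzinfo=timezone.utc))],
        "cats",
    )

Of course, someone still chose those weights, which is exactly the question being argued: whether any ranking at all counts as the company's own curation.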
For example, just today there was a highly entertaining and interesting article about how to replace a tablet-based thermostat. And it was posted on the internet, and surfaced via an algorithm on Hacker News.
> Without any kind of filtering (that's an algorithm that could be used to manipulate people)
Do you genuinely believe a judge is going to rule that a Boyer-Moore implementation is fundamentally biased? It seems likely that sticking with standard string matching will remain safe.
How does that work for something like TikTok? Chronological doesn't have much value if you're trying to discover interesting content relevant to your interests.
> Chronological timelines are the best , and will bring back some sanity. Don't like it? don't follow it
You realize that there is immense arrogance in this statement, where you have decided that something is good for me? I am totally fine with YouTube's recommendations or even TikTok's algorithms that, according to you, "manipulate" me.
So basically closer and closer to governmental control over social networks. Seems like a global trend everywhere. Governments will define the rules by which communication services (and social networks) should operate.
IANAL, but it seems to me that Facebook from 20ish years ago would likely be fine under this ruling; it just showed you stuff that people you have marked as friends post. However, if Facebook wants to specifically pick things to surface, that's where potential liability is involved.
The alleged activity in this lawsuit was that TikTok either knew or should have known that it was targeting content to minors that contained challenges likely to result in harm if repeated. That goes well beyond simple moderation, and is even something that various social media companies have argued in court is speech made by the companies.
Not at all. It’s merely a question of whether social networks are shielded from liability for their recommendations, recognizing that what they choose to show you is a form of free expression that may have consequences — not an attempt to control that expression.
Of course Comrade, there must be consequences for these firms pushing Counter-Revolutionary content. They can have free expression, but they must realize these algorithms are causing great harm to the Proletariat by platforming such content.
Brother, the child asphyxiation challenge isn’t political content getting unfairly banned. They would only be liable for harm that can be proven, far as I’m aware, so political speech wouldn’t be affected unless it was defamatory or something like a direct threat.
All large platforms already enact EU law over US law. Moderation is required of all online services which actively target EU users in order to shield themselves from liability for user generated content. The directive in question is 2000/31/EC and is 24 years old already. It's the precursor of the EU DSA and just like it, 2000/31/EC has extraterritorial reach.
I can sue the corporation. I can start a competing corporation.
Elected governments also aren't as free as you'd think. Two parties control 99% of US politics. Suppose I'm not a fan of trade wars; both parties are in favor of them right now.
>I can sue the corporation. I can start a competing corporation.
Ah, the libertarian way.
I, earning $40,000 a year will take on the corporate giant that has a multimillion dollar legal budget and 30 full time lawyers and win... I know, I saw it in a movie once.
The law books are filled with story after story of corporations doing fully illegal shit, then using money to delay it in court for decades... then laughably getting a tiny fine that represents less than 1% of the profits.
>I can sue the corporation. I can start a competing corporation.
Yeah good luck with that buddy. I’m sorry, but you can’t do a thing to these behemoths. At least when a government bends you over it loses your vote, which sorta kinda matters to them. A corporation is incentivized to disregard your interests unless you are profitable to them, in which case they treat you like glorified livestock.
True, but this particular case and Section 230 are only about civil liability. Regardless of the final outcome after the inevitable appeals, no one will go to jail. At most they'll have to pay damages.
I don't know that because it's obviously false. If someone was jailed in relation to such a case then it was because they did something way beyond violating the HOA CC&Rs, such as assaulting an HOA employee or refusing to comply with a court order. HOAs have no police powers and private criminal prosecutions haven't been allowed in any US state for many years.
Google is your friend. Sorry to be so trite, but there are literally dozens upon dozens of sources.
One such example happened in 2008. The man's name is "Joseph Prudente", and he was jailed because he could not pay the HOA fine for a brown lawn. Yes, there was a judge hitting Joseph Prudente with a "contempt of court" to land him in jail (with an end date of "the lawn is fixed or the fine is paid"), but his only "crime" was ever being too poor to maintain his lawn to the HOA's standards.
> “It’s a sad situation,” says [HOA] board president Bob Ryan. “But in the end, I have to say he brought it upon himself.”
It's not my job to do your legal research for you and you're misrepresenting the facts of the case.
As I expected, Mr. Prudente wasn't jailed for violating a HOA rule but rather for refusing to comply with a regular court order. It's a tragic situation and I sympathize with the defendant but when someone buys property in an HOA they agree to comply with the CC&R. If they subsequently lack the financial means to comply then they have the option of selling the property, or of filing bankruptcy which would at least delay most collections activities. HOAs are not charities, and poverty is not a legally valid reason for failing to meet contractual obligations.
So, having a bad lawn is ultimately worse than being convicted of a crime, maybe even of killing someone, since there's no sentence. There's no appeal. There's no concept of "doing your time". Your lawn goes brown, and you can be put in jail forever because they got a court order which makes it all perfectly legal.
> It's not my job to do your legal research for you and you're misrepresenting the facts of the case.
So, since it's not your job, you're happy to be ignorant of what can be found with a simple Google search? It's not looking up legal precedent or finding a section in the reams of law - it's a well reported and repeated story.
And let's be honest with each other - while by the letter of the law he was put into jail for failing to fulfill a court order, in practice he was put into jail for having a bad lawn. I'll go so far to assert that the bits in between don't really matter, since the failure to maintain the lawn lead directly to being in jail until the lawn was fixed.
So no, we don't have a de jure debtor's prison. But we do have a de facto debtor's prison.
Let's be honest with each other: you're attempting to distort and misrepresent what happened in one Florida case to try and support your narrative about what happened in a different and entirely unrelated federal case. The case of Anderson v. TikTok under discussion here doesn't involve a contempt of court order, no one has gone to jail, nor has the trial court even reached a decision on damages.
The reality is that this case is going to spend years working through the normal appeals process. Before anyone panics or celebrates let's be patient and wait for that to run its course. Until that happens it's all speculation. Calm down.
The US legal system gives authority to judges to use contempt orders to jail people when necessary as a last resort. This is essential to make the system work because otherwise some people would just ignore orders with no consequence. Whether the underlying case is about a debt owed to an HOA or any other issue is irrelevant. And the party subject to a contempt order can always take that up with a higher court.
The government tends to have a monopoly on violence, which is quite the difference. A faceless corporation will have a harder time fining you, garnishing your wages, charging you with criminal acts. (For now at least...)
Conversely, the US government in particular will have a harder time with bans (first amendment), shadow bans (sixth amendment), hiding details about their recommendation algorithms (FOIA). The "checks and balances" part is important.
>The government tends to have a monopoly on violence
They don't literally, as can be seen by that guy who got roughed up by the Pinkertons for the horror of accidentally being sent a Magic card he shouldn't have been.
Nobody went to jail for that. So corporations have at least as much power over your life as the government, and you don't get to vote out corporations.
Tell me, how do I "choose a different company" with, for example, Experian, who keeps losing my private info, refuses to assign me a valid credit score despite having a robust financial history, and can legally ruin my life?
> They don't literally, as can be seen by that guy who got roughed up by the Pinkertons for the horror of accidentally being sent a Magic card he shouldn't have been.
Source for that?
I found [1], which sounds like intimidation; maybe a case for assault depending on how they "frightened his wife," but nothing about potential battery, which "roughed up" would seem to imply. The Pinkertons do enough shady stuff that there's no need to exaggerate what they do.
>...By that same logic, you can 'trivially' influence a democratic government, you have no such control over a corporation.
That is a misrepresentation of the message you are replying to:
>>You can trivially choose not to associate with a corporation. You can't really do so with your government.
You won't get into legal trouble if you don't have a Facebook account, or a Twitter account, or if you use a search engine other than Google, etc. Try to ignore the rules set up by your government and you will very quickly learn what having a monopoly of physical force within a given territory means. This is a huge difference between the two.
As far as influencing a government or a corporation, I suspect (for example) that a letter to the CEO of even a large corporation will generally have more impact than a letter to the POTUS. (For example, customer emails forwarded from Bezos: https://www.quora.com/Whats-it-like-to-receive-a-question-ma...). This obviously will vary from company to company and maybe the President does something similar but my guess is maybe not.
Government is force. It is laws, police, courts and the ability to seriously screw up your life if it chooses.
A corporation might have "power" in an economic sense. It might have market significant presence in the marketplace. That presence might pressure or influence you in certain ways that you would prefer it not, such as the fact that all of your friends and family are customers/users of that faceless corporation.
But what the corporation cannot do is put you in jail, seize your assets, prevent you from starting a business, dictate what you can or can't do with your home etc.
Government is a necessary good. I'm no anarchist. But government is far more of a potential threat to liberty than the most "powerful" corporation could ever be.
> But what the corporation cannot do is put you in jail, seize your assets, prevent you from starting a business, dictate what you can or can't do with your home etc.
A corporation can "put me in jail" for copyright violations, accuse me of criminal conduct (happened in the UK, took them years to fix), seize my money (PayPal, etc.), destroy my business (Amazon, Google)...
> But government is far more of a potential threat to liberty than the most "powerful" corporation could ever be.
You (in the US) should vote for a better government. I'll trust my government to protect my liberty over most corporations any day.
No, they can appeal to the state to get them to do it.
But you still think parliament actually controls the government as opposed to Whitehall, so I understand why this may be a little intellectually challenging for you.
I feel like that's a poor interpretation of what happened. Corporations and businesses don't inherently have rights - they only have them because we've granted them certain rights, and we already put limits on them. We don't allow cigarette, alcohol, and marijuana advertising to children, for example. And now they'll have to face the consequences of sending stupid stuff like the "black out challenge" to children.
It's one thing to say, "Some idiot posted this on our platform." It's another thing altogether to promote and endorse the post and send it out to everybody.
Businesses should be held responsible for their actions.
Well as these social networks are increasingly dominating internet use to the level that they end up being the only thing used in the internet by constituent plebeians, it makes sense that they receive as much regulatory oversight as telecom providers do.
I think it is broader than that. It’s government control over the Internet. Sure we’re talking about forced moderation (that is, censorship) and liability issues right now. But it ultimately normalizes a type of intervention and method of control that can extend much further. Just like we’ve seen the Patriot Act normalize many violations of civil liberties, this will go much further. I hope not, but I can’t help but be cynical when I see the degree to which censorship by tech oligarchs has been accepted by society over the last 8 years.
It means that the process of assimilating new information, coming to conclusions, and deciding what a nation should do is carried out in the minds of the public, not in the offices of relatively small groups who decide what they want the government to do, figure out what conclusions would support it, and then make sure the public only assimilates information that would lead them to such conclusions.
Is it really adding governmental control, or is it removing a governmental control? From my perspective, Section 230 was controlling me, a private citizen, by saying "you cannot touch these entities".
The fix was in as soon as both parties came up with a rationale to support it and people openly started speaking about "algorithms" in the same spooky scary tones usually reserved for implied communist threats.
Disclosures: I read the ruling before reading Matt Stoller’s article. I am a subscriber of his. I have written content recommendation algorithms for large audiences. I recommend doing one of these three things.
Section 230 is not canceled. This is a significant but fairly narrow refinement of what constitutes original content and Stoller’s take (“The business model of big tech is over”) is vastly overstating it.
Some kinds of recommendation algorithms produce original content (speech) by selecting and arranging feeds of other user generated content and the creators of the algorithms can be sued for harms caused by those recommendations. This correctly attaches liability to risky business.
The businesses using this model need to exercise a duty of care toward the public. It’s about time they start.
> There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren’t harmed by your product.
More specific than being harmed by your product, Section 230 cares about content you publish and whether you are acting as a publisher (liable for content) or a platform (not liable for content). This quote is supposing what would happen if Section 230 were overturned. But in fact, there is a way that companies would protect themselves: simply don't moderate content at all. Then you act purely as a platform, and don't have to ever worry about being treated as a publisher. Of course, this would turn the whole internet into 4chan, which nobody wants. IMO, this is one of the main reasons Section 230 continues to be used in this way.
Also want to note that the inverse solution that companies could take is to be overly Draconian in moderating content, so as to take down anything that could come back negatively on them (in this case, the role of publisher is assumed and thus content moderation needs to be sufficiently robust so as to cover the company's ass).
To me this decision doesn't feel like it is demolishing 230, but rather reducing its scope, a scope that was expanded by other court decisions. Per the article, 230 said not liable for user content and not liable for restricting content. This case is about liability for reinforcing content.
Would love to have a timeline-only, non-reinforcing content feed.
It might be a cultural difference (I'm not from the US), but leaving a 10-year-old unsupervised with content from (potentially malicious) strangers really throws me off.
Wouldn't this be the perfect precedent-setting case for why minors should not be allowed on social media?
You are correct. US parents often use social media as a babysitter and don't pay attention to what their kids are watching. No 10-year-old should be on social media or even the internet in an unsupervised manner; they are simply too impressionable and trusting. It's just negligence. My kids never got SM accounts before 15, after I'd had time to introduce them to some common sense and much-needed skepticism of people and information on the internet.
Look, your kids are going to discover all kinds of nasty things online or offline, so either you prepare them for it or it's going to be like that scene in Stephen King's Carrie.
I am also a little confused by this. I thought websites were not allowed to collect data from minors under 13 years of age, and that TikTok doesn't allow minors under 13 to create accounts. Why is TikTok not liable for personalizing content to minors? Apparently (from the court filings) TikTok even knew these videos were going viral among children... which should increase their liability under the Children's Online Privacy Protection Act.
Assuming TikTok collects age, the minimum possible age is 13 (ToS), and a parent lets their child access the app despite that, I don’t see how TikTok is liable.
Also, I’m not sure how TikTok would know that the videos are viral among the protected demographic if the protected demographic cannot even put in the information to classify them as such?
I don’t think requiring moderation is the answer in all cases. As an adult, I should be allowed to consume unmoderated content. Should people younger than 18 be allowed to? Maybe.
I agree that below age X, all content should be moderated. If you choose not to do this for your platform, then age-restrict the content. However, historically age-restriction on the internet is an unsolved problem. I think what would be useful is tighter legislation on how this is enforced etc.
This case is not a moderation question. It is a liability question, because a minor has been granted access to age-restricted content. I think the key question is whether TikTok should be liable for the child/their parents having bypassed the age restriction (too easily)? Maybe. I’m leaning towards the opinion that a large amount of this responsibility is on the parents. If this is onerous, then the law should legislate stricter guidelines on content targeting the protected demographic as well as the gates blocking them.
Hurting kids, hurting kids, hurting kids -- but, of course, there is zero chance any of this makes it to the top 30 causes of child mortality. Much to complain about with big tech, but children hanging themselves is just an outlier.
This would be considered "accidental injury" which is the #1 cause of teenager mortality. The #3 cause is suicide which is influenced by social media as well - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6278213/
Part of the reason social media has grown so big and been so profitable is that these platforms have scaled past their own abilities to do what normal companies are required to do.
Facebook has a “marketplace” but no customer support line. Google is serving people scam ads for months, leading to millions in losses. (Imagine if a newspaper did that.) And feeds are allowed to recommend content that would be beyond the pale if a human were curating it. But because “it’s just an algorithm bro” we give them a pass because they can claim plausible deniability.
If fixing this means certain companies can’t scale to a trillion dollars with no customer support, too bad. Google can’t vet every ad? They could, but choose not to. Figure it out.
And content for children should have an even higher bar than that. Kids should not be dying from watching videos.
The key thing people are missing is that TikTok is not being held responsible for the video content itself, they are being held responsible for their own code's actions. The video creator didn't share (or even attempt to share) the video with the victim- TikTok did.
If adults want to subscribe themselves to that content, that is their choice. Hell, if kids actively seek out that content themselves, I don't think companies should be responsible if they find it.
But if the company itself is the one proactively choosing to show that content to kids, that is 100% on them.
This narrative of being blind to the vagaries of their own code is playing dumb at best: we all know what the code we write does, and so do they. They just don't want to admit that it's impossible to moderate that much content themselves with automatic recommendation algorithms.
They could avoid this particular issue entirely by just showing people content they choose to subscribe to, but that doesn't allow them to inject content-based ads to a much broader audience, by showing that content to people who have not expressed interest/ subscribed to that content. And that puts this on them as a business.
> Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech. And now TikTok has to answer for it in court. Basically, the court ruled that when a company is choosing what to show kids and elderly parents, and seeks to keep them addicted to sell more ads, they can’t pretend it’s everyone else’s fault when the inevitable horrible thing happens.
If that reading is correct, then Section 230 isn't nullified, but there's something that isn't shielded from liability any more, which IIUC is basically the "Recommended For You"-type content feed curation algorithms. But I haven't read the ruling itself, so it could potentially be more expansive than that.
But assuming Matt Stoller's analysis there is accurate: frankly, I avoid those recommendation systems like the plague anyway, so if the platforms have to roll them back or at least be a little more thoughtful about how they're implemented, it's not necessarily a bad thing. There's no new liability for what users post (which is good overall IMO), but there can be liability for the platform implementation itself in some cases. But I think we'll have to see how this plays out.
What is "recommended for you" if not a search result with no terms? From a practical point of view, unless you go the route of OnlyFans and disallow discovery on your own website, how do you allow any discovery if any form of algorithmic recommendation is outlawed?
That's just branding. It's called Home in Facebook and Instagram, and it's the exact same thing. It's a form of discovery that's tailored to the user, just like normal searches are (even on Google and Bing etc).
Indeed, regardless of the branding for the feature, the service is making a decision about what to show a given user based on what the service knows about them. That is not a search result with no terms; the user is the term.
Now for a followup question: How does any website surface any content when they're liable for the content?
When you can be held liable for surfacing the wrong (for unclear definitions of wrong) content to the wrong person, even Google could be held liable. Imagine if this child found a blackout video on the fifth page of their search results on "blackout". After all, YouTube hosted such videos as well.
TikTok is not being held liable for hosting and serving the content. They're being held liable for recommending the content to a user with no other search context provided by said user. In this case, it is because the visitor of the site was a young girl that they chose to surface this video and there was no other context. The girl did not search "blackout".
> because the visitor of the site was a young girl that they chose to surface this video
That's one hell of a specific accusation - that they looked at her age alone and determined solely based on that to show her that specific video?
First off, at 10, she should have had an age-gated account that shows curated content specifically for children. There's nothing to indicate that her parents set up such an account for her.
Also, it's well understood that Tiktok takes a user's previously watched videos into account when recommending videos. It can identify traits about the people based off that (and by personal experience, I can assert that it will lock down your account if it thinks you're a child), but they have no hard data on someone's age. Something about her video history triggered displaying this video (alongside thousands of other videos).
Finally, no, the girl did not do a search (that we're aware of). But would the judge's opinion have changed? I don't believe so, based off of their logic. TikTok used an algorithm to recommend a video. TikTok uses that same algorithm with a filter to show search results.
In any case, a tragedy happened. But putting the blame on TikTok seems more like an attack on TikTok than an attempt to rein in the industry at large.
Plus, at some point, we have to ask the question: where were the parents in all of this?
«Had Nylah viewed a Blackout Challenge video through TikTok’s search function, rather than through her FYP, then TikTok may be viewed more like a repository of third-party content than an affirmative promoter of such content.»
You can of course choose not to believe the judges saying it matters for them, but it becomes a very different discussion...
> That's one hell of a specific accusation - that they looked at her age alone and determined solely based on that to show her that specific video?
I suppose I did not phrase that very carefully. What I meant is that they chose to surface the video because a specific young girl visited the site -- one who had a specific history of watched videos.
> In any case, a tragedy happened. But putting the blame on TikTok seems more like an attack on TikTok and not an attempt to reign in the industry at large.
It's always going to start with one case. This could be protectionism but it very well could instead be the start of reining in the industry.
Is not the set of such things offered still editorial judgement?
(And as an addendum, even if you think the answer to that is no, do you trust a judge who can probably barely work an iphone to come to the same conclusion, with your company in the crosshairs?)
I'd say no, because it averages over the entire group. If you ranked based on, say, most liked in your friends circle, or most liked by people with a high cosine similarity to your profile, then it starts to slide back into editorial judgment.
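A rough sketch of where that line sits, with invented data shapes and function names: the first ranking produces the same order for every visitor, while the second tailors the order to one user's profile vector.

    import math

    def global_rank(posts):
        # Same ordering for every visitor: like rate across all users, no personal signals.
        return sorted(posts, key=lambda p: p["likes"] / max(p["views"], 1), reverse=True)

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def personalized_rank(posts, user_vector):
        # Different ordering per visitor: weight each post by how similar its topic
        # vector is to this particular user's profile.
        return sorted(posts, key=lambda p: cosine(p["topic_vector"], user_vector), reverse=True)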
This is only a circuit court ruling - there is a good chance it will be overturned by the supreme court. The cited supreme court case (Moody v. NetChoice) does not require personalization:
> presenting a curated and “edited compilation of [third party] speech” is itself protected speech.
This circuit court case mentions the personalization but doesn't limit its judgment based on its presence - almost any type of curation other than the kind of moderation explicitly exempted by the CDA could create liability, though in practice I don't think "sorting by upvotes with some decay" would end up qualifying.
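For concreteness, "sorting by upvotes with some decay" is roughly this shape. The exponent and offset below are the commonly cited approximation of HN-style ranking, not anything official, and every reader gets the same order:

    def rank_score(upvotes, age_hours, gravity=1.8):
        # Older items sink; identical inputs give identical scores for every reader.
        return (upvotes - 1) / (age_hours + 2) ** gravity

    stories = [
        {"id": 1, "upvotes": 120, "age_hours": 5},
        {"id": 2, "upvotes": 40, "age_hours": 1},
    ]
    stories.sort(key=lambda s: rank_score(s["upvotes"], s["age_hours"]), reverse=True)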
This judge supports censorship and not free speech; it's a tendency of the current generation of judges populating the courts. They prefer government control over personal responsibility in most cases, especially the more conservative they get.
No. Section 230 protects you if you remove objectionable content. This is about deciding which content to show to each individual user. If all your users get the same content, you should be fine.
If they can customize the feed, does that make it their speech or my speech? Like if I give them a "subscribe to x communities" thing with "hide already visited". It'll be a different feed, and algorithmic (I suppose) but user controlled.
I imagine that if you explicitly ask the user "what topics" and then use a program to determine which topic a post falls under, then it's a problem.
I've got a WIP Mastodon client that uses llama3 to follow topics. I suppose that's not releasable.
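For the subscription-plus-hide-visited version, here's a minimal sketch (field names invented) of a feed whose only inputs are rules the user set themselves:

    def user_controlled_feed(posts, subscriptions, already_seen):
        # Only rules the user explicitly set: which communities to include and
        # whether to hide posts they have already visited. No ranking model,
        # no engagement optimization; newest first.
        feed = [
            p for p in posts
            if p["community"] in subscriptions and p["id"] not in already_seen
        ]
        return sorted(feed, key=lambda p: p["posted_at"], reverse=True)

    # Usage sketch
    posts = [
        {"id": "a1", "community": "gardening", "posted_at": 1725000000},
        {"id": "b2", "community": "ml", "posted_at": 1725000500},
    ]
    print(user_controlled_feed(posts, subscriptions={"gardening"}, already_seen=set()))

Whether swapping an LLM into the "which posts match the topics I asked for" step turns this back into the platform's own expressive choice is, I think, exactly the open question.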
Section 230 is alive and well, and this ruling won't impact it. What will change is that US social media firms will move away from certain types of algorithmic recommendations. Tiktok is owned by Bytedance which is a Chinese firm, so in the long run - no real impact.
Anyone know what the reputation of the Third Circuit is? I want to know if this ruling is likely to hold up in the inevitable Supreme Court appeal.
The Ninth Circuit has a reputation as flamingly progressive (see "Grants Pass v. Johnson", where SCOTUS overruled the Ninth Circuit, which had ruled that cities couldn't prevent homeless people from sleeping outside in public parks and sidewalks). The Fifth Circuit has a reactionary reputation (see "Food and Drug Administration v. Alliance for Hippocratic Medicine", which overruled a Fifth Circuit ruling that effectively revoked the FDA approval of the abortion drug mifepristone).
Moderation doesn't scale, it's NP-complete or worse. Massive social networks sans moderation cannot work and cannot be made to work. Social networks require that the moderation system is a super-set of the communication system and that's not cost effective (except where the two are co-extensive, e.g. Wikipedia, Hacker News, Fediverse.) We tried it because of ignorance (in the first place) and greed (subsequently). This ruling is just recognizing reality.
No. Moderation is about not allowing objectionable content at all (or at the very least putting up roadblocks to passive consumption). It's different from allowing objectionable content, letting users seek it out but not promoting it. And it's yet another thing again to not only allow it, but also proactively put it in front of somebody's eyeballs.
Seems like semantic quibbles to me, but then we're talking about law and computers...
The distinction between "conduit" and "publisher" seems compelling to me. Once a company has pierced the veil and looked at the content of the messages they transmit to me it doesn't really matter how they are modifying the stream they still should be liable for what they transmit.
E.g. if someone uses Twitter to send 12-year-old actress Jenna Ortega a photograph of a penis, then Twitter should be liable for aiding and abetting child abuse. (In my opinion.)
I'm not sure that Big Tech is over. Media companies have had a viable business forever. What happens here is that instead of going to social media and hearing about how to fight insurance companies, you'll just get NFL Wednesday Night Football Presented By TikTok.
There's no reason, as far as I'm concerned, that we shouldn't have a choice in algorithms on social media platforms. I want to be able to pick an open source algorithm that I can understand the pros and cons of. Hell, let me pick 5. Why not?
> the internet grew tremendously, encompassing the kinds of activities that did not exist in 1996
I guess that's one way to say that you never experienced the early internet. In three words: rotten dot com. Makes all the N-chans look like teenagers smoking on the corner, and Facebook et al. look like toddlers in padded cribs.
This will frankly hurt any and all attempts to host any content online, and if anyone can survive it, it will be the biggest corporations alone. Section 230 also protected ISPs and hosting companies (Linode, Hetzner, etc.) after all.
Their targeting may not be intentional, but will that matter? Are they willing to be jailed in a foreign country because of their perceived inaction?
Thanks to "Contempt of Court" anybody can go to jail, even if they're not found liable for the presented case.
But more on point, we're discussing modification of how laws are interpreted. If someone can be held civilly liable, why can't they be held criminally liable if the "recommended" content breaks criminal laws (CSAM, for example)? There's nothing that prevents this interpretation from being considered in a criminal case.
Section 230 already doesn't apply to content that breaks federal criminal liability, so CSAM is already exempted. Certain third-party liability cases will still be protected by the First Amendment (no third-party liability without knowledge of CSAM, for example) but won't be dismissed early by Section 230.
"In other words, the fundamental issue here is not really whether big tech platforms should be regulated as speakers, as that's a misconception of what they do. They don't speak, they are middlemen."
I think a bigger issue in this case is the age. A 10-year old should not have access to TikTok unsupervised, especially when the ToS states the 13-year age threshold, regardless of the law’s opinion on moderation.
I think especially content for children should be much more severely restricted, as it is with other media.
It’s pretty well-known that age is easy to fake on the internet. I think that’s something that needs tightening as well. I’m not sure what the best way to approach it is though. There’s a parental education aspect, but I don’t see how general content on the internet can be restricted without putting everything behind an ID-verified login screen or mandating parental filters, which seems quite unrealistic.
> I’m not sure what the best way to approach it is though.
Pretty much every option is full of pain, but I think the least-terrible approach would be for sites to describe content with metadata (e.g. HTTP headers) and push all responsibility for blocking/filtering onto the client device (a rough sketch follows the list below).
This has several benefits:
1. Cost. The people paying the most expense for the development and maintenance of blocking infrastructure will be the same parents who want to actually use it, instead of creating an enormous implicit tax on the entire digital world.
2. Privacy. The websites of the world don't need to know anything at all about the user. No birthdays, no geographical information to figure out what legal jurisdiction they live in, and no giant national lookup database that can track every website any resident registers to. Just isolated local devices that could be as simple as a Boolean for whether the child lock is currently enabled. (In practice I'm sure there will be local user accounts.)
3. Leveraging physical security. Parents do not need to be programmers to understand and enforce "little Timmy shouldn't be using anything except the tablet we specially set up for him that's covered with stickers of his favorite cartoon." Sure, Timmy might gain access to an unlocked device, but that's a challenge parents and communities are equipped to understand and handle.
4. Rule complexity. The individual devices can be programmed with whatever the local legal rules are for ages of majority, or it can simply be parents' responsibility to change things on a notable birthday. Parents who think ankles on women should never be shown at any age would be responsible for putting on plugins that add extra restrictions, instead of forcing that logic on the rest of the world.
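To make the idea concrete, a minimal sketch under those assumptions. The "Content-Rating" header name and its values are invented for illustration; no such standard header exists, and a real scheme would need an agreed vocabulary:

    # Server side: each response labels its own content; nothing about the user is needed.
    # "Content-Rating" is a made-up header name used only for this illustration.
    def make_response_headers(page_rating):
        return {
            "Content-Type": "text/html",
            "Content-Rating": page_rating,  # e.g. "general" or "mature" (invented vocabulary)
        }

    # Client side: the device the parent configured decides whether to render.
    def should_render(headers, device_policy):
        rating = headers.get("Content-Rating", "unrated")
        return rating in device_policy["allowed_ratings"]

    child_tablet_policy = {"allowed_ratings": {"general"}}
    print(should_render(make_response_headers("mature"), child_tablet_policy))  # False: blocked locally

The privacy benefit above shows up directly here: the server never learns the user's age or jurisdiction, because the decision happens entirely on the device.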
I think this is the most privacy-friendly and reasonable approach. However, as a devil’s advocate, this is still pretty fingerprintable.
“Most users load n pages with ankles, the likelihood of a user only loading a single page with ankles is someone under the age of X from country Y with Z% likelihood”
Finally, this points toward the end of global social media. Jurisdiction cannot be used as a weapon; if you use it as one, others won't hesitate to use it against you in return.
I hope this makes certain streaming platforms liable for the things certain podcast hosts say while they shovel money at and promote them above other content.
I am guessing this is about spotify and joe rogan - they would have a pretty tough time pleading section 230 for content they fully sponsor and exclusively publish, with or without the decision in question.
So under this new reading of the law, is it saying that AWS is still not liable for what someone says on reddit, but now reddit might be responsible for it?
It is amazing how people were programmed to completely forget the meaning of Section 230 over the years just by repetition of the stupidest propaganda.
> Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech.
This is fascinating and raises some interesting questions about where the liability starts and stops i.e. is "trending/top right now/posts from following" the same as a tailored algorithm per user? Does Amazon become culpable for products on their marketplace? etc.
For good or for bad, this century's Silicon Valley was built on Section 230 and I don't foresee it disappearing any time soon. If anything, I suspect it will be supported and refined by future legislation rather than removed. No one wants to be the person who legislates away all online services...
With no sense of irony, this blog is written on a platform that allows some Nazis, algorithmically promotes publishers, allows comments, and is thus only financially viable because of Section 230.
If you actually want to understand something about the decision, I highly recommend Eric Goldman's blog post:
My interpretation of this is it will push social media companies to take a less active role in what they recommend to their users. It should not be possible to intentionally curate content while simultaneously avoiding the burden of removing content which would cause direct harm justifying a lawsuit. Could not be more excited to see this.
While this guy's missives are not always on target (his one supporting the DOJ's laughable and absurd case against Apple being an example of failure), some are on target... and indeed this ruling correctly calls out sites for exerting editorial control.
If you're going to throw up your hands and say, "Well, users posted this, not us!" then you'd better not promote or bury any content with any algorithm, period. These assholes (TikTok et al) are now getting what they asked for with their abusive behavior.
I put a few forums online that never got active users. What they did get was spam, plenty of it, a lot of it. We can imagine the sheer amount of garbage posted on hn, reddit, Facebook etc
Deleting the useless garbage, one has to develop an idea of where the line is supposed to be. The bias there will eventually touch all angles of human discourse. As an audience matures, it gets more obvious what they would consider interesting or annoying. More bias.
Then there are legal limits in each country, the "correct" religion, and nationalism.
Not that it matters, but I was curious and so I looked it up: the three-judge panel comprised one Obama-appointed judge and two Trump-appointed judges.
This could result in the total destruction of social media sites. Facebook, TikTok, Youtube, Twitter, hell even Linkedin cannot possibly survive if they have to take responsibility for what users post.
I don’t understand how people can be so confident that this will only lead to good things.
First, this seems like courts directly overruling the explicit wishes of Congress. As much as Congress critters complain about CDA Sec 230, they can’t agree on any improvements. Judges throwing a wrench at it won’t improve it; they will only cause more uncertainty.
Not liking what social media has done to people doesn’t seem like a good reason to potentially destroy the entire corpus of videos created on YouTube.
Congress did not anticipate the type of algorithmic curation that the modern internet is built on. At the time, if you were to hire someone to create a daily list of suggested reading, that list would not be subject to 230 protections. However, with the rise of algorithmic media, that is precisely what modern social media companies have been doing.
I can make a decent argument that even offering a basic “sort by” dropdown, where the platform sets a _default_ sort, classifies it as an “algorithm”.
I’m arguing that judges shouldn’t be tearing down the existing legal regime without someone actively planning for a replacement that doesn’t have massive _unintended consequences_ (which all of the proposed Sec 230 reforms have)
And the fact that you only mentioned “modern social media companies” means that you are also underestimating which offerings qualify. Sec 230 protects all websites and apps that show _any_ user content to other users, not just the large social media companies. Think review sites, online classifieds, your group chat in a Messenger app, blog comments, social recipe sites, shared bookmarking sites, the “memo” field of transactions in every blockchain, etc.
And the obvious worry I have is that new jurisprudence starts pulling Jenga pieces away from AI chatbots before Congress even decides whether those qualify as a platform, a publisher, or something completely different.
The original video is still the original poster's comment, and thus still 230 protected. If the kid searched specifically for the video and found it, TikTok would have been safe.
However, TikTok's decision to show the video to the child is TikTok's speech, and TikTok is liable for that decision.
If the child hears the term "blackout" and searches for it on TikTok and reaches the same video, is that TikTok's speech - fault - as well? TikTok used an algorithm to sort search results, after all.
> However, TikTok's decision to show the video to the child is TikTok's speech, and TikTok is liable for that decision.
How is my interpretation incorrect, please? TikTok (or any other website like Google) can show a video to a child in any number of ways - all of which could be considered to be their speech.
Aah, I counted paragraphs - repeatedly - for some reason. That's my bad.
That said, this is a statement completely unsubstantiated in the original post or in the post that it links to, or the decision in TFA. It's the poster's opinion stated as if it were a fact or a part of the Judge's ruling.
"We reach this conclusion specifically because TikTok’s promotion of a Blackout Challenge video on Nylah’s FYP was not contingent upon any specific user input. Had Nylah viewed a Blackout Challenge video through TikTok’s search function, rather than through her FYP, then TikTok may be viewed more like a repository of third-party content than an affirmative promoter of such content."
Well, if we consider the various social media sites:
Meta - Helped facilitate multiple ethnic cleansings.
Twitter - Now a site run by white supremacists for white supremacists.
Youtube - Provides platforms to Matt Walsh, Ben Shapiro and a whole constellation of conspiracy theorist nonsense.
Reddit - Initially grew its userbase through hosting of softcore CP, one of the biggest pro-ana sites on the web and a myriad of smaller but no less vile subreddits. Even if they try to put on a respectable mask now its still a cesspit.
Linkedin - Somehow has the least well adjusted userbase of them all, its destruction would do its users a kindness.
My opinion of social media goes far and beyond what anyone could consider "not liking".
In any case, it would mean that those videos would have to be self-hosted and published; we'd see an en masse return of websites like College Humor and Cracked and the like, albeit without the comments switched on.
You are arguing that mega social media companies should not be immune from liability. I could care less about those companies, but I do care about the unintended consequences of this ruling.
The mega social media companies weren’t immune from liability before the ruling. But they are mega corporations, with lots of attorneys on retainer who craft the ToS / EULA / other contracts to shed legal risk. Even this ruling isn’t likely to hurt them much in the long run.
The “baby” is every small blog that allows comments, every store / product review, every social bookmarking site, every game with multi-player chat, etc. These are examples of features available because of Sec230 protections. If some enterprising attorney can spin the _default sort order_ into a statistically significant harm to their plaintiff, every single website/app just became a far bigger target for litigation. And even if they can’t, now every mom and pop website will have to pay an attorney to find out if the plaintiff has a case according to this new vague standard.
And this happened because of a judge / jurisprudence, not because of a lawmaking body that solicited feedback from both companies and consumers. This ruling is likely to stand no matter the legal / social / economic fallout.
Craigslist already lost the Personals section to Sec230-modifying legislation. That was a drop in the bucket compared to what we could lose from this ruling.
> In any case, it would mean that those videos would have to be self hosted and published, we'd see an en masse return of websites like college humor and cracked and the like, albeit without the comments switched on.
You seem to be making many assumptions here.
(1) I don’t think this cripples the mega social media companies. They already have thousands of attorneys — they will be busy for a year shedding liability risk by crafting more onerous ToS that we all end up agreeing to.
(2) Nobody “self hosts” from soup-to-nuts (except for the biggest companies in the world). Your ISP, your DNS provider, your cloud host, etc. all benefit from Sec 230 protections to some extent. We have to wait to see the fallout of those layers.
(3) Companies like College Humor and Cracked benefitted from viral marketing of social networks. If your implied expectation comes true and big social media companies are crippled by this ruling, there will be fewer upstart acts like College Humor and Cracked that grow to become something notable.
(4) Even small companies like College Humor and Cracked won’t be immune from this new redefinition of the line between platform/publisher and speech. My suspicion is this ruling pulled out a Jenga piece, but it will be a while before we see how the tower of internet economics falls.
The person you're responding to didn't say they were confident about anything, they said (cynically, it seems to me) that it could lead to the end of many social media sites, and that'd be a good thing in their opinion.
This is a pedantic thing to point out, but I do it because the comment has been downvoted, and the top response to it seems to misunderstand it, so it's possible others did too.
I’m pretty sure I didn’t misunderstand the parent comment.
I just didn’t choose to address only that comment in my reply — I spun my reply in the context of all of the previous discussions about Sec230 reform — because that’s the underlying worry that I have. I could care less if the biggest social media companies die off, but I _do_ worry about all of the other unintended consequences of this ruling (more accurately, changing the definition of “speech” to include something that is so far removed from direct and intentional).
But it’s more likely to go the other way around: the big sites with their expensive legal teams will learn how to thread the needle to remain compliant with the law, probably by oppressively moderating and restricting user content even more than they already do, while hosting independent sites and forums with any sort of user‐submitted content will become completely untenable due to the hammer of liability.
Negative externalities aside, social media has been the most revolutionary and transformative paradigm shift in mass communication and culture since possibly the invention of the telegraph. Yes something that provides real value to many people would be lost if all of that were torn asunder.
What is likely to happen is that Government will lean on "friendly" platforms that cooperate in order to do political things that should be illegal, in exchange for looking the other way on things the government should stop. This is the conclusion I came to after watching Bryan Lunduke's reporting on the recent telegram arrest.[1]
There's nothing in the article about making the social media sites liable for what their users post. However, they're made liable for how they recommend content to their users, at least in certain cases.
The return of the self-hosted, blog-type internet where we go to more than 7 websites? One can dream. Where someone needs an IQ over 70 to post every thought in their head to the universe? Yes, that’s a world I’d love to return to.
>Where someone needs an IQ over 70 to post every thought in their head to the universe? Yes that’s a world I’d love to return to.
I remember the internet pre social media but I don't exactly remember it being filled with the sparkling wit of genius.
The internet is supposed to belong to everyone, it wasn't meant to be a playground only for a few nerds. It's really sad that hacker culture has gotten this angry and elitist. It means no one will ever create anything with as much egalitarian potential as the internet again.
Nah, ISPs (and webhosts) are protected by Section 230 as well, and they're likely to drift into the lawyer's sights as well - intentionally or unintentionally.
... because it's small tech that need Section 230. If anything, retraction of 230 will be the real free ride for big tech, because it will kill all chance of threatening competition at the next level down.
Insane reframing. Big tech and politicians are pushing this, pulling the ladder up behind them-- X and new decentralized networks are a threat to their hegemony and this is who they are going after. Startups will not be able to afford whatever bullshit regulatory framework they force feed us. How about they mandate any social network over 10M MAU has to publish their content algorithms.. ha!
>There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren’t harmed by your product.
So, we actually have to watch out for kids, and maybe only have a 25% profit margin? Oh, so terrible! /s
I'm 100% against the political use of censorship, but 100% for the reasonable use of government to promote the general welfare, secure the blessings of liberty for ourselves, and our posterity.
Right? I missed the part where a business is "entitled" to that. There was a really good quote I've never been able to find again, along the lines of "just because a business has always done things a certain way, doesn't mean they are exempt from changes".
"There has grown up in the minds of certain groups in this country the notion that because a man or corporation has made a profit out of the public for a number of years, the government and the courts are charged with the duty of guaranteeing such profit in the future, even in the face of changing circumstances and contrary to the public interest. This strange doctrine is not supported by statute or common law. Neither individuals nor corporations have any right to come into court and ask that the clock of history be stopped, or turned back."
This is a typical anglosphere move: Write another holy checklist (I mean, "Great Charter"), indoctrinate the plebes into thinking that they were made free because of it (they weren't), then as soon as one of the bulleted items leaves the regime's hiney exposed, have the "judges" conjure a new interpretation out of thin-air for as long as they think the threat persists.
Whether it was Eugene Debs being thrown in the pokey, or every Japanese civilian on the west coast, or some harmless muslim suburbanite getting waterboarded, nothing ever changes. Wake me up when they actually do something to Facebook.
When I see CEOs and CFOs going to prison for the actions of their corporations, then I'll believe laws actually make things better. Otherwise, any court decision that says some action is now illegal is just posturing.
What I want to sink in for people is that whenever people talk about an "algorithm", they're regurgitating propaganda specifically designed to absolve the purveyor of responsibility for anything that algorithm does.
An algorithm in this context is nothing more than a reflection of what all the humans who created it designed it to do. In this case, it's to deny Medicaid to make money. For RealPage, it's to drive up rents for profit. Health insurance companies are using "AI" to deny claims and prior authorizations, forcing claimants to go through more hoops to get their coverage. Why? Because the extra hoops will discourage a certain percentage.
All of these systems come down to a waterfall of steps you need to go through. Good design will remove steps to increase the pass rate. Intentional bad design will add steps and/or lower the pass rate.
Example: in the early days of e-commerce, you had to create an account before you could shop. Someone (probably Amazon) realized they lost customers this way. The result? You could create a shopping cart all you want and you didn't have to create an account until you checked out. At this point you're already invested. The overall conversion rate is higher. Even later, registration itself became optional.
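The "waterfall of steps" point is just multiplication of per-step pass rates. A toy example with invented numbers:

    def funnel_conversion(step_pass_rates):
        # Overall conversion is the product of each step's pass rate,
        # so every added step can only keep it the same or lower it.
        total = 1.0
        for rate in step_pass_rates:
            total *= rate
        return total

    with_forced_signup = funnel_conversion([0.9, 0.6, 0.8])  # browse -> create account -> checkout
    guest_checkout = funnel_conversion([0.9, 0.8])           # browse -> checkout (account optional)
    print(with_forced_signup, guest_checkout)                # 0.432 vs 0.72

Dropping the forced-registration step in this made-up example raises overall conversion from about 43% to 72%; the same arithmetic works in reverse when steps are added to discourage claims.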
Additionally, these big consulting companies are nothing more than leeches designed to drain the public purse