Social Media Platforms as Common Carriers? [pdf] (ucla.edu)
108 points by amadeuspagel on July 7, 2021 | 88 comments



My biggest technical issue with this "common carrier" movement is spam. The author mentions it, but seems to suggest that users can simply block or ignore spammers. As somebody who witnessed the commercialization of the internet, I find this extremely naive. We generally think of email as a "common carrier," but even there, spam is blocked. Blocking spam is censorship. Even the 1% of spam that slips past automatic detection will still overwhelm non-spam content.

If I lose common carrier status because I'm blocking spam, and thus section 230 protections, am I on the hook if I fail to block a fraudulent spam comment that leads to a user being harmed?


> We generally think of email as a "common carrier," but even there, spam is blocked. Blocking spam is censorship. Even the 1% of spam that slips past automatic detection will still overwhelm non-spam content.

These aren't quite the same. The paper recommends common carrier protections only for publishing that people explicitly seek out or subscribe to. If you're subscribed to spam, the solution is obvious and technically trivial (unsubscribe).

The spam problem for email is not trivial because email is, by design, a public addressing system which permits unsolicited messaging.


Well, the common carrier approach would not necessarily mean that every user can publish what they want.

Common carrier treatment arises when you control a unique piece of infrastructure, or, more loosely defined, a dominant one. As a result you need to provide non-discriminatory access for competitors.

In my interpretation, this requirement would also be satisfied by Facebook allowing publishers and companies to create pages. A minimal threshold of importance that's equal for all page owners might be valid.

And the paper touches on this, mentioning that there's a valid role in moderating comments:

> There might thus be reason to leave platforms free to moderate comments (rather than just authorizing users to do that for their own pages), even if one wants to stop platforms from deleting authors’ pages or authors’ posts from their pages


Simple: outsource the moderation. Provide a common access API through which third parties can provide moderation. Whether a user's home feed resembles 4chan or the New Yorker is entirely up to them.
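
A minimal sketch of how that could look (all names here are hypothetical; the platform serves raw posts, and each third-party moderator is just a predicate the user opts into):

    # Hypothetical pluggable-moderation sketch: the platform serves raw
    # posts; the user chooses which third-party filters to apply.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Post:
        author: str
        text: str

    Moderator = Callable[[Post], bool]  # True = keep, False = hide

    def build_feed(posts: list[Post], moderators: list[Moderator]) -> list[Post]:
        # A post survives only if every moderator the user opted into allows it.
        return [p for p in posts if all(m(p) for m in moderators)]

    posts = [Post("alice", "hello"), Post("bob", "BUY SPAM NOW")]
    strict = lambda p: "spam" not in p.text.lower()
    print(build_feed(posts, [strict]))  # New Yorker mode: alice only
    print(build_feed(posts, []))        # 4chan mode: everything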


I don't think this is a major issue at all. All the common carriers (phone lines, cell phones, the public square in a shopping mall, etc.) designated by law have (or can have) spam problems. This does not change a thing. No serious common carrier will challenge the court with the argument: because I can ban spammers, I can ban anyone I want.


Phone companies are regulated as common carriers, but there's an exception allowing them "to block by default illegal or unwanted calls based on reasonable call analytics before the calls reach consumers."[1]

[1]: https://www.fcc.gov/consumers/guides/stop-unwanted-robocalls...


But that puts us back where we started:

FB et al. will argue that they are blocking unwanted messages based on reasonable analytics. So we will continue to see exactly the same type of feed curation as today.

The only difference (maybe) is that a platform (probably) can't just wholesale ban a particular person... but I bet that even the phone company is allowed to ban a person who is deemed sufficiently abusive or harassing.

The fundamental problem is that there is a vocal minority who really, really don't want to believe that they are the vocal minority who everyone else wants to ignore. There isn't some magic set of laws and regulations you can create to force the silent majority to listen to you when they think you're dumb and annoying.

The First Amendment grants you the right to speak and the right for people (who wish to) to listen, but it doesn't grant you the right to an audience for your speech.


If you really believe that applying common carrier laws to social media companies, with the same exception for spam that phone companies have, wouldn't make a difference, you have no reason to oppose that, right?


> There isn't some magic set of laws and regulations you can create to force the silent majority to listen to you when they think you're dumb and annoying.

You're not applying the rule correctly. Twitter and FB are preventing, say, Trump from posting at all, even to those who want to receive his messages, so they aren't just blocking his messages to recipients who don't want to receive them. That goes beyond simply "blocking unwanted messages".


In a decentralized system, users could have various "vouches" for the authenticity of their messages.

A trusted user could vouch for a new user, and/or the user could go through various types of third-party verification, such as phone number or ID checks, and the third party could then vouch for them on that basis.

You could even have monetary vouches, where a third party vouched that the user paid a certain amount of money, solved a captcha, or solved some computational problem.

The user could be in control of what measures would be required to reach their screen. As it stands, social media essentially incorporates a combination of trusted user vouching (you can see who is a friend of a friend), and monetary vouching (advertising).
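
As a rough sketch of the idea (every field name below is invented for illustration), a vouch could be little more than a signed assertion about an identity, with each recipient deciding which kinds of vouches suffice to reach their screen:

    # Hypothetical vouch records for a decentralized identity system.
    from dataclasses import dataclass, field

    @dataclass
    class Vouch:
        voucher: str    # who vouches: a trusted user or a third-party verifier
        subject: str    # the user being vouched for
        kind: str       # e.g. "trusted-user", "phone", "id-check", "payment", "captcha"
        signature: str  # voucher's signature over (subject, kind); verification omitted

    @dataclass
    class InboxPolicy:
        # The recipient controls which vouch kinds are enough to reach them.
        accepted: set = field(default_factory=lambda: {"trusted-user", "id-check"})

        def admits(self, vouches: list) -> bool:
            return any(v.kind in self.accepted for v in vouches)

    policy = InboxPolicy()
    print(policy.admits([Vouch("carol", "dave", "captcha", "sig")]))   # False
    print(policy.admits([Vouch("carol", "dave", "id-check", "sig")]))  # True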


Couldn't a social media site implement support for custom blocklists or moderators, and have users opt into them? Facebook can have their own Official TM Facebook Moderation Blocklist, which users can subscribe to and filter out all spam and hate according to Facebook's algorithm. Or users can choose not to and subscribe to someone else's or no blocklist at all. Therefore, there wouldn't be any issue with common carrier status since users can post without being hindered, but other users can tailor their feeds to only see posts they care about.
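
In code, that could be as simple as hiding authors who appear on any blocklist the user has subscribed to (the list names below are made up):

    # Hypothetical subscribable blocklists: each is just a set of author IDs.
    official_fb_list = {"spammer42", "hatebot"}  # platform-curated
    community_list = {"spammer42", "troll7"}     # third-party

    def filtered_feed(posts, subscribed_lists):
        # Posts are never deleted; they are only hidden for subscribers.
        blocked = set().union(*subscribed_lists) if subscribed_lists else set()
        return [p for p in posts if p["author"] not in blocked]

    posts = [{"author": "alice", "text": "hi"}, {"author": "troll7", "text": "..."}]
    print(filtered_feed(posts, [official_fb_list, community_list]))  # alice only
    print(filtered_feed(posts, []))                                  # no blocklists: everything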


I think the biggest problem is less spam and more harassment. Simply blocking harassers isn't as effective as blocking spam, due to the way social networks generally work. The harasser can just move on to your connections, which is potentially even worse than them harassing you. Imagine an ex-partner sending revenge porn to all your contacts. Fighting that requires the ability for users to report potential bad actors and for the platform to have a centralized moderation mechanism to stop those bad actors.


He could just have done that using phone calls or standard mail. The government handles those cases; we don't expect phone or postal companies to inspect what we send and ban us if they don't like what we send.


Or not making porn.


Not all online harassment involves porn. I'd wager that the vast majority doesn't. You can be squeaky-clean porn-wise and still be the target of online harassment campaigns.


I think this is a common failing of legal analysis of tech problems. Some years ago Lessig made the argument that radio bandwidth should be unregulated and belong to all, instead of licensed to cellphone carriers. The technical basis for the argument was that this is how Ethernet hubs work, and they work great. Of course, nobody uses a hub today.


Could you explain what the common failing is?

To underestimate how technical changes affect real-life usage?

I often see the opposite fallacy: that tech people assume technically optimal solutions to be socially and legally optimal, which they often are not. To transfer this to your example: perhaps the social and economic landscape of society would be better off if the spectrum belonged to us all. Maybe we would have developed legal and social norms to handle its hub-like nature?


Assuming some existing technical solution is optimal. Hubs existed for a time because they were cheap, not because they were good. Or assuming that some solution to an unsolved problem will be pulled out of the magical innovation hat: "We'll just devise more efficient time-sharing algorithms."


Thanks, the first fallacy is great, I had not thought of this before. It's similar to how people won't deem a change possible until it has happened, at which point they act as if it had always existed (i.e. once convinced, they overwrite the memory of their own past opinion).


The user has control over what is marked as spam, and can see spam messages if needed. If you block some content the way gmail blocks spam, it won't be censorship.


> If you block some content the way gmail blocks spam, it won't be censorship.

Are you thinking of the spam folder? Because the primary way that gmail blocks "spam" is by never delivering the email at all. Not to your spam folder, not anywhere.

I learned about this when I was unable to receive email from a personal friend.


Yes, I was talking about the spam folder. I have never heard of this other way of blocking, and can't find anything about it on the internet. Is this behaviour documented anywhere?



I guess part of my question with spam lately is why can't we just go after the spammers? It seems to me that acting as a public nuisance would be enough for the government to take action.


Because most of the spam is coming from other countries and the cost of tracking down every spammer is prohibitive.


To me, the question is how a common carrier status can coexist with content curation and promotion.

Looking at the ur-example of the phone company, AT&T doesn't try to preemptively place calls that a household "might be interested in based on analytics."

The modern social media landscape, however, is dominated by "the algorithm." Youtube lives on its recommendations; Twitter wants you to keep refreshing its feed; even Hacker News tries to sort the front page by a combination of novelty and interest. In my opinion, this is a continuous, editorial judgment by the outlet, and it is very different from the idea of a "neutral public square" that exists as an objective point in space.
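
Even the mildest version of "the algorithm" is an editorial choice. For instance, the commonly quoted approximation of HN's front-page ranking (the exact production formula may differ) bakes in a preference for novelty:

    # Commonly quoted approximation of the HN ranking: upvotes push a story
    # up, while age decays its score, trading "interest" against "novelty".
    def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
        return (points - 1) / (age_hours + 2) ** gravity

    print(rank_score(points=100, age_hours=1))   # fresh and popular: ~13.7
    print(rank_score(points=100, age_hours=24))  # same points a day later: ~0.28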

The fight isn't really about whether an individual person should have a right to send messages to other individual people on social media platforms; it's about whether the media platform has a responsibility to actively promote views without regard for their content. It's not about forcing the "shopping mall" to allow protesters (PruneYard, from the article), but instead it's about forcing the "shopping mall" to advertise that it's hosting protesters and include them on its maps and business lists.

(Edit to add:)

The article discusses this point somewhat in its section E, related to compelled recommendations, but I think the topic deserves much broader analysis. A "legally viewpoint-neutral Twitter" would be a Pyrrhic victory for proponents if Twitter could still restrict users' visibility to direct subscribers/replies only, denying the public part of its platform.


I think you will be surprised at how DISENGAGED folks are vis-à-vis a phone call on their AT&T line.

Why? Because a damn ton of them are spam/scams. So yes, AT&T doesn't filter calls, and users have learned not to answer them.


The phone system is a failure of innovation. Some kind of verified caller ID should be mandatory. There is no technical reason something akin to DNS for phone numbers can't exist. Phone companies would rather cash checks than innovate.


Unspoofable caller ID is belatedly happening with STIR/SHAKEN but in general your point stands.


Belated? This all should have happened 20 years ago!


> A "legally viewpoint-neutral Twitter" would be a Pyrrhic victory for proponents if Twitter could still restrict user's visibility to direct subscribers/replies only, denying the public part of its platform.

I don't agree. I think that would be a huge victory. It would certainly not be pyrrhic in the sense that we would lose anything.


If Twitter hides Trump from everyone who doesn't follow him, but shows all his messages to everyone who follows him, what is the problem then?


The problem for Twitter is that Trump used their platform to further incite a mob as they ransacked the US Capitol. Twitter doesn't want to be a conduit from Trump to his followers if the messages he delivers incite them to commit acts of violence and insurrection.

There is a direct line from Trump tweeting about how Pence lacked the courage to deliver them the presidency by certifying what Trump called a "fraudulent vote", to Trump's followers chanting "Hang Mike Pence" in the halls of the Capitol when they got the message.


You may interpret it this way, but U.S. law wouldn't designate those speeches as incitement to violence. A "direct line" is simply insufficient.


The problem from Trump's perspective is that he was getting a ton of engagement from non-followers and he wants that level of engagement back.


These social media platforms don’t act like common carriers.

It’s a fundamental necessity of the business that social media platforms be able to deliver enough good content that users want to join and engage while filtering out enough bad content that users don’t get fed up and leave.

So they implement all kinds of mechanisms designed to promote and emphasize “good” content and minimize “bad” content. It’s important to understand that “good” means good for business and “bad” means bad for business, and of course there are sometimes short-term vs long-term considerations to balance.

Mechanisms like search rank algorithms, retweets, comments, likes, content policies, moderation, flags, bans, etc.

If the government takes some of these mechanisms away from a social media platform they will (1) fundamentally change the nature of the social media platform — quite possibly breaking what made it a desirable place to share in the first place; (2) prevent these companies from making business decisions in their own business interests.

Imagine HN with no moderation, or your favorite narrow-interest subreddits full of spam and political posts with no one allowed to moderate them or ban the crap posters. It entirely defeats the purpose.

Is this really what we want?

Also, remember that platforms much more permissive than the big social media platforms exist that are accessible over the same internet, not to mention plain old web sites. So what exactly is the need to force, say Facebook or Twitter to have very permissive content policies?

I think what many people concerned with this actually want is for social media platforms to continue to moderate and filter bad content and promote good content, but they want to tell the platform what “good” and “bad” mean according to their own opinions rather than letting them determine that for themselves.


>It’s important to understand that “good” means good for business and “bad” means bad for business.

The 'good' here is what people 'want', or at least what they think they want, not what they 'need'. Over time the users, unconsciously or consciously (influencers), realize that the system overwhelmingly favors this behavior and resort to broadcasting only content which people want, irrespective of whether it's factual, ends up hurting those who consume it in the long run, or is something they themselves don't even believe in.

You can verify this yourself: subscribe to a Twitter topic of your expertise and check the home screen (mobile app), or, if you're brave enough, check the home page of LinkedIn, and you'll see the algorithm's overwhelming bias towards conformity.


The article already clearly distinguishes the hosting function from the recommendation function. What would make those social media platforms common carriers is their hosting function; you are talking about their recommendation function, which can be technically separated from it. For example, if they think Trump is bad, just hide him from everyone not following him, while still showing his messages to everyone who does follow him, for the hosting purpose.
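
That separation can be expressed as a simple visibility rule (a sketch; the names are mine, not the paper's):

    # Hypothetical split: hosting (must-carry for subscribers) vs.
    # recommendation (the platform's own editorial choice).
    def visible_to(viewer_follows: set, author: str, platform_recommends: bool) -> bool:
        if author in viewer_follows:
            return True             # hosting function: followers always see the post
        return platform_recommends  # recommendation function: platform may decline

    print(visible_to({"trump"}, "trump", platform_recommends=False))  # True: follower sees it
    print(visible_to(set(), "trump", platform_recommends=False))      # False: hidden from non-followers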


But all that spam would come into play with "generic" feeds. You don't expect spam from people, groups and organizations that you subscribe to.

You can think of "discovery feeds" as an addon - and maybe even not provided by the same company providing the social network itself.

Cherry on top? You would have to actively subscribe to those feeds, forcing them to be interesting/relevant.


But you're back to the problem of breaking the business model of these companies. If everything was opt-in, subscription-only, that would fundamentally change these platforms.

Even then, with zero moderation, feeds that people subscribe to could still get overwhelmed by trolling and vitriol.


I think you're right, they cannot be a "common carrier" because decentralization (in the sense of giving people choice) of the selection algorithm makes them irrelevant. There is no other service they provide (to the end user) other than the content selection.


Consider HN.

What do you think the main feed and discussion might look like if moderation was not allowed?

Likewise for any special-interest sub-reddit you might subscribe to.

I think these things will be very substantially changed, for the worse, if content moderation isn't allowed.


> be able to deliver enough good content that users want

> Is this really what we want?

There is a pretty simple solution to this. Leave these decisions in the hands of an individual user's choice.

I am sure that there would be some kinks that need to be worked out. But come at these problems from the perspective of allowing a user to choose different spam/block parameters or algorithms, of their own choice, and let the market decide what users really want.

I am sure that most people would voluntarily choose certain general spam blocking settings, without the need for the platform to make every decision for the user.


Be careful what you wish for.

If social media platforms are considered common carriers, they will have a "duty to serve." They will need to provide their service to anyone not judged by a court to be breaking the law.

That includes spam, pornography, and many forms of online abuse and harassment. There is a strong case to be made that ISPs should be common carriers, since they don't (and should not) know what is contained in the packets they transport.

But treating Twitter or Facebook or Hacker News as common carriers makes no sense. They are not "carriers" who are neutrally transmitting data they have no control over.

That's not what they want to be, and that's not what their users want them to be. 4chan has its place, but most people don't want the whole web to look like that.

== Edit: Keep in mind that the law is a blunt instrument. Claiming small social media sites don't need to worry about laws restricting big social media sites is like claiming that honest citizens don't need to worry about laws targeting terrorists.


The point of this paper is that bad stuff would exist but you'd never see it unless you explicitly seek it out. You might not even be able to search for it; you might have to paste an exact URL to get it.


How is any of what you described different from the situation today? Except that said URL won't contain a TwitFace domain name if they don't want you on their platform.

Is the domain name really that important, so long as the content is available?


For one thing, Volokh suggests that Twitter would also have to allow you to follow @BadStuff and then view its posts in your timeline – as long as you set your timeline to chronological mode. If you use the algorithmic timeline, on the other hand, he claims that you're now relying on Twitter to provide a kind of judgment on what tweets are better than others, so Twitter would then have the First Amendment right to refuse to show you some users' tweets entirely.

Personally, I suspect that any law like this would result in Twitter simply removing the chronological timeline altogether.

In addition, although the paper's title says "social media", Volokh also suggests the services which banned Parler – Google and Apple's app stores, and AWS – could also be subject to common carrier regulations. For those kinds of services, recommendation algorithms and domain names are irrelevant.


By the way, one of the colleagues thanked in this piece is Eric Goldman, who has written the highly recommended paper "Search Engine Bias and the Demise of Search Engine Utopianism" [1]. He also frequently writes at [2].

[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=893892

[2] https://blog.ericgoldman.org/archives/author/eric-goldman

[edit] While I'm at it, one of the best summaries of the whole neutrality debate is from Laura Granka [3], and Goldman has his own summary here [4]

[3] Granka, L. A. (2010). The Politics of Search: A Decade Retrospective. The Information Society, 26(5), 364–374. https://doi.org/10.1080/01972243.2010.511560

[4] Goldman, E. (2011). Revisiting Search Engine Bias (SSRN Scholarly Paper ID 1860402). Social Science Research Network. https://papers.ssrn.com/abstract=1860402


If social media becomes a common carrier, but ISPs do not, can Facebook just host their service on their own ISP and block unwanted users at the ISP level? "It's not Facebook blocking the user, it's the ISP, which happens to be the only ISP we host Facebook on."

All just poking holes in the idea that social media can be a common carrier while the ISPs are not. Somehow the literal carrier avoided becoming the common carrier.


Social media companies only invest enough in trial ISP networks to encourage faster speed by incumbents. Telecoms outsource physical operations and desperately want to get into value adds like media content, low quality app stores, e-security, and advertising. No one wants to move down the OSI layers into commodity services or capital intensive slogs.


How can the ISP ban Facebook accounts if they are separate entities? Or do you mean the ISP bans Facebook URLs related to certain accounts/posts? I'm not sure how this would work.


Facebook allows anyone to sign up and access the site through user.facebook.com. All actual hosting is outsourced to NotFacebook ISP, because of the cloud. NotFacebook, totally at its own discretion, null-routes baduser.facebook.com.
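
Mechanically, that scheme is just selective name resolution, something like this (entirely hypothetical, of course):

    # Hypothetical per-user subdomains: the "ISP" answers DNS queries for
    # user subdomains and simply refuses to resolve banned ones.
    BANNED = {"baduser"}

    def resolve(hostname: str):
        user, _, domain = hostname.partition(".")
        if domain == "facebook.com" and user in BANNED:
            return None           # null route: "it's the ISP, not Facebook"
        return "198.51.100.7"     # documentation-range IP standing in for a real host

    print(resolve("user.facebook.com"))     # 198.51.100.7
    print(resolve("baduser.facebook.com"))  # None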


Maybe Facebook's ISP of choice requires handing over the TLS private keys?


> just host their service on their own ISP

Good luck


That's not how laws work.


You're right, this legal "hack" would probably be frowned upon by a judge, but it does demonstrate how silly it is that the literal carrier is not the one considered the "common carrier".

It would be illegal for Facebook to arbitrarily prevent someone from accessing Facebook, and legal for Comcast to arbitrarily prevent someone from accessing Facebook. And back to my original proposal: if Facebook and Comcast came to some sort of deal, perhaps Comcast could block unwanted users on behalf of Facebook.


But Facebook cannot do through an agent what it is not allowed to do itself. Doesn't matter if the agent is a subsidiary or a partner.


I'm not seeing what problem is being solved here. If, as he points out early in the paper, common carrier status in the US would only apply to hosting, but not recommendation engines and the like, what is the gain? Anybody can already find hosting somewhere. It's the discoverability that the platforms provide that's the secret sauce.


> If, as he points out early in the paper, common carrier status in the US would only apply to hosting, but not recommendation engines and the like, what is the gain?

There's something between hosting and recommendations, which the paper suggests common carrier status could also be applied to, and that's subscriptions. So youtube wouldn't have to recommend Alex Jones' videos, but if people subscribe to his channel, it would have to show his videos to them.

> Anybody can already find hosting somewhere.

Even if that's the case, taking away their current hosting disrupts people's speech.


This can be viewed as an alternative under antitrust law. If you get big enough, either you get broken up, or you have to become a regulated monopoly. You get to pick.

"Big enough" by EU standards is where there are less than four competitors of reasonable size and reach. The US tends to tolerate a higher threshold. Amy Klobuchar's "Antitrust" book suggests 40% market share as the threshold.


I don't think market share is very relevant here. Can small cell phone carriers ban customers' text message conversations based on political opinion, but not AT&T?

On page 8:

> Why does the law preclude the companies from doing this—even when they’re not monopolies, such as landline companies might be, but are highly competitive cell phone providers?


A very important point. Antitrust has entirely different conditions to be met. The EU conditions are even more complicated than < 4 competitors and are evaluated in-depth in each case. At the same time, antitrust law has been attempted as a mechanism for ensuring freedom of information, although largely unsuccessfully (due to internal growth; EU antitrust law is predominantly geared towards blocking acquisitions afaik).

Now, media regulation is an entirely different regulatory approach: It has more normative roots, and in some cases is even precautionary: Allowing sanctioning in the face of plausible threats. That's a very sharp sword and the reason why platforms fear it.

[edit] For completeness, the definition of "big enough" in the EU is called dominant position and is defined in the Hoffmann-La Roche case. It means being so powerful that one can act largely independently of one's competitors and customers. (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:61...)


I'm in favor of regulated monopolies under some circumstances, but making Facebook or any information-filter platform into a regulated monopoly sounds nightmarish. I'd sort of trust the US government, at its best, to do some things. But I would never trust any state to regulate content filtering.

Beyond that, making sure it's always possible to host content on the Internet would be the most reasonable way to guarantee free speech. And content filters would have competition this way, as a content filter is itself a kind of content.


State regulated content filtering already exists everywhere which is why you don't see porn lined up along every highway.


Would a regulated monopoly be subject to community decency rules? Would twitter be fined every time somebody posts Janet Jackson's nipple?


> If you get big enough, either you get broken up, or you have to become a regulated monopoly.

Some products cannot be easily broken up without disrupting the actual product. For instance, Facebook could not be broken up since the value of the product is in large part due to having one significantly sized and unified user base (network effects). But these companies can't have it both ways - they can't claim that they operate in a competitive environment with few barriers for new competition while also claiming that splitting up their user base would destroy their unique product offering.

We also need to be wary of market share arguments, especially given that these companies largely operate in the Bay Area and reflect its values/political culture/etc. This is why we regularly see them enact censorship in lock-step. Even if several companies operate with less-than-majority market share, they can behave as a cartel. That's why we shouldn't treat Twitter and Facebook and TikTok as alternatives to each other.

A better alternative might be to simply envision and implement new regulations based on minimum user bases. If your user base is larger than X (to be defined) then you are subject to regulations. Some suggestions could be user bases larger than the [smallest or largest] state by population. A social media platform that has more influence and power than a state government seems like a reasonable target for regulation.


I like the article's separation between hosting and other activities. I think having content available by URL and in the feeds of friends/subscribers, but never shown in discovery features to the general public, is a good compromise between allowing expression and preventing extremism from proliferating.


For context: This paper argues that online platforms constitute a sort of "infrastructure" similar to utilities, and that they should be regulated to guarantee equal access.

This idea is not new - it has been discussed under different terms, e.g. in relation to "must carry" rules for cable providers.

To simplify a bit, the argument boils down to the question of liability and responsibility for content curation. Platforms either curate content and are liable; in this scenario, they may be held accountable within media regulatory norms. Or they don't curate and provide equal access, then they aren't liable and free from media regulatory normative standards.

Note that the US' pretty exceptional take on free speech (as an untouchable right) very much complicates this simplified description.


>, and that they should be regulated to guarantee equal access.

You mentioned the UK BBC in another comment. Here are some examples of BBC censorship: https://en.wikipedia.org/wiki/Censorship_in_the_United_Kingd...

If the government-run BBC sometimes censors certain topics, I'm confused as to how that same government becomes a watchdog & enforcer over private corporations designated as "common carrier" and not censor.

In other words, what higher power over the UK government forces them not to censor? There's an inherent contradiction enforcing an "equal access" law because the government itself doesn't follow it. This has unavoidable effects on government regulation of the private corporations it oversees.

The "common carrier" designation is easy to implement when communication is point-to-point with paid subscriptions (e.g. telephones) -- instead of broadcast funded with ads or government taxes (e.g. Facebook/Twitter, BBC). There is no government in the world that allows broadcast of any topic without interference.


This is a tangential argument, but of course government-provided media are not devoid of censorship/bias. That's because nobody is. The market is also not devoid of censorship/bias, neither in its supply of information nor in the resulting consumption.

Public service broadcasters are not about alleviating censorship; they are primarily a means of providing a solid base level of access to information. They cannot reasonably provide access to all information.

Your remarks on oversight are a very valid concern; that's why Germany, for example, has publicly funded (not state-funded; they get a mandatory fee from citizens over which the state has no control) but state-independent public service institutions.

There's much more to the argument, but a central point here is the shift from regulating supply to regulating consumption. One can trivially argue that platforms are harmless because any censorship they implement is not total in the sense of absolute government censorship - you're free to publish your stuff elsewhere.

But today, more and more countries are seeking to ensure healthy consumption of information, and to enable this, they need to intervene in platforms' content curation.

It's a dangerous argument, of course, because this is precisely what totalitarian states are doing. But - and this is important - regulating content is something that democracies have always had to do, e.g. banning libelous content, revenge porn etc. etc.


> The "common carrier" designation is easy to implement when communication is point-to-point with paid subscriptions (e.g. telephones) -- instead of broadcast funded with ads or government taxes (e.g. Facebook/Twitter, BBC). There is no government in the world that allows broadcast of any topic without interference.

The paper deals with the supposed distinction between point-to-point and broadcast. In brief, it's not very clear. Consider the postal service: If millions of people subscribe to a magazine, is that point-to-point communication or broadcasting? How is it different from millions of people subscribing to a youtube channel? Or following someone on twitter?


It's more like publishing than broadcast. Somewhere in between those two I think.

Edit: the difference comes in where there are feeds and recommendations popping up to users who haven't previously subscribed to something. That and the volume/media.


I think this is only an issue because there is no public digital infrastructure. We need an "internet post office" whose only rules are the law, which operates at cost for users, and which is completely free of liability. For example, you can't sue the post office if someone mails a pirated DVD.

Would it be used almost exclusively by extremists and lunatics? Probably, but I still think it's worth having if only in light of all the HN stories about how some account slip-up at Google utterly ruined someone's life. Everyone deserves an email address they can never lose access to.


Isn't that basically email now? If you subscribe to the mailing list of "extremists and lunatics", then you'll probably get your email on extreme and lunatic issues.

The issue is, the extremists and lunatics don't just want email; they want Twitter, Facebook, and all the inherent amplification capabilities of both. Not just 1-to-1 messages, not even 1-to-many messages, but full-on advertising to potentially interested users, the same as the cat pics/videos get.


Interestingly, there are hints that some countries are looking into providing Matrix accounts for all of their citizens.

I don't think there's been a formal announcement yet, but it was mentioned in one of their Matrix Live videos a couple of weeks ago.

Personally, I see pluses and minuses. Yay, free Matrix for everyone. Boo, your government can monitor everything you do on that server.


Whether this is a good idea is seen differently in different countries.

Countries with strong public service broadcasters (e.g. the BBC in the UK) tend to consider state-run information infrastructure a good idea, at least in a dual system including private corporations.

Countries without public service media see private market actors as perfectly sufficient.

Needless to say, the US is squarely the second type.


Do extremists and lunatics want their email handled by the government?


The advantage of the paper is that it has a great deal of detail and research, done apparently by someone with actual knowledge of the law and case history in question. Usual discussions of the common carrier or public space argument tend to stop at a surface level because everyone involved in the discussion only has basic knowledge and training, so it's interesting to see it more carefully examined.

As just one example, it argues that social media companies are not necessarily protected from being compelled to host content they disagree with simply because it would be "compelled speech," as stated in the rejection of recent Florida legislation. There are a number of existing cases where entities were compelled to do just that because they were operating a public space, even if privately owned (a shopping mall for example). Agree or disagree, this is information I wasn't aware of (and probably a lot of readers here as well), so it's interesting information to have.


Having only read the introduction of this paper myself, this seems to be an apt summary of its contents.

Also worth noting that the language in this paper is very approachable and rife with footnotes that often point to past legal decisions, if not precedents. Even if you don't agree with the thesis of this paper, it still makes for an eloquent and informative read of one side of the aisle.


Well, it is a scholarly law paper.

[edit] To clarify: Being a scholarly law paper means that (1) it is well-documented and thoroughly researched, but also (2) it may employ words that seem to have a common-sense meaning but really don't. Things such as "fair", "bias", "access", "responsible" etc. have very precise legal meanings that are not readily apparent.


So far it seems like a fairly well balanced analysis, which I think addresses the censorship concerns without violating these companies' First Amendment rights:

    I’ll begin by asking in Part I whether it’s wise to ban viewpoint discrimination by certain kinds of social media platforms, at least as to what I call their “hosting function”—the distribution of an author’s posts to users who affirmatively seek out those posts by visiting a page or subscribing to a feed.

    I’ll turn in Part II to whether such common-carrier-like laws would be consistent with the platforms’ own First Amendment rights, discussing the leading Supreme Court compelled speech and expressive association precedents [...] And then I’ll turn in Part III to discussing what Congress may do by offering 47 U.S.C. § 230(c)(1) immunity only for platform functions for which the platform accepts common carrier status, rather than offering it (as is done now) to all platform functions.

    On balance, I’ll argue, the common-carrier model might well be constitutional, at least as to the hosting function. But I want to be careful not to oversell common-carrier treatment: As to some of the platform features that are most valuable to content creators—such as platforms’ recommending certain posts to users who aren’t already subscribed to their authors’ feeds—platforms retain the First Amendment right to choose what to include in those recommendations and what to exclude from them.


I think what happened with "social media" is a digital equivalent of land enclosures. In the past, there was Internet content, let's simplify it to blogs, which was available publicly for anybody to see, and then there was content aggregators/filters (search engines, blog aggregators) which selected and curated the content. But anybody could come and do that.

Social media effectively "privatized" the content: now the content lives within their walled garden, and only they have a monopoly on content filtering and aggregation (within their realm). This allowed them to monetize the advertisements on this content.

So I believe wanting "social media as common carriers" really means going back to the state before they existed, where anybody could host content and anybody could filter content. I don't think it will happen, because then the business model is dead, and the force of capitalism will not allow it (just like there is very little public land now).


Social media above a certain size should be treated as common carriers. There are no reasonable alternatives to them, and today, common public activities that are core to our lives are conducted on these large platforms. This is not a matter of protecting the freedom of companies to do as they wish - we already regulate companies and restrict their activities in many ways. Private power utilities cannot discriminate against their customers based on their speech or political viewpoints, for example. The same regulations can be enacted to govern these platforms.

I often see arguments saying that someone who is deplatformed/demonetized on these service can just use an alternate service, but I find that to not be the case in practice. Consider that Twitter, Facebook, and YouTube have more users than virtually all nations. Their network effects are core to what the product is, which is why there aren't suitable alternatives (especially when they enact censorship in unison). Telling someone to just go use a different platform is like telling someone that they don't need their power utility, since they can just stick a windmill on their property instead.

Finally, I am greatly concerned that these large privately-controlled platforms are essentially outsourcing government-driven censorship and also violating election laws. For example, when conservatives did form their own platform on Parler, AOC called for the Apple and Google app stores to ban Parler after the Jan 6 Capitol riot (https://greenwald.substack.com/p/how-silicon-valley-in-a-sho...). If a sitting member of the government pressures private organizations to censor others, it should be considered a violation of the First Amendment. Leaving aside the technicalities of law, it is unethical and immoral regardless, and completely in conflict with classically liberal values. Actions taken by these companies to suppress certain political speech in this manner also amount to a donation to the other side. This isn't recognized as "campaign funding" but it is probably more effective than campaign funding at this point. We need to do a better job of recognizing the gifts-in-kind coming out of Silicon Valley tech companies towards political parties based on the ideas they suppress/amplify/etc.


Here's the thing: you don't NEED social media. It's not a utility. Your quality of life won't diminish if you don't have access to it. I suggest deleting it and not using it for a few months and seeing how you feel. I 100% guarantee your mental health will improve.


I understand where you're coming from, and I probably would agree that mental health would improve. But the reality is that the majority of political discourse today happens online on these digital platforms. It is only going to get more digital as Gen-Z comes of age. So while my immediate short-term mental health might improve, I also feel that not being present on social media means that I would be participating less in our democratic process, my views would be less represented, and in the long-term my quality of life may change in a negative way as a result. It's also why I think the ban of Donald Trump is unacceptable - for all practical intents and purposes, these social media platforms now gatekeep the most fundamental (political) processes underlying society.


The contribution of social media to our "democratic process" is to poison it.

Too many twitter warriors banging away on keyboards.

You AND SOCIETY will benefit if you check out of it and just meet your neighbors.


> You AND SOCIETY will benefit if you check out of it and just meet your neighbors.

Not if your neighbors are staying at home consuming corporate/government approved social media messages.


So many supposedly principled free-market libertarians and conservatives tying themselves in knots so that Trump can continue to push his lies. The American conservative movement, like nationalists everywhere, has no principles besides power. And with the new permanent conservative judiciary, there are many victories in their future. Our democracy is in peril, but if you look at places like Russia and China they seem stable enough; maybe the death of the American world order won't be so bad.



