Censorship is bad even when it’s done by private companies (necpluribusimpar.net)
233 points by barry-cotter on Nov 6, 2019 | 246 comments



I have a wide view of what 'censorship' is, and I include spam filtering in the definition. For example, a while ago I saw an advert for some kind of erection-causing pill on a forum discussing C++. If the forum moderators remove such a post, they make the forum more efficient by saving other users (who are looking for C++ content) from having to filter out irrelevant information and sales pitches themselves.

I also see far too many 'work from home' adverts in discussions in the Independent newspaper's comment section (on unrelated articles), and if they took a more censorious approach to such comments then the comment section would improve.

To an extent I support censorship, and according to what I believe censorship to be, almost everyone else supports it.

If this very comment had been about some unrelated topic, such as giving an opinion about who to blame or not blame for problems in the Middle East, it would be right for the comment to be censored from this discussion, and it would be censorship from a private company, censorship of a political viewpoint no less.


Here’s the thing. On dedicated forums we know the protocol. We don’t want spam and we don’t want politics.

If it’s a political forum we expect all civil exchanges to be treated equally. So a proponent for owls and a proponent for sawmills both get their say, and one doesn’t get “deranked” or de-monetized because it’s not the popular or au courant opinion. We don’t expect one political candidate to be artificially boosted and another artificially buried in the results.

Now, if I’m on the Hillary blog, yes, of course I expect the org to manage the commentary to fit their narrative. I don’t expect Zuckerberg or Pichai to turn their orgs into the Hillary blog or the Donald blog.


> If it’s a political forum we expect all civil exchanges to be treated equally.

The problem here is that bad actors intentionally take advantage of this by supplying an endless stream of 'civil' arguments for entirely abhorrent stuff. This includes usually feigning ignorance and claiming they're 'just asking questions'† when objections are raised, even though it's the tenth or fiftieth or five hundredth time the same thing has come up.

https://rationalwiki.org/wiki/Just_asking_questions


> even though it's the tenth or fiftieth or five hundredth time the same thing has come up

Immediate thought: merging topics is drastically different from closing or deleting them.

Slightly off-topic below:

Generally speaking, I would love a technological solution for Q&A redundancy. I've seen way too many long forum threads where the same or similar questions are asked over and over. Not politics and not from bad actors, but e.g. reviews of new devices, where everyone and their dog asks about, say, battery life every 10 pages. StackOverflow-like Q&A platforms provide some structure for this, but are limited to objective answers. For example, there's no SO for book plot reviews/discussions, and the SO format isn't really appropriate there.
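A small sketch of what such a redundancy check could look like - purely illustrative, assuming TF-IDF cosine similarity and an arbitrarily chosen threshold, not any existing platform's method: flag a new question when it's textually close to an earlier one.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    existing = [
        "What is the battery life of this phone?",
        "Does the camera work well in low light?",
    ]
    new_question = "What is the battery life like on this phone?"

    # Vectorise all questions, then compare the new one to each old one.
    vec = TfidfVectorizer().fit(existing + [new_question])
    sims = cosine_similarity(vec.transform([new_question]),
                             vec.transform(existing))[0]

    best = int(sims.argmax())
    if sims[best] > 0.6:  # threshold picked arbitrarily for the sketch
        print(f"Possible duplicate of: {existing[best]!r} ({sims[best]:.2f})")
    else:
        print("Looks new; post it.")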


Given that there will always be an influx of new people, and that most people will not be familiar with previous discussions, I'm not convinced that most forums are being assaulted by bad actors. This seems to be more of an Eternal September problem.

Of course bad actors can abuse this; though I've always felt it would be good for the derailing comments to be removed with a polite DM explaining that the topic had been discussed previously, with links to said discussion.


It's definitely an organized tactic among some groups. For example, there's a literal neo-Nazi handbook† that advises members of that group to disguise their sentiments in civility and/or 'jokes' in order to sneak it into mainstream discussion.

https://www.theguardian.com/commentisfree/2017/dec/19/neo-na...


Any movement can use those tactics. I wouldn’t be surprised if they also read Rules for Radicals. Any group looking for influence is going to use tried and true methods.

So while there are many vile groups, like the Nazis, seeking power, there are many other groups who hold unpopular opinions, or even advocate outcomes that are presently unpopular or illegal, and I’m not sure we want to suppress that. Much of what we have today as acceptable discourse is acceptable because we allowed those voices which were once considered degenerate or unacceptable one way or another.

We don’t need a new dogma telling us the correct way to think.


Just because Nazis use rhetorical questions doesn't make rhetorical questions bad.


> we expect all civil exchanges to be treated equally

Define civil :)

Take any hot button issue like guns, abortion, etc., where your stance on one side of the issue can be seen as immoral or life-threatening to the other side. Take a passionate / borderline tweet from one side and you'll probably get a 50/50 disagreement on whether or not it's "civil". Now what? Block the tweet? Warn the user? How many people need to complain before it's considered a problem?

Now go a step further and look at the Westboro Baptist Church. They think that they're doing a public good by shaming those who have died (because they believe God punishes sinners and their families with death). They believe it's a sin not to tell the families that the recently deceased is a sinner. They believe they're communicating God's message. Now you'll probably get 99% agreement that it's uncivil. But now what do you do? Silence an unpopular opinion?

The problem is finding out where the line is for defining what's appropriate on a platform and what will be censored. Is it 50% + 1 consensus? 95% consensus? And who gets to decide? The users? The CEO? A board of censors?

These are tricky questions, and different countries and companies draw the line in different places. But the devil is always in the details.

It would be nice if everyone agreed on what's civil and what's not, but unfortunately that's not the case.


So the WBC is an extreme case. Obviously it’s problematic. Their tactics are disgusting, and really it’s counterproductive to their cause; they are more like nasty trolls.

But we’re seeing issues where things are not problematic, but people are penalized for thinking “wrong”. If I want to discuss international politics and think we should liberate/invade country X, or conversely that we should leave country X well alone, I should not get penalized for articulating a point of view.

One thing I don’t understand is, if I follow janeblow@ and her tweets offend me, I have the power to unfollow her, and I can block her. I don’t see why people’s reaction is to get janeblow@ suspended.


> think we should liberate/invade country X

There are some people on the right and left that think that advocating for foreign war is problematic and counter productive. Some foreign wars in history have been justified and others have been out of greed or racism.

I think the point is we all draw the line in different places on what speech is appropriate which is why censorship itself is problematic.


> and look at the Westboro Baptist Church.

That’s actually a great example of why viewpoint censorship is a _bad thing_, for everybody, including Twitter, you and me. Look, you and I both know that there are facts that are true that you can’t express for fear of being kicked off of public forums. In fact, even stating that there exist facts that are true but that you can’t express is toeing the line, even though nobody disputes that this is the case. Since you seem to be more or less pro-censorship, I’ll assume that you’re a bit left leaning, so here’s an example that’s sure to make you agree with me: imagine if publishing climate data somehow became (even more) controversial and people sharing (true, undisputed) global temperature readings found themselves being kicked off of discussion forums.

Now we have a situation where we have two sorts of people being deplatformed: the Westboro Baptist Church and people who think that the world is getting hotter. This paradoxically makes the WBC seem _more reasonable_ by association. Remember, censorship is retroactively self-justified - since it was censored, you don’t know what it is, just that it was presumably bad enough to get deleted. We’re better off if anybody can say whatever’s on their mind and, if it’s ridiculous, it gets mocked.

Does that leave some people who agree with ridiculous viewpoints anyway, no matter how often or thoroughly they’re debunked? Sure, but there are two possibilities: one, they’re in a small minority, in which case they’re harmless; or two, they’re actually a majority, which suggests that they might actually have a point - and you, representing the intransigent minority, attempting to control them through censorship is EXACTLY why censorship should be opposed.


> Since you seem to be more or less pro-censorship, I’ll assume that you’re a bit left leaning

What gave you that impression? I was raising questions to show that any censorship (or definition of civil discussion) is problematic.

It's a tough problem because bad ideas can lead to bad things, but stopping good ideas can lead to bad things too. If we all agreed on what's good and what's bad this would be easy, but we don't.

I'd much rather live with the consequences of free speech than live with the consequences of censorship. But neither side should claim it's a utopia.


> we don’t want politics.

HN doesn't ban discussion about encryption, which is a political topic. HN doesn't ban discussions about non-state-approved search engines, social media, and VPNs, even though in some parts of the world they're so "political", that they get people sent to "re-education camps".

Twitter's ban on political advertising suddenly reminded everyone how many things besides wars and elections are political. You can't advertise switching to green energy now, because that's about climate change, and that's political.

In practice "politics" ban means you still allow things that are political, but only ones that are ideologically aligned with the status quo.


Peter Thiel expressed a similar well-stated opinion in an interview (can't find it right now) a number of years ago about alleged political censorship by universities.

"I give a Catholic school leeway censor its students or professors to have a Catholic bias. It's on the label.

"It's another thing for USC to censor on a political left or right bias, without disclaiming such bias, or even professing a lack of bias. It's simply a dishonest influence on speech."


Censorship that’s dangerous is usually directed against an idea or opinion that is unpopular with the group. Usually when censorship is being discussed, the kind of speech or topics it applies to are already allowed in that forum or venue, and there is selective filtering of the content going on outside of what the users themselves choose to entertain.

For instance, if there were a forum specifically about penis pills, and the moderators selectively expunged certain pill brand mentions or alternative ideas about penis pills because they personally preferred A over B, or had a personal or business investment in A over B, that is censorship. If you wanted to discuss canaries or automobiles in this forum, they may tell you to move back on topic. That isn’t censorship.

Similarly, many broad online platforms allow political speech already, it’s just some ideas and people, typically conservative, are being selectively filtered out of the conversation because they don’t want the greater exposure to the alternative but legitimate and contextual ideas or opinions.

That is dangerous. If the shoe were on the other foot politically, it wouldn’t diminish the severity of it.


What alternative but legitimate beliefs are being silenced? I'm not following the issue closely, so I've only heard about people like Alex Jones and Milo Yiannopoulos being removed from social networks.


Censorship of spam is probably agreed upon by everyone but spammers. Censorship of people's opinions, on the other hand, is dangerous. As I say on HN many times: today it's the opinions you disagree with that get censored, tomorrow it's your voice.


Isn’t that the crux of the issue? A lot of egregious hate speech — say, what Richard Spencer said in the recently leaked audio — is agreed upon by everyone but racists. But every time someone suggests censoring that, people invoke the same slippery slope fallacy. Today it’s the spam that gets censored, tomorrow it’s your voice!


Because the criteria for deciding what speech gets censored are always subjective.

Sure, we can probably find one statement that most contemporary people would agree is wrong and shouldn't be allowed to be said. Like 99.99% of contemporary people.

But what percentage do we cut it off at? 75%? 30%? And why is it a popular vote to begin with? Is morality or ethics something that is relative to popular ideas or are there some concepts that are just plain right or wrong?

I'm not afraid of people saying stupid stuff. I'm afraid of people with the power to curtail speech. Because that invariably gets used against the public.

And "the freedom to do what is right" is not really freedom. You must have the freedom to do something stupid.


That’s my point. What’s the criteria for deciding what spam is? And yet no one complains when comments like “Wow! I made $X from my couch!” are censored — even though that’s every bit as subjective as Richard Spencer’s comments.


Do you consider it censorship when comments or articles are flagged on HN?


Depends on the case, but as far as I can tell, most of the time when comments are flagged and removed, it's because too many people disagreed with the poster (including the HN moderators), rather than because somebody was spamming the site. So yeah, a lot of the moderation here is censorship in any way you could define it. You see this over and over again on public forums: a means of controlling spammers who are not actually participating in what the forum was designed for is introduced, and is immediately abused by people who want to limit actual participation.


I don't, because you can still see such comments (it may require an account and flipping a switch). And in fact I browse Hacker News with showdead turned on, and I often start from /active so I can see stories and comments other people don't want me to see. This isn't censorship because Hacker News itself still makes them visible: it's more like spam filtering in that regard, where content is being categorised but not censored.

Spam filters in particular are not censorship because they are ultimately just labelling devices. Because of their accuracy many people often accept and act on those labels automatically, but they don't have to, and actually recently Gmail's filters got terrible and so I have to check my spam folder all the time now.

In some cases spam filters flat-out reject messages and don't even show them to the recipient, not even in a spam folder. I understand the technical reasons for that (full storage and processing of spam is expensive), but, that gets much much closer to the border of censorship, with the only real difference being one of intent.


I would say if the mods do it yes, if it’s the hoi polloi then no.


> If all mankind minus one were of one opinion, mankind would be no more justified in silencing that one person than he, if he had the power, would be justified in silencing mankind.

-- John Stuart Mill


Point is, Mill lived in an era where dissemination of information was barely at a steam-engine equivalent, whereas now we've got automated opinion-generating rocket systems in orbit ready to barrage millions of info missiles at any online discussion at any point in any language within 300 nanoseconds.

New realities need new quotes.


I started reading through On Liberty recently and I was surprised by how topical it was. This discussion doesn't seem to have changed much over the past 150 years. Mill spends a long time addressing arguments for censorship that I still see used today.

A lot is lost when a book-length discussion is reduced to a single sentence. In his work, there's a very thorough discussion of why this liberty is essential. I don't like how the grandparent used the quote so flippantly. Without the supporting context, it's a baseless statement that encourages low-quality discussion.

One of the things Mill did really well is that he thoroughly described his opponents' position before arguing why it was incorrect. You haven't done that, which is why it's unclear what part of his argument is invalidated by improvements in communications.


Here's an excerpt of an argument by Marcuse (by no means the only one to argue against Mill, in fact there are philosophers who question the entire justification for free speech, and I think they have a more interesting point, though they don't directly address Mill):

>Now in recalling John Stuart Mill's passage, I drew attention to the premise hidden in this assumption: free and equal discussion can fulfill the function attributed to it only if it is rational expression and development of independent thinking, free from indoctrination, manipulation, extraneous authority. The notion of pluralism and countervailing powers is no substitute for this requirement. One might in theory construct a state in which a multitude of different pressures, interests, and authorities balance each other out and result in a truly general and rational interest. However, such a construction badly fits a society in which powers are and remain unequal and even increase their unequal weight when they run their own course. It fits even worse when the variety of pressures unifies and coagulates into an overwhelming whole, integrating the particular countervailing powers by virtue of an increasing standard of living and an increasing concentration of power.

(From https://www.marcuse.org/herbert/publications/1960s/1965-repr... published 1965.)


> I don't like how the grandparent used the quote so flippantly. Without the supporting context, it's a baseless statement that encourages low-quality discussion.

The context was someone saying they have no problem with users flagging stories, as long as it's not the mods. I disagree, and I might just as well have said "personally, I don't like it either (when some of the hoi polloi take it on themselves to decide what the rest should discuss, instead of using the "hide" feature)". Instead I said it with a quote.

That doesn't "encourage" ignoring that context, a non-sequitur slogan like "new realities need new quotes", and equating any verbal or written statement, even lies or auto-generated spam, with persons holding an opinion.


For what it's worth, I had upvoted you. The shallowness of the discussion wasn't something you started, and I'm not sure you really had a responsibility to end it. I just wanted to see us do better as a community.


The article isn't speaking against censorship in general. It's specifically about companies that are de facto monopolies. And at the very end it mentions big money distorting the marketplace of ideas. I didn't read anything in the article that would oppose censorship by your C++ forum.


I agree. If you broaden the term "censorship" so much as to make it equivalent to any entity trying to enforce quality or relevance control standards in its own venue, you make the term useless. While also subverting conversation about actual censorship, cheapening it.


I'm not certain there's a strict line between "actual" censorship and OP's definition. Care to give a definition you think is defensible as "actual" censorship?


Sure, it's hard to draw a strict line that will make everyone happy. I'm not going to try. But that doesn't mean that removing ED pill spam should be conflated with the government banning books under threat of imprisonment. We live in a messy world and have to fall back on using our judgment and common sense, flawed as they are.

It's obtuse to pretend that it's confusing why one is acceptable and the other isn't. And it's not helpful to say that the lack of an unambiguous, unanimously accepted definition means that certain clearly reasonable actions can't be taken.


Agreed. Censorship is a simple concept - it's any time expression is suppressed. Those who say that _their_ removal of content _isn't_ censorship are simply avoiding negative word associations, or any inherent suggestion that their judgement could be subjective. Even the censorship of off-topic and spam comments is subject to a high degree of variability in judgement and opinion.


There is an entire range of censorship; it's not black and white. The question is where to draw the line.


Opinions I like = censorship

Opinions I don't like = legitimate moderation


And that line will always move depending on the moderator motives and societal acceptability.


> If this very comment had been about some unrelated topic...it would be right for the comment to be censored from this discussion

By that same logic, we should censor your comment, because this article is not about filtering bot/adspam from small niche forums; it's about mass censorship and manipulation of organic political opinions on platforms with billions of users.


> For instance, if you’re a journalist and you want to promote your work, there is simply no viable alternative to Twitter at the moment. No other microblogging platform comes even close to having the number of users Twitter does, and most of the alternatives are hotbeds of extremism, so that anyone who joins them is thereby disqualified in polite company.

IMO if you can't even try to use an alternative microblogging platform for fear of being labeled an extremist by "polite company", that's indicative of a much more serious societal issue.

The quote included in this article from John Stuart Mill's On Liberty explains the danger rather eloquently I think:

> Society can and does execute its own mandates: and if it issues wrong mandates instead of right, or any mandates at all in things with which it ought not to meddle, it practises a social tyranny more formidable than many kinds of political oppression, since, though not usually upheld by such extreme penalties, it leaves fewer means of escape, penetrating much more deeply into the details of life, and enslaving the soul itself.


What right do journalists have to have a platform for promotion and advertising? Either before or after the Internet and Twitter.


We're not talking about inalienable rights. That's implied in the title of the linked post.

If you don't care about "good"/"bad" decisions and just care about the legal bounds, just say so and dismiss the article out of hand. Your roundabout question just drags discussion down.


The thing is, people rightfully point out an important issue ("corporate tech giants have monopolized the public speaking space and are holding unaccountable power") but inexplicably ignore the obvious solution (break them up and/or emigrate to federated/decentralized platforms while actively promoting them) in favor of the worst possible one (force Twitter to unban nazis).


Yes, this is essentially my point, except stated in a much more concise way than I was able to. The author wants something we all want – less power for massive private companies to control our society. But he doesn't want to accept what that takes – greater government control. So his essay is little more than inane rhetoric, e.g. journalists are basically silenced if they can't be on Twitter.


How does breaking them up not kick the can down the road? Presumably the forces that led to the monopoly of users will be in play after they're broken up. Enforcing freedom of speech on platforms that have become de facto utilities or public spaces is the only permanent solution on offer.


The title and content of the post implies that we should regard private censorship with the same concern as we do government censorship, regardless of the current legal framework. Which I don't necessarily disagree with. In fact, I think most people here are on the same page of: it sucks big time when a company like Facebook can unilaterally decide that the famous Napalm Girl photo should not be displayed.

So where's the argument? Where's the discussion? Forgive me, but I don't find the above assertion interesting to discuss when virtually everyone agrees with it. The actual topic of interest is: how can we prevent Facebook from making such decisions over its own platform?

That's a pretty big question with a lot of angles. So I chose, for the sake of argument, to focus on what the parent commenter highlighted, because its specious premise raises issues with the author's overall premise:

> For instance, if you’re a journalist and you want to promote your work, there is simply no viable alternative to Twitter at the moment. No other microblogging platform comes even close to having the number of users Twitter does, and most of the alternatives are hotbeds of extremism, so that anyone who joins them is thereby disqualified in polite company.

In the history of our society there have never been free or equal channels of promotion, i.e. advertising or being featured in mainstream media. It has always been the case that if you didn't pay, or otherwise get noticed by mainstream media, you were unheard and/or considered to be part of a "hotbed of extremism".

So I ask of the author (and those who agree with him): if you want to argue that Facebook/Twitter should be compelled not to censor, then what sense does it make to assert that people (journalists or not) have a right (legal or moral) to have their self-promotion be noticed, when that precedent does not exist? Moreover, it's just a flawed premise, because I can think of journalists big (e.g. at the NYT) and small (e.g. former students of mine) who have paid careers in journalism without being active on Twitter. I contend that this is not just quibbling over minor facts, but an example of how the author's argument only makes sense by ignoring details of how the world actually works.

What does this one point have to do with the article's overall premise? He wants our society to change in a significant way, but he neither accepts the legal consequences that would follow, nor proposes an alternative framework that would be feasible. I don't disagree that Big Tech censorship is of huge concern. But I believe that if we want to change this status quo, we'd have to agree to give the government greater powers to regulate private industry and wealthy individuals. The author doesn't want to grant this, rendering this entire discussion an inane, impotent exercise.


Ostensibly that's the argument. But the real argument is that platforms are wising up to being used for extremist positions, and the extremists don't like it, so they twist everything to associate themselves with content that should get a pass.


It's a combination of the right to free expression and free association, not just for journalists, but for everyone. I should not be prevented from hearing things I might be interested to hear, or from hearing things from people I've chosen to associate with, because some third party thinks they're not nice.


If I run a website, why doesn't my free association right extend to kicking people off it?


You are absolutely free to - but in that case you should be held to the same legal standards as newspapers, who edit their content.


No, that's patently ridiculous. The internet is a new medium; things scale so vastly differently here. Your proposal is literally impossible to enforce at the scale of major platforms. To force everyone to choose between "hands off, no moderation" and "you are responsible for all content" is a thinly veiled attempt to force the big players to not moderate, and therefore host extremist content - because they literally can't review everything.


I actually tend to agree that it's a bad solution (it's discussed in a subthread here) but I like to entertain all options. The real cause is the lack of competition for the few major platforms. I wish G+ had taken off.


So the choice is between "host ISIS decapitation videos" and "be charged with a crime when a user makes a true threat using your service."

You're proposing something that will kill the literal forum that you're posting it on.


What legal standards are you referring to? Newspapers and media are not held to additional obligations than what a normal person has – e.g. it's illegal for anyone to commit slander, or violate obscenity laws. The fact that a publisher can be held liable for publishing illegal content is not what gives them the "right" to edit and censor their own publication.


Why? A newspaper is composed almost entirely of content that the paper is paying journalists and writers to produce, whereas a social media site is almost entirely composed of content posted by individuals. How does it make any sense that a platform becomes a newspaper because they ban some content from being posted?


Newspapers aren't held responsible for the stuff people post on their comment sections either. Otherwise they'd have long since been sued into the ground, considering the sheer amount of bile exercised in said comment sections.


None, until we concede that — because the world in which we want to live includes quality writing from a free press engaged in investigative journalism — it is required that journalists be able to promote themselves somehow.


I don't disagree, and I think that's an ideal virtually everyone agrees with. But an ideal alone cannot be the premise for asserting an enforceable change to the status quo.

The reason why the Founders – and subsequently, the courts – enshrined freedom of speech from government censorship, is because that is the most the government can enforce without violating other limitations and stipulations of the Constitution. It's not as if the Founders completely forgot to consider "Hey wouldn't it be nice if everyone had a right to be heard in our society?"

If the author wants this ideal to become reality, then he needs to accept giving greater power to the government (or propose a completely different paradigm). The fact that he apparently can't – possibly because he recognizes that increased powers would end up undermining freedom of speech – is the reason why things are as they are right now.


> IMO if you can't even try to use an alternative microblogging platform for fear of being labeled an extremist by "polite company"

I downvoted you because this is a ridiculous claim. If you use a platform like Gab that embraces bigots and extremists perhaps, but that's not the only other platform.


Does Gab embrace bigots, or simply not remove any legally permissible content? I was under the impression that it was the latter. If it's the only place where unsavory content is not removed, then of course that stuff will be over-represented.


Uhh, they do embrace bigotry. The official Gab twitter feed has posted openly bigoted comments, not just quoted from their site.


I do not consider the claim in the grandparent comment ridiculous, but I would consider your claim ridiculous. As a sibling comment pointed out, Gab does NOT embrace bigots and extremists, it simply does not remove legally allowed content. Can you truly not see the difference?

Edited to clarify my opinion that only one of these two claims is ridiculous


Gab doesn't "embrace" bigots and extremists, they just embrace free speech.

You have to realize that most things can be viewed as bigotry and extremism. Look at how feminism and LGBT rights are viewed around the world - as extremism. Would you say Twitter "embraces" bigotry and extremism because they allow feminism and LGBT content on their platform?

It's a question about monopoly and fairness and the cultural values of america. Free speech is both a legal and a cultural concept.


What if a video producer uses PornHub because YouTube keeps demonetizing them for, for example, gun reviews?


What's wrong with PornHub?


I would consider it far from ‘polite company’.


I do have a colleague who would be shocked if I created a Gab account.


As I said, Gab is an exception since they embrace the worst of humanity.


well at least his fundraising ability is unparalleled


Sorry, wrong thread.


What are some other platforms?


Mastodon comes to mind.


Gab apparently runs on Mastodon now.


Mastodon.


The problem that arises when you define censorship broadly is that the opposite of censorship is censorship.

If no one can restrict what gets published in their venues then as soon as a venue starts discussing stuff I don't like, I can spin up a shill call center or a guy with a thousand accounts and a copy of GPT2 and flood that venue with so much divisive moronicism that the voices that threaten me will be almost completely drowned out.

Worse, large open venues being flooded with low-quality communications already happens even without any intentional censorship-by-flooding attack, providing great cover for these attacks. Especially savvy attackers can leverage the pre-existing populations of well-meaning fools by shaping their messages to motivate those fools into carrying on the disruption on their own.

If you think about it, the classical view of censorship, where someone outright silences you, is nearly impossible online today in most of the world. It is extraordinarily hard to completely stop the spread of information that people want to spread. But at the same time, censorship that works by flooding out an idea, or discrediting it by association with abusive nutballs, has never been easier or more effective.

When we worry too much about private parties shutting down conversations in their own venues we risk improving the situation around an outmoded and somewhat ineffectual model of censorship at the expense of making a modern and highly effective model much worse.

The best I think we can do is foster an internet structure where everyone can have their own venues which they can operate under whatever rules they think are best, and everyone is free to move among them at the lowest cost possible. That way, effective moderation can shut down flooding attacks and voting-with-your-feet can shut down overly censorious (or overly passive!) moderation.

Unfortunately, the highly centralized world created by the popularity and network effects of sites like Facebook, Twitter, and YouTube is the opposite of this. Instead of people being empowered to self-regulate abuse and migration being easy, people are disempowered and migration costs are high.


This is a rather common style of argument often used against civil rights: every person's civil rights should be restrained because criminals abuse those rights.

We can't have censorship free platforms because someone might spin up a thousand accounts. We need the NSA to monitor all communications because terrorists use the Internet. We should arrest 12-year olds for making finger guns because of school shooters.

But in this case as in so many, it's an off-topic argument, merely a distraction or an excuse for overreaching authority. No one here is arguing that someone who makes a thousand accounts shouldn't be restrained; that's simply not the topic under discussion, even if the word "censorship" can be stretched to include that topic.


Please. This isn't some hypothetical that I'm suggesting.

There are largely uncensored forums, like 4chan. That's what you actually get when you go there, rather than something that is heavily moderated, but only in ways that you agree with, so that you can pretend it isn't moderated.

And in fact, I'll fully support you having your own platform which is as uncensored as you want it to be. I wish you the best of luck.

What I don't support is you arguing that other people can't have their own platforms which restrict publication on their own platform however they see fit, including ways that you or I might disagree with.

People arguing that private parties can't limit the material posted on their own property are absolutely taking a position which is contrary to free speech. The power to exclude is just as important, if not more important, than the power to include.

If you'd like to argue that the dominance of a few platforms violates my assumptions and changes the tradeoffs, I'm not sure I'd disagree. But I think we should instead worry about fixing the monopoly problems rather than limiting private parties ability to exclude speech they disagree with on their own properties.


I think we should make a distinction between platforms and publishers. A publisher curates content according to their opinion, to the extent that when they publish outside opinions, they add a disclaimer like "Opinions expressed herein do not represent those of the publisher". There's no such disclaimer on reddit, Twitter, or Facebook because people understand that the posts aren't coming from the platform. These sites aren't a place for the owners to express their opinion; they're a mechanism for the users to communicate.

And when the mechanisms of communication are privately owned, freedom of speech can't exist if the owners censor those mechanisms.

Platforms (as opposed to publishers) should be treated like telephone companies or the mail or ISPs. They shouldn't even read the messages they carry, let alone censor them.

On the other hand, sites aren't required to be a platform, but if they choose to act like a publisher then they should be held responsible for the content they publish.


What if you take out "makes a thousand accounts" and substitute "spreads hate speech" or "calls people racial slurs"? In that case there's no abuse of the system; only the speech itself is problematic. Would censorship-free platforms have a way to suppress that speech? Should such speech be suppressed?

In my opinion, yes, but I'd like to know what you think.


I haven't seen a platform that actually banned hate speech; they all allow such speech, as long as it's directed at acceptable targets (often white people or Christians). These platforms' actions aren't about problematic speech, but about identity politics.

But in theory a platform could ban such content impartially. Should they? Should such speech be universally suppressed? I'd have to say no; if it were, who could criticize Wahhabism or Scientology?

Now I'd like to know what you think. Should "hate speech" directed at Scientology be suppressed?


Criticism is not hate speech. Ignoring the unclear phrase "hate speech", banning calls to violence or verbal harassment is very different from banning opinions.

Free speech as it is understood in the US applies to the government and the government can and does limit your rights when it comes to violence and time and place (e.g. you can't yell during a session of Congress). It's absolutely morally fine to do so and it does not infringe on freedom of speech in any way.


Another distracting discussion of criminal behavior. As I said two posts up, we all agree such abuses should be banned. Opposition to censorship and defense of free speech is about censorship based on opinions; free speech advocates aren't fighting for an unlimited right to make threats or yell "Fire!" just for fun.

By "ignoring the unclear phrase 'hate speech'" you've ignored the very issue under discussion.


So the author wants to protect "the marketplace of ideas" from monopolies caused by a different market. However, they don't want the government to do it. They also don't seem to think it's possible for smaller companies to challenge these monopolies.

What then is the author's solution? One is never given.


In my opinion, the solution is simple:

If any “platform” censors beyond removing illegal or copyrighted content or spam, they should have their “platform” status revoked, and they should be considered a publisher.

This prevents such companies from using “platform” status as a cost saving mechanism to help them become large monopolies. Companies like Facebook would not have been able to take over so much of the market if they had to spend hundreds of millions of dollars on curating content to prevent lawsuits, etc.

I also don’t think of this as some kind of punishment. It’s perfectly reasonable to want to set up some kind of publishing company that has user-generated content; it’s just not reasonable to expect such a company to grow to such a global scale as Twitter or Facebook.

Edit: added “spam” to reasons


> If any “platform” censors beyond removing illegal or copyrighted content, they should have their “platform” status revoked, and they should be considered a publisher.

What about spam?

And would this extend to marketplaces? If a company allows users to sell educational resources for kids, what would happen if they start removing items that clearly don't fit?


They could have a checkbox "remove spam" that the moderators could turn on and off. There's not a lot of ambiguity about spam, so it's not a real issue. I think the law should be "platforms should not tinker with their users' expressed preferences - you can't hide something that they've chosen to subscribe to".

> And would this extend to marketplaces?

I believe these are already considered publishers? In any case it's no different from how traditional bookstores work.


> There's not a lot of ambiguity about spam so it's not a real issue.

People say this, I don't think it's true.

I think there is ambiguity around when marketing crosses into spam. If you look at different communities, even HN and Reddit, you'll find different opinions about what kind of promotion crosses that line.

Individual communities need to be able to make their own decisions on that, they don't need blanket rules applied across the board to all platforms.


Reddit already has a "spam level" slider. They should allow subreddits to turn off spam filtering completely - although of course it doesn't make sense. But if they want to be classified as a simple carrier, they would have to.


In your definition of censorship, would it be OK for Facebook to hide some political articles behind a "filter dishonest posts" setting that was on by default for users and groups?

Is it OK to do that to a marketing post?


I think Facebook should be required to have controls to turn off their various filterings. And they should be held liable if someone can prove they systematically hide certain views from searches. That would be enough for me (having them on by default is also OK).

(Incidentally i started using twitter in "view latest" mode and i prefer it - turns out their expensive algorithms were pretty useless for me)


> There's not a lot of ambiguity about spam so it's not a real issue.

For an example to the contrary, look at how many Wikipedia articles get deleted as 'self-promotion' even when they're just simple informational entries about a business or semi-notable person.


> There's not a lot of ambiguity about spam so it's not a real issue

I couldn't disagree harder. I'd go into detail, but a) I'm at work, and b) the fact that we disagree is evidence that there _is_ ambiguity.


alright, i dont want to keep you from work, but i agree they should be able to turn off spam filtering, even though it makes little sense. And they should be held liable if a court can prove that their spam filter is biased in some illegal way


Replying to your root comment but also addressing some points in subcomments.

Generally (and perhaps ideally), I like the idea of more visibility/transparency into what algorithms (and human moderators) do.

But I think it's worth noting that, pragmatically, there are at least a few minefields here...

- Providing full transparency into what gets flagged as spam is going to make it a lot easier for spammers to model, evade, and notice changes in the spam-identification code and practices.

- Perhaps you were imagining this being temporary, but the operational feasibility of keeping all of this spam in-platform ultimately depends on signal/noise ratio, scale, profitability, and time horizons. It's one thing to bear the burden of storing 1% of your plaintext post volume for one year--it's quite another to store 20 or 30% of your video upload volume in perpetuity.

- I'm having a hard time imagining the proceedings of the court-of-legal-vs-illegal-spam-filters. Without a lot of technical training (or a lot of reliance on expert testimony), the simplest way for an average person to grasp whether a spam filter is biased or not is going to be either looking "back" at what it did/didn't flag, or "testing" it with some sample corpus. In both cases, this process will be fraught with interpretation and sampling bias issues. This may hinge on a judge, panel, or jury being able to see through a bombastic argument that trots out the most respectable {ideology} opinions from the discard pile--they have to realize that nothing in the discard pile can be interpreted without a really good understanding of the input pile. (They'd have to be so good at this that they could spot someone who is trying to score political points and manipulate opinion by intentionally mounting an adversarial campaign that crafts seemingly innocuous ideological content that also gets flagged as spam).

- Likewise, there's probably no great way to make a litmus-test corpus. Spam looks very different in different contexts. A useful Amazon review or tweet may both look like spam if someone emails or DMs them to you. Even if you can create a set of corpora that cover the bases, they'll probably be useless if they don't evolve, and very vulnerable to political and ideological manipulation if they do.

- If there's a lot of legal fear around getting sued because your spam-filter has a taste for how {ideological subcommunity} likes to phrase its opinions, I'd expect spam filters to accumulate safe-words and rules that cause them to back off of charged topics. It'd only be a matter of time until legit spam/manipulation ops infer the existence of those rules and move to exploit them.


Spam can be solved by just making it easy for people to spot when something was classified as spam, e.g. as spam folders do.


Maybe I'm misreading this comment or the context (because I think the biggest single piece we're currently lacking in platform moderation is maximal transparency into what/who gets moderated and how)...

but how would this "solve" spam in any case where the ratio of spam-to-signal crosses whatever threshold it takes to start breaking a platform's network effects, reducing engagement, driving out existing users, warding off new ones, shifting how the community views what the platform is for, etc.?


Spam didn't break email's network effects. Yeah, it hurts it a bit, but there were lots of attempts to build competing email networks that didn't go anywhere.

This is partly because email isn't "for" anything except communication. The idea that platforms must stand for something is new and wrong. A good platform stands for nothing and is open to everyone.


This might just come down to why I'm not sure I'm interpreting you right.

It's one thing to have the platform equivalent of a spam folder for new top-level posts that smell like junk. But these platforms have more significant design challenges: to cleanly handle replies, retweets + commentary, mentions, comments, threads, and anything else that is sort of inherently contextual (including the possibility that there are legitimate non-spam posts that interact with a spam post to quote, comment, reply, warn others, and so on).

I'm a little skeptical about how well anyone can meet that design in a way that makes it easy to see what was flagged as spam and isn't also sensitive to the ratio of spam to legitimate posts...

It's possible you're imagining that spam posts don't show up at all in a thread unless you hit a single toggle that re-renders the thread with any pruned replies and branches in place. Interfaces like this don't spiral out if the ratio changes, but I also don't think they make it easy to spot the flagged posts.

An interface that marks the posts in-thread can make it easier, but it is sensitive to that ratio. Very sensitive if it shows the full post, and a little sensitive if it replaces the post with a clickable indicator that there's suspected spam there.


Yes, for sure any implementation is very much about the design subtleties. I did work once on an email spam filter so I have a pretty good idea of the complexities involved in that space, which includes things like some people's spam being other people's ham, people replying to spams, problematic user interfaces and so on.

My point is a more general one: people argue for censorship as the only way to maintain a workable forum, but I don't believe that, based on my prior experience. I'm not saying it's easy to build a really great spam filter for social forums, but Slashdot proved out a lot of good techniques, and anyway censoring stuff en masse just creates a different set of problems: social rather than technical.


> It’s perfectly reasonable to want to set up some kind of publishing company that has user-generated content

Note that under this setup, a forum like HN would be basically impossible to maintain. How would you set up a publishing company that included user content without having a human pre-vet all of that content? And HN has, like, 3 moderators?

The practical end result of this would be that the Internet would split into a bunch of 8chans, and a bunch of TV stations, with nothing in-between. If you care about free expression, making businesses scared of user content is counterproductive.


Depends what you mean by "a forum like HN". Before HN the main geek watering hole was Slashdot, which famously never censored content and fought strongly against attempts to force it to do so.

Slashdot also had a rather sophisticated moderation and scoring system that allowed spam (hot grits etc.) to be downranked and appear auto-collapsed, whilst longer-form content was upvoted and expanded by default - even if it was a reply to negatively ranked content.
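A minimal sketch of that kind of score-based display - hypothetical code, not Slashdot's actual implementation; the score range and threshold are assumptions: low-scored comments are collapsed per-reader rather than deleted.

    from dataclasses import dataclass

    @dataclass
    class Comment:
        author: str
        text: str
        score: int  # assumed range -1..5, set by moderators/voters

    def render(comments, threshold):
        # Nothing is deleted: comments below the reader's chosen
        # threshold are collapsed, not removed.
        for c in comments:
            if c.score >= threshold:
                print(f"[{c.score}] {c.author}: {c.text}")
            else:
                print(f"[{c.score}] {c.author}: (collapsed - click to expand)")

    thread = [
        Comment("alice", "Long, thoughtful reply...", 4),
        Comment("spammer", "HOT GRITS!!!", -1),
        Comment("bob", "Short follow-up question", 1),
    ]
    render(thread, threshold=0)   # one reader hides the junk...
    render(thread, threshold=-1)  # ...another opts to see everything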

You may feel a personal preference for HN, or not, but they were essentially the same from the perspective of any lawyer.

In other words HN could easily keep its distinctive feel without ever banning or erasing anything, just by implementing sufficient controls that let users see what they want to see: in fact it already does via options like showdead.


The idea that it's OK to implement an algorithm that takes some user's input to censor posts but it's not OK to do so directly seems like it skirts the central question.

If I have upvotes and downvotes, weighted the way I like, and I limit who can get an account in the first place, I can probably achieve the speech outcome I want relatively easily "without" human intervention.


So no specialized platforms that cover a single topic or area? After all, then they'd need to delete off topic posts. What about user moderators deleting things? What about a mailing list owner enforcing guidelines? Or is all public online communication one giant unmanaged and unstructured sets of posts on everything and anything? Or, in reality, nothing but paid advertisements.

edit: What about downvoting? Does that not count because it's by users? However, the algorithm for ranking and matching is controlled by the platform; does that count as censorship, since it will promote some posts over others?


> no specialized platforms

That's not an issue with reddit - moderators are not paid, nor is reddit responsible for them - and when users subscribe to a sub, the contract is that they'll abide by moderation rules and downvoting rules. That's not corporate censorship. The problem is when the company itself starts censoring what the users have voluntarily subscribed to read.


Yet HN are, as I understand it, responsible for dang and sctb. They regularly moderate away the shite - which preserves one of the less polluted spots on the net.

How do you square that circle? Surely you should object to what HN does - in fact why would a believer in no censorship even have an account here if HN are censoring (moderating and limiting) what everyone is able to say?


Yep, having paid moderators would classify you as a publisher - you're essentially a newspaper publishing op-eds. HN would be a publisher, and I don't find that wrong.

However, HN could instead be a subreddit on reddit's platform, with HN (or preferably an unrelated external entity) paying the moderators.


There is no legal distinction between a 'platform' and a 'publisher' that compels a business to conduct its activity in any particular way. That is a falsehood that has somehow wormed its way into the general discourse.


Now consider the world wider than the USA. Imagine I am in the UK (I am), which has overly strong libel and defamation laws. Other countries can too. I don't agree the law's balance is right, but that's a different discussion.

Were HN and FB publishers, with UK offices (and similarly in other countries with strong laws in this area), they would be liable as well as the individual posters. Those laws are why, when stories like Snowden and Cambridge Analytica broke, they were published on both sides of the Atlantic.

You've now created a problem potentially far bigger than the one you fixed.


That is already the case though, isn't it? You can say things on reddit that would get you in trouble on a UK-based forum. I believe the distinction between publisher and platform is a US thing.

FB and HN are subject to US law, where they're based. It's up to the UK to block access to them if they think they're illegal - or, preferably, to fix its libel laws.


IANAL but as I understand it, the UK case at the moment is it might get me into trouble if they can pin me down, but not the discussion forum itself. There might be a legal request to reveal my identity, but purely to pin me down. As a publisher, the forum itself would be jointly and severally liable.

If Fred Smith writes a piece in The Times that libels me, I sue the newspaper, not Fred Smith.

FB and the other globals are subject to the laws of every place they operate. e.g. See all the US companies causing controversy changing their policies within China.


But what if (UK citizen) Fred Smith writes a piece in the New York Times? I don't think you can sue the NYTimes, or that the NYTimes will care - I bet they receive a ton of legal complaints from foreign countries every year, most of which are readily dismissable under US law.


Now I would need to consult a lawyer. I don't know US law on libel and defamation.

The fact that stories like CA and Snowden have been jointly published in the US, rather than say Canada or France makes me suspect that chances of a successful suit are low in the US. That's probably the point. :)


This problem has been solved for years, and yet no platforms implement it: K-Means Clustering

https://towardsdatascience.com/understanding-k-means-cluster...

The property "interesting" is an individual decision. A forum like HN could implement K-Means Clustering instead of having dang madly moderating stuff. Over time, those HN users interested in "technologically interesting stuff" would aggregate together, and generally only see posts that are "voted up" by others in their cluster.
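Here's a minimal sketch of that idea in Python - the vote matrix, cluster count, and scoring rule are all illustrative assumptions, not a real platform's code: users are clustered by voting history, and posts are ranked for each viewer by how their own cluster voted.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Rows are users, columns are posts; entries are -1 (downvote),
    # 0 (no vote) or +1 (upvote). A toy random matrix stands in for
    # the real (huge, sparse) vote history.
    votes = rng.choice([-1, 0, 1], size=(500, 200), p=[0.1, 0.8, 0.1])

    # Group users with similar voting behaviour into k clusters.
    clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(votes)

    def score_for_viewer(post, viewer):
        # Rank a post for a viewer by the mean vote within the viewer's cluster.
        peers = votes[clusters == clusters[viewer]]
        return peers[:, post].mean()

    # The front page a given user sees: posts their own cluster liked most.
    viewer = 42
    front_page = sorted(range(votes.shape[1]),
                        key=lambda p: score_for_viewer(p, viewer),
                        reverse=True)
    print(front_page[:10])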

All the other crap would be seen by all those HN users that migrated over from 8chan.

The problem with this, of course, is that HN would be full of all sorts of sickening crap that you and I would never see (or, at least, until a small sampling of us saw it, and all down-voted it into the 8chan netherworld).

The great thing about this -- the FBI et al. would have much better tools to do their jobs: they could troll these seedy underbellies of HN, etc., and actually spear-phish and destroy the scum that is posting repellent garbage.

Or, we could carry on doing what we're doing, even though it can't work. Our choice.


I agree with the spirit of the idea, even if it might not be realistically defined or enforced.

One thing that comes to mind is how do you define "illegal" content? A huge hangup here tends to be free speech - lots of speech that is legal in the US is not legal in places like Europe - e.g. there generally isn't a concept of hate speech in the US, and in the US inciting violence literally means "let's go over here and beat these people up right now" - anything less tends not to be inciting violence. Europe tends to be a lot more restrictive on these fronts.

And if you want the bar to be "illegal anywhere", then places like China would make a lot of things illegal that everyone in the west would be fine with. If you want to say that of course we wouldn't pay attention to China, now you're back to picking and choosing countries likely based upon arbitrary metrics.


> lots of legal free speech is not legal in places like Europe... places like China would make a lot of things illegal that everyone in the west would be fine with

The obvious solution, for a US company, is to follow US law and accept that some content will be blocked in China, strict theocratic states and possibly Europe.


Illegal content is pretty well defined. Pretty much every platform bans illegal content, quite effectively and taking local variations into account, so it's not an issue.


Hi, if you could stop spreading the lie that there is some magical distinction between a 'publisher' and a 'platform' that somehow limits what a business is legally allowed to do, I would really appreciate it, and I know a lot of other folks would too! Thanks!

https://www.techdirt.com/articles/20190613/03172142391/once-...


OK, you can replace "platform" with "protected by CDA 230"; the argument still stands as is.

https://www.eff.org/issues/cda230


I'm not sure what you think you're proving with that link. CDA 230 does something very close to the opposite of what you seem to think that it does. It protects the providers of interactive computer services from liability, but it does not restrict their control over whether to provide those services or not, which would of course obviously be a violation of those interactive computer services' first amendment rights. CDA 230 protects, it does not restrict. The text of the code is really quite straightforward and easy to interpret if you do it in good faith, putting your ideology aside.


> it does not restrict their control

This whole thread is about the hypothetical situation in which the two are linked by new laws.


A hypothetical situation which could never arise, because the first amendment protects everyone, including both the users AND the operators of interactive computer services (such as HN, Reddit, Facebook, Google search, etc).


Once you add spam to this list, it becomes super fuzzy. Unsolicited marketing is obviously spam and so are the Nigerian scams, but what about some cult manifesto, conspiracy theory, or chain letter? Or a crowdfunding link that some well-meaning but misguided people sent around a few thousand times? Or a request for donations by some non-profit? Propaganda posts by a political party? Where is the line?


Why would a "conspiracy theory" be considered spam? A chain letter might be spam. A "cult manifesto" is probably not.

I don’t think it’s too fuzzy, as long as companies don’t arbitrarily define “things we don’t like” as “spam”.


I see two issues with this. First: what you're describing is Gab - a platform that literally only enforces legal restrictions. The dynamic on that site is that the most extreme voices drive away the reasonable people who actually attract new users. Your rule would essentially also stop reddit from existing.

The second is that Facebook doesn't have to censor you to censor you. They just tweak your post so it stops showing up in other people's feeds. Youtube does this all the time: they don't censor anyone, they just make sure no one can find the voices they don't like. Meanwhile, the content they like gets boosted. We've seen this over and over: you go to youtube for a video about puppies, and thanks to autoplay, four hours later you're hearing from Jordan Peterson about how feminists are destroying western civilization from within.


Those are real problems. For this to work, reddit would have to remove its frontpage and r/all - everything would have to be explicitly subscription based.

Youtube should be required to enable alternative modes (view all, latest) and should be held liable if one can prove that they are, e.g., suppressing a legal viewpoint in their searches. I see this as a win-win for transparency.


That pushes the power to control who says what from individual companies to the government, since the government is classifying "illegal."


Illegal comes in two forms: content that is illegal to publish, like calls to violence, and libel law, which applies to newspapers and "publishers". While the government can sway the second to introduce political biases, it's not easy to change the definition of the first in an obviously biased way.

The companies can still choose to remain non-publishers, i.e. platforms - but they have to be impartial. That doesn't mean that censored communities can't exist on their platforms - but the censoring can only be done by an entity unrelated to them. I think it's a good compromise to separate the platform from the censors.


Acknowledging that a problem even exists is completely valid in and of itself. Especially when there is a sizable population that refuses to believe that something is even problematic in the first place.

For instance, if a climate scientist provides concrete evidence and analysis showing that climate change is happening and is problematic, we wouldn't dismiss them for not simultaneously offering a solution to the problem.


Government should regulate Facebook, Google (Youtube), Twitter, etc. as the utilities that they are. The current situation, where the corporations aren't responsible for the content but can censor it, is the best example of why they should be treated as utilities.


But what would be the goal of governing them as utilities? This is always said as if it were obvious what would happen once FB gets utility status.

Should FB resume its banning of sexually explicit, beheading, animal torture, and ISIS videos when it is a "utility"? And how will the product of FB change as a utility? In what sense is FB a utility other than a very common place people like to go to, like concerts or a sports stadium or something?

What is actually meant when we say "regulate technology companies built around communication as a utility"? How large before you become one? Is any app that has chat or public posting a utility now? What qualifies? Is the only goal to make them dumb pipes? What other roles and responsibilities would the government foist onto FB and require it to take on?


It would allow the government to do what governments do with utilities (say, electric utilities or oil pipelines). Prices could be regulated so Facebook's profit margin goes from 30-40% to a "just and reasonable" 10%. Moderation of illegal content could stay, but there would be enforceable "non-discrimination" requirements, so Republicans could sue Facebook for alleged discrimination. The government could impose GDPR-type protections without legislation.


What will happen to the New York Times if it runs sexually explicit material, beheadings, animal torture, etc.? It will get sued.

The sooner we can start suing them for the content they provide, the sooner they will find a resolution to it.

In the same way, they will get sued if they censor content that might be deemed socially important.


> What will happen to the New York Times if it runs sexually explicit material, beheadings, animal torture, etc.? It will get sued.

The NYT isn't a public utility or a public broadcaster. I'm curious what you think would happen to the New York Times if they were to get sued in your imaginary hypothetical?


I'm curious what you think would happen to the New York Times if they were to get sued in your imaginary hypothetical?

Editors can be and have been jailed. They are ultimately personally responsible for what the paper publishes.


Which contemporary examples (i.e. within the last 50 years) are you referring to? Journalists have been threatened with imprisonment for protecting sources from government subpoena. But the sanction is for refusing a court order, not because of the publication content.

https://archives.cjr.org/opening_shot/opening_shot_july_augu...

And in any case, a publisher is different than a utility or a platform. Social media users are already punished for publishing illegal content (libel, copyright violations, etc). How does making Facebook a "utility" change that?


If it is as you say, why would you object to Mark Zuck being personally liable for all content posted on Facebook? If anyone slanders anyone else and his platform publishes and disseminates it, why should he not be considered responsible?


How would a law that holds Zuckerberg responsible in the fashion you describe not be applicable to every website operator and Internet service provider?


It would not be applicable to those who only "censored" illegal material in response to a court order. The difference is whether the operator decides to "censor" content themselves for their own reasons/agenda.

Same as newspaper vs postal service.


Or absolutely ban censorship and the content 'curation' that creates walled gardens tailored to your superficial tastes.

Only block content that is illegal, and do it on demand - going back to the days when services like facebook were just content aggregators, where filtering and curation were done by the user, not for the user.


That would be the best solution. It wasn't even a problem before a political camp made it one. I hope the US finds a solution, because other countries are just not capable of doing so.


I don’t want to live in the dystopia where Facebook is utility. It is not remotely comparable to water, electricity or roads.


Communication platforms are absolutely as important as electricity and roads in today's world. And like it or not, Facebook has a near monopoly on certain classes of communications.


You can recognize something as being bad even before you have a solution.


A solution can be worse than the problem it allegedly solved.


Which is what the author recognizes:

> Indeed, as conservatives often point out, the existence of a market imperfection doesn’t mean that government intervention is justified, if only because it may actually make the problem worse. But as many people don’t seem to understand, especially among libertarians, the fact that government intervention is not warranted doesn’t mean there is no problem.


The best case would be education presenting the strong advantages of freedom of speech.


The author did not claim to give a solution.


> One is never given

Here’s an obvious one: break up Facebook. Easily splits into Facebook, Instagram and WhatsApp.


And that does what?

This is a conversation where people are arguing in generalities, not specifics.

What does moderation entail?

What are the rules needed?

What rules for what types of conversations?

What does moderation mean?

What is an acceptable level of service?

What is "service" in a web forum?

What's actually wrong?


You'd need to break it into much smaller pieces, but that is the obvious solution to the problem.


Instead of an ideological bias, what if it is just a demographic bias?

The author supposes that there are some mustache-twirling masterminds behind the scenes at Google, Twitter, and Facebook, picking and choosing what content is promoted. But the algorithms, at their most basic, promote what is popular.

Based on voting records, in absolute numbers, there are simply more left-leaning people (at least in the US) than there are right-leaning people. Gerrymandering and the electoral college, etc don't reflect this in the government, but it is the case.

Left-leaning viewpoints are promoted more, then, simply because there are more people who want to see them. There's no reason to posit a nefarious conspiracy.

We could argue that it would be beneficial to promote less popular viewpoints more to expose people to new ideas. It probably would be a good thing. But these companies are in the business of making money and people generally just want to see things that confirm their own biases. Just look at how upset conservatives are that they can't find enough content like that ;)


>The author supposes that there are some mustache-twirling masterminds behind the scenes at Google, Twitter, and Facebook, picking and choosing what content is promoted. But the algorithms, at their most basic, promote what is popular.

Hardly. The algos have programmers' thumbs on them, heavily. A hilarious example was when Reddit was trying to deal with the_donald successfully spamming reddit's hot algorithm. Someone screwed up the new weighting, and instead of pushing td posts down it pushed them up; the whole front page was nothing but td posts for a day. They eventually fixed it, but never think that there isn't a human jury-rigged mess of if-else statements at the core of these algorithms.
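For reference, the "hot" ranking from reddit's old open-sourced code is roughly the function below (a from-memory sketch, not the production code). Note how flipping the single sign term inverts the ordering - exactly the kind of one-character mistake that fills the front page with the posts you meant to bury:

    from datetime import datetime, timezone
    from math import log10

    # reddit's historical ranking epoch, per the old open-source code.
    EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

    def hot(ups, downs, date):
        """Approximation of reddit's published 'hot' score."""
        score = ups - downs
        order = log10(max(abs(score), 1))
        sign = 1 if score > 0 else -1 if score < 0 else 0
        seconds = (date - EPOCH).total_seconds()
        # Flip `sign` here and heavily downvoted posts rise instead of sink.
        return round(sign * order + seconds / 45000, 7)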


That just proves my point. the_donald had to find an exploit and resort to spamming in order to get the algorithm to push them onto the front page. They couldn't get there on the basis of, you know, actually being popular because a large number of people want to read them.


>That just proves my point

It proves that reddit admins didn't like right-wing shitposting and were willing to rewrite the algo to deal with it - which is completely the opposite of the point you were making, that these algos are somehow neutral.


It proves that reddit admins take steps to deal with people who abuse the site with spam, which is what one would expect of any forum administrator for any spam content.

If their content was so compelling, why did they have to spam it? Why wouldn't it just attract users organically the way other content does?


>Why wouldn't it just attract users organically the way other content does?

[[citation needed]] Would you like to present proof that t_d did not grow organically in 2015/2016?

So far you're heavy on claims and light on proof.


Citation? No problem. Here it is:

> A hilarious example was when Reddit was trying to deal with the_donald successfully spamming reddits hot algorithm.

- buzzkillington in Hacker News comment: https://news.ycombinator.com/item?id=21463480

You were the one who stated that /r/the_donald spammed Reddit to reach the front page. Are you changing your story and now saying they reached the front page organically? Are you confused about what spamming is?

If there really were a large number of people genuinely interested in the_donald, it would have reached the front page without spamming. The reddit algorithm is probably also biased against posts trying to sell penis enlargement pills. Do you think that is also oppressive?


If you haven't already, would you mind reading about HN's approach to comments and site guidelines?

>Be kind. Don't be snarky. Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

https://news.ycombinator.com/newsguidelines.html


> supposes that there are some mustache-twirling masterminds ... picking and choosing what content

Uh, no, he points out, accurately, that the mustache-twirling "masterminds" are actually kicking people off of the site for having unapproved viewpoints.


Replace "censoring conservative viewpoints" with "censoring the nice people who tell me about how I can make $13,000 a week working from home". Which part of this argument changes? I can't find any part that does.


I can't speak for the article, but I can speak for myself. The difference is that I, personally, have control over the blocking of the "make money fast" communication. I think part of the right to free speech is the right to be able to choose what speech you consume, i.e., everyone has the right to free speech but nobody has the right to impose upon you.

For me personally, since I run my own incoming mail server, there is no platform censoring those emails. I use my own tools, trained on my own emails and my own choices about what to block, to do the filtering.
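For the curious, "my own tools, trained on my own emails" needs nothing exotic; a toy version of the standard naive Bayes filter over personally labeled mail looks something like this (a sketch of the general technique, not my actual setup):

    import math
    from collections import Counter

    def train(spam_msgs, ham_msgs):
        """Word counts from mail I personally labeled spam or ham."""
        spam, ham = Counter(), Counter()
        for m in spam_msgs:
            spam.update(m.lower().split())
        for m in ham_msgs:
            ham.update(m.lower().split())
        return spam, ham

    def looks_spammy(msg, spam, ham):
        """Naive Bayes log-odds with add-one smoothing."""
        n_spam, n_ham = sum(spam.values()) or 1, sum(ham.values()) or 1
        log_odds = 0.0
        for w in msg.lower().split():
            log_odds += math.log((spam[w] + 1) / n_spam)
            log_odds -= math.log((ham[w] + 1) / n_ham)
        return log_odds > 0  # my threshold, my choice of what to block

The point isn't the math; it's that the training data and the threshold are mine, so nothing gets blocked except by my own choices.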

When a platform entirely blocks some speech, that is not my choice; it is theirs. That is the critical difference.

This is still not a bright shining line, because there isn't one. But it's sufficiently clear to be a guiding principle. If you want to lock yourself in some particular filter bubble, that's your choice, but nobody else has the right to force you into one.


> The difference is that I, personally, have control over the blocking of the "make money fast" communication

Have you never encountered a phpBB forum, or a wiki, or a social network, that doesn't have spam blockers? In my experience, such sites are inundated with spam, occasionally to the point that signal is hard to find over the noise. Your "solution" only touches on email. Filtering spam from such diverse sources would require a much more intrusive technology. Also, you're putting a wholly unreasonable burden on website owners to preserve abusive content -- if somebody uploads a flood of junk data, are you legally required to retain and redistribute it all? And are you forcing users to download gigabytes of garbage, and filter it down to a kilobyte of content?


As a pragmatist on this matter, I believe the solution here is scale. The control I have over a phpBB forum is that if I don't like their spam policy, I can leave, and either join another or form my own.

We're not talking about a phpBB forum or a little blog with a few thousand commenters. We're talking about single companies with very significant fractions of the total discourse flowing through them. If Facebook blocks something, it affects a massive number of people, and "just go make your own Facebook" is not a suitable recourse. Scale matters. Small entities can do things without being a threat to the body politic that large entities can not.

As I've said before, I'm not entirely convinced that something the size of Facebook can actually work in the long term [1]. Trying to jam everybody into one set of rules may simply be infeasible, for reasons not entirely related to "rights". If you look at the current conflicts over Facebook through this lens, you may find the conflict makes more sense. That goes especially if you are one of the majority here on HN who are kinda lost over why so many have become anti-Facebook: when you just see Facebook imposing your perfectly sensible, obviously correct values on the world, why is everyone complaining? Well, it's that whole diversity thing you may have heard rumors about. It's real. It may not be the case that you can get everyone into one community with one set of standards, regardless of how "obviously right" those standards are.

[1]: https://news.ycombinator.com/item?id=20146868 - in fact, I'd amplify the original post I made by also pointing out Facebook is international, so we're not just trying to stick the diverse cultures of the US in there, but trying to stick all the diverse cultures of the world in one big pile. When I put it that way, does the idea that such a thing could work in the long term even pass the smell test?


> We're not talking about a phpBB forum or a little blog with a few thousand commenters.

I listed examples including "wikis" and "social media sites", which categorically contain the largest websites in the world. You'll find websites of all orders of magnitude, from tens of users to millions -- where do you draw that line? But more importantly, are you under the impression that large-scale sites are somehow immune to spammers?

> As a pragmatist on this matter, I believe the solution here is scale. The control I have over a phpBB forum is that if I don't like their spam policy, I can leave, and either join another or form my own.

I'm not understanding something here. You think that it's okay for spammers to implement a "heckler's veto" by de-facto shutting down every single niche website that (by law, presumably) accepts/rebroadcasts any and all submitted content -- that you'll be able to "vote with your feet" and move/create a site or forum that will somehow be immune to this onslaught? At what point is "scale" the answer? Bigger is better?

> As I've said before, I'm not entirely convinced that something the size of Facebook can actually work in the long term [1].

I'm suffering whiplash. You gotta help me here. You're happy to feed small websites to the spamwolves, yes? But you're not convinced that big websites are good either. Do we just cancel the entire internet?

I'll grant that I'd probably get more work done.


It's easy to write laws such that totally blocking automation is allowed, totally blocking human-written content is 'censorship', and categorising/hiding-by-default/ranking content is allowed.

Yes, this doesn't deal with things like "is YouTube's ranking biased", but it's a start. You posit that sites would be obliged to publish onslaughts, but many spam filters are pretty good at filtering such attacks. For instance, that's where CAPTCHAs came from (there are better technical solutions than CAPTCHAs, but you get the idea).
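To make the automation-vs-human distinction concrete, even a plain per-account token bucket stops flood-style onslaughts while leaving human-paced posting untouched (a generic sketch, not a claim about any particular site's defenses):

    import time

    class TokenBucket:
        """Allow bursts up to `capacity` posts, refilling `rate` per second."""
        def __init__(self, rate=0.05, capacity=5):
            self.rate, self.capacity = rate, capacity
            self.tokens, self.last = float(capacity), time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # sustained pace faster than ~1 post per 20s

At rate=0.05 that's one post every 20 seconds sustained, with a burst of 5: generous for a person, crippling for a script, and no content judgment involved at all.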


I don't see any point in engaging with someone working so hard to turn everything I say into something as absurd as possible. If you can't understand the basics of something like "scale matters" without immediately twisting it into "but why are you ignoring scale so hard?", there's no meeting of the minds to be had here.

Though for the benefits of others, I'd point out that a careful reading of my post reveals that I did write something that implied strongly that small forums have more freedom in their spam blocking than large ones do. The idea that I'm obligating websites to never filter anything is some bizarre idea klyrs has gotten a hold of and apparently doesn't plan on letting go of.


The article specifically complains about even slightly less than even-handed promotion of conservative and liberal viewpoints, so it's certainly not only talking about forcing people into a filter bubble. It's just not the case that internet companies make it literally impossible to hear conservative viewpoints. (I'm not conceding the point that they are biased in any important way, but it's just certainly not the case that it's impossible to find conservative viewpoints on Twitter.)


>I think part of the right to free speech is the right to be able to choose what speech you consume, i.e., everyone has the right to free speech but nobody has the right to impose upon you.

You can think this, but it's not borne out by legal precedent. Odious speech is protected in public spaces, so if the KKK wants to impose their speech on me by disrupting my morning commute, they have that right as long as they get a permit.

And if an anti-abortion activist wants to shove gory images of fetuses in the face of everyone going into a Planned Parenthood to get their prescriptions refilled, nobody stops them from doing that either.


"You can think this, but it's not borne out by legal precedent. Odious speech is protected in public spaces, so if the KKK wants to impose their speech on my by disrupting my morning commute they have that right as long as they get a permit."

I think (note that phrase) that in the 21st century, this is an exceptional historical case for the rare communication medium that attaches communication directly to physical location. Almost everything else we do nowadays has personal control involved, and we should consider that the more common case.

It also is not as unregulated as you may rhetorically claim. They are allowed to say odious things, but only at a reasonable volume (or they'll be shut down as a public nuisance regardless of the contents of their speech), and they are not allowed to jump in your face and stay there (in which case it's assault, again, regardless of the content of their speech). Moreover, I'd observe that you're mostly hypothesizing, as indicated by your own phrasing, and I'm a lot more pragmatic on this matter, so I don't feel the need to solve problems that don't (currently) exist. I don't think anyone's been more than mildly annoyed by public speech in a long time. (Being annoyed by content you disagree with is part of being human. Trying to "solve" that produces worse problems than the original one.)


The difference is that there's a Spam folder in your GMail where you can see all those "unwelcome" emails... and also, filters adapt to your preferences, not the other way around.


So, the argument is Twitter shouldn't combat spam? That HN shouldn't edit/delete content? That if a website accepts user generated content, they should have to make it viewable?


That's my argument, yes. I'm fine with hiding replies, and with users using opt-out-able block-lists (e.g. that's how porn is handled on Twitter and many other websites)... moderation is OK; censorship and (shadow)banning are the evil.


> Which part of this argument changes?

One is that people arguing about politics in political discussion forums are using the forum for its intended purpose, whereas spammers are abusing it (in fact, stealing from it, since it makes its money through advertising). I've yet to see anybody oppose removing political discussions on forums about woodworking - that's not what they're for.


Conservative viewpoints are political ideals. Scams are scams. If you don't make that distinction, then Ponzi schemes become protected speech.

There's also a clear (direct) profit motive, and it's depriving people of property. There are lots of ways this argument changes. When you censor political views, you're depriving people of their rights. Shutting out a swindler isn't even close to the same thing.


Are you really sure that is the distinction? A lot of conservative policies result in even the conservative voters being worse off. And there IS a direct profit motive for those who fund conservative politicians (a.k.a. scammers).


Absolutely, I'm 100% sure. I'm not talking about an indirect "cui bono" desire to use politics to one's own advantage. Whether or not voters are worse off is subjective and debatable. Policy discussions are a form of political participation. Selling a product isn't. There's a very clear distinction here. In my view, it's much less reasonable to make content distinctions based on "correct" political beliefs than on vendors selling a product.


>Which part of this argument changes? I can't find any part that does.

One is fraudulent spam, the other is not.


The article is good, but it could do with a lot less ad hominem and calling people 'idiots'.

You can do better than this:

> you may think that you’re very sophisticated, but actually you are just being ridiculous.

> Sure, you can say that if you want (though I don’t think that’s how the word is actually used by ordinary people), but only idiots think it shows that it’s not a problem.

> I know that some people, especially among libertarians, think it makes them smart to believe that, but they clearly have not thought this through.


I'm always surprised when articles like this don't mention reddit. Isn't it a top-10 website that was once top 3? I've read quotes saying one in three Americans has visited reddit. Anyway, reddit is a great resource for studying moderation activity.


That is because it oozes the pretense of refusing to accept that their "silent majority" is really a noisy minority. Not recognizing freedom of association is one of the big warning signs, along with not realizing the impact of what they want on /themselves/. Personally, at that point I mentally mark them as spam.

Annoyingly, they conflate actual corporate censorship concerns with their persecution complex. DMCA abuse is actual corporate censorship, for instance. Same if, say, Comcast blocked Netflix. The government using a cudgel to make corporations do censorship that would be struck down in an instant if written into law is censorship too, plus a messy semantic argument about corporate vs government origin. These complaints boil down to "they aren't giving me an audience, how dare they!". There is a world of difference between not being given any attention and not being able to publish.


I don't see how this relates to leaving reddit out of articles about moderation in social media.


The article starts with the unsupported premise that the platforms these companies operate favor liberal over conservative speech, so my natural inclination is just to dismiss the rest of it; however, even if it's true, the solution to improving freedom of speech is not suppressing someone else's freedom of speech.


I probably can’t supply data that would convince you liberal speech is favored over conservative speech. However, within the past month my personal Facebook account was curiously disabled within an hour of typing the word “Christian” in a comment. I would not identify as conservative or liberal. I certainly can’t prove that merely using this word was what triggered the permaban, but Facebook absolutely would not reinstate my account, even after I supplied a valid driver’s license as proof of my identity.


It's hard to take this argument seriously when the only companies mentioned are left-leaning tech companies such as Twitter, Facebook, and Google.

I suspect that the author's concern isn't censorship per se, but censorship of viewpoints he happens to uphold. If Twitter's suppression of conservative content is bad, then so is The Federalist's suppression of liberal content.

You're about to argue that one of these is a social media platform, while the other is a news outlet. In that case, what do you think is the essential difference that makes it okay for one to censor certain ideas but not the other?


Isn't it more likely that the author chose Facebook, Google, and Twitter not because they're left-leaning, but because their traffic is vastly more significant than The Federalist? The three are all top-50 sites globally. The Federalist doesn't even make it into the top 10k.

It's pretty rational to focus more on the sites that have the most traffic when discussing bias, isn't it?


> It's pretty rational to focus more on the sites that have the most traffic when discussing bias, isn't it?

I feel that if a person is making an argument from principle, then that principle should apply everywhere and not just where politically convenient to them.


The author does:

> ... although it can be leveraged against big tech companies that are biased in favor of liberals, this line of argument also has implications that conservatives may less readily welcome. For instance, it means that how rich people use their money to promote their ideas may also be a problem, insofar as it distorts the marketplace of ideas.

The principle he's arguing is that when an organization that has a de facto monopoly on a type of information stream (search, microblogging, and social circle media) uses that monopoly to favor one political viewpoint, that is harmful to society because it "distorts the marketplace of ideas" by preventing equal access to said marketplace.

The Federalist is not in this position. For every conservative magazine like The Federalist, there's a liberal one like Mother Jones. This is the "marketplace of ideas" that the author refers to in action, not a distortion of it. So the author is, in fact, arguing from principle, just not the principle you're using.


I think a lot of people do take issue with things that call themselves "news" promoting one side of the ideological spectrum.

That being said, social media is just that: something users can post to. Twitter users were not hired to promote a particular point of view, they were invited to share their own point of view. To then go through and cull the people you invited to share based on ideology while still calling yourself a social media platform is where a lot of the complaints (rightfully) come from.


It sounds like you're saying that what a company calls itself determines which ethical obligations apply to it. I can agree to an extent, but I suspect there's a different point here.

Twitter, per its own website, is "what’s happening in the world and what people are talking about right now." Is there any way to revise this sentence—or any other aspect of the Twitter brand—such that Twitter would no longer be subject to these complaints of censorship?


> Twitter, per its own website, is "what’s happening in the world and what people are talking about right now." Is there any way to revise this sentence—or any other aspect of the Twitter brand—such that Twitter would no longer be subject to these complaints of censorship?

I would say no. It's a service designed from the ground up to allow people to speak their mind. When they start editorializing who is allowed to speak their mind, people will rightfully complain of censorship.

I suppose they could change the tagline to:

"what’s happening in the world (that we approve of) and what people (we like) are talking about right now."

I don't think many people would be excited to use that service though.


What makes a company left-leaning?


It turns out that people who suffer under an injustice are the ones most motivated to draw attention to the injustice. Whether the author has a personal motivation for his concern is utterly irrelevant.


I can argue that censorship isn't OK on either? Isn't that the appropriate position?


How could a private company impose censorship when none of them are allowed to use violence? They could only ban certain types of behaviour on their premises, right? Which is not really censorship, as one could just go to someone else's premises or found one's own.


The real problem here isn't that companies are allowed to control what content goes on their platforms, but that companies are allowed to get so large that they're a de facto monopoly for much of the world (e.g. Facebook).


Regulating private companies for the benefit of society? I didn't think that idea was still within the current Overton window at this point.

The phrase "hoist by their own petard" seems appropriate here.


> For instance, if you’re a journalist and you want to promote your work, there is simply no viable alternative to Twitter at the moment. No other microblogging platform

Microblogging platforms aren't the only venue for journalists to promote their work.

But, even if that wasn't the case, that's not an argument that private censorship is bad but that private monopolies are bad, and that censorship by monopolies on essential communications mechanisms (public or private) is bad.


The problem, I think, is that we let these companies grow so big, and let society grow so dependent on them, that their policies become a worldwide public issue. If we become dependent on Twitter or Facebook to participate in democracy, regulation should follow.


There is outright censorship, or rather removing topics that the owner of the platform doesn't like, and then there is the much more serious problem of creating echo chambers for the users where dissenting opinions are not deleted, but hidden.


Censorship is awful, and sometimes even worse when done by large tech corporations, e.g. Google, Facebook, etc.


Politicized flamebait, flagged. Call it censorship if you want.


For several years, conservatives have expressed concern that big tech companies, such as Google, Facebook and Twitter, are suppressing right-leaning content in various, more or less direct ways. Even though conservatives often exaggerate it, I think there is no doubt about the anti-conservative bias of these companies, but that is not what I want to discuss in this article.

You don't want to discuss it here because you don't have a detailed analysis backed up by solid data. If you did, you'd lead with it.

Trust me, most people here can handle an article full of graphs and stuff.


I've been flabbergasted at the number of universities that have recently stifled speakers.

I agree with much of what the author wrote. It's time for change. Classical liberalism may be best.


See, that's the libertarian issue here.

If you value freedom above all else, then Google, being an independent entity, should have the freedom to ban / veto whomever it wants.

You can't have it both ways: let people be free to say whatever they want AND force the owners of communication channels to spread all information.


I don’t think that’s the argument they’re making. I think they would concede that Google has a property right, and may at its discretion moderate content. However, they would argue that Google shouldn’t exercise that right, because it is harmful to the culture of free speech. This is voluntarism, which is to say they should choose to hold themselves to a standard without being forced to do so.

I’m not sure how I feel about it, but I think I’ve summarized the best version of the argument.


Freedom comes with responsibility. The idea that they are not responsible for the content (i.e. you can't hold Youtube/Google legally responsible for the content they host on their platforms) while at the same time they make a profit on that very content and can censor it as they see fit -- this is idiocy in its purest form.

We hold newspapers and TV stations legally liable for the content they serve. This is fair. A situation in which they can't be sued for the stuff they air while still making money on it -- that's the current situation with these platforms. It's even worse, because they can somehow censor the content they don't like for their own political reasons.

It's like with banks. Yes, we can make money. And yes, when times are bad, we can get money from the government/taxpayers too. This is not freedom; this is fascism.


That definition of responsibility negates freedom in itself. I have literally seen it used by dictators who set libel laws to crush any negative stories.

The stance is absurd - it is akin to saying that Kinkos should be liable for anything printed beyond "they knowingly used ink laced with weaponized anthrax" - that it should be charged with terrorism because a customer quietly printed a bomb threat formatted to look like a resume - just because it kicks you out for openly printing Goatse.cx and every other old shock site that scares away the other customers.


Exactly my point. But for this to work, they need to be regulated by the government as utilities.

No one has the right to turn off the electric grid to your house because they are a "private company". No one has the right to turn off the running water to your sink because they are a "private company" and, if you don't like it, you can go somewhere else or open your own. The same way, no one has the right to censor me because they are a "private company" and, if I don't like it, I can go ahead and open my own Youtube. They should be and will be regulated as utilities.


What precedent are you using to imagine that if Facebook/Youtube/etc were to become public utilities, they would be barred from censorship? Public broadcasters are currently regulated by the FCC:

https://www.fcc.gov/media/radio/public-and-broadcasting#OBSC...

> Indecent Material. Indecent material is protected by the First Amendment, so its broadcast cannot constitutionally be prohibited at all times. However, the courts have upheld Congress' prohibition of the broadcast of indecent material during times of the day when there is a reasonable risk that children may be in the audience, which the Commission has determined to be between the hours of 6 a.m. and 10 p.m. Indecent programming is defined as “language or material that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards for the broadcast medium, sexual or excretory organs or activities.” Broadcasts that fall within this definition and are aired between 6 a.m. and 10 p.m. may be subject to enforcement action by the FCC.


You're conflating right to receive a basic necessity with the right to distribute a non-necessity.

You may have a right to receive water. But you don't have a right to pee in the water supply.

Nor should the private water company be obligated to ensure that your pee goes into the tap water of every household.


A major difference between the utilities you've described and YouTube / Facebook / Reddit / Twitter is inescapable monopoly. Barring special cases like well-water or local power generation that most people don't have access to, if the utilities cut you off, there is no alternative source. If YouTube cuts you off, you can go to another video host (including but not limited to paying for your own computer to host your data). If Facebook kicks you off, you can use email. Hell, if Facebook or Twitter kick you off, you can go use Twitter or Facebook.


Freedom comes with responsibility.

And who decides what is responsible?

This may be helpful https://www.popehat.com/2019/08/29/make-no-law-deplatformed/


Power comes with responsibility, not freedom. Where did you even get that phrase?

And does this imply that slavery comes with blissful carelessness?


Freedom is the ability to make choices, which means having the power to choose. Great power is a great freedom in choices - and of course that freedom comes with a responsibility: you made choices, you are responsible for them.

And yes, if you have no freedom you cannot be considered responsible for what you do or don't do. A slave cannot be held responsible for actions he's been ordered to perform. This might not be "blissful carelessness" but it certainly is a lack of responsibility.


That implies there's a tradeoff between freedom and responsibility. Freedom is a natural right - responsibility, while highly valued, is not required.


We are simply confusing two different meanings of "responsibility" and misunderstanding each other.

Responsibility is both a way of acting that is mindful of the consequences, and simply the fact that people will ask you to answer for the consequences of your actions.

The two are strictly connected but are not the same. You "act responsibly" when you are prepared to answer tough questions about your actions. The simple fact that you will be asked to answer those questions, independently from your actions, is the fact of "being responsible".

So responsibility as a virtue is not required by freedom; but as a simple fact of life it is its consequence: people will consider you responsible for what you do.


Ironically, this is somewhat similar to the LGBT cake shop case, with the two political sides flipped.

A great example of how many people don’t care about right and wrong, just winning.


This basically begs the question (in the true sense of the phrase). The debate we're having right now (not just about this particular article but in general) is whether that should be true. Saying that it is the current legal regime is not telling anyone anything they don't already know. We are discussing whether that is a good idea and whether it should be changed.


I agree with you that Google or Facebook should be allowed to do what they want. The problem we face is that these companies run the major (dominant) communication channels; that's (IMHO) where the problem lies [it shouldn't be like that, but heck, we cannot trust our governments any more than we trust G or FB].


Perversely, the popular press is also communicating to people that G & FB should censor more, instead of calling them out for their manipulation shenanigans.


Isn't it the essence of Libertarianism that something can be wrong without making it illegal?

So there's no contradiction between "Google is wrong to censor" and "Google should be free to censor".


> If you value freedom above all else, then Google, being an independent entity, should have the freedom to ban / veto whomever it wants.

Not so if you value freedom of human beings above all else, and even less so if you would wish to maximize human freedom across the population.

Unregulated capitalism maximizes the potential for freedom, if you win a lottery ticket, at the expense of diminishing freedom for all the losers. Libertarians tend to either already be rich or believe they have a significant shot at winning the lottery ticket.

A practical example: lack of worker rights in the US corresponds to more freedom in the abstract, but in practice creates a society where the average worker is much less free than, say, the average European worker. For example, workers in more regulated labor markets enjoy many more days of vacation on average, leading to literally more freedom to explore the world and pursue personal interests.


> If you value freedom above all else

That's not libertarianism; it's the freedom to do whatever you want that doesn't affect other people.


If your flavour of libertarianism doesn't account for inverted totalitarianism [0], it's just not factoring in reality.

[0] https://en.wikipedia.org/wiki/Inverted_totalitarianism


That's a false dilemma. The author is not disputing Google's natural right to do so. He's arguing that they shouldn't, however, get a pass from libertarians. It is possible to criticize someone without demanding that someone regulate them!


Know what else distorts the marketplace of ideas?

Money.

Money allows for buying advertisements, essentially allowing you to buy a platform to spread your ideas.

Having money typically has a pretty clear bias: protecting the money of those who have it. This comes at the expense, many times, of those who do not.

That typically equates to conservative policies.

I'm not convinced that the market distortion caused by supposed bias in social media moderation is even close to the distortion created by allowing money in politics. In fact in many (not all) cases of purported bias, the true impetus for action was due to hateful or otherwise harmful speech.

Nor do I agree that if we disagree with state censorship, we should disagree with Facebook censorship.

Being thrown into the legal system with threat of monetary penalty, jailtime, etc. is worthy of much more scrutiny than speech that gets you banned from a social network.


This was a big part of why I enjoyed Jack Dorsey's comments on Twitter removing political advertising.

The Internet was supposed to be a democratizing force, and it was, for a while. To an extent it still is, but as the convergence of content continues to pull people into the various Platforms™, all of which are supported by advertiser dollars, the content itself is diluted. Some not so much, some so much it's not recognizable, others so much that it's no longer there.

YouTube is a great example of this. Once a monument to the power of democratized content, it's slowly but surely pushing out real creators in favor of advertising companies producing advertiser-friendly and oh-so-clickable content, much of which is complete and utter trash. It's not just marketing companies, though; a new breed of creator has spawned since roughly 2010, determined to soak as many advertising dollars out of the platform as they can with as much vapid and soulless content as it will suffer.

And all of this is sponsored in turn by equally vapid and soulless corporate advertisers, determined to make sure their precious brands are not associated with anything as uncouth as an opinion or a real person.

Hell, there's a great argument to be made that the "pivot to video" and the resulting catacombs full of gutted and destroyed media websites can be laid at the feet of corporate marketers who wanted video ads. Nay, it was not any Internet citizen who claimed the future was video; we didn't ask for it, and we certainly didn't like it.

The whole thing is gross, and far from isolated to YouTube, it has plagued them all, save perhaps for Twitter and I think the only thing saving Twitter is that it's just not as easily monetized. But, Twitter has a different problem, which I believe Jack is trying to solve; the influx of automated bots, bought and paid for by other marketers, more often trying to sell politics than products. But it's just a different kind of feces from the same anus and indicative of the same core problem: advertisers suck, at everything. And they have more power than ever on the Internet, and so the Internet sucks more than ever.


We need to think about why advertising is winning.

A big reason is that advertising is a tax deduction. If I fund my content with advertising, the advertiser gets to deduct the money they spend from their income tax. If I sell content to users, the users don't get to do that. Between the employer and employee halves of FICA, federal withholding and state income tax, that's typically more than half the money. To make it even we would either need a tax deduction for purchasing content or to get rid of the one for buying advertising.

Then you have companies like Apple taking a 30% cut of the creator's revenue. The customer has to pay $1.43 in order for the creator to get $1. That's obviously making direct purchases less competitive against advertising -- and it multiplies with the tax treatment. If you're paying 50% in tax and then 30% to Apple, the customer needs to earn ~$2.86 for every $1 they want the creator to get. If two content creators start with $1 and use it to buy each other's content, by the time the money goes to the other person and back one time, what's left is only ~$0.12.

It would be a lot easier to sell content if the creators actually got the lion's share of the customer's money.
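Spelling out the arithmetic above (the 50% combined tax and 30% platform cut are the assumptions from the preceding paragraphs, not universal figures):

    TAX = 0.50       # assumed combined income-tax burden
    PLATFORM = 0.30  # assumed store/platform cut

    price = 1 / (1 - PLATFORM)   # sticker price so creator nets $1: ~$1.43
    gross = price / (1 - TAX)    # pre-tax earnings to cover that:  ~$2.86

    # Two creators buy each other's content; one round trip (A -> B -> A).
    money = 1.0
    for _ in range(2):
        money *= (1 - PLATFORM)  # platform takes its cut
        money *= (1 - TAX)       # recipient pays income tax
    print(round(price, 2), round(gross, 2), round(money, 2))  # 1.43 2.86 0.12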


This is mentioned in the last half of the last paragraph in the article.


I think it's a difference of additive vs subtractive distortions.

Censorship deletes things: it reduces the size of the marketplace of ideas.

Money adds things: if I spend money tomorrow to buy some ads it doesn't affect you unless either (a) you're also spending money and trying to buy the same ad slots or (b) your views are very weakly held and you become convinced to change sides by my witty advert.

The "get money out of politics" position is based on an unstated assumption that if you buy, say, 1 million adverts, then it's practically guaranteed that (say) 5% of those adverts will successfully change people's minds regardless of how bad your arguments are, and so if you keep spending more and more then you can eventually buy whatever outcome you want.

But is that assumption true? I don't think it is. There have been studies of this which showed political advertising is pretty ineffective in general, beyond informing people that there's an election and who the party candidates are. And there have been two high profile cases in 2016 of votes that were won by the side that spent by far the least (Trump and Brexit). If money was so powerful they should have had no chance, but it didn't work.

Additionally, if this really worked, you should see taxes always fall as rich people buy ads supporting politicians who support lower taxes, which in turn frees up more money to buy ads, ad infinitum (ho ho ho). But this isn't what we see: ignoring occasional spikes to levels so high they were totally ineffective at being collected, the tax burden and size of government have gone up over time rather than down. Rich people seem to be doing pretty badly at buying the sort of policies they're supposed to want.

On the other hand, if people don't know a position, idea or possibility exists at all, they can't possibly support it. You can ignore voices you disagree with, but you can't agree with voices that no longer exist.


Money adds things

No. What money does is redistribute limited attention.

Information may (or may not) be free, and nonrivalrous. But its complements (attention, distribution, discoverability, access, reach) are all rivalrous. Time is the ultimate rival good: a second, minute, hour, day, week, month, year, decade, lifetime spent on one thing is that interval not spent on all else.

"Money doesn't talk, it swears." Bob Dylan.

Money is an amplifier, or at least, provides access to amplification. It can also be an eraser, a gate, a gate keeper, which determines who shall and shall not pass.

The more I read on the nature of monopoly, the more I'm convinced that the critical element of it is not prices or "consumer welfare", but its role as gatekeeper. The monopolist (or monopsonist) faces a constant queue of approaching supplicants. Each supplicant has the best alternative of ... no service. Or at best, extraordinarily reduced service (fewer capabilities, higher costs, most often both). The monopolist, particularly if already at or near capacity, though even at levels of service far beneath that, need only consider the next supplicant.

What people miss out on with money is scale.

The typical US household would be hard pressed to meet an emergency expense of $500. When Facebook purchased WhatsApp for $19 billion in cash and stock, it expressed a liquidity of nearly 40 million such households -- about the population of California. That is, a single corporate entity, and more significantly, the single individual effectively controlling that corporation, Mark Zuckerberg, has a greater effective voice than the entire rest of the state.

In a land dedicated to the principles of liberal democracy, that seems inimical to its very foundations.

As to the touted ineffectiveness of political advertising, the arguments would ring far truer if the parties claiming it 1) reflected that belief in their actions, 2) did not protest so loudly efforts to curtail such advertising, and 3) did not protest efforts to clearly identify those doing the advertising and spending.


As to the touted ineffectiveness of political advertising, the arguments would ring far truer if the parties claiming it 1) reflected that belief in their actions, 2) did not protest so loudly efforts to curtail such advertising, and 3) did not protest efforts to clearly identify those doing the advertising and spending.

1) This is the case, isn't it? In America the Republicans were far more relaxed about the ruling allowing unlimited campaign money, then spent around half the amount the Democrats did and won.

If they were really being duplicitous about it you'd have seen major freakouts amongst Trump supporters about his very low levels of spending, but I don't remember much of that. You saw far more angst amongst Democrats about the "free" news coverage he got by virtue of saying popular-but-unpopular things.

I'm not saying people are totally consistent on this, but at least in the last election, Trump's behaviour appears to have matched the overall right wing pattern of not really believing political spending is a big deal. The US needs a pretty high baseline of political ad spend just to communicate "there's an election on day X, vote for candidate Y" to 350+ million people in a very short space of time. Beyond that it doesn't seem to matter.

2) Why shouldn't they protest? It's perfectly possible to both believe that government control over speech is bad on principle, and also that political advertising isn't as powerful as your opponents believe.

3) Why should people doing political ad spending be forced to be identified, but people posting political views on the internet not be? A good reason to not force identification of such people is to stop retaliatory attacks by extremists designed to silence people, a very real problem. This is especially important in elections where there's a risk whoever comes to power will try to get revenge on people who supported their opponents. Not normally a risk in US politics because of the First Amendment but it's been seen elsewhere.


You know what else distorts the marketplace of ideas? Guns. Monetary incentives are a lot more voluntary in comparison.


Private companies should be able to censor political speech, period. If it's my desire to build a platform that is favorable to certain types of speech, that's my right and the government shouldn't be forcing me to modify my platform to rebroadcast speech I disagree with.

In terms of conservative complaints about leftist censorship, I have to simply roll my eyes. The conservatives complaining about big tech don't care about censorship, they only care about the dominance of their own political messages. This is obvious when you ask them about "democratizing speech" on platforms they already dominate like cable news and talk radio. Remember the fairness doctrine? They hated the idea of having the government dictate the political composition of the airwaves, but when it comes to big tech, suddenly they want to nationalize the platforms to their own benefit. I find the effort extremely disingenuous.

Also, something something downvotes something something censorship!!


> government shouldn't be forcing me

See, here we go: he says “X is bad” and you immediately respond with “what do you mean the government should outlaw X?!” Nobody’s talking about government intervention - he actually spends a lot of time explicitly saying he isn’t.


I think you are right. But I would also argue that such censorship should be as transparent as possible.

Though the real question is: at a certain size of user base, does a private space not become a public one? Or rather, if there is no censorship-free alternative, no place else to go, does the censorship not become a problem then?

I am not sure what the solution would be, actually, but while I think people who don't like Facebook's or Twitter's censorship should just go somewhere else, I would not hold the same view on, for example, email providers.


I would be willing to pay taxes to fund a government-run online "public square" that could not be legally censored, but I am fundamentally opposed to taking control of privately owned websites and forcing the owners to host content that they disagree with, especially political content. I disagree with the idea that the number of people visiting a website is an important metric in determining the owner's property rights.

> at a certain size of user base, does a private space not become a public one

I think that's a question worth exploring, but any honest discussion of it has to acknowledge that most of these social media companies are essentially trivial entertainment sites, and that if we are really trying to designate access to Twitter as a basic human right, we must first consider why life-saving drugs, housing, and healthcare are not human rights.


Not that impressed with this article. It states as though it were fact that the "right is suppressed" and the "left" is "in favor". It then never really explains why censorship might or might not be bad, and it never gets concrete, just something vague about a "marketplace of ideas".

IMO it's bad to have places where only one side is heard and the other is not. Further, there are too many places where people may feel their opinion is the majority one when it isn't. Normally people would notice they're the odd one out; that's not the case in various places (e.g. subreddits).

Note that I'm from the Netherlands and do not agree that freedom of speech above everything else is a good thing. There should be some limits to it, and those reasonable limits should be in place.

Making distinctions between right/left and complaining that more of your side's content gets flagged as questionable... yeah... so? The "left"/Democratic party I consider pretty conservative and not looking out for its people. As such, it's always strange to see the complaints; what counts as right/left in e.g. the US looks to me like "right wing" vs "even more right wing".

I also do not see the restrictions being complained about; if anything, there should be far more limits on reddit/Twitter/Facebook. Further, I think the companies should be held responsible for the content they host.


This is such an own goal. Before worrying about censoring unwanted content, companies should spend their time on not promoting it in the first place (and on building client-side filtering controls), which is just as big a problem and not anti-American to fix.

The problem is that companies want the flamebait and toxic content to drive enragement, engagement, and ad impressions, but they don't want to be accountable for the stuff that's just too edgy for advertisers.
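
To make "client-side filtering controls" concrete, here's a minimal sketch of the idea in Python; all the names (MUTED_TERMS, allows, the feed data) are made up for illustration and aren't any platform's real API:

    # The user, not the platform, decides what to hide.
    MUTED_TERMS = {"rage bait", "outrage"}   # user-chosen keywords
    MUTED_AUTHORS = {"spammer42"}            # user-chosen block list

    def allows(author, text):
        """Return True if the user's own settings allow this item."""
        if author in MUTED_AUTHORS:
            return False
        lowered = text.lower()
        return not any(term in lowered for term in MUTED_TERMS)

    feed = [("alice", "Great recipe, thanks!"),
            ("bob", "This RAGE BAIT will shock you")]
    print([t for a, t in feed if allows(a, t)])  # ['Great recipe, thanks!']

The point being: nothing is deleted from the platform; each user prunes their own view.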


"Companies want the flamebait...." The moderation system on reddit seems to heavily promote group think. Actually promoting controversial comments as a default would be interesting.


I use reddit precisely because it has topical communities. Moderating away people and posts that distract from the topic is a good thing. Yes, it promotes groupthink, but that's acceptable exactly because you can belong to any number of groups.


It's perfect for hobby or niche-interest subreddits. It's a flawed system for discussing politics or anything remotely controversial. Safe-space bubbles, censorship by downvotes, mods that curate their subreddit, mass-removed comments... That's all fine if you're discussing your favourite video game or cooking recipes, but not when politics is involved.

Imageboard-style websites, where every shit comment has as much visibility as any other, are vastly superior for having an interesting discussion. The downside is that you have to wade through the bad comments.
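
To make the difference concrete, a toy comparison of the two ranking policies (the data and field names are made up for illustration):

    comments = [
        {"text": "hot take",       "score": 120, "posted": 1},
        {"text": "shitpost",       "score": -5,  "posted": 2},
        {"text": "niche question", "score": 2,   "posted": 3},
    ]
    # Reddit-style: rank by score, so downvoted comments sink out of sight.
    by_score = sorted(comments, key=lambda c: c["score"], reverse=True)
    # Imageboard-style: rank by recency only; every comment gets equal footing.
    by_time = sorted(comments, key=lambda c: c["posted"], reverse=True)

The first ordering buries the bad comment along with the merely unpopular one; the second surfaces both.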


> Actually promoting controversial comments as a default would be interesting.

"Controversial" is subjective. Would you like FB/reddit to promote Holocaust denial?


Reddit has an option to sort by controversial, both for a subreddit's posts and for comments. I suspect the algorithm looks for things with a combination of upvotes and downvotes. Take a look at any subreddit or comment thread and sort by controversial if you want to see what it looks like. If you defined controversial as anything with a 15% to 55% downvote ratio, that would be fairly specific; over a 60% downvote ratio, maybe it's off-topic or spam. It would be an interesting experiment, for a few days, to change the default sort from "hot" to "controversial".
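
For what it's worth, here's a sketch of a score along those lines; it mirrors, from memory, the controversy sort in reddit's formerly open-source codebase, so treat it as an approximation rather than the production ranking:

    def controversy(ups, downs):
        # High only when a post has lots of votes AND they're roughly
        # evenly split; unanimously voted posts score zero.
        if ups <= 0 or downs <= 0:
            return 0.0
        magnitude = ups + downs
        balance = downs / ups if ups > downs else ups / downs  # in (0, 1]
        return magnitude ** balance

    def is_controversial(ups, downs):
        # The 15%-55% downvote-ratio cutoff suggested above, layered on top.
        total = ups + downs
        return total > 0 and 0.15 <= downs / total <= 0.55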


> The problem is that companies want the flamebait and toxic content to drive enragement, engagement, and ad impressions

No, we really don't. In fact, we have mechanisms in place to keep ads away from certain categories of content. Do you have any evidence to support your statement?



