
Tolerance of wrong viewpoints is different from the active support that algorithms give to the discovery of false content.



It’s not “active” support if the algorithm acts in a content-neutral fashion, for example based on engagement metrics. In that situation, changing the algorithm to artificially keep allegedly false content from being discovered is actively supporting the opposing viewpoints. Leaving the algorithm to act without content-specific modification is not active support. Tolerance would be leaving the algorithms alone.
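
To make the distinction concrete, here is a minimal Python sketch (the items, field names, and weights are entirely hypothetical, just to illustrate the argument, not how any real ranker works):

    # Hypothetical posts with engagement counts; all names and numbers are invented.
    items = [
        {"id": "a", "clicks": 900, "shares": 120, "flagged": True},   # allegedly false, highly engaging
        {"id": "b", "clicks": 300, "shares": 40, "flagged": False},
    ]

    def engagement_score(item):
        # "Content neutral": the score depends only on how users interacted, not on what the item says.
        return item["clicks"] + 5 * item["shares"]

    def modified_score(item):
        # Content-specific modification: the same score, demoted for one class of content.
        return engagement_score(item) * (0.1 if item["flagged"] else 1.0)

    leave_it_alone = sorted(items, key=engagement_score, reverse=True)  # ranks "a" first
    intervene = sorted(items, key=modified_score, reverse=True)         # ranks "b" first

Whether the second ranking counts as "active support" for the remaining content or as ordinary moderation is exactly what this thread is arguing about.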


Former Googler here (11 1/2 years, including in Ads)

The idea that algorithms are "neutral" is laughable. There is a loosely organized group of activists out there who are aware of how these algorithms work and actively manipulate them.

"Engagement metrics" are nothing more than these people pushing the buttons.


I don't really understand why tech companies like Google go so far out of their way to maintain the image of being neutral. I agree they have a right to censor whatever content they choose, for whatever reason, but what I don't understand is why they try to appear neutral about their decisions. It feels like everyone is aware of what is going on; even other commenters who support Google's censorship admit they approve of the bias.

So why do tech companies cling to this line of being neutral when no one really seems to accept it and they themselves have no intention of being neutral? I feel like there would be less conflict over policies, and fewer complaints to deal with, if they were more honest. Maybe it has to do with Section 230. I don't know, but I feel like we would be better off if consumers had more information.


Every MITM-as-a-service starts off by being a neutral conduit to attract users, and then slowly adds restrictions to appease advertisers. But users never appreciate additional restrictions, and so Google (et al) have to keep marketing themselves as general hosts lest they lose even more mindshare.


There is a difference between opposing viewpoints and factually incorrect information that is destructive.

If your view is based on provable falsehoods, your view is worse than valueless.

Tolerating these people is harmful to everyone, but that is not why Google is banning it. It is because it is harmful to Google's bottom line.

At the end of the day, Google owns their servers and can say what is allowed and what is not.


Google is a legal construct. I can’t go have a coffee with Google. I can’t get a high five from Google. Google will do whatever our laws say it has to do in exchange for liability protections for its owners.


If you can find a majority to agree you can change the laws. It seems unlikely though. I’m not even sure what you would change the law to be. Current reading of the US constitution says that Google has the same free speech rights that you do.


>Google owns their servers and can say what is allowed and what is not.

Well that's the problem in a nutshell. It ain't good for our society.


They are not there to improve society, they are there to make money, and if you think otherwise you are a fool.

Republicans fought to make companies people and to keep them unregulated, and now they are crying that it has backfired, because forcing a company to publish or block something actually steps on its First Amendment rights.

You won't really have such a platform unless it is done through a well-established non-profit or through government (as long as the government is Democratic and checks and balances work correctly).


What do you lose by not being able to post or upload files on a Google-owned server?

People act like being banned is a life threatening event.


Not "like life threatening" - Bad for our society.

In that free public conversation is centrally crucial to the sanity of our society.

And having that public conversation controlled by a profit-seeking entity is definitely detrimental to that conversation. And thus detrimental to the sanity of our society.

And an insane society is obviously all kinds of threatening.


Again, Google is not stopping you from using your speech.

If they ban you, they are telling you that they don't want you on their private property.

They don't take away your internet connection.

When did society need Google, or Twitter, or FB to function? It never did. Those are simply three websites. There are literally millions more.

If this site banned me, I would lose literally nothing. If Google Drive banned me (assuming I had an account, which I do not), I would lose literally nothing. Same for FB. Same for Twitter.

They are not required for a functional society.

Why is that so hard to understand?


The degree to which Google's service is popular is the degree to which it "stops me from using my speech".

Because the public conversation requires a public. Right?


That is not even close to true.

They stop you from using their services, that is it.

You have no right to access popular private servers.

You have no right to a conversation because everyone else has the right to ignore you if they so choose.

Again, private companies cannot be forced to give you server space. That pesky First Amendment thing and all.


Show me where you have a right to access a popular server that is not owned by you.

Show me where you have the right to have a conversation. That would mean that others could be compelled to talk to you.


Oh, but it is true.

If their service is popular, then the public conversation is to that degree affected. Clearly.


The truth of the matter is that the right-wing userbase of HN is deathly scared of being marginalized in wider society. Seeing things like Parler getting kicked off of AWS, Facebook's fact-checking moderation, and now this makes them scared.


That would be a problem, except that no one gets booted for being conservative, so it is not really a problem.

The top performers on FB are conservative pages for crying out loud!

They are afraid of a lot, which is the problem. It is irrational fear, based on ignorance and hatred.

Parler earned their ban 1000 times over. When a site gets shut down by its host because its users were openly inciting violence, that is not a problem.

I have yet to see someone get booted for following the platform's rules and merely being conservative. People from all over the political spectrum get banned every day.


I disagree. I think the algorithms are fundamentally immoral because they promote content that gets "engagement", which includes, and in many cases prioritizes, content that people engaged with because it provoked a negative response. Rather than pushing good* content, it prioritizes lowest-common-denominator, reality-TV, desperate-pundit, fast-food, self-congratulatory, outrage-porn garbage.

*By good, I simply mean thoughtful, high quality, factual, educational, or otherwise uplifting content regardless of politics


I don't think "Good" is unambiguous enough to trust the platforms to promote it. How about simply "related"? Show people the content they've explicitly asked for. If people explicitly ask for outrageous content, then fine, but we needn't force feed it to society.
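
As a rough sketch of that difference (topics, titles, and scores are made up; I'm assuming each item carries a topic label and a predicted-engagement number, which is itself an assumption):

    # Hypothetical feed data; everything here is invented for illustration.
    requested_topics = {"woodworking", "piano"}

    candidates = [
        {"title": "Dovetail joints by hand", "topic": "woodworking", "predicted_engagement": 0.20},
        {"title": "Outrage of the day", "topic": "politics", "predicted_engagement": 0.65},
    ]

    def related_feed(items):
        # Only show content on topics the user explicitly asked for.
        return [i for i in items if i["topic"] in requested_topics]

    def engagement_feed(items):
        # Rank purely by predicted engagement, which counts rage-clicks the same as genuine interest.
        return sorted(items, key=lambda i: i["predicted_engagement"], reverse=True)

The engagement feed puts "Outrage of the day" on top even though nobody asked for it; the related feed never surfaces it at all.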


Algorithms don't work like this, though: content that feeds outrage disproportionately outranks content that doesn't.

Algorithms don't discriminate "content" by its actual content: they keyword-match and look for clicks, and they build a pretty perfect radicalization pathway more easily than they build a discourse [1].

You have probably experienced this: almost everyone has had the experience of wanting to see some particular YouTube video but opening it in an Incognito tab (or just avoiding it) explicitly because they know the topic will prime the YouTube homepage to fill with nothing but things they don't want to see.

[1] https://www.rollingstone.com/culture/culture-news/youtube-fa...
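
The Incognito-tab effect can be sketched as a naive click-feedback loop (the topics, weights, and multiplicative update are invented for illustration and are not how YouTube's recommender actually works):

    from collections import Counter

    # Start with equal interest in every topic; all values are made up.
    topic_weights = Counter({"music": 1.0, "cooking": 1.0, "gardening": 1.0, "outrage": 1.0})

    def watch(topic, boost=3.0):
        # Each watch multiplies that topic's weight in future recommendations.
        topic_weights[topic] *= boost

    def homepage_share():
        # Fraction of homepage slots each topic gets under proportional allocation.
        total = sum(topic_weights.values())
        return {t: round(w / total, 2) for t, w in topic_weights.items()}

    watch("outrage")           # one curiosity click...
    print(homepage_share())    # ...and that topic now fills half the homepage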



