I stopped letting Google filter my world when I saw that it had de-indexed conversations between extremely well-credentialed scientists when they came to unpopular conclusions regarding Covid policy. This story appears to confirm that Google sees ideological influence as part of their core mission. I don't really object to that when the ideology is about transparency, neutrality, objectivity, so this isn't anti-ideology. But I do object when it's about censorship, advocacy and feelings, because my own ideology is pro-Enlightenment. So I've de-Googled, and I will be ready to de-Kagi, de-Zoho, de-LineageOS, etc. if they become similarly captured. I just hope that ends with me still having access to modern tech.
Your position is still kind of anti-ideology, because transparency and neutrality don't force a perspective; they just maximize available information. So there is a stark difference from how Google actually behaves, since transparency delegates decision making to the people who should be making those decisions. And people will do so with different results.
The thing google really excels at is questions like "what bakeries near here are open on Sunday after 12:00h".
Sure, I can go to 15 individual websites, select the specific branch I want (if I got another map service to spit out names of bakeries) and hunt for their opening hours in 15 different styles of menus.
Google is just really good if it can sell you something. And sometimes, I need something to be sold to me.
DuckDuckGo (which uses Apple Maps) works just fine. Search for bakeries and observe that every listing has a closing time or notes that it's already closed. Bing Maps works just fine too. Search for bakeries and select "late lunch" in the hours dropdown.
That's around the time that the Google Now feed became blatantly opinionated. It would find articles about things I'm interested in that were also related to political hot topics, and if I clicked the button to tell it I wasn't interested in the hot topic, the only option was to tell it I wasn't interested in the topic I actually am interested in. Even though there was plenty of content about that topic that wasn't being tied into the zeitgeist.
> This story appears to confirm that Google sees ideological influence as part of their core mission
That's confirmation bias. I could make the same argument going the other way: that Google profits off misinformation by promoting content that is demonstrably false and has caused harm in the real world. BTW, scientists can have opinions too, and such opinions can be wrong. What opinions were expressed? That closing borders doesn't work? Yes, once there are multiple spreading points, restricting movement on an international scale doesn't work and would be unpopular due to the economic harm it brings. That masks don't work? Yes, they don't work because to be used correctly you literally have to wear them 100% of the time when you are outside of your bubble, something the public isn't very disciplined about.
Treating content without bias and having some things you don't like crop up next to the things you do like isn't "promoting" anything one way or the other. The moment you start meddling with boosting or deboosting things, you get into censorship territory. This is what the people fixated on "misinformation" or "disinformation" or "malinformation" or "information I don't like" don't get. Doubly so when they act like their favorite scientists, journalists and pundits are always right. Speaking of confirmation bias...
And speaking of masks as well, recall how early on in the corona debacle, The Science said that they don't work... :)
The masks that were "popular", like cloth masks and others that weren't up to the N95 standard, didn't work because the virus is aerosol. Not all masks are equal; that's what the science has discovered.
Also, not being associated with certain content is the purview of each individual and organization. The EU is an example of this: it tells companies like Twitter that they still have responsibility for their users' content, so they must moderate their site.
The early days of the pandemic, when everyone was actually most scared, had authority figures telling us not to wear masks and that the virus is not airborne, just droplets (so it was important to wash our hands and anything we bought). The idea that the virus isn't airborne persisted even after they started admitting that masks are critical.
And regardless, any mask of any kind helps. N95 is much better, but even a basic cloth mask will help reduce the amount of viral load you breathe out or in. Especially if both the person breathing out and the one breathing in are wearing them.
The issue is that we didn't know the virus was aerosol ("airborne" means something else). The best evidence at the time was that it spread by droplets. In the case of aerosol, just being in the vicinity of where an infected patient had been was enough. Washing hands has limited effect on aerosol, since it "hangs around" in the air. The complication is that the virus also spreads by droplets; coughing and breathing both spread it. Hand washing was effective against one form of spreading, not all. With time we learned. That's how science works, and expecting anything else is ripe for disappointment.
I distinctly remember US health authorities claiming that surgical-grade masks (perhaps specifically N95s, of that I am not 100% sure) were not helpful, and this led to social media companies deboosting posts that disagreed with that assessment and applying labels basically calling them fake. I don't recall if they were banning people at that point. But I am surprised that so few people remember the constant flip-flopping and goalpost-shifting. I would think that anyone paying attention would be very wary of "informationisms" as a result.
By the way, questioning the usefulness of cloth masks after that period would get you shunned or banned on many networks well past the point where we had ample evidence to prove that they were ineffective, so even ignoring the initial screw-up, there is plenty of easily provable information that was systematically censored by Internet information brokers.
I do remember this, though finding an article might be difficult or impossible at this point.
My recollection is that very early on (like the first couple of weeks) Fauci said that masks didn't work. Then later on once there were plenty of masks available, he admitted that he had lied because the medical personnel and others needed the masks more and he didn't want people to go out and buy/hoard all the masks. Basically the "noble lie" theory[1].
So I don't think there was any science saying masks didn't work. It was just that the person who is supposed to be our trusted head guide and Mr. Science himself is a consequentialist more than a scientist.
Now for all the people (like Google) who censored posts and labelled them misinformation, etc based on Fauci's noble lie, I would cut them a little bit of slack the first time. But once he admitted to the fib, that should have been the end of his credibility. If he was willing to lie before, why wouldn't he do it again? Especially when it worked out perfectly for him?
Can you pull the article? Because I remember several times repeating to others that it wasn't that masks didn't work, but that the public didn't need 'em, since social distancing should be enough. N95 mask production at the start of the pandemic wasn't able to supply everyone who wanted one. Nowadays, there's sufficient production to do that. That was the whole thing.
> So, why weren't we told to wear masks in the beginning?
> "Well, the reason for that is that we were concerned the public health community, and many people were saying this, were concerned that it was at a time when personal protective equipment, including the N95 masks and the surgical masks, were in very short supply. And we wanted to make sure that the people namely, the health care workers, who were brave enough to put themselves in a harm way, to take care of people who you know were infected with the coronavirus and the danger of them getting infected."
In other words: stay inside people rather than wasting masks that are needed elsewhere. In his position, I would have done exactly the same thing. Do not use masks, stay indoors. Managing public perception of safety is hard yo!
That's as far as I'll go since I am about to head to work, and I've spent enough time playing the game where I'm told to dig for articles by people who don't bother doing it themselves, so hopefully this is satisfactory. The deletion should be telling, there's a lot of retconning and memory-holing going on unsurprisingly.
Bullshit. Google is telling them to remove content if they want to comply with Google's advertising T&C, otherwise the ads won't run and they won't make money. That's not censorship; if you're running an ad-supported site, you have to run ad-friendly content. If Google is paying you to run their ads, you're their piper, and you play the tunes they call.
That is censorship. Sometimes large businesses remove their advertising from a paper publication because they disagree with the reporting. When that happens, the whole industry rightfully calls it censorship and abuse.
> If Google is paying you to run their ads, you're their piper, and you play the tunes they call.
You just described censorship in crystal clear terms.
No, that is not censorship. You calling it that does not make it so.
Consider the alternative: Either morally or legally, Google and/or their advertisers have to run ads on sites that they disagree with, thereby supporting them both financially and reputationally. Where, in that scenario, is their free speech?
You're free to say anything you want. I'm free to not endorse it or support it. So are Google and their advertisers.
(You might make a case for actual censorship if Google refused to index such pages.)
It is censorship, even though they have the right to do it and even if they can have justified reasons for doing it.
During war, letters home from soldiers and even generals are read by a censor and censored, so that sensitive information cannot get to the enemy. That is legal, that is agreeable, that is still censorship.
In this case, Google even specifies what articles need to be removed for continued advertising, so it is crystal clear censorship.
The definition most of us use for censorship is "I prevent you from saying something." What Google is doing is "I decline to help you make money from saying something." That is not at all the same thing.
Prevention is one way to censor. Prohibition is another. Suppression is a third. In this case it is suppression by economic means. There are degrees of everything, including censorship. Google withdrawing ads because they don't want to be associated with a publication is a different thing than Google sending dictates on exactly what is allowed to be said for continued business.
Is Google breaking any laws? Probably not. Are they practicing censorship? Yes, they are.
> This story appears to confirm that Google sees ideological influence as part of their core mission.
Google isn't threatening to deindex them, they're threatening to pull ads on the site. This is likely not because Google-the-ad-company cares very much but rather because their advertisers are very sensitive about what kinds of content their brand is associated with.
Additionally, what is most likely triggering Google's let's-not-scare-off-advertisers alarms is not the articles themselves but rather the comment sections. For example, one of the articles they call out as being flagged falsely [0] for vaccine misinformation and hateful speech doesn't mention vaccines in the body but does have comments that would get flagged to death on HN, one of which mentions vaccines in a hostile way.
So this really isn't a case of Google censoring Naked Capitalism's content, it's a case of Google's advertisers not wanting to be associated with an unmoderated right-wing comment section. We can discuss whether that is broken (for example are the advertisers less skittish about unmoderated left-wing comments?), but it's not as clear cut as you make it sound.
What an idiotic, assumption-laden response to a fairly detailed explanation of another's reasoning. There are many demonstrable instances of Google having de-ranked and blocked content that doesn't fit certain political narratives, and not always in reasonable ways. During the pandemic in particular they abounded, yet your best response is to assume the most idiotic possible reasoning about a very specific public figure and let that symbolize the rest.
Okay? Google is a private company. Would you prefer if Fox News was forced to broadcast liberal points of view? Or maybe the Catholic Church should be forced to give sermon time to people who worship Satan? Why should Google be forced into supporting a political ideology they don’t support?
> Let me guess, those well-credentialed scientists were named Dr. Jordan Peterson?
The most credentialed scientists speaking against the mainstream in Covid policy were:
Jay Bhattacharya, Stanford; Martin Kulldorff, Harvard; and Sunetra Gupta, Oxford.
They wrote the so called Great Barrington Declaration [1].
They are highly credentialed scientists in epidemiology, and they are professors at some of the world's top institutions. Google, Facebook and pre-Musk Twitter did wrong when censoring them. But make no mistake: their Covid views were, and are, against the mainstream. For example, very different from [2].
Just because you hold the minority opinion and get censored for that, doesn't necessarily mean that you were right.
I don't think Jordan Peterson said much anything about Covid when the situation was ongoing? I think Jordan Peterson only started to comment about Covid policy much later, in 2023 and 2024.
Ugh, no. Such things are very bad advice. Healthcare in most cases operates very lean (partly due to the privatization of healthcare), and it would have killed more people if resources were diverted from unavoidable health events to an avoidable one. Every time a physician has to spend 30 minutes with a patient with a severe cough that could have been avoided, that's time that could have been reserved for other patients. Every call to 911 would put strain on our health services.
Also, let's not ignore the economic consequences for poor families of potentially losing breadwinners due to bad luck, as well as the lost productivity when, instead of working from a safe environment, people get sick.
Your own links point out how the Barrington Declaration was bad advice. In particular, it ignored the impact of its goal of fast herd immunity on the capacity of healthcare systems.
The libertarian free market think tank that sponsored the report should be more than happy with the freedom of private companies like Google to do as they please with their speech. Google is a private search engine, and they can index whatever they want. An omission of a website from their index is their right.
> Your own links point out how the Barrington Declaration was bad advice.
Yes, also my personal opinion is that the Great Barrington Declaration is misguided, wrong. And they are the minority opinion. But still, we have to admit that the people behind it are highly credentialed in epidemiology. There are other highly credentialed epidemiologists who hold different views. But there is no strong unified consensus in epidemiology on Covid policy.
Censoring university professors, even if we disagree with them, is still wrong. I guess I wouldn't make a very good libertarian.
Why? The guy's credentials are in psychology as far as we are aware. Was he speaking about the isolation during the pandemic? Maybe he had a point there. Was he talking about anything else? Where's his research on the topic?
In the pyramid of evidence "expert opinion" is at the bottom for a reason.
No, it's because nobody was talking about Jordan Peterson. There are other well credentialed scientists who were censored, read the sibling-parent comment. Bringing him up as a scapegoat and then using that to mewl about how people are babies for being concerned about disenfranchisement of opinions is bad faith at best, a straw man argument, and inflammatory.
A) Advertisers requesting content changes before showing advertising is not forcing you to not have that content; it's just saying you have to make the changes if you want their advertising. Why support Google's ad business?
B) Here's a spreadsheet of complaints - going from "if we look at this one last entry, it's not right" to "Google's demand is capricious, arbitrary, and demonstrably false" is quite the hilarious leap.
Based on its own logic, I think I now have more than enough bad-faith argument examples to consider this site "capricious, malicious and demonstrably false".
30 years ago all respected news organizations had a strict separation between ads and content. The ad side was not allowed to tell the news side what to publish. Either you buy an ad or you don't, but never would the ad side tell the news side why someone pulled ads.
Manufacturing Consent appeared almost 40 years ago, and it complains about the exact same kind of influence. News organizations have always been highly dependent on ads (at least the ones that weren't dependent on being some magnate's vanity rag), so the reality is that they have always been deeply aware of how to avoid making the ads people too angry with the news they publish.
Maybe today they are more brazen in this intermingling, but it has always been there.
The real difference is back then they would push back: do you want to reach the people who follow us or not.
Of course, back then ads were sold in house. You didn't have a powerful middleman who could afford to say no to just you. (There were such middlemen, but they were not as powerful, since most ads were sold in house.)
I had the same initial reaction as you, but steel-manning I think it makes sense that they wrote it awkwardly. I still think they're being overly dismissive and reactionary, but given this affects their income they are probably a little justified in receiving this bot-driven algorithmic bad news poorly.
I'm guessing they mean something like, "let's look at the case with everything in the book on it and see if it's valid. It's not valid, therefore this is capricious and arbitrary". For that to work, I'm also assuming that they aren't software engineers and don't understand how ML and Google work. They might even be thinking that there is a human reviewing their site and giving that determination. If you think that it was a human, I could definitely see where you'd find it capricious and arbitrary.
But on that note actually I just touched one of my own nerves. Maybe we should be holding them to the "human" standard. I am absolutely sick of living in the algorithmic world where some algorithm makes a decision about me and I'm stuck with the fallout of that decision with no recourse. I think we need to start considering bot activity from a company as the same level of liability as a human.
> I'm also assuming that they aren't software engineers and don't understand how ML and Google work
NC has superior tech journalism to most dedicated "tech journalism" outlets. Yves et al have been sounding the alarm on algorithmic rulemaking sans human intervention for some time.
>But on that note actually I just touched one of my own nerves. Maybe we should be holding them to the "human" standard. I am absolutely sick of living in the algorithmic world where some algorithm makes a decision about me and I'm stuck with the fallout of that decision with no recourse. I think we need to start considering bot activity from a company as the same level of liability as a human.
I sometimes take on a very uncharitable view of Big Tech and its focus on scale: they want all the power and profit of technological force-multipliers, without extending any responsibility and humanity for what it brings. I don't think it's unreasonable at all to expect well-resourced organizations like Google to invest in mitigating the externalities and edge cases their scale naturally imposes on the world. The alternative is basically letting a sociopathic toddler run wild with a flamethrower.
Google needs to die, in order for the Internet to live; their near-monopoly is stifling and they have repeatedly proven they are not good stewards of the outsized amount of power they have.
The internet is dead; by the end of the year most of it will be generated, including the comments here.
Both Google and Microsoft are racing to put a 'generate' button on every single input field: emails, forms, documents, etc.
Nobody can index and properly rank the amount of vomit that is spilling out right now. The cost per byte created is basically zero, but the cost per byte indexed is far from it.
Let them force censorship; the transformer will generate whatever they want. Just paste the policy into Gemini's 10M-token model and ask it to rewrite the content accordingly, no harm done.
They don't have to silence opinions; they could forgo the ads. And hosting services. And anyone else upset by their content.
and build their own!
There are some other folks on the other side of the political spectrum who had similar problems not long ago. Perhaps their alternative networks provide an opportunity for you? They've been through this already.
Yes, it's not impossible, but let's be serious for a moment, Google does control an enormous amount of web traffic. Are we really comfortable with advertising dictating which content is easier to popularize? I know I'm not.
Or Google could perhaps not enforce censorship on people wanting to show ads. On the contrary, it should put pressure on advertisers that demand it, because Google is the one in the dominant position.
It would even strengthen their own standing in a lot of ways, so it seems to be overall bad strategy, even if you want to maximize profit.
There's other ways to find websites, like HN and word-of-mouth; I wish people made more of an effort to create and maintain alternative indices of websites. Too many people have gotten used to googling things instead of bookmarks and other ways to find a site.
> There's other ways to find websites, like HN and word-of-mouth;
The problem is that, to get traffic to your site (i.e. have your speech heard), it doesn't help if you use these other ways to find websites. You need others to use these ways. And 99% of them won't. I'm all for advocating widespread change of these habits, but that's a society-wide effort, not something a single website can do, much less one de-listed by Google.
Oh yes, word of mouth is a great way to learn new information in an age when society bases its communication on interconnected technology. It's totally reasonable to tell someone to start up their own indexing service to solve the problem as well.
So your basic solution is that people have to know something without knowing that they know it. How else can you bookmark something if you don't already know that it exists? I'm not sure how you cannot see the problem where half a dozen companies have taken over your entire ability to find something in the modern world. Your glorious solution is to go get the multi-million dollars of funding to build your own alternative that does something different.
There are other search engines besides Google, but go ahead and start using those other search engines and you'll find most of them are the same kind of captured portal serving only content they deem acceptable, rather than giving the end user some default filters while also letting them remove those default filters if they want. The honest-to-God joke of your entire post is that Yandex is probably the best search engine at this point for finding actual uncensored information.
Those same links require prior knowledge to know about, because there is a coordinated effort by indexers to block information they don't deem acceptable. Did I sum that up correctly, or am I still missing something?
And if you had bothered to read the original post, the indexer was misclassifying them and then threatening them based upon that misclassification. I'm not sure what kind of world you want to live in, but I'm very confident you're going to get that world and not be happy with it.
His commenters are saying too many "bad words" that Google Ads doesn't like. It's been like this for over a decade, but from my perspective the "bad word list" has grown out of control, to where people can't have anything more than kindergarten conversations on pages hosting Google ads. I kind of wish they would just ban every word in the dictionary so we can move on from their control of the web.
If HN had google ads, the URL list in this spreadsheet would reach the moon.
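For illustration, here is a minimal sketch in Python of how I imagine that kind of naive blocklist scan working (the word list, the page structure, and the flag_page helper are all made up for this example, not anything Google has published). The point is just that a clean article still trips the filter once its comments are scanned along with it:

    # Hypothetical sketch, not Google's actual classifier: the blocklist,
    # the page structure, and flag_page are invented for illustration.
    BLOCKLIST = {"vaccine", "hoax"}  # made-up "bad word" list

    def flag_page(article_body: str, comments: list[str]) -> set[str]:
        """Return any blocklisted terms found anywhere on the rendered page."""
        full_page = " ".join([article_body, *comments]).lower()
        return {term for term in BLOCKLIST if term in full_page}

    body = "A review of the documentary series Blowback."  # clean article text
    comments = ["The vaccine mandates were a hoax!"]        # one heated comment
    print(flag_page(body, comments))  # flags 'vaccine' and 'hoax'

The article body on its own would pass; it's the comment that gets the whole page flagged.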
> I kind of wish they would just ban every word in the dictionary so we can move on from their control of the web.
You know, when a company doing stupid things causes the entire world to self-censor, it means there's something very wrong with the supposed democracy where it's hosted, and with the other ones letting their people be preyed upon.
This is the one area where Google's massive power on web advertising could be used for good. If they would push back against this stuff to advertisers, the advertisers would (for the most part) just lighten up.
What really happens is that both Google and the advertisers want it to be this way. Many Google employees hate the idea that their search engine "surfaces harmful information," and they define "harmful" in a much more broad way than most people do.
So no, Google doesn't get to use this excuse about "we're just hamstrung by the advertisers!"
This is the danger of hosting your own comment sections: in the eyes of the powers that be (be they companies like Google or other corporations), you are the one responsible for user-generated content. There's some leniency of course, since you didn't write it yourself.
Personal anecdotes: members posted porn GIFs on my old-fashioned forums 20-odd years ago; Google did not appreciate that.
Members posted stills from an unreleased game on my current forums; we got DMCA takedown notices from at least three separate legal people associated with the company that published said game. If we hadn't done the takedowns ourselves, they probably would've gone to the host and had our site shut down.
TL;DR: if you use a 3rd party (host, service), your content has to comply with their terms of service.
Is this not also the danger of letting one advertiser become dominant across the entire internet and control a large part of human discussion?
P.S.: it's a shame they fail to draw the right conclusions: if anything, Musk's "everything app" is going to be even worse than what we have now, for the same reasons.
DuckDuckGo. It is getting better; I rarely need to fall back to a Google search. ChatGPT 4 is good at getting to the specific information I'm looking for, and I can get the source links.
Google has to take into account the fact that coordinated Google bombing to wage information warfare is a thing.
No opinion on any specific action of Google, but I have very strong ideas about the efficacy of assuming anything about the internet is a libertarian unregulated market producing optimal results through natural law.
Rather, the information ecosystem is a wildly polluted commons, polluted by design in all directions, and Google may simply be trying to keep track of the most intentional polluters and poisoners. Can't fault 'em for that.
Also, 'profit' is a very flexible term. If you assume that the only profit is that of Google ads or ad revenue, you ignore the profit that can come from polluting the information sphere in such a way that you can execute a coup and conquer, subjugate and destroy an adjacent country, which is a very old school method of having an effect on the world. And yet, it persists, and uses the internet and Google (where possible) to facilitate the behavior.
....in order to comply with Google Ads, on which the site runs. NC has been around for years, I'm surprised they haven't moved to a patronage model of some sort.
It's worth reading the actual criticism, but the clickbait headline implies that you are talking about Google impacting the site's organic search ranking.
Large numbers of YouTube videos get demonetized (which is what Google is threatening here), it's a chronic problem among educational YouTubers, especially in categories like history and literature.
Google very much leans against conservative content. By this point, big tech's suppression of conservative viewpoints is so plainly obvious and widespread as to be Occam's Razor.
Firstly, they don't, not even a tiny bit. All they care about is "numba go up" and "numba not go down".
The fact that they've rustled your jimmies means they've mostly succeeded in their supporting objective - getting you to think they are actually engaged, and not just playing whatever side serves their ends at this current moment.
Secondly, if you don't think they're promoting reactionary claptrap for clicks, I have to presume you don't use their services all that much.
Anybody's guess whether they're doing it right now.
YouTube has been the first really big experiment in libertarian algorithms serving functional purposes. They tried to get 'engagement'. Succeeded in the sense that there's no rival YouTube in practice, so yay them? Unfortunately the most efficient method of getting humans to provide this engagement is to turn 'em into monsters and cultists, something the algorithm did not account for.
And here we are: the algorithm doesn't get to just prioritize engagement anymore. I honestly do not think Google were serving political ends there. I think they may have drawn the conclusion that seeking raw engagement inherently serves certain political ends, even ends that threaten their own company's existence and freedom, and therefore they are trying to bump up a level and define what is 'good' engagement, without abandoning engagement itself.
Deeply disturbing for a number of reasons ... but also self-inflicted.
I have been a reader of Naked Capitalism since 2008 and it is ironic that google flagged it with that collection of tags: NC is generally left-leaning and progressive and the host (Yves) generally espouses views that are the opposite of what you think when you see those tags ...
... but the healthy and stimulating discussion that ensues would, I suspect, confound a dumb classifier/parser.
Again, ironic.
I believe this is self inflicted because the site - and the publisher - are dependent on ad revenue. I have contributed financially to the site in the past but it seems not many others do and so they are at the mercy of the ad network and whatever constraints come along with that.
I'm going to make another donation today but the problem will persist until she can find a way to divorce herself - and her site - from the ad network.
Without stating an opinion, I just want to get some of the facts clarified since there are a lot of knee-jerk reactions here:
1) Google isn't threatening to deindex them; they're threatening to pull ads from the site. This is likely because advertisers are very sensitive about what kinds of content their brand is associated with.
2) The content that is most likely triggering Google's let's-not-scare-off-advertisers alarms is not in the articles themselves but in the rather extensive comment sections. For example, one of the articles they call out as being flagged falsely [0] for vaccine misinformation and hateful speech doesn't mention vaccines in the body, but it does have a comment that mentions vaccines in a hostile way.
Indeed; this is capitalism in its purest form. The linked article about 'Blowback' features user comments containing gems like ‘Grooming little kids to be “trans” or “queer” has outraged many and made things worse’. Advertisers generally do not want their ads seen alongside such polarized and extremist statements, because it could harm their brand (unless they are only targeting that demographic). They pay their advertising platform to manage that.