Google, the Stupidity Amplifier (2016) (gregegan.net)
270 points by Santosh83 on Aug 31, 2020 | 187 comments



I got a good one the other day. I searched "does freezing kill bacteria", and the featured answer came back "freezing food kills harmful bacteria that can cause food poisoning".

I was curious, so I clicked through to read more. It turns out this text was taken from the headline at the top of a PDF. What the Google crawler didn't notice, however, was that the headline actually had another headline above it. The full text read "Myth: Freezing food kills harmful bacteria that can cause food poisoning".


This was accidentally fixed, then. Freezing definitely kills most types of bacteria.


Unless I'm missing a joke or something, I just googled it and it says freezing only makes bacteria inactive.

https://ask.usda.gov/s/article/Does-freezing-food-kill-bacte...


Freezing is lowering the temperature below zero Celsius. In anything made of water, the procedure creates ice crystals that poke the cell membrane, making a lot of microscopic holes that quickly leach out the inner contents of the organism through differences in osmotic pressure. This kills the cell by "bleeding it to death".

As bacterias are cells with membranes, in principle they are killed when frozen. So far, so good...

... but this is just the general case. Not all bacterias have been created equal, and there are thousands of species.

Some are extremophiles and store sugar or other substances that act as an antifreeze, so they can lower their freezing point and remain "liquid" below zero. They can stand much lower temperatures before being killed (by freezing). It's just that they are killed at -25C instead of -5C, but the killer is still a tiny ice dagger.

And some bacteria have evolved an outer protective gelatinous capsule, made partially of "sugar", that is not easy to pierce, so they can alter their osmotic pressure and withstand more osmotic stress.

And finally, some will just die, but not before creating an inner spore devoid of water that can withstand the cold or dryness until rehydrated. No water, no ice.


Sushi. Never eat "fresh" sushi. The reality is that fish destined for sushi, other than tuna, is always hard frozen for a considerable period of time to destroy not just bacteria but a whole variety of parasites. It does work. Do not fall into the belief that freezing is not effective. Freezing is a common part of health codes for good reason. (It is also why I personally stay away from some forms of "fresh" fish.)

https://www.gov.mb.ca/health/publichealth/environmentalhealt...

"Parasite Destruction for Raw Fish: Seafood products to be served raw must: Have been frozen at a temperature of –20°C (-4°F) for 7 days or below –35°C (-31°F) for 15 hours, to destroy parasites that might be present."


But then if you go to Tsukiji fish market in Japan they will serve you fresh seafood raw and this happens probably hundreds of thousands of times a year.


I assume they don't serve salmon there though because of the risk? That's the reason it wasn't eaten raw before Norway got involved.


Just for context, neither of the two most common foodborne bacteria (E. coli O157:H7 and Salmonella) is killed by typical freezer or refrigerator temperatures.


Wow I definitely wasn't expecting such a detailed explanation. Thanks :)


Maybe next time somebody googles it they will find this explanation.


The chance is close to zero, as Google weights by links, and nobody links to comments.


This is why I highly doubt cryogenic frozen brains will be revivable.


The currently frozen brains are totally lost. Maybe a partial image of memories could be recovered when we understand more about the process, but I wouldn't count on it.

But if tomorrow we found a suitable antifreeze in some obscure creature, one that is not rejected by the body and does not damage the cells, that would be a game changer. To be able to bring the body just to the point of freezing, but no further... that would be awesome.

It would be a key to 1) hibernation and long-term space travel, 2) increased life expectancy by slowing the metabolism, which also means slowing cancer, and 3) an increase in the range of temperatures that humans can stand without suffering frostbite and losing fingers.

Thawing would still be a brutal process. Not all cells would survive.


Well, I doubt the revival process too but the freezing bit might be ok if it was done expertly well.

My understanding was that ice crystal length was also related to time taken to freeze.

I.e. freezing faster produces smaller crystals that don't then damage the cells so much.

We have a liquid nitrogen ice cream shop that exploits the shorter crystal length to make "creamier" ice cream while you wait. Not exactly how I'd want my brain frozen, but hopefully the cryogenic experts know a way.


That sounded really expensive but apparently it can be as cheap as $0.50/gal, and about 1 gallon of liquid nitrogen will freeze 1 gallon of liquid cream, and 1 gallon of premium ice cream has maybe $5 worth of ingredients, so I suppose it would probably work out plus or minus an order of magnitude. Neato!


Yeah, liquid nitrogen is not that expensive despite how exotic it sounds.

But it really does need to be handled appropriately. For example, while you can put it in a simple thermos flask you shouldn't be in small enclosed spaces with it. IE elevator/car.

The liquid expands to quite a lot of gas and might force out all the standard air taking the oxygen with it.

But mainly it is about 'freeze burns'.

My wife (a science lab manager) cringes at the lack of safety glasses and gloves at the ice cream shop.

They plonk ingredients into an open top mixer/blender and then pour the liquid nitrogen in while it is running!

Edit: tweaked wording.


If we can clean up old images surely we can fill in the gaps in our brains with ML :)


Totally great and precise.


Apart from the double plural "bacterias". (-:

"Not all bacteria are created equal ..."


Thanks for the correction. I borrowed the Spanish plural.

Trivia: Spanish distinguishes between "una bacteria" (one bacterium), a feminine noun in Spanish so it ends in "a", and "una colonia de bacterias" (a colony of bacteria). "Bacterias" would be the correct plural in Spanish. English works a little differently here.


English still prefers using the Latin and Greek plurals for words borrowed from, or indeed inherited from, Latin and Greek. "medium"/"media", "datum"/"data", "stratum"/"strata", "stigma"/"stigmata", "bacterium"/"bacteria", "schema"/"schemata" ...

The exceptions are things like

* mixed modern coinages (e.g. "television", a Latin suffix with a Greek prefix),

* divergent modern coinages ("bicycle" is not Greek "κύκλος" nor Latin "cyclus", and was taken from French),

* words not attested in the plural in the original language (e.g. "virus", only attested in the singular),

* words not actually singular in the original language (e.g. "ignoramus" is a verb in Latin, first person plural), and

* things that have since diverged (e.g. "hippopotamus", where "ποταμός" meaning "river" is "-ός" and hence "ποταμοι" in Greek but the word that English has came via Late Latin).

English generally pluralizes these things the Germanic way with "-es"/"-s", although pluralizing "hippopotamus" as the Latin "hippopotami" is well attested, and "bicycles" is the French plural.

Getting English speakers to remember "spaghetto", "panino", "confetto", "graffito", "paparazzo", or even "die" (singular of "dice") is somewhat hard. (-:


To be completely safe, best go with the quadruple plural, bacteriæses.


Which, I'm assuming, is why all sushi in the USA must be frozen and then re-thawed before serving.


That is for roundworms. It's a different problem.


Really!?! I had sushi at the bar many times and it was prepared right in front of me. Maybe you meant ingredients?


Yes, if you had sushi in a restaurant in the USA, the fish was frozen and re-thawed beforehand.

https://cooking.stackexchange.com/questions/76455/in-the-usa...


Sushi-grade fish must be flash frozen to kill any bacteria, worms, etc. in order to be served raw. Most fresh (killed, butchered and served quickly) food isn't really great, either taste-wise or health-wise. Especially if it's not fully cooked.

Most red meats sit out for a bit, either wet or dry aged, which enhances the flavors. Fish will need to be frozen or cooked to a safe temperature.


> Most fresh (killed, butchered and served quickly) food isn't really great

I have eaten raw skipjack tuna that I caught myself and never froze. Tuna, as far as I know, has very little risk of parasitic infection. It was delicious.


Unfortunately, tuna carries a high parasitic load, including parasites that can affect people:

https://pubmed.ncbi.nlm.nih.gov/25461601/

https://www.healthline.com/nutrition/raw-tuna#parasites

You got lucky, and yes, it is delicious, but you're definitely playing health roulette.


It's not the worst fish in this sense. Cod is worse.

The real problem with tuna is that it is an accumulator of nasty substances like mercury, and it is also fast and nomadic by nature. This means that you can find Japanese tuna in California, or in the Mediterranean, and this can be a problem.

Most of the fish parasites just turn into food after being cooked. Our stomach acid will not care. The number of marine fish parasites able to harm humans is very small; we are too hot for them. I could count all the major types on my fingers (I was surprised to learn (here) that Kudoa is a new problem, in any case).


Yeah, it's not necessarily something I'd do again knowing what I know today. Thank you for the links. :)


I knew someone (a French person) who was a massive fan of tartare. Tartare is basically raw meat. He'd go to expensive tartare restaurants (in Paris), where hygienically-raised cows were killed, and the meat cut out and served, all in the same day. He described tartare as being the best thing ever. (I've never tried it.)


Steak tartare is also popular in Hungary, where it's called Tatár beefsteak. I still think it sometimes looks disgusting but I admit I love to eat it once in a while. (And any version worth eating is fairly expensive, since you should be using very good beef.)

The catch, for anyone who's never tried it, is that you're not just eating raw meat, you're eating finely chopped raw meat and butter and egg yolk and onion and toast etc.

Now I'm wondering what animals we do eat raw in America other than oysters and (if you know where to go) other sea creatures that might be alive when served.


I had it once when I was in Germany in 1989. It tasted like raw hamburger. I didn't much care for it but de gustibus non est disputandum.


This is really more common than many North Americans think. Crudo, tartare, carpaccio, sashimi, etc. Cow, fish, horse, buffalo, etc. Some are raw and seasoned, some lightly cured (e.g. ceviche).


I'm French, and yes, "steak tartare" is quite common in restaurants (mostly in what we call a "bistrot"). There is often a raw yolk on top.


In Germany and surrounding countries it is very common to eat raw minced pork meat (Mett) on bread/buns.


I seem to recall this was breakfast fare.


It is indeed. (Or dinner - we do sandwiches for breakfast and dinner)


This reminds me of my favorite classic internet cartoon about tapeworms, "The Worm Within". https://fray.com/drugs/worm/


I had a middle eastern variant (kibbeh or something?) here in Sydney and it was loaded with spices and tasted great. I am not a fan of raw meat for texture reasons usually though.


I have tried it by accident.

It was delicious...but if I ate it too fast I would gag.


Yes, seeing "steak" on the menu and ordering it without knowing it's raw is a very common mistake for tourists in France.


Tartare should be kept below -18°C for at least 3 days, as far as I know. You should never eat it fresh.


The worst part about this is that it's not an honest mistake, but rather a dishonest one: Google embeds as many 'answers' into their SRP as possible so that users do not leave their website.

If Google just displayed links to users, then the user might actually click them, which would result in them leaving Google for another website, which is Not Okay. One of Google's largest efforts is to make sure they're the only website the user stays on, which is why they scrape contents from pages and display them on their SRP, why they want to hide the address bar from web browsers, why they push projects like AMP so hard, and many other things. Conveniently these efforts are also all of the nature that they can just claim they're helping the user get what they want more quickly - as long as they're using Google.

Users also all trust Google a considerable amount, so very few will take any additional action after being instantly supplied with an answer that is scraped from some random website. Google gets 100% of the user flow, has 100% control over what answer to show, and of course can show the user some great ads while they're at it, while denying the source website traffic for their content.


I blame the modern web for this.

Google a question you have, especially something with mass appeal, (health, fitness, media, culture, travel, etc.) and click any of the top links.

It will load slowly, 1000s of tracking scripts, there will be popups, scrolling might break, you will be redirected, more popups, half the screen covered by an ad, the site breaks, etc...

In fact I just recorded myself doing this. Watch as I struggle to read the page, I end up not even able to scroll down: https://imgur.com/gallery/5fdBdcL


I blame Google for the modern web.

Tracking scripts, ads, etc.


Yes. Google has created this monster of SEO optimized spam desperate to show you ads.

If they changed their algorithm to prioritize sites with minimal ads and tracking, load quickly and aren’t bloated, and actually work with a good user experience, incentives would change.

I guess that is what they are trying to do with AMP. But you would think they could do it by just changing their ranking algorithm.


Maybe we'll get there one day. If SEO-bot were as smart as a decently intelligent human, it'd take all that into account. I'm not holding my breath.


I also blame browsers. Create an empty .html file and open it in firefox. Load time 370 ms (according to an addon). It's a local empty file. How can firefox spend 234 ms on DOM processing? Chrome seems to be better here.

Edit: window.performance.timing.domContentLoadedEventEnd - window.performance.timing.navigationStart

Firefox: 411

Chrome: 70 => better, but still 70ms for what?
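
For the curious, here's a quick sketch (pasted into the devtools console) that dumps the full breakdown instead of the single delta; it uses the same legacy performance.timing API as the one-liner above, which newer browsers have deprecated in favour of PerformanceNavigationTiming:

    // Dump every navigation-timing milestone relative to navigationStart,
    // to see where the time goes (DNS, connect, DOM parsing, load handlers...).
    // Same legacy performance.timing API as the one-liner above.
    var t = window.performance.timing;
    var fields = ["fetchStart", "domainLookupStart", "domainLookupEnd",
                  "connectStart", "connectEnd", "requestStart", "responseStart",
                  "responseEnd", "domLoading", "domInteractive",
                  "domContentLoadedEventStart", "domContentLoadedEventEnd",
                  "domComplete", "loadEventStart", "loadEventEnd"];
    fields.forEach(function (name) {
      // A value of 0 means that milestone was never recorded.
      console.log(name + ": " + (t[name] ? t[name] - t.navigationStart : 0));
    });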


I get 66ms on Firefox, with the following breakdown:

    navigationStart: 0
    fetchStart: 0
    domainLookupStart: 35
    domainLookupEnd: 35
    connectStart: 35
    connectEnd: 35
    requestStart: 36
    responseStart: 36
    responseEnd: 36
    unloadEventStart: 49
    unloadEventEnd: 49
    domLoading: 49
    domInteractive: 63
    domContentLoadedEventStart: 64
    domContentLoadedEventEnd: 65
    domComplete: 66
    loadEventStart: 66
    loadEventEnd: 66
So it looks like the bulk of the time is actually whatever happens between fetchStart and domainLookupStart, and between responseEnd and unloadEventStart. The actual time from domLoading to domComplete is only 17ms, which is about 1 frame on a 60hz monitor.

Note: refreshing the page with the javascript console open takes ~200ms on my machine, and certain extensions can make it even slower than that.


There is no "empty .html file" in the browser. It's not just a text file, it's a document tree; there are compulsory nodes that need to be created even if you didn't explicitly write them in your file. <html>, <head> (which must contain <title>) and <body> need to exist - and since you didn't create them, the browser must, after it's failed to parse what you were supposed to have included.
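
A quick way to see this for yourself in a devtools console (just a sketch; DOMParser is the standard browser API for parsing a string as a document):

    // Parse a completely empty "HTML file": the parser still synthesizes
    // the compulsory <html>, <head> and <body> nodes.
    var doc = new DOMParser().parseFromString("", "text/html");
    console.log(doc.documentElement.tagName); // "HTML"
    console.log(doc.head.tagName);            // "HEAD"
    console.log(doc.body.tagName);            // "BODY"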


>>> If Google just displayed links to users, then the user might actually click them

And ironically, this has the effect of making Google Search even worse. Whether someone clicked on a link used to be a signal to Google’s search algorithm that the link (or at least the title and snippet) was relevant. And there’s no quality feedback loop at all for the embedded info boxes.


> And there’s no quality feedback loop at all for the embedded info boxes.

Actually, there is feedback for so-called non-organic SERP elements. You can check whether a user clicked the next result after an element, which is a signal of the element's uselessness. On mobile devices you could also check whether the user continued scrolling after seeing the element. Both signals are used by search companies.
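
A toy sketch of what that first signal could look like over a session log; the event format and field names here are entirely made up for illustration, real pipelines will differ:

    // Toy "uselessness" signal for a non-organic SERP element: how often did
    // the user's very next action skip past the element to the organic result
    // just below it?
    function uselessnessRate(sessions, elementId) {
      var shown = 0, skippedPast = 0;
      sessions.forEach(function (events) {
        for (var i = 0; i < events.length; i++) {
          var e = events[i];
          if (e.type === "impression" && e.id === elementId) {
            shown++;
            var next = events[i + 1];
            if (next && next.type === "click" && next.position === e.position + 1) {
              skippedPast++; // user went straight to the result below the element
            }
          }
        }
      });
      return shown ? skippedPast / shown : 0;
    }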


And what's missing from that assumption is users who see an incorrect info box, assume it's correct, and move on without clicking anything. The effects cancel out.


Yep, but that's also true in the case of classic links. If there is false information on the site behind a link (or in the snippet), which a user perceives as true, there is no way for a search engine to gather that from the user's behavior.


> If Google just displayed links to users, then the user might actually click them, which would result in them leaving Google for another website

But if the user found the answer they wanted from Google itself, why would they stick around on Google's site?

Google might actually make more money if the user clicked on another web site, since chances are that the other site would serve them a Google ad (e.g., via DoubleClick).


Yep, I've also noticed a lot of really weird stuff from sketchy sources embedded in the answers panel.

We forgot that the reason search engines were effectively exempt from libel/copyright etc. is because they were just describing the web: they published the map, but the content was on all the individual websites.

It's a lot harder to argue that model still applies when you've got algorithms building out these micro-articles from increasingly disparate pieces. Somewhere here there's a line between a preview of a link and republication.


That has come to my attention more recently when looking for short answers to small medical questions surrounding a condition—things I'd forgotten, etc. I quickly learned how off-the-mark those can be.

The feature is really clever and works great when it works, but sometimes something is promoted above actually correct information just because it's formatted as an authoritative answer. It's almost slower than just looking for a qualified source in those cases.


I’d argue that they also took on additional legal responsibility when they started censoring search results, whether it’s hard censorship like removing firearm-related products from shopping results or soft censorship like de-ranking right-biased news sources. If you take the entire internet and selectively filter it down, you have the power to craft any narrative as surely as if you were writing the opinion pieces from scratch yourself. It’s hard to call that anything except “editorial power”.


Relatedly, for sufficiently obscure historical figures for whom a Wikipedia entry exists without any depictions in literature, we sometimes have images from the game "Crusader Kings 2" take the spot. r/crusaderkings is full of posts about these incidents, and at the time of this comment this is still the case for Cynewulf of Wessex.


Relatedly, I’ve seen many many EU4 maps of countries come up in google results for those countries, which is concerning as they’re often (as is the nature of the game) inaccurate.

At least with CK2, there’s really no harm done if someone sees a fictional portrait of a long dead minor ruler. With entire nations, the story is a little different.


There are many cases as these. Facebook is another culprit here, pulling up info from various sources and presenting it as authoritative. I remember looking up the Dalai Lama and seeing at the bottom: "Nationality: Chinese" which made me laugh, but on the other hand things like these make many people upset.


That is likely intentional, or a region-specific label to appease Chinese interests.

Google maps will show you different borders and place names depending on where you are.

https://www.washingtonpost.com/technology/2020/02/14/google-...


> I remember looking up the Dalai Lama and seeing at the bottom: "Nationality: Chinese" which made me laugh, but on the other hand things like these make many people upset.

Out of curiosity, what do you believe to be the correct nationality?


I presume the Dalai Lama considers himself to be Tibetan.

Now, perhaps you would consider that we ought to demand people's "nationality" be an actual UN member state which counts them as a citizen and since Tibet isn't a member... That's a bit awkward though because the "One China" policy ensures neither the Republic (in Taiwan) nor the PRC will allow the other to be a UN member at the same time as them.

Also I find that I doubt we demand most people have documentary proof. Not taking their word for it is already a bit weird unless you're a border official. Is Samantha Bee really Canadian? I mean, she says she is so I just assumed...


> the "One China" policy ensures neither the Republic (in Taiwan) nor the PRC will allow the other to be a UN member at the same time as them.

Actually, 90+% of the population of the ROC, including their leaders, would be ecstatically happy to be a UN member separately from the PRC. The only reason they officially have a "One China policy" is that the PRC has made it clear that while they can tolerate the status quo of a rival government with zero prospects of actually challenging them controlling a small piece of their territory, they will very much not tolerate that piece of territory becoming officially independent and providing an example for at least half a dozen others.


It could also be like asking what was Mozart’s nationality? Austrian or Holy Roman?


He was a Salzburger, which was an independent princedom back then.


Is that what he is celebrated as these days? Which country takes pride in this composer these days?


Salzburg is now part of Austria, so..


I agree. Same for the Dalai Lama. He was once Tibetan but today he is Chinese. That’s just how it has worked and does work, like it or not.

Obviously, being in exile, I have no idea what passport he carries or if he gets to issue his own, given that he contests the current government, etc.


The difference is that the Dalai Lama is alive and presumably has an opinion on this.


I think Austria is proud of their Salzburg heritage, and China (CCP) works really hard to tear down the Tibetan culture.


Han Chinese children have to learn Tibetan in school:

https://mobile.twitter.com/DanielDumbrill/status/12992582723...

I think that rumors of cultural oppression in Tibet have been vastly exaggerated


He holds an Identity Certificate[1] issued by the Indian passport office, but no passport.

[1] https://portal1.passportindia.gov.in/AppOnlineProject/online...


Next time I'm in Austria, I'll tell them to tear down their Mozart statues.


Tibetan, because before the so-called "Annexation" of Tibet (the word "occupation" is carefully avoided, even on WP), Tibet was an independent country (at least for a few decades). It's not like Manchuria or a few other geographic regions inhabited by ethnicities different from Chinese. So people born during that period in Tibet were Tibetans.

With the Dalai Lama this point is particularly interesting, because his life is an ongoing conversation with the Chinese (although at times they seem like two monologues).


There are millions of people currently living who were born in the Soviet Union and about 0% of them identifies or is identified as “Nationality: USSR”.

I think what matters is whether the existence of an independent country is contested. No one contests the non-existence of the USSR, but the Tibetan case is not closed.


Maybe because the USSR was an artificial construct very few people identified with? Even when it existed, people were calling themselves Russians, Latvian and so on, not really "Soviet."


Nationality: Chinese*

* verbose explanation why this might be disputed


I think most HN people know Google is essentially a dumb search engine.

We're not surprised when image results have no correlation with the search term. The problem is when other people fail to notice this disparity.

They'll go on to write articles and blog posts based on Google's results. Later someone else will make a Wikipedia page based on the blog post. Google's flawed results will now be confirmed by the Wikipedia page.

A virtuous cycle indeed.


Tangentially related, I had a surprising experience using DDG yesterday. It was... “smart” for lack of a better word.

I was watching an episode of Star Trek TNG and a character was introduced that was a little over the top and vaguely resembled Jim Carrey so I searched “Star Trek Jim Carrey” and one of the top results (after a bunch of YouTube clips of some sort of Jim Carrey Star Trek skit) was the Wikipedia page for the specific episode I was watching. And here’s the interesting part: Jim Carrey wasn’t in it.

My search results took my misguided query and still returned the one correct episode out of the hundreds of Star Trek episodes that exist. I can only guess that enough people have searched "Star Trek Jim Carrey" and stopped searching once they got to information about that particular episode.


IIRC search engines also use the text of the incoming link and surrounding prose. So it is quite possible that enough people on Reddit (or any other discussion site) said "The person in this episode looks like Jim Carrey $link" that the search engine picked up on it.



> A virtuous cycle indeed.

I think that "vicious" is more appropriate here


"indeed" stands for implied /s postfix operator. /s operator, applied to the word "virtuous", turns it into the word "vicious".


Citation needed. Sounding plausible doesn't mean it happens or happens frequently. Most likely there are a small number of cases that fit this description; after all, Wikipedia has 54 million articles. But I won't believe there's a meaningful number of such cases unless I see evidence to the contrary.


Do you have an example of a Wikipedia page where this actually happened?



"a dumb search engine" - Yes that's how they dominated search. By being dumb. All hail ddg (but first let me use '!g")


Not really. I have a draft page on Wikipedia about a motorcycle model that has been stuck for months waiting to be approved because it does not have enough references, even though it provides links to the manufacturer, dealers and motorcycle review sites. No, blog posts are not enough to put something in Wikipedia.


The author has written the novel "Permutation City", which I read after seeing it recommended here on HN a lot. It is a great book. I never thought I would be interested in sci-fi, but this book changed my mind. It deals with simulating consciousness and its implications.

If you enjoyed the post, and the author's website, be sure to read some of his books too!


If I remember correctly, Permutation City is more about the implications of modal realism (or "dust theory" in the book). Consciousness upload is used in the book to enter a simulated reality, but in itself that doesn't explain what happens when the simulation is turned off.


It does actually, and I really like this explanation: imagine you model a human mind. Now you can slow this model down or speed it up; the modelled mind does not know. In the novel, the states of the mind are computed out of order in time while the mind watches reality; the mind can see that it is being time-scrambled, but it doesn't notice anything itself. On this basis Egan concludes that the pattern that forms the mind may well be present in the universe somewhere, and the next pattern, one time instant further on, may be somewhere completely different. But the mind does not notice. So the mind can just exist without its physical basis being anywhere specifically... It gets one thinking.

Honestly, as the other response says, there is indeed a parallel-universe part to it... for which I didn't really understand the relation to the "dust".

I also liked "Quarantine" which made Quantum Mechanics somehow more intuitive to me. Truly a great writer.


> Honestly, as the other response says, there is indeed a parallel-universe part to it... for which I didn't really understand the relation to the "dust".

The other universes count as part of the "dust".

The characters in the book thought it would be only random 'simulations' that accidentally happen sometime in the universe's infinite future that kept running them -- that is, literal dust.

They were wrong. Simulations in entirely disjoint universes also count, and aren't nearly as predictable.


There's a fairly simple explanation, when you think about it.

They arranged to have the simulation in our universe turned off, but not to have all simulations turned off -- there's no way they could do the latter. So they ended up in a simulation run by someone who'd started off by simulating Earth, and who -- more likely than not -- were especially interested in embedded simulations.

Embedded simulations like theirs, but also like the Autoverse, and whoever it is was interested enough that they'd violate the simulated physics of Earth to keep simulating the cellular automaton despite the physics of Earth saying it should be shut off.

Viewed from that perspective, the ending is... not predictable, exactly, but at least plausible.


I would also recommend Charles Stross' Accelerando for an interesting take on the post-singularity universe where "others" can be in control of the simulated worlds. Sorry for the limited details, but I do not want to spoil the ending for others.


Permutation City is my favorite novel of his, but in many ways Diaspora is his most impressive work.

His short stories are also criminally overlooked.


> I never thought I would be interested in sci-fi, but this book changed my mind. It deals with simulating consciousness and its implications.

That was exactly my feeling after finishing this novel. I went on to read Diaspora, another excellent book.

I think I was prejudiced against sci-fi because of how it is generally portrayed in movies and TV, with an emphasis on odd-sounding words and fantastical themes. But Greg Egan makes a fair connection with the current world and expands it with such credibility, you can definitely see some things happening.


I've not read Diaspora (and do not intend to, as it trips several of my phobias), but from what I understand about it, it makes most contemporary spacefaring sci-fi look like depictions of 2001 that featured bodycon jumpsuits and flying cars. Its vision seems very far away and yet far more realistic (aside from the baking of most of meatspace humanity with a gamma ray burst just after full digitization becomes available, which reads with the same kind of authorial vindictiveness as I see in the OP article).


It also includes elements from Permutation City, where multiple "you" may choose what to do in that scenario. Some stay, others wander around the universe in space pods, etc. Everything is narrated with such a natural approach that I found myself not questioning the validity of the science.


That's what I mean. On the one hand: plausible. On the other: terrifying.

And it's just that, from a cosmological perspective, the chances of us being hit by a gamma ray burst after the kind of technological advancements described, but before the capacity to evacuate or shield biological lifeforms arises, are... slim. I figure it was introduced more as a plot device than anything else, the biggest deviation from potential veracity (and a "take that" at anyone questioning said potential veracity, which goes just beyond cheeky to slightly obnoxious).


Yes. Anyone reading this: Please do not take TV and film sci-fi to be representative of book sci-fi. This is indeed generally true, of more than just sci-fi.


Interestingly a current search still shows this, and the root cause is the fact that there are no pictures of Greg Egan available.

This is a fascinating failure mode - Google works on a best effort basis, so it doesn’t have a strong enough concept of “there is no correct answer to this question” or perhaps “my approach is wrong in this case”.


This is one of the many problems with Google nowadays. Instead of returning me the only 3 search results somewhat matching my query, it stupidly broadens my query so that it's able to show me pages and pages of (to me) completely unrelated results. For advertising I guess...

> concept of “there is no correct answer to this question” or perhaps “my approach is wrong in this case”.

Exactly. What's wrong with "We could only find 3 results matching your query. Try searching using some other terms, or try the following related queries."? It's so annoying to sift through the results just to realize that Google has no damn clue what I'm looking for.


> Interestingly a current search still shows this

It doesn't for me. I just get his home page as the top of search results, and a knowledge panel on the right-hand side that appears to accurately describe him (with zero photos), and a button "Claim this knowledge panel" which he could presumably use to correct any misinformation.


Definitely not what I saw - strange that it would be different for different users. Doesn’t seem to be something that would be personalised.


hm, actually, there are pictures of him available. At least to Google's secret heuristic-ML-mashup.


Frankly, I find nothing fascinating here - more like a rookie mistake that nobody cares about - at least not strongly enough to do something about it.


This makes me wonder if Google developers ever get feedback from outside the company. It feels like a feature is released, the developers move on to the next shiny thing, and most feedback (the kind that does not reach a newspaper) is ignored. Probably this is expected when you have many users and most of them are not paying.


I wonder this about a lot of "tier 1" software, whenever I hit easily reproducible, business-logic-breaking bugs in Amazon, eBay, etc...

I just find it utterly unacceptable. I'd drag my developers over hot coal if they somehow released what I see from these trillion dollars companies.

I wasn't able to "add to cart" with my Amazon app, for instance, and things like YouTube and Instagram constantly crash when I'm adding new content. And many banking apps totally freak out when I use a password manager. These are pretty core operations. It's really amazing.

I really don't know how the valley operates sometimes.

I'd imagine they supposedly have entire departments and dozens of people who are supposed to make sure this stuff works.

Sometimes I find it so unbelievable that I try multiple devices to rule out the possibility that it's just me, and it's invariably reproducible on all of them.

Maybe nobody at the company uses Android, or maybe nobody uses their own product? (Maybe people who work at, say, Facebook don't actually like using it... that's understandable, but also unacceptable.)

I really don't know how it happens. Maybe their tests are 100% automated and no human checks it? I don't know.


Maybe their quality control can be improved, but you give no insight into the QA practices they are obviously skipping. Unless you're counting "dragging developers over hot coal" as your advice. In which case, I have serious doubts the system you're comparing them with is actually of better quality.


It's a problem with many developers. They think that because there can be a root cause analysis, they didn't fuck up.

But there are times when something simply shouldn't be, definitionally. As in, if X happens, it doesn't matter what the root cause analysis is, something is wrong.

To give a real life example, if you're driving drunk, the events and reasons leading up to the act don't really matter. They're not without value, but they're not important for determining that someone fucked up.

This is what the poster you're responding to is saying. That the problems are so goddamned egregious that they simply shouldn't be. It's not up to the aforementioned poster to fix the problems, it's up to the companies in question wanting to be better.


Fair point, but that's obviously not true. All the evidence we have points to both Amazon and Google following excellent software development practices. They are even drivers of good practices in certain areas. So, we'd need stronger evidence to show that they are not doing their due diligence before releasing new versions. I'd say that releasing software without tests, code review and incremental rollout would be akin to drunk driving. However, I believe they do all of that and more.

Moreover, I don't know about the many developers you have experience with, but if a developer considers the behaviour to be a bug, I have a hard time believing they don't consider the behaviour to be a failure (i.e. someone made a mistake somewhere). Though, I can definitely see people shifting the responsibility to someone else, especially if there's a blame culture in place. Obviously, that's easier to do once you know the root cause. However, a company will be better off by putting solid QA practices in place and measuring "fuck ups" as a decrease in productivity instead of measuring them as the number of times each developer breaks production.


Maybe the issue is that they are doing too much dev work and having too little contact with real users. You can unit test all your code, but you can't cover all possible real data, so you need to accept user feedback. Some time ago, in one of our products, there was an issue with RTL languages; as a developer it was the first time I hit this concept. I got the report, informed myself, fixed the issue, the ticket was closed, the user was happy. In this case it is clear the developer assumed that they can always find someone's picture; the assumption is wrong, but probably all the TDD and good dev practices won't catch this issue until someone listens to the user reports.


I think they are essentially passive-aggressively telling you something. A public complaints bin would wind up such a useless cesspool of entitled idiots and nuts complaining about "bias" because it doesn't conform to their warped worldview, or about issues that aren't even theirs, like slow boot times. Actually processing that would be a massive Sisyphean task with little actual gain from it. You know the "fire the bad 20% of customers that take 80% of your time" advice? Their numbers at large look far worse, so they just didn't "hire" them preemptively, essentially. Strategically it makes no sense to do so.


Google does offer a feedback button in the knowledge panel. However, I assume they are unlikely to take any actions unless there's at least a certain amount of agreeing feedback.


I am wondering if they also have an algorithm to decide when "a certain amount of agreeing feedback" is reached. But TBH, compared to other search engines, Google is much better in general; my doubts are about the "summary" Google produces, presenting stuff as fact instead of just linking you to the source directly.


^ exactly this.


^ this is exactly the attitude I'm referring to. Let me give you an anecdote to further illustrate.

Last friday my internet went out (Cox) and I called into support. The first thing the guy told me? My equipment is too old.

This is simply unacceptable as a reason why my internet stopped working at 6 in the morning. I suspect it was an attempt to upsell me a new modem, but I have a DOCSIS 3 modem that's under 5 years old. It doesn't matter. Any series of events that results in technical support leading with blaming "old equipment" for the outage is wrong, period. Maybe the equipment went bad, it happens, but that's not what you lead with. Certainly not before troubleshooting.

This is what I mean when I say "definitionally". It doesn't matter what their motivation is. It doesn't matter if it's possible that my equipment died. That employee fucked up.

This isn't about technical excellence or software practices. This is about decision making.


I'm not sure if that employee fucked up. That may be company policy. Support people usually follow a script and they very rarely venture much beyond that. Partially because they lack technical knowledge and partially because it could actually get them in trouble. If I understand it correctly and you were calling your ISP at 6am, I don't think the person you were talking to was a developer. That said, I have no expertise in customer relations, so I wouldn't know if they were following good practices or not in that instance.

I get your point that people make bad decisions, but in software development, those bad decisions shouldn't break the user experience; they should decrease the developer's (and/or the team's) productivity instead. Doing root cause analysis is part of the process to achieve that, as it can give insight into how to avoid that class of problems in the future. That said, if someone is not doing a good job, they'd hopefully figure it out by themselves, but constructive feedback can also go a long way. Ultimately, if you have a good process in place, it should help you make an educated assessment of who is underperforming (e.g.: their PRs take very long to merge, they are full of beginner's mistakes, their code is never properly tested, their releases are always rolled back a few times until they get it right, etc). All of that without playing the blame game on the (hopefully) rare occasion that a major bug is found in production.


To a user "The employee fucked up" and "Management fucked up" and "The developers fucked up" are indistinguishable.

Your blame-free production line won't help you if you're making the wrong thing and/or your UX doesn't help the user.

Example: someone at Google decided that if you make a general area-based query - e.g. vets in SF - street view is disabled.

It doesn't matter why they made that decision. To a user, it's an annoying inconvenience. The fact there may - or may not be - some kind of management rationale doesn't make it less annoying or less inconvenient. Nor does the fact that the code doubtless passes the tests it's designed to pass.

The same applies to Amazon's search options. I don't want to see results for 1.5TB external hard drives or any internal drives at all if I search for "external hard drive 12TB". I also don't want fake reviews, or reviews for unrelated products.

I don't want my own videos with my own music hit with fraudulent copyright claims when I upload them to YouTube. Especially if "my own music" is white noise. [1]

I don't want to have to deal with bugs like Heartbleed, Meltdown, or Spectre. I don't want Excel in Office 365 to fail to respond to double-click select. I don't want my card to be declined for no reason when I'm buying groceries.

I don't want the airliner I'm on to crash because management cut corners.

And so on.

Some of these issues are serious, some are less serious. But what's lacking is seriousness across the industry as a whole. There's an underlying attitude that software is either an annoying cost which can be pared to the bone, or a financial optimisation process, or an interesting puzzle to tinker with. And there seems to be too little deep understanding that it exists outside of a laptop screen, and when it fucks up it causes very real issues for very real people.

[1] https://www.bbc.com/news/technology-42580523


> Your blame-free production line won't help you if you're making the wrong thing and/or your UX doesn't help the user.

Blaming someone will not help either. As you said, the user doesn't care who fucked up.

Some of the problems you listed are still open research areas, so I wouldn't call not solving them a lack of seriousness. And for the street view thing, given that there's no clear right or wrong, the best they could do is look at the data and see which approach seems to improve the user experience. I hope they've done that, but I don't know.

Then, there are bugs in Excel, online shopping, aeroplane systems. Maybe those could be explained by a lack of seriousness, but I don't see how blaming someone would help avoid those issues.

Finally, there are the recent Intel/AMD bugs. They were out there for a long time and it took quite a bit of time and ingenuity to find that there was a security flaw there. I think we can hardly classify that as a lack of seriousness or of understanding that it can have bad effects on people's lives.


This is probably going to come across rude, but I feel like it needs to be said.

Nothing in your post was really relevant to this discussion. The fact that it may have been company policy or that the person wasn't a developer is completely irrelevant to the larger idea. It's literally missing the forest for the trees.

I've been doing software dev for over 20 years and one of the things I'm known for is building very stable systems. Part of the way I do this is by not allowing myself to use these sorts of things as a defense.

When designing and/or building a system I ask: What should not be? Then I build the system to actively look for those situations. Put another way, I don't build systems with the assumption that there are no bugs. Too many developers push the responsibility outside of themselves. As if bugs are inevitable, therefore the fallout from bugs is inevitable. You just do a root cause analysis and move on rather than asking what you could have done differently to avoid the fallout.

I've lost count of the number of times systems I've built have actively refused to do something and then sent out a notification explaining why.

My point here is that you're doing yourself a disservice by insisting that root cause analysis should be enough. Yes it's important, but it's not enough. Some things simply should not be, and they're avoidable.

The problem is that many companies don't really care.


I don't think it's rude to point out someone is missing the point, though I can see why you're considering that given the overall aggressive tone of your messages. Let's backtrack, then. My main point is that scolding an individual for a problem in production is not going to improve the quality of the software produced by a company at all. Do you disagree with that?


Not relevant. You responded to another poster, I responded to that response to try and explain what the other poster was trying to say.

Neither of us said anything about scolding anyone, that's a strawman.


Hm, then is your point that many developers become careless while programming because they are in an environment with safeguards in place to avoid screw ups?


I think the main reason is that they can afford to not care. There is no pressure. They can afford to be inefficient because they know that as long as the core features work, customers will not leave.


Maybe the reason they are trillion-dollar companies is that they care about more important things than obscure or unprofitable bugs and berating their staff.


Because they care about filling their pockets and mostly nothing else, that’s the reason.


True. Apple has bugs that have been posted on their forums for like 10 years that haven't gone away and rack up hundreds and sometimes thousands of comments. They don't care. In some cases they block further comments without ever fixing the bug. They figure something that only affects 1 percent of people, of whom only half will actually get annoyed, doesn't need to be fixed at all. It really adds up, though, when you have hundreds of bugs.


Seconded. I think they don't want an internet where there are zero pictures available of some known figure, and that's why they don't take such a case into consideration.


In large organizations like this you tend to do what advances your own career, not what's useful. Mainly because it's not going to affect Google's bottom line in the short term.

See how many chat/social networking offerings they have done.


In small companies feedback from customers reaches support or (sometimes) the CEO, and tickets are opened for all (most) issues. So do these companies have no support people, or what are those people doing? Their job is to collect this feedback, put it in tickets, and advocate for the user with the developers. Many times I had some feature implemented in a clean, logical way from a coding point of view, but it had to be changed so that customers were satisfied, and not me, the developer.


They have billions of users, it's not scalable to treat each of them individually. So, they take more scalable approaches to identify problems and assess user satisfaction. That means that an individual should expect not to have much influence over the direction the product takes and which bugs are prioritized.


Except Google's approach doesn't seem to serve the needs of any user.

Even search quality has degraded notably in the past 5 years, mainly because of the automated smartness that more hinders than helps.

I used to get answers to my problems; now I have to wade through piles of manure that are only vaguely related to what I'm searching for. And they're even overhelpful. I had to plan trips with Bing Maps because Google helpfully routed me around a road closed in winter. Except it was winter, but I was planning a summer trip...


You can tell Google Maps when you're going to be making the journey. Maybe you couldn't back when you tried. Related to worse search results, I don't know. But the fact that you're finding it less relevant doesn't mean it doesn't serve the needs of any user, just that it doesn't serve your needs. And that's the point. Microsoft pushes people pretty hard towards Bing, yet Google keeps coming out on top; I assume people know about Bing but prefer Google. They only need to be better than the competition they have.


Oh, google is perfect for finding the opening hours of a pizza joint near you.

Unfortunately, it used to be perfect for more advanced searches too. Not any more.

Btw, I don't see where to enter the date when I just ask for directions on Google Maps. Maybe it's available in the phone app, but I use the web page for planning because I can open it on a real screen...


It exists in the web UI too. You need to select the precise time and day, not just the general time of the year, but you can tell it when you want to depart or arrive. See my screenshot below. By default, the dark blue section will have "Leave now" selected.

https://rafael.kontesti.me/screenshot.png

Regarding the results, I feel like I'm better at finding things when I have no idea what I'm actually looking for. However, when I know what I'm looking for, it seems more difficult at times. I can't say it was easier before, though.


The question is: if they don't collect feedback, they don't know how many users are affected. If you break feature X and all users are affected, but you send them to read some FAQ and don't record the reports, then the only way you get informed about this is from Hacker News or a newspaper. Maybe this is the Google and Apple strategy: if it causes an issue that reaches a newspaper, then we investigate it; otherwise we fix only the bugs that upset our developers or bosses.


Most of their feedback probably comes from their metrics. Things like a sudden drop in the number of purchases, an unexpected increase in refinement of certain queries, an increased number of 500s, etc. In order to respond at scale you have to try to detect unusual patterns showing up in your data. That said, they do have some avenues for contacting them. There are feedback buttons in the knowledge panel and in the translations. There's actual support in YouTube (albeit it will take a while until you get to a real person) and so on. As it is, they are likely already getting more feedback than they can process on a daily basis. Bear in mind that not all feedback is useful. A big chunk of the work in any support hotline is wading through the mountain of useless feedback. I can only imagine what that'd be like at Google's scale.

Note: by useless feedback I don't mean that the user is wrong because they don't understand the interface, when it's actually the interface that's not intuitive. I'm saying that people are often calling in trying to solve something completely out of your control. Just talk to anyone in support and they will tell you the craziest things that pop up. One example I've heard was a user who could not wrap their head around the idea that an app on the phone needs internet access to order food delivery. Something like that can easily take 30 minutes for a support agent to handle, for instance.


Let's try a thought experiment. There is a `Feedback` button on the bottom right, imagine that Google developers actually look at them. Let's assume some numbers:

- Once every ten searches there is that "smart" box.

- Once every thousand "smart" boxes a user spots an error and clicks the `Feedback` button.

- There are a hundred developers behind this feature, working 8h/day.

So, according to [the result in a smart box](https://kenshoo.com/monday-morning-metrics-daily-searches-on...) there are 228 million searches per hour. So, to go through all the feedbacks, your developers need to average well over 600 per hour.

An alternative approach to get this estimation: imagine every tenth user reports one error per year. There are [2 billion gSuite users](https://www.zdnet.com/article/google-g-suite-now-has-2-billi...), so intuitively there should be at least as many Google search users. By simple division, your developers would need to go through almost 700 feedbacks per hour.
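
Spelled out, using only the assumptions above (a rough back-of-the-envelope sketch, not real data):

    // Back-of-the-envelope from the assumptions stated above.
    var searchesPerHour = 228e6;    // cited searches/hour
    var smartBoxRate    = 1 / 10;   // one "smart" box per ten searches
    var feedbackRate    = 1 / 1000; // one Feedback click per thousand boxes
    var developers      = 100;
    var workHoursPerDay = 8;

    var feedbackPerDay = searchesPerHour * 24 * smartBoxRate * feedbackRate; // ~547,200
    var perDevPerHour  = feedbackPerDay / (developers * workHoursPerDay);
    console.log(Math.round(perDevPerHour)); // ~684 feedback items per developer-hour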

Having these numbers: how do you think Google engineers should actually react to the feedback?

Disclaimer: I work at Google, but on something not exposed to the outside world. However, we do hit similar scale issues with our users being only Google engineers.


Solution: Hire 10,000 people to vet the feedback and funnel it into the org. Those people vet 6-7 feedback per hour, or 18-21 if you keep the operation running 24/7.

But that would cut into the Profit from the Money Printing Machine.

Saying "We're doing business on a scale that's too big for us to be (profitably) accountable" isn't an acceptable answer.


Imagine Google is a public utility. Would it be worthwhile for society to spend that much manpower to funnel the feedback in a brute-force way like that? To me, it clearly is a waste of money for society, and an inefficient and pointless way to improve the product.

This kind of deontological approach is not very useful unless it's applied to a morally important issue, and even then, a utilitarian/consequentialist approach is needed to cross-check and make sure the deontological approach doesn't go astray.


Very slippery slope, since you can use that argument to justify anything you want.

The bottom line is this: You guys are actively spreading misinformation and profiting from it. That's a bad thing.


Do you have any alternative ideas on how to make this better?


> Probably this is expected when you have many users and most of them are not paying.

Everyone pays one way or another, otherwise Google wouldn't exist.

I leave feedback on Google results and apps all the time. Either no one reads them, a computer reads them and based on keywords pushes them to a queue that reaches a human or /dev/null, or no one in the company wants to accept responsibility for mistakes otherwise they may not be promoted.


Is this article still relevant?

When I search for "Greg Egan", the knowledge panel in the right column doesn't show any photos at all.

And it even has a button "Claim this knowledge panel" that he could presumably use to make corrections.

So is this article just old news, where Google has not only corrected its mistakes but also provided a manual way for people to fix misinformation about themselves?

I'm not understanding why this article is appearing now in 2020.

(Also, a search engine is always going to get some things wrong. Without knowing the rate of errors, a single example doesn't really mean much of anything.)


Highly relevant. Google will monetize fully automated mistakes and you're saying it's OK because the victims can manually clean up the mess later?


No information on the web is 100% perfect or accurate. Even the NYT makes a host of mistakes daily across all its edited articles, and publishes corrections for the most serious ones when readers point them out.

Google's results are largely accurate, and as long as it has a mechanism for correcting mistakes does seem to make it pretty OK to me.

What are you suggesting any serious alternative be? That would actually work at scale?


>What are you suggesting any serious alternative be?

Not erroneously copy stuff?


> That would actually work at scale?

Probably "Not erroneously copy stuff?" is not a trivial thing to do at scale.


Not copying stuff is trivially scalable to infinity.

Seriously, if you can't do something at scale, don't do it at scale. Being shitty to everyone involved and then saying "but that doesn't scale profitably" is not OK.


Yes, because it's still happening to many people other than Greg Egan.


I’m reminded of my time learning a new language, and how little it takes to completely change the meaning of something.

Imagine if you thought you understood 90% of what somebody said to you but you didn’t catch the word “not”; you would have exactly the wrong interpretation. Or what about skipping a modifier like “slightly” or “significantly”? (And don’t even get me started on sarcasm or other things that may not be detectable to all.) Google’s summaries, or any auto-summarization, risk inventing a new conclusion that was never part of the original.

This is a cornerstone to critical thinking as well. If you spend your time trusting one-sentence tweets and other shallow writing, are you also asking yourself what hasn’t been stated? For example, I could say: “Engineers quitting Google amid latest executive actions”; the reality might be “it was exactly two engineers”, “their reasons for quitting were unrelated”, etc. It is extremely easy to create misleading summaries.


translate.google.com is somewhat prone to misnegation. Basically, one word doesn't change the score that much, and there are things like differences in word order that overwhelm it.
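For intuition, here is a minimal toy sketch (my own word-overlap score, not anything Google Translate actually uses) of why a single dropped negation barely dents an overlap-style metric even though it flips the meaning:

```python
# Toy word-overlap F1: dropping "not" costs almost nothing in score,
# despite reversing the sentence's meaning.

def unigram_f1(reference: str, hypothesis: str) -> float:
    ref, hyp = set(reference.lower().split()), set(hypothesis.lower().split())
    common = len(ref & hyp)
    if common == 0:
        return 0.0
    precision, recall = common / len(hyp), common / len(ref)
    return 2 * precision * recall / (precision + recall)

reference  = "freezing does not kill most bacteria"
misnegated = "freezing does kill most bacteria"   # one word missing, opposite meaning

print(unigram_f1(reference, reference))    # 1.00
print(unigram_f1(reference, misnegated))   # ~0.91 -- tiny penalty for a huge error
```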


I haven't even read it yet, but this website is exactly what I'd expect all sites to be like: customisable to your taste (and even your needs, for the visually impaired). And this is the first web page I've ever seen that does that since... the web began, I guess.


For better or worse, some US regulations have made it a requirement for certain websites to have accessibility functionality. That has spawned a small ecosystem of plugins that aim to fulfill the regulatory requirements. Do they make for more accessible websites? Judging by the plugins I have seen so far, I really doubt it. But it is at least a springboard to work from, and a kick in the butt for people who truly do care and are talented enough to pull it off properly.

That said, these plugins, and their failings, are a reminder to me that accessibility can't be an afterthought; it needs to be baked into your product's UX.


Fictionpress and Fanfiction.net are also similarly customisable. You can change para width, font-size, background/foreground colour, and the font too (about 10 different kinds).

Font-size is trivially done by the user zooming, but the other settings probably ought to be implemented by more sites, especially text-heavy ones.


Do people expect google to be run by Magic?

Google, Wikipedia, even Facebook, Twitter and Reddit have been a plus to modern human thinking.

Just coz they fail somewhere does not mean they are bad.

People don't remember how much things sucked before them. Also, computer vision and AI are new things.

"And your mission was to organise the world’s information. How’s that working out for you so far?"

It's working out so well. The Internet is just 40 years old and we are here, for fu*k's sake.

Some people have zero bigger-picture sense.


> Do people expect google to be run by Magic?

I would argue that most people actually do. They don't understand how search engines work. It's just a box where you put what you're searching for, and it provides answers. How does Google get those answers? They have no idea. Currently, I have only a remote idea of how Google provides those answers, and I've implemented some search myself back in the day, so how could nontechnical people even begin to understand it? They don't even want to; they have better things to do than to find out how their technology works, they just want it to work.


> Just coz they fail somewhere does not mean they are bad.

But looking at the bigger picture is not especially favorable to Google et al.:

Back in the 1980s, technology was advancing at a dizzying pace, and the world seemed on its way to peace, freedom, and brotherly love.

Today the world has a dis/misinformation crisis, and authoritarianism is rising.

Even if none of that is the fault of Google and social media, it casts doubt on the benefit of today's internet. How great can it be, given the state of the world?


I think you may be missing the forest for the trees, no offense intended.

The world is still on its way to peace, freedom and love.

What has happened is that the new communication paradigm surfaced truths that were there before but not acknowledged. Violence against blacks didn't start with Rodney King and didn't end with Jacob Blake. It was there before, and it got pushed into the spotlight precisely by the advancing technology of the Internet: video cameras, cell phones, search, video distribution, forums, etc.

We are in the midst of the most important revolution of humankind: the knowledge revolution. Like the agricultural and industrial revolutions before it, it's going to turn society on its head, and it won't be pretty. But the society that follows will be way better than the one we have now.


That is an attractive theory, which I also held for a long time. Sadly, as I see it, the world is not playing ball. There comes a point where you have to face the world as it is, not as it is supposed to be.

I still think technology has as much potential for good as it does for bad. I hope our industry will adapt to tilt the balance back toward the former.

I'll leave it at that. I wrote and deleted several longer replies to your comment, but they all wound up sounding even more condescending :)


> Do people expect google to be run by Magic?

Some would argue that Google wants to be seen as magic driven, or at least AI driven (which, to most, is the same as magic).

> Google, Wikipedia, even Facebook, Twitter and Reddit have been a plus to modern human thinking.

These organizations have been hugely beneficial for finding or conveying information, which is different from thinking. Of the five, only Wikipedia can claim to have reasonable (albeit not always reliable) systems for vetting information in place. The others hide behind automation while concealing how the automation works.

I'm not going to claim the world is worse off for them, nor am I going to claim it's better. For the most part, it is a trade-off. While organizing the world's information is important, assessing its value is also important. That's difficult to do when the priority is to do so at large scales and not all information is provided in good faith.


I was reading some articles from when the internet was in its early days, and they were so full of hope and excitement. They were looking forward to how well everyone could be informed. There was one anecdote where someone had a less rosy view, and he worried that access to a vast amount of information would lead to people becoming very polarized. This really struck me because I often feel like I can look something up and find 10 good articles on both sides that will really vindicate either view. I think in some ways it’s human nature to go with the view you prefer, and then if you find some articles, you can really trick yourself into feeling correct or very certain about something, despite the fact that the information you found might be false. This is something I struggle with a lot, and I have trouble figuring out if a source is good. Your comment about assessing the value of information and information not being provided in good faith struck a chord.


> "Some people have 0 bigger picture scene."

You should read some of his stories, he's pretty good with the big picture.


I can understand the author's frustrations. Things like this could cause real world issues because people put too much faith in Google giving them the correct answer.

I think it is more the fault of individuals, and to an extent of the institutions teaching those individuals, which have a responsibility to teach about online credibility in our time. I.e., don't just copy information from Google: click the link to verify the source is legitimate, and look for additional confirmation of the thing you want to report. Of course, if one "legitimate" writer or website gets something wrong, it exacerbates the problem for everyone down the line.

Maybe Google should have a warning somewhere that the information has not been vetted for accuracy?


The problem is that without strong evidence you should assume that all information you read can't be trusted. This seems to be the root cause of all of the election meddling, fake news and similar social issues that are "popular" these days. The only real solution is to teach critical thinking. However teaching critical thinking to the masses is a very hard problem.

So instead of funding schools we just spend money passing laws that verge on censorship.


Yeah, exactly. There was something I was reading/watching the other day that was advising that you not trust information unless it comes from a credible/legitimate source. But isn't that the problem? I mean, a large percentage of the US thinks that Fox News is a credible source (including our President...). People think they're doing research by watching videos on YouTube.

I don't really know how you fix this either. It seems to me that it's not whether you're ABLE to think critically that's the problem, it's whether you WANT to. It feels like a large portion of our population just doesn't want to put in the effort.


Exactly. But also remember that watching YouTube videos can be good research. The medium isn't important, but you need to be careful that the content is coming from a reputable origin and cross-reference with independent sources. Typing the fact that you want to be true into the search box and watching the first 3 videos is not good research.


Your argument collapses if one notices that Google Search used to be a lot more accurate in the first 10-15 years of its operation. One could say they were actually trying to organize information back then.

The damage that has been done in the last 10 years is hard to miss. Search accuracy went down the toilet as did ranking of results, they started rewriting queries en masse and also censoring.

I hope a competitor arises that steals their lunch. They're so bad these days that it can't be very hard to achieve.



NB: That points to what turns out to be an old version of a continuously updated page, from 2013. Actual content, which was offline at the time, dates to 2016 at the latest. Apologies for any confusion.


I like science. My kid is two.

Major disappointment when Google('rockets') results in only sportsball.

As in, zero indication that the word 'rocket' is to be associated with anything else.

So I used duckduckgo. Same result.

1. Bitterly disappointed in the state of search engines today.

2. Reinforced the idea of teaching critical thinking to my child...


On a related note, how do I block Google from showing me the Answer blurb when I search for something, especially on desktop browsers?


I feel the same about Google, and other search engines as well. Are the results they give pushing us to come together as a people, towards common goals?


I've come to think that most of the tech on the Internet acts as a stupidity amplifier; consider Twitter and Facebook.



Imagine taking headlines as concrete facts. Oh....


AI cannot understand irony


Published 2012, but updated since then... I think it should say 2016


Entitlement... There are 189 Greg Egans on LinkedIn alone. No wonder Google doesn't show only his face. If his claim is valid, the result right now seems pretty OK to me...


Why entitlement? If you just searched for Greg Egan and got a bunch of images of various Greg Egans, that would be fine. But Google is pulling together info as a bio of Greg Egan the science fiction author, presenting it as authoritative, and including a picture: not just someone with this name, but that specific person. The author being annoyed that Google is falsely saying this is what he specifically looks like doesn't sound entitled to me.


But it doesn't! Just look him up right now and you will see his bio doesn't include pictures of random people. There are pictures of his book covers and pictures of other people users have been searching for (probably other authors and people loosely connected to him). Then in the middle part there are indeed fairly random pictures, but these have nothing to do with his bio. They are just Google results.


Because this dude is like "I filled in a million reCAPTCHAs, I deserve better free results", when AI research is so new and so slow.

This dude thinks that just coz a human can do it, it must be easy, and forgets the billions of years of evolution that went into making our eyes, or the massive amount of processing our brains do to see an object.



