
The worst part about this is that it's not an honest mistake but a dishonest one: Google embeds as many 'answers' as possible into its search results page (SRP) so that users don't leave its website.

If Google just displayed links, users might actually click them and leave Google for another website, which is Not Okay. One of Google's largest efforts is making sure theirs is the only website the user stays on: it's why they scrape content from pages and display it on their SRP, why they want to hide the address bar in web browsers, why they push projects like AMP so hard, and more. Conveniently, every one of these efforts can also be framed as helping the user get what they want more quickly - as long as they keep using Google.

Users also trust Google a considerable amount, so very few will take any additional action after being instantly handed an answer scraped from some random website. Google gets 100% of the user flow, has 100% control over which answer to show, and can of course show the user some great ads while they're at it - all while denying the source website traffic for its own content.




I blame the modern web for this.

Google a question you have, especially something with mass appeal (health, fitness, media, culture, travel, etc.), and click any of the top links.

It will load slowly under thousands of tracking scripts; there will be popups, scrolling might break, you will be redirected, more popups will appear, half the screen will be covered by an ad, the site will break, etc...

In fact I just recorded myself doing this. Watch as I struggle to read the page, I end up not even able to scroll down: https://imgur.com/gallery/5fdBdcL


I blame Google for the modern web.

Tracking scripts, ads, etc.


Yes. Google has created this monster of SEO-optimized spam desperate to show you ads.

If they changed their algorithm to prioritize sites that have minimal ads and tracking, load quickly, aren't bloated, and actually work with a good user experience, the incentives would change.

I guess that is what they are trying to do with AMP. But you would think they could do it by just changing their ranking algorithm.


Maybe we'll get there one day. If SEO-bot were as smart as a decently intelligent human, it'd take all that into account. I'm not holding my breath.


I also blame browsers. Create an empty .html file and open it in Firefox. Load time: 370 ms (according to an addon). It's a local, empty file - how can Firefox spend 234 ms on DOM processing? Chrome seems to be better here.

Edit:

    window.performance.timing.domContentLoadedEventEnd
        - window.performance.timing.navigationStart

Firefox: 411

Chrome: 70 => better, but still 70ms for what?
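As an aside, performance.timing is deprecated in favor of the PerformanceNavigationTiming entry. A minimal sketch of the same measurement with the newer API (the navigation entry only exists for a loaded page, so this returns null anywhere else, e.g. in Node):

```javascript
// Sketch: DOMContentLoaded delta via PerformanceNavigationTiming,
// the successor to the deprecated performance.timing API.
function domContentLoadedMs() {
  const [nav] = performance.getEntriesByType("navigation");
  if (!nav) return null; // no navigation entry outside a loaded page
  // nav.startTime is always 0 and plays the role of navigationStart.
  return nav.domContentLoadedEventEnd - nav.startTime;
}

console.log(domContentLoadedMs());
```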


I get 66ms on Firefox, with the following breakdown:

    navigationStart: 0
    fetchStart: 0
    domainLookupStart: 35
    domainLookupEnd: 35
    connectStart: 35
    connectEnd: 35
    requestStart: 36
    responseStart: 36
    responseEnd: 36
    unloadEventStart: 49
    unloadEventEnd: 49
    domLoading: 49
    domInteractive: 63
    domContentLoadedEventStart: 64
    domContentLoadedEventEnd: 65
    domComplete: 66
    loadEventStart: 66
    loadEventEnd: 66
So it looks like the bulk of the time is actually whatever happens between fetchStart and domainLookupStart, and between responseEnd and unloadEventStart. The actual time from domLoading to domComplete is only 17ms, which is about one frame on a 60 Hz monitor.
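Spelling out that arithmetic with the timestamps above (all values in ms):

```javascript
// Phase durations computed from the timing breakdown above (ms).
const t = {
  fetchStart: 0,
  domainLookupStart: 35,
  responseEnd: 36,
  unloadEventStart: 49,
  domLoading: 49,
  domComplete: 66,
};

const beforeLookup = t.domainLookupStart - t.fetchStart;  // 35
const afterResponse = t.unloadEventStart - t.responseEnd; // 13
const domWork = t.domComplete - t.domLoading;             // 17

console.log(beforeLookup + afterResponse, domWork); // 48 17
```

So roughly 48 of the 66 ms fall outside DOM construction proper.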

Note: refreshing the page with the javascript console open takes ~200ms on my machine, and certain extensions can make it even slower than that.


There is no "empty .html file" as far as the browser is concerned. It's not just a text file, it's a document tree; there are compulsory nodes that must exist even if you didn't explicitly write them in your file. <html>, <head> (which, in a conforming document, must contain <title>) and <body> need to exist - and since you didn't create them, the parser synthesizes them for you.
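For illustration, the tree a browser builds from a zero-byte file looks like this - every node below is synthesized by the parser (note it does not synthesize a <title>, even though a conforming document requires one):

```html
<html>
  <head></head>
  <body></body>
</html>
```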


>>> If Google just displayed links to users, then the user might actually click them

And ironically, this has the effect of making Google Search even worse. Whether someone clicked on a link used to be a signal to Google’s search algorithm that the link (or at least the title and snippet) was relevant. And there’s no quality feedback loop at all for the embedded info boxes.


> And there’s no quality feedback loop at all for the embedded info boxes.

Actually, there is feedback for so-called non-organic SERP elements. You can check whether a user clicked the next result after an element, which signals that the element was not useful. On mobile devices you can also check whether the user continued scrolling after seeing the element. Both signals are used by search companies.


And what's missing from that assumption is the user who sees an incorrect info box, assumes it's correct, and moves on without clicking anything. The two signals cancel out.


Yep, but that's also true for classic links. If there is false information on the site behind a link (or in the snippet), and a user perceives it as true, there is no way for a search engine to gather that from the user's behavior.


> If Google just displayed links to users, then the user might actually click them, which would result in them leaving Google for another website

But if the user found the answer they wanted from Google itself, why would they stick around on Google's site?

Google might actually make more money if the user clicked on another web site, since chances are that the other site would serve them a Google ad (e.g., via DoubleClick).



