Hacker News

I'm trying to understand how this works:

the s3 page has all links go to promocode.org https://s3.amazonaws.com/walgreens-photo-coupon/walgreens/in...

When you click on that you get redirected to promocode.org, where you're prompted to click the promo code again, and that's when the promo cookie gets tacked onto the Walgreens website.

I understand that amazonaws.com is a highly-ranked domain. What part of this process makes this particular s3 webpage rank up in search algorithms though? At the end of the day don't you need lots of _direct_ inbound clicks and links to this specific s3 page for it to rank higher?

The only way I see this working is if _indirect_ clicks of the entire domain count towards the ranking of this specific page -- that doesn't seem right though.

edit: looks like the paragraph above describes the concept of "domain authority" so that's probably the answer




Probably because Google is treating all s3.amazonaws.com links as being the same authoritative site, so they see each of these coupon sites as just a page of s3.amazonaws.com and therefore the site gets the link 'juice' from the root s3.aws domain.

Each page of a site doesn't need a ton of links to that specific page to rank, just links to the site in general (site being root link plus subdomains).
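
This "link juice through internal links" idea can be sketched with a toy PageRank calculation (purely illustrative; the graph, node names, and parameters are assumptions, not Google's actual algorithm or data):

```python
# Toy PageRank sketch: many external sites link to a homepage, which links
# internally to one of its own pages. All names here are made up.

def pagerank(links, damping=0.85, iters=100):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, outs in links.items():
            targets = outs or nodes  # dangling pages spread rank evenly
            share = damping * rank[src] / len(targets)
            for dst in targets:
                new[dst] += share
        rank = new
    return rank

# Ten external sites link to a domain's homepage; the homepage links
# internally to page_a, which has no external inlinks at all.
graph = {f"ext{i}": ["home"] for i in range(10)}
graph["home"] = ["page_a"]
graph["page_a"] = []

ranks = pagerank(graph)
# page_a ends up outranking every external site purely via the internal link.
print(ranks["page_a"] > ranks["ext0"])  # True
```

If Google treats all of s3.amazonaws.com as one "site", every coupon page is effectively `page_a` in this sketch.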

That's typically why blogspot-type services give you a subdomain, and not a page on their main domain.

It's been known about in SEO circles for a while[0]; it will be interesting to see if things change in the next major Google update.

[0] https://www.blackhatworld.com/seo/how-to-get-backlinks-from-...


> What part of this process makes the s3 webpage rank up in search algorithms though?

No idea. The article basically ends with "I'm not sure exactly how this happened so I'm going to talk to some experts". A bit of a letdown, tbh. I was waiting for the big reveal!


I'm not sure either, but it looks like it has something to do with domain authority. Since the pages are hosted on Amazon S3 and the domain is related to Amazon, maybe that makes Googlebot think the pages are affiliated with Amazon, so it ranks them higher (just like articles on wikipedia.org rank higher with less effort). This is all conjecture though; I'm not an expert, so maybe somebody can correct me or add to it.


>When you click on that you get redirected to promocode.org, where you're prompted to click the promo code again, and that's when the promo cookie gets tacked onto the Walgreens website.

Not quite. Notice that when you click "Show Code" (the first click), a new window is opened and the existing window is redirected to Walgreens.com. All the action happens on the first click.


It has been shown, time and time again, that domain authority just doesn't exist. Here's a recent explanation: https://www.searchenginejournal.com/domain-authority/246515/


From your article: "John Mueller's (Google rep) response deflected a straight answer"

It really depends on how you define site authority.

As the article you cited states:

“I am just labeling that unknown multiplier effect as a trust factor, that’s all.

That’s a realistic definition of Site Authority, as a catch-all for all the quality signals that Google uses in it’s core algorithm.”

At least in the early 2000s, having a page on a high-authority (however you define it) domain automatically guaranteed higher rankings.

So even today, it is pretty much impossible to outrank Wikipedia on some mundane (non-SEO-worthy) topic, even when the Wikipedia article is more basic, has fewer inbound links, and even cites a more substantial article hosted on some random "low quality" domain. Obviously citation needed here...


>> At least in the early 2000s having a page on a high authority(however you define it) domain automatically guaranteed higher rankings.

I totally agree. And in all of those cases, the page on that "high authority domain" has INTERNAL LINKS from other pages of that site. The site has high authority because links from other authoritative sites point to it, and the site links internally to that page. That's why that page ranks.

That's completely different from having an orphan page or an "orphan site" (a set of orphan pages) on a highly authoritative domain. Just because those orphan pages exist on an "authoritative domain" doesn't mean they will rank. Even 10 or 20 years ago that was the case.

In the case of this AWS site, the Amazon S3 pages that rank are orphan pages. I may be wrong, but if you go to the home page of that Amazon domain, you can't click through to the page or pages mentioned in this original post.

Just because the orphan page(s) or orphaned "site" are on an Amazon domain doesn't give them ranking power, because "domain authority" doesn't exist.
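
The parent's point can be illustrated with a toy PageRank comparison (hypothetical graph, made-up names; not a claim about Google's real system): a page with an internal inlink inherits rank from its site, while an orphan page on the same domain gets only the baseline "teleport" mass.

```python
# Toy PageRank comparison: linked page vs. orphan page on the same "domain".

def pagerank(links, damping=0.85, iters=100):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, outs in links.items():
            targets = outs or nodes  # dangling pages spread rank evenly
            share = damping * rank[src] / len(targets)
            for dst in targets:
                new[dst] += share
        rank = new
    return rank

# "home" is well linked and links internally to "linked_page";
# "orphan_page" sits on the same domain but nothing links to it.
graph = {
    "ext1": ["home"], "ext2": ["home"], "ext3": ["home"],
    "home": ["linked_page"],
    "linked_page": [],
    "orphan_page": [],
}
ranks = pagerank(graph)
# The internally linked page inherits rank; the orphan gets only the
# teleport baseline, despite sitting on the same "authoritative" domain.
print(ranks["linked_page"] > ranks["orphan_page"])  # True
```

Under this model, authority flows along actual links, not from merely sharing a domain, which is consistent with the orphan-page argument above.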


>Employee at Google says domain authority doesn't exist.

I'm sure they wish we'd forget PageRank ever existed too.



