There was apparently always some kind of mix (AWS did start from internal requirements, though it quickly diverged). The OG services like S3 and DynamoDB power a lot of Amazon for sure. There's been more and more of Amazon moving to AWS, most recently talk of shifting their main DBs.
I actually tried shopping on Amazon.com from AWS just to see if it was something local to me. Nope! I couldn't see any products or add them to my cart from AWS.
My understanding is that Fastly has a lot more edge locations than AWS, so using their image CDN results in lower latencies for many users than they could achieve with Cloudfront alone.
It's not edge location count that matters. Cloudfront doesn't use BGP Anycast; instead it does more traditional DNS-based routing and intentionally spreads requests across multiple edge locations (even farther-away ones) for redundancy.
When I asked why they don't use Anycast, the Cloudfront engineering team basically said their customers care more about uptime than latency, and that full Anycast was too sketchy. Apparently amazon.com disagrees, at least. I'm also happy getting much lower first-page-view latency out of Cloudflare.
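The trade-off described above can be sketched in a few lines. This is a toy simulation, not how either CDN is actually implemented: the edge names and latencies are made up, and the "spread across the N closest edges" rule is just an assumption to illustrate the redundancy-vs-latency difference between DNS-based routing and anycast.

```python
import random

# Hypothetical edge locations and made-up latencies (ms) from one client.
EDGES = {"iad": 12, "ord": 25, "fra": 95, "nrt": 160}

def dns_routing(spread=2):
    """DNS-based routing (the Cloudfront-style approach described above):
    the authoritative resolver hands out one of the N closest edges per
    query, trading a little latency for redundancy -- an unhealthy edge
    can simply stop being returned in answers."""
    closest = sorted(EDGES, key=EDGES.get)[:spread]
    return random.choice(closest)

def anycast_routing():
    """BGP anycast (the Cloudflare-style approach): every edge announces
    the same IP, so the client always lands on the nearest edge -- lowest
    latency, but failover depends on BGP reconvergence."""
    return min(EDGES, key=EDGES.get)

print(anycast_routing())                     # always the nearest edge
print({dns_routing() for _ in range(200)})   # load spread over nearby edges
```

In this toy model, anycast always returns the single lowest-latency edge, while DNS routing deliberately rotates through a small set of nearby ones — which is the uptime-over-latency preference the Cloudfront team described.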
The one Amazon property that seems unaffected is their China site (https://amazon.cn). No doubt it's running entirely separate infrastructure, unlike every other country's local Amazon site.
Not too surprising I think... for myself and I suspect many/most, the calculus is something like:
A) check back in half an hour when Amazon will probably be working again
or
B) (1) find another online retailer which sells the kind of stuff I'm looking for (2) get the specific items I want in my cart (3) find my credit card (4) make an account (5) re-enter all my shipping information (6) shell out extra for shipping (7) repeat if I could not find a single retailer that sells all the items I wanted (8) curse myself the next day for committing to a retailer with slower shipping, unfamiliar returns process, etc., because I was too impatient to wait an hour for Amazon to be back up.
Not all of the steps in B apply to everyone of course. But it only takes a couple to make option A the winner, which I'd hazard is the case for many many Amazon shoppers.
There's an element of slow shipping that has less effect today: since we're starting the week, it doesn't matter whether things turn up tomorrow or Friday if they're for weekend DIY projects.
I wouldn't mind using another retailer today and near enough forget about it until the weekend. If by some chance it doesn't arrive until the following Tuesday, it doesn't matter, I'll find something else to do at the weekend quickly enough!
However, if I needed it pronto, I'd rather pay for a courier.
Many people have grown up with Amazon, and in the early days it looked like they were undercutting retail from top to bottom, making the high street look like the bad guys. As their warehouse conditions get exposed, and the union-busting issues appear, I'm less likely to buy from Amazon as I think there are ethics involved that need consideration.
Do I really want to buy something knowing that a poor soul was probably operating at 98%, just below maximum threshold before collapse to get this item boxed?
Ok so, option C: philosophize for a while, then follow option B. For the vast majority of customers, the calculus during an Amazon outage still favors option A.
Right, but when they return, how do you distinguish between a new impulse and a continuation of the old one? This is one of those cases where marketers and businesses love to act like they know everything when in fact it's a huge info blind spot.
More likely millions per hour. But even then it's not like all those sales will be lost. Most people will probably just come back later to make their purchase.
I'd be interested to know the numbers - with the amount of impulse buys or marketing-driven links, there are probably a lot of paid ads redirecting to nothing that a user might just give up on.
I think peak Christmas time shopping was a couple million per hour back in 2006.
I know we used to contrast that with the millions-per-minute (or per-second) kinds of outages that NYC financial firms could have (along with the SEC escorting you out in handcuffs if you really screwed up).
This was meant as a friendly nod to the fact that FAANG outages usually lead to a lot of people checking HN and generating insane amounts of load, like the time a few weeks ago when YouTube/Google Auth was down and took HN right with it :)
But overall this site is extremely well run, so please don't take this as more than a joke.
I think it was this [0] event. Some people commented on HN being overloaded [1], including me [2] :-) It was not completely down (iirc), but there were a lot of "Sorry, we cannot serve your request right now".
yeah, Amazon is past the stage where they had sysadmins. They now have armies of Operations People with SDEs being the fallback in case a runbook does not cover the current failure
Let's see whether this outage takes out HN with it once again ;-)