Hacking the Amazon S3 SLA (daemonology.net)
33 points by cperciva on Oct 23, 2008 | 4 comments



The author looks at differing error rates for GET/DELETE/PUT requests to S3. These measurements will presumably include any failures on the network path between the client and the server.

Might that be enough to explain the higher rate of failed PUTs?

Would anyone expect the effective network path from them to a well-known service like S3 to be noticeably asymmetric (at the levels of failure mentioned in the article)?

Most consumers' first hop (ADSL or cable) is asymmetric in bandwidth -- does that imply the ISP plans capacity differently in each direction, which might cause a difference in packet loss?

Does this asymmetry also exist beyond the consumer first-hop connection?

Do modern network shapers prioritise download ACK traffic (needed to give the customers those blazing download speeds) over uploaded data (bloody file-sharers)?

Lastly -- given that packet loss as measured by ping includes packets lost in either direction, I guess there is no way to measure the two directions independently without some co-operating code at the far end?
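To illustrate the point (a hypothetical sketch, not anything from the article): ping only observes "no reply", which conflates a probe lost on the way out with a reply lost on the way back. If a cooperating far end logs the sequence numbers it actually receives, the two directions separate cleanly. The simulated drop patterns below are arbitrary assumptions chosen for the example.

```python
def loss_rate(sent, received):
    """Fraction of sequence numbers in `sent` missing from `received`."""
    sent = set(sent)
    return len(sent - set(received)) / len(sent)

# Simulated probe run: the client sends probes 0..99 upstream; the
# server echoes each probe it receives back downstream. Some packets
# drop in each direction (made-up drop patterns for illustration).
sent_up = range(100)
arrived_at_server = [n for n in sent_up if n % 10 != 0]       # ~10% loss up
arrived_at_client = [n for n in arrived_at_server if n % 7]   # extra loss down

upstream_loss = loss_rate(sent_up, arrived_at_server)
downstream_loss = loss_rate(arrived_at_server, arrived_at_client)
roundtrip_loss = loss_rate(sent_up, arrived_at_client)  # all that ping sees

print(upstream_loss, downstream_loss, roundtrip_loss)
```

With only the round-trip number (what ping reports), there is no way to recover the per-direction rates; with the server's receive log, both fall out directly.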


Doesn't this hack require that the service be used in an extremely constrained way, almost entirely to get a refund? You cannot employ it if you are using S3 in any meaningful way. Thus, you are simply getting at most a 25% refund on money spent gaining nothing.


There are scenarios where this could be exploited quite easily -- the "backups which aren't being touched" scenario is one of them.

If you're issuing a constant stream of requests to S3, then this can't be exploited, since it relies on being able to say "I'm not going to issue any more requests until the next 5-minute interval starts".


Can't wait until someone launches this as a service :).




