
It makes a lot of sense for them to be able to run loss-making products. Otherwise everyone would use S3 together with Google Compute Engine and Azure databases (assuming each were the cheapest in its category). In that scenario all providers would lose out.

In the current world, they can keep prices for some products below cost but make their money on bandwidth and on the other services people are forced to use to avoid egress traffic.




"In the current world, they can keep prices for some products below costs but make their money with bandwidth and the other services people are forced to use to avoid egress traffic."

Which AWS products are loss leaders?

S3 storage pricing is not exactly cheap. Neither is EC2 instance pricing.

"Otherwise everyone would use S3 together with Google compute engine and Azure databases (let's assume they'd be cheapest). In this scenario all providers would lose out."

No: S3 would do well, GCE would do well, Azure would do well. A provider only loses out to the extent that its products can't compete on merit alone.


I can imagine that this is the real reason. Otherwise they could make bandwidth cheaper, so that people who can't move everything could at least move parts of their applications.

I think the three providers know exactly why they charge that much for bandwidth, and this is the only reason I can think of for why all three of them do. I'm also pretty sure that some products run at a loss; they do at nearly every company. But AWS won't tell us which ones.

It's reasonable to think that S3 is loss-making or about break-even on its own but recoups its costs through bandwidth charges.
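For a rough sense of scale, here's the arithmetic with illustrative list prices (these numbers are assumptions and change over time; check current pricing):

    # illustrative per-GB prices, not authoritative
    storage_per_gb_month = 0.023  # S3 Standard, first 50 TB tier
    egress_per_gb = 0.09          # transfer out to the internet, first 10 TB tier

    dataset_gb = 1000
    print(dataset_gb * storage_per_gb_month)  # ~$23/month to keep the data in S3
    print(dataset_gb * egress_per_gb)         # ~$90 to pull the same data out once

Pulling a dataset out a single time costs several months' worth of storing it, which is exactly the pressure that keeps workloads from straddling providers.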


There's still latency, you know ;).


I guess the latency between AWS Frankfurt and GCP Belgium should be low enough (5-10 ms) for most applications, e.g. storing large amounts of data at one provider and renting compute instances for processing at the other. Latency shouldn't be an issue there, as long as the throughput is high enough.
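As a rough sketch of what that looks like from the compute side (bucket and object names are hypothetical, and process() is a placeholder; the same code runs on a GCE VM, but AWS egress charges apply to every byte that leaves S3):

    import boto3

    # assumes AWS credentials are configured on the (non-AWS) compute instance
    s3 = boto3.client("s3", region_name="eu-central-1")  # AWS Frankfurt
    body = s3.get_object(Bucket="my-dataset", Key="chunk-0001.bin")["Body"]

    # stream in 8 MiB chunks so the instance never holds the whole object in memory
    for chunk in iter(lambda: body.read(8 * 1024 * 1024), b""):
        process(chunk)  # placeholder for the actual processing step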


Can confirm this: storage for a lot of our stuff is in S3 and compute is GCP preemptibles. It works if you have a small dataset that requires a large volume of compute.


Is that cheaper than using Google for storage as well? Or are there other reasons for that setup?


A bit of both; there's no point moving it, as the automation/clients that dump data into S3 make it quite hard to change.


GCS supports the S3 API modulo resumable uploads (we do them differently): https://cloud.google.com/storage/docs/migrating
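If you ever do want to point existing S3 clients at GCS, here's a minimal sketch of that path, assuming you've created an HMAC key pair for a service account (the credentials below are placeholders):

    import boto3

    # point an ordinary S3 client at the GCS XML endpoint with HMAC credentials
    gcs = boto3.client(
        "s3",
        endpoint_url="https://storage.googleapis.com",
        aws_access_key_id="GOOG1E...",   # placeholder HMAC access ID
        aws_secret_access_key="...",     # placeholder HMAC secret
    )

    # simple object operations carry over unchanged; resumable/multipart
    # uploads are the part that differs
    resp = gcs.list_objects(Bucket="my-bucket")
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])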

Feel free to send me a note, my contact info is in my profile (I helped build preemptible VMs and I'm sort of fascinated you're doing this).



