
I think the big problem is that usage collection lags by a few days, at least on GCP. Autoscaling can react to increased load in seconds, but it takes about 3 days for that load to show up on your cost reports. You can burn through a lot of cloud resources in 3 days.

GCP at least has some provision for getting very detailed usage information (but not cost) that updates in under an hour. That, to me, is the tool for building something like "shut down our account if usage is too high". It is annoying that you have to code this yourself, but ultimately it kind of makes sense to me. Cloud providers exist to rent you hardware (often with "value-add" software); it is the developer's and operator's responsibility to account for every machine they request, every byte they send over the network, every query they make against the database, and so on, and to have a good reason for each.

To some extent, if you don't know where you're sending bytes, or what queries you're making, how do you know your product is working? How do you know you haven't been hacked? Reliability and cost go hand in hand here -- if you're monitoring what you need to assure reliability, costs probably aren't quietly accumulating out of control.
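The "code this yourself" part is basically a circuit breaker over a usage metric. A minimal sketch, assuming you can pull near-real-time cumulative usage from a monitoring API; `fetch_usage` and `shutdown` here are hypothetical stand-ins for whatever metric query and teardown action (disable billing, stop instances) your provider supports:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class UsageGuard:
    """Trips a shutdown action once cumulative usage exceeds a budget."""
    budget: float                      # cap, in whatever unit fetch_usage returns
    fetch_usage: Callable[[], float]   # hypothetical: returns cumulative usage so far
    shutdown: Callable[[], None]       # hypothetical: e.g. disable billing on the project
    tripped: bool = False

    def check(self) -> bool:
        """Poll usage once; fire shutdown at most once when over budget."""
        if not self.tripped and self.fetch_usage() > self.budget:
            self.tripped = True
            self.shutdown()
        return self.tripped

# Demo with fake readings: usage crosses the budget on the third poll.
readings = iter([10.0, 40.0, 120.0])
events = []
guard = UsageGuard(budget=100.0,
                   fetch_usage=lambda: next(readings),
                   shutdown=lambda: events.append("shutdown"))
for _ in range(3):
    guard.check()
print(events)  # ['shutdown']
```

In practice you would run `check()` on a schedule (a cron job or a small Cloud Function) at whatever cadence the usage export actually updates; the breaker pattern just ensures the shutdown action fires once, not on every subsequent poll.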

Are you being sarcastic?

> I think the big problem is that usage collection is a few days out of date, at least for GCP. Autoscaling can react in seconds to increased load, but it takes about 3 days before that shows up on your cost reports.

That does not sound like a good reason, but more like a crappy implementation of usage collection.

I don’t see why a bunch of Google engineers can’t implement real-time billing properly, and see no reason to defend their inability to do their job.



