Hacker News

Nah, I think not. My business needs to run. It would be very painful if an engineer put in a hard shutdown at $X and then left the company, only for me to find out after all my services are shut down because we've grown past $X.

This is a hard no from me.

How is AWS even going to know what to shut down or remove? What if it's storage that's pushing my bill over the limit?

Yeah, not just no, but hell no.




I’m not sure I’d want a running business haemorrhaging $10,000/minute or whatever is possible for cloud compute.

My business is looking at cloud compute at the moment and we have absolutely no idea how to do it safely. In fact, SageMaker is exactly a product we have looked at and ruled out, since we cannot be sure we can use it without incurring unexpected megacharges.

We had an incident recently where a cellular router was set up to send a text message whenever a GPIO input changed. The old firmware didn't enable pull-ups on the input. We told our end customer to update the firmware. They didn't, and they also didn't hook up the input, leaving it floating. We got a £20,000 bill for text messages as a result, and a soured relationship with the customer.
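Independent of the firmware fix, this failure mode can be contained on the sending side with a hard cap on outbound messages. A minimal token-bucket sketch in Python (the class and parameter names are hypothetical, not any real router API):

```python
import time

class SmsBudget:
    """Token bucket that hard-caps outbound SMS alerts.

    A floating GPIO can fire thousands of spurious edges; this refuses
    to send more than `burst` messages at once, refilling at
    `rate_per_hour`, so a runaway input cannot run up the bill.
    """

    def __init__(self, burst=10, rate_per_hour=10, clock=time.monotonic):
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.rate = rate_per_hour / 3600.0  # tokens per second
        self.clock = clock
        self.last = clock()

    def try_send(self):
        # Refill based on elapsed time, then spend one token per SMS.
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over budget: drop the SMS, log it instead

# A floating input firing 1,000 edges in a burst costs at most 10 texts:
budget = SmsBudget(burst=10, rate_per_hour=10)
sent = sum(budget.try_send() for _ in range(1000))
```

At £0.05 or so per message, that caps the worst case at pennies per hour instead of £20,000.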


What if that engineer fat-fingered it and reserved a million GPU instances instead, though?

It would be nice if cloud providers were better at surfacing potentially company killing things via their dashboards and nag emails.
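To be fair, AWS does expose one building block for the nag-email side of this: CloudWatch can alarm on the account's EstimatedCharges billing metric and push to an SNS topic. A sketch of the alarm parameters, assuming billing metrics are enabled for the account (they are reported in us-east-1); the threshold and SNS topic ARN are placeholders, and the actual boto3 call is left commented out:

```python
def billing_alarm_params(threshold_usd, sns_topic_arn):
    """Build put_metric_alarm kwargs for an estimated-charges alarm.

    Assumes the account has billing metrics enabled, which land in the
    AWS/Billing namespace in us-east-1. The SNS topic is wherever the
    "your bill is about to kill the company" email should go.
    """
    return {
        "AlarmName": f"estimated-charges-over-{threshold_usd}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # the billing metric only updates every few hours
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [sns_topic_arn],
    }

params = billing_alarm_params(1000, "arn:aws:sns:us-east-1:123456789012:billing-alerts")
# import boto3
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(**params)
```

The catch is exactly the complaint above: it's opt-in, lagging by hours, and buried in the console rather than surfaced by default.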


Well, they’d have to first contact support and get the instance limits lifted… it would be one hell of a fat finger.


Not everything in AWS is a VM with a limit.

For example, their CDN has no "TB per month limit". If your site gets popular and is 100% cacheable, you could be looking at a million-dollar bill.

I hear similar horror stories about the various distributed databases in several clouds. They're all crazy expensive.
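The back-of-envelope arithmetic is easy to run. The $0.085/GB figure below is an assumed list price for the cheapest CloudFront data-transfer tier, and the traffic numbers are made up for illustration; real tiered or negotiated rates differ:

```python
# How a popular, fully cacheable site still racks up egress charges.
price_per_gb = 0.085        # assumed first-tier CDN egress price, USD/GB
page_mb = 2                 # assumed average transfer per page view
views_per_day = 10_000_000  # "your site gets popular"

gb_per_month = page_mb * views_per_day * 30 / 1024
monthly_bill = gb_per_month * price_per_gb
print(f"{gb_per_month:,.0f} GB/month -> ${monthly_bill:,.0f}/month")
```

Roughly $50k/month at those assumptions, with no built-in ceiling; a traffic spike or a hot-linked asset just scales the bill linearly.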



