
99% of our AWS resources are terraformed, but developers constantly push back / want to use the console to create stuff to test or play around with. So we set up a separate "hack" AWS account and give them admin access in there, and have an automated job using AWS Nuke to delete everything in there once a quarter.
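For anyone curious how the scheduling side can be wired up, here's a rough boto3 sketch that fires a quarterly EventBridge rule at whatever runs aws-nuke; the rule name, region and Lambda ARN are placeholders, and in practice the runner could just as well be a CodeBuild project or a CI job.

    import boto3

    # Schedule the sandbox cleanup that runs aws-nuke once a quarter.
    # The target ARN is a placeholder for whatever actually executes the nuke.
    events = boto3.client("events")

    events.put_rule(
        Name="quarterly-sandbox-nuke",
        # 03:00 UTC on the 1st of January, April, July and October
        ScheduleExpression="cron(0 3 1 1,4,7,10 ? *)",
        State="ENABLED",
    )

    events.put_targets(
        Rule="quarterly-sandbox-nuke",
        Targets=[{
            "Id": "nuke-runner",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:run-aws-nuke",
        }],
    )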



We give each one of our developers their very own AWS account, managed through the AWS Organizations service. They are full administrators and responsible for resources and cost.

So far we haven't had any issues or bad surprises, although we have set up some AWS billing alerts just in case.

Feel free to make them responsible for cost and resources and you’ll be surprised how well they can manage their own account.
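In case it helps anyone, the account-vending step boils down to something like the boto3 sketch below; the email, account name and billing setting are placeholders, and it assumes you're running it from the Organizations management account.

    import boto3

    # Create a per-developer member account under the organization.
    orgs = boto3.client("organizations")

    resp = orgs.create_account(
        Email="dev-jane.doe@example.com",   # placeholder
        AccountName="sandbox-jane-doe",     # placeholder
        IamUserAccessToBilling="ALLOW",     # let the developer see their own costs
    )

    # Account creation is asynchronous; poll the request until it finishes.
    status = orgs.describe_create_account_status(
        CreateAccountRequestId=resp["CreateAccountStatus"]["Id"]
    )
    print(status["CreateAccountStatus"]["State"])  # IN_PROGRESS / SUCCEEDED / FAILED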


> Feel free to make them responsible for cost and resources and you’ll be surprised how well they can manage their own account.

Wow, this is horrible. I understand responsibility, but this is too much. Are other employees held responsible if the company loses money because of their actions?


It is quite common for employees to have budgets they have to work within.


Ever had a company credit card?


I suspect it's more that individuals would be warned and retrained if they didn't keep their costs under control (usually it's done at the team level), rather than having actual financial responsibility.


Not sure what the problem is; if anybody exceeds the expected « normal usage », we simply get in touch and fix the issue.

Lessons learned for everybody, it’s a win-win situation.


I think it really depends on a number of factors, but even pretty smart people can make stupid mistakes, especially when it comes to security in AWS. I'm familiar with several cases where engineers fired up old AMIs and got the instances compromised within an hour, because they were running old, vulnerable software in a publicly routable subnet.

There are some basic rules that can help avoid issues like those, though as organizations scale they eventually need to be enforced to a greater degree. Disallowing developers from provisioning their own VPCs, disallowing publicly routed subnets, and establishing some decent auth infrastructure is a good start that will work for a long time with minimal friction for users (one way to enforce the VPC rule is sketched below).

I'm a strong believer in security as a UX problem, where doing the Right Thing should be easier than doing the Lazy / Bad Thing. So if people are having trouble doing things the right way, I feel I've messed up and need to improve usability and meet my users where they are in order to achieve my own goal of a secured infrastructure.

All I'm saying is that giving people responsibility and autonomy also comes with responsibilities on the provider's side in a shared responsibility model, and every policy works out fine until it doesn't.
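To make the VPC rule concrete: one common way to enforce it at the org level is a Service Control Policy attached to the OU that holds the developer accounts. A rough sketch, where the policy name, action list and OU id are illustrative rather than a complete baseline:

    import json
    import boto3

    orgs = boto3.client("organizations")

    # Deny the calls that would let account admins stand up their own
    # internet-facing networking. Illustrative, not exhaustive.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyNetworkSelfService",
            "Effect": "Deny",
            "Action": [
                "ec2:CreateVpc",
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
            ],
            "Resource": "*",
        }],
    }

    policy = orgs.create_policy(
        Content=json.dumps(scp),
        Description="Block self-provisioned VPCs and internet gateways",
        Name="deny-network-self-service",
        Type="SERVICE_CONTROL_POLICY",
    )

    # Attach to the OU containing the developer accounts (placeholder id).
    orgs.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-abcd-12345678",
    )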


> We give each one of our developers their very own AWS account, managed through the AWS Organizations service. They are full administrators and responsible for resources and cost.

How many developers work at your organization?


25 at the moment :)


> although we have set up some AWS billing alerts just in case.

My experience with these has been decidedly mixed. As in, you define them and never, ever see an alert.


Hmmm, weird.

We always get the alerts in time, with thresholds set to 70% of the desired value.
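For reference, these are just monthly cost budgets with a notification threshold at 70%; a rough sketch of one (account id, amount and email are placeholders):

    import boto3

    budgets = boto3.client("budgets")

    budgets.create_budget(
        AccountId="123456789012",  # placeholder
        Budget={
            "BudgetName": "dev-account-monthly",
            "BudgetLimit": {"Amount": "200", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 70.0,            # percent of the budget limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{
                "SubscriptionType": "EMAIL",
                "Address": "dev-jane.doe@example.com",  # placeholder
            }],
        }],
    )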


I'm more worried about someone inexperienced with AWS accidentally doing something really expensive than about any kind of intentional abuse.


If you make a mistake with excessive resource allocation, you can get in touch with AWS and ask for a refund, and they will gladly give one.

I've had to do it a couple of times for personal and professional accounts, and I've never had a rejection from them.


We have done something similar: a sandbox account that developers and solution designers can play around in, experiment, and manually create resources in, as long as the resources are properly tagged (we have devised internal naming conventions). They are also responsible for the clean-up; a resource that is not properly tagged is purged automatically after an 8-hour time lapse.
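The purge logic is essentially the pattern below, sketched here for EC2 instances only; the required tag key is an example of our convention, and pagination plus the other resource types are omitted for brevity.

    from datetime import datetime, timedelta, timezone
    import boto3

    REQUIRED_TAG = "owner"          # example tag from the naming convention
    MAX_AGE = timedelta(hours=8)

    ec2 = boto3.client("ec2")
    now = datetime.now(timezone.utc)

    to_terminate = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags and now - instance["LaunchTime"] > MAX_AGE:
                to_terminate.append(instance["InstanceId"])

    if to_terminate:
        # Untagged and older than 8 hours: clean it up.
        ec2.terminate_instances(InstanceIds=to_terminate)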

Other accounts and environments (including dev) require everyone to follow a streamlined process: read-only access to the account, a fully documented solution design, and a corresponding terraform project in GitHub. A terraform project check-in triggers a pull request for review and approval. Once the pull request has been scrutinised and merged, CI/CD runs terraform to provision resources in the account.



