So I had an observation about AWS scripting that I'd like to share. I've been working on some AWS scripts lately, for both personal projects and stuff we deploy at work, which means I often juggle two sets of credentials.
A lot of people on GitHub/SO seem to like to set their AWS credentials as env variables in their scripts. I was doing this myself (since it seemed like the established pattern), but then found out the AWS CLI has a --profile option. You store everything in the ~/.aws/credentials file instead of in an env variable and simply switch the profile you use when executing the script.
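Concretely, it looks something like this (profile names and keys here are just examples):

    # ~/.aws/credentials
    [default]
    aws_access_key_id = <personal key>
    aws_secret_access_key = <personal secret>

    [work]
    aws_access_key_id = <work key>
    aws_secret_access_key = <work secret>

Then any script picks the profile at invocation time:

    aws s3 ls --profile work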
This works better for me when managing multiple credentials, and it removes the off-chance of my credentials getting pushed to a remote git repo. I was curious whether people just didn't know about this option, or whether setting your credentials within the script is preferred for some reason.
If you are writing a demo, it makes sense to put the credentials directly in the script, so that it can be comprehended as a single file. How you actually manage credentials depends on your use case, so that choice shouldn't be baked into a proof-of-concept.
Yes, it's a godsend, so much easier than baking in credentials. If you are using the PowerShell tools, you can also set things up so that your PowerShell session always starts with a particular set of credentials (e.g. Dev). Not sure if you can do the same with the CLI.
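For what it's worth, I believe the closest CLI equivalent is exporting a profile name so every command in the session defaults to it, something like:

    # picks a default profile for every CLI call in this shell session;
    # "dev" must be a profile defined in ~/.aws/credentials
    export AWS_DEFAULT_PROFILE=dev
    aws ec2 describe-instances   # now runs as the dev profile

Drop the export into your shell profile and every new session starts with those credentials.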
It's a really nice middle ground: you get the not-having-to-worry-about-granular-infra of Heroku, but you still have control where you want it. It makes setting up auto scaling incredibly easy.
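For example, bumping an environment's auto scaling limits is a single CLI call; the environment name and values below are made up:

    # raise the min/max instance counts on a hypothetical environment
    aws elasticbeanstalk update-environment \
        --environment-name my-app-prod \
        --option-settings \
            Namespace=aws:autoscaling:asg,OptionName=MinSize,Value=2 \
            Namespace=aws:autoscaling:asg,OptionName=MaxSize,Value=8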
We have a lot of services running on Elastic Beanstalk and have had a lot of success with it.
It's good, but it's not perfect. Some of the issues we have encountered: it's slow to deploy; it doesn't always provide enough feedback (watching the Events page for 5-10 minutes with no real idea what it is doing); there are some relatively small changes which trigger a rebuild (which is slow). Overall I still love it, but we're kind of butting up against its limits. I'd definitely START with it though, as you get a lot of automation for minimal effort.
What's the default deployment like, security-wise? I'm looking to get a really lightweight app into production but am no sysadmin. Since it's just managing other AWS services, I'm guessing it's not as plug-and-play as a more hand-holdy PaaS?
It does an OK job by default; security groups are pretty locked down, and it creates two for every environment (ELB + server). If your DB is outside of Beanstalk you'd want to add your own security groups to it. If you use the eb CLI tool to SSH, though, be careful: the way that works is it opens port 22 to the world, you SSH in, and only when you stop SSH'ing cleanly does it close it...
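If that makes you nervous, one option is to skip eb ssh and open the port only to your own IP on the instances' security group; the group ID below is a placeholder:

    # allow SSH from your current IP only (sg-12345678 is a placeholder)
    MY_IP=$(curl -s https://checkip.amazonaws.com)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-12345678 \
        --protocol tcp --port 22 \
        --cidr "${MY_IP}/32"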
What have you guys done for server logs in EB? The system they seem to have in place is god awful. Even through their CLI, it has to make a request and only loads about 100 lines from the tail of the server logs.
Use an external logging service. You can run your own if you like, but there are plenty of people that offer pretty good services for not much, assuming you're not generating more than a GB or two a day.
I have personal experience with Papertrail and Loggly, the latter of which is quite nice if you want to build derivative data.
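For example, on a stock EB Linux instance, forwarding syslog to Papertrail is roughly two commands; the host and port below are placeholders for the ones your account gives you:

    # forward all syslog traffic to Papertrail (placeholder host/port)
    echo '*.* @logs2.papertrailapp.com:12345' | \
        sudo tee /etc/rsyslog.d/95-papertrail.conf
    sudo service rsyslog restart

In practice you'd bake this into an .ebextensions config rather than run it by hand, since EB replaces instances freely.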
We use a script that more or less looks like this at Staffjoy. It took many hours at the AWS loft to get it working. I just commented out some code we use that confirms a deploy succeeded and blocks until it does.
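The block-until-deployed idea can be approximated by polling the environment status; this is just a sketch, and the environment name is hypothetical:

    # wait for a hypothetical environment to settle back to Ready
    while true; do
        STATUS=$(aws elasticbeanstalk describe-environments \
            --environment-names my-app-prod \
            --query 'Environments[0].Status' --output text)
        [ "$STATUS" = "Ready" ] && break
        echo "still deploying, status: $STATUS"
        sleep 10
    done

Checking the Health field (Green/Yellow/Red) on top of Status gets you closer to "succeeded" rather than just "finished".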
How do people detect errors? I run 5 Beanstalk environments right now and they randomly fail, and I'm not really sure if it's my fault, if Beanstalk isn't reliable in certain configurations, or what.
EB has been such a pain to work with for us. The latest platform version has fixed some of the random failures at least. Our biggest issue right now is that the same EC2 instance that runs the code is also used to pull down & build the Docker image. For us, in staging, that meant the machine ran out of memory. _Of course_ we then had to upgrade to a bigger machine so as not to run out of memory.
At Bleacher Report we use a Ruby app called Gantree to manage this process. It pulls in the appropriate ebextensions for the deploy by taking a convention-over-configuration approach, deriving the correct extensions from the stack name.
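(Not Gantree's actual code, just a toy sketch of what "derive the extensions from the stack name" means; the naming convention here is my assumption:)

    # toy illustration: pick an ebextensions set from a stack name like
    # "myapp-production-docker" (convention assumed for the example)
    STACK_NAME="myapp-production-docker"
    ROLE="${STACK_NAME##*-}"                 # -> "docker"
    mkdir -p .ebextensions
    cp -r "extensions/${ROLE}/." .ebextensions/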
"The name is derived from the word gantry which is a large crane used in ports to pick up shipping containers and load them on a ship. Gantry was already taken so I spelled it "tree" because the primary use is for elastic beanstalk and I guess a beanstalk is a form of tree?"
This is awesome! Thanks for sharing :) I've been looking for tools to abstract the process even more so than I'm doing now. Will definitely try to get Gantree up and running.
Multi-container Docker deploys actually launch ECS for you, so it's basically CloudFormation + ECS, removing the step of configuring autoscaling groups and adding some management stuff on top. Also, I think it existed before ECS services did, so it handled all the ELB stuff. I found version management much nicer in Beanstalk (string versions instead of auto-incrementing revisions), and enhanced monitoring is better than what you get out of the box with ECS, but it's a second system of things that can go wrong; now I just use ECS services everywhere.
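To make the version point concrete: with Beanstalk you register deployments under an arbitrary string label; the application, bucket, and key below are placeholders:

    # register a named application version (all names are placeholders)
    aws elasticbeanstalk create-application-version \
        --application-name my-app \
        --version-label v1.4.2 \
        --source-bundle S3Bucket=my-deploy-bucket,S3Key=my-app-v1.4.2.zip

ECS task definitions, by contrast, only give you an auto-incrementing revision number.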