AWS CodeDeploy (amazon.com)
211 points by helper on Nov 12, 2014 | 51 comments



While this looks nice, part of me can't help but be annoyed by yet another deployment option on AWS. We now have CloudFormation, Elastic Beanstalk (which can take many forms, including Docker), CodeDeploy, and OpsWorks.

I can imagine how, for a new user, it's utterly baffling which of these options is best for the long term, with the least friction. I use OpsWorks quite a bit but have found it very challenging, and the feedback cycle when attempting to develop new cookbooks is excruciatingly slow.

All I want, personally, is a system that uses a set of interchangeable scripts that represent dependencies, so my server configs can live in version control. It doesn't even need to run on multiple OSs (which seems to be a central tenet of Chef). It just needs to deploy/rollback with zero downtime, and ideally autoscale as quickly as possible. Is this it? Is there any way to know without spending weeks fleshing out how it works?


CloudFormation is really a different beast. It's focused on creating and managing related collections of AWS resources, not on application deployment. Anyone doing, well, pretty much anything on AWS should probably be using CloudFormation.

http://aws.amazon.com/cloudformation/


I've found developing OpsWorks recipes directly on the instance to be the best remedy for the slow feedback cycle.

Your cookbook is located at /opt/aws/opsworks/current/site-cookbooks. If you run a command from the web console, you can repeat it with tailed log output using "opsworks-agent-cli run_command".


> I use OpsWorks quite a bit but have found it very challenging, and the feedback cycle when attempting to develop new cookbooks is excruciatingly slow.

Yes. As a frequent Chef user, I was really excited about OpsWorks, but it feels quite beta, and as you said the build/test cycle is even worse than usual for config management.

Have you had any success developing your Chef scripts on Vagrant and then using them on OpsWorks? It seems like the real thing is just too different. The open source OpsWorks cookbooks don't match what's actually running, you've got a VPC environment around you, there are extra tweaks to the AMIs, etc.

For new projects I think CloudFormation + regular Chef is a better way to go.


I tried getting a Vagrant environment up but I'm not convinced it's worth it. There is at least one repo with what looks like a working config [1].

I was far enough along in development by the time I started messing with Vagrant that I never finished it. I started debugging using the opsworks-agent-cli as mentioned in zackangelo's comment. I'd just been `git push`ing changes, updating cookbooks and testing again - his method of editing cookbooks [2] directly actually sounds rather workable. I imagine in the future I will mostly edit and test cookbooks this way.

1. https://github.com/wwestenbrink/vagrant-opsworks 2. https://news.ycombinator.com/item?id=8598255


I used this development flow regularly. You have to account for the OpsWorks node settings not being there. Where I currently work, there's been discussion that there really should be a plugin to simulate the metadata, and perhaps the same or another one to simulate OpsWorks itself.


How about an Amazon-managed Docker (not shit ElasticBeanstalk) infrastructure/config management system?

If they handled things like EBS integration, orchestration, ELB integration, and all of the usual stuff, I'd be sold in a second.


On Google Cloud Platform, there is a new service called Google Container Engine (currently in alpha). It uses Kubernetes to manage Docker containers, running on Google Compute Engine. You could also run Kubernetes on EC2 if you wanted to.


From my understanding, after talking to them at Web Summit, something like this is in the works. And they're aware of how incredibly complicated the AWS console is becoming. I don't know how or when they plan to address it, however.


They promise lots of things (like cross-region VPC peering) person to person at Summits...


Yes, I am still waiting for something prettier than a software VPN like Openswan. Their docs show 5 ways to connect VPCs, all of them complex, all of them with significant downsides. One would think this sort of thing would be easier.


It takes a couple of days to get Ansible playbooks set up to deploy CoreOS VMs onto EC2 with a decent Docker setup and EBS. I know, because I've just done it. And I don't like Ansible and hadn't used it seriously before (I'm using it for this project because it's a contract and it's relatively easy for them to find people who understand Ansible if/when that's needed), so it took longer than I'd have liked.

Frankly, I don't understand why people are so happy to lock themselves tighter into AWS.


I'm a big fan of Elastic Beanstalk. AWS with Elastic Beanstalk just makes things super easy for my deployment. And today Amazon announced they have over 1 million active users. This just shows how powerful their offerings are.


Or it shows that AWS doesn't have any other simple deployment service which doesn't tie you into using particular technologies. Hopefully this is where CodeDeploy will come in.


Wow STRML - that's exactly how I felt, and why I created CloudCoreo.com... You are literally describing what we do (plus we handle services other than EC2 as well). It's not technically open to the public until January, but after seeing what you want, I would love your feedback on the system. Let me know if you're interested and I can walk you through how it works.


Also check out http://cloudnative.io

Instead of using rolling updates, it's a blue-green deploy -- essentially what Netflix does. It also automates AMI building, triggered by a Jenkins plugin after a successful test run.


Thanks for the link. Yes, I was excited to see AWS moving higher up the stack and offering code pipeline and deployment options, but CodeDeploy is exactly what would have been designed (and in fact was) if EC2, ELBs, and Auto Scaling did not exist. The approach was born 18 years ago, and it suffers from all the same issues as any rolling update over mutable infrastructure.

Blue/Green, while harder to use without tooling like CloudNative adds, has numerous benefits including near instant rollback to a previously known good state.

Netflix figured this out 5 years ago.
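For illustration, a minimal blue/green sketch in plain Python (the `Router` class and environment names are invented for this example, not CloudNative's API): you deploy to the idle environment and flip the live pointer, so rollback is just flipping the pointer back to the previous, still-running fleet.

```python
# Toy blue/green deployment: two environments behind a router.
# Cutover and rollback are both a single pointer flip.

class Router:
    def __init__(self):
        self.envs = {"blue": "v1", "green": None}
        self.live = "blue"

    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version):
        target = self.idle()
        self.envs[target] = version   # bake/launch the new fleet here
        self.live = target            # atomic cutover

    def rollback(self):
        self.live = self.idle()       # previous fleet is still running

r = Router()
r.deploy("v2")
print(r.live, r.envs[r.live])   # green v2
r.rollback()
print(r.live, r.envs[r.live])   # blue v1
```

Because the old fleet stays up until you retire it, "rollback" never re-provisions anything, which is where the near-instant recovery comes from.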


I've only briefly read over the documentation, but this service seems not to follow the deployment best practices that AWS and others such as Netflix have been talking about for years. Specifically, the pattern of pre-baking an AMI with the current version of your app and any other needed software completely installed, then having an Auto Scaling group boot that AMI in a few seconds and start working. This greatly helps with scaling up and doing rolling upgrades, and also makes rollbacks very easy.

The CodeDeploy service seems to operate by you manually launching a base EC2 instance with a CodeDeploy agent; the agent then checks out your git code on the live instance, runs any provisioning steps, and, if things break, somehow rolls all that work back -- still on the live instance.

I'm sure this is still a big improvement for companies who are manually sshing into servers and running deployments by hand, but as someone who pre-bakes AMIs and does rolling upgrades with Auto Scaling groups, this service seems like a step backwards.


I've been working on the CodeDeploy integration here at Codeship and have been working with the service for a bit (as a preface to my thoughts).

While immutable infrastructure is, in our opinion too (and I've written about this extensively), the way to go in the future, updating systems in place is still the primary way to deploy and will be for a while. By providing a centralized system to upload new releases and manage the deployment (how many instances get the new release, and in which timeframe), you can remove some of the security problems of opening up ports for access, and the class of deployment errors where the SSH connection dies mid-deploy.

Especially when deploying into a large infrastructure, connecting into each instance to update it becomes painful. That's where an agent-based service like CodeDeploy is really powerful: it removes the single point of failure that is the machine/network you deploy from.

Together with Elastic Beanstalk, OpsWorks, and CloudFormation, they now really start to cover all the deployment workflows.

Definitely a great service that will, in my opinion, become very important to many teams. You can also read more about our specific integration on our blog: http://blog.codeship.com/aws-codedeploy-codeship/


In-place update is useful in the success case - agreed.

In the failure case, however, even with a fleet of only 20 instances, a rolling update that hits issues after the 10th instance puts you in a world of pain.
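A toy sketch of why that hurts, in plain Python (the version labels are made up): halt a rolling update mid-fleet and you're left running two versions at once, with no clean state to fall back to.

```python
# Simulated rolling update over a fleet. If the new build starts
# failing partway through, the rollout halts, leaving a mixed fleet
# of old and new versions that has to be repaired by hand.

def rolling_update(fleet_size, fails_at=None):
    versions = ["old"] * fleet_size
    for i in range(fleet_size):
        if fails_at is not None and i == fails_at:
            return versions  # halt: instances 0..i-1 already run "new"
        versions[i] = "new"
    return versions

fleet = rolling_update(20, fails_at=10)
print(fleet.count("new"), fleet.count("old"))  # 10 10 -- a mixed fleet
```

With blue/green, by contrast, the failure leaves the old environment untouched and fully serving traffic.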


Have you written anywhere how you guys deal with operational monitoring (eg. Boundary, New Relic, etc.) when you're spinning up brand new instances all of the time?


A bit: http://blog.codeship.com/lxc-memory-limit/

We use librato for monitoring our build server infrastructure and mostly only look at max/min values for metrics that could mean trouble. Generally we're able to separate data of different instances by their instance id so we could look into them individually.

We use New Relic for our Rails application on Heroku and pump Heroku data into Librato as well (we love data and metrics).

And of course you can always send me an email to flo@codeship.com with questions.


I have used Stackdriver (http://www.stackdriver.com/) before and it works well, though it can get a bit pricey. They were bought by Google a few months ago -- something to watch out for. I really had a good experience with their product.


We use stackdriver very heavily - but also take a look at SignalFuse.


I am wondering why you guys don't see CodeDeploy as a competitor to Codeship, or eventually becoming something like Codeship?


Yep, this is Amazon reaching out to those who won't/can't take the 'immutable infrastructure' approach.

It's fair to say if you're already doing things in the way you describe, this service isn't for you.


Baking everything into AMIs is the "right" way to go, but if AWS had a supported, hosted git server inside their network to push into, I'd rather have the speed of deploying from git, and only bake AMIs when necessary for system upgrades.


...and it looks like one's in the works: https://aws.amazon.com/codecommit/


I have the same opinion after reading over the product page and glancing at the documentation.

At my org, we currently use a very similar method of deployment to what AWS CodeDeploy seems to provide, except that we wrote it in house using some Python Fabric scripts. It sounds like AWS CodeDeploy would help us "AWSify" this method of deployment, with the added benefits of health checks/rollbacks, UI-based management, and a deployment history log. However, we would essentially still be maintaining and writing bash/Python scripts to do the heavy lifting.

However, we are in the process of moving to a method of deployment which uses SaltStack to prebake an AMI with everything that is required to run in production and then use Netflix's Asgard to manage the deployment of the prebuilt AMIs. We are very excited about this method of managing deployments to take advantage of AutoScaling groups and using a well defined/tested stack of tools.

Interested to hear thoughts from others using a deployment approach that they think would benefit from AWS CodeDeploy.


> The CodeDeploy service seems to operate by you manually launching base ec2 instance with a code deploy agent and then this agent will checkout your git code on the live instance, run any provisioning steps and then if things break somehow rollback all that work, still on the live instance.

Deploys and rollbacks don't happen on live instances. Instances (or groups of instances) are taken out of service for the rolling deployment, and you can configure what percentage of your fleet is deployed to at once.


Looks like an improved version of OpsWorks.


BTW, we now integrate with this from CircleCI: https://news.ycombinator.com/item?id=8597439

There's some discussion in that post of how it compares to pre-baking, etc. Of course there are trade-offs either way. CodeDeploy does require that you are careful with your lifecycle scripts to make deployments as atomic as possible. At least they provide a good selection of default lifecycle events for you to take advantage of.
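For reference, CodeDeploy drives those lifecycle events from an `appspec.yml` file at the root of your revision; a minimal sketch (the script paths and timeouts here are hypothetical, the structure follows the CodeDeploy AppSpec format):

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 60
  BeforeInstall:
    - location: scripts/install_dependencies.sh
  ApplicationStart:
    - location: scripts/start_server.sh
  ValidateService:
    - location: scripts/health_check.sh
      timeout: 300
```

The atomicity concern lives in those scripts: if `health_check.sh` exits non-zero, the deployment to that instance is marked failed, so the hooks need to leave the box in a recoverable state.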


Here's an open source tool that does something very similar (and you aren't vendor-locked into AWS):

https://bitbucket.org/scorebig/elita


As a beginner I have a hard time understanding all these services that Amazon provides. I know I should probably be using them, but I don't know which one.


What is it you're trying to do? It might be worth taking a look at the storage provided by S3 and how you'd go about hosting a static website (if that's something you might need). Or set up a free micro EC2 instance; that will give you a Linux box you can play around with.


How does this compare to Deis? Does it serve the same use case, albeit locked into AWS?

Discussion on Deis from yesterday: https://news.ycombinator.com/item?id=8591209


This looks a lot like Marathon [1] though without some of the resource abstractions that Mesos [2] provides underneath.

1. https://mesosphere.github.io/marathon/ 2. https://mesos.apache.org/


Is this going to integrate with Docker? It would make a great orchestration platform.


That was my initial thought, too. But it looks like the Docker-related news they hinted at is still coming:

https://twitter.com/jeffbarr/status/529493907839533056


This is my question as well. It looks pretty low-level, so you should be able to leverage Docker, but I'm not seeing any specific tooling for it.


Isn't that kind of thing essentially covered by Docker on Beanstalk?


This could be a big deal in terms of raising the bar for deployment practices.

Right now "nobody ever got fired for" setting up deployment via rsync and some ad-hoc shell scripts. That works for a single host, although it's not great for reproducibility. But as soon as you go to multiple hosts you need some degree of orchestration, monitoring, and integration with your load balancer to avoid downtime.

CodeDeploy offers those benefits, so if it turns out to be even slightly good, it could become the "nobody ever got fired for" choice, for any non-trivial app running on AWS.
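A sketch of the choreography those ad-hoc rsync scripts usually skip, simulated in plain Python (the hosts and load balancer here are stand-ins, not a real ELB API): each box is drained, updated, health-checked, and only then re-registered.

```python
# Simulated zero-downtime rolling deploy with load balancer integration.
# A host that fails its health check is left out of rotation rather
# than serving errors to users.

def deploy(hosts, in_service, new_version, health_check):
    for host in hosts:
        in_service.discard(host["name"])   # drain: stop routing traffic
        host["version"] = new_version      # the rsync/restart would go here
        if health_check(host):
            in_service.add(host["name"])   # re-register with the LB
    return in_service

hosts = [{"name": f"web{i}", "version": "v1"} for i in range(3)]
live = deploy(hosts, {h["name"] for h in hosts}, "v2",
              health_check=lambda h: h["name"] != "web1")
print(sorted(live))  # ['web0', 'web2']: the failed host never rejoined
```

Orchestrating this by hand across many hosts is exactly the part that ad-hoc shell scripts tend to get wrong, and the part a service like CodeDeploy standardizes.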


If the job is "get an app onto a bunch of boxes and load balance the healthy ones," I feel like AWS has already been doing that for a long time – deploy your code to a box, create an AMI from the box, and use it as a launch configuration for an Auto Scaling group. New code = new AMI = new box, and then you don't have to worry about the mechanics of moving code to a bunch of boxes at the same time.

This seems like a tiny step forward for orgs who are deploying code to boxes that they never take down, but for the orgs that have been doing it the AWS-prescribed (immutable) way, I'm having trouble seeing how this is useful at all.
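The flow the parent describes, as a toy model in plain Python (fake AMI ids, no actual AWS calls): the release artifact is the image itself, so scaling out never re-runs provisioning.

```python
# Toy model of immutable deployments: each release bakes a new image,
# and an Auto Scaling group just stamps out identical copies of it.

def bake_ami(version):
    # In real life: launch a box, deploy the code, snapshot it into an AMI.
    return {"image_id": f"ami-{version}", "app_version": version}

def scale_out(launch_config, count):
    # Scale-out clones the image; no per-instance provisioning steps.
    return [dict(launch_config) for _ in range(count)]

fleet = scale_out(bake_ami("v2"), 3)
print(all(box["app_version"] == "v2" for box in fleet))  # True
```

Every box is guaranteed identical by construction, which is the property in-place agents like CodeDeploy have to work to approximate.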


I think immutable infrastructure is probably the way forward, but it's not yet easy enough to be the default for lazy people.

> tiny step forward for orgs who are deploying code to boxes that they never take down

That's exactly why this is a clever move - it's a better way to do what you already know how to do. This should get more teams using responsible deployment practices. If you have to first learn a whole new mindset about infrastructure, most people just won't bother, and will keep on rsyncing.


Interesting. Between Elastic Beanstalk, OpsWorks, and now CodeDeploy it seems like AWS is taking over every production developer workflow from the hobbyist on up.


I find none of them good enough for my workflow. I think Heroku nailed it with their deployment approach, for my case.

Beanstalk worker tiers are a nightmare. You need to place all your workers in separate repos, for example, for the git aws.push-style workflow to work.


Good point. Plus Heroku just gives you so many more options instead of the 3-4 that Elastic Beanstalk provides, although Docker integration potentially changes the calculus significantly. Interesting that it's taking EB so long to catch up to Heroku on that front.


Yeah, I was also hoping this would be something more along the lines of Google App Engine. I like just writing the application's code and not having to worry about the stack too much.


I am having a hard time understanding how CodeDeploy will change my current deployment workflow (which basically consists of git aws.push). Can someone here enlighten me?


I really think Heroku nailed the deployment model. I just push my code, and then the code is deployed. It would be nice to have staging baked in.


> Region Unsupported
> CodeDeploy is not available in EU (Ireland). Please select another region.
> Supported Regions: US East (N. Virginia), US West (Oregon)



