Decentralize your DevOps with Master-less Puppet and supply_drop (braintreepayments.com)
30 points by pgr0ss on Feb 10, 2012 | 10 comments



How can you tell what's in each system at every step? I mean - you know what should be there, but how can you be sure someone didn't forget to apply, or that they had a clean tree at the time? They say "With a centralized Puppet server, the server maintains a single canonical version of the Puppet configuration." I'd counter with: make every node always pull the latest version from git - you end up in the same place they want to be, as described in that same point.
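A minimal sketch of that pull-from-git approach, assuming each node keeps a clone of the config repo and runs puppet apply through a small wrapper (the paths and manifest name here are made up):

    #!/bin/sh
    # refuse to apply from a dirty tree, and record which commit actually landed
    set -e
    cd /etc/puppet-repo                      # assumed location of the node's clone
    if [ -n "$(git status --porcelain)" ]; then
      echo "working tree is dirty, refusing to apply" >&2
      exit 1
    fi
    git pull --ff-only origin master
    puppet apply --modulepath=modules manifests/site.pp
    git rev-parse HEAD > /var/lib/puppet-last-applied   # audit trail for "what's on this box?"

That also answers the "did someone forget to apply?" question: compare /var/lib/puppet-last-applied against origin/master.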

I usually rely heavily on exported resources (Puppet) / node searches (Chef)... not sure I really like the chef-solo or forced-apply-in-Puppet way of doing things.


At work I recently started to use masterless puppet to manage a working environment (e.g. directory structure, shell configuration, packages installed into our own RPM database) on compute clusters we use but don't have root access to. It was much easier to get working than I expected. Looking at the source for rump (https://github.com/railsmachine/rump) was helpful when determining the proper directory structure and puppet command line options.
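For reference, the masterless invocation that rump drives boils down to something like this (the directory layout shown is an assumption, not necessarily rump's exact defaults):

    # apply a local checkout without a puppetmaster
    # assumed layout: manifests/site.pp and modules/
    puppet apply --modulepath=./modules ./manifests/site.pp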


I'm currently trying to run Chef in solo-mode with a git repository. Nodes would update with regular pull & apply. Has anyone tried something like this?
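A rough version of that pull-and-apply loop, assuming the repo carries a solo.rb and a per-node JSON file (paths and file names are illustrative):

    # e.g. run from cron every 15 minutes
    cd /var/chef-repo && git pull --ff-only origin master
    chef-solo -c /var/chef-repo/solo.rb -j /var/chef-repo/nodes/$(hostname).json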


Not yet, but I've thought about that.

My current favorite setup (for Ubuntu + Windows nodes) is knife-solo with rsync.

That said, unlike with code, I like being able to apply modifications just by rsyncing: if my git host is down it still works, for example (but then I'm mostly the only one doing chef deploys).
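For context, knife-solo drives that rsync-then-converge cycle from the workstation; usage is roughly:

    knife solo prepare user@node1   # one-time: installs chef on the target
    knife solo cook user@node1      # rsyncs the kitchen over and runs chef-solo there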


That's not dissimilar from the way RightScale, for example, uses chef. You're missing out on the coordination aspects—for example, having your DNS monitor just know every node that has "dns::slave" in its run list—but it can work.
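With a Chef server that coordination is just a search query; something along these lines (the dns::slave name comes from the comment above, and the exact escaping may vary):

    # list every node whose expanded run list includes dns::slave
    knife search node 'recipes:dns\:\:slave' -a ipaddress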


In this workflow, the configuration being applied to a server is whatever people happen to have rsynced to it. That may allow for quick iteration in their QA environment (as long as people avoid stepping on each other's toes), but wouldn't it be better if the sandbox and production servers pulled their configuration from the sandbox and production git branches?
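A sketch of what that per-environment pull could look like, assuming each server records its environment name and tracks the matching branch (paths are hypothetical):

    ENV_BRANCH=$(cat /etc/puppet-environment)   # "sandbox" or "production"
    cd /etc/puppet-repo
    git fetch origin
    git checkout "$ENV_BRANCH"
    git reset --hard "origin/$ENV_BRANCH"
    puppet apply --modulepath=modules manifests/site.pp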


Our goal is to make applying changes to a server an intentional step. Instead of changes in production being pulled, we push them. For more complex changes, that means unbalancing a server (taking it out of rotation), making the change, testing that everything is working correctly, then rebalancing it. For smaller changes, we simply apply them to the environment wholesale. This process gives us the flexibility to do both.
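In supply_drop terms that push is a pair of Capistrano tasks, roughly like the following (task names as described in the linked post; treat the exact invocation as an assumption):

    cap production puppet:noop    # dry run: show what would change on each box
    cap production puppet:apply   # rsync the config out and apply it for real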


We do use the sandbox and production git branches for those environments. If you want to puppet sandbox, you start with a git checkout sandbox. If you try to puppet sandbox from the production or master branch, you get a big warning and a prompt asking if you are really sure.
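The branch guard could be as simple as something along these lines inside the deploy task (a sketch, not their actual code):

    current=$(git rev-parse --abbrev-ref HEAD)
    if [ "$current" != "sandbox" ]; then
      printf "You are on '%s', not 'sandbox'. Really puppet sandbox from here? [y/N] " "$current"
      read answer
      [ "$answer" = "y" ] || exit 1
    fi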


Couldn't this cause a race condition if two admins try to apply changes at the same time? Without a centralised repository, I presume that you would need to introduce a locking mechanism on a per-node level, so one would: take lock, run noop, run apply, release lock. Maybe make it implicit.
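One way to make the lock implicit is to wrap the remote run in flock(1) on each node, so concurrent pushes serialize instead of interleaving (a sketch; the lock path is arbitrary):

    # on the node: take lock, run noop, run apply, release lock (on exit)
    flock -w 600 /var/lock/puppet-apply.lock sh -c '
      puppet apply --noop --modulepath=modules manifests/site.pp &&
      puppet apply --modulepath=modules manifests/site.pp
    '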


Wouldn't this break exported resources?



