I might just be biased by selective attention, but it seems like more and more of these are popping up lately.
I feel like we will eventually recognize a variant of Greenspun's Tenth Rule as common wisdom:
> Any sufficiently complicated build system or configuration management system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Nix.
(Although to be honest it might make more sense to replace "Nix" with "Guix"...)
I think it’s a cyclic pendulum sort of thing. We crave the networking/herding effect, so we promote a bandwagon and try to get everyone on the same one. We get there and realize we’ve turned a trip to Starbucks into an Artemis launch. Frustrated that we’ve rediscovered you can’t please all of the people all of the time, we race back to individual tooling, each doing one task and one task well, factionalized and repeated in hundreds of tribes. It’s lonely at this extreme and we crave the networking/herding… coda.
I tend to think about it differently: Some ideas get reinvented and simplified until only the idea is left. And the idea then seems so obvious to everyone that no one considers it necessary to have a reusable implementation. That isn't the same as NIH. It is sharing ideas rather than code.
Was going to say the same thing - but for Ansible.
An inventory file is literally a static list of pets in its simplest form, and with some simple conventions you could have a per-host directory with any playbooks required. Plus you have docs, community modules, etc.
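Something like this, as a rough sketch (the host names and per-host layout here are hypothetical, any convention works):

```sh
# a static inventory of pets, plus one playbook directory per host by convention
cat > inventory.ini <<'EOF'
[pets]
mail.example.com
backup.example.com ansible_user=admin
EOF

# apply a single pet's playbook (paths are made up)
ansible-playbook -i inventory.ini pets/mail.example.com/playbook.yml
```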
I came to Ansible from Puppet, buying in to the claims I'd heard about Ansible being so much simpler ("it uses YAML for its config, which is so much simpler than configuring Puppet").
Turns out that, like Puppet, Ansible seems to have been congealed rather than designed, and it's a mess of inconsistent spaghetti code.
All the other config management systems I've tried, from Salt to Chef, have exactly the same problem.
I'd be thrilled to find a config management system that actually was simple and elegant.
Ansible is like a fifth wheel on a car, since all of the configuration can be done inside of OS packages, and orchestration can be done via SSH (which is exactly how Ansible does it). Put those two together and Ansible is a solution to a non-existent problem.
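For the orchestration half, a plain SSH loop is arguably all you need; a minimal sketch, assuming a hosts.txt file, Debian-family hosts, and sudo rights:

```sh
# pull the latest configuration packages onto every host over plain SSH
while read -r host; do
  ssh "$host" 'sudo apt-get update && sudo apt-get -y upgrade'
done < hosts.txt
```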
Not by hand! With configuration packages in OS-native format! (One way to build such a package is sketched after the list below.)
1. When a system comes in, it is scanned and entered into the asset management database, which then triggers a process to enter the scanned MAC address into the DHCP configuration by generating a new DHCP configuration package.
2. The previous version of the DHCP configuration package is upgraded to the new one.
3. The system is hooked up to the network and powered on.
4. The firmware is permanently reconfigured to boot in this order:
1. HD0
2. HD1
3. network.
5. Since HD0 and HD1 are not bootable, the system boots from the network, whereby the infrastructure automatically provisions it with the standard runtime platform, which consists solely of packages in OS-native format, including configuration packages for the things all servers have in common.
6. As part of the automatic installation, the server also receives additional configuration packages based on which profile it is in, turning it into a specific application server.
7. The server comes up after the automatic installation and reports back to the infrastructure that it is ready to serve.
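As an illustration of step 1, building such a DHCP configuration package could look roughly like this; fpm and the paths are my assumption here, and any OS-native packaging workflow would do:

```sh
# bundle the freshly generated dhcpd.conf into a versioned Debian package
# (fpm and these paths are assumptions, not necessarily what this setup uses)
fpm -s dir -t deb -n dhcp-config -v "$(date +%Y%m%d%H%M)" etc/dhcp/dhcpd.conf
```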
I stopped using configuration management for my servers a while ago because these tools are too complex and fragile for sporadic usage.
My main issue is that you need to use them daily in order to keep them working and to keep your muscle memory.
For a time I tried to restrict myself to only Debian packages, but over time I felt it was too restrictive and not flexible enough.
I also tried Nix and really liked it, but my main issue was having to google every time I wanted to make a change, because the config files/packages are awful to write and remember.
Maybe you will take a different path, but for personal usage, configuration management is really a waste of time IMHO: the frequency at which you touch the files will be low, and your tool/setup will become outdated before you realize it.
And nowadays everything runs inside a container, so you will have to adapt your tool to manage docker/podman/docker-compose at some point.
> I also tried Nix and really liked it, but my main issue was having to google every time I wanted to make a change, because the config files/packages are awful to write and remember.
I just use it package-management-style with "nix-env" instead of using the whole configuration language. Pretty simple and easy to remember.
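For reference, that workflow is just a handful of commands (on NixOS the attribute prefix is typically nixos.* instead of nixpkgs.*):

```sh
# package-management-style Nix, no configuration language involved
nix-env -iA nixpkgs.ripgrep   # install a package by attribute path
nix-env -q                    # list what's in the current profile
nix-env --rollback            # undo the last change to the profile
```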
I dunno, I just feel like in my time playing pretend DevOps, I've just ended up learning how to use Terraform/Puppet/Ansible/whatever and now all of my infra actually is cattle, even though I think 9 out of 10 devs in my shoes would have gone the pets route.
It was some extra work to set up, but now I feel supremely confident in the hardiness of the infra, and can scale it at will with some small tweaks.
Is it perfect? Heck no! But I know for a fact when I go the "pets" route it ends up being more work in the end for me than just biting the bullet and learning a new tool.
I just like knowing stuff! I enjoy diving in and getting my hands dirty learning the "best" way to handle a use case, or at least setting things up to be well understood if I ever have to pass it off to someone else more qualified than me.
The problem is that configuration management systems think everything is either cattle or pets.
[0] Betsy is a special cow. She will also be slaughtered, so she is not a pet, but she needs to be taken care of in a special way, so she is not cattle.
[1] When the herd becomes large, you get Joahna. Joahna is like Betsy, but not exactly like Betsy, so she needs care and feeding slightly different from the Betsys.
I love the concepts behind NixOS as well as Guix. I've tinkered with both, and Nix has served the daily driver role for multiple computers of mine.
One advantage this has over Nix is a simpler build toolchain. For some, that advantage may be quite useful. Even more so for the sh-based configuration management tool, rest[0], submitted several days ago.
While Guix is similar to Nix in its larger number of build dependencies, Guix's bootstrapping is, in theory at least, elegant in comparison.
Yes, it is more advanced, but if you just think of the Nix configuration language as an advanced JSON, that goes a long way toward making it feel a lot less awkward IMO. Attrsets are basically JSON objects, and your final configuration is basically a huge, merged attrset.
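You can make that analogy literal by asking Nix to render an attrset as JSON; a small sketch (the option names are arbitrary examples):

```sh
# dotted attribute paths become nested objects, just like JSON
nix-instantiate --eval --strict --json \
  -E '{ services.nginx.enable = true; boot.loader.timeout = 5; }'
# => {"boot":{"loader":{"timeout":5}},"services":{"nginx":{"enable":true}}}
```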
It's about what I'm thinking of doing with my Odroid running from a microSD card: dd the microSD to another one and swap it in when it inevitably fails, sooner rather than later. The only problem is that I don't want to pull the card out and copy it periodically or after an apt update. I'm storing configs in git, but restoring them is a tedious task. Maybe Pets can help me. Or rsnapshot, as on my laptop.
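The cloning itself is a one-liner, something like the following; the device names are examples only, so verify them with lsblk first:

```sh
# clone the running card onto a spare card in a USB reader
sudo dd if=/dev/mmcblk0 of=/dev/sda bs=4M conv=fsync status=progress
```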
If people were good about doing it by hand, keeping track of what was done to the box, and ensuring that changes only happen via the documented pipeline, then we wouldn't have a cottage industry of configuration management tools. Maybe you are an outlier and able to keep good notes, but I think it's obvious that the general population is not diligent enough.
Well, I use Nix, so it's tracked in git and I can mostly rely on data-only backups then. It makes it so much easier to have multiple laptops (e.g. I have a "completely personal" laptop and a "contract work" laptop, which have a lot of overlap but are not the same).
> It makes it so much easier to have multiple laptops
You're getting away from having a pet then.
And once you consider that you might have a third laptop for something work related and be able to duplicate the base configuration then you no longer have pets and you just need configuration management.
That isn't even the common use case of most IT professionals, much less most laptop users.
Well, I have 25 years of experience with configuration management, including over 10 years leading the development of one of the major tools in the space, and I don't waste time doing configuration management on my laptop; I just back it up. You can go ahead and do configuration management of your laptop if that gives you enjoyment, but it is entirely over the top to say that is an "unhealthy attitude".
Everyone can go and waste their time in whichever way they like.
Some people like gardening, although they will never get to sell produce at the vegetable market. That doesn't mean gardening is bad; it just means they enjoy it and enjoy wasting their time gardening. Other people just go and buy vegetables at the supermarket that were grown by commercial farms.
Same with doing configuration management on your laptop: if you really enjoy it, go for it. For most people it's a complete waste of time and a useless activity compared to the alternative, which is changing the few files that need changing by hand and backing up the machine to solve the "lost my work" problem.
I get what you're saying and agree when it comes to setting up a machine, in that the end result will be basically the same.
For gardening, though, I can guarantee you that homegrown fruits and vegetables can be far superior to anything you can find at the supermarket, so I would argue the end result is not the same.
Maybe a better example would be using hand tools versus power tools -- if one takes the expense of power tools out of the equation.
I don't think I've done anything to my laptop that I can't google in a minute or two at the most. I don't really need that visibility. If I need to know something very specific, I could always drop it into a note to keep.
The visibility of the changes isn't a virtue in itself; it is only when you do something with the changes that they become useful.
Spending hours documenting my build system for my pet laptop to save myself 60 seconds a few times a year isn't a good payoff for me. If the payoff is that you also learn configuration management along the way, then that's fine.
(I also don't quite understand how you're testing and validating your build system for your laptop without periodically wiping it and reprovisioning it -- and once you start considering things like that I definitely don't have enough time for all that)
Each system has a text file where I note down the steps I took to install and configure something new, and also when doing major changes. The steps for less frequent routine jobs (like major OS updates) get entries too. On one hand it is a kind of log book of what changed on the system, and it is also a wonderful source for when you need to do the same thing the next time.
give me this for mac os please. my home-rolled dotfiles repo is ok, but i'd really love some config management system that's not ansible that i could use to bootstrap my workstations.
i used to write a lot of chef and generally like ruby, but i can't make heads or tails of whatever Progress Chef is these days. mostly it seems like chef as an oss tool is dead.
anybody got something else they like to configure their mac os systems with?
Ansible is a little heavyweight for what you want, but if you wrap it in a script and only use the built-in modules, it's probably got everything you need.
This is how I set up my Mac as well; just a local connection. It sets up an out-of-the-box Mac in about 15 minutes, and I can keep my two Macs' configs in perfect sync: https://github.com/geerlingguy/mac-dev-playbook
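The local-connection trick is just this (main.yml is a placeholder for whatever playbook you use):

```sh
# run a playbook against the machine itself, no SSH involved;
# the trailing comma turns 'localhost' into a one-host inventory
ansible-playbook -i 'localhost,' -c local main.yml
```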
1) Install homebrew (one-liner, easily googled)
2) Use brew to install chezmoi
3) chezmoi pulls my dotfiles, including my Brewfile
4) Say 'brew bundle' and 90-95% of the software I need is installed
Then I need to set up settings sync in VSCode and Alfred and I'm pretty much done. What few things are missing I can install manually, or update the Brewfile to match. (The whole bootstrap is sketched below.)
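In shell terms, the bootstrap above is roughly this; the dotfiles repo URL is a placeholder:

```sh
# 1) official Homebrew installer one-liner
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# 2) + 3) install chezmoi and pull the dotfiles (including the Brewfile)
brew install chezmoi
chezmoi init --apply https://github.com/yourname/dotfiles.git
# 4) install everything listed in the Brewfile
brew bundle --file="$HOME/Brewfile"
```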
yeah, my home rolled stuff does the same with the Brewfile. chezmoi looks great though—i have a bit of cross-platform (mac os and linux) stuff happening too, so this looks really great to make those bits better. thanks!
Ansible can be used on the local machine, but it is designed for multiple remote machines. For example, if you want to copy a config file to the target, you must use tools like `scp` or do some hacking. And it's much slower than a shell script, and the output on stdout is ugly (because it's designed to run against 100 machines at a time).
Ansible also has a steep learning curve, and Red Hat did not prepare a good beginner's manual for it. Searching the web, you find only the experience of people who have used it for years, each with their own way of writing it. There are no best practices for getting started easily.
Except it is bloated and slow, and primarily maintained only by Red Hat, where bugs go unfixed for extended periods of time. Its loop and register mechanisms are extremely slow, taking 30 seconds to delete a group of files and folders that takes only a second in Bash.
These are indeed the two or three things I miss about the Chef-based infrastructure we had before. Doing 200 - 300 things on a system with Chef takes 2 minutes in a Chef run, either at the next full half hour plus splay, or when forced. With Ansible, the very same system at times takes 15 - 20 minutes. And Mitogen is the kind of thing that brings the Ansible run back down to 2 - 3 minutes - acceptable, even enjoyable levels - but it falls apart if the Python installation varies across many hosts, or when connecting to many systems at once (though that might be our firewall), or for other reasons if it feels like it.
And Ansible filters are just something else. I get how to use them by now, but compared to a simple Ruby select + map... yeah. In most cases, once we need two of the complex filters, we just introduce a custom one in pure Python, because that saves sanity points.
Very true. I really wish Terraform had attempted to solve automation at the scale of Ansible, or that another organization had committed to a replacement in Go or Rust. Unfortunately, Ansible is still the best we have currently.
I use SaltStack, and with all its warts I think it works just fine for environments with "pets". The idea is of course that, in theory, you should be able to re-deploy any system as it's nothing but a collection of "states", but it works just as well for patching, maintaining, and upgrading existing systems that never move.
I don't have much experience with Ansible or the other popular configuration management alternatives, but I don't imagine they are much different in this respect.
"Cattle, not pets" is not a great analogy in the first place. Maybe I only talk to smaller farmers, but they definitely care if a cow dies. Maybe not as much as if the dog dies, but it's a sign something is seriously wrong with their operation.
Analogies aren’t meant to be literally equal. They’re just a convenient way of expressing a complex problem in a simpler way, which the cattle vs pets analogy does well.
You just said it: they care if a cow dies because something is wrong with their operation, not because they care about that particular animal per se (usually). That's the exact opposite of why someone cares when their dog dies.
I use etckeeper for this, along with a cron entry that puts the currently installed set of packages into a file (/etc/packages.installed). Thus rebuilding becomes: base install, pull /etc, reinstall packages, re-checkout /etc.
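A sketch of that setup, assuming a Debian-family system (the file path is the one from the comment):

```sh
# cron job: snapshot the installed-package set alongside the etckeeper repo
dpkg --get-selections > /etc/packages.installed

# on rebuild: base install, restore /etc from git, then re-install the same set
dpkg --set-selections < /etc/packages.installed
apt-get -y dselect-upgrade
```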
The alternative you provide is still speciesist to a degree. I like the suggestion in a sibling comment to this one which avoids referring to any animals, though.
Made sense to me until you said 'containers', which I immediately read as container plants (approximately always gardened, never farmed) instead of in the runtime sense.
Appreciate your feedback! I can totally see how you would read it that way. Maybe "servers should be farmed, not gardened" would be less ambiguous. Server farms have been a thing for a long time after all.
Thank you for the suggestion of vegetables. It makes sense to me as a native speaker. If I had to clarify with folks, I'd open with the idea of server "farms" (Microsoft IIS) and then talk about "gardens" as custom setups.
I've used masterless Salt for managing pet hosts, and the experience was pretty good (though I had already used Salt on a 30-ish node cluster). However, these days it would be the only reason I'd have Python installed on some of them. Avoiding that dependency management would definitely be a selling point.
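Masterless mode is pleasantly small; roughly:

```sh
# tell the minion to use local state files instead of a master, then apply
echo 'file_client: local' | sudo tee -a /etc/salt/minion
sudo salt-call --local state.apply
```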