So a couple of things. First off, this is using AWS for two important services. That is probably a no go for a whole bunch of people right away.
But where it really lost me is that it is "yet another thing with a custom installer" or YATWACI (tm). Let me say it again: if you want wide adoption of your software, get it $@&!ing packaged for popular OS's. It is not hard, and is way nicer than "well first, create a virtualenv..."
I doubt wide adoption of an internal tool is at the top of the Netflix devs' priority list. It solves an internal need, and it doesn't hurt to throw some code over the wall, so why not toss it on GitHub and see if you can get some free labor.
FPM-created packages suck. They often have incorrectly formatted metadata, most of the time they don't encode any dependencies, and they often ship crappily written initscripts (less of a problem today, with systemd everywhere) and similar infrastructural things.
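To make the dependency point concrete (a sketch; the package name and paths are made up), fpm will happily build a package with no dependency metadata at all unless you pass every dependency by hand:

    # fpm encodes no dependencies unless each one is given explicitly with -d
    fpm -s dir -t deb -n myapp -v 1.0.0 \
        -d 'python (>= 2.7)' -d 'openssl' \
        --prefix /opt/myapp ./build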
All that, while writing an RPM spec or doing proper(ish) Debian packaging is quite easy.
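For comparison, a minimal RPM spec is roughly this (a sketch with made-up names; the Requires: line is exactly the dependency metadata fpm tends to omit):

    Name:           myapp
    Version:        1.0.0
    Release:        1%{?dist}
    Summary:        Example service
    License:        MIT
    Requires:       python >= 2.7

    %description
    Example service, packaged the boring, correct way.

    %files
    /opt/myapp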
> (less of a problem today, with systemd everywhere)
Systemd is far from everywhere. Ubuntu 14.04 doesn't use it and is supported until 2019. Debian Wheezy is supported until 2018. RHEL 6 has support through to 2021.
Overall this means it'll start to be reasonable to presume systemd sometime around 2020, and be realistically almost everywhere from say 2025.
Software changes take 5-10 years to roll out everywhere. This is such a core change that it'll be on the longer side.
Even though I agree with you here (I still use Wheezy, personally and professionally), most projects don't target releases that old; they build mainly for the modern mainstream (Debian/Ubuntu and Red Hat/CentOS), which means systemd.
And that still doesn't change the fact that most programmers produce shitty packages, if any, so every time I use somebody's software I need to package it myself to have it properly built.
And for each and every package/project I would need to convince the maintainer that he doesn't understand what he's doing, that he's doing it wrong, and that he should learn the correct method? Because that's what it all boils down to (barring a more polite way of stating the fact).
Getting something correctly packaged for popular OS's is not as easy as it may seem. Even Torvalds famously gripes about it. Luckily, there's no need for that here since it can be distributed as a Python package. The project already uses setuptools (in setup.py) so it's most of the way there.
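For what it's worth, with the setup.py already in place, the whole "installer" could be roughly this (assuming a standard setuptools layout; pip pulls in the declared dependencies):

    # from a checkout of the repo
    pip install .

    # or straight from GitHub
    pip install git+https://github.com/Netflix/bless.git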
The biggest problem with that is CADT among the toolkit library devs, which, combined with rigid dependencies in distro package managers, leads to sprawling permutations.
Besides the usual public/private key system, OpenSSH supports a less-used system called user certificates.
User certificates achieve the same purpose as your normal key, but instead of pre-installing your public key on the server, you present a certificate during authentication and the server checks that it's signed by a trusted certificate authority (CA) - it's more or less a PKI similar to the one used for HTTPS, etc.
One of the key advantages of this approach is that the CA can enforce limits on the key, such as validity periods, disabling SSH features like port forwarding, binding a certificate to only run a particular command, or only allowing its use from a specific IP.
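As a concrete sketch (key and file names are made up), this is what signing with those restrictions looks like using plain ssh-keygen, plus the one line the server needs to trust the CA:

    # sign alice's key: valid 1 hour, forced command, source IP pinned,
    # all other features (port/agent forwarding, pty, ...) stripped
    ssh-keygen -s ca_key -I alice@example.com -n alice \
        -V +1h \
        -O clear \
        -O force-command="/usr/bin/deploy" \
        -O source-address="203.0.113.10/32" \
        alice_key.pub            # writes alice_key-cert.pub

    # on the server (sshd_config): trust certs signed by this CA
    TrustedUserCAKeys /etc/ssh/ca.pub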
BLESS appears to be a piece of infrastructure for autonomously signing these certificates - on its own that gives you benefits like a proper audit log, but it seems that its real purpose is to enable an SSH bastion host[0] to generate ephemeral keys for the servers it accesses.
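You can inspect everything baked into such a certificate with ssh-keygen itself; the key ID also shows up in the target server's auth log when the cert is used, which is where the audit trail comes from (output abridged, values hypothetical):

    ssh-keygen -L -f alice_key-cert.pub
    #   Type: ssh-rsa-cert-v01@openssh.com user certificate
    #   Key ID: "alice@example.com"
    #   Valid: from 2016-06-09T10:00:00 to 2016-06-09T11:00:00
    #   Principals: alice
    #   Critical Options: force-command ... source-address ...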
You'd use this if you ran your environments as immutable hosts: you can't add users' public keys to the hosts when they change/join the team, so you let them log into the bastion (which presumably does get updated with new public keys) & let it create the certificate to connect to the target host (and also immediately use it & proxy your connection to it).
I think you could compromise on immutability for the SSH CA key file, or else every time you rolled keys you'd have to reprovision your environments.
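A hedged sketch of what that flow might look like from the bastion (BLESS is really invoked as an AWS Lambda; the get-bless-cert helper here is hypothetical, just standing in for that call):

    # on the bastion: ask BLESS for a short-lived cert for this key,
    # then hop to the target host with it
    get-bless-cert ~/.ssh/id_rsa.pub > ~/.ssh/id_rsa-cert.pub   # hypothetical helper
    ssh -i ~/.ssh/id_rsa \
        -o CertificateFile=~/.ssh/id_rsa-cert.pub \
        target-host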
While not completely related, can anybody suggest a good, clean way of implementing centralised login under Linux?
With yp/NIS being out of date (and not considered secure?), most things seem to point to using Kerberos for auth. But how do people then go about syncing passwd, or some other method of keeping all user accounts consistent across all machines?
Then what about files - is NFS still the preferred method of sharing home directories?