I have the same problem. Docker as a means of making deployable, system-imaged tarballs super-easily? Great! And they'll run almost anywhere given the aggressive portability emphasis of Docker-the-product? Even better! Networking, volume mounts, and quotas included? Holy shit, this is awesome!
...now tell me about how all this stuff gets configured.
...shell scripts, you say? &&-spliced because lots of infrastructure is affected by a layer limit? And everything's committed, so you can't hook parent containers' "$package_manager upgrade" phases? So everyone is running bunches of time-consuming (or superstitious/witchcraft) commands multiple times throughout the build hierarchy, purely as a precaution, if you don't control every layer? What the fuck?!
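For anyone who hasn't had the pleasure, this is the pattern I mean. A minimal sketch (the parent image and package names are placeholders):

    # Everything &&-chained into a single RUN to keep the layer count
    # down; the parent's committed layers sit untouchably above you.
    FROM some-parent-image
    RUN apt-get update && \
        apt-get upgrade -y && \
        apt-get install -y --no-install-recommends curl ca-certificates && \
        rm -rf /var/lib/apt/lists/*

And if the parent image ran its own upgrade weeks ago, you get to repeat it here anyway, because you can't reach into its layers.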
Seriously. Docker is a great product, period. But their community totally dropped the ball on provisioning. Being able to bail back out to shell commands is an important ability, but that being the default for conventional, complex deploys is batshit insane. That's what Puppet/Chef/Ansible/Packer/pick-your-favorite-provisioning-tool were designed to solve.
These aren't specialized, high-learning-curve "old-school sysadmin-club members only" technologies. They're easy, accessible, and save you from short-term (quicker, more predictable initial provisioning), medium-term (updates to low-level parts of your infrastructure), and long-term (tracking security-related dependencies) headaches. Even if people don't use one of these tools for its portability benefits (because they're on Docker, so fuck portability... until it manifests as a random-container-linux compatibility issue), I'm baffled as to why they don't pick them up for the maintainability benefits alone. Anyone who has had to deploy more than a handful of low-level package updates in a complex containerized deploy has to have asked "isn't there a better way?!" at least twice.
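For contrast, here's roughly the declarative equivalent in Ansible, as a sketch with a hypothetical host group and illustrative package names:

    # One idempotent play: re-run it when a CVE drops and only the
    # things that actually changed get touched.
    - name: Baseline provisioning
      hosts: app_servers
      become: true
      tasks:
        - name: Upgrade all packages
          apt:
            upgrade: dist
            update_cache: true
        - name: Install runtime dependencies
          apt:
            name: [curl, ca-certificates]
            state: present

Same intent as the Dockerfile above, except it's re-runnable and diffable, and it doesn't care how many layers deep in someone else's build hierarchy it lands.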