>Before Docker there were a lot of different solutions for software developers to package up their web applications to run on a server.
There are basically two relevant package managers. And say what you will about systemd, service units are easy to write.
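For instance, a bare-bones unit for a hypothetical in-house service (the name, user, flag, and paths here are all made up) is only a handful of lines:

    # sketch: install and start a unit for a hypothetical "myapp" binary
    cat > /etc/systemd/system/myapp.service <<'EOF'
    [Unit]
    Description=In-house web app
    Wants=network-online.target
    After=network-online.target

    [Service]
    User=myapp
    ExecStart=/usr/local/bin/myapp --port 8080
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now myapp.service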
It's weird to me that the tooling for building .deb packages and hosting them in a private Apt repository is so crusty and esoteric. Structurally these things "should" be trivial compared to docker registries, k8s, etc. but they aren't.
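For concreteness, the bare-minimum "flat" repo is only a few commands once you know them (hostnames and paths made up here, GPG signing skipped entirely, which is exactly the part that gets esoteric):

    # on the build host (dpkg-scanpackages comes from the dpkg-dev package)
    cd /srv/debs
    dpkg-scanpackages --multiversion . > Packages
    gzip -kf Packages
    # serve /srv/debs over HTTP with any web server

    # on each client, point apt at the flat repo
    echo 'deb [trusted=yes] http://pkgs.example.internal/debs ./' > /etc/apt/sources.list.d/internal.list
    apt-get update && apt-get install myapp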
.rpm and .deb are geared more towards distributions' needs. Distributions want to avoid multiplying the number of components for maintenance and security reasons. Bundling dependencies with apps is forbidden in most distribution policies for these reasons, and the tooling (debhelpers, rpm macros) actively discourages it.
It's great for distributions, but not so great for custom development where dependencies can be out of date, bleeding edge, or a mix of the two. For these cases, a bundling approach is often preferable, and Docker provides a simple-to-understand and universal way to achieve that.
That's for the packaging part.
Then you have the two other parts: publishing and deployment.
For publishing, Docker was created from the get-go with a registry, which makes things relatively easy to use and well integrated. By contrast, for rpm and deb, even if something analogous exists (aptly, pulp, artifactory...), it's much more a collection of tools created over time that work on top of one another, giving a less smooth experience.
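By way of contrast, the entire publish step on the Docker side is roughly this (registry hostname and tag invented for the example):

    docker build -t registry.example.internal/myapp:1.2.3 .
    docker push registry.example.internal/myapp:1.2.3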
And then you have the deployment part, and here, with traditional package managers, it is difficult to delegate some installs (typically, the custom apps developed in-house) to the developers without opening up control over the rest of the system. With Kubernetes, developers gained this autonomy of deployment for the pieces of software under their responsibility whilst still maintaining separation of concerns.
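As a sketch of that delegation (namespace and group names invented), the built-in RBAC is enough to scope a team to its own namespace:

    kubectl create namespace myapp
    # members of "app-team" can now deploy into "myapp" but touch nothing else
    kubectl create rolebinding myapp-devs --clusterrole=edit --group=app-team --namespace=myapp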
Docker and Kubernetes enabled cleaner boundaries, more in line with the realities of how things are operated for most mid to large scale services.
Right, the bias towards distro needs is why packaging is so hard to do internally; I'm just surprised at how little effort has gone into adapting it.
You need some system mediating between people doing deployments and actual root access in both cases. The "docker" command is just as privileged as "apt-get install." I have always had to go through some kind of API or web UI, even in Docker environments.
You can always simplify your IT and require everyone to use only a small subset of Linux images preapproved by your security team. And you can make those only deb- or rpm-based Linux distributions.
The only problem with this Linux-based packaging for deployments is Mac users and their dev environments. Linux users are usually fine, but there has always had to be some Docker-like setup for Mac users.
If we could say that our servers run on Linux and all users run on some Linux (WSL for Windows users), then deployments could have been simple and reproducible: rpm-based deployments for the code, and rpm packages containing the systemd configuration.
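As a rough sketch of what that could look like, assuming a hypothetical "myapp" binary and unit file already produced by CI (a real spec would also add %post/%preun scriptlets to enable and restart the service):

    # sketch: package a prebuilt binary plus its unit file into an rpm (all names invented)
    cat > ~/rpmbuild/SPECS/myapp.spec <<'EOF'
    # no debuginfo subpackage, since the binary is prebuilt
    %global debug_package %{nil}

    Name:           myapp
    Version:        1.0
    Release:        1
    Summary:        In-house web app
    License:        Proprietary
    Source0:        myapp
    Source1:        myapp.service

    %description
    In-house web app, shipped with its systemd unit.

    %install
    install -D -m 0755 %{SOURCE0} %{buildroot}/usr/bin/myapp
    install -D -m 0644 %{SOURCE1} %{buildroot}/usr/lib/systemd/system/myapp.service

    %files
    /usr/bin/myapp
    /usr/lib/systemd/system/myapp.service
    EOF
    cp myapp myapp.service ~/rpmbuild/SOURCES/
    rpmbuild -bb ~/rpmbuild/SPECS/myapp.spec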
I'm guessing they meant to say package formats, in which case they'd be deb and rpm. Those are the only two that are really common in server deployments running Linux, I'd guess.
dnf is a frontend to rpm, snap is not common for server use-cases, nix is interesting but not common, dpkg is a tool for installing .deb.