k8s is popular because Docker solved a real problem and Compose didn’t move fast enough to solve the orchestration problem. It’s a second-order effect; the important thing is Docker’s popularity.
Before Docker there were a lot of different solutions for software developers to package up their web applications to run on a server. Docker kind of solved that problem: ops teams could theoretically take anything and run it on a server if it was packaged up inside of a Docker image.
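In its simplest form it really is just this (everything below is a made-up example app; any runtime could go in its place):

    # sketch of the "package anything" workflow; app.py is assumed to exist
    cat > Dockerfile <<'EOF'
    FROM python:3.11-slim
    COPY app.py /app/app.py
    CMD ["python", "/app/app.py"]
    EOF
    docker build -t myapp:1.0 .
    docker run -d -p 8080:8080 myapp:1.0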
When you give a mouse a cookie, it asks for a glass of milk.
Fast forward a bit and the people using Docker wanted a way to orchestrate several containers across a bunch of different machines. The big appeal of Docker is that everything could be described in a simple text file. k8s tried to continue that trend with a YAML file, but it turns out managing dependencies, software-defined networking, and how a cluster should behave in various states isn’t the greatest fit for that format.
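That trend looks roughly like this; a minimal Deployment where every name and the image are placeholders:

    # minimal Deployment manifest applied from stdin
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: myapp:1.0
            ports:
            - containerPort: 8080
    EOF

And this is still the simple case; the networking, storage, and policy objects that real clusters need don't compress this nicely.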
Fast forward even more into a world where everybody thinks they need k8s and simply cargo cult it for a simple Wordpress blog and you’ve got the perfect storm for resenting the complexity of k8s.
I do miss the days of ‘cap deploy’ for Rails apps.
> k8s is popular because Docker solved a real problem and Compose didn’t move fast enough to solve the orchestration problem. It’s a second-order effect; the important thing is Docker’s popularity.
I introduced K8s to our company back in 2016 for this exact reason. All I cared about was managing the applications in our data engineering servers, and Docker solved a real pain point. I chose K8s after looking at Docker Compose and Mesos because it was the best option at the time for what we needed.
K8s has grown more complex since then, and unfortunately, the overhead in managing it has gone up.
K8s can still be used in a limited way to provide simple container hosting, but it's easy to get lost and shoot yourself in the foot.
> Before Docker there were a lot of different solutions for software developers to package up their web applications to run on a server.
There are basically two relevant package managers. And say what you will about systemd, service units are easy to write.
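E.g. a unit for some in-house daemon is a dozen lines (paths and names here are placeholders):

    # install a minimal service unit and start it
    cat <<'EOF' | sudo tee /etc/systemd/system/myapp.service
    [Unit]
    Description=My in-house app
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl daemon-reload && sudo systemctl enable --now myapp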
It's weird to me that the tooling for building .deb packages and hosting them in a private Apt repository is so crusty and esoteric. Structurally these things "should" be trivial compared to docker registries, k8s, etc. but they aren't.
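The build itself is the easy part (made-up package below); it's the signing and repo-hosting machinery around it that's crusty:

    # skeleton layout, then build the .deb with dpkg-deb
    mkdir -p myapp_1.0-1/DEBIAN myapp_1.0-1/usr/local/bin
    cat > myapp_1.0-1/DEBIAN/control <<'EOF'
    Package: myapp
    Version: 1.0-1
    Architecture: amd64
    Maintainer: Ops Team <ops@example.com>
    Description: In-house application
    EOF
    cp myapp myapp_1.0-1/usr/local/bin/
    dpkg-deb --build myapp_1.0-1    # produces myapp_1.0-1.deb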
.rpm and .deb are geared more towards distributions' needs. Distributions want to avoid multiplying the number of components for maintenance and security reasons. Bundling dependencies with apps is forbidden in most distribution policies for these reasons, and the tooling (debhelpers, rpm macros) actively discourages it.
It's great for distributions, but not so great for custom development where dependencies can be out of date, bleeding edge, or a mix of the two. For those cases, a bundling approach is often preferable, and docker provides a simple, universal way to achieve it.
That's for the packaging part.
Then you have the 2 other parts: publishing and deployment.
For publishing, Docker was created from the get-go with a registry, which makes things relatively easy to use and well integrated. By contrast, for rpm and deb, even if something analogous exists (aptly, pulp, artifactory...), it's much more a set of tools created over time that work on top of one another, giving a less smooth experience.
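The docker side really is just two commands (registry host is a placeholder):

    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0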
And then you have the deployment part, and here, with traditional package managers, it's difficult to delegate some installs (typically, the custom apps developed in-house) to the developers without opening up control over the rest of the system. With Kubernetes, developers gained this autonomy of deployment for the pieces of software under their responsibility whilst still maintaining separation of concerns.
Docker and Kubernetes enabled cleaner boundaries, more in line with the realities of how things are operated for most mid to large scale services.
Right, the bias towards distro needs is why packaging is so hard to do internally; I'm just surprised at how little effort has gone into adapting it.
You need some system mediating between people doing deployments and actual root access in both cases. The "docker" command is just as privileged as "apt-get install." I have always been behind some kind of API or web UI even in docker environments.
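To make that concrete: anyone who can talk to the Docker socket is effectively root on the host. The classic demo:

    # mount the host filesystem and chroot into it -- instant root
    docker run --rm -it -v /:/host alpine chroot /host /bin/sh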
You can always simplify your IT and require everyone to use only a small subset of Linux images which were preapproved by your security team. And you can make those to be only deb or rpm based Linux distributions.
The only problem with this Linux-based packaging for deployments is Mac users and their dev environment. Linux users are usually fine, but there always had to be some Docker-like setup for Mac users.
If we could say that our servers run on Linux and all users run on some Linux (WSL for Windows users), then deployments could have been simple, reproducible rpm-based deployments for code, plus rpm packages containing systemd configuration.
I'm guessing they meant to say package formats, in which case they'd be deb and rpm. Those were the only two that are really common in server deployments running linux I'd guess.
dnf is a frontend to rpm, snap is not common for server use-cases, nix is interesting but not common, dpkg is a tool for installing .deb.
> everybody thinks they need k8s and simply cargo cult it for a simple Wordpress blog
docker _also_ has this problem though. there are probably 6 people in the world that need to run one program built with gcc 4.7.1 linked against libc 2.18 and another built with clang 7 and libstdc++ at the same time on the same machine.
and yes, docker "provides benefits" other than package/binary/library isolation, but it's _really_ not doing anything other than wrapping cgroups and namespacing from the kernel - something you don't need docker for (see https://github.com/p8952/bocker).
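to be concrete, the kernel primitives are right there in util-linux, no docker anywhere:

    # a shell in its own PID + mount namespace
    sudo unshare --pid --fork --mount-proc /bin/sh
    # inside, `ps` shows this shell as PID 1, isolated from the host's process table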
docker solved the wrong problem, and poorly, imo: the packaging of dependencies required to run an app.
and now we live in a world where there are a trillion instances of musl libc (of varying versions) deployed :)
sorry, this doesn't have much to do with k8s, i just really dislike docker, it seems.
I am a big fan of using namespaces via docker, in particular for development. If I want to test my backend component I can expose a single port and then hook it up to the database, redis, nginx etc. via docker networks. You don't need to worry about port clashes and it's easy to "factory reset".
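A sketch of that setup (image tags and the app name are arbitrary):

    # isolated dev stack on a private network; only the backend port is published
    docker network create devnet
    docker run -d --network devnet --name db -e POSTGRES_PASSWORD=dev postgres:16
    docker run -d --network devnet --name cache redis:7
    docker run -d --network devnet --name backend -p 8080:8080 myapp:1.0
    # "factory reset": throw everything away and start over
    docker rm -f backend cache db && docker network rm devnet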
In production this model is quite a good way to guarantee your internal components aren't directly exposed too.
that's sort of my point though - namespacing is a great feature that allows for more independent & isolated testing and execution, there is no doubt. docker provides none of it.
i would argue that relying on docker hiding public visibility of your internal components is akin to using a mobile phone as a door-stop - it'll probably work, but there are more appropriate (and auditable) tools for the job.
> docker _also_ has this problem though. there are probably 6 people in the world that need to run one program built with gcc 4.7.1 linked against libc 2.18 and another built with clang 7 and libstdc++ at the same time on the same machine.
You are supposed to keep only a single process inside one docker container. If you want two processes to be tightly coupled then use multi-container pods.
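A minimal sketch of that, with both images as made-up placeholders:

    # two tightly coupled processes sharing one pod (network and volumes)
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: coupled
    spec:
      containers:
      - name: app
        image: example/legacy-app:1.0
      - name: log-shipper
        image: example/log-shipper:2.0
    EOF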
Hit the nail on the head. How else could you, at the push of a button, get not just a running application but an entire coordinated system of services like you do with helm? And deploying a kubernetes cluster with kops is easy. I don't know why people hate on k8s so much. For the space I work in it's a godsend.
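For instance, the WordPress blog everyone jokes about really is two commands with the public Bitnami chart:

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install my-blog bitnami/wordpress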
Good points but I think it would be accurate to say that Docker solved a developer problem. But developers are only part of the story. Does Kubernetes solve the business' problem? The user's problem? The problems of sys admins, testers, and security people? In my experience it doesn't (though I wouldn't count my experience as definitive).
At my company we have had better success with micro-services on AWS Lambda. It has vastly less overhead than Kubernetes and it has made the tasks of the developers and non-developers easier. "Lock-in" is unavoidable in software. In our risk calculation, being locked into AWS is preferable to being locked into Kubernetes. YMMV.
> I do miss the days of ‘cap deploy’ for Rails apps.
Oh boy I do not miss them. Actually I'm still living them and I hope we can finally migrate away from Capistrano ASAP. Dynamic provisioning with autoscaling is a royal PITA with cap as it was never meant to be used on moving targets like dynamic instances.
> I do miss the days of ‘cap deploy’ for Rails apps.
Add operators, complicated deployment orchestration and more sophisticated infrastructure... It is hard to know if things are failing from a change I made or just because there are so many things changing all the time.