Kubernetes is deprecating Docker runtime support (github.com/kubernetes)
457 points by GordonS on Dec 2, 2020 | 175 comments



I think that the title of this is a bit misleading.

Kubernetes is removing the "dockershim", which is special in-process support the kubelet has for docker.

However, the kubelet still has the CRI (container runtime interface) to support arbitrary runtimes. containerd is currently supported via the CRI, as is every runtime except docker. Docker is being moved from having special-case support to being the same in terms of support as other runtimes.

Does that mean using docker as your runtime is deprecated? I don't think so. You just have to use docker via a CRI layer instead of via the in-process dockershim layer. Since there hasn't been a need until now for an out-of-process cri->docker-api translation layer, I don't think there's a well-supported one yet, but now that they've announced the intent to remove dockershim, I have no doubt that there will be a supported cri -> docker layer before long.

Maybe the docker project will add built-in support for exposing a CRI interface and save us an extra daemon (as containerd did).

In short, the title's misleading from my understanding. The Kubelet is removing the special-cased dockershim, but k8s distributions that ship with docker as the runtime should be able to run a cri->docker layer to retain docker support.

For more info on this, see the discussion on this pr: https://github.com/kubernetes/kubernetes/pull/94624


Also, people probably don't understand the difference between the container runtime and the container build environment. You can still build your container with Docker, and it can run in a different environment.


You can, but buildah exists.


What's the advantage of using buildah?


It's docker without the dockerfile which, from what I can tell, is the biggest feature of docker most engineers like.

I've personally switched to bazel for building most of my containers but that's a far departure from what the majority of people are doing I suspect.


My company uses bazel to build containers and the distroless images that Google provides; it's a really nice setup IMO


I love the experience and performance. If more adoption happens it'll just get better as more languages are supported.


Can you point to any sources using bazel for this?



Is containerd CRI compliant? Kubelet still interacts with cri-containerd, which in turn calls containerd. Isn't cri-containerd the dockershim of containerd?

Maybe I'm mixing up things, please correct me wherever needed.


containerd can serve CRI requests itself. This has been the case since the containerd v1.1.0 release[0], which included the cri "plugin" as an in-process part of the containerd binary. For a while, to keep up the plugin idea, it was in a separate github repo too, but these days it's in the main containerd repo directly[1].

[0]: https://github.com/containerd/containerd/releases/tag/v1.1.0

[1]: https://github.com/containerd/containerd/tree/9561d9389d/pkg...
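
For anyone wondering what that looks like in practice, pointing the kubelet at containerd's built-in CRI endpoint comes down to roughly two flags (a sketch, not a full config; the socket path below is the common default and may differ on your distro):

  kubelet --container-runtime=remote \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
    ...

Most distributions set the equivalent in the kubelet's config/flags file rather than invoking it by hand.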


Thanks for explaining.

I suspect this will nuke a huge amount of tutorials out there though & frustrate newbies.


This is deep in the internals of Kubernetes, nothing about `docker build/push` or `kubectl apply` will change.


This changes nothing for 99.9% of Kubernetes users.


For what it's worth, there are a few cases where docker vs some other runtime does make a difference.

One difference is that if you 'docker build' or 'docker load' an image on a node, with docker as a runtime a pod could be started using that image, but if containerd is the runtime it would have had to be 'ctr image import'ed instead.

I know that minikube, at some point, suggested people use 'DOCKER_HOST=..' + 'docker build' to make images available to that minikube node, which would no longer work after this change.
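
To make the difference concrete, the rough containerd equivalent of the old "build on the node" workflow is an export/import dance (a sketch; the image name is made up, and k8s.io is the namespace containerd's CRI plugin uses by default):

  docker build -t example.com/myapp:dev .
  docker save example.com/myapp:dev | ctr -n k8s.io images import -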

It would be nice if k8s had its own container image store so you could 'kubectl image load' in a runtime agnostic way, but unfortunately managing the fetching of container images has ended up as something the runtime does, and k8s has no awareness of above the runtime.

Oh, and for production clusters, a distribution moving from dockerd to containerd could break a few things, like random gunk in the ecosystem that tries to find kubernetes pods by querying the docker api and checking labels. I think there's some monitoring and logging tools that do that.

If distributions move from docker to docker-via-a-cri-shim, that won't break either of those use cases of course.


(Super naive layman question, I don't work in this space.)

What does this mean? I thought that Kubernetes manages Docker containers which makes the title kind of confusing.


"docker containers" are more accurately called OCI containers, and have been standardized so that various container runtimes can use exactly the same container images.

Kubernetes can use the docker runtime (dockerd) to run OCI containers, but Docker Inc strongly discourages the docker runtime being used directly for infrastructure. The docker runtime imposes a lot of opinionated defaults on containers that are often unwanted by infrastructure projects. (For example, docker will automatically edit the /etc/hosts file in containers, in a way that makes little sense for Kubernetes, so Kubernetes has to implement a silly workaround to avoid this.)

Instead, Docker Inc recommends using containerd as the runtime. containerd implements downloading, unpacking, creating CRI manifests, and running the resulting containers, all without implementing docker's opinionated defaults on top. Docker itself uses containerd to actually run the containers, and plans to remove its downloading code in favor of using the one from containerd too.

The only advantage to using docker proper for infrastructure projects is that you can use the docker cli for introspection and debugging. Kubernetes has created its own very similar cli that works with all supported backend runtimes, and also can include relevant Kubernetes specific information in outputs.


> But Docker Inc strongly discourages the docker runtime being used directly for infrastructure.

Is there a list of these defaults or other downsides to using docker instead of containerd?


Docker did a nice blog post on this a few years ago. Docker uses containerd for running containers. It just does things on top of it that you don't need with Kubernetes. There's a nice diagram in the post, too.

https://www.docker.com/blog/what-is-containerd-runtime/


I'm not sure of any such list, but using containerd directly is faster, less likely to break k8s when docker adds new features, etc.

Much of this stems from the flak that infrastructure people gave docker when they made swarm part of the engine. But it comes down to more than that. Docker has its own take on networking, on volumes, on service discovery, etc. If you are trying to use docker as a component of your own product, at least some of these are likely things you want to implement differently. And the same may well be true of any new features docker wants to add in the future. At which point one must ask why bother using docker directly?

containerd was quite literally created when docker decided to extract the parts of docker that projects like kubernetes might want to use. It has evolved heavily since then, but that really does capture the level at which it sits. This leaves dockerd in charge of things like swarm, docker's view on how networking should work, docker's take on service discovery, docker's view on how shared storage should work, building containers, etc.


Half-OT: What are alternative runtimes and why would you use them?


The main alternative runtime that I know of (at the level of containerd) is CRI-O. These runtimes are at the level of fetching images, preparing manifests etc. I'm not really sure what benefits they provide. CRI-O is intended to be kubernetes specific, and thus lacks any features that containerd would have that k8s does not need. This in theory ought to mean smaller, lighter, more easily auditable code.

There is another lower level of runtime, the OCI runtime, of which the main implementation is runc. Alternatives have interesting attributes, like `runv` running containers in VMs with their own kernel to get even greater isolation, `runhcs` which is the OCI runtime for running windows containers, etc. Most if not all of the higher level runtimes allow switching out the OCI runtime, but in general sticking with the default of `runc` is fine.


Yeah, the terminology around "runtime" is confusing and is used inconsistently. As you say, the actual runtime is something like runc which CRI-O (and I believe containerd) normally uses. CRI-O, as the name suggests, is an implementation of the Kubernetes Container Runtime Interface--which should work with any OCI-compliant runtime.


In the OpenShift world we use CRI-O and it has been awesome for us. I've never actually had it be a problem. Occasionally have to SSH into a node and inspect with crictl to see what's going on but it's almost always PEBKAC that points at CRI-O when it's not CRI-O's fault. I'd definitely recommend looking at it.


How has your experience with crictl been?

Does it do everything that the docker cli does? Build, pull, etc.?


Only docker, buildkit, podman, buildah, and some non-Dockerfile-based tools are able to build; CRI-O and containerd do not have build functionality. crictl does have pretty much everything else you expect, like pull, ps, run, exec, attach, rm, rmi, etc.

If you need builds, I'd suggest either running dockerd alongside a containerd kubelet, or using buildkit.
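
For a feel of the overlap, the day-to-day introspection commands map over pretty directly (a sketch; the IDs are placeholders):

  crictl pods                    # list pod sandboxes on the node
  crictl ps                      # list running containers
  crictl images                  # list images the runtime has pulled
  crictl logs <container-id>
  crictl exec -it <container-id> sh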


That's super valuable. Thanks! I did not know about buildkit (and buildctl), but now see it.

Very valuable. Thanks!!!


One such alternative runtime with tangible differences from containerd/cri-o is Kata[1], which actually runs OCI images as microVMs. This has some benefits if the applications you're running are untrusted or need additional sandboxing, such as if you're running a PaaS on bare-metal k8s.

[1]: https://github.com/kata-containers/runtime
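
If you want to use a runtime like that selectively, Kubernetes exposes it via a RuntimeClass object that pods can opt into (a minimal sketch; the handler name depends on how the node's runtime is configured, and on older clusters the apiVersion is node.k8s.io/v1beta1):

  apiVersion: node.k8s.io/v1
  kind: RuntimeClass
  metadata:
    name: kata
  handler: kata

A pod then sets runtimeClassName: kata in its spec to run under that runtime instead of the default.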


I did some research, and it also seems that there is a way for containerd to manage Firecracker microvms.


For a runtime in your Kubernetes cluster there are containerd and cri-o. These are good for Docker / Open Container Initiative images.

There are others... some for non-Docker image support. There are people running other things than just Docker these days. They are more of a niche case.


A simple example would be mostly running Linux applications on Linux machines (= containers), but having a Windows application that needs to run in a Windows VM on a Linux machine (VM, or a container in a VM).


OCI container images and Docker images aren't the same.


it would be more accurate that they both conform to the OCI runtime spec, right?


Why do you spell containerd as "ContainerD"?

You wrote dockerd without caps.


dockerd is the literal name of a binary while containerd is the name of a project. As far as I can tell containerd stylizes its name in all lowercase but more than half the time I still see it written like a standard name, ContainerD, exactly like this.


Being nitpicky here, but the canonical representation of "containerd" is all lowercase, as in the logo.


Simply put, Docker includes a bunch of UX components that Kubernetes doesn't need. Kubernetes is currently relying on a shim to interact with the parts that it _does_ need. This change is to simplify the abstraction. You can still use docker to build images deployed via Kubernetes.

Here's an explanation I found helpful:

https://twitter.com/Dixie3Flatline/status/133418891372485017...


Former Docker employee here. We've been busy writing a way to allow you to build OCI images with your Kubernetes cluster using kubectl. This lets you get rid of `docker build` and replace it with `kubectl build`.

You can check out the project here: https://github.com/vmware-tanzu/buildkit-cli-for-kubectl


That is a really good idea! Does this just schedule a one-off pod, then have that do the build?


Not quite a one-off pod, but very close to that. It will automatically create a builder pod for you if you don't already have one, or you can specify one with whichever runtime that you want (containerd or docker). It uses buildkit to do the builds and has a syntax which is compatible with `docker build`.

There are also some pretty cool features. It supports building multi-arch images, so you can do things like create x86_64 and ARM images. It can also do build layer caching to a local registry for all of your builders, so it's possible to scale up your pod and then share each of the layers for really efficient builds.
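
For anyone curious, the invocation is meant to mirror docker build (a sketch based on the description above; the image name and registry are placeholders):

  kubectl build -t registry.example.com/myapp:latest .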


That’s pretty nice. One of the things I am curious about is how Kubernetes will deal with private “insecure” in-cluster registries (which are a major pain to set up TLS for when you’re doing edge deployments or stuff that is inherently offline).


It's a situation where 'Docker' has become synonymous with 'container'. But 'Docker' in this case refers to the runtime that Kubernetes uses to run container images on servers ('nodes'), where the UI/UX features of Docker (like its CLI, image building capabilities, etc.) are not needed.

Container images nowadays can be built by a variety of tools, and run by a variety of tools, with Docker likely being the most popular end-user tool with the most history and name recognition. Others like Podman/Buildah are differently-architected replacements.

As long as a container meets the open container specs, it can be built with whatever tool and run on whatever tool that also follows the specs.


This holds for OCI style containers, but not all container systems are of the Docker/OCI variety. For example LXD.


In basic terms... this is a technical detail that isn't going to impact the vast majority of Kubernetes users. Those who are concerned with running their workloads in Kubernetes, anyway.

The part of Kubernetes that runs containers has had a shim for docker along with an interface for runtimes to use. It's called the Container Runtime Interface (CRI). The docker shim that worked alongside CRI is being deprecated and now all runtimes (including Docker) will need to use the CRI interface.

These days there are numerous container runtimes one can use. containerd and cri-o are two of them. Container images built with Docker can be run with either of these without anyone noticing.


Sibling comments cover the details but to put it simply: there are two definitions of the word "Docker":

1. [common, informal] "An OCI container".

2. [pedantic, strictly accurate] "A set of tools for building & interacting with OCI containers".

This article is talking about the latter definition.


It's actually even more confusing than that. There's also Docker, Inc. the company and there used to be the Docker Enterprise product (although I believe newer versions are now Mirantis Enterprise which bought that part of the business).

Docker is pretty much a textbook example of why you probably shouldn't use the same word for a lot of different things.


Java :-)

Better yet, .Net.


Heh. I can't tell you, especially going back a few years, the number of people who claimed to hate Java with a passion because as far as they were concerned it was that security-shredding dialog box that would pop up demanding to be updated. (OK, there are probably other reasons to dislike Java as well but I agree it's a lot of different things.)


MS Teams


It will still run docker containers, they're just deprecating the Docker runtime, which is more of an implementation detail


Kubernetes will just use containerd directly, most end users will just continue to use docker on their laptop or whatever. Or you can use something else like podman, it's all OCI: https://opencontainers.org/


Kubernetes orchestrates containers, but Docker is just one way of running containers. It wraps all the underlying Linux into a nice set of easy-to-use commands. Kubernetes is deprecating interacting with the underlying Linux via the Docker wrapper.


Kubernetes manages many types of containers, Docker containers just happen to be the most popular (or at least I'd venture to guess). But Kubernetes for a while has supported a few container runtimes (: Here's some k8s docs on a few: https://kubernetes.io/docs/setup/production-environment/cont...


@mods, would appreciate if someone could change the title to "Kubernetes is deprecating Docker runtime support" (I accidentally missed the word "runtime" when submitting).


You can edit the title for two hours after submitting the article.


even better: deprecating non-essential Docker components (or something to that effect). Currently, this is clickbait.


> You can edit the title for two hours after submitting the article.

The submitter can. This kind of misses the point anyway. The title is misleading.


I am the submitter, I just didn't know I could edit the title after 7 years of using HN! Gone ahead and done it now.


I was replying to the submitter.


Your post is entirely clickbait. Docker runtime support doesn't really matter since most people already have moved onto other runtimes like containerd/runc.


Ah I see. You need a runtime in k8s that actually runs the containers that are in pods. So you can use Docker to run those containers, or containerd or whatever. Each of your k8s nodes has to have this program running to run the containers. So they don't want to support that first one.

Not a big deal. It's some backend stuff that's not interesting to people who use managed k8s. Cool cool.


Yep, NBD. OpenShift removed Docker a while ago and replaced it with CRI-O. 99% of people never noticed, and the ones that did just like to know how things work on the inside.


It's funny, and telling, how many commenters here are using K8s without really knowing how it works (and what this change therefore means). I'm in that group myself.

Is this a testament to, or an indictment of, how abstracted our systems have become?


If it works it's fine.

I took an Operating Systems class decades ago in school in which I wrote a toy OS, but at this point I couldn't tell you much about how operating systems really work, yet I deploy software to them every day. That is fine, it is the nature of computers, they are basically abstraction machines. And OSes are pretty mature and stable; I don't really ever need to debug the OS itself in order to deploy software to one, for the kind of software I write. (Others might need to know more).

But personally I still haven't figured out how to use K8, heh.


I use my car without knowing how it works. The weird thing about programming as a job is that if I had a company car, they'd be happy for me to call the mechanic to fix it, but if kubernetes ain't working then that's another job for the multi-hatted programmer. That sort of means, from a selfish point of view, it's better to pick one of the locked-in cloud technologies that you can't fix yourself (AWS or Azure has to!). But I suspect many do the opposite (I want k8s on my CV!)


Would you say you know the intricacies of how VMs work before using them to deploy apps?

There is nothing "un-abstract" about running applications on VMs or machines. We're just evolving the abstractions that we work with. Before it was VMs, then containers, and now containers + orchestrators. In the future it will be some other abstraction.

Every step of the way, we’ve made this transition for compelling reasons. And it will happen again.


VMs definitely are leaky abstractions. Less so as time goes on and the illusion is refined. Eg timekeeping used to be a problem on VMs. And how many people have spent dev time debugging mysterious freezes on EC2 only to discover their RDS or whatever had been on a bursting instance type?


Not having to care about implementation details seems like a positive thing to me. There's a reason RTFM is a meme and "read the fucking source code" is not.

Anybody claiming they need to know everything about their dependencies is being unrealistic.


The purpose of every piece of software is to be usable without knowing how it works.


I feel like “how something works” and “implementation details” aren’t synonymous and are really context dependent.

As a user you should know the different types of namespacing that affect containers without necessarily knowing that/how your runtime calls clone() to do it. And as a sysadmin you had better know how all the components fit together and their failure modes because you’re the one supporting them.

Different people have different views of any technology so someone’s necessary understanding as a user of managed k8s can be different than a sysadmin who is a user of k8s code itself.


Read about this over on Twitter: https://twitter.com/Dixie3Flatline/status/133418891372485017...

The only "official" notice about it so far seems to be in the linked changelog.


This was very useful. Thanks for sharing.

It seems that Docker images will still run fine on k8s. The main change is that they're moving away from the "Docker runtime", which is supposed to be installed on each of the nodes in your cluster.

More details about k8s container runtimes here: https://kubernetes.io/docs/setup/production-environment/cont...


Right. The simplest option is to use containerd as the runtime. Installing docker will also install containerd (because docker uses it internally), so nothing much needs to change except a configuration option. docker and k8s can run side by side sharing the same containerd instance, in case you need to do something like build containers inside your k8s cluster.

You lose out on things that require access to the docker daemon socket, but ideally any such software should be replaced with something that talks to the kubernetes API instead (the exception is building containers in-cluster; if you need that, run docker side by side with the kubelet, or use buildkit with containerd integration). You also lose the ability to interact with containers with the docker cli tool. Use crictl instead, which has most of the same commands, but also includes certain k8s-relevant information in its output tables.
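
For the side-by-side case, the rough shape (a sketch; assuming the default socket path) is a single containerd instance that both daemons talk to:

  dockerd --containerd=/run/containerd/containerd.sock
  # the kubelet points at the same socket via its
  # --container-runtime-endpoint flag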


Yep, they're not really "Docker" images. A while ago the image/container formats were standardized through the OCI.


There are several official responses now to help us understand this deprecation.

K8sContributors on Twitter: "#Kubernetes 1.20 introduces an important change to the kubelet - the deprecation of #Docker as a container runtime option. What does this mean and why is it happening? You can learn more in our blog post! https://t.co/lzPPzwXUNM" / Twitter - https://twitter.com/K8sContributors/status/13343017328309903...

Don't Panic: Kubernetes and Docker | Kubernetes - https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-...

Dockershim Deprecation FAQ | Kubernetes - https://kubernetes.io/blog/2020/12/02/dockershim-faq/


The twitter thread has way more details about the change, which is why I submitted it here: https://news.ycombinator.com/item?id=25279424

https://twitter.com/IanColdwater/status/1334149283449352200 also has some details


Can someone explain how logging is supposed to work after this change? I’m complete bloody lost.

Actually I’m nearly always lost with kubernetes. It’s either broken or changing.


Loosely, my understanding is that Kubernetes works like this:

You have a Pod definition, which is basically a gang of containers. In that Pod definition you have included one or more container image references.

You send the Pod definition to the API Server.

The API Server informs listeners for Pod updates that there is a new Pod definition. One of these listeners is the scheduler, which decides which Node should get the Pod. It creates an update for the Pod's "status", essentially annotating it with the name of the Node that should run the Pod.

Each Node has a "kubelet". These too subscribe to Pod definitions. When a change shows up saying "Pod 'foo' should run on Node 27", the kubelet in Node 27 perks up its ears.

The kubelet converts the Pod definition into descriptions of containers -- image reference, RAM limits, which disks to attach etc. It then turns to its container runtime through the "Container Runtime Interface" (CRI). In the early days this was a Docker daemon.

The container runtime now acts on the descriptions it got. Most notably, it will check to see if it has an image in its local cache; if it doesn't then it will try to pull that image from a registry.

Now: The CRI is distinct from the Docker daemon API. The CRI is abstracted because, since the Docker daemon days, other alternatives have emerged (and some have withered), such as rkt, podman and containerd.

This update says "we are not going to maintain the Docker daemon option for CRI". You can use containerd. From a Kubernetes end-user perspective, nothing should change. From an operator perspective, all that happens is that you have a smaller footprint with less attack surface.
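
If it helps to anchor the flow, the Pod definition at the top of that chain is just a small document like this (a minimal sketch; the names and image are arbitrary), which you hand to the API Server with kubectl apply -f:

  apiVersion: v1
  kind: Pod
  metadata:
    name: hello
  spec:
    containers:
    - name: web
      image: nginx:1.19

Everything after that -- scheduler, kubelet, CRI, container runtime -- is the machinery that turns those few lines into running software.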


I'm sure all this complexity makes sense for all sorts of reasons buried in the history of Kubernetes development at Google. "Things are the way they are because they got that way."

The fact that so many other orgs, many of which are startups or just small to medium sized tech companies, use a system this complex is ludicrous to me.


Hi! Small tech company that uses k8s here. Much of the complexity is irreducible and has to go somewhere, and it's much better to have it in a single, stateless, well-defined place that's also easy to introspect.

I've seen way too many Ansible nightmares grown out of deceptively "simple" mutable VM deployments.

k8s makes our life so much easier because it eliminates a whole bunch of other complexity. Easily reproducible development environments, workload scheduling, sane config management...


My favorite opposite of this is companies running k8s but using Ansible to build state on launch...


The practical change that's happening is that in future versions of Kubernetes, they're removing support for using a shim for telling the Docker daemon to run containers, and focusing on just using containerd (which Docker uses under the covers anyway).

It's kind of like if you had a shell script to launch programs, and it used to move the mouse to press icons, but now you've deprecated that and will only run programs directly.


Well, I might have designed it differently, but I wasn't there. For what it does, this architecture works well. More to the point: none of it is visible to end-users of Kubernetes. You send a Pod definition, some magic happens, pow! running software.


Except when it doesn't, at which point you're lost.


Well, like I said, I might have done it differently. But there's a consistent logic to how it works[0]. That carries a lot of water.

[0] most folks emphasise the control loop aspect, I think it's more helpful to point to blackboard / tuple-space systems as prior art.


> I think it's more helpful to point to blackboard / tuple-space systems as prior art

Ah, nevermind then, that gives me rest concerning the viability and transparency of it all.


I'm glad I could help.


It's a generic API for automated provisioning. It asks you to make some decisions up front (what do you want to do when an instance goes down or has a new version available), and lets you forgo making some (like which nodes your instance should run on).

It's not more complex. If you're small, the overhead is probably not worth it. If you're big enough, you manage the k8s control plane, and you don't have to manage your tenants infra.

Is a programming language too complex for hello world? Perhaps, but that's not all it does.


You don't need the complexity until you do, and then you're facing a rewrite into the complex system. Defining a deployment in YAML isn't really all that hard.


You pretty much nailed it. This is a super useful "elevator description" to give to people. Mind if I share it (with attribution)? Even better if you slap it into a blog post or something (except a tweet thread :-D), but HN is perfectly fine :-)


Thank you, I'm flattered. Please feel free to share.


Noob question:

As I understand, dockershim makes docker daemon cri compliant. But dockerd already uses containerd which is cri compliant. So, why can't kubelet directly interact with containerd APIs without dockershim?


The kubelet can talk to containerd's cri endpoint, yes, but there's one additional bit of complexity.

If someone wants to use kubelet + docker so that they can, for example, ssh into a node and type 'docker ps' to see containers, or have something else using the docker api see the containers the kubelet started, that won't work after re-pointing the kubelet from docker to containerd.

The difference here is namespacing[0], but not the linux-kernel-container-namespace, rather the containerd concept by the same name to allow "multi-tenancy" of a single containerd daemon.

In addition, I don't think you could have docker + cri run in the same containerd namespace since they end up using different networking and storage containerd plugins. I think that terminology is right.

So yeah, repointing the kubelet to containerd directly works fine, but it won't be the same thing as running docker containers.

[0]: https://github.com/containerd/containerd/blob/9561d938/docs/...
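
You can see that split directly with ctr (roughly; this assumes the default namespace names, "moby" for docker and "k8s.io" for the CRI plugin):

  ctr namespaces ls
  ctr -n moby containers ls      # containers started through the docker API
  ctr -n k8s.io containers ls    # containers started by the kubelet via CRI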


> So, why can't kubelet directly interact with containerd APIs without dockershim?

Each kubelet does its thing through the Container Runtime Interface (CRI), so in a sense it doesn't know what it's running on. If it used containerd's interfaces directly, it wouldn't be possible to substitute in a different option.

For example, there are emerging VM-based approaches like Firecracker and VMware "Project Pacific" (disclosure: I work at VMware).


The GP did ask about logging though specifically. One of Docker daemon's more interesting features is how much log enrichment it does. Does kubelet do the same thing out of the box? I know containerd itself does not, unfortunately.


Yes, I overlooked that. I am afraid I don't know, but since the Docker daemon now relies on the same code, I would expect that there's similar functionality at that level.


> I’m nearly always lost with kubernetes. It’s either broken or changing.

Glad I'm not the only one. I'm sure I'm not the smartest engineer/sysadmin in the world, but I'm also not the dumbest and I have never gotten an on-premises Kubernetes installation to work.

The way I manage containers is lxc and shell scripts. I understand it, and it works.


I work for Red Hat as an OpenShift Consultant, so I'm on the bleeding edge of change and am constantly pushing boundaries.

Don't tell the boss or the customers, but most of the time when release notes for a new version come out, I look at them and go "WTF, why do we need that, I better do some research." It's fast changing, complex as hell, and absolutely brutal. That said most things are there for a reason and once I dig in I usually see the need.

That said I do love it despite its warts. There's no doubt some Stockholm Syndrome at play here, but I love the API (which is pretty curlable btw, a mark of a great API IMHO) and the principles (declarative, everything's a YAML/JSON object in etcd, etc). I see it the same way I did C++ (which I also loved). It gives you great power which you can use to build an elegant, robust system, or you can create an unmaintainable, complex, monster of a nightmare. It's up to you.


"The full-time job of keeping up with Kubernetes" was published in February 2018[0] and things have only gotten faster since then.

[0] https://goteleport.com/blog/kubernetes-release-cycle/ , HN discussion: https://news.ycombinator.com/item?id=16285192


Getting Kubernetes itself running on bare metal ("running" as in "you have containers and can access them") is half a day of work.

What is deadly difficult is getting networking to work. Even a comparably "easy" thing with a couple of two-NIC machines (one external, internet-routable, one DMZ) cost me a fucking week.

What's even worse is when one has to obey corporate restrictions - for example, only having external interfaces on "loadbalancer" nodes:

- First of all, MetalLB only has one active Speaker node which means your bandwidth is limited to that node's uplink and you're wasting the resources of the other loadbalancers.

- Second, you can taint your nodes to only schedule the MetalLB speaker on your "loadbalancer" nodes via tolerations... but how the f..k do you convince MetalLB to change the speaker node once you apply that change?!

- Third, what do you do when you want to expose N services but only have one or two external IPs? DC/OS was way more flexible, you had one set of loadbalancers (haproxy) that did all the routing, and could run an entire cluster on four machines - two LBs, one master, one worker. There is no way to replicate this with Kubernetes. None.


Yes, it's the networking that never works for me either. I'm one guy, wear many hats, and don't have time to chase rabbits down holes. If I follow published instructions and it doesn't work, I pretty much stop there.


Give nomad, docker swarm, or lxd a shot.


I'm currently involved in an effort to rip out docker swarm at work because its overlay networks are shockingly unreliable. LXD looks interesting but https://linderud.dev/blog/packaging-lxd-for-arch-linux/ convinced me that it's another Canonical NIH special and probably best avoided (in particular, "only distributed for Ubuntu via snaps" means "forces auto-updates" which means "not going in my environment"). Need to try Nomad; I'm cautiously optimistic since the rest of HashiCorp's stuff is good.


Nomad 1.0 is the bomb. I've run Nomad since 0.7 and it was usable but pretty rough in those days. 1.0 is such a good, smooth thing. Super awesome even if you don't want to use Docker at all; you can just execute random stuff through exec/raw_exec, or straight Java. Heck, run it in a BSD homelab and use one of the jailshell executors.

It's not Kubernetes but it doesn't try to be at all.


Is this on 19.x swarm mode? I thought it got a lot more reliable with that release.


Welcome to modern software development, apparently.


Modern software development makes me wish I was born in the late 1940s.


You like punch-cards? Because that is how you get them!


If you were born in the late 40s, you'd probably have been exposed to some punch cards, but by the time you started work it'd be the 70s already. I'm sure there were still punch cards around at the time, but at least you had printing terminals, if not monitor-based terminals.

That said, depending on where you were working, things could also change fast. You could find yourself finding that a kernel system call had changed because someone patched it the evening before.


Yes. Plenty of time to think before doing.


About time.

Coincidentally, today I watched three presentations about burning Kubernetes clusters and all of them had Docker daemon issues in the mix. I’ve been using Docker for over five years myself and I’ve been using Kubernetes for almost two years now. The most pain I encountered was with Docker or its own ecosystem.

In the last two years it always had some weird racy situations where it damaged its IPAM or simply couldn’t start containers after a restart anymore. Also its IPv6 support is just a joke.

Sorry, I had to rant and I hope that this announcement will fuel the development of Docker alternatives even more.


> Docker support in the kubelet is now deprecated and will be removed in a future release. The kubelet uses a module called "dockershim" which implements CRI support for Docker and it has seen maintenance issues in the Kubernetes community. We encourage you to evaluate moving to a container runtime that is a full-fledged implementation of CRI (v1alpha1 or v1 compliant) as they become available. (#94624, @dims) [SIG Node]


Here’s what this means for real-world kubernetes deployments:

- 99% of Kubernetes deployments use dockerd as a runtime

- 99% of dockerd deployments use containerd as a runtime

- containerd can be called directly by kubernetes via cri-containerd

- Therefore most Kubernetes deployments can, and should, be simplified by calling containerd directly.

- This deprecation notice will make this transition happen sooner.

This is the natural consequence of Docker itself splitting out its runtime into containerd.


I'm kind of confused by this. It sounds like it's removing just some parts of docker (like the UI stuff), but not others? Can I still run my docker-built images on K8?


> Can I still run my docker-built images on K8?

Yes.


yes it's just the underlying container runtime. so this is really only applicable to sysadmins managing their own k8s installation


Might make sense to update the link to point to the language exactly:

https://github.com/kubernetes/kubernetes/blob/master/CHANGEL...

"Docker support in the kubelet is now deprecated and will be removed in a future release. The kubelet uses a module called "dockershim" which implements CRI support for Docker and it has seen maintenance issues in the Kubernetes community. We encourage you to evaluate moving to a container runtime that is a full-fledged implementation of CRI (v1alpha1 or v1 compliant) as they become available. (#94624, @dims) [SIG Node]"


containerd will still run images built by Docker. Google can talk about how Docker is missing CRI support, but I feel like this is just Google wanting to cut out Docker entirely.

It seems like containerd is maintained by The Linux Foundation, a group of people who mostly don't even run Linux (most of their releases and media material is made on Macs).

I dunno. I don't like the direction things are going in the open source world right now.


This was always a land-grab by folk who wanted Docker's """community""" (read: channel) but not Docker's commercial interests. Any time you see a much larger commercial entity insist you write a spec for your technology, especially one with much deeper pockets, the writing is on the wall.

The bit that absolutely fucking sickens me is how these transactions are often dressed up in language with free software intonations like "community", "collaboration" etc. Institutionalized doublethink is so thick in the modern free software world that few people even recognize the difference any more. As an aside, can anyone remember not so long ago when Google wouldn't shut up about "the open web"? Probably stopped saying that not long after Chrome ate the entire ecosystem and began dictating terms.

The one mea culpa for Docker is that the sales folk behind Kubernetes haven't the slightest understanding of the usability story that made Docker such a raging success to begin with. The sheer size of the organizations they represent may not even allow them to recreate that experience if indeed they recognized the genius of it. It remains to be seen whether they'll manage that before another orchestrator comes along and changes the wind once again. The trophy could still be stolen, there's definitely room for it.


Meh.

The whole idea of containerization came from Google anyways, who uses it internally. Docker came out with their container system without understanding what made it work so well for Google. They then discovered the hard way that the whole point of containers is to not matter, which makes it hard to build a business on them.

Docker responded by building up a whole ecosystem and doing everything that they could to make Docker matter. Which makes them a PITA to use. (One which you might not notice if you internalize their way of doing things and memorize their commands.)

One of my favorite quotes about Docker was from a Linux kernel developer. It went, "On the rare occasions when they manage to ask the right question, they don't understand the answer."

I've seen Docker be a disaster over and over again. The fact that they have a good sales pitch only makes it worse because more people get stuck with a bad technology.

Eliminating Docker from the equation seems to me to be an unmitigated Good Thing.


> The whole idea of containerization came from Google anyways, who uses it internally.

Not really. Jails and chroots are a form of containerization and have existed for a long time. Sun debuted containers (with Zones branding) as we think of them today long before Google took interest, and still years before Docker came to the forefront.

> I've seen Docker be a disaster over and over again. The fact that they have a good sales pitch only makes it worse because more people get stuck with a bad technology.

> Eliminating Docker from the equation seems to me to be an unmitigated Good Thing.

Now this I agree with, Docker is a wreck. Poor design, bad tooling, and often downright hostile to the needs of their users. Docker is the Myspace of infra tooling and the sooner they croak, the better.


What Google pioneered was the idea of defining how to build a bunch of containers, building them, deploying them together to a cloud, and then having them talk to each other according to preset rules.

Yes, we had chroot, jails, and VMs long before. I'd point to IBM's 360 model 67, which was released in 1967, as the earliest example that I'm aware of. A typical use before containerization was shared hosting. But people thought of and managed those as servers. Maybe servers with some scripting, but still servers.

I'm not aware of anyone prior to Google treating them as disposable units that were built and deployed at scale according to automated rules. There is a significant mind shift from "let's give you a pretend server" to "let's stand up a service in an automated way by deploying it with its dependencies as a pretend server that you network together as needed". And another one still to "and let's create a network operating system to manage all services across all of our data centers". And another one still to standardizing on practices that let any data center go down at any time with no service disruption, and any two go down with no bigger problem than increased latency.

Google standardized all of that years before I heard "containerization" whispered by anyone outside of Google.


Containers came from Solaris and the BSDs, and the warehouse-sized containerized deploys that this article/changelog is associated with came from Google. You're both right.

And agreed, Docker is a mess. It seems like everything that's good about Docker was developed by other companies, and everything that's bad about Docker was developed by Docker. The sooner the other companies can write Docker out of the picture the better. I want the time I wasted on Swarm back.


I get preferring that major open sourced projects weren't controlled by a big corporation, but this seems overly dramatic.

Docker was always a company first and foremost, I fail to see how leaving the technology in their commercial control would have been better in any way than making it an open standard. Just because Docker = small = good and Google = giant corporation = evil? Docker raised huge amounts of VC funding, they had every intention of becoming a giant corporation themselves.

And it's kind of bizarre to completely discount the outcome of this situation, which is that we have amazing container tools that are free and open and standardized, just because you don't like some of the parties involved in getting to this point.


> making it an open standard

I would hesitate to use the term "open standard" until I'd thoroughly assessed the identities of everyone contributing to that open spec, along with those of their employers, and what history the spec has of accepting genuinely "community" contributions (in the 1990s sense of that word)


The container image/runtime/distribution area is heavily standardized now via the Open Container Initiative (OCI) that was founded 5 years ago.

https://www.linuxfoundation.org/press-release/2015/12/open-c... https://kubernetes.io/blog/2016/12/container-runtime-interfa...

You can see the releases and specs that are supported by all major container runtimes here: https://opencontainers.org/release-notices/overview/

For example, OpenShift ships https://cri-o.io in its kubernetes distribution as its container runtime, so this isn't really new.

Disclosure: I helped start OCI and CNCF


I've never tried contributing to CRI so I don't really know what the process is like. I imagine like any such large and established standard it would require a herculean effort, that doesn't necessarily mean it's not open just that it can't possibly accept any random idea that comes along and still continue to serve its huge user base.

But let's say you're right and call it a closed standard. Then this change drops support for one older, clunkier closed standard in favor of the current closed standard. Still doesn't seem like anything to get upset over.


> This was always a land-grab by...

What's "this" in that sentence? Kubernetes in general?


The standardization of “Docker” containers into “OCI” containers and the huge amount of public pressure put on Docker to separate out their runtime containerd from dockerd.


Do you think it shouldn't have been standardized so other vendors products could be interoperable with docker's containers, or just that the standardization should have been done differently, or other?


I would say that Google never wanted to be _in_ with Docker in the first place. Google had been doing things the docker way before Docker existed (Borg). Docker sort of caught the developer ecosystem by surprise, but proved the viability of containers in general. From this point forward it was clear Google would build their cloud future on "containers", not on Docker. If you can find archived streams of the GCP conference that took place shortly after Docker's rise in popularity, they say the word container all day long, but never mention the word Docker once. I was there and remember counting.


Support for Docker was a correct market move for Cloud to adopt users that were already familiar with a tech base.

But divorcing their API from that tech base is also a move to support Cloud users---they don't want the story for big companies to be "If you want to use Kubernetes, you must also attach to Docker." That cuts potential customers out of the market who want to use Kubernetes but may have a reason they can't use Docker (even if that reason is simply strategic).

Google Cloud's business model walks a tightrope between usability and flexibility. Really, all the cloud vendors do, to varying degrees of success.


> I dunno. I don't like the direction things are going in the open source world right now.

I commented on a child comment as well, but I don't understand this idea. The news is that a piece of commercially built software is being deprecated by a major project in favor of one built on an open standard, and you're interpreting this as a blow to open source?


> It seems like containerd is maintained by The Linux Foundation, a group of people who mostly don't even run Linux (most of their releases and media material is made on Macs)

Using Macs for content creation isn't evidence that the Foundation members don't also use Linux, whether for software development, backend servers, etc.


It's probably fair to extrapolate that some tools they rely upon in their business flow aren't available on Linux, which is probably of concern.


I think Derek Taylor does a good breakdown of all the various software choices by The Linux Foundation:

https://www.youtube.com/watch?v=a-2dYfYvJGk


If I made presentations, you would discover that my content was all created on a linux desktop.

What a totally random data point of no relevance or significance eh?

Such things do in fact reflect the character and nature of the people involved. It doesn't necessarily define them entirely, but yes, it does reflect them.

It's not that you're not a "true Scotsman" necessarily if you say, care about linux primarily in other roles than desktops. You can be perfectly sincere in that, and it's valuable even if it only goes that far. But it does mean you are in a different class from people who actually do abjure the convenience of proprietary software wherever possible, and today "possible" absolutely includes ordinary office work and even presentation media creation.

It's perfectly ok to be so compromised. Everyone doesn't have to be Stallman.

It's equally perfectly fair to observe that these people are not in that class, when such class does exist and other people do actually live the life.

You can't have it both ways, that's all. If you want to preach a certain gospel to capitalise on the virtue signal, without actually living that gospel or possessing that virtue, it's completely fair to be called out for it.


It seems hyperbolic to say that the operating system a person uses is reflective of their character.


i 100% agree with you. open source feels really, really bad especially in the last year.

to others reading this -- simplified, but, docker uses containerd to build/run images. all docker images are valid containerd images. you can run images through containerd straight off the docker hub.


It depends on what one means by "open source."

Open source is fine; there's a ton of available code out there, to mix and match for whatever goals you need. Open services were never a thing, and what we're observing is that the SAAS model is eating the entire marketplace because tying services together to solve tasks is far easier (and depending on scale, more maintainable) than tying software together to solve tasks on hardware you own and operate exclusively. Owning and operating the hardware in addition to owning and operating the software that does the thing you want to do doesn't scale as flexibly as letting someone else maintain the hardware and provide service-level guarantees, for a wide variety of applications. But the software driving those services is generally closed-source.

If by "open source" you mean "Free (as in freedom) software," the ship has kind of sailed. The GNU-style four essential freedoms break down philosophically in the case of SAAS, because the underlying assumption is "It's my hardware and I should have the right to control it" and that assumption breaks down when it's not my hardware. There may be an analogous assumption for "It's my data and..." but nobody's crystallized what that looks like in the way GNU crystallized the Four Freedoms.


This is a pretty good primer on this peculiar new problem.

It's kind of a case study for future textbooks about how, if there is a certain incentive, it will be embodied and satisfied no matter what. If the names and labels have to change, they will, but the essentials will somehow turn out not to have changed in any meaningful way in the end.

It's if anything worse now than before. At least before, you were allowed to own your inscrutable black box and use it indefinitely. There was sane persistence, like a chair: you buy it, and it's there for you as long as you still want it after that. Maybe you don't want it any more after a while, but it doesn't go poof on its own.

One way things are actually better now though is, now in many cases the saas outside of your control really is just a convenience you could replace with self-hosted good-enough alternatives, thanks to decades of open source software and tools building up to being quite powerful and capable today.

I think this is a case of the rising water lifting all ships. If the proprietary crowd gained more ability to abuse their consumers, everyone else has likewise gained more ability to live without them. Both things are true, and I tell myself it's a net positive rather than a positive and a negative cancelling out, because more and better tools is a net positive no matter that both sides have access to use them for crossed purposes. At least it means you have more options today than you did yesterday.


Regarding your last paragraph "They are my friends and I should have the right to socialize with them however I want without being surveilled".


I believe that's a good consequence of the kind of universal principles that it would be nice to have for data but can't serve as one of those principles.

For example, Al Capone had some friends, and the government violated his right to socialize with them without being surveilled for good cause.


It's normal for rights to be taken away from criminals. Or do you mean pre-conviction... I think it's ok if it involves manual legwork.


Al Capone wasn't yet convicted of anything when they started wiretapping his phones.


containerd is hosted by the Linux Foundation (more specifically the CNCF). It is maintained by people from all over, including but not limited to the major US tech companies (Apple, Google, Microsoft, Amazon).

containerd was also created by Docker Inc and donated to the LF.


Docker Inc really does not want infrastructure projects wrapping docker itself. It causes all sorts of headaches for them. They encourage using containerd for infrastructure projects (which is basically the core of the original docker extracted out as a separate project maintained by a large community). Docker is basically an opinionated wrapper around containerd, and they intend to move even more in that direction in the future.

TLDR: Docker Inc almost certainly is happy to see this change happen.


Probably to get around the recent docker registry throttling would be my guess. They are likely looking at building out their own container ecosystem.


Changing the runtime doesn't change the registry.


> We encourage you to evaluate moving to a container runtime that is a full-fledged implementation of CRI (v1alpha1 or v1 compliant) as they become available.

So Docker is deprecated, but no replacement is yet available?


I believe at least containerd and CRI-O are actively available and in use quite a bit. (There are some others I've seen, too.)

It's just saying if you use something else, it must follow at least the v1alpha1 or v1 CRI runtime standard.


The recommended runtime to use is containerd, which Docker itself is using under the hood. This is just about removing Docker as a piece of middleware.


This link has a more thorough detail on what is actually happening and when - https://github.com/kubernetes/enhancements/tree/master/keps/...

Bottom line, I think, is that using docker as a container runtime with K8s is going to be harder unless cri-dockerd becomes production grade, but even then, from the Cons section it looks like it will not be a good option:

cri-dockerd will vendor kubernetes/kubernetes, which may be tough. cri-dockerd, as independent software running on the node, should be allocated enough resources to guarantee its availability.


Did anybody run a test comparing CRI-O vs Docker, especially when it comes to overall node memory usage for, let's say, 30-50 containers per node? I guess CRI-O would save a lot of memory, but I don't have numbers.


This is inevitable one way or another. Docker lost any leverage after 2015, back when they still had some chance of making sure containers, as they invented them, could still be monetized the same way VMware was.


It gets less confusing when you realise that the original specification for containerd came from Docker (the company) and that the current implementation of docker (the application) uses containerd as its runtime.

By using containerd (or podman) in K8s, you're getting rid of a lot of unnecessary overhead and so should get more containers per host...


You can spin up test Kubernetes clusters with different underlying container runtimes using Minikube. If you want to play around with a cluster running containerd instead of the docker container runtime use:

minikube start --container-runtime=containerd

Use this to convince yourself that all your current docker images will still deploy and work as usual.
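
A quick way to check which runtime any cluster's nodes are actually using (not just minikube):

  kubectl get nodes -o wide

The CONTAINER-RUNTIME column shows something like docker://19.3.x or containerd://1.4.x.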


I found the title just a reminder to invest in learning concepts and topics that can last a lifetime. Tools come and go, and it is healthy to change from time to time.

Containers as a concept are an important thing to learn, but the implementation of today may not be the same as the one 5 years from now.


Docker has networking and other layers too. Docker runs as a daemon too, so it is not very secure. GKE uses containerd (you can use others). What is nice about containerd is that it only runs the container and you can write plugins for it. So it's much lighter than docker.


This is misleading. If you're using Docker to build images and using Kubernetes to manage containers nothing changes. The deprecation mentioned is internal to Kubernetes and does not impact people who use Kubernetes to manage containers built using Docker.


What does that mean for people working with Dockerfiles for local/small scale development that would like to be able to use kubernetes at some point? Will they not be able to use their Dockerfiles at all?


No change for this workflow. Developers can still use docker to build OCI images as they always have, and containerd can run them as previously.


As much as I love Docker as an excellent freelance developer tool for juggling customer environments, I just never understood the urge to run entire enterprises on containers. It certainly doesn't make things easier, faster, more secure, or cheaper; all it ever did was isolate shared library dependencies (a self-inflicted problem created by the overuse of shared libraries in F/OSS, since static linking has done just the same thing since the dawn of time; of course, in neither case do you get automatic security or stability updates, which was the entire point of shared libs). Now they're removing Docker altogether from the k8s stack? So much for Docker's perceived "isolation", I guess.


From your post, I think you might fundamentally misunderstand Docker's use/value. From a value-add standpoint, Docker doesn't really care about "isolating shared library dependencies", but instead, compartmentalizing an entire application, dependencies and all. The value in this, of course, is that you no longer have to care about version conflicts between resources that are sharing a machine. As an added bonus, it means your deployment process can stay the same regardless of the type of container you're deploying. Before, if you had to deploy a Ruby app as well as a Python app, those required fundamentally different processes, as they each require their own package managers and interpreters. With a container, you compile each of those tools _into the container_, and then your deployment process is just "Create container image, send it somewhere".

Hell, even if you wrote an application with 0 dependencies, you're still on the hook for installing the correct version of its compiler, the correct version of your deployment tool, and the correct version/OS of your VM. These are still dependencies, even if they're not dev dependencies.

> It certainly doesn't make things easier, faster, more secure, or cheaper;

If you don't think being able to reuse software makes your workflow easier, faster and at the very least cheaper, I'm not sure what you could possibly think would do those things.


I'm sure you believe what you're saying. But, as pointed out in many posts here, very few people can set up Kubernetes, let alone know it well enough for troubleshooting. As an example, in a project of mine we had to call in a k8s expert after almost a week of downtime (it turned out IP address space was exhausted on that Azure instance). And a constant in almost all my recent projects is people fiddling with k8s integration setups and achieving very little.

In that kind of situation, it is unwise and irresponsible to treat your infrastructure as a black box. You still need to be able to re-build/migrate your images for security, stability, and feature upgrades, so you're basically just piling additional complexity on top.

The premise of Kubernetes and containers/clouds is an economic (and legitimate) one rather than a technical one: that you don't have to invest in hardware upfront, and pay as you go with PaaS instead. That tactic only works, though, as long as you have a strong negotiating position as a customer. In practice, even if you don't get locked in to cloud providers by tying your k8s infra to IAM or other auth infrastructure, or by mixing Kubernetes with non-Kubernetes SaaS such as DBs (which suck on k8s), you still won't be able to practically move your workload setup elsewhere due to sheer complexity and risk/downtime.

The economic benefit is further offset by the wrong assumption that you need fewer or no admin staff for Docker ("DevOps" in an HR sense).


> But, as pointed out in many posts here, very few people can setup Kubernetes,

My post, and most of yours, had nothing to do with Kubernetes, but containers in general. I don't care for Kubernetes, and would actively reject using it 99% of the time. Your post, however, was mostly about containerization of applications, whose validity has nothing to do with one particular product or pattern (Kubernetes).

Containers are an almost unanimous win in terms of the simplification of development and deployment. Conflating Kubernetes to be the only approach to containerization is a farce.


It makes things reproducible, and k8s is still containers, just not docker.


Kubernetes still uses containers, just not Docker. From the release notes:

> We encourage you to evaluate moving to a container runtime that is a full-fledged implementation of CRI...


k8s runs containers, docker is just one implementation of containers.


Just another case where an idea originally created for developer convenience turned into an enterprise thing and an instrument for mass control. Reminds me of Java build tools having long forgotten that they're there to make developers' lives easier rather than to appeal to enterprise control-freak desires. Now have fun developing k8s-compatible containers to enslave us in "the cloud", with developer workflow an afterthought.


The 12-factor app has been around since at least 2014. Why do you think "containers" are "enslaving" you?


So... What's the better runtime alternatives?


I've heard good things about https://containerd.io

Kubernetes documentation has a setup guide for containerd, as well as CRI-O, here: https://kubernetes.io/docs/setup/production-environment/cont...


The 'standard' one is containerd.


Say it many times: we miss Docker Swarm



