Nice to finally see it in writing: "Throughout this investigation Docker has made it clear that they’re not very open to ideas that deviate from their current course or that delegate control"
I think it’s a step backwards to have a monolithic application like Docker controlling your container runtime when you’ve gone through the effort of developing independent (micro)services.
Containerising software is the way forward, and I think Kubernetes and CoreOS have demonstrated a long-term vision.
This doesn't make sense. As long as Docker works well, most people don't care about what's under the hood. For example, the Linux kernel is a monolith, but you wouldn't make the same argument about running your containers on Minix.
You are right that both projects are monolithic (and of course open source), but it is unfair to compare a VC-backed company with a non-profit organization.
Stuff like this (from Docker Inc, not Kubernetes) is why I'm really glad that Kubernetes is doing this.
I'm a Mesos user and absolutely can't wait until the Unified Containerizer bits are finished. The plan is to be able to pull down a Docker image and run it as though it were being run via the Docker daemon, but without using Docker at all, relying instead on the Mesos-native namespacing and control group bits. I would already have been a Kubernetes user had it not depended on Docker so heavily.
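For anyone curious what that looks like in practice, this is roughly the agent-side configuration, sketched from the flags in the Mesos container-image docs (the binary is mesos-slave on pre-1.0 releases, the master URL is a placeholder, and the framework still has to request the image in its ContainerInfo):

    # run Docker images under the Mesos containerizer,
    # no Docker daemon involved
    mesos-agent --master=zk://master.example:2181/mesos \
      --containerizers=mesos \
      --image_providers=docker \
      --isolation=filesystem/linux,docker/runtime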
In our Mesos environment, the only part we have issues with on a regular basis is docker. First it was the userland proxy and the performance and latency issues it introduced. Then it was the unresolved bug of docker simply hanging under any sort of load[1], with upstream not really caring at all. Then there are random, difficult-to-reproduce issues where containers with bridged networking simply stop passing traffic from external interfaces into the containers even though nothing has changed. So we use docker containers for some things, but try to limit them, since docker simply isn't great at running production services for some of our use cases.
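For anyone else hitting the userland proxy issue: since Docker 1.7 the daemon has had a flag that replaces the per-port docker-proxy processes with iptables hairpin-NAT rules. A sketch, assuming a kernel where hairpin NAT behaves:

    # one flag at daemon startup; published ports are then handled
    # by iptables rules instead of a userland proxy process per port
    docker daemon --userland-proxy=false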
Solomon Hykes @solomonstre
@kelseyhightower TLDR "no actual downside for Docker users, but it makes it hard to for us to embrace-extend-extinguish"
> I react badly to hypocrisy, a known weakness. You should be a maintainer for a week and decide for yourself :)
I'd say that pattern-matches as a humblebrag, except it's more of a... faux-humble pointed insult? "Apologizing" for tone and managing to call someone a hypocrite within so few words is... I struggle for words. Impressive? It leaves an impression.
I thought it might be worth highlighting the final section, where some consequences of this decision are spelled out:
'There will be some unfortunate side-effects of this. Most of them are relatively minor (for example, docker inspect will not show an IP address), but some are significant. In particular, containers started by docker run might not be able to communicate with containers started by Kubernetes, and network integrators will have to provide CNI drivers if they want to fully integrate with Kubernetes.'
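For those wondering what "provide CNI drivers" amounts to: a CNI network is described by a small JSON file that names a plugin binary for the kubelet to invoke. A minimal sketch using the stock bridge plugin (the network name, bridge name, subnet, and file path here are made up):

    # /etc/cni/net.d/10-mynet.conf
    {
      "name": "mynet",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
      }
    }

A kubelet started with --network-plugin=cni reads this and calls the matching plugin binary to wire each pod into the network.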
It will be interesting to see what would happen if a number of significant and interested enterprise parties unite to embrace runC or similar in earnest, relegating Docker's solution set to just another implementation. Red Hat + Google + (say) Microsoft and it's game over.
runC is part of the Open Container Initiative spec set up by Docker and CoreOS, so it is Docker's solution as well as the community's. They recently open-sourced containerd, which is built on runC, so they are clearly embracing it.
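For anyone who wants to see how thin that layer is, here is roughly how you run a Docker image directly under runC today (the directory and container names are just placeholders):

    # turn a Docker image into an OCI bundle, then run it with runC
    mkdir -p mycontainer/rootfs
    docker export $(docker create busybox) | tar -C mycontainer/rootfs -xf -
    cd mycontainer
    runc spec          # writes a default config.json next to rootfs/
    sudo runc run mycontainer-id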
Interesting move. It's obviously still completely open how much of the Docker ecosystem beyond runC, i.e. the Docker API/CLI and daemon, can establish itself as a standard.
I just spent 20 minutes reading various sites, trying to understand what libnetwork is and failed. Everything either tells me that libnetwork is awesome or starts showing go source code.
It's Docker's networking library. If you want to integrate your own networking into Docker, and can't get there just by passing flags at docker daemon startup, you will have to integrate with (read: mess with) it.
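To make the flags half concrete: you can tell the daemon to keep its hands off networking entirely and do the plumbing yourself. A sketch, with the bridge handling being just one illustrative approach:

    # stop the daemon from creating docker0 and managing iptables
    docker daemon --bridge=none --iptables=false
    # start containers with only a loopback interface
    docker run -d --net=none busybox sleep 3600
    # then attach veth pairs / bridge ports / IPs with your own
    # tooling (pipework, a CNI plugin, plain ip commands, etc.)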
Not clear. Who is "I", what is my networking, and what am I integrating it with? Am I a Docker user? Am I setting up a network for a bunch of containers? Or am I building some software package that extends Docker?
At some point Red Hat was being made fun of for being slow, old-fashioned, stuck in the early 2000s. They have to be, obviously, for stability's sake, but you know how word spreads. So around RHEL 7 time they went all out trying to show the tech world they were still hip and cool, looked around, saw what the cool kids were talking about -- Docker. So they got themselves some Docker. And now it is Docker, Docker, Docker, all day every day.
Naturally it is now uncomfortable to say "yeah, well, LXC is there, you can use that as well, or instead of Docker", since that would be kind of backpedaling on their Docker bet by keeping something around that competes with it. So they are stripping it out.
It's not like the two things are even targeted at the same use case. Libvirt LXC is designed to make containers appear like VMs and is pretty heavyweight to set up. I always preferred using raw KVM or LXC to wrapping it in libvirt, which just gets in the way.
Docker is largely for running single applications, with a lightweight, easy-to-use setup, so you can run it constantly.
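To put the weight difference in concrete terms, compare getting a shell both ways; the libvirt domain XML below is trimmed to roughly the minimum libvirt accepts, so treat it as a sketch:

    # libvirt-lxc: write demo.xml ...
    #   <domain type='lxc'>
    #     <name>demo</name>
    #     <memory>524288</memory>
    #     <os><type>exe</type><init>/bin/sh</init></os>
    #     <devices><console type='pty'/></devices>
    #   </domain>
    # ... then define, start, and attach
    virsh -c lxc:/// define demo.xml
    virsh -c lxc:/// start demo
    virsh -c lxc:/// console demo

    # docker: one command
    docker run -it busybox sh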
Wow, didn't realize they'd deprecated libvirt-lxc. Thanks for pointing that out... that seems to leave all the projects relying on libvirt for container support in an odd spot? I thought that OpenStack, for example, used libvirt-lxc for its container (non-Docker) support?
"Future development on the Linux containers framework is now based on the docker command-line interface. libvirt-lxc tooling may be removed in a future release of Red Hat Enterprise Linux (including Red Hat Enterprise Linux 7) and should not be relied upon for developing custom container management applications."
Pretty sure that is just RHEL being unintelligent as usual.
LXC is not really related to Ubuntu in any real way, other than that they are looking to write a persistent daemon for it called LXD.
The main dev does work for them, and they thus get first-class support; that helps! (It wasn't working as well on Debian, for example, at least as of a few months ago.)
It might be because they want customers to use OpenShift (version 3 is all Docker/Kubernetes under the hood) instead of a custom-built solution. Although I do wonder how things like this will impact the OpenShift project, since Kubernetes is such a key component.