Rktnetes brings rkt container engine to Kubernetes (kubernetes.io)
113 points by philips on July 15, 2016 | 35 comments



We're really excited to broaden customer choice and let people decide what container technology makes sense for their workloads. The CoreOS team has been amazing in helping to ensure Kubernetes is flexible enough to support alternate runtimes, starting with not just rkt itself but a full-scale implementation of all of Kubernetes running on rkt. Thank you so much!

Disclosure: I work at Google on Kubernetes.


Does this mean k8s will be able to run pods directly via systemd-nspawn? I'm not completely sure what rkt buys us -- please fill me in.


rkt utilizes systemd-nspawn under the hood, but you need some additional features around that as well. The usage is explained pretty well in the architecture guide: https://coreos.com/rkt/docs/latest/devel/architecture.html
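To make the stage1 idea from that guide concrete, here is a rough sketch of what swapping the isolation layer looks like from the rkt CLI; the image names and versions are only illustrative, and exact flags may differ by rkt version:

    # rkt's "stage1" is the swappable isolation layer: the default flavor
    # wraps systemd-nspawn, and a KVM flavor runs the pod inside a small VM.
    sudo rkt run coreos.com/etcd:v2.3.7

    # Same pod, explicitly selecting the KVM stage1 flavor instead:
    sudo rkt run --stage1-name=coreos.com/rkt/stage1-kvm:1.10.0 coreos.com/etcd:v2.3.7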


What about LXC? Is there just no community interest for this feature?


I would hope/expect to see systemd-nspawn support before LXC directly.


If you've ever looked at the underlying code of "LXC" it's basically a bunch of spaghetti scripts that utilize cgroups. It was awesome for its time, but it's really not something worth supporting in 2016 vs. the new container runtimes.


This is extremely misinformed. When did you last look at it? Can you post some examples of the spaghetti scripts you mention? What features do the 'new 2016 container runtimes' have that the LXC project didn't provide first and doesn't have now? Really?

It's sad to see this level of dismissive, reductive FUD used against the LXC project so liberally and consistently, especially here on HN, to promote other container solutions.

A quick visit to linuxcontainers.org or the LXC github repo will quickly and dramatically dispel such notions.

The LXC project is written in C and has been in development since 2009, alongside cgroup and namespace support in the kernel. It's mature, robust, and easy to use. It provides advanced container networking options, from veth to macvlan, and has supported aufs, overlayfs, btrfs, LVM thin, and ZFS for snapshots and clones since at least 2011. It was also the first to support unprivileged containers with its 1.0 release.

It's this work that Docker was originally based on, and it's unfortunate that, far from highlighting the benefits they got, many in the ecosystem choose to downplay and misrepresent the LXC project, leading to comments like the one above.

LXC is now on 2.0, with multiple improvements ranging from proper display of cgroup controls and container resource usage inside containers (via the lxcfs project) to initial support for live migration.

It is also significantly easier to use than single-app containers, especially for multiple processes and networking, without needing custom process managers, apps run in specific ways, or networking workarounds. LXC runs the container OS's init inside the container; Docker and others do not. There are reasons for both approaches. I hope for better-informed discussions on Linux containers here.


Well, the big thing that LXC lacks, and has always lacked, is a story for how to ship code. Docker has Distribution / Hub, rkt has Quay, LXC has... nothing.

If LXC had a registry long ago, I suspect you'd see a lot more people using it directly.

Now, the big difference is that LXC is mostly (as of right now) a Canonical project, and they have a habit of trying to compete with the world. LXD looks interesting from a technology POV, somewhere between Intel's Clear Containers (which seems to be where they got the idea) and Docker. I don't see it really gaining any traction, however.

A few examples that spring to mind of them competing with the world and for the most part, failing:

    * bzr - if you want a Python VCS, use Mercurial
    * Upstart - we all know how well that worked out
    * Dekko - their fork of the lightweight email client Trojita
    * Juju for config management - this one is arguably one of the best things Canonical has produced, tech-wise
    * Unity/hacked-up GNOME - because working with upstream is boring, and the world wants lots of Ubuntu phones
    * Their own display server, Mir, because they can do it better than the Xorg developers, who are writing Wayland with decades of experience.
The list goes on and on.


LXC has always had a registry of base OS images. LXC is a normal open source project; it is not VC-funded and does not have a lot of the wall-building kinds of features, like the Hub and tons of other things, that others would embrace.

LXC's problem is near-zero marketing and the significant muddying of the waters about containers by the VC-funded Docker ecosystem, which launched off LXC. How does one position LXC or LXD as 'NIH' when others are based off it, or misrepresent it as low-level when it has far more sophisticated tools, features, and maturity?

Isn't it ironic that, for all the criticism of bash scripts from the Docker ecosystem, the Dockerfile is essentially a shell script? There still remains way too much misinformation about containers here on HN, and it damages informed discussion and credibility. VC-funded companies should not be allowed to hijack open source projects in this manner without due credit and proper information.

Let's talk about overlayfs, aufs, namespaces, cgroups, and all the Linux components that containers are built from. Are these projects supposed to work in anonymity, without any credit or reward, while VC-funded companies suck up the value and momentum? Is this the 'open source model' HN wants to support? Compete on value, not disinformation. This only leads to low-quality discussions.

Ubuntu tends to do its own thing, and I don't particularly see anything wrong with diversity. What I find more galling is this underhanded positioning of anything not controlled by Red Hat as NIH. Are systemd and nspawn criticised as NIH? Flatpak is not dismissed as NIH, but Snappy is. How does this kind of brazen manipulation happen? I can't fathom ordinary unbiased users thinking this way, and this kind of politics is becoming a problem for informed discussion and wider choice in Linux. Unity is surprisingly polished for a Linux project, and I would not even have tried it if I hadn't looked beyond the prevailing prejudice. Let's not do this.


Huh? LXD is Canonical's, but I don't think LXC is in any sense a "Canonical" project? (I thought) LXC's primitives are what Docker is actually using.

Why would LXC need a registry any more than, say, rkt? scp a tarball, or host the tarball on an https endpoint? Done, no?


Docker stopped using LXC stuff a while ago. Sure, Docker uses Linux namespaces and such, but that has a lot more to do with OpenVZ than LXC.

People seem to like LXC because they want to treat containers as 'lightweight virtual machines' and run full-stack Linux OSes in them, with init systems, multiple processes, and such things.

However, that isn't the direction the industry is moving. The focus is on containers as simple runtime packaging, where you have the absolute minimum needed to run a single application.

rkt is interesting because it doesn't depend on a separate daemon to manage containers. Instead, you use the host's systemd to manage containers in much the same way you manage everything else on the system. That simplifies things like managing cgroups and improves reliability.
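A minimal sketch of that pattern, assuming the etcd image as a stand-in workload:

    # Run a pod as a transient systemd unit on the host; systemd, not a
    # long-running container daemon, then owns the process tree, cgroups,
    # logging, and restarts.
    sudo systemd-run --slice=machine rkt run coreos.com/etcd:v2.3.7

    # The pod is then visible through ordinary systemd tooling:
    machinectl list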


I do believe the latter two are another case of "we have commit rights, you don't, so go away" from the GNOME and Wayland teams respectively.

https://plus.google.com/+TheodoreTso/posts/4W6rrMMvhWU


Some of it is powered by fans and by those who bought into the hype of some of the commercial entities pushing their containerization. I think at some point, to make what they bought into look good, they have to put down what others have. That happens with pretty much any popular technology. Remember how JavaScript on the server was "async", and therefore better, and was going to kill all the other languages and frameworks? Kinda the same idea.


I am interested in Google's plans for Kubernetes - is the plan to bring unikernels under the aegis of Kubernetes' control?


Well, two things really:

1) It's definitely not just a Google project. CoreOS just gave a great example of that (as have Red Hat, Samsung, Huawei, IBM, Cisco, etc.).

2) We plan on fully supporting OCI, and any container technology that supports that interface should run without issue. So, to the extent that a unikernel runtime supports it, you're good. There are, of course, many items to be sorted out (how do you provide underlying scheduling, how do you handle the functions the kubelet currently provides, how do you provide effective debugging, etc.), but I'll leave that to smarter folks than me to work out :)

Disclosure: I work at Google on Kubernetes


While CoreOS is not Google, you should mention that Google Ventures is the lead investor in CoreOS.


That's true, but it is totally and completely separate. While I know Google Ventures folks, they do not share anything about investments (performance/strategy/etc.) with the main product org.


The rkt process model does make a lot more sense than Docker's. The combo of Kubernetes + Docker has always seemed a little awkward to me; K8s just wants to control containers, and something lower-level than Docker would fit its needs better.

Looking forward to trying it out!


Does kubernetes without rkt support pluggable isolation environments? It seems like a pretty cool feature to be able to say "this pod needs to be run under kvm." I'm not well-versed in how good the regular container isolation has become at this point, so maybe it's not as big of an issue these days.


It is an alpha feature but you can switch the rkt "stage1" using the rkt.alpha.kubernetes.io/stage1-name-override annotation on a pod.

So if you wanted the virtual-machine-based pod isolation, you could do something like rkt.alpha.kubernetes.io/stage1-name-override=coreos.com/rkt/stage1-kvm:1.10.0.
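For illustration only, a pod manifest carrying that annotation might look roughly like this; the pod and container names are made up, and the stage1 image/version is just an example:

    # Hypothetical pod spec requesting KVM-backed isolation via the alpha
    # stage1 override annotation (only the annotation key comes from the
    # comment above; everything else is a placeholder).
    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: kvm-isolated-nginx
      annotations:
        rkt.alpha.kubernetes.io/stage1-name-override: "coreos.com/rkt/stage1-kvm:1.10.0"
    spec:
      containers:
      - name: nginx
        image: nginx
    EOF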

We are working with upstream to come up with a better mechanism but this is a great example of where having an annotation is a great release valve for adding experimental features.


Also, there is a node-wide kubelet flag, `--rkt-stage1-image`, to set the default isolation environment, which can be overridden by the per-pod annotation `rkt.alpha.kubernetes.io/stage1-name-override`.
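Roughly, that node-level default is set when starting the kubelet; treat this as a sketch, since flag spellings vary by Kubernetes version and the stage1 image/version is only an example:

    # Hypothetical kubelet invocation: rkt as the runtime, with a KVM stage1
    # as the node-wide default (other required kubelet flags omitted).
    kubelet \
      --container-runtime=rkt \
      --rkt-stage1-image=coreos.com/rkt/stage1-kvm:1.10.0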



What would a user of Kubernetes (as opposed to the person setting up Kubernetes) gain from this? How much of the underlying containerization technology is exposed through the Kubernetes APIs such that one could actually leverage features in rkt that aren't in Docker, or vice versa?


Will it ever be possible to run a heterogeneous cluster with rkt and docker containers running side by side in the same cluster? Or is that a bridge too far?


Yes, the container runtime flags are per-kubelet [1], so you can have a cluster with a mixture of runtimes. I can see the value you mentioned in testing and trying out rkt+k8s. In fact, we did something similar a while ago, where we had Docker running on the master node and rkt running on the minion node to narrow our testing scope.

[1] http://kubernetes.io/docs/getting-started-guides/rkt/#spin-u...
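In practice, the mixed cluster boils down to starting each node's kubelet with its own runtime flag; the API server address below is a placeholder:

    # Node A keeps the default Docker runtime:
    kubelet --api-servers=https://master.example.com --container-runtime=docker

    # Node B joins the same cluster but runs its pods with rkt:
    kubelet --api-servers=https://master.example.com --container-runtime=rkt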


This should be possible, as it is a kubelet-level flag and the k8s API doesn't care about those details. So you could have one kubelet join the cluster with Docker and a different one join with rkt.

That said, I haven't tested it personally.

What would your motivation be for doing this?


Our motivation would just be supporting both for the end user. I'm imagining a single Kube cluster with Pachyderm running on it, where users can specify their jobs as either Docker containers or rkt containers. Of course, this use case is only as real as users' desire to specify their jobs in both formats on the same cluster, which I haven't validated.

Maybe a more realistic use case is that it would allow us to specify our Pachyderm containers in rkt and still support Docker for users' jobs.


It would certainly make it easier to evaluate and gradually migrate to rkt. I don't have any plans to set up a separate cluster just to test rkt anytime soon.


I'm personally most interested in getting Kubernetes to support the Open Container Initiative specifications. It would be really awesome if you could use runC as the executor for Kubernetes, or, at the very least, if the rkt executor generated OCI configuration files so you could use other OCI-compliant runtimes.


rkt + Kubernetes = rktnetes! I'd love a little help with the pronunciation. In my head the "t" is silent... rock-a-netes?

We're obviously big CoreOS and Kubernetes fans at Pachyderm, so it's great to see more technologies supported in general.


rocket-netes?


This is how we say it at CoreOS.

Roughly: rocket-net-ease


Will it ever be possible to switch between rkt and docker at the pod level instead of at the kubelet level?


rockaneties is how I said it in my head.


This is some terrible branding!



