I believe this article is poorly researched and heavily pro-Google slanted. The author, 'Timothy Prickett Morgan', falsely gives the impression that Google has been driving in one direction for ten years, that they did most of the work on containers in Linux, and that everyone else can thank them for their hard work.
I was following the container automation area very closely a couple of years ago, including some personal email interactions with Wilkes and the primary LXC/cgroup userspace/kernelspace authors (none of whom worked at Google). My distinct impression, which others have also noted, is that Kubernetes was not an in-house project that later went public, but rather a product built after many successive in-house systems specifically with a view toward sharing it with the public, probably partly in response to the public perception and popularization of LXC/Docker, and to Amazon EC2's rapid success, which Google's management would presumably like to replicate.
Nobody on the Kubernetes project ever claimed that it was an internal project gone public. It was absolutely, 100% designed to be open-sourced as a derivative of Borg and Omega. There's no sleight of hand here.
That said, we had been discussing a Borg-as-a-Service for quite some time BEFORE Docker became popular. Docker's popularity was undeniably a catalyst in getting Kubernetes built.
This is true. Early cgroups came primarily from Google. Namespaces did not. We have also been very involved in the development of various cgroup controllers, and very vocal about which changes would make our style of isolation (which has suddenly been popularized by Docker) more robust.
Looking at /usr/src/linux/Documentation/cgroups/* it seems that while the renamed cgroups kernel functionality itself did have initial Google authorship, the later "actually make it useful" developments of namespaces and LXC (i.e. the first userspace component) did not come from Google, and by the time Google was working on cgroups, code from SGI and other sources (such as cpusets) had already been merged. Precisely what Google wrote would be an interesting question; in any event it's significantly less than "all of the above" as the article claims. By the time I started looking at it in 2009, development was dominated by IBM, who apparently funded LXC userspace development because they felt the new features would be good for their big mainframes.
That's mostly accurate. Lots of people (at Google, IBM, SWSoft and elsewhere) had been working on approaches to get resource isolation into the Linux kernel since around 2000, but none had achieved general support. The main debate was around the abstractions to be used for defining/controlling the sets of processes being isolated and the isolation parameters, rather than the actual mechanisms used for isolation.
Around the same time (~2005?) SGI got cpusets merged into the kernel; this was initially just intended for pinning groups of processes onto specific NUMA nodes on big-iron systems. At the suggestion of akpm we started using it internally at Google to do coarse-grained CPU and memory isolation, by making use of the fake-NUMA emulation support to split the memory on our servers into chunks of ~128MB each and pinning each job to some number of fake nodes. This worked surprisingly well, but required painfully-complex userspace support to keep track of the memory usage of each job and juggle memory node assignments (particularly since we wanted to be able to overcommit machines, so we had to dynamically shift nodes around from low-priority jobs to high-priority jobs in response to demand).
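To make that concrete, here's a rough userspace sketch of what that kind of pinning looked like against the classic cpuset pseudo-filesystem (the paths, node counts and PIDs here are hypothetical illustrations, not Google's actual tooling, and this assumes a kernel booted with fake-NUMA emulation and cpusets mounted at /dev/cpuset):

    import os

    CPUSET_ROOT = "/dev/cpuset"  # classic cpuset mount point (assumed)

    def pin_job(job_name, fake_nodes, cpus, pid):
        """Pin a job to a set of fake NUMA nodes (~128MB each) and CPUs."""
        path = os.path.join(CPUSET_ROOT, job_name)
        os.makedirs(path, exist_ok=True)
        # Restrict which memory nodes the job may allocate from.
        with open(os.path.join(path, "mems"), "w") as f:
            f.write(fake_nodes)   # e.g. "0-7" -> roughly 8 * 128MB
        # Restrict which CPUs the job may run on.
        with open(os.path.join(path, "cpus"), "w") as f:
            f.write(cpus)         # e.g. "0-3"
        # Move the job's process into the cpuset.
        with open(os.path.join(path, "tasks"), "w") as f:
            f.write(str(pid))

    # Hypothetical example: give "jobA" ~1GB of memory and 4 CPUs.
    # pin_job("jobA", "0-7", "0-3", 12345)

The painful part wasn't this write-some-files bit, it was the layer above: tracking per-job memory usage and reassigning fake nodes on the fly to honour overcommit.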
The cpuset API and abstractions turned out to fit the resource control problem pretty well, and they had already been merged into the kernel, which gave that API a kind of pre-approval compared to the other generic resource control approaches. So we worked on separating out the core process/group management code from cpusets, and adapting it to support multiple different subsystems, and multiple parallel hierarchies of groups. The original cpusets became just one subsystem that could be attached to cgroups (others included memory, CPU cycles, disk I/O slots, available TCP ports, etc). It turned out that this was an approach that everyone (different groups of resource-control enthusiasts, as well as Linux core maintainers) could get behind, and as a result Linux acquired a general-purpose resource control abstraction, and other folks (including some at Google) went to town on providing mechanisms for controlling specific resources.
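For anyone who hasn't poked at it, the end result of that generalization is the familiar cgroup (v1) filesystem interface: each mounted hierarchy has one or more subsystems attached, and you drop PIDs into groups to apply their limits. A minimal illustrative sketch (paths, knobs and values are just standard v1 examples, nothing Google-specific):

    import os

    def limit_job(hierarchy, knob, value, pid):
        """Create a group under an already-mounted v1 hierarchy and set one knob."""
        group = os.path.join(hierarchy, "demo-job")
        os.makedirs(group, exist_ok=True)
        with open(os.path.join(group, knob), "w") as f:
            f.write(str(value))
        with open(os.path.join(group, "tasks"), "w") as f:
            f.write(str(pid))

    # Assuming standard v1 mounts under /sys/fs/cgroup:
    # limit_job("/sys/fs/cgroup/memory", "memory.limit_in_bytes", 512 * 1024**2, os.getpid())
    # limit_job("/sys/fs/cgroup/cpu",    "cpu.shares",            512,           os.getpid())

The point is that cpuset, memory, cpu and the rest all ended up as peers behind one grouping abstraction, rather than each inventing its own way of defining sets of processes.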
The namespace work was going on pretty much in parallel with this - it wasn't something that we were interested in since it was just added overhead from our point of view. The jobs we were running were fully aware that they were running in a shared environment (and mostly included a lot of Google core libraries that made dealing with the shared environment pretty straightforward) so we didn't need to give the impression that the job had a machine to itself. IP isolation would have been somewhat useful (and I think was later added in Kubernetes) but wasn't very practical to provide efficiently given Google's networking infrastructure at the time.
We weren't really interested in LXC since we had our own userspace components that had developed organically with our container support (and which as others have commented were so entwined with other bits of Google infrastructure that open-sourcing them wouldn't have been practical or very useful).
Actually I got into the area for experimental large-scale video transcoding cluster design (2009-2010; $), then to enable a microservice-based architecture for secure digital currency exchange systems (2011-2015; $). However, even in my postcard archive project (2016; interest) I am using it for scalable reverse image search, enhanced workflow (CI), and secure image ingestion, as well as standard web and database service segregation.
Bad journalism.