Hacker News
Containers: Docker, Windows and Trends (microsoft.com)
91 points by runesoerensen on Aug 17, 2015 | hide | past | favorite | 21 comments



Considering the timing of this blog post, it's probably "warmup" for the release of Windows Server 2016 Technical Preview 3 (10.0.10514) expected later this week[1].

It also seems that MSFT made the new preview available for public download just a few hours ago[2], along with a new docker client[3].

A couple of PowerShell scripts for setting up container hosts were also uploaded earlier today[4] (which is where I found the VM and docker client links; use at your own risk :-))

[1] http://www.winbeta.org/news/windows-server-2016-build-1051x-...

[2] http://download.microsoft.com/download/F/B/A/FBAAEE1A-3AF2-4...

[3] http://aka.ms/ContainerTools

[4] https://github.com/Microsoft/Virtualization-Documentation/bl...


What exactly is the point of Microsoft helping Docker? They say that """Linux containers require Linux APIs from the host kernel and Windows Server Containers require the Windows APIs of a host Windows kernel, so you cannot run Linux containers on a Windows Server host or a Windows Server Container on a Linux host."""

So why help Docker? Couldn't they just create their own Docker-like software for Windows Containers? What is the point of sharing the same client if what you can do with it depends on the platform you use?

EDIT: instead of just downvoting me, an explanation would be welcome


Because sharing the same client is still useful, even if the platforms are different. For example, you can use the same application to control systems running on Linux as well as Windows, without duplicating a lot of logic in a lot of places. Additionally, I can see cross-platform programs becoming possible through the right combination of well-laid-out containers and judicious use of file associations/binfmt_misc. So settling on the Docker API does make sense for MS, because a balkanized server ecosystem just leads to frustration and leaves people less interested in your platform.


It's a good thing CloudVolumes (aka SnapVolumes; basically VM-side volume unioning for VDI and server app containers, à la Softricity) sold to VMware when they did, about two years ago. The core idea of app executables and libraries as opaque, read-only containers which can be shared at enterprise scale is a good abstraction for many reasons, including security and dedupe.


It sounds like:

• Docker is really cool! Everyone is talking about Docker!

• We at Microsoft like Docker, too! (But Docker obviously screws us pretty hard, since the whole ecosystem is built around the Linux kernel. Windows isn't great at this type of isolation.)

• We don't like being left out of the Docker container party, so we're going to create _two_ things that are Almost As Good As Docker Containers!

• (1) A new package format, "Windows Server Containers," that we're calling a container, even though it doesn't actually offer containment. Containers can mess with other containers.

• (2) The same old Windows Server virtual machines, but we're going to call them "Hyper-V Containers." Yeah, they're just VMs. Separate kernels in memory and all of that.

• We know neither of these offer what you wanted. But good news! You can use the same container format for _both_ of these new almost-a-container systems.

Yikes.


Your summary of #1 is uncharitable. Although we don't know the technical details of what Microsoft has actually implemented, their description of Windows Server Containers applies equally well to containers on Linux.

"While the sharing of the kernel enables fast start-up and efficient packing, Windows Server Containers share the OS with the host and each other. The amount of shared data and APIs means that there may be ways, whether by design or because of an implementation flaw in the namespace isolation or resource governance, for an application to escape out of its container or deny service to the host or other containers. Local elevation of privilege vulnerabilities that operating system vendors patch is an example of a flaw that an application could leverage. Thus, Windows Server Containers are great for scenarios where the OS trusts the applications that will be hosted on it, and all the applications also trust each other."


I am not trying to be a downer. I'm already waiting for Windows containers for my work; we are planning to build a project on them. But there is a difference: when a Docker container on a Linux host was able to escape the container (around 0.8), it was considered a huge security vulnerability. This text from MS reads as if they already accept there will be vulnerabilities, whether by design or by flawed implementation. That does not inspire confidence for running untrusted containers, and that's presumably why they implemented Hyper-V containers as well.


They accept that there is less resource isolation in a plain container compared to a virtual machine container.

With a plain container, any OS process that you see is the common host OS process; it is just projected into your container. Compromising the process compromises it for all containers.

For security purposes there's a big difference between starting with access to everything and then trying to rein in processes, access, resources, etc., compared to starting with hardware isolation and then allowing some functions (e.g. management) to cross.

Microsoft is completely correct on this: Containers are not security boundaries. A security boundary would require very few access points with very specific security policies. That is not containers.

Hyper-V virtual machines, on the other hand, enjoy hardware-level isolation and start from the other end: anything that should cross the VM boundary has to be explicitly allowed, as opposed to OS virtualization, where anything is allowed until the projection disallows it.

For instance, a container could try to delay processing of callbacks from the kernel processes. It shares the same processes with the other containers, and a single malicious container could very well starve the others of resources.

Both have their uses. Plain containers offer higher density but less isolation, Hyper-V (or any other VM technology) containers offer lower density but higher isolation.
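The shared-kernel point above has a consequence you can check directly: from inside any plain (shared-kernel) Linux container, asking for the kernel release returns the host's kernel, because there is no second kernel to ask. A minimal sketch; run the same line on the host and inside a container and compare:

```python
import platform

# In a shared-kernel container this reports the *host* kernel release;
# there is no guest kernel. A Hyper-V container (or any VM) would
# report its own guest kernel instead.
print(platform.release())
```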


You're making the extremely poor assumption that the only people who want containers are people running Linux. What are the options on Linux to run a Windows container?

Instead of spewing the "MS IS BAD BECAUSE MS" crap, perhaps you should applaud them for trying to embrace what we're all hoping becomes a standard. They could've very easily just written their own completely incompatible containers, but instead chose to try to continue down the open path.

Let's be real - docker isn't doing anything new, or anything particularly unique, their entire value proposition is standardizing features that have existed for literally decades.


Being able to use Docker tools to deploy these Windows containers is a win, though. Companies with mixed data centers right now (aka many enterprises) are reluctant to use Docker-ecosystem tools for deployment because they won't work for Windows applications. If you can package up Windows apps in ways compatible with Docker orchestration tools, now it's a cross-platform tool that can work for the whole datacenter.


Assuming that "Windows Server Containers" are based on the Drawbridge technology that's been around Microsoft for a few years, the containers can only mess with other containers to the extent that you've configured them to be able to.

I used Drawbridge to implement R support in the Azure Machine Learning service two years ago (as referenced at https://redmondmag.com/blogs/the-schwartz-report/2014/10/win... ), but I haven't worked on it since so I don't know how things have evolved.


> Docker is really cool! Everyone is talking about Docker!

Stupid MS. They should have known that they cannot think that anything developed outside MS is cool. We have a cult thing going here, and MS you are not invited!

> We at Microsoft like Docker, too! (But Docker obviously screws us pretty hard, since the whole ecosystem is built around the Linux kernel. Windows isn't great at this type of isolation.)

No, they are saying that OS virtualization is a necessity for Azure, and that they've adopted the Docker container format because it applies equally well to Linux, Windows or any other operating system. There is nothing inherently Linux about Docker, and there is certainly NOTHING tying it to the Linux kernel, contrary to your claim. But they should have known that they are not welcome in the cult, so they should have developed their own container format?

> We don't like being left out of the Docker container party, so we're going to create _two_ things that are Almost As Good As Docker Containers!

No - they are creating Docker containers for Windows, equivalent to Docker containers for Linux: same format, same API, and existing tools can be used. Is that bad?

And they are also creating a specialized Virtual Machine capable of hosting a single container so that we can decide at deployment time whether we want a) higher density but less isolation or b) lower density but higher isolation. Which - to be honest - makes perfect sense, as that decision is mostly about trust of the environment in which you deploy and should not in any way affect how the container is developed and packed.
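The "same format, same API" claim is concrete: the Docker remote API is plain HTTP+JSON, so the client side does not care which kernel sits behind the daemon. A minimal sketch that builds (but does not send) a body for Docker's POST /containers/create endpoint; the image names are illustrative assumptions:

```python
import json

def create_container_body(image, cmd):
    """Build the JSON body for Docker's POST /containers/create endpoint.
    The client-side payload is identical whichever OS the daemon runs on;
    only the referenced image contents differ."""
    return json.dumps({"Image": image, "Cmd": cmd})

# Hypothetical Linux- and Windows-flavored requests: same protocol, different image.
linux_body = create_container_body("busybox", ["echo", "hi"])
windows_body = create_container_body("windowsservercore", ["cmd", "/c", "echo hi"])
```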

> (1) A new package format, "Windows Server Containers," that we're calling a container, even though it doesn't actually offer containment. Containers can mess with other containers.

They are called containers because they are Docker containers. Containers are a way to ship configured applications with minimal concern about the configuration of the host. They isolate your application from specifics of the host, including isolation from what else is running on the host.

The security concerns about containers (yes Linux containers as well) are well understood. Containers share the operating system, and thus there is a higher risk of cross-container contamination compared to virtual machines (VMs).

Contrary to your Linux cult view, security of (Linux) containers is not perfect. As Mark Russinovich points out, a simple privilege escalation (seen any of those lately, hmm?) could allow complete cross-container compromise.

Mark Russinovich's comments about trust of the environment make perfect sense. Make sure you trust the containers running on your system. If you developed them yourself or obtained them from a trusted source, fine. If they are controlled by some less trusted entity, then assume that they, or someone who compromises them, could be hostile and try to gain access to other containers.

> (2) The same old Windows Server virtual machines, but we're going to call them "Hyper-V Containers." Yeah, they're just VMs. Separate kernels in memory and all of that.

You misunderstand the point. Yes, the Hyper-V container is based on existing virtual machine technology. Nothing new there. The point is that you can use such a single-container VM as an alternative target at deployment time.

If you develop an application (say, a website) and package it as a container, you can deploy it to a container-aware OS. But if you are deploying it alongside untrusted containers, you'll want a higher degree of isolation, regardless of OS. That's where Hyper-V containers come in.

> We know neither of these offer what you wanted. But good news! You can use the same container format for _both_ of these new almost-a-container systems. Yikes.

It is exactly what a lot of us want. It may not be what the cult wants, but let's be honest here: Microsoft could never produce anything the cult would want.

Take a cue from Linus Torvalds and get a cure for your Microsoft hate disease. Containers are cool, and they offer great value on Windows as well. Be proud that a technology popularized on Linux has also proliferated to Windows; that is, if you had anything to do with it.


What OS's will this docker.exe (3) work on? It does not run on Windows 10.


Well, over a decade late to this party, but seems like they're being dragged forward.


Linux was pretty late to the party itself, short of heavyweight solutions like Linux-VServer and OpenVZ, which require patched kernels.

As with most everything related to virtualization, IBM did it first.


I remember using VM/CMS in school, in 1981: https://en.wikipedia.org/wiki/VM_%28operating_system%29

How much is really new?


Everything new is old already.


Sure. But ten years ago, people were using OpenVZ widely in hosting and so on. The benefits were very clear. Yet at that time period, Microsoft went in the opposite direction as they thought HDD space was growing without limit. (Vista introduced winsxs and other disk hungry features.)


They were ahead with App-V (Softricity, streamable apps which work offline in lieu of Citrix virtual desktop or Moka5 VMs), but the landscape on servers and VDI changed.


Did that make it possible to deploy, say, IIS and DNS in separate containers, easily? I always saw it sorta aimed at end-user apps. Also note it's yet another licensing hurdle.




