Docker isn't perfect – Issue #216 (github.com/subuser-security)
120 points by rljy on Aug 27, 2015 | 62 comments



A number of developers from RedHat were once very involved in the project. However, these developers had a very arrogant attitude towards Docker: they wanted Docker changed so that it would follow their design ideas for RHEL and systemd. They often made pull requests with poorly written and undocumented code, and then became very aggressive when those pull requests were not accepted, saying "we need this change; we will ship a patched version of Docker and cause you problems on RHEL if you don't make this change in master." They were arrogant and aggressive when, at the same time, they had the choice of working with the Docker developers and writing quality code that could actually be merged.

THIS. It was both amusing and sad to watch this happen time and time again. My favorite is what happened (or, rather, didn't happen) around CoW filesystems and how they decided to just use a FUSE-based one instead.


Upvoted because every opportunity for me to vote against RedHat's insistence on the nightmare that is systemd is a good opportunity.

Stop breaking Linux's usefulness as a server OS in a misguided attempt to make it a desktop OS. (see: systemd, networkmanager, etc)


Why do you think that systemd is not useful on a server OS?


Because it does nothing that anyone in my organization has ever asked for, nor does it do anything that anyone in any organization I've ever worked for has asked for, in 20 years of working on Linux systems.

I've never heard of any problems it solves other than things described in abstract terms that might make sense to a developer but that make no sense to Operations/DevOps teams.

Nobody likes learning something new for zero benefit. I keep trying, though. But the more I am forced to learn about it, the more arcane it gets. It's not a learning curve, as far as I can tell. It's a learning wall with no payoff. It makes everything we do more difficult, the commands make no sense, and it brings no tangible benefit. It's change for change's sake. It reminds me of DJB's insistence on making logs in hex "just because".

Just a simple example. Why is it "systemctl list-unit-files"? What the hell are unit files? How is this in any way logical? Why is this an argument and not a flag? Why is it that I can use a systemctl command, and it tells me to use the -l flag to view non-ellipsized output, but then when I use the flag, it gives me an arcane error that tells me nothing about what happened? It's the tip of the iceberg in an endless string of junk that simply doesn't make any sense or work properly.

So, I like to vote against it whenever I can, because supporters like to keep insisting that it's the way of the future, and implying anyone who hates it is a luddite.


> Why is it "systemctl list-unit-files"? What the hell are unit files? How is this in any way logical?

I suggest you read systemd.unit(5), which begins as follows:

"A unit configuration file encodes information about a service, a socket, a device, a mount point, an automount point, a swap file or partition, a start-up target, a watched file system path, a timer controlled and supervised by systemd(1), a temporary system state snapshot, a resource management slice or a group of externally created processes."

> Why is this an argument and not a flag?

Multi-use programs with a good CLI accept the argument(s) determining what they do positionally rather than as flags. Flags should be used for options that somehow modify the behaviour of the command.

> Why is it that I can use a systemctl command, and it tells me to use the -l flag to view non-ellipsized output, but then when I use the flag, it gives me an arcane error that tells me nothing about what happened?

I don't know what you mean. For me, 'systemctl list-units -l' works fine.


It's hard to argue with someone who claims that systemd has zero benefit.

You claim unit files are confusing. Let's say we took a Windows programmer who was a sophomore in CS but didn't have any Unix or Linux experience, and showed him both systems. Explain how runlevels are handled in systemd vs sysvinit. Show him a systemd unit file vs the equivalent bash script. Explain how systemd tracks the process tree vs the sysvinit pid file hack. You spent five minutes trying to figure out systemctl and declared it terrible. It probably took you more than five minutes to find /etc/init.d your first time.
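
To make the comparison concrete, the sysvinit side of that equivalence is typically a script along these lines (a heavily trimmed sketch for a hypothetical daemon, pid file hack included):

  #!/bin/sh
  # /etc/init.d/myapp -- trimmed sysvinit-style sketch for a hypothetical daemon
  case "$1" in
    start)
      /usr/local/bin/myapp &
      echo $! > /var/run/myapp.pid   # track the process by hand
      ;;
    stop)
      kill "$(cat /var/run/myapp.pid)" && rm -f /var/run/myapp.pid
      ;;
    *)
      echo "Usage: $0 {start|stop}"
      exit 1
      ;;
  esac

The systemd side is a handful of declarative lines in a unit file, with the process tree tracked via cgroups instead of a pid file.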


Perhaps. My complaint is that systemd has not started making any more sense in the year(s) I've been wrestling with it. And it hasn't given me any benefit yet.

To most of us, it's change for change's sake, which makes life worse due to the learning curve. If it made life easier for the average end user or the average sysadmin, it would have more defenders.


I don't want to start this discussion but it largely comes down to taste. Some people don't want to run dbus or have no use for journald. If you put your daemons into runit you can have a small and lean image without dbus.
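
(For reference, a runit run script really is tiny; a sketch for a hypothetical daemon:)

  #!/bin/sh
  # /etc/service/myapp/run -- runit supervises whatever this execs
  exec /usr/local/bin/myapp 2>&1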

Personally I loathe network manager in RHEL7 but that's not a systemd problem.


If you don't like systemd in general that's one thing, but to claim that systemd only benefits desktops is just wrong. I'm just tired of the "systemd's only goal is boot speed" argument. Too many people want to join the crowd and get their old-school Linux user credentials by bashing systemd without understanding or acknowledging the benefits.

sysvinit maintainers must love systemd. Before systemd came along there were lots of complaints about what a terrible hack sysvinit is. Now it's a paragon of simplicity and stability without having actually changed at all.


If I had a dollar for every time a systemd supporter compared it to upstart without provocation, I'd have exactly zero dollars.

If I had a dollar for every time a systemd supporter reminded us that we need systemd because runsv was a terrible hack, on the other hand...


As a neutral party, one thing I'll say about systemd is at least they understand what a dependency is. I still haven't figured out why the upstart people think it's a good idea for a service to magically start just because its dependencies are fulfilled. Maybe they think system services are like proteins or something.


Someone tell that to RedHat. They consider the network "up" once the device gets link, not when the system gets an IP address. Even in environments without spanning tree, this is a problem due to massive parallelization -- spawning 30 processes 5ms after the network is "up" but doesn't have an IP from DHCP yet.

The solution? Put a sleep 7 in there, thus making the entire process slower than the previous solution.
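
In unit-file terms that workaround looks roughly like this (a sketch, using a made-up drop-in name):

  $ cat /etc/systemd/system/myapp.service.d/sleep-hack.conf
  [Service]
  ExecStartPre=/bin/sleep 7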



So either I use systemd's sleep or my sleep, but either way it's broken out of the box. Great job!

EDIT: RHEL/CentOS only supports NetworkManager for this. Which sucks if you don't want to run NetworkManager (for a litany of reasons).

  $ yum whatprovides "*/systemd-networkd-wait-online.service"
  No matches found
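
For completeness, the "supported" route (if you do run NetworkManager) appears to be ordering units after network-online.target, which pulls in NetworkManager-wait-online.service. A sketch, for a hypothetical myapp.service:

  $ cat /etc/systemd/system/myapp.service.d/wait-for-network.conf
  [Unit]
  Wants=network-online.target
  After=network-online.target

Without NetworkManager (or a packaged systemd-networkd-wait-online), there's nothing for network-online.target to actually wait on, which is the gap being complained about.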


Wow, that's disgusting.


You know, the funny thing is I never exactly considered myself a systemd supporter. I just get tired of the comments from people bashing systemd which are almost always emotional and political rather than technical in nature. Maybe there's good technical arguments for some other system but they get drowned out by the "systemd is a nightmare with zero benefit that was written to make linux a desktop OS" comments..


It's not the best architected system of all time but it's hardly a nightmare. Too monolithic though, and they like to violate the UNIX philosophy.

It wasn't really any better than upstart either, which is why I was mad about seeing ubuntu rip it out in order to replace it with systemd.

The main thing that concerns me most is that systemd is now dominant on virtually every linux platform and without competition it is going to stagnate.

Upstart wasn't perfect but it was decent and it was competition, it worked well for ubuntu and it would have been a better choice for debian (unfortunately the vote was split 50/50 and the tiebreaker guy decided on systemd).


It could well be. I don't have the technical understanding or time to invest in order to make a technical criticism. But I've been fighting it for a year or two, and it hasn't started making more sense yet!


It's because every argument I've seen against systemd is by people who think systemd betrays the Unix philosophy and think we should go back to sysvinit (see rconti above). Upstart had a lot of problems, and the only good technical arguments I've seen in its favor are better compatibility with other kernels. Please point me to a good "Upstart is better than systemd" technical article.


> think we should go back to sysvinit (see rconti above).

Except he didn't. Another good example of the straw man arguments I was talking about. Hating systemd doesn't automatically make you a sysvinit fan.

>Upstart had a lot of problems

The fact that you didn't mention one of them demonstrates just how much you know about this (nothing).


Another good example of the level of technical detail I see in comments from systemd haters, thanks.


Yes, wouldn't a server want to depend on the DESKTOP BUS?

Ok, snarky, yes. Still want to know the answer :)

Also, why is it that the ONLY alternative ever mentioned is sysvinit? Systemd is not the 2nd init system ever invented, nor even the 4th or 5th. There are simpler alternatives.

Binary log files? Don't even get me started.


You think if it wasn't for systemd your server wouldn't need dbus? You wouldn't be running docker on that server either..


I understand that there are people who don't have a use for journald. It strikes me as odd to look at journald's feature set and conclude that this is being driven primarily by desktop support to the exclusion of servers, though.


Frankly, I suspect journald is a case of one brother backing the other.

The basic concepts journald is built on are lifted straight out of a doctorate thesis written by Lennart Poettering's brother.

Thinking about it, I don't think systemd really has a goal. It seems to be a hodgepodge of Poettering itch-scratching, Gnome/Fedora/Freedesktop NIH-ing, and a bunch of "because we can" sub-projects.


Why on earth would you think Redhat, of all companies, wants Linux to become a desktop OS? They literally make all of their revenue on Linux server deployments.


Red Hat does not use a FUSE-based file system. It's LVM thin provisioning (thin-p) based.


Ouch, as if I was not already suspicious of RH activity.

Their threat sounds almost Microsoft-like. I have run into various claims where companies were worried they had to bend to MS's wishes, or MS would start shipping a clone product.


Sounds like Microsoft from 15-20 years ago, not in the past decade.


There was a good discussion last night at the SF Microservices meetup about how early we are with containerized applications. The company I work for, Giant Swarm, provides a containerized stack solution and we run people through a short survey when they sign up for our shared cluster offering. The survey shows that about 15% of companies (at least for peeps showing up on our site) have some type of containerized application in production. That's not a big number. Yet.

The problem being described in this issue is one person's experience with Docker, but I've heard similar stories from others about similar projects in the past. Looking at you, OpenStack. Yes, things are breaking fast with Docker - I've experienced them myself. Yes, new companies and offerings are joining the ecosystem daily - which can be maddeningly confusing for users trying to understand the technologies. This creates a sense of instability when so much is going on at once.

I think the problem is that, when new ways of doing compute things spin up, they become highly disruptive to both the people using older technologies AND the companies developing the new technologies. Given the new way of doing things usually gives an advantage to those using them, the challenge comes from trying to put those technologies into production before they are finished, all the while keeping an eye on shareholder value.

I call this the "problem cloud". It's really a people problem though, so maybe it should be called "problem people". :)


This is especially complicated in an open source project like Docker. Docker seems to be locked between a rock and a hard place in that:

a - They are post-1.0, and there is an expectation that they won't break compatibility. This has meant a real slowdown in development, and it is now very hard to get a pull request through.

b - Docker is totally immature and those pull requests are needed. It is lacking basic functionality like the ability to mount and unmount volumes at run-time. It is also slow, and suffers from a codebase that was thrown together practically overnight. So it would actually be best if Docker Inc. went back to breaking things and getting things done.

Unfortunately, what seems to be happening is that Docker is failing to avoid breaking things while at the same time remaining paralyzed.

For subuser, there is yet a third problem, and that is that Docker is Docker Inc. Subuser being free software, it's not a nice feeling to know that my project is at the mercy of a single company.


> (This is in response to the fact that Docker writes to cgroups, and systemd would like to be the "sole writer to cgroups" some time in the future.)

"Would like to" understates it a bit:

http://lwn.net/Articles/555920/

The kernel plans on deprecating the API that allows multiple writers to cgroups, thus requiring there to be a sole writer to cgroups. (Also, the systemd developers say this wasn't what they want, it was in response to the decisions the kernel maintainers have made in changing the API.)
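
To make the contrast concrete, here is a rough sketch of the two styles of talking to cgroups (names and limits are made up; exact systemd property names vary by version):

  # "Multiple writers" style: poke cgroupfs directly, as Docker does today.
  mkdir /sys/fs/cgroup/memory/mycontainer
  echo 536870912 > /sys/fs/cgroup/memory/mycontainer/memory.limit_in_bytes
  echo $PID > /sys/fs/cgroup/memory/mycontainer/tasks

  # "Single writer" style: ask systemd to set up the cgroup instead,
  # e.g. by launching the workload in a transient scope unit.
  systemd-run --scope -p MemoryLimit=512M --unit=mycontainer my-workload

The outcome is the same either way; the disagreement is about who owns the cgroup hierarchy.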


@cwyers I understand this problem. However, this wasn't an acceptable situation for Docker to be in, because there is seemingly no choice "B" in which Docker communicates with cgroups in a non-init-system-specific manner. I accept that this is a hard technical problem (preventing race conditions while allowing more than one program to run on your system, to put it flatly), but tying Docker to systemd just wasn't an acceptable option.

I think that RedHat IS at fault in this, because their design choice of designating systemd as the single writer was extremely anti-standardization. Imagine if KDE wanted to be the single cgroup writer. Would people accept the fact that they would have to use KDE and interface with KDE in order to do containers on Linux? RedHat could have created a service "cgroupwriter" which would have communicated via D-Bus or something. A "cgroupwriter" service would be far less divisive because it wouldn't be entangled with other issues such as the choice of init system.


The linux kernel is breaking the way Docker currently uses cgroups. And systemd is providing a fix for systems running systemd, but this is their fault because systemd is not also providing a fix for systems that aren't running systemd?


> I think that RedHat IS at fault in this, because their design choice of designating systemd as the single writer was extremely anti-standardization. Imagine if KDE wanted to be the single cgroup writer.

This isn't a very good analogy.

> I think that RedHat IS at fault in this, because their design choice of designating systemd as the single writer was extremely anti-standardization.

This doesn't make sense. When the change hits the kernel, all OS maintainers will need to choose a parent process for all cgroup control. Supporting this feature in systemd makes perfect sense and fits very well into the process hierarchy.

> Would people accept the fact that they would have to use KDE and interface with KDE in order to do containers on Linux?

Users and distributions who do not want to use systemd (their choice!) can use any other init system they want. They are also free to implement any other solution as their cgroup controller. Their situation is not changed one iota by this additional feature in systemd. They would have had to implement a cgroup controller either way.

In short, you are not required to use systemd if they make this choice. You may not have the benefit of someone else implementing the solution for you. But unless you're paying developers for the work, the guarantee of others doing work for you in the way you want them to is never provided.

> RedHat could have created a service "cgroupwriter" which would have communicated via D-bus or something.

They could have, but it would have added needless complexity. A significant number of the tasks systemd needs to perform involve dependencies between cgroups, so it might as well manage them in the init system.

This isn't like a hard dependency on systemd. It just means that someone will need to author other solutions if systemd is not wanted.


> Users and distributions who do not want to use systemd (their choice!) can use any other init system they want. They are also free to implement any other solution as their cgroup controller. Their situation is not changed one iota by this additional feature in systemd. They would have had to implement a cgroup controller either way.

The trouble is that Docker doesn't get to choose what cgroup controller "everyone" will use; it has to support them all. And the idea of interfacing with systemd on cgroups wasn't very appetizing because it was guaranteed to lead to Docker needing special compat code for whatever non-systemd version of cgroups control came out.

While I agree with @cwyers that kicking the RedHat folks out doesn't help, it certainly felt to me like that particular discussion could have been phrased differently so that no conflict was created. For example, if the RedHat folks had said that there is going to be a single-writer policy and systemd is a cgroup writer on systemd systems, that would have been far less combative than saying "and systemd is that writer". I know that the distinction is subtle. I'm not sure if I should try to find the original thread; it was written as comments on diffs in a pull request, so it would be rather hard to find, and I also feel a little bad about pulling out the exact names of the people involved, because I think the actual people working for RedHat are fine people who I don't want to attack, and that this is somehow a problem of RedHat corporate policy and not of bad apples.


I'm still not seeing the logic in your rationale.

> The trouble is that Docker doesn't get to choose what cgroup controller "everyone" will use; it has to support them all.

Yes, this is an unfortunate consequence of the new kernel changes. Docker will need to interact with a controller. That controller might be different on different systems. Ergo, if Docker wants to support those systems it needs to support multiple controllers.

> And the idea of interfacing with systemd on cgroups wasn't very appetizing because it was guaranteed to lead to Docker needing special compat code for whatever non-systemd version of cgroups control came out.

Yes. But this would be the case no matter what controller was chosen for a particular distribution. For example, if Red Hat decided to implement a controller called Skunk, one could say

> And the idea of interfacing with skunk on cgroups wasn't very appetizing because it was guaranteed to lead to Docker needing special compat code for whatever non-skunk version of cgroups control came out.

Same situation. No matter how you slice it, Docker now needs to support compatibility code for interacting with cgroups on systems it wants to support.

> I know that the distinction is subtle

It seems pretty meaningless. It's beginning to feel like the opposition, as described here, had little to do with anything technical or even design oriented and is more religious in nature.

Would the same opposition have been made against my hypothetical skunk controller? If not, seems more like an editor war than a design or engineering disagreement.


> And the idea of interfacing with systemd on cgroups wasn't very appetizing because it was guaranteed to lead to Docker needing special compat code for whatever non-systemd version of cgroups control came out.

Unless Docker is planning to move to running on BSD jails or fork the kernel, they have to do this no matter how unappetizing they find it, once the kernel moves to single writers on cgroups.


FreeBSD is working on Docker support: https://wiki.freebsd.org/Docker


Have fun with that. The Linux User Space will not work on a BSD kernel, so you would be forced into a BSD user space.

Now you are back at square one. There is nothing wrong with the BSDs, but it's just another different thing to learn and work with. I am skeptical this is the right way of looking at the problem.

Bottom line: the Linux kernel changed, so Docker has to deal with it.


> The Linux User Space will not work on a BSD kernel

That's wrong, more or less. Read about Linux emulation on BSDs. Of course some low-level utilities won't work, but most of userspace does.


Or put another way. The kernel is deprecating multi-writer support for cgroups at some point in the future. systemd has announced they will provide an API for cgroups, where systemd serves as the single writer. On systems not using systemd, some other service will have to serve as the single writer for cgroups. They may or may not implement the same API. (I don't know that anyone has announced plans for an alternative to systemd's cgroups API.) Once this changeover happens, systemd's API will be the only way to use cgroups on systemd systems. This isn't portable, but that's not a problem with Red Hat's contributions to Docker, it's an issue with systemd. Kicking RHEL's contributors to Docker out doesn't actually solve this problem.


> it's an issue with systemd

I wouldn't even say this is an issue with systemd. It just happens that systemd (or logind actually) is what was selected as the cgroup controller on distributions that use it.

Mostly, it's an issue with Docker (and all other cgroup clients) that are currently using their own controllers.

It doesn't matter one bit what the distributions chose as their new controller, the existing clients would need to adapt to the new environment.

If anything, having systemd handle this has simplified their jobs significantly, since they can support a huge swath of distributions, capturing the overwhelming majority of users' systems, by targeting a single API.


> (Also, the systemd developers say this wasn't what they want, it was in response to the decisions the kernel maintainers have made in changing the API.)

Yeah, right. Sounds like a bunch of PR double-talk.


Docker isn't perfect, but you can't argue with the fact the container ecosystem was rather stagnant until recently.

For years I knew of openvz (weird? needs custom kernel? supported?) and lxc (seems to be low level? how do I get started?) and just used xen/kvm on servers and virtualbox/vagrant locally.

It's funny when Bryan Cantrill talks about how solaris+zones had solid containers years ago.. and he is right, but the end user UX was abysmal [1].

And then docker comes along and shows...

That this can work:

  $ docker run -t -i --rm centos:7 /bin/bash
  [root@2214e639debe /]#
That building containers can be easy using Dockerfiles instead of 'make random changes to this container and then clone it forever'

That with a few cli options you can have persistent volumes and locked down networking.
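
A small illustration of both points (the image name and paths are made up):

  $ cat > Dockerfile <<'EOF'
  FROM centos:7
  RUN yum install -y httpd
  CMD ["httpd", "-DFOREGROUND"]
  EOF
  $ docker build -t my-httpd .
  $ docker run -d --name web -p 8080:80 -v /srv/www:/var/www/html my-httpd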

So, sure, docker may have some issues. In a year we may all be using rkt or rocker or who knows what. But for now, docker is here and people are using it for things.

[1] I ran opensolaris at home for a few years. I think I only ever made 2 zones. I never got patching to work and had to choose between a sparse zone that wouldn't work right or a gigantic full zone that I couldn't manage to patch.


I'd like to point out that subuser is not moving away from Docker. It seems pretty clear to me that we don't have any other good options. But the problems that I mentioned remain, and these problems really are causing subuser to break. I guess that's why the title of the bug report is "Docker isn't perfect" rather than "Docker svcks 1111" ;)


I just wish there was a -v during build and that -v supported any arbitrary mount like fstab. It would be nice if -v mapped directly to the mount namespace so could do things like:

  iscsi:/data /data
  nfs://10.0.0.1:/data /data
  smb://10.0.0.1:/data /data
  ebs://10.0.0.1:/data /data

How much cooler would that be :-)
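
As far as I know, the closest you get today is mounting on the host and bind-mounting that into the container at run time, e.g.:

  $ mount -t nfs 10.0.0.1:/data /mnt/data      # the host does the mounting
  $ docker run -v /mnt/data:/data centos:7 ls /data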


I don't use Docker so I don't know how opinionated this issue is, but it paints a bleak picture of the project.

Breaking stuff after the 1.0 release should generally be reserved for major versions (i.e. 1.0 -> 2.0), and incorrect basic documentation and specs are embarrassing for a project of this size.

Is this just a case where the project has been adopted too soon and developed too fast? It seems like Docker still has maturity issues past version 1.0 (based on a number of other negative responses to the project I've read elsewhere as well).


I can clue you in: it's extremely opinionated. After 1.0 there haven't been huge breaking changes.


DevOps/Infrastructure guy here at a startup that uses Docker in production. Docker has consistently broken in point releases after 1.0.

"Why yes, I'd absolutely love to use your immature containerization ecosystem to manage mission critical infrastructure".

To be fair, it was my mistake for not pushing back hard against Docker. Lesson learned.


I wrote a list of the changes that, off the top of my head, had broken things. Each of those changes meant that subuser couldn't run at all.


If I got it right, the points made in this article are:

* Docker breaks other software

* Docker breaks its own API at random points between major versions

* Documentation is incorrect

* The project's management is bad at working with the community

I was thinking of trying Docker, but I think I'll stick to LXC for now.


There are other things to try out as well, if you're into experimentation: FreeBSD's bare Jails[0], JetPack[1] (a FreeBSD-based AppC implementation) and iocage[2] (a Docker-ish userland wrapper for Jails and ZFS). Give them a chance; it could be an eye-opening experience, especially since FreeBSD/Jails are more powerful, stable and battle-tested than bare Linux/LXC in my opinion.

[0] https://www.freebsd.org/doc/handbook/jails.html

[1] https://github.com/3ofcoins/jetpack

[2] https://github.com/iocage/iocage


It breaks its own api in minor versions, actually. There hasn't been a major version since 1.0.


I think that is what the OP meant: "between major versions" meaning breaking changes happening in updates between 1.0 and 2.0, i.e. 1.1, 1.2, etc.

That said, is Docker using semver? It doesn't look that way, so is it written somewhere that this is breaking some standard?


@mattkrea Honestly, it never occurred to me to check whether Docker uses semver. I had simply assumed that it did :P


I have to admit my use of Docker is quite a bit different than yours, and while I definitely understand the concerns/complaints, if they aren't using it can we really be mad when there are changes like this?

I only use Docker on AWS Elastic Beanstalk, and they mostly only update for security issues (Beanstalk is using 1.6 as I type this).

If I recall, even the config file format changed between 1.6 and 1.7.


Is it odd that no one from the company has responded to this in two days? I don't suggest "entertaining" responses. Rather, they could have chosen a particular complaint or two and said, "here's how things will work differently going forward, and here are the new issues in our tracker that reflect this commitment".

It could be that they just haven't seen this. No one was @-ed.

ISTM coreos/rkt may be the way to go for subuser. In my admittedly-shallow tests, it seemed simpler, easier, and better than docker, and it can consume docker files.


Don't worry, they'll have an overly-aggressive response soon enough.


Escape Docker, board your rkt and blast off for unikernel space. Be sure to avoid the interstellar systemdaemons as they are now sole proprietors of earth bound cgroupings.

In all seriousness I had never heard of Subuser before this thread and it looks very promising.


I wish the suggested solution were something other than "Rewrite Docker". http://www.jwz.org/doc/cadt.html


@nikanj, Gnome1 vs Gnome2 vs Gnome3 wasn't about control. "Rewrite Docker" would be a question of control. To some degree, I was publicising this issue in order to gauge interest in such a break-away project, which would solve problems with Docker and take control away from Docker Inc. I also publicised it because everyone loves issues like this and there is no such thing as bad press :D



