Docker 1.5: IPv6 support, read-only containers, stats, and more (docker.com)
212 points by mohamedbassem on Feb 10, 2015 | 86 comments



> Open Image Spec ... As we continue to grow the contributor community to the Docker project, we wanted to encourage more work in the area around how Docker constructs images and their layers. As a start, we have documented how Docker currently builds and formats images and their configuration. Our hope is that these details allow contributors to better understand this critical facet of Docker as well as help contribute to future efforts to improve the image format. The v1 image specification can be found here: https://github.com/docker/docker/blob/master/image/spec/v1.m...

This is a great start, and I hope this doesn't sound negative, but this likely wouldn't be here if CoreOS hadn't shaken things up the way they did with ACI/Rocket.


> but this likely wouldn't be here if CoreOS hadn't shaken things up the way they did with ACI/Rocket.

It's not "likely", it's absolute.

The so-called Docker "Standard" was cooked up overnight the weekend ACI was released. Prior to ACI's release, Docker took active steps to prevent there being a unified standard... they were petrified that a unified image format standard would allow someone to swoop in and eat their lunch. Docker lived in a world where nobody else would dare make a competing container system.

It seems to have all started with this github issue[1]. Docker employees are initially interested in collaborating with CoreOS and helping to co-develop an open standard -- then Shykes shuts it all down.

They call Docker an implementation of their "Docker Standard", but it's really the other way around -- the "standard" was an afterthought -- and to that end it's unproven, untested and likely incomplete. There are no other implementations of their "standard" -- so the holes have not been found. ACI already has several working implementations, and many more in the oven at this moment. They have contributed immensely back to the open standard and helped evolve it into a community-driven specification -- not one dictated by any single organization or source.

[1] https://github.com/docker/docker/issues/9538


Shykes doesn't "shut it all down". And to be honest, I think he's got a pretty good point. If anything, he encourages the Docker format to be better documented:

> But here's a suggestion for you. If you were to complain that Docker's image format and runtime specification, as massively adopted as it is, is not appropriately documented, and it could be made easier to produce alternate implementations - then I would completely agree with you. In response, I would encourage the project maintainers to improve the specs documentation based on your suggestions. I would also encourage you to join the effort, and offer my help in the process.


> If anything, he encourages the Docker format to be better documented

This was after the Docker project deleted the original spec [1] in September 2013 -- ACI wasn't publicly announced until December 1st, 2014, and then Docker rolled out their "standard" in a hurry on December 7th, 2014.

https://github.com/docker/docker/commit/eed00a4afd1e8e8e35f8...


It was a container manifesto that got deleted in the commit you've mentioned. I don't see any spec there.


That's not a spec. I've got no side in this, but what you're saying happened and what actually seems to have happened, based on reading your links, don't match up, in my opinion.


Got the same impression. Now what is needed is to split the docker daemon into smaller parts, each running with the least privilege it needs.

As an example: yesterday, while trying to see if I could implement `docker build` using available commands, I found out that the docker daemon shells out to git on the server side if a remote url is given. That seems risky, since the server is running with a lot of privileges that aren't needed for that task. https://github.com/docker/docker/blob/master/builder/job.go#...
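
For reference, this is the kind of invocation that triggers the server-side clone -- `docker build` accepts a remote git URL, and it's the daemon, not the client, that runs git (the repo here is just the old docs example):

    # the daemon itself clones the repo, then builds from it
    docker build github.com/creack/docker-firefox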


> trying to see if I could implement `docker build` using available commands

This should be possible soon! The only command missing is a symmetric `docker cp`. I've got most of the implementation ready to be reviewed in a pull request [1] (closed now, but I will reopen it soon).

[1] https://github.com/docker/docker/pull/10198#issuecomment-733...


The split into smaller binaries is something I'm very open to. We just want to make sure the user experience doesn't get sacrificed. Here's a recent discussion thread on the subject: https://groups.google.com/forum/#!topic/docker-dev/mzpAga_XZ...


The way you phrase that suggests that user experience is more important than security. Care to clarify?


That's a ridiculously loaded question. User experience is balanced against security all the time.


> The way you phrase that suggests that user experience is more important than security

Of course it is. Steve Yegge put it best:

> But I'll argue that Accessibility is actually more important than Security because dialing Accessibility to zero means you have no product at all, whereas dialing Security to zero can still get you a reasonably successful product such as the Playstation Network.

Ha ha. You can always layer security around how you use Docker, but without any usability what do you have? Chop the network cable and you get the tradeoff you are looking for.


Your response is terrifying. Treating core, must-not-fail infrastructure systems as having the same security needs as a consumer-facing video game platform is, to me, very much misguided, and I think it betrays the difference in mindset between "product people" and "platform people": as a platform person, my stuff must not fucking break, and vendors that build stuff that makes things more likely to break are poisonous.

The continually lackadaisical approach to security and reliability (basic features around reliability being post-1.0 is unconscionably bad, and user privilege segmentation, as noted, still hasn't happened), in ways I can't patch around, is one of the reasons I'm honestly hoping for a Docker competitor to arise that would trade that growth-fuel "user experience" for security, reliability, and a less compromised ethos. (I am considering a move of my systems to OpenBSD, but in AWS that's difficult.)

This leads me to thinking about a larger problem in the current ecosystem: core infrastructure being tied to a VC-backed, growth-as-a-prerequisite startup (where you really can convince yourself that making it popular is more important than making it right) scares me deeply, because building the right thing becomes less incentivized than building the popular thing. I hope this is not the way of the future, because things already work poorly enough and are insecure enough as it is. =/


I'm happy to address any specific concerns you have about security. It's an important topic for me.

In this particular comment I could only spot one specific criticism: the lack of built-in user segmentation, so I will talk about that. I agree that this would be a nice feature. But we decided to not rush it, and instead tell operators to rely on the underlying system features for authentication, segmentation etc. In practice that means:

* If you have an https auth infrastructure in production, drop the appropriate middleware in front of your docker daemon, and rely on that.

* If you rely on ssh keys and unix access control in production, keep the default configuration of listening on a unix socket, and use regular unix users to decide who gets to talk to the socket (a sketch follows this list).

* If you run trusted payloads, or if you run untrusted payloads with acceptable mitigation in place (no root inside the container, apparmor/selinux, inter-container networking disabled etc.), then go ahead and pool all your machines into a single swarm.

* If you run untrusted payloads, then map each trust domain to an isolated group of underlying machines. This is what Amazon, Google and others do when running customer payloads on Docker for example.
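
A minimal sketch of the unix-socket option above, using plain unix group ownership (the group and user names are illustrative):

    # restrict who may talk to the Docker daemon's socket
    sudo groupadd docker-ops
    sudo chown root:docker-ops /var/run/docker.sock
    sudo chmod 660 /var/run/docker.sock
    sudo usermod -aG docker-ops alice   # alice may now reach the daemon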

It would be a nice feature to segment Docker API endpoints by user, so that different users have different views (and different levels of access) of the same underlying daemon. But that requires implementing an authentication and authorization layer, and it requires changing some aspects of the Docker API which imply privileged access to the system -- for example, 'docker run -v /foo:/bar' mounts an arbitrary host path into the container. This represents serious engineering work, and as much as I would like to make you happy tomorrow, I don't think you will be any happier if we ship an unfinished feature.


Great post by Fred Wilson on Perez's technological surge cycle http://avc.com/2015/02/the-carlota-perez-framework/

She predicts every major technology has a breaking point and turning point.

I can't see why the same isn't true for docker. Rapid adoption leads to growing pains, which leads to introspection, which leads to fixing issues to create a better product.

If you've been around the block, it's hard to see Rocket as competition. There is a lot of sunk cost in Docker already (Amazon, Google, Joyent, lots of startups). If it's not obvious to CoreOS already: Docker will be the predominant way we package our applications for the next 5-10 years.


> Docker will be the predominant way we package our applications for the next 5-10 years

That same effect will also drive a revolution in cloud infrastructure. I call the effect the "problem cloud" because it's a pain in the ass sometimes, just like a teenager.


The migration cost from Docker to Rocket will/would be trivial compared to the cost of getting things to work with Docker-style containers in the first place.

I'm sure Docker will be important for a long time, but consider that it's still a bit player in application packaging and deployment - far more has been invested in, e.g., AMIs for EC2, or tooling around VMware, just to mention two - so it's by no means clear it will manage to maintain its lead in what is still a tiny, new space.


> Docker will be the predominant way we package our applications for the next 5-10 years

Disagree. I see their scope as too narrow, leading to a design that has far too many already evident growth problems to resist strong, broader-scoped competition.


Docker Issue #1988 is still an issue.

While it is still an issue, and still neglected (or, more likely, deliberately ignored for profit), Docker will be a red flag for any real corporate use.


I agree it's still an issue. Enterprise sysadmins should be allowed to block access to external registries, including Docker Hub. There is nothing contentious about it, and it has nothing to do with profit. If you send a properly implemented patch for it, the maintainers will merge it.


That's great news. What do you consider a "properly implemented patch"? I would think the simplest and most bulletproof patch would simply allow the default index (index.docker.io) to be reconfigured to something else in docker.conf. Would you support a patch that did that?

Edit: And perhaps https://registry-1.docker.io/v2/ as well?


I think the best way is to allow a "whitelist mode" where only an explicitly specified list of URLs are allowed to be reached. Everything else would be blocked by default. This should give ops the peace of mind they need.

Note that this is an ACL change, and not a namespace change. That is important because we want image names to have the same meaning everywhere, regardless of site-specific configuration. So for example, "docker pull ubuntu" should always mean "install ubuntu from the official docker library". This is crucial to the developer experience and to respecting the principle of separation of concerns between dev and ops. However, if ops chooses to block access to the standard library, then "docker pull ubuntu" will fail with "access blocked by your administrator", which is totally acceptable. What we don't want is the operation silently substituting a site-specific image, without the knowledge of the end user, thereby breaking their build in a thousand invisible ways.

I hope this helps. Does this mean I should look forward to a patch from you? :)


> However, if ops chooses to block access to the standard library then "docker pull ubuntu" will fail with "access blocked by your administrator", which is totally acceptable.

Actually, it isn't, at least in production, because it forces us to rebuild a lot of images that reference the standard library, when a more reasonable approach would be to mirror them.

The index/registry/image identity problem is by far the weakest part of Docker, and what appears most attractive about Rocket, in my opinion. There are pretty much zero cases where I, in my ops role, can allow production deployments to have access to the official Docker repository, because it opens the door to pulling in all kinds of stuff that has not been vetted (e.g. referencing the "latest" images, and having that change between dev signing something off and deployment), and it creates all kinds of obnoxious failure scenarios.

At the same time, I don't want devs to have the hassle of having to repackage all the images to point them to our internal registries, when we could easily mirror the images that have been tested.

So if there's no easy way to point the default somewhere else, what we'll resort to instead is increasingly adding firewall rules to block the official registry, coupled with DNS tweaks to make *.docker.io point where we want it, or patching the code.

Or switch to Rocket once it gets more mature, if Docker continues to make custom image management more troublesome than necessary.


I totally agree that mirroring of official images should be easier, and right now it's an obstacle to easier production deployment. This is why it's important to have cryptographically signed, self-describing images. Then it becomes irrelevant where you download them from, and anyone could host a public or private mirror. I am 100% in favor of it and we are upgrading the registry system to allow it. Happy to chat more on #docker-dev.


> What we don't want is the operation silently substituting a site-specific image, without the knowledge of the end user, thereby breaking their build in a thousand invisible ways

But this is not preventable in any way other than explicitly pulling an image verified against a signing key. Who knows what index.docker.io resolves to, or what it is serving?


I will happily provide a patch that implements the desired functionality :-)

However, I think people want the option of not using the docker registry. It is not under their control, it is slow, and it is a single point of failure. Having to recompile Docker to remove the magic constants is painful. I can see that the ACL functionality would be useful, but I don't think it addresses those concerns.

Even if you believe it is a bad idea to allow users to change the meaning of image names, you should allow users the freedom to choose their own path. I think this is a fairly basic tenet of open-source: that true innovation comes when people are allowed to try new things, even if they look like a bad idea at the time.


I totally agree that users should be given the freedom to choose their own path. For example that's why we added pluggable backends for storage and sandboxing very early, before even shipping 1.0 :)

At the same time, I care a lot about design and usability - and good design requires constraints. So there is a fundamental tension between flexibility and usability. My approach to this problem is to put usability first, and improve flexibility over time. The rationale is simple: you can always make things more flexible over time - but once usability is broken, it's almost impossible to fix.

So, to apply this in the topic at hand:

* Usability first: no matter what, don't break the predictable meaning of image names.

* Flexibility over time, step 0: allow admins to block access to the Hub. Since the standard library is only hosted on the Hub, this blocks access to the standard library. Note that you are not at all required to use the Hub. You can refer to any other registry, today, simply by "docker pull URL" (a quick example follows this list).

* Flexibility over time, step 1: allow hosting the standard library outside the Hub. The standard library is actually a community project, very similar to Homebrew. The fact that it can only be hosted on the Hub is a technical limitation. We should lift that limitation, so that even if you block access to the Hub, you can still access the standard library. This could be via a private mirror, community-controlled mirrors, or perhaps a bittorrent transport :)

* Flexibility over time, step 2: allow complete control over which images are downloaded from where. This means completely separating the naming of images from their transport. This allows a sysadmin to compose an image storage and transport setup that fits their needs: S3 for this namespace, high-performance NFS filer for that, public bittorrent transport for this authorized subset of public images, Hub for the QA department, etc. The key technical requirement here is to make images self-describing, and make cryptographic signature and verification mandatory.
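
To illustrate step 0: referring to a non-Hub registry already works today by naming it explicitly (the registry host and image name here are illustrative):

    # a fully-qualified name bypasses the Hub entirely
    docker pull registry.internal.example.com:5000/myteam/myapp:1.0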


> * Usability first: no matter what, don't break the predictable meaning of image names.

Give up any idea that you have control over this. You don't.

As a user, I don't want you to have control over this, because it is directly contrary to my interests. I want to be able to redirect requests for "docker pull ubuntu", as per your example, to somewhere I have full control over, so that I can choose to get exactly the image I expect to match the "ubuntu" image name, rather than whatever the index will currently return for that name - and do so without having to change every reference to that name to something relative to my own registry.

You can make this hard, and have it continue to annoy users, or you can fix this, but this is the biggest usability problem with Docker to me the way it stands now.

I find it hard to take you seriously when you claim usability first, and then claim that there is a predictable meaning of image names: that is only true if by "predictable meaning of image names" you mean "I don't really know what I'll get". This is especially true because of the lax use of tags all over the place.

The irony is that your "key technical requirement" for "flexibility over time, step 2" is far more important in the current situation than it actually would have been if we had the flexibility you describe for that step:

If Docker provided that flexibility today, then I could easily have complete control over exactly which images get accessed if I wanted to. Instead I need to resort to firewall rules and/or DNS hacks or changing the source if I want to prevent accidentally pulling in images different from what I expect.


> this is the biggest usability problem with Docker to me the way it stands now

> You can make this hard, and have it continue to annoy users

I have to respectfully disagree. From my experience talking to many, many Docker users, this is definitely not a usability issue. To achieve what you want, literally all you have to do is give an explicit name to your own Ubuntu image, under a DNS name you control. "docker pull mydomain.tld/ubuntu" will work out of the box, and you have full control over what goes in there. The overwhelming majority of Docker users have no problem with that. It's exactly how the Golang packaging system works, for example. It's also similar to the jar naming system with its mandatory reverse-dns notation. With this system you get a consistent user experience across all Docker installations, and you get flexibility. Can you point to a specific usage scenario where you found yourself stuck because of this design?

So, I don't believe you are actually complaining, as a user, about a usability issue. I believe you are annoyed, as a fellow domain expert, that we designed it differently than you would have. I respect the fact that there were different ways to approach the problem of naming and discovering images. We made a design decision, and so far the users seem to agree with it.

There is also a usability benefit of this design which is not obvious, but has a huge impact. If we allowed fragmentation of the namespace, the first thing that would happen is that every OS vendor and every cloud provider would start shipping Docker with a modified configuration, to override the meaning of "ubuntu" to mean "ubuntu in my walled garden app store". I know this because all of them are busy pressuring us into doing it. If they had their way, not only would it not solve any usability problems, it would fundamentally break the experience of using Docker because "ubuntu" would depend on which walled garden your particular machine is running in. This would fundamentally destroy the value of Docker, which is interoperability.

This doesn't mean I'm happy with the usability of Docker's image distribution in general. There are definitely issues which I look forward to fixing. For example, image signature should be mandatory. All layers should be content-addressed. It should be easier to extend a registry to customize authentication. And the standard library images should be easier to mirror.

EDIT: in another comment you mention the specific problem of mirroring official images in production. I agree that is a real usability issue for ops. We will fix it. But I think it's orthogonal to what you suggest here, which is freedom to fragment the namespace (and I believe is not a good thing).


I disagree, and I think this is doublespeak, but thank you for explaining this before I took the time to produce a patch which you would reject.


Can you explain why you disagree and why you think this is "doublespeak"?


The doublespeak here is that you say you would welcome a patch if it was "properly implemented", but then it turns out you mean "if it does ACLs and does not replace the default registry". You claim this is for the good of the users, but actually users are asking for the ability to replace the registry, and the primary beneficiary of this policy is the Docker Registry. You only want to disallow access to the registry if it provides an error message that places the blame on the administrator, so that in practice their users demand they turn it back on.

Anyway, I think it's clear that the patch that users want isn't welcome, so I won't be wasting any further time on it.


I think the problem is that we are talking about 2 different things. You want to access a particular piece of content without being forced to connect to a particular server. I want to avoid the same name designating completely different pieces of content depending on factors outside the control of the end user.

These are both good goals. It's possible to reach both. I just want to make sure we don't sacrifice one for the other.

> the primary beneficiary of this policy is the Docker Registry

That makes no sense. When you download an official image from Docker Hub's servers, you are not charged in any way, and you are not required to create an account. The hosting and bandwidth costs are enormous, and there is no business benefit. The only reason the Hub hosts these images is because it improves the experience of using Docker, which in turn creates a larger market of Docker users to sell various services to. It is absolutely in the company's interest to allow for mirroring of the standard library, so that the burden of storing and distributing it is spread out across the ecosystem, and the company can focus more resources on things it can actually sell. It is also in the community's interest, because official images maintained by open-source maintainers shouldn't become unavailable if Docker Hub goes down, for example.

> I think it's clear that the patch that users want isn't welcome

Respectfully, it would be more intellectually honest of you to talk about the patch that you want. Just because you happen to have a soapbox on this forum doesn't make you a representative of "the Docker users", and it doesn't bless you with any particular insight on what they want collectively. That is a rather large group of people.


OK: I think it's clear that the patch that I (and some other users) want isn't welcome, so I won't be wasting any further time on it.



> Specify the Dockerfile to use in build

oh man it's finally here. I'm excited.
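
For anyone who hasn't read the release notes yet, this is the new `-f` flag to `docker build` (the file and tag names here are illustrative):

    # build different images from the same context
    docker build -f Dockerfile.dev -t myapp:dev .
    docker build -f Dockerfile.prod -t myapp:prod .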


Finally! Now I can use one Dockerfile for Fig development, another for staging and pushing to Octohost, and a final one for kicking up to AWS/CoreOS for deployment!

They all use the same code, they just have subtly different configurations and trade-offs. This is going to make that so much easier!


I feel the same way: the first time I used Docker and found out I couldn't specify which file to use, it felt inconceivable.


Oh yeah, being able to specify dockerfiles is golden. We actually use Makefiles to copy docker files from different directories for building, then copy them back. What a hassle it has been.


You could always use the "wrapper script pattern", e.g. http://blog.james-carr.org/2013/09/04/parameterized-docker-c... or use a ready-to-go solution like Tiller (https://github.com/markround/tiller) if you don't mind the overhead of ruby in your containers.


I ended up having to run an nginx container with a custom config file for each different environment. I've been waiting for this as well...


Couldn't you use symlinks that you create/unlink at build time?


Docker explicitly forbids usage of symlinks.

But there was a workaround: tar the entire directory (dereferencing symlinks with `h`) and build your container from the archive, like `tar cfh - * | docker build -`.

Hopefully that wouldn't be needed now.


I've been doing research into Docker, because the idea of making a dashboard to manage entire apps quickly and easily is very compelling for building and managing rapid prototypes. With the addition of a stats API and parametric Docker builds, this appears to be a realistic use case.

I still haven't found a good answer to whether you can embed Docker images within a parent administrative Docker image, though, in order to achieve ultimate portability. Who Contains the Containers?


Not entirely sure what you're asking, but you should probably look into "schedulers" and other tools built on top of such ecosystems. Here are a few to get you started:

Docker ecosystem: swarm and/or compose

Mesos ecosystem: Chronos, Aurora, Marathon

CoreOS ecosystem: fleet

Hashicorp: Terraform

Amazon Container Service: works with the above, will likely build their own simple one in the near future

This is less about embedding images and more about managing/"scheduling" them.

Edit: Zikes mentioned Kubernetes, as well.


There's also Lattice: https://github.com/pivotal-cf-experimental/lattice

It is extracted from the next-generation runtime of Cloud Foundry, known as Diego.


Terraform is the odd one out; it's an "infrastructure as code" tool, like Amazon CloudFormation but with pluggable providers.



I think that's what CoreOS[1] is intended for.

[1] https://coreos.com/


CoreOS manages the backend for containers efficiently, but it would be helpful to have a dashboard UI for diagnostics/admin for each child container.


You might want to look into Panamax and Shipyard for nice UI on top of Docker management. http://panamax.io/ http://shipyard-project.com/


Those examples are more of what I had in mind. Clearly my research was insufficient. :p thanks!


You might want to look at Google's Kubernetes, then.

http://kubernetes.io/


Named Dockerfiles support is a nice addition. I usually split up my projects with one (or more) container running the app, and a separate container running Grunt to compile all the front-end libraries. It was a minor annoyance to separate these into different directories simply because a Dockerfile could only be named... Dockerfile.


Read-only flags? Streaming stats? Docker's API gets more and more powerful every day! Very excited about this release.


You can rename containers now! `docker rename OLD_NAME NEW_NAME`

You can have a container named `service_prod`, deploy a new version as `service_staging`, then shuffle them with renames if the staging version is effective.
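
Something like this blue/green-style shuffle (the container names are illustrative):

    # promote the staging container once it checks out
    docker rename service_prod service_old
    docker rename service_staging service_prod
    docker rm -f service_old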


IPv6 is really a 1.0 feature.

I know how to build it, and I know it does `go test`; but it makes me a little :( that there's no public CI listed in github.com/docker/docker.


They link to their jenkins with the little "build passing" badge near the bottom:

https://github.com/docker/docker#contributing-to-docker

https://jenkins.dockerproject.com/job/Docker%20Master/

They used to use drone.io, but they recently removed it:

https://github.com/docker/docker/pull/10519

I like the creative branch naming.. "jfrazelle:burn-in-a-fire-drone" :)


Does anyone know the status of user namespace support? I would think this is a blocker for any PaaS to use Docker.


User namespaces recently got merged into libcontainer (which is used as the default backend for sandboxing in Docker). There is one technical question left to resolve before enabling it by default: how to abstract away the concept of UID mapping, and how it impacts sharing of volumes between containers. There is an ongoing technical discussion; I am optimistic that we will find a solid solution soon, but don't want to make any promises we can't keep.


In addition to user namespaces, and the obvious sVirt / SELinux bits Dan W from Red Hat has been contributing, what features / enhancements are necessary for docker to be considered mostly secure?

For reference, I run all of my apps as different containers with different users on my own server. I then have iptables rules to block outbound internet connections for containers that shouldn't make them. It was hilarious when someone hacked my wordpress install running in a container and managed to write out a perl daemon using a rexec bug in wp -- but when it tried to contact its C&C server, iptables dropped it and ossec notified me. In a perfect world, I'd do something similar, but root inside the container would map to something other than uid 0 on the host. I'm just curious if there is anything else you consider necessary to deem docker more "secure" than it currently is.
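
The outbound block is nothing fancy -- roughly this, with the container IP and uplink interface being illustrative:

    # drop anything this container tries to send out the uplink
    iptables -I FORWARD -s 172.17.0.5 -o eth0 -j DROP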


Wish they'd enable IPv6 by default


Yes, I don't see why they shouldn't get link-local addresses. It is also slightly odd that if there are existing router advertisements they don't get used; rather, you have to do manual config.


It does have link-local addresses by default. It's the more complicated setup of actually routing IPv6 addresses outside the current host that's not enabled by default.

https://docs.docker.com/articles/networking/#ipv6 has more of the details (and the discussion at https://github.com/docker/docker/pull/8947#discussion_r22534... is also useful)

Basically, we can't use existing router advertisements (as I understand it) because you also have to tell your current IPv6 router that the entire prefix you use for Docker needs to go to this one host as opposed to just the one IPv6 address that host would auto-assign itself via RA.

Since there's manual outside-Docker setup involved, we can't really automate this bit. If there's a nice clean way to do so, we're definitely open to a PR (I'd love to have something simpler myself)! :)
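
Concretely, the manual setup looks something like this (the prefix and addresses are documentation-space examples, per my reading of the 1.5 networking docs):

    # on the Docker host: enable ipv6 and hand Docker a routable prefix
    docker -d --ipv6 --fixed-cidr-v6="2001:db8:1::/64"

    # on the upstream router: send that prefix to the Docker host
    ip -6 route add 2001:db8:1::/64 via 2001:db8::10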


That is not very clear from the docs, then, which say "By default, the Docker server configures the container network for IPv4 only. You can enable IPv4/IPv6 dualstack support by running the Docker daemon with the --ipv6 flag" - that doesn't sound like link-local addresses by default...

Will have to take a look; I guess there are lots of potential setups. If you have a /64 per host it should be ok anyway; if you have a /64 for the whole network it might not be.


Link-local addresses are not meant for application-level use in IPv6, so bringing them up would only be confusing.

LL addresses are used for stuff like router advertisement, neighbour discovery (the IPv6 equivalent of ARP), etc. You can't use link-local addresses without extra gyrations in the socket API (scope ids), so they cannot be usefully passed to normal apps.


What would be the ideal complementary feature to happen outside Docker to make this easier? Make radvd cgroup-aware, maybe?


I am not sure you need to. If each machine has a /64, then each can assign that to the docker0 bridge and run radvd, which assigns addresses to the docker containers (from the /64 or e.g. a /80). I don't see why radvd needs changes; you should be able to even run it in its own docker container.

I think the issues are more around the fact that you may need to change existing addresses, plus potentially split up the /64 (or whatever) your machine has among docker bridges, plus you need to make sure the /64 is no longer on the external interface.
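
Roughly this sort of radvd config, assuming the /64 has already been moved onto the bridge (the prefix is illustrative documentation space):

    # /etc/radvd.conf -- advertise the prefix on the docker0 bridge
    interface docker0
    {
        AdvSendAdvert on;
        prefix 2001:db8:1::/64
        {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };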


We really wanted to, but there was a specific reason not to. I will ask our resident ipv6 expert to provide more details.


In other news, the issues list [0] just keeps growing by the day, apparently with few or no devs committed to ensuring show-stopping kernel interoperability bugs [1] get resolved in a timely manner.

I want to love you docker, but the experience of using your products is often sooo painful!

[0] https://github.com/docker/docker/issues

[1] https://github.com/docker/docker/issues/4036

Karma-wise I'm sure there will be hell to pay for my impolite outburst.. Sorry for the offense! Just calling it like I see it.


You are right that devicemapper (basically lvm snapshots for storage, used as an alternative to aufs/btrfs on many distros including Red Hat) has been a source of headaches for us. It is relatively obscure (we knew we were in trouble when googling libdevmapper error messages returned our own source code as the first result) and frankly not pleasant to work with.

But, the good news is that we have made progress recently. In fact the 1.5 release includes several improvements to devicemapper. And as of last week, Red Hat has volunteered a dedicated engineer to escalate devicemapper problems. So, fingers crossed things will be even better in the next release :)

Another alternative is to switch storage drivers: aufs and btrfs are popular options.
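
Switching is a daemon-level setting -- a minimal sketch, assuming your kernel and filesystem support the chosen driver:

    # start the daemon with a different storage backend
    docker -d --storage-driver=btrfs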


Thanks for following up Solomon! I'm glad you are at least aware of what is going on with this.


GitHub has a really great Pulse feature which gives you a quick look at the flow of PRs and issues in and out of a project: https://github.com/docker/docker/pulse/monthly

Docker has actually been closing issues much more quickly than they come in. Yes, there is a backlog, but that's to be expected from a new project getting a lot of usage. They definitely do not seem to be slouching, though.


Wider exposure means more issues logged, including dupes and wish lists.


I was a little disappointed to see 1.5 released without a fix for the "tty bug" in `docker exec`, and without mention of the same known bug/limitation in the documentation:

https://github.com/docker/docker/issues/8755

Hopefully it will get fixed soon. Other than that, I'm excited to kick the tires of v1.5 -- thanks Docker team!


> Open Image Spec

I'm wondering if this will eventually merge with the ACI that Rocket implements.


That seems unlikely, based on the bad blood developing between CoreOS and Docker. See the links below:

https://github.com/docker/docker/issues/9538

https://github.com/docker/docker/issues/10643


Wow, those are some pretty hostile words coming from Shykes.

> Coming up with a new "standard", then criticizing the established open-source project for failing to implement it, is a common tactic

> One last fact, which you might find funny: one of these alternative implementations of Docker's image distribution system is developed by CoreOS, the very same vendor which is propping up this so-called standard

> Do you know how many complaints I received, since Docker was created, that I didn't "comply" with this or that self-proclaimed standard? Dozens

> But [CoreOS] never did, because as competing commercial vendors their interest is to weaken and fragment the Docker standard, not contribute to it.

~~

> based off the bad blood developing between CoreOS and Docker

You know, I think this is really one-way... I have not seen anything along these levels of hostility coming from the CoreOS camp.


> You know, I think this is really one-way... I have not seen anything along these levels of hostility coming from the CoreOS camp.

Personally, I agree with you; I was just trying to stay neutral with my comment. I recently went to a CoreOS meetup in NYC, and from what I can tell the CoreOS guys seem super nice and very knowledgeable; nothing I've seen anyone representing them say or do in public suggests any malice against Docker.


Looks like it was spam accounts / repeated behaviour that reported these bugs. Even if Shykes isn't the best PR guy in the world.


I'd hardly call 2 issue tickets "spam accounts"... and the first one definitely isn't -- there's years of activity.


I really do not use docker nor coreos, but I wonder what the huge demand is for standardizing this. At least it looks a little suspicious to me while browsing through those 3 tickets.


> but I wonder what the huge demand is for standardizing this.

Containerization on Linux is really poised to be the "next big thing", enabling all sorts of new workflows, deployments, etc. In order for it to really take off, people need a universal standard image format which is portable across implementations. There must be multiple competing implementations. It must avoid vendor "lock-in".

Back when virtualization was getting off the ground, there was no standardized format. Every vendor had their own format, and all were completely incompatible with one another. This made migrations incredibly painful, sometimes impossible.

It created a "vendor lock-in" even when it came to the open source hypervisors -- ie. once you decided on a product, you were stuck.

The OVF (Open Virtualization Format) came along after years of this... not without its imperfections -- but it was a real life saver in a great many ways. Vendors and open-source projects alike started to support the format, allowing users to export their VMs and import them into a different hypervisor with relative ease -- no weird hacky work-arounds, no re-imaging your vm, no nonsense.

~~~~

For this kind of industry-changing technology, it's more important to have an open standard than an open implementation.


Uses for a standard format are:

Alternate implementations of docker run (e.g. Garden), although a daemonless mode for Docker may satisfy most of these cases.

Alternatives to docker build (plenty of room for innovation here IMO).

Alternative registries.


It's really annoying how long it's taking to add fuse support to containers. If I want to act on remote filesystems I have to use something like rsync.


It seems pretty close to working. One of the few (or only) issues remaining is this one: https://github.com/docker/docker/issues/10184

Basically, docker had to add device support, which it now has, but fuse is explicitly forbidden in libcontainer (https://github.com/docker/libcontainer/blob/164cd807a16e63ed...)

A patch to libcontainer or possibly Docker itself should resolve this (volunteers always welcome!)
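
For context, the invocation people typically attempt (and which the libcontainer restriction currently blocks) looks like this -- both flags exist today; the image name is illustrative:

    # fuse needs the device node plus elevated capabilities
    docker run --device /dev/fuse --cap-add SYS_ADMIN my-sshfs-image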



