I would assume the smaller images would also result in a smaller memory footprint for running the images and a general reduction in the time it takes to start them. You seem to know a lot about Docker; is that a wrong assumption?
The scale I'm discussing is on the order of at least several hundred Docker images launched per second. Previous attempts at making this work involved keeping a warm, elastic pool of Docker containers. I'm working with at least 11 environments (which all have separate dependency requirements).
Instead of trying to manage a very large pool of Docker containers, I opted for a smaller pool of several larger servers and scaled the microservices vertically (using tools like chroot to help isolate each service in its own silo).
My main issue with using Docker for this was the bulk of the containers. Startup time, RAM consumption, and the size of the images were all causing me issues.
Docker isn't a VM, so the memory usage should be pretty much on par with chroot. The only difference is that shared libraries have to be duplicated in each container (since nothing is shared between containers) and loaded into memory multiple times, but that should be on the order of a few megabytes.
The duplication is worse than that. It's a data structure problem. Docker deals in opaque disk images, a linear, order-dependent sequence of them. The data structure is built this way because Docker has no knowledge of what the dependency graph of an application really is. This greatly limits the space/bandwidth efficiency Docker can ever hope to have. Cache hits are just too infrequent.
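To make the order dependence concrete, here is a minimal sketch (the image tag and packages are just placeholders): each Dockerfile instruction becomes one opaque layer stacked on the previous one, so touching an early layer invalidates the cache for everything above it.

    # Each instruction below becomes one opaque layer, in order.
    cat > Dockerfile <<'EOF'
    FROM debian:stable
    # layer A
    RUN apt-get update && apt-get install -y curl
    # layer B
    RUN apt-get install -y jq
    # layer C
    COPY app /app
    EOF
    touch app
    docker build -t layer-demo .

    # Edit or reorder layer A and the cache for layers B and C is gone:
    # both get rebuilt and re-shipped even though their content is unchanged.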
So how do we improve? Functional package and configuration management, such as with GNU Guix. In Guix, a package describes its full dependency graph precisely, as does a full-system configuration. Because this is a graph, and because order doesn't matter (thanks to being functional and declarative), packages or systems that conceptually share branches really do share those branches on disk. The consequence of this design, in the context of containers, is that shared dependencies amongst containers running on the same host are deduplicated system-wide. This graph has the nice feature of being inspectable, unlike Docker where it is opaque, and allows for maximum cache hits.
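As a rough sketch of what that looks like in practice (assuming a recent Guix, Graphviz for rendering the graph, and the stock `hello` package standing in for a real service):

    # The run-time dependency graph is explicit and queryable.
    guix graph --type=references hello | dot -Tsvg > hello-refs.svg

    # Run the package in an isolated container built from that graph; the
    # required /gnu/store items are bind-mounted and shared with the host
    # and with any other container that needs them, not copied per container.
    guix environment --container --ad-hoc hello -- hello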
> The duplication is worse than that. It's a data structure problem. Docker deals in opaque disk images, a linear, order-dependent sequence of them. The data structure is built this way because Docker has no knowledge of what the dependency graph of an application really is. This greatly limits the space/bandwidth efficiency Docker can ever hope to have. Cache hits are just too infrequent.
This is only true when you're building your images. Distributing them doesn't have this problem. And the new content-addressability stuff means that you can get reproducible graphs (read: more dedup).
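For example (image names are placeholders): layers are identified by content digest, so two images built on the same base report the same digests for the layers they share, and those layers are stored and transferred only once.

    # Compare the layer digests of two images that share a base.
    docker inspect --format '{{json .RootFS.Layers}}' image-a
    docker inspect --format '{{json .RootFS.Layers}}' image-b
    # Matching sha256 entries are kept once on disk and pushed/pulled once.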
> So how do we improve? Functional package and configuration management, such as with GNU Guix. In Guix, a package describes its full dependency graph precisely, as does a full-system configuration. Because this is a graph, and because order doesn't matter (thanks to being functional and declarative), packages or systems that conceptually share branches really do share those branches on disk. The consequence of this design, in the context of containers, is that shared dependencies amongst containers running on the same host are deduplicated system-wide. This graph has the nice feature of being inspectable, unlike Docker where it is opaque, and allows for maximum cache hits.
For what it's worth, I would actually like to see proper dependency graph support in Docker. I don't think it'll happen with the current state of Docker, but if we made a fork it might be practical. At SUSE, we're working on triggering rebuilds when images change, using Portus (which is free software). But there is a more general problem of keeping libraries up to date without rebuilding all of your software when using containers. I was working on a side project called "docker rebase" (the code is on my GitHub) that would let you rebase these opaque layers without having to rebuild each one. I'll probably pick it up again at some point.
Your assumptions are wrong. Glibc is faster (and better) than musl. Systemd is faster (and better) than SYSV init scripts.
Moreover, for example, I can update my running containers based on Fedora 23 without restarting them by issuing "dnf update", which downloads the updated packages from a local server. That is much faster than building the container, publishing it to a hub, downloading it back, and restarting the container (even when only static files have changed).
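One way to do that in-place update, assuming the running container is named "web":

    # Update packages inside the running container without restarting it;
    # a locally configured repo keeps the download step fast.
    docker exec web dnf -y update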
Faster is objective, and in most cases correct: glibc has had a lot more optimization over the years. Better is subjective and completely depends on your use case.
A similar point applies to systemd: it is kind of misleading to say that it's faster; it is parallel and event-driven, which definitely makes its end-to-end time shorter on parallel hardware. And again, better is subjective; it's so much more complex that it might not always be the right choice.
Also, why use systemd inside a container at all? There's just one process in there usually.
Docker has very little overhead (apart from all of the setup required to start a container). In principle it has none, but Linux memory accounting does add some implicit memory overhead (that's a kernel issue, not a Docker issue).
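If you want to see that accounting, a quick sketch (the container name is a placeholder, and the cgroup path varies by distro, cgroup driver, and cgroup version):

    # Docker's view of per-container memory usage.
    docker stats --no-stream web

    # The kernel's view: the container's memory cgroup (cgroup v1 layout assumed).
    CID=$(docker inspect --format '{{.Id}}' web)
    cat /sys/fs/cgroup/memory/docker/$CID/memory.usage_in_bytes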