Like any hyped-up technology, Docker gets pitched as an "easy, works out of the box, secure by default" solution, but it is almost anything but. Still, like any technology, if you take the time to really understand what it is doing and what problems it can solve for you, it can be really awesome to work with.
Every "Docker Considered Harmful" post I've read basically boils down to "Why would you use Docker if you can use the 10 technologies it wraps around and manage them yourself instead?" Why would I want to do that if I don't have to? Docker wraps these things well. There are weird defaults and there are some popular patterns in the community that are a bit backwards, but you have the power to work around it. Don't run your containers as root and run a dumb init process in your container. That's half of the posts complaints gone right there. Complaining about defaults is one thing, claiming that bad defaults make a technology "harmful" is just lazy.
> Every "Docker Considered Harmful" post I've read basically boils down to "Why would you use Docker if you can use the 10 technologies it wraps around and manage them yourself instead?" Why would I want to do that if I don't have to?
Or if you can't?
One feature driving Docker adoption that I think a lot of people miss is that it's got fairly workable (if warty in one case and obnoxious in the other) implementations for OS X and Windows. That removes a lot of friction for developers who work in companies where IT won't support Linux on employee workstations.
Sure, your ops team can put together a bunch of stuff manually. And then you can create a bunch of extra stuff that makes it easier for development to handle all of that inside virtual machines, and get the network bridging between those apps and the host machine working properly, and all that fun stuff. And by the time you've got it completed and working nicely, you'll be ready to launch your own Docker competitor onto the market.
> Sure, your ops team can put together a bunch of stuff manually. And then you can create a bunch of extra stuff that makes it easier for development to handle all of that inside virtual machines, and get the network bridging between those apps and the host machine working properly, and all that fun stuff.
I think this is one of the biggest reasons people use Docker, although they tend to forget to mention it since it's not exactly a technical advantage of Docker; in many ways, it allows developers to bypass DevOps or sometimes eliminate them altogether. No more creating Jira tickets just to have a specific version of ImageMagick installed on a server.
I think it's one of the two biggest reasons. The other, possibly bigger, reason is that it gives you a way to install fairly complex software and have it reasonably configured and running right out of the box.
I use Docker to run stuff like PostgreSQL on my dev box. Not so much because it's a technically superior option (in truth, it's slower and eats more memory than installing it directly on my machine) as because it reliably lets me get up and running in a few minutes, just the time it takes to download the image, really. All the other options I've tried invariably require some fiddling, especially if you need an older version.
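For anyone curious, the whole workflow is roughly this (a sketch; the container name, password, and version tag are just examples):

# Pinned Postgres with a named volume so the data survives container restarts.
docker run -d --name dev-postgres \
  -e POSTGRES_PASSWORD=devonly \
  -p 5432:5432 \
  -v dev-pgdata:/var/lib/postgresql/data \
  postgres:9.6

# Check it came up, then connect with any client on localhost:5432.
docker logs dev-postgres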
I suspect that all the other codepoints that have ever been spilled on the subject are mostly just an attempt to get ops on board with it.
(Disclaimer: All above comments are a mix of wild speculation and Freudian projection. I tend to come down against using it in production.)
At my work, we've had mixed results using it in production. I'd still prefer using it to avoid Ops, but beyond that, Docker comes with its own bag of issues even though it eliminates versioning conflicts.
For development, on the other hand, I absolutely love it. Being able to have different docker-compose.yml files for each of my projects and being able to spin up isolated versions of database services (each with their own volumes!) has been a tremendous win for me.
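To make that concrete, here's a stripped-down sketch of one of those per-project setups (service names, versions, and credentials are made up):

# docker-compose.yml for "project A": its own database volume, its own cache.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: devonly
    volumes:
      - projectA-pgdata:/var/lib/postgresql/data
  cache:
    image: redis:4
volumes:
  projectA-pgdata:
EOF

docker-compose up -d   # bring this project's stack up
docker-compose down    # tear it down when switching projects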
No they haven't. They are lurking around the corner, waiting to hit you when you least expect it. The mugging you are about to get is the one your ops team has already had and is trying to prevent.
There is only one possible place where this can really grow into that kind of problem: when ops isn't involved.
(And if the relationship between development and ops has broken down to the point that each one is trying to work around rather than with each other, you're already screwed. The rest is just details.)
If ops is involved, then there's no real reason they can't take charge of making sure that anything that is running in production is being built up from minimal images where they can keep track of the technology stack and all the different versions of xyz lib that are running in production. They've just got to do it using a different tool chain.
And if ops isn't involved, I'm not sure how different this really is from the typical status quo, which involves unquestioningly running whatever uberjar full of unknown (to ops) 3rd-party packages that probably have their versions being selected using Maven's default version conflict resolution strategy, so that nobody, not even dev, really knows what exactly is running in production.
Your last argument is true of any software outside of the Docker ecosystem. Are you really going to read through every single directory in node_modules/ to make sure you know exactly what you're running? I don't believe anyone who answers yes, except in the sense that NPM will produce vulnerability reports.
If ops does their job well, that's great. A lot of people aren't that effective at their jobs, and if someone in ops is stuck in PHP Land, unwilling to learn Docker, they're going to become a huge bottleneck in short order. Some people are incompetent, but there are a lot of people who don't particularly like their jobs yet get a sadistic joy out of playing the gatekeeper role, being the ultimate decider of whether someone else gets what they want. All the worse if upper management sides with them by default since, well, they're the "webmasters".
Yeah, I'm pretty biased because I've had situations like that on a few occasions.
I'm not necessarily saying that the production situation is always improved by Docker, but what I described is not an uncommon situation and I think it often leads to teams gravitating towards Docker when their last person in ops finally leaves.
> Are you really going to read through every single directory in node_modules/ to make sure you know exactly what you're running?
What, you don’t? Each dependency comes with a license notice. Everything needs to be pinned. The npm mess that comes out of pinning and unbounded versioning is precisely why we steer clear of it.
> Are you really going to read through every single directory in node_modules/ to make sure you know exactly what you're running?
Of course we are.
Not manually, of course, but we've got automated tools that generate a report of every dependency, including transitive dependencies, and its license and version, for all the platforms we use. We definitely keep an eye on these reports.
"But it's haaaaarrrrd" isn't a great excuse for letting things get out of control. We're developers. Taking obnoxious labor-intensive manual processes and making them quick and easy is the entire reason why our profession exists in the first place.
(Granted, the fact that a single import can bring in hundreds or even thousands of transitive dependencies to worry about is a big reason why I avoid the Node ecosystem like the plague. So there is that.)
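For the Node case specifically, even a rough first pass is a short script (a sketch; assumes jq is installed, and real tooling is usually more thorough):

# Walk node_modules and pull name, version, and declared license out of each package.json.
find node_modules -mindepth 2 -maxdepth 3 -name package.json -print0 \
  | xargs -0 -n1 jq -r '[.name, .version, (.license // "UNKNOWN" | tostring)] | @tsv' 2>/dev/null \
  | sort -u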
> (And if the relationship between development and ops has broken down to the point that each one is trying to work around rather than with each other, you're already screwed. The rest is just details.)
IMO 90% of it boils down to that, regardless of what technology one side or the other wants to use, whether Ops is trying to push Dev toward a particular technology or vice versa. This happens whether you're talking about Docker or a particular programming language.
Either side will see their preference as the best way of solving the problem of doing their job. If both sides are deaf to the problems the other sees, it becomes a conflict.
Honestly, the place where I see this stuff more than anything is when developers want tools that make them more productive, because the technological merits are harder to convey.
>>Either side will see their preference as...doing their job
And in your example they’d both be wrong. Such folks shouldn’t spend as much time worrying about their job description as about what will make the company successful.
Companies don’t become great by everyone waiting around for only managers to decide when new roles or practices should come online. Employees at any level are allowed to consider the big picture. Why do they sometimes feel they’re not allowed to or would be discouraged from doing so?
If you carefully and thoughtfully put in the time to shape your opinions around what you genuinely believe will help your company achieve its most strategic goals end to end, with dev/ops as one part of that, your opinion will be more respected and influential.
It’s also a management test. Walk into your VP’s office and, dispassionately and without self-emphasis, make your case for why you should be doing xyz because it makes sense within the big picture.
Most times you can’t lose: even if the plan isn’t rolled out, you’ll gain credibility. If you get your hand slapped, it’s a clear signal to switch departments or companies, because leaders like that have low success rates and usually aren’t much fun to work for.
I think the "issues" they are talking about are not necessarily the ones that ops people are meant to solve, but about the issue of ops acting as "gatekeepers" that often create perceived friction in the deployment(and even development) process. By Docker putting the power of what prerequisites are made available to individual applications running on servers, it bypasses the middle-man and gets the job done faster. That's not to say there aren't trade-offs when using Docker with ops as a service, but some (usually developers) would say the trade is worth it. Let's just hope they are writing their Dockerfiles with security in mind. :)
Last year we had a $NAME come for an audit. Well known company and they did not know how to audit inside containers.
Still not sure how a corporation that runs RHEL contractually is able to use Alpine- and Ubuntu-based containers, but it happens.
I don't work there anymore, and I am glad, because when the containers finally do get an audit and the customer is told that for X years they haven't been in compliance... well..
We moved half of all our QA processes to Fargate sometime around April and more than halved our compute costs. After the big summer milestone there is another big project to move the rest to Fargate. I prefer running my own k8s/kops cluster, but our B team got it stood up in a week and have had no issues in at least a quarter.
You are misinformed. Fargate is out and it works. I have containers running on it right now and they have been for several weeks now. You can select it or the old EC2 approach. Try it, you may like it.
In a default Docker installation on most Linux boxes, communication with the Docker socket itself requires elevated privileges. You can't even do `docker ps`.
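For reference, this is roughly what that looks like out of the box, plus the usual (root-equivalent!) workaround:

docker ps                          # -> permission denied on /var/run/docker.sock
sudo usermod -aG docker "$USER"    # note: membership in the docker group is effectively root
newgrp docker                      # or log out and back in
docker ps                          # now works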
In the jobs I've held, people who could pull images and run containers on servers could also install stuff if they wanted. Is running privileged containers blocked? It just seems like something broke down in the engineering process if developers have to bypass admins to install stuff on a server. What happens if that server crashes and burns?
But they use virtual machines. For me, disk access is very slow in a virtual machine. For example, the Elasticsearch daemon starts in 8s on Debian on real hardware and takes more than 30s in VirtualBox. Everything that touches the disk is slower.
It empowers developers to sidestep many of the best practices learned in the SDLC over the last 20 years and ship code they shouldn't.
The code is then running on a fault-tolerant platform that masks the bugs and makes ops' life hell when it comes to actual debugging.
Additionally, upgrades and other assumptions about the base OS and system management are simply swept under the carpet by pretending those issues no longer exist.
Effectively, it makes for much, much more work when things break. It's great when it runs well, or when you run it in GCP, where they take care of that part for you.
> you run it in GCP where they take care of that part for you
Although I'm generally inclined to agree with your positions on this thread (biased as I am due to my Ops profession), this doesn't seem like a valid criticism.
If Docker-on-GCP actually does address all the issues from the perspective of the developers/customers/users, then it's a plausible solution. The only consideration would be cost.
> One feature driving Docker adoption that I think a lot of people miss is that it's got fairly workable (if warty in one case and obnoxious in the other) implementations for OS X and Windows. That removes a lot of friction for developers who work in companies where IT won't support Linux on employee workstations.
Vagrant does this, too. Sure, it runs a VM, but so does Docker.
Inside of Linux, yes Docker does utilize cgroups, namespaces and some other stuff for isolation. However, the GP is almost certainly talking about "Docker for Windows" and "Docker for OS X", which do not run directly on the host OS, and need to be run inside of Linux VMs (like Vagrant).
Well, there is a big difference, but not the one you've managed to come up with. Vagrant is normally used to run full VMs (although you can use it to drive Docker), so while you might run App, Supporting App, PHP, Apache, Node, Nginx, MySQL, Mongodb, and Redis all as separate containers, you'd probably put them all on the same VM. Your comment almost makes it sound like you don't understand Vagrant or VMs.
This feels a bit like "Why use Dropbox when rsync exists?" type of argument. Sure, you can do everything docker does with shell scripts, and you've been able to for decades, but many people didn't, because it was "complicated". There is often a huge amount of value in simplifying things, even if it means losing some of the power and the end result being objectively worse.
Docker has reached the point where it is complicated. A lot of people never touched cgroups just because they're lower level, starting to get into kernel stuff. Few people want to go there.
Docker is one of those things that you can install and run; it takes a small amount of time to get running. As you said, value in simplifying things.
That being said... Anyone who takes the stance that containers are better than X/Y/Z is just showing that they don't have the drive to get into the why of how it all works.
Anyone making the 'it saves overhead' argument can run with it until the cows come home, but they don't understand that the overhead is all relative. As a programmer, I stopped caring about overhead and started worrying about the fact that people will break my stuff; I just need to stop them from using that as the foothold to break other things.
I think the base use case of Docker is still relatively simple, although like all things it has gotten to the point where, if you want to dig deep, you definitely can get way into the weeds, especially when you start talking about networking, orchestration, composing containers, and all of the tooling that now exists on top of Docker.
When you choose Docker/containers, you make a choice to expose yourself to a distinct class of problems. You make a tradeoff of saving overhead in exchange for opening up other issues. This isn't an absolute good or bad thing, as sometimes shaving off overhead is worth it even if your failure case becomes much worse.
I will say that, in general, I've found a lot of people don't make this type of consideration and just go with a dogma of "Just use {technology} because it's popular".
> Anyone who takes the stance that containers are better than X/Y/Z are just showing that they don't have the drive to get into the why of how it all works.
... or they know exactly how it all works because they've been through hell back & forth several times and are now happy to have a system available that solves many little nerve-wracking problems.
> I think you could reimplement [some random part of what Docker does] easily yourself with a small shell script and some calls to mount; but I haven't bothered.
> For most purposes, the main interesting thing that Docker containers provide is isolated networking. [...] What else prevents applications from using ports? The firewall that you already have installed on your server. Again, pointless abstraction to address already-solved problems.
Interesting... so you create a network and isolate it from the rest of the network? How do your applications serve traffic?
Or... you create a network with namespaces and use iptables aka the firewall to network that to the other networks you created with namespaces?
Think about it logically... when you use TCP you share the connections... your namespace isolation is exposed by the very thing that firewalls it... make sense?
This time around, everyone keeps excerpting this sentence. I guess what people are making of this sentence is, "but I haven't bothered [to reimplement this functionality, even though it would be useful, since then I'd have to maintain it, and I haven't yet needed it]". If that's what the sentence meant, then that would certainly be a good argument that Docker is useful: it provides the missing pieces that no-one else has bothered to implement yet.
Unfortunately that's not what I meant when I wrote the sentence, and I think a more accurate reading is transparently obvious if you actually quote the sentence with context. The very next sentence describes my preferred way to get this functionality that Docker provides. The real reading of the sentence is "but I haven't bothered [to reimplement this functionality, because it would be useless to do so when there are perfectly good existing ways to get this functionality]".
Quoting people out of context is a great way to get punchy soundbites, but not a good way to reach mutual understanding.
I'll excerpt the entire section below:
> Now, it's true that Docker uses layering to be efficient in terms of disk space and time to build new containers. It defaults to using AUFS to do this. I think you could reimplement it easily yourself with a small shell script and some calls to mount; but I haven't bothered.
> Personally, I just use man 8 btrfs-subvolume. btrfs is a copy on write filesystem which can instantly make space-efficient copies of filesystem trees in "subvolumes", which the user sees as just regular directories.
> You can build a stock Ubuntu filesystem tree into a subvolume with btrfs subvolume create /srv/trees/ubuntu && debootstrap trusty /srv/trees/ubuntu/. Then, when you want to build a new container with specific software, you just copy that subvolume and perform your modifications on the copy; that is, btrfs subvolume snapshot /srv/trees/debian /srv/containers/webapp and work on /srv/containers/webapp. If you want to copy those modifications, you just take another snapshot.
> This is arguably better, because there's no need to maintain a lot of state about the mount layerings and set them up again on reboot. Your container filesystem just sits there in a volume waiting for you to start it.
> Naturally, if you don't like btrfs for some reason, you're perfectly able to use zfs, OverlayFS, AUFS, or whatever; no need to have a "storage driver" implemented just to do some simple copy-on-write or layering operations.
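For anyone who wants to see the quoted pieces strung together, the whole roll-your-own flow is roughly this (a sketch; paths and the suite name are taken from the quote or made up):

# Build a base tree once, in its own btrfs subvolume.
btrfs subvolume create /srv/trees/ubuntu
debootstrap trusty /srv/trees/ubuntu

# "Fork" it per container with a cheap copy-on-write snapshot.
btrfs subvolume snapshot /srv/trees/ubuntu /srv/containers/webapp

# Install whatever the app needs inside the copy, then boot it as a container.
systemd-nspawn -D /srv/containers/webapp --boot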
The point of Docker is basically that the container-image developer is specifying the sandbox, instead of the sysadmin specifying the sandbox.
None of the things mentioned solve the problem of the sysadmin having to "design" the solution from the top down. Docker does. A Docker image is an appliance. You don't architect it; you just configure it. You don't have to care which OS it's running inside. Docker images running on Windows don't even care whether it's Windows Core or Ubuntu inside them. It's a black box with defined configurability-points.
The only real comparisons to Docker are
1. Amazon's AMIs (though nobody hosts a public AMI registry except for Amazon, so it's not really a good comparison);
Both of these achieve the same things that Docker achieves: developer-distributed virtual appliances configured by the sysadmin but "managed" automatically by the runtime.
And both are just as complicated as Docker. The complexity is necessary.
You do have to care which OS it's running inside, if only to know when to patch it for $vulnerability_of_the_day. It sure is convenient to consider it as a black box that you 'just' need to configure, but that's just pushing responsibility to the developer(s). In my short experience, the latter seldom care about security. When a security breach occurs, who is going to take the fall? The sysadmins that supposedly run operations, or the developers that failed to provide an updated appliance?
Let's be clear: This is a general problem with distributed infrastructure. Not necessarily docker. Any org that scales beyond 100+ engineers/services/artifacts is just not going to hire same proportion of infra people to toss application configuration over the wall to.
In the past, we have done:
* Let applications write librarian-chef cookbooks, have a chef server aggregate them
* Let applications write ansible playbooks, aggregate them in a central repo using galaxy
All of them carry the same pitfalls. If it's not the OS, then you have to decide how to patch the version of OpenJDK that the developer hardcoded. If it's not OpenJDK, then it's Maven or npm.
We have seen both sides of the arguments:
* Sysadmins cry "security"
* Developers cry "freedom"
The root cause of both arguments is fear and control. The end goal when these words (security vs freedom) get thrown in is not to find solutions, but rather to make the discussion end. Sysadmins will gladly sweep maven/nexus problems under the rug as long as they are the ones doing automation. Developers will gladly disregard all infrastructure engineering principles as long as they have full access to do whatever they want with their application.
Automation is the solution to both. Call it SecOps or whatever bullshit term. But in the end, automated security practices are necessary one way or another.
Conceptually it seems quite possible to automatically send alerts to the devs responsible when something in the container is out of date. It means that the containers have to be specified using supported methods, but I think a lot can be covered quite easily.
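One way to approximate that with off-the-shelf pieces (a sketch; trivy is just one example of an image scanner, swap in whatever you use):

# Enumerate the images that are actually running, then scan each one and
# alert on anything the scanner flags.
docker ps --format '{{.Image}}' | sort -u | while read -r img; do
  trivy image --exit-code 1 "$img" || echo "ALERT: issues found in $img"
done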
This article is ignoring the benefits of standardization.
Let's compare it to some other "unnecessary" thing, actual containers:
You can put stuff on ships without them, but turns out that once you start using them, just the fact that everything is standardized gives you insane benefits.
Of course you could reimplement each part of Docker differently. Of course it's not magic. Nothing is magic about a metal box, and yet that metal box completely revolutionized shipping.
That's a nice metaphor, but it's still not at all certain that Docker is the standard solution. We've had Linux for 20 years and we still don't have a single standard package manager; Docker's been around for 5 years and it's already been abstracted over by systems like Kubernetes, making Docker itself less and less relevant.
What is certain is that systems like Debian and Fedora are not going away. Your Docker images couldn't be built without them, after all. And the other tools mentioned in the article, too, are not going anywhere. So why don't you just standardize on the real underlying platform?
Because there's standardisation beyond just where the app/server/whatever is going to run in production - having a standard way to spin up, describe and control applications & their dependencies that works cross OS lets developers & devops speak the same language, with the same commands.
Is docker the silver bullet for this? No, there's tons of other options. But "everyone should use Debian and Fedora" isn't a realistic standardisation.
> and it's already been abstracted over by systems like Kubernetes, making Docker itself less and less relevant.
But k8s still uses the same concepts used with docker-style 'containers', with registries, layered, isolated filesystem, separate networking, ... all in one box.
Docker was just a big nudge in the direction of democratising scalable cloud infrastructure/application stacks. Yes it can be used for other stuff, but that's what it's extremely good at.
Anyone interested in the history and effects of the standard shipping container can read (or listen to) “The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger” by Marc Levinson.
I think Docker is very useful for beginning developers and for spinning up and trying out services quickly. The barrier for getting started with web development used to be so much higher, even for people on Mac OS X. To set up a Rails development environment with a modern JS frontend, you'd have to set up Xcode, MySQL, Redis, a Node build pipeline, Homebrew, and futz with system Ruby vs. Rails-specific Ruby. You'd have to set up all the above without knowing what each part did, while barely being comfortable with bash vs. the terminal. Don't even get me started on how difficult it was for someone with a Windows machine to get started with modern development.
Now novices can just install docker and type "docker-compose up". Even vagrant didn't make things that easy.
A few years ago, I wanted to try out Pentaho's BI platform. I spent hours configuring the JDK, Tomcat, installing all of Pentaho's dependencies, and struggling with configuration errors and outdated documentation.
Today, if I wanted to give Pentaho a spin, I could also just pull the docker image.
I see your point about most other use-cases for docker, but be careful when you make a blanket statement like "Docker considered harmful". It could be discouraging to those that docker has helped getting started with development and who do find it convenient for certain tasks.
Your first point alone would make Docker worth it for me if that was literally all it did. I've on-boarded junior devs on several projects over the last few years, and being able to give them pull access, send them the instructions for installing Docker and docker-compose and then having a working dev environment on their local device an hour after they open their computer for the first time is invaluable. The whole universe of tools that have grown up around Docker are also fantastic, but erasing the friction of starting up is a killer application on its own (also, making it so I never again have to hear "Well it works on my machine!" is also, on its own, worth pretty much any pain Docker brings).
And what I've seen is that at some point, at least a few of the junior devs get interested in what's going on under the hood and ta-da, we have our candidates for dev-ops work. Erasing the initial friction doesn't mean they erase their curiosity.
The author seems a bit out of touch. I'm a junior dev, and starting a DB for development is as easy as `docker run redis`. I don't even know half the tools he mentioned, and I will not read the Linux source documentation like he suggested just to get isolation for my services.
Starting redis manually isn't that much harder though:
brew install redis
redis-server
I personally feel that people reach for Docker too quickly. It's worthwhile to learn how things actually work so that you know what to do when Docker eventually fails you.
For me, it's less about the ease of starting one thing than it is the ease of juggling a lot of things. I've got docker containers on my machine for multiple work projects and a few organizations I do volunteer dev work for. Between them, I'm running two versions each of Ruby and Python and a number of instances of Postgres and MySQL, plus local Redis containers for three separate projects. Shifting between them is as simple as docker-compose down/cd/docker-compose up. When I'm done developing for the day, I just take everything down and bring it back the next morning.
When you have to manage a large number of services with a bunch of different devs touching things, Docker is almost required to get a consistent development state.
There's literally nothing wrong with reaching for Docker right away. It allows you to have the same, repeatable development environment. Maybe you deploy to production without Docker, but any project I start I'll reach for Docker right away.
This article could use some work so I thought I'd chime in with my complaints about Docker...
* Patching security vulnerabilities in container images (aka "the next Heartbleed problem") and auditing for the same.
* docker-compose is installed via a curl to github. Say what? It's like Docker revels in ignoring the system's package manager. Docker does not and should not replace apt-get but people pretend it does.
* Too much config delegated to container entrypoint script.
* Firewall rules get wiped out too easily. More a pet peeve of mine but it'd be nice to solve this without "service docker restart".
* "--rm" should probably have been the default and the migration to something better needs to start now.
In the grand scheme of things, nothing too negative to outweigh the benefits although security vulnerabilities give me some pause.
> Patching security vulnerabilities in container images (aka "the next Heartbleed problem") and auditing for the same.
There is a lot of tooling out there to deal with this, including multiple complimentary (both as in free and as in tools which complement each other) implementations of security scanners.
> Too much config delegated to container entrypoint script.
Can you explain what you mean here? Config of what?
Also, have you peered into init scripts for any given application?
Application initialization often requires a fair amount of setup that is completely dependent on what the application is. This is not really possible to abstract outside of forking and making the image itself less generic, which is perfectly fine.
> Firewall rules get wiped out too easily. More a pet peeve of mine but it'd be nice to solve this without "service docker restart".
Using firewalld helps with this as dockerd can (and does) monitor the firewalld for the need to reload rules.
Watching iptables directly is rather difficult outside of just polling iptables to see if the ruleset matches what's expected. Not sure it's feasible outside of "does the DOCKER chain exist? No -> reload"
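The crude polling version of that check looks about like this (a sketch, run as root):

# If the DOCKER chain is gone, the rules were wiped and need to be reinstalled.
if ! iptables -n -L DOCKER >/dev/null 2>&1; then
  service docker restart   # or a reload, depending on your setup
fi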
> --rm" should probably have been the default and the migration to something better needs to start now.
You can't change the default for something like this. It also isn't necessarily the desired behavior.
If you really want this to be default, use swarm services instead.
---
Full disclosure, worked at Docker for 4 years (now recently at MSFT) and am a maintainer on the docker engine.
Docker wins because it's easy to use and becomes a de-facto standard. The author misses that completely.
I get it, you can do "manually" the same things as docker when you're a good system administrator. You'll come up with your own unique solution to most problems addressed by docker. Regardless of the fact that it'll probably be specific to a single linux distribution, you probably won't have anyone interested in investing time in learning your unique way of doing containers.
I did not waste time when I learned how docker does it.
I login into a server managed by this author, curse him for using all those "standard" techniques in his own unique way. Spend hours figuring out the details.
I login into a server running things with docker: I already know all I need. `docker ps` will tell me what services it runs, `docker inspect` for more info, `docker logs` give me the logs of whatever service I need to check, etc.
That's inconsistent with what the article says right in the introductory paragraph:
> Docker is genuinely more complex and harder to use than the alternatives.
> I'm recommending them because they are simpler to learn and use.
You may disagree with their relative ease, but it's disingenuous to say that the author misses that.
> You'll come up with your own unique solution to most problems addressed by docker.
"Unique" is a pretty extraordinary claim, considering the author is advocating using pre-existing tools and facilities. Replacing AUFS with Btrfs is a far cry from writing ones own filesystem entirely.
> I login into a server managed by this author, curse him for using all those "standard" techniques in his own unique way. Spend hours figuring out the details.
They're not "standard" (with quotes). They're standard (without quotes). They have man pages. They're well-documented and (one would hope, if Docker makes heavy use of some of them) well-understood. Again, just because he didn't use them in the Docker way, doesn't make that way unique. Chances are, if it takes you hours to figure it out, especially if you already know what and how Docker does it, you're doing something woefully wrong.
> I login into a server running things with docker: I already know all I need
I could make exactly the same statement with the situations reversed, except with the addition that I now have to learn this "docker" tool with its added complexity and new syntax to make sure I don't break anything in case I have to make a change.
Already knowing a tool fails to address the author's point.
That's bullshit. To start a Docker container, you don't need to mount a layered filesystem image, never mind figure out how to create and reuse one; you don't need to know all the nitty-gritty details of creating network interfaces, private networking, port forwarding with NAT, cgroups, and a ton of other details that are simply not important to get an application running. Docker does that for you.
If you had to read all the man pages for every single tool you'd need to get familiar with just to figure out how a custom "not invented here" system works, you'd be in for a treat...
> "Unique" is a pretty extraordinary claim, considering the author is advocating using pre-existing tools and facilities. [...] just because he didn't use them in the Docker way, doesn't make that way unique.
Yes you can make use of the same standard tools as Docker to achieve the same features. There are a multitude of ways you can integrate those (and multiple ways to combine your different options). Docker offers one way, the author describes another way.
The author's way is more unique.
> Docker is genuinely more complex and harder to use than the alternatives.
To make this point, the authors says: "Just read man 7 namespaces. It's well written and makes it easy to grok the concept".
I did, and it's super low-level. The audience for this man page seems to be OS developers, not web developers (or other upper-stack developers).
Any non-sysadmin can understand the Docker documentation and be able to build, distribute and launch container images in less than an hour... Learning to use btrfs or aufs, chroot, ifconfig, init scripts, ..., is easier? Most people have never heard of IPC subsystems, UTS, ... and don't need to.
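For a taste of just the process-isolation slice of that, here's a rough sketch using util-linux's unshare (filesystem isolation and cgroup limits are still on you from there):

# Fresh PID, mount, network and hostname namespaces with their own /proc;
# you get a shell that is PID 1 and has no network.
sudo unshare --fork --pid --mount-proc --net --uts /bin/bash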
The author's way is merely different. "Unique" is a superlative. The author's method(s) means more choices. There's an argument to be made that enforcing a single choice is inherently simpler (to which there are counter-arguments), but you didn't make it.
> Learning to use btrfs or aufs, chroot, ifconfig, init scripts, ..., is easier?
Yes, that is exactly what the author is asserting. Again, this seems disingenuous of you. It's clear that you disagree, but mere disagreement, with only this kind of rhetorical language, lacking any substantial backing, would be a shallow dismissal.
I'm not at all convinced that reams and reams of shell scripts to deploy complex applications is a good idea. But by all means go for it if you think that is the way to go.
This. Save me from shellscript. I guess it's possible to write good/tolerable shellscript, but most of it is garbage. Obfuscated disasters waiting to happen.
Then OS updates and/or application updates can break your scripts. It is also nice not having to be married to a particular Linux distro. That seems magical to me.
I have the same problem. Docker as a means of making deployable, system-imaged tarballs super-easily? Great! And they'll run almost anywhere given the aggressive portability emphasis of Docker-the-product? Even better! Networking, volume mounts, and quotas included? Holy shit, this is awesome!
...now tell me about how all this stuff gets configured.
...shell scripts, you say? &&-spliced because lots of infrastructure is affected by a layer limit? And everything's committed so you can't hook parent containers' "$package_manager upgrade" phases? So everyone is running bunches of time-consuming (or superstitious/witchcraft) commands multiple times throughout the build hierarchy as a precaution if you don't control every layer? What the fuck?!
Seriously. Docker is a great product, period. But their community totally dropped the ball on provisioning. Being able to bail back out to shell commands is an important ability, but that being the default for conventional, complex deploys is batshit insane. That's what Puppet/Chef/Ansible/Packer/pick-your-favorite-provisioning-tool were designed to solve.
These aren't specialized, high-learning-curve "old-school sysadmin-club members only" technologies. They're easy, accessible, and save you from short-term (quicker initial provisioning/predictability), medium-term (updates to low-level parts of your infrastructure), and long-term (tracking security-related dependencies) headaches. Even if people don't use one of these tools for its portability benefits (because they're on Docker, so fuck portability . . . until it manifests as a random-container-linux compatibility issue), I'm baffled as to why they don't pick them up for the benefits in maintainability. Anyone who has had to deploy more than a handful of low-level package updates in a complex containerized deploy has to have asked "isn't there a better way?!" at least twice.
I think he's trying to describe the complexity of it, so a fairer way to characterize it might be some scripts in Python or Ruby.
But then once you added enough features that that became unmanageable, you'd write it in a proper systems language like Go, and oh, right, now you're writing Docker.
Docker by itself probably doesn't make any sense to an engineer until they get a chance to see kubernetes in action. There is nothing more awesome than seeing, on the fly, your worker pool being scaled out by 100x by nothing more than a single command along the lines of `kubectl --context <cluster> -n <namespace> scale deploy <deployment name> --replicas <number of containers>`.
But the truth is, most websites will happily run on a single dedicated machine you rent for a few hundred dollars. (OK, two, so you have a hot spare, if you so fancy.) This fact often gets overlooked despite Stackexchange even tweeting once or twice that they could run their site with a single DB server. But: cloud! Kubernetes! Hype. Meh.
Stack exchange has spent an obscene amount of time optimizing their infrastructure for efficiency on dedicated known hardware. Their DB instances also have 1.5TB of ram. You're not getting that for a few hundred dollars.
Let's unpack my statement because it might have been too dense.
1. Almost all websites can run on a single dedicated server.
2. This is true for even something as big as Stackexchange which most sites are not.
3. If your site is indeed smaller than SE then that server is likely to be a few hundred dollars a month.
Also: getting a 512GB RAM server for 500-700 USD or so is definitely possible these days. But, again, most sites won't need anywhere near that. Based not only on information and belief ;) but on first-hand experience, I sincerely doubt most sites would need more than 64GB.
The heavy uses of Docker in my (limited) experience are enterprise stacks with their 15+ components per application and many worker threads that they use to speed up ETL/Parsing/Analytics. Essentially Docker/VMs have replaced the thousands of servers that used to reside in corporate data centers, with the benefit now being that developers can launch those same complex Application stacks with little effort on their laptop.
What used to be a 30+ day procurement process became <7 days with VMs, and <15 minutes with docker/kubernetes.
You can also do things that you would have never even considered doing prior to Docker - like run the application stack in the cloud, but launch one of the components (still in the same deployment) - on your laptop using tools like https://github.com/telepresenceio/telepresence.
I think GP was pointing out the inefficiencies rife in "cloud scale" architecture these days, not saying that anyone can scale to StackExchange levels on a single instance.
Having a worker pool in the first place, with carefully automated agents that magically begin working when spawned, makes the complexity of managing their launch almost negligible.
For those who haven't used k8s, note that `--context` and `-n` are mostly for multi-tenant/RBAC cases, so the command is more like `kubectl scale deploy <deployment name> --replicas <number of containers>`
Spinning up 100-node kubernetes clusters with a single button click across multiple public clouds with Rancher is pretty exciting too. You can scale up your worker pool by clicking on a spinner input in the Rancher UI instead of typing the kubectl commands. Heck, it's built right into the Kubernetes GUI as well. Pretty cool.
There is nothing more awesome than seeing, on the fly, your worker pool being scaled out by 100x
This was possible, and done, long before kubernetes or docker. The frequent counter-argument I hear is that "kubernetes makes it easy" - which is often espoused by people that don't have to build or manage kubernetes in prod.
You can use one of the two container autoscalers to handle the scaling automatically based on any metric(s) you like, and the cluster autoscaler will scale the ASG as needed :)
This is useful for workloads that need to scale on some metric other than CPU/MEM, e.g. request rate, worker queue length...
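The built-in CPU-based flavor is a one-liner (deployment name made up); scaling on request rate or queue length additionally needs a custom/external metrics adapter:

# Keep a hypothetical "worker" deployment between 2 and 200 replicas,
# targeting ~70% CPU utilisation.
kubectl autoscale deployment worker --cpu-percent=70 --min=2 --max=200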
I actually run 3 small bare metal clusters in production.
Maintenance is basically just a `kubeadm upgrade` on all nodes + reboot (easily scriptable) after some updates get announced. OS upgrades are done through container-linux-update-operator.
NAT is not a problem if you are at or below 50 nodes. Not sure if you run into problems if there are more nodes, though.
Also BGP+metallb is quite good.
(P.S.: OpenStack uses NAT heavily as well, and IPVS for k8s should fix most problems if you are running into any.)
Sure - but Kubernetes (and many other things) are enabled by Docker. To overuse the metaphor, containers in shipping weren't a big deal until container ports/cranes/ships/trucks came along - we say that "containers" revolutionized transport as a shorthand for all those systems that were built on them.
Sure. I "just" have to go read man pages for days to understand 20 different commands. "Just" use several commands to isolate my not-a-container. "Just" use debootstrap (or not on different distros!), or actually maybe "just" use nix and guix. And "just" carefully use several btrfs-subvolume commands (or not if you want aufs, zfs, or something else!). And "just" a few more things after that - maybe "just" use systemd-nspawn (or not, for any non systemd system).
It's always "just" one more tool I can cobble together to provide what Docker gives me. This is not simple. Docker is not a bad tool for abstracting away all of these underlying details.
I'm not going to pretend that Docker is simple or flawless. But it has reasonable defaults and is easy to use. It is easy to pull containers and run them. It is easy to install on Linux, OS X, and Windows (or maybe I should "just" figure out how to run a hypervisor). It is easy to read and write a dockerfile. Critically, it is just as easy for my coworkers to use Docker as well.
That alternative tools have their own learning curves (which generally compare unfavorably to Docker's, which is exceptionally gentle) and that you have to make decisions about how to fit them together is a very valid and relevant point.
But it struck me that all of your 'just's really didn't seem like a big deal to me.
I set up a Nix installation on a separate BTRFS subvolume on a Debian-based system that I got as a multimedia PC for my uncle as a gift a few weeks ago. I also had to replace the default initial ramdisk image generator with another one to get the subvolume mounted early enough for systemd to automatically launch services that lived on the Nix subvolume on first boot.
It really didn't seem like that big of a deal for me. Each piece of it was just a small step away from a background of Linux administration knowledge I built up as a teenager when I used to distrohop and play around for fun.
All of this is to say that a difference in background is likely behind this gap of perceptions.
Someone with a background in ops is likely invested in traditions that have different strengths and weaknesses than the approach that Docker offers. For people in positions like that, approaches that are perhaps more involved but preserve more of the virtues of those traditional toolsets may seem like a smaller leap than the one to containers.
You've used the word 'just' here to highlight what to you seem unreasonable levels of difficulty or required background knowledge, which makes sense. But couldn't I just as well say that Docker advocates would have us 'just' abandon the very notion of shared libraries, 'just' try to get by without actually knowing how to build or verify our dependency chains, 'just' grab binaries from strangers on the net, 'just' download gigabytes of binary data to perform builds, 'just' virtualize Linux on macOS in order to use software that runs natively on it, etc.?
At the same time, for things like microservices development, Docker also makes some serious demands on time and knowledge, e.g., 'just' refactor all of your legacy services so they can be safely started in any order, or 'just' learn Kubernetes so that you can initialize services in order of dependencies.
> It really didn't seem like that big of a deal for me. Each piece of it was just a small step away from a background of Linux administration knowledge I built up as a teenager when I used to distrohop and play around for fun.
> You've used the word 'just' here to highlight what to you seem unreasonable levels of difficulty or required background knowledge, which makes sense.
I mean, it's great that it's easy for you. And it would be great if everyone at my office conveniently had this same kind of the prerequisite knowledge. But the reality is, most don't. And the worse reality is, most don't enjoy having to learn extra tools on top of the rest of the things they need to know for their job.
Docker is one extra thing to learn. It is easier to learn for the majority of developers that don't have, or don't care to have, any Linux admin background. It is a cross-platform solution and has good adoption in the industry, which means I can probably use Docker at one company, and go to another company and use the same Docker.
> But couldn't I just as well say that Docker advocates would have us 'just' abandon the very notion of shared libraries
Why should I want shared libraries though, given I have containers? I can get an update out by rebuilding an image and rolling the container. Immutable infrastructure is a good thing.
> 'just' try to get by without actually knowing how to build or verify our dependency chains, 'just' grab binaries from strangers on the net, 'just' download gigabytes of binary data to perform builds,
Gigabytes is a gross exaggeration for Docker images, but this applies to almost any package manager. Why is a package in a Debian repository (or similar) any better from a trust perspective than an image in Docker repository? Neither of the package/image maintainers are usually authors of the software.
If you need/want to, you can run your own docker registry, where you only upload self-built images, which completely removes all your trust concerns. You could probably even build docker images with nix.
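And that really is a one-liner with the official registry image (repo names here are examples):

docker run -d -p 5000:5000 --restart=always --name registry registry:2

docker build -t localhost:5000/myteam/myapp:1.0 .
docker push localhost:5000/myteam/myapp:1.0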
I fail to see how having to understand the internals of Linux containers and writing custom C programs is easier than writing a Dockerfile and running docker build.
Aside from being a bad argument for the reasons already listed, I'm honestly getting tired of the "X considered harmful" meme, when the author never actually makes an argument that docker is harmful, only superfluous.
The biggest complaint that I have is that Docker is a huge leaky abstraction. I end up having to mess with a lot of stuff in order to get docker, docker-compose, etc. all working Just Right®. It saves time, but it's very, very leaky, which makes it a bit of a minefield (which you can learn to navigate).
Plus one for the scathing and unapologetic criticism of Docker. I'm admittedly pretty inexperienced with Docker, but I feel like everything I've read on it seems to have virtually nothing bad to say about it, so it's nice to hear an opposing opinion.
That being said, this also feels like a "get off my lawn" type of rant from an experienced developer who is stuck in their ways, and/or has a little bit too big of an ego regarding their own skillset/knowledge. It's like the author is mad at a successful project just because they knew how to do all these things before said project came along and combined them all in an easy-to-use package.
I'm a junior level developer (just finished 1st year of professional work), so I look at these things in a totally different way. Maybe Docker is just reinventing the wheel, but if it makes it easier for inexperienced developers like myself to do things that require a long, complex tool chain, & concepts that aren't obvious to someone who hasn't been coding since the old days, then I'm all for it.
The point is to get work done & get it done quickly. For sure it's important to learn all the underpinnings, but I think junior developers tend to grind out passable work first, and fill in the gaps of knowledge slowly but surely as they go (at least that's my experience). But this is also why I up voted this & one of the main reasons I frequent HN, because articles like this are illuminating, & point me in directions I didn't know I should be looking :)
Docker is way more than that, and he's completely ignoring the bigger picture, and many people seem to be missing that. Its importance is not limited to "my little server" - it's the concepts that matter, not the technological details. Docker standardised a set of concepts, which have been adopted at a rapid pace for a reason. It's no accident big corporations like RedHat, Google, Microsoft, and Amazon are jumping on Kubernetes (which uses the Docker concepts, and massively extends them) for cloud deployments; it's the future.
It enables a standard way to quickly deploy and configure applications, and it democratised doing this at cloud scale. Sure, you can use it locally and for smaller-scale work, but that was never that hard, although it made that easier and faster too in some situations.
I'm a devops/sysadmin guy that introduced docker to quite a few developers, and most were very hesitant and sceptical at the beginning. But once they saw the power of what docker can do for them, that attitude quickly changed - and a lot went overboard with it initially (as did I in the early days I must admit).
Stuff the devs especially loved was the fact that with a single command, they could launch the entire application stack of whatever project they were working on locally, and with another command stop or destroy it. Database servers, amqp/rabbitmq/..., reverse proxies with path rewriting, ... One person had to maintain the docker-compose config; all the others just did 'docker-compose up -d'. It also allowed them to easily add dev-supporting services like mailhog, which fakes an SMTP server so they could visualise the emails they sent in a web UI, add a chaos monkey for testing, ...
For ops it also made stuff easy - the "works on my machine" virtually disappeared or at least was very quick to fix. Configuration and deployment was clear and straight-forward, even if eventually it wasn't deployed on docker.
Awesome! Thanks for the feedback. My company just took over a project for a client, and the previous team used Docker, so I will be getting my hands dirty in the new code base very soon. I know the skepticism from developers you mentioned. I know a bunch of guys with that mindset. They want to do EVERYTHING by hand so they can have "complete control" of everything, or something like that. While I find this admirable, I do not subscribe to it. I just want to develop. The more time & energy I can spend on being creative and building new things, the better. I will be happy to let Docker do some heavy lifting for me and free up hours I would have spent on configuration/setup, or reading documentation/Stack Overflow.
It's no accident big corporations like RedHat, Google, Microsoft, and Amazon are jumping on Kubernetes (which uses the Docker concepts, and massively extends them) for cloud deployments; it's the future.
Do you know the history of Kubernetes? And really, Kubernetes uses "Docker Concepts"?
Yes I do. Sure, it originated from Google and was built inspired by their internal stack (Borg) and a ton of their experience. For a good while, though, there was still competition in the form of Mesos/Swarm, and it was unclear what platform would get the upper hand, but k8s emerged as the clear winner here - and in the last year or so, everybody jumped on it.
> And really, Kubernetes uses "Docker Concepts"?
I don't really understand what your problem with that statement would be?
In theory he's totally right. In practice, Docker brings all these things together in a unified, standard set of tools, basically bringing such technology to the "masses".
If you can craft your own system, feel free to move forward with that.
The power of docker became clear to me when I handed basic instructions to our web devs and watched it just work. The tooling was docker's secret sauce, nothing more. And, yes, you still need somebody who knows what the system is actually doing keeping an eye on things; you can't just hand a bunch of devs docker and fire your ops team.
So I should learn every possible combination of init systems to convert some random launch scripts from the internet to run in my particular setup? Instead of `docker run postgres`? Or how should I do the same in K8s? Invent some packaging format? Wait a second.. aren't all Linux packaging formats overly complicated and really hard to maintain compared to Docker images? What if some package works only on some ridiculous version of Linux?
I've tried and failed a couple times to make Debian packages. The system is so old and crufty, writing a dockerfile is an order of magnitude simpler and there are lots of useful examples and tutorials that were written more recently than 20 years ago.
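For comparison, the Dockerfile route end to end is roughly this (a sketch; the binary name is hypothetical):

# myapp is a hypothetical prebuilt binary sitting in the build context.
cat > Dockerfile <<'EOF'
FROM debian:stretch-slim
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
EOF

docker build -t myapp:1.0 .
docker run --rm myapp:1.0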
> I've tried and failed a couple times to make Debian packages.
That's a real problem. Distributions have really dropped the ball here (to varying degrees).
But there's a big difference between "Docker is $distribution_package_manager done right!" and "Docker at least sucks less than the alternatives".
The tradeoff when packaging via Docker is often in integration with other facilities provided by the OS. Some of the hassles of packaging for a native OS package manager are senseless, bad UX to be sure. But others are there for a reason: how to integrate with init systems, standard directory locations, shared cache locations, or (god forbid) desktop/windowing systems? If your answer to those is "fuck it, use Docker", you often end up with a user experience akin to driving a portable mobile home down a small city street: technically fits and obeys traffic patterns, but doesn't behave in a way that anyone who has lived there for awhile expects it to.
> But others are there for a reason: how to integrate with init systems, standard directory locations, shared cache locations, or (god forbid) desktop/windowing systems?
Having tried (and succeed) many times in making Debian packages, I can attest to the value of these "hassles" and ones like them.
Much of the value is just in making sure one thinks about that whole breadth of issues and how they'll affect your environment. You may decide that a particular hassle actually is too much effort for not enough benefit, but at least the decision is conscious and, if it "bites" you later, you know where/how to go back and change things.
Additional value can come from all the taken-for-granted work that's been done over those past 20 years. Need multiple versions of something installed at the same time? Maybe the distro people already have a standard way to do that.
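For instance, Debian-style distros already handle co-installed versions with versioned package names plus update-alternatives; a rough sketch (exact package names vary by release):

```
# Install two compiler versions side by side.
sudo apt-get install gcc-11 gcc-12

# Register both as alternatives for the plain "gcc" command, each with a priority.
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 110
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 120

# Interactively pick which one /usr/bin/gcc points at.
sudo update-alternatives --config gcc
```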
Of course, all of this value can only be obtained after a remarkably huge up-front investment of learning, and, from what I recall, already knowing RPM didn't substantially lessen that load when learning debianization. It's tough to fault the attitude of "fuck it, use Docker for packaging" (for any value of "Docker", including "tarball", "pip", "npm", etc) for anyone whose career isn't Ops.
This article concludes that systemd-nspawn is a more unix-style alternative to Docker. Interestingly enough, this is what rkt uses for its default isolation (stage1).
What Docker did is bring all of the aspects mentioned in this article together into something that could be easily understood and used. Now that the ideas are familiar and there are standards like the Open Container Initiative, I think we'll see more smaller and specialized tools being built and used. Take for example CRI-O, which is Red Hat's container runtime that only targets the execution of Pods for Kubernetes.
According to the "Initial release" dates on Wikipedia, systemd is 3 years older than Docker (2010 vs 2013). And one of the core objectives of systemd was to take advantage of then-still-new Linux kernel features like cgroups.
I think people who don't prefer Docker find it hard to articulate what is wrong with it.
To me it is really a question of: how do you vet software before you use it? Will you take the time to understand your stack before it's deployed? When you employ certain abstractions, how much visibility do you lose? Is it worth it?
Docker is fine as one of many alternatives, but to turn your nose up at well written, battle-tested software that is part of most Linux distros is a little crazy. I would definitely bet that people who choose to learn how cgroups work and how systemd works will see their skills age more gracefully.
It reminds me of DevOps candidates I have interviewed who laugh at the idea of ssh-ing in to deploy a new version of code, without understanding that Ansible is generally doing the same thing.
As far as I can tell, Docker is a lot like Amazon AWS in that the primary reason to use it in corp (I'm talking about corp IT, as opposed to production/customer-facing stuff) is that, due to the hype, it somehow got past security, and you are allowed (by corp security types) to do things in it that would require filling out forms in triplicate to do on cheaper or more secure infrastructure.
I've worked places where they wouldn't let us run virtual machines of any type... except Docker. Custom Docker images were just fine. In the aughts, I worked places where spinning up a virtual machine on our internal infrastructure required manager approval and a day and a half of someone manually jiggering the thing (I've been that someone doing the jiggering, too), so I could totally understand why, when AWS opened for business, people practically fled to that platform.
What's interesting about AWS is that most places still don't have AWS level provisioning of virtual machines, even though there exist tools like ganeti that work and are pretty easy to use (though difficult to tie into accounting)
All of these arguments are well-aired, and the tone of the article doesn't make this particular presentation of them more useful than any of the others. I would guess a large portion of the developers actively using containers are well aware that they are built from existing system capabilities that can be utilized without docker or any container runtime. I mean this is from 2015: https://chimeracoder.github.io/docker-without-docker/#1. You can argue that the new way of bundling these things together is not better, simpler, more reproducible or less error prone, but whatever... the market has voted. I personally think it is all those things, but we can disagree about that.
One thing in particular I wanted to respond to, and that is the idea of your container filling up with orphaned zombie processes because a real init is not pid 1. If I understand the issue correctly, this can only happen when a process inside the container forks children and then exits without reaping them, so the orphans get re-parented to pid 1, which never wait()s on them. I've never personally seen this issue in four years of working with containers, and since all of our containers now run on k8s they would be restarted if pid 1 exited abnormally or otherwise. I'll be interested to see if any other HN commenters have actually had this problem.
Also re: the supposed absurdity of one process per container... it's just simpler and works more robustly with orchestration. Containers are lightweight, so there's no reason to try to pack a whole system into an image. It's simpler to reason about a composition of containers than a mess of processes running inside a single container, imo.
In theory, you are vulnerable because you've eliminated the zombie reaping mechanism.
In practice, the fact that you're running only one application allows you to make lots of simplifying assumptions so you can avoid getting into this situation quite easily. Just stick with a model where parents never exit before children (and always reap children). You don't need an adoptive parent if you never create orphans.
And even if it did happen, the resource consumption is tiny. Zombie processes free all their memory, close all their files, etc. The only remainder is a small data structure necessary to support stuff like the wait() call returning the process's exit status.
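And if reaping ever does become a real concern, Docker has a built-in escape hatch; a quick sketch (the image and container names here are hypothetical):

```
# --init injects a tiny init (tini) as PID 1, which reaps orphaned children.
docker run --init -d --name myservice myimage:latest

# The same idea baked into an image, using tini explicitly
# (the tini path depends on how it was installed):
#   ENTRYPOINT ["/usr/bin/tini", "--"]
#   CMD ["python", "app.py"]
```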
"Why would I ever want to use this easy-to-use technology when I could get exactly the same effect by combining 6 different standard tools with some custom software that I've written using my decades of Linux administration experience?"
This post totally misses the forest for some trees. Docker isn't a success because it is some amazing revolutionary technology that doesn't exist in another form. Docker is a success and beloved by many because it provides a UX that doesn't require learning about a lot of this stuff upfront so you can get a lot of the immediate benefit without knowing a ton.
Especially, the ton you don't need to know is the endless configuration headaches associated with running a VM, communicating with the VM, mucking about with the filesystem, etc. I've been computering for 20 years, I've seen my share of OS's, but if I want to use a new one, it's back to square one as I'm poking around for manpages and other bullshit. I don't feel smarter after learning yet another deranged mind's conventions.
I think the author misses the point that Docker is a standard and that is where the value is. The value is the ecosystem and tooling that is possible once people standardize.
Most of the article reads like "Programming language X is Turing complete, therefore all other programming languages are pointless."
Personally, and somewhat surprised it wasn't mentioned, I'm less concerned about everything mentioned in this post and more concerned about the docker ecosystem involving people using random and potentially compromised containers off of DockerHub: https://arstechnica.com/information-technology/2018/06/backd...
Silly question, but I have heard that around a fifth of the websites that you see on the internet use 'Wordpress'. 'Wordpress' is a simple blogging platform that has a really nice editor that people like working with. There are many add-ons for 'Wordpress' that enable people to customise what it does and what it looks like. These can complicate matters, however, at the end of the day, 'Wordpress' is just a neat blogging tool and does not require rocket surgery to work with.
In this age of fancy build tools and containerisation, is it necessary and advantageous to develop 'Wordpress' with Docker, Vagrant or any other containerisation?
I could see this as being helpful if you are only allowed a consumer operating system, e.g. Microsoft Windows, but is containerisation the way one would develop a simple Wordpress site if your company's IT department allowed you to run a linux machine?
Admittedly 'Wordpress' is the Hello World of getting online but I genuinely do not know if containerisation is what people would do these days for such a simple use case.
As for the benefit... you can switch, e.g., the PHP version by simply editing the docker-compose.yml file, which is anything but simple on most Linux distributions.
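A hedged sketch of that for the WordPress case discussed above; the image tags, credentials and ports are illustrative only, and switching PHP is a one-line tag change:

```
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mariadb:10.11
    environment:
      MYSQL_ROOT_PASSWORD: devonly
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress:php8.2-apache   # swap the tag to change the PHP version
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: devonly
      WORDPRESS_DB_NAME: wordpress
EOF

docker compose up -d
```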
I find myself using it for development more than anything else, and it is very nice for that. For example, I have a little universe of very singularly purposed things that work together in a bundle. Come time to actually deploy it in production, I would typically have these individual things broken out into their own clusters of servers, or cherry-pick managed services that make the most sense for cost/reliability/ease of use, etc., but I'm sure I'm just inexperienced with all the different scenarios larger orgs would do. At least for development, it really is beautiful to work with and saves loads of time fiddling to get some monstrous permutation of settings just right after googling for many hours to get there. I don't recall ever spending a day setting up an env and thinking that it was knowledge worth having or time well spent.
>One primary risk with running Docker containers is that the default set of capabilities and mounts given to a container may provide incomplete isolation, either independently, or when used in combination with kernel vulnerabilities.
Obligatory link to why "considered harmful" essays might be harmful[0].
I'm not a docker "fan". I've seen one too many "hello-world" projects wrapped in docker and had to waste 30+ minutes and 2GB HDD space to do something that should have been a script.
That said, any essay that criticises docker without acknowledging the huge benefit of a layer between a full VM (more overhead, more space, more time) and dealing with platform specific issues (oh, if you're using version 3.5 of X and 4.2 of Y then you need to roll back the latest update of Z and sacrifice a goat to the Linux gods), is missing something.
I love the plain HTML and that I could read this in the train on a very spotty mobile connection.
What I got out of this post is a very good insight into how Docker works internally. And it does raise my interest in containers.
I have been disconnected from the cloud for quite some time, but I have always been interested in sandboxes, mainly for security purposes. I have used chroot, seccomp, AppArmor and Firejail before. This is not related to Docker directly, but the author builds a bridge that makes Docker interesting to me.
I'm still waiting for a good Docker tutorial with an actual project using actual bricks that people use in real life (like nginx, an Express app, maybe some memcache, redis, LBs, etc). All the tutorials you find revolve around mundane things like getting your environment ready and never on the actual issues that people face when using the tech.
I'm waiting for someone to invent a simple Heroku-on-your-own-server kind of thing that you can use to run small apps, maybe on top of Docker.
Dokku is not that option. Dokku is a super-complex operating system unto itself, a mystery of stuff glued together by bash scripts, full of bugs and corner cases.
Every tool trying to simplify or standardise something will inevitably be met with criticism from someone saying 'but it's not hard, you just do <insert entire manpage here>', completely missing the point.
I absolutely love Docker myself. However, in my enterprise it is mostly valuable to interns and consultants. They show managers how quickly they can get things running and leave it to ops to actually get things working.
If you work by yourself, feel free to use whatever arcane shell scripting you desire, but for the love of humanity, if you are going to expose other people to your code, please use a standardized tool like Docker.
The nice thing about Docker is that it's been standardized across multiple platforms; we even have Windows-native containers now. There's also BuildKit, which you can use to create a cached/incremental build system.
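A small sketch of the BuildKit point, assuming a hypothetical Python project; the cache mount lets repeated builds reuse downloaded packages instead of fetching them every time:

```
cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
# BuildKit cache mount: pip's download cache persists across builds.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .
EOF

DOCKER_BUILDKIT=1 docker build -t myapp .
```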
I think that Docker became popular because the very limited functionality of distributions' package managers doesn't match developers' expectations.
Traditionally in Linux there is no concept of "system" and "applications". There is only one large "system" and you can add parts to it. In old times, you downloaded C source code, make'd and installed it. Now you use a package manager to extend your system with new features, like playing music or editing images.
There are no "applications". If you download Firefox package from Debian repository, it is not the Firefox application; it is a version of Firefox, tuned, patched and customized for integrating into Debian. You probably won't be able to run it even in Ubuntu, let alone other distributions.
Paths are often hardcoded; you cannot install a program into your home directory or a USB drive (apt-get might let you do that, but it won't fix the embedded file paths for you).
It might be good enough for a user (as we see with the Google Play Store on Android), but it is very inconvenient for a developer. You often need to have several versions of a program, for example the PHP interpreter or the Go compiler; no way a package manager lets you do that, compile them yourself. You might want to use an application with a project-specific config rather than the one in /etc, and start it on demand rather than install it as a system daemon. That is not easy either. Apt-get and dpkg have literally hundreds of configuration knobs in the manual, but you cannot choose the installation directory.
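That multiple-versions case is where containers genuinely shine; a quick sketch using pinned image tags (the tags and mounts are just examples):

```
# Two Go toolchains, no conflicting system packages.
docker run --rm golang:1.21 go version
docker run --rm golang:1.22 go version

# A project-specific PHP, mounted and run on demand rather than as a daemon.
docker run --rm -v "$PWD:/app" -w /app php:8.2-cli php -v
```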
If you want to run an old application, like Firefox 2.0, get prepared for trouble. While newer versions of libraries like GTK are supposed to be backwards compatible, Firefox 2.0 won't run with them (I tried). You will have to obtain and build old versions of GTK manually; good luck with that. On the good side, I can add that Debian maintains an archive of old packages, and you only need to write a custom package manager to install them along with their dependencies.
Often official repositories have outdated versions of software; you have to add third-party repositories while giving them full root access to your machine. Want to install Sublime Text in Debian? You have to trust its authors because they can replace your sshd and you won't even notice. Also, third-party repos sometimes break or conflict with system ones.
Package managers are a pain for developers too. They have to maintain packages for all popular distributions, and even for different versions of those distributions. Because there are no "apps", you have to integrate your program into every distribution manually. Of course, you will have to do it again and again when anything changes in the distro. Distro maintainers often have to do the same thing, maintaining private patches. And on top of that, different distributions use different package formats.
Maybe one of the reasons for this is the lack of a standard package manager and build system for C programs. You cannot download a C program with dependencies from GitHub and build it with a single command. It is disappointing if you are used to languages like PHP, where this has been possible for a long time.
Docker seems to solve many of these problems (and some others, like hardcoded server port numbers) by using what amounts to a lightweight virtual machine with a full-blown distribution inside. Of course, it looks more like a quick hack than a reliable, well-thought-out solution. Also, Docker requires a lot of resources: space on disk to store multiple images, and memory and CPU time to run unnecessary daemons inside them. I don't like it.
By the way, the Linux kernel has a similar problem with drivers that are part of the kernel rather than separate entities.
> ... because of very limited functionality of distributions' package managers that doesn't match developers' expectations.
In particular, package managers don't match the expectations of build systems, because package managers are written by and for ops guys.
A sysadmin working on a system serving a production load rarely wants to reboot the system, let alone blow it all away and rebuild it from scratch. Moreover, the sysadmin needs to administer many systems, so they don't want anything being installed in special places; that's just adding more complexity.
A build system has to account for the fact that devs are tinkering with the code, experimenting, etc. and need to periodically wipe it all away and run it with a clean slate.
> A sysadmin working on a system serving a production load rarely wants to reboot the system, let alone blow it all away and rebuild it from scratch.
Uh, actually, that's exactly what I want. It's just Ops managers [1] who can't seem to wrap their heads around the notion.
I've only once had the luxury to actually get to do that, and it took 5 minutes from IPMI reset to fully running system (scalable up to 100 boxes at a time from a single kickstart server). That was over 10 years ago, too.
> package managers are written by and for ops guys.
I just don't believe that, at least the "by" part. Linux is created by developers, my experience with the two major packaging systems is that the vast majority of the tooling for packaging has to do with the building, not administrative.
> A build system has to account for the fact that devs are tinkering with the code, experimenting, etc. and need to periodically wipe it all away and run it with a clean slate.
Ok.. so how do the Linux distro package build systems fail to account for that?
How does a packaging system have anything to do with the behavior of choosing to start with a clean or dirty slate?
[1] OK, and maybe some, maybe even most sysadmins, but please don't tar us all with the same brush.
Every "Docker Considered Harmful" post I've read basically boils down to "Why would you use Docker if you can use the 10 technologies it wraps around and manage them yourself instead?" Why would I want to do that if I don't have to? Docker wraps these things well. There are weird defaults and there are some popular patterns in the community that are a bit backwards, but you have the power to work around it. Don't run your containers as root and run a dumb init process in your container. That's half of the posts complaints gone right there. Complaining about defaults is one thing, claiming that bad defaults make a technology "harmful" is just lazy.