Hacker News

You forgot possibility (d): allow multiple library versions to be present simultaneously.



See: virtualenv (for Python), Snap, Flatpak
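For the Python case, a minimal sketch of what (d) looks like in practice. The paths under /tmp and the pinned requests versions are purely illustrative:

```shell
# Two projects, two incompatible versions of the same library, no conflict:
python3 -m venv /tmp/proj-a-env
python3 -m venv /tmp/proj-b-env

# Each environment gets its own site-packages, so version pins don't collide
# (installs commented out here to keep the sketch offline):
# /tmp/proj-a-env/bin/pip install 'requests==2.25.0'
# /tmp/proj-b-env/bin/pip install 'requests==2.31.0'

# Each interpreter only sees its own environment:
/tmp/proj-a-env/bin/python -c 'import sys; print(sys.prefix)'   # prints /tmp/proj-a-env
```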

The main advantage of docker, as far as my understanding goes, is more of a prebuilt system configuration thing. Need a database? Load the prebuilt PostgreSQL image onto the respective machine.

One thing I think should get more use in general is that Dockerfiles are essentially reproducible setup scripts[1]. Too many companies I've seen still use Word documents full of manual steps that are easy to get wrong for all their machine setups (especially in the Windows world). If you just want to test something quickly, you're bogged down for a day.

[1]: example: https://github.com/kstaken/dockerfile-examples/blob/master/m...
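To make the "reproducible script" point concrete, here is a hedged sketch of what such a Dockerfile might look like; the base image, package, and paths are illustrative and not taken from the linked example:

```dockerfile
# Every step below runs the same way on every machine that builds the image,
# so the "setup document" and the setup itself are one and the same artifact.
FROM debian:bookworm-slim

# Package installation is an explicit, repeatable step instead of a manual one
RUN apt-get update && \
    apt-get install -y --no-install-recommends postgresql-client && \
    rm -rf /var/lib/apt/lists/*

# Configuration is copied in, not clicked together by hand
COPY app.conf /etc/app/app.conf

CMD ["psql", "--version"]
```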


Over the last 15 years I've built and maintained Linux distros using apt, with custom debs and preseed files handling configuration, automatic upgrades for most systems, and a small shell script plus ssh for the rest.

Now it's all Docker rather than packages, and Ansible (which leaves no trace of what it's doing on the target machine) rather than a for i in `cat hosts`. Fine, but where's the benefit?
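The one-liner in question is roughly the following; the hostnames are made up, and the ssh command is only echoed so the sketch is safe to run:

```shell
# A hosts file with one machine per line stands in for the inventory:
printf 'web1\nweb2\ndb1\n' > /tmp/hosts

# The classic pre-Ansible loop: run the same command on every host.
for host in $(cat /tmp/hosts); do
  echo ssh "$host" "apt-get -y upgrade"   # echo instead of ssh for the sketch
done
```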


Note that I didn't make any value statement towards Docker - I don't have that much experience with it and found its tooling rather clunky when I tried it. So, I'm not sure I'm qualified enough to make definitive statements about its usefulness.

I can see, however, a benefit in encapsulating different services in different containers, as this potentially gives you some control over them (available disk space, network usage, etc.)*. On top of that, I imagine starting out with a "machine agnostic" approach can be rather useful if you have to change your network landscape further down the line: If your database is already configured as if it were running on its own server, there are certainly a bunch of unintended coupling effects you can avoid.

That said, I can't see Docker being the silver bullet it gets hyped up to be sometimes. But that goes for most new and shiny things in the tech space...

* And yes, that's already feasible without containers. Docker's approach seems to provide a much more automated way to do it than most alternatives, though.



> The main advantage of docker, as far as my understanding goes, is more of a prebuilt system configuration thing

I think the main advantage is that it standardizes the interface around the application image/container. This allows powerful abstractions to be written once rather than requiring a bespoke implementation for each way that the application is structured. Imagine writing the equivalent of Kubernetes around some hacked-together allows-multiple-versions-of-a-dependency solution. It would be a nightmare. But because the Docker image/container interfaces are codified, you can build powerful logic around those boundaries without needing to understand what's inside those images/containers. Dynamically shifting load, recovering from failures, and automating deployments and scaling are all much easier when you don't have to worry about what language the application is written in or how it is structured.
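A hedged illustration of that standardized boundary: in a Kubernetes Deployment, the orchestrator only needs the image reference, and everything about the application's language and internal structure stays hidden behind it (the names and image below are made up):

```yaml
# The scheduler, autoscaler, and failure recovery all operate on this spec;
# none of them care what is inside the image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-app
spec:
  replicas: 3                 # scaling is just a number here
  selector:
    matchLabels:
      app: some-app
  template:
    metadata:
      labels:
        app: some-app
    spec:
      containers:
        - name: some-app
          image: example.org/some-app:1.0   # the only app-specific detail
          ports:
            - containerPort: 8080
```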




