
I don't understand how running a single command to start either a single container or a stack of them with Compose, which pulls down all the requirements in a tarball-like bundle and just runs, is seen as more complicated than running random binaries, setting values in php.ini, setting up MySQL or Postgres, daemonizing said binaries, and making sure libraries and the like are in order.
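Something like this is all it takes (a minimal sketch, assuming a docker-compose.yml that already describes the app and its database; the directory name is made up):

  cd myapp/             # directory containing docker-compose.yml
  docker-compose up -d  # pulls any missing images, then starts every service
  docker-compose ps     # confirm the whole stack is running

The images carry their own libraries and php.ini-style settings, so none of that touches the host.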



You're going to be setting all that stuff up either way, though. It'll either be in a Dockerfile, or in a Vagrantfile (or an Ansible playbook, or a shell script, ...). But past a certain point you can't really get away from all that.

So I think it comes down to personal preference. This is going to sound a bit silly, but to me, running things in VMs feels like living in an apartment. Containers feel more like living out of a hotel room.

I know how to maintain an apartment, more or less. I've been living in them my whole life. I know what kinds of things I generally should and should not mess with. I'm not averse to hotels by any means, but if I'm going to spend a lot of time in a place, I will pick the apartment, where I can put all of my cumulative apartment-dwelling hours to good use.


Yes, thank you for answering on my behalf. To underscore this, the decision is whether to set up all of your dependencies and configurations with a tool like bash, or to set it all up within Docker, which involves setting up Docker itself, which sometimes involves setting up (and paying for) things like registries and orchestration tools.

I might tweak the apartment metaphor because I think it's generous to imply that, like a hotel, Docker does everything for you. Maybe Dockerless development is like living in an apartment and working on a boat, while using Docker is like living and working on a houseboat.

There is one thing I definitely prefer Docker for, and that's running images that were created by someone else, when little to no configuration is required. For example, running Postgres locally can be nicer with Docker than without, especially if you need multiple Postgres versions. I use this workflow for proofs of concept, trials, and the like.
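For instance (a rough sketch; container names, host ports, and the password are placeholders), two Postgres versions can run side by side without either one touching the host install:

  docker run -d --name pg15 -e POSTGRES_PASSWORD=secret -p 5415:5432 \
    -v pg15-data:/var/lib/postgresql/data postgres:15
  docker run -d --name pg16 -e POSTGRES_PASSWORD=secret -p 5416:5432 \
    -v pg16-data:/var/lib/postgresql/data postgres:16
  psql -h localhost -p 5416 -U postgres   # connect to whichever one the PoC needs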


I suppose like anything, it's a preference based on where the majority of your experience is, and what you're using it for. If you're running things you've written and it's all done the same way, docker probably is just an extra step.

I personally run a bunch of software I've written, as well as open source things. So for me docker makes everything significantly easier, and saves me installing a lot of rubbish I don't understand well.


After 20 years of various things breaking on my (admittedly franken) debian installs after each dist-upgrade, and spending days troubleshooting each time, I recently took the plunge and switched all services to docker-compose.

I then booted into a new fresh clean debian environment, mounted my disks, and:

  cd /opt/docker/configs; for i in */; do (cd "$i" && docker-compose up -d); done
Voila, everything was up and working, and no longer tied to my underlying OS. Now at least I can keep my distro and kernel etc all up to date without worrying about anything else breaking.
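The same loop can also keep the images themselves current (a sketch assuming the same layout, one directory per stack under /opt/docker/configs):

  cd /opt/docker/configs
  for i in */; do (cd "$i" && docker-compose pull && docker-compose up -d); done
  docker image prune -f   # optionally reclaim space from superseded images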

Sure, I have a new set of problems, but they feel smaller.


Thou hast discovered docker's truest use case.

Like, legit, this is the whole point of docker. Application/service dependencies are no longer tied to the server they run on, mitigating the worst parts of dependency hell.

Although, in your case, I suppose your tolerance for dependency hell has been quite high ;)


> Application/service dependencies are no longer tied to the server they run on, mitigating the worst parts of dependency hell.

Until you decide to optimise for resources and do crazy things like “one postgres instance, one influxdb instance” instead of “one instance per microservice”, and then you get back into hell pretty quickly.
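Concretely, "sharing" tends to look something like this (a sketch with made-up names), and it reintroduces exactly the coupling containers were supposed to remove:

  # one shared Postgres for several services, reachable over a common network
  docker network create shared-db
  docker run -d --name pg --network shared-db -e POSTGRES_PASSWORD=secret \
    -v shared-pg:/var/lib/postgresql/data postgres:16
  # every app now depends on this single instance and its version
  docker run -d --name app1 --network shared-db \
    -e DATABASE_URL=postgres://postgres:secret@pg:5432/app1 myorg/app1
  docker run -d --name app2 --network shared-db \
    -e DATABASE_URL=postgres://postgres:secret@pg:5432/app2 myorg/app2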

Winds me up how massive tiny applications become, and how my choices are to throw money (RAM) at the problem or money (time) at the problem. I wonder when someone will do the math and prove that developer laziness is a substantial drag on global efficiency. The aggregate cost borne by users has to be orders of magnitude larger than the cost savings made by developers at this point.


> Now at least I can keep my distro and kernel etc all up to date without worrying about anything else breaking.

I get what you are saying, but note a word of caution - kernel upgrades can break container runtimes: https://github.com/containers/podman/issues/10623.


I'm doing exactly the same thing. I started doing everything on my Synology with Docker Compose and got rid of most of the Synology apps, replacing them with open-source applications.

At some point I moved individual containers to other machines and they work perfectly. VPS, NUC, it doesn't matter.


Yea, I'm in the same boat, and I'm wondering if there's a big contingent of devs out there who bristle at Docker. The biggest issue I run into writing my lab software is finding a decent enough container registry, but now I just endorse the free tier of Vultr CR.


I just use the github registry, but I've been paying for their personal pro subscription for years now so it wasn't really an "additional" cost for me.
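In case it's useful to anyone, the whole flow is just (username and image name are placeholders; the token needs the write:packages scope):

  echo "$GITHUB_TOKEN" | docker login ghcr.io -u myuser --password-stdin
  docker tag myimage:latest ghcr.io/myuser/myimage:latest
  docker push ghcr.io/myuser/myimage:latest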



