Thank you. I'm a hardware guy and I've heard about a dozen explanations of what Docker is, and even tried to set it up on my NAS, but I still have no idea. And to the person about to comment with your version, I don't really care to know.
Pedantry like this is what scared OP away to begin with. Sometimes you have to resist the urge to correct or convey technical details for someone who will not understand the technical details anyway.
For example, when explaining bitcoin to people I use the analogy of a stack of receipts. That's not technically how a blockchain works, but my mom doesn't care how a blockchain works.
I tend to agree in general but I'm not sure in this case. The fact that containers are/were a form of operating system virtualization--which is how they were once referred to--is probably one of the least interesting aspects of how they're used today.
If I were explaining them to someone less technical I'd say something like: they're a way to package programs with the other stuff they need to run, so that they can be reliably and quickly started up and moved from place to place.
But they serve the same purpose, even if their implementations are entirely different.
So it makes sense to provide an initial explanation of one in terms of the other, especially for people who already have some mental model of what virtual machines do.
No, they don't serve the same purpose. Try running Microsoft Exchange in a Docker container on a Linux server. Then try running it in a Windows VM on a Linux server.
Even if they don't always serve the same purpose, they serve a similar purpose in a number of examples. It's still a valid analogy to get someone thinking about how they might work.
Containers are not VMs in the same way that WINE Is Not An Emulator.
That is... yes, that's technically true but to the end user there is very very little difference, if any, unless you start playing with advanced stuff that affects the guts.
I don't understand why people don't frame the explanation in the context of what problem the thing is trying to solve.
Imagine you have to install a database (let's say Postgres) on your machine. You could just install it the regular way - when you do this, Postgres uses certain things by default, like port 5432.
Now you're going to install a different piece of software on the machine, but this software also uses port 5432 by default, and you get an error about the port already being in use. This is a pain; now you have to change the default port. It turns out these conflicts happen for a lot of things on the machine, and it's a hassle to make non-standard changes all over the place that you then have to keep track of.
For your first attempt to solve this you decide to run postgres in a virtual machine and connect to that, but now you have an entire extra operating system to deal with and all of the overhead that comes with that. Plus you have to configure access to the VM and get all of those pieces to work too.
It'd be nice if there were a way to install Postgres somewhere easily accessible, but separate from the rest of your machine with regard to things like ports. A "container" it could be installed in, which handles things like port conflicts in some magical backend you don't have to deal with, so from your side it looks like you can always just use the defaults. This container model is basically Docker (or at least my current understanding of it).
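A rough sketch of how that looks with Docker's port mapping (the container names and image tag here are just examples):

```
# two Postgres containers, each using its default port 5432 internally,
# mapped to different ports on the host via -p <host-port>:<container-port>
docker run -d --name pg-one -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:16
docker run -d --name pg-two -e POSTGRES_PASSWORD=secret -p 5433:5432 postgres:16

# connect to the second instance through its host-side port
psql -h localhost -p 5433 -U postgres
```

Inside each container, Postgres thinks it owns the default port; the conflict is resolved entirely on the host side of the mapping.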
I hope this PostgreSQL example is simply meant to illustrate a point, and not taken from personal experience. I say that because PostgreSQL is designed to run multiple instances on different ports, and you can run multiple versions alongside each other as well. All it takes is editing a couple configuration files.
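For what it's worth, a minimal sketch of running a second instance by hand (the data directory path is made up, and the exact steps vary by distro and packaging):

```
# initialize a second data directory for a second cluster
initdb -D /var/lib/postgresql/second

# in /var/lib/postgresql/second/postgresql.conf, change the default:
#   port = 5433

# start it and connect on the non-default port
pg_ctl -D /var/lib/postgresql/second start
psql -p 5433 postgres
```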
What you want is called a "volume". When running a Docker container you pass the -v option, which maps a directory outside the container to a path inside it, e.g. `docker run -v /host/directory:/container/directory ...` would make the directory /host/directory on the host machine accessible inside the container at the path /container/directory.
While you can directly map the host filesystem to the container filesystem in the manner you describe, that's referred to as a bind mount, not a volume. Volumes reside in the host filesystem as well but are solely managed by Docker.
Outside of trickery requiring the use of bind mounts (I use one to share ~/.ssh between a host user and container user, for example), volumes are recommended.
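To make the distinction concrete, a small sketch (the image and volume names are arbitrary):

```
# bind mount: an existing host path is mapped into the container
docker run -v /home/me/.ssh:/home/app/.ssh:ro my-image

# named volume: created and managed by Docker itself, no host path to pick
docker volume create pgdata
docker run -d -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres:16
```

Same -v flag in both cases; the difference is whether the left-hand side is a host path or a volume name that Docker manages for you.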
You may not be the target audience. Docker is a tool primarily used to help efficiently deploy your networked applications on new servers. If you don't need to deploy server apps, you can stop reading here. If you do need to deploy apps to servers, Docker is worth investigating. It replaces a complex install script and git clones with an efficient copy of a lightweight 'VM' image that is ready to run your app immediately.
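As a rough sketch of that deploy story (the registry and image names are made up):

```
# build and publish the image once, from your project directory
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0

# on any new server: no install script, no git clone, just pull and run
docker pull registry.example.com/myapp:1.0
docker run -d -p 80:8080 registry.example.com/myapp:1.0
```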
I'm laughing my head off at how your comment, precisely about this point, has spawned a conversation that has included: "it's like a VM", "it's not like a VM", "it depends on how you define VM", "there are lots of definitions for VM", "you can use a volume, just use the --volume flag", "that's not for volumes, that's for bind mounts".
OH MY GOSH CAN YOU PEOPLE NOT SEE HOW BADLY DEFINED ALL OF THE NOMENCLATURE IS?