For the Mac, just get Canonical’s Multipass (http://multipass.run) and do an apt-get to install Docker into a VM and use VS Code to “remote” to it. It will automatically install the Docker extension inside the Linux VM and you’re set.
For Windows, use WSL2 and do the same.
Both can mount “local” folders, although the setup is obviously different.
You now have a better way to manage containers than ever before.
Well, it does set up everything automagically for you. I can also dig around for my Docker CLI config and the right way to expose the Docker TCP socket to the host, but if you need a quick way to get working, VS Code is it.
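For anyone who wants the non-automagic version, here's a rough sketch of that Multipass route (the VM name and sizes are just examples; adjust to taste):

```shell
# Create an Ubuntu VM with Multipass (name "docker-vm" is arbitrary).
multipass launch --name docker-vm --cpus 2 --disk 20G

# Install Docker inside the VM from the Ubuntu repos.
multipass exec docker-vm -- sudo apt-get update
multipass exec docker-vm -- sudo apt-get install -y docker.io

# Let the default user talk to the daemon without sudo.
multipass exec docker-vm -- sudo usermod -aG docker ubuntu

# Then point VS Code's remote support at the VM, or just open a shell:
multipass shell docker-vm
```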
Why run Docker inside a VM on a Mac, when you can just run the Linux dev environment directly inside the VM? That's just starting to sound like Docker for the sake of Docker.
Multipass, Qemu, and Parallels can all provide a solid VM on Mac host. All you need after that is your dev environment VM guest image to deploy to the team.
Some people here actually want and need Docker features. For me it's the ability to run from a given image and know that I've got _exactly_ the same image that other developers have. Reproducibility.
I might be wrong, but I think his point is that by the time you're running a linux VM for docker, why not go ahead and get the rest of the tooling for free?
Docker can still be run in the VM just fine, for cases where you want a reproducible build environment.
I do this at any company that lets me (and by "lets", I mean doesn't explicitly forbid it). They all give me a Mac, and the first (and sometimes only) thing I install is VMware Fusion, followed by the Linux distro of my choice (Arch).
>for cases where you want a reproducible build environment.
Or just create your reproducible build environment as a QEMU VM image instead of a Dockerfile. That way you only have to install a VM image, instead of installing a VM image/OS + installing Docker + building your Dockerfile.
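To make that concrete, one common QEMU pattern is a shared base image plus throwaway overlays, so the base stays byte-identical for everyone (file names here are made up for illustration):

```shell
# Create a copy-on-write overlay on top of a base image the team shares.
# The base image is never written to, so every teammate starts from
# exactly the same bytes.
qemu-img create -f qcow2 -b build-env-base.qcow2 -F qcow2 work.qcow2

# Boot the overlay; forward host port 2222 to the guest's SSH port.
qemu-system-x86_64 -m 4G -smp 2 \
  -drive file=work.qcow2,if=virtio \
  -nic user,hostfwd=tcp::2222-:22
```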
Containers solve a different problem than VMs. The biggest issues (at least for me) are:
1. The second a dev starts using that VM, it's no longer reproducible. The point of Docker is that a developer can create reproducible images as part of normal development.
2. I won't be running that QEMU vm in production, but I might very well be running the exact same container image in both development and production.
Digests cryptographically guarantee that you get the correct content, which prevents both malicious tampering (MITM, stolen credentials, etc.) and accidental mutations. This is why "immutable tags" are a bad substitute and an oxymoron.
There are also better caching properties when using content addressable identifiers. For example with kubernetes pull policies, using IfNotPresent and deploying by digest means you don't even have to check with the registry to initialize a pod if the image is already cached, which can improve startup latency.
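In practice, pinning by digest looks something like this (image name is just an example; the digest placeholder is whatever your registry reports):

```shell
# Resolve a mutable tag to its content-addressed digest.
docker pull alpine:3.18
docker inspect --format '{{index .RepoDigests 0}}' alpine:3.18
# prints something like: alpine@sha256:<64-hex-digest>

# The pod spec then pins the digest instead of the tag, e.g.:
#   image: alpine@sha256:<digest from above>
#   imagePullPolicy: IfNotPresent
```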
> There are also better caching properties when using content addressable identifiers. For example with kubernetes pull policies, using IfNotPresent and deploying by digest means you don't even have to check with the registry to initialize a pod if the image is already cached, which can improve startup latency.
While I agree with the unquoted part, this also holds for human-readable (aka mutable-that-should-be-immutable) tags when that pull policy is set (which is the default for every tag other than `latest`).
Because you can map your working folder inside it on both Multipass and WSL2, and you can get an integrated editor experience with VS Code, which is what many people apparently want to do (I’m a tmux guy so I don’t care, but I thought I’d provide a user-friendly approach).
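For reference, the folder mapping looks like this on each platform (VM name and paths are examples):

```shell
# Multipass: mount a host folder into the VM.
multipass mount ~/code docker-vm:/home/ubuntu/code

# WSL2: Windows drives are already visible from inside the distro.
ls /mnt/c/Users
```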
I would strongly recommend docker.io from the Ubuntu/Debian repos. It will always be Apache 2.0 licensed. (It's a fork of Moby packaged by the Debian people.)
Docker Engine looks problematic since the license isn't clear at all. For instance, Microsoft didn't include Docker in GitHub Actions; they too forked Moby and packaged it on their own, since they can't comply with Docker Engine's End User Agreement.
No, you can apt-get docker.io (the repackaged version available for the last 2-3 LTS releases, built from source and with fan networking support). Works for 99.9% of your use cases.
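That install is a one-liner plus the usual group setup (the hello-world run at the end is just a smoke test):

```shell
# The repackaged Debian/Ubuntu build, straight from the default repos.
sudo apt-get update
sudo apt-get install -y docker.io

# Let your user talk to the daemon without sudo (log out and back in after).
sudo usermod -aG docker "$USER"

# Smoke test.
docker run --rm hello-world
```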
It's in "beta" right now, but it works quite well (you can find the binary in the dedicated GitHub issue). Under the hood it just uses QEMU, which in turn uses Apple's Hypervisor.framework for virtualization.