Podman 4.0 (podman.io)
356 points by d4a on Feb 22, 2022 | 148 comments



I kinda feel bad for Docker. The prevailing wisdom seems to be that you shouldn't be using Docker anymore - use Podman. Often the reasons given are that Podman is daemonless, supports rootless containers, etc. But I feel this is misplaced:

- The Docker daemon is actually really useful. It can run containers as other users, perform a bunch of mounting and networking stuff you can't do as rootless (or at least requires some hacky workarounds), and can monitor/maintain your containers. Podman can do lots of this as well, but requires the use of systemd. So in those cases you've just swapped one daemon for another.

- For most of my use cases, I don't really care about rootless. On servers I run all my docker containers as non-root anyway, and if I'm on a workstation I have privileges and don't need rootless.

- Although Podman claims compatibility with Docker, I've always found issues with trying to use it even up to a few months ago, prior to this release. Mostly when trying to use compose, although it does improve every release.

I basically just view the Docker daemon as an init system like systemd. It's a privileged daemon that runs other processes. But for some reason it seems to have been made into a bogeyman.


That usefulness of the Docker daemon has actually been acknowledged by the Podman project. That's the "socket" they're talking about in this blog.

If you run `sudo systemctl start podman.socket` on your Podman 3.0+ & systemd system, you get a Docker daemon compatible API listening on your localhost. You can use all your Docker API compatible tools - including vanilla Docker Compose - and it will just use Podman underneath. Compatibility with Docker features may not be 100% yet (I'm not aware of major issues, but I believe they're there), but it's getting to the point that most use cases for Docker's "legacy architecture" can be handled by Podman.

And if you don't need an API, it's off by default on Linux hosts, so you can just use Podman without that additional exposure until you opt in.
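
For reference, the rootful flow looks roughly like this (a minimal sketch; the socket path is the documented default and may vary by distro):

  sudo systemctl enable --now podman.socket
  export DOCKER_HOST=unix:///run/podman/podman.sock
  docker info             # vanilla Docker CLI, answered by Podman
  docker-compose up -d    # vanilla Docker Compose, same socket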


The socket doesn't give you the benefits of a long-running daemon though, namely: healthchecks, automatic restarts, and the ability to keep containers running across reboots. Podman can do those things, but it leverages systemd timers and units. Which isn't necessarily a bad thing, but I do like that Docker does this completely self-contained.
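
As a sketch of that systemd route (assuming an existing container named "web"; names are placeholders):

  podman generate systemd --new --name web --restart-policy=always \
      > ~/.config/systemd/user/container-web.service
  systemctl --user daemon-reload
  systemctl --user enable --now container-web.service
  loginctl enable-linger $USER   # keep user units running across logouts/reboots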


But how do you use docker to run a container on boot if not with systemd? One nice thing about podman is that when you stop a container you’re really stopping the container. With docker you’re just breaking the client server connection.


> But how do you use docker to run a container on boot if not with systemd?

On a Mac, I do `--restart=unless-stopped` and it starts automatically when the Docker daemon starts. Does it not work on Linux?


it works the same on Linux
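
For reference, the flag in question (a minimal example; nginx is just a stand-in image). The daemon restarts the container on boot unless you explicitly stopped it:

  docker run -d --name web --restart=unless-stopped nginx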


> Which isn't necessarily a bad thing, but I do like that Docker does this completely self-contained.

I don't see it as a bad thing, but definitely a weird one. With complex containers you often end up with your init running docker running s6 running the app. All with different configs, with different lifecycle management, different restart behaviour. It's tiring to deal with - I like the idea of moving as much of it to the top level init as possible.


Docker healthchecks don't actually do anything besides status reporting AFAIK.


Mainly I use it for starting services in some kind of order with compose files, but other than that I believe you're correct: it only does anything at startup. It doesn't seem to restart containers that become unhealthy.
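
For illustration, the compose-file pattern being described - startup order gated on container health, with nothing restarting a container that later goes unhealthy (service names and the pg_isready check are placeholders; compose-spec syntax):

  services:
    db:
      image: postgres
      environment:
        POSTGRES_PASSWORD: example
      healthcheck:
        test: ["CMD-SHELL", "pg_isready -U postgres"]
        interval: 5s
        retries: 10
    app:
      image: myapp   # hypothetical application image
      depends_on:
        db:
          condition: service_healthy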


Then... why not just use docker?


Because the difference is not just the socket.


I mentioned this in another comment here, but it's relevant here too.

Podman:

- does not mess with your iptables, unlike Docker. Because of this, Docker containers can bypass firewall rules set by ufw

- does not create bridged networks


Apologies for the possibly dumb question, I couldn't find this in the Podman docs. How does it do networking without bridges? Does it still use veth devices?


That's a valid question. I hadn't looked it up until you asked, either. Check this out[1]

> One of the guiding factors on networking for containers with Podman is going to be whether or not the container is run by a root user or not. This is because unprivileged users cannot create networking interfaces on the host. Therefore, with rootfull containers, the default networking mode is to use netavark. For rootless, the default network mode is slirp4netns. Because of the limited privileges, slirp4netns lacks some of the features of networking; for example, slirp4netns cannot give containers a routable IP address.

[1]: https://github.com/containers/podman/blob/main/docs/tutorial...


Thanks! It seems that netavark uses macvlan to avoid creating bridge devices. Although it can also use bridges, and Docker added macvlan support too: https://github.com/moby/libnetwork/blob/master/docs/macvlan.... That page also gives a lot of background on the different methods and their pros/cons.

I guess at this point the difference is only the defaults of each system.


It has a less clear future with the company behind it.


Podman got introduced in the typical Red Hat fashion... which basically boils down to "F... you, I am taking your toys away, here's how you are going to do it from now on. Oh, doesn't work? Well, F... you then."

I was a sysadmin managing primarily Red Hat servers when they yanked Docker away and replaced it with Podman, which was capable of handling absolutely zero of our use cases (they all relied on the docker daemon). It couldn't even build our simplest containers from Dockerfiles.


I use docker, too, and it's absolutely trivial to add the .repo file for the official docker repo to /etc/yum.repos.d/. "Yanked away" is pretty strong and unwarranted, IMO.


The issue with this is that it's no longer covered by Red Hat's support and lifecycle, which is why most people pay for RHEL anyway.


You can pay docker for business licences to get commercial support. At the level where you think about getting it, I honestly can't feel bad about the extra cost to the company.


Adding repos from third parties is not always allowed, due to political/security considerations. Besides, when you pay for Red Hat because it's supposedly better/more "stable", them breaking all your stuff by removing it is absolutely an issue.


Well, it was somewhat more nuanced than 'f... you, I'm taking your toys away'. Docker's attitude with regard to cooperation with Red Hat might explain it somewhat more. Some Docker folks even wore t-shirts about it, which helps explain why Red Hat did what they did.


AFAIK the main points of contention were:

* Systemd integration https://lwn.net/Articles/676938/

* CGroupsv2 - Docker was slow to adopt it, which dragged down the ecosystem because no distro wanted to unilaterally break Docker by enabling it - so a Docker-compatible replacement (i.e. Podman) became a blocker for a lot of container functionality in CGroupsv2 that RH wanted to enable for customers.

* Support for rootless / daemon-less containers (enabled by CGroupsv2)

* A lot of enterprise customers wanted support for purely local, private container registries; Docker wanted people to be integrated with Docker Hub and was reluctant

(disclaimer: I work for Red Hat, but not on anything container-related, and I was still in college when all that drama was happening, so I don't have much insider perspective here)

https://news.ycombinator.com/item?id=28430167


I wouldn't have been upset if they had introduced Podman alongside Docker as a choice and then swapped it in when it was ready, but they didn't. They pushed it hard as a replacement before it was anywhere close to production ready.

The way it was handled pretty much burned what remained of my trust in Red Hat to the ground. Most of Red Hat's selling point is that it's supposed to be stable and as dependable as a bag of hammers, except that's a lie.


But Docker was not ready and was causing problems. If it weren't, there would have been no point to Podman in the first place. Otherwise, it would have delayed development of RHEL for years (see the CGroupsv2 issue, for example).

I'm no Red Hat fan, but they do have a point here. They are not the baddy here.


> For most of my use cases, I don't really care about rootless.

See, and in almost all of my use-cases, I really do. I do HPC computing, which is almost always a multi-tenant environment, and that makes Docker really hard to use, security-wise. Unfortunately, more and more software/workflows are getting distributed as Docker containers (for very good reasons). This makes my life difficult. So if I can set things up so that users can more easily work with containers on my HPC compute nodes, I'd be very happy. If the de facto solution ends up being podman (and rootless), then I'm happy.

(Yes, I use Singularity too, but it would be nicer not to have to convert containers.)

Now, I do very little with networking and containers, so those types of requirements... I don't care as much about. But I know that is a big feature set for a lot of people here.


> See, and in almost all of my use-cases, I really do. I do HPC computing, which is almost always a multi-tenant environment.

Maybe you need firecracker with something along the lines of https://github.com/combust-labs/firebuild?


Docker is not an option on the HPC clusters I have been using so far. For me it is easier to just put a SIF file where all users can run it (or programs from it). There are issues in really closed environments (no sudo on any machine) when one has to jump through hoops to tweak => build => transfer the SIF, but this is a nuisance, not a problem.

BTW, are there any on-premises multi-user HPC clusters out there with Docker running on the nodes?


> BTW, are there any on-premises multi-user HPC clusters out there with Docker running on the nodes?

The vendor ClusterVision (EMEA, went bankrupt then returned to life) used to offer a cluster deployment option based on OpenStack+Docker; OpenStack for defining the cluster boundary on top of your hardware then Docker serving as the "image" on each compute node layered on top of the regular OS. Not at all a typical use-case for HPC containers, and with hindsight it is fairly obvious how that Byzantine stack resulted in too much complexity for them to support cost-effectively.


Yeah, fair enough. If you're in a situation where you have several unprivileged users sharing a VM or something, then it makes sense. I haven't come across that yet personally.


No lover of Docker, but I'd have much more confidence in the quality of the Docker daemon code than in any HPC batch system's local execs.


From my experience, I often find myself waiting for a hung Docker daemon to answer a query. I don't know much about how it works, but it feels monolithic and fragile: a single container seems to be able to hang the whole thing, and I can't even do a "docker ps" to see what's going on. Podman doesn't have any issues like that; it responds quickly at all times regardless of individual container status, in my experience.


One thing that is actually nicer is that podman can run Kubernetes pods (mostly) directly from the Kubernetes YAML description.

This avoids having to use Kubernetes in production and something different (like docker-compose) locally.

Being able to search in multiple container registries is very nice, and docker doesn't allow you to do that (to the best of my knowledge). It's very useful if you have a company container registry for example.
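
Roughly, assuming a running pod named "mypod" (names are placeholders):

  podman generate kube mypod > mypod.yaml   # export a running pod as Kubernetes YAML
  podman play kube mypod.yaml               # recreate the pod from that YAML
  podman search httpd                       # queries every registry in registries.conf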


Yeah, I do agree about the pods. IMO pods are a better primitive than a single container, so it's cool that Podman supports them.


Why not just use Minikube or microk8s in a VM with multipass?


> Why not just ...

Podman comes with the ability to directly run Kubernetes YAML files, so the aforementioned non-standard addon things become unnecessary.

Ever tried getting a team of 100+ engineers to agree between minikube and docker-compose? It's not fun and produces nothing interesting. Thankfully, with Podman such discussions are no longer needed.

Docker should've implemented this about 4-5 years ago, if they wanted to make their product The Best. Instead they became hostile to cool new product ideas, stalled on the ease-of-use front, and went all-in on Docker Desktop, which is just another non-standard kludge.


To be fair, they were fighting for Docker Swarm, which ended up losing, so it's not surprising that they're on the back foot.


Currently still using Docker Swarm. I wish it was more popular, because it's oftentimes the logical next step when moving from running Docker containers manually or Docker Compose locally. It is pretty simple to use (especially with Portainer), not as resource-intensive as most Kubernetes distros, and the Compose format now matches what you'd run locally, with extra options, e.g. on which host to run things (as sketched below), vs the spammy format of Kubernetes manifests or even Helm charts.
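
As a sketch, the same compose file just grows a swarm-only deploy: section (the image and label names are hypothetical):

  services:
    web:
      image: myapp
      deploy:
        replicas: 2
        placement:
          constraints: ["node.labels.tier == frontend"]
  # rolled out with: docker stack deploy -c docker-compose.yml mystack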

I actually wrote more about it on my blog: https://blog.kronis.dev/articles/docker-swarm-over-kubernete...

Though it's essentially inevitable that Swarm will be abandoned some day, so being able to migrate over to either HashiCorp Nomad or some Kubernetes distro is probably paramount, which is where tools like Kompose come in when that's finally necessary. The problem of what cluster to run on still remains for all of the folks who are stuck with on-prem deployments, since most orgs can't pay for a team to manage a cluster or for bunches of hardware resources to run the full Kubernetes.

In that regard, the best options that I've found so far are either K3s with Portainer, or K3s with Rancher (RKE and probably RKE2 will still be somewhat heavyweight).


Agreed. I never really got hooked on Docker itself, but Swarm is well-scoped and easy to use without sacrificing too much; I'd argue the vast majority of k8s users would be better off with something simpler, and Swarm fits the bill nicely. Maybe Nomad does too, but IMO Hashicorp do a really bad job of documenting the progression from a localhost test to a production setup; the docs for each of their tools can be summarised as "draw the rest of the fucking owl".


I don't want core technology that I use on my machine to be at the mercy of a private company. Container technology and the most popular way to use it _should_ be open source.


Docker is open source though... How is this different to Red Hat and Podman?


I think the Podman-good-Docker-bad thing is/was amplified in certain corners, but by and large the number of users running Podman seems, at least anecdotally, to be a small fraction of those running Docker - whether on servers where K8s/other orchestrators aren't involved, or via Docker Desktop for Mac/PC.


> I basically just view the Docker daemon as an init system like systemd.

Interestingly enough, systemd ships a tool called systemd-nspawn that can be used to run application containers.


Is there an eli5 or similar on the differences between podman / docker / rancher / others?

Feels unexplained (at least to me): how do I know which to choose when, and what are the trade-offs?

I recently went deeper on Rancher Desktop and its use of containerd vs dockerd backends, and it is totally not obvious what you lose if not using dockerd and how you might fill the gaps. Because Docker established the space and put a bunch of related components together (CLI, image building, image storing and container running), and other projects make different choices, it can be quite confusing to compare them.


Podman is for running containers on a single host. It's exactly like docker, with a few additional features (like the ability to run rootless containers and to run containers in something like a Kubernetes pod). Podman was developed by Red Hat to replace docker the CLI, because Docker the company didn't play very nice with the open source community.

Rancher is a Kubernetes implementation that can run containers across multiple hosts. Rancher is more comparable to something like Red Hat OpenShift or Hashicorp Nomad than docker.


> Podman was developed by Red Hat

Wow they should advertise that more. I had no idea and I'm pretty deep in the Red Hat ecosystem.


Why should they? I work for Red Hat and help out with and contribute to Podman, but they like to remain as 'independent' as possible. We collaborate, but the preference is that Podman is an upstream/community project first.


Pretty sure they will once it's a bit more stable. They have presented at most of the KubeCons for several years, but it's always with the caveat that they will potentially break stuff.


I think OP meant Rancher Desktop, which is meant to be a replacement of sorts for Docker Desktop (it does include a Kubernetes implementation though)


Does Kubernetes usage also require podman or docker?


Kubernetes needs a container runtime, of which docker is a supported type (but deprecated and replaced with containerd for most users): https://kubernetes.io/docs/setup/production-environment/cont...


Kubernetes requires a tool which implements the Container Runtime Interface (CRI), a standardized API for starting & managing containers. This is from 2015-2016[1]. The CRI interface is defined by & owned by Kubernetes[2], and there are several implementations: containerd, CRI-O, the dockershim, and likely more.

For a while Kubernetes has included something called the "dockershim", its own implementation of the CRI interface that, under the hood, calls Docker or Podman, so Kubernetes "pods" run in Docker/Podman. There are also tools like Kind[3] ("Kubernetes in Docker") that go further - not just hosting Kubernetes worker containers in Docker, but hosting the main Kubernetes daemons in Docker as well.

Kubernetes deprecated the dockershim, formally in December 2020, but is only throwing the switch now in the upcoming 1.24, expected mid-April[4]. A company, Mirantis, has pledged to take over support of the dockershim[5], and is calling the new effort "cri-dockerd"[6]. This should allow Kubernetes workers to continue to run via Docker or Podman.

Kind is unaffected, since it runs the main Kubernetes controllers in Docker, which then launch their own containerd (one of the main CRI implementations) inside that Docker container, nested, so no dockershim/cri-dockerd is needed.

Worth re-noting that Podman includes tools to try to run Kubernetes pods directly, without running the rest of Kubernetes.

[1] https://kubernetes.io/blog/2016/12/container-runtime-interfa...

[2] https://github.com/kubernetes/cri-api

[3] https://kind.sigs.k8s.io/

[4] https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-o...

[5] https://www.mirantis.com/blog/mirantis-to-take-over-support-...

[6] https://github.com/Mirantis/cri-dockerd


Most modern Kubernetes implementations use either containerd or cri-o.


Kubernetes can support multiple solutions if I am not mistaken.

I know rancher requires Docker and is not compatible with Podman as of today.

And I believe Red Hat uses Podman in OpenShift, their implementation of Kubernetes.


Podman is only used for bootstrapping the cluster with the OpenShift installer. The cluster itself uses cri-o.


OpenShift uses cri-o as the runtime.


Kubernetes can use either


My understanding was that, unlike Docker, it doesn't need root or a daemon.

I always thought it seemed wrong that Docker needed either of those to start with.

In most other respects I think it's a drop-in replacement.


What you said is true. Other than those, Podman:

- does not mess with your iptables, unlike Docker. Because of this, Docker containers can bypass firewall rules set by ufw

- does not create bridged networks


If you use K8s or some other production container runtime, those are enormous benefits.


From a security perspective, a daemon is a nightmare. It makes so much more sense to inherit the permissions from the process it’s launched from, and as such I’m a big proponent of podman (or rkt in a distant past).


There is more to it.

If you are running Kubernetes, there are k8s daemons already (possibly CRI-O or containerd); otherwise the pod is started by systemd, which is another daemon.

A pod started from the command line using podman is not a production pattern.


Correct. We are not allowed any command line access to worker nodes or containers. If it isn't deployed using automation, it isn't deployed.


It depends on what you’re looking for. Most people use containerd as their backend to Kubernetes. Docker itself uses it. Podman uses a different engine. That may matter to some folks.

There are also a bunch of tools that talk directly to the docker socket.

There are a lot of varying use cases here.

Disclaimer, I started Rancher Desktop.


I think the most surprising thing coming from Docker was that containerd can't build images, just run them - thus nerdctl and other userspace ways of building images. So many options.

Totally makes sense once you understand more, but initially it is surprising.


Podman can also optionally be run as a system daemon that provides a docker socket, but it still has (or used to have) quite a few compatibility issues.


It also has a couple of other handy features:

1. It can form up k8s-style pods for deploying tightly-coupled containers.

2. It can understand k8s YAML for deployment (although obviously it doesn't support everything k8s does), which makes it easier to do deployment config that will work locally with podman and then in a k8s deployment.


Docker can do 1. as well, either via compose (network_mode: "service:<name>") or the CLI (--net=container:<name>).
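
E.g. on the CLI (image names hypothetical):

  docker run -d --name app myapp
  docker run -d --net=container:app sidecar   # joins app's network namespace, pod-style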


You can also run static k8s pods with docker by just running the k8s kubelet, which will watch /etc/kubernetes/manifests.


Docker the CLI tool is made of a bunch of open-source components that Docker the company put under an open-source organization called Moby: https://mobyproject.org/projects/

Poke around the different Moby components like runc, containerd, buildkit, etc. and you'll start to see how they all form part of docker the CLI tool. For example, runc runs a container image, and containerd is a service that can run and manage multiple containers at once (each using runc to run). BuildKit is a tool for building container images. All of these tools assume you're on a Linux host, but there are tools like linuxkit and hyperkit to make Linux VMs for Mac and Windows hosts that can then run all the other tools.

Tools like podman, rancher desktop, docker, etc. all more or less use those components and compose them in different ways or with different tweaks to support their view of a container workflow.


A big part of it is the companies behind them. Docker is the original, from the Docker company. Podman is an open-source clone pushed by RH that has the advantage of not needing root access, but some apps that expect genuine Docker fail. Rancher is a simple Kubernetes implementation. LXC is for a similar but different world.


I really like Podman, and I'm happy to see this new release. This little nugget about non-released features makes me most excited though:

> More features, including support for volume mounts from the host, are planned for Podman v4.1, so stay tuned for more updates.

IIUC host volume mounts for podman-machine are the last major missing feature in Podman that would allow something like Podman Desktop Companion [1] to replace Docker Desktop on Windows/Mac for most use cases - if it works well. It'd be great to replace that nagging-for-updates and nagging-for-subscriptions with a completely open-source tool. (I'm sympathetic to the Docker team's need for a revenue source, but I'm really happy OCI containers on Mac and Windows can be made accessible without vendor lock-in.)

[1]: https://iongion.github.io/podman-desktop-companion/


Eh, not the only thing. Podman is definitely well on its way, but my team builds tooling that builds and orchestrates our development environments. When we switched from Docker to Podman, we immediately noticed how slow our container-based build stage had become. Podman lacked a caching layer that Docker was orchestrating automatically, as well as parallel builds (without BuildKit, and BuildKit isn't super simple to integrate). Once we integrated BuildKit our build times became very similar, but this was only possible because we had tooling to orchestrate BuildKit on top of Podman.


As an aside, I recently tried to dabble with some BuildKit APIs, and man, that was one of the lowest-quality Go codebases I've encountered. Probably the lowest-hanging fruit would be documenting the APIs. Podman (and IIRC Buildah) by comparison were clean and well documented, reasonable APIs were exposed, etc.



Whoa, thank you for sharing this. I was looking for a Docker Desktop alternative for Podman and found Cockpit but it's not the same.

This looks awesome!


I'm still on version 3, but Podman is easier to use than ever. Kinda random: it even supports `podman build` with `-v`, so you can volume-mount during a build. I didn't even think about it (I have `alias docker=podman` on my machine) until I pushed `docker build -v $(pwd):/x` to GitHub Actions and Docker failed on it.

And apparently Podman has supported this for years while Docker hasn't :shrugs:.

https://github.com/moby/moby/issues/14080


Docker supports some volume mounts while building; it's part of the BuildKit backend and is used for mounting secrets during builds. It is very poorly documented and hidden behind BuildKit and the buildx command. Check out the RUN --mount= section here: https://github.com/moby/buildkit/blob/master/frontend/docker...
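
A condensed sketch of the secret-mount feature (the secret id and paths are placeholders):

  # syntax=docker/dockerfile:1
  FROM alpine
  # the secret is mounted only for this RUN step and never lands in an image layer
  RUN --mount=type=secret,id=npmrc cat /run/secrets/npmrc

Built with BuildKit enabled:

  DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc .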


It really is poorly documented. If you don't know it exists, it's difficult to find the document you've linked. I do know it exists and I still have a hard time finding it; it's barely mentioned (or not mentioned) in the regular Docker documentation, which is where most people will be looking.

Buildkit is not exclusive to docker buildx though. You can use this with regular docker build as long as you've set the DOCKER_BUILDKIT environment variable, as noted in the document. You can also forward this to docker-compose, though there's another COMPOSE_DOCKER_CLI_BUILD variable you must set for that.

That said, it looks like this extra frontend syntax is Docker-only, which means you shouldn't use it unless you're committed to the Docker tooling and ecosystem.


Yeah, it's a royal mess right now, unfortunately. It kind of seems like most of the work on Docker's developer experience stopped in the last few years as the company went through turmoil and ownership changes. Hopefully some day they sit down and really clarify what the 'golden path' is for a developer using Docker today. BuildKit is really cool and can do a lot of nifty things once all these new features are enabled.


Wow I just struggled yesterday without that feature. Can’t wait to switch to podman.


How did you get podman instead of docker installed in the first place?


On RHEL 8 docker didn't support cgroups v2 so the OOTB `docker` package installed podman and aliased it to docker. IIRC Fedora 30 and 31 did too? I don't remember exactly.

You can run docker pretty easily on RHEL 8 but you have to downgrade cgroups in order to do so. Plenty of guides out there now.


Current Docker supports cgroupsv2 as well.


Yes current docker does, but it didn't at the time RHEL 8 came out. The question I was responding to was "How did you get podman instead of docker installed in the first place?" and GP mentioned `dnf`, which is a RHEL/Fedora thing, so I think it's a pretty reasonable theory.


`dnf install podman` probably? I can't remember.


alternatively `sudo dnf module install -y container-tools` https://podman.io/getting-started/installation


Podman has been one of my favorite open-source projects of recent years. I have been using it since 1.x and it keeps getting better and better. It is one of the tools that makes me not totally hate containers, and I use it all over the place. :)

At my last job we used Fedora CoreOS with Podman + systemd (I did a talk about it at the Fedora Contributor Conference [1]), and I released a self-hosted installer that ships a Ruby application inside of a Podman pod. You can check out the systemd units here [2]. Using systemd gets you all the dependency management, so your app's services start in the right order, which is pretty great!

One of the cool things that I love about Podman is running everything inside a Podman pod. You get your own network namespace, so you can launch a bunch of services on the same host without cluttering up your host's localhost. Here is a script I use to run Owncast on a Fedora server [3]. I also have a script I gave my old coworker to launch all of his app's dependencies inside a Podman pod [4].
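
As a sketch of that pod pattern (names and images are arbitrary):

  podman pod create --name devstack -p 8080:80   # ports are published on the pod
  podman run -d --pod devstack -e POSTGRES_PASSWORD=dev docker.io/library/postgres
  podman run -d --pod devstack docker.io/library/nginx
  # containers in the pod share one network namespace and talk over localhost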

If you are thinking about giving Podman a shot, check out the links and hopefully that can help you get started with or without systemd.

[1]: https://www.youtube.com/watch?v=9qMSHaHGnoY

[2]: https://github.com/forem/selfhost/blob/main/playbooks/templa...

[3]: https://gist.github.com/jdoss/ad87375b776178e9031685b71dbe37...

[4]: https://gist.github.com/jdoss/25f9dac0a616e524f8794a89b7989e...


Sincere question: is “cluttering up localhost” actually a thing?


It might not be. It may be a workflow thing and a poor choice of words on my end.

I have a few apps running on my workstation in Podman pods that have similar service deps (PostgreSQL, MQTT, etc.). I don't have to worry about making sure my apps are pointed at some random ports for their services, and I don't have to deal with changing the ports per service in my application stack. I can just launch them in a pod, get some nice isolation in the pod's network namespace, and use the defaults. I think it is a nice pattern for local development. I hope that clears things up.


Ah, got it. I get that. With Docker I just put things in their own Docker network and don't think about it, and use the built-in DNS. Docker's default network doesn't have DNS for "backwards compatibility reasons", so you have to create one that has it (this is what Docker Compose does). You can also share network interfaces in Docker (like a k8s pod), which is what I think you're referring to (and maybe where _pod_man gets its name).


I was wondering the same thing but I did read another article today that talked about all the random port numbers becoming hard to reason about. In that post the author talked about using the .test TLD and either the hosts file or an internal DNS server once you grow to that scale.


If you want to learn about the history of the Podman project, you might enjoy my interview with Red Hat's container team lead and Podman's architect. https://kubernetespodcast.com/episode/164-podman/


I first learned about and was immediately sold on Podman in 2018 when Dan Walsh gave a talk at NYLUG.

I think this is the same talk: https://www.youtube.com/watch?v=riZ5YPWufsY. Called "replacing docker with podman".


Thanks, I'll check it out. I've been wanting a replacement to fill the `rkt` hole in my heart.


Related:

Podman in Linux - https://news.ycombinator.com/item?id=28687229 - Sept 2021 (89 comments)

How to Replace Docker with Podman on a Mac - https://news.ycombinator.com/item?id=28462495 - Sept 2021 (85 comments)

Podman, the open source Docker alternative ported to M1 (Apple Silicon) machines - https://news.ycombinator.com/item?id=28429650 - Sept 2021 (147 comments)

Migrating from Docker to Podman - https://news.ycombinator.com/item?id=28413470 - Sept 2021 (107 comments)

Podman: A tool for managing OCI containers and pods - https://news.ycombinator.com/item?id=28376686 - Sept 2021 (184 comments)

Podman: A Daemonless Container Engine - https://news.ycombinator.com/item?id=26101608 - Feb 2021 (241 comments)

Transitioning from Docker to Podman - https://news.ycombinator.com/item?id=25165195 - Nov 2020 (268 comments)

Podman and Buildah for Docker Users - https://news.ycombinator.com/item?id=21556894 - Nov 2019 (73 comments)

Dockerless, part 3: Moving development environment to containers with Podman - https://news.ycombinator.com/item?id=20503061 - July 2019 (40 comments)

Podman and Buildah available in RHEL 7.6 and RHEL 8 Beta - https://news.ycombinator.com/item?id=19005426 - Jan 2019 (50 comments)


I like this Podman feature: Support for socket activation

Podman will pass the socket-activated socket on to the container.

I wrote a small example demo for setting up socket activation with systemd, Podman, and a MariaDB container:

https://github.com/eriksjolund/mariadb-podman-socket-activat...
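
Very roughly, the shape of the units in that demo (heavily condensed; see the repo for the real, complete versions):

  # mariadb.socket
  [Socket]
  ListenStream=3306

  [Install]
  WantedBy=sockets.target

  # mariadb.service - Podman hands the inherited listening socket to the container
  [Service]
  ExecStart=/usr/bin/podman run --rm --name mariadb docker.io/library/mariadb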


Overall, the integration story between Podman and systemd is way, way better than with Docker. And that's in both directions: using the host's systemd to start and manage containers, but also running systemd within Podman containers.


I would definitely expect that, given the history of systemd and Docker. IIRC the difficulty the Red Hat team had with integrating systemd and Docker was one of the reasons why they started Podman rather than continuing to push everything upstream to Docker. The Docker team didn't want Docker to have to accommodate systemd and refused some contributions that were important for a good integration.*

*There's still some bad blood out there about it, so I just wanted to make explicit that I'm not making a value judgment on docker's refusal. I'm not educated enough on the details to make a fair judgment. Sometimes you have to say "no" to features to protect your product.


One of the folks that wrote THE manual on how to get systemd to work with Docker works at Red Hat on podman, so that's not surprising at all.


I found getting started with Podman on Windows is utterly confusing:

1) Podman has two documentation entry points (https://docs.podman.io and https://podman.io/getting-started/) because... ? It is confusing to the user at first. It makes more sense to have everything in one place.

2) Podman does not _actually_ offer binary releases (but claims to do so here: https://podman.io/getting-started/installation#windows and https://github.com/containers/podman/blob/main/docs/tutorial...) because... ? They want me to compile it myself?

3) The Windows installation tutorial links to another article (https://www.redhat.com/sysadmin/podman-windows-wsl2) that is, by today's standards, _very_ old, because... ? I cannot imagine that things have not changed since then, I refuse to believe it :D

From what I understand, Podman will become an _actually_ easy-to-use and viable alternative to Docker Desktop once Podman 4.1 has been released and we have host volume mount support, which is a must-have feature for development.


This is a great release, very exciting! When I worked for Red Hat as an OpenShift consultant, it was common to have developers super excited to use podman, but because the macOS story was poor they had to revert to Docker. This removes a huge blocker for a lot of people!


New Rust-based network stack. Support for Windows and macOS. Supports a WSL2 backend. Looks nice! Hats off to the Podman team!


I am not sure what Netavark is about: are they deprecating the CNI convention and rolling a new one, or is it some kind of new layer of abstraction?

What if I wanted to keep using Cilium with Podman in the future, or to have a chain of multiple CNIs?


I'm probably just missing some fundamental knowledge, but why does podman need its own network stack? Doesn't Linux include functionality for things like virtual networks, bridges, virtual interfaces and similar?


When they say "network stack" they don't mean their own implementations of TCP and UDP like you would for an OS (like linux). They mean the various pieces that the container runtime has to implement. See CNI[1] for more information.

[1]: https://www.cni.dev/


So it's the "compiler" for a CNI spec into a configuration for a linux network namespace?


Yeah, I think that's a reasonable way to think of it. When a container (or pod) gets created, it needs a handful of network things set up: creating the virtual network interfaces, setting up route tables and nftables/iptables rules, assigning an IP address, setting up NAT, configuring container-to-container networking, etc.


Thanks, that makes sense. I'd prefer it'd be called something else than a "network stack", but that's life.


Out-of-the-box Linux doesn't handle container networking in a default fashion, because there are many common configurations and techniques available to obtain different effects and results depending on the exact nature of each network being attached or created.


Great news. I hope it will now work with docker compose. Missing compose support was the only reason I had to ditch it in favor of Colima. But will try it out again now.


docker-compose was really the paradigm shift I needed when it came to containers, and now I'm wholeheartedly on board with containerization. No reason something like the compose syntax (or similar) couldn't be a common, shared front-end to all container solutions, but until that point I'm going to be sticking with Docker, purely for certainty that docker-compose functions as expected.


> No reason something like the compose syntax (or similar) couldn't be a common, shared front-end to all container solutions

Hopefully it will, since the docker-compose spec is being formalized: https://github.com/compose-spec/compose-spec


Yes, but it's not the final solution. For example, docker and docker-compose don't support composition. There's a market opportunity for something better. I thought it might be Nix, but that seems to have not caught on.


The spec finally supports the extends keyword, to build upon a service defined in another file. But maybe this isn't what you meant by composition?
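
E.g. (file and service names hypothetical):

  services:
    web:
      extends:
        file: common.yml
        service: base-web
      ports:
        - "8080:80"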


I use it with docker-compose now and I'm running podman 3.4.4. I think you just need to install `podman-docker` on Fedora to get the docker socket. See: https://www.redhat.com/sysadmin/podman-docker-compose.
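
The rootless flow there looks roughly like this (per that article; the socket path follows the systemd user-session convention):

  sudo dnf install podman-docker   # optional docker-compatible CLI shim
  systemctl --user enable --now podman.socket
  export DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock
  docker-compose up -d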


`docker-compose` is different from `docker compose` (with a space in between words).

The former will be replaced by the latter in the long run.

https://github.com/docker/compose-cli/issues/901#issuecommen...


Oh! I thought it was just a typo. That is very unfortunate naming.


There is a shim so that they can be the same. The new docker compose (with space) is compatible with the old one.


I tried switching to Podman for my Home Assistant install. One day it just stopped working - I forget the exact error; I believe it was networking related. Unfortunately, I couldn't find any reason why it broke or resources on how to fix it, so I just went back to Docker. I really want to support Podman, but this was a discouraging experience.


Anyone know if this can be easily used in place of Docker Desktop for Mac?


Rancher Desktop[0] is one new kid on the block.

Any reason why you want to change?

https://rancherdesktop.io/


I spent about an hour with Rancher Desktop when I was really pissed off about the Docker Desktop licensing change. There were a few things that out of the box were a problem for me (note: really a problem with nerdctl/containerd):

1. nerdctl did not support registry mirrors for image pulls. This is an obvious blocker for some use cases.

2. with Docker Desktop you can bind container ports to any interface on the host system, including e.g. ip aliases on localhost. It didn't seem to be possible with nerdctl using whatever VM backend Rancher is using.

2a. I'm not sure how this works today, but with Docker Desktop a `docker pull` can interact with a registry available on the host's localhost address (i.e. through an SSH tunnel established on the host). This worked with Rancher, but I believe I had to edit /etc/hosts inside the Rancher-controlled VM to point at a different IP address, whereas with Docker Desktop it just worked.

I also seem to recall needing to manually start things with Rancher, i.e. just having the app open was not enough for docker/kubectl/nerdctl to be ready to go. But I don't remember at this point.

These are all (I hope) uncommon and weird use cases, but they are the sort of thing that will keep some people using Docker Desktop instead of an alternative. They are, for better or worse, the value add of Docker Desktop.


Personally I'd prefer to use open-source tools if possible.

For Podman vs Rancher, what are pros and cons? If you only use Docker from the command line is there any reason to use Rancher?


If only podman, not really. Options also include Lima and Colima for a more native feel.

Seems version 4.0 is a nice release.


I want to say it's almost there with 4.0. It seems that they've re-implemented most features.

For me, this is still the roadblock:

> More features, including support for volume mounts from the host, are planned for Podman v4.1, so stay tuned for more updates.

I do not run many self-contained containers and I usually end up mounting a local folder in the container, so I'll have to wait a little bit longer to fully transition to Podman.

Also, it doesn't have a native interface like Docker Desktop; it works mostly through the terminal. Last time I checked, there are some external tools that add a Docker Desktop-like app on top of Podman.


Volume mounts are different from mounting a directory into the container. The latter has been well supported for a while.

You can look up what a podman volume is to see the difference.


For anyone using on macOS, is it better than Docker for projects with thousands of files being watched for changes?


I've used it for the past 6 months and haven't noticed any issues. I don't miss Docker Desktop at all.


Curious about this, too. Previous versions had issues with volume mapping to local filesystems.


How is it compared to nerdctl[0] (another docker compatible cli)?

[0]: https://github.com/containerd/nerdctl


Podman uses runc as the default container runtime. Nerdctl uses containerd. There are probably other differences but that's a substantial one imo.


This depends on the platform you're using. On Fedora and RHEL 9, the default runtime is crun. On RHEL 7 and RHEL 8, the default is runc. In any case they can be swapped as needed.

https://github.com/containers/crun
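
E.g., per invocation or system-wide (a sketch; binary paths vary by distro):

  podman --runtime /usr/bin/crun run --rm alpine true   # one-off override
  podman info --format '{{.Host.OCIRuntime.Name}}'      # show the active runtime

Or persistently in containers.conf:

  [engine]
  runtime = "crun"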


I think that's mixing apples with oranges really, as containerd uses runc as well.

nerdctl is a client app which speaks to containerd, which is a long running daemon.

podman uses a different architecture (no long running daemon)

Both projects use runc to actually launch the containers.


Podman has been a longer-lived project. It's supported mostly by Red Hat, which may be a plus or a minus in your view. nerdctl re-uses more of Moby's (formerly Docker CE) underlying tech. Podman has some nice convenience features to generate kube YAML or systemd units.


nerdctl is just a new client to target containerd.

Containerd has been around a long time, it was originally part of Docker itself, but was separated out and is now a standalone project.

Docker uses containerd as part of its standard deployment.

Containerd is a long-running daemon (similar to dockerd), so that's where the difference lies: podman (in its default setup) doesn't have a long-running daemon.

The closest analogy to containerd in RH land is CRI-O.


Does podman now support nftables, and does it no longer create legacy iptables rules?


It's worth noting that existing CNI plugins use iptables. If something uses those, it'll end up using iptables. Getting those updated or replaced would be part of transitioning to something newer.

Disclaimer, I started Rancher Desktop


This release adds Netavark for container network configuration (in addition to the existing CNI stack support). Netavark says "Support for iptables and firewalld at present, with support for nftables planned in a future release".

So not yet, but planned.


The thing I don't like about podman and buildah is the names... they make sense as contractions, but they also sound like joke names. Hard to sell to management why we should switch from a trusted name like Docker to something that sounds like a misspelling of "builder" and a superhero whose only power is deploying Kubernetes clusters.


It's funny how Podman's API is so similar to Docker's (well, I guess it's more about the OCI specs) that you can just `alias docker='podman'` and completely forget about it.


I tried to migrate from Docker to Podman 3.4 recently, but just gave up. It still has weird quirks: containers were exiting without any clear cause, fighting with SELinux... I could not get portainer.io working... the docs are somewhat incomplete... and lack of support in Debian [1] means I can't easily migrate my SBCs either.

I have Podman on my radar and will give it a shot next year; so far it still feels quite bleeding-edge. But kudos to the red hatters for another major release!

[1] Fully knowing there is v3.0.0 in the repos for Stable - but switching to Testing just for podman?


I also tried running a service with podman 3.4.2 on Ubuntu recently, and I agree that it had a lot of rough edges, e.g. ports that fail to publish when restarting a service, and podman ps losing track of containers if you ran podman as User= in systemd. There are lots of command incantations that you have to learn when using rootless podman which are not clearly documented (loginctl enable-linger, sudo --user USER XDG_RUNTIME_DIR=/run/user/$(id -u USER) systemctl --user restart, usermod --add-subuids, podman unshare chown). Hopefully it becomes clearer over time.
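
Collected in one place, those incantations look something like this (USER, myapp.service and the ID ranges are placeholders):

  sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 USER
  sudo loginctl enable-linger USER   # let USER's units run without an active login
  sudo --user USER XDG_RUNTIME_DIR=/run/user/$(id -u USER) \
      systemctl --user restart myapp.service
  podman unshare chown -R 1000:1000 ./data   # chown through the userns mapping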


How's Podman's Apple Silicon (M1) support these days? Last time I checked - maybe 3 months ago - it was pretty rough.


Is it still a PITA to make RHEL containers with Podman?


I'm not convinced Podman will be very popular going forward.

There is multipass, which can be installed on my M1 Mac with:

  brew install multipass

I can create a minikube instance, or I can create an Ubuntu instance and install microk8s.


Podman is Docker done right.


podman is awesome, looks like a good set of changes and improvements.



