Podman: A tool for managing OCI containers and pods (github.com/containers)
421 points by mohanmcgeek on Sept 2, 2021 | 184 comments



From a previous discussion:

One very interesting piece of tech coming from podman is toolbox (https://github.com/containers/toolbox). Basically throwaway (or keep-around) rootless containers with their own root directory but a shared HOME. Install hundreds of dev dependencies to build this one piece of software? Yeah, not gonna install those packages permanently. Spin up a toolbox, build it, install it in my ~/.local.

You have root in the container without having root in the host system. That takes care of a lot of issues as well.

I basically no longer have development packages installed and run some applications with lots of dependencies out of toolboxes.
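
For anyone who hasn't tried it, the workflow is roughly this (a sketch; the container name and packages are placeholders):

    toolbox create builddeps        # disposable container, shares $HOME
    toolbox enter builddeps
    sudo dnf install -y gcc make    # root inside, no root on the host
    make && make install PREFIX=$HOME/.local
    exit
    toolbox rm -f builddeps         # throw the whole thing away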


I've been using rootless podman containers for several years, but I've stopped using them recently. I read this security note on the Arch wiki [1]:

> Warning: Rootless Podman relies on the unprivileged user namespace usage (CONFIG_USER_NS_UNPRIVILEGED) which has some serious security implications, see Security#Sandboxing [2] applications for details.

Accordingly, I've since set sysctl `kernel.unprivileged_userns_clone` to `0` on my system which disables rootless containers, but is now supposedly safer.
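
For reference, that looks like this (on Arch's and Debian's patched kernels only; as noted downthread, this knob doesn't exist upstream):

    sudo sysctl -w kernel.unprivileged_userns_clone=0
    # persist across reboots:
    echo 'kernel.unprivileged_userns_clone = 0' | sudo tee /etc/sysctl.d/99-userns.conf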

The dialog on this is a bit controversial: it seems rootless podman is paradoxically both more secure and less secure at the same time. Rootless podman doesn't have root access, but with user namespaces turned on, podman has access to kernel APIs that have not been rigorously tested for non-root users. Someone explained this on Security SO a bit more in depth [3]:

> The reason for this is that much of the kernel that is only intended to be reachable by UID 0 is not audited particularly well, given that the code is typically considered to be trusted. That is, a bug that requires a UID of 0 is rarely considered a serious bug. Unfortunately, unprivileged user namespaces make it possible for unprivileged users to access this very same code and exploit security bugs.

[1] https://wiki.archlinux.org/title/Podman#Rootless_Podman [2] https://wiki.archlinux.org/title/Security#Sandboxing_applica... [3] https://security.stackexchange.com/a/209533


Note: every mainstream OS since RHEL 8 has defaulted to user namespaces being turned on. Debian, Fedora, Ubuntu, RHEL, CentOS ... all have this defaulted on. While the statements above are broadly true, there have not been many recent vulnerabilities forcing distributions to reconsider this setting. Podman is taking advantage of something that is turned on in most of the Linux systems running in the world, so saying this is not well tested or is novel is a big exaggeration. User namespaces have been available for almost 10 years and enabled for rootless users for at least 5 years in mainline distributions.


Shouldn't the best security come from using Rootless podman, but configuring SE Linux to prevent all other binaries from using unprivileged namespacing?

The concern isn't that podman itself might use that extra attack surface (because you'd give podman far more rights by making it setuid root), but that other untrusted binaries (like a virus) might use unprivileged namespaces to exploit the kernel.

TBH, by the time you are worrying about privilege escalation from userspace threats on a typical single-user linux machine, you have already lost.

A smart threat will just modify your $PATH or bash aliases to replace su and sudo with wrappers that execute their own commands the next time you use su/sudo to do something legitimate.
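
Something as dumb as this (a hypothetical sketch) is enough:

    # dropped into the victim's ~/.bashrc by the payload
    alias sudo='$HOME/.local/.x/sudo-wrapper'   # wrapper captures the password, then calls the real sudo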


I’m not too knowledgeable about SELinux, but Podman itself can create unsafe (privileged, etc.) containers, so I don’t think naively restricting user namespaces to Podman is going to help, exploits could just create the namespaces through Podman instead of doing it themselves.

BTW, if I remember correctly, Podman isn't setuid root except for some very small helper binaries; almost all of the process of creating the container can be done without root privileges.


On a workstation, I don't see much of a problem, considering that most people use an account where just typing sudo (or of course spawning a process that invokes sudo) makes you root without a password (because we are all annoyed at entering it; but even if we do require one, it's trivial to intercept).

If it's a system you use interactively, you have other problems. For a server, yes, unprivileged containers are a very bad idea, no doubt.


I have found that the processes I put in motion on my workstation tend to creep into production...


I'm not sure what the tradeoff is. You can give a user access to rootless containers or you can give them root access to run containers. User namespaces are at least attempting to improve on the Docker situation where access to the Docker daemon is effectively equivalent to root access, unless that has changed recently.

Does the exposed kernel surface actually extend to the processes running in the contained environment?


You could also give an unprivileged user access to run VMs, that has its own set of security tradeoffs, but VM technology is generally considered more isolated and mature.

Unprivileged user namespaces give an unprivileged user the opportunity to use syscalls like chroot, mount, etc. which they historically couldn't use. Container managers like Docker and Podman drop those "capabilities" after creating the container, so an application running inside a container isn't that unsafe (probably, at least everybody is doing it nowadays for production servers). I think a more problematic scenario is that someone who gets hold of an unprivileged user can then create an user namespace without dropping those capabilities and then use a local root exploit based on those syscalls you normally weren't able to use.
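
You can see this directly with util-linux's unshare (a quick sketch):

    $ unshare --user --map-root-user --mount
    # id -u
    0
    # mount -t tmpfs none /mnt    # syscalls that used to need real root now succeed

None of that grants real root, but it exposes the mount/chroot/etc. kernel code paths to an unprivileged caller, which is exactly the attack surface in question.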


I think the problem might be in the interplay with how many applications/daemons already try to drop privileges by running as a user after doing the few things they need those privileges for (like binding to a low port); if you don't even need those privileges, you don't even need to start them as root, and your container has well-known security properties.

If you're running rootless and using namespaces to allow non-root users access to previously root-only kernel APIs, then a bunch of prior assumptions may no longer hold, and there's a new attack space available to target that has always existed, but was of no use previously to exploit.


This is the concern that kept Debian from enabling user namespace support in their kernels by default, until their recent Bullseye release. It's not Podman-specific; it also affects Docker's rootless mode.

Can't the issue be avoided by simply launching the application as a non-root user within the container namespace?


> Can't the issue be avoided by simply launching the application as a non-root user within the container namespace?

Yes, though the issue currently is that with the likes of (rootfull) docker containers, users end up setting docker to not need to call sudo every time. Meaning, if an attacker accesses your workstation (or a server), they will be able to run docker containers as root. This may as well be giving the attacker access to root as you can really easily give yourself full root access if you can launch rootfull containers. Having (rootfull) docker run access is pretty much the same as having root access.
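
The classic demonstration (not Podman-specific; any account in the docker group can do this without sudo):

    docker run --rm -it -v /:/host alpine chroot /host /bin/sh
    # -> a root shell on the host's filesystem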


This sysctl `kernel.unprivileged_userns_clone` doesn't even exist in upstream kernels: the feature can only be disabled at build time.


So then what do you use for container that's better from a security standpoint?


What do you use instead?


I mostly use K8s on VMs[0], and docker the same way. Podman is just a hobby for me, but I think running Podman in VMs is still the right way to go, because it offers you a way of coding distributed systems the same way you can with K8s (with the cool style of systemd and single-process containers :))). Even if it's just your laptop, you can move things to production clusters more easily, because you were forced to think distributed from the start. KinD[1] is also a good option if you can't or don't want to run VMs (but you have to run docker somewhere.. :/). The point is to simulate multiple nodes even when working in development.

[0] https://news.ycombinator.com/item?id=28395329 [1] https://kind.sigs.k8s.io/


I get why the pithy "chroot with more steps" comment has been largely dismissed, but when all you're going for is the temporary installation of build dependencies without impacting the permanent system, chroot really can do enough. The other stuff with user namespaces, to isolate process IDs and networks and map UIDs to get root in the user namespace without actually being root, isn't strictly needed.

There is an Arch Linux devtools project providing some shell scripts to automate setting up clean build chroots, but they're of course focused on setting up an isolated Arch system, and still require running as root. The true poor man's way to do this without being root is to use fakeroot and fakechroot, which is what the Debian builds do. The examples from Debian would all run debootstrap to set up a minimal Debian in the chroot, but you can just run "fakeroot fakechroot <any command>" to use your own build tools and bootstrap your own build environment. That way you don't even require a container runtime and don't need to open up the attack surface of user namespaces.
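
Roughly the Debian-style workflow mentioned above (a sketch; the release and target directory are placeholders):

    fakeroot fakechroot debootstrap --variant=fakechroot bullseye ./rootfs
    fakeroot fakechroot chroot ./rootfs /bin/bash   # still an unprivileged user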


Along the same lines as fakeroot/fakechroot, I've found proot to be super useful for writing startup scripts for commands which make awkward assumptions about the filesystem layout.


Oh damn, I love this, thank you!

EDIT: Looks like they don't provide either an Ubuntu/Debian executable or an Ubuntu root image, which is a bit unfortunate. Odd that someone would make a project and ignore the most common Linux distro...


IME podman is optimized for use in Red Hat-based environments. I've had a couple of odd problems when trying to use it on Debian/Ubuntu.

It's not surprising given it's a Red Hat project, but it does put me off using it, as I'm most comfortable with Debian-derived environments.


It's pretty easy to create one yourself (from an open MR https://github.com/containers/toolbox/pull/483/files#diff-21...), but it would have been nice to have out of the box.

I've used buildah to set up a C/C++ dev environment in an Ubuntu toolbox at work without any major issues.


This sounds exactly like Singularity http://singularity.hpcng.org/


So they reinvented Nix except badly?

(Eventually people will come around to the Nix model. Especially because it's, essentially, like what happens when you install apps on phones.)


Apart from the fact that both can be used for sandboxing/isolation, I see no similarities between the two.


Sounds like a chroot with extra steps


That's a way to describe pretty much all container things in a superficial way.


It's not even superficial. A chroot is a limiting case of a container that only rebinds the filesystem; or, equally, a container is a chroot that also rebinds the user namespace, process namespace, networking, etc.


Can you escape from a mount namespace? You can escape from a chroot.


You can change into an existing mount namespace, which is what commands like "docker exec" or "machinectl shell" do to execute a command within the mount namespace of an existing container. The container can do the same to escape its mount namespace, but it needs privileges to execute the respective namespace-changing syscall, and it needs to know which namespace to change into.

I have an application in a Kubernetes-managed container which needs to do exactly this to access the host system (to open LUKS containers and mount their contents), so I have it run in privileged mode (i.e. with access to all syscalls) and give it the full host filesystem as a bind mount, so it can do:

  nsenter --mount=/host/proc/1/ns/mnt mount "$DEV" "$TARGET"
(The actual nsenter invocation is slightly more complicated, but that's the basic gist.)

Side-note: Since there will inevitably be a comment along the lines of "but then what do you even need isolation for", I'm very explicitly using Kubernetes as a deployment mechanism only. Other pods within the same Kubernetes are strictly isolated, but this one deliberately runs with practically no isolation. But I still get the consistent environment that Kubernetes provides (rolling upgrades, log shipping, liveness probes, etc.).


I know this was made tongue-in-cheek, but that was my first thought as well. It would seem more valuable to explain how and why it's better than a chroot, instead of redescribing how a chroot works but adding the word container.


It really just comes down to what containers are. You can think of them, at a simple level, as tars of chroots that you can pass around, and that an ecosystem of tooling has built up around.

This allows you to easily deploy generic ones you download from the community, and since they usually ship the script that was used to build the image, it's easy to extend them with your own changes by creating your own script that uses a generic public image (the base, not the script) as a starting point, allowing everyone to share work. (E.g., get an nginx container and do the small bits to customize it for you in a script, and you don't need to worry about the building or system setup of the container.)

In addition to the normal chroot stuff, a container uses cgroups functionality on Linux to constrain things even more, such as limiting CPU, RAM, or disk IO; it can lock down a bunch of functionality from even root inside; and you usually have to specifically expose whichever internal ports are listening (and not necessarily on the same external port).

So containers are what you get when you take the idea of a chroot and try to extend and build upon it towards the direction of a true virtual machine, but without going to the step of fully emulating hardware. The benefit is that they are super lightweight and can more easily share CPU and RAM. The downside is that you're exposing more of your base system kernel to it, so it's likely somewhat less secure than a true hypervisor (but very likely more secure than just running those processes on the base system).

So, now that we have a shared base of what a container is (sort of; I'm not sure enough on the specifics to assume I didn't make some missteps in that description), rootless containers extend that so that a regular system user can run a container which internally runs as root. It's not the real parent-system root, but it works for the container because the kernel uses namespaces to allow it. This is both cool, because it allows root to be used even less on a system, and scary, because as another comment in this thread noted, it exposes some kernel APIs to "root" users (run and controlled by system users) which might not normally be checked as much for security problems, since you previously had to be root already to use them.
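
You can see the mapping trick for yourself with rootless podman (output depends on your /etc/subuid allocation; these values are illustrative):

    $ podman unshare cat /proc/self/uid_map
             0       1000          1
             1     100000      65536

I.e., uid 0 inside is your own uid (here 1000) outside, and the rest of the container's users map into a subordinate range.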


Also: Julia Evans has a great writeup along these lines:

https://jvns.ca/blog/2020/04/27/new-zine-how-containers-work...


This is actually a really helpful write up - that makes sense. Thank you.


chroot only affects a process's view of the filesystem; it doesn't affect:

- the process’s view of other processes (i.e. ps isn’t isolated)

- the process’s view of the kernel’s VFS, (i.e. mount isn’t isolated)

- the process’s view of the networking stack (i.e. ifconfig and iptables aren’t isolated, and port numbers are global)

- the process’s view of users on the system (i.e. uids/gids are global)

And so with time, the hostname, IPC, cgroups…

And more are planned on being added like the kernel keyring, and /dev/log.
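
A quick way to poke at all of these at once, using util-linux's unshare (a sketch):

    unshare --fork --pid --mount-proc --net --uts --user --map-root-user bash
    # ps now sees only this shell; the network namespace has only a down
    # loopback; hostname changes stay inside; id reports (namespaced) uid 0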

“containers” means a lot of different things to different people, but the main things people want are:

- resource bundling: “take everything your app needs to run, and tar it up so that it all stays together. Don’t make me worry too much about missing libraries.”

- sandboxing: don’t make me have to worry too much about what else is going on the system when I run my thing and reduce the surface area for attacks.

- delivery (the big value add): give me a bunch of tooling to create that tar, version it, and get it running on a bunch of different systems without having to worry too much about the underlying OS, hardware, network setup, or storage.

And as a bonus, for things like k8s:

- service bundling: give me a bunch of tooling to get many of those tars running on different systems and don’t make me worry too much about service discovery, load balancing, software crash recovery, hardware crash recovery, deployments, task scheduling, secrets, config files and more.


Some pointers if anyone is curious about chroot vs other isolation techniques:

https://fly.io/blog/sandboxing-and-workload-isolation/

https://devops.stackexchange.com/questions/2826/difference-b...

For further reading, searching for "docker vs chroot" yields a bunch of interesting articles.


chroot just isolates the file system.

A "container" (whether Docker or LXC or Podman) is a bundle of isolations that includes the filesystem as well as processes and users, and constraints on memory, CPU, networking, and kernel calls.


AFAIK Docker, LXC, Podman, etc. are actually all the same underneath: they use 'runc', which chroots into a directory (along with extra restriction/config via cgroups, bind mounts, etc.).

The differences between Docker, LXC, Podman, etc. are the config, management, etc. of containers; e.g. Docker abstracts over the chroot directories, using a set of content-addressed tarballs (called "images"), and provides commands for starting/stopping/listing/etc. the containers that are running.


Dating myself a little, but LXC came along well before runc. It predates Docker, and was actually wrapped by the first versions of Docker. I learned about LXC, and used it, after doing some research on how to build my own PaaS at work (pre-Docker) and reading that Heroku was using LXC for their platform. It worked decently for me, but required more in-depth knowledge of all the knobs for disk mounts and networking than Docker typically requires.

You may be right that LXC has switched over to runc since then. I haven't kept up. I would be surprised though.


I was actually being serious, your comment is more helpful though.


Podman is a great Docker alternative. That's why I already included it alongside Docker in https://deploymentfromscratch.com/. It fits nicely into the world of systemd services, where Podman lets you run the container just as you would run the containerized service directly. I hope the future will be bright and maybe we won't need Docker the runtime anymore.


What I like most about it is that it can run as any user. There's no need for a daemon that starts as a privileged user. Another benefit is that all images and containers reside in a hidden directory under $HOME.


Docker also has a rootless mode as of 19.03: https://docs.docker.com/engine/security/rootless/


Wow, how did I not know about this!

Unless I'm missing something, the limitations mentioned in the docs don't seem too limiting. Is there anything that needs to be changed or watched out for in real life?


It requires a kernel that allows unprivileged user namespaces.

Docker images that run as uid 0, which many of them do, could potentially exploit their way out of the container, since kernel code that used to be reachable only by real uid 0 hasn't been extensively tested against a merely namespaced uid 0.

You might have to experiment with different "drivers" to get the overlay filesystem and cgroups components to work on any given host. (These are not hardware drivers, but something more like docker subsystem plugins.)

By default, it expects a dbus user session, which a headless server might not have. You can either enable one or configure a different `cgroupdriver`.
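
For example, to switch the rootless daemon off the systemd cgroup driver (a sketch; the rootless dockerd reads ~/.config/docker/daemon.json by default):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }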


TBH the Linux kernel isn't capable of providing an isolation boundary anyway, so while it's a meaningful regression if your assumption is "attacker in the container", I highly recommend using gVisor or Firecracker, or otherwise reducing the need to trust the Linux kernel for anything like resisting hostile local attackers.


Last time I checked that was experimental, and still not production ready -- has that changed?


The fourth sentence on the page linked above:

"Rootless mode graduated from experimental in Docker Engine v20.10."


Thanks!!


With the major caveat that rootless depends on /dev/fuse for the overlay filesystems (unless you hate yourself and want to use vfs), so you can't use it inside a Kubernetes container unless it's privileged or has the SYS_ADMIN capability added.

This is how we end up with beautiful hacks like Kaniko for building container images inside container workflows. Le sigh.


Sorry for the vague reply, but I'm out with friends right now.

But basically that's fixed, because the newest kernels (5.13 IIRC) allow working around that without requiring root privileges.

So podman is going to stop requiring that too sooner or later.


Yeah, I am actually aware— I've been following it keenly via the buildah issue tracker, and I should have put that in my original post. But the current reality is still that building images on Kubernetes is a choice between several not-great options, especially if you're on managed k8s and can't use privileged mode as a stopgap.

Anyway, even once this stuff all lands, there's still actually no way to do what I would consider to be the actual gold standard of k8s image building, which would be a method where you build the image starting from any base layers already on the kubelet. Because currently, whether it's kaniko, buildah, docker-in-docker, etc, you're basically always either downloading everything every time, or you're having to manually manage some scheme with a long-lived cache container that you volume-in each time and purge periodically, for example: https://github.com/GoogleContainerTools/kaniko#caching-base-...

In principle this should be possible with a Kaniko-like workflow, but you'd need a separate control pod / build pod setup, where the control pod would compute all the layer hashes and then repeatedly try to spawn from the bottom-up using `imagePullPolicy: never` until one of them succeeded, and then build the remainder of the container from there.


I feel your pain, as it's my pain too.

Right now we have a pool of gitlab runners running as dedicated virtual machines, but on the to-do list I have to test whether rootless podman + gitlab runner cache + $DOCKER_HOST pointing to podman's sock file can let developers use plain old docker-compose and the general docker tooling... within a dumb pod in a kubernetes cluster, with all the bells and whistles (especially cluster autoscaling).

A man can dream.

Edit: we are a bit advantaged because we run on openstack, run our own registry (harbor) and our cloud provider doesn't charge us for bandwidth...


I'm a huge fan of the kubernetes runner for GitLab CI, but I really wish there was a way to just do `image: path/in/repo/to/Dockerfile` and have it do the right thing as far as creating/using the supplied image.


Can you open an issue in the gitlab-runner project? It would be great if one of our engineers that focuses on the Kubernetes executor can weigh in.

https://gitlab.com/gitlab-org/gitlab-runner/-/issues



You can volume-mount the host's container storage into the container using additional image stores.

In this article, note sections on /var/lib/shared

https://developers.redhat.com/blog/2019/08/14/best-practices...
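
The relevant knob is in storage.conf; something like this (a sketch, see the article for the full setup):

    # /etc/containers/storage.conf (or ~/.config/containers/storage.conf)
    [storage.options]
    additionalimagestores = [
      "/var/lib/shared",
    ]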


This is no longer true. As of kernel 5.13, native overlay can be used by rootless users.


It's especially great for laptops, dockerd can be a little expensive power wise.


I am a bit concerned with the security of rootless podman: https://news.ycombinator.com/item?id=28393949


I've been a bit disillusioned with podman recently for breakage reasons. The cgroups v2 migration, for example, caused a lot of pods to break and require recreation, and the error messages for debugging these problems are not reported well. The docs also lack any detail that would help debug this kind of problem.

The underlying runtime leaks through, but it feels like the podman docs don't want to document that, and they also don't point to anywhere it is documented. So you end up googling the issue, find people in the issue tracker having their issues closed because it's a problem with the runtime, and then try to map the error you're seeing in podman onto the other projects to google whether there's any reported information.


> The underlying runtime leaks through

For those of us who have never used Podman or explored the depths of Docker, can you explain what you mean by that?


A recent example

> $ podman start my_container

> sd-bus call: Permission denied: OCI runtime permission denied error

What on earth is an OCI runtime? How are its permissions configured? Here's some issues with some random flags to configure: https://github.com/containers/podman/issues/6368 . How are these linked to podman? The devs know, but where is this documented for Joe Random User?

Or another one I ran into:

https://github.com/containers/podman/issues/7830

Again, here's some settings in files in /etc/containers to configure that might fix it? Where is this in the podman docs, or release notes? Just found my pods didn't start after an update one day and had to dig into github issues.


I see. So the criticism here is that Podman's error reporting is sometimes insufficient to explain an error triggered by systemd/cgroup changes.

Docker has the same (class of) problem: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=992486

It is indeed frustrating. I don't think I'll count it as a reason to avoid Podman, though, since Docker is no better.


Also from that ecosystem, buildah.

I started using buildah to build container images in CI jobs because then the job can run in an unprivileged container, unlike docker.

At the time I only knew of kaniko that could do this but I preferred buildah for a number of reasons.

Buildah is really awesome at multi stage builds because you can use it from bash.
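
A multi-stage build ends up as ordinary shell (a sketch; image names and paths are placeholders, and `buildah copy --from` needs a reasonably recent buildah):

    #!/bin/bash
    set -euo pipefail
    # build stage
    build=$(buildah from docker.io/library/golang:1.17)
    buildah copy "$build" . /src
    buildah run "$build" -- sh -c 'cd /src && CGO_ENABLED=0 go build -o /app .'
    # runtime stage: carry only the binary across
    run=$(buildah from docker.io/library/alpine)
    buildah copy --from="$build" "$run" /app /app
    buildah config --entrypoint '["/app"]' "$run"
    buildah commit "$run" my-image
    buildah rm "$build" "$run"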


Buildah is what the Dockerfile could have been. Instead of using general-purpose shell scripting knowledge to produce images, we ended up with a custom instruction file, complete with its own quirks.

I am not sure why everyone seems to think inventing their own domain-specific language is a good thing. It isn't.


Did you try https://github.com/werf/werf ? Check the author's blog with articles about werf: https://blog.flant.com/tag/werf/


Buildkit has a daemonless / rootless mode nowadays.


I build container images using plain old tar, gzip, sha256sum (from GNU coreutils) and jq. No need for root access, etc.
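
For the curious, for a single-layer image that amounts to roughly this (a sketch of the OCI image-spec layout; rootfs/ and the paths are placeholders):

    mkdir -p image/blobs/sha256
    # layer blob; digest of the *uncompressed* tar becomes the config's diff_id
    tar -C rootfs -cf layer.tar .
    diff_id=$(sha256sum layer.tar | cut -d' ' -f1)
    gzip -n layer.tar
    layer=$(sha256sum layer.tar.gz | cut -d' ' -f1)
    mv layer.tar.gz image/blobs/sha256/$layer
    # config blob
    jq -n --arg d "sha256:$diff_id" \
      '{architecture:"amd64", os:"linux", config:{},
        rootfs:{type:"layers", diff_ids:[$d]}}' > config.json
    config=$(sha256sum config.json | cut -d' ' -f1)
    mv config.json image/blobs/sha256/$config
    # manifest referencing both blobs by digest and size
    jq -n --arg c "sha256:$config" --arg l "sha256:$layer" \
      --argjson cs $(stat -c%s image/blobs/sha256/$config) \
      --argjson ls $(stat -c%s image/blobs/sha256/$layer) \
      '{schemaVersion:2,
        config:{mediaType:"application/vnd.oci.image.config.v1+json", digest:$c, size:$cs},
        layers:[{mediaType:"application/vnd.oci.image.layer.v1.tar+gzip", digest:$l, size:$ls}]}' \
      > manifest.json
    manifest=$(sha256sum manifest.json | cut -d' ' -f1)
    mv manifest.json image/blobs/sha256/$manifest
    # index.json and oci-layout complete the image directory
    jq -n --arg m "sha256:$manifest" --argjson s $(stat -c%s image/blobs/sha256/$manifest) \
      '{schemaVersion:2, manifests:[{mediaType:"application/vnd.oci.image.manifest.v1+json",
        digest:$m, size:$s}]}' > image/index.json
    echo '{"imageLayoutVersion":"1.0.0"}' > image/oci-layout
    # tools like skopeo can then copy from oci:image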


I use kaniko a lot. Could you please expand on why you preferred buildah? (I'm happy with kaniko, and curious.)


First of all I'm a Fedora user and podman/buildah have been recommended since Fedora switched to cgroupsv2.

But also I prefer using open source projects from communities and companies who are based around open source. Google do contribute a lot to open source but they're primarily an ad company who want your data. So consider using buildah a type of boycott. ;)

I use buildah privately whenever I need to quickly make an image to test something, it's so simple to use in bash. It's also very simple to add to existing images while you're testing it. So it was just natural to continue using it in CI pipelines.

I only ever used one command for kaniko so I'll admit I know nothing of it, and that was 2 years ago. But buildah feels a lot more accessible because I learned all the different shell commands that correspond to the Containerfile format.


I found it convenient to reuse Dockerfiles rather than learning a new domain-specific language, and also, for some reason I was not able to use podman inside docker when I tried (our CI runs in docker containers), while kaniko works well inside docker. I'll have a second look at podman, maybe it's better now.


There are lots of cool aspects of podman; for the admin, one of the really nice things is its integration with systemd.

podman generate systemd <containerid>

to generate .service files which can then be launched via systemd.
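
The end-to-end flow, for a rootless container (a sketch; the names are placeholders):

    podman create --name web -p 8080:80 docker.io/library/nginx
    podman generate systemd --name --files web    # writes container-web.service
    mv container-web.service ~/.config/systemd/user/
    systemctl --user daemon-reload
    systemctl --user enable --now container-web.service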


This is a super underrated feature. You can do some really cool things when you combine this with Ignition and Fedora CoreOS. Ignition has the ability to manage systemd units on boot.

https://docs.fedoraproject.org/en-US/fedora-coreos/running-c...

https://developers.redhat.com/blog/2020/03/12/how-to-customi...

Automatically boot, bootstrap the OS, and run your containers via systemd, all driven by infrastructure-as-code ignition files.

A much simpler edge or single-node container experience than using Kubernetes.


Neat, thanks for the tip!


Can you get journal logs from a pod? Archive and limit log size etc. using existing systemd configuration?


yes, via journalctl
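
If the container is wrapped in a systemd unit (e.g. via podman generate systemd), it's the usual journald tooling, and journald's size limits apply (a sketch):

    journalctl --user -u container-web.service -f
    # size caps come from SystemMaxUse=/RuntimeMaxUse= in journald.conf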


podman has been an amazing project to follow. Since early 2020 I've been using it for rootless containers with no daemon on my Linux development environment. Development pace is picking up and I get excited every time I get a notification from GitHub about a new tagged release.

As a bonus, the podman-compose script https://github.com/containers/podman-compose/ is getting good too!


You do not need podman-compose; in fact, it is quite inferior to docker-compose.

Podman has added full docker-compose support, and in fact even the docker binary works with it. Rootfull works as of v3.0.0, rootless was added in 3.2.0. Rootless can be a bit buggy, but I believe they polished a lot of the issues for 3.3.0.

Because they've re-implemented the docker api, it can actually work with any tool that uses docker's API.


> Podman has added full docker-compose support

Not quite. I just tried to run one of my compose files, and got an error at network creation:

    ERROR: network create only supports the bridge driver


> Podman has added full docker-compose support

any pointers as to how this works?


IIRC it's emulating the docker socket and its API. As a consequence, you do need to have a podman daemon running.

https://fedoramagazine.org/use-docker-compose-with-podman-to... https://www.redhat.com/sysadmin/podman-docker-compose


As rubenbe said, they implemented the docker API, thus you'll need to start a systemd socket.

For rootless the steps are as simple as

    systemctl enable --now --user podman.socket
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
For rootfull just remove "--user" from the above command; the socket is then located at /run/podman/podman.sock

Then you can run both docker and docker-compose commands. Traefik and other things that use the Docker API directly should be able to use it too. Any issues you can raise on the GitHub repo, they're a bunch of friendly folks and are pretty quick at addressing issues.


So, as long as you have the podman socket enabled, you can just run the literal docker-compose script with `podman` aliased to `docker`? Interesting and cool! I think I still prefer not having to enable the socket and export DOCKER_HOST, but I will think about it. The only problems I had with podman-compose in the past few months were related to syntax issues in the docker-compose.yml file (for example ports, image names without prefixes, etc).


No, the podman socket is only necessary if you want to expose the docker API. This can then be used with the docker cli (or docker-compose). The podman cli interface is close to the docker cli interface, which is why you can do `alias docker=podman`. With the socket you would be using the official docker cli (or docker-compose), not the podman cli.


latest podman works with docker-compose OOTB


Great replacement for docker. For personal use it is pretty much a drop-in replacement. I used it with Hashicorp Nomad, and so far have had no issues at all. One thing that might be an issue for some is that the Docker-in-Docker thing is kinda broken. (Was around 8-10 months ago, not sure about now.)


Looks like this was resolved in the last couple months: https://www.redhat.com/sysadmin/podman-inside-container


Can't replace the docker desktop app on a mac unfortunately


Tested podman to replace docker (the CLI) on a mac yesterday. Most of it works fine. They have an easy way to set up a VM now with `podman machine`: https://podman.io/getting-started/installation#macos

If you want the management GUI, install cockpit: https://github.com/cockpit-project/cockpit-podman

Try podman, you'll be impressed.


There are also Rancher Desktop and minikube's podman-env, which will both work fine as alternatives to Docker Desktop.


Is this being posted here because of Docker changing their terms of service for their Desktop app?

Also the performance on macOS vs a parallels VM just running podman and forwarding ports is night and day.

I switched to podman ages ago because I didn’t want a root daemon on my machine anymore, but the new pricing scheme is especially egregious.


Doesn’t seem to be a fair comparison. Docker Desktop has access to your whole filesystem and can be used transparently in Terminal.app. This is where the bad performance comes from.


This fact seems relevant to the discussion.


When I got the email notifying me of the changes, I couldn't believe it... 5 dollars per user for a basic wrapper around an open source tool? Are they serious with this?


Due to the security concerns of enabling user namespaces[1], I don't run rootless podman containers any more. My current podman use on my workstation is to run rootful podman containers inside a VM (KVM with virt-manager; actually running Proxmox, which does nested virtualization for the podman VM[2]), and to configure those containers as systemd services with ansible. I'd really rather just use docker-compose (or a 100% UI clone like podman-compose), but for me the whole point of using podman was to limit the API attack vector and use the single-process model, so introducing the docker API into podman isn't really what I wanted either. (podman-compose doesn't need the API, but there's other reasons I haven't used that.)

I think, though, that you can't really replace docker with podman (without the docker API) in a general sense. You just have to treat it as its own container platform. It will work for certain docker containers you've tested reliably, but if you regularly test out random container stuff you will run into problems on a daily basis. But you can use podman for new container development (because you're the one implementing it and avoiding the problems) that will also be compatible with docker. Configuring Traefik on podman has been painful[3] because it relies upon docker labels for configuration discovery (you can still write a static config file without discovery, but it gets tedious). Now that there is docker API support it works; maybe no one cares to have a true dockerless podman Traefik provider, but I think that would be neat, and it can probably be written with the new provider plugins[4].

[1] https://news.ycombinator.com/item?id=28393949 [2] https://blog.rymcg.tech/tags/proxmox [3] https://github.com/traefik/traefik/issues/5730 [4] https://github.com/traefik/pluginproviderdemo


> ...and my current podman use on my workstation is to run rootful podman containers inside a VM...

This makes me feel that perhaps we're slowly moving in the direction of a full circle, to how Docker Toolbox worked inside of VirtualBox, at least on Windows: https://docs.docker.com/toolbox/

Here's an example of someone's experience with setting it up: https://medium.com/@peorth/using-docker-with-virtualbox-and-...

The industry seems to be slowly advancing, with Podman, Docker with HyperV and now with WSL2 integration, all just to solve the hard permission and runtime problem, whereas the actual OCI standard and the tooling that Docker provides for getting things up and running (environment variables, resource limits, exposing ports etc., with a few warts here and there) was sufficient from the beginning for the most part.

> I think though, you can't really replace docker with podman (without docker API) in a general sense.

With this, I agree, but perhaps I've ended up with a different set of conclusions. Docker and the tools like it are good for deploying applications in a mostly reproducible way, essentially solving the dependency-management issue in a sub-optimal yet passable way.

Because all of these tools solve this problem, and it's oftentimes the chief concern when one is chosen, all else becomes secondary, short of pressing issues like exploits and such. To that end, Docker inside of a VM, with separate VMs for separate apps and separate clusters for separate environments, seems like the least painful option - decent security, somewhat limited attack surface, not relying on just one technology to be secure (even Podman has its exploits), but with the wide support that Docker has.

For example, you could have your project, which has a few front end containers, a few various services for the back end and a few databases, also in containers. You could essentially have 1 VM per environment with all of those running (development, testing, accept testing) in the less important deployments, but have as many VMs and servers as you need for the important ones (staging, production). Then, have separate container clusters (with something like Docker Swarm) for your development environments and the production ones and you should be good.

Sometimes the path of least resistance isn't all that bad. At least until Podman is stable enough to be a daily driver, be it in 5 years, 10 years or never.


yea, I've mostly come to the same conclusion. podman is a hobby; I can see its potential, so I actively track it, but I mostly do stuff with vanilla docker and docker-compose for single one-off installs, and k8s for bigger distributed stuff, either in a VM locally or on DigitalOcean droplet(s). I've been collecting my compose files [1]

[1] https://github.com/EnigmaCurry/d.rymcg.tech


Does Podman offer anything similar to Docker Swarm? I absolutely love Swarm mode (yet to find a simpler way of organising containers over a few small nodes), but do not want to deal with Docker the company.


Nomad comes with a Podman driver, and it is quite straightforward to install and configure.


They suggest using k8s - you can also create containers in "pods" and generate kube config files for deployment into k8s.


kind (kubernetes inside docker) supports podman now


Sadly they do not provide anything like Swarm and suggest using k8s.


no, no orchestration, I believe

use Kubernetes


Kubernetes is a huge leap in complexity for small deployments. You can simplify it with wrappers like minikube and k3s but there's still a learning curve. It'd be nice to have a simple free alternative to Swarm.


That would be Hashicorp Nomad.


Honest question: Is it Nomad?

Because I've literally spent a couple hours watching Hashicorp videos on Nomad, and I know how to install and secure it, but I still have no idea what TF Nomad is/does. Maybe a distributed cron, that apparently has the ability to spin up a million containers?

Edit because of downvote: No, really, I'm serious. Any references to how Nomad solves the Swarm/K8s problem, because I've spent my spare time this week trying to understand it and still don't know what Nomad even is.



Thanks, these help. The first video at the top of the first page of Getting Started, is one of the ones I had watched earlier this week, but on the 5th page of that it's starting to get into the meat.

Still working through it, but it is seeming like the answer to "What is Nomad" is: kind of a clustered systemd. By that I'm thinking of systemd as a whole, with its abilities to run containers, timers, daemons and dependencies...

It answers the swarm/k8s call by having a job runner that can run docker containers.


Although outdated, and somewhat incomplete (I see no mention of PostgreSQL for the Nomad example), this might help:

https://betterprogramming.pub/dockers-voting-app-on-swarm-ku...


I like Nomad, but I wish useful features weren't locked in a paid version.


More love needs to go into docker-compose for podman. https://github.com/containers/podman-compose


I've been using regular old docker-compose with podman since podman 3.2.0. You need to enable the podman socket, which kind of takes away some of the daemonless fun, but once you do that (and point your DOCKER_HOST at the appropriate location), you can "docker-compose up" and it will just work.


I used to think so until I started just using bash scripts in place of docker-compose files.


Anecdotally, and perhaps keep in mind I'm quite skilled with bash: in school we had a few people managing their desktops with scripts. Set things like desktop preferences, ssh config, sometimes add a different desktop environment (we each had our own user and it worked with LDAP, so we roamed around and that's why this was useful). I was the only one using bash; some others in my year used Ansible. They seemed to be spending half of each afternoon trying to fix something in Ansible (also trying to use other people's scripts, if I remember correctly); for me, another preference was just another line of bash, and it worked reliably given that the desktops were all the same OS.

If you know bash already and you don't want to use it to reinvent a big wheel, yeah I would not hesitate to use it after seeing this struggle with more modern tools. Most of the Docker setup configs I see are already 60% bash commands strung together with backslashes on each line and broken syntax highlighting (because you're in a different file format).


What is your bash depends_on: equivalent? I like the idea of keeping it simple bash, but it seems like a step back.


I’ve used wait-for-it with success.

https://github.com/vishnubob/wait-for-it


I had to look up what depends_on accomplishes, so maybe I'm missing something here, but wouldn't an If/then/else sort that out as well as give you more options?
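
Right; a poor man's depends_on is just a loop on a readiness check (a hypothetical sketch; my-app and the env vars are placeholders):

    podman run -d --name db -e POSTGRES_PASSWORD=secret docker.io/library/postgres:13
    until podman exec db pg_isready -q; do
      sleep 1    # wait until the database actually accepts connections
    done
    podman run -d --name app -e DB_HOST=db my-app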


Podman is great because we don't get root access on our boxes at work.

One problem is someone still needs root access to install it - or does anyone know if it's possible to install as a regular user?


The following reply in GitHub addresses your question:

> that is more difficult to get, as you'd need to install all dependencies in your home directory and prepare all the configuration files to use/point at the right executables. We have nothing at the moment for bootstrapping podman and its dependencies from scratch for an unprivileged user.

The easiest would be to install it on the system, but still use the unprivileged users for running the containers.

https://github.com/containers/podman/issues/3100


I don't think this will work unless you have a range of user and group IDs allocated for your user. Those will be used for non-root in-container users.

    # echo USERNAME:10000:65536 >> /etc/subuid
    # echo USERNAME:10000:65536 >> /etc/subgid


Technically you don't need those if you always run your containers as a single user mapped to your real UID.


I know uid 0 in container will always map to uid outside the container (e.g. 1005), but I haven't tried e.g. uid 999 in the container to ensure it maps to uid outside of the container (again, 1005), does that work?


> I know uid 0 in container will always map to uid outside the container

I don't think that's true. Have a look at Podman's --userns=keep-id and related options.
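
Newer Podman can even map your host uid to an arbitrary uid inside (worth verifying for your version; IIRC the keep-id:uid= syntax arrived around Podman 4.3):

    podman run --rm --userns=keep-id:uid=999,gid=999 docker.io/library/alpine id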


ah yeah, I had forgot about that one specifically


Our helpdesk will recommend that devs switch to Canonical Multipass + podman on their mac dev machines, away from Docker Desktop for Mac. Currently, the helpdesk is whipping up a provisioning package that includes scripts that make the docker CLI and docker-compose work from the host system just as before (shimming & proxying to VM commands/tools).

Windows users will just use WSL2.


The moment people got the notification about Docker Desktop going paid-only for corporate use, everyone freaked out.

This will make people look for alternatives now.


Yeah, I’m just a poor enterprise developer, but now suddenly I cannot use Docker Desktop any more, and it was already a PITA of a program.


It will be the end of them.


I like podman, but is there any progress with monitoring solutions? cAdvisor image didn't work last I checked. What I need is a way for data to flow into prometheus.


Podman does support the docker API, so you can use something like the OpenTelemetry Collector to fetch metrics using the docker API and forward them to prometheus.

Collector: https://github.com/open-telemetry/opentelemetry-collector-co...

Docker receiver: https://github.com/open-telemetry/opentelemetry-collector-co...

Prometheus exporters: https://github.com/open-telemetry/opentelemetry-collector-co... and https://github.com/open-telemetry/opentelemetry-collector-co...

OpenTelemetry will soon support native Podman API as well.


Not sure why you seem to receive downvotes. Seems like a pretty nice layout and comment. Anyone interested in explaining why you're receiving downvotes?


I've seen anything that mentions the word "telemetry" automatically receive down votes on HN. People confuse it with companies collecting user-identifiable data from clients like apps/browsers.


Ubiquiti's UDM Pro uses Podman to manage containers running the configuration webapps for their router. Kinda cool :)


The ubiquiti software, mainly the controller, is some of the worst I've ever seen. Bought one of their devices based on the recommendation of a friend, but never again. The controller is cordoned off now but I'm uneasy about the internet access it needs for updates. Also, with every update I need to wonder if it's going to nuke the config and factory reset the switch again (need remote hands when that happens).

Then again, I have no experience with other "enterprise" switches. All I need is one ~20-port switch that can do VLANs anyway, so perhaps I should find one with simple old serial control and just plug that into one of the servers for remote management.


Another great Docker alternative from one of the containerd maintainers (Akihiro Suda) is https://github.com/containerd/nerdctl. It supports compose as well.


Completely different topic, but I think it is related. Docker-in-Docker daemonless build systems have always fascinated me.

I have been tracking kaniko and buildkit lately. Buildkit can behave like kaniko in a daemonless mode. I was a big fan of kaniko but somehow it feels a bit dead to me. A few months back, I was trying to do a PaaS-like setup with a multi-stage dockerfile, and it failed terribly. Because of the way kaniko writes files in the host container, it would overwrite the files, crashing the container. Sometimes it would just hang and the disk would get full. I did not face those issues with buildkit daemonless; it just worked seamlessly.


On macOS

    brew install podman
    podman machine init
    podman machine start
    podman run ...


It seems like podman machine is deprecated: https://www.redhat.com/sysadmin/replace-docker-podman-macos


Podman-machine was an external project derived from minishift and some of the images I made to allow Fedora and Podman to be used. This worked, but it was mostly a one-man project by a community member.

We have worked with the container team to introduce an actual solution, that also will hopefully soon introduce a new network stack. This we have been testing in CRC for some time.

Note: I'm the team lead on CRC, previously minishift, and work closely with the container team. Red Hat employee.


There’s so much container innovation going on that I get dizzy just by hearing about it. Lots of interesting projects popping up all over the map.


Then you will be glad to hear that I was planning to continue the podman-machine project, but under a different name... https://boot2podman.github.io/2021/01/24/reboot-new-project....


Was or are? Looks interesting, especially the runs from RAM aspect.


It's a "work in progress", basically an updated version of the old projects that serve like a proof-of-concept if you can tolerate the old versions (docker 19.03 and podman 1.9.3) The runs from RAM is an integral part of the project, the new version will have better packaging - using squashfs mounts rather than copying them over to tmpfs like the old one did.


This is from the guy that leads the container team at Redhat:

https://twitter.com/rhatdan/status/1433003250991706114?s=21


Your comment gave me the impression that Daniel Walsh made some refutation of the claim that podman-machine is being deprecated, but the tweet you link to says no such thing, unless it's hidden in some sub-tweet (Twitter's UX is horrible for discovering things).

Going straight to the source (https://github.com/boot2podman/machine), it says the following:

> DEPRECATED (with huge letters)

> Podman Machine is now deprecated. Users should try using Vagrant instead.

So one can safely assume that podman-machine is in fact getting deprecated.


I'm not sure if it was you that downvoted me, but you are mistaken.

boot2podman/machine, what you linked, is not an official part of podman. It is a third party equivalent and not relevant. The official podman repository is https://github.com/containers/podman, and the machine bits are integrated and part of podman proper:

https://github.com/containers/podman/tree/main/cmd/podman/ma...

If it was you that downvoted my comment, please consider not doing that because you disagree. The comment was factually correct and made in good faith. You're equating a third party deprecated podman-machine with the integrated into upstream podman machine. We were not referring to the same thing whatsoever.

Hopefully this makes more sense to you now.


Thanks for clarifying! My mistake in confusing the two; hopefully one can understand how another can make the mistake, though, as one is named "podman-machine" and the other "podman machine".

My main issue with your comment (https://news.ycombinator.com/item?id=28391265) still stands, you're linking a third-party source with the impression that that source said it's in fact not being deprecated, when there is no such text in that source.

I did not downvote you, but I can absolutely understand why someone would do that, since the tweet seemingly has nothing to do with the comment your original comment was replying to.

For what it's worth, complaining about downvotes is almost never worth it. Makes for boring reading and the people downvoting you won't read it anyways.


I'm saying the source is saying this is what to use, which was confirmed by reading the source and seeing the "deprecated" version was not even related to the project whatsoever.

Dan is the guy that has been with the docker project through the Red Hat fork and the creation of podman from the very beginning. If he says podman machine is what to use, it is not deprecated. Have you ever listened to Dan speak? He's very pragmatic about these things.

Being one of the authoritative decision makers, if he says to use it, it is not deprecated. A literal one-minute perusal of the podman upstream source confirmed this.


> since the tweet seemingly has nothing to do with the comment your original comment was replying to

Saying the lead developer is recommending using machine right after someone claims it’s deprecated seems relevant?


The comments were regarding two different projects, one is deprecated (old podman-machine) one is not (new podman machine)


It's also in the documentation https://podman.io/getting-started/installation


I added some clarification (hopefully) to the README, it gets a bit confusing when terms like "Machine" and "CoreOS" get reused


next you're going to be complaining that it's too easy to confuse docker-compose with docker compose :P


What you linked to is podman-machine. podman machine is not the same. The former was probably deprecated because podman machine is now part of podman itself.

https://github.com/containers/podman/tree/main/cmd/podman/ma...


podman-machine was a project disconnected from Podman. The podman project recently added a new command to podman `machine`.

The `podman machine` command downloads and runs VMs with the podman service enabled within them. It also configures the native podman to use ssh to connect to the podman within the VM, to allow the illusion of native podman support on the host. This is the exact same thing that Docker Desktop does.

We will contact the podman-machine developers to update their website to point out that `podman machine` is the way to go, to get a VM running podman on your host system.


This is what the podman-machine GitHub says under the DEPRECATED section:

` DEPRECATED:

Podman v3.3 now has a podman machine included, different from podman-machine

This is a new feature based on QEMU and CoreOS, unlike the old one (described here) which is based on Docker Machine and Boot2Docker, as was available in Docker Toolbox... `


I updated the README based on feedback from here...

Once there is any documentation or blog post for the new podman machine feature, I will link directly to it


Boot2podman was a project derived from minishift and machine drivers. It was not maintained by the container team... but by a community member.

Podman machine as mentioned is a new integrated solution.


I will "unlock" the deprecated boot2podman projects and add some more text about it to the README.


hmm, I tried this yesterday, Catalina, fresh install of podman/qemu:

  ; podman machine start
  
  INFO[0000] waiting for clients...                       
  INFO[0000] listening tcp://0.0.0.0:7777                 
  INFO[0000] new connection from  to /var/folders/tv/ykgkkzr902n062tzjgbtwdc1bgt9y_/T/podman/qemu_podman-machine-default.sock 
  Waiting for VM ...
  qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
  
I hadn't dug into it because the docs for this feature are still missing, and I presumed it hasn't launched - I'd seen it in the demo https://podman.io/community/meeting/notes/2021-04-06/#podman...

So I took a look again:

  ; vim ~/.config/containers/podman/machine/qemu/podman-machine-default.json
  # added arguments to cmd, "-cpu", "host"
  # based on reading https://github.com/GNS3/gns3-server/issues/1639 for that error
  ; rm /var/folders/tv/ykgkkzr902n062tzjgbtwdc1bgt9y_/T/podman/qemu_podman-machine-default.sock
  # podman machine start will complain if you don't remove the
  # socket file after the crash above. `fuser` showed nothing
  # connected to it
  ; podman machine start
  
  INFO[0000] waiting for clients...                       
  INFO[0000] listening tcp://0.0.0.0:7777                 
  INFO[0000] new connection from  to /var/folders/tv/ykgkkzr902n062tzjgbtwdc1bgt9y_/T/podman/qemu_podman-machine-default.sock 
  Waiting for VM ...
  ; podman machine ls
  
  NAME                     VM TYPE     CREATED         LAST UP
  podman-machine-default*  qemu        45 seconds ago  Currently running
  ; podman pull alpine
  
  Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
  Trying to pull docker.io/library/alpine:latest...
  Getting image source signatures
  Copying blob sha256:a0d0a0d46f8b52473982a3c466318f479767577551a53ffc9074c9fa7035982e
  Copying blob sha256:a0d0a0d46f8b52473982a3c466318f479767577551a53ffc9074c9fa7035982e
  Copying config sha256:14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab
  Writing manifest to image destination
  Storing signatures
  14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab
  ; podman run -it alpine sh
  
  / # 


Works!


Interesting… Will podman automatically create and launch a Linux VM where the containers will live?


Seems like it

-------

Podman is a tool for running Linux containers. You can do this from a MacOS desktop as long as you have access to a linux box either running inside of a VM on the host, or available via the network. Podman includes a command, podman machine that automatically manages VM’s.

To start the Podman-managed VM:

podman machine init
podman machine start

-------

from: https://podman.io/getting-started/installation


Seems to have some issues creating the temp directories for the VM in an automated fashion (at least with the Homebrew package). But it will work with some tinkering.


There's another tool that allows you to drill down into events in K8s at the pod level and even see pod logs in real time, all in one place: https://komodor.com/blog/new-pod-status-and-logs-dash-saves-...


One thing I noticed when using podman was that it was slower than Docker. Perhaps due to its daemonless nature?


Because rootless currently needs slirp4netns for IP resolution.

Rootless Docker, when configured properly, is essentially the same speed.


Generally it's slower because you can't use overlay mounts without root.


This is no longer true. As of kernel 5.13 podman can use rootless native overlay.


Glad podman exists (uninstalled docker and never looked back) and hoping more people write their guides/blogs using it instead of docker to break up the lock-in and single mindedness around containers.


I'm pretty new to virtualization; what're the pros/cons of podman vs docker in prod?


Does it work with docker compose files? That's why I still use docker these days tbh



Not Compose specifically. It does use 'pods' though, which are the unit of deployment in Kubernetes. This makes porting your cluster into K8s (should you wish) very simple.


Does this work on WSL2 / win10? There’s nothing explicit in the readme.


Yes, the install instructions says WSL2 is fine: https://podman.io/getting-started/installation


It does not. Podman relies on kernel features that are not implemented in the WSL compatibility layer.


It does (WSL2). Have worked with them on making sure this works and is described.

Tested on a fedora image and ubuntu


WSL2 runs a Linux VM, i.e. a real kernel

https://docs.microsoft.com/en-us/windows/wsl/compare-version...


We use WSL2 with podman, works well.


How does Podman compare to LXD?


Application containers vs system containers.

LXD is a daemon for LXC, which last I checked was focused on creating system containers.

This means it probably can't run OCI containers.


Both can run systemd in containers. Podman can be used for system containers. Podman can start containers without root privilege while LXD is a daemon that needs to run as root.


LXD is, but LXC can run rootless containers just as well.

In this sense, it's similar to running the Podman-compose daemon as root, but rootless containers on it.


One can create a cluster of system containers with LXD, as opposed to local application containers (podman).



