Docker container security cheat sheet (gitguardian.com)
192 points by thunderbong on Aug 2, 2021 | 35 comments



One of the biggest security concerns isn't listed: avoid `8000:8000`-style port publishing, because on most cloud providers it opens your service up to the world unless you explicitly block that port with a cloud-level firewall. If you're hosting somewhere like DigitalOcean, you could very easily not be using their external cloud firewall.

Even if you use a cloud firewall it's worth avoiding 8000:8000 for the sake of being explicit with your intentions.

The reason you'd want to avoid that is that you'll probably have your services reverse proxied by nginx, in which case only 80/443 need to be published: the internet hits nginx, not your internal service at example.com:8000 or whatever port it runs on.

This topic and many other security gotchas / best practices were in my DockerCon talk from a few months ago at: https://nickjanetakis.com/blog/best-practices-around-product... It covers patterns for using a more restrictive and secure 127.0.0.1:8000:8000 value in prod while still using 8000:8000 in dev, so you can check it on multiple devices on your local network, all with the same docker-compose.yml file.
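The single-compose-file idea described above can be sketched with an environment variable default in the port mapping. This is a hedged sketch, not the talk's exact config; the variable name `DOCKER_WEB_BIND` and service name `web` are made up for illustration:

```yaml
services:
  web:
    ports:
      # Binds to loopback only by default (prod-safe: only a local
      # reverse proxy like nginx can reach it). In dev, set
      #   DOCKER_WEB_BIND=0.0.0.0
      # in your shell or .env file to expose it on your LAN.
      - "${DOCKER_WEB_BIND:-127.0.0.1}:8000:8000"
```

The same docker-compose.yml then serves both environments; only the variable changes.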


Thank you for the article "Best Practices Around Production Ready Web Apps with Docker Compose". I've been referring to it since seeing the original HN submission [0]. Something like Kubernetes was overkill for my needs, especially since Docker Compose is already a part of my development workflow.

[0] https://news.ycombinator.com/item?id=27359081

edit: wrong link


I would recommend using Rootless Docker over this cheat sheet. It makes half the issues they are trying to work around moot. It also solves the issue of Docker punching a hole through UFW.

https://docs.docker.com/engine/security/rootless/
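For reference, the setup from that page boils down to a few commands. This is a Debian/Ubuntu-flavored sketch; the package name and socket path can vary by distro:

```shell
# Rootless mode needs the newuidmap/newgidmap helpers
sudo apt-get install -y uidmap

# Installs and starts a per-user dockerd (no sudo needed after this)
dockerd-rootless-setuptool.sh install

# Point the docker CLI at the rootless daemon's socket
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker run --rm hello-world
```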


Some of the points in TFA are fixes to be done in the Dockerfile, meaning that any user of the Docker image would benefit, not just those who run rootless Docker.


What are the trade-offs of using rootless?


Your host OS has to allow unprivileged user namespaces, which can add significant attack surface. There is a sysctl for this:

`sudo sh -c "echo 1 > /proc/sys/kernel/unprivileged_userns_clone"`

Basically user namespaces let you do the same thing as pid or network namespaces, but with users. This means you can have a "fake" root user. The Linux kernel, marvel of software that it is, is easily confused by this and basically hands you trivial privescs if you're this "fake" root user.

This problem is pretty much only getting worse because user namespaces are becoming more powerful whereas kernel security is staying the same (ie: not moving at all).

That's why it's gated in most distros by default.


> That's why it's gated in most distros by default.

Debian used to gate them behind a sysctl, but that's changing in the upcoming Bullseye release:

"The previous Debian default was to restrict this feature to processes running as root, because it exposed more security issues in the kernel. However, as the implementation of this feature has matured, we are now confident that the risk of enabling it is outweighed by the security benefits it provides."

https://www.debian.org/releases/bullseye/amd64/release-notes...

Ubuntu has allowed user namespaces by default for years. Which distros are still holding out?


I'm on Debian Buster.


The advantage of rootless though, over user namespaces with a rootful Docker daemon is that in rootless all the components run as standard user, so a compromise of the Docker daemon should just allow for escape to a standard user.

Not sure about most distros gating that sysctl. Ubuntu works fine with rootless Docker with no changes and looking at their install instructions, there's only mention of setting that sysctl on debian and arch.


> The advantage of rootless though, over user namespaces with a rootful Docker daemon is that in rootless all the components run as standard user, so a compromise of the Docker daemon should just allow for escape to a standard user.

Yep. I was just answering the question of what the tradeoff is.

> Not sure about most distros gating that sysctl. Ubuntu works fine with rootless Docker with no changes and looking at their install instructions, there's only mention of setting that sysctl on debian and arch.

Interesting. I haven't checked in a while. I'm also not on the latest debian though.


Are these vulnerabilities actual bugs, or just misfeatures?


Actual bugs; Qualys's recent CVE-2021-33909 is one example.


To clarify my stance, now that I have a bit more time this evening… unprivileged userns is the only way forward for Linux sandboxing on a bare-metal host past the boundaries of POSIX isolation. So from a security perspective I do hope most distros get this turned on at some point, and that these bugs stop being so commonplace.


There may be some overhead with networking if your application uses a very large amount of bandwidth. See:

https://github.com/rootless-containers/rootlesskit/tree/v0.1...

Otherwise for general dockerized applications, you won't notice any difference.

You may find some quirks, but these can all be worked around easily as described on the rootless docker page.

We run it in production with no issues so far.


Our development setup uses Docker Swarm, which requires overlay networking, which isn't supported in rootless mode. Otherwise I'd probably use rootless Docker on my dev machine.


Every time I think "I should move off of AWS managed services and onto Docker/Kubernetes. Think of the cloud-agnostic services! The ease of spinning up test environments!" I see an article like this. And I'm reminded that I seem to be buying myself out of a lot of pain.

But am I totally wrong?


I'm afraid you are wrong and maybe suffering a bit from AWS Stockholm syndrome. Vanilla Kubernetes is far easier to set up and far faster to deploy/update than any AWS solution involving a pig/alb/vpc/sg/ec2/ECS/etc. The only downside is that you'll have to forego AWS's global deployments and redundancy as well as higher-level service offerings such as logging, alarms, tracing, etc. Nevertheless, that downside is countered by the drastic reduction in operational cost resulting from not being price gouged by AWS.


Hmm. Every company I've been where they had internal kubernetes, it was a complete mess and constant production issues.


Is it easier (and safer) than AWS Lambda/SQS/SNS/DynamoDB/S3? Because that's the alternative I'm considering.

I'm concerned with foregoing AWS's security hardening and redundancy more than global deployments and alarms (at the moment)


What accounts for the "drastic reduction in operational cost" in using vanilla Kubernetes vs EC2? (Is it ELB/ALB data throughput?)


Use AWS-managed Kubernetes. All (most?) of the cloud agnosticism, none of the management complexity.

Managing k8s yourself on bare metal is hard. Managed k8s on any provider offers real value.


I 100% agree that offloading to AWS (or Azure or GCP) managed Kubernetes is better than bare metal. Maybe it's worth moving to bare metal later.

I'm more concerned with moving to Docker images, with the associated increase in needing to manage the whole container, vs. Lambda/S3/DynamoDB. I'm especially worried that I'll misconfigure the images, security threats that were handled by Amazon's services will suddenly be my responsibility, and I'll fail.


The complexity has to go somewhere


Why not use Podman and run containers daemonless and under specific users/groups?


FWIW you can do that with Docker as well :) Docker rootless went GA in 20.10


I now see many apps telling you to mount your Docker socket into their container. No thanks... not interested in having all my containers exposed.


I don't think it's worth doing all these tweaks.

The first reason is that it's all running on Linux anyway, and Linux is (in general) swiss cheese, security-wise. Even with a billion container tweaks, there are still holes that can be exploited from the container to escalate to the host OS.

The second is that attacks don't need to privilege-escalate to the host to cause havoc. If the attacker can read memory, they can get credentials for other network services and exploit them from the container. Or they can drop malware from the container to any users of a service, or upload it to a service. Or they can just scan the network looking for another vulnerable service. Or it could be something like EC2/ECS Metadata Service was left accessible and they can start enumerating your cloud account(s). More than enough for the average attacker.

Just assume that a Docker container is exactly the same as running a regular process on the host OS, and it will be much simpler to identify attack surfaces and mitigate them.


> I don't think it's worth doing all these tweaks.

I feel like I can understand this point of view, since following all of that advice would indeed be cumbersome. However, at the same time you definitely have to consider what it is that you're running on your infrastructure. A small internal system, or even an ERP that's not exposed to the internet, will probably give you more leeway in regards to shipping stuff now without spending bunches of time locking everything down, especially if you build all of the containers yourself. On the other hand, a large finance application that is publicly accessible and needs to weather thousands of attacks daily will probably need a rather different approach.

Overall, however, I'd say that it's good to have lists of tips like these, because figuring all of it out alone would take a whole lot of time. That said, even in the more relaxed environments, it's generally a good idea to consider at least some of them, for example:

> Unless you are very confident with what you are doing, never expose the UNIX socket that Docker is listening to: /var/run/docker.sock

Being an early adopter of Docker, I once made this mistake on a throwaway VPS. It took less than 24 hours for it to be mining crypto. That said, the socket can be a good option for tools like Portainer (which implementations like Podman miss out on), yet it definitely should never be exposed publicly.
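As a concrete illustration of why the socket is dangerous: anything that can talk to docker.sock can start a fully privileged container. A sketch (don't run this on a box you care about):

```shell
# A container with the Docker socket mounted can ask the host's daemon
# to start a new container with the host's root filesystem bind-mounted,
# then read anything on the host -- root-equivalent access.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli \
  docker run --rm -v /:/host alpine cat /host/etc/shadow
```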

As always, security isn't a boolean of on/off, but rather is a sliding scale of sorts - figure out the risks that you're likely to be facing and choose the appropriate means to combat them. Of course, it would be better if Docker provided safer defaults, too.


I'd say it depends very much on your threat model. Would I trust a Docker container with a hostile multi-tenant setup without additional controls.... no.

Is the isolation provided by Docker worth nothing, from a security perspective... also no :)

Hardening containers is a good element of an overall security strategy. It needs to be combined with other controls, both preventative controls at things like the network layer, and detective controls to spot when a preventative control fails and allow for rapid mitigation.


Sure, but prioritization is important here too. I would rather work on detective controls first and identify the large attack surfaces / weakest links, and only much later look at system hardening. (But to clarify I don't consider things like patch management to be system hardening)


Just yesterday I was thinking hmm, copying the .env seems like a shitty way to store env vars.


> Just yesterday I was thinking hmm, copying the .env seems like a shitty way to store env vars.

It really depends on the context. Docker already supports defining env vars in container images, so it makes no sense to sneak a .env file into a container image. If all you're doing is setting env vars locally to run a container, and those env vars don't include secrets, then it's pretty safe. However, it would be preferable if those env vars were handled by the container orchestration system. For instance, Docker Compose files also support specifying env variables, as do Kubernetes manifests.
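A hedged sketch of the "let the orchestrator handle it" option in a Compose file (service name and values are made up):

```yaml
services:
  web:
    image: example/web:latest
    # Read at container start; the values never get baked into an
    # image layer the way a COPYed .env file would.
    env_file:
      - .env
    environment:
      LOG_LEVEL: info   # non-secret settings can be inlined directly
```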


It’s widely accepted “best practice” to not run processes in Docker containers as root, but I think it does a disservice to not explain why. The top benefit in my mind is being able to mark interpreted code (server-side JS/Python/Ruby) as read-only in case someone gains shell access in the container. It’s great to set up another user for execution, but you should also make sure to copy in code and build artifacts as read-only. I’m curious if people can name other direct benefits of a non-root user besides “I don’t trust Docker/Linux process isolation”.
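A minimal Dockerfile sketch of that pattern (base image, paths, and UID are illustrative): files added with COPY are owned by root by default, so an unprivileged runtime user can read and execute the code but not rewrite it.

```dockerfile
FROM python:3.12-slim
# Arbitrary unprivileged user for running the app
RUN useradd --create-home --uid 10001 app
WORKDIR /srv/app
# Copied files stay owned by root, so "app" can read but not modify them
COPY . .
USER app
CMD ["python", "main.py"]
```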


If you want to set the filesystem as read-only in a Docker container, you can do that without needing to set up multiple users in the container.

In general that's a good piece of hardening advice; you just need to mount an empty volume for any temp files the app needs.

For not running as root, the main benefit (in my view) is that there have been multiple CVEs in container stacks where the issue was fully or partially mitigated if the container was running as non-UID-0.

CVE-2021-30465 was partially mitigated, and CVE-2020-15257 required the container to be running as UID-0.

So whilst it's not a panacea, in general not running containers as root is a good layer of defence.
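Combining the read-only filesystem and non-root mitigations at run time might look like this sketch (image name and UID are placeholders):

```shell
# --read-only makes the container's root filesystem immutable;
# --tmpfs gives the app a writable scratch dir for temp files;
# --user runs the process as an arbitrary non-root UID:GID.
docker run --rm --read-only --tmpfs /tmp --user 10001:10001 \
  example/web:latest
```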


This seems to mitigate the specific threat where I gain non-root shell access to the container, but there's a root process running interpreted code, which I can modify.

However, if the container doesn't contain any processes running as root, there doesn't seem to be any benefit (besides defense in depth) to marking the code as read-only.



