On the same topic, PyPI has recently moved to a new backend, and in the process all end-to-end PGP signatures (created by the package owner upstream, proving that no tampering happened on the online servers) have disappeared from the UI, and that is seen as a "feature":
You can still get them through some obscure API and you still need to know the right PGP key for verification, but this really signals the lack of consensus and awareness on the path toward a secure software supply chain.
That is discussed extensively in the issues related to the OP. The problem is that distro package maintainers actually check whether the GPG signature has changed in order to repackage Python projects for their distros.
It's not particularly obvious: you find a release file for which `has_sig` is true, then take that file's URL and append `.asc` to it.
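That lookup can be sketched in a few lines, assuming the shape of PyPI's legacy JSON API (`https://pypi.org/pypi/<name>/json`), where each file entry carries a `has_sig` flag (the truncated URLs below are placeholders for illustration):

```python
# Sketch: locate the detached PGP signature for a PyPI release file.
# The signature, when it exists, lives at the file URL with `.asc` appended.

def signature_urls(release_files):
    """Return signature URLs for every file that claims to be signed."""
    return [f["url"] + ".asc" for f in release_files if f.get("has_sig")]

# Example with the relevant subset of a JSON API response:
files = [
    {"url": "https://files.pythonhosted.org/.../requests-2.19.1.tar.gz",
     "has_sig": True},
    {"url": "https://files.pythonhosted.org/.../requests-2.19.1-py2.py3-none-any.whl",
     "has_sig": False},
]
print(signature_urls(files))
# → ['https://files.pythonhosted.org/.../requests-2.19.1.tar.gz.asc']
```

Even with the `.asc` in hand, you still need to know out-of-band which PGP key is supposed to have made the signature.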
They've also repeatedly broken having predictable tarball download URLs, which makes it harder still to make Python packages for distros, and dismissed it as by design. Package managers shouldn't have to implement Python-specific API adapters just to find tarballs and signatures. The Warehouse team seems more concerned with a pretty UI than a working platform.
I’m not sure what the implementation status is, but PEP 458 and PEP 480 define how to integrate TUF with PyPI. It could be that PGP is being de-emphasized in favor of TUF?
(A quick search couldn’t tell me the integration status or if there are still plans to do so, but I’m familiar with the pypi plans from the TUF side)
I wonder how much malicious code like this is doing its work deep down in the endless pyramid of npm dependencies.
And how much as-of-now clean code will turn into malicious code when bad guys take over npm repos in the future.
It might be possible to tackle this issue with some intelligent trust algorithm that combines a trust rank, similar to Google's PageRank, with signed messages.
Say somebody pushes an update to their repo. Now the first user of it might read it and sign it with 'Looks OK /Joe'. And the next user sees the signed message by Joe in some kind of package-review-message list. Based on all the reviews and the trust of the reviewers, they then can calculate a trust score for the update.
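The scoring step of that scheme could be sketched as follows; this is purely hypothetical, with made-up names and numbers, and models a review as a signed verdict (+1 "looks OK", -1 "looks bad") weighted by the reviewer's own trust rank:

```python
# Hypothetical trust score for a package update: a weighted average of
# reviewer verdicts, where the weight is each reviewer's trust rank.

def trust_score(reviews, reviewer_trust):
    """Weighted average verdict in [-1, 1]; 0 if nobody we trust reviewed it."""
    weighted = sum(verdict * reviewer_trust.get(who, 0.0)
                   for who, verdict in reviews)
    total = sum(reviewer_trust.get(who, 0.0) for who, _ in reviews)
    return weighted / total if total else 0.0

reviewer_trust = {"joe": 0.9, "newcomer": 0.1}   # made-up trust ranks
reviews = [("joe", +1), ("newcomer", -1)]        # signed verdicts
print(trust_score(reviews, reviewer_trust))      # → 0.8
```

In a real system the trust ranks themselves would be derived recursively, PageRank-style, rather than assigned by hand.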
For many images, no Dockerfile is provided.
And even if there is, it's not clear that it has been used to produce the software.
I tried many times to reproduce images from Dockerfiles and failed:
What `apt-get update` does depends on the current time and your network configuration.
You can never be sure what you get from Docker Hub.
My strategy has been to copy the Dockerfiles and run `docker build` myself, instead of using the image files.
Alas, it does not work very well.
So it should generally be possible to go from image --> Dockerfile, as the information is included in the manifest. If you save an image as a .tar.gz you can extract the info from there, or there are also tools to reverse-engineer Dockerfiles from images.
That is a good point about the lack of reproducibility though. I suppose an interesting attack would be to deliberately forge information in the manifest before pushing to Docker hub...
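The extraction from a saved tarball can be sketched like this, assuming the usual layout where `manifest.json` names a config JSON whose `history` array records a `created_by` shell fragment per layer (the sample entries below are made up):

```python
# Sketch: recover a rough build recipe from an image's config JSON
# (the file named in manifest.json inside a `docker save` tarball).
# Note this is cache metadata, not an authoritative Dockerfile.

def recover_commands(image_config):
    cmds = []
    for entry in image_config.get("history", []):
        created_by = entry.get("created_by", "")
        # The builder wraps metadata-only steps in '/bin/sh -c #(nop) '
        # and real build steps in '/bin/sh -c '.
        cmds.append(created_by.replace("/bin/sh -c #(nop) ", "")
                              .replace("/bin/sh -c ", "RUN "))
    return cmds

config = {"history": [
    {"created_by": "/bin/sh -c #(nop) ADD file:abc in /"},
    {"created_by": "/bin/sh -c apt-get update"},
]}
print(recover_commands(config))
# → ['ADD file:abc in /', 'RUN apt-get update']
```

As noted below, a recovered `RUN apt-get update` tells you almost nothing about which package versions actually ended up in the layer.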
I also tend to use anything other than the official images as "inspiration" and re-implement myself.
> I suppose an interesting attack would be to deliberately forge information in the manifest before pushing to Docker hub...
"Forge" is a bit of a strong word. The main reason Docker included build information in the image history was so that build caching would work (from memory). It's not meant to be an authoritative source of information about how an image was built at all (not to mention that "RUN apt-get update" tells you almost nothing about what packages were installed, when they were updated, etc).
Personally I think that the current trend of using Dockerfiles is completely broken in terms of reproducibility and introspectability (I'm working on a blog post to better explain the issue).
Sure, but so could the OS or app distributors. You need to establish a baseline of trust somewhere, and this will likely be on the official (or your cherry picked official) images, and you build from there.
The official images are made by the OS project themselves, so if you can get malware in there, you can just get your malware into debian/CentOS/alpine and not even bother with Docker hub :)
You can use the `docker trust` command to verify which key (if any) was used to sign a given Docker image. The `docker history` command will also give you a list of each of the layers as well as the command which was used to create each layer.
It's trivial to insert software into the build pipeline without being noticed.
Run your own devpi server, build a compromised version of any dependency you want, and you can install whatever you would like, with no sign of it in the Dockerfile.
Between pip, npm and DockerHub, we seem to be inviting all kinds of lurking security holes. What if the code was used in an embedded system - probably never to be updated again.
You get this type of security inherently via most os-level packagers which check upstream tarballs especially the ports-like ones which are primarily designed for from-source usage.
Unfortunately there has been a trend to ignore these and install directly of late, especially for dynamic language modules.
But they have many developers. And one developer could insert code which works and passes tests, but in the background could one day start a download and begin Monero mining.
The point is: you have to trust the developers at these companies not to do anything bad, and trust that the companies check every line of code from every dev. Trust.
npm packages are interesting in that they run both in-browser and in-server -- it stands to reason that the in-browser attack surface is far larger and its impact would be rarely noticed by users (since they're already accustomed to websites taking up inordinate amount of resources.)
Somewhat related, since this is about Docker security: I started looking at Traefik today. It's a reverse proxy that runs as a Docker container and automagically configures itself to expose your other services (that are also running in Docker containers).
Neat idea. However, to accomplish this you have to mount the docker socket into Traefik's container...
Which means that when a bug shows up in Traefik attackers can pivot out of the container and onto the host; access to the docker socket is equivalent to root on the host.
And of course Traefik is the thing you're exposing directly to the internet.
It's like giving the guards outside manning your castle's gate the skeleton key to the rest of the castle.
Of course, Traefik is quickly becoming popular because of its simplicity. But to achieve this simplicity it carves a giant hole in the security of your application.
Two things... one: it's about time we had an official permission system for the Docker API, so I can grant "inspect running containers" and nothing else and sleep at night. Two: it should be possible to run Traefik in a pod as two containers, one talking to the APIs and tweaking the runtime config, the other serving public traffic (jwilder/nginx-proxy can do this!). It's called privsep, and OpenBSD has been doing it since forever: check their remote hole count, it's still two in a lifetime.
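A hypothetical privilege-separated layout along those lines, sketched as a Compose-style config (image and volume names are made up): only the config watcher touches the Docker socket, read-only, while the internet-facing proxy never sees it.

```yaml
# Hypothetical two-container privsep split; names are illustrative only.
services:
  proxy:
    image: nginx                 # public-facing; no socket access at all
    ports: ["80:80", "443:443"]
    volumes:
      - proxy-conf:/etc/nginx/conf.d:ro
  config-watcher:
    image: example/config-watcher  # made-up image: watches containers,
    volumes:                       # writes config, signals the proxy
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - proxy-conf:/etc/nginx/conf.d
volumes:
  proxy-conf:
```

Compromising the proxy then yields no path to the socket; an attacker would also have to break the watcher, which exposes nothing to the network.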
jwilder/nginx-proxy has been instructing users to only grant read-only access to the docker daemon for as long as I've been using it, so I know this is at least possible. It's not fine-grained, but it is read-only access. https://github.com/jwilder/nginx-proxy
Those are definitely good points. I'm curious about the part at the end: "An attacker with ro access to the socket can still create another container..." How is that possible with readonly socket access?
The integration of Traefik with the Docker daemon should mainly just be used while developing (imho).
Once you get to acceptance / production environments, you are very unlikely to run plain docker containers, if you use kubernetes you interface Traefik with the kubernetes api itself, and the service account you create for Traefik can be (and should be) completely read only.
Same for Docker Swarm, Marathon, Consul and AWS ECS.
So no, Traefik is not the big security problem you make it out to be.
Sorry to be so harsh, but Traefik is one of the most amazing pieces of software I have come across in recent years, and it has seriously made my life much easier.
If software has an insecure mode "just for development" that absolutely shouldn't be used in production, you can be certain that a large fraction of developers will use that in production nevertheless.
Security today doesn't mean that you are safe if you do everything according to best practices and follow the docs. Modern Security includes making sure that default settings are safe, and that it should be impossible or hard to set up the software in an insecure manner.
If you make it easy to shoot yourself in the foot, that's what people will do.
A couple years ago, I wrote a container that periodically polls the Docker Socket to check for currently running containers with exposed ports and a special label applied.
It then iterates over those containers and writes a nginx.conf file to a shared volume, then sends a SIGHUP signal to another container running nginx as a reverse proxy to the containers.
The "polling job" container doesn't expose any ports and is not reachable from the outside world and the only input into this program is reading data from the Docker Socket.
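The config-generation step described above might look roughly like this; a minimal sketch assuming containers carry a made-up `proxy.host` label, with input in the shape of the Docker API's `GET /containers/json` response:

```python
# Sketch: turn the Docker API container list into nginx server blocks.
# The `proxy.host` label is an invented convention for this example.

def render_nginx_conf(containers):
    blocks = []
    for c in containers:
        host = c.get("Labels", {}).get("proxy.host")
        ports = [p for p in c.get("Ports", []) if "PublicPort" in p]
        if not host or not ports:
            continue  # only proxy labeled containers with an exposed port
        blocks.append(
            "server {\n"
            f"    server_name {host};\n"
            f"    location / {{ proxy_pass http://127.0.0.1:{ports[0]['PublicPort']}; }}\n"
            "}\n")
    return "\n".join(blocks)

containers = [{"Labels": {"proxy.host": "app.example.com"},
               "Ports": [{"PublicPort": 8080}]}]
print(render_nginx_conf(containers))
```

The rendered file would then be written to the shared volume before the SIGHUP is sent to the nginx container.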
Do you think this is still vulnerable to attacks like Traefik is or does this 2-container routing protect against the attacks you're thinking of?
That this was raised over a year ago(!) is really interesting. It seems like many of the downloads may have been malicious - the author of the malicious images was scanning for open Docker API ports and then installing their own images to mine cryptocurrency.
So they're essentially using docker as a dropper. Clever, in a way.
I'm scratching my head at where the /mnt mount is coming from. If you're doing "docker run -v /:/mnt <sketchy_username>/mysql" then absolutely nobody can help you.
From Kromtech's article I deduced that this only happens when a docker daemon (or kubernetes interface) is exposed to the Internet and an attacker uses that to download and start a docker image on the victim's host. Then they can bind mount a host directory like described and attack the host computer.
It should be noted that some of the reports talk about the Docker API being publicly accessible over the internet which allowed people to run containers on their machines. This is actually not the worst thing that could have happened -- having access to the Docker API gives you root access on that machine without any authentication!
(One of the ideas of rootless containers is to remove the possibility of any privileged codepath, which helps eliminate this issue.)
I don't think that's even possible. Docker doesn't let you expose the daemon over HTTP without configuring certs. I had to write an ansible script to do that, and even then I locked down my Docker port to my VPN subnet:
will happily start Docker with it listening on my IP address without TLS. It will print an all-caps warning, but nothing else (you don't even need to pass a --give-the-internet-root-access flag). However, I just submitted a PR which adds the --give-the-internet-root-access flag[1] because it's pretty obvious to me that very few users do this intentionally (and with full knowledge of the consequences).
I don't understand why people use other people's Docker images. Unless it comes from an official repository for the tool you're using, it's better to look at the source code/Dockerfile in the github link and just roll your own.
A lot of times you're just installing the package you want with apt-get within your Dockerfile anyway; a package you can no longer check for normal updates, since it's in a container. So now you need a tooling system around making sure the packages in your containers don't have security issues.
it's immutable infrastructure at its heart, so yeah, you don't do updates on containers... what you do need is periodic rebuilds of your images for upgrades, and each new image needs to run all integration and system tests again.
it just makes this process easier than it is without Docker. But it doesn't alleviate you of writing the system that actually keeps everything updated in an automated way.
It also doesn't help you deploy unless you're already experienced with Docker.
and while we're on the topic... no, if you know how to execute 'docker run -it --rm ubuntu bash' you still don't know shit about it. sigh, sorry, i'm just remembering someone from work today...
I wonder which ones have the best vetting. And if it is adequate.
I also wonder about other packaging systems. CPAN, pip (PyPI?), AUR and so on. It doesn't surprise me to see this happen. I wonder what other surprises might be in any of these packages.
FWIW, I'm running mostly Debian and some Ubuntu. I always prefer to install packages via the package manager rather than directly from some tool specific repository because I'll get automatic updates and some level of testing/vetting.
>By the time Docker Hub removed the images, they had received 5 million “pulls.” A wallet address included in many of the submissions showed it had mined almost 545 Monero digital coins, worth almost $90,000.
This seems incorrect because it's impossible to see wallet balances on the Monero network. So I'm assuming they just came up with the numbers based on some rough calculations.
If you go to the pool website with the address specified on the botnet you can see how much it was mined. The main article [1] linked on the news said:
> The actor has been able to mine about 630 XMR to date, which at the current USD rate is more than $172,000 for just a little more than one year of activity.
The worst part here is definitely the timeline. NPM is often criticized about security, perhaps rightfully so, but at least the issues are handled promptly after raised publicly.
There is no "perhaps" about npm, it is absolutely a shit show. It's like the creators went out of their way to build an ecosystem of security nightmares.
While this will be anecdotal I found my `pip freeze` packages a lot more manageable compared to `npm ls --parseable`.
My full-stack Flask application sits at about 64 requirements, whereas the last time I used expressjs the nesting of node_modules inside node_modules bordered on insanity.
I can see myself hand picking and reviewing my requirements.txt, which I did to some extent.
But I just gave up with npm.
Which is a bit of a personal dilemma for me, because some of my tooling needs npm.
So many packages and authors creates an impossibly large surface area to review and secure. It is my impression that most people don't even try, so the issue falls on deaf ears. But for a PCI-compliant piece of software, for example, you would have had to review every module in node_modules. As a stack that makes it a non-starter.
I've been doing Python for over five years now. I recently installed a small command line tool using npm. I was completely stunned by the number of dependencies. At first I thought it must be something else, like maybe it's running tests? But no, hundreds of dependencies. If it were written in Python it probably wouldn't even have one.
You’re not OP, but you’ve moved the goalposts from security to number of dependencies. I think GP was focusing on how other package managers differ from npm when it comes to security, except maybe apt and pacman.
And node_modules is now a flat tree for the most part.
The number of dependencies a basic JavaScript project pulls is definitely something to be concerned about though.
Because I need 90,000 stupid packages on npm, which can be disabled and removed at any moment, vs. Python, which mostly requires 20-100 even if you are really making a complex system.
Also the versioning on npm is incredibly pathetic vs. PyPI. I would never trust npm for anything serious or waste my time debugging that crap.
Not a webdev, but from what I've seen webdevs are forced to compete, not in what they can do, but in terms of how they can do it (frameworks, packages). So, an aspect of your work process becomes a proxy for the quality of your work. In a context of permanent competition, this forces an acritical and rapid adoption of frameworks that perhaps only marginally improve some aspects of some other frameworks. Then compound this with the inflow of bootcamp-trained webdevs with poor practices whose employability relies on having been recently trained on the latest hyped framework.
FWIW that headline isn't great. Docker hub pulls in no way correlate to innocent users pulling/using those images. It could be (and this is quite likely) just other malware which made use of those images and just used Docker hub as a repository.
There are official images for the software in question and I don't think it's that likely that that many people ignored the official ones and got these ones.
“For ordinary users, just pulling a Docker image from Docker Hub is like pulling arbitrary binary data from somewhere, executing it, and hoping for the best without really knowing what’s in it,”
This is basically what you do every time you install something (except when it's via a walled garden like an 'app store'). Besides, I'm not sure I would even classify mining for someone else as 'malicious'. It hogs your CPU a little, but if that's malicious then visual studio should be considered malicious as well.
Maybe it's not malicious in the strictest sense of the term, but it's also not the same as your Visual Studio example. In your case Visual Studio is a productivity tool that brings you value, and thus you chose to install it. In the case of this Docker image, it's not adding any value for you and was installed without your consent.
The JS example another commenter made is more apt; however, the argument there is that you still requested the site and its content (even if you didn't really want it). Whereas many of the installs of this "dockerised" miner were done remotely via exposed Docker APIs. That, I think, is the real crux of the potential "malice" (for want of a better description) here.
Most roles and Docker Hub images are pretty simple, and you should be evaluating them anyways before using them. If you’re concerned about the security but want to save the time in building and debugging, fork it, and maintain your fork, only pulling in changes from the upstream when you have time to vet them.
Most of these things are plain text instruction files - yaml for ansible, docker's own thing for docker. It falls under the same category as random bash install scripts: download the text file, read it, use it, if it's safe.
Yes read it, use it but the next step can't be update it because then you'd have to read it again. I just don't have the time to audit someone elses yaml constantly.
This is not a backdoor. I myself have a miner on Docker Hub. The image can be used by anyone with the correct env vars set. Should my image be removed if it's used by other users, no matter what their intentions are?
If your image is called monero-miner and a bunch of people download it, of course it's not going to be considered malicious code.
If your image is called apache-webserver and a bunch of people download it and you've stealth bundled a monero miner, of course it's going to be considered malicious code.
EDIT: even worse than that, the images are actually backdoored: they open up a reverse shell that allows the remote attacker to execute arbitrary commands.
https://github.com/pypa/warehouse/issues/3356