Hacker News

I've been using Nix for this. It is great for building an image that contains exactly the dependencies you need. Often that is just my app, glibc and maybe a few data files. However if you need more it is trivial to bundle it in. For example I have a CI image that contains bash, Nix itself, some shell scripts with their dependencies and a few other commands and files that GitLab CI expects to be available.
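For anyone unfamiliar with the approach, a minimal sketch using nixpkgs' `dockerTools` looks something like this (`pkgs.hello` stands in for your own app's derivation):

```nix
# Build an OCI image containing just the app and its runtime closure.
{ pkgs ? import <nixpkgs> { } }:

pkgs.dockerTools.buildLayeredImage {
  name = "myapp";   # placeholder name
  tag = "latest";
  # Nix walks the closure of this command, so glibc and any other
  # runtime dependencies are pulled in automatically -- nothing else.
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

The resulting tarball can be loaded with `docker load` or pushed directly to a registry.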

I absolutely love the declarative nature and not needing to worry about each `RUN` step creating a layer which may bloat the image. For example my build process validates my inline SQL queries against the database schema. It was refreshingly simple to spin up a Postgres instance inside of the build step, apply migrations with a separate CLI tool, and then start the build, without any of these deps ending up in the final image.

The only real downside is that Nix doesn't have great support for incremental builds, so for my Rust app building the optimized build from scratch can be slow even if you only change a comment in a source file. But most Docker builds don't do incremental compilation either (or if they do it is often buggy, which I see as worse). Bazel does help here, which is a notable advantage, traded off against the ability to pull in other programs from nixpkgs.




> not needing to worry about each `RUN` step creating a layer which may bloat the image

Could someone please explain to me, why exactly do people avoid layers and treat them as "bloat"?

I always thought that layers are nice to have: you only need to rebuild those that had changed (typically the last ones that handle the application, while environment layers remain the same) and pulling image updates is handled much better due to only changed layers being pulled.

How is this "bloat"? Isn't that the opposite? Pushing images containing 80% of the same stuff feels more like a bloat to me.

Am I missing something here?


It depends. There is some implicit "bloat" because setting up 100 layers and accessing files through them isn't free (but caching works quite well). However the biggest problem with layers is that you can never delete data. So doing something like `RUN apt-get install foo` `RUN foo generate-things` `RUN apt-get uninstall foo` will effectively still have `foo` in the image.

It definitely depends on the use case. In many cases `RUN foo`, `RUN bar`, `RUN baz` is fine. But if you are ever creating temporary data in an image, the layer system will keep it around. This is why you often see things like `RUN apt-get update && apt-get install foo && rm -r /var/lib/apt`. You squeeze it into a single layer so that the deletion of the temp files actually avoids image bloat.
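For illustration, the difference looks something like this (a sketch assuming a Debian-based image and a Makefile-driven build; the package names are arbitrary):

```dockerfile
# Bloated: each RUN is its own layer, so the apt cache and the
# purged toolchain still live on in the earlier layers.
RUN apt-get update
RUN apt-get install -y build-essential
RUN make install
RUN apt-get purge -y build-essential

# Leaner: create and clean up the temporary data within one layer,
# so none of it is ever committed to the image.
RUN apt-get update \
    && apt-get install -y build-essential \
    && make install \
    && apt-get purge -y build-essential \
    && rm -rf /var/lib/apt/lists/*
```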


It is possible with multi-stage builds and `COPY --from`, but yeah, not super trivial.


Definitely not trivial, but staged builds are my go-to solution. Depending on the specifics of the tech you're including it can be a lot easier than figuring out how to clean up every little build artifact within a layer - just add a second FROM line and copy precisely the pieces you need in the final image, and nothing else.

I also think it makes the build stage a lot easier to follow for people who aren't as familiar with Dockerfiles and all the quirks that come with optimizing a final image.
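A sketch of the pattern, assuming a Rust app (`myapp`, the image tags, and paths are placeholders):

```dockerfile
# Stage 1: build with the full toolchain.
FROM rust:1.75 AS builder
WORKDIR /src
COPY . .
RUN cargo build --release

# Stage 2: start from a fresh base and copy in only the binary.
# None of the builder stage's layers end up in the final image.
FROM debian:bookworm-slim
COPY --from=builder /src/target/release/myapp /usr/local/bin/myapp
CMD ["myapp"]
```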


Exactly. You need to remember to do it, and restructure your build at least slightly to do so. It isn't hard, but it is non-default and annoying.


Depends very much on the specifics of what the RUN steps are doing and their order. One issue is that just changing a file's attributes (e.g. chmod) will often create a layer containing another full copy of that file, and deleting files adds whiteout entries rather than reclaiming space. That means you have very similar content in two separate layers, which creates bloat.

The COPY command now supports `--chmod` to set permissions in the same step, which helps with this issue. Another common trick is to have a single layer that performs an "apt update", installs software, and then deletes the contents of /var/lib/apt/lists/ so that the layer doesn't carry unnecessary apt files.

When developing scripts for running inside Docker, I'll often try to have the copying of the script as late as possible in the Dockerfile so that the preceding layers can be reused and just a small extra layer is needed for the script changes.
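Both tricks together look roughly like this (`run.sh` is a placeholder; `COPY --chmod` needs BuildKit, which is the default builder in current Docker):

```dockerfile
FROM debian:bookworm-slim

# Heavy, rarely-changing layers first, so they stay cached.
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# COPY --chmod sets permissions in the same layer, avoiding a second
# copy of the file from a separate RUN chmod step. Copying the
# frequently-edited script last means only this small final layer
# is rebuilt when the script changes.
COPY --chmod=0755 run.sh /usr/local/bin/run.sh
CMD ["run.sh"]
```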


I tend to agree, but I think the angle they're getting at is the mental load of ensuring consistency across those layers.

A very simple example of this is installing packages and clearing the generated metadata all in one chain of a single RUN.

It gets more complicated when you look at it from the 'reproducible builds' POV: subtle binary changes creep in from things like dates and timestamps.


Only the layers that changed get rebuilt, but so does every layer defined after them in the Dockerfile.


Because cargo cult.

Apparently it's better value to waste human time debugging a failed docker build with 200 commands strung together with && than to let your runtime just mount and flatten extra layers.


I suspect folks are doing what they naturally do whether it's playing factorio or playing docker... optimize


I built a service for doing this ad-hoc via image names a few years ago and it enjoys some popularity with CI & debugging use-cases: https://nixery.dev/


I've definitely used this in CI before. It is useful for base images for docker-based CI.


I put together an example that mixes Nix and Bazel a couple of years ago: https://github.com/jvolkman/bazel-nix-example

Nix is used to build a base Docker image, and Bazel builds layers on top.


I've been doing the same with Guix. However, more so lately with declarative, lightweight VMs. It's nice to be able to painlessly make throwaway environments that I can easily log into via SSH.


Do you have an example or an article demonstrating this? I just recently had the desire to build systemd-nspawn images declaratively, but couldn't find much other than Dockerfiles.


Sure. Putting a simple binary in a container: https://gitlab.com/kevincox/tiles/-/blob/a2b907eab7a84989c94.... This is the trivial case where you just stick the main executable in the command string. Nix will automatically include the dependencies.

The GitLab CI example is a bit more complex. It requires some commands that are unused by the image and some config files: https://gitlab.com/kevincox/nix-ci/-/blob/efe6f4deedc50c2474...



To get Rust incremental builds, did you consider using something such as crane https://github.com/ipetkov/crane ?

And regarding OCI images, I built nix2container (https://github.com/nlewo/nix2container) to speed up image build and push times.
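For context, crane's trick is splitting dependency compilation into its own derivation. A rough sketch based on crane's documented pattern (`crane` and `pkgs` are assumed to come from your flake inputs):

```nix
let
  craneLib = crane.mkLib pkgs;
  src = craneLib.cleanCargoSource ./.;

  # Build (and cache) all crate dependencies as a separate
  # derivation, so editing your own source doesn't trigger a
  # from-scratch rebuild of every dependency.
  cargoArtifacts = craneLib.buildDepsOnly { inherit src; };
in
craneLib.buildPackage {
  inherit src cargoArtifacts;
}
```

This isn't true per-file incremental compilation, but it removes the worst of the rebuild cost.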


Someone is working on consuming nix packages inside Bazel.




