
> not needing to worry about each `RUN` step creating a layer which may bloat the image

Could someone please explain to me why exactly people avoid layers and treat them as "bloat"?

I always thought that layers are nice to have: you only need to rebuild those that have changed (typically the last ones that handle the application, while environment layers remain the same), and pulling image updates is handled much better because only the changed layers are pulled.

How is this "bloat"? Isn't that the opposite? Pushing images containing 80% of the same stuff feels more like bloat to me.

Am I missing something here?




It depends. There is some implicit "bloat" because setting up 100 layers and accessing files through them isn't free (though caching works quite well). However, the biggest problem with layers is that a later layer can never actually delete data from an earlier one. So doing something like `RUN apt-get install foo`, `RUN foo generate-things`, `RUN apt-get uninstall foo` will effectively still ship `foo` in the image.

It definitely depends on the use case. In many cases `RUN foo`, `RUN bar`, `RUN baz` is fine. But if you ever create temporary data in an image, the layer system will keep it around. This is why you often see things like `RUN apt-get update && apt-get install foo && rm -r /var/lib/apt`. You squeeze it into a single layer so that the deletion of the temp files actually avoids image bloat.
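A minimal sketch of that pattern (base image and package picked arbitrarily for illustration), assuming a Debian-based image:

    FROM debian:bookworm-slim
    # One RUN, one layer: the apt lists are deleted before the layer is
    # committed, so they never contribute to the image size.
    RUN apt-get update \
     && apt-get install -y --no-install-recommends curl \
     && rm -rf /var/lib/apt/lists/*
    # If these commands were split into separate RUN steps, the lists would
    # still sit in an earlier layer and the rm would only hide them.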


It is possible with build stages and COPY --from, but yeah, not super trivial.


Definitely not trivial, but staged builds are my go-to solution. Depending on the specifics of the tech you're including, it can be a lot easier than figuring out how to clean up every little build artifact within a layer - just add a second FROM line and copy precisely the pieces you need into the final image, and nothing else.

I also think it makes the build stage a lot easier to follow for people who aren't as familiar with Dockerfiles and all the quirks that come with optimizing a final image.
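Roughly what that looks like, sketched with a hypothetical Go program (image tags, paths and the build command are assumptions for illustration):

    # builder stage: the whole toolchain lives here, none of its layers ship
    FROM golang:1.22 AS builder
    WORKDIR /src
    COPY . .
    RUN go build -o /out/app .

    # final stage: only what is explicitly copied in contributes to the image
    FROM debian:bookworm-slim
    COPY --from=builder /out/app /usr/local/bin/app
    ENTRYPOINT ["/usr/local/bin/app"]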


Exactly. You need to remember to do it and restructure your build at least slightly to do so. It isn't hard, but it's non-default and annoying.


Depends very much on the specifics of what the RUN steps are doing and their order. One issue is that just changing files will often create a layer with another copy of those files with the changed attributes (e.g. chmod), or possibly a layer with an empty directory for files that are deleted. That means you have very similar content in two separate layers, which creates bloat.

The COPY command now supports setting file permissions (--chmod) at the same time, which helps with this. Another common trick is a single layer that runs "apt update", installs the software, and then deletes the contents of /var/lib/apt/lists/ so the layer doesn't carry unnecessary apt files.

When developing scripts to run inside Docker, I'll often copy the script as late as possible in the Dockerfile so that the preceding layers can be reused and only a small extra layer is needed when the script changes.
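Putting those two ideas together in a rough sketch (base image and file names are hypothetical; --chmod needs BuildKit):

    FROM python:3.12-slim
    # Dependencies change rarely, so these layers are normally served from cache.
    COPY requirements.txt /tmp/requirements.txt
    RUN pip install --no-cache-dir -r /tmp/requirements.txt
    # The script changes often; copying it last means only this small layer
    # is rebuilt and re-pulled. --chmod sets permissions in the same step,
    # avoiding an extra layer that would duplicate the file just to chmod it.
    COPY --chmod=0755 run.sh /usr/local/bin/run.sh
    CMD ["run.sh"]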


I tend to agree, but I think the angle they're going after is the mental load of ensuring consistency in those layers.

A very simple example of this is installing packages and clearing the generated metadata all in one chain of a single RUN.

It gets more complicated when you look at it from the 'reproducible builds' POV: subtle binary changes creep in from things like dates/timestamps.


Layers don't just rebuild the ones that changed; every layer defined after the changed one in the Dockerfile gets rebuilt too.


Because cargo cult.

Apparently it's better value to waste human time trying to debug a failed docker build with 200 commands strung together with && vs letting your runtime just mount and flatten extra layers.


I suspect folks are doing what they naturally do, whether it's playing Factorio or playing Docker... optimize.




