I'm a long-time user of apt-cacher-ng, but reading this reminds me of some of the pain points I regularly experience. Maybe others have some thoughts.
It would be nice if my Docker image builds, which may include apt-get steps, could benefit from the cache. I know Docker build will cache layers itself, but this doesn't check the upstream for fresher packages in the same way that could be done with HTTP caching. I know I could simply set the Acquire::http::Proxy in the Dockerfile, but then I've mixed local infrastructure concerns into a Dockerfile that should be generically usable by anyone, anywhere. It would be great if there were some way to inject these site-specific configurations into the image without tampering with the Dockerfile. This could be tricky, since the base image of any random Docker image isn't even guaranteed to be Debian. (Although I could imagine a very generic Bourne shell script that consumes /etc/os-release, if present, and performs any distro-specific customization.) This would also solve the similar problem of needing to inject site-specific trusted enterprise CA certificates into images.
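To make that concrete, here's the kind of generic hook script I'm imagining. This is purely a sketch of the idea, not something apt-cacher-ng or Docker provides: the APT_CACHE variable, the file paths, and the non-Debian branch are all assumptions on my part.

```
#!/bin/sh
# Hypothetical site hook: injected into the build by local tooling
# (e.g. a bind mount or a wrapper image), never committed to the Dockerfile.
set -eu

[ -n "${APT_CACHE:-}" ] || exit 0     # no site cache configured, do nothing
[ -r /etc/os-release ] || exit 0      # unknown base image, leave it alone

. /etc/os-release
case "$ID" in
    debian|ubuntu)
        printf 'Acquire::http::Proxy "%s";\n' "$APT_CACHE" \
            > /etc/apt/apt.conf.d/01proxy
        ;;
    fedora|rhel|centos)
        # dnf reads proxy= from the [main] section of dnf.conf
        printf 'proxy=%s\n' "$APT_CACHE" >> /etc/dnf/dnf.conf
        ;;
    *)
        # other distros: fall through untouched
        ;;
esac
```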
Another pain point is the lack of HTTPS caching, which the author mentions. I'm not sure that dropping down to plain HTTP is the solution. I sometimes wonder if there could be a MitM proxy approach, where the cache presents a certificate for the remote hostname that is trusted by a CA certificate installed on the client. (In other words, something similar to what a Zscaler does to intercept HTTPS.)
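For what it's worth, the client side of that wouldn't be much more than trusting the site CA and pointing apt's HTTPS traffic at the interceptor. The hostname, port, and CA file below are made up, and apt-cacher-ng itself doesn't do TLS interception, so this assumes some ssl-bumping proxy sitting in front:

```
# Debian/Ubuntu client setup for a hypothetical TLS-intercepting cache
cp site-ca.crt /usr/local/share/ca-certificates/site-ca.crt
update-ca-certificates
cat > /etc/apt/apt.conf.d/02https-proxy <<'EOF'
Acquire::https::Proxy "http://intercepting-cache.internal:3128";
EOF
```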
For the proxy injection, a build arg keeps the Dockerfile usable anywhere and only writes the config when it's set:

```
FROM debian:bookworm
ARG APT_CACHE
# Only write the proxy config when the build arg is actually set,
# so the Dockerfile still builds unchanged outside your network.
RUN if [ ! -z "$APT_CACHE" ]; then \
        echo 'Acquire::http { Proxy "'$APT_CACHE'"; };' >> /etc/apt/apt.conf.d/01proxy; \
    fi && \
    ... rest of your commands go here ...
```
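With that in place, the proxy is opt-in per build. The hostname below is just an example; 3142 is apt-cacher-ng's default port:

```
# Inside the network: route apt traffic through the cache
docker build --build-arg APT_CACHE=http://apt-cache.internal:3142 -t myimage .
# Anywhere else: omit the arg and the image builds straight from the mirrors
docker build -t myimage .
```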
Adding apt-cacher-ng is also a good thing for CI/CD: if you add it to your build servers and point your Docker builds at it, you'll save bandwidth and build time.
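Server-side it's not much work either; on a Debian/Ubuntu build host something like this is all it takes (defaults assumed, tune /etc/apt-cacher-ng/acng.conf for cache size and allowed remotes):

```
# Install and start the cache; it listens on port 3142 by default
apt-get install -y apt-cacher-ng
systemctl enable --now apt-cacher-ng
```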