Great article, wish I had something like that 3 years ago.
Adding my personal tips:
- Do not use GitLab-specific caching features unless you love vendor lock-in. Instead, use multi-stage Docker builds. This way you can also run your pipeline locally, and all your GitLab jobs will consist of "docker build ..." (see the sketch after this list).
- Upvote https://gitlab.com/gitlab-org/gitlab-runner/-/issues/2797 . Testing GitLab pipelines should not be such a PITA.
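To make that concrete, here is a minimal sketch, assuming a hypothetical multi-stage Dockerfile with "test" and "release" stages; the image name, stage names, and .gitlab-ci.yml fragment are placeholders, not taken from the article:

    # Assumed Dockerfile layout (not shown): a "deps" stage that installs
    # dependencies (cached as image layers), a "test" stage that runs the
    # test suite during the build, and a "release" stage for the shippable
    # image. Locally and in CI, the whole pipeline is just docker build:
    docker build --target test -t myapp:test .
    docker build --target release -t myapp:"${CI_COMMIT_SHORT_SHA:-local}" .
    # The matching .gitlab-ci.yml job script is the exact same command, e.g.
    #   test:
    #     script:
    #       - docker build --target test -t myapp:test .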
In a previous life, I set up CI runner images (Amazon AMIs) that had all of our Docker base images pre-cached, and ran a custom Docker cleanup script that excluded images with certain tags (roughly like the sketch below). This meant that a new runner would be relatively quick off the blocks, and get faster as it built/pulled more images.
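Not the original script, but a rough sketch of the idea; the keep-list pattern and image names are hypothetical:

    # Remove images whose repo:tag does not match a keep-list of pre-baked
    # base images, then prune dangling layers. Images still in use by a
    # container simply fail to remove, which is fine here.
    KEEP_PATTERN='^(mycorp/base|mycorp/ci-cache)'   # hypothetical keep-list
    docker images --format '{{.Repository}}:{{.Tag}} {{.ID}}' \
      | grep -vE "$KEEP_PATTERN" \
      | awk '{print $2}' \
      | sort -u \
      | xargs -r docker rmi >/dev/null 2>&1 || true
    docker image prune -f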
You can get better cache hit rates by tagging your GitLab runners and pinning projects to certain tags, roughly as sketched below.
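A hedged sketch of what that looks like; the tag name is a placeholder and the registration/auth flags are omitted:

    # Runner side: register the runner with a descriptive tag
    # (auth/registration token flags omitted for brevity).
    gitlab-runner register \
      --url https://gitlab.example.com \
      --executor docker \
      --description "runner with a warm Docker image cache" \
      --tag-list "big-docker-cache"
    # Project side, in .gitlab-ci.yml: pin jobs to runners carrying that tag.
    #   build:
    #     tags:
    #       - big-docker-cache
    #     script:
    #       - docker build --target release -t myapp:latest .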