“A move away from monolithic statically compiled binaries to constellations of microservices (usually bloated docker containers) is a significant part of the problem.”
It won't be true in every shop, but I do this professionally and it's been my firsthand experience. A native, statically compiled binary containing just the functions that actually get called will usually come in at 10-100 MB. An ungroomed Docker image is more like 10-20 GB, about the same as the root partition you'd end up with if you sat down and brought up a Linux workstation or server node by hand, and that's not a coincidence. Sure, Docker avoids duplicating the Linux kernel, which makes it more efficient than an old-school VM, but these days all the other software bloat dominates the kernel in size. And most companies do not have a 100-person team of engineers dedicated to optimizing their image build and management workflow and pruning what actually goes into their containers.
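For reference, here's a minimal sketch of what "groomed" looks like in practice: a multi-stage build that compiles a static binary (Go here, but any toolchain that can link statically works) and ships it in an empty scratch image. The golang:1.22 tag and the ./cmd/server module path are just illustrative placeholders:

    # Stage 1: build a fully static binary
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    # CGO_ENABLED=0 forces a static link; -s -w strips symbol tables to shrink the binary
    RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /app ./cmd/server

    # Stage 2: ship only the binary: no distro, no package manager, no shell
    FROM scratch
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]

The final image is essentially the size of the binary itself, tens of MB rather than tens of GB, because everything the compiler and build cache dragged in stays behind in the build stage.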
Untrue, in every way.
Why did you say this?