The take-home message is that you need a strategy for deploying updates. It's true that not all bugs are exploitable, but there's a long history of people being catastrophically wrong when making that call.
More importantly, however, you want updates to be a routine, frequent thing so you don't train people to ignore them or let the backlog build up to the point where its sheer size becomes a deterrent to updating because too many things will change at once. If you install updates regularly, you keep changes small and stay focused on the tight reaction time you'll need for serious vulnerabilities.
One of the authors here. I'd like to second this take-home message. The core of our work was to bring to the forefront that package management in containers is important and that we need sound operations management and security practices in place.
We think Docker, and containers in general, is a great way to deploy software -- the speed and agility is so much better than traditional approaches. That also means we should have sound security practices in place from the very beginning, or we can easily end up with insecure images floating around in many places (from dev laptops to the public cloud).
> the speed and agility is so much better than traditional approaches
Complete agreement here: Docker's strong points are exactly the things that make patch deployment easier than in legacy environments. Hopefully we'll start seeing orchestration tools that really streamline the rebuild, partial deploy, monitor error rates, deploy wider cycle when updates become available.
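To make that cycle concrete, here's a minimal sketch under some stated assumptions: the image name, error budget, and soak time are made up, and deploy_fraction() / error_rate() are hypothetical stand-ins for whatever orchestrator and metrics API you actually use. Only the docker CLI calls are real.

```python
#!/usr/bin/env python3
"""Sketch of a rebuild -> partial deploy -> monitor -> wider deploy loop."""
import subprocess
import time

IMAGE = "registry.example.com/myapp:latest"  # hypothetical image name
ERROR_BUDGET = 0.01                          # assumed acceptable error rate
SOAK_SECONDS = 600                           # assumed canary soak time

def rebuild_and_push():
    # --pull forces a fresh pull of the base image so upstream patches are picked up.
    subprocess.run(["docker", "build", "--pull", "-t", IMAGE, "."], check=True)
    subprocess.run(["docker", "push", IMAGE], check=True)

def deploy_fraction(fraction):
    # Placeholder: point this fraction of instances at the new image via
    # your orchestrator (rolling update, canary group, etc.).
    print(f"deploying {IMAGE} to {fraction:.0%} of instances")

def error_rate():
    # Placeholder: query your metrics system for the canary's error rate.
    return 0.0

def rollout():
    rebuild_and_push()
    for fraction in (0.05, 0.25, 1.0):  # canary, then wider, then everywhere
        deploy_fraction(fraction)
        time.sleep(SOAK_SECONDS)       # let the new instances soak under traffic
        if error_rate() > ERROR_BUDGET:
            deploy_fraction(0.0)       # roll back if the canary regresses
            raise RuntimeError("rollout aborted: error rate above budget")

if __name__ == "__main__":
    rollout()
```

The point isn't this particular script; it's that the whole loop is scriptable, which is what makes frequent, small updates cheap enough to become routine.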