I agree strongly with number 2, and characterize it as, “there’s no such thing as a container”.
It clarifies security thinking if you pretend you don’t have a container, but instead, you’ve got a new kind of tar file, some namespacing, some niceness, iptables, and some convenience aliases:
https://github.com/p8952/bocker
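To make "new kind of tar file plus namespacing" concrete, here's a rough Go sketch of what the runtime part reduces to: a couple of clone(2) namespace flags and a chroot into an unpacked image tarball, much like bocker does in bash. The rootfs path is a placeholder, and you'd need root (and a real rootfs with /bin/sh in it) for it to run:

    // minimal "container": namespaces + chroot, nothing magic
    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            // the "namespacing" part: new UTS, PID, and mount namespaces
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
            // the "new kind of tar file" part: chroot into an untarred image
            // (placeholder path; unpack your image there first)
            Chroot: "/var/lib/demo/rootfs",
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Add cgroups for the "niceness", iptables rules for the networking, and a few convenience aliases, and you've built most of the illusion.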
None of it's magic, and none of it brings new security guarantees. There's just the stuff hosts already had, and that's what they still have; so however you secured a process under that stuff, you still must secure it under this stuff, just now against an extra pile of abstractions and duplicate OS cruft.
Devs are mad at the security team because they inexplicably want to pile another dubious OS into the image, along with apt-getted software full of dubious third-party libs, somehow expecting the whole thing to be safer, and the security team's reaction is to want to “scan” the pile of nonsense.
Both groups, and the vendors exploiting them, desperately do not want to grapple with the implications of your point 2.
This position is not popular.
As for point 1, it could be argued Google is continuing this experiment, and learning from it, as seen in more than one CVE last year addressing obtaining root on GKE:
https://cloud.google.com/kubernetes-engine/docs/security-bul...
That said, something like your point 1 can be found in practice, if you already have a way of securing a public host with security boundaries asserted from outside the host.
For instance, one of the GKE CVEs mentions the GKE Sandbox relying on gVisor:
https://cloud.google.com/kubernetes-engine/docs/concepts/san...
https://gvisor.dev/docs/
(About that extra OS in the container, note Google’s plea for distroless and no shell. Are folks even listening?)
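If you're on GKE Sandbox, opting in is just a RuntimeClass on the pod, and that's also the natural place to heed the distroless plea. A minimal sketch using the client-go types, assuming the node pool already has Sandbox enabled; the pod name and image tag are illustrative:

    // pod that opts into the gVisor sandbox and ships no shell
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        runtimeClass := "gvisor" // RuntimeClass installed by GKE Sandbox
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "sandboxed-app"},
            Spec: corev1.PodSpec{
                RuntimeClassName: &runtimeClass,
                Containers: []corev1.Container{{
                    Name:  "app",
                    Image: "gcr.io/distroless/static-debian12", // no shell, no apt
                }},
            },
        }
        fmt.Println(pod.Name) // submit via client-go or render to YAML in practice
    }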
See also Firecracker:
https://aws.amazon.com/blogs/aws/firecracker-lightweight-vir...
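Firecracker is driven the same way whatever orchestrator sits on top: a tiny REST API over a unix socket, per its getting-started guide. A rough Go sketch, assuming firecracker is already running with --api-sock at the path below and that the kernel and rootfs paths exist (they're placeholders here):

    // boot a microVM via Firecracker's unix-socket API
    package main

    import (
        "context"
        "log"
        "net"
        "net/http"
        "strings"
    )

    const sock = "/tmp/firecracker.socket" // whatever --api-sock points at

    func put(c *http.Client, path, body string) {
        req, _ := http.NewRequest(http.MethodPut, "http://localhost"+path, strings.NewReader(body))
        req.Header.Set("Content-Type", "application/json")
        resp, err := c.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()
    }

    func main() {
        client := &http.Client{Transport: &http.Transport{
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return net.Dial("unix", sock)
            },
        }}
        // configure a guest kernel, a root drive, then start the instance
        put(client, "/boot-source", `{"kernel_image_path":"vmlinux.bin","boot_args":"console=ttyS0 reboot=k panic=1 pci=off"}`)
        put(client, "/drives/rootfs", `{"drive_id":"rootfs","path_on_host":"rootfs.ext4","is_root_device":true,"is_read_only":false}`)
        put(client, "/actions", `{"action_type":"InstanceStart"}`)
    }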
While gVisor and Firecracker are fantastic, I’d argue for a real “belt and suspenders” instead of just more belts. Most likely it’s better to actually get outside the host’s metal and stop counting on software alone.
Perhaps the best-known commercialization of a custom-hardware approach that’s readily available to end users is AWS Nitro, with an ok-if-markety backgrounder here:
https://www.allthingsdistributed.com/2020/09/reinventing-vir...
Google offers a good writeup on the isolation layers:
https://cloud.google.com/blog/products/gcp/exploring-contain...
Note their comments on the hypervisor, and consider what Nitro is doing.
https://aws.amazon.com/ec2/nitro/
https://aws.amazon.com/ec2/nitro/nitro-enclaves/