Well, at Google, where a lot of the development on containers took place, the problem with process sandboxing and language-level VMs was machine resource allocation, both in memory and in I/O bandwidth. Let's say you have three "systems" on the box: one is a collection of processes providing elements of a file system, one is a collection of processes providing computation, and one is a collection of processes providing chat services. Now you want to allocate half of the disk I/O to the file services and half to the compute system; then 75 percent of the network bandwidth to the chat system, 20 percent to the file system, and the remaining 5 percent, plus anything "unused", to the compute system. You want half of the memory in the machine to go to the compute system and the rest to the network file system processes.
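To make that concrete, here is a minimal sketch of how such fractional, kernel-enforced splits can be expressed today through the cgroup v2 interface files. This is illustration only, not how Google's internal system worked (that work predates and fed into cgroups): the group names, the 64 GiB figure, and the weights are hypothetical, and network bandwidth shares are usually handled outside the cgroup filesystem (e.g. with tc), so only memory and block I/O are shown.

    from pathlib import Path

    CGROOT = Path("/sys/fs/cgroup")  # assumes a cgroup v2 host, run as root

    def write(group: str, knob: str, value: str) -> None:
        # cgroup v2 is configured by writing plain text into per-group files
        (CGROOT / group / knob).write_text(value)

    # Enable the memory and io controllers for child groups.
    (CGROOT / "cgroup.subtree_control").write_text("+memory +io")

    for name in ("fileserver", "compute", "chat"):   # hypothetical group names
        (CGROOT / name).mkdir(exist_ok=True)

    TOTAL_MEM = 64 * 1024**3  # hypothetical 64 GiB machine

    # Half of memory to compute, the rest to the file-serving processes;
    # memory.max is a hard ceiling the kernel enforces.
    write("compute",    "memory.max", str(TOTAL_MEM // 2))
    write("fileserver", "memory.max", str(TOTAL_MEM // 2))

    # Disk I/O split roughly 50/50 between file services and compute,
    # expressed as proportional weights (io.weight needs a weight-based
    # I/O scheduler such as BFQ to take effect).
    write("fileserver", "io.weight", "default 100")
    write("compute",    "io.weight", "default 100")
    write("chat",       "io.weight", "default 1")

Processes are then placed in a group by writing their PIDs to that group's cgroup.procs file, and the kernel enforces the limits no matter how the processes behave.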
That is a complex mix of services running on a single machine, some sharing the same flavor of VM, and you're allocating fractions of the total available resource capacity to different components. If you cannot make hard allocations that are enforced by the kernel, you cannot accurately reason about how the system will perform under load; and if one of your missions is to get your total system utilization quite high, you have to run things at the edge.