Why not? There's no virtualisation overhead, you get I/O, memory, CPU and even network limits via cgroups, and no RAM has to be committed up front for a VM. You just give team xyz a login and they run their favorite distribution and software without overhead. With a copy-on-write filesystem you save even more space, and with LVM you get easy snapshots and backups. You can put a lot of users on a moderately fast machine this way. Thanks to lxc-attach it is also dead easy to debug problems for them or install software. I'd love to have this possibility in the future.
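To illustrate, the per-container limits are a few lines in the container's LXC config. A rough sketch for a hypothetical "teamxyz" container, using the classic cgroup v1 key names (newer LXC releases spell these lxc.cgroup2.*):

    # hypothetical /var/lib/lxc/teamxyz/config
    lxc.cgroup.memory.limit_in_bytes = 2G   # hard RAM cap, nothing committed up front
    lxc.cgroup.cpu.shares = 512             # relative CPU weight
    lxc.cgroup.cpuset.cpus = 0-3            # pin team xyz to four cores
    lxc.cgroup.blkio.weight = 500           # relative block I/O weight

And lxc-attach -n teamxyz drops you into a shell inside the container when they need help.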
Because:
1. right now it breaks security
2. it's a whole lot more to manage, and that's expensive.
I see the convenience argument, which is why people like Docker, but adding a whole OS's worth of overhead to every process you want to run is basically insane in my view.
Docker doesn't require you to run an entire distro in your container. That's what a lot of people do out of the box because it's convenient and familiar. But as far as Docker is concerned, your container can be a single static binary in an otherwise empty directory.
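To make that concrete, here's a minimal sketch assuming a trivial Go HTTP server (the file and binary names are just placeholders); any fully static binary would work the same way:

    // main.go: a hypothetical one-file service. Go makes producing
    // a fully static binary trivial, which is why it's the usual example.
    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from a container that is only this binary")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

    // Build a fully static binary:  CGO_ENABLED=0 go build -o app .
    // The whole image is then just: FROM scratch
    //                               COPY app /app
    //                               ENTRYPOINT ["/app"]

The resulting image contains nothing but that one executable: no distro, no package manager, no shell.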
There is a growing trend of people building micro-containers with just the bare minimum for their application. Docker is facilitating that trend, not preventing it, if only because it explicitly encourages thinking of containers as application-oriented rather than machine-oriented.
I'm not saying Docker prevents it, but it is hardly widespread. For a start, unless you use Go or a few other languages, your toolchain won't even create static binaries.