That seems to be the common theory, but what does that even mean? There's nothing all that special about running containers at the hardware level: they are really just groups of processes that the kernel manages a certain way.
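To make "just groups of processes" concrete: on Linux, container membership is literally a cgroup path attached to each process, visible in /proc/&lt;pid&gt;/cgroup. A minimal sketch, assuming cgroup v2's single-line "hierarchy::path" format; the docker scope name below is hypothetical, not from a live system:

```python
def cgroup_path(proc_cgroup_line: str) -> str:
    """Extract the cgroup path from one line of /proc/<pid>/cgroup.

    cgroup v2 lines look like "0::<path>"; the middle field (controller
    list) is empty in the unified hierarchy.
    """
    hierarchy_id, controllers, path = proc_cgroup_line.strip().split(":", 2)
    return path

# Hypothetical line for a process inside a Docker container:
line = "0::/system.slice/docker-0123456789ab.scope"
print(cgroup_path(line))  # -> /system.slice/docker-0123456789ab.scope
```

The point being: the kernel itself has no "container" object, just processes tagged with cgroup/namespace metadata like this.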
That's not completely accurate. Hardware & OSes have not been designed for heavy multi-tenancy, leading to (among other issues) cache interference. At Netflix [1], we have to do quite a bit of work to undo that. Bigger orgs like Google have spent more than a decade improving the Linux scheduler in their own fork for similar reasons.
Very interesting link, thanks. Obviously your use case goes way beyond running a standard Docker installation. It sounds like you really want the kernel to schedule containers instead of processes -- which it doesn't really do by default, hence my comment. Perhaps it shouldn't be surprising that you're able to get these clear performance gains from a highly optimized special-purpose scheduler. Still, I was a bit surprised. :)
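For anyone wondering what "doesn't really schedule containers" means: by default CFS just gives each cgroup a proportional share of contended CPU via cpu.weight (cgroup v2, default 100), rather than gang-scheduling a container's threads as a unit. A rough sketch of the proportional math, with hypothetical weights:

```python
def cpu_shares(weights: dict[str, int]) -> dict[str, float]:
    """Fraction of contended CPU each cgroup gets, proportional to its
    cpu.weight. This models bandwidth division only -- placement, cache
    affinity, and co-scheduling are a separate (and harder) problem."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical containers on one contended host:
print(cpu_shares({"web": 200, "batch": 100, "cron": 100}))
# web ends up with 50% of CPU time, batch and cron 25% each
```

Which is exactly why the gains above came from a custom scheduler: the stock one optimizes fairness between groups, not container-aware placement.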
However, that's at the OS level. What can realistically be done at the hardware level? It must be possible in theory to design a CPU that's better at this kind of context switching, but I don't know if a new "computer company" really wants to go there.
While that's true, this particular company doesn't seem to be targeting anything that would improve containers (OS optimizations, new CPU). So I think the OP was correct in that simply changing the BMC or making it more secure on boot won't affect containers.
People who worked at Docker, Joyent, and Sun would be the ones to answer your question :). Consider the possibility that the kernel and the h/w are increasingly at odds on commodity server hardware.