> They increase attack surface by supplying a vector to any neighbour
Could you explain what you mean by this? I'm still not getting it.
Here's the example I have in mind: I have one application listening on a TCP port, and another one listening on a different port. I don't want the first application to talk to the second application. If they're just two processes on the same machine, they can both see each other, and an attacker who manages to exploit an RCE vulnerability gets access to the port "for free"; if they're both in containers, then they can only see the ports I specify, and it's really easy for me to specify them.
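To make that concrete, here's a rough sketch (Python, with hypothetical port numbers) of what I mean by getting the port "for free": any process on the same host can simply connect over loopback.

    import socket

    # Suppose app A listens on 8080 and app B on 8081. A compromised app A
    # (or anything else on the box) can reach app B directly; nothing stops it.
    with socket.create_connection(("127.0.0.1", 8081), timeout=2) as conn:
        conn.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
        print(conn.recv(4096).decode(errors="replace"))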
This literally does seem like an additional shell of defence to me! What sort of attack would it make worse?
Trusting the isolation like this is absolutely misguided, to the extent that it’s in conflict with understanding how computers work. Those who do not remember Rowhammer or Spectre and their kin are doomed to repeat them.
As for your example, there’s no difference between that and OS-mediated mandatory access controls, uid separation, chroot, etc. Those capabilities are present regardless of whether you’re using containers.
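For instance, here’s a rough sketch (needs root; the path and uid/gid are made up for illustration) of getting the same filesystem and user isolation straight from the OS, no container runtime involved:

    import os

    # Confine the process to its own filesystem tree, then drop to an
    # unprivileged user: the same isolation a container gives you.
    os.chroot("/srv/app-jail")   # hypothetical jail directory
    os.chdir("/")                # make sure the cwd is inside the jail
    os.setgid(1001)              # drop group privileges first...
    os.setuid(1001)              # ...then drop the uid
    # From here on the process only sees /srv/app-jail and runs as uid 1001.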
Anyone selling you on containers as a security measure is pitching snake oil. They are a resource allocation & workload scheduling tool. They may help applications avoid inadvertently treading on one another’s toes e.g. avoiding library conflicts or easing the deployment of, say, QA and showcase environments on the same metal, but it’s intrinsically a shoji wall.
There’s even a cultural downside: developers may make exactly these flawed assumptions about runtime safety, hard-code paths for config files rather than making them relative to the deployment location, make assumptions (implicit or otherwise) about uids, and so on. In other words, containers can breed sloppy dev practices.
I feel like you are several metaphors removed from me right now. Rowhammer? Spectre? I am just trying to serve a couple of websites here! I'm not going to buy another rack of servers just to isolate the two. Have you seen how expensive those things are?
> Anyone selling you on containers as a security measure is pitching snake oil. They are a resource allocation & workload scheduling tool.
I agree with both of these statements — and I suspect this is where it all falls down. When you put all your applications in containers, you are not done with security, no. Nevertheless, I've found great value in having one file per version per application, being able to upgrade my dependencies one-by-one, automatically rolling back in case of failure, and taking advantage of the tools in the container ecosystem. With all that, the security features such as isolating filesystems and network ports are just the cherry on top. Before containers, I was thinking about how to give my applications limited filesystems to work with, but along came Docker, and I didn't have to think about it anymore, because I'd already done it.
This is why I feel so out-of-touch with many of the commenters here. I'm surrounded by people decrying containers for security reasons, I want to defend them because of the many benefits they have given me, and I think the people preaching some kind of True Security (where everything is 100% perfectly isolated) aren't taking this into account. I feel like I could take your comment — "Trusting the isolation like this is absolutely misguided" — and apply it to any part of the stack. You have to stop somewhere.
I've seen the salespeople. They definitely exist, but I don't think they're here on HN, and they're more likely to be learning than lying.
And developers making flawed assumptions? You know this didn't start happening with containers!
> I am just trying to serve a couple of websites here!
Don't underestimate your responsibilities as an active participant online and operator of a globally reachable computing resource. Like driving a car, if you're not fully qualified and alert then you can endanger yourself and others. This is how PHP got such a bad rap.
The fact that the container deployment tool has helped you configure security elements (such as limited filesystem access) is nice, but those elements already existed, along with the tools to manage them. Containerized software deployment did not invent them, and if you're getting procedural benefits from containers, that's great for you, but that's all it is.
Having an additional layer of restriction (be it an allocation and workload scheduler or whatever) is additional security. The fact that there are vulnerabilities that bypass layers of security is irrelevant; the power button on the machine is a vulnerability to your software. The fact that the same restrictions are possible with other tools is also irrelevant: grouping the functionality into a single paradigm has utility.
To be fair, adding security through more tooling also adds a vulnerability in the form of human error, but that's true of all the tools.