I guess to take this a bit further, the thought that came to mind for me is: what if we processed every user on their own VM, with its own core and memory? Is that something anyone is doing today with web sites? It'd seemingly be horribly inefficient, but it's an interesting thought experiment.
It's not that inefficient if you're working not with real VMs but with containers (or something similar). The unikernel projects mentioned by the peer below (Ling, MirageOS) can spin up an instance per request, respond, and tear it down again within milliseconds. They're pretty interesting from a security perspective, especially when you couple them with a read-only image - I imagine it would be pretty hard to attack something that only persists for the time it takes to handle a single request.
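To make that concrete, here's a rough Python sketch of the per-request throwaway pattern, with plain Docker standing in for a unikernel. The image name my-handler-image is made up, and a real setup would use something much lighter-weight than shelling out to docker run:

```python
# Per-request "throwaway container" sketch. Assumes Docker is installed and
# that "my-handler-image" (hypothetical) reads a request from stdin and
# writes a response to stdout.
import subprocess

def handle_request(request_body: bytes) -> bytes:
    # --rm          : delete the container (and its writable layer) on exit
    # --read-only   : mount the image's filesystem read-only, so nothing can
    #                 be written even while the request is in flight
    # --network none: the handler gets no outbound network (optional hardening)
    result = subprocess.run(
        ["docker", "run", "--rm", "--read-only", "--network", "none",
         "-i", "my-handler-image"],
        input=request_body,
        capture_output=True,
        timeout=5,       # a hung handler can't linger past the request
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(handle_request(b"GET /hello"))
```

Written like this it would never survive real traffic - starting a Docker container per request costs tens to hundreds of milliseconds - which is exactly the overhead the unikernel projects are trying to shave down.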
> I imagine it would be pretty hard to attack something that only persists for the time it takes to handle a single request.
I don't see why. If the request is the attack (as it usually is), then it'll persist for just long enough to accomplish it. What kind of attacks do you see it avoiding?
I think the big benefit is that it avoids attacks that infect the server, because in this case the server is "destroyed" when the request finishes. So a request that maliciously uploads "hackertools.php" achieves nothing: the file lands in the throwaway container handling that request, not on a long-lived web server, and it disappears when that container is torn down.
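You can see this with plain Docker in a couple of lines (using the stock alpine image; the file name just stands in for the hypothetical upload):

```python
# Demonstration that nothing written during one request survives into the
# next, because each request gets a brand-new container. Assumes Docker and
# the stock "alpine" image are available.
import subprocess

def run_in_fresh_container(shell_command: str) -> str:
    out = subprocess.run(
        ["docker", "run", "--rm", "alpine", "sh", "-c", shell_command],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

# "Request 1": the attacker manages to drop a file on the handler's filesystem.
print(run_in_fresh_container("echo pwned > /tmp/hackertools.php; ls /tmp"))
# -> hackertools.php

# "Request 2": a new container is created from the same image; the file from
# request 1 is gone along with the container that briefly held it.
print(run_in_fresh_container("ls /tmp"))
# -> (empty)
```

And with a read-only image on top of this, the write in "request 1" wouldn't even succeed in the first place.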
Yes, this is what I meant. It doesn't make the server any less vulnerable to an individual attack, but it makes it very hard to turn that attack into a persistent compromise. Though there was a really interesting video recently of a security researcher breaking out of Lambda and uncovering a persistent file system somewhere - will try to find it. Edit: Found it:
https://media.ccc.de/v/33c3-7865-gone_in_60_milliseconds