And what if you only have 33% of your nominal cluster capacity available because the AC goes out in your server room?
Now which containers should your cluster stop running?
You haven't actually solved anything with this, you've just changed the abstraction layer at which you need to make decisions. Probably an improvement, but does not obviate some kind of load shedding heuristic.
But that's not what this was about. It was about "hey, which of these boxes had the ldap running on it? We need to make sure that gets shifted somewhere else!"
K8s lets you say "ok, we don't have enough capacity to run everything, so let's shut down the Bitcoin deployment to free up capacity".
There's no legwork or bookkeeping to figure out what was running where; instead it's "what do I need to run, and what can I shut down or tune down?" All from the comfort of the room with AC.
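For example, shedding that load is a couple of declarative commands against the API server rather than an SSH session into a hot server room. This is a sketch only: the deployment name `bitcoin` and the label `app=bitcoin` are illustrative, not from the thread, and the commands assume a live cluster and a configured kubeconfig.

```shell
# Scale the hypothetical "bitcoin" deployment down to zero replicas,
# releasing its CPU and memory for workloads you actually need.
kubectl scale deployment bitcoin --replicas=0

# Confirm nothing for it is still scheduled (label is an assumption).
kubectl get pods -l app=bitcoin
```

Because the desired state lives in the cluster, scaling back up later is the same command with the original replica count.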
And if you're really clever, you went ahead and gave system-critical pods elevated priority. [1]
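Elevated priority in Kubernetes is expressed with a PriorityClass, which the scheduler uses to decide which pods to preempt first when capacity runs out. A minimal sketch, assuming hypothetical names (`keep-running`, the `ldap` pod, and the placeholder image are all illustrative):

```yaml
# PriorityClass: higher "value" means the scheduler evicts
# lower-priority pods first when the cluster is over capacity.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: keep-running
value: 1000000
globalDefault: false
description: "Pods that must survive when capacity is constrained."
---
# A pod opts in by referencing the class name in its spec.
apiVersion: v1
kind: Pod
metadata:
  name: ldap
spec:
  priorityClassName: keep-running
  containers:
  - name: ldap
    image: ldap-image:latest  # placeholder image
```

With this in place, losing two thirds of your capacity means the scheduler itself evicts the low-priority workloads, no heuristics applied by hand at 3 a.m.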
I see. I guess there are two components - prioritization for load shedding, and knowing how to actually turn things off to shed the load. You're saying k8s makes the latter very easy, and it was actually about that latter exercise of mapping workload to infra, not enumeration and prioritization of services.