Agreed. But I do think there are places for containers. I will often package single binaries in containers to get built-in distribution and rolling-upgrade capabilities, especially for tooling that relies on a lot of external dependencies that can taint the host. Python applications, for example, are much easier to deploy and manage this way than by wrestling with Terraform / Ansible to provision them correctly. Even if you're just using host networking and good ol' Docker, there's a ton of operational upside with very low maintenance overhead (mental and otherwise).
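To make the pattern concrete, here's a minimal sketch of what I mean, assuming a hypothetical Python tool with a `main.py` entrypoint (the image name, tag, and files are illustrative, not from any real product):

```dockerfile
# Hedged sketch: pin a base image so the app's dependencies stay isolated
# from the host instead of tainting system Python.
FROM python:3.12-slim
WORKDIR /app
# Install the tool's dependencies inside the image only.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENTRYPOINT ["python", "main.py"]
```

With host networking it's just `docker run --network host --name mytool mytool:v2`, and a rolling upgrade is stop-old, start-new with the next tag. No orchestrator required.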
I'm working with a product now that made its k8s deployment the standard, and all it's done is create bigger issues. Ops got behind on Strimzi, so we were stuck on Kubernetes 1.21 because upgrading would have broken the Strimzi version we were locked to. That caused problems during the Log4j scramble, and we hit a wall quickly with customers on GCP as soon as 1.22 went GA. Honestly, I'm not sure we're getting much, if any, overhead advantage, since I feel like the app has become bloated from container creep.
That, and supporting 4 different ways to provision storage across customers on every cloud / on-prem is a nightmare. Customer-installed applications on k8s are a nightmare today.