
> Recently I had to set up a zero-downtime system for my app. I spent a week seriously considering a move to k3s, but the sheer churn of the Kubernetes ecosystem frustrated me so much that I simply wrote a custom script based on Caddy, regular container health checks and container cloning. Easier to understand, 20 lines of code, and I don't have to sell my soul to the k8s devil just yet.
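For what it's worth, the Caddy side of a setup like that can be tiny. A hypothetical sketch (the hostnames, ports and /health endpoint are made up): Caddy proxies to two clones of the app, and its health checks pull a dead clone out of rotation, so the swap script only has to start the new container and stop the old one.

```
example.com {
    reverse_proxy app_blue:8080 app_green:8080 {
        # Actively probe each clone; stop routing to one that fails.
        health_uri      /health
        health_interval 5s
        # Passive check: after errors, bench the upstream for a while.
        fail_duration   30s
    }
}
```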

I'd say it depends on where you're coming from. For me, setting up a Kubernetes cluster (no matter which flavor) with external-dns and cert-manager will most likely take 30m-1h, and that covers the basics you need to run an app with the requirements you mentioned. To navigate through k8s just use k9s and you're golden.
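To give a sense of scale, the cert-manager half of that setup is roughly one manifest. A hedged sketch (the issuer name, email and ingress class are placeholders):

```yaml
# Hypothetical ClusterIssuer: lets cert-manager obtain Let's Encrypt
# certificates for any Ingress configured to use it.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```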

I never get where all the "k8s is the devil" comments come from. There is nothing really complex about it. It's a well-defined API with some controllers, that's it. As soon as I need more than one server running my workloads I would always default to k8s.




> I never get where all the "k8s is the devil" comments come from. There is nothing really complex about it. It's a well-defined API with some controllers, that's it.

And Linux is just a kernel with some tools, which are all well defined, that's it! But if you need to debug a complex interaction, "that's it" and "it's well defined" aren't enough.

Kubernetes is quite complex, with a lot of interactions between different components. Upgrades are a pain because all those interactions need to be verified as compatible with one another, and the versioned APIs, as cool a concept as they are on paper, mean there are constantly moving targets that need constant supervision. You can't just jump a version: you need to check that all your admission controllers, CNI and CSI drivers, Ingress controller, cert-manager and everything else are compatible with the new version and with each other. This is not trivial at any scale, which is why many orgs adopt the approach of just deploying a new cluster, redeploying everything to it and switching over, which is indicative of exactly how much of a pain it is.
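Part of that verification is mechanical version bookkeeping. As a toy illustration of the kind of rule involved (this is not any official tooling): Kubernetes' version-skew policy says a kubelet may trail the API server by a couple of minor versions, but must never be newer than it.

```python
# Toy check of the Kubernetes version-skew rule: kubelets may lag the
# API server (historically by up to two minor versions) but must never
# run ahead of it.
def skew_ok(apiserver_minor: int, kubelet_minor: int, max_skew: int = 2) -> bool:
    """True if a kubelet at 1.<kubelet_minor> may join a control
    plane at 1.<apiserver_minor>."""
    return 0 <= apiserver_minor - kubelet_minor <= max_skew

print(skew_ok(28, 27))  # kubelet one minor behind: fine
print(skew_ok(27, 28))  # kubelet ahead of the API server: not allowed
```

Multiply that by every controller and driver in the cluster, each with its own supported-version matrix, and the upgrade pain becomes clearer.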

Even Google, who created it, admits it's complex and offers three managed services with different levels of abstraction to make it less complicated to use and maintain.


k8s is fine. It's the ecosystem around it that I dread.

I understand how it works, it makes sense, but then you're faced with Helm, kustomize, jsonnet and a lot of bullshit just to have minimal and reproducible templating around YAML. Or maybe you should use Ansible to set it up. Maybe instead try ArgoCD. Everybody and their dog is an AWS or GCP evangelist and keeps trying to discourage you from running it outside of the blessed clouds. If anything breaks you're told you're stupid, that's what you get, and that you should've paid someone else to manage it.
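To be fair, the minimal end of that templating spectrum is fairly small. A hypothetical kustomize overlay (the base path and image name are made up) that just pins an image tag over a shared base:

```yaml
# kustomization.yaml: overlay a shared base, overriding one image tag.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
images:
  - name: myapp
    newTag: v1.2.3
```

The complaint stands, though: even this "minimal" option is one of several competing tools you have to evaluate first.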

It feels like everybody is selling you something to just manage the complexity they have created. This is what keeps me away. It's insane.



