
Saying that 2021's Kubernetes is established because 1.0 was released in 2015 is like saying that 1991's Linux is stable because Unix had existed for 20 years at that point. Kubernetes 1.0 and 1.20 share the same name, design principles and a certain amount of API compatibility, but it's impossible to take a nontrivial application running on 1.20 and just `kubectl apply` it on 1.0. Too much has changed.
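
To make that concrete (a rough sketch, names are arbitrary): the standard way to run a workload on 1.20 is an apps/v1 Deployment, an API that didn't exist at all in 1.0 (Deployments only went GA under apps/v1 in 1.9, and the older extensions/v1beta1 form was removed in 1.16). The closest 1.0-era equivalent was a v1 ReplicationController:

    # Roughly what you'd apply on 1.20
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels: {app: web}
      template:
        metadata:
          labels: {app: web}
        spec:
          containers:
          - name: web
            image: nginx:1.19

    # The rough 1.0-era equivalent: a v1 ReplicationController
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web
    spec:
      replicas: 3
      selector: {app: web}   # a plain label map, no matchLabels
      template:
        # ...same pod template as above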

Kubernetes is just now entering the realm of "becoming stable". Maybe in five years or so it'll finally be boring (in the best sense of the word) like Postgres.




Of course 1.20 has a myriad of additional features. But the 1.0 concepts are still there in 1.20; the fundamentals are stable: schedule and run containers, expose them to the external network via a load balancer (or node port).
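
That exposure pattern is essentially unchanged since 1.0; a minimal sketch with arbitrary names:

    # A v1 Service sending traffic to pods labelled app=web.
    # type: LoadBalancer asks the cloud provider for an external LB;
    # type: NodePort would instead open the same port on every node.
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer   # or NodePort
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 8080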

The declarative aspect is stable. Yes, many people are writing insane go programs to emit templated ksonnet (or whatever) that itself has a lot of bash embedded, but that's the equivalent of putting too much bash into the aforementioned configuration/orchestration playbooks.


Playbooks are terrible. They are a replacement for expert knowledge of platform tooling. There is no replacement for expertise and knowledge of the platform.

Serious problems always come down to understanding the platform, not the playbook. Ansible and the Python ecosystem are especially broken. I will _never_ use another playbook to replace mature ssh-driven deployments.


Yep, agreed. I've found that the active control loops (coupled with the forgiving "just crashloop until our dependencies are up" approach) that k8s provides/promotes are the only sane way to manage complex deployments. (The concepts could be used to build a new config management platform, but it would be really hard: most of the building blocks are not idempotent, and making them so usually requires wrapping them and combining them with very I/O- and CPU-heavy reset/undo operations, blowing caches and basically starting from scratch.)
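
A rough sketch of that crashloop pattern (image, binary path and hostname are made up): the container fails fast if its dependency isn't reachable yet, and the kubelet's restart loop (with CrashLoopBackOff backoff) keeps retrying until things converge:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      restartPolicy: Always          # default for pods; kubelet restarts with backoff
      containers:
      - name: app
        image: example.com/app:1.0   # hypothetical image that ships nc
        command: ["sh", "-c",
                  "nc -z db.default.svc.cluster.local 5432 || exit 1; exec /srv/app"]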


Hmm. The expectations and engineering of a platform like Kubernetes require a lot of alignment work before 'fail till right' actually works.

There is no silver bullet. Automation and self-healing are a selling point, but when they hit real engineering they usually turn out to be a dud in terms of incorporation into existing environments.

The real novelty would be to generate a declarative description from the customer's existing environment and provide an in-place deployment solution via k8s. That would be the ultimate replacement solution.



