
I worked at a place that was deploying K8s as I was leaving, and my experience was that K8s was complicated and no one knew exactly how to use it or why we should use it, other than that it seemed to be what everyone else was doing. I didn't notice any operational improvements over our prior system (which was largely ad hoc), but the kinks may have been worked out by now.



If you read The Innovator's Dilemma, there is a very clear chapter about how new technologies don't really deliver on their promises immediately; it's only as they're adopted and operators/users learn how to work with them that the productivity changes become clearly visible. It took quite some time for AWS to become mainstream and then to be adopted and operated in a way that delivered high productivity gains, and it seems to be the same for k8s right now.


I really wonder about the causality there: is it that changing things to adopt the new technology makes you clean up / refresh / fix known issues along the way, or is it that the new technology itself causes fewer issues down the track?


That doesn't really make sense. If you're moving to a new technology, even if you clean up / refresh / fix known issues along the way, you are likely to accumulate similar issues in the new technology as well, which would lead to a decrease in productivity. But that has never really been observed.

While I agree that the cleanup inevitably leads to some productivity gains, I believe they are overshadowed by the gains from the actual change that is being made.


That only makes sense when seen as an obstacle to initial adoption from an innovator's and sales point of view, and it could be said of just about every innovation. But obviously not every IT innovation makes it.


It depends on how frequently you need to spin up new containers.

As someone on Azure, I think the solution for small-ish guys like me (who still need like 8 VMs in production to run our site due to special requirements) is a greatly simplified container management system. Something that covers the basic Kubernetes features but doesn't have such a steep learning curve.

I think Microsoft is doing this with Azure Container Instances, but I haven't tried it yet. (No doubt AWS is doing something about it as well.)

Or they're going to take care of all the k8s configuration crap through some neat Visual Studio tools. In a VS preview I saw a "deploy to Kubernetes" button. I just want something that will give me more web API instances when my server is getting slammed.
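
For what it's worth, the "more instances when my server is getting slammed" part maps to a HorizontalPodAutoscaler once you do have a cluster. Here's a minimal sketch in Go with client-go, assuming a Deployment named web-api already exists in the default namespace and a kubeconfig sits in the usual place (the name and thresholds are made up for illustration):

    package main

    import (
        "context"
        "fmt"

        autoscalingv1 "k8s.io/api/autoscaling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        // Load credentials from the default kubeconfig (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Scale the hypothetical "web-api" Deployment between 2 and 10 replicas,
        // targeting 70% average CPU utilization.
        hpa := &autoscalingv1.HorizontalPodAutoscaler{
            ObjectMeta: metav1.ObjectMeta{Name: "web-api"},
            Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
                ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
                    APIVersion: "apps/v1",
                    Kind:       "Deployment",
                    Name:       "web-api",
                },
                MinReplicas:                    int32Ptr(2),
                MaxReplicas:                    10,
                TargetCPUUtilizationPercentage: int32Ptr(70),
            },
        }

        created, err := client.AutoscalingV1().
            HorizontalPodAutoscalers("default").
            Create(context.TODO(), hpa, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created autoscaler:", created.Name)
    }

kubectl autoscale deployment web-api --min=2 --max=10 --cpu-percent=70 does the same thing in one line, which is roughly the level of simplicity I'd hope a "deploy to Kubernetes" button hides behind.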


AWS Elastic Beanstalk or Google App Engine flex or something from Heroku is what you are talking about: container management, but simplified for the common use case. You can implement the same thing with k8s, but why bother for small stuff. The real value in k8s is large, company-wide installations where everyone shares each other's hardware. Even then, I've heard anecdotes that Kubernetes doesn't scale to the point Borg or whatever other proprietary system does. Then again, those are niche use cases to begin with.


Unless they've reduced the price, Azure Container Instances is mainly for one-off/bursty tasks. If you need a service running 24/7, it's prohibitively expensive.

I think once their managed kubernetes (AKS) hits GA, the learning curve will be slightly lower.


Right, Container Instances isn't equivalent to deploying a VM with CoreOS like GCP offers.

We've been running on AKS for a bit; the only negative is how slow spinning up new clusters can be.




