Anecdata: Series B startup. The first year was a single VM with two Docker containers, one for the API and one for nginx. Deploy was a one-shot “pull new container, stop old, start new” shell command. Year 2 onwards, k8s. No regrets; we only needed to make our first dedicated ops hire after 15 engineers, due in large part to the infra being so easy to program.
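
For the curious, that kind of deploy really is just a few Docker commands. A rough sketch (the registry, image tag, container name, and port here are illustrative, not our actual ones):

    # pull the new image, then swap the running container for it
    docker pull registry.example.com/api:latest
    docker stop api || true
    docker rm api || true
    docker run -d --name api --restart unless-stopped \
      -p 8080:8080 registry.example.com/api:latest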

I used GKE and was already very familiar with k8s. I would not recommend that someone in my shoes learn k8s from scratch at the stage I rolled it out, but if you already know it, it’s a solid choice at the first point you want a second instance.

When people talk about companies using k8s too soon, they are talking about running it themselves, not using a hosted service like GKE. Self-hosting is a whole new ball game and takes 100x the effort.


I read TFA as mostly complaining about the conceptual and operational complexity of running your code on k8s, not so much about operating a cluster itself.

Lots of ink spilled on concepts that most users don't need to know or care about, like EndpointSlices.

And arguing against microservices is a reasonable position -- but IF you have made that architectural choice, then Docker for Mac plus the built-in Kubernetes cluster is the most developer-friendly way of working on microservices that I am aware of. So a bit of a non sequitur there.
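
To make that concrete: the built-in cluster just shows up as another kubectl context, so the local workflow is roughly the sketch below (the k8s/ manifest directory is a hypothetical layout, not a Docker Desktop convention):

    # Enable Kubernetes in Docker Desktop (Settings -> Kubernetes),
    # which registers a local "docker-desktop" context.
    kubectl config use-context docker-desktop

    # Apply each service's manifests and watch them come up.
    kubectl apply -f k8s/
    kubectl get pods --watch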


I don’t see when you’d need to understand what an EndpointSlice is unless you’re running k8s itself. The concept doesn’t leak through any of the pod management interfaces.
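
To illustrate, the commands an app developer touches day to day never mention EndpointSlices (the "api" Deployment/Service names below are hypothetical); they only surface if you go digging behind the Service abstraction:

    kubectl apply -f deployment.yaml      # ship the workload
    kubectl get pods                      # check the rollout
    kubectl logs deploy/api               # read logs
    kubectl port-forward svc/api 8080:80  # hit the Service locally

    # Only an explicit query shows the EndpointSlices a Service manages:
    kubectl get endpointslices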
