
It's not because of the networking stack.

I've yet to meet anyone who can easily explain how the CNI, services, ingresses and pod network spaces all work together.

Everything is so interlinked and complicated that you need to understand vast swathes of kubernetes before you can make sense of the networking side.

I contrast that to its scheduling and resourcing components, which are relatively easy to explain and obvious.

Even storage is starting to move toward overcomplication with CSI.

I half-jokingly think K8s adoption is driven by consultants and cloud providers hoping to ensure lock-in via the mechanics of actually deploying workloads on K8s.




Services create a private internal DNS name that points to one or more pods (which are generally managed by a Deployment unless you're doing something advanced) and may be accessed from within your cluster. Services with Type=NodePort do the same and also allocate one or more ports on each of the hosts which proxies connections to the service inside the cluster. Services with Type=LoadBalancer do the same as Type=NodePort services and also configure a cloud load balancer with a fixed IP address to point to the exposed ports on the hosts.
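
As a rough sketch (the name "my-app" and the port numbers are made up, not from the comment above), a Type=LoadBalancer Service looks something like:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                # hypothetical name
    spec:
      type: LoadBalancer          # also gets NodePort + ClusterIP behaviour
      selector:
        app: my-app               # matches the labels on the Deployment's pods
      ports:
        - port: 80                # port exposed by the load balancer / cluster IP
          targetPort: 8080        # port the containers actually listen on

Drop the type: LoadBalancer line and you get a plain ClusterIP Service, reachable only from inside the cluster under the DNS name my-app.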

A single Service with Type=LoadBalancer and one Deployment may be all you need on Kubernetes if you just want all connections from the load balancer immediately forwarded directly to the service.

But if you have multiple different services/deployments that you want accessible under different URLs on a single IP/domain, then you'll want to use Ingresses. Ingresses let you do things like map specific URL paths to different services. Then you have an IngressController, which runs a webserver in your cluster and automatically uses your Ingresses to figure out where connections for different paths should be forwarded. An IngressController also lets you configure the webserver to do certain pre-processing on incoming connections, like terminating HTTPS, before proxying to your service. (The IngressController itself will usually use a Type=LoadBalancer service so that a load balancer connects to it, and then all of the Ingresses will point to regular Services.)
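
A minimal sketch of that kind of path-based Ingress (the service names and the nginx ingress class are assumptions, not anything from the thread):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress             # hypothetical
    spec:
      ingressClassName: nginx           # assumes an nginx IngressController is installed
      rules:
        - host: example.com
          http:
            paths:
              - path: /api
                pathType: Prefix
                backend:
                  service:
                    name: api-service   # a regular ClusterIP Service
                    port:
                      number: 80
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web-service
                    port:
                      number: 80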


Assuming that, like us, you spent the last 10-12 years deploying IPv6 and are currently running servers on IPv6-only networks, the Kubernetes/Docker network stack is just plain broken. It can be done, but you need to start thinking about stuff like BGP.

Kubernetes should have been IPv6 only, with optional IPv4 ingress controllers.


It really feels like Kubernetes was developed by some enterprise Java developers. Nothing seems well defined, everything is done in the name of abstraction, but the rules of the abstraction are never clearly stated, only the purpose is.

I really hope someone takes up the mantle of Leslie Lamport (creator of the TLA+ specification language - "the quixotic attempt to overcome engineers' antipathy towards mathematics") and replaces Kubernetes with some software that takes a first-principles approach.


You mean you don't like 3+ layers of NAT via iptables?


That's already happening anyway.


But mostly you are not responsible for those components, or you are using hardware solutions that are 1000 times more efficient/performant?


+1000. Came here to rant, I'll just say "this"


For the nginx ingress case:

An ingress object creates an nginx/nginx.conf. That nginx server has an IP address which has a round-robin IPVS rule. When it gets the request, it proxies to a service IP, which then round-robins to the 10.0.0.0/8 container IP.

Ingress -> service -> pod

It is all very confusing, but once you look behind the curtain it's straightforward if you know Linux networking and web servers. The cloud providers remove the requirement for that Linux knowledge.


I don't think this is accurate, which plays into the parent's point, I guess.

Looking at the docs, ingress-nginx configures an upstream using endpoints, which are essentially pod IPs, which skips Kubernetes Service-based round-robin networking altogether.

Assuming you use an ingress controller that does route through Services instead, and assuming you're using a service proxy that uses IPVS (e.g. kube-proxy in IPVS mode), then your explanation would have been correct.

For the most part, Kubernetes networking is as hard as ordinary networking plus a load of automation. Depth in both of those skills is often mutually exclusive, but if you're using a popular and/or supported CNI and not doing things like changing it in flight, your average dev just needs to learn basic k8s debugging: kubectl get endpoints to check whether their service selectors are set up correctly, and curl-ing those endpoints to check whether the pods are actually listening on those ports.
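
Roughly, what kubectl get endpoints verifies is that a Service's selector actually matches some pods' labels; a minimal sketch with hypothetical names:

    # The Service selects pods by label...
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      selector:
        app: my-app               # must match the pod template labels below
      ports:
        - port: 80
          targetPort: 8080        # the pods must actually listen here
    ---
    # ...and the Deployment's pod template must carry that label,
    # otherwise the endpoints list stays empty.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: app
              image: example/app:1.0    # hypothetical image
              ports:
                - containerPort: 8080

If the labels don't line up, kubectl get endpoints my-app shows no addresses, which is usually the first clue that the selector is wrong.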


> It is all very confusing

Is there an easier + simpler alternative?


Use Heroku, AppEngine, or if in k8s, Knative/Rio.

It's confusing because a lot of people being exposed to K8s don't necessarily know how Linux networking and web servers work. So there is a mix of terminology (services, ingress, ipvs, iptables, etc) and context that may not be understood if you didn't come from running/deploying Linux servers.


The deal was that it was all abstracted :) And now developers have to start all over with subnet masks?!


The abstraction was file a ticket and let someone figure it out for you, or script it if it happens often enough.


When you come from the old sysadmin world and have mastered Unix systems architecture and software, what k8s does is very straightforward, because it's the same stuff you already know.

K8s is extremely complicated for the huge swarm of webdevs and Java developers that really, really don't understand how the stuff they use/code actually works.

K8s was supposed to decrease the need for real sysadmins, but in my view it actually increased the demand, because of all the obscure issues one can face in production without really understanding what they are doing with K8s and how it works under the hood.

Which I find hilarious.


I think you're right for small clusters: you end up needing more sysadmins. But to manage a 1000-node Kubernetes cluster, I suspect it can be done with less administration.


I'm managing 20,000 vCPUs of infra, both k8s and plain VMs. IMHO there is no big difference there. It all depends on the tools you are using around orchestration. In my experience, having good sysadmins is still the key to the best infrastructure management, no matter the size of the company.


"I've yet to meet anyone who can easily explain how the CNI, services, ingresses and pod network spaces all work together."

Badly! That'll be $500, thanks for your business.

On a serious note, the whole stack keeps ok-ish coherence considering the number of very different parties putting a ton of work into it.

In a few years' time it'll be the source of many war stories nobody cares about.


Answer has to include implementation details. No credit if your answer does not reference iptables.


It helps going from the bottom up, IMO. It's a multi-agent blackboard system with elements of control theory, which is a mouthful, but it essentially builds from smaller blocks up.

Also, after OpenStack, the bar for "consulting-driven software" is far from reached :)



