
Can you expand a bit on why you don't need Flannel on AWS? We're currently deploying a k8s cluster and went the Flannel route (following CoreOS's guide to k8s), but it'd be nice to remove that setup from our deployment if possible.



AWS has VPCs, allowing you to get a practically unlimited number of private subnets.

In some cloud environments (e.g. DigitalOcean), there's no private subnet shared between hosts, so Kubernetes can't just hand out unique IPs to pods and services. So you need something like Flannel, which sets up an overlay network using either UDP encapsulation or VXLAN.
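For reference, Flannel's network config is a small JSON document stored in etcd. A minimal sketch selecting the VXLAN backend (the pod CIDR here is illustrative):

```json
{
  "Network": "10.0.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```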

Flannel also has a backend for AWS, but all it does is update the routing table for your VPC. Which can be useful, but can also be accomplished without Flannel. It's also limited to about 50 nodes [1] and only one subnet, as far as I know. I don't see the point of using it myself.
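For comparison, a sketch of what the aws-vpc backend config looks like — instead of encapsulating traffic, it writes routes into the VPC route table you point it at (the route table ID is a placeholder):

```json
{
  "Network": "10.0.0.0/16",
  "Backend": {
    "Type": "aws-vpc",
    "RouteTableID": "rtb-0123456789abcdef0"
  }
}
```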

[1] https://github.com/coreos/flannel/issues/164


Could you say how you arrange that the addresses you pick for your pods do not clash with the addresses AWS picks for instances?


Kubernetes handles the IPs for you. For example, if your VPC subnet is 172.16.0.0/16, then you can tell K8s to use 10.0.0.0/16 for pods.

AWS doesn't know about this IP range and won't route it on its own, so K8s automatically populates your VPC routing table with the necessary routes every time a node is added or removed.

K8s will give a /24 CIDR to each minion host, so the first will get 10.0.1.0/24, the next 10.0.2.0/24, and so on. Pods on the first node will then get 10.0.1.1, 10.0.1.2, etc.
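The carving described above is just subnetting arithmetic; a quick sketch with Python's ipaddress module (the cluster CIDR matches the example, everything else is illustrative):

```python
import ipaddress

# Cluster-wide pod range, deliberately distinct from the VPC's 172.16.0.0/16.
cluster_cidr = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into /24 blocks, one per node.
node_subnets = list(cluster_cidr.subnets(new_prefix=24))

first_node = node_subnets[1]           # 10.0.1.0/24, as in the example
pods = list(first_node.hosts())[:2]    # first pod IPs in that block

print(first_node)          # 10.0.1.0/24
print(pods[0], pods[1])    # 10.0.1.1 10.0.1.2
```

A /16 yields 256 such /24 blocks, which is why the per-route limit below, not the address space, is the practical ceiling.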

Obviously having an additional IP/interface per box adds complexity, but I don't know if K8s supports any other automatic mode of operation on AWS.

(Note: Kubernetes expects AWS objects that it can control — security groups, instances, etc. — to be tagged with KubernetesCluster=<your cluster name>. This also applies to the routing table.)
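Assuming you're tagging by hand, applying that tag to the route table with the AWS CLI looks like this (the route table ID and cluster name are placeholders):

```shell
# Tag the VPC route table so Kubernetes is allowed to manage it.
aws ec2 create-tags \
  --resources rtb-0123456789abcdef0 \
  --tags Key=KubernetesCluster,Value=mycluster
```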


OK, I see this is the same as what Flannel does in its aws-vpc backend, but I thought you were saying you could do better. Maybe I mis-parsed what you said.

If you're adding a routing rule for every minion, then you will also hit the 50-route limit on AWS routing tables.


Sorry about the confusion — yes, absolutely. One option is to ask AWS to increase it.

Flannel is just one of many different options if you need to go beyond 50 nodes. It seems some people use Flannel to create an overlay network, but this isn't necessary. You can use the host-gw mode to accomplish the same thing as Kubernetes' default routing-table-updating behaviour, but with routing tables maintained on each node instead.
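A sketch of a host-gw Flannel config — the only change from the other backends is the Type field, and routes then live on each node rather than in the VPC route table (pod CIDR is illustrative):

```json
{
  "Network": "10.0.0.0/16",
  "Backend": {
    "Type": "host-gw"
  }
}
```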


Forgot to say: Kubernetes will keep the routing table up to date if you use --allocate-node-cidrs=true. That way, it does exactly the same thing as Flannel with the "aws-vpc" backend.
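For completeness, that flag belongs to the controller manager. A hedged sketch of the relevant flags, with illustrative values matching the example above:

```shell
# kube-controller-manager flags (values are illustrative):
kube-controller-manager \
  --cloud-provider=aws \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16
```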



