
flannel appears to be what Kubernetes is using as well, and I know it is what Red Hat is using for their OpenShift platform on top of k8s. It seems like the obvious path forward.



Hi Jeff - if there were a single solution that fit all requirements, then that would be obvious :) However, experience has pointed out that there are differing sets of requirements. Some folks may need the flexibility (say, disjunct fabrics) that encap provides, while others need the scale (say, 1000s or 10,000s of servers) or simplicity (those sort of go hand in hand) that a non-encap data path provides.

The big question that a system architect needs to ask, if they are designing a system at scale, is not "should I use this technique" but "do I NEED to use this technique." We can always add more complexity and technology/layers than we need because we "may need it" in the future, and we almost always end up with a Jenga tower when we are done.

So, when laying out your infrastructure, be sure to know what your actual requirements are, and don't add a lot of extraneous capabilities that you have to maintain and troubleshoot later.


I want to make one edit: I should have said "need the flexibility" instead of "may need the flexibility". Both the scale and the flexibility can be hard requirements. I don't want folks to think that I am saying that scale trumps flexibility/disjunct fabrics. They are equal, if that is the environment you operate in. Again, full disclosure: I'm with the Project Calico team.


Flannel relies on etcd. I had stability issues with v1 of etcd, which meant flannel couldn't route packets. Since then, the idea of using an immature SDN atop an immature distributed key-value store fills me with dread.


Interesting. But that's an old version. Calico [1] uses etcd as well (version 2.0.5 I think?) and I've never had any problems so far...

[1] http://www.projectcalico.org/


You can of course use etcd with weave... indeed weave can be run with no dependencies, so it may make sense as a way to bootstrap etcd ;-)


What do you use instead?


I gave up on SDNs and fell back to doing what anyone does without an SDN: published ports to the host interface and advertised the endpoint <host_ip>:<container_port> to etcd. Note this wasn't with kubernetes but with a similar system. Still reliant on etcd, which I wasn't happy with, but one less cog to go wrong.
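A minimal sketch of that registration pattern, assuming the etcd v2 HTTP API on its default port; the service name, key layout (/services/&lt;name&gt;/&lt;host:port&gt;), and TTL are invented for illustration, not taken from the commenter's system:

    import requests

    # Hypothetical values: the host's routable IP and the published container port.
    host_ip = "10.0.1.23"
    container_port = 8080
    service = "web"

    endpoint = "%s:%d" % (host_ip, container_port)

    # Register the endpoint in etcd (v2 HTTP API) with a TTL so dead hosts age out.
    # The key layout here is an assumption, not something from the original comment.
    url = "http://127.0.0.1:2379/v2/keys/services/%s/%s" % (service, endpoint)
    resp = requests.put(url, data={"value": endpoint, "ttl": 30})
    resp.raise_for_status()

    # A client can then discover endpoints by listing the directory:
    listing = requests.get("http://127.0.0.1:2379/v2/keys/services/%s" % service).json()
    for node in listing.get("node", {}).get("nodes", []):
        print(node["value"])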


More and more, I'm finding I like what comes from the CoreOS group.


I have to agree... I do hope to see something close to a drop-in CoreOS cluster, across a number of networks, that becomes easier to use in practice and on in-house servers... The reliability is still a concern, and when you are using a PFM (pure f*ing magic) solution, it needs to be rock solid. I think within 1-2 years CoreOS will be a choice solution... unfortunately I can't wait for that, and don't have the time to become intimate with every detail therein.

etcd had a lot of issues even when I was in the evaluation stage... I set up a dev cluster, with a node running on half a dozen machines (developer workstations included) in the office. The etcd cluster broke 3-4 times before I abandoned the thought of using it. With the 2.x release, my test network (3 virtual servers) was a lot more reliable, but I decided I didn't need it that much, opting to use my cloud provider's table storage solution for maintaining my minimal configuration...

For my deployment, the cluster is relatively small, 3 nodes mainly for redundancy... our load could pretty easily be handled by a single larger server. That said, I decided to just use dokku-alt for deployments directly to each server (each service with two instances on each server). To make deployments easier, I have internal DNS set up with wildcards, using v#-#-#.instance-#.service.host.domain.local as a deploy pattern for versioned targets and instance-#.service.host.domain.local for default instances. I have two nginx servers set up for SSL/SPDY termination and caching, configured to relay requests to the dokku cluster for user-facing services. This is turning out to be simpler than implementing etcd/flannel for that communications layer.

Each instance is passed the relevant host/service information so it can report its own health to table storage, and internal communication uses ZeroMQ REQ/REP sockets to minimize communication overhead and allow for reconnects, which is working a little better than relying on WebService/REST interfaces internally.
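A rough sketch of that kind of internal REQ/REP exchange using pyzmq; the port, message format, hostnames, and "health" payload here are assumptions for illustration, not details from the commenter's setup:

    import json
    import zmq

    # --- Responder side (runs inside a service instance) ---
    def serve(bind_addr="tcp://*:5555"):
        ctx = zmq.Context.instance()
        sock = ctx.socket(zmq.REP)
        sock.bind(bind_addr)
        while True:
            request = json.loads(sock.recv_string())   # e.g. {"op": "health"}
            if request.get("op") == "health":
                reply = {"status": "ok", "instance": "instance-1"}  # hypothetical payload
            else:
                reply = {"status": "unknown-op"}
            sock.send_string(json.dumps(reply))

    # --- Requester side (another instance, or a health checker) ---
    def check(connect_addr="tcp://instance-1.service.host.domain.local:5555"):
        ctx = zmq.Context.instance()
        sock = ctx.socket(zmq.REQ)
        # ZeroMQ re-establishes the underlying TCP connection automatically
        # if the peer goes away and comes back.
        sock.connect(connect_addr)
        sock.send_string(json.dumps({"op": "health"}))
        return json.loads(sock.recv_string())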





