I've got a strong interest in overlay networking solutions like Weave, but I'm not an expert on Docker or other container solutions.
What's the new thing here? If I understood correctly, it seems that you can connect your Docker host to an overlay network, so your containers can access other containers and resources through it. Am I correct to think this facilitates orchestration of the containers' network?
Disclosure: I am behind https://wormhole.network which could be seen as some sort of Weave competitor, but it's not. It covers other use cases, even though there's some overlap, e.g. multi-host overlay networking for containers https://github.com/pjperez/docker-wormhole - it doesn't require changes on the host itself, but it can't be orchestrated.
I haven't come across your project before. Any relation to my (similarly named) project https://github.com/vishvananda/wormhole ? I always thought that one of the most interesting pieces of my project was easy IPsec tunnel setup. It turns out that setting up IPsec tunnels is pretty tricky.
Holy cr*p! haha! first time I see your project, too.
I think both are a bit different. My project is based on SoftEther and uses an external server as a pivot point, so all the members of the network only need outbound 443/TCP access. It's not point-to-point, unfortunately. The idea is to make sure it works in as many scenarios as possible.
I'm just adding the server management and simplification layer on top, but both server and clients are 100% SoftEther.
I haven't looked at this in detail, but does this work with the standard networking features introduced in Docker 1.9 [1] and 1.10 [2]? Can I still use 'docker network create/connect' and the DNS service discovery features of Docker? Can containers interoperate regardless of the choice of Docker plugin, or will they only work on a plugin based on the weave proxy? The wording in the post leaves that ambiguous.
Well, sort of. Before Docker's native networking features, Weave relied on a proxy that intercepts Docker API calls to set up its network before passing requests on to Docker. It was (and is) a workaround for the lack of plugin support in Docker, and even after network plugin support was added, it still allows additional functionality that is hard to implement via the plugin mechanism. There is now a Docker network plugin for Weave too.
What they've done now, if I've understood correctly, is effectively leverage that proxy to intercept Docker API calls: if a call requests a network provided by a CNI plugin, they call CNI "on behalf of Docker" and then pass a modified API call on to Docker. That way you can have Docker, Kubernetes and Rocket on the same overlay network.
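For context, the proxy-based workflow looks roughly like this. A sketch assuming a stock Weave install on the host; `weave launch` and the `weave env` helper are the standard Weave CLI, and the container name/image are just placeholders:

```shell
# Start the Weave router and its Docker API proxy on this host
weave launch

# Point the Docker client at the Weave proxy instead of talking
# to the Docker daemon directly; the proxy rewrites API calls
# before forwarding them on
eval $(weave env)

# This 'docker run' now goes through the proxy, which attaches
# the container to the Weave overlay network transparently
docker run -d --name web nginx
```

The point is that nothing in the `docker run` invocation itself changes; the interception happens at the API layer.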
> Can containers interoperate regardless of the choice of Docker plugin, or will they only work on a plugin based on the weave proxy?
Containers don't care what network you configure. Basically, Docker will just use a bridge interface, assign IP addresses to containers on that bridge, and optionally expose ports on the host. The Docker networking support lets Docker query an external plugin API to obtain the details to use for a container. Kubernetes and Rocket implement a different plugin API for the same purpose. But in both cases, all of this happens before the container is started.
Once it's started, the container just sees an interface bound to a suitable IP, so your containers shouldn't need to care.
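To illustrate the plugin flow described above, a sketch using the standard `docker network` commands; the driver name `weave` and the network/container names here are assumptions for the example, not taken from the post:

```shell
# Create a network backed by an external plugin driver instead of
# Docker's default bridge (the driver name depends on the plugin)
docker network create --driver weave mynet

# Docker asks the plugin for the interface/IP details, wires up
# the container accordingly, and only then starts it
docker run -d --net=mynet --name db redis

# Inside the container there's just an ordinary interface with an
# IP; the application doesn't know or care which driver created it
docker exec db ip addr
```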
Other than IPv6 support, is there a reason to use Calico/Weave over Flannel? We've been very happy with Flannel, especially using the clean CoreOS-Flannel integration.
In the case of Weave: encryption, multicast, fault tolerance (fully distributed via a CRDT-based mesh), and it integrates beautifully with containers (it can be used as a Docker network plugin and provides out-of-the-box service discovery). Moreover, it's stupidly simple to set up and use (zero-conf, no KV store required).
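A minimal two-host setup sketch to show the zero-conf claim; `weave launch` and its `--password` flag (which enables encryption) are part of the standard Weave CLI, while the hostnames and password here are placeholders:

```shell
# On host1: start Weave with encryption enabled
weave launch --password s3cret

# On host2: same, pointing at host1 so the two peers form a mesh
# (no central KV store; peers gossip membership via a CRDT)
weave launch --password s3cret host1

# On either host: run containers on the shared overlay
eval $(weave env)
docker run -d --name web nginx
```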
I'm looking forward to setting up some trial deployments of calico at my workplace.
While bridged and/or overlay networks are easy to understand, native end-to-end routing between containers with regular IP datagrams, plus container-level addressability, has been on my wishlist for a long time.
Ahh ... had to search for "calico tom denham" to find http://www.projectcalico.org/ ok ... that makes more sense than a company dealing with biology ...