the strong points of the weave network, as it is right now, are ease of use (not to be sniffed at) and enormous flexibility. it is really quite easy to create an application involving containers that runs anywhere and does not commit you to specific architectural choices...
typically though, one weave network might be used by one app, or just a few. but you might run a lot of weave networks
weave works very nicely with kubernetes - later I shall dig out a few links for this
in our own tests, throughput varies by payload size; we tend to think weave-as-is is best compared with using, for example, amazon cloud networking directly
for users with higher perf needs, we have a fast data path in the works that uses the same data path that ovs implementations use... the hard problem to solve here is making that stuff incredibly easy and robust w.r.t. deployment choices -- see above :-)
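as a rough sketch of how to check the payload-size effect yourself (this assumes weave and docker are already installed on two hosts; the addresses and the iperf3 image are just examples, not anything weave-specific):

    # host1: start weave and an iperf3 server on the weave network
    weave launch
    weave run 10.2.1.1/24 --name iperf-server networkstatic/iperf3 -s

    # host2: join host1's weave network and point an iperf3 client at the server's weave address
    weave launch <host1-address>
    weave run 10.2.1.2/24 --name iperf-client networkstatic/iperf3 -c 10.2.1.1
    docker logs -f iperf-client    # throughput report; vary iperf3's -l option to change payload size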
Correct! Once upon a time people said Amazon cloud was too slow. Then they said it wasn't suitable for large workloads. Then they said it did not make money ... etc etc. I'm not saying we are like Amazon, I'm just saying that making new stuff excellent in all dimensions at once is hard ;-)
I'd agree that Weave is great at what it does, which is providing just-enough-networking to make Docker applications as easy to design and construct as applications running on a normal LAN. This is a huge win, since I don't have to worry about portmappers and trying to discover what dynamically allocated ports are being used at runtime to connect two services.
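To make that concrete, here is a minimal sketch of the "no port mapping" point, assuming weave is already launched on the host; the addresses and the application image (my-app) are purely illustrative:

    # with plain docker you publish ports and then have to discover the host mapping:
    #   docker run -d -P --name db redis && docker port db 6379
    # with weave, containers get their own addresses, so services just dial each other:
    weave run 10.2.1.10/24 --name db redis
    weave run 10.2.1.11/24 -e REDIS_ADDR=10.2.1.10:6379 --name app my-app

The app reaches the db over the weave network on its well-known port, with no dynamically allocated host ports to track.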
I use Weave as the default SDN for Clocker [1] because of this simplicity, and also because it is server-less and will work on a single server Docker cloud or a cluster of tens of machines without having to think about architecture. Of course, Clocker supports pluggable SDN providers so if your networking demands are not met by Weave you can change to another provider.
I don't think Alexis (or any other Docker SDN provider) is suggesting that their software should be used for low-latency microsecond sensitive trading applications. You have to use the right tools for the job, and in this case Weave's sweet spot is its simplicity and reliability.
I REALLY wish that Clocker would support (and document how to use) something besides Weave - Weave is intolerably slow for anything that requires more serious throughput between nodes. It is very, very unfortunate that Clocker doesn't document how to not use Weave (for example, simply use whatever is already in place), as the rest of Clocker rocks, and the seemingly hard dependency on Weave makes it undeployable for serious production use.
I did pop into their IRC channel a few times with a question around this, but over the space of 3 days all activity on the channel was tumbleweed and crickets...
The latest version of Clocker does support Calico as well, now! But I agree there isn't much in the way of documentation on how to change this. I updated the README and the main page at http://clocker.io/ to reflect the changes in Clocker 0.8.1, but it's not immediately obvious. Try this:
that's great, thanks for answering! I have not yet looked at Calico - I have OVS-VXLAN between my worker nodes, which is a simple solution that works great. Is it possible to say "use whatever is already in place"?
It may be possible by writing a slightly modified YAML file for the Clocker blueprint itself. To avoid dragging this thread further off-topic, please drop me a line at the email address in my profile, and I can probably help you.
Wow. Is that a full 8Gbit/sec of data per host or a logical 8Gbit/sec of raw frames?
Either way, you're an ideal candidate for closed beta access to the performance focused container networking tools my startup is developing. I can contact you via email if you're interested.
on each host? what kind of physical network are you using, and how many containers per host? how many hosts? let me know if I should email about this instead.
I'm thinking replies like this should come from an individual, not an account representing a company. Having an individual behind the words (instead of a loose consensus mechanism for a company) goes a long way towards establishing trust. Hoping to see a direct response to the observations raised in the post!
Using an account that I share with other people in the same team is not the same as 'marketing'. I am sorry if this somehow offends, but think of the handle as just one poster.
Just a casual observation, but you don't appear to be listening to me. You are making rationalizations of my statements instead of trying to see my point. For example, I didn't say it was the same as marketing, I said that it's 'fine for marketing', i.e. posting in unison when trying to spread interest would be a fine thing for a group account.
What I may be failing to communicate here is how trust is established between entities like companies and entities like individuals. When there is an issue of trust with individuals, like in the OP's post, it's best to establish 'point-to-point' communications with people you know so you can build up the trusted relationship. Doing that as a company doesn't really work well.
BTW, comments like "weave has lots of very happy users who find that weave is plenty fast enough for their purposes" are implicit trust statements based on bandwagon bias. What you are actually saying is that there exists a group of people who don't feel the way the poster feels and are happy with the product's current state. The implication of your statement is that others should feel this way, but there's really no way to establish that unless we hear from all those people directly. This is yet another example of why consensus sucks when trying to establish the truth for an individual. (Bitcoin has figured this out, however.)
I think about trust a lot for work, so take my comments with a grain of salt. Nobody died here. :)
We're using weave a lot at Cloud 66; we find that for the majority use-case, inter-container comms at this throughput are sufficient; for high-throughput endpoints like DBs there is an argument for not putting them in containers in the first place… (but that's another discussion)
Except that people who can recognize when something is indeed "good enough" and then move on to the next most important thing are ultimately the only people who get things done and accomplish goals.
Paul, I think our work is good. We thought about our approach very carefully, and we have built systems like this before. The main difficulty is combining moving fast with limited resources, with delivering something supportable and improvable, without creating technical debt.
the author, who was part of the MS Azure team at the time, said: "as of today, this Weave/CoreOS tool and doc is the only way I was able to provision a Kubernetes cluster in Azure"
weave has lots of very happy users who find that weave is plenty fast enough for their purposes, see eg http://blog.weave.works/2015/02/24/get-your-kicks-on-cloud66...