I find that slightly revisionist, though I suppose it depends on your definition of "early on".
To my recollection etcd had a very rough patch early on until they overhauled their raft subsystem. Hashicorp caught some flak at the time for passing on the existing Go raft implementation and writing their own. Go ecosystem fragmentation was particularly bad and a very hot topic at the time. I believe that etcd's newfound stability and subsequent track record after the new implementation vindicated Hashicorp somewhat.
The original go-raft was one of the first raft implementations. At that time, the raft paper had not even been officially published. Many other attempts from around that time were not very successful either, including go-raft.
Making a production-ready consensus implementation is not easy: https://www.cs.utexas.edu/users/lorenzo/corsi/cs380d/papers/.... Things like pipelining, batching, flow control, and asynchronous snapshots had not been extensively explored in the context of raft, and not much effort had been put into testing, due to the immaturity of raft's applications at the time.
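To make one of those items concrete, batching means coalescing many client proposals into a single consensus round instead of replicating them one at a time. Below is a minimal, hypothetical Go sketch of that pattern; the `propose` and `batchLoop` names are made up for illustration and are not etcd code.

```go
package main

import (
	"fmt"
	"time"
)

// propose stands in for handing a batch to the consensus module
// (e.g. one raft append covering many client requests).
func propose(batch [][]byte) {
	fmt.Printf("replicating batch of %d proposals\n", len(batch))
}

// batchLoop coalesces individual proposals arriving on in into larger
// batches, flushing when maxBatch is reached or when maxWait expires.
func batchLoop(in <-chan []byte, maxBatch int, maxWait time.Duration) {
	var batch [][]byte
	timer := time.NewTimer(maxWait)
	defer timer.Stop()

	flush := func() {
		if len(batch) == 0 {
			return
		}
		propose(batch)
		batch = nil
	}

	for {
		select {
		case p, ok := <-in:
			if !ok {
				flush()
				return
			}
			batch = append(batch, p)
			if len(batch) >= maxBatch {
				flush()
			}
		case <-timer.C:
			flush()
			timer.Reset(maxWait)
		}
	}
}

func main() {
	in := make(chan []byte)
	go func() {
		for i := 0; i < 10; i++ {
			in <- []byte(fmt.Sprintf("req-%d", i))
		}
		close(in)
	}()
	batchLoop(in, 4, 10*time.Millisecond)
}
```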
We realized the problem a few months after the etcd alpha was initially released and became popular. However, I went back to CMU to continue my master's degree for a year, which slowed down progress.
After I came back from school, together with Blake and Yicheng from CoreOS, and later on Ben from Cockroach Labs, we built a solid raft implementation as our first priority. Once we put etcd/raft inside etcd2, the stability of etcd improved greatly. That was about 1.5 years after the initial release. Now etcd/raft powers many production-level distributed systems: tikv, cockroachdb, dgraph, and many others.
Over the last couple of years, the focus of etcd/raft has always been stability and nothing else (although people blame us about usability :P).
etcd has succeeded as a piece of distributed systems infrastructure beyond our wildest expectations. When Alex Polvi, Xiang Li, and I started the project as a README in the summer of 2013, we identified that there still was no consensus database that was developer friendly, easily secured, production ready, and based on a well-understood consensus algorithm. Largely, we got lucky with good market timing, the invention of the Raft algorithm, and the explosion of good tooling around the Go language. This led to early success as etcd was used in locksmith, skydns, and the vulcan load balancer.
As the years went on we got lucky again when the Kubernetes project chose etcd as its primary key-value database. This helped establish the project as a must-use piece of infrastructure software, which went on to influence the technology selection of storage, database, networking, and many other projects. Just check out all of the stickers of projects relying on etcd that I could find at KubeCon here in Seattle: https://twitter.com/BrandonPhilips/status/107370136987218739...
Some notable projects include: Kubernetes, Rook, CoreDNS, Uber M3, Trillian, Vitess, TiDB, and many many others.
Moving into the CNCF will help to bring a few things to the project:
- Funding and resources to complete regular third-party security audits and correctness audits
- On-call rotation and team for the discovery.etcd.io system
- Assistance in maintaining a documentation website
- Resources to fund face-to-face meetup groups and maintainer meetings
As a closing remark I want to thank the over 450 contributors and the entire maintainer team for bringing the project to this point. We are solving an important distributed systems problem with a focused piece of technology.
It was a lot of fun watching the project and community evolve. I think you and the team did an excellent job. I remember a huge spike in users around when discovery.etcd.io launched. It was really a game changer for us building large-scale, multi-data-center telecom systems. I still remember bootstrapping the first cluster in a 24-data-center test and having things blow up, particularly in higher-latency (cross-DC) environments.
Fast-forward 4 months and the project had grown and scaled to support the influx of new, curious devs and use cases that stretched the bounds of what was possible at the time. At the end of those 4 months, we had a 128-node cluster that stayed up for years and still powers all of the emergency notifications in a few US states!
I wrote the initial implementation of the raft subsystem and it was definitely not a copy/paste. We started from scratch (using etcd's core raft) with the transport layer being grpc. My initial experiment can be found in this repository [1]. I then took the code from that experiment and incorporated it into Swarmkit [2]. From there we went through many iterations on the initial code base and improved the UI with Docker swarm `init`/`join`/`leave` to make the experience of managing the cluster "friendly".
We spent quite some time evaluating different raft and paxos implementations (mostly the Consul and etcd raft libraries), and found etcd's to be the most stable and flexible for our use case. It was very easy, for example, to swap the transport layer to use grpc. The fact that etcd's implementation is exposed as a simple state machine also makes it much easier to reason about in complex scenarios for debugging purposes, instead of digging into multiple layers of abstractions.
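For readers who haven't used it, the library hands you the state machine and nothing else: you pull Ready structs off a channel, persist entries, push outbound messages through whatever transport you like (gRPC in swarmkit's case), apply committed entries, and call Advance. The single-node sketch below is only a rough illustration of that loop, using the CoreOS-era import path; `sendOverGRPC` and `applyEntry` are placeholders, not swarmkit code.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/raft"
	"github.com/coreos/etcd/raft/raftpb"
)

// sendOverGRPC stands in for a user-supplied transport. The raft package
// never touches the network itself, which is what makes the transport swappable.
func sendOverGRPC(msgs []raftpb.Message) {}

// applyEntry stands in for applying a committed entry to the application's
// own state machine.
func applyEntry(ent raftpb.Entry) { log.Printf("applied: %s", ent.Data) }

func main() {
	storage := raft.NewMemoryStorage()
	n := raft.StartNode(&raft.Config{
		ID:              1,
		ElectionTick:    10,
		HeartbeatTick:   1,
		Storage:         storage,
		MaxSizePerMsg:   1024 * 1024,
		MaxInflightMsgs: 256,
	}, []raft.Peer{{ID: 1}}) // single-node cluster for the sake of the sketch
	defer n.Stop()

	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()

	// Propose something once the single node has had time to elect itself.
	go func() {
		time.Sleep(2 * time.Second)
		n.Propose(context.TODO(), []byte("hello"))
	}()

	deadline := time.After(5 * time.Second)
	for {
		select {
		case <-deadline:
			return
		case <-ticker.C:
			n.Tick()
		case rd := <-n.Ready():
			// 1. Persist state and entries before sending anything out.
			if !raft.IsEmptyHardState(rd.HardState) {
				storage.SetHardState(rd.HardState)
			}
			storage.Append(rd.Entries)
			// 2. Ship outbound messages over your own transport.
			sendOverGRPC(rd.Messages)
			// 3. Apply what the cluster has committed.
			for _, ent := range rd.CommittedEntries {
				switch ent.Type {
				case raftpb.EntryConfChange:
					var cc raftpb.ConfChange
					cc.Unmarshal(ent.Data)
					n.ApplyConfChange(cc)
				case raftpb.EntryNormal:
					if len(ent.Data) > 0 {
						applyEntry(ent)
					}
				}
			}
			// 4. Tell raft this batch of work is done.
			n.Advance()
		}
	}
}
```

Because the library never opens a socket or writes to disk on its own, swapping the transport or storage layer is just a matter of changing what you do inside this loop.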
In retrospect, this came with quite a learning curve. We had to deal with issues caused by our own misunderstandings of how to use the library properly. At the same time, the fact that the developers favored stability over user friendliness was exactly what we found attractive about etcd's raft. Additionally, the CoreOS developers were super friendly and helped us fix these issues. We reported and fixed some bugs as well. Kudos to them for all the help they provided at the time.
What I remember is, during DockerCon in June 2016, I went into the code to see how it worked, and I found a top-level file setting up data structures and handlers that seemed to be 90% the same as the equivalent file in etcd. And the underlying implementation was reused via vendoring.
Maybe this rings a bell with you and you can tell me what I saw, because I can't find it now.
Maybe I dreamed the whole thing.
I did, and still do, think integrating etcd into Swarm Mode was a masterstroke; we had spent the previous two years working to avoid "first you must install etcd" in a different way that nobody got. Afterwards we created kubeadm to ape the 'init' and 'join' functionality.
Are you sure? I’ve spent quite some time playing with the internals of Docker Swarm / swarmkit last year and I’m quite confident it wasn’t true then. As far as I know they call go-raft directly because they only need a fraction of the features offered by etcd.
rkt was needed to push a number of ideas forward in the ecosystem at the time (4 years ago, 2014) and part of its legacy is the creation of technologies that provided plugin interfaces for the container ecosystem.
The Container Networking Interface was directly created by the work in rkt and continues on today inside of Kubernetes and the CNCF. This work made it possible for an ecosystem of networking solutions to exist that could take advantage of everything Linux has to offer.
The creation of the Kubernetes Container Runtime Interface (CRI) was also spawned, in part, by the existence of rkt and the need to consider container runtimes for use with Kubernetes. It was a long hard engineering effort but I think the separation that CRI forced the kubelet to go through and the competition of various runtimes is good for the ecosystem and the resilience of the Kubernetes project.
It is very unlikely that rkt will be part of the Kubernetes ecosystem at this point, with the existence of containerd and CRI-O as Kube CRI solutions on Linux. There were missed opportunities on a variety of fronts along the way. But rkt continues to be used by many organizations for other niche use cases of containers, and the shifts that rkt caused above were positive improvements for the Kubernetes ecosystem.
Pre-3.5.0, ZooKeeper reconfiguration of a running cluster was also much harder - that was a significant discussion point on Kubernetes when we had the etcd vs. (anything) discussions early after open sourcing.
I still think etcd's total ordering over history made reasoning about changes in the system easier while we were writing the first versions of the controllers, caches, and list-watch loops. ZK had partial ordering, and I was leery of that at the time.
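To make the total-ordering point concrete: in etcd every write is stamped with a single cluster-wide revision, so a list-watch loop can list at revision R and then watch from R+1 without missing or reordering anything in between. The rough sketch below uses today's v3 Go client rather than the v2 API from that era, and the endpoint and key prefix are made up.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()

	// "List": read the current state of a prefix and remember the
	// revision the read was served at.
	resp, err := cli.Get(ctx, "/registry/pods/", clientv3.WithPrefix())
	if err != nil {
		log.Fatal(err)
	}
	rev := resp.Header.Revision
	log.Printf("listed %d keys at revision %d", len(resp.Kvs), rev)

	// "Watch": resume from the very next revision. Because revisions are
	// totally ordered across the whole keyspace, nothing between the list
	// and the watch can be missed or observed out of order.
	wch := cli.Watch(ctx, "/registry/pods/",
		clientv3.WithPrefix(), clientv3.WithRev(rev+1))
	for wresp := range wch {
		for _, ev := range wresp.Events {
			log.Printf("rev=%d %s %q", ev.Kv.ModRevision, ev.Type, ev.Kv.Key)
		}
	}
}
```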
It really depends on your use case but one of the main "pros" of etcd is the narrow latency band when writing.
This article is likely biased toward the good parts of etcd, as it's written by CoreOS, but you can see how the latency of writes in etcd is very consistent compared to the wide range of latencies experienced writing to ZooKeeper or Consul:
There are other "pros" related to the fact that it's been designed for "cloud native" architectures like kubernetes. For example, FoundationDB can perform on average at sub-milisecond latency for writes (https://apple.github.io/foundationdb/benchmarking.html) versus 1.6ms on etcd however configuring FoudationDB to run programmatically is challenging as it was designed in an environment where ops people rack physical servers.
All key/value stores have good points and bad points but that's in relation to your use case. If write or read throughput isn't the most important, say it's consistency or availability, you may make a different choice about what are "pros" and what are "cons".
Another "pro" or "con" may be the language its written in or how it runs or deploys. If you run a Java shop and have tons of experience writing and deploying Java code, it may be in your best interest to be able to have more control by using a project written in Java. conversely, if you have all go engineers, you may want a project written in go. If you only have junior engineers, you may want whatever is easiest to operate and deploy.
I wonder if this is a case of Red Hat knowing that the clock is ticking and making sure that the critical software they worked on is available in an open fashion. One only has to look at Sun and MySQL to see what can happen to a once vibrant open source offering after an acquisition.
I don't understand this line of thinking. IBM has plenty of people contributing to open-source projects. I wouldn't be surprised if they contributed to etcd even before the acquisition. When it comes to their open-source track record, IBM and Oracle are nothing alike.
Red Hat does not own anything that's valuable aside from their developers, who chose to work at Red Hat due to their pro-FOSS positioning. If IBM chose to start shutting projects like Fedora down or move in the direction of closed-source, these developers would have no desire to remain, and would leave, making that $40B acquisition worthless.
Red Hat has extensive customer relationships, customer databases, contracts for future revenue, partnerships, operational processes, . . . product IP is just a slice of the pie.
Glad to see that Red Hat is still committed to the Open Source movement after so many naysayers predicted that all contributions going forward would be stymied due to the announcement of the acquisition.
Not saying it will happen (just that it usually does), but I almost never see acquired companies immediately turn into the parent. Usually the acquired maintains its course until attrition and cross-pollination replace its original culture with that of the parent. The acquired company eventually exists only as a collection of intellectual property and history. Could take many years depending on how tightly IBM squeezes. That and the acquisition isn't even finalized yet.
The acquisition hasn't even happened yet, so there's no influence to see. In fact, it's illegal for RH to start changing things for IBM before the acquisition closes.