I have to agree... I do hope to see something close to a drop-in CoreOS cluster that becomes easier to use in practice, both across networks and on in-house servers... Reliability is still a concern, and when you are using a PFM (pure f*ing magic) solution, it needs to be rock solid. I think within 1-2 years CoreOS will be a choice solution... unfortunately I can't wait that long, and don't have the time to become intimate with every detail therein.
Although etcd had a lot of issues even when I was in the evaluation stage... I set up a dev cluster with a node running on half a dozen machines (developer workstations included) in the office. The etcd cluster broke 3-4 times before I abandoned the idea of using it. With the 2.x release, my test network (3 virtual servers) was a lot more reliable, but I decided I didn't need it that badly, opting to use my cloud provider's table storage for maintaining my minimal configuration...
For my deployment, the cluster is relatively small: 3 nodes, mainly for redundancy... our load could pretty easily be handled by a single larger server. That said, I decided to just use dokku-alt for deployments directly to each server (each service with two instances on each server). To make deployments easier, I have internal DNS set up with wildcards, using v#-#-#.instance-#.service.host.domain.local as the deploy pattern for versioned targets and instance-#.service.host.domain.local for default instances. I have two nginx servers set up for SSL/SPDY termination and caching, configured to relay requests to the dokku cluster for user-facing services. This is turning out to be simpler than implementing etcd/flannel for that communication layer.
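For the curious, the front-end piece looks roughly like the sketch below: one of the two nginx boxes terminating SSL/SPDY, caching, and round-robining to the dokku-alt hosts. The hostnames, cert paths, and upstream addresses are placeholders I made up, not my actual config (and `spdy` on the `listen` line assumes an nginx build from the SPDY era, before HTTP/2 replaced it):

```nginx
# shared cache zone for user-facing responses
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:10m max_size=1g;

upstream dokku_cluster {
    # the three dokku-alt nodes; nginx skips peers that stop responding
    server host1.domain.local:80;
    server host2.domain.local:80;
    server host3.domain.local:80;
}

server {
    listen 443 ssl spdy;
    # wildcard catches both the versioned and default instance hostnames
    server_name *.domain.local;

    ssl_certificate     /etc/nginx/ssl/wildcard.domain.local.crt;
    ssl_certificate_key /etc/nginx/ssl/wildcard.domain.local.key;

    location / {
        proxy_cache edge;
        proxy_set_header Host $host;  # preserve the instance/service hostname
        proxy_pass http://dokku_cluster;
    }
}
```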
Each instance is passed the relevant host/service information so it can report its own health to table storage, and internal communication uses ZeroMQ REQ/REP sockets to minimize communication overhead and allow for reconnects, which is working a little better than relying on WebService/REST interfaces internally.
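The REQ/REP exchange is basically this shape, sketched here with pyzmq; the service name and health payload are invented for illustration, and `inproc://` keeps the example self-contained where real services would connect over `tcp://`:

```python
import json
import threading
import zmq

ctx = zmq.Context.instance()  # inproc endpoints require a shared context

def serve(ready):
    # REP side: one service instance answering a health report
    rep = ctx.socket(zmq.REP)
    rep.bind("inproc://health")
    ready.set()  # bind must complete before the REQ side connects
    msg = json.loads(rep.recv().decode())
    # ack so the requester knows the report landed
    rep.send(json.dumps({"ok": True, "service": msg["service"]}).encode())
    rep.close()

ready = threading.Event()
t = threading.Thread(target=serve, args=(ready,))
t.start()
ready.wait()

# REQ side: strict send/recv lockstep, which is what keeps overhead low
req = ctx.socket(zmq.REQ)
req.connect("inproc://health")
req.send(json.dumps({"service": "instance-1.billing", "healthy": True}).encode())
reply = json.loads(req.recv().decode())
print(reply["ok"])  # True
req.close()
t.join()
```

The lockstep REQ/REP pairing is also what makes reconnects painless: a REQ socket can simply be closed and reopened without any session state to rebuild, unlike a pooled HTTP client.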