I'd like to see a reasonable answer to this, because up until now I've been using data-only containers to mount directories into these app containers (i.e. Redis and PostgreSQL). Failover handling has been horrendous for me, because there's no easy way to migrate the data across machines unless you set up multiple hot slaves or something, and this happens often with CoreOS updates on the alpha channel.
Ultimately, I gave up and created a separate Ubuntu VM to run as an NFS server. Every CoreOS instance mounts it, and my data-only containers now map back to the NFS mount. This way, when CoreOS moves the Redis or PostgreSQL containers, the data is still available to them.
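If it helps to picture it, here's a rough sketch of that data-only-container-on-NFS arrangement (Python shelling out to the docker CLI; the /mnt/nfs/pgdata path and the container names are made up, and it assumes the host has already mounted the NFS export):

```python
# Minimal sketch of the data-only-container-on-NFS pattern described above.
# Assumes the host has already mounted the NFS export at /mnt/nfs (hypothetical
# path) and that the docker CLI is available.
import subprocess

NFS_PGDATA = "/mnt/nfs/pgdata"  # hypothetical NFS-backed host path

def sh(*cmd):
    """Run a command and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

# 1. Data-only container: it never runs anything; it just owns the volume
#    mapping from the NFS-backed host directory into the Postgres data dir.
sh("docker", "create",
   "-v", f"{NFS_PGDATA}:/var/lib/postgresql/data",
   "--name", "pgdata", "busybox", "true")

# 2. App container: inherits the volume from the data-only container, so
#    wherever the scheduler places it, the data path resolves to the NFS mount.
sh("docker", "run", "-d",
   "--name", "postgres",
   "--volumes-from", "pgdata",
   "postgres")
```

The data-only container never actually runs; it just exists so other containers can pull in the volume with --volumes-from, and because the volume is a bind from the NFS mount, it resolves to the same data on whichever CoreOS host the unit lands on.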
It's not my favourite setup, but it's worked well enough this past week that I haven't had to manually correct things while on vacation.
I'm hopeful that someone smarter/more experienced can share a better solution.
Mounting your database storage volume via NFS seems like a surefire way to cause yourself pain down the road. You might want to review the following (old but still relevant) article to understand some of the pitfalls:
The tl;dr is that PostgreSQL and MySQL (or really any good database engine running on *NIX systems) make very strong assumptions about the POSIX-ness of their underlying filesystem: flock() is fast, sync() calls actually mean data has hit disk, and so on.
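To make the "sync() means it's actually on disk" part concrete, this is roughly the write path a database's WAL relies on (illustrative Python, not taken from any particular engine; the file path is made up):

```python
# Rough illustration of the durability contract a database's write-ahead log
# depends on: write, then fsync, and only then treat the record as committed.
import os

def append_durably(path: str, record: bytes) -> None:
    # O_APPEND keeps concurrent writers from interleaving mid-record.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        os.write(fd, record)
        # The database assumes that when fsync() returns, the bytes are on
        # stable storage. That assumption is only as good as the filesystem,
        # mount options, and hardware underneath it.
        os.fsync(fd)
    finally:
        os.close(fd)

append_durably("/var/lib/example/wal.log", b"COMMIT txid=42\n")
```

Whether that fsync() really guarantees durability, and whether flock()/fcntl() locks behave as expected, depends on the filesystem, mount options, and hardware underneath it, which is exactly where NFS gets murky.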
Docker/CoreOS/etc. aren't a replacement for a good SAN or other reliable storage. If you value your data, I'd suggest keeping your core database(s) on dedicated machines/VMs (ideally SSD-equipped and UPS-backed). If managing those is too much work, consider a managed cloud database; DynamoDB and RDS can stand in for Redis and Postgres, respectively.
My immediate problem is that my software runs on a dedicated server hosted on-site; I have Internet access, but everything is hosted and run on a single massive VMware ESXi server. I don't have the benefit of cloud-based services like RDS. I could modify my architecture to use those instead, and that's something I've thought about doing.
As it stands, the VM server is UPS-backed, but does not run on SSDs. There is no SAN. If I were to fix the existing implementation, I would:
a) Add a secondary VM server as a redundant backup.
b) Add a SAN.
However, I don't think I can justify the capital expenditures for that. So what I'll likely do is replace the NFS server with a dedicated PostgreSQL server (VM), and perhaps start thinking about moving the majority of the infrastructure out of the building and into the cloud to take advantage of things like RDS. The latter is even more important for scalability as we add more customers.
I've created a tool to help with migrating volume data: github.com/cpuguy83/docker-volumes. I've also opened this PR to help bring forward volume management within Docker: https://github.com/docker/docker/pull/8484
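Independent of any particular tool's interface, the general idea behind moving volume data between hosts is to archive the volume through a throwaway container and unpack it on the other side. A rough sketch (Python around the docker CLI; container and path names are hypothetical, and it assumes a data-only container already exists on the target host):

```python
# Generic volume-migration idea (not necessarily how any specific tool works):
# archive a volume through a throwaway container on the old host, copy the
# tarball across, and unpack it into the volume container on the new host.
import subprocess

def export_volume(volume_container: str, mount_path: str, backup_dir: str) -> None:
    """Archive the data at `mount_path` (owned by `volume_container`) into
    `backup_dir` on the host as volume.tar."""
    subprocess.run(
        ["docker", "run", "--rm",
         "--volumes-from", volume_container,
         "-v", f"{backup_dir}:/backup",
         "busybox", "tar", "cf", "/backup/volume.tar", mount_path],
        check=True,
    )

def import_volume(volume_container: str, backup_dir: str) -> None:
    """On the target host: unpack volume.tar from `backup_dir` back into the
    volume owned by `volume_container`."""
    subprocess.run(
        ["docker", "run", "--rm",
         "--volumes-from", volume_container,
         "-v", f"{backup_dir}:/backup",
         "busybox", "tar", "xf", "/backup/volume.tar", "-C", "/"],
        check=True,
    )

# Hypothetical usage: export on host A, copy the tarball to host B (scp/rsync),
# then import into a freshly created "pgdata" container there.
# export_volume("pgdata", "/var/lib/postgresql/data", "/tmp/pg-backup")
# import_volume("pgdata", "/tmp/pg-backup")
```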