Basically it means 1) it's stable enough to be useful, 2) it's not yet stable enough that we feel comfortable telling you to rely on it in production, and 3) the UI and integration with the rest of the Docker platform are subject to change.
It looks interesting, but it seems like a very Docker-centric Packer[0] in that it has code to create images ready to go with Docker on various cloud providers and virtual machines.
The real hurdles you'll be solving with Docker come when you have containers on separate hosts. Will Machine handle configuring firewalls, SSL, SSH keys, and all that jazz across hosts so containers can reach each other? I've automated a lot of this myself, but I would love something that does it for you (other than the existing discovery service tools, which often introduce problems of their own). From the post it seems like Machine will also handle this, but I wonder how, and how well.
> It looks interesting but seems like a very Docker-centric Packer
I think you mean a Docker-centric Vagrant, no? Packer is for building images, Vagrant is for launching environments.
You can use Packer to build Docker images akin to `docker build`, except Packer will let you use a variety of provisioners to build your Docker image like Chef, Ansible, or just simple Bash scripts. It's `docker build` without the Dockerfiles. [1]
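For illustration, a minimal sketch of that flow using Packer's docker builder with a shell provisioner (the base image, packages, and repository name are just placeholders):

    cat > docker-ubuntu.json <<'EOF'
    {
      "builders": [
        { "type": "docker", "image": "ubuntu:14.04", "commit": true }
      ],
      "provisioners": [
        { "type": "shell", "inline": ["apt-get update", "apt-get install -y nginx"] }
      ],
      "post-processors": [
        { "type": "docker-tag", "repository": "example/nginx", "tag": "latest" }
      ]
    }
    EOF
    packer build docker-ubuntu.json

The provisioner runs inside a container and the result gets committed and tagged as an image, with no Dockerfile involved.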
Docker Machine seems geared towards launching a resource that is ready to run Docker, whether it be a VM running locally or a remote instance in some cloud. It's a single-purpose Vagrant, to really stretch the analogy. :)
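As a rough sketch (the machine name 'dev' is arbitrary), getting a Docker-ready VM locally and pointing your docker client at it looks something like:

    docker-machine create -d virtualbox dev
    eval "$(docker-machine env dev)"
    docker ps    # now talking to the Docker daemon inside the new VM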
Once you have a resource that runs Docker, I think that's where Docker Compose [2] steps in. Docker Compose is more like the full Vagrant in that it spins up a full, functioning environment with running services and whatnot.
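A minimal sketch, assuming a hypothetical web app with a Postgres backend (service names and ports are made up):

    cat > docker-compose.yml <<'EOF'
    web:
      build: .
      ports:
        - "8000:8000"
      links:
        - db
    db:
      image: postgres
    EOF

    # brings up both services against whichever daemon DOCKER_HOST points at
    docker-compose up -d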
I've been using Machine for the past month and it's pretty good. It handles the TLS certs for talking to Docker over an IP address, which are an absolute PITA to manage manually. I hated creating new certificates for each machine, whereas Machine does that for us. There's an open PR for adding password-protected CA key files, too.
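To give a sense of what Machine takes off your plate (the cert paths, IP, and machine name below are just illustrative), the manual way versus the Machine way looks roughly like:

    # manual: generate CA, server, and client certs per host, then every call needs
    docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
           -H=tcp://203.0.113.10:2376 ps

    # with machine: certs are generated per machine and wired up for you
    eval "$(docker-machine env prod-1)"
    docker ps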
Unfortunately, we use a custom SSH port, which isn't supported yet. It also adds a machine-specific public SSH key for connections.
The docker guys are great at accepting patches, so we can build this all if we need it.
> The real hurdle and problems you'll be solving with Docker are when you have containers on separate hosts.
Agreed! There are a lot of problems to solve here, and when the recent motion in Docker networking / plugins model settles down a bit and a solution gets merged into Docker proper, it will make inter-container communication across hosts easier.
> Will Machine handle configuring firewalls, SSL, SSH keys and all that jazz across hosts so containers can reach each other? I've automated a lot of this myself but would love something that does it for you
We definitely want to make the lives of operators a lot easier. It's still early days for figuring out the scope beyond "let's make a tool to get a lot of Docker daemons up and running, in a very straightforward way", so we'd really love to have you get involved and give your feedback! It handles some SSH key / TLS cert management right now, but there's a huge surface area of things we need to consider carefully.
This looks pretty interesting. Could someone give me a good use case for this? I suppose spinning up a dev environment on a host other than localhost could be useful? I could also see this being useful as an interface to multiple Docker-installed production environments.
Multiple dev environments, dev env resets, and using something other than VirtualBox are a few for local use. It's also nice to just load your cloud provider keys and not have to mess with maintaining images, etc. That's what I use it for. Machine also assists with setting up TLS (Machine only creates TLS-secured instances at the moment). One more thing: Machine has experimental support for creating TLS-secured Swarm clusters, which makes it nice to test out Swarm somewhere besides locally.
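For the Swarm part, the experimental flow looks roughly like this (the discovery token is a placeholder you'd normally get from `docker run swarm create`):

    docker-machine create -d virtualbox \
        --swarm --swarm-master --swarm-discovery token://<token> swarm-master
    docker-machine create -d virtualbox \
        --swarm --swarm-discovery token://<token> swarm-node-01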
Deploying across multiple clouds. This abstracts away virtually all of the details.
All you have to worry about is that your containers run on Docker, and as long as your provider is supported by Machine, nothing else matters, and your code isn't polluted by anything provider-specific.
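Concretely, and hedging a bit on the exact flags (credentials and the image name are placeholders; if I recall correctly, `docker-machine config` prints the connection flags for a machine), the workflow looks the same wherever the host lives:

    docker-machine create -d digitalocean --digitalocean-access-token=$DO_TOKEN web-do
    docker-machine create -d virtualbox web-local

    # identical deploy step against either host
    docker $(docker-machine config web-do)    run -d -p 80:8000 example/web
    docker $(docker-machine config web-local) run -d -p 80:8000 example/web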
To expand on this comment, common issues may include...
(1) Availability (something fails eventually), both in terms of monitoring (finding out) and failover (how to resolve it when it happens)
(2) Security (Who had access to which key when? Are they still at the company? Where's the audit trail? Third-party managed security credentials ... can you say side-channel?)
(3) Legal jurisdiction (Are we allowed to put x in y location? Do we want government x using law y to potentially perform action z?)
(4) Latency (Site x on provider y is experiencing a DDoS)
(5) Heterogeneous management (What happens if something auto-managed gets manually managed for a while? Do things break?)
(6) Resource lead time and failure impact (What happens when the hardware you just wanted to spin up from the endless cloud supply isn't available?)
(7) Actual performance characteristics (Usually every provider offers only a specific selection of environments, with weak or no SLAs/guarantees on real-world performance.)
... and so on. A lot of these are sort of cloud restatements of the RFC1925[1] fundamental truths of networking. Also, this is all docker-specific, so it hinges on people wanting to build their infrastructure on a self-described insecure and immature platform and tie their whole management paradigm to it.
FWIW, I am still working on an OS and virtualization platform agnostic alternative devops process management tool[2] and am trying to get my company to agree to open source it. It can easily wrap docker, having far broader scope, and was designed to be future-proofed... it is thus potentially useful for embedded dev, the BSDs, real hardware, clusters, etc. It takes a very different paradigm to docker, abstracting cloud providers, services and platforms separately, and allowing the former to figure out how to connect and manage the other two internally... with standardized interfaces for major operations and notifications.
Obviously this limits you to unix first up, a fairly serious limitation.
Second, it's full of hard-to-catch, security-related bugs, particularly if you attempt to make it portable across disparate unices.
Third, the significant and implicit dependencies needed to achieve pretty much anything (downloading and unpacking files, diffs, accurate snapshots, nontrivial analysis, whatever) remain technical debt and a pain in the ass, both for achieving basic functionality and for long-term stability.
Finally, because the rest of the world has moved on from shell scripts for all nontrivial tasks for well known reasons (right tool for the job), you will find hiring anyone to manage that really hard.
However, yes, they have their place and your point is valid. For unix-like platforms, cims goes one level higher than shell scripts and demands an executable that returns an appropriate exit value and emits issues to stderr when executed on the specific build platform, which may be a group of OSs ('unix'), a specific distro ('arch'), a specific version of a specific distro ('gentoo-my2015febbuild'), or even a specific version of a specific distro with a specific config. You could therefore write in any language available within the environment, with the appropriate portability penalty. The environments are defined by, you guessed it, executable tests in the same spirit. This means that being as specific as required or felt appropriate ("optimistic portability" with shell scripts, for example) is allowed, while being completely repeatable once a functional match between platform version and service build process is achieved (for services targeting inexplicit platforms like 'unix'). Capiche?
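Since cims isn't public, here is a purely hypothetical sketch of what one of those executable tests could look like under the convention described above: the exit status decides whether the platform matches, and complaints go to stderr.

    #!/bin/sh
    # hypothetical test for the 'arch' platform
    if [ -f /etc/arch-release ]; then
        exit 0                            # platform matches, build may proceed
    fi
    echo "not an Arch Linux host" >&2     # emit the issue to stderr
    exit 1                                # non-zero exit: platform does not match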
Shell scripts are pretty nice and it's unfortunate that the new crop of configuration management tools puts them on the sidelines instead of treating them as first class citizens in the configuration pipeline.
Now, orchestration is a much harder problem and remains unsolved; I think that is what the author was referring to.
IMHO you can't make an orchestration product that a devops person will like. The only people I've met who appreciate pre-built, non-customized orchestration tools are people who can't program.
For the devops crowd, I think fail-closed or fail-safe shell scripts work really well for orchestration. Too many times I've seen a really complex chain of operations make it much harder to debug and fix a problem, because the tool was trying to be smarter than it needed to be or ignored failures. You get faster development time, simplicity, portability, extensibility, interoperability, etc. for free. And really, I'm sick of rewriting the same tool only to later fix it with a shell script because tackling the tool would be too complex or time-consuming.
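For what it's worth, a minimal sketch of the fail-closed style (host, user, and image names are made up): with `set -euo pipefail`, any failing step aborts the run instead of ploughing on.

    #!/usr/bin/env bash
    set -euo pipefail                       # stop at the first error or unset variable

    host="$(docker-machine ip prod-1)"      # or any host you can reach over SSH
    image="example/web:latest"

    ssh "deploy@$host" "docker pull $image"
    ssh "deploy@$host" "docker stop web || true; docker rm web || true"
    ssh "deploy@$host" "docker run -d --name web -p 80:8000 $image"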
I nearly choked. Perhaps in the 1980s, when you could hire people with shell minutiae memorized, this was a good strategy. These days, I would be wrapping the executable for the job (purposefully in an unspecified language, to facilitate broader use and survivability of the design) in a layer that makes its environment temporary and OS-level sandboxed (to stave off security challenges) and completely explicit (to nip the classically vast number of undocumented dependencies in the bud). Ideally, versioning of such environments and their builds would be supported. That's exactly cims.
Maybe you're right. I've fallen into the trap of re-inventing orchestration, but the layer on top of shell scripts has always been tiny (200 or so lines of Ruby or Python). At some point you want some kind of self-healing and monitoring infrastructure, and while managing all that with shell scripts is surely possible, I wouldn't want to be the one to do it.
This is one of several reasons why I wrote Overcast[1]. I don't want to learn a DSL, I'd rather simplify the process of running shell scripts on multiple machines.
Thanks, yes it's still in development, mostly just adding more providers and recipes at this point. I'd love to add Docker as a provider but I had questions around implementation (https://github.com/andrewchilds/overcast/issues/18), so maybe Docker Machine will be a better alternative, or something I can require as a dependency.
This is docker-specific, obviously. Also, AFAICT it's not really 'deterministic builds' despite claims to the contrary since the network is allowed as input.
The first point is technically untrue - it works over ssh or plain bash also.
The second point is correct; it's deterministic in the sense that the ordering is always the same, and as much as Dockerfiles are (they're also regularly described as deterministic). I should probably clarify that somewhere.
In the much-discussed notions of Continuous Integration (CI) and Continuous Deployment (CD), an emphasis is placed on shared infrastructure for performing rigorous builds, followed by both manual and automated profiling and testing processes. This typically assumes a developer-local infrastructure, a shared build infrastructure, a shared testing infrastructure, and a final or deployment infrastructure. Some organizations may also use a staging infrastructure separate from the testing infrastructure, which may be reserved for internal tests. Some organizations have multiple copies of various parts of this picture.
You can see then that any tool that allows you to intelligently interact with these various infrastructures would be welcomed. Unfortunately, I don't think docker-machine is it. It's more about building out low-level 'wows' on top of a relatively high-level platform (docker; which only meets a subset of use cases in the first place - and on a single OS, basically) than solving workflow issues elegantly to become any kind of long term standard, IMHO.
This looks like an interesting project. I'd have to spend some time with it to determine whether it would suit my deployment needs. I wrote a tool to help with Docker deployment: https://github.com/mrinterweb/freighter.
Eventually docker-machine will deprecate boot2docker-cli (the piece which you use to actually manage the VM). However, boot2docker-cli is (mostly) pretty stable and a lot of people use it (including me), so we don't want to toss it out overnight or without a good migration path for b2d users.
boot2docker sets up a virtual machine with docker. Machine does this too, but it also allows you to choose a cloud provider, instead of only your local machine.
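In practice the mapping is roughly this (the machine name is arbitrary):

    # boot2docker today
    boot2docker init && boot2docker up
    eval "$(boot2docker shellinit)"

    # docker-machine equivalent, local or remote
    docker-machine create -d virtualbox dev
    eval "$(docker-machine env dev)"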
That sounds like a Docker-only competitor to Vagrant: it would be interesting to have a direct feature comparison, because Vagrant is already pretty handy, even with Docker boxes.
http://blog.docker.com/2015/02/scaling-docker-with-swarm/
http://blog.docker.com/2015/02/announcing-docker-compose/
And a sneak preview of how these tools can work together:
http://blog.docker.com/2015/02/orchestrating-docker-with-mac...