> Host names that contain only one label in
> addition to local, for example
> "My-Computer.local", are resolved using
> Multicast DNS (Bonjour) by default. Host names
> that contain two or more labels in addition to
> local, for example "server.domain.local", are
> resolved using a DNS server by default.
Apparently there's nothing wrong with adopting, say, .dev.local (or .ifft.local) with a corresponding hack, ahem, file, under /etc/resolver/dev.local:
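Something like this, for instance (a minimal sketch; the nameserver address is an assumption, e.g. a local dnsmasq on loopback, or the Docker VM's IP):

```
# /etc/resolver/dev.local
nameserver 127.0.0.1
```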
It seems to me that .local is reserved for purposes within a local network, not necessarily local to an individual computer itself. You could, for example, set up an internal server on your network and have it resolve with storage.local. You could argue that a collection of containers is basically the same thing, though.
That's a fair point. I had been using .dev for this purpose for years, and just applied it to this project out of habit. It still doesn't resolve to anything from Google, but I see that ICANN is now resolving it to 127.0.53.53 to indicate a name collision [1]. Many other projects (like Boxen) also use .dev, so it seems to still be something that people do.
Very cool. We use boot2docker and a slightly modified fork of Docker Compose, but have yet to automate installing everything.
To avoid reinstalling dependencies you can use multiple dependency files to separate out your slowest-building dependencies (e.g. for Ruby you can use a Gemfile and a Gemfile.tip) and a git hook script to set the modified time of all files in the repo to their last change in git:
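Something along these lines, for example (a sketch, not a drop-in hook; it assumes GNU touch and would need tweaking for BSD/macOS touch):

```
#!/bin/bash
# post-checkout/post-merge hook: reset each tracked file's mtime to its
# last commit time, so mtime-based caches don't treat a fresh checkout
# as all-new files.
git ls-files -z | while IFS= read -r -d '' f; do
  ts=$(git log -1 --format=%ct -- "$f")
  [ -n "$ts" ] && touch -d "@$ts" "$f"
done
```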
We looked at the approach of adding a Gemfile.tip; however, we like the data volume approach because it doesn't require any changes to the Dockerfile and it's closer to what our developers are familiar with. A Gemfile.tip can eventually become basically another Gemfile, so separating the bundle step from the build step (in development) gives us more flexibility as well as keeping things in line with what our developers expect to do.
What does setting the modified time of all files do?
I'm a bit confused by your bundler-cache. It seems you must be running `bundle install` at runtime, because you can't mount that volume during build. Am I misunderstanding?
That's correct. You build the image and then run `bundle install` from inside a container. This mapped well to our current flow, where you need to run bundle install after fetching new code anyway. This way, when you fetch new code, you don't need to rebuild the image and launch new containers unless something changes in the Dockerfile or the docker-compose.yml file. That now happens fairly rarely, and when it does, it's a pretty quick process.
You don't have to run it on every launch, however, as the data volume for a container sticks around until the container has been deleted. If you don't delete the bundler-cache container, your bundler cache sticks around. If you want to clear the cache, just remove the container.
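For anyone curious, the pattern looks roughly like this in docker-compose (a sketch with hypothetical service and image names, not the post's exact config):

```
web:
  build: .
  environment:
    BUNDLE_PATH: /bundle
  volumes_from:
    - bundler-cache
bundler-cache:
  image: busybox
  volumes:
    - /bundle
```

With BUNDLE_PATH pointed into the shared volume, a `bundle install` run inside the web container persists across rebuilds and recreations for as long as the bundler-cache container exists.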
To add to silvamerica's comment, this process is specific to running containers in development. When building images for deployment the dependencies are installed as a RUN step in the Dockerfile.
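In other words, the production image bakes the gems in at build time, something like this (a sketch; paths are assumptions):

```
COPY Gemfile Gemfile.lock /app/
WORKDIR /app
RUN bundle install --deployment
```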
As Devin said, we're using Mesos, Marathon, and Chronos, but we're pretty excited about Kubernetes too. The networking layer, in particular, seems very innovative.
We're still experimenting with Docker, but aren't using it in production yet. Definitely going to check out these scripts though, especially that dnsmasq container.
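For anyone else looking at the DNS piece, the general shape is something like this (a hypothetical invocation; the image name, domain, and address are assumptions, the address being the default boot2docker IP):

```
docker run -d --name dnsmasq -p 53:53/udp \
  andyshinn/dnsmasq --address=/dev.local/192.168.59.103
```

You'd then point the nameserver line in /etc/resolver/dev.local (discussed above) at wherever that port 53 is published.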
Nice to see a post that covers the various aspects of using Docker in place of something like Vagrant for local development.
The biggest pain I've run into when using Docker for local dev is waiting for pip to install dependencies on rebuilds. This offers an interesting strategy for mitigating that and I look forward to digging into this more.
We experienced that exact same pain. Whether it's pip or Bundler, I can't tell you how many times I've installed and reinstalled requirements.
Sharing data volumes is kind of a hack to make it really easy to keep part of a container around when you delete and recreate it. I would love to see persistent data volumes become first-class citizens so you don't have to create a separate container for them.
For now, though, it's saved us from having to put everything directly into the image in development.
Install each of the pip/Bundler requirements with a separate Dockerfile RUN command. That way each one gets cached in its own image layer, and only new requirements get installed. Use your favourite templating tool to generate the Dockerfile with multiple RUN commands.
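The generated output would look something like this (package names and versions invented for illustration):

```
RUN pip install flask==0.10.1
RUN pip install requests==2.5.1
RUN pip install psycopg2==2.6
```

As long as new requirements are appended at the end, the earlier layers stay cached.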
But let's say a dependency is changed. Won't modifying that RUN directive invalidate the cache for everything after it, potentially rebuilding a ton of stuff anyway?
I'm not sure if this addresses your specific pain point, but if your issue is around building images: if you COPY requirements.txt first and run `pip install` before COPYing the rest of your code into the image, Docker's layer caching will actually cache that step for you.
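Roughly like this (a sketch; the base image and paths are assumptions):

```
FROM python:2.7
WORKDIR /app

# Cached until requirements.txt itself changes.
COPY requirements.txt /app/
RUN pip install -r requirements.txt

# Code changes invalidate only the layers from here on.
COPY . /app
```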
[1] https://www.iana.org/domains/root/db/dev.html