Hacker News
Developing with Docker at IFTTT (medium.com/silvamerica)
96 points by devinfoley on Oct 7, 2015 | 28 comments



It seems like using the .dev TLD is a bad idea since Google owns it[1]. How many people use the .dev or .local TLD? What's the best practice here?

[1] https://www.iana.org/domains/root/db/dev.html


.local is actually reserved for this purpose: https://en.wikipedia.org/wiki/.local


OS X does weird stuff with the .local TLD, making usage of it for local hosts non-trivial. https://support.apple.com/en-us/HT201275


Not that weird:

    > Host names that contain only one label in
    > addition to local, for example
    > "My-Computer.local", are resolved using
    > Multicast DNS (Bonjour) by default. Host names
    > that contain two or more labels in addition to
    > local, for example "server.domain.local", are
    > resolved using a DNS server by default.
Apparently there's nothing wrong with adopting, say, .dev.local (or .ifttt.local) with a corresponding hack, ahem, file, under /etc/resolver/dev.local:

http://blog.scottlowe.org/2006/01/04/mac-os-x-and-local-doma...
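A minimal sketch of the idea (the Docker host IP below is an assumption, not taken from the linked post):

    # /etc/resolver/dev.local -- send *.dev.local lookups to a local dnsmasq
    nameserver 127.0.0.1

    # dnsmasq.conf -- answer every *.dev.local query with the Docker host's IP
    address=/dev.local/192.168.99.100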


That looks like a great way to handle it!


It seems to me that .local is reserved for purposes within a local network, not necessarily local to an individual computer itself. You could, for example, set up an internal server on your network and have it resolve with storage.local. You could argue that a collection of containers is basically the same thing, though.


If a container has its own IP address, I think it can be considered its own device on the network, regardless of the details.


That's a fair point. I had been using .dev for this purpose for years, and just applied it to this project out of habit. It still doesn't fully resolve to anything from Google, but I see that ICANN is now resolving it to 127.0.53.53 to indicate a name collision. Many other projects (like Boxen) also use .dev, so it seems to still be something that people do.


Using anything besides a domain you own and control is potentially a bad idea.


Very cool. We use boot2docker and a slightly modified fork of Docker Compose, but have yet to automate installing everything.

To avoid reinstalling dependencies, you can use multiple dependency files to separate out your slowest-building dependencies (e.g., for Ruby you can use a Gemfile and a Gemfile.tip) and a git hook script that sets the modified time of all files in the repo to their last change in git:

https://gist.github.com/siliconcow/d5c991f49b7550360465
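Roughly, the Dockerfile side of that looks like the following (file names per the comment above; the bundle invocations are just a sketch):

    # Slow, rarely-changing gems go in Gemfile; this layer stays cached
    COPY Gemfile Gemfile.lock ./
    RUN bundle install

    # Fast-moving gems go in Gemfile.tip, so only this cheap layer rebuilds
    COPY Gemfile.tip ./
    RUN bundle install --gemfile=Gemfile.tip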


Thanks!

We looked at the approach of adding a Gemfile.tip; however, we like the data volume approach because it doesn't require any changes to the Dockerfile and is closer to what our developers are familiar with. A Gemfile.tip can eventually become basically another Gemfile, so separating the bundle step from the build step (in development) gives us more flexibility as well as keeping things more in line with what our developers expect to do.

What does setting the modified time of all files do?


I'm a bit confused by your bundler-cache. It seems you must be running `bundle install` at runtime, because you can't mount that volume during build. Am I misunderstanding?


That's correct. You build the image and then run (from inside a container) bundle install. This mapped well with our current flow, where you need to run bundle install after fetching new code anyway. This way, when you fetch new code, you don't need to rebuild the image and launch new containers, unless something changes in the Dockerfile or the docker-compose.yml file. That now happens fairly rarely, and, when it does, it's a pretty quick process.

You don't have to run it at runtime, however, as the data volume for a container sticks around until the container has been deleted. If you don't delete the bundler-cache container, your bundler cache sticks around. If you want to clear the cache, just remove the container.
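For anyone curious, a rough sketch of that pattern in a 2015-era docker-compose.yml (service names and the bundle path are illustrative, not IFTTT's actual config):

    # Throwaway container whose only job is to own the gem volume
    bundler-cache:
      image: busybox
      volumes:
        - /usr/local/bundle

    web:
      build: .
      volumes_from:
        - bundler-cache

Running `docker-compose run web bundle install` once populates the volume; the gems then survive recreating the web container for as long as the bundler-cache container exists.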


To add to silvamerica's comment, this process is specific to running containers in development. When building images for deployment the dependencies are installed as a RUN step in the Dockerfile.


Hey IFTTT! Jacob from Imgur here. Do you guys think you will be trying Kubernetes soon?


As Devin said, we're using Mesos, Marathon, and Chronos, but we're pretty excited about Kubernetes too. The networking layer, in particular, seems very innovative.


We're currently using Marathon and Chronos, and it's going pretty well. What about y'all?


We're still experimenting with Docker, but aren't using it in production yet. Definitely going to check out these scripts though, especially that dnsmasq container.


Nice to see a post that covers the various aspects of using Docker in place of something like Vagrant for local development.

The biggest pain I've run into when using Docker for local dev is waiting for pip to install dependencies on rebuilds. This offers an interesting strategy for mitigating that and I look forward to digging into this more.


Thanks!

We experienced that exact same pain. Whether it's pip or Bundler, I can't tell you how many times I've installed and reinstalled requirements.

Sharing data volumes is kind of a hack to make it really easy to keep part of a container around when you delete and recreate a container. I would love to see persistent data volumes become first class citizens so you don't have to create a separate container for them.

For now, though, it's saved us from having to put everything directly into the image in development.


You'll be able to create volumes as first class citizens in the next Docker release with the new 'docker volume' command, if I'm not mistaken: https://github.com/docker/docker/blob/4b4597ae17d4fd8843aa93...
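If that lands, the bundler-cache container above could presumably be replaced by a named volume along these lines (a sketch based on the linked changelog; the image name and bundle path are hypothetical):

    docker volume create --name bundler-cache
    docker run -v bundler-cache:/usr/local/bundle my-app bundle install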


Yes, I'm looking forward to it! It will still be a while before we can use it with Compose, however.


Install each of the pip/bundler requirements with a separate Dockerfile RUN command. Each gets cached into its own container filesystem layer that way, and only new requirements are pulled. Use your favourite templating tool to generate the Dockerfile with multiple RUN commands.

I wrote djtempl ( https://github.com/emailgregn/djtempl ) for my purposes.
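Roughly, the generated Dockerfile ends up looking like this (package names and versions are just examples):

    # One layer per requirement: adding a new package near the end
    # leaves all earlier layers cached
    RUN pip install Django==1.8.4
    RUN pip install requests==2.7.0
    RUN pip install celery==3.1.18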


But let's say a dependency is changed. Won't modifying that RUN directive invalidate the cache for everything after it, potentially rebuilding a ton of stuff anyway?


Yup, that's the trade-off.


It's unfortunate that Docker uses an imperative model. A functional model would have much better cache utilization.


I'm not sure if this addresses your specific pain point, but if your issue is around building images: if you first COPY requirements.txt and then do `pip install` before COPYing the rest of your code into the image, Docker will actually cache that layer for you.
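For example (the /app path is just a common convention, not anything from the article):

    # Dependencies change rarely, so this layer is usually served from cache
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt

    # Application code changes often, but only invalidates layers from here down
    COPY . /app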


Cool stuff.

Link to the GitHub project: https://github.com/IFTTT/dash



