well, if you have, say, 10 nodes on AWS connecting to the Docker Hub, speed is definitely an issue, because let's be honest here: neither is very fast at all
*edit: also, downloads don't just happen in a data centre. chances are your (or many other) office connections just... well, are not very good. also, think of Australia. please think of Australia (our internet is something of a dire situation)
I'm using slugrunner from Flynn [0] for deploying my apps. This way I can share a base image, and each compiled slug is about 40 MB for Ruby apps and 10 MB for Go apps. This is similar to how Heroku works.
When I deploy, I generate the slug using slugbuilder, push it to local storage on the same network, and each docker task is instructed to pull the "latest" slug from the slug storage. Containers start within a couple of seconds after a code update.
Continuous deployment can be easily achieved by copying the slug from staging to production, instead of pulling a full docker image each time as is currently done.
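For anyone curious, the flow is roughly like the sketch below. It's Python just to show the steps, not my actual deploy script; the storage URL, tarball names, and the SLUG_URL variable are illustrative assumptions (slugbuilder is documented to read an app tarball on stdin and write a slug to stdout, and slugrunner can fetch a slug from a URL before starting the process).

```python
#!/usr/bin/env python3
"""Rough sketch of the slug-based deploy flow described above.
Storage host, file names, and env vars are hypothetical examples."""

import subprocess

APP_TARBALL = "app.tar"                      # snapshot of the repo to build
SLUG_STORE = "http://slugs.internal/myapp"   # hypothetical local slug storage

def build_slug(output="slug.tgz"):
    # Run slugbuilder in a container: app tarball in, compiled slug out.
    with open(APP_TARBALL, "rb") as src, open(output, "wb") as dst:
        subprocess.run(
            ["docker", "run", "-i", "-a", "stdin", "-a", "stdout",
             "flynn/slugbuilder"],
            stdin=src, stdout=dst, check=True)
    return output

def push_slug(slug):
    # Upload the slug to storage on the same network under a "latest" name,
    # so every task can find the newest build.
    subprocess.run(["curl", "-fsS", "-T", slug, f"{SLUG_STORE}/latest.tgz"],
                   check=True)

def start_task():
    # Each docker task starts from the shared slugrunner base image and is
    # told where to pull the latest slug from before running the web process.
    subprocess.run(
        ["docker", "run", "-d",
         "-e", f"SLUG_URL={SLUG_STORE}/latest.tgz",
         "flynn/slugrunner", "start", "web"],
        check=True)

if __name__ == "__main__":
    push_slug(build_slug())
    start_task()
```

Promoting from staging to production is then just copying latest.tgz from the staging slug store to the production one and restarting the tasks; no image rebuild or registry pull involved.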