Not the OP, but w.r.t. [1], vendored dependencies are especially helpful if you deploy the software on-premises, where you might not have full system access and, in some cases, have no internet access or face strict policies on what can be installed.
If, for each requirement in `requirements.txt`, you have the corresponding wheel file in `/path/to/wheelhouse`, the install will complete without touching the network. I have been using this approach for the past two years with great success.
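Concretely, I mean something along these lines (paths are illustrative, and the exact invocation will vary):

```
# Somewhere with network access: build wheels for every requirement into the wheelhouse
pip wheel --wheel-dir=/path/to/wheelhouse -r requirements.txt

# On the target machine: install from the wheelhouse only, never from the network
pip install --no-index --find-links=/path/to/wheelhouse -r requirements.txt
```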
Well, I couldn't use wheels on Python 2.6 (and certainly not use a wheelhouse properly in the environments I had to tackle back then), but yes, this is a good approach.
You can vendor your dependencies inside a proper OS package as well. Write an rpmspec that creates a virtualenv, runs your setup.py with the virtualenv Python interpreter, and boom - your dependencies are bundled without the acrobatics of vendoring them manually.
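A minimal sketch of such a spec (name, version and paths are placeholders, and it assumes virtualenv is available in the build root):

```
Name:           myapp
Version:        1.0
Release:        1%{?dist}
Summary:        Example application bundled with its own virtualenv
License:        MIT
Source0:        %{name}-%{version}.tar.gz
BuildRequires:  python-virtualenv

%description
Example application vendored into /opt/%{name}.

%prep
%setup -q

%install
# Create the virtualenv inside the build root and install the app into it.
# Paths baked into the virtualenv may still need fixing up for the final
# install location (see the --relocatable caveat below).
virtualenv %{buildroot}/opt/%{name}
%{buildroot}/opt/%{name}/bin/python setup.py install

%files
/opt/%{name}
```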
If your application can't be installed at the system level, I would really recommend this for handling deployment, regardless of whether you have full access or not - deploying tarballs with configuration management tools like Puppet is hackish at best; RPMs/DEBs really are the best way to roll your software out.
EDIT: As an alternative to virtualenv (which is actually a PITA, since --relocatable is always broken), buildout works great.
I honestly can't imagine any circumstances in which this would be better than using tools that manage my Python packages as part of deployment, or just following solid Python environment management practices. Why would I want the encumbrance of working with system package toolchains to create packages? I want to avoid platform-specific mechanisms, minimise my exposure to unexpected issues, and retain visibility into why things fail, and system package tools can be extremely opaque about the cause of an install failure.
For example, I use PyEnv instead of virtualenv/venv because PyEnv is written in bash and gives a much better level of isolation than virtualenv or venv. It's simple bash scripting, and the only system dependencies it has depend on which features you choose to use. If you want to build Python from source you'll need compilers, libraries, etc., but other than that sort of thing it has zero dependencies.
Edit: PyEnv also lets me compress an entire Python environment and reuse it somewhere else, provided I'm using the same OS and system libraries, so I can do the compilation steps ahead of time and cut down on system package dependencies in production environments.
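Roughly like this (version number and paths are just examples):

```
# On a build box: compile the interpreter once
pyenv install 3.5.1

# Archive the built environment
tar czf py-3.5.1.tar.gz -C ~/.pyenv/versions 3.5.1

# On a production box with the same OS and system libraries: unpack and select it
tar xzf py-3.5.1.tar.gz -C ~/.pyenv/versions
pyenv global 3.5.1
```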
I don't understand how system package managers are considered an "encumbrance" by so many developers. It takes all of 10 minutes to write an rpmspec file unless you have a super-hairy build/install process (in which case you should look at fixing that) - I have one in the git repository for all of my Python projects, along with a Makefile with a sources target that just calls `setup.py sdist` to create a source tarball for rpmbuild.
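Something like this, for example (project name is illustrative):

```
NAME    := myproject
VERSION := $(shell python setup.py --version)

sources:
	python setup.py sdist
	cp dist/$(NAME)-$(VERSION).tar.gz .
```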
Pushing out a new release is as simple as running `koji build f23 git+ssh://my.git.server/my/project.git`, and within 15 minutes it's been published to my internal yum repository and Puppet is installing the newest version on all my servers. How is managing pyenv and dealing with fabric, or whatever other tool of your choosing, any easier than this?
We usually stick to the official Debian python-* packages (if they exist upstream). When they don't, we use python-virtualenv[1][2] to pull specific versions and dependencies from pip into a virtualenv.
Now when we build our Debian package (debian/rules):
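Something along these lines (the package name and paths here are placeholders, not our real ones):

```
#!/usr/bin/make -f
# Sketch: build a virtualenv under the package tree and install the app
# and its pinned pip dependencies into it.

%:
	dh $@

override_dh_auto_install:
	virtualenv debian/myapp/opt/myapp
	debian/myapp/opt/myapp/bin/pip install -r requirements.txt
	debian/myapp/opt/myapp/bin/pip install .
```

For what it's worth, tools like dh-virtualenv automate roughly this pattern.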
Great example of this - my current client, who requires on-premise development, blocks Fastly. Guess what runs on Fastly? NPM.
So no development if you need to use npm. I can download e.g. Grunt from GitHub directly, but installing the local package immediately tries to download dependencies from npm.