I'll be interested in coming back to Python when it isn't a headache to deploy into production. I'm tired of an install requiring a GCC compiler on the target node. I'm also tired of having to work around the language and ecosystem to avoid dependency hell.
- CI system detects a commit and checks out the latest code
- CI system makes a virtualenv and installs the project and its dependencies into it with "pip install --editable path/to/checkout"
- CI system runs tests, computes coverage, etc.
- CI system makes an output directory and populates it with "pip wheel --wheel-dir path/to/output path/to/checkout"
- Deployment system downloads wheels to a temporary location
- Deployment system makes a virtualenv in the right location
- Deployment system populates virtualenv with "pip install --no-deps path/to/temp/location/*.whl"
The target node only needs a compatible build of python and the virtualenv package installed; it doesn't need a compiler and only needs a network connection if you want to transfer wheel files that way.
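Condensed into commands, the whole thing is only a handful of lines; roughly this (the paths are the same placeholders as above, and the venv location on the target is made up):

    # --- CI box ---
    virtualenv ci-env && . ci-env/bin/activate
    pip install wheel                                         # older pips need this for "pip wheel"
    pip install --editable path/to/checkout                   # project + deps, for tests/coverage
    pip wheel --wheel-dir path/to/output path/to/checkout     # wheels for the project and its deps

    # --- target node, after copying the wheels over ---
    virtualenv /srv/myapp/venv
    /srv/myapp/venv/bin/pip install --no-deps path/to/temp/location/*.whl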
You really should look at Armin's (same guy who wrote Flask and Jinja2) Platter tool. It takes a few of the steps out, and since our workflow is almost identical to yours, we're switching to it.
Actually, that's where I got the idea... but when I last looked at Platter it was covered in "experimental only, do not use in production" warnings.
Considering I'd have to write a build script to use Platter anyway, it didn't seem like much more work to write a few extra lines myself and skip the additional dependency.
You can do that, with varying levels of success, using one of the "freeze" programs. They basically bundle up the entire environment into an executable, so the executables are stupidly large (more or less the size of your /usr/lib/python directory plus the python binaries), but they mostly work.
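PyInstaller is one example; a minimal run looks something like this (the script name is made up):

    pip install pyinstaller
    pyinstaller --onefile myapp.py
    # dist/myapp is now a single, very large executable bundling the
    # interpreter, the stdlib, and your dependencies
    ./dist/myapp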
FWIW we deploy python code as debian packages that we build with dh-virtualenv.
This bakes a whole virtualenv with all python dependencies (including compiled C libraries) into a .deb package. The packages tend to be big-ish (3MB to 15MB), but the target system only needs the right python version, nothing else.
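For reference, the build step is roughly this (package name is hypothetical; it assumes the usual debian/ packaging files, with debian/rules calling "dh $@ --with python-virtualenv"):

    sudo apt-get install dh-virtualenv devscripts build-essential
    dpkg-buildpackage -us -uc -b          # produces ../myapp_1.0_amd64.deb
    # on the target box the only prerequisite is a matching python
    sudo dpkg -i myapp_1.0_amd64.deb      # unpacks a self-contained virtualenv for the app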
The way we do this is to have a base image that already has all the modules our package needs (the non-trivial ones, anyway) yum-installed or pip-installed. Then the Docker image that actually needs rebuilding (the one that depends on the first) is just a minimal pip install away.
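A rough sketch of that layering, with made-up image names, paths, and dependencies (the Dockerfile contents are shown as comments above each build):

    # base/Dockerfile -- heavy deps baked in, rebuilt only occasionally:
    #     FROM python:2.7
    #     RUN pip install numpy lxml psycopg2   # stand-ins for the non-trivial deps
    docker build -t myteam/py-base base/

    # app/Dockerfile -- rebuilt on every code change:
    #     FROM myteam/py-base
    #     COPY . /app
    #     RUN pip install /app
    docker build -t myteam/myapp app/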
Actually, a nice thing about Docker is that you can build a compilation container (pre-built with all your C/C++ build tooling ready to go and shared amongst your coworkers), compile your extensions using that, and then install only the built extensions into your target container (sans compilation tools). It's a little more grunt work that way, but you get better control and reproducibility without the explosion in image sizes.
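In shell terms the pattern is roughly this (image names are hypothetical; it assumes a shared builder image that already has gcc and the dev headers):

    # 1. compile everything into wheels inside the shared build container
    docker run --rm \
        -v "$PWD":/src -v "$PWD"/wheelhouse:/wheelhouse \
        myteam/py-builder \
        pip wheel --wheel-dir /wheelhouse /src

    # 2. the runtime image never sees a compiler; its Dockerfile just does
    #    something like: RUN pip install --no-deps /wheelhouse/*.whl
    docker build -t myteam/myapp .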