I'll be interested in coming back to Python when it isn't a headache to deploy into production. I'm tired of an install requiring a GCC compiler on the target node. I'm also tired of having to work around the language and ecosystem to avoid dependency hell.



The way I deploy Python apps at $EMPLOYER:

- CI system detects a commit and checks out the latest code

- CI system makes a virtualenv and sets up the project and its dependencies into it with "pip install --editable path/to/checkout"

- CI system runs tests, computes coverage, etc.

- CI system makes an output directory and populates it with "pip wheel --wheel-dir path/to/output path/to/checkout"

- Deployment system downloads wheels to a temporary location

- Deployment system makes a virtualenv in the right location

- Deployment system populates virtualenv with "pip install --no-deps path/to/temp/location/*.whl"

The target node only needs a compatible build of python and the virtualenv package installed; it doesn't need a compiler and only needs a network connection if you want to transfer wheel files that way.
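For reference, the whole pipeline condenses into a couple of short scripts, something like this (paths, project names, and the use of pytest are placeholders, not our exact setup):

    # build.sh -- runs on the CI box, which does have a compiler
    python -m virtualenv ci-env
    ci-env/bin/pip install --editable path/to/checkout
    ci-env/bin/python -m pytest path/to/checkout    # tests, coverage, etc.
    ci-env/bin/pip wheel --wheel-dir path/to/output path/to/checkout

    # deploy.sh -- runs on the target node; no compiler, wheels already
    # copied to /tmp/wheels
    python -m virtualenv /srv/myapp/env
    /srv/myapp/env/bin/pip install --no-deps /tmp/wheels/*.whl

Because every dependency arrives pre-built as a wheel, the install step on the target never compiles anything.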


You really should look at Armin's (same guy who wrote Flask and Jinja2) Platter tool. It takes a few of the steps out; our workflow is almost identical to yours, and we are switching to it.

http://platter.pocoo.org/dev/

Really nice stuff


Actually, that's where I got the idea... but when I last looked at Platter it was covered in "experimental only, do not use in production" warnings.

Considering I'd have to write a build script to use Platter, it didn't seem like it would be a lot of work to write a few extra lines and not require an additional dependency.


The way I deploy Go apps at $EMPLOYER2:

- go get

- go test

- go build

- copy to target

It's possible with Python, it's easier with Go. It's a place where we could use a lot of progress.
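Spelled out as a script (host name made up, pre-modules GOPATH layout assumed):

    # build and ship a Go service from its package directory
    go get ./...     # fetch dependencies
    go test ./...
    go build -o myservice
    scp myservice deploy@prod-host:/srv/myservice/

The target node needs nothing beyond the one copied binary.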


Presumably, though, what both of you actually do is:

build.sh

Once you'd done the up-front work of figuring out how to do deployment sanely, it became equally easy for both of you.


It seems weird that you can't easily package Python into an executable without Docker.


You can, with varying levels of success, using a few "freeze" programs. They basically bundle up the entire environment into an executable, so the executables are stupidly large (more or less the size of your /usr/lib/python directory plus the Python binaries), but they mostly work.
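For example, with PyInstaller (one such freeze tool; app.py standing in for your real entry point):

    pip install pyinstaller
    pyinstaller --onefile app.py
    # the single executable lands in dist/app, bundling the
    # interpreter plus every module your code imports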


I've done it before, but it was kind of a pain and I got the impression nobody else used that stuff. I wonder why it's not more popular/easy.



FWIW we deploy python code as debian packages that we build with dh-virtualenv.

This bakes a whole virtualenv with all python dependencies (including compiled C libraries) into a .deb package. The packages tend to be big-ish (3MB to 15MB), but the target system only needs the right python version, nothing else.
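For reference, the build itself looks roughly like this (package name invented; assumes the usual debian/ directory is in place, with debian/rules enabling the addon via "dh $@ --with python-virtualenv"):

    sudo apt-get install debhelper dh-virtualenv
    dpkg-buildpackage -us -uc -b    # produces ../myapp_1.0_amd64.deb
    # on the target node, which only needs a matching python:
    sudo dpkg -i myapp_1.0_amd64.deb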


This was posted here, and not a bad idea:

https://nylas.com/blog/packaging-deploying-python


docker?


Doesn't solve the issue of needing a C compiler for third-party extensions, and definitely qualifies as a workaround for the existing toolset.

Yes, it helps. But you can use Docker with Go programs as well (and drop a lot more of the base image in the process).
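To illustrate the "drop a lot more of the base image" point: a statically linked Go binary can ship in an otherwise empty image (names made up):

    CGO_ENABLED=0 go build -o myservice
    cat > Dockerfile <<'EOF'
    FROM scratch
    COPY myservice /myservice
    ENTRYPOINT ["/myservice"]
    EOF
    docker build -t myorg/myservice .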


The way we do this is to have a base image that already has all the modules our package needs (the non-trivial ones, anyway) installed via yum or pip. Then the Docker image that needs to be rebuilt (which depends on the first one) is just a minimal pip install away.
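Concretely, something like this (image and module names made up for illustration):

    # base image, rebuilt rarely -- heavy deps preinstalled
    cat > Dockerfile.base <<'EOF'
    FROM python:2.7
    RUN pip install numpy lxml psycopg2
    EOF
    docker build -f Dockerfile.base -t myorg/py-base .

    # app image, rebuilt on every commit -- a quick pip install on top
    cat > Dockerfile <<'EOF'
    FROM myorg/py-base
    COPY . /app
    RUN pip install /app
    EOF
    docker build -t myorg/myapp .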


Actually, the nice thing about Docker is you can build a compilation container (pre-built with all your C/C++ toolchain ready to go and shared among your coworkers), compile your extensions using that, and then install only the built extensions into your target container (sans compilation tools). It's a little more grunt work that way, but you get better control and reproducibility without the explosion in image sizes.
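A sketch of that pattern (image and package names hypothetical): build wheels inside the shared toolchain container, then install only the wheels into a slim runtime image:

    # compile the C extensions where the compiler lives
    docker run --rm -v "$PWD":/src -v "$PWD/wheels":/wheels \
        myorg/py-build pip wheel --wheel-dir /wheels /src

    # install just the wheels into the compiler-free target image
    cat > Dockerfile <<'EOF'
    FROM python:2.7-slim
    COPY wheels /wheels
    RUN pip install --no-index --find-links=/wheels myapp
    EOF
    docker build -t myorg/myapp .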



