Brief clarification: some of those patterns were a consequence of us having to run Python 2.5 and 2.6 on older boxes - it's a survival technique, if you will - and having to vendor some dependencies to make it easier to deploy stuff under some other interesting constraints the article won't get into.
On the whole, these patterns make it easier for other people to not break your code when they want to add, say, another set of REST endpoints (which was my main occupation at the time).
> My ground rule is that anything I easy_install will go into env, and env will quite often be (re)built on a separate machine that matches my target environment. So I treat it as a disposable folder, and it’s never committed to any repos.
Was wondering, do you pip freeze > requirements.txt? I don't see the file in there, but I assume that's what you mean since you mention that none of the env folders make it into version control.
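I'm guessing at something like this cycle (commands illustrative):

    # capture exact versions from the disposable env
    env/bin/pip freeze > requirements.txt
    # later, rebuild env from scratch on the target machine
    virtualenv env
    env/bin/pip install -r requirements.txt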
On a semi-related note, I used to have tons of projects like this a year ago, and having moved most of our Python code to AWS Lambda, our repos have shrunk to maybe 25% of their original size. Granted, it's not fabulous if you need to run a JS app on top of your Bottle framework, but we used to have a lot of small services here and there, and the provisioning and deployment were so much work.
Adding overflow: auto to your code blocks (.syntax) would allow us to see the whole snippet. I love decorators too, glad I'm not the only crazy person :)
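Something like:

    .syntax { overflow: auto; }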
While that did away with the horrible 2-column layout, it looks like the article was written with those columns in mind, so it uses short paragraphs that look good in 2-column but feel a bit weird as a wall of text.
My (barely existent) OCD is going haywire over this article as I read it...
OT: Anyone know why all browser vendors implemented this CSS columns aberration so hurriedly? I have never seen a good use for them; they're so painful to read. It's always some designer trying to show off their "epic" CSS skillz.
Also, the picture with the directory tree got cropped out for me, and I couldn't figure out how to scroll it horizontally. Good thing the columns can be toggled.
Interesting as it's quite different from how I generally structure my own projects. A couple questions:
1. Why store dependencies locally? Seems a bit wasteful and unnecessary with tools like pip/virtualenv. (OP does mention virtualenv, but also "Include ALL the dependencies locally" so I'm not sure what to make of that)
2. You've chosen to separate your code into MVC-inspired directories instead of by function. Flask/Django have ways (apps, blueprints) of making application subcomponents modular. It seems like a single controllers directory and a single templates directory, etc. could get cluttered pretty quickly. What will you do as projects grow?
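For reference, the Flask blueprint mechanism I'm referring to looks roughly like this (all names made up):

    # rough sketch of splitting endpoints out with a Flask blueprint
    from flask import Blueprint, Flask, jsonify

    api = Blueprint('api', __name__, url_prefix='/api')

    @api.route('/status')
    def status():
        return jsonify(ok=True)

    app = Flask(__name__)
    app.register_blueprint(api)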
Not the OP, but w.r.t. [1], vendored dependencies are especially helpful if you happen to deploy the software on premises, where you might not have full system access and, in some cases, no internet access or strict policies on what can be installed.
If for each requirement in `requirements.txt` you have the corresponding wheel file in `/path/to/wheelhouse`, an install pointed at the wheelhouse (see below) will not touch the network and will complete. I have been using this approach for the past 2 years with great success.
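Concretely, something along these lines (standard pip flags; the wheelhouse path is the same placeholder as above):

    # build the wheelhouse once, wherever you do have network access
    pip wheel --wheel-dir=/path/to/wheelhouse -r requirements.txt
    # install offline, resolving only against the local wheel files
    pip install --no-index --find-links=/path/to/wheelhouse -r requirements.txt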
Well, I couldn't use wheels on Python 2.6 (and certainly not use a wheelhouse properly in the environments I had to tackle back then), but yes, this is a good approach.
You can vendor your dependencies inside a proper OS package as well. Write an rpmspec that creates a virtualenv, runs your setup.py with the virtualenv Python interpreter, and boom - your dependencies are bundled without the acrobatics of manually vendoring them.
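As a minimal sketch of that rpmspec idea (an excerpt only; every path and name here is illustrative):

    # hypothetical excerpt from such an rpmspec
    %install
    virtualenv %{buildroot}/opt/myapp
    %{buildroot}/opt/myapp/bin/python setup.py install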
If your application can't be installed at the system level, I would really recommend this for handling deployment, regardless of whether you have full access or not - deploying tarballs with configuration management tools like Puppet is hackish at best; RPMs/DEBs really are the best way to roll your software out.
EDIT: As an alternative to virtualenv which is actually a PITA since --relocatable is always broken, buildout works great.
I can't honestly imagine any circumstances where this would be better than using tools that manage my Python packages as part of deployment, or just using solid Python environment management practices. Why would I want the encumbrance of working with system package toolchains to create packages? I'd rather avoid platform-specific mechanisms to minimise my exposure to unexpected issues and retain visibility into failure causes - system package tools can be extremely opaque about why an install failed.
For example, I use PyEnv instead of virtualenv/venv because PyEnv is written in bash and has a much better level of isolation than virtualenv or venv. It's simple bash scripting, and the only system dependencies it has are based on features you choose to use. If you want to build Python from source you'll need compilers and libs, etc., but other than that sort of thing, it has zero dependencies.
Edit: PyEnv also lets me compress an entire Python environment and reuse it somewhere else, provided I'm using the same OS and system libraries, so I can pre-build compilation steps and cut down on system package dependencies in production environments.
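Roughly, assuming pyenv's default layout under ~/.pyenv/versions (version number illustrative):

    # archive a pre-built interpreter plus its installed packages
    tar czf py27.tgz -C ~/.pyenv/versions 2.7.11
    # unpack on another box with the same OS and system libraries
    tar xzf py27.tgz -C ~/.pyenv/versions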
I don't understand how system package managers are considered an "encumbrance" by so many developers. It takes all of 10 minutes to write an rpmspec file unless you have a super-hairy build/install process (in which case you should look at fixing that) - I have one in the git repository for all of my Python projects, along with a Makefile with a sources target that just calls `setup.py sdist` to create a source tarball for rpmbuild.
Pushing out a new release is as simple as running `koji build f23 git+ssh://my.git.server/my/project.git`, and within 15 minutes it's been published to my internal yum repository and Puppet is installing the newest version on all my servers. How is managing pyenv and dealing with Fabric or whatever other tool of choice any easier than this?
We usually stick to the official Debian python-* packages (if they exist upstream). When they don't we use python-virtualenv[1][2] to pull specific versions and dependencies from pip into a virtual env.
Now when we build our Debian package (debian/rules):
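Assuming the dh-virtualenv add-on (the exact contents here are a sketch, not our production file):

    #!/usr/bin/make -f
    # minimal debian/rules using dh-virtualenv to bundle the virtualenv
    %:
    	dh $@ --with python-virtualenv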
Great example of this - my current client, who requires on-premise development, blocks Fastly. Guess what runs on Fastly? NPM.
So no development if you need to use npm. I can download e.g. Grunt from GitHub directly, but installing the local package immediately tries to download dependencies from npm.
> 1. Why store dependencies locally? Seems a bit wasteful and unnecessary with tools like pip/virtualenv. (OP does mention virtualenv, but also "Include ALL the dependencies locally" so I'm not sure what to make of that)
This is a good question, and I think it depends on your toolchain and also the nature of the code you are working with. Not every project you work on will be open source, and it might be inaccessible from the public containers/VMs where you deploy. So it's much easier to bundle the whole application into one package and push that to the destination, e.g. if you want to deploy your app to Heroku/Cloud Foundry/Bluemix.
I use dokku/CloudFoundry these days and just toss dependencies into requirements.txt, but at the time I often had to deploy stuff on boxes with _zero_ outside access.
> Well, because JSON is about the only format I can read on all the languages I use without ambiguity or extra dependencies
Alas, there is a downside: The JSON spec does not allow comments, which are often important for configuration files. (Though given how much that simplifies parsing, it may have been the right decision.)
So you either omit comments, or end up shoving placeholder key-values into the closest spot you can find to whatever you want to comment on.
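e.g. the classic workaround (keys made up):

    {
      "_comment": "bump this if the workers keep timing out",
      "timeout_seconds": 30
    }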
I'm the author of Peewee and I've been running my blog with Flask, Peewee, gevent and sqlite. It's a very low-impact combination, lightweight and fun to use, although I'm a bit biased.
If Raymond Hettinger says that, it's probably true. :-) I agree -- I've been using peewee for a side project, and it's great. I also love the fact that the source is one file, as that makes it so easy to find stuff in and navigate. I've been working offline, so didn't have access to the docs, and just referred to the source to figure out how to do things.
Ackkkk I'm actually rewriting it currently to try and ameliorate the cruft that's crept in since the 2.0 rewrite. I want to say "But wait! If you read the code in a couple weeks it'll be so much better!"
Trust me, I'm sure I can improve my Python skills by reading the current version. I don't get a chance to use Python enough to shed the baggage of other languages to write Pythonic code, if that makes sense.
I haven't used Peewee in a project yet, but I've poked around with it. It's really nice, so thank you.
I'm using peewee for the first time on a project starting yesterday.
Not using the ORM (I like writing SQL for some reason) but the connection pooling on Postgres is nice and I'm really enjoying working with it. Thanks for your work.
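For anyone curious, the pooled Postgres setup in peewee's playhouse extension looks roughly like this (credentials and pool numbers made up):

    # pooled Postgres connections via peewee's playhouse extension
    from playhouse.pool import PooledPostgresqlDatabase

    db = PooledPostgresqlDatabase(
        'my_db',                # database name (made up)
        max_connections=8,      # cap on open connections
        stale_timeout=300,      # recycle connections idle past 5 minutes
        user='app', host='localhost')

    # skipping the ORM and writing plain SQL still works:
    cursor = db.execute_sql('SELECT version();')
    print(cursor.fetchone())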
Just out of interest, what were the main challenges you faced when using Celery?
Celery is a task queue. Pika could definitely be used to build a task queue (similarly to how Celery is built on top of Kombu), but I can't think of many good reasons why that would pay off. Unless your use case was not a task queue...
Flask application context problems; exceptions getting hidden within tasks; JSON serialisation sometimes doing weird things and being hard to configure (the default, pickle, seems to be warned against as a security vulnerability); weirdness with the gevent/Socket.IO Flask stuff; difficulty (impossibility??) of emitting socket events from within a worker; and finally, the canvas stuff seems great on the surface, but I found its weirdness mind-blowingly difficult to debug.
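The serializer part at least is configurable away from pickle; with Celery 3.x-era setting names it's something like:

    # force JSON instead of the default pickle serializer (Celery 3.x names)
    CELERY_TASK_SERIALIZER = 'json'
    CELERY_RESULT_SERIALIZER = 'json'
    CELERY_ACCEPT_CONTENT = ['json']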
My Pika/RabbitMQ stuff was much easier to build, unit test and debug (i.e. it actually works). There is every chance that I'm just hopeless when it comes to using Celery, but given that I did manage to build the Pika version, maybe these were real issues? Dunno - it might all work flawlessly for you, in which case it's a great tool.
You might also want to read my short piece on SSE: http://taoofmac.com/space/blog/2014/11/16/1940
And yeah, I've been meaning to change the blog layout. Hardly any time for it these days, honestly.