A hook that I've found useful (though not perfect -- esp. when working on many branches) is to check that "pip freeze" and requirements.txt match before allowing a commit. My hgrc has the following line for this:
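Roughly this (from memory, so treat it as a sketch rather than the exact line -- a precommit hook where diff exits non-zero when the two differ, which aborts the commit):

    [hooks]
    precommit.reqs = pip freeze | diff - requirements.txt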
While I agree with the virtualenvwrapper suggestion, the idea that a single directory is going to 'clutter' your project folder is stretching the definition of 'clutter', to me.
I wrote a script a while ago that sets up a structure similar to the django project described here, and also handles issues like the one you've mentioned: https://github.com/skinnyp/djan-n-go
> So do you use virtualenv in production? Is there a good tutorial on this for my developers?
Using virtualenv in production mostly boils down to `pip install -r reqs.txt -E virtual_env` (in place of `pip install -r reqs.txt`) and making sure the virtualenv's path comes first in sys.path (see http://code.google.com/p/modwsgi/wiki/VirtualEnvironments).
You can also execfile the activation script, but I prefer changing sys.path.
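For the sys.path approach with mod_wsgi, the .wsgi file ends up looking roughly like this (a sketch only; the venv location, python version and settings module are placeholders):

    # myproject.wsgi -- prepend the venv's site-packages so it wins over system packages
    import os, sys, site

    VENV_SITE = '/srv/myproject/env/lib/python2.7/site-packages'  # hypothetical path

    prev = set(sys.path)
    site.addsitedir(VENV_SITE)          # also processes the venv's .pth files
    new = [p for p in sys.path if p not in prev]
    sys.path = new + [p for p in sys.path if p not in new]

    os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

    import django.core.handlers.wsgi
    application = django.core.handlers.wsgi.WSGIHandler()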
I don't think there is any 'best practice' as of yet. It ranges from just running virtualenv, and then using pip + requirements.txt on deploy, to packaging up your virtualenv in a .rpm/.deb and installing via your distro's package manager.
The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.7/site-packages (or whatever your platform’s standard location is), it’s easy to end up in a situation where you unintentionally upgrade an application that shouldn’t be upgraded.
Additionally, you may run into a problem where your distribution only offers certain versions of python and its packages, when you need newer versions. On systems like CentOS, this becomes a bit more complex as you can wind up in 'dependency hell' trying to compile everything you may need for a complex python program. virtualenv and pip make this very easy to manage and set up.
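Concretely, the isolation is just one environment per application, each with its own copy of the library (LibFoo and the paths here are placeholders):

    virtualenv --no-site-packages /srv/app1/env
    /srv/app1/env/bin/pip install 'LibFoo==1.0'

    virtualenv --no-site-packages /srv/app2/env
    /srv/app2/env/bin/pip install 'LibFoo==2.0'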
We find using --system-site-packages in your deployment environment is a better solution for deploying Django, especially when it comes to dependencies that need to be compiled.
It gives you flexibility in your environment and lets you deploy onto machines without a C compiler or strange setup requirements (e.g. psycopg2 on OS X, or PIL on Ubuntu).
You then install Django, and other Python-only dependencies inside your virtualenv, which stops you polluting the global Python path.
It requires some discipline in your development setup, but it means you're able to develop on whatever platform you choose, and it simplifies deployment.
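A rough sketch of the split as a fabric task (package names and paths are illustrative, not a prescription):

    # compiled deps come from the distro; pure-Python deps live in the venv
    from fabric.api import run, sudo

    def provision():
        # no C compiler needed on the target: the distro ships the compiled bits
        sudo('apt-get install -y python-psycopg2 python-imaging')
        # the venv can see those thanks to --system-site-packages
        run('virtualenv --system-site-packages /srv/myapp/env')
        # Django and other pure-Python dependencies stay inside the venv
        run('/srv/myapp/env/bin/pip install -r /srv/myapp/requirements.txt')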
What's wrong with PIL on Ubuntu? If you're running into needing to repoint JPEG_ROOT/ZLIB_ROOT, I've started avoiding that by symlinking where PIL expects things to be to where they actually are. Works a charm, and avoids using --system-site-packages.
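For reference, the symlinks in question are along these lines (exact paths vary by Ubuntu release and architecture, so treat these as illustrative):

    sudo ln -s /usr/lib/x86_64-linux-gnu/libjpeg.so /usr/lib/libjpeg.so
    sudo ln -s /usr/lib/x86_64-linux-gnu/libz.so /usr/lib/libz.so
    sudo ln -s /usr/lib/x86_64-linux-gnu/libfreetype.so /usr/lib/libfreetype.so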
I used to do this, but it feels like a suboptimal solution. I could use Fabric to ensure the links are there but it adds more complexity to the setup:
a) Another step to remember
b) Requires a compiler on the production system
c) Requires you to manually resolve the image library dependencies
Multiplying that across multiple packages can quickly become a headache. It's much easier to let the OS package manager deal with this, and it will make your system more robust over time.
It's easier to set up than puppet/chef, but you lose a huge amount of flexibility/robustness. All fabric does is run commands on host machines. Dependency management is your responsibility.
I wouldn't be so harsh. Both Puppet and Chef come with their own (enterprisey) overhead, and each is Yet Another Tool to understand and maintain. Chef in particular feels like an over-engineered solution for anyone managing fewer than hundreds of servers; it adds a lot of cruft (centralized server, authorization, protocols, etc.) that most people don't need. I believe a lot of people like it for the sole reason that they are not experienced at managing servers, so they can just use pre-made recipes and call it a day.
Anyway, you can go a long way with just a bunch of scripts leveraging Fabric's API. I have set up ~10 servers for a news portal I run, from the ground up, in just a few lines of code. Managing dependencies is not as ridiculously difficult as you make it sound; package managers (apt-get, pip) already handle that for you without any overhead.
Completely agree. Chef and Puppet can be overkill for many small to mid-sized environments. The learning curve for both is relatively steep compared to fabric. With the parallel exec feature, fabric more than meets our requirements for a small setup (<10 instances).
Interestingly, most posts that recommend Puppet/Chef still depend on Fabric for deployment. Is a mutually exclusive, pure Puppet/Chef approach better in any way?
If you want to use fabric (or a shell script) to run puppet, go ahead. I'm just suggesting that you really want to use a deploy system with proper dependency management.
The issue I ran into with fabric is that I often got stuck in dependency hell. The following is fabric's simplest method of dependency management:
    def install_foo():
        install_foo_dependency()   # re-runs in full on every deploy, even when nothing changed
        ...
Unfortunately, you don't want to do this every time you deploy because install_foo_dependency() might take a while to run. You can work around it by checking inside install_foo_dependency whether it's already there. In practice, you probably won't always do this. Puppet usually has recipes which already do this for you.
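The guard you end up writing looks something like this (the library name, path and build steps are made up for illustration):

    from fabric.api import run, sudo
    from fabric.contrib.files import exists

    def install_foo_dependency():
        if exists('/usr/local/lib/libfoo.so'):
            return                  # already built once; skip the slow compile
        run('wget -q http://example.com/libfoo-1.0.tar.gz')
        run('tar xzf libfoo-1.0.tar.gz')
        sudo('cd libfoo-1.0 && ./configure && make && make install')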
In theory, you can do things right with fabric. In practice, you have to do a lot of work to replicate what puppet (together with assorted easy to find recipes) gives you out of the box.
As someone who has worked with a fairly involved fabric deployment & provisioning process, I'm forced to agree. Fabric is great for what it is, but you lose so much by not using chef.
Eric Holscher also has an excellent blog post describing how to deploy using both fabric and chef: http://ericholscher.com/blog/2010/nov/8/building-django-app-... There are certain instances where one tool works better than the other, and in those situations that tool is used.
I only scanned the article, but having a remote repo hosting service configured (is it really "developing on the server" while using git?!), branches for feature-dev/bugs/staging/qa/production, VM configuration via chef/puppet, separated settings files, fault reporting, etc. are (for me) all part of doing it the "right way" before writing a single line of my own code.
Heh, at some point soon[1]... once I've migrated the rest of my websites into Rackspace Cloud and their "next generation" offering stabilises, I'll be writing a "This is how we do it now, it might work for you" type article.
Somehow I don't think this article was targeted at you. I'm betting that if you already know how to run puppet/chef, and have all of the above, you already have an opinion on project setup.
True, the bit about developing on the server is straight up odd though (if I skimmed it correctly, that is), as I find the debug mode exception screens rather helpful, and I wouldn't want debug mode running on a publicly reachable machine.
> If you do a lot of Django development, just dump all of the commands above into a fabfile and make creating a proper Django app a one step process.
If you do a lot of django development, you've probably already got a kick-ass project template with a requirements file, so you have a basic working website with all the commonly used modules set up and running in 5 seconds.
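Either way -- template or fabfile -- the bootstrap boils down to a handful of commands. A rough sketch as a fabric task (project name, packages and paths are placeholders):

    # fabfile.py -- one-step project bootstrap, sketch only
    from fabric.api import local

    def startproject(name='mysite'):
        local('virtualenv --no-site-packages %s-env' % name)
        local('%s-env/bin/pip install Django South' % name)
        local('%s-env/bin/django-admin.py startproject %s' % (name, name))
        local('%s-env/bin/pip freeze > %s/requirements.txt' % (name, name))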
I think buildout addresses the deployment problem better than virtualenv + fabric. The only "problem" is its Zope heritage, which makes it not cool enough for bloggers to use.
I really dislike the trend Rails has brought about of a Model mapping directly to a database table.
I've coded in CakePHP, Rails and ASP.NET MVC3, and of the three, MVC3 was the cleanest for me, simply because any Model you created was just a plain POCO class. It didn't map to anything and prevented you from shooting yourself in the proverbial foot, a problem inherent in Rails and CakePHP if you aren't careful.
I even asked a question on SO about this issue, ZERO responses if you can believe that. I guess the silence is answer enough. ;)
I read your SO post but wasn't quite sure what you were getting at, because it wasn't clear to me how you perceive the MVC structure in Rails.
To be honest, I haven't used South in about a year, so I don't really know how it has changed. But with South I always seemed to run into issues where it would let you modify existing elements of your schema but not add new elements or change the types in your schema.
I started using South about two months ago for both my personal website (http://ankursethi.in) and a large-ish CRUD app that I'm working on. In both cases, South has been able to add to and modify the types in my schema. You should give it a whirl again.
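For anyone who hasn't looked at it in a while, the add/modify workflow is basically two commands (the app name is a placeholder):

    ./manage.py schemamigration myapp --auto   # picks up added fields and type changes
    ./manage.py migrate myapp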
This will add the whole venv, which contains symlinks, scripts with shebangs, and potential binaries, and as such is totally linked to your system, so this definitely breaks if your python ends up in another location or you're entirely on another OS.
What should be done is committing a requirements.txt (the output of pip freeze) instead of the venv itself, so when you want to restore/deploy you just recreate the environment and install from that file.

I'm not even considering the issues regarding the presented git workflow. If one wants to semi-automate a git workflow, one would rather use git-flow instead of this prepare_deployment hack.
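A minimal sketch of that round trip (the env name is arbitrary):

    pip freeze > requirements.txt       # commit this file, not the venv
    # later, on the deployment target:
    virtualenv env
    env/bin/pip install -r requirements.txt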