This graph shows the average requests/second that the Python Tornado framework can handle, plotted per commit over time. Very surprising, and a little concerning, since I rely on Tornado for my current projects. I just wanted to make sure you were all aware of this in case you're considering Tornado for your own projects (although I've had nothing but positive experiences with it).
Here we see the movement from "Only 200 lines of code! Does 8 jillion requests per second!" to "Does things correctly, works in the real world!". Welcome to life.
Looking at the data, the performance drops appear to be related to the StackContext feature, which is a convenience rather than something required for correctness in the real world.
Because the very earliest versions of Tornado were used by real websites, it looks like the code that did 8 jillion requests per second worked well enough in the real world.
That's often true, but it doesn't appear to be the case here. This was a simple case of making invasive changes without quantifying their performance effects.
Wow, didn't expect this to make the front page of HN. There's more context in the groups thread (http://groups.google.com/group/python-tornado/browse_thread/...), but the short version is that a change to abstract away some error handling with the Python 'with' statement had a surprising performance impact. When used with the @contextlib.contextmanager decorator, the with statement is more than 25 times as expensive as a try/except statement.
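For illustration, here's a minimal micro-benchmark of the kind of difference being described; the names and structure are my own sketch, not the script from the Tornado thread:

```python
import timeit
from contextlib import contextmanager

@contextmanager
def catcher():
    # A generator-based context manager that swallows exceptions,
    # roughly what an error-handling wrapper would do.
    try:
        yield
    except Exception:
        pass

def with_version():
    # Each call builds a fresh generator object and drives the
    # __enter__/__exit__ protocol through contextmanager's wrapper.
    with catcher():
        pass

def try_version():
    # A bare try/except just sets up an exception handler.
    try:
        pass
    except Exception:
        pass

t_with = timeit.timeit(with_version, number=100_000)
t_try = timeit.timeit(try_version, number=100_000)
print(f"with: {t_with:.4f}s  try/except: {t_try:.4f}s  "
      f"ratio: {t_with / t_try:.1f}x")
```

On CPython the contextmanager version pays for generator creation plus two extra method calls on every pass through the block, so it is reliably the slower of the two; the exact ratio varies by interpreter version.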
It looks like you forgot to call the function in 'simple_catch'. Interestingly, if you do call it, manual_with_catch is actually faster in the failing case. (A try: fn() / finally: return True beats everything else in the failing case.)
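The benchmark script itself isn't reproduced in this thread, so the following is only a guess at what those variants look like, reconstructed from the function names mentioned in this comment:

```python
import timeit
from contextlib import contextmanager

def fn():
    # The "failing case": the wrapped call always raises.
    raise ValueError("boom")

@contextmanager
def catch_all():
    try:
        yield
    except Exception:
        pass

def simple_catch():
    try:
        fn()  # the bug described above was writing `fn` without the ()
    except Exception:
        pass

def manual_with_catch():
    with catch_all():
        fn()

def try_finally():
    # `return` inside a finally block discards the in-flight exception.
    try:
        fn()
    finally:
        return True

for variant in (simple_catch, manual_with_catch, try_finally):
    print(variant.__name__, timeit.timeit(variant, number=10_000))
```

Note that a `return` in a `finally` block silently swallows the exception, which is exactly why it wins this race but is usually considered a bug in real code.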
This was the feedback from the Twisted community when Tornado launched. They found lots of issues in a cursory glance.
My complaint (and I heard the same complaint from a developer as recently as yesterday) was that they tossed away all compatibility with everything that existed in order to solve one specific problem. Then people tried to use it as a general solution.
I spent an hour or two soon after Tornado was released figuring out how to take the good parts of Tornado and the good parts of Twisted and put them together, for this reason: http://dustin.github.com/2009/09/12/tornado.html The result (without any attempt to optimize it) was a reduction in raw performance, but the ability to do a ridiculous number of things you can't do in Tornado even today.
Tornado solves the specific problem of building web applications. What compatibility was tossed in accomplishing this goal? As far as I can tell, Tornado is compatible with HTTP, HTML and other related web standards.
What are some of the things that you can do with your version of Tornado that I cannot do with regular Tornado?
All of these things are possible with Tornado by using simple blocking calls and running more than one instance of the Tornado application behind a reverse proxy. If one instance blocks, then there are other instances of the application to handle new requests.
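As a concrete sketch of that deployment shape (the ports and names here are hypothetical, not from this thread), the usual setup is several single-process Tornado instances on different ports with a reverse proxy such as nginx load-balancing across them:

```nginx
# Hypothetical nginx config: round-robin across four Tornado instances.
upstream tornado_app {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
    server 127.0.0.1:8004;
}

server {
    listen 80;
    location / {
        proxy_pass http://tornado_app;
    }
}
```

While one process sits in a blocking call, the proxy keeps routing new requests to the others.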
FriendFeed uses Tornado with MySQL (I learned this from Bret Taylor's blog). It seems to work.
What a cool visualization! Between GitHub and a continuous-integration benchmarking tool, you could generate this for any project. I would pay to have it.
Very cool. I'm using Tornado for my projects and I gotta say, I'm really enjoying the minimalism and design. I can dive into the source and tweak it fairly easily.
Recently I tweaked the templating engine to accept keyword arguments in a namespace for the {% include %} command. Anyone interested in my patch?
Don't miss the graph/chart (in the 'Tornado Performance Trend' Google doc). Yet another Tornado (and node.js and Deft) benchmark is available at http://www.deftserver.org
Source - http://groups.google.com/group/python-tornado/browse_thread/...