Java servers get around 900K requests/s [...] while Rails squeezes out 6K
Yes, but that's for the /json test, which serves a static object that the framework converts to JSON.
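To make that concrete, roughly all a /json handler has to do is serialize one tiny, fixed object per request. Here's a minimal sketch of that shape using only Python's stdlib (the actual TechEmpower implementations differ per framework, so treat this as illustrative):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class JsonHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The /json test serializes a small static object on every
            # request; no database, templating, or business logic involved.
            body = json.dumps({"message": "Hello, World!"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), JsonHandler).serve_forever()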
I'm not sure it's really representative of the overall speed you'll get with whatever framework you end up using: how often do you have a static page that's best served by your framework instead of letting nginx serve it directly?
I prefer watching the test results that actually involve the database (/db, /queries, /fortunes and /updates), as they show less of the raw speed for serving static content and more of the overall speed you'll get for your dynamic pages.
With /queries for example, Java does ~11.3K, while PHP (on HHVM) does ~10.7K, Python is at ~7.7K and Dart ~12.8K (for the fastest framework of each language; Rails still does badly, though).
The reduced variance of the /queries benchmark suggests to me that the database is the bottleneck in this setup. This would be the first thing you'd attack if you were optimising for speed. For instance, check out the section on StackOverflow's caching: they have five levels of cache, including in-memory caches on the web servers. They are doing a lot of work to take the DB out of the equation.
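A stripped-down version of that last idea, an in-memory cache sitting in front of the query so most requests never reach the database at all, looks something like this (fetch_from_db is a hypothetical stand-in for whatever your real query is):

    import time

    CACHE_TTL_SECONDS = 30
    _cache = {}  # key -> (expires_at, value)

    def fetch_from_db(key):
        # Hypothetical stand-in for a real database query.
        return {"id": key, "payload": "expensive result"}

    def cached_get(key):
        now = time.time()
        hit = _cache.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]                       # served from process memory, DB never touched
        value = fetch_from_db(key)              # only on a miss or after the TTL expires
        _cache[key] = (now + CACHE_TTL_SECONDS, value)
        return value

    print(cached_get(42))   # miss: goes to the "database"
    print(cached_get(42))   # hit: answered entirely from memory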
I prefer the /json test because it gives me a performance ceiling. I know that's the max performance I can expect, and it's up to me to design my system to get as close to that as I can.
On the other hand, the DB tests don't really tell me much. Do my data and data-access patterns match theirs? Probably not. So it is difficult to generalise from their results. If the systems you build are always stateless web servers talking to a relational DB, then I can see it might be more useful.
This is a good point. It's true that for the high-performance frameworks and platforms, the database is the bottleneck. Or, more accurately, given the trivial query and small payload, the overhead of the database driver and the wire protocol are the significant factors. That said, there remains a fairly broad distribution across the Single-query and Fortunes tests. For the low- and medium-performance frameworks, the overhead of their ORM and other factors outweighs the database itself. I find it illuminating that in many cases the ORM code necessary to marshal result sets into usable objects costs more than the underlying queries.
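As a toy illustration of what that marshalling step is (using sqlite3 and a made-up fortune table, not any particular ORM):

    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class Fortune:
        id: int
        message: str

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE fortune (id INTEGER PRIMARY KEY, message TEXT)")
    conn.executemany("INSERT INTO fortune VALUES (?, ?)",
                     [(i, "fortune #%d" % i) for i in range(1, 13)])

    # The query itself is trivial and cheap...
    rows = conn.execute("SELECT id, message FROM fortune").fetchall()

    # ...but an ORM also has to turn every row into an object (and a real
    # ORM layers identity maps, change tracking, lazy-loading proxies, etc.
    # on top), which is where a surprising share of the time goes.
    fortunes = [Fortune(id=r[0], message=r[1]) for r in rows]
    print(fortunes[:3])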
Meanwhile, the 20-query test is a bit pathological, as it runs into a brick wall with the database wire protocol and the efficiency of the database driver. Many otherwise high-performance frameworks and platforms become bottlenecked waiting on those 20 queries per request. But you and I agree, a 20-query-per-request scenario should be the exception and not the rule. When developing a smooth-running web application, it's common to aim for zero queries per page load. (For those who find this to be crazy talk, note that I'm saying we aim for that ideal, and may not necessarily achieve it.)
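For contrast, here's a toy sketch of the difference between 20 individual round trips, which is roughly what that test pays for, and the single batched query a real application would usually reach for instead (sqlite3 and a made-up world table, purely for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE world (id INTEGER PRIMARY KEY, random_number INTEGER)")
    conn.executemany("INSERT INTO world VALUES (?, ?)",
                     [(i, (i * 37) % 10000) for i in range(1, 10001)])

    ids = list(range(1, 21))

    # 20 separate queries: each one pays the driver and wire-protocol
    # overhead, which is where the high-performance platforms stall.
    one_by_one = [conn.execute("SELECT id, random_number FROM world WHERE id = ?",
                               (i,)).fetchone()
                  for i in ids]

    # The same rows fetched in a single round trip.
    placeholders = ",".join("?" * len(ids))
    batched = conn.execute(
        "SELECT id, random_number FROM world WHERE id IN (%s)" % placeholders,
        ids).fetchall()

    assert sorted(one_by_one) == sorted(batched)
    print(len(batched), "rows fetched in a single query")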
I too particularly enjoy knowing the high-water mark set by the JSON and Plaintext tests. When we add the next test type (caching-enabled multiple queries), we should see some interesting results.