One thing you may not know or haven't faced yet is that Mongo fails, and sometimes it fails so hard that you'd rather go kick rocks.
For example, if you start a background index build on an eventually consistent replica set, the indexing on secondary nodes is done in the foreground. That means you're only accepting reads from the slaves, but the slaves are unresponsive because of the index build. In this state, if you try to do anything fancy, your data can end up corrupted. The only way out is to wait through the outage (which I find pretty hard to do). This is still not fixed in 2.4; we're waiting on 2.6.
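Concretely, here's a minimal sketch, assuming pymongo, of the kind of index build that triggers this (the host and collection names are made up). The background=True flag only helps on the primary; on 2.4-era secondaries the build replicates as a blocking foreground operation:

    # Sketch (assuming pymongo); host and collection names are hypothetical.
    from pymongo import MongoClient

    coll = MongoClient("mongodb://primary.example.com:27017").mydb.events

    # Runs in the background on the primary...
    coll.create_index("user_id", background=True)
    # ...but the build that replicates to secondaries is foreground and
    # blocks reads on them until it finishes.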
Then there are the replica sets left with nothing but secondaries that can't elect a primary because they lost a node, the mostly random primary-secondary switches that drop all connections, and the occasional primary re-electing itself while dropping connections for no apparent reason. Mongo offers tin-foil hats for integrity, consistency, and reliability. So yeah, I'd rather examine and understand why an SQL query is slow, because that at least is deterministic, which in Mongo nothing really is.
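In practice, those dropped connections mean every client ends up wrapping its writes in a retry loop. A rough sketch, assuming pymongo (the replica set name and backoff are made up):

    import time
    from pymongo import MongoClient
    from pymongo.errors import AutoReconnect

    coll = MongoClient("mongodb://rs0.example.com:27017",
                       replicaSet="rs0").mydb.events

    def insert_with_retry(doc, attempts=5):
        for i in range(attempts):
            try:
                return coll.insert_one(doc)
            except AutoReconnect:
                # An election/step-down dropped our connection; back off and retry.
                time.sleep(0.1 * 2 ** i)
        raise RuntimeError("replica set never settled on a primary")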
Postgres supports free-form JSON, XML, and hstore document formats, by the way, and CouchDB has its own specific features as a document DB too. I still don't see why people want to push on with Mongo this badly.
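For example, a rough sketch of storing a free-form JSON document in Postgres, assuming psycopg2 and a 9.2+ server (the table and DSN are made up):

    import json
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS docs (id serial PRIMARY KEY, body json)")
    cur.execute("INSERT INTO docs (body) VALUES (%s)",
                [json.dumps({"user": "alice", "tags": ["ops", "admin"]})])
    conn.commit()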
I've been running kill -9 against the master Mongo instance in our QA lab on a five-minute cron for a good long time now, 24/7.
There's a program running that, 100 times per second, reads a document, MD5's the field, and writes it back. At the same time, it reads a file from the local filesystem, MD5's it, and writes it back. The document and the local filesystem file started with the same value.
After a few thousand kill -9's on the master instance, the local file and the mongo document are still identical.
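For reference, a rough sketch of what a loop like that might look like (this is not the actual harness; the names and paths are made up, and it assumes pymongo):

    import hashlib
    import time
    from pymongo import MongoClient

    coll = MongoClient().qa.kill_test   # hypothetical collection
    PATH = "/tmp/kill_test.txt"         # hypothetical local file

    while True:
        # Round-trip the Mongo document: read the field, MD5 it, write it back.
        doc = coll.find_one({"_id": "probe"})
        new_val = hashlib.md5(doc["value"].encode()).hexdigest()
        coll.update_one({"_id": "probe"}, {"$set": {"value": new_val}})

        # Do the same to the local file; both started with the same value.
        with open(PATH) as f:
            file_val = hashlib.md5(f.read().encode()).hexdigest()
        with open(PATH, "w") as f:
            f.write(file_val)

        time.sleep(0.01)  # roughly 100 iterations per second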
I've been running MongoDB in production since 2010.
It's definitely possible to use Mongo in a way that isn't safe for your particular use case. But we're doing it correctly.
I haven't lost a bit of data in more than three years of MongoDB.
Mongo has a lot of limitations. We're currently researching various 'big data' solutions, because for us, Mongo doesn't fit that space.
For ease of development (in dynamic languages, where your in-program data structures and in-database documents look almost identical), safety, and lack of headaches, MongoDB has been a consistent win for me and the teams I've been on.
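By "look almost identical" I mean something like this (a trivial sketch, assuming pymongo; the collection and fields are made up):

    from pymongo import MongoClient

    coll = MongoClient().app.users  # hypothetical database/collection

    # The in-program structure...
    user = {"name": "alice", "roles": ["admin", "ops"], "prefs": {"theme": "dark"}}

    # ...is stored and read back with essentially the same shape.
    coll.insert_one(user)
    print(coll.find_one({"name": "alice"}))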
Is your test producing a new document value 100 times a second, or just writing the same value back over and over again?
It sounds like it might be the latter, which is not a particularly stressful test (because you can't detect data rollback).
I'm more familiar with relational database internals, but I wouldn't be surprised if a DB just optimized out the unchanged write entirely (it would still need to read the current row value, but it doesn't have to invoke any data-modification code once it sees the value hasn't changed).
For a good test, you really want to simulate a power loss, which you aren't getting with a process-level kill, because all of the OS caches/buffers survive. You can simulate this with a VM, or with a loopback device. I'd be amazed if MongoDB passed a changing-100-times-a-second test then. I'd be amazed if any database passed it. I'd even be amazed if two filesystems passed :-)
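For what it's worth, here's a sketch of a rollback-detecting variant: write a monotonically increasing counter and record every acknowledged value somewhere that survives the crash, then compare after recovery. This is my own assumption of such a harness, not the poster's, and it assumes pymongo:

    from pymongo import MongoClient

    coll = MongoClient().qa.counter_test       # hypothetical collection
    log = open("acked.log", "a", buffering=1)  # keep this log on a machine that
                                               # isn't part of the failure test

    coll.update_one({"_id": "probe"}, {"$setOnInsert": {"n": 0}}, upsert=True)

    i = 0
    while True:
        i += 1
        coll.update_one({"_id": "probe"}, {"$set": {"n": i}})
        log.write("%d\n" % i)  # only reached once the write was acknowledged

    # After the crash: if the stored "n" is smaller than the last value in
    # acked.log, an acknowledged write was rolled back.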