
I've been running kill -9 on the master mongo instance in our QA lab, on a five-minute cron, for a good long time now. 24/7.

There's a program running that, 100 times per second, reads a document, MD5's one of its fields, and writes the hash back. At the same time, it reads a file from the local filesystem, MD5's its contents, and writes the hash back. The document and the local file started with the same value.

After a few thousand kill -9's on the master instance, the local file and the mongo document are still identical.
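
In rough Python, the loop looks something like this (a sketch, not the actual program; the client setup, names, write concern, file path, and retry policy are my assumptions):

    import hashlib
    import time

    from pymongo import MongoClient, errors
    from pymongo.write_concern import WriteConcern

    client = MongoClient("mongodb://localhost:27017")
    # On a replica set, journaled majority-acknowledged writes are the
    # ones that should survive a kill -9 of the primary.
    coll = client["qa"].get_collection(
        "crashtest", write_concern=WriteConcern(w="majority", j=True)
    )

    PATH = "/tmp/crashtest.txt"  # hypothetical local-file counterpart

    # Seed both copies with the same starting value.
    coll.update_one({"_id": 1}, {"$setOnInsert": {"value": "seed"}}, upsert=True)
    with open(PATH, "w") as f:
        f.write("seed")

    def with_retry(fn):
        # Ride out the failovers caused by the kill -9 cron job; only
        # advance once the operation is acknowledged, so the file and
        # the document stay in lockstep.
        while True:
            try:
                return fn()
            except errors.PyMongoError:
                time.sleep(0.5)

    while True:
        # Read the document, hash the field, write the hash back.
        doc = with_retry(lambda: coll.find_one({"_id": 1}))
        new_value = hashlib.md5(doc["value"].encode()).hexdigest()
        with_retry(lambda: coll.update_one({"_id": 1},
                                           {"$set": {"value": new_value}}))

        # Same chained hash on the local file.
        with open(PATH) as f:
            file_value = hashlib.md5(f.read().encode()).hexdigest()
        with open(PATH, "w") as f:
            f.write(file_value)

        time.sleep(0.01)  # roughly 100 iterations per second

Because each value is the hash of the previous one, any acknowledged-but-lost write shows up as the two copies drifting apart.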

I've been running MongoDB in production since 2010.

It's definitely possible to use Mongo in a way that isn't safe for your particular use case. But we're doing it correctly.

I haven't lost a bit of data in more than three years of MongoDB.

Mongo has a lot of limitations. We're currently researching various 'big data' solutions, because Mongo doesn't fit that need for us.

For ease of development (in dynamic languages, where your in-program data structures and in-database documents look almost identical), safety, and lack of headaches, MongoDB has been a consistent win for me and the teams I've been on.
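
That's easiest to see in a toy example (pymongo; the names here are made up):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    users = client["app"]["users"]

    # The in-program structure is a plain dict...
    user = {"name": "ada", "roles": ["admin"], "logins": 3}
    users.insert_one(user)

    # ...and what comes back is a plain dict of the same shape
    # (plus the server-assigned "_id").
    fetched = users.find_one({"name": "ada"})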




Is your test producing a new document value 100 times a second, or just writing the same value back over and over again?

It sounds like it might be the latter, which is not a particularly stressful test (because you can't detect data rollback).

I'm more familiar with relational database internals, but I wouldn't be surprised if a DB just optimized out the unchanged write entirely (it would still need to read the current row value, but it doesn't have to invoke any data-modification code once it sees the value hasn't changed).

For a good test, you really want to simulate a power loss, which you aren't getting when you do a process-level kill, because all the OS caches/buffers survive. You can simulate this with a VM, or with a loopback device. I'd be amazed if MongoDB passed a changing-100-times-a-second test then. I'd be amazed if any database passed it. I'd even be amazed if two filesystems passed :-)
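
On Linux, one blunt way to pull the virtual plug on a throwaway test VM (an alternative to the loopback approach, offered as a sketch and assuming SysRq is enabled) is the kernel's SysRq trigger, which reboots immediately without syncing or unmounting, so anything still in the OS page cache is lost:

    def pull_the_plug():
        # Requires root and kernel.sysrq enabled. Writing "b" reboots
        # the machine immediately -- no sync, no clean unmount -- so the
        # page cache is dropped, which a plain kill -9 never does.
        with open("/proc/sysrq-trigger", "w") as f:
            f.write("b")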


I'm reading the document, hashing it, and writing the hashed value back. So it changes every time.

I plan on extending this test by blocking port 27017 with iptables, then doing the kill -9, then wiping out all of the database files. That'll be fun. :)
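
Roughly, that failure injection would look like this (run as root; the mongod PID and data directory are hypothetical):

    import shutil
    import subprocess

    def isolate_and_destroy(mongod_pid, dbpath="/var/lib/mongodb"):
        # Drop all inbound TCP to the default mongod port, cutting the
        # node off from clients and replica-set peers.
        subprocess.run(
            ["iptables", "-A", "INPUT", "-p", "tcp",
             "--dport", "27017", "-j", "DROP"],
            check=True,
        )
        # Kill the server with no chance to flush or step down.
        subprocess.run(["kill", "-9", str(mongod_pid)], check=True)
        # Wipe the data files so the node has to fully resync on return.
        shutil.rmtree(dbpath)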



