Is your test producing a new document value 100 times a second, or just writing the same value back over and over again?

It sounds like it might be the latter, which is not a particularly stressful test: if every write is identical, you can't tell whether data was rolled back after a crash.

I'm more familiar with relational database internals, but I wouldn't be surprised if a DB optimized out the unchanged write entirely: it still has to read the current row value, but it never has to invoke any data-modification code once it sees the value hasn't changed.

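Roughly the short-circuit I have in mind, as an illustrative Python sketch rather than any real engine's code (write_ahead_log is a hypothetical stand-in for the actual logging path):

    def write_ahead_log(row, column, value):
        # hypothetical stand-in for the engine's real logging path
        print("WAL:", column, value)

    def apply_update(row, column, new_value):
        # the engine already read the row to locate it, so the comparison is free;
        # on a no-op write it returns before any modification code runs
        if row.get(column) == new_value:
            return False  # no log record, no dirty page, nothing to crash-test
        row[column] = new_value
        write_ahead_log(row, column, new_value)
        return True
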
For a good test, you really want to simulate a power loss, which you aren't getting with a process-level kill, because all the OS caches and buffers survive. You can simulate this with a VM, or with a loopback device. I'd be amazed if MongoDB passed a changing-100-times-a-second test then. I'd be amazed if any database passed it. I'd even be amazed if two filesystems passed :-)

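For instance (a minimal sketch, assuming the database runs inside a Linux VM and this script runs as root in the guest), the magic-sysrq "b" command reboots the kernel instantly, with no sync and no unmount, so nothing in the page cache survives:

    # enable the magic sysrq interface (it may already be enabled)
    with open("/proc/sys/kernel/sysrq", "w") as f:
        f.write("1")

    # "b" = reboot immediately: no sync, no unmount; cached writes are simply gone
    with open("/proc/sysrq-trigger", "w") as f:
        f.write("b")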



I'm reading the document, hashing it, and writing the hashed value back. So it changes every time.

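Roughly like this, as a sketch with pymongo (collection name and seed value made up):

    import hashlib
    from pymongo import MongoClient

    coll = MongoClient("mongodb://localhost:27017").test.stress

    # seed one document, then repeatedly read it, hash its value, and
    # write the hash back -- every write is a genuine change
    coll.update_one({"_id": 1}, {"$set": {"value": "seed"}}, upsert=True)
    while True:
        doc = coll.find_one({"_id": 1})
        new_value = hashlib.sha256(doc["value"].encode()).hexdigest()
        coll.update_one({"_id": 1}, {"$set": {"value": new_value}})
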
I plan on extending this test by blocking 27017 with iptables, then doing the kill -9, then wiping out all of the database files. That'll be fun. :)

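A rough sketch of that sequence (MONGOD_PID is a placeholder; the iptables rule drops client traffic on 27017 first, so no acknowledged write can slip in after the cut):

    import os
    import signal
    import subprocess

    MONGOD_PID = 12345  # placeholder: the real mongod process id

    # cut off client traffic first, then kill the server without warning
    subprocess.run(["iptables", "-A", "INPUT", "-p", "tcp",
                    "--dport", "27017", "-j", "DROP"], check=True)
    os.kill(MONGOD_PID, signal.SIGKILL)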