
Jesus, I can't believe people still defend this shit. I don't care if MongoDB can store ten jiggerbytes, it should give an error when you're trying to insert more than that. When the hell did corrupting your data become acceptable for a datastore?



A simple guide to keeping your integrity in the software business: don't crap on other people's products, and admit your own mistakes.


I'm confused, are you talking about me or MongoDB? If you're talking about me, I don't think I made a mistake, and if you're talking about MongoDB, the guys on IRC were very civil about it and did say that silently corrupting data was the wrong way to go about it.

It's these apologists who are giving MongoDB a bad name, really, because the guys on IRC were nothing but helpful about it.


Sorry, I should have posted this under the Katz tweet link. My bad.


Oh, that makes sense then, thanks for clarifying...


The 32-bit version is available for convenience; nobody uses it in prod. You can't complain that a simple testing tool limited to 2 GB can't store more than 2 GB.


When did I complain it couldn't store more than 2 GB? I complained because it silently corrupted my data.


You used a dev version of a tool that is not meant to be run in production or for any serious task, and then complained when it didn't work. As this bug is not present in the real prod version of MongoDB, it doesn't make sense to criticize MongoDB for it. Also, next time RTFM before using a tool you don't know much about.


I feel this is a justifiable defense: if you use a pre-release version of anything, you should recognize that you are taking a risk.


The 32-bit stable versions have the same behaviour, and the MongoDB guys on IRC did admit that data corruption is not a very elegant way to handle it.


Unless you run on EC2 and don't need a large instance (which is where 64-bit starts).
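
For anyone curious about the "silently corrupted" part: the drivers of that era defaulted to fire-and-forget writes, so an insert that blew past the 32-bit storage limit could fail without the client ever hearing about it unless you explicitly asked for the error afterwards. A rough sketch of how you might surface this from Python with pymongo (the database/collection names are invented for illustration; newer pymongo versions acknowledge writes by default):

    # Sketch only: detect a 32-bit mongod and make write failures visible.
    from pymongo import MongoClient
    from pymongo.errors import PyMongoError

    client = MongoClient("localhost", 27017)

    # buildInfo reports the server's word size; the 32-bit build is the one
    # capped at roughly 2 GB of data because of memory-mapped storage.
    if client.server_info().get("bits") == 32:
        print("warning: 32-bit mongod, storage tops out around 2 GB")

    coll = client["testdb"]["stuff"]  # hypothetical names
    try:
        # Acknowledged writes raise on failure instead of dropping the
        # document silently.
        coll.insert_one({"payload": "x" * 1024})
    except PyMongoError as exc:
        print("insert failed:", exc)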



