
This doesn't work. You can't just copy the files and expect them to be in a sane or consistent state.

You need to either a) use InnoDB Hot Backup or b) use a slave: stop the slave, run the backup, and restart the slave to catch up.
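For the curious, option (b) is roughly the following in shape: stop replication on a replica, shut it down so the files on disk are consistent, copy them off, and bring it back up to catch up with the master. This is only a sketch; the paths, service name, and credentials are hypothetical and would need to match your own setup.

    #!/usr/bin/env python3
    """Rough sketch of option (b): stop a replica, copy its files, bring it back.

    Paths, the service name, and credentials are hypothetical; a real script
    would also want locking, error reporting, and retention of old backups.
    """
    import subprocess
    from datetime import datetime

    DATADIR = "/var/lib/mysql"                              # hypothetical data dir
    BACKUP_DIR = f"/backups/mysql-{datetime.now():%Y%m%d-%H%M%S}"

    def run(*cmd):
        """Run a command and raise if it fails."""
        subprocess.run(cmd, check=True)

    # 1. Stop replication, then shut mysqld down cleanly so every file on disk
    #    is in a consistent state (just copying files from a live server isn't).
    run("mysql", "-e", "STOP SLAVE;")
    run("mysqladmin", "shutdown")

    # 2. Copy the data directory while the server is down; this is the backup.
    run("rsync", "-a", DATADIR + "/", BACKUP_DIR + "/")

    # 3. Bring the server back up; by default the slave threads restart with it
    #    and replay everything the master wrote while we were copying.
    run("systemctl", "start", "mysql")

Since the copy happens on the slave, the master never has to stop taking writes for a backup.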

At delicious we used (b), plus a hot spare master, plus many slaves.

Additionally, every time a user modified his account, it would go on a queue for individual backup; the account itself (and it alone) would be snapped to a file (Perl Storable, iirc), which only got regenerated when the account changed, so we weren't re-dumping users that were inactive. A little bit of history allowed us to respond to things like "oh my god, all my bookmarks are gone" and various other issues (which were usually due to API-based idiocy of some sort or another).
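For what it's worth, that per-account snapshot idea looks roughly like this in Python, with pickle standing in for Perl Storable; the queue, the fetch_account() helper, and the paths are hypothetical stand-ins, not the actual delicious code.

    #!/usr/bin/env python3
    """Per-account backup queue, roughly as described above.

    pickle stands in for Perl Storable; the queue, fetch_account(), and paths
    are hypothetical stand-ins rather than the actual delicious code.
    """
    import os
    import pickle
    import queue

    SNAPSHOT_DIR = "/backups/accounts"    # hypothetical location
    HISTORY_DEPTH = 3                     # old snapshots kept per user

    def fetch_account(username):
        """Placeholder: in the real system this would pull the user's
        bookmarks and settings from the database."""
        return {"user": username, "bookmarks": []}

    def snapshot(username):
        """Dump a single account to disk, rotating a little bit of history."""
        os.makedirs(SNAPSHOT_DIR, exist_ok=True)
        path = os.path.join(SNAPSHOT_DIR, f"{username}.pickle")

        # Rotate: user.pickle -> user.pickle.1 -> user.pickle.2, oldest falls off.
        for n in range(HISTORY_DEPTH - 1, 0, -1):
            newer = path if n == 1 else f"{path}.{n - 1}"
            if os.path.exists(newer):
                os.replace(newer, f"{path}.{n}")

        with open(path, "wb") as f:
            pickle.dump(fetch_account(username), f)

    def backup_worker(changes: "queue.Queue[str]"):
        """Drain the queue of modified accounts. Inactive users never land
        on the queue, so they never get re-dumped."""
        while True:
            snapshot(changes.get())
            changes.task_done()

Keeping a few rotated snapshots per user is what makes the "all my bookmarks are gone" recoveries possible without restoring a full database dump.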




Using a slave isn't foolproof either. If someone were to run a malicious command, it would get replicated, and could end up in the backup before being caught.


I didn't say that. Read what I wrote.

You use the slave so you can shut down the database and get a consistent file snapshot. Then you do the backup offline.


Yeah, it's true. I was being a little simplistic. I usually use (a), but I'm not dealing with the amount of data that delicious is.



