This doesn't work. You can't just copy the files and expect them to be in a sane or consistent state.
You need to either a) use InnoDB Hot Backup, or b) use a slave: stop replication on it, run the backup, then restart replication so it catches back up.
At Delicious we used (b), plus a hot-spare master and many slaves.
Additionally, every time a user modified their account, it went onto a queue for individual backup; the account itself (and it alone) would be snapshotted to a file (Perl Storable, IIRC). A snapshot was only regenerated when the account changed, so we weren't re-dumping users who were inactive. Keeping a little bit of history let us respond to things like "oh my god, all my bookmarks are gone" and various other issues (which were usually due to API misuse of some sort or another).
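A minimal sketch of that change-driven, per-account snapshot idea, using Python's pickle as a stand-in for Perl's Storable. The function names, the queue mechanics, and the account layout are all illustrative assumptions, not Delicious's actual code:

```python
import pickle
import queue
import tempfile
from pathlib import Path

def snapshot_account(account: dict, backup_dir: Path) -> Path:
    """Serialize one user's account (and it alone) to its own file."""
    # pickle here plays the role Storable played in the Perl stack.
    path = backup_dir / f"{account['user']}.snap"
    with open(path, "wb") as f:
        pickle.dump(account, f)
    return path

def drain_backup_queue(q: "queue.Queue[dict]", backup_dir: Path) -> None:
    # Only accounts that were actually modified ever land on the queue,
    # so inactive users are never re-dumped.
    while not q.empty():
        snapshot_account(q.get(), backup_dir)

# Usage: a modification enqueues the account; a worker drains the queue.
backup_dir = Path(tempfile.mkdtemp())
q: "queue.Queue[dict]" = queue.Queue()
q.put({"user": "alice", "bookmarks": ["https://example.com"]})
drain_backup_queue(q, backup_dir)

restored = pickle.loads((backup_dir / "alice.snap").read_bytes())
```

Restoring a single complaining user is then just reading back their one file, which is what made "all my bookmarks are gone" reports cheap to handle.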