Hacker News

I think having one day for a whole year is a bit sparse. Some startups start and shut down within a year. Apart from checking backups after any big change to backup-related code, I think backups should be checked quarterly.

It takes no more than a couple of hours most of the time, and as the old saying goes, "an ounce of prevention is worth a pound of cure."




I think that once a quarter is better, but if your startup shuts down at the end of the year, you probably don't need to worry about it.


Except the reason they had to shut down might be because they never checked their backups and then they lost a critical amount of data and it turned out that the backups were indeed no good.


I don't think any startup knows that it's going to shut down within the year. No one would take the time if they knew they were going to shut down soon.


Correct. So save some concerns (e.g. weekly verification of backups) until after a year.


Indeed, if one day a year is what it takes for you to remember to care about backups, you probably shouldn't be involved with backups. This can happen to anyone, from a person losing their home photo collection to a hospital missing critical data in its patient management system - I know because I've been involved in or seen firsthand both of these exact scenarios. It's what we learn from our mistakes that defines our future, not the people telling us we made them.


Check your backups daily. The whole process is automated and shouldn't cost much.


Then we'll have a check your check your backups day where you make sure your automation isn't making sure the sky is blue.


It's in there now. Row counts per table and checksums are written out with the backup for every table. If things don't match up, alarm in a loud and noisy way.
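A minimal sketch of that manifest idea - row counts and checksums written alongside the backup, then re-checked after restore. SQLite stands in for the real database so this runs standalone, and the function names (write_manifest, verify_restore) are illustrative, not from the setup described:

```python
import hashlib
import sqlite3

def table_fingerprint(conn, table):
    """Row count plus a checksum over every row, read in a stable order."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY rowid").fetchall()
    digest = hashlib.sha256(repr(rows).encode()).hexdigest()
    return len(rows), digest

def write_manifest(conn, tables):
    """Record a fingerprint per table at backup time."""
    return {t: table_fingerprint(conn, t) for t in tables}

def verify_restore(conn, manifest):
    """Return the tables whose restored data doesn't match the manifest."""
    return [t for t, fp in manifest.items() if table_fingerprint(conn, t) != fp]

# Demo: back up, tamper, and watch the check fire.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER, name TEXT)")
src.executemany("INSERT INTO users VALUES (?, ?)", [(1, "a"), (2, "b")])
manifest = write_manifest(src, ["users"])

assert verify_restore(src, manifest) == []       # clean data matches
src.execute("DELETE FROM users WHERE id = 2")    # simulate silent data loss
assert verify_restore(src, manifest) == ["users"]  # mismatch detected
```

In the real setup the mismatch list would feed an alerting system rather than an assert.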


The point is that you could have a bug in the verification process that returns "all good" when it really isn't.


I have a SELECT of the most recent updated_at fields for a few tables emailed to me so I can eyeball the values
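A rough sketch of that freshness check, again with SQLite standing in for the real database; the table list and the mail hookup are placeholders:

```python
import sqlite3

def freshness_report(conn, tables):
    """Latest updated_at per table, as a text body you could email via cron."""
    lines = []
    for t in tables:
        (latest,) = conn.execute(f"SELECT MAX(updated_at) FROM {t}").fetchone()
        lines.append(f"{t}: last update {latest}")
    return "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
conn.execute("INSERT INTO orders VALUES (1, '2024-06-01T12:00:00')")
print(freshness_report(conn, ["orders"]))
# orders: last update 2024-06-01T12:00:00
```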


That's a neat idea, I might look at that, thank you.


My point is that there are sanity checks built into the process. It's not difficult. There doesn't need to be a human verifying things there. Is the row count increasing over the previous backup? Check. Is the checksum different? Check. Is it non-zero? Check. And on and on.
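Those sanity checks can be sketched as a comparison between two backup manifests. The manifest shape here (table name mapped to a row count and checksum) is my own invention for the sketch:

```python
def backup_problems(prev, curr):
    """Compare consecutive backup manifests; return human-readable complaints."""
    problems = []
    for table, (rows, checksum) in curr.items():
        if rows == 0:
            problems.append(f"{table}: zero rows in backup")
        if table in prev:
            prev_rows, prev_sum = prev[table]
            if rows < prev_rows:
                problems.append(f"{table}: row count fell from {prev_rows} to {rows}")
            if checksum == prev_sum and rows != prev_rows:
                problems.append(f"{table}: checksum unchanged although rows changed")
    return problems

# Demo: a growing table passes; a shrinking one raises complaints.
prev = {"users": (100, "abc")}
assert backup_problems(prev, {"users": (120, "def")}) == []
assert backup_problems(prev, {"users": (80, "abc")}) != []
```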

Stop making excuses and automate testing your backups.


So, when your check process doesn't report because it failed to run, how do you know? Do you have a monitor for your monitor? Does that monitor have a monitor?

There is no reason not to automate your backup tests, but there's also no reason not to eyeball that the check is actually working from time to time.


Yes, the restore process checks in daily; alarms are thrown if it doesn't check in.

MySQL is also pretty good about not starting if the data is corrupted for whatever reason. The process not starting is a pretty obvious alarm, too.

Checks are cheap. Better to automate them and invest in good monitoring.
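One common way to implement that daily check-in is a dead man's switch: the restore job touches a heartbeat file on success, and a separate job alarms if the file goes stale. A minimal sketch - the path and threshold are made up for illustration:

```python
import os
import tempfile
import time

HEARTBEAT_MAX_AGE = 26 * 3600  # daily job, plus a two-hour grace period

def record_heartbeat(path):
    """Called by the restore job after a successful run."""
    with open(path, "w") as f:
        f.write(str(time.time()))

def heartbeat_stale(path, max_age=HEARTBEAT_MAX_AGE):
    """True if the restore job hasn't checked in recently (or never has)."""
    if not os.path.exists(path):
        return True
    return time.time() - os.path.getmtime(path) > max_age

# Demo: a fresh heartbeat passes; a missing one alarms.
hb = os.path.join(tempfile.gettempdir(), "restore_heartbeat")
record_heartbeat(hb)
assert not heartbeat_stale(hb)
assert heartbeat_stale("/no/such/heartbeat")
```

The nice property is that the alarming side fires on absence of evidence, so a monitor that silently dies still gets caught by the other half.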



