Hacker News

At my startup (until our exit) I ran Postgres myself, with streaming backups, daily backup tests (automated restores and checks), offsite backups, and (slow) upgrades.

Data wasn't mission-critical, and customers could live with five minutes of downtime for upgrades once a year.

No problems for years. When we used hosted Mongo, we had more problems.

If I have the money, I'll use a managed database. But running Postgres yourself at a startup up to $10m ARR / ~1TB of data doesn't seem like a problem.




How do you test backup restoration on a daily basis?


A script creates a database instance from a container, restores the backup into it, runs some checks (e.g. table sizes above some threshold, the audit table contains data up to the backup point, etc.), and sends out an email confirming that everything looks OK.
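The checks could look something like this (a minimal Python sketch of the idea; the function name, tables, and thresholds are made up for illustration, not the actual script, and the values would come from queries against the restored instance):

```python
from datetime import datetime, timedelta

def backup_looks_ok(table_row_counts, latest_audit_entry, backup_taken_at,
                    min_rows=1000, max_audit_lag=timedelta(hours=1)):
    """Sanity-check values queried from a freshly restored backup.

    table_row_counts:   dict of table name -> row count in the restore
    latest_audit_entry: timestamp of the newest audit-table row
    backup_taken_at:    when the backup was created
    """
    # Every important table should hold at least a plausible number of rows;
    # a near-empty table usually means the dump or restore went wrong.
    if any(count < min_rows for count in table_row_counts.values()):
        return False
    # The audit table should contain data up to (shortly before) the backup
    # point, proving the backup is recent and complete.
    if backup_taken_at - latest_audit_entry > max_audit_lag:
        return False
    return True

# Example: a healthy restore passes, a suspiciously small table fails.
ok = backup_looks_ok({"users": 5000, "orders": 20000},
                     datetime(2024, 1, 1, 11, 50),
                     datetime(2024, 1, 1, 12, 0))
```

If all checks pass, the script sends the "everything looks OK" email; any failure pages someone instead.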


Unzipping and decrypting the backup also confirms that encryption and compression worked and that you didn't end up with a 0-byte backup file (because of permissions etc.).
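That file-level check alone catches a lot. A small sketch of it in Python (an illustration of the idea, not the commenter's script; the decryption step would happen before this with whatever tool you use):

```python
import gzip
import os

def verify_backup_file(path):
    """Check that a compressed backup file exists, is non-empty, and is
    a readable gzip stream.

    A zero-byte file (bad permissions, full disk) or a truncated archive
    is caught here, long before the backup is needed in an emergency.
    """
    if not os.path.exists(path) or os.path.getsize(path) == 0:
        return False
    try:
        # Reading the stream to the end also verifies the gzip CRC footer,
        # so silent truncation is detected.
        with gzip.open(path, "rb") as f:
            while f.read(1024 * 1024):
                pass
    except OSError:
        return False
    return True
```

Run against last night's backup from cron, and alert if it returns False.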

The same of course goes for snapshots and backups in the cloud. I have several clients who had backup problems because of misconfiguration in AWS/GCP.


Super. I've been thinking of doing something similar for my production DB systems but was struggling to come up with an efficient way to test restoration. This gives me a starting point.


You will be very happy when restoration testing fails. Better to find out in a test than, as some of my clients have, when restoring a backup in an emergency.



