
Dropping tables to see what happens or resetting DBs every hour is fine with a small dataset, but it becomes impractical when you work on a monolith that talks to a set of databases with a hundred-plus tables in total that takes 5 hours to restore.

As you point out, rebuilding small test datasets instead of just filtering the prod DB is an option, but those also need maintenance, and they take a hell of a lot of time to make sure all the relevant cases are covered.
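For what it's worth, by "rebuilding a small test dataset" I mean something like the seed script below (a minimal sketch, assuming Postgres and psycopg; the table, columns, and cases are made up for illustration). The maintenance cost is exactly this: every relevant case has to be enumerated and kept up to date by hand.

    import psycopg  # assumes psycopg 3

    # Hand-picked rows covering the cases we actually care about,
    # instead of a filtered copy of prod.
    CUSTOMERS = [
        (1, "active customer"),
        (2, "customer with unpaid invoices"),
        (3, "customer flagged for deletion"),
    ]

    with psycopg.connect("dbname=test_db") as conn:
        with conn.cursor() as cur:
            cur.execute("TRUNCATE customers CASCADE")
            cur.executemany(
                "INSERT INTO customers (id, label) VALUES (%s, %s)",
                CUSTOMERS,
            )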

Basically, trying to flee from the bulk and complexity tends to bring a different set of hurdles and missing parts that have to be paid for in time, maintenance, and bugs only discovered in prod.

PS: the test DB is still reset every day. The worst thing that happens is we need to do something else for a few hours until it's restored.
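Concretely, the daily reset is just a scheduled job along these lines (a rough sketch, assuming Postgres and pg_restore; the DB name and dump path are placeholders, not our actual setup):

    import subprocess

    # Drop and recreate the test database, then restore last night's dump.
    # Runs on a schedule; worst case, the team waits a few hours if it runs long.
    subprocess.run(["dropdb", "--if-exists", "test_db"], check=True)
    subprocess.run(["createdb", "test_db"], check=True)
    subprocess.run(
        ["pg_restore", "--jobs=4", "--dbname=test_db", "/backups/prod_nightly.dump"],
        check=True,
    )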



