If you're using AWS anyway, I think using their managed Postgres (RDS) could make this a lot easier operationally: no need for custom syncing/backup scripts, and it's easier to scale the storage up.
To the larger point, while I totally agree with the premise -- most apps will never need to scale beyond one large instance -- I'm not exactly sure what the actual tradeoff is. If you're writing a simple CRUD app, it's not really controversial that it shouldn't need any complex distributed machinery anyway; it's just a simple Python/Go app and a data store.
Most "fancy" things outside that single-process paradigm -- avoiding local disk in favor of S3/RDS/document stores, using containers for deployment, etc. -- usually have more to do with making the app operationally simpler and easy to recover from instance failures than scaling per se.
> operationally simpler and easy to recover from instance failures than scaling per se.
I'd say it's about architecturally making the app easier to perform this function itself, rather than relying on infrastructure to do it for you (and hoping that doing so will make the app "easier" to code).
For example, use an append-only data structure (like event sourcing) with snapshotting. Then your app's recovery process is just "restart".
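A minimal Python sketch of that idea: writes go only to an append-only event log, a snapshot periodically captures derived state, and "restart" recovery is just loading the latest snapshot and replaying the log tail. The file names, event shape, and `apply` reducer here are illustrative assumptions, not anything from the comment above.

```python
import json
import os

LOG_PATH = "events.log"        # append-only event journal
SNAP_PATH = "snapshot.json"    # periodic snapshot of derived state


def apply(state, event):
    # Example reducer: keep a running counter per key.
    key = event["key"]
    state[key] = state.get(key, 0) + event["delta"]
    return state


def record(event):
    # Appends are the only writes; nothing is mutated in place.
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(event) + "\n")


def snapshot(state, offset):
    # Persist current state plus how many log entries it covers.
    with open(SNAP_PATH, "w") as f:
        json.dump({"state": state, "offset": offset}, f)


def recover():
    # "Restart" recovery: load latest snapshot, replay the log tail.
    state, offset = {}, 0
    if os.path.exists(SNAP_PATH):
        with open(SNAP_PATH) as f:
            snap = json.load(f)
        state, offset = snap["state"], snap["offset"]
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH) as f:
            for i, line in enumerate(f):
                if i >= offset:
                    state = apply(state, json.loads(line))
    return state
```

The point is that the recovery path and the normal startup path are the same code, so instance failure needs no special handling beyond restarting the process somewhere the log and snapshot are reachable.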