
For this amount of data, I would use good old Postgres: partition the data by ingestion time, then just detach the old partitions when you need to archive them.
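Roughly what that looks like (the "events" table, partition names, and monthly granularity below are just placeholder assumptions):

    -- Parent table, range-partitioned on ingestion time.
    CREATE TABLE events (
        ingested_at timestamptz NOT NULL,
        payload     jsonb
    ) PARTITION BY RANGE (ingested_at);

    -- One partition per month; pg_partman or a cron job can create these ahead of time.
    CREATE TABLE events_2024_06 PARTITION OF events
        FOR VALUES FROM ('2024-06-01') TO ('2024-07-01');

    -- To archive, detach the old partition: it becomes a plain standalone table
    -- you can pg_dump and drop, without rewriting or scanning the live data.
    ALTER TABLE events DETACH PARTITION events_2024_06;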

For joining data from multiple databases, if the data is large, I would use something like Presto (https://prestosql.io/) to join and process it. But that's partly because we already have Presto clusters running.
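For reference, a cross-database join in Presto is just a query that mixes catalogs; the catalog, schema, and table names below are placeholders for whatever connectors you have configured:

    -- Each source database shows up as a catalog (catalog.schema.table),
    -- so joining across databases is ordinary SQL.
    SELECT o.order_id, o.total, u.email
    FROM postgresql.public.orders AS o
    JOIN mysql.crm.users AS u
      ON o.user_id = u.id
    WHERE o.created_at >= DATE '2024-06-01';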
