We use TimescaleDB with databases of between 1 and 100 million rows (small by some standards, but certainly not tiny) - I love it!
- we use Postgres as our main database, so being able to keep our time-series data in the same place is a big win
- perhaps because it's a Postgres extension, the learning curve is small
- it keeps time-range-constrained queries over our event data super fast, because it knows which chunks to search across (see the first sketch after this list)
- deleting old data (e.g. for a data retention policy) is near-instantaneous, as TimescaleDB just drops the chunks - and the physical files backing them - for the time range being deleted (second sketch below)
- it has some nice functions built in, like `time_bucket_gapfill`. Yes, you could write your own functions to do this, but it's nice to have maintained, tested functions available out of the box (third sketch below)
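
To make the chunk point concrete, here's a minimal sketch (the `events` table and its columns are hypothetical, not our actual schema): you create a plain Postgres table, convert it to a hypertable, and TimescaleDB partitions it into time-based chunks behind the scenes, so a query bounded on the time column only touches the relevant chunks.

```sql
-- Hypothetical events table; the schema is illustrative.
CREATE TABLE events (
    time      TIMESTAMPTZ NOT NULL,
    device_id TEXT        NOT NULL,
    value     DOUBLE PRECISION
);

-- Convert it into a hypertable partitioned by the time column.
SELECT create_hypertable('events', 'time');

-- Because the WHERE clause bounds the time column, the planner can
-- exclude chunks outside the range and scan only the recent ones.
SELECT device_id, avg(value)
FROM events
WHERE time >= now() - INTERVAL '1 day'
GROUP BY device_id;
```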
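Retention then looks roughly like this (same hypothetical table; the 90-day window is just an example). `drop_chunks` removes whole chunks rather than deleting row by row, which is why it's so fast:

```sql
-- One-off: drop all chunks whose data is older than 90 days.
SELECT drop_chunks('events', older_than => INTERVAL '90 days');

-- Or schedule it as a recurring background retention policy.
SELECT add_retention_policy('events', INTERVAL '90 days');
```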
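And a sketch of `time_bucket_gapfill` (the bucket size and time range are arbitrary here): it emits a row for every bucket in the queried range, and `locf()` carries the last observed value forward into otherwise-empty buckets, so charts don't have holes.

```sql
-- 15-minute buckets over a fixed window; gapfill needs the time range
-- to be bounded in the WHERE clause (or passed as start/finish args).
SELECT
    time_bucket_gapfill('15 minutes', time) AS bucket,
    device_id,
    locf(avg(value)) AS avg_value
FROM events
WHERE time >= '2024-01-01 00:00:00+00'
  AND time <  '2024-01-01 06:00:00+00'
GROUP BY bucket, device_id
ORDER BY bucket;
```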