
> This query takes about 50ms!

Would be useful to know how many rows you had in the table.




The table that we run this on has 220K rows with a size on disk of 1.8 GB. I mentioned in the post that we'd tried out this approach with 23 million rows, which was slow! From our (limited) testing, the speed holds up until around 1 to 2 million rows, at which point I'd suggest a dedicated solution.
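
For anyone who hasn't read the post, the general shape of this kind of setup is a tsvector column with a GIN index. A simplified sketch (table and column names here are illustrative, not the actual schema):

    -- hypothetical schema for illustration
    CREATE TABLE documents (
        id    bigserial PRIMARY KEY,
        body  text,
        tsv   tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED
    );

    -- GIN index so @@ matches don't need a sequential scan
    CREATE INDEX documents_tsv_idx ON documents USING GIN (tsv);

    -- the query shape: match, rank, take the top hits
    SELECT id, ts_rank(tsv, query) AS rank
    FROM documents, websearch_to_tsquery('english', 'search terms') AS query
    WHERE tsv @@ query
    ORDER BY rank DESC
    LIMIT 20;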


Thanks, that's useful to know. I thought that your issues with speed in the past were due to using a different approach to searching in PostgreSQL than the one you described in the post.

I wonder if there are any specific limitations to search performance in PostgreSQL stemming from its underlying data model and structures, or whether it could reach similar performance and scale as, say, Solr (or Lucene, for that matter) without too much of an overhaul. Just thinking single node here.


Have you looked into whether that 1-2M threshold still applies if the tables are partitioned? For example, in a multi-tenant application where a search might be constrained to a single user_id. I wonder if that would provide, at least temporarily, a way to get past that constraint.
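
Something along these lines (a sketch, untested, with made-up names), where hash-partitioning on user_id lets the planner prune to one partition per search:

    -- hypothetical multi-tenant layout: hash-partition on user_id
    CREATE TABLE documents (
        user_id bigint NOT NULL,
        id      bigserial,
        body    text,
        tsv     tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED,
        PRIMARY KEY (user_id, id)
    ) PARTITION BY HASH (user_id);

    CREATE TABLE documents_p0 PARTITION OF documents
        FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    -- ...and p1 through p3 likewise

    -- creating the index on the parent cascades to each partition
    CREATE INDEX documents_tsv_idx ON documents USING GIN (tsv);

    -- with user_id in the WHERE clause, the planner prunes to a single
    -- partition, so each search only scans that tenant's rows
    SELECT id
    FROM documents
    WHERE user_id = 42
      AND tsv @@ websearch_to_tsquery('english', 'search terms');

So each tenant's search would effectively run against a much smaller table, even if the total row count is well past the threshold.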



