To get optimum performance with PostgreSQL full text search, you need to create a column to store the tsvector values and put an index on it. Then you need to fill this column with the tsv values and create a trigger to update the field on INSERT and UPDATE.
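In concrete terms, that setup looks roughly like this (a sketch; the documents table, text column, and trigger name are just illustrative):

ALTER TABLE documents ADD COLUMN tsv tsvector;

UPDATE documents SET tsv = to_tsvector('english', text);

CREATE INDEX tsv_idx ON documents USING gin(tsv);

-- Built-in trigger function that recomputes tsv on INSERT and UPDATE
CREATE TRIGGER tsvectorupdate
BEFORE INSERT OR UPDATE ON documents
FOR EACH ROW
EXECUTE PROCEDURE tsvector_update_trigger(tsv, 'pg_catalog.english', text);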
I've been playing around with full text search in Postgres, and I took this sort of approach when starting out, but then realized I could just have the index be an expression.
So instead of (per the example)
CREATE INDEX tsv_idx ON documents USING gin(tsv);
doing something like
CREATE INDEX tsv_idx ON documents USING gin(to_tsvector('english', text));
Is there any reason you wouldn't do this? For multi-language you'd have to detect the language of the text, but there's no reason you couldn't parameterize that too.
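As a sketch of what that parameterisation could look like, assuming a hypothetical lang column of type regconfig holding each row's text search configuration (the two-argument to_tsvector(regconfig, text) is immutable, so it's allowed in an expression index):

ALTER TABLE documents ADD COLUMN lang regconfig NOT NULL DEFAULT 'english';

CREATE INDEX tsv_lang_idx ON documents USING gin(to_tsvector(lang, text));

-- The query must repeat the same expression for the index to be used
SELECT id
FROM documents
WHERE to_tsvector(lang, text) @@ plainto_tsquery('english', 'search query');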
You can precompute to_tsvector() in parallel ahead of time if you're storing it in a dedicated column. CREATE INDEX runs on a single thread, including the part where it evaluates to_tsvector() for each row. If you ever need to recreate the index, it'll be faster if the tsvector is in a dedicated column.
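For example, something like this (a sketch; splitting by id range so several sessions can each work on a disjoint chunk):

-- Precompute the tsvector into its dedicated column ahead of time;
-- run one of these per session over non-overlapping id ranges.
UPDATE documents
SET tsv = to_tsvector('english', text)
WHERE id BETWEEN 1 AND 1000000;

-- With the column already populated, the index build only has to read it
CREATE INDEX tsv_idx ON documents USING gin(tsv);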
I have a table with 30 million documents using pgsql's full text index. Creating the index takes ages, and search performance is generally very poor. The difference between creating the index with the precomputed column versus creating the index with the expression in the index itself (which is how I originally did it) was substantial.
No, there's no real reason why you would not use a functional index. The reason so many tutorials use a dedicated column is simply lack of up-to-date information, I think.
That is the way I do it also. I have used Lucene and Solr a lot in the past, but I now find Postgres text search to be more than adequate for my needs and it does make software development and deployment simpler.
You're not including the title from the JSON column or setting weights in your alternate version. I'm not sure if that can be included in the index or not, but that might be the reasoning.
If you want to preprocess your document, or aggregate different parts of it with different labels and weights, it can be helpful to store it in a separate column.
Of course you can always just index your preprocessing/aggregating function and call it every time you want to search, but depending on how expensive that is, it might be in your interest to do it upfront and make your searches a bit quicker.
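Something along these lines, say (hypothetical title and body columns, with the aggregate stored in the dedicated tsv column):

UPDATE documents
SET tsv = setweight(to_tsvector('english', coalesce(title, '')), 'A')
       || setweight(to_tsvector('english', coalesce(body, '')), 'B');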
> This query takes about 50ms! We experimented with returning the document text with results and found that if we returned the full text of documents, it added about 350ms to the query time which seems to be network overhead rather than a slower query. If we only returned 200 characters from the texts then it only added around 100ms.
The way you have written the query, it has to extract the title from all of the JSON documents before applying the LIMIT.
Perhaps do the work to get the identifiers, order by rank, and apply the limit... all in a subquery. So that only the outer query actually touches the JSON document and extracts the titles.
Thanks for the feedback! I tried out your suggestion by running the following:
SELECT id, meta->>'title' AS title, meta
FROM (
    SELECT id, meta
    FROM (
        SELECT id, meta, tsv
        FROM data_rows, plainto_tsquery('search query') AS q
        WHERE tsv @@ q
    ) AS t1
    ORDER BY ts_rank_cd(t1.tsv, plainto_tsquery('search query')) DESC
    LIMIT 5
) AS t2;
Please let me know if this isn't what you meant. The above did improve the speed but only a little. I think for our use case this approach wouldn't be entirely necessary but maybe if we had a larger data set then it'd be worth implementing.
I hate writing SQL blind, but more like this... note that you fetch only the identifiers with the inner query, and use them in the outer query to retrieve the documents.
SELECT d.id
,d.meta->>'title' as title
,d.meta
FROM (
-- Only select the identifier column
SELECT id
FROM data_rows
,plainto_tsquery('search query') AS q
WHERE tsv @@ q
ORDER BY ts_rank_cd(tsv, q) DESC
LIMIT 5
) AS ids
-- Join back onto the data_rows, this fetches the documents
INNER JOIN data_rows d ON d.id = ids.id;
You'll want to go through the explain plan, but treat identifier retrieval as a step by itself. Things like ts_headline (not shown) or meta->>'title' (shown) should only be performed for the few rows you are actually retrieving, and you should keep the data you pass around as small as possible at every point.
Note that you don't need to call plainto_tsquery twice, you can use the one you've already declared.
And I notice you are returning the whole JSON document; have you checked whether it's faster to extract the title here or later in whatever programming language you're using?
The giveaway clue in your post btw is this: "We experimented with returning the document text with results and found that if we returned the full text of documents, it added about 350ms to the query time which seems to be network overhead rather than a slower query.".
As soon as you notice a linear slowdown when you're only ever returning 5 rows, then internally the query must be doing something that relates to all rows.
PostgreSQL fulltext scales nicely; I use it on a platform driving over 300 web sites with tens of millions of comments. I originally used it thinking it would buy me time before I had to add a dedicated search engine, and it helps now as I store permissions knowledge in the database too... but the truth is, it just hasn't been an issue yet. I'm not needing to consider anything else as it's performing great. The slowest part of my query isn't fulltext search, it's my permissions stuff (because I've gone down the rabbit hole of users, groups, roles, etc. and this is calculated for all content), but still the 95th percentile for searches is typically under 10ms.
Just wanted to say, I implemented this change and it did really speed things up! Thanks a bunch :) This is the query I went with:
SELECT d.id, d.meta FROM (
SELECT id FROM data_rows, to_tsquery('query') AS q
WHERE tsv @@ q ORDER BY ts_rank_cd(tsv, q) DESC LIMIT 5
) AS ids INNER JOIN data_rows d ON d.id = ids.id;
Sure, if you only need full text search there's no need to use Solr or ElasticSearch.
I've never tried with PostgreSql, but I noticed the same drop in performance in MySql at the 1-2 million document count, although full text search queries in MySql are slower (from what I remember; I haven't tested in a couple of years).
The power of Solr and other tools out there is all the other stuff you can do, like synonyms processing, faceting, range queries, etc, etc... as well as give you more flexibility on what data you load and how you process it. But if you don't need it, there's no reason to use a different tool.
There's no question that Solr and ElasticSearch have more capability than PostgreSQL alone, but you'd be surprised at what Postgres can do!
It's certainly more than just full text search. Check out this article [1] that talks about Postgres' support for stemming, ranking / boost, multiple languages, fuzzy search, and accent support.
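For example, fuzzy matching and accent handling come from the pg_trgm and unaccent extensions rather than the core tsvector machinery; a sketch, assuming those extensions are available and a title column exists:

CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS unaccent;

-- unaccent('Hôtel') -> 'Hotel'; % is the trigram similarity operator
SELECT title, similarity(title, 'serch qury') AS score
FROM documents
WHERE title % 'serch qury'
ORDER BY score DESC
LIMIT 10;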
You are right, I had assumed that its capabilities were similar to MySql (don't ask me why).
Still, I would describe most of those features as 'basic', as you really need some support for all that to use full-text search in real life, and I wonder how easy it is to add/remove stopwords or make other tweaks like that.
Anyway, thank you very much for the link, very interesting! I've been looking at PostGIS as an option for a project I'm working on and using PostgreSql as the main db is getting more and more points... Thanks!
"The accuracy of PostgreSQL searches is not the best. In the examples that follow I’ve explained how to weight titles over text, but we found a few cases where typing the exact title of a document wouldn’t return that document as the first result."
Isn't this a concern as the main objective of search is to provide accurate results?
Sometimes, Accurate Enough is all you need. It's definitely a concern, but there are other issues to balance in there -- weight of infrastructure from adding new components, etc.
Text-search ranking is customisable[0] and results vary wildly based on the ranking behaviours selected and the weights assigned to different labels. It takes a bit of fine tuning and with the wrong parameters for your data set, you can definitely get results that seem unintuitive.
This should be a concern to the author, but there's no reason to think the search ranking is not working as documented.
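For example, ts_rank_cd takes an optional weight array mapping the D, C, B, A labels, so the same index can produce quite different orderings depending on the weights you pick (a sketch against the data_rows table from the article):

-- The array maps labels {D, C, B, A}; boosting A-labelled lexemes
-- (e.g. titles) reorders results without touching the index.
SELECT id,
       ts_rank_cd('{0.1, 0.2, 0.4, 1.0}', tsv, q) AS rank
FROM data_rows, plainto_tsquery('search query') AS q
WHERE tsv @@ q
ORDER BY rank DESC
LIMIT 5;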
>Isn't this a concern as the main objective of search is to provide accurate results?
If it provides 99% of the results and misses some documents because of some weird bug or encoding issue, then it could very well be good enough for their purposes. Heck, even 90% could be good depending on what they do (e.g. serve articles in an online site).
For other uses, like police or medical records obviously they'd need 100% results.
I know "good enough" is probably not just a good idea with a startup, it's possibly mandatory since there's only so much time and money. But as a user/consumer/customer/target demographic I can't begin to describe how much I disdain knowing that something exists on a site but being unable to find it using search, particularly when I know the exact title. Reddit's search several years ago was quite bad and left a sour taste in my mouth.
I'm already cringing about people in this thread talking about "language detection" and "stemming" as if there are good, easy solutions to them.
Take your favorite language detector, like cld2. Apply it to some real-world language, like random posts on Twitter. Did it detect the languages correctly? Welp, there goes that idea.
(Tweets are too short, you say? Tough. Search queries are shorter. You probably aren't lucky enough for your domain's text to be complete articles from the Wall Street Journal, which is what the typical NLP algorithm was trained on.)
Stemming will always be difficult and subtle. It's useful but it isn't even linguistically well-defined, so you'll have to tweak it a lot. If stemming seems easy, you haven't looked at where it goes wrong for your use case.
Mine covers tags and other aspects of faceted search as well.
We were able to build a really simple search facility for our marketplace using Postgres in a very short period of time and for cheap. It was incredibly helpful for a small startup short on people and resources.
In this case we wanted to ship the marketplace as quickly as possible (we had some related promotions and partnerships that required it in a specific timeframe) with a tiny team.
Solr/Lucene is the right answer for a full-blown solution, but it incurs the cost of having to maintain a second service besides your webapp, as well as learning and gaining expertise in a second system. All things that can and should be done assuming you see growth, but as a quick solution that would work for several months, you can knock out a reasonable search in a couple of weeks with tech your app is already using.
Naive question from a person with little DB experience: Why store the metadata in JSON inside the database instead of in multiple columns in the same table? Is this not the point of using a database?
It depends on what those columns are. For this case multiple columns are probably better, but if you're trying to search over multiple document types a JSON column could be more flexible.
If I'm not mistaken: altering a table can be costly and may cause performance issues, especially if you have millions of records. Rows will most likely be locked while they get updated, and during this process other transactions have to wait until the related rows become "free".
The table that we run this on has 220K rows with a size on disk of 1.8 GB. I mentioned in the post that we'd tried out this approach with 23 million rows which was slow! From our (limited) testing the speed holds up until around 1 to 2 million rows, at which point I'd suggest a dedicated solution.
Thanks, that's useful to know. I thought that your issues with speed in the past were due to using a different approach to searching in PostgreSQL than the one you described in the post.
I wonder if there are any specific limitations to search performance in PostgreSQL stemming from its underlying data model and structures, or whether it could possibly reach similar performance and scale as, say, Solr (or Lucene for that matter) without too much of an overhaul. Just thinking single node here.
Have you looked into whether that 1-2M threshold still applies if the tables are partitioned? Such as a multi-tenant application where a search might be constrained to a single user_id. I wonder if that would provide, at least temporarily, a way to get past that constraint.
PostgreSQL is coming up a lot and making waves; I never thought much about it until recently, but it turns out to be more reliable than MongoDB. Interesting phenomenon.
While Postgres can do full text search and it may help you in your case, you can set up Elasticsearch for this sort of quantity of data in like an afternoon, if that.
Also, these 'advanced features' people are throwing around are really trivial; they're the sort of things that would crop up in lecture 2 of an information retrieval course. Good search is really reliant on proper weighting etc (not like 'we weight the title more highly than the body' but more 'foogle is a far more important word than bar, if we have both in a query, we care about foogle more'). This generally requires a tonne of experimentation; information retrieval is not easy and it's very subtle.
Postgres seems to try and make everyone design their own ranking function, which is a recipe for disaster. I haven't used Elasticsearch for a while, but I'd be surprised if they didn't have a bunch of relatively good presets.
Postgres is good for loads of things, but Elasticsearch or Solr are state-of-the-art and Postgres isn't (so far as I know); if you aren't just hacking around then please consider the extra time to set up Elasticsearch and it'll make your life easier.
Looking at elasticsearch, the first thing I see is a pom.xml. Seems sketchy. ;)
Seriously though, none of my servers have a JVM installed. Most of them have nginx and ruby on them. And all of them talk to a handful of postgres servers. If a client wants to add some kind of full-text search to an app, this is a great approach to doing it in a cost-effective way. I don't particularly want to have to add more infrastructure just to be able to search through data. That's an additional monthly cost, along with a new environment to monitor for critical security updates etc.
If someone needs high performance search, sure, I'd cost out elasticsearch or solr. If they just need something basic, postgres full-text looks like the perfect low-effort solution.