Tantivy – full-text search engine library inspired by Apache Lucene (github.com/quickwit-oss)
333 points by kaathewise 41 days ago | 58 comments



Major props to the authors of this library. I re-built https://progscrape.com [1] on top of it last year, replacing an ancient Python2 AppEngine codebase that I had neglected for a while. It's a great library and insanely fast, as in indexing the entire library of 1M stories on a Raspberry Pi in seconds.

I'm able to host a service on a Pi at home with full-text search and a regular peak load of a few rps (not much, admittedly), with a CPU that barely spikes above a few percent. I've load tested searches on the Pi up to ~100rps and it held up. I keep thinking I should write up my experiences with it. It was pretty much a drop-in, super-useful library and the team was very responsive with bug reports, of which there were very few.

If you want to see how responsive the search is on such a small device, try clicking the labels on each story -- it's virtually instantaneous to query, and this is hitting up to 10 years * 12 months of search shards! https://progscrape.com/?search=javascript

I'd recommend looking at it over Lucene for modern projects. I am a big fan, as you might be able to tell. Given how well it scales on a tiny little ARM64, I'd wager your experiences on bigger iron will be even more fantastic.

[1] https://github.com/progscrape/progscrape


It is a very nice library. I'm using it for a very-much-work-in-progress CLI tool that does incremental email backups from providers that support JMAP.

I wanted users to be able to search their backups, and since I'm using Rust, Tantivy looked like just the right thing for the job. Indexing an email happens so fast that I didn't bother moving the work to a separate thread, and searching across thousands of emails seems to be no problem.

If anyone wants search in their Rust application, they should take a look at Tantivy.


Tiny bug report: https://progscrape.com/?search=grep shows "Error: PersistError(UnexpectedError("Storage fetch panicked"))"


It looks like there was a bug with certain search queries that wedged a mutex because they failed to parse on my end. Deploying a fix now. Thanks!


Thanks for that! A couple of days ago I used meilisearch for a quick proof of concept, but I'll check out tantivy again via your repo.

I basically just need a fulltext search.


If you just need full-text search and you're already using Postgres, you can get quite far with its own primitives:

https://www.postgresql.org/docs/current/textsearch.html

https://www.crunchydata.com/blog/postgres-full-text-search-a...


AFAIK, PostgreSQL doesn't provide a way to get the IDF of a term, which makes its ranking function pretty limited. tf-idf (and its variants, like Okapi BM25) is kinda table stakes for an information retrieval system IMO.

I'm not saying PostgreSQL's functionality is useless, but if you need ranking based on the relative frequency of a term in a corpus, then I don't believe PostgreSQL can handle that unless something has changed in the last few years. Usually the reason to use something like Lucene or Tantivy is precisely for its ranking support that incorporates inverse document frequency.
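For reference, this is the textbook Okapi BM25 score for a single query term, sketched in Rust. This is the standard formula, not Tantivy's or PostgreSQL's actual implementation, and k1/b are the conventional default constants:

```rust
// Sketch of the standard Okapi BM25 formula for one query term in one
// document. k1 = 1.2 and b = 0.75 are the usual textbook defaults.
fn bm25_score(tf: f64, doc_len: f64, avg_doc_len: f64, num_docs: f64, doc_freq: f64) -> f64 {
    let (k1, b) = (1.2, 0.75);
    // IDF: terms that appear in few documents get a higher weight. This
    // corpus-level statistic is exactly what the ranking needs and what
    // plain PostgreSQL FTS doesn't expose.
    let idf = (1.0 + (num_docs - doc_freq + 0.5) / (doc_freq + 0.5)).ln();
    // TF component: saturates as tf grows (controlled by k1) and
    // penalizes documents longer than the corpus average (controlled by b).
    let tf_part = tf * (k1 + 1.0) / (tf + k1 * (1.0 - b + b * doc_len / avg_doc_len));
    idf * tf_part
}
```

The IDF factor is why a match on a rare term outranks the same number of matches on a common one, which term-frequency-only ranking can't do.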


Postgres's FTS is actually quite solid! You can get very far with just the built-in tsvector. The ranking could be improved, though, which was one of the reasons for creating pg_search in the first place: https://github.com/paradedb/paradedb/tree/dev/pg_search (disclaimer: I work on pg_search @ ParadeDB)


Okay, but I didn't say it wasn't solid. I just said its ranking wasn't great because it lacks IDFs. It seems like we must be in violent agreement, given that you work on something that must be adding IDFs to PostgreSQL FTS. :P


Recently found Tantivy inside ParadeDB (a Postgres extension aiming to replace Elastic):

https://github.com/paradedb/paradedb/blob/dev/pg_search/Carg...

after listening to

Extending Postgres for High Performance Analytics (with Philippe Noël) https://www.youtube.com/watch?v=NbOAEJrsbaM

And inside of the main thing, Quickwit (logs, traces, and soon metrics): https://github.com/quickwit-oss/quickwit

Had a surprisingly good experience with the combined power of Quickwit and ClickHouse for a multilingual search pet project. Finally something usable for Chinese, Japanese, and Korean.

https://quickwit.io/docs/guides/add-full-text-search-to-your...

to_tsvector in PG never worked well for my use cases

SELECT * FROM dump WHERE to_tsvector('english'::regconfig, hh_fullname) @@ to_tsquery('english'::regconfig, 'query');

I wish them success. I'll automatically upvote any post with Tantivy as a keyword.


Thank you so much for sharing!!!


That's a cool design pattern: combining a URL/REST-based index with a search query done entirely within SQL. You can do the same thing in Postgres with an FDW.


I recently deployed Quickwit (based on Tantivy, from the same team) in production to index a few billion objects and have been very pleased with it. Indexing rates are fantastic. Query latency is competitive.

Perhaps most importantly, separation of compute and storage has proven invaluable. Being able to spin up a new search service over a few billion objects in object storage (complete with complex aggregations) without having to pay for long-running beefy servers has enabled some new use cases that otherwise would have been quite expensive. If/when the use case justifies beefy servers, Quickwit also provides an option to improve performance by caching data on each server.

Huge bonus: the team is very responsive and helpful on Discord.


Thank you @tyler!!!


Another resource is the trigram search index (in Go) used by etsy/hound [0], based on an article (and code) from Russ Cox: Regular Expression Matching with a Trigram Index [1].

[0] https://github.com/hound-search/hound

[1] http://swtch.com/~rsc/regexp/regexp4.html

There are different use cases for alternatives to Lucene, depending on your needs.
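The core idea from Cox's article can be sketched in a few lines: index every three-character substring, then answer a literal query by intersecting the posting sets of the query's trigrams. This is an ASCII-only toy; hound and Cox's codesearch additionally translate regexes into trigram queries and verify candidates with a real match, neither of which this sketch does:

```rust
use std::collections::{HashMap, HashSet};

// Extract all 3-byte substrings (ASCII-only: byte slicing would panic
// mid-codepoint on multibyte UTF-8 input).
fn trigrams(text: &str) -> HashSet<&str> {
    (0..text.len().saturating_sub(2)).map(|i| &text[i..i + 3]).collect()
}

// Posting lists: trigram -> set of document ids containing it.
fn build_index<'a>(docs: &[(usize, &'a str)]) -> HashMap<&'a str, HashSet<usize>> {
    let mut index: HashMap<&'a str, HashSet<usize>> = HashMap::new();
    for &(id, text) in docs {
        for tri in trigrams(text) {
            index.entry(tri).or_default().insert(id);
        }
    }
    index
}

// Candidate documents for a literal query: those containing every trigram
// of the query. A real engine would then verify the actual match.
fn candidates(index: &HashMap<&str, HashSet<usize>>, query: &str, all_ids: &HashSet<usize>) -> HashSet<usize> {
    let mut result = all_ids.clone();
    for tri in trigrams(query) {
        match index.get(tri) {
            Some(ids) => result = result.intersection(ids).copied().collect(),
            None => return HashSet::new(),
        }
    }
    result
}
```

The trick is that trigram posting lists are cheap to intersect, so even regex queries can be narrowed to a small candidate set before running the expensive matcher.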


Beware, you still cannot add/remove fields: https://github.com/quickwit-oss/tantivy/issues/470

The only way to add fields is to reindex all data into a different search index.


One workaround is to use the JSON field; see the docs: https://github.com/quickwit-oss/tantivy/blob/main/doc/src/js...


I was searching for a Meilisearch alternative (which sends out telemetry by default) and found Tantivy. It's more of a search engine builder, but the setup looks pretty simple [0].

[0]: https://github.com/quickwit-oss/tantivy-cli


QuickWit also sends telemetry by default: https://quickwit.io/docs/telemetry


Hm, I am interested, but I would love to use it as a rust lib and just have rust types instead of some json config...

The Java SDK for Meilisearch was also nice; same thing: no need for a CLI or manual configuration. I just pointed it at a DB entity and indexed whole tables...

Would love that for tantivy


> Hm, I am interested, but I would love to use it as a rust lib and just have rust types instead of some json config...

Yes, that's how you use Tantivy normally; not sure which JSON config you mean.

tantivy-cli is more of a showcase; https://github.com/quickwit-oss/tantivy is the actual project.


Yes, and there is https://tantivy-search.github.io/examples/basic_search.html

But instead of this, I would prefer some way to just hand it JSON and have it index all the fields...

For comparison, this is my Meilisearch SDK code:

    fun createCustomers() {
        val client = Client(Config("http://localhost:7700", "password"))
        val index = client.index("customers")
        // Serialize all customers to a JSON array inside a DB transaction.
        val customersJson = transaction {
            val entities = Customer.all()
            val dtos = entities.map { CustomerJson.from(it) }
            Json.encodeToString(ListSerializer(CustomerJson.serializer()), dtos)
        }
        // "id" is the primary-key field Meilisearch uses to identify documents.
        index.addDocuments(customersJson, "id")
    }


You can just put everything in a JSON field in tantivy and set it to INDEXED and FAST


Hm, I need to read up on the trade-offs of going this route.

Thanks!


That's a petty objection to a usable interactive search tool when it's easy to opt out by adding a single command-line argument.


OP is entitled to make political choices when selecting software.

Some of us have specific principles of which things like opt-out telemetry might run afoul.

OP will choose their software, I choose mine and you choose yours; none of us need to call each other petty or otherwise cast such negative judgement; a free market is a free market.


Irrational white-knighting rather than principled discussion doesn't add value here.


Suggesting you should be less judgemental is not white-knighting, nor is it irrational. Sorry bud, but not everyone thinks the way you do; different people have different principles.

Feel free to explain how either of the two comments of yours I've replied to represent principled discussion or added value, because I'm not seeing it.


It's a minor complaint, but I'm also evaluating it for a minor project. I just don't like the fact that I can forget to add a flag once and, oh, now I'm sending telemetry on my personal medical documents.


Meilisearch only sends anonymized telemetry events. We only send API endpoint usage; nothing like raw documents goes over the wire. You can look at the exhaustive list of all collected data on our website [1].

[1]: https://www.meilisearch.com/docs/learn/what_is_meilisearch/t...


Also, Meilisearch is more comparable to Quickwit, their distributed offering, but Quickwit is AGPL.


They serve quite different use cases.

Quickwit was built to handle extremely large data volumes; you can ingest and search TBs and PBs of logs.

Meilisearch's indexing doesn't scale the same way: it gets slower the more data you have. E.g., I failed to ingest 7GB of data.


Hey PSeitz, Meilisearch CEO here. Sorry to hear that you failed to index a low volume of data. When did you last try Meilisearch? We have made significant improvements in the indexing speed. We have a customer with hundreds of gigabytes of raw data on our cloud, and it scales amazingly well. https://x.com/Kerollmops/status/1772575242885484864


Frankly, I'm okay with Meilisearch for instant search because y'all are clear about analytics choices, offer understandable FOSS Rust, and have a non-AGPL license. If/when we make some money, I'm in favor of $upporting and consulting on the tools we use, out of self-interest in keeping them alive.


Tantivy is also used in an interesting vector database product called LanceDB (https://lancedb.github.io/lancedb/fts/) to provide full-text search capabilities. Last time I looked, it was only available through the Python bindings, though I know they're looking to expose it natively through the Rust bindings to support other platforms.


I started working on a personal project a few years ago, after being insanely frustrated with the resource hog that is Elasticsearch. That's coming from someone whose personal computer has more resources than what a number of generous startups allocate for their product. I opted for Tantivy for two reasons: one was my desire to do the whole thing in Rust, and the second was Tantivy itself: performance is 10/10, the documentation is second to none, and the library is as ergonomic as they get. Sadly the project was way too big a bite for a single guy to handle in his spare time, so I abandoned it. Regardless, Tantivy is absolutely awesome.


I've been following Tantivy for a little while. The grit the founders have, and the performance Tantivy has achieved lately, are both impressive.

Mad props to all the team! I'm a firm believer they will succeed on their quest!


As someone who's used Lucene and Solr extensively, my biggest wishlist item has been support for upgrades. Typically Lucene (and Solr, and ES) indexes cannot be upgraded to new versions (it is possible in some cases, but let's ignore that for convenience). For many large projects, reindexing is a very expensive (and sometimes impossible) ordeal.

There are cases where this will probably never be possible (fields with lossy indexing where the datatype's indexing algorithm changed), but in many cases all the information is there, and it would be really nice if such indexes could be identified and upgraded.


Tantivy is great! I was using Postgres FTS with trigrams to index a few hundred thousand address strings for a project of mine [0], but this didn't scale as well as I'd hoped with a couple million addresses. I replaced it with the tantivy CLI [1] and it works a charm (millisecond searches on a single-core VM).

[0]: https://wynds.com.au

[1]: https://github.com/quickwit-oss/tantivy-cli


Did you create an index on the tsvector?


adding to the chorus here - this is great tech. we use it internally at convex for implementing OLTP full text search.

Beyond its runtime characteristics, the codebase is well organized and a great resource for learning about information retrieval.


This is nice. I used Solr for a while and it worked well, but I hated the Java underneath it, and some aspects of it seemed needlessly slow. Still, I think this is a 20th-century style of search engine, and we need more modern approaches. Especially those of us with datasets small compared to the internet search behemoths can probably take an efficiency hit to get more useful results.


Why did you "hate the Java underneath it"?


because it wouldn't let their power level reach 9000


What I really want is to be able to index documents in multiple languages. Not all my users use the same language, and I don't want their documents and queries to assume English (for stop words, stemming, etc.). This is a limitation of most search libraries at this point.

You have a big list of separate libraries providing support for a variety of languages? Great. Unfortunately, that doesn't help me build a real multi-language app. Doing that work right now, with multiple indexes and query routing, seems very difficult.


Eagerly awaiting the day someone can figure out a tantivy extension to SQLite. That would be the best of all worlds…


I would love it if Tantivy had a single-file format, e.g. a .tantivy extension, so you could drag it into a notebook like you can with .sqlite files.


This would be cool to compile to wasm and ship to the browser. Seems like it would give a static site super fast search powers.


I'm using https://stork-search.net for my static website's search, but it's no longer maintained. So yeah, Tantivy would be a great candidate to replace it! :)


Cheesy logo with a horse

- Their website :)


But why not just use a vector database like pgvector?


In practice, a combination of full-text and vector search often performs better than either one alone. It's called hybrid search. Here's an article that talks a bit about this: https://opster.com/guides/opensearch/opensearch-machine-lear...

Often you take the results from both vector search and lexical search and merge them through algorithms like Reciprocal Rank Fusion.
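The fusion step itself is tiny. A minimal sketch of Reciprocal Rank Fusion, using k = 60, the constant from the original RRF paper:

```rust
use std::collections::HashMap;

// Reciprocal Rank Fusion: each input ranking contributes 1 / (k + rank)
// to a document's score, and the merged list is sorted by total score.
// Documents ranked highly by both lexical and vector search float to
// the top, without needing to compare the two engines' raw scores.
fn rrf_merge<'a>(rankings: &[Vec<&'a str>], k: f64) -> Vec<&'a str> {
    let mut scores: HashMap<&'a str, f64> = HashMap::new();
    for ranking in rankings {
        for (i, &doc) in ranking.iter().enumerate() {
            // Ranks are 1-based: the top hit of each list scores 1/(k+1).
            *scores.entry(doc).or_insert(0.0) += 1.0 / (k + (i + 1) as f64);
        }
    }
    let mut merged: Vec<(&'a str, f64)> = scores.into_iter().collect();
    merged.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    merged.into_iter().map(|(doc, _)| doc).collect()
}
```

A nice property of RRF is that it only looks at ranks, so the lexical BM25 scores and the vector cosine similarities never need to be put on a common scale.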


You can think of a full-text index as being like a vector database that's highly specialized and optimized for the use-case where your documents and queries are both represented as "bags of words", i.e. very high-dimensional and very sparse.

Which works great when you want to retrieve documents that actually contain the specific keywords in your search query, as opposed to using embeddings to find something roughly in the same semantic ballpark.


Check out https://github.com/infiniflow/infinity, which combines vector search and full-text search with extremely fast search performance.


Infinity looks interesting, but I don't see any mention of support for clustering.


Infinity supports HNSW vector index.


Vector databases are good for documents, but if you have a fact database or some other more succinct information store, retrieval from them is quite slow compared to trigram/full-text search, and often performs worse too.


Because it’s a full text search engine, and not a text embedding? Different query types, requirements, indexing methods, etc.



