
Yes, especially if you have records that don't even change often.

If you have 300,000 items in an index that you want to sort in 4 ways and want to update e.g. the price daily, you have already consumed 36 million of the 50 million operations included in the biggest non-enterprise plan.
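
Spelling out that math (a trivial sketch, with the numbers taken straight from the scenario above):

    const records = 300_000;   // items in the index
    const sortOrders = 4;      // sorting 4 ways means 4 indices with identical data
    const dailyUpdates = 30;   // one full price update per day, over a month
    console.log(records * sortOrders * dailyUpdates); // 36,000,000 operations/month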

Just by testing and tweaking the index every other day, we already use up to 500,000 operations.

But then again, setting up search infrastructure in different countries and syncing it in realtime also comes at a hefty price. So we will stick with Algolia for now: the speed is breathtaking, and we will never be able to achieve 20ms responses with e.g. an Elasticsearch cluster.




(N.B.: I am an engineer at Algolia.) Algolia's engine computes relevance at indexing time by design, which allows us to deliver optimal search performance at query time. As a result, each new sort order - by price, by name, by date added - requires its own index containing identical data.

To make this easy to implement, we provide a way to create index replicas, read-only indices that can have different settings from the master index.

When using replicas, every record added to a master index is also added to its replicas; the same goes for deletion and update operations.

Indexing operations propagated to replicas are not billed.
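
To make that concrete, here is a minimal sketch of a sort-by-price replica using the JavaScript client; the app ID, API key and index names are placeholders:

    import algoliasearch from 'algoliasearch';

    const client = algoliasearch('YOUR_APP_ID', 'YOUR_ADMIN_API_KEY');
    const products = client.initIndex('products');

    // Declare replicas on the primary index; every record written to products
    // is copied to them automatically, and those copies are not billed.
    await products.setSettings({
      replicas: ['products_price_asc', 'products_name_asc'],
    });

    // Each replica keeps its own settings, so it can sort the same data differently:
    // prepend the sort criterion to the ranking formula on the replica.
    const byPriceAsc = client.initIndex('products_price_asc');
    await byPriceAsc.setSettings({
      ranking: ['asc(price)', 'typo', 'geo', 'words', 'filters',
                'proximity', 'attribute', 'exact', 'custom'],
    });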

By using replicas, you can adjust your calculation by removing the factor of four you included for the extra indices, meaning 300K * 30 days = 9 million operations/month. This assumes you update the entire index daily; you could instead update only the prices that changed, which would further reduce the number of operations.
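
Continuing the sketch above, updating only the prices that changed could look like this (the objectIDs and prices are made up):

    // Only the changed attributes are sent; each record counts as one operation
    // on the primary index, and the replicas pick up the change automatically.
    await products.partialUpdateObjects([
      { objectID: 'sku-123', price: 19.99 },
      { objectID: 'sku-456', price: 4.5 },
    ]);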


> we will never be able to achieve 20ms responses with eg an Elasticsearch cluster

Why not? Assuming good hardware, why isn't that possible?


IME, I haven't been able to replicate Algolia's search responsiveness with ElasticSearch, even with good hardware. I don't think ES/Lucene was ever designed for that use case. IIRC, Algolia was designed to perform well even on mobile phones. I wouldn't dream of getting Lucene to run performantly on a mobile phone.

I'd love to see if someone has done any of the "realtime" Algolia demos backed by ElasticSearch.

In any case, ES excels at very different use cases - I've only seen Algolia provide "basic" search.


Algolia has done some terrific work on search latency (and written about it which is awesome https://stories.algolia.com/how-algolia-reduces-latency-for-...).

I think ES can get there, but it depends a lot on what hardware you deploy (SSDs!), how you build your index, and whether you can geographically distribute your search engine close to your users.

We have one ES cluster handling hundreds of queries per second that gives a 9ms median response time and a 99th percentile around 160ms. Another cluster with 100x more data gets 20-25ms median response times and a 99th percentile of 360ms.

Both of these figures are just the ES response time; there is additional overhead in responding to an API request, and then you also get into where your data centers are located relative to the end users.
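
If you want to see that split yourself, the Elasticsearch search response reports its own query time in the took field (milliseconds), separate from the full round trip. A rough sketch against the plain REST API; the host, index and query are placeholders:

    const start = Date.now();
    const res = await fetch('http://localhost:9200/products/_search', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: { match: { title: 'phone case' } } }),
    });
    const result = await res.json();
    // "took" is the time ES spent executing the query, excluding network and API overhead.
    console.log(`ES took ${result.took}ms, full round trip ${Date.now() - start}ms`);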

More (slightly out of date) background on our config: https://data.blog/2016/05/03/state-of-wordpress-com-elastics...


Totally agree with you. Algolia's speed is pretty amazing, but its price is pretty hefty as well. We ended up switching over to Elasticsearch, which is much cheaper and more flexible in certain ways.



