Only indirectly. A lot of popular models used for generating vectors are nowhere near as smart as LLMs. Also, the vectors themselves are not machine learning models; they are just lists of numbers intended for comparison with other lists of numbers, typically using a similarity measure like cosine similarity.
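To make that concrete, here is a minimal sketch of comparing two such lists of numbers with cosine similarity (plain Python, no embedding model involved; the vectors here are made up):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|), ranges from -1 to 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # -> 0.0
```

A real system would do this against thousands or millions of stored vectors, usually via an approximate nearest-neighbor index rather than a brute-force loop.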
Vector search embeddings are only as good as the models you use, the content you have, and the questions you ask.
This is a bit of a pitfall when you use them for search, especially if you have mobile users, because most of them are not going to thumb full sentences into a search box. I.e. the queries they type are going to be a few letters or words at best, without much context, and users will still expect good results. Vector search is not great for those types of use cases because there just isn't a lot of semantics in such short queries. Sometimes all you need is a simple prefix search.
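For those few-letter queries, the "simple prefix search" above can be as little as a binary search over a sorted list of terms. A minimal sketch (the titles are made up; a real engine would typically use a trie or FST instead):

```python
import bisect

# Hypothetical sorted term index.
titles = sorted(["apple", "application", "apply", "banana", "band"])

def prefix_search(prefix, items):
    # In a sorted list, all strings sharing a prefix form one contiguous run.
    lo = bisect.bisect_left(items, prefix)
    hi = bisect.bisect_right(items, prefix + "\uffff")
    return items[lo:hi]

print(prefix_search("app", titles))  # -> ['apple', 'application', 'apply']
```

No embeddings, no model, and it behaves predictably on a two-character query where a vector model has nothing to work with.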
Tbh I think it is very funny. I sent it to a few friends because of that. I was chased by a cursor for a minute and then did the same to them. Childish, but funny.
It only works well when the site is busy. The frontpage of HN makes it work very very well.
I had a discussion at work today about completely replacing the old search engine with vector embeddings because they work so well.
I think Google needs to be very afraid in the coming few years, because this use of AI is relatively cheap to run and simple to deploy, and the models are small enough that you can build one customized to your personal ranking of several thousand pieces of text.
1) They work really well, but not in all cases; most likely the better approach is a hybrid of the two. OpenSearch/Elasticsearch already include hybrid approaches.
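One common hybrid scheme is to normalize the keyword (e.g. BM25) scores and the vector scores separately, then blend them with a weight. A minimal sketch with made-up scores (the doc names, score values, and `alpha` weight are all illustrative, not any particular engine's defaults):

```python
def min_max_normalize(scores):
    # Rescale a dict of scores into [0, 1] so the two systems are comparable.
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def hybrid_rank(keyword_scores, vector_scores, alpha=0.5):
    # alpha weights the keyword side; (1 - alpha) weights the vector side.
    kw = min_max_normalize(keyword_scores)
    vs = min_max_normalize(vector_scores)
    docs = set(kw) | set(vs)
    blended = {d: alpha * kw.get(d, 0.0) + (1 - alpha) * vs.get(d, 0.0)
               for d in docs}
    return sorted(blended.items(), key=lambda kv: kv[1], reverse=True)

keyword_scores = {"doc1": 12.0, "doc2": 3.0, "doc3": 7.5}  # made-up BM25 scores
vector_scores = {"doc1": 0.2, "doc2": 0.9, "doc3": 0.6}    # made-up cosine scores
print(hybrid_rank(keyword_scores, vector_scores))
```

Note how doc3, which is merely decent on both axes, can outrank documents that win on only one; that balancing act is essentially the point of going hybrid. Engines also offer alternatives like reciprocal rank fusion that skip score normalization entirely.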
2) The moat is having a snapshot of the web and being able to search through it efficiently.