document search, an llm use-case almost every company wants for its white-collar workers, currently amounts to building a RAG pipeline.
vector search is the first step of that pipeline, and i'd argue that returning the top-n results and letting the user read them is the best of both worlds: semantic retrieval without the generative "AI" part.
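a minimal sketch of that top-n step, assuming embeddings are already computed (here random vectors stand in for encoder output, and `top_n` is a name i'm making up):

```python
import numpy as np

def top_n(query_vec, doc_vecs, n=5):
    # normalize so the dot product equals cosine similarity
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    idx = np.argsort(scores)[::-1][:n]  # indices of the n most similar docs
    return idx, scores[idx]

# toy example: random stand-ins for document embeddings
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 384))            # 100 docs, 384-dim vectors
query = docs[42] + 0.01 * rng.normal(size=384)  # query very close to doc 42
idx, scores = top_n(query, docs, n=3)
```

the user then opens the top hits themselves; no llm ever sees the documents.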
multilingual support comes almost for free if the encoder itself is multilingual. but building the index is still expensive, and i wonder how it could be made fast for a local user.
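one obvious way to cheapen index builds is incrementality: hash each file and only re-embed what changed since the last run. a sketch under that assumption (`embed` is a placeholder for whatever encoder is used, and the json cache format is mine):

```python
import hashlib
import json
import pathlib

def file_hash(path):
    # content hash, so renames/touches without edits don't force re-embedding
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_index(paths, cache_file, embed):
    """re-embed only files whose content hash changed since the last run."""
    cache_path = pathlib.Path(cache_file)
    cache = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    index = {}
    for p in paths:
        h = file_hash(p)
        entry = cache.get(str(p))
        if entry and entry["hash"] == h:
            index[str(p)] = entry["vec"]  # cache hit: reuse stored embedding
        else:
            vec = embed(p.read_text())    # cache miss: pay the encoder cost
            index[str(p)] = vec
            cache[str(p)] = {"hash": h, "vec": vec}
    cache_path.write_text(json.dumps(cache))
    return index
```

on a second run over an unchanged corpus this does zero encoder calls, which is the kind of behavior a local user would need.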