
What are your actionable suggestions?

I am currently testing embeddings/RAG and could use some insight on how to make the results better.




> The text needs to be fully analyzed with LLMs before embedding.

If you happen to know what kinds of questions you will be asking about your RAG index, you should pre-process the texts to add QA pairs. Otherwise you can prompt the LLM to do chain-of-thought inferences based on the source text and add them to the material.
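As a rough sketch of that pre-processing step (assuming the OpenAI Python client; the model names and the prompt are just placeholders for whatever you actually use):

    from openai import OpenAI

    client = OpenAI()

    def augment_chunk(chunk: str) -> str:
        # Ask the LLM to write likely QA pairs for this chunk.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": "Write 3 question/answer pairs that this text answers:\n\n" + chunk,
            }],
        )
        qa_pairs = resp.choices[0].message.content
        # Embed the QA pairs together with the original chunk so that
        # question-shaped queries land closer to it in vector space.
        return chunk + "\n\n" + qa_pairs

    def embed(text: str) -> list[float]:
        resp = client.embeddings.create(model="text-embedding-3-small", input=text)
        return resp.data[0].embedding

You then index embed(augment_chunk(chunk)) instead of embed(chunk).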


I guess you log queries to see what is popular and then reprocess texts based on those?


Aside from a feedback loop from usage, is there a way to guess?

I guess you put a whole doc into the LLM and ask what questions it answers?

And then use those questions plus a piece of the text and do an embedding?
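i.e. something like this at query time (rough sketch, numpy only for the similarity part, and reusing the embed() helper from the pre-processing example above):

    import numpy as np

    def cosine(a, b):
        a, b = np.asarray(a), np.asarray(b)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # index: list of (augmented_chunk_text, embedding) pairs built during pre-processing
    def top_k(query: str, index, k: int = 5):
        q = embed(query)
        scored = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
        return scored[:k]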



