It even does long-term learning to some extent. Admittedly I'm not very familiar with how it works, but it does create "memories," which appear to be personal details it deems might be relevant in the future. Then I assume it uses some type of RAG to apply those previously stored memories to future conversations.
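Just to make the speculation concrete, here's a toy sketch of that kind of memory retrieval. This is purely hypothetical (the memory strings, the word-overlap scoring, and the function names are all made up for illustration); a real RAG setup would score with vector embeddings rather than word overlap:

```python
# Hypothetical stored "memories" (invented examples)
memories = [
    "User's name is Sam and they prefer concise answers",
    "User is learning Rust",
    "User lives in a UTC+2 timezone",
]

def score(query: str, memory: str) -> float:
    """Crude relevance: fraction of query words that appear in the memory.
    A real system would use embedding similarity instead."""
    q = set(query.lower().split())
    m = set(memory.lower().split())
    return len(q & m) / len(q)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the top-k stored memories most relevant to the query,
    which would then be injected into the model's context."""
    return sorted(memories, key=lambda m: score(query, m), reverse=True)[:k]

print(retrieve("any tips for learning rust"))
# → ['User is learning Rust']
```

The retrieved snippets would presumably just get prepended to the prompt, which is why it can feel like the model "remembers" you.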
This makes me wonder if there is or could be some type of RAG for chains of thought…