
That's only because the models seem to have been trained purely to generate text that matches a prompt, i.e. prompt completion, rather than to retrieve, parse, and organise knowledge.

If part of the training were to use only knowledge sourced from a vector DB, with the model's trained knowledge permitted only for grammar, phrasing, and rewriting retrieved information, then I think it would do a lot better.

It doesn't seem like many models are trained on examples like "Question Q" -> "[no data] I'm sorry, but I don't know that" being accepted as a correct completion.

This would help immensely, not just for chatbots but for personal use too. I don't want my LLM assistant to invent a trip to Mars when I ask it "what do I have to do today" and my calendar happens to be empty.
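The behaviour described above can be sketched as a tiny retrieval-grounded answering loop. This is a hypothetical illustration, not a real library API: `retrieve` stands in for a vector-DB lookup (here just naive keyword matching), and the hard-coded refusal string plays the role of the "[no data] I don't know" completion the model would be trained to emit.

```python
def retrieve(query: str, store: dict[str, str]) -> list[str]:
    """Toy stand-in for a vector-DB similarity search:
    naive keyword match against stored entries."""
    return [text for key, text in store.items() if key in query.lower()]

def answer(query: str, store: dict[str, str]) -> str:
    """Answer using only retrieved context; refuse when retrieval is empty."""
    context = retrieve(query, store)
    if not context:
        # The trained fallback: no retrieved data means no answer,
        # instead of inventing one from parametric knowledge.
        return "I'm sorry, but I don't know that."
    # In a real system the LLM would rephrase the retrieved context;
    # here we return it verbatim.
    return " ".join(context)

calendar = {}  # empty calendar: nothing scheduled today
print(answer("what do I have to do today", calendar))
# -> I'm sorry, but I don't know that.
```

With a non-empty store (e.g. `{"today": "Dentist at 3pm"}`) the same call returns the retrieved entry instead of the refusal, which is exactly the asymmetry the training examples above would reward.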



