
Our brains also seem to tie our thoughts to observed reality in some way. The parts that do sensing and reasoning interact with the parts that handle memory. Different types of memory exist to handle trade-offs. Memories of things that make sense also grow in strength compared to random things we observed.
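To make the last point concrete, here's a toy sketch (my own illustration, not anything from neuroscience literature): memory traces that strengthen when observations agree with a simple "world model" and decay otherwise, a loose analogy for consolidation favoring coherent memories.

```python
# Toy consolidation sketch: strengths grow for observations that "make
# sense" under a world model, and decay for random/inconsistent ones.

def consolidate(strengths, observations, world_model, rate=0.5, decay=0.9):
    """Update memory strength for each observed item."""
    for item in observations:
        prior = strengths.get(item, 0.0)
        if world_model(item):           # observation fits the world model
            strengths[item] = prior + rate
        else:                           # random observation fades
            strengths[item] = prior * decay
    return strengths

# Assume a trivial world model: even numbers "make sense".
strengths = {}
for _ in range(5):
    consolidate(strengths, [2, 7], lambda x: x % 2 == 0)

print(strengths[2] > strengths[7])  # coherent memory dominates -> True
```

Nothing deep here, just the shape of the idea: repeated coherent evidence accumulates, noise washes out.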

LLMs don’t seem to be doing these things. Their design is weaker than the brain’s at mitigating hallucinations.

For brain-inspired research, I’d look at the portions of the brain that seem to be abnormal in people with hallucinations. Then, build models of how they work. Then, see if we can apply those to LLMs.

My other idea was models of things like the hippocampus applied to NNs. That’s already being done by a number of researchers, though.
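The hippocampus-as-external-memory idea can be sketched very roughly (again my own illustration, not any specific researcher's architecture): store episodes as (key, value) pairs and let the "reasoning" part recall the nearest stored episode, so answers are grounded in something actually observed rather than generated freely.

```python
# Toy hippocampus-like episodic store: keys are feature vectors, recall
# returns the value whose key is nearest the query.
import math

class EpisodicMemory:
    def __init__(self):
        self.episodes = []            # list of (key_vector, value)

    def store(self, key, value):
        self.episodes.append((key, value))

    def recall(self, query):
        """Return the value whose key is closest to the query vector."""
        _, value = min(self.episodes,
                       key=lambda kv: math.dist(kv[0], query))
        return value

mem = EpisodicMemory()
mem.store([1.0, 0.0], "saw a cat")
mem.store([0.0, 1.0], "saw a dog")
print(mem.recall([0.9, 0.1]))  # nearest episode: "saw a cat"
```

Retrieval-augmented generation is the closest mainstream analogue of this pattern in LLM work.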



