Hacker News

Most human self-reflection, and to an extent even memory, is similarly post hoc, however.





Humans do tend to remember thoughts they had while speaking, thoughts that go beyond what they said. LLMs don’t have any memory of their internal states beyond what they output.

(Of course, chain-of-thought architectures can hide part of the output from the user, and you could regard that as internal processing that the LLM does “remember” over the further course of the chat.)
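To make that parenthetical concrete, here is a minimal sketch of such a loop. Everything here is hypothetical (no real LLM API is used, and the `<think>` tag convention is just an illustration): the full output, reasoning included, is fed back into the model's context on later turns, while the user-visible reply has the reasoning stripped out.

```python
import re

def model_respond(context: str) -> str:
    # Stand-in for an actual LLM call; a real model would emit hidden
    # reasoning followed by a user-facing answer.
    return "<think>The user greeted me; reply politely.</think>Hello!"

def chat_turn(history: list[str], user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    raw = model_respond("\n".join(history))
    # The full output, reasoning included, goes back into the context,
    # so the model can "remember" it on later turns...
    history.append(f"Assistant: {raw}")
    # ...but the user only sees the reply with the reasoning removed.
    return re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL)

history: list[str] = []
print(chat_turn(history, "Hi"))         # user sees: Hello!
print("<think>" in "\n".join(history))  # reasoning stays in context: True
```

The asymmetry the thread describes falls out of this design: the "memory" of the internal process exists only because the process was serialized into the context, not because the model retains any state beyond its output.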


Is it a thought you had or a thought that was generated by your brain?

In any case, the end result is the same: you can only infer from what was generated.


You can only infer from what is remembered (regardless of whether the memory is accurate or not). The point here is, humans regularly have memories of their internal processes, whereas LLMs do not.

I don't see any difference between "a thought you had" and "a thought that was generated by your brain".




