
Yeah, for axioms like the above, my next question is: define 'understand'. Does my dog understand words when it completes specific actions because of what I say? I'm also learning a new language; do I understand a word when I attach a meaning (often just a bunch of other words) to it? Turns out computers can do this pretty well.



Oh please, enough with the semantics. It reminds me of a postmodernist asking me to define what "is" is. The LLM does not understand words in the way a human understands them, and that's obvious. Even the creators of LLMs implicitly take this as a given and would rarely openly say they think otherwise, no matter how strong the urge to create a more interesting narrative.

Yes, we attach meaning to certain words based on previous experience, but we do so in the context of a conscious awareness of the world around us and our experiences within it. An LLM doesn't even have a notion of self, much less a mechanism for attaching meaning to words and phrases based on conscious reasoning.

Computers can imitate understanding "pretty well," but they have nothing resembling any notion of comprehension, good or bad, of what they're saying.





