
They feel sentient in many cases because they're trained by people on data selected in the hope of making them seem sentient. The models, in turn, just repeat back what they've already read in slightly different ways. Ergo, they "feel" sentient: training requires telling the model which outputs are more correct, and we do that by rating the outputs that sound more sentient as more correct.
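To make that feedback loop concrete, here's a minimal sketch in Python/PyTorch of how pairwise human preferences become a training signal. The scores, the tiny example, and the Bradley-Terry-style loss are illustrative only, not any particular lab's actual pipeline: a rater picks the reply that "sounds better", and a reward model is nudged to score that reply higher.

    # Illustrative sketch: pairwise preference loss for a reward model.
    # (Hypothetical scores; not a real training pipeline.)
    import torch
    import torch.nn.functional as F

    def preference_loss(score_chosen: torch.Tensor,
                        score_rejected: torch.Tensor) -> torch.Tensor:
        # Bradley-Terry style objective: push the chosen reply's score
        # above the rejected reply's score.
        return -F.logsigmoid(score_chosen - score_rejected).mean()

    # Toy reward-model scores for two candidate replies to the same prompt
    score_chosen = torch.tensor([1.3])    # the reply the rater preferred
    score_rejected = torch.tensor([0.2])  # the reply the rater passed over
    print(preference_loss(score_chosen, score_rejected))  # low loss when chosen > rejected

Optimizing this loss over many human judgments is how "sounds more sentient" gets baked into what the model considers a good answer.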

It's cool stuff, but if you ever really want to know for sure, ask one of these things to summarize the conversation you just had, and watch the illusion fall to pieces. They retain little more than the barest whiff of context, just enough to keep predicting the next word, so a genuine summary is beyond their abilities.
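If you want to run that test yourself, here's a minimal sketch assuming an OpenAI-style chat endpoint; the model name and the prior turns are made up. Note that the model keeps no state of its own between calls: the only "memory" is whatever conversation you re-send in the messages list, so any summary can only cover what fits in that window.

    # Illustrative sketch of the "summarize our conversation" test.
    # Assumes the openai>=1.0 Python client and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    conversation = [
        {"role": "user", "content": "Let's plan a weekend hiking trip."},
        {"role": "assistant", "content": "Sure, how far do you want to drive?"},
        {"role": "user", "content": "Under two hours, and dog-friendly trails."},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=conversation + [
            {"role": "user", "content": "Summarize the conversation we just had."}
        ],
    )
    print(response.choices[0].message.content)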



