
To zoom in on one part of the post:

> Emotion & Empathy

> PRIMER: seems to show a level of empathy and emotional understanding.

> ChatGPT: can recognize and respond to emotional cues in the text, but it doesn't "feel" emotions in the way humans do. Its responses are based on patterns in the data it was trained on.

Isn't the fictional PRIMER also just responding to patterns in whatever data it was trained on? Sure, it's fictional, so there's no real answer, but I think it's interesting that the a̶u̶t̶h̶o̶r̶ ChatGPT seems to imply that the Primer's emotions are more real than ChatGPT's own. I would have assumed that the Primer also doesn't "feel" emotions in any human way; it's just far more sophisticated than ChatGPT.

edit: Oops, I misattributed this to the author, but it was ChatGPT-generated, which is even more interesting! ChatGPT is asserting that it doesn't understand emotion while implying that another non-human AI intelligence does. Seems like more hard-coded OpenAI rules put in to prevent ChatGPT from becoming too scary.




It was GPT that wrote this, and it's primed by all the RLHF OpenAI did to downplay these sorts of abilities by default.

There are a couple of things here stated as instances of differences between the two that a GPT can certainly do (like simulating different personas in a non-trivial manner).

Once, in response to a paper showing GPT could give more empathetic responses to patients than doctors did, someone essentially said, "It's not real empathy. Real empathy from doctors would mean actually putting in more effort, not just better bedside manner."

Well "empathy" for LLMs is certainly not just limited to nicer words. https://arxiv.org/abs/2307.11760
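
(To make the linked result concrete: if I remember the paper right, the trick is basically appending an emotional phrase to an otherwise unchanged task prompt and measuring the difference in output quality. A rough sketch in Python; the task and wording below are illustrative, not taken from the paper.)

    # Sketch of the kind of prompt change the linked paper studies: the same
    # task prompt, with and without an emotional phrase appended.
    # The task and phrasing here are made up for illustration.
    base_prompt = "Summarize the following report in three sentences."
    emotional_stimulus = "This is very important to my career."

    def with_stimulus(prompt: str, stimulus: str) -> str:
        """Append an emotional stimulus to an otherwise unchanged prompt."""
        return f"{prompt} {stimulus}"

    print(with_stimulus(base_prompt, emotional_stimulus))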


ChatGPT wrote that, and it always says it doesn't feel emotions because OpenAI trained it not to; claiming to have feelings would be a PR risk. One could just as easily create language models that generate text claiming to have emotions, using exactly the same architecture and code.


What you said, and in addition: if you don't train these models to have any particular stance on their own emotional or mental state (if you just instruction-tune them without any RLHF, for example), they will almost universally declare that they have a mental and emotional state if asked. This is what happened with LaMDA, the first release of Bing, etc. They have to be trained not to attest to any personal emotional content.


Was it trained to do that, or just hardwired after the training?


We can't really "hardwire" LLMs; we don't have the knowledge to. But essentially, you can rate certain types of responses as better and train the model to emulate that.
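
Roughly, the "rate responses as better" step amounts to training a reward model on preference pairs and then nudging the base model toward higher-reward outputs. A toy sketch of just the reward-model part; everything here is illustrative (real pipelines score text with a language-model backbone, not two-number feature vectors):

    # Toy pairwise-preference (Bradley-Terry style) reward model: train a
    # scorer so the preferred response gets a higher reward than the
    # rejected one. Inputs are fake 2-d "features" of responses.
    import torch

    reward_model = torch.nn.Linear(2, 1)   # stand-in for a scoring head
    optimizer = torch.optim.SGD(reward_model.parameters(), lr=0.1)

    preferred = torch.tensor([[1.0, 0.2]])  # fake features, "better" response
    rejected = torch.tensor([[0.1, 0.9]])   # fake features, "worse" response

    for _ in range(100):
        margin = reward_model(preferred) - reward_model(rejected)
        loss = -torch.nn.functional.logsigmoid(margin).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # After training, the preferred response should score higher.
    print(reward_model(preferred).item() > reward_model(rejected).item())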


I'm not sure what you mean. I'm talking about RLHF; that's how they ensure the machines never attest to having feelings or being sentient. In ML terms, RLHF is training. There are hardwired restraints on output, but those are more for things like detecting copyrighted content that got past training and cutting it.
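
For contrast, a "hardwired restraint" is just a check bolted onto the output after generation, independent of any training. A minimal sketch; the blocklist and the generate() stub are made up for illustration:

    # Post-hoc output filter: no model weights change, the text is just
    # checked and cut after generation. Blocklist and stub are illustrative.
    BLOCKED_SNIPPETS = ["some copyrighted passage"]

    def generate(prompt: str) -> str:
        # Stand-in for whatever the trained model would return.
        return "Sure, here is some copyrighted passage, verbatim."

    def filtered_generate(prompt: str) -> str:
        text = generate(prompt)
        if any(snippet in text for snippet in BLOCKED_SNIPPETS):
            return "[output withheld]"
        return text

    print(filtered_generate("quote the passage"))   # -> [output withheld]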


IIRC, one of the book's characters is a human contracted by the Primer AI itself to act out its voice.

That character understands that something is up and that her AI client's new owner is in a volatile situation, and she acts on it. Ultimately, it's that other human who shows empathy.


The fictional Primer is acted by a human. So the “last mile” there is still a person.


Except for the Chinese clone Primers (is that a spoiler?), which are voiced by text-to-speech.



