I couldn't agree more. We need LLMs that don't sound like anodyne, predictable, woke customer service agents.

I always make this argument:

If a human had read all the text GPT read, and you had a conversation with them, it would be the most profound conversation of your life.

Eclecticism beyond belief, surprising connections, and moving moments would occur nonstop.

We need language models like that.

Instead, every time they run, our language models try to predict an individual instance of a conversation with the lowest-common-denominator customer service agent (who, to their credit, can look things up very well).

And I don't think fine-tuning this "tone" in would be the way to go. A better way would be to re-Frankenstein the existing architectures or training algorithms so they can synthesize in this way. No more just predicting the next token.




It is not surprising when they are created in the land of “Have a nice day!”



