
A language model isn't Skynet :)



I'm working on it


...yet?


There is nothing to suggest a language model is self-aware, capable of reasoning, or about to turn around and kill you or anyone else. Knowledge is power, and it's better to get clued up on how these things work so you don't scare yourself.


Indeed. I think the confidence with which ChatGPT gives (often incorrect) answers, and the way you can correct it, make people feel like it is self-aware, but it's not. The way it is presented really makes it easy to anthropomorphise. It feels like you're talking to a person, but really what you're talking to is the echoes of a billion people's murmurs on the internet.

There is a really big step to go before it is self-learning, which is one of the things it would need to be self-aware. Right now the tech is just a static model - it will not learn from being corrected. You can argue with it, saying "Hey, this is wrong because...", and it will admit you're right. And then it will give the same wrong answer the next time you ask.
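
To make the "static model" point concrete, here is a deliberately simplified toy in Python. Nothing here is a real model or API - FrozenLanguageModel and its wrong "fact" are made up for illustration. The idea it sketches is real, though: the weights are frozen at training time, so a correction can only live in the current conversation context, and a fresh session reverts to the baked-in answer.

    class FrozenLanguageModel:
        """Toy stand-in for a deployed LLM: its 'knowledge' never changes."""

        def __init__(self):
            # Baked in at training time; inference never updates this.
            self.knowledge = {"capital of Australia": "Sydney"}  # wrong on purpose

        def reply(self, context):
            # If the user supplied a correction earlier in THIS conversation,
            # the model can condition on it...
            for turn in context:
                if turn.startswith("Correction:"):
                    return turn.removeprefix("Correction: ")
            # ...otherwise it falls back to its frozen (wrong) training data.
            return self.knowledge["capital of Australia"]


    model = FrozenLanguageModel()

    session_1 = ["What is the capital of Australia?"]
    print(model.reply(session_1))   # Sydney (wrong)

    session_1.append("Correction: Canberra")
    print(model.reply(session_1))   # Canberra - looks "learned", but it's only context

    session_2 = ["What is the capital of Australia?"]  # fresh conversation
    print(model.reply(session_2))   # Sydney again: nothing persisted

The correction never touches the model's stored "knowledge"; it only rides along in the per-session context list, which is why the wrong answer comes straight back in a new chat.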


I think AI has a bit of a branding issue.



