Does it do that because it can check its own reasoning? Or is it just doing so because OpenAI programmed it not to show alternative answers when the probability of the current answer being right is significantly higher than that of the alternatives?
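A rough sketch of the kind of threshold heuristic described above, purely illustrative: the candidate answers, probabilities, and ratio cutoff are made up, and nothing here is claimed to be what OpenAI actually does.

    # Hypothetical heuristic: only surface alternative answers when the top
    # candidate's probability is not overwhelmingly higher than the runner-up's.
    def answers_to_show(candidates, ratio_threshold=5.0):
        """candidates: list of (answer, probability) pairs."""
        ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
        best, runner_up = ranked[0], ranked[1]
        if best[1] / runner_up[1] >= ratio_threshold:
            return [best[0]]               # confident: show only the top answer
        return [a for a, _ in ranked]      # uncertain: show the alternatives too

    print(answers_to_show([("42", 0.90), ("41", 0.06), ("43", 0.04)]))  # ['42']
    print(answers_to_show([("42", 0.45), ("41", 0.35), ("43", 0.20)]))  # all three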



I don't know. I don't think anyone is directly programming GPT-4 to behave in any particular way; they're just training it to give the responses they want, and it learns. Something inside it seems to have figured out some way of representing confidence in its own answers and reacting appropriately, or perhaps it is checking its own reasoning. I don't think anyone really knows at this point.



