
Does it give the right information or just some that sounds reasonable to you?



I'm asking pretty basic questions - "Why moven there and not mover?" kind of level - and the responses it gives are obviously correct to me.


I recently asked ChatGPT (GPT-3.5) to explain the grammar of a Japanese sentence that uses an obscure informal verb form, and it gave an answer that was obviously wrong. I just tried again with GPT-4, and it gave an answer that was more plausible but almost certainly still wrong, because it contradicts the much more convincing Stack Exchange answer I had previously found via Google. (I only asked ChatGPT the question as a test.)

So, it can be helpful for basic questions, but there are limits. The harder the question is, the more likely it is to get the answer wrong, and the harder it is for the user to identify when it’s done so.


How do you know the Stack Exchange answer was correct?



