Hacker News

If ChatGPT is what we can expect from AI, I'm quite afraid. It seems to be usually correct, but when it is wrong it is often spectacularly and confidently wrong. I hope we never give this technology decision-making power.



Adding a confidence score to every answer doesn't look like an insurmountable problem.


It generates text. It has no idea how “correct” its output is. The best it can ever do is give you information about how likely the output is to follow the input based on its corpus. That may or may not correlate with correctness.

For the sake of argument, ignore issues with the correctness of the corpus itself. Imagine that the model produces a 50-word answer. Inserting the single word "not" into the answer may change the likelihood score only slightly either way, but it could completely change the meaning of the answer.
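A minimal sketch of that point, using a toy unigram model over a made-up corpus (real LMs score token sequences with a neural network, but the likelihood arithmetic is analogous; the corpus and sentences here are invented for illustration): inserting "not" flips the meaning while barely moving the average per-word log-likelihood.

```python
import math
from collections import Counter

# Hypothetical toy corpus standing in for a language model's training data.
corpus = (
    "the moon is made of rock the moon is not made of cheese "
    "the sun is a star the sun is not a planet"
).split()
counts = Counter(corpus)
total = sum(counts.values())

def avg_log_likelihood(sentence):
    """Mean per-word log-probability under the unigram model
    (add-one smoothing so unseen words don't zero out the score)."""
    words = sentence.split()
    vocab = len(counts)
    return sum(
        math.log((counts[w] + 1) / (total + vocab)) for w in words
    ) / len(words)

a = avg_log_likelihood("the moon is made of rock")
b = avg_log_likelihood("the moon is not made of rock")
print(f"without 'not': {a:.3f}")
print(f"with    'not': {b:.3f}")
print(f"difference:    {abs(a - b):.3f}")
```

Both sentences get nearly the same score (the gap is a few hundredths of a nat per word), so a "confidence" number derived from likelihood alone says nothing about which one is true.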




