> with citations I can verify.

And do you? Every time someone has tried to show me examples of “how amazing ChatGPT is at reasoning”, the answers had glaring mistakes. It would be funny if it weren’t so sad: people turn off their critical thinking when using LLMs, to the point that they won’t even verify the answers they’re using to make a point.

Here’s a small recent example of failure: I asked the “state of the art” ChatGPT model which Monty Python members have been knighted (it wasn’t a trick question; I genuinely wanted to know). It answered Michael Palin and Terry Gilliam, and claimed they had been knighted for X, Y, and Z (I don’t recall the exact reasons). Then I verified the answer against the BBC, Wikipedia, and a few other sources, and determined that only Michael Palin has been knighted, and not even for the reasons given.

Just for kicks, I then said I didn’t think Michael Palin had been knighted. It promptly apologised, told me I was right, and that only Terry Gilliam had been knighted. Worse than useless.


I do. It’s not hard to click on the citation, skim the abstract and results, and check the reputation of the publication. It’s built into how I have always searched for information.

I also usually follow most prompts with “look it up I want accurate information”


> I also usually follow most prompts with “look it up I want accurate information”

That didn’t work out so well for the two lawyers in the news a while back.


Back when those lawyers used it, ChatGPT didn’t do lookups or provide citation links.

