Hacker News

The difficulty of verifying that the answer isn't wrong is another important factor. Bad search results are often obviously bad, but LLM nonsense can contain subtle falsehoods.

If a process gives false results half the time, and verifying any one result takes half as long as deriving a correct solution yourself... well, I don't know the limiting sum of the infinite series offhand, but it's a terrible tool.
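Under one simple model the series does have a clean closed form: if each answer is independently correct with probability p and verifying one answer costs v (relative to a cost of 1 to derive the solution yourself), the expected total verification cost is v/p. A minimal sketch — the numbers p = 0.5 and v = 0.5 are the hypothetical figures from the comment above, not measurements:

```python
def expected_verification_cost(p, v, max_attempts=1000):
    """Expected total cost of verify-until-correct as a geometric series.

    Attempt k is needed only if all k-1 previous answers were wrong,
    which happens with probability (1 - p)**(k - 1); each attempt
    costs v to verify. Truncated at max_attempts, which is more than
    enough for the sum to converge.
    """
    return sum(v * (1 - p) ** (k - 1) for k in range(1, max_attempts + 1))

# With the comment's numbers: p = 0.5, v = 0.5.
cost = expected_verification_cost(p=0.5, v=0.5)
# Closed form: v / p = 0.5 / 0.5 = 1.0 — on average, checking the
# tool's output costs exactly as much as solving the problem yourself.
```

So under these assumptions the limiting sum is v/p: at p = 0.5 and v = 0.5 that is 1.0, i.e. the tool saves nothing even before counting the cost of the wrong answers that slip through.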





