
At my job we are scraping using LLMs, for a 10M sector of the company. GPT-4 Turbo has not once hallucinated out of 1.5 million API requests. We use it to parse and interpret data from webpages, which is something you wouldn't be able to do with a regular scraper. Not well, at least.
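
Roughly, the setup looks like this (a minimal sketch assuming the OpenAI Python SDK; the extracted fields here are made up for illustration, not our actual schema):

    # Sketch of LLM-based extraction from raw page text.
    # Assumes the OpenAI Python SDK v1; field names are hypothetical.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_fields(page_text: str) -> dict:
        """Ask the model to pull structured fields out of a webpage."""
        response = client.chat.completions.create(
            model="gpt-4-turbo",
            response_format={"type": "json_object"},  # force valid JSON
            messages=[
                {"role": "system",
                 "content": "Extract the company name, address, and phone "
                            "number from the page as JSON with keys: name, "
                            "address, phone. Use null for any field not "
                            "present in the text."},
                {"role": "user", "content": page_text},
            ],
        )
        return json.loads(response.choices[0].message.content)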



Bold claim. Did you review all 1.5 million requests?


I guess the claim is based on statistical sampling at a reasonably high confidence level, to be sure that if there were hallucinations you would catch them? Or is there something else you're doing?

Do you have any workflow tools etc. to find hallucinations? I've got a project in the backlog to build that kind of thing and would be interested in how you sort good results from bad.


In this case we had 1.5 million ground truths for our testing purposes. We have now run it over 10 million, but I didn't want to claim it had 0 hallucinations on those, as technically we can't say for sure. But considering the hallucination rate was 0% over 1.5 million when compared to ground truths, I'm fairly confident.
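
To make the comparison concrete, here's a rough sketch of that kind of ground-truth check (the field names and the exact-match criterion are assumptions for illustration, not our actual pipeline):

    # Compare model extractions against known ground truths.
    # A record counts as hallucinated if the model asserted a value
    # that contradicts the ground truth; a missing value (None) is
    # treated as an abstention, not a hallucination.
    def hallucination_rate(predictions: list[dict],
                           ground_truths: list[dict]) -> float:
        hallucinated = 0
        for pred, truth in zip(predictions, ground_truths):
            for key, true_value in truth.items():
                value = pred.get(key)
                if value is not None and value != true_value:
                    hallucinated += 1
                    break  # one bad field flags the whole record
        return hallucinated / len(ground_truths)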


How do you know that's true?


The 1.5 million was our test set. We had 1.5 million ground truths, and it didn't make up fake data for a single one.


That's not what I asked. I asked "How did you determine that it didn't make up/get information wrong for all 1.5m?"



