
I think your "wide" narrative is actually still a very narrow narrative.

The actual wide narrative is this: yes, the models lie and hallucinate, but people are realizing that this is essentially what every human AND website already does! Every human presents their "facts" and "viewpoints" as though they know the whole truth, but really they're just parroting whatever talking points they got from the BBC or The Guardian or Fox News, and all of those journalists are in turn working from other sources with their own biases and inaccuracies. Basically, it's bullshit and inaccuracies all the way down!

I was out for dinner with friends last night chatting about ChatGPT, and we concluded that while it does have inaccuracies, it's still better than asking humans for information, and still better than the average Google SEO-spam website. That is, what makes us think a random human-made website about, say, space travel is any more accurate than ChatGPT, or than what our friend Bob thinks about space travel?

The truth is, most of the information we receive on a daily basis is inaccurate or hallucinated to some degree; we've just gotten used to taking whatever the BBC or Bloomberg or Ars Technica says as "the truth."




I'd strongly disagree with the idea that ChatGPT is essentially as trustworthy as humans and human-generated content just because humans occasionally bullshit and misrepresent reality.

You can rationalize what people are saying based on their experiences, opinions, and backgrounds. You can engage in the Socratic method with people, to unpack where their claims come from and get to the grounding "truth" of primary experience.

You can't do any of these things with ChatGPT, because ChatGPT isn't grounded – it goes in circles at a level of abstraction where truth doesn't exist.



