
I think this is a weird non-issue and it's interesting people are so concerned about it.

- Human curated systems make mistakes.

- Fiction has created the trope of the omniscient AI.

- GPT curated systems also make mistakes.

- People are measuring GPT against the omniscient AI mythology rather than the human systems it could feasibly replace.

- We shouldn't ask "is AI ever wrong?"; we should ask "is AI wrong more often than the human-curated information it could replace?" (There are levels of this - minimum-wage truth is less accurate than senior-engineer truth.)

- Even if the answer is that AI gets more wrong, surely a system where AI and humans work together to determine the truth can outperform a system curated by either alone. (For the next decade or so, at least.)




I think there's an issue with gross misrepresentation. This isn't being sold as a system with 50% accuracy where you need to hold its hand; it's sold as a magical being that can answer all of your questions, and we know that's how people will treat it. I think this is a worse situation than data coming from humans, since people are skeptical of one another. But many think AI will be an impartial, omnipotent source of facts, not a bunch of guesses that might be right slightly more often than they're wrong.


I see your point, but I feel like there's going to be an 'eating Tide Pods'-level societal meme within a year mocking people who fall for AI hallucinations as "boomers", and then the omnipotent AI myth will be shattered.

Essentially, I believe the baseline level of misinformation is undercounted by many, so the delta in the interim, while people are learning the fallibility of AI, is small enough that it won't cause significant issues.

Also, the 'inoculation' effect of getting the public using LLMs could result in a net social benefit, as the common man will be skeptical of authorities appealing to AI to justify actions - which I think could be much more dangerous than Suzie copying hallucinated facts into her book report.


If the only negative effect is that some people look foolish, that's an acceptable risk. I'm a bit worried it's closer to people thinking Tesla has a full self-driving system because Tesla called it Autopilot and demonstrated videos of the car driving without a human occupant. In that case, yes, the experts understand that "Autopilot" still means driver-assisted, but we can't ignore the fact that most people don't know that, and that the marketing reinforced the wrong ideas.

I don't want to argue with people who won't understand that an AI model can be wrong. I'm far more concerned with public policy being driven by made-up facts, or someone responding poorly in an emergency because a search engine synthesized facts. Outside of small discussions here, I don't see any acknowledgment of the current limitations of this technology, only sunny promises of greener pastures.


> we should ask "is AI wrong more often than the human-curated information?"

No, this isn't what we should ask; we should ask whether the interface that AI provides gives humans the ability to detect the mistakes it makes.

The issue isn't how often you get wrong information; it's to what extent you're able to spot wrong information under normal use. A uniform AI interface that gives you bullshit, in the technical sense of that term, provides no indication of the trustworthiness of the information. A source that's 20% wrong info you don't notice is worse than one that's 80% wrong info you can identify.

When you use traditional search you get an unambiguous source, context, date, language, authorship and so forth, and you have to place what you read in context yourself. You know the onus is on you. ChatGPT is the half self-driving car. It's an inherently pathological interaction, because everything in the design screams to take your hands off the wheel. It's an opaque system, and a black box with the error rate of a human is a disaster. Human-machine interaction is not human-human interaction.


I agree 100% with your last point, even as someone who is relatively more skeptical of GPT than the average person.

I think a lot of the concern, though, is coming from the way the average person is reacting to GPT and the way they're using it. The issue isn't that GPT makes mistakes, it's that people (by their own fault, not necessarily GPT's) get a false sense of security from it, and since the answers are provided in a concise, well-written format, they don't apply the same skepticism they do when searching for something. That's my experience at least.

Maybe people will just get better at using this, the tools will improve, and it won't be as big an issue, but it feels like a continuation of the trend from Facebook to TikTok of people opting for more easily digestible content at the cost of more disinformation.


Interesting points.

- I wonder what proportion of people who are getting a false sense of security from GPT were also getting that same false sense from human systems. Will this shift entail a net increase in gullibility, or is it just 'laundering' existing foolishness?

- I think the average TikTok user generally has much better media literacy than the average Facebook user, but that probably depends a lot on your filter bubble.


Regular Bing answers with the wrong President of Brazil, btw, and I don't see people getting pissed off about that, lol.



