
I hear these complaints and can't see how this is worse than the pre-AI situation. How is an AI "hallucination" different from human-generated works that are just plain wrong, or otherwise misleading?

Humans make mistakes all the time. Teachers certainly did back when I was in school. There's no fundamental qualitative difference here. And I don't even see any evidence that there's any difference in degree, either.




"Sorry, computer says no."

Humans can be wrong, but they can't be wrong at such a massive scale, and they often have an override button: you can get them to look at something again.

When you have a deployed AI system and full automation, you've got more opportunities for "I dunno, the AI says that you are unqualified for this job and there is no way around that."

We already see this with less novel forms of automation. There are great benefits here, but the number of times people are just completely stymied by "computer says no" has also exploded. Expect that to increase further.


Because people know they make mistakes, and aren’t always 100% certain and are capable of referring you to other people. Also because the mistakes LLMs make are entirely unlike mistakes humans make. Humans don’t generate fake URLs citing entirely fake references. Humans don’t apologize when corrected and then re-assert the same mistake. Also because we know that people aren’t perfect and we don’t expect them to be infallible, humans can break out of their script and work around the process that’s been encoded in their computers.

But most people do expect computers to be infallible, and the marketing hype for LLMs is that they are going to replace all human intellectual labor. Huge numbers of people actually believe that. And if you could convince an LLM it was wrong (you can’t, not reliably), it has no way around the system it’s baked into.

All of these things are really really dangerous, and just blithely dismissing it as “humans make mistakes, too, lol” is really naive. Humans can decide not to drop a bomb or shoot a gun if they see that their target isn’t what they expect. AIs never will.


> Because people know they make mistakes, and aren’t always 100% certain and are capable of referring you to other people. Also because the mistakes LLMs make are entirely unlike mistakes humans make. Humans don’t generate fake URLs citing entirely fake references. Humans don’t apologize when corrected and then re-assert the same mistake.

Pretty much every element of the above statements is false. Heck, either your response to me or this reply seems to be an example showing that the first one is wrong.


Society has spent literal decades being convinced to put their trust in everything computers do. We're now at the point that, in general, that trust is there and isn't misplaced.

However, now that computers can plausibly do certain tasks that they couldn't before via LLMs, society has to learn that this is an area of computing that can't be trusted. That might be easy for more advanced users who already don't trust what corporations are doing with technology[0], but for most people this is going to be a tall order.

[0] https://i.imgur.com/6wbgy2L.jpeg


Probably the main difference is that humans fail at a smaller scale, with smaller effects, and build a reputation; ChatGPT hallucinations can potentially affect everyone.


Humans know when they've made a mistake. So there are ways to deal with that.

Computers are final. You don't want things to be final when your life's on the line.


> Humans know when they've made a mistake.

You'll never make senior management with that attitude. At worst, "mistakes were made" and look a bit sad.


>There's no fundamental qualitative difference here...degree either.

I've heard the same comparisons made with self-driving cars (i.e. that humans are fallible, and maybe even more error-prone).

But this misses the point. People trust the fallibility they know. That is, we largely understand human failure modes (errors in judgement, lapses in attention, etc) and feel like we are in control of them (and we are).

OTOH, when machines make mistakes, they are experienced as unpredictable and outside of our control. Additionally, our expectation of machines is that they are deterministic and not subject to mistakes. While we know bugs can exist, it's not the expectation. And, with the current generation of AI in particular, we are dealing with models that are generally probabilistic, which means there's not even the expectation that they are errorless.

And, I don't believe it's reasonable to expect people to give up control to AI of this quality, particularly in matters of safety or life and death; really anything that matters.

TL;DR: Most people don't want to gamble their lives on a statistic, when the alternative is maintaining control.


Expanding on this, human failures and machine failures are qualitatively different in ways that make our systems generally less resilient against the machine variety, even when dealing with a theoretically near-perfect implementation. Consider a bug in an otherwise perfect self-driving car routine that causes crashes under a highly specific scenario -- roads are essentially static structures, so you've effectively concentrated 100% of crashes into (for example) 1% of corridors. Practically speaking, those corridors would be forced into a state of perpetual closure.

This is all to say that randomly distributed failures are more tolerable than a relatively smaller number of concentrated failures. Human errors are rather nice by comparison because they're inconsistent in locality while still being otherwise predictable in macroscopic terms (e.g.: on any given day, there will always be far more rear-endings than head-on collisions). When it comes to machine networks, all it takes is one firmware update for both the type & locality of their failure modes to go into a wildly different direction.
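(For concreteness, here is a rough toy simulation of that contrast. Every number in it is made up purely to illustrate the idea; it is a sketch, not a claim about real crash data.)

    import random
    from collections import Counter

    random.seed(0)
    N_CORRIDORS = 1000   # hypothetical road corridors
    N_CRASHES = 10_000   # same total number of failures in both cases

    # Human-like failures: each crash lands on a uniformly random corridor.
    human = Counter(random.randrange(N_CORRIDORS) for _ in range(N_CRASHES))

    # Machine-like failures: a single bug confines every crash to 1% of corridors.
    buggy = random.sample(range(N_CORRIDORS), N_CORRIDORS // 100)
    machine = Counter(random.choice(buggy) for _ in range(N_CRASHES))

    print("corridors affected (human-like): ", len(human))            # ~1000
    print("corridors affected (machine-like):", len(machine))          # 10
    print("worst corridor (human-like):      ", max(human.values()))   # a few dozen
    print("worst corridor (machine-like):    ", max(machine.values())) # ~1000

Same overall failure count in both runs, but the concentrated pattern would force those few corridors into the perpetual closure described above.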


What you say is true, and I agree, but that is the emotional, human side of thinking. Purely logically, it would make sense to compare the two systems of control and use the one with fewer human casualties. Not saying it's gonna happen, just thinking that reason and logic should take precedence, no matter what side you are on.


It definitely seems like a matter of simple math. But, I'm not 100% sure it's always the most logical choice to defer to statistics.

By definition, stats operate at the macro level. So, for instance, I may be a safer driver than the AI average. Should I give up control? I suppose it's also a matter of degree, and there's the network effect to consider (i.e. even if I individually beat the average, I'm still on the road with others who don't).
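As a back-of-the-envelope sketch of that point (every rate below is hypothetical, and the 50/50 split of two-car collision risk is an assumption made only to show the network effect):

    HUMAN_AVG = 1.0   # average human crash rate, in arbitrary units
    MY_RATE = 0.5     # hypothetical: I drive at half the average risk
    AI_RATE = 0.8     # hypothetical: the AI fleet beats the average human

    # Crude assumption: half of my two-car collision risk comes from me,
    # half from whoever I share the road with.
    def my_exposure(my_rate, other_rate):
        return 0.5 * my_rate + 0.5 * other_rate

    print(my_exposure(MY_RATE, HUMAN_AVG))  # 0.75 - I drive, others are average humans
    print(my_exposure(MY_RATE, AI_RATE))    # 0.65 - I drive, others switch to the AI
    print(my_exposure(AI_RATE, AI_RATE))    # 0.80 - everyone, including me, uses the AI

In this toy model the better-than-average driver is still better off keeping the wheel even after everyone else switches, which is exactly why "the AI beats the average" isn't the whole story.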

So it gets a little more complicated and I'm also not sure the aversion to relinquishing control is strictly "emotional" (as in the irrational sense). There's something about the potential finality of a failure that goes along with autonomy and agency over one's own life. The idea that a machine could make a mistake that ends your life, and you never had a chance or say in that outcome is off-putting in ways that feel more rooted in rationality and survival than in emotion.


>I hear these complaints and can't see how this is worse than the pre-AI situation. How is an AI "hallucination" different from human-generated works that are just plain wrong, or otherwise misleading?

With humans there is a chance you get things right.


> How is an AI "hallucination" different from human-generated works that are just plain wrong, or otherwise misleading?

yikes, mate, you've really misunderstood what's happening.

when a human fucks up, a human has fucked up. you can appeal to them, or to their boss, or to their CEO.

the way these crappy "AI" systems are being deployed, there is no one to appeal to and no process for unfucking things.

yes, this is not exactly caused by AI, it's caused by sociopaths operating businesses and governments, but the extent to which this has enabled them and their terrible disdain for the world is horrifying.

this is already happening, of course - Cathy O'Neil wrote "Weapons of Math Destruction" in 2016, about how unreviewable software systems were screwing people, from denying poor people loans to imposing harsher sentences on minority groups - but Sam Altman and the new generation of AI grifters now want this to apply to everything.



