Hacker News

>A doctor for every person, a teacher for every child, available any time and for free.

I'm sorry... I'm supposed to trust my healthcare and child's education to a piece of software whose primary feature is its ability to effectively hallucinate and tell convincing lies?

And assuming AI is at all effective, which implies valuable (and therefore lucrative), do you really expect services built on it to remain free?

That's not how anything works in the real world.




No? It's exactly how everything has worked so far.

Live performances (orchestras and operas) were for the rich only. Beautiful paintings were for nobles and churches. Porcelain had to be imported from another continent. Tropical fruits were so expensive that people rented them.

Now we have affordable versions of them for everyone in developed countries, and for the middle class in developing ones. Yes, often we just got inferior, machine-made or digital copies, but I personally prefer something inferior to nothing.


You're comparing AI versus a human being with the knowledge and skill needed to earn a medical degree to hearing Mozart live or seeing the Mona Lisa in person versus YouTube and JPEGs, and that's your argument in favor of AI?

>but I personally prefer something inferior to nothing.

Say that again when your AI physician prescribes you the wrong medication because it hallucinated your medical history.


> You're comparing AI versus a human being with the knowledge and skill needed to earn a medical degree to hearing Mozart live or seeing the Mona Lisa in person versus YouTube and JPEGs, and that's your argument in favor of AI?

Yes, and I think it's a pretty good analogy.

> Say that again when your AI physician prescribes you the wrong medication because it hallucinated your medical history.

I personally prefer something inferior to nothing. I just said it again.

When your human doctor prescribes the wrong medication, would you reach the conclusion that the world would be better without human doctors?

The fact is simple: professional diagnosis is such a scarce resource that people buy over-the-counter drugs all the time. It's not AI vs. doctors; it's AI vs. no doctor.


When a human doctor prescribes the wrong medication, it's a mistake. One doesn't conclude the world would be better without human doctors, because human beings are capable of thought, memory, perception, and awareness, and when they don't make mistakes (and most don't, most of the time) it's the result of training and talent.

Meanwhile, AIs don't possess anything akin to thought, memory, perception or awareness. They simply link text tokens stochastically. When an AI makes a mistake, it's doing exactly what it's designed to do, because AIs have no concept of "reality" or "truth." Tell an AI to prescribe medication and it has no idea what "medication" is, or what a human is. When an AI doesn't make a mistake, it's entirely by coincidence. Yet humans are so hardwired with pareidolia and gaslit by years of science fiction that such a simple hat trick leads people to want to trust their entire lives to these things.

>The fact is simple. Professional diagnosing is such a scarce resource that people buy over-the-counter drugs all the time. It's not AI vs doctors; it's AI vs no doctor.

That's not a fact, it's your opinion, and I'm assuming you've got some interest in a startup along these lines or something, because I honestly cannot fathom your rationale otherwise. You're either shockingly naive or else you have a financial stake in putting poor people's lives in the hands of machines that can't even be trusted to count the number of fingers on a human hand.

I have no doubt the future you want is going to happen, and I have no doubt we're all going to regret it. At least I'm old enough that I'll probably be dead before the last real human doctor is put out to pasture.



> AI physician prescribes you the wrong medication because it hallucinated your medical history.

The big question is: will that happen more or less often than it does with a human doctor? Human doctors 'hallucinate' stuff all the time, due to lack of sleep, lack of time, lack of education and/or just not caring enough to pay proper attention to what they are doing.


>Human doctors 'hallucinate' stuff all the time, due to lack of sleep, lack of time, lack of education and/or just not caring enough to pay proper attention to what they are doing.

No, they don't. If that happened anywhere near all the time, we would never have given up alchemy and bloodletting, because there would be no reason to trust medicine at all, and yet it works overwhelmingly well most of the time for most people. Meanwhile, AIs hallucinate by design.


> what will LLMs ever do for us?

Hallucinations are an engineering problem and can be solved. Compute per dollar is still growing exponentially. Eventually this technology will be widely proliferated and cheap to operate.


> Hallucinations are an engineering problem and can be solved.

I'd like a little more background on that claim.

As far as I've been able to tell from my understanding of LLMs, everything they create is a hallucination. It's just a case of "text that could plausibly come next based on the patterns of language they were trained on". When an LLM gets stuff correct, that doesn't make it not a hallucination; it's just that enough correct material was in the training data that a fair number of its hallucinations turn out to be correct. Meanwhile, the LLM has no concept of "true" or "false" or "reality" or "fiction".

There's no meta-cognition. It's just "what word probably comes next?" How is that just "an engineering problem [that] can be solved"?
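The "what word probably comes next?" point can be illustrated with a deliberately tiny sketch (a toy bigram model, not a real LLM; the corpus, names, and medical claims here are all made up for illustration): the sampler only knows which token tends to follow which, so a true statement and a false one that share the same surface pattern are equally "plausible" to it.

```python
import random
from collections import defaultdict

# Toy "training corpus" mixing a true claim and a false claim
# that share the same surface pattern (illustrative only).
corpus = (
    "aspirin treats headaches . "
    "aspirin treats fractures . "
    "doctors prescribe aspirin"
).split()

# Count which token follows which (a bigram table).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=4, seed=0):
    """Emit the statistically plausible next token; no notion of truth."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# "aspirin treats headaches" and "aspirin treats fractures" are
# equally likely continuations; nothing in the model flags one as false.
print(generate("aspirin"))
```

The model happily completes "aspirin treats ..." with either "headaches" or "fractures", because both occurred in training; there is no separate channel anywhere that marks one continuation as true and the other as a hallucination.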


I agree it's more than a simple engineering challenge, but only because it is not entirely clear whether even humans avoid this issue, or whether we merely minimise it.

We're full of seemingly weird cognitive biases: roll a roulette wheel in front of people before asking them what percentage of African countries are in the UN, and their answers correlate with the number on the wheel.

Most of us judge the logical strength of arguments by how believable the conclusion is, by repetition, and by rhyme; worse, knowledge of cognitive biases doesn't help, as we tend to use that knowledge to dismiss conclusions we don't like rather than to test our own.


How is that bias weird? It has a straightforward explanation: the visual system has an effect on reasoning. This, like other human biases, can be analyzed to understand its underlying causes, and consequently mitigated. LLM output has no discernible pattern to it; you cannot tell at all whether what it's saying is true or not.


> How is that bias weird?

People can see a random number that they know is random, and yet be influenced by it when estimating facts.

> LLM output has no discernible pattern to it; you cannot tell at all whether what it's saying is true or not.

LLMs are the pattern. That's a separate axis from "is it true?"


Are they not an inherent problem with the LLM technology?


That's what happened with the internet, which was supposed to be the new Library of Alexandria, educating the world, liberating the masses from the grip of corporate ownership of data and government surveillance, and enabling free global communication and publishing.

It's almost entirely shit now. Instead of being educated, people are manipulated into bubbles of paranoid delusion and unreality, fed by memes and disinformation. Instead of liberation from corporate ownership, everything is infested with dark patterns, data mining, advertising, DRM and subscriptions. You will own nothing and be happy. Instead of liberation from government, the internet has become a platform for government surveillance, propaganda and psyops. Everyone used to have personal webpages and blogs; now everything is locked into algorithmically driven social media silos, gatekeeping content unless it drives addiction, parasociality or clickbait. What little remains on the internet that's even worth anyone's time is all but impossible to find, and will succumb to the cancer in due time.

LLMs will go the same way, because there is no other way for technology to go. Everything will be corrupted by the capitalist imperative, everything will be debased by the tragedy of the commons, every app, service and cool new thing will claw its way down the lobster bucket of society, across our beaten and scarred backs, to find the lowest common denominator of value and suck the marrow from its bones.

But at least I'll be able to run it on a cellphone. Score for progress?



