
Even the most intelligent people can hallucinate; we still haven't fixed that problem. There's a lot of training material and bias that leads many to repeat things like "LLMs are just stochastic parrots, glorified autocomplete, a fancier Google search, Markov chains, just statistics," etc. The thing is, these sentences sound really good, so it's easy to repeat them once you've made up your mind. It's a shortcut.





I feel like at this point we have to separate LLMs and reasoning models too.

I can see the argument that ChatGPT-4 isn't really reasoning.

The reasoning models, though, get us into some confusing terminology, but I don't know what else you would call it.

If you say a car is not "running" the way a human runs, you are not incorrect, even though a car can obviously "outrun" any human in terms of speed over the ground.

But to say that since a car can't "run," it can't move is completely absurd.


This was precisely what motivated Turing to come up with the test named after him, to avoid such semantic debates. Yet here we are still in the same loop.

"The terminator isn't really hunting you down, it's just imitating doing so..."


LLMs don’t go into a different mode when they are hallucinating. That’s just how they work.

Using the word “hallucinate” is extremely misleading, because it’s nothing like what people do when they hallucinate (perceiving sensory input that isn’t there).

It’s much closer to confabulation, which in humans is extremely rare and is usually the result of brain damage.

This is why a big chunk of people (myself included) think current LLMs are fundamentally flawed. Something with a massive database that statistically confabulates correct stuff 95% of the time, and has no clue when it’s completely made up, is not anything like intelligence.

Compressing all of the content of the internet into an LLM is useful and impressive. But these things aren’t going to start doing any meaningful science or even engineering on their own.


Intelligent people do not "hallucinate" in the same sense that an LLM does. Counterarguments you don't like aren't "shortcuts". There are certainly obnoxious anti-LLM people, but you can't use them to dismiss everyone else.

An LLM does nothing more than predict the next token in a sequence. It is functionally autocomplete. It hallucinates because it has no concept of a fact. It has no "concept," period; it cannot reason. It is a statistical model. The "reasoning" you observe in models like o1 is a neat prompting trick that lets it generate more context for itself.
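To make "predict the next token" concrete, here is a minimal greedy-decoding sketch (assuming the Hugging Face transformers library, with gpt2 and the prompt purely as stand-ins). At each step the model scores every possible next token and the single most likely one is appended; that loop is the whole mechanism, whether the output happens to be true or made up:

    # Minimal greedy next-token loop; "gpt2" is just a stand-in causal LM.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):                     # emit 20 more tokens
            logits = model(ids).logits[0, -1]   # scores for the next token only
            next_id = torch.argmax(logits)      # take the single most likely token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append and repeat
    print(tok.decode(ids[0]))

Real deployments sample from the distribution instead of taking the argmax, but nothing in the loop checks the output against facts.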

I use LLMs on a daily basis. I use them at work and at home, and I feel that they have greatly enhanced my life. At the end of the day they are just another tool. The term "AI" is entirely marketing preying on those who can't be bothered to learn how the technology works.


They’re right until they’re wrong.

AI is (was?) a stochastic parrot. At some point AI will likely be more than that. The tipping point may not be obvious.


> Even the most intelligent people can hallucinate; we still haven't fixed that problem.

No, we have not; neurodiverse people like me need accommodations, not fixing.


It is not hallucination. When people do the thing we call "hallucination" in ChatGPT, it is called "bullshitting," "lying," or "being incompetent."




