> But there is no "reasoning" occurring here - just syntactic templating and statistical auto-complete.

I don't know why people continue to be so sure that "reasoning" is not some form of syntactic templating and statistical auto-complete.




Well, we don't have a full understanding of how the brain works, so we can't be entirely sure, but it's clear why they have this intuition:

1) Many people have had to cram for an exam where they didn't have time to fully understand the material. For those parts they memorized as much as they could and got through the exam by pattern matching. But they knew there was a difference, because they knew what it was like to understand something well enough that they could reason about it and play with it in their mind.

2) Crucially, if they understand the key mechanism early, they often don't need to memorize anything (the opposite of LLMs, which need millions of examples).

3) LLMs display the attributes of someone who has crammed for an exam, and when they are probed further [1] they start to break down in exactly the same way a crammer does.

[1] https://arxiv.org/abs/2406.02061


I understand why they intuitively think it isn't. I also think there is probably something more to reasoning. I'm just mystified by why they are so sure it isn't.


Do you mean human reasoning in day-to-day life? Because there are certainly other kinds of reasoning, for example, logic.


Logic is a syntactic formalism that humans often apply imperfectly. That certainly sounds like we could be employing syntactic templating and statistical auto-complete.


I was trying to tease apart whether you were talking about human behavior or the abstract concept of 'reasoning'. The latter is formalized in logic and has parts that are not merely syntactic (with or without stochastic autocomplete).

https://en.wikipedia.org/wiki/Semantics_of_logic
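
To make that distinction concrete, here's a minimal sketch (plain Python; the names are mine, not from any particular textbook) of checking semantic entailment by enumerating truth assignments, i.e. looking at models rather than at the shape of the formulas:

    from itertools import product

    # Formulas are plain Python callables over a dict of truth values.
    # Semantic entailment: premises |= conclusion iff every assignment
    # that makes all premises true also makes the conclusion true.
    def entails(variables, premises, conclusion):
        for values in product([True, False], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if all(p(assignment) for p in premises) and not conclusion(assignment):
                return False  # found a countermodel
        return True

    # Modus ponens is semantically valid: {p, p -> q} |= q
    print(entails(["p", "q"],
                  [lambda a: a["p"], lambda a: (not a["p"]) or a["q"]],
                  lambda a: a["q"]))   # True

    # Affirming the consequent is not: {q, p -> q} does not entail p
    print(entails(["p", "q"],
                  [lambda a: a["q"], lambda a: (not a["p"]) or a["q"]],
                  lambda a: a["p"]))   # False

Nothing there cares how a conclusion was produced syntactically; it only checks whether the conclusion holds in every model of the premises.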


You seem to be confusing logic and proofs with any kind of random rhetoric or syntactically correct opinion which might, in terms of semantics, be total nonsense. If you really don't understand that there's a difference between these things, then there's probably no difference between anything else either, and since things that are indiscernible must be identical, I conclude that I must be you, and I declare myself wrong, thus you are wrong too. Are we enjoying this kind of "reasoning" yet, or do we perhaps want a more solid rock on which to build the church?


I don't know what claim you think you've inferred from my five sentences, but it's really simple. Do you agree or disagree that humans make mistakes in logical deduction?

I certainly hope you agree, in which case it follows that a person's understanding of any proposition, inference, or deduction is only probabilistic, with some certainty less than one. When they believe or make a mistaken deduction, they are going through the motions of applying logic without actually understanding what they're doing, which I suppose you could whimsically call "hallucinating". A person will typically continue to repeat the mistaken deduction until someone corrects them.

So if our only example of "reasoning" seems to share many of the same properties and flaws as LLMs, albeit at a lower rate, and correcting this paragon of reasoning is basically what we also do with LLMs (have them review their own output or check it against another LLM), then this claim to human specialness starts to look a lot like special pleading.
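
To be concrete about the parenthetical: by "have them review their own output or check it against another LLM" I mean a loop roughly like the sketch below, where ask and review are placeholders for whatever model API you happen to be calling (they are not a real library):

    # Rough sketch of a review/cross-check loop. `ask` and `review` are
    # placeholders for one or two language models; any callable that
    # takes a prompt string and returns a string will do.
    def answer_with_review(question, ask, review, max_rounds=3):
        answer = ask(question)
        for _ in range(max_rounds):
            critique = review(
                f"Question: {question}\nProposed answer: {answer}\n"
                "If the answer is correct, reply exactly OK. "
                "Otherwise explain the mistake."
            )
            if critique.strip() == "OK":
                break
            answer = ask(
                f"Question: {question}\nPrevious answer: {answer}\n"
                f"Reviewer's objection: {critique}\nGive a corrected answer."
            )
        return answer

Which is, structurally, not far from asking a person to show their work and having a colleague look it over.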


I haven't made any claim that humans are special. And your claim, in your own words, is that if mistakes are made in logical deduction, the agent involved must ultimately be employing statistical auto-complete? No idea why you would think that, or what else you want to conclude from it, but it's obviously not true. Just consider an agent that inverts every truth value you try to put into its knowledge base and then proceeds as usual with anything you ask it to do. It makes mistakes and has nothing at all to do with probability; therefore some systems that make mistakes aren't LLMs. QED?
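
A toy version of that agent, just to make the point concrete (nothing here is meant as a real system):

    # Deterministic toy agent: it inverts every truth value on the way
    # into its knowledge base, then answers queries from that store.
    # It makes systematic mistakes, and there is no probability anywhere.
    class InvertingAgent:
        def __init__(self):
            self.kb = {}

        def tell(self, proposition, value):
            self.kb[proposition] = not value  # the "bug": negate on ingest

        def ask(self, proposition):
            return self.kb.get(proposition)

    agent = InvertingAgent()
    agent.tell("socrates_is_mortal", True)
    print(agent.ask("socrates_is_mortal"))  # False -- wrong, and deterministic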

Ironically, the weird idea that "all broken systems must be broken in the same way" or even "all broken systems use equivalent mechanics" is exactly the type of thing you get by leaning on a language model that really isn't even trying to understand the underlying logic.


> I haven't made any claim that humans are special

The whole context of this thread is that humans are "reasoning" and LLMs are just statistical syntax predictors, which is "lesser", i.e. humans are special.

> And your claim, in your own words, is that if mistakes are made in logical deduction, that means that the agent involved must ultimately be employing statistical auto-complete?

No, I said humans would be employing statistical auto-complete. The whole point of this argument is to show that the allegedly non-statistical, non-syntactic "reasoning" humans are doing, which supposedly makes them superior to the statistical, syntactic processing LLMs are doing, is mostly a fiction.

> leaning on a language model that really isn't even trying to understand the underlying logic.

You don't know that the LLM is not understanding. In fact, for certain rigorous formal definitions of "understanding", it absolutely does understand something. You can only reliably claim that LLMs don't understand everything as well as some humans do.



