Well, yes, absolutely. You could say something similar about any system with complex emergent behaviour: 'all computers can do is NAND operations, and any other ability is just a side effect', or something.

However, I do think that in this case it's meaningful. The claim isn't that LLMs are genuinely exhibiting reasoning ability; I think it's quite clear to anyone who probes them for long enough that they're not. I was fooled initially too, but you soon come to realise it's a clever trick (albeit not one contrived by any of the human designers themselves). The claim is usually the pseudo-philosophical one that the very definition of reasoning is simply 'outputting (at least some of the time) correct sentences', and so there's no more to be said. But this is just silly. It's quite obvious that being able to manipulate language, with access to a vast (fuzzily encoded) database of knowledge, will let you output true and pertinent statements a lot of the time. But this doesn't require reasoning at all.

Note that I'm not claiming that LLMs exhibit reasoning and other abilities 'as a side effect' of language manipulation ability; I'm claiming there's no reason to believe they have these abilities at all based on the available evidence. Humans are just very easily convinced by beings that seem to speak our language, and are overly inclined to attribute all sorts of desires, internal thought processes and whatever else for which there is no evidence.




> I think it's quite clear to anyone who probes them for long enough that they're not.

I disagree, and so do a lot of people who've used them for a long while. This is just an assertion that you wish to be true rather than something that actually is. What happens is that, for some bizarre reason, lots of humans hold machines to a standard of reasoning that only exists in fiction. Devise any reasoning test you like that would cleanly separate humans from LLMs. I'll wait.

> The claim is usually some pseudo-philosophical claim that the very definition of reasoning is simply 'outputting (at least some of the time) correct sentences' and so there's no more to be said.

There is nothing philosophical or pseudo-philosophical about saying reasoning is determined by output. If anything, the opposite is what's philosophical nonsense. The idea that there exists some "real" reasoning that humans perform and "fake" reasoning that LLMs perform, and yet somehow no testable way to distinguish the two, is purely the realm of fiction and philosophy. If you're claiming a distinction that doesn't actually distinguish, you're just making stuff up.

LLMs clearly reason. They do novel things that no sane mind would see a human do and call anything other than reasoning. They do things that are impossible to describe as anything else unless you subscribe to what I like to call statistical magic - https://news.ycombinator.com/item?id=41141118

And all things considered, LLMs are pretty horrible memorizers. Getting one to regurgitate training data is actually really hard. There's no database of knowledge. It clearly does not work that way.


> Devise any reasoning test you like that would cleanly separate humans from LLMs. I'll wait.

Well, you don’t have to wait. Just ask basic questions about undergraduate mathematics, perhaps phrased in slightly out-of-distribution ways. The model fails spectacularly almost every time, and it quickly becomes apparent that the ‘understanding’ present is very surface-level and deeply tied to the patterns of the words themselves rather than the underlying ideas. Which is hardly surprising, and not intended as some sort of insult to the engineers; frankly, it’s a miracle we can do so much with such a relatively primitive system (one that was originally only designed for translation anyway).

The standard response is something about how ‘you couldn’t expect the average human to be able to do that so it’s unfair!’, but for a machine that has digested the world’s entire information output and is held up as being ‘intelligent’, this really shouldn’t be a hard task. Also, it’s not ‘fiction’ — I (and many others) can answer these questions just fine and much more robustly, albeit given some time to think. LLM output in comparison just seems random and endlessly apologetic. Which, again, is not surprising!

If you mean ‘separate the average human from LLMs’, there probably are examples that will do this (although they quickly get patched when found) — take the by-now-classic 9.9 vs 9.11 fiasco. Even if there aren’t, though, you shouldn’t be at all surprised (or impressed) that the sum of pretty much all human knowledge ever + hundreds of millions of dollars worth of computation can produce something that can look more intelligent than the average bozo. And it doesn’t require reasoning to do so — a (massive) lookup table will pretty much do.
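To make the lookup-table point concrete, here’s a toy Python sketch (purely illustrative, not a claim about how any real model is implemented; the stored question/answer strings are invented for the example). Exact-match retrieval over memorised text can emit perfectly correct sentences with zero reasoning, and a slight rephrasing exposes it:

    # Toy lookup-table 'answerer': correct-sounding output with no reasoning.
    # The memorised Q/A pairs below are made up for this example.
    memorised = {
        "which is larger, 9.9 or 9.11?": "9.9 is larger than 9.11.",
        "what is the derivative of x^2?": "The derivative of x^2 is 2x.",
    }

    def answer(question: str) -> str:
        # Exact string match only; no understanding of the question.
        return memorised.get(question.strip().lower(), "I'm not sure.")

    print(answer("Which is larger, 9.9 or 9.11?"))  # right answer, zero reasoning
    print(answer("Is 9.11 bigger than 9.9?"))       # slight rephrasing: falls over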

> There is nothing philosophical or pseudo-philosophical about saying reasoning is determined by output.

I don’t agree. ‘Reasoning’ in the everyday sense isn’t defined in terms of output; it usually refers to an orderly, sequential manner of thinking whose process can be described separately from the output it produces. Surely you can conceive of a person (or a machine) that can produce what sounds like the output of a reasoning process without doing any reasoning at all. Reasoning is an internal process.

Honestly — and I don’t want to sound too rude or flippant — I think all this fuss about LLMs is going to look incredibly silly when in a decade or two we really do have reasoning systems. Then it’ll be clear how primitive and bone-headed the current systems are.



