This overlooks how they do it. We don't really know: it might be logical reasoning, or it might be a very efficient content-addressable lookup table of human knowledge in a blob of numbers. It doesn't matter, because they work, sometimes scarily well. Dismissing their abilities because they 'don't reason' is missing the forest for the trees; they'd arguably be capable of reasoning if they could run SAT solvers on their output mid-generation.
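To make that last bit concrete, here's a minimal sketch of what "run a SAT solver on the output mid-generation" could look like, using the real python-sat package. The claim encoding and the generation hook are purely hypothetical illustration, not any existing LLM integration:

    # Hypothetical sketch: mid-generation, the model's asserted claims are
    # encoded as propositional clauses and checked by a SAT solver before
    # generation continues. Requires: pip install python-sat
    from pysat.solvers import Glucose3

    def check_claims(clauses):
        """Return (satisfiable?, model) for DIMACS-style clauses (lists of ints)."""
        with Glucose3(bootstrap_with=clauses) as solver:
            if solver.solve():
                return True, solver.get_model()
            return False, None

    # Suppose the model has asserted, mid-generation:
    #   var 1: "the function is pure"    var 2: "the function logs to disk"
    # with a background axiom "pure implies no side effects" as (-1 or -2).
    clauses = [
        [1],        # asserted: function is pure
        [2],        # asserted: function logs to disk
        [-1, -2],   # axiom: it can't be both
    ]

    sat, model = check_claims(clauses)
    if not sat:
        # A real system might feed this back into the context and regenerate.
        print("contradiction detected; steer generation away from the last claim")

The point of the sketch is just that the solver, not the next-token sampler, is what catches the contradiction; the hard, unsolved part is getting faithful clauses out of free-form text.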
Dismissing claims that LLMs "reason" on the grounds that these machines perform no actions resembling reasoning seems pretty motivated. And I don't think "blindly take input from a reasoning-capable system" counts as reasoning.
Does it? I think Blindsight (the Peter Watts novel) had a good commentary on reasoning being something we assume is a conscious process but doesn't have to be.
I think most people talking past each other are really arguing about whether the GPT is conscious, has a mental model of self, that kind of thing. As long as your definition of reasoning doesn't include consciousness, it clearly does reason (though not well).