I don't think we really disagree. This is what I wrote above:
"So depending how you define it, they might have some "reasoning", but so far I see 0 indications, that this is close to what humans count as reasoning."
What we disagree on is only the definition of "reason".
For me "reasoning" in common language implys reasoning like we humans do. And we both agree, they don't as they don't understand, what they are talking about. But they can indeed connect knowledge in a useful way.
So you can call it reasoning, but I still won't, as I think this terminology brings false impressions to the general population, which unfortunately yes, is also not always good at reasoning.
There are definitely people out there who think LLMs reason the same way we do, understand things the same way, and 'know' what paint is and what a wall is. That's clearly not true. However, an LLM does understand the linguistic relationship between them, and a lot of other things, and it can reason about those relationships in some very interesting ways. So yes, absolutely, details matter.
It's a complex and tricky issue, and everyday language is vague and easy to interpret in different ways, so it can take a while to hash these things out.
"It's a complex and tricky issue, and everyday language is vague and easy to interpret in different ways, so it can take a wile to hash these things out."
Yes, in another context I would say ChatGPT can reason better than many people, since it scored very high on the SAT, making it formally smarter than most humans.
"So depending how you define it, they might have some "reasoning", but so far I see 0 indications, that this is close to what humans count as reasoning."
What we disagree on is only the definition of "reason".
For me "reasoning" in common language implys reasoning like we humans do. And we both agree, they don't as they don't understand, what they are talking about. But they can indeed connect knowledge in a useful way.
So you can call it reasoning, but I still won't, as I think this terminology brings false impressions to the general population, which unfortunately yes, is also not always good at reasoning.