
"Or just give it a lump of code and change you want and see that it often successfully does so, even when there's no chance the code was in the training set"

I did not claim (though my wording above might have been bad) that it can only repeat, word for word, what is in its training set.

But I do claim that it cannot solve anything for which there have not been enough similar examples before.

At least that has been my experience with it as a coding assistant, and it matches what I understand of the inner workings.

Apart from that, is an automatic door doing reasoning because it applies "reason" to known conditions?

if (somethingOnIrSensor()) openDoor();
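Spelled out a little (a toy sketch in C; the sensor and actuator names are made up), the door's whole "decision" is one fixed rule that a human wrote down in advance:

  #include <stdbool.h>
  #include <stdio.h>

  /* made-up stand-ins for the hardware */
  static bool somethingOnIrSensor(void) { return true; }
  static void openDoor(void)            { printf("door opens\n"); }

  int main(void) {
      /* the controller's entire "reasoning": one fixed condition */
      if (somethingOnIrSensor())
          openDoor();
      return 0;
  }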

I don't think so, and from what I have seen so far, neither are LLMs. That doesn't mean I think they are not useful, or that I rule out that they could even develop consciousness.




It sounds like you’re saying it’s only reasoning in that way because we taught it to. Er, yep.

How great this is becomes apparent when you think how virtually impossible it has been to teach this sort of reasoning using symbolic logic. We’ve been failing pathetically for decades. With LLMs you just throw the internet at it and it figures it out for itself.

Personally I’ve been both in awe and also skeptical about these things, and basically still am. They’re not conscious, they’re not yet close to being general AIs, they don’t reason in the same way as humans. It is still fairly easy to trip them up and they’re not passing the Turing test against an informed interrogator any time soon. They do reason though. It’s fairly rudimentary in many ways, but it is really there.

This applies to humans too. It takes many years of intensive education to get us to reason effectively. Solutions that in hindsight are obvious, that children learn in the first years of secondary school, were incredible breakthroughs by geniuses still revered today.


I don't think we really disagree. This is what I wrote above:

"So depending how you define it, they might have some "reasoning", but so far I see 0 indications, that this is close to what humans count as reasoning."

What we disagree on is only the definition of "reason".

For me "reasoning" in common language implys reasoning like we humans do. And we both agree, they don't as they don't understand, what they are talking about. But they can indeed connect knowledge in a useful way.

So you can call it reasoning, but I still won't, as I think this terminology gives false impressions to the general population, which, unfortunately, is also not always good at reasoning.


There are definitely some people out there who think LLMs reason the same way we do, understand things the same way, and 'know' what paint is and what a wall is. That's clearly not true. However, it does understand the linguistic relationship between them, and a lot of other things, and can reason about those relationships in some very interesting ways. So yes, absolutely, details matter.

It's a complex and tricky issue, and everyday language is vague and easy to interpret in different ways, so it can take a while to hash these things out.


"It's a complex and tricky issue, and everyday language is vague and easy to interpret in different ways, so it can take a wile to hash these things out."

Yes, in another context I would say ChatGPT can reason better than many people, since it scored very high on the SAT, formally making it smarter than most humans.


OpenAI probably loaded up the training set with logic puzzles. Great marketing.


Sure thing, they also address this in the paper.

https://cdn.openai.com/papers/gpt-4.pdf

Still, it is great marketing, because it is impressive.


Since it genuinely seems to have generalised those logical principles and can apply them to novel questions, I’d say it’s more than just marketing.



