
> statistical auto-complete

Yes, but it's a hugely almost unimaginably complicated auto-complete model. And it turns out that a lot of human reasoning is statistically predictable enough in writing that you can actually obtain reasoning-like behavior just by having a good auto-complete model.

You shouldn't trivialize how amazingly well it works, or how surprising it is that it works at all, just because it doesn't work in every case.

Literally the whole point of TFA is to explore how this phenomenon of something-like-reasoning arises out of a sufficiently huge autocomplete model.




> And it turns out that a lot of human reasoning is statistically predictable enough in writing that you can actually obtain reasoning-like behavior just by having a good auto-complete model.

I would disagree with this on a technicality that changes the conclusion. It's not that human reasoning is statistically predictable (though it may be); it's that all of the writing that has ever described human reasoning, on an unimaginable number of topics, is statistically summarizable. A good auto-complete model therefore does a good job of describing human reasoning that has already been described, at least combinatorially, across its various sources.

We don't have direct access to anyone else's reasoning. We infer their reasoning by seeing/hearing it described, then we fill in the blanks with our own reasoning-to-description experiences. When we see a model that's great at mimicking descriptions of reasoning, it triggers the same inferences, and we conclude similar reasoning must be going on under the hood. It's like the ELIZA Effect on steroids.

It might be the case that neural networks could theoretically, eventually reproduce the same kind of thinking we experience. But I think it's highly unlikely it'd be a single neural network trained on language, especially given the myriad studies showing the logic and reasoning capabilities of humans that are distinct from language. It'd probably be a large number of separate models trained on different domains that come together. At that point though, there are several domains that would be much more efficiently represented with something other than a neural network model, such as the modeling of physics and mathematics with equations (just because we're able to learn them with neurons in our brains doesn't mean that's the most efficient way to learn or remember them).

While a "sufficiently huge autocomplete model" is impressive and can do many things related to language, I think it's inaccurate to claim they develop reasoning capabilities. I think of transformer-based neural networks as giant compression algorithms. They're super lossy compression algorithms with super high compression ratios, which allows them to take in more information than any other models we've developed. They work well, because they have the unique ability to determine the least relevant information to lose. The auto-complete part is then using the compressed information in the form of the trained model to decompress prompts with astounding capability. We do similar things in our brains, but again, it's not entirely tied to language; that's just one of many tools we use.


> We don't have direct access to anyone else's reasoning. We infer their reasoning by seeing/hearing it described, then we fill in the blanks with our own reasoning-to-description experiences. When we see a model that's great at mimicking descriptions of reasoning, it triggers the same inferences, and we conclude similar reasoning must be going on under the hood. It's like the ELIZA Effect on steroids.

I don't think we know enough about how these things work yet to conclude that they are definitely not "reasoning" in at least a limited subset of cases, in the broadest sense wherein ELIZA is also "reasoning" because it's following a sequence of logical steps to produce a conclusion.

Again, that's the point of TFA: something in the linear algebra stew does seem to produce reasoning-like behavior, and we want to learn more about it.

What is reasoning if not the ability to assess "if this" and conclude "then that"? If you can do it with logic gates, who's to say you can't do it with transformers or one of the newer SSMs? And who's to say it can't be learned from data?
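As a trivial sketch of the logic-gate point (plain Python standing in for actual gates, purely illustrative):

    def nand(a, b):
        # A universal gate: everything below is built from it alone.
        return not (a and b)

    def implies(p, q):
        # "if p, then q" as material implication, (not p) or q, using only NAND.
        return nand(p, nand(q, q))

    assert implies(True, True) and implies(False, True) and implies(False, False)
    assert not implies(True, False)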

In some sense, ELIZA was reasoning... but only within a very limited domain. And it couldn't learn anything new.
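To make "a very limited domain" concrete, here's a minimal sketch in the spirit of ELIZA's keyword rules (Python, with made-up patterns rather than Weizenbaum's actual DOCTOR script):

    import re, random

    # A few made-up keyword rules in the spirit of ELIZA's script
    # (illustrative only, not the original rules).
    RULES = [
        (r"\bI need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"\bI am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (r"\bbecause (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
    ]
    FALLBACK = ["Please tell me more.", "I see. Go on."]

    def respond(utterance):
        # The program's entire "reasoning" is this fixed
        # if-this-pattern, then-that-template lookup.
        for pattern, templates in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(FALLBACK)

    print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"

Its "reasoning" begins and ends at whatever rules were typed in, which is exactly why it never generalizes or learns.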

> It might be the case that neural networks could theoretically, eventually reproduce the same kind of thinking we experience. But I think it's highly unlikely it'd be a single neural network trained on language, especially given the myriad studies showing the logic and reasoning capabilities of humans that are distinct from language. It'd probably be a large number of separate models trained on different domains that come together.

Right, I think we agree here. It seems like we're hitting the top of an S-curve when it comes to how much information the transformer architecture can extract from human-generated text. To progress further, we will need different inputs and different architectures / system designs, e.g. something that has multiple layers of short- and medium-term working memory, the ability to update and learn over time, etc.

My main point is that while yes, it's "just" super-autocomplete, we should consider it within the realm of possibility that some limited form of reasoning might actually be part of the emergent behavior of such an autocomplete system. This is not AGI, but it's both suggestive and tantalizing. It is far from trivial, and it greatly exceeds what anyone expected to be possible just 2 years ago. If nothing else, I think it tells us that maybe we do not understand the nature of human rationality as well as we thought we did.


> What is reasoning if not the ability to assess "if this" and conclude "then that"?

A lot of things. There are entire fields of study which seek to define reasoning, breaking it down into areas that include logic and inference, problem solving, creative thinking, etc.

> If you can do it with logic gates, who's to say you can't do it with transformers or one of the newer SSMs? And who's to say it can't be learned from data?

I'm not saying you can't do it with transformers. But what's the basis of the belief that it can be done with a single transformer model, and one trained on language specifically?

More specifically, the papers I've read so far that investigate the reasoning capabilities of neural network models (not just LLMs) seem to indicate that they're capable of emergent reasoning about the rules governing their input data: for example, being able to reverse-engineer equations (and not just approximations of them) from input/output pairs. Extending these studies would suggest that large language models can emergently learn the rules governing language, but not necessarily much beyond that.
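As a toy illustration of what recovering the equation itself, rather than an approximation, means (this uses a brute-force symbolic search, not a neural network, so it only shows the end result those papers describe):

    import itertools

    # Hypothetical input/output pairs produced by a hidden rule, here y = x**2 + 3*x.
    pairs = [(x, x**2 + 3*x) for x in range(-5, 6)]

    # Search small integer coefficients for y = a*x**2 + b*x + c and keep only a
    # candidate that matches every pair exactly (the equation itself, not a fit
    # that merely comes close on the sampled points).
    for a, b, c in itertools.product(range(-3, 4), repeat=3):
        if all(a * x**2 + b * x + c == y for x, y in pairs):
            print(f"recovered rule: y = {a}*x**2 + {b}*x + {c}")
            break

The claim in those papers is that a trained network's internals end up encoding something equivalent to the exact rule, rather than a curve that merely fits the sampled points.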

It makes me think of two anecdotes:

1. How many times have you heard someone say, "I'm a visual learner"? They've figured out for themselves that language isn't necessarily the best way for them to learn concepts to inform their reasoning. Indeed, there are many concepts that language is entirely inefficient, if not insufficient, at conveying. The world's shortest published research paper is proof of this: https://paperpile.com/blog/shortest-papers/.

2. When I studied in school, I noticed that for many subjects and tests, sufficient rote memorization became indistinguishable from actual understanding. Conversely, better understanding of underlying principles often reduced the need for rote memorization. Taken to the extreme, there are many domains for which sufficient memorization makes actual understanding and reasoning unnecessary.

Perhaps the debate on whether LLMs can reason is a red herring, given that their ability to memorize surpasses any human's by many orders of magnitude. Perhaps this is why they seem able to reason, especially given that our only indication so far is the language they output. The most useful use cases are typically those that trigger our own reasoning more efficiently, rather than those that rely on the model's (which may not exist).

I think the impressiveness of their capabilities is precisely what makes exaggeration unnecessary.

Saying LLMs develop emergent logic and reasoning, I think, is a stretch. Saying it's "within the realm of possibility that some limited form of reasoning might actually be part of the emergent behavior" sounds more realistic to me, though rightly less sensational.

EDIT:

I also think it's fair to say that the ELIZA program had whatever limited amount of reasoning was programmed into it. However, the point of the ELIZA study was that it showed people's tendency to overestimate the amount of reasoning happening, based on their own inferences. This is significant because it causes us to overestimate the generalizability of the program, which can lead to unintended consequences as reliance on it increases.



