
> Likewise the view that human minds are the same kind of things as LLMs cannot be disproven until we have a sufficient understanding of how human minds work

This seems pretty backwards to me. Why should this speculative view need to be disproven rather than proven?

Sure, LLMs do some things kind of like some things human minds can do. But if you put that on a Venn diagram, the overlap would be minuscule.

There's also the plain observation that LLMs are made of silicon and human minds are made of neurons. From this you might reasonably start with the assumption that they are in fact extremely different, and the counterclaim is the one needing evidence!




I'm with you in feeling that it's highly obvious that a human mind is a different sort of thing entirely to an LLM (and in more than just the trivial sense that it's implemented in wet stuff rather than dry), but plenty of people respond to "it's just a statistical model" with "well so are you", so the opposite view seems equally obvious to them. All I'm referring to is what the standard of "proof" is and that shouldn't be different for different sides of the debate.

If we can therefore agree that it is not currently possible for either side to irrefutably prove the other side is wrong, then the discussion needs to be of a rather different nature if it is to have any likelihood of changing anybody's mind.


I don't think there needs to be a false equivalence here. It's easy to prove that an LLM is a statistical model, since we know how they are implemented, don't we? Where's the equivalent proof that a human is a statistical model?

I guess where this goes is that we don't know for sure that intelligence, or even sentience, can't emerge from a statistical model like an LLM. Which I think is a fair statement. But you can't work backwards from there to say humans and LLMs are similar!


I think you're underestimating how radical (I would say nihilistic) a viewpoint has become common among those who claim LLMs are AGI, or at least show "sparks" of it. Many of them claim that there isn't anything to emerge that hasn't already emerged. Their claim is often that there is no "intelligence" or "sentience" or "understanding" or "consciousness" distinct from the behaviours already displayed by existing statistical models. They claim that these things are illusions, that many of the philosophical questions relating to them are simply ill-posed, and that the only differences between existing computational systems and human minds are ones of degree and of implementation specifics.

It is this view that I am acknowledging cannot currently be disproved just as one cannot currently disprove the idea that these things are real, distinct phenomena that statistical models do not manifest.

Again, I personally fall very firmly on the latter side of this debate. I'm just acknowledging what can and cannot currently be proved, and there is a genuine symmetry there. This debate is not new, and the existence of LLMs does not settle it one way or another.

Edit: And re burden of proof - this isn't a court case. It's about what you can hope for in a discussion with someone you disagree with. If you can't absolutely prove someone is wrong then it's pointless to try and do so. You need to accept that you are trying to persuade, not prove/disprove. If you are both arguing positions that are currently unfalsifiable you both need to accept that or the debate will go nowhere. Or, if you think you have a proof one way or another, present that and its validity can be debated. And if so, you need to be honest about what constitutes proof and cannot reasonably be disputed. Neither "my LLM emitted this series of responses to this series of prompts" nor "I can directly perceive qualia and therefore know that consciousness is real" counts.


>Many of them claim that there isn't anything to emerge that hasn't already emerged. Their claim is often that there is no "intelligence" or "sentience" or "understanding" or "consciousness" distinct from the behaviors already displayed by existing statistical models.

Woah boy, I detect mischief in this portrayal of who is claiming what. Two questions to separate out here: (1) can computers ever, in principle, do all the special kinds of things that human minds can do, and (2) do LLMs as of 2024 do any of those truly special things?

But also a separate thing to untangle, which is whether (A) LLMs do anything intelligently given the various possible meanings of the term, and (B) whether they are conscious or sentient or are AGI, which is a whole other ballgame entirely.

I think there's mischief here because I do think it's nuts to say LLMs right now have any special spark, but you seem to want to make proponents of (1) answer for that belief, which I think is not very fair to that argument. I think it's rare to find respectable proponents of (1) in the wild who would say that. More often you find people like that Google engineer who got fired.

And I think there's mischief here because it's one thing to say LLMs do (A) and another to say they do (B), and I think you can reasonably say (A) without going off the deep end. And I think blending (A) and (B) together is again trying to make (1) answer for a crazy argument.


I'm talking about the people one encounters on hackernews, who are the ones I personally am most likely to actually debate these things with. I gave a specific example, the "and so are you" rejoinder to "it's just a statistical model" which, pithy though it is, implies all that I said. There are numerous examples of much lengthier and more explicit statements of that position to be found in any topic related to AI, consciousness, the recent death of Daniel Dennett, etc. If you don't hold that position then don't consider yourself among those I'm referring to, but its prevalence (or at least loudness) on HN can't be denied.

There's no "mischief" and I somewhat resent the suggestion. I haven't attempted to even argue with the viewpoint I describe, only to point out that it (and therefore also softer positions related to it) cannot currently be disproved so attempting to argue with it from a position of assuming it's irrefutably wrong is a non-starter, not that that stops many people from trying to argue with it in exactly that way.

I was trying to point out why it seems difficult and tiring to have a grown up discussion about this stuff, not misrepresent anyone's opinion.


> And re burden of proof - this isn't a court case. It's about what you can hope for in a discussion with someone you disagree with

This was a great addendum, thanks for this reminder.



