
I think you're making a strawman to argue against. Nowhere above have I claimed that "knowing" requires "consciousness", or "it must be implemented identically to me to count", and in fact I believe neither.

But:

- In this context, following the whole second half of the 20th century in which cognitive science and psychology moved past behaviorism and sought explanations of the _mechanisms_ underlying mental phenomena, a scientific discussion doesn't have to restrict itself to considering only what the LLM says. Neither we nor the LLM are black boxes. Evidence of _how_ we do what we do is part of scientific inquiry.

- But the LLM does _not_ reproduce all the behaviors of an agent with a theory of mind. A two-year-old with a developing theory of mind may try to hide food they don't want to eat. A four-year-old playing hide-and-seek picks locations where they think their play-partner won't look. They take _actions_ appropriate to their goals and context, actions which require consideration of the goals of others. The LLM shows elaborate behaviors in one dimension, in which it has been extensively trained. It has no capacity to do anything else, or even to receive exposure to non-linguistic contexts.

I am in no way arguing that only meat-based minds can "know". I'm saying that the data, training regime and model structure used for LLMs specifically are extremely impoverished, in that we show the model language but no other representation of the things language refers to. Similarly, image-generating AIs know what images look like, but they don't know how bodies or physical objects interact, because they have never been exposed to them. Of _course_ we get LLMs that hallucinate and image-generators that produce messed-up bodies.

On the other hand, there are some pretty cool reinforcement-learning results where agents show what looks like cooperation, develop adversarial strategies, etc. There are experiments where software agents collaboratively invent a language to refer to objects in their (virtual) environment in order to accomplish simple tasks. I think there are a lot of near- and medium-term possibilities coming from multi-modal models (i.e. models trained jointly on related text, images, audio and video) and RL which could yield knowledge of a kind that LLMs simply do not have.




Those are valid points, but I feel they're still skipping something. To me it seems like you're asking "Does it know the same things we know?", to which the obvious answer is no, because it doesn't have all of the senses we have.

Someone who is blind doesn't have a lesser concept of knowing. They might not "know" things in the same way a sighted person does, but that doesn't mean their version of knowing is any less; they just know fewer facts about the world, specifically the visual facts of what things look like. Their "knowing" faculty is equal to someone who sees.

Similarly, someone who is blind and deaf also has the full ability "to know", even if they'll never know things in the visual or auditory spaces.

So my argument is that your premise is wrong: the fact that someone or something has fewer senses doesn't mean its ability to know is any less.

So back to your LLM: the fact that it doesn't exist in the real world does not exclude it from the ability to know. It does not need to have all of those experiences "to know". It will never know the physical meaning of concepts the way we do, just like I'll never know the details of a city block in Jakarta (as I've never been). But not having that experience (or any experience from multiple senses) doesn't mean I don't know.

LLMs don't need multiple cross-connected sensory experiences, nor an extensive history with a physical or virtual world, to know things.

For an entity "to know" it means it has a model it can use to make predictions.


I think your argument goes off the rails when it jumps from "you don't need any particular sense modality to know" to "you don't need any percepts, or experience of reality or simulated unreality to know". That's a big leap, and I can't disagree more.

> For an entity "to know" it means it has a model it can use to make predictions.

Great: every PID controller, every Jupyter notebook or Excel spreadsheet with a linear regression model, every count-down timer can make predictions and therefore "knows" under this definition. But perhaps there's an even broader class of things that "make predictions". Down this path lies panpsychism. When I throw a rock, its velocity in the x direction at time t is a great "predictor" of its velocity in the x direction at time t+delta, etc, etc. And maybe there's nothing inconsistent or fundamentally wrong with saying that every part of the physical universe "knows" at least something insofar as it participates in predicting or computing the future. But I think over-broadening the concept of knowing this far makes it useless, and impossible to make the distinctions that matter.
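
To make that concrete, here is roughly the most trivial "model that makes predictions" I can write (plain numpy, with made-up numbers for the thrown rock; purely my own illustration). Under the definition above, this snippet "knows" something:

    import numpy as np

    # x-velocity of a thrown rock sampled at t = 0..3 (made-up numbers)
    t = np.array([0.0, 1.0, 2.0, 3.0])
    vx = np.array([5.0, 4.9, 4.8, 4.7])  # slight decay from drag

    # least-squares line fit; "predict" vx at t = 4
    slope, intercept = np.polyfit(t, vx, 1)
    print(slope * 4.0 + intercept)       # ~4.6, a perfectly good "prediction"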


> "you don't need any percepts, or experience of reality or simulated unreality to know". That's a big leap, and I can't disagree more.

I still feel this is the point where you're drawing a distinction based on your desired outcome rather than the actual system. ChatGPT absolutely does have percepts / a sense: it has a sense of "textual language". It also has a level of sequencing, or time, w.r.t. the word order of that text.

When you say experience, it seems like in your definition experience only counts if there is a spatial component to it. Any experience without a physical spatial component seems, to you, not to be a valid sense or perception.

Again taking this to the specific: imagine someone who could only hear through one ear, and that is their only sense. There is no multi-dimensional positioning of audio, just auditory input. It's clear to me that person can still know things. Now if you also made all audio the same loudness, so there is no concept of distance, they still would know things. This is now the same as a simple audio stream, just like ChatGPT's language stream. Spatial existence is not required for knowledge. And from what I'm understanding, that is what underpins your definition of a reality/experience (whether physical or virtual).

Or as a final example, let's say you are Magnus Carlsen. You know a ton about chess, best in the world. You know so much about chess that you can play entire games via chess notation (1. e4 e6 2. d4 e5 ...). Now imagine an alternate world where there's a version of Magnus who has never sat in front of a chess board and only ever learned chess by people reciting move notation to him. Does the fact that no physical chess boards exist, and there is no reality/environment where chess exists, mean he doesn't know chess? Even if chess were nothing but streams of move notation, it would still be the same game, and someone could still be an expert at it, knowing more than anyone else.
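
(As a side note, and purely my own illustration assuming the third-party python-chess package: a game really can be carried entirely as a stream of notation, with no physical board anywhere.)

    import chess  # pip install python-chess

    game = chess.Board()
    for san in ["e4", "e6", "d4", "e5"]:  # the moves from the example above
        game.push_san(san)                # notation in, updated game state out

    print(game.fen())  # the whole position, still just a string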

I feel your intuition is leading your logic astray here. There is no need for a physical or virtual environment/reality for something to know.


You're still fighting a strawman. You're the only participant in this thread who's talking about space. I'm going to discontinue this conversation with this message, since (aptly) you seem happy responding to views whether or not they come from an actual interlocutor.

- I disagree that inputs to an LLM as a sequence of encoded tokens constitute "a sense" or "percepts". If the inputs are not related to any external reality, I don't consider them to be perception, any more than any numpy array I feed to any function is a "percept".

- I think you're begging the question by trying to start with a person and strip down their perceptual universe. I think that comes with a bunch of unstated structural assumptions which just aren't true for LLMs. I think space/distance/directionality aren't necessary for knowing some things (but bags, chocolate and popcorn, as lsy raised at the root of this tree, probably require notions of space). I can imagine a knowing agent whose senses are temperature and chemosensors, and whose action space is related to manipulating chemical reactions, perhaps. But I think action, causality and time are important for knowing almost anything related to agenthood, and these are structurally absent in ChatGPT, IIUC. The RLHF loop used for Instruct/ChatGPT is a bandit setup. The "episodes" it's playing over are just single prompt-response opportunities. It is _not_ considering "If I say X, the human is likely to respond Y, and I can then say Z for a high reward". Though we interact with ChatGPT through a sequence of messages, it doesn't even know what it just said; my understanding is that the system has to re-feed the preceding conversation as part of the prompt. In part this is architecturally handy, in that every request can be answered by whichever instance the load-balancer picks. You're likely not talking to the same instance, so it's good that it doesn't have to reason about or model state.
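
To make the statelessness concrete, the client-side pattern is roughly the following (my own sketch; call_model is a hypothetical stand-in for whatever completion endpoint actually sits behind the service):

    def call_model(messages):
        # hypothetical stand-in for a real completion endpoint
        return "ok, I can see %d prior messages" % len(messages)

    history = []

    def chat(user_message):
        history.append({"role": "user", "content": user_message})
        # every turn, the ENTIRE prior conversation is re-sent as the prompt;
        # the instance that answers may be a different one each time
        reply = call_model(history)
        history.append({"role": "assistant", "content": reply})
        return reply

    print(chat("hello"))
    print(chat("what did I just say?"))  # only "remembered" because the client re-sent it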

But I actually think both of these are avenues towards agents which might actually have a kind of ToM. If you bundled the transformer model inside a kind of RNN, where it could preserve hidden state across the sequence of a conversation, and if you ran the RLHF training on long conversations of the right sort, it would be pushed to develop some model of the person it's talking to, and of the causal links between its responses and the human's responses. It still wouldn't know what a bag is, but it could better know what conversation is.
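
A very rough sketch of the shape I have in mind (plain PyTorch, with made-up dimensions and components; an architectural doodle, not a claim about how any deployed system is built):

    import torch
    import torch.nn as nn

    class StatefulResponder(nn.Module):
        """A transformer encoder wrapped in a GRU cell, so hidden state
        persists across the turns of a single conversation."""
        def __init__(self, vocab=32000, d=512):
            super().__init__()
            self.embed = nn.Embedding(vocab, d)
            layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.memory = nn.GRUCell(d, d)   # conversation state carried between turns
            self.head = nn.Linear(d, vocab)

        def forward(self, turn_tokens, hidden):
            x = self.encoder(self.embed(turn_tokens))  # (batch, seq, d)
            summary = x.mean(dim=1)                    # crude summary of this turn
            hidden = self.memory(summary, hidden)      # update the persistent state
            return self.head(hidden), hidden           # reply logits + new state

    # usage: initialize hidden once per conversation, then thread it through the turns
    model = StatefulResponder()
    hidden = torch.zeros(1, 512)
    logits, hidden = model(torch.randint(0, 32000, (1, 12)), hidden)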



