
How would you know that you hold belief X if you have no subjective experience? To have the knowledge that you believe X, you must experience the thought "X is true", and that is a subjective experience.



Sure, if you define subjective experience as "all sensory information." I'm not denying that we have sensory input; I'm denying that its abstractions are any more than illusory constructs of the mind. The computer in my example knew things and had thoughts, despite not having access to actual qualia.

Do you disagree that the computer in my hypothetical example would have the intuitions it does about its own senses? Given that it does, how can you trust your own intuitions about your sensory qualia, no matter how strong?


Part of the problem is that people tend to use words very loosely. In particular, they often use words that anthropomorphize computers, and when they use such words it's hard to tell whether they intend them to be taken literally or are just using them as a figure of speech.

Your example starts off by telling us that the computer is known not to have qualia, which I understand to mean that it does not have subjective conscious experiences. First, I don't think it's ever possible to know, even in principle, whether something other than one's own self has or does not have subjective experiences, but I'll ignore that objection for the moment.

A few lines later you say:

> We, and the computer, just know that we're seeing red.

Now much hinges on what is meant here by the verb "know". If you are using the word in a loose metaphorical sense then I could accept that. For example you could say that a computer "knows" that the word "red" means an RGB value of (255,0,0) because in its memory there is a hash table that maps some strings to RGB values. So behaviorally you can ask the computer what "red" means and it will tell you "(255,0,0)". But there is nothing in such a description that implies that the process of generating that output is associated with any kind of subjective experience on the part of the computer.
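To make the loose sense concrete, here is roughly the kind of lookup I have in mind (a minimal Python sketch; the table contents and names are just an illustration, not anyone's actual implementation):

    # A lookup table mapping color names to RGB triples. Nothing about
    # storing or querying it implies any subjective experience.
    color_table = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

    def meaning_of(word):
        # Behaviorally, asking the computer what "red" means is just a lookup.
        return color_table.get(word)

    print(meaning_of("red"))  # (255, 0, 0)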

On the other hand, when you "know" some simple fact, the process of reporting that fact is always associated with some subjective experience, specifically the experience of thinking. How do I know that? Well, technically I don't. All I really know is that whenever I know something and am asked to report on it, the process is always accompanied by a subjective experience of thought. I am simply making the conventional assumption here that the same is true for all other humans and that you are a member of that group. Of course that assumption might be false. For all I know I could be having this conversation with LaMDA.

To get back to the questions you asked here:

> Do you disagree that the computer in my hypothetical example would have the intuitions it does about its own senses?

Yes, because you said it didn't have conscious experiences and for me the word "intuition" is strongly linked to certain conscious experiences.

> how can you trust your own intuitions about your sensory qualia, no matter how strong?

Because I have them. I am not a black box to myself; I get to look inside, and when I do I find that there are conscious experiences there. That's just not something I can, even in principle, be mistaken about. However, once I start making assumptions about the nature of those experiences, such as that there is some external physical world that is ultimately responsible for them, I am already on very shaky ground.


An intelligence doesn't need to experience qualia in order to have an internal thought process. Picture a thought process the way it might look if we could inspect our own minds: a prolonged monologue of ideas, continually appended to with new information and conjectures. "It's hot in here. This is an interesting article, but I disagree because <blah blah blah>. What should I do tomorrow? Maybe I should get a haircut," etc. Obviously such a log wouldn't be written entirely in English, but it would have a language of its own, after a fashion.

When I talk about "an intelligence," I mean a thing with an internal thought process which can reflect upon its own thought process in a non-trivial way. This excludes large language models like LaMDA which don't really have semantic thoughts, but it would certainly be possible for a true computer intelligence along these lines to exist which nonetheless didn't experience "actual qualia" (assuming qualia are real, existing things).

A stream-of-consciousness thought process has input---it can sense temperature, it can observe its own hair, it can read articles---and for the purposes of our model, we can suppose that this input is appended to its internal log much as new thoughts are. This sensory input is abstract: a thought process may sense heat---i.e., sensory information about the external temperature may be entered into the thought process---but the thought process can't go on to make any real observations about that sensation. A thought process can't interpret sensory input as anything beyond "a distinct input of this or that type with a relative magnitude of whatever," because that input is abstract and irreducible. Further thoughts in the thought process will describe these sensory inputs as vivid, unique, and ineffable when they reflect upon them, but those properties only exist as a product of the relationship between the thought process and its input. The ineffable qualities of these senses, as the thought process describes them, are not real things, only an interpretation.
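As a concrete toy version of that model (just a sketch; the names and structure are my own illustration, not a claim about how real minds work), you could picture something like:

    from dataclasses import dataclass

    @dataclass
    class SensoryInput:
        kind: str         # e.g. "temperature", "vision"
        magnitude: float  # a relative magnitude; nothing further is exposed

    # The thought process as a log that only grows.
    thought_log = []

    def think(idea):
        thought_log.append(("thought", idea))

    def sense(kind, magnitude):
        # Input enters the log as an opaque, atomic token. Later thoughts can
        # reference and compare these tokens, but cannot decompose them.
        thought_log.append(("sense", SensoryInput(kind, magnitude)))

    sense("temperature", 0.9)
    think("It's hot in here.")
    think("That last input was vivid and hard to describe further.")

The point is that, from inside the log, the SensoryInput tokens really are irreducible; the "ineffability" falls out of the structure of the model, not out of any special property of the input itself.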

So when I hear the argument, "I can reflect on the way my senses feed into my thoughts, and by their apparent ineffable and transcendental nature, I can say they're self-evidently real things that exist outside of my own beliefs about them," I'm pretty skeptical. A computer intelligence with an internal thought process like I described above would reach those same conclusions simply by virtue of the relationship between its sensory input and its thoughts. We're not unbiased observers; our perspective as an intelligence breaks down when we try to reason about the nature of the abstract and atomic inputs which our thought process is based upon. Because of the way senses feed into thoughts, an intelligence can't help but find them ineffable and transcendental; therefore, when I find my own sensory information to be ineffable and transcendental, I can't take that at face value as anything more than an illusion of perspective.

> Because I have them. I am not a black box to myself, I get to look inside and when I do I find that there are conscious experiences there. That's just not something I can, even in principle, be mistaken about.

A computerized thought process can look at its own thoughts too, but there's no reason to suppose that it (and you) can't draw mistaken conclusions about them. For instance, the mistaken conclusion that the distinctness of one input or another must be a real existing quality and not just a logical axiom essential to the functioning of that thought process.



