Hacker News
What’s so hard about understanding consciousness? (nautil.us)
174 points by CapitalistCartr on Feb 4, 2022 | 378 comments



I do Brazilian Jiu Jitsu and a common submission technique is to cut off blood flow to the brain (known as a blood choke).

I've lost consciousness twice after being put into a blood choke. The experience of regaining consciousness is visceral. For those who haven't experienced it, it's very much like turning on a computer.

Initially, your brain does not (or cannot) process input like vision, sound, touch, etc. Slowly, over about 30-60 seconds, you begin to see and hear but they don't make any sense. You've completely lost the context of who and where you are. Then you begin to recognize colors and shapes, sounds of voices, etc. Eventually, you see and hear everything but still haven't built a concept of the context. Lastly, your brain seems to begin making sense of all the inputs and you regain full conscious awareness.

If you think it's just like waking up from a dream or from a nap, you're wrong. It's a completely different experience, one that's so profound that I distinctly remember the entire episode even years later (and I have poor memory recall generally).


I've had the same, but in the opposite direction -- and then back again -- from hypoglycemia (low blood sugar). Over a much longer time period (several hours). Craziest thing I have ever experienced.

Basically, when you have hypoglycemia, you lose "fine" things first, and you sort of lose things in order, from "fine" (picking things up, balancing, seeing small details) towards "coarse" (talking, walking) and then of course to very "coarse" things, like being able to wake up/see.

And the physical symptoms are exactly mirrored by the mental symptoms. "Fine" mental things are like the ability to formulate a complex thought with a lot of abstraction: do a math problem, talk about the future or the past, etc. "Coarser" things are like being able to think verbally or interpret the basic meaning of things -- like, what is that thing over there called? Then, you lose extremely "coarse" things, like being able to reason at all about what is going on in the world around you.

Oddly enough, the last thing that goes is emotion. Emotion is the baseline of how every piece of context is categorized. Things feel good or they feel bad at a level far below anything resembling what they are. At a level below speaking, having distinct thoughts about distinct objects or people, having any sense of time, place, or meaning. Emotion is always there.

It really is impossible to articulate, because words are a complete fiction.

I wish everyone could go through that, because it really makes you realize the fiction of your own "self" as a real thing.


> Things feel good or they feel bad at a level far below anything resembling what they are.

This primal good-or-bad feeling is known as Vedana in classical Buddhism. On an at-home meditation retreat two months ago I had begun to see this arise in every interaction with almost everything. So much so that someone set off a firecracker outside my house (pretty normal in my neighbourhood but at an unexpected time) and I first noted that something unpleasant had happened, then noted that my eyes had flinched, then heard the sound, then recognised the sound as a firecracker. (I was in the shower at the time, wide awake.) The first glimmer of almost every experience was this vedana.


Emotion seems to be a form of super-intelligence or pre-intelligence. Every time I worry about the rate at which we are supposedly approaching "artificial intelligence", I ask myself how close we are to being able to synthesize artificial emotion. Seems like a comfortable 1000 years, if not much much more.


Here's a head start for any future AI engineers in the year 3022.

    struct EmotionalState {
      int happiness;
      int angriness;
      int hungriness;
    };


Can you tell me more about emotion? Could you distinguish between a sense of calm and excitement? What about confidence versus doubt?


I slept once on the mats. I literally remember feeling like I was in a fuzzy and comfortable dream and waking up to my partner shaking me and asking if I was all right. I blurted out something like "What are we doing?" and I was wondering why my partner was sitting next to me, where I was, and what was going on. But not in a panicked or concerned way. Like some parts of my brain knew what was happening, but the thinking me was trying to put the subconscious streams back together and get them routed correctly.

Turns out his cross collar choke was tighter than I thought. It was not unpleasant and I had a mild buzzy/headache-y feel for the rest of the day. It all felt a little dreamy and is one of the strongest memories I have had in recent years. Definitely something I remember clearly.

Also: if anyone is in the Denver area we have mats at our office :)


I once woke up in the high school bathroom, lying on the floor. I was alone. I have no idea how I got there, what happened before, or how long I had been out. It seemed unlikely that you could be in a high school bathroom THAT LONG without someone else showing up.

It's not happened since. A one-time event. The confusion I woke up with was so profound and complete and slow... I think about it often. I think about how the brain, and its consciousness, can have a bug or a crash.


You may have had a seizure.


I don't tend to have a headache afterwards, but your description matches up to my experience of coming around after fainting. I tend to feel fuzzy and dreamlike and it takes a few seconds to work out what's going on.


I live just south of Denver! What a coincidence.


I once crashed while snowboarding and lost consciousness.

When I woke up as the ski patrol started talking to me, I did not have a profound experience like yours; it was simply like waking up a little tired, but I was “there” instantaneously.

I don't remember the fall, though, so it must have been quick.

I guess the mechanics of “turning off” matter.

A different scary thought: how can you verify you are still “you” when you go to sleep and wake up tomorrow?

Edit: I also remember when I had surgery and they injected the sleeping drug, I felt my body disappearing as the drug circulated in my bloodstream, and I lost consciousness instantly as it reached my head. It was a weird experience.


I have orthostatic hypotension, which doesn't affect me as much since I've gained weight, and I've lost consciousness from standing up too quickly, but sometimes it doesn't all go away. I black out; a moment later I am somewhere, and I don't know who I am or where I am; a moment later the rest fills in.

The way I understand it, as blood pressure drops your brain stops functioning properly, but it might still function somewhat. As blood pressure comes back up, more parts of the brain resume normal function, but the "functional" state of the brain might still be abnormal. That state is unstable, especially as new functionality comes back online and resumes normal behavior, and the brain quickly moves from an unstable state to a more stable, normal one.


Yup. I have this too and have had similar experiences, but they've only lasted a couple seconds. I wasn't able to get a detailed picture like OP was describing.


I had something similar on nitrous oxide, except that there was this sense that there were parts, like crystals, which were made to slot together and they were hooking back to each other, one by one, and each addition added some kind of functionality. Unfortunately, the whole thing was rather fast -- I come out of general anesthesia like a shot, I am told -- and I was unable to observe more than that.


That sounds like a slow motion version of waking from anesthesia.

Waking from sleep feels very fast, but I've been put under for surgery more than once now, and coming around from surgery doesn't feel like waking up. It feels like something else completely.


Fainting works in reverse for me. Everything becomes harder to understand, sound becomes muffled as if in slow motion, and vision fades until you are in total darkness. And it's not at all similar to falling asleep; it's quite scary.

https://en.wikipedia.org/wiki/Reflex_syncope


If you did not have full conscious awareness when regaining your first sensations, then what was it that remembers this experience? I'm saying that full consciousness was already present, since you remember everything.


I've had the same experience doing Brazilian Jiu Jitsu. Very strange experience indeed.


This reminds me of standing up too fast


Yup. It's caused by orthostatic hypotension.


What you just described sounds exactly like waking up from a dream.


Definitely not the same. When you dream, it feels like some time has passed. I've lost consciousness a number of times for different reasons, and the common feeling is that it doesn't feel like any time has passed. One moment I was awake and the very next I'm on the ground, my head hurts, and I have no idea what is going on. For the first 30 seconds I wouldn't be able to tell you my name, where I am, what I was doing, etc. Then it all just comes back after a minute or two and you realize what happened.


It's nothing like that. Your memory is wiped out, you don't know who you are, you don't know when you are. It's like you're in an alien world, nothing looks familiar. Your ears are kind of buzzing and ringing, your visual field is fucked up, etc. The last layers of your neural network aren't turned on yet. It's pretty weird.


...sounds exactly like waking up from a dream.


I think the more interesting point is that the way you seem to wake from dreams is very different from how 99.9% of people do. For most people, waking from a dream is absolutely nothing like that.


Nope. I've also experienced this (just sort of fell to the floor unconscious and then woke up 10 seconds later). It was like my brain was slowly paging in bits of memory that had been stored out to tape: who I am, where I am, why I am there. When I wake up from most dreams, I have a bit of trouble remembering what's true and what's a dream, but I know where I am, who I am, and why I'm there.


Do you regularly wake up extremely groggy and drained? That usually only happens to me if I am awoken mid-REM cycle. Perhaps that's worth investigating for you?


Like Dennett in "Consciousness Explained", both of these authors seem almost entirely focused on what we are conscious of rather than the fact that we are conscious of anything at all. Their descriptions of their work are all centered on the question of what it is that we actually experience. This is a fascinating question, and entirely worthy of investigation.

But it is not the same question as: how is it possible that we have any experience at all?


exactly ^. The first central question is "how is it possible that we have any experience at all?" That leads naturally to the other important question "who is the observer that's aware" i.e. who is the "we"/I that is aware.

It's telling that science has made incredible progress (e.g. in the last 100 years), but on questions such as these we know little more than people knew then. Although we do know a lot more about the mechanics of what and how we experience things.


"Who is the observer" is not consistent. Maybe "what part of myself is the observer".


who is the observer who is observing myself?


Let's try to replace "who" with "computer". Who is the computer that observes itself? All of them. A computer is not passive; a brain is not passive. It can observe itself. In other words, the parts communicate.


The whole point is, that's not an obviously valid replacement in the context of consciousness. What's your definition of "passive"?


A city is not passive. The parts communicate. Is there consciousness there?


Maybe it is. It's a self replicator, a culture and technology preserver, a competitor in the race for resources.

I think much of what we think is human intelligence is actually crystalized intelligence in the form of language and tools. We got them by default, as citizens of our city.


Personally, I think that "who" is somewhat beside the point. "What" and "how" are the more interesting questions.

The answer to "who" just becomes "a specific instance of X" that perhaps has some low probability combination of minor characteristics compared to other Xs.


Yeah, this is the real question. I'm sure I could program an AI agent to respond to (dis-)incentives in a fuzzy way. Is it consciously experiencing pain and pleasure at that point? The answer would be "no" unless we could prove it.


I am sure someone, perhaps you, could create an AI (or just a simple hard-coded program) that responds to (dis)incentives and clearly doesn't experience pain or pleasure.

But that isn't the question. The question is whether a sufficiently complex program could ever have experiences in the same way that humans have them (AKA "qualia").


I think it is the question, because if there is an AI that can experience qualia, and my toy AI doesn't, then there's some inflection point between the two that can experience qualia. You can set up goalposts just slightly past any toy example but you're just inventing Zeno's Qualia.


I think with enough co-processors responding to enough "(dis)incentives" you could get the same result.

Present and future anger, present and future pain, predicting whether future social cohesion will be maintained based on how a response is made to someone within a social graph. Understanding and assuming that nodes (other people) within a social graph can communicate to each other to update the state of a social interaction that they weren't present for.

I think it is possible.

If pleasure was just a variable on a scale from 0 to 10, and not its own co-processor, and an assumption that some forms of pleasure can influence parts of the social graph, it maybe isn't even that complicated anymore.

I think it would become less distinguishable from consciousness.


What is consciousness? Self-awareness seems like a good definition that can be explained in the context of artificial intelligence research. What is the experience of consciousness? The experience of an emotion? To me, this seems beyond the reach of our current understanding.


Well, at the very least there has to be a "space" of experiences where they can be represented and related to each other. This representation space needs to also include possible outcomes for the agent goals, this will color every perception into shades of "good" and "bad". This would explain how we can represent and value our experiences. Why do they feel like anything then? I bet it's because we have evolved hardware for it, and equally because we're playing a game with real stakes (the human game of life).


> But it is not the same question as: how is it possible that we have any experience at all?

Let me send this ball straight back to you.

In the morning when you wake up what happens? You might take something to eat. How would you find the kitchen, how would you recognise the food, or even know you need it? Of course, you are conscious. That's how.

It is possible to be conscious because otherwise your body wouldn't be able to survive. It's essential for avoiding death. We are self replicators in a limited resource environment, after all. This balance between self replication and death evolved consciousness as an evolutionary advantage.

Being exposed to your internal and external environment is the source of consciousness. Even a single cell can seek nutrients or avoid bad places, or decide when to replicate, it has a rudiment of consciousness. But panpsychism? It's one step away from new age babble. Stones are just stones.


You're talking about a different sense of the word "conscious." What you're describing might better be called "reactivity" or something. (Don't get too hung up on the specific word; the point is there are several different meanings of the word "conscious" and the distinctions are critical.) There is no obvious reason why useful interaction with one's environment requires conscious experience, which is the phenomenon we're discussing here.


“It is possible to be conscious because otherwise your body wouldn't be able to survive.”

False: we can program a Roomba to seek out electricity to ‘survive’. E.g., zombies.


You don't think it's possible to seek nutrients without having experiences?


I don't believe it is. This is the problem w/ philosophical zombies: they are self-defeating examples and can't exist in anything but a thought experiment. The fact that you're seeking nutrients at all means you can distinguish between sensory inputs, and therefore are experiencing something. If you can't distinguish between sensory inputs, then you are not experiencing anything. How can you be an effective seeker of nutrients if you don't experience anything or get feedback on whether you've found the nutrient or not?

This is why I think the hard problem of consciousness is silly and only exists because language is so imperfect that philosophers have invented a problem which they can't even decide is actually a problem or not. Can any of us even agree on what an "experience" is?

You can tell this is so because every conversation about the hard problem of consciousness starts off by trying to convince people there is a problem to talk about and use the same tired examples. We can "experience" red. There is an "experience" of being a bat.


There sure seems to be a distinction between having sensory input and being able to reflect on that experience. How else could you explain the phenomenon of blindsight?

I agree philosophical zombies are impossible, but only because they wouldn't be able to reliably talk about consciousness without a conscious "zombie master". See https://www.lesswrong.com/tag/zombies-sequence


> you can distinguish between sensory input, and therefore are experiencing something

You are assuming “you” is a unique, atomic thing here.

When your fingers wrinkle from being immersed in water (which is a neurological response, not a physical one!), your conscious mind does not experience the wetness; the finger-wrinkling system does. You just experience the effect of the skin wrinkling afterwards, through sight and touch.

There are several other systems in our body that react to the environment in ways we are barely conscious of, and "barely" is being generous. Those seem like a counterexample to your "we can't not be conscious if we experience".


Yes but see, in your example, I would then say your fingers are conscious, too. Or that there is an experience of being a finger. Who, besides the finger, could say otherwise? Just because you've moved the experience from me having it, to the finger having it, doesn't mean that the experience disappeared.

The problem is yet again w/ the definition of having an experience, as I said in my reply to the parent. It's just language, and the reason why we find ourselves in some kind of linguistic paradox trying to explain it.


It's not just language. I doubt that you think it is like anything to be a rock. You do think that it is like something to be a person (or even a finger)! Those are two different states of existence, one with experiences/qualia (i.e. it is like something to be that thing) and one without. How do we explain the difference?

ps. interesting user name.


How can you be so sure there is no experience of being a rock? Surely it's wildly different from being a finger or a person, and perhaps, very boring from a human's perspective. I'm not sure why everybody insists you can have qualia or you can't, as if it's a binary thing. Perhaps it's a gradient. I would say there is something like being a plant, or something like being a computer program. Just because we are unable to imagine it as humans does not mean that it doesn't exist.

What makes my experience of seeing red any more special than the experience of a plant reacting to light? The fact that I can type about it on hackernews and the plant can't?


Well, I'm not totally convinced it's not like anything to be a rock. But a rock is the widely used example of "an object without qualia".

You are apparently a believer in some degree of panpsychism. That's fine. Consciousness is like gravity: it's just a part of the universe, and there's no explanation for it other than that it is experienced by anything with (mass, for gravity; ???, for consciousness).

I'm not opposed to this explanation, I'm just not (yet) convinced that it's the right one.


The minerals in rocks can change in response to their environment. There has to be something that it feels like for that change to happen.


There are chemical reactions that follow gradients in chemical potential and, from our standpoint, self-sustain and survive. Not sure one would argue that such a chemical reaction has consciousness.


If I have a material that tightens or loosens based on a temperature or chemical gradient, does that count as "sensory input"? I wouldn't say so, since nothing is receiving a signal from it to distinguish, and I also wouldn't say it's causing experiences, but it is enough to seek nutrients for a bacterium or a jellyfish-like creature.


You're right, a bacterium would sense and follow chemical gradients and light. And that would be a primitive form of consciousness.

A piece of material is not conscious because it doesn't need to be. You see, bacteria are self replicators; humans too. And self replication is resource intensive, while resources are limited. So life becomes a competition. That is why agents need to become more attuned to their state and possible actions and outcomes.

Is that good enough to be called consciousness? Taking sensory inputs, representing your state, taking actions, observing effect, learning from it. Your goal - to exist, to make copies of yourself. If you're not aware of essential information, or don't understand your situation, you die. If you live, it's because you are aware.


If it's only one protein, is that protein primitively conscious?

It's not interacting with anything else to make those decisions, it's completely autonomous.

> observing effect, learning from it

If something is meaningfully doing that, then we're getting to consciousness. But I'll note that I don't count natural selection as "learning" here.

And observing the results of decisions and learning is not mandatory to survive and reproduce.


Natural selection is the outer loop, reinforcement learning the inner loop. Agents, by living or dying, send signals into the system, guiding the process.


Natural selection is not conscious, and agents don't need to have reinforcement.


"consciousness" as a philosophical question is not about the ability to detect state in the world (including your own state) and reacting to it. it is not about self-replication.

It is about having subjective experience aka "qualia". It is about "it being like something to be you", contrasted with, e.g. being a rock (which most people agree, it is not like anything to be).


This is an argument that an amoeba must be conscious, no?


Any self replicator can be. I am not sure about non-replicating agents like some AIs.

Besides having a way to copy what it learns into the future, self replication also provides a goal, a guiding force - of course, the goal is to avoid death before replication. Another great quality of self replication is its open-endedness. It can become anything as long as it keeps replicating.

For these three qualities I consider self replication the basic process behind consciousness. They create the means by which open ended learning can evolve.


Prions self-replicate. Are they conscious?


I think they're two sides of the same coin.

_How_ do we have any experience will be more easily answered the more we know about _what_ we actually experience. And _what_ we experience is predicated upon _how_ we experience it. I agree both need investigating.

The _bigger_ question I see the coin representing is _who or what_ is experiencing in the first place? The way I see it, it's not really _a thing_ experiencing _something_ but instead _things_ experiencing _things_ at _times_.


That presupposes (a) that panpsychism or semi-panpsychism is false (i.e. that qualia are not inherent properties of anything at all, nor of "any sufficiently complex system") and (b) that there is a "how" that can be answered.

I agree with you about the "things experiencing things at times".


> But it is not the same question

This is debatable. If you subscribe to the idea that consciousness is an emergent phenomenon, then it makes sense as an avenue to the answer.


If your answer to "how can anything be conscious" is "it just can, if the system meets these requirements", then obviously, that's the only answer and you may as well move on to "what are we conscious of?"

At this point, I'm not willing to regard consciousness as something like gravity or the weak force, though I concede that it could turn out that way.


My personal train of thought is this: 1) We know consciousness exists (as far as evidence goes it has more than anything else since it is the conduit to everything else) and 2) We know it is composed, in some way and to some degree, of information.

Information might be a small piece, and it might be the thing itself, but it's the only tangible avenue we have to pursue, so I don't think it's a matter of them answering or discussing a different question, I think it's them framing the same question with slightly different assumptions.


> 2) We know it is composed, in some way and to some degree, of information.

I don't see how we can be said to know this, certainly not at this point in human investigation of consciousness.


Well said. There is another harder problem of consciousness: A lot of people miss the point of the hard problem of consciousness.


I think it's a good test of how conscious a person is. The way we are taught to understand the world - through the scientific process - leads to the conclusion that consciousness is either an illusion or emergent.

It is only through direct inspection of one's own internal experience that it becomes apparent consciousness is not an illusion. Therefore those who claim consciousness is an illusion are not in fact conscious.

Those who say it is emergent are perhaps conscious but have not looked carefully enough to understand that consciousness is an entirely different category of thing to physical reality.

The problem here is that science as a system of thought operates in a kind of space separate from the space of conscious experience. The intelligence born of that scientific process - a kind of entity that exists in its own right - is not itself conscious and hence cannot introspect correctly about consciousness.

So when a person is operating in the mode of scientific thinking, they are in a sense embodying that disembodied intelligence that is not aware. Or at least it is not able to introspect about its own awareness. And so they draw incorrect and nonsensical conclusions that fly in the face of everyone's direct visceral experience.

It really is quite a phenomenon.


Well, the reason we have experience at all is that it is being simulated by the brain. For me a more interesting question is how it is being simulated by the brain.


Isn't that just layering an extraneous concept on top without explaining anything? Is there a difference between experiencing something (real) and having a "simulated" experience of experiencing it?


I mean "simulated" in the same sense in which computational machines can simulate any other computational machine: you can define some arbitrary state and arbitrary rules, and with such a machine apply those rules to the state to change it.

I say that because that's what I best understand the neocortex to be doing (it does that and probably something more). If you buy into what the Numenta people are talking about, then certain neurons in your neocortex configure themselves to predict their inputs better, so they start to form a model of their input. In aggregate, these neurons probably learn multiple sequential models of their input.

My guess is that these models in the brain interact with each other. I think some of the larger models the brain builds are of us (individually), the environment we find ourselves in, our mind and how we relate to it, the 3D space we find ourselves in, and such.

This is a simulation to explain what is going on "out there" (the physical world) so that the physical organism can successfully navigate "out there".

This simulated world, body, and such, is what is "real" to us.


Yes. If not for that, we wouldn't have so many bloody articles about consciousness.


If what I'm having is a "simulated experience," then where do "real experiences" exist?


The simulated experience is the real experience.


> But it is not the same question as: how is it possible that we have any experience at all?

You're right, it's not the same question. Dennett answers that question too though: you don't have conscious experience, you only think you do.

Thus, the questions he and these authors address are the only ones worth investigating. In time, exploring these questions will reveal the illusion behind consciousness, just like the progressive elaboration of the biochemistry of life revealed the fiction people tried to spin around how living matter simply had to have something different than non-living matter ("elan vital" in vitalism).


"you don't have conscious experience, you only think you do."

That's what illusionism, the position of Keith Frankish, asserts. Dennett mostly seems to support it, as you note, but more equivocally than Frankish.

For those of us who have subjective experience (SE), and are aware of it: SE is the thing, which we know must exist (as noted by Descartes).

Anyone doubting the existence of SE, either is not having SE (i.e. is a "phenomenological zombie"), or (more likely, IMO) has not identified his own SE.


> Dennett mostly seems to support it, as you note, but more equivocally than Frankish.

I believe Dennett is an eliminative materialist, so he would consider qualia to be an illusion.

> For those of us who have subjective experience (SE), and are aware of it: SE is the thing, which we know must exist (as noted by Descartes).

Descartes begged the question. Just deconstruct it: "I think therefore I am" presupposes the existence of "I" right at the very start. Except everybody knows there is no "I", you're just a bundle of atoms, and a bundle that's changing from moment to moment. Where are "you" exactly? The argument is fallacious and implies fallacious conclusions which led to mind/body dualism.

The non-fallacious version is "this is a thought, therefore thoughts exist". This is undeniably true, and yet it does not imply the existence of an "I" or any kind of dualism between mind and matter. A thought would then simply be a specific material structure (edit: or rather, it's a particular logical structure that can be embodied as a material structure).

> Anyone doubting the existence of SE, either is not having SE (i.e. is a "phenomenological zombie"), or (more likely, IMO) has not identified his own SE.

Anyone doubting subjective experience has simply recognized that every prior claim to human specialness has failed spectacularly, and that science has repeatedly shown our obvious and intuitive grasp of perception and truth to be fatally flawed in numerous ways. Therefore we should not in a million years trust anything we immediately perceive as completely obvious, when it can be demonstrated quite easily that these perceptions are vague and often false.

For Christ's sake, your senses are telling you that water breaks pencils [1], and that you're burning up when you're dying of cold [2], and you're telling me that your internal perception of your subjective experience, arguably the most sophisticated part of your brain, is some kind of factual oracle? Sorry, that's just nonsense. You should be immensely skeptical of your perceptions, looking either for justification that they are true or for explanations of why you think they are true; you should not be treating them as simply a priori true.

[1] https://scienceathomekids.com/wp-content/uploads/2020/06/IMG...

[2] https://psichologyanswers.com/library/lecture/read/76624-do-...


I always found illusionism completely nonsensical, for the reasons that Galen Strawson put forth in "Realistic Monism" [1] (emphasis mine):

> Some of them — Dennett is a prime example — are so in thrall to the fundamental intuition of dualism, the intuition that the experiential and the physical are utterly and irreconcilably different, that they are prepared to deny the existence of experience, more or less (c)overtly, because they are committed to physicalism, i.e. physicSalism.

> ‘They are prepared to deny the existence of experience.’ At this we should stop and wonder. I think we should feel very sober, and a little afraid, at the power of human credulity, the capacity of human minds to be gripped by theory, by faith. For this particular denial is the strangest thing that has ever happened in the whole history of human thought, not just the whole history of philosophy. It falls, unfortunately, to philosophy, not religion, to reveal the deepest woo-woo of the human mind. I find this grievous, but, next to this denial, every known religious belief is only a little less sensible than the belief that grass is green.

[1]: https://www.sjsu.edu/people/anand.vaidya/courses/c2/s0/Reali...


By contrast, Strawson is so enthralled with the primacy of experience that he can easily dismiss the demonstrable facts that literally everything his experience is telling him is a fiction (the world is not classical, there is no continuity of self, that our perceptions reflect evolutionary fitness and not truth, etc., etc.), and yet still maintain that experience itself must somehow be an exception. Pretty absurd indeed.


It can't be "literally everything". It is experience that enables us to correct those misconceptions in the first place. It is by experiencing that we discovered and experimentally confirmed quantum theory i.e. that "the world is not classical". It is by reflecting on his experiences that Dennett came to his conclusions.


It is reason that permits us to correct those errors, not phenomenal experience. Our perceptions and "experiences" deceive us all of the time, and through reason we have found many of those flaws. Consciousness is the final boss fight, and the battle has begun:

http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00...


> I believe Dennett is an eliminative materialist, so he would consider qualia to be an illusion.

That's correct, mostly, but it also means this position is nonsensical.

What is notable about qualia is that it is possible to have them at all. An illusion is definitionally a quale. You cannot have an illusion without qualia existing.

Dennett tries to finesse this, but in my opinion fails. I think he wants it to be possible that you can somehow experience things in a non-mysterious way, and that this non-mysterious experience explains the mysterious experience stuff. I think he's wrong.


Dennett's position makes perfect sense -- if Dennett is a zombie. For a zombie, to whom subjective experience (SE) is not even a thing, the "hard problem" can only mean: why do people talk about SE? So that's what they set about explaining.

However, I don't think Dennett (or any human) is actually a zombie. The difficulty is getting people to recognize their own SE. Our vocabulary, all about material and mechanisms, can't actually define SE. Instead, we have a few ostensions by which an attentive experiencer might recognize his own SE:

[Descartes] SE is that one thing, which absolutely must exist.

[Nagel] SE is "what it's like, to be...". He adds that all our science is fully consistent with SE not existing. That's why it makes perfect sense for a zombie to believe it doesn't exist.

[Jackson]: Mary knows what seeing red is like, only when she has seen red.


I think it is preposterous to suggest that Dennett is not recognizing his own SE.

Dennett, however, requires that the mysterious part of his own SE must be explainable by a non-mysterious aspect of SE. He just doesn't have any proposal for what or how that could be. He wants to wave his hands and say, "well, we have detectors for this and that and these predictive capabilities and these modelling systems, and so ... ta-da, we're conscious!"


> He just doesn't have any proposal for what or how that could be. He wants to wave his hands and say, "well, we have detectors for this and that and these predictive capabilities and these modelling systems, and so ... ta-da, we're conscious!"

I don't think this gives Dennett enough credit, because what our instruments are all telling us is that there is no single, indivisible "self" at all; we are all made up of constituent parts, either none of which have consciousness themselves (eliminativism) or all of which must have consciousness (panpsychism), because ineffable qualia cannot simply appear from nothing. This reddit post does a great job breaking down Dennett's position sensibly:

https://reddit.com/r/askphilosophy/comments/shneug/can_someo...


From that post:

> There's no one thing, not even a collection of things, that can be identified with what we think of as the conscious mind. Instead, we've got a whole bunch of different things, none of which has "consciousness" in a traditional sense, and these come together in a way that makes it seem as though we're conscious.

I just don't buy this at all. This seems precisely as I described it:

> "well, we have detectors for this and that and these predictive capabilities and these modelling systems, and so ... ta-da, we're conscious!"

Also, note the heavy lifting being done by "makes it seem" from the Reddit quote. This goes back to the basic problem: Dennett (and the authors in TFA) are describing what we are conscious of, what makes up our consciousness, but he and they are not addressing how it is possible for there to be any subjective experience at all.

I would go a little further, even: the whole reason why there is a sense of self is precisely because there is a singular subjective experience. You can figure out what drives that experience, and even note that it isn't rooted in any kind of singular and/or stable physical system, and that's actually really interesting. But that's not addressing how subjective experience is possible at all.


> I just don't buy this at all. This seems precisely as I described it: "well, we have detectors for this and that and these predictive capabilities and these modelling systems, and so ... ta-da, we're conscious!"

No, it's actually, "ta-da, we're not conscious! but here's why we think we are!"

> but he and they are not addressing how it possible for there to be any subjective experience at all.

Because neuroscience will do this by elaborating the mechanisms. Like in this paper:

The attention schema theory: a mechanistic account of subjective awareness, http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00...

An analogy for tech nerds would be how the illusion of multitasking on a single CPU machine arises from imperceptibly fast context switching. Something similar happens in that theory, where our perceptual faculties are constantly switching between signals from our internal representations and our senses, thus producing a simplified but false conclusion that subjectivity is present.
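To spell out that CPU analogy, here is a minimal round-robin scheduler sketch in Python (the task names and structure are my own illustration, not anything from the attention schema theory): one loop strictly interleaves two "tasks", so no step of one ever overlaps a step of the other, yet the combined log, viewed coarsely, looks like both progressed in parallel.

```python
def make_task(name):
    # Infinite generator standing in for one "process":
    # each call to next() performs one step and reports it.
    def gen():
        step = 0
        while True:
            yield (name, step)
            step += 1
    return gen

def run_time_sliced(task_factories, slices):
    # Round-robin scheduler: advance each task exactly one step per time slice.
    gens = [factory() for factory in task_factories]
    log = []
    for _ in range(slices):
        for g in gens:
            log.append(next(g))
    return log

log = run_time_sliced([make_task("internal_model"), make_task("senses")], slices=3)
print(log)
# [('internal_model', 0), ('senses', 0), ('internal_model', 1),
#  ('senses', 1), ('internal_model', 2), ('senses', 2)]
```

Strictly one task runs at a time, but a sufficiently slow observer of the log would conclude both were running "simultaneously" — the false-but-useful conclusion the analogy is pointing at.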

> I would go a little further, even: the whole reason why there is a sense of self is precisely because there is a singular subjective experience.

And I'd say you're just telling yourself a retroactively edited story that there is a singular subjective experience in order to make sense of our own thoughts and behaviours. In fact, this sort of retroactive editing has been demonstrated multiple times.


> And I'd say you're just telling yourself a retroactively edited story that there is a singular subjective experience in order to make sense of our own thoughts and behaviours

So look, "The Intentional Stance" is for me one of the most important books I've ever read in this general area, and I totally buy all the stuff Dennett and others have built up around the idea that what we are conscious of is an edited, self-created, intention-injected model of our own selves (to whatever extent there is a unitary self to be a model of).

But I don't think that any of that addresses "how can we be conscious of anything at all".

In the quote I included above, who is the "you" that is telling and who is the "yourself" that is being told? But more importantly, what does "being told" mean? How does one have an experience (whether it is being told, or being cold, or being old)? It's not enough to say "we're not conscious, we just think we are" - the conundrum of consciousness is not about how humans think, but the fact that we have subjective experience (which may include lies told to ourselves by ourselves).


> In the quote I included above, who is the "you" that is telling and who is the "yourself" that is being told?

This is still begging the question by the use of "who". There is no "who", there is no self, there are only thoughts that refer to a "self", but the referent does not actually exist in the way that's implied by these thoughts; there's no spirit or homunculus in your mind to which "self" actually refers.

> the conundrum of consciousness is not about how humans think, but the fact that we have subjective experience (which may includes lies told to ourselves by ourselves).

I think the paper I linked is a good start on answering this question. Per my other reply to you, whether this kind of answer is satisfactory depends on what you take "subjective experience" to mean.

If you buy the thought experiments (p-zombies, Mary's room) that suggest some sort of "ineffability", then this explanation will not be satisfactory. Personally, none of those thought experiments are remotely convincing.


From the paper you linked, in the Conclusions:

> We argue that the attention schema theory provides a possible answer to the puzzle of subjective experience. The core claim of the theory is that the brain computes a simplified model of the process and current state of attention, and that the content of this model is the basis of subjective reports.

Sure, that's all fine. Subjective reports are interesting. But they are not the same as subjective experience. What we say about what we experience is no doubt complex, and has a complex relationship with actual brain behavior. But consciousness, at its heart, is not about what we report, it's about the experience of being something.


> No, it's actually, "ta-da, we're not conscious! but here's why we think we are!"

And of course the "think" has a quality to it that the hard problem is about. It's interesting how illusionists and eliminativists explain away aspects of SE by invoking (other) aspects of SE. "You merely have an illusion of being conscious" - that illusion is the hard problem, so now explain that illusion. I could be having an illusion of an illusion of consciousness.

Imagine something that doesn't exist in the usual physical sense, e.g. a dinner table on the Moon. Does that table exist? Not in the usual physical sense. Your thought or imagination of it does, though. What is that thought or image in your mind's eye "made of"? Sure, you might be able to correlate it precisely with certain neurons, and yet you've not answered the question. You might call the mind's eye table an illusion, but you're not gonna deny that the picture of it exists in some sense.

Three things exist: the physical table, your neurons and, separately, although not entirely independently from the neurons, (the picture of) the mind's eye table. Hence, the latter is part of the universe, and the fundamental substrate of the universe must support it somehow, in a way that's different from the usual physical matter of tables and neurons.

Is your visual brain circuitry involved in the imagination, perhaps even generating the image in your mind's eye? Perhaps, but this doesn't answer the question. If we're nothing but our perceptions, then what the heck is that imaginary table that I'm visualizing quite well while there's no perception of an actual table? What are the physical laws characterizing such mind's eye objects, somehow coupled to ordinary physical matter and yet not of the same "stuff"?

Models like the one linked don't explain why SE exists in the universe. They posit certain physical/mathematical structures and claim that if this or that structure is present, then ta-da there is SE (or the illusion of it, which is the same thing). People in the stone age had a model of that kind: "this piece of matter, structured with two arms and legs - it's conscious". At some point we developed language and the model got a bit more precise by demanding the piece of matter emit certain sounds from a specific location on their body. What we have today is no different in kind. We've just become more precise at locating the pieces of human matter to verify the presence of consciousness (or illusions). None of that says why that configuration of neurons experiences or has illusions, only that it does. Science tells us that experience is in the nature of certain pieces of matter and we just have to accept that without further explanation, like the fact that electric charge exists and follows certain rules. Deeper "why" answers are out of the scope of current science.


> It's interesting how illusionists and eliminativists explain away aspects of SE by invoking (other) aspects of SE. "You merely have an illusion of being conscious" - that illusion is the hard problem, so now explain that illusion.

I've explained this elsewhere, but will repeat here: this argument relies on a definition of "illusion" that begs the question on the existence of a subject, just like Descartes. Define illusion as "a perception that directly entails a false conclusion", and there is no subject needed, and no hard problem remains.

It's like you're asking me to explain the dinosaur you saw while you were hallucinating. Sure, I agree we should explore the biochemistry and neurology involved in dream-like states that yield distorted perceptions that imply false conclusions about reality. Let's not go so far as to posit that those distorted perceptions are real if there's no corroborating evidence of their existence.


> It's like you're asking me to explain the dinosaur you saw while you were hallucinating.

No, it's not like that at all. We're not discussing the dinosaur. We're discussing the existence of hallucinations (and SE in general). The dinosaur is irrelevant; the fact that it was possible to have the experience is the central question.

Again, this comes back to my fundamental argument with Dennett (and one that he graciously conceded in an email back in the 90s; not sure he would do so now): trying to figure out what it is that we are conscious of, rather than how we are conscious of anything at all. I'm 110% ready to concede that everything we are conscious of is an illusion, an error, a projection, an intent-laden stance etc. I'm 110% ready to concede that everything we think we experience as a "self" is wrong.

None of that helps to explain how experience is possible. So you're either denying that SE exists, or like Dennett insisting that mysterious SE can be explained by non-mysterious stuff.


> What is notable about qualia is that it is possible to have them at all. An illusion is definitionally a qualia. You cannot have an illusion without qualia existing.

I think that's incorrect, as it relies on a definition of "illusion" that begs the question on the existence of a subject, just like Descartes. Define illusion as "a perception that directly entails a false conclusion", and there is no subject needed.

> I think he wants it to be possible that you can somehow experience things in a non-mysterious way, and this this non-mysterious experience explains the mysterious experience stuff. I think he's wrong.

No, what he's saying is that there is no "you" to experience anything, there are only scattered but correlated thoughts that are stitched together in a way that produces a false conclusion that there is a "you".


Where does that false conclusion occur? What is the entity in which it occurs?

There is no "conclusion" here in the sense of "2+2=4". What is at stake is not a reasoned, or evidential analysis of how the world is. It is, rather, that subjective experience exists (we know it exists because we have subjective experience, and whether the experience is of something invented and false does not change the fact that the experience exists).

Regardless of whether there is a singular "you" or, in Minsky's term, a "society of mind" (or self, to line up with Dennett a little more), something enjoys subjective experience, and we call that "you". It doesn't really matter how it arises, or whether it accurately reflects the operations of the brain/body: the existence of subjective experience creates a self.


> It is, rather, that subjective experience exists (we know it exists because we have subjective experience, and whether the experience is of something invented and false does not change the fact that the experience exists).

What is in dispute here is what "subjective experience" means. If we both agree that "subjective experience" is a phenomenon that can in principle be captured by a third person objective description, then we can agree that it exists and that our observations are actually evidence of its existence.

But this is not what most people mean by this term, and it is that term that is a fiction on the eliminativist view.


> If we both agree that "subjective experience" is a phenomenon that can in principle be captured by a third person objective description,

Now we get to the heart of it (and the reason why consciousness is and has been such a difficult problem): I do not agree that this is true.


The question is not whether perceptions are true -- that's irrelevant. Undoubtedly perceptions present a skewed and unreliable view onto reality. It's whether they exist at all. You can't trick someone who isn't looking.

> Except everybody knows there is no "I", you're just a bundle of atoms, and a bundle that's changing from moment to moment.

This is begging the question in the other direction.


> The question is not whether perceptions are true -- that's irrelevant. Undoubtedly perceptions present a skewed and unreliable view onto reality. It's whether they exist at all.

Perceptions are not experience. Nobody denies the existence of perceptions, the question is whether perceptions carry something "extra", something "ineffable" that we call "qualitative experience", something that cannot even in principle be captured by a third person objective description.

Algorithms and machines arguably have perceptions but not experience. Eliminativism is the position that we don't have experience either, we're only a collection of perceptions arranged in such a way that it leads us to the conclusion that our experience is real.

> This is begging the question in the other direction.

I'm not begging the question because I'm not saying eliminativism is true because matter is all we can measure. I'm simply saying that it's demonstrably true that by every measure currently available, we are just a bundle of atoms changing from moment to moment. The only people who claim otherwise and are given any kind of credibility, are people who cite fallacious thought experiments like Mary's room as "evidence". Not very compelling frankly.


Very well then: you either don't have SE, or don't recognize your own SE. My guess is the latter.

I don't think SE is exclusive to humans, but only humans talk about it, as far as we know.


> In time, exploring these questions will reveal the illusion behind consciousness

This view is a bit closed-minded, if you ask me. You may also find that exploring those questions reveals something that you could never have imagined beforehand. You may end up finding out that consciousness is not the illusion, but everything else we perceive is.


Maybe there is more to find. History suggests otherwise though, so I'm not optimistic and continue to default to non-existence, per Russell's teapot.


What this article and most others lack is an adult treatment of why serious neuroscientists and philosophers of the mind feel the problem is, in fact, hard.

To me, the fact that illusion and hallucination are being invoked is strong evidence of handwaving being engaged in to dismiss an otherwise difficult problem. This is not a scientifically rigorous explanation and it doesn't take the other side seriously.

As a younger guy, I read Dennett and mistook him for a serious thinker. Then I read his rebuttals of his opponents, and all he did was accuse them of 'residual christianity' and make profane acrostic poems. These guys just aren't taking the other side seriously, and it's kinda silly honestly.


> To me, the fact that illusion and hallucination are being invoked is strong evidence of handwaving being engaged in to dismiss an otherwise difficult problem.

Dennett doesn't dismiss the difficulty of the problem, he just dismisses that the difficulty is where people think it is. For instance, he would fully agree that the brain is insanely complex and just elaborating how information processing works will be very, very challenging. His opponents then say from this position of ignorance on how this works, "but even if you explain all of this, there's still something left to explain!"

Is there really though? How could they possibly know this? It's a classic god of the gaps argument/argument from ignorance that theists use to cling to their deities or support intelligent design, hence "residual christianity". After all of this time debating consciousness, the very best arguments we have for qualia that cannot be captured mechanistically are thought experiments that are easily dismissed as circular (and which Dennett has ably deconstructed to show their vacuousness).

Frankly, I find Dennett's rebuttals are brutally logical and pragmatic.


If one is a "phenomenological zombie", i.e. does not have subjective experience (SE), then Dennett's position is the only one that makes any sense. The only thing to explain is why others believe in this SE illusion.

Those of us who do have SE, and have recognized it, know something the zombies cannot know: that SE must exist.


Nice try, but there cannot be any test for a zombie.

Of course the zombie would pretend they experience SE and some will even loudly disagree with the referenced article.


No, you have a perception that you think entails subjective experience. Surely you would acknowledge the many ways you or others could fool your other perceptions, but it's frankly confusing that you think this simply cannot happen to your perception of subjective awareness. The fact is, if this illusion was adaptive in evolutionary terms, which it arguably is, then you would have developed such an illusory perception.


>but it's frankly confusing that you think this simply cannot happen to your perception of subjective awareness

Because subjective awareness is not a perception; they're two entirely different kinds of concepts. Equating the two seems stranger to me than equating, say, the feeling of pain with the concept of a needle. You could convince me both the needle and the pain are illusory, but not that my experience is. That you seem to be genuinely convinced that subjective experience and our flawed sensory apparatuses are even in the same ballpark implies to me that you probably either do not have that experience, or that, unlike in my case, your experience is somehow not connected to the output that typing fingers can produce.


> Because subjective awareness is not a perception, they're two entirely different kinds of of concepts.

Not under eliminative materialism.

> You could convince me both the needle and the pain are illusionary, but not that my experience is.

The ineffability of the experience of the pain is the qualia that needs explanation, the rest are all compatible with mechanistic explanations. It is the ineffable quality that disappears under any serious scrutiny.

Edit: if you want to understand how eliminativism can work scientifically, I suggest reading this paper:

The attention schema theory: a mechanistic account of subjective awareness, http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00...

An analogy for tech nerds would be how the illusion of multitasking on a single CPU machine arises from imperceptibly fast context switching. Something similar happens in that theory, where our perceptual faculties are constantly switching between perceptual signals from our internal representation of the world, and the perceptual signals from our senses.


Recognizing one's subjective experience (SE) entails knowing for certain that it exists.

That true statement is definitely not an argument in favor of SE; it would be of no evidentiary value to a zombie, for example.

If and when you, naasking, have SE and recognize it: its existence will then be impossible to doubt.


I'm specifically referring to reviews and comments Dennett has made of the work of totally secular, atheist or agnostic, non-Christian philosophers of mind and neuroscientists. He literally just tosses insults around in some of his reviews, and it's kinda pathetic that such an eminent scholar engages in such tactics.

Overall, he does dismiss the hard problem, calling it stage magic. This is exactly the tactic you might imagine an intelligent man would use after having spent a lifetime failing to explain anything meaningful.


> Overall, He does dismiss the hard problem, calling it stage magic.

Because it is stage magic given his theory of mind, which he elaborated 30 years ago. I would certainly like to see these insults though if you could provide a citation.


Well, I think in some sense it's all just "controlled" hallucination, but, to me, it doesn't say much.

Sometimes I think people have some intuitive sense of how things ought to work, but they can't reconcile it with physics' description of the world. For example, how is it possible that a physical system such as a brain can have experiences, such as colors, love, objects, music, etc.?

I think the simplest answer to that conundrum is that, yes, physical systems cannot have these sorts of experiences; they do not follow from the theories or descriptions. But what you can do instead is simulate a world with such magical qualities. This, I think, is what some parts of the brain do: simulate the world that we personally find ourselves in.

Then the real question is how does the brain do that? How does it work? What are its learning rules? What does it end up learning?

My guess is that it ends up describing this universe (including ourselves) that we find ourselves in.

Why? You could imagine that the signals coming into the brain from the rest of the body are just a bit array that is changing all the time. What does it mean? That's for the brain to find out, so it's always in the process of making and updating predictive models of its input. Why? So that the biological robot that you are in control of can navigate successfully in the physical world and meet the needs of the organism, e.g. eat food, drink water, keep itself at the right temperature, keep itself from being harmed, and so on.


> To me, the fact that illusion and hallucination are being invoked is strong evidence of handwaving being engaged in to dismiss an otherwise difficult problem. This is not a scientifically rigorous explanation and it doesn't take the other side seriously.

Yeah. IIRC, some people say the passage of time is an illusion too. It's pretty hard for me to take seriously anyone who defends their favored theory or interpretation by hand-waving away contrary but fundamental observations as illusions.


> some people say the passage of time is an illusion too

Well, as the article says, the brain makes top-down predictions about our future. So the experience of time is a controlled illusion updated frequently with sense data.


I find the illusion explanation for complex phenomena unhelpful.

At best, this is doing nothing to actually explain - it has no value. 'Well yes, consciousness is difficult to understand because of an error in our reasoning' . It's almost circular reasoning ... you're explaining mental phenomena with mental phenomena.


In my experience, when reading about consciousness there are a couple types of claims you should be cautious of:

1) Claims that the causes of consciousness can be easily explained. As far as I can tell, they can't. Usually these explanations use the word "emergent" a lot, and tend to be linguistically impenetrable. Often more poetic than scientific when scrutinized closely.

2) Claims that consciousness doesn't exist. These arguments often feel like semantic nitpicking, and employ the word "illusion" as though it's somehow useful for explaining why you experience anything at all.

My pet criterion for whether a statement about consciousness is useful is: "Can this possibly help explain how I can merge consciousness with someone (or something) else?" To me, this feels like a tangible measurement for how materially useful a claim about consciousness might be. An example is literature on "Split-Brain" patients (such as https://www.nature.com/articles/483260a).

edit: Another category to be wary of is:

3) Claims that we cannot understand consciousness. Usually these boil down to "how can consciousness understand itself?". Poetic and poignant, but not particularly useful or scientifically grounded.


> "Can this possibly help explain how I can merge consciousness with someone (or something) else?"

Wait, what? Why do you suppose that you can merge consciousness with someone (or something) else, can you elaborate on that? The split-brain patients are cases with presumably separate consciousnesses that are not merged and AFAIK never get merged, especially after corpus callosotomy which would make it physically impossible, wouldn't it?


Yours is a very reasonable question. First of all, merging consciousness is admittedly a bit woo-woo; however, I feel it's worth considering.

According to the literature, when the corpus callosum is severed, it appears that the verbal part of the patient loses conscious access to parts of their body. However, those lost parts continue functioning independently, and are able to communicate non-verbally.

Put another way, it seems plausible that a conscious system is split into two conscious systems. This could imply the corpus callosum is involved in unifying conscious systems.


Additionally, I feel it's uncontroversial (from a materialist perspective) to claim that human consciousness can be built. Humans do it constantly through reproduction.


I suppose if consciousness is indeed an illusion then that would at least explain why it's so hard to understand. Or rather it may not be that it's hard to understand but understanding an illusion usually means dispelling it, which is problematic when that illusion is consciousness.

Then again, claiming something is an illusion is usually less than helpful. Colour is an illusion of sorts (or a quale, if we're being technical), but 'colour is an illusion' is an unhelpful answer to the question 'why is the sky blue?'.


Consciousness is literally the only thing which can't be an illusion, because it's through consciousness that illusions are perceived in the first place. Cogito ergo sum; you can doubt what you are experiencing as much as you like, maybe it's a dream, maybe it's a simulation, maybe there's an evil demon deceiving you through magical means, but you can't doubt the fact that you are experiencing at all, even doubt itself is part of the experience.


Agreed. "I think therefore I am" is still foundational, last I heard.


It’s kind of pointless to consider whether it’s an illusion, because it still comes down to the question of who the illusion is for. Heh


Precisely. There seems to be much debate about consciousness and what it is, but much of it is often unhelpful.


> If you are feeling pain, for example, this gives you very actionable knowledge about what you should do next in order to survive.

This seems exactly backwards to me. The visceral sensation of pain exists precisely in order to bypass your conscious decision-making. When you're hurting, your body doesn't want you to start weighing the pros and cons of various courses of action, it wants you to deal with the injury right now. That is the reason that pain feels very different -- much more urgent and harder to ignore -- than (say) getting a text message informing you that you have been hurt.


I've read some things suggesting that nonphysical pain can be registered in the brain similarly to physical pain. A text message can certainly impact you in a similar way.

Just as an example... Imagine somebody with PTSD. Let's say they get a text message that suggests to them, even indirectly, that their trauma is about to repeat. They may feel pretty intense physical danger and an urgency to intervene that makes no logical sense.


Certain kinds of pain stimuli cause you to move away from them automatically, such as when you put your hand on a stove.

But parts of your body might be sending pain signals into the brain for which there are no automatic actions, e.g. you are in a sauna and it's too much.

The state of the simulation that you are in is updated with this pain signal. So now you might experience discomfort in the sauna; it is repellent, so you might want to construct an action plan to move away from the source of the pain (the fact that you're in a sauna). But you might want to resist that for a moment and ignore the pain; maybe you have another, conflicting goal, which is to try to stay in the sauna longer than you're used to. Eventually the discomfort and pain might become overwhelming, so you move out of the sauna.


I agree with you. But it seems that pain probably existed before higher reasoning (since it's associated with the older parts of the brain), so it's a bit of a puzzle. Maybe early on the pain mechanisms existed without the associated qualia, or maybe pain also forces other control systems to be bypassed.


An imperfect but generally good model of the brain is the basic/reptilian portion still in control by default, but with the higher reasoning able to act as an override.

This is why it takes effort to do something like ignore pain, say no to candy bars, etc.


> This seems exactly backwards to me. The visceral sensation of pain exists precisely in order to bypass your conscious decision-making.

That quote is not about conscious decision making, it's about how the experience of pain motivates action that is adaptive to survival. If pain didn't feel like anything, but was instead just some light bulb that went off in your head, your ancestors could easily ignore it to their evolutionary detriment.

Therefore it makes sense that pain feels like something you want to avoid so it's not something you can just ignore without considerable effort.


>When you're hurting, your body doesn't want you to start weighing the pros and cons of various courses of action, it wants you to deal with the injury right now.

This is called instinct. It is an evolutionary mechanism so you don't have to think about what to do when you are getting hurt and your life is in danger. You react instantaneously.


Edit: A better phrasing would be: it is an evolutionary mechanism so you don't have to think about what to do *every time* you are getting hurt and your life is in danger.


Sounds like you're in agreement then? I don't see how it's backwards. The quoted text says nothing about decision-making or weighing pros and cons.


What I find fascinating about consciousness is that it seems to be bound to your location. But how and why? Why are you inside of your body and not in someone else's?

The article poses this question, but leaves it unanswered.

For me it raises the question: if consciousness is bound to a location, is it possible to transfer consciousness from one entity to another? Is there some kind of thing we have not yet detected? What happens when you die or when someone is born?

Or maybe we all (humans, animals and things) share a single consciousness and, for each Planck-time duration, switch through all of them without knowing?

I love to think about stuff like this, but are those questions even answerable?


Why is it bound to your location? You are just getting sensory input from a particular set of devices: eyes, ears, tactile, brain (thoughts/feelings/memories). But 'you' are not any of these. 'You', the observer receiving and experiencing all these readings, can be anywhere.

With meditation, you can get a clearer sense of what is really 'you', and what is just being observed/experienced by 'you'. But 'you' are not your body, or your thoughts, or your memories, or your feelings. Distilled, the 'you' of all living things could well be exactly the same; it is just that what is fed to them by the hardware differs.


I think 'you' (consciousness) are a combination of all of those things you mention (body, thoughts, memories, feelings, etc.). I don't think you can distill the 'you' without removing consciousness. If you perform this distillation, the only thing left is a dead body. (Maybe that's what you mean when you say that all living beings could well be the same if 'you' is removed.)

But I'm not very well versed in this topic, so I'm not sure whether what I'm saying is coherent, but I like and agree with Antonio Damasio's perspective and find it refreshing. That is where my comment is coming from.


> But 'you' are not any of these.

Ha ha, without eyes, ears, touch, and the rest of the senses, what would be left? A brain in a vat: is THAT the real you?

The self is an actor in a game; you are your game, not the "observer", a weird concept if I may say so, as if it were not related to the game it observes. Its very existence is on the line, though: one missed observation and it could be dead. You are the actor, not the observer, because you have skin in the game.


Just try a simple experiment. It will only take 10 minutes.

Set an alarm on your phone for 10 minutes. Sit down in a quiet room. Close your eyes. Your goal is to observe the physical sensations produced by natural breathing in your nose for the full 10 minutes. Give it your full attention, try not to miss a single breath, and really feel how each breath in and out feels at the tip of your nose as the air passes it. Easy, right?

Except you will probably find that before long, instead of having your attention on the breath like you fully intended, your attention will be on some thoughts. Where did the thoughts come from? Were 'you' thinking them? Or did the thoughts just arise and steal your attention, despite YOU fully intending to be paying attention to the breath for the 10 minutes? What is then your relationship to 'your' thoughts?


Yes, thoughts pop up on their own. The brain is a team made of a generator and a critic. They learn and work together. The generator is like a language model: it makes some contextual association or random jump. The critic then evaluates this fresh thought. Doing this for multiple rounds before acting is necessary for reasoning.

When you meditate you let your generator run free and unrestricted, or try to quiet it, while keeping the critic from interfering. Without its interference you just sit without an explicit goal. You're not achieving anything, you have no purpose. Just being.

That is nice to do sometimes; I think it's a very artistic mode of perception, but insufficient for the requirements of life. In life you need to engage the critic; you need to construct your solutions. The meditation experience is like some kinds of music, sports and art: a nice thing to cultivate, but not the pinnacle of consciousness.

Consciousness has a really important job to do: it fights for its own existence, it creates consciousness. Nobody else is going to help it if it doesn't help itself.
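The generator/critic idea above can be sketched in a few lines of code. This is a toy, not a brain model: `generate()` and `critic()` are hand-coded stand-ins of my own invention, where a real generator would be a learned model and the critic a learned evaluation. It only illustrates the "multiple rounds of propose-then-evaluate before acting" loop the comment describes.

```python
import random

def generate(context, rng):
    """Propose a candidate 'thought': a contextual association or a random jump."""
    associations = {
        "sauna": ["heat", "leave", "endure"],
        "stove": ["pain", "withdraw"],
    }
    # Contextual options plus one unconditional "random jump"
    return rng.choice(associations.get(context, []) + ["daydream"])

def critic(thought, goal):
    """Evaluate a fresh thought against the current goal (higher is better)."""
    return 1.0 if thought == goal else 0.0

def deliberate(context, goal, rounds=20, seed=0):
    """Run several generate/critique rounds before 'acting' on the best thought."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        thought = generate(context, rng)
        score = critic(thought, goal)
        if score > best_score:
            best, best_score = thought, score
    return best

best = deliberate("sauna", goal="leave")
```

In this sketch, more rounds give the critic more candidates to judge, loosely mirroring the comment's point that several generate/evaluate cycles happen before action; "meditation" would correspond to running `generate` while never calling `critic`.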


This is only fascinating if you refuse to accept a non-supernatural or non-metaphysical definition of consciousness. Otherwise, most of these questions become trivial when consciousness is considered as a physical, biological process bound to the brain.

Why is consciousness bound to a person's location? Because their brain is bound inside their skull.

Why are you inside your body and not someone else's? Because your brain is inside your body and not someone else's.

Is it possible to transfer consciousness from one entity to another? Transfer, no, because consciousness is inextricably linked to the brain. There's no separation between the hardware and the "software."

What happens when you die or someone is born? Nothing. No afterlife, no karma, no reincarnation, no bardo, nothing.

And finally, are those questions even answerable?

Yes, I believe they are, but many people are uncomfortable with the ramifications of the answer best supported by the available evidence: we are purely physical beings existing within a purely physical universe. Despite biology, neurology, chemistry, physics, etc. failing to show any credible evidence of a supernatural or non-biological component of consciousness apart from the brain, many people keep hoping someone will discover the soul someday and be freed from the prison of material reality.


> Otherwise, most of these questions become trivial when consciousness is considered as a physical, biological process bound to the brain.

If it were trivial, we'd be creating conscious entities via knowhow and will, not the old fashioned (fun) way.

We... need it be said... cannot. We can't even describe what consciousness is in a way that can be measured or analyzed. We can make things that do things, sure. We can even impart some of our own "consciousness" into these doer machines, like robots. But we have no useful way of understanding the phenomenon we feel and how it might differ from the robot, or a cat, or a bat, or whatever.

> Why is consciousness bound to a person's location? Because their brain is bound inside their skull.

We might as well ask: why is software in a robot the way it is? Because the software is on that CPU in that physical case over there. It's a non-answer and a diversion. Software can be transferred to other hardware, and there are many questions to be had whether software is consciousness, and also what the nature of that consciousness is given that there's clearly a layer of representation (1s and 0s) different than the material reality of the electrons pumping through silicon.


>If it were trivial, we'd be creating conscious entities via knowhow and will, not the old fashioned (fun) way.

I wasn't claiming the problem of consciousness as a whole was trivial, rather the specific questions being asked in the above comment become trivial when a biological origin of consciousness is assumed.

> But we have no useful way of understanding the phenomena we feel and how that might differ from the robot, or a cat, or a bat, or whatever.

This isn't entirely true. We've studied the brain and the relationship between the brain and consciousness for decades. We may not understand exactly how consciousness arises in the brain, or even why it exists, but there is absolutely a pile of evidence that consciousness is something the brain creates and that it doesn't exist apart from the brain.

>Software can be transferred to other hardware, and there are many questions to be had whether software is consciousness, and also what the nature of that consciousness is given that there's clearly a layer of representation (1s and 0s) different than the material reality of the electrons pumping through silicon.

If you want to be pedantic, software isn't ever transferred, only copied.

Also, while the "brain as computer" metaphor is tempting, I think it's also misleading. Decades of failure to produce a general AI demonstrate how alien one architecture is to another. Brains just don't operate enough like computers for comparisons of biological consciousness to software to be meaningful.

And that 'layer of representation' is an illusion. There are no 1s and 0s separate from the material reality of electrons and silicon, that's an entirely human abstraction that doesn't exist in the hardware, certainly not at the physical level where quantum effects become relevant and everything is just a charge that does or doesn't meet a tolerance.


> the specific questions being asked in the above comment become trivial when a biological origin of consciousness is assumed.

But unless there are complementary answers, they're trivial in the "useless" sense. They (1) don't provide any explanatory power, and (2) may very well be wrong, because, as we've established, we have no clue.

It's of the same argumentative caliber as the anthropic principle: "Well, we're here because that's just the way we do things here in UniverseCorp."

I too could assume that consciousness is due to a particularly fascinating arrangement of angels dancing on physical atomic pins. I could say that every person is localized to a certain angel, or maybe a soul, and vociferously call it trivial because if you assume this thing then it naturally follows. But I don't want to assume that thing because that thing is totally useless at describing anything. Which is still where we are with consciousness, and the history of science is filled with whiplash and disappointment at finding older more primitive beliefs were more "true."

> There are no 1s and 0s separate from the material reality of electrons and silicon, that's an entirely human abstraction that doesn't exist in the hardware

There are 1s and 0s that are going across the world right now from you to me, and they're being transferred into electrons and photons and who knows what else afterwards. They may be emergent phenomena, but they exist, just like molecules exist as combinations of atoms: they have rules, those rules are exact and can be understood. Those rules are also not materially real, and yet they can be made immanent. We as tiny humans have built a real system where we have separated representation from reality, and we have not yet determined whether The Universe (TM) has made a comparable innovation.


For all the fancy words here, I don't find any substance. The evidence that consciousness is tied to the physical brain is overwhelming, and evidence that it exists outside the brain is nonexistent.


Well said. I think one of the problems is that most people take consciousness as a given and have a hard time acknowledging that we actually have no damn clue what it really is.

It's easy to say that believing in a soul or something like it is hogwash, but just closing your eyes and telling yourself "I think, therefore I am" does not bring us any closer to an answer.

We have to assume that all unfalsifiable arguments could be true until we have proof that they are incorrect. That is the only scientific approach.


I can completely follow your logic. And I completely agree that going supernatural or descending into a soul debate is not constructive. That was not my intent.

Still, you and I can probably agree to some extent that there is an inner eye inside our heads looking outside. It also makes sense to me that those processes could spawn from a feedback loop inside a brain. Still: why did this inner eye spawn in this head we are in? There are 8 billion heads; why this one?

I'm not arguing for a soul or something. But I find it intriguing that there had to be a mechanism which made your inner eye spawn at the place where it now is. Only, what mechanism? Can it be explained physically? In my opinion it has to be, but maybe we have missed it so far.


>Still. why did this inner eye spawn in this head we are in? There are 8 billion heads, why this one?

I get what you are saying, but honestly that question doesn't make sense. You are this one because that is how your brain developed, just like you physically look the way you do because of how your body developed. To me, "why am I how I am" is no different from "why do I look this way?"

>Can it be explained physically?

Well... if I smash you in the head with a hammer, your "consciousness" would most likely be affected because I damaged your brain, right? It seems pretty straightforward that we can indirectly say that yes, it can be explained physically. We may not be able to explain the details, but it seems we can say it's simply what our brain evolved to do, just like how our physical looks evolved through various reasons and chance.


The question is, why did the FPV-experiencing consciousness happen to bind to the body you are in? And why does it stick?

I guess it would be interesting to look at why it persists through sleep. And whether there were ever instances of people who wake up with different personalities. Also edge cases around coma/near-death restarts.

Or maybe things work like coroutines, and no matter which thread you get assigned to the state is all in the context. I mean it looks like that is 99.9% the case given what we know about memory formation, physical brain structures etc.
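The coroutine analogy can be made concrete with a small sketch (the scenario and names here are invented for illustration): if all the state travels in a context object, it makes no difference which worker thread any given step of the computation is assigned to; the outcome is identical.

```python
from concurrent.futures import ThreadPoolExecutor

def step(context):
    """Advance a simulated 'mind' one step, using only its own context."""
    age = context["age"] + 1
    return {"age": age, "memories": context["memories"] + [f"event-{age}"]}

state = {"age": 0, "memories": []}
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(3):
        # Each step may land on a different pool thread; continuity
        # survives because the state lives in the context, not the thread.
        state = pool.submit(step, state).result()

print(state)  # {'age': 3, 'memories': ['event-1', 'event-2', 'event-3']}
```

The design point: `step` is a pure function of its context, so "which thread you get assigned to" is unobservable from inside, which is roughly what the memory-formation argument suggests about brains.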


> The question is, why did the FPV-experiencing consciousness happen to bind to the body you are in? And why does it stick?

You're asking questions equivalent to "why does the interior of a box happen to bind to that particular piece of folded cardboard? And why is the same interior there when the box is closed and reopened?"

If each brain has a consciousness, and each is uniquely generated by that brain, the 'binding' and 'sticking' questions go away.


That line of explanation doesn't do it for me, because there is only one brain that has my FPV consciousness. Why this particular one? I think it's a question related to the topics of breaking symmetry, or nondeterminism. It's also a great unsolved question that many smarter people than me couldn't answer so I'm not hopeful I'll live to see it answered. But I will continue to wonder at it now and then, like when articles like this pop up. :-)


>That line of explanation doesn't do it for me

I don't mean to be rude, but you are flat out wrong here: you are not accepting the "simplicity" of what is actually happening. You want to make it something complicated and metaphysical, which is why the explanation doesn't do it for you.

>me, because there is only one brain that has my FPV consciousness. Why this particular one?

Well... why do you look the way you do? Why doesn't everyone look exactly the same? Because there are tiny bits of random variation in our biological bodies. The same is happening in the brain: no one has exactly the same functioning brain. You have your FPV consciousness because that's the one you have.

If you are looking for some other answer, then it's just philosophical nonsense at some point. It's like asking "Why does Uranus look the way it does, while Neptune looks different?" It's just what happened; if Uranus and Neptune had switched how they looked to us from the beginning, you would never know, and you would be asking the same question, which doesn't make sense.


Likewise no offense but I don't think you are getting my point. That's fine.

Look, the bottom line is we need to understand how the phenomenon works so we can replicate it and harness it. My thinking may or may not be wrong, but what's certain is that it's not testable. Can't wait for an explanation that is :-)


> there is only one brain that has my FPV consciousness. Why this particular one?

To me, the answer is that you were born with that one.

Maybe it's better to think about how your FPV consciousness didn't exist prior to your birth. It came into being when you came into being. I don't think there's a separation between your FPV consciousness and your physical brain. If your metabolism stops, your FPV consciousness stops.


> Maybe it's better to think about how your FPV consciousness didn't exist prior to your birth

That is unknowable though. You may just not remember it. Is memory what makes consciousness?


People can lose the ability to form new memories without losing consciousness, but arguably without any memory at all you won't have any continuity of experience, which seems essential.


I think you aren't understanding the answers we're giving you. There is no difference between asking "why does my brain have my consciousness" and asking "why does my head have my face". It is exactly the same question.


I have a lot of trouble with that exact question. It's no wonder the concept of a 'soul' is so pervasive over the ages - it explains things so neatly. It's like there is a cosmic Kubernetes cluster and at birth a process gets randomly assigned to a CPU.


Yeah, it's called evolution. You are one bet in a pool of bets for future survival. A single, uninterrupted, open-ended process that created all life on this planet. How many of our creations can lead to such diversity in one single run?


Once you decide whether you believe in free will, which is supernatural by any interesting definition, things fall into place:)


Your "mind's eye" is getting information from your actual eyes through your optic nerves and cortex. This is how we know it is a physical mechanism, we can affect it in predictable ways with lesions, drugs and other physical methods.


I think that, as you learn, your brain creates symbols: neural pathways corresponding to real-world objects or concepts, with further pathways linking together related symbols. One of these symbols corresponds to the system itself, a self-reference. That's the "inner eye", and it wouldn't make sense for it to be in someone else's head, because then it wouldn't be self-referential.
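The symbol-network idea can be sketched as a tiny data structure (class and names invented for illustration, not a claim about neural implementation): concepts are nodes, learned associations are links, and one special symbol points back at the network itself.

```python
class SymbolNet:
    """A toy network of symbols linked by learned associations."""

    def __init__(self):
        self.links = {}  # symbol -> set of associated symbols

    def learn(self, a, b):
        """Record a bidirectional association between two symbols."""
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

mind = SymbolNet()
mind.learn("fire", "hot")
mind.learn("fire", "light")
mind.learn("self", mind)  # the self-symbol points at this very network

# The self-reference only holds from the inside: another network's
# "self" would point at *that* network, not this one.
assert mind in mind.links["self"]
```

This is the structural point of the comment: the "inner eye" symbol is defined by pointing at its own container, so the same symbol placed in a different head would no longer refer to itself.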


The problem with your "non-metaphysical definition of consciousness" is that it does not define what consciousness is. It actually pretends it does not exist.

Sure enough, we know consciousness is tied mainly to the body (and to the brain in particular, but not only). It starts and ends with it, and changes with every small change of it. They are just one "thing", two sides of the same coin, it seems.

The problem is, and it's not at all a trivial one, that we can know the physical world and our body that is part of it, but for all we know about that physical world, consciousness is not required. It's as if it were added on top for no good reason. We can feel the world and make sense of it, to the point where we have a good understanding of how the whole observable universe works; but if we had to describe consciousness to a distant relative from another universe, we couldn't do better than to say "so around here we have matter and space and time, and they act like this and that, according to those equations, and when this kind of particle enters the vicinity of that one then this happens, etc. Oh, and by the way, every time you have a piece of matter organized in such a way that it can somehow represent itself and its environment, then this thing feels that it exists independently, and I'm afraid I can't explain what feeling feels like; you have to be conscious to understand, sorry".

You could say that we know how physics works but not why, and similarly that we know how consciousness works but not why. But that would be inaccurate, because what we know about the "how" of consciousness is actually the "how" of behavior. We know nothing about the "how" of conscious experience; for instance, we know how pain works at the level of individual neurons and how it leads to some macroscopic behavior, but we have no clue how this relates to (not using "cause" because it's certainly not causal) an experience of pain, or what the subject of that experience is, or whether there is only one or many such subjects, or how independent these subjects are from each other, etc.


> Is it possible to transfer consciousness from one entity to another? transfer, no, because consciousness is inextricably linked to the brain.

What if we can transplant the brain itself? The consciousness is still contained in and created by the same brain, but every other part of the experience (sensory input) comes from another body, effectively transferring consciousness between bodies.


It takes surprisingly little effort to trick your consciousness. VR isn't even that good yet in terms of resolution, and it's quite convincing. If you were put in a VR headset as a child and given a dog's body or something, it's doubtful you could tell the difference.

A similar feeling is present when flying FPV drones. A good pilot typically feels like they are the drone. Your mind immediately separates the drone from the rest of the environment because it responds to your control input and matches visual perception.


You're right, and we don't even need VR for it: it's easy to get engrossed in a videogame and project yourself into the game.


It's completely strange that people downvote this comment. It is completely logical, and if you disagree with any of it, it is up to you to prove otherwise.

Obviously it does make people uncomfortable talking like this. I don't understand why; I guess they are scared of the simple truth that we are simply the most evolved animals on our planet, but still animals. There is nothing special about our brain (compared to other species'); it's just a bit more advanced. And we have physical features that have basically allowed us to take advantage of our brain power and continue to grow.

We are animals. So anyone who wants to talk about consciousness resting on something like a 'soul' or whatever had better be just as prepared to talk about how my cat also has a soul, and dolphins have souls, and why this dolphin's consciousness is in dolphin A instead of dolphin B.

I think the truth is a lot less interesting than people want to believe.


> It's completely strange that people downvote this comment. It is completely logical, and if you disagree with any of it, it is up to you to prove otherwise.

Technically, the burden of proof lies with the one making an assertion.

> Obviously it does make people uncomfortable talking like this

And people talking like this often get uncomfortable if someone doesn't accept their opinions as fact. People are weird in the most wonderful ways!

> I think the truth is a lot less interesting than people want to believe.

And also a lot more difficult to find than it seems, the nature of consciousness being what it is.


> it is up to you to prove otherwise

This is funny, because the reason I cannot prove to you that I am conscious *in addition to* being a body governed by the rules of physics is the same reason I believe consciousness is metaphysical.

Qed :)


> Otherwise, most of these questions become trivial when consciousness is considered as a physical, biological process bound to the brain.

They're also fairly trivial if you imagine God created and controls everything and that's just the way it is.

> What happens when you die or someone is born? Nothing. No afterlife, no karma, no reincarnation, no bardo, nothing.

It's certainly plausible. But is it true?

> And finally, are those questions even answerable?

> Yes, I believe they are, but many people are uncomfortable with the ramifications of the answer best supported by the available evidence: we are purely physical beings existing within a purely physical universe. Despite biology, neurology, chemistry, physics, etc. failing to show any credible evidence of a supernatural or non-biological component of consciousness apart from the brain, many people keep hoping someone will discover the soul someday and be freed from the prison of material reality.

Similarly, many people are uncomfortable with acknowledging what is actually true about these matters. There seem to be certain ideas that make all people fundamentally nervous, and the question of consciousness seems to be one of the most powerful.


> This is only fascinating if you refuse to accept a non supernatural or non metaphysical definition of consciousness.

That's a failure to appreciate the beauty, complexity and wonder of our brains, bodies and environment: no effort spent on appreciating, but still wanting to feel special. Thus consciousness simply has to have a supernatural element; it wouldn't be fun otherwise.


It's pretty straightforward to say when consciousness ends: with death. In my mind, a harder question is when it begins. After one cell, two cells, a trillion cells? If a scientist can pin down the moment it starts, that may lead to more progress on the other questions.


That is not really a big problem if you take consciousness as some sort of a continuous measure. For simplicity I usually use skill at playing any chosen real-time game.


The universe is incredibly intelligible and follows invisible laws. That, per definition, is a supernatural observation since we can’t touch the invisible laws.


>That, per definition, is a supernatural observation since we can’t touch the invisible laws.

The definition of supernatural is that which exists outside of the order of nature, not that which is only explicitly tangible. Natural laws are human-created abstractions which explain the observed behaviors of the natural world, which by definition makes them natural, not supernatural.


“The definition of supernatural is that which exists outside of the order of nature.”

Natural laws exist without humans, unless one believes the universe disappears without a human to collapse the wave function.

Nature follows these laws, so the laws are abstractly outside of nature.


Again, your definitions of "natural" and "supernatural" are not ones that anyone else uses.

Natural laws are observed within nature, which makes them a part of nature. To claim that natural laws are supernatural simply because they're abstractions, and that consciousness is therefore supernatural regardless of its nature, is absurd. It's a rhetorical cheat, not an argument.


That’s fair, although I’m not sure we can observe consciousness, we can only experience it, we have no measure…


To me it's obvious: since consciousness arises from external and internal stimuli of the body, the part of the physical world that it can perceive and affect is limited by the physical capabilities of the body. If there were a technology to hook up a computer to your brain, and it could then feed you data from a sensor located somewhere far away, you would effectively extend the physical capabilities of your body, thus extending the spatial presence of your consciousness.


I wonder if the reverse is true. If you were very old and floating in space after the last black hole has evaporated, entropy is evenly distributed, and time has become meaningless, would your consciousness end? Without any input there would be no output.

edit: I suppose a counterpoint would be that sensory deprivation tanks should reduce your consciousness; however, most report it to be enhanced during the experience.


It may be that in everyday life external stimuli always overpower internal ones, but in a sensory deprivation tank you eliminate the external ones altogether.


> What I find fascinating about consciousness is that it seems to be bound to your location. But how and why? Why are you inside of your body and not in someone elses?

From what I can best tell, consciousness is a property or a set of simulated properties, I don't really know what it is and I guess we won't have good answers for it until we will be able to properly simulate something like a brain and have better brain theories to explain it.

But the world you find yourself in is a simulation, and your body is simulated. Everything you consciously experience is simulated. This simulation is running inside your brain. Part of your brain is trying to explain its input, so it makes models to predict its inputs. I think it learns a model of your body, and your mind, and how these things relate to each other and to what you experience "out there".

> I love to think about stuff like this, but are those questions even answerable?

Yes. (I think so)


> What I find fascinating about consciousness is that it seems to be bound to your location. But how and why? Why are you inside of your body and not in someone elses?

This seems no more puzzling or interesting than asking why your web browser is running on your computer rather than mine. Of course the computation must be contained within the computational system that's running it.


>Why are you inside of your body and not in someone elses?

This seems definitionally incoherent, on the level of "if I had been Shakespeare, who would he have been?"

If "you" were "in" "someone else's" body... you'd be them, not you.


Can human GPT-3? Yes, human can GPT-3 too.


My consciousness does not seem bound to my body; my senses do, but it is trivial to travel to other imaginary places with consciousness.


> One point on which you both clearly agree is that consciousness is an embodied phenomenon,

It probably is, but I wouldn't be so sure. I have neither a feeling nor a rationally logical proof of this idea. It's just an intuitive guess.

> we have the ineffable experience of being alive in this world

The experience of being, and the experience of attributes like being "alive" and "somewhere in particular", are separate things. AFAIK the latter are even attributed to specific parts of the brain scientifically, and it's known these can be broken. I certainly feel like "I am", but I have no idea whether I am alive, or in which world I am, for the first seconds after the alarm clock rings. I feel my being and also have visual perception, but judgement/semantics in what I see emerges 0.5-10 seconds later.


Well, being on delay doesn't detract from consciousness. Think of it this way: No matter how incredibly fast a monstrous AI ever becomes, it will always suffer some delay.

>> in which world I am the first seconds after the alarm clock rings.

Do you actually believe those are real, other worlds? I'm tempted to, sometimes. But if they are, it's more an argument that consciousness is real and something even grander than we acknowledge, if it can flip states and universes like that.


> Do you actually believe those are real, other worlds?

Logically this depends on how you define a world. Even if you imagine them as parallel universes with physics similar or mind-bogglingly dissimilar to that of our world, you can still say there is just one world in which all these sub-worlds exist. So, speaking seriously, I wouldn't say there are other worlds or there are not. I would just say "observable world" and assume there are many things beyond what we can comprehend or imagine. Even if we agree, just for the sake of argument, that the very consciousness itself is not a product of the material brain, our abilities to perceive, memorize and reason surely are, and thus have limitations. Believing these limitations don't render us futile in trying to understand all the phenomena of the existing universe and their nature seems absurdly naïve to me.

Practically I just mean I have no idea (or even question) about the world or anything the moment I wake up. As soon as I wake up I feel like "I am" but not "I am in this world". The idea of where I am comes after a variable non-zero delay.


Waking up is an interesting thing, and that's why your post caught my attention. I often feel like I'm standing on the edge between two fully functioning worlds.

>> Believing these limitations don't render us futile in trying to understand all the phenomena of existing universe and their nature seems absurdly naïve to me.

I don't so much agree. Yes, "all the phenomena" may never be understandable. But if I think about the cognitive ability we have now (to look at a star and understand it's plasma; what plasma is; to see that the sun rises and know we are rotating on an axis while orbiting it; to be able to visualize where we are on that axis and plane; to move electrons around tiny circuits, create images and sound by controlling these things we can't see) it seems to me that you could have looked at humans 10,000 years ago and thought they would never be able to understand these things, but we're not very different from them. I'm less pessimistic on it. I think if we have another 10,000 years we'll probably be quite good at understanding the universe. Maybe even the universe of dreams?


> I have neither a feeling nor a rationally logical proof of this idea. It's just an intuitive guess.

This intuition, that there is something special and unique about what 'we' are, is baked into our brains by many thousands of years of evolution, and this makes it so extremely difficult for us to believe that there's nothing more to us than a bunch of molecules. This is why people have invented theories about souls, reincarnation, collective consciousness and what not.


It's difficult to believe we are nothing more than a bunch of molecules because I am still me if I lose an arm, a memory or any given part of my brain. One can gradually replace every tiny part of their body with prosthetics, and the point at which they stop being themselves and become an imitation both exists and does not. What's the difference between me and a perfect imitation? I believe I am going to remain where I was if someone just makes an atom-perfect copy of me and puts it nearby; the copy will probably have exactly the same memories I have and believe it is me, but will not actually be me.


> It's difficult to believe we are nothing more than a bunch of molecules because I still am me...

What do you mean by still "me"? If I lose an arm, I would definitely say I have lost a part of "me".

> What's the difference between me and a perfect imitation?

As far as I can tell, there is no difference until you begin to interact with the outside world, at which point the "yous" are no longer the same. Your body is constantly replacing itself with new parts and molecules to keep your brain and body functioning. This effectively means you are building a new copy of yourself all the time.

Why does it bother you if another process does it all at once? Does gradually replacing all of what is "you" make you feel better about "you" versus some ability to make it all at once and see it happening?

> I believe I am going to remain where I was if someone just makes an atom-perfect copy of mine and puts it near so the copy will probably have exactly the same memories I have and believe it is me but will not be me actually.

The other argument is that the "clone" you will also very vigorously defend its "me" status, since you are perfect copies, and it will claim that it is its own "me", brought about not by birth but by artificial means, and that it has the exact same rights as you. It didn't choose the path of "birth", but if you are perfect clones, why does it have any less right to the life it believes it has than you do? Because you were "born naturally"? What if you made the clone for spare body parts and, through a misstep, the clone woke up? I would argue the clone now has the right to your life for gross human rights violations, if it so chooses, while you would head off to prison, hopefully without parole.


I understand your point, but if we truly are "Ships of Theseus" in meat form, then is consciousness only a present-moment phenomenon that can merely tap into our biological record-keeping?

We may be relying too heavily on the basis that our experience and our sense of being are a combination that begets consciousness; when they may just as likely be separate systems.


Aren’t you describing exactly the kind of “intuitive guess” he referred to, for why consciousness is embodied?


We are unique as humans if you compare us to any other biological being. Saying we are not is a very big understatement.


My interpretation of the GP's use of 'unique' here isn't specific necessarily to humans, just to conscious beings.

Still, if you take the set H of properties of humans and the set B of the properties of all other Earth creatures, you'll find that H is a proper subset of B. What makes humans 'unique' in the regard you mean isn't kind, but scale and combination.


What do you mean by unique?


> The experience of being, and the experience of attributes like "alive" and in "somewhere in particular" are separate things.

Yes, you can have the former without the latter - it's a key component of attaining "enlightenment" or "stream entry", which is perhaps most feasibly achieved through vipassana meditation. This gives you a reasonably direct experience of anatta or "non-self" (where 'self' actually means something like personal identity and the attributes thereof, not phenomenological experience per se).


If consciousness is not embodied, that would imply metaphysics and body/mind dualism, i.e. something contrary to what one might term the scientific world view.


Can consciousness be a part of "what one might term the scientific world view" at all? Is there an instrument (or a human expert) capable of measuring consciousness and reliably telling a really good neural network apart from a conscious human? What about a conscious versus an unconscious human?

"what one might term the scientific world view" is great and very practical but not perfect. There certainly are legitimate questions it probably has no way to address.

I feel pretty skeptical about exploring the nature of consciousness without employing techniques available in Buddhism, psychonetics[1] or something like that. These are fundamentally unscientific (although not imaginary - execution of the practices can easily be detected with EEG) but can at least let us observe the object of study with some basic degree of objectivity, reason about it and do something useful with it.

[1] https://news.ycombinator.com/item?id=28838445


The most common naïve view among scientists is in fact quite close to body/mind dualism, much as phrased by Descartes. People who have examined the problem in depth tend to go for some form of pan-protopsychism, where the substrate for subjective phenomenology has to somehow be a part of the physical universe (since it clearly enters into causal relations with the rest of it!) but can also be expected to be rather physically simple, so that it can be consistently endowed with both "physical" and "subjectively experienced, phenomenological" features or states.


This is kind of where I'm at. To me clearly qualia are part of the physical world, but no amount of understanding of neurophysiology is going to account for their existence without positing "something else".

My best guess is qualia and consciousness are like an EM field, something that exists and has some relationship with known physical entities but is distinct at the same time. It may be amplified by, say, some computational structure, but exists independent of it.

At one time I had convinced myself that qualia were related to time, in the sense that it was the "substantive" manifestation of time as a dimension, but now I'm not sure what I was thinking.

I suspect in the end consciousness will end up in the domain of physics rather than neurobiology and psychology alone.


I don't think consciousness is an embodied phenomenon. It's probably just a property or set of properties of the simulated world that we find ourselves in (the simulated world running on the brain).

To me, embodiment as such seems pseudoscientific. How is it possible, at this point in time, that anything inside your pinky toe might influence the activity of your brain without sending nerve signals to the brain or secreting something into its surroundings which might eventually signal the brain? By some spooky magic?


I think it is impossible to understand Consciousness because Consciousness creates the brain and the whole universe.

If you think of Consciousness as the thing that is first and which creates all the rest, it answers a lot of hard questions - questions like why is it so hard to even define Consciousness?

Or weird paradoxes in QM.

Or things like: why do we all share the present moment?

Once you start to see Consciousness as the foundation of everything, it all makes a lot more sense. Everything fits together much more easily.

A lot of religions have also come to this conclusion.

There is a book called Biocentrism by Robert Lanza, which I thought was really good and went into details about this.


> I think it is impossible to understand Consciousness because Consciousness creates the brain and the whole universe.

I think it's much more likely that consciousness is a result of brain activity. For example, if consciousness creates the brain and the whole universe, why don't I have any reliable access to what is going on except partially in my immediate vicinity? Also, how can it create such a wonderfully consistent world out there while I cannot reliably drive to the store because my mind is preoccupied and the rest of me goes on autopilot?

I think you would require fewer bits to explain it the other way around.


Replace the word “Consciousness” with “God” and it will equally make sense.


Thanks for the link. I agree with Seth that consciousness is much like a controlled hallucination - after all, we don't experience reality directly, but through the predictions of our brain. But then the question is, why do you need this hallucination? I suspect that consciousness evolved because it makes an organism more capable than one who doesn't. But what does consciousness offer over just taking in sensory input and making predictions without having qualia (much like a computer does)?


I think the important part is that you can represent more things in consciousness than just "five-sense data", i.e. exotic sense data like emotions, reactions, imagination, memory, impulses, selfhood, and correlate them with the ordinary sensory data. A non-conscious animal may see a predator and get scared, but I don't know if it knows that it's scared because of the predator. I think you need an integrated space for both direct data and metadata for that.

Also, in humans, I'd presume that consciousness, being introspectable, relates to our ability to understand our own interests and behaviors, and those of others, in symbolic terms and express them with language. It's how we close the circle between thinking and "I think".

(I agree with the article that people vastly overestimate the mystery of consciousness.)

PS: I think consciousness being introspectable is very underrated. We can perceive memories of perception, we can feel nostalgic for having felt a certain way in the past, and we can realize and name that we are doing this. The "reentrancy" of consciousness is a core aspect of our cognitive and strategic flexibility.


A non-conscious animal may see a predator and get scared

This is maybe just a matter of definition, but I find it highly unlikely that that scenario exists. "Seeing" a predator already requires so much higher-level processing (not just the visual cues, but the analysis and classification) that I find it unlikely the animal does not have a working distinction between intra-body and outside-world sensations.

The problem with defining "consciousness" is that we do not have reliable inter-species communication to compare notes between species, so our definition of consciousness itself is hampered by the ability to communicate that sense of consciousness to each other. Already in your post, there's seven different interpretations of what "consciousness" could actually mean:

- ability to recall past experiences ("emotions", "memory")

- awareness of the self ("impulses", "selfhood")

- ability to conceive experiences that do not correspond with sensory inputs ("imagination")

- meta-awareness of instinctual reactions ("if it knows that it's scared because of the predator")

- introspection ("our ability to understand our own interests and behaviors")

- empathy ("and those of others")

- communicating our sense of self to others ("express them with language")

Personally, I think we should assume that all animal species possess the first four, all large mammals can do the first five, and many species have the ability to do the sixth (it's been proven in dogs, apes, parrots, crows, and dolphins -- and probably others).


All the things you mentioned are products of evolution invented for the purpose of increasing survival rate. For example emotions serve us mammals to bond with each other, bring us closer together so that we help each other in order to survive.


Explanations like "consciousness is a controlled hallucination" or "consciousness is just an illusion" don't answer anything for me.

To be able to have a hallucination or an illusion, you still need the consciousness experiencing that. Explain that consciousness.

> I suspect that consciousness evolved because it makes an organism more capable than one who doesn't.

You can't prove consciousness from behavior (afaik), and it's behavior that makes organisms more capable


> "consciousness is just an illusion"

Replace "illusion" with "perception" and it will make a lot more sense.

Consciousness does obviously exist in one form or another, but when people think intuitively about consciousness they imagine it as an actor. A little guy in your head that looks at the world and makes decisions.

That's the illusion. That "little guy", that "self", isn't the one doing the work; your brain is the one doing the work. The little guy is how the brain tries to make sense of itself.

Just like photons hitting your eyeballs might be interpreted as cats and dogs out there in the world, the brain interprets or perceives whatever it is doing internally as a "self".

> You can't prove consciousness from behavior (afaik)

Without some sense of self any creature would have a pretty difficult time surviving, as it wouldn't be able to tell itself apart from the rest of the world.


> The little guy is how the brain tries to make sense of itself.

That still doesn't explain consciousness itself. Though I think there are many different interpretations and definitions for the word consciousness so we might be talking about a different one.

> Without some sens of self any creature would have a pretty difficult time surviving, as it wouldn't be able to tell itself apart from the rest of the world.

A cell just replicates because that's what the molecules in it happen to do


> That still doesn't explain consciousness itself.

Well, which part doesn't it explain? It's pretty conclusive as far as I am concerned. As among other things, you can explain why philosophers would even ask that question in the first place.

> A cell just replicates because that's what the molecules in it happen to do

And transistors just go on and off, yet people still spend years studying computer science. The existence of simple mechanism at one level doesn't prevent higher order mechanism at another, quite the opposite. Those cells aren't going to do much replicating when you get killed by a tiger, some awareness of yourself and the environment helps prevent that.


> Well, which part doesn't it explain?

It doesn't explain how consciousness emerges from this. Brains/computers/processes can self-reference or self-explain all they want, that still doesn't say how doing so results in having the actual consciousness you experience.

It's not a trivial thing we're talking about here: there could be a brain just executing logic and making decisions, or there could be that same brain but actually aware of that fact; not just aware in the sense that it's processing inputs, including inputs from which it can derive that it itself exists, but aware in the sense that you are there, the one in that brain.

The only one that you can know for sure has consciousness is yourself, but can you explain why you have it?

If the brain is an electromechanical process that works on its own, and consciousness is just observing it, then it doesn't need this observer, the process already does the same on its own.


> there could be a brain just executing logic and making decisions, or there could be that same brain but it's actually aware of that fact

But here is the thing: you are not aware of any of this! You are only aware of the world around you and of yourself in the form of a high-level description. You are aware of cats, dogs, your arms and legs, etc. But that's perception, not processes being magically self-aware. You have no clue what your brain is actually doing; that's completely invisible to you.

And that's the illusion. The process you are trying to explain doesn't exist. Consciousness is not processes being self-aware. Consciousness is a process that tries to model the world in a way your brain can use to make sense of it. And just as it does with cats and dogs, so it does with itself. The thing you call "yourself" is a perception generated by the brain; it's not a thing that exists and does stuff.


If you have a model that says the brain is a singular entity there will always be problems how to phrase meta-activity such as self-awareness and consciousness. If the brain is singular, what does the observing? Does the observing part belong to the brain or not? If the brain is observing itself, which part does the observing and which is the observed? Can we change the roles, and make the observee the observer? How does "the brain" decide what "the brain" is thinking about?

The problem with this model is that it's regarding the brain as a whole as if it's a single processing entity, since that's how we experience it, but physically that's far from the truth: the brain is a highly segmented parallel processing machine and (for as far as I know) there is no clear control hierarchy within the brain to point at a single location to say "this is where [thought/sensation/self] originates".

So my theory is that consciousness is the emergent property of many separate specialized brain areas responding to and interacting with each other: either directly (neuronal links), via the body (flexing a muscle tightens the skin, creating a sensory feedback loop), and through the world (turning your head changes what your eyes perceive).

So speaking in terms of functionality, there is no "brain" as a single entity. The brain is like an ant colony, using hormones and electrical impulses to keep its disparate parts acting in unity. Under this model, consciousness can be defined as the push and pull between these parts, and self-awareness as the consensus model under which the separate parts reached some equilibrium. You won't have problems defining which parts do the observing and which parts are observed: what we "experience" as our mind is the tug-of-war between the processing areas, not the processor activities themselves. And similarly, what we experience as consciousness is the ability of our "mind" to drown out that cacophony and purposefully cede control ("focus") to different brain areas at different times.

Of course, there's no evidence that any of this is true, and there's still plenty of questions about the mechanism through which this would happen, but for me personally this model has allowed me to move on from the vicious cycle of having to first define "brain" in order to define "thought" in order to define "awareness" in order to define "mind" in order to define "self" in order to define "brain".


The hallucination analogy is useful, I think, in some ways, in that it could allow access to a view of reality that has important parts represented in a meaningful way. But surely a hallucination requires an underlying consciousness to experience it? So for me it's not very useful in explaining the phenomenon of consciousness itself. It seems that it's not far removed from the old "theatre" analogy, which requires an unexplained "homunculus" (and probably homunculi ad infinitum).


> we don't experience reality directly

That muddies the water, IMO. We do experience reality directly: our nerve endings do. The next step is processing, which is somewhat mediated by low-level expectations. This takes place at sub-conscious levels. Conscious perception comes much later, and is obviously more filtered. To call it a hallucination is word play.


There are fixed patterns in still images that appear to move when we perceive them. "We don't experience reality directly" and calling it a hallucination seems apt - the thing we experience when we look at things is less "what's actually there" and more "the best guess our brain can construct about what's there".

Additional examples of this include the non-perception of the visual blind spot, how the brain fills in details in the middle of saccades, and how task-specific training causes perceptual differences (eg, thrown baseballs look bigger to professional batters).


> We do experience reality directly: our nerve endings do.

Depending on how you define "reality" we only experience a tiny subset of it. We can only see/hear/touch/smell/taste a very restricted universe, only what our "antennas" can detect. In fact I would argue that we experience close to 0% of all of reality. On top of this extremely restricted input our processing of it is imperfect which means some of what we experience has nothing to do with reality.


Restricted as it may be, we experience reality. Experiencing reality completely is impossible anyway.

And in the context of hallucination, what we can't experience doesn't matter.


Sure, I don't disagree but (to me anyway) it's important to point out the difference between reality vs. our potential intake from it. The blind men and the elephant parable applies.


I think what you mean by reality is physical reality; if so, then it is impossible for us to experience it directly. You are a simulated being in a simulated world. The simulated world runs on a physical brain. Everything you experience is "hallucination," but if your physical brain works normally then it will be constrained to explain the input to the brain. Consciousness is most likely a property or set of properties of this simulation. Some part of your brain makes predictive models of its input, and the things that it models are yourself, your current environment, other stuff going on (the mind), how these things relate to each other, and such. My guess is that qualia are something like primitives of this simulated world.


We obviously don't have the full picture about sensations -- they might be a side effect of specialized (biochemical) neural architectures, but more likely there is something intrinsic to them that grants advantages to sensations.

There's something to making things really "personal" to an individual system having sensations that make it act in its best interest.

Sensations are also very flexible biochemical abstractions and solutions to problems across aeons.

It would have been quite hard for organisms to evolve without such abstractions -- like a worm recoiling in pain and a human quickly retracting his burnt hand or avoiding certain kinds of people who have hurt him before.


Maybe consciousness is a side effect, an emergent phenomenon: once the brain gets complicated enough it will just arise (on a scale)... Also, are our decisions ours? Or does some inner layer in us create the decision and then we just consciously observe it / let it play out...


>maybe consciousness is a side effect, an emergent phenomena. once the brain gets complicated enough it will just arise (on a scale)

This is my feeling too. Since the brain is just physical matter arranged a certain way, doing a certain type of processing, I really think it possible that some computer programs we use today are also doing the right kind of processing to grant them a simple subjective inner experience. Of course most programs don’t have a way to tell us if that’s the case, except maybe GPT-3 :)

> are our decisions ours? or some inner layer in us creates the decision and then we just consciously observe it / let it play out

I have evidence for it being the latter: our higher brain functions let us rationalize, to ourselves, the decisions that were deterministically made by our subconscious, so it feels like “you” made the decisions “because you wanted to”. People who have the connection between their brain hemispheres severed can unconsciously reach a decision in one hemisphere, but then consciously rationalize it in the other hemisphere, without having experienced the making of the decision [0]:

>In one example of this kind of test, a picture of a chicken claw was flashed to the left hemisphere and a picture of a snow scene to the right hemisphere. Of the array of pictures placed in front of the subject, the obviously correct association is a chicken for the chicken claw and a shovel for the snow scene. Patient P.S. responded by choosing the shovel with the left hand and the chicken with the right. When asked why he chose these items, his left hemisphere replied `Oh, that's simple. The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed'. Here the left brain, observing the left hand's response, interprets that response in a context consistent with its sphere of knowledge—one that does not include information about the left hemifield snow scene.

[0] https://academic.oup.com/brain/article/123/7/1293/380106


I think the most simple explanation is the brain runs a simulation of its environment and at some point after birth it puts its human into this simulation. That's probably when children start using the word 'I' and raise a consciousness.


Had this thought recently: The function of psychedelics is not to induce hallucinations, but to help you recognize that you’re always already hallucinating.


Is it the fact that a computer doesn’t have one of a sort?


One troubling analogy would be occasionalism.

Occasionalists believe that in order for cause and effect to work, God has to intercede. This rather elegantly deals with a whole load of theoretical problems around causal realism.

If you were an occasionalist, you might think that, if such a thing as a 'god-of-the-gaps' exists, it should be possible to build a device that mimics its powers, a 'god machine'. Such a device would be very useful.

Except, if occasionalism is wrong, and the universe simply doesn't work like that, it's obviously going to be totally impossible to build a machine that replicates the effect, or to understand how the effect could function.

What if consciousness is just another 'god-of-the-gaps', and we can't replicate it or understand it, because it's fundamentally a nonsensical concept?


What if we have the misconception of thinking we are a physical person in a physical world who has access to a weird mind? What if instead the physical brain is simulating the universe we individually find ourselves in, and we are only simulated, and we infer a physical world?

You can at least scientifically model and test this.


> If you were an occasionalist, you might think, if such a thing as a 'god-of-the-gaps' exists, it should be possible to build a device that mimics its powers, a 'god machine'.

Then that occasionalist god might interfere and break your machine and maybe break you too for trying to emulate a god.


This is Dan Dennett's "illusion of consciousness".


I think this discussion is probably dead being that it's 24 hours old now but I just came across this Wittgenstein quote and felt it might be relevant and inspire some thought on this.

    The limits of my language stand for the limits of my world.
    There really is only one world soul, which I for preference call my soul and as which alone I conceive what I call the souls of others.
    The above remark gives the key for deciding the way in which solipsism is a truth. (Wittgenstein 1961: 49)
    The I makes its appearance in philosophy through the world’s being my world. (Wittgenstein 1961: 80)
    Here we can see that solipsism coincides with pure realism, if it is strictly thought out.
    The I of solipsism shrinks to an extensionless point and what remains is the reality co-ordinate with it.
    . . .
    I have to judge the world, to measure things.
    The philosophical I is not the human being, not the human body or the human soul with the psychological properties, but the metaphysical subject, the boundary (not a part) of the world. (Wittgenstein 1961: 82)

    This is the way I have travelled: Idealism singles men out from the world as unique, solipsism singles me alone out, and at last I see that I too belong with the rest of the world, and so on the one side nothing is left over, and on the other side, as unique, the world. In this way idealism leads to realism if it is strictly thought out. (Wittgenstein 1961: 85)


Whew! I read the article and this entire thread. I'm not surprised that most people are materialists. But I think if you really look into the literature on OOBEs and parapsychology you will find that there are some amazing experiences that are not explained by metaphysical materialism. And you can experience this yourself with deep meditation and/or the blessings of teachers. I think the scientific method is valid for exploring these states too. True Yoga is a science of mind control. I would advise looking into Patanjali's Eight-fold Path with the goal of kaivalya/Asamprajnata Samadhi/Cosmic Consciousness and the concept of prapatti (surrender to grace of the higher self and others). Let's not throw out the baby with the bathwater when it comes to Scientific paradigms! An interesting perspective is also found in the RBC from brainvoyage.com and a fascinating refutation of materialism is found on p.6 here: http://www.pni.org/groundbreaking/TDVPunification.pdf


I've been studying consciousness recently, specifically the question of how physics can explain it. It turns out there is a theory of Strong Emergence, for which consciousness is arguably the only example physics offers. I find it a fascinating theory. It is strong emergence if "The whole is other than the sum of its parts."


I.e., "The speed is other than the sum of the car parts." There is nothing there.


That would be weak emergence.


It's table stakes. If you can't process that, you are guaranteed to understand no more.


> What’s So Hard About Understanding Consciousness?

It's the same kind of hard that stands in the way of programs debugging themselves.


Why is consciousness always assumed to be a constant? Why not rather call it introspection?

If you zone out, are you still conscious? Or if you are watching a movie and not thinking about yourself, are you still conscious?


From how I see it, understanding consciousness is impossible because it's difficult to define and even harder to explain.

Go back a couple billion years before we had life on Earth. There was no life until abiogenesis (https://en.wikipedia.org/wiki/Abiogenesis) occurred. Somehow, over the course of a couple billion years, self-replicating molecules turned into single-celled life, then multicellular life, and eventually the complex lifeforms we see today.

But...that still doesn't explain consciousness.

To an external observer, a brain is just a very complex set of electrical and chemical reactions. But to that person, there's a consciousness behind it all. But that consciousness can be altered by drugs and physical injury. It calls into question if anybody actually truly has free will, or if a sufficiently advanced computer could perfectly simulate exactly how a person would react to anything given a perfect snapshot of the wiring of their neurons and states of synapses.

I don't know how to define consciousness in a way that couldn't also be applied to a sufficiently advanced AI/AGI.


> understanding consciousness is impossible because it's difficult to define and even harder to explain.

For sure there isn't one agreed upon definition of consciousness. On the odd occasion I get in a discussion about it, the first thing I say is: what is your definition of consciousness.

I think given a definition of consciousness, it can be discussed and progress can be made explaining that particular definition. But if the two parties don't agree to a definition up front, then it is likely to result in a trainwreck.


> if a sufficiently advanced computer could perfectly simulate exactly how a person would react to anything given a perfect snapshot of the wiring of their neurons and states of synapses.

And if so, would that simulation still create the subjective experience? Or would it act exactly the same as the real person except the subjective experience never occurs?


My pet theory is that people who think that consciousness is not a hard problem are P-Zombies that can't experience the problem.

On a more serious note, as a materialist, I have a really hard time accepting that my self/experience can't be explained materialistically. It's like I have this nice theory of everything, but it fails when I try to apply it to myself. Reminds me of Gödel's incompleteness theorem.


Count me as a P-Zombie. I think the reason that consciousness is a hard problem is that it's largely used as a word referring to nothing in particular, and as such can be argued about endlessly.

The real problem is locality. Why am I located in me instead of in you? Why is it so much easier for me to see out of my eyes than it is for me to see out of yours? What's the connection between my perspective and my body?

The idea that self-awareness/consciousness ("I am me") is the big problem is weird. I have no problem locating that in the material world. If I program a computer to poll all of its peripherals, report on its RAM and processors, and adjust how it runs its programs based on that, that's as much self-awareness as I have, if not more.
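The kind of self-polling program the comment describes is easy to sketch. Here is a minimal, illustrative Python version (the function names and the specific "adjustment" are my own invention, not any standard API): the process inspects its own resource state and changes its future behavior based on what it finds.

```python
# Toy "self-monitoring" process, in the spirit of the comment above:
# poll your own state, report on it, and adjust how you run.
# Uses only the standard library (resource is Unix-only).
import os
import resource
import sys

def self_report():
    """Inspect this process's own resources and return a report."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "pid": os.getpid(),
        "peak_rss": usage.ru_maxrss,          # peak memory (KB on Linux)
        "user_cpu_s": usage.ru_utime,         # CPU time consumed so far
        "recursion_limit": sys.getrecursionlimit(),
    }

def adjust(state):
    """Change future behavior based on the self-report."""
    if state["recursion_limit"] < 2000:
        sys.setrecursionlimit(2000)           # allow "deeper thought"
    return sys.getrecursionlimit()

state = self_report()
new_limit = adjust(state)
print(state["pid"], state["peak_rss"], new_limit)
```

Whether such a feedback loop constitutes "self-awareness" in any interesting sense is, of course, exactly the point under dispute.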


> What's the connection between my perspective and my body?

Your body is doing the experiencing. It is what's perceiving.

Your mind is just your body's UI for the world and your body is the user.


I understand that. It's my body that's doing the thinking, and my body that's typing this comment. The question is why there's some sort of proximity that I have with my comment that I don't have to yours.


> The question is why there's some sort of proximity that I have with my comment that I don't have to yours.

Because it expresses your thoughts. Your mind has access to them in a way that it does not have access to anyone else's thoughts.

In particular, that access includes memories of having them and taking the actions necessary to post them. If you lost those memories, and your words were not identified as being yours, you would no longer regard them any differently than you do anyone else's (it is not even certain that you would agree with them.)

Note that the above statements involve no presupposition of materialism, but they are consistent with it. In particular, from a physical point of view, it is obvious why one would only have privileged access to one's own thoughts and experiences.


I can't conceive of how there couldn't be. Your body gets sensory input, generates your mind and then your body experiences your mind. How could that possibly be dislocated from your body... The very idea is nonsense.


But this is dualism. I don't accept a difference between mind and body as a materialist.

edit: I can use a computer to remind me to use a computer to remind me about something. I can lose the use of my hands and eyes by severing biological wires. I can't pinpoint a distinction between me and the computer. I can't pinpoint a distinction between me and you, other than distance and mechanical connection (which we currently have, with some latency and signal distortion.)


"What is the difference between the speed and the car?"

The car is a thing, like your body. The speed is what the car is doing, like your mind is something your brain is doing. No problem, nothing to accept.

My own mind is something I am doing, just as I am typing just now. It should be obvious enough that neither is something you are doing.


I simply don't subscribe to self-causation.


OK, don't.

"Causation" is purely interpretation; the universe has nothing to do with that. People invented causation to help understand the universe, but there is nothing fundamental about it. So long as you think you need causation of any kind to understand consciousness, you will get nowhere.


> But this is dualism

No, you cannot deduce dualism from what I said. You filled in assumptions that aren't present and which I don't assert. It's just physicalism (materialism is an outdated term[1]) but would also work for neutral monism[2].

> I can't pinpoint a distinction between me and the computer. I can't pinpoint a distinction between me and you, other than distance and mechanical connection (which we currently have, with some latency and signal distortion.)

What? Maybe you need a psychiatrist now? This sounds like depersonalization or derealization.

1. https://plato.stanford.edu/entries/physicalism/

2. https://plato.stanford.edu/entries/neutral-monism/


Don't diagnose me, I'm not saying anything that alarms me, causes me discomfort, or makes me enjoy life any less. I just don't hold to the idea of "myself" being a cause as axiomatic. You're free to express the distinction between my control of the computer and my control of my hand, but don't call me crazy, it's rude.


The interesting bit is not the sensory input. We mostly know how the sensors work and how they are triggered.

The fun question is, what exactly is connected to those sensors and experiences them and where is the location of that thing?

It's like we're sitting with a VR headset in our brain, playing that game we call life. But, where exactly are we and how is that headset connected to our sensors?


> The fun question is, what exactly is connected to those sensors and experiences them and where is the location of that thing?

Again, your body.

Think about it. Your body is experiencing what your brain has generated out of your sensory input.

You don't need a homunculus, you are a body. You are your body and there's nothing else to you.


For me it's worse. I don't think theres any reason to identify my perspective with any homunculus. It's watching somebody play a game, not playing it. My question is this: I "know" there are a lot of games being played - why am I watching this one?


It's the one your body generated. Your body took all its data, generated your mind out of that and experiences your mind. For anything else to happen, you'd have to pipe the signal of your mind (that your body generated) into someone else's brain.

It's like you're asking why is this computer that I generated this data visualization from this data on this computer displaying this data visualization on this computer. Umm, because that's what you asked it to do and that's the default? For anything else to have happened, you would have had to redirect the output to a network and then go receive it on a different computer.


What does it mean for a body to be "mine?" As a materialist, I accept that my body is an assemblage of physical influences. I have no reason to doubt that it has as much self-control as water flowing down a hill. So - does water flowing down a hill have a perspective? Where is it located? It doesn't have a nervous system to blame things on. Is it infinitely divisible? Is it distinct from me? If I pour water from one cup into another, can I say that I'm any less the water as I am my hands, even though I'm controlling the water, getting feedback from the water, subjecting the water as much to my nervous system as I am subjecting my hand itself? What about when I shake your hand? What if I'm blind and I ask you what you can see, then react based on that?


Don't gish gallop.

> What does it mean for a body to be "mine?"

You are your body.

> So - does water flowing down a hill have a perspective? Where is it located?

Why would it? Does it sense and produce a mind? Not that I'm aware of. You could go the pansychism route if you want though. See David Chalmers or something.

> can I say that I'm any less the water as I am my hands, even though I'm controlling the water, getting feedback from the water, subjecting the water as much to my nervous system as I am subjecting my hand itself?

According to enactivism, no. [1] In general, see embodied minds theory. [2]

1. https://en.wikipedia.org/wiki/Enactivism

2. https://en.wikipedia.org/wiki/Embodied_cognition


I'm not intending to. I wasn't proposing all of that as separable questions. I was proposing it as a series of statements that I have no material means to say anything about.

> Why would it? Does it sense and produce a mind?

I don't know what a mind is supposed to be in this context, or its importance. I don't identify my perspective with my thoughts. My thoughts are something a body does. My perspective is just my location. After death, my thoughts will cease. My location will not change.


I cannot guess what point you are trying to make here, but I do know that 'location' is not a synonym for 'perspective'.

If you are going to ask questions, you should at least be able to justify their relevance, if asked.


And then step into the Star Trek transporter. :)

You're obliterated, then perfectly reassembled on the other side. Is that still "you", "subjectively"? Should you or others even care about that question? And, if not, damn, that's ego-killing af.


A copy is a copy. The copy remembers everything you remember. The copy can spend your money. The copy can have sex with your spouse. Is the copy "you"? Technically no, but it doesn't matter because there are no transporters.

But you go to sleep every night. Is the person who wakes up "you"? Many of the atoms in the person who woke up were in that person's stomach, and many of the atoms in the person who went to sleep are in the air. Does the difference from the transporter thing matter? No reason it should. The person who went to sleep is not here. The one who woke up can spend that person's money, wear that person's clothes, remember passwords. Close enough is close enough.

You might choose not to spend that person's money, next day. But they won't be needing it, and you do. So, no problem.

If you perceive a problem, you are just tying yourself in knots, and are guaranteed to make no progress.


Or even without transporters: what evidence do I have that the me that ended this sentence is the same me that began it? If anything, I have material evidence that it isn't.


Yeah, you need a theory of identity to answer your questions, first-year philosophy stuff [1].

1. https://plato.stanford.edu/entries/identity-personal/


Exactly. But having a theory of identity is just a declaration that you subscribe to one, not anything meaningful.


Umm, no. Having a theory of identity allows you to deduce the answers to your questions.


Based on an arbitrary choice. I can deduce as many answers to my questions as theories of identity that I can count. None of them will give me any insight into the question of subjectivity.


If it's an arbitrary choice then the problem is entirely you.


Here's a thought experiment. You're going to use a Star Trek teleporter-like device to make an exact copy of yourself. What are the odds that you end up being the copy? Or rather, what should you believe the odds are that you will be the copy?

Of course, I, the original, will not be the copy, but if I go in knowing I'm not the copy, then the copy will come out believing he's not the copy. Thus, you should go in believing there's a 50% chance you are the copy, and then both the original and the copy will come out with a logically consistent belief.


A good way to know for sure would be to use a sharpie to draw a mark on the inside of the particular chamber you step into before the machine is activated. Then, after you are copied, both people will have the memory of drawing such a mark. But the copy, who just materialized in a second chamber, will be surprised to find that there is no mark in front of him.


My answer is yes but it depends on your theory of identity. Unless you're positing a soul or something though, your answer is probably going to have to be yes.


Is a computer going to ask why am I operating on this specific CPU and not on the other CPU that is located in a server rack 2 meters away from it?

To be honest, I don’t get what you are really trying to understand or find out here.


I'm pretty sure I've written this exact comment before. I'd have so much more respect for philosophers who simply admit that nobody has made any progress on the hard problem (other than increased clarity on what it is) than those who dismiss, minimize, or otherwise pooh-pooh the hard problem. The fact that the problem is "unproductive" is not a problem with the hard problem, it's only a problem for philosophers' career advancement.


> I have a really hard time accepting that my self/experience can't be explained materialistically.

I can respect this, but I think you have much bigger problems than consciousness if you're a materialist. Current natural philosophy and physics is far from anything resembling a materialistic worldview.

Where we are right now in both QM and GR can best be described as a highly precise description of what happens, but at the expense of forcibly ignoring the essence of what constitutes material reality. In the case of QM, the "just run the equations" view of the world has been dominant since Einstein reluctantly ceded ground on this issue. To put a finer point on it, unless physicists can provide a materialistic explanation for entanglement or wavefunction collapse, we're still very much in an ethereal, immaterial world.

(fwiw, I am something of a materialist, which is why I have thought of this problem beforehand.)


Entanglement is part of material reality - the "classical" world where there is no entanglement is a simplified approximation. The fact that it might feel less "material" to us is an irrelevant feature of improperly-applied intuition. It's no different than something like e.g. the wave nature of light.

In fact, complex entangled states made of many interrelated "qubits" or degrees of freedom are one of the most promising candidates for the physical correlate of similarly-complex phenomenological experience. Complex-enough quantum computers might just feel stuff, as a matter of basic reality.


> Entanglement is part of material reality

Yes, but what is material reality?

The current understanding of entanglement in QM is indistinguishable from math. And then we need to ask: are the equations real? Seriously, are they? Or are they just an approximate description of something we don't understand that well?

Because if entanglement is part of material reality as defined by the good ol' gauge theorists and the equations, then we really do live in a math equation, and it's not clear how or why that math equation runs the way it does. Or what it runs on for that matter. Or if it exists at all and our senses are just hallucinating this. This is fundamentally different from the understanding of philosophical, physical materialism.


For me, material reality is tied to the notion that the world is consistent and conducive to analytical reasoning, that it functions like a machine. It has components which relate to each other in consistent ways that we can model and understand. The fact that these components are best described by field equations rather than newtonian mechanics is incidental.

The main alternative viewpoint to materialism, dualism, posits some further 'external' influence that acts on the world to account for phenomena we poorly understand like consciousness, and for some people psychic or religious phenomena. I think that's just fuzzy thinking, and I don't even really know what they mean. How can something not 'of the world' interact with the world? Surely 'the world' is everything that can act on or influence each other, right? I'm not sure how else to define it.


I'm pragmatically a materialist, just because I don't find use in speculating about the immaterial (by definition.) But although I thought I was so much smarter than Descartes when I was younger, I still can't figure out why I'm in me and not as much in you, and somehow partially in the computer I'm typing on. That I'm more in my head than in my hands, but I'm in both.

I really have no access to that. I've just taken a couple of extreme positions, and hope that one day somebody can come up with some way to reduce or test them materially, although I accept that I can't even speculate about a way to do it.

One is a panpsychism. The material world exists within a substrate of perspectives, and as material objects pass through them, the perspectives momentarily attach to those objects, and whatever is mechanically connected to them. This happens at an arbitrary rate (depending on the density of the perspective substrate and the relative speed of the material passing through it.) If we call the amount of time that a perspective is located "centrally" enough in a human nervous system to have access to it a "moment," then for a moment, that perspective thinks it is me. This assumes that perspectives are distinct for purposes of explanation, but they don't have to be - maybe an infinite number of perspectives think that they are me at any particular moment, and in the next moment, an entirely different set.

The other is that there's a particular geometry of electromagnetic activity that captures perspectives, and minds happen to conform to that geometry.

I don't think these are any more insightful than "the soul talks to bodies through the pineal gland." I can only hold onto them as more abstract and generalized. There's no material way to tell when you've captured a perspective, and we each only know of the existence of one with certainty. We could be Buddhist and say that the universe is one fragmented confused perspective, or say that we're a bunch of islands; either way, we have no material basis for those beliefs. The only thing we know is that we're not P-Zombies. We don't know that others aren't P-Zombies, just us.


> The current understanding of entanglement in QM is indistinguishable from math. And then we need to ask: are the equations real?

Well, they predict what we observe, which is not a property of math in general.


Entanglement as an explanation for anything about consciouness is exactly the same as saying "it's all magic", and exactly as meaningless, leaving nothing to talk about.


> To put a finer point on it, unless physicists can provide a materialistic explanation for entanglement or wavefunction collapse, we're still very much in an ethereal, immaterial world.

Maybe the lesson of physics is that the material world is not like how we might imagine it, based on our everyday intuitions. If you accept that the world is not moving and not flat, that bodies continue in motion unless acted on by a force, and that Lorentz transformations represent the relationship between different inertial frames, then you already accept things that are counter to everyday intuitions. Hopes for explanations that are not ultimately grounded in "that, as far as we know, is just how things are" might be wishful thinking.


Yes, I agree with your points. Superdeterminism could be a way to salvage my worldview.

Even so, accepting non-determinism from QM does not cause me as much cognitive dissonance as the problem of consciousness does.


> accepting non-determinism from QM

I think it's deeper than non-determinism. We can imagine an extraordinarily precise "random" number generator that is software, more than capable of fooling all measurements we as humans make of it. It would also be non-deterministic. It would also, importantly, not be real.

Until we can confidently say that the equations of physics are not actually themselves reality, we can't say materialism is even a thing. To be a materialist requires belief that what we know is a description of reality and not reality itself, and that would be true even if we had a deterministic "Newtonian" pre-QM view of the world coupled with advanced theory.


> people who think that consciousness is not a hard problem are P-Zombies that can't experience the problem

I've come to the same conclusion. Meaning no disrespect or insult to them, it seems to me that it must be the case that some folks just haven't noticed themselves.

P.D. Ouspensky talks about learning to "remember myself" in "In Search of the Miraculous" and describes how it was for him to practice changing from thinking one is self-aware to actually becoming self-aware moment-to-moment. There's a passage...

> Two hours later I woke up in the Tavricheskaya, that is, far away. I was going by izvostchik to the printers. The sensation of awakening was extraordinarily vivid. I can almost say that I came to. I remembered everything at once. How I had been walking along the Nadejdinskaya, how I had been remembering myself, how I had thought about cigarettes, and how at this thought I seemed all at once to fall and disappear into a deep sleep.

I suspect people who think that consciousness is not a hard problem must be still "asleep" in this sense, effectively P-Zombies, although the potential is there to become self-aware beings.

- - - -

> as a materialist, I have a really hard time accepting that my self/experience can't be explained materialistically.

Okay, fair enough, but then you have the infinite regress problem. How do you explain material itself materialistically?


> "some folks just haven't noticed themselves"

Some folks don't enjoy tying themselves in knots. You are absolutely welcome to spend your precious seconds of limited life on it, but it doesn't make you superior, however much you would like to think it does. It's just a choice, like between working and playing.


This is such a non-sequitur. I can't tell if you're referring to people choosing not to be conscious (I doubt that's possible) or choosing not to ponder on it (I don't think that's relevant). If people don't want to think about it, that's easily avoided. What we're talking about is people expending serious thought, "tying themselves in knots" as you say, to write essays and sometimes tomes dissuading others from pondering it either because "there's no hard problem" or "it's not productive."


I am talking about choosing to tie yourself in knots and thereby preventing any further progress toward understanding what you are talking about.

You could also choose not to tie yourself in knots, and then you would not be blocked, and could begin to make sense.


That makes more sense, thanks. But philosophers of mind, neuroscientists, cognitive scientists, and all could just as well pursue other lines of inquiry without protesting too much that there's no hard problem.


If you think there is a hard problem, it is on you to articulate one. Thus far I have not encountered one that does not depend on undefined words or circular arguments, i.e. knots.


Do you think there is such a thing as objective morality? Not just human ethics, or social norms or whatever, but that some things that occur are "better" than other things in some important sense. Or maybe, that certain actions are "good" and others are "bad"?

Like for instance, the experience a human would have when relaxing in a hot tub with their friends, is "better" in some important, real sense compared to the experience of dying in a trench in WWI?

If not, I probably can't convince you that there is a hard problem having to do with consciousness where we'd benefit from knowing all the answers. But if so, I could give it a try.


There are certainly things I like better than other things. There is probably a fair bit of overlap with what you like, but also certain to be differences. Where they differ, am I right, or you? If one of us is not wrong, then one is not objectively better.

A system for understanding consciousness that depends on objective merit will face very stiff competition from those that don't.


Wouldn’t you say the experience of “being you” or “being me” would be part of the system of how consciousness works? And therefore, whether you like something or I like something, it’s part of the happening of consciousness.

Maybe “you like the experience of drinking tea” is a totally compatible statement with “I don’t like the experience of drinking tea” in an objective sense.

I would say it’s not that consciousness depends on objective merit, but that it could be the other way around.


I didn't say anything about "superior".

Is being a "P-Zombie" inferior to being self-aware? By what metric could such a comparison be made? Is the water inferior to the wave?

Also, "tying themselves in knots" != "noticed themselves", if anything it's the opposite: all knots untie.


All knots can be untied, if that is what you choose to do. But not everybody chooses that. Thus far you have not chosen that.

"Noticing yourself" is something literally everyone does. Not everyone ties themselves in knots. The alternative to tying yourself in knots is to think clearly without inventing falsehoods.

"P-zombies" is just another word for solipsism. The moment somebody proposes either, you know there will be no more clear thought.


P-zombies and solipsism are two totally different things.


Yet, curiously similar.


What’s really interesting is that the materialistic view is sufficient to describe everything else in the world including all other humans. But the one thing it fails to explain is yourself and your own subjective experience.

I frankly have zero evidence that anyone else but me is experiencing consciousness. And there’s nothing you can utter that can change that.


Your comment reminds me of religious people, whose argument that God exists is similar: that you will realize that He exists as soon as you believe in Him. And that you can't possibly understand it otherwise.

Consciousness is a hard problem until you realize that most of it is an illusion. Which is the very reason that it is so hard to understand: there's nothing to understand, because it's not real.


Serious question: how can consciousness be an illusion? I’m not even sure what that would mean. One could argue that many things are illusory (e.g. the external world). But it’s quite hard to dismiss the notion of actual experience occurring in the world. Our experiences could lead us to false conclusions, sure, but we cannot deny their existence.


The experiences are real, but the homunculus behind your eyes that you think of as yourself is an illusion. In reality there are a bunch of fairly independent processes talking to each other, sharing the perceptions, reacting to them, feedback from those reactions again become a perception, a "narrator" process that often gives running commentary in your mother-tongue, etc. The interaction of those processes feels like a unified thing, "you", and that's the illusion. This illusion has adaptive advantages, because it helps the organism take care of its needs, so evolution selects for it.

That's my take, maybe not exactly the same as Damasio and Seth's, but compatible with theirs.


If I understand correctly, you're referring to the "easy" problem of consciousness, i.e. the "mechanistic" explanation of how a self could be constructed by the brain. That's an interesting question and I think your take is a coherent and plausible one (from a materialistic perspective). However, I still think this doesn't get around the hard problem of why these interactions actually feel like anything. I've never heard a satisfying materialistic explanation of that. Do you believe the interactions could in principle be implemented on any Turing machine? Or are they substrate dependent?


Actually I do; but... 1) such a Turing machine may have to be a lot more powerful than anything we currently have. With or without invoking QM mechanisms, there is reason to believe that every single neuron does a lot more computation than our simplistic models in current ML neural nets. 2) it may not be possible to "program" a machine to be conscious in the way we feel conscious; we'd probably have to literally evolve it, i.e. in a rich simulated environment, starting with simple artificial "organisms" that "feel" this environment and then getting progressively more complex.

But I do believe in Wolfram's "principle of computational equivalence", and thus that anything that can implement a Turing machine can also implement any other complex system, including consciousness.


I guess this is where we differ. I don't see a sufficient reason to believe that increasing computational capacity/complexity alone gives rise to consciousness. Moreover, I think there are common sense reasons to believe that consciousness is not substrate independent. Therefore, I don't see it as obvious that Turing completeness is sufficient for consciousness. For example, as someone else on this post has pointed out, a sufficiently complex water pipeline can implement a Turing machine. However, I doubt it would ever be conscious, no matter how large we make it. I think representing and processing information is orthogonal to experiencing.


I think we do agree... "complexity alone" certainly will not give rise to consciousness. Consciousness begins with feeling and separating the "I" from the "other": I feel hunger, and there is food. That's why I said we'd have to evolve it in a simulated environment, one in which there are things for a nascent consciousness to feel. So in that sense yes, it depends on the substrate, but the substrate could be virtual, simulated on a powerful enough Turing machine.


You haven't explained why you think they shouldn't "feel like anything". How do you distinguish "feeling like anything" from anything else you experience?


Well, as far as I'm aware notions of feeling or experiencing are not accounted for in our current physical models. Does an electron feel anything? On the one hand, if it does, it seems to me like physical models have to be extended to include some primitive form of consciousness. This would be something like panpsychism. On the other hand, if single electrons do not have consciousness, why do large collections of them in specific structures have it? Note that, to me, it seems insufficient to say that collections of electrons can be used to model or compute with. Namely because it raises the question of why this modeling has a feeling tone (qualia) to it.

Finally, I don't know if I can make a meaningful distinction between feeling and experiencing. I believe a feeling is an experience.


I think they were referring to Daniel Dennett, not Antonio Damasio or Anil Seth. Damasio is interested in how the self is constructed (so, similar to your idea) while Seth is saying that experience is a controlled hallucination. Dennett is the one claiming consciousness is an illusion. These are three very distinct research programs.


Except that they all agree that there is no "hard problem of consciousness"; that's the part that's illusory. I think Dennett takes the illusion position too far, so I think I'm somewhere between those three. But there are others who take it even further, such as Graziano in "Consciousness and the Social Brain", whose "consciousness is an attention schema" position seems so nonsensical to me that I can't even explain it despite having read the book.


Either it's inane wordplay or it's obviously nonsense for most definitions of consciousness. An illusion must be perceived, so in that sense the proposition is self-refuting. Also, you have empirical evidence against it: you know when you're conscious and that sometimes you aren't (and that it's a spectrum). Thirdly, you can define consciousness behaviorally. Fourthly, if I'm not mistaken, you can measure its presence neurologically. To say that all of this is an illusion is stupidity or being very obtuse for attention (like clickbait).


Believing in God is a good coping mechanism if you can't deal with the fact that things, both good and bad, happen for no reason all the time. Knowing "why" gives comfort in the vast chaos that is reality.


Plenty of people believe in God(s) yet don't think that things are controlled or predestined.


You might not be very far off- apparently many people do not have an inner voice.

https://en.wikipedia.org/wiki/Internal_monologue#Absence_of_...

I now worry about some form of future cyborg racism where those without the gift are considered untouched by god or subhuman


Qualia may be operating on a different system of reasoning. I have always wondered why new qualia felt so familiar like there was nothing new to learn, but if time is as illusory as every theory says it is then it makes more sense.

To answer the hard part.... To be completely honest, I think qualia are dualistic emanations of the One Source of Truth and understanding, and describable as convex optimizations in the kinds of spaces that Incompleteness deals with. I think love and honor are hyperparameters for the hypercomputers that we refer to as our hearts and that simulation is much more likely than material because of dichotomy / reality gap.


Fellow materialist here. For me materialism, and more specifically a strong degree of determinism, is the only coherent solution to the problem of responsibility and agency.

After all, if my decisions do not flow from my state, including my experience, memories, preferences, etc., how can we say they are meaningfully mine? I fully accept a degree of randomness in the process (quantum mechanics is a thing), but randomness is just noise. No determinism, no agency.


I was a physicalist (and in some sense still am). In a nutshell your self and everything you experience and can experience is simulated. I live inside a simulation. The simulation is running on a physical brain, part of a physical human which is part of a physical universe.

This can also be probed with science. If we know how the brain wires itself, what it is computing and representing then I think we'll find these patterns in the brain.

This is how I best understand these things. I can also go into more detail how I came to think about this this way and how I got to this point.


If you start with material mechanisms, you'll never get subjective experience (SE) as a conclusion.

Start instead with SE, and then any sort of material universe can be the contents of it.


Very funny, of course, but one can easily reply that those who do believe consciousness is a hard problem are just deluding themselves. Especially when (as is usually the case) a clear definition of the concept "hard problem" is not even given.


I hardly think Chalmers (and others) forgot to define what the problem is. It's more likely you haven't read and/or understood them.


The definition given in Chalmers (1995, facing up to...) following Nagel, i.e. using the linguistic construction "what it is like", is incomprehensible and ill-defined. I believe this is the origin of the term "hard problem". Some other arguments for the existence of a hard problem build on the dogma that "philosophical zombies" are conceivable (which we can deny by saying they aren't). Others depend on inverted qualia thought experiments, which we can deflate by denying that inverted qualia are possible at all (or by denying that "qualia" is a coherent and well-defined concept). If there's an unambiguous and clear explanation of the concept of "hard problem of consciousness", or a clear argument for the existence of such a problem not built on shaky assumptions, I'd love to hear it.

By the way, there is a whole line of debate on the sense or nonsense of the ever-present "what it is like" phrasing:

>‘What it is like’ talk (‘WIL-talk’) — the use of phrases such as ‘what it is like’ — is ubiquitous in discussions of phenomenal consciousness. It is used to define, make claims about, and to offer arguments concerning consciousness. But what this talk means is unclear, as is how it means what it does: how, by putting these words in this order, we communicate something about consciousness. Without a good account of WIL-talk, we cannot be sure this talk sheds light, rather than casts shadows, on our investigations of consciousness.

(Jonathan Farrell - ‘What it is Like’ Talk is not Technical Talk)


A program running in a computer would have a difficult time distinguishing itself from the computer hardware on which it runs. And it would have no direct perception of the programmer.


A program running in a computer doesn't perceive anything at all. Why we do is the hard problem.


Have you ever experienced an OBE?

PS: I like your P-Zombie joke :D


It’s not particularly hard to see consciousness and our emotions as purely a result of concoctions released into our brains.


The right analogy is trying to understand why your favorite OS works by trying to understand why a certain window appears when you click in a certain region of the screen. You need to understand the underlying mechanisms up to a certain level (e.g. if you want to understand how an OS works, you need to understand how a syscall works, and then how the processor works).

All this philosophical labyrinth of words seems futile. We need the right system-level model of the cortex and subcortical structures to progress. I feel like we are trying to resolve this problem by looking at the quantum effects in the transistors (using the previously cited analogy) [neuroscience] and by looking at a datacenter [cognitive science]. We need to choose the right level of abstraction. We haven't figured out which is the right one in the last 100 years. We will... and perhaps discover that the cortico-thalamic pathway is the von Neumann base of our OS...


Sometimes when I wake up in the morning I am "conscious" of my position in the room, but the positions of the walls, windows, the closet, etc. don't match, and suddenly I feel like I'm instantly turned around in space! Then I realize that my belief was wrong, and it feels like I was sleeping in another orientation on the bed.

Recently I bought a telescope and took some pictures of the moon. Then I searched for a map of the moon, and to my surprise the orientation didn't match my photos. I had imagined myself standing on the right side of the planet, looking at the moon, which was also to the right. I realized that my "up" is actually the "side", and then my photos and the map made sense. That was like the reorientation feeling, but softer.

That made me think that having a theory of this world is one thing, and it is quite another for a theory to be deep in our consciousness.


I believe people with psychedelic experiences can attest to the controlled hallucination theory of consciousness. But only those with a materialist world view; idealists may posit souls or alternate realities instead.

Imagination is also a part in probabilistic decision making, an unhinged prediction machine.


The fact that we experience what is being fed to 'us' by the various hardware devices, and not reality itself, is not controversial. The question is, what is the nature of the 'experiencer'?


See lobotomies and brain damage. See eye and nerve damage. See people with syndromes or developmental disorders.

The experiencer is the sum of your body's processes. It's what it feels like to be your body. To be connected to all your parts. To have been raised and taught to react, to have instincts and decision-making circuits based on these. It's a material thing. Vision is the sense of the eye, thought is the speculative sense of sensing, etc.


You are describing the experience, not the experiencer. The experience is indeed the sum of your body's processes.


My favorite read on this topic is Consciousness Explained by Daniel C. Dennett

He's a philosopher who has spent a lifetime thinking about this problem, informed by the latest science of the day. Plus he's an excellent, lucid writer. He is a materialist, optimistic about being able to have as-satisfactory of an explanation of consciousness in the future as we do for life (no "élan vital" needed).

https://www.amazon.com/Consciousness-Explained-Daniel-C-Denn...


Or as some like to call it Consciousness Denied :)


It's hard to understand consciousness, but it's not hard to understand why it's hard to understand consciousness. The problem is we are trying to understand consciousness by using consciousness. We can't step outside of it. Just like you can't directly look at your own eyes. You can see them in a mirror, and you can infer properties of consciousness by observing external phenomena, but you can't directly experience it, because you are it (at least from a materialist viewpoint - there are spiritualist traditions that say otherwise).


I'm no philosopher but feel the same way, the most advanced technology and thinking we have still has to be processed and understood by our own consciousness. It reminds me of the brain-in-a-vat problem. If you ever discovered you were actually just a simulation then the simulation would be paused, the bug fixed, and restarted. There's only so much that can be done with the perception available to us.


Consciousness cannot be understood or known in the epistemological sense of the term since it is, by definition, the ultimate subject. It also cannot be experienced as an object. It sounds very hand-wavy and esoteric but if you think hard about it, it is absolutely and undeniably the only tenable logical position.

Put in another way: if you somehow claim to be able to observe or measure consciousness as an object, then what is it that is doing the observing?

Similarly, consciousness is unchanging and without beginning or end, for noticing and knowing any change you need consciousness.


That was surprisingly not complete bollocks. Bollocks is the norm when anybody talks about the "problem of consciousness'.

There are legitimate questions to ask about consciousness, but nobody asks them. How does the perception of being conscious aid in survival and/or reproduction? It's kind of like wondering about what good being beautiful does for a blind, microscopic worm: not necessarily answerable, but anyway worth asking.


Sometimes it seems words don't have good definitions, at least when they can be challenged by science.

Language is mostly usage. For example we don't have a good definition of what intelligence is, on the scientific level. It might be the same thing for consciousness.

We only understand intelligence or consciousness as what we see as a soul or ghost, but nothing much.


Theorizing about the nature of consciousness without having been subjected to its multitude of altered states is an obvious waste of time. If you can't or won't venture into the "field", then at least have the humility to address the aggregate experiential data of those who have.


The trouble is that the sort of drug-induced insight (or pseudo-insight, as some would call it) you refer to is usually accompanied by strong euphoric effects, which impairs your ability to dryly reason about those experiences even a long time after the fact. Nor can I completely shake the impression that those who believe e.g. a DMT trip offers genuine insight into the nature of consciousness simply had an insufficient grasp of the subject beforehand.


There are altered states of consciousness that occur naturally and don't lead to a sense of euphoria. Sleep paralysis for example. It happened to me once and it made me see things differently, nothing religious or mystical, just an enhanced perspective.

To give an example, during my episode I had what is usually described as an out-of-body experience. I felt myself walking around my bedroom as if I were awake, only to find out that my body never left the bed. Now if the sense of being "out of the body" is an illusion, how can we be sure that being "in the body" is not just the same illusion? (I'm tempted to think we are not anywhere and that space itself is the illusion, but that is beside the point)


One can easily arrive at the same insight (i.e. being in the body is just an illusion) without actually experiencing it. Of course, if you have such an experience and you get knowledge or inspiration from it, so much the better. But to say that theorizing about consciousness is a waste of time if you haven't experienced altered states is myopic.

(sleep paralysis is a good example though)


I don't think it's a waste of time (I'm not OP by the way) but you would surely be missing data.


There are altered states (jhana) that are neither solely induced by drugs nor "strongly euphoric" in any real sense.


My completely uneducated opinion on this, is that our consciousness is an emergent property of our subconscious, and that our subconscious is an emergent property of the physics of our brain’s electrical signals.

This opinion is almost certainly wrong, but it’s helped me while thinking about my own mind and experiences.


How does everyone find the interview format of the article?

I am building a platform for online interviews (https://taaalk.co/) so am curious to know your experience of reading discussions vs. reading an article.


Consciousness is a byproduct of high intelligence and a result of evolution. As intelligence increases generation over generation, it is evolutionarily "logical" to be conscious and aware, to increase your survival rate.


What are some evolutionary explanations for consciousness? What does the organism gain by expending energy to make its experiences conscious?


“ Consciousness is simpler than we make it. There’s no hard problem. ”

Certainly the hard problem goes away when you define it to be simpler.


I don’t think we understand how the brain implements computation well enough yet.

We won’t solve how the mind works until then.


Who thinks it’s hard? When you have answer to that then you understand consciousness.


What’s more amazing is why a person is selected as your consciousness?


Probably the lack of an appropriate viewpoint to analyse it from.


the hard problem of consciousness is just a symptom of logical positivism, which is false in its entirety


I am become chromebook.


If you really want the answer then you need to be prepared to face some tough truths.

There is no magic. Consciousness is an illusion. It's an emergent behavior from an incredibly complex neural automata.


How can you have an illusion without an observer? An illusion is by definition a subjective experience - it doesn't explain the thing doing the experiencing at all. You say there is no magic, but your explanation might as well be "it's just magic".


An illusion is a belief or perception which does not match reality. A belief is something that is stored in memory. So there is nothing to 'observe' about an illusion; an illusion is an error in either memory or perception of the observer himself.


I think I understand what gp means. He is trying to point to a contradiction in ggp's reasoning. The question is in the definition of the words "subjective" and "observer". You cannot define an observer using the concept of an observer; that's circular reasoning. "Incredibly complex neural automata" is just handwaving the problem away. It's the equivalent of saying "here the magic happens", not an explanation.


It isn't handwaving. I'm saying the act of observation is not special. It's matter knowing it's matter. "Observation" and "knowing" are simply emergent behaviors of the matter. There is no divine step between matter and consciousness. I'm saying there is nothing to explain because consciousness should not be viewed through the lens of it being special.


Honest question: following this argument, is it to be expected that a sufficiently complex water pipeline can become conscious?


Of course. This is table stakes and has been since the early 1700's. https://en.wikipedia.org/wiki/Leibniz%27s_gap


Consciousness is not an illusion, it is a real phenomenon that is experienced daily by almost every human. You might be referring to free will and its subjective experience, which is a different story.


It depends what you mean with 'real'. The experience or feeling of having consciousness is real, that's true. But the fact that humans experience consciousness doesn't make it a real physical phenomenon. Not so long ago most people believed that the Earth was flat because that's what they experienced when they looked around them. Many people believe in ghosts. Still this doesn't make them real.


> The experience or feeling of having consciousness is real, that's true. But the fact that humans experience consciousness doesn't make it a real physical phenomenon.

Unless you believe in supranatural explanations for it, yes, it does. The beliefs you mention, like a flat Earth, were/are real and also false. So in this analogy, consciousness is real, but many of the explanations for it or powers ascribed to it are false.


I won't be disappointed if it turns out to be an illusion. I suspect it's in the nature of consciousness to be comprised of many illusions, in fact. But I'm still waiting on an explanation for the illusion. Now, if you tell me the illusion prevents us from ever understanding the illusion, then I will be disappointed.


From one of the dominant theories of the materialist view, it is indeed an emergent behavior from an incredibly complex neural automata.

But in our daily experience, reality is actually an emergent property of our consciousness (or awareness).

What we feel, smell, and hear forms into concepts, meanings, and experiences, which together form our "reality".

There might be a theoretical other "reality", or it might be a bit different from what many think it is now, but that doesn't make it more or less real; in the sense of actual experience, it's less real.


And in this complexity lies incredible beauty


This. There's nothing magic about consciousness, and if you think there is, it's because your brain is hardwired to be convinced that you're special. But we're not. In the article, this is said very well:

> Would you say that these more complicated states of emotion represent higher or more complex “levels” of consciousness?

> Seth: I’m personally very skeptical of this kind of labeling, of higher and lower levels of consciousness. There is a little truth to it, of course. Unless we can agree that human beings are in some sense, more conscious than, say, an ant, we’re missing some very important things about the nature of consciousness.



