Thanks for the link. I agree with Seth that consciousness is much like a controlled hallucination - after all, we don't experience reality directly, but through the predictions of our brain. But then the question is: why do you need this hallucination? I suspect that consciousness evolved because it makes an organism more capable than one without it. But what does consciousness offer over just taking in sensory input and making predictions without having qualia (much like a computer does)?
I think the important part is that you can represent more things in consciousness than just "five-sense data", i.e. exotic sense data like emotions, reactions, imagination, memory, impulses, selfhood, and correlate them with ordinary sensory data. A non-conscious animal may see a predator and get scared, but I don't know if it knows that it's scared because of the predator. I think you need an integrated space for both direct data and metadata for that.
Also, in humans, I'd presume that consciousness, being introspectable, relates to our ability to understand our own interests and behaviors, and those of others, in symbolic terms and express them with language. It's how we close the circle between thinking and "I think".
(I agree with the article that people vastly overestimate the mystery of consciousness.)
PS: I think consciousness being introspectable is very underrated. We can perceive memories of perception, we can feel nostalgic for having felt a certain way in the past, and we can realize and name that we are doing this. The "reentrancy" of consciousness is a core aspect of our cognitive and strategic flexibility.
> A non-conscious animal may see a predator and get scared
This is maybe just a matter of definition, but I find it highly unlikely that such a scenario exists. "Seeing" a predator already requires so much higher-level processing (not just the visual cues, but the analysis and classification) that I find it unlikely the animal does not have a working distinction between intra-body and outside-world sensations.
The problem with defining "consciousness" is that we do not have reliable inter-species communication to compare notes between species, so our definition of consciousness itself is hampered by our ability to communicate that sense of consciousness to each other. Already in your post, there are seven different interpretations of what "consciousness" could actually mean:
- ability to recall past experiences ("emotions", "memory")
- awareness of the self ("impulses", "selfhood")
- ability to conceive experiences that do not correspond with sensory inputs ("imagination")
- meta-awareness of instinctual reactions ("if it knows that it's scared because of the predator")
- introspection ("our ability to understand our own interests and behaviors")
- empathy ("and those of others")
- communicating our sense of self to others ("express them with language")
Personally, I think we should assume that all animal species possess the first four, all large mammals can do the first five, and many species have the ability to do the sixth (it's been proven in dogs, apes, parrots, crows, and dolphins -- and probably others).
All the things you mentioned are products of evolution, selected because they increase survival rates. For example, emotions help us mammals bond with each other, bringing us closer together so that we help each other survive.
Replace "illusion" with "perception" and it will make a lot more sense.
Consciousness does obviously exist in one form or another, but when people think intuitively about consciousness they imagine it as an actor. A little guy in your head that looks at the world and makes decisions.
That's the illusion. That "little guy", that "self", isn't the one doing the work; your brain is the one doing the work. The little guy is how the brain tries to make sense of itself.
Just like photons hitting your eyeballs might be interpreted as cats and dogs out there in the world, the brain interprets or perceives whatever it is doing internally as a "self".
> You can't prove consciousness from behavior (afaik)
Without some sense of self any creature would have a pretty difficult time surviving, as it wouldn't be able to tell itself apart from the rest of the world.
> The little guy is how the brain tries to make sense of itself.
That still doesn't explain consciousness itself. Though I think there are many different interpretations and definitions for the word consciousness so we might be talking about a different one.
> Without some sense of self any creature would have a pretty difficult time surviving, as it wouldn't be able to tell itself apart from the rest of the world.
A cell just replicates because that's what the molecules in it happen to do
> That still doesn't explain consciousness itself.
Well, which part doesn't it explain? It's pretty conclusive as far as I am concerned. Among other things, it explains why philosophers would even ask that question in the first place.
> A cell just replicates because that's what the molecules in it happen to do
And transistors just go on and off, yet people still spend years studying computer science. The existence of simple mechanisms at one level doesn't prevent higher-order mechanisms at another, quite the opposite. Those cells aren't going to do much replicating when you get killed by a tiger; some awareness of yourself and the environment helps prevent that.
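To make the levels point concrete, here's a toy sketch (my own illustration, not from the thread): Conway's Game of Life cells only switch on and off, like transistors, yet a higher-order object - a glider that travels across the grid - emerges from those rules.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life over a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next tick if it has 3 live neighbours,
    # or 2 live neighbours and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider shape, shifted one cell diagonally
```

Nothing called "glider" exists at the cell level, yet it's the natural unit of description one level up.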
It doesn't explain how consciousness emerges from this. Brains/computers/processes can self-reference or self-explain all they want; that still doesn't say how doing so results in the actual consciousness you experience.
It's not a trivial thing we're talking about here: there could be a brain just executing logic and making decisions, or there could be that same brain but actually aware of that fact - not just aware in the sense that it's processing inputs (including inputs from which it can derive that it itself exists), but aware in the sense that you are there, the one in that brain.
The only one that you can know for sure has consciousness is yourself, but can you explain why you have it?
If the brain is an electromechanical process that works on its own, and consciousness is just observing it, then it doesn't need this observer; the process already does the same on its own.
> there could be a brain just executing logic and making decisions, or there could be that same brain but it's actually aware of that fact
But here is the thing: you are not aware of any of this! You are only aware of the world around you and yourself in the form of a high-level description. You are aware of cats, dogs, your arms and legs, etc. But that's perception, not processes being magically self-aware. You have no clue what your brain is actually doing; that's completely invisible to you.
And that's the illusion. The process you are trying to explain doesn't exist. Consciousness is not processes being self-aware. Consciousness is a process that tries to model the world in a way the brain can use to make sense of it. And just like it does this with cats and dogs, so it does with itself. The thing you call "yourself" is a perception generated by the brain; it's not a thing that exists and does stuff.
If you have a model that says the brain is a singular entity, there will always be problems phrasing meta-activity such as self-awareness and consciousness. If the brain is singular, what does the observing? Does the observing part belong to the brain or not? If the brain is observing itself, which part does the observing and which is the observed? Can we swap the roles, and make the observed the observer? How does "the brain" decide what "the brain" is thinking about?
The problem with this model is that it regards the brain as a whole as if it were a single processing entity, since that's how we experience it, but physically that's far from the truth: the brain is a highly segmented parallel processing machine, and (as far as I know) there is no clear control hierarchy within the brain, no single location you can point at to say "this is where [thought/sensation/self] originates".
So my theory is that consciousness is the emergent property of many separate specialized brain areas responding to and interacting with each other: directly (neuronal links), via the body (flexing a muscle tightens the skin, creating a sensory feedback loop), and through the world (turning your head changes what your eyes perceive).
So speaking in terms of functionality, there is no "brain" as a single entity. The brain is like an ant colony, using hormones and electrical impulses to keep its disparate parts acting in unity. Under this model, consciousness can be defined as the push and pull between these parts, and self-awareness as the consensus model under which the separate parts reached some equilibrium. You won't have problems defining which parts do the observing and which parts are observed: what we "experience" as our mind is the tug-of-war between the processing areas, not the processor activities themselves. And similarly, what we experience as consciousness is the ability of our "mind" to drown out that cacophony and purposefully cede control ("focus") to different brain areas at different times.
Of course, there's no evidence that any of this is true, and there are still plenty of questions about the mechanism through which this would happen, but for me personally this model has allowed me to move on from the vicious cycle of having to first define "brain" in order to define "thought" in order to define "awareness" in order to define "mind" in order to define "self" in order to define "brain".
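To give that tug-of-war a concrete (if entirely hypothetical) shape, here's a minimal toy sketch. The module names and the activation dynamics are made up by me, not taken from neuroscience; the point is only that "focus" can be a property of the ensemble rather than of any single controller.

```python
import random

class Module:
    """A hypothetical specialized brain area competing for control."""
    def __init__(self, name):
        self.name = name
        self.activation = random.random()  # how loudly this module "shouts"

    def update(self, winner):
        # Feedback: each module reacts to whoever held focus last tick,
        # standing in for the neuronal, bodily, and worldly loops above.
        delta = 0.2 if winner is self else -0.1
        self.activation = min(1.0, max(0.0,
            self.activation + delta + random.uniform(-0.1, 0.1)))

modules = [Module(n) for n in ("vision", "memory", "motor", "interoception")]
for tick in range(10):
    # No module is in charge; "focus" is just the running outcome
    # of the push and pull between all of them.
    winner = max(modules, key=lambda m: m.activation)
    print(f"tick {tick}: focus on {winner.name}")
    for m in modules:
        m.update(winner)
```

The dynamics here are arbitrary; what matters is that "what's in control" is an emergent, shifting property of the ensemble, not a component you can point at.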
The hallucination analogy is useful, I think, in some ways, in that it could allow access to a view of reality that has important parts represented in a meaningful way. But surely a hallucination requires an underlying consciousness to experience it? So for me it's not very useful in explaining the phenomenon of consciousness itself. It seems not far removed from the old "theatre" analogy, which requires an unexplained "homunculus" (and probably homunculi ad infinitum).
That muddies the water, IMO. We do experience reality directly: our nerve endings do. The next step is processing, which is somewhat mediated by low-level expectations. This takes place at subconscious levels. Conscious perception comes much later, and is obviously more filtered. To call it a hallucination is word play.
There are fixed patterns in still images that appear to move when we perceive them. "We don't experience reality directly" and calling it a hallucination seems apt - the thing we experience when we look at things is less "what's actually there" and more "the best guess our brain can construct about what's there".
Additional examples of this include the non-perception of the visual blind spot, how the brain fills in details in the middle of saccades, and how task-specific training causes perceptual differences (e.g., thrown baseballs look bigger to professional batters).
> We do experience reality directly: our nerve endings do.
Depending on how you define "reality", we only experience a tiny subset of it. We can only see/hear/touch/smell/taste a very restricted universe, only what our "antennas" can detect. In fact I would argue that we experience close to 0% of all of reality. On top of this extremely restricted input, our processing of it is imperfect, which means some of what we experience has nothing to do with reality.
Sure, I don't disagree but (to me anyway) it's important to point out the difference between reality vs. our potential intake from it. The blind men and the elephant parable applies.
I think what you mean by reality is physical reality; if so, then it is impossible for us to directly experience it. You are a simulated being in a simulated world. The simulated world runs on a physical brain. Everything you experience is a "hallucination," but if your physical brain works normally, those hallucinations will be constrained to explain the input to the brain. Consciousness is most likely a property or set of properties of this simulation. Some part of your brain makes predictive models of its input, and the things it models are yourself, your current environment, other stuff going on (the mind), how these things relate to each other, and such. My guess is that qualia are something like primitives of this simulated world.
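Here's a minimal sketch of what "constrained to explain the input" could mean, with made-up numbers (world_state, learning_rate and the update rule are my hypothetical choices, not an established model): the experienced model never touches reality directly, but prediction errors keep it tethered to the senses.

```python
import random

world_state = 10.0      # the physical reality the brain never sees directly
internal_model = 0.0    # the brain's simulated estimate of that state
learning_rate = 0.3     # how strongly prediction errors correct the model

for step in range(20):
    sensation = world_state + random.gauss(0, 1.0)  # noisy sensory input
    prediction_error = sensation - internal_model   # mismatch drives the update
    internal_model += learning_rate * prediction_error
    print(f"step {step:2d}: model = {internal_model:5.2f}, "
          f"error = {prediction_error:+.2f}")

# What gets "experienced" is internal_model - the controlled hallucination -
# never world_state itself.
```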
We obviously don't have the full picture about sensations -- they might be a side effect of specialized (biochemical) neural architectures, but more likely there is something intrinsic to them that grants advantages to sensations.
There's something to sensations making things really "personal" to an individual system, pushing it to act in its own best interest.
Sensations are also very flexible biochemical abstractions, solutions to problems refined across aeons.
It would have been quite hard for organisms to evolve without such abstractions - like a worm recoiling in pain, or a human quickly retracting his burnt hand or avoiding certain kinds of people who have hurt him before.
Maybe consciousness is a side effect, an emergent phenomenon: once the brain gets complicated enough it will just arise (on a scale)... Also, are our decisions ours? Or does some inner layer in us create the decision, and we just consciously observe it / let it play out?
> Maybe consciousness is a side effect, an emergent phenomenon: once the brain gets complicated enough it will just arise (on a scale)
This is my feeling too. Since the brain is just physical matter arranged a certain way, doing a certain type of processing, I really think it possible that some computer programs we use today are also doing the right kind of processing to grant them a simple subjective inner experience. Of course most programs don’t have a way to tell us if that’s the case, except maybe GPT-3 :)
> Are our decisions ours? Or does some inner layer in us create the decision, and we just consciously observe it / let it play out?
I have evidence for it being the latter: our higher brain functions let us rationalize, to ourselves, the decisions that were deterministically made by our subconscious, so it feels like “you” made the decisions “because you wanted to”. People who have the connection between their brain hemispheres severed can unconsciously reach a decision in one hemisphere, but then consciously rationalize it in the other hemisphere, without having experienced the making of the decision [0]:
> In one example of this kind of test, a picture of a chicken claw was flashed to the left hemisphere and a picture of a snow scene to the right hemisphere. Of the array of pictures placed in front of the subject, the obviously correct association is a chicken for the chicken claw and a shovel for the snow scene. Patient P.S. responded by choosing the shovel with the left hand and the chicken with the right. When asked why he chose these items, his left hemisphere replied 'Oh, that's simple. The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed'. Here the left brain, observing the left hand's response, interprets that response in a context consistent with its sphere of knowledge - one that does not include information about the left hemifield snow scene.
I think the simplest explanation is that the brain runs a simulation of its environment, and at some point after birth it puts its human into this simulation. That's probably when children start using the word 'I' and consciousness arises.
Had this thought recently: The function of psychedelics is not to induce hallucinations, but to help you recognize that you’re always already hallucinating.