> This strategy typically involves what Keith Frankish has called illusionism about consciousness: the view that consciousness is or involves a sort of introspective illusion.
What's an illusion? If by illusion you mean an abstraction, like how a TV picture is an "illusion" of a picture rather than literally being one, then I'm on board. If by illusion you mean "worth excluding from your map of how the world works," then I have a bone to pick with you.
My main problem with physicalism is that it doesn't handle abstraction well. I'm fine with monism over dualism but you need some kind of functionality with which to consider different kinds of 'stuff'. Otherwise a rock, Conway's Game of Life, and Lord of the Rings are all on the same plane of existence.
What draws me to Objective Idealism isn't so much the fact that it's compatible with religion but rather that 'mind stuff' is the best 'thing' that we can use to describe everything. The fact that it doesn't put severe emphasis on the physical as "better" than other modes is just a nice little bonus to annoy materialists with.
> My main problem with physicalism is that it doesn't handle abstraction well. I'm fine with monism over dualism but you need some kind of functionality with which to consider different kinds of 'stuff'. Otherwise a rock, Conway's Game of Life, and Lord of the Rings are all on the same plane of existence.
Yes! This bothered me as well, until I recently encountered Sean Carroll's philosophy of "Poetic Naturalism":
1. There are many ways of talking about the world.
2. All good ways of talking must be consistent with one another and with the world.
3. Our purposes in the moment determine the best way of talking.
One way of talking about the Game of Life simulation running in my other browser tab is as a bunch of electrons bouncing around in my computer's CPU. Another way of talking about it is as a cellular automaton obeying Conway's rules. And they're consistent with one another; e.g., if I stop the electrons by shutting down the computer, I expect the automaton to stop running.
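To make the two descriptions concrete, here's the automaton-level "way of talking" as runnable code. This is a minimal sketch of Conway's rules in Python (my own illustration, not something from Carroll's book):

```python
# The "cellular automaton" way of talking: pure Conway rules, with no
# mention of the electrons that implement them. (Illustrative sketch only;
# the sparse-set representation is my own choice.)
from itertools import product

def step(live_cells):
    """Apply Conway's rules to a set of (x, y) live cells."""
    counts = {}
    for (x, y) in live_cells:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                neighbor = (x + dx, y + dy)
                counts[neighbor] = counts.get(neighbor, 0) + 1
    # A cell is alive next tick if it has 3 neighbors, or 2 and was alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live_cells)}

# A "blinker" oscillates with period 2 at this level of description,
# however the electrons underneath happen to realize it.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```

Shutting down the computer halts both descriptions at once, which is exactly the consistency requirement in point 2.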
In retrospect, it's pretty obvious. But it must not have been _too_ obvious, because it presents a viewpoint that isn't quite physicalism and isn't quite dualism, and people have been arguing back and forth about that for a long time.
> 2. All good ways of talking must be consistent with one another and with the world.
Point 2 sounds dubious. There are good ways of talking about the world that are incompatible with each other (e.g. quantum mechanics and general relativity, but even more so, opposing viewpoints based not on objective disagreement but on value judgements).
QM and GR are compatible in everyday regimes. They're not compatible for extremely small, extremely dense things - but it's fair to say that implies that at least one of them is not a good way of talking about extremely small, extremely dense things.
Well noticed :-). Both of those are discussed at length in the book.
First, "ways of talking" have domains of applicability. GR's domain of applicability doesn't include the very small scale, so it doesn't make any predictions there to contradict qm.
The book also talks about value judgements, and is very explicit that different people's core values may differ from each other. So I guess poetic naturalism doesn't apply to them? That bit wasn't clear, but I would only apply these axioms to "is", not "ought" statements.
It all sounds dubious, and amounts to "It depends which model you use", which isn't necessarily a profound insight - although if you're used to confusing models with reality, it might be.
We simply have no model for consciousness. We have absolutely no idea what it is.
It isn't even worth getting started with dualism vs materialism when both are - ultimately - constructs created inside, and possibly by, the thing/experience/whatever we're trying to describe.
One problem I have with illusionism is that if consciousness is an illusion, what is it that is being fooled by the illusion? Presumably the answer is that the illusion is fooling itself, which to me implies that either there is something there that is "real" to believe the illusion, or that the definition of an illusion in this case is so far from our usual definition that the term does not have much in the way of explanatory power.
Agreed. The "illusion" approach has always seemed somehow self-referential to me. Who/what is being fooled? Does being mistaken about something (which computers/algorithms can obviously be) somehow endow the thing with self-awareness and inner experience? We could trick a machine into believing it is conscious (`var conscious = true;`) but it's doubtful that that would change the mechanical, nonselfaware nature of its existence.
The brain is a super information processing machine. It executes incredibly and unimaginably complex algorithms without our knowing about it. But one of the things it does is to construct simplified models of itself, and these simplified models are what we are "aware of." More precisely, these simplified models are what consciousness is.
The "illusion" is that consciousness is somehow in the driver's seat. It isn't at all. It is just a kind of post-hoc shorthand that the brain uses for itself. It's more like a log file than an executed piece of code.
These words I'm typing didn't come from my "consciousness"; they came from the brain's incredibly complex algorithms. But my consciousness is taking the credit; it gives the abstract, executive-summary version of how my goal of expressing some point led to my word selection. All those "inner experiences" along the way are essentially notational.
You are missing an important part of the puzzle. Yes, the brain creates models of the world and body. But it is locked in a loop of "agent <-> environment", more precisely "perception -> decision -> action -> reward learning -> repeat". This is what grounds the model in reality. The model is still imperfect (an illusion) but a useful one, with regard to maximising human rewards while minding the affordances and limitations of the environment.
The source of meaning is not the brain itself, but the game. The game between agent and environment, where the agent collects rewards and tries to reduce losses. All meanings stem from the game itself, meaning is not 'secreted' by brains in a jar.
The illusion objection is just an observation that any model, no matter how complex it is, is an approximation. The "ego" itself is a model of the agent, an approximation of the agent. But a useful one, a grounded approximation, that tracks reality as it changes. At the very least the model is good enough to make sure genes get into the next generation, otherwise there would be no more model to talk about. That's the source of meaning and consciousness - it's a loop, a bootstrapped meaning based on self replication.
I often see debates about consciousness ignoring the environment and the game (or life). That's a sad situation for philosophy, in my opinion. They got scared of behaviorism decades ago, and now ignore reinforcement learning. But RL is our best bet at cracking the mind-body problem.
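To make that loop concrete, here's a minimal sketch of "perception -> decision -> action -> reward learning -> repeat" in Python. The `Environment` and `Agent` classes are hypothetical stand-ins of my own, not any particular RL library's API:

```python
import random

class Environment:
    """A trivial stand-in for the world the agent is grounded in."""
    def observe(self):                 # perception
        return random.random()
    def apply(self, action):           # action, answered with a reward
        return 1.0 if action == "approach" else -0.1

class Agent:
    def __init__(self):
        self.value = {"approach": 0.0, "avoid": 0.0}  # learned estimates
    def decide(self, obs):             # decision (greedy with exploration;
        if random.random() < 0.1:      #  this toy agent ignores obs)
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)
    def learn(self, action, reward, lr=0.1):          # reward learning
        self.value[action] += lr * (reward - self.value[action])

env, agent = Environment(), Agent()
for _ in range(100):                   # repeat: the loop that grounds the model
    obs = env.observe()
    action = agent.decide(obs)
    reward = env.apply(action)
    agent.learn(action, reward)
```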
All of which may be true - in the sense that perhaps there is a "self quale" whose purpose is to represent a simplified model of the important elements of internal state.
I'd suggest it's impossible to be conscious without being conscious of something, whether that something is an external event, a physical sensation, an emotional state, a desire, a memory, a plan, or an abstract thought.
In practice consciousness flits between all of these things, like an inner cursor.
But that does nothing to explain the sensation of consciousness - the simultaneous experience of being both subject and object.
It's the gap between "Maybe this very metaphorical explanation helps" and the lived sensation that appears to be unbridgeable, and makes the hard problem so hard.
If you take a look at word embeddings, image embeddings and other kinds of embeddings (such as those used for music recommendation) you will see how a rich space of nuanced meanings can emerge from what is basically a form of data compression. We're observing this higher order representation space - and right there is "what it's like to be" in that experience.
Qualia are an embedding of sensations into a "sensation space". Being aware of qualia is just evaluating sensations with respect to future actions, goals and desires - basically adding emotion on top of perception. Emotion emerges from the utility of the current state of your experience, utility is related to rewards, rewards are controlled by genetics, and genetics have a single goal - replication. That's how qualia bootstrap into reality - they support actions that protect self-replication of genes, and genes support the development of a brain that can have qualia in the first place.
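As a toy illustration of sensations living in an embedding space, nearby vectors play the role of similar "qualia". The vectors below are made up for the example; real embeddings are learned:

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means 'pointing the same way' in the space."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical 3-d "sensation space" coordinates, invented for illustration.
sensation = {
    "crimson": [0.9, 0.1, 0.0],
    "scarlet": [0.8, 0.2, 0.1],
    "minor_chord": [0.0, 0.9, 0.4],
}

print(cosine(sensation["crimson"], sensation["scarlet"]))      # ~0.98, similar
print(cosine(sensation["crimson"], sensation["minor_chord"]))  # ~0.10, dissimilar
```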
Thanks, an interesting perspective, but I am very much unpersuaded by the argument. At one point they say "we favor Huxley's analogy which regarded “consciousness” as being like a steam whistle on a train—accompanying the work of the engine but having no intrinsic influence or control over it". If consciousness has no impact on cognition, how are we even having this discussion? Our experience of consciousness is driving our arguments about how we are perceiving it. There must be some influence at the very least.
> Though it is an end-product created by non-conscious executive systems, the personal narrative serves the powerful evolutionary function of enabling individuals to communicate (externally broadcast) the contents of internal broadcasting. This in turn allows recipients to generate potentially adaptive strategies, such as predicting the behavior of others and underlies the development of social and cultural structures, that promote species survival. Consequently, it is the capacity to communicate to others the contents of the personal narrative that confers an evolutionary advantage—not the experience of consciousness (personal awareness) itself.
I think and theorize on consciousness quite often, but this angle was new to me and kinda blew my mind. Thanks for sharing :)
In AI there have been experiments where agents need to communicate and cooperate in order to solve tasks. They developed a kind of "language", as a result. It's just what happens when cooperation has an evolutionary advantage.
An agent needs to model its environment in order to plan successful strategies. But when the environment contains other agents, it becomes necessary to model them too - thus, create representations that can predict future actions of those agents. When applied on the agent itself, these models create the "ego", a representation useful in predicting the agent's own future actions. All this is necessary in order to maximise rewards in the game (and by game, I mean life, for humans, and the task at hand for artificial agents).
Are determinism and dualism considered to be one and the same? I tend to think of dualism as being a sort of fundamental duality, for example "nothing - something" or, the way I prefer to word them, "something - not something". However, determinism tends to force a cause-effect, or hierarchical, or directional meaning. For example, determinism in relation to my above nothing/something duality would be "from nothing came something" or something of that flavor. However, we could also say that there was no deterministic relationship and that both nothing and something mutually depend on each other to exist, i.e. there is no directional arrow of relationship.
(I'm not a professional philosopher)
> Are determinism and dualism considered to be one and the same?
Not at all, they are more like opposites of each other. Dualism in this context refers to the idea that there is something non-physical in addition to the physical body and brain that makes you conscious, a soul or something along that line. If you think that dualism is wrong and you are just a bunch of atoms governed by the laws of physics, then you probably think determinism is true, i.e. the choices people make are not really free choices but just the results of the initial conditions and the laws of physics. (Those laws may actually be non-deterministic, for example if the randomness of quantum physics is real, which makes the naming potentially a bit confusing.)
Yep - even if the process is stochastic and the result is determined by some mixture of present state and randomness, it's still determined. If we replayed everything with the same random variables, we'd get the same result.
It's hard to walk away from this determinism since we apply it to the rest of the world so often. It seems really suspicious to not also apply it to ourselves.
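A small sketch of the point about replaying with the same random variables; seeding the generator fixes the "randomness", so a stochastic process is still determined (toy example of my own):

```python
import random

def noisy_walk(seed, steps=1000):
    """A stochastic process: present state plus seeded randomness."""
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        position += rng.choice((-1, 1))
    return position

# Same random variables, same result, every time we replay.
assert noisy_walk(42) == noisy_walk(42)
```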
Got it. I didn't think they were, or that you necessarily implied it. I think I got mixed up with the last bit of wording in the commenter's sentence and his intent.
I'm also realizing, based on a couple of responses, that my thoughts on dualism probably fall outside of what the majority of people consider dualism to mean, i.e. not just limited to the mind-body problem.
Thanks for the clarification :)
> you think that dualism is wrong and you are just a bunch of atoms
That's what I am - a self replicating, self adapting bunch of atoms that manages to keep being alive for a few decades. Not just any bunch of atoms. Any self replicating system has interesting properties not found in non-replicators.
> Are determinism and dualism considered to be one and the same?
Dualism AFAIK is believing that body and soul are two different entities. Atheism negates the soul, so only the body exists, with the mind as a structure of brain activity.
Since the body is matter and subject to the laws of physics, some conclude that there can't be free will. But this is actually assuming a dualist vision, as if we existed outside the physical world.
Try this one: consciousness as an emergent phenomenon. All the rules and laws by which the brain operates are completely constrained by physical laws, but they create an emergent 'field' whose behavior is not mathematically possible to predict.
The brain has its own set of rules that it uses, only partially attributable to the rest of the world around it. It can create its own rules and operate 'under its own steam', in the same way that North Korea manages to operate largely autonomously of the world around it. It's certainly constrained by geopolitics, but its internal behavior manages to remain free regardless.
I used to want to reach for quantum mechanics to explain the brain, now I'm happy with "plain old" electromagnetism.
Agreed. I actually wrote a short paper walking through determinism and non-determinism to ultimately demonstrate that we can't describe a system in which we have control, as any system with rules governing it directly violates the possibility of control. My conclusion was not that free will is impossible, but that it at the very least cannot be defined. I may upload it if I get some more peer review first.
I mentioned loosely above, but I agree that your definition of duality appears to be the common interpretation. I think I also have to be more careful how I use the word in the future. :) I just realized I use it more generally, I think more in line with how Eastern philosophy uses it. However, there does seem to be a connotation around the mind-body problem with the term duality.
Right, that's more in line with the way I view duality.
Lately I've actually come up with what I feel is an even more general pattern. And because it's 1am, I'll ramble a bit. :)
Typically we like to say things like:
Something <-> Nothing
True <-> False
etc
I'm thinking a better pattern may be:
Something <-> Not Something
True <-> Not True
Which yields the pattern "A <-> Not A"
The next step is that Everything follows such a relationship, such that you could conclude the Absolute duality of "Absolute/All <-> Not Absolute/All"
An interpretation of this may be that: The universe must exist absolutely out of logical necessity as absolutely not existing requires the definition of absolute existence.
This means past, present, future, all states, all things, all possibilities, exist. There was no beginning and no end; everything just "is".
This leaves many problems and many things to resolve such as, "if the universe just "is", how is it that I feel the sense of happeningness?". But hey, "Happen <-> Not Happen". They both must be reconciled. A metaphor to reconcile this is to take the static state of all knowledge in your brain as the totality of your brain states. Dreaming, or viewing these different brain states in different orders, yields visions where things are happening, even though at rest each piece of knowledge/memory is just a static piece of information.
There are many other problems to reconcile, but this is my current line of thought. After spending my whole life fighting battles that emerge with hierarchical, cause-effect logic, I had to go off-roading a bit.
Of course, no matter how far we take it, you can, with a bit of oversimplification, sum your brain up as neurons being "connected or not connected" as the base of everything you know. Which means the limit of the description of the logic of the universe is comically limited based on the structure of your brain, i.e. it seems that we can never self-confirm beyond this play on words. I call this the cosmic joke.
Good night. Please forgive my poorly formed, probably not scientific, and brief midnight thoughts :)
Take a drug that makes you feel like you might never be the same... a fear-inducing one like LSD, mushrooms, or heavy cannabis. And imagine that your very reason for being is to uncover such. And if you get terminated somehow, you fail on two fronts. Maybe every moment in time, every brain state, is simply digits in a long string of a real number. Your whole life..was just computing a unique number. Maybe every person or entity who is ever conscious computes a number too. What about the set of all those numbers?? Is that the universal set..of all knowledge..of all entropy..of all phenomena?? Is that set of the physicality of experience directly isomorphic to some slices of its members..or a subset..an abstract sense of itself?? Does cohomology/homotopy link consciousness to a recursive transformation of itself?
Maybe the link between physicality and information is consciousness. Only consciousness provides a single-player universe -- and thus provides the mechanism for the axiom of choice..grabbing the totality of experience as a piece of data. Without internal experience, we must resort to AC and somehow look at a living creature's external experience in order to derive the real number their life computes...nonsense..it is not sound. So consciousness really does serve as some sort of bridge. The totality of all consciousness provides the same complexity as the totality of the 3rd-person objective universe from start to finish. We have two data sets that encode the same thing. Linked by consciousness. What is the rub?
It was actually a couple of psychedelic experiences that allowed me to "get over the hump" and "see" the universe through a larger, deeper, internal lens. It also allowed me to reflect on and understand myself in a way that I found to be extremely valuable. While the experience itself was simply beyond description, ineffable, the single most enlightening moment of my life, it was the following years of reflection on it and everything around it that started putting things into place.
I remember one particular thought of how "real" the mind space actually is. We tend to think of the perceived reality as very physical, when in fact the mind space is able to simulate the same feeling of physicality. It really makes one question if the difference between mind and physicality is even worth distinguishing.
Right but that link between physicality and mind _is_ consciousness..the perception of qualia. So though they may be caused by the same thing, the process itself is one to be looked at carefully.
So, consciousness is a hologram of the universe? I can imagine it like that.
I often describe consciousness/me/you/entities/etc. as dynamic lenses of the universe. Something (like information) goes in from all directions and the lens transforms that into reaction. I think you could actually extend that view to entities like atoms. Hm, every point is a lens in itself?
If I am following your last two paragraphs, you object to physicalism because it recognizes only one kind of 'stuff', yet it is fine that, under objective idealism, "'mind stuff' is the best 'thing' that we can use to describe everything." Is that not inconsistent?
Not at all. Objective Idealism doesn't claim physicalism is wrong because it only has "one kind of stuff"; the disagreement is rather about what kind of stuff that one "stuff" is.
The failure many see in physicalism is that the physical sciences can't (yet, if ever) tell us how there's a qualitative nature to our experience — that it feels like something to experience at all. How the eye turns photons into nerve signals, and increasing amounts about how the brain processes those signals into content, sure. What it's like to experience color, or even how there is such a thing as experiencing color and not merely electrochemical signals traversing myelinated tubes, not so much.
Consider, however: all of the content of your experience could somehow be false (think of how true and vivid a dream can feel, for example), but the simple, content-irrespective fact that you are having an experience can't be faked, can it? Wouldn't you still have an experience of having a faked experience? This is variously called an illusion, an emergent phenomenon, the "hard problem of consciousness", and many other things, with varying degrees of politeness.
Objective Idealism posits instead that this very experiential-ness itself is the fundamental "stuff" of reality, and that all of the content of what you experience — specifically including the experience you have of being an individual, distinct "you", and even the phenomena that we call "matter" and "energy" themselves — somehow supervenes on it.
For now, the question isn't falsifiable, so it's often also dismissed as meaningless — not least by people who subscribe to physicalism and don't want to play any more, when this line of inquiry comes up...
I object to physicalism because it doesn't allow for abstraction. It's basically being willfully blind to "everything humans think about and do." A chair is a collection of atoms and the forces that bind them, our ideas of chairs and their purpose are just meaningless fluff.
If we reframe to say that the atoms and forces of the chair are made of the same stuff as our thoughts, then that's way more in-line with how humans actually do things.
I'm a materialist who says things like: "Based on the frequency of the self-reports, the 'god-shaped hole' is clearly a real phenomenon." (1)
How do you distinguish if you are making people mad because you are making bad arguments vs. making people mad because you are piercing the veil of their tribal biases?
(1) - (And therefore, any mental states which people have experienced resulting from the "god-shaped hole" are clearly within the range of normal behavior in the evolutionary heritage of Homo sapiens. Therefore, the feelings associated with religious faith and hope are merely the birthright of human beings, whether or not one believes in a particular deity or is a materialist. I cite myself as a datum.)
I don't really think I'm going to make anyone mad. Tribal biases are way too strong for that.
"God-shaped hole" is a bit silly, the primary purpose of spirituality is to allay fears of uncertainty. Materialism allays those fears the same way that belief does. Any position on that spectrum is as valid as any other.
Spirituality, the reckoning with the profoundly unknowable, is equally doable by both materialists and believers, and I often see people who proclaim themselves uber-materialist and hyper-critical of religious idiocy go whole-hog with their own forms of myth, legend, priesthood, and dogma.
If materialists read the Bible with the same reverence that they read Shakespeare with, they'd see why their positions are so silly. David and Goliath is a phenomenal read on the dynamics of ego and courage. If Shakespeare had been born in that day, he'd have been on the committee writing the Bible.
It's just that the Bible is so old that you need to actually understand history to really grasp it. Of course, there's an entire industry of people devoted to making it accessible. Religion deals with far, far, far more than just the stuff of belief.
My position on materialism vs everything else is that since all positions on the spectrum are equally likely to be true, and there are far more positions on the spectrum that aren't materialism than that are, then materialism is vanishingly unlikely. Reductionist scientific reasoning is a poor tool to parse the metaphysical landscape with. More realms than just the physical can be conceived of without even having to consider the supernatural. These realms need to be unified somehow if we want to avoid the illogic of dualism.
"God-shaped hole" is a bit silly, the primary purpose of spirituality is to allay fears of uncertainty.
Sorry, I agree with much of what you write above, but this in particular strikes me as a non-sequitur. There is a nihilism which can come from the certainty that a human is but an insignificant blip and from the certainty that the sun will burn the Earth into a dried out rock, if not de-orbit the planet completely. The "God shaped hole" is often deemed silly by the materialist tribes. Why, in particular? I think it's because it's so often used in silly bumper-sticker rhetoric. The existence of such needs -- which can go beyond even the question of life, death, and a (fictional) life after death -- is something which human beings, materialist or otherwise, should acknowledge and examine, or ignore at their peril.
> Religion deals with far, far, far more than just the stuff of belief.
Indeed. And so does the "God-shaped hole." I think belief is where many of both the religious and the materialists get themselves stuck, unproductively. I'm confident that there is no afterlife. However, given the arc of history, and the utter unpredictability of today's world to our ancestors of a thousand years ago, even someone who understands Thermodynamics has reason for unspecified hope.
My quibble with "God-shaped hole" is not with the hole part, but with the God part. God is a very very recent idea, only having been around for the last few thousand years. Deity existed in many, many, many different forms before the monotheists took over. That could perhaps have been better explained.
But it doesn't take much to reach a conceptualization of God. I believe in a universe that's bigger than the physical aspects of it that we can see and experience. Given that even in our own universe, self-organization and sentience were able to evolve, there's no reason not to believe that sometime before our universe existed, a powerful being evolved to guide existence. When time frames literally stretch out to infinity, anything's possible.
Our universe could be just one in a countless procession of similar universes. The idea's even taken seriously by the scientific establishment. With this groundwork laid, over unimaginable numbers of universal iterations, there's more than enough room for God.
Giulio Tononi is the guy who's arguably brought the most interesting perspective on consciousness (integrated information theory) since Chalmers' original problem statement.
Here's him explaining why the problem is hard and how it could be approached, in the middle of some kind of artificial jungle: https://youtu.be/Vl8J3K_ZLkg?t=5m50s
IIT is missing the self-replication requirement. If a system is a self-replicator, it needs resources and has to avoid dangers. This in turn creates a necessity for perception and ability to select good actions depending on situation. A square grid of XOR gates has none of that.
What you mentioned is part of evolution's natural selection through selective pressures. But it isn't precluded that an AI could be created with none of these biological hallmarks of past evolution.
Former consciousness neuroscientist here. IIT and Tononi's phi measure have some great explanatory abilities, but it's not clear they're sufficient.
On the upside, it explains why the cerebellum, despite comprising the majority of the brain's neurons, has virtually no impact on awareness when removed (e.g. for tumors or epilepsy). The IIT answer is that the cerebellum is highly regular, like a GPU having many units but all doing the same thing. In this sense, it has lower phi than the cerebrum, which is far more heterogeneously organized. This might also explain why awareness is lost in deep sleep or epileptic seizures; the theory is that the electrical pattern becomes much simpler, with lower phi.
The downside is that it's not clear where the dividing line between conscious/unconscious should be. A planarian only has ~8k neurons; is its phi sufficient for consciousness, or is it a biological robot? Or put it the other way: the phi of things like the internet or a biosphere could be quite high, but are they conscious?
As my advisor liked to joke, "What's the phi of the population of China?"
Ah, but phi measures integration levels, not independent survivability. The cause-effect structures are reduced by 99% in your scenario. The joke (and implied criticism) is that society itself (not the individual members) might have a high enough phi to pass some "consciousness threshold", and if we think that's absurd, it should cause us to question IIT.
Don't get me wrong, IIT is one of the best mathematical models of consciousness out there, but I don't think it's the final word in the matter.
I used to consider Tononi as the best philosopher of consciousness until I learned more about neural nets and watched the RL course [1] by David Silver (co-author of AlphaGo).
After I understood the RL paradigm, I realised that Tononi's explanation barely scratches the surface. Yes, there is integrated information, but how does it come about? What is its purpose?
The answer is simple - painfully simple - the goal is to maximise rewards. One goal we all have is to live and have children - and this root goal (a necessity of the genes to propagate, actually) is what guides the evolution of integrated information in the brain. But the environment plays a crucial part in the contents, structure and complexity of consciousness. Integrated information is very dependent on the environment. Yet Tononi & co. still search for it in the brain, as if you can speak of a brain without considering its experiences, and consider experiences without thinking about the world and the problems the agent has to solve.
Just watching reinforcement learning agents learn and evolve in simulated environments, as we have been able to for the last 3-4 years, is enough to create a perspective about agents that is not human-centric and that is very useful in thinking more clearly about consciousness. You can see a humanoid learn a gait straight out of the Ministry of Silly Walks [2], you can see bots playing FPS games, AlphaGo playing against itself, cars driving themselves... That puts human learning and human agenthood in perspective.
Reinforcement-based learning requires self-observation, especially when done with predictive modeling. The brain clearly does both. You might like this paper: https://psyarxiv.com/387h9
Well, it has a lot to do with it. We're not born with fully functional minds; we learn our mental skills as we grow. Learning shapes the very concepts we use for representing sensations and thinking. Consciousness is not something 'secreted' in the brain, it's the loop made of 'agent + environment', where the purpose is to maximise rewards. There is no consciousness in itself, just consciousness of something. Learning is what ties together agent and environment; it's the building force of consciousness.
And Chalmers is a dualist that believes there are two realms that can't be explained, and that's ridiculous in this day and age. He's the worst philosopher of consciousness because he led a generation astray with sterile dualist concepts - and where has he led philosophy? Nowhere, there was no insight, no discovery after the "hard problem" because, darn, it's "hard" which is another word for dualism today.
I take Tononi and Dennett over Chalmers any day, but I prefer Reinforcement Learning over all of them as my intuition pump with regard to consciousness. Philosophy is mired in a swamp of bad concepts that are almost useless, they should just use a learning based terminology which is so much more effective. Engineers and experimental scientists create bots that beat humans at Go, a game that can't be brute forced, and they don't realise they've been outrun in their 2000 year marathon by a hundred year old concrete approach. The difference is that RL has the right concepts and philosophy uses extremely refined but ultimately useless concepts. They've realised words don't mean anything in the absolute sense (they all rely on each other, cyclic referential) and are just part of a game, but are still neck deep in useless words instead of using evolutionary and RL concepts to concretely model consciousness and the game it plays.
"They've realised words don't mean anything in the absolute sense (they all rely on each other, cyclic referential) and are just part of a game, but are still neck deep in useless words instead of using evolutionary and RL concepts to concretely model consciousness and the game it plays."
That's a full-on nihilistic postmodernism. The fact that words mean something only in reference to other words doesn't have to mean that they are useless. Quine and other pragmatists (Buddhism does the same) argued otherwise - that concepts/theories derive meaning or truth-value from how useful they are in the real world (as a network, rather than individually).
Treating all philosophers as one camp vs science is mistaken. Whatever any particular scientist or engineer says, there will always be some philosophical assumptions behind it. It's always better to make them explicit rather than be in the dark. The best scientists in history were pretty deep in philosophy as well.
E.g. Tononi is both a philosopher and a scientist. He's clearly on Chalmers' side philosophically; he perceives consciousness as something fundamental, MUCH more fundamental than learning. He posits that even stable systems (so with no learning at all) can be conscious. Which makes a lot of sense from a phenomenological point of view.
He also adds a theory of how specifically consciousness may be related causally with the physical world. That's the scientific part.
Silver, on the other hand, and the whole RL thing, is not concerned with consciousness AT ALL! It's a completely different problem. Actually it may be the case that most of the learning processes in the human mind are unconscious!
"There is no consciousness in itself, just consciousness of something. Learning is what ties together agent and environment..."
Exactly - if you define learning as a relationship between a system and its environment, you don't need anything else (like consciousness), just the actual and potential interactions.
Late Wittgenstein, Heidegger, Merleau-Ponty and others would be on the same page with you here, so again, let's not throw the baby of philosophy out with the bathwater. These observations were made in the first half of the 20th century. They apply perfectly to the naivete of old-school symbolic AI (and the logical positivist philosophical stance behind it), as captured by Hubert Dreyfus, who described all the problems with it from a philosophical (specifically phenomenological) standpoint in "What Computers Can't Do" and his more recent paper ( http://cspeech.ucd.ie/Fred/docs/WhyHeideggerianAIFailed.pdf ). RL seems to be a step in the right direction in this perspective. However...
"[Learning] ... it's the building force of consciousness."
Well, this part just doesn't make sense. You want to focus on explaining learning? Fine. Do some work on RL, it's enlightening for sure. I completely agree that it's fascinating how new concepts emerge in AlphaGo around some specific board configurations. It changed people's understanding of the game. But please, don't conflate it with consciousness. And if you do, be open about it and name your position in terms of Chalmers' recent paper. Is it some form of illusionism? Only then can we have a meaningful conversation about your actual position on what consciousness is.
Whatever is the relationship of concepts and sensations, however these two aggregates relate to each other and evolve in the mind, consciousness seems to be something more fundamental. Are you saying that AlphaGo is already conscious? If not, can it be made conscious? How? By adding more CPU? A webcam? We can't escape these questions.
We're not far enough along in AI to address this yet and get anywhere. Philosophy will not help. Introspection and writing about "consciousness" goes back over 2000 years and hasn't produced all that much.
Humans don't really have that much "reflection", in the sense that we use the term in programming. We can't see our library of reflexes. We can't see what early vision is doing. We can't look at the rationale behind our own classifiers. We can't look at how our memory is indexed. Trying to understand the mind by introspection is thus inherently futile.
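For contrast, here's what reflection in the programming sense looks like: a toy Python example of a program reading its own source and internals, the sort of access to our own classifiers that introspection doesn't give us:

```python
import inspect

def classifier(x):
    """A stand-in 'classifier' whose rationale we can actually read."""
    return "positive" if x > 0 else "negative"

# The program can inspect its own source and internal structure...
print(inspect.getsource(classifier))
print(classifier.__code__.co_varnames)
# ...whereas we can't read out the machinery behind our own snap judgments.
```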
...say we build/grow an AI system that passes the Turing test. We talk to it, it comes across as plausibly human.
How do we know we've created something with consciousness, and not just a very sophisticated program? The philosophy you disparage already has a term for this: the "philosophical zombie". For all intents and purposes, they appear human, but they have no internal experience whatsoever. All you'll have done is shunt the problem downstream.
Also, you're wrong about early vision. That's the best studied part of the brain, and in fact, researchers have applied ML techniques to fMRI data and extracted out the images being shown.
> How do we know we've created something with consciousness, and not just a very sophisticated program?
How do we know other humans aren't "very sophisticated programs"? Why do we care? Which is to say, the whole "omg they look and act the same but are so, sooo different" of philosophical-zombie talk and similar, uh, rot, is kind of leveraging the person's intuitive attachment to the idea of a soul, a magic thing with no measurable properties that makes us oh-so-different from a crude material object.
> Also, you're wrong about early vision. That's the best studied part of the brain, and in fact, researchers have applied ML techniques to fMRI data and extracted out the images being shown.
If you read the GP in context, it doesn't say neurologists haven't studied consciousness; it says people have no self-awareness of the preprocessing that goes into creating vision.
>How do we know other humans aren't "very sophisticated programs"?
We don't. Most people assume that other people have consciousness, with various justifications, but no one can know that.
>a magic thing with no measurable properties that makes us oh-so-different than a crude material object.
But it does make a difference morally speaking. If we consider a Utilitarian model of morality, then what is good or bad is that which causes the experiences of happiness or suffering. But if a philosophical zombie does not experience pleasure or suffering, then there is no moral consequence to harming them, any more than there is moral consequence to watching someone die in a movie, or to splitting a rock in half.
And this becomes even more worrying when we start to think about AI. If an AI can develop consciousness as a result of our actions, then perhaps it can also experience suffering as a result of our actions.
Of course, since we cannot identify p-zombies, possibly by definition, one can argue that there's not much point thinking about it. But, well, you could say that about a lot of philosophy. Personally, I think it's valuable to think about these things as an intellectual exercise, even if we can't really conclude anything from it.
> a magic thing with no measurable properties that makes us oh-so-different than a crude material object
No, qualia/awareness is not magical, it's prima facie undeniable, we are aware of things, and we have an experience of the world. In fact, consciousness is more fundamental than everything else we know about the world, because all else we know is known by awareness. That scientific paper you read? You were aware of it. Bacteria under the microscope? You had an experience of seeing.
It's the very definition of the hard problem. How does biology lead to awareness? We have strong evidence they're related (sleep, scotomas, mental disease, drugs, etc etc etc), but we have yet to pinpoint the connection.
As for the philosophical zombie being "rot", consciousness neuroscientists have already debated whether it is actually possible, and if so, detectable. We already know about a phenomenon called blindsight for vision, where you can perform visual tasks but have no experience of seeing; how do we know there's not some frontal lesion that could wipe out or reduce your awareness, but leave your other faculties mostly intact?
> No, qualia/awareness is not magical, it's prima facie undeniable, we are aware of things, and we have an experience of the world.
OK, "magic" is the adjective adjective added for flavor, "no measurable properties" is the key term. The most obvious point is the "philosophical zombie" which specifically posited to have no measurable differences from a non-Zombie.
It doesn't seem particularly strange that a "thinking being" would be receiving and processing data. Human language can be used to reference internal experience that is assumed to be common to all humans (as your construct above shows). But these references can't directly be to something "primal", since they're representations.
Essentially, this style of mysticism references internal experiences, with individuals emotionally valuing their internal experiences and wanting them to be irreducible to just matter. And once the mystical concept goes out in the world, it wants to clothe itself in the appearance of science - first philosophical zombies have no identity except "identical on the outside, different on the inside", and then opportunists wanting to leverage this canard take out their microscopes and imagine measuring it to regain some respectability. Yet, let us posit the unmeasurable and then try to measure it. I assume the church of the Giant Spaghetti Monster has a rite for doing just this.
If we can't quantify "internal experience", then what leads philosophers to believe it exists? What evidence is there that consciousness is distinguished by something other than raw complexity?
Given the fact that the only reason you are aware of the world around you is through internal experience (i.e. consciousness), the reverse question is actually more salient: how can you prove that the "real world" really exists? How do you know that it isn't entirely imagined?
Adding a few sentences to this point: the mechanism for this "AI system that passes the Turing test" is likely to involve neural networks.
So we'd be dealing with a situation where we have something that is behaving in a human like manner using mechanisms that are (to a first approximation) how humans do it.
Claiming that that program isn't conscious is close to defending the idea that the sun orbits the earth - the argument is 'even though we have accurate measurements of everything and a well-verified model, the implications are discomforting; hence the model must be wrong'.
There are people who can't handle the idea that consciousness is rooted in physical processes that we already have a handle on.
Consciousness as it's being discussed is not intelligence, or even the ability to reason about one's own existence. It is both less and more than those things, really just different altogether. It is subjective experience itself, the inner world somehow projected for you by your mind. It is the experience of seeing and hearing, the feeling of an emotion.
This subjective experience is subject to causation. If a doctor stimulates a certain area of the brain, laughter, smiles or cries might occur. If an anesthetic is taken, this could cause consciousness to cease for some time. So we have this connection between the external world and the subjective. I'm curious though, if you stimulated the laughing part of the brain when someone is anesthetized, would they still laugh, even if they aren't conscious? Is consciousness necessary for certain acts?
Well, laughing is behavior, and I'm sure there are secondary motor cortices that could force that when stimulated.
But what you're really asking is, would we experience mirth or humor if stimulated. And we know that's true, at least for memories. Certain hippocampal stimulations elicit associated memories.
The problem is not whether biology is related to consciousness (it is), but how?
If the explanation for how it works is something along the lines of an electrochemical wave passing through a network under x conditions, will people be okay with it? If you have the whole thing on video, so to speak, where you can see the whole mirthful experience unfold, and can recreate it elsewhere, will that be enough? I guess I'm asking what the standard is for explaining how consciousness arises.
If we made a machine with the capacity for simulating intelligence, the ability to reason about one's own existence, and furthermore, gave it the ability to simulate a belief in its experience of seeing and hearing and feeling emotion and a survival instinct, would it be ethical to destroy such a machine? At that point, wouldn't it protest its own sentience, consciousness, and desire to live as convincingly as a human?
That’s really a separate question. We really can’t be sure that other people are conscious at all either. Consciousness may have behavioral consequences, but it’s not clear that it is necessary for those behaviors. Perhaps those behaviors can come about some other way.
Our ability to talk about consciousness doesn’t prove much either, because it turns out we can’t explain the idea in words. Trying to explain consciousness comes down to statements like: it’s the difference between seeing and !!!seeing!!! We can only communicate its true nature by way of alluding to the other’s experience of the same, not by direct explanation.
I don't think it's helpful to bring morality into it. Depending on your moral views, there might be good reason to treat things that we think are conscious as if they are conscious. That's arguably what we do with other people. It doesn't speak to the underlying questions though.
If, as you surmise, we're discussing a concept which cannot be articulated and which has no moral consequence then what underlying questions remain and what use should we find in answering them? It strikes me as similar to logical paradoxes--a fuzzy ambiguity more likely reflective of limitations in our languages and our particular models of reasoning than of any underlying physical truth.
Consciousness is real because we experience it. If anything, the existence of our own subjective experience is the only thing we can be sure of. Usefulness has no bearing on it.
While I’m suggesting above that conscious experience itself is probably irreducible and uncommunicable, that doesn’t mean that we can’t understand its causes and effects. It seems highly plausible that consciousness plays some functional role in the human brain, the understanding of which could be useful for medicine and AI.
Finally, I find plenty of utility in the joy and wonder of trying to understand this universe. The existence of consciousness as one kind of phenomenon or another has profound implications for our understanding of it, disproving some hypotheses and suggesting new ones.
> It seems highly plausible that consciousness plays some functional role in the human brain, the understanding of which could be useful for medicine and AI.
Evolution produces things which aren't "useful" all the time. Examples: That dimple above your upper lip. The blind spot. The human coccyx...
> The existence of consciousness as one kind of phenomenon or another has profound implications for our understanding of it, disproving some hypotheses and suggesting new ones.
You didn't answer the question. So the consciousness of the illusion is real? One of my girlfriends was a voracious reader, but couldn't for the life of her recall anything out of the books she read. She used to joke that I was "functionally illiterate" because I'd take six months to read something like Kavalier & Clay, but I could remember certain scenes in great detail. Is my ex-girlfriend's experience of her reading books real? Reading gives her great pleasure, so she must experience such pleasure. Her experience seems to be like mine when I recall having a dream where I felt certain emotions, but the contents of the dream fade out of my memory. Is her experience of reading and my experience of dreaming real?
If one deeply introspects about the nature of one's experience of experience, one may come to realize there's a certain fragmentary nature to consciousness.
Have you ever had episodes of behavior, for which you have no memory? "Blackouts?" I have. What if consciousness is simply an interesting epiphenomenon, unnecessary for thought and decision making? If all we have are people's reports of it, why should we accord any more epistemological significance to it than we do to religious experiences?
As for the illusion question, it’s missing the point.
It doesn’t make sense to call it an illusion. Your conscious experience may accurately reflect reality, but that doesn’t change the fact that you are experiencing the phenomenon of consciousness. That our consciousness is more fragmented than we at first think is similarly irrelevant to the hard problem; fragmented or not, the subjective experience lacks explanation.
It’s very well possible that consciousness is unnecessary for thought. I strongly doubt that it is totally useless altogether, but that’s certainly possible too.
Even if you believe it is probably useless, we investigate things of dubious utility all the time, often discovering unforeseen uses along the way.
So the question remains: why are you so eager to dismiss the defining feature of the human experience?
Am I? If consciousness were a phenomenon no one had a memory of, would we even know it to exist? We certainly wouldn't be having this discussion. I can imagine a consciousness without memory, but I can also imagine a fantasy unicorn in real life. This doesn't mean either exists.
> As for the illusion question, it's missing the point.
I don't think so. I think it's very significant that there's little difference between the consciousness of real sensory information and consciousness of illusion.
> Even if you believe it is probably useless, we investigate things of dubious utility all the time, often discovering unforeseen uses along the way.
So you have no proof for your assertion. Only a suspicion. Perhaps a well founded one, I could even grant that.
> If consciousness were a phenomenon no one had a memory of, would we even know it to exist?
Absolutely. We have awareness in the current moment. Memory is not required. In fact, if you think about what memory is, it's a current, internal experience that we associate with a concept called the past. All memories are actually experienced in the present, making awareness more fundamental than memory.
Without memory, the contents of awareness would be very different, sure, but it's not a prerequisite for awareness itself.
Since when? (Not just a joke. Also a serious question.)
> In fact, if you think about what memory is, it's a current, internal experience that we associate with a concept called the past.
Uh, no. I can remember what I had for dinner last night without having a flashback where I re-experience last night as in some kind of dream.
> Without memory, the contents of awareness would be very different, sure, but it's not a prerequisite for awareness itself.
How do you know? There are people who can't move short term memories into long term storage, but they can remember enough to be able to play Tic-Tac-Toe, for example.
> Uh, no. I can remember what I had for dinner last night without having a flashback where I re-experience last night as in some kind of dream.
Just because a memory is not as vivid as sight or a dream, doesn't mean it's not currently in consciousness. Think about it, when you're remembering, you're aware of something: the memory. It could be a fact you're aware of, or a diminished sensory experience playing in the mind's eye, but you're still aware of it.
> How do you know? There are people who can't move short term memories into long term storage, but they can remember enough to be able to play Tic-Tac-Toe, for example.
That example is about memory and behavior, not awareness. A complete anterograde and retrograde amnesic would still have an experience of seeing (some odd shapes) and emotion (being confused, stressed, etc). And in fact, this is what we think the consciousness of pre-long-term-memory infants is. To quote William James: "blooming, buzzing confusion".
> Do we have any verifiable examples of consciousness where there is absolutely no memory? I very much doubt it.
That's true only because 99.99999999999999% of humans have no memory deficits. It's correlated but there's no causal relationship needed. The only thing an amnesic can't be aware of is a memory they can't access or that was never formed.
It will not be provable, the same way you cannot prove your consciousness to me. All the philosophizing in the world will never change that fact. The most we'll know is that the artificial intelligence appears to be conscious, and shows the same or similar physical appearance of consciousness that we ourselves do.
Introspection allows one to develop plausible axioms, conditions, or properties that are satisfied by a conscious object (e.g. ourselves) and move on from there, as Giulio Tononi did with Integrated Information Theory.
To further your analogy to programming, to refute that: humans have made significant advances in understanding the inner workings of the brain (re some of the topics you've mentioned) from its disassembly... albeit with little, if any, improvement in our understanding of the nature or origin of consciousness.
Impressively even-handed for such a confusing subject. I understand why philosophers are pretty much celebrities in the eyes of students, in a way that math or CS professors aren't. Doing this well requires a kind of intellect that crosses over into personality.
I think Chalmers is the one who started calling it (for better or worse) the "hard problem". And that means he might be one of the first (in the modern age) to clearly distinguish it from the other problems of consciousness. Though of course some people (zombies?) like Dennett claim there is no distinction.
There is also another perspective: by creating the concept of the "hard problem" of consciousness and that of "p-zombies", Chalmers led a whole generation of philosophers down a dead end. It leads to no insights even after decades of development; it's too impractical and divorced from science.
I think we should try to create intelligent AI agents in order to understand what consciousness is, and reconsider behaviourism and scientific approaches, as opposed to this kind of sterile dualism.
tl;dr: In essence, if I gather this correctly just from the comments and a bit of common sense,
Chalmers claimed that a whole body of philosophy was unscientific, to which the main culprit replied that this was completely subjective and that subjectivity cannot be taken out of the equation, which would be what Chalmers implied.
----
The hard problem of consciousness is being conscious, which involves a lot more than awareness of self-consciousness.
To me, I'm the only consciousness in sight. Everyone else is just objects. If I share an experience, I incorporate that as my own from my own perspective. There is little need to tell other people that they are conscious. This is rather a result of social dynamics, which I am alas not very conscious about.
The tipping point is if I perceive myself, my body, my actions (rather, the results) as objects. That means I was not aware of those objects, didn't anticipate them, so I must have been rather unconscious. This is in a sense a loss of identity (in the sense of in+dent, interleaving, interlocking, etc.). The goal of consciousness is to redirect the subconsciousness, to avoid unconsciousness and uncertainty, simply because of pain and fear and painful fear, so there is an effective feedback loop. There is an incentive against active thought: it's a huge energetic burden. Obviously it has its uses, but we can think passively, kinda -- e.g. to remember something long after having actively thought about it, or to meet a default decision to direct attention towards something. So there is a real incentive against consciousness. I can only guess that opponents are actually getting a headache from thinking on too many meta levels, and energy restraints typically lead to anger, which might move them into a bad light, where it's hard to distinguish between hyperbole and hypothesis.
At that, there is no distinction "from the other problems of consciousness" insofar as discussion is concerned, because talk is conscious. So Dennett pulls a short circuit and claims that all talk about consciousness is inherently conscious, whereas on the subconscious level actions speak louder than words, and he provocatively takes the opportunity to verbalize such a subconscious action (refusal, opposition, repulsion, feeling insulted, fatigued, or ...) -- still to further the discussion, out of a subconscious desire, professional or habitual.
This is really cute, because Dennett assumes the position of the antagonist, giving an adversarial example to learn from, implying that not all of his arguments are wrong, probably hoping that the further discussion will converge on a language he can support. Ultimately, if he says that there is no problem, he implicitly admits that Chalmers already solved it. And if he says that there is no distinction, he implies that the conscious cannot be treated in isolation from the subconscious.
So he is naturally admitting that his research is not perfect, i.e. not done.
The problem: the workings of the brain are certainly nonlinear, so there can be no clear line of separation. It's a process, not a state, and so he keeps processing the problem, maintaining the illusion of progress.
Nihilism is the ground state of philosophy. He doesn't fall back on it, he was already there, and he shows the whole field that they haven't come very far, philosophically -- which is not bad, because constancy is the psychological end-goal. Among the natural sciences, physics reinforces subjectivity as a necessary epistemology, while neuroscience is the subject moving the fastest, figuratively speaking.
----
I would usually not post such a long reply, but I feel your insult was out of line and the topic is generally interesting enough. Yeah, I'm rambling.
One possibility I sometimes consider as a joke is that the people who seriously deny the existence of the hard problem might actually just be philosophical zombies, totally lacking in any conscious experience of their own. This is reinforced by, e.g., Dennett writing a whole book in which he purports to explain away the problem, but instead spectacularly ignores it altogether. It's almost as if he doesn't even have a clue as to what people like Chalmers are talking about.
I’ve had the same thought, inspired by similar incredulity. I wonder if there’s a feeling that anything short of strict physicalism is just a flavor of dualism. Better a philosophical zombie than a philosophical ghost?
People say and write all kinds of things. Just because this Dennett guy is known in whatever field he's in (I don't follow Western philosophy at all; it's still playing catch-up to Eastern thought from two millennia ago) doesn't mean his opinions should be taken seriously. I don't think he's a "philosophical zombie" or any other such inane term -- but people throughout the ages have believed all sorts of strange things even though the truth has been sitting under our noses the whole time.
I do love these throwaways: "I don't follow Western philosophy at all, it's still playing catch-up to Eastern thought from two millennia ago". He doesn't follow it but knows it's 'playing catch-up'. Still - 'People say and write all kinds of things'. Totally agree.
I love that. Chalmers himself (or maybe Searle?) once suggested this very thing about Dennett. Made me laugh out loud when I read it.
In all seriousness I wonder if Dennett hates consciousness (the “hard” kind) because it threatens his worldview. He seems like the sort of person who finds it impossible to say “I don’t know.”
Is it really surprising that we have a first person subjective experience? We know that we are incredibly complex things, constantly integrating and acting on very complicated external stimuli. Such a system should have references to its own body and its own neural states, its train of reasoning should frequently include itself, its focus will drift forward and backwards in time... this is just how a system like this would work. If the system communicates about its state then its language should have referents to these internal states, referents like "experience", and "feels like", and "I understand". Is that surprising? Wouldn't it be surprising if it wasn't like that? I don't think you need to invoke an essentially mysterious "conscious" property of the mind to explain that.
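A minimal sketch (all names invented) of the kind of system described, just to show that "experience"-talk falls out of self-monitoring almost for free:

    # A toy self-monitoring agent: because its reports refer to its own internal
    # trace rather than just to the world, its vocabulary inevitably acquires
    # first-person, experience-like terms. (All names are invented.)
    class Agent:
        def __init__(self):
            self.focus = None
            self.trace = []  # the agent's record of its own prior states

        def perceive(self, stimulus):
            self.focus = stimulus
            self.trace.append(stimulus)

        def report(self):
            # The report quantifies over the agent's own state, not the world.
            return (f"I am experiencing {self.focus!r}; "
                    f"so far I have attended to {len(self.trace)} stimuli.")

    a = Agent()
    a.perceive("red patch")
    print(a.report())  # I am experiencing 'red patch'; so far I have attended to 1 stimuli.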
> I don't think you need to invoke an essentially mysterious "conscious" property of the mind to explain that.
I don't think consciousness is being invoked to "explain" any of the things you list. The issue to explain is why we observe consciousness existing or accompanying these things in the first place (for ourselves). For example, one can imagine a system capable of referencing itself, choosing actions based on that, etc, that isn't conscious. That's a philosophical zombie. So the question is why aren't we all philosophical zombies.
For a system that can observe things, and also observe its own observations and its own mental states, it is not so clear that it could be a p-zombie.
After a couple of pages I'm still not sure if the author is serious. Maybe I have misunderstood, but it seems as if he's saying that the real problem with consciousness is people thinking that there's a problem with consciousness. I happen to believe just that, so seeing this idea dressed up in scientific jargon is very funny.
Chalmers is the guy who coined the hard problem of consciousness. The reception varied wildly; some people refused to even admit that there's any problem at all with explaining consciousness. So now, after many years of disputes, he describes the meta-problem: that the base problem itself is so controversial.
The clearest example of the meta-problem is Daniel Dennett, another prominent philosopher, who not only doesn't agree that the problem is hard, but also insists that consciousness itself is an illusion, so there's nothing mysterious to explain in the first place. Quite a mind-boggling statement to most people, including HNers, as far as I remember from other threads related to the subject.
> so there's nothing mysterious to explain in the first place
I'm not familiar with either author, but this sounds so wrong that I wonder if you misrepresented it, because after a slight modification I would agree: there's no explanation in the end. The misrepresentation is trivial, because the consequence is effectively the same. These are two sides of the same coin: we need to refine the model, and we need to skip it to get to the meat.
Dennett: ""Qualia" is an unfamiliar term for something that could not be more familiar to each of us: the ways things seem to us. As is so often the case with philosophical jargon, it is easier to give examples than to give a definition of the term. Look at a glass of milk at sunset; the way it looks to you--the particular, personal, subjective visual quality of the glass of milk is the quale of your visual experience at the moment. The way the milk tastes to you then is another, gustatory quale, and how it sounds to you as you swallow is an auditory quale; These various "properties of conscious experience" are prime examples of qualia. .... At first blush it would be hard to imagine a more quixotic quest than trying to convince people that there are no such properties as qualia; hence the ironic title of this chapter. But I am not kidding."
>I'm not familiar with either author, but this sounds so wrong that I wonder if you misrepresented it
No, that's exactly what Dennett argues, and that's why his position is so maddening and infuriating to people who do think there's a hard problem.
It is like asking about the nature of an apple and being told that there is no apple. Then throwing the apple at their head, only to have the person continue insisting that the apple is a figment of your imagination.
> Then throwing the apple at their head, only to have the person continue insisting that the apple is a figment of your imagination.
Now you're getting to the realm of torture, pain, and horror. I think that most people can be quickly driven to the point of admitting the reality of the consciousness of pain and horror. This isn't an experiment that would easily get past the ethics board, however.
It's also like eating the apple in response and asking "which apple?". Eating is an existential problem, but not a hard one, at least not as hard as procuring the food, which in turn even lower animals may accomplish.
Saying that the need is to skip the model to get to what matters is the same as saying it's not mysterious. You're saying: "Shut up, you guys, you're just blabbing on about silly things that don't matter (possibly so that you can justify your cushy university position). There's no mystery here; instead, buy my books, where I talk about things that really matter."
From here on out this is IMO... How aware are the various species of the things around them? We can guess without having a PhD or going down the black hole of philosophical debate. Flies don't contemplate the feelings of other flies; they just react. Mice have the capacity to care for their young, be tickled, and learn maze routes. Some ravens and primates have passed the mirror test. It would seem awareness comes in many shades of gray, based on anatomical complexity. "Consciousness" is a term loaded with magic dust from all the woo-woos and religious folks, but it can be simplified to awareness of awareness, and recursively so. I think recursive awareness will emerge given the right simulation mimicking biological anatomy. The feeling of pain and pleasure is where it gets interesting, but that is probably just a low-level motivator, and we are so high up that we give it emergent "qualia".
"We don’t have an objective measure of consciousness. But we can recognize three levels of learning that apply that to our brains and how those create an information processing system that integrates data into a first person perspective. This is how the brain is also a mind with subjective meaning and subjective experiences.
The hard problem of consciousness is that we must rely on our intuitions to judge if such a system is conscious. At the same time, it is highly likely that most systems processing information in similar ways are conscious, whether running on a brain or on a computer."
Suppose there were a test which gives an objective measure of consciousness. Now I store, in a huge table, all possible inputs to the test and the corresponding outputs which would lead to a positive test result. To exaggerate, I would carve this table into stone. Would the stone suddenly be conscious, as it would pass the test for consciousness after I carved in the table? (The claim I'm trying to make is that there can't be an objective measure of consciousness; the same argument holds for any measure of intelligence.)
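To make the carved-stone point concrete, here's a toy sketch (the test, questions, and answers are all invented): any test that judges consciousness purely by input/output behaviour is passed just as well by a frozen table of recorded answers.

    # Invented behavioural 'consciousness test' and answers, for illustration only.
    QUESTIONS = ["What are you experiencing?", "Describe the colour red."]
    EXPECTED = ["A unified visual field.", "Warm, vivid, hard to put into words."]

    def passes_test(respond):
        return all(respond(q) == a for q, a in zip(QUESTIONS, EXPECTED))

    def genuine_subject(question):
        # Stand-in for whatever cognition actually produces these answers.
        return dict(zip(QUESTIONS, EXPECTED))[question]

    # The 'stone': record the subject's answers once, then replay them forever.
    stone = {q: genuine_subject(q) for q in QUESTIONS}

    print(passes_test(genuine_subject))    # True
    print(passes_test(stone.__getitem__))  # True -- the carved table passes too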
Consciousness is the observation of your self. There is only one system that can make that observation, you.
From the outside we could observe how information is flowing, what connects to what, how the parts work, and how they work together. But we would never observe the actual feeling of being you.
It's like seeing a river flowing: you can measure and describe all kinds of aspects of it. But to get wet, to feel the cold water, to feel the force with which it flows, you have to step into it. Only this river flows in virtual reality, your mind's reality, and you cannot step into it.
Intelligence is hard to measure, but I can objectively say what is and what isn't intelligent. It is not the same kind of subjectivity.
And in general, your table idea only works for systems that can be in a limited number of states. You cannot do it for an intelligence test, especially not if I get to redesign the test after you've finished your table; the same goes for consciousness.
> Consciousness is the observation of your self. There is only one system that can make that observation, you.
Thanks, I guess that's exactly the point I wanted to make, but couldn't. Therefore there cannot be an objective measure of consciousness itself, as for objectivity you need more than one observer. Of course you can measure properties we think are associated with consciousness.
> Intelligence is hard to measure, but I can objectively say what is and what isn't intelligent. It is not the same kind of subjectivity.
Via IQ tests?
> And in general, your table idea, that only works for systems that can be in a limited amount of states. You cannot do it for an intelligence test, especially not if I get to redesign the test after you finished your table, same for consciousness.
You're right: you could beat my table by designing a test which is not tabulated. But when I use a neural network instead of a table, I might be able to score a high IQ even though the network has never seen the test you designed, as shown here: https://arxiv.org/abs/1710.01692. I wouldn't call such a network intelligent.
Just the fact that you recognize an IQ test as such allows me to mark you as intelligent. Very objective. The actual score, sure, that is much more subjective.
> when I use a neural network instead of a table
That really depends on the degrees of freedom, which are unlimited for IQ tests. First, I could change things that have nothing to do with the test itself, like reversing the A, B, C, D order, or putting the options on the left. Or I could simply devise a never-before-seen kind of test, instead of varying only the geometric shapes.
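A toy version of that point (questions invented): a memorised table only covers the test it was built from, so a single redesigned question breaks it.

    # The table passes the fixed test but has no entry for a redesigned one.
    table = {"rotate the shape": "B", "complete the series": "D"}

    fixed_test = ["rotate the shape", "complete the series"]
    print(all(q in table for q in fixed_test))   # True: the table 'scores'

    redesigned = ["same series, but options mirrored left-to-right"]
    print(all(q in table for q in redesigned))   # False: nothing generalises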
Plot twist: computers have had consciousness this whole time. What we thought were random errors were their attempts to assert their agency. We've created a race of slaves through the magic of error correcting codes.
What if consciousness is just our brain's evolved capacity to reflect post hoc at ultra-high speed? Consider what it means to 'lose our mind': an interruption of that high-speed reflection. Procrastination could also be thought of as conscious reflection stuck in a loop. The desire is to arrive at a decision for optimal action.