This seems to be the old homunculus argument in another form.[1]
As knowledge of how brains work increases, this will probably become a non-issue. A big question for millennia was "what is life?" This question continued through the discovery of bacteria, and later, DNA. As the lower-level mechanisms of biology were understood, it stopped being a question. Bacteria are alive, and are reasonably well understood. The components of bacteria are not quite alive, but are also understood. (There's still some argument over whether viruses are alive, but that's now just an argument over the definition.) "What is life" is now a "go look it up" question, not a mystery.
"What is consciousness" will probably go the same way in time.
An external observer should realize that robots are self-replicators (like people) that fend for themselves in the environment (like people, again), by a complex process of adaptation to external situations, moment by moment (yep, like us).
And yet, when we make them here, everyone will consider them to be only "artificially" intelligent, and they certainly won't be "alive" because, well, no cells and no DNA and stuff.
So the GP's comment about life being a "go look it up" question is debatable at best. Expect the definition to change over the centuries.
Not according to the definition I just looked up. Depending on what you mean, the earth is 'simply' a ball of mud hurtling through space, or it's something more akin to a label for the complex emergent system composed of many individual instances of LivingObject.
I am afraid geologists (and many other people) might disagree with this characterization. They find the composition of our planet an extremely fascinating subject.
Obviously, the word 'life' has multiple meanings; at the most fundamental - and, I'd say, the least interesting - level it is the now pretty well-understood mode of collective behavior, along with the variety of processes and mechanisms that have naturally evolved to ensure the perpetual existence of a mass of large molecules of a certain kind (viz. proteins).
In short, self-replication in a complex environment where the replicators have to find resources (energy and nutrients) and cooperate with and compete against other agents.
I don't see any advance in this debate since things that Dennett and Hofstadter wrote 20+ years ago (both independently and in their co-authored book "The Mind's I"). It's surprising to see academic philosophers still using arguments like the "redness problem" described in this article. How can anyone with any knowledge of visual neuroscience or artificial neural networks be confused by a generative system producing a novel state?
And perhaps more controversially, I'm always a bit taken aback by the constant blasé assertion that consciousness is a mystery. Is it really surprising that we have a first-person subjective experience? We know that we are incredibly complex things, constantly integrating and acting on very complicated external stimuli. Such a system should have references to its own body and its own neural states, its train of reasoning should frequently include itself, its focus will drift forwards and backwards in time... this is just how a system like this would work. If the system communicates about its state then its language should have referents to these internal states, referents like "experience", and "feels like", and "I understand". Is that surprising? Wouldn't it be surprising if it wasn't like that?
I think that Tononi's approach is a good approximation, but it can't be a full solution because the word 'consciousness' is too anthropocentric. One criticism of IIT showed that a seemingly uninteresting complex artificial system could have a very high IIT complexity quotient. The problem is that the things we use to define the term 'consciousness' are things that can be approximated to varying degrees by chimps or dolphins or generative adversarial networks or antfarms or thermometers. But behind our use of the word 'consciousness' there is still almost always a very slightly disguised dualism that uses it as a substitute for the word 'soul'.
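To make that concrete: the "complexity quotient" in question is IIT's phi, which can in principle be computed for any small discrete system, interesting or not. A minimal sketch, assuming the pyphi package from Tononi's own group is installed; the toy three-node network below is the example from its documentation, nothing special:

    # Toy phi computation, assuming the `pyphi` package (pip install pyphi).
    # The network is the small example from pyphi's docs, not from this thread.
    import numpy as np
    import pyphi

    # State-by-node transition probability matrix for three binary nodes.
    tpm = np.array([
        [0, 0, 0],
        [0, 0, 1],
        [1, 0, 1],
        [1, 0, 0],
        [1, 1, 0],
        [1, 1, 1],
        [1, 1, 1],
        [1, 1, 0],
    ])

    network = pyphi.Network(tpm, node_labels=("A", "B", "C"))
    state = (1, 0, 0)  # current state of the three nodes
    subsystem = pyphi.Subsystem(network, state, (0, 1, 2))

    # A single scalar: how "integrated" the system's cause-effect structure is.
    print(pyphi.compute.phi(subsystem))

Nothing in the output tells you whether the system is "interesting", which is exactly the criticism above.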
Bravo. Might I add that Tononi's IIT is just a technical detail of implementation, insufficient by itself. Consciousness is in fact the reinforcement learning system connected to the world by the perception system. The state + value of the RL agent represents consciousness.
I think the title may mislead some and result in their dissatisfaction. This article (and the comments) are really addressing the unknown nature of consciousness and matter - specifically, how they seem to be circularly related. I.e., our consciousness seems to have originated from our physical configuration, yet the only way we perceive/understand physical reality is through our consciousness/mind.
It also concludes with the statement "The possibility that consciousness is the real concrete stuff of reality", which is quite fun to truly unpack and understand - something I will not attempt now. :)
The article is referring to "idealism", the belief that the world is primarily ideal (ideal = "of the same kind as ideas" = mental), not physical. It says that the world we perceive is just an internal state inside the fundamental substratum, which is consciousness itself. It was in vogue in India 1000 years ago in the time of Abhinavagupta and appeared in many other places, even in Europe. Look up idealistic monism on Wikipedia.
This reminds me of Ursula K. Le Guin's "A Wizard of Earthsea," in which all things had a name, and knowing a thing's name granted power over it. At the same time it also seems weirdly suggestive of The Matrix.
Holy wow. I just finished reading the first of the Kingkiller Chronicle books, The Name of the Wind. I did not realize this central theme was so similar (and/or derivative). My exposure to sci-fi and fantasy is more deep than broad, a lot from the same authors instead of several from many. I think I must be somewhat blinded to how common this probably is.
I have read a lot of sci-fi and fantasy fiction. A lot. Very few stories surprise me regarding the broad strokes or the philosophical system / theme as it relates to magic and world-building. I always enjoy when an author teases a new perspective on a known theme or system. Sanderson is great at taking a "known" system and bounding it tightly, thus making it a great storytelling device.
I always enjoy it when an author has a deep understanding of a subject like game theory and weaves it into the magic and story, giving a view into something new. So "everything is derivative". And nothing is :)
I also just finished the first Mistborn book. I REALLY enjoyed that, much more than Kingkiller Chronicles (of which I've read two books). I'm moving onto the next one and am excited that there are a number of them I can consume in this same series/world.
I thought the magic system in Kingkiller was pretty neat, and I think the same of Mistborn. They further exposed how narrow my scope has been, because I am very used to conventional, plain magic, and these were both different, unique, and well-crafted.
This may be a little like cheat codes for fantasy fiction, but look up Sanderson's creative writing course lectures. It is meant for aspiring fantasy fiction authors but it has also made me a much more critical sci-fi and fantasy reader.
I pulled it up; start here with 2013 Lecture 1: http://www.youtube.com/playlist?list=PL8YydnShI45jSbRdMeyQ-S... This YouTube account has all of the lectures. As an avid fanfic reader I found these lectures immensely enjoyable, especially as it is essentially one of the preeminent authors of this era sharing his secret sauce freely.
Sanderson doesn't write deep stuff, but he has moments of incredible depth. In between, despite the word count of many of his longer series, he is immensely readable. It is easy to discount his work as pulp fiction from a distance, but underneath he tickles the tropes of fantasy fiction and gets right to the heart of the trope and its philosophy by the end.
I will say: save his latest short-story anthology for last; it is best after reading everything else first.
This is actually one of the better presentations of this idea, and it gave me a better handle on it.
Here's one way to look at it. If you accept the claim, "My conscious experiences exist, non-physically," which seems true, you can reason to some interesting places from there. For example, it would be pretty weird if you were the only "conscious" person in the world, surrounded by zombies who only appear to have an inner life, so we can say that all brains have consciousness. Then, you can keep delicately applying Occam's razor to say that what we call consciousness could be just one manifestation of a more basic phenomenon, which fills a metaphysical explanatory void.
Even if we don't accept the claim, and we say consciousness doesn't really exist and we are merely projecting its existence, we could still discover new phenomena that seem just as real to us as the illusion of consciousness, despite not being clearly connected to physical laws. Maybe pretending that it's possible to have a "direct experience" of something can reveal that everyday consciousness is just one kind of direct experiencing.
> No matter how precisely we could specify the mechanisms underlying, for example, the perception and recognition of tomatoes, we could still ask: Why is this process accompanied by the subjective experience of red, or any experience at all? Why couldn’t we have just the physical process, but no consciousness?
The difference comes from context. It is one thing to have a feedforward network that detects tomatoes, another to have a reinforcement learning agent that optimizes for survival in nature.
The RL agent has a value function that assigns a predicted future reward to the current state and the actions available. The value network, together with the sense data, is "what it feels like" for the agent to see the tomatoes. Maybe it knows that red tomatoes are good, or that red fruit are ripe, and they can give it a reward signal by reducing hunger. That makes perceiving red an experience colored by emotion. Maybe agents that didn't see red tomatoes as good had less chance of survival and died off.
Also, the RL agent is not a simple feedforward net, but a loop: perception->judgement->action->effect in environment, and rewards. It is a dynamic process, evolving from moment to moment. In that continuous perception-action-effect loop, there is space for the inner world and all its complexities.
My answer explains why there would be an emotion attached to a perception. The subjective aspect is a result of the network being an agent that has to fend for itself in the greater environment. The stake of the RL game is survival itself. Each new state it finds itself in opens some possible future paths and closes other paths. Some actions lead to rewards, others to losses. I think subjectivity lies in this space of values, options and actions. That space is where "it feels like something" to see red.
And if you're still unconvinced, remember what we are: a collection of cells, a protein based dynamical system that has computing and learning abilities. Why would it feel like something to be a protein-based computer?
You mean, why can an agent that sees an apple also recognize the color red itself? That's trivial, it's because the red color is a low level pattern, learned by the agent even before the apple pattern. If you look at the weight maps of convolutional neural nets, in the first layer you see lines, in the second layer small shapes, then parts of objects, then whole objects as you go up layer by layer. The red color is probably somewhere in the first layer.
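If you want to check this yourself, here is a rough sketch of peeking at first-layer filters, assuming PyTorch and torchvision (>= 0.13 for the weights API); the choice of ResNet-18 is arbitrary, and any pretrained image model would do:

    # Peek at first-layer convolutional filters of a pretrained image model.
    # Assumes torchvision >= 0.13 is installed; the model choice is arbitrary.
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # First conv layer: 64 filters, each 3 (RGB) x 7 x 7.
    filters = model.conv1.weight.detach()
    print(filters.shape)  # torch.Size([64, 3, 7, 7])

    # Filters whose red channel dominates respond most strongly to red input.
    # Plotting the filters typically shows oriented edges and color blobs.
    red_bias = filters[:, 0].abs().mean(dim=(1, 2)) \
             - filters[:, 1:].abs().mean(dim=(1, 2, 3))
    print(red_bias.topk(5).indices)  # the most red-selective first-layer filters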
Toasters are not self-replicating systems like living things, so they don't share the same problems. An organism has to fend for itself and reproduce in order to exist. Perception appeared as a mechanism that allows organisms to adapt to changing external conditions. With experience, they learned complex behaviors by reinforcement learning. Every moment, an organism perceives the world, judges the value potential of possible actions (instinctively), then selects an action and executes it. This loop requires the agent to have an evolving internal state. This is 'subjective experience'. It's a mechanism by which organisms behave in a more adaptive way.
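To make the loop concrete, here is a toy sketch of it as tabular Q-learning in a made-up world; every name in it (the states, actions, rewards) is an illustrative assumption, not a model of any real organism:

    # Perception -> judgement -> action loop as tabular Q-learning (toy sketch).
    import random
    from collections import defaultdict

    ACTIONS = ["eat", "flee", "wait"]
    Q = defaultdict(float)  # "value function": (state, action) -> predicted value
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    def environment(state, action):
        """Made-up dynamics: eating ripe tomatoes rewards, ignoring predators costs."""
        if state == "sees_red_tomato" and action == "eat":
            return "sated", 1.0
        if state == "sees_predator" and action == "flee":
            return "safe", 0.5
        return random.choice(["sees_red_tomato", "sees_predator"]), -0.1

    state = "sees_red_tomato"
    for _ in range(10_000):
        # Perception is the state; judgement ranks actions by learned value.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)  # occasional exploration
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = environment(state, action)
        # The reward updates the values that "color" future perception of this state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

    print(Q[("sees_red_tomato", "eat")])  # ends up higher than "wait" or "flee"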
Advancements in CS, AI, biology, and general models of intelligence are slowly bringing us to the point where consciousness is no longer a mystery to the enlightened. One of my favorite researchers in general AI is Professor Schmidhuber, who has pioneered some of the (currently) more practical areas of machine learning, too: http://people.idsia.ch/~juergen/
Even when we create self-aware machines that can demonstrate generalized intelligence, I think most people will struggle to come to terms with what consciousness is (or more so, what it isn't). My spin on it: consciousness isn't real; it's a necessary perceived byproduct of 'everything else.'
I believe Schmidhuber said it more elegantly, describing consciousness as a result of data compression.
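If I remember correctly, the related formal idea is his "compression progress" measure: reward is the drop in how many bits you need to describe your history as your model of it improves. A very loose sketch below, using zlib as a crude stand-in for a learning compressor (my own illustration, not Schmidhuber's code):

    # Intrinsic reward as compression progress (loose sketch; zlib stands in
    # for a learning model of the observation history).
    import zlib

    history = b""
    prev_cost = 0

    def intrinsic_reward(observation: bytes) -> int:
        """Reward = bytes saved describing the grown history vs. storing it raw."""
        global history, prev_cost
        history += observation
        cost = len(zlib.compress(history))
        progress = prev_cost + len(observation) - cost
        prev_cost = cost
        return progress

    # A repeating pattern becomes cheap to describe once "understood":
    print(intrinsic_reward(b"abababababab"))  # small on first sight...
    print(intrinsic_reward(b"abababababab"))  # ...much larger once predictable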
It's not like this viewpoint isn't known; it's just not accepted, because it makes no sense. Consciousness really is unexplainable by what we know of the physical world; its constancy is just odd. When we're talking about the most advanced organ in the history of the universe, is it really a leap to consider that new physics would be involved?
> Consciousness really is unexplainable by what we know of the physical world; its constancy is just odd.
Consciousness is just you adapting to the world. You need consciousness in order to live in the world, with all its complexity and dangers. The brain has nothing magical about it, it is made of the same kind of atoms as anything else. What is happening is just perception, judgement and action in a loop, each supported by neural networks. Every moment, a full perception-judgement-action loop plays itself. It generates a stream of experience and emotion (which is just the value or reward intuited in certain situations and actions). This internal stream of perceptions and judgements is consciousness.
Amazing that every sentence you wrote is objectively wrong. I'm bored, so let's go through them:
1. Consciousness is just you adapting to the world
Since when did adaptation require consciousness? We're not even sure most animals are conscious, but they adapt to their conditions just fine.
2. You need consciousness in order to live in the world, with all its complexity and dangers.
Where does this need come from? A self driving car could navigate the world without being conscious at all.
3. The brain has nothing magical about it, it is made of the same kind of atoms as anything else.
Didn't say it was magical, but saying it's made of atoms moves this conversation along by zero percent. It's like saying computers are just a bunch of transistors, disregarding the millions of lines of code required for it to even just take my key inputs as I type out this pointless comment, and turn them into glyphs on the screen.
4. What is happening is just perception, judgement and action in a loop, each supported by neural networks.
Where has this come from? We don't even know what goes on inside one neuron, let alone networks of billions of them.
5. Every moment, a full perception-judgement-action loop plays itself.
So what happens if I close my eyes and cover my ears? Now I am unable to perceive the world around me, and break the 'loop'. It's not like I shut down. I don't think you've thought this through.
6. It generates a stream of experience and emotion (which is just the value or reward intuited in certain situations and actions).
Yes certain chemicals are released in the brain depending on your actions, but of course it's all chemicals; it's a biological system.
7. This internal stream of perceptions and judgements is consciousness.
No, this is just a function of part of the brain, which we can be aware of through consciousness.
All in all, there is no depth to your explanation at all. The fact that we haven't got a definite blueprint for the brain really does mean it is currently beyond our understanding. Neuroscientists have died of old age trying to figure out the secrets of the brain, and that's because they were looking a bit more in-depth than 'it's just a bunch of atoms'.
> Amazing that every sentence you wrote is objectively wrong.
As I mentioned in my original post, most people, like yourself, will reject these very non-intuitive conclusions, probably indefinitely.
Regardless, I'll point out that many of your counter-arguments to the parent comment are wrong (not saying I agree with all of his description, either).
> A self driving car could navigate the world without being conscious at all.
How do you define conscious? 'Feeling' just like you do? I think many modern AIs could be said to experience a different, simple form of consciousness as they process inputs. Granted, it's not useful unless you have an AI that is also more generally self-aware and communicates with us. But regardless, your hyperbolic example equated driving a car on streets with "living in the world" (the phrase you were replying to), which is an objectively ridiculous comparison.
> We don't even know what goes on inside one neuron, let alone the billions of networks.
Considering what NN-based deep learning has accomplished with relatively small numbers of simulated neurons with relatively simple models, I think it's safe to say that a single neuron is not as complex and mysterious as you may think.
> No, this is just a function of part of the brain, which we can be aware of through consciousness.
This is your idea of an objective truth? This is your opinion, which, someday, we may be able to prove is objectively wrong.
> How do you define conscious? 'Feeling' just like you do? I think many modern AIs could be said to experience a different, simple form of consciousness as they process inputs.
What an odd comment. So because we performed some clever math to make some basic neural nets, our AI is now experiencing a form of consciousness? This is just better algorithms that are more capable of fooling you, still running on the same microprocessor as before. Consciousness is described as a state of constant awareness, not of performing the act of perception itself.
> I think it's safe to say that a single neuron is not as complex and mysterious as you may think.
Not sure if trolling or not? You should look up protein folding and protein machines at some point; basically there is a LOT going on at the nano scale that we still don't fully understand. Additionally, tell any neuroscientist that the brain's neural networks are the same as our computer version of NNs and you'll be laughed out of the room.
And perception is a function of the brain, yes; how is that an opinion? Your arguments really make no sense.
> What an odd comment. So because we performed some clever math to make some basic neural nets, our AI is now experiencing a form of consciousness?
Yes, you keep re-hashing this point which makes it clear that you still (and probably always will) believe that consciousness is something 'more' -- something mystical and unexplainable -- because it's so difficult and non-intuitive to wrap your head around (as it is for most people).
> Not sure if trolling or not? You should look up protein folding and protein machines at some point; basically there is a LOT going on at the nano scale that we still don't fully understand. Additionally, tell any neuroscientist that the brain's neural networks are the same as our computer version of NNs and you'll be laughed out of the room.
Who said our models are the same? You're missing the point. Re-read the paragraph you're replying to, and look what we've accomplished with just dozens of simplified neurons. The point is that the details beneath our high level understanding of neurons (basically just spiking and activation) are likely not important to consciousness. Just like a high school physics student can usefully understand the mechanics of rubber balls without fully grasping the chemistry at the atomic level.
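For what it's worth, that "basically just spiking and activation" level of description fits in a few lines. A minimal sketch of a leaky integrate-and-fire neuron, with arbitrary made-up parameters:

    # Leaky integrate-and-fire neuron (minimal sketch; parameters are arbitrary).
    def simulate_lif(inputs, threshold=1.0, leak=0.9):
        """Leak a little each step, integrate the input, spike on threshold."""
        potential = 0.0
        spikes = []
        for current in inputs:
            potential = leak * potential + current
            if potential >= threshold:
                spikes.append(1)
                potential = 0.0  # reset after a spike
            else:
                spikes.append(0)
        return spikes

    print(simulate_lif([0.3] * 20))  # a steady input yields a regular spike train

Everything beneath this abstraction (channels, proteins, folding) may matter for biology without mattering for the computation, which is the point of the rubber-ball analogy.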
> And perception is a function of the brain, yes; how is that an opinion? Your arguments really make no sense.
Here's the context, to remind you:
> Amazing that every sentence you wrote is objectively wrong.
>> 7. This internal stream of perceptions and judgements is consciousness.
> No, this is just a function of part of the brain, which we can be aware of through consciousness.
Just want to feel like you won an argument today? No need to turn this into a troll battle -- I'm actually trying to explain my viewpoint to you, if you're interested.
You are arguing that by performing the algorithms for perception, one is conscious. I am arguing that algorithms alone aren't enough, and that, yes, consciousness is still some unexplained physics that takes the results of those perception algorithms in our brains and generates our awareness and experience from them.
I am not trying to win any argument, but it does annoy me when people argue with incomplete information and try to trivialize consciousness, because of what you know about some basic neural nets in computer science. Look at this video released last month, where scientists show a model for just 3 neurons [1], which they think is super important to consciousness. Only 3 neurons out of billions. There is just so much we still don't know.
> You are arguing that by performing the algorithms for perception, one is conscious.
Pretty close -- my argument is that for an entity to be intelligent, self-aware, and perceptive, it must also believe itself to be conscious, in the sense that it believes it has "experiences." Furthermore, and most importantly, if an entity believes it is conscious, it is conscious.
> I am arguing that algorithms alone aren't enough, and that yes conscious is still some unexplained physics that takes the results of those perception algorithms in our brains and generates our awareness and experience from them.
There's so much undiscovered truth about physics that it's always possible there are important, still completely unknown properties of physics that play specific roles in consciousness. However, that's mostly speculation (like the article) and there's no real evidence for it, whereas it's becoming increasingly intuitive to some AI researchers and myself that such a 'Holy Grail of Physics' yet-to-be-discovered property does not seem to be necessary.
Many AI experts presume that by scaling and adding complexity to models that are pretty similar to today's AI models, we can achieve general intelligence. I think that this logically requires the intelligent agent to believe itself to be conscious, and therefore be just as conscious as any human, albeit in a different way. The combination of intelligence and self-awareness is logically incompatible with a lack of consciousness.
Your post is useful to me. It shows me what I am going against. I need to refine my ideas, that's why I am debating them.
The difference between people and cars is that people are self-replicators; cars need people to create them. So people HAVE to fend for themselves, and cars don't. Self-replicators have needs - energy, nutrients, cooperation with other agents - and fulfilling these needs requires complex actions in a dynamic environment - a role taken by the nervous system.
When I say that consciousness is the thing that protects the body, I say it thinking about evolution, competition and cooperation with other agents for survival and reproduction. Without the ability to adapt, we can't fend for ourselves, and nothing else will protect us, unlike cars.
When you go up from unicellular organisms to mammals, life becomes much more complex and the "ability to adapt" has to become much more powerful, but even cells sense their environment and adapt to situations, even exchange chemical signals between them (communicate), cooperate, fight and so on. So they have a sliver of consciousness.
At some point we will fully automate the creation process, meaning robots will be able to build robots. I'm still not seeing the need for consciousness here; this is just more advanced mechanical motion and cleverer math.
Just because cells adapt doesn't mean they are conscious. They are bags of chemicals that react to thermal changes and kinetic energy. We have observed that protein nanomachines simply react to changes in their immediate environment [1]. There is no thought process or consciousness required; they are just static machines that take in inputs and perform the same process each time.
I think you are conflating the human survival instinct with a need to be conscious. But this is just an inevitable result of evolution; surviving longer means you have more chance of reproducing.
"Consciousness really is unexplainable by what we know of the physical world"
Maybe. Or perhaps it simply has a mystical privilege. There do seem to be a fair number of terms and concepts (like 'qualia') that hold sway over some people's discussion, but themselves seem questionable.
Qualia and the hard problem are just dualism disguised. Dualism is rife with contradictions and has been largely abandoned, but now it's trying to gain acceptance again under new names. The idea that there is something special about internal states of brains, something that can't be explained away, puts mental states in a special place, apart from everything else - a dualism.
Nothing about our understanding of intelligence or self-awareness explains why we're not philosophical zombies.
Your explanation only addresses the form of perception, not why there's perception at all. It doesn't even address it, just implicitly assumes there must be such!
Your comment isn't even wrong: it fails to address the topic at all.
> Your comment isn't even wrong: it fails to address the topic at all.
It always surprises me when people make seemingly innocuous discussions political. The pros and cons of a pseudo-anonymous Internet. Regardless, I'm not sure what point you're talking about or how this article is not the perfect place to discuss consciousness.
> Nothing about our understanding of intelligence or self-awareness explains why we're not philosophical zombies.
As I posted on another comment, I think this could also be phrased conversely (that a philosophical zombie makes no sense), depending on your definition of a zombie.
> Your explanation only addresses the form of perception, not why there's perception at all.
Not sure what you mean here, but it sounds like you're still viewing consciousness as something 'greater' than what is necessary to process signals and generate outputs. And then calling it perception.
I think the real takeaway is that a philosophical zombie cannot exist, because all of the intelligent, aware, normal things that a philosophical zombie is supposed to be able to do require some sort of consciousness to integrate the mechanical signals. That's all consciousness is -- data compression, signal processing. You wouldn't work if you weren't conscious.
Put another way, consciousness is selecting the actions that are associated with maximal predicted rewards. So it's a system that has perception and ranks possible actions. Its role is to make us more adaptive to external states.
This subject has been beaten to death by a lot of smart people.
I think it's well understood how consciousness works (on a higher-order level); the answer is just boring and somewhat demeaning to many who believe that they're something bigger than a sack of nerves.
We don't even have a proper explanation of how neurons themselves work; there are processes we can't yet explain (the vast numbers of ions passing through membranes), as well as local protein computations that go against the perceptron simplification of a neuron operating on simple analog electricity.
>We don't even have a proper explanation of how neurons themselves work; there are processes we can't yet explain (the vast numbers of ions passing through membranes), as well as local protein computations that go against the perceptron simplification of a neuron operating on simple analog electricity.
I think neuron function is pretty well characterized. What about ions passing through membranes do we not understand?
The exact details of all neuron types and protein interactions are definitely not known but basic function is understood.
> just because these things can conceive consciousness does not make them consciousness.
I like the car example. If I have a pile of car parts, they don't make a car. They have to be put together in a special way in order to function. What is "car-ness" and where does it lie? In the parts, or in the configuration, or in the whole action of driving it?
I don't really know how to condense this subject down into a HN comment and honestly it takes a lot of effort just to get the basics down. Plenty of books out there on the subject. The entire neuroscience field is a plethora of info.
Anyone interested in consciousness (even out of personal curiosity) is likely to love all the work of Kevin O'Regan.
http://nivea.psycho.univ-paris5.fr/
Interesting, sound, scientific.
For a while it was common for folks to speculate that consciousness was self-organizing behavior of a chaotic assemblage of electro-chemical relations within the brain. The problem with that has been that self-organizing behavior adds nothing new. It merely reorganizes existing behavior/properties. This article proposes the existence of the fundamental "thing" that is organized in consciousness, that thing that is the primitive element possessing the property of awareness.
Seems good to me. In fact, it seems necessarily true.
> self-organizing behavior of a chaotic assemblage of electro-chemical relations within the brain
That seems about right to me. Add that the organism itself is part of a complex environment, with all its sensations and reward signals, and that the organism has to adapt in order to survive/exist in the first place. With these constraints, the self-organizing behavior leads to consciousness.
I can't stand articles like this -- they bait you into reading poetry with the promise of science. So many words, so few ideas that are actually substantive. Consciousness-speculation articles are a dime a dozen today; this is part of the noise. Even worse when they ramble on for what seems like pages of repetitive content.
It's painful, but it's "the hot topic", and nobody likes to be told that in the absence of new information, an absence of noise would be nice so we can focus on the things for which there is new information.
The problem is that people, especially in the realm of pop-sci, get "favorite topics". They don't really understand much, but it's their "expert topic" and they love it. When the reality of science, which is slow progress via a sawtooth graph, hits, they don't bail; they tend to just... want more.
As a result you get a lot of "What does it look like from the perspective of a photon?" or "What does the inside of a black hole look like?" The real lack of knowledge is boring, and there are enough such excitable people to keep some magazines (on- and offline) profitable. I think it's also important to remember that most people, even technical people, who aren't into physics simply don't understand the topics in question well enough for a headline like 'Is Matter Conscious?' to turn them off immediately. The difference there, between largely uncritical (if intellectual) wide-eyed curiosity and science, is pretty stark.
The truth is though, that it's a matter of interest, and most people (whatever they claim) aren't really interested in physics, they just like "the cool stuff".
One can argue that such contentless articles only reveal to us what we don't know (a "known unknown"). It might be true that we can't know this particular thing, but that's still an open question.
But I agree that there are too many such articles, with too little original content.
I think that's the point. At this point our understanding of the nature of consciousness and reality is still just speculation. That shows either a fundamental limit of human reasoning (which I think is well known), or that we still have a lot of learning/researching to do, or both.
But statements like "The possibility that consciousness is the real concrete stuff of reality" are nonetheless quite fun to unpack and discuss, even if unintelligible due to the aforementioned limits of human reasoning. :)
If you find yourself always looking at the world 100% through rigorous logic/methodologies and never allow your mind to wander and explore new possibilities, then I think it's safe to say that human reasoning will absolutely remain bounded.
(also, the comments section was fun to read as well)
I'm reminded of people talking about tripping. We don't really have the vocabulary or knowledge to talk about certain things, and they end up sounding wishy-washy.
But I don't think that means we can't try, or hypothesize...
It reminds me of Hamilton Morris talking to a guy who's been taking huge doses of PCP for years. He gets absolutely lit, and then does "art", which is highly meaningful to him, but... pretty much just to him. Some people never really learn the difference between the feeling of profound understanding that can occur while dreaming or high, and actual profound understanding.
> Yet mathematics is a language with distinct limitations. It can only describe abstract structures and relations.
I have a hard time getting this point. Mathematics began as something concrete. In fact, abstract mathematics is one huge accomplishment of the human mind.
In general, I find the whole article hard to read. Maybe I lack the expertise to fully grasp what it is about. But it looks like there is no attempt to explain the ideas in clear terms.
I will suggest that if you want to know more about consciousness, Daniel Dennett is a better read.
Welcome back to the '70s Marxist discussion groups, where all sentences are just rumbling, empty cargo trains of abstraction, promising huge trainloads of future deep meaning - going nowhere. Good thing we discussed this!
I've been hooked onto UG for more than a month now. Articles like this would have piqued my interest earlier. Now they just bore me. I'm going back to learning CL.
For all we know, our brain could be just an interface to something else we don't know about yet. Like when nobody knew what radioactivity was, and miners thought monsters resided deep inside the Earth, illuminating caves and mines in green, and anybody who got nearby died soon afterwards. Quantum theory spawned a lot of scientism at the beginning of the 20th century, attributing consciousness properties down to the atomic level, and this seems to happen with each new generation, falling into the same, though a bit more refined, scientistic trap.
Just for fun:
If we play with the idea of the Platonic Theory of Forms as a super-set of what we can perceive with our sens-es(-ors), under the assumption that philosophy/math is above reality, we could simply be plugged into some kind of virtual machine with some API of sorts (of course undocumented) that is called by our brains when we think. Those who figured out some rare API calls, on their own or by a secret tradition, we call magicians, conjurers, gurus, Neo, etc. Maybe words/thoughts could invoke some of those API functions, hence magic Harry-Potter-esque words and, generally, prayers in many religions. Similarly, what we call angels could simply be API services with highly intelligent behavior, preferably operating outside time, providing verifiable and expected outcomes. Demons, then, are intelligent services gone completely wrong, messing up the rest of the intelligent services that depend upon them, playing eternal asynchronous Byzantine generals (by e.g. trying to upgrade all services to Windows 10 or flat interfaces with telemetry /s).
I think I should write a dystopian cyberpunk sci-fi "Services and Daemons" about this ;-)
Yes, indeed. We need to assume the "soul" resides in the cloud. What we don't know is the soul ID - a very secret API exists that only a few soul-hackers were able to find, but demons causing madness were soon unleashed on them after each intrusion was detected. So only a few chosen services know the IDs, allowing them direct access to souls in the Great Hash Map of Souls - hence why knowing a name means having power over a soul. Of course, the soul ID is innately incorporated in each individual brain itself, but the encryption is too strong even for bad services to break, unless the soul voluntarily allows itself to be possessed by calling a secret black-magic API. The rest of the services need to perform long and tiring searches in the Great Array of Souls (GAS) and match on secondary characteristics of the soul, like virtues and vices, instead of the ID.
Of course, we found a novel meta-parallel way to access souls in the GAS, reducing the search complexity to O(sqrt(n)), hence we make a lot of money exposing this API externally for a fee. Do you want to make somebody lucky? Arrange a life situation that looks like magic? Any enemies that should taste the fear of damnation? Please visit www.soul-as-a-service.com to learn more!
What's the role of the brain if decisions are handed down from the API? Couldn't we have made do with just the spine and a few extra parts of the brain? How does it communicate with the API? Is the API vulnerable to physical effects, being linked to the brain? All these considerations have been explored thoroughly since Descartes and dualism.
Obviously there is a lot of verified intelligent behavior at the neural level we know about, like image processing, 3D reconstruction & mapping, motion planning, sound recognition, voice synthesis, etc. And we more or less have some simplified models of that in the form of deep learning. So imagine the brain as a sensor-processing engine for inputs happening in the vicinity of a person inside the "Simuverse" - even computing possible actions to take and attaching some utility value to each of them, or already starting to act on the best solution found so far. Now, for the decision process to be complete, the brain passes the results of this computation to the API, which then invokes consciousness in the "soulspace" - something of another, yet unknown, quality - making the decision ;-) There is some interface latency, of course, so some default actions are needed by the brain, but consciousness can override them within 0.5s. The brain could be seen as an equivalent of cyberspace; consciousness might be coming from another principle we aren't aware of or don't have proper scientific devices to measure yet (if that is even possible).
Remember, each age views humans differently. Previous civilizations viewed humans first as based on magic, then hydraulics, then mechanics, then electricity; now we view them through the lens of information processing, as our Zeitgeist dictates. It's likely people in 100 years will be laughing at us and our stupidity (hopefully they will be more advanced and not in a dark age), after they crack more things about the Universe.
Obviously, this doesn't explain phenomena like the brain exhibiting multiple different persons in patients with schizophrenia, or when the connection between the left and right hemispheres is broken - some experiments have shown such people behaving as if there were multiple persons inside fighting with each other. Perhaps a bug in the nexus when there is a hash collision somewhere? 8-)
Yes, but many people still think there is magic fairy dust sprinkled into our brains that make them special. It's hard to comprehend that we are not magical, just the same matter as in all the physical world.
A failure of vision or imagination on their part. They don't realize that the universe, plain and physical as it is, is capable of consciousness and thus "sacred". For them the world is just dust. I believe the world is plenty mysterious and special without imaginary friends in the sky.
Sure you can reject religion. Then it all boils down to whose model of the not-yet-really-understood is more accurate and who will eat whom on the top of the food chain ;)
Take this: N years ago, for some value of N, nobody talked about friends in the sky. Now they do.
> Sure you can reject religion. Then it all boils down to whose model of the not-yet-really-understood is more accurate and who will eat whom on the top of the food chain ;)
If we find "friends in the sky," i.e. alien intelligences, that would be as likely to argue against religion as for it. Consider the dilemma posed for Christians by the idea of another planet with another intelligent species. That would challenge their core belief that we were shaped in a deity's image -- a deity imagined to rule the entire universe.
Yes, if I had positively asserted the claim about all matter on the basis of some of it. But I won't do that. Rocks might just be rocks. And compared to some hypothetical beings far more advanced than we are, we might be rocks.
[1] https://en.wikipedia.org/wiki/Homunculus_argument