Electrons May Well Be Conscious (nautil.us)
232 points by lxm on May 17, 2020 | 383 comments



Honestly, as weird as it might sound, to me it's even weirder to claim that consciousness is just a manifestation of effects we are already familiar with.

Saying things like "Consciousness is emergent" or "Consciousness is just a side effect of information processing" seems to miss the point. I'm definitely comfortable saying that consciousness is correlated with complexity and information processing, but claiming that it is fully explainable in some mechanistic way sounds suspect to me.

It would sort of be like saying that since large electric charges have only been observed in nuclei with proportionally large mass, charge must somehow be just a manifestation of mass. When we go further it turns out that charge is fundamentally distinct from mass, in fact so distinct that we have to add an entirely new attribute to fundamental particles. The same story goes for spin. Once you have spin and electric charge you get magnetic moments. Ultimately, the reason a magnet is magnetic is that all of these subatomic particles conspire in just the right way to have a macroscopic effect. Of course, not all materials are magnetic even though they are all made from protons and electrons, but some are.

To me, trying to say that consciousness is just something emergent is like saying that electric charge is emergent from mass. I would not be surprised if some type of proto-consciousness were needed in order to understand how macroscopic objects like human brains are self-aware, have sensations, etc.


> Saying things like "Consciousness is emergent" or "Consciousness is just a side effect of information processing" seems to miss the point.

It sounds suspect to me too, but the reason the idea is appealing is that it's worked fantastically for hundreds of similar phenomena. To vastly oversimplify thousands of years of philosophy, for most of human history the default position has been that every idea had to be reified into some metaphysical essence.

What is the essence that makes a dog a dog? That makes bread nourishing but rocks not? That makes rocks solid? That makes fire hot? That makes falling objects seek Earth? Again and again, we've found simple physical explanations for these puzzles, that once were regarded as just as inexplicable as consciousness. Indeed, consciousness is now the last essence that survives, and that's why some bet it'll go the same way as all the rest. "But this one is different", indeed, and every single time so far that argument has been wrong.


My background is in theoretical physics, so I absolutely subscribe to this perspective. In understanding my own consciousness, however, I find it a lot harder to see how such an explanation could work.

The only other problem that I feel may require a similarly "out-there" explanation would be the measurement problem in quantum mechanics. Why is it that when a macroscopic object, made entirely of electrons, protons, etc., interacts with an electron, a measurement occurs, but when a single electron interacts with that same electron we can just use the Schrödinger equation? The existence of some type of cutoff between macroscopic and microscopic seems to me to suffer from inevitable inconsistencies. More likely, I would think, is that something like the many-worlds interpretation is correct.

Likewise, I wouldn't be surprised if something similar happens when we understand consciousness more deeply. Simply saying that it's an emergent property is sort of cheating and discounts how different the sense of experience is from anything else that we have seen in nature.


> Why is it that when a macroscopic object, made entirely of electrons, protons, etc., interacts with an electron, a measurement occurs, but when a single electron interacts with that same electron we can just use the Schrödinger equation?

People get hung up on this semantic concept of 'measurement' when that's not what it's about at all. It's not "if you measure a property of a particle then you change that property", it's "if you interact with a property of a particle then you change it, and it's impossible to measure without interacting".

The electron doesn't care whether you're conscious and certainly not whether you're writing anything down. Its momentum or whatever is still going to change regardless.


Indeed. Measurement is a physical process, not a metaphysical one. To measure something, you necessarily need to interact with it (and at this point it's worth remembering that photons bouncing off an object and hitting your retinas absolutely count as interaction, so you can't e.g. put a ruler next to the dog, read out the results, and say you weren't interacting with it, not in the sense physicists use this word).


How does that explain the quantum eraser experiment?

A photon that has been "marked" and then "unmarked" will interfere with itself and produce the fringes characteristic of Young's experiment.

https://en.m.wikipedia.org/wiki/Quantum_eraser_experiment


Did you mean to reply to https://news.ycombinator.com/item?id=23221433? I don't see how QEE is incompatible with the notion that measuring something necessitates interacting with it. Photons are usually measured by absorbing them and turning them into electricity.


Absolutely! I wish this were more commonly understood. And, if you take that train of thought one step further, it implies that quantum mechanics exhibits non-local causality.


The electrons in ambient air are interacting but not measuring.


> Why is it that when a macroscopic object, made entirely of electrons, protons, etc., interacts with an electron, a measurement occurs, but when a single electron interacts with that same electron we can just use the Schrödinger equation?

The way I understand it is that particles are fuzzy, but if they interact with sharper things they get less fuzzy themselves. Naturally, large numbers of interacting particles (macroscopic objects) are way sharper than single isolated particles. So measurement is just a theoretical concept that relates to a system physically getting immensely less fuzzy, to a point where we can safely assume, for the sake of easier calculations, that it's not fuzzy at all. You can still describe such a sharp system with the Schrödinger equation, it still works even after measurement, but you don't do that since there are easier ways, like Newtonian mechanics. There's no artificial cutoff. There are just many orders of magnitude of difference in how sharp things become through interaction.

You can see that experimentalists have managed to get pretty large systems to display quantum superposition if they are properly isolated from interaction with far larger systems that could knock them out of superposition.
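
To put rough numbers on that "orders of magnitude" point, here's a toy illustration of my own (the textbook decoherence picture with an assumed per-particle overlap, not anything from the parent comment): every environment particle the system interacts with multiplies the system's leftover "fuzziness" (its off-diagonal coherence) by the overlap of the two environment states it gets kicked into, so sharpness grows exponentially with the number of interacting particles.

    overlap = 0.9                      # assumed overlap per environment particle (illustrative)
    for n in (1, 10, 100, 500, 1000):  # number of environment particles interacted with
        # residual coherence ("fuzziness") left after n interactions
        print(n, "particles -> residual fuzziness ~", f"{overlap ** n:.2e}")

A thousand particles already suppresses the fuzziness by dozens of orders of magnitude, and a macroscopic probe involves more like 10^23, which is why the cutoff looks sharp without there actually being one.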

As for consciousness, my opinion is that it's just some specific algorithm that has some evolutionary utility for some animals. It's fairly good for modelling the self, the environment and the future. It's not in any way the pinnacle of development. It's gradual, in that a human has more consciousness than a dog, and a dog more than a hamster, but not to the absurd point that a single electron or even a single neuron has some amount of consciousness. Just the same way you shouldn't think that a single electron or even a neuron has some intrinsic ability to perform sorting or image recognition. You need hardware, and yes, more powerful hardware yields stronger results, but you still need to have at least rudimentary versions of the correct algorithm running on that hardware before you can actually say it does the thing.


> So measurement is just a theoretical concept that relates to a system physically getting immensely less fuzzy, to a point where we can safely assume, for the sake of easier calculations, that it's not fuzzy at all.

It appears to be more complicated than that; see the Wigner's friend[1] thought experiment and a recent experimental realization of it[2].

From the paper's abstract: "In a state-of-the-art 6-photon experiment, we realise this extended Wigner’s friend scenario, experimentally violating the associated Bell-type inequality by 5 standard deviations. If one holds fast to the assumptions of locality and free-choice, this result implies that quantum theory should be interpreted in an observer-dependent way."

[1]: https://en.wikipedia.org/wiki/Wigner%27s_friend

[2]: https://arxiv.org/abs/1902.05080


The Wigner's friend experiment doesn't contradict my interpretation in any way. For me it's not the exchange of information that causes the system to become sharper. It's the poking of that system with an immensely huge stick, built out of billions and billions of tightly and sharply interacting particles, that you have to do to get that information.

So when Wigner's friend makes a measurement of a quantum system by interacting with it, he causes it to collapse to a sharper form. Wigner, unaware of the result, may still think that there's some fuzziness left in the system that consists of his friend and the experiment he performed. But in fact all the fuzziness was gone when his friend performed the experiment. The fact that Wigner doesn't know that yet changes nothing in reality. Sure, he can treat his friend and his experiment as if they were a quantum system. But they are not. The math, I think, looks the same whether you don't know things about sharp stuff or you know all there is to know about fuzzy stuff.

The actual 6-photon experiment is a bit too technical for me but I don't suppose the "Wigner's friend" in it is built of billions of particles interacting with each other. If the friend is something smaller, like just a few particles that perhaps loosely interact, then it's perfectly fine to expect that after the measurement of one fuzzy system by another there's plenty of fuzziness left for the macroscopic observer (Wigner) to see.


> The fact that Wigner doesn't know that yet changes nothing in reality.

> The actual 6-photon experiment is a bit too technical for me

There's a more approachable explanation, and lengthy discussion, over at Physics Forums: https://www.physicsforums.com/threads/a-realization-of-a-bas...

I'm no expert, but it seems to me the issue is of a different, more fundamental, nature.


> For me it's not the exchange of information that causes the system to become sharper. It's the poking of that system with an immensely huge stick, built out of billions and billions of tightly and sharply interacting particles, that you have to do to get that information.

That's exactly what I think about the Schrödinger's Cat experiment. It's often implied that the hypothetical cat may be simultaneously both alive and dead, as if there were only two possible states for the cat.

There are approximately 7 x 10^27 atoms in the average human body. The figure for a cat wouldn't be far from that, I suppose. Taking that into account, we can safely say that the quantum system (cat + radioactive source + poison) may actually simultaneously be in an astronomically high number of different entangled states, which may converge to, as you say, a "sharp" form.
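
Rough arithmetic on that, as a back-of-the-envelope sketch of my own (crudely assuming just two relevant states per atom):

    import math

    atoms = 7e27                 # the figure quoted above for a human-sized body
    states_per_atom = 2          # deliberately crude assumption
    digits = atoms * math.log10(states_per_atom)
    print(f"the number of basis states has roughly {digits:.1e} decimal digits")
    # ~2.1e27 digits, so "astronomically high" is if anything an understatement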


Doesn't this experiment have a simple explanation that measuring something is essentially entangling yourself with it? So at the end of the thought experiment, both scientists and the test device are in a superposition together...

...except it's not that simple, and here is why I don't like those experiments (and why in reality they're always realized using atoms, and not cats): there's no Wigner, or Schrödinger's cat, as a unit in a physical sense. They're made of atoms. Atoms that interact and radiate information all the time. Take that box with the cat in it: the cat radiates information at the speed of light, which gets absorbed by the box and reradiated away. If the result of your thought experiment could be changed or confused by the following setup:

- a) put a broad-spectrum camera suite around the box prior to the experiment

- b) have it record data on the hard drive

- c) have the experimenter do their experiment

- d) have someone pull the data from the hard drive and read out the actual state of the experiment subject

... then it means your explanation for the thought experiment is wrong.


Yes, it's extremely difficult not to leak (potential) information. This is called decoherence in these situations. It doesn't matter if you don't become conscious of the measurement result; as long as some atoms in your body get contaminated by the measurement, they get pulled into the same branch (or whatever concept you subscribe to, it all works the same).


We can imagine consciousness as a purely informational process, which can occur in any adequately complex substrate—real or virtual. In this regard (integrated information theory), the inherent “properties” of consciousness would be more readily described using graph theory or topology.

However, we can also view physics itself as something inherently informational, given that all the base particles in the standard model can ultimately be boiled down to scalar values. Would it be possible to say that those values are ultimately derived from the inherently mathematical properties which result in this reality “existing”? What does it mean for something to exist on the basis of its inherent mathematical identity?

If both consciousness and reality are purely mathematical phenomena (maybe consciousness is more derived through the computation-generating properties of life), why would one such phenomenon be any less expected than the other? Isn't it impossible for a non-mathematically-derived (informational) consciousness to exist at all in such a universe?


Last I heard, integrated information theory is _not_ neutral w.r.t. substrate, i.e. to compute the consciousness of a human being simulated on a computer chip, Tononi wants us to apply the IIT calculations to the computer chip.


Oh, certainly the processes occurring on the chip itself could bring about an entirely separate consciousness while at the same time simulating a human one on a different level.


Anybody have a really bad negative feeling toward the deconstruction of science? There was an obvious degree of existential improvement for a while, but dissecting everything to this point... kinda hurts my soul. I somehow want to keep some magical, unexplained phenomena to stare at without any theory, just feeling.


> Anybody have a really bad negative feeling toward the deconstruction of science?

C.S. Lewis for one, this is from "The Abolition of Man":

> But you cannot go on 'explaining away' for ever: you will find that you have explained explanation itself away. You cannot go on 'seeing through' things for ever. The whole point of seeing through something is to see something through it. It is good that the window should be transparent, because the street or garden beyond it is opaque. How if you saw through the garden too? It is no use trying to 'see through' first principles. If you see through everything, then everything is transparent. But a wholly transparent world is an invisible world. To 'see through' all things is the same as not to see.


Well, every layer man made transparent changed his life, and it became a chase, mainly technologically driven, to do so ad infinitum. Even though we're seeing resistance now because ignored side effects are biting us, I don't know whether societies won't keep digging beyond QM so man can have room-temperature superconductors and backyard fusion to power never-ending fountains of youth. Until their sanity disappears.


I recommend reading Joy in the Merely Real by Eliezer Yudkowsky (and its follow-ups). It might help you with that. https://www.lesswrong.com/s/6BFkmEgre7uwhDxDR/p/x4dG4GhpZH2h...

Alternatively, there's the obligatory xkcd (https://xkcd.com/877/), but I feel it kind of misses the point a little.


You might like a site I made about this (or you might not...): http://www.lifeismiraculous.org/short.


>Saying things like "Consciousness is emergent" or "Consciousness is just a side effect of information processing" seems to miss the point.

On the contrary, I feel like those who think we need some fundamental consciousness property are missing the point. We expect to find some objective third-person property that is the substrate of consciousness because a phenomenon that seems so unlike every other property should have a fundamental basis. But the mistake is expecting to analyze consciousness in the same way we can analyze other objective properties of physics.

The only things we know to be conscious are complex macro-scale objects with highly complex and rare internal organizations. In fact, the only things we truly know to be conscious are ourselves. It seems to be fundamentally subjective; it is not something to be directly witnessed as a third-person observer. So our investigation should start there. What we want is a theory for deriving organizational subjectivity from a non-subjective substrate. What might this look like? A system with subjectivity will need to recognize the external from the internal. It needs an egocentric mode of representation with the ability to represent itself and its dispositions and intentional stances, as well as states to represent the external world. My intuition tells me such a system has a non-zero "inner life", i.e. there is something it is like to be it. But I see no reason to think "fundamental" subjectivity, whatever that is, could do any explanatory work here. The causal and representative power is in the organization.


It’s even worse: How can you be sure that you can separate the internal from the external? Both seem to have to be present in order to make sense of anything.


So, in a sense, everything is connected to everything else? What do you call that, hmm? ;)

This is an existential and observational query, not about industrial optimizations (ie. what works). What works is killing us!


> What we want is a theory for deriving organizational subjectivity from a non-subjective substrate. What might this look like?

I think the Reinforcement Learning framework is useful here. An agent exists inside an environment; it has some rewards, it is capable of sensing and acting, and its goal is to maximise its rewards over time. This implies learning, exploration, planning and the ability to model itself and the world. Of course it is possible for many agents to share the same environment and have different bodies, rewards and needs.

I think sensing, learning and evolution are good filters for judging if something could be conscious.
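
For concreteness, the framework described above is the standard agent-environment loop. Here is a minimal sketch (the Agent/Environment interfaces are hypothetical, gym-style placeholders rather than any particular library):

    import random

    class RandomAgent:
        """Trivial stand-in agent: senses, acts at random, and (here) learns nothing."""
        def __init__(self, actions):
            self.actions = actions

        def act(self, observation):
            return random.choice(self.actions)

        def learn(self, observation, action, reward, next_observation):
            pass  # a real agent would update its world model / policy here

    def run_episode(env, agent, max_steps=1000):
        # env is assumed to expose reset() -> observation and step(action) -> (observation, reward, done)
        observation = env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = agent.act(observation)
            next_observation, reward, done = env.step(action)
            agent.learn(observation, action, reward, next_observation)
            total_reward += reward
            observation = next_observation
            if done:
                break
        return total_reward

Sensing is the observation, acting is the action, and the learning, exploration, planning and self-modelling all live inside whatever replaces the learn() stub.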


This totally elides the problem of consciousness, of qualia, which is the essential question.


Indeed. I think IIT is a good theory... of something, just not exactly consciousness. Maybe a precondition to consciousness or something like that. But the thing that we think of as our consciousness is to me best explained by the "global workspace" theory, which says consciousness is the process of the various specialized parts of the mind, which are constantly working separately and in parallel, communicating their state to each other. It's like a boardroom for the society of mind, where at any point one subsystem has the podium (although there is lots of chatter and crosstalk as well). For most of us a part of the language subsystem (Gazzaniga's "interpreter") is also giving a running commentary (the internal monolog) of the information it's receiving from the other parts (with a lot of its own interpretation thrown in)... but this is not an essential feature of consciousness! We have a tendency to identify our consciousness with this commentary, but that is obviously incorrect. I think that the communication in this global workspace occurs in its own "language", a language internal to organic brains, capable of abstracting and reducing to its barest essence information from any of its components.

This view of consciousness is phenomenologically best aligned with most of the (admittedly limited) objective information we have about human conscious experience, and is consistent with the experience of various altered states of consciousness such as meditation or the use of entheogenic substances. It explains how consciousness is only a small part of what happens in our mind and why the nature of the subconscious (which is most of what actually happens in the brain) seems to be so hard to nail down. It also means that any being with a "mind" that has numerous independent and parallel processes that need to be coordinated has some measure of conscious experience, even invertebrates, and probably even living things whose information processing uses an entirely different infrastructure, such as plants. However, I can't see any way that this definition of consciousness could apply to an electron.

Edit: I think that the global workspace theory of consciousness can probably be mathematically described by IIT, but not just any integration of information results in something that deserves to be called consciousness. The information that's being integrated should be a combination of perceptions (feedback from the environment) with some kind of memories of previous states, resulting in new memories and predictions, and the integration should happen through independent pre-processing of this information by relatively independent subsystems. This is still general enough to apply to nearly everything living, but I think it puts conscious experience at a higher level than merely integrating information.
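
For what it's worth, the "boardroom" picture can be made concrete with a toy sketch (entirely my own illustration, not a claim about how brains implement it): specialist modules each propose content with a salience, the most salient proposal takes the podium, and its content is broadcast back as the next shared context.

    # Hypothetical specialist modules: each maps the shared context to (content, salience).
    def vision(context):   return ("red blob approaching", 0.7)
    def hearing(context):  return ("quiet", 0.1)
    def interior(context): return ("hungry", 0.4)

    def workspace_step(modules, context):
        proposals = [module(context) for module in modules]      # modules work "in parallel"
        content, salience = max(proposals, key=lambda p: p[1])   # one proposal takes the podium
        return content                                           # broadcast: becomes the new shared context

    context = None
    for _ in range(3):
        context = workspace_step([vision, hearing, interior], context)
        print("broadcast:", context)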


Yeah, I definitely like IIT and think it's on to something important. But it doesn't strike me as a sufficient condition for consciousness. I have a lot of sympathy for GWT. One of its theoretical virtues is that it coheres with theoretical properties of consciousness with independent justification like integrated information, recurrence, self-modelling, etc. But it still lacks any direct theory of phenomenology, i.e. qualia. Although I can see why scientists would avoid attempting such arguments if at all possible. This would be a good place for philosophers to bridge the gap, but I guess it is easier to make a career out of promoting panpsychism these days than to come up with something insightful to say about mechanistic consciousness.

But to move the discussion forward, I think one obvious property of qualia is that it is representational. That is to say, it is structurally related to the thing being indicated such that it can inform about the thing. For example, the red quale tells you something about red substances in the context of the space of possible colors, the external world full of beneficial and harmful substances, and the bearer of the quale with drives, dispositions, preferred states, etc. This complex milieu of properties, states, dispositions, etc, all serves to inform the properties of a quale. Its representational power is one that gives the bearer certain competencies in the actual world, e.g. pain gives one the competency to avoid damaging states. But this representational power must be intrinsic to the structure that constitutes a quale. If this were not the case, then its power to confer competency would be contextual. Pain would only confer competency in the right environment (like a reflex that has meaning only in the right environment, e.g. the grasping reflex of an infant). But this isn't the case with qualia; the experience of pain is intrinsically representative and provides its bearer with competence universally. The same can be said for emotions and our senses. This suggests to me that some kind of recurrent structure is a necessary condition for a quale: to simultaneously be the producer and the consumer of a representative state, and consume in such a way that necessarily confers competent behavior. But this discussion sounds like a different level of description of coordination between different subsystems. Information from different subsystems bears on this central coordinator, and this information confers competent behavior on downstream subsystems, i.e. contextually relevant causal powers. I see the beginnings of the details required for mechanistic qualia in theories like GWT and others based on principled analysis of brain networks.


I'm personally a fan of attention schema theory. It's based on the idea that consciousness emerges from an advancing capability to create models of both the external and internal world. For me, it scratches a couple itches. First, it's not an all-or-nothing evolutionary gamble, but develops in steps and each step has advantages. Second, it seems pretty well rooted in neuroscience and psychology, rather than relying on handwaves like "all matter has consciousness".

Here's a couple articles from Michael Graziano, the founder (discoverer? namer?):

https://www.nytimes.com/2014/10/12/opinion/sunday/are-we-rea...

https://www.theatlantic.com/science/archive/2016/06/how-cons...


We all know what a forest is. But there is no "particle of forestness".

Most of the words we use don't need laws of physics and particles exclusive for them. What makes you think consciousness is more like electromagnetism than it is like a forest?

In my opinion consciousness is just an abstraction, like a football match or a forest. We have a word for these common configurations of matter, but it doesn't mean that they are fundamentally, qualitatively different from the rest of the universe.

Show me an elementary particle of forest then I can admit that consciousness needs one too :)


Consciousness is not just an abstraction; it requires self-replication/evolution and the ability to create models of self and environment - both being extraordinary feats.

A perceptive self-replicator existing inside a multi-agent environment that affords both cooperation and competition, and whose existence depends on its ability to adapt and act well - this configuration is what creates consciousness, in my opinion.

There is no particle or fundamental quality of consciousness, but it is what self-replication under limited resources leads to.


Consciousness seems like it should be special, but that's not evidence that it is. Consider: after a lifetime of subjective experiences from exercise etc., a cardiac surgeon can still know far more about your heart than you do. Nobody is arguing some mysterious force is pumping blood through their bodies, but the feeling of blood pumping through your veins somehow feels primal, not like simple plumbing.

You can intellectually extend that to everyone else about our bodies, so why not consciousness?


Understanding how a brain works may only require a mechanistic model, but consciousness is different.

Consider the following thought experiment: suppose that we had a complete understanding of how neurons work (as well as the brain more broadly). I think this is totally reasonable. Then suppose that we created artificial neurons that acted identically to biological neurons. If we replaced the neurons of a brain with these manufactured neurons, I would imagine that the resulting brain would still be conscious. Now imagine taking some of these neurons and removing their interior structure, instead routing the inputs/outputs to a machine somewhere else where the correct computation takes place to determine the output of the neuron. From the rest of the brain's perspective nothing has changed, so presumably it's still conscious.

Now take this to the extreme: replace all of these neurons with empty shells and route the computation to a billion people in the world to perform on pen and paper. Is it still conscious? If so, what if you removed the brain altogether and just performed the computations directly on paper, is the paper conscious?

The paper would definitely still be intelligent, it would possibly come up with interesting inventions, claim that it was sad or happy etc, but would it actually be sad?


People keep saying "consciousness" but it sounds like the topic is more specifically identity.

I have a brain, you have a brain, billions of others do, it seems like there is a reasonable amount of symmetry, but "I" look out of my eyes and not anyone else's. Replacing you with an AI is much more feasible from my perspective than replacing me with a program.


Yes! I completely subscribe to the idea that consciousness is an emergent property, but I can't begin to see how the asymmetry of me vs anyone else is reducible.


I think identity is definitely related to consciousness but you can have no sense of identity and still be conscious.


>but would it actually be sad?

To the fullest extent that you are capable of being sad. There is no even remotely plausible alternative to this physicalist argument.

I would argue that you aren't nearly as conscious as you think you are. That's the conclusion I've come to after many hours and years on the meditation cushion cultivating awareness of my own cognition. Any thought, choice, or action that you make doesn't actually happen in your conscious brain. You just become aware of it after it's happened. That's all there is.


It's premature to take for granted that a paper brain emulator can exist in principle.

Take something else less...emotionally laden. A black hole. Could we model a black hole on paper? If so, we ought to just be able to drop a computational particle in and see exactly what happens at the singularity. Except we can't -- the whole reason we call it a singularity is because we get divide by reality errors if we try to compute it.

But I would argue there's a different problem. A black hole is the most computationally efficient method of calculating itself. Any method of accurately emulating the black hole on the Planck level is going to take more energy (and mass) than the actual black hole. As a result, those pieces of paper would literally collapse into an actual black hole long before they could properly emulate the black hole they were modeling.

Which isn't to say that the brain is anything like that but the point is, it's not a foregone conclusion that we can construct a completely accurate emulator of a physical object.

Finally there's something of a category error in making the claim that pieces of paper can be sad. A paper emulation of a bar magnet would not itself be magnetic. In the world of the emulation it would produce an output that would model magnetism, yes, but that's the extent of it. Our paper brain emulator, if it were constructible, would produce output that would be a model of sadness in the emulation, but it would not actually be sad.


I think I'm almost entirely with you except for these two statements:

>It's premature to take for granted that a paper brain emulator can exist in principle.

I think it is premature to say we could simulate all of quantum mechanics with sheets of paper, so on that technicality, I totally agree. However, I think it's quite unlikely that we couldn't in principle simulate the components required for a faithful replication of consciousness. But you're correct that it's probably not a given.

>Finally there's something of a category error in making the claim that pieces of paper can be sad. A paper emulation of a bar magnet would not itself be magnetic. In the world of the emulation it would produce an output that would model magnetism, yes, but that's the extent of it. Our paper brain emulator, if it were constructible, would produce output that would be a model of sadness in the emulation, but it would not actually be sad.

I think I disagree with almost all of this. The bar magnet is a bit of a bait and switch because the system, the input, and the output weren't well defined. In consciousness, you need to define the system, system input and system output properly. If the system has persistence of thought/computation, and the inputs and outputs can be defined in a way identical to those of your own consciousness, then it would actually be sad, in every way that you are. In particular, if you replace bits of paper with "neurons", your argument should absolutely still hold. So the logical extension of this argument is that either YOU also aren't capable of being sad, or your consciousness does not originate in your brain. Both of those, I think, are pretty close to sufficiently absurd that we can rule them out.


> As a result, those pieces of paper would literally collapse into an actual black hole long before they could properly emulate the black hole they were modeling.

That is wrong. Any method of emulating the black hole within a volume whose radius is less than or equal to the Schwarzschild radius of the black hole would collapse, but you could do the calculation over a larger volume and avoid that. The most massive stars are far more massive than the least massive black holes.

The smallest known black hole that I could find in a quick search is XTE J1650-500, at about 3.8 solar masses with a "15 mile"[1] diameter. There are a lot of stars more massive than that.[2]

[1] https://www.nasa.gov/centers/goddard/news/topstory/2008/smal...

[2] https://en.wikipedia.org/wiki/List_of_most_massive_stars
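
Quick sanity check on those numbers using the standard Schwarzschild radius formula r = 2GM/c^2 (my arithmetic, not from the NASA article):

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg

    M = 3.8 * M_sun                 # XTE J1650-500, roughly
    r_s = 2 * G * M / c**2          # Schwarzschild radius in metres
    print(f"diameter ~ {2 * r_s / 1000:.0f} km (~{2 * r_s / 1609:.0f} miles)")
    # ~22 km, ~14 miles -- consistent with the quoted "15 mile" figure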


>Any method of emulating the black hole within a volume whose radius is less than or equal to the Schwarzschild radius of the black hole would collapse, but you could do the calculation over a larger volume and avoid that.

...This was my intuition as well, but I'm far from certain about it. It's not uncommon for the universe to find sneaky ways to prevent us from "cheating" so to speak. These often result from deep symmetries/conservation laws and fundamental limits on information.

I'm actually a big opponent of the simulation argument for that reason. I don't think you can accurately simulate the universe without having a universe to simulate it in. Otherwise, you could just 'turtles-all-the-way-down' the simulation. The information density would have to be unbounded, and this seems to be in disagreement with fundamental laws of the universe.

We don't have a theory of quantum gravity... it's definitely possible that simulating the black hole in a more spread-out manner would be impossible. I could envision a fundamental tradeoff where replicating the information exchanges between the microscopic constituents requires that either you satisfy the hoop conjecture, or you keep increasing the size of your model without bound, whereby for any finite size you still satisfy the hoop conjecture. (Particularly since the required mass/energy density goes down significantly as the radius increases.) I know that's wild speculation, but I feel like it's not so crazy that the opposite can simply be taken for granted.


How could you possibly know that you "became aware" of anything without some information being sent back from that-thing-that-became-aware? You know it so well that you are even able to comment about it.

I think the fact that people ('s brains) are so sure that they are conscious is pretty clear evidence that for some reason your brain is getting messages back from this consciousness thing. Either that, or brains are consistently built so they can't shake the idea that they have an observer (see: how many comments there are on every HN article about consciousness, including this one).


>How could you possibly know that you "became aware" of anything without some information being sent back from that-thing-that-became-aware?

I'm sorry, but I don't quite understand either your question or the following paragraph. However, I feel like it could be an interesting line of discussion if you can explain it to me better.

I'm a bit lost; it sounds like you're suggesting consciousness and your brain are two different things? Is that right?


Well, the two things are: the part where physics does stuff, and then the other part where you become aware of it (whatever that even is, but you experience it nonetheless; let's call it the consciousness).

Random assumptions/observations that might be relevant:

- The universe seems to generally work under "simple" principles that can be explained with a few mathematical equations.

- The brain is made of the same stuff as everything else.

- You are only aware of a very, very small fraction of what actually happens in your body: You are not aware of what happens in your visual cortex, just what comes out of it. You are not usually aware of the sound waves themselves, just the conceptual "sound". I would call all that stuff "processed data". Sound, images, touch, all that stuff, generally gets blended together into one cohesive experience.

- All that looks like a flow chart when I think about it. The photons go into your eye, stimulating nerves, electrons flying everywhere. That cascade of nerve stimulations goes into the visual cortex where stuff is processed into actual stuff that you care about, but the data is still just essentially electrical signals at that point. But then suddenly, you are aware of that processed data. Sounds like it explicitly got that processed visual data and sent it out a transmitter, to be listened to by none other than my "self".

- I guess I would then relate my consciousness to a radio receiver that's waiting on the other end. And in some specific part of my brain is a transmitter.

- A radio station can't possibly know if anyone is listening to the signal they sent, unless the listener communicates back that they are listening. In the same way, it doesn't make much sense to talk about how aware you are if the "aware part" (the radio receiver) doesn't talk back.

- Random final observations if this model were true: To maintain conservation of energy, the receiver would likely have to act like a capacitor, storing energy that was used to send the message to it, and using that energy to send a message back. Either that or it communicates back by collapsing wavefunctions (okay now it's getting a little out of hand). Also, maybe using this model, you could say that your sense of the progression of time is proportional to the amount of information sent. Which is why time goes so quickly while you are asleep, because your brain stops sending stuff out the antenna (also getting out of hand).

Hopefully you find this idea at least a little interesting. I feel like there's a world of possibilities here that haven't properly been considered.

I would love to hear your take on it.


Ah yes ok I get it now.

There is feedback from the attention centers. In particular, when you pay attention to things, those connections get reinforced. So the real feedback occurs on a bit of a different timescale than the rest of your cognition, though there's probably some shorter term feedback too if I had to guess.

In a sense, I think there's a center of attention/awareness that consists of almost all of what we usually call consciousness, largely because its contents also get fed back in as an input. There's the self-awareness component.


That doesn't solve anything though, there is still the "you" that becomes aware.


Not really. If you really start digging you realize that there isn't much "you" there left to be conscious. To a great extent, I think consciousness is a bit of an illusion. Most of what people typically think of in terms of consciousness is really the contents of their attention. But you don't even truly control that attention. The decisions to switch from one unconscious information stream to another are themselves unconscious information streams. In essence, I'd argue that upon inspection, "attention" is the apparent last vestige where any consciousness may reside. But even deeper inspection reveals that it itself is quite an empty concept.


That's a good description of where I'm at, and I keep wondering if maybe the definition most people are using for "consciousness" just doesn't match mine. The longer I meditate, the less consciousness matters to me; it seems totally clear that I become aware of things after they've already been generated elsewhere in my brain, and that awareness is just a kind of self-reflection which maybe helps with higher-order planning or whatever. I'm still open to realizing I've missed the mark--especially because so many prominent meditators seem very focused on (or even enthralled by) mysteries of consciousness. Until then, I'm really struggling with why people would try to elevate this subjective experience to a fundamental aspect of physics.


> I keep wondering if maybe the definition most people are using for "consciousness" just doesn't match mine

I think this is true. When we talk about consciousness we talk about it as if consciousness has the ability to control the body and speak, and as if our thoughts are controlled by our consciousness. I think illusionist arguments against that are very strong here. We can change how someone thinks or acts through physical means (like lead poisoning someone), therefore those things must mostly be physical.

I have come to the conclusion that consciousness probably is only an experiential thing. It might not be able to control anything, but there is something fundamental there experiencing something. We can't know that the world is real: we could be a brain in a jar, or in the matrix, or on a massive DMT trip right now. But we can know "I think therefore I am", probably better written as "I experience therefore I am". It seems very strange to throw out the one thing we know to be true, that we are having a conscious experience, in favour of something we don't know to be true, that the world is real and causing that experience.


> I have come to the conclusion that consciousness probably is only an experiential thing. It might not be able to control anything [..]

Doesn't it cause us to at least have these kinds of conversations?


There is always still the _subjective experience_ of paying attention to whatever I'm paying attention to, of "being the one that sees my visual field", and so on.

There must be a fundamental difference between the subjective experience I have of vision and that of a computer with a camera and processing software; I can't imagine that it has a similar experience.

How come there _is_ a subjective "me" that experiences things and can pay attention to them? Given that we are clearly bags full of extremely fancy chemical reactions.


I think there is a difference there, but it's largely because the computer with a camera is such a simple system at this point.

In contrast, you have many, many layers of very sophisticated and interconnected abstraction and reality modeling between that visual stream and other forms of processing. Typically, the higher the level of abstraction, the more "aware" of it you are as it gets filtered and dumped into your attention centers.

In short, even our most sophisticated state of the art "deep" learning algorithms are but puddles compared to the ocean of depth available in your brain. ...and almost none have any form of attentive aggregation and selection.


> There is always still the _subjective experience_ of paying attention to whatever I'm paying attention to, of "being the one that sees my visual field", and so on.

"Being me" is an experience or feeling. We have lots of other feelings both from within and outside our bodies. Could it be that consciousness is simply how it feels to have a focus of thoughts and attention in our brains?


Are you aware of the model of the mind system from The Mind Illuminated (a book that teaches meditation)?

First, I have to clarify concepts. There is a consciousness created by the mind. It's the place where sensory input is experienced. It's also the place where thoughts are experienced. It's basically the screen that allows the different parts of the mind system to communicate.

Then, there's the consciousness talked about in this article. It's a more primordial quality.

Now, that model of the mind system says that attention and awareness are the two modes of the mind. Consciousness as the screen of experience is created by the mind. Perception is created by the mind as well.


That’s the “Flashlight in the Dark” concept of consciousness as the focus of attention.


>To the fullest extent that you are capable of being sad. There is no even remotely plausible alternative to this physicalist argument.

Actually that's just hand waving, and taking for granted what needs to be proved.


I feel like you could be equally dismissive to the statement "There is no remotely plausible alternative to a godless universe."

And yet, I think that statement stands pretty strong on the basis of what we do and what we can know.


You'd like "Permutation City" by Greg Egan. It's about exactly this question, and he goes with it even further :)

For one example, he considers information processing to be consciousness, then asks about random patterns in cosmic dust that by chance encode the next step in a conscious process every 3 million years. Is that a conscious thing? What if it is?

It goes very abstract very fast, and I disagree with some conclusions, but I still recommend it.

> is the paper conscious?

Not paper, but the person implemented in such a way - yes. You don't say "my proteins are conscious", you say "I am conscious". The substrate only needs to ensure the process works correctly and the inputs and outputs are matched.

In your example it's like virtualization in IT - you can run a game on the computer, or run a virtual machine on the computer and run the game on that virtual machine. The game is running either way, it even runs on these same transistors, but it's a different process.

Besides, I think consciousness is very hand-wavy, and if I had to bet I would bet:

- consciousness isn't a thing, it's just a bad abstraction (70%)

- consciousness is emergent in a fully physical universe (25%)

- consciousness is fundamental, and part of the laws of the universe (5%)


Ah, the classic "argument from incredulity by implausible substrate".

Step 1: point out that any number of wacky scenarios can be isomorphic to a human brain - a person in a room transcribing symbols, an unlikely cloud of dust in space

Step 2: Note that the fantastically implausible scenario you've constructed is fantastically implausible. Redirect the implausibility into the thing you want to claim is impossible.

Step 3: Wave hands, make argument about consciousness/AI


In the case of paper it's not implausible at all. It's just ignoring a necessary part of the system - a human that executes the process on paper.

I don't think it's any more implausible (that this is a conscious process) than saying both personalities of people with a split personality are conscious.


>Now take this to the extreme: replace all of these neurons with empty shells and route the computation to a billion people in the world to perform on pen and paper. Is it still conscious? If so, what if you removed the brain altogether and just performed the computations directly on paper, is the paper conscious?

Yes, absolutely consciousness would exist in all of those scenarios — but you couldn't claim that any one "thing", such as the paper itself, would "be" conscious. The notion of a separate self is still fundamentally an abstraction placed on top of billions of neurons interlinked with a musculoskeletal system, through which quadrillions of new atoms are constantly cycling. On a long enough scale you only have the psychological "gestalt" of a human.

Even the sensation of “self” is still ultimately just another internal cognitive model, because ultimately everything else we perceive is also just an internal model of reality.

In your example, that internal “self” model would continue to exist within the abstract computationally-mediated consciousness-generating process.

Each conscious process exists in its own fully separate ontological space—because it is inherently subjective.


Without wanting to cause too much controversy, this depends how broad your experience of consciousness is.

That "internal model of reality" is capable of some very interesting things if you're prepared to read past the first few chapter or two of the manual.


Sure, why not? That's literally what the materialist definition of consciousness means: it's a process that can be carried out on any appropriately-configured physical substrate; ours just happens to be neurons.


You're describing the systems reply to the Chinese Room Argument[1] and I agree with you. To go further, what else could possibly generate consciousness? A soul? Some phlogiston that breathes fire into beings? I legitimately don't understand what the alternative is to physicalism without positing some completely unsubstantiated Other Thing that also gets us no closer to understanding consciousness.

[1] https://en.wikipedia.org/wiki/Chinese_room#Systems_and_virtu...


I agree that other mediums (like artificial neurons) would likely still be conscious, but an abstract calculation on a bunch of pieces of paper, possibly spread over years and across multiple people performing the calculations? That seems odd to me. Definitely still "intelligent", but should I feel bad about ripping up those pieces of paper? Should I feel like I'm killing someone by erasing calculations off of the paper or introducing a mathematical mistake somewhere in the middle?


What does morality have to do with the nature of what is? How you feel about snuffing out some form of consciousness is irrelevant to whether it exists or not.


> should I feel bad about ripping up those pieces of paper?

Do you feel bad every time you have a beer and potentially kill off a few brain cells?


The idea that scribbles on a paper would be conscious is really hard to believe. It also sounds like its own form of dualism that's functional instead of substance-based. I wouldn't consider it a good physical theory of consciousness.


>The idea that scribbles on a paper would be conscious is really hard to believe.

But it's not that the scribbles on paper are conscious; scribbles on paper have no causal powers, for example. The scribbles on paper are the volatile storage of this causal chain: the causal cascade flowing through the actions of the person reading the symbols, manipulating them, then writing out new symbols. The bait-and-switch is that when you adjust the thought experiment, the focus shifts from the unified causal process performing certain computations to the inert substrates of the paper (or in the case of the Chinese room, the man performing the computations). But no matter the medium of computation, the causal chains are still instantiated, and so there's no reason to think this process is not conscious.


Fair enough, but what makes the instantiation of causal cascade conscious? I see no reason to suppose it would feel pain or see color. What makes it so?


You're looking for a hard cutoff where none exists. Most likely everything is conscious on many different levels and scopes of capability.

So the case that makes people really uncomfortable - are cows conscious? (because we eat them) - has the very unsatisfying answer of "yes, most likely not in the same way we are, but in some way".


There is indeed a lot about the workings of minds that neither I nor anyone else knows, but I do not mistake my lack of knowledge for strong evidence that certain things cannot be so. That would be an example of what Dennett calls the philosopher's syndrome: mistaking a failure of the imagination for an insight into necessity.

It would be like a pre-Archimedean philosopher asserting that iron boats are impossible, because how could they float?


Do you see a reason why a mechanical arrangement of atoms could feel pain or see color? Because they do.

Scribbles on paper serving as part of a conscious system is indeed an implausible scenario. You'd need a huge number of pieces of paper, interacting in very interesting ways. It's an example specifically designed to be implausible, and so is a very poor guide for intuition.


> Do you see a reason why a mechanical arrangement of atoms could feel pain or see color? Because they do.

That's the hard problem.


Why wouldn't it? Supposing that it doesn't presupposes that somewhere along the gradual chain of replacing a human brain with this paper processing system those properties are lost either gradually or abruptly.


Assuming it's the functions the brain performs that are conscious. But it's part of the hard problem. Why and how is anything conscious in the physical world? What does that mean for other physical arrangements? How would we know?


> Assuming it's the functions the brain performs that are conscious.

As far as I know, if you turn off someone's brain functions you also turn off their consciousness. Moreover, consciousness can be manipulated by manipulating the brain. It stands to reason that consciousness is a subset of brain function (and body, sure). But I concede that this is not a rigorous proof of anything. Perhaps the consciousness treats the brain like a comfortable chair, and when it is destroyed gets huffy and leaves to go somewhere else. Seems unlikely though.

> But it's part of the hard problem. Why and how is anything conscious in the physical world? What does that mean for other physical arrangements? How would we know?

Hard to state, perhaps. Hard to answer? I suppose we'll find out once stated whether the question is actually hard to answer.


Relevant comment that argues in this direction: https://news.ycombinator.com/reply?id=23221295


The original claim was lots-and-lots-of-people calculating on many pieces of paper. But it's just an example of this endless stream of sort of ridiculous bait-and-switch arguments. First someone says "so it's mechanical, so a big, fast, accurate speak-and-spell could simulate it", and you say "well, OK, it would have to be huge, incredibly fast and accurate and so forth", and then the person says "well, I doubt consciousness is just a speak-and-spell".


If the claim is that certain functions are what "generates" consciousness, then it's fair to ask whether any sort of functional arrangement will do, including writing stuff out on paper. I don't think it matters whether a billion Chinese are busy writing out 1s and 0s, or a computer is moving electrons around. None of that is conscious in my view, or at least I see no reason it would be.


I would sort of agree. I'm just objecting to first positing "well, even something silly would do" and then saying "well, that proves it's impossible 'cause it's silly". That kind of argument is clearly unfair.


Your billion-person computer certainly would not appear to be conscious in real time - maybe that's why you just can't get your mind around the picture.

Can you imagine a billion people, none of whom have ever even heard of chess, writing out 1s and 0s, and thereby beating a grand master at the game? (In an appropriately slowed-down game, of course.)


Yes.

There’s a short story from 1981 by Vernor Vinge titled “True Names”. Recommended reading.


What makes electrochemical and chemical reactions in a brain so special that you see a reason for them to create consciousness? Or is it a lack of complete understanding of those processes that makes you think so?


Well, to a panpsychist, there's no reason why "consciousness" would need to stop at a human scale. The universe as a whole might be conscious, and part of that conscious experience might be isomorphic to the subjective experience of the brain that's being replicated on those pieces of paper. Though I can see how people might object that this is merely conflating "consciousness" with "computation", that's really inherent in what the thought experiment is doing.


It seems like the "thought experiment" has no palpable meaning. You are basically repeating the OP's argument "I don't think consciousness can arise mechanically" except more slowly. There's no more argument there. Well, except that an entire universe of paper and pencil calculations probably could not simulate a neuron in the age of the universe. But that's not even interesting.


This argument begs the question and as such "Yes it would be sad" is a sufficient answer. Why not? We've failed to define "actually sad" in any interesting way.

Emotion is an especially odd angle as we can so easily manipulate it with chemicals. Why would that be relevant to consciousness?


Or go even further and say there is nothing special about the calculation being performed or series of actions. What is important is the mathematical structure. And the structure itself has no physical existence. Therefore, no one really exists.


That was actually explored in https://xkcd.com/505/ but with rocks instead of paper. Also "Ship of Theseus".

My take on it is that my consciousness is not the one I was born with, or the one I had a few years ago -- instead, it just thinks it is that same one, but the only link to previous versions is memories.


Looking around at other living creatures it looks as if consciousness is a gradient. Several factors could move you up and down the gradient (to simplify), and I'd imagine that the speed of data transmission would be one of those. So as far as the thought experiment goes, I'd imagine the pen and paper method of transmission would be slow enough to knock the "brain" off the consciousness scale completely.


If you have a slow simulation process you can just run it in slowed-down simulation time.
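A minimal sketch of that idea (not from the thread; the timestep value and the trivial step function are made-up placeholders): the simulated clock advances by a fixed amount per update, no matter how long each update takes in wall-clock time, whether nanoseconds on silicon or a year of pencil-and-paper arithmetic.

    # Sketch: simulated time is decoupled from wall-clock time.
    SIM_DT = 0.001  # assumed: one millisecond of simulated time per update step

    def step(state):
        # stand-in for an arbitrarily expensive update of the simulated brain/world
        return state + 1

    state, sim_time = 0, 0.0
    for _ in range(1000):
        state = step(state)
        sim_time += SIM_DT  # time as experienced inside the simulation

    print(f"simulated time elapsed: {sim_time:.3f} s")  # 1.000 s, however long it took to compute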


But there might be a link between the time-density of the simulation and the emergence of consciousness.


I guess if you replace a neuron with pen-and-paper calculation you have to keep the timing right. As in: a song played on a piano is just the various strings moving back and forth. So you could replace the strings with people walking forward and backward. But for it to be a melody: 1. the people have to move really fast, 2. all movements must be synchronized sufficiently, 3. there must exist a point in spacetime at which all those vibrations come together, i.e. all that movement must add up to the final frequency of the song.

So if all the people in the world tried to create consciousness by calculating on paper, they would have to carefully watch their timing at each step.

What do you get if a lot of people synchronize their actions? Like in synchronized swimming? dancing? a concert? maybe some primitive form of consciousness?

/edit: Try combining this synchronization with a feedback loop, i.e. the swimmers or piano players not following previously determined steps of a sequence but determining the next step based on the input from other systems around them and their own output.


This is basically a convoluted restatement of Searle's Chinese Room argument. To me yes, since I think consciousness is an emergent property of the matter in my brain, and the behavior of matter is in principle computable, I think this is the logical consequence.

I don't think that this is practically possible though. Even a single brain cell is fantastically complicated, and attempts to simply model neuronal behaviour from gross chemical simulations have failed. We can't even reliably model the behaviour of a single neuron yet. That's biological systems for you: they are insanely complex, with mind-bendingly subtle feedback loops and cross-talk effects between seemingly unrelated systems that produce the macroscopic behaviour. That doesn't change the basic principle though: yes, I think we're physical systems.


Sadness, along with every other feeling, is indeed a computation; it's just that people don't like to face such a bleak reality. In the same way, we can easily understand why a machine can be turned off and never turned on again, and yet most of us have trouble understanding when the same thing happens to our brains, so we invent a lot of mythology around that event (death). Simplified example: if I perceive that my father died (input), I get sad (output). It's a system of stimulus/reward/punishment that has worked pretty well for the evolution of our species; it is extremely complex, but that doesn't make it not a computation.


And we know that animals behave "sad" when somebody close to them dies or just goes away.

Emotions aren't anything special to humans. Other behaviors too. The "being special" is just what humans want to believe. Most of those promoting "consciousness" as "something special that can't be explained by physical interactions of electrons, atoms and molecules, or above that by biological interactions of cells etc." are simply in some way still hanging on to their religious beliefs and projecting them there. The rest just have a new book or whatever else to sell.


As I mentioned elsewhere, your last step would not actually work in real time, so maybe that's part of its implausibility.

As in Searle's "Chinese Room" (and also, though to rather different effect, in Jackson's "Mary the Neuroscientist"), you have proposed a wildly infeasible thought experiment, and then decided to dismiss your rational conclusions about the outcome because... because it does not accord with your intuitions about normal situations? Because it is infeasible? This simply is not the way to get to the truth of any matter.


>If so, what if you removed the brain altogether and just performed the computations directly on paper, is the paper conscious? The paper would definitely still be intelligent, it would possibly come up with interesting inventions, claim that it was sad or happy etc, but would it actually be sad?

One might argue that it would be sad, the way a human is sad, but lack the ability to immediately experience it in real time, while we can.


Isn't "real time" a concept that is somewhat biased by the frame of reference, i.e. it would just have a different concept of "real time" and consider our "real time" to be very quick?


I very much doubt that a paper simulation could keep up with real-world events, and would immediately fall irrecoverably behind them, but if that simulated mind existed in a corresponding simulated world, then that world's pace would define realtime as experienced by the simulated mind.


In that context it’s the rules that define how you update the paper that give the symbols on the paper meaning. And in that context sad is both the symbols and system that gives those symbols meaning.

It’s the same situation as people playing a game of poker or go fish. The cards in their hands might be identical, but the game is what gives those cards meaning.


Your post sounds like it may be inspired by the work of Daniel Dennett and his riposte to the Searle Chinese room thought experiment.


It’s actually the China Brain scenario


Woah, that's awesome! I had no idea this was a thing :) Thanks!


Isn’t that just Searle’s Chinese Library?


Of course there is no evidence available to you that anything is special about my consciousness. And vice versa.

Are you a solipsist or not? If you don't believe in the consciousness of other beings, I can hardly argue with that on rational grounds, but you can't disbelieve in your own consciousness.


I think the point is not that you disbelieve in your own consciousness but rather that you misinterpret what it is. It must exist: cogito, ergo sum, but this does not imply that you understand its nature.


Exactly, it's a mystery.

It's also the basis for humanism, since if nobody has consciousness, there's no reason to keep humans around.

So it is existential, untestable and unprovable.


Because unlike everything else in the body, we don't know what sort of material explanation would possibly work. Instead, assertions are made that subjectivity is identical to certain kinds of brain activity. What is needed is an explanation of how brain activity is conscious. How does neural activity result in the feeling of pain or the experience of seeing a color?


> Because unlike everything else in the body, we don’t know what sort of material explanation would possibly work.

I think that the closest we have to an explanation for something capable of thinking are CPUs and we're just not sure what the right software is to create independent thought.

This might imply that traits like universal computation are necessary (but not sufficient) for thought.


> How does neural activity result in the feeling of pain

We can design synthetic chemical painkillers that work, suggesting that for this we actually know what's going on. You can actually look up how these neurotransmitters etc. work.

If you somehow want to suggest something else is going on you really need to support that argument extensively.


> You can actually look up how these neurotransmitters etc work.

Which doesn't explain having an experience of pain, only that certain neurotransmitters are involved which can be suppressed.


Sure it does, the signal is the feeling. You are your body, so when your body sends a signal that is you feeling something.


But how come people have different pain thresholds then?

I did a bunch of research on pain some years ago, and in order to "standardize" it, one would apply a treatment when the person said pain was greater than 7 out of 10 on a scale.

If pain were measurable "objectively", this wouldn't be necessary.


People are not uniform templates using identical DNA and histories. Our bodies are different which means our subjective experiences are different. That’s both consistent and expected.

As to measurement, we can design and build modern CPUs but can't inspect them in full-speed operation. Similarly, mechanics often need to take something apart to test it. Living organisms present a more complex challenge, but the similarities are obvious.


Yes, but we cannot measure pain "objectively" at all. The equivalent would be if we could build a CPU, but could only determine how fast it could process by asking it to rate its processing power on a scale of 0 to 10.


Nothing is stopping a more technologically capable civilization from directly measuring our pain. You’re describing a purely technical limitation.

We have plenty of ways of directly measuring people's brains and CPUs' energy use etc. The issue is speed and resolution, which is hardly a philosophical limitation.


> But how come people have different pain thresholds then?

Different signal sources and also differently configured/trained systems receiving and interpreting those signals.

But practically, it makes more sense to use the instrument already evolved and trained by their own experience to measure their pain: the person themselves.


> Different signal sources and also differently configured/trained systems receiving and interpreting those signals.

I think that this is a little too general a statement, given that it could be used to describe almost any human variability.


"a cardiac surgeon can still know far more about your heart than you do"

You're conflating the knowledge surgeons have with the subjective experience of a pounding heart. Surgeons study the heart qua plumbing, not as object of awareness related to primal feelings. Noting that physical changes in the brain cause changes in consciousness does not explain subjective experience.


He seems to have caught the issue dead on, since you seem to be arguing you know more about pumping blood than the surgeons based on a feeling - exactly what the people arguing consciousness can't arise from biology are doing. They feel consciousness is some big important thing that can't be produced conventionally and then resort to the equivalent of religions to explain it - feelings arrived at with no scientific method that don't really hold any weight.


"you seem to be arguing you know more about pumping blood than the surgeons based on a feeling"

No, I did not argue that at all. I said that there are multiple types of knowledge regarding hearts. Surgeons know one type in depth. The subjective experience of a racing heart is different from physical knowledge about how the heart pumps, and even if you know in depth the chemical/physical reactions that lead to and accompany a pounding heart, you do not thereby know that subjective experience.


By your definition the subjective experience says nothing about how a heart actually works.

Except that's not quite true: the subjective experience tells you there are pulses which increase in frequency under stress etc. It's just that the surgeon understands what that means, where the subjective experience is less useful. Further, the subjective experience fails to separate the causes of a fast heart rate from the response of a fast heart rate.

This should suggest that the experience of consciousness is of minimal value when trying to understand it.


"By your definition the subjective experience says nothing about how a heart actually works."

No, that's a reductionist assumption. When I say "subjective experience" I am explicitly not indicating its quantifiable aspects. I am indicating what Nagel talks about in "What Is It Like to Be a Bat?", the what-it-is-likeness, the phenomenological aspect. Not the increased pulse, not the hormones being released, not the neurons firing in the brain that correspond to a feeling of anger--the anger itself. An idealized observer enumerating all of anger's physical correlates does not give that observer the experience of that anger as it is felt by the person experiencing it. There is a difference between understanding something and living something.


Only responding to a tiny piece of your very good comment.

In the Maier/Rechtin book on system design, they spend a few pages talking about the word "emergent" and how "weak" it is. They say that we use the word to denote things that are almost certainly explainable, but for which we do not yet understand the mechanism of the behavior.

They use (iirc) the example of a black-box system that produces a whistle at seemingly random moments, which observers interpret as "emergent" until the mechanism reveals itself on closer inspection. Their point is essentially that the description "emergent" is a cop-out, and that so far nothing we've described as "emergent" has actually turned out to be inexplicable.

I often hear smart people saying things like “consciousness is just an emergent feature of our brains”, in a way that seems to imply “that’s that”, when in fact it’s not an explanation but rather an admission of ignorance. Admitting ignorance is fine, but ignorance is not an explanation.

I personally don’t think we’ll understand the mechanisms that underpin consciousness within our lifetimes, but I certainly don’t think consciousness is inexplicable.


"Emergence" can certainly be abused in this way, but it is nevertheless a real, useful and important concept.

Take Darwin's theory of evolution, for example. Evolution is an emergent theory in that a purely reductivist approach to biology will not find a particle or field of evolution. This does not mean either that biology is inexplicable by physics, or that evolution is pseudo-science.


In your example, I don't think that "Evolution" matches the popular definition of "emergent properties" (i.e. properties that lack concrete explanations).

Evolution is natural selection (a well-understood mechanism) applied over time. The divergence of species, families, and genera "emerges" over time, but I think we're agreeing that this is a different use of the word.

To be clear, I'm not saying that words like "emergence" have no use, I'm just recommending that we be careful using them, lest we devolve to being satisfied by increasingly hand-wavy ideas.

In situations where we're aware of our own ignorance, we should own up to the ignorance directly, instead of describing the systems as having "emergent" (i.e. tautological) properties.


> In your example, I don't think that "Evolution" matches the popular definition of "emergent properties" (i.e. properties that lack concrete explanations).

I do not know whether that is the popular definition, but whether it is or not, it is simply wrong.

My usage, with respect to evolution, is the usage that counts in discussions of consciousness. It is the valid response to naive misconceptions about what materialism entails, such as in Searle's dismissal of the systems reply to his Chinese Room argument.


I think we've been talking past each other a bit. If I'm continuing down that path, I'm sorry...

My point in responding to your point on evolution is that we concretely understand the mechanisms by which evolution emerges. If you cut out the natural selection, you have no evolution.

We can use the word "emergent" to convey that the web of actions that leads to evolution is highly complex. But, for evolution, one can still isolate those actions into relatively orthogonal, self-contained narratives (mate selection, disease, symbiotic relations), that come together with the effect of evolution.

In contrast, we do not concretely understand the mechanisms by which consciousness emerges. We do not have a good methodology to break down consciousness into its constituent parts, such that you can union them iteratively and end up with consciousness.

To be clear, I'm not a materialist. I don't think you need to be a materialist to think that the phrase "emergent behavior" (as commonly applied to consciousness) is a cop-out explanation, that we shouldn't be satisfied with.

I do think it's possible to construct those orthogonal narratives, and union them to get consciousness. Again, I don't think it'll be accomplished in our lifetimes. I don't think we have the language to describe it currently. But I do think it's possible, and in the meantime, it's misleading to use phrases like "emergent behavior" to smuggle out that possibility.

To be clear, I'm not taking issue with your use of the word emergent (which is more nuanced), but with the OP's. And it's not the OP's fault, because I've seen it used identically elsewhere.


You are right; I see what you mean, and I agree that just saying, as some people do, that consciousness is an emergent phenomenon, does not explain it, and does not avoid the fact that there is a big hole in our understanding here.


The answer, of course, is that there is no consciousness: if you try to precisely define what consciousness is, you'll end up with increasingly absurd "you know it when you see it" kinds of definitions. Objectively there isn't any "special" consciousness; it's just information processing systems, simple or complex.


>The answer, of course, is that there is no consciousness: if you try to precisely define what consciousness is, you’ll end up with increasingly absurd “you know when you see it” kinds of definition

There's nothing absurd about “you know when you see it”. Those are just things we don't have (or can't have, or don't yet have) good definitions for, but do exist.

"Family resemblance" [1] classes of things are often like that.

"What is a game", is a classical example. Any definition you can give can be violated, including "having a goal", or "not having real impact" etc. And yet we know a game when we see one. And there's lots of other such distinctions.

[1] https://en.wikipedia.org/wiki/Family_resemblance


While such definitions are good for casual conversation, they are not good for scientific discussion. Otherwise you end up with nonsense like "my doorknob has consciousness"; well, let's define what consciousness is first.

With "consciousness" in particular, having a good definition has become important, since there are a lot of discussions about "is AI conscious?" which might in the future dictate policy decisions.


Let me try an easy definition: consciousness is that which makes you get out of bed in the morning and get something to eat. Life is a series of experiences and choices and something needs to make those choices in such a way as to continue life.

If it weren't for consciousness your body would not survive, or grow to adult size, or reproduce. Of course, 'get something to eat' sounds simpler than it is in reality - you would need to be able to see, walk, grasp, understand the environment and objects at your disposal, shop, cook, earn money, be a functioning member of society - the whole bag.


But all organisms "get out of bed and get something to eat", yet we don't pretend that bacteria have consciousness. If you go down this road you'll have to decide whether apes, dogs, crocodiles, sharks and ants have consciousness and make an arbitrary cut-off somewhere along this path.


> yet we don’t pretend that bacteria have consciousness

In my view, life is a process that is governed on the grand scale by evolution (species adapting to the environment) and on the individual scale by consciousness (individuals adapting to the environment).

Bacteria do both, like humans. Some AI agents also do both - evolution and learning, such as AlphaGo, just that its environment is a Go board.


If consciousness is not emergent from something - not "fully explainable in some mechanistic way" - what type of explanation for it would satisfy you? If you're not willing to brook the idea of breaking it into smaller components, then you've abdicated understanding it at all - it's just an ineffable feature of the universe we'll have to live with, some shadow of the spirit world that has zero effect on any physics calculation. That seems weird to me, and lazy to boot.


I don't know, that's the problem :)

Btw, I would imagine that our experience of consciousness is emergent from something else, but the question is what that something else is. Is it nothing more than mass, charge etc. or is it something else?


> Is it nothing more than mass, charge etc. or is it something else?

It's more like chairs, beds, phones, cars and people. Consciousness is emergent from the environment. We learn to interpret visual stimulus by seeing, we learn to interpret sound by hearing, we learn how to relate to the world by being in the world and having consequences to our actions. Our brains grow on the rich sensations they receive from the body, world and the other people. Looking for the mystery of consciousness inside the brain is a mistake, we should look at the whole game instead. The brain is just doing the learning and acting part, but the game is much more.


Big corporations have a mind, but they aren't conscious in the way we are conscious. Lot more minds exist in the world than consciousness.


this is actually one of the problems I have with the theory that everything is conscious. The electron, like the company, is actually an abstraction and part of a model. We don't even really know if an electron is 'a thing in itself'. Maybe everything is just strings or quarks, or some other smaller particle or just the manifestation of some field (sorry I'm not a physicist in case I'm butchering the analogy).

Even every individual human could be said to be some sort of zoo of 'consciousnesses'. Panpsychism posits that consciousness is essentially smeared across the universe like a sort of paste, and if consciousness applies down the ladder I don't see why it doesn't apply upwards.


I think what you're lacking, if I recall correctly about panpsychism, is that only self-organized systems are conscious - not everything.

A computer, birds nest, a gun, a space ship: such things aren't self organized, therefore not conscious.

A cluster of cells, a planet, a star system, a galaxy, they are self organized therefore, permeated by consciousness.


Everything is conscious and nothing is conscious are equally valid as hypotheses.

Who is it that is trying to establish a definitive answer to something unanswerable?


Worth considering, I believe, is that many people do not have an inner voice.

For those of us who have conversations with ourselves, who can explore our consciousness through dialog, it is easy to assume that all humans can do this.

Many cannot.

Many have no internal voice.

I find many folks are asking "where does my internal voice come from," and yet many conscious individuals lack it. "I think, therefore I am" has a different meaning when that question cannot be asked through internal monologue.

For myself, the concept that consciousness is an emergent and mechanical outcome became easier to accept after considering consciousness without monologue.


I absolutely agree, to the point of considering almost any materialistic (as well as theological) debate about consciousness pointless.

Just for fun, however: what if consciousness actually is just a manifestation of mass? (I've in fact heard a hypothesis that it's a manifestation of gravity.) Just trying to imagine the consciousness of a mountain exceeding my own by orders of magnitude is an interesting experience. I imply that a consciousness of a different order does not necessarily have a function of acting or communicating in a way we could consider conscious.


Saying consciousness is emergent is definitely weird, but the problem is, so are all the other possible explanations. If you're trying to avoid supernatural explanations then emergence is the best of a baffling bad bunch IMHO.

(Then I would back this up a bit with "once you start investigating it, a lot of aspects of consciousness are quite different to what people would intuitively expect" and that makes it all a bit more feasible. E.g. saccades and the fovea are fascinating and counterintuitive, what else are we getting wrong?)


Is it though? Until we have more information, all these explanations are equally unsubstantiated. If you're speculating without the data and insight needed to actually understand it, then the proposal isn't any better because it feels less supernatural. The actual explanation will probably seem really weird and supernatural to us now, since we don't have the data and experiments to understand it. Today, quantum tunneling is still weird, still hard to wrap your mind around, but it's a phenomenon that we understand well enough to work with. We try to avoid it when making a CPU, and try to make it happen when we use a scanning tunneling microscope. But 250 years ago, trying to explain a scanning tunneling microscope that just appeared in the world would have seemed impossible without invoking some supernatural-sounding explanation. We have consciousness, and we don't know where it came from yet. Maybe there's a consciousness field we'll find when we figure out how to unite gravity and quantum mechanics, maybe there's an emergent property we'll understand once we can map a brain's functions and do some data science on it, maybe there's some explanation that we can't even imagine yet, but until there's data, none are less supernatural than the others.


> If you're trying to avoid supernatural explanations then emergence is the best of a baffling bad bunch IMHO.

Emergence is a general word, too general. I think what we need here is just self-replication. Self replication leads to competition, which given enough time leads to the evolution of perception and ultimately reasoning.


That's traditionally called a soul.


I see absolutely no evidence or suggestion that consciousness is anything other than the summation of changes that happened in our brain, in combination with the input devices we naturally have. You have decades' worth of changes that occurred to get you to this point, and it's very clear and easy to see how much _less_ conscious you were when you were younger.


I agree that consciousness is correlated with this, but having a lot of experiences over years does not explain to me why I feel things.

Similarly I can definitely see how there would be a mechanistic explanation for intelligence (basically just build a machine that acts intelligently, perhaps even claiming that it's conscious), but there's a distinction between appearing conscious and actually feeling something.


The subjective feeling is just a brain state. The ‘why’ is due to whatever circumstances resulted in that brain state.


But one can definitely ask how something seemingly as complex as a "brain state" can be directly experienced as phenomenology. How is it that we can so easily describe some things about our brain states that we'd never realize by physically looking at what brains do and how they behave?

That's the "hard problem" of conciousness stated simply, and the answer to it would seemingly have to involve some quite direct access to some part of brain state physics - meaning that the physical properties involved are quite "basic" in some sense.


> How is it that we can so easily describe some things about our brain states, that we'd never realize by physically looking at what brains do and how they behave.

As a software developer, this should be simple. Programs have their own state that you can understand without knowing the exact formulations of electrons in the processor that ultimately make up that state.

X = 5, sure, but good luck figuring that out by looking at the motherboard of a running computer.
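To illustrate the parent's analogy with a toy example (assuming nothing beyond standard Python): the statement "x is 5" is a fact at the program's level of description, while the physical realization underneath is machine-dependent and irrelevant to that fact.

    x = 5
    print(x == 5)  # True on every machine that runs this: program-level state

    # Implementation detail: where CPython happens to place the object this run.
    # The number varies between runs and machines and says nothing about "x is 5".
    print(id(x))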


So why are some brain states conscious and not others?


It would be impossible for the organism to operate effectively if they were conscious of every single thing going on in their brain at all times. In fact many things you aren't normally conscious of, if you focus, you can become conscious of for the moment.


There is no reason to expect that the self-aware conscious mind has full access to everything that is going on in the brain. In fact, from both an evolutionary and systems perspective, that would be extremely surprising.


Good luck getting to the roots of all your feelings when they can be based on any number of inputs and changes over who knows how long of a time.


A sleepwalker is not conscious, yet you can converse with them, ask them questions, get answers, you can tell them to do things.

In every external way they appear conscious, yet they are not. They are not aware of their own existence.

So consciousness is clearly more complicated than just "summation of changes that happened in our brain", there is something extra there, that we don't understand.


Perhaps they're simply not forming memories of their own existence. Can you ask a particular sleepwalker their own name and get an answer? That might imply awareness of their own existence.

It seems most people want to make consciousness out to be more complicated in some way.


Ever heard of amnesia? Works the same way.


> but claiming that it is fully explainable in some mechanistic way sounds suspect to me.

What's the alternative? If it's not "fully explainable in some mechanistic way" then you need a mystical component to the explanation?

That is itself suspect, exceedingly suspect, to many people.


The alternative is that current metaphors are wrong and new metaphors are needed.


Will those new metaphors be "some mechanistic way" or not?

Proposing new mechanisms is all fine and well, but mix in mysticism and you lose respect from all of science.


As much as I'd like to believe that consciousness is unique, I don't think we have to look that far to show otherwise. There's an abundance of mental disorders that affect consciousness. Take for example something as (relatively) mundane as ADHD. Anyone with it that has a reasonable amount of insight will tell you that they are almost entirely driven by attention, which is the crux of ADHD.

I'd argue that "consciousness" is the illusion that's created from attention, which you're ultimately a slave to. While I don't have empirical evidence, considering how difficult it would be, I believe that attention is the heart of what we consider consciousness. What your attention focuses on is built on nature and nurture. Your attention is grabbed by what you're "interested" in and what you deem is important. It guides your thoughts, your perception, your internal representations, your attitude, your decisions, etc.

The illusion is that you have control over your attention. Again, anyone with ADHD and insight will conclude that you don't. If your mind is a computer, then attention is the cursor that guides interactions with it. However, we're not in "control" of it, since basically what we know of as the self is the mouse cursor of the mind. I don't think we move the mouse; I think that's the illusion.

Me typing this is simply a product of my attention and it leading me to the conclusion that I should type this because this topic and the representations I've constructed internally are significant to me as a result of my education, life experiences, genetics, etc.

At least, that's my opinion. I definitely don't want to come off as an authority on consciousness. I don't think anyone is (yet). Despite my view, I do believe we can uncover the mystery of consciousness from mere perseverance, even if consciousness is emergent.

---

Someone has already proposed a thought experiment, but I propose an alternative. Imagine all humans (and living creatures in general) as cells and neurons of an even greater being. What if what we understand as the universe is actually just the composition of a larger mind? As cells, would we comprehend or know that? Or would we assume we're independent beings? The infrastructure we build, the roads connecting clumps of societies, the internet connecting all humans together, etc.

Who is to say that we're not simply building more complicated versions of what we would consider synapses between neurons, nervous systems, circulatory systems, etc? Perhaps the progression of humanity, and the universe at large, is actually just a small organ during the gestation or development phase of a much larger being and we are completely unaware?

One notion that comes to mind is fractals. We see it everywhere in nature. What if consciousness does emerge from complexity and what we consider reality is just a single resolution of a much more complex and larger fractal?


For anyone interested in this kind of ponderings I strongly recommend Consciousness Explained by Daniel Dennett.

It may be the best book I've ever read.


The difference is that charge, spin etc have their own effects/manifestations, whereas consciousness does not cause anything.


You're not even wrong: it's logically impossible to derive temperature from velocity either, because temperature is defined as a fundamental attribute of matter. You can only quantitatively connect them, and such a connection is seen as an explanation in a mechanistic way.
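For what it's worth, the standard quantitative connection alluded to here is the kinetic-theory result for an ideal monatomic gas, where temperature is proportional to the mean translational kinetic energy of the molecules:

    \left\langle \tfrac{1}{2} m v^2 \right\rangle = \tfrac{3}{2} k_B T

Knowing the velocity distribution fixes the temperature, even though "temperature" and "velocity" live at different levels of description.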


Out of all the crazy theories about the electron, my favorite is the one-electron universe: that all the electrons and positrons are actually a single particle that goes back and forth in time, and positrons are exactly the same as electrons, just going in the opposite direction in time.

It neatly explains what antimatter is, why it exists, and why all particles of a given kind are exactly the same. Although it's a hard sell, because it predicts that all of the matter-antimatter symmetry breaking is not intrinsic or even spontaneous.

Also it makes it hard to wrap your head around how antimatter could form any macroscopic objects while going back in time. Or how it can behave exactly like matter while going back in time.


Closely related, especially given the context of consciousness, one of my favorite stories: "The Egg" (http://www.galactanet.com/oneoff/theegg_mod.html, or if you prefer read by Kurzgesagt: https://www.youtube.com/watch?v=h6fcK_fRYaI).


Possibly interesting fact: the writer of The Egg also wrote The Martian. Haven't read the book, but the film was great and much funnier than I expected.


My favorite version of this story is told throughout Logic's "Everybody" album. I highly recommend it.


Really profound content from a great channel. Another channel with related insightful content is "The School of Life" https://www.youtube.com/channel/UC7IcJI8PUf5Z3zKxnZvTBog

Panpsychism needs a new name. That word sounds quite alien. But the concept is familiar to humans in their "raw" state.

Young children sometimes befriend rocks, elderly people sometimes talk to their plants, and some tribes and pre-modern cultures widely believed there was life inside everything. It's a little disappointing that the Wikipedia article on it does not mention Australian aborigines, Amazon tribes, Hindus and more.


This is similar to my favorite silly theory about dogs. There is only one dog. Every dog you meet is the same dog. Whenever the dog dies, it is re-incarnated as another dog somewhere and somewhen.


What if you see two dogs meeting each other? ;-)


A while back I read 'Conscious: A Brief Guide to the Fundamental Mystery of the Mind' ( https://www.goodreads.com/book/show/41571759-conscious ), which attempts to promote panpsychism. It does a pretty good job of exploring the curious nature of consciousness, but panpsychism I think is just at the level of 'woo'. It makes the case that because we are having trouble fully understanding what consciousness is, we must fully engage in exploring ideas like panpsychism. While I generally support coming up with all kinds of ideas and exploring them, the proponents of this idea seem a little too convinced it's a real thing based on zero evidence; they promote it quite a bit through seemingly scientific speak, but use words like "may", "might", "possible" etc. to exploit the basic fact that we simply don't know yet, and to present it as a credible possibility.


You just shot down the book/idea (panpsychism) without providing a single constructive reason. Instead, you applied a derogatory label ('woo'), and attacked the proponents instead of the idea itself, saying: "the proponents of this idea seem a little too convinced...".


I thought I made it pretty clear they propose a whole bunch of possibilities in the gaps of our current understanding. The problem is the proponents make out that these possibilities are much more probable than any evidence suggests. The reason it is woo is that they start reasoning about things based on no evidence and present it as "knowledge". Like most woo, if you accept the axioms without evidence, then you can create a world of knowledge that seems logically coherent based on those axioms. That's sort of the nature of woo. It's not attacking the proponents, it's just my observation of the situation.


You're right, I was a bit too harsh. You indeed mentioned that they make strong claims without solid evidence, which is a valid criticism.

However, from what I recall, they don't present the ideas in the book as facts or established theories like your critique seems to imply. They make it very clear that this is uncharted territory and that there is a lot of science to be done. In fact I think one of the main purposes of the book was to convince the reader that there is in fact tangible research to be done in the subject, and questions worth asking.

A large proportion of the book is indeed spent convincing the reader that panpsychism is a worthy theory, yes. And despite their lack of empirical data (which has proven hard to collect in this area), they make logically sound arguments that the theory is at least worth further consideration.

This is how many scientific theories start, as hypotheses that seem logically sound but lack specific empirical evidence. These hypotheses then guide how we design our experiments. The book argues that panpsychism is a worthy hypothesis, and it does not do so by referencing any woo or pseudoscience.


What's the difference between the axioms of woo and the axioms of mathematics?

I'm genuinely asking. My understanding is that we also accept those axioms.


Mathematics is a bit different than science. Science doesn't claim truths; it says what the best description of things is, based on the facts, and the facts are evidence-based. When you skip the facts-and-evidence part and just claim things, then you are in the land of woo. At the basis of science are essentially the laws of thought which, like good axioms should, claim the smallest possible thing to reason from ( https://en.wikipedia.org/wiki/Law_of_thought ). Woo tends to claim large things, partially based on existing knowledge and partially based on things we don't know but that may be possible.


I think Tim Minchin put it best when talking about this topic in general:

"Science adjusts it views based on what's observed, faith is the denial of observation so that belief can be preserved"

https://www.youtube.com/watch?v=jIWj3tI-DXg


Axioms are only useful if they are widely accepted and they create a useful system. The modern axiomatic system of mathematics was created to support an already useful system, and its axioms are (mostly) based on actual indisputable realities of our world.

The main axiom of panpsychism, if I understand correctly, is that all or most objects of reality have a mind. Now, as an axiom in itself it is pretty imprecise and not self-evident at all, as it depends on the definition of "mind". Also, it doesn't seem to create testable or useful theories out of that axiom.


The axioms of mathematics were _chosen_, because they have properties that are immensely useful. You can decide to use other axioms, and get different results, which may also be useful.

https://plato.stanford.edu/entries/settheory-alternative/


Two things that don't fit:

1) Even if electrons have consciousness, why should it be linked to our consciousness?

2) Why is decision making the essence of consciousness?

The Schroedinger equation is just a model, not reality. There doesn't even have to be an undecided state in reality. But even if the universe is conscious and decides in every moment, that doesn't explain our consciousness. If everything is conscious, in ever-increasing density, why don't we have several consciousnesses in our head? Why is our stomach not conscious?

And even if we are not free to make decisions, we could still be conscious in the same way that we watch sporting events without intervening. Actually I would argue that we never decide. We always 'choose' the most preferred option that deterministically depends on our state of mind. And yet, we are conscious.


We do have several consciousnesses in our head, which end up acting as a collective and then the left brain interpreter post-rationalizes the behavior as if a single mind is at work. That’s why we act against our own decisions so often.

You can see this demonstrated in people with split-brain syndrome, where the brain halves are unable to directly communicate and you can catch them out by confronting the speaking left brain with conflicting actions by the right brain. The left brain will invent an explanation on the spot.

https://physics.weber.edu/carroll/honors/split_brain.htm


Another example would be simple dreaming. The "you" experiencing the dream is indistinguishable from a conscious being (normal you) at least to itself. It has thoughts, feelings, sensations, and experiences. There can also be other people in your dream, and you can sometimes converse with them. Yet (unless you're dreaming you can read minds) your dream self can't tell what these other dream-people are thinking, even though they're all part of your mind. And they can be indistinguishable to you from conscious beings.


> your dream self can't tell what these other dream-people are thinking, even though they're all part of your mind

That's a great proposition! Nevertheless, I'd be more inclined to believe that these dream-people are not really thinking, and most likely don't have consciousness, as they're simulations of our visual perceptions, slightly modified in some sort of dynamic/interactive manner and mixed together with our memories and recollections.


Yet you (your main dream self) perceive them as conscious. They pass all the tests for being real, thinking beings. They respond like real people, at least much of the time. So how can one tell whether they're thinking or not?


Do people with split brain syndrome have an internal “stream of consciousness”? Or two? I’m not sure what the right word/phrase would be but I’m talking about our internal monologue or narration.


It's complicated. Corpus callosotomy, also called commissurotomy, to relieve seizures was not designed for the purpose of creating completely non-communicating hemispheres. Every patient is different. And so on. There is a long list of complicating factors and a longer list of possible factors. My guess is that it adds up to it being really difficult to make two truly separate consciousnesses, and that observations of these patients see interesting effects, but the patients are still one mind.


A fun speculation about similar things is found in the Neal Stephenson novel "Anathem," where consciousness is a manifestation of the many-worlds interpretation of QM.

The notion in the novel is that most "forks" are imperceptible, but forks in a brain result in outsized impacts on the physical world due to the tight interconnectivity of its neurons / circuitry (i.e. the normally minuscule difference in electron spin could cause an entirely different neural pathway to be followed, resulting in the conscious entity taking different actions and thus greatly impacting the world in tangible ways). It is the susceptibility to this cascading effect that somehow results in "consciousness."

It's probably baloney, but it's a fun read with some interesting things to consider :)


> The Schroedinger equation is just a model, not reality.

Maybe.

> Why is our stomach not conscious?

When I am hungry it certainly seems like my stomach is conscious. Indeed, they have found neurons in the digestive tract.


I personally like Roger Penrose's theory (hypothesis?) of orchestrated objective reduction (Orch OR). He stipulates that in the evolution of the quantum state of a system, the collapse of the wave function IS a form of proto-consciousness. Essentially he is saying that wave-function collapse is how a system becomes 'aware' of the choice, and consciousness emerges from there onwards. A brain of sufficient/the right complexity is a connectionist structure that is responsible for perceiving and/or translating it.

What he's missing is the agent responsible for incorporating this in the human/animal brain, but some interesting candidates have come forward, like microtubules in neurons. It is all highly speculative, though; there have been some advances in showing that biology is not too wet, warm and noisy for a quantum state to persist without decoherence, for instance in photosynthesis.


Penrose's stuff is very interesting, and I always enjoy hearing him talk about it. But for me it is a soup with too many ingredients. Better to look at the ingredients separately, for example, the idea of microtubules maintaining coherent quantum information. That already is very interesting, and would make a good meal.


It's a few days late, so I don't know whether anybody will read it, but my thoughts exactly.

There should be significantly more research dollars spent on this, as there is so much unknown there. Just the fact that we still don't know how general anesthesia works and how we regain consciousness is in itself quite fascinating. IMO, in all this philosophical debate about the nature of consciousness we often overlook the basic starting points in front of our eyes: what mechanisms are necessary to sustain it. We need serious research on this, otherwise it will be dominated by pseudoscience BS like Deepak Chopra.


Spoiler: if you redefine "conscious" to whatever an electron is.


I think I read that somewhere on the side of a bottle of Dr. Bronner's soap in the shower. Although, every permutation of writing that could ever be written by a million monkeys at typewriters is on the side of a Dr. Bronner's too. Even this comment. #meta

Entities with too much consciousness can't handle their own consciousness and either go Warhol, invent the atom bomb, or make a startup selling silicone girlfriends because their bulging brains get in the way of their more fundamental drives.

Mental onany isn't as good as reality.


More like, consciousness is a matter of degree, rather than a binary yes/no.


Even materialists agree that consciousness is a matter of degree; panpsychists don't have a monopoly on that position. Humans are more conscious than mammals who are more conscious than fish who are more conscious than insects. But the materialist position is that any non-zero degree of consciousness requires a substrate capable of computation, and so you bottom out to zero long before you get to electrons.


So if I'm getting this right, the usual materialist position is that thinking is a matter of degree for stuff that can think, and just a straight no for everything else. While the panpsychists think it's just a matter of degree, full stop? No 100% unthinking allowed?


Great explanation. My thoughts approximately (but more eloquently).


I've been trying to formulate thoughts I have on consciousness that make sense in my head, but I haven't been able to communicate it effectively.

Basically, why is consciousness always attached to the same physical body? Why can't I ever wake up in someone else's consciousness? How does "my" consciousness know to come back into "my" brain whenever I lose it (through sleep or injury, etc).

The answer that I lean toward is that there is no such thing as you or me. There is only one consciousness and it is merely being filtered through each living (or perhaps nonliving) being in containerized modules.

So, to "me", it feels like I'm experiencing my own consciousness but in reality everyone is the same "me". You are me, I am you, etc, we are simply filtering consciousness through different atomic arrangements.

For example, let's say you read about a criminal who does a terrible thing and you can't imagine yourself ever doing that. But in reality, it is the same "you", only that your consciousness has been filtered through a different arrangement of atoms that has caused that "module" to act that way. It is the same YOU who committed that crime; all it took was a different filtering device to make you act that way.

Anyway, that's kind of what I'm thinking. I'm sure it's not an original thought, but I don't know what kind of philosophy this is called other than "one consciousness".


> Why can't I ever wake up in someone else's consciousness? How does "my" consciousness know to come back into "my" brain whenever I lose it (through sleep or injury, etc).

Why doesn't the fire in your car's engine suddenly appear in the engine of a car down the street?


This is why I say I'm not able to communicate my thought very well. Your statement is simply the obvious, this is not what I'm trying to get at.

It's more like... why did my consciousness decide to attach itself specifically to "my" body, and only my body? Is it REALLY only attached to my body? Each of us is simply a collection of atoms - as Carl Sagan says, we are the universe trying to understand itself.

I have consciousness AND I'm just merely made up of atoms, while every other person is also merely made up of atoms. Does that mean my consciousness could have randomly ended up inside any other "being"? And would that consciousness be the "same" me, just in a different body? OR would "my" consciousness be affected and come out differently if it were filtered by a different being? This is kind of what I'm trying to get at.

Someone else on this thread said something along the lines of: they think of consciousness as a field that permeates everything. This is along the lines of what I'm thinking. Consciousness is the same, we are all the same consciousness, simply filtered in different modules.


Pretty much this. Your "consciousness" is nothing but neural activity in your own brain. By definition, it cannot appear anywhere else but your brain.


That is speculation.

We have absolutely no idea how consciousness is produced from electrical and chemical signalling within a large ball of fat.

We know that it is (because we perceive our own), but we can't map from one thing to another.

If consciousness is merely neural activity in a brain, then why can't we simulate really simple brains?


Because really simple brains don't have consciousness; they aren't powerful enough to have the "mental machinery" needed for consciousness.

We know that brains can implement things like theory of mind, self-awareness and abstract concepts; these particular things don't require a human mind but they do appear only in animal brains above some level of complexity, and don't appear in brains so simple that we'd have the power to simulate them in 2020.


We don't know that. How would we ever observe such a thing?

Like the core problem of consciousness is that we experience it, but have no way of identifying it otherwise.


Reductionism is by definition less than a full perspective on life, humanism and depth of possibilities.


But what work is this notion of a universal consciousness doing for you? Why not just get rid of it and say that we are all just independent processes with psychological continuity that entails our experience of individuality?


> Basically, why is consciousness always attached to the same physical body? Why can't I ever wake up in someone else's consciousness?

Maybe you do, but there wouldn't be any way to tell because the brain only has memories of itself, you couldn't remember the swap even if it occurred. Our apparent reality does not rule out solipsism.


Consciousness is a physical phenomenon. You can't wake up in another body in the same way you can't just wake up in someone else's house.


I did that many, many times. That's why I don't drink anymore.


I had a similar experience with LSD, except it was other consciousnesses waking up in my body.


That means when we die we just slip into someone else and never really even know it because we have all their memories and experiences up to that point in time. Perhaps this process even happens all the time as we lose consciousness and regain it. Our bodies are merely antennas for tuning into the vast universal consciousness. Madness.

The takeaway from this for me is to be kind to any living thing as it could be you.


Sounds like monopsychism (https://en.wikipedia.org/wiki/Monopsychism).


Because your body is the record of your experience and existence.

That's why a lot of people think a human brain would never be able to be uploaded to a computer, and if you could do it, you'd not be yourself - you'd be something else.

You are the sum of your experiences, a build up of scars, traumas, and so on.

So according to this theory, you'd only wake up in someone else's body if that person had the same sum of experiences you had, yet that's impossible because only you are you - not even twins would experience this. Maybe that's why we value this experience so much: it's so unique and so intimate to ourselves that we hold on to it dearly.

You then have hints of collective (un)consciousness. Jung explored this concept with archetypes.


> So according to this theory, you'd only wake up on someones else body if that person had the same sum of experiences you had

I know that's impossible for many reasons, but which body would you wake up in if somebody made an exact, subatomically perfect duplicate of your body? Wouldn't the copy become just a twin having identical memories and believing he's you? Wouldn't you still only see the world from the point where you've been, rather than from where your copy has emerged?


Maybe he would be just like you up to the point the copy was made; after that you'd have different experiences (even if only by having a different perspective on the world), and maybe that's enough to have a sense of his own identity?

Maybe you'd be the same in a fraction of an instance in time?

Maybe you'd never be the same because the process of duplication is an experience within itself enough to develop a different self?

Who knows :D


You are never the same you from one moment to the next, and the mind adapts to deal with this change. So I would think the mind would adapt when downloaded into a new body. Certainly, your point stands, in that the mind downloaded into a new body would not be the same you, but I would argue that it might be enough of you to maintain the sense of you.


My point is that the separation of body and mind doesn't make sense in the eyes of some scientists. Like, some of your memories are bound to scar tissue.

Some back it by the change of personality after major trauma.

Moving a mind to another body is most definitely quite a trauma (maybe the ultimate trauma?), which raises the question:

What's the point of moving your mind to another body, if you'll cease to exist as you are? If you lose, like you said, the "sense of you".


You might be interested in the Hindu/Buddhist philosophy of Advaita Vedanta: https://en.wikipedia.org/wiki/Advaita_Vedanta


So now build out an experiment, see if you can play around with this filter device and see if you can find something clever. Step 2, invent something crazy, or start a cult. Good Luck!




Have you read this short story by Andy Weir? [0]

To me, it's like life is a conglomeration of matter built in such a fashion that it can act as an antenna, or a lens, attenuating and focusing consciousness such that it can be experienced.

[0] http://www.galactanet.com/oneoff/theegg_mod.html


Hang on, if there is only one shared consciousness, you would expect that you would be able to wake up in another person's body. It seems to me like that would be an inevitable consequence of that proposition. The fact that this never happens demonstrates, to me, that we do have a persistent, discrete, individual identity.


This resonates with me as I have a similar thoughts on consciousness. The way I try to express my thoughts on this is to imagine that consciousness is a fabric that permeates the entire universe, but there exist these kind of extremely dense pockets of consciousness (ie, within us and our brains) where the fabric is tightly bound. Every kind of system that exists within the universe naturally lends itself to being conscious. The more complex the system the greater the density of consciousness it yields. Perhaps we cannot perceive consciousness in most (simpler) systems because it is minuscule compared to the density of consciousness within our brains, although we do tend to get these spooky kinds of insights and intuitions into the higher-order consciousness that emerges from many brains combined, such as collective consciousness, gaia, mass hysteria, etc.


> Basically, why is consciousness always attached to the same physical body?

Because consciousness (or rather, subjective perception) is a physical phenomenon after all - it's causally linked to the universe in a way that means it can't just "jump" brains absent some kind of telepathy.

Identity is not really mysterious from a "hard problem of consciousness" POV; it's simply one phenomenology (one 'quale') among many, and experienced meditators can even turn it on and off at will. It's really only proto-perception and perhaps proto-cognition that probably need to be explained.


> Basically, why is consciousness always attached to the same physical body? Why can't I ever wake up in someone else's consciousness? How does "my" consciousness know to come back into "my" brain whenever I lose it (through sleep or injury, etc).

Maybe it's not attached to the brain, but rather produced/created by your brain?

> The answer that I lean toward is that there is no such thing as you or me.

I agree that there is no self. But that is different from consciousness.


I’ve got quite a scary thought about this: what if we are not only not the same consciousness after we fade away through daily sleep, etc., but we are constantly fading away in short cycles, like every “tick” (Planck time?) happening in the physical simulation of this world?

How would you be able to tell the difference?


This seems related to the concept of Last Thursdayism - if the world was created last thursday, how would you be able to tell the difference?

You have some memories of earlier times than last thursday, but there's literally nothing that prevents them from being indistinguishably fake; your consciousness could be existing in a simulation that started five seconds ago, and there's no way to disprove that.

But the reasonable answer to that is if it's not possible to tell the difference, then we define that to be the same consciousness; so that the answer to the question "if we are not only not the same consciousness after we fade away by daily sleep, etc.., but we are constantly fading away in short cycles" is "it's the same consciousness, period" by definition.


Why would it matter?


> Why can't I ever wake up in someone else's consciousness?

I've thought about this too, and my conclusion is that there's no way to know. Maybe we wake up as someone else everyday, with all the emotional/psychological baggage that entails.


If I recall correctly, you might like to search around for the term ever-present witness.

Don't have time to elaborate right now sorry.


Intuitively we would think that if an animal can move, meaning it has locomotion, it is NOT because motion is intrinsic to all matter (i.e. rocks), but rather because there is some bio-mechanical reason for it. Scientists could figure out how things move and the underlying cellular processes for obtaining the energy to do it. A proponent of this article might then say that all matter is in constant motion. But then we are not talking about the same thing. We are talking about directed motion to obtain food or some goal for survival. Atoms are not alive, so they don't need to survive and do not have goal-oriented motion. But then those same proponents would list off how electrons move according to laws that behave similarly to living creatures, etc., so it's some intrinsic thing. You can't win with those people. They will always have a response framed in some narrow definition that has no basis in what we actually mean by locomotion. That's a long metaphor.


I thought of one more, but it's less wordy and children can understand it. If you take Legos, with enough of them in a particular arrangement you can create a wheel of sorts so it rolls. Legos don't roll on their own, but they are necessary parts for creating the wheel-like behavior. Legos are boxy, so obviously they can't roll by themselves, and yet they can roll when put together in a certain way. This concept relates to nature in that atoms are building blocks that create the properties of things in a particular arrangement. It's easier to understand because we have no magic bias surrounding Legos, or sense of purpose or importance, in coming to that conclusion.


The mind-body problem is an interesting philosophical debate. It would be funny if our ancestors had been on to something the whole time with the various forms of panpsychism that have occurred throughout history. We tend to overestimate our own ingenuity and heavily discount the intelligence and natural intuition of the past. Not saying this is a credible theory, just an interesting recurring idea throughout human history.


My favorite approach[1] to the mind–body problem is a recent one, placing the interaction between the mind and body at actual primacy. Neither the perceived nor the perceiver can exist without the process of perceiving. However absurd, the things related can be seen as derivative of the relationship itself.

Resonates with zen, as famously espoused by Alan Watts[2]: “How does the thing put a process into action. Obviously it can’t. But we always insist that there is this subject called the knower. And without a knower there can’t be knowing. Well that’s just a grammatical rule, it isn’t the rule of nature. In nature there’s just knowing.” Also said[3]: “The grammatical illusion is that all verbs have to have subjects.”

[1]: https://www.magic-flight.com/pub/uvsm_1/imc_01.htm

[2]: https://www.alanwatts.org/1-1-2-not-what-should-be-pt-2

[3]: https://www.alanwatts.org/1-1-11-limits-of-language-pt-1/


No, they may well not be.

IIT is pure sophistry. Some years ago Scott Aaronson showed that, by their definition of the potential for consciousness, electronic devices employing certain kinds of algebraic error-correcting codes have it. This was meant as a reductio ad absurdum of IIT; apparently its promoters have decided to try and own the absurdity.

In other news - nautil.us is an utter garbage chute of an outlet, the new-science-journalism equivalent of the British tabloids, and I fully expect them to publish a pseudoscientific apologia for astrology or an evolutionary theory supporting the existence of Bigfoot and the Loch Ness monster - post-Darwinian cryptozoology, anyone?


Electrons may as well be <insert anything> as long as we cannot observe it. This is not interesting because Occam's razor shaves it off instantly.

A much more tantalizing hypothesis (also unverifiable by construction) would be to assume presence of a metaphysical being which alters the probabilities of quantum processes ever so slightly as to have a macroscopic effect, but is careful enough to never do it for processes under direct experimental observation.

One can invent a number of such undetectable constructs, it's a good entertainment, and does show the limits of the scientific method. Practical applications of these are nil, though.



Discussions of consciousness tend to frustrate me a bit. So many otherwise smart people, for some reason, simply do not understand the hard problem of consciousness. They'll say things like "well, consciousness is just a given pattern of physical stuff" but they do not realize what to me is an obvious objection - to say that consciousness is equivalent to a given pattern of physical stuff does not explain why that pattern of physical stuff is not simply present without any consciousness associated with it. I can imagine a being that is physically identical to a human in every way, but is not conscious - there is no reason to think that intelligent behavior requires consciousness. So what is the difference between such a being and a conscious human, if the two are physically identical?


You may be able to imagine a body physically identical to yourself not having a consciousness, but you cannot deduce from that, that this is something that can actually happen in our Universe. It may just as well be true that any exact physical copy of a body containing a consciousness is also conscious. In fact, the latter seems like a simpler assumption than the former.

My own take is that we, as biased humans, put too much value on consciousness. I have not seen any evidence that consciousness is anything more than a convenient way to organize a complex pattern of elementary particles for self-preservation. In that sense the discussion of consciousness is an interesting exercise in analogies, and while it may very well put us on a track to better understand everything from quantum mechanics to the mind, it still, to me, has a very anthropic vibe to it.


>You may be able to imagine a body physically identical to yourself not having a consciousness, but you cannot deduce from that, that this is something that can actually happen in our Universe.

Agreed, but doesn't that just further throw us back to the mystery of consciousness as something that transcends the physical? Why would a given arrangement of physical stuff necessarily give rise to subjective experience?

>It may just as well be true that any exact physical copy of a body containing a consciousness is also conscious. In fact, the latter seems like a simpler assumption than the former.

Maybe, but that assumption brings us no closer to understanding consciousness, since consciousness is so radically qualitatively different from physical stuff.

What does "a convenient way to organize a complex pattern of elementary particles for self preservation" have to do with subjective experience, qualia, etc.?


> but doesn't that just further throw us back to the mystery of consciousness as something that transcends the physical?

No, I think it does the opposite - it dissolves the concept of consciousness. "Conscious" or "unconscious" is a label we put on complex physical objects to separate them into categories by their behavior, in the same way we use "hot" and "cold" to separate things by the degree of their apparent temperature. So an atom-perfect copy of your brain would be just as conscious as you, and a program running on a hardware, both of equivalent complexity to you, would be just as conscious as you.


No, we understand this objection, we just don't think it's valid.

Your thought experiment postulates a pattern of matter that is identical to conscious matter but lacking consciousness. But the fact that you can imagine this does not mean it is possible or even meaningful. You give no proof that such a distinction can exist.

"Consciousness" is a label humans apply to a pattern of matter that behaves in certain ways, in the same way "liquid" is a label we apply to a pattern of matter that behaves in certain ways. It makes no more sense to imagine a being that displays conscious behaviour but lacks consciousness than it is to imagine matter that acts identically to liquid but is somehow not a liquid.

In your example, if there is no observable difference between the supposedly conscious and non-conscious beings, then there is, scientifically speaking, no difference at all. They are both conscious, in that they both exhibit the behaviours that we call "consciousness".

If you hypothesise that there is a difference, then the burden of proof is on you to demonstrate it, via measurement — again, your imagination does not count — at which point we can use scientific enquiry to find out exactly how this difference arises. However, since the difference is measurable, it is part of the pattern of matter, and so the outcome of this process will simply be a more refined definition of consciousness.


>I can imagine a being that is physically identical to a human in every way, but is not conscious - there is no reason to think that intelligent behavior requires consciousness.

I experience consciousness as having explanatory power for why I have conversations about the ineffability of consciousness. But if I can posit a being physically identical to myself but without consciousness, then, since the causal explanations for the duplicate's behavior are identical to my own, that means this being as well as myself have beliefs and say the things we do about consciousness, that are not caused by consciousness! My own beliefs about consciousness, supposedly incorrigible because they are directly caused or informed by consciousness, are necessarily not caused or informed by genuine consciousness, only some computational or informational imposter. But that means any reason I have to believe I am conscious, the zombie also has. It follows that either I am also a zombie or zombies are incoherent.


You’re aware, I assume, that you’re describing David Chalmers’ “philosophical zombie” thought experiment.

The “Hard Problem of Consciousness”— a term coined by Chalmers— is not universally agreed to actually be such a hard problem.

For anyone who wants to dig deeper into this philosophical discussion, Sean Carroll (who does not himself believe the problem to be so “hard”) has a very thorough and civil discussion with Chalmers about this on an episode of his fantastic podcast, Mindscape:

https://www.preposterousuniverse.com/podcast/2018/12/03/epis...


Yes, I am aware that I am describing what is sometimes called a philosophical zombie.

The "hard problem" is certainly not universally agreed to be hard, but so far I have not encountered any rejections of it that struck me as valid in the least bit. I'll check out your link, though.


If you’re curious, here’s Carroll’s case against the zombie argument:

https://nautil.us/issue/53/monsters/zombies-must-be-dualists...


> I can imagine a being that is physically identical to a human in every way, but is not conscious

I can't, and I think that's the central point. This means you're claiming the big c is not a physics process. But I suspect that the next sentence is a more accurate representation of your argument:

> there is no reason to think that intelligent behavior requires consciousness

To which I would say, what if consciousness is just a side effect of data integration? What if it's just what it "feels like" to be a system that has memory and external and internal data integration? Nothing magical about that.

> So what is the difference between such a being and a conscious human, if the two are physically identical?

Again, you're tacitly assuming that consciousness is not physical, thus assuming your earlier conclusion.


Your objection is far from being obvious. One objection to your objection is that your thought experiment is incoherent in the first place. Consciousness does not have be a property of a being in a separate ontology than material reality. In other words, consciousness is not something associated with conscious physical systems in a different reality, rather it is a property of these physical systems because of their particular function. So, it doesn't make sense to talk about something that behaves exactly like a human, but is not conscious. Something that is physically-indistinguishable from a human will also be "consciousnessly"-indistinguishable from a human.


My favourite book by Greg Egan is "Permutation City." It is a kind of reductio ad absurdum argument for exactly this: if you can simulate brains, then where is the consciousness in all of this? Egan's book takes this idea to its logical extremes. Very cool and mind-blowing when I first read it two decades ago.


Agreed. And I think it's even harder than that. When it comes to studying or thinking about consciousness - there is no way for me to prove that any person in the room other than myself is conscious. Given this, how can we even begin to talk about "AI consciousness" or, in this case, electrons being conscious?


> So what is the difference between such a being and a conscious human, if the two are physically identical?

One can imagine any number of fantastical creatures. A being identical to a conscious being is also conscious, there is no problem there.


I'm saying: imagine a being that is identical to a conscious being in every single physical way, but only in physical ways. Why would such a being necessarily be conscious?


The physical is all there is.


Do you not have subjective experience? If you do, then how is it physical?


Nobody knows how. But the fact that there's nothing else but fully known and understood physical forces pushing around the particles that make up my brain is rock solid science.


You're claiming that there's nothing but physical forces at work because we've yet to observe anything else. axguscbklp is claiming that subjective experience is such an observation.

To find out which position is correct we must attempt to find a link between physical phenomena and subjective experience. If we are not able to find such a link then perhaps they are not actually connected after all.

You are assuming such a link exists and has just not yet been scientifically articulated. For what it's worth, from my own review of the literature, I tend to agree we have been making progress on this front and will likely find a link eventually.


Consciousness could simply be an emergent phenomenon. Mathematical theorems and proofs are not physical things, yet we can create physical systems that can encode and prove theorems. Maybe we are conscious in the same way that these physical systems prove things without being made of mathematics.


For all you know, everyone except you is a philosophical zombie. Consciousness is subjective and you can only make observations about your own experience. Consciousness is an unsolvable problem because of that.


Forgive me if I'm being dense, but if that human replica still responds to stimulus in some way, and has basic wants and needs, then surely it is still conscious, just lower down on our human scale?


You can build a computer that responds to stimulus - we do it all the time - but that doesn't mean that it necessarily is conscious (that is, has subjective experience). If I build a computer that responds to stimulus, that doesn't necessarily mean (to paraphrase Thomas Nagel) that there is something that it is like to be that computer.


Well, sure, there is something it is like to be that computer, but it is only accessible to the computer, which does not pay a lot of attention to what it's like to be itself generally. When you ask me what it's like I can try to imagine becoming the computer, but in the process I would cease to exist, so for me there is only the objective computer seen from the outside.

The only reason "what it's like" seems to make sense with other humans is because we evolved this nifty faculty of empathy. But if I examine more closely what it's like to be you, I also find that at the point I'm you, and thus have subjective access to your consciousness, I'm not me anymore.

So, it seems that when looking more closely exactly the same problem occurs when asking what it's like to be a computer, or a human, therefore the "what it's like" question is not a valid argument to ascribe consciousness to one but not the other.


It is possible the computer pays as much attention to its own subjective experience as you or I but we have not given it the tools necessary to express itself.


From a logical standpoint, questioning whether Consciousness "exists" is ludicrous. Consciousness is the foundation of our capability to observe the world around us and make conclusions about it. Protons, electrons, atoms, molecules and other phenomena do not exist outside of our Consciousness and we cannot prove absolutely anything about the world that is not a part of our Consciousness. Therefore, if something is to be questioned, it is our strange desire to prove the existence of the only thing we are directly experiencing all the time - Consciousness. It's like a computer program becoming aware of itself and trying to find that awareness in the source code. It's not there.


When I used to commute via mass transit, my train load of people would all rush down the platform, into a plaza, and down only one escalator. You had people who would rush forward, people who would take a steady pace, and people who would hang back. I often times looked at the flow of people and it seemed to me it was extremely similar to a flow of particles.


At a high enough density, crowds act like fluids. At a high enough speed, so does a person.


There are a lot of different philosophical positions with regard to the mind-body problem, but very few attempts at a testable scientific theory about how consciousness works. Integrated Information Theory (IIT) is such an attempt.

Nature Opinion article: https://www.nature.com/articles/nrn.2016.44

Wiki: https://en.wikipedia.org/wiki/Integrated_information_theory


Note that IIT has been criticized [0] for being extremely gameable: it is trivial to build simple circuits that do absolutely nothing, but have more "consciousness" than all humans who have ever lived.

0: https://www.scottaaronson.com/blog/?p=1799
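
To make the gameability concrete, here's a toy sketch of my own (Python; emphatically not IIT's real Phi, which is defined over the minimum-information partition of a system's cause-effect structure, and not Aaronson's actual expander/error-correcting-code construction either). It uses a naive "integration" proxy: whole-system past/present mutual information minus the sum of the per-node mutual informations. A plain XOR ring, a fixed linear scrambler that does nothing interesting, scores higher and higher on this proxy as you add nodes:

    # Toy proxy, NOT IIT's Phi: I(whole past; whole present) minus the sum of
    # per-node I(past; present), computed exactly for a deterministic XOR ring
    # under a uniform distribution over past states.
    from itertools import product
    from collections import Counter
    from math import log2

    def mutual_information(pairs):
        # Exact mutual information in bits from equally weighted (x, y) samples.
        n = len(pairs)
        pxy = Counter(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    def step(state):
        # Each node becomes the XOR of itself and its right neighbour.
        n = len(state)
        return tuple(state[i] ^ state[(i + 1) % n] for i in range(n))

    def naive_integration(n):
        pasts = list(product((0, 1), repeat=n))
        presents = [step(s) for s in pasts]
        whole = mutual_information(list(zip(pasts, presents)))
        parts = sum(mutual_information([(p[i], q[i]) for p, q in zip(pasts, presents)])
                    for i in range(n))
        return whole - parts

    for n in (3, 5, 7):
        print(n, naive_integration(n))  # n-1 bits: grows with size, yet the circuit is trivial

The real Phi is more careful than this crude proxy, but the flavor of the objection is the same: simple, obviously mindless circuits can "integrate" a great deal of information.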


Scott's article devastated IIT. Lots of people stopped looking at IIT after it appeared. Here is a related question: has IIT ever led to an interesting neurobiological insight? Has IIT suggested an interesting research hypothesis that has been proven or disproven? In other words: is IIT an interesting scientific methodology? Short answer: no.


We cannot yet measure consciousness. We cannot say for certain, for example, whether a brain or an AI that we design experiences its own consciousness or not. And yet, they are made of all the same particles as you and I.

We can say that consciousness is something to do with the links in our brain, but how many times can I slice a brain before it is not conscious? Does a brain sliced down into single neurons not experience the world as a billion separate individuals? If I were to reconstruct the brain from its individual parts, would those individuals feel as though they had died as their consciousnesses merged?

I’m willing to still accept the traditional view that consciousness is a special sauce. That when we reproduce, it’s something that we can fork, but it is not common in the universe.

And I think that the true litmus test of the nature of consciousness is to effectively kill someone by freezing them (so that they are actually dead in the sense there is zero electrical activity) and then successfully bring them back to a conscious state. Then you will have paused a consciousness, and yet it did not leave. I do not think it’s possible. I think something like that ruins the special sauce and the resulting experiment may be able to survive (breathe, react at a motor level), but effectively does not live a conscious life.


> I think something like that ruins the special sauce and the resulting experiment may be able to survive (breathe, react at a motor level), but effectively does not live a conscious life.

But if we have no way to measure consciousness (and arguably, we can't, if it's truly a "special sauce"), you can't prove whether or not this reanimated creature is simply a mechanically moving shell of what was, a la every zombie movie, or the same person, or something in between.


Yes, I sat at night thinking about this for a long time after writing that post.


I have taken multiple university courses in theoretical physics, in artificial intelligence and in neuroscience, including on the topic of consciousness.

I can directly observe my own consciousness and I can tell that without a doubt it can not be explained by current physics and "complexity". It is completely different in kind.

If you believe that consciousness could arise just from complexity and information processing, then you are frankly mistaken. Whether you lack a consciousness or just the introspective ability to perceive it... or I suppose a knowledge of physics.

Now beyond saying "there is something going on", things quickly become much more difficult. Pan-psychism is a worthwhile avenue to pursue. If there is some phenomenon going on at a macrolevel, then we do expect it to be made up of things going on at a microlevel. Some kind of consciousness or proto-consciousness. But I am somewhat pessimistic. We don't really know what we are even looking for, so how can we expect to find it?

The attempts to connect wave-function collapse with consciousness are unconvincing to me. It seems basically like tying together two things we don't understand with each other based on... not a lot.


Until sufficient evidence is provided supporting one way or the other, we do not know whether our current understanding of physics is sufficient for understanding consciousness. You are claiming that it can't, so you now have a burden of proof which you, at the moment, cannot fulfill, and therefore your claim is unjustified.


You are trying to impose an impossible burden on me, and I have no choice but to reject it. I realize that from your point of view, the claim that "I can just feel it" is unsatisfactory, but that really is the essence of it all.

If you are sincerely curious, meditation and/or drugs may be a path.


If the burden of proof is impossible to fulfill then you have by definition made an unfalsifiable claim, so your claim can be discarded immediately. "I can just feel it" does not stand as evidence because it is subjective. The only justifiable position you can currently take is "I do not know."


> I have taken multiple university courses in theoretical physics, in artificial intelligence and in neuroscience, including on the topic of consciousness.

> ...I can tell that without a doubt...

> If you believe...then you are frankly mistaken...

I find it very amusing that you present taking 'multiple university courses' as some kind of argument or credential in favor of the following unshaken belief ("without a doubt"). Did any of your courses teach you to doubt things and especially to question things that you are certain about that clearly a lot of other smart people are not?


I'm presenting it solely to demonstrate that I'm not a crank or a schizo. The reason for my firm belief is that just as I can look up and see that the sky is blue, look down to see that the grass is green, just as plainly and immediately can I look inwards and see that I'm conscious.

Other smart people believing in emergence is easily explained: They can not see what I see.


> The attempts to connect wave-function collapse with consciousness are unconvincing to me. It seems basically like tying together two things we don't understand with each other based on... not a lot.

This objection seems a bit superficial to me. I think there are plenty of connections. The idea of measurement & observer is baked into quantum physics, hello? But there is a taboo in physics on discussing this stuff.

It's nice to see some brave philosophers try to promote panpsychism. I wonder if the physicists will ever become more tolerant of things like this as well.


Once you take this path, you can make up anything that you like. There's no way to test it. Except, I suppose, based on how you feel about it, how well societies work whose members believe it, and so on.


I agree with this. There are similar arguments that you cannot prove reality is real. Sure, but how does that help us solve the energy crisis or world hunger? It is philosophical musical chairs. Aristotle and Plato investigated how to live a better life.


I once read about the NDE of a person that is very well documented. I think it was Pam Reynolds. http://www.neardth.com/pam-reynolds-near-death-experience.ph...

It is hard to explain if you do not want to consider that our consciousness may transcend our mortal existence.


Getting you to consider that seems to be the purpose of the publication in which the article appears.


Since identical particles[0] are inherent in QM, I don't think you could even define what one "electron" is or what it might mean for it to be conscious. But consciousness (or rather, proto-qualia, which are the part of "consciousness" that really seems to demand some physical explanation) could subsist in more complex systems, probably involving a high-dimensional internal description as a result of some sort of stable quantum entanglement.

(Indeed, the fact that the sort of basic subjective experience we're familiar with seems to empirically exist as part of intelligent life is a rather compelling argument for non-trivial quantum computation of some sort being quite feasible in the real world.)

[0] https://en.wikipedia.org/wiki/Identical_particles


Back in the 1980s, I had an exam question in Michael Genesereth's AI class at Stanford, "Does a rock have intensions". I answered "no". I forget whether that was considered correct.

This was back when AI had more philosophy and less number-crunching. And didn't do much. "Common sense" and "intension" came up a lot back then, but "consciousness", not so much. There hasn't been enough progress on common sense and intension since then. Common sense is being able to predict the consequences of actions, actual or potential, with enough accuracy to get something done without damage. That's a concrete problem, and one that current AI systems do not do well.

We don't know enough yet to address the "consciousness" issue.


Rocks definitely have intensions. [1]

But whether or not rocks have intentions, I think the correct answer is "we don't know."

[1] https://en.wikipedia.org/wiki/Intension


obligatory xkcd https://xkcd.com/505/


I would be very amused if modern science discovered souls (your "operating system") and the spiritual realm exist. Much like dragons/monsters were discovered (but renamed dinosaurs). I already believe this so I am biased, but I do firmly believe that modern humans overestimate not only what they know but what they are capable of knowing and understanding. I think the realm of reality you interact with originated from elsewhere.

This and some of the topics I am learning about in quantum mechanics (like virtual particles) remind me of this:

"By faith we understand that the universe was formed at God’s command, so that what is seen was not made out of what was visible." -- Hebrews 11:3 NIV

Just sharing an opinion and an observation.


A virtual particle is only a virtual particle because we as humans implicitly have the particle view as fundamental. There's nothing special about a fluctuation in a quantum field being not long lived enough or not stable enough for us to recognize it as "a real particle".

So your quotation is right, but you need to go one level deeper.


Interesting and thank you. I have much to learn about quantum physics, but it is very fascinating.


That sounds a lot like Deepak Chopra's pseudoscientific woo.


Deepak studies a specific branch called, Ching Cha Ching, which enables him to make lots of money.


Worth watching a Deepak / Sam Harris interview, they tend to turn into - painful, but - great comedy.


Chopra annoys me too, but he is actually a good dude (IMO): the stuff he talks about is based on Vedanta, which is an old eastern tradition. And he tries to connect this with modern science. I say go for it.

I'd much rather put people like Oprah Winfrey or Tony Robbins on a woo-shitlist. But not Chopra.


No, he is not a good dude. He spreads pseudoscientific garbage. I don't care if it's based on some old tradition.


Scott Aaronson wrote about this before https://www.scottaaronson.com/blog/?p=1799

The title 'Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)' tells you what he thinks of the theory.

Some more discussion also at https://www.reddit.com/r/slatestarcodex/comments/7avunr/inte...


I theorize nothing is truly conscious. Everything reacts from a cause & effect "chain of forces" exerted upon everything in the Universe. I'm curious what we will learn when we can modify the human brain to be exactly the same as it was a year ago, for an individual that volunteers to be such a test subject. Would all his/her memories be reverted back to that very day? Or would the experiment result in a failure because our concept of how we react to everything compared to other things in the universe is flawed?


Although there are popular theories of the mind which include both free will and consciousness, the two aren't the same thing.

There is no reason why, in principle, you can't have a deterministic thought process and a consciousness that observes and experiences these thoughts from the outside.

There's also no reason why you couldn't have a deterministic form of consciousness that is a part of a deterministic thought process.


I think the two are simply illusions. What humans think of as free will, and similarly the sense of having a consciousness, are in reality just illusions at play. We can think of consciousness as similar to an outside external force that ends up causing an outcome, like one domino interacting with another domino and resulting in a cause & effect outcome. I don't think humans are responsible at all for anything of their own doing. Some things are in fact impossible, no matter how hard we desire them.

My previous comment is suggesting that it would be interesting to learn whether somehow the process of thinking isn't happening purely in the brain. Such as if we could revert all neurons back to a previous point in time, before external forces changed everything up. I like to theorize about the universe having some state that's stored somewhere outside of what's physically observable.


> I think the two are simply illusions

I've never understood this line of reasoning. If consciousness, in the sense of subjective experience ("qualia"), is an illusion, then what is being fooled? It seems to just push the question one level deeper without providing any insight.


It's our understanding/perception that is fooled. We feel like we are the conscious authors of our thoughts when really they are summoned from within and calculated for us. We feel like pilots but really we're riding coach.

I feel like there is a chance I'm misinterpreting your question though, apologies if so.


I think you're combining consciousness and free will into a single thing. Consciousness lies in the subjective experience (in fact it seems like you allow for perception); whether you have any agency or are just a subjective experiencer "riding along" with deterministic fate is a separate matter.


You make a good point. As for me, it's really hard to think of consciousness as real, because it's similar to other things I'm forced to observe, whereas previously I used to think I was somewhat in control. So I think thoughts or awareness are just like external forces, producing whatever happens from all the previous forces.

I guess I'm wondering if we would still think we're conscious if we someday prove we're no different than a character in the video game The Sims. We currently think of the characters in The Sims as not having a consciousness. But I guess we could be wrong if electrons somehow received a state from everything that's happening in the universe, making their experience similar to what we have. Thus, the theory of panpsychism. But to me that just means consciousness is an illusion in the traditional sense, because awareness has always felt like it requires more than a video game character being controlled by external forces.


Consciousness, at least to each of us individually, is demonstrably real.

Consciousness is a perception and perceptions are not illusions, even if we misunderstand what we are perceiving.

If I send you a message that says "I am not sending you a message", we can argue about what it means, but not that you got a message from me, no matter how much you trust (Edit: Or distrust) what I say to you.

Even if you don't believe you have consciousness, if you perceive you disbelieve in consciousness, then too bad: you have it.

--

In contrast, free will is a completely different and easily explained kind of phenomenon.

Questions and opinions about free will predate any discovery of evidence for such a thing. (There is no evidence yet!) That is a critical clue.

Free will is just a typical case of motivated reasoning. We believe some things without any rational support because they make us feel better about ourselves, the universe, allow us to focus on more practical matters. Not because they are true, or even a valid concept.

But understanding that belief in free will is a product of motivated reasoning also suggests that explaining it as evidence-free motivated reasoning will not settle the issue. Because people will continue to be motivated to want it to be true, they will find it hard to simply label it self-serving, often-useful irrationality and move on.


That’s just an illusion to me. The word “conscious” is a subjective construct, and awareness has always been treated as a necessary part of saying that consciousness exists. But if a person is, metaphorically, just a domino, like everything else encompassing his/her existence, then even where there is significant conflict between two parties it’s arguable that neither person is truly aware; they are just like characters in a cutscene of a video game, all of us acting out the story without any control over what it entails. The same goes for the discussion at hand. I think it’s a philosophical issue where human language makes consciousness seem more possible than it being an illusion, when that is in fact not the case.


Your domino reference suggests you continue to confuse free will with consciousness.

Free will is the idea that our minds might somehow have a self-generated, non-deterministic ability to make decisions that is unmoored from, and unconstrained by, causes that others can see or investigate.

This is either trivially not true (as in the "many worlds" deterministic interpretation of quantum mechanics), or trivially true in a weak sense (quantum mechanics non-deterministic interpretation) i.e. our decisions are controlled by quantum randomness, but not by any special property of our minds.

There is zero evidence of "free will" and yet people have conjectured they might have it for as long as we have records of people's introspection. This phenomenon is easy to understand: it is a typical example of motivated reasoning. We have a biological imperative to desire freedom of action and thought. "Free will" is the ultimate fantasy of freedom of thought. So regardless of the lack of evidence, people will be drawn to, and depending on their rationality, adopt a strong belief in, their "free will".

So we can dispense with that self-motivated "illusion" based on good science.

In that sense, there is an illusion as you say.

---

That leaves your point (if this rephrasing of your point is acceptable to you) that self-awareness in a functional sense does not necessarily imply an entity has the qualia of self-awareness, i.e. consciousness.

I also agree with that. Somewhere between us and a deck of cards saying things like "I am conscious" is a mechanism we could build that had some level of self-referential ability, could say things to you or me that looked like consciousness to us, but which didn't really understand itself. And therefore could not actually be conscious.

We can agree that there can be an illusion of consciousness. But note the illusion is to external observers. The limited entity itself has no subjective awareness of the illusion or anything else.

---

Which is why consciousness cannot be an illusion to an entity that experiences it. It is one of the very few (the only?) completely direct experiences we have, with no intermediary.

If you have the qualia of self-awareness, then you have it. There is no illusion of having it. If you don't have consciousness, you cannot experience the qualia of an illusion of consciousness.


I appreciate you writing all that for me. I'm now uncertain where I fall with my position. I don't really think I'm aware because everything is predetermined and even if randomness is thrown into the equation it doesn't matter. I know you think I'm mixing free will with consciousness but I just think a person cannot be aware if they're always going to process a certain way because of the starting point of the universe. It just seems like I'm a sim in a video game. Where everything is completely programmed out for whatever to happen and I cannot say that either a character in a video game or I, would be resembling what consciousness means for me. In any case I'm undecided now.


Thanks for your posts abellerose, I am in agreement and find this topic generally interesting. Free will hasn't been defined in such a way that I think it can even exist (that I've seen or understood).

Do you have any thoughts on how this impacts the way you see the world? Personally I find it something I'm almost keeping 'off to one side', and am aware of, but in the 'heat of the moment' I still feel like I'm an independent agent. Maybe it speaks to the self-deception at the core of every human.

Sam Harris (while being a figure I can take or leave) raised what seems a good point; that our penal systems should be therefore 100% geared around rehabilitation, not punishment.

Do you think there are other ways we can map this onto daily life?

I try and never be too certain of anything, given the veritable ocean of cognitive biases we have to fight against.. but this is a trait I developed before discovering the argument of determinism vs non-determinism.


I'm happy to encounter someone interested in the topic like I am. I'll further expand some of my thoughts in connection to the topic. I'm very interested in your own thoughts on the topic as well. I hope when writing about this topic that I further learn something.

As for me, understanding that free will is an illusion eventually made me realize a few things. Yes, some things are simply impossible. Not even a traditional concept of God could have, or create, real free will, and knowing so comes from understanding the logic of cause & effect. Not even true randomness can make us free agents. Although I think randomness is an impossibility because there's always a cause, and that can be passionately debated when it comes to quantum physics in our universe. But randomness doesn't matter with regard to free will being an illusion.

Furthermore, the idea of free will being an illusion made it simpler for me personally to understand harder concepts than before. Humans created the concept of good & evil, but we simply apply it incorrectly to what we observe throughout our existence. Nothing is evil or good in the traditional sense, once we realize that neither we nor even a traditional God should be labeled responsible for all the wicked or good things existing on earth. Free will is an impossibility for God just like for humans. My reflection in a mirror has the same control as I do. :)

Similar to how Sam Harris expresses that humans shouldn't be held responsible.

Thus, the concept confirms for me that everything experienced cannot be perfect from the start. It provides me with the wishful thinking that the universe could repeat, where the same conditions that made our life could repeat again & again, with improvements occurring each iteration depending on the starting variables. Maybe like a brute-forcing algorithm. That's the spiritual part I get out of understanding the concept of cause & effect. There's no reason for me to care if free will is an illusion because it has always been one, and if life doesn't repeat, well, I find that more perplexing than the idea of the universe repeating for infinity. In any case, I truly wish society moves towards rehabilitation instead of punishment. I'm agnostic, for context, and I think life is sacred. But I also think people should be able to commit suicide safely, with society allowing it, if they truly suffer.

Now, the concept of rehabilitation being superior makes the most sense if society desires to progress humanity towards the purest essence of equality. The concept of judgement & punishment being a deterrent is a fallacy, and just a one-sided convenience for the privileged: the ones born into a life where they'll never be judged & punished, while the misfortunate individuals are born into a life where they will eventually commit a crime. They needed help from the start and were used & thrown away like a piece of trash throughout life. I find the current-day justice system similar to the Salem witch trials. I think any hard-determinist judge forced into carrying out the justice system of today simply punishes an individual for the sake of the health of someone harmed, while the one harmed hasn't grasped that there isn't free will. Otherwise I think the person harmed would have a tough time placing the blame on anyone or desiring it.

Anyway, I think the knowledge of free will being an illusion is somewhat taboo. It goes against a lot of religious beliefs. I'm not really sure if religion is a good thing that came out of human existence or simply comparable to an illness. A lot of LGBT people have had their lives ruined by religious parents, and a lot of slavery was justified by religious belief. There are young people being molested by priests. Maybe religion needs to evolve into understanding that free will is an illusion?

Lastly, the concept of free will being an illusion may hinder how society functions under capitalism & a reward-based structure. People might get fed up knowing they're destined to be modern-day slaves compared to the privileged because of their birth, and this is only the case if society doesn't adapt to the knowledge becoming widespread. Basically, I think the current structure of society is wrong, knowing that everything revolves around who was born into good genetics, financial & family/social status, and environmental/educational access, and who was born misfortunate, while the media showcases the anomalies to delude people into thinking they will eventually get to the top as well.

Nevertheless, I'm a pacifist from this understanding and somewhat grateful it happened. I think the world will eventually adapt when it becomes common knowledge. I personally would have preferred to be born into such a society, instead of the current-day one, if humans collectively cherished equality above all else. My fear of death is gone as well, and I find things more interesting than in the past.



I think that you might be using the term "consciousness" in a different way from how the article uses it. Do you not have subjective experience?


I theorize nothing is truly tomato. There are just atoms, the space between them and forces that act on them.


That's actually a rewarding thought to think about.


i don’t believe this is doable. you would have to observe the state and after that rebuild the state precisely. that’s a pipe dream.

we like to believe we are in state A and transition to state B, but the brain (and our body) is one massive parallel biological computation which for all intents and purposes is irreversible.


Yes, we would have to send the same signals to the brain of the subject, for a significant interval of time, as it received a year ago. Also, all the external forces outside the body would have to align similarly to the past for the experiment to successfully answer the question. It's a pipe dream, but theoretically possible.


it depends. i do not think we even understand, in theory, how we would do something like this


I agree that the consciousness of people is not "special", but why the need to add in an extra metaphysical quantity of "consciousness"? I agree it is probably a spectrum, but I think it is more just an emergent quantity.

People are just relatively high up on the scale, so it's difficult to imagine what other types of consciousness would be. Yes, it is difficult to say what exactly is going on in a human brain, but I believe that, when you get down to it, it is entirely just a physical process, some very complicated electro-chemical interactions.


If we consider a large group of people as a whole, can we say that this group has its own consciousness? For example Congress is a bunch of old dudes with their own consciousnesses, but as a whole it makes decisions by voting and other complex internal interactions between these individual consciousnesses. For an outsider, this organization behaves like a conscious being.

Another way to look at this idea is that consciousness is when we impose a boundary condition on a "medium" made of conscious "particles". I'd even compare humans with a liquid. When it flows freely, it obeys some general fluid laws and its behavior is rather uninteresting. It's when you confine a part of this liquid into some boundaries, the captured liquid starts to demonstrate some interesting effects. Same with humans. When they act freely and barely interact with each other, they form a uniform mess, but once we impose various boundaries, such as countries, companies or various forms of organizations, humans captured in those boundaries are forced to interact with each other more often and those interactions are governed by various laws, and at that moment these fictional entities start to look like conscious beings.


At the very least, this seems to me to be a way to avoid the difficult and interesting issues of, and relating to, consciousness. I am not interested in debating whether it is possible to come up with some definition of the word "consciousness" such that one can say that even electrons have it; I am interested in how conscious, self-aware, theory-of-mind-holding minds work.


"some DILLUTION of the word "consciousness"

Fixed that for you and ... agree on all counts.

I feel like while we can't explain consciousness yet, it isn't hard to make well supported conjectures about what it needs.

First, what I experience and call "consciousness" involves an awareness, by a dynamical thing (me), of my environment (a gravitationally organized semi-2D place), of myself externally (i.e. controllable hands and feet), of my own internal responses to the environment (sensory perception, reinforcements, patterns), and of the fact that this awareness on my part seems to be encoded as information that is used by my dynamical response-generating apparatus.

I.e. Self-reflection does not seem to be segregated from other forms of information used to generate my responses: My awareness of self is not just in a spectator role, since it influences action.

The fact that self-awareness is not segregated from responses creates a wonderful side benefit: I can share my self-reflection by communicating to others about it.

If you took away any part of what I just described, I don't think I would continue operating in a way I understand as "consciousness" and my behaviors would certainly be very different. I don't think others would describe me as "conscious" either.

And none of these seemingly necessary complicated and recursive pre-conditions for consciousness shout out as reasons to include individual electrons as co-conscious in any way.

Question: Assuming an electron has consciousness, what experiment would you attempt to falsify that?

I am anti-electron on this issue. :)


> Question: Assuming an electron has consciousness, what experiment would you attempt to falsify that?

This is the question that the article ends on -- the philosophers are entertaining the notion and looking for a testable hypothesis.

We're talking about a 1-qubit Turing test, essentially. I'm not about to take a position yay or nay, (I'm a mathematician, goddammit).

For an apparatus, we could capture individual electrons in a ring of magnetic bottles. Perform tests on individuals in the chambers, advance them in a "shift register" fashion, and repeat. Do any individuals show behavior that differentiates them? A clear signal of individuality would be a real shocker.

One thing we use for macroscopic beings is a "mirror test" -- can an electron recognize itself in a mirror? Again, sounds absurd, but the general question is considered an acceptable proof of consciousness.


The thing is, science is full of things without definition. There is no definition for energy. All we have is a bunch of book-keeping. Every time we couldn't balance the books (i.e. conserve energy) we just added another column to the accounting. But what is energy?

https://www.feynmanlectures.caltech.edu/I_04.html
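
To give a minimal, made-up illustration of the book-keeping (arbitrary numbers, crude integrator, nothing more): drop a ball and watch the kinetic and potential "columns" trade off while the total stays put.

    # Energy as book-keeping: a ball falling under gravity. The individual
    # entries (kinetic, potential) change every step, but their sum stays
    # (numerically almost) constant at the initial m*g*h ~ 98.1 J.
    m, g = 1.0, 9.81            # arbitrary mass (kg) and gravity (m/s^2)
    h, v, dt = 10.0, 0.0, 1e-4  # start 10 m up, at rest
    for _ in range(int(1.0 / dt)):  # one simulated second
        v += g * dt
        h -= v * dt
        kinetic = 0.5 * m * v ** 2
        potential = m * g * h
    print(kinetic, potential, kinetic + potential)

Feynman's point, as I read it, is that the "total" column is the only thing we can pin down precisely; we have no idea what the quantity itself is, only that the books always balance.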


It is relevant to remember Kuhn's philosophy of science. Science is often thought of as an institution that converges to truth. Kuhn, however, argues this is not true: just as an organism gets away with evolving "good enough" features to ensure its survival, science merely evolves into a "good enough" state that explains the phenomena of interest. Therefore, it is not necessary for science to converge to something that's close or even relevant to what is actually "true". Read: https://plato.stanford.edu/entries/thomas-kuhn/#DeveScie


I really should read more philosophy, but I just can't cope with all the long sentences and curlicues of jargon... It's the same reason I struggle with algebra. Academia in general builds walls of language, it's not designed to be user friendly.


In my experience, reading reliable secondary sources is a bit easier than primary sources. E.g. for philosophy, I read the Stanford Encyclopedia of Philosophy. For math, I read nLab or Wikipedia. Unfortunately, both of these will require you to be familiar with the basic terminology and concepts of the field, and that's where most people stop learning. E.g. I'm a complete layman in physics and want to learn more, but the jargon beyond basic mechanics (i.e. velocity, position, acceleration, etc.) might as well be Greek to me.


I came up with a similar theory a few years ago. It's based on the idea of a Gaussian surface for information. The surface could be drawn around a rock or a person. In both cases, information passes into the surface from the environment (photons, vibrations, forces, motion, etc), is "processed" by the matter inside the surface, and emitted by the object after processing.

Things start to get weird when you play around with different time scales, and where to draw the surfaces. You might see a tree differently when you see it shifting and changing over longer time scales. It's also strange to think about drawing this type of a surface over different regions such as a classroom, non-contiguous regions such as every member of a family, or even conceptual regions such as mathematical functions, a scientific theory, or program source code.


I suggest you stop supporting nautil.us. Look at the bafflingly low quality of this article. Is it coincidence that nautilus articles like these frontpage once a weekend? Is it an organic submission? Or something else? This is like a throwaway class paper (which, given how nautilus sources, may very well be).


I have questions about their agenda. They were exclusively funded, initially, by the John Templeton Foundation.

Whatever one’s personal feelings about Richard Dawkins, this bit he said in The God Delusion is apropos, regarding how he feels the Templeton Foundation doles out rewards, “usually to a scientist who is prepared to say something nice about religion".


The idea is very important, though. How the article explains it is another thing.


After reading all the comments on this article, I get the weird feeling that some people, though obviously intelligent and convincing in their arguments, possess no consciousness. Furthermore, for the life of me, I can't decide if that supports or detracts from the idea of panpsychism. Making the argument "consciousness is not special - it simply emerges from data processing" feels so dead, but it is really saying that everything is alive to a certain degree. On the other hand, saying "there is something special about consciousness that needs to be explained" almost supports the idea that consciousness is exclusive to certain entities and not others, and therefore needs explaining. That feels so backwards to me for some reason, and I can't put a finger on it.


Something that concerns me for conscious particles is that some of them could be suffering endlessly somewhere in the universe, but I think this is probably rare.

The ability to hold a conscious particle in its seat while keeping it uncomfortable implies some sort of system trapping it in place.

Life could be such a roller coaster, where good and bad stimuli are always being presented to the particles, and there isn't much hope for getting off the ride as long as the lap bar is down. The sensation of discomfort might in fact be the feeling of a conscious particle straining against its chains, trying to escape the body that has trapped it.


That article (which imho is totally devoid of any substance) still reminded me of another thought experiment, recounted by Feynman: what if there's just one electron in the universe, traveling back and forth in time, becoming a positron when changing time direction, consuming energy when turning forward and releasing it when turning back? He was quick to reject this idea, because in that case there would be exactly the same number of electrons and positrons in the universe, and that doesn't seem to be the case, but I just love the elegance of the conjecture.


There’s something fishy about Nautilus. One of their main funders is the John Templeton Foundation:

“The work supported by the John Templeton Foundation crosses disciplinary, religious, and geographical boundaries. Our grantees produce field-leading scholarship across the sciences, theology, and philosophy. From probing gravitational waves to updating the modern evolutionary synthesis, they have contributed to major discoveries in the basic sciences. Other grantees have opened critical new topics to scientific investigation, including prayer, gratitude, immortality, and imagination.”

An initial reaction from someone online, https://ksj.mit.edu/archive/does-science-magazine-nautilus-h...

In 2013, Amos Zeeberg, the digital editor of Nautilus said this in an interview: “The Templeton Foundation provided all of the money for our launch and for the initial operation of the magazine. We hope they’ll fund us further, and we also are getting revenue from advertising and potentially other foundations.”


Nobody's mentioned the Conway-Kochen Free Will Theorem!? It lines up perfectly with the idea of "quantum choice" discussed in the article: free will is when particles choose to reply to quantum measurements with non-predetermined values.

From this perspective, electrons are a little conscious, while humans are a lot conscious (because we have lots of electrons in our brain), but we don't have to define consciousness beyond the ability to be forced to make choices.


I’m a little skeptical of this theory, mostly because it sounds exactly like animism [0]. The consciousness discussed here sounds a lot like the “soul” that religions assert exists. But this is the problem with a science that is almost entirely theoretical and built on a concept that is entirely subjective: what is consciousness?

[0] https://en.m.wikipedia.org/wiki/Animism


> Quantum chance is better framed as quantum choice—choice, not chance, at every level of nature.

But why? All the choice we're familiar with relates to predator/prey interactions, sexual selection, finding food, avoiding danger, protecting the next generation, and so on. What possible reason could electrons have for preferring one outcome of an interaction to another?

Say "highly rudimentary" and you can ignore everything else?


All of the quantum choice theories seem to be based around an incorrect answer to the measurement problem. They can be forgiven, as many of the smartest scientists of the past 100 years have fought hard for their own interpretation. I suggest watching this episode of PBS Space Time (and a few before/after if you have time).

https://youtu.be/CT7SiRiqK-Q


There's the Mechanical Universe theory and then there's the Everything is Conscious theory.

It's interesting to see the steps taken to arrive at that.

- A person has thoughts, yes

- A chimp has thoughts, yes

- A dog has thoughts, yes

- A mouse has thoughts, yes

- A fly has thoughts, yes

- A microbe has thoughts, maybe

- A virus has thoughts, maybe not

- An atom has thoughts, maybe not

- An electron has thoughts, maybe not

It's a blurry line between life and matter, and perhaps consciousness goes down to that level too.

I'm still a "Mechanical Universe" guy, though.


I would argue that intelligence is a tool that life has evolved to become more adaptable to its environment. Intelligence exists at many scales, and consciousness is an emergent property of high intelligence. At some point, as your model of the world becomes sophisticated enough, you have to have a model of yourself, so you become self-aware. Self-awareness evolved because it made for smarter, more effective animals.

Unicellular organisms do information processing and react to their environment. You could say they're intelligent in that sense; they can actually have very complex behavior. However, they're not self-aware: they don't have a model of the world that incorporates a model of themselves. Neurons evolved in multicellular organisms as an effective way to quickly send signals across the organism, making multicellular creatures more adaptable to their environment. These neurons started to organize into more complex circuits that could do increasingly sophisticated information processing because that was good for survival, and so on and so on, until you get self-awareness.


Because we do not know what consciousness is, it is not reasonable to discuss whether electrons do or do not have something we do not understand at all.


I would think that a set of particles that are literally indistinguishable from one another could not meet any definition of 'mind'.

There is no difference between an electron that's flown around the universe and one that was just created in a particle collider. If that doesn't negate the possibility of 'subjective experience' I am not sure what would.


They are only indistinguishable in all the ways that we know how to look. There may be a way of distinguishing them that we haven’t discovered yet.


So, if there is only this one electron, standing in for every electron we can observe, and this electron may well be conscious, then what if there are molecular structures (our brains) that "enjoy feeling conscious"? Perhaps there is only ever just this one consciousness and we all are (read: everything is) part of it?


This feels a lot closer to Hindu philosophy, actually. Maybe someone more knowledgeable can correct me or add more.


You are right.


David Bohm's Implicate vs Explicate Order seems to be a root question here (as indicated in the article)... https://en.wikipedia.org/wiki/Implicate_and_explicate_order


For a definition of conscious, yes. The problem is that we don’t really understand what consciousness is, and we tend to anthropomorphize everything.

What if consciousness is the synchronicity of the observations of the outside world with the predictions of our brains?

No more, no less, with an added layer of hierarchies and recursion.


If we consider our brains conscious, and the foundational building blocks of that consciousness are electrochemical reactions, it seems plausible that there is an innate property of consciousness in the underlying matter of those processes.


Maturana and Varela (1987): consciousness is like a pilot flying blind in conditions of very poor visibility, relying only on the instrumentation in the cockpit, where any environmental inputs are mere blips on the radar screen.


Is that statement falsifiable?


This is all very interesting and exciting, but all I see in this article are arguments from authority. I fail to see any facts or demonstrable proofs.

How is that theory different than any other religious belief?


Check out the Orch OR theory developed by Roger Penrose and Stuart Hameroff, an anesthesiologist from Arizona. Basically they think microtubule proteins create electron resonances that can reach a threshold time to produce consciousness that wouldn’t be available to most electrons outside the system. I’ve been researching this topic during quarantine, so I’m happy to answer any questions or supply some cool facts.
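On the "threshold time" part: in Penrose's objective-reduction criterion the collapse time is roughly tau ≈ ħ/E_G, where E_G is the gravitational self-energy of the difference between the superposed mass distributions, and Hameroff and Penrose tie this to brain events on the order of 25 ms (gamma-band oscillations). A back-of-the-envelope sketch in Python, with purely illustrative numbers:

    # What gravitational self-energy E_G would make Penrose's
    # objective-reduction time tau ~ hbar / E_G equal to a 25 ms
    # (gamma-band) event? Numbers are only illustrative.
    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    tau = 25e-3              # seconds, one gamma-cycle-scale event
    E_G = hbar / tau
    print(f"E_G ~ {E_G:.2e} J (~{E_G / 1.602176634e-19:.2e} eV)")

which comes out to a few times 10^-33 J, a hint at how tiny the relevant energy scale is.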


panpsychism is the flying spaghetti monster.

with added pretense.


Christopher Alexander arrived at a similar conclusion, in his later works, albeit from a radically different direction.


Humans are unique in evolving recursive language. Consciousness is a recursive agent: observe me observing the world, several layers deep. Likely the exact same mutation / brain structure backs both mechanisms.

Neither electrons, nor the Universe, are recursive agents. Maybe some of the Devil's apprentices playing with reinforcement learning AI will inadvertently create a non-human consciousness, but other than that, humans are special.


I suspect the experience of being an electron is something like this:

"Wheeeeee!"


So what if they are? Will they talk back if I send them to the anode?


No, but neither are we.


You don't have subjective experience?


Next year, you'll have protesters chained up outside of the CERN particle accelerators. Don't smash our particle friends!


It seems to me that our consciousness is most likely an emergent property of neuronal activity. The thing with emergent properties is it's extremely hard to draw a line where the system that exhibits the property transitions from not exhibiting that property to exhibiting it.

Consider a cyclone, a mass of air rotating around a zone of low pressure in the atmosphere. Clearly this is an emergent property of air, or any gas really. But if you statistically analyse any mass of gas you will be able to find volumes where there are regions of lower pressure than the average and the particles nearby will have a (possibly tiny) net angular momentum around that low pressure region. Any volume of gas will have these, but are they cyclones? How long does such a state have to persist to count? How much angular momentum must be present? Must it be on the surface of the earth to count? What about near another similar surface?
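To make that statistical point concrete, here's a minimal sketch in Python (assuming an ideal gas of unit-mass point particles in a 2-D box; the particle count, grid size and radius are made up for illustration). It picks the emptiest cell as a stand-in for a low-pressure region and measures the net angular momentum of nearby particles about it, which is almost never exactly zero:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    pos = rng.uniform(0.0, 1.0, size=(n, 2))   # positions in a unit box
    vel = rng.normal(0.0, 1.0, size=(n, 2))    # Maxwell-like velocities, unit mass

    # Coarse-grain the box into cells and find the least-populated one
    # (lowest local density, our proxy for a low-pressure region).
    counts, xedges, yedges = np.histogram2d(pos[:, 0], pos[:, 1], bins=20)
    ix, iy = np.unravel_index(np.argmin(counts), counts.shape)
    center = np.array([(xedges[ix] + xedges[ix + 1]) / 2,
                       (yedges[iy] + yedges[iy + 1]) / 2])

    # Net angular momentum (z-component) of particles within a small radius.
    r = pos - center
    near = np.linalg.norm(r, axis=1) < 0.1
    Lz = np.sum(r[near, 0] * vel[near, 1] - r[near, 1] * vel[near, 0])
    print(f"particles nearby: {near.sum()}, net Lz about that spot: {Lz:.3f}")

The net Lz is tiny compared with the total kinetic scale of the gas, but it's there; nobody would call it a cyclone.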

You can do this with any emergent property, right down to the electrons orbiting a nucleus, does that make them cyclones? The thing is whether you say yes or not, it doesn't really matter because it's not a useful level of analysis if you want to understand the behaviour or nature of cyclones in global weather forecasting. It's a context error.

Take the idea that an electron's state is not random but chosen, per the article. Well, to make a choice implies considering reasons or following a pattern of behaviour. However, we can see experimentally that the states of electrons strictly follow random statistical distributions. So you can argue that their 'nature' is to 'choose' a random state, but at that point you're jamming a useless and explanation-free definition onto the term 'choose'.

Now don't get me wrong, I'm basically a materialist. I believe that my decisions as a free agent are determined by my mental, psychological and essentially neuronal state. I believe this is necessary for me to be considered to have free will, because for _me_ to have free will my decisions must come from me, and I am my state. If my memories, instincts, experiences, knowledge and values don't influence or even determine my decisions, then those decisions don't come from _me_ in any meaningful sense. I don't care if that means my decisions are in principle deterministic; that's fine, it just means I'm a consistent being.

So yes, my behaviour comes from my state, and you could argue that the behaviour of an electron comes from its state, or the behaviour of a gas comes from its state. All that means is that I'm made out of matter, but it's simply a useless level of analysis when considering consciousness, for exactly the same reasons that studying quantum mechanics and atomic motions isn't practically useful when looking at many other emergent properties of matter.

This article is like analysing an engine, reducing its behaviour down to the fundamental particles the engine is composed of, then extrapolating back up to say that because Jupiter is made of those particles it must therefore be an engine. If you're going to say electrons are conscious, you have to also say they are cyclones, and engines, and economies, and thunderstorms, and planets, and whatever else, because if you take this mode of analysis as legitimate you can, without crossing any objectively clear boundary, reduce all emergent properties of matter down to the fundamental particles.


>that all matter has some associated mind or consciousness, and vice versa. Where there is mind there is matter and where there is matter there is mind.

That doesn't make sense. We know that humans (and probably most if not all conscious animals, in fact sleep may be a prerequisite for consciousness) spend about a third of their lives unconscious. This assertion doesn't hold, unless you want to tell me that all these rocks are just asleep.


May the Schwartz be with you.

And place the midichlorian detector on the Qi pad, wouldja?


No.


You're so certain -- but can you prove it?


Electrons might prefer Michael Jackson music and grapes in the summer. Not being able to disprove it is a looong way from it being true.


Grapes in the summer? That's your sensory experience, not that of an electron -- not the hypothesis being entertained. Some people are seriously considering that quantum choices might be governed by an extremely minimal, 1 qubit, consciousness. And the status is they're looking for a testable hypothesis. I'm not coming close to saying it's true: I asked if OP could prove the claim they asserted.


We avoid all the woo and counterintuitiveness if we just take the common-sense substance dualism position we've had since childhood. So what if a 'soul' is outside our current scientific paradigm? If we ignored all ideas outside the current paradigm, we would still be substance dualists.


Oh, I am definitely going to have to finish reading this, as I am certainly an avid fan of the concept of consciousness itself being the substrate of reality. That being said, I don't necessarily think viewing it through a panpsychist lens is entirely accurate, as that perspective more or less tends to assume consciousness is some base aspect of material reality, and as such a base unit. My own personal spiritual experiences and development in that regard have shown me that it is rather the other way around: that consciousness is the core metaphor, the non-deterministic field of probability that constitutes reality, and that more emergent and highly complex life forms also gained, with their development, a more strict set of deterministic rules to follow (instincts, genetics and such). As such it would be more apt to suggest that material reality, through the "objective" lens we (seek to) perceive it through, is just the net output of running that core algorithm: iterate over time,

propagating some aspect of your essence, ideally in a stochastic system that allows for novelty to form, as really the pursuit of novelty is the only underlying force that, to me, can explain the wondrous emergent forms that reality is known to take. Not to mention the amazing correspondences between micro- and macro-scale experiences, despite their appearing (which is likely in all reality illusory) to be two separate connected-but-not-directly-related constructs.

In any case, I just always feel compelled to ramble when it comes to consciousness and the nature of reality. I should probably read the rest of the article now. Sorry if my delivery or communication was not quite clear; that's a known pain point and something I have been attempting to address ever since back around 1991 when I, having caught the big dumb, evidently never got the memo on reasonable social interaction lol


Consciousness is the observation of one's own states, nothing fancy about it. We humans have powerful machines to observe ourselves and our environment, and to "observe" their future states through prediction. Observing our observability in a recursive way is what gives us a feeling of "consciousness". Even some paradoxes, like Zeno's and supertasks, are simply observer problems.


The notion that we can "observe our own states" does not stand up to genuinely deep insight about our phenomenology. Experienced practitioners in insight meditation have been able to shatter this illusion, and realize that we can only ever observe memories of some apparent "self" - that no real "self" observation of our observability 'in a recursive way' is going on (thus, pithily: "no self"). But of course, they're still conscious!


> we can only ever observe memories of some apparent "self"

To make a "self" we observe too many ideas; these experienced practitioners of insight meditation are simply removing those mental models and observing the simplest states. They are saving a memory of that, so when they stop meditating and go back to observing their "self", they remember they were conscious in a basic observation mode. That's all.


This is called explanation by definition. You can explain anything by first creating a definition you can explain, then just saying this is it.

Your explanation seems to assign non-physical magical properties to 'own state' observation. Why is 'own state' different?


> Your explanation seems to assign non-physical magical properties to 'own state' observation

What magical properties are you talking about? You would need to study electronics and programming to understand what "observation", "states" and "recursion" means, all of those are real concepts.


Consciousness cannot apply to electrons: they have no life (no need to protect themselves from cold, get food, or learn), no senses, limited internal state and no evolutionary mechanism to speak of.

In order to be conscious, a system must be able to adapt itself to its environment to maximise goals, the main goal being self-reproduction. It also requires an evolutionary mechanism in order for complex agents to appear. Consciousness is the inner loop of evolution; they both optimise for reproduction.

This absurd idea of conscious electrons is the natural result of armchair philosophising about consciousness. Tononi et al. had a nice idea about integrated information, but it is an unfinished idea. For what purpose is this integrated information? How does learning occur? It says nothing. I'd rather look up recent DL and reinforcement learning papers for inspiration about consciousness.
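For a toy picture of "adapting to an environment to maximise goals", here's a minimal sketch in Python (an epsilon-greedy two-armed bandit; the payoff probabilities and epsilon are made up for illustration, and of course nothing here is claimed to be conscious):

    import random

    payoff = [0.3, 0.7]                # hidden reward probabilities of the two arms
    est, counts = [0.0, 0.0], [0, 0]   # agent's running value estimates
    for t in range(10_000):
        # Explore 10% of the time, otherwise exploit the current best estimate.
        arm = random.randrange(2) if random.random() < 0.1 else est.index(max(est))
        reward = 1.0 if random.random() < payoff[arm] else 0.0
        counts[arm] += 1
        est[arm] += (reward - est[arm]) / counts[arm]   # incremental mean update
    print("learned estimates:", [round(e, 2) for e in est])

That is the "inner loop" kind of adaptation being pointed at; whether anything like it has anything to do with consciousness is exactly what's in dispute.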



