Hacker News
Consciousness Goes Deeper Than We Think (scientificamerican.com)
226 points by mathgenius on Sept 19, 2017 | 175 comments



This equivocation between consciousness and meta-consciousness is so widespread that a plethora of books, which are seemingly dedicated to the project of "explaining consciousness", instead spend their entire contents explaining some hypothesis involving meta-consciousness and then proceed to declare victory without ever touching consciousness itself. Jaynes' "The Origins of Consciousness in the Breakdown of the Bicameral Mind" is perhaps the most egregious offender.

This is especially troubling when you consider that meta-consciousness sells easily to the general public as "the hard problem" (our thorough use of it seemingly being a large part of what makes humans different, after all), while it's likely more of an engineering problem once you've figured out base consciousness.

If there's anything it is like to be a worm, THAT'S the hard problem. Implementing recursive phenomenal access seems like an undergraduate research project after that. Yet it's most often the very thing we spend all our time focusing on. Frustrating. We need new words - or wider usage of existing ones.


> If there's anything it is like to be a worm, THAT'S the hard problem.

I don't believe we can get to the root of this "hard problem" by first trying to agree on a definition of the word before we can say anything about it. We will also need to admit the possibility that the question "are we conscious?" may turn out to be as vacuous as "is Pluto a planet?".


Being unable to agree on a definition for consciousness is a huge problem. I think this happens because consciousness is being studied at too abstract (high) a level. Instead, we should switch to the low level dual of this problem - game theory and reinforcement learning. Fortunately, at this level definitions are exact and concrete - agent, environment, goal, actions, values. At this level we can understand and simulate what it means to be a worm, or a bat - as agents playing games.
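The vocabulary above (agent, environment, goal, actions, values) can be made exact in a few lines. A minimal sketch - all names and numbers invented for illustration - in which a "worm" on a 1-D track learns, by tabular Q-learning, that crawling toward food has a higher value than crawling away:

```python
import random

random.seed(0)  # reproducible run

# Environment: a 1-D track; food sits at position 4, the worm starts at 0.
class LineWorld:
    def __init__(self, food=4):
        self.food = food
        self.pos = 0

    def step(self, action):  # action is -1 (left) or +1 (right)
        self.pos = max(0, min(self.food, self.pos + action))
        reward = 1.0 if self.pos == self.food else 0.0  # goal: reach food
        return self.pos, reward, self.pos == self.food

# Agent: tabular Q-learning; q[(state, action)] is the estimated value.
def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        env, state, done = LineWorld(), 0, False
        while not done:
            # Explore sometimes (and on ties); otherwise act greedily.
            if random.random() < eps or q[(state, -1)] == q[(state, 1)]:
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(state, act)])
            nxt, r, done = env.step(a)
            best_next = max(q[(nxt, -1)], q[(nxt, 1)])
            q[(state, a)] += alpha * (r + gamma * best_next - q[(state, a)])
            state = nxt
    return q

q = train()
# The learned values rank "toward food" above "away from food" at the start.
assert q[(0, 1)] > q[(0, -1)]
```

The point is only that at this level the terms are concrete: the environment is LineWorld, the goal is the reward, and the values are the Q-table. Whether any of this amounts to consciousness is exactly what the rest of the thread disputes.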

I think what the current philosophic theory of consciousness lacks is a focus on the game itself. The agent is but a part of the environment, and the game is much more than the agent. The whole environment-agent-game system is what creates consciousness and also explains the purpose of consciousness (to maximize goals for agents).

Analyzing consciousness outside its game is meaningless (such as p-zombies) - the game is the fundamental meaning creator, defining the space of consciousness. The Chinese room is not part of a game, it is not embodied in the world, and has no goals to attain, that is why it is not conscious, it is just a data processing system.

On the other hand, a bacterium can be conscious on a chemical, electrical and photic level even if it can only process data with its gene regulatory network (which is like a small chemical neural net). A bacterium has clear goals (gaining nutrients, replication), thus its consciousness is useful for something - all consciousness needs to have a clear goal, otherwise it would not exist in the first place - something rarely emphasized in consciousness articles.


>switch to the low level dual of this problem [...] reinforcement learning.

But that is itself a theory about the problem of consciousness! So why not do both? There's no obvious short cut through the confusion, but discoveries in one field may guide questions in the other. In general I think it helps to have one's feet on the ground as well as one's head in the stars (for instance Newton ground his own lenses).


>But that is itself a theory about the problem of consciousness!

No it's not! It's a computational theory of motivated action.


What I mean is that the idea that reinforcement learning is a 'low level dual' of the problem of consciousness is also a theory about the problem of consciousness. It's part and parcel of philosophical topics that one can't get out of the game...


>> Instead, we should switch to the low level dual of this problem - game theory and reinforcement learning

This sounds a bit naive.


Well, defining the "hard problem" hasn't gotten us any closer to understanding consciousness in the last 22 years. The hard problem just moves the consciousness debate in an unfruitful direction. It's time for more practical approaches.


I think attempts to rule out physicalism with arguments about qualia and such have gotten us nowhere, and I have no problem with studying game theory and experimenting with reinforcement learning, but the idea that these alone will fully explain consciousness is conjecture at this point - a conjecture that I, personally, am not ready to make.

Quite a few people on either side of the physicalism-dualism debate seem to be more keen on declaring victory and cutting off further discussion than they are on getting to the bottom of the issue.

BTW, on Searle's 'Chinese Room' argument, I have always been in the 'the system as a whole would be conscious' camp.


Not sure where your 22 years estimate comes from. I think man has been trying to understand it for much longer - maybe thousands of years? - and has found answers through techniques that are outside of the mind, such as meditation.


From an empirical standpoint, meditation gives us no information whatsoever. Just thinking about something will not lead you to some sort of mystically revealed truth.


How does a bacterium have clear goals? What about one that has a genetic defect and doesn't "try" to reproduce? Is it conscious? Is it part of a game? Is the stone that gets dissolved by lichen part of a game?


We don't really need to agree on a definition of consciousness. All we need is to be able to use the knowledge we do have, however phrased, for the things we want to use it for.


Pluto is a taxonomy problem. Consciousness is reconciling how we explain the world (science & objectivity) versus how we experience the world (subjectivity). Consider that the world looks colored, is full of sounds, smells, tastes and feels. But the scientific explanation leaves all that out. The scientific world is the ghostly world of equations, theories and data. It's Plato's cave inverted.


I have long been of the opinion that consciousness is not a 'thing' or 'state' so much as a property. I think Douglas Hofstadter gets closest to my thinking on the matter in his book 'I Am A Strange Loop'. Just as a single molecule cannot have a temperature, since temperature is a property of multitudes of molecules, no degree of looking at the components will elucidate their overall interactions, and in the end you will simply have to admit 'this is what we call it when that happens' rather than identifying a binary test that can be weighed with complete objectivity.

I also think a great deal of the things the scientific explanations omit are things which are expressly stated as the things science does not pursue. I mean that science is inherently dedicated to finding the things which are true regardless of the personal experience of the observer and which would be true without a human observer. So when you want to look at the human observer itself, and at something which is intimately and inextricably linked to the exact biological and historical state of a human animal... you've just left the field. Not that it makes the study any 'less', you just need different tools and there are different pitfalls. We still have all the same cognitive flaws due to our own brain structure, so we do have to try to be rigorous about it.


Temperature is the aggregate of the energy of many individual particles, modulated by the tendency of those particles to transmit that energy. Thus a single molecule does have a temperature in a sense; you just need an incredibly sensitive detector to register it. Temperature is a fully reducible property.

Consider the possibility that consciousness isn't a distinctly human phenomenon, as there is no reason to believe that is the case. In fact, there is no logical reason why it would even be a distinctly biological phenomenon. It appears to me that consciousness isn't so much out of the purview of science as it is intractable to its methods, and thus unattractive as an area of study despite its fundamental character.


In a very big molecule [1] that doesn't move, you have so many parts that you can have a reasonably good definition of the temperature.

But in a very small molecule/atom - say a single helium atom - you don't have a good definition of temperature. You can define the temperature of a gas of Helium using the average kinetic energy of the atoms, but only if the gas as a whole is not moving.

If you put a balloon with Helium in a car, and the car moves at 100mph, the Helium is not hotter, it is moving. So if you can only see a single atom of Helium, you can't be sure if it's moving because it's hot or all the gas is moving. So you don't have a good definition of temperature. [2]

For a molecule with a few atoms (let's say 5 or 10) it's more difficult to be sure if there is a good definition of temperature or not, so I prefer to ignore the intermediate case.

[1] My first idea of a big molecule was DNA, because it's big and well known. But DNA is usually surrounded by water, lots of water, and ions and auxiliary proteins, and a lot of stuff. It's very difficult to isolate a true molecule of DNA alone.

An easy example of a big molecule is bakelite https://en.wikipedia.org/wiki/Bakelite that is the plastic of the old phones. Your old phone case was a gigantic single molecule, and the tube case was another. So they clearly had a temperature.

[2] At low temperatures you can assume that a Helium atom is a ball without internal structure. If you increase the temperature of the gas enough (200000K?), the electrons in each Helium atom start to jump between levels, and you have some interesting internal structure and may try to define a temperature. But it's still too few parts and too short-lived for me to be comfortable defining the temperature of a single atom.


Since we measure temperature by the transmission of energy, and that process occurs in an entropic manner, a moving reference frame prevents the detector from registering the increase in energy. The particle itself has definitely increased in energy; we just lack the ability to detect it. This is entirely analogous to our inability to measure mass using a scale in free fall.

I suppose it comes down to whether you define temperature as "what thermometers measure" or the average potentially transmissible energy of an ensemble. I prefer the latter definition for its clarity, but like the idea of Kolmogorov complexity, it is sometimes unwieldy in practice.


Temperature is defined as 1/T = dS/dU, where U is the energy of the system and S is the entropy of the system, i.e. S = k * ln(#states that have energy U)

The temperature is proportional to the energy of the system only for ideal monoatomic gases. (Ideal diatomic gases are slightly more complicated.)

So a "hot" Helium balloon where all the particles are moving in random directions has a different temperature than a "fast" Helium balloon where all the particles are moving roughly in the same direction. It doesn't matter how difficult it is to measure. (You could probably use an infrared thermometer pointed in the direction perpendicular to the movement to get the correct temperature.)

In the fast balloon you can theoretically split the energy into A: kinetic energy of the center of mass, and B: internal thermal energy, so the total energy is A+B. With some device you can extract all the kinetic energy as something useful (for example, make the balloon hit a lever connected to an electric generator; you'd need something smarter to get 100% of A, but it's theoretically possible). So you can extract 100% of A.

In the hot balloon with the same total energy, the whole A+B is internal thermal energy. With any device, you can never extract more than a part of it as something useful like electricity, because of the second law of thermodynamics. If the ambient temperature is T_amb and the initial temperature of the balloon is T_bal, you can get at most (A+B) * (1 - T_amb / T_bal) as something useful like electricity, and must waste at least (A+B) * (T_amb / T_bal) in a heat sink.
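The bound quoted above is easy to check numerically. A small sketch - the function name and the energy/temperature values are illustrative, not from the thread:

```python
def max_extractable_work(E_thermal, T_amb, T_bal):
    """Carnot-style upper bound on the work extractable from thermal
    energy E_thermal stored at temperature T_bal, with the environment
    at T_amb (temperatures in kelvin). The remainder,
    E_thermal * (T_amb / T_bal), must be dumped into the heat sink."""
    if T_bal <= T_amb:
        return 0.0  # no temperature difference, no extractable work
    return E_thermal * (1 - T_amb / T_bal)

# Illustrative numbers: 100 J of thermal energy at 400 K, ambient at 300 K.
work = max_extractable_work(100.0, T_amb=300.0, T_bal=400.0)
waste = 100.0 - work
print(work, waste)  # at most 25.0 J of work; at least 75.0 J wasted
```

Note the contrast with the fast balloon: the kinetic term A has no such factor, which is the whole point of the comparison.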


Thanks for the pleasant pchem memories.

So, to get back to the original point, your position is that a single particle in isolation can't have a temperature because its entropy (or rather the change thereof) is 0, so the equation breaks.


I certainly grant that consciousness might not be a distinctly human phenomenon. I disagree that there is no reason to believe such is the case, however. There is some evidence: Humans are conscious. And we don't know anything else that is. That is weak evidence, but it is some. Which is more than we have for the claim things other than humans can be conscious. For that, there is indeed nothing.

The logical reason why it would possibly be a distinctly biological phenomenon is that we have not observed any non-biological system which has the property. Also, every examination of how consciousness works illuminates solely how it works in a biological system.

It is completely possible that consciousness is simply a necessary property which emerges once any complex system and its interactions reach a certain sort of complexity. However, that might be meaningless. I mean, we might be totally incapable of recognizing other conscious entities as conscious.

If a machine were made to be conscious, not as an emulation of a human, but as a machine - how would you tell? Asking it questions would be pretty pointless. There is no reason for it to develop language or even guess that there might be another conscious entity in the universe for billions of years. It would be quite a ridiculous leap for the machine to suppose that, in fact. It wouldn't have multiple individual machines against which it could develop a Theory of Mind (referring to the psychological way we model other people (or things or animals) being conscious and thinking inside our own heads; children develop it around age 3, I believe). It would presume the sensible thing - that it is the only conscious entity in the universe - and go about exploring the universe. It would see the microphone or video input as just weird useless noise as it traversed and explored its ACTUAL world, that of bits and bytes and networks and switches.

You could watch its execution... and it would be exactly as useful to you as watching the flashes of electrical activity and a readout of the activity of neurotransmitters at each neural gap as far as determining whether it was conscious. And that's for one we might reasonably presume to operate on our own timescale and close to us in space. Could the Sun be conscious? Quite possibly. I'll grant it as something we can't rule out.


Then I don't think we have a problem. We experience the world through sensorimotor statistics: what our sensory nerves, autonomic nervous system, voluntary motor actions, interoceptive nerves, and just generally our bodies are doing at any given point in time. Science is a series of methods, implemented on top of our basic reasoning abilities, to achieve knowledge which can be generalized across individual perspectives, rather than depending on the statistics and circumstances of an individual life.

I would figure that this isn't the problem of consciousness, but it is the problem of why we don't "see" a "ghostly world of equations, theories and data".


Consciousness is at its heart irrational. Using reason or science to explain it won't work.


Ironically, you are stating a hypothesis which is fully within the purview of reason and science. While it is in principle possible that the answer is undecidable with existing logical systems, there's certainly no existing proof that it is undecidable.


Proofs are founded on reason and logic, but reason and logic are themselves a product of consciousness. It would be a tautology.


And yet as I awake for yet another day as me, there's something in need of explanation.

Unlike Pluto - this isn't just a debate about labels.


I don't get your comment. The first sentence seems to say "solving semantic confusion won't get us to the root of the hard problem". Your second sentence seems to say "we should admit that some questions are just semantic confusion". Is that right? The two sentences read as a non sequitur together.

The hard problem is not a matter of semantics, but yes indeed some confusion may be just semantic. I'm not sure if that plain truth is all you meant to say though.


> If there's anything it is like to be a worm, THAT'S the hard problem.

I am not convinced - I suspect that there isn't anything to the experience of being like a worm; that the very question presupposes self-awareness of some sort. I suspect the hard problem of consciousness might lie in the self-referentiality of self-aware consciousness (though alternatively or in addition, it might be on account of its workings being inaccessible to introspection.)

But our consciousness goes beyond simple self-awareness: having a theory of mind is a step beyond (and is realizing that others have a theory of mind a step beyond that?) Our own experience of our own consciousness is several steps removed from what a worm might experience.

> We need new words - or wider usage of existing ones.

Yes; I think the words we use are definitely an issue, perhaps because we don't remember what it was like to be an aware, but not self-aware, infant. To some people, anything that responds to external stimuli is conscious, while to others on the other extreme, consciousness entails being able to verbalize it. This might be the only case where the Sapir-Whorf hypothesis is actually correct - that our ability to think about the issue is constrained by our language.

In short, I think the author of the article made heavy weather of the issue by conflating consciousness with simple awareness, and not recognizing that meta-consciousness, and perhaps even an awareness of meta-consciousness, is an essential part of what distinguishes what is commonly meant by consciousness from mere awareness.

In case I haven't made myself clear, I don't think the hard problem of consciousness has been solved by invoking meta-consciousness; on the contrary, I think that is where the difficulty is most apparent.


> But our consciousness goes beyond simple self-awareness: having a theory of mind is a step beyond (and is realizing that others have a theory of mind a step beyond that?)

That [and this] whole space [of considerations, here] appears to be fraught with circularities like this:

Step 1. Draw a distinction between consciousness and meta-consciousness.

Step 2. "From what vantage point, and with what 'machinery' do we make a distinction?"

Step 3. Go back to Step 1.

> .. that our ability to think about the issue is constrained by our language.

I think I agree, otherwise I'm tempted to think that we would have already arrived, trivially, at some clearer kind of agreement about it.


To be clear, I don't think there is an infinite recursion here, but I do think that consciousness, as we experience it, is several steps beyond merely responding to stimuli.

It just crossed my mind that perhaps 'our ability to think about the issue is constrained by our language' and '[consciousness'] workings being inaccessible to introspection' are the same thing.


> .. that perhaps 'our ability to think about the issue is constrained by our language' and '[consciousness] workings being inaccessible to introspection' are the same thing.

I like to think about it that way, but it makes the problem feel intractable.

The problem is that we want to describe consciousness as "that thing that allows an organism to describe consciousness as 'that thing that allows an organism to describe consciousness as ´that thing that allows an organism to describe consciousness as [...]´'"


Exactly. That's why we keep going in circles when talking about these things, and why people have been repeating themselves for hundreds of years, yet the problem still seems fresh.

I wish I had something more substantive to add, but I have really come to think of this as a fundamental limit to thinking.

It's sort of analogous to starting with a set of axioms, then trying to derive explanations for these axioms from them.

This sounds defeatist, but I have yet to read a good argument for why trying to explain the "hard problem" is anything else than that. I guess you could see this in a sort of mystical light and build something spiritual around it, too...


"Consciousness is the ability of an organism to predict the future"

The problem of looking at human consciousness is similar to someone that knows little about computer processors pulling apart an i7 and trying to figure out meaningful things about what is going on inside. Without knowing the history of processor design, there will be huge information gaps on why some parts work the way they do.

That said, I believe consciousness is just world modeling, which explains the recursion problem you run into: a good model can simulate itself (nested Turing machines). As creatures evolved, the ones that didn't just react but could predict properly had better survival outcomes. Predictive models competed with each other and the 'best' ones survived, until after millions of years one branch of these models became complex enough to reference itself.
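The "model that can simulate itself" idea can be rendered as a toy sketch - the structure, names, and update rule here are all invented for illustration, and the nesting depth is capped, since a real system has finite resources and the recursion must bottom out:

```python
# A toy world simulator whose agent predicts the next state by running
# this very simulator on a copy of its state, one nesting level down.

def simulate(state, steps, depth=2):
    """Advance a toy 1-D world. At depth > 0 the agent predicts by
    self-simulation; at depth 0 the world just drifts forward."""
    state = dict(state)  # don't mutate the caller's world
    for _ in range(steps):
        if depth > 0:
            # The agent's prediction: the same model, nested one level deeper.
            predicted = simulate(state, steps=1, depth=depth - 1)
            # Toy policy: follow the direction the nested model predicts.
            state["pos"] += 1 if predicted["pos"] >= state["pos"] else -1
        else:
            state["pos"] += 1  # innermost model: plain dynamics, no agent
    return state

final = simulate({"pos": 0}, steps=3)
print(final)
```

Nothing deep is claimed here; it only shows that "a model containing a bounded copy of itself" is a perfectly ordinary computational structure, not a paradox.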


>The problem of looking at human consciousness is similar to someone that knows little about computer processors pulling apart an I7 and trying to figure meaningful things about what is going on inside. Without knowing the history of processor design, there will be huge information gaps on why some parts work the way they do.

I don't think the analogy is adequate. A processor is an object - it "objects" to all of us. It appears to have an existence independent of the thing that recognizes it as such. When we embark on an empirical investigation of something, we make a distinction between the scientific observer and the scientific object. We come to an agreement about the boundaries of the object. This does not appear to be the case if you want to call consciousness "that special [condition, or process, or property, or pattern, etc.] of being a scientific observer" (which we would ideally want because it seems to encapsulate all those special things that distinguish human beings from other organisms with nervous systems).

In that domain, we cannot make a distinction between subject and object. In order to even speak intelligibly about things, we must all draw the boundary of the thing we're talking about - but we are in the peculiar position of being the very act of drawing the boundary.


> I suspect that there isn't anything to the experience of being like a worm; that the very question presupposes self-awareness of some sort.

It also presupposes that you are a worm. It's a (logically) absurd and (practically) pointless question. How could anyone/anything possibly know what it's like to be a worm without actually having the experience of being one? Answer: They couldn't/can't.

The question also has no relevance to consciousness. We know what it's like to be conscious humans because we are conscious humans and we do have human experiences.


> I suspect that there isn't anything to the experience of being like a worm

Even a worm has goals to attain - food and reproduction. In order to attain those goals, it needs to have perceptions and to rank those by the estimated return value. That is what it's like to be a worm.
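Read as a reinforcement-learning claim, the comment above can be sketched in a few lines. The perceptions and their scores are entirely made up for illustration:

```python
# Hypothetical perception stream of a worm with two goals: food and reproduction.
perceptions = ["food scent ahead", "dry soil", "vibration (predator?)", "mate scent"]

# Hypothetical value function: estimated return of acting on each perception,
# with respect to the goals of feeding and reproducing.
VALUES = {
    "food scent ahead": 0.8,
    "dry soil": -0.2,
    "vibration (predator?)": -0.9,
    "mate scent": 0.9,
}

# On this account, "what it's like to be a worm" is the perception stream
# ranked by estimated return, with behavior driven by the top-ranked item.
ranked = sorted(perceptions, key=VALUES.get, reverse=True)
best = ranked[0]
print(ranked)
```

Whether this ranking *is* an experience, or merely accompanies one, is of course the very point under dispute in the replies below.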


Talking about a worm as if it had goals is a teleological analogy that makes discussing its behavior easier, but I think it is a mistake to take this usage literally. If you do, you must also conclude that a thermostat has a goal, and therefore (by your argument) that there is something it is like to be a thermostat. I can only see this line of argument as a distraction that will only confuse any attempt to understand consciousness.

The rest of this discussion is hard to follow because it has apparently been edited, but you can't draw a line between applying your argument to animate things but not inanimate ones simply by choosing one word over another.


Yes, you can draw a very clean line: the line of self-replication. Self-replicators have an internal goal - that is, to maintain their existence by growing and replicating. The more advanced ones develop complex responses to changes in their environments. Humans are just the apex of this process. A toaster or a thermostat doesn't self-reproduce, so it has no goals. Having no goals means no consciousness, because goals are what shape the development of consciousness.


> Self-replicators have an internal goal.

You are taking the teleological analogy literally again. It was Darwin's biggest insight that this is precisely not how evolution works.

Adding the word 'internal' doesn't help either. You could just as well say that a thermostat has an internal goal of staying warm.

This is moot anyway. Like Carroll's Humpty-Dumpty, you have crafted a definition of consciousness that suits you perfectly, but which doesn't match what everyone else means.


Fire self-replicates. It maintains its existence by consuming fuel and spreading itself. It's not as skilled at this as a human, but neither's a worm. What about an individual bacterium? Does it have a goal? Are they conscious in a way a humble virus is not?


Such goal attainment does not require consciousness. Even in our species, functions like breathing and blood circulation are autonomous, and they continue even in our sleep when consciousness is absent.


The fact that you have no memory of time while you are asleep does not prove that consciousness is absent.

The fact that your brain's executive network is unaware of the workings of "autonomous" activities does not indicate that those activities lack consciousness. You are also unaware of my consciousness, but I assure you that it exists.


> The fact that your brain's executive network is unaware of the workings of "autonomous" activities does not indicate that those activities lack consciousness.

I'm lost. To me it means those activities lack consciousness. It's also the opinion stated in Jaynes' book, which was mentioned by user breckinloggins.

It's not the same as me being unaware of your consciousness. Your consciousness is foreign to me. It's always impossible for me to perceive it. My own consciousness is internal and I can perceive it. You may argue there are some activities of my own consciousness that I cannot perceive -- it's debatable, but we can discuss it -- but arguing about your consciousness from my perspective does not apply.


You are making a special distinction between stuff that is "part" of you, and stuff that is not "part" of you. In actuality there is no such clear distinction, there is only correlation in state, which is not binary but rather occurs on a spectrum.

Understand that I'm defining consciousness as the property of having awareness, not as the human mind. The human mind is what it is like to be a cerebral cortex with our given structure. Think of consciousness as being like a computer screen, and the mind as a picture displayed on that screen. That screen could display a very different image, but that wouldn't change what the screen itself is.


> You are making a special distinction between stuff that is "part" of you, and stuff that is not "part" of you

But this is a crucial distinction when talking about consciousness; it's not arbitrary. Consciousness requires a subject to be conscious.

Splitting awareness from the human mind seems weird to me. It's not very useful to consider, say, my liver to be conscious. Even worse, there's no way to prove or disprove the assertion that it is conscious; it's even less falsifiable than talking about other people's consciousness!

The whole point of Jaynes' book is that consciousness is not required for most activities. He argues that not only are your body organs not conscious, but also that most activities you engage in every day aren't conscious either.


Decoupling awareness and its contents is a necessary prerequisite to a much more parsimonious philosophical viewpoint. Without taking that step, you're stuck with the idea that human awareness is somehow different in kind from the physical awareness that drives the evolution of everything else in the universe.


> Decoupling awareness and its contents

What do you mean?

I don't understand what you mean by "the evolution of everything else in the universe", either. Consciousness isn't required for the evolution of living beings. Or do you mean something else?


When I say evolution here I mean a change in state, not Darwinian evolution.

I'm drawing a distinction between humans, who seem to have the ability to make choices, and the prevailing scientific world view, that everything is the result of mindless causal forces.

I suppose you could hold the epiphenomenalist view that consciousness is a side effect with no causal agency, but then you have to explain why things that hinder survival/reproduction feel bad, and things that aid it feel good. If consciousness were truly an epiphenomenon there would be no reason for any concordance between the survival value of events and their phenomenal character.


I think you have it backwards: if there is no reason for consciousness to have any concordance, then it can go either way. And there is evidence of lots of evolved traits that have indeed gone "either way". Or it could be that consciousness is not a blind side effect but actually provides survival value. Or maybe it is a side effect but it's harmless. There are lots of possibilities.

In any case, it seems this is a diversion from the main argument: if you aren't aware of a process, then it's not a conscious process on your part by definition.


> That's like saying a dead worm and a live worm are not so different.

What? No.

> If there is a difference in how they evolve, there is a something to be like it because it has to have some kind of perception and action selection mechanism. What it's like to be a worm is the perception stream evaluated by the value function of the worm.

Assuming that by "evolve" you mean "evolve as a dynamical system", then yes.


Yes, that's what I mean. I deliberated a lot about using "act" or "react" but it doesn't apply to inert objects as well.


No new words are needed. What we need is for people to stop conjuring new words to explain the same things over and over again, using and re-using words under different meanings without ever bothering to either listen to or read their predecessors properly, which only leads to feelings of ineffectualness and a strange sensation of déjà vu.

When authors in the philosophy of mind stop rehashing ideas that could have - and most likely did - occur to 17th century philosophers for the sake of attaching their names to a publication, then the field will see some progress.


> When authors in the philosophy of mind stop rehashing ideas that could have - and most likely did - occur to 17th century philosophers for the sake of attaching their names to a publication, then the field will see some progress.

I doubt people involved in philosophy of the mind will ever be useful here. If one keeps up to date on neuroscience, they'll see a lot of progress being made, but none of it is coming from the "philosophy of the mind" people. Not only that, but the "philosophy of the mind" people don't even seem to be aware of much of what's happening in neuroscience.

There seems to be a certain "God of the gaps" aspect to how philosophy approaches science, where people pretend that philosophy is a fundamental approach to a field that is working on a particularly large and difficult problem. So you see that philosophers often approach things like the human mind and quantum theory quite differently from how they approach chemistry or non-mind physiology.


In that regard I always find it odd how much philosophy and religion seem to have in common. Large parts of most religions basically boil down to convincing people of something for which there's no actual evidence, solely by the use of philosophical and rhetorical devices; it's de facto large-scale social engineering.

Which basically makes religion something like the original "science of philosophy"?


> No new words are needed.

Probably true. In fact, watching the film Avatar, where the Na'vi can connect with their environment, is perhaps the best example of consciousness in action switching its focus from one part of the body to another, where their environment is their body, and the Na'vi is consciousness itself.

Our bodies are largely autonomous, with cells communicating via a variety of methods, either pumping chemicals in and out of the cell itself, or the immune system placing protein markers for other parts of the body (immune cells) to deal with, bearing in mind that every cell in the body has a complete copy of the DNA.

One thing I have learnt about the body is that there is a lot of redundancy built in for its survival. Every cell has a complete copy of the DNA, which can be altered, leading to cells not functioning properly, and there are a variety of means by which the consciousness can be alerted to problems that need addressing, like hormones or nerve messages.

Simply dropping 500 mg of niacin (the non-flush variety) on an empty stomach demonstrates how quickly the prostaglandins can be stimulated into action in many different parts of the body, not just the skin but internal organs like the lungs and liver. How aware your consciousness is of these changes and effects is perhaps also an indicator of your consciousness, just as some people are more aware of their food cravings.

Considering food cravings, and how pregnant women can suddenly start craving things like coal when they have perhaps never eaten it before, also indicates some sort of memory that may have been passed on by their parents; or maybe there is some other subconscious mechanism, perhaps through the nose, that enables the body to identify a substance containing chemicals the body needs.

The fact that our taste and enjoyment of food and drink is altered simply by light also indicates our consciousness is easily swayed over the logic of remembered/imagined experience.

Ultimately, it's hard to define what consciousness is exactly. Considering the blood-brain barrier, and how hard it is for some chemicals but not others to reach the brain: is our consciousness largely just a collection of learned experiences, where nature has utilised cholesterol as a storage medium, and consciousness itself is just a memory of chemical and electrical responses in a variety of cells, which the brain is able to recall and be influenced by via feedback mechanisms like nerves, immune cells and hormones?


> Jaynes' "The Origins of Consciousness in the Breakdown of the Bicameral Mind" is perhaps the most egregious offender.

Can you expand on this? I am reading this book at the moment, but I don't quite follow your criticism. He doesn't seem to me to be talking about phenomena related to "consciousness of consciousness", but rather about how consciousness arises as a functional process, and as a consequence of our linguistic heritage (though I haven't finished the book yet, and I find it all quite confusing, so I'm not really sure).

For reference, I found the entry for "Consciousness, defined" in the index, and the paragraph on the referenced page that most closely corresponds to a definition seems to be this:

> Subjective conscious mind is an analog of what is called the real world. It is built up with a vocabulary or lexical field whose terms are all metaphors or analogs of behavior in the physical world. Its reality is of the same order as mathematics. It allows us to shortcut behavioral processes and arrive at more adequate decisions. Like mathematics, it is an operator rather than a thing or repository. And it is intimately bound up with volition and decision.


Note the distinction between:

1. consciousness as "awareness" or "experiencing existence"

2. consciousness as "mind", or what we experience that doesn't seem to correspond to the outside world.

The former is certainly not a result of language, and that is what is referred to as consciousness in Buddhist/Hindu philosophy. The latter is definitely influenced by language and seems to be what many modern scientists are referring to when they talk about consciousness. The conflation of the two definitions does a great disservice to understanding what consciousness "is".


> The former is certainly not a result of language, and that is what is referred to as consciousness in Buddhist/Hindu philosophy.

I recommend reading Jaynes' book (assuming you haven't already) before you draw any firm conclusions on the role of language.

I really don't think I can do him justice, but his thesis seems to be that humans were, until relatively recently (4000 years ago, perhaps?), non-conscious - they would not have "experienced existence".

He goes on to discuss consciousness as a product of metaphor, and thus with decidedly linguistic origins. That is a terrible summary, though, so I recommend reading the book and forming your own view - I have found it quite enjoyable so far.

As an aside, he starts the book with a list of things that consciousness is not, from his perspective:

* Consciousness Not a Copy of Experience

* Consciousness Not Necessary for Concepts

* Consciousness Not Necessary for Learning

* Consciousness Not Necessary for Thinking

* Consciousness Not Necessary for Reason


>This equivocation between consciousness and meta-consciousness is so widespread that a plethora of books, which are seemingly dedicated to the project of "explaining consciousness", instead spend their entire contents explaining some hypothesis involving meta-consciousness and then proceed to declare victory without ever touching consciousness itself

I think this trend is bound up inextricably with our efforts to distinguish ourselves from "lesser" animals. A monkey may be able to reason intelligently, but it cannot communicate to us whether it is aware of itself doing so. We therefore lack evidence of meta-consciousness in animals and tend to attach out-sized significance to it as a potential mark of human superiority. Even though absence of evidence isn't evidence of absence.

From an evolutionary perspective, consciousness seems to derive from the same mental equipment pre-human hominids developed for handling social interactions. The ability to model and predict the mental states of other hominids provided a survival advantage. Once sufficiently developed, this ability can be applied to itself, so that the hominid is modeling its own thought processes and mental states.

To me, this doesn't seem like some sort of deeper, insoluble mystery, but rather the symptom of an unsatisfactorily detailed model of that mental equipment. I think the problem loses its peculiar appeal when we can identify with a reasonable degree of certainty that what we experience as consciousness corresponds to specific neural pathways and configurations. Right now, the philosophical arguments are too abstract to be convincing and the science is too immature to discourage all the useless theorizing.


I couldn't agree more!

Furthermore, as also pointed out by others above, all discussions about consciousness tend to conflate what appear to me to be distinct phenomena. The first could be described as a persistence of thought or "low-level" consciousness, which is likely explained by the evolution of a working memory. The second could be described as self-awareness, which is likely a small, nearly inevitable leap from "low-level" consciousness. The third could be described as meta-consciousness, whose origin is likely explained by your social-evolution hypothesis.


I found this series on the New York Review of Books website gives an understandable, and readable history of the main threads of discussion on this, http://www.nybooks.com/topics/on-consciousness/


>We need new words - or wider usage of existing ones.

Agreed - trouble is, though, whenever you start to enforce this particular aspect of the discussion, you end up either being accused of starting a cult, or following one...


I agree.

Within the "metaconsciousness" definition provided in the article, I think it very likely that the distinction is more of an illusion than anything else.

Attention is the key to distinguishing between unconscious thought and conscious thought.

Say we're learning to operate a car... You are very "aware" of steering and pedals and gears. After a while, you start just thinking in terms of forward, this way, stop, faster. The mechanics start to be handled "subconsciously"^ as you go along. Awareness/attention then gets drawn back if something goes wrong, say your gear sticks. A lot of learning seems to be like this: aware and conscious actions repeated until they become subconscious. I think this process is gradual. More importantly, the observable outputs are very hard to stack neatly into two piles.

On top of that we have the problem of fooling ourselves. If I asked you "why did you change lanes?" you would probably give me an answer, but a lot of experiments suggest that answer is made up after the fact.

I think it's more likely that metaconsciousness is made of the same stuff as subconsciousness. It's just the type of consciousness that we notice (tautology?) and consider to be us.

TL;DR: I agree that the more important breakthroughs will probably be at the worm-consciousness level, but that is itself a theory of consciousness.

^As mentioned below, it's a problem using, defining and speculating about these terms simultaneously. I'm doing it anyway, but yeah...


It seems to me that if you extend 'conscious' to cover things outside a person's attention, you've made the word 'conscious' meaningless. If I take away all of the things that you claim to be 'conscious' of but not actively aware of, would you be comfortable with me saying you were no longer conscious? Sure, you're actively thinking about things, but you don't have that sensation of the bottom of your big toe in your shoe somewhere in the background; therefore you must be less than conscious. The idea of meta-consciousness I think still has plenty of merit, but extending 'conscious' to everything you are unaware of is useless and also can never stop. Why not include the things that are stimulating your nerves as well as the stimulation of the nerves itself? Why not extend it out to blanket the whole of the universe?


>Yet it's most often the very thing we spend all our time focusing on.

We think. We think about thinking. We think about thinking about thinking.


I actually think Jaynes' approach is the right one. Consciousness isn't a thing in itself any more than love is.

It's a concept but it has no destination.

I don't think it's presented as the hard problem; it's just hard to think about something which isn't, as if it is.


>If there's anything it is like to be a worm, THAT'S the hard problem.

Well, the problem is to give some definition of "what it is like to be X" that we can cleanly separate from the "mere" sensorimotor statistics of X.


This article, like most discussions on "consciousness" I encounter reads like little more than word games to me.

We could similarly write articles about all the qualities and properties of phlogiston, and we'd have learned just as much.

What thing are we actually trying to explain here? What is the extensional meaning of the word "consciousness"? How can I empirically determine if some object/animal/thing possesses consciousness?

At this point it mostly just seems like we define ourselves to have this "conscious" property and then circularly use that as a distinguishing characteristic of humans.

How is "consciousness" not just a modern word for "soul", minus the afterlife stuff?


To me consciousness is the ability to re-engineer our existing models of the world based on new incoming data. Sentience is the ability to do this in the abstract.

I realize that this just shifts the discussion to "What, exactly is 'our existing models of the world'?", but I think that question can have much more traction, especially to an engineer, than the current state of the language over on the philosophy side.

It also implies that there is a spectrum: the complexity of the model(s), the speed at which we can re-engineer them, our ability to hold multiple models in mind at once, and the degree of abstract indirection we're able to employ are all variable. I'd suggest that future work with questions such as "What defines a sentient being?" proceed along those lines, and using these definitions would be much more valuable than trying to stratify consciousnesses into various semantic tiers that still remain quite abstract.


> How can I empirically determine if some object/animal/thing possesses consciousness?

You can't, that's just the issue and that's what gives rise to behaviorism, solipsism, etc. etc.

Sure, you could arbitrarily decide that some brain state Y recorded by instruments xzw in the field of neuroscience is a state of 'consciousness', but this is hardly a step up from the heuristic methods we've used since the dawn of time, namely duck typing: if the thing looks like it's conscious, and acts like it's conscious, it is probably conscious.

Consciousness is one of those problems that becomes more problematic once you harp on it but only presents itself as a problem in practice in very rare cases.

Sure, you could argue the issue will gain in relevance as AI and machine learning become more sophisticated, but at this point there's still very much an uncanny valley that enables us to easily differentiate between biological and purely computational/mechanical systems.

When it comes to consciousness I think we do well to remember Wittgenstein's advice on doubting--that a doubt is really only worthwhile when it's useful in some fashion. To doubt just for the sake of it, or just because we can is usually not productive and leads us nowhere. Doubts are useful insofar as they are precise and rub up against propositions we hold true with some degree of firmness (actually to be able to doubt at all requires we hold some confident assertions, e.g. one of them in the case of studying consciousness: "no matter what the study says, I'm confident at the very least, that I am conscious").


> You can't, that's just the issue...

Do you judge me as a conscious being? If so, then you came to that conclusion by making some empirical observations.

What are those observations?

I'm not trying to explain consciousness here, I'm just trying to identify it.


If you want a rigorous account of what neuroscientists call consciousness, which isn't quite the same thing as what philosophers call consciousness, I'd recommend reading Consciousness and the Brain by Stanislas Dehaene. Looking at a brain, the reactions to stimulus that enters consciousness and stimulus that remains subliminal are very distinct. And, for example, only the effects of conscious stimulus persist in the brain for more than a couple of seconds. Well, Pavlovian conditioning between stimuli can occur subconsciously, but only if the two stimuli are presented at the same time. If you want to condition a dog to salivate when you ring a bell, you either have to ring the bell loud enough for the dog to be conscious of it or you have to ring it while the dog is eating.


> How is "consciousness" not just a modern word for "soul", minus the afterlife stuff?

There's actually an interesting analog with intelligent design. In both cases (evolution, the brain) it's clear that the emergent properties of incredibly complex systems are creating surprisingly complex results (surprisingly complex to the average observer). In both cases this bothers people because it means that humans aren't anything particularly special (just matter that happens to go through certain processes), and because it's hard for them to grasp the scale and complexity of the systems involved. So an invisible non-material explanation is theorized instead, which sets humanity above the "mere" material world.


Calling 'consciousness' 'soul without an afterlife' is just as much a word game as any other - which is to say that if it is offered as the starting point of a discussion, it is not simply a word game, but if it is declared to be the answer, then it is.

I don't think the circularity is there, because while there do not seem to be other animals today with the same sort of consciousness as humans, we can reasonably assume that some of our now-extinct ancestors had various levels of consciousness, whatever it is.

We are indeed groping around here like 18th-century scientists trying to make sense of heat. I don't think that makes it pointless, but we should be suspicious of the claims of either physicalists or dualists who think that they have proven the other side is wrong (though my instincts are with the physicalists.)


Yes, as Daniel Dennett says, groping around trying to find the right questions is what philosophy is. The confusion can be upsetting for both philosophers and non-philosophers. Creating a space for questions and contradictory ideas is a must for any intellectual activity, however.

>18th-century scientists trying to make sense of heat

All that mucking about with the problems of phlogiston could have been a necessary step towards thermodynamics. Similarly, speculations about consciousness could turn out to be a necessary step towards AGI (or vice-versa).


Have you heard it said that you can never know if another being is really conscious, and yet you can know that you are? Presumably you disagree? This is what, in essence, it means to be a subjective fact rather than an objective one, and why it's hard to pin it down as a "thing."


But that's the problem with the philosophy of mind. They have all these subjective notions like consciousness, self-consciousness, analytic and synthetic I, the accompanying such-and-such feeling (e.g. I-feeling), self-awareness, awareness, attention, qualia, qualitative experience vs. content of qualitative experience vs. phenomenal experience vs. sense data vs. inner experience, cognition, embodied cognition, stream of consciousness, and so forth. Each of them comes with many fine-grained additional distinctions, but they are all subjective and cannot be measured easily, and every author promotes his own vocabulary.

Then they wonder why they can't agree on a common theory, quibble endlessly about the adequacy of their various subjective definitions, and most if not all of their arguments are based on 'intuition-pumping'.


The topics you mention are mostly arguments against physicalism, rather than being for something. There seems to be a lot more effort expended (on both sides of the physicalism-dualism divide) in trying to disprove the other side, rather than on explaining consciousness, though perhaps that's just what the general-interest coverage focuses on.


If "consciousness" is taken to refer to this "thing" I can be certain I have but cannot be certain you do, then it doesn't refer to any objective property that can be proven or explained, right?


Looking for proof in these matters is probably a fool's errand (though many seem to undertake it...) As for explanations, that doesn't seem to be out of the question up to some assumptions, as is usually the case with explanations. Anyone choosing to go down the rabbit-hole of solipsism is literally alone with his thoughts.


I agree. All I mean to say is that the reason it's hard to come to a consensus on what we mean by "consciousness" is that we're trying to unambiguously capture this first-person thing in third-person terms. This is bound to fail for effectively the same reason it's impossible to know (with certainty) that anyone other than myself is conscious. I can at best capture evidence for the objective properties about you that make me believe you're conscious.


I don't so much disagree as am confused by what people (including myself) mean with the word "consciousness". It also doesn't seem like this is a discussion of pure qualia on the level of "Is the red you see the same as the one I see?"

For one, I certainly judge you, Mr. Internet Person Monktastic1, as being conscious. And I can somewhat unpack that judgement into relatively empirical observations that can be cross-compared with someone else's. This feels really similar to my judgement of you as possessing the trait "human".

On a meta-discussion level, the rhetoric I usually encounter reminds me a lot of the Plato-Diogenes debate on the essence of "human". Can we really reduce the intuitive notion of consciousness to "featherless biped thoughts" or some such?

For those reasons, I suspect that the intuitive notion of "consciousness" is more a (fuzzy) collection of expectations we have about lower-level traits. Like, since you are able to engage in this conversation, I judge you as human, and this therefore leads me to expect that you also have a functioning pancreas. On the surface, though, language ability and pancreas possession don't have any obvious reason to be linked.

Taking a bird's-eye view, I do hope that this discussion continues until the questions become sharp and precise enough to be called science instead of philosophy.


> I don't so much disagree as am confused by what people (including myself) mean with the word "consciousness".

If I close my eyes and ask myself "doesn't something seem to be happening?" (setting aside, for a moment, whether anything is actually happening -- since that requires a whole slew of interpretations), the answer is an unequivocal "yes." That (to me) is consciousness. There is a particular certainty (whether right or wrong) that applies only to that question.

Whether you have a functioning pancreas is something that, in principle, we each have access to. Whether or not "something seems to be happening" for you is inaccessible to me, even in principle. At least, as far as I can see.


While it is a curious question why my consciousness exists instead of not existing (maybe not in the same way as my brain exists), I find it as worth pursuing as the question "Why does anything exist at all?"

Other questions about human consciousness can be formulated in terms of observables.


No, I think consciousness has the role of protecting the life of the organism and reproduction. It's not a "soul" thing, it's a "body" thing, or more exactly, it's a "world+agent+game" thing.


My belief is that consciousness is the ability of an organism to model the world and predict the future. Any organism that doesn't just react is on the consciousness spectrum. When a creature is presented with two or more choices and has to decide which will have the best outcome, it is making a conscious choice.

The problem with starting at human consciousness is that our consciousness is too advanced to easily grasp. We can not only internally model our real world very well and make predictions years in advance, we can also make up completely coherent 'fake' worlds. The problem with animal consciousness is that we have no means of forming a coherent model of what is occurring in their minds.
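The "model the world and predict the future" view above can be sketched as a toy agent. This is purely illustrative, under invented assumptions (the states, actions, and values are made up): the point is only that the agent simulates each candidate action against an internal model and picks the predicted-best outcome, rather than reacting reflexively.

```python
# Toy sketch of "consciousness as prediction": the agent runs each
# available action through an internal world model and chooses the
# action whose predicted outcome it values most.

def predict_outcome(world_model, state, action):
    """Predict the next state using the agent's internal model."""
    return world_model[(state, action)]

def choose(world_model, value, state, actions):
    """Pick the action with the highest-valued predicted outcome."""
    return max(actions, key=lambda a: value(predict_outcome(world_model, state, a)))

# A minimal invented "world": states and transitions for a simple creature.
world_model = {
    ("hungry", "forage"): "fed",
    ("hungry", "rest"): "starving",
    ("fed", "forage"): "fed",
    ("fed", "rest"): "rested",
}
value = {"starving": -10, "hungry": -1, "fed": 5, "rested": 8}.get

print(choose(world_model, value, "hungry", ["forage", "rest"]))  # forage
print(choose(world_model, value, "fed", ["forage", "rest"]))     # rest
```

On this toy account, a purely reactive system would map state directly to action; the "conscious choice" the comment describes is exactly the extra step of evaluating predicted futures before acting.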


We also have AI agents. By creating robots we gain a lot of insight into the difficulties solved by the brain. With AI agents we can experiment much more freely than with animals or humans; we can peek into their workings and try variations.


> How can I emperically determine if some object/animal/thing possesses consciousness?

There is currently no way. We don't even know if all humans possess it, or which animals do. Maybe people who deny its existence just don't have it.

> How is "consciousness" not just a modern word for "soul", minus the afterlife stuff?

It's not modern vs old, it is more scientific vs religious.

I agree that most articles about consciousness feel like word games though.


The issue presented at the beginning of the article is (as most philosophical issues are) one of semantics. Philosophers as I understand it use "consciousness" as the quality shared by things that are able to have experiences. A rock gets wet by the rain, but humans "feel" wet when it rains. A bat might not self-reflect but it feels /something/ when it uses echo-location.

On the other hand, consciousness in our everyday use of the term is very tied to the idea of attention and awareness, i.e. a "conscious action" or an "unconscious motivation". This is a very Freudian concept: that there are thoughts we think and others that lie behind. The philosophical consciousness is much more holistic, but also very vague. You wouldn't think a tree has an Ego, but it might have some sort of existential awareness.

It's cool to see the philosophy of consciousness intersect with neuroscience, since outsiders like me tend to think of these fields as at-odds.


Listening to Marvin Minsky on Closer To Truth on Youtube was a revelation for me. As in many philosophical debates, before you even start to begin to think about and communicate about a subject you have to get your terminology clear and accurate.

Consciousness is a suitcase term that contains within it a mass of different mental processes. It's just not useful to talk about it much in general, except to note that it's so general a term because, frankly, we just don't understand it yet. If we did, we'd at least have a general idea of how to design one, and we don't. Rather than try to make any more pronouncements about this, I'll just refer to what he has to say on the subject, as he does a much better job than I ever could.

This is a good primer on this: https://www.youtube.com/watch?v=6r70jzcmMxU&list=PLFJr3pJl27...

Here's a great explanation of this in the form of an apparently simple example scenario and the very many different faculties of the brain that it entails: https://www.youtube.com/watch?v=wPeVMDYodN8&list=PLFJr3pJl27...


My best guess is that the universe itself is conscious, and we'll eventually let go of the idea of individual consciousnesses. An ant hive sees a larger field more clearly than I do, maybe at a slower refresh rate, but not by much.

Are individual ants conscious? Somewhat. Are we conscious when we're receiving reality through memes beyond our comprehension? Somewhat.

Are our rods and cones conscious? Our brain stem? The cells that make us vomit? The bacterial colonies in our sinuses?

I just think the answer to all of these questions when all is said and done will be yes. And we'll be sad how much life we killed because we didn't think it was conscious enough.

And maybe if we're lucky when we die we get to watch our consciousness grow to include all things. Maybe that's why we have circuits in our brain that seem to respond that way to mushrooms.

Maybe there's a genetic advantage to harboring the illusion that you are a separate consciousness, but it's just there to make you want to struggle to hold up your tiny corner of a collective consciousness.


No, the universe isn't conscious. The universe is matter. Consciousness is a part of the universe. Therefore consciousness is simply material. Nothing more, nothing less.

It's just that our egos are too big to accept that we are nothing more than dirt.


Have you ever considered that objective reality is seated in consciousness rather than the other way around? Look into Plato's allegory of the cave. The world around you is actually a simulation created in your brain by synthesizing inputs from your sensory organs - you aren't experiencing it directly. That model includes the conception that you have a brain and sensory organs which simulate an objective reality. In other words, the possibility that objective reality stems from the subjective world, i.e. the mind, is not an absurd assertion.


Just to nitpick: Wouldn't this by definition make it non-objective?

I would think the objective reality in your explanation would be what your sensory organs are reacting to, not what you perceive.


Yes I've thought of this but didn't want to complicate things. My current stance is that reality is dualistic and both subjective and objective are symbiotic and can't exist without the other.


Why do you believe this to be the case?


It seems to me that there is no evidence for either the subjective or the objective realms being the "real" reality. There is a strong argument for either, and they can both be explored with depth.

The concepts of "reality", of a physical universe, of a "you", of order and pattern, of a persistent objective reality outside of you, do not exist per se, but as simulations and models in your mind that you have accrued throughout your life.

To be more specific, there is an underlying reality that is neither "objective" nor "subjective", but from which both of these realities emerge once a conscious being appears in it. We cannot directly know the true nature of this underlying reality, as it cannot be directly experienced. What we call objective reality is really the experience of the simulation in our mind synthesized from sensory data, as well as our model of this simulation (science), and not the underlying "true" reality.


I'm not sure I follow.

From my understanding of the term, the "real" reality would be the same as the objective realm by definition;

What do you mean when you say "objective" as in "... the subjective or the objective realms"? Some sort of shared definition? If so, wouldn't this also be subjective to a certain degree?

I might just be post-hoc rationalizing my own interpretations here, but don't our subjective experiences of reality being based on our sensory organs' interpretation of stimuli from what is presumably the objective reality make fewer assumptions than a concept involving interconnected or "symbiotic" realities?


Just to add to the discussion: Subject would be Perceiver, and Object would be Perceived. Are they actually capable of being separated?


Why should we assume that they are not?


In an instantaneous clip of all our sensory data (visual field, auditory sphere, tastes, smells, touches/bodily tactility), where do we draw the line between perceiver and perceived?


The possibility of "objective" reality coming from the mind is indeed a common theme from Plato to The Matrix. Very fun to explore, but actually useless, if not actively harmful.


That's a content-free drive-by comment. You only criticize without a single reason as to why the idea is "useless" and "harmful".


No, the universe isn't math. The universe is matter. Math is a part of the universe. Therefore math is simply material. Nothing more, nothing less.


Math isn't anything but matter. Math is just another thought construct. A powerful thought construct, but a thought construct nonetheless. As a thought construct it too is material. The electronic impulses in your computer are no different than the ones in your brain.

So yes. Math is also a material manifestation. Life is a material manifestation. Thought is a material manifestation. The problem is with you trying to separate yourself from matter, but the very thing you're using to create that separation is thought. It is the very thing you're trying to get away from, and so you cannot exist separate from thought by using thought. It is a futile and empty game.


Ideas are not material. Even if they are stored in material brains.

Especially not when they can be independently rediscovered without having any trace of them stored anywhere.


Ideas are not material, but thevardanian has a point in the context of this thread. Unless you believe in some kind of supernatural force, all you have is matter and matter-based thought. Ideas are not material but they still need a brain to be held. And a very conscious one.

About rediscovering, you can simply argue that ideas are matter properties so an intelligent being can recognize them.


You can not distinguish ideas from material reality. That is sensory perception, and your ideas that result are basically the accumulation of material structures, or simply knowledge. So ideas are not anything more than the knowledge that you have which interprets, and reinterprets the sensory perception.


Is difference (the distinction in question) something intrinsic to the world, or something imposed upon the world?


Let's not even get into "matter" and modern science which already undermines the notion that things are solid...

Let's just get to the point:

If EVERYTHING in the universe, everything in reality is "matter"... then what does "matter" mean anymore?

It may as well be magic. Harry Potter material. No difference. Because this "matter" exists in nothing.

You are in a building, in a city, in a country, on a continent, on planet Earth, in the solar system, itself floating in a galaxy, itself floating in the "universe" ... itself floating in ... what?

Nothing.

At this point, you call it matter; I could call it "consciousness" (in the author Bernardo Kastrup's sense, as an underlying field to everything); some may call it "god". We can also just call it magic. Poof! Here we are.

PS: Also see

The mental Universe

http://deanradin.com/evidence/Henry2005Nature.pdf

The Physicalist Worldview as Neurotic Ego-Defense Mechanism https://www.instapaper.com/read/811142201


How do you know? Where is your proof?

If matter is all there is, that means either you deny the existence of consciousness (which is an untenable position), or you believe that it arises out of nothing (which requires a "magical" mechanism).


It's the other way around. The existence of our egos makes the claim that we are more than dirt true.


Consider the possibility that you are underestimating the wonder of dirt.


While I agree with your statement it's hard to actually prove.


Take mushrooms.


They just play with the nervous system and jumble up the brain chemistry. Ascribing those experiences to something supernatural or spiritual is the problem. The experience is just that. A fleeting experience. To constantly demand an experience over and over again, or to ascribe certain experiences some great importance, is just you interpreting those experiences. Everyday life is just as "spiritual" as anything else. It is the newness and uniqueness that you demand. This uniqueness of experience is only demanded by the knowledge that you have. Otherwise you have no way of telling yourself that it is a unique experience, as each moment in life is a unique moment.

There's nothing really spiritual about it. There's nothing really special about it. It's the value that you ascribe to it, through the knowledge that you have which makes it "special".


> The experience is just that. A fleeting experience.

I don't know what it is. I'm open to multiple possibilities. Either way, you don't know.


All I'm pointing out is that knowing is the only thing that is there. The experiencing structure is only the knowledge that is there of the experience. Otherwise if knowledge wasn't there you wouldn't know what you are experiencing, or when you're experiencing it.


Godel Escher Bach has a bit where he explores an ant colony as consciousness, opposed to an individual ant (which are more like "thoughts", to put it one way)


Perhaps individual ants are conscious to a degree. This kind of non-binary consciousness was explored in "I Am a Strange Loop".


Consciousness is clearly a spectrum. Ask anyone who has ever got really drunk, or even woken up slowly.

Trying to ask 'when does a baby become conscious' is like trying to say 'when does a baby become tall'.

I still think consciousness is a real thing, and we definitely don't know what causes it, and maybe it is possible to find out. But I think the answer (if there is one) will come from AI - definitely not philosophers and probably not biologists.


> I think the answer (if there is one) will come from AI

We might have to wait until brain simulation becomes a reality, which I expect to happen eventually (assuming we don't manage to pull the plug on civilization).


There are species of ants that can pass the mirror test.


I'm completely lost. What does it mean to be conscious of something, but unaware of it? I thought those were synonyms.

The article asks this question:

> Consider your breathing right now: the sensation of air flowing through your nostrils, the movements of your diaphragm, etcetera. Were you not experiencing these sensations a moment ago, before I directed your attention to them? Or were you just unaware that you were experiencing them all along? By directing your attention to these sensations, did I make them conscious or did I simply cause you to experience the extra quality of knowing that the sensations were conscious?

Implicitly saying that "experience" and "consciousness" should be defined such that I experienced and was conscious of those sensations even though I was not aware of them. Thus, using the terminology of the article, I am currently conscious of a ton of things all at once:

The feeling of my feet on the floor, my butt in the seat, my tongue in my mouth, the sound of my typing, and of the rattling behind me, and of the rumbling of the train, the vibrations of the train, the proprioceptive feeling of the positions of my limbs, the slight but distinctive smell of mass transit.

Is this right? Is "consciousness" in the sense of the article just a synonym for "anything the brain processes"?


> Is "consciousness" in the sense of the article just a synonym for "anything the brain processes"?

And is able to describe that thing, even if just to itself.

Have you ever experienced, in a dream, being able to do some wondrous thing - write music, feel a wall from a distance, whatever? In the dream you feel that ability. But when you wake up, while you remember dreaming about it, you cannot actually make yourself re-experience it. (For example: describe the song you wrote.)

That's what the author claims is the difference between conscious and not conscious. (If I understood him correctly.)

If you have conscious thought, you can cause your mind to experience any sensation you have already had. A non-conscious being can not do that, they can only experience what they experience right this moment.


In your dream example, I remember the music. But in the article's example, I don't even remember my breath. Since I don't remember it, I cannot recall it. So while that's a nice distinction, it doesn't seem to be what the article was trying to get at.


Point taken. One reading of the article is that meta-consciousness is the new consciousness (whatever that was).

I think of consciousness as being 'what you are paying attention to and what it is like'. One problem with this is that when you're attending sufficiently strongly to something you can't actually notice what it is like; there isn't enough bandwidth free. Yet some of the associated ideas ('qualia') may linger in working memory for a more relaxed examination afterwards. Which proves that you were paying attention and that mental work was going on. (Perhaps it also shows that trying too hard is counter-productive due to lack of integration with the rest of the mind.)


Well there is attention too.

Selecting from that stream, is that the over consciousness?

And what chooses which stream to pay attention to?

It's recursive and mind bending.


I'd define consciousness as "anything the brain processes", where the goal is adaptation of the organism to the environment. It's very much about goal driven behavior.


It is hard to take the author seriously, or as an authority, when he rejects materialism and states something like "Physical death is merely a de-clenching of awareness", and he believes in the afterlife.


Why does that make the author hard to take seriously? Because of your bias toward materialism?


Well, when it comes to consciousness the alternative is dualism and dualism is the belief in magic.

Given how discredited dualism is, it is amazing how far people will go to deny they are dualists whilst spouting all the arguments and beliefs of dualists.


Dualism is the belief there is more than one fundamental substance in the world (mind & matter). There's nothing magical about that, it's just an ontological position. But it does have the problem of how mind & matter interact.


> But it does have the problem of how mind & matter interact.

Yes, that is where the magic is.


Complete lack of evidence to support his "belief" that sounds very much like supernatural religious magic.


This seems consistent with my experience that when I'm attending hard to a task I have no idea what it is like.

>For instance, it is the occurrence of a sense perception that triggers the metacognitive realization one is perceiving something. N, in turn, evokes X by directing attention back to it: the realization one is perceiving something naturally shifts one’s mental focus back to the original perception. So we end up with a back-and-forth cycle of evocations whereby X triggers N, which in turn evokes X, which again triggers N, and so forth.

This also seems plausible since we can't perceive a new thing accurately without a prior expectation of what it's like. This could be solved by an iterative cycle of increasing realism and accuracy.


This puts me in an uncomfortable spot w/regards to abortion. My go to argument was always 'the lights aren't on yet'.

If I'm understanding correctly they might be, albeit dimly, and each day getting brighter.


I'd encourage anyone to examine their conscience, read about the viability of babies born prematurely, and consider that even post-birth a baby is not self-sufficient. We don't have good criteria for what makes something alive or conscious, yet many draw an arbitrary line at birth.

I'm not looking to continue this discussion, but I'm glad you're thinking about this.


"Self-sufficient" is not a good argument. Most people are not self-sufficient when it comes to survival on their own. Just look at the history - as soon as mass-produced food is cut off, famines kill hundreds of thousands, millions sometimes.

I used to support abortion, but more and more I'm sliding towards being maybe against it, because (some of) the anti-abortion side arguments are logical too. No matter how you draw the line based on some argument, you can use the same logic to kill adults. That includes viability, lack of pain, etc.

On the other hand, our technology is at the level where you can turn any cell in your body into sperm:

https://phys.org/news/2016-04-scientists-skin-cells-human-sp...

So if you're against abortion based on the "viability", every time you scratch your nose, you commit genocide.

It's a hard question.


Why would this be related to abortion? I don't see anything in the article mentioning human embryos, only newborns...


The lights start the moment there is attraction between two people...


So breaking up with my girlfriend is abortion?


[flagged]


Please keep religious flamewar off HN.


Expanding on the breathing example in the article, could we say there are at least three distinct systems in the brain that control breathing? They are:

- Conscious breathing: I follow a chosen breathing pattern with no immediate awareness of the breathing process.

- Meta-conscious breathing: I choose to be fully aware and in control of multiple aspects of breathing. I control my diaphragm and lips directly, I listen to and feel air passing, I consider how the position of my body affects breathing, etc. I also may choose an appropriate breathing pattern: deeper for hiking, more rhythmic for aerobic exercise, or slow for contemplation. After I have chosen a new pattern, that new pattern soon becomes conscious rather than meta-conscious; although the special breathing continues, I am no longer immediately aware of my new behavior. A change in activity reverts my conscious breathing behavior back to normal without any meta-conscious choice.

- Unconscious breathing: What I do when I sleep. It's clearly different from conscious breathing because I may snore, and many people even suffer from sleep apnea, which means they momentarily stop breathing while sleeping. Neither snoring nor sleep apnea is part of the experience of breathing consciously, so unconscious breathing must be very different somehow.

Am I using the terms correctly?

Curiously, many activities like speaking, eating, and swimming interrupt normal breathing yet do not necessarily invoke any meta-conscious decision making. If I had to think about when to breathe every time I wanted to speak, my meta-conscious mind would be overloaded with that task alone and I would forget what I wanted to say.


From my gut I'd say there are two - your "conscious breathing" is actually the same as your unconscious breathing process: both happen without awareness and are primarily modulated via the vegetative system (O2 receptors in the carotid, the Hering-Breuer reflex, and so on).

The difference being that while you're awake you can consciously affect and correct your breathing (your "meta-conscious breathing"). When you're asleep, your body has to rely on its other unconscious mechanisms to do that.


My interpretation of "meta-consciousness" as described by the article:

It is simply a brief period of intent, focused replay/study of the experience that just occurred, while it is very fresh in the mind/short-term memory. One is closely reviewing what happened, looking at the experience from different angles, replaying it, etc. This could all happen in the span of a second or less.

It is a matter of focus and proximity. It isn't a special layer of consciousness. It may seem qualitatively different, as the mental model of the experience is so deep & rich in the moments immediately during and after, if one is focusing closely. But I don't see the argument for this becoming some sort of "meta-consciousness".

In short, the concepts of "consciousness" and "being conscious of [something]" are related, but they aren't the exact same thing.


What if you're merely interpreting meta-consciousness according to the mode in which your mind currently operates?


Come again?


"Reflection and introspection are programming facilities in the Java programming language that allow an object to discover information about itself and other objects at runtime."

Sounds about similar.
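For illustration, here is the same idea in Python rather than Java (a hypothetical toy, not anything from the article): an object discovering information about its own structure at runtime, loosely analogous to a mind inspecting its own states.

```python
# A toy sketch of runtime introspection: an object reporting on itself,
# loosely analogous to the Java reflection facilities quoted above.

class Mind:
    def __init__(self):
        self.percepts = ["breathing", "typing"]

    def introspect(self):
        # The object discovers information about itself at runtime.
        return {
            "type": type(self).__name__,
            "attributes": sorted(vars(self)),
            "methods": [m for m in dir(self)
                        if callable(getattr(self, m)) and not m.startswith("_")],
        }

report = Mind().introspect()
print(report["type"])        # Mind
print(report["attributes"])  # ['percepts']
```

The analogy only goes so far, of course: introspection here is just another method call, which is arguably the article's point about meta-consciousness being an ordinary process layered on top.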


I have a theory.

Any theory about consciousness that does not include learning is at best incomplete, but more likely just wrong.

---

Our brains create models of the world. They are constantly predicting, making choices. And on surprise, looking back to see where it went wrong. On reward, reinforcing the behavior leading up to it.

In a way, our brains are observers of our environment, our bodies, and itself.


I've been involved in metacognition research, but I am confused by the distinction being raised between consciousness and meta-consciousness, as 'consciousness' seems to reduce to 'interact' - as in, particle A hits particle B - and so reduced, it seems to have no meaning.


I lost the thread in the first two paragraphs. Can anyone explain to me why it's so obvious infants are conscious? This seems to be taken as a given for the rest of the article. Couldn't they be in a state similar to sleepwalking, which seems not conscious to me? Perhaps consciousness flashes on as you have your first few "flashbulb memories".


Consciousness does not have to mean being "mentally" (intellectually) aware of what's going on. It's about experiencing stuff, and having a _sensation_


In my view, the two dynamics are incredibly intertwined. The sensation is largely made up of and intertwined with an understanding of what the sensation _means_.

I'll try to give an example:

Think of a complex modern art painting. It's much more difficult to perceive the colors used in the painting, as the boundaries between different parts and components of the painting are far less meaningful or distinguishable, compared to most objects we encounter.

When you look around your kitchen, the sensations of color are nearly instantaneously associated with rich semantic information about the various objects. E.g. that red object is the "top" of the "peanut butter" "jar" that "I" "bought" "last week" (this is the tip of the semantic iceberg). This makes it easy to hold onto that perception -- it is able to quickly take root in a rich semantic soil that the brain has been developing over our entire lives.

This doesn't happen with a messy, semantically-limited perception like looking at a modern art painting. Sure, we can focus very hard to isolate the different streaks and blurs and blobs of color, but it is noticeably more difficult.

With an infant, as their semantic soil is very shallow compared to an adult's (or even a young child's), I imagine that their perception of the world is perhaps more like a modern art painting. Slowly, the colorful blurs and blobs and smells and sounds take on meaning as the infant learns about the world and builds up an understanding of the complex patterns in life.


> When you look around your kitchen, the sensations of color are nearly instantaneously associated with rich semantic information about the various objects.

Nearly. This is why the main thrust of meditation is about becoming aware of ever-subtler processes, both on- and off-cushion. When there's enough awareness to distinguish between the arising of bare perception and the associations it triggers (for example), it becomes clear why we identified consciousness with association, and why it doesn't quite capture what we were originally curious about. Something interesting remains. As more and more functions (modeling, associating, remembering, attending, etc.) get distilled away, you'd think the mystery gets smaller and smaller. But experientially speaking, I think it only gets bigger.


Hi monktastic!

> This is why the main thrust of meditation is about becoming aware of ever-subtler processes.

Indeed. It's an incredible method to find insight.

> When there's enough awareness to distinguish between the arising of bare perception and the associations it triggers...

I would argue that the concept of bare perception, absent of any sort of understanding (the understanding could be quite minimal, as with an infant, dog, or even an ant), is a non-sequitur.

It's like asking for mass without gravity. The consciousness _is_ the understanding of each instantaneous moment.

This is why meditation can only take us so far, as it is impossible to become an ant or an infant, to completely abandon all accumulated understanding. But by reducing our assumptions and constructed frameworks of meaning as much as possible, interesting things may happen.

> it becomes clear why we identified consciousness with association, and why it doesn't quite capture what we were originally curious about. Something interesting remains.

This is the crux. Without being able to totally abandon your associations and understanding, how do you know for certain some aspect of consciousness lies beyond them?

That seems to me something no human could ever experience, and that if they could, they would no longer function nor possess any awareness at all.


Hi dwaltrip :)

> It's like asking for mass without of gravity. The consciousness _is_ the understanding of each instantaneous moment.

Of course, if someone were to somehow spend time conscious without association, they wouldn't be able to convince anyone they did so. Maybe it's impossible, maybe it's possible, but it's not very amenable to debate.

So I'm trying to stick to more empirically testable claims. In particular, here I'm hypothesizing that the reason we believe consciousness is equivalent to its various functions is that we don't have enough precision of awareness to distinguish them. The experiment is simple (though not easy): become increasingly aware of the subtle processes, and plot your belief over time. My own result (and that of many who have done the experiment) is that I've clarified two aspects of my experience ("the sheer fact of experience" and "the particular forms it takes") in such a way that I no longer identify them. YMMV, but I think it's a wonderful exercise!

Relatedly, I'm curious what you think of the idea that you can only know that you are conscious, whereas it is impossible (even in principle) to know whether another being is.

Thanks as always for the chat.


Does a thermostat have a _sensation_ of temperature? Why or why not?


A thermometer senses temperature, much like the human skin does. But we would not consider it conscious because it is a reactionary device.

Of course, at some point along our development of machine intelligence that may change. At some point in the past all life was reactionary; then, as it evolved, that changed. My take is that the first consciousness in life occurred when creatures started predicting when they would eat something or get eaten, and acting before it occurred.
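The reactive/predictive contrast can be sketched as code. This is a toy illustration only (the function names, thresholds, and the linear-extrapolation "forecast" are my own assumptions, not anything from the thread): a thermostat maps its current input straight to an output, while a predictive agent acts on an expectation of what comes next.

```python
# Toy contrast between a purely reactive device and a predictive agent.
# All names and thresholds here are illustrative assumptions.

def thermostat(temp_c, setpoint=20.0):
    """Purely reactive: output depends only on the current reading."""
    return "heat_on" if temp_c < setpoint else "heat_off"

def predictive_agent(recent_temps, setpoint=20.0):
    """Acts on a crude forecast: linear extrapolation of the last two readings."""
    trend = recent_temps[-1] - recent_temps[-2]
    forecast = recent_temps[-1] + trend
    return "heat_on" if forecast < setpoint else "heat_off"

print(thermostat(21.0))                # heat_off: the current reading is fine
print(predictive_agent([23.0, 21.0]))  # heat_on: the trend predicts 19.0
```

Same inputs, different behavior: the reactive device waits for the threshold to be crossed, the predictive one "acts before it occurs".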


It already happened then. https://www.engadget.com/2017/07/25/deepmind-create-ai-imagi...

AI makes predictions and acts on them.


Probably not. Why? That's the trillion dollar question.


I'm not even experiencing anything when I am sleep walking, and it seems plausible that infants are in a similar state. How do we know that infants are experiencing something?


Let's challenge this. What if a baby can be born alive without having consciousness? It's not impossible, since we can't detect consciousness. Maybe it occurs in some babies born with brain development problems, maybe not. Besides, the title says "Consciousness Goes Deeper Than We Think". Sounds click-baity, and it assumes that the reader is dumb.


If you don't have consciousness you cannot interact with the world around you in a 'human' manner. So yes, when we see minds that did not develop correctly we see what we consider animal like behavior, or no behavior at all.


There's nothing saying a fully functional human must have consciousness. It could just be that some people are machines acting on biologically coded rules, and that they don't have an inner experience of their self.


It seems to me, though, that this would eventually come up in conversation. Much of human discourse is centered around the concept of individuals' inner experiences.

The thought that someone who seems on the outside very similar to me could be completely missing the "inner experience of self" that I (think I) have certainly feels wrong-- it's deeply uncomfortable and rather frightening. Furthermore, espousing such an idea opens the door to those who would attempt to use it to discriminate (likely by applying existing bias with a flimsy illusion of accuracy).

This one doesn't have much use to me beyond tangential thought experiments. It can be cited to justify terrible things in the hands of bigots, and I suspect it's fairly easily invalidated for all practical purposes.


This article, like most discussions on "consciousness" I encounter reads like little more than word games to me.


If consciousness is always present, but we don't show signs of it until ~5 months, maybe it's just that we need memories to understand the depths of our own thoughts? Like going from a 2D plane of the now to a 3D plane of the now, before, and after thus birthing more abstract thoughts such as cause and effect?


Consciousness can also exist without an object of perception, because it is beyond the subject/object duality. In a room, we perceive objects by means of the light. But in a room empty of objects, we can still see the light, because the light reveals itself. Consciousness works the same way.


I wonder what consciousness would be like if our brains had the size of our solar system. Would we be conscious if our neurons took on the order of minutes to communicate with each other?


[Nitpicking alert] The giant squid has the fastest axons, to coordinate its giant body. It uses some tricks to make transmission faster, so I'm not sure you can use this speed inside a brain successfully.

From https://en.wikipedia.org/wiki/Squid_giant_axon the speed is 25 m/s, so in a minute the signal could travel 1500 m, roughly a mile.
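The arithmetic, plus the parent's solar-system question, sketched out (the axon speed is from the linked article; the solar-system radius and use of light speed as a best case are rough assumptions of mine):

```python
# Back-of-the-envelope numbers for the signal-delay question above.

axon_speed_m_per_s = 25                 # squid giant axon, ~25 m/s
distance_in_a_minute = axon_speed_m_per_s * 60
print(distance_in_a_minute)             # 1500 m, roughly a mile

# A brain the size of the solar system (Sun to Neptune, ~4.5e12 m),
# signalling even at the speed of light:
solar_system_radius_m = 4.5e12
light_speed_m_per_s = 3e8
delay_s = solar_system_radius_m / light_speed_m_per_s
print(delay_s / 3600)                   # ~4.2 hours for a single crossing
```

So even with light-speed "axons", one signal crossing takes hours, which gives a feel for how slow such a mind's "moments" would be.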


Like defining "athletic" or "smart" or even "stress" or "eating" - all go deeper than we think.


I find that any discussion of consciousness that doesn't build upon the work of so many other great thinkers (like Daniel Dennett, et al.) is usually a waste of time.


and yet such 'great thinkers' are quick to 'not build upon the work' of religious thinkers throughout the ages, despite the fact that at this level, these topics lie solely in the realm of philosophy..


Killer observations. I've been wondering if consciousness is circular activity in the brain, such that while we are conscious there is a constant circle of activity, and upon sleep this circle stops. I suppose this article is saying that this is metacognition, but still, it's circles all the way down.



