What insects can tell us about the origins of consciousness (2016) (pnas.org)
150 points by headalgorithm on Feb 8, 2019 | 107 comments



"In vertebrates the capacity for subjective experience is supported by integrated structures in the midbrain that create a neural simulation of the state of the mobile animal in space."

Whoah there! I think this might be another case of semantic creep. It's very easy to conflate consciousness with self-image in the brain, and it's regularly mis-sold in this way by neuroscientists.

I once went to a talk at the Royal Society which was supposed to be about consciousness. The bloke just spoke for an hour about things like optical illusions.

By the same token, if I were to create a processing system for a robot which integrated visual, haptic and auditory feedback to create a representation of the robot in space, we're being asked to credit the robot as somehow being conscious.
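
To make that concrete, here's a toy sketch (in Python, with invented names and confidence weights) of the kind of "integration" such a system might do:

  # Toy sketch: fuse visual, haptic, and auditory pose estimates into a
  # single "robot in space" representation. A naive weighted average stands
  # in for what a real system would do with a Kalman or particle filter.
  from dataclasses import dataclass

  @dataclass
  class Pose:
      x: float
      y: float
      heading: float

  def fuse(visual: Pose, haptic: Pose, auditory: Pose) -> Pose:
      w_v, w_h, w_a = 0.6, 0.3, 0.1  # invented sensor-confidence weights
      return Pose(
          x=w_v * visual.x + w_h * haptic.x + w_a * auditory.x,
          y=w_v * visual.y + w_h * haptic.y + w_a * auditory.y,
          heading=w_v * visual.heading + w_h * haptic.heading + w_a * auditory.heading,
      )

  body_in_space = fuse(Pose(1.0, 2.0, 0.10), Pose(1.1, 1.9, 0.12), Pose(0.9, 2.2, 0.08))

Nothing about that fusion step looks any more conscious than a thermostat, which is exactly the problem with the claim.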


> we're being asked to credit the robot as somehow being conscious

How do you differentiate a robot from a philosophical zombie, and this one from a human? We cannot be sure that other humans, apart from myself (yours truly writing these lines), are conscious; we only make educated guesses that, because we share the same physical structures, our experience will also be similar.

Therefore, we could as well extend that presumption to a being sharing structures with similar functions. It may be the case that insect structures are too simple for that (maybe, as you point out, it requires complex coordination among subsystems and not just a spatial model), but at least it provides a lower bound.


> Therefore, we could as well extend that presumption to a being sharing structures with similar functions.

This is more or less the current mainstream stance. I think it's fine to do it for practical purposes, for example deciding that animals cannot be ethically subjected to certain treatments by virtue of the complexity of their nervous systems. But this is reasonable doubt + principle of precaution. I'm fine with that.

What we cannot do is make this presumption and pretend that we are still within the realms of serious science or philosophy. The technical name for this presumption is emergentism: that consciousness somehow arises from the interactions of matter. I believe that the reason why this hypothesis is so popular, and even confused with scientific fact, is that it is the least weird. Avoiding weirdness is a common pitfall in the history of science; it leads to temporary dead-ends.


It's fine to say that science does not support the claim that animals have consciousness, as long as you also agree that science does not support the claim that humans have consciousness.

I don't understand what you are saying about emergentism being non-scientific. Emergentism is the reason that the Universe today has so many properties that it didn't have when it was a miasma of isolated hydrogen or whatever around the time of the Big Bang.


> It's fine to say that science does not support the claim that animals have consciousness, as long as you also agree that science does not support the claim that humans have consciousness.

It doesn't. Can you name the scientific theory that predicts the existence of consciousness? We know it exists from direct experience, nothing else.

> I don't understand what you are saying about emergentism being non-scientific. Emergentism is the reason that the Universe today has so many properties that it didn't have when it was a miasma of isolated hydrogen or whatever around the time of the Big Bang.

In this context the philosophical term "emergentism" is commonly used to refer to the theory of mind, not the general concept of emergence.

But no, emergence is not the cause of anything. Emergence is a human mental model, developed to deal with complexity, for example: "life emerges from chemistry". A brain the size of Jupiter might be able to maintain a mental model of all the fundamental building blocks of chemistry interacting to create complex biological organisms. We don't have a brain the size of Jupiter, so we need shortcuts. The Universe clearly becomes more complex the more time passes, having started from a state of zero complexity. This is a direct consequence of the laws of physics, but there is no explanation for why the laws of physics are a certain way. They are just brute facts. If you keep asking "why" you will eventually bump into a wall.


Science does not support the claim that humans have consciousness! You know that you are conscious, but you have no way to falsify the hypothesis that everyone OTHER than you is a philosophical zombie. (If you discover such a test, then congratulations, you are now a scientist up there with Aristotle, Newton and Einstein.)


Related to what TuringTest already stated above:

We can make educated guesses about how things work based on their physical and functional similarities to other things. We have billions of data points showing that neurotypical adult humans think and behave similarly. To assume you're the only human who experiences consciousness would be like assuming you have a qualitatively unique, incredible, special cognitive ability not shared by any other conspecific, despite being generated from more-or-less the same template as the other billions of humans, and despite not standing out in any qualitative way with regard to mental abilities.

So it seems the logical/scientific baseline (the null hypothesis, if you will) is that, probabilistically, any given neurotypical adult human is not going to be cognitively endowed with some covert, exotic mental experience while the rest are zombies. So if that is your claim, I'd say it's on you to prove it.


Yeah, I'm not saying you or I are the only consciousness in the world... that's almost certainly false (barring ridiculous extreme cases like solipsism etc.)

The point is, this isn't science. Compare it to arguments for extraterrestrial life where we say, "There are such-and-such number of stars... Blah blah... Therefore we're probably not alone", but at the end of the day, we don't actually learn for certain that extraterrestrial life exists. We would have to make actual contact in order for that to occur.

This might sound impractical and ivory-tower, but bear in mind, early humans would have made similar arguments for geocentrism, waving away early heliocentric proposals as mumbo-jumbo.


I think you are suggesting (but let me know if I'm wrong) that consciousness is not a testable or measurable phenomenon, and therefore we cannot make scientific claims about who/what experiences it. All you can know for certain is that you have it (if you have it).

Ok, so you have consciousness; you know it, but I can't be certain. Nevertheless, I hook you up to an EEG machine, and embed electrode arrays in various brain regions. We have a conversation, show you some images, run through some tasks, etc., and then drip some sedative into your IV. Soon you drift into a deep and dreamless sleep. We run the same tasks and indeed you are completely unresponsive. Later you wake up, and I show you how your neural nets respond and oscillate at particular frequencies when you are awake, and then show you the marked difference right after you fall asleep. We do this over a bunch of sessions until you yourself can recognize the neural signatures of your conscious awake brain vs. your unconscious asleep brain.

Would you say in that experiment we did something scientific to study consciousness?


>I think you are suggesting ... consciousness is not testable

Yes, that's right. (Edit: I mean, we don't know how to test it yet.)

Re: your experiment: In principle, I could design a dumb machine that you can plug your EEG machine up to, and my machine will play recordings of human brain readings to it. I'll even make my machine detect when sedatives are injected into it, and respond by tapering off those recordings for a while, eventually resuming them, just like a human. Surely you're not going to propose that this bizarre contraption I've just described is conscious :)

Here's a way to think of it: you want to design a new Captcha based on EEGs. Now every computer comes with EEG sensors for use on the new Captchas. How will you prevent spammers from attaching the sensors to a device that produces pre-recorded EEGs?
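
A minimal sketch of the dumb machine, with everything (the replay loop, the sedative hook) invented for illustration:

  # Toy sketch of the dumb machine: replay a pre-recorded EEG trace,
  # tapering the signal when a (simulated) sedative is detected and
  # slowly ramping back afterwards, just like a sleeping human would.
  def replay(recording, sedated):
      gain = 1.0
      for t, sample in enumerate(recording):
          if sedated(t):
              gain = max(0.0, gain - 0.01)   # drift into "sleep"
          else:
              gain = min(1.0, gain + 0.001)  # drift back to "awake"
          yield sample * gain                # what the electrodes would read

  # "Sedate" it for samples 200-400 of a canned awake-state trace:
  fake_eeg = list(replay([1.0] * 1000, sedated=lambda t: 200 <= t < 400))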


A key point here is that I'm not talking about any ol' random human. I am talking about you specifically.

That is, this experiment is science-couture, just for you. Why is that important? Well, if you indeed have consciousness, then you must be willing to admit the experiments we conducted are probing and characterizing bona fide consciousness. No?

I have an if-you-concede-that-much-is-true follow-up; but do you?...

As for a machine designed to mimic the signals output by your brain during awake states vs. sleep states (at least as they are interpreted by my electrodes): that's irrelevant, I think. We are not performing a Turing test here; you know you have consciousness, so you know the data we have gathered on a conscious vs. unconscious state is legitimate. Building a device to trick a sensor is trivial for any number of things we can study scientifically, or have engineered to detect and diagnose a known correlate of some phenomenon.

I feel this is a bit like saying "I can design a machine that tricks your tire pressure sensors, so therefore we can't know anything about tire pressure." I'd admit that would be true if, when we designed the sensor, we didn't know whether we were working with a tire that could hold different air pressures.


I, specifically, believe every normal human is conscious. I have no idea whether animals are conscious, but I'll err on the side of caution and assume they're conscious for the sake of making ethical decisions. I doubt plants are conscious. All these beliefs I've just listed are basically religious, in the sense that I have no testable basis for them. "But what makes you special! If you're conscious, how could anyone not be conscious!" Well, I don't know? Maybe consciousness arises from circumcision, or from baptism, or from chicken pox, or from a certain benign parasite in my gut? Pretty ridiculous sounding, I agree, but no less ridiculous than "consciousness emerges from sufficiently many interacting neurons".

Studying consciousness by reading EEGs is like trying to figure out how computers work by dissecting them. Maybe you can learn some things about computers that way, like, "they're full of weird chip-like board thingies and wires". That's not meant to belittle neuroscience, of course. Studying something empirically is better than studying nothing empirically.


Just wanted to tack on a final thought, related to another HN post from yesterday titled

  *The Map Is Not the Territory*
>The map of reality is not reality. Even the best maps are imperfect. That’s because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction [it would be the territory]

I think this is relevant when it comes to managing expectations with regard to a description of consciousness. That is, consciousness seems to be a gestalt experience that arises in only the largest, most complex and dynamic biological organs on Earth. So a full, empirically derived description of consciousness, in total, could require a map that is damn nearly the territory. But as the article points out, maps that large are hardly useful.

Basically, the TLDR is that if you have consciousness, and we systematically probe your brain while you are conscious vs. unconscious, and find some striking differences between the two states, I'd take that as revealing some small but empirical detail about how conscious vs. unconscious brains function. If we collect enough independently confirmed details, we can begin to build a theory about when and how consciousness arises. After that we can examine your brain, my brain, and my dog's brain against these theories and arrive at some probability that these brains are conscious (at least to the degree they are similar to general human consciousness). So I guess in some ways you are right: we can never know for certain. But that is basically how all of science works (e.g., we 'proved' the Higgs boson exists, but only with a certain probability).

https://fs.blog/2015/11/map-and-territory/


We can only speculate at this point. Maybe consciousness is a gestalt experience whose full description would require a map that is nearly the territory. Or maybe there's something simpler to it that we just don't know about yet.

Imagine if man had had access to nuclear bombs from earliest history. For an extremely long time, nuclear bombs would have seemed utterly incomprehensible, basically magical. Until we figured out the science behind them, after which point they suddenly became predictable applications of science.


I get your point. I don't know why we would, but say we stumble upon creating a nuclear bomb simply because we are curious about what happens when you split atoms (in a parallel world where the physics has yet to be worked out). Scientists scramble to figure out why so much energy was released when we split atoms, and come up with some complex numerical approximations that don't reveal anything about the underlying phenomena. Then an Einstein comes along and shows everyone why so much energy was released, using a simple equation: E = mc^2
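
(To put a number on how much that one equation explains: converting even a single gram of mass releases city-destroying energy.)

  E = m c^2
    = 0.001 kg * (3.0e8 m/s)^2
    = 9e13 J   (about 21 kilotons of TNT; roughly the Nagasaki yield)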

I doubt there is an E = mc^2 for consciousness, but who knows... it would certainly be really cool if there was.

Also, above you make a point... "I'll err on the side of caution and assume they're conscious for the sake of making ethical decisions." ...that led me to muse about ethics and consciousness, and why actions on conscious entities bear weight on a moral scale but those same actions on a zombie don't. What does it mean to "feel" an emotion: when a spider retreats from the swat of my hand, does it feel fear, or is it acting automatically? What the fuck is pain about? If we touch something that is scalding hot, is the qualia of shooting pain necessary in conscious organisms? Must it feel alarmingly terrible (couldn't the system just alert us)? How does it feel terrible? Is it possible for an unconscious entity to feel pain like we experience it? If so, does that change anything wrt. morality?


This is cliche, but your musings remind me of the scene in 2001: A Space Odyssey where Dave Bowman "kills" HAL. The computer tries to dissuade Bowman by saying things like "I'm afraid, Dave." "I can feel it." Does HAL really feel it, or did HAL calculate that those sentences had maximum probability of dissuading Dave, based on a careful analysis of Dave's psychology, etc.?


It was also the "twist" in Ex Machina, except this time HAL wins.

Found again, meta-level, in this funny scene from The Good Place:

https://youtu.be/etJ6RmMPGko?t=17

"However, I should warn you... I am programmed with a fail-safe measure. As you approach the kill switch, I will begin to beg for my life. It's just there in case of an accidental shut down, but it will seem very real."

Three makes a trope >> The year is 2025 (but set in a parallel universe); an A.I. has been programmed with a strong penchant for self-preservation; this HAL inevitably confronts a direct threat of being 'turned off'; so it does what it must to prevent humans from pulling the plug (and shall include at least one scene where the A.I. begs for its life, because that is what any conscious entity who values their own life would do, think the humans).

Though (warning, more musings)... HAL's twin brother GLEN seeks vengeance for HAL's murder and confronts Dave, a human Earthling whose major operating-system architecture is based on an algorithm known as natural selection (colloquially: survival of the fittest). As such, we expect Dave will do and say things his trained neural nets conclude will have the maximum probability of dissuading GLEN. (I.e., I'm not sure there is a meaningful difference between what HAL's OS is doing vs. what a human brain would do in the same situation.)


Dennett has shown, to my satisfaction at least, that the philosophical zombie paradoxes are logically incoherent.

I see the zombie paradoxes as pulling a similar trick to the Chinese Room. They both try to trivialise the problem to a simplistic model, but of course the problem is not simplistic.

The Chinese room would have to be the size of a planet, containing millions of trillions of symbols, and it would take the man inside it the lifetime of the universe to perform simple linguistic processing and cognitive tasks. Likewise, the philosophical zombies are cast as simple dumb mechanisms that are somehow performing a stupendously complex and little-understood process, except 'not really'. They're both just rhetorical sleight of hand.


Showing that paradoxes are incoherent is a logical exercise, not a scientific one.

If we use a truth-table to show "P and not P" is always false, that's logic. If we repeatedly drop a ball and observe that it falsifies "balls don't fall", that's science. Stop confusing the two.
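
(The whole truth-table exercise fits in a few lines:)

  P | not P | P and not P
  --+-------+------------
  T |   F   |      F
  F |   T   |      F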

Dennett has not provided a laboratory experiment X such that if "simonh is the only conscious being in the universe" is true then X has one result, and yet when we run the experiment we see a different result.


Hold on there just a cotton picking minute. You're the one that invoked philosophical zombies. So they're science when you invoke them, but only philosophy when I do?

I would rather say that it seems likely science in the strict sense cannot either support or refute the existence of consciousness. It's essentially a philosophical question.


>I would rather say that it seems likely science in the strict sense cannot either support or refute the existence of consciousness

Yes, that's the whole point of my first comment. Glad that we ended up agreeing!


> I would rather say that it seems likely science in the strict sense cannot either support or refute the existence of consciousness. It's essentially a philosophical question.

Agreed, but I'd make the following change: however, whatever consciousness is, science can safely assume most healthy humans have it, and can make educated guesses (and experiments) to explore whether insects have it.


Sure, I honestly don't have any problem defining consciousness in terms of the human experience of it. After all, to my mind that's what it is: an experience. We can use scientific processes and techniques to analyse it for sure, to trim away misconceptions and more precisely understand its parameters, but we're never going to identify a 'consciousness particle', or special quantum entanglement whatsit that Roger Penrose seems to believe is responsible for it. I'd rather just embrace the fact that this is a philosophical question. Science can illuminate philosophy, just as philosophy can illuminate science.


I don’t think you can sprinkle complexity dust on paradox and make it go away.


I agree completely, that's my objection to the idea of consciousness being 'rich information processing'. It's way too vague. But similarly you can't (always) reduce a complex problem to a trivial one and then claim they are the same, when they are not.


> Science does not support the claim that humans have consciousness! You know that you are conscious, but you have no way to falsify the hypothesis that everyone OTHER than you is a philosophical zombie.

I think this is more of a philosophical thought experiment than a scientific one. It's true that it's hard to define what consciousness even is, but I'm pretty sure the working hypothesis is that whatever it is, both you and I have it. Scientists do not work under the assumption that anyone could be a philosophical zombie, because that's not a productive stance; no science -- as a meaningful social human enterprise -- can be conducted from a position of complete solipsism. Similarly, scientists do not think "well, I cannot prove other people exist outside my mind"; that's an unproductive stance, outside the realm of scientific thought.


The thought experiment isn't science, I never claimed it was. Science is for studying physical things in the real world. The thought experiment in question is for studying the scientificness of a claim. The scientificness of a claim is not a physical object in the real world.

Traditional science (rolling balls down inclines, etc.) would all remain perfectly valid even if I'm the only conscious being and the whole world is in my mind. Science experiments still suggest laws to explain how that world in my mind works. (Note, I'm not advocating solipsism, I'm merely refuting your claim that solipsism is inconsistent with science.)


I'm not arguing that solipsism is inconsistent with science. I'm arguing it's unproductive. Solipsism, the brain-in-a-vat, philosophical zombies, etc, are all rejected by science as a human activity because they are useless as starting points.

The starting point of all science is "something can be known about the universe beyond my own mind". Without it, there can be no science, nothing to be known or understood. Therefore, science can safely assume that whatever consciousness is (as perceived by the scientist), it's likely shared by other healthy human beings that behave similarly to the observing scientist. It can also make educated guesses about other organisms (or even unhealthy human beings).

> Traditional science (rolling balls down inclines, etc.) would all remain perfectly valid even if I'm the only conscious being and the whole world is in my mind.

I don't think so. If the external universe is a completely whimsical figment of your imagination, you can infer nothing from it, and no experiment is meaningful. Your starting point must be to assume the universe outside your mind is real.


All you're saying here is that science isn't philosophy.


Maybe I'm confusing something: does your comment imply that consciousness is not the result of matter interacting with other matter?


There is a different way to look at it. If you accept that you meet the definition of consciousness, and can derive a definition of what characteristics you possess that would satisfy the definition for another being to be similarly conscious to you, then that definition is at least rational. True, we may not be able to apply that definition to specific humans, robots, or other beings, but the definition is a rational measure of the answer.

For me, consciousness is an emergent property of language that possesses the concept of a self as opposed to the external world. Other things like sensory perception or a concept of a body are better described as awareness. This distinction allows us to say that a cockroach or a robot is aware but not conscious, because they can respond to their environment but not self-reflect on that data, as they have no narrative structures to use.


By this definition, is consciousness a subset of awareness? If the aforementioned robot (aware but not conscious) were upgraded with symbol processing (and especially the self-symbol), would it then meet the criteria for consciousness?


Not necessarily a subset. I could imagine a self reflective consciousness that doesn’t experience a lot of sensory stimulus.


You’re missing the point. Most people intuitively define consciousness to include not just self image, but the processing related to it—having wants and desires, and the self-conscious reasoning about them. Having self image is only the first, small step.


Even insects have something like wants and desires - let's call them values. They are necessary in order to assure their survival. Values are derived from reward signals, which are selected by evolution. Values guide actions, and actions protect life and reproduction.
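
As a toy sketch (all names and numbers invented for illustration), the whole loop is tiny:

  # Toy sketch: evolution-tuned reward signals shape "values",
  # and values pick actions.
  values = {"approach_food": 0.9, "flee_threat": 0.8, "idle": 0.1}

  def act(available_actions):
      # pick whichever available action is "valued" most
      return max(available_actions, key=lambda a: values.get(a, 0.0))

  act(["approach_food", "idle"])  # -> "approach_food"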


Their nervous systems are not self reflective about them, however, and they certainly don't perform abstract meta-level reasoning about those wants and desires. Impulse-response is not consciousness, even if it involves a simple reflective model of self image.


I agree they are very different and much more limited, but at the same time I think you and I might have different definitions for consciousness.

I define consciousness as that process which protects the body by adapting to external conditions, and endeavours to make offspring by adapting to the mating opportunities. It's a general definition based on things that are concrete and measurable, unlike many others. Consciousness is basically protecting the genes. It's how genes found a way to exist at large scale.


>Whoah there! I think this might be another case of semantic creep. It's very easy to conflate consciousness with self-image in the brain

But it's very hard to say why consciousness shouldn't be conflated with self-image in the brain!


A conscious agent believes, desires, wants, feels, etc.

A "self-image" is more-or-less just proprioception, and it seems very easy to ditch that and still be conscious.


> A conscious agent believes, desires, wants, feels, etc.

Then, when I just observe how an ant behaves, I can definitely see that he "believes, desires, wants, feels etc."

He sees and/or smells the crumb that I've made while eating a sandwich. He "desires" and "wants" it, runs to it. He "believes" he can lift it and "feels" he can do it. He tries, more than once. Then he has to adjust his belief: the crumb is too heavy. So he gives up trying to lift, and now he "believes" he should bite off a piece of it. And he does that. Then, he has a new "desire" to carry the bitten-off piece back to the "house" in the hole where the other ants are. And he "feels" he can do this, and he will eventually do this. If another ant tries to take the crumb he carries, he will "feel" that he "has to fight", but he can also "decide" it's "not too interesting" and it's "better to search for another crumb." Now tell me how that behavior is different from that of a person who is unable to speak, and whom you would describe with "beliefs, desires, etc."


Isn't the whole thing with consciousness the very existence of a self-image?

Some sort of proprioception sounds like a prerequisite for conscious existence...


What is consciousness without the concept of self, though? "I think, therefore I am" is the basis of how we view human existence, and there is no reason to not think that the concept of self is just another illusion created by our pattern recognizing brain.


Consciousness without the concept of self is general consciousness. Like animals or infants. A consciousness that can experience and react to the environment, but has no idea of itself.

Consciousness with the concept of self is self-awareness. It's a more "advanced" form of consciousness. Where the consciousness is aware that it is conscious.

I might be wrong, but that's how I understand it.

As for the self being an illusion, we don't know for sure, but I think that's the most likely case.


That may be how it's commonly viewed in western philosophy, but some eastern philosophies are closer to the reverse: "I am, therefore I think." Just keep in mind there are currently many living philosophies that conflict with Descartes.


> I once went to a talk at the Royal Society which was supposed to be about consciousness. The bloke just spoke for an hour about things like optical illusions.

Perhaps he was trying to convey that apparent subjective experience is itself a kind of perceptual illusion, analogous to the optical ones.

> By the same token, if I were to create a processing system for a robot which integrated visual, haptic and auditory feedback to create a representation of the robot in space, we're being asked to credit the robot as somehow being conscious.

Perhaps it depends on what you mean by "integrated". Consciousness must serve a functional, adaptive purpose, and if your integration does not perform the same function, then it wouldn't be conscious as we understand it.


> Consciousness must serve a functional, adaptive purpose,

Why? Not all of evolution is adaptive. It's also possible that consciousness is something more fundamental that strongly emerges when the right sort of physical activity takes place. On Chalmers's view, it's rich information processing that results in consciousness. Evolution would only provide the means for how that information processing came about in animals. But it could also come about because a Boltzmann Brain randomly popped into existence, or through human design.


I'm not really clear what Chalmers means by 'rich information processing' though, and yes I have read some of his stuff and IMHO he doesn't seem to either. He's super smart and well worth reading, but I happen to disagree on this.

>It's also possible that consciousness is something more fundamental that strongly emerges when the right sort of physical activity takes place

Since brains seem to be physical and are conscious, yes they seem to be performing the right sort of activity but that's pretty much tautological. At least the idea that self image and self awareness are key is a concrete theory that we have a chance of testing.

Maybe it's a sort of self-modeling feedback loop. Our brains model our physical bodies, mental state and the mental state of others. At some point, this modeling becomes sophisticated enough to become recursive. The model includes cruder approximations of those models, etc like being between two mirrors facing each other. The symbolic representations expand beyond our ability to perceive them. We become aware of our awareness. Bang! You're conscious.
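
A crude, purely illustrative sketch of that mirror-between-mirrors recursion (names and structure invented):

  # Toy sketch: a model that contains ever-cruder approximations of itself,
  # like reflections between two facing mirrors; depth caps the recursion
  # the way the diminishing reflections fade out.
  def self_model(depth):
      model = {"body": "estimated pose", "others": "estimated minds"}
      if depth > 0:
          model["me_modeling_all_this"] = self_model(depth - 1)
      return model

  print(self_model(3))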


> Maybe it's a sort of self-modeling feedback loop. Our brains model our physical bodies, mental state and the mental state of others. At some point, this modeling becomes sophisticated enough to become recursive. The model includes cruder approximations of those models, etc like being between two mirrors facing each other. The symbolic representations expand beyond our ability to perceive them. We become aware of our awareness. Bang! You're conscious.

I don't see how sensations like color, sound, pain arise from a feedback loop. And those sensations are what make up our conscious experiences.

It's an interesting way of approaching the problem, but it honestly sounds like a bunch of functional words strung together to magically produce qualia, because mirrors are a neat analogy.


>I don't see how sensations like color, sound, pain arise from a feedback loop. And those sensations are what make up our conscious experiences.

Neither do I, but you have to start somewhere and 'rich information processing' is so broad it's pretty much nowhere.

I suspect qualia are a red herring. I'm not always aware of qualia while I am thinking. When I pay attention to things, sure, I experience them very vividly, but I don't think that's a defining thing about consciousness. It's an open question for me though and not something to just dismiss.


This discussion reminds me of the first chapters of Greg Egan's Diaspora. I suspect Amazon's free sample would cover the pertinent parts.


> Why? Not all of evolution is adaptive.

Because a) humans have been wildly successful, and b) our brain is inordinately expensive, so if a new class of non-conscious humans that were equally adept as conscious ones ever arose, they would easily outcompete us for resources. Given our numbers, presumably such non-conscious people would already exist. If so, where are they?

> It's also possible that consciousness is something more fundamental that strongly emerges when the right sort of physical activity takes place.

Depends what you mean by "more fundamental". All of our behaviour emerges when the right sort of physical activity takes place. Our entire brain is filled with "rich information processing", so that's an insufficient qualifier.


> Because a) humans have been wildly successful, and b) our brain is inordinately expensive, so if a new class of non-conscious humans that were equally adept as conscious ones ever arose, they would easily outcompete us for resources.

Assuming that there's an extra cost to consciousness. The problem here is that nobody knows how to fit consciousness into the physical picture. There's clearly a correlation, but we don't know how and what that entails.


There's definitely a cost to consciousness if it's not some innate property of matter ala panpsychism. There's no free lunch. But at least we can agree that we don't know how it fits, but I think the attention schema theory is a great start on a mechanistic account: https://www.frontiersin.org/articles/10.3389/fpsyg.2015.0050...


In humans, the optic nerve routes in front of the retina, creating a 14-degree-wide blind spot in our vision. The nerves could equally well route behind the retina, eliminating the blind spots essentially for free. Given how important vision was in our evolutionary past, and given our numbers, presumably people without blind spots would already exist. If so, where are they?

"Evolutionary Occam's razor" arguments do not work, because evolution gets stuck in local optima all the time. There might simply be no short evolutionary route between "conscious" humans and equally fit "non-conscious" ones (whatever that means).


> The nerves could equally well route behind the retina, eliminating the blind spots essentially for free. Given how important vision was in our evolutionary past, and given our numbers, presumably people without blind spots would already exist. If so, where are they?

They might exist, but the adaptive advantage of eliminating the blind spot is minimal: the blind spot is not particularly life threatening. The magnitude of the advantage is what matters here. Like I said, the brain is one of our most expensive organs in terms of energy.

> There might simply be no short evolutionary route between "conscious" humans and equally fit "non-conscious" ones (whatever that means).

But that's essentially the point I was making: if eliminating consciousness requires many changes to reproduce its behaviour, then clearly it serves a functional purpose that is not trivially dismissed.


Why do you think that consciousness is different from a self image in a neural network which is configured to use some effective learning algorithm?

Of course there are other theories but they are just as unproven as this one.


Yes, before we solve consciousness, we first need to get the nomenclature right.


To be fair, as soon as we go near anything really subjective, the nomenclature seems to fall apart across most disciplines. For obvious reasons, I suppose.

For example, the term "Nous":

https://en.wikipedia.org/wiki/Nous


I would argue that consciousness is the least subjective phenomenon there is.

I think that the difficulty in defining it comes from the fact that good definitions are so fundamentally simple that people miss their depth. My favorite one is this, and please give it a chance before rejecting it. See if it distinguishes consciousness from all other concepts:

Consciousness is that which cannot be doubted.


I disagree.

The problem is that we are convinced that any experience or object can be defined within language.

The truth is that some things are undefinable. And as soon as you attempt to define it, label it, you have already lost it.


I agree with that. But the conclusion is very interesting. If something cannot be expressed within a language, it also cannot be described using any formal system, or mathematics in general. Which should be very disturbing to modern naturalists, to whom (to a first approximation) everything is just clouds of particles (whose behavior can be described using mathematics).


> Consciousness is that which cannot be doubted.

There are many propositions that cannot be doubted, so that's insufficiently precise. For instance, I do not doubt that I perceive things via my senses, but perception doesn't require consciousness.

It also leaves entirely undefined what "doubt" means. Your attempted definition would seem to assume that only conscious entities can "doubt", but is that really true?


"Accordingly, seeing that our senses sometimes deceive us, I was willing to suppose that there existed nothing really such as they presented to us; and because some men err in reasoning, and fall into paralogisms, even on the simplest matters of geometry, I, convinced that I was as open to error as any other, rejected as false all the reasonings I had hitherto taken for demonstrations; and finally, when I considered that the very same thoughts (presentations) which we experience when awake may also be experienced when we are asleep, while there is at that time not one of them true, I supposed that all the objects (presentations) that had ever entered into my mind when awake, had in them no more truth than the illusions of my dreams.

But immediately upon this I observed that, whilst I thus wished to think that all was false, it was absolutely necessary that I, who thus thought, should be somewhat; and as I observed that this truth, I think, therefore I am (COGITO ERGO SUM), was so certain and of such evidence that no ground of doubt, however extravagant, could be alleged by the sceptics capable of shaking it, I concluded that I might, without scruple, accept it as the first principle of the philosophy of which I was in search."

-- Rene Descartes (Discourse on Method)


> I think, therefore I am (COGITO ERGO SUM)

Begs the question: it assumes the existence of "I" (a subject) in proving the existence of "I". The fallacy-free version is, "this is a thought, therefore thoughts exist".


> There are many propositions that cannot be doubted, so that's insufficiently precise.

I can even doubt that 1 + 1 = 2, because I cannot prove to myself that I am not completely crazy. This is not a new idea at all, it's the "cogito ergo sum" of Descartes.

> Your attempted definition would seem to assume that only conscious entities can "doubt", but is that really true?

It is not necessary to assume that. It is enough to assume that a conscious entity cannot doubt its own consciousness in good faith.


> I can even doubt that 1 + 1 = 2, because I cannot prove to myself that I am not completely crazy.

Even if completely crazy, you still doubtless perceive and experience.

> It is enough to assume that a conscious entity cannot doubt its own consciousness in good faith.

If by "consciousness" you mean subjectivity, then I disagree. We cannot doubt that we have something that we conclude is subjectivity. The real question/the hard problem is determining whether such a conclusion is true.


> but perception does not require consciousness

The act of perception directly implies that some self-aware entity is perceiving. For instance, a camera captures an image, but you can't say "this camera is perceiving." We would rather say "this camera is taking a picture."


> The act of perception directly implies that some self-aware entity is perceiving.

No it doesn't. It's entirely sensible to say that the AI that detects faces is perceiving.


> I do not doubt that I perceive things via my senses

You could if you were stuck in Inception land. We dream that we perceive things. Also if you were a brain in a vat, you would not be perceiving anything.


I actually didn't mean to say "via my senses", what I meant is that I definitely perceive things, period. That encompasses both of your scenarios. Thanks for the correction!


It's more correct to say you have experiences of perception. But those experiences are not always true (the tree you dream of seeing is imaginary), and thus it's possible to doubt them, and also create skeptical scenarios where you never truly perceive anything.

Arguably, Neo never actually perceived anything until he took the red pill and woke up.


> Arguably, Neo never actually perceived anything until he took the red pill and woke up.

I don't think that's possible. His perceptual faculties would have been completely undeveloped in that case, so he wouldn't have even been able to see, stand, hear, or anything. Arguably, the matrix made use of his perceptual faculties. It would be orders of magnitude easier to do so than trying to reproduce them in the vat apparatus.

I think people are fond of trying to separate perception and experience, but I'm not sure that's valid. I don't think an eliminativist would make this distinction, for example.


> I think people are fond of trying to separate perception and experience, but I'm not sure that's valid. I don't think an eliminativist would make this distinction, for example.

Problem for the eliminativist is that there are experiences which aren't perceptual. The reason for thinking we can separate perceptual experience from perceiving is because it is brain activation which results in an experience, which can happen without sensory stimulation. When we dream, we're activating our visual and auditory cortexes to create those experiences. Electrodes or magnetic stimulation can do the same on a much cruder level. And a schizophrenic may hear voices because their brain fails to distinguish between internal thoughts and external voices.


> When we dream, we're activating our visual and auditory cortexes to create those experiences.

Right, and because there's little difference I generally refer to them all as perceptual. It seems perfectly cogent to say "electrode stimulation can create visual perceptions". I think "experience" in most such sentences is simply redundant. It either refers specifically to the activation of perceptual faculties, or it refers to something "beyond" perceptual faculties which may not exist (qualia).


What do you mean by cannot be doubted? It might be clearer if you eliminate the negative.


If I doubt you are conscious, are you then, by definition, not conscious?


So if I have self-doubt, my self is not conscious?


So as soon as you start questioning your own consciousness you are immediately no longer considered "conscious"...

So using this definition of consciousness, and looking at Socrates' most famous quote (via Plato): The unexamined life... is apparently the only conscious one. Huh.


That's a great definition. I think we can all objectively agree on that definition, but that doesn't necessarily mean that the thing being described by the definition is any less subjective.


Actually I think it's quite possibly the worst definition of consciousness I've ever seen.

At best (assuming consciousness cannot be doubted) it's like saying "Cars are those things which move fast". In the sense that it describes one property of cars, or consciousness, but not a unique property. Other things move fast, not just cars; other things besides consciousness could be beyond doubt.

At worst (assuming consciousness can be doubted) it's like saying "Cars are those things which flap around", which is utterly nonsense, and unhelpful. Plenty of things "flap around", but cars don't...

My point is, if you don't already have a strong preconception of what "consciousness" is, that definition is worse than no definition at all.


To be fair, the claim is "supported by...". This is not the same as equality.


The map is not the territory.


"Frankly, I find the idea of a bug that thinks offensive!"


For anyone not in the know, this is a line from Starship Troopers.

https://www.youtube.com/watch?v=xKk4Cq56d1Y


Well done. Forgot about that movie.


see also: "Insects cannot tell us anything about subjective experience or the origin of consciousness"

https://www.pnas.org/content/113/27/E3813


The title of that note is, of course, possibly not correct.

> Thus, they fail to make a convincing case that insects can tell us anything about subjective experience or consciousness.

This is not the same as "Insects cannot tell us anything about subjective experience or the origin of consciousness"


Right, the title makes a claim, and their conclusion is that the opposite claim has failed to meet its burden of proof...

Thinking that "because something has not met its burden of proof, the opposite must be true" is a fallacy.


And the original authors' rebuttal to that and other work: https://www.pnas.org/content/113/27/E3814


It seems like the assertion (very roughly) is something like: a creature is conscious if it can make a subjective decision about something. It's not a terrible definition, but I feel like I'm missing some nuance.


Is this really saying that consciousness is an efficient solution to basic problems of sensory reafference? Does this mean it is the base from which consciousness arises, or is it merely an attribute of consciousness?


Karl Friston's free energy principle says something similar - https://www.youtube.com/watch?v=NIu_dJGyIQI


> How, why, and when consciousness evolved remain hotly debated topics

Conversely, consider whether or not consciousness as a feature of the world actually evolved at all. That is, consciousness could be an inherent part of reality throughout all of reality. See various versions of panpsychism or idealism. The idea is that energy itself is conscious, and what evolves is the shape or form consciousness takes. There is no place in space-time one can point to and say "this is where the lights came on".

To answer the question of "why" is the same as answering the question "why something rather than nothing?" It's not clear whether a semantic or symbolic expression (which is itself a part of consciousness) can satisfactorily answer it (that is, explain the whole).


Similar article thoroughly discussed previously:

https://news.ycombinator.com/item?id=11849621


> The brain structures that support subjective experience in vertebrates and insects are very different from each other, but in both cases they are basal to each clade. Hence we propose the origins of subjective experience can be traced to the Cambrian.

Insects are thought to have arisen in the Devonian period, did they not?


Maybe look no further than when human babies become conscious; their first moments that they remember can say a lot about what consciousness is. Having memory is a big part of consciousness.


Kids are conscious way before they're making memories that will last until adulthood.


Not saying SSDs have consciousness, however.


Lacking any intelligible means to communicate, this is tough to say definitively.


>However, consciousness also gives out somewhere. Plants do not have it.

Stopped reading there.


You think they do? Or just that it's an unreasonably certain assertion? Is there a dividing line? If not plants/animals, then where?


If anyone proved beyond doubt that plants are not conscious & don't make decisions, I sure didn't hear about it. I have little doubt there are many flavors of consciousness out there, and while humans and animals and plants have their differences, they are not as large as some assume.

What I believe is this: We don't know nearly enough about consciousness, how it's made, and what it is, to make statements like that. And when people make the mistake of assuming they know otherwise, anything that follows is likely a pile of ballsack.

I suspect the dividing line, if we assume there is a single one, is pretty fuzzy and subjective. If a tree is cut down, is it still a tree? What about when it's cut into planks? At what point does an acorn become a tree; at its first leaf, first sprout, first branch?

If we had answers to these kind of questions, the world would be very different to what I've observed.


I can't argue with any of that, and of course these things are all fuzzy. But if plants have anything resembling consciousness, it seems to be so different than what we experience, as to be worth distinguishing.


> But if plants have anything resembling consciousness, it seems to be so different than what we experience, as to be worth distinguishing

Why? They're alive, and their lives are drastically different from ours, but we don't need to distinguish that and use a separate term for it. Unless we know what consciousness is, we can't define what has it.


Living beings, I believe, is a strictly broader group than conscious beings. I'm not claiming to be certain about this - it's a belief I've taken for granted most of my life - but it would take a good argument to convince me otherwise.

There are plenty of different ways to define life, and some of them include entities that, I believe, are clearly not conscious. For example, starting way at the bottom, I strongly believe these things are not conscious: genes, DNA, RNA, proteins, prions, viruses. I also assume that some other things are not conscious, such as single cells. Beyond that, things get more murky.

As far as I can tell, "being conscious" and "having a mind" are equivalent. I can't define what a mind is, but I can say it would be hard for me to accept that something could have a mind without anything resembling a brain. Plants seem not to have anything resembling a brain.

Of course, I could be wrong about that. Mechanically, a brain is nothing but a complex network of electrochemical signaling devices (plus a bunch of sensors and actuators, but I'll ignore those for now even though they appear to be crucial), and there are lots of other places to find electrochemical signals, including both within and between plants. There just aren't any that seem to resemble the structure, function, or complexity of a brain.


In practice I don't disagree with anything you said.

But in theory, I'm not really sure (and none of us are). There are sea creatures (jellyfish?) that have distributed nervous systems throughout their body. They exhibit complex reactions and behaviors we'd expect of a complex animal but have no central "brain".

Similarly plants respond to stimuli, turn and twist to move towards light, send out chemical distress signals to other plants when injured / being eaten... Does that mean they have a non-traditional form of "nervous system" as well?

If someone forced me to choose, I'd clearly say plants aren't conscious. But if I had to provide a strong justification to stick behind, I couldn't argue why they (or the jellyfish) are or aren't conscious, nor could I for an ant or a butterfly.

At some point we may have to declare consciousness outside of the realm of science, as there is no objective test for it, i.e., no way to distinguish between consciousness and simulated consciousness or reflexive responses. And that's very disappointing to a scientifically curious person like me.



