A Neuroscientist’s Theory of How Networks Become Conscious (2013) (wired.com)
159 points by tegeek on Oct 29, 2014 | 82 comments



Integrated Information Theory I think is a fad, and a bad one at that.

One could think of integration of information as a necessary but not sufficient condition for consciousness, a la Scott Aaronson [1], who gives more rigorous mathematical examples of integrated systems (in fact, ones close to the maximal possible integrated information) that very clearly are unlikely to have any measure of consciousness, like expander graphs and error-correcting codes. What about ciphers, where a small change to a single bit of the key, ciphertext, or plaintext radically alters the evolution of the internals and the output of the system?
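(To make the cipher point concrete, here's a minimal sketch of that bit-flip sensitivity. I'm using SHA-256 as a stand-in for a cipher's diffusion, purely because it ships with Python's standard library; the same avalanche behavior holds for any decent block cipher.)

```python
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    """Count the differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"integrated information")
h1 = hashlib.sha256(msg).digest()

msg[0] ^= 0x01                     # flip a single bit of the input
h2 = hashlib.sha256(msg).digest()

# A one-bit change flips roughly half of the 256 output bits: maximal
# "everything depends on everything" mixing, with no plausible consciousness.
print(hamming(h1, h2))
```

Any run should print something near 128 (half of 256 bits), which is exactly the kind of global dependence a naive integration measure would reward.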

But even that might be too strong a statement. Aren't minimally interacting systems such as Conway's Game of Life Turing complete? There are tiny, tiny universal Turing machines, and UTMs are essentially free to encode/decode their tape inputs in any way they see fit.
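(For reference, here's how little machinery the Game of Life's "minimal interaction" actually involves; this is a straightforward sketch of the standard B3/S23 rule, with live cells stored as a set of coordinates.)

```python
from collections import Counter

def step(live: set) -> set:
    """One Game of Life generation: each cell looks only at its 8 neighbours."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or if it has exactly 2 and is already alive (B3/S23).
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates with period 2: purely local rules, global behaviour.
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)  # returns to itself after two steps
```

That this handful of lines is enough to host a universal computer is the whole point: the rule is about as un-integrated as interactions get.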

Ultimately, while IIT has a 'mathematical description', it mostly serves to obfuscate a naive and trivial notion: that 'the system has to be complicated and has to integrate and process local information in some holistic, global way'.

No shit.

[1] http://www.scottaaronson.com/blog/?p=1799


Not quite. I attended a talk by Dr. Koch at MIT. I didn't get all of it, but it seemed to me that his definition tries to figure out what's included or excluded in conscious systems and how conscious they are. It may or may not be right, but at least it's working towards something with predictive value.

The examples you gave are mostly abstract and not physical. If consciousness is a physical phenomenon, these algorithms must be manifest before they can qualify. As for the game of life, the pieces on a physical board aren't actually interacting with each other with feedback loops. In a computer, the circuits are mostly feed forward and the algorithm is simulated rather than physically realized.


The examples though _do_ have physical manifestations - everything that exists in the universe has a physical manifestation (by tautology). How is a feed-forward circuit carrying information (to be integrated) not a physical realization? It's not clear why neurons carrying potentials and mixing them count, but not an FPGA (no longer an ALU simulation) doing cryptographic operations for a bitcoin mining rig. Furthermore, since an FPGA implements the information-integration system more directly than an ALU fetch-execute cycle does, is it 'more conscious'? To me that sounds absurd.

Presume the Game of Life _did_ have a direct physical implementation - let's say neurons fire according to GOL rules. Would you then be forced to call it conscious or not? Wouldn't it depend on the cell configuration? An all-blank cell configuration certainly isn't conscious, yet the system would have the same low IIT score as a manifestation running some UTM-equivalent setup.

Finally, what about physical systems that do highly integrate information? N-body problems exhibit highly integrated behavior manifest in the so-called butterfly effect. Why do the air molecules in a balloon not count as conscious? This physical example escapes the "America" criticism - every molecule applies a force on every other and due to n-body mechanics the system is highly integrated.
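(A quick illustration of the sensitive dependence being invoked here. The logistic map at r = 4 is a standard chaotic toy system; it stands in for n-body dynamics in this sketch and is not meant as a model of actual air molecules.)

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One iteration of the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    return r * x * (1.0 - x)

d0 = 1e-12                  # a "butterfly"-sized initial perturbation
a, b = 0.4, 0.4 + d0        # two trajectories, initially almost identical
for _ in range(30):
    a, b = logistic(a), logistic(b)

gap = abs(a - b)
# The separation grows roughly like 2^n per step on average, so after 30
# steps the tiny initial difference is amplified by many orders of magnitude.
print(d0, gap)
```

If "everything affects everything" were the criterion, this three-line system (or the balloon full of air it's standing in for) would already score highly.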

My worry, which I think is mostly corroborated, is that the definite clarity of the IIT definition will serve to obscure the fact that the underlying definition has no real definite clarity or derivation.

Ultimately the IIT formula was imagined to capture some heuristic notions - but it was not derived. For this reason, added to those above, I am very skeptical.

Or maybe it's my CS background that makes me biased toward thinking that _how_ information is integrated matters far more than any measure of how much mixing there is (e.g. by one of many versions [5 now?] of the IIT formula).


I'm not exactly sure how an FPGA is wired, so I won't comment on that. However, I feel, but cannot prove, that the GOL neurons might have some kind of internal experience.

Your air example is a good one; however, the kind of interactions the molecules have depends on temperature. Also, the information air molecules transmit has little explicit encoding other than a few attributes, so I imagine that it wouldn't feel like much to be air. Feedback loops are probably frequently formed and broken, so I'm not sure what to make of that.

I think the IIT formula is interesting. I'm not sure if it's right (it probably isn't and will need modifications at least; Dr. Koch talked about this version of it being invariant across time, which sounded off to me). I'm planning on learning more about it though, because it sounds like it might have at least part of the answer, and it might be on the path to finding something more experimentally amenable, which is more than I can say of most theories I hear about.

I'm also attracted to the physicality requirements of IIT because it's clear to me that abstract computation alone is not enough to reify physical phenomena. For instance, mathematics often returns imaginary solutions to physical problems. Some math represents reality, but not all of it. It's important to stick close to physical processes in physics.

I haven't yet heard that there were five versions of the formula, but that doesn't surprise me. It's actively under development and it hasn't hit the point where we can conduct experiments to narrow it down. As far as I'm concerned, it's an interesting idea at this point, and I hope I can take away some useful tools from it.


"WIRED: Getting back to the theory, is your version of panpsychism truly scientific rather than metaphysical? How can it be tested?

Koch: In principle, in all sorts of ways. One implication is that you can build two systems, each with the same input and output — but one, because of its internal structure, has integrated information. One system would be conscious, and the other not. It’s not the input-output behavior that makes a system conscious, but rather the internal wiring."

Isn't this the definition of a non-testable assertion? The observable input-output behavior is the same, yet he claims the property of consciousness is different. So where is the test?


Maybe one has a higher (non-zero) phi rating, which apparently means it is conscious. I agree with you, I don't see the test. I'd also like to agree with Koch; intuitively what he says makes sense - it's possibly a lack of understanding on the writer's part that this key point is lost.


Well, he's stipulating that we build it, so we know what the internal wiring is. That doesn't necessarily mean that what he's describing is possible or even coherent.


According to Koch, consciousness arises within any sufficiently complex, information-processing system. All animals, from humans on down to earthworms, are conscious; even the internet could be. That’s just the way the universe works.

Clearly one can build a hugely complex system that performs some special-purpose function and isn't anywhere near "conscious". Consider a big supercomputer doing finite-element analysis. Or a big network of packet routers.

Koch would probably argue that those aren't "sufficiently complex"? So what's the definition of "sufficiently complex"? Something that is "conscious"? This is not helpful.

Perhaps we should focus on "common sense", rather than "consciousness". Common sense can usefully be defined as the ability to predict (at least the near-term) consequences of actions. Given that, planning and evaluation of alternatives is possible. Most animals above the insect level have some capability in that area. They're not purely reactive. We really need to get this figured out so we can build robots that can handle new situations without screwing up too badly.

Neuroscience has work to do at the bottom. Check out "http://www.openworm.org/", where some people are trying to simulate the simplest nematode known at the neuron level. Until that works, neuroscience doesn't really have neurons figured out.

Classic quote: "Philosophy is the kicking up of a lot of dust and then complaining about what you can't see".


> Consider a big supercomputer doing finite-element analysis. Or a big network of packet routers.

There's another perspective here. We always argue from a very anthropocentric point of view and measure experience based on actions and reactions that a human would perform, i.e. asking questions, seeing something and saying something based on what we see, etc. The supercomputer you described can hardly be considered conscious with regard to such actions. However, it might be very conscious with regard to a different set of actions: it might react to turning off some switches which allow for intercommunication between the different CPUs (by rerouting some packets), it might react to raising the room temperature (by spinning up the fans), it might react to removing some RAM (by allocating more data on the hard disk), etc.


That line of philosophy goes back to the steam engine governor, which some people at the time considered intelligent. That concept didn't yield much value then, and it doesn't now. (Unlike Maxwell's 1868 paper "On Governors", which established the mathematical basis for stable feedback control, and is still the basis of basic control theory.)


It's not directly related to the original article, but the idea of consciousness arising in any sufficient system reminded me of an essay by Hans Moravec, "Simulation, Consciousness, Existence" [1] (and that essay in turn makes me think of the dust theory of reality described in Greg Egan's "Permutation City").

"Perhaps the most unsettling implication of this train of thought is that anything can be interpreted as possessing any abstract property, including consciousness and intelligence."

[1] http://www.frc.ri.cmu.edu/~hpm/project.archive/general.artic...


Many drugs induce unconsciousness, but these don't work by decreasing the complexity or organization of our brains. Doesn't this neatly refute the idea that consciousness is an "immanent property of highly organized pieces of matter, such as brains?"


Not really, because complexity can have an on-off switch: a global one, or at least a partial one.

If you implement a software system that is conscious, but then suspend it to a storage device, and resume it one week later (so that it is surprised: where did the time go?) its complexity hasn't gone away. It just wasn't conscious for one week because it wasn't running.

Another thing to consider is this: there may be more than one consciousness in your brain! The one which is you, the one typing on Hackernews, is just the one which has access to the "console" so to speak: the fingers, the eyes, ... There could be other consciousnesses hidden in your brain that are suppressed. Like "background daemons". Maybe that part of the brain that regulates your body while you sleep (or don't sleep) is also conscious!

The consciousness-process which is "you" is deactivated during sleep, but the other background "daemons" remain conscious. (Clearly, sleep is not a complete, global shutdown of brain activity, in other words.)

(Also, I'm here reminded of dolphins which put half their brain to sleep at a time.)


I like your computer analogy because it's easy to reason about. Surely a powered-down computer is not conscious. So if consciousness has an off switch, then it's not immanent (inherent) to organization.

Koch compares consciousness to the electric charge of an electron. But the electric charge has no off switch!


"But the electric charge has no off switch!"

Exactly. And "consciousness is computation" is a hypothesis that does not really work.


> And "consciousness is computation" is a hypothesis that does not really work.

Why not?

All of reality might be a computation.


> Why we should live in such a universe is a good question, but I don’t see how that can be answered now.

His last point here is salient. Much like the various interpretations of quantum mechanics, any theories of consciousness are untestable. But my problem with "consciousness" is that it's not even a well-defined concept. Suppose I replace all occurrences of "consciousness" in that article with the word "qualgma". It makes just as much sense.

Gravity, electrons, and energy are all concepts that are defined to exist as the manifestation of certain physical phenomena that can be modeled and predicted with mathematical equations. These words are just English simplifications of the equations.

But consciousness has no such analogue. It's just a term that people throw around when they talk about the brain. I believe that over time, it has started to acquire a more well-defined meaning -- something along the lines of "the emergent properties of the brain's activity resulting from lower-level processes". This is similar to how a human body is just the emergent result of many atomic interactions. This version of consciousness can be modeled, theorized about, and tested experimentally. There's nothing magical about it. As the brain's neural circuitry becomes better understood, you could say consciousness is a simplified model that still has good predictive capability.

However, I would also like to focus on a much more interesting topic in the article:

> My consciousness is an undeniable fact. One can only infer facts about the universe, such as physics, indirectly, but the one thing I’m utterly certain of is that I’m conscious.

I believe Koch is conflating two distinct concepts. If we take the definition of consciousness as the predictive modeling of the brain, then this is something totally different than what he's talking about here. Let me alter his quote:

> That I experience my existence is an undeniable fact. One can only infer facts about the universe, such as physics, indirectly, but the one thing I’m utterly certain of is that I am experiencing existence.

It's really astonishing to me that he words it this way, because independently I have thought almost exactly the same thing for a long time.

I've often struggled to put this notion into words, but let me try it in a new way with an analogy. Suppose you see an apple floating in front of you. You tell other people "Look at this apple!" But they just go "What apple?" So, you try to get crafty. You take a picture of the apple with a digital camera. But when you show people the photo on the computer, they still see no apple. So you zoom in on the pixels and start copying the RGB values for each one by hand onto paper. "Look, this value is 237!" you say. And you sit down with a friend, and start calling out each and every pixel's values as he manually enters them into his image editing program. When he's done, you say "Ha! Look, there's my apple, right on your screen! And you put it in there yourself!" But he stares at you quizzically, and says, "I still see no apple; just an empty table." And no matter what kind of tricky experimental method you try to come up with to get everyone else to understand the concept of this apple that's always floating in front of you, every attempt to catch it produces a representation of the apple for you and nothing for everyone else.

It's frustrating because the people you tell about the apple say "Well, you have provided no testable predictions, and no data that can be independently verified by everyone else in society. Clearly your apple doesn't exist." But it does! You know it does! In fact, out of everything that constitutes reality, the floating apple is the one thing you're most sure of. Frustrated, you walk around thinking you're crazy until one day someone says, "Well, I don't see an apple floating in front of you, but do you see the orange floating in front of me?" Which, of course, you don't.

Now replace "apple" with "the fact that I am experiencing my own existence". It's an element of reality that is only apparent to you on a personal level, and since it isn't testable by other people, it is not a part of their reality, and thus does not exist as far as they are concerned. And thus it's not science. Yet even though it's not science, it is the one thing in life I am most certain of. My five senses could all be faked with advanced enough neural circuitry. I could be in an entirely simulated environment, with entirely simulated physics. So when I order things in probability of how likely they are to be an illusion, experiencing existing falls at the lowest probability. In fact, you could strip me of all my senses and put my brain in a vat, and as long as it's running, I'm still experiencing existence.

If the idea bothers you that things might exist that are incapable of being verified consensually by society, look at this image of all the particle interactions (so far discovered): http://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Ele...

If you'll notice, gluons "hang on" to the rest of the particles by merely one interaction. If this interaction was not present, as far as positivists are concerned, then gluons don't exist. But that seems entirely unreasonable to me. I could imagine plenty of particles that could "exist" that simply don't have an interaction with the ones in the standard model that constitute our reality.

Anyway, this is probably the most bizarre post I've ever written on HN, but hopefully what I'm trying to convey "clicks" for some people.


Two thoughts:

1. Your nearest definition of consciousness, "emergent properties of lower-level functions" or something along those lines, doesn't really cut it either. To say that something is an emergent property of another thing, without showing the steps of that emergence, is kind of like defining red as "an emergent property of the sun and a sheet of cellophane". It's HOW it emerges that's the hard question.

2. If you were pleased to hear another person's discovery that "my experience of experiencing is irrefutable", you might like to read Descartes's account from nearly 400 years ago in his Discourse on Method.

He veered off into some strange and tenuous territory, but the opening of that book is excellent and it uses that discovery (a.k.a. the Cogito - I think, therefore I am) as the first axiom from which to build a whole philosophical system, and it's a really interesting read.

This is the guy who invented analytic geometry, from which we get the Cartesian plane and the idea of ordered pairs, which is such a fundamentally useful idea that it's hard to see just how brilliant a discovery it was. He had some things to say about consciousness that have not yet been improved upon.

EDITED to say that the book I was talking about was Descartes's Meditations on First Philosophy, for anyone interested. He decides, "let's doubt that anything at all exists, just doubt everything - where does that take us?"


In the same Discourse on Method, Descartes makes assumptions about how the human body works and these assumptions are completely wrong. Interesting read just to see how someone perfectly logical can be completely wrong at the same time.


It was not completely wrong if he was right about some things. It's one thing to say someone broke a window and another thing to say who and how and when. He proposed an explanation which we now know to be false. But the thing to be explained is still there.

The sad thing is that much of the literature in philosophy of mind will discard dualism solely on the grounds that past attempts to explain dualism have failed. Moreover, some of those attempts have merely failed to convince the opposite side, which is open to subjectivity.


Oh, for sure. Aristotle is an even better example. Doubtlessly one of the best minds on record, and yet he came up with a "completely wrong" model of the universe. Really makes you question what we are so sure about these days, and also what it means to be wrong - Aristotelian physics worked well enough to be useful for a thousand years.



This is very very good. Thank you for taking the time to make that comment. I have often struggled to make this same point but find that many people today are so deeply committed to a worldview of scientific materialism (the objects of physics constitute all that is), positivism (everything true must have empirical evidence), and a general anti-philosophicalism, that they just cannot accept it.

I think it just goes to show how sick and deficient that worldview is. Have you witnessed that cliche in some films where a character confronts a scientist and says, "But there's more to life than cold hard facts! What about love and art and seeing your child go to school and ...!" And all the scientists watching the film scoff and think that's just fluffy and inexact and those are just emergent properties anyway. And it's true, it is fluffy and inexact. What the character should say is, "What about the fact that I experience my own existence!", just like you've said it. And then, afterwards comes love and free will and personhood and all the other stuff. But the hard problem of consciousness is the immaterial's undeniable "foot in the door". If you are interested in a true philosophy of all that is, pay attention!


> > That I experience my existence is an undeniable fact. One can only infer facts about the universe, such as physics, indirectly, but the one thing I’m utterly certain of is that I am experiencing existence.

> It's really astonishing to me that he words it this way, because independently I have thought almost exactly the same thing for a long time.

This is just the famous "cogito ergo sum" of Descartes, dating back to the 17th century. A significant piece of epistemology is devoted to criticizing this reasoning:

http://plato.stanford.edu/entries/descartes-epistemology/#4


Well and thoughtfully said, thank you.

I find consciousness perhaps the hardest of the hard problems, in particular the self-awareness aspect of the phenomenon. I experience it, as I suspect (but can't prove) do the rest of HN's participants. But damned if I can figure out how it emerges from organized systems such as ourselves or even whether it's necessary.

I can imagine going about my day emitting behaviors without self-awareness: stimulus/response, variation/selection/inheritance over the short and long term yielding the rich behavior we associate with ourselves, our conspecifics, and the other critters we share the planet with. I can imagine everyone and everything else doing the same - behaving, absent awareness. So far, I haven't seen how consciousness is required for behavior.

And yet I feel aware, never mind that I don't see why that's required. Inevitably, that suggests to me a continuum of consciousness, from less organized to more organized systems. For me, it's a long walk from the Nernst equation and neuronal membrane potentials to my dog's happy disposition, which algorithmically seems to be wag more, bark less.

It's good there are hard problems to think about.


> I experience it, as I suspect (but can't prove) do the rest of HN's participants.

Wouldn't it be interesting if science had to be advanced by resorting to individual proofs? In other words, currently we can perform one experiment and have everyone else verify what we have done by looking at our results.

It may turn out that there are some aspects of reality that can only be "proven" at the individual level; however, one would assume that this applies to everyone equally.

Continuing my analogy above, if everyone can "prove" to themselves that their own floating fruit exists (even though they can't see everyone else's), that should be enough to consider it science in my opinion.

(In essence, we move from global consensus about one event to global consensus about many similar, but individual events.)


Where things get interesting is when you realize you can manipulate the other person's orange through your own view. That realization comes with the fact that consciousness, I believe, is shared across all living beings.

My experiences with my own practices indicate that these closed-off views are indeed not closed off. Be aware that my practices involve the esoteric and the occult. Many people are not comfortable with these discussions and will actively refute any merit whatsoever. Interestingly enough, I have my own semi-scientific theories regarding the occult, only with N=1, me. And I also have to acknowledge that my own sensors can actively deceive me (placebo).

I've given up on proving this or that phenomenon within an esoteric background. Either you practice and have seen these things for yourself (and another N=1), or you haven't.


>I can imagine everyone and everything else doing the same - behaving, absent awareness.

In which case you are reasoning in error: people's behavior depends on their subjective experience quite a lot of the time. You simply couldn't get the same behaviors without the experiences.


You all seem to be talking as if consciousness is a boolean, rather than a float, which is likely the case. This has been strongly hinted at by scientific research on some types of (non-human) primates: those who, when exposed to mirrors, are able to identify themselves as the ones in the reflection.

It's also testable on kids: before the age of 3 they are unable to identify themselves as the people in any reflection. They slowly acquire this ability, plus all the other traits we cluster in the word "consciousness", including things such as conscious self-preservation, language, adaptive/abstract thinking, etc.

>If the idea bothers you that things might exist that are incapable of being verified consensually by society

This is nitpicking, but it depends way too heavily on what "society" means here. There are still people who believe the Earth is 6,000 years old, so it's never all people. The majority of people know nothing about nuclear physics, and that doesn't make it false nor non-science.


I tried to make this point to two neuroscientists a few weeks ago and they basically said "but consciousness IS well defined", and I really had to scratch my head because they appealed to "I know it when I see it" logic, which is completely useless when trying to figure out the physical phenomena that produce conscious experience. Actually, on reflection, this seems to make it all the more obvious that we do not even have a good working definition for consciousness.

On the deeper observation here: there are plenty of things that can exist, but whether they actually have being in this universe is a trickier question.


Hofstadter has a pretty decent definition worked out in "Gödel, Escher, Bach" and "I Am a Strange Loop":

consciousness arises when systems in the brain model the external world, and then begin modelling their modelling of the external world, and then begin modelling their modelling of their modelling...


Perhaps Hofstadter has a definition of consciousness, but what you gave is a (right or wrong) description of how consciousness is implemented, which is not what most people would call a definition.


And can do this without crashing/hanging forever.


> I'm experiencing existence

Right, but I don't think this is as unrecognized as you think. Entire systems of ethics are built on the assumption that since I'm experiencing existence, and you are roughly the same as me, you're probably experiencing existence too.

Some systems expand on this and stretch or restrict the definition of "same" from "skin color" to "has a central nervous system" and other notions.

The longer I'm around, the more I start to think that this experience can be really different. Quite a bit of research is going into some fragments of this topic; we call it "intelligence". And we come back surprised whenever we find out that something which we didn't think of as "the same as us", like an octopus, can have its own "experience of existing".

More recently there's a growing body of work that's starting to show that things even less similar to us might have an experience in their own way. We're finding out plants, for example, have all kinds of interesting and subtle ways of responding to the universe. People are afraid to call this "intelligence", but there's definitely something there.

It may be more that "life", of any kind, can be considered under this notion of experience, and not just self-similar things.

A different spin, "you are the universe experiencing itself"


> If you'll notice, gluons "hang on" to the rest of the particles by merely one interaction. If this interaction was not present, as far as positivists are concerned, then gluons don't exist. But that seems entirely unreasonable to me. I could imagine plenty of particles that could "exist" that simply don't have an interaction with the ones in the standard model that constitute our reality.

A problem is that when we use concepts to describe new concepts, lots of little assumptions tag along each referenced concept that may be different to every person. So, when I explain a concept, many people may understand my concept differently, while some may understand my concept precisely.

There is a difference between 'what could exist' versus 'what is assumed to exist'. When we communicate, we have to be very careful about the definitions of terms and their context.

I say "tree", we both may think of a tree in our minds. Your tree may come with assumption that the tree must have leaves in order to be a tree. My tree may come with the assumption that a tree does not need to have leaves, in order to be a tree. Neither of us may touch upon this point, while we are busy communicating about trees, and I think that is what is most important to recognize.

The rest of what you say though, did click for me, so thank you.


I'm not sure this gets you where you want to go but...

You can find common ground between the statement you quoted if you want to take solipsism as your definition of "consciousness." IE, experiencing your own existence is consciousness, or is the test for it.

That isn't necessarily a bad definition. I quite like it.

What I've never liked is this CS-philosophical debate. It devolves into semantics so often it makes me suspect that's most of what it is. We can define consciousness in terms of what it does, in which case we may discover multiple solutions. Or we can define it in terms of how it does it, in which case we'll have to wait until it's discovered before we can see what it does. Either way, the word consciousness is not useful enough to merit all this trouble it causes.

Contemplating its own undeniable here-ness, that's the kind of spiritualism that separates bloodbags from greaseballs, for now.


I think you partly nailed it with I am experiencing existence. We experience the process of experiencing. Then I think you're wrong when you say consciousness can't be measured. It probably can.

What we have here is a system receiving inputs, observing how it reorganises itself under these inputs and influencing this reorganisation based on the results of previous experiences. For sure we can measure that a system doesn't merely learn but influences its own learning process depending on previous learning experiences. This may be what allows us to project knowledge from a domain to another, and what allows us to produce artificial knowledge by learning from others. We produce arbitrary connections through memory / continuity of experience and communication. This would provide a basis and a reason for language and societies.

What we miss here is a self-sense, and Koch hints at it IMHO. Integration may be key. I'd say dependency, to make it clearer: state dependency. Two linked states where one reflects exactly the value of the other could effectively be labeled as one. They would feel as one, too. What I define as "me" is what has direct influence on me and what I have direct influence on. There isn't a clear boundary though. This definition still has a problem, as we define me as what is related to me, so maybe a better definition could be found by taking it by the other end and defining as an individual "me" every integrated conscious instance significantly disconnected from the others. A fragmented big abstract me, for short.

If we define consciousness this way it exists in the positivist framework (which satisfies me a lot) and yet it can exist in more exotic frameworks too, like the ones where one can experience that everything ultimately depends on everything when seen under the right scale, and that the sense of self can be extended to everything.

I'm still unable to devise a clear experiment to prove consciousness defined this way though, maybe because I'm plain wrong and it's impossible!


You say "not science" but do you actually mean "not verified by the scientific method, yet"? It may be that the perception of self by another remains a meaningless collection of words, or progress may provide instruments that enable verification.


I've been pondering these same issues for well over a decade as well, but I've become partial to the Penrose-Lucas argument which implies that there is some sort of non-computational (perhaps even non-physical) component to human consciousness.

http://en.wikipedia.org/wiki/Orchestrated_objective_reductio...


I believe the problem is not that we can't define the "experience of existence", because this phenomenon could exist entirely independently of our physical (scientific) reality. The problem is that there is an interaction between the experience of "I am aware" and the physical reality, for otherwise, we would not be discussing this topic here!


I think what a lot of people mean when they say consciousness is "what is being directly experienced" (i.e. the experience of one's own existence). Now there seem to be more restricted definitions around, but this one is to me the most self-evident and useful (and I'm not aware of any other word that does the job, which is kind of mad).


>Now replace "apple" with "the fact that I am experiencing my own existence"

I find it interesting to replace "apple" with "UFO", and you get a scenario that is no longer theoretical.


Maybe it doesn't exist and is just a product of humans' own limits, a wondering they throw at each other. Can a protocol know it's in water?


>I believe Koch is conflating two distinct concepts.

I believe Koch is conflating many distinct concepts - as most of the consciousness research field seems to.

How can you say anything useful until you define your terms?

It seems to me humans have many different layers of experience:

Visceral pain/pleasure; instinctive drives; emotional/limbic experiences; locational (physical mapping); memory-based/learned; abstract/intellectual/symbolic.

Consciousness is another kind of experience which adds a symbol that works a bit like JavaScript's 'this', and seems to create a level of experiential indirection - so instead of sensing something directly, you experience how it affects the 'this' symbol. It seems to do this by abstracting the experience and simultaneously comparing it with an associative database of previous experiences, using all of the circuits in the list. (And probably others too - I won't pretend this is a complete model.)

You need some basic ability to abstract to have a 'this.' The better you are at it, the more bandwidth passes through the experience/memory/abstraction/future prediction/new memories system.
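A deliberately crude sketch of that loop, with every name hypothetical and `abstract` standing in for whatever real abstraction would be: an experience comes in, gets abstracted, gets compared against an associative store of past experiences, and the result is what the 'this' symbol "sees".

```python
class ThisNetwork:
    """Toy model of the 'this' loop: abstract an input, compare it with
    an associative store of past experiences, record the new experience."""

    def __init__(self):
        self.memory = []  # past abstracted experiences

    def abstract(self, raw):
        # Placeholder abstraction: reduce raw input to an unordered signature.
        return tuple(sorted(set(raw)))

    def experience(self, raw):
        concept = self.abstract(raw)
        familiar = concept in self.memory   # compare with previous experiences
        self.memory.append(concept)         # store a new memory of it
        return {"concept": concept, "familiar": familiar}

net = ThisNetwork()
print(net.experience("hot stove")["familiar"])  # False: novel experience
print(net.experience("hot stove")["familiar"])  # True: recognized from memory
```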

Simpler animals have basic input circuitry, and they learn instinctively without abstraction.

But... I'd guess surprisingly many animals of all kinds have a 'this' network. The human advantages are a 'this' network with more bandwidth, a much improved ability to learn from experiential invariants, an ability to make useful predictions, and an ability to store and communicate 'this' experiences using external technologies.

If this sketch is accurate, consciousness doesn't have to be mysterious. But it also doesn't depend on a simple connection count.

The Internet won't be conscious because it has no 'this' network. It has no pain/pleasure circuits, no emotional circuits, and no instinctive drives. But most of all it has no ability to compare memories with current experiences and to abstract learning from that. There are systems - like Google search - which have incredibly basic precursors to abstraction/learning/prediction. But they still mostly run on a batch basis with input->processing->output, and no explicit self-concept state memory.

My guess is that if you built a system from these elements and included the current state of the art - or better - machine learning, machine vision, and natural language elements, with useful analogs of instinctive drives and the other animal fundamentals, you'd potentially have something that at least acted and sounded conscious, and would likely do some unexpected things. (Which is one of the practical ways we think we recognise consciousness. Conscious entities are hard to model. In fact we're probably easier to model than we think we are, but face to face, consciousness in other people usually implies surprises.)

Whether or not it would be truly self-aware or just pretending is a question that's impossible to answer without resorting to metaphysics.


I really like this idea that you fleshed out in this post, it seems to relate more to the more common concepts of "consciousness" that are usually referenced. I do think that your description and Koch's don't necessarily have to be disjoint, it is possible that the "simple connection count" is highly correlated with this pattern of self that you have identified. The specific subsystems that you listed may just be very effective (and thus common) ways that this theme of consciousness usually emerge, though not necessarily the only ways. Disclaimer: I tend to think of this subject as more speculation than science anyways, at least currently, thus the opinions I have voiced are just that.


Indeed, without resorting to metaphysics, I don't know whether any other beings besides myself are self-aware or just pretending. I have good reasons to believe other humans are self-aware, but I can't prove it scientifically. It's too bad that scientists tend to be so dismissive of philosophy; I think some rigorous metaphysics would really help in this field.


  How can you say anything useful until you define your terms?
Can you say useful things about games, even though no clear definition of 'game' exists?

Can you say useful things about 'love', even though it is not well defined?

Clear definitions are not necessary to say useful things. Some rough boundaries around the term are usually enough.


How about emergent properties when connecting 3 billion humans to a network? Sounds like a quite complex system to me :)


Here's my perspective, fwiw:

Intelligence is the capacity to acquire and apply knowledge. Knowledge is acquired when it is represented in a system. Action is the application of knowledge. Thus, all systems in which actions occur are intelligent.

The potential for intelligent behavior is determined by the existence of a goal, the opportunities to acquire and apply knowledge that are relevant to the goal, the resources available to acquire and apply knowledge, and the capacity of the knowledge representation to express truths. With the exception of the goal, adjusting these constraints scales intelligence.

Intelligence requires a goal to determine action selection. No actions occur in systems without goals, and therefore such systems are not intelligent. The goal of a system does not influence the scale of intelligence of the system (which can be thought of as the magnitude of a vector), but only its purpose (which can be thought of as the direction of the same vector).

Opportunities to acquire knowledge range from stones falling on levers (the force exerted on the lever – from the perspective of the lever) to humans touching hot elements (nerve impulses from the sensory system – from the perspective of the human mind). No observation is fundamentally superior. Representations of observations (as knowledge) determine potential applications.

Opportunities to apply knowledge exist when it is possible for a system to perform an action. At the beginning of a game of chess, each player has many opportunities to apply knowledge. After the losing player has made his/her final move, the player has no further opportunities to apply his/her knowledge. Opportunities to acquire or apply knowledge which are not relevant to the goal do not influence the potential for intelligent behavior because they will not influence action and will not be applied, respectively.

Resources to acquire and apply knowledge make action possible. The resources available to apply the laws of nature are typically presumed infinite (mathematical models preserve truth and the laws of nature conserve energy, but just as mathematical models require resources for the completion of proofs, it can be assumed that the laws of nature may require resources). This differs from the limited resources available to computers and human minds.

Knowledge representations range from the experience of observations (the force exerted on the lever and the nerve impulse) to stored, abstract concepts (mathematical models written on paper and memories in a mind). All knowledge can be expressed as truths. Knowledge representations express truths with respect to the perspectives defined by their grammars. Knowledge representations with wider perspectives (fewer assumptions) are subject to fewer biases, and are therefore more expressive.

The degree to which an intelligent system observes itself determines its level of consciousness. Most humans are not capable of independently observing which of their neurons are firing at a given moment, or the location of the electrons traveling through their electronic devices. The availability of such observations would result in a higher state of consciousness. Perfect observation would allow a system to be completely aware (although attempting to observe all of a system at the same time, including its state of observing itself, leads to an infinite regress), resulting in the highest possible state of consciousness. Intelligent systems that are conscious of their action selection are able to optimize this process, subject to the capacity of their knowledge representation to express truths.


Are you sure that action is always the application of knowledge? When a star gets old and goes supernova, I don't think it's really fair to say that it's because of knowledge or intelligence on the part of the star. Or when a rope gets old and frayed and eventually snaps. Or when an asteroid randomly collides with a planet. Or when lightning strikes.


You haven't defined "knowledge".


For me questions on the nature of consciousness have always been deeply connected to the question of identity. Specifically the following thought experiment: What if there was a machine that would read every atom of your body and recreate your body at a different location at a different time whilst destroying the original. Would the arising consciousness still be your own original consciousness or would it be a copy? What if we keep both, the original and the copy? Are there now two separate instances of your consciousness running?

I guess the only answer in line with Koch is yes. Consciousness emerges out of a complex system. And I might add that consciousness is not a continuously working "state of mind". It fades in and out during our daily lives. The consciousness we had yesterday is lost today and replaced by a new one, which runs on a different chemical setting in your brain (albeit one that shares most of the memories of the previous one).

I think that while I can "experience my own existence", this "existence" doesn't always refer to the same consciousness - one moment to the next the underlying system might have changed marginally or even substantially (in the case of being based on an entirely new set of atoms, which just happens to be in the same configuration as before).


TL;DR:

Christof Koch says: 1) Make a really complex system. We can even measure how complex systems are! 2) ??? 3) Consciousness!

By the way, that's not panpsychism, that's emergentism, and for much better accounts of that read C.D. Broad or Jaegwon Kim.


The biggest neural network yet built was a 100 billion neuron simulation of the human brain[1]. That didn't exhibit consciousness, but is considerably smaller than the 1 quadrillion (1 million billion) synapses a human brain has.

I'm unsure what the relationship between a synapse and a neuron is. There are claims[2] that a human brain has "only" 100 billion neurons itself.
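For rough scale, taking both quoted figures at face value gives the missing relationship: on the order of ten thousand synapses per neuron, on average.

```python
neurons = 100e9   # ~100 billion neurons, per claim [2]
synapses = 1e15   # ~1 quadrillion synapses, per the figure above
print(synapses / neurons)  # 10000.0 synapses per neuron, on average
```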

Of course, it's true that consciousness may emerge from the complexity of the connections rather than the raw count.

I'm unconvinced. It seems possible that whales (500 million neurons) have more complex brain networks than humans, and yet by some definitions they aren't conscious.

Conversely, some argue that birds are conscious, and yet they have considerably smaller brains than some neural networks.

[1] http://www.izhikevich.org/human_brain_simulation/Blue_Brain....

[2] http://www.quora.com/How-many-neurons-are-needed-to-create-a...


"Of course, it's true that consciousness may emerge from the complexity of the connections rather than the raw count."

Your "of course" is taking as proved something that is not: the consciousness-is-computation hypothesis.

To your credit, you said "may", so of course you're not convinced. Which is good, because people shouldn't believe in things that aren't testable. :)

And you talk about birds, but there are also unicellular organisms reacting to anesthetics, so that doubt should be even broader.


When we think of consciousness, we normally associate with the brain. But according to the complexity theory, shouldn't there also be consciousness in most of our organs? The liver is a massive network of cells that are in at least a moderate amount of communication, and yet I am not aware of its consciousness.

But perhaps it is conscious, just at a level that's inaccessible to my brain, which is a nearly distinct structure with very minimal relative communication to the liver.

I also found it interesting that he denied the idea that America might be one larger consciousness. I've had multiple experiences on LSD which have suggested to me that as an aggregate humans are one larger organism with a single collective consciousness, that operates both at a slower speed and a higher overall awareness. (Most accessible in the 150-200ug range)

When you think about how individual neurons work, spraying neurotransmitters at each other to trigger responses from the next neuron, essentially passing information in a complex yet highly organized web, are humans that much different? The set of humans who design cities is different from the set of humans who manage government policies, which is different from the set of humans who try to go to space. Each human passes information to others in highly complex ways that form super-organized macrostructures. It doesn't seem to be that much of a stretch, then, to say that consciousness also arises from the interactions of the macrostructures, in the same way consciousness arises from the interactions of all the parts of our brain.

And the internet would seem to be a massive facilitator of this. Because of the internet, the amount of communication I do per day is enormous, and my interactions happen at a global (though mostly English speaking) scale. I doubt communications of this scope and magnitude were available to humans even 20 years ago.


The problem I have with this is representation. One's brain cells are a certain representation (in other words implementation). Would the same consciousness be present with another representation (e.g., computers on the internet)? How about if we chose a completely different (nonsensical) representation, for example, at time instance t0 the brain is represented by grains of sand, then at t1, the brain is represented by e.g. the trees in a forest, then at t2, etc. My point is that you could ascribe consciousness to just about anything. Also, if the representation of brain cells produces a consciousness, would the (dual) representation of "NO brain cells" also produce a consciousness, and would they thus both exist at the same time or would they collapse into one consciousness?


Yes, I share your belief in collective consciousness. I think there's little doubt we'll find they do exist, once we find a testable definition of consciousness.

BTW one doesn't even need to take acid to realize this. Humans have thought this for thousands of years. It's a common belief of ancient eastern philosophies (metaphysics) such as Taoism and Buddhism, that "we're one consciousness". Our egos just don't think about it, much like a cell of your body operates in another (lower) level of consciousness, where it thinks it's "independent of you".


I find it useful to not think of the brain as what's in my cranium, but rather my brain is my entire neural network.

I'm not entirely convinced that I'm a Harvard architecture. A von Neumann architecture seems more plausible... long story short, because humans have huge problems learning to walk on two legs.

Evolutionary neuroscience might provide the answer we seek, through the normal technique of belief propagation.


The internet, as a neural network, "dreams" of cat videos when they go viral. Huh.

In fact, it "decides" for itself which ones should go viral.

Interesting thought.


I blame my deep integration with the internet for my belief that I'm actually a cat.


This brings me to an interesting thought experiment I struggle with:

Most likely most people here would agree that if you make an exact copy of a person's brain, whilst leaving the original intact, it would be a new person, identical but divergent from the original. A new thread of consciousness by such definition.

But then, what if you destroy the original at the moment of copying? It would appear to be the same.

But then, what if you replace each neuron one at a time over a period, maintaining the original network? This question is troubling because it brings the integrity of our notion of consciousness into obvious doubt. Since it is in fact the case that we shed most of the atomic matter that constitutes us in a given year, we are clearly immaterial. Patterns.

So, put plainly: should you copy your brain all at once, killing the original, are you a new person? But if you are, how does transitioning slowly, piece by piece over time, which is what we observe in nature, maintain the conscious strain? How are these different?

It's obvious to me there is something fundamental here we are missing. I welcome any insights you all might have had in similar thought experiments.


You seem to be assuming the actual existence of a "thread of consciousness" or a "conscious strain", when it could be an illusion.

When you wake up after a dreamless sleep, are you the same person, the same conscious entity, as went to bed the night before? Or is that entity now "dead", and "you" are a new entity that has just inherited its memories (most of which it in turn inherited from its predecessors)? How could you ever tell? In fact, are you the same conscious entity from moment to moment, or at least from thought to thought?

More "making a copy" thought experiments, none terribly original:

- If you're disintegrated and immediately reassembled, are you still you?

- Does using different atoms make a difference?

- Does leaving a gap between disintegration and reassembly make a difference? If so, how long a gap? What if you're resurrected at the Omega Point by sufficiently advanced aliens/post-humans?

- If you're split in two (sagitally, coronally, or however), and each half is immediately reconstructed into a whole human, each identical to you before the split, which is you? Which pair of eyes would you find yourself looking out of? Both? Neither?

- If the two "yous" exchange atoms, such that one ends up with the entire complement of atoms that made up you before the split and the other ends up with none, does that affect the claims of either to be the "real" you?


Excellent points, and actually in line with my line of questioning as well: I think my assumption of a thread of consciousness was a semantic miscommunication, as bringing that idea into question was indeed where my questions were leading.


There's nothing fundamental we're missing: the reality beneath our intuitive understanding of personal identity and continuity is just a lot messier than our intuitions allow for. Nondiverging copies are, in some sense, the same person, but they're also completely useless: the instant you "wake them up", they diverge and become different people. Destructive "moving" preserves personal identity, but is goddamn creepy because it provides no way for the "moved" person to verify that they remain the same before-and-after. The "continuous stream of consciousness" model is intuitive and lets us detect when something in our heads changes (you can feel yourself going from "sober" to "drunk" while awake in a very different way than if I just injected you with booze while you slept... For Science, of course).


Several philosophers have attended to those very thought experiments. I recently read some of Locke's "Essay Concerning Human Understanding" (Book 2), and he has some interesting thoughts on person-hood and persistence of consciousness in chapter 27.


The second question is the same as The Ship of Theseus.

http://en.wikipedia.org/wiki/Ship_of_Theseus


I failed to find anything concrete in Koch's argument. It's funny, because at some level he is right: there is this thing called consciousness, but even when the most rational people try to describe it, they inevitably sound unscientific. I imagine that before Darwin everyone must have sounded crazy when they talked about biology. I guess the field of neuroscience is still waiting for a revelation.


Also worth a read: http://kk.org/thetechnium/2008/10/evidence-of-a-g/

Too much emphasis is usually placed on the "consciousness" part of general intelligence.


"But a malformed packet could also be an emergent signal. A self-created packet."

If a program sent out a malformed packet, this would indicate an error in the sender's program. Let's assume that a receiving program reads the malformed packet and is able to process it. It results in something favorable (no clue how this would be determined, but let's go with it) and so the receiving program repeats whatever it originally did to get this malformed packet sent to it, so that it gets another malformed packet. Multiply this by thousands of other programs doing similar things to "hack" other programs into sending malformed packets to perform useful functions, and maybe you have something happening here.

The above process basically describes a genetic algorithm, which I don't think the Internet is. It also assumes that most programs are flexible enough to produce bugs like this, creating malformed packets. Does this mean we should stop unit-testing code to allow more freedom for the programs? Plus, the above also assumes there is some way of ranking an outcome from a malformed packet as "favorable". Maybe that would be humans?
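The try-something / keep-what's-favorable / repeat loop described above really is the skeleton of a genetic algorithm. A minimal sketch of that skeleton, with a toy fitness function (counting 1-bits) standing in for whatever "favorable" would mean; all names here are illustrative:

```python
import random

random.seed(42)  # deterministic for the example

def evolve(fitness, genome_len=16, pop_size=30, generations=60, mut_rate=0.05):
    """Minimal genetic algorithm: keep the fitter half, mutate copies of it."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: fitter half survives
        children = [[bit ^ (random.random() < mut_rate) for bit in p]
                    for p in parents]           # variation: random bit flips
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in for a "favorable outcome": genomes with more 1-bits are fitter.
best = evolve(fitness=sum)
print(sum(best))  # close to 16 after selection has done its work
```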


Having read and listened to Alan Watts I have become convinced that the self or ego or "I" as we commonly call it is a delusion, and a dangerous one at that. It leads to all kinds of confusions and contradictions.

It is this delusion that is the source of our pain and jealousy. We should realize that we are part of a greater whole, that our own existence is inseparable from the rest of the universe. We are all one. Cheesy.

Rather than say "I think therefore I exist", I prefer the simpler, "thought exists"


This requires a leap of faith, usually not an indicator of good science, but it's not unimaginable. One could argue that his terms are ambiguous or otherwise ill-defined, but what if this kind of research could be the starting point for such definitions?


He also has a book about it: http://www.amazon.de/The-Quest-Consciousness-Neurobiological...

I have read it halfway; there's a lot of information there.


The initial premise that electrons just intrinsically have charge isn't proven. In fact there is a proof of why electrons have quantized charge (by Dirac), but it requires that magnetic monopoles exist, which haven't been observed.


I also thought it was really interesting that they chose that to start with. I personally find the electron to be one of the more confounding and least talked about pieces of the current state of physics understanding.

That being said, it is hard for me to imagine what a proof that charge is intrinsic to an electron would even look like. Proving that something is the result of something else seems much more straightforward than proving something not being the result of anything.

Also, only talking about an electron's charge being intrinsic is ignoring a lot of the mystery of it. The fact that it is a point particle is a pesky issue as well. An electron is just an electron with no underlying "pieces" or mechanisms. Obviously that could be wrong, but no one has been able to prove it yet.

I think using the electron as an example for our understanding of how the universe works is pretty ignorant. It's one of the weirdest parts of physics I've encountered during my career. It is a topic that few people actively work on, but it feels like there is clearly something amiss with our current understanding.


That theory is not even close to pointing in the right direction for research.

Consciousness is not computation.

And it does not just emerge from complexity. If it did, then the amount of quantum information and complexity in any tropical thunderstorm would make it a supergenius entity. And it is not (or prove me wrong).

Consciousness is also not mind. Mind is more like the sum of intelligence. Consciousness is something else, more fundamental. Only quantum biology could be interesting for researching this.


I don't think we can equate complexity of a tropical thunderstorm to the complexity as described in the article. The discussed complexity arises from local connections (be it persistent bonds or repeated interactions) between entities. While interactions in the thunderstorm might seem "complex" to our mind, they are not "complex" from the point of view of "complex systems", they are rather "complicated".

Moreover, the thunderstorm network (if we consider the interactions of particles in the thunderstorm) is very transient. No feedback loops emerge in the system (to my understanding of thunderstorms). Also, the system does not adapt to the surrounding environment by reconfiguring its internal connections. A thunderstorm lacks many properties of the complex systems discussed in the article.


A storm doesn't have feedback loops in brain-like interconnections. But at a quantum level there is thermodynamic activity re-adapting and re-shaping the system all the time. In any case, as Giulio Tononi would say, "it has zero integration." And it doesn't have consciousness, only energetic physico-chemical activity.

I agree with that view.

The important part is (a) that it does not have computation and (b) that regardless of complexity, computation is not consciousness. Saying it and expecting it to "magically emerge" does not make that hypothesis any truer.

They are throwing darts in the dark and making it sound cool.

But, okay, I'll be nice and not troll the effort of making this subject cool, and let's not put theory against theory, because that is not productive.

What about seeing something testable?

Take a Paramecium.

It does not have any neurons, so it doesn't have synapses, but it still learns where the food is and reacts to anesthetics as we do.

What about that?

That radical theory published there does not predict the Paramecium, much less anything about the one in Homo sapiens sapiens.

Bring me news on how quantum-biology is behaving down there and we might actually get somewhere.


Nice argument from italics.


The idea you can magically "generate" consciousness by running an algorithm is really silly superstition.


Completely agree.

Also, as a theory it is essentially useless, as it does not predict anything.

They are throwing darts in the dark and making it sound cool.


Are we discussing consciousness every day?

https://news.ycombinator.com/item?id=8515361



