Indeed. I think IIT is a good theory... of something, just not exactly consciousness. Maybe a precondition to consciousness, or something like that. But the thing we think of as our consciousness is, to me, best explained by the "global workspace" theory, which says consciousness is the process by which the various specialized parts of the mind, which constantly work separately and in parallel, communicate their state to each other. It's like a boardroom for the society of mind, where at any point one subsystem has the podium (although there is lots of chatter and crosstalk as well). For most of us, a part of the language subsystem (Gazzaniga's "interpreter") is also giving a running commentary (the internal monologue) on the information it receives from the other parts (with a lot of its own interpretation thrown in)... but this is not an essential feature of consciousness! We have a tendency to identify our consciousness with this commentary, but that is obviously incorrect. I think the communication in this global workspace occurs in its own "language", a language internal to organic brains, capable of abstracting information from any of its components and reducing it to its barest essence.
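
If it helps to see the "boardroom" as a mechanism, here's a minimal toy sketch in Python. All the names, the salience heuristic, and the random bids are my own illustrative inventions, not anything from the GWT literature:

    import random

    class Subsystem:
        """A specialized part of the mind, working separately and in parallel."""
        def __init__(self, name):
            self.name = name
            self.workspace_view = None  # the last broadcast this subsystem heard

        def propose(self):
            # Each cycle, bid for the podium with a message and a salience score.
            return {"from": self.name,
                    "content": f"{self.name} state update",
                    "salience": random.random()}

        def receive(self, broadcast):
            # Every subsystem hears whatever wins the workspace (the "broadcast").
            self.workspace_view = broadcast

    subsystems = [Subsystem(n) for n in ("vision", "hearing", "motor", "interpreter")]

    for cycle in range(3):
        bids = [s.propose() for s in subsystems]         # parallel chatter and crosstalk
        podium = max(bids, key=lambda b: b["salience"])  # one subsystem takes the podium
        for s in subsystems:
            s.receive(podium)                            # its state is broadcast to all
        print(cycle, "podium:", podium["from"])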

This view of consciousness is phenomenologically the best fit for most of the (admittedly limited) objective information we have about human conscious experience, and it is consistent with various altered states of consciousness, such as those induced by meditation or entheogenic substances. It explains how consciousness is only a small part of what happens in our mind, and why the nature of the subconscious (which is most of what actually happens in the brain) seems so hard to nail down. It also means that any being with a "mind" comprising numerous independent, parallel processes that need to be coordinated has some measure of conscious experience: even invertebrates, and probably even living things whose information processing uses an entirely different infrastructure, such as plants. However, I can't see any way that this definition of consciousness could apply to an electron.

Edit: I think the global workspace theory of consciousness can probably be described mathematically by IIT, but not just any integration of information results in something that deserves to be called consciousness. The information being integrated should be a combination of perceptions (feedback from the environment) with some kind of memory of previous states, resulting in new memories and predictions, and the integration should happen through pre-processing of this information by relatively independent subsystems. This is still general enough to apply to nearly everything living, but I think it puts conscious experience at a higher level than merely integrating information.
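
To make that criterion concrete, here's the same kind of toy sketch (again purely illustrative; the update rule and weights are stand-ins, not a claim about brains). Each module independently pre-processes the shared perception against its own memory before anything is integrated:

    class PredictiveSubsystem:
        """A relatively independent subsystem mixing perception with memory."""
        def __init__(self, name, weight):
            self.name = name
            self.weight = weight   # how strongly this module weighs present vs. past
            self.memory = 0.0      # trace of its previous states

        def process(self, perception):
            # Combine feedback from the environment with memory of previous states...
            integrated = self.weight * perception + (1 - self.weight) * self.memory
            self.memory = integrated   # ...producing a new memory...
            return integrated          # ...and a crude prediction of its next state

    modules = [PredictiveSubsystem("smell", 0.8), PredictiveSubsystem("touch", 0.3)]
    for t, sensed in enumerate([1.0, 0.2, 0.7]):
        outputs = [m.process(sensed) for m in modules]  # independent pre-processing first
        print(t, [round(o, 2) for o in outputs])        # only then would integration happen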




Yeah, I definitely like IIT and think it's on to something important. But it doesn't strike me as a sufficient condition for consciousness. I have a lot of sympathy for GWT. One of its theoretical virtues is that it coheres with independently justified theoretical properties of consciousness, like integrated information, recurrence, self-modelling, etc. But it still lacks any direct theory of phenomenology, i.e. qualia. Then again, I can see why scientists would avoid attempting such arguments if at all possible. This would be a good place for philosophers to bridge the gap, but I guess it's easier to make a career out of promoting panpsychism these days than to come up with something insightful to say about mechanistic consciousness.

But to move the discussion forward: I think one obvious property of qualia is that they are representational. That is to say, a quale is structurally related to the thing it indicates, such that it can inform about that thing. For example, the red quale tells you something about red substances in the context of the space of possible colors, an external world full of beneficial and harmful substances, and the bearer of the quale, with its drives, dispositions, preferred states, etc. This complex milieu of properties, states, and dispositions all serves to inform the properties of a quale. Its representational power is one that gives the bearer certain competencies in the actual world; e.g. pain gives one the competency to avoid damaging states. But this representational power must be intrinsic to the structure that constitutes a quale. If it were not, then its power to confer competency would be contextual: pain would only confer competency in the right environment (like a reflex that has meaning only in the right environment, e.g. the grasping reflex of an infant). But this isn't the case with qualia; the experience of pain is intrinsically representative and provides its bearer with competence universally. The same can be said for emotions and our senses.

This suggests to me that some kind of recurrent structure is a necessary condition for a quale: to simultaneously be the producer and the consumer of a representative state, and to consume it in such a way that necessarily confers competent behavior. But this sounds like a different level of description of the coordination between subsystems. Information from different subsystems bears on a central coordinator, and this information confers competent behavior on downstream subsystems, i.e. contextually relevant causal powers. I see the beginnings of the details required for mechanistic qualia in theories like GWT and others based on principled analysis of brain networks.
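
That producer/consumer recurrence is easy to caricature in code. A minimal sketch (the threshold and the withdraw/explore actions are my own toy example, not a model of anything real): the same state is both produced from input plus its own past, and consumed to select behavior:

    def pain_loop(stimuli):
        """One structure both produces a 'pain' state and consumes it to act."""
        pain = 0.0  # the representative state, fed back into itself each step
        for s in stimuli:
            pain = 0.7 * pain + 0.3 * s  # producer: new state from input + own past
            action = "withdraw" if pain > 0.5 else "explore"  # consumer: state drives behavior
            yield round(pain, 2), action

    print(list(pain_loop([0.2, 0.9, 1.0, 0.1])))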



