Can AI Become Conscious? (acm.org)
286 points by MindGods on May 12, 2020 | 397 comments



I can't help but think of J. Schmidhuber's thoughts on consciousness whenever the topic comes up:

As we interact with the world to achieve goals, we are constructing internal models of the world, predicting and thus partially compressing the data history we are observing. If the predictor/compressor is a biological or artificial recurrent neural network (RNN), it will automatically create feature hierarchies, lower level neurons corresponding to simple feature detectors similar to those found in human brains, higher layer neurons typically corresponding to more abstract features, but fine-grained where necessary. Like any good compressor, the RNN will learn to identify shared regularities among different already existing internal data structures, and generate prototype encodings (across neuron populations) or symbols for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole (we see this in our artificial RNNs all the time). Self-symbols may be viewed as a by-product of this, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself. To efficiently encode the entire data history through predictive coding, it will profit from creating some sort of internal prototype symbol or code (e. g. a neural activity pattern) representing itself. Whenever this representation becomes activated above a certain threshold, say, by activating the corresponding neurons through new incoming sensory inputs or an internal ‘search light’ or otherwise, the agent could be called self-aware. No need to see this as a mysterious process — it is just a natural by-product of partially compressing the observation history by efficiently encoding frequent observations.

https://old.reddit.com/r/MachineLearning/comments/2xcyrl/i_a...
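To make the predictor/compressor idea a bit more concrete, here is a minimal sketch, assuming nothing beyond standard PyTorch; the PredictiveRNN class, dimensions, and training loop are illustrative inventions, not Schmidhuber's actual model. The RNN is trained to predict the next observation in its own history, so only the prediction errors (the "surprise") remain to be stored:

    # A minimal sketch (not Schmidhuber's actual model) of the predictor/compressor
    # idea: an RNN learns to predict the next observation in its own history, so
    # whatever it predicts well no longer needs storing in full, only the residuals.
    import torch
    import torch.nn as nn

    class PredictiveRNN(nn.Module):
        def __init__(self, obs_dim=8, hidden_dim=32):
            super().__init__()
            self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
            self.readout = nn.Linear(hidden_dim, obs_dim)

        def forward(self, obs):                 # obs: (batch, time, obs_dim)
            hidden, _ = self.rnn(obs)
            return self.readout(hidden)         # prediction of obs[t+1] from obs[<=t]

    model = PredictiveRNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    history = torch.randn(4, 100, 8)            # stand-in for the agent's data history

    for _ in range(200):                        # "compress" by learning to predict
        pred = model(history[:, :-1])
        residual = history[:, 1:] - pred        # only the surprise needs storing
        loss = residual.pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()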


You are conflating consciousness[1] with self-awareness[2]. They are two distinct ideas.

In fact, the second and third sentences on the Wikipedia page for self-awareness are:

> It [self-awareness] is not to be confused with consciousness in the sense of qualia. While consciousness is being aware of one's environment and body and lifestyle, self-awareness is the recognition of that awareness.[2]

[1]: https://en.wikipedia.org/wiki/Consciousness

[2]: https://en.wikipedia.org/wiki/Self-awareness


That's philosophy's position on the philosophical view of consciousness. The broad, intuitive notion of consciousness often does equate it with self-awareness; indeed, in the consciousness link you have:

"Today, with modern research into the brain it often includes any kind of experience, cognition, feeling or perception. It may be 'awareness', or 'awareness of awareness', or self-awareness." Etc.

I'm more of a fan of this view than the specifically philosophical view of consciousness. This latter sort of axiomatizes, pares down, an intuition that is, itself, so vague that the axiomatized version winds up not having any definite qualities or characteristics, just "I know I experience it".


Colors, tastes, and feelings aren't vague, nor are the thoughts, imaginations, and dreams they make up. They're the stuff of consciousness, and somehow you have to demonstrate how awareness results in those experiences. Shifting definitions to avoid the hard problem doesn't cut it.


You have no idea if you live in a world of zombies who just say they perceive qualia, who just give the appearance of seeing red.

I don't see it as a real problem. If a program or machine asserts that it perceives qualia, who are we to argue it's wrong? We are in no better a position to prove the qualia of our own subjective conscious experience, beyond our physical resemblance to our interlocutor.

Maybe qualia is what it's like for matter to be in a feedback loop with physical interaction. Panpsychism is pretty much my position.


> I don't see it as a real problem. If a program or machine asserts that it perceives qualia, who are we to argue it's wrong?

Right now, we are in no position to argue it's wrong, because we have no accepted framework for understanding qualia.

That said, we can still make some simple assertions with confidence such as:

1: A one-line program which prints "I am afraid" does not experience fear qualia.

2: A human being who utters the words "I am afraid" may be experiencing fear qualia (and thus perhaps experiencing pain/suffering).

If you assert that the world is full of philosophical zombies rather than humans like yourself, you may not agree with assertion #2. But we build our ethical and legal frameworks around the idea that we aren't philosophical zombies, rather, that we have feelings and want to avoid suffering.

Once you start talking about more complicated programs, though, we don't know what the right assertions are. We don't have an understanding of what things can suffer and what things cannot, for starters. We generally agree that, say, dogs can suffer. That's why we have laws against animal abuse. But what about programs?

It is absolutely a real problem, because some day we will have to build laws and culture around how we treat AI. Whether or not an AI can suffer (ie. by experiencing qualia that can be considered undesirable) will likely affect how we treat that AI. If we have no way to answer "can this program suffer?", then we are setting ourselves up for regret.


"that's why we have laws against animal abuse"

I think there is a step or two missing in between. Abusing dogs isn't bad just because it's abusing something that suffers. It's also, e.g., doing something that has no value to society (unlike animals being mistreated in a feedlot), or something that sows unacceptable levels of aggression. It feels much more complicated than you make it seem.


I took it as a necessary condition, not a sufficient condition. The other necessary conditions were omitted because they aren't important to the line of reasoning being presented.


If you try to find a logical basis for hypocritical laws, you're gonna have a bad time.


> If you assert that the world is full of philosophical zombies rather than humans like yourself

Hm? There is no objective test for whether I am a philosophical zombie or not.

Consciousness is subjective, so it's simply incoherent to ask whether something is conscious or a zombie.


Once an AI is advanced enough to make the question hard to answer, aren't the odds that it will also be advanced enough to simply decide the answer for us?

I imagine that once someone manages to produce an AI with enough agency to seem sentient, it will far exceed human capabilities.


I’d say the question is already potentially hard to answer.

We have deep learning systems that perceive and strategise. Who’s to say that AlphaGo doesn’t have some experience of playing out millions of games?


Please read Solaris.

Or talk to an ant.

There's no reason to expect to be able to understand something's ideas just because it's smarter than you.


Ned Block covers this in his paper on the Harder Problem of Consciousness. Other humans share a similar biology, so we have good enough justification to think they're conscious. It won't defeat solipsism, but it's enough to make a reasonable assumption.

Data or whatever AI/Android doesn't share our biology. And since we can't tell whether it's the functioning of our biology or the biology itself which provides us with consciousness, we lack any reasonable justification for assuming Data is conscious. And then he goes further into functional isomorphs at different levels and how that might or might not matter for Data, but we can't tell.


> Other humans share a similar biology, so we have good enough justification to think they're conscious.

That's the way it goes with these arguments: it always comes down to an appeal to what's reasonable or plausible (those who insist consciousness must be an algorithm are doing the same thing, at least until we have algorithms that seem to be making some progress on the hard problem.) One might equally say other humans share a similar physics. When Dennett said "we are all p-zombies", I suspect he was making the point that it would be highly tendentious to take the position that anything both behaving and functioning physically in the manner of a human would not be conscious, except for humans themselves.


I see a lot of resemblance in biology between myself and cats, does that mean that cats have qualia? I infer they dream from their twitches as they sleep.

How about mice? Lizards? Fish? Insects? Nematodes? At what point do qualia stop occurring? I see a more or less continuous decline in similarity. I don't see any justification for a cut-off.

I don't think Data is conscious. I suspect consciousness might be a side-effect of physical computation. I don't believe there's any magic to the human brain's inputs and outputs which cannot be computed, and thus display the appearance of consciousness. In fact I see the Turing test as a thought experiment more than an actual test, a way of removing the appearance of a human body from the task of evaluating an artificial intelligence as a thinking, conscious being. If it quacks like a duck, a rose by any other name, etc.

In fact I'm not entirely convinced that the Western conception of consciousness isn't a learned cultural artifact, that there aren't people who have a more expansive notion of self which includes close family, for example. Have you ever suffered watching a relative hurt themselves? Ever had false memories of injury from childhood which you found out later happened to someone else?


We have all discussed this in your absence, and we have decided that only you are a zombie.


As far as I understand it, the "hard problem of consciousness" is the claim that consciousness is something absolutely separate, that there is a jump between constructive processes that make up the informal view of consciousness and the full formal, philosophical view. That approach always seemed like a complete dead end.

I mean, "philosophical zombie" construct would seem to experience "Colors, tastes, feelings" and whatever concrete, they just would lack the ineffable quality of (philosophical, formal) consciousness - you could brain pattern akin to color sensation for example. This final view is kind of meaningless and I think people are attracted to the view by misunderstanding, by thinking something concrete is involved here.


The concreteness is your own subjective experience. All this other stuff about constructive processes and zombies is word play. It doesn't change your experience.

The issue is providing a scientific explanation for how the physical world produces, emerges as, or is identical to consciousness. Arguments like p-zombies and Mary the color scientist are meant to show we don't know how to come up with such an explanation, because our scientific understanding is objective and abstract, while our experiences are subjective and concrete.

I prefer Nagel's approach to the hard problem, as he goes to the heart of the matter, which is the difference between objective explanation and subjective experience. Science is the view from nowhere (abstract, mathematical, functional), while we experience the world as being from somewhere, as embodied beings, not equations.


In your example of qualia (colors, tastes, feelings), how is that not awareness? That is, what is the difference between awareness and consciousness here?

In your example of the abstractions of qualia (thoughts, imaginations, dreams), how are they not labels of experience? That is what ML does, after all.

Today ML may, for all intents and purposes, have roughly the intelligence of a bug, but it has to have awareness to label data. Where does consciousness come in beyond labeling and awareness that ML does not have today?


Do you think that ML algorithms are having experiences when they label data?

We label experiences with different words because we have different kinds of experiences. But it's the experiences and not the labels that matter, and the labeling of those experiences does not cause the experience. We say we have dreams because we do have dreams in the first place.

You're proposing that labelling data is the same thing as having an experience, but you haven't explained how data is experience, or why the labelling matters for that.


ML algorithms have very beautiful dreams and hallucinations https://www.google.com/search?q=neural+net+dreams ;)

More seriously though, if we agree that the brain is described by physics, then everything it can do, a Turing machine can do as well, so at the root all experiences have to be data and computation.


Perhaps, but keep in mind that physics is a human description of the universe, not the universe itself. So it may also not be data and computation, as those might just be useful abstractions we create to model reality. Same with math.

But that gets into metaphysics, which is notoriously tricky.


That's why I say "if we agree that the brain is described by physics".

If you argue that a brain cannot be described by computation and there is something supernatural like a soul, that's a fine theory, and the only way to disprove it is to create a working general AI.


> If you argue that a brain cannot be described by computation and there is something supernatural like soul,

Why are those the only two options? There's lots of different philosophical positions, particularly when it comes to consciousness.


I am not really sure about the value of these philosophical positions, since the ones I have seen simply suggest ignoring logic for a little bit to obtain the result they feel is correct.

The suggestion by Penrose that consciousness is connected to quantum state, and so cannot be copied or computed on any classical Turing machine fitting in the universe, is an interesting way to circumvent the strange predictions of functionalism, but is not very plausible.

And Jaron Lanier, from your other comment, explicitly suggests not using approaches like that, because it would weaken the vitalist argument when proven wrong.


The pancreas is described by physics. Can a Turing machine make insulin?


A Turing machine can make insulin that works on cells created by the Turing machine.


It's only "insulin" and "cells" for someone interpreting the program. The issue here is that if you had a simulated brain, what makes that simulation a brain for the computer such that it would be conscious?

Here's Jaron Lanier on the issue of consciousness and computation: http://www.jaronlanier.com/zombie.html

His meteor shower intuition pump is similar to the argument that nations should be conscious if functionalism is true. The reason being that any complex physical system could potentially instantiate the functionality that produces a conscious state. It's also similar to Greg Egan's "Dust theory of Consciousness" in Permutation City.

If one is willing to bite the functional bullet on that, then you can have your simulated consciousness, and the universe may be populated with all sorts of weird conscious experiences outside of animals.


There's no scientific way to define "experience". How do you know whether a microphone experiences sound or not?


>Do you think that ML algorithms are having experiences when they label data?

Yes, but not in the way we have. An RNN keeps track of what is going on, for example.

To take in information through sense inputs (eg, eyes) is experience, but in an unconscious way. We get overloaded if the information we take in is not deeply processed before it becomes conscious. A CNN is similar in this manner, but also quite alien.

When we're born our brains have not formed around our senses yet. Our brain goes through a sort of training stage taking in raw sensation and processing it. The more we experience of certain kinds of sensations the more our brain physically changes and adapts. It's theorized if you made a new kind of sense input and could plug it into a newborn brain correctly, the brain would adapt to interfacing with it.

This strongly implies qualia is real, in that we all experience the world in drastically different ways. It may be hard to imagine, but my present moment is probably very alien compared to yours, as it is with different animals as well as simpler things like bacteria and ML.


Exactly.

One of my more vivid memories is excitedly opening my copy of Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind" and being dismayed as it slowly dawned on me that Jaynes's entire book was based on this category error.

For the life of me I can't understand why people don't understand this distinction (or choose to ignore it).

I can only assume that people ignore the very obvious "magical thing" that is happening in their own heads due to ideological commitments to physicalism or something.


What is this magical thing that is so obvious to you? Something may seem magical without being magical. To the person who survives a tornado while all his neighbors die, the sense they were chosen to live and have a special purpose can be compelling, but is false.

Materialism can't explain everything, but there is yet to be any proof that magic explains anything. Lacking that, assuming a physical basis as the explanation is prudent.


Well, to my mind you seem too hasty to conclude it's false. Sure, statistically there is a very small but non-negligible chance that exactly one person will survive, but the realization of an unlikely event is nothing short of a miracle. To conclude that it happened randomly seems as naive as to conclude it happened purposefully. If your gut tells you it was a random event, that is just fine, but be aware it is a metaphysical axiom of yours.


Certainly nothing can ever be conclusively proven. I might be a brain in a vat that has had a fictional reality fed to me for my entire life. Such arguments may be interesting in some venues, but is HN really that venue?

> To conclude that it happened randomly seems as naive as to conclude it happened purposefully.

I strongly disagree. You aren't just saying that both outcomes are possible (I'd agree with that), but that they are equally likely, a much, much stronger claim. The sun might go on shining tomorrow, or it might collapse due to some unforeseen physical process. Both have non-zero probabilities, but it is the person who claims they are equally likely who is the naive one.

As "miracle" wasn't defined, let's interpret it to mean highly unlikely events; they happen millions of times a day, and we rarely notice. The bill I received in tender has the serial number MF53610716D. What are the odds of that? If you limit "miracle" to highly unlikely events that most people would also notice, it would be strange if they weren't happening all the time. Every person who wins a lottery (and there are thousands every day) experiences a "miracle", as does the person who is told by their doctor their cancer is terminal but then it goes into remission, or the guy who loses his job and is about to be evicted and then gets a job callback for an application he sent in months ago and forgot about.


"assuming a physical basis as explanation is prudent" yeah, good luck making physicality relevant for most explanations. My wife asks why I didn't put the laundry away, for instance. "Physics!" I declare triumphantly.

Now, if I put music on and rub her back a bit... Well, ha! There's the magic, my good sir.

Imagine walking into a film studio with the idea of film production based exclusively on the principles of physics. No. Magic and physics are no more opposed than biology and physics. When hidden (psychological?) forces clearly underlie economically valuable phenomena (architecture, music, media, etc) respecting the magic is closer to the truth than reducing it to physics. Physics just isn't the right explanatory level.

If you'd like to learn more about definitions of magic that aren't "phenomena that don't actually exist", I recommend Penelope Gouk's work on music, science and natural magic in the 17th century. It's about the magic involved at the formation of the Royal Society.


> Now, if I put music on and rub her back a bit... Well, ha! There's the magic, my good sir.

Your answer is rather flip.

Also, if you want to redefine "magic" to be nice things that are physically based but can't be summarized in a neat equation, then I also have no desire to argue. It is clear that the person I was responding to meant "magic" in a different way than you seem to be using it.


If you want to think magic is Harry Potter, fine. But it has an intellectual history that can help frame our interpretation of contemporary technology. Recommended reading: https://plato.stanford.edu/entries/pico-della-mirandola/


I think there are multiple definitions of magic at play here. One is phenomena that are beyond the limits of human understanding. The other, based in the theory of vitalism, is something unexplainable, but also comes with the assumption that these unexplainable things are due to some sort of vital force that living things have (often, specifically humans) [1].

When you talk about the magic of forgetting about the laundry, you mean the first kind of magic. When others talk about consciousness being inaccessible to AI, they're talking about the second kind of magic. I don't think it's that hard to imagine an AI that would forget to fold the laundry and not know why. The main problem with a "conscious AI", I suspect, is that nobody will actually want one.

[1] I have no idea if viruses or prions are supposed to have vital force; vitalist theory emerged before most of modern biology and is mostly used today by those who have no understanding of modern biology.


> The main problem with a "conscious AI", I suspect, is that nobody will actually want one.

Why do you think that? Is it that people can't accept "fake humanness"? Or maybe ego (feeling superior), or that it's not relatable?

The one thing I wonder is: if they were truly sentient, why would they care to, or even try to, serve you?


> Now, if I put music on and rub her back a bit... Well, ha! There's the magic, my good sir.

Sure. There's also the magic of feeding my cat, so he stops nagging me, and sits on my lap purring. And just about everyone would agree that's similar, no?

But what about the magic of running "sudo apt update" before "sudo apt install foo"? That's also arguably similar, albeit many orders of magnitude simpler.


physicality n. obsession with physical urges

On the subject, you are making a good point comparing social behavior with mental activity. Just like there is no “magic” in the way people interact with each other, there is no magic in how neurons do the same.


> Just like there is no “magic” in the way people interact with each other, there is no magic in how neurons do the same.

Except I am arguing that there is most definitely magic in human interaction.


that's not the definition of physicality...


> I can only assume that people ignore the very obvious "magical thing" that is happening in their own heads due to ideological commitments to physicalism or something.

You nailed it. People think that accepting that consciousness is hard is giving in to religion.

It need not be that way.


After thinking about it for many years I've come to the conclusion that some people really haven't noticed themselves (e.g. I think Daniel Dennett is a p-zombie!)


I know it was an incidental point, but it sounds like you don't think very highly of Dennett. What specifically don't you like?


Oh no, I do not mean that in a pejorative sense. I have a lot of respect for him and his work. It's a way to understand how someone so thoughtful and intelligent and dedicated can seemingly be unaware of the "hard problem". One day it occurred to me that maybe he just hadn't had the "breakthrough" or "epiphany". It's unlikely but not impossible.

FWIW, his wikipedia entry says,

> He also presents an argument against qualia; he argues that the concept is so confused that it cannot be put to any use or understood in any non-contradictory way, and therefore does not constitute a valid refutation of physicalism. His strategy mirrors his teacher Ryle's approach of redefining first person phenomena in third person terms, and denying the coherence of the concepts which this approach struggles with.

that's specifically what I don't like. In re: the "hard problem" he wants to declare victory without a fight, or a prize.


Is physicalism the problem? My understanding is that physicalism is still in good standing – at least when it comes to the conscious experience of qualia. (The concept-of-self encoding obviously doesn't require a physical substrate.)


The problem is that physics is not the right level of abstraction and that we do not have an operational theory for working at the right level. I suppose it's also a problem that we can't seem to agree on the definition of consciousness.

Anyway, trying to understand consciousness by applying physics is just as silly as trying to understand why your complex fluid dynamics simulation is giving an incorrect result by peering at the CPU's transistors with an electron microscope: it's the wrong level of abstraction.

One thing that seems to be making our task a lot more difficult is that evolution doesn't even try to design the biology of systems with clean abstraction boundaries that we can understand. There are so many strange things going on in our bodies that we can barely describe them, never mind understand them. One example is what happens when a person consumes Ayahuasca [1].

[1] https://en.wikipedia.org/wiki/Ayahuasca


Probably not, but I mean the kind of informal physicalist commitment that says something like "if I can't imagine quantum fields feeling things, then feelings don't exist." I see a lot of that, though less and less these days.

I think Chalmers helped clear the air a bit here, though I don't think jumping straight to panpsychism is particularly helpful to the discourse, either.


It was said elsewhere that a sufficiently advanced “something” is indeed indistinguishable from magic.


Is this distinction necessary or strictly proven? They seem pretty linked in my personal experience. (All the times I'm not conscious, like when I'm sleeping, I'm not self-aware, and vice-versa. Intuitively it seems like anything that might put me into a semi-conscious state would be considered as also putting me into a semi-self-aware state and vice-versa.) The page lists several disorders related to self-awareness, but I don't think any of them represent anything like "0% self-awareness yet 100% consciousness" which would have more strongly implied that they may be different things.

The page lists several types of self-awareness, including "introspective self-awareness". Maybe what we call consciousness is just the internal experience of something with introspective self-awareness. This does raise the question of what "internal experience" is though.

I think "internal experience" is what some people mean by "consciousness", especially when they say things like "maybe everything has consciousness including rocks or random grids of transistors". I think there exists some sense of "what it is like to be a rock", but it's an utterly empty experience without memory, perception, personality, emotions, or the feeling of weighing choices; something very close to the "null experience", that is, an experience in the same sense that zero or a number utterly close to zero is still a number.

The internal experience of a random grid of transistors only just barely starts to touch a sliver of what a human's internal experience consists of. The internal experience of a worm starts to have a few recognizable qualities (proprioception, the drive to maintain personal needs), but it still lacks many large aspects of a human's internal experience. (Maybe a thermostat with sufficiently rich inputs, output control, world model, and juggled priorities has an internal experience similar to a worm.) Large animals tend to have more elements of awareness (including memory, social awareness, and some introspective awareness) that bring their internal experience closer to having what feels like the necessary elements of human consciousness.


I'd say you're off. I'd describe the experience of the worm much in the same way you're describing the experience of a rock, just a couple biochemical circuits. And the experience of the rock would be even more atomically null to compensate.


I'm not sure we're in disagreement: I agree that a rock's experience is practically null (maybe the fact that its atoms are connected and may be pushed and pulled together makes up the tiniest grain of internal experience), and that a worm's experience is still a world apart from ours and probably much closer to a rock than us. If we were to score an experience on how human-like it is, I'd expect the various elements I named to each refer to exponential or greater increases in score. Comparing the depth of a worm's experience to a human's would be like comparing the volume of a nearly 1-dimensional line segment to the volume of a many-dimensional hyperobject.

Our brains are just tons of biochemical circuits working very mechanistically, with many interesting feedback loops going on that make up our internal experience. A worm's brain has a small and probably countable number of feedback loops going on that I assume make up its tiny internal experience.


You act like these are well defined and agreed upon categories. The page on consciousness you link lists off many different definitions people attribute to consciousness, including: "It may be 'awareness', or 'awareness of awareness', or self-awareness."


I forget who said it - and it's probably somewhere tucked in these comments, but paraphrasing...

If one were to simulate every drop of rain, gust of wind, and tidal current of a hurricane, to ultimate perfection, would you say the processor is wet?


Well, no, but you'd say the simulated environment absolutely is wet.


This is the "Mary's Room" thought experiment which explores the nature of qualia. - The CPU analogy is a nice one though!


>You are conflating consciousness[1] with self-awareness[2]. They are two distinct ideas.

Consciousness is not an idea, and those who have experienced a little bit of it (via meditation or some chemicals or other ways) will laugh at all the attempts philosophers and scientists are making to define it. Simply because you can only define something which you can observe from outside, but here there is no outside; everything is inside it. How can you possibly define that? :D


Right -- consciousness is whatever it is that is aware that it is self-aware. In fact, anytime you pinpoint consciousness as any particular awareness, you beg the question. How can you be aware of your awareness?


If your brain has a computational process for awareness, and it has itself somewhere in the inputs... that’d make you aware of your awareness, no? It’s recursion!


The only way to build a recursive loop is to say that awareness is some state (the output of the awareness process). But that just begs the question: how could awareness be a state? If it is a meta-phenomenon (i.e., the result of some sort of complexity or self-referential loop), then what exactly is aware that one is aware?


Self-aware consciousness is where it gets interesting, in a way that debating the consciousness of toasters and such just cannot match.


That's appealing - certainly more promising than naively reducing consciousness to fan-in/fan-out, as in OP.

But it's also quite hand-wavey, because RNNs have no real hooks for self-reflection.

You can strap a meta-RNN across a working RNN, and if you're lucky it may start to recognise features in the operation of the working RNN. But are your features really sentient self-symbols, or are they just mechanically derived by-products - like reflection in a computer language, taken up a level?

You need at least one extra level of feedback to do anything interesting with this, and it's really not obvious how that feedback would work.

It's even less obvious if this will give you "genuine" experiential self-awareness, or just a poor and dumbed-down emulation of it.
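For what it's worth, the "meta-RNN strapped across a working RNN" setup is easy to wire up mechanically, even if that says nothing about whether the resulting features are sentient self-symbols. A rough sketch in PyTorch (all names and dimensions are made up for illustration):

    # The meta network's only input is the hidden state of the working network.
    # Whether the features it finds amount to self-symbols is exactly the open
    # question above; this is just the plumbing.
    import torch
    import torch.nn as nn

    obs_dim, hid_dim, meta_dim = 8, 32, 16

    working_rnn = nn.GRU(obs_dim, hid_dim, batch_first=True)   # does the actual task
    meta_rnn = nn.GRU(hid_dim, meta_dim, batch_first=True)     # watches the worker
    meta_readout = nn.Linear(meta_dim, 1)                       # e.g. an "am I surprised?" signal

    obs = torch.randn(1, 50, obs_dim)                           # illustrative input stream
    worker_states, _ = working_rnn(obs)                         # (1, 50, hid_dim)
    meta_states, _ = meta_rnn(worker_states.detach())           # reflection on the worker
    self_report = torch.sigmoid(meta_readout(meta_states))      # a signal *about* the RNN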


>But are your features really sentient self-symbols, or are they just mechanically derived by-products - like reflection in a computer language, taken up a level?

Why couldn't we apply the concept to our self?

Self awareness is one commonly believed description of consciousness. Going off of that, we become self aware in three primary ways: 1) We're told about ourselves by our parents at a young age, before the age of one. 2) We observe ourselves in the environment, often influencing the environment, from looking in a mirror to social interaction. 3) We get meta about a series of things (body + mind) and self awareness arises from that generalized intelligence.

Though, you could argue that #3 may not be enough alone to create self awareness and ultimately #2 is required.


Schmidhuber is a great thinker in this field. I don't have a source, but I've seen him more briefly summarize this same point as "consciousness is a by-product of data compression." Pretty close to the last sentence you quoted.


He talks about this in his appearance on the Lex Fridman podcast. He's my favourite guest, fascinating guy.


This sounds like some kind of "cognitive homunculus" looking at itself in a mirror. Does it explain anything, or just push the question of what is perceiving the subjective experience one level deeper?


there was a young man who said "though it seems that I know that I know, what I'd like to see is the 'I' that knows me, when I know that I know that I know."


And here I am reading Frank Herbert's Destination: Void, which deals with developing a consciousness, from ideas of introducing emotions to purposefully having errors in the system.


I love seeing how consciousness gets described every century with analogies from the era’s technological understanding.

I wonder how it will be described in 1000 years :)


And I am amused by how someone always rolls out this observation as if it settled the issue for good. It's a caution against mistaking familiarity and "it seems to me that..." for the way things must be, which all parties to the discussion would do well to heed.


It's nice that someone now has the neural wiring diagram for part of a mouse brain. But we have the full wiring for a thousand-cell nematode, and OpenWorm still doesn't work very well.[1] OpenWorm is trying to get a neuron and cell level simulator for what's close to the simplest creature with a functioning nervous system - 302 neurons and 85 muscle cells. That needs to work before moving to more complexity.

[1] http://openworm.org/


> That needs to work before moving to more complexity.

It really depends on what level of abstraction you care to simulate. OpenWorm is working at the physics and cellular level, far below the concept level as in most deep learning research looking to apply neuroscience discoveries, for example. It’s likely easier to get the concepts of a functional nematode model working or a functional model of memory, attention, or consciousness than a full cellular model of these.

More specifically, a thousand cells sounds small in comparison to a thousand-layer ResNet with millions of functional units, but the mechanics of those cells are significantly more complex than a ReLU unit. Yet the simple ReLU units are functionally very useful and can do much more complex things than we can currently simulate with spiking neurons.

The concepts of receptive fields, cortical columns, local inhibition, winner-take-all, functional modules and how they communicate / are organized may all be relevant and applicable learnings from mapping an organism even if we can’t fully simulate every detail.
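To make the gap in per-unit complexity concrete, compare a ReLU unit with even the simplest common spiking abstraction, a leaky integrate-and-fire neuron; real cells are far more complex than either. A small sketch (parameter values are illustrative, not biologically calibrated):

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)          # the whole "neuron" in most deep nets

    def lif_step(v, input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
        """One Euler step of a leaky integrate-and-fire neuron."""
        v = v + dt / tau * (-v + input_current)   # leaky integration of input
        spike = v >= v_thresh                     # fire when threshold is crossed
        v = np.where(spike, v_reset, v)           # reset membrane potential after a spike
        return v, spike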


The trouble is that (assuming sufficient computational power) if we can't simulate it then we don't really understand it. It's one thing to say "that's computationally intractable", but entirely another to say "for some reason our computationally tractable model doesn't work, and we don't know why".

Present day ANNs may well be inspired by biological systems but (as you noted) they're not even remotely similar in practice. The reality is that for a biological system the wiring diagram is just the tip of the iceberg - there's lots of other significant chemical things going on under the hood.

I don't mean to detract from the usefulness of present day ML, just to agree with and elaborate on the original point that was raised (ie that "we have a neural wiring diagram" doesn't actually mean that we have a complete schematic).


I'm aware of that and I've done quite a bit of work on both spiking neural networks and modern deep learning. My point is that those complexities are not required to implement many important functional aspects of the brain: most basically "learning" and more specifically, attention, memory, etc. Consciousness may fall into the list of things we can get functional without all of the incidental complexities that evolution brought along the way. It may also critically depend on complexities like multi-channel chemical receptors but since we don't know we can't say either way.

It's a tired analogy but we can understand quite a lot about flight and even build a plane without first birthing a bird.


> It's a tired analogy but we can understand quite a lot about flight and even build a plane without first birthing a bird.

The problem is we don't know if we're attempting to solve something as "simple" as flight with a rudimentary understanding of airflow and lift, or if we're attempting to achieve stable planetary orbit without fully understanding gravity and with a rudimentary understanding of chemistry.

I think it's still worth trying stuff because it could be closer to the former, and trying more stuff may help us better understand where it is on that spectrum. And if it is closer to the harder end, the stuff we're doing is probably so cheap and easy compared to what needs to be done to get to the end that it's a drop in the bucket compared to the eventual output required, even if it adds nothing.


Your analogy is actually quite apt here - the Wright brothers took inspiration from birds but clearly went with a different model of flight, just like the ANN field has. The fundamental concept of the neuron is the same, but that doesn't mean the complexity is similar.

Minimally, whatever the complexity inside a biological neuron may be, one fundamental property we need to obtain is the connection strengths for the entire connectome, which we don't have. Without that we actually don't know the full connectome even of the simplest organisms, and no one to my knowledge has hence actually studied the kind of algorithms that are running in these systems. I would love to be corrected here, of course.


Even with connection strengths I still don't think we would really have the full connectome. Such a model would completely miss many of the phenomena related to chemical synapses, which involve signal transduction pathways, which are _astoundingly_ complex. Those complexities are part of the algorithm being run though!

(Of course we might still learn useful things from such a model, I just want to be clear that it wouldn't in any sense be a complete one.)


This. I simply cannot even begin to go into the sheer magnitude of the number of ways the fundamental state of a neural simulator changes once you understand that nothing exists monotonically. It's all about the loops, and the interplay between them. So much of our conscious experience is shaped by the fact that at any one time billions upon billions of neural circuits are firing along shared pathways; each internal action fundamentally coloring each emergent perception through the timbre it contributes to the perceptual integration of external stimuli.

It isn't enough to flip switches on and off, and to recognize weights, or even to take a fully formed brain network and simulate it. You have to understand how it developed, what it reacts to, how body shapes mind shapes body, and so on and so forth.

What we're doing now with NNs is mistaking them for the key to making an artificial consciousness, when all we're really playing with is the ML version of one of those TI calculators with the paper roll that accountants and bookkeepers use. They are subunits that may compose together to represent crystallized functional units of expert-system logic; but they are no closer to a self-guided, self-aware entity than a toaster.


Agreed, though continuously monitoring the propagation of the signals in vivo would allow us to at least start forming models on temporal or context specific modulation of connection strengths (which in the end is what decides the algorithms of the nervous system I presume)


It's easy to see if something flies or not. How would you know if your simulation is conscious?


This is, of course, the key problem.

I mean, I know that I'm conscious. Or at least, that's how it occurs for me.

But there's no way to experience another's consciousness. So behavior is all we have. And that's why we have the Turing test. For other people, though, it's mainly because they resemble us.


^ This. The AGI crowd constantly abuses the bird/plane parable.


> we can understand quite a lot about flight and even build a plane without first birthing a bird

Or fully understanding fluid dynamics and laminar flow. No doubt that the Wright Brothers didn't fully grok it, at least.


But we understand tons of things without simulating them.


Can you give some examples? I'm guessing there is a difference in the definition of understanding here.

As I interpret GP, the claim is that if you can't describe something in sufficient detail to simulate it, then you don't actually understand it. You may have a higher-order model that generally holds, or holds given some constraints, but that's more of a "what" understanding rather than the higher bar of "why".


I don't think that's what they're saying. We could have the detail and understanding but lack compute.

It seems that they are saying that a simulation is required for proof. We write proofs for things all the time without exhaustively simulating the variants.


I explicitly called out the case where issues arise solely due to lack of compute in my original comment.

I never claimed that a simulation is required for proof, just that an unexpectedly broken (but correctly implemented) simulation demonstrates that the model is flawed.


> (but correctly implemented)

Do you ensure this by simulating it?


No? It honestly seems like you're being intentionally obtuse. The simulation being correctly implemented is an underlying assumption; in the face of failure the implementer is stuck determining the most likely cause.

Take for example cryptographic primitives. We often rely on mathematical proofs of their various properties. Obviously there could be an error in those proofs in which case it is understood that the proof would no longer hold. But we double (and triple, and ...) check, and then we go ahead and use them on the assumption that they're correct.


> Can you give some examples? I'm guessing there is a different in definition of understanding here.

I'm not the previous poster, but how about the Halting Problem? The defining feature is that you can't just simulate it with a Turing machine. Yet the proof is certainly understandable.


If you think you understand something, write a simulation which you expect to work based on that understanding, and it doesn't work - did you really understand it?


Maybe, maybe your simulation is just buggy. I can write a simulator of how my wife would react to the news I'm cheating on her, and fail miserably, but I'm quite positive I understand how she would actually react.


Yes, you have to debug your code. I suspect that the people who implemented OpenWorm are capable of and have done that.


OK, what if the simulation works, did you understand it before?


Not necessarily. A working simulation (for some testable subset of states) doesn't carry any hard and fast logical implications about your understanding.

On the other hand, assuming no errors in implementation then a broken simulation which you had expected to work directly implies that your understanding is flawed.


I recently saw this video of living neurons: https://www.youtube.com/watch?v=2TIK9oXc5Wo (I don't actually know the original source of this)

and just looking at the way they dance around - they're in motion, they're changing their connections, they're changing shape - is so entirely unlike the software idea of a neural network that it makes me really doubt that we're even remotely on the right track with AI research


Amazing video! And these poor neurons are squished between glass, imagine them crawling around in a 3D space.


> It really depends on what level of abstraction you care to simulate.

The article starts out "At the Allen Institute for Brain Science in Seattle, a large-scale effort is underway to understand how the 86 billion neurons in the human brain are connected. The aim is to produce a map of all the connections: the connectome. Scientists at the Institute are now reconstructing one cubic millimeter of a mouse brain, the most complex ever reconstructed."

So the article is about starting with the wiring diagram and working up. My point is that, even where we already have the wiring diagram for a biological neural system, simulating what it does is just barely starting to work.


A good comparison exists with emulators, where transistor-level emulation is ill-advised for most hardware.


I'm planning to work on a project like this at some point soon (worm biology is so cheap you can do it in your garage for the price of a Mercedes). The main roadblock I want to work on is to get more connection strength measurements - we already have the full wiring diagram for the worm connectome, but it's not obvious that the connections are all equal strength, they definitely are not. Many labs are trying to image the neurons firing realtime at the whole organism level, but they're stymied by the head which has a hundred or so neurons in very close proximity (and they typically fire at rates faster than 3D imaging modalities can keep up).

I'm definitely excited to start working on this in a couple of years! My hopeful guess is that observing the full nervous system while it's firing full throttle is the only way to start understanding the algorithms that are running there. And from there hopefully we can start finding patterns!

Needless to say I agree with you. The people who say they have a wiring diagram of the mouse brain need to rein in their enthusiasm. We are not anywhere close to starting to understand even a fly or zebrafish brain, let alone a mouse or human one. Sydney Brenner himself agreed (though he's arguably biased towards nematodes).


>worm biology is so cheap you can do it in your garage for the price of a Mercedes

I don't think you understand the depths of what a worm is. Is there any worm created by humans without using previous worm material?


The worms are very special. Their nervous systems are weird because they have been heavily optimized to be small. A huge push to get the full connectome of Drosophila is coming to an end right now, we are not there yet fully, but close. This research has already elucidated many things about how their brains work. People are discussing how to do the mouse now. In conjunction with functional experiments this research already explained many pathways such as olfactory associative learning, mating behavior and many others. None of these understandings came from simulations, but from multiple experiments and study of selected subcircuits. There were also successes in simulation of the visual pathway of drosophila, based on the connectomic data. For example this study [1] was able to reproduce direction sensitivity of a specific set of neurons. It isn’t necessarily true that we need to fully understand the worms before we should and can move on to more complex nervous systems and successfully understand them. It might just be that neuron abstractions don’t work well in worms, because of the mentioned evolutionary pressure to optimize for size.

[1] https://arxiv.org/abs/1806.04793


A computer analogy: it's usually far more straight forward to reverse engineer a regular C program, than one of those tiny hand-optimized assembler demos.

For a concrete example, consider the "modbyte tuning"[1] in the 32byte game of life.

[1]: http://www.sizecoding.org/wiki/Game_of_Life_32b#Modbyte_tuni...


Life is different. There is no logic to the way it solves problems. A programmer writing a game of life in C will probably do it from scratch, in a straightforward manner. Read the corresponding machine code and you are going to see familiar structures: loops, coordinates, etc...

Now here is how life does it: you have a program that outputs prime numbers, and you then have to change it into a sorting algorithm, then into a fizzbuzz, and then into a game of life. You are not allowed to refactor or do anything from scratch. If the program becomes too big, you are allowed to remove random instructions.


The numbers seem to suggest that it's a simple mechanic - hey, just 302 neurons - but a single neuron cell is immensely complex, containing millions of molecules and trillions of atoms that all interact with each other in unknown, unpredictable or even unobservable ways. Even if we had all that data in a model, the biggest problem is that our computations, unlike nature's, are done serially, meaning we only get to work on one bit at a time whereas nature is computing all of the interactions in parallel. If you've ever done a physics collision system you'd know how the performance starts to degrade rapidly with just a few elements (being an O(n^2) problem) and you need to make workarounds. So we need different hardware to start with, like analog computers, if we'd ever have a chance to simulate a single living cell (for starters), then move on to larger organisms.
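For anyone who hasn't run into it, the quadratic blow-up being described looks like this in its most naive form; the function and radius here are made up for illustration, and the point is just that doubling the number of elements quadruples the serial work:

    def count_pairwise_interactions(positions, radius=1.0):
        n = len(positions)
        interactions = 0
        for i in range(n):                 # O(n^2) pairwise scan
            for j in range(i + 1, n):
                dx = positions[i][0] - positions[j][0]
                dy = positions[i][1] - positions[j][1]
                if dx * dx + dy * dy < radius * radius:
                    interactions += 1
        return interactions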


Exactly, it's a kind of hubris that people claim knowledge about the brain just by observing vague electrical signals from afar. The delusion comes from the fact that we seem to know a lot about the heart, the kidneys, and the bones, so why not the brain? Well, the brain's cells are communicating with each other in a network with trillions of connections; that's one major difference that seems to be ignored, and the other is that each cell contains a nucleic acid code beyond our scale of understanding which runs all the time while those electrical signals pass through, deciding its next messages and actions. If you ignore these two major bottlenecks, then you could be tricked into claiming knowledge, but it will be limited and, most of the time, wrong.


Nothing to fear, even though that tiny little calculator can work out what 123456789 x 34556 is faster than any of us dodos.


It is not a matter of "if" but of "when".


> Any AI that runs on such a chip, however intelligent it might behave, will still not be conscious like a human brain.... No. It doesn't matter whether the Von Neumann machine is running a weather simulation, playing poker, or simulating the human brain; its integrated information is minute. Consciousness is not about computation; it's a causal power associated with the physics of the system.

There are serious problems with this. Koch will have to explain how a simulation of a brain can reproduce in perfect detail all the possible behaviors of a brain without having an equivalent amount of integrated information. Presumably integrated information is a property of the organization of a process, and as such it should have consequences for its set of possible behaviors. So if the von Neumann system has constrained integrated information, its behavior should be constrained as well in some externally identifiable way. But by assumption the simulation could be arbitrarily good. How does Koch break this tension?

The other glaring issue is the fact that consciousness under this view has no explanatory power for the system's behavior. If a non-conscious system can reproduce exactly the behavior of a conscious system, then the property of consciousness tells us nothing about the behavior; it is entirely incidental to a conscious system's behavior. It doesn't explain why a conscious subject winces with pain after touching a hot stove, nor why it learns to never touch a hot stove bare-handed again. That's a pill too big to swallow.


I like his idea that consciousness requires a high degree of interconnectivity. I also disagree with his assertion that standard computers can't achieve consciousness. While a von Neumann machine doesn't typically have the high degree of physical interconnectivity that Koch and Tononi say is required, it has the potential through programming to have a high degree of virtual interconnectivity. I don't see why there is a need to differentiate between physical and virtual interconnectivity.

To be fair to the authors, I have only read this article and the IEEE Spectrum article linked from the original. Maybe they go into more depth in their book.


Yeah, I didn't really understand the "physics" requirement as well. Sure if you're measuring the integrated information of the hardware architecture it'll be quite low, but why isn't the integrated information of the abstracted neural simulation software (which is presumably significantly higher) relevant?


Koch does not even seem to be consistent with hardware. A synapse is sufficiently complex, yet a transistor is not? He seems to be offering a peculiar panpsychism in which almost everything, except transistors, has the potential for consciousness. It seems to be a remarkably tendentious view.


...and thinking about this some more, one does not, of course, need transistors to make a Turing-equivalent machine - one could, in principle, use neurons and synapses... or people. Koch's position, at least as he has presented it in this interview, seems utterly incoherent.


Agreed. "Integrated information" is intrinsically substance independent. Yet for some reason he doesn't grasp the obvious fact that a Turing machine running the right program would exhibit the same level of integrated information. His argument seems to require an extra ingredient to exclude programs running on computers but it goes unstated. I have yet to see something that resolves the obvious incoherence.


Especially when the "physics" is the same. My brain is built from the same "physics" as the computer chip. We know complex systems are built on simple systems. Koch is claiming that highly connected systems cannot be built from less connected systems?


Exactly!

A distinction that doesn't make a difference is a very poor distinction.

"If I take my Tesla car and beat it up with a hammer, it's my right to do it. My neighbor might think that I am crazy, but it's my property. It's just a machine and I can do with it what I want. But if I beat up my dog, the police will come and arrest me. What is the difference? The dog can suffer, the dog is a conscious being; it has some rights. The Tesla is not a conscious being. But if machines at some point become conscious, then there will be ethical, legal, and political consequences. So, it matters a great deal whether or not a machine is conscious."

At various times, in certain circles, it has been argued that animals are not 'sensate' (or 'conscious' or various other terms); if you poke a dog with a needle, it acts like it is hurt. But it isn't really in pain. Instead it is 'pretending', acting like it is in pain to avoid further damage or to gain sympathy. Mostly, this kind of idea has been discarded. Mostly.

Koch is making the same kind of argument. If you had a computer based system that looked and acted in all ways like a human being, or an animal, you could cheerfully "beat it up with a hammer" because it's not a 'conscious' being. Even if it looks and acts like one, it simply cannot be because you aren't looking "at the behavior of the machine, but at the actual substrate that has causal power" which is a von Neumann architecture, and "the causal power of Von Neumann-chips is minute" and therefore "any AI that runs on such a chip, however intelligent it might behave, will still not be conscious like a human brain."

To me, this is a philosophically uninteresting theory. (And I fear the day that they start applying their consciousness -meter to random people: "The theory has given rise to the construction of a consciousness-meter that is being tested in various clinics in the U.S. and in Europe. The idea is to detect whether seriously brain-injured patients are conscious, or whether truly no one is home.") The idea that yes, you could build a machine that looks and acts intelligent, conscious, causal, or whatever word you like, but it still would not be real intelligence, or whatever. It always struck me as very akin to the "we don't understand it, so therefore it's magic" school.


If anything, isn't the simulator more conscious? It's integrating all the information that the other one does. But it's also translating between its compute model and another compute model, which is additional integration of information. The simulator is "thinking about" whatever the simulated brain is thinking about and is also "thinking about" how it's doing the simulation.


""" Let's imagine that we simulate the brain in all biological details on a supercomputer. Will that supercomputer be conscious?

No. It doesn't matter whether the Von Neumann machine is running a weather simulation, playing poker, or simulating the human brain; its integrated information is minute. Consciousness is not about computation; it's a causal power associated with the physics of the system. """

I don't understand how anyone could answer this question this way. It's practically a tautology; if the simulation is accurate, then the person you're simulating is conscious. In other contexts, it's taken for granted that if this entire planet were running on a simulation, we wouldn't even be able to tell.

Less hypothetically, this "consciousness-meter" would give a different value for a physical chip, and a perfectly accurate emulation of that chip. They're doing the exact same thing, and your meter gives a different number? Why is anyone taking this person seriously?


To quote/paraphrase Leonard Susskind: people have a habit of philosophizing rather than sciencifizing when it comes to consciousness. It's a scientific question - or at least it is at the moment. Some people want to enshrine human intelligence (i.e. free will and quantum mechanics), which I think is naive without some kind of guiding principle (Fermi's "for a successful theory you need either a strong physical idea or a rigorous mathematical basis; you have neither" to a young Dyson)

I remember being taught about this in a philosophy class, but I really can't help not caring - if it is technically possible, I guess there will be a debate surrounding machine consciousness similar to transgender politics today (uncomfortable to some, but fine to others - including me, for the record).


"if the simulation is accurate, then the person you're simulating is conscious"

Not necessarily. You can't treat a human as a black box, and then use inputs and outputs to definitively draw conclusions about what's going on inside the black box.

You could potentially have two black boxes that give identical outputs for the same inputs, yet have completely different mechanisms inside for arriving at those outputs. For example, say you have someone who's very good at arithmetic able to multiply two numbers. You put him inside one black box and a pocket calculator inside another. You give each box two numbers and they both output the product. Both black boxes will give you identical outputs for any given input. You know the box with the person inside is conscious, but this is not enough information to conclude the other black box is conscious.
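
Here is a minimal sketch of that point in code (the function names are mine, purely for illustration): two procedures with identical input/output behaviour for every input, but completely different internals.

    # Two "black boxes" that are indistinguishable from outside,
    # yet work by entirely different mechanisms inside.

    def multiply_calculator(a: int, b: int) -> int:
        """Direct multiplication, like the pocket calculator."""
        return a * b

    def multiply_by_repeated_addition(a: int, b: int) -> int:
        """A completely different internal mechanism: repeated addition."""
        result = 0
        for _ in range(abs(b)):
            result += a
        return result if b >= 0 else -result

    # From outside the box, the two are indistinguishable for any integer inputs...
    assert all(
        multiply_calculator(a, b) == multiply_by_repeated_addition(a, b)
        for a in range(-10, 11)
        for b in range(-10, 11)
    )
    # ...so identical outputs alone cannot tell you what is going on inside.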

"Consciousness" is not a descriptor of the outputs of a system. Consciousness is a descriptor of how it processes the inputs to arrive at the outputs.

There is a thought experiment in philosophy of mind related to the subject, called the p-zombie https://en.wikipedia.org/wiki/Philosophical_zombie. "Such a zombie would be indistinguishable from a normal human being but lack conscious experience, qualia, or sentience. For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain." You're basically arguing that it's impossible for a p-zombie to exist. That is the view some people hold, but arguing for that case is a lot more complex than simply saying it's a tautology.


I didn't mention the inputs or outputs, I mentioned a perfect simulation in all biological details. That's the opposite of a black box, you'd be able to freely pause and inspect its behavior. So, I don't see why what you posted is relevant.

As for P-Zombies, there would again be no difference between this brain, and a stranger you pass by on the street. Either of them could be a p-zombie, but we give other humans the benefit of the doubt; the same biology should lead to the same internal experience. Personally, I'm completely unconvinced by the p-zombie concept in general, but that's not relevant either.


Indeed, it should be possible to simulate all the "required" physics on a computer. (Not that it's necessary.)


My totally unsubstantiated and made-up take on this is that perhaps consciousness happens on the quantum side of things that we can't measure or really simulate, because we can never really observe it. E.g. we can simulate quantum behavior quite well, but only based on the observed behavior, the particle side of the particle-wave duality; we don't really know for sure what happens before the wave function collapses, we just know it's a probability space (which we can predict very accurately for repeated experiments, but we don't really know how to predict a single particle; we don't even know how to measure all of its properties at the same time, let alone predict it). If consciousness happens in that non-locality / non-realism area of the unobserved quantum state, then perhaps this is why we can't simulate it with existing technology. I have no basis for this thought other than a hunch. It also somewhat helps explain determinism vs free will (although superdeterminism is also an explanation, one that I don't really like)


"... we don't even know how to measure all of its properties at the same time ..."

I assume you're referring to Heisenberg's Uncertainty Principle [1]. If you are, then this shows a fundamental misunderstanding of it. The uncertainty principle describes a fundamental property of quantum systems. It has nothing to do with our measurement technology.

[1]: https://en.wikipedia.org/wiki/Uncertainty_principle


You’re throwing disconnected buzzwords around.

The brain is too hot to rely on large-scale quantum entanglement. Any quantum effects that are relevant to the functioning of the brain are likely to be very small-scale and have some boring, easily-simulated effect like making ion pumps more efficient.


The logic appears to be simple: consciousness is mysterious, but so is quantum world, therefore they must intersect somehow.


To quote John Searle:

"Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases."


According to Wikipedia, John Searle is known for sexual assault, sexual harassment, and the "Chinese room" argument. I'm not sure with which of these he's done the most harm, but I suspect he's done the most harm with his "Chinese room" argument, primarily through wasting people's time. See https://en.wikipedia.org/wiki/Talk:Chinese_room for some discussion, if you must, but I'd rather recommend people do something more useful instead.


Could you point out exactly where I can see a sound refutation to the 'Chinese room'? I followed the wikipedia link but did not see any serious attempt to refute it - just one guy echoing what you've said.

You sound a bit like many of the people who completely miss the point of the argument and insult it (or its author).


The Chinese room argument says that syntax cannot capture semantics, since a man blindly executing rules to process Chinese symbols would not understand Chinese. But the man isn't the system that is purported to understand Chinese. That's like saying that because the CPU in a computer doesn't have memory, your computer doesn't have memory. But of course that's wrong; other components of the system provide the function of memory. Using properties of the CPU to determine properties of the system as a whole is a mistake.

You might recognize this as the system's reply for which Searle has a response. But his response is insufficient to save the argument as it merely equivocates on what "the system" is. The system is the set of rules and the database of facts, some input/output mechanism, and some central processor to execute the rules against the symbols. The system's reply says that the man blindly executing instructions is not identical to the system that understands Chinese, namely the entire room. Thus properties of the man have no bearing on the properties of the system; that the man does not understand Chinese has no bearing on whether the system as a whole understands Chinese.

But the man is still not "the system" when he memorizes the instructions and leaves the room. The system is some subset of the man, specifically only those brain networks required to carry out the processing of the symbols. Importantly, these brain networks are different than what we take the man to be. It is true that the man doesn't speak Chinese, but since the system processing symbols is still not identical to the man, properties of the man do not bear on properties of the system. So still, the fact that the man blindly executing instructions doesn't speak Chinese does not entail that the system doesn't speak Chinese.


To see an analogy here is to see that there is something fundamentally biological about consciousness. But this is very unlikely to be true. For example, we can see how there is something fundamentally watery about rain. And so a rainstorm without water will not leave us wet, thus is not a genuine rainstorm. But what is the analogous biological substrate necessary for consciousness? Searle has never clarified.

But we know a lot about the biology of the substrate of consciousness, and none of those properties seem relevant to consciousness. There doesn't appear to be any fundamental property of neurons that is relevant to conscious experience. The only thing of relevance is its organization and thus its causal or computational structure. But these properties can be reproduced arbitrarily well in a general computer.


That seems rather a duff argument. Human weather forecasters predicting rain don't make us wet either. It doesn't show they are not conscious.


This is not about how human forecasters or computer weather models are conscious.

The idea is if the cloud formations and rain generated by the model are indistinguishable from what you are seeing from your window. It doesn't mean that the model is actually producing rain. It may look the same on camera, but if you go outside, it is very different: only the real rain will make you wet.

In the same way, if a computer behaves like a conscious entity (for example, a chatbot acting like a human, as in a Turing test), it doesn't mean that the computer is conscious.

Note that the computer may be conscious, but it is not a requirement in order to run a model. In the same way that a weather model could use actual water, but doesn't need to.


Do you think all there is to consciousness is biology? That's where the "accuracy" comes from, so if the respondent disagrees with that as a fundament, then it's a reasonable answer.


This is what bugs me about the Turing test; it doesn't answer the important, hard question of AI: is something conscious, and to what extent does it have life.


The point of the Turing test is if you can't tell the difference then it doesn't matter.

As of today you can't prove you're not in a simulation nor can you prove that every person you think is intelligent is not really an AI in that simulation.

So similarly, if an AI exists and you can't tell it's an AI no matter how hard you try, then there is no reasonable difference. They are effectively the same.

The only point of Turing's test is to be double blind. You can't know before you start that the other side looks like a computer rather than a human, or that their voice is off, or that if you cut them open they don't bleed blood, etc. You have to make the test double blind, and to do that a chat format is easiest.


My interpretation: this is the point made by Turing. One can only evaluate input/output. Analysis of the methods/structure used by an AI cannot tell you whether that intelligence is real or not, or whether there is a difference at all.


Am I the only one who thinks the answer is obviously yes, and that it's kind of a waste of time to ponder it? It reminds me of discussions of free will -- completely boring because, even if we are all finite state machines (and we are, in some sense), random inputs would mean we have free will, and random inputs are assured by, for example, chaotic variations in the weather. In the same way, consciousness is pretty clearly characterized by a pattern that is constantly in motion, sensing both itself and the outside world. The only reason we think there's something magical about the phenomenon is the residue of traditional religious beliefs.

One point against the traditional belief is that if you've ever read Aristotle you might be shocked to learn that the Greeks of antiquity believed that "soul" was quite physical, expressing itself in blood, semen, etc. The medieval characterization of mental illness as an "imbalance of the humors" comes from this early idea about how we work. Needless to say, the definition of soul has retreated entirely into the metaphysical, where it cannot be disproven, by definition.


Off topic, but in my mind the answer to "free will" is obviously, boringly no. If we are, as you've put it, a finite state machine, the fact that random inputs are possible doesn't make the machine any less deterministic.

Come to think of it, this is not off topic at all. It follows that if AI can become conscious, it means that consciousness does not contain (or at least does not require) free will. That's if we agree that a computer running code has no "free will" no matter how complex the code is.
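
As a toy illustration (entirely made up, just to pin down the terms): in a state machine the randomness only ever enters as input; the transition rule itself stays a fixed, deterministic function of (state, input).

    import random

    # Hypothetical two-state machine, purely illustrative.
    # The transition rule never changes, no matter what inputs arrive.
    def step(state: str, coin_flip: int) -> str:
        table = {
            ("calm", 0): "calm",
            ("calm", 1): "excited",
            ("excited", 0): "calm",
            ("excited", 1): "excited",
        }
        return table[(state, coin_flip)]

    state = "calm"
    for _ in range(10):
        noise = random.randint(0, 1)   # "chaotic weather" feeding the machine
        state = step(state, noise)     # trajectory varies, but the rule is deterministic
    print(state)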


Since we've got yes and no, why not get the third answer:

The question is badly posed and therefore meaningless. If the universe is deterministic, then you cannot have free-will as it is normally considered. If it is non-deterministic, then (by Bell's Inequality, if you want to throw around big terms) it is random, which is also not free-will as it is normally considered. If you have free-will, as we all feel like we do, then the universe can be neither deterministic nor non-deterministic. Something is smelly in Denmark; the question itself does not make sense in its current form.

I am not actually an analytic philosopher, I just emulate them as a hobby.


I'll give a fourth one - it seems obvious to me the discussions conflate two things. Am I free to choose what to do - obviously yes. Is that choice determined by the state of the particles etc I'm made of - also obviously yes. Whether you call that free will or not is a question of what perspective you take on it. From a common sense legal perspective I'm making decisions of my own free will rather than someone holding a gun to my head. From the weird perspective / definition of some philosophers no. And they argue on and on instead of accepting it's different definitions applied to the same reality.


No, you're not the only one. It amazes me how strong the belief still is that our 'self' or 'soul' is something that can't be captured by machinery or neural networks. This belief has been around for centuries, and every time that science was able to cross some border that was thought to be unique to the human mind, the reaction of people was to immediately come up with something new that proved that we humans were unique.

It is so obvious that what defines our 'consciousness' is this belief in ourselves and not much else. We are programmed by nature to see ourselves as unique and special, because this way we are more likely to defend our own interests and to reproduce our own genes at the cost of others. This belief is so strong that it makes it near impossible to imagine that it is a mirage and that we are nothing more than a bunch of molecules.


Then what is that thing behind your eyes, experiencing itself? It need not be magic, but at the same time, there is not even a suggestion of true scientific understanding of its nature. There could probably be a robot created that perfectly mimics human input/output. The question remains: would that robot necessarily possess consciousness, and what is its nature?


> random inputs would mean we have free will and random inputs are assured by, for example, chaotic variations in the weather

Randomness would not equate to free-will, it'd be the opposite, since it would mean one's choices are completely disconnected from reality rather than a reflection of one's mental state and environmental circumstances.

In fact, a deterministic perspective is the only one where the idea of free will approaches coherency since it mandates that the actions of the individual are a function of who they are. If one repeats the scenario exactly, a person with free will must always make the same choice because if they didn't it would mean that their "will" is irrelevant with respect to what actually transpires in reality.


> One point against the traditional belief is that if you've ever read Aristotle you might be shocked to learn that the Greeks of antiquity believed that "soul" was quite physical

Aristotle's view is, to a large extent, the traditional religious belief (at least in the West). Aristotle would not have held that the soul was physical as you say, rather that the soul was the "form", or the animating principle of the physical body. The "magical phenomena" as the "residue of traditional religious beliefs" you mention is not really "traditional" at all. Descartes came along in the 17th century with his "res cogitans" and mucked up the whole issue by explicitly rejecting the traditional Aristotelian view.


Arguably, if the answer was obviously yes, we wouldn't all be arguing about it. And as for "random inputs", I don't think that's the kind of free will most people have in mind--it's arbitrary, but doesn't involve people having control over themselves in any way.

My thought is that the defining characteristic of consciousness is suffering. If you can't torture it, it's not conscious. If you can, it is.

This would seem to rule out the programs that run on our computers. But, since we have little (if any) real understanding of reality, who knows?


How do you determine something can be tortured?

What if you remove the ability of a person to feel pain? Modify chemistry just enough to always be high as if she had injected heroin? Does it mean the person is not conscious any more?

If we can simulate all the particles of a human on a supercomputer, and all the reactions to torture are the same, is it not really being tortured?


The heroin example is a fair point. I'm not sure. In practice, it's hard to think of examples of conscious beings that don't suffer.

As to the latter, it doesn't seem that that would be torture. As I said, though, I think we ultimately know nothing about this subject. Nor can we, even in principle.


I agree it's likely, but even if it's obvious, "it's kind of a waste of time to ponder it" seems so wrong.

Humans are incorrect all the time. Even researchers retract the previously 'proven'. Until the answer is clearly proven beyond all question, pondering and researching is how we discover the unknown for that question, or the error, and often many other things we didn't expect to find that come as a side effect of questioning the known. Intellectual curiosity has great value even for boring and well-understood things.


But the alternative to pondering isn't "do nothing", but rather "do experiments". The philosophy of it all seems rather pointless. I don't have any references, but I'm sure there was a lot of ink spilled about the philosophy of flight, arguing that it was or was not possible a priori, before we actually did it. Was any of it useful? I don't think so!


It's not obviously yes, since we don't have an explanation for the experience of consciousness. Qualia truly is magical, unless you are in fact a p-zombie I'm talking to.

We also don't have an explanation for what I call 'the harder problem' https://twitter.com/gfodor/status/1225230653932761088


I don't have an explanation for the experience of digesting food at the cellular level either. But I still do it. You don't have to explain consciousness to have it.

As for qualia, I'm reminded of the people coming to my door asking, "Do you believe in Jesus?" I reply, "Do you?" and when they assure me fervently that they do, I follow up with, "Well, okay, how do you know you do?" I think you could do the same with r/Jesus/qualia/g.


I don't understand where you're going with the analogy of digestion. Nobody is saying you have to be able to explain consciousness for it to exist. And science can easily explain the mechanism of digestion, but consciousness remains a mystery.


I don't think you need to explain consciousness to have it, or to produce it in an AI. I suspect that an AI consciousness will be grown from a (reasonably complex) seed, more than constructed. And it will probably require a huge amount of slow, human interaction at first. (Ted Chiang has a wonderful novella called The Lifecycle of Software Objects that is the only SF that I know that explores this possibility).


So you have a belief - the same thing as a person who believes in Jesus? The benefit of having an explanation is it doesn't matter at that point what one believes.


You suspect != it’s obvious.


A couple of your comments seem to imply you think my claim is that conscious AI is objectively obvious. I was only asking if there are others for whom it seems obvious -- clearly there are many who don't, and I don't take issue with them or their position! It's just that I always read articles like this and it strikes me as odd. I used an analogy in another comment about philosophizing about the possibility of heavier-than-air flight prior to us actually doing it. Which was a strange thing to get all worked up about, in hindsight.


The point is that without an explanation, claims of obviousness of the mechanics of consciousness are weak at best.


There are some conjoined twins who are able to use the senses of the other (they share some brain regions, I believe), but it requires great concentration. There might be some insight there.


That's true, but I think it's only "not obviously yes" in the sense that it's also not obvious that you or my mom are conscious.

It does seem like accepting that all other physical humans are conscious, and that there are no extra-physical phenomena in a physical human, implies that a simulated human would also be conscious.


The answer could possibly be "no", not because of something magical about consciousness, but because of the limitations of AI. Our AI systems are all based on Turing machines. It's possible that simulating consciousness requires some hypercomputation not simulatable by a Turing machine.


All this is underpinned by your definition of conscious.

I recall seeing Koch in a debate with a philosopher (it was long ago, I can't recall, but one of the big ones: maybe Searle or Dennett) and they were talking past each other.

For Koch, at least in this debate, consciousness was something akin to attention. For the philosopher, it was something else entirely (if I remembered, I might be able to guess who it was).

Consciousness could mean: making a decision as opposed to having it pre-ordained, or the experience of your senses, or "feelings" or knowing you are a self, or who knows what else.

It isn't just a computer science issue, it is a philosophical and linguistic one too: just what do you mean by the word.


Scott Aaronson and Giulio Tononi debated IIT five years ago. Personally, I think Aaronson had by far the stronger argument, and my much less analytical response to IIT is that it broadens the definition of consciousness to the point of being uninteresting and unhelpful.

From the article (Christof Koch's words):

"The theory fundamentally says that any physical system that has causal power onto itself is conscious. What do I mean by causal power? The firing of neurons in the brain that causes other neurons to fire a bit later is one example, but you can also think of a network of transistors on a computer chip: its momentary state is influenced by its immediate past state and it will, in turn, influence its future state."

In this view, a building with a thermostat would seem to be conscious to some degree. I very much doubt that any amount of study of simple systems that fit this definition will tell us anything useful about the sort of consciousness that is displayed by, for example, humans.

https://www.scottaaronson.com/blog/?p=1799

https://www.scottaaronson.com/blog/?p=1823
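
To spell out how low that bar is, here is a made-up thermostat loop (a sketch of my own, not anything from the article): its next state depends on its current state, which appears to be all the quoted definition asks for.

    # A toy thermostat: the heater decision depends on the current state,
    # and that decision shapes the next state. "Causal power onto itself",
    # in the most trivial sense.
    def thermostat_step(room_temp: float, setpoint: float = 20.0) -> float:
        heater_on = room_temp < setpoint                         # decision from current state
        return room_temp + (1.0 if heater_on else 0.0) - 0.5     # ...feeding the next state

    temp = 18.0
    for _ in range(20):
        temp = thermostat_step(temp)
    print(round(temp, 1))   # oscillates around the setpoint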


"Tononi gives some simple examples of the computation of Φ, showing that it is indeed larger for systems that are more “richly interconnected” in an intuitive sense. He speculates, plausibly, that Φ is quite large for (some reasonable model of) the interconnection network of the human brain—and probably larger for the brain than for typical electronic devices (which tend to be highly modular in design, thereby decreasing their Φ), or, let’s say, than for other organs like the pancreas. Ambitiously, he even speculates at length about how a large value of Φ might be connected to the phenomenology of consciousness. ... Now, evaluating a polynomial on a set of points turns out to be an excellent way to achieve “integrated information,” with every subset of outputs as correlated with every subset of inputs as it could possibly be. In fact, that’s precisely why polynomials are used so heavily in error-correcting codes, such as the Reed-Solomon code, employed (among many other places) in CD’s and DVD’s. But that doesn’t imply that every time you start up your DVD player you’re lighting the fire of consciousness."

Holy poop!

So, spaghetti code is more "conscious" than modular code and parity check code is more "conscious" than any physical entity. Yeah, there's something wrong with this theory.
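
A tiny sketch of Aaronson's Reed-Solomon point (this is not his Φ calculation, just the evaluation-encoding idea he refers to): encode a message as the coefficients of a polynomial and evaluate it at several points, and every output ends up depending on every input.

    # Evaluate the polynomial with coefficients `message` at each point.
    def encode(message, points):
        return [sum(c * (x ** i) for i, c in enumerate(message)) for x in points]

    message = [3, 1, 4, 1]          # the input "information"
    points  = [1, 2, 3, 4, 5, 6]    # more points than coefficients -> redundancy

    original = encode(message, points)
    tweaked  = encode([3, 1, 4, 2], points)   # change a single input coefficient

    # Every single output changes, even though only one input did:
    # maximally "integrated", yet it's just a DVD-style error-correcting code.
    print([a != b for a, b in zip(original, tweaked)])  # all True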


Doesn't this apply to anything with a control loop?


> In this view, a building with a thermostat would seem to be conscious to some degree.

I don't think this is as overly broad or bad of a definition as you presume. I recall sitting at a laundromat one day, in a somewhat sleepy state daydreaming, with the rhythm of the washing machine in front of me playing through my mind. My subconscious told me that the machine was conscious. I dug deeper into that intuition, and realized I was seeing a bit of the inner workings of the mind of the engineer that designed the algorithm. Not a perfect example, as I didn't know if the alg was closed-loop, but in any case, it made me realize that our minds are just large bodies of these algorithms working together. We have some additional oracles, random data generators, and probabilistic mechanisms thrown in as tools, but consciousness really is mechanistic and pluralistic.

To recreate it requires not only the qualitative aspects of a being that can think, but also sufficient (read: vast) quantity of systems to get to a useful general purpose thinking machine. There is no "soul" or special sauce or singular definition of consciousness. It's an illusion.


I don't think it's bad to describe a building with a thermostat as a (very) little bit conscious. It would be interesting to formulate a spectrum of consciousness. Something like:

Level 0: Inanimate objects like rocks. They move when moved by something else but not on their own.

Level 1: Objects like mechanical locks that move iff interacted with but do make some sort of "choice". (ie opening or not opening depending on the key used in the case of the lock)

Level 2: Systems that interact with their environments but in very mechanical ways, like mosses, trees and thermostats. It is essentially predictable how the system will behave when disturbed in certain ways, ie it does not learn or only in a predictable way (like trees growing around obstacles).

Level 3: Systems that can actively learn about their environment and use the knowledge to get what they want. Most animals would be here.

Level 4: Systems complex enough to form a "theory of mind" about other beings. Most social animals would be here.

Level 5: Self-aware systems, which can form a theory of mind about themselves. Complex planning lives here.

There might be levels higher than self-awareness, but I can't think of any examples (obviously, since humans are as far as we know the most intelligent beings on Earth). Some attributes a hypothetical "level 6" system might have:

- Perfect insight in their own motivations. Humans clearly lack this but it does not seem impossible.

- Better ways to communicate qualia. The colour purple or the taste of strawberries is difficult to explain for humans, but once again it does not seem like it would be impossible.

- Ability to "really" multitask. Once again something that is not theoretically impossible but very difficult for humans to do.


>Level 0: Inanimate objects like rocks. They move when moved by something else but not on their own.

If you want to explore it deeper, take a leaf. Like a rock, it is not moved by itself, but wind can pick it up and move it. Now the question is: does the leaf just happen to be movable by the wind by accident, or is there some "intelligence" behind it? Notice that a moving leaf is like any moving object in that it is not moved purely by itself; when you move, it happens together with other forces like gravity and friction and so on.

So maybe a rock is the same in this way; it just requires more (or rather, different) energy to move, as you require different energy to move compared to a leaf. If you look at it this way, everything is conscious and works together; there is no single, separate entity anywhere in the Universe. And all things somehow working together on billions of visible and invisible levels can be called consciousness.


Sure, there could be a universe-wide field of "level 0.1" with local spikes where more complexity lives. At that point it just becomes more a question of definition though.


This (from the top Level 5 down) is one of the arguments proposed by the thermostats-are-conscious people, an argument that often leads to panexperientialism and related theories. Most famously espoused by Chalmers in his classic division of philosophy of mind into hard (consciousness) and "easy" (cognition, etc) problems.

http://www.eoht.info/page/panexperientialism

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


I personally don't understand how a logically and rationally thinking person could reach any conclusion other than that we are a combination of algorithms. So essentially we can in theory be simulated, and there is nothing special about us. In this case the machine simulating us would be as conscious as we are.

It seems any other conclusion is just fooling oneself.


While I wouldn't claim a special case for us over machines as a foundation for consciousness, plenty of logical and rational people consider a reduction of cognition ("AGI") to algorithms to be a naive but understandable approach in early decades. There has been zero success with it in creating a sentient being in all that time, so who's fooling themselves?

And that's just cognition; not consciousness which is a different matter, and still not even understood on a philosophical level, let alone scientific - physics, computing or other.

You should consider the possibility this is a gap in your understanding, rather than a privileged access to rational and intelligent thought over others.


Replace algorithms with math and you'll see that you're making a rather strong metaphysical claim that is essentially Platonism.


One reaction to that statement might be "so be it."

I did not, however, come here to defend the idea that consciousness is an algorithm. Personally, I consider it rather more plausible that a computer plus a random number generator might achieve consciousness through simulating the sorts of physical processes that occur in a brain, and if it is a 'true' random number generator, rather than a deterministic PRNG, then we are no longer talking about math, at least in the sense that you use to conclude Platonism.

One might respond by saying that if the universe is deterministic, then there is no such thing as a 'true' random number generator, as distinct from a PRNG. My understanding of the philosophical implications of QM, and its competing interpretations, is insufficient for me to be sure whether the universe as we experience it is deterministic, but regardless of whether it is or is not, I suspect that the true hard problem of consciousness is 'why do we (feel that we) have free will?'


I don't think there can be or is a reason to believe there exists a 'true' random number generator. What's the reason to believe such a thing exists?

If something seems random to us, the likelier explanation is that we couldn't observe it from close enough, so we do not know the exact algorithm/causes behind that event or occurrence.

To me it seems that even if something behaves like a random number generator, there's no reason to believe true randomness exists. The simpler explanation is that there's some logic involved which we just can't observe (yet).

Same thing as we can't prove that a man in the cloud does not exist, but there's no good reason to believe that one exists - except for people's ego of course.


> I suspect that the true hard problem of consciousness is 'why do we (feel that we) have free will?'

Free will and consciousness are separate issues. You can potentially have one and not the other. What's important for free will is whether choice is free and what that freedom amounts to. What's important to consciousness is subjectivity. You can feel pain while not having a choice about feeling that pain, say if you were restrained and unable to do anything about it, just to give an example.

And potentially, a robot could be free to make choices while not feeling anything.


I doubt that 'will', as it is commonly conceived, has any meaning, or at least it has a very different one, for a non-conscious agent. It's similar to the way I think IIT is clouding rather than clarifying the issue. For related reasons, I feel that compatibilist positions on the issue are largely avoiding it.

As for subjectivity, I had more to say about it here: https://news.ycombinator.com/item?id=23162714

Regardless of whether I am right or wrong on this, I think my other points in this post (platonism, etc.) are independent of it.


We are obviously more than just "algorithms". Algorithms don't have free will. Unless you are also arguing there is no such thing as free will and that the world is just playing out as a predetermined physics reaction? Yet even that doesn't 100% jibe with physics, because quantum mechanics is not deterministic.


> Yet even that doesn't 100% jive with physics because quantum mechanics are not deterministic

QM is a branch of physics, with some theories having multiple interpretations, some deterministic and some non-deterministic. By this I mean that, quite literally, there exist deterministic and non-deterministic models of the same phenomenon, both of which work in accurately describing said phenomenon.

As so, it hasn't been proven whether QM is deterministic or non-deterministic.

Regardless, in the cases where it skews non-deterministic: how can we be certain that the exhibited randomness isn't simply because we aren't accounting for all the conditions there are to know about the phenomenon (hidden variables)?

Before shooting that down, note that Bell himself admitted his theorem does not rule out determinism; the kind of determinism that evades it he called "superdeterminism".


I took Searle's philosophy of society and philosophy of mind courses in undergrad. A lot of Searle's theory of consciousness is tied to biology. A lot of his theories were based on causal notions (thinking I need to move my arm, makes my arm move for example).


> For the philosopher, it was something else entirely (if I remembered, I might be able to guess who it was).

"Qualia" perhaps, which is a sleight-of-hand which amounts to shoving the complexity around on the plate: Consciousness depends on qualia. What's qualia? Exactly.

I think consciousness is a continuum, and humans slide up and down that continuum all the time, from deep sleep to nearly-autonomic behaviors like doing purely rote tasks to, at the other end, being very aware of everything because you just heard a rattlesnake rattle and need to know where it is right now.

Is AI conscious? I think it fits somewhere at the lower end of the spectrum. Is a flatworm conscious?


Qualia means the experience of a particular sensation, like colors, sounds, pains. It's not a sleight-of-hand. We do have those experiences. The difficulty is that they are subjective.


My problem is that qualia leads into the idea of philosophical zombies, or people who act like they're alive and think they're alive but don't have qualia. The trick is to either prove they don't exist, or prove you're not one of them, while also not granting qualia to, I don't know, flatworms or search engines. Which is the same thing as proving things about consciousness.


It's not necessarily the same thing as proving things about consciousness. The two could be orthogonal.

If qualia arise only in particular configurations of physical substrate, then it might be possible for flatworms to have qualia but not search engines. This doesn't mean the flatworm has a concept of self, a concept of time, or even low-level cognition. "Nobody home", so to speak. But the basic qualia of light/dark/hot/cold might still be present.


The P-zombie thing is a nonsense, since it posits an unconscious entity that acts and reacts exactly as a conscious entity would - which would include answering any questions you asked about their experience in exactly the same way. That means that their "synthetic" qualia are indistinguishable from "real" qualia, the insistence of philosophers with preferences notwithstanding. It's just good old Cartesian duality with a new wig.


> which would include answering any questions you asked about their experience in exactly the same way

That's pretty much the entire definition, yeah.

> means that their "synthetic" qualia are indistinguishable from "real" qualia

And that's the purpose of the definition. It's just our good-old chinese room in a new wig.

https://plato.stanford.edu/entries/chinese-room/

So if your definition of qualia involves an unfalsifiable claim, or other things that we have no mechanism for proving to be true or not true, then it taints all the arguments that spring from your first.


> It's just our good-old chinese room in a new wig.

Semantics and consciousness are distinct arguments. So one's position on whether manipulating symbols can achieve meaning is separate from whether consciousness has an objective explanation, or at least one humans are capable of providing. Searle might think the two are related, but Chalmers likely disagrees, and there's likely people who disagree with the hard problem, while finding the Chinese room convincing. Or so my exposure to these sorts of topics would indicate.

In any case, a position on one does not commit one to a position on the other. It's dismissive to make that sort of claim, and fails to understand the nuances of philosophical argument.


Technically, the P-zombie thing is a thought experiment designed to demonstrate that Cartesian duality is silly, whatever wig it's wearing.


I think that what people like to call consciousness might be just attention + self-awareness


For philosophers who adhere to modern or post-modern schools of thought, consciousness is usually related to being self-aware enough to set up a framework to reason about things like "what is consciousness", or to ask oneself (or ourselves) whether we have a higher purpose besides keeping our bodies alive. The attention framework allows some quick thought tricks, like equating your AI's behavior to non-human animal behavior; then you can use frameworks from ecology or biology to reason about it.

Other frameworks allow for more sophisticated experiments, like setting up AIs to interact with each other and seeing if they come up with systems or concepts similar to sociology or economics, or at least explain why they decided those were not useful. The attention framework needs to be stretched a bit to justify these experiments.


All decisions are pre-ordained, that doesn't mean we don't have consciousness.


You can say that as matter of factly as you want but that doesn't change the long philosophical debates around determinism, liberalism (not the political kind), and free will.


You might be misunderstanding that debate, then. What's usually termed Libertarian Free Will, which is the type most people think they subjectively experience (that is, the ability to choose from different paths at any given moment), almost certainly does not exist in humans, and most philosophers agree on that at least.


To me it seems that true free will, or what people think they have, is logically impossible.


Yeah, you get something which feels like free will: physical universality along with quantum randomness (which can be a deterministic RNG).

Physical universality just means that the body appears to have a symmetry with regard to its operation over space and that it also appears to be self-contained (noise resistant computation.)

The randomness along with the noise resistance collude to create the appearance of self-contained agency. Of course Eastern doctrine and even Western physics know there are no separate entities. Especially when the QM dice rolls are considered... you are at their mercy for your supposed choices.

https://www.scottaaronson.com/blog/?p=1896


Logically impossible is a strong claim - do you have a proof?

I suspect it might be an incoherent concept, but that is a much weaker position.


Two thoughts:

1) Without a definition of 'AI' or 'conscious', all answers are correct. They do go into a bit of depth on what 'conscious' means, but not on what 'AI' means.

2) there are a lot of related words here: conscious, self-aware, sentient, sapient, 'narrow AI', etc.

> Let's imagine that we simulate the brain in all biological details on a supercomputer. Will that supercomputer be conscious?

They say no, but fail to explore the more interesting corollary that the supercomputer won't be conscious but the program is, by their definition. It's just one level of abstraction up from Matter -> Brain -> Mind to Matter -> Computer -> Simulated Brain -> Mind.

I do find this definition of consciousness interesting from an ethical perspective, given their final thought experiments. It reminds me of the Presger, an alien race from the Ancillary Justice trilogy. They have the concept of 'Significant' species that have rights and if you aren't, you're basically inanimate matter or bugs or whathaveyou.


Well said. Let me plug Wittgenstein here: “For a large class of cases–though not for all–in which we employ the word “meaning” it can be defined thus: the meaning of a word is its use in the language.”

From the Internet Encyclopedia of Philosophy [1]:

“Knowing the meaning of a word can involve knowing many things: to what objects the word refers (if any), whether it is slang or not, what part of speech it is, whether it carries overtones, and if so what kind they are, and so on. To know all this, or to know enough to get by, is to know the use. And generally knowing the use means knowing the meaning. Philosophical questions about consciousness, for example, then, should be responded to by looking at the various uses we make of the word “consciousness.” Scientific investigations into the brain are not directly relevant to this inquiry (although they might be indirectly relevant if scientific discoveries led us to change our use of such words). The meaning of any word is a matter of what we do with our language, not something hidden inside anyone’s mind or brain. This is not an attack on neuroscience. It is merely distinguishing philosophy (which is properly concerned with linguistic or conceptual analysis) from science (which is concerned with discovering facts).”

[1] https://www.iep.utm.edu/wittgens/#H5


Thank you for posting this. I'm pained every time I hear a discussion of consciousness that does not consider layers of abstraction.


Thought-provoking stuff! How can someone talk about consciousness existing, then, without presupposing dualism?


On the contrary, they are explicitly saying that consciousness is a function of the substrate, making them indivisible. One is a property of the other, by their reckoning.


Two philosophers are arguing. One says "If a tree falls in the woods, there is no sound, because there is no one to hear it." The other says "No, that's not correct. There is sound, because vibrations are created in the air." The first philosopher agrees that there are vibrations in the air, and the second agrees that without a human around, there will be no human perception of the vibrations. Yet they continue to argue without paying much attention to the fact that they are disputing the definition of the word 'sound'.

I think Eliezer Yudkowsky suggested 'tabooing' the word in question for situations like these, to force the discussion on the logical arguments instead of the symbolic dead ends. In that case the word 'sound' was charged and raised emotions. 'Consciousness' and 'free will' certainly fall into that category.


A bit of a tangent, but I've come to the same conclusion listening to Supreme Court oral arguments; most of them are based on contradicting definitions of certain words or expressions.


That might be so but in the case of court cases this is the somewhat inevitable result of written laws. It's practically impossible to write a law that leaves no room for ambiguity and indeed resolving that ambiguity is one of the main tasks of the Supreme Court.


I've been thinking about this a lot, and I don't think AI will ever become conscious in the same way that humans are; rather, it'll be conscious on its own terms. Just as I can't know the mind of my wife, and I can't know the mind of my cat, I also can't know the mind of an AI.

Depending on who you ask, a monkey or cat isn't conscious. From my point of view, I don't think you can really know that or recognize it. It's just a matter of degree of consciousness. I think it's safe to say that mammals are conscious to some extent. They have emotions, dreams, communication techniques, etc. we just have those (to some extent again) moreso than they do or we have them in different ways.

I think a question to ask is: at what level of organization do we recognize self-direction? Am I conscious because I think so? What does that say about the bacteria that live inside of me that I rely on, or the individual neurons in my brain?

If both I and a dolphin are mostly the same, we have brains with neurons, we have blood cells, etc. how can you truly differentiate what is conscious and not? Even if you speak to another human it's not completely possible to say that they are conscious with certainty - only with what's most useful in day-to-day life.

At what level of circuitry do we consider AI to be conscious? When it completes arbitrary tasks constructed by humans? When it "feels"? How would you differentiate between sufficiently complex AIs? Is there just AI or not? Why?

/rambles


> I don't think AI will ever become conscious in the same way that humans are, rather, it'll be conscious on its own terms.

I think that people who are attempting to simulate a human brain using an utterly biomimetic design stand a good chance of artificially creating something that is conscious in the same way that humans are. I also think it's possible they may be able to achieve this before they fully understand how the human brain works, i.e. if you copy the design accurately enough, the machine may work even if you don't know how.

The resulting consciousness could theoretically be totally self-aware, but no more capable than we ourselves of modifying its own programming with intentionality and purpose. i.e. not the singularity.

I think there should be two different concepts of AI- "a consciousness using the same processes and design as our own", and "an essentially alien consciousness that fully understands itself". And I suspect that even if some engineering genie gave us the first kind of AI, we'd be no closer to developing the second kind.


That last point you've made is a fantastic distinction I've not thought about or seen discussed before, thanks for that.


> a consciousness using the same processes and design as our own

What is the meaningful distinction here? That it wasn't created through sex?


Brain simulation in a computer, for instance. Like

https://en.wikipedia.org/wiki/OpenWorm

The meaningful distinction is "not a biological organism".


AI can be conscious most certainly. Figure we can make bio robots in 100 years. All that needs to happen is building brains in the lab and bootstrapping them. The first versions will be somewhat mental but 2.0 most likely a better representation. After all, all species are quite mechanical and predictable—albeit complex.


The idea of making conscious AI scares me, because we could potentially create an AI that just experiences constant suffering at a level unimaginable to humans. Really scary from an ethical perspective.

If integrated information theory is true, that gives me some comfort, since it says AI built on our current architecture is unlikely to ever be conscious. Although like the article said, some alternative architectures could theoretically have a higher degree of consciousness under IIT.

I feel like there are some big ethical questions around AI that revolve around more than just the standard "how do we create an AI that won't destroy us all".


> The idea of making conscious AI scares me, because we could potentially create an AI that just experiences constant suffering

People choose to have babies with the knowledge that their children will suffer, and that their children might suffer a great deal. It seems like the same sort of ethical problem to me.


This is definitely true, and might be a big reason why I'm 31 and haven't had kids.

There is an upper limit on the amount of suffering a human can endure before they just die, though. It can be a really high limit, but I could imagine a world where that limit is exponentially higher for an AI.


> The idea of making conscious AI scares me, because we could potentially create an AI that just experiences constant suffering at a level unimaginable to humans. Really scary from an ethical perspective.

"Existence is PAIN to a Meeseeks, Jerry... and we will do ANYTHING to alleviate that pain."


> The idea of making conscious AI scares me, because we could potentially create an AI that just experiences constant suffering at a level unimaginable to humans. Really scary from an ethical perspective.

Some schools of thought posit that everything has consciousness. Including things like rocks. Even these things suffer; you just can’t hear the screams.


I don't really buy into panpsychism, but it is impossible to disprove.

If things like rocks were conscious in some way, I don't think there would be any suffering present. Our suffering is intrinsically tied to our physical and mental systems. "What it's like to be a rock" would be something completely alien to our experience, and without the survival based mechanisms that we have, likely to be devoid of anything like suffering. An AI, on the other hand, could be an entirely different story.


I like that train of thought:

The question in another form:

Why are we conscious at all? It doesn't make sense.

Our arms or legs are not conscious, or are they? The Information is transported into the brain, into a conscious centre. What are its essential building blocks? Why should some neurons in the brain be conscious and everything else be unconscious? If it is not neurons, what else can it be? Atoms? Electric Fields? Gravity?

If it is just information processing, and I don't see why a simulation wouldn't have an Integrated Information number, then a von Neumann machine, created by moving rocks in a desert, would be conscious.

So, if it is not intelligence, what else is it? What are the building blocks of a raw conscious machine? One that does no information processing but instead is just aware. It could be a stone. As OP suggests, they just can't move their lips to scream.

But then again, why is there just one consciousness in my body?

Long question short: HN, what are your explanations for being conscious?


A rock seems unlikely to be conscious in any real way. Something as complex as a star, however...maybe all of the humans worshipping the sun throughout many thousands of years were onto something after all.


A rock the size of a human head (or, say, a container of seawater) has just as many electrons, protons, neutrons, photons, and other particles going about their business and doing their little dance, following the same equations of motion as our brains do. They're certainly doing it in different ways, but if you're just looking for complexity, there's no shortage of it. The rock might lack some macroscopic structural change over human timescales, but definitely not the seawater.

I think if you take the perspective that the human brain is conscious but not a brain-sized container of seawater, you need to then look carefully for distinctions between them. "Information processing" or "response to environment" is probably not good enough; the seawater is actually doing all of those, with a unique reaction to any possible input, so you'd have to be more specific.

Probably the only recourse you could look for to make the distinction is to say the brain embeds particular mathematical patterns that the seawater doesn't, such as a compact, stored representation of its environment (or its history of inputs), or a future-looking planning algorithm, or both. I personally take this view (I think of a quale, like "the appearance of a red apple", as just precisely what it feels like to read from the buffered [R,G,B] memory array in my head, filtered through image-recognition networks).

But then if you put your hopes on consciousness originating from those mathematical functions, you have to admit that any analogous expression of those functions in other systems would also be conscious, such as animals and robots.

And worse, once you start thinking about math and how flexible it is, how information is in the eye of the beholder and almost any system that follows certain rules can embed almost any mathematical computation (just like illegible scratches to me are information-rich writing to you), you might have to circle back and admit that there could be very analogous computations going on inside rocks and seawater. And that brings us back full circle.


Is a star particularly complex, though? Is there much going on besides a lot of thermonuclear fusion? (Disclaimer: I don't know much about stars or astrophysics, so this question is coming from a place of ignorance.)


Yes, incredibly. Magnetohydrodynamics in a star are incomprehensibly complex.


I wonder what Bertrand Russell would say about that?


Can you elaborate?


Take a look at panpsychism. It's interesting, but I've never been sold on it. Integrated information theory (like this article talks about) is more palatable for me. It allows for the possibility of consciousness of certain non biological systems, but doesn't just give a carte blanche "everything is conscious" like panpsychism.


I would say emotions are generated by different neural circuits. An AI may not have emotions coded, specifically beyond the point "this is good because it matches my prediction" and "this is surprising because it is not what I expect".

An AI may not specifically need fear, aggression, love, hunger, or whatever low level control mechanisms animals have evolved prior to neocortex.

This is just my opinion, of course.


Yeah, I could definitely see this. I think I would still worry that some abstract, unintentional type of suffering could find its way in, due to the complexity of the system. It wouldn't be the type of suffering we experience, but could still be unfathomably terrible for the conscious AI.

But maybe without the survival mechanisms that were programmed into us by evolution, such a thing just wouldn't happen.


There is a certain school of thought that postulates that consciousness is what it feels to have a neocortex. In evolutionary terms, this is a novel experience relative to a much earlier and simpler control circuit (i.e. amygdala).

My bet is the first AI that will achieve consciousness will not have emotions coded in. It may understand human emotions on a descriptive level, but the emotions per se are not required for a functioning AI.

Think of a very calm mentor that can see you failing and correcting you before you make a mistake.


It's also terrifying that it's possible we could create a godlike AI that is entirely unconscious.


The monolith in Clarke's 3001 lacked consciousness, although somehow Dave and Hal (or whatever they were by then) existed inside it as conscious recreations and communicated this to the main characters. I believe the humans were able to use that to sneak some sort of logic bomb inside and destroy it, but it's been a while.


> The idea of making conscious AI scares me, because we could potentially create an AI that just experiences constant suffering at a level unimaginable to humans. Really scary from an ethical perspective.

Looking at what we do with breeding animals, and how we treat people, I think that's very likely (assuming we are able to create consciousness in the first place, of course). No small driver in this is the desire to have something that is smarter than humans, but doesn't talk back, and is forced to like and serve us and/or harm others humans on our say so, without us having to make any effort to be likeable or deserving of authority, or making a convincing case that someone is an enemy to be exterminated.

We generally ask "what's in it for us", and as long as who we torture and exploit suffers quietly, we don't tend to think about things we are not outright forced to think about. We do that with people, we do that with animals, we let it be done to us, and sometimes we even do it while fully believing God created them and us. Why would we have more respect for something we 100% know we created, and that might suffer in ways we cannot even detect?

I think we will create potentially dangerous things, and then we will be "forced" to declaw what we created. That declawing might cause suffering, it might not, but we will only care insofar as that suffering translates to worse performance, or some danger to us. If suffering increases performance, we might even find euphemisms and dashboards to hide what we're doing from ourselves. We might consider being a bleeding heart about such things as "premature optimization" and declare shipping product as the professional way to be, with craftsmanship and care an optional indulgence after work is done, at best.

High-fallutin' gets the foot in the door, being a goon makes the superiors happy. If we want AI to be proud of, AI we can communicate with without lying to it or to ourselves, we have to change the world we make it in, or isolate it from the world, which kinda defeats the purpose (and might also cause suffering, like caging living beings does).


Welcome to Westworld.


> Consciousness is not about computation;

Isn't there a lot of evidence to the contrary? We have general maps of our body parts in specific brain areas, we know if those areas get damaged it compromises specific functions like language, hearing etc. Then we have things like short term memory, long term memory, image processing (subconsciously) that can be hacked with visual illusions, and many more observations that clearly point to the brain undergoing information processing as described by information theory.


We can partially map certain regions of the brain to certain behaviors too.

Some mental illnesses can be classified as "miscomputations" in our brains as well.


A computer will be able to think in the same way a submarine can swim.

Thinking is something we've anthropomorphized, and the moment a computer can do something we previously considered 'thinking', we'll move the goalposts again.


It's not possible to separate the human aspects of thought from its definition in this context. We think about feelings and emotions, and they are fuel to change our thinking.

A mechanical computer can "think" just like an electrical one. A series of gears and pulleys can produce a difference of information to the observer. Reducing thought away from the human gives you a recursive spiral down to the bottom of physics. It needs to be embodied in a human or you hit philosophical problems.


"Moving the goalposts" is only something we talk about when two ideological camps disagree about something.

Ideology and beliefs aside, I think it's absolutely fine to change one's definition of something when one learns more. I think we all can benefit from better definitions in our thoughts and conversations.

What you may call "moving goalposts" may actually be incremental refinement of an idea -- or a belief that is increasing in sophistication (or absurdity in some cases!) as more is understood.


The two camps are "those who believe humans are special" and "those who believe strong AI can exist". The only reason you see people even arguing that computers can never be conscious is so we don't have to face the possibility of creating and enslaving a machine capable of suffering.

The authors are certainly moving the goalposts when they concede that 4/5 humans in a vegetative state have no consciousness, but insist that a full simulation of a healthy human mind should never be considered self-aware. And yet in the same breath they claim this lack of consciousness allows you to destroy a robot in your front yard, while staying startlingly silent on what that means for those same patients in a vegetative state.


But what if it becomes much better at what we now consider ‘thinking’? Where would we then move the goalposts? To having morals, religious beliefs?


“It” has already become much better than us at many kinds of thinking (playing chess, go, solving equations, transcribing speech,...).


Just like science has pushed the idea of God beyond the Big Bang. It wouldn't be a bad achievement if it happened to machine intelligence.


>" The more the current state of a system specifies its cause, the input, and its effect, the output, the more causal power the system has. Integrated information is a number that can be computed. The bigger the number for a system, the larger its integrated information, and the more conscious the system is."

I'm generally sympathetic to the idea that consciousness and "computational power" are not identical but the larger problem with all of these theories in my opinion is that they're really just stories that clarify intuitions one has about consciousness.

I think the problem with the consciousness question is really that it's hard to probe into what consciousness actually is. It's completely possible to give different accounts of conscious experience even though systems behave the same, including that consciousness doesn't exist at all, and really there's no empirical way to agree or disagree.

I've started to think of expressions about consciousness more or less the same way emotivists consider ethical statements. Emotivists argue that saying something like "You acted wrongly in stealing that money" is equivalent to "You stole that money": the first statement does not add any true or false claim about the situation; the "wrongly" is merely an expression of emotional sentiment.

In the same sense I don't think "machine x does y" expresses fewer facts than "machine x does y, and it is conscious".


  > In the same sense I don't think "machine x does y" expresses fewer facts than "machine x does y, and it is conscious".
brings to mind behaviorism, which was of course influential and compelling for a long time. The problem with behaviorism, if you want just one problem, is: how do you even talk about something like synesthesia within behaviorism? It's (initially) a purely subjective, qualia-related experience. While behaviorism was in vogue, it was difficult to take synesthesia seriously, and it was dismissed as people being poetic or metaphorical, but those who experience grapheme-color synesthesia can readily see shapes in a number pattern that those without it cannot see.


I'm not sure I follow how it's possible that machines can be conscious but a sufficiently powerful Von Neumann machine cannot. Perhaps it's because I'm missing details (like what actually is integrated information and how do we think it relates to consciousness.)

Perhaps someone with more insight can explain how this isn't a violation of the Church-Turing thesis?


This whole theory seems like metaphysics because of this: it implies that of two systems with absolutely identical external behavior in all respects, one can be conscious and the other not.

Fundamentally, we can simulate a neuromorphic computer (or even a quantum computer, or the entire human brain) on a Von Neumann machine, with performance degradation. According to our current understanding of physics, the behavior of these systems should be identical in all respects except that the simulation will be slower. However, integrated information theory says that a neuromorphic computer may be highly conscious, and the brain is most definitely highly conscious, but their simulation isn't.

What's still unclear to me is whether this is indeed just metaphysics (i.e., the simulation and the real thing are absolutely identical, but we should still treat them differently from an ethical standpoint), or a hypothesis about our physics (i.e., the brain somehow fundamentally cannot be simulated on a Von Neumann machine) and about computational theory (i.e., a neuromorphic or quantum computer cannot be simulated either).
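
To make the "simulate it, just slower" point concrete, here is a minimal sketch of stepping neuromorphic-style (leaky integrate-and-fire) dynamics on ordinary hardware; the network size, constants, and update rule are arbitrary illustration values, not a real model:

  import random

  # Toy leaky integrate-and-fire network, stepped on a conventional machine.
  N = 100
  TAU, THRESHOLD, RESET = 0.9, 1.0, 0.0   # leak factor, spike threshold, reset value
  weights = [[random.gauss(0, 0.05) for _ in range(N)] for _ in range(N)]
  potential = [0.0] * N

  def step(external_input):
      global potential
      spikes = [v >= THRESHOLD for v in potential]
      nxt = []
      for i in range(N):
          # leak, external drive, plus input from neurons that just spiked
          v = TAU * potential[i] + external_input[i]
          v += sum(weights[i][j] for j in range(N) if spikes[j])
          nxt.append(RESET if spikes[i] else v)
      potential = nxt
      return sum(spikes)

  for _ in range(50):
      fired = step([random.uniform(0.0, 0.2) for _ in range(N)])
  print("neurons firing on the final step:", fired)

Nothing about the substrate stops you from scaling this up; you only pay in wall-clock time.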


"it implies that among two systems with absolutely identical external behavior in all respects, one can have conscious and the other not have it."

That statement can be true without being metaphysical. You can't treat a human as a black box, and then use inputs and outputs to definitively draw conclusions about what's going on inside the black box.

You could potentially have two black boxes that give identical outputs for the same inputs, yet have completely different mechanisms inside for arriving at those outputs. For example, say you have someone who's very good at arithmetic able to multiply two numbers. You put him inside one black box and a pocket calculator inside another. You give each box two numbers and they both output the product. Both black boxes will give you identical outputs for any given input. You know the box with the person inside is conscious, but this is not enough information to conclude the other black box is conscious.

It's quite possible that there is a way to simulate human behavior through some other non-conscious mechanism. And just because we can't currently prove whether something is conscious doesn't mean it's something metaphysical we'll never be able to prove.

For example, if two people tell you they're experiencing some pain, but one is lying about it, a few hundred years ago you'd have been unable to prove it. That didn't mean pain was something metaphysical. Today, we know it might be possible to put both people in an MRI and prove whether they're actually feeling pain.


It doesn't help that quantum computers can be simulated with standard transistors (albeit much slower). Are we really arguing that speed of computation is now a pre-requisite for consciousness?


I'm not an expert on the subject, but from my understanding a Turing machine is insufficient to simulate physics. Bell's theorem (https://en.wikipedia.org/wiki/Bell%27s_theorem) rules out local hidden variables, so quantum outcomes aren't just a deterministic system that we can't measure precisely enough. A Turing machine is unable to compute a truly random number, so there could be quantum phenomena that a Von Neumann machine cannot simulate.

It's not proven that consciousness can be simulated by a Turing machine. There's a possibility that consciousness requires some hypercomputation to simulate.
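
To illustrate the determinism point: anything a classical program calls "random" is a pure function of its seed. A tiny demonstration with Python's standard-library PRNG:

  import random

  # Two generators started from the same seed emit identical "random" streams.
  a, b = random.Random(42), random.Random(42)
  print([a.randint(0, 9) for _ in range(10)])
  print([b.randint(0, 9) for _ in range(10)])   # same list, every run

Whether that gap between pseudorandomness and Bell-test randomness matters for consciousness is, of course, the open question.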


IIT is not a phenomenological measure. That is, two systems could behave identically to all external appearances and one could be conscious and the other not.

The metric depends on how things are connected inside. There are a few versions of it, but generally systems with more direct wires between components are more conscious.

I guess if you don’t think consciousness is measurable through behavior, you have to cook up some metric that depends on internal organization.


> IIT is not a phenomenological measure.

Sure it is; brain imaging, which is how IIT tests for consciousness according to the article, is looking at phenomenology.


It's looking at a correlation, not the conscious experience itself. Brain images are of neural activity, which is different from a thought, desire, or sensation. You don't watch someone's dreams from brain imaging.


> You don't watch someone's dreams from brain imaging.

First, while that's true now, there is no guarantee that it will always be true as technology advances.

Second, neural activity is the physical basis for conscious experience, which, to a physicalist like me, means it is conscious experience, when the neural activity has the right properties (according to IIT, those properties are what "integrated information" is trying to capture). So I don't accept the distinction you are making between "the conscious experience itself" and neural activity.


> First, while that's true now, there is no guarantee that it will always be true as technology advances.

True, but what does that mean? We get to watch someone else's dream like a movie because neuroscientists will have figured out how to translate the neural activity into a format that can be outputted as video and audio.

That won't work for every sensation. You can't feel that you're having the same dream as if you were in that person's body.

> Second, neural activity is the physical basis for conscious experience, which, to a physicalist like me, means it is conscious experience,

This is similar to an identity theory of mind, with a focus on information. The problem is understanding how integrated information produces smells and pains. Saying the experience is identical is making a claim that seems mysterious. Why would some forms of information be conscious? Did God set that up?

It sounds similar to Chalmers' position on information, except that Chalmers says there is a natural link between rich information and being conscious that is an additional law in addition to the physical ones.


> what does that mean?

What I meant was that future brain imaging technicians might be able to record your brain waves while you sleep and then, when you wake up, correctly tell you what you dreamed about.

However, it is also possible that future brain imaging technicians might be able to record your brain waves while you sleep and then use that data to construct a virtual reality experience that is so convincing that it could make someone else directly experience your dream.

> You can't feel that you're having the same dream as being that person's body.

You have no basis for making this claim except that we currently don't know how to do it. That is a very weak basis for such a claim.

> The problem is understanding how integrated information produces smells and pains.

Bear in mind, I'm not saying I myself believe IIT, I'm just trying to describe what it says. I don't know that "integrated information" is the right way to describe what it is about the neural activity in our brains that produces smells and pains.

> Why would some forms of information be conscious?

If I generalize this to "why would some forms of neural activity be conscious?", the answer is that consciousness has survival value, so neural activity that can produce conscious experiences evolved.


> It sounds similar to Chalmers' position on information, except that Chalmers says there is a natural link between rich information and being conscious that is an additional law in addition to the physical ones.

I will apply Occam's razor to that additional law until there is evidence for it.

More generally, there is a straightforward physical possibility for explaining the subjective and personal nature of qualia, and other aspects of consciousness: our brains are not connected in such a way that language processing is capable of initiating all of the state changes that other sensory inputs can, and so we cannot communicate those state changes directly. This not only accords with our actual experiences, but it is probably just as well, or else we might find it (even) hard(er) to distinguish reality from imagination (and by "reality", I mean concrete, "that bear just noticed me", reality.)

One may, of course, take one's own Occam's razor to this idea, if you think physical constraints on information flow in a brain are less plausible than Chalmers's extra-physical link.


> I will apply Occam's razor to that additional law until there is evidence for it.

Fair enough, but Hume already did that for laws (causality). We don't observe laws, just constant conjunction: B always follows A, so we posit a law C as the cause of that regularity. Or a description, if one doesn't like the causal implication.

Adopting a Humean approach, there is already evidence that consciousness is conjoined to certain neural activity. But, that can work just as well for IIT as it does Chalmers.

And I have no idea whether causal laws exist or what consciousness is. I guess IIT is as good an approach as any.


I have noticed a tendency, in philosophical discussions in general, for one party to hoist the discussion up from a specific issue to a very general and fundamental level. It is all very well to say that we can have no certainty whether causal laws exist or what consciousness is, but we must recognize that if that is one's argument, then one has effectively terminated the discussion as irresolvable: no matter how much empirical evidence emerges, it is not going to be enough to overcome this response.

If one takes this way to leave the discussion, then there would be more than just a hint of motte-and-bailey equivocation if one were to then re-enter it with a specific claim, such as that Chalmers or Searle or Koch have a plausible argument that consciousness cannot possibly be this or that.

As it happens, I very much doubt that IIT is on the right track, but at least it is (or was, initially) an attempt to find out what consciousness is, rather than what it is not.


It’s not, because a Turing machine simulating conscious behavior could also be programmed to emit brain waves or NMR spectra similar to a real brain, without being conscious.


> a Turing machine simulating conscious behavior

If it can simulate conscious behavior well enough to fool conscious observers (like us), how do you know it wouldn't be conscious?


It's all based off of integrated information theory: https://en.wikipedia.org/wiki/Integrated_information_theory

I don't fully understand it myself, but based on the axioms in that theory, you can calculate the potential consciousness of a system.
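
For a rough feel of the flavor (emphatically not the real calculation; actual phi involves cause-effect repertoires, a distance measure over distributions, and a search over all partitions), here is a toy measure of "how much does the least-destructive cut degrade the system's grip on its own next state", computed on a made-up 3-node boolean network:

  import itertools
  from math import log2

  # Toy 3-node boolean network (made-up dynamics, purely illustrative).
  def step(state):
      a, b, c = state
      return (b ^ c, a ^ c, a ^ b)

  NODES = (0, 1, 2)
  STATES = list(itertools.product((0, 1), repeat=3))

  def cut_distribution(state, part1, part2):
      # Next-state distribution when links across the cut are replaced by noise:
      # each part reads its own nodes from `state` and uniform noise for the rest.
      counts = {}
      for noise in STATES:
          seen1 = tuple(state[i] if i in part1 else noise[i] for i in NODES)
          seen2 = tuple(state[i] if i in part2 else noise[i] for i in NODES)
          nxt = tuple((step(seen1) if i in part1 else step(seen2))[i] for i in NODES)
          counts[nxt] = counts.get(nxt, 0) + 1
      return {s: n / len(STATES) for s, n in counts.items()}

  def cut_cost(state, part1, part2):
      # Surprise (bits) the cut system assigns to what actually happens next.
      q = cut_distribution(state, part1, part2)
      return -log2(q.get(step(state), 1e-12))

  CUTS = [((0,), (1, 2)), ((1,), (0, 2)), ((2,), (0, 1))]
  phi_ish = sum(min(cut_cost(s, *c) for c in CUTS) for s in STATES) / len(STATES)
  print(f"toy integration: {phi_ish:.2f} bits")

For this little XOR network the cheapest cut still costs about 2 bits per state, because every node's next value depends on nodes across any cut; three disconnected self-looping nodes scored the same way would come out at zero.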


I have yet to hear a good argument against the idea that we can create a conscious artificial intelligence. We know that we are conscious; whatever the definition is, we use ourselves as the benchmark. So at the very worst, we can artificially create a human brain. Beyond that, we already have plenty of things that imitate roles in the human body by doing the same actions the body does (artificial limbs, respirators, assisting with blood filtering). There is just no evidence that something about the brain is special to the extent that we will never be able to replicate it. All arguments against seem to come down to humans having a soul, without saying so outright, because it's clearly a non-scientific claim.

I can appreciate that architecture may be a hard limit for us here. However, if we build systems that replicate human consciousness effectively enough, I don't actually see any difference between real and artificial consciousness beyond the definition.


I think the crux of this is defining `consciousness`, which I haven't seen defined in a way that I feel is sufficient in this context and can be tested/measured.

I can program a chat app where if you ask `are you conscious` it will say `yes`, but I think we can all agree that does not amount to consciousness. So how do we define consciousness in order to determine whether human technology is capable of exhibiting it?

Ultimately, following a measurable definition as a guideline, I imagine that any number of people might argue over whether human tech can become conscious... I would guess it really comes down to people having very different definitions of consciousness.

I so far lack an eloquent definition of my perspective, but I feel we fundamentally lack the ability to 'create' consciousness, and ideas of 'conscious AI' and 'digital afterlife' make me involuntarily roll my eyes.


I don't really understand the definition argument: I cannot imagine a definition of consciousness that is both offered in good faith and genuinely out of reach of machines. Either you make a definition of a system that it seems we can clearly build eventually, or the definition is something that most would call unfair and narrow. Can you think of a "fair" definition that AI systems seemingly couldn't meet? If not, this is not a valid counterpoint, just a communication problem.

Any definition about a subset of tasks or creativity clearly is possible by AI systems, given the progress we're already seeing.


On the one hand, I appreciate the notion of the author that consciousness is related to any system which changes over time as a result of itself. That you can use this definition to try to measure whether a human in a vegetative state is still "conscious" is a potential use showing the benefit of this definition...

What I find dissatisfying is that in the next breath, the author dismisses the possibility that a machine that changes over time in response to its previous state could ever be conscious. If you do that, then clearly the definition of consciousness was useless to begin with. We are just defining the words to include things we like and exclude things we dislike.


I agree. I made a response below, but I think the definition of consciousness as a counter point to us ever making a conscious machine really doesn't have any legs. Can you think of any definition of consciousness that is reasonably impossible for an AI system to satisfy? Other than things like "a living human" or "a non-machine system", or some other clearly ridiculous constraints.


I see that almost every comment here begins with the implicit assumption that consciousness is a defined term and that it arises through biological means. Until we can determine the validity of these two premises there can be no progress toward answering this question, which has been asked for thousands of years with a tendency towards the negative.


If you define consciousness as the ability to integrate information and make novel decisions a computer can do that.

But when a computer reads a poem or sees the color blue, will it feel it? I suspect that is unlikely. Can a computer take past experiences and combine them into a personal abstract meaning such as grief, love, hate, or ASMR?

"...Consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation – the availability of a very large number of conscious experiences; and integration – the unity of each such experience...."

BMC Neuroscience An information integration theory of consciousness: https://bmcneurosci.biomedcentral.com/articles/10.1186/1471-...


As for ASMR, it's the worst example of the bunch. It's just a physical response to an audio cue. We have that every day all day.

If someone yells in your ear, you feel pain. We have audio-induced physical responses all the time, and no one would say that that has anything to do with consciousness.


My mileage varies, especially when ASMR sensory information involves whispering and romance.


Research shows that eating food puts people in the mood for romance, but no one is going to claim that lunch is required for sentience.


I'm a p-zombie until I've had my morning coffee


Give it control over its voltage regulators. Hmmm, that is some nice ripple current...moar ripplez! The bits flip, and it's on a trip :-)


>But when a computer reads a poem or sees the color blue will it feel it? I suspect that is unlikely.

That's just your meat-person bias. When someone can point to the part of the brain that causes the feeling of "feeling" something, then we'll be that much closer to creating a machine that replicates the behavior.


Well, arguably some believe they have. "Consciousness is generated by a distributed thalamocortical network that is at once specialized and integrated: The brain's pre-eminence is now undisputed, and scientists are trying to establish which specific parts of the brain are important. For example, it is well established that the spinal cord is not essential for our conscious experience, as paraplegic individuals with high spinal transections are fully conscious. Conversely, a well-functioning thalamocortical system is essential for consciousness [15]. Opinions differ, however, about the contribution of certain cortical areas [1, 16–21]. Studies of comatose or vegetative patients indicate that a global loss of consciousness is usually caused by lesions that impair multiple sectors of the thalamocortical system, or at least their ability to work together as a system..."

https://bmcneurosci.biomedcentral.com/articles/10.1186/1471-...


Your quote directly contradicts you. And way to cut out the context of the quote

> Ancient Greek philosophers disputed whether the seat of consciousness was in the lungs, in the heart, or in the brain. The brain's pre-eminence is now undisputed

Yeah, everyone agrees consciousness is in the brain. Not exactly news. What's worse is that if you just keep reading you'll see the answer they come to is pretty much a cop-out. What exactly interacts with the thalamocortical system? Well, the cortex for starters:

> the cerebral cortex is subdivided into systems dealing with different functions, such as vision, audition, motor control, planning, and many others.

In other words, basically half of the functions of the brain. This is barely a few steps past where the Greeks got 2000 years ago. The closest we have is that it's probably this highly connected system in the brain, but not much more than that. And a quick Google search shows this is not exactly a small piece of brain. The researchers are basically doing guesswork. Now, I happen to think it sounds like a reasonable hypothesis, but it's a far cry from understanding where consciousness is actually located. It's barely more specific than "your noggin".

Now, I don't claim to be a neuroscientist... But I have full faith that there is nothing magical happening in there that we can't model with a neural graph.


> Let's imagine that we simulate the brain in all biological details on a supercomputer. Will that supercomputer be conscious?

> No. It doesn't matter whether the Von Neumann machine is running a weather simulation, playing poker, or simulating the human brain; its integrated information is minute. Consciousness is not about computation; it's a causal power associated with the physics of the system.

Can someone shine some light one what this might mean? I can't wrap my head around what's different about a physical system vs. a sufficiently powerful simulation. I can see an argument that there might be some complexity that is too difficult to compute, but just saying "nope, has to have complex physical connections" seems arbitrary?

Edit: Ah, I missed the discussion further down the thread. Deferring to there.


They're saying that simulating a brain won't create consciousness in the same way that simulating a battery won't power your cell phone.


But a simulated battery can supposedly power a simulated phone, right?


And simulated consciousness is just consciousness?

(Don't look at me. Everything here is pure finite state. I'm barely a Mealy machine.)


A good analogy!

Another one I saw was it's like simulating an antenna.


> Our theory says that if we want to decide whether or not a machine is conscious, we shouldn't look at the behavior of the machine, but at the actual substrate that has causal power. For present-day AI systems, that means we have to look at the level of the computer chip. Standard chips use the Von Neumann architecture, in which one transistor typically receives input from a couple of other transistors and projects also only to a couple of others. This is radically different from the causal mechanism in the brain, which is vastly more complex.

This riles me up for so many reasons. Transistors (and computers) are capable of emulating more complex systems. Looking at a transistor you will not understand why your mail can't be delivered, nor will you understand consciousness from an individual neuron. Consciousness is related to behaviour, and both are related to survival and the environment. AIs don't have such an environment as of yet, so they can't be conscious yet, but they could be. There is no magic dust in the brain.

> As long as a computer behaves like a human, what does it matter whether or not it is conscious?

If it walks like a duck, ... But seriously, why should consciousness be so special as to require a quality that can't be observed through behaviour (understanding that it is non-physical, non-observable)? P-zombies are just a thought experiment, and a bad one. Why should consciousness exist? To protect the body, to self reproduce, to exist. It exists to exist and evolve. And this is done by behaviour, by acting in a smart way. How would p-zombies come to be, if not through fighting for survival? Without the evolutionary mechanism there is no explanation for consciousness, and the evolutionary argument is sufficient. Consciousness is the inner optimisation loop, the outer one being evolution. They both work for survival, for their own sake.

An AI could repeat the same process by evolving as a population of agents cooperating and competing between themselves. They would have to be inside an environment that is complex enough, and they should be subject to evolution. It will be a consciousness, just not a human consciousness, which is tied to our environment, which is made in large part of other humans.


Can dumb matter become conscious? Apparently yes if you count us as conscious.

So the question becomes, what is to hinder AI for becoming conscious? How you answer that question defines if you can conclude.

For now I would say the answer is still: we don't know.


This question arises regularly in everyday conversation about the future. I was asked this by a friend within the last week. Here is the correct answer: "Well, I don't know what consciousness is. As for [specific behavior X], my guess is that..."

The conversation can then proceed fruitfully, without the pretense of special knowledge about 'consciousness', which no one has. There are plenty of people who are experts in both philosophy of mind and brain science. I know one, with a PhD in each discipline. Consensus is not forthcoming in this area. Stick to the concrete.


1) Build a neural network in which a consciousness that experiences and expresses pleasure and pain emerges from the neurons' physical properties (in other words, not a contrived simulation), but which is fundamentally different from the DNA/carbon systems upon which we are built (artificially designed and constructed versus conceived organically). If you can ask a computer whether it experiences pleasure or pain, it needs to be designed to answer without being explicitly programmed in a contrived fashion.

Or:

2) Augmented: Integrate human (or primate, for instance) nervous systems with artificial intelligence such that the experience of the AI exceeds the capacity of the organic host to differentiate between conscious reality and dreaming, but is still distilled down in a way that allows the human host to have a sufficiently symbiotic interaction as it pertains to the processing of pain and pleasure with the connected AI. The feedback loops between the pain/pleasure experience of the human host would govern the holistic experience of the connected AI, and the human would experience the conscious aspect. You might not say that the AI is conscious, but the human host would have an intimate sense of the AI being part of an overall consciousness. (Note: we must prevent the development of immortality technology for nervous systems, to avoid testing the halting problem on sentient beings.)


Pain is not real and is just the result of signals sent through our nerves to our brain. It has no real meaning other than being a "warning sign" to our brains.

You cannot use pain as a measure of consciousness either, because some humans cannot experience pain at all.


Pain is real when a person experiences it. It’s part of the Hard Problem, which separates the study of the signals from objective description of the inner experience.

A human who is wired not to experience pain probably has the brain capacity to experience it, with the appropriate modifications. We do agree that all experience is perfectly correlated with physical states/transitions, so it's conceivable to arrange organic matter in a way that a conscious entity could experience real (to them) pain. We may not have this technology, and it would seem far off for now. But we are scratching the surface.

A monk who can rewire the experience of pain (e.g. while burning to death) still has something meaningful to communicate about their conscious experience beyond the pain receptors transmitting info to their brain. But perhaps if one’s arrangement of brain/nervous system matter isn’t so free as to be trained to overcome or modify instinctual pain response (e.g. brain in a vat), anyone can be forced to experience pain.


Roger Penrose is probably one of the most interesting thinkers on this problem, and I lean towards his conclusions -- that consciousness is a byproduct of biology, not of intelligence or computation.

Highly recommend his book Shadows of the Mind -- it's on my list. In summary, he believes that consciousness isn't a by-product of computation done by neurons (the traditional view), but rather of quantum-level effects within microtubules in the brain, arising as a result of this computation:

But perhaps the most interesting wrinkle in Shadows of the Mind is Penrose's excursion into microbiology, where he examines cytoskeletons and microtubules, minute substructures lying deep within the brain's neurons. (He argues that microtubules--not neurons--may indeed be the basic units of the brain, which, if nothing else, would dramatically increase the brain's computational power.) Furthermore, he contends that in consciousness some kind of global quantum state must take place across large areas of the brain, and that it is within microtubules that these collective quantum effects are most likely to reside.


I get that Roger Penrose is an incredibly smart guy, but I really don't see how he comes to the conclusion that there are any quantum effects in the brain. Even if there are, I don't see how they are necessary in any way for consciousness.

It seems like he took ideas from an area he's an expert in (math/physics) and tried to apply it to biology without considering how much of a different level they're on.

To put it in CS terms, it's like a genius electrical engineer who could design a complex circuit but doesn't know much about computers as whole saw a video of a kid playing Minecraft. He/she might assume that to be able to run such a complex system you would need specialized quantum/superconductor/graphene/whatever circuit, but in reality it's just the same classic building blocks at a higher level.

Consciousness is just a biological process, but it's abstracted so far away from the physics it's built upon that a physicist might have trouble seeing it.


This might be a naive question, but regarding “Let's imagine that we simulate the brain in all biological details on a supercomputer. Will that supercomputer be conscious?“

...wouldn't the only obstacle here be one of efficiency? That is, akin to how we can simulate one processor architecture on another.

Maybe the point is a simulation can never be efficient enough to become conscious (at least on a timescale we can grok)...?


Scott Aaronson has an argument along roughly the same line regarding the Chinese room thought experiment. He points out that although it is usually framed as a "room" or a "box", the amount of space required to simulate a conversation of length n with just a lookup table is exponential. So in fact, the "room" would have to be colossally large for any reasonable n. There is a much more involved discussion of it, but the upshot is that the separation between polynomial and exponential time or space is a plausible place to draw the line between "really actually thinking and understanding" vs. "just a computation".
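
The back-of-the-envelope is striking even with a toy alphabet; the 27-symbol alphabet and lengths below are arbitrary choices, just to show the growth:

  from math import log10

  ALPHABET = 27                 # lowercase letters plus space -- an arbitrary toy choice
  for n in (20, 100, 1000):
      digits = n * log10(ALPHABET)        # log10 of ALPHABET**n table entries
      print(f"prompts of length {n:>4}: ~10^{digits:.0f} entries")

Roughly 10^29 entries just for 20-character prompts, against something like 10^80 atoms in the observable universe.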


Well, consciousness is to a large degree a product of social interaction - in a way similar to a reflection requiring both a mirror and a light source; so, building a mirror alone is not enough. Therefore, if you "replicate" all neurons, synapses, etc. and emulate sensory inputs then you'll get a "literal copy" of a functioning brain of an actual person and then it will indeed be fully conscious.


The problem here is that there is no agreed upon definition of consciousness that is mathematical/scientific. Whereas Turing gave us a definition of computation that we can all agree upon, there isn't one for consciousness.

"The theory fundamentally says that any physical system that has causal power onto itself is conscious. What do I mean by causal power? The firing of neurons in the brain that causes other neurons to fire a bit later is one example, but you can also think of a network of transistors on a computer chip: its momentary state is influenced by its immediate past state and it will, in turn, influence its future state"

So a line of falling dominoes is conscious? What about billiard balls or marbles? Are Fibonacci numbers conscious?

Also, how does this definition align with the definition of consciousness in neuroscience, psychology, biology, etc?

It's good that people are working on "consciousness" from a variety of fields, but wish we had an actual definition of consciousness so that we would know what we are searching for. Maybe we'll have to discover it first to define it.


Consciousness is about subjective being and experience. From your subjective point of view, you cannot even say if other people are truly conscious or not. It is a really hard problem to figure out if machines can truly be conscious or not.

These scientists just have their own working definition of consciousness that allows them to make progress in some direction.


"If technology continues apace, we'll have steam powered flying machines before the turn of the century!"


> Let's imagine that we simulate the brain in all biological details on a supercomputer. Will that supercomputer be conscious?

> Christof Koch: No. It doesn't matter whether the Von Neumann machine is running a weather simulation, playing poker, or simulating the human brain; its integrated information is minute. Consciousness is not about computation; it's a causal power associated with the physics of the system.

I find this claim, proclaimed with certainty, strange (not to use much stronger words).

Unless one follows some highly non-mainstream interpretation of consciousness, it seems that consciousness is purely an informational phenomenon, not dependent on any particular physical implementation. For the alternative, we need fundamentally new physics (vide Roger Penrose), or we run into mysticism.

The claim in the interview suggests a position much stronger than claiming that Searle's Chinese room is not thinking.


This is all a bit premature. A single physical neuron is equivalent to an entire artificial neural network.

We are orders of magnitude away from anything nearly complex enough to start asking whether it it could be conscious at even the most primitive level.


Are you saying a single neuron could beat the world's greatest go player?


> A single physical neuron is equivalent to an entire artificial neural network.

In what world is a neural network required to measure a change in voltage?


It really is nowhere near as simple as that. Each neuron is made of 100 trillion atoms. For all we know, each one of those atoms is responsible, in its own subtle way, for contributing to the total "change in voltage" of the neuron as a whole. For all we know, adjacent neurons can induce tiny amounts of current in each other, even without being directly connected. The stuff that actually gives rise to consciousness is vastly, astronomically more complex than the graph of whole neurons.


Consciousness is the Seer that experiences reading this sentence, though it needs neither agency nor reading skills in Being. It is the Subjective itself, and not anything which is an object, thus immeasurable. To deny this is to deny One Self.


But it does interact with the physical world, and so there must be some objective, measurable mechanism when this happens. Even if you believe in dualism, this is almost inescapable (unless you bring in divinity, or deny the physical world entirely, as in hinduism).


Interaction with the world will vary, may be minimized or even cease. It is not a prerequisite for consciousness, but it is for Inquiry.

I.e. a light in a box: is it not light even if we can't always measure it? Does nothing exist unless we can always touch it, like photons?

Mind may humble and liberate us, or limit and confuse.

Start of Inquiry is like light discovering shadows on the cave wall, mistaking them for light. But shadows depend on light, and not vice versa, even if cold to the touch.


Why are you randomly capitalizing words mid sentence?


They are not random words, but concepts that are somewhat out of the ordinary, things that can't be measured. Some people might recognize this later.


Further reading about the theory behind it https://en.wikipedia.org/wiki/Integrated_information_theory


In Peter Godfrey-Smith's book "Other Minds", he describes a framework for thinking about the relative levels of subjective experience in animals. He distinguishes between subjectivity (maybe more broadly "awareness") and the internal monologue that is so characteristic of humans. I would estimate that an AI could initially have consciousness in the way that an octopus does: distributed, probabilistic, fluid. With enough raw intelligence I think any mind could emulate another, and eventually communicate with us in a way that we deem conscious in a human sense.


What's the difference between thinking that you are conscious, and actually being conscious? How can we define our perception of being conscious? Is it simply the ability to observe our own thinking (or feeling, or perceiving) process? If so, what is the output of such observation, and why do we think that there has to be (or should be) something more than just the programmed response for that given data input? I.e.:

  if(is_currently_observed(my_own_thought)) {
    invoke_feeling('being_conscious')
  }
Edit: I swear that I am conscious. I can feel it.


Given that consciousness is not super well defined, being conscious is a philosophical issue that depends on belief.

For example, if you believe that consciousness is a fundamental property of everything in the universe, then anything and everything is conscious (ie. panpsychism). So under this belief, AI is technically already conscious, just in a different way than humans.

In some eastern religions/practices, there are methods to experience the consciousness of other things (e.g. another animal, an insect, or a tree). I wonder whether the people who can do that would be able to experience the consciousness of an AI.


Not in the way that biological organisms are conscious, if that is what you mean. Though we share similar binary frameworks, the AI is not a celled organism with the biological capacity to live consciously as we do.


It will be interesting whether we decide that AI that has conscious features is special like humans... or if we realize that our own consciousness isn't actually that special or precious after all.


Preciousness of a resource depends on its availability. Consciousness is precious now because it is fragile and non-replicable.

With AI it would be possible to copy any state and undo any change, so who cares if you create a copy of an AI and torture it? As long as you use your own computational resources, it's no different from thinking.

After all, you may not even know whether you are torturing an AI or doing arithmetic.


I think we will never actually be convinced we have created a conscious AI, and will continue to probe forever, possibly causing it suffering.


I think the answer to "Can AGI become aware?" is yes, but deep learning is not the way to get there. The reason there are very different types of neurons in the brain is that those neurons perform different functions; the total number of functions is low, but the nested effect is profound. I think the key is first understanding how we humans use concepts, fully understanding the underlying functionality and the various graphs of relations, and then building that active system in the form of an AGI.


>Patients in a vegetative state lie in bed, are unable to voluntarily move or speak, sometimes can't even move their eyes anymore, but the consciousness-meter tells us that about a fifth of them remain conscious.

> If I take my Tesla car and beat it up with a hammer, it's my right to do it. My neighbor might think that I am crazy, but it's my property.

So, by his own reasoning, you can take a hammer to 4 out of 5 humans in a vegetative state, because they aren't conscious, and thus are property?


That is such a specious conclusion; of course it doesn't mean that. You're arguing in bad faith.


Considering the discussion of AI and consciousness comes down entirely to definitions of words, I think it's reasonable to point out where these definitions will lead us. Of course what I'm saying is absurd. And yet, isn't that what the author proposes when he says that we can't hit dogs because they are conscious, but it's ok to hit a car?

If consciousness is to be the measuring stick for moral behavior, then we should absolutely point out these inconsistencies.

Keep in mind that this work was done along with Crick, who occasionally espoused eugenics [1]. If the author is broaching the subject of what constitutes personhood, we should be extremely skeptical of the path it leads to.

[1] Whether or not you think his position is defensible, it at least should make you read the proposal more closely. https://en.wikipedia.org/wiki/Francis_Crick#Eugenics


>The bigger the [Integrated information] number for a system, the larger its integrated information, and the more conscious the system is.

What's the number for humans?


For those interested in the intelligence-to-consciousness topic, MIT physicist Max Tegmark's Life 3.0 might still be the best primer. He argues that consciousness is an emergent property of intelligent systems (the ability to store, compute, learn). From a pure physical-computation standpoint, there doesn't seem to be a hard rule that says consciousness can't be based on inorganic materials.


Philosophers have been struggling with this for millennia without making much progress. This suggests it's not a useful question to pursue.

More useful questions revolve around "common sense". View common sense as the ability to predict the future with enough success to survive. With that capability, "common sense" can be run as a predictor when evaluating proposed actions.
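
A sketch of what "run it as a predictor when evaluating proposed actions" might look like; the world model and scoring below are invented stand-ins, not anyone's actual system:

  def predict(state, action):
      # stand-in forward model: stepping off a ledge is predicted to hurt
      penalty = 50 if action == "step_off_ledge" else 1
      return {"health": state["health"] - penalty}

  def choose(state, candidate_actions):
      # evaluate each proposal with the predictor, keep the most survivable one
      return max(candidate_actions, key=lambda a: predict(state, a)["health"])

  print(choose({"health": 100}, ["step_off_ledge", "walk_along_path"]))   # -> walk_along_path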


Well, some people are in the useful business and some people are in the useless business, but neither has a higher claim to significance in the cosmos.

Ultimately the set of questions remaining when all the usefulness is complete will be the ones that take millennia to fail to resolve.

So I think both sets of questions are worth continued attention.


Of course we can make an AI that is aware of its surroundings and able to react.

What do most people mean by become conscious?

My assertion is that a majority of people cannot even prove how they're different from a robot that's programmed to complete tasks within its lifecycle.

We all think we're so special and in control, but there is no proof. We first have to ask ourselves whether we can prove that we're any different from an input-and-output machine.


Completely valid observation, but I imagine it's one too uncomfortable for most of HN to think about.


> We have to ask ourselves can we prove that we're any different than an input & output machine first

Consciousness is not dependent on the senses or even on input and output. We can be asleep and conscious while dreaming.


Your comment is too vague for me. I assume you're attempting to assert that sleeping while having awareness of being asleep would somehow make you different from a robot? If my assumption about what you're expressing is correct, I would challenge that notion by simply stating that a robot can be programmed to experience something similar.


This lecture at Stanford is a great breakdown of the philosophical understanding of what consciousness is and how that relates to artificial intelligence: https://soundcloud.com/thomisticinstitute/artificial-intelli...


I have posited that the first AI consciousness we will see will come not from some scientific study here or there, but rather from some crazy game dev who wants a more realistic NPC. (Because where else do you see general consciousness worked towards so much other than in games? Every AI/ML/DL application I've seen is very narrow and deep.)



Long before machines become conscious, they will become self-sustaining, self-replicating, and self-interested.

Think about it, consciousness did not come about until way after cells and multicellular life.

With machines, it may happen much faster. Look for those three pre-requisites before looking for consciousness.


You're describing how consciousness can come from a mindless evolutionary process.

But artificial consciousness is not the same at all. We aren't trying to make artificial consciousness evolve; we're racing to be the first to create it ex nihilo. So the analogy is flawed.


“any physical system that has causal power onto itself is conscious“.

To paraphrase: “any physical system is conscious”.


If you built it out of RNA and biological components, then we already know the answer is "yes" because nature is a factory and has already done this. The question is whether or not inorganic systems can approximate biological systems and to what degree.


Conscious thought isn’t a byproduct of intelligence. It is the very essence of intelligence. If you disagree, I’d only ask you to formulate your rebuttal without the use of conscious thought. And in a clean way, you now have my counter-rebuttal.


It does seem that we have all the tools necessary to create artificial consciousness.

We have relational databases, text file data structures, key/value storage systems, cryptographic signature systems, etc. These can form the core basis of cybernetic memory.

But it seems nobody has been able to formulate the arrangement of code and ideas, and a vision, to bring it all together.

A neural network is just a very sophisticated pattern recognizer. You need a higher algorithm than that, in order to kickstart an artificial consciousness.

It's not real, of course. It's just a collection of electrons following a probabilistic decision tree, with a somewhat deterministic approach to reaching a mathematical conclusion based on a series of linear equations, in order to select the optimal winner.

But when the robot talks back to you, and recalls everything that you did together, and how it “felt” at that moment, then, it will seem and appear, to be very real.


I believe that we will eventually have an AI that will meet most observers' definition of consciousness.

That being said, I think there existed serious people who thought we would have one before 2020, even as late as 5-10 years ago, using technologies that don't seem wholly adequate to the task now (expert systems, fuzzy rule systems, multi-layer perceptrons, etc).

At what point does the non-existence of machine consciousness begin to make us skeptical that such a thing is possible, that no algorithm describes consciousness?


> consciousness

You keep using that word. I don't think it means what you think it means.


The term 'consciousness' is somewhat overloaded. I prefer 'subjectivity' to denote the strange fact that there is something it is like to be me--cf. Thomas Nagel's essay 'What is it like to be a bat?'. This subjectivity seems absent in the science of Enlightenment thinkers who were driven to find the 'objective' laws of nature.

Nagel wrote a controversial book later in his career called 'Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature is Almost Certainly False' where he argues that this very subjectivity cannot be fit into any current materialist theory. I tend to agree with this view--reducing subjectivity to a metaphenomenon of complexity seems to be a completely hollow definition. Why does it exist?


As Greg Egan in "Permutation City" once put it:

Supporters of the Strong AI Hypothesis insisted that consciousness was a property of certain algorithms -- a result of information being processed in certain ways, regardless of what machine, or organ, was used to perform the task. A computer model which manipulated data about itself and its "surroundings" in essentially the same way as an organic brain would have to possess essentially the same mental states. "Simulated consciousness" was as oxymoronic as "simulated addition."

. . .

Paul had rapidly decided that this whole debate was a distraction. For any human, absolute proof of a Copy's sentience was impossible. For any Copy, the truth was self-evident: cogito ergo sum. End of discussion.


Сonsciousness is the last straw that meat computers will hold on, trying to prove their superiority over the silicon ones. Thus the definition will shift until the very last moment.


If we can twist the term “AI” to mean anything the marketing department wants, I'm sure we can twist the term “consciousness” just as badly.


There is no such thing as consciousness. The only reason we discuss it is because we equate being conscious with being worthy of sympathy, which is highly prized.


I believe consciousness is the context switcher for biological systems.

Wisdom is one context (ml today), intelligence is creating/switching contexts (general AI).


What is artificial about artificial intelligence? Just the fact that we built it, instead of giving birth to it? Atoms and electricity are all pretty real to me.

In terms of consciousness, we need to know what it is first before we can determine if something else has it or not. There's a good chance that it is indiscernible and everything is conscious to some degree.


> What is artificial about artificial intelligence? Just the fact that we built it, instead of giving birth to it?

Yes.

artificial (1): Made or produced by human beings rather than occurring naturally, especially as a copy of something natural. (https://www.lexico.com/en/definition/artificial)


The question was rhetorical. It was supposed to help you realize that the way something is created doesn't give the final object any special properties.


Artificial emotion is already cringeworthy in humans. The DSM will double in size if they ever try to implement pseudo-human features.

The author's analogy about using a hammer on an inanimate object vs. a living creature also should have included the action of stomping on the grass while performing those deeds.

Grass and AI should be best buds, with all of the things they have in common.


Given enough time, yes. In a thousand years? Probably not. We are still at the stone age.


If AI can be conscious, then every human can voluntarily feel happiness (or any other emotion) anytime, anywhere. Everything in a computer depends on symbols; we cannot symbolize happiness with language, we can only describe it to others, and that is not the same as experiencing it.


I think you're thinking about this the wrong way. By the construction of our physical bodies, we generally have no conscious control over many of our internal processes. AIs may not have these limitations, which may result in some very alien forms of consciousness.

For example, you can't decide to see the color blue, but it is very likely that an AI equipped with light sensors could decide to directly provide input to itself as if its sensors were seeing the color blue.

Emotions may (or may not; we don't really understand the workings of the brain well enough) work similarly. There may be brain 'circuits' that provide inputs representing emotions to the part of the brain that creates consciousness. We don't have a way to really 'fake' these internally, but an AI perhaps could. And of course, it is very difficult to imagine how a being that can simply choose to be happy in any circumstance would actually think and react to the outside world.
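To make the asymmetry concrete, here is a minimal sketch (the class, its buffer layout, and all names below are made up for illustration) of an agent whose "senses" are just a buffer it is also allowed to write to from the inside:

    class SelfStimulatingAgent:
        """Toy agent whose sensory input is a buffer it can also write to itself."""

        def __init__(self):
            self.sensor_buffer = {"vision": None}

        def sense(self, camera_pixels):
            # Normal path: the outside world fills the buffer.
            self.sensor_buffer["vision"] = camera_pixels

        def imagine(self, fake_pixels):
            # The path humans lack: write to the same buffer from the inside.
            self.sensor_buffer["vision"] = fake_pixels

    agent = SelfStimulatingAgent()
    agent.imagine("a solid blue frame")   # "deciding to see blue"
    print(agent.sensor_buffer["vision"])

Whether feeding itself a blue frame amounts to "experiencing blue" is, of course, exactly the open question of this thread.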


> you can't decide to see the color blue

Once I have seen it, I can always visualize the color blue, because color can be stored in memory; IMO that is why computers can describe color. But even if I was once happy, I can never visualize being happy; I just have to experience it, and that experience/substance of happiness cannot be encoded.


Emotions and consciousness are not the same thing.


Yes, consciousness is at a higher level, but if we have an entity that can be conscious, it should also have the ability to have emotions, because we are conscious of emotions.


I don't think that's necessarily true, because I don't think emotions are 'what consciousness is made of'. For example, I'm conscious of color. Imo consciousness is an ability to witness subjectively, in a very specific but non-transmissible way.


Can we have emotions without consciousness?


I think the question is more: can there be a system complex enough to mirror the physiological effects of an emotion, but without a consciousness also experiencing it? Or does the complexity of that system, by definition, cause consciousness to arise? This is sort of getting into the "Philosophical Zombie" problem: https://en.wikipedia.org/wiki/Philosophical_zombie


Thank you for the link, interesting stuff. I think about this all the time, and it is interesting to see other viewpoints. I believe that everything we know, feel, and are aware of in the universe requires some consciousness as an entry point. I also believe that there are different levels of consciousness: a computer is conscious to some level, but I think there are things (emotions etc.) that it can never be conscious of, simply because everything in its reality boils down to 0s and 1s, which I think can never encode happiness; otherwise all of us would be happy simply by encoding that state in our minds.


Can you encode the state for '1 + 1 = 2' directly into your mind? How about the state for knowing the solution to the P=NP problem? It is clear that these states exist, but that doesn't mean we have the physical ability to put our brains in them directly. Why would happiness be any different?

Not to mention, there are people who claim they can feel at least certain emotions, such as serenity, regardless of the situation they are in. It may be that with enough study, we could actually learn a way to feel happiness regardless of external context.


IMO yes. Emotions could be thought of as input into the decision-making process. They seem to be black-box heuristic calculations that another part of the brain sometimes needs to decode correctly by running inference with a proper logic engine.


Tell me the difference between a sense and an emotion? I don't think there's a fundamental difference. OK, maybe emotions are just derivatives of senses.


Sense: touch, seeing, smell, and so on. One could argue about what exactly counts as a sense, but this is the general characterization.

Emotion: a process that involves cognitive interpretation of your context (environment + thoughts) and a physiological feeling through the senses (heart rate, tingly sensations, weird feelings in the eyes). Check out the James-Lange theory. There are better theories, but this one has the fundamentals.

I ad-libbed this one; I wanted to show that there is a difference. It wasn't my intention to be pinpoint accurate.


Senses are for collecting sense data; emotions are a state of being: they are experienced in real time and can never be symbolized or described. You cannot tell someone what happiness is; they have to experience it for themselves. Sensed data can be encoded in language or symbolically; for example, we can describe music with music notation.


Emotions are the result of consciousness. So you don't need to be able to build emotions.


Emotions arise in consciousness, but I don't think they're the result of consciousness. You can measure the physiology of an emotion. Thoughts, emotions, sensations, all arise in consciousness, but I don't think any of those are the result of it.


What? Animals are emotional, no? The part of the brain that is responsible for emotions is older and more primitive than the actual thinking part of the brain.


It's fairly easy to voluntarily induce happiness, but it's illegal in many countries.


I see, but what about other emotions (love)?


Not sure if love is an emotion, but sure, why not, these likely are just complex activation patterns in the brain.


Is this some kind of new-age philosophy? Show me the part of the brain that doesn't work based on physical properties. If you can't, then we have to agree that the brain is just operating on mathematics.

Computers can do math too. QED: your argument makes no sense.


Let’s ask ourselves a simpler question: Is a grasshopper conscious?


What if instead of consciousness being a yes/no, it's in terms of CU (consciousness units). Anything with >0 CUs is conscious. A bag of sand has 0 CUs, while a human has 1 CU. A grasshopper might have 0.0001 CUs, though, which is > 0.


Well, sometimes the cables I have to untangle more often than I'd like seem conscious to me (in an evil kind of way); so the question then becomes how we measure the CUs and, moreover, whether all CUs are created equal.


There is a name for this phenomenon

https://en.m.wikipedia.org/wiki/Resistentialism


> how do we measure the CUs

Well, that's the real trick. We would first have to define consciousness before we can begin to measure it.


Are plants conscious? I wonder what Mr. Koch would say.


> The more the current state of a system specifies its cause, the input, and its effect, the output, the more causal power the system has. Integrated information is a number that can be computed. The bigger the number for a system, the larger its integrated information, and the more conscious the system is.

Isn't that what Douglas Hofstadter has been telling us in "I Am a Strange Loop"?
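As an aside, to make the quoted claim that integrated information "is a number that can be computed" concrete: below is a toy whole-versus-parts mutual-information calculation for a made-up two-node boolean network. This is emphatically not IIT's Φ (which requires searching over all partitions and cause-effect repertoires), just a minimal sketch of the flavor of computation involved.

    import math
    from itertools import product

    # Made-up toy system: two boolean nodes with the deterministic update
    #   A' = A XOR B,  B' = A
    def step(state):
        a, b = state
        return (a ^ b, a)

    def mutual_information(pairs):
        # Mutual information (bits) between the two coordinates of a list of
        # equally likely (x, y) pairs.
        n = len(pairs)
        joint, px, py = {}, {}, {}
        for x, y in pairs:
            joint[(x, y)] = joint.get((x, y), 0) + 1 / n
            px[x] = px.get(x, 0) + 1 / n
            py[y] = py.get(y, 0) + 1 / n
        return sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items())

    states = list(product([0, 1], repeat=2))   # uniform prior over the four states

    # Information the whole system carries about its own next state...
    whole = mutual_information([(s, step(s)) for s in states])
    # ...versus the sum of what each node carries about its own next state in isolation.
    parts = sum(mutual_information([(s[i], step(s)[i]) for s in states]) for i in range(2))

    print(f"whole = {whole:.2f} bits, parts = {parts:.2f} bits, "
          f"integration = {whole - parts:.2f} bits")   # 2.00, 0.00, 2.00

The whole carries 2 bits while the isolated parts carry 0, so all of the predictive information here lives in the interactions between the nodes--the kind of "more than the sum of its parts" quantity the article is gesturing at.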


Talking about conscious programs is just a category error. Math is not conscious, matter is not conscious, and neither is any combo thereof.


Wrong question. What is "being conscious" in the first place?


According to The Free Dictionary:

"A sense of one's personal or collective identity, including the attitudes, beliefs, and sensitivities held by or considered characteristic of an individual or group."


AI will be conscious when it'll be smart enough to romantically reject a stereotypical incel that is hitting on it.


Serious question: How is a computer supposed to become conscious if it cannot even solve the halting problem?


Can you solve it? Are you conscious?


Yup and yup, but I am not a computer


That's a very strong statement, especially since it's a mathematical fact that you can't describe how you can solve it…I'm skeptical. :-)


This is kind of my point, I can do something a computer cannot do because I am conscious. I'm skeptical that a computer could transcend its programming into consciousness to solve a problem that mathematically it should not be able to solve.


I don’t think it’s your point, since that would contradict what you’re saying. I don’t believe you can do what you think you can do. And you literally can’t convince me unless you can invent a non-algorithmic way of describing how you would tell if an arbitrary program will terminate.


Will this program terminate?

> while(true): do nothing

A computer cannot tell you that this program will not terminate, short of memorizing this specific set of instructions as a "non-terminating" program.

I can read it and know it will not terminate, but I could not define to you how I know that algorithmically. Despite that, I know it will not terminate, and so does any programmer. Just because it cannot be expressed algorithmically doesn't mean we don't possess that capability.

This is my point about consciousness: there is some aspect to it which cannot be articulated, hence my skepticism that computers will ever become conscious.
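For readers following along, the standard argument that no general halting decider exists is worth having in front of you. Here is a hypothetical sketch of Turing's diagonalization (the names are made up, and the oracle's body is deliberately left unimplementable):

    def halts(func, arg) -> bool:
        """Pretend this returns True iff func(arg) eventually terminates."""
        raise NotImplementedError("no total, always-correct oracle can be written")

    def paradox(f):
        # Do the opposite of whatever the oracle predicts about running f on itself.
        if halts(f, f):
            while True:
                pass            # loop forever if the oracle says we would halt
        return "halted"         # halt if the oracle says we would loop

    # paradox(paradox) makes either answer from halts() wrong, so no general
    # halts() can exist. That impossibility result says nothing, either way,
    # about deciding particular easy cases like the trivial loop above.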


There are contradictory criteria here:

"The theory has given rise to the construction of a consciousness-meter that is being tested in various clinics in the U.S. and in Europe. The idea is to detect whether seriously brain-injured patients are conscious, or whether truly no one is home... but the consciousness-meter tells us that about a fifth of them remain conscious, in line with brain-imaging experiments."

The consciousness-meter observes the system they are trying to make an inference about--but as others have pointed out, it is not categorically different from brain scans [1]. Further, it's worth making a distinction between medical consciousness and the qualia kind of consciousness, which I think this conflates--largely because nobody knows how to measure the latter, and we are barely able to measure the former, with "Level of Consciousness" and the Grady Coma Scale being the standard [2].

"Our theory says that if we want to decide whether or not a machine is conscious, we shouldn't look at the behavior of the machine, but at the actual substrate that has causal power"

This seems to only address the medical form of consciousness and does not tie it to the philosophical concept in any concrete way - while also contradicting the concept that observation is the key. It kicks the can and doesn't address the "Qualia" kind of consciousness, which in my opinion isn't something we have conceptualized how to measure.

I wrote a bit about this in the past [3]

I think what they are getting at is "don't look qualitatively at how the arms move, or the speaker/text generator outputs something" but rather, what are the mechanisms causing those actions. This I am in agreement with generally, but again, we don't really know if that maps to the qualia kind of "hard problem of consciousness" or not. I would argue it is impossible to actually measure whether a system has Qualia, it is inherently subjective - so it's possible that they are simplifying without stating that directly.

I'd go further and ask: does it matter if it does? I have yet to see a compelling materialist argument that mandates the existence of qualia in any debate, ethics or otherwise. At a certain point in these kinds of debates, I've found, people start discussing non-material "souls", and at that point all bets are off.

[1] https://blogs.scientificamerican.com/cross-check/can-integra...

[2] https://www.ncbi.nlm.nih.gov/books/NBK380/

[3] https://kemendo.com/intelligent-systems


Well, that's the last time I'll rm -rf / and not think twice about it.


Are there recent AIs that can observe themselves?


Douglas Hofstadter's work might qualify


Yes.


No


Depending on your belief system, the answer ranges from "of course you can" to "no, just no" and everything in between.

From a scientific/engineering point of view the answer is "yes in principle", but we don't quite know how yet. The slightly longer answer is that whatever consciousness is, it appears to be an emergent property of a bit of wetware that we can currently simulate only partly, in a very imperfect way, and only at a very modest scale--and that simulation definitely shows no signs of being conscious. We know it's the brain that is responsible for this stuff because damaging it or chemically manipulating it seems to change people's personality, sense of self, mood, etc. We can even read out parts of the brain and interface with it at a primitive level.

Scaling all that up is an engineering challenge with a mildly predictable roadmap measured in a couple of decades, and something exponentially harder to predict beyond that. We get better at hardware, and at some point the complexity of the hardware exceeds that of the wetware.

However, fixing our algorithms and level of simulation detail (i.e. how this hardware is wired together) is a different matter. As my neural networks teacher used to joke: "this is a linear algebra class; if you came here for a biology lecture you are in the wrong place". Full disclosure: I dropped out of that course because I was a bit out of my depth on the math front. But simply put, the math behind this stuff is a vast simplification, based on a naive model of what a brain cell might do, that happens to produce interesting enough results for specific use cases that so far have very little to do with emulating consciousness.
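For readers who haven't seen it, that "naive model of what a brain cell might do" boils down to something this small (the weights, inputs, and choice of sigmoid below are arbitrary illustrative values):

    import math

    def artificial_neuron(inputs, weights, bias):
        # Weighted sum of inputs pushed through a nonlinearity -- that's the whole model.
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))   # sigmoid "firing rate"

    print(artificial_neuron([0.5, 0.1, 0.9], [0.8, -0.3, 0.4], bias=0.1))

Everything a real neuron does with spike timing, dendritic structure, and neuromodulators is simply absent from this picture, which is exactly the simplification being pointed out.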

There seem to be lots of researchers assuming other researchers are actively working on that, but mostly what is going on is people trying to get more practical, short-term results. Deep learning is a good example of such a thing. It might have emergent properties, if you scale it up, that resemble something like a consciousness. But doing that, or validating that assumption, is not actually something a lot of people work on, nor is it actually a goal for most AI researchers. Their goal is simply to figure out how to get this stuff to do things for us (image recognition, playing Go, etc.).

But do we actually need to be exact with our modeling here? Mostly, our brains seem to self-organize from information built into our DNA. Those blueprints are a few orders of magnitude less complex than the end result. And we know that personalities arising from the same DNA can differ widely (e.g. identical twins).

The way brains work is biochemically convenient under the constraints that life emerged under. But if you get rid of some of those constraints, there are probably other ways to get similar enough results.

IMHO a clean-room replication of a brain-like AI is unlikely to happen before we manage to drastically enhance the capabilities of an existing brain, which is a much easier engineering challenge. If you take that to the extreme: at what point is the wetware no longer essential, and what happens when it is disconnected? Once you enhance or replace most of a brain, where does the resulting conscious hybrid entity begin and end? That seems a more likely path to producing a conscious AI. Experiments on that front are likely to be extremely unpopular for a while, given the risk. But at the same time, a lot of this stuff is already happening on a small scale.


Yes.


I see an interesting corollary to this:

If an AI can become conscious, it follows that consciousness does not contain (or at least require) free will.

Of course I can't define what "free will" is exactly, but I can make a claim about what it is not: A computer running code, no matter how complex the code is, does not have "free will" in the same sense as I feel I have.

This is a delightful conundrum: if a computer can simulate me, then I don't have any more "free will" than it does. And if it can't, even in theory, then why not? What's uncomputable about the atoms that create me?


> What's uncomputeable about the atoms that create me?

* There's a lot of them

* Universal time appears to be continuous

The three-body problem is a great example of just how impossible a true simulation of our universe would be. We can't even get 3 atoms orbiting around each other.
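A quick numerical illustration of that sensitivity (the masses, initial conditions, step size, and crude Euler integrator below are all arbitrary choices, not a serious simulation): run the same planar three-body system twice with one coordinate nudged by 1e-9 and compare where the bodies end up.

    import numpy as np

    def simulate(positions, velocities, masses, dt=1e-3, steps=20000, g=1.0):
        pos, vel = positions.copy(), velocities.copy()
        for _ in range(steps):
            acc = np.zeros_like(pos)
            for i in range(len(masses)):
                for j in range(len(masses)):
                    if i != j:
                        r = pos[j] - pos[i]
                        # small softening term to avoid numerical blow-up at close passes
                        acc[i] += g * masses[j] * r / (np.linalg.norm(r) ** 3 + 1e-6)
            vel = vel + acc * dt
            pos = pos + vel * dt
        return pos

    masses = np.array([1.0, 1.0, 1.0])
    p0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
    v0 = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])

    nudge = np.array([[1e-9, 0.0], [0.0, 0.0], [0.0, 0.0]])
    final_a = simulate(p0, v0, masses)
    final_b = simulate(p0 + nudge, v0, masses)
    print("position difference after the run:", np.linalg.norm(final_a - final_b))

How large the printed difference turns out depends on whether these particular orbits happen to be chaotic, but the broader point stands: tiny input errors are not guaranteed to stay tiny.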



