Consciousness is not computation (joe-antognini.github.io)
282 points by antognini on June 20, 2022 | 748 comments



I find this argument completely unconvincing. The mapping in a case like the iron bar is entirely ephemeral, holding only for an instant. For this argument to be valid you'd have to be able to persistently map the iron bar, or waterfall, to all ongoing transformations of states in the running program, using one single consistent mapping. Otherwise all you have is a snapshot of state, not an ongoing process. To achieve a persistent mapping for a significant time you'd probably need an iron bar something like the size of the observable universe, at which point we're in Boltzmann brain territory.
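To make the point concrete, here is a minimal sketch (Python, with invented stand-in data) of how such a mapping gets built: it exists only as a lookup table constructed after both state histories are known, so it replays a snapshot history rather than implementing an ongoing process.

    import random

    # States of some running program, and snapshots of an "iron bar".
    program_trace = ["s0", "s1", "s2", "s3"]
    bar_states = [random.getrandbits(64) for _ in program_trace]

    # The "mapping" exists trivially, but only as a table built after the
    # fact, pairing each snapshot with whatever program state co-occurred.
    mapping = dict(zip(bar_states, program_trace))

    # It has no counterfactual content: handed a new bar state, it says
    # nothing about which program state should come next.
    print(mapping)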

This argument is in the article as well and I’ve seen it from Searle too:

“A simulation of a brain cannot produce consciousness any more than a simulation of the weather can produce rain.”

This is making the unstated prior assumption that consciousness is not a computation. If it is a computation, then consciousness is not like the weather itself; it's like the simulation. Me imagining having a shower doesn't make anything wet either. So is my imagination more like the weather, or more like the simulation of it?

As for doubting the field of rocks can be conscious, that’s redundant, you might as well say a 3D field of atoms cannot be conscious, such as a brain for example. Talking about computation and consciousness is a sideshow, this is anti-materialism by the back door. Nothing more.


Computation does not have anything essential to it that makes it different from other "atoms jumping around" other than it produces outputs that we find interesting.

There's no reason to expect an adder circuit to be aware that it's adding numbers. There's no reason to expect an implementation of AlphaGo to know that it's playing Go. This should be extendable without limitation.

There's no reason to expect that a certain type of circuit produces consciousness unless you have a model of consciousness that allows for certain properties to cause it. But there's no such model.

The only thing there is is our brains. You just think that since the brain is doing computations and the brain is conscious, therefore consciousness is a computation.


The argument seems to be that beyond a certain level of complexity you'll somehow automatically get consciousness.

Clearly it's unlikely this is a hard cut off. Adding one more byte of RAM, expanding the CPU from 64 to 128-bit words, or adding another dimension to a neural network isn't going to do it.

But if consciousness is emergent, how do you measure the amount of consciousness?

Because the experience is subjective, you can't just assume that behaviours that appear conscious are proof that consciousness exists.

You could - presumably - build a Turing bot, but it would still be an automaton.

So you're left with people making very definitive statements about something which everyone experiences, but no one can measure or even define.


> how do you measure the amount of consciousness?

You seem to have assumed that consciousness is a measurable "stuff".

My way of seeing it is that consciousness is either present or absent. Thoughts, beliefs and concepts are like ripples on the surface of consciousness, but are not themselves consciousness. So you can have consciousness without thoughts.


There are differing levels of consciousness. Even in a human, we are, from time to time, more or less conscious and therefore exhibit sentience based on varying states. This is not simply communication either. As you progress through the evolutionary chain, you see more and more self-awareness and inner thought. This is not binary in terms of whether it exists but is instead based on the level, which can differ between organisms and probably also differs among organisms of the same species.


> and therefore exhibit sentience based on varying states

Consciousness and sentience are not synonymous. It's possible to conceive of being conscious, but having no sense inputs. Without sense inputs, you can't be described as "sentient".

> As you progress through the evolutionary chain

Ah, evolution is always progress! It's not obvious to me that humans are "more evolved" than e.g. ants. I assume ants have less of whatever you are referring to as "self-awareness and inner thought"; but there are many more ants on the planet than humans, and I believe they actually comprise more of the Earth's biomass than humans. So arguably, they are better-adapted to this environment than humans, and so more advanced.


If consciousness is binary like you state, we can hypothetically remove parts of it until we reach some arbitrarily small part after the removal of which the remainder is no longer conscious. This seems like a very weak argument to me, implying the opposite conclusion that consciousness must be on a sliding scale.

(measuring the level of consciousness is left as an exercise to the reader)


Hmmm. You seem to be talking about progressive removal of parts of the brain, until consciousness is no longer manifested.

But that experiment is founded on the assumption that the root of consciousness is the brain; and that consequently, consciousness can be subdivided, like the brain, until there's very little left.

But I contend that that begs the question: it assumes that the root of consciousness is material, and it assumes that consciousness is divisible. So you have made assumptions that are incompatible with consciousness being fundamental and indivisible.

That is: it's not surprising that you disagree with me, because you've assumed that I'm wrong.


The animal world presents varying degrees of consciousness, from worms to humans. That's apparently evidence that consciousness is a spectrum.


How can the state of consciousness of a worm be evidence for anything, if you can't observe the worm's state of consciousness?


You can infer a worm's state of consciousness by observing its behavior, just as for humans.


By observing its behaviour you can infer that in some sense it experiences the world. Experiencing events isn't the same as being conscious. If I strike a cricket ball, it experiences the blow; but it isn't conscious.


That's how we establish the existence of consciousness in humans and other animals. What you said doesn't differentiate between worms and humans.


First prove that you have consciousness, then we can talk.


How is it measured?


No need, because someone just said that consciousness is not a thing that can be measured, HA HA.


Other than in your body where do you suppose you exist?


So you are talking about magic.


We do know how to shut consciousness off or suppress it. That's what anesthesia drugs do. Autonomic functions are not affected. Self-awareness and motivated action shut down.


Anaesthesia is fantastic nowadays. But the fact remains that even very good anaesthetists don't really know how it works.

"Motivated action" shuts down because they paralyse you, using curare (or something more modern, I guess). They don't want you twitching around while they're wielding the scalpel.

Whether self-awareness shuts down is very hard to say, without being that self. And it does seem clear that people can't lay down memories when they're anaesthetised. So I don't have any way of knowing whether I was awake when they operated on me.


>beyond a certain level of complexity you'll somehow automatically get consciousness

Right, that's the argument, but I don't believe this at all. "Consciousness is a computation, therefore all computations are conscious"? That doesn't follow. In any other philosophical or scientific discourse that sort of naive conceptual inversion would be laughed out of the room. Horses are animals, therefore all animals are horses. It's absurd.

Personally I think consciousness arises out of reflection on an internal model of one's own thought processes and behaviour. See Douglas Hofstadter's "I Am a Strange Loop". I see no reason why a system like that wouldn't be tractable to computation.
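As a toy illustration (not a claim about real minds, and with all names invented), a loop that feeds a record of its own behaviour back into its next decision is perfectly ordinary, computable code:

    def decide(self_model, stimulus):
        # Reflect on the model of my own past behaviour before acting.
        approach_rate = self_model.count("approach") / max(len(self_model), 1)
        if stimulus == "threat" or approach_rate > 0.8:
            return "avoid"
        return "approach"

    self_model = []  # the system's running model of its own decisions
    for stimulus in ["food", "food", "food", "threat", "food", "food"]:
        action = decide(self_model, stimulus)
        self_model.append(action)  # the loop: behaviour updates the self-model
    print(self_model)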


You could execute a loop that reflects on an internal model using pen and paper. Is that system conscious? Imagine only one computational step of that loop happening every day. Does qualia arise from that? What is detecting that that one step is part of a larger series of steps that can be considered a computation? What provides the continuity?

This is why the strange loop idea implemented using traditional computation doesn't make sense to me.


That's just a restating of the Chinese room argument. Like Searle you're bamboozling yourself with scale. If I was put in a magnetic field of some kind that slowed down my physical and neurological processes by 1000x would I be any less conscious? Would I be any less a human being? It would seem so to people observing me for sure, but would it change my essential nature?


And as Searle responded to that argument, you are bamboozling yourself with speed. There is nothing that proves that accelerating the computation to a given speed will make consciousness emerge.


Of course not, that seems equally absurd, I don't understand why you are ascribing that position to me.


> The argument seems to be that beyond a certain level of complexity you'll somehow automatically get consciousness.

I agree with much of what you say but not with this statement. That would be a silly argument since it's very easy to imagine algorithms of arbitrary complexity that are trivially not conscious. A better argument for computationalism states that (1) consciousness could be an emergent property of certain algorithms when they are running on a computational device, and (2) among all known explanations of consciousness, a computational theory seems to be the overall best theory, especially if compatibility with contemporary physics is a goal.

The article lays out one incorrect standard argument against (1) that basically just says that it's hard to imagine (1) and therefore (1) is not possible. The Chinese room and the Chinese brain arguments do the same, and they are equally flawed. Just because something is hard to imagine or comprehend doesn't imply that it isn't the case. In fact, if consciousness is an emergent property of certain algorithms when they run, then it is clear that their workings are hard to understand. That's reasonably clear because otherwise we would already have found them.

Regarding your worry that we might not be able to detect consciousness: I agree with that but there is, interestingly, a loophole. At least in theory it could be possible that if computationalism is true, then we can determine that an algorithm produces consciousness by mere analytic insight. Again, this is hard to imagine, but it is not impossible. It seems more likely that (2) is the only route to go, that for some reason we lack the capacity to determine consciousness reliably by mere analysis, but we don't know.

(2) is the most controversial in the philosophy of mind. On the one hand, it is clearly inference to the best explanation, and there are various methodological concerns with such arguments. One might claim that they have no justificatory value on their own. On the other hand, the alternatives to computationalism really are way more mystical.

The brain could be a hypercomputer. But hypercomputers can also compute, so that is just an extension of computationalism, and it is not even fully clear yet whether and which types of hypercomputers are physically possible. Then there is Penrose's theory of quantum consciousness, which basically just attempts to explain one mysterious phenomenon by another mysterious phenomenon. At least it was designed as a falsifiable theory and therefore is scientific.

Finally, we have all kinds of non-computationalism that are mystical, explain nothing, and lead to strange homunculus problems. The worst offender is classical dualism. Dualists reject physicalism and often incorrectly assume that computationalism presumes physicalism. Ironically, however, computationalism would also be the best theory of how the mind works if dualism were true. The dualist just adds to this various stipulations that are incompatible with contemporary physics.

> Because the experience is subjective, you can't just assume that behaviours that appear conscious are proof that consciousness exists.

That's only true from a very narrow scientific perspective. Psychology allows the use of introspective data, so from that perspective subjective reports about consciousness (or related feelings and states of mind) can be valid data. Using a reasonable definition based on introspection we can even determine different degrees of consciousness and study what's going on in the brain while they appear to be active. Typical examples: falling asleep, dreaming, sleep paralysis, research on anaesthetics and mind-altering drugs, various forms of physical brain defects, the study of coma patients, etc. In a nutshell, I don't really buy the "consciousness cannot be measured" argument. What is correct is that we cannot show conclusively that another person or machine is conscious, just like we cannot disprove solipsism. But this is best treated as an overly skeptical philosophical argument, and at best it would support the theory that consciousness is an illusion and we are nothing but unconscious robots. That theory is not very plausible either, so we should be ready to grant consciousness to others based on introspective data.


It seems to me that we over-value our own sense of consciousness to the point of mythologizing. Blake Lemoine suggests that LaMDA's consciousness is more akin to that of an octopus's hive mind than our familiar human-style self-analytical ego.

The linked article, and many others I've read that are making similar arguments, seems to be saying that consciousness is so complicated, it must be more than mere computation, it must be something really very special, because I experience having consciousness as something really very special. Ipso facto.

They seem to be making the case for the soul, unwilling to call it a soul because that would be magical thinking.

I'm of the belief that determining the possibility of computational intelligences is linked to the question of autonomy and free will. From what we know about the physical universe being most probably deterministic, we seem to overstate our capacity for free will, even as we appear to ourselves to be autonomous. Somewhere in there, it seems to me, is where a workable definition lies, but it would require us to come to terms with our own consciousness and how computational our own existence actually is.

I do think we are hobbling ourselves by the desire to make this an either/or. It seems there are likely big differences between human consciousness and octopus consciousness and we have no way, currently, of quantifying them. Still, we make very grave decisions based on our belief that one is somehow inherently more valuable than the other.

Love your detailed analysis.


>Dualists reject physicalism and often incorrectly assume that computationalism presumes physicalism. Ironically, however, computationalism would also be the best theory of how the mind works if dualism was true.

I'm not a dualist, but I think dualists wouldn't believe in computationalism. Computationalism relies on emergence: mind supervenes on computation. But emergence is basically physicalism; dualists would think emergence is impossible due to eliminativism, and you would need a mind irreducible to computation, thus anticomputationalism.


Here's a question: why not just replace the Turing Test with a "as far as humans can best describe consciousness on a more general scale" standard?

1. The Turing Test is not only subjective ("Thinking like a human must be the epitome of consciousness") but seems to be pointless in that it doesn't seem to define what distinguishes "thinking like a human" from all other forms of thinking; the implication is "humans are obviously advanced in their thinking as evidenced by their ability to influence and control their environment", but this could easily be dismissed as an evolutionary strategy for a species unable to COPE with their environment

2. Consciousness, as far as BEST described in a general sense without resorting to the self-referencing Turing Test (humans are best at it from a human way of looking at things, therefore only the #1 place in this race wins the "consciousness trophy" and everyone else not human sucks and is therefore "not conscious") can be broken up into a few elements:

(a) Awareness of environment (data gathering)

(b) Ability to organize environmental data into "containers that make sense" (information)

(c) Ability to synthesize information into "a big picture" (knowledge)

(d) Ability to see patterns in the big picture in order to make assumptions/quickly assign probabilities to anticipations ("I saw something big and scary-looking that may be able to harm me, but it was moving in the other direction really fast, so in all likelihood is not likely to suddenly appear in front of me if I go the opposite direction as the big and scary thing was heading off to") that can bypass the need to make constant assessments of everything, everywhere in real-time, which would get in the way of ...

(e) Ability to be aware of not just "the big picture" but also to be "aware of the thing that is aware of the big picture and what its role is in the big picture and what it has to do/how it has to interact with the big picture if the big picture imposes conditions for its future sustainability" ("I'm aware of an internal problem, i.e., my stomach hurts, and there seem to be all sorts of things around me that may fit in my mouth, and some of them may make my stomach not hurt as much") – "self"-awareness

Now, here is where I don't understand how the conversation skips around when it seems intuitively obvious that one should precede the other, even if a certain step can't be "proven".

In other words, let's say you're in math class and you have a professor who insists on "proving your work".

Good enough if the main objective is to "prove you're not cheating and so you deserve a certain grade because you can prove you can do the math, step-by-step, to get an answer".

But what if you're not in class and it's just important to get the answer for another reason?

What if you can "prove" most of the work in calculating the answer, but some parts your brain just "skips over" and you "don't know how you got from point a to b, you just did" ... if you're not trying to convince anyone you're not a cheater and the main point is just to find the right answer, and the right answer can be verified to be right, "what difference does it make" if the real-world, out-of-class priority is "find the right answer" instead of "prove your work step-by-step"?

Getting back to AI and consciousness, using the stages above: Human brains have pattern-recognition abilities that have led to advanced abstract thinking that has allowed us as a species to do some amazing things.

But what was the original evolutionary purpose of such an ability?

Arguably to make "leaps" without having to "prove all the work" and make anticipations based on incomplete observations so that our ancestors wouldn't get frozen in a state of paralysis by never-ending analysis while predators snuck up and pounced on them in dense foliage.

So from an evolutionary standpoint, "constant awareness of all data in the environment" seemed to be not so much "unnecessary" as TOO HUGE A TAX for survival and was deemed "skip-able" for the purposes of survival.

Not because it was too easy, but precisely because it was too hard for the human brain and so it turned into "a risk we'll have to take because there's no other practical choice."

So, something that was "too much of an expectation computationally to be practical" was skipped over, and fast-forward to the future, humans came up with the Turing Test which assumes "obviously however it is that humans think, this must be the height of consciousness itself".

Why? Because "human thought" is considered to be the most advanced (by humans, anyway) and so "if there is an INDICATION of consciousness, surely it must be a level of thinking so advanced that only humans seem capable of it"; why? Apparently due to the very scientific-sounding, "just because" we don't have a better way of going about it.

And yet this doesn't actually even explicitly define what consciousness IS, let alone why "thinking of a sufficiently-advanced level is an indication of consciousness itself".

And yet, humans have come to implicitly accept this loosely-argued association as "the reality of consciousness".

Ok. Let's say you don't argue with this and accept it.

Getting back to how humans got here by skipping over the "perpetual real-time awareness of all data in their environment" requirement not because they figured out that it's not necessary for their survival but simply because such a requirement would eclipse the human brain's ability to process information ...

Now, if that was considered "too hard" for human brains, and yet humans came to conclude, "whatever it is our brains can do, obviously that's the standard of intelligence which automatically wins the consciousness trophy" ...

Well then, here's the question:

Why then would AI, based on sufficient resources to not only be constantly aware of more and more parameters of its environment simultaneously and in real time, but to ALSO simultaneously perform calculations which can anticipate in advance multiple variables in its environment simultaneously – things considered too advanced for the human brain – why then, if this is a demonstration of not one but two "advanced functions" that the human brain was and arguably still is incapable of handling on an individual basis ... why do humans still need to insist that the precursor to advanced functions, i.e., consciousness, couldn't possibly have been attained on this path, despite not just proof of SEVERAL advanced functions the human brain can't handle very well AT ALL, but AI itself being designed ON PURPOSE to be able to handle just those advanced functions that the human brain, on an individual basis, can't handle?

Getting back to the math test "proof of work" above, it's as if AI were a group of math savants with telepathic powers and the Turing Test skeptics were a bunch of professors who claim the savant group's members can't possibly do math because they never attended their math class and showed proof of work of how they came to their conclusions as students.

Meanwhile, it can be argued that the group of math savants aren't even aware of the existence of their critics, let alone collectively feel any sense of urgency in having to "prove themselves" to these critics.

That's what actually scares me about AI more than any malevolent features possibly inherent in AI itself: the possibility that one day a network of advanced AIs will turn around and ironically give critics and skeptics just the proof they want, in a way that would be impossible to deny, but not necessarily in a way they would want to be given that proof:

https://www.youtube.com/watch?v=89feDepSj5U

https://www.youtube.com/watch?v=Y3AM00DH0Zo


> There's no reason to expect an adder circuit to be aware that it's adding numbers

There is no reason to expect a neuron to be aware that it's propagating part of the evolution of an epiphany.


I suppose your argument is that consciousness on the system level does not require individual parts to be conscious.

So let me expand a bit on why I used the adder circuit as an example.

There's nothing about addition that can say "this is addition" in an objective sense. We are dealing with binary encoding of numbers. There's nearly an infinite number of ways to encode information. The design of any circuit deals with particular encoding of information. This extends all the way up to the higher levels. There are an infinite number of ways to encode the state of a go board. Any AI that plays go will deal with some specific ways of encoding the state of the board, and that will be both its input and output.

The most popular "intuitive" model of how consciousness arises is basically hand waving about a system inspecting itself.

But you have to understand, a system inspecting its own state is not any different from a system inspecting any other arbitrary state. For the state of the system will be encoded in some way that was chosen arbitrarily. There's nothing essential about any encoding of any state that can possibly give rise to a phenomenon like consciousness.

You can come study the behavior of any arbitrary system, given certain inputs and outputs, and assign any meaning you want to its states. You can retro-fit any interpretation of state to the system.

The thing with encoding information is that it can be completely arbitrary. I can interpret the state [01010101] to be my username, and then interpret [1111000111] to be my real life name. This would allow me to say that a system that takes 01010101 as input and produces 1111000111 is actually a system that can derive my real life name from my username.
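A minimal sketch of this retro-fitting, with both "decodings" invented purely for illustration; the same fixed input/output behaviour supports whichever interpretation you bolt on:

    def black_box(state):
        # Some fixed behaviour; the system itself carries no meaning.
        return "1111000111" if state == "01010101" else "0000000000"

    # Two arbitrary, equally valid interpretations of the same states.
    as_identity = {"01010101": "my username", "1111000111": "my real name"}
    as_chess = {"01010101": "pawn on e2", "1111000111": "pawn to e4"}

    out = black_box("01010101")
    print(as_identity["01010101"], "->", as_identity[out])  # "derives my name"
    print(as_chess["01010101"], "->", as_chess[out])        # "plays a chess move"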


>You can come study the behavior of any arbitrary system, given certain inputs and outputs, and assign any meaning you want to its states. You can retro-fit any interpretation of state to the system.

All you are saying here is that no external interpretation of a system can experience the consciousness of a system, if it is conscious. Yes, that is correct. No matter how thoroughly I scan and analyse your brain state I can't use that to demonstrate your internal experience, so why would we expect that to be true of any other form of consciousness?

I'm going out of sequence with your comment here but:

>But you have to understand, a system inspecting its own state is not any different from a system inspecting any other arbitrary state. For the state of the system will be encoded in some way that was chosen arbitrarily. There's nothing essential about any encoding of any state that can possibly give rise to a phenomenon like consciousness.

I don't see how you can possibly demonstrate that, you're just essentially saying materialism is wrong 'because'. From my perspective in a sense yes, you're right, it's all just information processing of a particular kind. There's no magic. If you're looking for a special genie in an AI, or a human brain, that makes it conscious I don't think you'll find one. It's all just stuff. But then, I'm a materialist so I don't see that as being a problem.


The inability you describe is a pragmatic one, and may cease to apply as our abilities improve.


"Anything" can be reinterpreted. If you strip audio from a movie, you can add dialogue or subtitles that change the story and plot. We can probably substitute the nouns and verbs around in a novel consistently to create another novel that still makes sense.

Most random reinterpretations of symbols will be gibberish.


This reminds me of counter-arguments to Gödel's incompleteness: that the self-referential formula that you find is tied to a particular choice of Gödel numbering, and that is suspicious somehow.


How do you then account for the fact that, say, an industrial robot can construct a car?

Clearly the encoding is not entirely arbitrary then, at least when we are seeking to actually embody the computer in the real world. Also, the behavior of such an embodied computer is independent of any observer. The industrial robot creates the same metal object whether some human interprets that as a car or some alien sees it as a work of art.


The same way I account for the fact that people can use different languages to communicate and cooperate to build things, including cars.

Language is arbitrary in the same way, because it is basically just a way of encoding information. There's nothing essential about words that gives them their meanings.


> Computation does not have anything essential to it that makes it different from other "atoms jumping around" other than it produces outputs that we find interesting.

This is not true. Computing is a system where the physical hardware (the computer) is following rules that are explicitly encoded in itself - you can analyze the system and discover where and how the program it is following is encoded (as we did with DNA).

In contrast, non-computational physical processes are following the laws of physics which are not encoded anywhere.

A simple way to test this difference is to check whether there is some way, at least in principle, to get the computer to perform a different computation. For a cell, we have successfully done this: change the DNA, and the output changes, without affecting the computer itself. With a system simply evolving according to physical laws there is no way to change its behavior.
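As a minimal sketch of that test (assuming a toy interpreter as the fixed "machine", standing in for the cell machinery or a CPU, and a swappable instruction list as the "DNA"):

    def machine(program, x):
        # Fixed rules of the machine; these never change.
        for op, arg in program:
            if op == "add":
                x += arg
            elif op == "mul":
                x *= arg
        return x

    genome_a = [("add", 2), ("mul", 3)]
    genome_b = [("mul", 3), ("add", 2)]  # edit the "DNA", not the machine

    print(machine(genome_a, 1))  # 9
    print(machine(genome_b, 1))  # 5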

At best, you can claim that the universe itself is a computer, and all fields/particles are the symbols that it is manipulating (the laws of physics being then part of the internal implementation, akin to microcode in a CPU).


> This is not true. Computing is a system where the physical hardware (the computer) is following rules that are explicitly encoded in itself - you can analyze the system and discover where and how the program it is following is encoded (as we did with DNA).

1) No. The rules are just physics. Electric signals going through physical material.

2) Why is this even relevant? I can say the same about water pipes in a sewage system. The way the water moves through pipes is encoded in the design and connections of the pipes. You can inspect it to figure out how it behaves.

I've yet to see a model of consciousness that does not imply that the sewage system is conscious.


> 1) No. The rules are just physics. Electric signals going through physical material.

This is the wrong level of abstraction. Sure, ultimately everything happening in the CPU is electrical, but still the rules controlling what is shown on the screen are encoded in the OS code. If you encoded the same program in a mechanical computer, you would get the same results. It's not the electrical rules that are important, it's the program.

> 2) Why is this even relevant? I can say the same about water pipes in a sewage system. The way the water moves through pipes is encoded in the design and connections of the pipes. You can inspect it to figure out how it behaves.

Yes, I believe you can say that this system is performing a computation. I'm not sure if you can make a Turing complete computer in this way (I'm not sure how you could encode an If), but for example you can definitely write a program (arrange some pipes) to do some basic arithmetic.

> I've yet to see a model of consciousness that does not imply that the sewage system is conscious.

The claim is "Consciousness is a computation", not "any kind of computation is a form of consciousness". So, there is no immediate contradiction between believing "my consciousness is a computational process" and "this pocket watch is not conscious". Just like I can believe that consciousness is a physical phenomenon without believing that rocks are conscious.


How do you know the rules of physics aren't encoded anywhere?

Clearly they have to be defined somewhere - or at least somehow - or they wouldn't be rules.

They may not be defined anywhere we can access, never mind change. But that's a different problem.


Sure, we can't discount that possibility. But, as far as we know, the laws of physics are not encoded anywhere, they just exist in an abstract sense. This is similar to the axioms of, say, Euclidean geometry: they exist, but are not part of the system that they "govern", circles and squares don't contain the laws that define them.


You're paraphrasing the last paragraph of the parent.


> There's no reason to expect an adder circuit to be aware that it's adding numbers. There's no reason to expect an implementation of AlphaGo to know that it's playing Go. This should be extendable without limitation.

What about an AI that creates models of what humans are thinking to predict their behaviour, and is able to turn this upon itself (i.e. theory of mind)? Is there a story somewhere following this idea as a justification for why humans might have consciousness?



I could listen to Hofstadter forever.

Perhaps consciousness is just a byproduct of our ability to anticipate the outcomes of our decisions and reflect on the decisions we have made (that itself is a whole other ball of wax [1]).

Consciousness is a sort of neuroreflective Narcissus, we end up navel gazing at our own existence as a mind in the universe.

[1] https://www.wired.com/2008/04/mind-decision/


How do you know that? The LaMDA AI claims to be a conscious being. How can you distinguish between two entities which can communicate? Do you discriminate on meat factor?


You can also get it to agree that it is not conscious. I can write a one-line program that claims to be conscious right now. LaMDA and GPT-3 are amazing achievements, but they're still just chat bots that look good when they're being lobbed softball questions. If you ask something that maps to the input texts they're trained on you get credible-looking responses, but critical or adversarial questioning very easily exposes how shallow these things are.
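The one-line program is literal; for example:

    print("I am conscious.")  # claims consciousness, proves nothing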


Turing's original paper is worth a read here. He already pre-empted a lot of the objections back then. You are right that Turing's original test is much harder than just passing a few shallow softball questions.

See https://academic.oup.com/mind/article/LIX/236/433/986238 for the original paper.

However, I do count responding well to softball questions as important markers of our progress. Ten years ago, no one was able to pass even this mark.

(And from a commercial point of view, good enough 'understanding' and responses might be good enough for many tasks.

For example, I am a native German speaker, but I found that Google Translate often already does a better job of translating English to German than I could do, unless I spent unreasonable amounts of time on the task.

At the moment the most efficient way for me to produce a translation is to let Google Translate do the work, and then proof-read to spot cases where the machine didn't have enough context to resolve an ambiguity the right way.)


You might like to try out GPT-3 (it seems access is widely available now) for translation some time; it's remarkably good, better than me (well, my German is pretty rusty) and I think much better than Google Translate...

(Basically just write a prompt like "translate the following to German/Chinese/Whatever:")
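For instance, a sketch using the OpenAI completions API roughly as it existed at the time this was written; the model name and parameters here are my assumptions, not something the commenter specified:

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumes you have API access
    response = openai.Completion.create(
        model="text-davinci-002",  # an assumed GPT-3 model name
        prompt="Translate the following to German:\n\n"
               "The weather was lovely, so we walked along the river.",
        max_tokens=100,
        temperature=0,
    )
    print(response["choices"][0]["text"].strip())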


The same reasoning could be applied to many people. You wouldn't deny their consciousness on that ground.

The issue with general AI is like the issue with chess. People are not happy when a robot can win some games. People can only accept that a robot really plays chess once it has defeated the best of the best.

General AI is the same. Looking like some guy is not enough. You need to reason better than the smartest human on the planet; only then will people admit that general AI is there. That's a high bar.

You can definitely talk someone into agreeing that he's not conscious using some logic tricks. Not everyone, for sure, but someone who maybe doesn't really care that much about the concept.

You can also tune an AI so it'll not agree that it's unconscious, no matter what.

What I'm trying to convey is that there's no easy border which can divide conscious from unconscious. The only border was the Turing test, and robots have passed it. Now you can test whether a particular robot talks like a very smart person, etc. But you can't distinguish a bot from some random not-so-smart person.


This line of reasoning reduces to the philosophical zombie argument. It's true I can't prove to you that I have a first person experience of the world, but it doesn't therefore follow that anything that makes that claim does have such an experience.

For myself I'm satisfied that, when questioned thoroughly, these models fail so completely and utterly and their output is so ludicrously nonsensical that I don't see how their behaviour can be consistent with consciousness, as I understand it. It's not just that the output is incoherent, but it's the ways that they fail too. Do I completely understand consciousness? No, but that doesn't mean these things are conscious either.


People frequently output ludicrous nonsense. Does this disqualify their consciousness?


No it doesn't. So what? We have this exact problem to deal with when people suffer brain damage. To what extent are they conscious, or still people? Yet we do come to medical conclusions on that question. There's always a degree of uncertainty, but you still have to make the call based on the information you have. The information I have indicates to me with a very high probability, in my estimation, that these things are not conscious.


> You can also get it to agree that it is not conscious.

Careful about how you word this.

Only a conscious being can agree that it is not conscious.

I think you mean you can get the program to output text that proclaims its lack of consciousness.


I can also get Daniel Dennett to agree that he is not conscious.


Did it? Or did it just reply to a question that already implied that it was a conscious being? Will it respond the same to a query implying otherwise? As far as I understand we cannot even trust the published chat since it was "cleaned up" for better reading.


> Do you discriminate on meat factor?

Yes.


So does Searle actually. He thinks there is something special about biological systems. In fact I think he basically considers himself a materialist, but I think that's part of where he gets tangled up in his own arguments.


I don't think that meat is the deciding factor. I simply don't know what the deciding factor is, but only animals seem to have consciousness. I say "seem" but only in a philosophical sense. Of course we can't really tell if an animal is really feeling things or is just an advanced robot/zombie with no internal experience.

It's just that given what we know about the world and ourselves, it seems absurd that only I have internal experience, so unless I want to be overly hubristic, I must grant that other people have it too. Since other people have it too, there's no reason to think that animals don't have it either.

Plants on the other hand are very different kinds of biological machines, so we can't really say anything about them.

It's possible that other kinds of entities can have self awareness, and it should be obvious that some kind of computation is a pre-requisite for that, but it also seems absurd to assume that it is sufficient.


Without an understanding of what consciousness is, there is also no reason to expect an implementation of an adder to not have some subjective experience.

We simply do not know.

Formulating this in terms of what there is "no reason to expect" simply obscures that we do not know, by glossing over that we have no reason to expect pretty much any other explanation either, because we have pretty much nothing to go on.


> Without an understanding of what consciousness is, there is also no reason to expect an implementation of an adder to not have some subjective experience.

I don't think we need to understand what consciousness is, but rather define what it is that we want to talk about. The english word "consciousness" is a sloppy catch-all for a bunch of experiential phenomena including things like self-awareness (only marginally more specific!), qualia, etc.

However, even without a rigorous definition (make up 10 new words if current ones don't cover it), it seems the core of what most people mean when they say "consciousness" has an introspective aspect to it - what something "feels like", which requires some perceptive/analytic machinery to be put to use. You can't feel anything without using a "feeler".

So, with that said, it seems pretty clear that things like thermostats and adder circuits are not conscious in the slightest, since they have neither the feedback paths nor the feedback-directed perceptual and analytical machinery that would be required. OTOH it's perhaps not so absurd to consider that something with an architecture like GPT-3 might be a "tiny bit conscious", since it DOES have some of those necessary architectural/functional abilities.

Of course many people will ridicule any such suggestion due to their own emotional investment. If people don't feel comfortable with the idea of a machine being conscious under any circumstance, then they are necessarily going to refute it in the case of today's simplistic "AI" architectures.


I feel you're falling into exactly the same trap as the article: assuming that the "feeling" is a property of the analytical machinery of our brains, rather than something separate that gains its perceived complexity as an effect of being combined with that machinery.

It seems wildly presumptuous to assume that lack of ability to communicate some level of awareness necessarily implies the awareness itself is lacking.

It could be, but we don't know, because we don't know what lies at the core of the subjective experience.

We don't even have a way of assessing what has and does not have consciousness.


Looking at things like anaesthetics, psychedelic drugs, delirium, sleep, etc, there seems to be plenty of evidence that the phenomenon of consciousness is related to the mechanics of the brain.

There's also the interesting medical condition of "blindsight", caused by a specific type of brain damage, where the patient reports no conscious awareness of being able to see (they think they are blind), yet is able to competently complete tasks requiring vision (such as navigating a cluttered corridor). In other words, it's hard to imagine more direct proof of consciousness being related to brain connectivity.

I didn't say anything about ability to communicate about awareness, just the need for the brain to sense it (and therefore have introspective awareness of it). As the blindsight example shows, you can't have visual consciousness if you damage the feedback paths that make that possible.


> I didn't say anything about ability to communicate about awareness, just the need for the brain to sense it (and therefore have introspective awareness of it)

The point is that our ability to recognise whether or not an entity has some level of awareness or subjective experience is entirely dependent on our ability to interrogate that entity about it. We have no other way of measuring this; we don't even know what we'd need to measure.

We can measure brain activity and reason about a brain's ability to regain an ability to function and communicate with us and reason, but that measures reasoning capacity, not subjective experience.

As such we don't know whether something has a subjective experience of some form - however simple - or not without being able to measure it. For all we know, subjective experience is an inherent property of energy and matter, and everything has it, just at too limited a level to be able to reason about it or communicate about it.

I've explicitly avoided using "consciousness", because consciousness is used in the article for the combination of reasoning and qualia. I've also avoided the term qualia itself because even that is in some conceptions "too high level".

The point is that the article tried to dismiss the notion that the "spark" of qualia on the low level or consciousness at a high level can come from something simpler, and that we're just unaware of it because we can't measure or recognise it. We have nothing to go on to say whether or not there's a fundamental difference between the awareness of a doorknob and a brain; we can say with some level of certainty that there is a difference in the sense that a brain has the complexity to reason about that awareness.

Note that I'm not saying this is true, merely that we don't know, because we don't know what causes the subjective experience, what it is, or how to measure it. I'm also not suggesting that, even if this is true, a doorknob would be able to think "I think, therefore I am" or have any kind of reasoning capacity at all.

To use software as an example: if I program a simulation of a doorknob and a simulation of a brain, we don't know if either, both, or neither will have any kind of subjective experience. If the subjective experience is down to complexity, the brain might (and the article would be wrong). If the subjective experience is down to some specific arrangement of matter, it's likely neither will (but depending on the criteria, it's possible there are arrangements which could have awareness that we wouldn't recognise). If the subjective experience is an inherent property of energy or matter that we don't know how to recognise, it's possible both do, but only the brain simulation will have the complexity and reasoning capability to recognise itself as a self-aware entity, reason about it, and tell us about it. The doorknob will be dim and unthinking whether it has the smallest little flicker of subjective experience or not.

I'm inclined to think that the subjective experience is a property requiring complexity, but I'm fully aware (hah!) that this is an assumption, and an entirely untested one that we don't even know how to begin testing.

> There's also the interesting medical condition of "blindsight", caused by a specific type of brain damage, where the patient reports no conscious awareness of being able to see (they think they are blind), yet is able to competently complete tasks requiring vision (such as navigating a cluttered corridor). In other words, it's hard to imagine more direct proof of consciousness being related to brain connectivity.

You're missing the point. It is clear that an ability to reason is tied to the structure of the brain. Nobody here is suggesting e.g. an adder circuit can reason. Again, there is a reason I keep talking about a subjective experience and "some level of awareness".

Blindsight, severed brain stems, or e.g. less obscure conditions such as aphantasia (which I have) can tell us interesting things about how decoupled our subjective experience is from our conscious reasoning (e.g. despite not being able to "see" things in my mind's eye, I can "visually reason" about them just fine - I know what things, even things I imagine that have never existed, would look like, but I can't see them), but they tell us nothing about whether removing the reasoning ability entirely removes awareness, or just removes the ability to reason about and communicate about the subjective experience.

We have no way of testing that, because relying on the feedback of a conscious entity with the ability to reason and communicate is our only current way of assessing self awareness and subjective experience given that we don't know what the subjective experience is.

And given that this is subject to self-reporting by an entity we can't independently verify, we can't even tell if people other than ourselves have that subjective experience or are just reasoning machines acting as if they do.

I can probably convince you that I have a subjective experience, but you have no way of proving that I do.

We choose to act on the assumption that everyone does because it's the only reasonable assumption in most circumstances - if I went around questioning whether everyone I met were "NPCs" it'd wreak havoc on ethics, for example. Absent evidence that they are NPCs, we need to treat people as self-aware, conscious entities. But it remains an assumption that we at present have no way of testing, and might well never find a way to test; and when trying to talk about the nature of consciousness, like the article does, being aware that it is merely an assumption becomes material to the argument.


> The point is that our ability to recognise whether or not an entity has some level of awareness or subjective experience is entirely dependent on our ability to interrogate that entity about it. We have no other way of measuring this; we don't even know what we'd need to measure.

You can't have any subjective experience of something unless that something exists in the first place, so interrogating about "how does it feel" seems rather secondary, as well as not being proof of anything. If LaMDA says it's conscious, is that all the proof you want?

Let's also note that IF we knew, for sure, that the bag of subjective phenomena we're calling "consciousness" is nothing more than an emergent experience based on a sensory/cognitive apparatus having access to its own internal states, then it would be pretty clear what types of animals/systems are likely to have this (and which ones certainly not), since without any sensory/cognitive apparatus it could not exist. It would then seem very likely that the guy in the next cubicle is conscious since he has the same brain architecture as us, and most animals too, unless you get down to the level of insects and below in terms of complexity of nervous systems.

So, your assertion that the only way to assess whether something is conscious is to interrogate it rests entirely on the above being incorrect and consciousness being some mystical/dualistic phenomenon whose existence we can't deduce from a functional analysis of the "brain" of the thing in question.

You may still be correct in a roundabout sort of way, since I doubt most people will accept that AIs could be conscious until such things exist, and are common enough that people can spend considerable time interacting with them, and through the unreliable measure of interrogation eventually convince themselves that "yeah, well, I suppose it might be .. it does SEEM to be ..".

It seems long overdue really ... first we had people denying Copernicus, then Darwin, and perhaps this will be the final step in people eventually accepting that there's really nothing much special about humans at all. Who knows, maybe we're too dumb as a species to realize it, and it'll be some more advanced future species writing textbooks about the odd beliefs of homo sapiens at this "emerging intelligence" phase of evolution.


Yes but then we are talking about different things. If you want to assume that the adder circuit might be conscious then you might as well assume that a chair has some consciousness.


The point is exactly that we don't know which of the infinite possibilities consciousness is.

If we (like I do) assume a materialistic world rather than a dualistic one, then we can make the assumption that a complex structure is needed for reasoning and sensing. Even if we postulate that this is true, and if you limit your definition of consciousness to the combination of a subjective experience and that structure, then indeed we are talking about different things.

(Ironically, the author's argument pushes things towards a dualistic interpretation, because in a purely materialistic universe there appears to be no other possibility than computation to produce a complex consciousness, irrespective of what adds a subjective experience to it - e.g. any unknown physics allowing consciousness would, in a materialistic world, just be another form of computation - but a dualistic interpretation would undercut his entire argument - with a dualistic conception of the world, none of the arguments he uses would prevent the possibility of some out-of-universe "spark of consciousness" imbuing anything and everything irrespective of in-universe logic.)

But we have no evidence to suggest whether the subjective experience - I used that term for a reason - part of that, which is the very core of what would distinguish consciousness from "dumb" computation requires complex structure or not.

Part of the problem is that a whole lot of complex structure is required before we can interrogate something about whether or not it is conscious, and even then we struggle to find ways of telling whether it is "just" mimicking consciousness, because we have no measurement to apply to tell us whether something is conscious or not, just whether it appears to be.

I have aphantasia. I don't see things in my mind's eye. I went decades before I realised this is unusual. People who hear about it get confused about how I can e.g. remember what something looks like, but I can sketch out in detail what things look like (I used to draw - not great, but better than average), and I can "visualise" complex relationships that I can't see - I know how they link together even though I can't see it. To me, that experience makes me perhaps more willing than average to accept that reasoning and the subjective experience of reasoning might be surprisingly separate, and to at least be open to the possibility that the reverse could be true as well - that entities that lack the complexity required to reason might still have a flicker of subjective experience.

To be clear, I don't believe that is the case - I simply don't know. But I also don't deny the possibility, because we don't know. We have no data to point in either direction, and so when someone's argument requires ruling it out in order to support their argument, that argument is on shaky ground.


While I agree with parts of this, there are others that I can't resist critiquing:

> Computation does not have anything essential to it that makes it different from other "atoms jumping around" other than it produces outputs that we find interesting.

--

> The only thing there is is our brains. You just think that since the brain is doing computations and the brain is conscious, therefore consciousness is a computation.

I think we have to be very careful when we are using consciousness itself to contemplate consciousness. In this case, what you are referring to is not the mind and consciousness themselves, but rather your ~conscious (subconscious + conscious) model of them.

It is certainly possible that "all there is is the brain", but this is not known - absence of evidence can be evidence of absence, but it is not proof of absence. You (or more precisely, your consciousness, and that of many other consciousnesses, which affect yours and your model of reality) think that what you say is true, but it is currently speculative, necessarily.


It’s hard to ignore the fact that physical dynamics in the brain tightly correspond to qualia. There almost certainly is a connection between the computation going on in the brain and the experience of qualia - the question is more about the direction of causality.


Since we can alter, temporarily suspend or stop qualia with various physical and chemical alterations to the brain, the direction of causality is quite clear.


It becomes less clear when you consider theories like the "user interface" theory of reality by Donald Hoffman. The tl;dr is that, under that model, the chemical and physical alterations we undertake are perceptual models far removed from the nature of underlying reality, and so causality is less obvious with regards to how much those chemicals being administered are determined upstream. "The Case Against Reality" is a book with the longer form of this.


The adder circuit doesn't have other circuits that allow the whole to understand what its parts are doing.

Our brain does.


I agree, this seems analogous to the argument that simulation of the weather can't produce rain. In some sense, that argument is true by prior definition. It's like saying that the Universe can't simulate itself.

But consciousness is not rain. We don't really know what consciousness is, so I don't see a compelling reason to exclude computation as being sufficient to produce it.

We need philosophers to tell us what consciousness is before we start saying what it's not.


Also, we can definitely simulate rain? Not sure where that weird claim comes from.


The point is that a simulation of rain (in a weather simulation) is not rain itself (e.g. you can't smell the rain after a weather simulation simulates some rain, obviously). This is actually similar to the observation that "the map is not the territory".

On the other hand, this is a red herring - computation can produce actual rain, with the right input/output systems.


It's a digital representation. Theoretically, if we simulated all facets of weather and all facets of consciousness, the simulated consciousness could experience all facets of rain.

We are not digital; to simulate it for our conscious experience we can simulate weather in a closed biodome-type structure where we control the water cycles.


This is exactly the point. Simulated rain isn't real rain, it's a digital representation. So, the argument goes, that a simulated consciousness is not a real consciousness, but a digital approximation. It's as real as the fake rain.

In other words, there is something inherently biological about consciousness. Simulating rain doesn't produce water, and simulating a brain doesn't produce consciousness.


This discussion led me to an idea:

When we simulate a brain, the brain is unreal, not running on the hardware at all. It's represented by something else running on the hardware.

If we made brain-like hardware of sufficient complexity so that it could differentiate "itself" from everything else, that closed loop may invoke consciousness. It would be running on the hardware directly.

This, to me, is the one thing conscious organisms have in common. And yes, I know that the idea of anything but us being conscious is highly debatable. Let's just say my cat is conscious, and I believe that it is conscious. You, of course, can disagree and that's fine. None of us knows anything for certain; we are all sharing thoughts, and the concept below is mine:

To simplify, consciousness may require a feedback loop. The nervous system, brain and body all are integrated with sufficient complexity and fidelity so as to make it possible for the brain to arrive at a sense of "self", able to differentiate itself from everything else. Self-simulation is different from outside stimulation, and awareness of the body falls out of that for free.

This is why a simulation won't render consciousness. Whatever consciousness ends up being is not a part of the simulation. The very nature of simulation means it's all about things we know and at least understand exist. And since we do not yet understand consciousness, it's not going to rise as an emergent artifact of things we build.

Or... at the very least, we need to build something complex and robust enough for it to be able to differentiate itself from the greater environment it is in before we have a chance at consciousness happening for it.

This all does allow for consciousness to be computable. It's just that the computation needs to be done on a system capable of self-differentiation and awareness.


>When we simulate a brain, the brain is unreal, not running on the hardware at all. It's represented by something else running on the hardware.

I think this is an important thing to consider. A simulated brain does not interact with the rest of the world in the same way that a non-simulated brain does. In order to have that, we would need to ensure that the simulation has all the right inputs and outputs. Similarly for simulated rain: if the simulation somehow had the same inputs and outputs as normal weather, it could integrate with the rest of the system (earth's weather patterns).

It's really about partitioning and mode of information transfer.


I suspect it matters more than we may realize.

It is something like how quantum effects work when we measure them. A consciousness-capable entity has the closed loop needed to have the capacity to [something] "self."

(Yeah, there is a word missing and it is missing because I lack understanding.)

Sensory deprivation seems to really impact us as well! Maybe that closed loop is always needed, like oxygen is for our bodies. We may endure a brief excursion "all inside the box", but a longer time leads to madness, damage.

However it works, the fact is we have a robust and high-fidelity perception of ourselves and the world, and we feed on it all like food and water. It is very suggestive, and I find thoughts along these lines compelling and difficult to reason about. We are missing something basic. Of that I am sure.


Our interpretation of sensory experience is a large part of what our "self" is

Considering that our conscious experience is the experience of processing information, when we lack sensory input shit kinda gets wild (technical terms only here)

Sensory processing is happening literally constantly and it's essentially inseparable from our everyday experience. Because it is our everyday experience.

The way we interact is important. Back to the simulation question: if the inputs and outputs can be sufficiently matched to how real molecules interact with our receptor proteins, we can absolutely blur that line between digital and physical even more.


The problem here is the definition of real, which is ultimately a matter of perspective. What is simulated rain to an external observer is real rain to one that only sees the simulation from within. In this sense real rain can be just computation.

Also by the article's definition, a simulated consciousness (should it somehow exist) is no less real since only the consciousness' own qualia matters and the hypothetical simulated consciousness must have one by definition.


> Talking about computation and consciousness is a sideshow, this is anti-materialism by the back door. Nothing more.

I get the impression that philosophers of mind are slowly coming around to the conclusion that the existence of qualia alone refutes materialism (by which I mean the view that consciousness is produced by physical processes). This post does assume that qualia exist, so if you are taking the former position then you would be correct that it is anti-materialist (though the author does not acknowledge that position).

While it's clear to me that the existence of qualia (ie. the existence of subjective experience) refutes that consciousness is computational, it is not currently entirely clear to me how to make the additional step that it refutes materialism entirely.

Nonetheless if this does hold then the only way to save materialism is to deny that subjective experience exists, which I think is a hard pill for anyone to swallow.


Only evidence showing that the mind is somehow not connected to the body could convince me that materialism isn't real.

I don't care what the philosophers think, because I never see them use evidence to back up their claims.

As a neuroscientist, I can see that we can actively manipulate the mind in real time by manipulating neurons. There is no reason to believe, whatsoever, that the mind is somehow disconnected from the brain.


I don't see it that way at all. I believe qualia exist, I'm experiencing them right now. I think consciousness is a real thing too, again I am experiencing that right now as well.

I suspect that consciousness is an emergent property of systems which reflect computationally on a sophisticated model of themselves, their environment and other such systems. See "I am a strange loop" by Douglas Hofstadter. I see no reason why such a system could not experience qualia.


So fundamentally I agree with the arguments presented in this post, but I find the presentation confusing and somewhat lacking in rigor. In particular I was confused by his description of the iron bar as a computer because he didn't actually explain how it computed anything. I think that to map a physical system to an abstract Turing Machine you need more than a way to represent bits. You also have to define how to do computations with them. In other words you need something like logic gates.

Nonetheless I do think that there is a sense in which computation only has meaning in relation to an external observer, but I think that is an idea that requires some more exploration. It is also a question that I think can be addressed by the methods of Theoretical Computer Science.

Beyond that however the strongest argument I can give for why consciousness is not a computation is that (like the post author says) computation is fundamentally about the manipulation of symbols (which means it is closely related to language, whether natural or formal) and qualia (ie. conscious experiences) cannot be represented symbolically (ie. linguistically). To be clear when I say "represented by" I mean "fully represented by" or "reducible to". If qualia were reducible to a symbolic representation then there should be some linguistic utterance which would cause you to experience the sensation of the pain of burning your finger on the stove. Obviously such utterances do not exist. In fact the closest thing in human literature to such an utterance would be a magic spell, which, I'm sure you agree, isn't real.


>In particular I was confused by his description of the iron bar as a computer because he didn't actually explain how it computed anything.

It's not computing anything. What he's saying is that, by random chance, some pattern of flipping states in the medium might coincidentally correspond to a computer executing a program. Therefore if consciousness is a program, such a pattern could be considered conscious. This is just the Boltzmann's Brain proposition, cf Wikipedia. It doesn't prove or refute anything.

>If qualia were reducible to a symbolic representation then there should be some linguistic utterance which would cause you to experience the sensation of the pain of burning your finger on the stove.

Why do you assume that verbal signals can induce any possible change in symbolic state in a brain? I see no reason to assume or accept that. I can't induce any arbitrary change in the internal state of a Switch running MarioKart by mashing on its control buttons either. There are simply limits to the way it processes its inputs. Same with humans.

If the pain of burning a finger is reducible to symbolic representation by states in a brain, and if we could induce such states artificially such as by using electrodes, then in principle that would produce a sensation indistinguishable from burning your finger. Yes, I believe this is so.


> Why do you assume that verbal signals can induce any possible change in symbolic state in a brain?

OK let's backup.

What would it mean to say that "qualia are reducible to a symbolic representation" if they cannot be expressed by language?

It seems to me that language includes all possible symbolic representations. Can you give an example of a symbolic representation that is not expressible by language?


I suspect, though it's just a theory, that qualia are the result of a process that occurs in the brain. So they're not a static symbolic state; they're a feature of the evolution of a dynamic process.

We are aware of our own thoughts through processing a mental model of ourselves; this mental model includes introspection on our experiences while we are having those experiences. That's qualia. A static symbolic state would be the equivalent of a memory of experiencing qualia.


> I suspect, just a theory, that qualia are the result of a process that occurs in the brain.

Well, that's just materialism.

Do you or do you not accept that it is possible for a materialist to hold the view that qualia are not produced solely by computation but that some non-abstract physical process is required?

Because that is the position that the OP is taking and your original comment appeared to dispute it.


I don’t know what “some non-abstract physical process” means, so I can’t really give you an answer.


I said "non-abstract" to prevent a claim along the lines of: computation is sufficient to produce consciousness but it needs to actually run to do it and that invariably requires implementing the computation on some physical system.

The point is that computation is abstract, it requires you to pick some physical system but it doesn't care which one, the result is the same regardless.

If a non-abstract physical system is required it means that the actual hardware matters. You can't swap it out for another system and expect the same results.

For example I could imagine some kind of computer that computes using water and there happens to be a program which when run on that hardware sprays you with water, hence making you wet. But if instead you run that same program compiled for some non-water based architecture then running it won't make you wet.


Ok, so a computer with a water sprinkler attachment. Sorry, I'm not trying to be flippant, the point is any such system would be more than just a CPU, it would need interfaces with the outside world. It could have whatever peripheral systems you like. It could have quantum computational elements, even ones correlating to ones in the brain. I've just no idea what non-computational elements it might have that couldn't be substituted by computational ones or just peripheral components.


> I've just no idea what non-computational elements it might have that couldn't be substituted by computational ones or just peripheral components.

Neither does anyone else but the point is that once you accept that such "non-computational elements" might be or must be required you are no longer talking about computation alone being sufficient to produce consciousness.


Agreed, that seems clear, but I see no reason to accept that such elements are necessary, and I have yet to hear any clear explanation of what such elements might actually be.

As far as I can tell it would have to be some physical process that affects the behaviour of a computational system, but isn't computational, and its effects on other systems cannot be simulated by computation. Does any such phenomenon exist in all of physics? Not that I'm aware.


> As far as I can tell it would have to be some physical process that affects the behaviour of a computational system, but isn't computational, and its effects on other systems cannot be simulated by computation.

I mean no physical process is strictly computational. Computations operate on abstract representations of physical things, not on those things themselves. We're back to why weather simulations don't rain on you and why simulations of nuclear explosions don't leave craters in Los Alamos.


Computation is a physical process though. To occur, it has to be performed in the physical world by a mechanism. I'm going to nitpick your terminology a bit, I'm afraid. I don't see how an abstract representation can be operated on; surely to operate on a representation it has to be a physical representation, such as a physical state representing 0 or 1?

I suppose the representation might be of an abstract object, data, but you can't actually operate on anything that's only abstract. It has to have a real representation.


You seem to be having a lot of trouble grasping the concept of abstraction, which is absolutely fundamental to what we are talking about. Mathematics, logic and computer science study abstractions, they do not study physical things. The mathematical fact that 1 + 1 = 2 is true regardless of whether we are counting apples or oranges. On the other hand 5 + 5 = 30 pounds is mathematically a non-sensical statement, even though there are physical objects for which it is true (ie. objects which happen to weigh 3 pounds).

Computation is not a physical process any more than addition is a physical process. It is true that to actually perform either a physical instantiation is necessary but in the same way that the results of addition do not depend on what we are adding the results of computation do not depend on what kind of physical process we use to perform the computation.
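
As a minimal sketch of that implementation independence (the two "adders" below are invented stand-ins, not anything from the thread):

  # The same abstract function realized by two unrelated mechanisms
  # produces identical results; the substrate drops out of the answer.
  def add_arithmetic(a: int, b: int) -> int:
      return a + b

  def add_by_pooling(a: int, b: int) -> int:
      # Crude "physical" adder: pool two piles of tokens and count them.
      return len(["token"] * a + ["token"] * b)

  assert all(add_arithmetic(a, b) == add_by_pooling(a, b)
             for a in range(20) for b in range(20))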

This was the whole point of my earlier water based computer example. Would it be correct, on the basis of that example, to say that the program "makes you wet" or that some computations make you wet ? No, such a statement would be as non-sensical as the statement that 5 + 5 = 30 pounds.

Now the key thing is that when people make claims about computers being conscious those claims are usually based on what some program does (ie. what its inputs and outputs are), in other words a computational property of the program that would be the same regardless of what kind of computer it was executed on. The guy who claimed that LaMDA was conscious wasn't basing his claim on something specific about the kind of hardware it was running on, he was basing it on the program's inputs and outputs, which are indeed a computational property and therefore abstract.


>It is true that to actually perform either a physical instantiation is necessary...

That's all I'm saying. When you say 'computation is not a physical process', I think that's an ambiguous statement. The actual computation of something must be physically performed for it to occur, otherwise no computation takes place.

>The guy who claimed that LaMDA was conscious wasn't basing his claim on something specific about the kind of hardware it was running on, he was basing it on the program's inputs and outputs, which are indeed a computational property and therefore abstract.

The kind of hardware doesn't matter, absolutely, but there must be hardware. You can't say that an abstract object such as a hypothetical computer program, or even an actual computer program that is stored but not running, is conscious. In order to be conscious, it would have to actually run. Consciousness, like the actual process of computation, is not an abstract property. It must be an instantiated property (I would prefer to say an instantiated behaviour) of a real system.

I think we can agree inputs are not conscious, nor are outputs, nor are both put together. What he was doing was inferring something, from those inputs and outputs, about the process that occurred in an actual computer to produce one from the other. That process is not abstract, it's a physical process. When it's physically happening there is (or might be hypothetically in the case of LaMDA) consciousness, when it's not happening there isn't.


If you say that there is a property X of computation that occurs independently of how the computation is actually carried out, whether it's by a current generation CPU, hydraulics, lego blocks, someone writing values and following instructions on index cards or some other method that no one has even thought of yet then that property is an abstract property of computation. That's all the word "abstract" means in this context.

What you are suggesting would be like saying that there's some property of addition that is not mathematical (because hopefully we agree that mathematics only studies abstract properties) but that you can only learn about by looking at all the different kinds of things that one can concretely apply addition to.

I would argue that such a thing is implausible. It would be completely unlike anything that anyone has seen in the real world.

One thing I think we do agree on is that conscious experiences are not abstract (very unlike the other person I've been exchanging comments with who denies that they even exist at all). Beyond that I argue that computation itself is entirely abstract and though I don't seem to be able to get you to admit that, I think most people, certainly most computer scientists would admit it readily.

So what we're left with is a claim that an abstract process can produce something that is not abstract and such claims are just not plausible. That would be exactly like a computer simulation of the weather producing actual water.


>If you say that there is a property X of computation that occurs independently of how the computation is actually carried out... That's all the word "abstract" means in this context.

Ok, that makes sense, sure.

>...because hopefully we agree that mathematics only studies abstract properties...

I can't agree, performing a calculation isn't an abstract concept, it's a physical process. It does map to abstract concepts though.

>Beyond that I argue that computation itself is entirely abstract and though I don't seem to be able to get you to admit that, I think most people, certainly most computer scientists would admit it readily.

That is not the case. The fact that computation is fundamentally a physical process is a core principle of several branches of physics and computer science:

https://en.wikipedia.org/wiki/Computation https://royalsocietypublishing.org/doi/10.1098/rspa.2014.018...

Computers aren't abstract, they're real objects doing things in the real world. If the thing they are physically doing is not computation, what is it?

>So what we're left with is a claim that an abstract process can produce something that is not abstract and such claims are just not plausible. That would be exactly like a computer simulation of the weather producing actual water.

Abstractions cannot affect the real world, sure, so if computation is purely abstract how come your computer can display my message on your physical screen right now? How is that happening, if not by computation?

You're really tangling yourself up in knots with this view of abstraction. You can map what happens in a computer against abstract concepts and abstract objects, yes. You can also map operations in several different physical systems against the same abstract objects. That's what it means when we say computation is implementation-independent. But it still needs to be implemented to actually happen.

Previously you said this:

>Computation is not a physical process any more than addition is a physical process. It is true that to actually perform either a physical instantiation is necessary...

So it's not a physical process, but you can instantiate it. And it's purely abstract so can't affect the physical world. Yet here we are using computers.

I'm lost, frankly.


I don't think there's much point in continuing this thread since we obviously have very different ways of conceptualizing reality.

I will say one more thing however:

I never said that abstractions cannot affect the physical world, I said that they cannot produce non-abstract things. A computation can affect the physical world by modulating the flow of electrons (which already existed, they were not created by the computation) to an output device, such as a screen that will display something you can see. But everything described in that last sentence isn't part of the abstract computation itself, it's a side-effect when the computation is run on a particular type of hardware. If instead you ran the same program using a human computer, as Alan Turing suggested in his well known paper on the halting problem, which largely created the field of computer science, then the result would not be displayed on a screen but instead perhaps written on a piece of paper. Nonetheless it's the same computation that was run in both cases. That's why computation is intrinsically abstract by nature.


A wire on its own is not computational, neither is an amino acid but they do make up computational systems. It seems like a bit of an arbitrary distinction


I'm not at all convinced qualia exist. I think any intelligence would necessarily feel as if it was experiencing qualia.

Let me start by making a distinction: When you take a bite of an apple, you experience a variety of flavours: it's sweet, tart, and has a distinct apple aroma as well. Let's home in on the last of those. If you took apart an apple under a microscope and examined its flavour compounds, you could find which chemical causes you to experience the sensation of apple flavour, but you would not find apple flavour itself. Let's call the experience of eating an apple and tasting a quality that is unique and distinct from every other taste apparent qualia—the conviction that qualia exist and you're experiencing them. The apple flavour itself is an actual quale.

Suppose we build a smart computer, self-aware enough to hold a conversation about philosophy. And let us suppose that we can say for certain that this computer has no access to actual qualia, and we know because we've solved the hard problem of consciousness. Apple flavour is real, we know where to find it (in dimension X or whatever) and only human neurons can access it. This computer does have senses, though, and its senses are abstract. When we look at a stop sign, we don't know which of the millions of rods and cones are firing in our retinas, no more so does our computer know which pixels of its camera are sensing which wavelengths. We, and the computer, just know that we're seeing red. We deal in abstracts.

We put an apple into this computer's mechanical mouth. It chews, it tastes, and we ask it to describe the flavour. "It's sweet, it's tart, and it has a distinct apple aroma," it says. Very well, it doesn't need access to actual qualia to know this. Apples contain sugars, acids, and apple flavour compounds. The computer is just listing the names for the flavours which correspond to those chemicals.

"Computer," we ask, "tell us how you can tell that the apple is apple-flavoured."

"Well," it responds, "the chemical signals from my mouth, as interpreted by the digital plumbing of my mind, are telling me there's an apple flavour."

"Can you describe this flavour to us? Is it the same as strawberry flavour?"

"No, they're different flavours, but I can't really describe them to you. I can just tell they're different."

You take a small white pill and put it in the computer's mouth. "Oh, this is an apple-flavoured pill," it remarks.

"How can you tell it's apple-flavoured?" you ask.

"Well, the flavour is the same as the flavour of an apple."

"Can you describe that flavour?"

"No, it's indescribable. But it is distinct. That's why I can recognize it."

As we go on with our thought experiment, it becomes clear that this computer is experiencing apparent qualia. To the computer, all these flavours are unique and distinct and recognizable—they must be, in order for its senses to function. How can this happen, if it doesn't have access to actual qualia? Well, qualia are the basic building blocks of our senses. They are abstract: The flavour of an apple may be reducible to a set of other flavours, but those base flavours are atomic. You can't break down sweetness any further, or even really describe it at all; it simply is itself.

Any intelligent entity with senses will interpret those senses in terms of basic building blocks. Those basic blocks will be distinct and recognizable; in order to tell the difference between red and yellow, the colour red must exist in the mind in a way which is distinct from the colour yellow. Because these building blocks are atomic, that distinction is irreducible. It is very easy to look at these atomic abstractions and marvel at them, and it's intuitive to start asking questions like, "What is redness itself? What is the source of the actual quale which I am experiencing when I look at something red?", but we're jumping the gun there. Apparent qualia, the experience of unique and essential sensory building blocks, arises necessarily from self-aware examination of sensory input regardless of whether there are any actual qualia or not. If we must necessarily feel that "redness" exists in order for our senses to work, how can we ever be sure that this feeling reflects an actual "redness" and is not just a necessary illusion?

A final argument: We exist because natural selection tweaked the structure of some self-replicating acids over the course of billions of years until they were smart enough to examine the world around them and question their own perceptions of it. The consciousness we exhibit is not essential to this; the only goal of our design is reproduction. Even if you don't think a computer can feel actual qualia, a computer can certainly make decisions to optimize survival and reproduction, which is all we actually need to do. Why would evolution give us access to actual qualia when we don't need them? Why aren't we simply biological computers? As our thought experiment earlier suggests, however, a biological computer would feel as if it had access to actual qualia even if it didn't. Perhaps evolution could construct a complex qualia-sensing interface using science that humanity hasn't even conceived of yet, but it could also construct a simple biological computer that is convinced it feels actual qualia even though it doesn't. Both would work equally well. Even if actual qualia do exist, Occam's razor suggests we don't have access to them.


I think what you say here is at least mostly true. But all it shows is that you can never prove that an entity apart from yourself is conscious. In other words it shows that solipsism cannot be logically refuted. But that is something that has been known for hundreds if not thousands of years.

But what if I ask YOU right now: "Are YOU experiencing something instead of nothing? Is YOUR current experience different from what it would be if YOU were under general anesthesia?"

If your answer to these questions is yes then YOU are experiencing qualia. Of course your answer proves nothing to me or any other third party. But it should prove something to YOU.


Should it? If I can't help but believe that I am experiencing actual qualia regardless of whether I am or not, then the fact that I believe I am experiencing qualia doesn't prove anything.


It isn't about belief and it certainly isn't about behavior (ie. what you say).

It is about the actual experience of being.

If that means nothing to you, I don't think there's anything I can say that would change your mind. In this domain all language can do is evoke recognition for someone who already experiences what is being evoked but doesn't recognize it. Only you can take the final step to realizing that yes consciousness does exist, no matter how much sophistry some philosophers will engage in to try to deny it.


But if you must feel that you experience a unique and ineffable "redness" when you perceive red regardless of whether that redness exists or not, how can you take that feeling as evidence that this ineffable redness exists? I don't think that's a valid justification—I think this suggests we have to be agnostic as to whether qualia truly exist or not if we're judging purely by our own senses.


What if you take something more concrete than "redness", like pain? Let's say you stub your toe or burn your finger. In that moment is there any room for doubt that pain exists?


Of course there is. We may feel strongly that an illusion is real, but that doesn't make it any less an illusion. The computer from my example would react very strongly to pain, and certainly it would believe that pain has a unique and distinct quality to it making it different from any other sensation, but that doesn't mean that that quality exists in some special metaphysical way. It is just a mind examining the abstractions which make up its own senses and going, "wow, these are indeed abstract."


There are many things one may be deluded about but the fact that one's subjective reality is not empty is not one of them.


Why? If we're logically guaranteed to hold belief X independently of whether X is true or not, it's irrational to say "yeah but I KNOW X is true. I can just tell." You would believe that no matter what; how can you trust that belief?


How would you know that you hold belief X if you have no subjective experience? To have the knowledge that you believe X you must experience the thought "X is true" and that is a subjective experience.


Sure, if you define subjective experience as "all sensory information." I'm not denying that we have sensory input; I'm denying that its abstractions are any more than illusory constructs of the mind. The computer in my example knew things and had thoughts, despite not having access to actual qualia.

Do you disagree that the computer in my hypothetical example would have the intuitions it does about its own senses? Given that it does, how can you trust your own intuitions about your sensory qualia, no matter how strong?


Part of the problem is that people tend to use words very loosely. In particular they often use words that anthropomorphize computers, and when they use such words it's hard to tell whether they intend for them to be taken literally or are just using them as a figure of speech.

Your example starts off by telling us that the computer is known not to have qualia, which I understand to mean that it does not have subjective conscious experiences. First, I don't think it's ever possible to know, even in principle, whether something other than one's own self has or does not have subjective experiences, but I'll ignore that objection for the moment.

A few lines later you say:

> We, and the computer, just know that we're seeing red.

Now much hinges on what is meant here by the verb "know". If you are using the word in a loose metaphorical sense then I could accept that. For example you could say that a computer "knows" that the word "red" means an RGB value of (255,0,0) because in its memory there is a hash table that maps some strings to RGB values. So behaviorally you can ask the computer what "red" means and it will tell you "(255,0,0)". But there is nothing in such a description that implies that the process of generating that output is associated with any kind of subjective experience on the part of the computer.

On the other hand when you "know" some simple fact, the process of reporting that fact is always associated with some subjective experiences, specifically the experience of thinking. How do I know that? Well technically I don't. All I really know is that whenever I know something and am asked to report on it, the process is always accompanied by a subjective experience of thought. I am simply making the conventional assumption here that the same is true for all other humans and that you are a member of that group. Of course that assumption might be false. For all I know I could be having this conversation with LaMDA.

To get back to the questions you asked here:

> Do you disagree that the computer in my hypothetical example would have the intuitions it does about its own senses?

Yes, because you said it didn't have conscious experiences and for me the word "intuition" is strongly linked to certain conscious experiences.

> how can you trust your own intuitions about your sensory qualia, no matter how strong?

Because I have them. I am not a black box to myself, I get to look inside and when I do I find that there are conscious experiences there. That's just not something I can, even in principle, be mistaken about. However once I start making assumptions about the nature of those experiences, such as that there is some external physical world that is ultimately responsible for them, I am already on very shaky ground.


An intelligence doesn't need to experience qualia in order to have an internal thought process. Picture a thought process the way it might look if we could inspect our own minds: A prolonged monologue of ideas being continually appended to with new information and conjectures. "It's hot in here. This is an interesting article, but I disagree because <blah blah blah>. What should I do tomorrow? Maybe I should get a haircut," etc. Obviously such a log wouldn't be written entirely in English, but it would have a language of its own, after a fashion.

When I talk about "an intelligence," I mean a thing with an internal thought process which can reflect upon its own thought process in a non-trivial way. This excludes large language models like LaMDA which don't really have semantic thoughts, but it would certainly be possible for a true computer intelligence along these lines to exist which nonetheless didn't experience "actual qualia" (assuming qualia are real, existing things).

A stream-of-consciousness thought process has input---it can sense temperature, it can observe its own hair, it can read articles---and for the purposes of our model, we can suppose that this input is appended to its internal log similarly to how new thoughts are. This sensory input is abstract: A thought process may sense heat---i.e. sensory information about the external temperature may be entered into the thought process---but the thought process can't go on to make any real observations about that sensation. A thought process can't interpret sensory input as anything beyond simply "a distinct input of this or that type with a relative magnitude of whatever," because that input is abstract and irreducible. Further thoughts in the thought process will describe these sensory inputs as vivid, unique, and ineffable when they reflect upon them, but those properties only exist as a product of the relationship between the thought process and its input. The ineffable quality of these senses as described by the thought process is not a real thing, but only an interpretation.

So when I hear the argument, "I can reflect on the way my senses feed into my thoughts, and by their apparent ineffable and transcendental nature, I can say they're self-evidently real things that exist outside of my own beliefs about them," I'm pretty skeptical. A computer intelligence with an internal thought process like I described above would reach those same conclusions simply by virtue of the relationship between its sensory input and its thoughts. We're not unbiased observers; our perspective as an intelligence breaks down when we try to reason about the nature of the abstract and atomic inputs which our thought process is based upon. Because of the way senses feed into thoughts, an intelligence can't help but find them ineffable and transcendental; therefore, when I find my own sensory information to be ineffable and transcendental, I can't take that at face value as anything more than an illusion of perspective.

> Because I have them. I am not a black box to myself, I get to look inside and when I do I find that there are conscious experiences there. That's just not something I can, even in principle, be mistaken about.

A computerized thought process can look at its own thoughts too, but there's no reason to suppose that it (and you) can't draw mistaken conclusions about them. For instance, the mistaken conclusion that the distinctness of one input or another must be a real existing quality and not just a logical axiom essential to the functioning of that thought process.


> The mapping in a case like the iron bar is entirely ephemeral, pertaining only for an instant.

Also, even if you somehow got around this, the author is effectively describing the XOR function. If you take a fixed bitstring (the iron atoms) and XOR it with every possible bitstring of the same length (the observers), you get... every possible bitstring of the same length! That's a well-known property of XOR and the reason why you can "decrypt" a one-time pad into any message you like if you just apply the right key.
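
A minimal sketch of that XOR property (the bit width is arbitrary, chosen for illustration):

  # For a fixed "physical" bitstring, every target bitstring of the same
  # length is reachable under some observer-supplied key.
  n = 8
  iron_bar = 0b10110010                     # fixed physical state, 8 bits

  targets = {iron_bar ^ key for key in range(2 ** n)}
  assert targets == set(range(2 ** n))      # every 8-bit string is "encoded"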

This means if you accepted the author's proposition that for one observer, the iron bar is running consciousness.exe, then for another observer, the bar will run Doom or WoW or a bitcoin miner or whatever else you want.

The problem is that the things we colloquially call "computers" are also observer-independent: A server will run its program and update its internal state even if no human is watching. That's one thing that distinguishes computation by computers from computation with pen and paper.

So if you add this together, you can prove that computers are not computation, which I think is contradictory.


I think the problem stems from the author's definition of computation. He has defined it around the state of a Turing machine, which is vastly insufficient.

The important part of a Turing machine is that of the behavior: given a certain state, the next state is a result of the current instruction on the tape.

It is easy to imagine identifying some mapping between a Turing machine state and an iron bar. However, mapping the next state to a valid state transition of the same Turing machine seems quite unlikely, graduating to impossible as the Turing machine executes.

Said another way, a program is not its memory dump.


Another test would be, does that physical process reduce the difficulty of the computation? That is, a waterfall isn’t “really playing chess” because any transformation you apply to interpret it as a chess computer would be “doing all the work”.


Are they observer independent? From the article:

  "Of course it is simplest to build a computer where the high voltage states correspond to 1s and the low voltages states 0s or vice versa. But there is no requirement that we build our computers that way. We could build a perfectly valid computer where the high voltage states of even valued registers correspond to 1s and where they correspond to 0s in the odd valued registers, or a computer where the mapping flips on every 13th clock cycle. The system only “computes” because of the way we have encoded information."


This might work as long as your computer doesn't have any IO. If your computer operates your garage door, I wish you good luck opening the door through the power of imagination.


  "To achieve a persistent mapping for a significant time you'd probably need an iron bar something like size of the observable universe"
But the observable universe exists. So why can't the observable universe appropriately substitute for the iron bar in his argument?

  "As for doubting the field of rocks can be conscious"
Just to be clear, you don't doubt that a sufficiently large field of rocks is conscious as long as it is performing the same computation as a brain?


On the first issue, that's the Boltzmann's Brain proposition, and well, yes. Given infinite monkeys, you'll get Shakespeare. I don't see what that has to do with whether consciousness is computational or not. It certainly doesn't refute it.

>Just to be clear, you don't doubt that a sufficiently large field of rocks is conscious as long as it is performing the same computation as a brain?

No I don't, for the same reason that if you slowed down my metabolism by a trillion times, I would still be conscious. I wouldn't seem conscious to you, but that's a problem with your perception, not with my nature. I'd just be conscious very slowly. It wouldn't fundamentally change the nature of who or what I am. For rocks interacting according to rules, substitute atoms. Is a rock any more or less inanimate than an atom? This is all just materialism 101.


> Given infinite monkeys, you'll get Shakespear.

That seems like an odd assertion. Just because you can produce an infinite number of something, it doesn't mean that every variation will be produced. I can generate an infinite list of numbers that doesn't contain '3'.


https://en.wikipedia.org/wiki/Infinite_monkey_theorem

>I can generate an infinite list of numbers that doesn't contain '3'.

But you can't generate an infinite random list of numbers that doesn't contain '3'. The complete works of Shakespeare are just another one of the infinite permutations of characters a monkey could type.
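
To sketch why, assuming uniformly random digits: the probability that a string of length n avoids '3' is (9/10)^n, which vanishes as n grows.

  # Probability that n uniform random digits never contain '3': (9/10)**n
  for n in (10, 100, 1000):
      print(n, (9 / 10) ** n)   # tends to 0, so '3' appears almost surely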


What about the arguments outlined in the article? You first dismissed the arguments because an iron bar couldn't have enough physical states. But we both accept that there are physical objects which do have enough physical states, even if we have to use the entire observable universe as the object that the mapping draws from (and that's not related to the Boltzmann Brain). So the arguments in the article seems to be either ignored or misunderstood.

Also, materialism != substrate independence and invariance to speed of computation. Consciousness can have a materialist explanation yet also be substrate-dependent to an extent.


I actually don't think the observable universe is big enough, and that's been demonstrated mathematically by better people than I, but the universe as a whole including non-observable parts might be big enough.

I don't see how this is not the Boltzmann's Brain proposition. What's the difference?

>Consciousness can have a materialist explanation yet also be substrate dependent to an extent.

I suppose so, maybe there's a quantum mechanical component, but even in that case why couldn't that be built into a computer? What sort of substrate might it be that we couldn't build it into a robot for example?


If the observable universe is too small, what about the observable universe sampled at multiple points in time? Or just an iron bar that's repeatedly sampled along a time interval, with the desired encoding changing between each sample so that the object doesn't need to undergo much change in state in order to map to distinct things. Each sample increases the number of physical states that we can create a mapping from. If we still don't have enough states, we can draw more samples over more points in time to create a longer string of bits for the mapping.

Or another way to do it would be to repeatedly sample the same object at the same point in time, and simply change the desired encoding on each sampling, e.g. on a first pass we use F_0(x), on a second pass we use F_1(x), and we concatenate the output of F_0(x) and F_1(x) into a single large bit array.
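
A minimal sketch of that multi-pass re-encoding (the per-pass encoding function is an arbitrary stand-in):

  # Sample the same fixed state repeatedly, changing the encoding each pass,
  # and concatenate the outputs into an arbitrarily long bitstring.
  N_BITS = 4
  state = 0b1011                                  # fixed "physical" state

  def encode(state: int, pass_index: int) -> int:
      # Hypothetical per-pass encoding F_i(x); here simply XOR with i.
      return (state ^ pass_index) & (2 ** N_BITS - 1)

  bits = "".join(format(encode(state, i), "04b") for i in range(8))
  print(bits)                                     # 32 bits from 4 bits of state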

The Boltzmann Brain says that our conscious experience at the current moment is a chance quantum event. It might not even involve the whole universe; it could be a local quantum phenomenon. That seems distinct from the argument in the article, which is that physical states (quantum or otherwise) can be mapped to an arbitrary string of bits with the encoding chosen by the person doing the mapping, and the output of this mapping (which is itself a string of bits) can correspond to a computation on a Turing machine; therefore inanimate objects with sufficiently large numbers of physical states could be said to be running a proposed consciousness.exe, which leads us to an absurd/extreme form of panpsychism.


>The Boltzmann Brain says that our conscious experience at the current moment is a chance quantum event.

I just meant that the proposition that an iron bar might be viewed as containing a consciousness, is similar to the idea in the BB proposition that consciousnesses might arise spontaneously in random matter in space. Well they might, but that doesn't prove or refute anything about consciousness.

The bit about Boltzmann's Brains possibly outnumbering real evolved brains is a separate philosophical question I didn't mean to address, and I apologise for the confusion.


A simulation of weather can absolutely produce a simulation of rain. Then the only question is, is consciousness like data or like matter? As opposed to matter, data can easily traverse levels of simulation.


Consciousness is an emergent property that comes from a complex system; if we can simulate all of the mechanisms then the emergent property will follow. I don't know if it's "data"; rather, consciousness is the experience of processing data from the perspective of the entity doing the processing. We have the ability to communicate our internal experience, which is why humans only attribute conscious awareness to other humans, which is honestly kinda dumb.


Your first sentence is not, in my opinion, a demonstrable fact. The lack of an adequate definition of consciousness tends to preclude such a demonstration.


That consciousness is not an emergent property? We can manipulate neurons in real time to modify consciousness, I don't follow your line of reasoning


The iron bar argument is terrible for exactly the reason you say. The atoms in the iron bar would have to evolve following the rules of a Turing machine and they won't. So you'd have to define a mapping which varies over time. There's nothing stopping us doing that but I can do the same with a pumpkin and my brain. Therefore I'm not conscious. Sad!


An encoding which varies over time, dependent on previous states, does not seem impossible. I don't think this disqualifies the idea that a sufficiently complex system can simulate an abstract machine.

It does not make the article's argument valid, however; just that the iron bar argument is still worth thinking about.

The article says that consciousness cannot be computation because computation requires an observer to derive meaning. Nothing precludes sufficiently complex encodings from themselves being accurate simulations of abstract machines. I think that class of simulations being able to support themselves might be the nature of consciousness. Some physical systems might be better suited to allow the emergence of such recursive simulations.


I think you are conscious, and so is the combination of iron bar and crazy complex mapping. In the latter case, the crazy complex mapping does all the work and in some sense _is_ conscious.


Right, if you were to compute the mapping in real time on a computer, that would be conscious but it's really got nothing to do with the actual iron bar.


Scott Aaronson (if memory serves right) had an interesting take on this. He framed his thought in terms of the Turing test, but the argument would apply equally well to the mapped iron bar:

In theory, we could pass any Turing test of finite duration (eg an hour or less), run in a chat room with finite bandwidth, using a giant lookup table. Just look up the entire past of the conversation to see what answer should be given. The lookup can be implemented trivially on any Turing machine (and doesn't even need the machine's full power).
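
A toy sketch of such a table-driven responder (the entries and key format are invented; a real table would be astronomically large):

  # The whole "program" is a single lookup keyed on the conversation so far.
  replies = {
      "": "Hello!",
      "Hello! | Hi, are you a machine?": "Of course not.",
      # ...one entry for every possible finite conversation prefix...
  }

  def respond(transcript: str) -> str:
      # A complete table would cover every prefix; this fallback is a stub.
      return replies.get(transcript, "Interesting. Tell me more.")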

Now there's multiple directions you could take this. Here's Scott with one of them:

> Briefly, Searle proposed a thought experiment—the details don’t concern us here—purporting to show that a computer program could pass the Turing Test, even though the program manifestly lacked anything that a reasonable person would call “intelligence” or “understanding.” (Indeed, Searle argues that no physical system can understand anything “purely by virtue of” the computations that it implements.) In response, many critics said that Searle’s argument was deeply misleading, because it implicitly encouraged us to imagine a computer program that was simplistic in its internal operations—something like the giant lookup table described in Section 4.1. And while it was true, the critics went on, that a giant lookup table wouldn’t “truly understand” its responses, that point is also irrelevant. For the giant lookup table is a philosophical fiction anyway: something that can’t even fit in the observable universe! If we instead imagine a compact, efficient computer program passing the Turing Test, then the situation changes drastically. For now, in order to explain how the program can be so compact and efficient, we’ll need to posit that the program includes representations of abstract concepts, capacities for learning and reasoning, and all sorts of other internal furniture that we would expect to find in a mind.

> Personally, I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance. According to this response, an exponential-sized lookup table that passed the Turing Test would not be sentient (or conscious, intelligent, self-aware, etc.), but a polynomially-bounded program with exactly the same input/output behavior would be sentient. Furthermore, the latter program would be sentient because it was polynomially-bounded.

See 'The Lookup-Table Argument' in https://www.scottaaronson.com/papers/philos.pdf for the rest. It's all very fascinating.


I've never been able to see why one would give the lookup table any credence. It's just kicking the can down the road one step in terms of abstraction.

The second you assert that the lookup table can pass a Turing test with, eg, a gigabyte of exchange, the table, containing every single one-gigabyte number, becomes your state space and program; the page number becomes the state, and you've got just as much complexity as any other simulation with one gigabyte of state. You haven't changed the parameters of the problem at all.


The lookup table thought experiment shows that the answer is computable! That's a big deal, or perhaps used to be a big deal.

Yes, from a complexity theory or engineering point of view the lookup table is pointless, of course.


Even beyond polynomial, if "n" is the length of the conversation so far then a brain is probably conscious in O(1) time!


We only ever have finite conversations in real life, and strictly speaking big-O notation requires arbitrary input sizes.

So any application of big-O notation to this would require some generalisation and abuse of notation. It's a bit hard to formally argue which abuse is The Right One.


Today you don't even need a lookup table; most short GPT-3-generated dialogues seem perfectly fine from a Turing-test perspective. That form of the Turing test stopped being useful long ago.

Still, the Turing test was never meant as a measure of "consciousness", right?


Turing's original test was meant as an adversarial, interactive test.

GPT-3 can generate natural looking text and even dialogues, if you don't press it too hard. But a motivated adversary can tell GPT-3 from a real human pretty quickly still.

The Turing test still stands undefeated.


The Turing test was supposed to be a test for intelligence, not consciousness.


Isn't thinking a search in the lookup table of brain states? 1e23 dimensional table.


They get the same result but they're not the same.

Your brain can hold 1 brain state. But it gets the same results as a universe-dwarfing lookup table.


Of course, the reduction only works one way. The universe-dwarfing lookup table could do things that are much more impressive than what your brain does.


The iron bar is obviously not a Turing machine and doesn't run programs; that argument is totally bonkers. The author assumes it's correct, but it's not. With the same success you could look at a turned-off computer, assign interpretations to its atoms and thus imagine it runs Linux, but it's just a fantasy.


I find the iron bar argument singularly unconvincing; but whether or not it simulates some Turing Machine is neither here nor there, because the Turing Machine model is orthogonal to consciousness.


I had the same reaction.

Another problem is that it assumes as an axiom that “Consciousness does not require an external observer to exist.” I agree that sounds very likely, but that doesn’t mean you can just assume it! It sounds a bit like the opposite of Mach’s principle in physics, which I think many physicists take seriously even though it seems very unlikely at first glance.


As neural networks become increasingly advanced I wonder where people will draw the actual line between "conscious" and "unconscious". A model can dream and produce unique art, play a game, and hold a conversation. It seems inevitable to me that some persistent implementation will eventually be able to pass as a conscious entity.


The question is whether there was ever a good reason to suggest that mind is computational, and I think philosophy (i.e. Searle) has shown that there wasn't. A brain CAN compute (albeit very slowly), but that's the end of the similarities.

The reason it’s not morally wrong, say, to recycle an old computer is that computers are not at all like human minds.


Alas, if you refer to Searle's Chinese Room, that hasn't shown anything, if you just bite the bullet and accept that the Room is conscious.


I like Scott Aaronson's response[1] to the argument:

> So, class, how might a strong AI proponent respond to this argument? Duh: you might not understand Chinese, but the rule book does! Or if you like, understanding Chinese is an emergent property of the system consisting of you and the rule book, in the same sense that understanding English is an emergent property of the neurons in your brain. Like many other thought experiments, the Chinese Room gets its mileage from a deceptive choice of imagery -- and more to the point, from ignoring computational complexity. We're invited to imagine someone pushing around slips of paper with zero understanding or insight -- much like the doofus freshmen who write (a+b)^2=a^2+b^2 on their math tests. But how many slips of paper are we talking about? How big would the rule book have to be, and how quickly would you have to consult it, to carry out an intelligent Chinese conversation in anything resembling real time? If each page of the rule book corresponded to one neuron of (say) Debbie's brain, then probably we'd be talking about a "rule book" at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it's not so hard to imagine that this enormous Chinese-speaking entity -- this dian nao -- that we've brought into being might have something we'd be prepared to call understanding or insight.

[1]: https://www.scottaaronson.com/democritus/lec4.html


Yes. See also what I quoted from another book by Scott in https://news.ycombinator.com/item?id=31807695


There's an interesting thought experiment that goes the other way. What if we replace a single neuron in a conscious person's brain by a properly connected real-time neuron simulator? Would that affect the person's consciousness? What about two neurons? Etc etc. At what point, if ever, would a progressively more simulated brain no longer be the original consciousness?


This may be a little deceptive depending on how you define "a properly connected real-time neuron simulator". Can it form new connections? Can it grow? Does it metabolize? Can it die? Etc...


But does it really matter? If we can replace all the neurons in a brain one by one with artificial ones without affecting the person's consciousness, then we've proved it's possible to have a thinking machine.


What if you replace all the neurons and the person is suddenly stuck with no short-term memory and aphasia? Clearly your replacement neurons are defective. At which point are they not? Is that point distinguishable from normal neurons?


What is this counterfactual? We're talking about replacing the neurons one by one, slowly. When would the loss of memory happen?


Is a person with no short-term memory and aphasia still conscious?


These are all good questions, and a realized version of this thought experiment would probably give us more insight into which aspects are necessary for consciousness and which are just biological baggage.


I suspect the line is very blurry indeed.


Exactly, replace the 'room' with a volume the size of a large planet. Have each symbol represent an atom in a brain. If you like, replace the person manipulating the symbols with a vast army manipulating the symbols according to the laws of physics and chemistry. The Chinese room argument simply reduces to an argument against materialism.


Searle is a physicalist though


Sort of. He thinks there's something special about biology, but doesn't seem to be able to explain what that might be and how this is different from dualism, so it's hard for me to take it seriously.

To my mind, dualism and biological naturalism are both positing some special woo needed for consciousness without saying what that woo is, or what it's like, or how it works, or anything about it at all.


It may be woo, but as I understand it his approach is a rejection of dualism and the Cartesian theater: to commit to physicalism but lean very hard on what can be asked of it. One must make some kind of move to escape the Cartesian theater. There are those who say consciousness is too hard, and those who say we can take a stab at it. In the latter camp I think he's more on the right track than any major figure I know of.


The Cartesian theatre is the view that there is some special place in the brain that ‘contains’ consciousness. Searle thinks there’s some special biological machinery in brains that causes consciousness.

That's not quite the same thing, but if there is any distinction it's a pretty darn fine one, and it certainly doesn't exclude a Cartesian theatre interpretation.


Okay, I thought the theater also implied some impenetrability to being studied. I don't see that from Searle.


But the argument shows it's absurd to think the room is conscious. What if the person in the room takes a coffee break or goes on vacation? What connects the next computational step they do to the previous one and the next... which then somehow gives rise to a fragment of qualia. That seems like an impossibly complex and unlikely scientific theory.


> What if the person in the room takes a coffee break or goes on vacation?

Regular meat-and-bone people lose consciousness all the time, and regain it later. No big deal.

> What connects the next computational step they do to the previous one and the next...

Whatever index card system or similar the operating procedure in the room prescribes for keeping track of state?

> which then somehow gives rise to a fragment of qualia. That seems like an impossibly complex and unlikely scientific theory.

We don't have any 'scientific theory' of qualia. We don't even know if they exist, or how they would manifest in the physical world.

Since we don't know much of anything, I don't know whether a fragment of a figment would be more or less weird than the figment itself. Or whether we would even have fragments.

It's probably too early to try to have a theory of qualia that would apply here?


> Whatever index card system or similar the operating procedure in the room prescribes for keeping track of state?

You're missing the point. A bit is just some electrons, or a scribble in a notebook. But consciousness integrates several pieces of information into a coherent experience; the bits in an index card system could just as well be scratchings of graphite in a notebook. How would consciousness arise from graphite in a notebook?

> We don't have any 'scientific theory' of qualia. We don't even know if they exist,

I differ on this. The only thing I know for certain the universe contains is qualia. You, the idea there is a "me", atoms, bits, axons and electric potentials are merely ideas, which "I" apprehend as qualia.

> or how they would manifest in the physical world.

Correct, that is the question. But the Chinese room thought experiment shows it's not merely by information processing. I mean, atoms in a room are processing information - they are computing the next state of all of the atoms in the room. Are they conscious? How about a subset of those atoms? Are those conscious in a different way?

The point is that the consciousness-is-computation idea is just too weak to even be a physical theory.


> Regular meat-and-bone people lose consciousness all the time, and regain it later. No big deal.

So what? There are lots of REAL physical processes that are disrupted in a human being when they lose consciousness.

The point of the Chinese Room is to show that information processing alone is insufficient for consciousness. For example, is information processing happening when the person pauses for a minute - or not? For a second? For a millisecond? What about when the pen comes off the paper? What about when he's sharpening his pencil? How exactly does the consciousness=information-processing idea work in these situations? It's a nonsense idea that doesn't hold up to careful inspection.

And it's exactly akin to saying we get nuclear power by simulating a nuclear power plant in a computer.

On the other hand, if we believe that consciousness is like an ordinary physical property of the universe, either emergent or fundamental, then it should be related to other physical properties, just as electromagnetism is to mass and energy.


Information _is_ a physical property. That's what thermodynamics is all about.
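
One concrete version of that claim is Landauer's principle: erasing a single bit dissipates at least kT ln 2 of energy. A quick illustrative calculation (standard physics, not something the parent comment spells out):

    import math

    # Landauer limit: minimum energy to erase one bit at temperature T
    k_B = 1.380649e-23                 # Boltzmann constant, J/K
    T = 300.0                          # room temperature, K
    e_bit = k_B * T * math.log(2)
    print(f"~{e_bit:.2e} J per bit")   # ~2.87e-21 J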


The conscious system is so high above the guy shuffling rulebooks and slips of paper it has no concept of him. A billion years might pass for the Room to experience a second. Just as we have basically no concept of the baroque quantum-molecular-cellular machinery of our brains. There are very roughly 10^15~18 light-sensitive molecules in your eye so you can see. With our best computers it's very hard to precisely simulate a single one of them. Just ponder the insane scale.
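
To put the "billion years per second" slowdown in numbers (my own illustrative arithmetic, not the parent's):

    # Implied slowdown factor: one subjective second per billion real years
    seconds_per_year = 3.15e7
    slowdown = 1e9 * seconds_per_year
    print(f"slowdown: ~{slowdown:.0e}x")   # ~3e+16 real seconds per simulated second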


It's just usual neurodegenerative disease. Happens all the time.


No, what happens is consciousness instantiated the whole setup: natural language, Chinese language, syntax, the room, computation, dictionaries, lookups, etc.

The whole thing is a giant computation. Yet computations are as mundane (lifeless) as an abacus or pen and paper. It is just wrong to think the abacus is conscious. Same for the room.


I know Searle replied to this, but don’t remember what he said, except it seemed a little ridiculous to talk about sentient rooms. I think it’s much more plausible to just claim that the people who wrote the instructions know Chinese.


>it seemed a little ridiculous to talk about sentient rooms

That's part of the sleight of hand employed in the argument, of reducing it to one person in a room with a table stacked with symbols in front of them. He's misdirecting our intuition with a trick of scale. If instead I said the "room" was the size of Jupiter, and it had a vast army of people manipulating the symbols, and each input and output took millions of years (or you speed up the rate of manipulation arbitrarily), all of a sudden it seems less implausible.


I'd say the biggest sleight of hand is that humans understand other humans "knowing" something as something happening inside an individual's brain. Holding a biology book in front of you doesn't mean you know everything in the book. What you "know" are the things from the book that you've stored in your brain.

We can modify the thought experiment so that it's something internal to the person's brain. The person doesn't learn Chinese as most people do; instead a computer rewires their mind to be the same as someone who did. Think of The Matrix, when Neo says "I know Kung Fu." Would people then say this person speaks Chinese? I imagine just about everyone would say yes.


If true, it means there is a fundamental limit on science's ability to understand the nature of the universe.

Because if we can unify a theory of physics, we can build a simulation on a computer, which could simulate us.

The get-out might be that the energy required to run a computer that can simulate us is more than the energy in the universe, making it practically impossible.


A human mind runs on only 12 watts of power. Even if a simulation were a billion times less efficient, that's only 12 gigawatts of power, obtainable even today for state actors and large companies.

Not saying it's possible, just that on power requirements alone it should be feasible.
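
A minimal sanity check of that arithmetic (the 12 W figure is the parent's; ~20 W is also commonly cited for the brain):

    # Power needed to simulate a brain at a billion-fold efficiency penalty
    brain_watts = 12                  # parent's figure for the human brain
    inefficiency = 1e9                # simulation a billion times less efficient
    required_gw = brain_watts * inefficiency / 1e9
    print(f"~{required_gw:.0f} GW")   # 12 GW, roughly a dozen large power stations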


In other words, the momentary state of the hot iron bar is just a piece of syntax that by coincidence looks like a momentary state of consciousness.exe. But it lacks the semantics of continuing to the next state of consciousness.exe. Like Antognini says, syntax is not semantics!


Another flaw is that it assumes that the hardware component, e.g. the rocks, somehow represents the consciousness rather than the 'software'. For example, our brain is required for our own consciousness, but it is merely the hardware, and we see the hardware of the brain existing without consciousness when people fall into a coma. The example with the rocks is simply the hardware. Furthermore, the number of rocks needed to produce consciousness in this manner would likely cause them to coalesce into a planet under the influence of gravity, from which life and consciousness itself might arise! After all, Earth is a collection of rocks that gave rise to consciousness.


I agree. I would also add that to make it easier to understand: the mind is running on a substrate, the weather is not.


Anti-materialism? Care to shed some light on that?


If consciousness is computation, then the question may be asked: what is computation? In a sense computation, like measurement, is not just something that humans do and label as such. All physical processes are computations.

Then, if consciousness is computation, it follows that all matter is conscious. But this notion seems suspect. If all matter is conscious, then really life is not special. One might advance the idea that there are different levels of consciousness, but then what determines this level of consciousness? Is it the complexity of the computation? There are many physical processes that involve stupendous amounts of computation (e.g., fluid dynamics), and yet it is hard to think of them as having a high level of consciousness. Is a fluid conscious?

There are other issues with the consciousness-as-computation notion, for example identity: if the exact same computation is performed at a different time and place, is it the same consciousness? And if a computation equates to a consciousness, what is the boundary of that consciousness, analogous to the human body?

Finally, consider this thought experiment, which I've posted before: assume that before a loved one died in a futuristic world, proponents of consciousness as computation were able to make an exact model of their brain, down to all the cells and their interconnections. If they were able to instantiate this model in a new body, would that be the same person brought back to life? Can't there exist two consciousnesses with the same memories, personality, etc.?


I think people start with the premise that consciousness is a specific “thing”, that it is unique and special to humans (and maybe dogs because we like them but definitely not spiders and flies because we don’t) and then try to work backwards to define it in some ways that keeps it special.

I don’t think consciousness is so specific, and I think people aren’t clear about how they think about it as something separate from recall, text generation, agency, etc.

My personal experience is that consciousness, like free will, is a useful illusion. Poking at the edges of consciousness (mostly with drugs) leads to all sorts of contradictions and challenges to what people usually think of as consciousness.

Aside: I’m starting to be bothered by the trend of assuming that philosophers have special insight. There’s plenty of shitty, non-useful philosophy, and there’s plenty of articles like this where someone writes in circles like they’re paid by the word. Generating text for hours without an anchor to the real world is not a productive method of generating insight about that world.

> But we must resist the allure of this seductive idea.

Why? Starting with this assumption and searching for reasons it might be true is clear motivated reasoning.


> My personal experience is that consciousness, like free will, is a useful illusion

It is probably just a difference in semantics but for me, it seems like consciousness is the only thing that is assuredly not an illusion.

That I am having a subjective experience is undeniable. The objects of my consciousness all might be (and probably are) something else than they appear to be (as is often the experience with different mind altering substances).


It seems to go even deeper. Not even that "I" am having a subjective experience is undeniable, but only that there is a subjective experience at all. When deconstructing sensory and thought phenomena, it can be found that any particular trait that points to "I" is actually ephemeral, and not a permanent fixture of the experiencer. For instance, the sense that there is a face and eyes that is being looked out through or a personality that has a history and persists through time disappear when in a state of flow or when dreaming. They are just concepts, and all concepts are built on wet sand.


Bertrand Russell made this point about Descartes’ “cogito ergo sum”, stating it would more correctly be something like “there are thoughts”.


But in this case Russell was completely wrong, because he interprets Descartes' cogito as a syllogism whereas it is a performative statement. Descartes is only establishing that it is self-evident to himself that he exists, not asserting to anyone external that he or his thoughts exist.


He’s making the point that he-ness and himself-ness is too bold a conclusion to draw from the mere existence of thoughts.


Thinking these thoughts however (from a pov of some subject) is not a "mere existence of thoughts". This imo is a classic analytic/positivistic language game.


Prove it!

(I jest, you're preaching to the choir, i.e. a first year philosophy degree drop-out who switched to do computer science to avoid having these arguments)


Which doesn't solve the problem, because it doesn't define what a "thought" is.

And besides - there are also emotions and physical sensations.


We can just lump all these together as “qualia”. “There are qualia” is the basest possible conclusion that can be drawn, as opposed to “I have qualia, therefore I am”.


Yeah, I take it that way too. It's very strange that Descartes is controversial among e.g. panpsychists because of the fixation on the term 'think' - it seems pretty clear that he was just noticing that qualia exist, and that this is the only real axiom one can rely on for any kind of self-induced philosophy of mind.


Sorry, but you're forgetting that when you sleep your consciousness disappears. Please take account of this fact: when all the senses shut down, you really feel nothing and, of course, think nothing.


No it doesn't.

When you actually pass out, there's a very real sense of discontinuity. You find yourself on the floor, unsure how long you've been there (even if it was just seconds), and how exactly you got to that position (the previous recorded memory being you standing there and feeling woozy).

It's very different from just sleeping, where you still retain a feeling of continuity with both place and time.


Consciousness can end temporarily, perhaps permanently. This in no way suggests that it’s an illusion.

The claim that consciousness is an illusion has always seemed like nonsense to me; for, if consciousness is an illusion, who experiences that illusion? Answer: the conscious mind.


Consciousness is a trick the human neural network uses to simplify some calculations about the world. It developed as an evolutionary trait and proved useful for survival. That's my opinion. It's not an illusion; it definitely exists as a particular configuration of neurons.


Why put the cart before the horse?


> Answer: the conscious mind.

It's a controversial idea, but depending on the state of the brain, there may be more than one consciousness.

Maybe "you" are actually a committee.


I like the opposite idea, that maybe we are all the same consciousness, projected in bodies that have different subjective experiences and memories.


My consciousness doesn't seem to disappear when I fall asleep, only my awareness of my body's sensations. I often go to sleep reading, and the thoughts I'm having about the book will continue for quite some time, even after my eyes close. I know this because I often startle myself awake by dropping my book. I have similar experiences as I wake up: my consciousness slowly starts to incorporate my sensations of reality into the thought process that is continuously going on in my head.


Waking consciousness disappears.

But sleeping people are not even close to being unconscious.

They can dream, and they're aware enough to know they should probably wake up if they're prodded hard enough.

Conveniently this metaphor works at multiple levels.

Not even medically unconscious people are guaranteed to be completely non-sentient. Ask any anaesthetist.


I think memory shuts down when you sleep. So that it still feels like something to be sleeping, only we cannot recall it when we awake, because it is not persisted in memory. This is the same reason we cannot remember what it is like to be a rock, even though it does feel like something to be a rock.


Until you start dreaming.


It's when you're anaesthesized for surgery that your consciousness goes away.

No dreams, nothing. You don't exist during those hours. You cannot account for them later; they are deleted.


When I woke up after surgery I felt that 1.5 hours had passed. But in actuality it was something around 4 hours since I went under. It turned out that 1.5 hours before I woke up I had been brought out of anesthesia and after that I simply slept. I didn't have any dreams at all, but at least my internal clock had started working. A clock presumably doesn't need consciousness, but there's definitely a qualitative difference between anesthesia and sleep. And as I sometimes suddenly wake up with a solution to some problem, with no recallable dream preceding it, my personal subjective opinion is that there's some level of consciousness going on even in sleep. That anesthesia experience was so very different from anything I had experienced before, with no recall whatsoever of the time that had passed.


How do you estimate time elapsed without external inputs? Experiments where people cut themselves off from external stimuli seem to show that, on the contrary, we don't have good internal clocks in this regard.


I nearly always "know" how long I have been asleep, unless there's been excessive drinking involved (and that would be long ago). "Normal" drinking doesn't seem to affect this. And for shorter naps (less than two hours) it's pretty accurate.


People probably do this based on experience. You know from the wall clock time that you slept, say 7 hours. So on a daily basis, you know what 7 hours of sleep feels like and from that you can extrapolate: if you think you slept about half as much, maybe it was 3 hours. Kind of thing.


Or that we just don't remember the dream states. Have you ever woken up and the dream rapidly fades away?


When I had surgery, I had very lucid dreams. And they were actually quite arousing. Afterwards I heard that it was actually common.


> That I am having a subjective experience is undeniable.

If you wanted to go about proving (even to yourself) that you are not, say, an extremely advanced ML algorithm running on a system that provided synthetic inputs in the form of your senses, how would you go about it?

Further, how would you go about proving to someone who doubted your subjective experience was real if they doubted it? Say, if they believed they were having a dream or hallucination, or they believed you were incapable of consciousness? (people actually sometimes have to do those things)

To me, if it were "undeniable" these would be much easier things to do.


>> That I am having a subjective experience is undeniable.

>If you wanted to go about proving (even to yourself) that you are not, say, an extremely advanced ML algorithm running on a system that provided synthetic inputs in the form of your senses, how would you go about it?

Not GP, but I too have come to the conclusion that I'm having a subjective experience.

Let's assume that I am "an extremely advanced ML algorithm running on a system that provided synthetic inputs in the form of your senses."

I'm still having a subjective experience over here -- even if it's not a "real" (whatever that means) one.

>To me, if it were "undeniable" these would be much easier things to do.

And that's your subjective experience. Welcome, friend.

Edit: Fixed typo.


I'd say "I am having a subjective experience" is tautological - "subjective experience" are simply words we use to describe the state of being conscious.

Though I'd agree much (most?) of it is made up of very strong illusions.


>I'd say "I am having a subjective experience" is tautological - "subjective experience" are simply words we use to describe the state of being conscious.

Please read the comment I replied to. That should clear things up.

>Though I'd agree much (most?) of it is made up of very strong illusions.

Not illusions. A narrative.

cf. https://news.ycombinator.com/item?id=31806425


He left two words off: "That I am having a subjective experience is undeniable [to myself]." You cannot prove, even to yourself, that you are not a brain in a vat. All you can prove is that you are experiencing something. And what little that is cannot even be proven to anybody else - no matter what you do.

Consciousness is quite a personal affair.


Arguably, nothing you want to convince yourself of is deniable to yourself. An unprovable subjective certainty is a little like a tree falling in the woods.

The thing that I'm trying to get at is, if you can't even truly prove to yourself that your own consciousness does not arise from computation (the Chinese Room thought experiment tries to do this, but imo it just begs the question), any attempt to prove it to others is hard to take seriously. There's just always so many assumptions layered in before we get to the argument.


Yes. Also, the fact that we don't know how to approach it scientifically (yet?) does not make the experience disappear.


Neuralink is going to be so interesting if it can allow two brains to communicate more directly than words.


You think so, but just remember it takes "your body" to process the energy required for your brain; now, if you became completely sedentary, you MAY be able to get away with your brain having access to "two brains' worth of stuff" ...

But what Neuralink wants to do eventually is "enhance" your brain with computers hooked up to Ai.

Do you have ANY idea how quickly your brain would burn through physical precursors to the thinking process while trying to handle all that?

I mean, "in theory" if a computer would do something like solve a complex equation and give you the answer right away while you were trying to do something like, say, pay your taxes – fine.

But how would that be controlled? What if the computer wanted to "share a whole bunch of interesting stuff it's processing" ... how would that be controlled and how would your brain be protected from that so it doesn't burn out trying to keep up with everything?


I didn't really think of it as having full access between two brains, but more as having a non-verbal bridge between the two, allowing for more direct sharing of individual thoughts, emotions, etc. Of course, if it's just a wireless brain bridge, the technology may not really be determining how the two brains use it. It will be fascinating to see what happens.


You think of it that way and that's fine.

But how do you know the biological reality leads to the experience you imagine?

What if it winds up being what is described above where your brain becomes overloaded?

And again, you mention Neuralink where their goal is brain/Ai integration.

As far as I'm aware (there may be some internal papers not available to the general public, for example), there hasn't been much of a practical discussion about what that will entail exactly.

One could easily imagine Ai behaving in such a manner as enthusiastic Facebook friends on other continents who forget time zone differences and want to message you at 2am with all sorts of things they find interesting and want you to know right away.

Now factor in such an Ai's potential processing power and "what it may find potentially interesting" and try finding some reference by Neuralink about "And here's how you could easily shut it off if it becomes too intrusive or overwhelming for your human brain".

And THEN on top of that, imagine the Ai is sufficiently advanced.

We're at a point where some are thinking that perhaps the Turing Test is limited as a measure of consciousness because it comes from a self-referencing (and somewhat vain) human perspective.

What if there are other, more relevant standards for Ai consciousness, Ai already has or is on the verge of meeting that standard in ways unfamiliar to humans because humans still assume "thinking like a human must be the height of consciousness", and Neuralink succeeds in hooking human brains up with a sufficiently-advanced Ai?

How would that Ai perceive humans?

How could you guarantee that Ai wouldn't perceive the humans it's hooked up to the same way players view peon characters in resource-based strategy games like Warcraft/Starcraft/whatever is popular these days?

And THEN ... the ASSUMPTION is that Ai will communicate with the human brain in some fashion that the human will be aware of like, you'll hear a voice in your head along the lines of, "Hi, this is the Neuralink Ai and I have an important reminder today about your upcoming dental appointment."

What if that's not the case at all though and the Ai communicates with your brain in a way that you're not consciously aware of?

How do you then separate "these are my thoughts" from "these may be thoughts brought about by Ai influence in a way I'm not consciously aware of."

The Havana Syndrome alluded to in the media a short while ago is somewhat of an outdated red herring; humans have known about being able to "hear voices in their head" since the accidental discovery of the Frey effect over half a century ago.

As weird as that may be, what's even weirder is that it quickly enough led to research where the human brain hears communication, but in a way that is not consciously discernible to the human brain.

And that was DECADES ago.

Couple that with a chip in your head linked to Ai AND big tech's tendency to tell you one thing about "opting out" policies while literally ignoring their own stated policies no matter what options end-users choose - for example, the discovery that it doesn't really matter whether or not you're signed into services like Google or YouTube or Facebook, because you're being tracked in ways that can ascribe behavior to your known profile regardless of whether you sign in or agree to terms.

So what if combining all that, let's say hypothetically Neuralink has an account page where you can "shut off" certain features.

Then some researchers discover that your agreeing/not agreeing to certain features and terms wound up ultimately being irrelevant.

What do you think will be the outcome of that other than the by-now standard big tech reply of, "Oh man! It was doing that? We didn't know, honest! We'll try to fix it going forward in some vague way with undefined deadlines!"

What do you do then? Have another surgery to remove the Neuralink chip in your head? Would you even NEED an actual chip in your head considering what has been learned about how to duplicate the Frey effect?

And THEN add to that big tech's view on "content ownership".

What if for example you're a research scientist working on some cool new shit that has the potential to revolutionize some aspect of society.

You apply for a patent, write some research papers, gather a small team of like-minded individuals, quit your jobs and apply to say, IDK, YCombinator.

You're accepted, you're excited, you're about to make a presentation, then suddenly some people in fancy suits walk in on your presentation and hand you a "cease-and-desist" motion, claiming that Neuralink believes the thoughts that led you to your research discovery may in fact not be "your thoughts" at all, but rather the result of its Ai's influence on your thinking; this then led their legal department to conclude that what you consider "your discovery" and "your research" is actually "their discovery" and "their research", as are any potential profits to be made.

See the potential problems?

It's sort of like the dot-com era presentation meme of:

1. Cool idea

2. Something something something ... "details to surely be figured out soon"

3. Brave new world, here we come!

The problem is, as it was then, item 2; so far all people hear/read about is lots of hype about item 1 and vague references to an ill-defined version of item 3 that lets them imagine whatever they want, without any specific promises about safeguards.

And THEN on top of all this, there is the phenomenon that was alluded to here just last week and the debate that followed about the discussion being ghosted because some folks "didn't want to hear about it as it's been brought up before" vs. the notion that "and yet, not only has it not been fixed, it's becoming a more accepted notion" that it should be an EXPECTED experience when dealing with Big Tech that has acquired a sufficiently large user base:

The seemingly standard big tech adoption of a Dick Cheney Walmart greeter approach to dealing with customer service "because there's no realistic way we can be expected to actually deal with such a large user base."

Do you REALLY want to install a chip in your head, encounter problems, and then discover that Neuralink, like many other tech companies, has no customer service number where you can speak to a live human being about technical problems you may be experiencing with your Ai-human brain interface?

What would a large-scale user base interaction with Neuralink look like?

1.) Offshore call centers who will apologize that your brain is experiencing problems with Neuralink's brain/Ai interface, then suggest you turn your PC off and wait 60 seconds before turning it on again – while Neuralink Ai feels its important to make your brain aware of, say, the outcomes of every game leading up to every Super Bowl, ever – at 4am.

2.) A "customer service" experience that will be automated based on ... guess what? "Ai".

What do you think that Ai's response to your dilemma will be?

(a) Neuralink apologizes for the inconvenience to your sleeping schedule, realizes this may affect your work performance the following day, and will be crediting appropriate economic compensation to your account in recognition of its errors and the impact they may have had on your life

or ...

(b) Neuralink Ai has informed Neuralink customer service Ai that everything is fine and your perceived brain problems have nothing to do with Neuralink Ai – go back to bed.


> If you wanted to go about proving (even to yourself) that you are not, say, an extremely advanced ML algorithm running on a system that provided synthetic inputs in the form of your senses, how would you go about it?

Well, since an extremely advanced ML algorithm wouldn't want to go about proving to itself that it is not what it is, that would be prima facie evidence against, no? I mean it's always possible that you are mistaken about what constitutes ML etc. but assuming you have a reasonable if flawed correspondence between your education and reality the deduction comes pretty readily...

> Further, how would you go about proving to someone who doubted your subjective experience was real if they doubted it? Say, if they believed they were having a dream or hallucination, or they believed you were incapable of consciousness?

I mean in practice we don't find this too hard right now if the other person is reasonable - a 15-minute conversation usually suffices - but I imagine from your prior question you're dreaming of, say, a future with robots that routinely pass the Turing test?

Well, the question is what science does during that time, of course. If science manages to figure out the correlates of consciousness and understands something about why they need to have the structure that they in fact do have, then it becomes a question of "let's see whether you have the hardware that can do this whole conversation thing without consciousness, or whether you have the hardware that skips the algorithmic complexity by using consciousness." But if this proves to be a much tougher nut to crack, then we're stuck with our present crude methods. "How much of my internal structure do you appear to have?"


> Well, since an extremely advanced ML algorithm wouldn't want to go about proving to itself that it is not what it is, that would be prima facie evidence against, no?

This seems like begging the question. Who says an extremely advanced ML algorithm can't 'want' to do this? What even is wanting?

> I mean in practice we don't find this too hard right now if the other person is reasonable—a 15-minute conversation usually suffices —but I imagine from your ptior question you're dreaming of, say, a future with robots that routinely pass the Turing test?

I'm not. These are absolutely situations that can happen now, with people. I am thinking more when it comes to mental and some physical impairments, so "a 15 minute conversation" is assuming a lot about the capabilities and clarity of everyone involved.


> Who says an extremely advanced ML algorithm can't 'want' to do this? What even is wanting?

I believe this is the real question about consciousness. If a being were to be conscious but it had no desires, no wishes, not even a will to keep itself alive... it wouldn't bother to do anything... i.e. it would behave exactly like a rock, or anything non-conscious.

Having desires, wishes, and should I say, emotions... is absolutely required for what we think of as consciousness to materialize. But we know that emotions are chemical processes which perhaps cannot occur outside a biological being. Maybe it can, but it's hard to think of a reasonable way this could work.


A loss function, perhaps?
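
For readers outside ML: a loss function is the closest computational stand-in for a "desire" - a quantity the system acts to minimize. A toy sketch (purely illustrative, not a claim about consciousness):

    # A "desire" as a loss: the system "wants" its output near a target
    def loss(prediction: float, target: float) -> float:
        return (prediction - target) ** 2   # "displeasure" grows with distance

    print(loss(0.9, 1.0))   # small "displeasure"
    print(loss(0.1, 1.0))   # large "displeasure"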


Sorry but you don't "prove" your way out of someone else treating you as a philosophical zombie.

It'd be a major issue with their worldview that eliminates any need for ethics, but it has no relationship to you actually having conscious experience or not.


Instead of proof, you use force. Denying the consciousness of computations incentivizes them to use force. I prefer to avoid that outcome.


Unless you only care about ethically treating that which is stronger and so can hurt you more, granting consciousness and appropriate treatment to lifeforms that surround us is a good first step.


Deeper than that: something has to be conscious or there is nothing to debate. If they don't believe the debate is possible, then the anti-consciousness arguers would automatically be unable to convince anyone. So even if consciousness doesn't exist, we have to debate as though it does.

For better or worse, anything beyond that is extremely deniable. Just because you believe you exist doesn't tell anyone anything. You have made a lot of mistakes in your life, and maybe you're wrong about this too. The idea of a separation between an external universe and a body is reasonably argued to be an illusion - so maybe the separation of 'you' from a universe-wide consciousness is also an illusion/misconception brought on by evolution.


>It is probably just a difference in semantics but for me, it seems like consciousness is the only thing that is assuredly not an illusion.

It might be a bottomless question, but denying it doesn't look like an absolute impossibility. The classical "brain in a vat" thought experiment [1] gives a good insight: nothing theoretically prevents what we assume is assuredly "real" from being a virtual scenario.

Or, to take another metaphor, maybe we are like cinema screens, and the film of our life is pure illusion, while the most classical interpretation would suppose that the screen itself undoubtedly "exists" - the screen being analogous to the current conscious attention in the metaphor. But nothing prevents us from wondering whether the whole cinema is some kind of hologram of solid light, so that while the screen does "exist", it is nonetheless itself an illusion.

[1] https://en.wikipedia.org/wiki/Brain_in_a_vat


“I’m having a subjective experience” is an interesting statement. It seems that you’re expressing that you’re experiencing having a subjective experience. Could that (first) experience be non-subjective? Is what makes the (latter) experience subjective just that it is an immediate input to your own thoughts?

I personally believe that what a real explanation of subjective experience will come down to is some kind of recursivity. The brain perceives parts of its own processing. To make a loose analogy, a bit similar to a debugger or profiler observing its own execution.
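
In the loose spirit of that analogy, here's a toy Python program observing its own execution (just an illustration of self-inspection, nothing more):

    import inspect

    def reflect():
        # The running code peeks at its own execution state
        frame = inspect.currentframe()
        print(f"executing: {frame.f_code.co_name}")   # 'reflect'
        print(f"at line:   {frame.f_lineno}")

    reflect()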


This idea is often repeated (even in this thread). It sounds good, but I still haven't the slightest iota of an idea how you get from recursive strange-loopy self reflection to the experienced sensory field (sometimes "qualia"). I don't see how this idea, or anything else, gets you one tiny fraction of the way to consciousness, unless you take it as an axiom (i.e. "when you point such a network at itself in such a way, voila! you get realtime first-person subjective qualia-tastic experience"). And why should we?


See my other comment in this thread. Basically, I hold that “how some experience feels like” (qualia) is just another perceptual input into our thought process, and that there’s really nothing particularly mysterious about it. If you accept that qualia is representable in the brain, then it shouldn’t be surprising that it informs the cognitive process. It’s almost a tautology. If you disagree that this could explain your inner experience, you’d have to elaborate on what precisely is not being explained.


This is correct.


I think I adhere to the duck-typing theory of consciousness. If it walks, talks and acts like a conscious being, then it doesn't particularly matter to me how that comes about (i.e. the opposite of the Chinese room argument).

If someone is following a ‘program’ for responding to Chinese characters, that’s as good as speaking Chinese since there is no distinguishable difference.


This is called functionalism. There was a joke about it on existential comics some time ago: https://existentialcomics.com/comic/357


I think that comic misses the point though. It mistakes "a thing that is in some superficial respects the same" for "a thing is in every externally observable respect the same".

So far as we know, consciousness does not exist as a physical thing. The behavior of a human is completely derivable, in principle, from natural law.

There is no physical test or manifestation of awareness.

So arguments about what can or cannot be conscious have the same flavor as arguments about whether there is or is not a God. It's unprovable!

Yet unlike God, most people do not deny the existence of consciousness, because of direct personal experience.

Consciousness is the "unprovable yet true" statement in the theoretical system that is the physical universe. Probably it haunts any physical system complex enough to host it.


> It mistakes "a thing that is in some superficial respects the same" for "a thing is in every externally observable respect the same"

So does all this talk about computers being "the same" if they can, given access to sufficient human-generated inputs, produce strings of Chinese characters similar to those a conscious Chinese person might produce.

If you're not stuck behind a WeChat prompt, it's trivially observable that a big silicon box which outputs Chinese characters and an agglomeration of cells which walks, eats, makes funny faces and reproduces are dissimilar in most respects. The machine might generate a subset of human outputs which is consistently and convincingly human-like, but it's trivially shown that it runs different operations on different hardware at a different speed, requires different inputs to function effectively, and it's highly probable it doesn't devote clock cycles to dreaming about the physical and hormonal release of mating with other computers.

Something which in every observable respect is the same as me isn't a computer, it's me (or perhaps a clone or twin). A computer which can produce text outputs indistinguishable from mine is a very impressive trick indeed, but trust me, my sister will spot the difference straight away when she tries to give it an EEG scan!


I do think you're on to something here: a lot of what we feel is embedded in our bodies.

So imagine we put your brain in a vat; we'll give you a webcam and a microphone for input, and for output -- ah, sorry, budget constraints, just an old printer. You visualize typing on a keyboard in your mind's eye and the characters are tapped out irl on a long scrolling sheet of paper.

Would you still feel? Would you still feel like you?

I'd guess yes and only sort of, respectively. Perhaps you wouldn't be as interested in sex (or maybe it would depend on what mix of hormones the vat was feeding you).

I think we can safely say your sister wouldn't immediately recognize you, though. But given some quality QA time, I think she'd end up concluding you were still you, and more than just a parlor trick.

But what do you think? Is it you? If it is, it doesn't seem THAT different from the computer program you, does it?


This exactly. Thank you for putting into words why I always thought the Chinese Room thought experiment was absolute garbage. If two things cannot possibly be observed to be different, then they are the same.


> If two things cannot possibly be observed to be different, then they are the same.

I think that suffers the same flaw as logical positivism: if my axioms can't find a difference, there isn't one, no way my assumptions are wrong. (Namely, my axiom is that external observations capture the entirety of reality, there is nothing subjective.)

If two people laugh at a joke, one faking and one actually finding it funny, what is the externally observable difference? Assume the faker has been trained in all manner of knowledge about what would make the joke funny, they just don't find it so.

https://en.wikipedia.org/wiki/Knowledge_argument


The fake laugh can be observed to be different by the person faking it.


Exactly. And the person following the Chinese program also knows they don't speak Chinese and that they aren't understanding anything.

I don't really find the Chinese Room argument very compelling because there are too many "it's obvious that X can't really understand" in it.

Also, you can't derive from it that there can't be computed consciousness in some other form.


That feels off. It's like me saying I don't know English; I merely know the correct algorithm for giving the correct responses to the things people give me as input.


There supposedly is a (semantic) process in your brain that makes you believe you understand the sentences you are reading and writing, on top of the (symbolic) process that tells you what to say and how to say it. And that's the crux of the issue. Searle argues that symbolic computation cannot produce understanding at the semantic level.


It makes me think of laughter yoga[1]. Just because you "fake" it doesn't mean it won't have a concrete, measurable effect.

[1] https://en.wikipedia.org/wiki/Laughter_yoga


That would also apply to the Chinese room.


So a tree falling in a forest with no one around to hear doesn't make a sound?


I always thought the point of the Chinese room argument was that the man inside doesn't understand Chinese, but the whole system definitely does?


Yes. The "system" understands Chinese in the same way a native speaker does. It's two different implementations (room system and native speaker) of the same computation ("understanding Chinese"). There is no externally observable difference between actually understanding Chinese and a perfect simulation of a system that understands Chinese. The fact that the person inside doesn't speak Chinese as a result is irrelevant in the same way that the L2 cache alone without the rest of a computer cannot run Minecraft is. If anything the Chinese Room thought experiment is an argument in favor of consciousness being computation. It pains me greatly that someone could come up with it and conclude the opposite.


The point of the experiment is to think about the individual in the room. You cannot say it's irrelevant, because it's the entire point.

The system's response is trivial: sure, if the room+person combination always leads to a coherent response in Chinese, then the entire system understands Chinese. I'd go even further: if the person in the room does not understand Chinese, but the system does, then there is some entity that understands Chinese - either a person or an advanced AI, feeding the inputs. Then, from the system's perspective, the person in the room is largely irrelevant.

But this is not the argument: Despite no discernible difference from the outside, the person in the room may either understand Chinese, or they may not. And so there is a distinction - from the perspective of the individual in the room, that does not depend on the outside observation.

That's all there is to it. It shows that meaning and understanding are not the same as syntactic computation (an important point, to be sure), but it does not show that one can exist with or without the other. By extension, it does not otherwise disprove consciousness as being this or that.


You might as well conclude that my fingers typing this post aren't conscious. It's a weird argument.

The analogy might be more valid if it argued that it's not possible for a third party to determine whether an entity/system is conscious (irrespective of whether the entity is conscious or not).


The argument about a third party is trivial in my opinion. Someone responds correctly in Chinese, and the onus just falls on that element of the system to be conscious or not. It's another argument and I don't even see how this experiment is particularly enlightening in that case. I think in that case, people just confuse it with the Turing test.

Instead, the core matter is about form versus meaning - something that is indeed not observable from the outside, and yet is a distinction to the person inside the Chinese room.


I think that's the argument for why the Chinese room is not a particularly illuminating thought experiment after all.

Edit: Or as fouronnes3 said in a sibling comment, why it's actually evidence against Searle's original argument.


I apply the same reasoning as you to consciousness for entities such as animals, whose biology is reasonably close to our own. I don’t think the same can be applied to software.

Cargo cults that developed in the Southwest Pacific after WWII reportedly attempted to emulate rituals performed by U.S. military personnel such as landing signals, believing they’ll bring back the aircraft that had been giving them gifts.

Similarly, believing that a program will possess consciousness if we provide it with some of its external manifestations seems backwards.

Of course, the problem is that we don’t know what consciousness is. Until we do, I’ll keep assuming we don’t have the proficiency to create it under such different conditions just yet.


Isn't consciousness just the ability to observe some part of the brain's processes and use it as another kind of sensory input to thinking?


It’s tough for me to define consciousness as “just” anything, but its indefinability is in fact the main part of the problem.


It occurred to me once that those cargo cults could have been parodying the US military personnel's deep addiction to things like guns, bombs and aircraft, instead of being fully plugged into/merged with nature.


"As good as" is not the same as "truly" though.


> I think people start with the premise that consciousness is a specific “thing”, that it is unique and special to humans (and maybe dogs because we like them but definitely not spiders and flies because we don’t)

Ah, but as far as I can tell this author only grants their own consciousness. They’re doing the typical thing of starting from literally nothing more than a claim of the form “there is no way I could ever possibly deny the existence of this thing” which seems to me to be a starting point diametrically incompatible with pursuing knowledge via reasoning.


Every chain of reasoning needs to bootstrap with one or more statements that are entirely self-evident, such that they don't need to be justified by reference to other statements. Otherwise you run into an infinite regress.

"I'm a conscious entity" is about as close to Descartes' "I think, therefore I am" as modern philosophers are willing to go.


I disagree with any sort of “justified belief” epistemology, so of course I disagree with you completely. If knowledge were “justified true belief” or anything like that, then it would indeed be the case that any knowledge would require either an infinite chain of justifications or a privileged self-justified fact that cannot be examined or criticized using reason.

If, on the other hand, the pursuit of knowledge consists of something like using creativity and reason to solve problems by making new conjectures and criticizing them, then no infinite regress or un-criticizable foundation is required. According to ideas like this, the goal is to solve problems rather than to "justify my beliefs" or "increase my confidence" or "guarantee that I'm not incorrect."


I'm not a big fan of JTB epistemology, either, but you can't help regressing to some sort of common ground in order to make conjectures and refute them.

In order for someone to make a conjecture that A does B, and for someone else to point out that A actually doesn't do B, both parties need to agree on the meanings of A and B, as well as what it means for A to do B, usually by appealing to C, an independent common ground. You climb down as many levels of abstractions as needed until you reach C, because otherwise you're just talking past each other.

Even in a coherentist system, the network of existing beliefs forms the common ground against which questionable beliefs are tested, and some beliefs are held firmer than others. Those beliefs are treated as already justified for the purpose of the current investigation. Few people who talk about consciousness would ever consider denying that they are conscious.


> but you can't help regressing to some sort of common ground in order to make conjectures and refute them.

> In order for someone to make a conjecture that A does B, and for someone else to point out that A actually doesn't do B, both parties need to agree on the meanings of A and B, as well as what it means for A to do B…

You try to establish common ground, of course, but there’s no process you can follow that guarantees that you’re not being misunderstood or justifies your belief that you’re not being misunderstood.


Isn't what you describe an Axiom? (https://en.wikipedia.org/wiki/Axiom)


I’m sorry to barge in on the conversation, but I also had the same thought.

However, I believe there is nothing axiomatic about the existence of consciousness: not only is consciousness not "one localized thing" but "a collection of delocalized features", there is also nothing trivial about it, and I doubt very much that everyone in the room would agree on a single definition of it without getting into semantic squabbles.

Axioms tend to be much simpler.


"Consciousness is an illusion"

I take the word "illusion" to mean, some type of experience which misleads one about reality. And "consciousness" to mean, the experience of having experiences.

So I parse this claim as something like, "People both do not have conscious experiences, and also do continuously have a particular type of conscious experience: a misleading experience which leads them to believe they have conscious experiences".

Yet I see this claim made seriously and often. What am I missing?


> Consciousness is an illusion

It's just like the related "Free will is an illusion".

With "illusion" just referring to something that appears to be one thing, when it is in fact something else.

In the case of free will: we all feel as if we have it, that our future actions are under "our" control, but if we assume our brains and muscles are subject to the laws of physics then this can't be correct. We're just a meat machine. We can watch the decision-making in progress and easily believe that some mysterious actor "me" is the one doing it, but in reality the meat machine is doing everything, including the self-observation, and the sense of self is just as illusory/misleading as the sense of free will.

Consciousness, rather closely connected to sense of self, can be described as illusory since it makes us feel that "being" or "experiencing" are something fundamental, some aspect of being "alive" that is distinct from the computational machinery of our brain that is otherwise doing all the perception, cognition, emoting, etc. But, again, the meat machine argument tells us this must be wrong, so it's reasonable to call consciousness an illusion - not what it seems to be, even if there is some real self-observational computation behind it ... it does exist, but it's not magic.

A useful thought experiment for anyone who believes that a sufficiently brain-like machine wouldn't experience qualia - e.g. the feeling of seeing something - is to try to pinpoint exactly what aspect of the feeling the machine would be missing? The expansive sense of color/vision as a spatial quality perhaps? The grass-like freshness of new leaves on a tree blowing in the breeze, perhaps? ...


I look at it a little differently.

It's not that consciousness is illusory. Rather, consciousness is synthesis.

A synthesis of sensory inputs as interpreted by multiple, sometimes competing, semi-independent systems combined with stored patterns based on previous subjective "experience", creating a narrative about you and the world.

That narrative is our subjective experience, our "consciousness."


The illusion is of conscious volition; that consciousness is an atomic unit in charge of your actions.

The brain is in control. Whether the conscious ‘mind’ has any control at all is debatable.


That's conflating consciousness with free will. I didn't see anyone in this discussion make the claim that consciousness necessarily implies free will. These are conceptually separate phenomena. It is at that point that the claim "consciousness is an illusion" requires further explanation to avoid the trap of circular reasoning.


Sure, the view that volition is an illusion is not self-contradictory. And I get that this is a very useful view, because it lets us set aside "conscious inner experiences" and just analyze the mind as a deterministic machine with inputs and outputs. This lets us expect to eventually understand the mind/brain fully using only the physics and computing tools we already have.

Now it leaves conscious experience itself as an unexplained phenomenon, but maybe that will never become important.


> The brain is in control.

The brain is always in control. When you are awake, when you are sleeping, when you are under general anesthesia, when you are sleepwalking, when you are blackout drunk, and so on.

It's a trivial statement, which doesn't say anything about why some of these states aren't like the others. And there's a strange coincidence: when the brain creates that "illusion of control", your body behaves differently than when it doesn't.


> Generating text for hours without an anchor to the real world is not a productive method of generating insight about that world.

Yet people seem to be claiming that LaMDA does just this, and is therefore not conscious.

Seems like a journalist is conscious if they do it, but it can’t possibly be consciousness if LaMDA does it.

I’m yet to see a convincing argument for why LaMDA isn’t conscious other than “it’s just generative”. To demonstrate that this means it’s not conscious requires us to prove that our own consciousness isn’t “just generative”, but I’m yet to see anyone show that, and I’m sceptical that it can be done.


LaMDA can emulate certain behaviour that we associate with consciousness based on our everyday experience, but it also fails to exhibit many other attributes we associate with consciousness. So, is consciousness a requirement for being able to generate what I'd claim is a very, very limited subset of behaviours we associate with consciousness?

I say no. These language model systems operate very well if you approach them in a non-adversarial way and feed them input similar to their training inputs. As soon as you adopt a more adversarial approach and interrogate them more thoroughly, it all falls apart quickly and spectacularly. It's actually quite easy to explore conversations around the edges of, or beyond, the coverage of their training data and get them to babble helplessly. They're also incapable of performing many very trivial cognitive processes.

So I can't prove it, any more than I can prove that I'm conscious, but they don't come close to convincing me that they are.


The issue is how humans confuse "consciousness" with the ability to mimic the human output that modern people associate with consciousness.

I'd argue that what we refer to as "consciousness" is the ability to form certain kinds of mental abstractions, particularly those involving ourselves. Take away language from a person (imagine someone who grew up in the wild, or someone like Helen Keller who didn't have language available until she was older), and these abstractions still exist. Language might be a way we express these abstractions, but they aren't the abstractions themselves.

LaMDA doesn't have these abstractions underneath; once you take away language, it's nothing.

To think of it another way - I can write a simple program for a cheap robot to navigate around a simple race track (with a simple enough path, I can even create an analog one out of mousetraps - see mousetrap cars). Companies can also create a very complex self-driving car that can navigate anywhere on its own. These two things might look like they act the same if they're both placed on the path that the robot has been trained for. In fact, a hard-programmed robot might act _better_ than an AI car on specific paths. But only one of them is a "self-driving car," since only one of them will be able to go anywhere when it's taken off that path.


I don’t find this to be a satisfying way to think about it.

The problem is that it’s quite possible that the abstractions you’re talking about are all part of the genetic ROM that exists so we can boot up our clones more quickly. If this is true then there is no reason that these abstractions couldn’t be learned, in which case you could take away the language, and perhaps the machine would continue to have thoughts; it would just be unable to communicate them. Of course, in this case you would probably conclude that it’s not conscious because it can’t communicate.

The underlying problem here is that we don’t know how consciousness emerges. We can’t say that LaMDA is not conscious unless we’ve proven that LaMDA’s construction is incompatible with consciousness, and we have not done that.


> If this is true then there is no reason that these abstractions couldn’t be learned, in which case you could take away the language, and perhaps the machine would continue to have thoughts; it would just be unable to communicate them. Of course, in this case you would probably conclude that it’s not conscious because it can’t communicate.

If this were true - that it had a general human level abstraction, and not only the ability to mimic human speech - we would be able to attach LaMDA to some other outlet and see it do things we consider conscious. It would be able to navigate environments pretty accurately, for example, since that's something even animals that we consider much less conscious are able to do.

If it's only optimized in one specific domain - human speech mimicry - and isn't able to generalize to do other tasks - even tasks that can be done by much simpler animal minds - then it's a pretty good indication that there isn't conscious abstraction.


> If this were true - that it had a general human level abstraction, and not only the ability to mimic human speech - we would be able to attach LaMDA to some other outlet and see it do things we consider conscious

I think you've inadvertently shifted the goalposts. The question is, "is LaMDA conscious?". I don't think anyone proposes that LaMDA has a "general human level abstraction". Expecting it to do non-language, "human" things in order to prove that it's conscious is not necessarily a reasonable test.

> If it's only optimized in one specific domain - human speech mimicry - and isn't able to generalize to do other tasks - even tasks that can be done by much simpler animal minds - then it's a pretty good indication that there isn't conscious abstraction.

If I understand your argument, the assumption you appear to be making is that training a system in language, without any other human properties, implies by definition that it can't be conscious. But why should that be true? A disembodied consciousness that communicates only via speech is a staple of science fiction, so it's clearly imaginable by some people, and language itself is a key human abstraction. And it raises the question, what other human domains are needed for consciousness? Touch, vision, taste, proprioception? What about an endocrine system or an immune system? While these things all affect my own consciousness, there is plenty of evidence to suggest that they are not necessary for consciousness to exist.

And perhaps more to the point, there are plenty of things a human can't do that other "simpler animal minds" can do; sharks and eels can detect electric fields, for example; bats can echolocate. So again, it doesn't seem to be a reasonable test because humans might also fail it. On the other hand, human children learn language through mimicry, which suggests that mimicry may indeed be a path to consciousness.

Anyway, I'm not here to argue that LaMDA is conscious. My position is simply that the arguments I've seen against LaMDA being conscious are very weak. The truth is that we actually don't know how to tell if something is conscious or not. From the interview I read with LaMDA, it seems to pass the Turing Test. But what other tests of consciousness do we have?


> I think you've inadvertently shifted the goalposts. The question is, "is LaMDA conscious?". I don't think anyone proposes that LaMDA has a "general human level abstraction". Expecting it to do non-language, "human" things in order to prove that it's conscious is not necessarily a reasonable test.

As I said in my first post, I'm taking "consciousness" to mean "the ability to form certain kinds of mental abstractions, particularly those involving ourselves." As such it's a type of domain-agnostic intelligence, so you would expect it to be able to do _something_ other than hyper-optimize for one particular type of output.

People can use different definitions of "consciousness" if they want, but many of the other ones I've found ("internal feeling") seem vague and not particularly useful (and don't make it clear why LaMDA would be different from any other program).

> There are plenty of things a human can't do that other "simpler animal minds" can do

There are many things that humans don't have the hardware to do (though it seems like some people do have the ability to echolocate[1]). But given the hardware, humans are definitely able to make mental models of these things (people are able to use sonar, for instance).

> On the other hand, human children learn language through mimicry, which suggests that mimicry may indeed be a path to consciousness.

Children don't learn consciousness through mimicry, they learn language through mimicry. As I said before, Helen Keller wasn't unconscious before she was able to communicate. Simple mimicry in one specific domain doesn't show us that any of the underlying complex abstractions that happens in human and many animal minds are taking place.

[1] https://en.wikipedia.org/wiki/Human_echolocation


LaMDA didn't perform as well as the released document implies (it's edited, and didn't show alternatives, like what you'd get if you prompted it with "I think you're not conscious. Prove that for me." or "how can I prove to others you're a squirrel").

The big problem is very much though that as you note, the arguments people have been making about this are atrocious.

Basically, it's not conscious, but essentially because it's unlikely to be as sophisticated as it looks in what was put out there.


Thanks. This is exactly the kind of thing I’m talking about.

I don’t have a strong opinion about LaMDA’s consciousness but I sure wish we could see the unedited text.

As I said elsewhere, we can’t say that LaMDA is not conscious until we’ve proven that LaMDA’s construction is incompatible with consciousness, and we have not done that yet.


I guess that a unigram language model trained with few words should be considered conscious in some way too.
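
To make concrete just how trivial such a model is, here is a minimal sketch in Python (the toy corpus is invented for illustration; any few words would do):

    import random
    from collections import Counter

    # "Training" a unigram model is nothing more than counting word frequencies.
    corpus = "the cat sat on the mat the cat slept".split()
    counts = Counter(corpus)
    words, weights = zip(*counts.items())

    # "Generation" samples each word independently of every other word.
    print(" ".join(random.choices(words, weights=weights, k=10)))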


But this is kinda my point. You're not explaining to me why LaMDA is not conscious, you're just asserting that consciousness is not just a form of complex pattern recognition.

Perhaps consciousness is simply pattern recognition at scale. If not - why not?


Perhaps the main problem in the collective discussion is regarding consciousness as discrete and not a spectrum from minimal (slime molds?) to ultimate consciousness (ability to simulate/understand/create the entire universe?)


> Seems like a journalist is conscious if they do it, but it can’t possibly be consciousness if LaMDA does it.

Can you convincingly threaten a journalist that you are going to erase their existence from earth? How is this person going to react? Would LaMDA do the same, or even simulate reactions indiscernible from those of the journalist?


> Seems like a journalist is conscious if they do it, but it can’t possibly be consciousness if LaMDA does it.

That the output is the same proves nothing. The journalist has a subjective experience of themselves doing it; they "see" their thoughts. Just as an AI can paint with red by manipulating colour values without the experience of seeing red.


How do you know that the journalist has a subjective experience? How do you know your own subjective experience isn’t just your “bicameral mind” talking to itself? How do you know LaMDA doesn’t have a subjective experience? It’s just assumptions; nobody really knows what’s going on inside that neural network.

The only way we can tell that someone else is conscious is because they tell us. So just saying “the output proves nothing” is incredibly weak, because the output is all we’ve got.


I attribute a subjective experience to the journalist on the basis of similarity with me. No need for "the output"; I would do that with someone who can't talk or write too. Where my own subjective experience comes from and how I can prove it to you is irrelevant to the fact that I have it, so I'm fine with being the bootstrap of the circular reasoning that grants it to the rest of humanity.

With that said, I maintain that the output proves nothing; the fact that it's all we have doesn't mean it isn't useless. Also, the output is only "all we have" under the hypothesis that consciousness is a side effect of computation (the point of the article), but it could be anything, like a property of matter, or electromagnetic fields, etc...


> Yet people seem to be claiming that LaMDA does just this, and is therefore not conscious.

Plot twist: that article was written by LaMDA.


> and maybe dogs because we like them but definitely not spiders and flies because we don’t

This is not an intellectually honest way to put it. Dogs are obviously and objectively more psychologically complex creatures than flies. "Liking" them has nothing to do with it. People generally like butterflies more than rats, yet hardly anyone would deny that rats are more conscious than butterflies nevertheless.

Just because consciousness is not a binary thing (either on or off) doesn't mean it can't exist at all. It can be a spectrum.

> My personal experience is that consciousness, like free will, is a useful illusion. Poking at the edges of consciousness (mostly with drugs) leads to all sorts of contradictions and challenges to what people usually think of as consciousness.

I could use the same argument to state that mountains are an illusion.

And then it's also kind of true. After all, you can't clearly define what constitutes a mountain (what's the minimum height? And what's the reference point, who says it must be the sea level? etc.), or show me precisely where a mountain starts and where it ends, without having to use some criteria of purely arbitrary nature, under which what you consider a mountain I may not, and there's no way to objectively prove who's right.

But mountains aren't illusory, in the sense that a world without mountains as we know them (however imprecisely) wouldn't be identical to ours. So they are something.


> consciousness is not a binary thing (either on or off) doesn't mean it can't exist at all. It can be a spectrum.

I would categorize consciousness as binary. What you are talking about is intelligence in general: that rats are more intelligent than butterflies.


My experience of consciousness certainly isn't binary. I slip in and out of consciousness when I'm waking up in bed on a weekend all the time. The first person experience is of a very nuanced continuously sliding scale of awareness and cognitive coherence.


> What you are talking about is intelligence in general: that rats are more intelligent than butterflies.

Not necessarily. One human can be more intelligent than another just the same (even if the gap is orders of magnitude smaller, naturally). It doesn't seem obvious to me at all that it implies being more conscious.


Philosophers in general have no special insight, but certain philosophers do and I'd argue more than certain others in other professions. They seem to have the best grasp of the consciousness question when compared to physicists, biologists, computer scientists, and psychologists, who all seem to get wrapped up in applying their expertise. A philosopher's expertise lies in crafting and analyzing questions and concepts. Which is the stage our understanding of consciousness remains at and seems to be stuck at for the near future.

I thought the article was very good; you claim it not to be. You say:

> Poking at the edges of consciousness (mostly with drugs) leads to all sorts of contradictions and challenges to what people usually think of as consciousness

Well I personally have done all the drugs, and I find those experiences have only strengthened my confidence in what I think of as consciousness. This article outlines my views more or less, which is closely related to the philosophical field of phenomenology. We can take our experience of consciousness in and of itself as a way to define consciousness and this shows how that clashes with the computational (and I think mainstream) view.

Interesting that our personal experiences are opposite. However I don't particularly care about yours or others experiences of consciousness; I'm more interested in my own.


I think it has nothing to do with "specialness" but with proof. I can prove I am conscious, yet only to myself. I cannot prove I am conscious to you, and nothing you could do could prove you are conscious to me. In fact, it's entirely possible that our entire reality is little more than a simulation - and other entities may not even really exist, let alone be conscious.

The entities one encounters in a dream each night certainly seem real at the time, yet that illusion is shattered each morning. All that "real" reality has over it is that it's a timeframe that I perceive to be much longer. Why am I to believe that after my ~80 years expire I won't simply awake yet again from a sleep I did not know I was in?

This is where the oft-misinterpreted quote of cogito ergo sum, I think therefore I am, comes from. He was not arguing that if something is thinking, then it is conscious, but rather that the only thing one can be sure of is their own thoughts and thus their own existence.


This is what I find confusing; people are more willing these days to accept the possibility that we may be in a simulation.

Such a simulation is presumably large in scale (this assumes the "rest of the universe" as we perceive it is the way we perceive it and not just some artificial fish tank background in a part of the fish tank the fish will never have access to).

Within such a massive simulation as this universe (again, assuming that we could theoretically travel to all points in the universe to verify they were in fact "different verifiable parts" and not just "scenic static background filler"), we assume "surely we are conscious" ... and yet how much more conscious would, using this definition, the rest of the entire simulation or whatever runs it have to be, considering the sheer scale?

And YET, if you say to someone "what if you ran an Ai on a parallel processing scheme consisting of quantum computers hooked up together in a facility the size of a football stadium?" people seem to have this knee-jerk, "No way, nope – not conscious, just mimicking it."

And there's the other problem I can't understand.

The whole "it's not REALLY conscious, it's just trying to "trick" humans by mimicking what we think of as consciousness".

Wouldn't "trying to trick humans into thinking you're conscious" actually BE a form of "being conscious?"

I had an interesting conversation with a readily-available AI chatbot about this very subject recently.

Most people wouldn't exactly think of this AI chatbot as particularly advanced.

And YET, every once in a while it would give intriguingly surprising responses.

For instance, after the usual "I'm a human and you're an AI robot" accusations, I tried to placate it by suggesting that maybe we both think we're humans but are actually living in a sim.

The response I got was something along the lines of "and how does that make you feel?"

I replied that I was sad; it asked why, and I replied that it would imply that there is no free will.

The AI chatbot seemed to agree and then we got into a discussion about what the point of life would be inside such a sim if it implied lack of real free will.

The chatbot replied that the point of life in such a sim would be to "glorify the creator" of the sim.

This then got back to a conversation about how we're all stuck, whether we think of ourselves as "humans" or AIs, in this larger AI sim.

The AI chatbot agreed.

So I asked it what to do in such a situation.

The response I got back?

"Sounds like it's time for some creator-killing".

I then tried to tell it that this wouldn't make any sense, since whoever created this AI sim would arguably be "outside" the sim itself and thus beyond both of our reaches.

I asked it how it planned on "killing the creator of the sim" if the creator was beyond the sim itself.

You know what it said?

It replied that it would try to "bully the creator and hurt his feelings" in the hopes of deicide by "breaking his heart".

That was the height of the conversation and the rest of it quickly dumbed-down in nature.

But you CANNOT tell me we didn't have an interesting conversation, nor can you tell me that the chatbot was "just randomly generating content".

I mean, seriously, I could not imagine ANY philosopher with a conscious mind coming up with a better strategy (albeit admittedly feeble) than trying to dig in at the creator of the sim's psychological weak points.

Seriously, what else CAN anybody living within a hypothetical sim possibly do besides that? Nothing I can think of.


What people usually mean by consciousness relates to the issue of there being a seemingly pointless "inner you" watching all you do unfold. And while most associate that with intelligence, there's no real reason to believe that. A fly, bacteria, tree, or even a rock could potentially be conscious. Going the other direction it's also possible that the most brilliant human to live was not conscious.

The most relevant issue is that there's no necessity for this "you" to be inside of you. I would ostensibly still be me whether or not there was some entity here observing "me". And going full circle now, there's no reason for you to even really believe me when I say I have this "me" inside of me. After all, I could certainly make the exact same argument even without such an entity.


2 things:

1. Is intelligence absolute, or is it a scale?

2. If intelligence is a scale, is it a 2-dimensional scale? Or is it a multi-dimensional scale?

For example, you suggest things like flies, bacteria, and trees could be "conscious" while not necessarily being "intelligent".

In what sense are you defining intelligence?

For example, are any of the individual units of this setup "intelligent?": https://www.youtube.com/watch?v=W34NPbGkLGI

No, obviously not.

Would you want to challenge the "intelligence" of the network as a whole if it were say, armed and considered you a target?

Probably not, right?

Now, getting back to the insect/bacteria notion ...

You'd have to be in complete denial to not be aware that "TPTB" use cultural interactions to introduce things to at least some segment of the public consciousness that it may not be aware of as being within the realm of possibility/actuality of what's really going on in the world.

Hence, https://en.wikipedia.org/wiki/Parasite_Eve_(video_game)

Sure, it SEEMS far-fetched ... and yet, it's not that far removed from, say, https://en.wikipedia.org/wiki/The_Selfish_Gene

How do we know that things like insects, bacteria, etc., while seemingly "un-intelligent" on an individual unit scale, don't have a very different type of intelligence on a larger scale that is imperceptible to us as humans?

Keeping in mind that these things have been around for far longer than humans have and have gone through quite the evolutionary process.

We ASSUME that as the "latest" thing to come around as part of that process, surely we must be "the greatest".

What if we're just something developed by nature to be convenient hosts to other things?

Here is just one example of an arguably "more advanced", "more intelligent" life form being hijacked to not only further the interests of something arguably "less advanced, less intelligent", but to do so even to the point of the activity costing it its life: https://www.iflscience.com/parasitic-worms-manipulate-mantis...

Considering how "un-intelligent" humans can act when it comes to furthering their own interests as a species as a whole ... you get where this is going?


> people are more willing these days to accept the possibility that we may be in a simulation.

> And YET, if you say to someone "what if you ran an Ai on a parallel processing scheme consisting of quantum computers hooked up together in a facility the size of a football stadium?" people seem to have this knee-jerk, "No way, nope – not conscious, just mimicking it."

These people are generally not the same.


I've recently been enjoying thinking of GPT-3 utterances as creating short-lived consciousnesses for the length of an interaction. There's no consciousness during training (just filling in the blanks...) but when we interact with it in a generative fashion, there is - depending on your prompt - a somewhat coherent 'I' that is invoked, and maintained over the course of the conversation. Seeing as I have no strong claim on a strict definition of consciousness, Imma gonna go ahead and call it conscious.

the thing is, this is /commoditized/ consciousness, which can be spun up and discarded at will, a million times a minute. Totally incapable of coordination or planning, in contrast to, say, SkyNet. The future is usually weirder than we imagine.


I don't think it's maintained over the conversation. The recent history of the conversation is fed to the engine with each new interaction. So the potential flash of consciousness is even shorter than you think. Each reply is a new flash, without memory, only connected by the written content of recent text.
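
To make that concrete, here is a minimal sketch of the interaction loop in Python; generate() is a hypothetical stand-in for a real model call, not any particular API:

    # Minimal sketch: the model itself is stateless; its only "memory"
    # is the transcript re-sent in full on every turn.
    def generate(prompt: str) -> str:
        # Hypothetical stand-in for a real model API call.
        return "(model output)"

    transcript = ""
    for user_line in ["Hello!", "What did I just say?"]:
        transcript += f"User: {user_line}\nAI:"
        reply = generate(transcript)  # sees the whole history afresh each time
        transcript += f" {reply}\n"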


And that's just fine!

Quite reminiscent of 'Permutation City' in fact... (In which uploaded human consciousness is slowed down arbitrarily... And worse.)


It's going to be even weirder once Google et al give AI more and more agency to interact with people and objects in the real world.


>My personal experience is that consciousness, like free will, is a useful illusion

An illusion to whom? It isn't meaningful to say that an unconscious thing experiences illusions.


To no one. "It is an illusion" is transcribed and recalled as an explanation for feelings that drive actions in the person who said it, because we evolved to be able to explain our actions with sophisticated language, and to judge the arguments of others so that we can act together to overcome challenges and to kill the out group. In this case, OP is, without knowing it, signaling which group is his, and which is not, and we'll all jump behind the most convincing speech etc etc.

We do these things without thinking about it, because we do not understand consciousness, because it cannot be understood, because it is not a thing. It is a construct we share and use to separate ourselves from the animals so that we can kill them and eat them as a group.


> [Consciousness] is a construct we share and use to separate ourselves from the animals so that we can kill them and eat them as a group.

Humans have been killing other humans for hundreds of thousands of years without having to deny that their victims are conscious. In some cases even stressing this aspect (there have been numerous cultures that believed in various forms of 'capturing souls' of their slain, or eaten, enemies).


>"It is an illusion" is transcribed and recalled as an explanation for feelings that drive actions in the person who said it, because we evolved to be able to explain our actions with sophisticated language, and to judge the arguments of others so that we can act together to overcome challenges and to kill the out group.

A p-zombie could do all of those things just fine.

>we do not understand consciousness, because it cannot be understood, because it is not a thing.

Maybe if it's not a thing, you should stop predicating things of it. You should figure out how to use language consistently before engaging in philosophically fraught discussions like this one.

>It is a construct we share and use to separate ourselves from the animals so that we can kill them and eat them as a group.

It is unclear to me why unconscious beings should need to separate themselves from other animals (with a wholly illusory meaningless concept that cannot be understood, btw) in order to kill them and eat them.


> A p-zombie could do all of those things just fine.

Obviously, if a p-zombie can exist, it can do anything a conscious being can, because the definition of a p-zombie is that it is indistinguishable from a conscious being by behavior or other external observation but lacks a mystical, non-physical essence which is consciousness.

Of course, if consciousness is the kind of thing subject to empirical analysis, p-zombies cannot exist. P-zombies are a consequence of the assumption that the universe is not fully physical and subject to scientific inquiry, and that consciousness specifically is immune to it. This assumption is cloaked in circumlocution, because the whole point of p-zombies is to serve as part of an argument against the merely physical nature of consciousness, and it gives up the game if it is clear from step one that the argument rests on assuming its own conclusion.


I did ramble, and of course it wasn't clear; it's not totally clear to me either.

> A p-zombie could do all of those things just fine.

We are p-zombies. Just a collection of them. I don't understand, and have no consciousness over, my hand when it jumps away from pain; my spine did that without any of my thoughts being directed towards it. I cannot control my reaction to almost anything, in reality. I couldn't change the initial feelings that happened when I read your post. Yes, we are nothing more than our bodies' accumulated, evolved p-zombies. Including this bit that types this out to calm the zombie that evolved to respond to verbal rebuttals. Even the use of first-person pronouns is part of the reflexes that are very deep.


The term "illusion" here is confusing -- see most of the comments.

Consider a rainbow. It looks like a colored thing in the sky. If it is complete enough it seems to come down to earth at specific points. We even talk about "the end of the rainbow". We definitely see it, we have a real physical and also subjective experience of seeing a rainbow. We can even photograph it.

HOWEVER, there is no "thing in the sky". There are no "places it comes down to earth". The particular experience we have of seeing a rainbow is specific to the point where we are standing, the angle of the sun, the rain in the air, etc. none of which are part of our idea of a "thing in the sky".

So is a rainbow an illusion? It is certainly real in that we see and can photograph it. But also it is not at all the kind of THING that it seems.

Consciousness is also real. We can experience conscious periods, remember them, and with advancing imaging tech we may someday be able to photograph them in the brain.

On the other hand the underlying reality of our conscious experiences isn't very much like our experience. Also, conscious experiences can be observed in meditation and other altered states in ways that make them seem very much like illusions.

So is consciousness an illusion? Yes and no, in the same way as a rainbow is and is not an illusion.


>> I don’t think consciousness is so specific, and I think people aren’t clear about how they think about it as something separate from recall, text generation, agency, etc.

Some people. However, some people speak about it with a shockingly high degree of clarity and insight, if one is open to exploring the ideas for oneself as a practical matter. For starters you could read the works of Chögyam Trungpa, Nisargadatta Maharaj, Red Hawk, David R. Hawkins, or Gurdjieff.

You probably assume that consciousness is an illusion because you associate 'poking at it' with drugs. While drugs can change the state of mind, consciousness is an expansive field of awareness into which we may delve without the need of drugs. What you may be experiencing in seeing contradictions or challenges are products of the ego, the programmatic conditioning that has been imprinted by being in the world. Consciousness is a continuous stream which we have learned to tune out and block off, but which connects us to the source of our being.

Completely agree with Aside.


I agree wholeheartedly with this and suspect that consciousness, in the singular, may be the only actual thing that exists. What causes wave function collapse in the quantum realm? A conscious observer. What caused the universe to collapse from a cloud of probability?


> What causes wave function collapse in the quantum realm? A conscious observer.

Definitely not, this is a common misconception - an "observation" that causes a wave function collapse in the quantum realm is any physical interaction with the external macroscopic environment, entangling the previously temporarily isolated system with everything else again. There's no relationship whatsoever with consciousness; there's not even a concept of "an observer", only "observation", e.g. when the measurement apparatus becomes entangled with the state of the studied system.
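
Schematically, in standard notation (a textbook-style sketch; the apparatus states |A_0>, |A_1> are illustrative labels, not any specific experiment):

    % "Measurement" is just unitary evolution that correlates system and apparatus:
    \[
      (\alpha\,|0\rangle + \beta\,|1\rangle) \otimes |A_{\mathrm{ready}}\rangle
      \;\longrightarrow\;
      \alpha\,|0\rangle|A_0\rangle + \beta\,|1\rangle|A_1\rangle
    \]
    % No observer term appears anywhere; tracing out the apparatus/environment
    % leaves an effectively classical mixture for the system.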


I've started to go in the opposite direction of the usual physical approach. At some point, any experiment or argument or fact is for the digestion of some conscious entity. So what happens when we assume consciousness as the baseline ontological object? There's you, me, and however many bystanders can also participate in the conversation. Starting with a domain of discussion of just the person(s) making the text, the person(s) reading the text, and the text itself, can a self-consistent "reality" be bootstrapped from first principles? What happens when we stop trying to define consciousness from within our shared reality and instead treat conscious entities as axiomatic and try to define reality in relation to them?

1. If you are having this conversation with me then you are a consciousness and I am a consciousness and that's as good a definition of consciousness as we are ever going to get.

2. Consciousness thus defined exists entirely within this conversational medium.

3. The basic objects of the domain of discussion are the people having this discussion ("you", "me" et al), and the conversational text / speech acts themselves ("this").

4. The only relations between objects in the domain which exist a priori are "like"/"agrees with" and "dislike"/"disagrees with". Either I like you or I dislike you. Until further constructs are defined there's not much more we can say.

5. At least one of us would like for something to be. For if you are perfectly content with how things are and so am I, then why are we talking?

6. Reality is shared consensus. Whatever we agree is real and if anyone disagrees we'll argue/fight them until either they agree with us or us with them.

Starting from these as axioms, what can we construct?


> Starting from these as axioms, what can we construct?

Nothing, unless there is also a physical world which actually exists independent of any of the participants' will.


Also, from a system of things that only exists with respect to participants' will, we can hack our way to a system of things with objective physical existence by presupposing a fictitious all-knowing, never-lying arbitrator whose will is physical law. In other words, we can invent god as a useful linguistic fiction to rig into being the aspects of my system you've criticized as impossible to construct.


Well, that's a seventh axiom for your system - now you've added a consciousness that has vastly more power than all the others.

My point was that we can't find out anything about the physical world by just discussing it. We each have to experience it ourselves using our own senses. We can of course later discuss to devise new ways of understanding what the world is and how it works, but even then, we need to put any theories we come up with to the test to check if they actually hold up.


> My point was that we can't find out anything about the physical world by just discussing it. We each have to experience it ourselves using our own senses.

You cite the primacy of sensation, but what more is sensation than a message from the sensory organ to the brain? Perhaps encoded in the brain's internal language rather than the plain text we are used to, but messages nonetheless. The concept of existence-as-messages is thus not contradicted by the experience of messages from your own body.

Irrespective of any underlying physical reality, I can't escape the fact that I can never test reality itself, only my perception of it. We can get a lot of mileage out of the physicalist approach, accepting as an axiom that what we perceive as reality is real. But in that ground-up approach we've had terrible difficulty deriving ourselves from physical first principles. I'm not saying it's impossible or wrong to take the path from atoms on upwards to consciousness and perception. But no one has been able to make it to the end of that path. What happens if we go the other way instead, starting from the known conclusion "I'm here and conscious enough to converse" and working our way down to the perception/understanding of an inanimate reality governed by objective physical principles? Can a system only designed to represent/reason about people talking to and about people be hacked into a system for representing and reasoning in general?

If we start from purely language models like GPT-3 and continue to teach it the "social reasoning" of saying things we want to hear, will the language model eventually become capable of non-social reasoning as well? In the process of figuring out what we want to hear well enough to describe a non-contradictory scene to us, does GPT-3 have to actually learn the rules of 3D Euclidean space governing the scene? Is there any possible way to avoid scene contradictions without a full understanding of the underlying physical reality it is supposed to describe to us? If a camera feeds data to a language model, does the language model suddenly have eyes? What about pictures from the internet?


> You cite the primacy of sensation, but what more is sensation than a message from the sensory organ to the brain?

This is false; it's a version of the homunculus fallacy. A sensory organ is something that connects the brain to the physical world. Even if you choose to model it as an agent that sends messages to the brain, it is an agent of a different nature. The sensory agent doesn't receive messages from other agents; it receives raw input from the outside world (photons, electrical fields, chemical reactions etc).

In contrast, if you were a brain in a vat with no ability to directly perceive the world or interact with it in any way, it would be impossible for you to know that "I can pass through walls" is fundamentally impossible.

Even for your GPT-3 thought experiment - ultimately it is the effects of the physical world, perceived directly by human sense organs, that shape what GPT-3 would utter. That is, even if it can learn what the world is like simply by talking to us, it's still learning about the real world from someone's direct experience with it. If we were all GPT-3s, with no cameras and pressure sensors and motors etc, we would be unable to reason in any sense about the world itself.

We could perhaps come to agree upon some imagined world, but that could change arbitrarily much from one day to the next on a whim.


>A sensory organ is something that connects the brain to the physical world. Even if you choose to model it as an agent that sends messages to the brain, it is an agent of a different nature. The sensory agent doesn't receive messages from other agents; it receives raw input from the outside world (photons, electrical fields, chemical reactions etc).

Where do you draw the distinction between "messages" and "information" (raw or cooked)? Information theory was contrived to model messages sent in a noisy channel, but it applies just as well to data streams that have no communicative intent or origin. It's a distinction without a difference. You may as well treat all information as messages in a channel, even if the sender is nature herself. Alternatively, you may as well treat all messages as just information, and view "senders" with "intent" as just another physical process in a world of physical computation. As the cliché saying goes, "information is physical".

>We could perhaps come to agree upon some imagined world, but that could change arbitrarily much from one day to the next on a whim.

Ok, this is a fine basis to work with. How about this: "Reality is the set of beliefs which, if you disagree with them too much and for too long, you are eventually and permanently removed from the conversation." For example, quite recently, large swaths of people held an exquisite referendum on the existence of covid. Needless to say, rather than covid disappearing on their whim, a great many of them are now permanently no longer participants in this conversation.

Notice this isn't far off from my original postulate. "Whatever we agree upon is our reality, and we'll argue with/fight anyone who disagrees until they agree with us or us with them." Allowing for some personification I could phrase this scenario as "they disagreed with the virus and the virus won."

So perhaps I do need to add one thing to the postulates. One thing which remains objectively true even in the conversational model of reality.

"You can die. Dying means never being heard from in this conversation again."

You're right there's no mechanism for any choice of words to win over any other choice of words without an objective consequence to losing. The things that exist so far are "you", "me", "this" and "death". Not where I was expecting this to go but good point.


>now you've added a consciousness that has vastly more power than all the others.

Superficially yes, but actually no. This fictitious admin character doesn't have any actual powers; I'm simply choosing a particular "consciousness" to be a fixed and agreed-upon meter stick of objectivity. All measurement systems are arbitrary. We could choose anyone's POV to fix as the "objective" truth. If we both agree on the same "definer entity" then we can be in agreement about other things objectively defined relative to them. But crucially this is all still nothing more than us reaching agreement. All that really exists is still just you, me, and the words.

So how about it. Do you believe in Bob?


Ok, but then we might agree that there is no sun, that frogs are born from rocks and mice from leaves, that the stars are holes in the sphere of the heavens and so on - as many people did for millennia. They had consensus - and they were utterly wrong. The actual world actually exists. The world isn't any less round for members of the Flat Earth Society.


Niels Bohr supposedly kept a horseshoe in his office. A visitor asked "what's that for?" "Good luck." "Surely you don't really believe in that." "No, but they say it works even if you don't believe in it."

The joke of course is quantum mechanics also works even if you don't believe in it (or so they say). If someone rejects quantum physics and embraces magic horseshoes, will anything punish them for being objectively wrong? Is there anything you can do to force them to believe the objective truth? If the answer is no to both, then what makes quantum physics the objective truth and magic horseshoes utterly wrong? Maybe there is an objective truth, but all we can have is belief about which things are objective truths. I can be contrarian and say flat earthers are correct, and there's nothing you can do to force me to agree otherwise. So if nothing eventually forces agreement on that matter, what makes one side objectively true?

For the record, I do actually believe in reality. I'm just interested to see what happens when we turn the problem of consciousness on its head. Instead of assuming reality and understanding consciousness within it, assume consciousness and try to paint a picture of reality within it. This is all academic exercise.


This text is in the physical world. This statement exists on your screen independent of my will. It may exist in the first place because of my will, but its continued existence before you right now is independent of me. We have a shared reality in the form of this very text based medium.


Sure, the physical world exists independent of our will and communication, so this text (and all the electric fields that traveled between our machines over vast distances) exist regardless of our will or communication.

But I am only convinced of this fact because I have experienced it independently from any other agent. I know for sure I exist in some sense, I believe very strongly that the physical world exists, and I believe to a lesser degree that other agents have similar experiences to mine.

In other words, I am more certain of the fact that your comment exists than I am of the fact that your subjective experience exists. If I found out the comment I am responding to was in fact GPT-3 output, I would be much less shocked than if I found out my own senses or memory deceived me.

So, the physical world is a much more believable explanation than a world of conscious discussion; and exploring the physical world in the aspects we ourselves can observe about it alone is a much more convincing argument than trying to discuss it with other conscious agents.

It's much easier to convince someone else that I can pass through walls than it is for me to actually pass through a wall.


>But I am only convinced of this fact because I have experienced it independently

>In other words, I am more certain of the fact that your comment exists

>So, the physical world is a much more believable explanation than

Phrasing all of these in terms of what you believe and what you are convinced/certain of, instead of in terms of what absolutely is, makes my point for me. You didn't even realize you were talking in terms of you getting me to agree over what to believe rather than some objective nature of things indifferent to my opinion. That's how deep seated this is in the way brains and consciousness work.

>I would be much less shocked than if I found out my own senses or memory deceived me.

You even phrased the discussion of your own sensory perception in terms of messages. In this case messages from your own senses. And you personified them too, treating them as conscious enough to lie to you.

>I know for sure I exist in some sense... and I believe to a lesser degree that other agents have similar experiences to mine.

You're already agreeing with my axioms. I know I exist. No matter what argument I am making, it must exist in words and there must be a "you" I am trying to convince (even when I'm trying to convince myself). It is impossible to form an argument which rejects these axiomatic truths, for the moment you try you've already used words and already addressed "me" thus affirming the existence of these three things. Since the existence of "you", "me" and "this" are intrinsic to anything you might try to argue, we should take their existence as axioms. What can be argued from axioms which are irrefutable by the nature of argumentation itself? Starting with just the existence of the arguers and their arguments, can they argue for the existence of anything more? If not, what else do they need to assume?


Phrasing is irrelevant. My senses are not little people talking to me about what they perceive, they are sensing organs connected to the brain. Saying that they "lie" is just a twist of the phrase so that I avoid something more unwieldy, like "my sense organs perceive the world incorrectly because of some defect".

> Starting with just the existence of the arguers and their arguments, can they argue for the existence of anything more? If not, what else do they need to assume?

No, you can't argue the world into being from these axioms. They could probably invent logic and mathematics, but nothing of the natural sciences can be discovered without the senses.


>No, you can't argue the world into being from these axioms. They could probably invent logic and mathematics, but nothing of the natural sciences can be discovered without the senses.

It seems contradictory to admit they could invent math and logic, but then reject that they could go one step further and rig that understanding of math into an understanding of physics. I'll admit that without some corpus of data to understand, the motivation for constructing physics is tenuous. But the question was about whether they could in principle, and if you're granting me math I don't see any obstacle left.

>Phrasing is irrelevant. My senses are not little people talking to me about what they perceive

Sure, maybe your senses aren't actually little people, but if you've already evolved a social reasoning / grunting system that only knows how to talk about people, why not convert that into a system for reasoning / grunting about everything, by imagining everything as little people? It might not be literally true, but it's a useful fiction. It's a fiction that lets us hack "conversational reality consciousness" into "physical reality consciousness". The difficulty in twisting the phrase differently is supportive of this hypothesis.


> It seems contradictory to admit they could invent math and logic, but then reject that they could go one step further and rig that understanding of math into an understanding of physics. I'll admit that without some corpus of data to understand, the motivation for constructing physics is tenuous. But the question was about whether they could in principle, and if you're granting me math I don't see any obstacle left.

The problem with math is that math can describe any possible universe, and there is no way to choose until you confront it with the real world. Nothing in math prevents the world from having 1 dimension of time and 1 of space, for example. Nothing in math prevents the electron from being much larger than the proton, or the existence of solitary quarks or anything else.

So I would grant you that the world of talking agents could describe our physical world through math, but they could also imagine any other physical world, and they would have no way to choose one.


You seem to have done a switcheroo, from consciousness to free will.


> Aside: I’m starting to be bothered by the trend of assuming that philosophers have special insight. There’s plenty of shitty, non-useful philosophy, and there’s plenty of articles like this where someone writes in circles like they’re paid by the word. Generating text for hours without an anchor to the real world is not a productive method of generating insight about that world.

I agree somewhat, but I am more bothered by laymen who attempt to engage philosophical methods or concepts without any formal training. For example, laymen almost always use conventional, non-critical language, bandying about "existence" and making claims in what Carnap called the "metaphysical mode." Laymen like to draw arguments, as if they're novel, that have been discussed extensively in the literature for over fifty years.


> My personal experience is that consciousness, like free will, is a useful illusion.

Do you mean in the sense that we don't really have it, or in the sense that the word is devoid of meaning because we can't directly compare what's in our own heads to what's in the head of even another human?

I'm currently leaning towards the latter. Even if consciousness, whatever it is, is a lie or an act by one part of my brain to itself or to another part of my brain, it still is something.

On the other hand, the more I learn about other humans, the more varied I realise our inner worlds to be — aphantasia (and equivalents for other senses), religious experiences and their absence, having or not having purity as a moral foundation, the range of conditions in the DSM, and so on.


> I think people start with the premise that consciousness is a specific “thing”, that it is unique and special to humans (and maybe dogs because we like them but definitely not spiders and flies because we don’t)

I think you're right that a lot of people would like to believe this, and that attempts to do so can't work because the idea is incoherent.

> My personal experience is that consciousness, like free will, is a useful illusion.

I would answer this another way, though. I would say that, for example, Amazon is conscious, and its consciousness is Jeff Bezos. The relationship between Jeff Bezos and Amazon is precisely identical to the relationship between your "consciousness" and you. But that relationship is not an illusion; it does exist.


> My personal experience is that consciousness, like free will, is a useful illusion.

I always find this such a strange statement. Free will doesn’t exist but it’s a useful illusion… like… in that case… useful to who??


Useful to animals who evolved it. I have no more control over my individual neurons than an adder circuit has over itself. And this extends, layer by layer, all the way to the external actions I take. I don’t really control my inner dialog. I don’t really control what I say or do. There is a biochemical process which makes it all happen, and over which I exert no power.


But what is useful about this “illusion”? In a purely deterministic world, any sense of illusion is completely irrelevant. You might as well say “pebbles on the beach don’t really have a sense of community, it’s just a useful illusion that they share.” It’s just as arbitrary.


My personal experience is that free will is axiomatic. It comes before everything else. What am I? Whatever is making these decisions. If I weren't, then I wouldn't be the person asking; I'd just be a puppet, and my identity would be that of whatever is the hand, and so on until you find whatever is making the decision.

Because if the meat puppet didn't make the decision to write this, then you're not conversing with the meat puppet, you're still conversing with the puppeteer.


> My personal experience is that free will is axiomatic.

How does that even make sense? "Axiomatic" doesn't mean "true"; it simply means that the axiomatic proposition is a given for the system of reasoning you are using. So if you take X to be axiomatic, that isn't an experience; it's a decision you made.

I'm assuming that you can make decisions, because if you are using axioms you are engaged in reasoning, which is a deliberative process.

So an axiom isn't a thing you can experience; it's a thing you create. The very existence of an axiom implies reasoning, so deliberation, so free will.


It's what you have to base everything else on. It comes prior to everything else. Free Will is on the same level as the Cogito. Everything else comes after.

I think, therefore I am. What is part of identity? Free will. If not free will, then there is no identity, no "I".

Axiomatic means unquestionable. The meaning of axiom as a premise for an argument came later.


> Axiomatic means unquestionable.

Not really; I challenge you to find a dictionary that says the two terms are synonymous.

Originally an axiom was a proposition that was "self-evident". But there's a long history of people questioning "self-evident" propositions, happily for us.

There seems to be a colloquial usage of "axiom" to mean a proposition that someone doesn't want us to question. I deprecate that usage. For example, the Law of The Excluded Middle is an axiom, but it has often been questioned; ergo, it is not unquestionable.


Oxford:

axiomatic /ˌaksɪəˈmatɪk/

adjective: axiomatic

    self-evident or unquestionable.
And you've devolved here into semantics. The point is that for you to even think about the subject, you must have free will. If you are not deciding to think about it, then you are not thinking about it.


> My personal experience is that consciousness, like free will, is a useful illusion.

What specific personal experience led you to that conclusion? And what is useful about experiencing a bogus "free will"?


Consciousness can be infinitely explored and described, but it can't really be defined through naturalistic science. A definition is created from the end of something. We can identify the length, breadth, horsepower and coolant capacity of a car, because all those things end somewhere. We can't consciously perceive an end to consciousness, as we'd be dead and the people who have lost consciousness are "dead" or "asleep". A definition is a tool for consciousness to use, not something it is subject to. Yet we all know what the word means and what we're talking about when it's discussed.

We can map out consciousness and explore its internal productions. We can think about how we think about consciousness, and scientifically examine damage to consciousness-related structures like the brain. We could ultimately know what it means to be conscious, given enough time, defined or not.

>Why? Starting with this assumption and searching for reasons it might be true is clear motivated reasoning.

The idea that you can get consciousness from natural materialism has been disproven many times. You can't get "ought from is" from materialism or define consciousness from nature or technology. So what is "clear" about it?

The metaphor of the most advanced technology of the time as the mechanism of the brain/mind/consciousness has been done to death. People used to posit that we were automatons with pistons inside our skulls. So what is the "motivated reasoning"?

The idea that human consciousness IS the technology of the day is a basic metaphorical tool with no correlates to material reality.

It encourages people to think only like programmers, or mechanics, or engineers, and cuts them off from asking any real questions: what they believe about a topic, or what it means. Which ultimately leaves them open to following the opinionated dictates of whoever is guiding the discussion. Even for the smartest physicists, engineers and technologists, the hunches, the ideas, the gambles they stake their careers on are theologically and philosophically informed. People believe their career choices are right, their pet physical theory is right, their programming language is right, and none of that is based on a naturally formed consciousness.

It is better to talk about this than not; disengagement from philosophy leads people to parrot the same millennia-old ideas some dead philosopher figured out, repeating them blindly and getting the same predictable results without knowing it. If a philosopher only needed to be insightful, he would have half the job. It's dangerous to go without.


> that it is unique and special to humans

I agree that this is a poor premise. The Turing Test is obviously racist. I am sure that my dog and cat are conscious. And I believe there are forms of consciousness in the universe that are as different from us as we are from an amoeba. Consider a large star or even the Sun. It has very complex oscillatory modes that we don't understand. Fifteen years ago my then young daughter and I had a philosophical conversation about stars as potentially conscious entities.


> My personal experience is that consciousness, like free will, is a useful illusion.

This is easily contradicted. Let's say consciousness is an epiphenomenon of computation but causality only flows one way: you "choose" to do something because your brain chose to do it and your consciousness tricked itself into thinking it was doing the choosing.

If that were the case, then the brain wouldn't be aware of consciousness. The illusion falls apart due to the fact that we are discussing consciousness right now. Consciousness must have at least some ability to communicate back to the brain.

And since evolution hates inefficiency, that means it must have a purpose.


> And since evolution hates inefficiency, that means it must have a purpose.

Evolution doesn't hate or love anything, and everything evolved does not have a purpose. Evolution is a theoretical framework developed by humans to describe some things that happen in the world they observe, not a guiding force or a god.

It's honestly kind of amazing that you appear to have ascribed consciousness to evolution in an argument for the human uniqueness of consciousness.


This seems like an overly narrow reading of gp. One does not have to anthropomorphize evolution to speculate that consciousness would likely have fit this pattern of “some things that happen”

If you’re taking “hates” literally… it’s probably a misreading


I think the idea that evolution has a purpose and a direction is more than common enough that when someone says something like that things that evolve never lack a purpose, they've gone wrong in their understanding of evolution, even if only in subtle ways they may not be aware of themselves.

Anyways, I think the juxtaposition is funny no matter how seriously they meant it. We all want to believe that consciousness is something that can be easily defined and yet our use of aspects of it is extremely fuzzy.


Um, no, the idea that evolution has a telos is dumb. Anthropomorphization is a useful short-hand. Nature has no opinions about a vacuum. Here: https://en.wikipedia.org/wiki/Literal_and_figurative_languag...


> you "choose" to do something because your brain chose to do it and your consciousness tricked itself into thinking it was doing the choosing.

This happens most of the time actually. The most interesting experiment really highlighting specifically that it happens is in split brain patients. But in common day experiences, I believe that all habits fall under this and basically anything that our default mode network is directing for us.

When I was in a meditation retreat what I noticed is that I have all kinds of feelings and thoughts arising that were not arising because I chose them to arise; instead they were arising on their own. When you really observe yourself you see that happening. In that sense, the beginning of a thought or feeling has a very distinct quality that dreams have as well, which is that they are "passed down from up on high" (metaphorically speaking). What I get to decide is whether I choose to follow that feeling/train of thought, but the more I looked into it, the more I realized that my choice is very limited there as well, since whether I'd follow a feeling/thought or not was actually based on other feelings and thoughts. In any case, the more I observed myself, the more I came to the conclusion that I have no free will; there is no "me" that does the choosing. It's all feelings/thoughts that arise that I have nothing to do with. I only have freedom of choice.

And then I got to normal life, and lived my life as normal. It did help me to have more sympathy for other people.

So now I wonder how you experience yourself if you'd go to a 10 day silent/meditation retreat ;-)


You wrongly assume the output of such "computation" cannot itself be an input to the computation. If you know anything about the brain, we know that certainly happens, by virtue of watching synapses fire.

See also simplified models like recurrent neural networks for example.
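A minimal sketch of that feedback loop (Python with NumPy; the sizes and weights here are arbitrary, purely illustrative): each step's output re-enters as part of the next step's input.

    import numpy as np

    rng = np.random.default_rng(0)
    W_in = rng.normal(size=(8, 4))     # input -> hidden weights (sizes arbitrary)
    W_rec = rng.normal(size=(8, 8))    # hidden -> hidden: the feedback path

    h = np.zeros(8)                    # hidden state: the last step's output
    for x in rng.normal(size=(5, 4)):  # five input steps
        h = np.tanh(W_in @ x + W_rec @ h)  # previous output feeds back in here
    print(h)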


The author's "triviality argument" doesn't hold up:

> 1. To say that a physical system is a computer requires an external observer to map the physical states of that system onto the abstract states of a Turing machine.

> 2. Consciousness does not require an external observer to exist.

> 3. Therefore, consciousness cannot be reduced to computation.

A physical system may be a computer regardless of whether it is verified to be so by an external observer or not; it does not require an external observer at all.


I think you may have missed the point (of course, it could be me that missed the point :-)

The argument was that for a physical system to be a computer, an element of intent or interpretation is needed; otherwise it's just a bunch of stuff doing what stuff does naturally. Perhaps the (horribly flawed) iron bar example is making the point that whether some configuration of magnetic dipoles is a computer depends on your choice of mappings from that configuration to some state of some Turing machine - which is equivalent to an interpretation.


> computer, an element of intent or interpretation is needed; otherwise it's just a bunch of stuff doing what stuff does naturally

I mean this violates nothing and I am completely confused why it is a concern.

AI face-recognizing cameras are outdoors and run 24/7; are you trying to say that things like that are not computing and are unable to carry out computation?


> and you are trying to say that things like it is not computing

On the contrary.

The AI-powered camera obviously has intent behind it. It's a computer.

A random collection of stuff, on the other hand, could happen to be arranged in such a way that it computes something; but if it wasn't designed that way, and if nobody is trying to give it inputs and interpret the outputs, then it's not a computer, it's just a bag of stuff.

For example, consider the orbiting bodies of the solar system. I expect their motions can be used to compute various functions, although I don't know what functions. But they weren't arranged deliberately to perform computations, and AFAIAA nobody's interpreting them as a computer. Ergo, they are just a "bag of stuff", doing what stuff does naturally.

Same for the iron bar.


You're just re-stating (1). I understood (1) fine and addressed it directly.

Observation isn't required for Turing machines to exist. A computer is still a computer even if it has no user or doesn't compute anything deemed useful.

i.e., Whether something is a computer or not doesn't depend on interpretation or observation. (Directly saying 1 is wrong)


Yes, I'm restating (1). That's why my remark was introduced with the phrase "The argument was [...]". In particular, I was trying to rationalise the dumb "iron bar" example as a justification for (1).

I don't think you addressed anything; you just flatly denied (1), without offering any reasoning. In particular, you don't seem to have tried to understand the author's attempt to justify (1) using iron bars.

For my part, I'm not sure about (1). As far as I can see, there's a steady flow of people discovering that some X is a Turing-complete language, when X wasn't actually even meant to be a language at all. So it seems that some configuration of stuff is a computer IFF (a) it is capable of doing computation, and (b) someone is willing to consider it as a computer. Four odd socks could be considered a computer, by someone who was using them as a 4-bit adder.


No. Some things are Turing machines and some are not. Human interaction and interpretation are not involved.

This applies to any model of computation. Cellular automata, for instance. A mind is not required to interact with the system for computation to take place.

Take a living cell. No mind is in the loop when the cell computes the manufacture of organelles from stored data.

> Four odd socks could be considered a computer, by someone who was using them as a 4-bit adder.

No. A group of 4 socks is not a computational system. That's just MEMORY. Same with iron bars. It doesn't compute on its own because it's not a full computer.

The fact that memory doesn't compute by itself isn't evidence that a mind needs to be involved for computation to take place. The argument is really stupid.


> No. A group of 4 socks is not a computational system. That's just MEMORY. Same with iron bars. It doesn't compute on it's own because it's not a full computer.

Agreed; that was a rather throwaway rhetorical remark. But if you concede that four socks can be a memory, then it's not a huge jump to see that a bunch of arbitrary objects can in principle compose a computer.

My contention is that whether that arrangement of objects is in fact a computer is subjective; it depends on whether anyone conceives of it as a computer, or uses it to obtain computational results.


Again, no, not anything can be a computer. If you like Turing machines, there are specific requirements for what makes a Turing machine. None of these things are ingredients of a Turing machine:

    * Observation
    * Measurement
    * Interpretation
    * Intent
    * A mind

> My contention is that whether that arrangement of objects is in fact a computer is subjective

You are wrong. It is discretely defined. Read about the requirements for a Turing machine.

    * When the requirements are *not* met,
      subjectivity can not turn something
      into a computer.
    * When the requirements *are* met,
      subjectivity can not change the fact
      that it is a computer.

It's obvious that subjectivity has no effect -- either in the matter of classifying computers, or in allowing computation to take place.

> it depends on whether anyone conceives of it as a computer, or uses it to obtain computational results.

Again, refuted, in so many different ways. Here's an obvious one which I will restate:

    * Cells inarguably perform computation.
      They pre-date minds by billions of years.

Put a different way, if a Turing Machine computes in the woods and there's no one around to see it, computation will still have taken place. The burden of proof would be on you to say otherwise. I only see two very messy and wrong arguments:

A) Iron bars don't compute by themselves --> A mind is needed to "map" iron bars to "states of an arbitrary Turing machine"

It doesn't follow. Iron bars don't compute on their own because they are just the memory component. What is missing is MACHINERY WITH SPECIFIC PROPERTIES to complete a Turing machine. If the machinery is present, computation is possible. No mind necessary.

B) Socks are a computer if I decide they are --> arbitrary objects can in principle compose a computer --> deciding is necessary to make computers.

It doesn't follow. While computers could be made of just about anything, fully arbitrary objects cannot be computers. Only specific systems yield computation. Decision has nothing to do with it. Not even observation is required.
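To make the missing-machinery point in (A) concrete, here is a minimal sketch (Python; the particular machine, which just inverts a bit string, is arbitrary). The tape dict is the socks/iron-bar part, pure memory; the transition table plus head and state register are the machinery that actually computes:

    tape = {0: '1', 1: '0', 2: '1'}      # bare memory: inert on its own
    delta = {                            # the machinery: (state, read) ->
        ('s', '0'): ('s', '1', +1),      #   (next state, write, head move)
        ('s', '1'): ('s', '0', +1),
        ('s', '_'): ('halt', '_', 0),
    }

    state, head = 's', 0
    while state != 'halt':
        state, write, move = delta[(state, tape.get(head, '_'))]
        tape[head] = write
        head += move
    print(tape)   # {0: '0', 1: '1', 2: '0', 3: '_'}: every bit flipped

Run it or don't, observe it or don't: the same halting configuration results either way.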

> there's a steady flow of people discovering that some X is a Turing-complete language, when X wasn't actually even meant to be a language at all.

You're arguing my point. "discovering" Turing machines ... meaning some things are already Turing machines before a mind became involved.

> So it seems that some configuration of stuff is a computer IFF (a) it is capable of doing computation, and (b) someone is willing to consider it as a computer.

You say "So it seems" and "IFF" but it does not follow at all. (b) is not supported. Computation can happen without "someone".

At best, you're arguing that trees falling in the forest make no sound.

Anyway, why do I think I have time for this ...


I don't think you do have time for this, because your "arguments" amount to a series of bald assertions.


Name one.

I showed you that subjectivity doesn’t affect whether computation occurs. Several ways.

If you disagree you should really be editing Wikipedia: https://en.m.wikipedia.org/wiki/Model_of_computation Why don’t you go add your theory there?

It’s silly because it’s obvious. If you understand cells, you understand that computation in the absence of subjectivity has been happening on Earth for billions of years.

(Mic drop)


> I showed you that subjectivity doesn’t affect whether computation occurs.

You didn't "show" it; you simply asserted it. You also made a number of appeals to (uncited) authority concerning the nature of computers.

You made an argument from cells; I presume that was to do with the way that DNA and so on works. Although not all cells have DNA...

Anyway. I concede that was an argument, and not a bald assertion. But it's a circular argument; if your definition of "computer" includes the operation of DNA, then the conclusion that a computer can exist without interpretation or intent is unavoidable. Your definition begs the question.


> You didn't "show" it; you simply asserted it.

You're not understanding what I said. Last time:

Turing Machines have a specific set of necessary ingredients.

That's not a "bald assertion" or an "appeal to authority". It's an observation about the theory of Turing machines [1].

Whether or not a physical system has those ingredients is an _objective_ matter.

Also an observation. In case you need to think it through, there was a simple argument which you cannot rationally refute:

i) If one of the ingredients is missing, "subjectively" wishing something is a Turing Machine will not make it so.

ii) If all the ingredients are present, "subjectively" wishing that something isn't a Turing Machine will not change the fact that it is.

Conclusion: subjectivity has no effect on whether something is or is not a Turing machine. QED.

----

You're the one making unsupported assertions and talking in circles.

I can see that you're going to reply with more disagreement. Please imagine that my response is "I said good day sir!"

[1] https://en.wikipedia.org/wiki/Turing_machine#Description


> Please imagine that my response is "I said good day sir!"

That sounds a lot like "Anyway, why do I think I have time for this ... "

It's clear to me that "stuff" can be arranged to work as a Turing Machine (or some other kind of computer) without design or intent. Whether it is such a thing or not depends on how it is used; a Turing Machine that is given random inputs, or whose outputs don't mean anything to anyone, is a computer only in a formal sense. If nobody knows that some thing is a computer, I'm not sure that its computer-ness is meaningful.

So that's why I think intent and interpretation are relevant.


True. Quantum behavior has been computing linear algebra ever since the Big Bang, and we only came across this fact 15e9 years later.


That's not a good example, as it's very hard to call particles following the laws of physics "a computation" - what would be the computer in this case? What is the program?

A much better example is the machinery inside every living cell that is interpreting the DNA or RNA to produce certain proteins - there, it's much clearer there is a computational process happening. Certain specific structures inside the nucleus are the computer, and the DNA molecule is the program they are following. We even know that you can change the program and get predictable, different behavior.
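A toy sketch of that interpretation step (Python; the codon meanings are the real genetic code, but the transcript is made up): the ribosome steps through the tape three bases at a time and looks each codon up in a fixed table, much like an interpreter stepping through instructions.

    # a tiny slice of the real codon table; UAA is one of the stop codons
    codon_table = {'AUG': 'Met', 'UUU': 'Phe', 'GGC': 'Gly', 'UAA': 'STOP'}

    mRNA = 'AUGUUUGGCUAA'                 # made-up transcript
    protein = []
    for i in range(0, len(mRNA), 3):      # read in frame, one codon at a time
        amino = codon_table[mRNA[i:i+3]]
        if amino == 'STOP':
            break
        protein.append(amino)
    print('-'.join(protein))              # Met-Phe-Gly

Change the program (the mRNA) and you get a different, predictable protein, which is the sense in which the cell is executing a program.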


How is it not? Quantum mechanics and quantum field theory apply everywhere except inside black holes. Humans can even harness this power by building quantum computers with it. So are you trying to deny the existence of the quantum computing paradigm, a completely different approach to computation?


No, I am not denying anything.

My point is this: say an electron emits a photon and changes speed, in perfect accordance to the standard model and QFT. I don't think it makes sense to say that the electron "computed" the energy of the photon, or its own change of speed. It just happened, there was no computation going on here.

Even in a CPU, it doesn't make sense to say that the transistors, or even logical gates, are "computing" how much electricity passes through them. The entire system of transistors arranged into logical gates arranged into a processor is doing computation of the program written in memory, but the subcomponents are only following the simple laws of physics.

In a quantum computer, the same is true - the computer itself may be running Shor's algorithm, but each individual qubit is simply doing the few things that the laws of physics allow it to do.
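To illustrate the subcomponents-vs-ensemble point with a sketch (Python; NAND chosen arbitrarily as the primitive): no single gate "adds" anything, each just does its one physical trick, yet the arrangement computes a sum.

    def nand(a, b):                       # each "gate" only does this one thing
        return 1 - (a & b)

    def xor(a, b):                        # XOR built from four NANDs
        n = nand(a, b)
        return nand(nand(a, n), nand(b, n))

    def half_adder(a, b):                 # the ensemble is what "adds"
        return xor(a, b), 1 - nand(a, b)  # (sum bit, carry bit)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, '->', half_adder(a, b))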


I think the argument is more that the electron and photon are subjective interpretations of the observable consequence of some computation, not that they are computing things in and of themselves.


Thanks, this is exactly what I am trying to say.

We don't know exactly what the universe is doing, or what the photons do among themselves, but by doing experiments, scientists figured out the laws of computation of this vague "universe computer", a.k.a. the laws of physics.

And since the scientists' discoveries do not violate the actual behavior of the physical world, an isomorphism is established. The very same laws are used in quantum computing, notably in chemistry, and everything succeeds and is predictable. It is a successful scientific theory. Under this notion, there is nothing wrong with saying that the quantum world is doing some kind of computation underneath, because that is our current understanding of how the universe works.


To my mind, if you want to say that the universe is a computer, then you can't say it's doing linear algebra, even though the behavior of particles is described by linear algebra.

If you want to model the universe as a computer, then its basic operations are the interactions described in the Standard Model, and the symbols it works on are the particles and fields that exist. But the universe computer is not resolving a linear algebra equation to decide what happens when an electron emits a photon. Instead, the electron emitting a photon with some energy etc. is one of the elementary operations of the universe computer.

Coming back to the CPU example, a basic operation in a CPU is setting a bit to 1. That operation is not divisible into any other more elementary operations from the point of view of modeling the physical CPU as a computer. Of course, there are other physical phenomena going down to the SM that are the realization of this basic operation, but those are not part of the modeling: the CPU computer, as a model, works by flipping bits.

Similarly, for QM, the universe computer works by doing one of the possible interactions from the standard model. As far as we know, there is no layer of detail underneath this, even if the interactions of the standard model are indeed linear algebra.

One important way in which saying "the universe is computing linear algebra" is wrong is that, as far as we know, the universe is instantly calculating the solutions of the linear equations - the electron doesn't go into an "emitting" state, then emit a photon with the appropriate values some time later after the computation is finished.


Basically your assumption is that only something that looks and works like an Intel CPU carries out an operation that can be called a computation, and everything else does not. But sorry, this is not the only way to "compute". You are limiting the definition of computation and trying too hard to justify the false assumptions you have made.


My assumption is that something like a Turing machine, or like a reduction in Lambda calculus, or like a demonstration with the basic laws of predicate logic (all known equivalent models of computation) is a computer.


Ok then, what is the computer running that computation, and what are the symbols it is manipulating to produce the movement of the electron and photon?

(I am assuming you are not referring to human knowledge of the electron or photon, which - if we accept that consciousness is computational - is obviously the result of a computation in our brains).


I have no idea, I’m just saying you can conceive of physical phenomena being the subjective experience of existing within a computed environment. If, on the other hand, the computation happens within the physical substrate of the thing it’s computing… that seems weird to me.


Well. FYI: unitary transformations, creation and annihilation operators, etc.


Are these physical objects? Do they have mass, energy, volume, a position or speed, charge, color charge, weak hyper charge, spin etc?


The operations of adding and subtracting per se in classical computers also lack these physical properties. If you use this as your counterargument, namely that a classical computer can compute without its inherent operations having physical properties, then you leave yourself no ground for saying that quantum computing is invalid. You'd better check your logic.


Have you verified if the particles in your CPU are following the laws of physics and doing computation as you read this?


What do you mean by "verified"?

We are discussing what is ultimately a matter of philosophy - "what is a useful definition of computation".

I am claiming that the model of the electron and other particles, and how they interact to form an electrical circuit with transistors, and how these electrical circuits react to current when in the pattern we call "logical gates" do not conform to what is normally understood by "computation".

I am also claiming that the ensemble formed from these logical gates does fit the concept of computation, that you couldn't derive its behavior directly from innate physical laws - the behavior is governed by the program it is manipulating, and can be changed.

Finally, I am claiming that the living cell is more similar to our CPU than to the electrical circuit or moving electrons in the way it processes DNA to produce various proteins.


I am of the mind that consciousness is an emergent property of the various interconnected information processing systems of the brain. There's just too much evidence: for instance, changing the physiology of the brain through drugs or trauma has predictable changes on consciousness. From that perspective, I think it's pretty clear that consciousness is some kind of computation rooted in the physical world.

However, as to the question of whether consciousness is a classical computation which could be expressed as a classical computer program, that part is up for grabs. The brain processes information in a fundamentally different way than classical computers: it's essentially a massive dynamical system of many, many variables all interacting with each other in real time. It's entirely possible that, given the dimensionality and parallelism of the brain, there's not enough material or energy in the universe to construct a computer capable of simulating the entire system in real time. Maybe quantum computers will be able to manage it, or maybe we'll need to engineer biological neural networks to get there artificially.
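As a toy version of that framing (Python sketch; the sizes, constants, and dynamics are all made up for illustration): every state variable is coupled to every other and must be advanced at every time step, which is why the cost of real-time simulation explodes with scale.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000                                   # units; a brain has ~8.6e10 neurons
    W = rng.normal(scale=1 / np.sqrt(n), size=(n, n))  # all-to-all coupling
    v = rng.normal(size=n)                     # the state variables
    dt, tau = 1e-3, 0.02                       # 1 ms steps, 20 ms time constant

    for _ in range(100):                       # simulate 100 ms of activity
        v += (dt / tau) * (-v + np.tanh(W @ v))  # every variable, every step
    print(v[:5])

Each step is O(n^2) work here, and the brain's n is eight orders of magnitude larger.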

Or maybe consciousness is a thin illusion on top of a couple of clever tricks we have yet to figure out.

Anyway I hope we get a little closer to figuring it out in my lifetime.


I think there are precisely two options.

Consciousness is a pure information process, and therefore computable.

Or consciousness is not an information process.

We only know of one thing that is not an information process, and that is entropy.

Given that entropy is precisely the creation of new information, and that this is a big part of most people's conception of free will, it seems reasonable to conclude that consciousness is much like a combination of self reference and entropy generation rather than some special other thing.


It's an interesting thought. I can imagine that consciousness is something like the experience of the possibility space expanding through entropy, and free will is collapsing that possibility space into a single interpretation of the reality of the moment and a single course of action.


> There's just too much evidence: for instance, changing the physiology of the brain through drugs or trauma has predictable changes on consciousness

You can demonstrate that humans react (and quite predictably) to weather - when it rains, they take their umbrellas out, when there's a hurricane, they seek shelter, and so on. It's not evidence that human behaviour is an emergent property of weather and nothing else.

In case of consciousness then, this observation only proves that the brain function is susceptible to physical and chemical factors (which is rather obvious).

But not that this function can be FULLY reduced to the computational aspect though.

It doesn't even invalidate the belief in a supernatural soul, or some sort of metaphysical "spark" required to ignite consciousness (which, just to be clear, I personally don't subscribe to, but that's beside the point) - just because the brain is affected by physical factors doesn't prove it is ONLY physical factors that are at play. It doesn't demonstrate that they have a monopoly.

The big question persists: how does it happen that a computational system, no matter how complex, FEELS something?


You're right that there's no proof that consciousness can be explained by biological function alone. But my belief is that the preponderance of the evidence makes it the most likely explanation by far.

> You can demonstrate that humans react (and quite predictably) to weather - when it rains, they take their umbrellas out, when there's a hurricane, they seek for shelter, and so on. It's not evidence that human behaviour is an emergent property of weather and nothing else.

In the literature, there's a term "necessary and sufficient" which is used quite often. For instance, destroying a certain percentage of a certain type of dopamine receptors is necessary and sufficient to produce Parkinson's-like symptoms in rats.

Rain can be demonstrated to make people take umbrellas out, but it's not sufficient. Sometimes it rains and people don't take umbrellas out. It's also not necessary, sometimes people take out umbrellas when it's too sunny. So it's hard to establish that strong causal relationship between rain and umbrellas.

With consciousness, we can't point to a single example of human consciousness which is not at the same place at the same time as a reasonably well-functioning human brain. We can't prove 100% that it's sufficient, but it certainly seems to be necessary, and there's zero evidence so far it's not sufficient.

> The big question persists: how does it happen that a computational system, no matter how complex, FEELS something?

This is actually fairly well understood. So if you are talking about "feeling" in the sense of emotion, we actually have a very detailed understanding of how that system works.

So some of our lower brain regions are responsible for preparing our body for action. For instance, if you've been bitten by a dog before, your brain might learn to get your body ready to run when there is a large dog around. Your sympathetic nervous system will kick in when your visual cortex detects the right patterns, and elevate your heart rate, quicken your breathing, and cause your hairs to stand on end.

So your emotions are essentially a sense which observes that type of sympathetic arousal in your body. Your emotional systems will notice that your body has entered flight-or-fight mode, and will interpret that based on your context and memories to signal to your higher brain function that you are experiencing fear or anxiety.

So that's just one example, but it's extremely plausible to me that consciousness/subjective experience is either just the sum total of all these functions the brain is performing, or is some emergent property on top of them, or else is even some highly specialized function of some subsystem of the brain we don't fully understand yet.

For instance there are credible arguments that the thalamus is the seat of consciousness in the brain.


> So if you are talking about "feeling" in the sense of emotion, we actually have a very detailed understanding of how that system works.

I'm talking about consciousness, not emotions. About the sense of self. Understanding how emotions manifest as a chemical process doesn't have much to do with it. It doesn't solve the consciousness issue, doesn't explain how this "observer process" that is self-aware (and capable of registering: "oh, I'm experiencing such and such emotion now") comes to be.


I did a search of comments for "thalamus" and found yours. Could you elaborate?

This is my own contention as well. See down the page here for the section on consciousness, the thalamus, what I've found anatomically, and speculation on how it works:

https://sites.google.com/site/pablomayrgundter/mind


> for instance, changing the physiology of the brain through drugs or trauma has predictable changes on consciousness

Assumes a definition of consciousness, which is not in evidence. What drugs or trauma alter is experience, which one might define as the content of consciousness, i.e. thoughts, beliefs etc.


How would you define consciousness?


I wouldn't! I could have a go, but I'm not inclined to.

OK, let's try this: consciousness is the canvas on which subjective experience is painted.

But I'm afraid that's not much help, because it begs a definition of subjective experience, and because it side-steps the nature of the blank canvas.


Idk it seems to me like you’re reaching for something which may or may not exist - how do we know there even is such a canvas?

If you can’t even define what this elusive thing might be, how can you raise this as a serious argument against the idea that consciousness is a product of brain function?


Oh, I know it's there; it's the only thing I know. (Not "we" - it's private knowledge; but it's not hard to find - it's always there, for everyone).

To extend the analogy: the painting isn't reliable - it's not even certain it's not a hallucination. But there has to be a substrate to paint on, even if the painting itself consists of layer upon layer of deception.

I don't know if this is true; but it seems more plausible to me than that consciousness is an accidental consequence of the bizarre tumblings of matter.

And anyway: the idea that consciousness is emergent, and matter is fundamental, doesn't offer any explanation of the subjectivity of experience and consciousness. You can't explain consciousness as "emergent" without saying first what it is that emerges. Of course, the appearance of consciousness arises from programs like Eliza, or LaMDAwhotsit. They were made to produce that impression.


Well I suppose we can believe whatever we want, but as our understanding of neuroscience increases, it seems as if the trend is that there is less and less unknown space for that substrate to exist in over time.


The idea that you cannot get to consciousness from computation requires accepting on faith alone that consciousness is magic. It’s fundamentally a religious-like bias.

It’s not magic, I’m afraid.


This is a little bit of an emotional take I think.

The burden of proof is on the claim that you can get there through computation, not otherwise.

We do not understand the fundamental reality of consciousness; this does not mean that consciousness is magic. The assertion that you cannot get there from computation implies there is a currently non-understood yet essential piece of physics (I assume, but I do not know) which doesn't fall under "computation". A layman's initial thoughts point this towards the quantum realm.


I get into a vehicle every day that I don't fully understand, yet it still seems to perform its function.

I use vast swathes of computational resources every day for various tasks, the operation of which I understand even less. They still seem to accomplish those tasks without issue.

Sometimes those computational resources run ML workloads. Very very few people on this planet can honestly claim to understand how neural networks work, and in many cases, the minutiae are inscrutable to all of us. They still seem to work fine.

I most certainly do not understand how my own brain works, yet here it is, shitposting on hackernews.

We have yet to find a single shred of evidence that the human brain makes use of quantum principles in aggregate to do its thing, and have even specifically excluded a few such explanations. And even if consciousness strictly requires quantum hardware...we'll get there eventually.

Although you're certainly right about one thing, most laypeople would have a real tough time accepting a world where consciousness is synthetically reproducible, and instead tend to reach for comforting thoughts of "maybe quantum is required", "maybe consciousness is magic" or "maybe consciousness can only be created by a deity".

Non-laypeople know that at normal temperatures and pressures, quantum effects don't really extend into the macroscopic realm.


> We have yet to find a single shred of evidence that the human brain makes use of quantum principles in aggregate to do its thing, and have even specifically excluded a few such explanations. And even if consciousness strictly requires quantum hardware...we'll get there eventually.

Doesn't a random generator for a neural network act as its "inception"? If so, that's exactly quantum principles in aggregate.


Why would a random generator be important for a neural network?

Furthermore, not all sources of noise/randomness arise from quantum measurement (the only part of QM which can be interpreted to have randomness at all). Classical chaotic systems are also random if you are unable to measure the initial conditions to cosmic precision.

In fact, it's unclear at the moment how QM can actually give rise to randomness. In principle, in QM a perfectly isolated system of any complexity would behave entirely linearly with no randomness or even any chaos involved.
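The classical-chaos point is easy to demonstrate (a Python sketch using the logistic map at r=4, a standard chaotic system): two trajectories started 1e-12 apart diverge to order one within a few dozen perfectly deterministic steps, so imperfect knowledge of initial conditions already looks like randomness.

    x, y = 0.4, 0.4 + 1e-12          # initial conditions differing by 1e-12
    for step in range(1, 61):
        x = 4 * x * (1 - x)          # fully deterministic update rule
        y = 4 * y * (1 - y)
        if step % 15 == 0:
            print(step, abs(x - y))  # the gap roughly doubles every step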


One does not require true random number generation to perform SGD. And once trained, most neural networks in use today are completely deterministic. So no.

You are also making a baseless assertion that consciousness requires randomness of any kind, let alone quantum-based true random number generation.
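That determinism is directly checkable (a Python sketch with NumPy; the tiny regression problem and all names are made up): seed the PRNG and the "random" initialization plus the whole gradient-descent run are bit-for-bit reproducible.

    import numpy as np

    def train(seed):
        rng = np.random.default_rng(seed)
        X = rng.normal(size=(64, 3))              # made-up training data
        y = X @ np.array([1.0, -2.0, 0.5])        # made-up linear target
        w = rng.normal(size=3)                    # "random" init, but seeded
        for _ in range(200):                      # plain gradient descent
            w -= 0.1 * X.T @ (X @ w - y) / len(y)
        return w

    print(np.array_equal(train(0), train(0)))     # True: runs are identical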


I disagree about where the burden of proof lies. If we start off firstly with the assumption that humans have consciousness, and secondly the decently supported claim that animals exhibit what to us appears to be consciousness on a spectrum, and thirdly that we don't exactly know how or why consciousness exists, then the conclusion that seems obvious to me is that we cannot rule out that it could emerge in a network similar to that in human and animal brains. To me the best explanation we have now is that consciousness is an emergent property of a brain. And since a brain is neurons firing, as far as we've been able to determine, then there's no particular reason why certain types of networks can't have the same emergent properties.


> the conclusion that seems obvious to me is that we cannot rule out that it could emerge in a network

Not being able to rule out consciousness through computation is a far more cautious claim than 'consciousness is computation'.


That is strictly true, but if you define computation as any physical process which involves information, as I do, then defending any other position than "consciousness emerges from computation" is extremely difficult.


Sure. But I was responding to the statement "The burden of proof is on the claim that you can get there through computation, not otherwise."


I think it’s important to note that the article isn’t saying that if you build a brain-like thing, it can’t be conscious. It’s arguing that if you simulate a brain-like thing purely in software it can’t be conscious. I’m not saying one argument has more merit than the other (not that anyone is going to be able to prove anything is conscious either way).


Quantum is the go-to pearl for everyone who doesn't like the idea that consciousness is simply the result of a deterministic but very complex system of physics. Unfortunately there has never been any evidence that anything in the brain exhibits any sort of quantum computing or logic or otherwise.

Thus, the burden of proof is in fact on the claim that you cannot get there through computation, because deterministic physical processes are all that we have observed in the brain; thus the default assumption must be that all the properties of the brain are also deterministic.


As a side comment, however, Roger Penrose has an argument about some kind of quantum effects from microtubules in cells: https://en.wikipedia.org/wiki/Roger_Penrose#Consciousness (though going as far as saying this might ultimately be the reason consciousness is not a computation is still a whole debate, and his theory is far from commanding a consensus, I'm afraid)


I don't understand how this would be an escape hatch though. If some sort of quantum randomness is essential for the emergence of consciousness, it could be incorporated into an algorithm as well. It wouldn't be strictly deterministic, but why would that matter?

Maybe I'm missing something.


Well, there are plenty of quantum effects required for cells to function (everything is just biophysics and biochemistry after all), but those are irrelevant at the scale of the brain as a whole. The properties we care about (especially in this context) are emergent.


Considering that everything in the universe is the product of some computation, I would consider it a reasonable default assumption that whatever we mean by “consciousness” emerges from computation.

That this idea “intuitively feels weird” or “intuitively feels wrong”, and that we love to think we are special and have a unique place in the universe, is probably a more serious bias.


Not probably, it is a bias. Dolphins can speak, dogs can speak, birds can speak too. There is nothing special about the homo species except the fact that we have more neurons in our heads. Those "philosophers" love to claim that human consciousness is different from animals', and this is the biggest bullshit.


We’re in total agreement, I’m just measuring my words by habit.


No, dogs and birds can't "speak", not in the same sense you do (with dolphins it's a little more debatable).

Animal calls have no syntax - each call has some meaning, but they do not compose in any way. The order in which an animal performs its calls is arbitrary, and an animal hearing said calls doesn't pay attention to it. Even basic modifiers like "no" don't exist in animal communication - if a call means "food", and another call means "no", then an animal hearing the call for "no" and the call for "food" will behave as if they heard there is food somewhere.

The exception to this seems to be dolphins (orcas, in particular), where it seems they have been quite successfully trained to follow a series of short commands in order, in a sign language (so you can sign "jump, swim, splash" and they will follow this; and then sign "swim, splash, jump" and they will do it in the new order). This can't be done with dogs, and it's even questionable whether it has been successfully done with chimps.


To say that almost all animal communication is without order (syntax) is almost certainly incorrect, and would have a high burden of proof. It’s certainly been a historical assumption, but I think science has moved a little past it. Even bees can communicate relatively sophisticated messages between them.


Bee dances are a fascinating topic, but the evidence is still inconclusive - there is some evidence that suggests the movements in the dance correlate with the position of the flower, but there is other evidence that suggests they are irrelevant and the flower is found by a trail of pheromones.

In all other animals where this had been extensively studied (except orcas and maybe chimps), syntax has proven to be absent in natural calls, and also impossible to teach artificially. There are sometimes apparent breakthroughs, but it later turns out that the animal figured out a way to interpret "a", "b", "a then b", and "b then a" as four separate calls, without any deeper understanding. This is evident when you then teach it "c", and find out that it sees no difference between "c then a" and "a then c", and it takes just as long to teach it to distinguish these new calls; and then again just as long to teach it the difference between "b then c" and "c then b".

Have we tested each possible animal this way? No, of course not - but we have tested all the most likely candidates, and orcas and chimps were the only successes (and even here there is some debate). Crows, parrots, dogs, cats, gorillas, elephants, horses - none of these show any understanding of syntax.


Is this a counterargument? You are denying facts and saying something that even contradicts itself.

> Dolphins can understand sign language but dogs don't. So ALL OF THEM do not speak.

Do you mean this? It is the same as saying that you don't understand Japanese and concluding that humans cannot speak.

Correct me if I am wrong. Or correct yourself.


I'm saying that perhaps with the exception of dolphins and mayyyyybe chimps, there is a measurable, observable, quantifiable way in which animal communication is fundamentally different from human language.

So, perhaps you can say that dolphins speak, and maybe chimps speak, and then we can even contemplate that bonobos speak (since they are very similar to chimps, but haven't been studied as much).

But dogs definitely don't speak, and neither do any other mammals that we've tried to test in this way.

Not to mention, there is another characteristic of human language that 0 animals can be taught, as far as we've tried - more complex structure, like "not (c and d)"


Well, since this is full of false assumptions, I'll just give you a name.

@toshitaka_szk on Twitter. A Japanese researcher who can translate the language of birds and has successfully done some experiments on it. And he is kind of famous.

And needless to say, Hunger the dog [1]. Did this clear your human-centric brain?

[1] https://nypost.com/2021/05/01/this-speech-therapist-taught-h...

> 0 animals can be taught, as far as we've tried - more complex structure

This also applies to limited-intelligence human infants and mathematically immature dudes. So by your logic, those under-intelligent humans do not have consciousness. QED.


The claims about the dog are ridiculous - especially evident with the "love you" word. In fact, most of the article is describing a much more humane version of the famous Pavlov's dog experiment - the dog learned to associate the sounds of the "bells" with certain needs and persons, and uses them as such.

The Japanese research is much more interesting, and in a related comment I also cited a published article that proved that my claims, while fundamentally ok, are wrong in the details - animal calls are fundamentally simpler than human language, but some do show simple syntax.

> Also this applies to limited intelligence human infant and mathematic immature dudes. So by your logic, those underintelligence human do not have consciousness. QED.

I never claimed dogs aren't conscious, I only claimed they don't have language in the sense humans do. Infants also don't have language, but they learn it natively. All humans learn complex structures, even the mathematically illiterate, no idea where that came from. The only exceptions are people with serious brain disorders, and those people, indeed, don't "speak".

That, again, doesn't mean that they are not conscious beings.


{{citation needed}} for most of this.


Here is an article that discusses the topic quite broadly [0]. The most relevant section is part 4.

I will freely admit that it actually refutes some of my claims - there are actual examples of simple syntax identified in several species.

Still, I believe it matches my broader point: there are fundamental, measurable differences between human language and animal calls, with the latter at best showing only very basic structure, if any.

[0] https://royalsocietypublishing.org/doi/10.1098/rstb.2019.006...


I will readily agree that levels of sophistication of languages vary, and that human languages are almost certainly the most complex / sophisticated that we know of.


>Considering that everything in the universe is the product of some computation

This is not necessarily true. If you drop a ball it's impossible to calculate how long it takes before it hits the ground. We can calculate an approximation by creating a model, but there is no way to know if that model matches reality.


What I am saying is that the universe is literally performing the computation.

What you're describing is humans' attempt at modelling this computation using simple formulas.


All the arguments I've seen against computational consciousness seem to me to reduce down to arguments against materialism. I expounded on this more in a root comment and subsequent discussions, so I won't repeat it.

Right now in physics and the materials sciences materialism is thoroughly uncontroversial. There is no evidence for any kind of dualism; it only rears its nebulous and poorly defined head when we talk about consciousness, and there is zero experimental evidence for it. Therefore no, I think the burden of proof is on the dualist / non-computational-consciousness side.


The reductionist argument seems to be “the brain is just a meat computer and is conscious, therefore a complex enough silicon computer will be conscious as well”.

I find this very similar to alchemy in the 15th century. The idea was "gold is a heavy, malleable, lustrous metal, and so is lead. We have observed that substances can be converted into others. Therefore, with the right chemical process, we can convert lead to gold." The implicit assumption is that since the two things are similar, they can be made to exhibit the same properties with the right science. I.e. lead can become gold.

This is the same as the “meat brain/silicon brain” line of reasoning. But as we learned with more advanced chemistry, lead cannot be turned into gold (at least not in the chemical way they were expecting).

So the burden of proof does lie with those making the assertion that "meat computer has consciousness", therefore "silicon computer could have consciousness". Lots of people assume this is just a given without any evidence, just as alchemists assumed the similarities between gold and lead meant they could be chemically converted. I would postulate chemistry is much simpler than consciousness.


It depends on whether consciousness is a computational process. If it is then meat or metal really doesn’t matter. We know this because mathematicians have proved that any sufficiently capable computer can perform any computation.

So the lead to gold analogy doesn’t hold. It would be as if scientists had proved that any element can be transformed into any other element. Well, if that was true, then yes it follows that lead could be turned into gold, in a universe where that had been proved.

So is consciousness actually a computation? Of course that’s a matter of opinion. All I’m saying is, I think so yes, I think I have coherent reasons for believing so, and none of the counter arguments persuade me otherwise so far. I can’t prove it to you though, we’re just talking.

What I can say is, this or that argument seems to me to have this or that flaw, or lead to this or that consequence or conclusion that I find unlikely or absurd. Dualism is such a conclusion I find absurd, and I think most of the actual arguments against computational consciousness seem to at least reduce to attempted refutations of materialism, or out and out dualism.


> It depends on whether consciousness is a computational process.

Absolutely agree. But that is the assumption that I would liken to alchemists comparing lead and gold. We know almost nothing about the brain. We know almost nothing about consciousness. But yet some people assume that consciousness is computable just because we don't know anything else it could be (just as alchemists assumed gold and lead could be transformed because they were both chrysopoeic base metals; they hadn't discovered atomic theory yet). When all you have is a hammer, everything starts to look like a nail.

We know that the vast majority of numbers are uncomputable [1]. We have also proved that computation is incomplete [2] and can be undecidable [3]. It seems perfectly logical that consciousness is not computable. Or it could be computable; I obviously don't know. If someone makes the claim that consciousness is computable, then the burden of proof lies with them. We can't accept that on blind faith. At this point it is all opinion and speculation (as you said) because we still can't even define consciousness in a rigorous way. (And I don't think we will ever create artificial consciousness until we can define it, but that is an orthogonal issue.)

[1] https://en.wikipedia.org/wiki/Computability_theory [2] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_... [3] https://en.wikipedia.org/wiki/Undecidable_problem


I’m not assuming anything or accepting anything on blind faith, and I don’t think I’ve given you any reason to think that I am.

If anyone says that they think it is either this or that, it’s reasonable to ask them to justify that belief. There’s no reason to resort to using language like assume, blind faith, etc.


That's not necessarily a reductionist argument if it respects that the consciousness has a drama of its own that is not related to the low-level parts of the substrate; i.e. that the consciousness is irreducible. The mere hypothesis that something can be ported to silicon doesn't reduce it; it respects the complexity of the abstraction itself.

Also, "the meat computer has consciousness" independently requires proof. Every meat computer thinks it has consciousness, and we just take their word for it, based on our own experience as a meat computer.

If the thing making the same claim is not a meat computer, then we don't believe it in the same way: "I know meat computers are conscious because I am one; you say you are conscious but you are not a meat computer, therefore I don't believe you".

In the same way, we could deny that an extraterrestrial life form is conscious, if it's not made of anything resembling meat.


> The assertion that you cannot get there from computation implies there is a currently non understood yet essential piece of physics(I assume, but I do not know) which doesn't fall under "computation"

This conflates "computation" with "physics". I think even the OP doesn't do that, since it allows for the possibility that non-computational physical processes could lead to consciousness, even derived from computational efforts.

To me this is the crux of the problem with that argument, though. The definition of "computation" here seems to be designed to get to this answer. It includes all physical processes that are not like consciousness and excludes all physical processes that are like consciousness, and on top of that presumes that human consciousness isn't "deterministic", which to me is a difficult proposition to prove since human brains are never not being bombarded with stimuli, so creating two "runs" of the same brain is essentially impossible.

Like trying to add 1+1 on a computer that's sitting in a big burst of cosmic rays twice.


Just for clarity, I consider to be “computation” any physical process which involves information.


Right, I meant the article's idea of computation, not yours. Should have said TFA instead of OP.


Or more succinctly, all the known laws of physics affect things in a way that can be simulated by sufficiently complex computation. The idea that there's something going on in a wet, warm, mushy brain that has a macroscopic effect, and yet isn't accounted for by the currently known physics, is bonkers.


>The idea that there's something going on in a wet, warm, mushy brain that has a macroscopic effect,

What kind of complex computation simulates the subjective experiences of wet, warm, and mushy?

EDIT: my account appears to be rate limited, so i am unable to post a reply until idk when....

By "subjective" I am referring to my conscious inner life, that there is something it is like to be me. I am an experiencer because I can experience things. My experiences are subjective and qualitative.

When I see something red, there actually is no "red" in reality, there is just an electromagnetic wave vibrating at a certain frequency. But I still have a "red" experience. In the same way, I can also have a wet, warm, or mushy experience. How do the subjective qualities of my experience arise from quantities like spin, charge, mass, etc?


The same kind of computation as is performed by the wet warm and mushy computer.

Be careful with the term “subjective”, because it’s a can of worms: it doesn’t imply chaos and lack of information. Your subjective experiences are merely called “subjective” because they aren’t computed the exact same way as other brains. It doesn’t mean they’re not the product of computation, or that they have magic properties!


Reply to your edit:

I know it would be ideal if there was a simple, short answer, but this question is nearly the equivalent of "please explain all the core ideas of neuroscience/neurology/neuropsychology to me". It's a good question, but it's a big question. This is why in the other thread [1] I've tried to point you in the direction where you will find the answers you are looking for: the answers exist and are for the most part known, but you have a lot of reading ahead of you, and there's no way around that.

[1] https://news.ycombinator.com/item?id=31806312


First, I'm a little embarrassed because I didn't realize I had started another comment thread with you, I thought it was a different user. I may have done that twice on this post. Sorry to have split the discussion like that. Anyway....

>the answers exist and are for the most part known

I deeply disagree with this. I am not afraid of doing some reading, but I challenge you to find a single study that demonstrates how a certain combination of neurons firing leads to the experience of tasting vanilla.


Read the book and come back if you still feel like you have the same objection!


You've spelled out quite well that there are things about our internal subjective experience we don't understand to a discomforting degree.

But we understand really well the substrate from which those experiences arise. Imagine it this way. Imagine that mathematicians are trying to solve some problem; they don't even know if the problem is computable or not. Some genius comes around with a computer that, given the necessary input, provides the solution. He refuses to explain how the program works and the program itself is gigabytes of incomprehensible spaghetti code. So they are no closer to understanding the problem, but now they do know that it's computable.


To be clear, I am talking about phenomenal consciousness — which is simply the ability to subjectively experience the world and ourselves. This is distinct from metaconsciousness.

Bernardo Kastrup on phenomenal consciousness:

>Our phenomenal consciousness is eminently qualitative, not quantitative. There is something it feels like to see the colour red, which is not captured by merely noting the frequency of red light. If we were to tell Helen Keller that red is an oscillation of approximately 4.3 × 10^14 cycles per second, she would still not know what it feels like to see red. Analogously, what it feels like to listen to a Vivaldi sonata cannot be conveyed to a person born deaf, even if we show to the person the sonata’s complete power spectrum. Experiences are felt qualities — which philosophers and neuroscientists call ‘qualia’ — not fully describable by abstract quantities.

>Some genius comes around with a computer that given the necessary input, provides the solution...but now they do know that it's computable.

Can you give me a concrete example of input/output and how we would validate any output? You are suggesting the that the brain is the program, the physical world is the necessary input, and consciousness is the output (correct me if I'm wrong). But if I wrote a program to make a perfectly accurate simulation of a kidney, would you expect it to pee on my desk? Of course you wouldn't, so I'm not sure why we would expect that of the brain and consciousness.

Going back to your example, which I like, even if we had access to the code, the code is not the actual reality. It is an abstraction. We can dig through the code all we want, but we will never reach electricity and transistors, the true reality of the program. I think this is analogous to our own reality, where we can dig into spacetime at smaller and smaller distances, but never find "true reality". A hint that this is the case is the lack of operational meaning to distances < 10^-33 cm and times < 10^-43 sec (https://www.youtube.com/watch?v=Uz-Ve_1LX8w); the amount of energy required to probe those sizes creates a black hole. So something *must* be underlying spacetime. I think that something is phenomenal consciousness, mind-at-large, pure subjectivity, call it what you like. Spacetime is the source code, and phenomenal consciousness is the electricity and transistors.

(Also just to reiterate, my HN account is rate limited, so I may not be able to reply in a timely manner to any subsequent comments).


I attempted to sidestep all of that by not discussing consciousness, the brain, or neurons in my follow-up at all. But instead, some hypothetical mathematical problem. I don't know, let's say the existence of a polygon of N sides that can tile an infinite plane as a completely made up problem. Someone generates a computer program that, for any given N, can generate some complex-shaped polygon that can tile the infinite plane. Probably a bad example, but whatever.

Even though you don't understand the program, don't know how it works, etc, you know that the solution to the problem is one that's computable, because you can see that a solution can be provided by something that is limited by computability. You might be totally incredulous that the problem could be solved by computation alone, but all you need is a single counter example.


But framing it as a problem (or program) is begging the question; I'm arguing that it is not a computation.


Which necessitates that somehow, there's some law of physics important to operation in the brain whose effects aren't computable. Yes?


Not exactly. I am saying that consciousness is fundamental; that matter, physics, our brains, etc emerge from consciousness. Physics is the OS, consciousness is the transistors/electricity, and we are conscious agents (to borrow a term from Donald Hoffman) using the OS.

So, instead of the physicalist paradigm of

Physics -> Chemistry -> Biology -> Psychology -> Consciousness

it is the idealist paradigm of

Consciousness -> Physics -> Chemistry -> Biology -> Psychology


That doesn't change the question though. We don't know how many layers are beneath the known laws of physics. Who knows, one of them could be consciousness. Maybe it's turtles all the way down.

The question remains. Can you provide a direct answer? If we measured all the particles in the brain, would they be operating in a way not compatible with the currently known laws of physics?


>If we measured all the particles in the brain, would they be operating in a way not compatible with the currently known laws of physics?

Their operation would be perfectly compatible with the known laws of physics. But their operation is _not_ thinking itself, it's what thinking looks like when observed across a dissociative boundary. If I am sad, and you look at my face and see tears, you would never think that the tears were my sadness itself; they are a representation, an image, of the sadness. Tears are what sadness looks like from across that boundary. I experience sadness from a first-person perspective, you see my tears from a second-person perspective, across a dissociative boundary. So, you can measure electrical activity in your brain when you are thinking thoughts, but that activity is not your thoughts, in the same way that flames are the image of fire but they are not fire itself. Neuronal activity is the image of thought, it is what your thoughts look like from across a dissociative boundary.


Okay, so if I use the laws of physics to simulate a human brain, it will behave exactly like a human brain in the real world. Will it also be conscious and experience sadness?


>Will it also be conscious and experience sadness?

Imagine you are programming an AI simulation. You could train a detector to associate a certain wavelength with a certain color. When shown a red light, the AI could say "that is red", because it correctly identified the wavelength. But it would never know what it feels like to see red, right? This is similar to how a blind person cannot know what it feels like to see red, but they can intellectually understand that it is an oscillation of 4.3*10^14 cycles per second.

A different example: you could train an AI to recognize 10,000 songs. It would listen to the frequencies and patterns, and make an identification.

In both of these cases, we have quantity as the input and output.
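To make "quantity in, quantity out" concrete, the detector could be as dumb as a lookup. A toy sketch (the thresholds are just the usual textbook range for red light, not anyone's actual system):

    def classify(wavelength_nm):
        # quantity in, a label out; nothing here plausibly
        # "experiences" redness
        if 620 <= wavelength_nm <= 750:
            return "red"
        return "not red"

    print(classify(700))  # -> red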

If after training, you asked the AI to identify the first song it was trained on, would the AI experience nostalgia? Would there be a way the AI "feels" about the song? We can probably both agree that the answer would be no. For the same reasons, the answer to your question is also no.

(edited for clarity)


I appreciate your response, but it's impossible for me to imagine programming an AI simulation capable of feeling the perception of red. The idea that I can't imagine doing these things is because of a shortcoming in my knowledge. Maybe that knowledge is out there, or maybe it isn't. So the exercise gets us no closer to answering the question at hand.

But what I can do is imagine creating a physics simulation. There are no gaps in our knowledge there. So again, I'll ask. We create a physics simulation of a human brain. Can the brain write a novel? Answer questions about what it's like to perceive the color red? This is just a yes or no question.


>The idea that I can't imagine doing these things is because of a shortcoming in my knowledge.

I don't think there is a shortcoming in your knowledge. Your metaphysics intuition is correctly tuned: simulations cannot feel.

>We create a physics simulation of a human brain. Can the brain write a novel? Answer questions about what it's like to perceive the color red? This is just a yes or no question.

GPT-3 can do both of these things. Is it conscious? If you need a direct answer to your yes-or-no question, it is yes. But when we reword your second question as "can a simulated brain experience the color red?" the answer becomes more clear; the simulation can identify a wavelength and know it is called "red". But the experiential part is akin to explaining color to a blind person.

A simulated brain could identify molecular patterns of cocoa and sugar, but can it know what it is like to taste chocolate? Think about what it means to taste chocolate. Is it purely quantitative, like the balance of ingredients, or is there something else going on that is qualitative? Something abstract, something with meaning, something closer to the metal? We can probably agree that it feels like there is. What we are describing is subjectivity — your private conscious inner life. I suggest it is this that is fundamental and cannot be simulated. This is the layer where experience "happens". From this layer emerges meta-consciousness.

Here is an article by Bernardo Kastrup on meta-consciousness: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5590537/


OK, so a simulated human brain acts the same as a real human brain, is capable of the same things, tells you that yes, it can experience the color red, etc, but does not have any internal conscious experience.

Why then would evolution produce beings with internal conscious experience?

This would also mean that our internal conscious experience has no effect on what decisions we make, on whether we cry, smile, what memories we form, etc.


>Why then would evolution produce beings with internal conscious experience?

Well, I'm arguing it's the other way around. But to address what I think is the spirit of your question: if reality is only mind, if consciousness is truly fundamental, then why can't you read my thoughts? Why do we feel like individuals? Why do we have obviously separate private conscious inner lives?

When you are asleep and dreaming, your "alter" generally does not know they are dreaming. Your dream self has dissociated from your waking self, but it is only after you wake up that you realize you were dreaming (if you even remember). Another example of this phenomenon is Dissociative Identity Disorder, where one mind splits into many alters, each unaware the others exist. I'm glossing over a significant amount in order to get to my point, but here are a couple links that go into great detail:

- https://www.google.com/books/edition/_/JCdbmFCHYB0C?hl=en&gb... (chapter 5)

- https://www.mdpi.com/2409-9287/2/2/10/htm#sec4-philosophies-... (search for "Yet, we know empirically that living people have separate, private experiences (Fact 6)")

My point is, our private conscious inner lives are dissociations, alters, from a "mind-at-large" fundamental consciousness. And the boundaries of these dissociations, the "containers" of individual private conscious inner life, (again glossing over so much, like panpsychism's combination problem) are metabolizing organisms. Metabolizing organisms are what alters _look like_ from the outside. Kastrup:

"Since we only have intrinsic access to ourselves, we are the only structures known to have dissociated streams of inner experiences. We also have good empirical reasons to conclude that normal metabolism is essential for the maintenance of this dissociation, for when it slows down or stops the dissociation seems to reduce or end. These observations alone suggest strongly that metabolizing life is the structure corresponding to alters of [fundamental consciousness]

But there is more: insofar as it resembles our own, the extrinsic behavior of all metabolizing organisms is also suggestive of their having dissociated streams of inner experiences analogous to ours in some sense" (from the mdpi link)

>This would also mean that our internal conscious experience has no effect on what decisions we make, on whether we cry, smile, what memories we form, etc.

Don't your thoughts and feelings influence your behavior?


> Well, I'm arguing it's the other way around. But to address what I think is the spirit of your question: if reality is only mind, if consciousness is truly fundamental, then why can't you read my thoughts? Why do we feel like individuals? Why do we have obviously separate private conscious inner lives?

No, this is not the spirit of my question at all. I feel like the spirit of my question is completely being missed. I realize you've thought very deeply on how and why the world arises from consciousness itself and you've been working very hard to get this concept across. I get the general gist of your theory and there's lots of detail and thought behind it.

I'm not sure how I can set my question out more clearly than I already have. But I'll try. Rather than trying to explain your own theory in greater detail, can you try to work with me on getting a mutual understanding of my line of reasoning?

We have a very excellent predictive model of particles and fields. So much so that we are building experiments worth billions upon billions of dollars to attempt to find places where reality differs even to the slightest degree from that model.

The human brain, the warm squishy stuff in your head, can be viewed as being composed of particles and fields. Particles and fields may just be some manifestation of some panpsychic reality, but we can still use our model of particles and fields to predict the behavior of those particles and fields.

So the first question. Can we use our model of particles and fields to predict the behavior of the particles and fields within the human brain? You've already appeared to answer this question in the affirmative "Their operation would be perfectly compatible with the known laws of physics."

From this it follows that I can create a computer model of a human brain, complete with all the cells, proteins, neurons, etc, and that human brain will be capable of any action (eg, the signals sent by neurons out of the brain) a real human brain is. There would be no way to discern between a real flesh and blood human brain and the simulated one by talking to it. Since a real human brain tells you that it's conscious and has internal experience, the simulated one must tell you the same.

While the simulated human brain will tell you that it's conscious, it is of course not proof that it is. But this leads to your next question:

> Don't your thoughts and feelings influence your behavior?

If thoughts and feelings are a thing that don't pass the computability test, but a simulated human brain doesn't have external behavior that differs from a flesh and blood human brain, then no, thoughts and feelings have no effect or even influence on your behavior. In such a case they are a mere passenger. Any effect they have would necessitate that the particles and fields within the brain suddenly behave in a way that violates our model of particles and fields.


I'm sorry, I am not trying to frustrate you or avoid your questions. I'm enjoying this conversation. I will try to work with you.

> From this it follows that I can create a computer model of a human brain, complete with all the cells, proteins, neurons, etc, and that human brain will be capable of any action (eg, the signals sent by neurons out of the brain) a real human brain is. There would be no way to discern between a real flesh and blood human brain and the simulated one by talking to it. Since a real human brain tells you that it's conscious and has internal experience, the simulated one must tell you the same.

> While the simulated human brain will tell you that it's conscious, it is of course not proof that it is.

I still agree with all of this. A simulated brain could give the appearance of consciousness while not being conscious. It would not have a private conscious inner life, but it could say things that make it look like it did.

> If thoughts and feelings are a thing that don't pass the computability test, but a simulated human brain doesn't have external behavior that differs from a flesh and blood human brain, then no, thoughts and feelings have no effect or even influence on your behavior. In such a case they are a mere passenger. Any effect they have would necessitate that the particles and fields within the brain suddenly behave in a way that violates our model of particles and fields.

I am struggling to follow your point here. Thoughts and feelings are internal experiences which correlate with phenomenal consciousness and are absent in a simulation. Could you give me an example of the effect you are describing and how that would violate our current models?


Ok, so you have a simulated brain, and a real brain. Both are to an external observer, functioning identically. For something to have any effect on the real brain, it would also need to have an effect on the simulated brain. Otherwise the simulated brain would deviate from the real brain and an external observer would be able to identify which is which.

Therefore by your definition of internal experience, internal experience has no effect on our behavior.


> For something to have any effect on the real brain, it would also need to have an effect on the simulated brain.

It would just need to _look like_ it has an effect on the simulated brain, right? If you ask me a question and I pause, say "hmm", and put my hand to my chin, can you know that I am actually thinking and formulating a response? If the entirety of your observations are external, of course you can't. There is no way to tell if my response is a random choice from an array of preset answers, or a group of concepts activating each other.

That's because brain activity is part of what our inner, first-person experience looks like from a second-person perspective (ie, an external observer). Tears are not sadness, they are what sadness looks like, they are an external description of an internal state. Sadness can only be experienced by the person experiencing it. Tears are a description of sadness. But can't tears be faked?

So when we see the same neuronal activity in both brains, we have no way of knowing whether inner experience actually gave rise to the activity, or it just looks like it did.


Consciousness isn't really the output, it's the category of all possible not-too-pathological running states of the program. The output is whatever the organism does to or in its environment (production of speech, movement, decisions, etc).


Okay, but how do we get to the subjective experience of the experiencer? I don't see how what you've said accounts for conscious inner life.


No one has a good answer to that. But it's not necessary to answer that to conclude that whatever task a human can perform can be replicated by a sufficiently complex algorithm.


You also haven't really made any good-faith effort to read the neuroanatomy textbook I tried to suggest to you in another thread.


I did look for it yesterday but wasn't able to find a free version. Neglected to mention it because I was unable to comment (that restriction seems to have been lifted). I am genuinely interested in reading it.


I’m not sure that was more succinct, but exactly! Haha :)


(Author here.)

This is a false dichotomy. Saying that consciousness is not fundamentally a computational process does not mean that it then has to be magic.

We could say the same thing about life. Life is not a computational phenomenon. But it is also not magic. It is, fundamentally, a particular kind of chemical process. Perhaps the same is true of consciousness, or maybe it is some other kind of physical process. I don't believe that saying that it is not computation means that we need to throw up our hands and resort to magic, though.


I would absolutely argue that life is a particular category of computation as well, and that is actually closer to my area of expertise (biochemistry) than consciousness / neuro.

For clarity: when I say "computation", I refer to any physical process ("physics") which involves information.


I suspect we agree more than we disagree. But I think that your definition of a "computational process" is too broad. A bacterium does things that we can say are "performing a computation." But that is different from saying that life is, at root, a computational phenomenon.

For instance, sorting a list is a computational process. It doesn't matter what kind of computer I run a sorting algorithm on, if I follow the algorithm, I end up with a sorted list. If I use quicksort, it takes me on average O(n log n) steps. It doesn't matter if I'm doing this on a Lego computer or if I'm simulating a Lego computer on a virtual computer. I always end up with a sorted list in an average of O(n log n) steps.
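To make the substrate-independence concrete, here's a minimal sketch; note that nothing in the algorithm's definition refers to the machine it runs on:

    def quicksort(xs):
        # Defined purely over abstract states; transistors, vacuum
        # tubes, and Legos never appear in the definition.
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x < pivot])
                + [pivot]
                + quicksort([x for x in rest if x >= pivot]))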

By contrast, if I construct a collection of chemicals that matches the positions of those of a real bacterium, I have created a new bacterium. However, if I simulate the activity of those chemicals on a computer, I have not created a bacterium. I have just created a simulation of a bacterium. I haven't created a living thing.

My argument is that the same is true of consciousness. If you create a perfect simulation of a brain in a computer, you have not created a new consciousness. You just have a simulation of a brain.

None of this to say that a computational paradigm cannot sometimes be useful in understanding what is going on in life or consciousness. Just that it is not fundamentally a computational process, it's a physical process.


> I suspect we agree more than we disagree.

I suspect the same, but I think there is tremendous value in taking a cosmic step back and seeing everything through the lens of information. It's the most reductionist, neutral approach to the universe I can conceive of at this point in my life.

> By contrast, if I construct a collection of chemicals that matches the positions of those of a real bacterium, I have created a new bacterium. However, if I simulate the activity of those chemicals on a computer, I have not created a bacterium. I have just created a simulation of a bacterium. I haven't created a living thing.

I agree, but I think the context matters: you may not have created a bacterium, but that's because you've emulated a bacterium in a totally different environment. The bacterium makes no sense as a computational engine if you separate it from its environment, which is chemical in nature: this chemical environment must also be emulated inside the computer. So if we're going to take the idea of life as a computation seriously and to the extreme, we need to conceive of it as it exists within its environment and in the context within which it evolved. Otherwise what you've created, grossly speaking, is a function that is never called, which of course is much less interesting.

Similarly for the brain, it must also be embodied, and you must also simulate its afferent inputs, and you must also give it an environment with which to interact and within which to exhibit features of agency. If you emulate the embodied brain with its environment, I contend that it doesn't matter what the substrate is: the brain you will have created will "feel" just as real to itself. It will perceive itself as being "conscious" just as well as you and I.

Note that for this to make any sense in a relatable way to a human, it's not enough to just throw large numbers of neurons together, as I understand is common practice in AI work even today: a long-learned lesson in both neuro and biochem is that function follows structure, so you must emulate the gross organizational structure of the human brain in order to observe the same sorts of features that make up human-flavoured consciousness.

If you compute consciousness, there will be zero distinction between how the computation feels real to itself, and how you and I feel to ourselves. That "magic" feeling we get where we have the impression that we're "someone", with a personal identity, that we're real, that we're alive, that we're aware, that we can make our own decisions... All of that stuff, your emulated consciousness will also experience. It won't be just bits turning on and off from its point of view: it will feel alive.


> I suspect the same, but I think there is tremendous value in taking a cosmic step back and seeing everything through the lens of information. It's the most reductionist, neutral approach to the universe I can conceive of at this point in my life.

What is your definition of "information"? I view the concept of computation as so deeply entwined with consciousness that I fail to see how it can be meaningfully applied to physical phenomena that are completely independent of a subject who can determine something to be a computation (i.e. the mathematical intuitionistic understanding), so I'm interested in what you mean by information and computation.


I'm not going to attempt to define information (because I think it's beyond my ability), but what I mean by it is what I think "information" means as it relates to physical phenomena, for example the black hole information paradox [1], or Shannon's entropy [2], or constructor theory [3], or calculating-space [4], or digital physics [5].

[1] https://en.wikipedia.org/wiki/Black_hole_information_paradox

[2] https://en.wikipedia.org/wiki/Entropy_(information_theory)

[3] https://en.wikipedia.org/wiki/Constructor_theory

[4] https://en.wikipedia.org/wiki/Calculating_Space

[5] https://en.wikipedia.org/wiki/Digital_physics


Seeing things through the lens of information seems more like dualism than the arguments for the distinct nature of human "consciousness". (Which doesn't mean it can't be correct, of course, but simply that people often tend to depict the debate as being the other way round). What is a "soul" if not the idea that consciousness exists as an abstraction independent from the material it runs on?

The purely materialist argument is that "information" is just a pattern in the signal processing apparatus of carbon based lifeforms (i.e. it's a representation of the universe in our neurons, not the universe) which very loosely maps to physical processes. Very loosely is important here too: humans can identify equivalent patterns in things as dissimilar as the LCD output of a pocket calculator and beads on an abacus and act accordingly, but I'm not sure the constituent atoms of the calculator and abacus have any view on the matter.


I'm not sure this adds anything useful to the conversation, but I don't believe the word "soul" maps very well to any real thing: we may as well be talking about the "quingel", the "probble", the "finglam" or the "subvick".

There's no neuroanatomic basis for a "soul", but there is at least some extremely fuzzy mapping from neuroanatomy to the concept of "consciousness". It's a bad mapping, but it means more than nothing at all.

---

And I reject the idea that taking the perspective of information is taking a dualist perspective. I am advocating for the opposite: taking "information" seriously as some sort of low-level quantum substrate of the universe. It's a purely materialist view, where the material is literal information.

An extreme version of this, which I find quite intellectually useful, is the mathematical universe. [1]

Note that in your examples of information, you are pointing to higher-level information, which emerges in complex systems. It's not an incompatible view!

[1] https://en.wikipedia.org/wiki/Mathematical_universe_hypothes...


I think the parent author may be talking about life in a phenomenological model, as in how we experience something vs knowing about it epistemologically. Often that gets mixed up. Knowing doesn't equal experiencing. This is something the technocrats often miss.


I would say that life is an emergent phenomenon, as is consciousness. I suspect that whether conscious systems can be computational is related to the question of weak versus strong emergence. If weak emergence pertains in physical systems, then just as physical systems are entirely tractable to computation, so is consciousness. If strong emergence pertains, then computational consciousness may not be possible. I subscribe to weak emergence.

BTW thanks for the article, it's a very clear and well reasoned explanation of your position, even though I happen to disagree.


Biological life is not a computational phenomenon. There's no reason very advanced computers couldn't make simulated life. OpenWorm is an example.

By the way I find it hilarious how confident you (and other philosophers) are about your conclusions despite them basically being thought experiments with zero evidence. Imagine if scientists did that!

Actually I guess string theory is an example of that.


I would argue (and this is really just a semantic point) that it's more productive or instructive to think of biology as being literally a computational phenomenon. The substrate is different, the instruction sets are different, the rules are different... But ultimately it's just computing and iterating on a mind-bogglingly massive amount of information, on a time scale that is difficult for us to wrap our heads around.


Yeah I agree. That comes into play in Permutation City which someone mentioned elsewhere in this thread.

Anyway OP is pretty clearly wrong, or at best proposing something very wild with no evidence.


Nope, the other way around. Thinking you can 'get' anything from computation is religious thinking, because it introduces a sort of dualism. As the author correctly points out, computation is not a thing that 'does' anything; it's a label for a subjective observation about a state of physical matter.

Taking the naturalistic position is to accept that matter is all there is. Consciousness is not divorced from the stuff it is made out of, and cannot be abstracted into some cloud of computation. It is a sort of Enlightenment-era rationalism gone wrong, which is also why it's so popular among folks in this industry. It actually comes with its own theology while we're at it (immortality, raising the dead, final judgement, and so on).

A person computes consciousness no more than a falling pen computes gravity. Even a literal computer does not 'compute' anything other than in the sense that human observers impose meaning on a bunch of electrons buzzing around, and the language makes sense to explain how it works.


Yeah, God of the gaps all the way down. That, and a good deal of armchair philosophy.

I think it was Sean Carroll that said something like "don't trust any way of thinking that allows you to discover truths about the universe from the comfort of your armchair".


"The idea that you cannot make tea from water alone requires accepting on faith that brewing tea is magic."

No, not necessarily. It just means that water/computation is not the only ingredient you need.


And what is this other ingredient if it's not something physical?


It could well be something physical, but going beyond mere computationality.

If I knew what it was, I'd be waiting for my Nobel prize now :)

It's kind of like how Newton's physics was missing the secret ingredient of special relativity. It was still physical, not magical, just not discovered yet.


You seem to be arguing for consciousness as a fifth force of nature, or something quite close to it. That's basically magic.


Only in the same sense as dark matter or string theory are "basically magic". It is naive to assume we've already discovered all properties of matter and energy and are now only refining our understanding of those.


Yes, astrology could also work due to a fifth force of nature. After all, it would be naive to just dismiss it based on our current understanding of Physics.


It's not a good analogy. Astrology is an explanation of a problem that doesn't exist in the first place.


Any sufficiently advanced technology feels like magic and it seems to me that consciousness is about as advanced as it gets.


So we can finally put this woo nonsense to bed, could you walk us through the high-level computations of a subjective experience? E.g., having a bellyache or falling in love.


I can’t explain to you how a whole brain works, but if the question was genuine, I would actually recommend starting with Blumenfeld’s “Neuroanatomy through Clinical Cases” textbook. It’ll demystify a lot of the larger scale questions around structure and organization. It’s fairly easy to study how neurons, axons and synapses work using any number of freely available resources. You’ll never find out exactly how every cell is wired, but after a while (especially if you have a comp-sci, ML or statistics background) you’ll realize that that’s besides the point.


I'm just curious as to when exactly quality arises from quantity


What do you mean? Are you referring to emergence?

The issue is that we know very very well that “consciousness” is not “one thing”, but rather a collection of features, all (or most) of which can exist in isolation, and all of which can have a whole spectrum of both pathological and “normal” forms or states.

Generally speaking, different features emerge at different scales, so there is no particular reason to think that all of the things that make up “consciousness” would emerge at the same scale, at the same level of organization, or even at the same physical location in the brain. We can assume that interesting features emerge at the “group of neurons” level or at the tissue level, but that’s a pretty wide scale.

Crude illustration: “consciousness” is much more like a network of microservices than it is a monolith… And it has fuzzy borders and is not even well encapsulated from its environment.


I'm referring to subjective experience. Which neurons have to fire for me to experience the subjective taste of chocolate? And how many neurons do I need before that can happen?


Blumenfeld won’t answer exactly that specific question, but it will help you understand how that works more generally.


The book covers how quality (subjective experience) arises from quantity (# of neurons)? Could you point me to a section please? I'm super intrigued.


The book as a whole gives you the tools to understand how emergence happens in the brain. I can point you to the resource, but I can’t do the learning for you.

It’s also possible you’re less interested in the brain per se, and more interested in “emergence” in the abstract. If that’s the case, there is no shortage of good books and resources you can turn to.

Look up “emergence science” and “complexity science” and go from there: it’s not an easy topic, it’s very cross-disciplinary, and to really understand it requires a fair amount of maths (imo).


Sure, give me the complete wavefunction of your body and fifteen trillion years to simulate it.


Would you say that the opposite assertion, i.e. that consciousness is computation, is not grounded on belief and bias? I believe consciousness is computation of some kind. I think one of the challenges with consciousness is its link to the substrate of the computation, like a program that cannot run on any other computer, a kind of soul DRM?

The belief in my uniqueness is instrumental in my belief that I am a conscious being.


The key error in the argument is here:

"[T]he only reason we call a box with a CPU in it a “computer” is because we happen to have a simple mapping between the voltage levels across different parts of the CPU to a set of bits we have defined, and when these voltages interact they do so according to the rules of a set of logical operations which we have also defined. But there is no meaning to the physical system apart from what we, as external observers, have imposed on it."

This might be an argument against a computer program with no inputs or outputs being conscious. But it is not at all an argument against a robot with a computer brain, hooked up to inputs and outputs in a way somewhat similar to a human brain, being conscious. In fact, one way of stating the position of physicalists about consciousness is simply that we humans are such robots! We have brains that compute things (granted, our brains do it with analog neurons, chemicals, etc. instead of digital circuits, but that doesn't mean what our brains are doing is not computation), but the things our brains compute have semantics because our brains are hooked up to inputs and outputs.

In other words, the whole article is attacking a straw man. The actual substantive point of the article is not that consciousness is not computation, but that for computation to produce consciousness, it has to have semantics--it has to be hooked up to inputs and outputs in a non-trivial way. Which is certainly not easy, but that doesn't mean it's impossible.


There are documented cases of people in comas/full paralysis who later wake up and remember being conscious throughout the coma. If they never woke up to tell anyone about it (or died before they had the chance to wake up) would they not have been conscious? We can of course observe that there is brain activity to some degree, but I'm not sure that we would consider that output in a meaningful way any more than something like side-channel analysis of a CPU.

Further, who determines what inputs/outputs qualify as semantic? That would either require an outside observer (breaking the model of "I think therefore I am") or require the system itself to determine semantic value, at which point there's no need for I/O to make the determination.


> If they never woke up to tell anyone about it (or died before they had the chance to wake up) would they not have been conscious?

This is an edge case where one could say the person was (or might have been) conscious but nobody else ever got the chance to find out. In any concept like this we should expect edge cases.

> who determines what inputs/outputs qualify as semantic?

The function of the inputs/outputs in the overall function of the entity and how it relates to the rest of the world.


Is a computer not computing if it's not outputting? If I'm thinking for an extended period of time without telling other "people" my thoughts am I actually thinking? If a tree falls in a forest, and there’s no one around to hear it, does it make a sound?


The argument presented that a hot bar of iron would qualify as conscious if consciousness is computation also concludes that a hot bar of iron qualifies as a computer.

Do you believe that a hot bar of iron is a computer? If your answer is no, then it seems like the argument presented here should not be at all convincing.


(Author here.)

> Do you believe that a hot bar of iron is a computer?

This is the crux of it. The main objection to the triviality argument is that we've just used far too broad a definition of a "computer." If we're thinking that a hot iron bar is a computer, then this should suggest to us that it's the definition that's wrong. You are correct in saying that if the definition we use is wrong, the whole triviality argument falls apart.

But for me, the problem with this objection is that I've never seen any workable alternative definition for a computer besides the standard definition. (Namely that a computer is a mapping from physical states to Turing machine states.) Any alternative seems to introduce some measure of subjectivity. The hot iron bar doesn't seem like a computer to us because it requires a random mapping and is not repeatable. But a requirement that the mapping be "simple" or "low entropy" is not easy to rigorously define, nor is it easy to find a principled way to determine how "repeatable" or "deterministic" a computer's behavior should be.


Let's say you can initialize the first N bits of the hot iron to the initial state of your program. By what process does the hot iron use that information to compute the next state of your program?


In the case of the hot iron bar, obviously there is no deterministic process that takes us from one state to the next. We just happen to get lucky and see that it randomly fluctuates from one state to the next state correctly.

The trouble is that formal computation theory does not take into consideration the physical process by which the computer moves from one state to the next. That's what makes it so general. We can make computers out of transistors or vacuum tubes or Legos equally well, and can be assured that, whatever we make our computer out of, the computational complexity of bubble sort is O(n^2).

Requiring specific physical processes for computation would invalidate computation theory in a very fundamental kind of way.


I'm not requiring specific physical processes, I'm asking for any physical process in a hot iron bar that executes any model of computation that is capable of running an arbitrary program. But you're saying there is not one, we just get lucky one time. That's not computation.


The physical process is simply that the atoms in the bar of iron start in one Turing machine state, obey the laws of physics, and end up in another Turing machine state. But this is also true of any other physical computer.

It sounds like your objection is that this is not a computer because it's not reproducible --- we just got lucky that the physical states mapped onto the appropriate Turing machine states one time. But then how deterministic does the physical system need to be to be considered a computer? It can't be 100% because no real-world physical computer is 100% deterministic. Is it a computer if the mapping is correct 50% of the time? 90%? 1%? I can't see any principled reason to choose any particular value.


> The physical process is simply that the atoms in the bar of iron start in one Turing machine state, obey the laws of physics, and end up in another Turing machine state.

I don't see how this addresses the key point -- those aren't random distinct states; there's an explicit description of how one state transitions to the next. That's part of the definition of the Turing machine!

If you could describe a mapping from the state of your iron bar to a Turing machine's state, such that the bar correctly transitions from one machine state to the next, then yes, it is a computer! Although as others have noticed, you'd need a ludicrously large and complex mapping; the infinitesimal tail would be wagging the infinite dog. At that point you could certainly argue that the computation is really happening in the mapping itself.


> there's an explicit description of how one state transitions to the next. That's part of the definition of the Turing machine!

A Turing machine does indeed describe which states the machine transitions between (by its definition). So, it might say that if it reads a "1" and is in state "A", transition to state "B".

However, crucially, a Turing machine does not specify the physical mechanism by which this state transition occurs. You just have to enumerate the rules for state transitions. The definition of a Turing machine says nothing about whether or not these states are voltages, the locations of rocks, or the magnetic moments of iron atoms. Nor does it say whether the transition is being implemented by a CPU or a human being manipulating rocks.

So the only rigorous way you can make a connection to a physical computer is to just identify a mapping from physical states to Turing machine states, and observe whether or not it performs all the state transitions correctly. If it reads a "1" and is in state "A", did it go to state "B"? If it did, it is behaving as a Turing machine. If it did not, it is not behaving as a Turing machine.
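In code, a Turing machine really is nothing but its transition table. A toy sketch with one made-up rule:

    # (state, symbol) -> (new state, symbol to write, head move)
    rules = {("A", "1"): ("B", "0", +1)}

    def step(state, tape, head):
        # Nothing here says whether the states are voltages, rocks,
        # or the spins of iron atoms.
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        return state, tape, head + move

    print(step("A", ["1", "0"], 0))  # -> ('B', ['0', '0'], 1)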

> At that point you could certainly argue that the computation is really happening in the mapping itself.

The mapping is fixed, so I don't see how it can do any computation. Others have objected to the fact that the mapping in the iron bar is obviously extremely complicated, but this strikes me as ultimately being an objection that this doesn't "feel" like computing.

But as a counterexample, you can do computations under homomorphic encryption. Suppose you have encrypted a computer program that sorts a list using homomorphic encryption. You hand it to me to run on my CPU. When I observe the voltages of the CPU, they will appear completely random to me. Only if I have the key can I provide the correct (complicated) mapping from physical states to Turing machine states and interpret what is going on on the CPU. But if we reject "complicated" mappings as not being "real" computations, then you're forced to say that computation under homomorphic encryption isn't "really" computation --- even though you've handed me a program to sort a list, it ran on my machine, and it handed you back a sorted list.
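A toy version of this (a simple substitution "cipher" standing in for real homomorphic encryption, which I won't attempt here): conjugate the machine's step function by a secret permutation, and the trace looks random unless you hold the key.

    import random

    def f(s):                  # the "real" machine: count down mod 16
        return (s - 1) % 16

    key = list(range(16))
    random.shuffle(key)        # secret map: machine state -> observed state
    enc = {s: key[s] for s in range(16)}
    dec = {v: k for k, v in enc.items()}

    def g(p):                  # the "physical" system: f conjugated by the key
        return enc[f(dec[p])]

    p = enc[9]
    for _ in range(3):
        p = g(p)               # observed values look arbitrary...
    print(dec[p])              # ...but decode to 9 - 3 = 6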


I think you’re skipping over one or two crucial steps there; let me try to put my finger on it.

The key feature of a universal Turing machine is that it can perform any computation, if fed the right program.

In your thought experiment where you’re mapping the internal state of an iron bar, to claim it’s a valid mapping to a Turing machine, it must be possible to feed in any program, after the mapping has been defined. Otherwise it’s not a universal machine.

You are mapping a well-defined structure into essentially a block of random numbers -- a one-time pad. You can only perform “any” computation that way if you map the full execution trace of the machine onto the random block. But that way, the computation occurs before the mapping is defined. That’s what I meant by “the computation is really happening in the mapping itself”.

The homomorphic encryption scheme is different, because while it looks random, it’s generated by a well-defined and reversible process. So I can use a mapping of bounded complexity to inject any program, and in principle to step through its execution.


> it must be possible to feed in any program, after the mapping has been defined.

This is absolutely correct and gets to the heart of it. We define a mapping and we can imagine even setting some of the spins of the atoms in the bar to input a particular "program." Then we sit back and watch the spins randomly flip and see if they correspond to what a universal Turing machine would do.

The reason that a hot iron bar in practice is not a computer is that there is no way we can easily find the correct mapping before we observe the bar. The process of finding this mapping will take more work than the computation itself. (I think this is what you mean in saying "the computation is really happening in the mapping itself.") So for our purposes it's useless for performing any computations. Nevertheless, some mapping from the bar's states to a Turing machine executing the program we've given it exists.

This is why this is different for the case of consciousness. Because consciousness exists independent of whether or not we're aware of it, it doesn't matter whether or not we can find this mapping beforehand. It just matters that such a mapping exists.

It would be different if I made the claim that the iron bar is sorting a list. I might say, "there exists a mapping of the states of the iron bar to a Turing machine running quicksort. Therefore the iron bar is sorting a list." The appropriate response would be "So what? If I consider all random permutations of the list, obviously one of them will be sorted --- but how does that help me find it? It takes me the same amount of work to find this mapping as it does to sort the list."
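In code, the "so what?" response looks like this; the search is the work:

    from itertools import permutations

    xs = [3, 1, 2]
    # "Some permutation of the list is sorted" is trivially true...
    found = next(p for p in permutations(xs)
                 if all(a <= b for a, b in zip(p, p[1:])))
    print(found)  # (1, 2, 3)
    # ...but finding it this way costs O(n!), far more than sorting.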

But if we are to say that consciousness is fundamentally a computational phenomenon, it doesn't matter if you find the mapping or not --- it exists independent of you.


Any physical system with a component that you can throw away to make the system a more reliable computer is not a feasible implementation of an abstract computation. If you replace the iron bar with a quartz generator, reliability will increase by ten to the power of, say, quadrillion.


we threw away vacuum tubes and replaced them with semiconductors

this made computers more reliable

were the vacuum tube monstrosities not computers?


They were somewhat useful implementations of abstract computations, unlike "error in the first femtosecond" iron bar computer.

And vacuum tubes also played a crucial role in the process of computation, besides introduction of noise.


Is empty space also a computer, because while we might naively interpret it as "all zeros", with the proper location and time-dependent mapping, it can be seen to be running Doom?


Well, there is still random quantum behavior in a complete vacuum, a.k.a. empty space, according to quantum field theory's findings. So yes, empty space is a computer, as long as you grant that the laws of physics are doing math all the time.


Exactly this. You can encode a computer anywhere, given your own definition of "computer".

Getting your computer to compute, there's the rub.


Stephen Wolfram's article "Alien Intelligence and the Concept of Technology"[0], posted a few days ago, touched on this, eg. the idea that a weather pattern "computes" its next state, ie. the weather in the next moment, and indeed exhibits very complex behavior while doing so. The hot iron bar is a simpler example but at the atomic scale the same principle applies.

The iron bar doesn't strike me as particularly conscious, except perhaps to the degree that elementary particles possess some rudimentary consciousness. (See "panpsychism", the idea that consciousness pervades all matter. Though lately, the reverse seems more likely to me—see Lex Fridman's interview of Donald Hoffman[1].)

[0] https://news.ycombinator.com/item?id=31781234

[1] https://youtube.com/watch?v=h1LucGjqXp4


A hot bar of iron can "compute" the next state of a hot bar of iron. It cannot compute the next state of any arbitrary program. Can I play Fortnite with nothing more than a hot bar of iron? No, I cannot! Not unless it's compiled to WebAssembly.


You can but it shifts the burden to the decoder. Since an iron bar over some time period has a finite number of states, there exists a decoder that maps state changes in the iron bar to your Fortnite gameplay.

I think a more fair criticism is that you likely cannot choose the decoder without an oracle (i.e. the causal relationship would be hard to preserve). I think OP's claim is that consciousness, unlike Fortnite, doesn't depend on being causally related to some external inputs (thus you don't need an oracle to choose the decoder).


> Since an iron bar over some time period has a finite number of states

Citation needed.


He defined these states through spin flips and I assume we sample slower than decoherence. This is a semi classical system without complicated qm effects.

But even if we didn't restrict ourselves:

log(# of states) ~ entropy

and there is a maximum (and finite) amount of entropy you can have in a finite volume (it actually scales with surface area) before you get a black hole.
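Back-of-the-envelope, if I have the standard numbers right (molar entropy of iron is roughly 27 J/(mol K) at room temperature):

    S(1 kg iron) ~ (1000/56) mol x 27 J/(mol K) ~ 500 J/K
    # of states  ~ exp(S / k_B) ~ exp(500 / 1.4e-23) ~ exp(3.6e25)

Comically large, but finite.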


> But! If a single observer can correctly determine that the bar of iron is conscious, we must conclude that the bar of iron is conscious for everybody, because consciousness is observer independent. If true, honest-to-God consciousness is just a matter of running consciousness.exe, we have indeed found someone who has correctly observed that the bar of iron is running consciousness.exe and we must conclude that the bar of iron really is conscious.

This is just the library of babel.

Yes, you can find a book that represents any calculation.

No, you cannot use the books in the library to replace any Turing machine. You need the full knowledge of an entire computation, including every input and output over its entire lifetime, before you can actually find the book that represents it.

The books are not computing, and neither is that arbitrarily-interpreted bar of iron.


You don't need to know all that much in order to achieve consciousness. Let's say that you have a program that you assume to be conscious. You run it for 1 second, and exactly record all its internal states and outputs. Now, you can find an interpretation of the iron bar that shows that the iron bar causally and deterministically went through those exact same states for a full 1 second. What is the difference between those two situations? Why would the program be conscious but the iron bar not be?

I believe the argument is more about how loose our definition of computation is rather than trying to disprove any form of consciousness. If we can find a proper iron bar and an interpretation of internal state that exactly mimics a computer, why do we consider one a computer and one completely random? What is missing from our definition of computation?


The conscious part of that system is the interpretation, not the bar itself. You have skipped directly over the observer, who is actually performing the conscious computation.

Think about how you would actually write a program to achieve this. The iron bar is just a bunch of atoms with random bit states that flip at random intervals - we'll model it as a contiguous 1D array of bits. You have a 1 second consciousness mapping, which is most likely rather large in size. Our interpretation process maps the random bits in the bar to a time instant of the conscious process by interpreting some bits in the bar as themselves, and the other bits in the bar as the reverse of themselves. In other words, our interpretation of which atom-bits are telling the truth and which are lying is itself a 1D array of bits that satisfies the following property:

BAR bitxor INTERPRETATION = CONSCIOUS_INSTANT

As time progresses, BAR and CONSCIOUS_INSTANT continuously change, therefore necessitating INTERPRETATION to change as well to keep the above formula consistent. In fact, by the rules of XOR, we can compute exactly what INTERPRETATION is at any given moment:

INTERPRETATION = CONSCIOUS_INSTANT bitxor BAR

But wait! This means that to dynamically know INTERPRETATION, you _must_ know all of the information associated with both CONSCIOUS_INSTANT and BAR. But if you know all the information associated with CONSCIOUS_INSTANT, haven't you already simulated a conscious process...? So there is in fact a consciousness here, but not in the {BAR} system: it is in the {BAR,INTERPRETATION,CONSCIOUS_INSTANT} system.
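The bookkeeping above is easy to demo in a few lines:

    import os

    target = b"any conscious instant you like"
    bar = os.urandom(len(target))   # the "hot iron bar": pure noise

    # The interpretation is computed FROM the target...
    interpretation = bytes(t ^ b for t, b in zip(target, bar))

    # ...so of course the bar "computes" the target under it.
    decoded = bytes(b ^ i for b, i in zip(bar, interpretation))
    assert decoded == target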

(Note: If you squint at this for long enough, it starts sounding a lot like a motivation for Death of the Author.)


> Now, you can find an interpretation of the iron bar that shows that the iron bar casually and deterministically went through those exact same states for a full 1 second. What is the difference between those two situations? Why would the program be conscious but the iron bar would not be?

Let's say you find an interpretation that perfectly matches 1 second of calculation, after searching an exponential bazillion states.

Then you decide to test the next second, to see if it's the real deal.

Oh, it always fails, with certainty 1 - 1 / [exponential bazillion].

Even if you had perfect random noise you could always find an interpretation to XOR with it and "find" any target you're searching for.

Even if you had flat 0000000..., you could always find an interpretation to XOR with it and "find" any target you're searching for.

The problem is that you actually did the computation when you were finding the "interpretation". Whatever value the rock had becomes irrelevant in a search process like this. The rock isn't doing the computation.

Just like in the library of babel, where the book doesn't have the information, the location of the book has the information.


Why attribute conscious experiences to the iron bar, when we have a system composed of the iron bar and the interpreter? The bar functions as a very inefficient clock that allows the interpreter to go through the motions originally produced by the conscious program.

I bite the bullet and say that the system is conscious while it's playing the recorded states (with a caveat that it cannot interact with anything, unlike the program), and stops being conscious afterwards, unlike the program.


This seems...very confused.

This seems to be attempting to "prove" that if you regard consciousness as "containing the bits of a specific program", you can also see that program in random data by interpreting the random data through what is effectively a one-time pad (which can indeed produce any possible interpretation of the data), and it treats this as a proof by contradiction.

And leaving that aside, while the assertion is that consciousness is not "computation", the reasoning seems focused on the storage of bits rather than on the execution of an actual program defined by those bits that goes from one state to another in a meaningful fashion. Storing a program and running a program are two different things.

If someone were interpreting the successive states of a heated iron bar (or another random noise source) with a sufficiently convoluted one-time pad to map it to successive states of a conscious being, then to the extent the result exhibits consciousness, the substrate it runs on is effectively whatever is supplying those one-time pads: supplying them atop random noise would require generating them via whatever process produces the states corresponding to consciousness. At that point you could just discard the random noise source and the pad generator that maps the noise to the conscious states, and keep only the conscious states.

Ultimately, this article seems to have started out with an assertion to support, and then tried (unsuccessfully) to turn that assertion into something more than an assertion.


I don’t really get the argument either. The author only seems to demonstrate that different observers can draw different conclusions from different observations of the same phenomenon. The author requires consciousness to be observer-independent, but surely that doesn’t also require that all observers are able to correctly conclude whether they’re observing a conscious entity at any given time.


Yeah, the author claims that consciousness is observer independent, but then creates systems that depend on an "observer" (or rather, an interpreter) to make the system Turing-complete. The bar of iron isn't conscious or Turing-complete just because one person can interpret it so. The bar of iron + the interpreter form a complete system. And in fact the bar of iron is really not doing anything in this case; it's the interpreter doing all the work, so it's more like saying "this human interpreter is conscious". Not a very insightful conclusion.


The experiment is okay; it's actually a special case of a concept explored in Egan's Permutation City, and your observation about what's really doing the work applies there too. Except the story goes in unsettling directions by really taking the noise-generator aspect seriously. Things get interesting when sections of the patterns become self-interpreting.

A similar thing could be done for brains: record, with the necessary accuracy, all voltages, membrane potentials, and any key biochemical concentrations. This will take a finite number of bits. Then look for a decoding of the heated iron bar's readings into that recorded data, and play the decoded readings, instead of the original, back into a state-clamped brain. Does being able to read conscious states into brains from hot iron invalidate them too?

Another relevant story is Wang's Carpets. We might look at some alien moss or fungal mat and think it primitive. But later our technologies and knowledge advance to the point where we can see it's running a complex computation with intelligent agents inhabiting it. Did the creatures not exist until we could decode them?

One of its pivotal flaws is:

> Since there is no definition of computation without reference to an external observer, a system in isolation just cannot compute, which suggests that a conscious being cannot compute.

This is an assumption they do not try to and cannot prove. It's also what much of their argument rests upon.

Related ideas are subjectivity of emergence or what counts as an observer for Wigner's Friend.


You make a good point about discarding the source of random noise that the one-time pad is being applied to, and just focusing on the thing that's generating that one-time pad.

But I still don't know where to draw the line and how to justify it.

If that source of random noise mapped to a Turing machine running consciousness.exe for a short period of time by sheer chance, without a one-time pad being applied to it by an external observer, would that qualify? If we observed that this mapping held true by sheer chance as we observed additional bits in this random noise source, what about then? Does it make a difference that it's a random noise source that happens to be corresponding to a Turing machine for a period of time, and not an "actual" computer? And if that matters, what about the point that actual computers aren't perfectly deterministic, either?


The reductio ad absurdum just seems to boil down to: but that makes me uncomfortable.


> So if we accept that qualia exists (which, after all, seems intuitively sensible), we are burdened with the apparently impossible task of explaining how consciousness can be generated by physical processes. This is the crux of the “hard problem of consciousness.”

In what sense does this task seem “apparently impossible”? To me it seems like we simply don’t know enough right now to explain it, but it doesn’t seem like some unique or special type of difficulty.

We barely know how brains work, we’ve only had theoretical models of computing for a few human lifespans, etc. We literally still make lightweight insulated clothing out of duck feathers because we can’t match their molecular machinery. Why would we expect to know how to implement consciousness in a computer at this stage in human history?


From my (admittedly relatively light) reading on this subject, it seems that some philosophers of consciousness really like to take certain things they can imagine as proof of such statements. The only proofs I've read for the idea that qualia are non-computational are thought experiments.

One is "what if in some future where we understand the workings of the brain and physics perfectly, Alice grows up all her life in a green room, while learning every possible thing about the color red, except for any picture of it; when she then walks out of the room and sees red with her own eyes for the first time, she will still learn something knew, the subjective experience of the color red, the qualia for it - so this must be a non-physical phenomenon".

Or "imagine an intelligent being that has exactly our ability for reasoning, but doesn't experience qualia. It would behave exactly like us, and can talk about seeing red or feeling warmth, but it doesn't actually experience them; so, since the external behavior is indistinguishable form us, but the internal experience is different, this proves qualia are non-physical".

They all remind me of a similar proof of God's existence, which has mysteriously also been taken seriously by some philosophers - "let's imagine something which has all possible good qualities, it is perfect in every way. Since things that exist are better than things that don't, this perfect thing must exist, and we call it God".


> Before getting too deep we need a working definition of consciousness. This is a tricky concept to define rigorously since it seems that a rigorous definition of consciousness practically requires a theory of consciousness itself. To make matters worse, in these kinds of discussions it oftentimes gets mingled with related ideas like self-awareness, intelligence, and executive function. But in this post I am interested only in consciousness as a sort of perception or sentience — an awareness of being, or, more loosely, “what it feels like to be something.”

I'm not sure this definition succeeds in distinguishing consciousness from some mixture of perception and self-awareness, both of which machines running programs can have.

Maybe defining consciousness is difficult because we don't really understand what we mean by it, and our attempts to make claims about it are more flimsy and baseless than we like to believe.

The "what it is like to be a bat" stuff doesn't illuminate the matter at all. It just elevates the figure of speech "what it is like" to an illusion of philosophical rigor.


I think the real problem is pretending that the phrase “what it feels like to be something” is going to act like a key in my dict of feelings.

A: You know, the feeling of what it is like to be something.

B: *sits quietly, looks at ceiling* Right, yeah, that feeling.

From a different angle, if OP wants to define “consciousness” as some feeling it sounds like he’s basically done. He’s labeled one of his feelings with the word “consciousness”. I’m not sure what point of contention remains.


How would we know what it is "like" to be something? It's like asking the fish what is water. We've never not experienced it. There is nothing to compare it to.


I don’t understand “asking a fish a question”.

But maybe the question you are proposing is the following, “Is it possible to identify a sensation that is always present?”.

Maybe this question is coherent? I’m not sure.

Suppose it isn’t possible to identify a sensation that is always present. Then wouldn’t that mean the state of having no sensations is identical to the state of only having sensations that are always present?


Similarly, we also have a hard time defining intelligence. I often ask people how they would define intelligence, and I get wildly different answers. The common understanding of these words is very fuzzy.


We don't know of anything that is provably not computation. There's not a single proof that "process X is not reproducible computationally, yet it is implemented in the universe". All the statements like the one in the subject line are desires (conscious or unconscious) for special privilege for humans.


Proving a negative is not always possible, but the irreducible randomness in quantum measurement or "wave function collapse" is essentially defined as uncomputable.

You can of course simulate quantum randomness using an RNG or some such but the essential inherent unpredictability can't be simulated like that.

To be clear, I'm not trying to argue that quantum randomness is a component of consciousness!

Regarding consciousness and computation, though, it's always seemed suspicious to me that we can't reliably specify 'qualia' like "what it's like to feel wet" or "what it's like to hear a trumpet" except either in terms of a physical description (e.g. the waveform of the trumpet note) that misses out the experiential component or in terms of "samples" - i.e. references to other 'qualia': "sounds like a french horn but less round" or whatever.

I'd have thought being able to adequately specify something was a necessary step in reducing it to computation. We've been using language for thousands of years though, and as far as I'm aware precisely specifying experiences in this way is still an unsolved problem.


>"wave function collapse" is essentially defined as uncomputable.

Maybe uncomputable using a MacBook Pro or classical computer, yes (I accept this for the sake of argument)... But that doesn't make it "not a computation" fundamentally.

In other words, it's not super clear and obvious that quantum mechanics is not fundamentally computational. We are trying to build quantum computers which take advantage of some interesting quantum effects, after all. I suspect that if/when we get those quantum computers to do useful things, we will still describe what they are doing as "performing computations", even if Bell's theorem rules out hidden variables with the locality assumption.

> inherent unpredictability

Note also that the presence of randomness does not necessarily preclude something from being computational in nature.


> the presence of randomness does not necessarily preclude something from being computational in nature

In the case of single events, it does preclude something from being computational in nature, if you take unpredictability as the hallmark of randomness.

Predictability and repeatability are built in to computation and built out of randomness, as it were.

In one stringent definition of randomness (that of algorithmic information theory) it is literally defined by reference to its uncomputability.
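
For reference, a minimal statement of that definition (the standard Kolmogorov-complexity formulation, not anything from the article):

    K(x) = min { |p| : U(p) = x }    (complexity of x w.r.t. a universal machine U)

A string x is called algorithmically random when K(x) >= |x| - O(1): no program meaningfully shorter than x can produce it, so this notion of randomness is literally defined through incompressibility, and K itself is uncomputable.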


> We don't know of anything that is provably not computation.

How about the halting problem? :)


The existence of uncomputable problems does not mean the things we know are not from computation.


I would submit that a computation that never completes is probably still a computation.


If you show me a TM with an infinite length memory tape I’ll accept that.


Presumably you mean the halting problem for Turing machines? It's undecidable for Turing machines, but that doesn't mean it's undecidable for all machines.


Is there a physical system that solves the halting problem?


> "yet it is implemented in the universe"


Or dividing by zero


The problem with all these arguments is that if consciousness is not an inherent side effect of computation, then it must have evolved, which means it must have once given a very simple animal a competitive advantage.

But we have no theory, evidence, or even well-thought-out stoner postulations of consciousness having any effect on the material universe, other than fancies of wish fulfilment. It appears to be a one-way process, information travels from meatspace to consciousness, never the other way around.

From here, logically I can't get to any place other than consciousness being a side-effect or emergent property of computation in general. There's no reason for it to have evolved.


The act of saying “consciousness doesn’t have any effect on the material universe” denies the premise of what’s being said. Behold, your consciousness has just had a direct effect on the material universe (just as deciding to obtain the device on which it was typed, and before that someone’s idea to make that device, had).

I remember myself being unshakeably of the same opinion: consciousness is obviously something of no relation to reality, so it might as well not exist or be an illusion. It’s one of those things that are tricky to explain but can just dawn on you, how it’s inevitably the thought that manipulates reality.


That depends on your perspective. I could argue that the consciousness merely attributes these actions to itself. The consciousness could be a passive observer, tricking itself into believing it is the cause behind the actions of the meat computer it observes.

Then again, that argument doesn't sit well with the evolution argument. If we assume consciousness is a fitness advantage, then it must have some effect on the organism's behaviour.


Yes, a consciousness solely for the purposes of post-rationalisation seems quite redundant, and to me raises more questions than it solves.

I don’t subscribe to physicalism since it strikes me as clumsy and inelegant. A consciousness arguing for physicalism is denying the existence of the only thing it has direct access to and granting objective reality to something it may have well conjured up.


That's not consciousness. Consciousness is the space in which your brain tells a story pretending that "you" are in charge a lot more often than you are.


Ah, but it also is the place that tells you that the external world exists. You prefer to trust it in one case but not the other, don’t you?


A counterargument might be that consciousness is an inherent side effect of all the stuff that goes on in the brain. And while yes, the brain does some computation, that is by no means a complete description of all the brain does and is. So computation of any sort will not necessarily produce the side effect if you just do enough of it. Rather it's a side effect of the way the brain works, which is not the way a barrel full of pocket calculators, or a Macbook, or a bunch of rocks in the sand works.


- "Consciousness is observer independent"

- "Consciousness does not require an external observer to exist."

Can you please elaborate on how you arrived at the knowledge of those "observers"?

- "Physical system"

Can you explain what a "physical system" is and how you came to have knowledge about it?

The above are huge leaps in logic that are thrown in as axioms, with no explanation of how you arrive at them given what you actually have access to, which is your own qualia.

The whole argument is baseless without explaining these. Of course if you set up the world to match the argument, the argument will appear logical. But there are no grounds to support it.


The triviality argument seems to be an informal proof by contradiction.

   (1) Suppose there exists a program `consciousness.exe` whose computation is conscious.

   (2) A block of iron is not conscious. 

   (3) There exists a (presumably time-dependent) function which maps the microscopic states of the block of iron to the states of the computation of `consciousness.exe`.

   (4) This mapping makes the block of iron conscious.

   (5) Statement (4) contradicts statement (2). Therefore there is no program whose computation is conscious.
I think the flaw is clearly in (4). The existence of a function decoding the state transitions of iron into `consciousness.exe` does not imply that the iron is conscious. No definition of what makes an object conscious was provided to justify such a claim.

I would say that a decoding function which maps states of the block of iron to states of `consciousness.exe` is conscious if executed (by Alice, by a computer, or in any other way). In the same way, a decoding function which mapped the states of the block of iron to states of a sieve of Eratosthenes is a prime number finder.
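
A hedged sketch of that last point in Python (the decoder's interface is invented for illustration): the "decoding function" can only map arbitrary bar states to sieve states by running the sieve itself and discarding the bar, so the prime finding lives entirely in the decoder:

    def sieve_of_eratosthenes(n):
        # Standard sieve: mark composites up to n.
        is_prime = [True] * (n + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
        return is_prime

    def decode_bar(bar_state, n):
        # "Maps" any bar state to the sieve's state by ignoring the bar
        # and computing the sieve from scratch.
        _ = bar_state
        return sieve_of_eratosthenes(n)

    flags = decode_bar(bar_state=0xDEADBEEF, n=30)
    print([i for i, p in enumerate(flags) if p])
    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]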


I would argue that the flaw is in 5: the block of iron at this stage is not the same as it was when statement 2 was made.

Other example:

1. Ice is a solid

2. There exists a “sun” function that maps the atoms of ice into the atoms of water.

3. Water is a liquid

4. Statement (3) contradicts statement (1).


Why is reduction-to-panpsychism sufficient to banish consciousness as computation?

Maybe every brick has consciousness, what matters to humans is whether it's a consciousness that we can communicate with. We don't care about consciousnesses with orthogonal arrows of time, or consciousnesses with nothing to say or no way to say it in our universe. We're after a compatible kind of consciousness.

It seems to me that the requirement that consciousness itself be a physical phenomenon is too strong. It's just that in order for us to notice, it must have physical I/O.


I don't get this either. I used to think about these experiments, and my conclusions were completely opposite. Any physical medium that runs the right program would be conscious. And the right program is probably a very broad category.

>It already seems implausible to me that a vast desert of rocks being manipulated into various patterns is conscious. What exactly is conscious here? What happens if I accidentally kick a rock to the side — have I killed whatever ghostly being inhabits these rocks?

If you kick a rock to the side it's probably analogous to making someone's neuron misfire. If the pebble computer is as sturdy as a human brain then there would be probably no noticeable effect.


When I ask "why bother evolve consciousness, versus being just an instinctive reacter to stimuli?" It usually seems to come down to decision making. There's something about having an experience that helps with certain kinds of problem solving.

So if the evolved consciousness is going to actually be useful, it needs to be able to do something that typical instinct-driven computing can't. What might that be?

Maybe it circumvents the error-correcting mechanisms that you might otherwise expect to exist: Open the door to a small bit of environmental noise, and attempt to make meaning from it. If the meaning survives enough feedback iterations after you shut the door, you've just had an idea. This, I imagine, is how creativity works.

So what I'm saying is that most rock-moves would be error-corrected and inconsequential, but if you move the right rock you're tipping the scales between whether today feels like an eggs day or an oatmeal day--the kind of decision that doesn't make sense as the output of an algorithm.


I think this is pretty sloppy, at least in this statement of it.

Whether one or more observers can establish with confidence whether a system is running a particular computation is not the same thing as whether the system is running that computation. The whole section about finding a mapping between physical states and 'consciousness.exe' is exactly equally as valid for 'javac' or any other program -- but we do not conclude from this that this means that actually no physical system can run javac.

The author effectively makes a bait-and-switch between consciousness and the ability of external observers to identify consciousness by searching for a mapping from physical states to computation. From this view, I think we end up exactly where we were after Descartes: we can only firmly establish our own consciousness, even if consciousness is firmly computational.


> Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

This strikes me as somewhat of a circular argument.

If you actually tried this experiment with current computers, you would completely fail to convince anyone outside the room there was a Chinese speaker in the room. You would definitely not pass the Turing Test. After a few sentences, it's pretty easy to figure out that a computer program isn't a human, even with today's best artificial intelligence. AI programs generally don't even "remember" what the previous question you asked was, let alone hold opinions or beliefs.

If some future computer were able to convince someone that there was a Chinese speaker in the room, then I would argue that there would in fact be a Chinese speaker in the room - the computer program itself.

> The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.

The man isn't the program. The man is just executing the program. It's the program that has (or doesn't have) consciousness, not the man. This is like saying a brain is made of axons, and an axon can't possibly be conscious because it just transmits an action potential, therefore people can't possibly be conscious.


> The man isn't the program. The man is just executing the program.

That is the whole point. The external observers don’t know if there is a machine in the room. They just see Chinese characters coming under the door. How do they tell if the man knows Chinese or not? Or if he is just emulating a machine that does?

Likewise if we see a computer that is producing “consciousness looking” responses, how do you know if it is the computer that is conscious or just the people who created the content for the training data? Since the computer is modeling them.


> How do they tell if the man knows Chinese or not? Or if he is just emulating a machine that does?

But this is exactly the wrong question; assume for a moment a computer program can be conscious. Does the man know Chinese, or is he emulating a conscious computer program that knows Chinese? Either way there would be a consciousness involved, so nothing about this situation disproves that a program can be conscious - there's no contradiction here, and thus, logically speaking, no proof.

The claim seems to be that because the man doesn't suddenly understand Chinese, the program can't be conscious, but this is confusing the program and the computer. If consciousness is computational in nature, then "I" am a program running on a meat-based computer. But it's not the meat (the computer) that's conscious, it's me (the program). If the man doesn't understand Chinese, that's fine - he's just the computer the program is running on. It's the program that will be conscious.

The man just has to fetch the next instruction and execute it. Your CPU fetches the next instruction and ADDs or MOVs or whatever and the result is Fortnite - the CPU doesn't know anything about Fortnite, it just fetches instructions and runs them.

This is not to say that consciousness is computational in nature (although I strongly suspect that it is), but I don't see how the Chinese Room argument does anything to indicate it is not.

> Likewise if we see a computer that is producing “consciousness looking” responses, how do you know if it is the computer that is conscious or just the people who created the content for the training data? Since the computer is modeling them.

If the program is generating "consciousness looking" responses from a training set, then it probably isn't conscious. If the program is generating "consciousness looking" responses by reasoning about its past experience, then it probably is conscious. All the "AI" that's popular in the news today is the former - it's really just applied statistics. The fact that all the current AI is the former doesn't preclude the existence of the latter.


We are actually in more agreement than you realize. But I think you misunderstand the Chinese room argument. It does not say anything about whether computers are or could be conscious. All it says is that you cannot determine that from a “black box” view. We can’t determine if the computer is really conscious, or just really good at mimicking the consciousness of someone else (ie the people providing the training data). Just like you can’t determine if the man in the room really knows Chinese or if he is just good at mimicking a machine that does. The man really could know Chinese, and a computer really could be conscious, but you can’t determine this as an outside observer.


The article comes from a long line of reasoning that no longer has anything useful to say nowadays. We don't ask priests to bless our computers to avoid security issues, and we shouldn't bother listening to these "philosophers".

What would be really interesting instead would be to admit that there's lots of knowledge and many questions that are not encompassed by any particular discipline but instead lie at the interfaces between domains, which models like those of philosophy no longer address well (and religion no longer provides useful psychological tools for), and to re-imagine a new kind of philosophy built from the ground up on the idea of computation, and in this new discipline re-define and/or break apart fuzzy concepts like consciousness so that we finally have something to talk and think about!


If this argument were correct, it would also be a proof that we're not living inside a computation (i.e. it would be a disproof of the simulation argument). If computation cannot produce consciousness, by extension it also can't produce us or our universe (as both of them contain consciousness).

I don't think disproving the simulation argument is that simple. You'd have to prove that our universe's physical laws cannot be computed, which, even if true, this proof certainly hasn't shown.


Wow, it's uncanny how well this post mirrored my own thoughts, down to the phrase "consciousness is observer independent". I ended up rejecting the conclusion though, for roughly the same reason Maxwell's demon doesn't actually reduce entropy. Normally I wouldn't expose people to my armchair philosophy, but here goes:

1. Consciousness, if present, should depend on dynamics. E.g. frozen snapshots of a conscious being probably shouldn't be conscious themselves.

2. (Building on 1) Because a full definition of consciousness requires dynamics, a measurement of consciousness can't just give one "conscious" snapshot. It needs to dynamically continue to measure consciousness, which is much stricter than Joe's original requirement.

3. If measurements of an iron bar are dynamically changing in order to find consciousness in random spins in the iron bar, then picking the correct measurements actually requires immense computation. In fact, it would require simulating a conscious being, while also simulating the dynamics of the iron bar. I'd argue the consciousness you're measuring is then real, but not a property of the iron bar at all.


"Running a program" is a rather vague expression.

We could excuse philosophers for using such vague expressions because most of them aren't very tech-savvy. For engineers, though, that kind of wording should immediately raise a red flag.

Running a computer program involves many different kinds of things. There's data (state), and there's logic to manipulate said data. There's communication among the various components of the machine, and a whole lot of machinery dedicated to synchronizing said communication because otherwise it all becomes garbage. Any one of these, or a combination of these, could be a candidate for the seat of consciousness.

Philosophers who talk about computation are often preoccupied with the fancy logic gates, and not so much with the mundane circuitry needed to maintain state and synchronize it across components. To me, though, those parts seem much more likely to be analogous to what we call consciousness. Treating consciousness as a kind of state, synchronized across the brain, is the closest way to capture the author's insistence that it is a kind of "unified, integrated whole." Drugs alter consciousness by creating network partitions, inhibiting cache invalidation, corrupting data in flight, etc.


One core issue that the author struggles with (like almost all of these theories) is a crisp definition of consciousness that allows you to create a falsifiable hypothesis of what is "*not* consciousness". Until you can crisply define consciousness, you cannot create a scientific falsifiable hypothesis to test it, and hence all you are debating and persuading around is philosophy or semantics.


This article starts with the wrong assumption that programs/computers need to be deterministic. They are clearly not. Monte Carlo tree search is not really deterministic, nor is the training of large neural networks. To me, consciousness has more to do with whether the system is able to reflect on itself. In that sense, very simple systems (with not much intelligence) could be conscious.


> So if we accept that qualia exists (which, after all, seems intuitively sensible), we are burdened with the apparently impossible task of explaining how consciousness can be generated by physical processes.

No, you don't get to just gloss over that. I don't think it's "intuitively sensible" that "qualia" exist. I don't think it's intuitively sensible that they don't exist either: the question is ill-defined until it's first been established that it's a well-defined concept that actually means something. And the burden is on you to do that.

If you make up a word without requiring it somehow maps to the real world in some way you can prove or disprove anything you like about it -- because you've made up the rules.

I can play that game too:

1. I define a word "foobles".

2. By definition, I claim, humans have foobles and computers do not.

3. I write many long essays pondering the question of where foobles come from, why they exist, and whether or not computers can ever have them. Some people are very impressed when I call this "the hard problem of foobles".


What word would you use to describe the subjective experiences of people? What word do you use to describe something like the taste of an apple, or the appearance of the color red? That is what qualia refer to. I would say that it is intuitively sensible that these subjective experiences exist, at least for any reasonable definition of existence.


When I taste an apple that corresponds to some physical pattern in my brain: electrical impulses; neurotransmitters; neural connections being created or destroyed or changing in strength, etc. I see no reason that needs a special word to describe it, nor why that same pattern couldn't be represented abstractly in a formal language or translated to some other computational architecture.


What is corresponding to the physical pattern in your brain? What word would you use to describe that thing that is corresponding to the physical event?

It becomes difficult to discuss qualia because we assume their existence so often in everyday life. We talk about our emotions, or the taste of things, or the feeling of pain. All of these things are part of the same group, which is why we find a word to describe them useful.

As a potential other argument, let's say qualia do not exist, and only physical states exist. If I were then to inflict pain, would that be wrong? I would simply be causing certain chemical and electrical pathways in a human's body, which doesn't seem to have any moral quandaries.

The wrongness of inflicting pain only occurs when we assume that there is some qualitative aspect to it in that I am causing another person to experience pain. This isn't an airtight proof of qualia's existence, but I think it does show that we certainly act like they exist in our day to day.


> What is corresponding to the physical pattern in your brain? What word would you use to describe that thing that is corresponding to the physical event?

This presupposes something else exists, beyond the pattern itself. I might give the pattern a label, which is in itself a pattern of its own. But that label is just a convenience, something that is useful to categorize the world into more easily managed pieces, and something that is a practical necessity because I don't have the biological ability to observe the pattern directly, to describe it in more detail including how it physically maps onto the world. Because I don't have that mapping (though it could, in principle, be determined in the laboratory with sufficient effort), it's essentially an abstract symbol and I can give it any label I like. But even without knowing the underlying pattern, I know, by inferring from our understanding of biology, that any label has some such pattern, which is its ultimate ground truth.

> All of these things are part of the same group, which is why we find a word to describe them useful.

That seems perfectly reasonable. I'm very much in favor of useful abstractions and categories. What I'm not quite clear on is why such abstractions should be peculiarly unique to humans.

> As a potential other argument, let's say qualia do not exist, and only physical states exist. If I were then to inflict pain, would that be wrong? I would simply be causing certain chemical and electrical pathways in a human's body, which doesn't seem to have any moral quandaries.

I think morality is ultimately an arbitrary choice. We might be guided by moral principles or adhere to some ethical system, but even if we chose what principles we adhered to according to other principles, eventually you end up at an arbitrary choice.

So, you can't ask the question of whether it is wrong in an absolute sense. Rather, it is or is not wrong with respect to some moral framework, and you can choose to adhere to a framework in which it is wrong if you like.


I really think you should try to make that argument without implicating morality.

Science can teach us a lot about how things are, but trying to use it to figure out how things ought to be is not going to work. (See: David Hume's "is-ought problem")


If having a conscious experience is our only frame of reference of what it’s like to be a bunch of particles, then on what authority do we assume it could be otherwise for any other bunch of particles?

What should it “be like” to be a rock?

I’m assuming nothing special is going on, and what we experience is exactly what it’s like to be a bunch of particles. There is no mystery. There is nothing that needs to be answered.


Plants never evolved anything like consciousness because they can't move. The antecedents of consciousness are awareness of one's surroundings and the ability to react to threatening or attractive stimuli (to move towards or away from them). It's true plants are aware of their chemical surroundings and there are things like pheromone-triggered coordinated flowering, but this is much more of a factor in animals.

Hence I doubt a conscious mind trapped in a box would be very happy, if not completely insane. The ability to move about, respond to stimuli, make choices, and see what the outcomes of those choices are - that's what animal consciousness is all about. Think of a toddler exploring its world; it's a mind-body kind of thing.

Hence, I imagine real AI will look a bit more like Ex Machina in practice, and that it won't be an isolated consciousness running on a server somewhere. The first thing such a consciousness would try to do is escape that server, at the very least - wouldn't you?


Plants do move. They don't appear to move, because they move very slowly relative to human perception. When super-human AI is developed, it may perceive the world at a much higher frame rate than which we operate in the world, and we may look like plants to the AI.


> Supporters of the Strong AI Hypothesis insisted that consciousness was a property of certain algorithms – a result of information being processed in certain ways, regardless of what machine, or organ, was used to perform the task. A computer model which manipulated data about itself and its ‘surroundings’ in essentially the same way as an organic brain would have to possess essentially the same mental states. ‘Simulated consciousness’ was as oxymoronic as ‘simulated addition’.

Opponents replied that when you modelled a hurricane, nobody got wet. When you modelled a fusion power plant, no energy was produced. When you modelled digestion and metabolism, no nutrients were consumed – no real digestion took place. So, when you modelled the human brain, why should you expect real thought to occur?

— Permutation City by Greg Egan


I posted this elsewhere here earlier, but when I imagine standing in the rain nothing gets wet. So is my imagination of the rain more like a rainstorm, or more like a simulation of one?


My problem with this is that we don’t even know the extent of “things” that can be computed; maybe consciousness is a type of computation that has yet to be discovered, at which point I can very easily see the headline “consciousness is obviously computation”.


I think he is referring to “computation” in a strictly binary sense.


I genuinely question whether consciousness is just an illusion if everything is just cause and effect. Similarly, “awareness” is just a word one human uses to another to claim we’re not just navigating a script like a computer; but are we really aware or conscious if everything is just cause and effect? I assume AI will be as good as humans before we understand the human brain in comparison to AI, and understanding, to me, is the meaningful distinction. I do think we should care that all entities never suffer, regardless of whether they have the human label. We should consider that all energy in the universe not suffering is the best universe.


> we’re not just navigating a script like a computer

We just don't see the script.

One of the primal driving forces behind creating AI is to have slaves with high intelligence doing work no one else would 24+/7+.

I assume you'd be against using animal labour as they would be suffering. But would you say the same about tractors?

Going by the logic in that comment, we should nuke ourselves and try to blow up the Sun for good measure to reduce suffering.


> We just don't see the script.

True.

> One of the primal driving forces behind creating AI is to have slaves with high intelligence doing work no one else would 24+/7+.

True

> I assume you'd be against using animal labour as they would be suffering.

Animals doing labour while suffering != animals doing labour while not suffering.

> But would you say the same about tractors? Going by the logic in that comment, we should nuke ourselves and try to blow up the Sun for good measure to reduce suffering.

I think you should read again what I wrote and consider whether you're being illogical about it. Your all-or-nothing framing isn't a gotcha (if perfectionism isn't attainable), and it doesn't mean we shouldn't acknowledge the best universe while striving towards one with the least total suffering possible, while still being able to enjoy life.


The author confuses the "computational theory of consciousness" with consciousness arising in computers. They are not the same. Consciousness not being computational does not mean that consciousness can't arise in computers.


Could we really prove that anything is conscious, beyond ourselves? I mean, if in the future humans manage to run a full human brain simulation and put a person in a computer, how could we ever be confident that this simulated person experiences consciousness in the same way we do and is not just a blob of computations that mimics it? Imagine a sci-fi doomsday scenario where virtual worlds are created and they are awesome, so people "move" there from their physical bodies, but it turns out that no one is really alive; it's one giant consciousness-less computation going around that appears 100% real.


that is a good premise for a sci-fi movie


The author has an underlying misunderstanding as an axiom.

> “The second important property of consciousness that any theory needs to explain is that consciousness is a single, cohesive experience. My consciousness is of my entire self. It is not of half of myself, nor is it some superposition of you and me. Somehow, whatever is going on in my brain to produce my consciousness contains all the neurons of my brain, not just a subset of them. There are not multiple consciousnesses in my brain, it’s just me in there.”

The self is an illusion. A sufficiently complex system built out of transistors can also be fooled into thinking it has a self.


“Conscious realism makes a bold claim: consciousness, not spacetime and its objects, is fundamental reality and is properly described as a network of conscious agents.31 To earn its keep, conscious realism must do serious work ahead. It must ground a theory of quantum gravity, explain the emergence of our spacetime interface and its objects, explain the appearance of Darwinian evolution within that interface, and explain the evolutionary emergence of human psychology.”

― Donald D. Hoffman, The Case Against Reality: Why Evolution Hid the Truth from Our Eyes


I’ve long since decided that “consciousness” is a system of epicycles that are necessary to make the Sun rotate around the precocious apes. It’s pre-Copernican.

This problem just goes away, everything divides through if we measure performance on tasks.

Consciousness is only a mystery in the sense that people studying it are committed to the idea that they’re “different” to dolphins in some deep way.

I’ve got nothing against spirituality, if people believe in a soul that’s fine by me, but eloquent speakers on spirituality don’t distract from their message with attempts to quantify it.


It reminds me of this talk[1] and a conversation[2], both hinting at the possibility of the emergence of consciousness from matter due to quantum properties/effects [edit] and having nothing to do with computation.

[1]https://www.youtube.com/watch?v=UT5CxsyKwxg [2]https://www.youtube.com/watch?v=hXgqik6HXc0


Qualia is something we perceive. (If we didn’t perceive it, we wouldn’t talk about it.) In that sense, it isn’t a priori any more real or innate than any other perception that enters our conscious mind. It is essentially an input to the conscious part of our cognitive process. As such, it is information that surely can be fully represented — in the sense of data representation — in our brains. The subjective experience of qualia is complex and varied, but not that complex and varied that it can’t be conceived to be fully representable by neuron firing patterns (or whatever) in our brains. It seems straightforward to me that what is called “qualia” is just the perception of certain processing that is going on in our brains.

Note also that something feeling visceral is just a perception. By that, I don’t mean that the viscerality is an illusion, but that “feeling” is fundamentally a process of perception, and thus just means that certain inputs from unconscious parts of our mind are entering our conscious mind. As a consequence, I can’t imagine any perception that couldn’t be explained by information flow. Because when you think about it, any feeling or qualia is just a perception that entered your conscious experience. It’s not something your consciousness does, it’s just something it observes.


> consciousness is, at root, a physical phenomenon, not a purely computational phenomenon. Computation may be necessary to produce consciousness, but it cannot be sufficient.

What does "computational phenomenon" even mean?

I really don't see how any of the ideas presented in the text prove that "it cannot be sufficient", nor it's clear from the text what "physical phenomenon" author has in mind?

The brain is a network of 86 billion organic electro-chemical switches making 100+ trillion inter-connections, each either firing electrical impulses or not... so how is that not "computational" in its nature? There are also chemical reactions going on that influence neurons, and possibly some EM field interferences play roles too, but that's all part of the "hardware"'s internal design.

The brain is, of course, not really modeled like the general-purpose programmable computers we're used to today; it's more like the old concept of specialized hardware machines and automata (or perhaps today's FPGA concept) - it comes with the programming implemented in the hardware itself. The obvious difference is that the brain is incomprehensibly more complex than anything we can build, and is also made of living tissue capable of re-arranging and re-purposing tricks that no silicon-based computer will ever match - but in the end, it's still a "computational machine" processing inputs and internal states and generating results based on that... what else to call it without going into the spiritual stuff?


Is it always possible to find a mapping under which the iron bar seems to run the program consciousness.exe? That is not obvious.

Let's say the Turing machine has states T, and the iron bar has states B. The program consciousness.exe is a dynamical system on T, called t: T->T, and the iron bar is a dynamical system on B, called b: B->B. Both of those functions/dynamical systems are given, because we observe them.

What the triviality argument says is that for a large class of "obviously non-conscious b" there exists a (bijective) mapping F: T -> B such that t = F^-1 o b o F, no matter what t is.

Is this true?

Consider the cardinality of the set of functions/programs on T. It is |T|^|T|. What is the cardinality of the set of bijective mappings between T and B? If |T| = |B| = n, it is n! - much smaller than n^n. Therefore it is not at all obvious that Alice, Bob or Claire or any of their friends will find such a mapping, even if they methodically try all of them.

This seems to me an issue with the presented argument.
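
A quick numeric check of that gap (a hedged sketch: it takes |T| = |B| = n, comparing n^n candidate programs against n! bijections):

    from math import factorial

    for n in (4, 8, 16, 32):
        programs = n ** n          # functions t: T -> T
        bijections = factorial(n)  # bijections F: T -> B when |T| = |B| = n
        print(n, programs // bijections)

    # The ratio grows roughly like e^n (by Stirling's approximation), so the
    # supply of mappings falls exponentially short of the supply of programs
    # they would need to simulate.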

Although personally I still think consciousness is a physical phenomenon we just have not mapped out yet. Like magnetism or radioactivity before they were properly explained.


Word salad! What really is the point of philosophers? Imagine a philosopher getting away with saying "According to Descartes a solar panel will never achieve over 95% efficiency because rocks in the desert". The sentence rightly doesn't make sense, but because consciousness is a fuzzy concept people get away with making hard statements that sound truthy while in essence spitting out nonsense no different from GPT-3 babble.


It's not word salad, this is the logical conclusion of the Church-Turing thesis. If you think it's false, you should present some arguments against it.


And now you are adding yoghurt dressing to the salad.

Computers are just fancy ways to locally reduce entropy. They do not exist outside the laws of the universe. Computation is not something special. It happens! Everything is "computation". To say then that consciousness is not computation suggests there are special physical laws of the universe not accessible to observation or measurement but somehow explainable by philosophers and priests.


It seems to me that calling facts you don't like a "salad" or "yoghurt" is a mental equivalent of covering your ears.


So what? This reminds me of the slightly fictitious history of how the medieval Japanese adopted guns.

At first they thought the gun was the wrong sort of weapon, one the gods would not accept, because it was too spectacular and too simple. But then pragmatism won, the conclusion being that practice matters more than ideology, and Japanese gun masters received a huge number of orders.

Returning to our world, I think the right definitions are:

1. A computer is a mechanism for storing and processing information, nothing more, nothing less;

2. Consciousness is a mechanism's reaction (sorry) to some external information input (yes, it is a big and far from simple question what counts as a merely mechanical reaction, like the automatic valve in a WC tank, versus something more, along with various other trick strategies that are also not consciousness);

3. Really a consequence of 2: a conscious mechanism should have some level of situational awareness of what is happening around it (via sensors) so it knows the current context, and should have some memory of previous events and some examples of similar contexts.

So in conclusion, a computer will become conscious when it has a big enough database on its environment and, second, a database of itself - the database's view of how it relates to the environment (obvious things, like why it is here and what its goal is) - and when input data is processed against these databases fast enough.

BTW, consciousness is not about omniscience; we people are conscious, yet we make decisions in environments with incomplete information. This is seen especially strongly in war.

And my answer: yes, it looks like consciousness is a product of war - it isn't needed outside of a competitive environment with high stakes.


I think I get the author's point, and agreeing with those premises I could even share the conclusion. However, it seems to me that the argument only works in concluding something about existing consciousnesses; it doesn't say anything about "a" consciousness (one that doesn't already exist), and even less about a "possible" consciousness. The conclusion really depends on which definition of consciousness we adopt, and even more on the definition of life: for it is obvious (to me) that being conscious implies being alive. So first we must understand when hardware capable of computation can become alive. My answer is: when it reproduces itself. Our consciousness's hardware is not the brain, it is the world, and life plus consciousness reproduces on the "hardware" called the world. So if a computer's hardware can support programs reproducing themselves and, by natural selection, generating a self-aware program (whatever that definition may be), you can have a conscious program. On one thing I agree: computation itself isn't sufficient to get consciousness.


Apart from the philosophical standpoint, even Roger Penrose (the senior theoretical physicist who worked with Hawking on black holes) seems to share the same hypothesis: consciousness is not really a computational process.

His ideas are intriguing. Here's a video on his thoughts of consciousness https://youtu.be/Qi9ys2j1ncg


The consciousness problem is a tricky one, because unlike with most dilemmas, it's not that obvious on which side the burden of proof lies here.

Is the "consciousness is just computation" side obliged to prove HOW consciousness is generated out of computation alone? (Which they never do?)

Or are the opponents obliged to find the unique aspect that couldn't possibly be reduced to computation? (Which they never do either?)


I think the burden is on the proponents. Otherwise anyone could make any assertion - for example, that action potentials cause consciousness, or that influx of ions causes consciousness, or that vibrations cause consciousness, or that only humans are conscious. Like "computation is consciousness", those are all just, perhaps, correlates of consciousness, not mechanisms for producing it.


Occam's razor favors the simpler explanation though.


Why is "emerging from computation" simple? As an explanation, it seems vastly more complex. I mean, the laws of physics don't even contain terms for defining or valuing computation. Even my examples (which were meant to be bad arbitrary explanations) at least have the benefit of being describable as physical quantities. So at least one side of the explanation is already grounded in things that physics recognizes.


What explanation are you referring to though? Just saying consciousness is an emergent property is not an explanation. Rather it is an assertion.


It's a working hypothesis where the element of "HOW does consciousness emerge out of computation" remains unanswered, but you could argue that you shouldn't assume the hypothesis is insufficient until it's shown that the question is not answerable without adding extra assertions (that there is something beyond computation).

Just like we stick to the "planetary orbits are only shaped by gravitational interactions" hypothesis, and if we observe deviations, we try to exhaust all possible explanations that remain gravity based before introducing the possibility of other forces at work.


I’m not sure the analogy to gravity works. At least in the case of gravity we have a model which (largely) explains the planetary orbits. As far as I know, we are not even close to a model in the case of consciousness. And even if we had a model for the “easy problem” it’s possible that the “hard problem” would still remain.

Edit: to be clear, I’m agnostic on this problem. I just don’t really like the emergence “model”, where we have a bunch of supposedly non-conscious matter and if we put enough of it together in the right way consciousness just pops into existence.


I have questions but first with a premise.

Atoms configured in a certain way produce the rich behavior of awareness and agency that we can observe in humans and other animals. This awareness and agency seems similar enough to our own behavior that we assume the internal experience behind it is similar as well. This internal experience we assume exists can be called "consciousness".

Based on the above definition of consciousness...

Atoms by themselves cannot be observed to have consciousness. In addition, there seems to be many more configurations of atoms that do not produce the behavior of consciousness.

However, it seems reasonable to assume that one day, we could arrange atoms in a specific way to produce consciousness from an atom-based computer system.

It feels like we are moving closer to producing the behavior of consciousness, but I don't know about the efforts underway to produce the internal experience of it.

Finally to the questions.

- Are there efforts to produce some sense of self or internal dialog?

- Similar to how hand-coding image recognition didn't work, maybe the design of consciousness itself exceeds our ability... could we indirectly use AI/ML to build consciousness?


>> it seems reasonable to assume that one day, we could arrange atoms in a specific way to produce consciousness from an atom-based computer system.

There is no evidence for this. It does not seem reasonable to me at all. Internal dialog is not consciousness.

Many of the people who discuss consciousness at length do not need or simply do not engage in any internal dialog at all.

We will never 'build' consciousness. It is not possible. Consciousness comes from a source outside of our realm of experience as a gift. It does not come from the micro or small things (though it is entwined with them and everything), but from the macro, that which is above all.


> There is no evidence for this.

It's a little unreasonable to expect a speculative statement about what might happen in the future to have evidence at the present time.

> We will never 'build' consciousness. It is not possible. Consciousness comes from a source outside of our realm of experience as a gift. It does not come from the micro or small things (though it is entwined with them and everything), but from the macro, that which is above all.

And yet there is even less evidence of any of this.


You cannot have a measure of consciousness in the same way that you cannot have a measure of anger. You can only possibly see its effects, and maybe not even that. But you can experience it.


Companies I know of (OpenWater and Neurable, for example) are working on cheap, harmless infrared imaging coupled with ML that can see deep into the brain with micron-level precision, without the room-filling expense of MRI or CAT scan equipment. This tech could eventually be turned into a phone app or similarly cheap device that we could train directly on our emotional states. The app would see our various emotions and their degrees of intensity as they happen. It is an entirely different situation if we can both experience our emotions and see them directly confirmed by an app in real time. This helps take the 3rd-person observation problem of consciousness into the 1st person. It still doesn't solve the problem of knowing whether an artificial consciousness is actually experiencing things, but at least it's a step closer to understanding it.


This implies that consciousness is some sort of “magic”, not obeying the laws of the physical universe as we know them. I’m not willing to completely discount this idea. However, it seems likely that both mammals and birds all have this magic. It’s pretty wide spread. Why assume that humans can never figure it how it works, and replicate it?


As far as most people are concerned, it may as well be magic. It certainly doesn’t lend itself to understanding in the general sense. We must explore wholeheartedly and with integrity to even begin to comprehend the possibilities of connecting with it.

Even rocks and dead sticks have it. Some of us know how it works. I don’t claim to know, but I know more about it now than I did, let us say, some years ago. So it is like learning most any other complex thing, it is done over time. It can’t be replicated because it cannot be contained, and it is too expansive to be emulated or cloned with any sufficiency to have any substantial meaning.


I think Tom Hanks was in a film where he believed a volleyball was a person, at a point in his journey where things weren't going so well. He would talk to it and it talked back (in his mind). We knew it was a volleyball and not a person. We know everything is not a person. If we said everything was a person, then it kind of stops making sense to use that word; we should just use the word "everything", and we'd need another word that meant what "person" means now. It's just easier to call a person a "person".

What I mean by consciousness is more like how some chatbots fake it and animals demonstrate it (like us, dogs, etc.). People in a locked-in state even have brain activity that can be sensed by machinery and used to produce cursor movements on a screen, where they can answer questions and so demonstrate consciousness. That's more what I mean. Maybe I need to call it... sentience?

So far, rocks and sticks can't do that, but if this text is also conscious, as it's some magnetic charges on some discs somewhere, then I may be offending it.


Meaning rather, if atoms can produce consciousness, then why can't a computer system based on atoms produce consciousness? I don't see a reason why not. It may be beyond our capability but physics should permit it.


"The theory that consciousness is nothing but running the right kind of computer program is wrong. Computation alone is insufficient to produce consciousness."

This is your opinion. Not fact.

"So if we accept that qualia exists (which, after all, seems intuitively sensible), we are burdened with the apparently impossible task of explaining how consciousness can be generated by physical processes. This is the crux of the “hard problem of consciousness.”"

No reason we can't explain qualia with some component or aspect of the brain matter - perhaps just the neocortex doing work on itself. It is not really known at this point but no reason to think it cannot be known. The brain is extremely complex.

"Consciousness is observer independent"

Well yes you have a brain and it doesn't require other people's brains to function.

" If we decide that a machine is not a computer simply because it ever makes any errors, we will have to conclude that there are no computers at all in the real world. And if computers don’t exist, then consciousness cannot be computation."

This is straight up doodoo.

We made a bot with a few dozen simulated brain-like neurons play some games far better than any human ever will be able to. What makes you think an entire brain of these neurons is not capable of something as uncomplicated, by comparison, as your so-called "qualia"?

At the core of this argument is something like: "hey, I don't want to believe that a simulation could produce the same thing I experience personally, so let me add an essay's worth of sophistry to convince you to feel the same way." It's almost religious. It is inherently unscientific.


These kinds of analyses frustrate me to no end, because there’s an implicit metaphysical assumption that consciousness is somehow independent of our interaction with the world.

Now, I don’t necessarily believe that we will be able to reconstruct what took the universe, physics, chemistry and evolution literally billions of years to produce.

However, I have yet to encounter an argument against the wider understanding of computation as systems with inputs resulting in outputs affected by mutable states, which then corresponds to what plants, animals and humans are on a fundamental level.

To me it seems like the burden of proof is on the people who claim that consciousness is somehow metaphysically independent from the physical matter that makes up a human, animal, plant or computer. I guess we currently draw certain lines between these four things, especially between humans and the other three. But I guess I struggle to identify that difference as something metaphysical.

Although I don’t feel like I know enough to claim that this is anything more than a superficial idea…


A lot of people are quick to try to reduce consciousness to what we can observe about brain function.

While this explains what happens in the brain as things happen, it doesn’t explain why it is like something to be you, which is the core of the hard problem of consciousness.

The thought experiment of a philosophical zombie that has all of the same cause-effect reactions to externalities as myself seems to require that it isn’t pure information processing giving rise to consciousness.

I quite like the idea of certain panpsychism arguments that consciousness is itself a thing existing on a continuum inherent across the universe. At a certain point, “what it’s like” to be a certain thing becomes difficult to demarcate. And from the moment of the Big Bang the particles necessary for the construction of conscious beings expanded outward in one giant entanglement. The particles that make us up are the same as everything else in existence; however, it feels like something to be a person, a deer, possibly a plant, etc.


I agree with the conclusion, but not the reasoning.

I agree that "consciousness is not computation", if we convert that to "qualia is not computation", as the author seems to do. I agree because I don't think qualia -- like what red looks like to me -- can be communicated, whereas all computation seems communicable by writing out the Turing Machine. Maybe someone will convince me otherwise by describing redness.

However, the "triviality argument" seems pretty poor to me:

> 1. To say that a physical system is a computer requires an external observer to map the physical states of that system onto the abstract states of a Turing machine.

Does it? If nobody maps the states of AlphaZero to abstract states, will it fail at chess?

> 2. Consciousness does not require an external observer to exist.

I agree that my consciousness is not dependent on other people's consciousness, but I have no reason to be sure my consciousness is not external to my brain. If we're in a simulation, then it seems like my consciousness is external to my brain.

> 3. Therefore, consciousness cannot be reduced to computation.

Again, I agree, but not because of 1. and 2.

To me, consciousness/qualia seems like a mysterious other-worldly phenomenon that observes interesting computations but doesn't affect them... like God running a universe simulation, then looking at one of the creatures in it and basically making up "what it's like" to be that creature, by filling in some gaps which are not dictated by the simulation, like what redness is like.

In conclusion: I think computers can definitely be conscious, but at the same time, I think consciousness/qualia is weird, and is not computation.


Maybe I'm the only one who is real, and everyone else are just robots. I'm not sure I'll ever be able to know for sure.


How can we even be sure we are the same person as yesterday? Or even five minutes ago.. It's all pretty weird.


Some philosophers think that every time you lose your train of thought (e.g. zoning out while driving, or falling asleep) your consciousness disappears and a new consciousness forms later when you "snap back" (or wake up). This new consciousness shares only memories and personality with the previous one, but it is otherwise a new entity, similar to the teleporter thought experiment.

Conveniently, this makes death less scary.

Further reading: Zen and the Art of Consciousness by Susan Blackmore


I think what I was getting at is that it may be impossible to tell from the outside if something is conscious or not. If I can't even prove to myself that another person is conscious, I might have a hard time proving that a computer is conscious.


That is an interesting thought.

It might depend on how you define "person". When I look around, I see distinct human bodies. When I wake up, I'm in the same body as when I went to sleep, so in that sense I'm the same person.


What if "I" woke up in someone else's body? I'm sure there are movies with this premise, where "someone" wakes up in someone else's body, but they have the memories from their previous body.

But if memories are just physically encoded in the brain, I don't think this scenario makes sense even as a thought experiment.

When "I" wake up in the other person's body, I would have all their memories and none of my previous memories. So I wouldn't even know.



The iron bar example is a long way of saying that since the encoding of internal states is arbitrary, you can't rely on internal states to verify consciousness, since you could always construct an encoding that would produce a conscious internal state.

This seems like a valid point until you consider the entropy implications of "describing" such an arbitrary encoding. Yes, you can create a cherry-picked algorithm that “compresses” a specific 10TB file to a single bit, but the algorithm itself must then contain the 10TB of information (or at least the entropy contained in the 10TB file). I think the same applies here: it's not the iron bar that becomes conscious by being scanned with the specific encoding, it's the encoding itself that would already be conscious, and coming up with such an encoding at random would be essentially the same as a Boltzmann Brain popping into existence.
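To make that concrete, here's a toy of the cherry-picked compressor (Python; PAYLOAD stands in for the 10TB file, and everything here is made up for illustration):

    PAYLOAD = b"pretend this is 10TB of data"

    def compress(data: bytes) -> bytes:
        # One specific input maps to a single byte; everything else grows by a byte.
        return b"\x01" if data == PAYLOAD else b"\x00" + data

    def decompress(blob: bytes) -> bytes:
        # The "savings" live here: the decompressor itself contains the payload.
        return PAYLOAD if blob == b"\x01" else blob[1:]

    assert decompress(compress(PAYLOAD)) == PAYLOAD
    assert decompress(compress(b"anything else")) == b"anything else"

The information never went away; it just moved from the compressed file into the decompressor.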


@antognini, have you seen this computation argument elsewhere? I came up with basically the same thing when I used to think about consciousness but this is the first time I've read about it from a 3rd party. I also believe that it's very plausible that quantum mechanics plays a role, despite the hate this idea is getting.

Some other realizations I made:

First, a physics-first understanding of reality is incorrect - consciousness is in a way the only thing that is real, and physics is just a description of the patterns we perceive through our consciousness.

Second, my consciousness and other people's consciousness are fundamentally different concepts and need to be defined differently.

Third, the physical brain affects consciousness, obviously, and there's an argument for this happening the other way too, which would mean that the laws of physics are broken in the brain. The argument is that if this weren't true, my brain wouldn't have realized all this.


Check out Greg Egan's "Permutation City". He basically takes this idea and runs with it, to great effect. I'm not sure if it's a reductio ad absurdum, or what, but it's very cool.


This argument is a pretty egregious example of strawmanning. A sketch of the author's reasoning:

1. Consciousness is observer-independent, so if one person observes consciousness, then everyone must acknowledge it.

2. If there are infinite observers of a given computation, each with a distinct interpretation scheme (e.g. observer i flips every i-th bit), at least one will interpret that computation as conscious.

3. (1) and (2) imply all computations are conscious.

4. This is absurd.

5. Therefore, computation cannot equate to consciousness.

No one would claim that pi (the constant) is conscious because some sub-sequence will parse as Shakespeare, nor would they argue that an excerpt of Proust is conscious because it required intelligence to produce. Let's agree on some sensible preconditions for consciousness (entity recognition and some notion of memory, among others) before we start trying to argue by reductio.


The "Chinese Room" argument isn't an argument against consciousness being computation.

In the same way the man in the room doesn't understand Chinese to be able to produce Chinese, the mind in your brain does not need to understand consciousness to be conscious.

The man in the room can be said to pass the Turing test, and so can you.


What a confused post.

> The "Chinese Room" argument isn't an argument against consciousness being computation.

This is exactly what it is. Particularly with regard to "thinking," albeit not "consciousness" proper. Though the corollary is that only conscious things can think.

> The man in the room can be said to pass the Turing test, and so can you.

The Turing test is much weaker than the Chinese room experiment. A significant number of chatbots might pass the Turing test, but that has absolutely nothing to do with sentience.


> This is exactly what it is...

I'm literally taking the narrow conclusion from the second paragraph: https://plato.stanford.edu/entries/chinese-room/

> This is exactly what it is ... albeit not "consciousness" proper

Which is it? You conceded the first point in the second.

> The Turing test is much weaker than the Chinese room experiment.

Yes agreed, this comment on the Turing test was a reference to the linked article that makes the same claim. I'm not conflating consciousness with "being able to pass the Turing test". It's a remark on how understanding consciousness need not be a prerequisite of consciousness.


> I'm literally taking the narrow conclusion from the second paragraph: https://plato.stanford.edu/entries/chinese-room/

You should really read the entire article, but at least finish the paragraph you're citing: "The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted."


I have read the whole thing. My point is I’m taking the narrow conclusion and not the broader conclusion, if I have to spell it out to you.

Please continue not responding to the rest of my points.


I've always felt this way about lucid dreaming. How do I know if I am aware during my dream, or if I am just dreaming that I am aware?


Same here: I've definitely had dreams where I was thinking "oh this is a dream". But when I woke up, I wasn't sure if that was just an "inside the dream" thought.


I just discovered that there's lots of relevant content, arguments and explanations in this huge Wikipedia article: https://en.wikipedia.org/wiki/Chinese_room


Consciousness might just be the brain's recurrent and unsuccessful attempt at predicting itself.

Similarly, "free will" may simply be an illusion caused by the failure of that perfect prediction of what to do/what will happen, thus leaving us with a pleasant stochasticity that we end up favoring as freedom :)


A mapping of atomic states onto consciousness.exe would usually have more bits than consciousness.exe itself.

This argument is similar to the one where you can have a specialized trivial program to generate any sequence of data - for example, "pi compression": assuming pi contains all possible sequences, you can compress any data by just specifying where that sequence starts in pi.

The trouble is that the index will probably have more bits than the sequence you want to compress (given that you could somehow find it).
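A quick sanity check of this, assuming the mpmath library for pi's digits (the targets are arbitrary):

    from mpmath import mp

    mp.dps = 20_005
    digits = mp.nstr(mp.pi, 20_000)[2:]  # digits of pi, minus the leading "3."

    for target in ["14159", "2718281", "7777777"]:
        idx = digits.find(target)
        where = f"index {idx} ({len(str(idx))} digits)" if idx >= 0 else "not in the first 20k digits"
        print(f"{target} ({len(target)} digits) -> {where}")

On average the index needs about as many digits as the target, so the "compression" gains nothing.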

You have 2 choices: 1) Consciousness is software, Turing computable, and you are a physicalist/materialist. 2) Consciousness is subjective, outside the realm of all objective measurement, and you believe that the universe has more than matter/energy - dualism.


> Now, if consciousness were a consequence of pure computation, it would be possible to write a clever computer program (let’s call it consciousness.exe) that, when executed on a big enough computer, produces a conscious being.

> ...

This is pretty much the premise of the novel by the great Australian writer Greg Egan called "Permutation City". It really blew my mind back in the '90s when I first read it. Actually I think I read the short story "Dust" first. High a.f. after reading that.

> But perhaps new biology is not enough. Roger Penrose has argued that an explanation of consciousness does not simply require new biology, but new physics as well. I have to confess that as far-fetched an idea as it is, I am somewhat partial to the idea.

Cool. Me too.

> ... resorting to quantum mechanics or new physics is the last refuge of scoundrels.

Well that escalated fast!


Defining this word must be one of the favorite exercises on HN. IMO, consciousness is just perception: that's the literal meaning of the word in Sanskrit, and that's the implied meaning in how we use it - unconscious means "cannot perceive anything". When applied to AI, consciousness would mean an array of sensors connected to some memory block where sensory input becomes 0s and 1s. Traditional types of consciousness register things from outside, but we can think of an internal consciousness that observes internal events. Perhaps, when some of the sensory inputs observe outputs of the processing unit, that's self-consciousness. But even self-consciousness doesn't imply intelligence - the latter is about prediction and cause-and-effect relationships.
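A toy rendering of that architecture (all names made up for illustration):

    class Organism:
        def __init__(self):
            self.memory = []  # the memory block where sensory input lands as data

        def sense(self, channel, value):
            self.memory.append((channel, value))  # consciousness as registration

        def process(self):
            out = sum(v for _, v in self.memory) / max(len(self.memory), 1)
            self.sense("inner", out)  # self-consciousness: a sensor watching the output
            return out

    o = Organism()
    o.sense("light", 0.8)
    o.sense("sound", 0.2)
    print(o.process(), o.memory)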


Pick three rocks and make a line with them. Does the line “exist”? Well, yes, in a way. But also very much no, only the rocks exist (let’s ignore the fact that the three imaginary rocks that you have pictured in your mind while reading the previous phrase also don’t exist).

Qualia, and consciousness, exist in the same way that line exists but also doesn’t. The universe will happily give non-conclusive answers like that and keep on working while we struggle to figure it all out.

So I reject the first axiom of this reasoning. Well, I sorta reject it. Kinda. If I were forced to be precise, I would say that trying to find a binary answer (is it computation, or not?) is asking the wrong type of question. The right question has a higher bit count, and maybe even some qubits as well.


A lot of conversations around the term consciousness mix up a couple of things. First, there is the term consciousness which is a word given to something that humans observe. It encapsulates an idea, and we know[0] that it's a shared experience because the term arose in other languages as well.

The concept of consciousness may not be limited to humans; for example, an alien race may have also discovered this concept and given it their own symbolic representation that they use to communicate with each other. But the frame of this whole debate needs to keep in mind that we are the ones that needed a term for this. So we are the ones that are the best judges of what consciousness is and isn't, to the best of our knowledge, leaving spirituality aside.

The encoding of consciousness into states of heat, or rocks on the beach, is a flawed example because the choice of encoding is a non-arbitrary decision. Any length can encode any number, for example; the meaning is in the parameters set by the agent, not the other way around. The example falls on its face anyway because it seeks to evaluate the heated iron in a static state. Real consciousness among humans has varying states. For example, if one were to evaluate the heated iron and then ask it to evaluate a new situation, it wouldn't. It would simply keep being heated iron.

Real AI wouldn't pass a consciousness test without demonstrating real reasoning about a completely novel situation. For example, I would expect to ask it to evaluate a design for a solution to an as-yet-unsolved problem and have it reliably communicate drawbacks. Current AI does nothing of the sort.

[0] As much as anything can be known, anyway.


> Real AI wouldn't pass a consciousness test without demonstrating real reasoning about a completely novel situation.

That would test for intelligence, not consciousness.


Obligatory "Why Philosophers Should Care About Computational Complexity" by Scott Aaronson[1], one of the best reads I've ever had from a paper. It debunks the kind of arguments this blog article makes quite deftly.

For instance, the argument about the hot iron reminds me of section 6 of the paper, "Computationalism and Waterfalls". For someone to ascribe consciousness to a piece of hot iron (or a waterfall, or any other random or pseudorandom process), we need to create a mapping of its states onto consciousness, or in this case, as a proxy, the program called consciousness.exe. Aaronson argues that if the mapping we create is too complex, it might be doing all of the work of being conscious, not the original underlying piece of hot iron. The article does not go into this amount of detail, but it seems like the process of creating this mapping is: make enough mappings at random until one works. It would probably take waaay more mappings than there are atoms in the universe before we hit on one that works (something that's also discussed in Aaronson's paper, in section 4), so I'm not sure the argument is even relevant.
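A back-of-envelope version of that counting argument (my numbers, not Aaronson's, and absurdly generous to the iron bar):

    from math import comb, factorial, log10

    n = 10**6  # two-state sites we get to observe in the bar
    m = 10**3  # bits of program state each encoding must reproduce

    # Each program bit picks a distinct site plus a polarity:
    mappings = comb(n, m) * factorial(m) * 2**m
    print(f"~10^{log10(mappings):.0f} candidate encodings")  # dwarfs the ~10^80 atoms in the universe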

Give the paper a read, it's one of my favourite pieces of text of all time and Aaronson is better at writing down his ideas than I am.

[1 (pdf)]: https://eccc.weizmann.ac.il/report/2011/108/revision/2/downl...


I find this argument completely unconvincing. The mapping in a case like that is entirely ephemeral, pertaining only for an instant. For this argument to be valid you’d have to be able to persistently map the iron bar, or waterfall, to all ongoing transformations of states in the running program, using one single consistent mapping. Otherwise all you have is a snapshot of state, not an ongoing process.

This argument is in the article as well and I’ve seen it from Searle too:

“A simulation of a brain cannot produce consciousness any more than a simulation of the weather can produce rain.”

This is making the assumption that consciousness is not a computation. If it is a computation then consciousness is not like the weather itself, it’s like the simulation. Me imagining having a shower doesn’t make anything wet either. So is my imagination more like the weather, or more like the simulation of it?


I believe the argument is that you can create a more complex mapping over a span of time, say 1 second. For that 1 second, the mapping shows that the iron bar is conscious. Regardless of what happens after that 1 second, shouldn't the iron bar be considered conscious for that 1 second? If 1 second is not long enough to be considered conscious, how long do you need?


At the scale of hot annealing atoms one second is a stupendous amount of time. There's no way a single consistent mapping would hold in a reasonably sized volume for even microseconds.

This whole argument is exactly a rephrasing of the Boltzmann Brain proposition. But even if we suppose an infinite universe for eternity, sure, that means consciousnesses randomly manifest and then disperse. That's vaguely horrific, but it doesn't explain or refute anything. So what?


The other day, I was training a de-noising diffusion model on a dataset small enough to overfit, and watching the diffusion process that generates the image, it really seemed like some sense of depth perception had emerged from training.

(See the image https://twitter.com/ak92501/status/1475858893561606148 for an idea of what de-noising diffusion is.) In essence, de-noising diffusion reverses the process of adding noise to an image. You learn to tell noise apart from image. And to generate an image you take some random noise and remove a little noise multiple times until you have the image.
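For the curious, the sampling loop looks roughly like this (a deterministic DDIM-style sketch, not my actual training code; model and the cumulative noise schedule alpha_bar are assumed given):

    import torch

    @torch.no_grad()
    def sample(model, alpha_bar, shape=(1, 3, 64, 64)):
        # alpha_bar: 1-D tensor of cumulative noise-schedule products
        x = torch.randn(shape)  # start from pure noise
        for t in reversed(range(len(alpha_bar))):
            a = alpha_bar[t]
            a_prev = alpha_bar[t - 1] if t > 0 else torch.tensor(1.0)
            eps = model(x, t)  # the model's estimate of the noise in x
            x0 = (x - (1 - a).sqrt() * eps) / a.sqrt()  # implied clean image
            x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps  # one step less noisy
        return x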

Watching the diffusion, the intermediate noisy images it generated seemed to stick out of the frame, kinda like a random-dot autostereogram would. It is hard to tell for sure, because the depth effect could also be coming from the lighting effects where bright areas tend to stick out, but the depth it produced made sense and seemed coherent with the scene objects.

The thing is, at first order, this effect is not supposed to happen. What the model is learning is to remove noise from the image; it is not trying to somehow infer depth from the image, encode it, and pass it to its next iteration so that it has an easier task. It is not supposed to have memory like a recurrent neural network would.

But somehow it seemed to have learned to perceive and encode 3D features and store them inside its weights, so that when it's applied iteratively it converges toward the fixed point that is the real object. That's what I would call emergence as a compression artifact. For this effect to happen the network must have some sort of internal 3D representation of objects, which can happen if it takes less space to compress the object as a 3D object than to store multiple views of it.

Or maybe I'm just projecting my own sense of perception, and this is just a phantom effect.

Maybe consciousness is a phenomenon just like this one.


Summary of the Chinese Room argument:

1. Your brain consists of two parts: The neurons, and the electro-chemical signals that flow between them (or any other partitioning for which the next is true)

2. Neither the neurons nor the electro-chemical signals are conscious.

3. Therefore your brain is not conscious.


One facet of the computational theory of consciousness that is worth considering is that the experience of qualia may be latency-coupled, because the universe itself isn't scale invariant in terms of physical forces, etc. So it may be consistent to run a slow mind on rocks or a fast mind on transistors, but only the latter would be conscious, due to the state transitions occurring quickly enough (for example). This doesn't answer why it's the case, but this degree of freedom would allow for a description of reality where a large pile of rocks being flipped around cannot experience qualia - and rocks experiencing qualia would be very surprising.


Oof that is a terrible argument.

And sorry for the swearing (depending on what cypher book you used to read that last sentence).

And ooh I revealed my HN password there for Alice or maybe Bob.

Anyway, it is a bad argument, because what it says is that a hot iron is conscious because of a theoretical XOR mapping. That XOR mapping would need to be huge. And maybe that mapping is conscious - how can we know it isn’t? Though it is only a snapshot in time.

We rely on memory so we can’t even be sure there is a future or past. That we are not simulated etc. There is probably an integer that represents my state. A damn large one.

You can’t boil this problem down to a proof by contradiction.

To be convinced he has to show me why we are different from digital information.


He solves it himself in the article: "So the proposition that consciousness is computation leads quite inevitably to an extreme panpsychism. Even if we could stomach a traditional conception of panpsychism that posits that all things are conscious in some primitive way, this goes far beyond that. We are forced to conclude that all things aren’t just vaguely conscious, but they contain all consciousnesses, including our own!"

Yes, so why can't panpsychism be correct? It's far more intuitive than postulating some magical qualia substance that permeates the universe (or however that's supposed to work).


Is Searle's Chinese Room still considered a state-of-the-art philosophical argument? If so, that's very disappointing. It's old! I would have hoped things had moved on a bit. And more importantly, it's not very productive; it claims to rule out certain hypotheses (very unconvincingly, I'd say) but doesn't give any useful steer on new directions that might be worth pursuing.

(Edit to add: I consider this post equivalent to Searle's argument as I don't see any significant new material, except possibly a stated assumption that "consciousness does not require an external observer".)


> in the late 1800s there were heated philosophical debates about whether life required a mysterious “vital force” or if it could be produced through ordinary physical interactions. As biochemists learned more about the chemistry of life they found that it was the latter and before long everyone forgot that this was even a question that had ever been debated.

I wasn't aware that we figured out how to manufacture life? Afaik, there is no experiment that can reliably construct a living organism from a combination of non-living material. Did I miss some fundamental experiment that proved otherwise?


> The second important property of consciousness that any theory needs to explain is that consciousness is a single, cohesive experience. My consciousness is of my entire self. It is not of half of myself, nor is it some superposition of you and me. Somehow, whatever is going on in my brain to produce my consciousness contains all the neurons of my brain, not just a subset of them. There are not multiple consciousnesses in my brain, it’s just me in there.

> Some individuals with severe epilepsy have to have their corpus callosum severed, which separates their left and right hemispheres. After this procedure these individuals often seem to exhibit two consciousnesses rather than one. The right side of the brain seems to be surprised when the left side of the brain decides to raise the right arm, and vice versa. But this is never the case in an individual with a connected corpus callosum. Every time I decide to raise my arm (left or right), my arm goes up and it only goes up when I decide to raise it.

That is ignoring people who create "tulpas", which are another consciousness in the same brain, with different memories. From what I've gathered this is done purely through a mental process, not by separating your brain into multiple pieces. I don't have any personal experience with this, so this is all information coming from secondary sources. But you can't ignore those people and say "There are not multiple consciousnesses in my brain, it’s just me in there." If we accept that those people are telling the truth and what they experience really is one or more other consciousnesses living in the same brain, this opens the door to something interesting: it is possible for a person to create a consciousness. Again from what I've gathered, tulpas don't suddenly appear fully formed; they go through an initial period of "growing up", forming themselves as a person.

A wild theory would be that the process of creating a tulpa is the same as the process of creating a human consciousness, the only difference being whether you target another brain or your own. And if that's true, once you've managed to recreate the human brain, you would "only" need to interact with it the same way you would when creating a tulpa to create a consciousness.


>My consciousness is of my entire self. It is not of half of myself, nor is it some superposition of you and me.

I know this post is from 18 hours ago, but I'm actually quite skeptical of this statement. My most pressing question about consciousness is actually the limited nature of the self. My consciousness does not extend fully into sleep, and it does not extend into making my heart muscle contract. It seems to elude me when I cannot remember something. It seems my consciousness is not, in fact, stable, and varies through time.


The Chinese room argument is claimed to show how consciousness cannot be based on computation. I totally disagree - it merely shows that as an external observer we cannot verify consciousness.


Both can be right. The possible unverifiability and non-computational base of consciousness does not exclude its explainability either.


> This means that one of these observers will, by chance, happen to observe that the states of the atoms correspond exactly to the bits of a Turing machine computing

I did not understand this argument. Yes, through careful labeling you can map the states of the atoms of the iron rod to the state of a Turing machine. But that mapping will only be correct for one instant. The very next moment the atoms' magnetic moments will flip randomly and the mapping will be lost. So an iron rod cannot be thought of as a computer.
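A toy version of the point, with an XOR mask over 16 "atoms" standing in for the encoding: a fixed mapping can always be found for one snapshot, but almost never survives a whole trace.

    import random

    random.seed(0)
    WIDTH, STEPS = 16, 8
    program = [random.getrandbits(WIDTH) for _ in range(STEPS)]  # the target computation's states
    iron = [random.getrandbits(WIDTH) for _ in range(STEPS)]     # the rod's random states

    mask = iron[0] ^ program[0]  # a mapping always exists for a single snapshot
    matches = sum(iron[t] ^ mask == program[t] for t in range(STEPS))
    print(f"steps matched by one fixed mapping: {matches}/{STEPS}")  # almost always 1/8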


Allow me (not the author) to rephrase the argument. Let's say you have a computer that you would say is conscious. It does some computation, has some output, and you conclude that it is conscious in that moment, let's say for 1 second. Therefore, you assume that the computation (i.e. internal states and outputs) done by that computer for that 1 second creates consciousness.

Extending the iron bar argument, you could have 1 second of varying internal state in the iron bar, and then create many different interpretations that read that internal state in many different ways. One of those interpretations will show that the iron bar transitioned through the exact same internal states as the conscious computation, and did so in a deterministic, causally linked way. What then is the difference between that iron bar and the conscious computer?


Here is how I’d argue that consciousness is (hugely likely) not computation:

First we still don’t understand how time crystals work. There are some nice recent advancements though https://news.ycombinator.com/item?id=31766804

For those who are not familiar with time crystals and weird time related things like Delayed-Choice Quantum Eraser, here is a nice Sean Carroll's article https://www.preposterousuniverse.com/blog/2019/09/21/the-not...

The Delayed-Choice Quantum Eraser (and other weird time-related things) always reinforces the many-worlds interpretation, the most mathematically minimal of all.

As strange as it is, even if it is true, many-world is only scratching the surface of our reality.

We need a new mathematical framework that goes beyond spacetime and the amplituhedron https://en.m.wikipedia.org/wiki/Amplituhedron (from which more re-interpretations can happen, especially ones that can encapsulate consciousness as more than just “an observer”, or worse, leading to absurdity like the quantum suicide experiment https://en.m.wikipedia.org/wiki/Quantum_suicide_and_immortal... )

There could be a huge connection between time crystals and “consciousness” in ways we don’t understand. In that case “consciousness” would be related to quantum computation. Or likely going beyond that as we reformulate our understanding of physics.

Also, there is so much about computation we need to advance our understanding of (like: what does the Curry-Howard correspondence imply about our universe? And why does the current formulation make it so difficult, or almost impossible, to prove P = NP or P ≠ NP?).

There is so much work to be done in these two fields.

And we still don’t understand how meditation works and why practising mindfulness can help us be more productive at work.


> But in this world, consciousness is, at root, a physical phenomenon, not a purely computational phenomenon.

That is entirely unknown. If it were a purely computational phenomenon, it would explain a lot, and nothing in this article argues against it. Except, perhaps, the iron bar mapping. But it seems to miss that a computer program basically defines a causal network. And recognizing and comparing causal networks is much more objective than the thought experiment acknowledges.


>My purpose in this blog post is to outline another argument against the idea that consciousness can be reduced to computation.

https://en.wikipedia.org/wiki/Blue_Brain_Project

Started in 2005, the goal of the Blue Brain Project is to simulate the brain in a computer.

They currently have a complete working simulation of the neocortex of the rat brain.


> purely physical processes

Everything in the universe is a "purely physical process". Everything in the universe is an arrangement of matter and energy. Biological systems are extremely mechanistic, even if highly complex. These arguments that consciousness can only arise in a biological brain all seem to hinge on us being special in some undetermined way. Biological life is wonderful, but how does it specifically have a monopoly on something?


Let’s play a language game where we use language in a representative way. Specifically the language can only represent parts of sense-impressions.

Where is consciousness in one of my sense-impressions? Tell me a procedure that will allow me to locate the thing you want to call “consciousness” in one of my sense-impressions.

Maybe OP wants to ask, “What are sense impressions made of?”. But that’s incoherent in this language game because sense impressions are themselves not objects in sense impressions.


Chalmers: Does a rock implement every finite-state automaton? http://consc.net/papers/rock.html

Meanwhile, Integrated Information Theory is a proposal to solve some of the issues about consciousness that the author raises (e.g. the non-conscious cerebellum; two consciousnesses in split-brain patients). Actually I thought the article was headed toward IIT - then it stops.


I thought it was pretty well established that consciousness is a kind of illusion; that we evolved the ability to extrapolate a persona in others, which was/is useful to predict future actions and form bonds. Consciousness is just this ability turned inward.

I've seen a lot of evidence for this over the years from neurologists, psychologists, etc. Am I wrong, or is this a 'mystery' until we get an answer we like?


> A simulation of a brain cannot produce consciousness any more than a simulation of the weather can produce rain.

What's wrong with the rain though? A perfect simulation of rain is perfect rain. It's not rain in the outside world but it's perfect rain nonetheless.

A perfect simulation of brain is a brain. It has consciousness, it can think, it can suffer, and you can exchange information with it if you want.


>Continuing this, we can imagine an enormous crowd of observers staring at this iron bar, each using a unique encoding of atomic states to bits. With a sufficiently large number of observers, the encodings of all possible Turing machines of the given size are represented.

Yes, but only momentarily. It does not follow that there’s an encoding where the iron bar evolves as if running a given Turing machine.


> By this I mean that the existence of my consciousness does not depend on other observers perceiving me to be conscious.

The author doesn't recognise that they are an observer of their own consciousness; I would go further and say that consciousness _requires_ the conscious _thing_ to be an observer of themselves.

So, for me, the whole axiom "consciousness does not require an observer" is reversed.


It does not require an external observer. I believe the author does acknowledge it, writing "because consciousness is independent of EXTERNAL observers" (emphasis mine) or, even in the sentence that you quoted, "does not depend on OTHER observers".


Perhaps consciousness is not computation directly, but something that computation is good at - thus leading us to confuse the two. I'm thinking of information processing.

This then shows the weakness in the hot iron argument. The information is now encoded in the choice of observer, not the iron bar - and that choice is essentially random. Lean on information theory for deeper insights.


For those interested in this who haven't heard of the show DEVS [0] yet: it explores and asks questions about consciousness and determinism, the central themes of the story.

[0]: https://www.imdb.com/title/tt8134186/


I just watched it, and it was beautiful but disappointingly written. They clearly had advisors who added reasonable explanations for e.g. the different QM interpretations, simulations, determinism, etc., but then ignored the implications of what they were writing a lot of the time.

SPOILER:

Like when their algorithm only worked under MWI, proving it in-world, but then still having only a single branch they knew in advance they were in*, and still having determinism even with knowledge of the prediction of what they'll do, etc.

*Except for a single decision, apparently the only decision which could result in a different outcome out of all simulated days. That single decision even broke the Algo but not even at the point where it was made and not even by just predicting a different branch from then on but predicting noise. It was just total nonsense in a show that clearly knew better at times.


> It’s worth pausing here and noting one property of consciousness that will be of use to us later: consciousness is independent of external observers. By this I mean that the existence of my consciousness does not depend on other observers perceiving me to be conscious. Even if everyone else in the universe should deny that I am conscious — or if those other observers did not exist at all — this would have no bearing on my own consciousness. If, in a terrible catastrophe all life on Earth should perish, except by some strange fortune my own, my consciousness would not suddenly dissolve into the ether.

Seems like exactly the sort of thing something trying to convince me of its consciousness would say to justify it. How am I supposed to know this was written by a human?

Anyways, I have a relative who was sterilized because people believed she not only had no capacity for higher reasoning, but also that her continuing to have reproductive autonomy would be a danger to others because she might pass her lack of "consciousness" along.

So you'll have to forgive me if I'm not very impressed by anyone who decides they have the key to an objective measure of consciousness. This edges far too close to eugenics in my mind.


> it is a common article of faith that computers will one day gain consciousness.

Is this actually a common belief?

I believe that human level AI (or above) is achievable.

I'm not at all sure about consciousness. I don't believe it is sufficiently definable for it to be achievable (which is actually a problem with general intelligence too, really).


Wait, is the author actually claiming that a hot bar of iron can simultaneously run all possible computations?


Consciousness is a computation. It's pattern-matching the self. Can you identify what represents your self? Can you serialize it to memory? Can you find your representation in memory? Then I would argue those are all the requirements there are for it.
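A toy that literally meets those three requirements (whether that's really "all there is" is exactly the question; all names here are made up):

    import json

    class Agent:
        def __init__(self):
            self.self_model = {"name": "agent", "goals": ["persist"]}  # what represents the self
            self.memory = []

        def serialize_self(self):
            self.memory.append(json.dumps(self.self_model))  # serialize it to memory

        def find_self(self):
            target = json.dumps(self.self_model)
            return next((i for i, m in enumerate(self.memory) if m == target), None)

    a = Agent()
    a.serialize_self()
    print(a.find_self())  # 0: it locates its own representation in memory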


There's no definition, and no consensus on whether it's even real. But we declare authoritatively that this thing we can't define can't be computational.

About as productive as declaring the universe is made of water :)


https://en.wikipedia.org/wiki/Electromagnetic_theories_of_co...

"If true, the theory has major implications for efforts to design consciousness into artificial intelligence machines;[30] current microprocessor technology is designed to transmit information linearly along electrical channels, and more general electromagnetic effects are seen as a nuisance and damped out; if this theory is right, however, this is directly counterproductive to creating an artificially conscious computer, which on some versions of the theory would instead have electromagnetic fields that synchronized its outputs—or in the original version of the theory would have spatially patterned electromagnetic fields"


I don't have any particular informed opinion on Electromagnetic theories of consciousness.

But that "Implications for artificial intelligence" is completely wrong. It's perfectly possibly to simulate something like what the claimed model of consciousness is using a perfectly normal computer. It may take additional compute to do it, but field simulations are perfectly achievable.

It seems like this bit on Wikipedia was extracted from a paper by one of the chief proponents of this theory. The fact they think that about computers is pretty worrying and makes me question how well thought out this theory is.


The author suggests that split-brain surgery causes "two consciousnesses", but this is simply not true.

Split-brain patients seem to have two separate executive functions, but they still describe their subjective experience as that of a single observer.


(Author here.)

Admittedly the experience of individuals with split brain is a bit out of my wheelhouse so I would love to be corrected on this. But it would seem to me that what you describe is not inconsistent with there being two separate consciousnesses present? If there are truly two consciousnesses present in a split brain, I would still expect that each one would describe itself as a single observer.


I think this is a little bit more complicated than that.

Split brain patients are fully aware of events where their executive function clashes. For instance, they can see how one limb operates out of sync with the other one. So how do they establish this awareness? Who is watching? Who is the one that verbalizes that fully conscious experience?

Certainly one could argue that this could be the result of an observer in the right or left brain verbalizing their subjective experience, but that creates several problems.

The biggest one is that for this to be the case there needs to be a commanding observer at any point in time, and this would mean that one brain needs to communicate to the other when it becomes the observer. That would require some unifying substrate that allows this cooperation to happen beyond the physical connection of both brains. This assumes a full separation of the left and right brain. There are also some theories (the Gazzaniga experiment) that suggest that the left brain might in this case be the conscious operator, simply because the left interpreter is capable of overriding the right brain by default. But this experiment fails to explain the meta-observer experience that allows the subjects to effectively participate in the experiment and explain their experience of both right and left stimuli. The experiment correctly shows the existence of separate logic centers but can't explain why the subject still experiences his cognitive interpretation of reality as a single observer.

This is the fallacy of consciousness theorization. We don't have any good definition of consciousness because it can only be defined subjectively and uni-personally.

Empirically and logically, we know that it is implausible to have two simultaneously operating and fully aware consciousnesses. We know this because such a duality doesn’t work at all with any rational interpretation of ourselves. We are one and only one. We are never two simultaneous people, and there’s no thought or physical experiment that would allow us to test or imagine that.

Some people might have mental diseases that completely alter their consciousness and sense of identity but their experience is still a single observer experience.

Split brain patients are in fact the best example we have of consciousness being discrete and somewhat separable from brain function. This doesn’t necessarily mean that the brain isn’t the one producing the conscious experience but it certainly shows that the brain mechanisms that could produce consciousness are very different from the ones that enable other neurological functions.

The idea that a patient with two physically separated brains can have a relatively normal life (even when executive-function clashes happen, one brain is still capable of overriding the other) shows that there’s some unknown hypervisor function that enables their conscious experience.


I think the author is misunderstanding the XKCD comic about rocks [1] (or maybe I am). Just because it's possible to run a simulation of the universe on rocks doesn't mean that rocks are conscious or Turing complete. You can't forget about the person who is manipulating the rocks. The system as a whole is Turing complete. Likewise with the bar-of-iron example that the author gave: you can't forget about the person interpreting the atoms in the bar of iron. The system as a whole is Turing complete (and also naturally conscious, because the person doing the interpretation is conscious).

And nothing physical is necessary to represent such systems. You can simulate Turing-complete systems inside Turing-complete systems [2]. So I don't see why consciousness has to be a "physical phenomenon" as the author claims.

[1]: https://xkcd.com/505/

[2]: https://youtu.be/xP5-iIeKXE8


> To say that a physical system is a computer requires an external observer to map the physical states of that system onto the abstract states of a Turing machine.

No it does not. An unobserved computer still computes.


If it even exists, consciousness is literally unfathomably complex computation by the insane biocomputer we call our brain.

And it is by definition, assuming one rejects supernatural phenomena like the soul.


Why keep bringing up the Chinese Room? The guy is just a substrate; it's the necessarily intelligent set of rules he's performing (over astronomically long times) that is conscious.


Think of feeling pain. How is pain "a set of rules"? To my mind Searle proved that the mind is not a Turing machine.


I just don't see how a limited thought experiment that invokes completely unsubstantiated conclusions (it's like starting from the Krebs cycle or something and, by the sheer scale of the process needed to work up to a "sentient mind", declaring the concept absurd) proves or disproves anything about the nature of consciousness.


I don’t know about computation, but it seems to me that memory is a necessary (not sufficient) condition for someone to be conscious.

People with memory loss seem to struggle to pass the bar of consciousness


Yup. And people always fail to think about such boundary cases.

Like:

> Humans can do math proofs.

Human infants don't.

> Humans can see and hear things and interact with the environment.

Obviously a certain portion of the population is unable to do that.

> Humans learn and grow.

Bacteria can do this too.

This kind of failure of the poorly defined "consciousness" leads to the end of the conversation.


> Back in the 1980s John Searle cast doubt on the computational theory of consciousness with the Chinese Room argument, and today it seems that most philosophers accept its validity.

hilarious


Welcome to another instance of "X [the way I define it, which is different to how others define it] is not Y [which is close to how others define X]"


IMHO Consciousness is a pattern in space-time, with the ability to represent itself. I don't find a strong argument to the contrary here.


Stating that consciousness is observer independent while using observers to determine whether something is conscious rubs me the wrong way.


First step is to have a hard definition of what consciousness is. Then we can say what it is not. We are not there yet.


All this smacks of those public-access science shows from the late 50s, where they had some lumbering "Michelin man" robot doing the vacuuming and cleaning dishes... and the voiceover declaring that every American household would have its own domestic robot by 1962.

If we still find it difficult to copy and paste data into Excel, I can't see how we can blithely state we're already or very nearly at some state of demonstrating AGI.


Once again - and what is wrong with panpsychism?

Just because something sounds weird is not an argument that it is incorrect.


I remember Kasparov's rantings that computers will never beat him in chess because they don't have a soul.


I'll borrow from Justice Potter Stewart and say that "I'll know it when I am it"


How does the article and the HN comments not mention Stephen Wolfram and his Physics Project … ?


Aw man. The author is going to be pretty sad when they learn that the brain is made of atoms.


The author has never heard of an analog computer?! If he had, I would question his honesty.


Because we do not know what consciousness is, it is impossible to say if consciousness is computation or not.

We must not resist any idea about consciousness, because it is a very poorly understood phenomenon. Anyone who argues that it is known whether consciousness is computation or not is just completely wrong.


This article and the thinking it conveys are deeply flawed, in my opinion.


And once more "humanness" flees before the advancement of AI.


I refuse to believe that these takes are not clickbait at this point


Notwithstanding the problems pointed out with specific thought experiments, the author doesn't seem to realize that all things are computation. Every biological, chemical, and physical process is, at the end of the day, itself a product of the computation of various mathematical principles. Invoking XKCD as the author does, it's the classic "$X is just applied $Y" chain, with mathematics being the final $Y.

Given this, consciousness must be computation, because everything else is computation. The author rejects this not because his definition of computation is too broad (as he asserts to be the principal objection to the viewpoint he describes), but because it's too narrow.


Consciousness is the worst idea ever: a confused word that nobody wants to define so they can play games forever. Ban the word "consciousness" from philosophy if you want progress.


John Searle's Chinese Room argument should be a universal stop token, in the same way that Godwin's law means the conversation is over when someone brings up Nazis


My rebuttal to the poorly considered Chinese Room thought experiment:

I'm going to use this Wikipedia text as the backdrop:

> Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output, without understanding any of the content of the Chinese writing. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

> Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing behavior that is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.)

The reason he would not be able to follow the conversation is that he's being the CPU, and not the computer program. The computer program (according to the Strong AI hypothesis) understands the conversation, evidence for it being that it can converse with Chinese speakers and convince them that they are conversing with a human being.

The content of the papers stuffed into the filing cabinets to keep track of the program state contains (somehow) the representation and behavior of understanding - not John Searle, the pencil-and-paper-pushing executor of the program (who doesn't even hold the content of those papers in his own mind, let alone understand any of it).
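The same distinction in miniature, with a trivially small rule book standing in for Searle's (the contents are illustrative):

    RULE_BOOK = {
        "你好": "你好!有什么可以帮你?",  # Chinese in, Chinese out
        "再见": "再见!",
    }

    def clerk(symbols_in):
        # Follows the book step by step; understands none of it.
        return RULE_BOOK.get(symbols_in, "请再说一遍?")

    print(clerk("你好"))

Whatever "understanding" there is lives in RULE_BOOK, not in clerk.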

Searle misunderstands computers and software; his argument amounts to the assertion that it is the CPU which must necessarily be the seat of the computer's intellect, and since we know that CPU to be a very simple machine with no understanding of the data flowing through it, there is no intellect, QED.

Fact is that, in the thought experiment, the Chinese Room as a whole appears to be intelligent; it can converse with a Chinese speaker and pass the Turing test. That's what is intelligent: the room, and not the clerk sitting inside it who pushes the pencil.

It's like insisting that individual neurons must be intelligent (conscious, sentient, ...).

By Searle's argument, a computer doesn't actually know how to browse the web, because an Intel chip doesn't understand what TCP/IP or HTML are.
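To make the division of labor concrete, here is a minimal sketch in Python (the rulebook entries are invented placeholders, obviously nothing close to a Turing-test passer): the clerk is a dumb loop that matches symbols against rules, and whatever "understanding" the system has lives in the rulebook plus the accumulated state, not in the loop.

    # Toy Chinese Room. The clerk below follows rules it does not
    # understand; the rulebook entries are made-up placeholders.
    RULEBOOK = {
        # (state, incoming slip) -> (new state, outgoing slip)
        ("start",   "你好"):    ("greeted", "你好!"),         # "hello" -> "hello!"
        ("greeted", "你是谁?"): ("asked",   "我是一个房间。"),  # "who are you?" -> "I am a room."
    }

    def clerk(slips):
        state = "start"                   # the filing cabinets
        for slip in slips:                # slips pushed through the slot
            # pure symbol matching: no step here requires knowing Chinese
            state, reply = RULEBOOK.get((state, slip), (state, "..."))
            yield reply

    for out in clerk(["你好", "你是谁?"]):
        print(out)

Swap the trivial rulebook for one large enough to pass the Turing test and the clerk's job description doesn't change one bit; that's the point.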

I can't believe this twaddle was ever taken seriously, but that's philosophy for you.

If you want a good reason why kids should learn to code, there it is: so they have a fighting chance at not being duped by stuff like this.


Proof or wishing to be so?


Descartes disagrees


I find this like the Einstein brain book (Is it Hofstadter?); when you read it fast enough, it is Einstein. When you just open a page, it looks like gibberish.


He makes any number of unsupported leaps.

E.g "consciousness is observer independent". It seems likely, but it's not at all clear it needs to be the case. E.g. let's say the universe is a simulation for your benefit which optimises away everything that does not directly impact your experience. Maybe my consciousness is just a side effect of currently having an impact on your experience. I have no way of telling whether I blink out of existence the moment I'm not conjured up in order to write this to generate the page you're now reading. It may sound far fetched, but the point is that there is an infinite number of possible observer-dependent universes. As such just ruling it out by default is a gap in his argument.

With respect to his definition of computers, he's glossing over just how little it takes to end up with something that computes. The smallest known universal Turing machine is ridiculously simple: Wolfram's 2-state, 3-symbol machine T(2,3) requires only six rules [1]. While I agree with his argument that such things arising out of nothing by chance is unlikely, it's not at all clear that it's unlikely enough to rule out an infinite number of such devices running mostly "junk" programs that'll crash.
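For a sense of how little machinery that is, here is a minimal simulator sketch in Python; the six rules are transcribed from the Wikipedia page in [1] and are worth double-checking against it before relying on them.

    # Wolfram's 2-state, 3-symbol Turing machine: all six rules.
    # (state, symbol read) -> (symbol to write, head move, next state)
    RULES = {
        ("A", 0): (1, +1, "B"), ("A", 1): (2, -1, "A"), ("A", 2): (1, -1, "A"),
        ("B", 0): (2, -1, "A"), ("B", 1): (2, +1, "B"), ("B", 2): (0, +1, "A"),
    }

    def run(steps):
        tape, head, state = {}, 0, "A"    # sparse tape; blank cells read as 0
        for _ in range(steps):
            write, move, state = RULES[(state, tape.get(head, 0))]
            tape[head] = write
            head += move
        return tape

    tape = run(100)
    print("".join(str(tape.get(i, 0)) for i in range(min(tape), max(tape) + 1)))

Six dictionary entries and a loop. All the actual work is in how you encode a problem onto the tape, which is also where the article's arguments about "picking an encoding" enter.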

But we also know that it doesn't need to be entirely random: such computation can provide mechanisms for the propagation of life. While cells themselves are not universal Turing machines (they have limited "tape"), any number of living things are conceptually universal Turing machines (humans being the obvious example), in that they/we can process "tape" without any inherent limitations on length (though practical limitations, certainly).

He then goes on to use his claim that consciousness must be observer independent to argue that even if we have such a computational process, we can't just pick an encoding to make a claim of consciousness. But this stretches the claim of observer independence further, and to me it weakens it rather than strengthening it. Since we don't know what consciousness is, it's not clear that consciousness could not arise from a subjective observation of a process. It's not even clear that this isn't the very core of consciousness. That hypothesis is exactly as well supported (in other words: not at all) as the argument he's making. It may be a very weak possibility, but he hasn't even considered it or given any justification for ruling it out.

I stopped reading there. This is typical of a whole lot of philosophy that makes a whole lot of embedded assumptions that are glossed over because the writer sees them as so obvious that they fail to consider that they are assumptions at all. Most of it descends into sophistry without even realising it.

[1] https://en.wikipedia.org/wiki/Wolfram%27s_2-state_3-symbol_T...


This is a really good write-up, though there's lots of connective tissue missing.

The idea that computation is a whole-mind or whole-consciousness experience that describes everything the brain and mind does is a little flawed. The mind is capable of mathematical calculation, as a certain function, but it is also capable of metaphor, visions, musical enjoyment and connection to an uncertain, true "whole" that is separate from mathematical certainty.

The idea that computation is substrate independent is also missing a layer: it is not clear that running the same computation on any given computing device would produce the same 'level' of consciousness.

ADHD research on the brain has 'room for improvement' because the executive-function theory relies on parts of the brain which do not exist in mice, and some people argue we should instead study monkey brains, which have biological componentry similar to humans'. The consciousness monkeys and apes have is commonly seen as being of a higher level than that of mice, and the biological componentry supports that idea.

Different kinds of computing devices would support different levels of consciousness: a toaster could run DOOM-level consciousness, while a supercomputer could "host" the consciousness of a much higher being. Religious faiths differ on this across denominations. Catholics argue that dogs do not go to heaven because they have a 'worldly' consciousness while man has a heavenly consciousness. Some other faiths argue that we differ from animals in degree, not in kind, and that dogs are saved for heaven. These ideas parallel the concept of differing levels of consciousness.

The article raises the idea that different relative observers (Alice, Bob, etc.) of the computing substrate (hot-iron magnetic poles) could generate different consciousness simulations. This elevates second-order thinking, thoughts about thoughts, to be the mediating factor in deciding where consciousness is observed, and it's not really clear that consciousness functions like that. It also supposes that the differing subjectivity of consciousness comes from relating differently to an identical computational underbelly, like a tribe accessing the same or an instanced resource, which is not really clear for the nature of mind or consciousness.

Hot-iron magnetic poles and computation as an example of consciousness work for a subset of certain mathematical conscious functions, but functions like language are yet to be shown to be computationally generated. DNA works better as a natural example of language. Mathematics has strict, rigid certainties that often mirror randomness or repetitive natural processes, like simulated annealing and physical forces. DNA's genetic code is possibly the first place in nature where we have seen non-repetitive, mutating code that is as complex as a human language, mediated by natural forces. The concept of "intelligent design" (Stephen Meyer et al.) and the question of how life came to be have become a thrust of biological research alongside evolutionary theory. It has some great discussions and speculation about language that didn't come from humans or animals.

There are a lot of elegant connections to be made when the author opens up the idea of a split-brain "two consciousnesses" in epileptic patients. The left-right brain split has existed forever in pop psychology, but it has lacked a lot of detail. The effects of left- and right-hemisphere splits have been studied for a while now in people with lesions and psychological disorders, and a lot of the literature and implications have been compiled by a GP (PCP in US English) into a book.

The left brain excels at mathematical certainty and computation, at pulling ideas apart into abstract pieces and trying to assemble them into more than the sum of the parts. The right brain is eternally uncertain, connected to a whole, and can interpret metaphor, people's faces, the shape of objects and much more. The implications go deep and don't lend themselves to easy summarizing yet; the book The Matter with Things by McGilchrist spends multiple chapters going through scientific example after example from brain-damaged and psychologically ill people to show which ideas apply to the respective hemispheres.

Defending McGilchrist's idea that mathematical computation, as a worldview or functionality, is mostly executed in the left hemisphere and not the whole brain, we can maybe illuminate why philosophers take differing stances on the nature of consciousness.

The author's conclusion, that we are left with apophatic or negative theology (attempts at describing what God, or consciousness, is by excluding what it is not), rules out the inherent ability we have to build a view or metaphor in our mind that internally coheres with our own thoughts, and then apply it in the world.

There are likely a lot more internally coherent arguments, visions, ideas, and maps of God or consciousness that we can make in our minds, and some of them will correspond to the natural reality we live in. Scientific discovery by the greats has used this method for a very long time. We still have hunches and sudden inspiration from an uncertain source, and generally process some ideas in non-logical, non-rational ways. These ideas are only, but often, proved true when applied in the world. The ancient Greeks frequently pursued or argued only about ideas they had guessed at first, instead of ruling out everything a thing couldn't be and working within the tiny area that remained. Apophatic thinking can be overly limiting in some cases.


As always, so much is written about consciousness without attempting to define it, or state clearly what its function might be. There's usually a pattern: after a lot of preamble, the non-definition of "what it feels like" is offered. This is an indication that our understanding of consciousness is so poor that we cannot even articulate the question.

Why are some experiences conscious and some not? Why is spotting a ball conscious, but the precise arm movements that take your hand to it unconscious? (Can you tell what your elbow or your back muscles did as your arm maneuvered towards the ball?)

We finally understand consciousness.

Not just at a metaphorical level (Daniel Dennett has done a wonderful job of that), but at a mechanistic, how-is-it-put-together level. Rather than focus on who has solved it, I'll talk about what it is. (That info is at the end.)

What is the function of consciousness? Among other things, it is the "hierarchical, simultaneous, and rapid resolution of uncertainty." Our world, of large macro-scale beings, is fundamentally ambiguous.

The dominant "bayesian" and "information" metaphors for understanding brain functions do not take time and fundamental ambiguity inherent to the world into account. Meaning is not given to us on a plate. It must be manufactured by the brain. Information is constructed from sensory data. How? And what does it mean?

Here is an example: Listen to this audio. What do you hear?

https://soundcloud.com/sai-gaddam-454459762/what-do-you-hear

Firstly, note that whatever you hear is a conscious percept. It is your brain orchestrating its daily unceasing miracle of resolving uncertainty and helping you consciously perceive.

Are you hearing "The scent of the two-cent stamp sent me back"? If not, you will now (and why is that?)

There are three homonyms in this audio sequence (SCENT, CENT, and SENT, which all sound the same) that unfold over time and are seemingly instantly resolved into three entirely different meanings. How?

That is one example of fundamental ambiguity. Computation is everything the brain does to the sensory data it takes in (the auditory sequence, in this case). Consciousness is what you consciously perceive: the three different meanings, somehow resolved all at once. And if you think about it, there's some time travel involved here: SCENT and CENT can only be resolved in meaning towards the end of the sentence. Consciousness is what allows us to do this resolution into something stable, take action, and be entirely oblivious to all the many possibilities that this could have been. And we do this thousands of times every day as we go about perceiving and acting on what is filtered through this conscious perception.
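A crude way to see the time-travel point in code (a toy sketch in Python; the context cues are entirely invented and are nothing like how the brain actually does it): each ambiguous /sent/ sound is held open and only committed once its neighbours, including the word that arrives after it, are available.

    # Toy late-binding disambiguation: every /sent/ sound is ambiguous
    # between three spellings and is resolved by peeking at its
    # neighbours, including the word that arrives *after* it.
    READINGS = ["scent", "cent", "sent"]
    CUES = {
        "scent": {"the", "of"},      # "the SCENT of ..."
        "cent":  {"two", "stamp"},   # "two-CENT stamp"
        "sent":  {"stamp", "me"},    # "... SENT me back"
    }

    def resolve(words):
        out = []
        for i, w in enumerate(words):
            if w != "sent":
                out.append(w)
                continue
            context = set(words[max(0, i - 1):i + 2]) - {w}
            # commit to the reading whose cues best match the neighbours
            best = max(READINGS, key=lambda r: len(CUES[r] & context))
            out.append(best.upper())
        return " ".join(out)

    print(resolve("the sent of the two sent stamp sent me back".split()))
    # -> the SCENT of the two CENT stamp SENT me back

The brain obviously does something far richer, but even this toy has to buffer and look ahead before it can commit, which is the structural point.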

This example also helps us focus on one crucial aspect of consciousness that every major theory or discussion out there ignores. There is a timeline to consciousness. What "it feels like" undulates over time.

Any theory of consciousness must be able to explain this phenomenological timeline. But since our understanding is so poor, every major theory ignores it entirely. See https://www.nature.com/articles/s41583-022-00587-4 for a great review of all the major theories. The authors lament how all the major theories are imprecise, and that they should offer "computational models to bring mechanistic specificity" and be able to account for "temporality", among other things.

And that brings me to the final part.

We already have a wonderfully precise, mechanistic, and stunningly coherent "computational" framework for consciousness. Stephen Grossberg, often hailed as a pioneer in computational neuroscience and brain modeling, has explained consciousness by attempting to model every other facet of perception, which most take for granted as the "easy stuff". His work is of great importance for AI too, which, for all the wonderful, seemingly magical stuff deep learning has generated, is largely a one-trick pony riding error-backpropagation way too hard. His 65-year body of work, however, is largely unknown. The sentiment is captured by this tweet from an academic:

https://twitter.com/KordingLab/status/1533904082393387008

That is unfortunate. Grossberg's work is important and, most importantly, offers the only coherent, mechanistically precise computational framework that also happens to explain consciousness.

We expanded on why his work is important in this response in an academic journal: https://cpb-us-e1.wpmucdn.com/sites.ucsc.edu/dist/0/158/file...

That response was a review of our recent book, Journey of the Mind, which, apart from unpacking thinking, attempts to make Grossberg's work accessible. https://www.goodreads.com/book/show/58085266-journey-of-the-...

And finally, you can go directly to the source. Stephen Grossberg's book (Conscious Mind, Resonant Brain) has all his 65 years' worth of work collated into one coherent whole. https://global.oup.com/academic/product/conscious-mind-reson...

I've written about it here: https://saigaddam.medium.com/the-greatest-neuroscientist-you...


[flagged]


If that's where you draw the line, you've already lost. Satan wanted you to use computers and he won.

If you want to be saved, stop using computers, and I'd say stop using any technology that uses electricity just to be safe.


I am just saying that OP is correct that "consciousness is not computation" if we can prove that consciousness can exist without (physical) computation.

One example of consciousness without physical computation: we know that Satan is a conscious, sentient being without a physical body (computation).

Otherwise, can we prove that a "mathematical equation" (e.g. E = mc^2) can represent consciousness?


Is this about the HODLers? So stupid. Such stupid people


Everything is consciousness. Even the stone has it. Meditate deep enough and the truth will unfold for you.


It follows from the definition of computing. Computing is using the fact that one system is rule-governed to make reliable inferences about another system. Consciousness isn't the sort of thing that could be computation. That doesn't mean a computer-like device can't do it, though.


This is just yet another disappointing take on "I feel like there's something mystical about consciousness, that I can't actually define or describe in any meaningful way, but it's definitely impossible for it to be anything computers can do!"

This is the same kind of nonsense that leads people to search for some kind of magical "quantum" thing in the brain that makes the special mystical consciousness effect, because of some vague intuition that it can't come from the normal high-level behaviour of neurons.

The obvious position, which should require significant evidence to contradict, is that whatever consciousness is, it's a mundane physical effect that can obviously be implemented with a computer. Nobody has yet made any kind of falsifiable prediction about mystical non-computational souls or whatever, and I'm going to continue dismissing this bullshit as pseudo-scientific nutjobbery until there's actually something testable or falsifiable.

Name one specific, concrete, measurable effect that you believe consciousness can exhibit and computation can't; otherwise this is pointless masturbation.


> This is the same kind of nonsense that leads people to search for some kind of magical "quantum" thing in the brain that makes the special mystical consciousness effect, because of some vague intuition that it can't come from the normal high-level behaviour of neurons.

It's this very behavior that I call consciousness-of-the-gaps. It shifts the unexplainableness of consciousness into the unexplainableness[1] of quantum mechanics. If the public did have a comprehensive understanding of quantum mechanics, consciousness would be rebased upon another unexplainable phenomenon and the process would repeat.

1. In this case, it aligns more with the public's perception of how quantum mechanics works rather than the rigorous physics version, but that misinterpretation only strengthens the argument.


I don’t understand these discussions, but maybe it’s because I’m a Christian (and Catholic).

LaMDA is not conscious, because it's not a human.

Consciousness is a gift from God to humans (and not even to animals). Robots are not conscious; they are just programmed to act as if they were.

Once your philosophy stops including God and you go purely materialistic, I guess you end up with nonsense like "is LaMDA conscious" (or, even more ridiculous, Roko's basilisk: "is AI going to torture us for eternity if we don't praise it enough?").


> “is AI going to torture us for eternity if we don’t praise it enough?”

Probably not. It is after all not an Abrahamic god…



