
I can't help but think of J. Schmidhuber's thoughts on consciousness whenever the topic comes up:

As we interact with the world to achieve goals, we are constructing internal models of the world, predicting and thus partially compressing the data history we are observing. If the predictor/compressor is a biological or artificial recurrent neural network (RNN), it will automatically create feature hierarchies, lower level neurons corresponding to simple feature detectors similar to those found in human brains, higher layer neurons typically corresponding to more abstract features, but fine-grained where necessary. Like any good compressor, the RNN will learn to identify shared regularities among different already existing internal data structures, and generate prototype encodings (across neuron populations) or symbols for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole (we see this in our artificial RNNs all the time). Self-symbols may be viewed as a by-product of this, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself. To efficiently encode the entire data history through predictive coding, it will profit from creating some sort of internal prototype symbol or code (e. g. a neural activity pattern) representing itself. Whenever this representation becomes activated above a certain threshold, say, by activating the corresponding neurons through new incoming sensory inputs or an internal ‘search light’ or otherwise, the agent could be called self-aware. No need to see this as a mysterious process — it is just a natural by-product of partially compressing the observation history by efficiently encoding frequent observations.

https://old.reddit.com/r/MachineLearning/comments/2xcyrl/i_a...
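To make the "prediction as compression" idea concrete, here's a toy sketch (my own illustration in PyTorch, not Schmidhuber's code; the alphabet and data are made up): a small GRU is trained to predict the next symbol of its observation stream, and its cross-entropy is literally the code length needed to encode the history, so lower loss means better compression.

    import torch
    import torch.nn as nn

    VOCAB = 16  # size of a toy observation alphabet

    class Predictor(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, VOCAB)

        def forward(self, x, h=None):
            z, h = self.rnn(self.embed(x), h)
            return self.head(z), h  # next-symbol logits, new hidden state

    model = Predictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # A repetitive observation stream: regularities are what compression feeds on.
    stream = torch.tensor([[1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3]])

    for step in range(200):
        logits, _ = model(stream[:, :-1])
        # Cross-entropy ~ code length: better prediction = fewer nats per symbol.
        loss = loss_fn(logits.reshape(-1, VOCAB), stream[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()

    print(f"code length per symbol: {loss.item():.3f} nats")

The hidden state after training is then a (partial) compression of everything seen so far; Schmidhuber's point is that a code for the agent itself would emerge the same way, since the agent appears in every observation.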




You are conflating consciousness[1] with self-awareness[2]. They are two distinct ideas.

In fact, the second and third sentences on the Wikipedia page for self-awareness are:

> It [self-awareness] is not to be confused with consciousness in the sense of qualia. While consciousness is being aware of one's environment and body and lifestyle, self-awareness is the recognition of that awareness.[2]

[1]: https://en.wikipedia.org/wiki/Consciousness

[2]: https://en.wikipedia.org/wiki/Self-awareness


That's philosophy's position on the philosophical view of consciousness. The broad, intuitive notion of consciousness often does equate it with self-awareness; indeed, in the consciousness link you cited:

"Today, with modern research into the brain it often includes any kind of experience, cognition, feeling or perception. It may be 'awareness', or 'awareness of awareness', or self-awareness." Etc.

I'm more of a fan of this view than of the specifically philosophical view of consciousness. The latter axiomatizes, pares down, an intuition that is itself so vague that the axiomatized version winds up not having any definite qualities or characteristics, just "I know I experience it".


Colors, tastes, and feelings aren't vague, nor are the thoughts, imaginations, and dreams they make up. They're the stuff of consciousness, and somehow you have to demonstrate how awareness results in those experiences. Shifting definitions to avoid the hard problem doesn't cut it.


You have no idea if you live in a world of zombies who just say they perceive qualia, who just give the appearance of seeing red.

I don't see it as a real problem. If a program or machine asserts that it perceives qualia, who are we to argue that it's wrong? We are in no better a position to prove the qualia of our own subjective conscious experience than by our physical resemblance to our interlocutor.

Maybe qualia is what it's like for matter to be in a feedback loop with physical interaction. Panpsychism is pretty much my position.


> I don't see it as a real problem. If a program or machine asserts that it perceives qualia, who are we to argue it's wrong?

Right now, we are in no position to argue it's wrong, because we have no accepted framework for understanding qualia.

That said, we can still make some simple assertions with confidence such as:

1: A one-line program which prints "I am afraid" does not experience fear qualia.

2: A human being who utters the words "I am afraid" may be experiencing fear qualia (and thus perhaps experiencing pain/suffering).
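For concreteness, the one-line program in assertion #1 really is just this (any language would do):

    print("I am afraid")  # emits the words; nothing here plausibly experiences fear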

If you assert that the world is full of philosophical zombies rather than humans like yourself, you may not agree with assertion #2. But we build our ethical and legal frameworks around the idea that we aren't philosophical zombies, rather, that we have feelings and want to avoid suffering.

Once you start talking about more complicated programs, though, we don't know what the right assertions are. We don't have an understanding of what things can suffer and what things cannot, for starters. We generally agree that, say, dogs can suffer. That's why we have laws against animal abuse. But what about programs?

It is absolutely a real problem, because some day we will have to build laws and culture around how we treat AI. Whether or not an AI can suffer (i.e., by experiencing qualia that can be considered undesirable) will likely affect how we treat that AI. If we have no way to answer "can this program suffer?", then we are setting ourselves up for regret.


"that's why we have laws against animal abuse"

I think there is a step or two missing in between. Abusing dogs isn't bad merely because something that suffers is being abused. It's also, e.g., doing something that has no value to society (unlike animals being mistreated in a feedlot), or something that sows unacceptable levels of aggression. It feels much more complicated than you make it seem.


I took it as a necessary condition, not a sufficient condition. The other necessary conditions were omitted because they aren't important to the line of reasoning being presented.


If you try to find a logical basis for hypocritical laws, you're gonna have a bad time.


> If you assert that the world is full of philosophical zombies rather than humans like yourself

Hm? There is no objective test for whether I am a philosophical zombie or not.

Consciousness is subjective, so it's simply incoherent to ask whether something is conscious or a zombie.


Once an AI is advanced enough to make the question hard to answer, isn't it likely that the AI will be advanced enough to simply decide the answer for us?

I imagine that once someone manages to produce an AI with enough agency to seem sentient, it will far exceed human capabilities.


I’d say the question is already potentially hard to answer.

We have deep learning systems that perceive and strategise. Who’s to say that AlphaGo doesn’t have some experience of playing out millions of games?


Please read Solaris.

Or talk to an ant.

There's no reason to expect to be able to understand something's ideas just because it's smarter than you.


Ned Block covers this in his paper on the Harder Problem of Consciousness. Other humans share a similar biology, so we have good enough justification to think they're conscious. It won't defeat solipsism, but it's enough to make a reasonable assumption.

Data, or whatever AI/android, doesn't share our biology. And since we can't tell whether it's the functioning of our biology or the biology itself that provides us with consciousness, we lack any reasonable justification for assuming Data is conscious. He then goes further into functional isomorphs at different levels and how that might or might not matter for Data, but we can't tell.


> Other humans share a similar biology, so we have good enough justification to think they're conscious.

That's the way it goes with these arguments: it always comes down to an appeal to what's reasonable or plausible (those who insist consciousness must be an algorithm are doing the same thing, at least until we have algorithms that seem to be making some progress on the hard problem.) One might equally say other humans share a similar physics. When Dennett said "we are all p-zombies", I suspect he was making the point that it would be highly tendentious to take the position that anything both behaving and functioning physically in the manner of a human would not be conscious, except for humans themselves.


I see a lot of resemblance in biology between myself and cats, does that mean that cats have qualia? I infer they dream from their twitches as they sleep.

How about mice? Lizards? Fish? Insects? Nematodes? At what point do qualia stop occurring? I see a more or less continuous decline in similarity. I don't see any justification for a cut-off.

I don't think Data is conscious. I suspect consciousness might be a side effect of physical computation. I don't believe there's any magic to the human brain's inputs and outputs that cannot be computed, and thus display the appearance of consciousness. In fact, I see the Turing test as a thought experiment more than an actual test: a way of removing the appearance of a human body from the task of evaluating an artificial intelligence as a thinking, conscious being. If it quacks like a duck, a rose by any other name, etc.

In fact I'm not entirely convinced that the Western conception of consciousness isn't a learned cultural artifact, that there aren't people who have a more expansive notion of self which includes close family, for example. Have you ever suffered watching a relative hurt themselves? Ever had false memories of injury from childhood which you found out later happened to someone else?


We have all discussed this in your absence, and we have decided that only you are a zombie.


As far as I understand it, the "hard problem of consciousness" is the claim that consciousness is something absolutely separate, that there is a jump between constructive processes that make up the informal view of consciousness and the full formal, philosophical view. That approach always seemed like a complete dead end.

I mean, "philosophical zombie" construct would seem to experience "Colors, tastes, feelings" and whatever concrete, they just would lack the ineffable quality of (philosophical, formal) consciousness - you could brain pattern akin to color sensation for example. This final view is kind of meaningless and I think people are attracted to the view by misunderstanding, by thinking something concrete is involved here.


The concreteness is your own subjective experience. All this other stuff about constructive processes and zombies is word play. It doesn't change your experience.

The issue is providing a scientific explanation for how the physical world produces, gives rise to, or is identical to consciousness. Arguments like p-zombies and Mary the color scientist are meant to show we don't know how to come up with such an explanation, because our scientific understanding is objective and abstract, while our experiences are subjective and concrete.

I prefer Nagel's approach to the hard problem, as he goes to the heart of the matter, which is the difference between objective explanation and subjective experience. Science is the view from nowhere (abstract, mathematical, functional), while we experience the world from somewhere, as embodied beings, not equations.


In your example of qualia (colors, tastes, feelings), how is that not awareness? That is, what is the difference between awareness and consciousness here?

In your example of the abstractions of qualia (thoughts, imaginations, dreams), how are they not labels of experience? That is what ML does, after all.

Today ML may, for all intents and purposes, have roughly the intelligence of a bug, but it has to have awareness to label data. Where does consciousness come in beyond labeling and awareness that ML does not have today?


Do you think that ML algorithms are having experiences when they label data?

We label experiences with different words because we have different kinds of experiences. But it's the experiences and not the labels that matter, and the labeling of those experiences does not cause the experience. We say we have dreams because we do have dreams in the first place.

You're proposing that labelling data is the same thing as having an experience, but you haven't explained how data is experience, or why the labelling matters for that.


ML algorithms have very beautiful dreams and hallucinations: https://www.google.com/search?q=neural+net+dreams ;)

More seriously though, if we agree that the brain is described by physics, then all it can do, a Turing machine can do as well, so at root all experiences have to be data and computation.


Perhaps, but keep in mind that physics is a human description of the universe, not the universe itself. So it may also not be data and computation, as those might just be useful abstractions we create to model reality. Same with math.

But that gets into metaphysics, which is notoriously tricky.


That's why I say "if we agree that the brain is described by physics".

If you argue that a brain cannot be described by computation and that there is something supernatural like a soul, that's a fine theory, and the only way to disprove it is to create a working general AI.


> If you argue that a brain cannot be described by computation and that there is something supernatural like a soul,

Why are those the only two options? There's lots of different philosophical positions, particularly when it comes to consciousness.


I am not really sure about the value of these philosophical positions, since the ones I have seen simply suggest ignoring logic for a little bit to obtain the result they feel is correct.

The suggestion by Penrose that consciousness is connected to quantum state, and so cannot be copied or computed on any classical Turing machine fitting in the universe, is an interesting way to circumvent the strange predictions of functionalism, but it is not very plausible.

And Jaron Lanier, in your other comment, explicitly suggests not using approaches like that, because they would weaken the vitalist argument when proven wrong.


The pancreas is described by physics. Can a Turing machine make insulin?


A Turing machine can make insulin that works on cells created by the Turing machine.


It's only "insulin" and "cells" for someone interpreting the program. The issue here is that if you had a simulated brain, what makes that simulation a brain for the computer such that it would be conscious?

Here's Jaron Lanier on the issue of consciousness and computation: http://www.jaronlanier.com/zombie.html

His meteor-shower intuition pump is similar to the argument that nations should be conscious if functionalism is true, the reason being that any complex physical system could potentially instantiate the functionality that produces a conscious state. It's also similar to Greg Egan's "dust theory" of consciousness in Permutation City.

If one is willing to bite the functional bullet on that, then you can have your simulated consciousness, and the universe may be populated with all sorts of weird conscious experiences outside of animals.


There's no scientific way to define "experience". How do you know whether a microphone experiences sound or not?


>Do you think that ML algorithms are having experiences when they label data?

Yes, but not in the way we do. An RNN keeps track of what is going on, for example.

To take in information through sense inputs (e.g., eyes) is experience, but in an unconscious way. We get overloaded if the information we take in is not deeply processed before it becomes conscious. A CNN is similar in this manner, but also quite alien.

When we're born, our brains have not yet formed around our senses. Our brain goes through a sort of training stage, taking in raw sensation and processing it. The more we experience certain kinds of sensations, the more our brain physically changes and adapts. It's theorized that if you made a new kind of sense input and could plug it into a newborn brain correctly, the brain would adapt to interfacing with it.

This strongly implies qualia are real, in that we all experience the world in drastically different ways. It may be hard to imagine, but my present moment is probably very alien compared to yours, as it is with different animals, as well as simpler things like bacteria and ML.


Exactly.

One of my more vivid memories is excitedly opening my copy of Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind" and being dismayed as it slowly dawned on me that Jaynes's entire book was based on this category error.

For the life of me I can't understand why people don't understand this distinction (or choose to ignore it).

I can only assume that people ignore the very obvious "magical thing" that is happening in their own heads due to ideological commitments to physicalism or something.


What is this magical thing that is so obvious to you? Something may seem magical without being magical. To the person who survives a tornado while all his neighbors die, the sense they were chosen to live and have a special purpose can be compelling, but is false.

Materialism can't explain everything, but there is yet to be any proof that magic explains anything. Lacking that, assuming a physical basis as the explanation is prudent.


Well, to my mind you seem too hasty to conclude it's false. Sure, statistically there is a very small but non-negligible chance that exactly one person will survive, but the realization of an unlikely event is nothing short of a miracle. To conclude that it happened randomly seems as naive as to conclude that it happened purposefully. If your gut tells you it was a random event, that is just fine, but be aware that it is a metaphysical axiom of yours.


Certainly nothing can ever be conclusively proven. I might be a brain in a vat that has had a fictional reality fed to me for my entire life. Such arguments may be interesting in some venues, but is HN really that venue?

> To conclude that it happened randomly seems as naive as to conclude it happened purposefully.

I strongly disagree. You aren't saying that both outcomes are possible (I'd agree with that), but that they are equally likely, a much, much stronger claim. The sun might go on shining tomorrow, or it might collapse due to some unforeseen physical process. Both have non-zero probabilities, but it is the person who claims they are equally likely who is the naive one.

As "miracle" wasn't defined, let's interpret it to mean highly unlikely events; they happen millions of times a day, and we rarely notice. The bill I received in tender has the serial number MF53610716D. What are the odds of that? If you limit "miracle" to highly unlikely events that most people would also notice, it would be strange if they weren't happening all the time. Every person who wins a lottery (and there are thousands every day) has a "miracle", as does the person who is told by their doctor their cancer is terminal but then sees it go into remission, or the guy who loses his job and is about to be evicted, then gets a job callback for an application he sent in months ago and forgot about.


"assuming a physical basis as explanation is prudent" yeah, good luck making physicality relevant for most explanations. My wife asks why I didn't put the laundry away, for instance. "Physics!" I declare triumphantly.

Now, if I put music on and rub her back a bit... Well, ha! There's the magic, my good sir.

Imagine walking into a film studio with the idea of film production based exclusively on the principles of physics. No. Magic and physics are no more opposed than biology and physics. When hidden (psychological?) forces clearly underlie economically valuable phenomena (architecture, music, media, etc) respecting the magic is closer to the truth than reducing it to physics. Physics just isn't the right explanatory level.

If you'd like to learn more about definitions of magic that aren't "phenomena that don't actually exist", I recommend Penelope Gouk's work on music, science and natural magic in the 17th century. It's about the magic involved at the formation of the Royal Society.


> Now, if I put music on and rub her back a bit... Well, ha! There's the magic, my good sir.

Your answer is rather flip.

Also, if you want to redefine "magic" to be nice things that are physically based but can't be summarized in a neat equation, then I also have no desire to argue. It is clear that the person I was responding to meant "magic" in a different way that you seem to be using it.


If you want to think magic is Harry Potter, fine. But it has an intellectual history that can help frame our interpretation of contemporary technology. Recommended reading: https://plato.stanford.edu/entries/pico-della-mirandola/


I think there are multiple definitions of magic at play here. One is phenomena that are beyond the limits of human understanding. The other, based in the theory of vitalism, is something unexplainable, but it also comes with the assumption that these unexplainable things are due to some sort of vital force that living things have (often, specifically humans) [1].

When you talk about the magic of forgetting about the laundry, you mean the first kind of magic. When others talk about consciousness being inaccessible to AI, they're talking about the second kind of magic. I don't think it's that hard to imagine an AI that would forget to fold the laundry and not know why. The main problem with a "conscious AI", I suspect, is that nobody will actually want one.

[1] I have no idea whether viruses or prions are supposed to have vital force; vitalist theory emerged before most of modern biology and is mostly used today by those who have no understanding of modern biology.


> The main problem with a "conscious AI", I suspect, is that nobody will actually want one.

Why do you think that? Because people couldn't accept "fake humanness"? Or maybe ego (feeling superior), or finding it not relatable?

The one thing I wonder: if they were truly sentient, why would they care to, or even try to, serve you?


> Now, if I put music on and rub her back a bit... Well, ha! There's the magic, my good sir.

Sure. There's also the magic of feeding my cat, so he stops nagging me, and sits on my lap purring. And just about everyone would agree that's similar, no?

But what about the magic of running "sudo apt update" before "sudo apt install foo"? That's also arguably similar, albeit many orders of magnitude simpler.


physicality n. obsession with physical urges

On the subject, you are making a good point comparing social behavior with mental activity. Just like there is no “magic” in the way people interact with each other, there is no magic in how neurons do the same.


> Just like there is no “magic” in the way people interact with each other, there is no magic in how neurons do the same.

Except I am arguing that there is most definitely magic in human interaction.


that's not the definition of physicality...


> I can only assume that people ignore the very obvious "magical thing" that is happening in their own heads due to ideological commitments to physicalism or something.

You nailed it. People think that accepting that consciousness is a hard problem is giving in to religion.

It need not be that way.


After thinking about it for many years, I've come to the conclusion that some people really haven't noticed themselves (e.g., I think Daniel Dennett is a p-zombie!).


I know it was an incidental point, but it sounds like you don't think very highly of Dennett. What specifically don't you like?


Oh no, I do not mean that in a pejorative sense. I have a lot of respect for him and his work. It's a way to understand how someone so thoughtful and intelligent and dedicated can seemingly be unaware of the "hard problem". One day it occurred to me that maybe he just hadn't had the "breakthrough" or "epiphany". It's unlikely but not impossible.

FWIW, his wikipedia entry says,

> He also presents an argument against qualia; he argues that the concept is so confused that it cannot be put to any use or understood in any non-contradictory way, and therefore does not constitute a valid refutation of physicalism. His strategy mirrors his teacher Ryle's approach of redefining first person phenomena in third person terms, and denying the coherence of the concepts which this approach struggles with.

that's specifically what I don't like. In re: the "hard problem" he wants to declare victory without a fight, or a prize.


Is physicalism the problem? My understanding is that physicalism is still in good standing – at least when it comes to the conscious experience of qualia. (The concept-of-self encoding obviously doesn't require a physical substrate.)


The problem is that physics is not the right level of abstraction and that we do not have an operational theory for working at the right level. I suppose it's also a problem that we can't seem to agree on the definition of consciousness.

Anyway, trying to understand consciousness by applying physics is just as silly as trying to understand why your complex fluid dynamics simulation is giving an incorrect result by peering at the CPU's transistors with an electron microscope: it's the wrong level of abstraction.

One thing that seems to be making our task a lot more difficult is that evolution doesn't even try to design the biology of systems with clean abstraction boundaries that we can understand. There are so many strange things going on in our bodies that we can barely describe them, never mind understand them. One example is what happens when a person consumes Ayahuasca [1].

[1] https://en.wikipedia.org/wiki/Ayahuasca


Probably not, but I mean the kind of informal physicalist commitment that says something like "if I can't imagine quantum fields feeling things, then feelings don't exist." I see a lot of that, though less and less these days.

I think Chalmers helped clear the air a bit here, though I don't think jumping straight to panpsychism is particularly helpful to the discourse, either.


It was said elsewhere that a sufficiently advanced “something” is indeed indistinguishable from magic.


Is this distinction necessary or strictly proven? They seem pretty linked in my personal experience. (All the times I'm not conscious, like when I'm sleeping, I'm not self-aware, and vice-versa. Intuitively it seems like anything that might put me into a semi-conscious state would be considered as also putting me into a semi-self-aware state and vice-versa.) The page lists several disorders related to self-awareness, but I don't think any of them represent anything like "0% self-awareness yet 100% consciousness" which would have more strongly implied that they may be different things.

The page lists several types of self-awareness, including "introspective self-awareness". Maybe what we call consciousness is just the internal experience of something with introspective self-awareness. This does raise the question of what "internal experience" is, though.

I think "internal experience" is what some people mean by "consciousness", especially when they say things like "maybe everything has consciousness, including rocks or random grids of transistors". I think there exists some sense of "what it is like to be a rock", but it's an utterly empty experience without memory, perception, personality, emotions, or the feeling of weighing choices; something very close to the "null experience", which is an experience in the same sense that zero, or a number utterly close to zero, is still a number.

The internal experience of a random grid of transistors only just barely starts to touch a sliver of what a human's internal experience consists of. The internal experience of a worm starts to have a few recognizable qualities (proprioception, the drive to maintain personal needs), but it still lacks many large aspects of a human's internal experience. (Maybe a thermostat with sufficiently rich inputs, output control, world model, and juggled priorities has an internal experience similar to a worm's.) Large animals tend to have more elements of awareness (including memory, social awareness, and some introspective awareness) that bring their internal experience closer to having what feels like the necessary elements of human consciousness.


I'd say you're off. I'd describe the experience of the worm much the same way you're describing the experience of a rock: just a couple of biochemical circuits. And the experience of the rock would be even more atomically null to compensate.


I'm not sure we're in disagreement: I agree that a rock's experience is practically null (maybe the fact that its atoms are connected and may be pushed and pulled together makes up the tiniest grain of internal experience), and that a worm's experience is still a world apart from ours and probably much closer to a rock than us. If we were to score an experience on how human-like it is, I'd expect the various elements I named to each refer to exponential or greater increases in score. Comparing the depth of a worm's experience to a human's would be like comparing the volume of a nearly 1-dimensional line segment to the volume of a many-dimensional hyperobject.

Our brains are just tons of biochemical circuits working very mechanistically, with many interesting feedback loops going on that make up our internal experience. A worm's brain has a small and probably countable number of feedback loops going on that I assume make up its tiny internal experience.


You act like these are well defined and agreed upon categories. The page on consciousness you link lists off many different definitions people attribute to consciousness, including: "It may be 'awareness', or 'awareness of awareness', or self-awareness."


I forget who said it - and it's probably somewhere tucked in these comments, but paraphrasing...

If one were to simulate every drop of rain, gust of wind, and tidal current of a hurricane, to ultimate perfection, would you say the processor is wet?


Well, no, but you'd say the simulated environment absolutely is wet.


This is the "Mary's Room" thought experiment which explores the nature of qualia. - The CPU analogy is a nice one though!


>You are conflating consciousness[1] with self-awareness[2]. They are two distinct ideas.

Consciousness is not an idea, and those who have experienced a little bit of it (via meditation, certain chemicals, or other ways) will laugh at all the attempts philosophers and scientists make to define it. Simply because you can only define something you can observe from outside, but here there is no outside; everything is inside it. How can you possibly define that? :D


Right -- consciousness is whatever it is that is aware that it is self-aware. In fact, anytime you pinpoint consciousness as any particular awareness, you beg the question. How can you be aware of your awareness?


If your brain has a computational process for awareness, and it has itself somewhere in the inputs... that’d make you aware of your awareness, no? It’s recursion!
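A toy sketch of that recursion (purely illustrative; the names are made up): a process whose inputs at each step include a summary of its own previous processing, so part of its state is about its own awareness.

    # Illustrative only: an "awareness" step whose input includes its own prior summary.
    def aware_step(world_input, self_summary):
        # State is built from the world AND from the process's own last step,
        # so in this thin sense the process is "aware of its awareness".
        state = {"world": world_input, "about_me": self_summary}
        summary = f"last step I processed {state['world']!r}"
        return state, summary

    summary = "nothing yet"
    for obs in ["red patch", "loud noise", "red patch"]:
        state, summary = aware_step(obs, summary)
        print(state)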


The only way to build a recursive loop is to say that awareness is some state (the output of the awareness process). But that just begs the question: how could awareness be a state? If it is a meta-phenomenon (i.e., the result of some sort of complexity or self-referential loop), then what exactly is aware that one is aware?


Self-aware consciousness is where it gets interesting, in a way that debating the consciousness of toasters and such just cannot match.


That's appealing - certainly more promising than naively reducing consciousness to fan-in/fan-out, as in OP.

But it's also quite hand-wavey, because RNNs have no real hooks for self-reflection.

You can strap a meta-RNN across a working RNN, and if you're lucky it may start to recognise features in the operation of the working RNN. But are your features really sentient self-symbols, or are they just mechanically derived by-products - like reflection in a computer language, taken up a level?

You need at least one extra level of feedback to do anything interesting with this, and it's really not obvious how that feedback would work.

It's even less obvious if this will give you "genuine" experiential self-awareness, or just a poor and dumbed-down emulation of it.
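For what it's worth, the meta-RNN wiring itself is easy enough to sketch (an assumed architecture for illustration, in PyTorch; nothing here settles the question above): a second recurrent net whose observations are the first net's hidden states.

    import torch
    import torch.nn as nn

    work = nn.GRU(input_size=8, hidden_size=32, batch_first=True)   # sees the world
    meta = nn.GRU(input_size=32, hidden_size=16, batch_first=True)  # sees the worker

    obs = torch.randn(1, 20, 8)             # a toy sensory stream
    states, _ = work(obs)                   # working RNN processes the world
    meta_states, _ = meta(states.detach())  # meta-RNN "watches" those hidden states

    print(meta_states.shape)  # torch.Size([1, 20, 16]): features about the processing

Whether those meta-features are self-symbols or just mechanically derived by-products is exactly the question, and the missing extra feedback loop (feeding meta_states back into work) is where it gets hand-wavey.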


>But are your features really sentient self-symbols, or are they just mechanically derived by-products - like reflection in a computer language, taken up a level?

Why couldn't we apply the concept to ourselves?

Self-awareness is one commonly believed description of consciousness. Going off of that, we become self-aware in three primary ways: 1) we're told about ourselves by our parents at a young age, before the age of one; 2) we observe ourselves in the environment, often influencing it, from looking in a mirror to social interaction; 3) we get meta about a series of things (body + mind), and self-awareness arises from that generalized intelligence.

Though you could argue that #3 alone may not be enough to create self-awareness and that ultimately #2 is required.


Schmidhuber is a great thinker in this field. I don't have a source, but I've seen him more briefly summarize this same point as "consciousness is a by-product of data compression." Pretty close to the last sentence you quoted.


He talks about this in his appearance on the Lex Fridman podcast. He's my favourite guest, fascinating guy.


This sounds like some kind of "cognitive homunculus" looking at itself in a mirror. Does it explain anything, or just push the question of what is perceiving the subjective experience one level deeper?


There was a young man who said, "though it seems that I know that I know, what I'd like to see is the 'I' that knows me, when I know that I know that I know."


And here I am reading Frank Herbert's Destination: Void, which deals with developing a consciousness, from ideas of introducing emotions to purposely having errors in the system.


I love seeing how consciousness gets described every century with analogies from the era’s technological understanding.

I wonder how it will be described in 1000 years :)


And I am amused by how someone always rolls out this observation as if it settled the issue for good. It's a caution against mistaking familiarity and "it seems to me that..." for the way things must be, which all parties to the discussion would do well to heed.



