A man who can read letters but not numbers exposes roots of consciousness (sciencemag.org)
430 points by gumby on Aug 1, 2020 | 213 comments



The last paragraph surprised me even though it shouldn't have. It's shocking to have a clever scientific story suddenly grounded in the reality that this person is rapidly dying. In some sense he has donated his body to science even before his death.


It was pretty jarring. You’re reading along, fascinated if you’re me, then abruptly reminded that this was a human being, not a Petri dish. Empathy’s gotta be one of the most important skills for an experimental social psychologist.


>Empathy’s gotta be one of the most important skills for an experimental social psychologist.

Or maybe the exact opposite? I would find it very hard to do research and empathize with the subject of my research at the same time.

This has also just made me realize that this problem with empathizing with my research is actually highly problematic for the research I am doing at the moment.


An EMT friend described her take as: you need empathy in order to help people, but you need to be able to switch it off, because sometimes what you're about to do is going to hurt them.

It's like any of our instinctual responses. Sometimes it helps, sometimes it's in the way.


It's not really switching it off as such; it's more that you have to "long-term empathize" with the person you're helping.

Setting a broken bone or popping a joint back into place will hurt right now, but the consequences of not doing it will hurt the person a lot more over time. So you do inflict pain on them right in the moment, which is certainly unpleasant to do, but in the hope that the overall pain and discomfort for them will be significantly lessened.

At least that's how a friend of mine (former firefighter) put it.


As someone with really bad dyslexia and dyscalculia, I can't say I'm particularly surprised. I've done a lot of research on the neurological differences involved, and they're fairly interesting:

http://www.neuroanatomy.wisc.edu/selflearn/Dyslexia.htm

https://www.brainfacts.org/archives/2011/dyslexia-and-the-br...


Would you be interested in adding what you’ve learned to the discussion here: https://www.kialo.com/our-perception-as-humans-is-not-necess... ?


Somehow this is not surprising to me. I have heard about experiments related to free will wherein the brain already made a decision even before the conscious mind became aware of it, thus casting doubt on our conscious experience of free will.

Also, in terms of problem solving, there are experiments which show that the brain calculates the answers and "presents" the answers later on to the consciousness. It's not as if the problems were solved within the scope of consciousness, which is of course contradictory to what we experience. We feel the contradiction because we experience everything through consciousness, i.e. we can't experience things which happen outside the realm of consciousness. The brain scans prove in a sense that there is a disconnect (for instance, in this article the brain scans showed signals related to seeing faces in the numbers, but the conscious experience was of not seeing them).

The link between what the brain does, and what we experience (consciousness) is indeed the hard problem of consciousness.


All of the research you describe can be safely bundled into the same category as "dead salmon shows brain activity on an fMRI if you squint hard enough." It's phrenology for the 21st century, except with dyes and MRIs instead of head-shaped calipers.

The second you say the words "conscious mind", you leave the realm of science. At best, "consciousness" is an abstraction used by psychiatrists and laypeople over much deeper, more complex, and less understood concepts in neuroscience. The field has yet to differentiate the roles neural plasticity plays between newborns, the average college student, and brain trauma victims - let alone grapple with the implication that our brains are the result of a billion years of evolution.

The nature of human intelligence is far wilder than you or I can imagine, Horatio, but publish-or-perish cannon fodder is clearly not the answer.


I'm struggling to come up with something more substantive to say in response, but this is a bad take and it needs to be said. Studies that demonstrate high level brain activity without the corresponding conscious experience are important in that they help to constrain where and how conscious experience is generated. Platitudes about human intelligence have no utility here.


The poster you're responding to is correct to a large extent. Yes, you can record waves, and you can even classify them, like the N170. Responses can be 'localized'; however, one has to be a bit careful. Even many researchers say it is a new phrenology: for instance, William R. Uttal wrote a book on it, called "The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain". This book is not based on philosophical arguments, but on the research done by cognitive neuroscientists. I have yet to come across a substantial criticism of Uttal's book.

Localization is at most what one can do, and it is not a cure-all for every puzzle or problem.


>Localization is at most what one can do, and it is not a cure-all for every puzzle or problem.

But who is saying it does?


The media, grant writers, some scientists, and anyone listening to the former three groups - who end up parroting the decision-making experiment and other novelties of the academic rat race as if they offer some sort of insight into free will.


> conscious experience is generated

Is there any evidence that it is generated?


It seems to come and go with life, wakefulness, blows to the head, etc. It can be modulated through stimulating brain cells or destructive damage. Yes, there's a lot of evidence that the brain generates consciousness.


What else could it be?


Why does our not fully (or even close to fully) understanding how the brain works mean conscious vs subconscious is unscientific?

Is all mental health research unscientific based on the same logic?

My understanding is that it's conclusive that some of our brain's processes we feel we are controlling (conscious thinking), while others happen in the background without us realising. The fact that we don't understand everything that makes that the case doesn't stop that being the case.


You're making a common mistake, that of equating "scientific" with "good". Science is a particular method for learning about the world, not a marker of moral worth. I don't want to get too far into the epistemological weeds here, but there are plenty of other, perfectly valid ways to understand the world, many of which are used and studied in academia. The easiest examples to point to are philosophy and mathematics. Mathematicians are not scientists, neither are philosophers.


I may well be misunderstanding a definition of science, but no I wasn't confusing it with "good".

I don't know what the best definition of science is, but here's the first one Google suggests:

> "the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the physical and natural world through observation and experiment."

Can you explain what excludes the studying of human behaviour or mental health from how you define "science"?


This is what I meant by "getting into the epistemological weeds" heh. I'll requote your definition with emphasis.

> the intellectual and practical activity encompassing the *systematic* study of the structure and behaviour of the *physical and natural world* through *observation and experiment*.

All three of the italicized aspects are not how the mind is studied. The mind is not a physical object. It cannot be observed or experimented with in a systematic way.

You can obtain three samples of three different kinds of steel and perform a destructive test on all three of them, then when you're done you can obtain more. This is materials science and you can get paid a lot of money to do it.

There is "neuroscience" where you do the same thing, with neurons. Neurons are not minds. You'll never learn how a mind works by studying neurons, any more than you can learn how a building works by studying steel. If you're looking to learn how a building works, steel is just one of the hundreds of topics you need to understand.

There is no "building science" and there is no "mind science", because buildings and minds are not physical, natural things. They're unnatural things, conjured up by human invention and imagination. Physically speaking, i.e. according to the best observation techniques the field of neuroscience can come up with, a newborn, a brain trauma victim, and a college student are all identical as your parent notes.

This deficiency in understanding is what the commenter you were responding to was pointing to. People think they can understand the mind by doing fMRIs. And treating any kind of rigorous exploration and study as "science" leads to this kind of mistake.

Another example, computer science isn't science. It's math. Computers are deterministic, there's no point in running experiments. Computers are not natural, nor are they physical. Silicon is, and you can study that by running experiments and observing the results.

To get epistemic: knowledge comprises justified true beliefs. Science is a type of justification. There are other kinds. If you use other forms of justification, then you don't get scientific knowledge out of it. Overloading "science" makes people believe that all knowledge that comes out of academia is scientific, i.e. made through repeated experimentation and observation, such that we can rely on it not to fail on us.


Oh come now.

> The mind is not a physical object. It cannot be observed or experimented with in a systematic way.

So clearly false if you read the article, or have ever looked at an optical illusion.

> ... because buildings and minds are not physical, natural things. They're unnatural things, conjured up by human invention and imagination

Then likewise Ecology, Geology, Biology and all other sciences that cover systems aren’t science? What is left? Surely not physics because that could be quantum OR relativity. Surely not QM because that could be particles OR waves.

> Another example, computer science isn't science. It's math. Computers are deterministic, there's no point in running experiments

Ok now I know you are trolling.


>> The mind is not a physical object. It cannot be observed or experimented with in a systematic way.

> So clearly false if you read the article, or have ever looked at an optical illusion.

"Mind" vs. brain. For example, you will never observe the concept "triangularity" or the number 8 or the color "red" in a brain scan. You will see brain activity correlates between what is most likely someone perceiving an instance of triangularity (e.g., a concrete triangle drawn on the blackboard) or, say, the symbol "8" that's been drawn with a red marker, but those are not concepts, which are general. You cannot imagine "triangularity", only particular triangles. But all physical triangles are concrete and particular. You can't draw or construct an abstract triangle because any such drawing or construction will be a particular triangle that excludes all others (an infinite number of them, actually). Properties like the color red can be thought apart from any particular red thing, but you won't find "red" by itself rolling down the sidewalk.

So you have the following syllogism:

Every physical thing is particular. No abstract thing is particular. Therefore, no abstract thing is physical.

So if we can know things abstractly, then it must follow that such things exist in a nonphysical way in our minds. But there is no physical thing in which abstract things can inhere as abstract things because physical things are always concrete and particular.

Things get worse for reductionists. Current reductive views of physical reality exile properties like color as we commonly understand them to the realm of consciousness or mind or whatever. I.e., color as we commonly think of it is reduced to the experience of the reflectance properties of matter, that is, to a property of the mind, because as we have assumed, it is not a property of matter. But if color is not a property of matter, and the mind is material, then color cannot be a property of the mind. Therefore, the mind must be immaterial. This latter view of reality is essentially Cartesian where the universe is divided into impoverished res extensa and the metaphysical rug of res cogitans under which we can sweep all of those unseemly phenomena that reductive accounts of reality cannot cope with. Of course, you might be able to get away with that as long as you're a Cartesian dualist, but materialism, the metaphysical bastard child of Cartesian dualism, takes Cartesian dualism, jettisons res cogitans and attempts to reduce all of those unseemly phenomena attributed to res cogitans to phenomena proper to res extensa. Of course, materialism is utterly incapable of dealing with this problem by definition. Stubborn, dyed-in-the-wool materialists like the Churchlands or Dennett, instead of rethinking their presuppositions, have resorted to the pathetic tactic of denying the very thing they were supposed to explain. Can't explain color or abstract concepts? Then they must not exist!


>Every physical thing is particular. No abstract thing is particular. Therefore, no abstract thing is physical.

But abstract things can supervene on the physical. Information, for example, is abstract, but it supervenes on some physical stuff. Granted, information is not identical to any particular instantiation, but the abstract pattern can be manifested by a particular physical instantiation. You're welcome to call information immaterial if you like, but it presents no metaphysical difficulties for physicalism.

>Stubborn, dyed-in-the-wool materialists like the Churchlands or Dennett, instead of rethinking their presuppositions, have resorted to the pathetic tactic

Why are non-materialists so fucking angry? Incivility doesn't help your cause. If your arguments were good they would stand on their own without embellishment.


> But abstract things can supervene on the physical. Information, for example, is abstract, but it supervenes on some physical stuff. Granted, information is not identical to any particular instantiation, but the abstract pattern can be manifested by a particular physical instantiation. You're welcome to call information immaterial if you like, but it presents no metaphysical difficulties for physicalism.

Example? The word "information" is often used in a magical way. Patterns need not be immaterial, and I never argued for that, but I really don't know what you mean by "information". (FWIW, "supervene" is another one of those terms.)

> Why are non-materialists so fucking angry? Incivility doesn't help your cause. If your arguments were good they would stand on their own without embellishment.

Pot to kettle? Look, there's a history there that maybe you're not privy to. Eliminativists and other materialists have consistently refused to address these fundamental problems while simultaneously ridiculing and dismissing anyone who doesn't agree with them. So you'll have to forgive me for being "uncivil". After a while, it's hard not to conclude that we're dealing with willful ignorance or intellectual dishonesty.


Information is the state or configuration of one system that tells you something about another system. The pixels on your screen contain information about the state of my brain because the particular pattern of lights communicates the thoughts in my head. Information is abstract because it is independent of the medium: the pixels on the screen, pressure waves in the air, marks on paper, etc. can all be used to carry the same information.

Supervene means something is constituted by the configuration of some substance. Or the more common definition: A supervenes on B if there can be no change in A without a corresponding change in B.

>Eliminativists and other materialists have consistently refused to address these fundamental problems

I admit that people have reason to be frustrated with certain materialists, Dennett chief among them. I have my share of frustrations with him as well. But there's this trend I see with non-materialists (online and professionals) showing active disdain for materialism/physicalism that is entirely unhelpful. Ultimately we're all just trying to solve one of the hardest problems of all. Genuine efforts to move the conversation forward should be welcomed. Intractable disagreement just points towards the need for better arguments.


Okay, so intentionality is essential to information. Let's take your example of the pixels on your screen.

There is nothing intrinsic to those pixels or that arrangement of pixels that points to the state of your brain. That doesn't mean there isn't a causal history whose effect is those physical pixel states. It is, however, entirely a matter of convention how those pixels are arranged by the designers and how they must be interpreted in conformity with the intended convention. You must bring with you the hermeneutic baggage, so to speak, that allows you to interpret those pixels in the manner intended by the designers. Those same pixels will signify something else within a different context, and it is the observer that needs to have the contextual information to be able to interpret them in conformity with the designer's intentions. Furthermore, the designers of the program could have chosen to cause different pixels to light up to convey the same information. They could have instead caused those pixels to resemble, in aggregate, English-language sentences that, when interpreted, describe the state of your brain. But there is nothing about those pixels qua pixels that can tell you anything about your brain state.

The meaning of each pixel is just that it is a pixel in a particular state, and the meaning of the aggregate of pixels is that they are an aggregate of pixels, each in a particular state. You can call that supervenience in that the meaning of the aggregate follows from the meanings of the individual constituting pixels, but none of that changes the fact that the pixel states as such, whether individually or in aggregate, do not intrinsically mean your brain state.

This is analogous to written text. A human actor with some meaning in mind causes blobs of ink to be arranged in some way on paper in accordance with some convention. Those blobs of ink are just blobs of ink no matter how many there are or how they're arranged. The reader, which is to say the interpreter, must bring with him a mental dictionary of conventions (a grammar) that relates symbols and arrangements of symbols to meanings to be able to reconstruct the meaning intended by the author. The meaning (or information) is in no way in the text even if it influences what meaning the interpreter attaches to it.

As Feser notes[0], Searle calls this derived intentionality which is different from intrinsic intentionality (thoughts are one example of the latter). So I do not agree that anything abstract is happening in your panel of flashing lightbulbs.

[0] https://edwardfeser.blogspot.com/2010/08/fodors-trinity.html


>Searle calls this derived intentionality which is different from intrinsic intentionality

But what makes derived intentionality not abstract? What definition of abstract are you using that excludes derived intentionality while including intrinsic intentionality?

But let's look more closely at the differences between derived and intrinsic intentionality. Derived intentionality is some relation that picks out a target in a specified context. E.g. a binary bit picks out heads/tails or day/night in my phone depending on the context set up by the programmer. Essentially the laws of physics are exploited to create a system where some symbol, in the right context, stands in a certain relation with the intended entities. We can boil this process down to a ball rolling down a hill: taking one track vs. another picks between two objects at the bottom of the hill.
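
A toy sketch of that context-dependence in Python (the bit value and both mappings are invented for illustration) - the very same physical state refers to different things under different externally supplied conventions:

    # The same physical state (a bit) "picks out" different referents
    # depending entirely on an externally supplied convention.
    bit = 1  # stand-in for a voltage level, or the ball's track

    conventions = {
        "coin flip":   {0: "tails", 1: "heads"},
        "time of day": {0: "night", 1: "day"},
    }

    for context, mapping in conventions.items():
        print(f"under the '{context}' convention, bit={bit} refers to {mapping[bit]!r}")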

How does intrinsic intentionality fare? Presumably the idea is that such a system picks out the intended object without any external context needed to establish the reference. But is such a system categorically different than the derived sort? It doesn't seem so. The brain relies on the laws of physics to establish the context that allows signals to propagate along specific circuits. The brain also stands in specific relation to external objects such that the necessary causal chains can be established for concepts to be extracted from experience. Without this experience there would be no reference and no intentionality. So intrinsic intentionality of this sort has an essential dependence on an externally specified context.

But what about sensory concepts and internal states? Surely my experience of pain intrinsically references damaging bodily states as seen by my unlearned but competent behavior in the presence of pain, e.g. avoidance behaviors. But this reference didn't form in a vacuum. We represent a billion years of computation in the form of evolution to craft specific organizing principles in our bodies and brains that entail competent behavior for sensory stimuli. If there is a distinction between intrinsic and derived intentionality, it is not categorical. It is simply due to the right computational processes having created the right organizing principles to allow for it.


An essential feature of abstract things is that they do not exist independently and in their own right. For example, this chair or that man (whose name is John) are concrete objects. However, the concepts "chair" and "man" are abstract. They do not exist in themselves as such. The same can be said for something like "brown", an attribute that, let's say, is instantiated by both the chair and by John in some way, but which cannot exist by itself as such. So we can say that "chair", "man" and "brown" all exist "in" these concrete things (or more precisely, determine these things to be those things or in those ways). Apart from the things that instantiate them, however, these forms also exist somewhere else, namely, the intellect. Yet they exist in our intellects without instantiating them. Otherwise, we would literally have to have a chair or a man or something brown in our intellects the moment we thought these things. So you have a problem. You have a kind of substratum in which these forms can exist without being those things. That does not sound like matter, because when those forms exist in matter, they always exist as concrete instantiations of those things.

W.r.t. derived intentionality, the relation that obtains here between a signifier and the signified is in the mind of the observer. When you read "banana", you know what I mean because the concept, in all its intrinsic intentionality and semantic content, already exists in your intellect and you have learned that that string of symbols is meant to refer to that concept. I could, however, take a non-English speaker and mischievously teach them that "banana" refers to what you and I would use the term "apple" to mean. No intrinsic relation exists between the signifier and the concept. However, there is an intrinsic relation that obtains between concepts and their instantiations. The concept "banana" is what it means to be a banana. So the derived intention involves two relations, namely, one between the signifier and the concept (which is a matter of arbitrary convention) and another relation between the concept and the signified, which necessarily obtains between the two. Derived intentionality is parasitic on intrinsic intentionality. The former requires the latter.

So when we say that computers do not possess concepts (i.e., abstract things), only derived intentionality, we mean that computers are, for all intents and purposes, syntactic machines composed of symbols and symbol manipulation rules (I would go further and say that what that describes are really abstract computing models like Turing machines, whereas physical computers are merely used to simulate these abstract machines).

Now, my whole point earlier was that if we presuppose a materialist metaphysical account of matter, we will be unable to account for intrinsic intentionality. This is a well known problem. And if we cannot account for intrinsic intentionality, then we certainly cannot make sense of derived intentionality.


Your description of abstract things sounds like a dressed-up version of something fairly mundane. (This isn't to say that your description is deficient, but rather that the concept is ultimately fairly mundane.) So I gathered three essential features of intrinsic intentionality: (1) does not exist independently, (2) exists in the intellect, (3) exists in the things that instantiate them.

Given this definition there are a universe of potential abstracta due to the many possible ways to categorize objects and their dynamics. Abstracta are essentially "objects of categorization" that relate different objects by their similarity along a particular set of dimensions. Chairs belong to the category "chair" due to sharing some particular set of features, for example. The abstract object (concept) here is chair, which is instantiated by every instance of chair; the relation between abstract and particular is two-way. Minds are relevant because they are the kinds of things that identify such categorizations of objects along a set of criteria, thus abstracta "exist in the intellect".

You know where else these abstracta exist? In unsupervised machine learning algorithms. An algorithm that automatically categorizes images based on whatever relevant features it discovers has the power of categorization, which presumably is the characteristic property of abstracta. Thus the abstracta also exists within the computer system running the ML algorithm. But this abstracta seems to satisfy your criteria for intrinsic intentionality (if we don't beg the question against computer systems). The relation between the ML system and the abstracta is independent of a human to fix the reference. Yes, the algorithm was created by a person, but he did not specify what relations are formed and does not fix the reference between the concepts discovered by the algorithm and the things in the world. This is analogous to evolution creating within us the capacity to independently discover abstract concepts.

(Just to preempt a reference to Searle's Chinese room argument, I believe his argument is fatally flawed: https://news.ycombinator.com/item?id=23182928)
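
For concreteness, here's a minimal sketch of the kind of unsupervised categorization described above - k-means clustering over synthetic 2-D feature vectors standing in for image features (the data and cluster count are invented for illustration):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)

    # Three unlabeled groups of "feature vectors"; the algorithm is never
    # told which is which, or even that there are three.
    data = np.vstack([
        rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
        rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
        rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),
    ])

    # k-means discovers the groupings from the data's own structure.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
    print(labels[:10])  # cluster ids assigned without any human-fixed reference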


You're trying to reduce abstraction to statistical pattern classification, but that doesn't work because statistical measures are inherently bounded in generality, indeterminate and ambiguous while my concept of, say, triangularity is universal, determinate and exact.

Say I give you an image of a black isosceles triangle. Nothing in that image will tell you how to group those features. There is no single interpretation, no single way to classify the image. You might design your algorithm to prefer certain ways of grouping them, but that follows from the designer's prior understanding of what he's looking at and how he wants his algorithm to classify things. If your model has been trained using only black isosceles triangles and red rhombuses, it is possible that it would classify a red right triangle as a rhombus or an entirely different thing, and there would be no reason in principle to say that the classification was objectively wrong apart from the objective measure of triangularity itself. But that's precisely what the algorithm/model lacks in the first place and cannot attain in the second.

Furthermore, just because your ML algorithm has grouped something successfully by your measure of correctness doesn't mean it's grasped essentially what it means to be a member of that class. The grouping is always incidental no matter how much refinement goes into it.

Now, you might be tempted to say that human brains and minds are no different because evolution has done to human brains what human brains do to computer algorithms and models. But that is tantamount not only to denying the existence of abstract concepts in computers, but also their existence in human minds. You've effectively banished abstracta from existence which is exactly what materialism is forced to do.

(With physical computers, things actually get worse because computers aren't objectively computing anything. There is no fact of the matter beyond the physical processes that go on in a particular computer. Computation in physical artifacts is observer relative. I can choose to interpret what a physical computer does through the lens of computation, but there is nothing in the computer itself that is objectively computation. Kripke's plus/quus paradox demonstrates this nicely.)

P.S. An article you might find interesting in this vein, also from Feser: https://drive.google.com/file/d/0B4SjM0oabZazckZnWlE1Q3FtdGs...


>You're trying to reduce abstraction to statistical pattern classification, but that doesn't work because statistical measures are inherently bounded in generality, indeterminate and ambiguous

A substrate with a statistical description can still have determinate behavior. The brain, for example, is made up of neurons that have a statistical description. But it makes determinate decisions, and presumably can grasp concepts exactly. Thresholding functions, for example, are a mechanism that can transform a statistical process into a determinate outcome.
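
A minimal sketch of that thresholding point (signal levels and noise model invented for illustration) - the inputs are individually noisy, yet the thresholded decision is effectively determinate whenever the signal sits well clear of the threshold:

    import numpy as np

    rng = np.random.default_rng(1)

    def decide(signal, threshold=0.5, n_inputs=1000):
        """Average many noisy copies of the signal, then threshold."""
        noisy_inputs = signal + rng.normal(0.0, 1.0, size=n_inputs)
        return bool(noisy_inputs.mean() > threshold)

    # The substrate is statistical, but the outcome is stable:
    print([decide(0.9) for _ in range(5)])  # [True, True, True, True, True]
    print([decide(0.1) for _ in range(5)])  # [False, False, False, False, False]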

>doesn't mean it's grasped essentially what it means to be a member of that class.

I don't know what this means aside from the ability to correctly identify members of that class. But there's no reason to think an ML algorithm cannot do this.

Regarding Feser and Searle, there is a lot to say. I think they are demonstrably wrong about computation being observer relative and whether computation is indeterminate[1]. Regarding computations being observer relative, it's helpful to get clear on what computation is. Then it easily follows that a computation is an objective fact of a process.

A computer is, at its most fundamental, an information-processing device. This means that the input state has mutual information with something in the world, the computer undergoes some physical process that transforms the input to some output, and this output has further mutual information with something in the world. The input information is transformed by the computer into different information; thus a computation is revelatory: it has the power to tell you something you didn't know previously. This is why a computer can tell me the inverse of a matrix while my wall cannot, for example. My wall is inherently non-revelatory no matter how I look at it. This definition is at odds with Searle's definition of a computer as a symbol-processing device, but my definition more accurately captures what people mean when they use the terms computer and compute.

This understanding of a computer is important because the concept of mutual information is mind-independent. There is a fact of the matter whether one system has mutual information with another system. Thus, a computer that is fundamentally a device for meaningfully transforming mutual information is mind independent.

[1] https://www.reddit.com/r/askphilosophy/comments/bviafb/what_...
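
To make that framing concrete, here's a minimal sketch with toy binary distributions (invented for illustration): a register that reliably tracks a world state has mutual information with it, while a "wall" that varies independently has essentially none.

    import numpy as np
    from collections import Counter

    def mutual_information(xs, ys):
        """Estimate I(X;Y) in bits from paired samples."""
        n = len(xs)
        pxy = Counter(zip(xs, ys))
        px, py = Counter(xs), Counter(ys)
        return sum(
            (c / n) * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
            for (x, y), c in pxy.items()
        )

    rng = np.random.default_rng(0)
    world = rng.integers(0, 2, size=10_000)                        # binary world state
    register = world ^ (rng.random(10_000) < 0.05).astype(int)     # tracks it, 5% error
    wall = rng.integers(0, 2, size=10_000)                         # independent of the world

    print(mutual_information(world, register))  # ~0.71 bits: carries information
    print(mutual_information(world, wall))      # ~0 bits: "non-revelatory"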


Sorry, most of your argument here is going over my head. My mind glazes over as soon as I hear the word “epistemology”.

It seems like you are using the word “mind” as a specific technical word within the field of philosophy rather than the general usage of the word so I’ll just defer to you for any philosophical context of the conversation. I’m not interested in that.

What got my goat was the implication that complex systems like the mind and the brain or any other complex system aren’t a “science” because it somehow isn’t pure enough/reduced enough.


> What got my goat was the implication that complex systems like the mind and the brain or any other complex system aren’t a “science” because it somehow isn’t pure enough/reduced enough.

That wasn't the implication, but perhaps more needs to be said. The first distinction I made was between brain and "mind" (here understood as "intellect", that is, the faculty responsible for conceptualization, or abstraction from particulars encountered in the senses) by appealing to a classic Aristotelian argument for the immateriality of the intellect. So you can study the physical brain using various empirical methods and through the lenses of various empirical sciences, sure, but if the intellect is immaterial, then obviously you can't subject it to any direct empirical experimentation. That doesn't mean it cannot become the object of a science (i.e., psychology as classically understood), nor does it mean you can't make observations about human beings to gather supporting evidence of some kind to draw certain conclusions. An immaterial intellect just isn't something you can look at under a microscope or at which you can fire subatomic particles.


> That wasn't the implication... appealing to a classic Aristotelian argument for the immateriality of the intellect

Now what you are saying makes more sense to me. I must not have read far enough up the comment chain to get the full context.

I guess I view the mind/intellect as a “state” of chemicals/electrical impulses/influences/etc. that you could in theory take a snapshot of, and that is therefore material no matter how abstract the thought pattern. Trying to separate the brain vs. intellect is a false dichotomy from my perspective. I’m not sure if you are actually arguing for the Aristotelian perspective (as was my initial assumption) or if you are simply explaining a viewpoint.

I might note that I’m fascinated by super-determinism[0] with non-local variables right now. I can see that the repeatability of recording a given state of mind makes less sense since current QM theory could not guarantee that you could ever fully capture a given state of anything.

[0] https://en.m.wikipedia.org/wiki/Superdeterminism


I am indeed arguing for the Aristotelian position. A lot of mind/brain talk is thrown around without any deep appreciation of the metaphysical presumptions being made, much less the metaphysical consequences of those presumptions.

> no matter how abstract the thought pattern

This needs to be explained. What exactly is a "thought pattern" and what does it mean for it to be abstract? As I've noted, matter is always concrete whereas abstract things aren't really things in that they cannot exist in their own right. You and I have the concept of "triangularity" in our intellects, and "triangularity" means exactly that and is therefore intelligible as "triangularity", but that concept is not reducible to any particular triangle. However, only particular triangles exist in the physical world. You would need to show how "triangularity" could exist as a concrete physical thing without also being a particular triangle. Then you'd have to show how concrete triangles instantiate this concrete "triangularity".

That's the Aristotelian angle. However, we can also approach this issue from the materialist angle. Take for instance the color red as we commonly understand it. Now, since Galileo and Descartes, matter has been construed as essentially colorless. Instead, matter has reflectance properties and color is construed as an artifact of the mind that in some unexplained way results from those reflectance properties, but is completely distinct from those reflectance properties. This is an essentially Cartesian view of the world wherein the universe is ultimately divided into two basic kinds of distinct things, namely, mental substance and extended substance, i.e., (a particular understanding of) mind and (a particular understanding of) matter, respectively. There are serious problems with Cartesian metaphysics, but for now, it's enough to observe that we moderns more or less hold to that view of matter. Now materialism also holds that view of matter. However, it denies the existence of mental substance leaving you with a broadly Cartesian view of matter. The trouble is that it is now impossible to explain things like the color red as mental phenomena as construed by Cartesians. This is known as the problem of qualia.

There are three directions you can take to try to preserve this view of matter and while accounting for qualia. One is to retreat back to Cartesian dualism. Another is to dabble with panpsychism (which is arguably just crypto-dualism). A third is to deny the existence of the very thing you were supposed to explain (eliminativism). Each of these has serious problems. However, a better option is to reconsider the metaphysics of matter. Aristotelian metaphysics does not suffer from these issues.


> "Mind" vs. brain. For example, you will never observe the concept "triangularity" or the number 8 or the color "red" in a brain scan.

Nor will you see Shape::TRIANGLE or the number 8 or Color::RED if you put a CPU under a microscope, but computer programs are capable of reasoning about all of them. What's your point?


Shape::TRIANGLE, 8 or Color::RED aren't concepts. They're merely symbols. In principle, you will find them encoded in some way under a microscope (as some arrangement of physical states, though it will depend on the particular physical medium what the particular arrangement will look like). Your computer program can process that physical state to mimic some aspects of reasoning, but that's entirely a matter of how the program is designed to operate with these physical arrangements. You cannot analyze or derive anything from the symbols qua symbols themselves because there's no meaning there to analyze or from which to derive things.


How would you define a “concept” then? How do concepts themselves intrinsically have meaning that symbols don’t?


In this context, these symbols are conventional signs taken to refer to meanings that are not themselves. Concepts are apprehensions of form, or we might say "meanings", the what-it-is-to-be of the given thing. When I write "123", that series of characters obviously is not itself the number 123. A human interpreter in possession of a mental dictionary can read that series of squiggles to arrive mentally at the concept of the number 123. But all that exists in computers are representations, not the things they are meant to represent, and their meanings are entirely in the mind of the human observer who assigns conventional meanings to those representations.


How is that different from our brains? All that exists in us may very well be our neuronal representations of a concept. Alternatively, the meaning of a representation in a computer is interpreted and acted upon by other elements within the computer.


As far as the mind has physical effects, it can be studied scientifically. When I study a building by studying its steel frame, I am studying the building inasmuch as the steel frame is a component of the building.


Depends on what you expect as an outcome of the study.

https://en.m.wikipedia.org/wiki/Hard_problem_of_consciousnes...


You can apply science to human behavior and mental health. The problem is that when you try to integrate that work into theories of consciousness, you start to blur the lines between science and philosophy.


I agree that it can be approached from non-scientific angles (I'd hazard a guess that any subject that can be considered scientific can be approached unscientifically - sometimes for good reasons, sometimes not), but I think(?) you're agreeing with my response to this comment:

> The second you say the words "conscious mind", you leave the realm of science.


> Is all mental health research unscientific based on the same logic?

Mental health research is focused on clinical outcomes so it's not the same as basic research on consciousness. All of these fMRI studies have been useless for the latter because they're blunt tools - perfect for studying tumor blood flow or correlating which parts of the brain are kinda-sorta active during some extreme scenario (stuck in an fMRI "for science") - but useless for actually probing the nature of the mind.

They tell us that there is "something" there that can be loosely called the "conscious" and "subconscious" mind in academically trained company, but it's still trying to shoehorn late-19th-century ideas onto cutting-edge neuroscience - it comes down to selling grant proposals using familiar colloquialisms. We could very well find out tomorrow that there are higher-level networks within the brain that make the separation meaningless, and you'd still see grant writers overuse the terms any chance they get.

> My understanding is that it's conclusive that some of our brain's processes we feel we are controlling (conscious thinking), while others happen in the background without us realising. The fact that we don't understand everything that makes that the case doesn't stop that being the case.

These papers are a lot worse than "we don't understand everything." They're oftentimes actively harmful to our understanding.

The brain is so complex that fMRI studies are the scientific research equivalent of a full-body CT scan. The older you are, the more likely you are to find something that looks like a tumor. By the time one is in their 50s, full-body scans are more likely to result in serious complications from unnecessary biopsies than to find a malignant growth. Likewise, the more complex the brain, the more room you have to find spurious correlations in fMRIs in just about any scenario where the subject is conscious - or even dead, if the researcher is really bad at statistics, which most are.

This makes them the perfect tool for scientists under pressure to publish or perish, but not for studying the brain.
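
As a back-of-the-envelope illustration of that multiple-comparisons point (all the parameters below are made up, but order-of-magnitude plausible), pure noise already "lights up" plenty of voxels at an uncorrected threshold:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_voxels = 100_000   # rough order of magnitude for an fMRI volume
    n_scans = 20         # measurements per voxel
    alpha = 0.001        # a common uncorrected significance threshold

    # Pure noise: no voxel carries any real task-related signal.
    noise = rng.standard_normal((n_voxels, n_scans))

    # One-sample t-test per voxel against zero.
    t = noise.mean(axis=1) / (noise.std(axis=1, ddof=1) / np.sqrt(n_scans))
    p = 2 * stats.t.sf(np.abs(t), df=n_scans - 1)

    # Expect roughly alpha * n_voxels, i.e. ~100 "active" voxels from noise
    # alone - which is why correction (and the dead salmon) matter.
    print("significant voxels from pure noise:", (p < alpha).sum())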


Not knowing something is not intellectual justification for being broadly dismissive. To come to know new things requires being broadly open.


Isn't "conscious mind" just shorthand for "frontal lobes" ?

If we can detect the brain solving a variety of things without ever hitting the frontal lobes then that's pretty interesting.


In fact, some such high level decision making processes that involve the prefrontal cortex can occur unconsciously.

https://www.frontiersin.org/articles/10.3389/fnhum.2012.0012...


The core assumption in cognitive neuroscience is that cognition is centered in the activity of the brain. The second is that neuronal activation corresponds to function. After that it's about finding proxies for measuring activation. Which part of that makes the field junk science?


Funny about that fish. As soon as I hear "neuroscience" or "neuroplasticity" it triggers my BS detector. The former because it's used to add credibility to a lot of really fuzzy stuff, and the latter because it's a big fancy word for being able to learn or change.

That's not to say I automatically regard something as BS because of those words, but they increase my level of scrutiny.


> the brain already made a decision even before the conscious mind became aware of it, thus casting doubt on our conscious experience of free will.

Seems to conflate consciousness and free will. Just because you're not conscious of it, your brain still made that decision. And your conscious mind can still go back on the decision and "change your mind".


> your brain still made that decision

This is what I have always said when hearing these arguments. Our brain is us; what we see and decide is that part of the brain being broadcast throughout our mind. But people change their minds and stop themselves from doing things all the time, which means our minds kinda run on a consensus.

IMO thoughts are like the interplay of various regions of the brain, each broadcasting its say. For example, when I contemplate something it's like a debate from a multitude of differing perspectives. So sure, each thought was decided by a different region beforehand, but that's like saying words were spoken before being transmitted over the wire.


> And your conscious mind can still go back on the decision and "change your mind".

No, that's not possible. That's what those experiments are showing, you're just failing to grasp its consequences.

If you later report "changing your mind", that too will have happened unconsciously, and you won't even know why in your conscious experience: but you will be convinced you chose it - when you really haven't.


What's funny is you're ignoring what the experimenter himself concluded about the experiments: he concluded that we have free will to veto these choices. It's always rich to see his experiments so badly misused.


> he concluded that we have free will to veto these choices

That's circular. The brain makes a decision, then the brain can choose to veto it? Where does the decision to veto come from?

Free will as you're describing is an impossibility: To even come up with the idea of vetoing something, there must be some process in the brain that's started by "something else", and the something else must be also possible to veto ad infinitum.

Think for yourself instead of just accepting what the experimentalist suggests you should conclude.


I hear what you're saying, but I'm still not sure that consciousness over a decision is what determines whether it was done of free will. Conscious or not, it was your brain making that decision. Not someone else telling your brain what to think.

Your brain thought about it, considered the options, made a decision, and then informed your consciousness of it. That doesn't undermine the idea of free will, it just changes how we think about consciousness.


> Conscious or not, it was your brain making that decision. Not someone else telling your brain what to think.

If you accept free will, there IS something else telling your brain what to think. Unless you explain what this something else might be, your hypothesis is not scientific.

If you accept your brain, and your brain alone, makes the decision, then you accept that, given the condition of the brain before that decision was made, then the brain MUST make that same decision regardless of anything external to it. Hence, no free will.

Even when you consider quantum physics (which most neuroscientists, from the books I've read on the topic, say almost certainly has very little effect on brain processes outside of normal chemistry), you will still end up with only a probability distribution over the possible decisions you could've made.


I don't even disagree with you, but my point here is that what you are describing is all orthogonal to the idea of consciousness. You can make exactly the same arguments with or without considering whether your thoughts are conscious.


> there are experiments which show that the brain calculates the answers and "presents" the answers later on to the consciousness.

This made me think of an async process


I recently listened to a podcast from Sean Carroll where they talked about consciousness being like a jury. There are competing systems that speak as one. So an async process might be a good way of thinking about it.

Experiments where they detect the decision being made a second or two before the person becomes aware of it may be picking up on this. Perhaps if they measured different parts of the brain they might see alternate decisions being made.
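
A playful sketch of that jury picture using Python's asyncio (subsystem names and delays invented for illustration) - several subsystems deliberate concurrently, and "awareness" only ever sees the aggregated verdict:

    import asyncio
    import random

    async def subsystem(name, delay):
        await asyncio.sleep(delay)  # deliberation happens off-stage
        return f"{name}: {random.choice(['yes', 'no'])}"

    async def main():
        votes = await asyncio.gather(
            subsystem("habit", 0.1),
            subsystem("emotion", 0.3),
            subsystem("planning", 0.5),
        )
        print("verdict presented to awareness:", votes)

    asyncio.run(main())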


Promise.all()


When you've been working on Node.js for too long. xD


Because if it's been longer than that, your rotten soul is already burning in callback hell.



> I have heard about experiments related to free will wherein the brain already made a decision even before the conscious mind became aware of it

I believe these studies failed to replicate. Also, we have no way to scientifically measure consciousness, so a study that purports to time it is measuring reaction speed, not consciousness per se.


> Somehow this is not surprising to me. I have heard about experiments related to free will wherein the brain already made a decision even before the conscious mind became aware of it.

But it's your brain that's making decisions based on your experiences, desires, and goals. Obviously your conscious mind can't do everything, it's there for the new and hard stuff. All the rest is pushed down to lower layers, but it's all still you.


> I have heard about experiments related to free will wherein the brain already made a decision even before the conscious mind became aware of it

You are thinking of Libet's famous experiments.

https://www.wikipedia.org/wiki/Benjamin_Libet#Implications_o...


There are much more recent studies showing similar results: https://www.nature.com/news/2011/110831/full/477023a.html


an amoeba has consciousness.. and what you call the hard problem is a problem with the brain doctrine.. as the hands of a concert pianist have no trouble running the show, with the rest of his or her parts cooperating to make it possible.. the mind/brain looking on then.. as in many/most other experiences where a whole person is in flow.

it's freaky to think my parts, glands, organs were not freely and willingly coordinating by way of the cns.. and i have no experiential evidence.. in the amoeba and up.. that a brain is even needed for consciousness.


yes, consciousness is a spectrum, as we already see it during normal human development/lifecycle.

and sure, in that sense maybe even a rock is conscious to some degree, at least compared to hard vacuum.

but conflating the importance of the CNS with the lack of a hard cut-off is a folly.

similarly, just because the concert pianist can't play well after a hand injury doesn't make the hand more important than the brain. because similarly, the pianist can't play well after many types of head/brain injury. naturally, because the pianist learned to play through the hands, fine-tuned and memorized what to do with the hands to get what the brain wants to hear. hence the importance of practice.

an amoeba's behavior and adaptability are very limited, because it's comparatively very simple. and so we might be fairly certain it lacks the hardware/wetware to run what we call consciousness. and it might turn out that a big fancy brain like ours is not needed for consciousness, but likely there's still some cut-off, meaning at least a few neurons (or similar networks and [predictive?] feedback loops) are likely needed.


> In contrast, when shown numbers with embedded faces, the number’s effect apparently swamped that of the face: RFS reported seeing neither; everything looked like spaghetti. Yet an EEG still showed the characteristic N170 spike for registering faces. Somehow, his brain was still processing and identifying a face—a fairly high-level skill—even though his conscious mind was oblivious. This deficit shows high-level cognitive processing and consciousness are distinct, Koch says. “You can get one without the other.”

I think this misses another possibility besides "cognitive processing and consciousness are distinct".

Namely, "the part of them that is conscious of the cognitive processing is not well-connected to the part of them that is generating speech and controlling their motor systems."


Makes me think of this silliness I saw today: https://mobile.twitter.com/BFriedmanDC/status/12892831187305...

I recommend first watching it with your eyes closed, and after that seeing how you can manipulate your perception. It was easy to hear “brain needle” with my eyes closed.


I encountered this a while ago, and it's still astonishing that it's possible to hear such different things.

"Brain" and "green" are so similar that mistaking one for the other is not interesting. The really interesting pair is "storm" and "needle" — they seem wildly different.

I think what's going on is that the "ee" vowel sound and the hissing sound of an "s" both have a lot of high frequencies. It might be that an "ee" from a high-pitched speaker sounds similar to an "s" from a lower-pitched speaker, so the interpretation depends on what you infer the speaker's vocal frequency range to be.

And then when you notice that "needle" and "needo" sound almost the same, the "o" in "storm" is not that much of a stretch. "eedle" ~= "eeto" ~= "sto".
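
If you want to check that hunch against the clip itself, a spectrogram makes the high-frequency energy visible. A minimal sketch, assuming you've saved the audio as a mono WAV file (the filename clip.wav is hypothetical):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    rate, samples = wavfile.read("clip.wav")  # hypothetical filename
    freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)

    # Log-power spectrogram; look for shared broadband high-frequency
    # energy where the "ee" and "s" sounds are heard.
    plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
    plt.xlabel("time (s)")
    plt.ylabel("frequency (Hz)")
    plt.show()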


Whichever word I repeated in my head as it played was the one I heard, even if I was looking at another. I could easily and reliably make it change between brainneedle, brainstorm, greenneedle and greenstorm using this method.

Edit: I kind of think this has to do with the n at the start of needle being implicit, as you already get the n sound at the end of green and brain. I think if you register another n, the remaining sound sounds most like eedle, and if you don't, the remaining sound sounds most like storm... Or I may have listened to this too many times.


I heard "brain needle" with my eyes closed also. I could hear "green needle" but couldn't hear "storm" at all.


I am not a native English speaker... it didn't work for me. With closed eyes I heard something like brain-needle :D and while reading both words shown, I still heard "brain-needle".


Exactly my experience, too. I simply cannot hear either of the words shown.


This is insane. Does anyone know where I could read about how this works or how the audio is generated?


I don't and that's a good question, but you may be interested to know that it need not be generated. Look up some YouTube videos relating to the "Paul is dead" conspiracy theories and there are several examples. People swear they can hear "I buried Paul", but you can also make yourself hear "Cranberry sauce" (which is what a Beatle said was actually the line).


I don't get it, all I can hear is some noise followed by 'brain storm'.


The point is that the "brainstorm" bit can also sound like "green needle" if you focus on it.


I started with my eyes closed and heard nothing but noise. With eyes open I caught a glimpse of the brainstorm label and started to hear that and only that. No green needle; as much as I tried to make it flip, it did not.


Woah. Can confirm it works as described for me - I hear both "green needle" and "brainstorm", depending on which word I read, or which word I think of when I close my eyes.

... or at least that was true until now; I've just retried it, and I've lost the ability to hear the "-storm" ending. I can hear "green needle" or "brain needle", but no "storm".


The comments below are fascinating too. Some claiming that it's two separate recordings even though you can easily see that it's a single six second video, and others even talking about Russian brain washing.


Strange - I only hear Brain Storm, eyes closed or open, reading or not reading.

Instead of Green Needle I am hearing a weird backwards-sounding sound.


I cannot hear brainstorm, no matter how hard I focus.


Whoa. I get the green/brain bit and can kinda tweak my perception to hear both, but needle and storm? When I hear both from the same audio they sound so very different and I can't fathom how they come from the same waves.


This looks like one of the early posts: https://www.reddit.com/r/blackmagicfuckery/comments/8jxzee/y...

Deep in the comments, someone posted this:

> If you slow down the audio/video to about 25% speed, you'll notice that the sound for 'needle' begins a fraction earlier than 'storm'. This is because the 's'-sound (in storm) actually begins in the 'ee'-gap of 'needle'. The 't'-sound of 'storm' matches with the 'dl'-sound of 'needle'. The whooshing sound during the 'ee'-gap of the 'needle' sound is heard by our brains as an 's'-sound and - since the brain is already primed for 'storm' - we hear the rest accordingly.

The toy is supposed to be saying Brainstorm: https://ben10.fandom.com/wiki/Brainstorm


That's exactly what I was trying to understand, and how I was trying to understand it. Thank you


Before a basic sense-percept arrives at conscious awareness it is transformed by various sub-minds. Insight meditation is about investigating this fabrication process. Here is a free resource on that topic for anyone who wants to take it further: https://www.dharmaoverground.org/web/guest/dharma-wiki/-/wik...


> his brain was still processing and identifying a face—a fairly high-level skill

Is it? I thought the brain has specialized parts for identifying faces (hence prosopagnosia) which makes this more of an automatism than higher reasoning.


Yeah, I remember learning that from this playlist: https://www.youtube.com/playlist?list=PLyGKBDfnk-iAQx4Kw9JeV... Very cool.


Thank you for this, it's fascinating!


This looks great, do you have any other similar lectures bookmarked?


if you special case a high level skill, does it stop being a high level skill?


You nailed it.

This is a reported piece, so the description could be completely off the mark. But the quote from the neuroscientist is

> “What it tells me,” says Christof Koch, a neuroscientist at the Allen Institute who specializes in consciousness, “is that … you can get dissociation between cognition and consciousness.”

which is a wild overinterpretation.

Gerstmann syndrome is the same kind of deficit where you have preserved consciousness and severe impairments in one very specific cognitive domain. Aphasia is a more common version of a focal neurological deficit which is equally analogous.

This whole article is just pop science nonsense (although it's entirely possible the underlying research has merit; it's not easy to tell based on an article about it).


Sometimes I read about split-brain experiments and wonder, if my brain were to split, which side I'd end up on?


Both.


There's very interesting research into this question. This video is easily digestible. https://youtu.be/wfYbgdo8e-8


What even is "I", though?


"I" is something that knows its place in space and time, the here-and-now, but also has a memory of here-and-now moments of the past. As such "I" is an illusion. If a quantum clone of you is created both I's will have the same memory of the past here-and-now moments, so both will be you at the same. So probbaly something similar happens if a brain is split.


By that definition, a GPS module with an SSD is an "I".

In discussions that invoke the words "self" or "consciousness", I find it common to hear pithy or clever definitions. However, rarely do those answers offer any actionable insight.

Are you sure that your words "I" or "consciousness" point at extrinsic things in the world? Have you not just invented a new kind of elan vital? If you claim that "consciousness" is a useful category of phenomena within consensus reality, then great. Demonstrate that.

Dark matter is a mystery, but it is a mystery comprised of concrete phenomena. Terrestrial biogenesis is a mystery, but it too is comprised of concrete phenomena. Heck, even the amorphous term "love" points to a more concrete category of phenomena than the word "consciousness" typically seems to.

FWIW, I think there are concrete approaches to the problem. One method HN might like is to study how we judge subjects as being conscious. Anyone want to build a NN classifier for this?


> By that definition, a GPS module with an SSD is an "I".

No it's not. I didn't explain what I meant by "knowing your place in space and time", sorry about that, but this needs elaboration.

I think we'll agree that consciousness (self-awareness) is not binary: it can be present in varying degrees in computer systems, animals and humans. The degree of self-awareness though I think is a function of how rich the model of the world is in your mind. A GPS module can have its own coordinates and a precise UTC clock, but its model of the world is the most primitive.

Thus knowing your place in space means some model of the world and your relative place within it. Domestic cats and dogs have their own models - of the house and possibly some surroundings, but they are very limited. Humans typically have the most elaborate and the highest level abstract model of the world, therefore our self-awareness is the highest among the creatures.

Time is a trickier one though. Our sense of time seems to be the function of the chain of here-and-now moments. You sense your place now, but you also have a memory of the previous moments as one seemingly uninterrupted flow. This I think is what creates an illusion of being, though obviously "I" doesn't exist in the same sense as a physical object does.

Then there's a whole part of how the model of the world that we build in our minds should also be practical, i.e. help us achieve our goals efficiently, but I'll skip it for now as it gets too far from the original question.


Is your definition actually accurate enough to capture instances of intuitive judgements that "this is a conscious being" without being overly broad to include "obviously non-conscious" things?

By calling it a spectrum, the already fuzzy definition becomes so broad as to be almost useless. Your spectrum and "knowing" model admits a conscious GPS, albeit "low consciousness." What about spiders or GPT-3 or suitably chosen digits out of an RNG? At best these all fail the intuitive answer test for most people.

Perhaps you have a mental model of Yudkowsian self-optimizers? Or maybe you are thinking more classically along the lines of Kant?

The whole problem I am trying to communicate is that you are playing with and refining a model without defining exactly what you are modeling.


Self-awareness is paradoxical, I am aware of that. What I'm trying to model at least mentally for now, is something that has a chance of becoming self-aware unless we are missing something else.

In other words, let's say these are the necessary but not necessarily sufficient conditions for the emergence of self-awareness: (1) having some model of the world (MOTW), (2) maintaining a relative position of Self in the MOTW, (3) maintaining a history of positions in time.

The level of self-awareness then becomes a function of the complexity of the MOTW and the complexity of the system's interfaces. For example, is the model good enough to help it survive or reach whatever goals it's programmed to achieve? Is the system flexible and adaptable enough? Can it discover and learn by itself?

So again, in this regard a GPS system is so primitive that it may be comparable to that of an amoeba or even worse. A coordinate space alone is not a useful or interesting MOTW that would help your GPS box achieve its goals.


I don't think it's paradoxical. I just get the impression you may have jumped to building models without carefully considering the exact phenomenon you claim to deconstruct. "Self-awareness" or "consciousness" are just words. What do they point to in the world?


I experience some sort of an illusion of existence. Then I look at other creatures that look like me and act like me and I generalize; I say: maybe they exist the same way as I do. Then I hear from others that the generalization seems to be called "consciousness" or sometimes "self-awareness". It's paradoxical because "I" doesn't exist the same way as a rock or any object, and yet its continuity creates a resemblance, an illusion of existence.

Try to think of your exact quantum copies - what happens the moment you are cloned? What happens to your "I"? Nothing. The "I" never existed in the first place and therefore nothing happens to it when it's physically copied ;) That's the paradox.


Hi, can you explain what a ' Yudkowsian self-optimizer ' is? I've googled it and the only page that it comes up with is this one! :-)


Looks like I misspelled the name. I was simply referring informally to Eliezer Yudkowsky and the LessWrong-style thoughts on optimization processes. You can read the initial idea here[0].

If you haven't heard of LessWrong before, then you have hit the motherlode of all rabbit holes! That site is definitely worth the dive.

[0]:https://www.lesswrong.com/posts/HFTn3bAT6uXSNwv4m/optimizati...


Thanks!

I know about lesswrong, and you’re not wrong


"Thus knowing your place in space means some model of the world and your relative place within it." Soooo you are saying a GPS module plus a GIS database? :)

If someone puts me in a pitch-dark barrel, and then carries the barrel to a location unknown to me, will I cease to be conscious by that definition, even if I'm kicking and screaming and yelling for help?

I'm working on self driving cars. They know very precisely where they are, and they know very accurately what is around them. (They even have a sense of where things will be!) Does that make them self-aware?


The barrel: no, you won't lose your self-awareness because you have a memory and a whole history of your self-aware moments before the barrel. However, staying in the barrel for long enough (say 30 years?) might affect your consciousness though I wouldn't recommend experimenting with it :)

The self-driving cars are interesting in this regard, but in terms of complexity of their model of the world they are still way behind that of the most primitive insects even, I think. This is due to their limited capability of discovering the world and learning about it. A car that can also sense surfaces and hear things might have a chance to build a more complex model. Add the ability to park and connect to a charging socket by itself and you get a very, very basic organism, possibly a bit self-aware, too.

But just because a self-driving car has a model of a macro-world (roads, buildings, other cars) doesn't make it any more self-aware than bacteria.


About your analogy - quantum cloning may not be allowed by laws of nature. That would make any consciousness unique at any point in time.

Also I feel the experiment suggests split brains would most likely become 2 very different (new?) entities with different behavior after losing integrity.


> quantum cloning may not be allowed by laws of nature

It might be that the original should be destroyed, I understand. So be it, destroy the original and create N clones. But the thought experiment is an interesting one regardless.


No, nothing about destroying. It's just that you cannot know something to infinite precision, due to the uncertainty principle. It just depends on whether the maximally allowed knowledge of all of a person's chemical states is sufficient to reproduce consciousness "sufficiently" closely (I bet it is).


"I" isn't a memory or a here and now. It's the universe experiencing itself. It's an observer. What would happen to that experience if it was split? Would it make a new independant observer?


Since he couldn't process numbers in any useful manner (when communicating, to remember his hotel room, to draw it, etc.) the simplest explanation is that it didn't reach the "central brain processing" area. While this might not be the same thing as "consciousness", it's unlikely that this visual recognition made it very far in the brain. (Maybe the brain damage blocked it from going anywhere at all.)


> Since he couldn't process numbers in any useful manner

I don't think this is true, because the article says:

> Yet he could still do mental arithmetic and perform other mathematical operations.

> He eventually mastered an entirely new digit system (where ⌊ stood for 2, ⌈ for 8, etc.); determined to keep working, he had his computer rigged to present the new numerals onscreen.

That suggests to me that he could indeed process numbers, but just couldn't see them.


If he had simply lost the ability to recognize the glyphs for 2-9, he would have still seen their shapes, and he could have learned them again, just as he learned new glyphs for them.

What seems to be happening here is a sort of crosstalk or feedback. His brain is still recognizing the glyphs, but those neurons are sending unwanted signals elsewhere in his visual cortex which scramble the perceived image.

My interpretation of this case is that the image we perceive consciously is a combination of the raw visual data and various recognition modules, so that we don't just perceive "a curved line crossing back on itself in the lower right of the image" plus "the abstract notion of the number 8", but we see the number 8 localized where the line forming it is. Same with any other symbols, objects, faces etc.

Because of this, the neurons that represent the conscious perception of a certain section of the visual field must receive both "low-level" information (the curves forming the digit, their color, the color of the background), but also "mid-level" recognition output ("the digit 8"). So if the neuron(s) recognizing the 8 act up and send scrambled signals, they can scramble the whole conscious perception of the corresponding area of the visual field. The fact that it also disrupts his conscious perception of faces overlying the digit confirms this.


Interesting.

The way we can do captchas suggests you may be on to something there.


I have also seen this example, where a man blinded by strokes still avoids obstacles and recognizes emotions, without any conscious awareness[1]. Is this the same phenomenon?

[1] https://www.nytimes.com/2008/12/23/health/23blin.html


That's called blindsight.

In the published article they say that there is no evidence of blindsight. There is a whole paragraph titled "No Evidence of Implicit Knowledge Found with Discrimination Tasks":

> Furthermore, we showed that he exhibited chance-level accuracy on two-choice discrimination tasks for these distorted stimuli, suggesting that his deficit did not spare the sorts of implicit knowledge sometimes found in other deficits of visual awareness


Damn, I read your first sentence as "That's called bullshit".

I wonder how much of the world I am accurately seeing.


It is the phenomenon called blindsight, mentioned in this article. (Edit: but not explained, so your question is reasonable)


From what I recall from my neuroscience class, this is likely caused by damage to the visual cortex, which is used to consciously process visual information, but not to the superior colliculi, which are used to unconsciously process peripheral vision. In other words, damage to one visual processing region but not the others.

From the article, it appears something similar has happened. Corticobasal syndrome potentially killed off some brain cells but not others, leading to damage at some areas but not others.


For reference, from the article:

"Sara Ajina, a neuroscientist at the University of Oxford who studies visual awareness deficits such as blindsight—a residual, unconscious “sight” in people with damage to the brain’s visual system..."


Never forget the possibility of a conversion disorder: https://en.m.wikipedia.org/wiki/Conversion_disorder


I don't understand how these experiments lead to the conclusion that separates higher cognitive functions from consciousness. In fact, I don't see anything here that necessarily sheds any light on the roots of consciousness. Can someone explain how the results of the experiment lead to that conclusion? This looks to me like, if it were a computer program, the encoding/decoding of the symbols is messed up for a largely contiguous block of symbols - like the jumbled-up weird shapes you get when you feed the wrong encoding to a program.


I think RFS' loss is a little deeper than ablation of a contiguous block of symbols, since characters and numerals aren't likely to be represented sequentially in the brain the way they are on keyboards or in ASCII. I agree with your conclusion otherwise. I see no connection to cognition in what the scientists explored.

Specifically, RFS' loss covered both encoding (recognition) and decoding (writing) of only those numerals that have no analogs among letters (i.e., not 0 or 1). Presumably this implies that the neurons that map the visual coding for numerals 2-9 in the cortex to and from their underlying concepts were lost. However, the concept of each numeral persisted, so RFS could learn replacement glyphs and map them to the concepts, thereby using them as equivalents to numerals 2-9.

So I agree with you, this seems like only selective damage to the V2 or V3 portions of the visual cortex used strictly to encode/decode numerals 2-9. I too see nothing related to higher cognition or consciousness, at least in the early stage of RFS' disease mentioned here.

Methinks the author at Science Mag who summarized the work got a bit carried away in seeking cosmic revelations therein.


My understanding is he could see the numbers but failed to recognize or make sense of these pictures. He wasn't aware of the digit pictures anymore and could not associate them with the "numerical symbols" stored in his memory (if our brains do it this way). But with the stored symbols he can still do mental math, so his higher cognitive function must be at least partly intact. Using the computer program metaphor, we can say his input routine for numbers is scrambled but the other calculation routines remained intact.

My guess is that because we do math, our brains create associations not only for the numbers but also for what we do with these numbers (math routines). And because we don't work with numbers all the time (we can also do symbolic math), it's necessary to separate the numbers from their meaning to accomplish these feats. If I were to "program" our brain, it would be much more efficient to separate the recognition routines from the higher-function routines, because we need to adjust the recognition routines all the time due to the (slightly) different fonts and handwriting of all the people.


That he could identify people without being aware of it.

Basically, if there were faces inside a number, the EEG showed that his brain recognized the faces, but he did not know there were faces, instead he couldn't understand what he was seeing.

So the hypothesized conclusion from this is that recognizing someone's face happens separately from consciousness.

Basically, the experiment appears to show that your brain can recognize faces without letting your conscious mind know about it. So you can't tell the face is there, but another part of your brain can.


But that seems fairly obvious, at least to me. Face recognition seems to be an extremely fundamental capability that has probably existed in some form almost as long as brains and image-forming eyes themselves. In RFS's case, his brain's "face module" spotted the faces, his damaged "number module" couldn't converge to anything useful, and the jumbled mess for whatever reason overrode the faces in higher-level perception fusion processes. Possibly because the faces were so small that they were lost in the noise, so to speak. Would be interesting to know how RFS would have perceived a face with small numbers drawn on it.

In any case, it is pretty well understood that our conscious visual perception is almost entirely unlike the raw stream of nerve signals emitted by the retina. We’re completely unaware of the huge amount of filtering, denoising, smoothing, stabilization, spatiotemporal fusion, extrapolation, interpolation, object detection/classification/conceptualization/contextualization/prioritization, and other processing that is done to create what we consciously perceive.


> But that seems fairly obvious, at least to me

Many things grow from prior knowledge into theories and hypotheses. So maybe from what we already know about the brain, you theorized that the conscious brain plays no role in recognizing faces, yet science needs to validate hypotheses with experimental proof. This is one such experimental data point that seems to validate that theory. I'm not sure we should start to dismiss the scientific process based on what we consider obvious or not. Many prior "obvious" things were false, so obviousness isn't good enough in my opinion.

> In any case, it is pretty well understood that our conscious visual perception is almost entirely unlike the raw stream of nerve signals emitted by the retina

For me, the more interesting bit is with regards to the limits of consciousness. In this case, it seems no amount of conscious effort trying to see and make sense of what you're seeing can actually unscramble it. The experiment seems to indicate that the raw stream works as intended, demonstrated by his being able to see the shape when rotated 90 degrees, but that there is another part of the brain responsible for turning the raw shapes and colors into abstract cognitive information, such as it being the number 8 or a face. And that this part is not controlled by consciousness.

Thus it shows that some high-level cognitive tasks, like visual pattern recognition, are separate from consciousness. And this isn't something I would have necessarily assumed. Maybe I would have when it comes to really raw low-level visual signals, like putting color labels on the wavelength information of light, but when it comes to labeling an abstract concept, such as something being a number or a face, I'd have thought that some part of conscious thinking would be involved.


This has some implications for artificial general intelligence. If consciousness does not automatically emerge from sufficiently high cognitive processing skills, those sci-fi super-human AIs may never arrive.

Anyway, I feel sorry for this patient. This must be a tough disease to die with.


I would argue consciousness comes not from cognitive processing skills, but from empathetic skills.

I see two possible ways. The first is: one forms a model of 'others' (for interacting better) and then applies that model to themselves. The second is: one wants to communicate thoughts, and thus needs to create a model for thoughts, which includes conceptualizing 'self'.


It's increasingly looking like there are deep biological reasons for expressions like "gut instinct". There are a lot of neurons in the digestive system. A recent paper[2] appeared to show that the fundamental purpose of sleep is to regulate reactive molecules in the digestive system. Another item I saw in Quanta[1], or somewhere else[0], described how gut bacteria have a direct link to the vertebrate brain and immune system.

So my modest proposal is that if we want AGI, we need to figure out how to implement a digestive system in silico.

[0] https://www.sciencemag.org/news/2018/09/your-gut-directly-co...

[1] https://www.quantamagazine.org/how-microbiomes-affect-fear-2...

[2] https://www.quantamagazine.org/why-sleep-deprivation-kills-2...


>So my modest proposal is that if we want AGI, we need to figure out how to implement a digestive system in silico.

The problem is that whatever function digestion provides to the mind must do so through modulating neural signals. That is, its influence is purely functional in nature. Thus we can replicate its influence without replicating digestion, e.g. simulate the dynamics of the vagus nerve. There is nothing essential about digestion to consciousness.


The last sentence is a bit facetious.


With various embodied cognition or extended mind theses floating around, can't be too sure :)


Can you think of a disease that's not tough to die with?


> Somehow, his brain was still processing and identifying a face—a fairly high-level skill—even though his conscious mind was oblivious.

I'm maybe wrong about this, but isn't the ability to recognize faces a fairly low-level skill? Case in point: other social animals have it, like apes, monkeys, or even sheep [1] (who can even recognize human faces [2]).

So maybe they're here referring to the ability of seeing representations of faces (little faces on the letters). Or is this just a blunder of the person who wrote the sciencemag article?

[1] http://news.bbc.co.uk/2/hi/science/nature/1641463.stm

[2] https://www.bbc.com/news/science-environment-41905652


The paper describes “event-related potential (ERP)” [1] measured via EEG.

“The N170 is a component of the event-related potential (ERP) that reflects the neural processing of faces, familiar objects or words.” [2]

They also measured the inverse of recognition “...improbable events will elicit a P3b, and the less probable the event, the larger the P3b.” [3]

[1] https://en.m.wikipedia.org/wiki/Event-related_potential

[2] https://en.m.wikipedia.org/wiki/N170

[3] https://en.m.wikipedia.org/wiki/P3b


There is a difference between “low level” and “human specific”. Faces are highly complex geometrically, while numbers, such as 8, are simple. Animals sure can recognise simple shapes (https://docsmith.co/2007/01/can-animals-recognize-shapes/), but they cannot endow them with abstract meanings, such as "number 8". The person in the article lost the basic ability of recognising very simple shapes while still being able to recognise more complex shapes, like faces, and to operate with abstract concepts.


Babies can recognise faces so I would definitely class it as low-level. Anything you can do as a baby is low-level.

High-level are things like reading and writing that are learned rather than evolved.


Good point. So whoever wrote that got it wrong, right?


Recognizing faces is the highest level of visual processing. The lowest levels are detecting regions of contrast, lines, then orientations of lines...

Textbook: https://www.amazon.com/Basic-Vision-Introduction-Visual-Perc...


> That he could still interpret letters, Schubert says, lends support to the idea that the brain has a specialized module for processing numbers.

This makes a lot of sense to me. I have almost no trouble at all remembering long strings of numbers, but I have the opposite problem with names. I cannot remember names to save my life unless I use them on a daily basis. I feel like this is a flip-flop from how most people process information: it seems to me that lots of people are very, very good at remembering names and have a hard time remembering numbers. Some people look at me offended when I forget their name, as if the only reason I would ever forget it is that I didn't like them. It's very embarrassing. When I show I can remember numbers or lines of code just by looking at them once, people are amazed but then move on.


Remembering names is generally not a thing that humans are good at.

My intro physics prof had a party trick where he'd memorize the names of everyone in the class within the first couple weeks based on their school ID picture, but he was quick to point out that he didn't have any natural affinity for remembering names, he just brute forced it.


This is one of those things where people say "oh, I can't remember names" when what they mean is "I don't put any effort into remembering names".

I know, because I've been the same. Now I repeat names in my mind again and again. The next time I see the person, even if I don't speak to him or her, I repeat the name in my mind. A couple of minutes after being introduced to a new person, I repeat the name. An hour later, I repeat the name. A day later, I repeat the name. I'll even write some names down inside a notebook.

And guess what, now I'm excellent at remembering names and am one of those people who have seen you once and then approach you with "hey djhaskin, how are you?"


And as far as I know, it's not something that's come up in the subliminal priming research, which is how we know most of the structure of how we recognize numbers and letters. I do wonder if it works like this for everybody, though, or if it could be something specific to him.

As for names I did a blog post[1] speculating that remembering names might correlate to how you deal with the following:

> So, imagine someone is walking along, down a street. They see the store they're going to and they enter through the shop door. So, when you were imagining this, did you see it? Which direction was the person walking, relative to your mind's eye? Did they turn to the left or right to enter the store? What color was everything?

[1] http://hopefullyintersting.blogspot.com/2018/12/ways-of-thin...


It doesn't make any sense at all to me. It wasn't numbers as such that he was having trouble with; he could still do math, he just couldn't recognize the symbols. And he was even able to learn symbols for numbers again, as long as they were new ones he made up himself instead of the standard symbols.

It's a bizarrely specific cognitive deficit. It's not like "I can't remember your name", it's more like "I used to know your name, and I can remember this new name I've made up for you, but when you tell me your 'real' name, not only can't I remember it, I can't even make it out, it all turns to static."


I "suffer" from this condition, too. I haven't found as much literature about it as I thought I would.


Me too. If a name is unfamiliar to me, even if it's no more than 5-6 letters long, I can't for the life of me recall it. But I can remember random sequences of numerals up to length 12. Why?

I wonder if there's some term for this malady, like 'literophasia' or 'numeraphilia'? I wonder too if it occurs equally as often in both genders. I would guess not.


To add some color to this, if it's a common name it's really hard to remember but if it's a foreign name to me or something I've never heard of, it's as if my brain thinks it's a random sequence of sounds and I'll remember it forever. Indian names, native Hawaiian names, no problem, but forget about it if your name is George. It's just the common names I'm used to that I have a hard time with.


Seconded; I have that with organic chemicals.

Diazabicycloundecene, sure, used it once, will remember it forever.

Oh nice to meet you George, what was your name again?!


This reminds me of two mind-boggling discoveries I’ve had about the human mind.

The first is that the two halves of your brain act somewhat independently. And in rare cases, people’s left and right halves are “split” apart, and they act entirely independently - e.g. their left and their right hands will act separately, or even argue with each other. CGP Grey made a great video about this:

https://www.youtube.com/watch?v=wfYbgdo8e-8

The other is that most people have a “voice” inside their head - e.g. when they read, they hear their own voice inside their head reading; their thoughts are like sentences they “hear;” they could look in the mirror and have a conversation with themselves. Some ~10% of people do not have this internal monologue.

https://news.ycombinator.com/item?id=22193451

https://news.ycombinator.com/item?id=11162927


> eventually mastered an entirely new digit system (where ⌊ stood for 2, ⌈ for 8, etc.)

Why invent a specialized number system? How difficult would it have been to teach him Chinese numerals, for example?

Also, I'm very curious about his perception of Roman numerals.


Brilliant. Roman numerals are, after all, just letters. The question "why haven't they used them?" becomes pretty jarring once you think about it.


Roman numerals are not a place-value system and you can’t make a computer use Roman numerals just by switching to a custom font.
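
To make that concrete: a font substitutes glyphs per character, but each Roman symbol depends on the whole value, so the conversion needs arithmetic. A minimal Python sketch (just an illustration, not any particular library's API):

    # Roman numerals require arithmetic on the full value; no
    # per-digit glyph substitution can produce them.
    ROMAN_VALUES = [
        (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
        (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
        (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
    ]

    def to_roman(n: int) -> str:
        """Convert a positive integer to a Roman numeral string."""
        out = []
        for value, symbol in ROMAN_VALUES:
            count, n = divmod(n, value)
            out.append(symbol * count)
        return "".join(out)

    # 1984 -> "MCMLXXXIV": the "CM" encodes 900, which corresponds
    # to no single decimal digit of 1984 on its own.
    print(to_roman(1984))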


OpenType font shaping is shockingly powerful (e.g. https://blog.janestreet.com/commas-in-big-numbers-everywhere...), so I'd be surprised if someone hasn't made a Roman numeral font already.


Heh, I knew I should have said something about OpenType shenanigans :)


Roman numerals are a lot harder to do arithmetic with.


True. Maybe that's the reason why they chose that new system.


Something that is (crucially) missing from the article but appears in the paper: his ability to read words describing numbers like "five" remains intact, as does his ability to read Roman numerals.


https://www.livescience.com/nueroscience-patient-who-can-not...

^ This article (reporting on the same research) seems to be a bit better.


Maybe so, but the ads were so annoying I couldn't keep reading. It was almost comical how they kept popping up every few seconds, like some kind of parody.

Fortunately, https://outline.com/5nJgjL

Curiously this article presents a different hypothesis about why he is able to read 0 and 1:

> It's also "surprising" that his brain doesn't have problems with "0" and "1," McCloskey added. It's not clear why, but those two numbers might look similar to letters like "O" or "lowercase l," he said. Or those two numbers might be processed differently than other numbers in the brain, as "zero wasn't invented for quite a long time after the other digits were," he said.


> Or those two numbers might be processed differently than other numbers in the brain, as "zero wasn't invented for quite a long time after the other digits were," he said.

Come on, that's just silly. What are they even suggesting here? Zero is encoded in a different area of RFS's brain because he learned it as an adult, after its invention under the Nixon administration?

Also from that page: "The group of researchers created new digits for RFS which they called "surrogate digits" so that he could use them in daily life."

That's not some sort of new jargon; it's just what the word surrogate means! How does someone end up writing for a living without learning how to use a dictionary?


> 0 and 1 looked normal—perhaps because those digits resemble letters

Or perhaps this is proof that we live in a simulation that is written in binary.


How is "cognition != consciousness" surprising, especially given the (now old) fMRI studies complicating (our experience of) free will?

Why would one assume awareness and processing abilities to be related, if there is no evidence for it?


As a software developer I imagine the brain not as separated layers but as a huge collection of routines/procedures for this or that (including a procedure for understanding numbers), interconnected by a spaghetti code of links. Many of these processes specialize in making decisions and calling other processes. One specialized procedure is called consciousness; it is blind to the rest of the processes and is tricked into believing that it does all the calculations, while it merely reads the results from other processes.
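
A toy sketch of that picture in Python (purely illustrative; the routine names are made up and nothing here models a real brain):

    # Specialized sub-processes do the actual work...
    def recognize_face(image):
        return "grandma"            # heavy lifting hidden from the caller

    def do_arithmetic(a, b):
        return a + b

    # ...while the "consciousness" routine only reads finished results
    # and narrates them as if it had computed them itself.
    def consciousness(results):
        return [f"I figured out: {r}" for r in results]

    for thought in consciousness([recognize_face("pixels"), do_arithmetic(17, 25)]):
        print(thought)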

Also, I highly recommend the book The Language Instinct by Steven Pinker.


https://en.wikipedia.org/wiki/Epiphenomenalism

Read Patricia Churchland instead:)


The consciousness procedure also has links out to the other procedures. It calls them with inputs based on what they send it. It's a co-recursive system.


I like to think of it as not even a collection of procedures, but just a giant jumble of self-modifying spaghetti code.


Reading is weird.

I've never had migraines, but a few years ago I suddenly, and briefly, lost the ability to read and write. After a number of doctors and tests ruling out a TIA, the best guess is one-time transient aphasia from a migraine aura.


Migraines in general are weird. It's rare[1], but I can occasionally tell when a severe migraine is about to occur because rooms feel too big. I walked into my bathroom a few days ago (which is about 2mx2m/6'x6'), and each wall seemed about 2 feet longer than it should, and the far corner looked about 3 feet too far away (it was more a feeling than a visual disturbance - it's hard to describe). I felt _small_, which is a really odd feeling when it comes on suddenly. About an hour later, I had a severe migraine.

I'm assuming it's a subtle form of Alice in Wonderland syndrome[2].

[1] I get regular migraines, but I can only remember this happening a few times, possibly because it's fairly subtle

[2] https://en.wikipedia.org/wiki/Alice_in_Wonderland_syndrome


Oh wow, I get that semi-regularly (a couple times a year maybe) when I'm sitting in front of my HTPC TV with my wireless keyboard. Everything feels far away, and I feel tiny and dissociated from my actions. Not subtle at all. Standing up snaps me out of it, but it feels kind of cool so usually I let it persist.

I don't think it's ever preceded any kind of headache, but I've never tried to correlate the timing with anything else. I'll have to pay more attention when it happens again.


I've never had a migraine but I've definitely had that feeling like the room was too big. I think it only happened once or twice as a kid from a fever or something.


I had that as a kid for a while; when it reached its peak, I ended up in hospital with meningitis. From that period, I distinctly remember feeling small and the room walls and ceiling receding somewhat, and also some weird stuff with colors - I could focus on the visual noise I saw (kind of like the noise you get from a digital camera in low-light conditions), and focusing on that would trigger the feeling of the world expanding / me shrinking. All of this only happened at night.


Same thing happened to me in college, although I never went to the doctor about it. I thought it was a drug flashback at first, and was terrified it was permanent, but it went away after about two hours.


Just to clarify a bit, it's possible he can't communicate thoughts about numbers. That's a different thing from being unable to read them.

I say this because my father had a stroke ~3 years ago. It compromised him significantly. He definitely can't communicate the way he used to. One of the patterns we've seen is he can have a thought but can't communicate it; he tries, but it doesn't come out. I haven't noticed it being as specific as with the man in the article. But I wanted to mention that processing and communication are two separate steps.


The original paper Lack of awareness despite complex visual processing: Evidence from event-related potentials in a case of selective metamorphopsia [1] is not as Freudian as the article about “consciousness”:

> How does the neural activity evoked by visual stimuli support visual awareness? In this paper we report on an individual with a rare type of neural degeneration as a window into the neural responses underlying visual awareness. When presented with stimuli containing faces and target words—regardless of whether the patient was aware of their presence—the neuro-physiological responses were indistinguishable. These data support the possibility that extensive visual processing, up to and including activation of identity, can occur without resulting in visual awareness of the stimuli.

[1] https://www.pnas.org/content/117/27/16055


Silly question: how do they know he isn't lying?


I read the paper and they address that point by saying that they don’t see any benefit he would gain from lying.


Attention, or maybe out of some psychological trauma he started lying to himself about this particular issue.


I knew an older woman who had a hard time remembering freeways by their numbers but could remember them all by their names (e.g. "Santa Ana" freeway instead of "5"). That seemed odd because to me the number was the name, so why would it be any different?


They weren’t always numbered. Up north, there are still people who refer to Highway 40 as Lincoln Highway.

I’ve heard a couple of LA natives say that’s an explanation for SoCal’s affinity for calling them “the 5”: they switched from “the Santa Ana”.


> there are still people who refer to Highway 40 as Lincoln Highway

This is interesting to me because here in Iowa, the Lincoln Highway is "old Highway 30". In the 60s they rerouted Hwy30 about a mile south of the existing Lincoln Way so they could make it 4 lane and avoid towns.


In case it wasn't obvious from my post above, the Lincoln Highway referred to in both my post and the parent post is the same highway [1].

The part I thought was interesting is I assumed it would have the same number designation across the entire nation, but that doesn't appear to be the case.

[1] https://en.wikipedia.org/wiki/Lincoln_Highway


When they gave him the foam 8 he didn't recognize it and they speculated it was because the brain prioritised sight over the sense of touch. Surely they tried the same experiment after asking him to close his eyes first?


Oh man, I wonder what he pictured in his head if he tried to picture an 8 instead of read one? Nothing? Squiggles? Headache?

Honestly, I could ask a billion more questions about this situation. So fascinating.


I would’ve asked him to draw digits, potentially with his eyes closed, to see if he still has a representation of the shape of digits in his mind.


I believe that's the actual drawing next to the digit 8?


I think it was on this site that someone posted the very interesting perspectives on consciousness by Roger Penrose.[2] From that I found this video by Dr. Stuart Hameroff, M.D.: Microtubules & quantum consciousness.[1] Highly recommended.

[1]: https://www.youtube.com/watch?v=R5DqX9vDcOM

[2]:https://www.youtube.com/watch?v=hXgqik6HXc0


A real life Flowers for Algernon :(.


Letters are phonograms with strong clues about pronunciation. Numerals are pure ideograms with no clue about pronunciation. Modern English is a mix of phonograms and ideograms, with more of the former. It's very likely they trigger different subsystems of the brain, and in this case the ideogram subsystem was defective.


The post only mentions recognizing letters and digits, with no mention of the subject actually writing them. Since the subject was an engineer, maybe writing numbers would be a sort of mechanical thing that could bypass the limitation? That would've been a nice addition to this study.


The SF novel Blindsight by Peter Watts explores the idea of aliens that are cognizant but not conscious. Fascinating book.

https://en.m.wikipedia.org/wiki/Blindsight_(Watts_novel)


Blindsight was a well-crafted book, but it sort of underplayed the benefits that consciousness gives us. We're bombarded by a huge number of sensory impressions at any given time. The unconscious parts of our brain do a huge amount of work in processing them and figuring out which bits are important. But without consciousness to synchronize, serialize, and persist it, all that information will just fade away within a few seconds. To remember something, or talk about it, or engage in multi-step planning regarding it, you need to be consciously and not subliminally aware of it.


Perhaps the coherently serialized stream of qualia isn't the only possible way to intelligence?


Oh, clearly not. GPT-3 clearly produces fairly intelligent responses, but by mechanisms that in a human would be pure reflex. And your unconscious reflexes can produce responses that can be relatively intelligent at times. It's just that creatures with brains that evolved on Earth need consciousness to remember anything for more than a couple of seconds.

Also, who knows how qualia fit into this; if they exist, they might very well exist for subliminal stimuli too. In the scientific study of consciousness, qualia aren't really part of the paradigm. And philosophical investigations of consciousness don't have subliminal stimuli as part of their paradigm.


And what do you call the stuff which gets filtered by consciousness so that it can be remembered for more than a couple of seconds? That's what I meant by the qualia.

As an example of what I mean (it was recently on HN, too): apparently the perception of "now" is a hack supported by delays, so that input from various senses is perceived as synchronized even if it does not arrive at the brain synchronized, plus compensating for the temporary blindness during saccades. Aliens could have a different "hack" there, resulting in a profoundly different perception of time.


If you're interested in learning more about how sensory impressions go on to become conscious awareness, at least so far as we knew 5 years ago, I'd highly recommend the book Consciousness and the Brain.


Can you relate this to color blindness? Most of the time I could swear I can see that green, orange, and red are different, but somehow my brain refuses to identify them.


I might be wrong but my understanding was that the cause of color blindness is an abnormality in the retina, not a neurological condition.


There are definitely cases where color blindness is caused by neurological issues. Strokes can cause temporary or permanent color blindness, for example.


“Intelligence” is a hundred different little daemons running in tandem.

I’m surprised at how this analogy almost never comes up in a community of programmers.


I hate to say this, but could he just be lying?

That’s what I worry about when we try to learn a lot about human minds from a case.


I call shenanigans. There is nothing inherently different about numbers and letters. They are all just abstract shapes. There is no possible way that evolution could have produced a mechanism that could distinguish between the two without training.


Couldn't the brain damage have essentially caused the state from the training process for those specific glyphs to fail with "garbage" associations? Thus he gets disorienting confusion and reinforced revulsion when he tries to understand them, interfering with any occupational therapy to retrain them, whereas arbitrary glyphs lacking such associations didn't have those issues.


Yes, that's possible. (I actually missed the bit about the brain damage. I somehow got the impression that article said that he was born with this. I guess I skimmed too fast.)


If evolution was responsible for character recognition, reading would not be taught in schools...


Yes, that was kind of my point.


Like the Google Translate app that live-translates from the camera, goggles could be developed for him that translate everything that looks like 2..9 into his new representation of numbers.
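
A rough single-frame sketch of that pipeline, assuming pytesseract for OCR and OpenCV for drawing. The surrogate mapping is hypothetical: the article only shows ⌊ for 2 and ⌈ for 8, and since OpenCV's built-in fonts are ASCII-only, plain brackets stand in for them here.

    import cv2
    import pytesseract

    # Hypothetical stand-ins for his surrogate glyphs (article: 2 -> ⌊, 8 -> ⌈).
    # A real build would blit pre-rendered images of his actual glyphs.
    SURROGATES = {"2": "[", "8": "]"}

    def translate_digits(frame):
        # Locate words and their bounding boxes in the camera frame.
        data = pytesseract.image_to_data(frame, output_type=pytesseract.Output.DICT)
        for i, word in enumerate(data["text"]):
            if any(ch in "23456789" for ch in word):
                x, y = data["left"][i], data["top"][i]
                w, h = data["width"][i], data["height"][i]
                replacement = "".join(SURROGATES.get(ch, ch) for ch in word)
                # White out the original text, then draw the surrogate version.
                cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), -1)
                cv2.putText(frame, replacement, (x, y + h),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2)
        return frame

Real AR glasses would have to run this per frame with tracking, but the shape of the pipeline is the same.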



