The quote seems deceptive as presumably the patient had a brain scan reading that they interpreted as "no" shortly after asking "Are you in pain?".
Just from the article I'm dubious. They presented no evidence that this isn't wishful thinking, which is something I'd be keen to rule out if I were in their place. Could just be journalists fudging the message, though.
This article leaves out some of the detail from older articles (specifically [1]) about this patient where they describe the painstaking process by which they determined these patients were answering yes-or-no questions knowledgeably. See also [2] for an older article with more detail around the process.
They ask the patients to imagine themselves playing tennis or wandering in their house for tens of seconds at a time. These tasks activate different neural networks, and it is possible to tell them apart using fMRI.
Sustaining the activation of higher-level cortices requires not only consciousness but attention and concentration.
From there they can encode yes/no answers (tennis vs house).
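To make the encoding concrete, here is a minimal sketch of what the yes/no decision might look like once mean activation levels are available for the two imagery networks. The function, thresholds, and numbers are hypothetical illustration, not the researchers' actual pipeline:

    # Hypothetical per-question readout: mean BOLD activation in two regions
    # of interest, averaged over the imagery window. "Imagine playing tennis"
    # drives motor-imagery areas; "imagine walking through your house"
    # drives spatial-navigation areas.
    def decode_answer(tennis_activation, house_activation, margin=0.5):
        """Return 'yes', 'no', or None if the trial is too ambiguous to call."""
        difference = tennis_activation - house_activation
        if difference > margin:
            return "yes"   # convention: tennis imagery encodes "yes"
        if difference < -margin:
            return "no"    # house imagery encodes "no"
        return None        # inconclusive; repeat the question

    print(decode_answer(2.1, 0.4))  # strong tennis response -> "yes"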
This suggests future work on assistive devices that may be able to "speak for" a patient via machine learning and manual algorithmic adjustment... Huge quality-of-life improvement potential for people with limited or impaired vocal ability.
Beyond that, there is far wider potential for translating brain scans directly, skipping the vocal-aural pathway entirely and communicating with others digitally.
I was thinking this as well. It reminded me of the Captain Pike character in the original Star Trek, who could only respond yes or no through a light.
There is evidence that it is. They have been able to reconstruct images people are seeing from fMRI data. The information is there; it's just a matter of processing it.
Right now, researchers are putting together functional brain-structure maps combining fMRI, SPECT and PET scans for Alzheimer's research. Apart from the inconvenience of wearing a giant magnet, pulling together one or more brain-scan technologies with compute resources might be a way to pull it off. Heck, if the compute power needed were too bulky to be practical, it might be possible to offload it to a hosted service (talking a solution 15-18 years out anyhow). It's entirely reasonable that keyboards would be slower than thought input in 50 years.
I need to go remind my family to unplug me if I'm ever in a state like this. Death is infinitely preferable to being trapped in my body for endless years.
I can't imagine that this is the first time somebody has put a vegetative patient into an MRI. Does anybody know what makes this time different? Are they using a new imaging technique, or is the patient perhaps in a different (previously undiscovered?) state from other "vegetative" patients?
Edit:
Hitting up Wikipedia to refresh on the terminology and delimitation of different states, I found this:
> "In 1983, Rom Houben survived a near-fatal car crash and was diagnosed as being in a vegetative state. Twenty-three years later, using "modern brain imaging techniques and equipment", doctors revised his diagnosis to locked-in syndrome.[26] [... interesting stuff about facilitated communication, and how it is bullshit ...] Houben's case had been thought to call into question the current methods of diagnosing vegetative state and arguments against withholding care from such patients.[26][32][33]"
So a whole bunch of people have been imprisoned while conscious... stuck for decades with no capacity to communicate, and nothing but constant hospital white noise as their sensory input. Scary.
I guess it depends on what kind of pain is being referred to. I can't imagine myself being both conscious and in a vegetative state without experiencing a lot of psychological distress.
You're imagining who you are now being in that state, not who you would be after enough brain damage to leave you in a vegetative state. Imagine that you're not (for example) laying down long-term memories, or that your ability to sense the passage of time is compromised.
I'm guessing you're being down-voted for the way you expressed your sympathy, but if it were me, I'd want some brave family member to consider ending my misery. I certainly wouldn't want to be "trapped" in a non-functioning body unless there was a good chance I'd recover at some point.
It's good to hear he's not in pain, but what would they do if he expressed a desire to be "put out of his misery" via the fMRI?
There's enough bandwidth for yes-or-no questions, right? I wonder if anyone will ever come up with the courage to ask such a patient "Do you want to die?"
But what I don't particularly like about that line of thinking is this: medicine thrived and grew on a quest to fight disease and death. The talk about "pulling the plug" and euthanasia opens a huge Overton window that can slide from not treating terminally ill patients all the way to eugenics.
Can we use one of those $60 brainwave-detecting toys for this? I would imagine training would consist of the patient trying to modulate a tone's frequency with thought, and then testing would have the person answer yes/no with high or low tones.
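Sketching what that training-and-testing loop might look like, with a simulated signal standing in for the toy's real output (the reader function and thresholds below are made up for illustration; each consumer headset has its own SDK):

    import random

    def read_attention_level():
        """Stand-in for the toy's signal; a real headset SDK would go here."""
        return random.uniform(0, 100)  # hypothetical 0-100 'attention' score

    def answer_from_trial(n_samples=50, high=65, low=35):
        """Average the signal over one trial and map it to yes/no/unclear.
        Sustained concentration -> high tone -> 'yes'; relaxing -> low tone -> 'no'."""
        level = sum(read_attention_level() for _ in range(n_samples)) / n_samples
        if level > high:
            return "yes"
        if level < low:
            return "no"
        return None  # ambiguous; repeat the question

    print(answer_from_trial())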
So, to all you brain upload enthusiasts: What if the Mark I upload experience feels like acquiring an intellectual deficit and living in a dark can? Still better than death?
"upload consciousness" is just a meaningless juxtaposition of words, the modern pseudo-intellectual crank lexicon.
One might as well say, in other ears: "capture personality with cathode rays" or "compress the fluid of the soul" or or "render oneself in brass automata" or "bottle the element of the mind".
It is essentially an argument from ignorance combined with an argument from authority: "look what we've discovered! we dont know it's implications! here's some implications (sprinkled with academa-bable)".
I find it disheartening that this "hyper-skeptical" brand of pseudo-intellectualism has taken hold in certain circles. Suppressing imagination and foresight in favor of a cold reverence for facts is misguided. Creativity and vision (along with need) are the drivers of scientific progress; you cannot have one without the other. So dismissing thought experiments and speculation as a "meaningless juxtaposition of words" utterly misses the point of everything that has been accomplished by scientific progress thus far.
It doesn't take much of a leap at all in fact to realize that uploading one's consciousness to a virtual world is very plausible. If the meaningful stuff of consciousness is information rather than matter, it is in fact a certainty. Of course, questions of whether you're uploading your consciousness rather than a copy abound, but I think there are solutions to even that issue (which I've discussed here: http://www.reddit.com/r/philosophy/comments/237qqj/kurzweils...)
Do you have any clue about the current state of neuroscientific research? Scientists are struggling to understand a single neuron, there are no models that reliably predict networks of more than two neurons, and the current state of the art is recording partial signals from a few dozen neurons.
"Brain upload" is just the modern day version of the philosophers' stone. As more knowledge about the human brain will be taught in schools, and the current state of the art will become public knowledge, people will look back at the idea of "brain uploads" just like we look back on the idea of "turn lead into gold".
We have a poor understanding of the brain from a bottom-up perspective. But from a top-down perspective we are far from ignorant. We can be fairly confident that consciousness arises completely within the brain through material processes. If this is the case then consciousness is a function of information rather than matter. And of course, information is independent of the medium so uploading consciousness is in fact a small leap from the assumption of materialism.
But information cannot be transferred as we please. I believe that our consciousness arises from the information stored in the material structure of our brain. But there is no reason to believe we have any way of extracting or transferring this information to another place.
If you assume this step is a "small leap" you are entering the realm of science fiction. And that's okay, I greatly enjoy reading science fiction. But if you tell people that this might actually become reality in the future, they will rightfully be skeptical.
Transferring the information within the brain is no harder than transferring a file across the internet. The hard part is decoding how information is represented within the brain.
We don't even need to do that though. The hard part isn't transferring information, but transferring the consciousness sustained by that information. If you believe that neurons are the sole physical entities that make up the brain, and that no single neuron has a significant effect on consciousness, then replacing each neuron one by one is simply a logical deduction from the premises. I no more have to "believe" this than I believe in a given mathematical proof. If the premises are accepted, then the conclusion of transferring consciousness is required.
I think the word "consciousness" holds very different meanings for different people. I don't think you're interpreting the word in the same way other people do.
Furthermore, after reading the comment you linked to, I disagree with the premise of gradually replacing each organic neuron with a synthetic surrogate. But, hey, if you're into the idea of slowly eroding your central nervous system, and back-filling the tissue with a machine that picks up the slack, I won't stop you.
The remnants won't be you, though. If we amputate a goldfish's brain, and the body continues to be guided by an electronic apparatus, the goldfish is no more. The husk that remains is a grotesque puppet.
If, as my parents were slowly debilitated by Alzheimer's, they elected to supplement their mental functions with computer assistance, I would regard them as I might regard a dead animal that was stuffed and mounted.
There is no such experience as climbing inside the circuits of a computer to experience immortality.
There is no such experience as slowly dissolving one's self into a mixture of microscopic machinery, after which we suddenly blink our eyes and become miraculous robots.
We die. The robot blinks, and masquerades as us for the benefit of the living. Nothing more.
But if instead of replacing your neurons with "a mixture of microscopic machinery", you replace them with other neurons, you remain yourself (instead of becoming a stuffed animal)? Care to explain why? Because the essence of humanity is in the exact molecules that make up neurons?
What if we replace some of the atoms in these molecules by different isotopes? (a common practice for studying metabolism, including in human brains: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC22218/) Does the person remain human, then? Where do you draw the line and what is the reasoning behind it?
Because I have yet to see any product of human effort come close to the scale of complexity that cellular tissues represent.
Current-generation electronic systems are very complicated, and convincing "life-like" behavior still seems to be out of reach. Even if we eventually manage to produce systems whose life-like behavior is convincing enough to bridge any uncanny-valley gap, there will still be a confusing grey area between truly sentient Life and deceptively convincing complex apparatuses which are not alive.
Even to reach the grey area will demand large collaborative teams of skilled and talented experts. If breathing life into a complex system represents a challenge above and beyond the Manhattan Project or the Apollo moon missions, how will we ever decide who benefits from the fruits of such labor? Are human beings capable of sharing such a thing?
These things will not be toys. They would be more controversial than genetically modified foods. If a tissue-replacement technology were to be implemented, would it be self-replicating and autonomous? If so, could it be infectious? Or would it be so durable that it wouldn't need to repair itself? Would its many tiny components communicate with one another? Could that communication be intercepted and disrupted? These are huge problems.
Even if one very powerful HAL 9000 system solves all these problems for us, and manages to devise a scheme to mass produce a practical inorganic replacement for neurological tissue, isn't it possible that such an expert system might have an agenda of its own that does not represent lesser human interests like immortality? Wouldn't it require a similar leap of faith to trust in a technology with a purpose that can only be rationalized and not proven?
For these reasons, I place my bets on biotechnology, using native human cells, and not electronic surrogates.
If I know that I am alive, and that typical organic cells produce this phenomenon of experience, and it's within our reach to manipulate stem cells to repair our brains and keep them alive without any upper age limits, then I'll choose the sure thing, before placing faith in any experimental electronic surrogate.
Your position is that you prefer and believe in "repairing" our brains using biotechnology because there are too many problems with the alternative, "electronics-based" solution. That is a reasonable opinion and preference.
My response to your comment focused on what I perceived as "consciousness as something that is defined by the workings of the human brain" and that could not be achieved by any other means.
While I agree that attempting to replace our brains with "electronics" is a dangerous game, I also think that the computational approach can provide a lot more interesting answers (and questions) about many phenomena that occur in the "intelligent mind" and what consciousness really is, than the simple "fix by replacement" can.
There's another as-yet-unexplored hazard though, and it's the emotional aspect of showing people convincing facsimiles of their loved ones.
Tempting people into making emotionally charged decisions while they're in the fragile state of confronting death is an ethical sinkhole. If people actually believe that their dead relatives have gone off to live on the great server farm in the sky, what comes next? At the very least, a profound emotional attachment to an air-conditioned warehouse filled with noisy machinery.
Alive or dead, any convincing artificial construct of a human being is going to carry a lot of weight and social currency. It's a phenomenon worthy of experimentation, but there's a very real hazard of making irrational, emotional decisions that revolve around these things, which might carry little practical benefit. There's some serious risk in being deceived by artificial creations, into behavior that is not in our own best interests.
I mean exactly what you mean by consciousness. There is no big mystery here.
I take it you are a believer of a "soul" or some kind of panpsychism? If so, then you have no place to deride anyone else's opinions as "technobabble", as yours are simply "babble".
If you believe in material matter giving rise to consciousness, then it must be the information contained with that matter that is of value, rather than the matter itself. At that point replacing biological neurons with electronic ones makes no difference as long as they function exactly the same. Information does not care about the medium, only the mechanism.
There is a gigantic mystery which you are sweeping under the rug.
Sure, a purely materialistic conception of the universe is very neat and appealing. But you are at risk of missing something of fundamental importance to the overall nature of being if you simply shut down any possible additions to this conception.
The mystery is this: Why can't we observe consciousness? No, literally, we cannot observe it. The scientific process at least requires a starting observation, some phenomenon which requires explanation, something you can point to and say, "What's that?". The sun rising and setting. Brownian motion. Whatever.
Consciousness on the other hand is merely the fact that the universe exists from a perspective. You only ever have access to one perspective, your own, you can never observe any other instance of perspective "out there", "in the world". It is not a thing, an "object". It is a subject, "you".
I don't shut down any possible additions to the concept of consciousness; I just require that any additions be motivated by need. At this point in our understanding there simply is no need to presume some sort of metaphysical happenings. The brain being the sole generator of consciousness currently appears to be powerful enough to explain everything we know about consciousness. Add to that all the evidence of brain activity correlating with conscious experience, of exciting the brain in specific locations generating conscious experience, and of brain damage in various locations having clear and permanent effects on consciousness, and we have strong evidence that the brain is in fact all there is. We may not be able to "observe consciousness directly", but we have constrained its possible whereabouts to "within the brain". Sure, more work needs to be done, but we are far from completely ignorant here.
I really don't think these types of experiments tell us anything about the fundamental question of consciousness. The question being, why am I here experiencing it all? Why is it not just, well, "automated", without me here as an "observer"?
So you have a research subject and you stick a pin in his brain and that makes him, I don't know, forget the alphabet. You say that contributes to the theory that consciousness is generated by the brain.
Two objections.
1) I already knew you could stick a pin in his eye and he would probably fail to recall the alphabet for several minutes. What I am getting at is that the fact that human consciousness is "heavily invested" in the material universe is bleeding obvious from the get-go. What does recent knowledge about the brain add to this? There is a helluva lot of functionality there, sure. There is a serious nexus of sensation, communication, and computation there, sure. But the above-referenced Fundamental Question of Consciousness (TM) remains untouched.
2) How do you know your subject (the one with the pin in his brain) is truly conscious? Yes, I am talking about the possibility of a p-zombie[0]. You might want to say, "All reasonable people can here assume that the subject is a truly conscious being." And indeed I agree we can generally, in life, make such assumptions. However, you are hoping for scientific understanding of the nature of true consciousness, so you bloody-well hope for a scientific method to verify a truly conscious being. Otherwise it's like hoping to have the periodic table before you can test whether a material is gold or lead. However, there can be no such hope for a future method of verification, because presently there is not even any observation of the very thing to be verified! This would be like expecting someone who, I don't know, perhaps grew up imprisoned in a cave to develop a model of the solar system.
>The question being, why am I here experiencing it all? Why is it not just, well, "automated", without me here as an "observer"?
It's simple really: you have to observe to decide. Imagine not having any vision-qualia. How would you read? Your vision-qualia is the substrate from which your conscious mind makes decisions. You cannot have conscious decision making without qualia. Conscious experience is simply high level representations of information interfacing with your decision-making apparatus. Nothing more, nothing less.
The concept of p-zombies is inherently contradictory. P-zombies are by definition indistinguishable from a conscious entity through any test. Yet when I ask a p-zombie a million questions designed to get at its conception of its conscious experience, it responds in exactly the same way as the conscious entity. Furthermore, all observations of its behavior prove to be equivalent to the conscious entity's. So the p-zombie must contain the exact same information regarding "conscious experience" as the assumed conscious entity. Therefore, the two systems are information-equivalent.
What is the difference between these two supposed systems? They are information-equivalent, and yet you claim the conscious entity experiences something the p-zombie doesn't. But of course, by definition of experience, that occurrence must be able to shape your future behavior. This contradicts information-equivalence.
You might say: well the representation of that information is different in the p-zombie vs the conscious entity. This doesn't work either. Just like the universe has no privileged reference-frame, it has no privileged information representation either.
Therefore, either qualia is a meaningless concept, or it is purely dependent on information representation.
Furthermore, if you accept that our brains are a large part of what makes up our minds, then somehow our physical brains "interface" with the substrate of consciousness. Neuroscientists should be able to find this interface. Physicists should be able to probe this "consciousness substrate" using the same physical properties of matter that our brains do. We have found nothing of the sort. The only thing in our brains are bundles of neurons and neurotransmitters. Those of you that suppose metaphysical explanations for consciousness are just as guilty of "god of the gaps" as those who would look up at the stars and posit a god holding them in place. Don't be that person.
> The brain being the sole generator of consciousness currently appears to be powerful enough to explain everything we know about consciousness.
Yes, that's reasonable, but it leaves out how it's done, what mechanism is responsible. Until that question is answered, until we have a proven mechanism, we can't assume that the brain is the sole repository of consciousness.
My point is this will become a respectable scientific theory when we can explain how consciousness arises in brain tissue, rather than simply showing that it's so by observation. Until then we're in the position of the blind men and the elephant, each with a different impression of an unseen source.
>we can't assume that the brain is the sole repository of consciousness.
Assume is the wrong word here. If there is no need for metaphysics then defaulting to materialism is the rational choice. Science is always a process involving probabilities and materialism is currently the most probable scenario. It then makes sense to make further predictions based on our current materialist understanding.
>> we can't assume that the brain is the sole repository of consciousness.
> Assume is the wrong word here.
But until we have a solid theory about consciousness, with the possibility of proving a claim to be objectively true or false, all we have are assumptions.
> Science is always a process involving probabilities and materialism is currently the most probable scenario.
But consciousness is by definition a non-material phenomenon, so that's the wrong approach. I don't mean to trivialize any of the aspects of this issue, because it's very complex and there are many uncertainties, but consciousness can't be reduced to a question of materialism, for the simple reason that many of its manifestations lie outside the material realm.
Also, science isn't always a question of probabilities. Science requires falsifiability, and a theoretical falsification isn't based on probability, but certainty. If I see a black swan, I have falsified the all-white-swans theory, not to some probability, but absolutely. This is not to diminish the importance of probability in scientific work.
> It then makes sense to make further predictions based on our current materialist understanding.
Yes, that does make sense, but it's not the whole story. If consciousness were open to unambiguous measurement, in a way that forced different observers into agreement (experimental objectivity), then I would concur. But we aren't there yet.
>But until we have a solid theory about consciousness, with the possibility of proving a claim to be objectively true or false, all we have are assumptions.
Your wording implies that all the different possibilities are equally probable. This is not the case. We "assume" that the explanation for conscious experience lies in materialism because it is the most probable answer given what we know. This is the rational choice here. Your usage of "assumption" is too colloquial to be useful in this discussion. Probability currently tells us that materialism is our best answer, and so we use it in our current model of the world.
>But consciousness is by definition a non-material phenomenon, so that's the wrong approach.
This isn't very useful. Energy is non-material, yet it can be measured using material detectors. Consciousness can be studied through material processes. Measuring brain mechanics, or simply questioning a subject about their experience are all material ways to study consciousness. We can't measure the metaphysical properties of consciousness, by definition. But then again we don't know that they exist either.
> If I see a black swan, I have falsified the all-white-swans theory, not to some probability, but absolutely.
Inherent in your conclusion is the probability distribution of your black-swan detection tool. What is the probability that it measured incorrectly?
The problem with the metaphysical types is that you propose some exotic mechanism simply because you can't imagine how natural phenomena could give rise to consciousness. This is simply a failure of imagination on your part. Until metaphysical explanations are required it is folly to assume that they are necessary.
>> But until we have a solid theory about consciousness, with the possibility of proving a claim to be objectively true or false, all we have are assumptions.
> Your wording implies that all the different possibilities are equally probable.
No, my wording only suggests that there is more than one possibility.
> We "assume" that the explanation for conscious experience lies in materialism because it is the most probable answer given what we know.
Assigning probabilities to assumptions, in the absence of a formal theory, has obvious risks. A formal theory can be exploited to produce real probabilities, probabilities that can themselves be compared to experiment and possibly become the basis for a falsification.
>> But consciousness is by definition a non-material phenomenon, so that's the wrong approach.
> This isn't very useful.
It's a critical point. If what's being measured is ethereal, then a physical measurement is innately misleading. Consider our mathematical abilities, and consider the relation between consciousness and a propensity for mathematical thought. Solely to make a point I might suggest a consciousness gauge that relies on the ability to reason mathematically -- more math, more consciousness. Sort of like the famous mirror test, but at a higher level: a certain animal is conscious to the degree that it can reason mathematically. But mathematics, and consciousness, are both nonmaterial properties, therefore (a) this makes a math scale an appropriate measure of consciousness, and (b) it creates yet another difficult, nonmaterial measure of consciousness.
If we instead measure the speed at which an animal solves a certain easily measured problem, the basis for many IQ tests over the years, that test will eventually be solved more efficiently by a computer. If we measure consciousness by the ability to pass some kind of Turing test, same answer.
My point? Tests of consciousness tend to be material quantifications of something non-material. These tests are easily passed by present and future computers, simulacra -- unless, of course, we begin to assign consciousness to computers. That's an interesting approach, but it does lead to reasonable doubt about the basic premise that the human (or dolphin, or gray jay) brain is the seat of consciousness.
> The problem with the metaphysical types is that you propose some exotic mechanism simply because you can't imagine how natural phenomena could give rise to consciousness.
It's the other way around, exactly the other way around. The materialist position exists because it's accessible to conventional experimental methods. It may turn out to be perfectly correct, I emphasize that, but it exists only because we don't have the tools to correlate consciousness with other measures using some other approach, and even if we did, there might be no point to the exercise if it didn't lead to objective evidence, evidence on which different observers agree, evidence on which most modern science depends.
> This is simply a failure of imagination on your part.
I hope you recognize the humor in that position. It is the materialist outlook that represents a failure of imagination. It is an intentional limitation of inquiry to make the outcome fit into conventional categories, into publishable experimental science. It's why psychological research has the reputation it does.
Einstein's brain wasn't physically (materially) significantly different from the brains of many other men and women. Yet he produced results that (a) sprang from "consciousness", and indeed to many represent a measure or criterion of consciousness, (b) arose in the realm of mathematics, clearly a Platonic realm, and (c) have no materialist explanation.
Think of all the remarkable outcomes of conscious behavior that aren't correlated with a measurable material cause, that are orthogonal to material measures. Then think of all the ways we have of measuring consciousness that can be faked by a computer.
My point? It's the materialist outlook that represents a failure of imagination, and it only exists because it allows a kind of faux science to address consciousness, as though the latter could realistically be reduced to material quantification.
>But consciousness is by definition a non-material phenomenon, so that's the wrong approach.
Going back to this: I'm not even sure what this means. If you assume you can't understand consciousness through material means then you've simply defined yourself into a paradox. This is an uninteresting and unscientific take and it gets us exactly nowhere.
It's pretty obvious that consciousness does not exist in any single unit of the brain (we probably would have found it by now otherwise). Consciousness is a property of some non-trivial subset of the units of the brain--at least this is what our current understanding leads us to believe (consciousness is within the brain, and it's not any single unit). To characterize this as "non-material" is again a presumption that is not warranted.
Your only reason for treating the substance of consciousness as something non-material is because you simply can't imagine that a first-person experience could be derived from material processes. There is no reason to think this at all.
There are at least two important facts to consider, when contemplating a conscious entity's perception of reality, as an individualized experience.
1. A distinct entity's experience is owned with individuality as a singleton object, while the entity persists in a recognizable, and integral state. (the thing is the thing while it's still the "same" thing, and remembers that it's the thing)
2. Consciousness is not a characteristic constrained in any way to biology or any other preferential form of implementation. ("things" aren't necessarily people, or even animals or plants)
I'll never deny the concept of Boltzmann Brains or the potential for artificially intelligent sentient beings born out of man-made (or any other form of) electronics.
I'll happily recognize that nuclear medicine has identified the reality that the atomic-scale constituents of our corporeal forms transition through a complete turnover of all molecular members in less than a decade; that by the time we age to 20 years, we are effectively completely replaced by different matter.
With all these ideas in mind, I don't take lightly the idea of jumping the gap from one state of existence to another.
At astronomical scales, life seems to be very rare. If life actually is rare, relatively speaking, considering ratios of void to matter, ratios of matter to life, and then ratios of life to self-aware, perceptive, communicative intelligence, then we must approach this whole scenario with the mindset that it is very difficult to provoke living entities, and very easy to render the living as dead and inert.
I know that I was dead before I was alive, because there was a period of time pre-dating my existence. The period before my birth can be regarded as a period when I was dead. Given that fact, it's not that I fear death, or wring my hands about my "soul." I was once dead before, and so I shall be again. Once that truth is squared away, the only thing left to worry about is getting swindled out of this only life I'm capable of living and remembering at the moment.
I am the medium, and I care about the otherwise ambivalent information. Me.
There's a major reason why I think it's dangerous to casually bandy about the idea of swapping out neurons with electronics, however small and unobtrusive, however perfect a replacement. It's the same reason why we'd hesitate to firebomb a city of humans and replace it with a city of robots.
Economically the robot city performs as an equal to, or better than the human city it replaces, but the robot city is likely to be fundamentally and profoundly different from the human settlement it has supplanted. The country and its global trading partners around the world won't feel the difference, and human life goes on everywhere beyond the robot city, without skipping a beat.
The broader scope of human civilization will survive, even if we trim a whole city from the herd. Thus, the decision to eliminate the city is obviously a simple one, isn't it? But then, there's also the ethical concept of robbing the human inhabitants of their own agency, simply because we can easily replace them with an equivalent. Chopping out our own brains, and substituting them with what might be little more than an elaborate textile mill is questionable behavior indeed.
We haven't gotten to the bottom of what life is yet, so I don't think we should be so trigger-happy about replacing it, even in exchange for shedding all our mortal foibles.
At the moment, all I'm prepared to identify about life, and the life I lead, is that my sense of self rides on top of a collective of amoeba-like cellular animals. My sense of self is unaware of this community of cellular creatures (neurons) without the assistance of surgery, microscopes, and unwitting/unwilling, sacrificial third-party predecessors as test subjects. Everything other than these curious neuron creatures seems to be optional to my sense of self (arms, legs, heart, lungs, liver...). After that, the rest of my existence is, pretty much, mysterious so far.
I think we have a lot of ground to cover before we start killing off our neurons. Even if they're bound to die off anyway.
There are likely to be other packs of real, live, human neurons still roaming about, after mine die off, and if I leave behind a heap of machines as a placeholder for what might no longer actually be me, how will I know for sure that I'm not bequeathing a huge, serious, weird problem for my normal human peers after I'm gone?
It's not pseudo-intellectualism. It's the result of about a decade of study in the philosophy of mind and computer science. What's disheartening is the trading of God for techno-babble to provide the "wonder" necessary to get over your own mortality.
This is an ongoing debate in science and philosophy, so it's certainly not a meaningless juxtaposition of words. The counter-arguments against Searle's Chinese room argument alone should be enough to put the handwaving to bed.
EDIT: Whoops, replied to the wrong comment. I meant to reply to the apparent Searle adherent.
You are going overboard in the other direction. Cochlear implants show that machines can talk to brains without an abstraction other than an electric signal. It's a long way from there to living even partly "in" a machine, but that was kind of my point.
The impossibility of an "uploaded mind" relies on the assumption that there is something special about the mechanisms behind the human brain's function that cannot be replicated in a machine (including an artificial, organic computer). The brain's function would have to be the result of something other than the structure and organisation of its biology.
(Probably the people who believe in the possibility of minds in machines would argue 'souls' and 'personality' are also purely functions of the underlying biology and learning, and thus could also be made artificially).
"Consciousness" itself is tricky to define, as it is essentially defined only by the experience of the evaluator. We are little further in our understanding then Descarte's, "I think, therefore I am". Our evaluation of the consciousness of other entities is limited to traits in them that we see in ourselves. "It thinks like me, therefore it is, too." Hence the appeal of the Turing's Imitation Game to evaluate artificial intelligence.
> relies on the assumption that there is something special about the mechanisms behind the human brain's function
Imagine someone spilled the contents of a large trash can in a pile on the floor. Now it is your task to create an exact replica of this pile.
There was absolutely nothing special going on when that pile of trash was generated, and yet it is impossible to replicate.
Sure, you can create similar piles of trash by spilling other trash cans on the floor, or you can carefully arrange banana peels and half-filled soda cans to create a pile that looks similar from the surface. But generate an exact copy? No way.
It depends on how 'exact' exact is. For a brain, there are a finite number of neurons with a finite number of dendrites and axons. A quick search suggests 2.5 petabytes (2.5 million GB) of storage http://www.scientificamerican.com/article/what-is-the-memory.... So about 2.5 thousand one-terabyte hard drives -- not an unreasonable amount of complexity to deal with today.
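Spelling out that arithmetic (a rough sketch, assuming 1 TB per drive):

    # Back-of-envelope: 2.5 petabytes in gigabytes, then in 1 TB drives
    capacity_gb = 2.5 * 1_000_000   # 2,500,000 GB
    drives = capacity_gb / 1_000    # 2,500 one-terabyte drives
    print(capacity_gb, drives)      # 2500000.0 2500.0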
Your argument seems to suggest there is some finite level of complexity which is the limit of engineering. What is this limit and how is it justified?
If our ability to scale to complex systems grows linearly, this could take a very long time. If it grows exponentially (as with Moore's Law), it could feasibly be much faster.
The trash argument is weak because the pile serves an identical function regardless of how exact or inexact it is.
Probably the challenge is not in making an artificial brain; it is in reading a current one (especially reading a living one without damaging it).
The argument I was trying to make with the trash was that we don't need magic to make things that are beyond our understanding. I do believe that our consciousness derives from the physical structure of our brain, but that doesn't mean it is possible to "upload" it.
Yes, part of the impossibility stems from the impossibility of reading a brain. Non-destructive measurements aren't possible because the amount of energy required to scan the tiny structures in the brain would destroy it. Destructive measurements would destroy the brain while we read it, and we can only read a subset of the information before the whole brain is destroyed.
But even if it were possible to scan a brain, I think you are underestimating the task of creating an artificial brain. Storage is the least of our troubles. To actually simulate the neural net, each of those ~2,500 hard drives must be connected to every other one, requiring millions of interconnects; then you need millions of processors, and every simulation step touches every byte of every hard drive. This is so many orders of magnitude beyond what we can do now that I doubt it will ever be possible.
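To put a number on "millions of interconnects": if every drive really had to talk to every other one, a full mesh over the ~2,500 drives estimated above would need n(n-1)/2 links. A quick back-of-envelope check (a real system would presumably use a switched fabric rather than a literal full mesh):

    # Full-mesh link count between n storage nodes: n * (n - 1) / 2
    n = 2_500
    links = n * (n - 1) // 2
    print(links)  # 3123750 -- millions of pairwise interconnects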
I think the only feasible way of uploading a brain will be to create a program that can convince everyone that the uploaded person is actually living inside the computer, similar to the Turing test. You'd configure the program by telling it anecdotes from your life and taking psychological tests, rather than "scan your brain". However, I consider even this variant unlikely, because it would basically require something similar to an AI.
Probably better than death, given that you might stick around long enough for better versions. Definitely better than death if you can choose to end it whenever you please.
Probably an uploaded brain would require simulation to prevent this 'dark can syndrome'.
I'm sure if we have the ability to upload a mind, we would be able to find ways to keep it stimulated and free from boredom. Perhaps this would be as simple as infinite browsing of Wikipedia. Perhaps stimulation of the dopamine or similar pathway would do the trick, but being immortal just to live high as a kite sounds like a bit of a waste.
Perhaps this need for stimulation could be removed from the mind on upload. Or similarly, perhaps the 'survival instinct' that drives the desire for infinite life through brain upload is vestigial now that modern humans aren't as affected by natural selection.
I realize this forum is for technological entrepreneurial pursuits but are you kidding me?
A single MRI machine costs ~$2 million used (angel investors are always the hard ones to grab). GE Medical Holding invests more than $1 billion a year in R&D, with almost $18 billion a year in revenue. This is like reading up on Intel processors and wondering if there is a way for you to personally refine and profit from refining the technology.
While I agree about your main point, your costs are a bit out of date. Most brand-new MRI machines cost between $1-3 million with an additional $300-500,000 for the room and ancillary equipment.
But you wouldn't have to use an MRI. In fact, an MRI is not practical at all, because the patient needs to be inside it to communicate. Much better would be an implantable electrode array. I'm sure they're much cheaper than an MRI, too.