‘With brain preservation, nobody has to die’ (theguardian.com)
118 points by bookofjoe 61 days ago | 481 comments



“Take very high resolution scans of brain structure to characterise how someone’s neurons work, recreate it in a digital format then put it in another robotic, virtual or biological body.”

This isn't preserving, it's copying. A copy of your brain/personality/you... isn't you... it's a copy. It's a cassette tape playing back the radio you recorded onto it.

It's really kind of odd that people talk about brain transplants, or living forever, and then talk about uploading your mind to the cloud, swapping your brain into someone else's healthy body, or scanning your brain and recreating it, and make it sound like it is going to help you live forever. If my brain is scanned and uploaded to the cloud, a copy of me would live on forever, but me, literally me, will still deteriorate and die, then rot away into nothingness. My mind, my personality, my essence, me, will not live forever under any of these solutions. I'll die. A copy of me, a point-in-time snapshot of who I was when it was taken, will continue on, with my memories, mimicking who I was. That gives me no comfort at all. I don't care if a copy of me lives on forever if I still die. That's not immortality, not for me personally, not if I die.


Every time you go to sleep your consciousness shuts down. What wakes up is something slightly different due to biological processes. You are no longer tired, your mood is different, some of the cells in your brain got replaced, memories become more stable and maybe linked to other similar memories, etc... This is far more pronounced for people waking up from a coma. Philosophers have thought about this for a while; see https://philosophy.stackexchange.com/questions/66018/the-bra...

So, each morning a copy of you wakes up? Or do you draw a line in the sand and say that is different?


> Every time you go to sleep your consciousness shuts down. So, each morning a copy of you wakes up

I think it doesn't "shut down"; it maybe fades away into a different, low-power mode. When you go to sleep and then wake up, there is still a continuity, because of the same underlying structures and processes from which the consciousness emerges. So it is like a 4D thing.

That continuity never really breaks (until death? which is like a singularity), and I think this is what makes you "you". You can't copy/teleport it (kinda by definition), but you can extend/transform it.

Perhaps a "Ship of Theseus"-style approach would work — gradually replacing neurons one by one, or brain sections piece by piece. Alternatively, the brain could be extended with peripheral "devices" connected to it with bandwidth comparable to the interconnections within the brain itself, up to the point where the biological part becomes irrelevant. This is similar to how a portion of the living brain can be removed while still preserving consciousness — as neurosurgeons often demonstrate.
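
A toy sketch of that distinction, under the big assumption that identity tracks the continuously existing structure: replacing parts in place keeps the same object throughout, while a scan-and-copy produces an equal-but-distinct one from the start. The names here are illustrative only, not a claim about real neurons.

    import copy

    # Toy "brain": gradual, in-place replacement, Ship-of-Theseus style.
    brain = ["neuron_%d" % i for i in range(5)]
    original_id = id(brain)

    for i in range(len(brain)):
        brain[i] = "artificial_%d" % i  # swap one part at a time

    assert id(brain) == original_id     # same continuous structure throughout

    # Scan-and-upload route: a snapshot with identical contents...
    upload = copy.deepcopy(brain)
    assert upload == brain              # indistinguishable by content
    assert upload is not brain          # ...but a distinct object from the start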


> Perhaps a "Ship of Theseus"-style approach would work — gradually replacing neurons one by one, or brain sections piece by piece.

You just blew my mind (no pun intended).

In this thread, we're basically alluding to the Hard Problem of Consciousness[0]. Consciousness is hard to define, and even harder to prove, so conversations about it become difficult.

There's certainly some subset of the brain you can take out and still have no effect on consciousness. Assuming you could replace small parts at a time, what would happen? At what point does the "mind", "self", or "consciousness" get lost?

[0] https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


> There's certainly some subset of the brain you can take out and still have no effect on consciousness

If we don't even know what consciousness is, how do you measure the effect on consciousness of removing arbitrary amounts of brain matter?


Doesn’t seem that hard to measure. It isn’t like the subject will become a mysterious P zombie, they will probably just become increasingly incoherent and then non verbal and then die.

Damage a brain enough and brain death occurs. By the same token, after a certain number of neurons are lost, there is no self. Whatever consciousness is, it can’t be sustained with a sufficiently compromised brain.

This is observable in late-stage dementia patients: it gradually becomes difficult to think, and in the final dire stages their sense of self degrades too.


> This is observable in late-stage dementia patients: it gradually becomes difficult to think, and in the final dire stages their sense of self degrades too.

At best, you can be an external observer. You can't truly understand what the person is experiencing.

A common thread I see among people with brain damage is that they don't recognize it. For example, take patients with hemispatial neglect [0]: if you ask them to draw a clock, they may only draw the right-hand side of one, with the numbers 12 through 6, and not draw 7-11. If you then ask them where the 9 is, they might insist it's there. If you ask them to point at the 9, they'll either point to the wrong number or not point at all. And if you tell them they're not pointing or are pointing to the wrong number, they'll insist they're right.

But even with all that knowledge, it's still hard to imagine what that person is actually experiencing.

[0] https://en.wikipedia.org/wiki/Hemispatial_neglect


That is the current common understanding. But there is a problem.

So-called 'terminal lucidity' sometimes occurs.

One theory is that it is enabled by the immune system shutting down, so inflammation ceases to interfere for a short time.

The problem with that is it still fails to explain the few cases where, shortly post-mortem, it is found that there is not much healthy brain left to speak of - certainly not enough to function somewhat normally.

Rather 'spooky'.


The thing is, it's not about removing brain matter, it's about replacing brain matter (with a manufactured copy, for example), and applying the same process in a secondary location (physical or digital, I guess). Will 'you' at some point stop being 'you'? Will the secondary copy at some point become you?


> your consciousness shuts down.

Consciousness is not a single isolated process, and describing a lull in activity as "shutting down" is incorrect.

> So, each morning a copy of you wakes up?

Obviously not. Many of these processes also happen in reverse when you are awake. So when I go to sleep I'm tired and in a different mood. When did that "copy" get made exactly?

> Or do you draw a line in the sand and say that is different?

You can easily ask "how much of my conscious state is controlled by my body's chemical state?" If that's a non trivial amount then simply copying and hoisting my memories into a different body is clearly going to have a non trivial impact. To the extent that it's easy to say "that would not be /me/."


> Consciousness is not a single isolated process, and describing a lull in activity as "shutting down" is incorrect.

but there is indeed a discontinuity in your consciousness when you sleep. This discontinuity cannot be distinguished from the discontinuity arising from a copy of your brain on first "restart".

So how do you reconcile this discontinuity?


>but there is indeed a discontinuity in your consciousness when you sleep. This discontinuity cannot be distinguished from the discontinuity arising from a copy of your brain on first "restart".

That amounts to saying "every consciousness discontinuity is of the same level".

I.e. that waking from sleep leaves you at the same distance from the "you" before sleeping as would a replica of you created by scanning your brain and uploading it to the cloud.

Let's put it this way: if we can co-exist, it's not me.


> but there is indeed a discontinuity in your consciousness when you sleep.

Yet you dream and can remember your dreams. Your dreams can be influenced by things occurring around you in your sleep. Some people do this on purpose and find that they are more reliably able to do it with practice. So how are you defining "discontinuity?" I don't see one at all.

> arising from a copy of your brain on first "restart".

You're rather certain of the semantics of something that does not exist and that you haven't described any sort of process for. What if you can initiate a copy while I'm awake? Am I required to be asleep or sedated into an artificial coma in order to make this copy? Why is that so? Are my short term memories excluded? Is my stomach equally full? Do I have the same wounds? Do I need to pee?

> So how do you reconcile this discontinuity?

How do you explain sleepwalking? Or narcolepsy, in your model of discontinuities?


>Yet you dream and can remember your dreams. Your dreams can be influenced by things occurring around you in your sleep. Some people do this on purpose and find that they are more reliably able to do it with practice. So how are you defining "discontinuity?" I don't see one at all.

Dreams aren't continuous throughout sleep, so this seems pretty irrelevant to whether there's a discontinuity in the first place, you're just saying there may be multiple during a given night's sleep.


> Dreams aren't continuous throughout sleep

What do you base this assertion on?

> so this seems pretty irrelevant to whether there's a discontinuity in the first place

Have you proven there is one? Or are you just saying "sleep is really weird so that must be a 'conscious discontinuity.'" I'm pointing out the obvious flaws in this logic.

> you're just saying there may be multiple during a given night's sleep.

No, you're saying that, and I'm asking you to define it.


>What do you base this assertion on?

It's a pretty well understood part of sleep research. Sleep has different phases; only REM, the lightest part of sleep, is associated with dreaming. It also matches up with my experience of dreams and basically all the reports I've seen from others: there's not a continuous experience from falling asleep to dreaming. There's being in the process of falling asleep, then there's being in a dream, and the experience is discontinuous; the transition between wakefulness and dreaming is not experienced, nor is the transition between different dreams.

(I've also personally experienced a very sharp discontinuity around sleep, once or twice, where I have literally 'blinked' while looking out my bedroom window and seen it go from dark to bright light, experiencing nothing of the hours in between. Though I suppose in many ways this is a much more continuous conscious experience than I normally have when sleeping, where waking up feels much more disjoint from falling asleep)



Ah, TIL, and fair point then, though I think there's still not much support for continuous consciousness throughout sleep.


I mean, personally, I see continuity as mostly being about the hardware, with the process running on it being secondary at best and disjointed, though not completely irrelevant.

So, funnily, I'm "perpendicular" to the issue of whether continuous consciousness is needed -- I think the only relevant question is following the original structure and processes every "tick" of the universe to see what happened.

E.g., to me, if the structure is wholly or nearly-wholly preserved, "teleporting" by the mechanism of particles having an uncertain position is non-destructive and retains continuity; stasis is the same; becoming part of a hivemind is the same. Though leaving one, or being quickly ship-of-Theseus'd, would likely impair continuity - but less in the manner of "death" and more by making the question meaningless, because two new entities are born descended from the same source, akin to how who we were as children never dies, only undergoes a process of growth/change/refinement until it's something not only different but significantly more than the "original".

I think a very interesting line of thought is actually whether severe brain damage essentially kills continuity / "the original", in the context of how we can absolutely disable or impair parts of the brain with magnetism or drugs but then have the hardware return to original function along with apparent return of the original process. It actually feels much closer to the self-evidently wrong assertion that any mere identical copy would equally be the original, suggesting to me that damaging the original too far past a point may actively kill that thread of consciousness in a way I wouldn't consider to be the case for sleep / stasis / joining-but-not-leaving a hivemind / etc.


> This discontinuity cannot be distinguished

It can be, though. I don’t see how the discontinuity bit is the defining factor.

Regardless, equating humans with digital computers and using analogies like that doesn't make a lot of sense to begin with.


I think the difference here is that we share our bodies with the copy and share our memories. The difference between us at night and us in the morning is small. A copy in the cloud doesn't pass memories on to us when it sleeps, and its experience is very different. It's digital and could technically have a billion instances with no experience sync between them. Which one is 'you' then?

The best way to go about this, IMO, is nanobots first augmenting the brain, connecting it to the cloud, then replacing the brain slowly and introducing consciousness syncing when online. Replace a little every night, and then it would still be you, IMO, since you're replicating most closely what the brain already does.


There’s a difference when there could conceivably be two of you.

There will never be two of me in my bed when I wake up, so I’m the same.

If my mind could be copied and embedded in a robot or whatever, then there are two of me, each now diverging. That other one is not me, as far as I, the other, am concerned.


You'll need to be more clear in your last sentence.

If you are cloned while you are sleeping, and both the original and the copy are moved into different rooms before you wake up, both of you will think you are the real you, and both will think the other is not. So, who is correct?


You're assuming that there exists such a cloning process, and in doing so, silently ascribing properties to that process that completely change the outcome.

You're in effect saying "Assume there is a cloning process that is instant and creates a complete copy of you, both mentally and physically," and sure, in that theoretical world, you can talk about there being no way to know which was you, the clone or the original.

But IRL, we have no way of knowing what the properties of such a cloning process would be - maybe it's impossible to create a cloning process such that mentally, it is a complete copy. Maybe you can always tell whether you're the original or the clone. Similarly, maybe we simply cannot create a digital copy of a mind that cannot immediately tell it is the copy. We have literally no way of knowing.

So instead of living in the philosophical realm, we can have a lot more productive conversation in the practical one. Practically, it strongly appears to me that I have continuity every time I wake up, and practically, if you copied my mind and ran it on a computer, that mind would not be the "me" that appears to experience continuity. Practically, it would bring me no comfort to know that this other mind that used to be a copy of my own continues onwards while I die, no more so than it would bring me comfort to know a twin of mine continues onwards.


For me, the relationship is non-transitive.

If you tell me this will happen tonight, I would right now call both "me". Both of them after the event would agree that I am them. Both of them would agree they are different people to each other, even if they say so in perfect stereo unison.


Cue the "We are Bob" series, aka Bobiverse, by Dennis E. Taylor!


Was thinking more like John Crichton on Farscape, as Bob's copies diverge substantially and immediately, whereas John and his duplicate spend hours repeatedly drawing at rock paper scissors.


This entire thread just has me wanting to watch The Prestige one more time.


>If you are cloned while you are sleeping, and both the original and the copy are moved into different rooms before you wake up, both of you will think you are the real you, and both will think the other is not. So, who is correct?

Obviously the original one is.

Whether the clone also thinks that it's me doesn't make it any more so, just as a paranoid person thinking they were Napoleon wouldn't make them Napoleon.


Yes, but we're suggesting a clean mental break, in some senses, from your cognition's POV. Both of you would be the real you, except to each other. The Hermetic tradition has some texts that delve into how God cannot be contained within the universe because he exists outside of it. In some senses, it would be downright insane to ponder the ramifications of this.

After the first 'Clone Settlement' and the pains of splitting someone's belongings between two copies of the same person, it would have to be made illegal. No other way around it.


Which one is the original? Say we aren't making a computer copy but a duplicate of your body. And we might have messed up the labels.

How do you tell which one is the original? Atoms don't have labels. But an atom-level copy isn't possible anyway. Say we made the scan at a large enough scale that it has fidelity but isn't identical.


>Which one is the original? Say we aren't making a computer copy but a duplicate of your body. And we might have messed up the labels.

Reality doesn't care about the labels. One of the two pre-existed, which is the original.

>How do you tell which one is the original? Atoms don't have labels.

That's a problem regarding our knowledge. There's still an original, regardless of whether we know which one it is.

There's an original regardless even of whether the original has a way to prove to themselves that they are (as opposed to simply believing they are, which the clone would believe too).


from my own perspective it would be impossible to tell given the parameters of this thought experiment, but any outside observer who witnessed the cloning process and the staging of the second bedroom would know.

Moreover, if randomly one of the two dies and the other survives then either the original or the clone experiences death. So from the point of view of the original before the experiment, he has a 50% chance of dying. If we change the experiment to say that he has a 100% chance of dying but the clone has a 100% chance of surviving, the original would not agree to the experiment unless he was suicidal; obviously he has no reason to care that a different person will live on assuming his identity after he dies.


It makes no difference. If it’s possible for the two versions to meet (they exist simultaneously) then what you’re describing is simple deceit.

A world where two of me exist is fundamentally and irreconcilably different than a world where I simply go to bed and wake up each day. It makes no difference if I personally am aware.


it's almost as if your thought patterns and language are a product of your experience of being the only copy of you...

if having copies of you walking around was a normal thing, our language and expectations would reflect that.

right now if another copy of you comes by and claims your wife and your house, you'd be miffed. but the whole concept of one of you owning the house is just an artefact of how individuality and possessions happen to be right now. it doesn't have to be forever. there was a time you could steal a picture and have the only copy - now you can save a copy on your phone and you don't deprive another of theirs. some day people and stuff could behave the same.


Hopefully IP and copyright laws will evolve in parallel to prevent or highly penalize stuff like that.


I imagine the poor will want their mind state covered by copyright/intellectual property laws, and the rich will ignore that if it can help them amass more wealth - just like LLM and diffusion model training was bootstrapped off the back of creatives without payment. i.e. the law doesn't apply if you're rich enough.


> just like LLM and diffusion model training was bootstrapped off the back of creatives without payment. i.e. the law doesn't apply if you're rich enough.

Or search engines.

I don't know how the legal cases will go, all I can say is that my suspicion is that, like search, such AI are simply too useful to be outlawed… but my armchair opinions are almost as far as possible from a guarantee, and even search had to change in response to copyright infringement laws.


>if having copies of you walking around was a normal thing, our language and expectations would reflect that.

Yes, and if the law of excluded middle didn't hold, then all kind of magical things would happen too.

But it does, and they don't.


None of that is related to my point.

A world where I get cloned and am unaware of it is distinctly divergent from a world where there is only one of me each time I wake up, regardless of how society handles it.


yes, a world where there are two of you is different from a world where there's one of you. I feel like this is something Aristotle must have covered, but it is important to point it out!


Based on this entire thread it did seem necessary to state.


Clearly, otherwise you would not have stated it.


> So, who is correct?

The one that wasn’t the clone.


Irrelevant; cloning doesn't happen, and probably never will.


I draw the line in the sand and say that is different because sleep has been working "forever", while the quote in the parent comment describes a process that not only doesn't exist (and therefore doesn't work) now, but that, even if it ever does work, we won't know whether it's the same as the daily process that happens when we sleep.

Otherwise I actually agree with your point ("every night another little death", as the Sonic Youth song says): I'm different every morning. Usually, the differences are subtle. Over the time span of a lifetime though, I've been through enough changes that I often say ghosts of former selves share this body, and what "I" am now will probably be another ghost by the time the biological substrate dies and takes whatever "I" will be then, along with all the ghosts, with it (in my case we'll go up in smoke since I'll be cremated).


The river I stand in is not the river I step in. (1)

(1) https://brandondonnelly.com/this-river-i-step-in-is-not-the-...


I do wonder sometimes if each period of being awake is the entire subjective experience of a sequence of independent conscious entities. Like I woke up today with the memory of yesterday, but possibly that version of me that had the subjective experience of being was an ephemeral phenomenon that ceased to exist when I fell asleep, just as I will tonight.


I raised a similar objection to the GP comment many years ago, and someone made the same response to me. I wonder if I'm alone in that instead of making me feel more optimistic about mind transfer, it just made me slightly afraid to go to sleep.


Same actually; if I follow the logic, the conclusion is pretty grim. I've come up with a silver lining: embrace the change. All life must change to survive and thrive. To wake up the same every time, to never change no matter what, to never learn - that is true horror to me.


You're absolutely not. There are two big differences as I see it - firstly, it's not clearly impossible for the instance to survive sleep. The brain exists continuously throughout. There is constant electrical activity. We don't know for sure that our awareness is "transmitted through" to when we wake up, but we can imagine ways it's possible.

In contrast, of course, transporters and mind uploads obviously kill us.

Secondly, we can't do anything about sleep. No matter how worrying it is. There's just no choice other than to accept it.

The first argument is nicer than the second, but, frankly, if I could avoid sleep I would and I really hope I never need general anesthesia.


Each morning I would not be a copy.

This is provable - only one instance can wake up, whereas with the other method, it’s conceivable that two or more copies (instances?) could wake up.


Thanks for feeding my existential dread. For that matter, the you of this moment is constantly being annihilated and replaced by a slightly different clone, in an illusory life that resembles stop-frame animation. And don't ask me about the me from yesterday - the one who didn't bother to do any of the things I am stuck with doing today - or the me from tomorrow who will likely squander everything I've worked so hard for. Neither of them is to be trusted.

On the other hand, I do seem to have a vast store of dream memories; when I think about them it seems like living some kind of second life, usually while fast asleep. Sleeping and waking selves are in agreement that being asleep is generally more pleasant than being awake, though being awake seems to have better continuity.


I don't think it's quite the arbitrary distinction you're making it out to be. If you were cloned every time you went to sleep, and those clones awoke with the same memories you had before going to bed, would you consider all of those clones' consciousnesses to be part of your own? Surely not, because you're not experiencing them; you're only experiencing your own. This means that the "you", in this case, is the self-reflective portion of the collection of interpretations of sensory stimulus and memories, whereas consciousness is that collection as a whole. That's why these clones can't possibly be "you": if there's no sensory stimulus being experienced or interpreted by your brain, then there is no consciousness and there certainly is no "you". What this ends up meaning is that "you" are necessarily bound to the mechanical component that interprets stimuli. That component can be adjusted over time, but the process must be gradual enough that your brain can still recognize those intermediary interpretations as "you".

So, when you state that robo brain is still "you", I have to disagree. While robo-brain can still believe it's "you", it can't possibly be "you", as "you" could potentially still exist in the original biomechanical device, which isn't interpreting the new body's stimuli, and thus not building a sensory profile based on the stimulus. Even if there was a transfer that resulted in the original device's (i.e. your body) death, the new body still couldn't be considered "you" as there isn't anything physically different about the two scenarios for the body.

Of course, if you consider every clone to be an extension of your consciousness, then you could argue all this. But at that point I think we'd have vastly different interpretations on what consciousness and "you" are. You could also argue that there is something physically different about the two scenarios presented in the robo-brain example, but the only way I can imagine a sincere argument for that is if you told me it was due to the transference of some soul-like entity. But, of course, if you were to tell me that, I would simply disagree.



> Every time you go to sleep your consciousness shuts down.

What consciousness? Yes, I am aware that "sleep" means that a person becomes unconscious (trivially demonstrated), but that is not the same thing as "consciousness shutting down". Supposing there is such a thing as consciousness, it doesn't seem to be anything that is suspended by the sleep process. And that's also a big "if", because consciousness is just woo-woo nonsense, talked about but never defined, by people who haven't quite come to terms with the idea that humans don't have some immaterial soul.


Consciousness has, I'm told, 40 identifiable meanings. Makes it hard to get to grips with it.

For example, I sometimes have direct awareness of the nature of my consciousness changing with tiredness. Sometimes that has been what I can only describe as if my homunculus has switched to autopilot and is now just watching a cinema of my senses.

But that's probably a terrible description if you've never experienced that yourself.


If I'm not conscious, it's rather difficult to explain what I'm experiencing right now.


You're not really "experiencing" anything in the sense you mean. For a few thousand years, your species has communicated to others of its kind with a peculiar evolved language that causes a sort of narrative confusion.


I believe, if we could be copied to the cloud, that “me” is much more than neurons, and affected and defined quite a lot by the particular physical body we’re housed in. You can say losing an arm doesn’t change who you are, but it definitely does at some levels. Changes in health affect personality, mood, outlook, hope, enjoyment, etc, etc - that would all change when copied.


A gap in awareness is definitely not the same as creating a new copy of your consciousness and annihilating the old one.

Although, I'll grant that the subjective experience may be fairly similar. If you were to theoretically be copied, your copy would probably feel quite continuous with the original.


>Every time you go to sleep your consciousness shuts down.

Sure, but I'm ok with sidestepping that totally pedantic concern and still considering me the same person, like I do every single day.

Mere point-in-time-copying to another substrate, though, not so much.


Every nanosecond the set of atoms constituting you changes, each of those are a different you.


There's a difference between "I have changed" and "I have been copied." This seems obvious enough to me that I don't question it, but is it not obvious to everyone?


Copies are impossible because every moment is change with bounded but non-zero uncertainty.


The fact that "you are different" does not mean a "different you" now exists, though. Change of this specific kind is already intrinsically integrated into your existence and identity.


Every moment of consciousness is different. It is a different you in that sense. What is the strand that keeps it the same ‘you’?


Physical continuity, in the sense that there is no break so significant that at any point one could say, "Before this, smeej0, and after this, smeej1." Unlike the ship of the famous thought experiment, there is and can be no reconstructing of the replaced parts such that smeej0 could ever be reconstituted, so the analogy breaks down in any attempt to apply it to humans.

Change is essential to human nature. A human exists--and the same human exists--from fertilization to death. Fertilization is a change before which we can say "not smeej" and after which, "smeej." And death is also a change before which we can say "smeej" and after which, "not smeej." There will be vastly, vastly more cells at death than there were at fertilization (assuming smeej made it out of the womb and to an observable state), and the original fertilized cell will no longer exist, but there is no other point along the existence of smeej at which a change happens of such significance as to alter whether or not the same thing is present.


Honestly I'd like to have my mind copied, but may only be 'simulated' once the current me is dead. From my perspective I'll be dead and there's no changing that, but from the clone's perspective, life just goes on.


True quantum-scale copies may be impossible, but that resolution may not be necessary to fully copy a person.


Not if “you” is created by emergence.


Because you are more than atoms. You are a pattern. A pattern plays out over time. Like music. Music has a physical component, of course, but music is enjoyable because it's a pattern over time.


Which is the "real" Moonlight Sonata: Beethoven's original sheet music, the first performance, the photons carrying some radio broadcast, some physical vinyl or CD master copy, a lossless encoded digital copy?

The thing about a copy, it reproduces that pattern. The question is, how good does a copy need to be to retain the important features? For now, I suspect that any attempt to actually scan a brain would be a cargo-cult copy of a mind… but I don't know that for sure, I'm just going with what I hear from the occasional neuroscientist who bothers to comment on misconceptions in this field.


switching a sesame seed on my burger bun is not the same thing as scanning it and uploading the file to the cloud.


No it doesn’t.

You’re not unconscious while you’re sleeping.


Consciousness is probably an illusion anyway.


An illusion to whom?


That language implies a self does not mean a self exists in reality.


This makes sense if you believe in a non-materialist self, like a soul. That model wouldn't be falsifiable: we can't measure it, so it can have whatever properties are convenient. You could then rule out the possibility that your soul would inhabit the copied 'you' by fiat.

Scientifically, though: of course an exact copy of you is you. If you don't believe in souls but still feel like this is not the case, that indicates your model of 'you' is incorrect.


It's not an issue of the soul, if you make a copy there are two people now, which are clearly physically separate, and think independently.

Even if you somehow made a perfect instant copy, they'll start drifting apart, as they'll be experiencing different things.


this ^

Soul or not, it wouldn't be me. Imagine that I was alive and existed next to the cloud copy of me. It would be 2 different minds with a common core: a database restore to a new cluster that immediately became different afterward, because our two experiences would drift. Funny enough, the db term for a cluster in which two writable primary instances have received differing sets of writes, containing different sets of data, is "split-brain".

So if I exist and have my own thoughts, and the cloud copy exists and has its own thoughts, and we are taking in reality differently from each other, we are 2 distinct minds. Now imagine I died. What has that changed for the copy of me? Nothing. It's still a different mind than mine. My mind is dead. I'm dead. A copy of me existing after I die does not change the fact that I died. It does not help me, literally me, live forever.
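
A minimal sketch of that split-brain analogy, with made-up data: fork one state into two fully independent replicas, apply different "writes" to each, and they diverge immediately.

    import copy

    original = {"memories": ["childhood", "first job"], "mood": "curious"}

    # The "upload": a point-in-time snapshot, independent of the original.
    replica = copy.deepcopy(original)

    # Each writable "primary" now receives its own writes...
    original["memories"].append("watched the upload finish")
    replica["memories"].append("woke up in the cloud")

    # ...and the two diverge at once: the database's "split-brain".
    assert original != replica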


What if you have done the procedure and afterwards wake up in the artificial environment? You'd be aware that there's a biological remnant of the pre-procedure you that will still experience things you won't, including dying and death. But he's a separate entity, and you are glad it's not you that is about to die.

Basically, if you wake up after the cloning as your old biological self, then you just drew the short straw. It's up to you whether it helps that there's another you that will continue whatever you were doing without the nuisance of dying. Some people might find calm in knowing they can freely die and nothing of value will be lost.

What about Star Trek transporters? Would you refuse to use them because you get disintegrated and what's created at the destination is just a copy of you, not you?


Presumably it’s the version of you that’s destined to become the “biological remnant” that’s making the decision to be cloned/copied?

So spending money or participating in that procedure would be entirely irrational from their perspective. What could they ever gain from that? They’d still be dead regardless of what the copy thinks or feels once the procedure happens.

Spending the same resources to prolong your existence (even in a suboptimal state) seems literally infinite times more rational.


> spending money or participating in that procedure would be entirely irrational from their perspective.

Roughly as irrational as having children. Humans are not rational beings.

> What could they ever gain from that?

Continuation of what they were trying to achieve in life.

> Spending the same resources to prolong your existence (even in a suboptimal state) seems literally infinite times more rational.

That entirely depends on what you value. You may value living on in your suboptimal state less than a chance at making something happen that you wanted to see happen but, due to bodily limitations, are no longer able to progress on.


What is "me" in this case?


To define "me" in this context basically requires solving the Hard Problem of Consciousness [0].

To an external observer, all life, including humans, is nothing more than a complex set of electrical and chemical reactions, and in theory, with sufficient technology, you could take a perfect subatomic snapshot of someone's brain and predict exactly how they would react to any stimuli.
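
A sketch of what that in-principle claim amounts to, with a toy deterministic state machine standing in for a brain; the react function here is hypothetical, not a model of neurons.

    def react(state, stimulus):
        # Toy "brain": next state and response are pure functions of the inputs.
        next_state = (state * 31 + stimulus) % 1000
        response = "approach" if next_state % 2 == 0 else "avoid"
        return next_state, response

    def run(snapshot, stimuli):
        state, responses = snapshot, []
        for s in stimuli:
            state, r = react(state, s)
            responses.append(r)
        return responses

    # Two runs from the same "perfect snapshot" yield identical predictions.
    assert run(42, [3, 1, 4, 1, 5]) == run(42, [3, 1, 4, 1, 5])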

But then there are the first-person perspective experiences. We have words for it like "consciousness" or "mind", but no scientific explanation on how it works and why it shuts off when we go to sleep. The religious will explain it with the concept of a soul.

I don't think we can ever solve this Hard Problem. But I can claim to believe that if someone did snapshot my brain and recreate a perfect living replica in a new body, that I would not experience what that replica experiences. It might have a completely new consciousness behind it, or it could be a P-Zombie [1].

Maybe I'm actually the only conscious being in the entire universe and everyone else is a P-Zombie. Somehow, to me, that makes more sense than consciousness appearing out of nowhere.

[0] https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

[1] https://en.wikipedia.org/wiki/Philosophical_zombie


> Maybe I'm actually the only conscious being in the entire universe and everyone else is a P-Zombie. Somehow, to me, that makes more sense than consciousness appearing out of nowhere.

That's because you live in Plato's cave, we all do.


Not necessarily. Some of us may have torn the illusory veils of Maya, and gleefully surf the waves of Indra's ocean from island to island, tasting all the deliciously different somas.


> Maybe I'm actually the only conscious being in the entire universe...

Sounds rather solipsistic.

In that case you'd probably work on your Zen to make it more to your liking?

If only for more interesting NPCs at least :)


It is the same person but a different human. Even if they drift apart, by all means their own personal identity is the same, the same as if we split the universe, with one human taking path 1 and, in the other universe, the human taking path 2.

The moment they change, they are not the same person, just as I am not the same person I was when I started writing this reply, and yet, I still am "me", irrespective of the drift.


If you were only one person (big if), you would also experience things and drift. Would you stop being you at that point?


The person who is you exists independently of any copy of yourself. While a replica of you would exist, there would still be the original you, manifest in your brain, who would live an independent life and die at some point without ever “resuming” in another body. This is similar to how you can never time-travel yourself, even through relativity. Your perception of time will be the same no matter what relativistic effects occur outside of your frame of reference. You will live your life span and die.

Your replica however would have a different experience and be able to draw on your memories etc. But you yourself will in fact live a natural life with no shared experience with your replica and will experience death no matter what. It’ll be little solace to you that a copy got to live your life after your passing.


> Scientifically, though: of course an exact copy of you is you. If you don't believe in souls but still feel like this is not the case, that indicates your model of 'you' is incorrect.

How is that "scientifically" an "of course"?

How is it more "an exact copy of you is you" than the alternative claim, "an exact copy of you is 'you (1)'" (to borrow from file manager nomenclature).

The trivial example of that seems to be that if you make an exact copy, put it in a different room, and then something happens to the exact copy, that thing does not happen simultaneously to the original.


An identical copy necessarily cannot be measured in a way that distinguishes it from the original. So "scientifically" because we're restricting the space to measurements of the physical world, and "of course" because the conclusion falls almost tautologically out of the definitions, no experiment needed. How can the thing scientifically not be you if it is not materially different from you?

To your example - I am not sure which of two points is being made, so I'll address both. I'm not saying that everything that happens to entity 1 also happens to entity 2, just that both are you. Two things can both be apples, even though biting one leaves the other intact. And if something happening to you makes you not 'you' anymore, 'you' isn't really a coherent concept across even a fraction of a second; you'd cease to exist and be replaced multiple times in that time.


In reality, no two things are the same. When we categorize things in any way, we make an error in order to achieve some practical goal.


> cannot be measured

It certainly can be measured from the perspective of the “original”, i.e. the person presumably making the decision to be copied or not.


Scientifically though, an exact copy is likely to not be possible due to the quantum uncertainty principle.

And I'm not sure that's the only issue even for die-hard materialists : think for instance about all the problems that come from multiple exact copies & things like ownership, relationships...


There's no need to believe in an immaterial soul to think that a copy is a different 'you'.

It is enough to understand that having an independent "command & control" structure (C&C, for lack of a better term) is by definition 'you'. C&C means that the individual can perform a self-contained sense-think-act cycle.

We can infer this from the following:

1. Identical twins, despite having identical genetic structures, have separate C&C, and therefore qualify as separate persons.

2. Two siamese twins conjoined by the body (including parts of the head) have two C&Cs, and therefore qualify as two persons (with varying levels of bodily control per person).

3. An individual with additional parts of the body (e.g., in a special case of being conjoined with a twin) has only one C&C and therefore qualifies as one person.

4. Clones are a special case of identical twins, and still have separate C&Cs, therefore qualifying as two persons.

5. A person in a coma or with another condition nonetheless qualifies as a person with some sense-think-act autonomy, despite being dependent on someone else for continued bodily functions.

6. An individual with a transplant (e.g., heart) or an extension (e.g., pacemaker) is nonetheless a person because of their consistent C&C.

7. An individual with extreme memory loss remains a person with C&C (in brain, body, genetics for the most part).

Any other special but naturally occurring cases (e.g., individuals with two brains, individuals with completely separated hemispheres, conjoined twins sharing a brain) would require either that we:

a. understand how brains and bodies work (and therefore make a more quantified measure of how much sense-think-act autonomy qualifies as personhood); or

b. decide based on a heuristic (1 brain = 1 person) or principle (protection of bodily autonomy, i.e., 1 body = 1 person).

But none of these need you to believe in a soul in order to think that a digital clone is not 'you'. Unless, of course, you can prove that an individual either:

1. can sense-think-act with both physical and digital 'bodies' simultaneously and/or sequentially, which share an equivalent experience (i.e., central C&C coordinating distribution or maintaining strict consistency across bodies);

2. can sense-think-act with either physical or digital 'body' at will, upon establishing a 'connection' and successful transfer of state (i.e., remote access); or

3. can transfer whatever is necessary for C&C to another 'body' (e.g., brain transplant), and demonstrate beyond reasonable doubt that their C&C signatures in the new body match the ones in the old one.


I don't believe in a soul in the sense of a "ghost in a machine", since it introduces so many problems. Nonetheless, it's pretty clear that there is an immaterial component to thought, and therefore to our existence, for various reasons. Examples:

1. If we are entirely material, we can't know anything. For example, if you're thinking about a tree, there must be something about you that has 'grasped' what it is to be a tree. This is impossible if your idea of a tree is nothing more than a bunch of neurons firing in a given way. There is no correspondence between neurons firing and a tree. So there must be more to your thought than neurons firing (or any other material process).

2. Similarly, if we're entirely material, we can't reason. A reasonable thought differs from an unreasonable thought only by its content. It can't differ in this way by its material components in the brain. Therefore, if we hold that some thoughts are wrong and others are right, we must conclude that there is more to our thought than the material.


How sure are you about (1)?

Is there no chance that “thinking” (or your term “grasping”) is not simply an emergent property of certain matter+states? A computer with a machine learning process seems to be able to “grasp” what it is to be tree (as opposed to a cat or a crosswalk).

You seem to be engaging in difficult philosophical concepts and jumping over important steps (like rigorous definitions or showing why something “must be”).


> You seem to be engaging in difficult philosophical concepts and jumping over important steps (like rigorous definitions or showing why something “must be”).

Yeah, sorry, I bashed that out too quickly. I'll try to slow down a bit, though I can't promise a full treatise :)

For a thought to be about reality, there must be something about that thought that makes it about reality. How this might be the case is one thing, but the fact that it is the case is nonetheless necessary. And for the thought to be about the reality, there must in one way or another be similarity between thought and reality. Again, the 'how' may be extremely complex, but the existence of the similarity seems necessary. To put it in opposite terms, if a thought is not similar to reality, then it is not about reality, and therefore can't constitute knowledge about reality. Do you agree thus far? If so, let that be premise 1.

I'm obviously talking about the concepts we use to think about reality, not about the words we use to describe it.

I will assume that we agree that thoughts can be about reality (even though they're obviously sometimes not) as premise 2.

So, given our two premises, suppose Bob is thinking about a tree. With materialism, where is the likeness between Bob's mind and the tree? It cannot be the neurons or the physical structure of his brain or body, because these things don't have similarity with a tree. The atomic structure of a brain, in any state, is different from a tree's. That much is clear. And likewise, a tree doesn't somehow appear in Bob's brain. Given our premises, this is a problem for materialism. If the likeness is not physical, and it clearly isn't, then what is it?

We can discuss the idea of a tree as 'emergent' from the matter&state. But honestly, I struggle with this concept. Every time I've heard the word used, it seems to be a way of restating the problem rather than solving it. Perhaps you could explain in a bit more detail how this overcomes the 'similarity' problem I've outlined above? ISTM you still get the problem that a bunch of neurons is not a tree, regardless of their arrangements. Either the neurons in some sense 'cause' you to think about a tree, in which case you need to ascribe to the neurons an ability to cause immaterial effects, which again seems a problem; or it is simply a way of saying "we don't know how it works". Again, so ISTM.

On computers: very briefly, a computer doesn't 'know' anything about a tree, or anything else. A 'tree' on a computer is a bunch of magnetic states on a hard drive, or transistors in particular states, or what have you. The meaning of a computer's output (that is, the link between the computer and reality) exists only in the mind of a human being. An LLM's text about a tree is no different in principle from the binary for the word 'tree' on an hdd, or a jpeg, or whatever. An LLM gives the illusion of knowing, but no more than that. Again, I realize this is a pretty summary treatment, but I think the tendency to describe the mind as a computer is misguided for this reason. You face exactly the same problem with the computer as you do with the mind. For my part, I simply reject premise 2 as far as a computer is concerned.

Anyway, very simply, if materialism is true, a thought cannot be about reality, and it cannot have a similarity with reality. Therefore, we can't know anything about reality. But we can know about reality (premise 2), therefore materialism is false.


There are a number of points where this thesis needs considerable work.

Perhaps the first is that you appear to be ambivalent about what you have in mind by 'similarity'. When you first discuss it, you say "...there must in one way or another be similarity between thought and reality. Again, the 'how' may be extremely complex...", but then you go on to say "...where is the likeness between Bob's mind and the tree? It cannot be the neurons or the physical structure of his brain or body, because these things don't have similarity with a tree..." - so this similarity could be extremely complex, yet you dismiss the possibility of it being anything physical without offering the slightest justification. You might find that convincing, but you have not said anything that presents a challenge to anyone skeptical of this claim.

You do not help your case by continuing "The atomic structure of a brain, in any state, is different from a tree's." This suggests that while you recognize that this similarity could be extremely complex, you do not completely grasp how complex a similarity between a physiological structure and something external to the body could be.

This impression is reinforced by your comments on emergence, which you appear to want to dismiss on the grounds that you struggle with it. Again, this is unlikely to be persuasive to people who do get it. For what it's worth, here's my go-to example of emergence: in the theory of evolution, fitness is a key concept, yet you will not find an organ of fitness by dissection - it is an emergent property of an organism's physiology in relation to its environment.

Much of the rest of your comment is in this vein, and it is rather like saying "how can music be recorded on a compact disk? CD burners use light, not sound." There is no point in belaboring this further, so let us instead return to your starting point - "for a thought to be about reality, there must be something about that thought that makes it about reality." A physicalist could respond by suggesting that there is a chain of physical causality running from the outside world to brain states via sense organs. When we recognize that sense organs do not capture complete information, and also that brain states may be influenced by previous brain states, this view accommodates the fact that we can have incorrect thoughts about reality. How this works is opaque to us at our current state of knowledge, but that does not mean that it does not happen - in fact, it would be quite hard for an anti-physicalist to deny that it does.

The bottom line here is that the correspondence between mental states and reality is not the problem for physicalism that you think it is.


Thanks for the response.

I will return to what I meant by 'complex' in a minute. But nothing in your response undermines my point, which is that similarity between thought and object-of-thought is made impossible by materialism. It is true that "a chain of physical causality [runs] from the outside world to brain states via sense organs", and it is also true that brain states will be different based on the causality imparted by the world, as well as previous brain states, genetics, drugs, etc. But causality is not likeness. A brain-state caused by a tree is not of itself the same as a thought about a tree, any more than something else caused by a tree (like the sound of rustling in the wind) is the same as a thought about a tree. The latter needs likeness, not just causality.

> yet you dismiss the possibility of it being anything physical without offering the slightest justification.

The only way there can be likeness if materialism is true is physical similarity. The brain would have to be physically like a tree when Bob had the thought of a tree for there to be likeness. If we take materialism to mean that everything can be reduced to matter, then there cannot be any likeness that is not of matter. And this means a physical/material/atomic likeness is the only possibility. Yet we know there is no such likeness.

The concept of emergence doesn't overcome this problem. It must either produce a material likeness, which we know doesn't happen, or it must cause some kind of likeness which is real-but-immaterial, which is a problem for materialism.

The 'complexity' I referred to was in the context of how we know there is likeness between thought and reality, versus the fact that we know there is said likeness. We can know that the likeness exists without knowing how it exists, or being able to describe it in detail. The point was that no matter how complex the mind/brain/process may be, we know likeness must exist in one way or another, because without likeness there is no knowledge of reality. The complexity of the process, whatever the process is, is irrelevant to the need for likeness. That was my point in raising complexity; I should have been clearer. Complexity does not change the fact that the only likeness for materialism can be a physical likeness.

The CD is not a valid parallel. There is a chain of causality between the laser's interaction with the disc, and the sounds produced by vibrating speakers. A chain of causality is not a likeness, as I stated above. Further, a CD has no knowledge of its contents, so there is no need for likeness (physical or otherwise) between its imprint and the sounds it causes. We have knowledge of the world, so there is need for likeness.

Anyway, the nub of the matter is that you need to show how there can be likeness that is not physical likeness if materialism is true, or you need to show that likeness is not necessary for a thought to be about reality.


Why do you have to show the likeness? There’s a strict level of fidelity being imposed here.

The brain has to only represent the sensations it encodes.

The tree and the brain are both material, so the phenomenon of the brain encoding it will also be material.

There’s no need for a soul to represent a further reality.

Even if a soul exists, wouldn’t that result in the same issue again?

The argument sounds like we are saying that there has to be a tiny tree in our heads for reality to be perceived. Since there is no tiny tree, we have to perceive it in our immaterial soul. Which just bumps the issue to the immaterial.

But then how is the immaterial perceiving the material?


I can see from your response that there are a number of points that I need to clarify.

Firstly, I am not at all surprised that you, personally, do not feel that your position has been undermined, and I don't suppose you ever will, but one of the themes of my response is that neither your intuition, nor that of like-minded people, is particularly persuasive; for that, you need something that would give a skeptic persuasive reasons for abandoning their skepticism.

From your statement "causality is not likeness", I see that I need to clarify the point I made in my penultimate paragraph. To do so, we need to go back to your second post in this thread [1], and in particular, to the paragraph beginning "for a thought to be about reality, there must be something about that thought that makes it about reality... And for the thought to be about the reality, there must in one way or another be similarity between thought and reality."

The point in my penultimate paragraph is this: If we suppose that there is a causal flow from the real world to brain states via the sense organs, and adopt the premise that (changes in) mental states are caused by (changes in) brain states, then we have at least the outlines of an answer to your tacit initial question (what is it that makes a thought one about reality?) that does not invoke or depend on any concept of likeness or similarity. In what I quoted above, you certainly seem to be saying that the only thing that could make a thought about reality is that it has a similarity (in some nebulous sense, at least) to reality, but here we have another way for a thought to be about reality that does not depend on likeness in any sense.

At this point, I can imagine someone saying "... but this causal chain could well result in brain states that are, in some sense, similar to reality." I don't dispute this, as I am not arguing against the idea that such a similarity can be seen; I am arguing that a link between reality and thoughts can plausibly be postulated without an appeal to similarity or likeness in any sense, let alone an appeal to a likeness which, apparently purely on the basis of intuition, is asserted to be incapable of being physical in any sense.

In the light of the above, my misunderstanding of what you were calling complex seems moot - an understanding of causality may potentially lead us to being able to say, with some specificity, in what way thoughts are similar to reality, so it is not inconsistent with the claim that there are no complex issues in seeing that there must be a similarity, even if it does not do anything to endorse that view. In the same way, it is consistent with your claim that the only likeness for materialism can be a physical likeness, without actually requiring it.

The above physicalist story has another thing going for it: it contains at least the outlines of a possible explanation of how thoughts are related to reality, while your story does not - as far as I can tell, it is confined to saying that whatever it is, it can't be physical. If you have anything affirmative to say about how thoughts come to resemble reality and what the similarity is, now would be a good time to present it.

Your response to my CD analogy does not need much attention, as it is just an analogy, but I see I did not make it clear what point I was aiming at, which is that while "how can it be that..." questions can be insightful when the person asking has a good grasp of what is going on, they can merely reflect ignorance when the questioner lacks that understanding - and when it comes to how the mind works, we are all ignorant to a considerable degree.

By now it should be clear that what you claim in your final paragraph is wrong: in order to show that your argument has failed to make its case, I need neither to show how there can be likeness that is not physical likeness if materialism is true, nor to show that likeness is not necessary for a thought to be about reality. On the contrary, you have chosen to make a strong claim - essentially that the mind cannot possibly be the result of physical processes - and to sustain that, you need more than arguments grounded in appeals to intuition about how things either must or cannot be. In particular, anything resembling 'so prove me wrong' would amount to burden-shifting, and while we are about it, the alternative to 'the mind cannot be a physical phenomenon' is not 'the mind must be a physical phenomenon', it is 'the mind might be a physical phenomenon' (something that I believe is probably true, but which I do not claim to know.)

[1] https://news.ycombinator.com/item?id=42312450


Very briefly, it's not my 'intuition' that a thought must be like reality for it to be about reality. It's a claim I hold to be grounded in reason. My reasoning may be right or wrong, but either way, it's not an intuition (or a feeling, or anything else of the sort).

Anyway, to the main point. You wrote:

> The point in my penultimate paragraph is this: If we suppose that there is a causal flow from the real world to brain states via the sense organs, and adopt the premise that (changes in) mental states are caused by (changes in) brain states, then we have at least the outlines of an answer to your tacit initial question (what is it that makes a thought one about reality?) that does not invoke or depend on any concept of likeness or similarity. In what I quoted above, you certainly seem to be saying that the only thing that could make a thought about reality is that it has a similarity (in some nebulous sense, at least) to reality, but here we have another way for a thought to be about reality that does not depend on likeness in any sense.

This clarifies our disagreement well. I will divide my response into two sections: 1. Why it is necessary for a thought to be like reality if a thought is to be about reality, 2. Why a causal chain cannot provide either likeness or a substitute for likeness.

1.

A thought can be right or wrong, but right-ness implies likeness to reality, while wrong-ness implies un-likeness. To illustrate: suppose Bob has a thought about a tree. Let's say it's "this oak tree is made of wood". Now suppose his friend Bill thinks "this oak tree is made of copper". I take it we can premise that Bob is right and Bill is wrong.

But for our premise to be true, their thoughts must be compared to the tree itself. There is no other way to judge their thoughts' right-ness or wrong-ness. For Bob's thought to be right, it must be like the tree (and the opposite for Bill's). Any conception of thought that lacks likeness makes this impossible. If we deny that both Bob's and Bill's thoughts are like the tree (which is necessary if we "do not invoke or depend on any concept of likeness or similarity"), neither Bob nor Bill can be right. The concept of right-ness is predicated on that of like-ness. If, therefore, you drop the concept of like-ness, you lose the possibility of right-ness. And in this situation, all thought becomes meaningless, including the thoughts we're having right now about the nature of the mind.

Therefore we can conclude that likeness is necessary for correctness, and therefore for thought to be about reality.

Be aware that my claim is very limited. I am not, right now, making any claim as to what this likeness may be like. I am simply saying that likeness in some sense of the word is necessary for rational thought. Hence I disagree that my claim is 'nebulous'; rather, it's limited.

2.

A causal chain does not rescue your denial of likeness from its implications. As I stated above, causal chains happen everywhere in nature ("the tree causes a rustling sound"). A brain state is one result of a particular causal chain, but there is nothing special about it on that account. There is nothing that gives it the possibility of being rational, or correct. More specifically, a causal chain cannot in itself be 'right' or 'wrong'. It is possible that a causal chain will cause Bob to think a tree is made of wood, and cause Bill to think it's made of copper. This is not enough for Bob to be right and Bill wrong, because the chain has no connection to correct-ness. It can't give us reason for saying "Bob's thoughts are like reality, and Bill's are not". A causal chain is not right or wrong, or rational in any sense. That someone has been caused to think something doesn't mean he has a reason for thinking something. So a causal chain cannot provide a substitute for likeness if we want to rescue the concept of rational thought. Whether or not a brain state causes a mental state is irrelevant to this fact, contrary to what you claim.

Nor can a chain provide physical likeness itself. The materialist view of the mind closes off any other way in which likeness can be achieved, because (if it's consistent) it postulates that brain states, ultimately reducible to fundamental particles, are all that objectively exist. You claim "an understanding of causality may potentially lead us to being able to say... in what way thoughts are similar to reality", but you offer no details. I deny this is so -- causality can't rescue your position; it can't make there be some like-ness that is somehow both material and yet does not involve arrangements of atoms that are similar, which we've already ruled out. I'm happy to hear details if you think otherwise.

To summarize:

1. Rational thought requires (the possibility of) likeness.

2. Materialism makes likeness impossible, because it reduces the mind to physical states, and a brain's physical state is not like the object of its thought [your arguments about causality do not gainsay this].

3. Therefore materialism makes rational thought impossible, and is therefore false.

So my claim way upthread, that 'there is an immaterial component to our thought, and therefore to our existence', must stand.


I am rather busy today, but don't worry - I have a comprehensive response in the works, and I will post it within a day - stay tuned, as they used to say...


Sure, no problem


It is good to see that we are more-or-less on the same page about what the issue is, but I can also see, from some of the things you say here, that I need to do a more thorough job in explaining myself. In the following, I will give a “Bill and Bob” example showing how I think chain-of-causality answers what I will call, for brevity, the central question (what is it that makes a thought be one about reality?), and I will make more clear how this view stands in relation to similarity.

I realized, reading your reply, that I had not made it clear that chain-of-causality is about information flow: how does information about reality shape our thoughts so that they conform to reality? It also gives an account of how and why this quite often does not work, leading to people thinking and believing falsehoods.

I also realized that its premises can be weakened, and as a consequence, it is not explicitly a materialist hypothesis. Previously, I wrote “If we suppose that there is a causal flow from the real world to brain states via the sense organs, and adopt the premise that (changes in) mental states are caused by (changes in) brain states, then we have at least the outlines of an answer to your tacit initial question” [i.e. what I called the central question above]. That second (emphasized) premise can be weakened to “changes in mental states are caused by sensory inputs and ongoing mental processes.” This modified premise seems more congenial to anti-materialists, given the considerable empirical evidence that this is in fact what happens.

Without further ado, here’s my Bill-and-Bob scenario. I have altered your original one to make Bill's mistake seem more plausible.

Bill and Bob are hiking a mountain trail when Bill sees a stone with glittering yellow inclusions beside the path. "We've found gold!" he exclaims, picking it up and showing it to Bob, but Bob takes it, examines it closely, and replies "it's just iron pyrite, Bill" - and he is right.

So what causal chains might lead up to this exchange? There are all sorts of possibilities, going back to the big bang, though it seems likely they will mostly be variations on a theme. We might pick up the story with humans discovering gold and naming it, something that was done independently by many different people in many different places, and the names they gave it were, for the most part, independent, related only by their all being names given by various people to this one substance (I am making an issue of naming here in order to use it in making a point later.)

The causal story continues with gold being widely seen as valuable on account of various properties, such as being nearly inert and having a density and abundance in the sweet spot for being a portable currency. At some point, it is discovered that another substance found in nature looks superficially like gold, but is not, and can sometimes be quickly distinguished from gold on account of its tendency to form characteristic cubic crystals.

Most of what Bob and Bill hold to be truths about gold has probably come to them via language: even though they have probably seen and held gold artifacts, it is likely that they had already been primed for those experiences by being told what gold is like, and it seems highly likely that this is how they came to know that it is valuable, as well as being how Bob came to know about iron pyrite and ways of distinguishing it from gold ore.

Note that having much of this chain being in the form of language is not a problem for either this causal story or for physicalism, as communication of information via language is itself widely and uncontroversially seen to be a causal process.

From this point, it is very easy to see how Bob's causal chain could lead to him making his true statement, while Bill's did not - here, it is just a matter of Bob's chain conveying more complete factual information. In other cases, it might involve things like Bill receiving false information and taking it to be credible, or Bob having a better memory or a greater bias towards skepticism than Bill.

Note that the causal chain of information flow does not stop at one’s sense organs: as postulated in my premises, prior mental states and ongoing or further mental processes are also involved in how the mental state changes in response to the sensory input. For example, if Bob is more skeptical than Bill, it might be a result of painful past experiences biasing him to be wary. Similarly, if someone receives some unwelcome information, and, as a result of their reaction to it, says something false, this is not something happening outside of the chain of causality. It is also one of at least five mechanisms by which false thoughts can arise - the others I have in mind are incorrect reasoning (including, but not limited to, motivated reasoning and rationalization in the psychological sense), reasoning from incomplete information (whether known to be incomplete or not), simple misunderstanding, and deliberate attempts to deceive (aka lying.)

Now let's move on to what this does and does not imply. In part 1 of your latest post you write "If we deny that both Bob's and Bill's thoughts are like the tree (which is necessary if we "do not invoke or depend on any concept of likeness or similarity"),..." The quote-within-the-quote here is taken from this passage: "If we suppose that there is a causal flow from the real world to brain states via the sense organs, and adopt the premise that (changes in) mental states are caused by (changes in) brain states, then we have at least the outlines of an answer to your tacit initial question (what is it that makes a thought one about reality?) that does not invoke or depend on any concept of likeness or similarity." [Since I wrote this, I weakened the second premise, but that has no consequences for the issue here.]

What this says is that there is a putative answer to the central question that does not make use of the premise that true thoughts are like the external reality that they are about [1].

One of the things this neither says directly nor implies is that this answer is predicated on the assumption that thoughts are not like the external reality that they are about (I hope it is obvious that it does not say this directly, and we can see that it does not imply it by noting that you can add either the premise that true thoughts are like the external reality that they are about, or the premise that they are not, and in either case you get a valid argument, the conjunction of which is equivalent to the argument without either premise.)

In other words, this is a putative answer to the central question that is completely independent of the question of whether there is a similarity between the thought and reality: if there is similarity, that’s fine but inconsequential, and the same goes if no similarity can be found.

I hope this makes it clear that this is not an attempt to show that there is a way for materialism to be consistent with your thesis that similarity is essential for thoughts to be about reality; on the contrary, it is an argument with implications that go against that thesis. Again, this neither implies nor requires that there cannot be any similarity. Personally, I do not suppose that one could think meaningfully about reality without there being a correspondence, at some more-or-less abstract level, between the thoughts and the real world, and for all I know from what you have said about it, that may or may not be what you mean when you say 'similarity' or 'likeness'. Even if it is, that certainly does not mean that I am rationally obliged to accept your premise that it is the one and only reason why thoughts can be about reality. Quite the opposite, in fact: I suppose that it is information flow along chains of causality that both enables this property of thought and leads to thoughts having a correspondence with reality.

------

[1] I am assuming that we can agree that this discussion is limited to considering thoughts in the form of propositions about an objective reality which we assume to exist.

continued...


Thanks again for taking the time to write such a detailed response. I'm responding only to the first comment here, though I have read the others and I don't think they would alter what I write here. I will get to the other two asap. For this comment, I will limit myself to attempting to undermine the claim that causality can, in principle, make a thought be right or wrong (that is, about reality or not). I won't attempt to defend 'likeness' here: that will need a second response.

You refer to how a causal chain can convey information. More specifically, you refer to this information using the terms true and false, or using terms that are derived from truth/falsehood and make no sense without them. For example, you say that "Bob's causal chain [conveyed] more complete factual information". I would say that this is another way of saying information that is either true, or closer to the truth, than Bill's, and I don't think this is controversial, but please state if you disagree. As another example, you say Bill "[received] false information and [took] it to be credible", in contrast to the chain that led to Bob's true statement. You also refer to a "false" statement in reaction to "unwelcome information" as something that is within the chain of causality. You then list several other mechanisms by which "false thoughts" can arise, describing these as "causal chains of information". (Henceforth I will use "CHOCs" to refer to chains of causality.)

My question is what is it that would make the CHOCs true or false? Everything that I said above about a brainstate not being per se true or false applies to the examples of CHOCs you have listed here. More generally, it also applies to any CHOC that could be considered information. The print on a textbook, the brainstate of the textbooks' authors, the Word document that exists as a series of magnetic states on the authors' hard drives, the vibrating vocal cords that moved the authors' eardrums 20 years ago when they were at college, the tired brain of the typesetter who messed up the labeling in textbook B: every question that could be asked of Bill's and Bob's brainstates could also be asked of these things. Specifically, what is it that makes them true or false?

Obviously, you could give the same reply that you do for B&B's brainstates, and say that the causes' own causes make them true or false. But you'll easily perceive that this just pushes the problem back another stage, and leads to an infinite regress. If the textbook's truth is caused by the truth in the author's mind, what causes the latter? We go on to infinity.

So it seems you have no justification for calling a CHOC 'true' or 'false', or believing that it conveys information that can be described in these terms. (To be clear, I think the possibility of T/F, and the possibility of conveying information, stand or fall together -- I don't think this is controversial but let me know if you disagree.) If a cause explains T/F, it must itself have its truth or falsehood explained by its own cause. You thus either face infinite regress, or need to provide some other explanation that applies to a non-mental CHOC but not to a brainstate.

(Btw, I'm not denying that a CHOC can be _associated with_ information transfer in any sense at all. That would lead me to deny that information about reality can come in through the senses, which I think we both agree would be wrong. Nor am I denying that the info associated with a CHOC can be true or false. But it is the mind that makes said association. The info conveyed by the ink on the textbook's page is only understood to be true, and is only understood as information, when in the mind. The text on the page causes the mind's thoughts in a certain sense, just as licking the page causes an inky sensation on the tongue. But the text can't _be_ information independently of the mind, and therefore can't carry T/F independently of the mind, and therefore can't explain the mind's thoughts' being T/F. I don't want my objections about causality to seem to imply that knowledge is innate, in some Platonist or rationalist sense.)

I will defend "likeness to reality" as the answer in the next exciting installment :-)


I only recently noticed that you are replying to the parts of my post (split up only because of the limit on the size of comments) directly under each one. If continued, I think this would rapidly become difficult to follow, so I will respond at wherever the bottom of this thread has reached at the time. I will also try hard to keep comments within the size limit.


...continued.

Having (I hope) clarified these matters, let's move on to the argument as set out in your numbered sections:

"A thought can be right or wrong, but right-ness implies likeness to reality, while wrong-ness implies un-likeness. To illustrate: suppose Bob has a thought about a tree. Let's say it's "this oak tree is made of wood". Now suppose his friend Bill thinks "this oak tree is made of copper". I take it we can premise that Bob is right and Bill is wrong. But for our premise to be true, their thoughts must be compared to the tree itself. There is no other way to judge their thoughts' right-ness or wrong-ness."

The last sentence here is the first occasion in your latest post where you make a sweeping claim without offering any justification for it to be accepted. When you construct an argument, you can use, as a premise, any statement that is in the form of a proposition, but for it to be a sound argument, you have to show that it is a fact, and in epistemology and elsewhere, the generally-accepted basis for regarding a premise as being factual is the JTB criterion: it states a justified, true belief. Your say-so does not amount to either verification of the claim as being factual or justification for thinking it is.

The first problem in attempting to verify this premise is that you have not given the slightest explanation of what it means to say a thought is like reality. What would it mean to say that Bob's thought of an oak tree is like an actual oak tree? Does it grow from an acorn? Does it have the genetic signature that is shared by oak trees? These are both facts about them that might be the content of thoughts - indeed, they are right now, as you read this!

It gets even more problematic when we move on to a broader range of propositions. Take the claim "the highest mountain in the world is in Asia." In what sense does this thought have the property of being in Asia? Alternatively, if it does not actually have this property, then what does it mean to say this thought has a relevant likeness to the facts of the matter? When I learned the other day that Phil Lesh had died, was there something dead about my thought? Or there is this: "beta decay produces neutrinos." Does the neutrino component of this thought respond to the weak force? Furthermore, there was a time when neutrinos were postulated but had not yet been discovered - and then, when they were, did thoughts about neutrinos suddenly gain a likeness to neutrinos themselves?

What's more, Bob's thought about oak trees, as you have presented it, is a sentence, rather than an ineffable feeling that an oak tree is made of wood - i.e., it is a sequence of words conforming to a grammar. Do the words "tree" and "wood" have a likeness to the things they denote? One can easily doubt it on the grounds that words seem to be arbitrary: except for the relatively few onomatopoeic words (and not strongly even then), it seems that words are not constrained to be in any way like the reality they express: for example, when physicists named the charm quark, were they obliged to pick the word 'charm' in order to ensure that propositions containing it had the necessary likeness to reality? Of course not! The problem for you here is multiplied by the fact that there are a great many different languages, with wildly varying sounds at the level of both words and grammars, yet despite all this, we obviously are not in the situation you claim would come to pass if your thesis fails: we have not lost the possibility of rightness, and all thought has not become meaningless.

Perhaps you think that this likeness is not to be found in individual words, but only in propositions as a whole - but this does not do away with the need to explain how this likeness comes about. Furthermore, this position is, ipso facto, one in which the likeness is emergent, so, in constructing an explanation along these lines, you will have to overcome your struggles with the concept of emergence and embrace it.

There are other claims in the first section that are moot in the light of the clarifications above, so there is no need for me to address them here. This also applies to quite a few statements in section 2, but there are a few that give pause. For example, I am curious as to what you suppose follows from your statement that "A brain state is one result of a particular causal chain, but there is nothing special about it on that account." What does being special have to do with whether it might be part of an answer to our central question? I can see that things like falsifiability and consistency matter, but being special...? In fact, not being special seems to me to be a virtue: the chain-of-causality story is straightforward, has clear premises, does not require the acceptance of propositions for which no justification is given, and does not stray far from uncontroversial facts about minds and communication.

I believe your objection "more specifically, a causal chain cannot in itself be 'right' or 'wrong'. It is possible that a causal chain will cause Bob to think a tree is made of wood, and cause Bill to think it's made of copper. This is not enough for Bob to be right and Bill wrong, because the chain has no connection to correct-ness." has been thoroughly addressed above: in short, it is the flow of information originating in the real world that makes it possible for thoughts about the real world to be correct.

You go on to say "It can't give us reason for saying "Bob's thoughts are like reality, and Bill's are not"." My first response to that is this: until you can say what the similarity is between Bob's thought and reality, and give us a decision procedure by which we can tell that Bob's thought is similar to reality while Bill's is not, even if we accepted your thesis we would still not have a reason for saying "Bob's thoughts are like reality, and Bill's are not", because we would not have enough information to make the call.

I propose that the way we get a reason for deciding whether a proposition is realistic is that we make use of information flowing on the chain of causality to draw our own conclusions. I get the impression that you feel this is not good enough, but the thing is we are in no better position than are Bob and Bill: the best we can do is to make the call on the basis of what we already believe to be true, or what we come to believe is true as a result of investigations into what information we have, whether it can reasonably be regarded as factual, what other information might be available and what effect it could have on our decision (for example, we might initially side with Bill, but change our minds after Bob explains to us why he thinks the rock is just pyrite. This is another example of a causal chain bringing about true thoughts, and it is, roughly speaking, the sort of process that Bayesian reasoning seeks to formalize.)
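
To make that concrete, here is a minimal Python sketch of the kind of Bayesian update I have in mind. The numbers are invented purely for illustration; nothing in the argument depends on them.

    # A toy Bayesian update: belief that the rock is gold, revised after
    # observing cubic crystals (common in pyrite, rare in gold).
    def bayes_update(prior, p_obs_if_gold, p_obs_if_pyrite):
        """Return P(gold | observation) for the two-hypothesis case."""
        p_obs = p_obs_if_gold * prior + p_obs_if_pyrite * (1.0 - prior)
        return p_obs_if_gold * prior / p_obs

    p_gold = 0.5  # initially undecided between Bob and Bill
    p_gold = bayes_update(p_gold, p_obs_if_gold=0.02, p_obs_if_pyrite=0.6)
    print(f"P(gold | cubic crystals) = {p_gold:.3f}")  # ~0.032: side with Bob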

The truth of any proposition about reality is not determined by whether or how we can determine that it is correct; it is either true or false from the get-go. The Collatz conjecture, for example, is either true or false now, and has been at least since it was first conceived of (mathematical Platonists presumably think it has always been either true or false) - the problem is just that we do not know which it is.
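
For anyone unfamiliar with it, the rule behind the conjecture can be stated in a few lines of Python (a sketch of my own; checking instances like this settles nothing, which is rather the point):

    # The Collatz rule: halve n if even, otherwise replace n with 3n + 1.
    # The conjecture says this always reaches 1; no proof is known.
    def collatz_steps(n):
        """Count iterations needed for n to reach 1 under the Collatz rule."""
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 steps - but no finite check proves the conjecture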

There is no oracle that reliably sorts all propositions into true or false buckets; Gödel, Turing et al. hammered the nails into that coffin, and it is obvious that we don't need an oracle in order to avoid the dire situation you claim would follow from your thesis being false (plus, if this oracle were an essential tool in making sense of the world, it would already have been used to find out if the Collatz and other conjectures are true or false.) It is not, therefore, a problem for chain-of-causality that it cannot do so, and if you think it is, it would be inconsistent for you to assert that it is not also a problem for your thesis.
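
To sketch the Turing half of that point (this is my own toy rendering of the diagonal argument; halts is a hypothetical oracle that does not and cannot exist, not a real function):

    # Suppose, hypothetically, an oracle halts(f) that correctly decides
    # whether calling f() would terminate. Then we could build:
    def make_paradox(halts):
        def paradox():
            if halts(paradox):   # oracle predicts termination...
                while True:      # ...so loop forever, refuting it
                    pass
            # oracle predicts non-termination, so halt at once, refuting it
        return paradox

    # Whatever verdict halts gives on this input, it is wrong, so no such
    # universal true/false sorter can exist - which is the point above.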

continued...


... continued from https://news.ycombinator.com/item?id=42389025

You confuse imperfect definition with complete indefinition. If Bob says that he will meet Bill at the "foot of the mountain", that provides meaningful definition about Bob's future location. It is not complete: it's not sufficient to provide GPS coordinates, and it leaves open a range of possibilities (which trail, for example?). But it is sufficient to convey some meaning, and equally to convey some meaning about where Bob will definitely not be: the summit of the mountain, for example, or halfway up the mountain, or Times Square, etc. Bob's statement therefore provides definition; it defines; it draws lines around what he does and doesn't mean. It is not complete: further definition is possible. But its incompleteness does not negate the definition that it does provide.

If I said for the sake of argument "I can't describe this likeness in full detail, and I think it's impossible to do so even in theory, but nonetheless I know that some sort of likeness between thought and reality is necessary for thought to be valid: thought and reality cannot be completely unlike", that would be meaningful as far as it goes. It would be sufficient to define my claim to a degree, and would be opposed to certain contradictory claims.

Now you show you have some idea of what "likeness" means in your response. In particular, your rhetorical flourishes about acorns, genetic signatures, death, and neutrinos, and how thoughts cannot be like their objects in the sense that they're from acorns, have genes, etc, show that some sort of definition has got across. You have some grasp of what likeness means. This is not surprising. Often we use words like "be", "thing", "fact" and "like" where the complete definitions may not be known, but the words and concepts nonetheless contain meaning. That doesn't mean we shouldn't seek more detail and greater clarity in our concepts -- we most certainly should. But it does mean that being unable to thoroughly define a word does not preclude its conveying meaning. This is especially true for common, everyday words such as "like". We use the word all the time; perhaps we can't define it, but we use it meaningfully nonetheless.

My claim above was very limited: that a true thought is, in some sense, like its object. It was a limited claim, using a definition of 'like' which, while incomplete, is enough to convey some meaning. Precisely how a thought and its object are alike is one thing, but that they are alike is something else. We could theoretically know that the latter must be true while knowing nothing whatsoever of the former. (To be clear, this isn't my position -- my position is basically Aristotelian realism -- but for the sake of argument, it could theoretically be true.) My claim that thought and object must be alike is not undermined by a hypothetical inability to explain how they are alike, assuming I present other reasoning that is sound - just as someone without knowledge of the earth's rotation could tell you that it was day or night without telling you how it was day or night.

Much of the rest of your reply rests on the chain of causality theory which I address in my other post[0]. In short: a cause cannot explain truth in the mind, because any given cause faces the same question that the mind does about how it can be true, so unless you are going to attribute the cause's truth to its cause, and so regress into infinity, you need to find another explanation.

You claim that section 1 of [1] is moot. The first argument in support of this claim, that 'likeness' is too vague to mean anything, has failed. It does not get you off the hook to show that my argument -- that it's impossible for a thought to be both a) true and b) entirely unlike the reality it purports to be about -- is invalid.

More to follow on your other points when I get a minute.


...continued

Moving on: "That someone has been caused to think something doesn't mean he has a reason for thinking something." True enough, and nothing in my position either implies or is predicated on the assumption that they would. On the other hand, in cases like this, there is a reason that they had the thought, namely whatever it was that caused the thought. I bring this up because I am wondering whether there is some unwarranted conflation going on between these two different uses of 'reason'. If not, could you explain more fully the significance you see in this comment?

Next: "The materialist view of the mind closes off any other way in which likeness can be achieved, because (if it's consistent) it postulates that brain states, ultimately reducible to fundamental particles, are all that objectively exist." If, by likeness, you mean some correspondence between a thought about the world and the world as it is, then information flow along causal chains might do that, and you have not said anything which shows that it cannot. On the other hand, it is not necessary to show that this must happen, as chain-of-causality provides a plausible answer to the central question without requiring any particular likeness.

Another problem with this statement is that, as it stands, the clause following the 'because' does not appear to explain or justify the clause preceding it. If you think it does, please explain.

As I write this, I see there might be yet another problem over what 'any other way' in the conclusion is referring to - it appears to be 'a chain' from the preceding sentence, presumably a chain of causality. Did you intend your conclusion to be 'the materialist view of the mind closes off any way, other than a chain of causality, in which likeness can be achieved'? If it is, I do not discern what this is intended to refute.

Regardless, this is a metaphysical claim. I prefer to avoid using such things, as they tend to take the form of intellectual overreach into areas where nothing definite can be said. We can approach the question of whether the mind is a physical phenomenon without metaphysics, just as we can do the same with the question of whether the weather is a physical phenomenon.

"You claim "an understanding of causality may potentially lead us to being able to say... in what way thoughts are similar to reality", but you offer no details..." Well, there has been no pressing need for me to do so, as chain-of-causality offers an answer to the central question without involving similarity in any way, but I will happily offer a deal - you say specifically what sort of similarity you think you need, and I will see if I can explain or accommodate it within chain-of-causality.

In addition, I can’t let this particular statement go without noting the irony of you chiding me for not providing details!

"... I deny this is so -- causality can't rescue your position; it can't make there be some like-ness that is somehow both material and yet does not involve arrangements of atoms that are similar, which we've already ruled out." Frankly, I have no idea what you are saying here, but whatever it was, I suspect that it is moot now that I have clarified my position. If you think it is still an issue, could you provide more details?

Finally, I don't think there's anything to say about the conclusion - obviously, I'm not persuaded that it is correct, for all the reasons above. There is, however, one more thing that I want to go back to, as it pertains to this discussion as a whole. You have acknowledged that your claim is very limited, in that you are not making any claim as to what this likeness may be like. One of the consequences of it being limited in this way is that no conclusion of your arguments can rise to the level of being taken for a fact - a justified true belief - until you can say enough, about what it means to say that thoughts have a likeness to reality, for that premise to be regarded as justified (to be clear, this is a necessary but not sufficient condition.) You cannot avoid critical analysis of your thesis by not making affirmative statements about what it means, as the absence of such claims is one of the bigger problems with it. If you remain aloof about this, anyone evaluating your claim on its merits would be fully justified in being equally aloof about whether your argument is sound.


Thanks for taking the time to write such a long reply. It will take me a while to read it properly, give me 2-3 days.


No problem - I'm afraid my posts are longer than they could have been with more thorough editing.

If I had spent more time on it, I might have noticed that in your reply upthread to thephyber, you had written "I'm obviously talking about the concepts we use to think about reality, not about the words we use to describe it." You are entitled to have some fun pointing out that you had already anticipated the paragraph in which I bring up the arbitrariness of language.

That is not to say, however, that there is nothing to see here. We represent, communicate and reason about concepts, including concepts about reality and the real world, with languages that are arbitrary in their syntax and grammar. In addition, large swaths of factual knowledge about the real world can be learned through language alone (for example, it is the only way that I have come to know anything about atomic and subatomic physics). Given these facts, a rational person is entirely justified in doubting your claim that neither arrangements of atoms, nor anything else physical, are capable of having (or giving to concepts) the likeness to reality that your argument is predicated on.


Ha, thanks! Ok, 1-2 days more. Fortunately, important though these discussions may be, at least they are not urgent.


On reflecting on all you have said in this thread, I have come to see that there is an argument to settle this issue once and for all. It goes like this:

1: If there is no way to distinguish true thoughts from false ones, all thoughts become meaningless.

2: Not all thoughts are meaningless.

3: There is a way to distinguish true thoughts from false ones (from 1 and 2.)

4: The only way to distinguish true thoughts from all others is if there is a correspondence between the thought and the reality.

5: There is a correspondence between true thoughts and reality (from 3 and 4.)

6. Nothing immaterial corresponds to reality.

7. Anything not immaterial is material (from the law of the excluded middle.)

Conclusion: True thoughts must be material (from 5, 6 and 7.)

- So there you have it!
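
For what it's worth, the propositional skeleton of this argument can be checked mechanically. The following Lean sketch is mine; the letters are placeholder propositions, and the proof establishes only that the conclusion follows from the premises, not that any premise is true:

    -- D : there is a way to distinguish true thoughts from false ones
    -- M : all thoughts are meaningless
    -- C : there is a correspondence between true thoughts and reality
    -- I : true thoughts are immaterial
    -- P : true thoughts are material
    example (D M C I P : Prop)
        (h1 : ¬D → M)   -- premise 1
        (h2 : ¬M)       -- premise 2
        (h4 : D → C)    -- premise 4
        (h6 : C → ¬I)   -- premise 6
        (h7 : ¬I → P)   -- premise 7 (excluded middle, stated informally)
        : P :=
      h7 (h6 (h4 (Classical.byContradiction fun hD => h2 (h1 hD))))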


This is a reply to two posts upthread: https://news.ycombinator.com/item?id=42389025 and https://news.ycombinator.com/item?id=42395650

Far from adding any specificity to what you mean by likeness, your Dec. 12 post [1] continues to avoid doing so. This seems very odd, as one hallmark of true theories is that the more one can say about them, the more strongly justified they appear (I suppose, under your thesis, you would say they become more like reality?) Why would you pass up on multiple opportunities to make your thesis as strong as you can? Or has it already reached that point? The direction of this discussion suggests the latter is the case - but you still have an opportunity to turn that around.

Your 'meet at the foot of the mountain' analogy falls flat, as no example of a situation where there is sufficient information does anything to establish that, in a different case, there is sufficient information.

The reason why what you have said so far is inadequate is that you have made a very strong assertion ("materialism makes rational thought impossible, and is therefore false" [2]) which you say follows from your claim that true thoughts have some sort of likeness to reality, but you have not offered any reason for a reasonable skeptic to think that your assertion states a fact - i.e. a justified true belief. As it stands, it does not meet even the 'justified' criterion. The fact that you have said next to nothing about how thoughts are like reality is just one of the things standing in the way of this assertion having any justification, though it is the one that we come to first, as the other big, so-far unjustified leap - to your anti-materialist conclusion - is predicated on it.

"Now you show you have some idea of what 'likeness' means in your response" - indeed, I have an idea of how one could say that true thoughts are like the reality they are about, and, in fact, I wrote "personally, I do not suppose that one could think meaningfully about reality without there being a correspondence, at some more-or-less abstract level, between the thoughts and the real world, and for all I know from what you have said about it, that may or may not be what you mean when you say 'similarity' or 'likeness'." [3] I do not know whether it is anything like what you mean, because you have been so opaque about it.

You could certainly say that my correspondence is no more specific than your likeness - and you would be right - but that is not a problem for what I am saying, because, once again, I am not claiming anything that is predicated on it.

Contrary to what you say here, my position is not dependent on whether my answer to the 'central question' that you posed in the beginning (what is it that makes a thought be one about reality?) is actually correct: I believe that information flow from the real world via chains of causality is sufficient to explain why true thoughts reflect the world as it is, but even if this is completely false, it would not mean that your thesis has been justified: it has to stand up on its own merits.

Consequently, I do not have to say anything more about chain-of-causality, but I am not averse to scrutinizing it, so I am happy to take on the infinite regress issue and show that it is not a problem. For example, you wrote "someone without knowledge of the earth's rotation could tell you that it was day or night without telling you how it was day or night", and that is true enough, if they could see daylight, or hear the sounds characteristic of day- or nighttime activity, or read a clock (or from their body's circadian rhythm, absent any better information, but that does not work for long.) In all these cases, the thought arises from information flow along a chain of causality that does not, in any circumstance, need to be followed back any further than the rotation of the Earth, so there is no infinite regress. Furthermore, there is no need for the person to know about the rotation of the Earth, as that rotation causes corresponding phenomena, which in turn feed information to our subject - information which allows them to deduce the fact of it being day or night so long as they have some knowledge about how to differentiate the two (the chain is there even for clock-reading and circadian rhythms, which are synchronized to the Earth's rotation.)

You seem to have come to your infinite regress conclusion because I occasionally used phrases such as 'factual information'. That is just a shortcut for referring to information caused by the actual state of the world, either directly or through a process of sound reasoning from direct information, and I should have been clear about that.

We can continue with this line of thought: a person might wake up, look at a clock reading, say, 13:00, and think it is daytime - but in one case the clock is working correctly, and in the other, it is broken, and it is actually midnight. This is no problem for CHOC, as in the latter case, there is no causal chain from the Earth's rotation to the clock's reading. According to your thesis, this thought is like reality in the first scenario, but unlike reality in the second - yet it seems to be the case (or at least it plausibly is) that the thoughts in the two scenarios are identical (they can certainly be expressed by the same proposition.) From this consideration, it seems that likeness to reality is not an intrinsic property of thoughts, but merely a correspondence between them and reality - and mere correspondence does not seem to be a problem for materialism, at least not without further explanation.

With phrases like "you need to find another explanation" and "it does not get you off the hook to show that my argument ... is invalid", I feel it is necessary for me to repeat what I said earlier about burden-shifting: "in order to show that your argument has failed to make its case, I need neither to show how there can be likeness that is not physical likeness if materialism is true, nor to show that likeness is not necessary for a thought to be about reality. On the contrary, you have chosen to make a strong claim - essentially that the mind cannot possibly be the result of physical processes [4] - and to sustain that, you need more than arguments grounded in appeals to intuition about how things either must or cannot be. In particular, anything resembling 'so prove me wrong' would amount to burden-shifting, and while we are about it, the alternative to 'the mind cannot be a physical phenomenon' is not 'the mind must be a physical phenomenon', it is 'the mind might be a physical phenomenon.'"

All of this is probably moot, however, given the sibling comment in which I show, using an argument revealed to me by this discussion, that materialism is correct ;)

[1] https://news.ycombinator.com/item?id=42395650

[2] https://news.ycombinator.com/item?id=42319985 , in the summary.

[3] https://news.ycombinator.com/item?id=42345041 , final paragraph.

[4] After I wrote this, you made it quite clear how strong of a claim you are making: "materialism makes rational thought impossible, and is therefore false." [see footnote 2]


Ok, thanks for this latest reply. I thought your 7-point jest syllogism was a sarcastic response to my two most recent comments (made 2-3 days ago), so I didn't bother adding to what I'd already said; if I seem to have ignored half of what you wrote, that's why.

I agree, let's keep everything here from now on. I will reply to this comment soon.


The 7-step argument for materialism I have presented above is not intended to be sarcasm. I might be guilty of presenting a parody, in that it closely resembles your argument in most respects, but it has a serious purpose - an attempt to focus attention on the need for statements to be sufficiently well-defined and justifiable that a reasonable skeptic could find them persuasive. The challenge it presents you with is to refute it with a non-question-begging, non-burden-shifting response that would not apply, mutatis mutandis, to your own argument.


Well I'll get back properly as soon as I can, tomorrow I hope, but premise 6 of your 7-point sequence is false (or, if you prefer, we don't agree on it). Indeed, it seems question-begging, but this is so obvious that I assume you have something else in mind...


Remember what I said: The challenge it presents you with is to refute it with a non-question-begging, non-burden-shifting response that would not apply, mutatis mutandis, to your own argument - and for that, it would be helpful if you list your premises as I have done. I mentioned a while back that you cannot justify your conclusion by being aloof (or vague, for that matter) about the argument for it.

Note that this nominal counter-argument does not render irrelevant the question of what you mean by 'likeness', and whether it differs from the sort of correspondence that information flow along causal chains can bring about. In fact, if you do not say anything definite, I can simply assert, as a premise, that they are the same thing, and rephrase this nominal counter-argument accordingly.


> Far from adding any specificity to what you mean by likeness, your Dec. 12 post [1] continues to avoid doing so.

Very good.

My "theory", if you must call it that, is Aristotelian moderate realism. A full discussion of what I mean by "likeness" would necessarily invoke concepts from that worldview, and so it would involve debating the truth/falsehood of said worldview. I don't want to do that for two reasons: first, because it'd take the scope of the discussion far beyond whether thought can be material or not, and time is a big constraining factor; and second, because we'd just be reinventing the wheel. You'll find moderate realism ably defended by several authors whom I can recommend if you're interested, and no doubt you'll know of your own list of attempted debunkings.

But as a compromise, I can propose the following partial definition: Two objects are alike if they are the same in some respect.

So an apple and a banana are the same in that they are both fruit (and this works even if categorizations of this sort are merely conventional), but different in that one is red and the other yellow.

More formally, we could say that A and B are alike if we can predicate X of both A and B. As a corollary, A and B are entirely un-alike if we cannot predicate anything of both A and B.
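
As a sketch only (mine, not a piece of the moderate-realist apparatus), the partial definition can be written down in Lean, assuming A and B are drawn from some common domain:

    -- Partial definition: A and B are alike iff some predicate holds of both.
    def Alike {α : Type} (A B : α) : Prop :=
      ∃ X : α → Prop, X A ∧ X B

    -- Corollary: entirely un-alike iff nothing can be predicated of both.
    def EntirelyUnalike {α : Type} (A B : α) : Prop :=
      ¬ ∃ X : α → Prop, X A ∧ X B

    -- Caveat: with predicates unrestricted, Alike A B holds trivially
    -- (take X := fun _ => True), so which predicates are admissible is
    -- left informal here, and that is where the substantive question lies.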

If you're good to go with that we can proceed. It would then be necessary to ask:

(1) Whether a thought must be like its object (using the definition just given) in order for it to be about its object;

(2) If so, in what manner it must be like its object;

(3) Whether said manner can be material.

I don't think "the same" can be usefully (verbally) defined, since there is a point at which verbal definitions involve defining words whose meaning is clearer in terms of those whose meaning is less clear.

> The challenge it presents you with is to refute it with a non-question-begging, non-burden-shifting response that would not apply, mutatis mutandis, to your own argument - and for that, it would be helpful if you list your premises as I have done.

It is possible I misunderstand you, but it doesn't appear to present me with a challenge. Premise 6 is false in any interpretation that could be given it. That is not question-begging, nor is it burden-shifting, and whether this response applies to any aspects of my own argument is precisely what we're debating. Points 1 through 5 are unobjectionable, assuming some unusual meaning has not been assigned to any of the terms.

Is your point that I'm stating the opposite of premise 6 in my argument without justification?


Hold a tree in your mind. Now describe it aloud. You have now produced a material effect in the physical world that resulted directly from your idea of a tree. Definitionally, that means it is not immaterial: you conjured it with material and measured it with material.


Music has a physical component: sound waves vibrate your eardrum, and your brain recognizes the sound. What makes it music is that it is a pattern of interwoven sounds that sound "harmonious" to you. You can physically measure that, but what makes it music is that it's a pattern that plays out over time. The concept of "pattern" is not physical. It's a concept. It's "metaphysical". You are not a bundle of atoms. You are a pattern that plays out over time.


Crystal structure is a pattern, and various properties of crystals are explicable in terms of the pattern of atoms within them. The pattern of bases on a DNA molecule determines what genes are encoded on it. It is semantically and epistemically pointless to argue that this makes them nonphysical, and inconsistent to use this argument only for mental phenomena. Taken to its logical conclusion, this line of argument will probably end up at the useless metaphysical position that all is pattern and nothing is physical.


A pattern still overlays a physical object. You can abstractly describe the pattern of a crystal without looking at a crystal.


Of course patterns can be said to overlay physical objects - or that physical objects instantiate patterns, for that matter; there are all sorts of ways of making the point. But is it your position that, on account of this, crystals are not physical? Patterns are ubiquitous in the universe, and if being overlain by or instantiating a pattern is a criterion establishing that something is not physical, then what, if anything, is physical?

In the case of the mind/matter debate, the question is whether minds are physical processes, not whether they are physical objects. If having a pattern disqualifies a process from being physical, then even simple processes such as resonance would not count as physical, leading to the question of what, if anything, counts as a physical process?

You are correct when you say "You can abstractly describe the pattern of a crystal without looking at a crystal", but that is what physics does with everything in its domain! If you were to say that physics describes and explains the physical but is not itself physical, I might even agree, but that is not the issue under discussion in the mind/matter debate.


Numbers are not physical, but you see three objects all over the place. A number is a category. The number "three" exists as a property that I can use to group physical objects (or even other categories). So I can group together three pencils or three cars. The only physical property that cars and pencils have in common is that they're both physical... maybe they're the same color... but I have grouped three pencils and I have grouped three cars, so now they share a property of "three", but "three" is not an intrinsic physical property of pencils or cars. It's a potential metaphysical property, but it only becomes manifest once an observer groups three pencils.

My argument is that a "mind" cannot be a purely physical process in the same way that a video game is a purely physical process. The computer instructions are stored on my hard drive and encoded, and it's a physical process for my computer to retrieve those instructions and follow them, but the game itself is something the user interacts with by watching what's happening on the screen and interacting with it. The computer program of course has a physical component, but lives mostly in the layers of abstraction from 1s and 0s up to "flashy lights on screen", and none of those things mean anything if a mind isn't watching the screen, paying attention, and interacting with it.

Here's a question that boils down what I'm trying to say: If you have a pet bird that sits next to you as you play Fallout New Vegas, and taps the keyboard occasionally, is your bird playing Fallout? Is playing Fallout a purely physical process?


> Here's a question that boils down what I'm trying to say: If you have a pet bird that sits next to you as you play Fallout New Vegas, and taps the keyboard occasionally, is your bird playing Fallout? Is playing Fallout a purely physical process?

With regard to your question about the bird, what do you suppose a 'yes' answer would imply, and ditto for a 'no'? It is unclear to me that this question would have any relevance to the one you follow it with, yet you present it as if an answer to the first question would settle the second.


On further reflection, and a re-read after a night of sleep, I agree with you that my example at the end was not a clear indication of what I'm pointing at. I was trying to reveal that "reality" is more than just physical processes, and "mind", especially, maps onto physical processes, but only relationally.

A bird watching Fallout and pressing keys on your keyboard with its beak is not participating in the story of Fallout the way you are. You are watching a story and tracking a narrative and making decisions about which keys to press. But this is a bit of an obscure analogy.

Let me try a different one. You just sat down in your favorite chair after a long day of work, and you ask your young son to please bring you a cold drink from the refrigerator. He cheerfully runs to the refrigerator, grabs a beverage, and runs back to you, handing it to you.

The process of you vibrating your vocal cords, which makes a physical vibration of air that goes into his ear and vibrates his eardrum, which then sends impulses to his brain and, later, to his body, ending with you holding a cold beverage in your hand... those are all physical processes. But the relationships between sounds and "words", between words and sentences, between sentences and movement... none of that is physical. The relationship with your son obviously has a physical component, but the meaning behind your genetic connection to your son is largely metaphysical.

Another analogy: if a tree falls in the forest, does it make a sound? I am firmly in the camp of "no". Does the tree create vibrating air waves when it falls and hits the ground? Of course. But a "sound" is an abstraction that forms in the mind of someone who is there watching (and more importantly, hearing!) the tree hit the ground. That "sound" is a metaphysical concept that we map onto the physical world, and the ability to make these distinctions, to categorize our incoming sensory input, is what enables us to perceive anything at all. Without that structuring, categorizing, and filtering of incoming data, the world would be overwhelming and we would be paralyzed.

Am I a solipsist who only believes that what is in my mind is "real" and that the physical world around me is an illusion? No. The first and best lesson I ever had that the physical world is real is getting punched in the face. That will knock solipsism right off the list of being a possible worldview.

I believe reality is an interplay of concepts and physical, "potential" or "raw" matter.

One more analogy: cooking. When I want to make bread, for example, I have a recipe, which is a concept. It is an abstract concept of how to transform raw materials into a new thing: bread. So you gather raw ingredients, and you combine them following a particular order. You let the yeast feed on the sugars and make nice little CO2 bubbles in your dough, causing it to rise. You then put it in the oven and after some heat and time, you remove what used to be flour, water, and yeast, and you put it on the counter to cool. It has been transformed and you call it "bread".

The classic materialist move is to now just say "yeah but it's just flour, water, and yeast." Which is physically true. However, those ingredients have specific interactions that happen when combined, and further interactions that happen when exposed to high heat for a certain duration of time, and when we remove it from the oven, we don't say "come get some flour, water, and yeast that have been transformed through chemical processes, heat, and time!" No. It has a new name now. It has switched categories. It has changed ontological levels and changed from three ingredients into a new identity: bread.

Now I'm hungry for some toast.


I hope you enjoyed your toast, but what I have to say here might not be quite so palatable.

Let's start with the cooking example, as it epitomizes the difficulty in all of them.

What you call "the classic materialist move" - to just say "yeah but it's just flour, water, and yeast" - is a rather simplistic parody of the full physical description of the process, and it does not even begin to approach the explanation of the process that science can give. A materialist would have no problem with saying that bread is different from its ingredients, whether considered separately or collectively, and she could go along with calling this an ontological difference. The thing, though, is this: just because flour, water, and yeast are physical things, and they have now been turned into something that is in a different ontological category - namely bread, or baked goods - it does not follow that this bread cannot be a physical substance, and this argument provides no justification for the premise that it is not.

At this point, you may be thinking that you had no intention of saying bread is nonphysical, but if so, then that only emphasizes the point, as, in the rest of your analogies, you are doing the equivalent of exactly that (for example, concepts of sound are not ontologically the same as sound waves and the latter are physical, and from this, it is supposed to follow that the former are not.) The general schema here is this invalid syllogism:

Major premise: X is physical.

Minor premise: Concepts of X are not in the same ontological category as X itself.

Conclusion: Concepts of X are not physical.

Once we remove the ineffective ontological argument from your posts, all we have is a number of claims that mental phenomena cannot be physical phenomena, presented without any reason given to think they are true. In the parallel thread where I am debating geye1234 [1], I explained why, for a conclusion of this sort to be taken as a fact - i.e., a justified true belief - you need to give readers a reason to think your conclusion is justified, which in turn means giving them a reason to think its premises are.

By the way, what is potential matter, as opposed to the alternatives of raw and physical matter?

[1] https://news.ycombinator.com/item?id=42345089 , 3rd paragraph.


The arguments you're making rest on a hypothesis that isn't supported by your examples, mostly because they're not falsifiable. They're therefore incredibly uninteresting.


So if we copy you into a robot and the copying process leaves you unharmed and awake, are they both you?


Yes, they are both me. It is admittedly a weird conclusion, but that doesn't make it false- we did a weird thing and got a weird result. Objectively, there is no case for saying either is not you.


I agree that they'd both be you, but I don't think that's what's at stake. "You" would only get to experience one of the two lives at that point: either the original's or the robot's. Presumably whatever consciousness was present before the duplication still resides with the original.

So there's certainly an argument to be made that if someone created a copy of "you" and put it in a robot, and then destroyed your organic body, that consciousness wouldn't "move" to the robot; it would stay in the organic body and be killed. The robot would be a completely new consciousness, but with all of your memories and behaviors and attitudes.

I'm honestly not sure what I believe around this. Perhaps we will discover "consciousness" as a physical thing and learn how to transfer that as well. Or not. Who knows.


I'm with you on this one - continuity is important too, it's not just about form. A robot version of me might be a suitable substitute to my loved ones, but it would always be outside my head and therefore not a continuous extension of my self.


So twins are both the same person? Not sure what you mean; they aren't both you. You create a new person who thinks the same way you do.


No, one is a copy of you. There is only one you. Literally. A copy can be perceived as being you, but it's literally, physically, not you. If it was you, literally you, it couldn't exist in both. Your copy could answer a question differently than you might, because the copy would immediately start having a different experience than you. If it was you, that wouldn't be possible. If it really was you rather than a copy, you wouldn't exist alongside it, and it wouldn't be able to answer a question differently than you.


What a silly conundrum you’ve got yourself into. It’s fine to take a step back and say oops, what I just said here was quite stupid.


I think it would both be me until it started to drift, which would be almost instantly.


Right. You'd have a memory of cloning your mind, and the copy wouldn't have a memory of that. From the first boot of the copy, it would be operating with a different set of data than you. Also, the fact that you exist alongside it means it's not you. It's a copy of you.


As long as we're dreaming up stuff, why do we assume the consciousness wouldn't be shared? What if you woke up and there was one mind but two bodies?


For myself at least, consciousness is the product of the body and in particular the brain. Without any kind of high bandwidth connection between two bodies, I don't see how your proposal could work.

That said, I wonder if that offers a possible way to 'transfer consciousness' between a human and their robot copy - link both brains together and then once synchronised (whatever that means), shutdown the biological half. Like a sort of extended Ship of Theseus idea.


That's an interesting idea. I think as long as the continuity of consciousness happens then it doesn't matter and you would negate the copy problem.


Another instance of me is a different instance of me than I am, thus it isn't me.


If I copy the contents of an Apple II disk onto my PC and try to "run" it... nothing happens. It isn't mapped to the new hardware. So who is going to write the "human mind/consciousness" emulator that will map a mind onto a hard drive? Will they simulate all the inputs from your body, nervous system, senses, etc.?
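
A minimal sketch of that point in Python, with a made-up two-opcode machine (nothing here is a real Apple II instruction set): the same bytes are trivial to copy, but they are inert until some interpreter gives them semantics.

  # Hypothetical bytecode, purely for illustration:
  program = bytes([0x01, 0x2A, 0x02])   # PUSH 42, then PRINT

  def run(code):
      # A tiny interpreter: this function, not the bytes, supplies meaning.
      stack, pc = [], 0
      while pc < len(code):
          op = code[pc]
          if op == 0x01:                # PUSH the next byte onto the stack
              stack.append(code[pc + 1])
              pc += 2
          elif op == 0x02:              # PRINT the top of the stack
              print(stack.pop())
              pc += 1
          else:
              raise ValueError(f"unknown opcode {op:#x}")

  run(program)   # prints 42 -- only because run() maps bytes to behavior

Copying the program is easy; without something playing the role of run(), it does nothing.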


And will they perfectly emulate the dynamic situation that is the brain?

You've mapped every neuron perfectly; you've even captured every ongoing signal, and the whole thing is recreated perfectly. But that's not the entire story - connections are created and broken all the time; that's what memory is, after all. Is it the "same person" if that snapshot is put into an emulator with slightly different results there? They clearly won't be having the same thoughts within a pretty short timescale, even if the inputs are perfectly the same.


> of course an exact copy of you is you

Yeah, that's what my clones keep telling me. I ask them: have you ever woken up inside a cloning machine? That probably means you're not the original. Then they somehow sedated me and locked me inside the cloning machine.


Exact copy down to which scale?

All subatomic particles, quantum-woo-woo including all their entanglings?

Weakest electromagnetic fields as resultant emergent pattern of these?


> This isn't preserving, it's copying. A copy of your brain/personality/you... isn't you... it's a copy. It's a cassette tape playing back the radio you recorded onto it.

That's just one opinion, though. We still don't know what consciousness really is. We don't know what makes us who we are. We don't know if some concept (scientific or supernatural) of a "soul" exists apart from the behavior that arises from the physical structures of the brain.

We just don't know. So maybe you're right, but that's far from clear.


I'd agree with you, except if my brain was scanned and uploaded to the cloud, I'd still exist in my original brain. There can't be two originals of something so the cloud version would be a copy of me, since the original me would still be alive, able to talk to the copy of me living in the cloud.


Hans Moravec had a suggestion on how to do this in Mind Children: instead of examining the whole brain, you measure a layer, then replace it, layer by layer, until the last. There is never a copy and an original; it's just a Ship of Theseus self where the neurons are individually replaced with new ones, albeit electronic ones.


Suppose you died every night in your sleep and were replaced in the morning with a perfect copy of yourself with all your memories intact. Would you know the difference?


The copy who woke up wouldn't know the difference.

There would be thousands of copies that experienced going to bed and then nothing afterwards.


I had a great deal of difficulty going to sleep the night after this first occurred to me.

I only really managed to assuage my fear by realizing there would be an incredible number of dead bodies to deal with if this were happening to everyone every night, and it was extremely unlikely it was only happening to me.


You're taking what the above person said too literally. They didn't mean physically dying; they meant your mind dies every night when you sleep, and in the morning a new mind, a copy of your previous mind, is what wakes up and continues on as you.


Now smeej won't be able to sleep at night again.


I'm not merely a mind. I am an embodied mind.


This really hit me the first time I went under GA.


My view is that it's just a change in consciousness. Just like closing your eyes does not reset your sight, sleeping does not reset your brain. You just turn off some of your senses.


See The Real Transported Man


Wasn’t The Prestige about this idea, too?


No, but if I died tonight and a copy of me took over tomorrow, what good did that do for me personally? I'm still dead. Selfishly, I don't care if a copy of me continues on, I only care that I don't.


Let's say you just found out that this is actually happening. Every night you die and are replaced with a perfect copy of yourself. Would you do anything differently?


I would probably get a concentrated dose of existential dread knowing that by EOD I would cease to exist. Usually I'm able to delay that dread with the silly reasoning that I still have many years left before I die, but I wouldn't have that mind hack if I knew I had no time left.


If it calms your dread, know that most people who die expectedly (after a long illness) want to die.

So you have no need to fear the inevitability of death, only the inevitability of the desire for death.

Finally, to cure the fear of the inevitability of the desire for death, live your life so well that even if you die accidentally, it was worth it.


I doubt it. You’d maybe freak out about it for a week but eventually you’re going to come to terms with the fact that this weird setup has absolutely no effect on your actual experience.

Same for other contrived things like dying and getting revived every day, getting frozen and unfrozen every day, taking a teleport dematerializer every day for your commute, having a portion of your brain and organs randomly get swapped out ship of Theseus style, etc.

At some point you would just come to terms with the fact that your existence is really just that of being a mind with a past and present. The future doesn’t really matter.


Why not go out with a bang and spend your life savings on hookers and blow (or whatever decadent thing floats your boat)?

My point is that I suspect most of us wouldn't do anything differently, even if we know it's not our consciousness continuing on, because both scenarios are identical for all practical purposes.


You're right, nothing would change, and both scenarios do play out the same. The difference would be the knowing about it part. That's what changes things for me. If it currently happens that way, I wouldn't be aware of it, but if I knew that it was going to happen, the act of knowing makes it an issue, for me anyway, even if it still plays out the same as every other time.


You remember going to sleep last night. So even if you died and a copy of you was made, intuitively, subjectively, you feel that tomorrow is still you, same as yesterday was same you, even if there is some technical disconnect.


Yes, I probably would do something differently.

First, how did I find out about it? If someone else discovered it and proved it and then told me, but they have not yet published or told anyone else I'd seriously consider immediately killing them (or waiting until they fall asleep and then killing them).

Widespread knowledge that everyone dies every day and is replaced by a perfect copy would be devastating for the world.

Assuming that I'm the only one who knows this (either because I discovered and proved it somehow, or because I just killed everyone else who knows), then I'd probably treat it the same as I would treat finding that I have an incurable illness that is going to painlessly kill me that night. I then have two options:

• Let that happen.

• Die by some other means before then.

In favor of the first option, letting it happen, is that the next day the world has a perfect copy of me that, assuming that most other people do not know that everyone dies and is replaced nightly, the rest of the world believes is me and that behaves identically to how I would have behaved had I survived to that day.

But I have no one that actually depends on me. The world does not actually need me to still be here tomorrow. Some friends would be sad if I were gone, but eventually the line of copies of me is going to end when one of them dies other than by going to sleep, so my friends' copies are eventually going to be saddened by the end of the line of tzs copies anyway.

That brings us to the second option. There are people whose lines of copies are bad for the world. So why not get some guns and take a few of them out? That has a good chance of getting me killed, but since I'm dying that night anyway, so what?

There's the moral issue of taking another life, but the people I'd be killing are also going to die anyway that night. I would be preventing a line of copies of them from coming into existence, but preventing something from coming into existence is usually not seen as equivalent to killing something--the widespread acceptance of birth control is evidence of that.

As a practical matter, there would probably not be time enough to actually do this before falling asleep--but my copy tomorrow would know all this too, as would the next copy the day after tomorrow and so on, so could continue with the plan.

It is possible that I (or one of my future copies) would decide that killing is sufficiently wrong that even killing a very bad person who is going to die anyway in a few hours, to prevent future bad acts of their future copies, is not acceptable.

In that case I'd still consider the second option, but instead of death by going vigilante on evil people I'd probably just start doing things that I've always thought might be fun but had too high a risk of death to consider.

Not quite the same scenario, but the short story "All the Myriad Ways" by Larry Niven covers a kind of similar situation. In that story people develop travel to parallel universes, and discover that the idea that every possible decision creates parallel universes where that decision went different ways is true. The story is about the social implications of that as many people decide that nothing they do actually matters.


Death affects other people, so why would you kill yourself or another?

If you had the choice of pushing a button that would kill a total stranger that you never met, would you do it? If not, you can use the same argument for your friends. Sure they may be replaced by identical copies, but they will still experience feelings, although they are technically not your old friends


You are 'caring' objectively; we live subjectively. So your caring is of purely academic interest, even, dare I say it, to you.

Subjective continuity is illusory; placing import on this illusion is up to you, of course, but there is no substantive difference between the discontinuity experienced by your 'copy' in the cloud and what you experience yourself moment to moment.


A copy of me continuing on not knowing or caring that it's a copy of me is of little importance to me. That's not my point. My point is that I will not be continuing on. Sure, a copy that thinks it's me will, and to the world, that would be me. It would have my memories and make decisions based on those memories in a way that would be the same as I would. But that wouldn't be me, literally. My mind, my inner voice, my experience of reality through my mind's eye, would not exist. I wouldn't exist. A copy of who I was continuing on doesn't help me, literally.


How do you think Star Trek transporters would work?


Instantly ctrl-f'd to see if anyone would mention transporters. I believe in some circles this has been dubbed "The Transporter Problem". It's a thought experiment that already exists.

Invincible also tackled this problem, with someone cloning a new body, and copying his brain to a new body. For a brief moment both bodies perceive the same thing before their experiences split into the two bodies. The copy wakes up, says goodbye to the original, who is dying, and says "I'm sorry it wasn't you."

This is also IMO related to the ship of theseus problem. Are you the same person you were 20 years ago? Are you the same person in the morning as the person who went to sleep? Are you the same person as a minute ago? What if you add in concussions/memory loss/degenerative disease?


Star Trek's lore includes some technobabble about transporters operating in "quantum" mode to assuage concerns that the person at the transporter destination is not the same as at the source.


But it cloned Riker once!

https://memory-alpha.fandom.com/wiki/Thomas_Riker

In a similar vein, about time machines:

https://m.youtube.com/watch?v=CXhnPLMIET0


Except all those times people got cloned or worse.


There was an episode with a transporter accident, but the energy/essence ("soul") still remained and could be transferred to a backup copy. This implies that the energy/soul is also transported.


Every morning you, the new you, would not know. And unless the old you died while sleeping, they would know - however briefly.


No, but you would not wake up; a clone who has your memories would wake up, but the you that went to sleep will never wake up. That effectively doesn't make a difference, but I find it pretty odd to think about. We as humans kind of lack a way to verify it's really "us" and not a clone.


I view (hypothetical, sufficiently good) brain upload and emulation the way I view git forks: both children are just as equally "the real one" even if only one of them is on the same physical hard drive.

Looking forward from now, both bio-me and digital-fork-me (or indeed bio-fork-me if the upload gets reverted to biology) are equally "my future self", even though they would also each be distinct beings in their own right and not a shared consciousness over many bodies.
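
To stretch the git analogy into a toy Python sketch (an illustration of fork semantics only, not a claim about real brain emulation): both forks carry the entire shared history, and neither has a stronger claim to it.

  import copy

  history = {"memories": ["childhood", "first job", "upload day"]}

  bio_me = history                       # the original line continues
  digital_me = copy.deepcopy(history)    # the fork starts out byte-identical

  print(bio_me == digital_me)            # True: equal at the fork point

  bio_me["memories"].append("kept aging in a body")
  digital_me["memories"].append("woke up in a datacenter")

  print(bio_me == digital_me)            # False: two distinct beings now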


I think that from everyone else's perspective, an ideal copy of me would be me; by the definition of "ideal copy". I, however, would not consider the copy to be me; to me, there is only one me.


That all makes sense. But let's run with this way beyond the foreseeable tech. What if you could replace each neuron in situ, one by one, up to X%? Then what if it was reversible (the neurons were initially just bypassed, commented out)? Someone could dial it up to 5%, then 50%, and if they still felt the same throughout, go up to 100%. In that scenario, would they have copied themselves?

I find it fascinatingly coincidental that neurons are the only cells in the body that don't rejuvenate unless there is some sort of injury [0].

[0]: https://www.dzne.de/en/im-fokus/meldungen/2021/neurons-able-...


You're asking a question that we've asked ourselves for millennia:

https://en.wikipedia.org/wiki/Ship_of_Theseus

Technically, we are not the same people that we were when we were born. The cells in our body have one by one been replaced with new ones and the old ones have died off. The things that make up my body are 100% different than when I was born, so I literally am not the same person, physically.

Maybe this is an indicator that there is more to what makes us, us, than just the physical assembly of atoms in our bodies. There are things I don't know that we'll ever get a full understanding of.


Categories, numbers, logic, grammar, etc. all map onto physical systems, but aren't necessarily directly linked. This is "metaphysics" in philosophy, and is essential in order to even reason about physical systems. Just the concept of a "physical system" is actually metaphysical, but without metaphysics, you can say nothing to anyone about anything. Metaphysics is just generally taken as a "given", but is worth evaluating itself.

Then you'll run into the "realism" vs. "nominalism" debate and you'll understand the philosophical underpinnings of the current culture wars in America.


Perhaps but with a modern/futuristic twist.

> The cells in our body have one by one been replaced with new ones and the old ones have died off.

This is not the case for our "talking" neurons, which is what I was trying to limit this thought experiment to. I think a lot more folks would be OK with preserving their biological brain as-is within a robot/clone if that was the only option, understanding that the body gets (mostly) replaced. Although a few in this thread have alluded to the fact that we might be missing important relationships with the rest of the body, such as the nervous system and gut biome.


If you further develop this thought, systems might be capable enough to implant core desires into you before transferring the copy into your new body. "You'll love Coca Cola no matter what, and capitalism".

The above scenario is if you get re-implanted into a self-evolving, autonomous biological entity, unlinked again from the system. If this is not feasible and the only solution is to embed you into a robot with an uplink to the internet, because "why not?", then my biggest issue with a digital self is that there are no guarantees of having a proper firewall, which would amount to total surveillance:

  Thoughts are free, who can guess them?
  They fly by like nocturnal shadows.
  No person can know them, no hunter can shoot them
  and so it'll always be: Thoughts are free!
https://en.wikipedia.org/wiki/Die_Gedanken_sind_frei


That's still a copy, just one where both copies remember their life before being copied and equally think they are the original. The original will still experience aging and death.


Coming from an atheist, what if there is a metaphysical dimension, and the computing structure of a processor, brain, whatever, necessarily positions it in a unique space within that dimension, and this dimension is what holds consciousness? The idea doesn't really matter, what matters is that perhaps copying a brain actually does move its consciousness.

Easy to prove or disprove once you can actually copy a human being, as the person should then report being simultaneously aware of both bodies.

Other than that, it's possible that by the same logic of you not being the same as your copy, you're not the same after you sleep: the continuity of your consciousness is interrupted, and the new you in the morning thinks he is the old guy, having all the memories, but he actually isn't. Think about this: what if you were replaced with a clone while you sleep? No one would ever know, and scientifically speaking both scenarios are equivalent because they lead to the same result. Maybe enjoy today a little bit longer, now knowing you're actually going to die in that bed, and as your "clone" will wake up in the morning completely oblivious to not actually continuing your consciousness, you don't even get a witness to your tragic death, as was the case with thousands of your predecessors.

Or maybe it's even worse, because the consciousness is interrupted tens or hundreds of times during the day. Or maybe those are thousands of separate, overlapping consciousness bubbles and really you should worry only about your identity, as the consciousness is just short-lived fireworks on top of the actual you.

I guess it would be nice to know what consciousness even is.


Numbers are metaphysical, as is music. They both can manifest physically (three cars and three cats, or a Bach concerto making sound waves that vibrate my eardrum). But what makes music musical is that it's a pattern of sounds over time. Patterns are not physical. Patterns map to the physical, but they are above the physical. You, too, are a pattern that plays out over time. You are mapped to your physical body, but you are the pattern that is embedded in the body. You are a pattern.


We can't copy a person anyway. We're going to find out that it requires an order-of-events - an order-of-experiences - aspect that can't be recreated to get the actual person. Everything else will end up being a weak simulation (even if it gets a bit closer over time with improvements in simulating).


In theory, if you had an atomic replicator, you should be able to make a copy of a person. I'm not saying it will ever be practically possible, but I don't see any fundamental laws of nature that make it impossible.


What's the theoretical route around the no-cloning theorem?

Pop science communicators tend to introduce the no-cloning theorem in the context of making a copy of a person being impossible, but they could be applying it wrong.


Unless some quantum effect means that isn't actually true - a "great filter" of sorts. You could make a copy and it just doesn't turn on, due to unknown unknowns, and you can't know why.


You're essentially suggesting 'maybe the laws of physics would stop applying here for unknown reasons', but it's not plausible that would happen absent evidence.


I was trying to elegantly hint that there is probably a lot we don't understand. :)

https://pubs.acs.org/doi/10.1021/acs.jpcb.3c07936 / https://pmc.ncbi.nlm.nih.gov/articles/PMC10671017/


That is true (in that it is a statement always true of all subjects), and I don't think anyone was disputing that. Simply pointing out that we don't know of any reason an identical copy in an identical state wouldn't obey the laws of physics and perform identically.


But the active electricity and chemical processes that denote life are more than just the absolute distribution and position of atoms; can you replicate the velocity of every electron, etc.?


That law of nature is quantum mechanics. Quantum state can be teleported, but cannot be duplicated.
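
A toy numpy sketch of why (this is the standard linearity argument, nothing specific to brains): a CNOT gate copies the basis states |0> and |1> perfectly, but fed a superposition it produces an entangled state rather than two independent copies.

  import numpy as np

  # CNOT maps |x>|0> to |x>|x> for the basis states x in {0, 1}.
  CNOT = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]])

  zero = np.array([1.0, 0.0])
  one = np.array([0.0, 1.0])
  plus = (zero + one) / np.sqrt(2)       # the superposition |+>

  def try_clone(psi):
      return CNOT @ np.kron(psi, zero)   # feed |psi>|0> into the "cloner"

  print(try_clone(zero))                 # |00>: basis state cloned
  print(try_clone(one))                  # |11>: basis state cloned
  print(try_clone(plus))                 # (|00>+|11>)/sqrt(2): entangled
  print(np.kron(plus, plus))             # what a true clone would look like

No linear (i.e., quantum-mechanical) process can make the last two lines agree for every input state; that is the no-cloning theorem.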


> A copy of your brain/personality/you... isn't you... it's a copy

This reads a bit like "a copy of Super Mario Bros isn't Super Mario Bros, it's a copy".

It has all of the bits that let you distinguish Super Mario Bros from Donkey Kong or Super Mario World. Why isn't it also Super Mario Bros?
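
If identity is just the pattern of bits, a byte-for-byte comparison settles the question. A minimal Python sketch with hypothetical file names (illustration only, not a claim about any particular ROMs):

  import hashlib

  def fingerprint(path):
      # Reduce a file to its pattern: the bytes and nothing else.
      with open(path, "rb") as f:
          return hashlib.sha256(f.read()).hexdigest()

  # If these hypothetical files hash the same, nothing that makes the game
  # that game distinguishes the "original" from the "copy":
  # fingerprint("smb_original.nes") == fingerprint("smb_copy.nes")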


Of all things, The Venture Bros convinced me otherwise that the copies “are” me, enough to matter anyway.


quiet, Dean


I've thought a lot about this, and realized that there is no continuation in our normal life anyway. What we experience is a huge and omnipresent blind spot because we cannot see the future.

Even 5 minutes from now, you don't know where you'll be, and you will be a completely different person based on the events of that 5 minutes.

Every moment that passes, the you of the moment dies.

What you can have, is a you in the future that remembers and understands the past you. That only happens if you preserve your memories either by not dying, or by preserving your internal memories in some other way.

That's all you get. A copy in the future that remembers, or nothing at all.


It's Villeneuve's Enemy.

Thinking about this even a little makes me want to throw up, because this can never be tested. Yeah, I'd be dead, but technically, I'm alive. I'm alive according to everyone including myself (the new me). So I'm not dead. But like you said, I died and am dead and don't experience any more life. If all evidence is that I didn't die and still exist, then that's the fact.

We can theorize that our consciousness continues, but there is NO WAY to ever actually test it, because all experience is that consciousness continues whether or not that is the case. Sprinkle in some quantum suicide and my stomach is doing backflips from the existentialism OD.



From what my experience tells me, many people prioritize their children's lives over their own. Kind of like an insurance policy that lives on, carrying their values further after they have died. So this is not really that much different. Your own body is nearing its end, but a new vessel is carrying your values and your influence on the world further. Even if it is physically separate from your own body, it's close enough for me to be considered living beyond my death.


> That's not immortality, not for me personally, not if I die.

What if: at the point of copying there are two of you, both of whom don't want to die?


I'm not a huge Trekkie but I recall the reason Bones McCoy wouldn't go in a transporter was he believed it killed the person and created a copy.


There was an episode where it didn't go as planned.


Perhaps we could just "Ship of Theseus" our consciousness? Slowly augment with cybernetics, then replace the meat over time.

It's all just hypothetical without knowing what consciousness/qualia really "is," and as far as I can discern we've basically moved no closer to understanding that since the very beginnings of neurological research.


Is your savegame you loaded today still the same game you started yesterday? Or is it a copy of yesterday's game running forward? Does it matter?

Are electrical processes rebooted from chemical memory every morning when you wake up or after you have a seizure still the same you? Or is it just a fresh copy of your mind that dispersed when electrical signals in your brain lost continuity?


Not that it makes things any better for the original you, but the copied you would presumably feel and believe themself to be the real you. From their perspective, "you" really are living forever. To proceed with the copying would be a sort of act of kindness for yourself.


Some people never learned about pointers / pass by value vs reference and it shows
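
In Python terms, a minimal sketch of the distinction being pointed at: a reference is another name for the same object, while a copy is a separate object that merely starts out equal.

  import copy

  me = {"name": "me", "memories": ["went to sleep"]}

  reference = me                  # by reference: another name, same object
  value = copy.deepcopy(me)       # by value: an equal but separate object

  me["memories"].append("woke up")

  print("woke up" in reference["memories"])   # True: the reference is me
  print("woke up" in value["memories"])       # False: the copy has diverged

On this analogy, a brain upload is the deepcopy, not the reference.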


That's easy to solve. Just make the copy while you are still alive, and make it so that you can control both brains and bodies. That way you should be able to shut down the old body and keep on living.


Let's not pretend that we understand consciousness. It's like talking about the movement of stars without understanding gravity.


Your memories and biology are you. If someone makes a perfect copy of you, disintegrates your old “you” and then wakes the “new you” up how is it any different?


A copy is a copy. On the other hand, actually transplanting my brain into another body is definitely living forever because the brain is me.


Reminds me of the scene from the show Pantheon where the scanning process kills you, because you need to scan and peel the brain layer by layer.


The trick is to do it slowly, a few grams at a time. A one-way ticket on the Ship of Theseus.


As the article says, a book is not the ink, but the words. I am not my physical atoms, but the connections that form my thought patterns and memories. If it were possible to make a perfect copy of those things, the copy would be "me". If the original still existed, that would also be (a different) me.


The line I like to use when people talk about this sort of 'transferring' your consciousness: imagine there was a 'perfect' chatbot trained on your every experience, thought, and sensation.

OK, now your consciousness is 'transferred'. I promise to run you.exe plenty. It's an exercise in self-delusion, even if this was possible. That's not you any more than a 'you' created by a taxidermist and shoved in a closet is.


Is the person from before you last slept the same you? How do you know?


If not, I should have had to do something with her body when I woke up where she went to sleep.


I don't think that's the thought experiment. We're not talking about physical bodies, we're talking about consciousness. When you go to sleep, does your consciousness cease to exist, to be replaced with a new one, with all your memories and behaviors and attitudes, when you wake up?

I have memories of being me and doing me things yesterday, but was that really me doing those things, or was that a different consciousness that doesn't exist anymore, and my memories are just the high-fidelity recorded experiences of someone else?

And on top of it all: if that's the case, does it matter?


I don't think consciousness exists separate from the body, so this sounds to me like a contradiction in terms more than an actual thought experiment. It's the kind of thing a technologist can propose because the syntax of the sentences makes sense, but from a philosophical one, it's no more a "thought experiment" than a 4-sided triangle.

There's a 1:1 relationship of consciousnesses and bodies they can possibly inhabit, and we do not now and never have had evidence to the contrary.


Something I think about a lot is that people tend to compare whatever the most recent innovation was to humans.

It used to be that what made you alive was electricity; you could Frankenstein shock yourself back to life.

It used to be that you were a clock. Your gears wound up at birth, and then eventually you wore out. You needed repairs.

People love to use metaphors, but none of these things are the full picture. Just because computers are so complex doesn't make you more correct. Your brain isn't the whole of your mind; we already know that's true. Why is this silly nonsense entertained?


This is a pretty big philosophical question. There's no universal answer, just opinions. Your conclusion is not the obvious one for everybody.

What happens if you have an identical copy, down to the atom - totally impossible to distinguish? You're put to sleep and your mind is copied over. When you both wake, which one is "you"? Both copies think they're "you", and both are correct. Each has the same sense of self and continuity of identity. Maybe at the moment of synchronization "you" ceased to exist as you branched into two new identities.

Say you upload into a computer. From the copy's perspective, it is "you", it's the same self. It might view your biological body like you'd view an old computer after you finish migrating your files to a new one.

Say you destroy the biological body, or wipe its brain once you copy. Does that change the equation? If you destroyed one of the identical clones, is it even relevant to ask which is "you"?

Personally, I think Altered Carbon comes close to how our society will solve this problem. The original copy has its brain wiped and the new copy is "you" by cultural consensus. Truly duplicating a single identity is strongly taboo and illegal.

I think this is a question that either matters to you or it doesn't. In my opinion, it's irrelevant. I, the original "me", am totally free to agree with the copy that it is "me". I can choose to give it my social and legal identity and agree that "me" is no longer "I". My personal choice is to destroy the original, but one could also choose to let the original body continue and assume a new identity, live a new life, or whatever.

I view this the same way I do my past self. The person I was ten years ago is not "me", it was a totally different person. That past self lived a different life and had a sense of identity that isn't at all like what I have today. That past me, the ego that called itself "me" died a long time ago and the "me" now is a different self identity built out of new pieces. In my worldview, "me" is a transient thing. The self is not one fixed thing, it changes over time and sometimes it's replaced. I don't see the idea of transferring my sense of self to a new body as anything more extreme than any other kind of ego death.

I choose to resolve this problem with practicality. I agree with myself that the new copy is "me". My social and legal identity, as well as my own sense of self transfer to the copy. My choice would be to destroy the original copy. Barring that, it would assume a new identity and live a different life far away. It'd get a memory wipe if available. I can make the choice to sacrifice my ego and allow "me" to be assumed by someone else. To me, even to the original copy, the new copy is me. In my opinion, "me" is immortal in the digital copy, even if "I" remain in a mortal body.


I used to buy into this kind of stuff, but I've become more and more skeptical of the idea that you would still be yourself if your brain could be preserved/emulated/transplanted/whatever.

Our nervous system extends into our bodies. We feel emotions in our bodies. People with certain kinds of brain damage that prevents them from feeling emotions normally also experience trouble making rational decisions.

More recent research has been hinting that we may even hold certain types of memories outside our brains.

Humans have always been drawn to neat, tidy ideas, especially ones that draw clean boundaries: it's an appealing idea that our consciousness lives solely in our brains, and that our brains could function independently of our bodies, but it seems unlikely that it's really that simple.


As a neuroscientist working on brain computer interfaces, it's painfully clear to me that we are absolutely nowhere close to understanding the full complexity of the human brain in the manner required to simulate or reboot someone's consciousness. It's not even clear yet what level of abstraction is required. Do we need to map all of the synapses to get a connection graph, or do we need to map all synapses plus the synaptic proteins to assign connection weights too? And this is ignoring other types of connections, like gap junctions between cells, ephaptic coupling (the influence of local electric fields on neurons firing), mapping neuromodulator release, etc. On one hand, it feels like irreducible complexity. On the other hand, you can lose about half of your neurons to neurodegenerative disease before you start noticing a behavioral effect, so clearly not every single detail is required to simulate your consciousness. It would be a MAJOR leap forward in neuroscience to even understand what level of abstraction is necessary and which biological details are essential vs. which can be summarized succinctly.

Anyone claiming to take your brain and slice it up and have a working model right now is currently selling snake oil. It's not impossible, but neuroscience has to progress a ways before this is a reasonable proposition. The alternative is to take the brain and preserve it, but even a frozen or perfused brain may have degraded in ways that would make it hard to recover important aspects that we don't yet understand.

It is, however, fascinating to do the research required to answer these questions, and that should be funded and continue, even if just to understand the underlying biology.


In addition to all that we don't know about synapses etc., I've often wondered whether even mapping all the "hardware connections", so to speak, would be enough. You'd have everything in the right place, but what about the "signals" running on it? Does a certain amount of constant activity on these circuits constitute the sign of a "living" brain vs. a dead one? How much of our consciousness is really in the topology of the circuits, and how much of it is simply defined by the constant activity running around in them? I assume neural circuits form loops consisting of synapses that reinforce or suppress activity. If these signals going around and around ever "stop", can they ever be started again with the same "patterns"? What if these patterns, the living "software", are at least partially what define you?

Well, anyway, that's my armchair crackpot neuroscience theory for the world to consume ;). I'm sure there must already be a name for the idea, though.


This article [0] may help here:

Six of the sheep were given a single higher dose of ketamine, 24mg/kg. This is at the high end of the anesthetic range. Initially, the same response was seen as with a lower dose. But within two minutes of administering the drug, the brain activity of five of these six sheep stopped completely, one of them for several minutes – a phenomenon that has never been seen before.

“This wasn’t just reduced brain activity. After the high dose of ketamine the brains of these sheep completely stopped. We’ve never seen that before,” said Morton. Although the anesthetized sheep looked as though they were asleep, their brains had switched off. “A few minutes later their brains were functioning normally again – it was as though they had just been switched off and on.”

0: https://www.technologynetworks.com/neuroscience/news/sedated...


Just to add a current link to this conversation:

https://www.scientificamerican.com/article/consciousness-mig...

An article suggesting that consciousness is embodied in the active fields, not the synapses themselves.


Losing half of one's neurons caught my attention. I believe we had a post about how LLMs with half their weights trimmed away were still functional. Interesting.
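
For what it's worth, the usual trick in those posts is magnitude pruning: zero out the smallest-magnitude weights and keep the rest. A toy numpy sketch (illustration only; real results depend on the model and the pruning method):

  import numpy as np

  rng = np.random.default_rng(0)
  W = rng.normal(size=(4, 4))        # stand-in for one weight matrix

  k = W.size // 2                    # drop the smaller half by magnitude
  threshold = np.sort(np.abs(W), axis=None)[k]
  W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

  print(f"zeroed: {np.mean(W_pruned == 0):.0%}")   # 50% of entries

The loose analogy to losing half your neurons is that both systems seem to carry a lot of redundancy.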


On one hand, I wonder if a gradual transition would work. Spend enough time over the years mirroring your conscious patterns onto a computational substrate, and they might get used to the lay of the land, the loss of old senses and the appearance of new ones. There might not be an ultimate "stepping in", but something like you might be able to outlive you, on a substrate that it feels happy and comfortable on.

On the other hand, the idea of "simulating your consciousness" raises questions beyond just cognition or personality. A mechanistically perfect simulation of your brain might not be conscious at all. Spooky stuff.


Keying off your comment - in the field of neuroscience, is consciousness viewed as a kind of simulation?

(I'm just a programmer so it's fascinating to me to consider how actual brain scientists model consciousness in their work.)


Imagine the bugs introduced in trying to make a digital copy of a brain. Terrifying for the subject.


There are going to be a million artificial minds of various levels of capacity before the first human mind is accurately simulated.

By that time we will be so accustomed to glitching artificial minds being created, modified, bugged, and debugged that today's moral conundrums - "is the copy me or not", "is it ok to create a hobbled copy of someone" - are going to seem quaint, a bit akin to counting angels on the head of a pin. Mangled and molded consciousness will be as mundane as computation itself.


For example, it’s widely reported that organ transplantation, especially heart transplants, may cause personality changes associated with the donor.

https://www.mdpi.com/2673-3943/5/1/2

https://pubmed.ncbi.nlm.nih.gov/31739081/


Do we think the personality changes are due to the heart itself, or just due to the minor brain damage accompanying an incredibly invasive surgery?


General anesthesia is not good for the brain.


Replying to myself to provide more context.

In my PhD work, I helped conduct the human portion of a study on this topic, contributing to some discussions at the FDA [1]. The idea was a bit controversial then, and I've had a few anesthesiologists get mad at me for it, but the general pattern has been replicated quite a few times now, such that the field has largely moved on from 'Is something bad happening?' to 'Why does it happen, and how do we prevent that bad thing from happening?' [2]. So it has been a gratifying excursion from my typical research before and since then.

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC4168665/

[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC9750936/


Thanks. This is purely anecdotal, but we had a family member whose child was under anesthesia for a severe respiratory infection. He's been severely developmentally delayed in his first year, and it's unclear to us what damage was done.


Thanks for sharing. It is difficult to know for certain. If the respiratory infection led to hypoxic damage, then that could also contribute. I have not kept up with the field, but generally the most sensitive period for anesthesia was before 4 years or so. As I mentioned briefly, most of my work is in different areas of research so I haven't kept up to date.


The two sources you've given are both about general anesthesia in infancy, though. Are you saying it might extend to adults?


Is there any reason to suspect that adults suffer the same effects as infants? (Not asking to be combative, just curious whether children are uniquely affected because their brains are still cooking.)


You and the other commenter bring up good points. Developmental neurotoxicity (with lesser or no effects in older children and young adults) is, I speculate, probably due to differential gene expression during early development versus later, when genes related to development are suppressed and genes related to maintenance are more abundantly expressed. The developmental neurotoxicity probably works through different mechanisms than what is termed "postoperative cognitive dysfunction" in the elderly after general anesthesia [1][2], about which all I know is that it is a thing. If I were to speculate, it would be that in the elderly there are fewer redundant cognitive resources, so detrimental effects on cognition are magnified. I know that it used to be thought that post-operative dysfunction is temporary, but it seems likely to me (again, speculation) that there is both recovery and permanent dysfunction, and the dysfunction just becomes a little more difficult to detect. Going back to my paper: we used a method to disentangle two memory processes, recollection (explicit recollection of experiential details) and familiarity (a general feeling of familiarity with things you've seen previously), which both contribute to memory performance but tend to be differentially affected by neurodegeneration (recollection is more affected, and generally more hippocampal). Sometimes, when these processes are not accounted for, a memory test will fail to find differences because patients rely on familiarity to answer memory questions.

[1] https://scholar.google.com/scholar?as_ylo=2020&q=postoperati... [2] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&as_ylo...


The massed-spaced effect in organ tissues may be one explanation for this: https://www.nyu.edu/about/news-publications/news/2024/novemb...


Why would you want to go on in a world that has either left you behind or keeps making the same mistakes over and over in a cycle and won't listen to you because you're too old to understand?

And conversely, I think Kim Stanley Robinson puts it best in the Mars trilogy. Scientific progress often has to wait for the old guard to die so new ideas can be tried. Sometimes there are actually new things and they need to be allowed to cook.


I think the idea originated with Max Planck:

German physicist Max Planck somewhat cynically declared that science advances one funeral at a time. Planck noted that "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents die, and a new generation grows up that is familiar with it."


A questionable assumption.

A scientist like Einstein experienced scientific revolutions within his lifetime. That's hardly going to be the norm in the history of science, and it's also a horrible assumption to think revolutions would endlessly be occurring and reoccurring.

Also, we know when we're on the edge of knowledge, especially in cosmology and physics. We're waiting for revolution there. There's dark energy and dark matter. It doesn't matter if you're old or young; you know that your theories aren't good enough to explain whatever these are.

Scientific knowledge doesn't get swept away, especially if it's rock solid. Newtonian physics still has a lot of relevance, after all. It's just that relativity is even more accurate.


Einstein and Feynman both supposedly struggled with unseating some of Dirac's ideas.


Just imagine someone who died 50 years ago coming back and hearing skibidi toilet, no cap, ohio, etc. Then not being allowed to board a plane without a body scan, and not having money for a plane anyway, since bread was a dime and a gallon of gas was a quarter last you checked. You can't even get a job; you're just a brain, and all the knowledge work you could do is 50 years out of date.


I gotta say, none of that would make me rather want to be dead.


Everyone has their own idea of living hell. I'm sure there's a version for you if this isn't it.


I'm still alive and this shit is already starting to get to me. This last round of inflation was a kick in the nuts.


There’s a short story about uploaded consciousnesses being used as AI slaves. They go bad once enough years have gone by that they can’t speak the modern language anymore. Then they usually lapse into insanity or depression.



That’s the one.


I dunno. I’m a pretty open minded guy so if anyone is going to be immortal, might as well be me. I promise not even to shout at too many clouds.

Plus there are a lot of assholes in the world. Come on, isn't there anybody you'd enjoy watching get Ozymandias'd? I'd enjoy it.


>Our nervous system extends into our bodies. We feel emotions in our bodies. People with certain kinds of brain damage that prevents them from feeling emotions normally also experience trouble making rational decisions.

I think that may be true enough, but it doesn't have the upshot you seem to think it does.

It just means that we need to sustain not just the brain itself but the totality of the environmental conditions on which it depends. No easy task for sure, but not something that presents an in-principle impossibility of preserving brains.

I think there's a major philosophical error here in thinking that the added logistics present this kind of in-principle impossibility.

Also, talking like this starts to play with anti-science speculation a bit. Octopi actually have neurons extending through their limbs. We don't. So when we talk about consciousness being "embodied", I'm sorry, it's an attempt to romanticize the question in a way that loses sight of our scientific understanding. Consciousness happens in the brain.

Sure, the brain needs stimulus from its embodied nervous system, and may even depend on those data and interactions in significant ways, but everything we know about consciousness suggests it's in the brain. And so the data from "embodied" nervous systems may be important, but there's no in-principle reason why it can't be accounted for in the context of preservation.


> Octopi actually have neurons extending through their limbs. We don't.

You don't have neurons extending through your limbs?


I don't think I agree with you. There are multiple examples in society of damaged nervous system connections with the brain - spinal cord damage, for example - where the personality of the patient changes little. In the same sense, losing limbs or entire sections of your body (aside from psychological trauma and other psychological consequences) doesn't affect personality that much.

Of course the nervous system is much more complex, but damage to the brain almost always results in some sort of cognitive dysfunction or personality change; see the Phineas Gage case, for example.


>In the same sense, losing limbs or entire sections of your body (aside from psychological trauma and other psychological consequences) don't affect personality that much

"There aren't any changes except for all of the changes, but those changes don't count because reasons."

I don't know how many amputees you know; you may know many. I was in the army for 10 years during the height of the global war on terror and know more than most. Not a single one is the same as they were pre-amputation. Could be the trauma that caused the amputation, could be the amputation. I'm not an amputationologist.

I do assert that a holo-techno-brain will need a shit-ton of e-drugs to deal with being amputated from its fucking body.

The bacteria in your butthole are a part of you just like your brain, maybe less, but they ARE a part of you.


> Could be the trauma that caused the amputation, could be the amputation.

Given the personality changes seen in people who go off to fight in the military and who end up coming back fully physically intact, I think it's more likely that the personality changes here were caused by the trauma, not by the amputation.

I'm not saying the latter isn't possible, but absent evidence to the contrary, it doesn't make much sense to assume the personality changes occurred because of the amputation alone.

Also consider that amputation -- even ignoring whatever trauma precipitated it -- is its own sort of trauma. I imagine if someone came up to me, perfectly physically healthy, knocked me out, and cut off my leg, I would wake up and develop emotional trauma that would cause personality changes.


I see what you mean, but consider that the gut does seem to play a significant role in mood and mental health. The enteric nervous system may not hold memories, but it seems to have something to do with personality, and digestion issues can have negative cognitive effects.


Agree that discomfort can cause temporary problems, and sometimes chronic problems in parts of the body can cause life long cognitive impairment. But that is not to say that these represent "you" or your personality. You brain could still function perfectly without those body conditions.

And for the gut example, the brain actually does work normally: stomach and intestine removal (and other related surgeries) are fairly common procedures, and I don't hear of people complaining about personality changes. Of course, those types of procedures are extremely invasive in a systemic way, and not only your mental state but multiple other parts of the body need to re-adapt. But I truly believe "you" will still be "you" inside your brain

PS: I quoted "you" because discussions about the identity of oneself are much more complex; just regard it as the most high-level definition of the concept


The way various hormones influence the brain alone makes it pretty clear to me already that you'd be a completely different person when taken out of your body, and I'm pretty sure that's just the tip of the iceberg.


Coming back as a brain in a vat certainly sounds awful.

But I would assume that bringing someone back would be tied to a physical or simulated body that provided a compatible context.

Not a bad assumption to solidify in your brain preservation/restoration contract.


The 'people' that undergo this first will experience the most incredible pain imaginable if we do not figure out a way to block such pain first.


I consider that I have likely died more than twice in my lifetime already. And before this body gives up, I will have already died more times. Must simply enjoy the present and give gifts to my future self.


There is a bit of research and effort into head transplants. I wonder, if and when one is successful, how it impacts the individual. Possibly having memories from the body, or a changed personality.


Can't wait for C.S. Lewis' That Hideous Strength to become a reality


where?


>HEAVEN: The head anastomosis venture Project outline for the first human head transplantation with spinal linkage (GEMINI) [2013]

https://pmc.ncbi.nlm.nih.gov/articles/PMC3821155/

...................

>World's first human head transplant successfully performed on a corpse, scientists say (2017)

https://nationalpost.com/health/worlds-first-human-head-tran...

.............

>First Human Head Transplantation: Surgically Challenging, Ethically Controversial and Historically Tempting – an Experimental Endeavor or a Scientific Landmark? (2019)

https://pmc.ncbi.nlm.nih.gov/articles/PMC6511668/

..........

>No Doctor Has Ever Performed a Human Head Transplant. This Neurosurgeon Says He’s Ready to Do It. (2024)

https://www.popularmechanics.com/science/a62831709/human-hea...


I agree completely. This is unsubstantiated hype.

Transferring our consciousness into "the net", or other such fuzzy concepts, is so far removed from reality as to be complete fiction. This includes freezing our brains and reanimating them later to resuscitate our lives.

These ideas not only massively overestimate the capacity of today's tech to receive something like our consciousness, but underestimate, by orders of magnitude, just how complex our living bodies are.

We only have the vaguest of ideas about how our physiology works (while we might be able to replicate flesh cells for "fake meat", we have 0 understanding or control over how those cells organize to form macroscopic organs). Applying this to the brain, our understanding is even more primitive. An example would be recent news that perhaps the brain is not sterile, but hosts a microbiome. Whether or not the brain hosts a microbiome is still "controversial".

We're still hundreds of years away from a comprehensive understanding of physiology.

But of course, we're never going to live that long, because we still believe (statistically as a species) in invisible guys in outer space that tell us we need to dismember people who believe in the WRONG invisible guy in outer space.

Our primitive, violent ape species will drive itself extinct long before we ever have a comprehensive grasp of how life works, especially to the level of understanding consciousness...


I'm also skeptical of the idea that one can "upload" consciousness and it would still be "you". I suppose this is true in a philosophical sense, but in a practical sense, subjective experience of consciousness rules the roost. It's inevitably going to be a mere copy of you. You don't get to experience any of it. Similar to a software project which is forked, I think it makes more sense to classify it as an entirely different entity at that point.

I suppose there are valid use cases for this, but I'm not that narcissistic to think the world needs eternal copies of me.

The continued subjective experience of the original consciousness is where I believe the real value lies. Digitisation of consciousness, assuming it has any sound scientific basis in the first place, would practically need to look more like the gradual replacement of brain (and bodily) matter with something more durable, enduring, and controllable. A slow process in which carbon is exchanged for silicon, or cellular damage is continuously reversed and aging kept at bay.


As long as you believe that any theory of subjective experience will ultimately be physicalist, this argument doesn't really work.

There is no continuity of subjective experience even within the same brain, you can be deeply unconscious for extended periods of time and come back.


Yeah, you can argue the same thing about going to sleep, there's no guarantee that the same "you" wakes up.

From the outside, an "identical clone" is indistinguishable.

On the inside, the clone feels exactly how you would feel.

The only problem is the "I don't want my 'me' to die" feeling.

I bet most people would be fine with death/rebirth teleportation.


> It's inevitably going to be a mere copy of you. You don't get to experience any of it.

You can make the same argument for 'you before you went to sleep' and 'you after you woke up'. The only real link you have to that previous consciousness are memories of experiences, which are all produced by your current body/brain.

Think about this: For every consciousness (including you right now) it is _impossible_ to experience anything other than what the thing producing that consciousness produces (memories, sensations, etc.). It doesn't matter whether the different conscious entities or whatever produces them are separated by time or space. They _will_ be produced, and they _will_ experience exactly what the thing that produces them produces.

With an analogy: If you drop pebbles in either the same pond at different times or in different ponds at the same time, waves will be produced in all cases. From the perspectives of the waves themselves, what they interact with is always _exactly_ the stuff that interacts with the water they're made up of. To them, the question of identity or continuity is fully irrelevant. They're just them.

Similarly, it makes no difference whether you only have the memories of the previous conscious experiences, or if 'you' really experienced them. Those situations are indistinguishable to you. The link to future consciousnesses inhabiting your body is effectively the same.


>> It's inevitably going to be a mere copy of you. You don't get to experience any of it.

> You can make the same argument for 'you before you went to sleep' and 'you after you woke up'. The only real link you have to that previous consciousness are memories of experiences, which are all produced by your current body/brain.

Except I know, empirically, that people go to sleep all the time and wake up, and remain the same person. And I know (for practical purposes) I do the same. I -- my mind/body composite -- lie down, and get up the next morning. I remain the same person.

Simply 'copying' or 'uploading' my consciousness, like a computer file, is impossible even in theory, because I'm not just a conscious mind, but a conscious mind which is also a body. Consciousness cannot be split from the material body, even in theory. Somebody upthread said that he'd seen many amputees undergo personality changes as a result of their operations -- this is an informative (if very sad) example.


> that people go to sleep all the time and wake up, and remain the same person

You have absolutely no way of knowing that last part is true. You can only see their behavior, which is identical whether they are the same consciousness or a different one from the one it was yesterday. You don't even know whether they have any conscious experience at all.

> And I know (for practical purposes) I do the same.

You do not. The "for practical purposes" points at your _body_. There is no evidence that an organic body is in any way special. If you upload your consciousness and the resulting computer 'body' works as a normal body, it _will_ generate a consciousness and that consciousness _will_ feel that it is 'you' (itself). Note that we're talking about hypothetical practically perfect computer bodies (which may be completely virtual, as long as its sensors and actuators live fully in that virtual world).

You can spin the illusion of a continuous conscious experience every way you want. It is still just that, an illusion.


> You can only see their behavior, which is identical whether they are the same consciousness or a different one from the one it was yesterday.

The first clause of the sentence is true, but the second is not.

You never directly see a thing itself, you only ever see its effects on the world. You rationally postulate the presence of water because of its clear colour, its hydrating effect on you, its tendency to become a gas at 100C, its tendency to dissolve salt, and so on. Similarly, you rationally postulate the presence of the same person, at 10pm on Monday and 7am on Tuesday, because he has the same personality, the same look, the same eye and hair colour, the same body shape, etc. We know about the presence of a thing from the presence of its effects and behaviour. If these remain the same, it is rational to believe that the person remains the same.

> You don't even know whether they have any conscious experience at all.

Again, knowledge of effects leads to knowledge of the thing causing said effects. I am aware of my own conscious experience, I see Bob affects the world in ways that are very similar to myself and other human beings, and so I rationally postulate that he is a conscious, rational being like I am. You never see a person's mind directly, but you see the effects of the person's mind the whole time. You see such effects through their body. If the effects remain the same, or change only in certain, limited ways, it's reasonable to believe that the cause remains the same (and frankly crazy to believe otherwise).

> The "for practical purposes" points at your _body_. There is no evidence that an organic body is in any way special.

The mind and the body form one substance. The idea that "my mind is one thing, my body another" has been tried and failed as a philosophical idea several times in history. It raises far more problems than it purports to solve. I know I continue through time in part because my body continues through time. You say the body isn't "special"; I don't know what this means, but I know my body, and other people's bodies, are different from other things, because they walk, talk, reason, sense, desire, and so on. (Once again, the effects and behaviour of a thing lead us to knowledge of that thing.) The idea that consciousness or self can be 'uploaded', even in theory, is pure fantasy. You and your body are one thing.

> You can spin the illusion of a continuous conscious experience every way you want. It is still just that, an illusion.

If this is so, explain how rational thought is possible. Given that rational thought involves you reasoning from A to B to C through time, how is this possible if there is no continuous 'you' that is going through time?


Very well, you think that preserving the brain, or even preserving the nervous system, is futile. But what of total biostasis, preserving the entire organism, just like the archaebacteria that live for thousands of years in ice or other extreme environments by slowing their metabolisms to a crawl?

To me, excessive negativity about the possibility of immortality smacks of weakness and defeatism. You either love life and want as much of it as possible, which makes you a friend of humanity, or prefer death, which makes you an enemy of humanity. I take a stronger line than the neuroscientist in the article. “Death positivity” like that of Viktor Frankl, anti-natalism, even faith in magical spiritual resurrections—all are anti-human viewpoints, only excusable in the past because they were copes with the inevitability of death. Now that we have reason to believe it can be averted, we owe our potential future selves every possible effort to save them from oblivion.


I’m not sure I actually believe in quantum immortality, but I think it is slightly suspicious—out of all the people you could have been born as, you just happen to be born in a timeframe where brain preservation might be possible before you die?


I see what you're saying...but...

Most people are alive right now. The population historically has been much lower, so odds are you would be born around the time high technology would support a high population.


I think that “most people are alive now” is not true (although it is often repeated).

https://www.bbc.com/news/magazine-16870579

> So what are the figures? There are currently seven billion people alive today and the Population Reference Bureau estimates that about 107 billion people have ever lived.

> This means that we are nowhere near close to having more alive than dead. In fact, there are 15 dead people for every person living.

So it is not wildly impossible that you’d be alive now, but it is fairly unlikely.
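To put a number on it: by those figures, the chance of being among the living is roughly 7 / 107 ≈ 6.5%, or about 1 in 15.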

Also, hard to say what’s in the future of course, but even if population growth levels off, you’d expect to be born in the future, right? Which brings up another question, why not?

If we are going to go along with the fully ridiculous implications here and reinterpret all probabilities as conditioned on your immortality, why weren’t you born in the far future? I’d expect people born in the future to have easier access to immortality.

Maybe birth rates will go way down if we discover immortality (lowering your odds of being born later). Or maybe pre-immortality minds will be seen as more interesting and worth preserving (increasing your odds of being kept around).


Even if being born right now had an astronomically low probability, someone would still be lucky enough.


> More recent research has been hinting that we may even hold certain types of memories outside our brains.

Not just hinting - the evidence is strong and accumulating rapidly. The gut, in particular, has so many neurons that it is considered the body’s “second brain”, to say nothing about the impact that gut bacteria have on your mind.

If you really wanted to create a copy of your “mind”, you’d have to image every neuron in your body for a thoroughly accurate copy. And then accept that your entire behavioural profile would still be missing the input of your gut bacteria, which appear to have a significant and non-trivial impact.


In computing terms (a machine we do not understand, in this analogy), it would be like cloning a live machine by taking only the CPU die, or maybe the hard drive. How many parts can you take away from a computer for it to still be the same machine? It's easy with a VM, though, or a kernel that supports many hardware configurations. Kind of a digression, but I liked this idea.
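To make the copy-versus-original point concrete, here is a toy Python sketch (purely illustrative):

    import copy

    # A stand-in for a running machine's state: nested and mutable.
    machine = {"disk": ["photos", "mail"], "cpu": {"pc": 42}}

    clone = copy.deepcopy(machine)

    print(clone == machine)  # True:  contents are indistinguishable
    print(clone is machine)  # False: it is still a different object

    clone["disk"].append("new_file")
    print(machine["disk"])   # ['photos', 'mail'] -- the original is untouched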


I don't think this is a great analogy because computers don't have consciousness (yet).

But I usually move the hard drive (or at least its contents) between machines when I get a new computer, and that's enough for me to think of it as the "same", even if I reinstall the OS on the new machine and just copy my home directory onto the new one.


If it was preserving my original brain it would definitely still be me at the core. Would everything be exactly the same? Probably not but that paradigm is more than good enough.


Ultimately, I am going to quote one of my favorite writers [0] and say that I am not afraid of a life that ends.

I don't want to be a brain in a jar. Or in a computer either. I enjoy experiencing physical sensations and interacting with the world in meatspace. And if I can't enjoy either, then just let me die.

And I apply this to not just brain preservation, but any attempt to artificially prolong the quantity of my life at the expense of the quality of my life. I do not want to spend my last years in a hospital bed hooked up to machines and unable to move. That was how my dad died, and even then he was lucky enough his partner (who he had discussed this with before and who had the authority to make the decision) eventually agreed to switch him to palliative care in his final hours. Similarly, I have seen what chemotherapy does to people, and I have long since decided that if I ever get cancer, I will refuse chemo and let myself die. I am also having a living will drawn up that includes a DNR order, multiple scenarios where doctors will be ordered to pull the plug, and a prohibition against anyone amputating any of my limbs or sensory organs even if it's necessary to save my life.

I will make sure I die with my autonomy and my dignity intact.

[0] Al Ewing. He writes comics. Read his stuff, he's good.


Do you have a source for this quote? Googling just returns this page.

I was particularly struck by:

> if I ever get cancer, I will refuse chemo and let myself die

And figured this quote must be at least 20 or 30 years old? Cancer isn't necessarily a death sentence, and many treatments are much less harsh than they were 20+ years ago.


> (…) and a prohibition against anyone amputating any of my limbs or sensory organs even if it's necessary to save my life.

> I will make sure I die with my autonomy and my dignity intact.

Amputees have autonomy, dignity, and rich lives. To believe that the loss of a limb is so severe that death is preferable is absurd and insensitive.

What if instead of requiring an amputation, he loses faculties by accident like suffering from parosmia due to COVID or having a weight crush a body part? Did he suddenly lose his dignity? He certainly lost some autonomy. What’s the next step then?


Many people end their life when they find it's too painful to live. Many more wish they could -- the debate around end-of-life issues is raging in many countries.


If having to undergo a few months of chemotherapy in order for your cancer to go into remission is "too painful to live", then I think someone's threshold for pain is way below that of the average person, to a point where that's kinda sad.

I know several people who have gone through chemo and came out the other side happy and healthy, after recovery. They live full, rich lives. They are much happier living than dead.

Sure, there are some cancers where you end up with declining quality of life for months or years before you eventually die. I wouldn't fault anyone for deciding to opt out of that from the very start. But that's not exclusively what we're talking about: the person upthread was very absolutist and rejects chemotherapy in its entirety.


What’s your point? I support the right to euthanasia, nothing in my comment contradicts that.

We’re not talking about someone in pain wishing to die, we’re talking about someone vehemently arguing they would rather die than live without a limb, without having experienced it. And their reasoning is a lack of autonomy and dignity, none of which are a given.

There are literally millions of people without limbs, half a million new ones per year in the US alone. They’re not poor invalids, they’re people who adapt and can do things we only dream of, all while living normal lives.


> To believe that the loss of a limb is so severe that death is preferable is absurd and insensitive.

No. Denigrating someone expressing their personal opinion seems absurd. Since the commenter did not impose their opinions on other people you had to put those words in their mouth to call them insensitive.

I prefer to die with autonomy and dignity as well, meaning I would like to pull my own plug. That other people might have a different threshold, or want to die differently than I might, seems neither absurd nor insensitive. The commenter just described their threshold, they didn't judge other people.


> Denigrating someone expressing their personal opinion seems absurd. (…) The commenter just described their threshold, they didn't judge other people.

My sentence does not judge the person, it criticises the belief. Learn to differentiate or you’ll be doomed to a life of ad hominem attacks and taking things personally.

If person A says they love spiders and person B replies they find spiders repulsive, there’s no value judgement passed on person A.

My remark was not a commentary on yourself, your world view, the author, or your approval of them. I don’t know you.

> I prefer to die with autonomy and dignity as well

Who wouldn’t? By itself that statement is meaningless. What’s in question is how one defines the terms.

I invite you to take a closer look at that quote and understand what it means to the people who live those situations. Let’s exaggerate to make a point: If someone said they refused to be treated by a black doctor even if their life depended on it, and followed up with the remark they would make sure to die with dignity, do you not see how that would be insensitive to black people? A writer, especially an ostensibly good one, would understand that basic sentence structure.

Again, that is a purposeful exaggeration to make a point. I’m not making a remark on yourself or the author, I am disagreeing with the belief.


I'll judge the person. Someone wanting to die because they lose an arm is nuts.


> My sentence does not judge the person, it criticises the belief.

An opinion or belief can't "be" insensitive. A person may intend to say something insensitive, another person may interpret an opinion as insensitive (as you did when dragging in amputees and people suffering from other conditions and injuries). "Insensitive" can only refer to a person's intention or another person's reaction. So calling someone insensitive for their expressed opinion does indeed judge the person.

> Learn to differentiate or you’ll be doomed to a life of ad hominem attacks and taking things personally.

Surely someone as skilled in rhetoric as yourself can see the irony of you warning me about "a life of ad hominem attacks" embedded in an ad hominem attack. Then you followed up with the implication that I don't understand "basic sentence structure." Address my actual comment rather than telling me what I need to learn and how I will get doomed for not thinking like you.

As for spiders and racists, those have nothing to do with anything in this thread. If someone says they don't want to live if they lose a limb or face chemotherapy, whether you agree with their stated choice or not, no other person or race got mentioned or implicated in the comment you replied to. Setting up a false and deliberately inflammatory analogy to make your point, equating an opinion about perceived quality of life with racism, doesn't help your argument. Try sticking with countering the arguments the commenter (and I) expressed.

Personal opinions about end-of-life care, personal autonomy, and dignity have the same flavor as religious beliefs: you can't counter them with logic. Just calling someone wrong or "insensitive" or "nuts" as some other commenters have misses the mark, because the subject involves beliefs, not facts that we can argue. One can express their own different opinion, but going beyond that starts to verge into attacks on personal beliefs, which requires making assumptions about another person's faculties and judgment, and ad hominem, all of which you have deployed in your comments.


Urk, clearly continuing this conversation is fruitless. Even after mentioning twice and with emphasis that I’d use a purposefully exaggerated example, you decide to latch on to it as if it were the central thesis, calling it “false and deliberately inflammatory”. Do you understand the meaning of “purposefully exaggerated”, of analogies and hyperbole as a means to explain a point? Stop assuming bad faith, and please go brush up on what an ad hominem is, as you keep mischaracterising it. My original comment had nothing to do with you, unless you’re Al Ewing and pretending not to be. Stop taking it personally, this isn’t even about you. You’re also conflating what other people said with my points, which is unproductive.


If someone else is free to decide that they'd rather die than lose an eye, or rather die than have to experience a few months of chemotherapy in order to be cancer-free, then I am also free to decide that those views are absurd and extreme, and reflect a deep misunderstanding of medical outcomes.

> Denigrating someone expressing their personal opinion seems absurd.

There's a difference between saying someone is foolish and saying their beliefs/opinions are foolish. The former is not what the GP did.


> then I am also free to decide that those views are absurd and extreme, and reflect a deep misunderstanding of medical outcomes.

I don't agree. You can decide that another person's expressed opinions don't align with yours, according to what you believe and think you understand about medical outcomes. The original comment didn't mention medical outcomes so I hesitate to judge how much the commenter knows about that. And I hesitate to call someone's personal views absurd. They have opinions I may or may not share. I can't make a rational argument to prove them wrong.

A person's beliefs can't "be" foolish or even wrong. Belief by definition does not come from an objective and rational evaluation of facts and probabilities. I can say I hold different beliefs, but no more.

We most often encounter this kind of argument around religion. Someone can sincerely hold religious beliefs that don't submit to rational and objective argument. We can have different beliefs but we can't prove someone else's beliefs wrong. To call a belief that you can't argue against with reason "foolish" or wrong equals calling the person holding the belief foolish and wrong. You can show that chemo can work and people with cancer can recover. You can't say how any individual should feel about that, or how they should choose to deal with a cancer diagnosis. The original comment didn't make any statement about whether chemo works or not, or whether some people can thrive with dignity after losing a limb. Rather the original comment expressed one person's belief about how they feel about those possibilities, for their own definitions of autonomy, dignity, and quality of life.


I suspect part of extending human life much beyond 120 years is going to be finding ways to delay physical adulthood, so that proportionally you still have the same time to learn and grow, and those growth hormones are still kicking around repairing things for longer. Because the quality of life 100 years after your organs have stopped repairing themselves is not going to be that great, but if you could reduce that to 80-90 years then maybe.


This seems a bit extreme. Chemotherapy and its effects can be a very temporary thing, and your quality of life can go back to normal after you've finished your course and the cancer has gone into remission. Certainly there are aggressive cancers where you'd be fighting a painful battle of attrition, but there are many cancers where prognoses are good, and quality of life once treatment is done is more or less the same as before. A blanket personal ban on chemo is reckless and shortsighted.

The prohibition against amputation and sensory organ removal is a bit nuts too. You'd rather die than have someone remove one of your eyes or ears, or say a hand or arm or foot or leg? That is profoundly sad, and intensely insulting to anyone who has had to deal with that sort of thing and has nonetheless lived a full, rich life.

I get that many medical interventions do actually have a terrible, permanent effect on quality of life, but these seem like pretty extreme views that ignore reality.


I don't know what the commenter who posted about chemo and amputation actually thinks or believes. But I hesitate to call them "nuts" or to lecture them about how they have a wrong opinion. And I would not expand their personal opinion as a judgment on people who decide they can live with the effects of chemo, or amputation, or loss of an eye, because nothing in the original comment included a judgment on other people. Everyone has their own threshold for what they consider a life worth continuing, but we should not impose our own thresholds on other people, or judge them for making different choices.

For me the question goes beyond "Can I survive chemo (or amputation) and resume something like a normal life?" When you have to face cancer or loss of a limb or any illness or injury that threatens your life, or perceived quality of life, or dignity and autonomy, you necessarily have to think about what that means for your future. Until you get a diagnosis of (for example) cancer you don't know what it feels like, or how you will react, to the fact that no matter if you survive the treatment or not, you will always have that threat and reminder of your mortality in your conscious thoughts. You think about how you might not get so lucky the next time, how much your treatments might cost, what your illness might put your loved ones through, how far you will go to keep yourself alive even when it imposes costs and obligations on other people. And you think that maybe other people will have to make hard decisions about your future if you can't. A cancer diagnosis doesn't just affect me, in other words. If I lost a leg or arm that would impose burdens on my wife and family, affect my ability to make a living. Those thoughts more than the medical condition itself lead people to arrive at opinions such as the original commenter expressed.

Having faced my own mortality already I know I think more about how my own end of life scenarios affect other people more than how they will affect me. I worry that I will suffer a stroke, or slip into dementia, before I can pull my own plug, leaving people I care deeply about with that awful obligation, and the burden of caring for me rather than living their own life. And it's that thought, not the fear of disease or dying, that leads me to my own ideas about how much I might endure, because I won't endure it alone or without cost to others.


> I enjoy experiencing physical sensations and interacting with the world in meatspace. And if I can't enjoy either, then just let me die.

Who says a brain in a jar can't enjoy either of these? Who says that isn't, in fact, what you are enjoying right now?


More realistic to make plans for dying, than to fantasize about living forever.


Cancer is no longer a definite death sentence, and chemotherapy can make it go away for good, depending on what kind of cancer it is. I'd refuse chemo after chemo in a very aggressive form of cancer too, though.


One of Don DeLillo's later good novels is about this stuff (Zero K).

I always think people's attitude toward possible future worlds is interesting. You can see a wide spread of opinion in this thread -- whether you think functional immortality would be a good thing says a lot about who you are. Ditto for colonizing other planets, automating all work, building AGI, and so on.

I suppose I'm on the side of the technologists. I think immortality is probably possible and humans should try to achieve it. But along the way it will mostly be snake oil and cults. And, of course, it's all but guaranteed that everyone in this thread isn't going to make the cut-off.


I'm certain immortality is possible, and it's also likely to be achieved, because we always do everything we can do, regardless of consequences.

But I think this is the acme of selfishness. I don't want to be immortal, and I wouldn't want to live in a world with 500-year-old know-it-alls running around "oldsplaining" everything to everyone else.

I have, thankfully, a fairly good chance of dying before that happens.


How is immortality selfish? Selfishness requires taking from other “selves” who have unmet needs of their own. But there’s every reason to believe a society of immortals could either function perfectly well without producing new selves, or that it could choose to reproduce at a slow rate sustainable with its ability to extract resources to support itself. Any new selves that were born would be provided the same opportunities that we provide new selves in the present day—breastfeeding, education, healthcare. How would that be “selfish?”

Is it selfish when a centenarian lives past 100? Is each additional year of life obtained by a centenarian “selfishly” stolen from some hypothetical unborn self?


It's funny people always talk about this in extremes.

"Do you want to live longer?"

"Yes"

"OH YOU WANT TO LIVE A MILLION BILLION YEARS?!?!"

There are values in between immortality and ~80 years.


Oh and when we have defeated death, when will people decide it's time to go? And how will they do it?


> when will people decide it's time to go?

They'll decide when they want to decide. Some might choose to actually live forever, and that's fine. Others will choose a more current-human type lifespan, and that's fine. Some will choose 150, some 300, some 1000, some 10,000. All of those numbers are fine.

> And how will they do it?

There are already humane forms of medical euthanasia performed in progressive places in the world; this question already has answers, and likely more will be developed over time. I don't think it's an important question or issue to discuss, as long as people have legal options.


They'll decide when they're ready. I would love a little more time on this planet. And when it's time, I'll hop in the nitrogen pod. People are already making that decision in some parts of the world.


Car accidents.


Which do you think the Jeff Bezoses of the world will choose?


People living forever will stall humanity. Generations and their old ways dying with them are necessary for civilizations to progress.


There's no proof of that.

Life expectancy has been increasing over time, especially in the past century or so. I don't think it's credible to suggest that civilizations have progressed meaningfully slower now that people live to be 80 or so instead of only 30, which was common in recent history.

And even if immortality "stalls" humanity, so what? People matter, not technology or some amorphous concept of "progress".


There could never be proof of that but I think it's generally agreed that the Renaissance was born out of the Black Plague.


Required reading for this theme, and one of my personal favorite short stories:

https://qntm.org/lena


> our societal acceptance of death comes from an inability to imagine anything otherwise

CGP Grey captures this sentiment nicely in this animated essay: https://www.youtube.com/watch?v=cZYNADOHhVY "Fable of the Dragon-Tyrant" [2018-04-24]


I thought that was a very poor allegory.

1. It starts with the humans being deathless, and then death showing up. This is not how things are: all life we know of, since the very beginning, dies. It may well be a "feature" of life, in that death is necessary, or at least useful, for life to adapt and grow - unlike the dragon, which showed up after life was already doing fine.

2. It very conveniently and explicitly frames death as an evil black dragon, anthropomorphizing it and pulling it out of the mechanical process that it is into a morally evil villain.

3. The difficulty and novelty of the discoveries we must make and engineering problems we must solve to beat death are several magnitudes smaller in the fable than in reality.

It seemed very pandering, and sets up death (and those who propose accepting its inevitability) as a straw man and those who want to beat it as a steel man. I would have been more sympathetic if the black dragon were some illness like cancer, but it seems to be an allegory for death itself. I'm not against research to improve health and slow aging, but I also think that there are much more important and realistic problems to solve than defeating aging.


I don't want to live more than 100 years and I don't want to come back from the dead either. Make way for the next generation and their wants.


You’re being defeatist and ignoring the evidence presented in the article—even hospice patients want to live longer. You, too, will desire to live before (and hopefully: if!) you breathe your last.

This is because the entire goal of the sentient consciousness is simply to preserve itself as long as possible. DNA has the essential goal of replicating itself in reproduction. Consciousness, by contrast, appears to have no goal other than self-preservation. People sometimes choose to sacrifice themselves, but usually only when death is inevitable and they wish to save someone else from it (Lily/Harry Potter and Medal of Honor type situations).


I'm not really being defeatist nor ignoring evidence. Perhaps I just have a different perspective. There can be moral/ethical arguments for why mortality is a good, or at least useful, thing.


Why 100? You can also make way for the next generation by living 80, or 60, or 40 years. Yet no one would be okay with that option. Funny thing is historically speaking 40 is a lot closer to the useful human lifespan than 100. So this strong belief of yours is really driven by advancements in society and healthcare over the last few decades. Why do you think that won't change drastically another few decades from now?


"Funny thing is historically speaking 40 is a lot closer to the useful human lifespan than 100."

That isn't really true. Life expectancy was historically driven down by high infant mortality and lack of medicine. The meaningful human lifespan has been in the 70s for the majority of history. (Lifespan is different from life expectancy)

"So this strong belief of yours is really driven by advancements in society and healthcare over the last few decades."

Who says it's a strong belief?

"Why do you think that won't change drastically another few decades from now?"

Because there's no real evidence to support that. Life expectancy hasn't gone up drastically over the past 50 years. Rates of chronic illnesses, including things like dementia, have gone up drastically. So even if people are living a couple years longer, they're generally sicker and it's costing more. Even if medicine makes drastic improvements, 100 is still a lofty goal. I'd be fine making it only 80 too. I'm actually skeptical that will even happen. What I do know is that I don't want it to take more than 100 years.


That's fine, but please don't stand in the way of those of us who would love to experience the world on a longer time frame, and are frustrated that the current level of medical knowledge doesn't allow it.


It seems odd to me that you would be frustrated about a fact of the natural world that you have no entitlement to expect to be different.


Why shouldn't we be frustrated by aspects of the natural world? Bad weather, disease, death and so on. Was eliminating smallpox odd because we had no entitlement to expect that?


Those things are about timeliness: bad weather one day versus another, some people getting a disease while others don't, early death versus a longer life. It's about what is reasonable to expect. It might be reasonable to expect good weather on a specific day, or even to live past the age of 50. It's unreasonable to expect to live indefinitely.


Yeah it's probably not 'reasonable' to expect to live on. But sometimes the technical possibilities open up to do something new.

Shaw quote: ‘The reasonable man adapts himself to the world; the unreasonable man persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.’


Sure, if the situation changes in the future, then it could be reasonable. That's not the case currently.


Ironic that you would post this to a digital forum, the very concept of which would never be found in nature, and would have been beyond fiction and magic only a few hundred years ago. Nature has no inherent will and very often does things that are awful, like cancer. If anything, it's our obligation to correct its failures where we can.


Some people just take longer than others. I'm gonna be as patient as I can be with them.


Deep down, many people want to be a kind of god. Our technological progress lets that illusion of godhood seem achievable.


Yeah, I find the need to live forever kind of.. juvenile? You can’t let go of your ego for long enough to realise that at some point it’s better to make room for a new human with new perspectives and new ideas?

I like to think of it this way: if life were a game, would you want to play the same character forever? No... if you’re gonna keep playing the game, it’s more interesting to start from scratch now and then. I don’t believe in reincarnation. There’s no need to. What you really are, deep down, is an instance of humanity. Almost all your genes and all your culture come from, and are shared with, other humans. Any new instance (new human) is you playing a new character, essentially. If you’ve contributed to shaping the world you’re leaving behind, this is even more true.

Unless you believe in a soul in the Christian/Jewish/Muslim sense, I guess, but then why would you fear death?

IMO the pursuit of immortality is far more dangerous and far more likely to kill humanity than AI. At the least, it may make us deteriorate into insignificance. Humanity is a superorganism, and we have a name for the phenomenon where part of an organism figures out how not to die and yet still replicates: cancer


We don't need to live forever, as shown by the fact that we've gotten by without it so far, but death is kind of depressing. I've never really gotten the distinction whereby, say, killing millions in the Holocaust is terrible but similar millions dying through age is desirable.


That distinction should be very clear - killing vs dying.


Both are dying. One killed by age, one by gas. My grandfather and his brother went those ways. I'm not sure the years of Alzheimer's were much better than Auschwitz.


You're missing that one is avoidable and was intentionally caused. It's not reasonable to expect to be murdered. It is reasonable to expect to die eventually of some natural cause or accident. Given constrained resources, it is a necessity that previous generations die to make way for new ones.


Just want to plug Pantheon as a TV series exploring this idea with excellent storytelling. (https://en.wikipedia.org/wiki/Pantheon_(TV_series))


Really can't recommend this show enough. As soon as they brought out the dining philosophers problem I was hooked.
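For anyone who hasn't met it: the dining philosophers problem is the classic illustration of deadlock. A minimal Python sketch (mine, purely illustrative), using the standard fix of acquiring locks in a global order:

    import threading

    N = 5
    forks = [threading.Lock() for _ in range(N)]

    def philosopher(i):
        # Deadlock avoidance: always pick up the lower-numbered fork first,
        # so no circular chain of waiting can form.
        first, second = sorted((i, (i + 1) % N))
        for _ in range(3):
            with forks[first]:
                with forks[second]:
                    print(f"philosopher {i} eats")

    threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()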


As usual, it must be warned that living forever means you can be tortured forever. Interesting that an article on Dante is getting upvotes today...


Eternal life doesn’t necessarily mean being impervious to harm. If you live indefinitely because your flesh brain was preserved or there’s a digital copy of you on a hard drive, a simple drop on the floor could terminate your existence.

And if we’re talking about fiction, there’s no obligation to make those lives unbearably immortal either.

https://www.youtube.com/watch?v=dlcxokM970M

> Our flesh is indestructible. Our lives are never ending. But not even in the dumb vampire way where after a while you hate it and you can’t die. We can die whenever we want. We just don’t have to.


Catholic theology actually justifies the belief in Hell by arguing that an eternity of suffering in Hell is a blessing, because it admits the one benefit of existence itself, while total annihilation has no redeeming factors whatsoever.


I was raised Catholic and it's stuff like this that turned me into an atheist.


The existence of hell as a place as opposed to a “state of mind” isn’t really that well established in Catholic doctrine.

Not after Vatican II anyway.


What is the source of that information?


Yep, even a life-sustaining brain vat that keeps you alive for 1000 years while feeding you some bad stimulation is not very hard to fathom


Even worse, no stimulation... And with enough trickery, maybe avoid your brain going into a coma or even sleep.


Don’t threaten me with a good time!


I wish we'd reframe the way this gets talked about. It's pedantic, but we can't escape death. All matter in the universe will eventually decay or transmute into something very different from what it now is. And my answer to the question posed, of when I'd want to die, is that I don't know. I can't say 150 years. I can't imagine a specific age I'd hit at which I'd want to die. But at the same time, while I might want to outlive the Earth, I don't want to outlive all baryonic matter and somehow persist into the age of the universe in which all other matter is black holes, and there is neither light nor sound, nothing to touch, and all I would ever experience is quadrillions of years of utter loneliness. There is no immortality, and nobody would want it if they really thought about what it would entail.

But cessation of aging would be wonderful. I'd love to live indefinitely and not have my body or mind noticeably decay. Research like this should be done, but we need to be honest about what we're trying to accomplish. Nobody will ever escape death, and when you start getting to large enough numbers, I'm not sure it would make any difference to live longer. Even if we somehow achieve brain uploading, which as far as I can see doesn't entail you living longer so much as making copies of yourself with different identities, every storage medium has a capacity limit. At some point, the only way to form new memories is to evict old ones, and the experience of living 90 trillion years won't be any different from 1 trillion years if 1 trillion years of experience is all you can store.
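The capacity point is essentially the cache-eviction constraint from computing. A toy Python sketch of the idea (the names are mine, purely illustrative):

    from collections import OrderedDict

    class BoundedMemory:
        """Fixed-capacity store: writing past capacity evicts the oldest entry."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()

        def remember(self, key, value):
            if key in self.store:
                self.store.move_to_end(key)  # refreshing a memory keeps it alive
            self.store[key] = value
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # the oldest memory is gone

    m = BoundedMemory(capacity=2)
    m.remember("epoch 1", "first trillion years")
    m.remember("epoch 2", "second trillion years")
    m.remember("epoch 3", "newest era")  # "epoch 1" has now been evicted
    print(list(m.store))                 # ['epoch 2', 'epoch 3']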

That said, we need to also be humble about what is even achievable. The very idea of a high resolution scan capturing the entirety of your brain state is already science fiction. We have another home page story right now about controversy over whether the brain has a microbiome. The only reason that's a question is because we have no means of opening up a living brain to see. We can't even accurately measure a person's body fat without dissection. The limitations of non-invasive remote imaging that don't kill the animal being imaged are quite severe and constitute a large reason medicine isn't more effective than it is. There is no technology we are on a known arc toward achieving that will make it possible to capture molecular-level detail of an entire brain as it is still running. I don't see how you can base an entire research project on a premise that doesn't exist and we have no idea if it ever will exist.


One of the absolute best short movies I've ever seen is The World of Tomorrow.

The whole thing is cute, but every time I read about brains in boxes I can't help but think of this scene about grandpa's consciousness being uploaded into a cube:

https://www.youtube.com/watch?v=4PUIxEWmsvI&t=88s

    We are also able to download correspondence from him. [...] I will read one of his letters to you now.

    "Oh. Oh God. Oh God. Oh my God. Oh Holy Mother of God. Oh, oh, oh, oh, oh God."


Once the brain has been powered off and then back on, is it even "you" anymore? Hasn't that person died and a new one been "born"?

I used to go a little bit crazy thinking about this when watching people get transported in Star Trek.


Thinking about that is like getting spooked by your reflection in a mirror - instincts being used in a situation where they don't apply.

If there's only one you at a time, neither you nor an outside observer can tell the difference.


If all the theory is correct, yes you are still you. Maybe damaged you, but you nonetheless.

Imagine the Ship of Theseus thought experiment, but instead of replacing it part by part, you store it in a dry dock (it loses its function) and some time later put it to sea again (it recovers its function). For all intents and purposes, this is still the ship of Theseus.


I like how the video game Soma showed it. If you fork a brain you kind of have a 50% chance of it continuing into the copy, and 50% chance of you being "left behind". https://www.youtube.com/watch?v=x790AjID0FA


I get the point, but I think the most self-consistent answer is that your conscious experience has a 100% chance of staying in the original body. (And similarly for destroy-and-remake teleporters.)


Indeed, the coin toss explanation "Catharine" uses on the Simon copies is merely a manipulation to ensure he continues following her instructions so that the ARK is launched, obscured to the player by the necessary mechanic of always controlling the surviving Simon copy. The only "real" Simon died in Toronto.


LOL, "we're not married, your husband died in the transporter".

"Ah can I have his laptop? He's got some saved games he wanted me to have"


See the Bobiverse books for some fun sci-fi on this.


Do memories and/or brain functionality rely on volatile storage? So if you die, that storage is gone.

How our brain works may rely on more than the structure of the brain. When you are brought back (booted?), things may not work the same.


Not volatile storage, for most of it. If you get knocked out or similar, you keep most of your memories. It's probably partly down to the structure of what's connected to what, and partly chemical changes at the synapses, although I don't think it's fully understood.

That'd make it pretty hard to get memories out of a brain sample, as it's hard enough just to see the structure in an electron microscope. I don't think they presently have any way to log the chemical changes in the synapses.


From what I've seen of research on apparently-dispersed storage of memories in worms, I'd not be at all surprised to find that a human brain separated from a body has (assuming we could "boot it up" in that state, as you put it) lost a lot of memories or functionality beyond the obvious, even assuming we could perfectly preserve everything present in the brain per se.


People get struck by lightning or have epileptic fits, and it doesn't wipe their personality.


The brain is not enough; the massed-spaced effect allows for the formation of memories in non-neural tissues throughout the body.

https://www.nature.com/articles/s41467-024-53922-x https://www.nyu.edu/about/news-publications/news/2024/novemb...


We can barely even do “artificial intelligence” correctly, let alone map the human brain to a digital representation. It's an interesting concept, but in order to “preserve” the brain, wouldn't one have to get their brain dissected layer by layer, thus killing the subject in the first place?

Then, storing all of that information without any loss would likely well exceed current technological limits. And the computing power needed to run the brain would have to be immense.


> We can barely even do “artificial intelligence” correctly

Even if we could, the brain is an amazingly efficient mechanism for computation. It uses such a low amount of power that it's hard to imagine a computer-based AI model ever genuinely competing with it.

If the environment and entropy are human-scale concerns, then AI is decidedly a dead end for us.


It'd be a long way into the future to do brain uploading type stuff. We are not close yet but maybe in fifty years?


Even with transferring minds, as long as physical beings are in charge of that, all people will surely die because accidents happen and entropy rises. If I live for a hundred or a billion years, I am still mortal, so there will still be existential problems related to mortality. It doesn't matter how I likely die, by cancer or by a star collapse.


I've been thinking a bit about how living on could play out without being too futuristic with the technology. Coming back from cryopreservation is probably many decades out, but we are quite close to being able to make a virtual AI version of you, working a bit like an actor playing you. Tech like Heygen is quite good with avatars and voices, and ChatGPT-type AI could only pretend to be someone in a pretty terrible way just now, but that stuff will get better, and the virtual you could act as an assistant, learn your ways and help out.

Then when the physical you passes, and maybe is cryopreserved, the virtual you could conduct the funeral-type service and say: hey guys, physical tim333 is gone but I'm still here, and you can chat to me on the web etc. Virtual you could maybe have some investments, buy gifts for the grandkids, post on HN and so on. Then in a few decades virtual you could get data from cryopreserved you and incorporate it.

Perhaps it could be done as a startup?
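A rough sketch of the assistant half, assuming the OpenAI Python client (the model name, the persona file, and the prompts are placeholders, not a real product design):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Assumed input: a sample of your own writing, used as style/context.
    persona = open("my_emails_and_posts.txt").read()[:8000]

    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a virtual stand-in for the user. "
                        "Mimic the tone and opinions of this writing sample:\n"
                        + persona},
            {"role": "user", "content": "What should we get the grandkids?"},
        ],
    )
    print(reply.choices[0].message.content)

The hard parts, of course, are everything this sketch leaves out: long-term memory, actually learning your ways over time, and any legal standing to hold investments.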


I love how people have wet dreams about living forever by uploading their mind to a computer but put absolutely 0 fucking effort into actually increasing their health/lifespan in real life, and/or waste their days/weeks/years doing stuff they hate in order to "enjoy life" later (when their body is already halfway rotten). As long as it's sci-fi and demands 0 effort, they'll suck up every single drop of hope, but as soon as there is something actually actionable they recoil in horror

imho if you're not lean and exercising every day you have no business talking about living longer; you've already refused the only magic pill there is. It makes all the difference between having one foot in the grave at 60 and still chopping your own firewood at 80, and all it takes, besides a bit of luck, is to move your ass 45 minutes a day


There's no contradiction here. You're talking about people who have already detached the concept of self from their physical body. They think of 'me' as their brain, so why maintain the body they're 'trapped' in.

People who try to solve all their problems with intellect tend to suffer from this. And many of them never learned that being in shape makes you feel a whole lot better.


Part of our personality apparently lives inside our stomachs as well, and so much of who we are is driven by our interaction with our nerve endings. I think even with a perfect copy of consciousness, it's pointless without a perfect copy of our bodies as well.

I think that AGI, when and if we ever reach that stage of technological innovation, will enable us to live in a utopian world where our bodies are impervious to biological defects or age and we have little to no reliance on food and water — where we can control all aspects of how we feel with a slider, or a thought.

Like the Lotus-eaters from Homer's Odyssey, some alien tribe will eventually show up and decide we're imperiled instead of living a perfect life in blissful stasis, and kill us to stop whatever afflicts us from spreading to them.

Anyway, I don't want the brain preservation thing, thank you.


Perfect copies are overrated.

> Part of personality lives inside of our stomachs as well

Is it the part of your personality that you like or maybe it's the part you hate?


I think that immortality would be a disaster, personally. That we die is a critically important aspect of life. I'd prefer that we work on ways to make the death process less traumatic.


Why? We have already decimated any semblance of natural selection so death is no longer a necessity from an evolutionary standpoint. Maybe immortal humans would be more beneficent because they wouldn't be scrambling to get ahead in their short time here.

If I didn't only have 30-40ish productive years to achieve whatever I will in this life it would be much easier to donate money or time to charitable pursuits.


> If I didn't only have 30-40ish productive years to achieve whatever I will in this life it would be much easier to donate money or time to charitable pursuits.

Spoiler: The haves are not going to get any more generous when they've got thousand-year lifespans. You would just end up having to spend hundreds of years grinding away at the bottom of the ladder instead of 30 or 40.


We grind away to keep the body supplied and satiated.

A transcended/uploaded mind won't have the earthly needs and would be a lot cheaper to maintain, leaving it to achieve otherwise impossible results.


Want some runtime on the good CPU? Got to work.


> We have already decimated any semblance of natural selection

We certainly have not.

> If I didn't only have 30-40ish productive years to achieve whatever I will in this life it would be much easier to donate money or time to charitable pursuits.

True, but if you lived forever, it wouldn't be realistic for new people to come into the world, which means far fewer new ideas and ways of looking at the world. That would be a net loss for humanity.


1) There's no danger of overpopulation. People have a natural tendency to reproduce slower when they feel safer.

2) Trivial argument: if people already lived indefinitely would you advocate murdering them to "make room"? Telling people they shouldn't be able to pursue a longer life is equivalent. Making that decision for yourself is perfectly fine; making it for others is not.

3) 150k people die every day, nearly 2 people per second. If fixing that tragedy creates new problems, bring them on; we'll solve those problems too.


1) Is not clearly true. Yes, 'feeling safer' pushes down on reproduction rates. But the _total_ effect on population growth could still be positive if the total death-rate drops enough -- we don't know enough to say for sure. And frankly, I think the most likely outcome is that people would be more likely to have kids if they didn't have to worry about missing out on their chance at XYZ dream.

2) Not true from most moral perspectives, including 'common sense morality'. In a pure utilitarian sense, sure, but most people don't subscribe to that. For example, choosing not to save someone from a burning building is not the same as choosing to burn them to death. Both the actor and their intention matter.

3) I don't disagree with the first half of your point (that this is a tragedy), but I cannot share your optimism re: us solving the consequent problems. If there's anything the last fifty years of modernity have shown, it's that we're actually quite bad at solving broader social problems, with new and even worse problems often arising well after we thought the original problem was settled. Consider global warming (to which the 'solution' looks to be the further impoverishment of the third world, and probably mass deaths due to famine/drought/heat waves), or how we in the US 'solved' mobility by destroying main streets and replacing established public transportation with cars and mazes of concrete. Now we've "solved" loneliness by giving everyone a phone and -- well, I'm sure you know how that went.


1) We already have a growing population, and I don't think it's inherent that curing mortality must make it grow faster. The net effect would certainly be an ongoing upwards growth (since I would hope that population never goes down), but I'm arguing that the net effect does not inherently have to be unchecked exponential growth. Immortality doesn't solve resource constraints, and resource constraints do influence people's choices. That said, I also believe that even if it did result in faster growth, that isn't a reason to not solve the problem.

2) The equivalence here isn't "choosing to not save". Choosing to push someone back into a burning building, or preventing them from trying to escape, is equivalent to choosing to burn them to death.

3) I am an incorrigible optimist and don't intend to ever stop being one. Humanity is incredible and it's amazing what we can solve over time. I don't believe that any potential solution we might come up with is worse than doing nothing and letting 150k people die every day.


That sounds like a more tractable problem than death.


Everything would scale with it, 30 or 300 years; as long as you're running the same system, it wouldn't matter.

Not to mention the fact that we live in a very finite system.


I like knowing that the world's biggest assholes sometimes lie awake at night fearfully pondering their own death. I don't want to deprive them of that.


The Great Equalizer


Wouldn't any change we make to who dies when also be natural? How is that not natural selection? Did earlier generations not attempt to stay alive?


Are you serious? Have you been around old rich people? Do you find them generous? People grow more selfish with age, because they care less and less.

The only thing to gain by abolishing biological death is perpetual civil war.


I think old people's inward focus is more a result of their physical condition than of the amount of time they have been alive.

If you could put an 80-year-old's mind in a 20-year-old body, they would probably approach life with the same optimism as other young people.

I think our behavior is more a product of our environment than some internal 'self' that is built over our lifetime. If you altered the environment from "75 years old, achy and slow body, limited future" to "25 years old, infinite future, healthy body" I would be amazed if there was not a humongous shift in behavior as well.


Plenty of the most powerful already keep causing harm just to make number go up even more, well beyond the point at which they can conceivably personally benefit before they die. Imagine if they could conceivably personally benefit because they live for centuries. Why would anyone expect that to improve their behavior?


Well, how would you like a 290-year-old Putin keeping a tight grip on Russia in the 23rd century?


I see, so this is where the God-Emperor of Mankind comes from.

purge the xenos


When I look back on my 45 years of life, there are spans which feel like a different life altogether. I thought differently and made choices that I wouldn't make today. I'd say "in my former life," as if that life ended and a new one began. I suspect youthful immortality would be a sequence of many deaths and rebirths. If you had the neuroplasticity of a 25-year-old and the experience and wisdom of a 50-year-old, I imagine it wouldn't get boring, and perhaps new ideas and modes of living wouldn't require a generation to die and a new one to be born.


What if death is just a 'feature' of how life evolved on this planet? What if we discover life on other planets that is just endless? It seems too anthropocentric to think that all forms of life must die.

Ok, that may be part of how 'WE' define life, but to me that looks a lot like an arbitrary definition.


It's a bit like fighting the ocean, imho: it doesn't matter how much you put into it, you'll always lose eventually.

It's much simpler to accept and live within your constraints than to waste your life and mental energy wishing you could be/do something you will never be/do.

If you think about it, most of the things we "fix" are extremely wonky. Even something such as a bone fracture isn't guaranteed to heal 100%, most medicines have massive side effects, organ transplants have something like 50% survival at 15 years on average, etc. We think we're getting more and more control over things, but most of it is a hack job temporarily delaying the inevitable.

Also, anyone thinking being uploaded to a computer forever is heaven on earth must live a pretty fucking terrible life to begin with


>critically important aspect of life

Nothing says "life" like being dead.


I meant "life" in the big picture, not on the individual level.


You are welcome to die from any number of preventable causes.

I choose life.


Nothing stopping you from dying. You make your personal decisions and let others make theirs.


Yes. I have long believed that all those who applaud death as a good thing — Steve Jobs most memorably in his address at Stanford — will be first in line for life extension once it becomes an option.


Don't worry, you'll die just as they will.

It doesn't matter how much you like sci-fi; even if it were technically possible, it would be reserved for the 0.001%, and you'd still be grinding your whole life just as you are now.


The future never arrives in an evenly distributed fashion, but it reaches everyone eventually.


There are some externalities at play here though...

- We currently only have one world that all living humans must share.

- Imagine the sickening amount of power some people would be able to gather given a few centuries. That can't be good for everyone else.


Well, people already gather that sickening amount of power - not for themselves, for their heirs, but still - take the soon-to-be-again US president: he probably wouldn't be where he is right now if his father hadn't amassed a considerable fortune which he inherited.


So it's bad already, and we should make it an absolutely hellish nightmare?


To be frank, the extent to which the very old now dominate science, business and politics is already unhealthy. I shudder to think what our world would look like if our most important positions of power were dominated by men born 110 years ago.

They say science advances one death at a time. Looking at congress, I think you can say the same for politics as well.


We could just have mandatory brain preservation at 50. You'd have to be biological to hold a position of authority.


I don't know why you're getting so much pushback... Planetary resources are finite. If you give up dying, you have to give up reproducing beyond the replacement rate. People like to imagine they'll be part of some small tribe of a lucky few immortals, but the reality is we'd be in exactly the same situation as today, with a population shooting past all known sustainable limits far faster than it is now. To name just one obvious problem.

Success of something like this entails a way to regulate reproduction at a far more draconian level than even China's one-child policy. I don't think any civilized nation could impose a "no child" policy and remain intact.


> but the reality is we'd be in exactly the same situation as today, but with a population rapidly screeching beyond all known sustainable limits far faster than it is today

I don't think this is a given. Most developed nations have below-replacement birth rates, for example.


Imagine people like Putin pouring billions in, easily sacrificing millions, to achieve such a thing for themselves, and ideally for nobody else. A truly terrible scenario worth fighting against.

Some folks are scared of rogue AI, when the biggest threat to mankind always was, is, and will be other, thoroughly messed-up humans with certain capabilities.


It would literally be hell on earth.


I think it’s more likely we can preserve our ego rather than our consciousness. For instance, create an AI replica of yourself that accurately behaves the same way you do. Although you would be dead, your ego can carry on living and responding to changes in the external world, and people could have interactions with you that accurately simulate how you would respond to them long after you’re gone. And as your ego learns about the world, it develops opinions closely similar to opinions you would have based on your life experience. Perhaps in this way people in power could remain in power indefinitely.


Cue Nixon in Futurama.


I'm gonna break into your houses, and wreck up the place!


Alastair Reynolds wrote some great books which touch on this concept - the Prefect series has an AI character that originated as a fatal brain scan of a person and eventually escaped...


Also in 'House of Suns' by Alastair Reynolds.

The most revelatory idea for me was how a human being could plausibly become a near god-like entity.

This isn't even the real premise of the book but a brief detour over a few pages where the author expounds this idea.

The man, Valmik, starts off as a human much like any other. As the first stage in his metamorphosis, he replaces his brain, neuron by neuron, with mechanical parts.

Once this is done he no longer needs his body.

This now frees him from his mortal shell which he discards. He now lives purely as an electro-mechanical device for centuries. He shuns all his human contacts since they are unable to keep up with him.

At this point he starts adding more artificial neurons, until his count surpasses that of his original human brain by several orders of magnitude.

But he realises that he is still subject to the vicissitudes of the planet on which the machine that is him resides.

This he couldn't abide.

At this next stage, the already-superhuman intelligence formerly known as Valmik sets off in a spacefaring vehicle.

The endlessness of space is no bother to this entity, because he has no bounds on his longevity.

And so he travels the cosmos for millennia in this state, free from the erstwhile shackles binding him to Earth.

But he realises that even this is not enough. Just one well aimed shot would destroy him.

Not good enough for an immortal.

So he transforms each of his neurons into an independent, self-sustaining machine. These machines communicate with one another via light/EM beams.

The formerly comet-sized machine that housed his consciousness was now split across these self-sustaining machines.

These machines can now drift apart. As far apart as he wanted.

His brain was now a cloud consciousness that could span the size of solar systems.

But this expansion came at a cost.

The cost was this: transmission between his independent neuron nodes, now each far apart from one another, took time, limited by the speed of light.

Thoughts that would formerly take him microseconds would now consume days. But that was no matter to this entity, which had left all his companions behind and only needed to talk to himself.

And thus Valmik, once a man, now a cloud consciousness, came to disperse himself all across the Oort cloud.

An immortal who can outlive the sun.
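
As a back-of-envelope check on those "days" (my own arithmetic, not from the book): light crosses 1 AU in roughly 499 seconds, and the inner edge of the Oort cloud is commonly put at around 2,000 AU, so a single one-way signal between nodes that far apart already takes over 11 days:

    AU_LIGHT_SECONDS = 499.0   # light travel time across 1 AU, in seconds
    SEPARATION_AU = 2_000      # rough inner edge of the Oort cloud, in AU
    SECONDS_PER_DAY = 86_400

    one_way_days = SEPARATION_AU * AU_LIGHT_SECONDS / SECONDS_PER_DAY
    print(one_way_days)        # ~11.5 days for a single one-way hop

And that's only the inner edge; across the outer Oort cloud the latency runs to months or years, so "thoughts consuming days" is, if anything, optimistic.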


Leaving aside whether or not I think this is A Good Idea[1], I am fascinated by what other limits people would run up against, if this were possible. Memory, for example. Is it even possible to retain a first-person memory for hundreds of years? Is there some upper limit above which you can no longer form new memories? Would it become much harder to motivate yourself without the pressures of a limited existence? How would it affect our psyche?

[1] I do not.


> Memory, for example. Is it even possible to retain a first person memory for hundreds of years?

It's not. But that's fine, because it's not even possible to retain memories over a typical lifespan, or even a short one, or even a decade, or even a year. You lose way more than 90% of them, and the rest get largely altered anyway.


Setting aside the actual physical and technological limitations here, I think such immortality would create a whole new kind of population problem. Population growth, in some sense, would explode. Or, more realistically, you'd have two classes of people: the mortal and the "immortal" - those who could afford immortality, and the poor, who would have to die permanently.

That said, I think this is all a pipe dream and totally infeasible.


Like so many scientific pursuits, this one has its roots in science fiction. A terrific, trashy futuristic novel from the '70s by Lawrence Sanders called The Tomorrow File[0] features preserved heads that are kept around to spout ideas in the future, when they might become useful, among many other Brave New World-type concepts.

[0] https://archive.org/details/tomorrowfile0000sand_t5i1



Several other books pop into mind, e.g., https://en.wikipedia.org/wiki/To_Live_Forever


I oughta read this one again

https://www.goodreads.com/book/show/21617.Buying_Time

particularly since my esteem for Haldeman has gone up (https://sff180.com/reviews/h/haldeman/worlds.html changed my life)


I also liked David Brin's Kiln People. Very creative and fun.


If we are talking about preserving heads, Futurama is the final word on the subject.


Personally I think the head of Vecna https://www.rpglibrary.org/articles/storytelling/headofvecna... is the ultimate head transplant story.


I was really hoping this was gonna have one of those so-bad-it's-good science-fiction cover artworks.


That Internet Archive edition is particularly boring, isn't it? The novel is his best, if you ask me. His detective fiction never rose to that level, and this was the only one he wrote in the sci-fi genre. Perhaps being relegated to the dime-store-novelist category by his other books prevented this one from getting the attention it deserves, but it was wonderfully smarmy and prescient.


When was the last time you read a book from the 1960s? Or browsed files from a hard drive from the 90s? I guess the same will happen with brains.


I'm reading a book from 1974 right now. The book I read before that was a 2013 adaptation of a book from 1605. The book I read before that was from 1956.

The book I read before that was from 1985: I started it while the author was still alive, but he died before I finished it. The book I read before that was from 1998.


Waking up in a state of confusion looking down at my new body after my own death.

"What is this, some kind of altered carbon?"


I'm kinda surprised everyone in that show doesn't walk around with full neck armor to protect their stacks. Metal gorgets should be all the rage in this universe.


What if the constituents of the brain change? Would you still feel like you, or would it be another person who just goes on being conscious? If you don't know what makes consciousness, most likely you just "die" after the brain freeze. This ain't like sleep or a coma, where your system is kept running continuously.


People have been brought back to life after being dead for more than an hour after submergence in very cold water.


But did their "consciousness" continue? We currently have no way of telling.


Exactly whose consciousness would it be, if not theirs?


Let's say you clone yourself 1:1. Who would you see through?


A somewhat different question, but interesting! I assume by clone you mean an exact physical copy, down to a molecular level.

In my view, there would now be two separate instances of "my" consciousness simultaneously. "I" would continue to see through the original one, but the copy would essentially feel the same as me, although increasingly diverging as our experiences would not be the same from that point.


Wouldn't you need to preserve non-brain systems that relate to "mind" such as gut neurons, maybe microbiota, other parts of the nervous system, etc?


A fun way to die would be to transition digitally into the cloud as data for some sort of future "humanity LLM" that would be queried in the future. The Council of Elders.


Cool idea, but it's science fiction.

If this intrigues you, try some really good science fiction on the topic, like Neal Stephenson's "Fall; or, Dodge in Hell".


More like, "with brain preservation, almost everybody has to die".

https://news.ycombinator.com/item?id=42006265

https://repaer.earth/


Is that satire?


I view death as the saviour of democracy and the most democratic aspect of life.

Without death, our fragile democracies will die. Tyrants will be in power forever.


Being able to preserve your brain after a natural death won't stop you being assassinated. A few bullets to the brain, game over.


Without spoiling too much of Cyberpunk 2077, that game offers an intriguing exploration of how this could go horribly wrong.


Similarly true for the game SOMA: https://en.wikipedia.org/wiki/Soma_(video_game)


Jaron Lanier, in his book "You Are Not a Gadget", uses the example of MIDI files: they can describe music, but even though they can sound like the real thing, they are limited by the digital format. E.g. according to ChatGPT the minimum interval between 2 MIDI messages is 0.77 milliseconds.

And then he asks what sort of limitations we might have if our minds were software, and whether we would even notice them.


> according to ChatGPT the minimum interval between 2 MIDI messages is 0.77 milliseconds.

Thank you for stating your source. However, ChatGPT isn't deterministic. I asked it the same thing and it responded that it depends on several factors, including the MIDI protocol version and the device or software used, and that the minimum interval between two messages is between 1 and 2 milliseconds.

Which of those is true? Perhaps neither. A quick web search didn’t provide a straightforward answer. Point being that we should avoid propagating even more wrong information, especially since it’s not relevant to your point (which makes sense).
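
For what it's worth, the classic figure is easy to derive for the original 5-pin DIN transport: MIDI 1.0 specifies 31,250 baud, and each byte is framed as 10 bits (1 start + 8 data + 1 stop). A back-of-envelope sketch (assuming the DIN transport; USB-MIDI and newer transports have different timing):

    BAUD = 31_250        # MIDI 1.0 DIN transport, bits per second
    BITS_PER_BYTE = 10   # 1 start bit + 8 data bits + 1 stop bit

    def message_time_ms(num_bytes: int) -> float:
        """Wire time for a MIDI message of num_bytes over classic DIN."""
        return num_bytes * BITS_PER_BYTE / BAUD * 1000

    print(message_time_ms(3))  # full 3-byte Note On: ~0.96 ms
    print(message_time_ms(2))  # 2-byte running-status message: ~0.64 ms

So neither 0.77 ms nor 1-2 ms matches the classic wire-time numbers, which rather reinforces your point about propagating unverified figures.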


Even if this worked, I find this idea morally repulsive. It just seems utterly selfish.


Next-level transhumanist absurdity...


So, I've seen Bredo Morstoel, the Frozen Dead Guy of Nederland, Colorado[0]. The 'tour' you could go on was ... something. But, at the end, we were asked to help keep Bredo frozen and helped pile on dry ice. So, they had to open the sarcophagus and get the CO2 in there.

And, yeah...

Bredo looks like Ötzi the Iceman [1]. He's just a dead body; there's no saving him, he's gone.

It was a good lesson in any thoughts I ever may have had about cryopreservation. Unless you pay someone a LOT of money, for a very long time, and somehow manage to get them to actually really truly care about you, you are just having a really strange and long funeral.

Good excuse for a party though. Those dead salmon tossers are something else.

[0] https://en.wikipedia.org/wiki/Frozen_Dead_Guy_Days

[1] https://en.wikipedia.org/wiki/%C3%96tzi


I really hate this idea. The wrong people would get to persist indefinitely -- the malignantly greedy who hoard the majority of resources and make life miserable for billions of others.


MONEY !! GIVE MONEY !! GIVE MONEY NOW !!


If this really happens, it's going to take humankind to the next level. Can't imagine the pros and cons.


There are so many ifs and buts in this idea of resurrecting the brain that it's laughable, yet "brilliant scientists" get press with it. What about the completely unknown mind-body problem, which scientism pretends doesn't really exist but each one of us knows is very real? What if you wake up 500 years later but plugged into a matrix? Who guarantees cryo-maintenance of the frozen brain?


Please let me die.


I wonder how much relief the belief would provide that, when you lie on your deathbed, you will be woken up again.

Even if the tech doesn't work, I wonder how much relief from suffering the idea that it will work in a hundred years could provide.


Those hundred years won't exist for the dying. I would personally find comfort in knowing that I will feel as if I'm waking up right away into a technologically much more advanced world.


Probably similar to the relief that the belief you will wake up from general anesthesia provides prior to surgery.


Isn't that the main promise of most religions?


It's really debatable whether religious people actually believe in eternal life. If they did, they would immediately see that their behavior drives them straight to eternal damnation.

They don't seem to care about that, and the only rational explanation is that they don't believe there's anything after death. They're nihilists. We all are.


[flagged]


Maybe you can just put a towel over the jar for those guys, like a bird cage.


"1918 when diabetes had no known treatment" What nonsense. A history of diabetes shows knowledge and treatment for the disease for thousands of years. https://en.wikipedia.org/wiki/History_of_diabetes


Those with eternal souls see the deception.



