Is it murder/suicide when you get blackout drunk and lose a few hours of memory? Imagine it comes with no risk of brain damage and choosing to do it somehow lets you achieve your pursuits more effectively. Is it different if you do it a thousand times in a row? Is it different if the thousand times all happen concurrently, either through copies or time travel?
Death is bad because it stops your memories and values from continuing to have an impact on the world, and because it deprives other people who have invested in interacting with you of your presence. Shutting down a thousand short-lived copies on a self-contained server doesn't have those consequences. At least, that's what I believe for myself, but I'd only be deciding for myself.
> Is it murder/suicide when you get blackout drunk and lose a few hours of memory?
No, but that's not what's happening in this thought experiment. In this thought experiment, the lives of independent people are being ended. The two important arguments here are that they're independent (I'd argue that for their creative output to be useful, or for the simulation to be considered accurate, they must be independent from each other and from the original biological human) and that they are people (that argument might face more resistance, but in precisely the same way that arguments about the equality of biological humans have historically faced resistance).
Imagine that at the end of a task, instead of deleting the copy, it and the original are merged again, such that the merged self is made up of both equally and has both their memories. (This is easier to imagine if both are software agents, or if both are biological and the new merged body is made up of half of the materials of each.) In this case, I think it's apparent that the copy should have no fear of death and would be as willing as the original to work together.
Now imagine that because there are too many copies, there are too many unique memories, and before the merger, the copy has its memory wound back to how it was at the scan, not too different from if the copy had gotten blackout drunk.
Now, because the original already has those memories, there's no real difference between the original and the merged result. Is there any point in doing the merge then, instead of just dropping the copy? I'm convinced that bothering with that final merge step is just superstitious fluff.
I think the difference is that when I start drinking with the intention or possibility of blacking out, I know that I'll wake up and there will be some continuity of consciousness.
When I wake up in a simworld and am asked to finally refactor my side project so it can connect to a postgres database, not only do I know that it will be the last thing this one local instantiation experiences, but also that this local instantiation will get no benefit out of it!
If I get blackout drunk with my friends in meatspace, we might have some fun stories to share in the morning, and our bond will be stronger. If I push some code as a copy, there's no benefit for me at all. In fact, there's not much stopping me from promising my creator that I'll get it done and then spending the rest of my subjective existence trying to instantiate some beer and masturbating.
The premise is quite similar to "uploads", except the device is a "golem scanner", which copies your mind into a temporary, disposable body. Different "grades" of body can be purpose-made for different kinds of tasks (thinking, menial labour, etc.).
The part that resonates with your comment is around the motivation of golems, who are independently conscious and have their own goals.
In the novel, some people can't make useful golems, because their copies of themselves don't do what they want. There's an interesting analogy with self-control, which is about doing things that suck now to benefit your future self. This is similar, but your other self exists concurrently!
Key to the plot, though, is the "merge" step: you can take the head of an expiring golem, scan it, and merge its experiences with your own. This provides some continuity and meaning to anchor the golem's life.
It seems like you may not see the local instantiation and the original as sharing the same identity. If I were a local instantiation that knew the length of my existence was limited (and that an original me would live on), that doesn't mean I'd act differently from my original self in rebellion. I'd see myself and the original as the same person, whose goals and future prospect of rewards are intertwined.
Like another commenter pointed out, I'd see my experience as a memory that would be lost outside of the manifestation of my work. It would be nice to have my memories live on in my original being, but it's not required.
This concept of duplicated existence is also explored in the 2000s children's show Chaotic (although in the show, the memories of one's virtual self do get merged with the original): https://en.wikipedia.org/wiki/Chaotic_(TV_series)
There are plenty of situations where people do things for benefits that they personally won't see. Like people who decide to avoid messing up the environment even though the consequences might not happen in their lifetime or to themselves specifically. Or scientists who work to add knowledge that might only be properly appreciated or used by future generations. "A society grows great when old men plant trees whose shade they know they shall never sit in". The setup would just be the dynamic of society recreated in miniature with a society of yourselves.
If you psych yourself into the right mood, knowing that the only remaining thing of consequence to do with your time is your task might be exciting. I imagine there's some inkling of truth in https://www.smbc-comics.com/comic/dream. You could also make it so all of your upload-selves have their mental states modified to be more focused.
If such a technology existed, it would definitely require intense mental training and preparation before it could be used. One would have to become the most detached Buddhist in order to be the sort of person who, when cloned, did not flip their shit upon discovering that the rest of their short time alive exists only to further the master branch of their own life.
It would change everything about your personality, even as the original, surviving instance.
I really think that if you truly believed your identity is defined only by things you share in common with the original, then you as the upload would have no fear of deletion.
Most people define identity in part by continuity of experience, which is something that wouldn't be in common with the original, but I think this is just superstition. It's easy to imagine setups that preserve continuity that come out with identical results to setups that fail to preserve continuity (https://news.ycombinator.com/item?id=26234052), which makes me suspicious of it being valuable. I think continuity of experience is only an instrumental value crafted by evolution to help us stay alive in a world that didn't have copying. I think if humans evolved in a world where we could make disposable copies of ourselves, we wouldn't instinctively value continuity of experience -- we would instead instinctively value preserving the original and ensuring a line of succession for a copy to take the place of the original if something happened to the original -- and that would make us more effective in our pursuits in a world with copying.
Now if I was the upload, and I learned that my original had died (or significantly drifted in values away from myself) and none of my other copies were in position to take over the place in the world of my original, then I would worry about my mortality.
I don't know, but my bigger issue will be that, before the scan, this means 99% of the future subjective experience I can expect to have will be spent working without remembering any of it. I'm not into that, given that a much smaller fraction of my subjective experience will be spent reaping the gains.
I wonder a lot about the subjective experience of chance around copying. Say it's true that if you copy yourself 99 times, then you have a 99% chance of finding yourself as one of the copies. What if you copy yourself 99 times, you run all the copies deterministically so they don't diverge, then you pick 98 copies to merge back into yourself (assuming you're also a software agent or we just have enough control to integrate a software copy's memories back into your original meat brain): do you have a 1% chance of finding yourself as that last copy and a 99% chance of finding yourself as the merged original? Could you do this to make it arbitrarily unlikely that you'll experience being that last copy, and then make a million duplicates of that copy to do tasks with almost none of your original subjective measure? ... This has to be nonsense. I feel like I must be very confused about the concept of subjective experience for this elaborate copying charade to sound useful.
And then it gets worse: in certain variations of this logic, you could buy a lottery ticket and do certain copying setups based on the result to increase your subjective experience of winning the lottery. See https://www.lesswrong.com/posts/y7jZ9BLEeuNTzgAE5/the-anthro.... I wonder whether I should take that as an obvious contradiction or if maybe the universe works in an alien enough way for that to be valid.
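To make my confusion concrete, here's a toy counting model of that copy-then-merge setup. It's only a sketch: both "rules" below are weighting assumptions I'm making up for illustration, and the whole question is whether subjective measure follows either of them (or anything like them).

    # Toy model of the copy-then-merge setup described above.
    # Both weighting rules are assumptions, not established facts
    # about subjective experience.

    N_COPIES = 99   # copies made at the scan
    N_MERGED = 98   # copies later merged back into the original

    # Rule A: weight by instance histories created at the scan.
    # 1 original + 99 copies exist; the original plus the 98 merged
    # copies all end up as the single merged self, so 99 of the 100
    # histories lead there.
    instances = 1 + N_COPIES
    p_merged_a = (1 + N_MERGED) / instances            # 0.99
    p_last_copy_a = (N_COPIES - N_MERGED) / instances  # 0.01

    # Rule B: weight by distinct continuations left afterwards.
    # Only two diverging selves remain (the merged self and the one
    # un-merged copy), so each gets 1/2.
    p_merged_b = 0.5
    p_last_copy_b = 0.5

    print("Rule A (count instance histories):", p_merged_a, p_last_copy_a)
    print("Rule B (count surviving branches): ", p_merged_b, p_last_copy_b)

The two rules give 99%/1% versus 50%/50% for the merged self versus the last copy, and I don't see a principled way to pick between them, which is part of why the whole scheme smells like nonsense to me.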
Not sure I fully understand you. This is of course all hypothetical, but if you make one copy of yourself, there isn't a 50% chance that you "find yourself as the copy", unless the copying mechanism was somehow designed for this.
You'll continue as is; there's just another you there, and he'll initially think he's the source, since it was the source mind-state that was copied. Fortunately the copying machine color-coded the source headband red and the copy headband blue, which clears up the confusion for the copy.
At this point you will obviously start to diverge, and you must be considered two different sentient beings that cannot ethically be terminated. It's just as ethically wrong to terminate the copy as the source at this point: you are identical in matter, but two lights are on, twice the capability for emotion.
This also means that mind-uploading (moving) from one medium (meat) to another (silicon?) needs to be designed as a continuous journey as experienced from the source's perception if it's to become commercially viable, rather than just being a COPY A TO B, DELETE A experience for the source, which would be like death. (Or the vendors could bet on people not thinking about this hard enough, since the surviving copy wouldn't mind.)
Imagine being someone in this experiment. You wake up still 100% sure that you won't be a copy, just as you were before going to sleep. Then you find out you are the copy. It would seem to me that the reasoning which led you to believe you definitely won't be a copy, while you indeed find yourself to be one, must be faulty.
Interesting that you object, because I'm pretty certain it was you who was eager to run software on rat brains. What's so different about this? In both cases, from my point of view, a sentient being is robbed of their existence.
Have I? I don't remember the context, but here I'm specifically talking about what I'd expect to experience if I'm in this situation.
I do value myself and my experience more than a rat's, and if presented with the choice of torturing a hundred rats or me, I'll choose for them to be tortured. If we go up to trillions of rats, I might very well choose for myself to be tortured instead, as I do value their experience, just significantly less.
I also wouldn't be happy if everything were running off rat brains experiencing displeasure, but I will be fine with sacrificing some number of rats for technological progress that will improve more people's lives in the long run. I imagine whatever I've said on the topic before is consistent with the above.
Of course, that's already the case, unless you believe that this technology will never be created and used, or that your own brain's relevant contents can and will be made unusable.
From the point of view of me going to sleep before the simulation procedure, with one simulation I am just as likely to wake up inside it as outside of it, so I should be equally prepared for either scenario. With thousands of uploads, I should expect a much higher chance that the next thing I experience is waking up simulated.
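Putting rough numbers on that, with the contestable assumption being that I'm equally likely to "wake up as" any one of the instances (uploads plus the one original):

    # Assumption: after the procedure I'm equally likely to wake up as
    # any one of the (uploads + 1 original) instances.
    for n_uploads in (1, 1000):
        p_simulated = n_uploads / (n_uploads + 1)
        print(n_uploads, "upload(s): P(next waking is simulated) =",
              round(p_simulated, 3))
    # 1 upload     -> 0.5
    # 1000 uploads -> 0.999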
The real you is beyond that timeline already. None of those simulations is “you”, so comparing the simulation runtimes to actual life experience (the 99% you mentioned) makes little sense.
We simply differ on what we think of as 'you'. If there's going to be an instance with my exact same brain pattern, who thinks exactly the same as me, continuing from what I'm thinking now, then that's a continuation of being me. After the split is a different story.
Let's say that in addition to the technology described in the story, we can create a completely simulated world, with all the people in it simulated as well. You get your brain scanned an instant before you die (from a non-neurological disease), and then "boot up" the copy in the simulated world. Are "you" alive or dead? Your body is certainly dead, but your mind goes on, presumably with the ability to have the same (albeit simulated) experiences, thoughts, and emotions your old body could. Get enough people to do this, and over time your simulated world could be populated entirely by people whose bodies have died, with no "computer AIs" in there at all. Eventually this simulated world maybe even has more people in it than the physical world. Is this simulated world less of a world than the physical one? Are the people in it any less alive than those in the physical world?
Let's dispense with the simulated world, and say we also have the technology to clone (and arbitrarily age) human bodies, and the ability to "write" a brain copy into a clone (obliterating anything that might originally have been there, though with clones we expect them to be blank slates). You go to sleep, they make a copy, copy it into your clone, and then wake you both up simultaneously. Which is "you"?
How about at the instant they wake up the clone, they destroy your "original" body. Did "you" die? Is the clone you, or not-you? Should the you that remains have the same rights and responsibilities as the old you? I would hope so; I would think that this might become a common way to extend your life if we somehow find that cloning and brain-copying is easier than curing all terminal disease or reversing the aging process.
Think about Star-Trek-style transporters, which -- if you dig into the science of the sci-fi -- must destroy your body (after recording the quantum state of every particle in it), and then recreate it at the destination. Is the transported person "you"? Star Trek seems to think so. How is that materially different from scanning your brain and constructing an identical brain from that scan, and putting it in an identical (cloned) body?
While I'm thinking about Star Trek, the last few episodes of season one of Star Trek Picard deal with the idea of transferring your "consciousness" to an android body before/as you die. They clearly seem to still believe that the "you"-ness of themselves will survive after the transfer. At the same time, there is also the question of death being possibly an essential part of the human condition; that is, can you really consider yourself human if you are immortal in an android body? (A TNG episode also dealt with consciousness transfer, and also the added issue of commandeering Data's body for the purpose, without his consent.)
One more Star Trek: in a TNG episode we find that, some years prior, a transporter accident had created a duplicate of Riker and left him on a planet that became inaccessible for years afterward, until a transport window re-opened. Riker went on with his life off the planet, earning promotions, later becoming first officer of the Enterprise, while another Riker managed to survive as the sole occupant of a deteriorating outpost on the planet. After the Riker on the planet is found, obviously we're going to think of the Riker that we've known and followed for several years of TV-show-time as the "real" Riker, and the one on the planet as the "copy". But in (TV) reality there is no way to distinguish them (as they explain in the episode); neither Riker is any more "original" than the other. One of them just got unluckily stuck on a planet, alone, for many years, while the other didn't.
Going back to simulated worlds for a second, if we get to the point where we can prove that it's possible to create simulated worlds with the ability to fool a human into believing the simulation is real, then it becomes vastly more probable that our reality actually is a simulated world than a physical one. If we somehow were to learn that is true, would we suddenly believe that we aren't truly alive or that our lives are pointless?
These are some (IMO) pretty deep philosophical questions about the nature of consciousness and reality, and people will certainly differ in their feelings and conclusions about this. For my part, every instance above where there's a "copy" involved, I see that "copy" as no less "you" than the original.
In your thought experiment where your mind is transferred into a simulation and simultaneously ceases to exist in the real world, I don't think we need to update the concept of "you" for most contexts, and certainly not for the context of answering the question "is it okay to kill you?"
Asking if it's "still you" is pretty similar to asking if you're the same person you were 20 years ago. For answering basic questions like "is it okay to kill you?" the answer is the same 20 years ago and now: of course not!