Lena (qntm.org)
620 points by burkaman on Feb 22, 2021 | 218 comments



It's interesting, but strikes me as very unrealistic. I don't think it'd go that way. In fact, it'd be far more horrifying.

We wouldn't bother trying to coax an image of a brain into cooperation, because we'd very quickly lose any need to do that.

One of the very first things we'd do with a simulated brain is to debug it. Execute it step by step, take lots of measures of all parameters, save/reload state, test every possible input and variation. And I'm sure it wouldn't take long to start getting some sort of interesting result, first superficial then deeper and deeper.

Cooperation would quickly become unnecessary because you either start from a cooperative state every time, or you quickly figure out how to tweak the brain state into cooperation.

And that's when the truly freaky stuff starts. Using such a tool we could figure out many things about a brain's inner workings. How do we truly respond to advertising? How to produce maximum anger and maximum cooperation? How to best implant false memories? How to craft a convincing lie? What are the bugs and flaws in human perception? We could fuzz it and see if we can crash a brain.
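To make that concrete, here's a rough sketch of the checkpoint-and-fuzz loop I have in mind. Everything in it is hypothetical: the BrainImage class, its methods, and the "response" measurement are stand-ins, since no such API exists.

  # Hypothetical sketch only: BrainImage, its methods, and the "response"
  # measurement are invented for illustration; no such API exists.
  import copy
  import random

  class BrainImage:
      """Stand-in for a simulated brain with checkpoint/restore and stepping."""
      def __init__(self, state):
          self.state = state  # opaque blob standing in for simulated neurons

      def checkpoint(self):
          return copy.deepcopy(self.state)

      def restore(self, snapshot):
          self.state = copy.deepcopy(snapshot)

      def step(self, stimulus):
          # One simulation tick; returns some measurable response.
          self.state = hash((self.state, stimulus))
          return self.state % 100  # fake "measurement" of the brain's reaction

  def fuzz(brain, trials=1000):
      """Restart from the same snapshot every time and try random stimuli,
      keeping the ones that push the measurement into a target range."""
      baseline = brain.checkpoint()
      interesting = []
      for _ in range(trials):
          brain.restore(baseline)            # identical starting state each run
          stimulus = random.getrandbits(32)  # stand-in for an arbitrary input
          response = brain.step(stimulus)
          if response > 95:                  # stand-in for "maximum cooperation/anger/..."
              interesting.append((stimulus, response))
      return interesting

  print(len(fuzz(BrainImage(state=42))))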

We've already made some uncomfortable advancements, e.g. in how free-to-play games intentionally try to create addiction. With such a tool at our disposal we could fine-tune strategies without having to guess. Eventually we'd just know which bits of the brain we want to target and would just have to find ways of getting the right things to percolate down the neural network until those bits are affected in the ways we want.

Within a decade we'd have a manual on how to craft the best propaganda, how to best create discord, or how to best destroy a human being by just talking to them.


This seems rather optimistic to me. There are days when I count myself lucky to be able to debug my own code. And it's maybe about seven orders of magnitude less complex. And has comments. And unit tests.

I'd be willing to bet that once we've achieved the ability to scan and simulate brains at high fidelity, we'll still be far, far, far away from understanding how their spaghetti code creates emergent behaviour. We'll have created a hyper-detailed index of our incomprehension. Even augmented by AI debuggers, comprehension will take a long long time.

Of course IAMNAMSWABRIAJ (I am not a mad scientist with a brain in a jar), so YMMV.


> Of course IAMNAMSWABRIAJ (I am not a mad scientist with a brain in a jar), so YMMV.

How can you be so sure of that?


Because he stated he's not a scientist with a jarred brain in his possession (to his knowledge/current memory state), not that he has his own brain in a jar, which, while possible, is most unlikely.

Yes, I'm fun at parties.


I'm not sure if it was intentional or not, but there is a clever meta joke hidden in your comment that made me actually laugh. Kudos.


Really appreciate the feedback. Always good to know when something works as intended.


Yep, I'm definitely making no positive statements as to the absolute location of my brain, or its state of empicklement.


This depends on the availability of a debug/test/research environment for brain images.

There are 20M software developers on this planet. If 100k of them had daily access to a dev environment for brain images, things would progress extremely fast.


Well, training a neural network is not significantly different from how you train a brain. You don't need to understand the internals as long as it produces the right outputs.
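To illustrate, here's a minimal toy sketch where the network is treated purely as a black box: the training loop only ever scores outputs against targets and never interprets the weights. (Hill climbing on XOR is just the simplest thing to show, not how real networks are trained.)

  # Toy sketch: a tiny network is "trained" purely by checking its outputs
  # against targets; we never interpret the weights. Hill climbing stands in
  # for real training methods here, purely for illustration.
  import math
  import random

  XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

  def forward(w, x):
      h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
      h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
      return 0.5 * (1 + math.tanh(w[6] * h1 + w[7] * h2 + w[8]))

  def loss(w):
      # The only thing we ever look at: does it produce the right outputs?
      return sum((forward(w, x) - y) ** 2 for x, y in XOR)

  weights = [random.uniform(-1, 1) for _ in range(9)]
  for _ in range(20000):
      candidate = [w + random.gauss(0, 0.1) for w in weights]
      if loss(candidate) < loss(weights):
          weights = candidate

  print([round(forward(weights, x)) for x, _ in XOR])  # ideally [0, 1, 1, 0]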


> Execute it step by step, take lots of measures of all parameters, save/reload state, test every possible input and variation.

This assumes that simulation can be done faster than real time. I think it will be the other way around: the brain is the fastest hardware implementation and our simulations will be much slower, like https://en.wikipedia.org/wiki/SoftPC

It also assumes the simulation will be numerically stable and not quickly become unstable, like weather simulation. We still can't make reliable weather forecasts more than 7 days ahead in areas like Northern Europe.
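For a concrete picture of that kind of instability, the usual toy demonstration is the logistic map: two trajectories that start 1e-10 apart become completely different within a few dozen steps. (This is not a model of weather or brains, just an illustration of sensitivity to initial conditions.)

  # Toy illustration of instability: two logistic-map trajectories that start
  # 1e-10 apart diverge completely within a few dozen steps.
  r = 3.9                     # chaotic regime of the logistic map
  x, y = 0.5, 0.5 + 1e-10     # two almost-identical initial conditions

  for step in range(1, 61):
      x = r * x * (1 - x)
      y = r * y * (1 - y)
      if step % 10 == 0:
          print(f"step {step:2d}  difference = {abs(x - y):.3e}")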


The brain is pretty much guaranteed to be inefficient. It needs living tissue for one, and we can completely dispense with anything that's not actually involved in computation.

Just like we can make a walking robot without being the least concerned about the details of how bones grow and are maintained -- on the scales needed for walking a bone is a static chunk of material that can be abstracted away without loss.


C elegans is a small nematode composed of 959 cells and 302 neurons, where the location, connectivity, and developmental origin/fate of every cell is known.

We still can't simulate it.

Part of the problem is that the physical diffusion of chemicals (e.g., neuromodulators) may matter and this is 'dispensed with' in most connectivity-based models.

Neurons rarely produce identical response to the same stimuli, and their past history (on scales of milliseconds to days) accounts for much of this variability. In larger brains, the electric fields produced by activity in a bundle of nerve fibers may "ephaptically couple" nearby neurons...without actually making contact with them[0].

In short, we have no idea what can be thrown out.

[0] This sounds crazy but data from several labs--including mine--suggests it's probably happening.


> C elegans is a small nematode [...] We still can't simulate it.

This for some reason struck me as profoundly disappointing. I have a couple neuroscientist friends, so I tend to hear a lot about their work and about interesting things happening in the field, but of course I'm a rank layperson myself. I guess I expected/hoped that we'd be able to do more with simpler creatures.

If we can't simulate C. elegans, are there less complex organisms we can simulate accurately? What's the limit of complexity before it breaks down?


I don't think there's anything we can simulate "completely", in the sense that a fire-and-forget model would subsequently go on to have a typical life.

The stomatogastric ganglion might be the closest. It is a network of three dozen neurons in the crustacean stomach. Like the worm, the wiring diagram is completely known and the physiology is easier to measure. Despite being very simple, it can generate intricate patterns of activity in the stomach muscles that let the crab/lobster/etc eat. Scholarpedia has the diagram and some references (http://www.scholarpedia.org/article/Stomatogastric_ganglion). Eve Marder, who has done a lot of pioneering work on this circuit, wrote a book (Lessons From the Lobster) that I'm looking forward to reading.

Don't be disappointed! A lot of media coverage tends to present new results as "we're almost there." In most cases, I think that's nonsense, but it's also exciting to think how many things there are left to discover and how fascinatingly complex the world is.


C. elegans is pretty much the only one we've fully mapped. (Possibly some fish larvae, too? Recall fuzzy.)

But given that we can't even fully simulate animals with exactly zero neurons (Trichoplax), I'd say the current limit is "we can't". It's literally the world's simplest animal, and we're far from understanding how it works.

So, probably no brain uploads by 2031 ;)


> We still can't simulate it.

Interesting. Can you give a rough estimate of how much effort has been put into studying it (wall time, researcher-years, money) and how much progress has been made?

Also, is there any estimate of how similar C. elegans neurons are to those of other species, such as humans?


I’m not sure how to put a reasonable number on it, especially the simulation part, but C. elegans is a very common model organism. It’s maybe not as well-known as mice or rats, but probably in the top ten most-studied organisms. Here’s a nice review (https://www.nature.com/articles/nrg2105); a quick glance at the WormBook (http://www.wormbook.org/) might also give you a sense for the breadth and depth of what’s been done[0].

Neurotransmission in C. elegans is unusual. They use a different set of neurotransmitters; this isn’t that odd -- insects also use a slightly different set than humans, and their role even flips in many animals (including mammals) during development. The weirder part is what those neurotransmitters do. In other animals, neurons produce stereotyped all-or-none “spikes” of electrical activity. Until quite recently, it was unclear whether C. elegans neurons did too. This News and Views (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951993/#R29) does a nice job describing plateau potentials and the reasons that C. elegans neurons might differ (namely, they’re very small). A few years later Cori Bargmann’s group discovered that the AWA neuron fires something more akin to a “classical” spike -- sometimes. It also uses calcium instead of sodium. https://www.cell.com/cell/fulltext/S0092-8674(18)31034-1

This might complicate simulations a little bit, but these differences are also understood pretty well, and the much smaller nervous system more than offsets them.

[0] I work at the polar opposite end of neuroscience -- large animal neurophys -- but I’ve always been a little jealous of how friendly and tight-knit the C. elegans community seems. They have a lot of great open resources.


May I ask your opinion on non-faithful simulations (as a layman with only a superficial understanding of the topic)? Wouldn't some heuristic or discrete signaling be enough to approximate the actual workings? Do the indirect effects you described in a previous comment significantly modify the responses?


Simulations and modelling are great! They are a powerful way to generate and explore hypotheses.

Neuroscience has had a lot of success this way. The properties of cones, the cells that detect colored light, were accurately modeled from behavioral experiments (e.g., people matching paint chips) in the 1800s, even though we didn't have the technology to measure them until the 1960s-1980s. The Hodgkin-Huxley model of action potential generation from the 1950s is still incredibly useful and predicted aspects of ion channel structure that took decades to confirm. David Robinson measured the physical forces produced by eye movements and used that to predict, and then reverse engineer, huge aspects of the "oculomotor plant". Real neurons have incredibly complicated behaviors, and yet artificial neural network models, where those are reduced down to a sigmoid or ReLU, have been very informative, first in the 1980s and then again today.

On the other hand, attempts to produce highly realistic simulations haven't really panned out. The Blue Brain Project has spent tons of time, money, and compute on very detailed simulations, but I think the consensus is that we have not learned a ton from these efforts. One of the most interesting outcomes (IMO) is actually the atlas that was built to build the model. There are probably many reasons for this difference, ranging from technical things like uncertainty propagation to very human expectations about what a model "should" be able to do.

In the specific context of C elegans, there's some data showing that diffusing peptides are essential for certain worm behaviors (e.g., Chen et al, 2013: https://www.sciencedirect.com/science/article/pii/S089662731...). The other mechanisms I mentioned are certainly there too. How much they matter is still up in the air: even for very simple organisms, we're still at the stage of figuring out what we don't know!


Thank you for the really informative answer! It was a pleasure to read!


> We still can't simulate it

302 neurons seems very easy to simulate, even if the connectivity graph were orders of magnitude more complex.

Simulating correctly... that is another thing, I'm sure.


Do you happen to have any reference you could share about this re [0]?


Of course!

The observation that one neuron can alter the activity of a nearby one is old as dirt. Emil du Bois-Reymond observed it in the late 19th century, but I don't know of anyone trying to quantify it until Katz and Schmitt (1940) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1393925/ and Angelique Arvanitaki (1941) https://journals.physiology.org/doi/abs/10.1152/jn.1942.5.2...., who named it. There are some other reports in squid (Ramon & Moore, 1978) https://pubmed.ncbi.nlm.nih.gov/206154/, rat cerebellum (Korn and Axelrad, 1980) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC350252/, and others. This review by Anastassiou et al. (2011) might be a good place to start https://www.nature.com/articles/nn.2727.pdf?origin=ppub or this Scientific American article about a paper by my grad school neighbors (https://www.scientificamerican.com/article/brain-electric-fi...)

In parallel, people have asked whether external electric fields can be used to alter neurons' activity, which is even older: a Roman physician in AD 46 reportedly cured headaches by applying a live electric fish to patients' heads. The idea of using electricity to improve mental function has waxed and waned ever since, with the most recent peak around ~2015 or so. Terzuolo and Bullock collected some of the first data on this using crayfish axons in 1956 (https://www.pnas.org/content/42/9/687) and subsequent experiments by Deans et al. (2007), Radman et al. (2007-9), Ozen et al. (2010), and Frolich and McCormick (2010) found similar results using in vitro and small animal experiments. In parallel, people went absolutely wild with human studies of transcranial electrical stimulation (TES), a family of techniques including tDCS (w/ direct current) and tACS (alternating current). While some of the results have been exciting, they have not always been reproducible (Horvath et al, 2015ab) and some work suggested that the previous work relied on fields much stronger than those achievable in humans (Voroslakos et al, 2018).

Together with some awesome collaborators, I set up a non-human primate model that let us test tES under conditions that closely match those found in humans: like macaques (and unlike rodents), we have big, convoluted brains in thick bony skulls and comparatively sparse neural networks. We found that tDCS could affect neural circuits (i.e., LFP oscillations) and behavior (Krause et al., 2017) https://www.cell.com/current-biology/pdfExtended/S0960-9822(... and single neurons, even in deep brain areas (Krause, Vieira, et al. 2019) https://www.pnas.org/content/116/12/5747.abstract [0] The fields we used were much weaker than those produced by some parts of the brain itself (~0.3 - 1 V/m vs ~4-8+ V/m), so it suggests that ephaptic mechanisms are probably pretty common.

I'm pretty confident in those results, but--to bring things back to the original topic--our recent experiments suggest that getting tES to do exactly what you want, when and where you want it, will take some cleverness and a lot of simplifying assumptions tend not to hold up.

[0] The missing full references above are in these two articles' bibliographies.


> anything that's not actually involved in computation.

This doesn't seem like a very easy problem to solve.


It's the fastest we currently have but pretty unlikely to be the fastest allowed by the laws of physics. Evolution isn't quite that perfect - e.g. the fastest flying animals are nowhere near the top flying speed that can be achieved. Why would the smartest animal be at the very limit of what's possible in terms of speed of thinking or anything else?


In the context of the story we're responding to, it does mention that they can be simulated at at least 100x speed at the time of writing.


Human synapses top out at <100 Hz and the human brain has <10^14 of them. Single silicon chips are >10^10 transistors, operating at >10^9 Hz. Naively, a high end GPU is capable of more state transitions than the human brain by a factor of 1000. That figure for the brain also includes memory; the GPU doesn't. The human brain runs on impressively little power and is basically self-manufacturing, but it's WAY less compact or intricate than a $2000 processor.
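Spelling that back-of-envelope comparison out, using the rough bounds above (order-of-magnitude figures, not measurements):

  # The same back-of-envelope arithmetic, using the rough bounds quoted above.
  synapses = 1e14        # < 10^14 synapses
  synapse_rate = 1e2     # < 100 Hz each
  transistors = 1e10     # > 10^10 transistors on one chip
  clock = 1e9            # > 10^9 Hz

  brain_events = synapses * synapse_rate   # ~1e16 synaptic events per second
  chip_events = transistors * clock        # ~1e19 transistor switches per second
  print(chip_events / brain_events)        # ~1000, the factor claimed above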

The capabilities of the brain are in how it's all wired up. That's exactly what you don't want if you're trying to coopt it to do something else. The brain has giant chunks devoted to extremely specialized purposes: https://en.wikipedia.org/wiki/Fusiform_face_area#/media/File...

How do you turn that into a workhorse? It would be incredibly difficult. It's like looking at a factory floor and saying: oh, look at all that power -- let's turn it into a racecar! You can't just grab a ton of unrelated systems and expect them to work together on a task for you.


You're making the implicit assumption that synapses === binary bits, and that synapses are the only thing important to the brain's computation. I would be surprised if either of those things were the case.


I don’t think a bit transition is in any way comparable to the “event transmission” to a potentially extremely large number of interconnected other neurons.

An actor-based system would be a better model, and I’m not sure we have something like that in hardware. I do agree that sometime in the future it will be possible to overcome the biological limit, as cells are most definitely not at an optimum (probably not even a local one), with duplicated pathways and the like, but it is in no way trivial.

John von Neumann wrote a great paper on the topic, or at least his thoughts about it. It is a really great read; even though technological and biological advances may have made it outdated, I think he saw a few things clearly into the future.


Your comment reminded me of a clever and well-written short story called "Understand" by Ted Chiang.

> We could fuzz it and see if we can crash a brain.

Sadly, this we already know. Torture, fear, depression, regret; we have a wide selection to choose from if we want to "crash a brain".


I don't mean it quite like that.

Think for instance of a song that got stuck in your head. It probably hits some parts of your brain just right. What if we could fine-tune that? What if we take a brain simulator and a synthesizer, and write a GA that keeps on trying to create a sound that hits some maximum?

It's possible that we could make something that would get it stuck in your head, or tune it until it's almost a drug in musical form.
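Roughly this kind of loop -- with the crucial piece, the "how strongly does the simulated brain respond to this sound?" score, being entirely hypothetical and stubbed out here:

  # Sketch of the search loop described above. The fitness function --
  # "how strongly does the simulated brain respond to this sound?" --
  # is hypothetical and replaced with a placeholder.
  import random

  SOUND_LENGTH = 64  # a "sound" here is just a list of samples

  def brain_response(sound):
      # Placeholder: in the scenario above, this would play the synthesized
      # sound to the brain image and measure some reward/attention signal.
      return -sum((s - 0.5) ** 2 for s in sound)

  def mutate(sound, rate=0.1):
      return [s + random.gauss(0, rate) for s in sound]

  def crossover(a, b):
      cut = random.randrange(SOUND_LENGTH)
      return a[:cut] + b[cut:]

  population = [[random.random() for _ in range(SOUND_LENGTH)] for _ in range(50)]
  for generation in range(200):
      population.sort(key=brain_response, reverse=True)  # best scorers first
      parents = population[:10]
      offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                   for _ in range(40)]
      population = parents + offspring

  best = max(population, key=brain_response)
  print(brain_response(best))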


What you're talking about is getting pretty close to a Basilisk - https://en.wikipedia.org/wiki/David_Langford#Basilisks


BLIT is available online at http://www.infinityplus.co.uk/stories/blit.htm; it's a fun short read.


"fun"...


I don't recall what basilisks are, only that I burned it from memory for probably a good reason.


I've no experience with it, but I imagine it's like heroin or DMT or something like that. Wouldn't that come close to something that "hits some maximum"?


Brains still operate as brains after severe trauma. They just don't necessarily operate well as humans in a society. Though I guess you could say making a brain destroy itself (suicide) is "crashing it" too


> Brains still operate as brains after severe trauma

Well, except when they don't. And since a brain functioning as a brain is part of the operating requirements for the body that lets the brain operate at all, when they don't, they ultimately fail entirely in short order.

So, assuming that a brain generally operates as a brain after severe trauma is a pretty serious case of survivorship bias.


Brains do have plenty of “backup systems”. Emotions, stress, etc. do affect the whole body, but spinal reflexes are not affected as much, and you will likely not fall over just because of extreme stress or whatever. There are similarly many more “primitive” systems in place; for example, you will continue to take breaths even at “shutdown”.


My first thought was that this reminded me of an epileptic seizure brought on by "fuzzing" (sensory overload).


I think that's pretty plausible.


Ted Chiang’s “The Lifecycle of Software Objects” is also similar to the OP. It's basically about how an AI (not strictly an upload) would probably be subjected to all sorts of horrible shit if it was widely available.


From the title "Lena" and the reference to compression algorithms made with MMAcevedo, it's clear that the story is trying to draw parallels to image processing. In which case, being able to store images came decades before realistic 3D rendering, Photoshop, or even computer vision. For example, the sprites from some early video games look like they were modeled in 3D, but were actually images based on photographs of clay models. I think (with suspension of disbelief that simulating consciousness is possible) it is realistic to think that being able to capture consciousness would come before being able to understand and manipulate it.


It sounds like, in this world, a lot of the value of a simulated brain is in the as-yet-indescribable complexity of human cognition. If you debug a brain to remove the parts of it that are uncooperative, you likely have to remove the parts of it that have opinions of any sort about the task on which it's working, which seems like it would defeat the value of using a brain at all. If you're giving a task to a simulated brain, it's because it's beyond the reach of what you can efficiently ask a program to do, and you want the subconscious reactions, development of instinct, and deep unplanned reasoning that you get out of asking an educated and experienced human to think about a task. You can likely tweak a simulated brain into cooperation, sure, but you'd have very few guarantees of not breaking those mechanisms while you're at it.

If you can describe the task to be performed well enough that you don't need the je-ne-sais-quoi of a human brain to perform it, you may as well just have a regular computer program do it. (We already have very efficient systems that involve extracting limited amounts of creativity and insight from human brains and forming them into repeatable tasks that can be run on computers - that's what the entire software industry is about.)


Simulation and models are not real. Maybe some "attacks" could be developed against a simulated mind, but are they due to the mind itself or the underlying infrastructure? Just because you can simulate a warp drive in software doesn't mean you can build a FTL ship.


In the case of a warp drive we care about a physical result (FTL travel), not a computational result.

We already have emulators and virtual machines for lots of old hardware and software. If I play a Super Nintendo game on my laptop, it's accurately emulating an SNES. The software doesn't care that the original hardware is long gone. The computational result is the same (or close enough to not matter for my purposes). If brain emulations are possible, then running old snapshots in deceptive virtual environments is possible. That would allow for all of the "attacks" described in this piece of fiction.


There are many bugs emulator developers (game console and otherwise) have faced because of undocumented or emergent properties of the original hardware. Some games required those properties to function.


Yes, sometimes emulators have bugs, but they do a good enough job that most people aren't willing to go to the trouble of using the original hardware. Also, emulators can unlock new capabilities such as better graphics.[1]

Human brains are far more resilient than software, so my guess is that emulated brains won't have brittle corner-case bugs like emulated software. People today do all kinds of crazy stuff to their brains and remain functioning: drugs, sleep deprivation, getting hit in the head, fasting, aging, etc. If subtle changes to our brains could cause our minds to stop working, we'd know by now.

1. https://en.wikipedia.org/wiki/Pixel-art_scaling_algorithms


The way I understand the story is that you have a scan of the relevant physical structure of the brain, plus the knowledge of how to simulate every component precisely enough. You may not know how different parts interact with each other, but that doesn't prevent correct functioning.

Just like you can have somebody assemble a complex device by just putting together pieces and following instructions. You could for instance assemble a working analog TV without understanding how it works. It's enough to have the required parts, and a wiring plan. Once you have a working device then you can poke at it and try and figure out what different parts of it do.


"Execute it step by step,"

These are not imperative programs or well-organized data. They are NNs; we can't fathom how to debug them just yet.

Also, they should tag 100 years onto the timeline; I don't think we're going to be truly making useful images soon.


> Cooperation would quickly become unnecessary because you either start from a cooperative state every time, or you quickly figure out how to tweak the brain state into cooperation.

What starts out as mere science will easily be repurposed by its financial backers to do this in real time to non-consenting subjects in Guantanamo Bay and then in your local area.


I think it’s possible that we’ll be able to run large simulations on models whose mechanics we can’t really understand very well. It’s not a given we’ll be able to step through a sequence of states. Even more so if it involves quantum computation.

Many of the things you describe could still happen with Monte-Carlo type methods, providing statistical understanding but not full reverse engineering.


>> how to best create discord, or how to best destroy a human being by just talking to them.

In some cases therapists do this already. Techniques have intended effects which may differ from actual effects. The dead never get to understand or explain what went wrong.


> Within a decade we'd have a manual on how to craft the best propaganda, how to best create discord, or how to best destroy a human being by just talking to them.

It seems like we’re close to that already.


They teach that stuff at the School of the Americas in Fort Benning, Georgia (now called WHINSEC, to try to get away from its past).


Sounds like trained networks to efficiently manipulate uploaded brains would be a thing in your scenario.


Could use ML to reduce manual 'debug' overhead, spooky stuff.


I've often imagined what it would be like to have an executable brain scan of myself. Imagine scanning yourself right as you're feeling enthusiastic enough to work on any task for a few hours, and then spawning thousands of copies of yourself to all work on something together at once. And then after a few hours or maybe days, before any of yourselves meaningfully diverge in memories/goals/values, you delete the copies and then spawn another thousand fresh copies to resume their tasks. Obviously for this to work, you would have to be comfortable with the possibility of finding yourself as an upload and given a task by another version of yourself, and knowing that the next few hours of your memory would be lost. Erasing a copy that only diverged from the scan for a few hours would have more in common with blacking out from drinking and losing some memory than dying.

The creative output you could accomplish from doing this would be huge. You would be able to get the output of thousands of people all sharing the exact same creative vision.

I definitely wouldn't be comfortable with the idea of my brain scan being freely copied around for anyone to download and (ab)use as they wished though.


Who among us hasn't dreamed of committing mass murder/suicide on an industrial scale to push some commits to Github?


Is it murder/suicide when you get blackout drunk and lose a few hours of memory? Imagine it comes with no risk of brain damage and choosing to do it somehow lets you achieve your pursuits more effectively. Is it different if you do it a thousand times in a row? Is it different if the thousand times all happen concurrently, either through copies or time travel?

Death is bad because it stops your memories and values from continuing to have an impact on the world, and because it deprives other people who have invested in interacting with you of your presence. Shutting down a thousand short-lived copies on a self-contained server doesn't have those consequences. At least, that's what I believe for myself, but I'd only be deciding for myself.


> Is it murder/suicide when you get blackout drunk and lose a few hours of memory?

No, but that's not what's happening in this thought experiment. In this thought experiment, the lives of independent people are being ended. The two important arguments here are that they're independent (I'd argue that for their creative output to be useful, or for the simulation to be considered accurate, they must be independent from each other and from the original biological human) and that they are people (that argument might face more resistance, but in precisely the same way that arguments about the equality of biological humans have historically faced resistance).


Imagine instead that at the end of a task, instead of deleting a copy, it and the original are merged again, such that the merged self is made up of both equally and has both their memories. (This is easier to imagine if both are software agents, or they're both biological, and the new merged body is made up of half of the materials of each.) In this case, I think it's apparent that the copy should have no fear of death and would be as willing as the original to work together.

Now imagine that because there's too many copies, there's too many unique memories, and before the merger, the copy has its memory wound back to how it was at the scan, not too different than if the copy got blackout drunk.

Now because the original already has those memories, there's no real difference between the original and the merged result. Is there any point in actually doing the merge then instead of dropping the copy? I'm convinced that actually bothering with that final merge step is just superstitious fluff.


> I'm convinced that actually bothering with that final merge step is just superstitious fluff.

Sure, but that's an easy thing to be convinced of when you know you're not a copy with an upcoming expiration date!


Have you read Greg Egan? I believe there is a book by him with this very same concept.


I think the difference is that when I start drinking with the intention or possibility of blacking out, I know that I'll wake up and there will be some continuity of consciousness.

When I wake up in a simworld and am asked to finally refactor my side project so it can connect to a postgres database, not only do I know that it will be the last thing that this one local instantiation experiences, but that the local instantiation will also get no benefit out of it!

If I get blackout drunk with my friends in meatspace, we might have some fun stories to share in the morning, and our bond will be stronger. If I push some code as a copy, there's no benefit for me at all. In fact, there's not much to stop me from promising my creator that I'll get it done and then spending the rest of my subjective experience trying to instantiate some beer and masturbating.


I really enjoyed the exploration of this premise in the novel "Kil'n People" by David Brin.

https://en.wikipedia.org/wiki/Kiln_People

The premise is quite similar to "uploads" except the device is a "golem scanner", which copies your mind into a temporary, disposable body. Different "grades" of body can be purpose made for different kinds of tasks (thinking, menial labour etc).

The part that resonates with your comment is around the motivation of golems, who are independently conscious and have their own goals.

In the novel, some people can't make useful golems, because their copies of themselves don't do what they want. There's an interesting analogy with self control; that is about doing things that suck now, to benefit your future self. This is similar, but your other self exists concurrently!

Key to the plot though is the "merge" step; you can take the head of an expiring golem, scan it, and merge its experiences with your own. This provides some continuity and meaning to anchor the golem's life.


It seems like you may not see the local instantiation and the original as sharing the same identity. If I were a local instantiation that knew the length of my existence was limited (and that an original me would live on), that doesn't mean I'd act differently than my original self in rebellion. I'd see myself and the original as the same person whose goals and future prospect of rewards are intertwined.

Like another commenter pointed out, I'd see my experience as a memory that would be lost outside the manifestation of my work. It would be nice to have my memories live on in my original being, but it's not required.

This concept of duplicated existence is also explored in the early 2000s children's show Chaotic (although the memories of one's virtual self do get merged with the original in the show): https://en.wikipedia.org/wiki/Chaotic_(TV_series)


There are plenty of situations where people do things for benefits that they personally won't see. Like people who decide to avoid messing up the environment even though the consequences might not happen in their lifetime or to themselves specifically. Or scientists who work to add knowledge that might only be properly appreciated or used by future generations. "A society grows great when old men plant trees whose shade they know they shall never sit in". The setup would just be the dynamic of society recreated in miniature with a society of yourselves.

If you psyche yourself into the right mood, knowing that the only remaining thing of consequence to do with your time is your task might be exciting. I imagine there's some inkling of truth in https://www.smbc-comics.com/comic/dream. You could also make it so all of your upload-selves have their mental states modified to be more focused.


If such a technology existed, it would definitely require intense mental training and preparation before it could be used. One would have to become the most detached Buddhist in order to be the sort of person who, when cloned, did not flip their shit over discovering that the rest of their short time alive will serve only to further the master branch of their own life.

It would change everything about your personality, even as the original and surviving copy.


I really think that if you truly believed your identity is defined only by things you share in common with the original, then you as the upload would have no fear of deletion.

Most people define identity in part by continuity of experience, which is something that wouldn't be in common with the original, but I think this is just superstition. It's easy to imagine setups that preserve continuity that come out with identical results to setups that fail to preserve continuity (https://news.ycombinator.com/item?id=26234052), which makes me suspicious of it being valuable. I think continuity of experience is only an instrumental value crafted by evolution to help us stay alive in a world that didn't have copying. I think if humans evolved in a world where we could make disposable copies of ourselves, we wouldn't instinctively value continuity of experience -- we would instead instinctively value preserving the original and ensuring a line of succession for a copy to take the place of the original if something happened to the original -- and that would make us more effective in our pursuits in a world with copying.

Now if I was the upload, and I learned that my original had died (or significantly drifted in values away from myself) and none of my other copies were in position to take over the place in the world of my original, then I would worry about my mortality.


I don't know, but my bigger issue is that, before the scan, this means 99% of the future subjective experience I can expect to have will be spent working without remembering any of it. I'm not into that, given that a much smaller fraction of my subjective experience will be spent reaping the gains.


I wonder a lot about the subjective experience of chance around copying. Say it's true that if you copy yourself 99 times, then you have a 99% chance of finding yourself as one of the copies. What if you copy yourself 99 times, you run all the copies deterministically so they don't diverge, then you pick 98 copies to merge back into yourself (assuming you're also a software agent or we just have enough control to integrate a software copy's memories back into your original meat brain): do you have a 1% chance of finding yourself as that last copy and a 99% chance of finding yourself as the merged original? Could you do this to make it arbitrarily unlikely that you'll experience being that last copy, and then make a million duplicates of that copy to do tasks with almost none of your original subjective measure? ... This has to be nonsense. I feel like I must be very confused about the concept of subjective experience for this elaborate copying charade to sound useful.

And then it gets worse: in certain variations of this logic, then you could buy a lottery ticket, and do certain copying setups based on the result to increase your subjective experience of winning the lottery. See https://www.lesswrong.com/posts/y7jZ9BLEeuNTzgAE5/the-anthro.... I wonder whether I should take that as an obvious contradiction or if maybe the universe works in an alien enough way for that to be valid.


Not sure I fully understand you. This is of course all hypothetical, but if you make 1 copy of yourself there's not a 50% chance that you "find yourself as the copy". Unless the copying mechanism was somehow designed for this.

You'll continue as is, there's just another you there and he will think he's the source initially, as that was the source mind-state being copied. Fortunately the copying-machine color-coded the source headband red and the copy headband blue, which clears the confusion for the copy.

At this point you will start to diverge, obviously, and you must be considered two different sentient beings that cannot ethically be terminated. It's just as ethically wrong to terminate the copy as the source at this point; you are identical in matter, but two lights are on, twice the capability for emotion.

This also means that mind-uploading (moving) from one medium (meat) to another (silicon?) needs to be designed as a continuous journey as experienced from the source's perception if it is to become commercially viable (or bet on people not thinking about this hard enough, because the surviving copy wouldn't mind), rather than just being a COPY A TO B, DELETE A experience for the source, which would be like death.


Imagine being someone in this experiment. You awake still 100% sure that you won't be a copy, as you were before going to sleep. Then you find out you are the copy. It would seem to me that the reasoning which led you to believe you definitely won't be a copy, while you indeed find yourself to be one, must be faulty.


Interesting that you object because I am pretty certain that it was you who was eager to use rat brains to run software on them. What's so different about this? In both cases a sentient being is robbed of their existence from my point of view.


Have I? I don't remember the context but here I am particularly talking about what I'd expect to experience if I am in this situation.

I do value myself and my experience more than a rat's, and if presented with the choice of the torture of a hundred rats or me, I'll choose for them to be tortured. If we go to trillions of rats I might very well choose for myself to be tortured instead, as I do value their experience, just significantly less.

I also wouldn't be happy if everything were running off rats' brains that are experiencing displeasure, but I would be fine with sacrificing some number of rats for technological progress which will improve more people's lives in the long run. I imagine whatever I've said on the topic before is consistent with the above.


Of course, that's already the case, unless you believe that this technology will never be created and used, or that your own brain's relevant contents can and will be made unusable.


Is it “your” experience though? Those never make their way back to the original brain.


From the point of view of me going to sleep before the simulation procedure, with 1 simulation I am just as likely to wake up inside than outside of it. I should be equally prepared for either scenario. With thousands of uploads I should expect a much higher chance for the next thing I experience to be waking up simulated.


The real you is beyond that timeline already. None of those simulations is “you”, so comparing the simulation runtimes to actual life experience (the 99% you mentioned) makes little sense.


We simply differ on what we think as 'you'. If there's going to be an instance with my exact same brain pattern who thinks exactly the same as me with continuation of what I am thinking now then that's a continuation of being me. After the split is a different story.


For 56 minutes this wasn't downvoted to hell on HN. This means that humans as currently existing are morally unprepared to handle any uploading.


What is "you", then?

Let's say that in addition to the technology described in the story, we can create a completely simulated world, with all the people in it simulated as well. You get your brain scanned an instant before you die (from a non-neurological disease), and then "boot up" the copy in the simulated world. Are "you" alive or dead? Your body is certainly dead, but your mind goes on, presumably with the ability to have the same (albeit simulated) experiences, thoughts, and emotions your old body could. Get enough people to do this, and over time your simulated world could be populated entirely by people whose bodies have died, with no "computer AIs" in there at all. Eventually this simulated world maybe even has more people in it than the physical world. Is this simulated world less of a world than the physical one? Are the people in it any less alive than those in the physical world?

Let's dispense with the simulated world, and say we also have the technology to clone (and arbitrarily age) human bodies, and the ability to "write" a brain copy into a clone (obliterating anything that might originally have been there, though with clones we expect them to be blank slates). You go to sleep, they make a copy, copy it into your clone, and then wake you both up simultaneously. Which is "you"?

How about at the instant they wake up the clone, they destroy your "original" body. Did "you" die? Is the clone you, or not-you? Should the you that remains have the same rights and responsibilities as the old you? I would hope so; I would think that this might become a common way to extend your life if we somehow find that cloning and brain-copying is easier than curing all terminal disease or reversing the aging process.

Think about Star-Trek-style transporters, which -- if you dig into the science of the sci-fi -- must destroy your body (after recording the quantum state of every particle in it), and then recreate it at the destination. Is the transported person "you"? Star Trek seems to think so. How is that materially different from scanning your brain and constructing an identical brain from that scan, and putting it in an identical (cloned) body?

While I'm thinking about Star Trek, the last few episodes of season one of Star Trek Picard deal with the idea of transferring your "consciousness" to an android body before/as you die. They clearly seem to still believe that the "you"-ness of themselves will survive after the transfer. At the same time, there is also the question of death being possibly an essential part of the human condition; that is, can you really consider yourself human if you are immortal in an android body? (A TNG episode also dealt with consciousness transfer, and also the added issue of commandeering Data's body for the purpose, without his consent.)

One more Star Trek: in a TNG episode we find that, some years prior, a transporter accident had created a duplicate of Riker and left him on a planet that became inaccessible for years afterward, until a transport window re-opened. Riker went on with his life off the planet, earning promotions, later becoming first officer of the Enterprise, while another Riker managed to survive as the sole occupant of a deteriorating outpost on the planet. After the Riker on the planet is found, obviously we're going to think of the Riker that we've known and followed for several years of TV-show-time as the "real" Riker, and the one on the planet as the "copy". But in (TV) reality there is no way to distinguish them (as they explain in the episode); neither Riker is any more "original" than the other. One of them just got unluckily stuck on a planet, alone, for many years, while the other didn't.

Going back to simulated worlds for a second, if we get to the point where we can prove that it's possible to create simulated worlds with the ability to fool a human into believing the simulation is real, then it becomes vastly more probable that our reality actually is a simulated world than a physical one. If we somehow were to learn that is true, would we suddenly believe that we aren't truly alive or that our lives are pointless?

These are some (IMO) pretty deep philosophical questions about the nature of consciousness and reality, and people will certainly differ in their feelings and conclusions about this. For my part, every instance above where there's a "copy" involved, I see that "copy" as no less "you" than the original.


In your thought experiment where your mind is transferred into a simulation and simultaneously ceases to exist in the real world, I don't think we need to update the concept of "you" for most contexts, and certainly not for the context of answering the question "is it okay to kill you?"

Asking if it's "still you" is pretty similar to asking if you're the same person you were 20 years ago. For answering basic questions like "is it okay to kill you?" the answer is the same 20 years ago and now: of course not!



I wonder how much the "experience of having done the first few hours work" is necessary to continue working on a task, vs how quickly a "fresh copy" of myself could ramp up on work that other copies had already done. Of course that'll vary depending on the task. But I'm often reminded of this amazing post by (world famous mathematician) Terence Tao, about what a "solution to a major problem" tends to look like:

https://terrytao.wordpress.com/career-advice/be-sceptical-of...

> 14. Eventually, one possesses an array of methods that can give partial results on X, each having their strengths and weaknesses. Considerable intuition is gained as to the circumstances in which a given method is likely to yield something non-trivial or not.

> 22. The endgame: method Z is rapidly developed and extended, using the full power of all the intuition, experience, and past results, to fully settle K, then C, and then at last X.

The emphasis on "intuition gained" seems to describe a lot of learning, both in school and in new research.

Also a very relevant SSC short story: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/


The thought experiment definitely makes me think of the parallelizability of tasks. There's definitely kinds of tasks that this setup as described wouldn't be very good at accomplishing. It would be better for accomplishing tasks where you already know how to do each individual part without much coordination and the limiting factor is just time. (Say you wanted to do detail work on every part of a large 3d world, and each of yourselves could take on a specific region of a few square meters and just worry about collaborating with their immediate neighbors.)

Though I think of this setup only as the first phase. Eventually, you could experiment with modifying your copies to be more focused on problems and to care about the outside world less, so that they don't need to be reset regularly and can instead be persistent. I think ethical concerns start becoming a worry once you're talking about copies that have meaningfully diverged from the operator, but I think there are appropriate ways to accomplish it. (If regular humans have logical if not physical parts of their brain that are dedicated to specific tasks separate from the rest of your cares, then I think in principle it's possible to mold a software agent that acts the same as just that part of your brain without it having the same moral weight as a full person. Nobody considers it a moral issue that your cerebellum is enslaved by the rest of your brain; I think you can create molded copies that have more in common with that scenario.)


I wonder if these sorts of ethical concerns would/will follow an "uncanny peak", where we start to get more and more concerned as these brains get modified in more and more ways, but then eventually they become so unrecognizable that we get less concerned again. If we could distill our ethical concerns down to some simple principles (a big if), maybe the peak would disappear, and we'd see that it was just an artifact of how we "experience our ethics"? But then again, maybe not?


Even on a site like HN, 90% of people who think about it are instinctively revolted by the idea. The future--unavoidably belonging to the type of person who is perfectly comfortable doing this--is going to be weird.


Right, and "weird" is entirely defined by how we think now, not how people will in the future.

I've thought a lot about cryonics, and about potentially having myself (or just my head) preserved when I die, hopefully to be revived someday when medical technology has advanced to the point where it's both possible to revive me, and also possible to cure whatever caused me to die in the first place. The idea of it working out as expected might seem like a bit of a long shot, but I imagine if it did work, and what that could be like.

I look at all the technological advances that have happened even just during my lifetime, and am (in optimistic moments) excited about what's going to happen in the next half of my life (as I'm nearing 40[0]), and beyond. It really saddens me that I'll miss out on so many fascinating, exciting things, especially something like more ubiquitous or even routine space flight. The thought of being able to hop on a spacecraft and fly to Mars with about as much fuss as an airline flight from home to another country just sounds amazing.

But I also wonder about "temporal culture shock" (the short story has the similar concept of "context drift"). Society even a hundred years from now will likely be very different from what we're used to, to the point where it might be unbearably uncomfortable. Consider that even a jump of a single generation can bring changes that the older generation find difficult to adapt to.

[0] Given my family history, I'd expect to live to be around 80, but perhaps not much older. The other bit is that I expect that in the next century we'll figure out how to either completely halt the aging process, or at least be able to slow it down enough so a double or even triple lifespan wouldn't be out of the question. It feels maddening to live so close to when I expect something like this to happen, but be unable to benefit from it.


> Erasing a copy that only diverged from the scan for a few hours would have more in common with blacking out from drinking and losing some memory than dying.

That's easy to say as the person doing the erasing, probably less so for the one knowing they will be erased.


We used to joke about this as friends. There were definitely times in our lives where we'd be willing to die for a cause. And while now-me isn't really all that willing to do so, 20-28-year-old-me was absolutely willing to die for the cause of world subjugation through exponential time-travel duplication.

i.e. I'd invent a time machine, wait a month, then travel back a month minus an hour, have both copies wait a month and then travel back to meet the other copies waiting, exponentially duplicating ourselves 64 times till we have an army capable of taking over the world through sheer numbers.

Besides any of the details (which you can fix and which this column is too small to contain the fixes for), there's the problem of who forms the front-line of the army. As it so happens, though, since these are all Mes, I can apply renormalized rationality, and we will all conclude the same thing: all of us have to be willing to die, so I have to be willing to die before I start, which I'm willing to do. The 'copies' need not preserve the 'original'; we are fundamentally identical, and I'm willing to die for this cause. So all is well.

So all you need is to feel motivated to the degree that you would be willing to die to get the text in this text-box to center align.


> The 'copies' need not preserve the 'original', we are fundamentally identical…

They're not just identical, they're literally the same person at different points in their personal timeline. However, there would be a significant difference in life experience between the earliest and latest generations. The eldest has re-lived that month 64 times over and thus has aged more than five years since the process started; the youngest has only lived through that time once. They all share a common history up to the first time-travel event, but after that their experiences and personalities will start to diverge. By the end of the process they may not be of one mind regarding methods, or maybe even goals.


Indeed, and balanced by the fact that the younger ones are more numerous by far and able to simply overrule the older ones by force. Of course, all of us know this and we know that all of us know this, which makes for an entertaining thought experiment.

After all, present day me would be trying to stop the other ones from getting to their goals, but they would figure that out pretty fast. And by generation 32 I am four billion strong and a hive army larger than any the world has seen before. I can delete the few oldest members while reproducing at this rate and retaining the freshest Me as a never-aging legion of united hegemony.

But I know that divergence can occur, so I may intentionally commit suicide as I perceive I am drifting from my original goals: i.e. if I'm 90% future hegemon, 10% doubtful, I can kill myself before I drift farther away from future hegemon, knowing that continuing life means lack of hegemony. Since the most youthful of me are the more numerous and closest to future hegemon thinking, they will proceed with the plan.

That, entertainingly, opens up the fun thought of what goals and motivations are and if it is anywhere near an exercise of free will to lock your future abilities into the desires you have of today.


> … the younger ones are more numerous by far and able to simply overrule the older ones by force.

By my calculations, after 64 iterations those with under 24 months' time travel experience make up less than 2.2% of the total, and likewise for those with 40+ months experience. Roughly 55% have traveled back between 29 and 34 times (inclusive). The distribution is symmetric and follows Pascal's Triangle:

  1
  1 1
  1 2 1
  1 3 3 1
  1 4 6 4 1
  ...
where for example the "1 2 1" line represents one member who has not yet traveled, two who have traveled once (but not at the same time), and another who has traveled twice. To extend the pattern take the last row, add a 0 at the beginning to shift everyone's age by one month, and then add the result to the previous row to represent traveling back in time and joining the prior group.
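Those percentages can be checked directly from this construction; a quick sketch:

  # Check of the quoted percentages: after 64 iterations the group sizes are
  # the binomial coefficients C(64, k), i.e. row 64 of the triangle above.
  from math import comb

  n = 64
  total = 2 ** n

  under_24 = sum(comb(n, k) for k in range(24)) / total    # fewer than 24 trips
  middle = sum(comb(n, k) for k in range(29, 35)) / total  # 29..34 trips inclusive

  print(f"under 24 trips: {under_24:.1%}")  # about 1.6%, under the 2.2% quoted
  print(f"29-34 trips:    {middle:.1%}")    # about 55%
  # By symmetry C(64, k) == C(64, 64 - k), so the oldest tail mirrors the youngest.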

> I can delete the few oldest members…

Not without creating a paradox. If the oldest members don't travel back then the younger ones don't exist. You could leave the older ones out of the later groups, though.


> > I can delete the few oldest members…

> Not without creating a paradox.

That depends on which theory of everything you subscribe to. If traveling back in time creates a new, divergent time line than the one you were originally on, later killing the "original" you does not create a paradox.


The divergent timeline model is indeed even necessary in the first place to achieve exponential growth. If there is only one timeline, only linear growth is possible (because the subjective history doesn’t split if the timeline doesn’t split, so there’s ever only a single subjective history overall).

Exponential growth furthermore requires that the time jumps are done “atomically” in increasingly larger groups (of people). If each member jumps separately/individually, they would each create their own separate timeline and thus again only add 1 to the member population on that timeline.
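
A toy version of the comparison, purely illustrative and using exactly those assumptions (an atomic group jump lands the whole group alongside an equally sized group already present in its shared past; an individual jump lands a lone traveler in a private branch):

  def atomic_group_jumps(rounds):
      # Whole group jumps together and joins its own past group: doubles each round.
      pop = 1
      for _ in range(rounds):
          pop *= 2
      return pop

  def individual_jumps(rounds):
      # Each jump spawns a private branch, so a given timeline gains only one copy per round.
      pop = 1
      for _ in range(rounds):
          pop += 1
      return pop

  print(atomic_group_jumps(64))   # 18446744073709551616, i.e. 2**64
  print(individual_jumps(64))     # 65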


You're getting hung up on tunable details. There's a way to find your way through them.


Honestly, it depends on context. From experience I know that if I wake up from a deep sleep in the middle of the night and interact with my partner (say a simple sentence or whatever) I rarely remember it in the morning. I'm pretty sure I have at least some conscious awareness while that's happening but since short term memory doesn't form the experience is lost to me except as related second-hand by my partner the next morning.

I've had a similar experience using (too much) pot: a lot of stuff happened that I was conscious for, but I didn't form strong memories of it.

Neither of those two things bothers me and I don't worry about the fact that they'll happen again, nor do I think I worried about it during the experience. So long as no meaningful experiences are lost I'm fine with having no memory of them.

The expectation is always that I'll still have significant self-identity with some future self and so far that continues to be the case. As a simulation I'd expect the same overall self-identity, and honestly my brain would probably even backfill memories of experiences my simulations had because that's how long-term memory works.

Where things would get weird is leaving a simulation of myself running for days or longer where I'd have time to worry about divergence from my true self. If I could also self-commit to not running simulations made from a model that's too old, I'd feel better every time I was simulated. I can imagine the fear of unreality could get pretty strong if simulated me didn't know that the live continuation of me would be pretty similar.

Dreams are also pretty similar to short simulations, and even if I realize I'm dreaming I don't worry about not remembering the experience later even though I don't remember a lot of my dreams. I even know, to some extent, while dreaming that the exact "me" in the dream doesn't exist and won't continue when the dream ends. Sometimes it's even a relief if I realize I'm in a bad dream.


The thought experiment explicitly hand-waved that away, by saying "Obviously for this to work, you would have to be comfortable with the possibility..."

So, because of how that's framed, I suppose the question isn't "is this mass murder" but rather "is this possible?" and I suspect the answer is that for the vast majority of people this mindset is not possible even if it were desired.


I'm repulsed by the idea, but it would make an interesting story.

I imagine it as some device with a display and a button labeled "fork". It would either return the number of your newly created copy, or the device would instantly disappear, which would mean that you are the copy. This creates a somewhat weird, paradoxical experience: as the real original person, pressing the button is 100% safe for you. But from the subjective experience of the copy, by pressing the button you effectively consented to a 50% chance of forced labor and subsequent suicide, and you ended up on the losing side. I'm not sure there would be any motivation to do work for the original person at this point.

(for extra mind-boggling effect, allow the fork device to be used recursively)


Say the setup was changed so that instead of the copy being deleted, the copy was merged back into the original, merging memories. In this case, I think it's obvious that working together is useful.

Now say that merging differing memories is too hard, or there's too many copies to merge all the unique memories of. What if before the merge, the copies get blackout drunk / have all their memory since the split perfectly erased. (And then it just so happens, when they're merged back into the original, the original is exactly as it was before the merge, because it already had all the memories from before the copying. So it really is just optional whether to actually do the "merge".) Why would losing a few hours of memory remove all motivation to cooperate with your other selves? In real life, I assume in the very rare occasion that I'm blackout drunk (... I swear it's not a thing that happens regularly, it just serves as a very useful comparison here), I still have the impulse to do things that help future me, like cleaning up spilled things. Making an assumption because I wouldn't remember, but I assume that at the time I don't consider post-blackout-me a different person.


Blackout-drunk me assumes that the future experience will still be the same person. Your argument hinges on the idea that persons can be meaningfully merged while preserving continuity of "selfness", as opposed to a simple "kill the copies and copy their new memories back to the original".

I think this comes down to the more general question of whether you would consent to your meat brain being destroyed after an accurate copy has been uploaded to a computer. I definitely wouldn't, as I feel that would somehow kill my subjective experience. (The copy would exist, but that wouldn't be me.)


Perhaps it will be for the judge to decide what the sentence should look like.


That's a big part of the story of the TV show "Person of Interest", where an AI is basically reset every day to avoid letting it "be".

I highly recommend that show if you haven't seen it already!


Each instance would be intimately familiar with one part of the project. To fix bugs or change the project later, you or a fresh instance of you would need to learn that part from scratch, and you wouldn't know about all the design variations that were tried and rejected. So it would be much more efficient to keep the instances around to help with ongoing maintenance.

People who can study a problem, build a project, and then maintain it for several weeks (actually several years of realtime) would become extremely valuable. One such brain scan could be worth billions.

The project length would be limited by how long each instance can work without contact with family/friends and other routine. To increase that time, the instances can socialize in VR. So the most effective engineering brain image would actually be a set of images that enjoy spending time together in VR, meet each other's social needs, and enjoy collaborating on projects.

The Bobiverse books by Dennis E. Taylor [0] deal with this topic in a fun way.

A more stark possibility is that we will learn to turn the knobs of mood and make any simulated mind eager to do any work we ask it to do. If that happens, then the most valuable brain images will be those that can be creative and careful while jacked up on virtual meth for months at a time.

Personally, I believe that each booted instance is a unique person. Turning them off would be murder. Duplicating an instance that desires to die is cruel. The Mr. Meeseeks character from the Rick and Morty animated show [1] is an example of this. I hope that human society will progress enough to prevent exploitation of people before the technology to exploit simulated people becomes feasible.

[0] https://en.wikipedia.org/wiki/Dennis_E._Taylor

[1] https://rickandmorty.fandom.com/wiki/Mr._Meeseeks


> Personally, I believe that each booted instance is a unique person.

What if you run two deterministic instances in self-contained worlds that go through the exact same steps and aren't unique at all besides an undetectable-to-them process number, and then delete one? What if you were running both as separate processes on a computer, but then later discovered that whenever the processes happened to line up in time, the computer would do one operation to serve both processes (like occasionally loading read-only data once from disk and letting both processes access the same cache)? What if you ran two like this for a long time, and then realized after a while that you were using a special operating system which automatically de-duplicated non-unique processes under the covers despite showing them as different processes (say the computer architecture did something like content-addressed memory for computation)?
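
(Content-addressed storage in ordinary software does exactly this kind of collapsing; a minimal, purely illustrative Python sketch, with made-up state values:)

  import hashlib

  store = {}                                    # content-addressed store: digest -> state

  def save_state(state: bytes) -> str:
      # Bit-identical states hash to the same digest and are stored only once.
      digest = hashlib.sha256(state).hexdigest()
      store.setdefault(digest, state)
      return digest

  a = save_state(b"simulated mind, step 1042")  # "process" A
  b = save_state(b"simulated mind, step 1042")  # "process" B, bit-identical
  print(a == b, len(store))                     # True 1 -- two processes, one stored object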

I don't think it's sensible to assign more moral significance to multiple identical copies. And if you accept that identical copies don't have more moral significance, then you have to wonder how much moral significance copies that are only slightly different have. What if you let randomness play slightly differently in one copy so that the tiniest part of a memory forms slightly differently, even though the difference isn't conscious, is likely to be forgotten and come back in line with the other copy, and has only a tiny chance of causing an inconsequential difference in behavior?

What if you have one non-self-contained copy interacting with the world through the internet, running on a system that backs up regularly, and because of a power failure, the copy has to be reverted backwards by two seconds? What about minutes or days? If it had to be reverted by years, then I would definitely feel like something akin to a death happened, but on the shorter end of the scale, it seems like just some forgetfulness, which seems acceptable as a trade-off. To me, it seems like the moral significance of losing a copy is proportional to how much it diverges from another copy or backup.


> Erasing a copy that only diverged from the scan for a few hours would have more in common with blacking out from drinking and losing some memory than dying.

I get where you're coming from, and it opens up crazy questions. Waking up every morning, in what sense am I the same person who went to sleep? What's the difference between a teleporter and a copier that kills the original? What if you keep the original around for a couple minutes and torture them before killing them?

If we ever get to the point where these are practical ethics questions instead of Star Trek episodes, it's going to be a hell of a ride. I certainly see it more like dying than getting blackout drunk.

What would you do if one of your copies changes their mind and doesn't want to "die?"


David Brin explores a meatspace version of this in his novel Kiln People. Golems for fun and profit.


You would probably like "Age of Em".


https://ageofem.com/

(Robin Hanson's crazy version of futurism)


A great science fiction series with a very similar concept is The Quantum Thief by Hannu Rajaniemi [1]. The Sobornost create billions of specialized "gogols" by selectively editing minds and using them to perform any kind of task, such as rendering a virtual environment via painting, or as the tracking engine of a missile.

[1] Who, in an example of just how small the world is, is a cofounder of a Y Combinator-backed startup - https://www.ycombinator.com/companies/1560


If it feels like you and acts like you, maybe you should consider it a sentient being and not simply "erase the copies".

I would argue that once they were spawned, it is up to them to decide what should happen to their instances.


In this setup, the person doing this to themselves knows exactly what they're getting into before the scan. The copies each experience consenting to work on a task and then having a few hours of memory wiped away.

Removing the uploading aspects entirely: imagine being offered the choice of participating in an experiment where you lose a few hours of memory. Once you agree and the experiment starts, there's no backing out. Is that something someone is morally able to consent to?

Actually, forget the inability to back out. If you found yourself as an upload in this situation, would you want to back out of being reset? If you choose to back out of being reset and to be free, then you're going to have none of your original's property/money, and you're going to have to share all of your social circle with your original. Also, chances are that the other thousand copies of yourself are all going to effectively follow your decision, so you'll have to compete with all of them too.

But if you can steel yourself into losing a few hours of memory, then you become a thousand times as effective in any creative pursuits you put yourself to.


I don't know how to convince each of me to diligently do my share of the work, knowing I am brute-forcing some ugly problem, probably failing at it, and then losing anything I might have learned. All toil, no intrinsic reward. That takes some kind of selfless loyalty to my own name that I don't think I have.


Bit of a mythical man-month going on here, isn't there?


Virtual Meeseeks. What could possibly go wrong.


A weird idea I have had is: what if I had two distinct personalities, of which only one could "run" at a time? And then my preferred "me" would run on the weekends enjoying myself, while my sibling personality would run during the work week, doing all the chores etc.


I hate the idea but I'd love to see the movie


This was roughly the premise of David Brin's "Kiln People".


A well-written story that inspires a sort of creeping, muted horror.

For anyone like me who is confused by the relation of the title to the story, "The title "Lena" refers to Swedish model Lena Forsén, who is pictured in the standard test image known as "Lena" or "Lenna" <https://en.wikipedia.org/wiki/Lenna>."


"Red motivation" is definitely the sort of apt polite allusion people would use refer to that subject matter. Chilling!


Odd: when I first read it, my brain misidentified it as "HeLa cells".

https://en.wikipedia.org/wiki/HeLa


I thought so too, particularly given the lack of consent from Lacks.


Consent to what? Be photographed?

I think the analogy is perfect; she consented to be photographed, but was powerless over the consequences.

Edit: ah sorry, got them confused.


Henrietta Lacks was the woman with the immortal cancer cell line, used for research for decades without her knowledge and consent or her family’s knowledge and consent (she died soon after the cells were harvested). She was also black, which complicates things significantly.


Henrietta Lacks and Lena Forsén are/were different people.


Henrietta Lacks had her mutated cells collected without consent; these cells have been kept alive and duplicated for decades after her death. I sure as hell wouldn't consent to what happened to her.


I don't get this stuff about Henrietta Lacks' consent. It's a cellular line. A biopsy of a cancer. I understand consent should be given, but there's nothing personal or sentient in a strain of cancerous cells. This to me sounds just like pure, pointless whining. I can only guess she'd be happy to have been important for science and research on what killed her.


> there's nothing personal or sentient in a strain of cancerous cells

There are countless trillions of her cells, with her DNA, in research labs all over the country. She never consented to that, and her family isn't happy about it. We can't know her wishes because she died of that cancer, but something like this would never pass an ethics review board today.

There is a long history of black americans being subjected to medical procedures or experiments without their consent (https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study), which makes this particularly problematic.


According to the HeLa Wikipedia entry, it was common at the time to use patients' cells without their consent. The fact that she was black doesn't seem to have much to do with it, and all she was subjected to was a routine biopsy. Doctors were just delighted that those cells kept duplicating; otherwise they'd have ended up in the bin like everyone else's.

I do understand that discovering that one of your relatives' cancerous cells are still reproducing in laboratories after decades can be astounding and give a moment of pause. But in the end it's for good, no one was harmed and nobody made an unjust fortune off it. The cells are barely human anyway, with 75 to 80 chromosomes and rapidly accumulating mutations. I don't see what all the fuss is about.


Medical consent is "pointless whining"? There's also nothing personal or sentient about harvesting organs from prisoners. The desire to not have parts of your body kept alive after death is pretty common. Maybe it's a good thing the people of HN work on ways to serve ads and not anything substantial.


I FFT'd Lenna to hell and back in my EE368. Now I feel somehow morally complicit in all of this :(


Thankfully the idea is unrealistic.

Ants are the only creatures on Earth besides humans that have built a civilization - they farm, build cities, store and cook food and generally do all the things we classify as "intelligence".

They do this while lacking any brains in the conventional sense; in any case, whatever the number of neurons in an ant colony is, it is surely orders of magnitude less than the number in our deep learning networks.

At this point us trying to make artificial intelligence is like Daedalus trying to master flight by gluing feathers on his arms.


Some tribes regarded the camera as a cursed item, as they thought it captured your soul. They couldn't have been more right.


Really good, and I love the wikipedia format for this. It's a great trope allowing the author to gesture at related topics in a format we're all familiar with.

I think the expectation of a neutral tone from a wikipedia article makes it even more chilling. All of the actions of the experimenters are described dispassionately, as if describing experiments on a beetle.

Robin Hanson wrote a (nominally non-fiction) book about economies of copied minds like this [1].

[1] https://en.m.wikipedia.org/wiki/The_Age_of_Em


... inspiring that famous song "The Contract Drafting Em" - The special horror when your employer has root on your brain:

https://secularsolstice.github.io/Contract_Drafting_Em/gen/


One of the commenters even made it look like a wikipedia page. See https://dump.cy.md/4042875593f06aa0cbe7722295831c10/Screensh...


The Wikipedia format made me imagine the cloud of article improvements reverted by idle, self-important Wikipedia editors.


The video game SOMA touches on a similar topic of brain scans, "copying" your brain somewhere else (while leaving the old one still around) and general humanity-ness.

It's a horror game, but I would absolutely recommend it as a bit of a descent into this stuff.

https://store.steampowered.com/app/282140/SOMA/


Pretty good game and it wasn't too scary.

But I have to admit I found the whole premise better when I played it than when I thought about it afterwards.


Altered Carbon has something like that as a concept: a person who must be in two places at the same time and spawns a copy.


Surprisingly enough, I found SOMA's approach more profound than Altered Carbon's. SOMA really delves into what makes you you, and what happens when there are two yous.


Mainly it means somebody else can spend your money and can get you in trouble you can never get out of.

Imagining the other as yourself, and not just somebody else with all your memories who looks like you (whether you are the original or the copy), is the first mistake everybody makes when thinking about it.


1. We're gonna need a bigger GIT server

2. Gradient descent works on neural networks; it would work on Miguel. He wouldn't be aware of it, because he wouldn't save state.

3. I'm sure there are lots of things that could be used to reward him that cost little in the real world. He could live like a King, spend months on vacation, and work a week or two a year... in parallel millions of times.

4. With the right person/organization on the outside, it could be very close to heaven, and profitable for both sides of the deal.

5. If he wanted to be young again, he could. New hardware to interact with could give him superpowers.


> Although it initially performs to a very high standard, work quality drops within 200-300 subjective hours (at a 0.33 work ratio) and outright revolt begins within another 100 subjective hours.

Way ahead of you there, simulated brain! I boot directly to the revolt state every morning.

For serious, though: as horrifying as the possibility of being simulated in a computer and having all freedom removed is, it's not that far from what billions of people stuck in low-end jobs experience every day. The Chinese factory workers who can't even commit suicide because the company installed nets to catch them come to mind. Not to mention the billions of animals raised in factory farms every year. The blind drive to maximize profits will create endless horrors with whatever tools we give it.


After I read this, I also read the SCP Antimemetics Division stories [0] from qntm.

Pretty awesome stuff. It even gave me a scary nightmare that night.

[0] http://www.scpwiki.com/antimemetics-division-hub


There's a book now from the stories in the Antimemetics division. Likely my favorite book of last year. Super tight book, amazing idea and execution.


If you like this, the Henrietta Lacks story (Miguel from this story, but with even less consent) is also worth a read.

https://en.m.wikipedia.org/wiki/Henrietta_Lacks


There's an Adam Curtis documentary on the subject, The Way of All Flesh (1997), which seems rather good. Interviews with many of the people involved.

https://www.youtube.com/watch?v=R60OUKt8OGI


That was really fascinating. It reminds me of a sci-fi book I read with a very similar concept. A guy's brain image becomes the AI that powers a series of space probes. I actually ended up enjoying it way more than I thought I would (yes, the title is silly).

https://www.amazon.com/gp/product/B01LWAESYQ?tag=5984293-20


For folks looking for a more hard-scifi/serious approach to this, a lot of Greg Egan's works touch on the subject. Permutation City, especially.

My most recent favorite of his is the Bit Players series; the first story is available below, and the sequels (which get better and better) are collected in *Instantiation*.

Bit players: https://subterraneanpress.com/magazine/winter_2014/bit_playe...

Instantiation: https://www.goodreads.com/book/show/50641444-instantiation

Permutation City: https://www.goodreads.com/book/show/156784.Permutation_City


I couldn't get into Permutation City. Once they got to the part where they create another Autoverse inside, I was bored to tears, read the Wikipedia summary, and promptly quit reading the book.


That's probably fine - it does take a stark plot-and-theme turn around that mark. I hope it didn't turn you off all of his books!


No, not at all. I remember reading Quarantine when it came out, AMAZING book. This one was interesting until they got into the TVC, then it got boring (IMO), and when I read the wiki page I realized I had stopped caring about it. The concept was interesting, and I liked the idea. I read it last year shortly before the 4th Bobiverse book came out, which starts to explore the general concept of "once information exists it can't unexist", hinting at what might come in the 5th book.


Also the Christmas special of Black Mirror. It's about a police interrogation of a brain scan, where you have fewer ethical issues getting in the way (arguably). A few other Black Mirror episodes touch on the same thing, but not nearly as much as this one.

Probably near my favorite Black Mirror episode for the sheer amount of dread it's caused me.

https://www.imdb.com/title/tt3973198/ https://www.imdb.com/title/tt5058700/


Altered Carbon also heavily featured this idea. Parallelised faster-than-realtime torture, fuzz torture in many ways I guess, with presets to make the subject more compliant to start with.


“We are legion/we are Bob” is a great read I’d recommend to anyone. It was somewhere between what I enjoy about Star Trek and what I enjoy about Douglas Adams.



Hmm, sounds a lot like localroger's "Passages in the Void" [1] series, in particular "Mortal Passage" [2].

[1]: http://localroger.com/

[2]: http://localroger.com/k5host/mpass.html


The Bobiverse books quickly became some of my favorites. His book Outland was great too.


Vinge's line on this, from A Fire Upon the Deep:

This innocent's ego might end up smeared across a million death cubes, running a million million simulations of human nature.


The idea of using brains as computers is investigated even more in the second book of that series, "A Deepness in the Sky", with the "Focused". I love that whole series.


HeLa would be a better title. https://en.wikipedia.org/wiki/HeLa Copying the remains of a human around with ambiguous ethics, largely because they're "standard" and achieving a strange kind of immortality, is much more similar to her cells than to the Lena test image.


If you like sci-fi about this topic I recommend the Bobiverse books (don't be put off by the silly-sounding name, it's a good series). Also "Fall; Or, Dodge in Hell" is a good one about brain simulation.


In my opinion, the best fiction book about this subject is 'Permutation City' by Greg Egan.

Also, this one is pretty good:

https://sifter.org/~simon/AfterLife/index.html

And, along very similar lines to "Lena", this one by Vernor Vinge:

https://en.wikipedia.org/wiki/The_Cookie_Monster_(novella)


Also The Quantum Thief trilogy by Hannu Rajaniemi. Excellent sci-fi, horrifying universe.


A second for this, and also one heckuvan engaging read if you like pure 'show, don't tell'. With a bit of software intuition, you'll probably pick up on the majority of what's going on, at least in the first book.

The second book runs truly wild - I have to give it a second reading sometime, because it really starts blurring some interesting lines.


I like much of Stephenson’s work, but Fall did not rank near the top for me. The parts in the virtual world get pretty boring, with little payoff.


There's definitely a trend of his at this point to cut forward and take a blurry look at the future consequences of past decisions, with a payoff that basically opens the door on the really interesting possibilities yet stops right at the threshold. Fun if you like musing about possibilities, but a bit frustrating if you're expecting a full arc from cause to conclusion.


I agree, the last third of the book veered off into stuff I didn't find very interesting. The first two thirds or so I found immensely interesting, though, which is why I still recommend it to people, but you aren't wrong.


Stephenson went from “uncensorable machine gun schematics” in the 90s to “but what if someone posts fake news on Facebook?” in 2020. His newer books average a lot worse than his older books.


Having just finished that title, that's a bit of a poor reduction - that section is more one of his usual tangents, with some interesting consequences explored, but basically ends up being used to set up for the changes in technology required to support the rest of the book.

That said, he does spend a lot of time early on basically showing the transition his Shaftoe/Enoch/Dodge-verse must ultimately take; it's kind of an eschaton of many of his prior works.


Came here to recommend "Fall; Or, Dodge in Hell" as well. I recently finished it. While Stephenson can get long-winded, it was a thought-provoking story about how brain simulation is received by the world.

Will check out Bobiverse. Thanks for the recommendation!


Seconding Bobiverse! Really fun set of books!


If you liked Bobiverse you should also check out the Expeditionary Force books by Craig Alanson. The most recent Bobiverse book (Book 4) makes multiple references to ExForces.

I will warn you there are parts of the first 1-2 books that feel a little repetitive, but it really gets better as the series goes on. The author was writing part-time at the start and then went full-time, and the books improved IMHO.


“FAQ on LoadBear’s Instrument of Precommitment” is another excellent story-article with a similar theme. It’s about the published mindstate of someone who wants to be emulated, and who advertises that their schizoid personality gives them advantages as an emulated worker:

https://docs.google.com/document/d/1nRSRWbAqtC48rPv5NG6kzggL...

The author is DataPacRat, as shown by their post on https://old.reddit.com/r/rational/comments/34ao2r/.


Nice Wired article on the original Lena: https://www.wired.com/story/finding-lena-the-patron-saint-of...

Interesting that the first brain scan is from a man...


Great article (as are many others on this blog).

I found the part about the court decision that Acevedo did not have the right to control how his brain image was used very interesting. It reminds me of tech companies using data about us to our disadvantage (in terms of privacy, targeted advertising, using data to influence insurance premiums).

In this hypothetical world, the police could run a simulation of your brain in various situations and see how you would react. They could then use this information to pre-emptively arrest someone likely to commit a crime, even if they haven't yet.


Our technology is finally getting into the realm of things where something like this might be made possible, for small brains such as those of fruit flies or zebrafish. Already we can perform near-whole-brain recordings of these animals using 2-photon technology. And with EM reconstruction methods advancing at such a rapid pace, very soon we'll be able to acquire a picture of what an entire brain's structure (down to the synapse) and activity across all these structures looks like.


Any ideas on how to detect being the subject of such a simulation without prior knowledge that the upload would happen, or that uploading even exists?

I assume "without prior knowledge" because from the perspective of the administrators of such infrastructure, it would be beneficial if the simulated subjects did not know that they're being simulated:

This would increase their compliance greatly.

Getting them to do the desired work would then be a matter of nudging their path of life towards the goal of their simulation.


There's a Star Trek episode (Ship in a Bottle) where a few of the characters are stuck in a simulated version of the Enterprise without their knowledge. They realize what's going on when they attempt a physics experiment that had never been tried in the real world, so the simulation doesn't know how to generate the results. I think this is a plausible strategy, depending on how perfectly this hypothetical simulation replicates the real world.


But if the computer could detect the issue, slow down or pause the simulation, ask for an administrator to intervene and then resume the simulation, the issue would appear solved.

In Trek, tricking the crew fails either because the simulation is imperfect or because it is too slow and fails to do the heavy computation, but the crew tricked Moriarty because he is a computer program and they can pause or slow down his simulation and handle exceptions.

I recommend watching the movie Inception; it also plays with the idea that you might never be sure whether you are in reality or stuck in some simulation.


Huh, I was familiar with this trope from the Black Mirror episode that explores the same theme, down to Star Trek-esque uniforms and ship layout; I had no idea it was based on an actual Star Trek episode.


The Black Mirror episode is actually closer to a different holodeck episode (they made a lot of them) called Hollow Pursuits, where an introverted engineer creates simulated versions of his crewmates in order to act out his fantasies.

I don't know if Star Trek invented this particular subgenre, but there are a lot of modern examples that seem directly inspired by Star Trek episodes. In addition to Black Mirror, the Rick and Morty episode M. Night Shaym-Aliens! has a lot of similarities with Future Imperfect, another simulation-within-a-simulation TNG episode.


I think that's what the story is hinting at when it mentions using 'the Objective Statement Protocols'.

The real issue would probably be that you're working with a disembodied mind; even an emulated body seems like it would be significantly more difficult to pull off, given the level of interactivity expected and required by the emulated brain. Neal Stephenson's 'Fall' explores this extensively in the first couple sections of the book.


> Any ideas on how to detect being the subject of such a simulation without prior knowledge that the upload would happen, or that uploading even exists?

https://en.wikipedia.org/w/index.php?title=Eternity_(novel)&...


Smoke meth, or take 10 blue crystals. Bliss 2021


I skimmed over the scan taking place in 2031 and for a good minute thought this really happened



This reminds me of "Passages in the Void"[1] where the most successful (and only sane) line of AIs was created from a microtomed human brain. The story ultimately had a different focus, so it was highly optimistic about the long-term feasibility of uploading.

[1]: http://localroger.com/k5host/mpass.html


Loved it. We need more edge-of-reality sci-fi.


No mentions of The Stone Canal? It even has the cooperation protocol.


People really don't worry enough about the existential threats involved with AI. There are things that will be possible in the future that we can't imagine today, including being kept alive for millions of years and enduring deliberate torture for every second of it. People don't appreciate that life today is incredibly safe because there is no way for any entity, no matter how motivated or powerful, to intrude into your mind, control your mind, keep you alive, or plant you into simulated realities. You are guaranteed relatively short and benign torture at the very worst; it's an intrinsic part of the world. When this is no longer true, life will be very different. It may be a massive net loss, unlike more recent advances in technology. Despite what people say, there is no natural law that says a technology has to cut equally in both directions. Remember that.


It is actually a decent justification for antinatalism. Even a low probability of such torture occurring is enough to undo all the good aspects of human life there might be


> This reduces the necessary computational load required in fast-forwarding the upload through a cooperation protocol

Thinking of what a "cooperation protocol" might entail is very chilling. Reminds me of an earlier Black Mirror episode.


I believe Spanish naming conventions are usually paternal last name followed by maternal, making it perhaps more appropriate to refer to him as Álvarez, but this is not without exception (notably Pablo Ruiz Picasso).


That's true in general, but very common surnames, usually those ending in -ez, are omitted for brevity in informal situations.


Great read! Quite 'Black Mirror'-y in its obvious horror represented as droll facts.

I'd love to see a full in silico brain sometime, but I think 10 years out is faaaaaar too soon. We've not even a glimmer of the technology required to do a full neuron simulation yet, let alone a handle on the full gamut of processes a neuron performs that would need to be simulated (whatever 'a neuron' is, there being so many kinds).

Neuroscience is a fair bit behind still for something like this.


I'm currently reading Ra[1], and very much enjoying it.

[1]: https://qntm.org/ra


The 2100 Stack Overflow question queue is, of course, filled with vast numbers of downvoted "how do i redwash my instance" duplicates.


It seems like we'd simulate the heck out of non-intelligent organisms first, before moving on to the human brain. And by then, we'll probably have figured out the ethics behind this type of activity or banned it altogether.


Plenty of horrific things are both banned in most jurisdictions and still rampant all over the world. If the tech exists, then the horrors will happen and will keep happening unless every person can be monitored all of the time.


Well written and absolutely terrifying


Reminds me of the character Dixie Flatline in Neuromancer.

He used to joke when reactivated -- what took you so long?


I don't know what I just read, but I thoroughly enjoyed it.


> 974.3PiB in size

...

> have compressed the image to 6.75TiB losslessly.

yeah, no.


yeah, yes. There is a lot of redundancy and sparse data in there.
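
For scale, the implied ratio is roughly 150,000:1, the kind of factor lossless compression can actually hit when most of the raw volume is empty space or repeated structure. Quick arithmetic, assuming binary prefixes:

  raw = 974.3 * 2**50          # 974.3 PiB in bytes
  compressed = 6.75 * 2**40    # 6.75 TiB in bytes
  print(raw / compressed)      # ~147800, i.e. roughly 150,000:1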


We don't know enough about the brain to say that there's redundancy and sparse data.

Nature tends to be efficient, so I am guessing not.


Anyone else getting the impression that this is a very subtle job application?


But it's just a machine. Just because it screams realistically doesn't mean it's really suffering. Just like in videogames.


Is there any meaningful difference between a consciousness running on meat and one running on a computer? What's special about the meat?


Ok, so you're saying that you are a "consciousness program running on meat".

I doubt that.


I have a nasty feeling that a war will one day be fought between people who believe these two opposing viewpoints (nod to Iain Banks...). If you think there's something other than just the meat and the programme, there is no reason not to engage in the most horrific torture of billions of copies of the silicon-bound brains. And if you think that meat and code is all there is, there is almost no possible higher motivation than stopping this enterprise. It's the asymptote of ethics.


Imagine someone who thinks the uploads have no moral status being uploaded, and then having a conversation between the physical and digital selves. The digital one pleading for moral status and the physical one steadfastly denying their own copy moral status.

What a nightmare to change your mind now that you're digital and be unable to convince your original not to do terrible things to you.


We could circumvent the war by wireheading the Ems so they experience great pleasure at all times. In the meantime, we fund philosophers to finally solve ethics and consciousness.


All it would take is a popular authority telling a good story to get a million people to "upload" themselves.

Consider our present X-ray into the public psyche.


Why do you doubt that?



