
In other words, make a domesday robot that remembers people before it kills them.

As they die, some will take solace in a religious belief that numbers in the machine represent everything they were and ever will be. Others will just die.

A digital tombstone to a dead race.




In his defense, if you're dying anyway, you might as well leave a "ghost" behind. The ghost might not be you, and it will certainly have some psychological issues to deal with from knowing it's one ontological level "down" from a real, flesh-and-blood person, but you were going to die regardless.


Why do you assume the implementation hardware matters?

If it does, why assume brain-meat is better, as opposed to worse?


I assume that ontological security matters. If I know my consciousness runs on meat, I know that I have my own personal substrate. If I know I'm in the Matrix, I know that whoever has `root` access can alter or deceive me as they please.

The one thing nobody ever specifies about these crazy schemes, which would otherwise be a great way for humanity to get the hell off of Earth and leave the natural ecosystem to itself in our absence, is who gets to be root, and how he's going to forcibly round up everyone who doesn't like your futurist take-over-everyone's-minds scheme. Hell, what's going to stop him from rampaging across the real Earth and the rest of the universe, destroying everything in sight, while everyone else fucks around having fun in VR?

I'm really wondering why this nasty, insane idea has been cropping up more frequently lately in geek circles.

And that's not even getting into the sheer ludicrousness of claiming people's consciousness is pure software when we know that all kinds of enzymes and hormones affect our personalities!


> And that's not even getting into the sheer ludicrousness of claiming people's consciousness is pure software when we know that all kinds of enzymes and hormones affect our personalities!

That's a bug to fix in implementation accuracy. I'd obviously prefer more accuracy, but if it comes down to a choice between a less-than-perfect available implementation and dying of old age, I'll happily take the less accurate implementation, especially one that preserves enough information to fix that issue later.

The much more serious bug I am concerned about is the continuity flaw: a copy of me does not make the original me immortal. I'd like the original me to keep thinking forever. Many proposals exist for how to ensure that. The scary problem that needs careful evaluation before implementing any solution: if you do it wrong, the copy will never know the difference, but the original will die.


No human should ever be root. But we might just trust a Friendly AI. Well, provided we manage to make the AI actually Friendly (as in, one that does exactly what's good for us, instead of whatever imperfect idea of what's good for us we might be tempted to program in).


And if we don't, we all die (at best), but that's nothing new. Nor is it avoidable by other means than FAI.

The route to unfriendly AI is revenue-positive right up until it kills us.


The question is not really whether such-and-such implementation is best. The question is, does changing implementations preserve subjective identity?

I bet many people here would not doubt the moral value of an emulation of a human (one whose feelings and such are simulated to the point of being real), but would highly doubt that it would be, well, the "same" person as the original.


That's actually a good point, if a confusing one. I'd like to know the answer as well, though I believe there's a chance the answer will be "mu".


When the robot points the flamethrower at you, and announces using the Siri voice, "Fear not, a backup has been made", you will no longer be confused.


Yeah, by that point I'll know the AI is an Unfriendly AI, and I'll be deeply sorrowful and scared for the future.




