
The question is not really whether such and such implementation is best. The question is: does changing the implementation preserve subjective identity?

I bet many people here would not doubt the moral value of an emulation of a human (with feelings and such simulated to the point of being real), but would highly doubt that it is, well, the "same" person as the original.




That's actually a good point, if a confusing one. I'd like to know the answer as well, though I believe there's a chance the answer will be "mu".


When the robot points the flamethrower at you and announces in the Siri voice, "Fear not, a backup has been made", you will no longer be confused.


Yeah, by that point I'll know the AI is an Unfriendly AI, and I'll be deeply sorrowful and scared for the future.



