Yep, and that’s rather terrifying, is it not? Is there any good reason to assume that future AGI will share our sense of morality, once it is smart enough to surpass human thought?



> Is there any good reason to assume that future AGI will share our sense of morality

I think it would be surprising if it did. Just as our morality is shaped by our understanding of the world and our capabilities, a future AGI's morality would be shaped by its own understanding and capabilities. It might do something we think is terrible but actually isn't, simply because we lack the capacity to understand why it's doing it. I'm thinking of how a dog might see a trip to the vet as punishment, when we're actually doing it out of love.



