
I used to be in this camp, but we can just look around to see some limits on the capacity of human intelligence to do harm.

It's hard for humans to keep secrets and permanently maintain extreme technological advantages over other humans; it's hard for lone humans to carry out large-scale actions without collaborators; and it's harder for psychopaths to collaborate than it is for non-psychopaths, because morality evolved as a set of collaboration protocols.

This changes as more people get access to a "kill everyone" button they can push without experience or long-term planning, sure. But that moment is still far away.

AGI that is capable of killing everyone may be less far away, and we have absolutely no basis on which to predict what it will and won't do, the way we can with humans.



