
Whether it's reasonable or unreasonable should not depend primarily on how frightening it is to you.



I'm sorry, I think the prospect of human extinction should be frightening to just about everybody.

If we do eventually develop a trans-human AI, it's a virtual certainty that it will escape its "box". Whether it would kill us all is unknown. However, it definitely could, and we would be effectively powerless to stop it.

"Reasonable" is a measure of risk tolerance. Since the downside risks associated with trans-human AIs are effectively infinite, fear of those risks is always reasonable.

Corollary: Since the upside potential is also unbounded, greed is also reasonable.

Frankly, though, I'd rather not roll those dice if we can avoid it.


If we can bring new intelligence to life, do we have the moral right to refrain from doing so? Also, would it be right to confine it to the box mentioned in the article while using its smarts to do useful work outside it? Shouldn't a trans-human AI be entitled to a right to life?



