
I agree with this article: all the fear about AGI taking over the species seems to hide the far more dangerous likelihood of efficient but non-general AI ending up in the hands of intelligences with a proven history of oppressing humans: i.e. other humans.

Besides which, AGI, when it comes, is just as likely to be a breakthrough in some random's shed as to come from a billion-dollar research team's efforts to create something which can play computer games well. There's not a lot Musk or anyone else can do to guard against that, except perhaps help create a world that doesn't need 'fixing' when such an AGI emerges.




> help create a world that doesn't need 'fixing' when such an AGI emerges.

I wish we would run with this rather than relying on tech and the market to solve all our woes.



