I agree with this article: all the fear about AGI taking over the species seems to hide the far more dangerous likelihood of efficient but non-general AI ending up in the hands of intelligences with a proven history of oppressing humans: i.e. other humans.
Besides which, AGI, when it comes, is just as likely to be a breakthrough in some random's shed as the product of a billion-dollar research team's efforts to create something that can play computer games well. There's not a lot Musk or anyone else can do to guard against that, except perhaps help create a world that doesn't need 'fixing' when such an AGI emerges.