I don't really have a strong view on whether an actual AGI would have a desire to kill all humans. I do, however, think that one entity trying to create another entity that is more intelligent than it, yet that it can still control, looks arbitrarily difficult over the long run.
I think a moratorium on AI development would be impossible to enforce, and the further you stretch the timeline out, the more likely these negative outcomes become, as the technical barriers to entry continue to fall.
I've personally assumed this for thirty years; the only difference now is that the timeline seems to be accelerating.