> The most plausible of the "foom" scenarios is one in which a tiny group of humans leverages superintelligence to become living gods. I consider "foom" very speculative, but this version is plausible because we already know humans will act like that, while AI so far has shown no self-awareness or independence to speak of.
One of my hopes is that "superintelligence" turns out to be an impossible geekish fantasy. That hope seems plausible, because the most intelligent people are frequently not very successful in practice.
But if it is possible, I think a full-scale nuclear war might be a preferable outcome to living with such "living gods."
I do wonder how long this bias towards the status quo will survive. This is the first big shakeup (what if we're not the smartest beings?), and I think the next is probably going to be long lives (what will the shape of my life be?). From there it's full-on sci-fi speculation, but I'd definitely bet that humans will incrementally trade comfort and progress for… I guess existential stability.