> We only know their judgements were "sound" after the event.
In the sense that no human being can claim in advance to always exercise "sound judgment", sure. But the judgment of mine that I described was also made after the event. So I'm comparing apples to apples.
> As for "common sense", that's the sound human brains make on the inside when they suffer a failure of imagination — it's not a real thing
I disagree, but I doubt we're going to resolve that here, unless this claim is really part of your next point, which to me is the most important one:
> Such humans are the norm.
Possibly such humans far outnumber the ones who actually are capable of sound judgment, etc. In fact, your claim here is really just a more extreme version of mine: we know a significant number of humans exist who do not have the necessary qualities, however you want to describe them. You and I might disagree on just what the number is, exactly, but I think we both agree it's significant, or at least significant enough to be a grave concern. The primary point is that the existence of such humans in significant numbers is the existential risk we need to figure out how to mitigate. I don't think we need to even try to make the much more extreme case you make, that no humans have the necessary capabilities (nor do I think that's true, and your examples don't even come close to supporting it--what they do support is the claim that many of our social institutions are corrupt, because they allow such humans to be put in positions where their bad choices can have much larger impacts).
Well argued. From what you say here, I think our disagreement is like arguing about whether a tree falling where nobody hears it makes a sound: we both seem to agree that it's likely humans will choose to deploy something unsafe, and the point of contention makes no difference to that outcome.
I'm what AI Doomers call an "optimist", in that I think AI has only a 16% chance of killing everyone, and half of that risk guesstimate is due to someone straight up asking an AI tool to do so (8 billion people is a lot of chances to find someone with genocidal misanthropy). The other 84% is me expecting history to rhyme in this regard, with accidents and malice causing a lot of harm without being a true X-risk.