Well argued; from what you say here, I think our disagreement is like arguing over whether a tree that falls where nobody hears it makes a sound: we both seem to agree that it's likely humans will choose to deploy something unsafe, so the point of contention makes no difference to the outcome.
I'm what AI Doomers call an "optimist", since I think AI has only a 16% chance of killing everyone, and half of that risk guesstimate is due to someone straight-up asking an AI tool to do so (8 billion people is a lot of chances to find someone with genocidal misanthropy). In the remaining 84% of outcomes, I expect history to rhyme, with accidents and malice causing a lot of harm without being a true X-risk.