Exactly. While there is an argument to be made that people are the real danger, that is beyond the discussion taking place. It has already been accepted, for the sake of discussion, that the nuclear missile is the danger, not the math used to develop the missile, nor the people who thought it was a good idea to use it. Applying AI to the missile still means the missile is the danger. Any use of AI in the scope of that missile is just an implementation detail.
You said that "AI is an application of math. It is not clear how that can be used directly to harm a body." I was trying to illustrate the case that if humans can develop harmful things, like nuclear weapons, then an AI that is as smart as a human can presumably develop similarly harmful things.
If the point you are trying to make is that an AI which secretly creates and deploys nuclear, biological, or chemical weapons in order to destroy all of humanity is not an "AI risk" because it's the weapons that do the actual harm, then... I really don't know what to say to that. Sure, I guess? Would you also say that drunk drivers are not dangerous, because the danger is the cars they drive colliding into people's bodies, and the drunk driver is just an implementation detail?
> I was trying to illustrate the case that if humans can develop harmful things, like nuclear weapons, then an AI that is as smart as a human can presumably develop similarly harmful things.
For the sake of discussion, it was established even before I arrived that those developed things are the danger, not whoever or whatever creates and uses them. What is to be gained by ignoring all of that context?
> I really don't know what to say to that. Sure, I guess?
Nothing, perhaps? It is not exactly something that is worthy of much discussion. If you are desperate for a fake internet battle, perhaps you can fight with earlier commenters about whether it is the nuclear missiles that are dangerous or the people who create and possess them? But I have no interest. I cannot think of anything more boring.
I'm specifically worried that an AGI will conceal some instrumental goal of wiping out humans, while posing as helpful. It will helpfully earn a lot of money for a lot of people, by performing services and directing investments, and with its track record, will gain the ability to direct investments for itself. It then plows a billion dollars into constructing a profitable chemicals factory somewhere where rules are lax, and nobody looks too closely into what else that factory produces, since the AI engineers have signed off on it. And then once it's amassed a critical stockpile of specific dangerous chemicals, it releases them into the atmosphere and wipes out humanity / agriculture / etc.
Perhaps you would point out that in the above scenario the chemicals (or substitute viruses, or whatever) are the part that causes harm, and the AGI is just an implementation detail. I disagree, because if humanity ends up playing a grand game of chess against an AGI, the specific way in which it checkmates you is not the important thing. The important thing is that it's a game we will inevitably lose. To worry about the danger of rooks and bishops is to lose focus on the real reason we lose the game: we face an opponent of overpowering skill, and our defeat is in its interests.
Cool, I guess. While I have my opinions too, I'm not about to share them, as that would be bad-faith participation. Furthermore, sharing them would add nothing to the discussion taking place. What is to be gained by going off on a random tangent that is of interest to nobody? Nothing, that's what.
To bring us back on topic and try to salvage things, it remains established in this thread that the objects of destruction are the danger. AI cannot be the object of destruction, although it may be part of an implementation. Undoubtedly, nuclear missiles already utilize AI, and when one talks about the dangers of nuclear missiles, they are already including AI as part of that.
Yes, but usually when people express concerns about the danger of nuclear missiles, they are thinking only of nuclear missiles under the direction of nation-states or perhaps very resourceful terrorists. Their solutions are usually aimed at that threat, like arms control treaties. They aren't really including "and maybe a rogue AI will secretly build nuclear weapons on the moon and then launch them at us" in the conversation about the danger of nukes and the importance of international treaties, even though the nukes do the actual damage in that scenario. Most people would categorize that as sounding more like an AI-risk scenario.