I don't know the risk of Terminator robots running around, but automatic systems on both the US and Soviet (and post-Soviet Russian) sides have been triggered by stupid things like "we forgot the moon didn't have an IFF transponder" and "we misplaced our copy of your public announcement about planning a polar rocket launch".
But the reason those incidents didn't become a lot worse was that the humans in the loop exercised sound judgment and common sense and had an ethical norm of not inadvertently causing a nuclear exchange. That's the GP's point: the risk is in what humans do, not in what automated systems do. Even creating a situation where an automated system's wrong response is allowed to trigger a disastrous event because humans are taken out of the loop is still a human decision; it won't happen unless humans who don't exercise sound judgment and common sense, or who don't have proper ethical norms, make such a disastrous decision.
My biggest takeaway from all the recent events surrounding AI, and in fact from the AI hype in general, including hype about the singularity, AI existential risk, etc., is that I see nobody in these areas who qualifies under the criteria I stated above: exercising sound judgment and common sense and having proper ethical norms.
This is where things like drone swarms really put a kink in this whole ethical norms thing.
I'm watching drones drop hand grenades from half the planet away in 4K on a daily basis. Moreover, every military analysis out there says we need more of these, and that they need to control themselves so they can't be easily jammed.
It's easy to say the future will be more of the same as what we have now, if you ignore the people demanding an escalation of military capabilities.
We only know their judgements were "sound" after the event. As for "common sense", that's the sound human brains make on the inside when they suffer a failure of imagination — it's not a real thing, it's just as much a hallucination as those we see in LLMs, and just as hard to get past when they happen: "I'm sorry, I see what you mean, $repeat_same_mistake".
Which also applies to your next point:
> Even creating a situation where an automated system's wrong response is allowed to trigger a disastrous event because humans are taken out of the loop, is still a human decision; it won't happen unless humans who don't exercise sound judgment and common sense or who don't have proper ethical norms make such a disastrous decision.
Such humans are the norm. They are the people who didn't double-check Therac-25, the people who designed (and the people who approved the design of) Chernobyl, the people who were certain that attacking Pearl Harbour would take the USA out of the Pacific and the people who were certain that invading the Bay of Pigs would overthrow Castro, the people who underestimated Castle Bravo by a factor of 2.5 because they didn't properly account for Lithium-7, the people who filled the Apollo 1 crew cabin with pure oxygen and the people who let Challenger launch in temperatures below its design envelope. It's the Hindenburg, it's China's initial Covid response, it's the response to the Spanish Flu pandemic a century ago, it's Napoleon trying to invade Russia (and Hitler not learning any lesson from Napoleon's failure). It's the T-shirt company a decade ago who automated "Keep Calm and $dictionary_merge" until the wrong phrase popped out and the business had to shut down. It's the internet accidentally relying on npm left-pad, and it's every insufficiently tested line of code that gets exploited by a hacker. It's everyone who heard "Autopilot" and thought that meant they could sleep on the back seat while their Tesla did everything for them… and it's a whole heap of decisions by a whole bunch of people each of whom ought to have known better that ultimately led to the death of Elaine Herzberg. And, at risk of this list already being too long, it is found in every industrial health and safety rule as they are written in the blood of a dead or injured worker (or, as regards things like Beirut 2020, the public).
Your takeaway shouldn't merely be that nobody "in the areas of AI or X-risk" has sound judgement, common sense, and proper ethical norms, but that no human does.
> We only know their judgements were "sound" after the event.
In the sense that no human being can claim in advance to always exercise "sound judgment", sure. But the judgment of mine that I described was also made after the event. So I'm comparing apples to apples.
> As for "common sense", that's the sound human brains make on the inside when they suffer a failure of imagination — it's not a real thing
I disagree, but I doubt we're going to resolve that here, unless this claim is really part of your next point, which to me is the most important one:
> Such humans are the norm.
Possibly such humans far outnumber the ones who actually are capable of sound judgment, etc. In fact, your claim here is really just a more extreme version of mine: we know a significant number of humans exist who do not have the necessary qualities, however you want to describe them. You and I might disagree on just what the number is, exactly, but I think we both agree it's significant, or at least significant enough to be a grave concern. The primary point is that the existence of such humans in significant numbers is the existential risk we need to figure out how to mitigate. I don't think we need to even try to make the much more extreme case you make, that no humans have the necessary capabilities (nor do I think that's true, and your examples don't even come close to supporting it--what they do support is the claim that many of our social institutions are corrupt, because they allow such humans to be put in positions where their bad choices can have much larger impacts).
Well argued; from what you say here, I think what we disagree about is like arguing over whether a tree that falls where nobody hears it makes a sound: it reads like we both agree it's likely humans will choose to deploy something unsafe, and the point of contention makes no difference to the outcome.
I'm what AI Doomers call an "optimist", as I think AI has only a 16% chance of killing everyone, and half of that risk guesstimate is due to someone straight up asking an AI tool to do so (8 billion people is a lot of chances to find someone with genocidal misanthropy). The other 84% is me expecting history to rhyme in this regard, with accidents and malice causing a lot of harm without being a true X-risk.
If they'd wiped us out, we wouldn't be here to argue about it.
We can look at the small mistakes that only kill a few, and pass rules to prevent them; we can look at close calls for bigger disasters (there were a lot of near misses in the Cold War); we can look at how frequency scales with impact, and calculate an estimated instantaneous risk for X-risks; but one thing we can't do is forecast the risk of tech that has yet to be invented.
We can't know how many (or even which specific) safety measures are needed to prevent extinction by paperclip maximiser unless we get to play god with a toy universe where the experiment can be run many times. That doesn't mean "it will definitely go wrong"; it could equally well mean our wild guess about what safety looks like contains one weird trick that would make all AI safe, but we don't recognise that trick and then add 500 other completely useless requirements on top of it that do absolutely nothing.
At that time there was nothing hypothetical about them anymore. They were known to be feasible and practical, not even requiring a test for the Uranium version.
How is it not a double standard to simultaneously treat a then-nonexistent nuclear bomb as "not hypothetical" while also looking around at the currently existing AI and what they do and saying "it's much too early to try to make this safe"?
There was nothing hypothetical about a nuclear weapon at that time: it "simply" hadn't been built yet, but it was very clear that it could be built within a rather finite time. There are a lot of hypotheticals about creating AGI and about existential risk from A(G)I. If we are talking about the plethora of other risks from AI, then, yes, those are not all hypothetical.
I gave a long list of things that humans do that blow up in their faces, some of which were A-no-G-needed-I. The G means "general", which is poorly defined and means everything and nothing in group conversation, so any specific and concrete meaning can be anywhere on a scale: from the relatively low-generality but definitely existing issues of "huh, LLMs can do a decent job of fully personalised propaganda agents" or "can we, like, not, give people usable instructions for making chemical weapons at home?"; through the stuff we're trying to develop (simply increasing automation), with risks that pattern-match to what's already gone wrong, i.e. "what happens if you have all the normal environmental issues we're already seeing in the course of industrial development, but deployed and scaled up at machine speeds rather than human speeds?"; to the far-field stuff like "is there such a thing as a safe von Neumann probe?", where we absolutely do know they can be built because we are von Neumann replicators ourselves, but we don't know how hard it is, how far we are from it, or how different a synthetic one might be from an organic one.
Some risks there are worth more mitigation effort than others. Focusing on far-out things would need more than stacked hypotheticals to justify diverting resources to it.
At the low end, chemical weapons from LLMs would, for example, not be on my list of relevant risks, at the high end some notions of gray goo would also not make the list.