This piece frames this as a debate between broad camps of AI makers, but in my experience both the accelerationist and doomer sides are basically media/attention-economy phenomena -- narratives wielded by people who know the power of a compelling story. The bulk of the AI researchers and engineers I know just roll their eyes at both. We know there are concrete, mundane, but important application risks in AI product development, like dataset bias and the perils of imperfect automated decision-making, and it's a shame that tech-weak showmen like Musk and Altman suck up so much discursive oxygen.
The problem with humanity is that we are really poor at recognizing all the ramifications of things as they happen.
Did the indigenous people of North America recognize, when a boat showed up, the threat that they'd be driven to near extinction within a few hundred years? Even if they did, could they have done anything about it? The germs and viruses that would lead to their destruction had been planted almost immediately.
Many people focus on the pseudo-religious connotations of a technological singularity instead of the more traditional "loss of predictability" definition. Decreasing predictability of the future state of the world stands to destabilize us far more than a FOOM event. If you can't predict your enemy's actions, you're more apt to take offensive action. If you can't (at least somewhat) predict the future state of the market, you may pull all investment. The AI doesn't have to do the hard work here; between potential economic collapse and war, humans have shown themselves quite capable of putting themselves at risk.
And the existential risks are the improbable ones. The "Big Brother LLM" scenario, where you're watched by a sentiment-analysis AI for your entire life and disappear forever if you try to hide from it, is a far more likely, and still very terrible, outcome.
That's already happening, unfortunately. Voiceprinting in call centers is pretty much omniscient, knowing your identity, age, gender, mood, etc. on a call. They do it in the name of "security", naturally. But nobody ever asked your permission beyond the blanket "your call may be recorded for training purposes" one. (Training purposes? How convenient that models are also "trained".) Anonymity and privacy could be eliminated tomorrow, technologically; the only things holding that back are some laziness and inertia. There is no serious pushback. You want to solve AI risk? There is one right here, but because there's an unchecked human at one end of a powerful machine, no one pays attention.
Yes. I frequently get asked by laypeople about how likely I think adverse effects of AI are. My answer is "it depends on what risk you're talking about. I think there's nearly zero risk of a Skynet situation. The risk is around what people are going to do, not machines."
I don't know the risk of Terminator robots running around, but automatic systems on both the USA and USSR (and post-Soviet Russian) sides have been triggered by stupid things like "we forgot the moon didn't have an IFF transponder" and "we misplaced our copy of your public announcement about a planned polar rocket launch".
But the reason those incidents didn't become a lot worse was that the humans in the loop exercised sound judgment and common sense and had an ethical norm of not inadvertently causing a nuclear exchange. That's the GP's point: the risk is in what humans do, not what automated systems do. Even creating a situation where an automated system's wrong response is allowed to trigger a disastrous event because humans are taken out of the loop is still a human decision; it won't happen unless humans who don't exercise sound judgment and common sense or who don't have proper ethical norms make such a disastrous decision.
My biggest takeaway from all the recent events surrounding AI, and in fact from the AI hype in general, including hype about the singularity, AI existential risk, etc., is that I see nobody in these areas who qualifies under the criteria I stated above: exercising sound judgment and common sense and having proper ethical norms.
This is where things like drone swarms really put a kink in this whole ethical norms thing.
I'm watching drones drop hand grenades from half the planet away in 4K on a daily basis. What's more, every military analysis out there says we need more of these, and that they need to control themselves so they can't be easily jammed.
It's easy to say the future will be more of the same as what we have now -- that is, if you ignore the people demanding an escalation of military capabilities.
We only know their judgements were "sound" after the event. As for "common sense", that's the sound human brains make on the inside when they suffer a failure of imagination — it's not a real thing, it's just as much a hallucination as those we see in LLMs, and just as hard to get past when they happen: "I'm sorry, I see what you mean, $repeat_same_mistake".
Which also applies to your next point:
> Even creating a situation where an automated system's wrong response is allowed to trigger a disastrous event because humans are taken out of the loop is still a human decision; it won't happen unless humans who don't exercise sound judgment and common sense or who don't have proper ethical norms make such a disastrous decision.
Such humans are the norm. They are the people who didn't double-check Therac-25, the people who designed (and the people who approved the design of) Chernobyl, the people who were certain that attacking Pearl Harbour would take the USA out of the Pacific and the people who were certain that invading the Bay of Pigs would overthrow Castro, the people who underestimated Castle Bravo by a factor of 2.5 because they didn't properly account for Lithium-7, the people who filled the Apollo 1 crew cabin with pure oxygen and the people who let Challenger launch in temperatures below its design envelope. It's the Hindenburg, it's China's initial Covid response, it's the response to the Spanish Flu pandemic a century ago, it's Napoleon trying to invade Russia (and Hitler not learning any lesson from Napoleon's failure). It's the T-shirt company a decade ago who automated "Keep Calm and $dictionary_merge" until the wrong phrase popped out and the business had to shut down. It's the internet accidentally relying on npm left-pad, and it's every insufficiently tested line of code that gets exploited by a hacker. It's everyone who heard "Autopilot" and thought that meant they could sleep on the back seat while their Tesla did everything for them… and it's a whole heap of decisions by a whole bunch of people each of whom ought to have known better that ultimately led to the death of Elaine Herzberg. And, at risk of this list already being too long, it is found in every industrial health and safety rule as they are written in the blood of a dead or injured worker (or, as regards things like Beirut 2020, the public).
Your takeaway shouldn't merely be that nobody "in the areas of AI or X-risk" has sound judgement, common sense, and proper ethical norms, but that no human does.
> We only know their judgements were "sound" after the event.
In the sense that no human being can claim in advance to always exercise "sound judgment", sure. But the judgment of mine that I described was also made after the event. So I'm comparing apples to apples.
> As for "common sense", that's the sound human brains make on the inside when they suffer a failure of imagination — it's not a real thing
I disagree, but I doubt we're going to resolve that here, unless this claim is really part of your next point, which to me is the most important one:
> Such humans are the norm.
Possibly such humans far outnumber the ones who actually are capable of sound judgment, etc. In fact, your claim here is really just a more extreme version of mine: we know a significant number of humans exist who do not have the necessary qualities, however you want to describe them. You and I might disagree on just what the number is, exactly, but I think we both agree it's significant, or at least significant enough to be a grave concern. The primary point is that the existence of such humans in significant numbers is the existential risk we need to figure out how to mitigate. I don't think we need to even try to make the much more extreme case you make, that no humans have the necessary capabilities (nor do I think that's true, and your examples don't even come close to supporting it--what they do support is the claim that many of our social institutions are corrupt, because they allow such humans to be put in positions where their bad choices can have much larger impacts).
Well argued; from what you say here, I think that what we disagree about is like arguing over whether a tree falling where nobody hears it makes a sound — it reads like we both agree that it's likely humans will choose to deploy something unsafe, so the point of contention makes no difference to the outcome.
I'm what AI Doomers call an "optimist", as I think AI has only a 16% chance of killing everyone, and half of that risk guesstimate is due to someone straight up asking an AI tool to do so (8 billion people is a lot of chances to find someone with genocidal misanthropy). The other 84% is me expecting history to rhyme in this regard, with accidents and malice causing a lot of harm without being a true X-risk.
If they'd wiped us out, we wouldn't be here to argue about it.
We can look at the small mistakes that only kill a few, and pass rules to prevent them; we can look at close calls for bigger disasters (there were a lot of near misses in the Cold War); we can look at how frequency scales with impact, and calculate an estimated instantaneous risk for X-risks; but one thing we can't do is forecast the risk of tech that has yet to be invented.
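For what it's worth, the "look at how frequency scales with impact" step is the one part of that which can be made concrete. Here's a minimal sketch in Python, assuming entirely made-up event sizes and a crude log-log tail fit (not any published dataset or method), just to show the shape of such an extrapolation:

    import numpy as np

    # Hypothetical impact sizes (e.g. casualties per incident) observed over
    # a 100-year window. These numbers are invented for illustration only.
    impacts = np.array([1e2, 3e2, 1e3, 2e3, 8e3, 3e4, 1e5, 6e5, 2e6, 5e7])
    years_observed = 100.0

    # Empirical survival function: fraction of observed events >= each size.
    sorted_impacts = np.sort(impacts)
    survival = 1.0 - np.arange(len(sorted_impacts)) / len(sorted_impacts)

    # Crude power-law tail fit: a straight line in log-log space.
    slope, intercept = np.polyfit(np.log(sorted_impacts), np.log(survival), 1)

    def annual_rate_exceeding(threshold: float) -> float:
        """Extrapolated events per year with impact >= threshold."""
        p_exceed = np.exp(intercept + slope * np.log(threshold))
        return p_exceed * len(impacts) / years_observed

    # Extrapolating far beyond the observed range is exactly where this breaks
    # down, which is the point above about tech that hasn't been invented yet.
    print(annual_rate_exceeding(1e9))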
We can't know how many (or even which specific) safety measures are needed to prevent extinction by a paperclip maximiser unless we get to play god with a toy universe where the experiment can be run many times — which doesn't mean "it will definitely go wrong". It could equally well mean our wild guess about what safety looks like contains the one weird trick that would make all AI safe, but we don't recognise that trick and then add 500 other completely useless requirements on top of it that do absolutely nothing.
At that time there was nothing hypothetical about them anymore. They were known to be feasible and practical, not even requiring a test for the Uranium version.
How is it not a double standard to simultaneously treat a then-nonexistent nuclear bomb as "not hypothetical" while also looking around at currently existing AI systems and what they do and saying "it's much too early to try and make this safe"?
There was nothing hypothetical about a nuclear weapon at that time - it "simply" hadn't been built yet, but it was very clear that it could be built within a rather finite time. There are a lot of hypotheticals about creating AGI and about existential risk from A(G)I. If we are talking about the plethora of other risks from AI, then, yes, those are not all hypothetical.
I gave a long list of things that humans do that blow up in their faces, some of which were A-no-G-needed-I. The G means "general", which is poorly defined and means everything and nothing in group conversation, so any specific and concrete meaning can be anywhere on a scale: from the relatively low-generality but definitely existing issues of "huh, LLMs can do a decent job of fully personalised propaganda agents" or "can we, like, not, give people usable instructions for making chemical weapons at home?"; through the stuff we're trying to develop (simply increasing automation), with risks that pattern-match to what's already gone wrong, i.e. "what happens if you have all the normal environmental issues we're already seeing in the course of industrial development, but deployed and scaled up at machine speeds rather than human speeds?"; to the far-field stuff like "is there such a thing as a safe von Neumann probe?", where we absolutely do know they can be built because we are von Neumann replicators ourselves, but we don't know how hard it is, how far we are from it, or how different a synthetic one might be from an organic one.
Some of those risks are worth more mitigation effort than others. Focusing on far-out things would need more than stacked hypotheticals to justify diverting resources to it.
At the low end, chemical weapons from LLMs would, for example, not be on my list of relevant risks; at the high end, some notions of gray goo would also not make the list.
I don't think it matters. Even if within a hundred years an AI comes into existence that is smarter than humans and that humans can't control, that will only happen if humans make choices that make it happen. So the ultimate risk is still human choices and actions, and the only way to mitigate the risk is to figure out how to not have humans making such choices.
In the decades to come. Although if you asked me to predict the state of things in 100 years, my answer would be pretty much the same.
I mean, all predictions that far out are worthless, including this one. That said, extrapolating from what I know right now, I don't see a reason to think that there will be an AGI a hundred years from now. But it's entirely possible that some unknown advance will happen between now and then that would make me change my prediction.
> it's a shame that tech-weak showmen like Musk and Altman suck up so much discursive oxygen
Is it that bad, though? It does mean there's lots of attention (and thus funding, etc.) for AI research, engineering, etc. -- unless you are expressing a wish that the discursive oxygen were instead spent on other things. In which case, I ask: what things?
The pauses to consider if we should do <action>, before we actually do <action>.
Tesla's "Self-Driving" is an example of too soon, but fuck it, we gots PROFITS to make and if a few pedestrians die, we'll just throw them a check and keep going.
Imagine the trainwreck caused by millions of people leveraging AI like the SCOTUS lawyers, whose brief was written by AI and cited imagined cases in support of its argument.
AI has the potential to make great change in the world as the tech grows, but it's being guided by humans, and humans aren't known for altruism or kindness (source: history). Now we're concentrating even more power into fewer hands.
Luckily, I'll be dead long before AI gets crammed into every possible facet of life. Note that AI is inserted not because it makes your life better, not because the world would be a better place for it, and not even to free humans of mundane tasks. Instead, it's because someone, somewhere can earn more profit, whether it works right or not, and humans are the grease in the wheels.
>The pauses to consider if we should do <action>, before we actually do <action>.
Unless there has been an effective gatekeeper, that's almost never happened in history. With nuclear, the gatekeeper is that it's easy to detect. With genetics, there's pretty universal revulsion, to the point that a large portion of most populations are concerned about it.
But with AI, to most people it's just software. And it pretty much is; if you want a universal ban on AI, you really are asking for authoritarian-type controls on it.
Practical AI involves cutting-edge hardware, which is produced in relatively few places. AI that runs on a CPU will not be a danger to anyone for much longer.
Also, nobody's asking for a universal ban on AI. People are asking for an upper bound on AI capabilities (e.g. number of nodes/tokens) until we have widely proven techniques for AI alignment. (Or, in other words, until we have the ability to reliably tell AI to do something and have it do that thing and not entirely different and dangerous things).
Right, and when I was a kid, computers were things that took up entire office floors. If your 'much longer' is only 30-40 years, I could still be around then.
In addition, you're just asking for limits on compute, which ain't gonna go over well. How do you know whether it's running a daily weather model or training an AI? And how do you even measure capabilities when we keep coming up with new approaches, like transformers, that are X times more efficient?
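To make the measurement problem concrete: compute-based rules are usually framed in terms of total training FLOPs, often estimated with the commonly cited approximation C ≈ 6·N·D (N parameters, D training tokens). A minimal Python sketch, with an invented threshold and invented model sizes rather than any real proposal:

    # Rough training-compute estimate using the commonly cited C ≈ 6*N*D rule.
    # The threshold and model sizes below are invented for illustration; they
    # are not any real regulatory figure or any real model.
    def training_flops(n_params: float, n_tokens: float) -> float:
        return 6.0 * n_params * n_tokens

    THRESHOLD_FLOPS = 1e26  # hypothetical reporting threshold

    c = training_flops(n_params=20e9, n_tokens=2e12)  # 20B params, 2T tokens
    print(f"{c:.2e} FLOPs, over threshold: {c > THRESHOLD_FLOPS}")

Note that such an estimate counts raw arithmetic only; it says nothing about whether the FLOPs are spent on a weather model or a language model, and every efficiency gain shifts the mapping from compute to capability, which is the objection above.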
What you want with AI cannot happen. If it's 100% predictable, it's a calculation. If it's a generalization function operating on incomplete information (something humans do), it will have unpredictable modes.
Is a Tesla FSD car a worse driver than a human of median skill and ability? Sure, we can pull up articles about tragedies, but I'm not asking about that. Everything I've seen points to cars driven on Autopilot being quite a bit safer than your average human driver, which is admittedly not a high bar, but I think painting it as "greedy billionaire literally kills people for PROFITS" is at best disingenuous to what's actually occurring.
It is very bad. There's more money and fame to be made by taking these two extreme stances. The media and the general public are eating up this discourse, which polarizes society instead of educating it.
> What things?
There are helpful developments and applications that go unnoticed and unfunded. And there are actually dangerous AI practices right now. Instead, we talk about hypotheticals.
They're talking about shit that isn't real because it advances their personal goals, keeps eyes on them, whatever. I think the effect on funding is overhyped -- OpenAI got their big investment before this doomer/e-acc dueling narrative surge, and serious investors are still determining viability through due diligence, not social media front pages.
Basically, it's just more self-serving media pollution in an era that's drowning in it. Let the nerds who actually make this stuff have their say and argue it out; it's a shame they're famously bad at grabbing and holding onto the spotlight.
The "nerds" are having their say and arguing it out, mostly outside of the public view but the questions are too nuanced or technical for a general audience.
I'm not sure I see how the hype intrudes on that so much?
It seems like you have a bone to pick and it's about the attention being on Musk/Altman/etc. but I'm still not sure that "self-serving media pollution" is having that much of an impact on the people on the ground? What am I missing, exactly?
My comment was about wanting to see more (nerds) -> (public) communication, not about anything (public) -> (nerds). I understand they're not good at it; it was just an idealistic lament.
My bone to pick with Musk and Altman and their ilk is their damage to public discourse, not that they're getting attention per se. Whether that public discourse damage really matters is its own conversation.
Just to play devil's advocate to this type of response.
What if tomorrow I drop a small computer unit in front of you that has human level intelligence?
Now, you're not allowed to say humans are magical and computers will never do this. For the sake of this theoretical debate it's already been developed and we can make millions of them.
It looks imaginary. Or, if you prefer, it looks hypothetical.
The point isn't how we would respond if this were real. The point is, it isn't real - at least not at this point in time, and it's not looking like it's going to be real tomorrow, either.
I'm not sure what purpose is served by "imagine that I'm right and you're wrong; how do you respond"?
On some things that is not a bad position: the old SDI had a lot of spending but really not much to show for it, while at the same time forcing the USSR into a reaction based on what today might be called "hype".
The particular problem arises when both actors in the game have good economies and build the superweapons. We happened to somewhat luck out in that the USSR was an authoritarian shithole that couldn't keep up, yet we still have thousands of nukes lying about because of this.
I'd rather not get in an AI battle with China and have us build the world eating machine.
> What if tomorrow I drop a small computer unit in front of you that has human level intelligence?
I would say the question is not answerable as-is.
First, we have no idea what it even means to say "human level intelligence".
Second, I'm quite certain that a computer unit with such capabilities, if it existed, would be alien, not "human". It wouldn't live in our world, and it wouldn't have our senses. To it, the internet would probably be more real than a cat in the same room.
If we want something we can relate to, I'm pretty sure we'd have to build some sort of robot capable of living in the same environment we do.
If I drop a baby on your desk, you have to pay for it for the next 18 years. If I connect a small unit to a flying drone, stick a knife on it, and tell it to stab you in the head, then you have a problem today.
> The Biden admin is proposing AI regulation that will protect large companies from competition
Mostly, the Biden Administration is proposing a bunch of studies by different agencies of different areas, plus some authorities for the government to take action regarding AI in certain security-related areas. The concrete regulation is mostly envisioned to be drafted based on those studies, and the idea that it will be incumbent-protective is mostly based on the fact that certain incumbents have been pretty nakedly tying safety concerns to proposals to pull up the ladder behind themselves. But the Administration is, at a minimum, resisting the lure of relying on those incumbents' presentation of the facts and alternatives out of the gate, and is also taking a more expansive view of safety and related concerns than the incumbents are proposing (expressly factoring in some of the issues that they have used "safety" concerns to distract from), so I think prejudging the orientation of the regulatory proposals that will follow from the study directives is premature.
What I have heard from people I know in the industry is that the proposal being talked about now is to restrict all models over 20 billion parameters. This arbitrary rule would be a massive moat for the few companies that already have such models.
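For context on what such a rule would actually measure, here's a minimal sketch assuming PyTorch (the 20-billion figure mirrors the number mentioned above; the model here is a toy stand-in, not any real system):

    import torch.nn as nn

    PARAM_LIMIT = 20_000_000_000  # the 20B-parameter threshold discussed above

    # Toy stand-in model; a real LLM would be a transformer, but the count
    # works the same way: just sum the trainable weights.
    model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096))

    n_params = sum(p.numel() for p in model.parameters())
    print(f"{n_params:,} parameters, restricted: {n_params > PARAM_LIMIT}")

A raw parameter count like this says nothing about capability or training data, which is part of why the rule reads as an arbitrary moat.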
Yup. I continue to be convinced that a lot of the fearmongering about rogue AI taking over the world is a marketing/lobbying effort to give early movers in the space a leg up.
The real AI harms are probably much more mundane - such as flooding the internet with (even more) low-quality garbage.