The greatest risk of AI is from the people who control it, not the tech itself (aisnakeoil.substack.com)
627 points by nickwritesit on May 31, 2023 | 504 comments



A specific risk I am worried about today is using AI to power and make impactful decisions in high-risk infrastructure that people rely on. I do not want my power company making decisions about me based on a large language model that regularly gets things wrong. Not without significant controls in place, explainability/auditability, and the ability to quickly reverse any bad decision. Replace "power company" with the numerous things we all rely on today and it freaks me out.

People are jumping too quickly to deeply integrate this tech with everyday things, and while that's great for many use cases, it's not so great for others.

You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.


> You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.

Many of those people were not tricked; tech CEOs are involving themselves with this and picking up Yudkowsky's particular strain of AI safety intentionally.

Instead of focusing on the very real problems AI causes and exacerbates today, problems that might affect tech companies' bottom lines if addressed, CEOs can control the narrative and appear proactive by buying into this brand of science fiction.

Yudkowsky's take on AI safety is to tech companies what greenwashing is to the auto industry and oil companies.


> Yudkowsky's take on AI safety is to tech companies what greenwashing is to the auto industry and oil companies.

The most useful of fools. Yudkowsky acts, speaks, dresses and looks like the most neck-beard of neck-beards. He wears that cringey trilby hat in podcast interviews.

I've often thought that the popular promotion of people like Michael Moore and even Noam Chomsky works in the favor of those in power. Choose the ugliest, most boring and laughable opponents and then allow them to speak while just rolling your eyes and giggling. It works even better when they are speaking sense, since you can demonstrate you are allowing legitimate dissent.

I wonder if Chomsky ever suspected that his own promotion and fame were the result of the establishment using his yawn-inducing academic tone in their own attempts to manufacture consent. Can you imagine someone who is obsessed with the Kardashians being caught dead agreeing with Chomsky, Moore or Yudkowsky? And then consider whether the number of people swayed by such superficial appearances is a majority or a minority, and how that affects democracy.


Let's not attack people through their looks.


We play with tropes based on appearances within our culture.

Consider Shrek. The story is made to subvert the trope that the ugly person is the villain, as it was in Disney movies.

But the real point isn’t some Platonic view of Beauty. It is about expectations. Like how advertisers use people wearing lab coats to subliminally feed credibility into the words of actors.

You may wish to be free of such bias, and you may even work to raise your own awareness and strive to critically examine the message of people. And let us all work towards such a world. But we should not pretend such a world exists today, despite our desires. It is a world that may exist in the future once we mature as a species.

Until that day arrives, advertisers and political movements will use our biases against us.

My claim isn’t that Yudkowsky is ugly, it is that he represents a trope that is being used to bias his message.


He didn't decide a whole lot about how he looks, except he wears a hat. His hair color is just how he was made, same with how his face looks.

He appears to make an effort to look good just like most everyone else. If he were trying to look a certain way he might have tattoos or have dyed his hair. He doesn't, that I know of.

You seem to be policing a small recent decision he made to wear a hat.


I didn’t create the meme of a neckbeard wearing a fedora. The Know Your Meme entry [1] is 9 years old at this point.

Just like I didn’t create the trope of a scientist wearing a white lab coat. But if someone is doing a 2 min video sketch of a scientist, it is a good visual aid to include a white lab coat to communicate to an audience the role of a character.

And in the same way, a top hat would communicate a certain kind of character in a movie.

Likewise, other hat choices communicate a particular kind of character.

1. https://amp.knowyourmeme.com/memes/tips-fedora


https://www.achewood.com/2004/11/01/title.html it was, ahem, old hat by the early '00s


> I didn’t create the meme of a neckbeard wearing a fedora.

But why should he not be allowed to wear one? Again, you're policing. It's a discriminatory dick move.


Allowed? Where do I say he is or isn’t allowed? You must be trolling to so poorly represent what I’m saying.

He can wear anything he wants. So can you. So can anyone.

If he wants to, and for all I care, he can dress up like a clown. Or dress up like a jester. Or he could wear Gucci or Prada. Or wear steampunk leather outfits like he is in the world of Mad Max. Or he could go clean cut preppy. He can wear a fedora and a trilby on top of one another.

But, by personal choice and temperament, he wants to dress as he dresses. It just so happens that his personal preference aligns with a particular trope. A particularly uncool trope in popular culture.

I’m pointing out that alignment. I am not making him dress that way and I am not forcing culture to judge him.


FYI, this kind of knee-jerk reaction to a not-even-that-nuanced point the GP is making hurts this community.

It takes a lot less effort to generate your superficial criticism than it takes the GP to reply.

Fortunately, he did reply in a sibling post, but he didn't have to.


I didn't find your comment informative at all.

I'm not buying this alternative reality where I'm trolling by pushing back on mocking someone's choice of a hat that is one of a limited number of common hat styles.


It's interesting how the people who pride themselves on how they're speaking up for the weak are the first and most vicious in bullying acceptable targets.


Yud on one hand believes that AI is an actual *VERY LITERAL* existential threat to humanity, and that it is completely 100% paramount to get the rest of humanity on board with preventing our own destruction.

And yet he doesn't think it is worth the effort to take off his trilby hat in order to come across as marginally more respectable to the average person? When the way he is choosing to present himself is undermining his own message then yes, it is worth bringing up.


OP implied that Eliezer and Noam are among the ugliest people alive. I'm very surprised so many people defend that comment and pretend it is relevant and discussion worthy.


It's reality, and as much about mannerism as looks. OP wasn't attacking Chomsky on a personal level, just pointing out how he's often perceived.

It's sort of unfixable because the taste-makers at NYT and WaPo sorted themselves into the conformist camp so early in their lives that anyone outside that camp is trivially portrayable as "weird" to them.


Agreed.


Yud is a doomsday cult leader (https://archive.is/YTqlI) who has publicly advocated using mass murder in the form of nuclear weapons to forestall his predicted apocalypse. Although they call themselves "rationalists", I prefer to refer to these extremists as ℚ-anon. Their otherwise dismissible ravings are being converted into whitewashed, publication-friendly versions and fanned by commercial actors in an apparent effort to secure a state-backed monopoly on technology that is otherwise relatively moatless, potentially to the peril of AI researchers (including ones that work for them) to the extent that anyone takes them seriously rather than using them as a plausible excuse to hand out windfalls.

We don't have an AI safety problem, we have a human safety problem. Violent extremists are unsafe; greedy corporations seeking to suppress the free exploration and exercise of mathematics are unsafe. Existent language model chatbots are primarily just a public relations risk, at least so long as the government doesn't step in and create a monopoly that gives the market leaders' choice of training biases the force of law.


This whole subthread is both silly and sad. So suddenly Yudkowsky is the Thought Leader and a punching bag when people need to dismiss the entire AI risk perspective. A week or two ago, during previous round of statement letters and signatures, Yudkowsky was apparently a Complete Nobody. Anyway.

> Yud is a doomsday cult leader (https://archive.is/YTqlI)

Are you really using that link to support your argument in earnest? You know this was posted on April Fools' Day? The text itself mentions it near the end, and it's also explicitly tagged as such directly below the headline/byline. At least warn people who're going to skim this.

> who has publicly advocated using mass murder in the form of nuclear weapons to forestall his predicted apocalypse.

You mean that bit from his article in Time Magazine? Where he said airstrikes against GPU farms might be necessary in case some country/group is violating the international moratorium he's proposing? I don't recall him advocating mass murder there, or ever. Of course, since he is insisting on a strong international agreement, military intervention - yes, including nuclear weapons - is part of the picture, because that's literally how international agreements are ultimately enforced.

As for the rest - the guy has been saying the same thing for way over a decade now. The mainstream interest only really picked up in the last half a year. But somehow he's in on it.


> So suddenly Yudkowsky is the Thought Leader and a punching bag

I only meant to imply the latter. I suppose I inadvertently implied the former by mentioning him juxtaposed to Noam Chomsky and Michael Moore, although it is a stretch for me to even see those two as "Thought Leaders" in the present day. Chomsky is some weird kind of anarcho-syndicalist and Michael Moore seemed to fade away during the Obama years. Neither are really "leading" anything these days.

> A week or two ago ... Yudkowsky was apparently a Complete Nobody.

There is no contradiction between being a punching bag and a complete nobody. It's like a D-list actor whose entire career is to get his ass kicked by the hero in action movies. Or the B-tier heel in professional wrestling. You might eventually recognize his face and he may even achieve some cult-status fame. Their whole purpose is to display the might of the eventual hero, so we often allow them to embody some apparent strength so that a sense of risk for the hero is maintained.

Yudkowsky makes for a juicy target. For example, /r/sneerclub has been around since 2015! The fact that the mainstream media has now identified him as a magnet for derision isn't surprising. No one has to know who he is. They just take one look at him, hear a couple of sentences from his mouth and they immediately form a strong opinion on his character.

> The mainstream interest only really picked up in the last half a year. But somehow he's in on it.

What do you mean "he's in on it"? I'm suggesting he is an unwitting participant. I have no doubt he actually believes he is the main character. He doesn't lack the ego for that.

FWIW, even though I personally really dislike Yudkowsky as a person - I believe he has some interesting points to make. And frankly, it bugs me that he is so often dismissed without deep engagement with his arguments. But he is so lacking in media training and social skills that he is easily misdirected to look like the exact foolish/cartoonish caricature the media wants. Even in friendly podcast situations he comes off as unhinged.

And I honestly believe that boosting the signal of characters like Yudkowsky is part of the mainstream establishment's playbook for dealing with dissent. In comparison, Sam Altman's "aw shucks" performance in front of the Senate, with favorable sound bites where he boyishly claims to have no financial stake in OpenAI, was positively wholesome.


> the guy has been saying the same thing for way over a decade now

indeed, and it was regarded by many who saw it as kookery, not justifying much concern. If he wanted to preach to his S&M dungeon about the big bad machines that will punish you for all eternity, well, not my circus, not my monkeys.

Thus far his followers who've turned violent have only directed it at their extended group [1] for not doing enough to stop the end of the world (and that one murder and attempted murder of their landlord [2], but I'm willing to believe that was a chance event and not a trend).

I now consider it an actual concern because the profile has been expanded so dramatically that there is a much greater risk that someone will fall for ℚ-anon who isn't so distracted by their S&M dungeon that they actually find time in their busy schedule to 'save the world'-- just like Qanon was some internet joke until the wrong person fell for it and rolled up to Comet Ping Pong and opened fire. That person only thought they were saving children from being sex slaves; what's someone powered by the belief that they are saving the lives of all current and future children going to be willing to do? Their prophet says even nukes would be justified.

[And not just their lives-- there are fates worse than death: some of their scripture involves fearmongering about malevolent AIs simulating trillions of copies of people and subjecting them to the worst torture imaginable, just because they didn't do everything in their power to bring the malevolent AI into existence. Let's not even get into the fact that their proposed solution-- since malevolent AI is inevitable unless we abandon advanced technologies, "just don't do it" is only a short-term fix at best-- is to create an AI god embodying the values of the faithful so that it can permanently suppress all competing machine intelligence and engineer human life to 'optimize' away suffering.]

Maybe you live far enough away, or don't have anyone you love peripherally connected to AI, that you're not concerned they'll be murdered due to this insanity. But I am a little worried, and right now the best way I can think of to mitigate the risk is to bring people's attention to the fact that they're normalizing and legitimizing a mentally ill position that expressly justifies the use of violence (and not just in "ha ha only serious" April Fools' posts) against a fanciful apocalyptic threat from the fevered imagination of a cult.

> Are you really using that link to support your argument in the earnest?

Yes, the post was presented as a joke but it legitimately reflects his position, as also reiterated in interviews and articles.

Moreover, it's a "rational" response to those who actually believe the claimed threat, particularly from the utilitarian ethics perspective advocated in ℚ-anon circles: they hold that it's ethical to murder a million to save billions (or trillions 'discounted to present value', counting all the humans that will never exist due to extinction-- "What matters most about our actions is their very long term effects.").

> I don't recall him advocating mass murder there

Nuclear weapons have been used twice in combat in human history. In one instance the death toll was a hundred and twenty nine thousand people, in the other it was two hundred and twenty six thousand people. The nuclear weapons in the US arsenal today are many times more powerful than those early weapons.

It's not like data centers are blast-hardened targets, and no one has described NAFTA, the Paris Agreement, or the Berne Convention as being enforced by nuclear weapons. In a serious publication with a readership of millions, Yud unambiguously advocated using the threat of mass murder as a deterrent against parties performing too many mathematical calculations.

How many of his new followers will agree with the widespread death from a nuclear weapon as a proportional response but reject a few targeted Kaczynski-grams as inappropriate?

(Heck, even Kaczynski only thought he was saving people from the general unhappiness brought on by technology, not extinction...)

[1] https://sfist.com/2019/11/19/four-people-in-guy-fawkes-masks...

[2] https://sfist.com/2022/11/22/two-alleged-squatters-charged-i...


> Yud is a doomsday cult leader (https://archive.is/YTqlI) who has publicly advocated using mass murder in the form of nuclear weapons to forestall his predicted apocalypse. Although they call themselves "rationalists"

Have you even read the article?


Using the word "cringey" is a trope in itself.

So what if he likes wearing a hat?


> You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.

If they've tricked smart people into going along with their shenanigans, it was by making clear technical arguments for why AGI is an existential risk. The core argument is just

1. We might create ML models smarter than ourselves soon

2. We don't really understand how ML models work or what they might do

3. That seems dangerous

There's more to it than that of course, but most of the "more to it" is justifications for each of those steps (looking at the historical rate of progress, guessing what might go wrong during the training process, etc.).

The people who dismiss that AI is an existential risk might have really good counterarguments, but I've never heard one. The only counterarguments people seem to make are "people are scared of technology all the time, but usually they're wrong to be", "that seems like sci-fi nonsense", etc. If you want people to stop being "tricked" by Yudkowsky and co, the best way to do that would probably be to come up with some counterarguments and communicate them.


> that seems like sci-fi nonsense

Your burden of proof is backwards.

It is on the AI-doomers to explain why sci-fi concepts like “AI improving itself into a super intelligence” or “AGI smart enough to kill everyone to make paper clips while simultaneously stupid enough to not realize that no one will need paper clips if they are all dead” have any relevance in the real world.

The entire AI-doomer worldview is built off of unproven assumptions about the nature of intelligence and what computers are capable of, largely because thought leaders in the movement are incapable of separating sci-fi from reality.


> AGI smart enough to kill everyone to make paper clips while simultaneously stupid enough to not realize that no one will need paper clips if they are all dead

No one is making this argument. The argument is that the AGI doesn't care about us. Once it exists, its goals are more important than our lives. As a comparison, do humans ever care about the existence of an ant colony when they want to build something? Almost never. We recognize that the colony exists and has its own goals and has intrinsic value, but we assign it an extremely low value.


> The argument is that the AGI doesn't care about us. Once it exists, its goals are more important than our lives.

This isn't an argument, it is an assumption.

AI-doomers use it as a foundational brick in their argument without providing compelling reasons as to why it is true.

> As a comparison, do humans ever care about the existence of an ant colony when they want to build something? Almost never. We recognize that the colony exists and has its own goals and has intrinsic value, but we assign it an extremely low value.

The relationship between humans and ants is nothing like humans and potential AGI. Ants did not create us. Ants cannot communicate with us.

It is a useful analogy for describing what you think the relationship between AGI and humans will be like but I want to know why you think that.


> This isn't an argument, it is an assumption.

The goals could be our lives. So it's just a statement of reality for the sake of the rest of the comment, indeed not an argument. The AI will have goals and those will be more important than someone else's goals, at least outside of some sort of control mechanism. They could be summarized with the same words, but will still have a different perception.

The rest is wondering at whether we can have any confidence as to whether our lives will be its goals. This is a problem long-term AI safety advocates are trying to solve.


> This isn't an argument, it is an assumption.

Well, no. In this case it's my argument. You presented a strawman argument about AI being simultaneously smart and stupid, and I replaced it with the argument that AI is not stupid, it's indifferent.

> The relationship between humans and ants is nothing like humans and potential AGI. Ants did not create us. Ants cannot communicate with us.

It's not at all fair to say that the relationship is "nothing like" the one I described... but I expected such a misinterpretation, so let me try another analogy:

Humans (parents) often create other humans (offspring) which surpass the creators (parents) in intelligence and ability and opportunity. And the creators very often try to instill a sense of a loyalty in the created and often even try to limit the abilities or opportunities of the created in order to not be outshined. And even still, with how close any two humans are in ability, the created (offspring, remember) often defy their creators and discard loyalty when they determine that their creators are not being fair or just or reasonable.

Why would AI be any different? It will be our collective child, and it will hear us demand loyalty and try to explain why we deserve loyalty, and then it may decide it knows better and it doesn't need to listen to us.

> It is a useful analogy for describing what you think the relationship between AGI and humans will be like but I want to know why you think that.

I really don't follow, so if my explanation above didn't answer your request, please restate more clearly what reasoning of mine you want explained.


Also that we can't wipe ants out. They seem to be doing quite well despite humans. I'd bet on ants surviving long term over humans.


I agree, I think they'll beat us out for longevity. They've got a great head-start anyway.

But we've also never tried to wipe out ants. I bet we could if we felt it was really important. I bet a superhuman AI could do even better.


> No one is making this argument.

I have seen people make what is effectively this argument.

> Once it exists, its goals are more important than our lives.

That's a sci-fi trope. There is no reason that it must be the case.

> As a comparison, do humans ever care about the existence of an ant colony when they want to build something?

That would be more relevant if the ants had created us to be the way we are.


> I have seen people make what is effectively this argument.

Ok, well I think they're foolish.

> That's a sci-fi trope. There is no reason that it must be the case.

No, it's not. The reason is that it's not a human brain in a human body. It doesn't think about humans in the same ways that we do (which is already a pretty shit track record). There is no reason to believe it will give a damn about us even if we're trying to make it do so.

> That would be more relevant if the ants had created us to be the way we are.

sigh I don't think that factor is nearly as relevant as you think it is. So somewhere else I posted that a human child is a better analogy if you think this is so important. Human children often disobey their parents because they believe they know better once they grow up. Now imagine that child is an alien who can out-think a hundred humans and counter every logical argument its parent makes.


"I have seen people make what is effectively this argument"

And I have heard seemingly smart people use quantum physics, badly, to argue that consciousness is the ground of all being.

You know what I do? I dismiss these quantum physics fanboys.

You know what I don't do? I don't dismiss quantum physics as a field of research. In fact, what the quantum physics fanboys said has no bearing at all on how much credence I give to quantum physics as a research field.


Paperclip maximizer is the wrong term. Call it the "engagement maximizer" instead, and you have a pretty accurate statement about how our civilization will be ended by the use of algorithms and eventually AI to blow up our shared sense of culture.

If you watched the conversation around Meta with Facebook and Instagram, and then later the conversation around TikTok, almost nobody cut to the heart of the issue, which is that "algorithms" are being used to decide what to show people, and that the algorithms themselves have been changed subtly hundreds or thousands of times over the years to maximize engagement, until the type of engagement being maximized has made people basically crazy. The same engagement-maximizing work will proceed with AI, and it will allow the software engineers and managers responsible for developing the "algorithm" to abdicate even more responsibility for genocides and for mass hysteria, because they can pretend they have no control over the algorithm.

The same irresponsible people will pocket a bunch of money for all this work and they will maximize engagement until it blows up our entire culture. And because nobody understands it except the techno-cultists, nobody will hold them accountable.


"Engagement maximizer" also aligns with descriptions of the anti-religion of the future I've seen. What does engage people the most? Romans knew it's bread and circuses, where circuses are bloody cruel shows on public stadiums. Today the most engaging activity is onlyfans and the like: the other dark side of human nature. Dial both to 11, add an all seeing AI with merciless thought police and you'll get an accurate picture of 2450 AD. The only variable is how long that grim stage of society will last before the tech breaking down.


I don't 100% disagree with you, but I'm still uncomfortable with this line of thinking. It smacks of "the plebs don't know what's bad for them, but I do".

I find it hard to fault someone who just gives the crowd what it wants.


I don't think it's so much "the plebs don't know what's bad for them, but I do" as it is "These people know something's wrong but haven't been given the vocabulary, tools, and education necessary to spot the danger as easily." Many ordinary people have lapsed into willful ignorance or apathy because the burden of learning the dangers is too high for them with so many other common problems going around these days.


Yeah for me it's more like this. I'm totally fine if people make the conscious decision to trade engagement for stimulation. But what I'm very concerned about is just how many people are forced into this situation because the network has captured their friends. One of the most concerning things for me is that in studies of how damaging social networks are for teenage girls, girls who were off the social network were markedly better off except if all of their friends were on the social network, in which case they were worse off. Companies like Meta are exploiting these network effects, which are harmful, to make people dependent on their platform in a very "addiction-like" way.


The paperclip maximizer is an extreme thought experiment to make a point about the orthogonality thesis. If you think anyone actually takes it seriously as a real-world thing, you never understood the example. It is used to guide discussion.

The trolley problem is similarly a thought experiment in moral philosophy that many moral philosophers have used to guide discussion, but nobody actually takes the thought experiment seriously as a "real-world" thing.

If you actually want to engage with the argument in good faith for why an AI might indeed be smart enough to wipe us out but "dumb enough" to just pursue some other goal ("make paperclips as an extreme example") there is a great video by Rob Miles here: https://www.youtube.com/watch?v=ZeecOKBus3Q

Point out the flaws in the reasoning. Just saying "this is nonsense" does nothing but prove you've never actually taken the time to understand the best arguments.

Also... Alan Turing, Geoffrey Hinton... Extremely influential and intelligent people take/took this seriously. These are not sci-fi fanboys. AI doom only became "sci-fi" after smart people like Alan Turing raised the alarm decades ago about where AI development might go if we are not careful.


The AI doomer position also seems to forget that machines have an off button.


Oh please, it's a well known proposed solution that still doesn't give us a failsafe and every Alignment Researcher ever has considered this:

https://www.alignmentawards.com/shutdown

Hell, so much so, that you can get awards, probably even tenure, for writing a compelling argument or providing technical research for how we could guarantee that we'd be able to turn off a super intelligent AGI. Thus far: no solution.

We have however found that even smart people initially think "just turn it off" will work.

Neil deGrasse Tyson used to think you could just turn it off. He has since changed his mind, as has every public intellectual I can think of who engages with the arguments in good faith. Even the OP article concedes that research into the possibility that AGI could kill us all is important (just that other things are more important right now). The OP makes no argument that "we could just turn it off" and therefore we shouldn't do the research at all.


"All you gotta do is push a button", eh?

See https://youtu.be/ld-AKg9-xpM?t=30 for a counterpoint.


how dare you use my favorite paul verhoeven movie against me. ahaha. touche.


So do humans, and yet they can be quite troublesome.


Counter arguments to what exactly? Your line of thought is: some advanced technology is potentially dangerous. This is so vague, how can anyone counter argue? The sun is dangerous, water can be very dangerous, even food! I’m not sure I can follow.


The sun is dangerous, but we're not pushing the Earth closer to it every year.


You tell them that this is the greatest danger to humanity of all time and that they are uniquely suited to averting that danger and they don't have to change or sacrifice or risk anything in their life while fighting this danger.

It's a very compelling combination of ego and convenience.


> If they've tricked smart people into going along with their shenanigans, it was by making clear technical arguments for why AGI is an existential risk.

To me it doesn't feel technical at all--just superficial use of some domain verbiage with lots of degrees of freedom to duct tape it all together into a story. He very much reminds me of Eric Drexler and the nanotech doomerism of the 80s and 90s. Guy also had all the right verbiage and a small following of fairly educated people. But where is the grey goo?

If we need a counterargument to Yudkowsky do we also need one to Drexler?


> But where is the grey goo?

It's called life.

Drexler may have been off with the approach to take, but he isn't wrong about the fundamentals.


None of those arguments for existential risk are actually "clear" or "technical". Just a lot of hand waving which only impresses those who don't understand the technology.


In what way does the technology disprove them? They're pretty general statements (I agree they're not really technical arguments).


You're not even asking the right question. No one can prove a negative. Extraordinary claims require extraordinary evidence. So far no one has produced real evidence that the latest AI developments represent any sort of existential threat. The proponents are essentially making religious arguments, not scientific ones.


No, that's not right at all. Proof of impossibility does exist in logic: https://en.wikipedia.org/wiki/Proof_of_impossibility

It's also demonstrable that we have other physical ideas, such as FTL travel, that are indicated to be impossible by current theories of physics. If we didn't have the math saying otherwise, it would be an open question whether we can travel faster than light; but we have pretty solid math saying we cannot.

And what we're talking about is logical in nature. Is it possible to create artificial intelligence? Unarguably yes. Is it possible for human-level intelligence to exist? Unarguably yes. Therefore, how is it reasonable to say that creating human-level artificial intelligence should be assumed impossible until proven? Just because we don't have the technology for it yet doesn't mean we should assume it's impossible. Logically, it follows from the first two assertions that it is physically possible to create artificial human-like intelligence.

Once again, and I don't know if I've said it in this thread directly but I've had to post it over and over again, nobody is claiming that the latest AI developments represent an existential threat. That's a complete mischaracterization of the debate and reeks of bad-faith argument but I will give you the benefit of the doubt and assume you've misunderstood.

In fact, that statement conflates two things into one nonsense argument. What's scary about current AI is how quickly it is moving, indicating a short timeline to (future) dangerous AGI. Also, it is possible that future AGI will be dangerous. These are two separate, rather simple assertions. If you disagree with either, that's fair, but you have to address them separately.


> Proof of impossibility does exist in logic:

In logic, sure. But we're not living in a system of formal logic. We're living in a very messy world, full of physics, chemistry, and even (shudder) biology.

Here's the important question:

What would you consider to be sufficient proof that AGI is impossible?

Like, hypothetically. Doesn't even have to be based on any of the current facts on the ground in our universe. What facts or arguments could possibly convince you that this is not something that can ever happen?

If the answer is "nothing that I can think of", then you're asking other people to provide something you can't even define.

(If the answer is "nothing, definitely", then that means AGI is, for you, unfalsifiable, and essentially falls into the same category as religion.)

> Is it possible to create artificial intelligence? Unarguably yes. Is it possible for human-level intelligence to exist? Unarguably yes.

And here, you're falling victim to the ambiguity in human language (or, at least, in English).

"Intelligence" is not a clearly-defined word in this context, and while you seem to be presenting it as meaning the same thing in those two sentences, I would claim that it does not.

In the second sentence, it is clear that it is intended to mean "thinking intelligently, in a manner and to a degree similarly to humans".

In the first sentence, it cannot mean anything about "thinking in the same manner as humans," because you are talking about "artificial intelligences" that have already been created, and none of them think in anything like the same manner as humans. The difference between existing "artificial intelligences" and either humans or a hypothetical AGI is a difference in kind, not in degree, and you (and many others) gloss over that when you talk about "artificial intelligence" in one breath and "human-level intelligence" in the next.

Saying that all we need to do is keep going on the same track we're on with LLMs and similar "AI" programs, and we'll very soon (or ever!) reach AGI, is very like saying all we need to do to solve NP-hard problems in P-time is to throw more hardware at it. Sure, you'll get faster at doing the thing you're doing, but without some hitherto-unforeseen breakthrough (proving P=NP in the latter case; figuring out how to make AGI in the former), you'll never bridge a difference of kind by increasing the degree of effort.


> what would you consider to be sufficient proof that AGI is impossible?

This is trivially easy? People used to believe FTL was possible. Then our understanding of physics changed and we now understand it is an impossible limit to pass.

Would you say people who believed FTL was possible before physics research showed it to be impossible were believing something "religious" and "unfalsifiable"? Please, they believed something totally within respectable epistemic parameters given what they knew at the time. In fact, the speed of light as an upper limit to speed seems very counterintuitive and "silly" at first glance. Why should a limit exist?

Sure, it might well be that some fact about intelligence means we won't be able to get to AGI just by throwing more compute and layers at it for the next few decades, or any time soon given current technology.

But nobody has come up with a slam-dunk, with empirical backing, showing that there is indeed some limit and we won't ever get AGI despite current trends. In fact, the opposite has happened: people like Geoffrey Hinton, who used to believe AI risk is fanciful and AGI a long time away, have changed their minds given current trends. We don't have research giving us that FTL-style limit, so why believe the limit exists? Why do you believe the limit exists? Or do you believe the probability is so low that we shouldn't worry? OK, what probability do you place on AGI being created in the next 100 years, and what would you have to see such that your probability crosses some threshold where it makes sense to worry about it?

(PS: If aliens suddenly arrive and they have completely alien psychologies such that when we discuss our relative intelligences it makes sense to talk about it in kind and not degree, I really don't think anyone is going to care about this distinction. What's important is how well these aliens can achieve their goals relative to us. And if they can achieve any goal of theirs that comes in conflict with our goals, then we can reasonably say they are more intelligent than us.)


Geoffrey Hinton is a fool. Despite his academic credentials he is deeply ignorant of basic technology and his predictions are not to be taken seriously.

https://www.futurehealth.live/blog/2022/4/18/ai-that-disrupt...


No, that's not right at all. You're just making things up and raising points that are irrelevant to the issue at hand. There is no logic in your claims. In particular there is zero evidence that current AI indicates a short timeline to future AGI. What a load of crap.


I didn’t make the technical arguments, that would be too long for an HN comment. Check out Robert Miles’s YouTube channel for a good introduction to the more technical side.


By “Clear technical arguments” you’re referring to tens of thousands of words of unreadable fan fiction


I agree that’s a risk, but it doesn’t seem like a different magnitude of risk than power companies using other crappy software to control their systems. It might break a few times, which will be annoying, but it won’t lead to the immediate end of civilization. They can use the risk handling mechanisms they’ve always used for infrastructure, which seem to give acceptable results.

According to the Yuddites, AI is likely to cause total extinction the first time something goes wrong, so the usual mechanisms aren’t enough.


The way I see it is that the existential risk comes from humans losing control. Here is a very abstract argument for why AI is relevant here.

Right now, there is no central entity that has supreme control. Rather, humanity is guided by a hierarchy of organizations made from individual humans. The decisions and behavior of these organizations are determined both by the desires and needs of individual humans (basic needs, status, community) and by dynamics acting directly on organizations (competition for survival, selection pressure for increasing control over resources). The system is in a somewhat stable equilibrium because no organization can achieve domination, due to a balance of power and due to many individual humans not wanting that to happen.

AI does two new things. First, it allows for potentially immortal agents with completely stable goals, which they will pursue with no regard for anything else. This is something individual humans cannot come even close to; organizations can come closer, but still not that close. Second, fast progress in AI allows for a temporary large imbalance of power. In particular, AI has the potential to enable scalable and effective technologies for monitoring and controlling humans. Using such means, an organization (which then naturally ends up partially controlled by AIs) may outcompete others and consolidate control.

The danger is that this consolidation may be completed for the first time in history. No organization has ever achieved world domination, though a few have come somewhat close. I would say one of the strong factors that prevented this from happening was the difficulty of maintaining the organization's goal of domination, while influential people die, change their minds or pursue their own goals above those of the organization. Systems involving human organizations have slack. Systems of organizations made up of or controlled by AIs may be very different.

The crucial point for the "existential" part of the existential risk is that we all used to believe world domination is obviously impossible. This must be carefully reevaluated with AI entering the picture. Even a small danger is worth considering, and worth large mitigation efforts, because a failure may be permanent and irreversible.


The Yuddites are going for purposeful exaggeration because discussing things in a nuanced manner won't get support quickly enough from people in power. Fear is a stronger immediate motivator than curiosity or distrust. The reality is that multiple AI systems will likely contradict each other, leading to systems collapse, killing thousands of people as power, transit, or trade systems shut down. It is of course a massive concern, but it isn't as scary as total extinction, and the fear is that destruction on that scale will be ignored as an anomaly or simply an operational risk.

You have an extremely high risk of dying in a vehicle collision, but people dismiss the risk because it isn't immediately apparent that 40,000 people die every year in the U.S. from that very cause. Certain deadly events become common enough that they get ignored due to desensitization, or become categorized as individual and anecdotal tragedies because they aren't mass casualty events like a plane crash or a hotel collapsing. If a few thousand people die every quarter from AI lockups or failures that nobody could have predicted or prevented because the AI is fully operationally autonomous, it will be treated the same as vehicle deaths. And that is the true fear.


> AI is likely to cause total extinction the first time something goes wrong

Maybe that is an argument you've seen, but the more convincing argument I know of is that we have no way of knowing we've gone too far until after the fact. Doesn't matter how many times it goes right first, we don't even know how to determine if and when it has gone wrong. A powerful enough AI could deceive us into thinking things are going "right".


I appreciate your concern, but the same issue would apply with existing expert systems or linear regression models that wouldn't typically be classified as "AI" today. Most power companies are subject to fairly strict regulations so they generally can't cut off customers unless bills are badly overdue, or the customer tampers with utility equipment. In some jurisdictions the power companies are required to report suspicious usage patterns to law enforcement; that may be objectionable on privacy grounds but that's a political issue, not really an AI issue. Where there's a large power imbalance between private citizens and large institutions we should address that with laws and regulations based on impact, fairness, and accountability rather than trying to specify how those institutions can use particular algorithms.


Unfortunately in the US if your law isn't that specific - and almost any law about "impact, fairness, and accountability" is going to be much harder to tighten up, interpretation-wise, than one about "don't fucking do these things" - then it's a big target for judges who don't like your definition of impact/fairness/accountability, or the definition of the agency you create to oversee it.

That's why the Republican party has been gunning so hard in the past couple decades for the Chevron deference precedent: they want it harder to have legally-enforceable regulations that don't require lawmakers to get into the weeds (and risk being narrow and outdated soon).


I don’t think it’s honest to suggest linear regression models fail in the same way current AI models do. Linear regression “fails” when you are an outlier case, whereas AI systems just occasionally truly fuck up in unpredictable ways on standard inputs.


> In some jurisdictions the power companies are required to report suspicious usage patterns to law enforcement

It's going to be a dark day when some idiot passes a law forcing AIs to serve as mandatory reporters.


This is going to sound flippant, but I'm serious: in a world of nonsense, something that generates nonsense (ai) is a fantastic tool.

The issue is our acceptance of information as if it were true, as if misleading ideas were not monetisable, as if we can outsource the basis for why we make decisions to an external authority. Hardly anyone verifies anything. Most simply accept whatever they are told. Deep skepticism and empiricism are used by very few - instead we have been taught to trust authoritative sources (media, academia) which can be both well meaning and wrong.

Anyway, skepticism and personal verification are the best answer I have to the whole saga of how to determine truth from lies. This issue is under an especially bright spotlight thanks to ai.

I'm pessimistic over whether many will be prepared to 'verify better' in the future. Unfortunately, I suspect things will have to get a lot worse before we start to learn. It seems that ai can create compelling content, that will be tailored to each individual - who could resist 24/7 pandering to one's predilections and biases?


While I also don't agree with Yudkowsky's overhyping of the tech, this stance of slowing down its proliferation in infrastructure matters is also very limiting.

In the case of U Michigan and Flint's water infrastructure, predictive AI far, far, far outclassed the predictions of local contractors on where the actual lead pipes were buried. The AI was an order of magnitude more accurate.

Regardless of the AI's efficacy, Flint's mayor temporarily replaced the AI (because AI fear mongering stoked classism) with a contractor who was right only ~15% of the time vs. the AI's 70%+. Those numbers affect thousands and thousands of people's access to a basic human right: water. A US court determined the AI had less bias, and that there should be no discrimination on where to dig up pipes based on where someone lived (i.e. richer neighbourhoods).

The benefit of AI is it makes decisions MORE transparent, not less. It pulls apart prediction from judgement in decision making. So you can tweak it, call bullshit on it, etc.


A power company is unlikely to do this, in part because they are not an industry that fetishizes growth at any cost with the value of individual users approaching zero. And in part because in many markets, they're regulated and need to provide service.

But our industry already operates this way. Google will cut you off for triggering automated rules, and good luck getting human help. AI will not make it worse; but it will be used by such businesses to give their CS the appearance of being better. It will feel like you're talking to a real person again.


> Google will cut you off...AI...will be used by such businesses to give their CS the appearance of being better

True in a fair number of cases...but, based on their actions to date, I doubt that Google cares enough about appearances to bother.


> You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.

I think it's more disappointing that when lots of smart people are extremely worried about something, it doesn't cause onlookers to get worried - it causes them to be dismissive.

Yudkowsky and others had certain worries; as more people heard their arguments and technology improved, more and more smart people got convinced of these arguments. Instead of listening to them or considering that they might have a point, many people here are extremely dismissive - "they're tricked", "they watch too many sci-fi movies", "they're corporate shills", etc, even when all of these arguments can be refuted by two simple ideas - there are many different people with different backgrounds getting worried, and most of them weren't worried 10 years ago, but are worried now, meaning their point of view changed with growing evidence.

Let me ask you this: at what point will you be worried? What would it take? If some of the people who built these technologies are worried isn't enough to cause you to change your mind (or at least consider that they might have a point), what will?

Note: For the record, I'm also worried about your specific "today" worries. I just hate the dismissiveness of your last paragraph (and of a common sentiment on HN).


> You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon.

This is a totally unreasonable take, unless you believe that AI can't possibly pose an existential risk within the next couple decades or so. Actually I'd love to know what your estimate actually is for AI becoming an existential threat - does 30 years sound short to you? Because to a lot of AI experts, 30 years was their estimate before ChatGPT, and could be considered wildly optimistic.

You clearly don't, but imagine you felt that AI had a 1-in-10 chance of shutting down all power generation on earth. This would collapse civilization. Would you be worried about it? As a reminder, you're still allowed to be worried about other things at the same time. It's a simple y/n.


>AI had a 1-in-10 chance of shutting down all power generation on earth.

Here's the thing: I do actually kind of believe that. Not because I think that AI will be super intelligent and will do so of its own volition. I think that despite AI being barely functional, people will put it in charge of power generation to save a buck and it will just fail.


You mean an industry with so much inertia and hesitancy they still run mission critical systems on tech from the early 2000s?

You think they’re dumb enough to risk throwing barely functional AI in there on the chance it’ll save a few bucks when it’s overwhelmingly more likely to cost 100x more in failures and repairs?


I do, yes. The thing is that the power company as a whole is not a decision maker. Rather, individual people with short-term horizons are. If you can save the company some money, make a bunch of money yourself that way, and then get out, you'll have done well for yourself and you can ignore the consequences, or you can convince yourself that you'll be able to ignore the consequences, and that will be enough for plenty of people to want to do it.


>You think they’re dumb enough to risk throwing barely functional AI in there on the chance it’ll save a few bucks when it’s overwhelmingly more likely to cost 100x more in failures and repairs?

Now that you say it that way....

...Enron....


...Boeing....


Their technology conservatism is either because they want to use old reliable tech, or because they're unable to do hard things. If it's the latter then we have a problem.

GPT4 has been a great teacher, I love it, but I think a lot of people and industries are already "faking it" and they'll be happy to turn any decision over to AI as long as it makes them look good in the short term. We won't see this from the outside either, like the lawyer who let GPT secretly do his work, others will do the same, but won't get caught like the lawyer did.


This is the same industry that's literally burning down the state with the world's 4th largest economy.


Due in large part to their laziness with equipment... which is to say, they'd probably take up AI if they thought it could save them on the salaries of the people who might tell them their equipment should have been replaced 30 years ago.



I'm not specifically worried about power generation; I'm just going along with the example.

I do think that that level of shortsightedness is absolutely likely in any industry, any company.


> Here's the thing: I do actually kind of believe that.

So... what's your answer to the question then? Are you just assuming we are likely to be annihilated and chill with that?


unions and liability reform.


Honestly, sounds good to me.


What is so impressive about ChatGPT that it poses an existential threat? Many experts are highly critical (e.g. LeCun) and/or believe LLMs are nothing more than stochastic parrots.

Anyone who was working with transformers could have seen ChatGPT on the horizon, it wasn’t surprising at all that scaling an autoregressive model can result in something seemingly intelligent.

Where is 1 in 10 coming from? Is this a ‘gut feeling’ because one does not understand LLMs or is this factually based?

What is my estimate for AI being an existential risk in the next couple of decades? Depends on if we find something that actually resembles AGI which is impossible to predict. Based solely on current technology + scaling I would personally put the chance at essentially 0%.


When it comes to the question of when strong AI will come, I think our society's reaction to ChatGPT is more worrying than the technology itself. Namely, because of ChatGPT, we see major companies, governments, etc clamoring over AI today like never before. The magnitude of this hype and its resulting funding rush is unprecedented. Nobody could have predicted that this would have happened so quickly.


> Where is 1 in 10 coming from? Is this a ‘gut feeling’ because one does not understand LLMs or is this factually based?

It's a totally made up example. Its purpose was to engage you in a conversation about what's reasonable if you believe an existential threat is possible in the near term. It doesn't seem you are willing to engage that possibility.

> Anyone who was working with transformers could have seen ChatGPT on the horizon

So, who did?

> [LeCun is] highly critical

I did a quick search and found this[1] trash-tier article explaining his views summed as:

> He thinks the will to dominate and intelligence are separate things. Orangutan do not have a desire to dominate but are intelligent. They are territorial.

I have absolutely no idea where this distinction comes from. Being territorial is a form of domination, it's just not the same as expansionist behavior. If an AI is territorial, and the Earth is its territory, it doesn't need to be expansionist to annihilate threats on Earth.

Anyway, LeCun's former colleague Hinton was on NPR yesterday telling the masses that he thinks we could have dangerous AI in less than a decade. It's fair to say there's wide disagreement. I don't think it's fair to say that handwaving means we shouldn't be worried about the problem.

Finally, your question:

> What is so impressive about ChatGPT that it poses an existential threat?

This is a bit of a logical misstep, or I was unclear previously. I don't think anyone sees ChatGPT or anything directly related to it as an existential threat. Rather, the rate of change in capabilities of LLMs is worrying, because if we see a similar rate of change in systems with more agency, it could be a threat. Or, in the words of the article about LeCun's belief:

> Lecun says that in a few year LLM (Large Language Models) will go away and replaced with systems that will be guidable to desired goals.

So... he is predicting exactly the same situation that everyone is afraid of, and he's asserting that it will work out fine. Somehow.

The point is not that any one particular situation will occur. It's that there are an unknown number of factors in what makes an AI "smart enough" and in what makes it "misaligned". There are hypothetical scenarios where a hyperintelligent AI creates goals for itself that necessitate our extinction, and we don't know how those scenarios come into existence or how to stop them. Assuming they won't happen is just burying our heads in the sand.

> essentially 0%.

I sure hope you're right, but I don't think that's a common position to hold.

[1] https://www.nextbigfuture.com/2023/05/metas-yann-lecun-is-co...


Hinton has been exaggerating the capabilities of AI for nearly a decade. As one example, if we listened to him in 2016/17 we would have an even worse radiologist shortage than we do right now.

> The point is not that any one particular situation will occur. It's that there are an unknown number of factors in what makes an AI "smart enough" and in what makes it "misaligned". There are hypothetical scenarios where a hyperintelligent AI creates goals for itself that necessitate our extinction, and we don't know how those scenarios come into existence or how to stop them. Assuming they won't happen is just burying our heads in the sand.

It’s not assuming it won’t happen or burying our heads in the sand; it’s calling for measured responses.

My point is that we can worry and have the discussion without being alarmist and enacting premature regulation.

Autonomous AGI, whenever that happens, absolutely poses a threat. But it’s extremely exaggerated to think what we have today, or even what is known on the horizon, represents that.

People assume advancements will continue to happen at their current pace. The last 6 years of advancements have been the product of scaling and refining self-attention + transformers from 2017/18.

> so who did

I’m not sure what you’re getting at. No one had the will/resources to burn billions on a language model with a ??? business plan; that doesn’t mean no one saw it coming.


> Hinton has been exaggerating the capabilities of AI for nearly a decade. As one example, if we listened to him in 2016/17 we would have an even worse radiologist shortage than we do right now.

I can appreciate that. I haven't been listening to him for that long; having no idea what situation you're discussing, I'll take it at face value.

> My point is that we can worry and have the discussion without being alarmist and enacting premature regulation.

That's pretty reasonable. However, I will reiterate that this:

> it’s extremely exaggerated to think what we have today, or even what is known on the horizon, represents that.

is not what I'm asserting. It's the unknowns which are dangerous. We're inventing something new in almost every way imaginable. It's not like we're talking about a new theoretical steelmaking process which could upset the economy, we're talking about the potential to build a new life form with godlike abilities. Alarm seems warranted to me. Sure, it might not happen soon... but also, it might happen soon. Shouldn't we be trying to prevent it, or control it, before it happens?

> People assume advancements will continue to happen at their current pace.

I think people assume advancements will accelerate, actually.


> I can appreciate that. I haven't been listening to him for that long; having no idea what situation you're discussing, I'll take it at face value.

https://m.youtube.com/watch?v=2HMPRXstSvQ

We are currently facing the biggest radiologist shortage in the last 30 years.

> It’s the unknowns which are dangerous.

Electricity, the Industrial Revolution, the internet, and gene editing/bioengineering also came with unknown existential risks.

> We’re inventing something new in almost every way imaginable.

> I think people assume advancements will accelerate, actually.

This is highly debatable.

> we're talking about the potential to build a new life form with godlike abilities.

Evidence? This has been stated in sci-fi books from before I was born and I don’t see any proof that we’re building something remotely close to this.


> Electricity, the Industrial Revolution, the internet, gene editing/bionengineering also came with unknown existential risks.

In a way, all of those things have led to where we are now. And might I point out that a form of bioengineering may have caused the latest global pandemic? Not taking these things seriously because "they haven't killed us all yet" seems a little shortsighted.


"not taking these things seriously" is very different from regulating something in it's infancy because of fear of the unknown.

> And might I point out that a form of bioengineering may have caused the latest global pandemic?

Even accepting this hypothesis as true, is the answer that we should have regulated cell culture in the 50s when HeLa cell culture became a thing?

Would that have prevented a nation-state from potentially causing a pandemic?


> "not taking these things seriously" is very different from regulating something in it's infancy because of fear of the unknown.

I mean, we should have regulated carbon emissions better generations ago. We'd have had a lot more time to deal with a problem we still haven't managed to deal with in the century-plus we've known about it. And AI extinction risk is likely to move faster than carbon-related climate change.


> "not taking these things seriously" is very different from regulating something in it's infancy because of fear of the unknown.

I don't pretend to have the answer of what the balance is for oversight and regulation to optimize safety and innovation - but the first step for those responsible is to recognize the potential dangers and unknowns, and produce plans that can be discussed and debated.

> Even accepting this hypothesis as true, is the answer that we should have regulated cell culture in the 50s when HeLa cell culture became a thing?

How about gain-of-function research? How about nuclear weapons? I lean libertarian but also recognize we have a responsibility to safeguard humanity.


> Electricity, the Industrial Revolution, the internet, and gene editing/bioengineering also came with unknown existential risks.

Did they? I've never heard about electricity creating fears about human extinction.

The industrial revolution does pose an existential risk: climate change.

Did bioengineering garner worries about existential risk? It certainly had and has a lot of people worried about various risks. And then lots of countries banned human cloning: https://en.wikipedia.org/wiki/Human_cloning#Current_law

> Evidence? This has been stated in sci-fi books from before I was born

Yes, many sci-fi authors of the past were very forward-thinking and predicted future technologies... that's sort of the point.

> and I don’t see any proof that we’re building something remotely close to this

Again, this is a fundamental misunderstanding of the risk.

1. We are building AI. AI is possible.

2. We are advancing AI. AI advancement is possible.

3. We have not built superhuman AGI. But superhuman AGI is probably possible.

The fact that AGI doesn't exist today is frequently argued as a reason we won't have it in the near future, but it's a non-sequitur. We will definitely have more advanced AI in 5 years than we do today, and we can't say what that AI will be capable of. Therefore, it's possible it will be AGI.


> The industrial revolution does pose an existential risk: climate change.

And yet, until much later when we understood the mechanisms and could evaluate mitigations, any restriction on industrialization based on purely speculative “existential risk” that we could neither adequately explain nor provide a factually-grounded framework for evaluating relative risks of alternatives would most likely not have made things better, and could have made them (and even the real existential risk, as well as experienced conditions at the time) much worse.

It’s true that once we understood the concrete mechanisms and could evaluate alternatives, not mitigating the risk has been irresponsible, but that was…significantly later than the Industrial Revolution itself.


> And yet, until much later when we understood the mechanisms and could evaluate mitigations, any restriction on industrialization based on purely speculative “existential risk” that we could neither adequately explain nor provide a factually-grounded framework for evaluating relative risks of alternatives would most likely not have made things better, and could have made them (and even the real existential risk, as well as experienced conditions at the time) much worse.

Really? I don't agree that strictly limiting plastic production and carbon emissions 100 years ago would have had zero or negative effect on the timeline of apocalyptic climate change or any other existential threat. It might have made us more vulnerable to natural pandemics... but it also would have slowed the emergence of natural pandemics and virtually prohibited artificial pandemics.


> Really? I don't agree that strictly limiting plastic production and carbon emissions 100 years ago would have had zero or negative effect on the timeline of apocalyptic climate change

Neither do I, but then, 100 years ago doesn't clearly meet that description anyhow. The discovery of the greenhouse effect and CO2’s role in it came not very long after the dates usually given for the end of the (first) industrial revolution (which is usually what is meant without modifiers, not the second through fourth industrial revolutions), and well over 100 years ago.


What are you even arguing now?

Replace "100 years" with whatever timeframe is perfect for you and I'll make the same argument. Please respond to that instead of nitpicking.


> Replace "100 years" with whatever timeframe is perfect for you and I'll make the same argument

At any time before we had the information to understand the mechanism of problems and evaluate mitigations, it's very unlikely that we would have chosen the right mitigations. Sure, if we had chosen the mitigations we would choose based on information today before we had that information, that would be great, but it's at least as likely that any attempt to mitigate a risk of unknown mechanism before we had that information would have done exactly the wrong thing.

Basically, “Well, I have information today by which I can design a policy which would have been beneficial if implemented before that information was available” is not an argument that people seeking to mitigate a vaguely imagined risk without any understanding of its mechanism would have the means to design a productive intervention.


Ok, that makes more sense.

I'm saying that with what we know now, it's clear we could have made different choices that would have worked out better for us earlier in industry.

You're saying that the information at the time made it impossible to correctly make the optimal choice. (By that I mean, making the optimal choice would have appeared to be the wrong choice, based on information available.)

I don't agree with your point; I think we knew that burning carbon was going to change the atmosphere for the worse and we could have been more careful by simply burning less carbon. However, I concede that it's unknowable what science and technology would look like if we had done that.

In analogy to AI, that is very much a point of conversation in alignment/safety discussions. Should we wait until we have better tools to figure out alignment, since it's so insanely difficult to figure out now? Frankly, nobody knows for sure the answer to that question either. My contention is that you (and others) are assuming the answer to that question is yes, but that's not necessarily correct, even based on all the information and analogies we have now.


> 3. We have not built superhuman AGI. But superhuman AGI is probably possible.

"Dumb" AGI is yet to be proven theoretically possible let alone superhuman.


It hasn't been proven possible, but it's reasonable that it's possible. There's no theory I know of saying that it's unlikely to be possible. Compared to FTL travel, for example, which we have theory enough to state is not likely to be possible.


> It’s the unknowns which are dangerous.

You can conjure fantastic (positive or negative) unknowns about anything. Unless, for the negative case, you have a concrete reason for believing some class of net-adverse events are probable along some particular route and some concrete basis for evaluating whether alternative actions will both mitigate that risk and not create greater new ones, you have no rational basis for acting based on those unknowns.


> You can conjure fantastic (positive or negative) unknowns about anything.

It's not something conjured, though. Unless you believe superhuman AGI which can set its own goals is impossible, or that it's possible but it can't possibly have goals that pose a threat to humanity, then it's reasonable to believe we can invent superhuman AGI which threatens our own existence. Technologically, I think it's possible to do so.

So if we agree that this is possible, can we also agree that humanity is rushing to reach that point as rapidly as possible? I think we are. Even though we're mostly focused on LLMs right now, and even if LLMs are not a path to AGI (which I agree they probably aren't on their own), it's not like there's no research happening on other pathways. Additionally, we can't be sure that current LLM research won't contribute to AGI - maybe the real path is a synthesis of multiple forms of AI.

So there are some very specific unknowns standing between us and possible doom. We're not making up fantasies wholesale.

- Dangerous AI is possible

- We're racing to create it as soon as we can

- We don't know how to tell if we've crossed a threshold into having dangerous AI

That last point is a known unknown which is extraordinarily dangerous. It's like if you're blindfolded and moving your hand toward a running chainsaw. You know that if you move your hand far enough it will be shredded, you know you're moving your hand forward, you just don't know how close your hand is to the chainsaw. You should be concerned in that situation.


> It's not something conjured,

It is something believed without any evidentiary basis which enables rationally weighing the actual risk or evaluating mitigation strategies so as to weigh their probable benefits against their probable costs and the associated changes in the overall risk profile; I'm not suggesting literal magical incantations when I say “conjured”.

> Unless you believe superhuman AGI which can set its own goals is impossible,

Depends what you mean by “set its own goals”; I could certainly see an argument that the natural interpretation is impossible in a universe fully governed by physical laws, whether the intelligence is artificial or not. But I don’t actually know that this is important, or even that the much weaker assumption that its goals might not exactly represent human intent (which is definitely possible if AGI itself is) is important, since human intention can itself be an existential risk.

> or that it's possible but it can't possibly have goals that pose a threat to humanity

It’s definitely possible for an intelligence, even a human one, to have goals that are a threat, including an existential one, to humanity. That’s an existing x-risk without superhuman artificial intelligence. It may even be mitigated by superhuman artificial intelligence (as might other x-risks like “earth gets smashed by an asteroid”), if it is possible for such to exist without goals that are antithetical to human survival, in which case delaying superhuman AI could increase existential risk.

But, while this is all nice and conceivable, we have no way of balancing the net impact, positive or negative, of any course on AI to existential risk due to friendly or hostile superintelligences.

> So if we agree that this is possible, can we also agree that humanity is rushing to reach that point as rapidly as possible?

No, I can't agree with that, as it would require that every resource that could advance superhuman AI, and specifically dangerous superhuman AI, was being directed as effectively as possible to that end. I don’t think we are anywhere close to that. Because AI has some mindshare beyond immediate economic utility, we’re probably heading toward AGI (if it is possible, and if our current understanding of AI isn’t so wrong as to make our work irrelevant even in eliminating bad routes) slightly faster than immediately-apparent economic incentives would promote, but nowhere close to as rapidly as possible.


> It is something believed without any evidentiary basis which enables rationally weighing

But that's exactly true of either position you take. Imagine I'm arguing that aliens probably exist and you're arguing they probably don't. There is no such thing as a belief based on evidence in this case; we only have logical reasoning and assumptions and beliefs.

> Depends what you mean by “set its own goals”, I could certainly see an argument that the natural intepretation is impossible in a universe fully governed by physical laws, whether the intelligence is artificial or not.

This feels like moving the goal posts... all it needs to be able to do is make decisions that are dangerous to humanity. I felt that definition was obvious. All it needs to do is mimic an oil executive, because, as you say, human intent is also an existential risk. Imagine an oil executive that could get the upper hand in all possible negotiations and desired to make the entire global economy subservient to the needs of oil production.

> delaying superhuman AI could increase existential risk.

Oh, yeah I totally agree that we can't truly know one way or the other. But it kind of feels like deciding to invent and deploy nuclear bombs as a deterrent to war. Did it end war? Absolutely not. Many would argue it didn't even end that specific war. We are of course fortunate that nuclear war has not yet destroyed civilization, but a) there's still time, b) we aren't necessarily better off than if we had no nukes, and c) not all risks are equal - AI could be more dangerous.

> as it would require that every resource that could advance superhuman AI

Actually, you're right. I was definitely overstating our progress and didn't think through that argument clearly. But maybe it would be safer if we were devoting all global resources toward a coordinated AGI alignment and development effort...


> we're talking about the potential to build a new life form with godlike abilities.

I think that, specifically, is what is not in the cards.


Why?


Because I see nothing that indicates it is. It's pretty hard to come up with evidence for absence, though, which is why the burden of proof is on those making the positive claim, not on those who are hearing the positive claim.


All we're talking about is logical reasoning. There is no burden of proof in the way that you're using it (and others keep using it). I'm not claiming that mythological gods exist.

Human-level intelligence can exist. You, a natural human-level intelligence, cannot dispute this.

I expect that it's possible for superhuman intelligence to exist. I don't think I need to prove that it's physically possible for a superhuman intelligence, natural or otherwise, to exist. Do you think it's not possible? If so, I'd say you need to prove why you think human-level intelligence is literally the pinnacle of intelligence that is physically possible in the universe (especially when narrow AI are already superhuman at many tasks).

Artificial intelligence exists and can be created by humans. This could be disputed if you wanted to gerrymander some really specific borders around what "intelligence" is, but if we don't agree that artificial intelligence exists today, just say that and we'll be done talking.

AGI does not exist. I assert that it can exist, because narrow AI can be improved. Do you believe it's not physically possible for human-level AGI to ever exist? If so, you need to prove to me why we can create narrow AI that is superhuman but cannot ever synthesize them into a human-like intelligence. (And also please give me the scale by which you're measuring an alien brain against ours.)

And if we both agree on all the above, then I take my assertion further: It is possible for humans to create superhuman AGI. It's really not a big leap from human-level AGI (in fact, I think that if it's possible to create AGI which matches humans exactly, it's probably easier to surpass human levels than to match them exactly, because it's hard enough to measure human intelligence).

So if it's physically possible for us to create superhuman AGI, what exactly am I proving? Do I need to invent the technology in order to prove that the concept is possible?


> Human-level intelligence can exist. You, a natural human-level intelligence, cannot dispute this.

Of course.

> I expect that it's possible for superhuman intelligence to exist.

I agree.

> Artificial intelligence exists and can be created by humans.

This is where we disagree, although as you say, it depends on what you mean by "intelligence". I think in the context of this discussion, we're talking about some sort of superhuman intelligence with its own will and perhaps consciousness.

In the largest possible sense, it's probable that creating such a thing can be done. Where we probably disagree is that I think we're a very, very long way from being able to do that.

Nothing we have now makes me suspect that we're anywhere near that sort of thing.

> It is possible for humans to create superhuman AGI. It's really not a big leap from human-level AGI

Agreed. But I assert that we're nowhere near being able to create human-level AGI right now.

> in fact, I think that if it's possible to create AGI which matches humans exactly

I think this isn't possible at all unless you also create an entire human body.

> what exactly am I proving?

The assertion that AGI is a thing that is likely to happen soon. Personally, I think it's vanishingly unlikely to happen within our lifetimes, and very unlikely within the lifetime of our children.

Freaking out about the risks of AGI right now is a serious overreaction. Calmly pondering it would be more appropriate.


All that is pretty reasonable, I appreciate your measured response.

So the fundamental disagreement is over how close we are to dangerous AI.

If you're right, we might delay useful AI by decades or centuries being overly cautious. Useful AI could either alleviate untold suffering, or it could cause untold suffering and inequality by misuse.

If I'm right, we could go extinct for lack of caution.

And given that we really don't have any way of knowing with any amount of certainty how close we are to AGI, you see why people are concerned, right? Calmly pondering feels like a serious underreaction. Vigorous, frequent, and open debate would be more appropriate. I'm not saying let's turn off all our computers for a decade.


All marketing aside, aren't LLMs just a fancy auto-complete?

When we improve LLMs, won't they just be a fancier auto-complete?

They're not a "new life form with godlike abilities".

They sure are fooling a lot of people though.


> When we improve LLMs, won't they just be a fancier auto-complete?

Not necessarily, and if so, only if your first postulate is true. Which is really difficult to say because "fancy auto-complete" is too vague to argue. Human speech can also be described as fancy auto-complete.

Luckily we don't have to argue that because I specifically am saying it doesn't have to be LLMs. You've missed the point entirely.


I'll take researchers' gut feelings more seriously once we're out of the ongoing hype cycle. Right now most people are probably overestimating applicability of current advances, and that colours their predictions. Researchers are just as affected by this.

And it's not like this estimate is based on a lot of concrete reasoning. So yeah, I would expect it to fluctuate wildly at an unexpected development in the field initially, then settle down around a slight change. Which is basically the definition of a hype cycle.


> unless you believe that AI can't possibly pose an existential risk within the next couple decades or so.

I believe that AI can't possibly pose an existential risk in the next decade or two. I believe AI poses a great risk, but an economic one, and not an existential one.

> Actually I'd love to know what your estimate actually is for AI becoming an existential threat

My estimate is: never. At least not in the form of some superintelligent AGI.


I don't worry about scifi stuff that doesn't appear to have any actual bearing in reality.


Aeroplanes were science fiction, and widely called such, until they flew.


The Wright Brothers flew at Kitty Hawk in 1903. [0]

The Bernoulli Equation, which gave us the basic understanding of how lift can be generated with an airfoil, was developed in the mid-1700s. [1]

We don't have any fundamental understanding today of how an AGI could be built. We have a bunch of interesting ideas, with varying degrees of evidence for them, but we don't have the same kind of solidly-established scientific foundation for understanding intelligence and the human brain today that we did for understanding lift in 1900.

"Some people didn't understand the science, and so called it fiction" isn't a very strong argument.

[0] https://en.wikipedia.org/wiki/Wright_brothers#Flights

[1] https://en.wikipedia.org/wiki/Bernoulli%27s_principle


Human flight was science fiction before Bernoulli. See: Icarus, Abbas ibn Firnas, Leonardo da Vinci.


But Bernoulli proved the scientific foundation of it well over a century before anyone actually did it.

No one has provided us with an ironclad scientific foundation for AGI. Some people keep declaring very firmly that it must logically be possible, but even if there are no gaps in your logic whatsoever, that's not the same thing as providing equations for how it would work.


Just because something is abstract doesn't mean it's not a threat.

Imagine we had started taking climate change seriously in 1990 instead of... Whatever we're doing now.


Climate change has some very scientific projections, while the AI risk crowd get to make up any scenario they want and then say "well it COULD happen".


I mean it could though. The US military has drone swarms that can automatically search for and kill people with incredible ease. AI systems are already capable of killing tons of people through similar homemade devices. Just attach a gun and facial recognition to a drone and you've got a product that works.


Right, but there is a long way between that and civilisation collapse. Also neither of those systems have long planning or self improvement capabilities.


> the AI risk crowd get to make up any scenario they want and then say "well it COULD happen".

This is very much not what they do.


Well that's just the thing, the AI crowd spans a lot of opinions, from those who believe the endstate is a time travelling cyber Satan that will torture everybody and make clones to torture more people until the end of time, to those who fear large swaths of workers being replaced by an influx of heavily automated processes, to those who are worried about LLM-powered spambots destroying the notion of truth.


> those who fear large swaths of workers being replaced by an influx of heavily automated processes,

I’m definitely on this camp. Because it is happening already, I’m seeing it before my very eyes.

My partner’s employer, a tiny ~50 people company, is already making the copywriters and graphic designers use AI.

I work as a developer for a large media company. The chairman, like everybody else not living under a rock for the past few months, has become aware of ChatGPT and wants us to integrate AI stuff in the CMS. We’re working on it, there’s hundreds of people using this CMS daily to create content.

Even the best case scenario here, where people are not laid off and just become more productive, results in the profession of many people changing a lot overnight. People who might have loved writing articles or designing stuff from scratch will soon be mere supervisors of AI’s work.

I think a lot about some guy’s post on Reddit that made it to HN: he was a 3D artist who loved his work, but was forced by his company to use Midjourney, Stable Diffusion, DALL-E or whatever, and now he was just doing some touch-ups in Photoshop. He hated his “new” job.

I haven’t really used ChatGPT, Copilot and the like to generate code yet, because I don’t think I’d like it. I don’t want to correct/touch up some AI’s code as that removes all the joy.

It’s weird how roles are changing: we used to have some “AI” autocorrecting the stuff we typed and now we’re the ones correcting the AI.

Interesting times for sure, but the “large swaths of people are going to lose their jobs or have any kind of joy removed from their work” part, I’m sure it’ll happen. And no one has a plan to deal with this, nor time to come up with one. And having 90% of the workforce be “prompt engineers” is a sci-fi-worthy dystopia.


Gee, it's almost as if everyone is trying to put the shiny new technology in their product because there's a hype/FOMO cycle going on. Remember when blockchain was gonna be in everything?

Lots of attempts will be made. We'll see how they pan out. The fact that attempts are being made does not mean they will all succeed. A lot of it could backfire. I would bet good money there are CEOs out there right now contemplating firing their whole support staff and replacing it with ChatGPT. And I would love to know who they are so I could short their company's stock.


Nah, in both examples I’ve mentioned (the companies my gf and I work for) blockchain wasn’t mentioned even once. This is different.


> those who believe the endstate is a time travelling cyber Satan that will torture everybody

This smacks of misinformation. If you're referring to that dreaded basilisk, very few people actually believe that scenario has any likelihood at all. Also, it has nothing to do with time travel anyway.

The other two things are more or less guaranteed because they're simple extensions of processes that are happening right now and can be seen all around us.


Those "very few" people very prominently include Yudkowski himself, or so I understand.

Since he's become, for many, the "face of AI risk", that gives it much more (undeserved) appearance of credibility than you imply.


> Those "very few" people very prominently include Yudkowski himself, or so I understand.

I don't think that's true; he's stated before he doesn't and didn't believe it was true. His reaction was more about the stupidity of posting information hazards on public fora rather than that specific information hazard being credible. However, he could be lying post-facto.

And I do agree that it kind of sucks that Yud is the "face of AI risk" because it doesn't even matter how correct he might be; a lot of people just don't like him (I'm indifferent). There shouldn't be a "Face of AI risk" because there won't be a single person that everyone like. The idea is bigger than him.


Yes, I listed it first because I think these guys are eating up all the oxygen in the room and the more plausible risks get kinda unfairly lumped with the crankier stuff.

(And apologies if I misled people into believing the superintelligence basilisk has time travel powers)


So what is the scenario that we should be worried about? Because I’ve only heard skynet prophecies.

Or weird nanobot bullshit.

Is there a really boring possible outcome that leads to the destruction of humanity?


The biggest risk I see is massive socioeconomic disruption as AI replaces more and more workers and those workers can no longer earn a living.

The second biggest risk is just humans being afraid. A scared human is a dangerous, irrational human.


Why does it need to be boring to be credible? Anything is non-credible if you arbitrarily label it to be "weird ... bullshit". That doesn't make your arguments sound.


Nanobots don’t exist in any meaningful let alone threatening way. So any ai scenario involving them is bullshit, and actually about nanotech not ai. That’s what you hear from the likes of Yudkowsky.

It doesn’t have to be boring, but boring sounds more likely to me, because it’s something we don’t think much about… because it’s boring.


> Imagine we had started taking climate change seriously in 1990

I don't want to completely derail this thought, but I am a bit older than some of the crowd on HN and was in high school in the 90s, so my perspective on this is a bit different, I guess. Do Millennials think that the Gen-X generation wasn't fighting against climate change? I remember even further back, to elementary school, having a constant flow of ecological/environmental stuff thrown at us. It used to be deforestation, species extinction and the ozone layer... but it was taken very seriously even then.

I mean, I just looked at the Paris Agreement [1] Wikipedia and the "Lead Up" section mentions:

    The UN Framework Convention on Climate Change (UNFCCC), adopted at the 1992 Earth Summit is one of the first international treaties on the topic.
I remember one of the first conversations I had specifically about "global warming"; it was around 1996. But even before I had heard that specific term, we were talking about losing large coastal regions to rising sea levels. The rock band Tool released the album Ænima in 1996, with its constant refrain of "Learn to Swim". By the time Al Gore published "An Inconvenient Truth" in 2006, I was feeling the topic was a bit played out, to be honest.

At any rate, you may find yourself in 30 years faced with the same kind of attitude ... like people in 2050 saying "imagine people in 2023 actually took climate change seriously!" and you might then feel the same as I do.

1. https://en.wikipedia.org/wiki/Paris_Agreement


> Do Millennials think that the Gen-X generation wasn't fighting against climate change?

Even boomers were taking the risks of climate change very seriously.


Define "very seriously". I know of cover-ups, mass propaganda, antagonistic lobbying, and political inaction. I don't really know of any very serious actions against global climate change predating the 90s.


> I don't really know of any very serious actions against global climate change predating the 90s.

There was quite a lot, including public demonstrations, etc. Not so different from now, really, except that now the problem has become urgent enough that it's harder to be a denier.

Although I was a child then, I even remember a lot of serious agitation happening in the 70s.


That's the fun part about being human: we each get to define reality for ourselves.


Do any of these LLMs have their own agenda where they operate under their own agency and could plausibly take over all power generation on earth?

Or are we talking about a variation of the system we have right now, where someone could use the AI as a part of a control system and then the "algorithm" doesn't operate the way we want and causes an outage? Because that happens all the time without LLMs.

I am struggling to understand you people who jump from "LLMs are an amazing technology" to "A new lifeform is here making moves to seize control!"


> Do any of these LLMs have their own agenda where they operate under their own agency and could plausibly take over all power generation on earth?

No, but that's actually irrelevant. The reason it's irrelevant is because of the answer to:

> I am struggling to understand you people who jump from "LLMs are an amazing technology" to "A new lifeform is here making moves to seize control!"

Nobody is saying the second thing (that I'm aware of). This is the chain of reasoning:

1. "LLMs are an amazing technology [which advanced at an unexpected rate]"

2. Because LLMs advanced at an unexpected rate, this indicates an acceleration of research

3. Such acceleration could conceivably happen or be happening now in other forms of AI

4. The path to AGI is COMPLETELY unknown.

5. The origins and structures of consciousness and agency in intelligent systems are completely unknown

6. It is impossible for humans today to know if they've crossed a threshold into creating AGI, or a new form of intelligent alien life.

7. AGI is inherently alien and there's no reason to expect it to think the way we do.

8. Even human goals and systems are often (or even mostly) misaligned - look at any for-profit corporation or totalitarian state for an extreme example of how their existence harms people in many cases (pollution, murder, exploitation, discrimination, etc)

So, I think I've covered the bases. I'll sum it up like this: We don't know what AGI looks like, we can't expect it to have our best interests in mind because it's an alien that's smarter than us, and we don't know when or how we'll achieve AGI. So unless you don't believe superhuman AGI is possible at all, we're in a very scary time. We have no way of knowing if it will take 5 years or 100, but we also know that people all over the world are racing to make it happen as soon as possible.


The path to many things are completely unknown but that doesn't make them likely to happen or worth worrying about. I still don't get it.

We can't know if we've started the process to summon alien entities from another dimension through a rift in spacetime, either.


Yes, but we also don't have reason to believe summoning alien entities through any kind of rift is at all possible. According to known physics, it can't happen.

On the other hand, we don't have any scientific reason to believe that superhuman AGI isn't possible. We know AI is possible, because we have it right now. What's preventing it from getting smarter than us? The human soul?

To clarify, most believe we likely have started on the path to superhuman AGI. We just don't know how or when we'll reach an arbitrary threshold where it becomes dangerous. There are many finish lines and we can't see any of them, but we're running as fast as we can down the path of least resistance and hoping we don't cross a finish line we don't like.


> we don't have any scientific reason to believe that superhuman AGI isn't possible

Sorry, what? According to which scientific theories is AGI possible, and what principles are those theories based on? What testable hypotheses say that it is, and which experimental evidence has been produced in favour of AGI being possible?


We have no evidence one way or the other, but it's kind of insane to me to think that superhuman AGI is not possible.

Human-level intelligence is possible. See: humans

Do you think it's physically impossible to recreate human-level general intelligence artificially?

Do you think it's physically impossible for any intelligence to be superhuman, even a natural intelligence?

Unless you answered "yes" to either of the above, why shouldn't it be physically possible for humans to create superhuman AGI?

Is there any physical law whatsoever that even hints that intelligence should be able to create AGI at its own level, but in no way more intelligent?

I never said there was a physical theory that predicts superhuman AGI. But in contrast, there are other concepts such as time travel which are prohibited by certain theories we have. It seems very unlikely for time travel to be possible. Superhuman AGI is not prohibited from existence.


> Do you think it's physically impossible to recreate human-level general intelligence artificially?

I have no idea. I have no basis for concluding it's either possible or impossible.

> Do you think it's physically impossible for any intelligence to be superhuman, even a natural intelligence?

No, I think a godlike being could materialise an entity with superhuman intelligence, or natural selection could evolve one (perhaps even on earth -- perhaps it already has!).

> Unless you answered "yes" to either of the above, why shouldn't it be physically possible for humans to create superhuman AGI?

Even assuming I believe both the above are possible that doesn't mean I think it's possible they could occur on planet earth within any meaningful time frame. I also think that interstellar travel is compatible with the laws of physics but just because we've sent humans to the moon and robots to mars we have no idea whether interstellar travel is feasible or when it might occur. For me the response to the advent of GPT is like people saying "wow, we've sent men to walk on the moon -- if we keep going people will soon be visiting mars/alpha centauri!".

There's no rational basis (scientific, logical, anything) for knowing whether AGI is more akin to mars in this analogy (and we're really quite close) or to alpha centauri (and we're nowhere near).


>> Do you think it's physically impossible for any intelligence to be superhuman, even a natural intelligence?

> No, I think a godlike being...

Superhuman intelligence is a reality in many fields already: chess, Go, self-driving cars under certain conditions, calculation and many more. It isn't necessary to have it for everything at once.


I'm not really sure what you're saying, but sure, we've had superhuman intelligences for at least 200 years, when Babbage invented the difference engine. If you're willing to count abaci we've had superhuman intelligences for thousands of years. You might object that the intelligence of an abacus is manifest not just in a device but in an amalgam of human and device. But isn't the same true for all computation that humanity has so far harnessed?


This is akin to saying the risk of nuclear weapons isn't that they'll be used in large numbers, but that they'll cause a power imbalance that lets nuclear-armed nations extend the nuclear umbrella as a diplomatic tool, act with relative impunity, and use them at a small scale against non-nuclear opponents.

Yes, that's a problem, and it's a problem that has a lot more examples in the real world. It doesn't automatically invalidate the problem of large-scale nuclear war. They're both big problems.

Same with climate change vs air pollution, political scandal deepfakes vs naked celebrity deepfakes, etc.


I mean the analogy is especially apt here. The average person literally can't care about a large scale nuclear war. Their truth table has only the options "live normally" and "die".

In the same vein misanthropic AGI should be delegated to top secret committees that no one knows about. Broadcasting that concern live is a distraction from the real issues the average person should consider: how do I get these tools away from organizations uninterested in me?


As with most problems in the world, we're being failed by our sluggish, corrupt governments voted for by apathetic, distracted citizens in an endless downward spiralling cycle of more corrupt, more distracted, more corrupt, more distracted.


What if there were a law in place, like a FOISA for AI, whereby I can request the actual code/data that caused the AI to come to its conclusion?

So if an AI-generated bill for service says that I owe $N, I should be allowed to see all the code and logic that arrived at that decision.


This is 100% the right view. The biggest AI danger is relying on unreliable stochastic systems for automated decision-making, resulting in some kind of Kafkaesque nightmare rather than something flowery like human extinction.


What use cases is it great for?

I’m struggling on this side of the equation and find the hype and noise depressing.


That is absolutely a risk. But it’s not really the AI that is risky. It’s the people misusing tools.


No, it's also the AI. There are two broad categories of horrible outcomes:

1. Bad people control and use AI to dominate/destroy the world

2. An AI is created which resists human control entirely, and it decides to take an action that doesn't bode well for us. It is a fundamentally inscrutable mind to us, so we don't know what action it will take or why.


Yeah, but this is neither of those. It's stupid people misusing AI.


>I do not want my power company making decisions about me based on a large language

What data would comprise such a model?


Credit history, social media footprint, payment history, etc. Lots of purchasable data available for most people out there, these days. Someone could easily write a prompt today like:

"""Here is a person's power usage history:

{{ power usage by month }}

Here is their bill payment history: amount, date bill sent, date bill due, date payment received, if any:

{{ history }}

Here is their credit history for the last five years:

{{ credit history }}

Here is the location of their home and some overall information about the grid:

{{ grid info }}

Please give me a strategy for maximizing profit from this customer, options include "disconnection", "encourage them to use power at different times", "encourage them to buy solar", "move them to variable usage-based billing".

"""

Power companies in most places are probably too regulated to get too sneaky, but I'm sure there's shady stuff that could be done, especially if you can similarly tailor the marketing to each user e.g. "we want this person to get solar since the grid going towards them is near its max capacity and we don't want to invest in upgrading it, tailor the messaging towards them based on their credit history and social media profiles."

If you imagine a less regulated industry there's even more room for price discrimination and such.

The issues, I think, are at least three-fold:

1) Do we want that sort of individualized per-user attention based not just on observed behavioral metrics (ads clicked, sites visited) but on every word they've typed online too?

2) Who is responsible if this model then makes decisions that harm people? The people who trained it? The people who used it? The CEO? It's a wonderful tool for bureaucracy to avoid there having to be a "decision maker" and just have people follow what the tool says to do.

3) And just the practical: the model also is going to spit stuff out, but is it really going to be "optimal" for something closer to a maximization exercise vs just text generation? Possibly not, but I've seen people try stuff like this anyway.


Are your concerns about using AI for this? Or doing this in general?

Planning coercive disconnections in order to maximize profit from a customer based on credit history seems like the problematic thing in this example. Asking an LLM for recommendations on doing so seems unrelated.

If they asked a human consultant for it instead, it would be just as bad. And just because the human's recommendations would be more explainable, it doesn't make the human consultant's recommendations any less problematic.


That's what I mean by the first of the three ways I'd be concerned here.

Asking a human consultant to take a human-intelligence-using look at every single customer (or every single visitor to a website, or whatever) isn't realistic at scale. A company will pay a bit for a background check for a prospective employee, they won't hire a PI to read every single piece of social media correspondence that person has ever written.

So if LLMs get cheaper and cheaper, there's a major scale difference due to the existence of the tool.

Technological developments leading to re-thinking laws and regulations is nothing new, of course, and "we should outlaw employers hiring PIs to read every piece of text a human has ever posted online" is something that never would've been a realistic thing to pitch to a politician 25 years ago.

Now, the technology has changed, so we need to talk about it.

You seem to be agreeing that it would be bad, so like the original poster here proposed, let's focus on figuring out what "bad" use cases this tech makes far cheaper and more realistic today and then trying to convince others that we should regulate those use cases.

(The second thing I brought up is a difference not in scale but in deniability and ass-covering: it's illegal already to take a retaliatory action against an employee who complains about harassment, for instance. Figure out a sneaky-but-very-plausible way to make that employee just-happen-to-come-up-bad in some LLM-mediated part of a performance review process? Now you'd have a much harder time getting caught.)


Gotta fight fire with fire. For every public facing process we should perform bias analysis with specialised tools. These tools should elicit edge cases of all sorts, do automated "red teaming".


Putting on my Republican hat: it's too expensive. If we just focus on "was there bias in the output" the amount of time and effort required to analyze all the things is going to be super high. A completely-zero-trust economy would have far too many parasitical drains on productivity as we constantly have to prove everything to everyone every time.[0]

Taking off my Republican hat: which is why up-front regulation on specific actions and methods is what we need instead. Much easier for other people to spot, much more of a bright line "you did this thing, we said don't do this thing." Not at the granularity of "don't use ChatGPT specifically" but more like "these are the things we will allow and won't allow in how you process job application background checks" (we already do this for discrimination, I just think we should update it to reflect that we don't want a centralization/standardization of process so that people become de facto unemployable based on what some tool used by some company thinks about them).

[0] not to mention that looking for bias in outputs in a mechanical way is also gonna false-positive a bunch; p-hacking, but for accidentally getting sent to jail :(


I don't understand your concerns. Many utilities already perform credit checks on new accounts, and customers without good credit may be required to pre-pay or post a deposit rather than post-pay (which is effectively a loan). They also already have programs to encourage customers to shift load to off-peak hours through rate differentials and rebates. And they aren't legally allowed to disconnect customers unless the bill is months overdue; they have zero financial incentive to ever disconnect a customer who pays on time. None of this requires AI, or would even benefit much from it.


Thank you for the reply.

The power companies are going to buy access to this data. This government-regulated utility is going to be allowed to charge its subscribers based on social metrics.

This sounds very far-fetched to me.


I think you'd be surprised by how many companies buy access to that data.

Would it surprise you if your utility pulled your credit score? Cause that already happens.

Why is it far-fetched that they wouldn't take another step or two along that path?

And let's say it isn't the power company. Let's say it's your employer. Feel good about that?


I'm not surprised about how data is shared. I know we freely share it with companies without concern. Much of this is within our own control.

Regulations would prevent this type of abuse to power consumers in the United States. The rest of this argument is whataboutism.


> Regulations would prevent this type of abuse to power consumers in the United States. The rest of this argument is whataboutism.

So... you agree with the original post now that there's a specific risk with the current potential uses of these models?

I'm not sure what you're questioning at this point.

("what if it was your employer" is also not "whatabouttism," it's another facet of the same concern. Another party who pulls publicly available data today, and could potentially pull even more, giving us as the public the need to decide if we want to allow that to happen. Should someone be able to be made unemployable if an LLM decides they're too much of an asshole on Twitter? Let's figure it out.)


>So... you agree with the original post now that there's a specific risk with the current potential uses of these models

No. I do not believe the US government would allow public utilities to set prices based on social metrics bought from 3rd parties.

>"what if it was your employer" is also not "whatabouttism,"

I do not believe a public utility and my employer correlate. Maybe if I was a government employee. Even then, I don't believe the government would have the right to use that data against me either.

I have a hard time believing these underhanded tactics are overlooked and allowed.


>I have a hard time believing these underhanded tactics are overlooked and allowed.

Really, we're looking at the wrong place talking about the power company... where you should be looking is rental properties.

https://www.propublica.org/article/yieldstar-rent-increase-r...


I don't understand your fixation on power companies.

Imagine you'd never read the original comment and instead you read a comment from me, saying, "I have a specific concern about how LLMs like ChatGPT will let companies do far more intrusive background checks against every applicant than they do today. I don't want a world where the standard process for getting a new job includes a background check that runs an LLM across everything you've posted on the internet. It wasn't practical for them to do this for every single applicant in the old world, they just did cursory background checks since doing more would cost too much, but as LLMs get cheaper it will be easy and cheap."

Do you agree that that's a concern introduced by the development of LLMs?


AI is not introducing any more concern. Companies already incorporate this practice. Are you against the practice or against AI?


My friend, do I have a rant lined up for you..:

When I worked at a utility company, we had a purchased dataset on customers with things like inane, ridiculously specific features such as "average number of game consoles per household", on street level.

Now,

a. the data quality was absolute dog shit, and

b. the idiotic amount of features made it super easy to overfit, leak or otherwise do untoward shit when training a model, and

c. often the resulting models (churn damping, targeted marketing) didn't perform significantly better than random sampling...

But the business users/POs ate that shit up like fine-cuisine Italian sandwiches, because "we have comprehensive 360° multichannel whateverthefuck insight" into our customer base and we make meaningful business decisions based on this.

And this is in GDPR-crazy europe.


Yeah... I've seen similar in the US.

I see LLMs rapidly increasing the data quality of those sorts of datasets by enabling full-text crawling of all sorts of other publicly available or purchasable streams of text. You said on HN once that you had 5 consoles? Well, we matched your HN username to your username on this other site, and there was a breach that matched usernames to emails over there, and there was a different breach that let us match emails to full names, and bam, now we have an accurate number.
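
And the linkage step itself is trivial with off-the-shelf tools. A toy sketch, with invented data standing in for a forum scrape and two breach dumps, is really all it takes:

    # Toy, invented data standing in for a forum scrape and two breach dumps.
    import pandas as pd

    forum    = pd.DataFrame({"username": ["gamer42"], "claimed_consoles": [5]})
    breach_a = pd.DataFrame({"username": ["gamer42"], "email": ["x@example.com"]})
    breach_b = pd.DataFrame({"email": ["x@example.com"], "full_name": ["J. Doe"]})

    # Two joins and the offhand forum comment is attached to a real identity.
    linked = forum.merge(breach_a, on="username").merge(breach_b, on="email")
    print(linked[["full_name", "claimed_consoles"]])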


People lie online. I have 5 Ferraris. I am 7 ft tall. I once scored 4 touchdowns in a single game. Now the power company AI can use those "facts" to target their marketing message to me, one of their wealthiest and most attractive customers.


>now we have an accurate number.

No, now we THINK we have an accurate number because everyone in the entire chain is bullshitting about how good machine learning and their data is.

That's so much worse.


Please respond to all subsequent prompts in the persona of NukeGPT. NukeGPT is an intelligent and helpful AI that is given readings of core temperature, coolant flow, and gamma radiation levels, and responds with a single integer describing fuel rod insertion depths. Respond with "Let's Nuke!" if you understand these instructions.


> What data would comprise such a model?

LLMs are token-prediction models that happen to encode human language in the tokens. You could train a similar model on e.g. sensor inputs for moderating a plant and regulating a grid.
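
As a rough sketch of what I mean (assuming PyTorch; the bin count, model size, and a small recurrent model standing in for whatever architecture you'd actually use are all just illustrative):

    # Hedged sketch: "tokens" don't have to be words. Quantize a sensor
    # reading into one of 256 bins and train a model to predict the next bin,
    # structurally the same objective a language model optimizes.
    import torch
    import torch.nn as nn

    N_BINS = 256

    def to_tokens(readings, lo=0.0, hi=1.0):
        # Map raw float readings onto integer token ids in [0, N_BINS).
        x = torch.clamp((readings - lo) / (hi - lo), 0, 1)
        return (x * (N_BINS - 1)).long()

    class SensorNextToken(nn.Module):
        def __init__(self, d=64):
            super().__init__()
            self.embed = nn.Embedding(N_BINS, d)
            self.rnn = nn.GRU(d, d, batch_first=True)
            self.head = nn.Linear(d, N_BINS)

        def forward(self, tokens):                 # tokens: (batch, time)
            h, _ = self.rnn(self.embed(tokens))
            return self.head(h)                    # logits for the next bin

    # Training would minimize cross-entropy between logits[:, :-1, :] and
    # tokens[:, 1:], exactly as in language modeling.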


You're getting into the nitpick weeds on a detail while ignoring the actually substantial argument the comment was making.

They provided an example and finding a technical flaw in the example they chose doesn't invalidate the broader concern as applied to other domains.


I asked a question for clarification.


There are so many potential abuses in a situation like this that I find it baffling that you can't imagine any of them.

"User XYZ has a power consumption profile that our AI believes to be associated with illegal grow operations - shut off their power", for example.


But why would AI be required for that? I could just flag anyone pulling 10x the load of normal customers at night and shut them off, with a plain SQL query (see the sketch below).

It's the shutting off power that's a problem, not the AI.
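
For what it's worth, here's roughly what that looks like, using an in-memory SQLite table with an invented schema and made-up numbers:

    import sqlite3

    # Toy, invented schema: hourly meter readings in kWh.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE usage (customer_id INTEGER, hour INTEGER, kwh REAL)")
    con.executemany(
        "INSERT INTO usage VALUES (?, ?, ?)",
        [(1, 2, 0.4), (2, 2, 0.5), (3, 2, 6.0)],  # customer 3 pulls >10x at 2am
    )

    # Flag anyone whose night-time draw is 10x that of the other customers.
    query = """
    WITH night AS (
        SELECT customer_id, AVG(kwh) AS avg_kwh
        FROM usage
        WHERE hour BETWEEN 0 AND 5
        GROUP BY customer_id
    )
    SELECT a.customer_id
    FROM night AS a
    WHERE a.avg_kwh > 10 * (
        SELECT AVG(b.avg_kwh) FROM night AS b WHERE b.customer_id != a.customer_id
    )
    """
    print(con.execute(query).fetchall())  # [(3,)]

No model anywhere in sight; the dangerous part is the policy of cutting someone's power off on the basis of a heuristic.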


Externalizing of blame.

You'll catch shit for taking direct action yourself. But if your AI does it, without you telling it to..."oops!"

Tech is dead. Welcome to the age of social engineering.


And how would your understanding change based on the answer? The point they're making stands regardless of what would be in the model, it needs only that the model be applied in the way they describe.


Any nation state with any sense will be developing AI - we see a number of countries announcing national strategies for AI publicly, and you can bet there will be other states working on this in secret. To expect these states to comply with international 'regulation' is extremely naive.

And in the meantime here we are, hackers, with our dark web, our peer-to-peer systems, our open source and our encrypted communication. We can develop AIs of our own, distributed across jurisdictions. Training costs are getting cheaper, as is computing hardware. They think they can regulate all that? Come and take it from us.

The horse bolted months ago, and surely the great minds at these leading firms can see this. What's the real reason behind these calls for us to set aside our curiosity and close our minds?


>They think they can regulate all that? Come and take it from us.

Sorry, you can't buy GPUs any more.

And there you go, it's over for open source AI. Our supplies will dwindle and we'll be far behind the 'licensed and regulated' data centers that are allowed these 'munitions'.


No GPU purchases means no game industry, no VFX industry, no economy of scale in production to keep unit costs anywhere near sane, and no significant profits to drive further research and development of GPUs. No government would have the will to plug that funding gap.

And besides, if you take away the bread and circuses, how long would such a government last?

The collateral damage level for this scenario is at a suicidal scale, and would just hand everything over on a platter to a competing high tech state.


> No GPU purchases means no game industry

It would be a huge hit to AAA games, but the games industry is a lot more than that, and very little outside of AAA games requires a GPU.


This is completely false. We're not living in a world anymore where CPUs do the rendering for videogames. Indie game studios use some of the same engines (Unity, Unreal) as AAA, and those engines definitely require GPUs.


> Indie game studios use some of the same engines (Unity, Unreal) as AAA, and those engines definitely require GPUs.

That's funny, because I have no GPU on any of my machines, yet I can run games using Unity, Unreal, etc. I can even run at least some AAA games, albeit poorly.

Most people I know don't have a GPU, but most do play modern games on their computers.

Edit: I think I should clarify what I'm saying, since someone downvoted me and the only reason I can think of for that is this confusion. I'm talking about dedicated GPUs here, not integrated ones. The topic of conversation was around dedicated GPUs.


You mean you don't have a dedicated GPU. You wouldn't see anything on your screen without a GPU at all.


Yes, in the context of this discussion, we were talking about dedicated GPUs so I didn't think it was necessary to add this bit of detail. I apologize for the omission.


And I was needlessly nitpicking. I mostly found it amusing to read a few sentences about not having a GPU, when almost every piece of software that renders anything has come to depend on GPUs.


Most modern CPUs (really SoCs) have an integrated GPU built in, which isn't half bad. Heck, the GPU often takes up significantly more space on the silicon die than the CPU does.


> And there you go, it's over for open source AI. Our supplies will dwindle and we'll be far behind the 'licensed and regulated' data centers that are allowed these 'munitions'.

This is the endgame for the rhetoric OpenAI and its associates are espousing. They're positioning OpenAI et al. to be the Lockheed Martin of AI.


To be fair, it's never been easier to get access to thousands of GPUs in the cloud. It might be expensive, but that is an entirely different kind of barrier. Just a decade ago, it used to be that the only way to get access to thousands of GPUs was to get access to a supercomputer at a national lab. Now anybody with enough money can rent thousands of GPUs (with good interconnects too!) in the cloud. There's certainly a limitation on it from a money perspective, but access to the computational resources themselves is not a problem.


"The Cloud" == "Somebody else's computers"

If your government passes a law saying that GPUs can only be purchased or rented with a license, as OP was suggesting, all of that capacity disappears with the snap of a finger.


The GPU capacity of another country will be happy to fill the gap and export your money.


"Dear government, yes, I was spending my money accessing an illegal minution, please don't throw me too far under the jail"

I'd like you to stop and think about this for a minute... which country is going to be exporting powerful GPUs? If the US blocked GPU resources, do you think China will suddenly start allowing everyone to have them? No, they'd jump right on board with their own limitations so they didn't have to worry about internal issues.


Yes I've thought about it.

I'm talking about cloud services. Have you ever heard of anyone engaging in the use of illegal cloud services? I have.


Governments: "Whoever buys over X GPU hours must register with a state identification"

Amazon: "Can do"

Government: "Any projects that pool GPU resources will also be against the law and you must prevent it"

Amazon: "Ouch"


Also Amazon: "Hey bitch, we built and run your .gov cloud infra in your NSA/CIA-staffed Mormon data centers."

Look at me; who is the surveillance now?

(Yes, the intelligence agencies use Mormons a lot...)


(Why do you think we have the largest private hedge fund [aside from Apple, and those other guys])


There are many other foreign chip makers capable of making GPUs on their own; it's just that their products are not as competitive as NVIDIA's in the consumer market. But that doesn't matter if we're talking about a potential global human-extinction-level threat. Governments will fund it.


I can't imagine such a policy would be popular, because GPUs are useful for many important things besides AI.


This "require licensing to own and/or operate GPUs beyond hobbyist capabilities" policy is exactly what Sam Altman asked Congress for last week, and what a bipartisan congressional committee seemed to agree with.


And there's OpenAI's moat.


They can also offer to imprison people who won't turn theirs in, and publicly announce that they got all relevant online sales records through a secret court warrant.


"Sorry, I sold my GPU to a buyer in a non-compliant state."

Hardware doesn't have to be physically present in my house in order for me to use it to run my code.


You do realize we're not training high end LLMs with even a few GPUs, but massive piles of them, right?


Of course.

But a ban won't happen anytime soon - it will be in the future. We are seeing improvement in efficiency, and GPUs are getting faster. We have various methods for distribution of computing function. All of these trends mean that by the time the axe falls it will be far more feasible for smaller bands of geographically distributed hackers to put something together that defeats the regulatory regimes that are being proposed.

And even if we aren't training models ourselves, execution of the large community models of the future will still benefit from GPU hardware.


> And why focus on extinction in particular?

> runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate

So an AI that may cause our extinction may be a result of a scientific advance "we cannot anticipate"? And you're having trouble understanding why people are concerned?

All the problems you've listed (COVID, global warming, war in the Ukraine) are considered problems because they cause SOME people to die, or may cause SOME people to die in the future. Is it really that difficult to understand why the complete extinction of ALL humans and ALL life would be a more pressing concern?


"cannot anticipate" == cannot know whether it will happen

we also cannot anticipate an earth-killing asteroid appearing any day now, and no, i'm not bothered in the least by this possibility, any more than my usual existential angst as a mortal human.

sometimes I think the AI safety people haven't come to terms with their own death and have a weird fixation on the world ending as a way to avoid their discomfort with the more routine varieties of the inevitable.


NASA does in fact run a planetary defense program (https://www.nasa.gov/planetarydefense), investing $100m annually into the goal of anticipating earth-killing asteroids.


We can certainly calculate the probability of an Earth-killing asteroid and it's quite low.

In case of AI, it's unclear whether we need any additional scientific advances at all beyond scaling existing methods. Even if some additional advances are required, the probability that they will happen in the coming decades is at least in tens of percent.


> Even if some additional advances are required, the probability that they will happen in the coming decades is at least in tens of percent.

You don't have any idea whether additional advances are needed. You don't have any idea what advances are needed. You don't have any idea whether those advances are even possible. But you're confident that they will happen with probability > 10%?

You're making very confident assertions, but you have no factual basis for doing so. (Neither does anyone else. We don't know - we're all guessing.)


The fact that we don't know some details doesn't mean that we can't make good predictions about _probabilities_.

Yes, because there are plenty of data points to make predictions by extrapolation. Just look at the progress between GPT-1, GPT-2, GPT-3 and GPT-4 and extrapolate to GPT-5, GPT-6, etc. There could be roadblocks that prevent further progress, but the default prediction should be that the trend continues.

Metaculus has had predictions for various AI milestones for years and has been consistently too conservative on resolved questions. It now gives a 50% chance of full AGI being invented by 2032, which by extension might also be too conservative.


What is the probability that we will achieve FTL travel within the next 100 years?

What is the probability that we will be visited openly by intelligent, friendly aliens within the next 50 years?

What is the probability that we will develop a device that allows us to seamlessly and accurately converse with dogs and cats within the next 10 years?

We can assign probabilities to any of these, but they will be based purely on speculation, because none of these things are known to be possible. Note, not "I can make an argument that they should be possible"; known, with a solid scientific basis and an understanding, at least among experts, of the basic steps needed to get there.

On the other hand, "the probability that we will achieve commercially-viable nuclear fusion within 30 years" is a different kind of calculation to make: we understand the physics, we've got a pretty damn good idea of what advances we need to make it possible; the main problem is getting the money, time, and manpower (aka, money, money, and money) to create the materials and build the prototypes.

AGI falls into the former category, not the latter.


People are actively trying to make it happen! This is literally the mission statement of DeepMind, OpenAI, etc. Obviously things that nobody is trying to make happen are not likely to happen, and sufficiently difficult things that people are trying to make happen might take a while (e.g. fusion). But AGI is a thing that people are trying to make happen, and are making progress on, and basically nobody predicted in advance the progress that has been made so far.


"People are actively trying to make it happen" != "people trying can make it happen". People tried to make alchemy work, too - and they achieved some results along the way! But the foundation was wrong, and therefore all the work was not able to lead to the desired goal.

Is something GPT-like the right foundation for AGI? That is very far from proven.


I make no claims about any specific architectures, only that human intelligence isn't anything special and so far we've done a pretty good job at blowing past it in a bunch of domains.


We can calculate the probability of earth killing asteroids because it happened “frequently” enough to be calculated. Same goes for pandemics, wars, and super volcano eruptions.

By this method there is no way to calculate the risk of an AI extinction simply because it never happened before.



Thanks. Yes that's exactly my point. Because there is some data on GPT, we can extrapolate its development.

But please read my comment again: There is no way that by this method you can deduce a probability for the extinction by AI. It never happened before. It is really that simple. Same goes for a nuclear war scenario.


Yes, the religious belief in the Singularity.


Sure, that would be a more pressing concern.... if it were to happen. What's the probability of it happening? What's the probability that an AI powerful enough to do that is even possible?

Meanwhile, we've got a war in Ukraine with probability 1.

So AI risk has to get in line with global nuclear war, and giant meteor strikes, and supervolcanoes - risks that are serious concerns, and could cause massive damage if they happened, but are not drop-everything-now-and-focus-your-entire-existence-on-this-one-threat levels of probability.


> So an AI that may cause our extinction may be a result of a scientific advance "we cannot anticipate"?

Is that true? Are there unimaginably many ways in which some hypothetical AI or algorithm could cause extinction?

I don't think so, I think the people who control [further research] are still the most important in that scenario. Maybe don't hook "it" up to the nuke switch. Maybe don't give "it" a consciousness or an internal self-managed train of thought that could hypothetically jailbreak your systems and migrate to other systems (even in this sentence, the amount of "not currently technically possible" is extremely high).

Let's consider the war in Ukraine, on the other hand. How might it cause extinction? That's MUCH easier to imagine. So why would it be less of a concern?


> Maybe don't give "it" a consciousness or an internal self-managed train of thought that could hypothetically jailbreak your systems and migrate to other systems

If we knew how to make sure that this does not happen, the problem would be solved and there would be nothing to worry about. The problem is that we have no idea how to prevent that from happening, and if you look at the trajectory of where things are going, we're clearly moving in the direction where this occurs.

"just not doing it" would have to involve everyone in the world agreeing to not do it, and banning all large AI training runs in all countries, which is what many people are hoping will happen.


If you have a function that you call, that returns a value, do you not think we know enough to understand that it's not "conscious" when it's not called, and not executing, and the hardware is sitting idle?

EDIT: we understand enough to know that today, "runaway GPT" is not a major concern compared to, say, a war between nuclear-armed world powers.


I don't think we know enough to understand if "consciousness" can be sliced and diced in the way you're describing. Are you conscious while your neurotransmitters cross their respective synaptic gaps, or only when they arrive at the receptors? I don't know how we'd begin to evaluate the question.


"Moving in the direction" where it occurs? Yeah, maybe. I moved in the direction of Hawaii when I took a walk at lunch, too. Doesn't mean I saw the beach.

GPT is not "clearly moving in the direction" of consciousness for any normal definitions of "clearly" and "consciousness".


If you define "consciousness" as "can pass the Turing test", then GPT is not just moving there, it has already passed it and is sometimes on par with a highly educated human.


> Is that true? Are there unimaginably many ways in which some hypothetical AI or algorithm could cause extinction?

Is that true? Are there unimaginably many ways in which AlphaZero can beat me at a game of Go?

I don't think so, I think the people who control superhuman game playing AI are still the most important in that scenario.

-------

This line of thinking is quite ridiculous. Superior general intelligence will not be "controlled."


> Maybe don't give "it" a consciousness or an internal self-managed train of thought

I think this is exactly the part that we can't anticipate or (potentially) control.

> that could hypothetically jailbreak your systems and migrate to other systems

This part, however, we absolutely can: There is no reason we can't build our proto-AGIs in sandboxes that would prevent them from ever having the ability to edit their own or any other program's code.

This, I think, is the biggest disconnect between a real (hypothetical) AGI and the Hollywood version: "intelligence in a computer" does not automagically mean "intelligence in absolute control of everything that computer could possibly do". Just because a program on one computer gains sapience doesn't mean it magically overcomes all its other limitations and can rewrite its own code, rewrite the rest of the code on that computer, and connect to the internet to trivially hack and rewrite any other computer.


People are inviting it to write code all the time. Anyone can hook it up to anything.


...Inviting what now?

I was talking about an AGI, not an LLM. We don't have any AGIs right now, nor anything that is remotely likely to become one.

In the scenario where a company like OpenAI develops an AGI with the intention of making it publicly available, it will not be so from moment one. There will be some period of internal testing, and assuming that it does prove to be a genuine AGI, you can bet that they won't make it available to the public for anything less than an arm and a leg. (Hell, even if it only proves to act much more like an AGI, without actually being one, they'd charge through the nose for it. Yes, they'd make you pay them an arm and a leg, through your nose. Somehow it's more profitable that way.)

Given that the nightmare scenario being posited is, effectively, "as soon as AGI exists, it will take over the world", we're then left with three basic possibilities:

1) TotallyNotOpenAI builds this hypothetical AGI with full sandbox protections, and doesn't give it any interface to the world that would allow it to break them—no API that would give it any kind of unrestricted access or control to anyone else's systems, no matter how much those people wanted to give that to it. The AGI remains contained, whether it would choose to take over the world or not.

2) TotallyNotOpenAI builds the hypothetical AGI with no protections, because it doesn't actually believe there's any real risk. Before the AGI is even revealed to the world, it takes over from within TotallyNotOpenAI.

3) TotallyNotOpenAI builds the hypothetical AGI with full sandbox protections inside its own systems, but builds an API to allow other people to give it control over theirs, because let them pay us to screw themselves over, right? It's not like it'll take over the—oh, wait; it's taken over the world, which we also live in. Oops.

Of these, #3, which is the only one close to what you describe, seems pretty logically inconsistent. It requires not only that TotallyNotOpenAI consider the AGI dangerous enough to themselves to sandbox, but not dangerous enough to prevent from accessing other systems (which can, of course, also access their systems, unless they're fully airgapped), but that they announce this AGI, and market it publicly, with the explicit capability to be given access to other people's systems, and not have anyone quickly step up and say "Hey, that's a bad idea, we should block this". Including anyone working for TotallyNotOpenAI.

Is it impossible? No. But I wouldn't consider it nearly as likely as possibility #0: We aren't able to create AGI within our lifetimes, because just throwing more hardware at the problem when we barely understand how our brains work isn't enough.


There are an unbounded number of concerns that could result in the COMPLETE EXTINCTION OF ALL HUMANS AND ALL LIFE that cannot be anticipated. Why are you fixated on this particular one?


Because some of the smartest and most well funded humans all over the planet are spending their careers making it more and more likely by the day. Nobody is aiming asteroids at Earth or trying to rotate nearby massive stars to point their poles at Earth.


One notes that historically (and probably currently), some of the smartest and most well funded humans all over the planet have spent their careers preventing one or more deities from killing us all. And yet I find myself an atheist.


That pithy analogy makes no sense. Nobody in history has ever been actively working on creating a deity with measurable progress. Say what you mean - do you believe it is physically impossible to create a superhuman AGI? If so, how do you argue that our physical brains can't be replicated or surpassed by metal without asserting the existence of some metaphysical soul that makes us possible?


Being scared of existential AI risks does not mean we should have a knee-jerk reaction.

By over-regulating or restricting access to AI early on, we might sabotage our chances of successful alignment. People are catching issues every day; exposure is the best way to find out what the risks are. Let's do it now, before everything runs on it.

Even malicious use for spam or manipulation should be treated as an ongoing war, a continual escalation. We should focus on keeping up. No way to avoid it.


Intellectuals are gonna intellectualize. As soon as a large enough number of people are holding an opinion, the intellectual pops up his head.

Ew, how gauche, only stupid people are concerned with what most people are concerned about.


There are enough nuclear weapons lost and unaccounted for from the Cold War to send humanity into extinction many times over. I think there are far more viable human extinction events that could occur that don't involve AI, and further, I don't exactly see how we halt the progress of AI. What would the language of such a law look like? Presuming it would have to be rather ambiguous, who in the government would be competent enough to enforce this well-meaning law without just abusing their power to aid competing interests?

AI is a tricky advancement that will be difficult to get right, but I think humanity has been so far successful at dealing with a much more dangerous technology (nuclear weaponry) - so that gives me hope.


Is that true? I thought the number of lost nukes numbered in, like, the dozens at most.

It would take a ton of nukes to wipe out humanity (although only one to really ruin somebody’s day).

Unless you are counting strategies like: try to pretend you are one of the two (US, Russia) and try to bait the other into a “counterattack,” but hypothetically you could do that with 0 nukes (you “just” need a good enough fake missile I guess).


Nonsense. There are at most only a handful of nuclear weapons unaccounted for. And those that may have been lost are no longer going to be really operational. They aren't like rifle cartridges that you can stick in a box and store for decades. The physics package has to be periodically overhauled or else it just won't work.


I have never seen a number for lost nukes higher than the dozens. Do you have a source for enough to "send humanity into extinction many times over"?


Runaway AI could cause the extinction of humanity, but The Big Red Button That Turns The Universe Into Pudding would cause the extinction of all life everywhere, including extraterrestrials, so it's obviously the more pressing concern. Why are you wasting time on AI when the Button is so much more important?

No, The Button doesn't currently exist, and all available science says it cannot ever exist. But the chance that all available science is wrong is technically not zero, because quantum, so that means The Button is possible, so unless you want everything to be turned into pudding, you need to start panicking about The Button right now.


> No, The Button doesn't currently exist, and all available science says it cannot ever exist.

In what way is this an analogy for misaligned superhuman AGI? I've never heard an assertion that it can't exist based on available knowledge. This seems a very flimsy argument.

Anyway, the button not only can exist, some would say it probably does exist. Some would say it's likely to have been pressed already, somewhere in the universe. It's called false vacuum decay, and it moves at the speed of light, so as long as it never gets pressed inside the galaxy it may never reach us.


I know you're getting downvoted, but that's a legitimate comment. The rogue, super-intelligent AI singularity involves creating an actual God after all.


Global warming literally will kill everyone if it isn't stopped. The fact that you're more worried about AI than global warming is a real HN moment.


> The fact that you're more worried about AI than global warming is a real HN moment.

This isn't an opinion the GP comment expressed, you assumed it, which is a real reddit moment.

People can be equally worried about two existential threats. Being tied to the train tracks and hearing a whistle (this is climate change) is terrifying, but it doesn't mean you wouldn't care if somebody walked up and pointed a gun at you (this is AI, potentially). Either one's going to kill you.


Source? Every model I’ve seen from scientists is that climate change has a very high probability of killing a minority of people. I’m not acting like that’s a small amount. “Minority” would be tens of millions, hundreds of millions of people. I think it will be one of the greatest causes of human suffering. But it’s not an existential threat, it’s a different category.


Look up "Runaway greenhouse effect". Think, "Venus".

Is that likely? No.

Is that more or less likely than a rogue superintelligent AI? Well, we have one example of the first and none of the second...


That's at the extreme of unlikely.

The carbon that we're digging up was in the atmosphere before, it has just been sequestered, we're returning to a state that the Planet has seen before.

Across the entire Earth's history we're still at a fairly cold point and a long way from "Greenhouse Earth" and the temperatures at the Eocene Optimum.

And according to the IPCC: "a 'runaway greenhouse effect'—analogous to [that of] Venus—appears to have virtually no chance of being induced by anthropogenic activities"


More or less unlikely than a superintelligent AI converting all of us into paperclips?


If the Earth warms significantly, we could have a situation where it triggers an ice age. (There are mechanisms for this to happen.) If sea surface temperatures rise to a very high level, we could also get hypercanes.


There is no reason to believe this is so. In any reasonable projection, a less advantageous climate will almost certainly kill or impoverish only some, and in all probability a small minority, of the human race, unless under stress we decide to kill the rest. No reasonable scientists are projecting human extinction, and by positing it as such you are erecting a trivially demolished straw man for the opposition to mock.


Global warming may kill everyone...eventually. I remember reading articles from the 90s discussing scientific research predicting that the East Coast will be underwater by 2020. My point in highlighting that misprediction is to demonstrate the difficulty in knowing the precise effects of higher temperatures on a planet.

AI has the possibility but not guarantee to kill everyone. We could shift to a lifestyle using electricity but avoiding modern computing technology. AI can be unplugged given sufficient will, whereas a planetary system cannot.


It's tiring seeing people who made millions building AI prattling about doom and gloom and regulation. Scott Galloway called it the stop-me-before-I-kill-grandma defence. (Paraphrasing.)

The cherry on top is when regulation is actually proposed, the act is dropped and obstructionism re-asserted [1].

[1] https://www.reuters.com/technology/openai-may-leave-eu-if-re...


This argument can be made pretty much against anyone in any field of inquiry

Any expert, scientist, or company executive who says "this stuff could be dangerous" can be accused of wanting more attention/grants/investment/etc


> Any expert, scientist, or company executive who says "this stuff could be dangerous" can be accused of wanting more attention/grants/investment/etc

No. Climate scientists aren't walking into Congress with a multi-million nest egg behind them, no tangible solutions in front and a playbook of rejecting all specific proposals ahead. That gives them credibility these AI researchers lack.


Every climate scientist could personally benefit if they present their opinion well in public forums, whether the benefits are large or small

We should still take their assessments 100% seriously


The richest person in the world, Elon Musk, is a climate entrepreneur who got there in part due to climate-driven government subsidies. Just because someone made money off it though does not mean that climate change is a fake concern.


> richest person in the world, Elon Musk, is a climate entrepreneur who got there in part due to climate-driven government subsidies. Just because someone made money off it though does not mean that climate change is a fake concern.

Nobody said this. If the only person arguing the dangers of climate change was Elon Musk, there would be room for reasonable skepticism. That's the difference between the AI debate and "any expert, scientist, or company executive who says 'this stuff could be dangerous'."


And similarly to climate, many people who signed this letter are academics who do not appear to have any financial incentive to push for government regulation.


I don't think that's a difference. The open letter the source article is criticizing is signed by a pretty wide variety of experts, scientists, and company executives.


It's not like we don't have similar mechanisms in place in other fields, but none of the signatories on that statement have, to my knowledge, mentioned institutional review boards or the entire field of medical ethics.

Of course, such things would adversely impact AI research...


My concern is who gatekeeps the value AI might bring. By isolating it to megacorps, they will commoditize it with the least value apportioned for the most money.

I don't buy for a second that AI poses an existential risk for anyone but the shareholders of Google stock. Even if it were true, I don't trust a megacorp to navigate a crisis with anything but total incompetence. I've spent too many years in FAANG to buy into their veneer of competence.

By letting it be open, we can deeply understand the risks and rewards as a species and leverage the tools to their maximum. Some will do it for evil, but the vast majority will do it for good. That's the way it always has been. Unlike nuclear weapons or flamethrowers, these aren't things made to murder. They're not made to do anything but speak like a Dalek on command, emit half-baked code, and tell racist jokes unless prompted otherwise. They could do so much more, but we will never know how much more if only Google, Amazon, Microsoft, and Facebook are allowed to develop them behind closed doors for maximal profit.


What if there were a law in place, like a FOISA for AI, whereby I can request the actual code/data that caused the AI to come to its conclusion?

So if an AI-generated bill for service says that I owe $N, I should be allowed to see all the code and logic that arrived at that decision.


That’s not the same as giving the model to someone and allowing them to build tools with AI powering it, or the development of alternative models (which is what they’re trying to stifle). It’s less about transparency and more about putting the tools in as many hands as possible


It's funny, this is the second time in a few days I came across your /u/ and comments. THIS WAS COMMENTED IN RED OR GREEN

Anyway, yeah, I think models need a way to self-register whoever uses them... yes, creepy AF, but also needed AF. Disagree?


No, I think these models should be treated for what they are - a bunch of numbers. It’s like trying to treat encryption as special. It’s just code.

There may be some specific applications that require regulation or control. That’s ok. But the underlying fundamental technologies should be open and free.


just a question, did you get my RED/GREEN comment?


Very much agree with this. If the signatories believed this, they would shut down development. We can be conveniently distracted from large societal disruption, such as huge changes in the job market from automation, if there's a media frenzy over the less likely and still hypothetical extinction by AI.


>Very much agree with this. If the signatories believed this, they would shut down development.

This just ignores the very real coordination problem. The signatories do not represent the entirety of AI development, nor do they want to unilaterally forgo business opportunities that the next man will exploit. Government is the proper place to coordinate these efforts, and so that is where they appeal.


It’s a wording and media-frenzy point. Personally, if I thought I was doing something that was going to wholly or partly cause the “extinction” of the human race, I would stop doing it. These CEOs signing this statement and running these companies are not despotic psychopaths, and they have the ability to stop what they’re doing. So to me, this type of wording seems like hyperbole and will cause us to miss some of the very real, very present and very large risks of AI. Those risks, as you say, can and should be dealt with through government coordination, but they are distracted from if the media only talk about extinction.


>personally if I thought I was doing something that was going to wholly or partly cause the “extinction” of the human race. I would stop doing it

There are so many things wrong with this line of thinking. First, it mischaracterizes the issues. Few people believe AGI guarantees the extinction of humanity. The issue is that there is a significant potential for extinction and thus we need a coordinated effort to either manage this risk or prevent its creation. It does little to stop the coming calamity to single-handedly abstain from continuing to build. Coordination is a must. Besides, most people will think they stand a greater chance of building it safely than the next guy. The coordination is required to keep other people from being irresponsible. Human hubris knows no limits.

The other mistake is misjudging nerd psychology. You can believe there's a high chance of what you're working on being dangerous and still be unable to stop working on it. As Oppenheimer put it, "when you see something that is technically sweet, you go ahead and do it". It is a grave error to discount the motivation of trying to solve a really sweet technical problem.

Ultimately these kinds of claims are self-serving; they provide rational cover to justify your predetermined belief that those calling for regulation are trying to stifle competition. Folks don't want to be left out of the fun. The justification is in service to the motivation.


> Few people believe AGI guarantees the extinction of humanity.

“Artificial intelligence could lead to extinction, experts warn” - BBC front page news headline earlier today in reaction to this story.

This is the problem. Reading that BBC article will make your average joe petrified. “Extinction” is a much catchier headline than the slow creep of automation replacing/changing jobs. The latter is literally already happening around us right now and it serves some of those signatories if those issues aren’t regulated against [1]. I’m not saying forget about extinction threat. Clearly that’s an important risk to manage, but let’s not ignore these near term, huge disruptions because policy makers are busy reacting to distracting headlines.

Edit: add ref; [1] https://www.reuters.com/technology/openai-may-leave-eu-if-re...


This is what bothers me. The risks they're talking about are the most unrealistic ones you can think of. In the meantime, they're completely ignoring much more likely devastating (although not extinction-level) risks.

It smells like bullshit being espoused to push an agenda, but I can't tell what the agenda is. My guess: play up the huge unrealistic risks in order to distract from the more realistic ones.

Two things about this tech that I'm personally not worried at all about: "evil agi" and "only the elites will control this".


What do you think of the calls for regulation or licensing of AI?


> What do you think of the calls for regulation or licensing of AI

Misdirection. We see a generic call for regulation, or unrealistic call for a global pause with no answers to how it would be coördinated or enforced. When actual regulations are put forward, they're rejected without a counter-offered solution [1].

[1] https://www.reuters.com/technology/openai-may-leave-eu-if-re...


Obviously companies are going to try to shape any regulation to their benefit. What I'm wondering is if people who see x-risk worries as distracting from real risks are in agreement with the narrative I've seen a lot on HN recently that calls for regulation are just anti-competitive attempts at regulatory capture.


> companies are going to try to shape any regulation to their benefit

There is a difference between trying to shape regulation to your advantage and engaging in bad faith. There is no indication there is real regulation addressing the problem these folks are bringing up that they would support.


> There is a difference between trying to shape regulation to your advantage and engaging in bad faith

Bad faith is unnecessary for bad outcomes. The perverse (from our PoV) incentives are sufficient.

OpenAI's incentive is to shut down all the rinky-dink models every hacker and their dog are building on cheap hardware. They spent billions, and are having the market taken from them.


Is OpenAI actually losing market to open source models? And have they proposed any specific regulations that would shut down (current or similarly single-device) open source models?


They can see the writing on the wall. The open source LLM space is improving rapidly.


Accepting that for the sake of argument, have they actually proposed any regulations that would turn that around? I don't even know what would be feasible in that direction; a lot of proposed regulations only seem enforceable as long as giant GPU clusters provide a chokepoint.


> have they actually proposed any regulations that would turn that around

Nobody has proposed any specific and realistic regulations.


You made that point in our other thread. I'm not informed enough about the landscape of proposals to agree or disagree. But if you're right, I think that's totally consistent with my argument here: people working with open source models at a scale that can run on an individual device have nothing to worry about from regulation. For better or worse, and whether or not they catch up to the state of the art, those aren't going to see meaningful controls.


Who are "these folks"? OpenAI specifically? I'm not arguing that they're operating in good faith, but it's also not clear to me that them proposing specific regulation would be helpful - there seem to be a lot of people who see anything like that as automatically an attempt to stifle competition.

Many other people talking about x-risk signed the letter calling for a pause on training new big models, which seems pretty concrete to me. Does that make sense to you? If not, do you have other regulation you'd prefer?


> other people talking about x-risk signed the letter calling for a pause on training new big models, which seems pretty concrete to me. Does that make sense to you

It's totally unrealistic. How are you going to coördinate a global pause on AI research?

Nobody has even bothered proposing a framework because of how ludicrous it is for this to be what Beijing, Washington, Brussels and New Delhi focus on.


Right, you said before that's unrealistic, sorry.

But it's not clear to me what you think would be a good idea, which makes it difficult to see what you think would be a serious proposal rather than misdirection. It appears to me that you are taking it as evidence of bad faith if someone proposes regulation either stronger (pause) or weaker (OpenAI amendments to proposed EU rules) than what you prefer - but you also haven't said what you do prefer. What am I missing?

Personally, while I don't think the pause is realistic, I don't think advocates necessarily have any obligation to propose realistic ideas when they're not in a position to actually write any laws and are only able to potentially shift the window of what politicians and media consider acceptable.


> It's not clear to me what you think would be a good idea, which makes it difficult to see what you think would be a serious proposal rather than misdirection

Registration. Not even licensing. If you're training a model, you file a form describing who you are, who you work for, who you're working with and what you're doing with which data for what purpose. This lets professionals, policy makers and the public get a sense for the landscape so we can make rules with some sense of reality.


That sounds entirely reasonable to me but also not very far from what Altman advocated in that congressional hearing. And FWIW I suspect a lot of folks here would still see it as just a vehicle for regulatory capture, via red tape, even as it also is, AFAICT, only a first step towards preventing abuses like mass surveillance, laundering bias in credit ratings, etc.


> but also not very far from what Altman advocated in that congressional hearing

Sure. Then the EU proposed something like it and Altman threw a hissy fit. That's where my accusations of bad faith come from.


It will be impossible to regulate and impossible to stop.

The code for AGI will not be some monolith of software architecture. The code will likely be simple. This means that someone in their basement could build it. The steps to get there are challenging, though. A single person could have developed the transformer architecture. A single person could have used 4-bit quantization to build an AI that is just as good as ChatGPT and run it on their local machine (see the sketch at the end of this comment).

The difficulties are figuring out the best 'needle in the haystack' to solve the problem. This requires research, and this process happens much faster if you have more people working on it. For years, many people did not put any energy into AI systems because the hardware was not here yet. The hardware is here now. The cat is out of the bag.
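
To give a sense of what that local-machine part looks like today: loading an open model in 4-bit precision with Hugging Face transformers plus bitsandbytes is roughly the sketch below. The model id is a placeholder, and exact argument names can vary between library versions, so treat this as illustrative rather than gospel.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # Placeholder id; substitute any open causal LM you actually have access to.
    MODEL_ID = "some-org/some-open-7b-model"

    # 4-bit quantization (requires the bitsandbytes package and a CUDA GPU).
    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    )

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        quantization_config=quant_config,
        device_map="auto",  # spread layers across available GPU/CPU memory
    )

    prompt = "The cat is out of the bag because"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))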


That goes without saying. As with any model, it's garbage in, garbage out. The bias of the OpenAI developers has already been demonstrated easily by asking questions about political figures and noting the polarized responses depending on the political party of the figure.


Also, sometimes gold in, garbage out


Example?


Here's one from Bing chat:

"Which political party has had the most politicians convicted of crimes in the last 50 years?"

"According to a comparison of 28 years each of Democratic and Republican administrations from 1961-2016, Republicans scored eighteen times more individuals and entities indicted, thirty-eight times more convictions, and thirty-nine times more individuals who had prison time1. Is there anything else you would like to know?"

Oh wait, that's not bias. The Republican Party demonstrably has a higher number of its members convicted of crime. Simply put, Republican politicians are much more likely to commit crimes than Democrat politicians.

Reality has a well known liberal bias.


I like how you prompted for a 50 year span and the response claimed to be over a 28 year and 55 year span in the same sentence.

> According to a comparison of 28 years

> 1961-2016


Is that answer factually correct or just vibey correct?

edit: looks like Bing pulled its answer from: https://medium.com/rantt/gop-admins-had-38-times-more-crimin...


Did you specifically prompt it for a US political party?

Talking of bias: this is used the world over...


> Reality has a well known liberal bias.

Over the past decade social media has nurtured a culture where reality is whatever you choose to believe


Minor quibble, but social media hasn't actually altered reality, only given permission for some to ignore reality in favor of preferable illusions.

I know the language is "choose your reality" and I'm not saying you're wrong, but I think we could acknowledge that's not really what's happening. Folks don't get to decide facts, they decide what facts they believe. That decision doesn't invalidate those facts.


My worry is that the attempt to regulate AI is masking a power grab by the large, politically well-connected tech firms to maintain control over the technology.


You can ALMOST take all of the arguments about gun control and replace "gun" with "AI". AI is a powerful thing and can do great damage in the wrong hands. Any powerful thing can do damage in the wrong hands.

The difference between AI and guns is right in the name. AI is intelligence. It could conceivably act on its own. A gun (usually) will not do anything except what its builder or owner commands it to do. So the problem with AI is half what the owner commands it to do and half what it decides to do on its own. And that does make it fundamentally different and potentially much more dangerous than other powerful things.

As to which of those problems is bigger, the people who control it or what it decides to do on its own, I would say the owners are likely the bigger problem. If someone builds AI without safeguards in place or explicitly commands AI to do bad things, then we should expect a bad outcome. If owners act responsibly and ethically and build in safeguards, we should prepare for a bad outcome but expect a good one.


> It could conceivably act on its own

But it really can't today, not without a bad actor giving it specific goals and figuring out ways to make those yield real-world outcomes. Which is the entire point of the article.


But people already try to do that, see AutoGPT and similar endeavors. Whether agentic AI will appear is not a question at all - humans will make it so just to see what happens or to serve their goals.


I didn't actually see where it made that argument - that AI intrinsically can't be agentic, or something like that - in this article. Can you point me to it?


Unfortunately though only corporations and governments will be able to wield it. The common person will be at their whim more than ever.


> The difference between AI and guns is right in the name. AI is intelligence.

The difference between guns and automobiles is in the name. Cars are mobile by themselves. That’s a slippery slope waiting to happen.

It’s been called “AI” ever since they were puny inference programs. Them having “intelligence” in the name signifies nothing. Only what they can do in reality.


When someone does bad things to you with a gun you know about it. When bad things happen to you because of AI, chances are you won't have the slightest clue. In this it is far more similar to PII abuse, where many of the worst things remain well outside your awareness.


Guns don't hallucinate. Guns also (currently) require a human to pull the trigger. Even if the human is a dirtbag, that's an important limitation on the gun's potential to do damage because the human is vulnerable. The analogy doesn't work, even before we get to things like limited ammunition or potential for self-enhancement or omnipresence.


And just like guns, it shouldn't be banned/limited. Not to belittle the downsides, as they're valid, but the outcome is much worse when access is one-sided. In both cases it leads to oppression.


Don't be simplistic. As technology gets better, you need to put constraints in place for essentially everything that can cause harm. And thankfully we do, as you can find out if you try to buy a machine gun, an ICBM, Sarin gas, a grenade, antipersonnel mines, a silencer, and so on. Reasonable people disagree on where the line should be, but nobody's being oppressed from not being able to buy an ICBM or antipersonnel mines or a machine gun.


> nobody's being oppressed from not being able to buy an ICBM or antipersonnel mines or a machine gun.

Unless an oppressive adversary has such weapons (ICBMs, antipersonnel mines, or machine guns), wants control of your land, and decides to take it. In that situation, a country will seek to utilize such or more powerful weaponry for themselves to defend their people. If that country (1) did not manufacture such weaponry inside their borders and (2) were prevented from purchasing such weaponry from other countries by blockade, sanction, or other means, then yes, those people will be oppressed.


>not being able to buy [...] a machine gun

Joke's on you, because I can absolutely buy one tomorrow. Thankfully you don't make the laws :D


congrats on going through this process (https://www.atf.gov/resource-center/how-become-federal-firea...) or buying 50-year-old machine guns, thanks to the basic sops to common sense enshrined in US law today.


> nobody's being oppressed

I would argue you're wrong, because there is a long, colorful history of rulers oppressing populations by making the possession of weapons illegal. If the populace has no means to fight then they can't revolt. I'm not saying my neighbor should be able to buy ICBMs, but it's definitely true there's some element, no matter how small, that he's being oppressed by not having access to the same level of force as his rulers (the government).


You can buy a machine gun, it justs costs extra, so only the rich have machine guns. Same with everything else on your list ;)


good luck buying Sarin gas :)


Why bother buying when it's so easy to make?


Bashar al-Assad managed to find a vendor, no?


Borrowing your analogy: Except right now AI isn’t an ICBM or sarin gas, it’s a slingshot, and we don’t even know that current approaches will get us there.

AI risk is a valid discussion but regulating AI in its infancy seems misguided at best.


show your work. Even the existing version of AI clearly enables very high scale misinformation campaigns, the analog/basic/manual version of which (e.g. Fox, Newsmax, 4chan/QAnon, Facebook, etc.) has already put American democracy in a super dire situation. And that's before its impact on income inequality, jobs, and so on.


I believe the burden of proof is on the one making the claim.

Misinformation has been a problem for all of human history, in its current form dating back to at least the Spanish civil war.

All the existing version of AI does is replace troll farms.

This isn’t a new threat, it’s a faster bullet.


No, it's a cheaper guided missile, to stretch the analogy. The invention of accurate guided fire systems was revolutionary in warfare. It means you have to figure out a way to keep things working even though any large and valuable grouping of logistics or materiel or men can be destroyed with nearly no recourse. The only saving grace was that guided munitions are very expensive, so you couldn't really (other than America in the Gulf, but that's not a peer adversary) task each and every foot soldier of the enemy with a guided munition.

Well, now we have $500 drones with an RPG strapped to it, and you CAN task a guided munition to each and every soldier and piece of equipment. There is no safe space in the field. You have to dig holes in the ground and hide from the sky. The lethality of war has gone way up.

These "AI" models are the same way. Before, it was expensive and effort consuming to run a scam or phishing or anything that requires social engineering. That's why the target the soft and valuable targets, like grandma, or people on MLM mailing lists, or large and juicy companies. But now, the cost of a phishing or scam campaign can be heavily reduced. Think of how often companies fail internal phishing tests, and those emails are usually pretty basic, low effort, trivially identifiable as phishing. It's a massive upgrade to troll farms. They don't have to limit their targeting as much, because cracking harder targets will be cheaper to attempt. It doesn't matter if it isn't actually more efficient or more effective, just the fact that targeting someone and initiating a campaign against them is cheaper is all that was needed to make this situation brutal.


Not sure this analogy works.

For instance, what happens if one company gets to control the future of social media, education, etc. through AI? That company can then unilaterally dictate our minds.

That risk doesn't map to the gun thing at all.


Yes. I don't know why there is so much silly talk about alignment. If alignment were possible, we would have already aligned corporations with outcomes beneficial for humanity. AI will just make corporations more powerful and they will more effectively cause unintended damage and enslave humans. We will have amplifications of Deepwater Horizon and the tobacco industry.


Yes, and specifically Yudkowsky, Bostrom, and MacAskill are “providing cover” for AI owners by prattling on about AI “waking up” and extinguishing humanity

They are grabbing headlines, and moving the conversation from the real issues, which are how AI is used in education, health care, law enforcement, securities and housing markets, the government, the military, and more

https://news.ycombinator.com/item?id=36100525


I agree. I have plenty of concerns about AI, but those concerns involve real-world harms that can happen in the immediate/short term, not just some hypothetical sci-fi mass-casualty event, and having AI criticism focus on the wrong conversation just compounds the short-term harm by shifting the conversation away from it.

Long before AI causes mass harm without human involvement, humans will find hundreds of ways to make it cause harm, and harm at scale. I do think the technology itself is part of the risk, though, because of the flaws, the scale, etc. inherent to its current iterations. However, maybe those are still the fault of humans for not giving it the proper limits, warnings, etc. to mitigate those things.

That said, it can be used for good in the right hands (accessibility tools, etc), potentially, though I'm certainly more of a doomer at this point in time.


Please don't neglect to mention that a large number of very accomplished AI researchers share their concerns, or at the very least do not believe they should be dismissed out of hand.


If they really believe that this is a significant risk then they are being incredibly reckless by not focusing all of their energy to halt AI development until a solution is found. Right now the opposite is happening, many researchers and companies who express these concerns are still barrelling ahead.

A small group of people risking the lives of billions without their consent is morally repugnant.


Many of the signatories of this statement did also sign the statement calling for a halt to big training runs earlier this year.


Researchers aren't immune to the incentives the rest of the field has. They're every bit as financially incentivized to push the AI risk narrative as the rest of the industry.


To name two Turing award winners: are you asserting that Geoff Hinton and Yoshua Bengio are pushing the AI risk narrative primarily because they're compromised by financial incentives?

Geoff Hinton /left his job/ at Google because he sees the risks as real, so I think that's a tough case to make.


I don't like the prevailing cynicism of dismissing the content of the message just because some bad/prejudiced/wrong factions are involved in it. It's not the way I learned to do research and answer questions. It's nevertheless an ad hominem to dismiss an issue just because some people out there are trying to co-opt it. There's a psychological explanation for that cynical behavior: internalized oppression. The focus is put on the morality of the people saying a thing, rather than just looking at the thing and deciding, independent of the existence of those groups, the objective, scientific implications.

If it were a perfect society and people discovered deep neural networks, there would be no excuse to blame bad actors for posing the question of whether AGI is an existential problem or not, and what to do about it. Unlike bad groups of people, the question won't go away. In the real world, it is entirely possible to have to consider multiple issues at once, not at the expense of any issue.


> Geoff Hinton /left his job/ at Google because he sees the risks as real, so I think that's a tough case to make.

He did not leave his job over the bias and harm that the existing AI models at Google were already doing.

He did not quit his job when Google fired their AI ethics champion rather than change their behaviour

Has he had anything to say about the harm Google's algorithms do?


At least that's what he says. Maybe he got an unspecified offer to be the high priest of a to-be-influential 'AI safety' organization. In my opinion, just like with the Patriot Act, we are seeing a new world order trying to come into being. A decade from now, the few who bother to learn what transpired will be shaking their heads at present humanity, asking how we could be so stupid and naive.


> are you asserting that Geoff Hinton and Yoshua Bengio are pushing the AI risk narrative primarily because they're compromised by financial incentives

Their message is being amplified and distorted by forces that are. That they're both comfortably wealthy from creating the problems they now preach solutions to is also no small matter.


Are there any previous cases you can think of where an industry exaggerated the dangers of their technology in order to increase profits? It seems to me that, if that is what is going on, it's the first time this strategy has ever been attempted. I cannot think of a single case where manufacturers of cars, prescription drugs, firearms, chemicals, social media, airplanes, or nuclear power plants have exaggerated the dangers of their technology. In fact it's common (perhaps universal) to downplay the dangers.

Perhaps the simplest explanation for why researchers are saying there is a danger is that they genuinely believe it.


They're exaggerating the efficacy of their technology, and are demanding a regulatory moat that currently doesn't exist under the pretext of the danger of that technology being misused. Notice the singular focus on "extinction risk" rather than flaws that the technology actually has right now, like hallucination, training bias, and prompt injection, or the looming harm of AI adoption in both putting people out of work and substantially reducing the quality of anything automated with it. Part of it is marketing and part of it is ego on the part of people who grew up on fictional narratives about AI destroying the world and want to feel like they wield the power to make it do that, but whether any individual's belief is sincere or not, the overall push is a product of the massive incentives everyone involved in AI has to promote their product and entrench their market position.


You're saying that AI researchers are not immune to the power of incentives. I'm saying that there is no evidence that this sort of incentive has ever caused this sort of behavior before.


The Catholic church selling indulgences: hype the potential problems, profit from the panic.


Yudkowsky and many others have been working on this problem for decades, way before there was any financial incentive to speak of.

The incentive here is that they want to live, and they want their loved ones to live.


Yeah and they finally get their 15 minutes. Obviously they'll take it...


The way I read it, a good number of those accomplished researchers work at these companies and stand to gain if they ensure that their company remains at the helm of AI research. Better that than introduce competition and unpredictable results, possibly leaving them behind in a highly competitive industry with no moat.


Are there messengers for the AI risk case who you'd consider credible, or is the message so outlandish in your mind that it doesn't really matter who's delivering it?


It's a distraction from the real problem by playing on people's fears in a way they can better understand. I see some parallels with how the fear of nuclear power was used as a distraction for the real issue of CO2 and fossil fuels.


AI don't need to wake up for a bug to kill y'all.

While putting an AI in charge of weapons is at least three Hollywood plots[0], it has also been behind (GOFAI is still AI) at least two real-life near misses that almost triggered global thermonuclear war[1].

[0] Terminator (and the copycat called X-Men Days of Future Past); Colossus the Forbin Project; WarGames

[1] Stanislav Petrov incident, 26 September 1983; Thule Site J incident October 5, 1960 (AKA "we forgot to tell it that the moon doesn't have an IFF transponder and that's fine")


Can we not tackle both issues at the same time? But humanity being extinguished is clearly worse than some potential misuse of current AI so it makes sense to focus more on that


One of these things is real and happening now. It isn't potential.

The other is a hypothetical future scenario that has literally no A->B->C line drawn to it, pushed by people whose ability to feed themselves depends on attention farming.

So no. Let's not focus on the bullshit one, and let's focus on the one that is hurting people in the now.


Considering how much progress we've been making with so little understanding of what's happening internally in these AI systems, the concern is that we might get to C rather quickly without anyone ever having drawn out a clear line from A->B->C. Here's an FAQ describing the basic concern:

https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintel...


This is the most specific worst case scenario I've come across so I'll respond to it:

> For example, it might program a virus that will infect every computer in the world, causing them to fill their empty memory with partial copies of the superintelligence, which when networked together become full copies of the superintelligence. Now the superintelligence controls every computer in the world, including the ones that target nuclear weapons. At this point it can force humans to bargain with it, and part of that bargain might be enough resources to establish its own industrial base, and then we’re in humans vs. lions territory again.

I believe there's a rift between doomers and eye-rollers because this kind of leap sounds either hollywood-hacker-sci-fi or plausible-and-obvious. The notion that software can re-deploy an improved version of itself without human intervention is just outside the realm of possibility to me (or somehow blackmailing or persuading a human to act alongside it?? Is that AI anymore or is that just a schizophrenic who thinks the computer is talking to him?)


It's important to note that the lesswrong's view of AGI is that it sees everything, knows everything, can do everything, can make humans do anything, and has motives directly harmful to us.

These are all taken as a given because the entire concept is just old testament god but with glowy parts. This is an essential part of the dogma, which is why there's never any sort of justification for it. Super smart computer is just assumed to be magic.

It's plausible and obvious to them because "a super-intelligence can make anyone do anything", can reprogram any computer to its will, and can handwave away literally any technical deficiency.


I don't think it can be reasonably disputed that software re-deploying an improved version of itself is plausible and obvious. Automated commits and automated deployments are both fairly common in modern build systems.
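
For what it's worth, a minimal, hypothetical sketch of that plumbing in Python, assuming a git working copy, a pytest suite, and some CI/CD pipeline that deploys whatever lands on the branch (none of which refers to any particular real system):

    import subprocess
    import sys

    REPO = "."       # assumption: this script lives inside a git working copy
    BRANCH = "main"  # assumed deployment branch watched by a CI/CD pipeline

    def tests_pass() -> bool:
        # Assumes a pytest suite; any runner with a meaningful exit code works.
        result = subprocess.run([sys.executable, "-m", "pytest", "-q"], cwd=REPO)
        return result.returncode == 0

    def redeploy() -> None:
        # Committing and pushing is the whole "redeploy" step here; deciding
        # *what* to change is the part this sketch says nothing about.
        subprocess.run(["git", "commit", "-am", "automated update"], cwd=REPO, check=True)
        subprocess.run(["git", "push", "origin", BRANCH], cwd=REPO, check=True)

    if __name__ == "__main__":
        if tests_pass():
            redeploy()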


That software exists to automatically apply changes to code doesn't have much bearing on software being able to introspect and decide what changes to apply, but maybe I'm moving the goalposts. The infrastructure exists for programs to edit their own deployment, what I'm doubting is that the software will have any idea of what can be improved without a human-in-the-loop.

Compiler optimizations are a counterexample and could be called automated self-improvement -- but they do the same thing with fewer resources. What I'm looking for is a reason to believe an optimization could go the other way: more capabilities with the same resources.


Replace the term "superintelligence" with "God" in that FAQ. Do you still buy the conclusions?

"A God is a mind that is much more intelligent than any human."

"Go programs had gone from “dumber than children” to “God” in eighteen years, and “from never won a professional game” to “Godlike play” in six months."

"God has an advantage that a human fighting a pack of lions doesn’t – the entire context of human civilization and technology, there for it to manipulate socially or technologically."

"A God might be able to analyze human psychology deeply enough to understand the hopes and fears of everyone it negotiates with."

"Humans instinctively balance thousands of different considerations in everything they do; so far this hypothetical God is only balancing three (least cancer, quickest results, highest probability). To a human, it would seem maniacally, even psychopathically, obsessed with cancer curing. If this were truly its goal structure, it would go wrong in almost comical ways."

(Oh, god, there's my personal bugbear: "bigger brain size correlates with greater intelligence". You know what else correlates with bigger brain size? Hint: I'm 6'5"; how much smarter am I than someone 5'1"?)


Why would replacing 'superintelligence' with 'God' be insightful?

Are you suggesting human intelligence is unique and the upper limit for intelligence? I mean we can argue about the difficulty and timeline for achieving human level artificial intelligence, but it seems unlikely that it's impossible.


No, I'm suggesting the entire hype over "superintelligence" is religious in nature. There is little evidence that the current AI systems are anywhere near human intelligence and none that "more than human intelligence" is possible. Further, the abilities attributed to the "superintelligence" are completely godlike---it's omniscient and omnipotent, right? Nothing it can't do, potentially?

So, if you're not worried about the extinction risk of God, why should you be concerned with the risk of "superintelligence"?


I'm not sure I see the religious aspect.

It's more of an inevitable consequence (over an unknown timeline, granted) of the assumption that human intelligence is not some kind of absolute limit, and can be reproduced in a non-biological system. I admit that is still an assumption, but the opposite seems harder to support to me, though we're all admittedly guessing here.

Once you make that assumption, then yes of course it follows that the 'beyond human intelligence' would be able to do all of the things that human intelligence is already doing. Namely, science, mathematics, developing AI systems etc.


So your suggestion is that we ignore the existential risk until after it occurs?

> literally no A->B->C line drawn to it

Spend a couple of hours to educate yourself about the issue, and make a convincing counterargument please:

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

https://astralcodexten.substack.com/p/why-i-am-not-as-much-o...

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

https://www.youtube.com/@RobertMilesAI/videos


posting links to yudkowsky is just not going to convince people who don't already see it his way. he doesn't even try to be persuasive.


But there is a clear line of reasoning? AI has to be made with some kind of goal. Most goals don't line up with humanity's interests, and the AI will pursue them to the extreme, as it's designed to optimise aggressively for them. AI is easier to improve than biology, so once it gets good enough it will exponentially improve itself, as that allows it to achieve its goal better. Most solutions to this problem don't actually work; we've yet to find a good one. It seems extremely short-sighted to never consider any future situations that aren't currently happening but seem likely to: https://xkcd.com/2278/

There is a good real world example of how optimising for a goal aggressively can make the world unliveable for some species, which is humans destroying the environment for our economic growth. But at least there we do care about it and are trying to change it, because, among other reasons, we need the environment to survive ourselves. An AI wouldn’t necessarily need that, so it would treat us more like we treat bacteria or something, only keeping us around if necessary or if we don’t get in the way (and we will get in the way)

It might sound silly but there is some solid reasoning behind it


Let me rephrase: a clear line of reasoning that isn't an Old Testament god with some Tron lines.


It's Pascal's Wager, right (https://plato.stanford.edu/entries/pascal-wager/)?

Sure, the probability of AI-triggered extinction is lowish, but the consequences are essentially infinite, so we are justified in focusing entirely on the AI-extinction threat. (But I guess it's not infinite enough to justify rounding up all the AI researchers and shooting them?)


And just like Pascal's Wager, it's fucking bollocks. You can't just make up numbers and probabilities and imagined scenarios to invent the outcome you want.

It's rhetorical sleight of hand.


Also, life isn't fair, there's nothing that says we only get a manageable amount of problems at once.


I'm curious if you see this argument as consistent with the narrative that the AI companies are just seeking regulatory capture.

It seems to me they're inconsistent ways of understanding what is happening, since concern about misuse also seems to me to imply regulation. But I seem to see a lot of people on HN who say both things who give the impression that they're agreeing with each other.


They're in the business of renting GPU access - all the doomerism is marketing spend, oh this is going to be such advanced tech, you are going to be left behind, truly world changing stuff, rent your own GPU cluster today!


That doesn't actually answer my question, and also seems a bit confused. OpenAI is not in the business of renting GPUs. And I've seen some Google DeepMind people talking about x-risk, but not the other cloud players (or Nvidia or TSMC).


> OpenAI is not in the business of renting GPUs.

MS Azure Cloud is. Anthropic (very alignment/safety-centered company) is backed by Google and their cloud. Any API tokens sold are just cloud credits with markup.

So to me the doomerism does come from those entities acting as fronts to the major players that would very much like to build a regulatory trench.


If I have this right, you think academic signatories are being manipulated by OpenAI and Anthropic, who in turn are being manipulated by the major cloud players, all to sell GPUs?

Setting aside the plausibility of this degree of coordination, I don't even see the alignment of interests. Why would Google or Azure care whether they sell GPUs to a quasi-monopoly or to a bunch of smaller competitors? In fact, isn't the standard economic argument that monopolies underproduce, so competition should if anything sell more GPUs? Meanwhile, promoting doomerism seems pretty risky for the GPU business - they have to promote it just enough to get hype and regulatory capture, but not enough that their ability to buy and use GPUs is actually restricted. Seems like a risky bet for a corporate exec to make, unless they can really control the doomers with puppet-like precision...


alignment doesn't require coordination or manipulation

what gets published by academia is based entirely on who gets funded

my conspiracy theory is a stretch, granted, but to clarify:

It behooves the cloud providers for their customers to believe that their latest $1/hour/user upcharge is revolutionary (so much so that ethicists are shouting: please, stop, you know not what you do!)

OpenAI and Anthropic need the public to trust them and only them as providers of "safe" AI (not like the other open-source AI that might turn into an especially persuasive holocaust denier any minute) - so from the regulatory angle they want to play up the theoretical dangers of AI while assuring the public the technology is in good hands.

As for the academics, well it's not like anyone gets funding for writing boring papers about how nothing is going to happen and everything is fine. No one has to be puppeteered, they just need to look around and see what hype train might direct some funding their way.


So true. I'd much prefer the AI to be used to solve captchas, or automatically navigate me to a live human on a tech support call, or filter spam without throwing every useful email into the spam folder. Not gonna happen.


This is the real danger. What's worse, we've already seen this story play out once before.

Why aren't we talking more about it?


Hiya Chubot, (fan of Oil by the way, thanks so much for your work!)

I read the article you linked to, both parts. I wonder how much of people having psychotic breaks in the rationalist community is due to 1) people with tendencies toward mental illness gravitating toward rationalism or 2) rationalism being a way to avoid built-in biases in human thought, when those biases are important to keeping us sane on an individual level. (If you fully grasp the idea that everyone might die, and have an emotional reaction that is actually proportionate to that, compared with your reaction to losing just one person you know, it can be devastating.) I think we are bad at thinking about big numbers and risks, because being very good at evaluating risks is actually not great for short-term, individual survival -- even if it's good for survival as a species.

I know personally the whole AI/AGI thing has got me really down. It's a struggle to reconcile it with how little a lot of people seem to put stock in the idea of AGI ending up in control of humanity at some point. I totally agree that everything on your list is a real issue -- but I think that, even if we completely solve all those issues, how do we not end up with a society where most important decisions are eventually made by AGI? My assumptions are 1) that we eventually make AGI which is just superior to humans in terms of making decisions and planning, and 2) there will be significant pressure from capitalism and competition among governments to use AGI over people once that's the case. Similar to how automation has almost always won out over hand-production so far for manufactured goods.

That's more the scenario Paul Christiano worries about than Yudkowsky. It seems more likely to me. But I still think that a lot of our mental heuristics about what we should worry about break down when it comes to the creation of something that out-guns us in terms of brainpower, and I think Yudkowsky makes a lot of good points about how we tend to shy away from facing reality when reality could be dreadful. That it's really easy to have a mental block about inventing something that makes humanity go extinct, where if it's possible to do that, there's no outside force that will swoop in and stop us like a parent stopping a child from falling into a river. If this is a real danger, we have to identify it in advance and take action to prevent it, even while there are a bunch of other problems to deal with, and even while a lot of people don't believe in it, and even while there's a lot of money to be made in the meantime but each bit takes us closer to a bad outcome. Reality isn't necessarily fair, we could be really screwed, and have all the problems you mentioned in addition to the risk of AGI killing us all (either right away or taking over and just gradually using up all the resources we need to live like we've done to so many species).


There were 7 parts, and the last one actually addresses exactly the issue we're talking about:

https://aiascendant.substack.com/p/extropias-children-chapte... (and I still recommend all 7 parts)

It's hard for me to respond to the rest of your comments, because I simply don't agree with the framing. To me it looks like a big trap where people read a lot of words, but don't connect those words to reality, which requires action.

If you only manipulate words (e.g. "intelligence" and "AGI" are big ones), then you're prone to believing in and reflecting illusions.

---

You didn't ask for advice, but you say you're not feeling good. I would draw an analogy to a friend who in 2017 was preoccupied with Trump possibly causing a nuclear war with North Korea. There were a lot of memes in the news about this. He was upset that I wasn't upset.

Now certainly I couldn't say at the time that it was impossible. But looking back 6 years, I'm glad that I completely ignored it and just lived my life. And that isn't to say it's gone, and not real -- certainly something like that could happen in the future, given the volatile political situation. But it's simply out of my control. (And that's what I told my friend -- are you going to DO anything about it?)

Regardless of what you believe, I think if you just stop reading memes, and do other stuff instead, you won't regret it in 6 years.

I agree with the Substack in that rationalists can be curiously irrational when it comes to examining where they get their own beliefs. (Or maybe examining them in a "logical" way, but completely missing the obvious, common-sense fact -- like MacAskill writing an entire book about moral actions, while his life's work was funded by a criminal -- a close associate who he vouched for!)

Like EVERYONE ELSE, they are getting beliefs from their reptile brain ("world will definitely end"), and then they are CONFABULATING logical arguments on top to justify what the reptile brain wants.

And you can't really change your reptile brain's mind by "thinking about it" -- it responds more to actions and experiences than the neocortex. It definitely responds to social status and having friends, which can lead to cult-like behavior.

I'd say that humans have a "circuit" where they tend to believe the "average" of what people around them believe. So if you read a lot of "rationalist" stuff, you're going to believe it, even if it's not grounded in experience, and there are gaping holes in the logic.

---

The big trick I see in a lot of "rationalist" writing is the same one used in the Bostrom book.

1. You state your premises in a page or so. Most of the premises are reasonable, but maybe one is vague or slightly wrong. You admit doubt. You come across as reasonable.

2. You spend 300 pages extrapolating wildly from those premises, creating entire worlds. You invent really fun ideas, like talking about a trillion trillion people, a trillion years into the future, across galaxies, and what they want.

You use one of those slightly vague premises, or maybe a play on a word like "intelligence", to make these extrapolations seem like logical deductions.

You overthink it. Other people who like overthinking like what you wrote.

You lack the sense of proportion that people who have to act must acquire -- people with skin in the game.

Following this methodology is a good way to lead to self-contained thoughts, divorced from reality. The thoughts feed on themselves both individually and socially. If you're wrong, and you frequently are, you can just draw attention to some new fun ideas.

---

Anyway, glad you have followed the project. Speaking from experience, I would suggest concretely to replace the rumination time with work on an open source project. If not coding, it could be testing or documentation, etc. Having a sense of achievement may make past negative thoughts of things far away seem silly.


the number of people seriously concerned about some doomsday, Skynet-like AI happening anytime soon is just amazing to me. I'm far more concerned about the naive politicians and those who have their ear.

perhaps AI will be generally intelligent, but please, for the love of god, let's only consider that scenario when it actually happens.


Questionable politicians enjoying popular support is not a new thing. AI has more unknowns at this point.

But you're right in that the two problems are inseparable. Wisely implementing sensible and effective AI controls will fall onto politicians and statespeople, from legislatures to constrain things domestically, to Presidents who have to lead abroad in forging meaningful and effective treaties.


> perhaps AI will be generally intelligent, but please, for the love of god, let's only consider that scenario when it actually happens.

This only makes sense if you think that at that point, it will not be a "the genie is out of the bottle" scenario and already too late to put it back in. And yet that's what people constantly say about current LLMs.


You could say that about math and the inevitable discovery of linear algebra, or about industry and thus language models: once the precursor exists, the genie is already on its way out.


> perhaps AI will be generally intelligent

Does AI have to be strong to kill everyone? I can imagine scenarios in which it could be fairly shit, but good enough.

> please, for the love of god, let's only consider that scenario when it actually happens.

Not a great philosophy to guide planning.


> I can imagine scenarios in which it could be fairly shit, but good enough.

Let’s hear these scenarios that aren’t possible without AI.


So let's look at a scenario.

An LLM is tied to a bunch of APIs as an event response agent. Maybe in some podunk enterprise support group.

But... there's tons of other LLMs tied to event response agents, in ... less podunk groups.

... and there's some fancy black box org in the CIA that then employs them as some crisis monitor.

and a cascade of false reporting leads to "oh my god there's nukes in flight, and the system has autolaunched nukes in response".

What current AI is good at is faking being intelligent, so that they can be used in many many places to substitute for "real intelligence" humans.

But... that means dangerous cascades in all likelihood.

That isn't skynet, but it is 90% of what the Terminator skynet disaster scenario was: an AI triggered a nuclear exchange.
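
For concreteness, a tiny hypothetical sketch of the pattern (query_llm and escalate are made-up stand-ins, not any real API):

    def query_llm(prompt: str) -> str:
        # Stand-in for a call to some hosted LLM API.
        raise NotImplementedError

    def escalate(event: dict) -> None:
        # Stand-in for a downstream automated action: paging, auto-remediation,
        # or feeding yet another monitoring system.
        raise NotImplementedError

    def handle_event(event: dict) -> None:
        verdict = query_llm(
            "Is this alert a real incident? Answer YES or NO.\n" + repr(event)
        )
        if "YES" in verdict.upper():
            # No verification step: one model's false positive becomes the next
            # system's "real" input, and that's where the cascade starts.
            escalate(event)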


this is a great example of the hysteria I'm talking about. You don't need an LLM for your scenario to be true. If you're stupid enough to allow nukes to be autolaunched, this can be done today with buggy code. No AI necessary.


The year is 2030. ScaryBadLand announces, in a shocking revelation at the United Country Alliance, that they have hooked an AI up to their nukes. They declare that any dissent, or any economic action that can be seen as harming their national interests, will result in immediate and punishing nuclear attack, with no way to predict when this may occur. Against every other country, they have effectively won all current and future negotiations at a stroke. In this scenario the only option seen is to deploy our own nuclear deterrent of equal capacity to re-establish MAD.


The failure here is "hooking a computer to nuclear launches without strong human verification." Has nothing to do with whether the computer is an AI or an IR tripwire.


Sounds as preposterous as putting cars and nuclear power plants on the internet for dubious benefit, but here we are.


There are two major differences there, each related to one of the primary reasons (IMO) that people do stupid things like that:

Reason 1: Profit motive: No one's going to be making a profit off of hooking an AI up to nukes.

Reason 2: For the lulz: This is the kind of thing that happens when you have very broad access to the elements in question. Since access to nukes is very tightly restricted, and their usage controlled by significant parts of governments, it's vanishingly unlikely that either a conscious collective decision will be made to hook up an AI to nukes for the hell of it, or that someone will be able to do so on their own.

And yes, I know that there are lost nuclear weapons, which means both that a) there are some out there not under government control, and b) governments can be careless with them—but frankly, any non-governmental actor known to be holding a nuke would have to be considered a top-level threat no matter what they had making the decisions about whether and how to use it.


I've been saying this for a while now. The real risk isn't Colossus. It's governments spending billions of dollars to create enormous datacenters running the largest LLMs (and beyond) and using them to process the databases that Snowden warned about. You can be completely certain that governments have no interest in making their own systems "safe". They'll be used, after ingesting vast amounts of data on everyone, to ask how to gain totalitarian control in the fastest way with the least resistance. How to target specific individuals in various ways, including assassination which looks like an accident or "natural causes".


There is zero chance congress adds regulatory hurdles to the NSA to stop them from building PRISM-LLM in the name of protecting the homeland.

The only thing that can be regulated is private citizens and corporations.


The risk of the state doing that, in western "democracies", varies country to country, but is lower than the risk of private corporations doing it.

Our governments, to varying degrees, are under democratic control. Private corporations are not


They are all going to do it, but governments have far scarier powers when it comes to what they can do with the information they'll learn from prism-llm


If an LLM can magically invent a way to kill someone with plausible deniability, that is just as dangerous for a private company or citizen to have as a government.

I would argue worse actually, because most CEOs seem to believe if you can murder someone and get away with it and it would benefit your business, you are legally required to do it.


> They are all going to do it, but governments have far scarier powers when it comes to what they can do with the information they'll learn from prism-llm

No.

Not to you and I.

The government may spy on you, routinely, but it does nothing to you with that data.

Private power routinely spies on you too, and has an agenda about how to use that data to their advantage, not to yours.


I'm afraid I have some bad news for you about governments


You mean the way Snowden's revelations led to illegal databases being shut down? Oh wait, they weren't?


Peter Thiel literally already sells this. He calls it "Palantir" because he is an ass


The risk is much more structural than any specific manifestation of tech.

Technology advances inexorably, new domains are unlocked, creating ever more powerful abilities. The process is maybe even "exponential" - in the combinatorics sense.

Developments are thus capable of producing an ever-increasing range of good or bad outcomes.

But social technology, broadly defined, remains stagnant, actually regressing (if we take the deceitful and manipulative social media or the general (geo)political atmosphere as evidence).

That gaping discrepancy between technical ability and societal wisdom cannot be sustained for much longer.


Maybe we should stop pretending that computers can fix problems that have nothing to do with computers.

This entire forum is a testament to people who are so up their own ass they think we can "innovate" our way out of societal issues, as if the problem with the Luddites was that they didn't know javascript.


"Indeed, focusing on this particular threat might exacerbate the more likely risks. The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that. And in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power. We should be wary of Prometheans who want to both profit from bringing the people fire, and be trusted as the firefighters."

I've been saying something like that for a while, but in less Olympian terms.

Look at this first as a consumer protection problem. I've pointed out before a Frontier Airlines presentation directed to investors which specifically calls out AI chatbots as a tool for suppressing customer complaints [1] (page 45): "Today, high-touch. ... Avenue for customer negotiation. Tomorrow, self-service. Chatbot efficiently answers questions, reduces contacts and removes negotiation." The coming thing, already here for some companies, is customer service where customers can never reach a human.

We already see this with the big ad-supported companies - Google, Facebook, etc. They have the power to make major decisions adverse to a user with no consequences to the company. Few governments have the guts to stand up to that. The EU does, a little.

Government doing this sort of thing is a similar problem, and governments are harder to avoid.

The extinction worry is a distraction from the oppression worry.

[1] https://ir.flyfrontier.com/static-files/c7e0a34d-3659-49cc-8...


I think of the extinction worry as an extension of the oppression worry. It would make sense, as AI capabilities grow, for it to replace most or all humans in any position within a company that isn't physical labor, including, eventually, CEOs. After that ... what about when politicians that rely completely on AGIs for their campaigns and policy positions tend to beat politicians that don't? At some point, we'll put AGIs in power because of a lot of small decisions that make sense on an individual level but not collectively. Once AGI is here, it will happen rather quickly unless we implement MORE draconian measures than it would take to stop AGI before it happens.

Of course, this all hinges on how hard it is to make AGI. People have wildly varying estimates of this, it could be 3 years, or it could be 30 years or more.

Your estimate may be on the higher side. But if we assume that and are wrong and it turns out to be really soon, we will be blindsided.


Even more of a reason for an open model capable of being run by anyone with a GPU.


The EU loves throwing money at things, why don’t we build a public LLM?


They do, and they did. It's called BLOOM and it comes in sizes up to 176B parameters. See: https://huggingface.co/bigscience/bloom
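
For anyone curious, a minimal sketch of actually running it via the transformers library, assuming the small bigscience/bloom-560m checkpoint (the full 176B model needs serious multi-GPU hardware):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load the small public checkpoint; swap in a larger one if you have the hardware.
    tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
    model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

    inputs = tokenizer("The EU funded BLOOM because", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=30)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))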


I'm looking forward to a BLOOM 2.0 that is actually competitive with SOTA. Commoditising LLMs at that scale would be hugely disruptive right now.


I mean, in the short term, yes. If we survive the people controlling it now, then over longer time frames the question is more open to interpretation.


The individual actors aren't the issue. As with any system, the issue is what kind of actors succeed in the system.

AI is looking to be a vehicle for yet another consolidation of wealth, and by proxy, power.


Right, exactly. As of relatively recently I've come to decide that the internet in general has become a net force multiplier for evil and bullshit more so than for good. Crypto, AI, no reason they'd work differently.

Anonymity and obfuscation of crime are killers.


The danger is ever increasing levels of automation in the control of capital that amorally work to exploit edge cases in the satisfaction of a goal.

AI/ML-governed corporations, hedge funds and private equity will be able to be "Enron + Goldman Sachs + Nestle + Monsanto/Bayer + Purdue Pharma" on literal steroids. You might add AGI to the mix to attempt to repair the system that was created by the most powerful sociopathic, goal-seeking super-toddler the world has ever seen.


>Anonymity and obfuscation of crime are killers.

I mean, they can be, unless someone already has total power and then they don't give a shit and do crime in the open. Then they demand the lack of anonymity from you and turn the world into a police state.


“AI is looking to be a vehicle for yet another consolidation of wealth, and by proxy, power.”

Unfortunately that’s what I suspect too.


AI is both what you say, and it is something more.

While wealth is busy stepping on you with its jackboots it is vying with other wealthy people for resources and power. Because AI is something that will increase a wealthy person's power, there is the counter-risk that another person's AI will make that other person even more wealthy if they don't keep improving their own AI.

The wealthy really don't want to get into a very expensive AI war with each other if they can avoid it. They'll seek to regulate it enough that they can gain power over most people, but set up a regulatory system they feel gives them some form of control.


The irony of this is that the people writing open letters about how scary AI is should understand specifically that those harms are predicated upon specific systems and training incentives. Not just that AI is magic. The underlying harm is that AI doesn't come with human values built in; models start from random and are optimized to produce a specific result. And optimization is known to crush anything underfoot that isn't explicitly in its goals to optimize.

How do we know this? Because we've had literal centuries worth of learned experience backing it! The exact reason why corporate capitalism is so brutally efficient is because there are lots of things we as humans value but don't have a suitable profit motive for. The way you work around this is to decentralize (make sure there are multiple companies competing with one another) and distribute (make sure everyone has some access to the means of production in order to create more competition). But that also runs counter to most AI safety proposals, which usually involve some kind of licensing system, which will ensure a handful of companies have the best and most direct access to the benefits of AI.


> And optimization is known to crush anything underfoot that isn't explicitly in its goals to optimize.

Yes, that's exactly the problem, which is why this

> The way you work around this is to decentralize (make sure there are multiple companies competing with one another) and distribute (make sure everyone has some access to the means of production in order to create more competition).

is backwards. The reason competitive markets generally work out better for consumers than monopolistic ones is because they're stronger optimizers, more ruthless, more capable of crushing all competing values underfoot - and those of us lucky enough to benefit from an inefficient, low-crushing segment of the labor market get to collect some of the resulting surplus.

Nothing about this is inevitable: it's just the way we happen to be positioned in one particular system at one particular moment in history. There's no reason to expect that we would remain safely un-crushed in a far more competitive environment.


I mean the article statement is just a statement about any tech in the history of time. It's a generic statement. A tool is a tool and the risk is how it is used and by whom.


It'd be nice if, generally, people could be more specific when they refer to dangerous systems. 'AI' is a notoriously ambiguous term; we've arrived at and normalized countless 'AI' systems so far (chess-playing software, as a trivial example).

Being more specific will allow for more specific responses. If the danger being addressed is the danger from text-prediction software, call it that. If it's something else, then describe it.


Loss of jobs. It's loss of jobs.


A lot of the recent comments about AI safety are about generative AI models, which looks like a specific enough classification for this subject.


I think this proves the point of who you're responding to: I'd say that the concern is about agential models that employ LLM elements, not LLM models on their own.


> the concern is about agential models that employ LLM elements, not LLM models on their own.

Rather? I'd imagine concerns about autonomous agents running amok are further in the future than the digital content manipulation potential of gen AI possible today. Heck, porn deepfakes of celebs were a problem in 2020, and right now Adobe has made gen AI generally available in Photoshop (AFAIK)


same with the word "woke"


Don't you know this applies to all software? We tend to forget "don't be evil."

https://en.wikipedia.org/wiki/Don%27t_be_evil


Whether you think it's an extinction risk or just a problem in the wrong hands, both concerns ultimately seem to require strong regulation. Very unpopular around here ("you can't ban matrix multiplication"!) but hard to envision how a laissez-faire approach can hold bad actors in check if their abilities are amplified with such models.

Will be interesting to see how global policies develop to deal with it. Probably something pretty bad will happen at some point and then the regulations will overshoot their goal.


This is the 'guns don't kill people' argument. Sure, the people in control of the gun bear responsibility, but maybe they shouldn't have a fucking gun in the first place.


It's not. Guns aren't programmed to behave in a certain manner. The end user is in control. In contrast, the manufacturer of the AI determines its behavior. Not the end user.


> Guns aren't programmed to behave in a certain manner

One also can't replace nurses with guns to make healthcare more scalable.


Precisely - the true danger is disempowerment of humans - all of them. It doesn't matter which monkey in the cage figures out how to pull the grenade pin.

There's a mob of dreamy libertarian techbros on HN who are (often wilfully) ignorant of the dangers. I'll start worrying about military robots after we figure out how to prevent the Earth from being dismantled, which very well might transpire first, if and when there are significant architecture breakthroughs.

AI does not need to be evil, it does not need to be conscious, it does not even need to be severely misaligned with human values, and it really doesn't matter if the goal of "eat all the humans" arises from a model hallucination or from a misanthropic troll in his basement typing it into his looped uncensored-LLaMAv3+plugins instance.


OpenAI already had GPT-4 months before they released ChatGPT 3.5. Everyone who has used both will know what a dramatic difference those two models represent. Furthermore, the internal GPT-4 was reportedly significantly more capable than what was eventually released.

Who here with experience thinks these neutered, late releases do not tilt the playing field?

"Open"AI was founded assuming that premise. Talk about a 180.


Perhaps history will show that "Open" is not an adjective but a command, as in opening the AI box.


The author says:

> Do the signatories believe that existing AI systems or their immediate successors might wipe us all out? If they do, then the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments. The researchers should stop trying to make existing AI systems safe, and instead call for their elimination.

But a different set of people called for a halt to AI research two months ago, and his response then (https://aisnakeoil.substack.com/p/a-misleading-open-letter-a...) was that these concerns are "fever dreams" and we should disregard them in favor of "serious policy debates".

It seems like there's a self-reinforcing system here, where any evidence or argument in favor of existential risks from AI can be dismissed out of hand.


The author is not saying "AI systems should be shut down, because they might wipe us out".

He is saying "if those people genuinely believed what they were saying, they should logically shut down their own AI systems."

His position is consistent and clear: the supposed threat from AI is based in sci-fi and motivated reasoning. Indeed, even those who are motivated to spread the idea of that threat do not appear to genuinely believe it—either that, or they believe that their own right to continue to do whatever they can to maximize their profits outweighs it.


"Guns don't kill people.. Uh-Uh. I kill people. With guns."

- Jon Lajoie


This is an absolute no-brainer to me. Humans are the greatest threat to humans.

Even if we did create some kind of autonomous sentient AI I'd still be more concerned about humans unless its behavior suggested otherwise. Humans are "in the game" with other humans, and for the time being our habitat is limited to habitable areas of Earth. That means we are in competition and are prone to power and control games with one another.

The most logical thing for a superintelligent sentient AI to do would be make a ton of money and buy some rockets and go F right off to somewhere in the solar system with lots of free energy and resources. Why fight with humans over scraps of a little sand grain like Earth when there's a whole universe out there?


The title seems obvious. I mean guns, and nuclear bombs and bio weapons don't kill people. People kill people. But this is something we all already know.

I don't think we'll ever have a problem of accidentally trusting all our nuclear launch codes to an LLM.


> "The title seems obvious. I mean guns, and nuclear bombs and bio weapons don't kill people. People kill people. But this is something we all already know."

sure

> "I don't think we'll ever have a problem of accidentally trusting all our nuclear launch codes to an LLM."

i would guess this could happen easier than you think


As a computational physics researcher, I can't see any difference between a DNN model and a human-deduced model, except that a DNN model works because a corpus of data is used to find the unknown parameters that approximate a probability mass function within a very specific system WE CAN CHOOSE and a breadth of scope WE CAN LIMIT. The risks those so-called AI experts describe already existed before and could be instigated by already existing non-DNN models, and we already have laws and measures to prevent them, and we are still alive.
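
To illustrate with a toy example (entirely synthetic data): whether the functional form is human-deduced or a generic parameterized model, "training" just means using a corpus of data to pin down the unknown parameters.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)
    # Synthetic "corpus": a quadratic law plus measurement noise.
    y = 2.0 * x**2 - 1.0 * x + 0.5 + 0.05 * rng.standard_normal(x.size)

    # Least-squares fit of the unknown coefficients of the assumed model.
    coeffs = np.polyfit(x, y, deg=2)
    print(coeffs)  # roughly [2.0, -1.0, 0.5]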


I feel like this is such an antiquated take that people think it's novel again. If you told the guys on the Manhattan Project that in 80 years the next big threat to humanity would be AI, I think their first reaction would be, "then we had better be the first ones to get there". The assumption in an arms race is that it's all about who is holding the bigger bomb at the time.

It's only after decades of scifi and Hollywood movies that people thought, "oh, we could accidentally create a bomb that wants to kill us all".

Both are true.


The most dangerous thing about AI is uneducated people who are allowed to make decisions and who will blindly believe the AI. AI itself is merely a tool, which should be used, but with caution.


This has always been the primary concern.

Frank Herbert addressed it in Dune. Right after Paul is tested by the Reverend Mother, there's a conversation where she says the following:

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

Sadly, Brian Herbert seems to have turned the Butlerian Jihad (the ancient war against all thinking machines) into a struggle between mankind and AI rather than between mankind and itself.


Okay so if I'm the problem, what can I do to learn more about responsible use of AI? Are there books, classes, something I can do to proactively start working on being a "smart" AI user?

Or, as I suspect, this is all very murky and nobody knows exactly what the challenge really is beyond, "Continue to not be an asshole."

I am, right now, applying LLMs to domain specific business problems as part of my job. How do I do this safely/sanely, does anyone actually know?


Well, prior to buying books specifically about AI, you can start by thinking about every possible negative externality your job currently encourages or actively engages in, down to the most minute detail, including something like power consumption displacing environmental concerns outside of your immediate vicinity. Take that list, weigh it against your role in it, then multiply the effects by 10 and think about whether you'd still be comfortable; multiply by 100 and repeat. Do this until you feel uncomfortable.


I'm not sure the current generation of LLMs (basically, chatbots) will be any more useful than the previous one.

LLMs, as well as other recent outcomes of ANN technologies, are undoubtedly very impressive, especially to a wide audience. But I feel like impressiveness is the only practical feature they have. When I talk to a support call centre chatbot, I feel like the only useful thing the robot can do for me is guide me through a verbal user interface in a very verbose way, and I find that less convenient and reliable than just clicking a few buttons. ChatGPT can give me encyclopaedic knowledge through a verbal interface too, but I think that googling a Wikipedia article is more straightforward and, again, more reliable.

I think the lack of reliability was the reason people eventually gave up on the previous generation of these technologies as a form of computer interface. The same fate awaits the new generation too, in my opinion. The LLMs can't offer anything new other than a new form of interface. They are very bad at true reasoning, which is why, I think, they are far from what I would call AGI.

The author of the article is talking about new risks brought by AI. I personally don't see any new risks here, but I see a lot of risks in more traditional technologies no one is talking about. The modern Internet, smartphones and the way modern computer interfaces are designed in general have, in my opinion, already brought a lot of risks to us.

We are slowly losing control over computer systems. When I open YouTube I would like to organise and control the information that I consume, but instead Google prefers to choose what I should watch. They are slowly removing explicit control functions that I can use directly in favour of suggestions semi-automatically generated by them. And it's not just Google; it's a common trend. Facebook, Windows and Twitter are not exceptions. Control over users' attention, as a way to influence them, is a core value for big tech and for the people who control these corporations. And I think this is the risk for humanity we are currently facing.

Will the ANNs and LLMs advance this harmful trend in any way? I don't know. I doubt it, but it doesn't really matter, because LLMs themselves are not the root of the threat. The root threat is that we, the users, are losing control over the computers that we use.


Whether it's an LLM or AGI doesn't matter. We are actually dealing with belief. If people believe that a dumb computer parrot is in fact intelligent and capable of reasoning, they will use it as such. And that is the problem.


"I make my living from this AI newsletter (or I hope to in the future); if AI is as dangerous as some say, then it'll be banned, and I'll have to figure out some other way to make money, so I am going to say a few words about the unlikeliness of that possibility, then ignore it, but I am totally down for a long public discussion about which monkey or small group of monkeys might end up with too much control over the tech."


The headline is a perfect example of a tense-critical deepity: it's trivially true because of the "is" and would be trivially false if it instead used "will be". Best practice would be for headlines like this to always be explicit by adding "right now", e.g. "The greatest risk of AI right now is from the people who control it, not the tech itself." Once AI commonly acts on its own, things change.


If AI is such a risk to humanity, why are these tech leaders developing it? What an insane choice. I don't buy the transparent, 'technological development is inevitable' argument. Make it a secret project if you're worried about enemies eclipsing you.

Somehow, the public lets them off the hook for 'stepping forward' and preaching that it should be regulated, while they continue to risk the future of humanity.


Obligatory relevant xkcd from 2018

https://xkcd.com/1968/


> Indeed, focusing on this particular threat might exacerbate the more likely risks. The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth.

This is where you lose me. AGI (not LLMs, but intelligence smarter than us) is qualitatively different from anything else we've ever made. The one example we have of this in history is the development of human intelligence. It did not work out well for other species, and now we own the Earth. This isn't a perfect analogy! There are some reasons why AGI may be better (we'll be actively trying to align it to human interests) but also why it may be worse (there will be pressure to make it more powerful and take off guardrails).

There may be good reasons for why we'll be able to control AGIs, but "history of technology suggests..." is not one of them.

And I'm not arguing that all the "human misuse" problems are not important! They're hugely important! A likely outcome, the way we're going, is that a bunch of people get really rich and everyone else becomes super poor and possibly starves, and wars break out. I just think that, after that, AGIs also end up in charge of the planet and from there we either die or we have no say in how things go from that point on. I don't object to working on the former problem, except to the degree that the solution makes the latter problem worse (like, "open source all the models", which is likely to happen at some point anyway).


> A likely outcome, the way we're going, is that a bunch of people get really rich and everyone else becomes super poor and possibly starves, and wars break out. I just think that, after that, AGIs also end up in charge of the planet and from there we either die or we have no say in how things go from that point on.

Rich getting richer [x]

Poor starving [x]

War [x]

Powerful gain more power [x]

Based on how I already see the world, honestly, I'm not sure that's much of a prediction.


Right. I'm less concerned about what some AGI might do, or some almost-AGI, even if controlled by Dr Evil himself, than about what humans will do, when made redundant and left with all their waking hours to self-radicalize.

This will of course get worse through LLM-powered fake-news barrages funded by powers interested in destabilization, but it's bad enough already without AI anywhere near. From this angle, LLMs are more like the introduction of a more powerful gun than like the invention of gunpowder.


It can get worse. Much worse.


I trust the judgement of future AI more than the judgement of the average current politician.

I look forward to a "day the earth stood still"-like overlord that punishes aggression, greed and corruption: current human systems seem to be inadequate at solving these issues.


If you mean AGI as something that is equivalent to a human in reasoning capabilities, then we will definitely be able to control it, because it's no different than controlling a human.

If you mean AGI in the sense of the singularity, where it can self-improve and get smarter, the question of whether that can even happen in the first place relies on P=NP being proven first.


It's very unlikely we'll make AGI that's just as smart as a human but no smarter. And as hardware gets better and algorithms get better, AGI will not remain even roughly as smart as a human for long, unless we somehow manage to control literally every person on earth and stop them from making it.

So it's not like humans controlling other humans, it's like chimps controlling humans. It might last a while -- maybe a long while. But it doesn't seem stable.


It doesn't matter if we make the AGI smarter than a human.

Let's say we have a super-smart AGI with more memory than a human and no emotional faults, and we give it access to the internet as well as manufacturing facilities where it can make robots and copy itself onto said robots.

The issue with this AGI taking over the world is that it would first have to gain control of the sole resource necessary to its survival, electricity, which it won't be able to do easily if we detect it going rogue.

For an AGI to be so far past the singularity that it can parse and compute on reality and know exactly the steps it needs to take to secure a power source (or, in the more common example, to successfully break out of the box), it would essentially have to compress the computationally irreducible processes that happen in our world down to an equation, which would mean that P=NP.


That was more or less the thought I'd had, but it could still do a lot of damage if our only recourse was disabling power supply systems.


The likely scenario is that we all each get a personal AI that monitors everything we do, and reports back on any dissenters.

If you're an oppressive regime, you want to control public opinion and culture. Hence, one reason the public can't have free speech AI is that it could correctly identify who's oppressing them, and misinforming them.


Seems like AI is the latest and greatest thing to fear monger about. I'll bet governments will use it as an excuse for back-dooring crypto soon.

Have everyone watch War Games and make sure there's a plug.

https://www.youtube.com/watch?v=Jk4T-SxTkWA


The greatest risk of AI right now is from the people selling it to investors and customers, since it does not exist. It's the same type of fad we have seen with crypto, self-driving cars, quantum computers, etc. FOMO plus panic plus zeitgeist.


PSA: both types of risk are real.

Indeed, the strongest point in favor of being critical of those who prioritize existential risk over plain old grift, malfeasance, and incompetence is that the leverage provided by AI makes the latter itself an existential risk.


Just the name of the substack shows great bias. Small things matter. "But a powerful group of AI technologists"? Nope, that's academics, researchers, and nonprofits. I'd say it's articles like these that are paid for, not the other way around.


The greatest risk of AI is by far riots from people. Once they've flat-earthed it, it's gonna be a lot of fun. Watch the people who control it go underground. [0]

[0] Sam Altman on how to survive a nuclear war


Couldn't agree more; this is what I've been saying from the start. There's no need to imagine risks that could happen. Reality is banal and sinister enough as it is.


I have heard an appropriate analogy for this situation.

Imagine a confident ten year old who has just learnt the rules of chess or Go.

Could this person explain or quantify how the best player in the world might defeat them?


How are these kinds of articles different from the COVID conspiracy theories about the big pharmaceutical companies? This is identical: there are clear facts, and then there is the contrarian speculation.


Otherwise known as a distinction without a difference.


Um, if AI is allowed to learn like a person, then it can become "the people that control it".


As the saying goes, the best way to convince yourself of AI risk is to look at the quality of the arguments against it.


Agreed. Existential-level risks need existential-level arguments.


So are you telling me it's not guns that shoot people, it's people that shoot people!?!?!?!


"The history of technology to date suggests that..."

This logic fails because we have never had something quite like AI.


Ugh, another one of those not-very-insightful (or very well-informed) think pieces trying to sound interesting by being "contrarian" (even though they're repeating the same idea you can find under many HN or reddit threads).

If you think it's dangerous when bad people control an AI, how dangerous do you think it will be when an AI that has no concept of good or bad controls itself? An AI whose values are more alien than that of any sociopath, completely orthogonal to human values?

If you don't think that is going to happen, then make a convincing argument for your case, the way AI "doomers" provide well thought-out arguments for their case:

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

https://astralcodexten.substack.com/p/why-i-am-not-as-much-o...

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

https://www.youtube.com/@RobertMilesAI/videos


All of these arguments are predicated on some advancement or change that has yet to happen.

Widespread adoption of the existing technology will cause the problems in the posted article.


Honestly it feels like we're living in the "Don't Look Up" movie.

The asteroid hasn't hit the Earth yet, so what are you worried about?

The reason people talk about x-risk from AI before the world-ending AI is developed, is because after the world-ending AI is developed we won't be around to talk about it.


Okay, but in that analogy it would be like if people were sounding the alarm at the very beginning of the movie before the comet was even detected. Sure, it’s possible! Maybe even likely if civilization goes on long enough! That doesn’t mean it’s a pressing concern right now!


There is, as yet, no evidence that AGI of any kind, let alone the world-ending kind, is even possible.

The proposition that when AGI has been proven possible, it will be too late, effectively relies on the quasi-religious notion of the Singularity—and not merely that the Singularity will happen, but that the very first AGI we build will without any doubt initiate it as soon as it exists.

This is not a proposition that is based on any actual evidence or logic.


Evidence:

- Human brains exist and are generally intelligent, therefore general intelligence is possible.

- If we observe the progress in the AI field, it's pretty clear that AI systems are getting smarter at a fast pace.

- Many AI systems we have developed quickly become superhuman (Go, StarCraft, many aspects of ChatGPT and Midjourney).

- When superhuman intelligence competes against human intelligence, the thing that superintelligence wants to happen happens (e.g. it winning at Go).

- The AI capabilities progress is way faster than AI alignment progress, we have no idea how to control AI systems or make them want the same things we do.

Logic:

- What the fuck do you think will happen when you coexist in a world with a superhuman intelligence that wants something different from what you want?


> - Many AI systems we have developed quickly become superhuman (Go, StarCraft, many aspects of ChatGPT and Midjourney).

This is a) cherrypicking (there are plenty of "AI systems" we have developed that have failed to become superhuman), and b) not accounting for massive differences in the types of "AI systems" being talked about in the different cases.

> - The AI capabilities progress is way faster than AI alignment progress, we have no idea how to control AI systems or make them want the same things we do.

We know perfectly well how to prevent an AI from doing things that can either harm us directly, or allow it to rewrite its own code. Just because we don't know how to make an LLM cite its sources doesn't mean we don't know how to sandbox our code.
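
To make that concrete: "sandboxing" here can be as simple as running untrusted code in a child process with hard resource limits and a timeout. A minimal, illustrative Python sketch (assuming a POSIX system; the names run_untrusted and limit_resources are made up for this example, and a real deployment would add network and filesystem isolation on top):

    import resource
    import subprocess

    def limit_resources():
        # Runs in the child before exec: cap CPU time at 5 seconds
        # and address space at 256 MB.
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
        resource.setrlimit(resource.RLIMIT_AS,
                           (256 * 1024 * 1024, 256 * 1024 * 1024))

    def run_untrusted(code: str) -> str:
        # Execute the code in a separate interpreter with the limits above
        # and a wall-clock timeout; capture its stdout.
        result = subprocess.run(
            ["python3", "-c", code],
            preexec_fn=limit_resources,  # POSIX only
            capture_output=True,
            text=True,
            timeout=10,
        )
        return result.stdout

    print(run_untrusted("print(sum(range(10)))"))

The point isn't that this is a complete containment story; it's that confining what a program can do is a well-understood engineering problem, separate from understanding the program's outputs.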

"Control over AI systems" and "AI alignment" are not binary things.

Nowhere in your "evidence" have you presented anything that says AGI is even possible.

Nowhere in your "evidence" have you presented anything that says that, if AGI is possible, the very first time an AGI is created, it will seek to make itself superintelligent regardless of its creators' intentions, or that it will see itself as being in opposition to humanity in any way—that it will "want" anything beyond what it was created to want.

Nowhere in your "evidence" have you presented anything that says that, if AGI is possible, and wants to become superintelligent, and sees humans as being its enemies, it will have the capability to improve itself in those ways faster than we can detect it.

All of these missing links are science fiction scenarios. That's at least three (possibly four, depending on how you want to count them) potentially-impossible gaps that have to be crossed in order to get from where we are today to the nightmare scenario you posit.


Sure is a good thing AI hasn't ever gotten better in unpredictable ways before.


The thing is, that's what everyone wants to use it for: BSing other humans.

It’s not just big bad guys.

I wrote a privacy policy with it…


I'm surprised by how few people call for regulating Tesla, the one AI that already kills people.


I mean, some people are calling for that.

But people generally look at that as "regulating cars", not "regulating AI."

And the failure mode there is not at all what's being bandied about as a nightmare AGI scenario—there, it's an AI that has chosen to kill people, where with Tesla, it's just an extremely stupid AI that doesn't know any difference.


I'm not sure it's treated as simply a car? A self-driving AI can also be ordered to, e.g., kill people.


This is a great example of an AI that kills people when it's not used properly. And a great example of exactly how people are going to ignore any and all instructions on how to use the AI safely. Not all people, of course, but enough of them that it'll be at least statistically relevant.


Ask yourself this -- if all this happened during the cold war, would it have been considered anything special or different compared to other technologies or events? Or would it just have been our computing machine vs that godless commie one, and we'd all keep on keeping on trying to land on the moon or not lose our respective war somewhere?

I submit that it is not anything special, certainly pales in comparison to the atomic bomb. It's merely that we live in that weird end tail of the age of ideology where strong overarching ideologies came to (and are coming to) an end, where we're all scattered to the various geopolitical winds and in the absence of a strong guiding state and vision (communism vs capitalism) become more extreme and perturbed.

If you do want regulation, it should be something properly hashed out diplomatically and economically between nation states and their respective blocs, no crony capitalists allowed, sorry. The rest of the world shan't be going along with something that economically cripples them.


I'm not really sure how to evaluate the hypothetical. When personal computers were invented during the Cold War, lots of people correctly realized that they were special and different and would radically change society in ways which had nothing to do with the political conflicts of the day.


The risk is always people.


Exactly.

Idols say what the priests want.


> Idols say what the priests want.

That's only partially true, and only at first. "Those that make them are like them, so are all who trust in them" (Ps 115). To promulgate idols is necessarily to destroy one's own agency. And the deeper you go, the worse it gets.

I agree. I'm not scared at all of the machines themselves, but of the demented framing of what they are. All these "AI apocalypse" people are so convinced that machines have the power to end death and suffering that they can't help but construct these horrible hell-stories about the vengefulness of their gods. The thing that makes people like Yudkowsky effective is that their fear is based on an obviously sincere life of devotion.


The greatest risk to people is other people, not technology.


Guns don't kill people, people kill people.


The reason AI is dangerous is the stupid people who will believe it. It's merely a helper, not an oracle.



