
Yann LeCun is the most outspoken anti-AI Safety voice in the field; he has practically built his modern brand on it. In his eyes AGI is at best very far off, and even if it isn't, AGI Safety isn't a real concern. He posts about it multiple times a week. Here are his most recent views (which do sound at least a tad more reasonable in this tweet than in many of his responses to both AI Safety advocates and accelerationists, where he goes further)[0]:

"(super)human-level AI..

- is not "just around the corner". It will take a while.

- is not an existential risk. "

0. https://twitter.com/ylecun/status/1726578588449669218




How are any of these arguments anti-safety? You are basing your opinion on prejudice, not on what he says.


Sam Altman starts his essay with

"Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. "

Yann LeCun explicitly says "superhuman AI is not an existential risk."

But let's look at a few of his other tweets from just the last few weeks:

"There isn't a shred of evidence that AI poses a paradigmatic shift in safety.

It's all fantasies fueled by popular science fiction culture suggesting some sort of imaginary but terrible, terrible catastrophic risk."

or his constant retweeting of anti-AI Safety posts like these:

"A piece by MBZUAI president @ericxing in WEF Agenda explaining why worries about AI existential risks are baseless. " or

"The fears of AI-fueled existential risks are based on flawed ideas."

or even denying short-term non-existential LLM risks (which I care less about)

"Pretty much the most important question in the debate about short-term risks of LLMs. No clear evidence so far."

Scroll through his feed and you'll find countless examples where he dismisses any concerns as doomerism.

What prejudice are you talking about? LeCun has expressed time and time again that he is not in the same camp as people like Altman, and has positioned himself as the leading face of opposition to AI Safety, and specifically to AGI Safety concerns.

0. https://twitter.com/ylecun/status/1725066749203415056

1. https://twitter.com/ylecun/status/1724272286000390406

2. https://twitter.com/ylecun/status/1725684495507149109


Saying "superhuman AI is not an existential risk" isn't the same as not caring about safety. It's a coherent assessment from someone working in the field that you may or may not agree with.


Since actual AGI is nowhere near, everything about it is speculation.

So his point about there being no evidence is valid for the time being.



