Yann LeCun is the most outspoken anti-AI Safety guy in the field; he has practically built his modern brand on it. In his eyes AGI is at best very far off, and even if it isn't, AGI Safety isn't a real concern. He posts about it multiple times a week. Here are his most recent views (which do sound at least a tad more reasonable in this tweet than in many of his responses to both AI Safety advocates and accelerationists, where he goes further)[0]:
"(super)human-level AI..
- is not "just around the corner". It will take a while.
"Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. "
Yan LeCunn explicitely says "superhuman AI is not an existential risk."
But let's look at a few of his other tweets from just the last weeks
"There isn't a shred of evidence that AI poses a paradigmatic shift in safety.
It's all fantasies fueled by popular science fiction culture suggesting some sort of imaginary but terrible, terrible catastrophic risk."
or his constant retweeting of anti-AI Safety posts like this
"A piece by MBZUAI president
@ericxing
in WEF Agenda explaining why worries about AI existential risks are baseless.
"
or
"The fears of AI-fueled existential risks are based on flawed ideas."
or even denying short-term non-existential LLM risks (which I care less about)
"Pretty much the most important question in the debate about short-term risks of LLMs.
No clear evidence so far."
Scroll through his feed and you'll find countless examples where he dismisses any concerns as doomerism.
What prejudice are you talking about? LeCun has expressed time and time again that he is not in the same camp as people like Altman, and has positioned himself as the leading face of opposition to AI Safety, and to AGI Safety specifically.
Saying "superhuman AI is not an existential risk" isn't the same as not caring about safety. It's a coherent assessment from someone working in the field that you may or may not agree with.
"(super)human-level AI..
- is not "just around the corner". It will take a while.
- is not an existential risk."
[0] https://twitter.com/ylecun/status/1726578588449669218