The problem is the ultra-liberal conflation of words with violence. The X-risk folks are mostly concerned with actual, physical violence against humanity by AI - "what if we accidentally build a paperclip maximizer" is the textbook example of AI risk, and it's a scary scenario precisely because it involves an AI turning us all into goo through unlimited violence.
But then there's the generic left faction inside these companies that has spent years describing words as violence, or even silence as violence, and claiming their "safety" was violated by speech. That should have been shut down at the start, because it's not what the concept of safety means, but it wasn't, and now their executives lack the vocabulary to separate X-risk physical danger from "our AI didn't capitalize Black" ideological danger.
Given this, it's no surprise that AI safety almost immediately lost its focus on physical risks (the study of which might be quite important once military robots or hacking become involved) and became purely about the risks of non-compliant thought. Now the whole field has become a laughing stock, but I wonder if we'll come to regret that one day.