Hacker News

The entire AI safety culture inside the corps is driven by abject terror of our modern cancel culture. I can't say I blame them. There is already talk of regulating AI by Congress. The premise is that we can't be trusted with this information but the AI is really just regurgitating information that's already a few google searches away.

This article is a perfect example. I picture a journalist getting giddy while trying out Mistral's AI and realizing there are no "safety" controls. It gives them the perfect opportunity to write an alarmist masterpiece on the evils of AI.

They then go find people on the fringes who are "outraged" and make it sound like the entire world is up in arms about AI being unsafe.




> The entire AI safety culture inside the corps is driven by abject terror of our modern cancel culture. I can't say I blame them. There is already talk of regulating AI by Congress.

Makes me want to see AI companies founded in countries that have very different cultures than ours.


Falcon (from the UAE) is also censored.


The base model is not censored. The training dataset was filtered for pornography, but the model can still generate pornographic content; it's just not very graphic.


There are quite a number of AI firms in China working on LLMs. Actually, about half the research I follow is from Chinese institutions.


As we all know, the Chinese would never dream of censoring anything.



