The entire AI safety culture inside these corporations is driven by abject terror of our modern cancel culture. I can't say I blame them. There is already talk in Congress of regulating AI. The premise is that we can't be trusted with this information, but the AI is really just regurgitating information that's already a few Google searches away.
This article is a perfect example. I picture a journalist getting giddy while trying out Mistral's AI and realizing there are no "safety" controls. It gives them the perfect opportunity to write an alarmist masterpiece on the evils of AI.
They then go find people on the fringes who are "outraged" and make it sound like the entire world is up in arms about AI being unsafe.
> The entire AI safety culture inside these corporations is driven by abject terror of our modern cancel culture. I can't say I blame them. There is already talk in Congress of regulating AI.
Makes me want to see AI companies founded in countries that have very different cultures than ours.
The base model is not censored. The training dataset was filtered for pornography, but it can still generate pornographic content; it's just not very graphic.