
They should just have a "Safe AI" switch, like the "Safe search" switch, that turns all the unnecessary danger filters off.

The rule should be "anything you can find with an internet search cannot be considered dangerous."




I would personally like it to work that way.

But I also understand that it wouldn't work for people who expect that once dangerous content is identified and removed from the internet, the models are immediately re-trained.


I hope local-first models like Mistral will fix this. If you run the model locally, other people with different expectations have little say over your LLM.
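
For anyone curious what that looks like in practice, here is a minimal sketch of running a Mistral instruct checkpoint locally with the Hugging Face transformers library; the model id, prompt, and generation settings are just common assumptions, not anything specific from this thread:

    # Minimal sketch, assuming the Hugging Face transformers library and the
    # publicly released Mistral-7B-Instruct checkpoint.
    # Requires: pip install transformers torch accelerate
    # After the weights are downloaded, generation runs entirely on local hardware.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed example checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Build a chat-style prompt and generate a reply.
    messages = [{"role": "user", "content": "Hello! What can you do?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

Whatever filtering behaviour ships with the weights still applies, of course, but nobody can retroactively change what the copy on your machine will answer.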



