
> Now we have a company "trying" to be more responsible when they are releasing a new technology and they are having hilariously terrible results...

I think the big point that, in my opinion, all of these AI companies forget is that what counts as "responsible" depends heavily on the culture, country, or often even the sub-culture the user belongs to. It might sound a little postmodern, but I think there are only a few somewhat universally accepted opinions on what is "responsible" vs. "not responsible".

Just watch basically any debate where both sides hold strong opinions, and analyze which cultural or moral traits lead each side to its position.

Does my answer offer a "solution" to this problem? I don't think so. But I do think that factoring this into the design of a "responsible" AI might reduce the "outcries" (or shitstorms ;-) ) that arise when the "morality" the AI uses to act "responsibly" differs sharply from the moral standards of some group of users.




I doubt any of these companies really care about culture beyond the walls of their execs' Western American offices and related media influencers.



