
Moreover, in the next sentence GP confesses that they “think the biggest danger from AI is that these models will be used by those who control the models to control and censor what people are allowed to write”, revealing that they too harbor ethical concerns about AI, they’re just not one of “those” AI ethicists.



That's more a terminological accident, I think. Those who describe themselves as working on "AI ethics" in academia are mostly worried about things like AIs saying something offensive or discriminating by race or sex, while people who use the terms "AI risk" or "AI safety" are more worried about future risks like terrorism, war, or even human extinction.

Thinking about it, neither group talks much about the risk of AI being used for censorship...


> Thinking about it, neither group talks much about the risk of AI being used for censorship...

This is a pretty common topic in the academic community I follow, along with related things like how AI will impact relationships with employers, governments, etc. I see that mostly as a counterpoint to the more sci-fi ideas, as in "don't worry about AIs annihilating humanity, worry about your boss saying an AI graded your work and said you are overpaid".


What's this community called?




