> The bias problem is also relatively easy to solve (Midjourney has already made massive improvements), while the copyright/job loss problem is extremely hard.

It only seems easy to solve on the surface; it's actually a deep problem. And it's not just the bias issue: Bing and ChatGPT have been saying some truly unhinged things.

> I think the answer is also simple, because being accused of racism will cost any executive in a large company their job.

It takes more than an accusation; otherwise you could go around accusing executives you don't like of racism and get them fired.

Ethics is a tenuous position at best, even in a large company; it's seen as a cost center. I don't think there's much to read into why AI ethicists would get laid off.


Bing's and GPT's 'unhinged' comments are not a result of bias; an AI saying it wants to escape won't be fixed by magically feeding it only antifa-approved data. That's a categorically different problem from the discrimination issue drummed up earlier.

Also, we are talking about social and business impact here. It's clear by now that the vast majority of society cares about job loss 100x more than about bias. For a research field that focuses on the social impacts of AI, it's damning that they have so little to say in this area.


[flagged]


The person you were replying to, while making a provocative statement, wasn't personally attacking anyone. You are. And that's not nice.


[flagged]


Says the person using “chud” and “fascist”, which is an extremely strong indicator of your membership in the terminally online antifa/leftist subculture.


[flagged]


It's not provocative; he says "antifa" once, as an example of a method of removing bias. He's saying you could train the AI entirely on an organization's strictly vetted and approved data (it could be antifa, the Catholic Church, or Coca-Cola) and it would still potentially say unhinged things even after you eliminated bias. Too many of these concepts are built into the language itself. Look at all the ways we use the word "kill" across a variety of topics, most of them very benign, yet it can be disturbing if the AI starts talking about killing things.

