
I don't like the title, but the second opening paragraph starts strong:

> A.I.’s political problems were starkly illustrated by the disastrous rollout of Google’s Gemini Advanced chatbot last month. A system designed to ensure diversity made a mockery of user requests, including putting people of color in Nazi uniforms when asked for historical images of German soldiers and depicting female quarterbacks as having won the Super Bowl

This is not about politics; it's all about marketing. The system was told "make sure that any images of people you generate are diverse," and it did just that. It performed as designed; there was no failure. The interesting political question is why we would care about these things in the first place, one way or the other.


As if we should expect real AI to have contingent historical knowledge...


That paragraph twists the harm around, though. The NYT goes out of its way to highlight Gemini depicting people of color as Nazis, while omitting that it almost entirely refused to depict white people, and especially white men, in any positive context.

Gemini treated those groups far worse, and above all, its disregard for truth and accuracy was disastrous. That latter part is where the true harm lies, IMO.
