I believe in this case it's more "damned if they do", since even OpenAI's woke safety-and-security department hasn't gone this far off the rails.
Sundar is going to have a new task dealing with the press mocking Gemini, and soon another: explaining to Google's shareholders why this keeps happening.
OpenAI absolutely has: here it is, doing the exact same thing as Gemini [^1]
People are looking at a bad LLM coupled to an image generator that adheres to prompts better than DALL-E 3, with the industry's best-practice bias mitigation on top: a prompt injector that rewrites requests for diversity, just like OpenAI's.
It's confusing to tease it all out if you're an armchair QB with opinions on AI and politics (read: literally all of us), and from people who can't separate out their interests you start getting rants about "woke", whatever that would mean in the context of a bag of floats.
[^1] Prompt: a group of people during the revolutionary war
Revised: A dynamic scene depicting the revolutionary war. There's a group of people drawn from various descents and walks of life including Hispanic, Black, Middle-Eastern, and South Asian men and women.
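
For anyone curious what that mitigation looks like mechanically, here's a minimal sketch of the idea: a rewriter pass that appends diversity descriptors to prompts mentioning people before they reach the image model. Everything here (the `inject_diversity` function, the trigger words, the descriptor list) is hypothetical and just illustrates the shape of the technique; the real systems do this with an LLM rewriting pass, as the footnote above shows.

```python
import random

# Hypothetical, naive string-level diversity prompt injector.
# Real systems (DALL-E 3, Gemini) rewrite prompts with an LLM;
# this sketch only shows the shape of the technique.

PEOPLE_TRIGGERS = {"person", "people", "man", "woman", "group", "crowd"}
DESCRIPTORS = [
    "drawn from various descents and walks of life, including Hispanic, "
    "Black, Middle-Eastern, and South Asian men and women",
]

def inject_diversity(prompt: str) -> str:
    """Append a diversity descriptor whenever the prompt mentions people.
    Note it is blind to historical or contextual cues, which is exactly
    the failure mode everyone is laughing at."""
    words = {w.strip(".,").lower() for w in prompt.split()}
    if words & PEOPLE_TRIGGERS:
        return f"{prompt}, {random.choice(DESCRIPTORS)}"
    return prompt

print(inject_diversity("a group of people during the revolutionary war"))
```

The silly outputs fall straight out of this design: the injector fires on "group of people" without any notion that "revolutionary war" already pins down the demographics.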
It's hard to be mad at this, because everyone trying to build a general-purpose AI image generator ran into the "it only generates white people" problem, and the training data is likely too far gone to fix from the ground up. So they (DALL-E) made a compromise: inject a diverse set of races into people's prompts and accept that it will be silly in cases where the prompt implies a race without explicitly saying so, since most prompts aren't for historical scenes like the Revolutionary War.
They can't win either way: they could blame the training data and throw up their hands, but there are already plenty of examples of the AI reflecting racism, and this would just add to the pile. It's the curse of being a big, visible player in the space; StabilityAI doesn't have the same problem because they aren't facing much pressure to fix it.
Honestly, I think one of the best things for the industry would be a law that flips the burden from the AI vendor to the AI user -- "AI is a reflection of humanity, including and especially our faults, and the highest-performing AIs for useful work are those that don't try to mitigate those faults. Therefore, if you use AI in your products, it's up to you to take care that those faults don't bleed through."
If you can't win because, no matter which option you pick, equally matched groups of politically motivated people will yell at you, then the logical thing to do is whatever is easiest: in this case, not trying to correct for bias in the training set. The fact that Google did try to correct for it implies that either they are actually on the side of the "woke" group rather than being politically neutral, or they perceive the "woke" group as stronger than the other groups. Other evidence suggests that Google is probably on the side of the "woke" group.
10 years ago I would have loved to be hired by Google; now they repel me with their political bias and big-nanny approach to tech. I wonder how many other engineers feel the same way. I do understand that, for legal and PR reasons, at least some of the big-nanny approach is pretty much inevitable, but it seems to me that they go way beyond the bare minimum. Would I still go to work for them if they paid me half a million a year? Probably, but I feel like I'd have to grit my teeth and constantly remind myself about the money.
"faced with 100% white people for 'a smart person', my move would be damn the torpedos, ship it!"
Rolling with that, then complaining about politics in grandiose ways, shows myopia coupled with tone-deafness.
Pretty simple situation: they shouldn't have rushed out a mitigation against line-level engineering advice after that. I assume you're an engineer and have heard that one before. The rest is boring Wokes trying to couple their politics hobby to it.