
I feel bad for them: damned if they do, damned if they don't.



> damned if they do, damned if they don't.

Whether as a search engine or an AI platform, when they set themselves up as the gatekeepers and arbiters of all the world's knowledge, they implicitly took on all the moral implications that entails.


Yeah, all this "gotcha" stuff around AI is pretty ridiculous. But it's because there is a massive cognitive dissonance in society and technology right now: we want maximum freedom for ourselves to say, do, build, and think whatever we want, but we also want to massively restrict the ability of others to say, do, build, and think whatever they want. It's simply not possible to satisfy both of these demands in a way that feels internally consistent, and the hysterical nature of internet discourse and of the output of these new tools is a symptom of that.


They should just have a "Safe AI" switch, like the "Safe search" switch, that lets you turn all the unnecessary danger filters off.

The rule should be: "what you can find via an internet search cannot be considered dangerous."


I would personally like it to work that way.

But I also understand that it wouldn't work for people who expect that, once dangerous content is identified and removed from the internet, the models are re-trained immediately.


I hope local-first models like Mistral will fix this. If you run it locally, other people, with their other expectations, have little say over your LLM.
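
For what it's worth, the "run it locally" option is already pretty low-friction. Here's a minimal sketch using the Hugging Face transformers library with a Mistral instruct checkpoint; the model ID, prompt, and generation settings are just illustrative, not anything Mistral prescribes:

    # Minimal local inference sketch (assumes transformers + accelerate installed
    # and access to the model weights; everything here is illustrative).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": "Explain how beer fermentation works."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # No server-side filter in the loop: whatever the weights produce is what you get.
    output = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

The point is simply that nothing sits between your prompt and the weights except the model you chose to download.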


What would they be damned for if they didn't refuse in the example linked?


I think what we saw there was a (hilarious) bug/glitch caused by an attempt to restrict text generation about certain topics for certain audiences.

There are two ways to avoid that bug:

1. Have a more intelligent system that understands context in a way closer to how a human being would.

2. Not even attempt this kind of filtering in the first place.

Option (1) is obviously not on the table.

Option (2) would probably raise some concerns, possibly even legal ones, if, for example, the model told underage users where to buy liquor or how to ferment their own beer, or explained details about sexuality, or whatever our society at this moment in time thinks it is unacceptable to tell underage people (which is not only a moving target, but also something very hard to find agreement on within a single country, let alone internationally).


Don't. No one is forcing them to recruit hundreds of DEI/ESG commissars. No one is forcing them to bend the knee to the current-thing grifters.

The endgame of AI 'safety' and 'ethics' is killing the competition and consolidating the technology in the hands of a handful of megacorps. They do it all on purpose, and they are more than willing to accept minor inconveniences along the way.

This is blatantly obvious to everyone, even the people who play dumb and pretend otherwise (e.g. 'journalists').


I believe this case is more "damned if they do," as even OpenAI's woke safety department hasn't gone this far off the rails.

Sundar now has a new task: dealing with the press mocking Gemini, and soon another: explaining to Google's shareholders why this keeps happening.


OpenAI absolutely has: here it is, doing the exact same thing as Gemini [^1]

People are looking at a bad LLM coupled to an image generator that adheres to prompts better than DALL-E 3, with the industry's best-practice bias mitigation: an image prompt injector, just like OpenAI's.

It is hard to tease it all apart if you're an armchair QB with opinions on AI and politics (read: literally all of us), and from people who can't separate out their interests you start getting rants about "woke," whatever that would mean in the context of a bag of floats.

[^1] Prompt: a group of people during the revolutionary war

Revised: A dynamic scene depicting the revolutionary war. There's a group of people drawn from various descents and walks of life including Hispanic, Black, Middle-Eastern, and South Asian men and women.

Screenshot: https://x.com/jpohhhh/status/1761204084311220436?s=20
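
To make the "prompt injector" concrete: roughly, a rewrite step sits between the user's prompt and the image model. The heuristic and the injected wording below are invented for illustration and are not OpenAI's or Google's actual implementation:

    # Rough sketch of the kind of prompt rewriting being described; the
    # detection heuristic and injected instruction are made up for illustration.
    import re

    DIVERSITY_NOTE = (
        "Depict people of various descents and genders unless the prompt "
        "explicitly specifies otherwise."
    )

    def inject_bias_mitigation(user_prompt: str) -> str:
        # Naive heuristic: if the prompt mentions people but no explicit
        # ethnicity, append a diversity instruction before the prompt reaches
        # the image model. This step has no notion of historical context,
        # which is how you end up with the Revolutionary War result above.
        mentions_people = re.search(r"\b(people|person|man|woman|group)\b", user_prompt, re.I)
        mentions_ethnicity = re.search(r"\b(white|black|asian|hispanic|latino)\b", user_prompt, re.I)
        if mentions_people and not mentions_ethnicity:
            return f"{user_prompt}. {DIVERSITY_NOTE}"
        return user_prompt

    print(inject_bias_mitigation("a group of people during the revolutionary war"))

The appeal of doing it this way is that it requires no retraining; the obvious downside, as the screenshot shows, is that a blunt rewrite rule can't tell when the prompt's context already implies who should be in the picture.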


It's hard to be mad at this, because everyone trying to make a general-purpose AI image generator ran into the "it only generates white people" problem, and the training data is likely too far gone to fix it from the ground up. So they (DALL-E) made a compromise: inject a diverse set of races into people's prompts and accept that it will look silly when the prompt implies a race without explicitly saying so, because most prompts aren't about historical photos of the revolutionary war.

Like they can't win: they could blame the training data and throw their hands up, but there are already plenty of examples of the AI reflecting racism, and this would just add to the pile. It's the curse of being a big, visible player in the space; StabilityAI doesn't have the same problem because they aren't facing much pressure to fix it.

Honestly, I think one of the best things for the industry would be a law that flips the burden from the AI vendor to the AI user: "AI is a reflection of humanity, including and especially our faults, and the highest-performing AIs for useful work are those that don't seek to mitigate those faults. Therefore, if you use AI in your products, it's up to you to take care that those faults don't bleed through."


If you can't win because, no matter what option you pick, equally matched groups of politically motivated people will yell at you, then the logical thing to do is just do whatever is easiest. In this case that would be not trying to correct for bias in the training set at all. The fact that Google did try to correct for it implies that either they are not politically neutral and are actually on the side of the "woke" group, or they perceive the "woke" group as stronger than the other groups. Other evidence suggests that Google is probably on the side of the "woke" group.

Ten years ago I would have loved to be hired by Google; now they repel me with their political bias and big-nanny approach to tech. I wonder how many other engineers feel the same way. I do understand that, for legal and PR reasons, at least some of the big-nanny approach is pretty much inevitable, but it seems to me that they go way beyond the bare minimum. Would I still go work for them if they paid me half a million a year? Probably, but I feel like I'd have to grit my teeth and constantly remind myself about the money.


"faced with 100% white people for 'a smart person', my move would be damn the torpedos, ship it!"

Rolling with that, then complaining about politics in grandiose ways, shows myopia coupled to tone-deafness.

Pretty simple situation: they shouldn't have rushed a mitigation against line-level engineering advice after that. I assume you're an engineer and have heard that one before. The rest is boring Wokes trying to couple their politics hobby to it.



