
The issue, for anyone tempted by it, is not that our current chatbots are biased and some future iteration will not be. Producing an unbiased answer to every question is impossible. People don’t agree on many important questions, and even if a chatbot tried to give equal weight to all perspectives, that would mean giving weight to fringe opinions.

It’s the same thing with image generators. How many eyes should the average generated person have? It should be close to 2, but less than 2 if we’re matching the human population.

The solution that these companies will inevitably reach for is an extension of filter bubbles. Everyone gets their own personalized chatbot with its own filter on reality. It makes the culture warriors happy but it will only make things worse.




> Everyone gets their own personalized chatbot with its own filter on reality.

Compare map services.


Ahh maps, where the borders and even the names of disputed regions are different depending on the legal whims of various totalitarian regimes.

Then of course, there are the countries where accurately represented geographic features are verboten.


> different depending on the legal whims of various totalitarian regimes

An obvious solution to legal whims of various regimes (of any kind) would be to have One World Government that could resolve these international disputes, but this sol'n has its own drawbacks ;)


I agree that the likely future of this is that everyone gets their own "reality". After all, what's the harm in letting a little girl from Harlem believe someone like her could have been a medieval European monarch? It's beneficial to make people feel good about themselves, and history is just dead people, right?

I think this argument actually resonates with a lot of people--if the truth makes people feel bad or excluded or marginalized, why not just change the truth? You see it in lots of almost trivial ways nowadays, things so small that it feels ridiculous to even mention. Some examples: Half the medieval warriors that come with a castle Lego set nowadays are female. Viking Legos too. And remember when they were trying to tell us Abraham Lincoln was gay?


> After all, what's the harm in letting a little girl from Harlem believe someone like her could have been a medieval European monarch? It's beneficial to make people feel good about themselves, and history is just dead people, right?

The harm is that it's not true. When more and more people believe things that are fundamentally not true about the world, about history, we get further from the ability to move civilization forward in positive directions. History isn't just dead people, it's a record, to the best of our ability, of what actually happened to get us here. Any sort of improvement process always starts from where you are, but if you don't know where you are and the context that got you there, you can't improve.

There are definitely a lot more people these days who completely reject the very concept of objective truth and reality, and they are perfectly happy to spread that rejection further through society. But those concepts aren't just an idea; they're something tangible. The truth actually matters: it's foundational to being able to do things like science and engineering. Many of us got into computers in the first place because we took comfort in the fact that computers couldn't lie and could only do what they were told to do; if the computer did something unexpected, it was because you had made a mistake, not because the system was non-rational or non-deterministic. LLMs, hallucinations, and filter bubbles create a world in which nobody has the comfort of truth anymore; we're all just suffering through some sort of delusion, mass or individual.


> After all, what's the harm in letting a little girl from Harlem believe someone like her could have been a medieval European monarch?

I'm a little confused by this and want to better understand what point you intend to make.

What do you mean by "a little girl from harlem" and "a medieval european monarch"?


I'd assume they were referring to Google's image generator Gemini, which was generating historically inaccurate images of "diverse" European monarchs, founding fathers, and WW2 Nazis. E.g. https://www.telegraph.co.uk/news/2024/02/23/google-gemini-ai...


> I think this argument actually resonates with a lot of people--if the truth makes people feel bad or excluded or marginalized, why not just change the truth?

This is the essence of the so-called noble lie. Much has been said about it since the Greeks, but in short, the noble lie deprives individuals of meaningful agency to direct their lives. It traps them into believing myths that result in their subjugation today, and in potentially irreversible harm in the future, regardless of the perceived short-term benefits that come from self-delusion. That is, unless humanity as a whole begins living in indestructible underground pods as drug-addled vegetables.

If we ignore this scenario and assume people will still interact with each other on some level, the result won't be one where people form a long-lasting voluntary association on the basis of live and let live. After all, when has that ever happened? Instead, the outcome is another battlefront for competing solipsisms. It's not enough to be as wrong as everyone else. Consciously or unconsciously, one comes to uphold a contradiction: you must be the "right" kind of wrong to avoid becoming a scapegoat or target.

The critical examination and acceptance of reality are important learned skills. They allow us to dispel such irrationalities. A mere absence of overt conflict between others or between oneself and the laws of nature is not a defense against harm, perceived or real. Truth isn't what breeds conflict, as the truth doesn't change. Conflict is the negative reaction to an observation, thought, or sensation. It's a product of the human mind. And human beings as a collective are fickle animals.


Alternatively, why should toys make the person playing with them feel bad? Toys are intentionally escapist; that’s the point, isn’t it? Feels like a weird argument.

I think the better comparison is what the other commenters brought up - maps. These are tools, and we turn to them for a purpose. And yet even geography is not ahem set in stone. We need our tools to be useful to the user, and that means different things to different people. There’s dispute in all facts. Just ask the people living in Taiwan, or Gaza or Ukraine…


That one AI calls it Taiwan and another calls it China's extra cool eastern China island doesn't seem very interesting to me. There have always been and always will be land disputes. What is more interesting is: suppose you trained an AI with the objective of minimizing "harm" (in the sense of hurting people's feelings, as everyone uses the word now) rather than minimizing error with respect to the truth?


Is it harmful to tell the user that Taiwan is the independent nation of Taiwan? It’s “not true” (depending on who you ask) and could “hurt the feelings” of many people. I bet if you went to mainland China, and then to Taipei, and talked about this, you could offend people with your version of “minimizing error”. There is no way to please both parties at once; one side will get hurt, and there is no “truth”.

That “little girl from Harlem” will never be a medieval princess, but neither will literally anyone alive today, because princesses and castles are a thing of the past. It seems like an example meant to straw-man a culture-war viewpoint. That little girl may factually become a doctor. Should the AI show pictures of (grown-up versions of) little black girls from Harlem when asked to generate a picture of a doctor?

The culture-war outrage over contrived examples in the name of “maximal truth” always misses that the attempts to showcase minorities and counter bias are adding truth to the output. Just because a bias is being artificially suppressed doesn’t mean that the bias was truthful. Sometimes it makes mistakes, but that would still be true if no “corrective” action was applied.


At a certain point we do need to go back to the basics of why we are having this discussion in the first place.

Taiwan's status de jure might be ambiguous, but de facto it more or less is a country. If you were working out a trade deal, booking a holiday, immigrating, etc., and you actually needed to know, then treating Taiwan as part of China would run into so many inconsistencies that it would be impossible to function.


An alternative to filter bubbles is to attempt to be neutral, the way Wikipedia does. It won't please everyone, but it may be a way to go. I guess neutral in that context means close to the average opinion of an informed human. This seems close to the approach taken in practice by ChatGPT, Claude, and the like.


> It makes the culture warriors happy but it will only make things worse.

Needs an argument. One of our political problems at the moment is that we can't accurately assess how people will respond to a new hypothetical, we can't grade personalities by effectiveness because people keep changing their minds, and we can't run controlled experiments where a single personality is put in charge of multiple different things at once.

It'd be awesome to have a biased chatbot that we could actually trial and see how it works on political topics. We'll eventually be able to build a GovernBot 9000 that has a multi-century history of opinions that don't screw a country up. Being able to version and replicate personalities, even extreme ones, might be a huge positive. The great classical liberals of yesteryear who are responsible for so much of our culture were not moderates.


Such a chatbot would be very interesting to experiment with, but one bot's screw-up is another bot's success. So who is the e-arbiter of truth for GovernBot9000? Hopefully it is not GlobalSuperCorp10k. Or maybe said super corp would be OK. Interesting times we live in.


(1) In what sense is that a step back from current state?

(2) Why would it be bad?

There is something of a hysteria at the moment where people are piling on Boeing because its management has been reverting to the mean. We're looking at a world where that doesn't need to happen, because once management reaches a peak we can just stop fiddling with the algorithm. Culture won't change, because the management culture can be encoded in a chatbot. Management is mostly talk, so it is one of the easiest targets for the tech. That doesn't sound like politics ... until you look at what the current political firestorms are about, and then it hits a lot of them.

It is hard to cast opinionated chatbots as a step back. The scary political implication is that we're also looking at a point where maybe one man can control an army, and that would be terrifying, because the check on armies going rogue in a democracy is that all the soldiers are voters. But consistent opinions are generally a good thing - we use the word principled for that sort of thing.


Forgive me for my brief quotes; I’m typing what you wrote because I can’t copy and paste. But I promise I read the whole thing even if it seems I am quoting out of context.

> Needs an argument.

Fair enough. The essence of my argument is that filter bubbles are bad for democracy because they distort people’s understanding of consensus and prevent them from encountering any opinions contrary to their own. A personalized chatbot, to the extent that it is an extension of filter bubbles, is bad for the same reason.

> It’d be awesome to have a biased chatbot

For the scenario you outlined, sure. If it’s circumscribed and labeled as such, it would be an interesting exercise at least. For the general case, it’s probably not a product people will want.

> The great classical liberals of yesteryear

The issue I see with this is that gradually Balkanizing our shared body of understanding will prevent exactly the type of debate you (and I) think was so great from that time.


80%+ of people live where they grew up. So few moved about even as democracy had its best days.

Only 14% have earned more than a bachelor’s degree, while polls suggest the public believes it’s closer to 50%.

We’re incredibly naive and ignorant of ourselves, including how to concretely define democracy.

That “type of debate” existed before you and isn’t going anywhere.

The problem is you’re thinking too specifically when reality is a handful of general patterns. Democracy doesn’t hinge on your fears because you’re a specific person not a body of people. Democracy was born in and triumphed over worse. You’re concerned about your specific understanding of how the world works embedded in your memory, deference to which is not the aggregate’s obligation.

Consider how religion has declined despite the prior generation’s protests that reality would end, yet it hasn’t. Consider Seduction of the Innocent and its author’s fears about comic books fracturing society, which didn’t happen. Consider parental outrage at D&D and rock music, which also did not end the world.

Just another generation bleating like a goat atop a car in a flood at the awareness of its own impermanence and importance, that it has less time ahead than behind. Just vanity leading people to believe their childhood memes are essential.

It’s been interesting watching the Silent Gen, Boomers, and now Gen X post existential dread on social media as the majority of each of those groups pushes into their 50s and beyond.



