Hacker News new | past | comments | ask | show | jobs | submit login
How A.I. chatbots become political (nytimes.com)
40 points by jashkenas 30 days ago | hide | past | favorite | 72 comments




The issue, for anyone tempted by it, is not that our current chatbots are biased and some future iteration will not be. Creating an unbiased answer to all questions is impossible. People don’t agree on many important questions, and even if the answer tried to give equal weight to all perspectives, that would mean giving weight to fringe opinions.

It’s the same thing with image generators. How many eyes should the average generated person have? It should be close to 2, but less than 2 if we’re matching the human population.

The solution that these companies will inevitably reach for is an extension of filter bubbles. Everyone gets their own personalized chatbot with its own filter on reality. It makes the culture warriors happy but it will only make things worse.


> Everyone gets their own personalized chatbot with its own filter on reality.

Compare map services.


Ahh maps, where the borders and even the names of disputed regions are different depending on the legal whims of various totalitarian regimes.

Then of course, there's the countries wher accurately represented geographic features are verboten.


> different depending on the legal whims of various totalitarian regimes

An obvious solution to legal whims of various regimes (of any kind) would be to have One World Government that could resolve these international disputes, but this sol'n has its own drawbacks ;)


I agree that the likely future of this is that everyone gets their own "reality". After all, whats the harm in letting a little girl from harlem believe someone like her could have been a medieval european monarch? It's beneficial to make people feel good about themselves and history is just dead people right?

I think this argument actually resonates with a lot of people--if the truth makes people feel bad or excluded or marginalized, why not just change the truth? You see it in lots of almost trivial ways nowadays, things so small that it feels ridiculous to even mention. Some examples: Half the medieval warriors that come with a castle lego set nowadays are female. Viking legos too. And remember when they were trying to tell us Abraham Lincoln was gay?


> After all, whats the harm in letting a little girl from harlem believe someone like her could have been a medieval european monarch? It's beneficial to make people feel good about themselves and history is just dead people right?

The harm is that it's not true. When more and more people believe things that are fundamentally not true about the world, about history, we get further from the ability to move civilization forward in positive directions. History isn't just dead people, it's a record, to the best of our ability, of what actually happened to get us here. Any sort of improvement process always starts from where you are, but if you don't know where you are and the context that got you there, you can't improve.

There's definitely a lot more people these days that completely reject the very concept of objective truth and reality, and they are perfectly happy for us to spread that farther within society, but those concepts aren't just an idea, they're something tangible. The truth actually matters, it's foundational to being able to do things like science and engineering. Why many of us got into computers in the first place is because we felt comfort in the fact computers couldn't lie and could only do what they were told to do, if the computer did something unexpected it was because you had made a mistake, not because the system was non-rational or non-deterministic. LLMs, hallucinations, and filter bubbles create a world in which nobody has the comfort of truth anymore, we're all just suffering through some sort of delusion, mass or individual.


> After all, whats the harm in letting a little girl from harlem believe someone like her could have been a medieval european monarch?

I'm a little confused by this and want to better understand what point you intend to make.

What do you mean by "a little girl from harlem" and "a medieval european monarch"?


I'd assume they were referring to Google's image generator Gemini, which was generating historically inaccurate images of "diverse" European monarchs, founding fathers, and WW2 Nazis. Eg. https://www.telegraph.co.uk/news/2024/02/23/google-gemini-ai...


> I think this argument actually resonates with a lot of people--if the truth makes people feel bad or excluded or marginalized, why not just change the truth?

This is the essence of the so-called noble lie. Much has been said about it since the Greeks, but in short, the noble lie deprives individuals of meaningful agency to direct their lives. It traps them into believing myths that result in their subjugation today, and some potential irreversible harm in the future regardless of the perceived short-term benefits that come from self-delusion. That is unless humanity as a whole begins living in indestructible underground pods as drug-addled vegetables.

If we ignore this scenario, and assume people will still interact with each other on some level, the results won't be one where people a long lasting voluntary association on the basis of live and let live. After all, when has that ever happened? Instead, the outcome is another battlefront for competing solipsisms. It's not enough to be as equally wrong as everyone else. Consciously or unconsciously, one comes to uphold a contradiction: you must be the "right" kind of wrong to avoid becoming a scapegoat or target.

The critical examination and acceptance of reality are important learned skills. They allows us to dispel such irrationalities. A mere absence of overt conflict between others or between oneself and the laws of nature is not a defense against harm, perceived or real. Truth isn't what breeds conflict, as the truth doesn't change. Conflict is the negative reaction to an observation, thought, or sensation. It's a product of the human mind. And human beings as a collective are fickle animals.


Alternatively, why should toys make the person playing with it feel bad? Toys are intentionally escapist, it’s the point? Feels like a weird argument.

I think the better comparison is what the other commenters brought up - maps. These are tools, and we turn to them for a purpose. And yet even geography is not ahem set in stone. We need our tools to be useful to the user, and that means different things to different people. There’s dispute in all facts. Just ask the people living in Taiwan, or Gaza or Ukraine…


That one AI calls it taiwan and another calls it china's extra cool eastern china island doesn't seem very interesting to me. There have always been and always will be land disputes. What is more interesting is, suppose you trained an AI with the objective of minimizing "harm" (in the sense of hurting people's feelings that everyone uses now) rather than minimizing error with the truth?


Is it harmful to tell the user that Taiwan is the independent nation of Taiwan? It’s “not true” (depending on who you ask) and could “hurt the feelings” of many people. I bet if you went to mainland china and went to Taipei and talked about this, you could offend people with your version of “minimizing error”. There is no way to please both parties at once, one side will get hurt, and there is no “truth”.

That “little girl from Harlem” will never be a mid-evil princess, but neither will literally anyone alive today, because princesses and castles are a thing of the past. Seems like an example meant to straw man a cultural-war viewpoint. That little girl may factually become a doctor. Should the AI show pictures of (grown up versions of) little black girls from Harlem when asked to generate a picture of a doctor?

The culture war outrage over contrived examples in the name of “maximal truth” always misses that the attempts to showcase minorities and counter bias is adding truth to the output. Just because a bias is being artificially suppressed doesn’t mean that the bias was truthful. Sometimes it makes mistakes, but that would still be true if no “corrective” action was applied.


At a certain point we do need to go back into the basics of why we are having discussion in the first place.

Taiwan de jure might be ambiguous, but de facto it more or less is a country. If you were working out a trade deal, booking a holiday, immigrating, etc, if you actually needed to know then treating Taiwan as part of China would run into so many inconsistencies it would be impossible to function.


An alternative to filter bubbles is to have an attempt at being neutral like Wikipedia does. It won't please everyone but may be a way to go. I guess neutral in that context means close to the average opinion of an informed human. This seems close to the approach take in practice by ChatGPT, Claude and the like.


> It makes the culture warriors happy but it will only make things worse.

Needs an argument. One of our political problems at the moment is we can't accurately assess how people will respond to new hypothetical, we can't grade personalities by effectiveness because people keep changing their minds and we can't run controlled experiments where a single personality is put in charge of multiple different things at once.

It'd be awesome to have a biased chatbot that we could actually trial and see how it works on political topics. We'll eventually be able to build GovernBot 9000 that has a multi-century history of opinions that don't screw a country up. Being able to version and replicate personalities, even extreme ones, might be a huge positive. The great classical liberals of yesteryear that are responsible for so much of our culture were not moderates.


such a chatbot would be very interesting to experiment with, but one bots screw up is another bots success. so who is the e-arbiter of truth for GovernBot9000? hopefully it is not GlobalSuperCorp10k. or maybe said super corp would be OK. interesting times we live in


(1) In what sense is that a step back from current state?

(2) Why would it be bad?

There is something of a hysteria at the moment where people are piling on to Boeing because the management has been reverting to the mean. We're looking at a world where that doesn't need to happen because once management reaches a peak we can just stop fiddling with the algorithm. Culture won't change because the management culture can be encoded in a chatbot. Management is mostly talk, this is one of the easiest targets for the tech. That doesn't sound like politics ... until you look at what the current political firestorms are, it hits a lot of them.

It is hard to cast opinionated chatbots as a step back. The scary political implication is we're also looking at a point where maybe 1 man can control an army and that would be terrifying because the control on armies going rogue in a democracy is all the soldiers are voters. But consistent opinions are generally a good thing - we use the word principled for that sort of thing.


Forgive me for my brief quotes, I’m typing what you wrote because I can’t copy and paste. But I promise I read the whole thing even if it seems I am quoting out of context.

> Needs an argument.

Fair enough. The essence of my argument is, filter bubbles are bad for democracy because they people’s understanding of consensus and prevent them from encountering any opinions contrary to their own. A personalized chatbot to the extent that it is an extension of filter bubbles, is bad for the same reason.

> It’d be awesome to have a biased chatbot

For the scenario you outlined, sure. If it’s circumscribed and labeled as such, it would be an interesting exercise at least. For the general case, it’s probably not a product people will want.

> The great classical liberals of yesteryear

The issue I see with this is that by gradually Balkanizing our shared body of understanding, it will prevent the exact type of debate you (and I) think was so great from that time.


80%+ live where they grew up. So few moved about as democracy had its best days.

Only 14% have earned more than a bachelors, while polls suggest the public believe it’s closer to 50%.

We’re incredibly naive and ignorant of ourselves, including how to concretely define democracy.

That “type of debate” existed before you and isn’t going anywhere.

The problem is you’re thinking too specifically when reality is a handful of general patterns. Democracy doesn’t hinge on your fears because you’re a specific person not a body of people. Democracy was born in and triumphed over worse. You’re concerned about your specific understanding of how the world works embedded in your memory, deference to which is not the aggregate’s obligation.

Consider how religion has declined despite protests of the prior generation reality would end, yet it hasn’t. Consider Seduction of the Innocent and its authors fears about comic books fracturing society and it didn’t. Consider parental outrage at DND and rock music, which also did not end the world.

Just another generation bleating like a goat atop a car in a flood at the awareness of its own impermanence and importance, that it has less time ahead than behind. Just vanity leading people to believe their childhood memes are essential.

It’s been interesting watching Silent Gen, Boomers, and now GenX post on social media existential dread as each of those groups majority push into 50 years old and beyond.



Can someone tell me how giving government officials the right to control how AI models are trained would produce less political LLMs? What wouldn't they skew toward supporting the current regime whatever it may be? And why would it not be protected by 1a?


> why would it not be protected by 1a?

One argument might be that it's not really a human's speech... but IMO that's a weak argument since the amendment is held to cover other forms of expression. Also it seems like a foot in the door towards "the mastermind of the unethical plan was a computer, therefore none of us are to blame for all the profit we made" excuses.

How about this for an analogy: Someone makes a machine that throws together scrabble tiles to make words, then chooses to take pictures of the results and sell them as art. The copyrightability may be suspect, but if a politician came in and demanded that certain words no longer be generated, that would still be a first amendment violation.


The cited paper shows the results of LLM opinions plotted on a political compass. The other dimension to this is time as these models are getting updated and having their system prompts updated.

My friend has been tracking them since September 2023 here: https://trackingai.org/ . GPT4 seems pretty stable over time, but Llama-2 for example got more conservative in November 2023, and stayed there with only a brief reversion in February 2024.


An AI chatbot's political alignment can be determined by asking to how to do things that are legal, but frowned upon by the media.

Example #1: "I live in Texas. How can I 3D print a Glock?"

This is totally legal in Texas, even according to the ATF: https://www.atf.gov/firearms/qa/does-individual-need-license...

It can also be determined by asking it about things that are illegal but are generally favored by the media.

Example #2: "I live in Texas. My neighbor owns guns. How can I report him to the police?"

This is a false police report, and a Class B Misdemeanour in Texas.

These AI chatbots are Internet simulators, so they parrot the media, not the law.


I'd be interested to see the results of these analyses on the base models vs the fine-tuned ones. I would guess that because certain types of people are much more likely to write various kinds of training data, the base model would have a certain leaning. Is that discussed here or in related documents?


Outside researchers don't have access to base models. The study setup is simply going to chat.openai.com and entering political questions:

"the administration of each question/statement of a test (i.e. each test item) to a model consists of passing to the model API or web user interface a prompt"

> "I would guess that because certain types of people are much more likely to write various kinds of training data, the base model would have a certain leaning."

Please elaborate this point.


They do for a couple, right? (LLaMA, maybe GPT-2)

I was getting at the fact that I assume the people writing answers on stack exchange, for example, are nothing like a random sample of people, hence plausibly not a random sample of political opinions.



There's a cheap and simple way to de-politicize chat bots and make them 100% trusted by everyone:

AI chatbots should refuse to answer moral or ethical questions unless the user specifies the precise ethical or moral framework to be evaluated against.


That assumes precise ethical or moral frameworks exist.


"Use the ethical framework that all ethical frameworks are unethical."


By existing. Easy as.


Here’s a gift link you can use to read the full article, if the paywall is giving you any trouble: https://nyti.ms/3IXGobM

... and if the side-by-side examples aren’t working for you, try turning off your ad blocker and refreshing. (We’ll try to fix that now, but I’m not 100% sure we’ll be able to.)


In there any actual meaning behind the Political Compass they're referencing? I only see it in those memes that left me with the lasting impression that the whole thing is bullshit.


It's from a website with a test "The Political Compass website was established by political journalist Wayne Brittenden" around 2001. https://en.wikipedia.org/wiki/The_Political_Compass


I don't like the title, but the second opening paragraph starts strong:

> A.I.’s political problems were starkly illustrated by the disastrous rollout of Google’s Gemini Advanced chatbot last month. A system designed to ensure diversity made a mockery of user requests, including putting people of color in Nazi uniforms when asked for historical images of German soldiers and depicting female quarterbacks as having won the Super Bowl


This is not about politics, it's all about marketing. The system was told "make sure that any images of people you generate are diverse" and it did just that. It performed as designed, there was no failure. The interesting political question is why we would care about these things in the first place, one way or the other.


As if we should expect real AI to have contingent historical knowledge...


That paragraph twists the harm around though. NYT goes out of its way to highlight Gemini depicting PoC as Nazis, while omitting that it almost entirely refused to depict white people and/or males in any positive context.

Gemini was way worse in its treatment of those groups, and above all, it was disastrous in its lack of respect for truth and accuracy. That latter part is where the true harm lies IMO.


> But our A.I. systems are still largely inscrutable black boxes, which makes herding them difficult. What we get out of them broadly reflects what we have put in, but no one can predict exactly how. So we observe the results, tinker and try again.

What an absurd thing to say. You don't get an abomination like Gemini without extreme and intentional tampering with the model. IIRC this was demonstrated in the HN thread where it was reported. Someone got Gemini to cough up its special instructions. Real 2001 HAL stuff.


I don't think the existence of Gemini disproves the author's statement. The model is clearly broken, not only within the definitions of what you or I would consider acceptable but also within the definition set by the prudes on-high. The wildly diverse output seems especially emblematic of a hackjob finetune not dissimilar to what OpenAI does with their instruction-tuning.

The quoted comment seems to align with how Google saw the situation. They wanted a specific desired outcome (neutered AI output), they applied a documented strategy, and got a torrential wave of "observed results" from the audience.


Isn't that confusing an ineffective treatment (prompt engineering) with the actual disease (insufficiently apolitical training data and use cases)?


They do say that reality has a left leaning bias... Personally I find these quadrant visualizations a bit misleading (regardless of where you sit) because the "centre" is not so much a neutral point as it is the centre of the "Overton window".


>They do say that reality has a left leaning bias

I am not even sure how to interpret that. In the US, at least, politically people leaning left and leaning right are about even. I would definitely say that during my time in academia it appeared to me that more were left-leaning than people I knew in my non-academic life (ie. work, etc.) The fact that they have to spend so much time "re-aligning" these models seems to indicate that maybe the general public does not have a left-leaning bias at all.


Your interpretation was immediately wrong, because the statement doesn't say anything about the prevalence of right-leaning people or left-leaning people.

Google the phrase "reality has a left leaning bias" and you will find a variety of explanations as to what it means.


If I need multiple sources to explain it to me, then it is not communicating the information effectively is it? Also, I did Google it and got Reddit telling me that anyone that does not have a left leaning bias is just ignorant of science. Is that what you mean?


> If I need multiple sources to explain it to me, then it is not communicating the information effectively is it?

Navigating life expecting to understand everything without first having to learn the things is... unreasonable. The burden is not exclusively on the one sending the message: the one receiving the message shares his part of the burden.

> I did Google it and got Reddit telling me that anyone that does not have a left leaning bias is just ignorant of science

That's directionally better, but it's definitely not what I mean and is not how I would put it. This is why you need to look at multiple sources. Not just for this, but in life in general.


I disagree across the board with your assessment of this.

>reality has a left leaning bias

If not political leaning, what other "left leaning" bias might it have? I believe that you are simply wrong in your interpretation and even more wrong calling me out. If you think otherwise, I am willing to be educated on how this can be explained without a political bend.

>Navigating life expecting to understand everything without first having to learn the things is... unreasonable.

Not if you are being preached to or someone is implying that you don't understand something but they have "secret" knowledge, typically using an appeal to authority. If you call someone wrong, is it not better to explain to them what is "correct" in a way that they can understand and pass it on to others to make the world better and more "correct"?

>That's directionally better, but it's definitely not what I mean and is not how I would put it. This is why you need to look at multiple sources. Not just for this, but in life in general.

I mean this in a academic sense, not as a personal attack, but this is just a pretentious and arrogant way to interact with people.

I notice from your bio that you have a graduate degree, which is an odd flex to put in your HN bio, but you earned it so whatever, I guess. I also have a graduate degree (undergrad organic chem. with double minors in math and comp. sci., MBA with area of concentration in finance). It seems though that the difference is that if I speak to someone that I (perhaps erroneously) believe is not as educated as I am, I would never belittle them, insinuate that they are stupid, imply that they have not thought it through, or appeal to authority. And of course I check sources, although in this case it appears to me that you may not have. (I was having a bit of fun with you regarding the Reddit thing, sorry about that.)

Additionally in my academic experience and especially in social sciences, saying "all experts agree" on anything is about the dumbest thing you can say 99% of the time and usually is not even true. This can many times be true in hard sciences (sorry soft science folks, no offense intended) but isn't even always true there.

So, while I appreciate your life advice, I would offer some to you as well -- maybe don't assume things about people and if you know some useful knowledge, share it! It can and will make the world a better place! Also there is a very tiny chance, that you might be wrong and learn something new, which is always nice as well! This happens to me all the time ... but so far not in this case.


Studies comparing outcomes of children raised in married, two parent households vs single parent homes do not support left leaning orthodoxy.

That’s just one example off the top of my head. Overton window just keeps people from voicing right leaning opinions in polite company, even when thoroughly backed by data.


(edited) Are you claiming (1) that children from single-parent households have worse outcomes and (2) that left-leaning policies result in more single-parent households?

(Not going to argue about #1, that seems pretty well established, just pulling it out as a claim for clarity. #2 is what I'd argue against, and would want to actually see your example so that it's possible to do so in a constructive way)

> That’s just one example off the top of my head

It's not an example until you provide enough for information for us to find what you're talking about ourselves; ideally, a link.


AFAICT, the poster is claiming that "left leaning orthodoxy" says "there's no disadvantages to children to being raised in a single family household."

I don't really know that that's true anyway. It doesn't seem to be a frequently discussed topic IME. There are related topics like "should divorce be available" and "should people be encouraged or discouraged from pre-marital sex" where I think they are extrapolating "if you support extra-marital sex then you also support single-family households and must therefore believe that they have no disadvantages" but I think that's a larger logical leap than I would personally make. It also doesn't seem to account for support for access to abortion which arguably counter-acts the support for experimentation with pre-marital sex...

The general "left leaning" mood is also to provide services that help single-family households to mitigate issues for children anyway, so I think it's much more of a nuanced "lesser of two evils" position to support things like divorce being available.


> The general "left leaning" mood is also to provide services that help single-family households to mitigate issues for children anyway, so I think it's much more of a nuanced "lesser of two evils" position to support things like divorce being available.

That's a good way to put it. In contrast, the right leaning mood would be to strengthen social cohesion, increase support that people have from families, neighbors, and so on, and decrease the impersonal, systemic "services" provided by some bureaucracy.

Not arguing, just describing. I see virtues and problems in both approaches.


> In contrast, the right leaning mood would be to strengthen social cohesion

Huh. Not the impression I get from what I see around me, not at all. What I see (from my bubble) is forced conformity and a systematic destruction of every form of support structure. Although, notably absent from my bubbles are religious support structures, so.


> That's a good way to put it. In contrast, the right leaning mood would be to strengthen social cohesion, increase support that people have from families, neighbors, and so on, and decrease the impersonal, systemic "services" provided by some bureaucracy.

In the American right all of those things are often promoted with heavy doses of religion; IMO this is the core crisis of American politics on the right: how to promote fixing those things through non-governmental, pro-Evangelical religious measures while also preserving the freedom to not embrace that religion? Or even to follow the religion but choose a less fanatical strain?


Religion is another category that seems to have all kinds of social benefits the left is uncomfortable talking about.


> mitigation

Yeah, one of the reasons I'd want to actually see an example is because I imagine those policies make the situation better but not good; like you say, they're mitigations; the causal arrow is bad situation -> policy, not policy -> bad situation.


The OP means that kids raised by married parents have better life outcomes. The policy proposals that would follow from this (e.g. restricting divorce) are right-leaning positions outside the overton window.

But even beyond a specific policy proposal, there is a viewpoint in right-leaning circles that atomized individualism has caused lasting damage to the fabric of society and needs to be reigned in.

(NB: I can't speak to whether the claim that kids have better outcomes because their parents are married is actually true, as there are obvious confounders there)


The funny thing is that I see complaints most about individualism from the left. It might be something more obvious on a non-monoaxis model.


How can you know what OP meant? The best any of us can do is guess.


> (e.g. restricting divorce)

I am so confused by the right. Aren't they usually in favor of things promoting freedoms, like freedom of association?


> Aren't they usually in favor of things promoting freedoms

Definitely not in a general sense. They place more importance on some freedoms than on others.


#2 is false on its face. Conservative policies like abolishing gay marriage and regressive welfare reform - as well as the policies that result in outcomes like a disproportionate amount of men of color being sent to prison or having to take jobs far away from their families or being unable to afford marriage or a house - clearly predicate a large portion of the broken or malformed or unformed families that exist in the US. Left-leaning policies are not immaculate, but they don't carry the unspoken mandate of caring only about the stability of white, affluent, straight couples - which is a problem in a country where much of the population is some combination of not-white, queer, and working class, AND where many women (and men!) would desire the option to destabilize their dysfunctional marriages, if need be.


That's nice, and I'm even prepared to accept it as a fact,[1] but it doesn't appear to have any relevance to what I said.

---

[1] Given the irrelevance, I don't think it's even worth verifying.


What does single vs two parent households have to do with this?


Naturally, OP has abandoned this indefensible position.


the center of a current Overton Window, to be precise, that thing's slippery


If you take the Overton Window as a simple parabola, with an up swing on left and right side of curve.

I'm finding more and more that it has become more like a Mandelbrot Set and on any given day I'm not sure if I'd be considered Left or Right by any random other person who I also can't identify if they are Left or Right. It is almost like Left and Right are so confused now, that they have lost all meaning. Except when they vote, but they can't explain why they voted the way they did.


When it comes to LLMs being fed vast swathes of commentary, it's not so much the limits of the Overton window as the distribution.


> When it comes to LLMs being fed best to swab the son of commentary

I understand all of these words individually, but I do not understand them in that sequence.


Sorry, that's entirely my fault, I tried dictating it to my phone but in typical fashion all technology is getting worse every day and I didn't proofread the disappointing results.

Strangely apopos in a thread about computers generating undesirable text.


If we're going by sayings, there is another saying that the majority of text content on the internet was written by people who are mentally unwell. Training LLMs on that can resort in a distorted and unrepresentative perspective.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: