Chatbots Are Primed to Warp Reality (theatlantic.com)
27 points by peutetre 71 days ago | 25 comments



I can definitely see people becoming overly trusting of LLM output, but I have a feeling it might be a self-correcting problem over time.

Initially, when working with an LLM, if you start with a problem it knows well, it's likely to give good results, and if some minor hallucination creeps in you may well not notice, accepting it because the earlier results were right.

However, it's quite likely you'll hit a wildly wrong statement at some point, and that tends to break the illusion; hopefully people who have that experience will start being more skeptical of what LLMs tell them.


People suck at telling the truth. There are whole occupations out there whose primary job it is to make use of that fact and influence people. I'd say upwards of 50% of the population can't even tell when they are being manipulated by other humans with false info. Even when it is against their own interest. How are these people ever gonna stand up to LLMs who are much more suave than your average con artist? These models were literally trained on all the marketing material that is the modern internet.


LLMs hallucinating, lying and doubling down on things that are wrong seem very human.


Or it will result in another phenomenon similar to Gell-Mann Amnesia.



> suggests that the solicitous, authoritative tone that AI models take—combined with them being legitimately helpful and correct in many cases—could lead people to place too much trust in the technology.

Haven't there been reports lately that people don't trust the news? I'd think that the search engines' AI models would suffer the same fate given similar levels of accuracy.

> No one person, or even government, can tamper with every link displayed by Google or Bing.

Well, Google or Bing can.


There's a difference between reading the news and chatting with an AI - evident because of the verbs.

News is one-sided and usually offers no immediate way of discussing it (besides maybe Reddit and HN). Chatbots, on the other hand, are a correspondence, which makes them more like a discussion or interview... they seem more approachable (human?)

Platforms like c.ai push this to the extreme. While AI relationships seem dystopian right now, I fear they are already creeping up on us - forming beliefs even stronger than news can.

... Google and Microsoft are big in AI, so it's not even a change in "leadership" anyway.


The same text predictors can produce articles, though. The underlying algorithm is agnostic to the difference, and most probably have training data covering both formats.


Yeah, you'll get people ranting about Gell-Mann Amnesia while uncritically ignoring good reporting.


Has The Atlantic written any articles about how Google's skewed top results also warp reality?


There's been a lot of reporting on Google getting worse, particularly in The Verge.


I've wondered this too. Just typing a single letter or two into Google immediately suggests stuff. I hate it. I don't want these random suggestions entering my brain.

You can turn them off if you log in.


It's not just suggestions. On the mobile front page they show suggestions of what people are searching for before you even type a word.


Back in the day I wrote a Google frontend so that I could get search results on a low-bandwidth connection (basically just a fancy proxy server). I bet you could similarly block out the suggestions.

Actually, you could probably just DNS block those? I'll check later on desktop, but Google has a habit of throwing everything on a new domain.
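
A minimal sketch of what that might look like in /etc/hosts, assuming suggestions are still served from a dedicated hostname (suggestqueries.google.com is the one I remember for the autocomplete API; the exact host may have changed):

    # Null-route the host that (I believe) serves Google autocomplete suggestions
    0.0.0.0 suggestqueries.google.com
    # Older suggestion endpoint, if memory serves
    0.0.0.0 clients1.google.com

Of course, if the suggestions come straight from www.google.com itself, DNS blocking won't help without breaking search entirely.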


Chatbots are warping reality. There is a growing number of people who use them as confirmation bias machines because most LLMs still do not disagree very well. And people enjoy being told they are right in an authoritative tone. We now get really angry if an LLM is "patronizing". We expect that it will tell us what we want to hear. And some of that anger is perhaps justified in the most egregious cases of information censorship for the sake of "Silicon Valley ethics", but not all of the anger.

As a software developer, I meet clients nowadays who dismiss all actual implementation issues because an LLM told them their idea is good. They will send screenshots from ChatGPT and shut down any meaningful discussion about the reality of the situation. I've also seen the older generation, and sometimes even quite young people, fall prey to the many blogspam websites pumping out conspiracy content with LLMs. I think we have all seen the blogspam situation.

I think this, together with echo chambers, or more generally the seeking of unnatural levels of validation, is turning into something pathological. Either in the sense that it's pathological to seek only validation and nothing else, or in the sense that this leads to stunted growth and an inability to see nuance. We need some disagreement to properly come of age, to gain wisdom, and to understand the world around us. Developmental psychologists like Erik Erikson place conflict of ideas[0] at the center of a person's mental growth. But many people these days insulate themselves as much as they can from such conflicts. If this continues, it will be transformative for humanity, and very likely not for the better.

[0] https://www.simplypsychology.org/erik-erikson.html


Give three examples


I don't want to give actual examples, but I'm talking about published scientists and people I consider to be reasonably good engineers.


Yeah, I've seen that happen with really smart people. I'm sure they learned at some point that they need to criticize their own ideas, but that apparently goes out the window once the LLM is involved.


Paywall. We need a flair for paywalled articles.


[flagged]


I think it's more subtle than that, and the danger lies more in the massive usage of chatbot AIs. Some people I know use them for many hours every day and get trapped in some kind of "echo chamber". Because they think chatbots have a consciousness like a human and are social "beings" or entities, they sometimes get tricked into false beliefs. It's easier than you think to induce false memories: https://www.nature.com/articles/s41598-022-11749-w


I considered scale and uniformity of bias as possible differences before posting, but I'm not sure I believe it's qualitatively different.

Maybe it's analogous to someone who puts all of their faith in a friend or family member and believes anything they say? Is this more than that - a folie a deux by proxy? We have folks who already do this: they are religious fanatics and listeners of Joe Rogan. Is it really that different?


Social reality is constructed by words. This has a very considerable power to shape physical reality.


I think you said it yourself: some (arguably most) people are impressionable.


I tried to ask my amazing google AI to send a text message today, and it couldn't fucking do it

The tech still sucks, and everyone loves to ignore that it is constantly wrong

ask an AI to help you with a Makefile to see what I mean lmao


People so strongly wish that AI exists that they will believe anything. Pretty sad from a sociological and, much later, historical point of view.



