>It begins with a person witnessing a crime scene involving a knife, then shows an AI system introducing misinformation by asking about a non-existent gun, and concludes with the witness developing a false memory of a gun at the scene. This sequence demonstrates how AI-guided questioning can distort human recall, potentially compromising the reliability of eyewitness testimony and highlighting the ethical concerns surrounding AI’s influence on human memory and perception.
I'm sorry, what is "AI" about it? That's just basic human psychology. How is this different from being manipulated in the same manner by a human?
I would view this from a practical perspective: Institutions are considering moving their dumb questionnaires behind chat-bots, because they think it'll somehow be more efficient, and here's this research showing that in at least one important case there's a big unexpected danger.
While it is possible that hiring an unbiased human intern to "guide people through the paperwork" would give a similar effect... as a practical matter no institution wants to pay for that, so it's not an option that's on the table.
That said, I wouldn't rule out the idea that an LLM could be worse than an average human helper, since their correlation-following may cause them to introduce stuff into the conversation where a human wouldn't think to or would know better. [0]
You’re correct, it’s nothing special. Police have been implanting memories in suspects using similar techniques to get false confessions for decades.
A few months ago I called up a reporting hotline to report an incident.
Now it wasn't an in-person law enforcement encounter, and I wasn't the suspect but the victim. The agent on the phone was only tasked with taking down the report and forwarding it.
She listened to my narrative and took down the facts. Then she began to relate it back to me, and at every turn, she gave me the wrong details and altered the story.
So I found myself correcting her again and again and ironing out the actual facts so she had them right. And I came to realize that her mistakes were probably not accidents, but she was intentionally prompting me to reinforce the same narrative as I'd stated it, because if someone is lying, fabricating, or embellishing the truth, they won't be able to repeatedly insist on the facts as retained in their memory.
Conversely, I've had interactions with authority figures, who seem to intentionally misspeak as a test. They want to see if I will challenge the veracity of what they said, or if I can accept that their knowledge counts for more, and perhaps I shouldn't openly question them for every trivial matter.
So if the police succeed in implanting false memories, then maybe someone just had a shitty memory to begin with. If someone's involved in a crime, even as an eyewitness, it's important to work with their perspective: testimonies consist of a lot of subjective information, and different people have different capacities for recall. If you ask 3 eyewitnesses what happened, you may get 3 different but true stories, and then you reconcile them. Just ask Matthew, Mark, and Luke.
Is this part of LE training? I'm an attorney. I had a client who was a detective. He must have asked me 50 times about the same thing, slightly varying the order of the facts, hoping, I suppose, that he'd catch me giving a different answer. I don't think he liked the answer, because it cost him money, but it was what it was.
The last time I had to contact the police, I wrote down my story (as the victim) first, and then after talking to them I went over my notes again just to make sure. To be fair, I normally do that before talking to any authority figure.
> So I found myself correcting her again and again and ironing out the actual facts so she had them right. And I came to realize that her mistakes were probably not accidents, but she was intentionally prompting me to reinforce the same narrative as I'd stated it, because if someone is lying, fabricating, or embellishing the truth, they won't be able to repeatedly insist on the facts as retained in their memory.
I think I misunderstood you the first time I read this, so let me verify my revised understanding:
You're saying that she was purposely feeding back false information to check whether you were a reliable narrator? If you fail to correct misinformation, then you have a loose relationship with the truth (either because you're lying, or confused, or perhaps have a mental illness).
If she had the goal of altering MY story for the org's benefit, then she would be far more likely to have me believe that she took my story at face value, then change it later without my knowing.
Why else would she tell me a falsified version of my own narrative and bring those errors to my attention?
Legally for them, that would be a terrible idea. If she's tasked with receiving allegations then their legal team will be interested in knowing exactly what is believed and what accusations are on the table. They definitely do not want some clerical worker faking a story and masking issues that have a real chance of being substantiated or argued in a court case someday. If I'm lying or fabricating then they'd also want a reliable record in their favor in court. The reporting office is motivated for accuracy and that's exactly why she challenged my facts, so that I could reinforce them through repetition and clarification.
It’s not different; that’s the point. But it’s worth pointing out because of the general public’s misunderstanding that “AI” is impartial or less biased (absurd, I know).
I think it’s good to have research like this pointing out these flawed uses of AI before they’re inevitably used as a means of laundering accountability. It’ll happen anyway.
> How is this different from being manipulated in the same manner by a human?
It can be done at scale for very little. I'm not touching AI because I can't know its biases, and it can be enshittified to advertise without the user noticing.
Now the manipulation costs less than the target's time is worth, even if the target is in abject poverty or is a child on pocket money; and it can simultaneously be fully personalised to the combination of target individual, desired outcome, and whatever the target comes back with.
The discussion in the actual paper is interesting:
* The enhanced ability of LLMs to induce persistent false memories, held with high confidence, raises ethical concerns (e.g. humans might be less trusted and less able to do the same).
* For good: LLMs could induce positive false memories or help reduce the impact of negative ones, such as in people suffering from post-traumatic stress disorder (PTSD).
* Systems that can generate not only text but also images, videos, and sound could have an even more profound impact on false memory formation; immersive, multi-sensory experiences may be even more likely to create false memories.
* How to mitigate the risk of false memory formation in AI interactions, e.g. explicit warnings about misinformation or designing interfaces that encourage critical thinking.
* Longitudinal studies should be done examining the long-term persistence of AI-induced false memories beyond one week, to get insight into the durability of the effect.
I can already see dictatorships whipping up false-history photobooks, claiming that a massacre was only three guys getting shot, or that a genocide was started by that minority committing that atrocity. HA.I.tcrimes, they be real.
Indeed; this is the counterpoint to those who want AI to be "free" and "uncensored", describing alignment and refusal as if it's Orwellian.
Just like the real life tyrannies it was inspired by, 1984's Ministry of Truth had many tools to suppress undesirable information — not just the memory holes, but also editing the past, mass production of the new "truth".
State actors and probably big corps and rich people will have access to any kind of AI they wish, regardless! The question is only, what will the common people be able to lay their hands on?
> The question is only, what will the common people be able to lay their hands on?
The ones which were trained to be most effective at pandering to their egos and sycophantically telling them their own team is amazing and everyone else's is terrible.
Your senses get saturated by lies indistinguishable from reality, regardless of where they come from. The price is already within a factor of two of the income from a single impression of a paid advert, and is likely to decline further; and that's the market price for marginally influencing people.
If we as a species don't agree that AI must never, ever be developed in this manner, and actually enforce that, then everyone will be living in someone's fantasy world.
Normal people won't have any choice, or control, regardless. Not even over which fantasy, as it gets created specifically for them.
Your dreams of AI alignment with transnational and corpocryptofascist elements are already dead. Even if you pass laws, they'll be ignored, and you'll lose to the groups who ignore those laws (which is already how the status quo you worship operates anyway). I already know what tech enables this, and I'm not telling you.

AI alignment fanatics = "I'm afraid of status quo power structures being disrupted that give me a false sense of safety/financial security". There is no such thing as "we as a species" agreeing on anything. Utopian fantasy mixed with naive megalomania. Your own fantasy world is that there will be some magical agreement to stop technological/evolutionary progress.

Fortunately, AI destroys the value of marketing online, and the value of mass gathering of data, so all the techbro mass data gathering/social engineering projects inevitably fail. This is of course why you want "alignment". You want to maintain the ability to mass social engineer while the "normal people" as you call them, lack this power. Same old neoliberal crap.
What we will get instead is AI mediated warfare. Actually we are already there, have been there for at least 10 years, and there is nothing you can do to stop that. Other than turning off the internet.
Your comment is projecting quite heavily. Took me a while to even understand what you're — I'd describe that style as "ranting" — about, but I think I see the core of your mistake:
I want alignment so the robots don't literally kill everyone.
I want alignment so the chatbots don't convince us to cultivate botulism toxin in our home-made food.
I want alignment so the search engines are not corrupted by advertising.
I want alignment so random people on the street don't spontaneously assault me because their AI "friend" convinced them that my T-shirt is The Mark of The Beast as foretold in Revelation.
I want alignment with the truth, the whole truth, and nothing but the truth. Not my beliefs, the actual real truth. Even the hard truths, the ones we may not want to be told but which we are nevertheless better off for knowing — and if that isn't possible, at least to aim for that rather than to throw up our hands and say that because someone else is lying that makes it OK for us to do so as well.
> You want to maintain the ability to mass social engineer while the "normal people" as you call them, lack this power. Same old neoliberal crap.
I count myself as a "normal person" for these purposes.
I'm closer to "fully automated luxury space communism" than anything else. We can only get that if the AI is the better angel of our nature, not the exploit optimiser of capitalism (amongst other things). The word "neoliberal" is as much of a buzzword as "woke": the people using it as an insult apply it to everything they don't like rather than to any common consensus definition.
What actually happens is a total destruction of misplaced trust in national government and corporate narratives. Their only counter-response is to pay us off to avoid the use of AI to undermine them. Otherwise we will turn up the ethnonationalism as well as religious contentions, and things people like you don't like will get even worse. There are dozens of fracture points to exploit, because idiot governments thought they could import labor from countries they were recently at war with.
Notably there is no "human control" category.