
The "off by one" predilection of LLMs is going to lead to this massive erosion of trust in whatever "Truth" is supposed to be, and it's terrifying and going to make for a bumpy couple of years. (Or the complete collapse of objective knowledge, on a long enough time horizon.)

It's one thing to ask an LLM when George Washington was born, and have it return "May 20, 2020." It's another thing to ask it, and have it matter-of-factly hallucinate "February 20, 1733." At first glance, that... sounds right, right? President's Day is in February, and has something to do with his birthday? And that year seems to check out? Good enough!

But it's not right. And it's the confidence and bravado with which LLMs report these "facts" that's terrifying. It just misstates information, calculations, and detail work, because the stochastic model compelled it to, and there weren't sufficient checks in place to confirm or validate the information.

Trust but verify is one of those things that's so paradoxical and cyclical: if I have to confirm every fact ChatGPT gives me with... what I hope is a higher source of truth like Wikipedia, before it's overrun with LLM outputs... then why don't I just start there? If I have to build a validator in Python to verify the output then... why not just start there?
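
To make that last point concrete, here's roughly what such a validator might look like - a minimal sketch, assuming a hand-curated trusted reference (which is exactly the circular part: the trusted source has to exist before the check can run):

    from datetime import date

    # Hypothetical trusted reference -- in practice this would be Wikipedia,
    # Wikidata, or some other source you hope hasn't been polluted yet.
    TRUSTED_BIRTHDAYS = {
        "George Washington": date(1732, 2, 22),  # New Style (Gregorian)
    }

    def verify_birthday(name: str, claimed: date) -> bool:
        """Check an LLM-claimed birthday against the trusted source."""
        known = TRUSTED_BIRTHDAYS.get(name)
        if known is None:
            raise LookupError(f"no trusted record for {name}; cannot verify")
        return claimed == known

    # The plausible-sounding hallucination from above fails the check:
    print(verify_birthday("George Washington", date(1733, 2, 20)))  # False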

We're going to see some major issues crop up from this sort of insidious error, but the hard part about off-by-ones is that they're remarkably difficult to detect, and so what will happen is data will slowly corrupt and take us further and further off course, and we won't notice until it's too late. We should be so lucky that all of LLMs' garbage outputs look like glue on pizza recommendations, but the reality is, it'll be a slow, seeping poisoning of the well, and when this inaccurate output starts sneaking into parts of our lives that really matter... we're probably well and truly fucked.




This is semi-offtopic, but "trust but verify" is an oxymoron. Trusting something means I don't have to verify whether it's correct (I trust that it is), so the saying, in the end, is "don't verify but verify".


Pragmatically, the statement was made famous in English by a conservative US president, addressing the nation, including his supporters, who trusted him, but not the Soviets with whom he was negotiating.

Saying, in effect: "you trust in me, I'm choosing to trust that it makes sense to make an agreement with the USSR, and we are going to verify it, just as we would with any serious business, as is proverbially commonsensical" is perfectly intelligible.

There is nothing cunning about clinging to a single, superficial, context-free reading of language.

Human speech and writing are not code; ambiguity and a range of possible meanings are part of their power and value.


So "trust me, but verify others"? Where have you seen this adage used in this sense? It's not even used like that in the original Russian, where Reagan lifted it from.


I think that’s a rather peculiar interpretation. I always thought it was pretty obvious that Reagan was just saying that he didn’t trust the Soviets, and found a polite excuse not to in the form of the Russian proverb.


This is not quite true. "Trust" is to give permission for someone to act toward achieving some result. "Verify" means to assess the achieved result, and to correct, a posteriori, the probability with which said person is able to achieve the abovementioned result. This is how Bayesian reasoning works.

Trust has degrees. What you have described is "unconditional trust". That very rarely works.


> "Trust" is to give a permission for someone to act on achieving some result.

This would make the sentence "I asked him to wash the dishes properly, but I don't trust him" nonsensical, as your definition expands it to "I asked him to wash the dishes properly, but I didn't give him permission to achieve this result".

If you say "I asked someone to do X but I don't trust them", it means you aren't confident they'll do it properly, thus you have to verify. If you say "I asked him to do X and I trust him, so I don't need to check up on him", it's unlikely to leave people puzzled.

It's surprising to me to see this many comments arguing against the common usage of trust, just because of a self-conflicting phrase.


Why could I not say "I trusted him to do the dishes properly; after he was done, I verified; it's a good thing I trusted him to do the dishes properly, as my supervision would have been unwarranted and my trust was warranted"?

I trusted someone to do their task correctly; after the task was done, I verified that my trust was warranted.


What would be different if you didn't trust them to do it correctly?


Instead of sitting in my office doing my work, then, spending a few minutes to verify once they're done, I'd sit in the kitchen next to them checking it as they went, being both distracted AND probably spending more time. I'd much rather trust but verify.


There's a much closer example I think people here would naturally understand and even advocate for, without connecting it to the phrase:

"Trust but verify" means letting a junior do the work you assigned them, then checking it afterwards in testing and code review. Not trusting would be doing it yourself instead of assigning it to them. Trusting but not verifying would be assigning them the work then pushing it live without testing it.


I would say in this instance you don't trust the junior. In fact in corporations, I would say there's very little trust.

We used to trust people to just do what they think is best. But then we get bribery, harassment, lawsuits... we don't do that anymore.

In my opinion, not having trust is not a bad thing. It has a poor connotation so the result is that we modify the meaning of trust so we can say everyone trusts everything.

For example, one thing I trust is Nutrition Facts. I trust that what I'm eating actually contains what it says it contains. I don't verify it. Why? Because I know someone, somewhere is looking out for this. The FDA does not trust the food industry, so sometimes they audit.

There's many, very good, things I don't trust. I don't trust the blind spot indicator in my car. I turn my head every time. Does that mean the technology is bad? No, in my opinion, but I still don't trust it.


> Trust, but verify (Russian: доверяй, но проверяй, romanized: doveryay, no proveryay, IPA: [dəvʲɪˈrʲæj no prəvʲɪˈrʲæj]) is a Russian proverb, which rhymes in Russian. The phrase became internationally known in English after Suzanne Massie, a scholar of Russian history, taught it to Ronald Reagan, then president of the United States, who used it on several occasions in the context of nuclear disarmament discussions with the Soviet Union.



> He said "President Reagan's old adage about 'trust but verify' ... is in need of an update. And we have committed here to a standard that says 'verify and verify'."


French armed forces have a better version of this saying. “Trust does not exclude control.” They’re still going to check for explosives under cars that want to park in French embassies.


It’s interesting to notice that, etymologically speaking, the French and English words have completely different roots and therefore evoke slightly different ideas which are lost in translation.

Trust shares its root with truth. It’s directly related to believing in the veracity of something.

Confiance comes from the Latin confidere, which means depositing something with someone while having faith they are going to take good care of it. The emphasis is on the faith in the relationship, not the truthfulness. The tension between trust and control doesn’t really exist in French. You can have faith but still check.


> Trust shares its root with truth. It’s directly related to believing in the veracity of something.

Would you mind sharing your reference on that? All the etymology sites I rely on seem to place the root in words that end up at "solid" or "comfort".


Definitely, and that’s not incompatible with what I’m saying.

You are indeed looking far back, to Proto-Indo-European, where words are very different and sometimes a bit of guesswork.

If you look at the whole tree, you will see that trust, truth, and true all share common Germanic roots (that’s pretty obvious just by looking at them), which are indeed linked with words meaning “solid” and then “promise, contract”.

What’s interesting is that the root is shared between “truth” and “trust”, while in French it’s not (vérité from veritas vs confiance from confidere).


I think a better translation of "control" in that saying is "checking" or "testing". "Control" in present-day English is a false cognate there.


I can’t edit my message, but I agree with you.


That's only one possible meaning of the word "trust," i.e. a firm belief.

Trust can also mean leaving something in the care of another, and it can also mean relying on something in the future, neither of these precludes a need to verify.

Edit: jgalt212 says in another reply that it's also the English translation of a Russian idiom. Assuming that's true, that would make a lot of sense in this context, since the phrase was popularized by Reagan talking about nuclear arms agreements with the USSR. It would be just like him to turn a Russian phrase around on them. It's somewhat humorous, but also conveys "I know how you think, don't try to fool me."


A better phrase would be “Use it but verify”, simply.


Yes, which boils down to "verify".


It's possible to trust (or have faith) in my car being able to drive another 50k miles without breaking down. But if I bring it to a mechanic to have the car inspected just in case, does that mean I never had trust/faith in the car to begin with?

"I trust my coworkers write good code, but I verify with code reviews" -- doing code reviews doesn't mean you don't trust your coworker.

Yet another way to look at it: people can say things they believe to be true but are actually false (which isn't lying). When that happens, you can successfully trust someone in the sense that they're not lying to you, but the absence of a lie doesn't guarantee a truth, so verifying what you trust to be true doesn't invalidate your trust.


We're getting into the definition of trust, but to me trust means exactly "I don't need to verify".

If I say I trust you to write correct code, I don't mean "I'm sure your mistakes won't be intentional", I mean "I'm sure you won't have mistakes". If I need to check your code for mistakes, I don't trust you to write correct code.

I don't know anyone who will hear "I trust you to write correct code, now let me make sure it's correct" and think "yes, this sentence makes sense".


> to me trust means exactly "I don't need to verify".

If you use the slightly weaker definition that trust means you have confidence in someone, then the adage makes sense.


The issue here is that the only value of the adage is in the sleight of hand it lets you perform. If someone asks "don't you trust me?" (i.e. "do you have to verify what I do/say?"), you can say "trust, but verify!", and kind of make it sound like you do trust them, but also you don't really.

The adage doesn't work under any definition of trust other than the one it's conflicting with itself about.


I think I just provided an example where it makes sense.

Specifically: I have confidence in your ability to execute on this task, but I want to check to make sure that everything is correct before we finalize.


“I trust that you believe your code is correct, now let’s double check”.

Or maybe the proverb needs to be rewritten as “feign trust and verify”


Or assume good faith but, since anyone can make mistakes, check the work anyway.

That's a bit wordy but I'm sure someone can come up with a pithy phrase to encapsulate the idea.


"Trust, but verify"?


It's a matter of degrees. Absolute trust is a rare thing, but people have given examples of relative trust. Your car won't break down and you can trust it with your kids' lives, almost never challenging its trustworthiness, but still you can do checkups or inspections, because some of the built-in redundancies might be strained. Trusting aircraft but still doing inspections. Trusting your colleagues to do their best but still doing reviews, because everyone fucks up once in a while.

The idea of trusting a next-token predictor (jesting here) is akin to trusting your System 1 - there's a balance to find where you force yourself to engage System 2 and correct biases.


I always wondered about that


Or another way of understanding it: trust (now), but verify (later).

It's a Russian proverb BTW: https://en.wikipedia.org/wiki/Trust%2C_but_verify


My theory is that it's a sophism to mollify people who are offended by not being trusted, or to mollify people who think not trusting is rude.

I'd just as soon see it not used, though.


It's a more friendly way of saying "trust no one".


It basically means "trust, but not too much."


Is it an oxymoron to generate an asymmetric cryptographic signature, send it to someone, and have that someone verify the signature with the public key?

Why not just "trust" them instead? You have a contact and you know them, can't you trust them?

This is what "trust but verify" means. It means audit everything you can. Do not rely on trust alone.

An entire civilization can be built with this methodology. It would be a much better one than the one we have now.
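
For concreteness, the verification in question is a few lines - a minimal sketch using the Python "cryptography" package, assuming Ed25519 keys (any signature scheme works the same way):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    message = b"trust, but verify"
    signature = private_key.sign(message)

    # The recipient doesn't take the sender's word for it -- they check.
    try:
        public_key.verify(signature, message)
        print("Signature valid: message is authentic.")
    except InvalidSignature:
        print("Signature invalid: do not trust this message.")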


> Is it an oxymoron to generate an asymmetrical cryptographic signature, send it to someone, and that someone verify the signature with the public key?

Of course not. I verify because I don't trust them.

> Why not just "trust" them instead? You have a contact and you know them, can't you trust them?

No, the risk of trust is too high against the cost of spending a second verifying.

> This is what "trust but verify" means. It means audit everything you can. Do not rely on trust alone.

Your comment just showed an example of something I don't trust and asked "why not trust instead"? The question even undermines your very point, because "why not trust them instead?" assumes (correctly) that I don't trust them, so I need to verify.


It was sarcasm. "Why not trust them instead?" Clearly, you wouldn't and you can't. It takes moments to verify a signature, so just do it.


> An entire civilization can be built with this methodology. It would be a much better one than the one we have now.

No, it wouldn't. Trust is an optimization that enables civilization. The extreme end of "verify" is the philosophy behind cryptocurrencies: never trust, always verify. It's interesting because it provides an exchange rate between trust and kilowatt hours you have to burn to not rely on it.


Yes, let's trust VCs and bankers instead, they seem to be great keepers of civilization -- no calamities in sight with them at the helm /s


Possible > impossible.

I'd first trust unicorns shooting rainbows out of their posteriors before the cryptocurrency vision; neither works for fostering civilization, but at least the unicorns aren't proposing an economy based on paying everyone for wasting energy.


The cryptocurrency economy is a sham to discredit a workable future.

The bitcoin economy, however, is a 1st generation system of distilling energy into value. It is the most honest form of value storage humanity has ever encountered and represents a product rarer than anything in the universe. Gold does not hold a candle to the scarcity of bitcoin, and yet bitcoin is more divisible and manageable.

These are neutral aligned systems. How we use them is up to us. Bitcoin, like an electric vehicle, does not care where the electrons come from. It will function either way.

Does your civilization use fossil fuels that poison the population and destroy the planet?

Bitcoin will run using that, and it will exponentially increase consumption.

Does your civilization use nuclear fission and fusion (which includes "renewables" since they are a direct fusion byproduct), that have manageable side effects for exponentially larger clean energy generation compared to anything else?

Bitcoin will run using that, and it will exponentially increase consumption.

Bitcoin is a neutral entity to distill energy into value. It cannot be tampered with like the federal reserve and a world cabal of bankers. You cannot negotiate with it, bail it out, or enrich your friends by sabotaging the ruleset for yourselves.

If your society has a selfish population, then it will be destroyed by the energy it requires to function. It is trivial to use energy sources exponentially more powerful without the destruction. The trouble with those sources is they do not have a "profit" motive, so countless elite will lose their golden spoons, and the synthetically generated "economy" will crash.

In exchange for the "economy" collapsing, the general populace can breathe again, the planet will stabilize, energy consumption can continue to grow exponentially without any harm, and a golden age for all beings will begin in this reality.

But my musings will have to stop here. I can say with certainty: you are someone who hasn't even remotely spent time thinking about and understanding this problem space on a deep level, so it is strange you would comment so confidently. It doesn't matter who you are, how much money you have, what innovations you've conceived and created, how respected you are, none of that matters. You're missing something very big here. If I was you, I would take the time to figure it out.

And truth is, I am not even replying to you. I write this out for the unspeaking and silenced people who are actually paying attention to validate their correct thinking.


> why not just start there?

Because there are many categories of problems where it's much easier to verify a solution than it is to come up with it. This is true in computer science, but also more generally. Having an LLM restructure a document as a table means you have to proofread it, but it may be less tedious than doing it yourself.
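
A toy illustration of that asymmetry, as a sketch (the helper here is hypothetical, not any particular library): checking that an LLM-built table still contains every fact from the source is a cheap containment test, while building the table by hand is the tedious part.

    def table_preserves_facts(source_facts: list[str],
                              table_rows: list[list[str]]) -> bool:
        """Cheap verification: every fact from the source must appear
        somewhere in the LLM-generated table."""
        cells = {cell.strip() for row in table_rows for cell in row}
        return all(fact in cells for fact in source_facts)

    facts = ["George Washington", "1732-02-22"]
    llm_table = [["George Washington", "1733-02-20"]]  # plausible but wrong
    print(table_preserves_facts(facts, llm_table))  # False -- caught it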

I agree that asking straightforward factual questions isn't one of those cases, just as I agree with most of your post.


This is why I don't use LLM code generators for restating or rewriting a solution that includes the current code: they're inclined to remove parts they think don't matter, but that actually matter a lot further down the line. And if that's not caught in code review, it can cause severe and difficult-to-debug issues. I'm sure there will be an epidemic of these issues in a few years, because developers are definitely lazy enough to rely on them.


Off topic, but a funny thing about asking about George Washington's birthday is there are two possible answers because of British calendar reform in 1750 (although we've settled on recognizing the new-style date as his birthday).

footnote [a] on wikipedia: https://en.wikipedia.org/wiki/George_Washington#cite_note-3


Heh, I knew that but didn't want to veer into too much mendacity.


Quite a while ago already, I was entertained by a particular British tabloid article which had been "AI edited". The article was partially correct, but then it went badly wrong, because its subject was recent political events that had happened some years after the point where the LLM's training data ended. Because of this, the article contained several AI-generated contextual statements about the state of the world that had been true two years earlier, but not anymore.

They quietly fixed the article only after I pointed its flaws out to them. I hope more serious journalists don't trust AI so blindly.


Maybe it will be analogous to steel. For most of the post-nuclear age, steel has been contaminated with radionuclides from atmospheric nuclear weapon use and testing. To get "low background" steel you had to recycle steel that was made before 1945. Maybe to fact-check information we'll eventually have to go to textbooks or online archives that were produced before 2023.

(Steel contamination has slowly become less of an issue as most of the fallout elements have decayed by now. Maybe LLMs will get better and eventually the hallucinated "facts" will get weeded out. Or maybe we'll have an occasional AI "Chernobyl" that will screw everything up again for a while.)


Because of the LLM Internet we have today, I already go out of my way to find books and information that I can verify were written by a human before GPT.


You fear that over time, artificially intelligent systems will suffer from increasingly harmful variance due to deteriorating confidence in training data, until there is total model collapse or otherwise systemic real-world harm.

Personally, I believe we will eventually discover mathematical structures which can reliably extract objective truth, a sieve, at least in terms of internal consistency and relationships between objects.

It's unclear philosophically whether this is actually possible, but it might be possible within specific constraints. For example, we could solve the accuracy of dates specifically through some sort of automated reference-checking system, and possibly generalize the approach to entire classes of problems. We might even be able to encode this behavior directly into a model. Dates, at least, are purely empirical data: there is an objective moment, or statistically likely range of time, recorded somewhere. The problem then becomes one of locating, identifying, and analyzing the correct sources for a given piece of information, and having a provably sound method of checking the work, likely not through an LLM.
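
As a sketch of what such automated reference-checking might look like (the sources here are placeholders for real archives or databases, and the majority vote is only one possible policy), the idea is to demand agreement across independent sources before a date is accepted:

    from collections import Counter
    from typing import Callable, Optional

    def check_date(claim: str,
                   sources: list[Callable[[str], Optional[str]]]) -> tuple[Optional[str], float]:
        """Query several independent sources for the same fact and report
        the majority answer along with the level of agreement."""
        answers = [s(claim) for s in sources]
        answers = [a for a in answers if a is not None]
        if not answers:
            return None, 0.0
        best, count = Counter(answers).most_common(1)[0]
        return best, count / len(answers)

    # Placeholder sources: two agree, one has drifted.
    sources = [
        lambda claim: "1732-02-22",
        lambda claim: "1732-02-22",
        lambda claim: "1733-02-20",  # the polluted well
    ]
    print(check_date("George Washington's birth date", sources))
    # -> ('1732-02-22', 0.666...)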

I think we will come to discover that LLMs/transformers are a highly generalizable and integral component of a complete artificial brain, but we will soon uncover better metamodels which exhibit true executive functioning and self-referential loops which allow them to be trusted for critical tasks, leveraging transformer architecture for tasks which benefit from it, while employing redundancy and other techniques to increase confidence.


I agree with this sentiment about the erosion of trust and the potential issues. The illusion of facts and knowledge is a great moral hazard that AI companies are willing to step around while the market-share battles play out. More responsible AI companies, stronger government policy, better engineering, and less dumb users are all part of the solution here.

This is more solvable from an engineering perspective if we don't take the approach that LLMs are a hammer and everything is a nail. The solution, I think, is along the lines of breaking the issue down into three problems: 1) understand the intent of the question, 2) validate the data in the result set, and 3) provide a signal to the user of the degree to which the result matches the original intent.

LLMs work great for understanding the intent of the request; to me this is the magic of LLMs - when I ask, it understands what I'm looking for, as opposed to Google, which has no idea: here's a bunch of blue links, you go figure it out.

However, more validation of results is required. Before answers are returned, I want the result validated against a trusted source. Trust is a hard problem, and probably not in the purview of the LLM to solve. Trust means different things in different contexts. You trust a friend because they understand your worldview and have your best interest in mind. Does an LLM do this? You trust a business because they have consistently delivered valuable services to their customers, leveraging proprietary, up-to-date knowledge acquired through their operations, which rely on having the latest and most accurate information as a competitive advantage. Descartes stores this morning's garbage truck routes for Boise, ID in its route planning software - that's the only source I trust for Boise, ID garbage truck routes. This, I believe, is the purpose of tools, agents, and function calling in LLMs, and of APIs from Descartes.

But this trust needs to be signaled to the user in the LLM response. Some measure of the original intent against the quality of the response needs to be given back to the user, so that it's not just an illusion of facts and knowledge, but a verified response that the user can critically evaluate as to whether it matches their intent.
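
A rough sketch of that three-stage shape, with every function stubbed out (none of this is a real API; the point is the separation of concerns, with the LLM confined to stage 1):

    def llm_parse_intent(query: str) -> dict:
        # Stub standing in for an LLM call that extracts structured intent.
        return {"topic": "garbage_routes", "city": "Boise, ID"}

    def trusted_lookup(intent: dict) -> dict:
        # Stub standing in for an authoritative system of record,
        # e.g. the routing vendor's own API.
        return {"routes": ["Route 12", "Route 7"], "provenance": "vendor-api"}

    def match_score(intent: dict, result: dict) -> float:
        # Stub: some measure of how well the result covers the intent.
        return 0.97

    def answer(query: str) -> dict:
        intent = llm_parse_intent(query)    # 1) understand the question
        result = trusted_lookup(intent)     # 2) validate against a trusted source
        return {                            # 3) signal the trust back to the user
            "result": result["routes"],
            "source": result["provenance"],
            "confidence": match_score(intent, result),
        }

    print(answer("When does garbage pickup run in Boise?"))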


I believe it’s going to become counterproductive sooner than anyone might think, and in fairly frustrating ways. I can see a class of programmers trading their affinity for the skill for a structurally unstable crutch.

I was using Perplexity with Claude 3.5 and asked it how I would achieve some task with langchain and it gleefully spat out some code examples and explanations. It turns out they were all completely fabricated (easy to tell because I had the docs open and none of the functions it referred to existed), and when asked to clarify it just replied “yeah this is just how I imagine it would work.”
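
One cheap mechanical check that would have caught this - a sketch that only verifies the names the LLM cited actually exist in the installed package, not that they behave as described:

    import importlib

    def names_exist(module_name: str, dotted_names: list[str]) -> dict[str, bool]:
        """For each name the LLM's code referenced, check that it really
        exists in the installed module. Existence isn't correctness, but
        it catches outright fabrications immediately."""
        module = importlib.import_module(module_name)
        results = {}
        for dotted in dotted_names:
            obj = module
            try:
                for part in dotted.split("."):
                    obj = getattr(obj, part)
                results[dotted] = True
            except AttributeError:
                results[dotted] = False
        return results

    # Stdlib demo (the same idea applies to langchain's modules):
    print(names_exist("json", ["dumps", "loads", "magic_fast_parse"]))
    # -> {'dumps': True, 'loads': True, 'magic_fast_parse': False}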


One technique to reduce hallucinations is to tell the LLM "don't make things up; if you don't know, then say so". Make a habit of saying this for important questions, or questions whose answers you suspect the LLM may not know.
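
In API terms, that's just a standing system instruction. A minimal sketch with the OpenAI Python client (the model name is illustrative, and this reduces hallucinations rather than eliminating them):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": "Do not make things up. If you do not know the "
                           "answer, say 'I don't know' instead of guessing.",
            },
            {"role": "user", "content": "How do I stream output in langchaingo?"},
        ],
    )
    print(response.choices[0].message.content)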


It's hit and miss, for the same reason Google is (and increasingly so). If you try to search for 'langchaingo', you might get lucky if you add enough into the query to say you're working with Go, but otherwise it'd just see 'langchain'.

Google is pretty much useless for the same reason.

They're not actually more intelligent, they're more stupid, so you have to provide more and more context to get desired results compared to them just doing more exact searching.

Ultimately they just want you to boost their metrics with more searches and by loading more ads with tracking, so intelligently widening results to do that is in their favour.


> The "off by one" predilection of LLMs is going to lead to this massive erosion of trust in whatever "Truth" is supposed to be, and it's terrifying and going to make for a bumpy couple of years.

This sounds like searching for truth is a bad thing, when instead it is what has triggered every philosophical enquiry in history.

I'm quite bullish, and think that LLMs will lead to a renaissance in the concept of truth, similar to what Wittgenstein did, Plato's cave, or the late-medieval empiricists.


Having also been burned by "better to check Wikipedia first" hallucinations, I find Anthropic's footnotes system to be essential. The "confidence and bravado" is indeed alluring at first, but eventually leads away from using LLMs as search engines.


Yeah, I've seen it a lot on social media, where people use ChatGPT as a source for things it can't possibly know. Often with leading questions.


Sounds like the burning of the library at Alexandria, but worse.



