I want a study on the impact of generative AI on knowledge. I fear that AI hallucinations have the potential to destroy all knowledge. For now, when an AI hallucinates, I can still check an established source to see if the fact is correct. But what happens when most established sources are themselves AI-generated? There will be almost no ability to verify any fact. The very foundations of society and of shared knowledge collapse. Oh wait, that already happened with flat earthers, Covid and conspiracy theories. But AI may be the final nail in the coffin for Western civilization.
> LLMs are just pattern matching and intelligence is something beyond that.
Human knowledge is ultimately embodied in physical experience; it is not merely spitting out words. How can an LLM know the taste of a mango? The ability to regurgitate previously written text about mangoes is neither knowledge nor intelligence. To know something, one must experience it, or something analogous to it.
Even mathematicians "experience" their formulas and proofs in an ineffable way; they don't just produce text. There's an "aha" moment when learners "get it": they experience knowledge and are then able to produce correct answers and generate new knowledge and discoveries. LLMs just generate text, and have yet to produce any significant new discovery in any field.
LLMs are like fake wine sommeliers who have read a lot of lingo about wine and can speak convincingly enough to fool an amateur but would fail a simple taste test. We would say such a person doesn't know what he's talking about.
People who think LLMs are intelligent are like a cargo cult, or polytheists who believe their idols are gods. They're anthropomorphizing processes they don't understand. How can a probabilistic next-token word predictor be intelligent? Actual mammal brains are far more complex than that.
This is just fluff. I can read about something and understand it. I don't need to have died in a car crash to know I most certainly don't want to be in one. Until we actually know how intelligence emerges in the brain, it makes zero sense to compare any AI to it.
You read something and understand it only if you have had analogous experiences. A person from the 16th century would not be able to understand what the words "Call me on FaceTime" mean because they have never had the experience of using a smartphone.
You certainly don't know what it means to die. But you know what "car" is and "crash" is because you have had (and maybe witnessed) analogous physical experiences.
People in the 16th century had the notion of scrying mirrors - presumably without analogous experience. I'm sure you could explain FaceTime to a person from then. "This magic glass shows me the view and transports the sound from the magic glass of a friend or family member and vice versa, with their informed consent."
This has nothing to do with experience. What experience can you have when people explain to you the behavior of many-dimensional hyperbolic space? When you learn the singular value decomposition (SVD) of a matrix?
You don't need experience, you need some basic concepts.
Abstract mathematical concepts can eventually be traced back to simple integers, which are traced back to an experience of things in the world (a toddler learns to count 1, 2, 3).
And it is precisely because higher mathematics is so far abstracted away from physical experience that many people struggle with it.
Besides, many great mathematicians have explained their knowledge as an "experience" even "spiritual experience". Not all experiences are physical. Ramanujan described his experience of mathematics as lights and sounds and patterns, and this is common among geniuses.
I have not witnessed a car crash and I feel I do understand what it is. Similarly, a 16th-century person doesn't understand "Call me on FaceTime" because the words are not known to them. You could very easily explain the concept to them.
Maybe you would. That's not a requirement. If humans could only learn from experience and analogues, entire fields would be dead. No advanced math, for example.
There are a ton of things you can learn without experiencing it yourself and without analogues. For example, car crashes are bad to be in. Even though I have not personally experienced one, nor have I experienced some analogue of it.
Never experienced pain? Never experienced physically slamming into something? Are you like a toddler who doesn’t yet understand basic physics and object permanence? Then you certainly don’t know what a car crash is.
The creator of the knowledge argument, Frank Jackson, has since embraced materialism and therefore does not believe the knowledge argument is an impediment for unembodied AI.
Even scientists in the USSR were occasionally permitted to attend conferences in the West (and scientists from the West were permitted to attend conferences inside the Iron Curtain).
Sounds like someone in the executive branch doesn't want scientists to do their job. Wonder what they're afraid of?
Another area where this is effective is in learning a language. Just 5-10 minutes of Anki practice a day is a lot better than nothing but hopes and dreams. And flashcard apps work in such a way that even if you get lazy for months, you'll be surprised at how quickly you are able to remember and catch up.
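The "catch up quickly after a lazy stretch" property comes from spaced repetition: correct reviews stretch a card's interval, so mature cards stay scheduled far in the future even if you pause. A minimal sketch of an SM-2-style scheduler (the algorithm family Anki is based on; the exact constants here are illustrative, not Anki's real parameters):

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0
    ease: float = 2.5  # SM-2-style starting ease factor

def review(card: Card, remembered: bool) -> Card:
    """Toy scheduler: correct answers multiply the interval by the ease
    factor; lapses reset the interval and slightly reduce the ease."""
    if remembered:
        card.interval_days *= card.ease
    else:
        card.interval_days = 1.0
        card.ease = max(1.3, card.ease - 0.2)
    return card

card = Card()
for _ in range(4):
    review(card, remembered=True)
print(round(card.interval_days, 1))  # prints 39.1 -- ~5 weeks after 4 correct reviews
```

The exponential growth of intervals is why a few minutes a day compounds: well-known cards drop out of the daily queue almost entirely.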
How can I trust anything now? I recently asked ChatGPT if learning multiple languages could slow dementia, and now I realized there’s no way to know the answer to this even if I confirm it isn’t hallucinating.
Fwiw, we touched on this topic in some of my linguistics classes. Even then (~7 years ago), the claim was that learning multiple languages _slowed the appearance of some symptoms_. We debated whether that was really the same as slowing the disease, or if it was just hiding the effects. It probably depends on who you ask.
Almost. There are a million tools like this, but most "don't get it". The problem with TablePlus, DataGrip and most other GUIs for databases, or even Airtable clones like NocoDB or VisualDB, is that they lack the 3 core features I want. At least this tool has 2 of the 3, which is already an improvement over anything else I know of.
1. Ability to view relationships inline. It's shocking that most GUIs for databases don't do this. I don't want to see a foreign ID, I want to see a sample of the related row. Glad this tool gets it.
2. Filtering and grouping and views. Not simple SQL WHEREs but easy filters. And grouping by header. This is what made Airtable king. Glad to see it is covered here.
3. AI-generated queries and reports. Sorry, but it's 2025 and you release a product without AI? It's now trivial to generate correct SQL queries and graphs based on natural language. Why doesn't any mainstream tool do this? See for example: https://www.youtube.com/watch?v=ooWaPVvljlU
Having all of these 3 core features together in the same tool would be the holy grail for me in database management.
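Feature 1 is really just a join the tool runs for you: instead of displaying the opaque foreign key, it pulls in a few human-readable columns from the related row. A minimal sketch using a hypothetical customers/orders schema (all table and column names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES customers(id),
                     total REAL);
INSERT INTO customers VALUES (1, 'Ada', 'ada@example.com');
INSERT INTO orders VALUES (10, 1, 42.50);
""")

# What most GUIs show: the bare foreign ID.
print(con.execute("SELECT id, customer_id, total FROM orders").fetchall())

# What an "inline relationship" view effectively runs: the related
# row's readable columns joined in next to each order.
rows = con.execute("""
    SELECT o.id, c.name, c.email, o.total
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchall()
print(rows)  # [(10, 'Ada', 'ada@example.com', 42.5)]
```

A GUI can generate this join automatically from the `REFERENCES` clause in the schema, which is why it's surprising so few do.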
This principle, as well as related principles, like the McNamara Fallacy and Goodhart's Law essentially boil down to one lesson I've come to realize in life: "numbers", "metrics" or even a "system" are never a substitute for actual humans caring about doing the right thing. If the humans involved care to do the right thing, they will do it, even without a system (although some systems make it easier). If they don't care about doing the right thing, no system or data-driven approach can fix that.
Which is also why I have a sneaking suspicion that most economic theory is complete bullshit. All the debates over privatization vs. public services, monopoly vs. competition, autocratic vs. democratic styles of leadership etc. are mostly irrelevant. Both can work. It all boils down to whether you have good human stakeholders who have the integrity and agency to do the right thing. Counter to most economic theories, for example, a monopoly that is run by people who actually care about doing the right thing might be better run than fiercely competitive startups who just want to make a quick buck.
>Which is also why I have a sneaking suspicion that most economic theory is complete bullshit. All the debates over privatization vs. public services, monopoly vs. competition, autocratic vs. democratic styles of leadership etc. are mostly irrelevant. Both can work. It all boils down to whether you have good human stakeholders who have the integrity and agency to do the right thing.
I don't think this is some gotcha, but basic economic understanding for at least the last hundred years. Fundamentally, much of economic theory acknowledges these questions. This brings you full circle: given inconsistent humans with individual values, how do different systems and rulesets perform?
What are the chances of old-school espionage? OpenAI should look for a list of former employees who now live in China. Somebody might've slipped out with a few hard drives.