Humans have that too, but the reason civilization doesn't go full crazy is that our use of language and concepts is tied to doing objective things, which keeps things (mostly) grounded.
Where it isn't grounded, as in endless online conversations with like-minded people (closed-loop feedback) about informally abstracted (poorly specified constraints) and emotion-invoking (high reinforcement) topics, people go batshit too.
So the more AI models actually practice what they know in objective environments, the more that output->input feedback will look like introspection informing self-improvement, and less like an iterative calculation of the architecture's resonant frequencies or eigenvalues.
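To make the "eigenvalues" image concrete, here is a toy sketch (the 5x5 symmetric matrix, the iteration count, and the random seed are arbitrary illustrative choices, not anything about real model internals): a loop that only ever feeds its own output back in as input converges to the system's dominant eigenvector, i.e. it ends up measuring the architecture rather than the world.

```python
import numpy as np

# Toy illustration of "resonating with the architecture": feeding a
# system's output straight back in as input, with no external signal,
# converges to the system's dominant eigenvector (power iteration).
rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
A = (B + B.T) / 2                    # symmetric stand-in "architecture"
x = rng.normal(size=5)               # arbitrary starting "idea"
x /= np.linalg.norm(x)

for _ in range(200):
    x = A @ x                        # output becomes the next input
    x /= np.linalg.norm(x)           # keep the state bounded

# The loop's settled direction is a property of A, not of the input.
w, v = np.linalg.eigh(A)
dominant = v[:, np.argmax(np.abs(w))]
print("alignment with dominant eigenvector:", abs(x @ dominant))
```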
I've often thought human echo chambers were their own phenomenon, something about the brain and tribalism from evolution.
I never thought of it in terms of AI training data.
As LLMs train on data produced by other LLMs, they will drift.
And this drifting is the same phenomenon as when humans get in an echo chamber. If each person hears what the others are saying and spits it out in some form, and the others hear that and spit it back out in some form, understanding drifts just like it does in an LLM (an "idea telephone game").
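Here is a minimal sketch of that telephone game under toy assumptions (the "idea" is a Gaussian, each generation sees only 20 samples, and each "model" is just a fitted mean and standard deviation): with no access to the original source, the retold distribution tends to wander and its spread tends to collapse.

```python
import numpy as np

# Toy "idea telephone game": each generation learns only from the
# previous generation's output, never from the original source.
rng = np.random.default_rng(0)
true_mean, true_std = 0.0, 1.0                  # the original "idea"
samples = rng.normal(true_mean, true_std, 20)   # generation 0 hears it directly

for generation in range(1, 201):
    mean, std = samples.mean(), samples.std()   # fit a model to what was heard
    samples = rng.normal(mean, std, 20)         # retell by sampling the fit
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean={mean:+.2f}  std={std:.2f}")

# With no outside reference, the retold "idea" tends to drift away from
# the original mean, and its spread tends to shrink toward zero.
```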
Technically, it isn't just echo chambers. It is all humans, at many different scales, from small groups to large ones. Countries and cultures are echo chambers at larger scales.
Like how concepts in philosophy, as they become more abstract, kind of twist back on themselves and get re-invented. And as they get more abstract, they get accused of just 'playing with words'. Just like an LLM can just 'play with words'?
The difference is that eventually humans have to relate to 'real objects'.
So even if the word for 'apple' drifts over time and between groups, eventually you can still relate the words back to the real 'apple'.
Humans are grounded in the reality of 'objects' in space.
But I tend to think this is temporary. As LLMs are linked to things like AlphaGo and drone flight systems, they will also have to deal with real 'objects'. Maybe that will then lead to more grounded reasoning.
And to valuing science (understanding it is simply our accumulated tools for finding harder truths, not a priesthood of "the truth") more than pandering & populist politicians, and tribal media & online personalities.
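To illustrate the grounding point, here is the same telephone-game sketch as above, except that each generation also checks a fraction of its retelling against the real source (the "apple") before passing it on. The function name and the 20% grounding fraction are arbitrary choices for illustration; the point is only that even modest contact with the real object keeps the statistics from drifting or collapsing.

```python
import numpy as np

# Telephone game with optional grounding: each generation swaps a
# fraction of its retold samples for fresh observations of reality.
rng = np.random.default_rng(0)
true_mean, true_std = 0.0, 1.0

def retell(ground_fraction, generations=200, n=20):
    samples = rng.normal(true_mean, true_std, n)
    for _ in range(generations):
        mean, std = samples.mean(), samples.std()
        samples = rng.normal(mean, std, n)          # retell from the fit
        n_real = int(ground_fraction * n)
        if n_real:
            # Check part of the retelling against the real object.
            samples[:n_real] = rng.normal(true_mean, true_std, n_real)
    return samples.mean(), samples.std()

m0, s0 = retell(0.0)
m1, s1 = retell(0.2)
print(f"ungrounded   : mean={m0:+.2f}  std={s0:.2f}")
print(f"20% grounded : mean={m1:+.2f}  std={s1:.2f}")
```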