There's one thing you forgot: we only have some model of how a brain might work. The model will only stand as long as we don't find a better model. That's how science works.
At some point, though, the difference between the model and reality falls within a negligible error margin, particularly in a practical everyday context. Like, Newton's theory of gravity isn't perfect, but for most things it's good enough.
Similarly, if LLMs can be used to model human intelligence, and to predict and manipulate human behaviour, that will be good enough for corporations to exploit.
Your prediction is that we are close. That prediction is founded on your assertion that we aren't missing anything substantive or new within that error margin, and that assertion is circular.
If you are correct that LLMs are a generally complete model, then that is a good prediction. But only if you are correct.
I think brain == LLM is only approaching true in the clean, "rational" world of academia. The internet now amplifies this. IMHO it is not possible to make something perfectly in our own image in a culture that has taken to feeding upon itself; that sort of culture makes extracting value from it much, much more difficult. I think we map the model of our understanding of how we understand things onto these "AI" programs, and that doesn't count for much. We have so much more than our five senses, and I fully believe that we were made by God. We might come close to something that fulfills a great number of the conditions for "life", but it will never be truly alive.
A model that matches part of the brain should not be treated as if it models all of the brain.
What I see you doing here is personifying the model, and drawing conclusions from the personification.
There is more to how we interact with language than prediction of repetition. You didn't predict anything I have said so far! Yet we are both interacting with the language.
We didn't just model LLMs after our brains, either. We pointed them at examples of thought, all neatly organized into the semantic relationships of grammar and story.
Don't ignore the utility of language: it stores behavior, objectivity, and interest.
These papers suggest we are just predicting the next word (a rough sketch of what that operation looks like on the LLM side follows the links):
https://www.psycholinguistics.com/gerry_altmann/research/pap...
https://www.tandfonline.com/doi/pdf/10.1080/23273798.2020.18...
https://onlinelibrary.wiley.com/doi/10.1111/j.1551-6709.2009...
https://www.earth.com/news/our-brains-are-constantly-working...
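
For concreteness, here is a minimal sketch of what "predicting the next word" amounts to on the model side. It uses the Hugging Face transformers library with GPT-2 as an arbitrary example; the prompt and the choice of model are my own assumptions, not anything from the papers above.

    # Minimal sketch: an LLM's output is just a probability
    # distribution over the next token, given the context so far.
    # Assumes torch and transformers are installed; GPT-2 is arbitrary.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The brain is"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

    # Take the distribution at the last position and show the
    # five most likely continuations.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, tid in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(tid))!r}: {float(p):.3f}")

Whether the predictive processing those papers describe in the brain is the same kind of operation is, of course, exactly the point under dispute.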