I've had that thought for over a decade now. It felt like my inner voice is a bit of a Markov chain generator sitting at the border between my conscious and unconscious mind, randomly stringing thoughts together in the form of sentences (often mixed-language, to boot), while conscious-level thinking involves evaluating those thought streams - cutting some off completely, letting others continue, or mixing them and "feeding" them back to the generator so it iterates more on them.
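To make the analogy concrete, here's a minimal sketch of what a "Markov chain generator" does with text: it strings words together purely based on what has tended to follow what, with no global plan. The toy corpus and names here are mine, purely illustrative, not anything from cognitive science:

```python
import random
from collections import defaultdict

# Toy corpus - a hypothetical stand-in for the "stream" the generator learns from.
corpus = "the inner voice strings thoughts together the inner voice wanders off again"

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8):
    """Walk the chain: at each step, pick a random observed successor."""
    word, output = start, [start]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

chain = build_chain(corpus)
print(generate(chain, "the"))  # e.g. "the inner voice wanders off again"
```

The point of the sketch is the weakness, not the mechanism: each step only looks one word back, which is exactly why this model felt too shallow on closer inspection.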
Markov chains (and a lot of caching) were a good high-level working model, but far too weak when inspected in detail. I initially ignored deep language models, as they felt more like doubling down on caching alone and building convoluted lookup tables. But, to my surprise, LLMs turned out to be more than just a better high-level analogy - the way they work in practice feels so close to my experience of my own "inner voice" that I can't believe it's a mere coincidence.
What I mean, in short: whenever I read articles and comments about the strengths and weaknesses of current LLMs (especially GPT-4), I find they might just as well be describing my own "inner voice" - my gut-level, intuition-driven thinking has the same strengths and the same failure modes.