Agreed. The "illusion" approach has always seemed somehow self-referential to me. Who, or what, is being fooled? Does being mistaken about something (which computers/algorithms can obviously be) somehow endow the thing with self-awareness and inner experience? We could trick a machine into believing it is conscious (`var conscious = true;`) but it's doubtful that would change the mechanical, non-self-aware nature of its existence.
The brain is a superb information-processing machine. It executes unimaginably complex algorithms without our knowing about it. But one of the things it does is construct simplified models of itself, and these simplified models are what we are "aware of." More precisely, these simplified models are what consciousness is.
The "illusion" is that consciousness is somehow in the driver's seat. It isn't at all. It is just a kind of post-hoc shorthand that the brain uses for itself. It's more like a log file than an executed piece of code.
My typing out these words didn't come from my "consciousness"; it came from the brain's incredibly complex algorithms. But my consciousness is taking the credit: it gives the abstract, executive-summary version of how my goal of expressing some point led to my word selection. All those "inner experiences" along the way are essentially notational.
You are missing an important part of the puzzle. Yes, the brain creates models of the world and body. But it is locked in a loop of "agent <-> environment", more precisely "perception -> decision -> action -> reward learning -> repeat". This is what grounds the model in reality. The model is still imperfect (an illusion) but a useful one, with regard to maximising human rewards while minding the affordances and limitations of the environment.
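That perception -> decision -> action -> reward learning loop can be sketched in a few lines. Here's a toy one-dimensional world and a deliberately crude action-value learner; the environment, goal, and numbers are all invented for illustration, and real agents use far richer models:

```python
class ToyEnvironment:
    """A hypothetical one-dimensional world: the agent is rewarded
    for being near a fixed goal position."""
    def __init__(self, goal=5):
        self.goal = goal
        self.position = 0

    def step(self, action):
        self.position += action  # action is -1 or +1
        # Reward is less negative the closer we are to the goal.
        reward = -abs(self.goal - self.position)
        return self.position, reward

def run_loop(env, steps=7):
    """Perception -> decision -> action -> reward learning -> repeat.
    'Learning' here is just a running average of rewards per action,
    enough to ground the loop structure in code."""
    values = {-1: None, +1: None}  # estimated value of each action
    counts = {-1: 0, +1: 0}
    for _ in range(steps):
        # Decision: try any untried action first, then pick the best estimate.
        untried = [a for a in values if values[a] is None]
        if untried:
            action = untried[0]
        else:
            action = max(values, key=values.get)
        # Action -> new perception and reward.
        _, reward = env.step(action)
        # Reward learning: incremental average of observed rewards.
        counts[action] += 1
        if values[action] is None:
            values[action] = reward
        else:
            values[action] += (reward - values[action]) / counts[action]
    return env.position, values
```

After sampling each action once, the estimate for `+1` overtakes `-1` and the agent walks toward the goal. The "model" here is nothing but two running averages, yet it is grounded in exactly the sense above: it exists only through the agent-environment loop.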
The source of meaning is not the brain itself, but the game. The game between agent and environment, where the agent collects rewards and tries to reduce losses. All meanings stem from the game itself, meaning is not 'secreted' by brains in a jar.
The illusion objection is just an observation that any model, no matter how complex it is, is an approximation. The "ego" itself is a model of the agent, an approximation of the agent. But a useful one, a grounded approximation, that tracks reality as it changes. At the very least the model is good enough to make sure genes get into the next generation, otherwise there would be no more model to talk about. That's the source of meaning and consciousness - it's a loop, a bootstrapped meaning based on self replication.
I often see debates about consciousness ignoring the environment and the game (or life). That's a sad situation for philosophy, in my opinion. They got scared of behaviorism decades ago, and now ignore reinforcement learning. But RL is our best bet at cracking the mind-body problem.
All of which may be true - in the sense that perhaps there is a "self quale" whose purpose is to represent a simplified model of the important elements of internal state.
I'd suggest it's impossible to be conscious without being conscious of something, whether that something is an external event, a physical sensation, an emotional state, a desire, a memory, a plan, or an abstract thought.
In practice consciousness flits between all of these things, like an inner cursor.
But that does nothing to explain the sensation of consciousness - the simultaneous experience of being both subject and object.
It's the gap between "maybe this very metaphorical explanation helps" and the lived sensation that appears to be unbridgeable, and that's what makes the hard problem so hard.
If you take a look at word embeddings, image embeddings and other kinds of embeddings (such as those used for music recommendation) you will see how a rich space of nuanced meanings can emerge from what is basically a form of data compression. We're observing this higher-order representation space, and right there is "what it's like to be" in that experience.
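To make the embedding point concrete, here's a minimal cosine-similarity sketch over hand-made 3-dimensional vectors. The words, axes and values are invented for illustration, not trained from data; a real model would learn thousands of dimensions from co-occurrence statistics:

```python
import math

# Toy "embeddings" (illustrative values only). Each axis can be read
# as a compressed feature, e.g. (warmth, brightness, loudness).
embeddings = {
    "sunset":  [0.9, 0.8, 0.1],
    "sunrise": [0.8, 0.9, 0.2],
    "thunder": [0.1, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity: vectors pointing in nearby directions
    represent related meanings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)
```

Here `cosine(embeddings["sunset"], embeddings["sunrise"])` is much higher than `cosine(embeddings["sunset"], embeddings["thunder"])`: the geometry of the compressed space is what encodes the "nuanced meanings".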
Qualia are an embedding of sensations into a "sensation space". Being aware of qualia is just evaluating sensations with respect to future actions, goals and desires - basically adding emotion on top of perception. Emotion emerges from the utility of the current state of your experience, utility is related to rewards, rewards are controlled by genetics, and genetics have a single goal - replication. That's how qualia bootstrap into reality - they support actions that protect self-replication of genes, and genes support the development of a brain that can have qualia in the first place.
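A deliberately crude sketch of "evaluating sensations with respect to goals and desires": score a sensation vector against a desire vector, and read positive scores as positive valence. The axes and weights here are invented for illustration; nothing about real affect is this simple:

```python
def valence(sensation, desire):
    # Dot product as a stand-in for "utility of the current state":
    # how well does this sensation align with what the agent wants?
    return sum(s * d for s, d in zip(sensation, desire))

# Hypothetical 3-axis sensation space: (warmth, satiety, pain).
# The desire vector plays the role of the genetically-set reward weighting.
desire    = [0.5, 0.8, -1.0]
warm_meal = [0.7, 0.9, 0.0]   # aligns with the desire vector
injury    = [0.0, 0.0, 0.9]   # opposes it (pain is weighted negatively)
```

So `valence(warm_meal, desire)` comes out positive and `valence(injury, desire)` negative: "emotion on top of perception" as a utility evaluation over the sensation embedding.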