Humans don’t have completely explainable models, but we do have partially explainable models, which are still meaningful. A doctor who knows, for example, that antibiotics don’t work on viral infections can give reasons sufficient to justify their actions. Shareable heuristics are a powerful tool, and tossing them away binds you to the biases in your training sets.
Perhaps I was not clear; here's an example of what I meant:
We make complex reasoned plans like "how to make coffee": "I'm going to need a filter in the pot, I'm going to need ground coffee, water, etc.; let's make sure the filter's there before I add the coffee, and both before the water." Then we use some sort of hierarchical planning heuristics to run it. If someone asks what I did, I can explain it at varying levels of resolution ("I made coffee", "I got these pieces together (xxx) and then made coffee...") depending on various "explanation heuristics" we learned as part of being eusocial organisms.
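Roughly something like this -- a toy sketch, nothing like my actual code; `Step` and `explain()` are just made-up names to illustrate the varying-resolution idea, with ordering constraints reduced to list order:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Step:
    label: str                                              # what I'd say at this level
    substeps: List["Step"] = field(default_factory=list)    # ordered: earlier before later

    def explain(self, depth: int = 0) -> List[str]:
        """Flatten the plan to a chosen resolution; depth 0 is the one-liner."""
        if depth == 0 or not self.substeps:
            return [self.label]
        out: List[str] = []
        for sub in self.substeps:
            out.extend(sub.explain(depth - 1))
        return out


make_coffee = Step("make coffee", [
    Step("put the filter in the pot"),
    Step("add ground coffee", [Step("get coffee from the cupboard"),
                               Step("scoop it into the filter")]),
    Step("add water"),
])

print(make_coffee.explain(0))  # ['make coffee']
print(make_coffee.explain(1))  # ['put the filter in the pot', 'add ground coffee', 'add water']
print(make_coffee.explain(2))  # one level deeper, where substeps exist
```

Same plan, three different answers to "what did you do?", just by choosing how far down to unfold it.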
These plans are complex (even "get out of bed and pee" is a pretty complex plan).
However, below that are a ton of decisions we aren't even aware we're making. I'm convinced (and my software reflects this) that the actual "plans" we make organically are very short, and that the interesting plans -- the ones we can talk about and that we typically care about -- are very abstract ones. Making coffee is super abstract, after all.
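To make the "short plans" point concrete (again a toy illustration, not my actual representation): at any single level the plan is only a handful of steps; the depth of the hierarchy, not the length of any one plan, is where the complexity lives.

```python
# Nested (label, substeps) tuples standing in for a plan hierarchy.
coffee = ("make coffee", [
    ("ready the pot", [("find the pot", []), ("put the filter in", [])]),
    ("add ground coffee", [("get the coffee", []), ("scoop it into the filter", [])]),
    ("add water", []),
])

def local_lengths(node):
    """Each node's own plan: notice how short every one of them is."""
    label, subs = node
    yield label, len(subs)
    for sub in subs:
        yield from local_lengths(sub)

def leaves(node):
    """The fully expanded, concrete version -- the one nobody talks about."""
    label, subs = node
    return [label] if not subs else [leaf for sub in subs for leaf in leaves(sub)]

for label, n in local_lengths(coffee):
    print(f"{label!r}: {n} immediate substeps")
print("fully expanded:", leaves(coffee))
```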
Perhaps an analogy: the extremely abstract reasoning for not using an antibiotic is at the level of "chemistry", while what I'm talking about is at the level of "physics".