That's a bit of an exaggeration. We understand high-level motivations as being highly correlated with certain outcomes. Hunger, eat. Horny, sex. Wronged, vengeance. Money, pursue it. You don't necessarily need to know the underlying processes to make reliable predictions or valid causal explanations about human behavior.
There isn't even agreement on the existence of free will. For a non-trivial decision, say a judge's deliberation, to claim it is fully explainable is a stretch.
I didn't say it was fully explainable. And you don't need to settle debates about free will to offer valid causal explanations for human behaviors.
Are you saying we don't know at all why anyone does anything they ever do? That every action is totally unpredictable and that, after the fact, there is no way for us to offer more or less plausible explanations?
We are getting off track in our discussion. For non-trivial decisions we know humans can give ex-post justifications, but we also know those aren't necessarily complete or even true: a credit officer who dislikes an applicant will have a justification for the refusal that does not include that dislike. From the outside we might never know the true explanation, and the credit officer might not even know their own biases! Requiring AI to be explainable is just going beyond what we demand of humans (we demand a justification, not a real explanation).
Also, predictability isn't the same as understanding the full decision process.
I don't think we're off track at all. You made the claim that human cognition is a black box. That's not true. We have valid causal explanations for human cognition /and/ human behavior.
Just because we don't have explanations at every level of abstraction does not prevent us from having them at some levels of abstraction. We very well may find ourselves in the same situation with regard to AI.
It's not going beyond; it would be achieving parity.
I disagree: for anything beyond basal stuff, we have hypotheses for causal explanations of human cognition, not full theories.
For example, we cannot explain the mental process by which someone came up with a new chess move, nor verify such an explanation in that case. We can have some ideas of how it might happen, and that person might also have some ideas, but then we are back to hypotheses.
I'm totally lost as to how any of that is relevant to anything I've said. The claim I am rebutting is that we can't expect to say anything about AI causality because we can't even say anything about human causality.
"When AI is used, its decisions must be explicable in human terms. No 'AI said so, so it's true'". Somehow the whole discussion is about how human mind cannot be explained either.
Yea, the decisions made by the human mind can be explained in human terms for the vast majority of relevant cases.
Because (1) it holds AI to a higher standard than humans, and (2) it means that even if AI makes better decisions (name your metric), we would deny the use of those decisions if we could not sufficiently explain them. I note that with certain things we do not do that. For example, with certain pharmaceuticals we were and are quite happy to accept the effect even if we do not fully understand how they work.
> Because (1) it holds AI to a higher standard than humans,
It doesn't.
> it means that even if AI makes better decisions (name your metric), we would deny the use of those decisions if we could not sufficiently explain them.
Ah yes, better decisions for racial profiling, social credit, housing permissions, etc. Because we all know that AI will never be used for that.
Again:
If a bank denies you credit, it has to explain why. Not "AI told us so".
If the police arrest you, they have to explain why, not "AI told us so".
If your job fires you, it has to explain why, not "AI told us so".
I made no such claim. My claim is that it might not be useful to hold AI to a higher standard than humans. With humans we accept certain "whims" in decisions, which is the same as having some unexplainable bit in an AI decision.
EDIT: it might not even be useful to insist on explainability if the results are better. We did not and do not do that in other areas.
I noted this elsewhere, but I'll reiterate. I'm confused as to where our disagreement is, because all of that, I agree with. Did I misread your original claims? If so, my apologies for dragging us through the mud.
Once again, you are raising the bar artificially high by pointing to examples where we fail to have reasonable causal accounts. But you are conveniently ignoring the mountain of reasonable accounts that we employ all day long simply to navigate our human world.
I think this is confusing a certain predictability with having a correct explanation (again, we are beyond basal things like being hungry leading to eating). Those two things are not the same.