Even if AlphaZero does play better chess, it can do absolutely nothing to explain why it played the way it did.
AlphaZero is zero in terms of explainability.
Humans have to explain to themselves and to others what they do. This is key to understanding what's happening, to communicating it, to human decision-making, and to deciding what works, what doesn't, and how well or how badly it works.
Returning to the original DeepMind press release: it misinforms the public about the alleged progress. In fact, no fundamental progress was made; DeepMind did not come up with an entirely new sorting algorithm, and the improvement was marginal at best.
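For a sense of scale, the routines AlphaDev optimised are tiny fixed-size sorts, like the sort-three-elements helper inside libc++'s std::sort, and the reported gain was on the order of a single saved instruction in sequences of that size. The C++ below is my own illustrative sketch of such a branchless sort, not DeepMind's published code.

    // Illustration only: a branchless sort of three integers, roughly the kind
    // of tiny fixed-size routine (e.g. the sort3 helper in libc++'s std::sort)
    // that AlphaDev optimised at the assembly level. This is a sketch, not
    // DeepMind's published code.
    #include <algorithm>
    #include <cstdio>

    // Branchless compare-exchange: afterwards lo <= hi. With optimisation this
    // typically compiles to a compare plus conditional moves, no branches.
    inline void cswap(int& lo, int& hi) {
        const int t = std::min(lo, hi);
        hi = std::max(lo, hi);
        lo = t;
    }

    // A 3-element sorting network: three compare-exchanges, only a handful of
    // instructions in total. AlphaDev's reported improvement amounted to
    // shaving roughly one instruction off sequences of this size.
    void sort3(int& a, int& b, int& c) {
        cswap(a, b);
        cswap(a, c);
        cswap(b, c);
    }

    int main() {
        int a = 3, b = 1, c = 2;
        sort3(a, b, c);
        std::printf("%d %d %d\n", a, b, c);  // prints: 1 2 3
        return 0;
    }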
I maintain my opinion that AlphaDev does not understand any of the existing sorting algorithms at all.
Even if AI comes up with a marginal improvement to something, it's incapable of explaining what it has done. Humans (unless they're politicians or dictators) always have to explain their decisions and how they arrived at them; they have to argue for their decisions and their thought process.
It cannot explain because (1) explaining is not necessary to become good at the task, and (2) it wasn't explicitly trained to explain.
But it's reasonable to imagine a later model trained to explain things. The issue is that some positions might not be explainable: they require too much branching and too many edge cases, so the explanation would not be understandable to a human.
It's unreasonable to give up on explanations and deem something "not understandable" when we've been doing this thing called mathematics for 3000+ years, where explainability and the removal of doubt are exactly what we seek.
The only other entities we know of that can't communicate or explain what they're doing are animals.
It's fine if you want to refer to Kahneman's classification [1] of instinctual and deliberate thinking. Explainability is a separate topic. Also, when the amounts of energy and compute used are as high as they are, the results, the return on investment, really aren't that high. Hopefully there are better days ahead.