I am disappointed by how willing people are to throw model explainability out of the window because black-box models have higher predictive accuracy and sound cooler. Or they claim that SHAP solves interpretability, even though the model that SHAP presents is not the same as the one making the predictions, which can hide biases or reliability problems. I think throwing a big (semi) black box at the problem is short-term thinking.

In this maintenance context, for example, you can only improve airplanes if you know WHY you need to do maintenance. That is why you need explainable models: to learn from the model and to steer it with the knowledge that domain experts have. We learned how important this is while working with the Dutch military on predicting who would drop out of special forces selection [1]. And yes, we had a smaller dataset, but I don’t think that matters much. It takes effort to get an explainable model working on the data, but the knowledge you obtain compounds and gives you better models over time.
And sure, throw black-box models at low-stakes decisions like recommender systems, sentiment classification, or spam filtering, but I don’t think it’s wise for high-stakes decisions whose outcomes cannot easily be verified by a human.
But well, it doesn’t really matter what the truth is here. C3.ai wants to sell fancy algorithms and some Lt. Col. at the military wants to sound innovative. Both succeeded.
Is it maybe possible to combine black-box models with more explainable ones in a reasonable way? Use a good black-box model to do the actual prediction, and then consult some more explainable models for an explanation (notice that I say "an", not "the").
One naive approach would be to have an ensemble of explainable models, keep only the ones that give the same answer as the black-box model, and then use their explanations?
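Something like this rough sketch, maybe (scikit-learn on synthetic data; the particular models and the single-instance agreement check are just placeholders for the idea, not a recommendation):

```python
# Rough sketch of the agreement-filter idea; models and data are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The black box that actually makes the prediction.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# An ensemble of candidate explainable models.
explainers = {
    "logreg": LogisticRegression(max_iter=1000).fit(X, y),
    "shallow_tree": DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y),
}

x = X[:1]  # the instance we want an explanation for
bb_pred = black_box.predict(x)[0]

# Keep only the explainable models that agree with the black box on this instance.
agreeing = {name: m for name, m in explainers.items() if m.predict(x)[0] == bb_pred}
print("black-box prediction:", bb_pred)
print("agreeing explainable models:", list(agreeing))
# Each agreeing model then offers *an* explanation, e.g. the logistic
# regression's coefficients or the shallow tree's decision path.
```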
I guess it depends on what one wants to use the explanation for. Usually you want to derive actions you can take to change things in the real world. With an explainable model you hope that the model has captured some truth that translates to the real world, and with the very naive approach above you hope that the good black-box model helps you pick the explainable model that has best understood the world (or at least the world according to the precise parameters it has been run with now).
Maybe one could try a few thousand perturbations of the input, all around the original input (into the black-box model), and use that to find the explainable model that has best understood the solution space just around there, and extract explanations from it?
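Continuing the sketch above (x, X, black_box, and explainers from the previous snippet), a hand-rolled version of that local check could look like this; the noise scale and the number of perturbations are arbitrary, and it is essentially the intuition behind LIME-style local surrogates:

```python
import numpy as np

rng = np.random.default_rng(0)
n_perturb = 1000

# Perturb the original input with small Gaussian noise around x.
noise = rng.normal(scale=0.1 * X.std(axis=0), size=(n_perturb, X.shape[1]))
X_local = x + noise

# What the black box says in the neighbourhood of x.
bb_local = black_box.predict(X_local)

# Score each explainable model by how faithfully it mimics the black box
# on those perturbed inputs, then take the explanation from the best one.
fidelity = {name: float((m.predict(X_local) == bb_local).mean())
            for name, m in explainers.items()}
best = max(fidelity, key=fidelity.get)
print("local fidelity:", fidelity)
print("use the explanation from:", best)
```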
Predictive maintenance is not that controversial. I built it for a big mining company with a pretty broad asset base about 10 years ago. It’s easy to plug some ML in place of other prediction models. The challenge with these programs is the important nuances: work order definitions and standardisation, whether you use new vs refurbished components, impact on cost, downstream maintenance impacts, useful life, etc. Quite an exciting domain; I wish I had worked on it a bit more, but I had to hand it over, and the people who received it got confused by even more basic stuff, such as dynamic updates to the strategy, i.e. run-to-failure vs predictive, etc. Therefore, yes, it can possibly go wrong, but no, it does not have to.
Yes, this. 25 years ago we had a big engineering management system and ran statistical analysis on assembly-level MTTF and MTTR. This analysis would be fed back into maintenance scheduling and recall notification. Result interpretation after the post-mortem would drive design changes, testing, and supplier contracts.
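Roughly the kind of rollup involved, in toy form (the event log, field names, and numbers here are invented for illustration; the real system was obviously much richer):

```python
# Toy assembly-level MTTF/MTTR calculation from a failure/repair log.
from collections import defaultdict
from datetime import datetime, timedelta

# Each record: (assembly_id, time_of_failure, time_repair_completed).
events = [
    ("pump-A", datetime(2024, 1, 3, 8), datetime(2024, 1, 3, 14)),
    ("pump-A", datetime(2024, 2, 10, 2), datetime(2024, 2, 10, 9)),
    ("pump-A", datetime(2024, 3, 22, 16), datetime(2024, 3, 23, 1)),
]
observation_start = datetime(2024, 1, 1)

by_assembly = defaultdict(list)
for assembly, failed, repaired in events:
    by_assembly[assembly].append((failed, repaired))

for assembly, recs in by_assembly.items():
    recs.sort()
    # MTTR: mean time from failure to repair completion.
    mttr = sum((repaired - failed for failed, repaired in recs), timedelta()) / len(recs)
    # MTTF: mean operating time between (re)entering service and the next failure.
    uptimes, back_in_service = [], observation_start
    for failed, repaired in recs:
        uptimes.append(failed - back_in_service)
        back_in_service = repaired
    mttf = sum(uptimes, timedelta()) / len(uptimes)
    print(f"{assembly}: MTTF ~ {mttf}, MTTR ~ {mttr}")
```

Numbers like these would then feed back into the maintenance schedule and flag assemblies whose MTTF was drifting.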
As for AI, I suspect this is just another excuse to use it over more formal methods. I prefer the formal methods for various reasons I can't be bothered to list here.
Yes. In my previous job we used to do this for a pharma manufacturing grid, and their engineers were always able to identify the same markers or flags that needed to be addressed. I have no doubt they're looking to automate as much of that work as possible.
Even with minor manual intervention to confirm things, you're still saving engineers so much time. It would have been a really nice upsell for my last company too.
[1]: https://doi.org/10.31234/osf.io/s6j3r