Hacker News

>(ii) Explainable ML methods provide explanations that are not faithful to what the original model computes.

>Explanations must be wrong. They cannot have perfect fidelity with respect to the original model. If the explanation was completely faithful to what the original model computes, the explanation would equal the original model, and one would not need the original model in the first place, only the explanation. (In other words, this is a case where the original model would be interpretable.) This leads to the danger that the explanation method can be an inaccurate representation of the original model in parts of the feature space.

This is such a succinct phrasing of what makes me so uncomfortable with these approximate explanations.
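To make the quoted point concrete, here is a minimal sketch (my own illustration, not from the paper): a nonlinear "black box" is explained by a linear surrogate, and the surrogate's fidelity varies across the feature space, being much worse in some regions than others.

```python
# Sketch: a linear "explanation" of a nonlinear model cannot be faithful
# everywhere. Hypothetical example; sin(x) stands in for a complex model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=1000)

# "Black box": a simple nonlinear function standing in for a complex model.
def black_box(x):
    return np.sin(x)

y = black_box(X)

# Surrogate explanation: least-squares linear fit to the model's outputs.
w, b = np.polyfit(X, y, 1)
surrogate = w * X + b

# Near the origin the linear explanation tracks the model closely...
near = np.abs(X) < 0.5
# ...but far from the origin it diverges from what the model computes.
far = np.abs(X) > 2.5

err_near = np.mean(np.abs(surrogate[near] - y[near]))
err_far = np.mean(np.abs(surrogate[far] - y[far]))
print(err_near < err_far)  # the explanation is less faithful far from 0
```

If the surrogate matched the model everywhere, it would *be* the model, and the model would already be interpretable; since it cannot, the explanation is necessarily wrong in parts of the feature space.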
