TY - GEN
T1 - Problems with Shapley-value-based explanations as feature importance measures
AU - Kumar, I. Elizabeth
AU - Venkatasubramanian, Suresh
AU - Scheidegger, Carlos
AU - Friedler, Sorelle A.
N1 - Funding Information:
Acknowledgments. This research was supported in part by the National Science Foundation under grants IIS-1633724, IIS-1633387, DMR-1709351, IIS-1815238, the DARPA SD2 Program, and the ARCS Foundation.
Publisher Copyright:
© International Conference on Machine Learning, ICML 2020. All rights reserved.
PY - 2020
Y1 - 2020
N2 - Game-theoretic formulations of feature importance have become popular as a way to "explain" machine learning models. These methods define a cooperative game between the features of a model and distribute influence among these input elements using some form of the game's unique Shapley values. Justification for these methods rests on two pillars: their desirable mathematical properties, and their applicability to specific motivations for explanations. We show that mathematical problems arise when Shapley values are used for feature importance, and that the solutions to mitigate these necessarily induce further complexity, such as the need for causal reasoning. We also draw on additional literature to argue that Shapley values are not a natural solution to the human-centric goals of explainability.
AB - Game-theoretic formulations of feature importance have become popular as a way to "explain" machine learning models. These methods define a cooperative game between the features of a model and distribute influence among these input elements using some form of the game's unique Shapley values. Justification for these methods rests on two pillars: their desirable mathematical properties, and their applicability to specific motivations for explanations. We show that mathematical problems arise when Shapley values are used for feature importance, and that the solutions to mitigate these necessarily induce further complexity, such as the need for causal reasoning. We also draw on additional literature to argue that Shapley values are not a natural solution to the human-centric goals of explainability.
UR - http://www.scopus.com/inward/record.url?scp=85105578048&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85105578048&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85105578048
T3 - 37th International Conference on Machine Learning, ICML 2020
SP - 5447
EP - 5456
BT - 37th International Conference on Machine Learning, ICML 2020
A2 - Daume, Hal
A2 - Singh, Aarti
PB - International Machine Learning Society (IMLS)
T2 - 37th International Conference on Machine Learning, ICML 2020
Y2 - 13 July 2020 through 18 July 2020
ER -