TY - JOUR
T1 - Auditing black-box models for indirect influence
AU - Adler, Philip
AU - Falk, Casey
AU - Friedler, Sorelle A.
AU - Nix, Tionney
AU - Rybeck, Gabriel
AU - Scheidegger, Carlos
AU - Smith, Brandon
AU - Venkatasubramanian, Suresh
N1 - Funding Information:
A preliminary version of this work with authors Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, and Suresh Venkatasubramanian was titled Auditing Black-box Models for Indirect Influence and appeared in the Proceedings of the IEEE International Conference on Data Mining (ICDM) in 2016. This research was funded in part by the NSF under Grants IIS-1251049, CNS-1302688, IIS-1513651, DMR-1307801, IIS-1633724, and IIS-1633387.
Publisher Copyright:
© 2017, Springer-Verlag London Ltd.
PY - 2018/01/01
Y1 - 2018/01/01
N2 - Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior and in particular how different features influence the model prediction. This is important when interpreting the behavior of complex models or asserting that certain problematic attributes (such as race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models, which lets us study the extent to which existing models take advantage of particular features in the data set, without knowing how the models work. Our work focuses on the problem of indirect influence: how some features might indirectly influence outcomes via other, related features. As a result, we can find attribute influences even in cases where, upon further direct examination of the model, the attribute is not referred to by the model at all. Our approach does not require the black-box model to be retrained. This is important if, for example, the model is only accessible via an API, and contrasts our work with other methods that investigate feature influence such as feature selection. We present experimental evidence for the effectiveness of our procedure using a variety of publicly available data sets and models. We also validate our procedure using techniques from interpretable learning and feature selection, as well as against other black-box auditing procedures. To further demonstrate the effectiveness of this technique, we use it to audit a black-box recidivism prediction algorithm.
KW - ANOVA
KW - Algorithmic accountability
KW - Black-box auditing
KW - Deep learning
KW - Discrimination-aware data mining
KW - Feature influence
KW - Interpretable machine learning
UR - http://www.scopus.com/inward/record.url?scp=85032187305&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85032187305&partnerID=8YFLogxK
U2 - 10.1007/s10115-017-1116-3
DO - 10.1007/s10115-017-1116-3
M3 - Article
AN - SCOPUS:85032187305
SN - 0219-1377
VL - 54
SP - 95
EP - 122
JO - Knowledge and Information Systems
JF - Knowledge and Information Systems
IS - 1
ER -