TY - JOUR
T1 - Disentangling influence: Using disentangled representations to audit model predictions
T2 - 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019
AU - Marx, Charles T.
AU - Phillips, Richard Lanas
AU - Friedler, Sorelle A.
AU - Scheidegger, Carlos
AU - Venkatasubramanian, Suresh
N1 - Funding Information:
This research was funded in part by the NSF under grants DMR-1709351, IIS-1633387, IIS-1633724, and IIS-1815238, by the DARPA SD2 program, and by the Arnold and Mabel Beckman Foundation. The Titan Xp used for this research was donated by the NVIDIA Corporation.
Publisher Copyright:
© 2019 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2019
Y1 - 2019
N2 - Motivated by the need to audit complex and black-box models, there has been extensive research on quantifying how data features influence model predictions. Feature influence can be direct (a feature directly influences model outcomes) or indirect (model outcomes are influenced via proxy features). Feature influence can also be expressed in aggregate over the training or test data, or locally with respect to a single point. Current research has typically focused on only one of each of these dimensions. In this paper, we develop disentangled influence audits, a procedure to audit the indirect influence of features. Specifically, we show that disentangled representations provide a mechanism to identify proxy features in the dataset while allowing an explicit computation of feature influence on either individual or aggregate-level outcomes. We show through both theory and experiments that disentangled influence audits can both detect proxy features and show, for each individual or in aggregate, which of these proxy features most affects the classifier being audited. In this respect, our method is more powerful than existing methods for ascertaining feature influence.
AB - Motivated by the need to audit complex and black-box models, there has been extensive research on quantifying how data features influence model predictions. Feature influence can be direct (a feature directly influences model outcomes) or indirect (model outcomes are influenced via proxy features). Feature influence can also be expressed in aggregate over the training or test data, or locally with respect to a single point. Current research has typically focused on only one of each of these dimensions. In this paper, we develop disentangled influence audits, a procedure to audit the indirect influence of features. Specifically, we show that disentangled representations provide a mechanism to identify proxy features in the dataset while allowing an explicit computation of feature influence on either individual or aggregate-level outcomes. We show through both theory and experiments that disentangled influence audits can both detect proxy features and show, for each individual or in aggregate, which of these proxy features most affects the classifier being audited. In this respect, our method is more powerful than existing methods for ascertaining feature influence.
UR - http://www.scopus.com/inward/record.url?scp=85090171653&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090171653&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85090171653
SN - 1049-5258
VL - 32
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
Y2 - 8 December 2019 through 14 December 2019
ER -