TY - GEN
T1 - “If it didn’t happen, why would I change my decision?”
T2 - 10th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2022
AU - Yacoby, Yaniv
AU - Green, Ben
AU - Griffin, Christopher L.
AU - Doshi-Velez, Finale
N1 - Publisher Copyright:
© 2022, Association for the Advancement of Artificial Intelligence.
PY - 2022
Y1 - 2022
N2 - Many researchers and policymakers have expressed excitement about algorithmic explanations enabling more fair and responsible decision-making. However, recent experimental studies have found that explanations do not always improve human use of algorithmic advice. In this study, we shed light on how people interpret and respond to counterfactual explanations (CFEs)—explanations that show how a model’s output would change with marginal changes to its input(s)—in the context of pretrial risk assessment instruments (PRAIs). We ran think-aloud trials with eight sitting U.S. state court judges, providing them with recommendations from a PRAI that includes CFEs. We found that the CFEs did not alter the judges’ decisions. At first, judges misinterpreted the counterfactuals as real—rather than hypothetical—changes to defendants. Once judges understood what the counterfactuals meant, they ignored them, stating their role is only to make decisions regarding the actual defendant in question. The judges also expressed a mix of reasons for ignoring or following the advice of the PRAI without CFEs. These results add to the literature detailing the unexpected ways in which people respond to algorithms and explanations. They also highlight new challenges associated with improving human-algorithm collaborations through explanations.
AB - Many researchers and policymakers have expressed excitement about algorithmic explanations enabling more fair and responsible decision-making. However, recent experimental studies have found that explanations do not always improve human use of algorithmic advice. In this study, we shed light on how people interpret and respond to counterfactual explanations (CFEs)—explanations that show how a model’s output would change with marginal changes to its input(s)—in the context of pretrial risk assessment instruments (PRAIs). We ran think-aloud trials with eight sitting U.S. state court judges, providing them with recommendations from a PRAI that includes CFEs. We found that the CFEs did not alter the judges’ decisions. At first, judges misinterpreted the counterfactuals as real—rather than hypothetical—changes to defendants. Once judges understood what the counterfactuals meant, they ignored them, stating their role is only to make decisions regarding the actual defendant in question. The judges also expressed a mix of reasons for ignoring or following the advice of the PRAI without CFEs. These results add to the literature detailing the unexpected ways in which people respond to algorithms and explanations. They also highlight new challenges associated with improving human-algorithm collaborations through explanations.
UR - http://www.scopus.com/inward/record.url?scp=85173089978&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85173089978&partnerID=8YFLogxK
U2 - 10.1609/hcomp.v10i1.22001
DO - 10.1609/hcomp.v10i1.22001
M3 - Conference contribution
AN - SCOPUS:85173089978
SN - 9781577358787
T3 - Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
SP - 219
EP - 230
BT - HCOMP 2022 - Proceedings of the 10th AAAI Conference on Human Computation and Crowdsourcing
A2 - Hsu, Jane
A2 - Yin, Ming
PB - Association for the Advancement of Artificial Intelligence
Y2 - 6 November 2022 through 10 November 2022
ER -