Abstract
Recently, a growing number of credibility assessment technologies (CATs) have been developed to assist human decision-making in evidence-based investigations, such as criminal investigations, financial fraud detection, and insurance claim verification. Despite the widespread adoption of CATs, it remains unclear how CAT and human biases interact during evidence collection and affect the fairness of investigation outcomes. To address this gap, we develop a Bayesian framework that models CAT adoption and the iterative collection and interpretation of evidence in investigations. Building on this framework, we conduct simulations to examine how CATs affect investigation fairness under various configurations of evidence effectiveness, CAT effectiveness, human biases, technological biases, and decision stakes. We find that when investigators are unaware of their own biases, CAT adoption generally increases the fairness of investigation outcomes, provided the CAT is more effective than the evidence and less biased than the investigators. However, CATs' positive influence on fairness diminishes as investigators become aware of their own biases. Our results show that CATs' impact on decision fairness depends strongly on technological, human, and contextual factors. Based on these findings, we further discuss implications for CAT development, evaluation, and adoption.
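The core mechanism described in the abstract can be illustrated with a small simulation. The sketch below is not the authors' model; it is a minimal, hypothetical instance of the general idea: an investigator starts from a prior belief of guilt (a biased investigator assigns a higher prior to one group), updates that belief via Bayes' rule on a noisy evidence signal and, optionally, on a CAT reading, then decides once the posterior crosses a threshold. All accuracy values, priors, and the 0.5 decision threshold are illustrative assumptions.

```python
import random

def bayes_update(prior, lr):
    """Update a probability with a likelihood ratio via the odds form of Bayes' rule."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

def simulate(n, prior, evid_acc, cat_acc=None, base_rate=0.5, seed=0):
    """Fraction of n subjects judged guilty (posterior > 0.5).

    Each subject is truly guilty with probability base_rate. The evidence
    signal points the correct way with probability evid_acc; if cat_acc is
    given, an additional CAT reading is incorporated the same way.
    """
    rng = random.Random(seed)
    flagged = 0
    for _ in range(n):
        guilty = rng.random() < base_rate
        p = prior
        for acc in ([evid_acc] if cat_acc is None else [evid_acc, cat_acc]):
            # Signal says "guilty" with prob acc if guilty, 1-acc if innocent.
            says_guilty = rng.random() < (acc if guilty else 1 - acc)
            lr = acc / (1 - acc) if says_guilty else (1 - acc) / acc
            p = bayes_update(p, lr)
        flagged += p > 0.5
    return flagged / n

# Hypothetical biased investigator: prior 0.7 for group B vs 0.5 for group A,
# weak evidence (60% accurate), unbiased CAT that is more effective (90%).
gap_no_cat = abs(simulate(5000, 0.7, 0.6) - simulate(5000, 0.5, 0.6))
gap_cat = abs(simulate(5000, 0.7, 0.6, cat_acc=0.9) - simulate(5000, 0.5, 0.6, cat_acc=0.9))
```

Under these illustrative settings, the fairness gap (difference in guilty-judgment rates between the two groups) shrinks once the more effective, unbiased CAT is adopted, because its stronger likelihood ratio dominates the biased prior, consistent with the abstract's first finding.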
| | |
|---|---|
| Original language | English (US) |
| Article number | 114326 |
| Journal | Decision Support Systems |
| Volume | 187 |
| DOIs | |
| State | Published - Dec 2024 |
Keywords
- Algorithmic fairness
- Bayesian modeling
- Credibility assessment technologies
- Evidence-based investigations
- Human biases
- Human-machine collaboration
ASJC Scopus subject areas
- Management Information Systems
- Information Systems
- Developmental and Educational Psychology
- Arts and Humanities (miscellaneous)
- Information Systems and Management