The (Im)possibility of fairness

Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian

Research output: Contribution to journal › Article › peer-review

Abstract

Automated decision-making systems now commonly determine criminal sentences, hiring choices, and loan applications. This widespread deployment is concerning, since these systems have the potential to discriminate against people based on their demographic characteristics: current sentencing risk assessments are racially biased, and job advertisements discriminate on gender. These concerns have led to the growth of fairness-aware machine learning, a field that aims to enable algorithmic systems that are fair by design. To design fair systems, researchers must first agree on what it means to be fair. The authors introduce a framework for understanding different definitions of fairness and how they relate to one another. The framework shows that the definitions of fairness and their implementations correspond to different axiomatic beliefs about the world.
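
The tension between competing fairness definitions can be made concrete with a small computation. The sketch below is illustrative and not taken from the paper: it evaluates two common group-fairness criteria, demographic parity (equal positive-decision rates across groups) and equal false-positive rates, on a hypothetical toy dataset, and shows that a set of decisions satisfying one criterion need not satisfy the other when the groups' base rates differ.

```python
# Illustrative toy example (not from the paper): two group-fairness
# definitions, demographic parity and equal false-positive rates,
# evaluated on the same hypothetical decisions.

from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    group: str   # protected attribute, e.g. "A" or "B"
    label: int   # true outcome (1 = positive)
    pred: int    # model decision (1 = positive)

def positive_rate(records: List[Record], group: str) -> float:
    """P(pred = 1 | group): the quantity equalized by demographic parity."""
    g = [r for r in records if r.group == group]
    return sum(r.pred for r in g) / len(g)

def false_positive_rate(records: List[Record], group: str) -> float:
    """P(pred = 1 | label = 0, group): the quantity equalized by equal FPR."""
    g = [r for r in records if r.group == group and r.label == 0]
    return sum(r.pred for r in g) / len(g)

# Toy data: groups A and B have different base rates of the true label.
data = [
    Record("A", 1, 1), Record("A", 1, 1), Record("A", 1, 0), Record("A", 0, 0),
    Record("B", 1, 1), Record("B", 0, 1), Record("B", 0, 0), Record("B", 0, 0),
]

dp_gap  = abs(positive_rate(data, "A") - positive_rate(data, "B"))
fpr_gap = abs(false_positive_rate(data, "A") - false_positive_rate(data, "B"))

print(f"Demographic parity gap:  {dp_gap:.2f}")   # 0 means "fair" under parity
print(f"False-positive rate gap: {fpr_gap:.2f}")  # 0 means "fair" under equal FPR
```

On this toy data the demographic parity gap is 0.00 while the false-positive-rate gap is about 0.33: the same decisions look fair under one definition and unfair under the other, which is the kind of conflict the framework traces back to differing assumptions about the world.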

Original language: English (US)
Pages (from-to): 136-143
Number of pages: 8
Journal: Communications of the ACM
Volume: 64
Issue number: 4
DOIs
State: Published - Apr 2021

ASJC Scopus subject areas

  • General Computer Science
