Fairness in representation: Quantifying stereotyping as a representational harm

Mohsen Abbasi, Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

While harms of allocation have been increasingly studied as part of the subfield of algorithmic fairness, harms of representation have received considerably less attention. In this paper, we formalize two notions of stereotyping and show how they manifest in later allocative harms within the machine learning pipeline. We also propose mitigation strategies and demonstrate their effectiveness on synthetic datasets.

Original language: English (US)
Title of host publication: SIAM International Conference on Data Mining, SDM 2019
Publisher: Society for Industrial and Applied Mathematics Publications
Pages: 801-809
Number of pages: 9
ISBN (Electronic): 9781611975673
DOIs
State: Published - 2019
Event: 19th SIAM International Conference on Data Mining, SDM 2019 - Calgary, Canada
Duration: May 2, 2019 – May 4, 2019

Publication series

Name: SIAM International Conference on Data Mining, SDM 2019

Conference

Conference: 19th SIAM International Conference on Data Mining, SDM 2019
Country/Territory: Canada
City: Calgary
Period: 5/2/19 – 5/4/19

ASJC Scopus subject areas

  • Software
