Fairness in representation: Quantifying stereotyping as a representational harm

Mohsen Abbasi, Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

While harms of allocation have been increasingly studied as part of the subfield of algorithmic fairness, harms of representation have received considerably less attention. In this paper, we formalize two notions of stereotyping and show how they manifest in later allocative harms within the machine learning pipeline. We also propose mitigation strategies and demonstrate their effectiveness on synthetic datasets.

Original language: English (US)
Title of host publication: SIAM International Conference on Data Mining, SDM 2019
Publisher: Society for Industrial and Applied Mathematics Publications
Pages: 801-809
Number of pages: 9
ISBN (Electronic): 9781611975673
DOIs: 10.1137/1.9781611975673.90
State: Published - 2019
Event: 19th SIAM International Conference on Data Mining, SDM 2019 - Calgary, Canada
Duration: May 2, 2019 - May 4, 2019

Publication series

Name: SIAM International Conference on Data Mining, SDM 2019

Conference

Conference: 19th SIAM International Conference on Data Mining, SDM 2019
Country: Canada
City: Calgary
Period: 5/2/19 - 5/4/19

ASJC Scopus subject areas

  • Software


Cite this

    Abbasi, M., Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2019). Fairness in representation: Quantifying stereotyping as a representational harm. In SIAM International Conference on Data Mining, SDM 2019 (pp. 801-809). (SIAM International Conference on Data Mining, SDM 2019). Society for Industrial and Applied Mathematics Publications. https://doi.org/10.1137/1.9781611975673.90