A comparative study of fairness-enhancing interventions in machine learning

Sorelle A. Friedler, Sonam Choudhary, Carlos Eduardo Scheidegger, Evan P. Hamilton, Suresh Venkatasubramanian, Derek Roth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Computers are increasingly used to make decisions that have a significant impact on people's lives. Often, these predictions can affect different population subgroups disproportionately. As a result, the issue of fairness has received much recent interest, and a number of fairness-enhanced classifiers have appeared in the literature. This paper seeks to study the following questions: how do these different techniques fundamentally compare to one another, and what accounts for the differences? Specifically, we seek to bring attention to many under-appreciated aspects of such fairness-enhancing interventions that require investigation before these algorithms can receive broad adoption. We present the results of an open benchmark we have developed that lets us compare a number of different algorithms under a variety of fairness measures and existing datasets. We find that although different algorithms tend to prefer specific formulations of fairness preservation, many of these measures strongly correlate with one another. In addition, we find that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition (simulated in our benchmark by varying training-test splits) and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
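
The split-sensitivity experiment mentioned in the abstract can be illustrated with a minimal sketch; this is not the authors' released benchmark code, and the classifier, fairness measure, and data layout below are illustrative assumptions. The idea is to retrain a baseline model over repeated random training-test splits and record a group-fairness measure (here, statistical parity difference) on each split; a large spread across splits is the kind of brittleness the abstract describes.

```python
# Minimal sketch: how sensitive is a fairness measure to the train-test split?
# Not the paper's benchmark; dataset, columns, and classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def statistical_parity_difference(y_pred, sensitive):
    """P(y_hat = 1 | unprivileged group) - P(y_hat = 1 | privileged group)."""
    return y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean()

def split_sensitivity(X, y, sensitive, n_splits=10, seed=0):
    """Retrain a baseline classifier on repeated random splits and collect
    the fairness measure on each held-out test set."""
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_splits):
        X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
            X, y, sensitive, test_size=0.3,
            random_state=rng.randint(1 << 30))
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        scores.append(statistical_parity_difference(clf.predict(X_te), s_te))
    # A large standard deviation across splits indicates brittleness.
    return float(np.mean(scores)), float(np.std(scores))
```

The same loop can be wrapped around any fairness-enhancing intervention in place of the baseline classifier to compare how stable each method's fairness scores are under resampling.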

Original language: English (US)
Title of host publication: FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
Publisher: Association for Computing Machinery, Inc
Pages: 329-338
Number of pages: 10
ISBN (Electronic): 9781450361255
DOI: https://doi.org/10.1145/3287560.3287589
State: Published - Jan 29, 2019
Event: 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019 - Atlanta, United States
Duration: Jan 29, 2019 - Jan 31, 2019

Publication series

Name: FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency

Conference

Conference: 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019
Country: United States
City: Atlanta
Period: 1/29/19 - 1/31/19

Fingerprint

Learning systems
Classifiers
Comparative study
Fairness
Machine learning

Keywords

  • Benchmarks
  • Fairness-aware machine learning

ASJC Scopus subject areas

  • Business, Management and Accounting (all)
  • Engineering (all)

Cite this

Friedler, S. A., Choudhary, S., Scheidegger, C. E., Hamilton, E. P., Venkatasubramanian, S., & Roth, D. (2019). A comparative study of fairness-enhancing interventions in machine learning. In FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (pp. 329-338). (FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency). Association for Computing Machinery, Inc. https://doi.org/10.1145/3287560.3287589
