Auditing black-box models for indirect influence

Philip Adler, Casey Falk, Sorelle A. Friedler, Tionney Nix, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, Suresh Venkatasubramanian

Research output: Article (peer-reviewed)

Abstract

Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior and in particular how different features influence the model prediction. This is important when interpreting the behavior of complex models or asserting that certain problematic attributes (such as race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models, which lets us study the extent to which existing models take advantage of particular features in the data set, without knowing how the models work. Our work focuses on the problem of indirect influence: how some features might indirectly influence outcomes via other, related features. As a result, we can find attribute influences even in cases where, upon further direct examination of the model, the attribute is not referred to by the model at all. Our approach does not require the black-box model to be retrained. This is important if, for example, the model is only accessible via an API, and contrasts our work with other methods that investigate feature influence such as feature selection. We present experimental evidence for the effectiveness of our procedure using a variety of publicly available data sets and models. We also validate our procedure using techniques from interpretable learning and feature selection, as well as against other black-box auditing procedures. To further demonstrate the effectiveness of this technique, we use it to audit a black-box recidivism prediction algorithm.
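The following is a minimal, hypothetical sketch of the kind of audit the abstract describes, not the paper's exact procedure. It assumes the black box is reachable only through a `model.predict` call, that the data is a numeric pandas DataFrame, and that "obscuring" a feature means removing both its direct value and the indirect traces of it carried by correlated columns (here via a simple quantile-matching step across the groups the feature defines). The influence score is the accuracy drop on the obscured data; no retraining of the model is involved.

```python
# Hypothetical sketch of retraining-free influence auditing for a black-box model.
import numpy as np
import pandas as pd

def obscure(df, feature, bins=10):
    """Return a copy of df in which `feature` is no longer predictable
    from the remaining columns (simplified quantile-matching variant)."""
    out = df.copy()
    # Discretize the audited feature to define groups for conditioning.
    groups = pd.qcut(df[feature], q=bins, duplicates="drop")
    for col in df.columns:
        if col == feature:
            out[col] = df[col].median()  # remove direct influence
            continue
        # Remove indirect influence: within each group, map values to their
        # within-group quantile, then to the pooled column's quantile, so the
        # column's distribution no longer depends on `feature`.
        pooled = np.sort(df[col].to_numpy())
        def to_pooled(s):
            ranks = s.rank(pct=True).to_numpy()
            idx = np.clip((ranks * (len(pooled) - 1)).astype(int),
                          0, len(pooled) - 1)
            return pd.Series(pooled[idx], index=s.index)
        out[col] = df.groupby(groups, observed=True)[col].transform(to_pooled)
    return out

def indirect_influence(model, X, y, feature):
    """Accuracy drop when `feature` is obscured; larger means more influence."""
    base = np.mean(model.predict(X) == y)
    obscured = np.mean(model.predict(obscure(X, feature)) == y)
    return base - obscured
```

In this simplified form, a feature that the model never reads directly can still receive a large score if correlated columns carry its information, which is the "indirect influence" the abstract refers to; the paper itself validates such scores against interpretable models, feature selection, and other black-box auditing procedures.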

Language: English (US)
Pages: 1-28
Number of pages: 28
Journal: Knowledge and Information Systems (ISSN 0219-1377, Springer)
DOI: 10.1007/s10115-017-1116-3
State: Accepted/In press - Oct 25, 2017

Keywords

  • Algorithmic accountability
  • ANOVA
  • Black-box auditing
  • Deep learning
  • Discrimination-aware data mining
  • Feature influence
  • Interpretable machine learning

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Human-Computer Interaction
  • Hardware and Architecture
  • Artificial Intelligence

Cite this

Adler, P., Falk, C., Friedler, S. A., Nix, T., Rybeck, G., Scheidegger, C., ... Venkatasubramanian, S. (2017). Auditing black-box models for indirect influence. Knowledge and Information Systems, 1-28. DOI: 10.1007/s10115-017-1116-3
