Auditing black-box models for indirect influence

Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, Suresh Venkatasubramanian

Research output: Conference contribution

  • 2 Citations

Abstract

Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior, and in particular how different features influence the model prediction. This is important when interpreting the behavior of complex models, or asserting that certain problematic attributes (like race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models, which lets us study the extent to which existing models take advantage of particular features in the dataset, without knowing how the models work. Our work focuses on the problem of indirect influence: how some features might indirectly influence outcomes via other, related features. As a result, we can find attribute influences even in cases where, upon further direct examination of the model, the attribute is not referred to by the model at all. Our approach does not require the black-box model to be retrained. This is important if (for example) the model is only accessible via an API, and contrasts our work with other methods that investigate feature influence like feature selection. We present experimental evidence for the effectiveness of our procedure using a variety of publicly available datasets and models. We also validate our procedure using techniques from interpretable learning and feature selection, as well as against other black-box auditing procedures.
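The distinction the abstract draws between direct and indirect influence can be made concrete with a toy sketch. The code below is illustrative only, not the paper's actual auditing procedure: a "black-box" model never reads the protected attribute `a` directly, but relies on a correlated proxy feature `b`. Randomizing `a` alone (a direct audit) leaves accuracy unchanged, while also obscuring `a`'s information carried in `b` exposes the indirect influence. All names and the toy dataset are invented for this example.

```python
import random

random.seed(0)

# Toy data: protected attribute a in {0, 1}; b is a noisy proxy for a;
# the label y equals a, so the model effectively uses a through b.
data = []
for _ in range(1000):
    a = random.randint(0, 1)
    b = a if random.random() < 0.9 else 1 - a
    data.append((a, b, a))

def model(a, b):
    # The "black box": never touches a directly, predicts from b alone.
    return b

def accuracy(rows):
    return sum(model(a, b) == y for a, b, y in rows) / len(rows)

base = accuracy(data)

# Direct audit: randomize a alone. The model ignores a, so accuracy
# is unchanged -- a direct audit reports no influence.
direct = accuracy([(random.randint(0, 1), b, y) for a, b, y in data])

# Indirect audit (in the spirit of the paper): obscure a by also removing
# a's information from correlated features -- here, b is replaced with a
# value independent of a, and accuracy collapses toward chance.
indirect = accuracy([(a, random.randint(0, 1), y) for a, b, y in data])

print(base, direct, indirect)
```

The gap between `base` and `indirect`, absent any gap between `base` and `direct`, is the signature of indirect influence that a retraining-free, query-only audit can detect.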

Language: English (US)
Title of host publication: Proceedings - 16th IEEE International Conference on Data Mining, ICDM 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-10
Number of pages: 10
ISBN (Electronic): 9781509054725
DOIs: 10.1109/ICDM.2016.158
State: Published - Jan 31 2017
Event: 16th IEEE International Conference on Data Mining, ICDM 2016 - Barcelona, Catalonia, Spain
Duration: Dec 12 2016 – Dec 15 2016

Other

Other: 16th IEEE International Conference on Data Mining, ICDM 2016
Country: Spain
City: Barcelona, Catalonia
Period: 12/12/16 – 12/15/16

Fingerprint

  • Feature extraction
  • Application programming interfaces (API)

ASJC Scopus subject areas

  • Engineering(all)

Cite this

Adler, P., Falk, C., Friedler, S. A., Rybeck, G., Scheidegger, C., Smith, B., & Venkatasubramanian, S. (2017). Auditing black-box models for indirect influence. In Proceedings - 16th IEEE International Conference on Data Mining, ICDM 2016 (pp. 1-10). [7837824] Institute of Electrical and Electronics Engineers Inc. DOI: 10.1109/ICDM.2016.158

Auditing black-box models for indirect influence. / Adler, Philip; Falk, Casey; Friedler, Sorelle A.; Rybeck, Gabriel; Scheidegger, Carlos; Smith, Brandon; Venkatasubramanian, Suresh.

Proceedings - 16th IEEE International Conference on Data Mining, ICDM 2016. Institute of Electrical and Electronics Engineers Inc., 2017. p. 1-10 7837824.

Research output: Conference contribution

Adler, P, Falk, C, Friedler, SA, Rybeck, G, Scheidegger, C, Smith, B & Venkatasubramanian, S 2017, Auditing black-box models for indirect influence. in Proceedings - 16th IEEE International Conference on Data Mining, ICDM 2016., 7837824, Institute of Electrical and Electronics Engineers Inc., pp. 1-10, 16th IEEE International Conference on Data Mining, ICDM 2016, Barcelona, Catalonia, Spain, 12/12/16. DOI: 10.1109/ICDM.2016.158
Adler P, Falk C, Friedler SA, Rybeck G, Scheidegger C, Smith B et al. Auditing black-box models for indirect influence. In Proceedings - 16th IEEE International Conference on Data Mining, ICDM 2016. Institute of Electrical and Electronics Engineers Inc. 2017. p. 1-10. 7837824. DOI: 10.1109/ICDM.2016.158
Adler, Philip ; Falk, Casey ; Friedler, Sorelle A. ; Rybeck, Gabriel ; Scheidegger, Carlos ; Smith, Brandon ; Venkatasubramanian, Suresh. / Auditing black-box models for indirect influence. Proceedings - 16th IEEE International Conference on Data Mining, ICDM 2016. Institute of Electrical and Electronics Engineers Inc., 2017. pp. 1-10
@inproceedings{e5e92271f2aa4d028d1cd3800a224aac,
title = "Auditing black-box models for indirect influence",
abstract = "Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior, and in particular how different features influence the model prediction. This is important when interpreting the behavior of complex models, or asserting that certain problematic attributes (like race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models, which lets us study the extent to which existing models take advantage of particular features in the dataset, without knowing how the models work. Our work focuses on the problem of indirect influence: how some features might indirectly influence outcomes via other, related features. As a result, we can find attribute influences even in cases where, upon further direct examination of the model, the attribute is not referred to by the model at all. Our approach does not require the black-box model to be retrained. This is important if (for example) the model is only accessible via an API, and contrasts our work with other methods that investigate feature influence like feature selection. We present experimental evidence for the effectiveness of our procedure using a variety of publicly available datasets and models. We also validate our procedure using techniques from interpretable learning and feature selection, as well as against other black-box auditing procedures.",
author = "Philip Adler and Casey Falk and Friedler, {Sorelle A.} and Gabriel Rybeck and Carlos Scheidegger and Brandon Smith and Suresh Venkatasubramanian",
year = "2017",
month = "1",
doi = "10.1109/ICDM.2016.158",
pages = "1--10",
booktitle = "Proceedings - 16th IEEE International Conference on Data Mining, ICDM 2016",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
address = "United States",

}

TY - CONF

T1 - Auditing black-box models for indirect influence

AU - Adler,Philip

AU - Falk,Casey

AU - Friedler,Sorelle A.

AU - Rybeck,Gabriel

AU - Scheidegger,Carlos

AU - Smith,Brandon

AU - Venkatasubramanian,Suresh

PY - 2017/1/31

Y1 - 2017/1/31

N2 - Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior, and in particular how different features influence the model prediction. This is important when interpreting the behavior of complex models, or asserting that certain problematic attributes (like race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models, which lets us study the extent to which existing models take advantage of particular features in the dataset, without knowing how the models work. Our work focuses on the problem of indirect influence: how some features might indirectly influence outcomes via other, related features. As a result, we can find attribute influences even in cases where, upon further direct examination of the model, the attribute is not referred to by the model at all. Our approach does not require the black-box model to be retrained. This is important if (for example) the model is only accessible via an API, and contrasts our work with other methods that investigate feature influence like feature selection. We present experimental evidence for the effectiveness of our procedure using a variety of publicly available datasets and models. We also validate our procedure using techniques from interpretable learning and feature selection, as well as against other black-box auditing procedures.

AB - Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior, and in particular how different features influence the model prediction. This is important when interpreting the behavior of complex models, or asserting that certain problematic attributes (like race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models, which lets us study the extent to which existing models take advantage of particular features in the dataset, without knowing how the models work. Our work focuses on the problem of indirect influence: how some features might indirectly influence outcomes via other, related features. As a result, we can find attribute influences even in cases where, upon further direct examination of the model, the attribute is not referred to by the model at all. Our approach does not require the black-box model to be retrained. This is important if (for example) the model is only accessible via an API, and contrasts our work with other methods that investigate feature influence like feature selection. We present experimental evidence for the effectiveness of our procedure using a variety of publicly available datasets and models. We also validate our procedure using techniques from interpretable learning and feature selection, as well as against other black-box auditing procedures.

UR - http://www.scopus.com/inward/record.url?scp=85014529607&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85014529607&partnerID=8YFLogxK

U2 - 10.1109/ICDM.2016.158

DO - 10.1109/ICDM.2016.158

M3 - Conference contribution

SP - 1

EP - 10

BT - Proceedings - 16th IEEE International Conference on Data Mining, ICDM 2016

PB - Institute of Electrical and Electronics Engineers Inc.

ER -