Fine tuning lasso in an adversarial environment against gradient attacks

Gregory Ditzler, Ashley Prater

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citation

Abstract

Machine learning and data mining algorithms typically assume that the training and testing data are sampled from the same fixed probability distribution; however, this assumption is often violated in practice. The field of domain adaptation addresses the situation where the assumption of a fixed probability distribution across the two domains is violated; however, the difference between the two domains (training/source and testing/target) may not be known a priori. There has been a recent thrust in addressing the problem of learning in the presence of an adversary, which we formulate as a domain adaptation problem in order to build a more robust classifier. This is because the overall security of classifiers and their preprocessing stages has been called into question with the recent findings of adversaries in a learning setting. Adversarial training (and testing) data pose a serious threat in scenarios where an attacker has the opportunity to "poison" the training data set(s) or "evade" on the testing data set(s) in order to achieve an outcome that is not in the best interest of the classifier. Recent work has begun to show the impact of adversarial data on several classifiers; however, the impact of the adversary on the preprocessing of data (i.e., dimensionality reduction or feature selection) has largely been ignored in the recent resurgence of adversarial learning research. Furthermore, variable selection, which is a vital component of any data analysis, has been shown to be particularly susceptible to an attacker who has knowledge of the task. In this work, we explore avenues for learning resilient classification models in the adversarial learning setting by considering the effects of adversarial data and how to mitigate those effects through optimization. Our model forms a single convex optimization problem that uses the labeled training data from the source domain and known weaknesses of the model for an adversarial component. We benchmark the proposed approach on synthetic data and show the trade-off between classification accuracy and skew-insensitive statistics.
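The abstract describes the model only at a high level. As an illustrative sketch (not the authors' exact formulation), one way to obtain a single convex problem that couples an l1-regularized linear model with a gradient-style adversarial component is to evaluate the loss at the worst-case l_inf-bounded perturbation of each input: for a linear score w·x with label y in {-1, +1}, minimizing y·(x + δ)·w over ||δ||_inf ≤ ε gives y·x·w − ε·||w||_1, so the robust loss remains convex in w. The function names and hyperparameters below are hypothetical:

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of the l1 norm: shrink each coordinate toward zero.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def robust_lasso_logistic(X, y, eps=0.1, lam=0.01, lr=0.1, iters=500):
    """Illustrative 'adversarial lasso' (assumption, not the paper's model):
    logistic loss at the worst-case l_inf perturbation of radius eps, plus an
    l1 penalty handled with a proximal (soft-threshold) step."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        # Worst-case margin: y * x.w - eps * ||w||_1 (closed form for linear models).
        margins = y * (X @ w) - eps * np.abs(w).sum()
        p = 1.0 / (1.0 + np.exp(margins))  # sigmoid(-margin), per-sample loss weight
        # Gradient of the robust logistic loss w.r.t. w.
        grad = -(X * (y * p)[:, None]).mean(axis=0) + eps * p.mean() * np.sign(w)
        w = soft_threshold(w - lr * grad, lr * lam)  # proximal l1 step
    return w

# Toy usage on synthetic data with three informative features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -2.0, 1.5]
y = np.sign(X @ w_true)
y[y == 0] = 1.0
w_hat = robust_lasso_logistic(X, y)
```

Because the inner maximization has a closed form for linear scores, no alternating attacker/defender loop is needed; the robust objective is minimized directly, which matches the abstract's claim of a single convex optimization problem.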

Original language: English (US)
Title of host publication: 2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-7
Number of pages: 7
ISBN (Electronic): 9781538627259
DOIs
State: Published - Feb 2 2018
Event: 2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017 - Honolulu, United States
Duration: Nov 27 2017 - Dec 1 2017

Publication series

Name: 2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017 - Proceedings
Volume: 2018-January

Conference

Conference: 2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017
Country: United States
City: Honolulu
Period: 11/27/17 - 12/1/17

Keywords

  • Adversarial Machine Learning
  • Feature Selection
  • Supervised Learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Control and Optimization
