Learning what we don't care about: Anti-training with sacrificial functions

Gregory Ditzler, Sean Miller, Jerzy W. Rozenblit

Research output: Contribution to journal › Article

Abstract

The traditional machine learning paradigm focuses on optimizing an objective. This optimization is carried out by adjusting the free parameters of a model; however, in classification we often do not assess the performance of the model using the optimized objective. Furthermore, choosing a poor set of free parameters for the objective function can lead to unintentional overfitting of the data. In this work, we present a novel approach based on the theory of anti-training for generating sacrificial functions that transform data. These sacrificial functions incorporate new classifier-independent objectives into the task of choosing a model's free parameters. Our approach builds upon recent work in anti-training to develop a new class of functions that a classifier should not perform well on. We use multi-objective evolutionary algorithms to solve the task of model selection by minimizing one or more sacrificial functions alongside functions the model should always perform well on (e.g., error, sensitivity, and specificity). Our experiments found that the proposed method provides statistically significant improvements in generalization error over state-of-the-art optimizers.
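To make the idea concrete, the sketch below shows one way multi-objective model selection with a sacrificial objective could look in practice. It is an illustration, not the authors' implementation: the synthetic dataset, the decision-tree model, the max_depth grid, the label-shuffle stand-in for a sacrificial transform, and the brute-force Pareto filter are all assumptions made here for brevity. Each candidate value of the free parameter is scored on two objectives to be minimized jointly: validation error, and how well a model with that setting can fit shuffled (meaningless) labels, since a classifier that fits noise well has the capacity to overfit.

# Minimal sketch of multi-objective model selection with a sacrificial
# objective. Assumptions (not from the paper): shuffled labels stand in
# for a sacrificial data transform, and a brute-force Pareto filter
# replaces the multi-objective evolutionary algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# Hypothetical sacrificial target: shuffled labels that no classifier
# should be able to fit without memorizing noise.
y_sac = rng.permutation(y_tr)

def objectives(max_depth):
    """Return (validation error, sacrificial fit); both are minimized."""
    clf = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    clf.fit(X_tr, y_tr)
    val_error = 1.0 - accuracy_score(y_va, clf.predict(X_va))
    sac = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    sac.fit(X_tr, y_sac)
    sac_fit = accuracy_score(y_sac, sac.predict(X_tr))  # high => memorizes noise
    return (val_error, sac_fit)

def dominates(a, b):
    """True if objective vector a is at least as good as b everywhere and better somewhere."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

candidates = [1, 2, 3, 5, 8, 12, None]   # the free parameter under selection
scores = [objectives(d) for d in candidates]

# Keep the Pareto-optimal settings: those not dominated by any other candidate.
pareto = [candidates[i] for i, s in enumerate(scores)
          if not any(dominates(scores[j], s) for j in range(len(scores)) if j != i)]

for d, s in zip(candidates, scores):
    flag = "*" if d in pareto else " "
    print(f"{flag} max_depth={d}: val_error={s[0]:.3f}, sacrificial_fit={s[1]:.3f}")

In the paper itself, the search is carried out with multi-objective evolutionary algorithms rather than the exhaustive scan above, and the sacrificial functions are transformations of the data rather than the simple label shuffle used here.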

Original language: English (US)
Pages (from-to): 198-211
Number of pages: 14
Journal: Information Sciences
Volume: 496
DOIs: 10.1016/j.ins.2019.05.018
State: Published - Sep 1 2019

Keywords

  • Anti-training
  • Model selection
  • Multi-objective optimization

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Theoretical Computer Science
  • Computer Science Applications
  • Information Systems and Management
  • Artificial Intelligence

Cite this

Learning what we don't care about: Anti-training with sacrificial functions. / Ditzler, Gregory; Miller, Sean; Rozenblit, Jerzy W.

In: Information Sciences, Vol. 496, 01.09.2019, p. 198-211.

Research output: Contribution to journal › Article

@article{522fe163b16447f9b4c81f8c4e262452,
title = "Learning what we don't care about: Anti-training with sacrificial functions",
abstract = "The traditional machine learning paradigm focuses on optimizing an objective. This task of optimization is carried out by adjusting the free parameters of a model; however, many times in classification we do not assess the performance of the model using the optimized objective. Furthermore, choosing a poor set of free parameters for the objective function could lead to unintentional overfitting of the data. In this work, we present a novel approach based on the theory of anti-training for generating sacrificial functions that transform data. These sacrificial functions incorporate new classifier-independent objectives into the task of choosing a models’ free parameters. Our approach builds upon recent work in anti-training to develop a new class of functions that a classifier should not perform well on. We use multi-objective evolutionary algorithms to solve the task of model selection by minimizing a sacrificial function(s)and functions the model should always perform well on (i.e., error, sensitivity, specificity, etc.). Our experiments found that the proposed method provides statistically significant improvements in the generalization error over the state-of-the-art optimizers.",
keywords = "Anti-training, Model selection, Multi-objective optimization",
author = "Gregory Ditzler and Sean Miller and Rozenblit, {Jerzy W}",
year = "2019",
month = "9",
day = "1",
doi = "10.1016/j.ins.2019.05.018",
language = "English (US)",
volume = "496",
pages = "198--211",
journal = "Information Sciences",
issn = "0020-0255",
publisher = "Elsevier Inc.",
}

TY - JOUR

T1 - Learning what we don't care about

T2 - Anti-training with sacrificial functions

AU - Ditzler, Gregory

AU - Miller, Sean

AU - Rozenblit, Jerzy W.

PY - 2019/9/1

Y1 - 2019/9/1

N2 - The traditional machine learning paradigm focuses on optimizing an objective. This optimization is carried out by adjusting the free parameters of a model; however, in classification we often do not assess the performance of the model using the optimized objective. Furthermore, choosing a poor set of free parameters for the objective function can lead to unintentional overfitting of the data. In this work, we present a novel approach based on the theory of anti-training for generating sacrificial functions that transform data. These sacrificial functions incorporate new classifier-independent objectives into the task of choosing a model's free parameters. Our approach builds upon recent work in anti-training to develop a new class of functions that a classifier should not perform well on. We use multi-objective evolutionary algorithms to solve the task of model selection by minimizing one or more sacrificial functions alongside functions the model should always perform well on (e.g., error, sensitivity, and specificity). Our experiments found that the proposed method provides statistically significant improvements in generalization error over state-of-the-art optimizers.

AB - The traditional machine learning paradigm focuses on optimizing an objective. This optimization is carried out by adjusting the free parameters of a model; however, in classification we often do not assess the performance of the model using the optimized objective. Furthermore, choosing a poor set of free parameters for the objective function can lead to unintentional overfitting of the data. In this work, we present a novel approach based on the theory of anti-training for generating sacrificial functions that transform data. These sacrificial functions incorporate new classifier-independent objectives into the task of choosing a model's free parameters. Our approach builds upon recent work in anti-training to develop a new class of functions that a classifier should not perform well on. We use multi-objective evolutionary algorithms to solve the task of model selection by minimizing one or more sacrificial functions alongside functions the model should always perform well on (e.g., error, sensitivity, and specificity). Our experiments found that the proposed method provides statistically significant improvements in generalization error over state-of-the-art optimizers.

KW - Anti-training

KW - Model selection

KW - Multi-objective optimization

UR - http://www.scopus.com/inward/record.url?scp=85065526558&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85065526558&partnerID=8YFLogxK

U2 - 10.1016/j.ins.2019.05.018

DO - 10.1016/j.ins.2019.05.018

M3 - Article

AN - SCOPUS:85065526558

VL - 496

SP - 198

EP - 211

JO - Information Sciences

JF - Information Sciences

SN - 0020-0255

ER -