Learning what we don't care about: Anti-training with sacrificial functions

Research output: Contribution to journal › Article › peer-review


The traditional machine learning paradigm focuses on optimizing an objective. This optimization is carried out by adjusting the free parameters of a model; however, in classification we often do not assess the model's performance using the optimized objective. Furthermore, choosing a poor set of free parameters for the objective function could lead to unintentional overfitting of the data. In this work, we present a novel approach based on the theory of anti-training for generating sacrificial functions that transform data. These sacrificial functions incorporate new classifier-independent objectives into the task of choosing a model's free parameters. Our approach builds upon recent work in anti-training to develop a new class of functions on which a classifier should not perform well. We use multi-objective evolutionary algorithms to solve the model selection task by jointly minimizing one or more sacrificial functions and the functions the model should always perform well on (e.g., error, sensitivity, specificity). Our experiments found that the proposed method provides statistically significant improvements in generalization error over state-of-the-art optimizers.
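The multi-objective trade-off described in the abstract can be illustrated with a minimal Pareto-dominance sketch. This is a hypothetical toy, not the paper's implementation: each candidate model is scored on an objective it should always minimize (e.g., validation error) and on a sacrificial objective it should also drive down (e.g., accuracy on sacrificially transformed data, on which a good classifier should not perform well); the non-dominated candidates form the trade-off front that a multi-objective evolutionary algorithm would search.

```python
def pareto_front(points):
    """Return indices of non-dominated points when every objective is minimized."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p)))
            and any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical candidate models, each scored as
# (validation error, accuracy on a sacrificial transform) -- both minimized:
# a good model has low error AND performs poorly on the sacrificial data.
candidates = [(0.10, 0.9), (0.12, 0.3), (0.30, 0.1), (0.11, 0.2), (0.50, 0.5)]
front = pareto_front(candidates)
print(front)  # → [0, 2, 3]
```

In practice, an evolutionary algorithm such as NSGA-II replaces the fixed candidate list above, mutating and recombining free-parameter settings and applying this non-dominance test each generation.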

Original language: English (US)
Pages (from-to): 198-211
Number of pages: 14
Journal: Information Sciences
State: Published - Sep 2019


Keywords

  • Anti-training
  • Model selection
  • Multi-objective optimization

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Theoretical Computer Science
  • Computer Science Applications
  • Information Systems and Management
  • Artificial Intelligence


