Elitist and ensemble strategies for cascade generalization

Huimin Zhao, Atish P. Sinha, Sudha Ram

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Several methods have been proposed for cascading other classification algorithms with decision tree learners to alleviate the representational bias of decision trees and, potentially, to improve classification accuracy. Such cascade generalization of decision trees increases the flexibility of the decision boundaries between classes and promotes better fitting of the training data. However, more flexible models may not necessarily lead to more predictive power. Because of potential overfitting problems, the true classification accuracy on test data may not increase. Recently, a generic method for cascade generalization has been proposed. The method uses a parameter, the maximum cascading depth, to constrain the degree to which other classification algorithms are cascaded with decision tree learners. A method for efficiently learning a collection (i.e., a forest) of generalized decision trees, each with other classification algorithms cascaded to a particular depth, also has been developed. In this article, we propose several new strategies, including elitist and ensemble (weighted or unweighted), for using the various decision trees in such a collection in the prediction phase. Our empirical evaluation using 32 data sets from the UCI machine learning repository shows that, on average, the elitist strategy outperforms the weighted full ensemble strategy, which, in turn, outperforms the unweighted full ensemble strategy. However, no strategy is universally superior across all applications. Since the same training process can be used to evaluate the various strategies, we recommend that several promising strategies be evaluated and compared before selecting the one to use for a given application.
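The prediction strategies named in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: the models here are hypothetical stand-in callables with assumed validation accuracies, whereas in the paper they would be generalized decision trees cascaded to different depths.

```python
# Sketch of the three prediction strategies over a collection of trained
# models: elitist (use only the best model by validation accuracy),
# weighted ensemble (vote weighted by validation accuracy), and
# unweighted ensemble (plain majority vote).
from collections import Counter

def elitist_predict(models, val_accuracies, x):
    """Predict with the single model that scored best on validation data."""
    best = max(range(len(models)), key=lambda i: val_accuracies[i])
    return models[best](x)

def ensemble_predict(models, x, weights=None):
    """Majority vote; with weights=None this is the unweighted strategy."""
    votes = Counter()
    for i, model in enumerate(models):
        votes[model(x)] += 1.0 if weights is None else weights[i]
    return votes.most_common(1)[0][0]

# Hypothetical stand-ins for a forest of cascaded trees:
models = [lambda x: "A", lambda x: "B", lambda x: "B"]
val_accuracies = [0.9, 0.7, 0.6]

print(elitist_predict(models, val_accuracies, x=None))           # "A"
print(ensemble_predict(models, x=None))                          # "B"
print(ensemble_predict(models, x=None, weights=val_accuracies))  # "B"
```

Note how the strategies can disagree: the elitist strategy follows the single best model ("A"), while both ensemble strategies are swayed by the two weaker models voting "B", which is why the abstract recommends comparing strategies per application.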

Original language: English (US)
Pages (from-to): 92-107
Number of pages: 16
Journal: Journal of Database Management
Volume: 17
Issue number: 3
State: Published - 2006

Keywords

  • Cascade generalization
  • Data mining
  • Decision tree
  • Elitist strategy
  • Ensemble method
  • Voting method

ASJC Scopus subject areas

  • Computer Science(all)
  • Decision Sciences(all)

Cite this

Elitist and ensemble strategies for cascade generalization. / Zhao, Huimin; Sinha, Atish P.; Ram, Sudha.

In: Journal of Database Management, Vol. 17, No. 3, 2006, p. 92-107.

@article{2f45fb255a784016babcbca740f13cc4,
title = "Elitist and ensemble strategies for cascade generalization",
abstract = "Several methods have been proposed for cascading other classification algorithms with decision tree learners to alleviate the representational bias of decision trees and, potentially, to improve classification accuracy. Such cascade generalization of decision trees increases the flexibility of the decision boundaries between classes and promotes better fitting of the training data. However, more flexible models may not necessarily lead to more predictive power. Because of potential overfitting problems, the true classification accuracy on test data may not increase. Recently, a generic method for cascade generalization has been proposed. The method uses a parameter, the maximum cascading depth, to constrain the degree to which other classification algorithms are cascaded with decision tree learners. A method for efficiently learning a collection (i.e., a forest) of generalized decision trees, each with other classification algorithms cascaded to a particular depth, also has been developed. In this article, we propose several new strategies, including elitist and ensemble (weighted or unweighted), for using the various decision trees in such a collection in the prediction phase. Our empirical evaluation using 32 data sets from the UCI machine learning repository shows that, on average, the elitist strategy outperforms the weighted full ensemble strategy, which, in turn, outperforms the unweighted full ensemble strategy. However, no strategy is universally superior across all applications. Since the same training process can be used to evaluate the various strategies, we recommend that several promising strategies be evaluated and compared before selecting the one to use for a given application.",
keywords = "Cascade generalization, Data mining, Decision tree, Elitist strategy, Ensemble method, Voting method",
author = "Huimin Zhao and Sinha, {Atish P.} and Sudha Ram",
year = "2006",
language = "English (US)",
volume = "17",
pages = "92--107",
journal = "Journal of Database Management",
issn = "1063-8016",
publisher = "IGI Publishing",
number = "3",

}
