Incremental learning with rule-based neural networks

Charles M Higgins, R. M. Goodman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Citations (Scopus)

Abstract

A classifier for discrete-valued variable classification problems is presented. The system utilizes an information-theoretic algorithm for constructing informative rules from example data. These rules are then used to construct a neural network to perform parallel inference and posterior probability estimation. The network can be grown incrementally, so that new data can be incorporated without repeating the training on previous data. It is shown that this technique performs as well as other techniques such as backpropagation while having unique advantages in incremental learning capability, training efficiency, knowledge representation, and hardware implementation suitability.
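The abstract only outlines the approach. As a rough illustration (not the authors' algorithm), the sketch below shows one way a rule-based classifier over discrete attributes can grow incrementally: each candidate rule "IF attribute = value THEN class" is scored with a simple information measure (a stand-in for the paper's information-theoretic rule measure), each retained rule acts like one hidden unit whose weight is its score, and a new batch of data only adds or re-weights rules, so earlier training data never has to be revisited. The class name RuleNetwork, the partial_fit/predict interface, and the min_info threshold are all assumptions made for this sketch.

```python
import numpy as np
from collections import Counter

class RuleNetwork:
    """Sketch of a rule-based classifier for discrete attributes.

    Each retained rule (attribute, value, class) plays the role of one
    hidden unit; its weight is an information score.  Hypothetical
    interface -- not the implementation from the paper.
    """

    def __init__(self, min_info=0.01):
        self.min_info = min_info        # assumed pruning threshold
        self.rules = {}                 # (attr_index, value, cls) -> weight
        self.class_counts = Counter()   # running class frequencies

    def _rule_information(self, p_c_given_a, p_c):
        # Simple information-gain-style score for "attr=value => class c";
        # a stand-in for the paper's rule-information measure.
        if p_c_given_a <= 0.0 or p_c <= 0.0 or p_c >= 1.0:
            return 0.0
        return p_c_given_a * np.log2(p_c_given_a / p_c)

    def partial_fit(self, X, y):
        """Incorporate a new batch of examples; existing rules are kept."""
        X = np.asarray(X)
        self.class_counts.update(y)
        total = sum(self.class_counts.values())
        for cls in set(y):
            p_c = self.class_counts[cls] / total
            in_class = np.array([label == cls for label in y])
            for a in range(X.shape[1]):
                for v in np.unique(X[:, a]):
                    has_value = X[:, a] == v
                    if not has_value.any():
                        continue
                    p_c_given_a = (has_value & in_class).sum() / has_value.sum()
                    info = self._rule_information(p_c_given_a, p_c)
                    if info > self.min_info:
                        # Adding/updating a rule == adding a hidden unit;
                        # previously stored rules are never retrained.
                        self.rules[(a, v, cls)] = info
        return self

    def predict(self, X):
        """Parallel-style inference: every matching rule votes for its class."""
        X = np.asarray(X)
        predictions = []
        for x in X:
            scores = {c: 0.0 for c in self.class_counts}
            for (a, v, c), w in self.rules.items():
                if x[a] == v:           # rule fires
                    scores[c] += w      # evidence accumulates at the class output
            predictions.append(max(scores, key=scores.get))
        return predictions
```

A short usage example under the same assumptions: the second batch extends the rule set without revisiting the first batch.

```python
net = RuleNetwork()
net.partial_fit([["red", "round"], ["green", "long"]], ["apple", "banana"])
net.partial_fit([["yellow", "long"]], ["banana"])
print(net.predict([["yellow", "long"]]))   # -> ['banana']
```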

Original language: English (US)
Title of host publication: Proceedings. IJCNN-91-Seattle: International Joint Conference on Neural Networks
Editors: Anon
Publisher: Publ by IEEE
Pages: 875-880
Number of pages: 6
ISBN (Print): 0780301641
State: Published - 1991
Externally published: Yes
Event: International Joint Conference on Neural Networks - IJCNN-91-Seattle - Seattle, WA, USA
Duration: Jul 8 1991 - Jul 12 1991

Other

Other: International Joint Conference on Neural Networks - IJCNN-91-Seattle
City: Seattle, WA, USA
Period: 7/8/91 - 7/12/91

Fingerprint

Knowledge representation
Backpropagation
Classifiers
Neural networks
Hardware

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Higgins, C. M., & Goodman, R. M. (1991). Incremental learning with rule-based neural networks. In Anon (Ed.), Proceedings. IJCNN-91-Seattle: International Joint Conference on Neural Networks (pp. 875-880). Publ by IEEE.

