Abstract
Many techniques have been proposed for improving the generalization performance of Fuzzy ARTMAP. We present a study of these architectures in the framework of structural risk minimization and computational learning theory. Fuzzy ARTMAP training uses on-line learning, has proven convergence results, and has relatively few parameters to tune. During its training phase, Fuzzy ARTMAP performs empirical risk minimization. One weakness of Fuzzy ARTMAP is over-training on noisy training data sets or naturally overlapping classes of data. Most of the proposed techniques attempt to address this issue in different ways, either directly or indirectly. In this paper we summarize how some of these architectures achieve success as learning algorithms.
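For context, the sketch below illustrates the standard Fuzzy ARTMAP training cycle the abstract refers to (complement coding, category choice, vigilance test, match tracking, and fast learning), using a simplified ARTMAP in which each category maps directly to a class label. The class name, parameter defaults, and helper names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class FuzzyARTMAP:
    """Minimal simplified Fuzzy ARTMAP classifier (illustrative sketch)."""

    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho_base = rho      # baseline vigilance
        self.alpha = alpha       # choice parameter
        self.beta = beta         # learning rate (1.0 = fast learning)
        self.w = []              # category weight vectors
        self.labels = []         # class label attached to each category

    @staticmethod
    def _complement_code(a):
        # Inputs are assumed scaled to [0, 1]; complement coding doubles the dimension.
        a = np.clip(np.asarray(a, dtype=float), 0.0, 1.0)
        return np.concatenate([a, 1.0 - a])

    def _train_one(self, a, label):
        I = self._complement_code(a)
        rho = self.rho_base                      # vigilance is reset for each pattern
        if not self.w:                           # first pattern: commit a category
            self.w.append(I.copy()); self.labels.append(label); return
        W = np.array(self.w)
        match = np.minimum(I, W)                 # fuzzy AND (element-wise minimum)
        # Category choice function T_j = |I ^ w_j| / (alpha + |w_j|)
        T = match.sum(axis=1) / (self.alpha + W.sum(axis=1))
        for j in np.argsort(-T):                 # search categories by descending choice value
            m = match[j].sum() / I.sum()         # match function |I ^ w_j| / |I|
            if m < rho:
                continue                         # vigilance test fails: try next category
            if self.labels[j] == label:
                # Resonance: move the weight toward I ^ w_j
                self.w[j] = self.beta * match[j] + (1.0 - self.beta) * self.w[j]
                return
            rho = m + 1e-6                       # match tracking: raise vigilance, keep searching
        # No category resonated with the correct label: commit a new one
        self.w.append(I.copy()); self.labels.append(label)

    def fit(self, X, y, epochs=1):
        for _ in range(epochs):
            for a, label in zip(X, y):
                self._train_one(a, label)
        return self

    def predict(self, X):
        out = []
        for a in X:
            I = self._complement_code(a)
            W = np.array(self.w)
            T = np.minimum(I, W).sum(axis=1) / (self.alpha + W.sum(axis=1))
            out.append(self.labels[int(np.argmax(T))])
        return np.array(out)
```

With fast learning and noisy or overlapping classes, match tracking keeps committing new categories, which is the over-training behavior the surveyed architectures try to mitigate.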
Original language | English (US) |
---|---|
Pages | 2644-2649 |
Number of pages | 6 |
State | Published - 2002 |
Externally published | Yes |
Event | 2002 International Joint Conference on Neural Networks (IJCNN'02) - Honolulu, HI, United States. Duration: May 12, 2002 → May 17, 2002 |
Conference
Conference | 2002 International Joint Conference on Neural Networks (IJCNN'02) |
---|---|
Country/Territory | United States |
City | Honolulu, HI |
Period | 5/12/02 → 5/17/02 |
Keywords
- Adaptive resonance theory
- Classification
- Empirical and structural risk minimization
- Generalization performance
- Machine learning
- Neural networks
ASJC Scopus subject areas
- Software
- Artificial Intelligence