Analysis of the ARTMAP neural network architecture

Michael Georgiopoulos, Juxin Huang, Gregory L. Heileman

Research output: Chapter in Book/Report/Conference proceeding › Chapter


In this paper we analyze the ARTMAP architecture for situations requiring the learning of many-to-one maps. Our focus is the number of list presentations required by ARTMAP to learn an arbitrary many-to-one map. In particular, it is shown that if ARTMAP is repeatedly presented with a list of input/output pairs, it establishes the required mapping in at most M_a - 1 list presentations, where M_a corresponds to the total number of ones in each one of the input patterns. Other useful properties associated with the learning of the mapping represented by an arbitrary list of input/output pairs are also examined. These properties reveal some of the characteristics of learning in ARTMAP when it is used as a tool for establishing an arbitrary mapping from a binary input space to a binary output space. The results presented in this paper are valid for the fast learning case and for small β_a values, where β_a is a parameter associated with the adaptation of the bottom-up weights in one of the ART1 modules of ARTMAP.
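A toy illustration of the bound stated above. The input patterns below are hypothetical, chosen only so that each binary pattern contains the same number of ones, M_a; under the abstract's fast-learning, small-β_a assumptions, ARTMAP would then need at most M_a - 1 list presentations:

```python
# Hypothetical binary input patterns, each containing the same number of ones.
patterns = [
    (1, 1, 0, 0, 1),
    (0, 1, 1, 1, 0),
    (1, 0, 1, 0, 1),
]

# M_a: total number of ones in each input pattern (3 here).
M_a = sum(patterns[0])
assert all(sum(p) == M_a for p in patterns)

# Upper bound on list presentations from the paper's result.
bound = M_a - 1
print(f"M_a = {M_a}; mapping learned in at most {bound} list presentations")
```

This is only a sketch of how the bound is computed from the patterns, not an implementation of ARTMAP itself.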

Original language: English (US)
Title of host publication: World Congress on Neural Networks
Publisher: Taylor and Francis
ISBN (Electronic): 9781315784076
ISBN (Print): 9780805817454
State: Published - Sep 10 2021
Externally published: Yes

ASJC Scopus subject areas

  • General Psychology
