In this paper we analyze the ARTMAP architecture in situations that require learning many-to-one maps. We focus on the number of list presentations ARTMAP needs to learn an arbitrary many-to-one map. In particular, we show that if ARTMAP is repeatedly presented with a list of input/output pairs, it establishes the required mapping in at most M_a - 1 list presentations, where M_a denotes the total number of ones in each input pattern. We also examine other useful properties associated with learning the mapping represented by an arbitrary list of input/output pairs. These properties reveal some of the characteristics of learning in ARTMAP when it is used to establish an arbitrary mapping from a binary input space to a binary output space. The results presented in this paper hold in the fast learning case and for small values of β_a, where β_a is a parameter associated with the adaptation of the bottom-up weights in one of the ART1 modules of ARTMAP.
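The repeated-presentation protocol described above can be sketched with a toy learner. The code below is a minimal illustration only, not the actual ARTMAP equations: category templates shrink by componentwise AND (mimicking fast learning in ART1), a category is rejected when its stored output label conflicts with the presented output (a crude stand-in for match tracking), and the `rho` vigilance threshold and all function names are illustrative assumptions. The full list of pairs is presented repeatedly until no template changes, and the number of list presentations is counted.

```python
# Toy sketch of learning a many-to-one binary map by repeated
# "list presentations". Loosely inspired by fast-learning ART1/ARTMAP;
# the matching rule, rho, and all names are illustrative assumptions.

def present_list(pairs, templates, labels, rho=0.7):
    """One list presentation. Returns True if any template changed."""
    changed = False
    for x, y in pairs:
        chosen = None
        # Try categories in order of overlap with x (simplified choice rule).
        order = sorted(range(len(templates)),
                       key=lambda j: -sum(a & b for a, b in zip(templates[j], x)))
        for j in order:
            w = templates[j]
            match = sum(a & b for a, b in zip(w, x)) / max(1, sum(x))
            # Accept only if the vigilance test passes and the label agrees.
            if match >= rho and labels[j] == y:
                chosen = j
                break
        if chosen is None:
            templates.append(list(x))          # commit a new category
            labels.append(y)
            changed = True
        else:
            # Fast learning: template moves to the intersection with x.
            new_w = [a & b for a, b in zip(templates[chosen], x)]
            if new_w != templates[chosen]:
                templates[chosen] = new_w
                changed = True
    return changed

def train(pairs):
    """Repeat list presentations until the mapping is stable."""
    templates, labels = [], []
    presentations = 0
    while True:
        presentations += 1
        if not present_list(pairs, templates, labels):
            break
    return templates, labels, presentations

def predict(x, templates, labels):
    """Output of the best-matching committed category."""
    j = max(range(len(templates)),
            key=lambda j: sum(a & b for a, b in zip(templates[j], x)))
    return labels[j]
```

In this simplified setting the learner stabilizes after very few presentations; the paper's M_a - 1 bound concerns the actual ARTMAP dynamics, which this sketch does not reproduce.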
Original language: English (US)
Title of host publication: World Congress on Neural Networks
Publisher: Taylor and Francis
Published: Sep 10 2021