Increasing the flexibility and speed of convergence of a Learning Agent

Miguel A. Soto Santibanez, Michael M. Marefat

Research output: Contribution to journal › Conference article › peer-review

Abstract

A review of the basic methods used to model a Learning Agent, such as Instance-Based Learning, Artificial Neural Networks and Reinforcement Learning, suggests that they either lack flexibility (they can be used to solve only a small number of problems) or converge very slowly to the optimal policy. This paper describes and illustrates a set of processes that address these two shortcomings. The resulting Learning Agent adapts fairly well to a much larger set of environments and does so in a reasonable amount of time. To address both the lack of flexibility and the slow convergence to the optimal policy, the new Learning Agent is a hybrid between a Learning Agent based on Instance-Based Learning and one based on Reinforcement Learning. To accelerate convergence to the optimal policy, this new Learning Agent incorporates a new concept we call Propagation of Good Findings. Furthermore, to make better use of the Learning Agent's memory resources, and thereby increase its flexibility, we make use of another new concept we call Moving Prototypes.
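The abstract gives no implementation details, so purely as an illustration, the sketch below shows one way such a hybrid agent might be structured in Python: Q-values are stored at a bounded set of prototypes rather than in a full table, nearby instances are merged into a drifting prototype (one possible reading of Moving Prototypes), and a large terminal reward is backed up along the whole episode at once (one possible reading of Propagation of Good Findings). The class name `PrototypeQAgent`, all parameter values, and the concrete merge and backup rules are assumptions for this sketch, not the authors' algorithm.

```python
import numpy as np

class PrototypeQAgent:
    """Hypothetical sketch of a hybrid instance-based / Q-learning agent."""

    def __init__(self, n_actions, max_prototypes=50,
                 alpha=0.5, gamma=0.9, merge_radius=0.1):
        self.n_actions = n_actions
        self.max_prototypes = max_prototypes  # memory budget on stored instances
        self.alpha = alpha                    # learning rate
        self.gamma = gamma                    # discount factor
        self.merge_radius = merge_radius      # distance below which instances merge
        self.states = []                      # prototype locations (feature vectors)
        self.q = []                           # one Q-vector per prototype

    def _nearest(self, state):
        dists = [np.linalg.norm(state - s) for s in self.states]
        return int(np.argmin(dists))

    def _prototype_for(self, state):
        """Return the prototype index representing `state`, creating or
        merging prototypes as needed to stay within the memory budget."""
        state = np.asarray(state, dtype=float)
        if self.states:
            i = self._nearest(state)
            if np.linalg.norm(state - self.states[i]) <= self.merge_radius:
                # Assumed "Moving Prototypes": drift the existing prototype
                # toward the new instance instead of storing a duplicate.
                self.states[i] = 0.9 * self.states[i] + 0.1 * state
                return i
        if len(self.states) >= self.max_prototypes:
            return self._nearest(state)  # budget reached: reuse nearest
        self.states.append(state)
        self.q.append(np.zeros(self.n_actions))
        return len(self.states) - 1

    def act(self, state, epsilon=0.1):
        """Epsilon-greedy action selection over the matching prototype."""
        if np.random.rand() < epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q[self._prototype_for(state)]))

    def update_step(self, state, action, reward, next_state):
        """Standard one-step Q-learning backup at the prototype level."""
        i = self._prototype_for(state)
        j = self._prototype_for(next_state)
        target = reward + self.gamma * np.max(self.q[j])
        self.q[i][action] += self.alpha * (target - self.q[i][action])

    def propagate_good_finding(self, trajectory, final_reward):
        """Assumed "Propagation of Good Findings": when a large reward is
        found, back it up along the whole episode immediately instead of
        waiting for many one-step updates to propagate it."""
        g = final_reward
        for state, action in reversed(trajectory):
            i = self._prototype_for(state)
            self.q[i][action] += self.alpha * (g - self.q[i][action])
            g *= self.gamma

# Example: propagate a goal reward back along a short toy trajectory.
agent = PrototypeQAgent(n_actions=2)
trajectory = [(np.array([0.0]), 1), (np.array([0.5]), 1)]
agent.propagate_good_finding(trajectory, final_reward=10.0)
```

Merging instances into prototypes bounds memory use the way the abstract suggests, while the whole-trajectory backup speeds convergence relative to one-step updates alone; both mechanisms here are stand-ins for whatever the paper actually specifies.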

Original language: English (US)
Pages (from-to): 1748-1753
Number of pages: 6
Journal: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Volume: 3
State: Published - 2001
Event: 2001 IEEE International Conference on Systems, Man and Cybernetics - Tucson, AZ, United States
Duration: Oct 7, 2001 - Oct 10, 2001

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Hardware and Architecture
