Gradient and Hamiltonian Dynamics Applied to Learning in Neural Networks

James W. Howse, Chaouki T. Abdallah, Gregory L. Heileman

Research output: Contribution to conference › Paper › peer-review

2 Scopus citations

Abstract

The process of machine learning can be considered in two stages: model selection and parameter estimation. In this paper a technique is presented for constructing dynamical systems with desired qualitative properties. The approach is based on the fact that an n-dimensional nonlinear dynamical system can be decomposed into one gradient and (n - 1) Hamiltonian systems. Thus, the model selection stage consists of choosing the gradient and Hamiltonian portions appropriately so that a certain behavior is obtainable. To estimate the parameters, a stably convergent learning rule is presented. This algorithm has been proven to converge to the desired system trajectory for all initial conditions and system inputs. This technique can be used to design neural network models which are guaranteed to solve the trajectory learning problem.
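The decomposition described above can be illustrated with a minimal sketch (an assumption for illustration, not the paper's exact model): a 2-D vector field split into a gradient part, which dissipates a potential V, and a Hamiltonian part, which rotates along level sets of an energy H via a skew-symmetric matrix.

```python
import numpy as np

# Hedged sketch of a gradient-plus-Hamiltonian vector field.
# The potentials V and H, and the weights alpha/beta, are illustrative
# choices, not the ones used in the paper.

def grad_V(x):
    # Gradient of the dissipated potential V(x) = 0.5 * ||x||^2.
    return x

def grad_H(x):
    # Gradient of the conserved energy H(x) = 0.5 * ||x||^2.
    return x

# Skew-symmetric matrix (J.T == -J) generating the Hamiltonian flow.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def vector_field(x, alpha=1.0, beta=1.0):
    # x' = -alpha * grad V(x)  (gradient/dissipative part)
    #      + beta * J @ grad H(x)  (Hamiltonian/conservative part)
    return -alpha * grad_V(x) + beta * J @ grad_H(x)

def integrate(x0, steps=1000, dt=1e-3, **kw):
    # Forward-Euler integration of the combined dynamics.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * vector_field(x, **kw)
    return x

x_final = integrate([1.0, 0.0])
```

Setting `alpha = 0` leaves a purely conservative rotation that (up to integration error) preserves H, while `beta = 0` leaves a pure gradient flow that monotonically decreases V; model selection in the paper's sense amounts to choosing these two portions so the combined system exhibits the desired qualitative behavior.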

Original language: English (US)
Pages: 274-280
Number of pages: 7
State: Published - 1995
Externally published: Yes
Event: 8th International Conference on Neural Information Processing Systems, NIPS 1995 - Denver, United States
Duration: Nov 27, 1995 - Dec 2, 1995

Conference

Conference: 8th International Conference on Neural Information Processing Systems, NIPS 1995
Country/Territory: United States
City: Denver
Period: 11/27/95 - 12/2/95

ASJC Scopus subject areas

  • Information Systems
  • Computer Networks and Communications
  • Signal Processing
