Gradient and Hamiltonian Dynamics Applied to Learning in Neural Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citations

Abstract

The process of machine learning can be considered in two stages: model selection and parameter estimation. In this paper, a technique is presented for constructing dynamical systems with desired qualitative properties. The approach is based on the fact that an n-dimensional nonlinear dynamical system can be decomposed into one gradient and (n - 1) Hamiltonian systems. Thus, the model selection stage consists of choosing the gradient and Hamiltonian portions appropriately so that a certain behavior is obtainable. To estimate the parameters, a stably convergent learning rule is presented. This algorithm has been proven to converge to the desired system trajectory for all initial conditions and system inputs. The technique can be used to design neural network models that are guaranteed to solve the trajectory learning problem.
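The gradient-plus-Hamiltonian decomposition described in the abstract can be illustrated for n = 2, where the flow splits into one gradient term and one Hamiltonian term. The sketch below is an illustrative assumption, not code from the paper: the potential V, the Hamiltonian H, and the resulting limit-cycle system are standard textbook choices used here only to show how the two portions combine to produce a desired trajectory (an attracting unit circle).

```python
import numpy as np

# Illustrative sketch (not from the paper): a 2-D system written as the sum
# of a gradient part -grad V(x) and a Hamiltonian part S grad H(x), with S
# skew-symmetric. V pulls trajectories toward the unit circle; H rotates
# them along it, so the unit circle becomes the attracting trajectory.

S = np.array([[0.0, -1.0],
              [1.0,  0.0]])           # skew-symmetric structure matrix

def grad_V(x):
    # V(x) = (1/4) (||x||^2 - 1)^2  =>  grad V = (||x||^2 - 1) x
    return (x @ x - 1.0) * x

def grad_H(x):
    # H(x) = (1/2) ||x||^2  =>  grad H = x
    return x

def flow(x):
    # Decomposition: gradient portion plus Hamiltonian portion
    return -grad_V(x) + S @ grad_H(x)

def simulate(x0, dt=0.01, steps=3000):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * flow(x)          # forward-Euler integration
    return x

x_final = simulate([0.1, 0.0])
print(np.linalg.norm(x_final))        # radius approaches 1, the target orbit
```

Starting from a small initial condition, the gradient portion drives the state radius to 1 while the Hamiltonian portion circulates it, so the trajectory settles onto the circle regardless of the start point, which is the qualitative behavior the model-selection stage is meant to install.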

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 8, NIPS 1995
Editors: D. Touretzky, M. C. Mozer, M. Hasselmo
Publisher: Neural Information Processing Systems Foundation
Pages: 274-280
Number of pages: 7
ISBN (Electronic): 0262201070, 9780262201070
State: Published - 1995
Event: 8th Advances in Neural Information Processing Systems, NIPS 1995 - Denver, United States
Duration: Nov 27, 1995 - Nov 30, 1995

Publication series

Name: Advances in Neural Information Processing Systems
Volume: 8
ISSN (Print): 1049-5258

Conference

Conference: 8th Advances in Neural Information Processing Systems, NIPS 1995
Country/Territory: United States
City: Denver
Period: 11/27/95 - 11/30/95

ASJC Scopus subject areas

  • Signal Processing
  • Information Systems
  • Computer Networks and Communications

