Can recurrent neural networks learn natural language grammars?

Steve Lawrence, C. Lee Giles, Sandiway Fong

Research output: Contribution to conference › Paper › peer-review


Abstract

Recurrent neural networks are complex parametric dynamic systems that can exhibit a wide range of behaviors. We consider the task of grammatical inference with recurrent neural networks; specifically, the task of classifying natural language sentences as grammatical or ungrammatical. Can a recurrent neural network be made to exhibit the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government and Binding theory? We attempt to train a network, without the bifurcation into learned vs. innate components assumed by Chomsky, to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. We consider how a recurrent neural network could possess linguistic capability, and investigate the properties of Elman, Narendra & Parthasarathy (N&P), and Williams & Zipser (W&Z) recurrent networks, as well as Frasconi-Gori-Soda (FGS) locally recurrent networks, in this setting. We show that both the Elman and W&Z recurrent neural networks are able to learn an appropriate grammar.
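As a rough illustration of the kind of classifier the abstract describes, the sketch below implements an Elman-style simple recurrent network that maps a token sequence to a probability of grammaticality. This is a minimal modern PyTorch approximation, not the authors' 1996 setup; the vocabulary size, embedding layer, dimensions, and training step are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ElmanClassifier(nn.Module):
    """Elman-style recurrent classifier: a simple recurrent layer
    followed by a linear read-out on the final hidden state."""
    def __init__(self, vocab_size, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # nn.RNN with the default tanh nonlinearity is the classic
        # Elman simple recurrent network.
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, tokens):  # tokens: (batch, seq_len) word ids
        _, h_last = self.rnn(self.embed(tokens))
        # h_last[-1]: final hidden state of the top layer, (batch, hidden)
        return torch.sigmoid(self.out(h_last[-1])).squeeze(-1)

# Hypothetical toy usage: word-id sequences labeled 1 (grammatical) / 0 (not).
model = ElmanClassifier(vocab_size=100)
x = torch.randint(0, 100, (4, 7))   # batch of 4 sentences, each 7 tokens
y = torch.tensor([1., 0., 1., 0.])  # made-up grammaticality labels
loss = nn.functional.binary_cross_entropy(model(x), y)
loss.backward()                     # gradients for one training step
```

Reading out only the final hidden state mirrors the sentence-level judgment task: the network must carry whatever grammatical evidence it accumulates across the whole sequence in its recurrent state before a single accept/reject decision is made.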

Original language: English (US)
Pages: 1853-1858
Number of pages: 6
State: Published - 1996
Externally published: Yes
Event: Proceedings of the 1996 IEEE International Conference on Neural Networks, ICNN. Part 1 (of 4) - Washington, DC, USA
Duration: Jun 3 1996 - Jun 6 1996


ASJC Scopus subject areas

  • Software

