Balancing the learning ability and memory demand of a perceptron-based dynamically trainable neural network

Edward Richter, Spencer Valancius, Josiah McClanahan, John Mixter, Ali Akoglu

Research output: Contribution to journal › Article › peer-review

Abstract

Artificial neural networks (ANNs) have become a popular means of solving complex problems in prediction-based applications such as image and natural language processing. Two prominent challenges in the neural network domain are the practicality of hardware implementation and training the network dynamically. In this study, we address these challenges with a development methodology that balances the hardware footprint and the quality of the ANN. We use the well-known perceptron-based branch prediction problem as a case study for demonstrating this methodology. This problem is well suited for analyzing dynamic hardware implementations of ANNs because branch predictors are realized in hardware and trained dynamically at runtime. Using our hierarchical configuration search space exploration, we show that we can decrease the memory footprint of a standard perceptron-based branch predictor by 2.3× with only a 0.6% decrease in prediction accuracy.
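For readers unfamiliar with the case study, the sketch below illustrates the general predict/train loop of a perceptron-based branch predictor in the style of Jiménez and Lin. It is not the configuration evaluated in the paper; the table size, history length, indexing scheme, and training threshold are illustrative assumptions, and weight saturation is omitted for brevity.

```c
/*
 * Minimal sketch of a perceptron-based branch predictor.
 * NOTE: table size, history length, PC hashing, and THETA are
 * illustrative assumptions, not the paper's configuration.
 * Weight saturation (clamping) is omitted for brevity.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_PERCEPTRONS 256                       /* assumed table size          */
#define HISTORY_LEN     16                        /* assumed global-history bits */
#define THETA ((int)(1.93 * HISTORY_LEN + 14))    /* common training threshold   */

static int8_t weights[NUM_PERCEPTRONS][HISTORY_LEN + 1]; /* index 0 is the bias  */
static int    history[HISTORY_LEN];                      /* +1 taken, -1 not     */

/* Dot product of the selected perceptron with the global history. */
static int predict(uint32_t pc, int *out_y)
{
    int idx = pc % NUM_PERCEPTRONS;
    int y = weights[idx][0];                      /* bias weight                 */
    for (int i = 0; i < HISTORY_LEN; i++)
        y += weights[idx][i + 1] * history[i];
    *out_y = y;
    return y >= 0;                                /* 1 = predict taken           */
}

/* Online (dynamic) training step, applied after the branch resolves. */
static void train(uint32_t pc, int y, int taken)
{
    int idx = pc % NUM_PERCEPTRONS;
    int t = taken ? 1 : -1;
    int predicted_taken = (y >= 0);
    int mag = (y < 0) ? -y : y;

    /* Update only on a misprediction or a low-confidence correct prediction. */
    if (predicted_taken != taken || mag <= THETA) {
        weights[idx][0] += t;
        for (int i = 0; i < HISTORY_LEN; i++)
            weights[idx][i + 1] += t * history[i];
    }

    /* Shift the resolved outcome into the global history register. */
    for (int i = HISTORY_LEN - 1; i > 0; i--)
        history[i] = history[i - 1];
    history[0] = t;
}

int main(void)
{
    for (int i = 0; i < HISTORY_LEN; i++)
        history[i] = -1;                          /* start with "not taken"      */

    /* Toy trace: one branch at a hypothetical PC that is always taken. */
    int correct = 0;
    for (int n = 0; n < 100; n++) {
        int y;
        int pred = predict(0x400, &y);
        int actual = 1;
        correct += (pred == actual);
        train(0x400, y, actual);
    }
    printf("correct predictions: %d/100\n", correct);
    return 0;
}
```

The memory footprint of such a predictor scales with the table size, the history length, and the bit width of each weight, which is what makes the trade-off between storage and prediction accuracy explored in the paper possible.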

Original language: English (US)
Pages (from-to): 3211-3235
Number of pages: 25
Journal: Journal of Supercomputing
Volume: 74
Issue number: 7
DOIs
State: Published - Jul 1 2018

Keywords

  • Artificial neural network
  • Branch prediction
  • Perceptron
  • SimpleScalar

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Software
  • Information Systems
  • Hardware and Architecture
