Abstract
Artificial neural networks (ANNs) have become a popular means of solving complex problems in prediction-based applications such as image and natural language processing. Two prominent challenges in the neural network domain are the practicality of hardware implementation and training the network dynamically. In this study, we address these challenges with a development methodology that balances the hardware footprint and the quality of the ANN. We use the well-known perceptron-based branch prediction problem as a case study for demonstrating this methodology. This problem is well suited to analyzing dynamic hardware implementations of ANNs because the predictor is realized in hardware and trained online. Using our hierarchical exploration of the configuration search space, we show that we can decrease the memory footprint of a standard perceptron-based branch predictor by 2.3× with only a 0.6% decrease in prediction accuracy.
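For context, the sketch below shows the classic perceptron branch predictor scheme that the case study builds on: a table of small integer weight vectors indexed by the branch address, a dot product with the global history to predict, and threshold-gated training on the resolved outcome. This is a minimal illustration, not the paper's implementation; the history length, table size, and training threshold here are assumed placeholder values.

```c
/* Minimal sketch of a classic perceptron branch predictor.
 * HIST_LEN, NUM_PERC, and THETA are illustrative assumptions,
 * not the configuration evaluated in the paper. */
#include <stdint.h>
#include <stdlib.h>

#define HIST_LEN 16                       /* global history length h      */
#define NUM_PERC 256                      /* number of perceptron entries */
#define THETA ((int)(1.93 * HIST_LEN + 14)) /* common training threshold  */

static int8_t weights[NUM_PERC][HIST_LEN + 1]; /* w[0] is the bias weight */
static int8_t history[HIST_LEN];               /* +1 = taken, -1 = not    */

/* Saturate weight updates so small integer weights do not wrap. */
static int8_t sat_add(int8_t w, int d) {
    int v = w + d;
    if (v > 127) v = 127;
    if (v < -128) v = -128;
    return (int8_t)v;
}

/* Perceptron output y for a branch at address pc; y >= 0 predicts taken. */
static int predict(uint32_t pc) {
    const int8_t *w = weights[pc % NUM_PERC];
    int y = w[0];                          /* start from the bias */
    for (int i = 0; i < HIST_LEN; i++)
        y += w[i + 1] * history[i];        /* dot product with history */
    return y;
}

/* Train after the branch resolves (taken = +1 or -1), then update history. */
static void train(uint32_t pc, int y, int taken) {
    int8_t *w = weights[pc % NUM_PERC];
    /* Update only on a misprediction or a low-confidence output. */
    if (((y >= 0) ? 1 : -1) != taken || abs(y) <= THETA) {
        w[0] = sat_add(w[0], taken);
        for (int i = 0; i < HIST_LEN; i++)
            w[i + 1] = sat_add(w[i + 1], taken * history[i]);
    }
    /* Shift the new outcome into the global history register. */
    for (int i = HIST_LEN - 1; i > 0; i--)
        history[i] = history[i - 1];
    history[0] = (int8_t)taken;
}
```

The memory footprint of such a predictor scales with the table size, the history length, and the weight width, which is why a configuration search over those parameters can trade hardware cost against accuracy.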
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 3211-3235 |
| Number of pages | 25 |
| Journal | Journal of Supercomputing |
| Volume | 74 |
| Issue number | 7 |
| DOIs | |
| State | Published - Jul 1 2018 |
Keywords
- Artificial neural network
- Branch prediction
- Perceptron
- SimpleScalar
ASJC Scopus subject areas
- Theoretical Computer Science
- Software
- Information Systems
- Hardware and Architecture