TY - CONF
T1 - An energy-efficient network-on-chip design using reinforcement learning
AU - Zheng, Hao
AU - Louri, Ahmed
N1 - Funding Information:
This research was partially supported by NSF grants CCF-1547035, CCF-1547036 and CCF-1702980. We thank the anonymous reviewers for their excellent feedback.
Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/6/2
Y1 - 2019/6/2
N2 - The design space for energy-efficient Networks-on-Chip (NoCs) has expanded significantly, comprising a number of techniques. Applying these techniques simultaneously to yield maximum energy efficiency requires monitoring a large number of system parameters, which often results in substantial engineering effort and complicated control policies. This motivates us to explore a reinforcement learning (RL) approach that automatically learns an optimal control policy to improve NoC energy efficiency. First, we deploy power-gating (PG) and dynamic voltage and frequency scaling (DVFS) to simultaneously reduce both static and dynamic power. Second, we use RL to automatically explore the dynamic interactions among PG, DVFS, and system parameters, learn the critical system parameters contained in the router and cache, and eventually evolve optimal per-router control policies that significantly improve energy efficiency. Moreover, we introduce an artificial neural network (ANN) to efficiently implement the large state-action table required by RL. Simulation results using the PARSEC benchmarks show that the proposed RL approach reduces power consumption by 26% while improving system performance by 7%, compared to a combined PG and DVFS design without RL. Additionally, the ANN design yields a 67% area reduction compared to a conventional RL implementation.
AB - The design space for energy-efficient Networks-on-Chip (NoCs) has expanded significantly, comprising a number of techniques. Applying these techniques simultaneously to yield maximum energy efficiency requires monitoring a large number of system parameters, which often results in substantial engineering effort and complicated control policies. This motivates us to explore a reinforcement learning (RL) approach that automatically learns an optimal control policy to improve NoC energy efficiency. First, we deploy power-gating (PG) and dynamic voltage and frequency scaling (DVFS) to simultaneously reduce both static and dynamic power. Second, we use RL to automatically explore the dynamic interactions among PG, DVFS, and system parameters, learn the critical system parameters contained in the router and cache, and eventually evolve optimal per-router control policies that significantly improve energy efficiency. Moreover, we introduce an artificial neural network (ANN) to efficiently implement the large state-action table required by RL. Simulation results using the PARSEC benchmarks show that the proposed RL approach reduces power consumption by 26% while improving system performance by 7%, compared to a combined PG and DVFS design without RL. Additionally, the ANN design yields a 67% area reduction compared to a conventional RL implementation.
UR - http://www.scopus.com/inward/record.url?scp=85067823203&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85067823203&partnerID=8YFLogxK
U2 - 10.1145/3316781.3317768
DO - 10.1145/3316781.3317768
M3 - Conference contribution
AN - SCOPUS:85067823203
T3 - Proceedings - Design Automation Conference
BT - Proceedings of the 56th Annual Design Automation Conference 2019, DAC 2019
PB - Association for Computing Machinery
T2 - 56th Annual Design Automation Conference, DAC 2019
Y2 - 2 June 2019 through 6 June 2019
ER -