TY - JOUR
T1 - Design Space Exploration of Sparsity-Aware Application-Specific Spiking Neural Network Accelerators
AU - Aliyev, Ilkin
AU - Svoboda, Kama
AU - Adegbija, Tosiron
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/12/1
Y1 - 2023/12/1
AB - Spiking Neural Networks (SNNs) offer a promising alternative to Artificial Neural Networks (ANNs) for deep learning applications, particularly in resource-constrained systems. This is largely due to their inherent sparsity, influenced by factors such as the input dataset, the length of the spike train, and the network topology. While a few prior works have demonstrated the advantages of incorporating sparsity into the hardware design, especially in terms of reducing energy consumption, the impact on hardware resources has not yet been explored. This is where design space exploration (DSE) becomes crucial, as it allows for the optimization of hardware performance by tailoring both the hardware and model parameters to suit specific application needs. However, DSE can be extremely challenging given the potentially large design space and the interplay of hardware architecture design choices and application-specific model parameters. In this paper, we propose a flexible hardware design that leverages the sparsity of SNNs to identify highly efficient, application-specific accelerator designs. We develop a high-level, cycle-accurate simulation framework for this hardware and demonstrate the framework's benefits in enabling detailed and fine-grained exploration of SNN design choices, such as the layer-wise logical-to-hardware ratio (LHR). Our experimental results show that our design can (i) achieve up to 76% reduction in hardware resources and (ii) deliver a speed increase of up to 31.25×, while requiring 27% fewer hardware resources compared to sparsity-oblivious designs. We further showcase the robustness of our framework by varying spike train lengths with different neuron population sizes to find the optimal trade-off points between accuracy and hardware latency.
KW - Spiking neural networks
KW - transaction-level modeling (TLM)
KW - design space exploration
KW - neural network sparsity
KW - resource-efficient machine learning
UR - http://www.scopus.com/inward/record.url?scp=85181545673&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85181545673&partnerID=8YFLogxK
U2 - 10.1109/JETCAS.2023.3327746
DO - 10.1109/JETCAS.2023.3327746
M3 - Article
AN - SCOPUS:85181545673
SN - 2156-3357
VL - 13
SP - 1062
EP - 1072
JO - IEEE Journal on Emerging and Selected Topics in Circuits and Systems
JF - IEEE Journal on Emerging and Selected Topics in Circuits and Systems
IS - 4
ER -