GCNAX: A Flexible and Energy-efficient Accelerator for Graph Convolutional Neural Networks

Jiajun Li, Ahmed Louri, Avinash Karanth, Razvan Bunescu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

96 Scopus citations

Abstract

Graph convolutional neural networks (GCNs) have emerged as an effective approach to extend deep learning for graph data analytics. Because graphs are usually irregular, with nodes having widely varying numbers of neighbors, processing GCNs efficiently poses a significant challenge to the underlying hardware. Although specialized GCN accelerators have been proposed to deliver better performance than generic processors, prior accelerators not only under-utilize the compute engine, but also impose redundant data accesses that reduce throughput and energy efficiency. Therefore, optimizing the overall flow of data between compute engines and memory, i.e., the GCN dataflow, to maximize utilization and minimize data movement is crucial for achieving efficient GCN processing. In this paper, we propose a flexible and optimized dataflow for GCNs that simultaneously improves resource utilization and reduces data movement. This is realized by fully exploring the design space of GCN dataflows and evaluating the number of execution cycles and DRAM accesses through an analysis framework. Unlike prior GCN dataflows, which employ rigid loop orders and loop fusion strategies, the proposed dataflow can reconfigure the loop order and loop fusion strategy to adapt to different GCN configurations, which results in much improved efficiency. We then introduce a novel accelerator architecture called GCNAX, which tailors the compute engine, buffer structure, and buffer size to the proposed dataflow. Evaluated on five real-world graph datasets, our simulation results show that GCNAX reduces DRAM accesses by factors of 8.1× and 2.4×, while achieving 8.9× and 1.6× speedup and 9.5× and 2.3× energy savings on average over HyGCN and AWB-GCN, respectively.
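For context on the computation being scheduled: each GCN layer is essentially a chain of two matrix multiplications, X' = σ(Â X W), where Â is a sparse normalized adjacency matrix, and the order in which the chain is evaluated (aggregation first vs. combination first) changes the amount of work and data movement. This is the kind of configuration-dependent choice the reconfigurable dataflow exploits. The sketch below is illustrative only, not the paper's accelerator or code; the matrix sizes, edge density, and FLOP model are assumptions chosen for the example.

# A minimal, self-contained sketch (not the paper's code) of the GCN layer
# computation X' = A_hat @ X @ W and the two evaluation orders whose cost
# a GCN dataflow must trade off. All sizes and the edge density below are
# illustrative assumptions.
import numpy as np
import scipy.sparse as sp

N, F, O = 10_000, 512, 16            # nodes, input features, output features
A_hat = sp.random(N, N, density=1e-3, format="csr")  # assumed sparse normalized adjacency
X = np.random.rand(N, F)             # node feature matrix
W = np.random.rand(F, O)             # layer weight matrix

nnz = A_hat.nnz
# Aggregation first: (A_hat @ X) @ W -> sparse-dense product over the F-wide features
flops_agg_first  = 2 * nnz * F + 2 * N * F * O
# Combination first: A_hat @ (X @ W) -> sparse-dense product over the narrower O-wide result
flops_comb_first = 2 * N * F * O + 2 * nnz * O

out1 = (A_hat @ X) @ W
out2 = A_hat @ (X @ W)
assert np.allclose(out1, out2)       # identical result, different cost

print(f"aggregation-first : {flops_agg_first / 1e9:.2f} GFLOPs")
print(f"combination-first : {flops_comb_first / 1e9:.2f} GFLOPs")

With these illustrative sizes, evaluating the chain combination-first avoids scaling the sparse product by the full feature width F; which order (and which loop fusion and tiling) is best depends on the graph and layer dimensions, which is why a rigid dataflow leaves efficiency on the table.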

Original language: English (US)
Title of host publication: Proceedings - 27th IEEE International Symposium on High Performance Computer Architecture, HPCA 2021
Publisher: IEEE Computer Society
Pages: 775-788
Number of pages: 14
ISBN (Electronic): 9780738123370
DOIs
State: Published - Feb 2021
Event: 27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021 - Virtual, Seoul, Korea, Republic of
Duration: Feb 27, 2021 - Mar 1, 2021

Publication series

Name: Proceedings - International Symposium on High-Performance Computer Architecture
Volume: 2021-February
ISSN (Print): 1530-0897

Conference

Conference: 27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021
Country/Territory: Korea, Republic of
City: Virtual, Seoul
Period: 2/27/21 - 3/1/21

Keywords

  • Dataflow Accelerators
  • Domain-specific Accelerators
  • Graph Convolutional Neural Networks

ASJC Scopus subject areas

  • Hardware and Architecture
