TY - JOUR
T1 - Polyhedral Specification and Code Generation of Sparse Tensor Contraction with Co-iteration
AU - Zhao, Tuowen
AU - Popoola, Tobi
AU - Hall, Mary
AU - Olschanowsky, Catherine
AU - Strout, Michelle
N1 - Publisher Copyright:
© 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2022/12/16
Y1 - 2022/12/16
N2 - This article presents a code generator for sparse tensor contraction computations. It leverages a mathematical representation of loop nest computations in the sparse polyhedral framework (SPF), which extends the polyhedral model to support non-affine computations, such as those that arise in sparse tensors. SPF is extended to perform layout specification, optimization, and code generation of sparse tensor code: (1) We develop a polyhedral layout specification that decouples iteration spaces for layout and computation; and (2) we develop efficient co-iteration of sparse tensors by combining polyhedra scanning over the layout of one sparse tensor with the synthesis of code to find corresponding elements in other tensors through an SMT solver. We compare the generated code with that produced by a state-of-the-art tensor compiler, TACO. We achieve on average 1.63× faster parallel performance than TACO on sparse-sparse co-iteration and describe how to improve that to 2.72× average speedup by switching the find algorithms. We also demonstrate that decoupling iteration spaces of layout and computation enables additional layout and computation combinations to be supported.
AB - This article presents a code generator for sparse tensor contraction computations. It leverages a mathematical representation of loop nest computations in the sparse polyhedral framework (SPF), which extends the polyhedral model to support non-affine computations, such as those that arise in sparse tensors. SPF is extended to perform layout specification, optimization, and code generation of sparse tensor code: (1) We develop a polyhedral layout specification that decouples iteration spaces for layout and computation; and (2) we develop efficient co-iteration of sparse tensors by combining polyhedra scanning over the layout of one sparse tensor with the synthesis of code to find corresponding elements in other tensors through an SMT solver. We compare the generated code with that produced by a state-of-the-art tensor compiler, TACO. We achieve on average 1.63× faster parallel performance than TACO on sparse-sparse co-iteration and describe how to improve that to 2.72× average speedup by switching the find algorithms. We also demonstrate that decoupling iteration spaces of layout and computation enables additional layout and computation combinations to be supported.
KW - Data layout
KW - code synthesis
KW - index array properties
KW - polyhedral compilation
KW - sparse tensor contraction
KW - uninterpreted functions
UR - http://www.scopus.com/inward/record.url?scp=85148998004&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85148998004&partnerID=8YFLogxK
U2 - 10.1145/3566054
DO - 10.1145/3566054
M3 - Article
AN - SCOPUS:85148998004
SN - 1544-3566
VL - 20
JO - ACM Transactions on Architecture and Code Optimization
JF - ACM Transactions on Architecture and Code Optimization
IS - 1
M1 - 16
ER -