TY - GEN
T1 - Learning to decode LDPC codes with finite-alphabet message passing
AU - Vasic, Bane
AU - Xiao, Xin
AU - Lin, Shu
N1 - Funding Information:
This work is funded by the NSF under grant NSF ECCS-1500170 and is supported in part by the Indo-US Science and Technology Forum (IUSSTF) through the Joint Networked Center for Data Storage Research (JC-16-2014-US).
Publisher Copyright:
© 2018 IEEE.
PY - 2018/10/23
Y1 - 2018/10/23
N2 - In this paper, we discuss the perspectives of utilizing deep neural networks (DNN) to decode Low-Density Parity Check (LDPC) codes. The main idea is to build a neural network to learn and optimize a conventional iterative decoder of LDPC codes. The DNN is based on the Tanner graph, and its activation functions emulate the message update functions in the variable and check nodes. We impose a symmetry on the weight matrices which makes it possible to train the DNN on a single codeword and its noise realizations only. Based on the trained weights and biases, we further quantize the messages in such a DNN-based decoder to 3-bit precision while incurring no loss in error performance compared to the min-sum algorithm. We use examples to show that the DNN framework can be applied to various code lengths. The simulation results show that the trained weights and biases make the iterative DNN decoder converge faster and thus achieve higher throughput, at the cost of trivial additional decoding complexity.
AB - In this paper, we discuss the perspectives of utilizing deep neural networks (DNN) to decode Low-Density Parity Check (LDPC) codes. The main idea is to build a neural network to learn and optimize a conventional iterative decoder of LDPC codes. The DNN is based on the Tanner graph, and its activation functions emulate the message update functions in the variable and check nodes. We impose a symmetry on the weight matrices which makes it possible to train the DNN on a single codeword and its noise realizations only. Based on the trained weights and biases, we further quantize the messages in such a DNN-based decoder to 3-bit precision while incurring no loss in error performance compared to the min-sum algorithm. We use examples to show that the DNN framework can be applied to various code lengths. The simulation results show that the trained weights and biases make the iterative DNN decoder converge faster and thus achieve higher throughput, at the cost of trivial additional decoding complexity.
UR - http://www.scopus.com/inward/record.url?scp=85057290159&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057290159&partnerID=8YFLogxK
U2 - 10.1109/ITA.2018.8503199
DO - 10.1109/ITA.2018.8503199
M3 - Conference contribution
AN - SCOPUS:85057290159
T3 - 2018 Information Theory and Applications Workshop, ITA 2018
BT - 2018 Information Theory and Applications Workshop, ITA 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 Information Theory and Applications Workshop, ITA 2018
Y2 - 11 February 2018 through 16 February 2018
ER -