Abstract
In this paper, we propose a new approach to designing finite alphabet iterative decoders (FAIDs) for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC) via recurrent quantized neural networks (RQNNs). We focus on the linear FAID class and use RQNNs to optimize the message-update look-up tables by jointly training their message levels and the RQNN parameters. Existing neural networks for channel coding work well over the additive white Gaussian noise channel (AWGNC) but are inefficient over the BSC, because the BSC supplies only finite-valued channel outputs to feed into the networks. We propose the bit error rate (BER) as the loss function for training the RQNNs over the BSC. The low-precision activations in the RQNN and the quantization in the BER loss raise a critical issue: their gradients vanish almost everywhere, which makes classical backpropagation unusable. We leverage straight-through estimators as surrogate gradients to tackle this issue and provide a joint training scheme. We show that the framework is flexible across code lengths and column weights. In particular, in the high-column-weight case, it automatically designs low-precision linear FAIDs with superior performance, lower complexity, and faster convergence than floating-point belief propagation algorithms in the waterfall region.
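The straight-through-estimator trick the abstract refers to is a standard workaround for quantizers whose true gradient is zero almost everywhere, and it is easy to sketch. Below is a minimal, self-contained PyTorch illustration of the idea: quantize in the forward pass, but pass the gradient through as if the quantizer were the identity. The class name `QuantizeSTE` and the uniform `step` spacing are illustrative assumptions, not the paper's actual RQNN quantizer or message alphabet.

```python
import torch


class QuantizeSTE(torch.autograd.Function):
    """Uniform quantizer trained with a straight-through estimator (STE).

    Forward pass: round the input to the nearest level of a finite alphabet.
    Backward pass: treat the quantizer as the identity and pass the incoming
    gradient through unchanged, since the true gradient of rounding is zero
    almost everywhere.
    """

    @staticmethod
    def forward(ctx, x, step):
        # Snap each value to the nearest multiple of `step`
        # (the assumed spacing between message levels).
        return torch.round(x / step) * step

    @staticmethod
    def backward(ctx, grad_output):
        # Identity surrogate gradient w.r.t. x; `step` gets no gradient here.
        return grad_output, None


# Usage: gradients flow through the quantized forward pass.
x = torch.randn(8, requires_grad=True)
y = QuantizeSTE.apply(x, 0.25)  # quantized messages
y.sum().backward()              # x.grad is all ones, as if y == x
```

The same surrogate-gradient pattern can in principle be applied to the hard decisions inside a BER-style loss, which is the other non-differentiable step the abstract mentions.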
| Original language | English (US) |
| --- | --- |
| Article number | 9057584 |
| Pages (from-to) | 3963-3974 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Communications |
| Volume | 68 |
| Issue number | 7 |
| DOIs | |
| State | Published - Jul 2020 |
Keywords
- Binary symmetric channel
- finite alphabet iterative decoders
- low-density parity-check codes
- quantized neural network
- straight-through estimator
ASJC Scopus subject areas
- Electrical and Electronic Engineering