Abstract
Machine learning-based decoding algorithms such as neural belief propagation (NBP) have been shown to improve upon prototypical belief propagation (BP) decoders. The NBP decoder unfolds the BP iterations into a deep neural network (DNN), and the parameters of the DNN are trained in a data-driven manner. Neural Normalized Min-Sum (NNMS) decoders and Offset Min-Sum (OMS) decoders with learnable offsets are other adaptations that require fewer learnable parameters than the NBP decoder. In this paper, we study the generalization capabilities of a neural decoder whose check node messages are scaled by parameters learned by optimizing over the training data. Specifically, we show how the generalization gap (i.e., the difference between the empirical and expected BER) depends on the block length, message length, variable/check node degrees, number of decoding iterations, and the training dataset size.
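To make the decoder family concrete, below is a minimal PyTorch sketch of one check-node update in a neural normalized min-sum decoder, where the standard min-sum message (extrinsic sign product times extrinsic minimum magnitude) is scaled by a learned weight. The per-iteration parameterization and the names used here (`NeuralMinSumCheckNode`, `num_iterations`, `v2c`) are illustrative assumptions and not the parameterization studied in the paper, which may, for example, learn one weight per edge.

```python
import torch
import torch.nn as nn


class NeuralMinSumCheckNode(nn.Module):
    """One check-node update of a neural normalized min-sum decoder.

    Computes the usual min-sum check-to-variable message and scales it
    by a learnable weight (one weight per decoding iteration in this
    sketch). Assumes nonzero input LLRs so that sign() is in {-1, +1}.
    """

    def __init__(self, num_iterations: int):
        super().__init__()
        # Learnable scaling factors, one per iteration (assumption).
        self.alpha = nn.Parameter(torch.ones(num_iterations))

    def forward(self, v2c: torch.Tensor, iteration: int) -> torch.Tensor:
        # v2c: (batch, check_degree) variable-to-check LLR messages
        # incident on a single check node.
        sign = torch.sign(v2c)
        mag = torch.abs(v2c)

        # Extrinsic sign: product of all signs divided by the edge's own
        # sign (multiplication works because sign is +/-1).
        total_sign = torch.prod(sign, dim=-1, keepdim=True)
        extrinsic_sign = total_sign * sign

        # Extrinsic minimum magnitude: the smallest magnitude among the
        # other edges; the edge holding the overall minimum gets the
        # second-smallest magnitude instead.
        min1, idx1 = torch.min(mag, dim=-1, keepdim=True)
        mag_masked = mag.clone()
        mag_masked.scatter_(-1, idx1, float("inf"))
        min2, _ = torch.min(mag_masked, dim=-1, keepdim=True)
        is_min_edge = torch.arange(mag.shape[-1], device=v2c.device) == idx1
        extrinsic_min = torch.where(is_min_edge, min2, min1)

        # Learned scaling of the check-node message.
        return self.alpha[iteration] * extrinsic_sign * extrinsic_min
```

Setting all scaling weights to one recovers the plain min-sum check-node update, so the learned parameters can be viewed as data-driven normalization factors of the kind whose generalization behavior the paper analyzes.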
| Original language | English (US) |
| --- | --- |
| Journal | Proceedings of the International Telemetering Conference |
| Volume | 2023-October |
| State | Published - 2023 |
| Externally published | Yes |
| Event | 58th Annual International Telemetering Conference, ITC 2023 - Las Vegas, United States. Duration: Oct 23 2023 → Oct 26 2023 |
ASJC Scopus subject areas
- Electrical and Electronic Engineering
- Instrumentation
- Computer Networks and Communications
- Signal Processing