GENERALIZATION BOUNDS FOR NEURAL NORMALIZED MIN-SUM DECODERS

Research output: Contribution to journal › Conference article › peer-review

1 Scopus citation

Abstract

Machine learning-based decoding algorithms such as neural belief propagation (NBP) have been shown to improve upon prototypical belief propagation (BP) decoders. The NBP decoder unfolds the BP iterations into a deep neural network (DNN), and the parameters of the DNN are trained in a data-driven manner. Neural Normalized Min-Sum (NNMS) and Offset Min-Sum (OMS) decoders with learnable offsets are other adaptations that require fewer learnable parameters than the NBP decoder. In this paper, we study the generalization capabilities of the neural decoder when the check node messages are scaled by parameters learned by optimizing over the training data. Specifically, we show how the generalization gap (i.e., the difference between the empirical and expected bit error rate, BER) depends on the block length, message length, variable/check node degrees, number of decoding iterations, and training dataset size.
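For context, the check-node operation the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of a normalized min-sum check-node update with a learnable scaling factor, not the authors' implementation; the function name, the single shared alpha, and the example values are assumptions made for illustration.

import numpy as np

def nnms_check_update(v2c, alpha):
    """One normalized min-sum check-node update.

    v2c   : incoming variable-to-check messages (LLRs) on the edges
            of a single check node.
    alpha : normalization (scaling) factor applied to the outgoing
            messages; in an NNMS decoder it is learned from training
            data, in classical normalized min-sum it is a fixed
            constant (e.g., 0.8).
    Returns the outgoing check-to-variable message on each edge,
    computed from all *other* incoming edges (extrinsic rule).
    """
    n = len(v2c)
    signs = np.sign(v2c)
    mags = np.abs(v2c)
    out = np.empty(n)
    for i in range(n):
        others = np.delete(np.arange(n), i)  # extrinsic: exclude edge i
        out[i] = alpha * np.prod(signs[others]) * np.min(mags[others])
    return out

# Example: a degree-4 check node with a scaling factor of 0.8
print(nnms_check_update(np.array([1.2, -0.5, 2.0, 0.7]), alpha=0.8))

In an unfolded (NNMS) decoder, one such update runs per check node per iteration, and the scaling parameters become trainable weights of the resulting DNN, which is what makes the generalization analysis in the paper applicable.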

Original language: English (US)
Journal: Proceedings of the International Telemetering Conference
Volume: 2023-October
State: Published - 2023
Externally published: Yes
Event: 58th Annual International Telemetering Conference, ITC 2023 - Las Vegas, United States
Duration: Oct 23, 2023 – Oct 26, 2023

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Instrumentation
  • Computer Networks and Communications
  • Signal Processing
