Papers Read on AI

April 10, 2022  

Deep Learning Methods for Improved Decoding of Linear Codes


The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required.
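The weighted min-sum idea in the abstract can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's implementation: it uses the (7,4) Hamming code, a single scalar weight `w` tied across all edges and iterations (the paper learns per-edge weights via training), and a fixed iteration count.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (illustrative choice;
# the paper evaluates BCH and other codes).
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def neural_min_sum(llr, H, w=0.8, n_iters=5):
    """Weighted min-sum decoding; w is a stand-in for a learned weight,
    tied across iterations as in the RNN-style architecture."""
    m, n = H.shape
    # Variable-to-check messages, initialized with the channel LLRs.
    v2c = np.where(H, llr, 0.0)
    for _ in range(n_iters):
        # Check-node update: weighted min-sum approximation of BP.
        c2v = np.zeros_like(v2c)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = [k for k in idx if k != j]
                sign = np.prod(np.sign(v2c[i, others]))
                mag = np.min(np.abs(v2c[i, others]))
                c2v[i, j] = w * sign * mag  # tied weight scales each message
        # Variable-node update: channel LLR plus extrinsic check messages.
        for j in range(n):
            idx = np.flatnonzero(H[:, j])
            for i in idx:
                others = [k for k in idx if k != i]
                v2c[i, j] = llr[j] + np.sum(c2v[others, j])
    # Marginalize and take a hard decision (negative LLR -> bit 1).
    total = llr + c2v.sum(axis=0)
    return (total < 0).astype(int)

# All-zero codeword over a noisy channel: positive LLR means "bit 0 likely".
# Position 1 arrives with the wrong sign; decoding corrects it.
llr = np.array([2.1, -0.4, 1.8, 2.5, 1.2, 0.9, 1.7])
print(neural_min_sum(llr, H))  # -> [0 0 0 0 0 0 0]
```

In the paper the weights are trained end-to-end on the BP computation graph unrolled over iterations; tying them across iterations, as the scalar `w` caricatures here, is what yields the recurrent architecture with far fewer parameters.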

2017: Eliya Nachmani, Elad Marciano, Loren Lugosch, W. Gross, D. Burshtein, Yair Be’ery

Deep learning, Belief propagation, Algorithm, Artificial neural network, Recurrent neural network, Quantization (signal processing), Tanner graph, BCH code, Signal-to-noise ratio, Linear code, Network architecture, Parity bit, Block code, End-to-end principle, Iteration, Maxima and minima, Random neural network, Computational complexity theory, Linear programming relaxation

https://arxiv.org/pdf/1706.07043v2.pdf