ISSN: 2395-0560

International Research Journal of Innovative Engineering


www.irjie.com
Volume 1, Issue 2, February 2015

Finite Alphabet Iterative Decoder for Low Density Parity Check Codes

Anita B. Wagh1, Pramodkumar B. Wavdhane2
1Assistant Professor, Dept. of ECE, Sinhgads Institute of Kashibai Navale College of Engg, Pune, Maharashtra, India
2PG Student [VLSI & ES], Dept. of ECE, Sinhgads Institute of Kashibai Navale College of Engg, Pune, Maharashtra, India

Abstract - Low Density Parity Check (LDPC) codes offer error-correcting performance approaching the Shannon limit, so they are used in many applications. LDPC codes are decoded with iterative belief propagation (BP) algorithms or with approximations of BP, but BP-based decoding suffers from the error floor problem: for finite-length codes, the error-correcting performance curve can flatten out in the low error rate region due to the presence of cycles in the corresponding Tanner graph. This happens because decoding converges to trapping sets and cannot correct all errors even when many more decoding iterations are carried out. Performance in the error floor region is important for applications that require very low error rates, such as flash memory and optical communications. To overcome this problem, a new type of decoder, the Finite Alphabet Iterative Decoder (FAID), was developed for LDPC codes. In a FAID the messages are drawn from an alphabet with a very small number of levels, and the variable-to-check messages are derived from the check-to-variable messages and the channel information through a predefined Boolean map designed to optimize error-correcting capability in the error floor region. FAIDs can outperform floating-point BP decoders in the error floor region over the Binary Symmetric Channel (BSC). In addition, multiple FAIDs with different map functions can be developed to further improve performance at higher complexity.

Keywords - low density parity check (LDPC), trapping sets, belief propagation, error floor.

1. Introduction
Low Density Parity Check (LDPC) codes are block codes. Like other block codes, they are defined by a parity check matrix (H); because H contains a low ratio of nonzero entries, the codes are known as low density [1]. They are preferred over other block codes in many applications because of their excellent error correction capability. LDPC codes can be decoded by iterative belief propagation (BP), or by approximations of BP such as min-sum decoding algorithms [3]. Normalized APP-based algorithms are also used, but among these, iterative belief propagation gives the smallest performance loss. LDPC codes are famous for approaching the Shannon limit: above a certain channel error rate, the decoder can no longer recover the transmitted bits. For finite-length codes, the error-correcting performance curve flattens out in the low error rate region due to the presence of cycles in the Tanner graph; this phenomenon is known as the error floor [2]. It occurs because decoding converges to trapping sets [3] and fails to correct all errors even when a large number of iterations is carried out. Error floor performance is especially important in applications such as NAND flash memory and optical communications.
_____________________________________________________________________________________________________________
2015, IRJIE - All Rights Reserved
Page -50

New research has been carried out to lower the error floor, but the existing approach is to take multiple decoding trials with complicated variable-node processing. Recently a new type of decoder, the Finite Alphabet Iterative Decoder (FAID), was introduced for LDPC codes. In these decoders the messages are represented by alphabets with a small number of levels. There are variable nodes and check nodes; the variable-to-check messages are derived from the check-to-variable messages and the channel information through predefined Boolean maps, which are designed to optimize error correction in the error floor region. The FAID considered here uses an alphabet of seven message levels, which translates to a three-bit word length [5]. A FAID can perform better than a floating-point BP decoder in the error floor region over the binary symmetric channel, and its performance can be improved further by using different Boolean map functions.

2. Preliminaries
An LDPC code is a linear block code defined by its parity check matrix, denoted H, of order (N, K). The parity check matrix can be represented as a Tanner graph, a bipartite graph with a set of variable nodes V = (V1, V2, ..., Vn) and a set of check nodes C = (C1, C2, ..., Cm) [1]. The Tanner graph G is shown in Fig. 1. The check nodes correspond to the rows of the parity check matrix and the variable nodes to its columns; a check node and a variable node are connected if the corresponding entry of the parity check matrix is 1. H consists of r*t submatrices, each of size L*L, and each submatrix is either a cyclically shifted identity matrix or the zero matrix. Because the parity check matrix is represented by the Tanner graph and messages are passed along the edges connecting check nodes and variable nodes, this type of decoding is known as message passing decoding, and the decoders as message passing decoders. The procedure continues until a valid codeword is found. Two functions are central to such a decoder: the Variable Node Unit (VNU) and the Check Node Unit (CNU), which compute the variable-to-check and check-to-variable messages, respectively. One more input must be provided to the VNU: the channel values Y = (Y1, ..., Yn).
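The stopping criterion just described, iterating until a valid codeword is found, amounts to checking that every parity check is satisfied. A minimal sketch in Python (the small H below is a hypothetical toy example, not a code from the paper):

```python
import numpy as np

# Hypothetical small parity-check matrix, for illustration only.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def is_valid_codeword(H, x):
    """A word x is a codeword iff every check is satisfied: H @ x = 0 (mod 2)."""
    return not np.any((H @ x) % 2)

# The all-zero word always satisfies every parity check.
assert is_valid_codeword(H, np.zeros(6, dtype=np.uint8))
```

A message passing decoder would call this syndrome check after each iteration and stop as soon as it returns true.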

Fig.1.Tanner Graph

In the graph above, squares denote check nodes and circles denote variable nodes. In an LDPC code, N is the total number of bits sent and K the number of actual message bits, so the number of parity bits is
M = N - K
The rate is given by
R = K/N
The weight of the matrix is the number of 1s in the parity check matrix; for a regular LDPC code it grows only linearly with n. The rows of the parity check matrix need not be linearly independent. A parity check matrix is called (dv, dc)-regular when each row has dc 1s and each column has dv 1s. The actual rate then satisfies R >= K/N, the design rate.
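The quantities above (parity bits, rate, and (dv, dc)-regularity) can be checked directly on a small matrix. The H below is a hypothetical (2, 4)-regular example chosen only to illustrate the definitions:

```python
import numpy as np

# Hypothetical (dv, dc) = (2, 4) regular parity-check matrix; not from the paper.
H = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [1, 1, 0, 0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 0, 1, 1]], dtype=np.uint8)

M, N = H.shape               # M parity checks, N code bits
K = N - M                    # information bits if the rows were independent
R = K / N                    # design rate (actual rate >= R when rows are dependent)

row_weights = H.sum(axis=1)  # dc: ones per row (check-node degrees)
col_weights = H.sum(axis=0)  # dv: ones per column (variable-node degrees)

# (dv, dc)-regular: every row has dc ones, every column has dv ones.
assert (row_weights == 4).all() and (col_weights == 2).all()
```

Note that, as the text says, the rows need not be independent; in this example they are not, so the true rate exceeds the design rate R.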

3. Decoding Algorithms
The error floor is an important phenomenon in these decoders. It arises from trapping sets in the Tanner graph of the LDPC code: subgraph structures that cause conventional iterative decoders to fail when low-weight error patterns are present. A trapping set is generally denoted (a, b), where a is the number of variable nodes and b the number of odd-degree check nodes in the subgraph. For example, the (5, 3) trapping set shown in Fig. 2 consists of 5 variable nodes and 3 odd-degree check nodes. If such structures are present in the Tanner graph of the code, the iterative decoder fails when the errors are located in the trapping set. Once the trapping sets of a given LDPC code are identified, a sum-product algorithm decoder can be custom designed to give a lower error floor [3].

Figure 2. Example of a Trapping Set for a Regular LDPC Code: (5, 3).
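The (a, b) notation can be computed mechanically: count the variable nodes in the subset and the check nodes of odd degree in the induced subgraph. A sketch, using a hypothetical H and variable-node subset (neither comes from the paper):

```python
import numpy as np

# Hypothetical parity-check matrix, purely to illustrate the (a, b) notation.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def trapping_set_params(H, var_subset):
    """Return (a, b): a = number of variable nodes in the subset, b = number of
    check nodes with odd degree in the induced subgraph."""
    check_degrees = H[:, var_subset].sum(axis=1)  # degree of each check node
    b = int(np.count_nonzero(check_degrees % 2))
    return len(var_subset), b
```

When b is small, few checks flag the error pattern, which is why errors confined to such sets are hard for iterative decoders to detect and correct.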
3.1 Normalized APP-Based Algorithm
When the row and column weights are large, a very complex interconnection network is required, and when the code length is large the memory must also be large. To achieve high throughput with low complexity, a parallel architecture is needed, and the normalized APP-based algorithm performs well in such a structure. This algorithm simplifies the variable node operations by substituting the a posteriori log likelihood ratio (LLR) for the extrinsic outgoing messages from a variable node.
The normalized APP-based algorithm can, however, cause a large number of bit errors as the iterations proceed: the propagated message values saturate to a particular value, so the (possibly wrong) reliability keeps increasing. To address this problem, a conditional node update algorithm was developed, known as the fixed-point normalized APP-based algorithm with conditional node update. It stops updating a variable node once the reliability of the corresponding a posteriori LLR reaches the maximum fixed-point value.
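A minimal sketch of the conditional node update idea as we read it from the text (the word length Q and the function name are assumptions for illustration): a variable node whose a posteriori LLR has saturated at the maximum fixed-point magnitude is frozen and no longer updated.

```python
# Sketch of the conditional node update rule; Q and the function name are
# hypothetical, chosen only to illustrate the mechanism described in the text.
Q = 5                                # hypothetical fixed-point word length
LLR_MAX = 2 ** (Q - 1) - 1           # largest representable magnitude (15)

def conditional_update(app_llr, new_llr):
    """Return the node's next APP value under the conditional update rule."""
    if abs(app_llr) >= LLR_MAX:      # reliability already at the fixed-point max
        return app_llr               # update is skipped: the node is frozen
    return max(-LLR_MAX, min(LLR_MAX, new_llr))  # otherwise clip and accept
```

Freezing saturated nodes avoids recomputing values that can no longer change meaningfully, which is where the complexity saving comes from.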
3.2. Belief Propagation Based Decoders
The message passing algorithms used to decode LDPC codes are based on belief propagation (BP). The BP algorithm
works on a graphical model of the code known as the Tanner graph. BP computes marginals of functions on a graphical model; it has its roots in the broad class of Bayesian inference problems. Inference using BP is exact only on cycle-free (tree) graphs, but it provides a surprisingly close approximation to the exact marginals on loopy graphs [4]. Exact inference in a Bayesian belief network is hard in general, except under strong restrictions on the graphical model topology, so research effort has gone into efficient approximation algorithms. Algorithms that take the loopy structure of the graph into account can outperform algorithms that neglect it, closely approximating exact inference, but at prohibitively increased complexity. Taking a brief look at iterative decoders of varying complexity, binary message passing algorithms such as the Gallager A/B algorithms occupy one end of the spectrum, while the BP algorithm lies at the other end; the range of decoders filling the intermediate space can be understood as implementations of BP and its variants with different levels of precision. The belief propagation algorithm and the approximations discussed above ignore the topology of a bit's neighborhood, as they operate under the assumption that the graph is a tree. This motivates decoders in which the messages convey information about the local neighborhood of a node in the graph.
Fig. 3 shows the error performance of an LDPC code for various decoding algorithms over the AWGN channel: floating-point BP, floating-point NMS, fixed-point NMS, and the fixed-point normalized APP-based algorithm with and without conditional node update, all with eight iterations. The normalization factors of the floating-point NMS, fixed-point NMS, and fixed-point APP-based algorithms are set to 0.375, 0.5625, and 0.34375, respectively, which give the best error performance; a normalization factor of 0.25 for the fixed-point normalized APP-based algorithm gives a simpler hardware implementation. The BP algorithm has the best error-correcting performance, and the gap between BP and the floating-point NMS algorithm is 0.033 dB at a BER of 10^-7. Fixed-point NMS decoding gives a slight performance degradation of 0.052 dB compared to the BP algorithm. With normalization factors of 0.34375 or 0.25, the fixed-point normalized APP-based algorithms with and without conditional node update give error performance close to NMS decoding; the degradation due to the conditional node update is less than 0.03 dB at a frame error rate (FER) of 10^-6. This degradation occurs because once an incorrect variable node has saturated to the maximum fixed-point value ±(2^(q-1) - 1), it cannot be corrected under the conditional node update scheme. In the high SNR region, however, only a small number of variable nodes remain incorrect: at an SNR of 5.7 dB, 92% of erroneous frames contain only one uncorrected bit error, and such bits can be decoded without conditional node update. The proposed conditional variable node update algorithm therefore has lower computational complexity.

4. Finite Alphabet Iterative Decoder


A new type of decoder is proposed that has lower computational complexity and better performance than the decoders studied so far, particularly in the error floor region: the Finite Alphabet Iterative Decoder (FAID), developed for LDPC codes. In this decoder the messages are represented by alphabets with a very small number of levels, and the variable-to-check messages are derived from the check-to-variable messages [5] and the channel information through a predefined Boolean map designed to optimize error-correcting capability in the error floor region. FAIDs can perform better than floating-point BP decoders in the error floor region. In addition, multiple FAIDs with different map functions can be developed to further improve performance at higher complexity.

Figure 3. Frame (dashed line) and bit-error (solid line) performance of an LDPC code with the serial schedule.

In the block diagram (Fig. 4), the channel information is supplied for processing. The parity check matrix H of the QC-LDPC code consists of r*t nonzero submatrices of dimension L*L. The second block provides the L[t/2w-1] variable node units, which contain the Boolean maps and produce the variable-to-check messages. A permutation block then routes the information to the r*L check node units. Before the permutation, the data from the different VNUs is converted from parallel to serial form for processing; after processing by the CNUs, it is converted back from serial to parallel form before the reverse permutation.

Figure 4. Block Diagram of the Decoder

The proposed decoder employs r*L CNUs to process all rows of H simultaneously in order to reach the high throughput required by optical communication and data storage systems.
4.1. VNU Architecture
The Boolean map for the VNU in the FAID can be efficiently implemented in a bit-parallel way using logic gates. The VNU processes the message bits arriving at the variable node; for the first computation the channel value is provided to the variable node, after which it computes all the message bits and passes them to the check nodes.
The VNU function can be given as follows:
(1)
Before the values are passed to the CNUs, the alphabet levels are encoded into binary representations in sign-magnitude format. Fig. 5 gives the general architecture of the variable node unit for the Finite Alphabet Iterative Decoder for low density parity check codes.

Figure 5. Architecture of Variable Node Unit
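The sign-magnitude encoding mentioned above can be sketched as follows: a seven-level alphabet {-3, ..., 3} fits a three-bit word, one sign bit plus two magnitude bits. The exact bit packing used by the hardware is an assumption.

```python
# Illustrative sign-magnitude encoding of FAID alphabet levels.
def to_sign_magnitude(level):
    """Encode an alphabet level in {-3, ..., 3} as a (sign_bit, magnitude) pair."""
    assert -3 <= level <= 3
    return (1 if level < 0 else 0, abs(level))

def from_sign_magnitude(sign_bit, magnitude):
    """Decode a (sign_bit, magnitude) pair back to a signed level."""
    return -magnitude if sign_bit else magnitude
```

Sign-magnitude form is convenient here because the CNU operates on signs and magnitudes separately, as described in the next subsection.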

4.2. CNU Architecture


The CNU function can be given as follows:
(2)
Fig. 6 shows the proposed bit-serial CNU architecture. The v-to-c messages arrive from the VNUs in sign-magnitude form. They are delivered starting from the most significant bits (MSBs) of the magnitudes, and the sign bits are loaded last as the least significant bits (LSBs). The CNU computes the message bits arriving at the check node from the various variable nodes; the sign of the computed message is the product of the signs of the received messages.

Figure 6. Architecture of Check Node Unit
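The sign rule stated above (the outgoing sign is the product of the incoming signs, i.e. an XOR in sign-bit form) can be sketched on sign-magnitude messages. Taking the minimum of the magnitudes is the usual min-sum-style choice for FAID check nodes [5] and is assumed here:

```python
from functools import reduce

def faid_cnu(messages):
    """messages: (sign_bit, magnitude) pairs from the other edges of the check.
    Returns the outgoing (sign_bit, magnitude)."""
    sign = reduce(lambda a, b: a ^ b, (s for s, _ in messages))  # product of signs
    magnitude = min(m for _, m in messages)                      # min magnitude
    return sign, magnitude
```

Delivering magnitudes MSB-first, as the text describes, lets a bit-serial circuit resolve the minimum early and keep the datapath narrow.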


5. Conclusion
In this paper, we have studied Low Density Parity Check codes, which have better error-correcting performance than other block codes and are used in many applications because their error-correcting performance approaches the Shannon limit. LDPC codes have been decoded by various algorithms, such as floating-point BP, floating-point NMS, fixed-point NMS, and fixed-point normalized APP-based algorithms. Among these, the iterative belief propagation based decoding algorithm and its approximations show good error correction with small performance loss. The belief propagation based algorithms, however, suffer from the error floor problem: an abrupt change in the slope of the error-rate curve that occurs at very low error rates. To address these problems, a new type of decoder with lower computational complexity and better performance was proposed: the Finite Alphabet Iterative Decoder (FAID). In this decoder the messages are represented by alphabets with a very small number of levels, and the channel information is incorporated through a predefined Boolean map designed to optimize error-correcting capability in the error floor region. FAIDs can perform better than the other decoders in the error floor region.

REFERENCES
[1] M. Davey and D. J. MacKay, "Low density parity check codes over GF(q)," IEEE Commun. Lett., vol. 2, pp. 165-167, Jun. 1998.
[2] T. Richardson, "Error floors of LDPC codes," in Proc. 41st Annu. Allerton Conf. Commun., Control, Comput., 2003.
[3] B. Vasic, S. K. Chilappagari, D. V. Nguyen, and S. K. Planjery, "Trapping set ontology," in Proc. 47th Annu. Allerton Conf. Commun., Control, Comput., Sep. 2009.
[4] S. K. Planjery et al., "Iterative decoding beyond belief propagation," in Proc. Inf. Theory Appl. Workshop, Feb. 2010.
[5] F. Cai, X. Zhang, D. Declercq, B. Vasic, D. V. Nguyen, and S. K. Planjery, "Low-complexity finite alphabet iterative decoders for LDPC codes," in Proc. IEEE Int'l Symp. Circuits and Syst., May 2013.

