
LDPC Codes

Alma Bregaj
Project, ECE 534
A. Introduction
 Low-density parity-check (LDPC) codes are a class of linear
block error-correcting codes.

 Their main advantage is that they provide high performance
together with linear-time decoding algorithms.

 LDPC codes were first introduced by Gallager in his PhD
thesis in 1960.

2 Alma Bregaj
1. Error correction using parity-checks
 Single parity check code (SPC)
 Example:
 The 7-bit ASCII string for the letter S is 1010011, and a parity bit
is to be added as the eighth bit. The string for S already has an even
number of ones (namely four) and so the value of the parity bit is
0, and the codeword for S is 10100110.
 More formally, for the 7-bit ASCII plus even parity code we define
a codeword c to have the following structure:
 c = [c1 c2 c3 c4 c5 c6 c7 c8], where each ci is either 0 or 1, and every
codeword satisfies the constraint (with ⊕ denoting modulo-2 addition):

c1 ⊕ c2 ⊕ c3 ⊕ c4 ⊕ c5 ⊕ c6 ⊕ c7 ⊕ c8 = 0

This equation is called a parity-check equation.

1. Error correction using parity-checks
 In matrix form, a string y = [c1 c2 c3 c4 c5 c6] is a valid codeword for
the code with parity-check matrix

     [ 1 1 0 1 0 0 ]
 H = [ 0 1 1 0 1 0 ]
     [ 1 1 1 0 0 1 ]

if and only if it satisfies the matrix equation:

Hy^T = 0 (1.4)
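The check Hy^T = 0 can be sketched in Python with NumPy. The matrix below is a sketch built from the example code's three parity constraints (c1⊕c2⊕c4, c2⊕c3⊕c5 and c1⊕c2⊕c3⊕c6, one per row):

```python
import numpy as np

# Syndrome check Hy^T = 0 (mod 2) for the running (6,3) example code.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])

def is_codeword(y):
    """Return True iff H y^T = 0 over GF(2)."""
    return not np.any(H @ y % 2)

print(is_codeword(np.array([1, 1, 0, 0, 1, 0])))  # True  (valid codeword)
print(is_codeword(np.array([1, 0, 1, 0, 1, 0])))  # False (one bit flipped)
```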

2. Encoding
 The code constraints from the example can be re-written
as:

c4 = c1 ⊕ c2
c5 = c2 ⊕ c3
c6 = c1 ⊕ c2 ⊕ c3

 The codeword bits c1, c2 and c3 contain the three-bit message,
while the codeword bits c4, c5 and c6 contain the three parity-
check bits.
 Written this way, the codeword constraints show how to
encode the message. For the message [1 1 0]:

c4 = 1 ⊕ 1 = 0
c5 = 1 ⊕ 0 = 1
c6 = 1 ⊕ 1 ⊕ 0 = 0
2. Encoding
 and so the codeword for this message is c = [110010].
 Again these constraints can be written in matrix form as follows:

 c = [c1 c2 c3] G, with

     [ 1 0 0 1 0 1 ]
 G = [ 0 1 0 1 1 1 ]
     [ 0 0 1 0 1 1 ]

 where the matrix G is called the generator matrix of the code.


 The message bits are conventionally labeled by u = [u1 u2 ... uk],
where the vector u holds the k message bits. Thus the codeword c
corresponding to the binary message u = [u1 u2 u3] can be found
using the matrix equation:

c = uG.
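The encoding c = uG can be sketched as follows. G here is an assumption of ours, built as [I | A] from the example code's three parity constraints:

```python
import numpy as np

# Encoding c = uG (mod 2) for the (6,3) example: G = [I | A], where the
# parity part A comes from c4 = c1+c2, c5 = c2+c3, c6 = c1+c2+c3 (mod 2).
G = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1]])

u = np.array([1, 1, 0])      # three-bit message
c = u @ G % 2                # codeword
print(c)                     # [1 1 0 0 1 0]
```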

3. Error detection and correction
 Suppose a codeword has been sent down a binary symmetric
channel and one or more of the codeword bits may have been
flipped. The task is to detect any flipped bits and, if possible,
to correct them.

 Firstly, we know that every codeword in the code must satisfy


(1.4), and so errors can be detected in any received word
which does not satisfy this equation.

3. Error detection and correction
 Example:
 The codeword c = [101110] from the code in Example 1.3
was sent through a channel and the string y = [101010] was
received. Substitution into equation (1.4) gives:

Hy^T = [1 0 0]^T

 The result is nonzero and so the string y is not a codeword of
this code.
3. Error detection and correction
 s = Hy^T is called the syndrome of y.

 The syndrome indicates which parity-check constraints are
not satisfied by y.

 In this example, the syndrome indicates
that the first parity-check equation in H is not satisfied by y.

 Since this parity-check equation involves the 1st,
2nd and 4th codeword bits, we can conclude that at least one
of these three bits has been inverted by the channel.

3.Error detection and correction
 The Hamming distance between two codewords is defined as
the number of bit positions in which they differ.
 The minimum distance of a code, d min , is defined as the
smallest Hamming distance between any pair of codewords in
the code.
 A code with minimum distance d min can always detect t
errors whenever:
t < d min
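For the small example code, d min can be found by brute force; the sketch below (helper names are ours) enumerates all eight codewords:

```python
from itertools import product

# Minimum distance of the (6,3) example code, computed by brute force
# using the constraints c4=c1+c2, c5=c2+c3, c6=c1+c2+c3 (mod 2).

def encode(u):
    c1, c2, c3 = u
    return (c1, c2, c3, (c1 + c2) % 2, (c2 + c3) % 2, (c1 + c2 + c3) % 2)

codewords = [encode(u) for u in product((0, 1), repeat=3)]

def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return sum(x != y for x, y in zip(a, b))

d_min = min(hamming(a, b) for a in codewords for b in codewords if a != b)
print(d_min)            # 3 -> the code detects any t < 3 bit errors
```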

3.Error detection and correction
 To go further and correct the bit flipping errors requires that
the decoder determine which codeword was most likely to
have been sent. Based only on knowing the binary received
string, y, the best decoder will choose the codeword closest
in Hamming distance to y. When there is more than one
codeword at the minimum distance from y the decoder will
randomly choose one of them.

 This decoder is called the maximum likelihood (ML) decoder,
as it will always choose the codeword which is most likely to
have produced y.
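A brute-force sketch of ML decoding for the example code (feasible only because the code is tiny; the function names are ours):

```python
from itertools import product

# Maximum-likelihood decoding over a BSC: choose the codeword closest to
# the received string y in Hamming distance (ties broken arbitrarily).
# Codewords are those of the running (6,3) example code.

def encode(u):
    c1, c2, c3 = u
    return (c1, c2, c3, (c1 + c2) % 2, (c2 + c3) % 2, (c1 + c2 + c3) % 2)

codewords = [encode(u) for u in product((0, 1), repeat=3)]

def ml_decode(y):
    """Return the codeword nearest to y in Hamming distance."""
    return min(codewords, key=lambda c: sum(a != b for a, b in zip(c, y)))

y = (1, 0, 1, 0, 1, 0)         # received string from the earlier example
print(ml_decode(y))            # (1, 0, 1, 1, 1, 0) -- the codeword sent
```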

B.Representations for LDPC codes
 Matrix representation
 Graphical Representation

1.Matrix Representation
 The matrix defined below is a parity-check matrix with
dimension m x n = 5 x 10 for a (10, 5) code. We can now define two
numbers describing this matrix: wr, the number of 1's in
each row, and wc, the number of 1's in each column.

Regular and Irregular LDPC codes
 A low-density parity-check code is a linear block code for
which the parity-check matrix has a low density of 1's.
 A regular LDPC code is a linear block code whose parity-
check matrix H contains exactly wc 1's in each column and
exactly wr = wc (n / m) 1's in each row.
 An LDPC code is irregular if H is low-density, but the number
of 1's in each row or column is not constant.
 It is easiest to see the sense in which an LDPC code is regular
or irregular through its graphical representation.
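The row- and column-weight conditions can be checked mechanically. The matrix below is a small illustrative (2,4)-regular example of our own, not one from the slides:

```python
import numpy as np

# Checking (wc, wr)-regularity: every column must contain exactly wc ones
# and every row exactly wr = wc * (n / m) ones.
H = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 1, 1, 0, 0],
              [1, 1, 0, 0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1, 1, 1, 1]])

wc = H.sum(axis=0)   # column weights
wr = H.sum(axis=1)   # row weights
regular = len(set(wc)) == 1 and len(set(wr)) == 1
print(regular, wc[0], wr[0])   # True 2 4
```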

2. Graphical Representation
 The Tanner graph of a code is drawn according to the
following rule: check node j is connected to the variable
node i whenever element hij in H is 1.
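This rule translates directly into code; a minimal sketch, using a parity-check matrix built from the (6,3) example code's constraints:

```python
# Building the Tanner graph from H: check node j is joined to variable
# node i whenever H[j][i] == 1.  Plain lists, no graph library needed.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 1, 1, 0, 0, 1]]

edges = [(j, i) for j, row in enumerate(H) for i, h in enumerate(row) if h]

# Neighbourhoods as used by message-passing decoders:
check_nbrs = {j: [i for jj, i in edges if jj == j] for j in range(len(H))}
var_nbrs = {i: [j for j, ii in edges if ii == i] for i in range(len(H[0]))}
print(check_nbrs[0])   # [0, 1, 3] -> first check involves c1, c2, c4
```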

C.Constructing LDPC codes
 Several different algorithms exist to construct suitable
LDPC codes.

 Gallager himself introduced one. Furthermore, MacKay
proposed a method to semi-randomly generate sparse
parity-check matrices. In fact, completely randomly chosen
codes are good with high probability.

 The problem that arises is that the encoding complexity
of such codes is usually rather high.

D.Decoding LDPC Codes
 Hard-decision decoding
 Soft-decision decoding

1. Hard-Decision Decoding
 Step 1: All v-nodes ci send a message to their c-nodes fj
containing the bit they believe to be the correct one for
them.
 Step 2: Every check node fj calculates a response to every
connected variable node. The response message contains the
bit that fj believes to be the correct one for this v-node ci,
assuming that the other v-nodes connected to fj are correct.
In other words: if you look at the example, every c-node fj is
connected to 4 v-nodes. So a c-node fj looks at the messages
received from three v-nodes and calculates the bit that the
fourth v-node should have in order to fulfill the parity-check
equation.

 Step 3: The v-nodes receive the messages from the check
nodes and use this additional information to decide whether
their originally received bit is OK. A simple way to do this is a
majority vote. Coming back to our example, this means
that each v-node has three sources of information
concerning its bit: the original bit received and two
suggestions from the check nodes. Table 3 illustrates this step.
Now the v-nodes can send another message with their (hard)
decision for the correct value to the check nodes.
 Step 4: Go to step 2, and repeat until all parity-check
equations are satisfied or a maximum number of iterations is
reached.
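The steps above can be approximated by the closely related bit-flipping rule (a variant of the majority vote, not the exact message schedule described): flip the bit involved in the most unsatisfied checks. A sketch on the (6,3) example; note this toy code is far shorter than a real LDPC code and only corrects errors in well-connected bits:

```python
import numpy as np

# Minimal bit-flipping decoder: compute the syndrome, count per bit how
# many unsatisfied checks it takes part in, flip the worst offender, and
# repeat until all checks pass (or max_iter is hit).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])

def bit_flip_decode(y, max_iter=20):
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H @ y % 2
        if not syndrome.any():            # every parity check satisfied
            break
        counts = syndrome @ H             # unsatisfied checks per bit
        y[np.argmax(counts)] ^= 1         # flip the most suspect bit
    return y

y = np.array([1, 0, 0, 0, 1, 0])          # codeword [1 1 0 0 1 0], second bit flipped
print(bit_flip_decode(y))                 # [1 1 0 0 1 0]
```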

2.Soft-Decision Decoding
 The above description of hard-decision decoding was mainly
for educational purposes, to give an overview of the idea.

 Soft-decision decoding of LDPC codes yields better
decoding performance and is therefore the preferred method.

 The underlying idea is exactly the same as in hard-decision
decoding.

E.Encoding LDPC Codes
 Choose certain variable nodes to place the message bits on.
 In the second step, calculate the missing values of the
other nodes.
 An obvious solution would be to solve the parity-check
equations. This would involve operations with the whole
parity-check matrix, and the complexity would again be
quadratic in the block length.
 In practice, however, more clever methods are used to ensure
that encoding can be done in much shorter time.
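One such shortcut, sketched under the assumption that H can be arranged in the form [A | I]: the parity bits then follow in linear time as p = Au^T (mod 2). Shown here on the (6,3) example code:

```python
import numpy as np

# When H = [A | I], encoding is direct: the parity bits are p = A u^T
# (mod 2) and the codeword is [u | p].  Practical LDPC encoders exploit
# similar (approximately triangular) structure in much longer codes.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]])

def encode(u):
    p = A @ u % 2                 # parity bits from the message bits
    return np.concatenate([u, p])

print(encode(np.array([1, 1, 0])))   # [1 1 0 0 1 0]
```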

F.Conclusions
 Low-density parity-check codes are being studied for a large
variety of applications, much like turbo codes, trellis codes, etc.

 They make it possible to implement parallelizable decoders.

 The main disadvantages are that encoders are somewhat more
complex and that the code length has to be rather long to yield
good results.

 Their main advantage is that they provide a performance which is
very close to capacity for many different channels, together with
linear-time decoding algorithms.

Thank you!

