
Turbo Codes

A Need for Better Codes

• Designing a channel code is always a tradeoff between energy
efficiency and bandwidth efficiency.
• Lower-rate codes correct more errors, so the communication
system can operate with a lower transmit power, transmit over
longer distances, tolerate more interference, use smaller antennas
and transmit at a higher data rate.
• However, lower-rate codes have a large overhead and hence
consume more bandwidth.
• Moreover, decoding complexity grows exponentially with code
length, and long (low-rate) codes place high computational
requirements on conventional decoders.

Encoding is easy but decoding is hard


2
Claude Shannon’s Limit

3
Motivation
• If the transmission rate, the bandwidth and the noise power are fixed, we
get a lower bound on the amount of energy that must be expended to
convey one bit of information. Hence, Shannon capacity sets a limit on the
energy efficiency of a code (a worked evaluation of this limit is sketched
after this list).
• Although Shannon developed his theory in the 1940s, several decades
later code designs were still unable to come close to the theoretical bound.
Even at the beginning of the 1990s, the gap between this theoretical bound
and practical implementations was still, at best, about 3 dB:
“practical codes required about twice as much energy as the
theoretically predicted minimum.”
• New codes were therefore sought that would allow for easier decoding:
  • using a code with mostly high-weight codewords;
  • combining simple codes in parallel, so that each part of the code can be
decoded separately with less complex decoders
and each decoder can gain from information exchange with the others.
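To make the bound concrete, the following minimal sketch (not from the slides) evaluates the unconstrained-input AWGN limit Eb/N0 ≥ (2^η − 1)/η for a few spectral efficiencies η; the function name and the chosen η values are illustrative.

```python
import math

def shannon_min_ebno_db(eta):
    """Minimum Eb/N0 (dB) for reliable transmission at a spectral
    efficiency of eta bit/s/Hz over AWGN, obtained by rewriting
    C = B*log2(1 + S/N) in terms of Eb/N0."""
    return 10.0 * math.log10((2.0 ** eta - 1.0) / eta)

for eta in (2.0, 1.0, 0.5, 0.1, 0.001):
    print(f"eta = {eta:5.3f} bit/s/Hz  ->  Eb/N0 >= {shannon_min_ebno_db(eta):6.2f} dB")

# As eta -> 0 the bound approaches 10*log10(ln 2) = -1.59 dB, the ultimate
# Shannon limit quoted for very low-rate coding.
```

A gap of 3 dB to such a bound means roughly twice the transmit energy per bit, which is exactly the situation the turbo-code construction set out to improve.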
4
Turbo Codes
• Berrou & Glavieux,
1993 International Conf. on Commun. (ICC)

• Rate ½ performance within 0.5 dB of Shannon capacity.

• Patent held by France Telecom.

• Features:
  • Parallel code concatenation
    • Can also use a serial concatenation
  • Nonuniform interleaving
  • Recursive systematic encoding
    • Usually RSC convolutional codes are used.
    • Can use block codes.
  • Iterative decoding algorithm.
    • Optimal approaches: BCJR/MAP, SISO, log-MAP
    • Suboptimal approaches: max-log-MAP, SOVA
5
Concatenated Coding

6
Error Propagation
• If a decoding error occurs in a codeword, it results in a number of
subsequent data errors, and the next decoder may not be able to
correct them.
• The performance might be improved if these errors were
distributed between a number of separate codewords.
• This can be achieved using an interleaver/de-interleaver.

7
Interleaver/de-interleaver

8
•If the rows of the interleaver are at least as long as the outer
codewords, and the columns at least as long as the inner data
blocks, each data bit of an inner codeword falls into a different
outer codeword.
•Hence, if the outer code is able to correct at least one error, it can
always cope with single decoding errors in the inner code.
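A minimal sketch of the block (row-column) interleaver described above: data are written in row by row and read out column by column, so adjacent bits of one codeword end up in different codewords of the other code. The dimensions and function names are illustrative, not from the slides.

```python
def block_interleave(bits, rows, cols):
    """Write the bits row by row into a rows x cols array, read them out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse: write column by column, read row by row."""
    assert len(bits) == rows * cols
    out = [None] * (rows * cols)
    k = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[k]
            k += 1
    return out

data = list(range(12))                       # stand-in for 12 code bits
tx = block_interleave(data, rows=3, cols=4)
print(tx)                                    # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
assert block_deinterleave(tx, rows=3, cols=4) == data
```

A burst of errors in the interleaved stream is spread over different rows after de-interleaving, which is what lets the outer code cope with single decoding failures of the inner code.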

9
Example

So, how can this be overcome?

10
Iterative decoding

• If the output of the outer decoder were reapplied to the
inner decoder, it would detect that some errors remained,
since the columns would no longer be codewords of the inner
code.
• Iterative decoding: reapply the decoded word not just
to the inner code, but also to the outer, and repeat as
many times as necessary.
• However, this would clearly be in danger of
simply generating further errors. One further ingredient is
required for the iterative decoder.
11
Soft-In, Soft-Out (SISO)
decoding
• The performance of a decoder is significantly enhanced if, in
addition to the ‘hard decision’ made by the demodulator on the
current symbol, some additional ‘soft information’ on the
reliability of that decision is passed to the decoder.
• For example, if the received signal is close to the decision
threshold (say, midway between ‘0’ and ‘1’) in the demodulator,
then that decision has low reliability, and the decoder should be
able to change it when searching for the most probable codeword.
• Making use of this information in a conventional decoder,
called soft-decision decoding, leads to a performance
improvement of around 2 dB in most cases.
12
SISO decoder
• A component decoder that generates ‘soft information’
as well as making use of it.
• Soft information usually takes the form of a log-likelihood
ratio for each data bit:
  • The likelihood ratio is the ratio of the probability that a given bit
is ‘1’ to the probability that it is ‘0’.
  • If we take the logarithm of this, then its sign corresponds to the
most probable hard decision on the bit (if it is positive, ‘1’ is most
likely; if negative, then ‘0’).
  • The absolute magnitude is a measure of our certainty
about this decision.
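The sign/magnitude reading of the LLR can be made concrete in a few lines; the sketch below (variable and function names are mine) converts an LLR back into a bit decision and a probability.

```python
import math

def hard_decision(llr):
    """The sign of the LLR gives the most likely bit (positive -> '1')."""
    return 1 if llr > 0 else 0

def prob_bit_is_one(llr):
    """Recover P(d = 1) from L = log[P(d=1)/P(d=0)] with P(0) + P(1) = 1."""
    return 1.0 / (1.0 + math.exp(-llr))

for llr in (4.0, 0.2, -0.2, -4.0):
    print(f"LLR {llr:+.1f}: bit = {hard_decision(llr)}, P(bit=1) = {prob_bit_is_one(llr):.3f}")

# Large |LLR| -> probability near 0 or 1 (a confident decision);
# small |LLR| -> probability near 0.5 (an unreliable decision).
```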
13
Likelihood Functions
Bayes’ Theorem:

P(d = i | x) = p(x | d = i) · P(d = i) / p(x),   i = 1, …, M

P(d = i | x) → a posteriori probability (APP)
P(d = i) → a priori probability
p(x | d = i) → conditional pdf of the received signal x
p(x) → pdf of the received signal x
14
Maximum Likelihood
Let dk = +1 or −1; AWGN channel
Received statistic → xk
Likelihood functions →
l1 = p(xk | dk = +1)
l2 = p(xk | dk = −1)
Maximum likelihood → hard-decision rule:
choose dk = +1 if l1 > l2
choose dk = −1 if l2 > l1
15
Maximum A Posteriori - MAP
Let dk = +1 or −1; AWGN channel
Received statistic → xk
MAP rule →
decide H1 if P(dk = +1 | xk) > P(dk = −1 | xk), otherwise decide H2
H1 : dk = +1
H2 : dk = −1
16
MAP Likelihood ratio test
MAP likelihood ratio test (decide H1 if ‘>’ holds, H2 if ‘<’ holds):

p(xk | dk = +1) · P(dk = +1)  ≷  p(xk | dk = −1) · P(dk = −1)

p(xk | dk = +1) / p(xk | dk = −1)  ≷  P(dk = −1) / P(dk = +1)

[ p(xk | dk = +1) · P(dk = +1) ] / [ p(xk | dk = −1) · P(dk = −1) ]  ≷  1
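For BPSK (±1) signalling over AWGN the likelihood ratio is p(xk | dk = +1) / p(xk | dk = −1) = exp(2xk/σ²), so the test above collapses to comparing xk against a threshold set by the priors. A small sketch under those assumptions (unit signal amplitude; names are mine):

```python
import math

def map_decision(x, sigma2, p_plus1=0.5):
    """MAP rule for +/-1 BPSK in AWGN: decide +1 when
    p(x|+1)P(+1) > p(x|-1)P(-1), i.e. when x > (sigma2/2)*ln(P(-1)/P(+1))."""
    threshold = 0.5 * sigma2 * math.log((1.0 - p_plus1) / p_plus1)
    return +1 if x > threshold else -1

# With equal priors the threshold is 0 and MAP coincides with ML.
print(map_decision(0.3, sigma2=1.0))                # +1
# A strong prior for -1 moves the threshold to +0.69 and flips the decision.
print(map_decision(0.3, sigma2=1.0, p_plus1=0.2))   # -1
```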
17
Log - Likelihood Ratio : LLR
L(d | x) = log [ P(d = +1 | x) / P(d = −1 | x) ]

         = log [ p(x | d = +1) · P(d = +1) / ( p(x | d = −1) · P(d = −1) ) ]

         = log [ p(x | d = +1) / p(x | d = −1) ] + log [ P(d = +1) / P(d = −1) ]

         = L(x | d) + L(d)
18
Log - Likelihood Ratio : LLR
L(d | x) = L(x | d) + L(d)

L’(d^) = Lc(x) + L(d)

Soft LLR output for a systematic code:
L(d^) = L’(d^) + Le(d^)
where L’(d^) is the LLR of the data at the demodulator output and
Le(d^) is the extrinsic LLR: knowledge gained from the decoding process.

L(d^) = Lc(x) + L(d) + Le(d^)
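For the ±1 (BPSK) AWGN model used in the rest of the deck, the channel term evaluates to Lc(x) = 2x/σ², so the soft output is literally the sum of a channel value, an a priori value and an extrinsic value. A minimal numeric sketch of that decomposition (unit signal amplitude assumed; names are mine):

```python
def channel_llr(x, sigma2):
    """Lc(x) = log[p(x|d=+1)/p(x|d=-1)] for +/-1 BPSK in AWGN:
    the Gaussian exponents give ((x+1)^2 - (x-1)^2)/(2*sigma2) = 2x/sigma2."""
    return 2.0 * x / sigma2

def soft_output(x, sigma2, l_apriori, l_extrinsic):
    """L(d^) = Lc(x) + L(d) + Le(d^)."""
    return channel_llr(x, sigma2) + l_apriori + l_extrinsic

# A weak positive channel value can be overruled by negative extrinsic information.
print(soft_output(x=0.1, sigma2=1.0, l_apriori=0.0, l_extrinsic=-1.5))   # -1.3 -> decide '0'
```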
19
L(d^) = Lc(x) + L(d) + Le(d^)

[SISO decoder block diagram: the a priori value L(d) and the channel
value Lc(x) enter the SISO decoder, combined as L’(d^) = Lc(x) + L(d);
the decoder outputs the extrinsic value Le(d^) and the a posteriori LLR
L(d^) = L’(d^) + Le(d^).]
20
Iterative decoding algorithm for the
product code
1. Set the a priori LLR L(d) = 0.
2. Decode horizontally and obtain
   Leh(d^) = L(d^) − Lc(x) − L(d)
3. Set L(d) = Leh(d^) for vertical decoding.
4. Decode vertically and obtain
   Lev(d^) = L(d^) − Lc(x) − L(d)
5. Set L(d) = Lev(d^) for horizontal decoding.
6. Repeat steps 2 to 5 as needed; the final soft output is
   L(d^) = Lc(x) + Leh(d^) + Lev(d^)
21
Iterative Decoder

22
Decoder Architectures
• Decoders must operate much faster than the rate at which incoming
data arrives, so that several iterations can be accommodated in the time
between the arrivals of received data blocks.
• Alternatively, a pipeline structure may be used, in which data
and extrinsic information are passed to a new set of decoders while the
first set processes the next data block.
• At some point the decoder may be deemed to have converged to the
optimum decoded word, at which point the combination of extrinsic and
intrinsic information can be used to find the decoded data.
• Usually a fixed number of iterations is used (between 4 and 10,
depending on the type of code and its length), but it is also possible to
detect convergence and terminate the iterations at that point.
23
Log-Likelihood Algebra
Sum of two LLRs (denoted ⊞):

L(d1) ⊞ L(d2) ≜ L(d1 ⊕ d2)

              = log [ ( exp[L(d1)] + exp[L(d2)] ) / ( 1 + exp[L(d1)] · exp[L(d2)] ) ]

              ≈ (−1) · sgn[L(d1)] · sgn[L(d2)] · min( |L(d1)| , |L(d2)| )

L(d) ⊞ ∞ = −L(d)
L(d) ⊞ 0 = 0
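A small sketch of this log-likelihood addition, with both the exact expression and the sign-min approximation used in the worked example on the following slides (function names are mine):

```python
import math

def llr_add_exact(l1, l2):
    """L(d1 [+] d2) = log[(e^L1 + e^L2) / (1 + e^L1 * e^L2)]."""
    return math.log((math.exp(l1) + math.exp(l2)) / (1.0 + math.exp(l1) * math.exp(l2)))

def llr_add_approx(l1, l2):
    """Approximation: (-1) * sgn(L1) * sgn(L2) * min(|L1|, |L2|)."""
    sgn = lambda v: 1.0 if v >= 0.0 else -1.0
    return -sgn(l1) * sgn(l2) * min(abs(l1), abs(l2))

print(llr_add_exact(0.1, 2.5), llr_add_approx(0.1, 2.5))   # ~ -0.085 vs -0.1
print(llr_add_exact(1.5, 0.0))                              # 0.0   (L(d) [+] 0 = 0)
print(llr_add_approx(1.5, 50.0))                            # -1.5  (L(d) [+] "infinity" -> -L(d))
```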
24
Iterative decoding example
2D single-parity code: di ⊕ dj = pij

Data and parity bits:
d1 = 1   d2 = 0   p12 = 1
d3 = 0   d4 = 1   p34 = 1
p13 = 1  p24 = 1

Received (noisy) values:
x1 = 0.75   x2 = 0.05   x12 = 1.25
x3 = 0.10   x4 = 0.15   x34 = 1.0
x13 = 3.0   x24 = 0.5
25
Iterative decoding example
Estimate the channel LLRs:
Lc(xk) = 2 xk / σ²,  assuming σ² = 1:

Lc(x1) = 1.5   Lc(x2) = 0.1   Lc(x12) = 2.5
Lc(x3) = 0.2   Lc(x4) = 0.3   Lc(x34) = 2.0
Lc(x13) = 6.0  Lc(x24) = 1.0
26
Iterative decoding example
• Compute the horizontal extrinsic LLRs (j is the other data bit in the row,
  xij the corresponding parity):

Leh(d^i) = [Lc(xj) + L(dj)] ⊞ Lc(xij)

Leh(d^1) = [Lc(x2) + L(d2)] ⊞ Lc(x12) = new L(d1)
Leh(d^2) = [Lc(x1) + L(d1)] ⊞ Lc(x12) = new L(d2)
Leh(d^3) = [Lc(x4) + L(d4)] ⊞ Lc(x34) = new L(d3)
Leh(d^4) = [Lc(x3) + L(d3)] ⊞ Lc(x34) = new L(d4)
27
Iterative decoding example
Lev(d^1) = [Lc(x3) + L(d3)] ⊞ Lc(x13) = new L(d1)
Lev(d^2) = [Lc(x4) + L(d4)] ⊞ Lc(x24) = new L(d2)
Lev(d^3) = [Lc(x1) + L(d1)] ⊞ Lc(x13) = new L(d3)
Lev(d^4) = [Lc(x2) + L(d2)] ⊞ Lc(x24) = new L(d4)

After the final iteration the LLR used for the decision is

L(d^i) = Lc(xi) + Leh(d^i) + Lev(d^i)
28
First Pass output
Channel LLRs:                     Horizontal extrinsic:
Lc(x1) = 1.5   Lc(x2) = 0.1      Leh(d1) = −0.1   Leh(d2) = −1.5
Lc(x3) = 0.2   Lc(x4) = 0.3      Leh(d3) = −0.3   Leh(d4) = −0.2

Vertical extrinsic:               Soft outputs after the first pass:
Lev(d1) = 0.1   Lev(d2) = −0.1   L(d1) = 1.5      L(d2) = −1.5
Lev(d3) = −1.4  Lev(d4) = 1.0    L(d3) = −1.5     L(d4) = 1.1
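The first-pass numbers above can be reproduced in a few lines. The sketch below uses the sign-min approximation of the LLR addition from slide 24 and the channel values Lc = 2x/σ² from slide 26; the data structures and names are mine, and the printed values match the table up to floating-point rounding.

```python
def boxplus(l1, l2):
    """Sign-min approximation of the LLR addition: (-1)*sgn*sgn*min(|.|,|.|)."""
    sgn = lambda v: 1.0 if v >= 0.0 else -1.0
    return -sgn(l1) * sgn(l2) * min(abs(l1), abs(l2))

sigma2 = 1.0
x  = {1: 0.75, 2: 0.05, 3: 0.10, 4: 0.15}                      # received data values
xp = {(1, 2): 1.25, (3, 4): 1.0, (1, 3): 3.0, (2, 4): 0.5}     # received parity values
Lc  = {i: 2.0 * v / sigma2 for i, v in x.items()}
Lcp = {ij: 2.0 * v / sigma2 for ij, v in xp.items()}

La = {i: 0.0 for i in x}                                       # step 1: a priori LLRs = 0
# step 2: horizontal pass (row partners 1-2 and 3-4)
rows = [(1, 2, (1, 2)), (2, 1, (1, 2)), (3, 4, (3, 4)), (4, 3, (3, 4))]
Leh = {i: boxplus(Lc[j] + La[j], Lcp[p]) for i, j, p in rows}
# steps 3-4: vertical pass (column partners 1-3 and 2-4), Leh acting as the new a priori
cols = [(1, 3, (1, 3)), (2, 4, (2, 4)), (3, 1, (1, 3)), (4, 2, (2, 4))]
Lev = {i: boxplus(Lc[j] + Leh[j], Lcp[p]) for i, j, p in cols}

L = {i: Lc[i] + Leh[i] + Lev[i] for i in x}                    # soft output after one pass
print(Leh)   # ~ {1: -0.1, 2: -1.5, 3: -0.3, 4: -0.2}
print(Lev)   # ~ {1: 0.1, 2: -0.1, 3: -1.4, 4: 1.0}
print(L)     # ~ {1: 1.5, 2: -1.5, 3: -1.5, 4: 1.1}  -> decisions 1, 0, 0, 1
```

Even after a single horizontal/vertical pass the signs of L(d) already give the transmitted pattern 1 0 0 1, and further iterations would be expected to improve the reliabilities further.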
29
Parallel Concatenation Codes
• Component codes are convolutional codes
  • Recursive systematic codes (RSC)
• Should have maximum effective free distance
  • Large Eb/N0 → maximize the minimum-weight codewords
  • Small Eb/N0 → optimize the weight distribution of the codewords
• Interleaving to avoid low-weight codewords
30
Non - Systematic Codes - NSC
[Encoder diagram: the input dk feeds a shift register dk, dk−1, dk−2;
two modulo-2 adders form the outputs uk and vk.]

uk = Σ_{i=0}^{L−1} g1i · dk−i  (mod 2),   G1 = [ 1 1 1 ]

vk = Σ_{i=0}^{L−1} g2i · dk−i  (mod 2),   G2 = [ 1 0 1 ]
31
Recursive Systematic Codes - RSC
[Encoder diagram: the input dk is added (mod 2) to the fed-back register
contents to form ak; the register holds ak−1, ak−2; the outputs are the
systematic bit uk = dk and the parity bit vk.]

ak = dk + Σ_{i=1}^{L−1} gi’ · ak−i  (mod 2),
where gi’ = g1i if uk = dk, and gi’ = g2i if vk = dk.
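A sketch of this rate-½ RSC encoder, assuming the usual arrangement in which G1 = [1 1 1] is used as the feedback polynomial and G2 = [1 0 1] generates the parity, so that uk = dk is the systematic output (function names are mine):

```python
def rsc_encode(d_bits):
    """Memory-2 RSC: feedback a_k = d_k ^ a_{k-1} ^ a_{k-2}  (G1 = 111),
    systematic u_k = d_k, parity v_k = a_k ^ a_{k-2}          (G2 = 101)."""
    a1 = a2 = 0                       # register contents a_{k-1}, a_{k-2}
    u, v = [], []
    for d in d_bits:
        a = d ^ a1 ^ a2               # recursive (fed-back) bit
        u.append(d)                   # systematic bit
        v.append(a ^ a2)              # parity bit
        a1, a2 = a, a1                # shift the register
    return u, v

print(rsc_encode([1, 0, 0, 0, 0]))    # ([1, 0, 0, 0, 0], [1, 1, 1, 0, 1])
```

Because of the feedback, a single ‘1’ at the input keeps the register active indefinitely, so the parity stream does not die out the way it does for a non-recursive encoder; this is what makes most input sequences produce high-weight codewords.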
32
Trellis for NSC & RSC
[Trellis diagrams for the NSC (left) and the RSC (right): four states
a = 00, b = 01, c = 10, d = 11; each branch is labelled with its output
pair uk vk (00, 11, 10 or 01).]
33
Concatenation of RSC Codes
[Encoder diagram: the data dk drive one RSC encoder directly and a second,
identical RSC encoder through an interleaver; the transmitted outputs are
the systematic stream uk and the two parity streams v1k and v2k.]

Input sequences such as
{ 0 0 … 0 1 1 1 0 0 … 0 0 } and { 0 0 … 0 0 1 0 0 1 0 … 0 0 }
produce low-weight codewords in the component coders.
34
Feedback Decoder
Joint probability → λk(i,m) = P{ dk = i, Sk = m | R1N }

where Sk is the state at time k and R1N is the received sequence
from time 1 to N.

APP → P{ dk = i | R1N } = Σm λk(i,m),   i = 0, 1 for binary data

Likelihood ratio → Λ(dk) = Σm λk(1,m) / Σm λk(0,m)

Log-likelihood ratio → L(dk) = Log [ Σm λk(1,m) / Σm λk(0,m) ]

35
Feedback Decoder
MAP rule → d^k = 1 if L(d^k) > 0
           d^k = 0 if L(d^k) < 0

L(d^k) = Lc(xk) + L(dk) + Le(d^k)

Decoder 1:  L1(d^k) = Lc(xk) + Le1(d^k)
Decoder 2:  L2(d^k) = f{ L1(d^n) }n≠k + Le2(d^k)
36
Feedback Decoder
[Block diagram of the feedback (turbo) decoder: DECODER 1 operates on xk
and the first parity y1k and produces L1(d^k); this is interleaved and
passed to DECODER 2, which operates on the second parity y2k and produces
L2(d^k). The extrinsic part Le2(d^k) is de-interleaved and fed back to
DECODER 1 as a priori information, and the de-interleaved L2(d^k) yields
the decision d^k.]
37
Modified MAP Vs. SOVA
• SOVA →
  • Viterbi algorithm acting on soft inputs over the forward
    path of the trellis for a block of bits
  • Add BM to SM → compare → select the ML path
• Modified MAP →
  • Viterbi algorithm acting on soft inputs over the forward
    and reverse paths of the trellis for a block of bits
  • Multiply BM & SM → sum in both directions → best
    overall statistic
38
MAP Decoding Example
[Encoder and trellis for the MAP decoding example: the input dk feeds a
shift register dk, dk−1, dk−2 producing the systematic output uk and the
parity vk; the four trellis states are a = 00, b = 10, c = 01, d = 11, and
each branch is labelled with its output pair uk vk.]
39
MAP Decoding Example
• d = { 1, 0, 0 }
• u = { 1, 0, 0 } → x = { 1.0, 0.5, −0.6 }
• v = { 1, 0, 1 } → y = { 0.8, 0.2, 1.2 }
• A priori probabilities → πk(1) = πk(0) = 0.5

Branch metric → δk(i,m) = P{ dk = i, Sk = m, Rk }

                        = P{ Rk | dk = i, Sk = m } · P{ Sk = m | dk = i } · P{ dk = i }

P{ Sk = m | dk = i } = 1/2^ν = 1/4 (ν = 2 memory elements);   P{ dk = i } = 1/2

δk(i,m) = P{ xk | dk = i, Sk = m } · P{ yk | dk = i, Sk = m } · { πk(i) / 2^ν }
40
MAP Decoding Example
δk(i,m) = P{ xk | dk = i, Sk = m } · P{ yk | dk = i, Sk = m } · { πk(i) / 2^ν }

For the AWGN channel:

δk(i,m) = { πk(i) / 2^ν } · (1/(σ√(2π))) exp{ −(xk − uk(i))² / (2σ²) } dxk
                          · (1/(σ√(2π))) exp{ −(yk − vk(i,m))² / (2σ²) } dyk

δk(i,m) = { Ak πk(i) } exp{ ( xk · uk(i) + yk · vk(i,m) ) / σ² }

Assuming Ak = 1 and σ² = 1:

δk(i,m) = 0.5 exp{ xk · uk(i) + yk · vk(i,m) }
41
Subsequent steps
• Calculate the branch metric

  δk(i,m) = 0.5 exp{ xk · uk(i) + yk · vk(i,m) }

• Calculate the forward state metric

  αk+1(m) = Σ_{j=0}^{1} δk(j, b(j,m)) · αk(b(j,m))

• Calculate the reverse state metric

  βk(m) = Σ_{j=0}^{1} δk(j,m) · βk+1(f(j,m))

where f(j,m) is the next state reached from state m with input j, and
b(j,m) is the previous state that reaches state m under input j.
42
Subsequent steps
• Calculate the LLR for all times k

  L(dk) = Log [ Σm αk(m) δk(1,m) βk+1(f(1,m)) / Σm αk(m) δk(0,m) βk+1(f(0,m)) ]

• Hard decision based on the LLR
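The three steps above can be put together into a small probability-domain MAP (BCJR-style) decoder for the memory-2 RSC sketched on slide 32. This is a generic sketch rather than a transcription of the numerical example: the trellis construction, the per-step normalisation and all names are mine, and the trellis is left unterminated (uniform β at the end).

```python
import math

STATES = [(0, 0), (0, 1), (1, 0), (1, 1)]        # (a_{k-1}, a_{k-2})

def trellis_branch(state, d):
    """Return (next_state, u, v) for input bit d taken from 'state'."""
    a1, a2 = state
    a = d ^ a1 ^ a2                              # feedback bit
    return (a, a1), d, a ^ a2                    # next state, systematic, parity

def map_decode(x, y, sigma2=1.0, La=None):
    """A posteriori LLRs of the data bits from noisy systematic (x) and parity (y) values."""
    N = len(x)
    La = La if La is not None else [0.0] * N     # a priori LLRs (0 <=> P(1) = P(0) = 0.5)
    nxt = {(m, i): STATES.index(trellis_branch(s, i)[0])
           for m, s in enumerate(STATES) for i in (0, 1)}
    # branch metrics  delta_k(i, m) ~ P(d_k = i) * exp{(x*u + y*v) / sigma2}
    delta = [[[0.0] * len(STATES) for _ in (0, 1)] for _ in range(N)]
    for k in range(N):
        p1 = 1.0 / (1.0 + math.exp(-La[k]))
        for m, s in enumerate(STATES):
            for i in (0, 1):
                _, u, v = trellis_branch(s, i)
                prior = p1 if i == 1 else 1.0 - p1
                delta[k][i][m] = prior * math.exp(
                    (x[k] * (2 * u - 1) + y[k] * (2 * v - 1)) / sigma2)
    # forward metrics alpha (encoder assumed to start in state 0), normalised per step
    alpha = [[0.0] * len(STATES) for _ in range(N + 1)]
    alpha[0][0] = 1.0
    for k in range(N):
        for m in range(len(STATES)):
            for i in (0, 1):
                alpha[k + 1][nxt[(m, i)]] += alpha[k][m] * delta[k][i][m]
        tot = sum(alpha[k + 1])
        alpha[k + 1] = [a / tot for a in alpha[k + 1]]
    # reverse metrics beta, normalised per step
    beta = [[1.0 / len(STATES)] * len(STATES) for _ in range(N + 1)]
    for k in range(N - 1, -1, -1):
        beta[k] = [sum(delta[k][i][m] * beta[k + 1][nxt[(m, i)]] for i in (0, 1))
                   for m in range(len(STATES))]
        tot = sum(beta[k])
        beta[k] = [b / tot for b in beta[k]]
    # LLR(d_k) = log[ sum_m alpha*delta(1)*beta' / sum_m alpha*delta(0)*beta' ]
    llr = []
    for k in range(N):
        num = sum(alpha[k][m] * delta[k][1][m] * beta[k + 1][nxt[(m, 1)]]
                  for m in range(len(STATES)))
        den = sum(alpha[k][m] * delta[k][0][m] * beta[k + 1][nxt[(m, 0)]]
                  for m in range(len(STATES)))
        llr.append(math.log(num / den))
    return llr

# Quick check: encode a short message, map bits to +/-1, add a small offset as "noise".
msg, s, u, v = [1, 0, 1, 1, 0], (0, 0), [], []
for d in msg:
    s, ub, vb = trellis_branch(s, d)
    u.append(ub)
    v.append(vb)
x = [2 * b - 1 + 0.3 for b in u]
y = [2 * b - 1 - 0.2 for b in v]
print([1 if l > 0 else 0 for l in map_decode(x, y)])   # recovers [1, 0, 1, 1, 0]
```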
43
Iterative decoding steps
Likelihood ratio → Λ(dk)

  = [ πk(1) Σm αk(m) exp{ xk·uk(1) + yk·vk(1,m) } βk+1(f(1,m)) ]
    / [ πk(0) Σm αk(m) exp{ xk·uk(0) + yk·vk(0,m) } βk+1(f(0,m)) ]

  = { πk } · exp{ 2xk } · [ Σm αk(m) exp{ yk·vk(1,m) } βk+1(f(1,m)) ]
                          / [ Σm αk(m) exp{ yk·vk(0,m) } βk+1(f(0,m)) ]

  = { πk } · exp{ 2xk } · { λk^e }

(πk = πk(1)/πk(0) is the a priori ratio; the factor exp{2xk} follows from
uk(1) = +1, uk(0) = −1 with σ² = 1)

LLR → L(d^k) = L(dk) + 2xk + Log [ λk^e ]
44
Iterative decoding
• For the second iteration, the a priori probability πk(i) is replaced by the
  extrinsic estimate λk^e(i) obtained in the previous iteration:

  δk(i,m) = λk^e(i) · exp{ xk·uk(i) + yk·vk(i,m) }

• Calculate the LLR for all times k

  L(dk) = Log [ Σm αk(m) δk(1,m) βk+1(f(1,m)) / Σm αk(m) δk(0,m) βk+1(f(0,m)) ]

• Hard decision based on the LLR after multiple iterations
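The decomposition L(d^k) = L(dk) + 2xk + Log[λk^e] is what lets the decoder strip off what the next decoder already knows and pass on only the extrinsic part. A two-line sketch of that bookkeeping (σ² = 1 as above; names are mine):

```python
def extrinsic(l_total, x_k, l_apriori, sigma2=1.0):
    """Remove the channel term 2x/sigma^2 and the a priori term from the
    a posteriori LLR, leaving only the extrinsic information to hand on."""
    return l_total - 2.0 * x_k / sigma2 - l_apriori

# e.g. an a posteriori LLR of +2.4 on a symbol with x_k = 0.5 and zero prior:
print(extrinsic(2.4, 0.5, 0.0))    # 1.4 -> becomes the a priori input of the other decoder
```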
45
Rayleigh fading channel

48
Rayleigh fading channel

49
Rayleigh fading channel model

50
Channel measurement based LLR

• When no CSI (channel state information) is available at the decoder, the
likelihood can be approximated by a Gaussian distribution with mean
x·E[a] = 0.8862·x and variance σ², where σ² is determined by the additive
noise.

• If the decoder has knowledge of the fading amplitude a for each symbol,
we can apply the Gaussian distribution with mean a·x and variance σ².
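A small sketch of the two cases; the 0.8862 factor is E[a] = √π / 2, the mean of a Rayleigh amplitude normalised to E[a²] = 1, and the function names are mine.

```python
import math

E_A = math.sqrt(math.pi) / 2.0       # 0.8862..., mean of a unit-power Rayleigh amplitude

def llr_with_csi(x, a, sigma2):
    """Decoder knows the fading amplitude a of this symbol: the mean of x is a*d."""
    return 2.0 * a * x / sigma2

def llr_no_csi(x, sigma2):
    """No channel state information: replace a by its mean E[a] = 0.8862."""
    return 2.0 * E_A * x / sigma2

print(round(E_A, 4))                                             # 0.8862
print(llr_with_csi(0.6, a=1.3, sigma2=1.0), llr_no_csi(0.6, sigma2=1.0))
```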
51
Channel measurement based LLR

52
Performance Comparison

53
Performance Comparison

54
Performance Comparison

55
Performance Comparison
• Effect of block size
• Effect of channel fading
• Effect of channel correlation
• Importance of interleaver
56
