
Outline Channel Coding I

Introduction
Declarations and definitions, general principle of channel coding
Structure of digital communication systems

One Lesson of Information Theory


Probabilities, measure of information
SHANNON's channel capacity for different channels

Linear block codes


Properties of block codes and general decoding principles
Bounds on error rate performance
Representation of block codes with generator and parity check matrices
Cyclic block codes (Reed-Solomon and BCH codes)

Convolutional Codes
Structure, algebraic and graphical presentation
Distance properties and error rate performance
Optimal decoding with Viterbi algorithm

Basics of Convolutional Codes


[Figure: generic convolutional encoder, a shift register with Lc stages of k bits each and n modulo-2 adders forming the code bits]
Shift register structure with Lc·k memory elements
Memory leads to statistical dependence of successive code words
In each cycle, k bits are shifted into the register
Each bit affects the output word Lc times
Lc is called the constraint length; the memory depth is m = Lc - 1
Coded symbols are calculated by modulo-2 additions of the memory contents → generators
A code word contains n bits → code rate Rc = k/n
Our further investigation is restricted to codes with rate Rc = 1/n!

Structure and Encoding


Example: (2,1,3)-convolutional code with generators g1 = 7₈ and g2 = 5₈
Code is non-systematic and non-recursive (NSC code)
Rc = 1/2, Lc = 3, m = 2

[Figure: shift register encoder; the input stream is shifted through u(l), u(l-1), u(l-2), with taps g1,0 = g1,1 = g1,2 = 1 and g2,0 = g2,2 = 1, g2,1 = 0; each step produces the code bits x1(l) x2(l)]

u(l)   state   following state   output
1      00      10                11
0      10      01                10
0      01      00                11
1      00      10                11
1      10      11                01
0      11      01                01
0      01      00                11
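The encoding in the table above can be reproduced in a few lines; the following is a minimal Python sketch (the function name and interface are our own illustration, not part of the lecture):

```python
# Minimal sketch of a rate-1/n NSC encoder; illustrative only.
def nsc_encode(u, generators=((1, 1, 1), (1, 0, 1))):
    """Encode the bit list u; generators hold the taps (g_v0, ..., g_vm)
    of each output stream, here g1 = 7 (octal) and g2 = 5 (octal)."""
    m = len(generators[0]) - 1        # memory depth m = Lc - 1
    register = [0] * m                # u(l-1), ..., u(l-m)
    codewords = []
    for bit in u:
        taps = [bit] + register       # u(l), u(l-1), ..., u(l-m)
        # each code bit is a modulo-2 sum of the tapped register contents
        codewords.append(tuple(sum(g * t for g, t in zip(gen, taps)) % 2
                               for gen in generators))
        register = [bit] + register[:-1]
    return codewords

# reproduces the table: u = 1 0 0 1 1 0 0  ->  11 10 11 11 01 01 11
print(nsc_encode([1, 0, 0, 1, 1, 0, 0]))
```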

Equivalence of block codes and convolutional codes


Convolutional codes:
k information bits are mapped onto a code word x consisting of n bits
Code words x are interdependent due to the memory

Block codes:
Code words x are independent

→ Block codes are convolutional codes without memory

Only sequences of finite length are considered in practice
A finite convolutionally encoded sequence can be viewed as a single code word generated by a block code

→ Convolutional codes are a special case of block codes


Properties of Convolutional Codes


Only a small number of simple convolutional codes are of practical interest
Convolutional codes are not constructed by algebraic methods but by computer search (with the advantage of a simple mathematical description)
Convolutional decoders can easily process soft-decision input and compute soft-decision output (only hard-decision decoding has been considered for block codes)
Similar to block codes, systematic and non-systematic encoders are distinguished for convolutional codes (mostly non-systematic convolutional codes are of practical interest)


Algebraic Description (1)


Description by generators g_j (octal notation)
Example: code with Lc = 3 and Rc = 1/2

$\mathbf{g}_1 = [g_{1,0}\; g_{1,1}\; g_{1,2}] = [1\,1\,1] \;\hat{=}\; 7_8$
$\mathbf{g}_2 = [g_{2,0}\; g_{2,1}\; g_{2,2}] = [1\,0\,1] \;\hat{=}\; 5_8$

[Figure: encoder with taps g1,0 = g1,1 = g1,2 = 1 and g2,0 = g2,2 = 1, g2,1 = 0]

Encoding by discrete convolution:
$x_1 = u * g_1, \quad x_2 = u * g_2$
Generally, for $\nu = 1, \ldots, n$:
$x_\nu(l) = \sum_{i=0}^{m} g_{\nu,i}\; u(l-i) \mod 2$

Algebraic Description (2)


z-transform: $X(z) = \sum_{i=0}^{\infty} x_i\, z^{-i}$

D-transform: $X(D) = \sum_{i=0}^{\infty} x_i\, D^i$

Generator polynomials: $G_\nu(D) = \sum_{i=0}^{m} g_{\nu,i}\, D^i$

Example:
$G_1(D) = g_{1,0} + g_{1,1} D + g_{1,2} D^2 = 1 + D + D^2$
$G_2(D) = g_{2,0} + g_{2,1} D + g_{2,2} D^2 = 1 + D^2$

Encoding (polynomial multiplication): $X_\nu(D) = U(D) \cdot G_\nu(D)$

Encoded sequence: $\mathbf{X}(D) = [X_1(D)\;\; X_2(D)\;\; \cdots\;\; X_n(D)] = U(D) \cdot \mathbf{G}(D)$
with generator matrix $\mathbf{G}(D) = [G_1(D)\;\; G_2(D)\;\; \cdots\;\; G_n(D)]$

Code space: $\Gamma = \{\, U(D) \cdot \mathbf{G}(D) \mid U(D),\; U_i \in \mathrm{GF}(2) \,\}$



Algebraic Description (3)


Example: u = [1 0 0 1 1] ⇒ $U(D) = 1 + D^3 + D^4$

Generator polynomials: $G_1(D) = 1 + D + D^2$, $G_2(D) = 1 + D^2$

[Figure: encoder shift register u(l), u(l-1), u(l-2) with outputs x1(l) and x2(l)]

Encoding:
$\mathbf{X}(D) = [X_1(D)\;\; X_2(D)] = [U(D)\,G_1(D)\;\; U(D)\,G_2(D)]$
$= [(1 + D^3 + D^4)(1 + D + D^2)\;\; (1 + D^3 + D^4)(1 + D^2)]$
$= [1 + D + D^2 + D^3 + D^6\;\; 1 + D^2 + D^3 + D^4 + D^5 + D^6]$
⇒ x = [11 10 11 11 01 01 11]
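The same result can be checked numerically; a small sketch assuming plain Python (the helper gf2_poly_mul is ours, not from the lecture):

```python
# Sketch: encoding as polynomial multiplication over GF(2). Polynomials are
# coefficient lists, index i <-> D^i.
def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj     # modulo-2 accumulation
    return out

U  = [1, 0, 0, 1, 1]        # U(D)  = 1 + D^3 + D^4
G1 = [1, 1, 1]              # G1(D) = 1 + D + D^2
G2 = [1, 0, 1]              # G2(D) = 1 + D^2

X1 = gf2_poly_mul(U, G1)    # [1,1,1,1,0,0,1] = 1 + D + D^2 + D^3 + D^6
X2 = gf2_poly_mul(U, G2)    # [1,0,1,1,1,1,1] = 1 + D^2 + D^3 + D^4 + D^5 + D^6
print(list(zip(X1, X2)))    # interleaved: 11 10 11 11 01 01 11
```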

Graphical Presentation in the Finite State Diagram


A convolutional encoder can be interpreted as a Mealy state machine:
it is described by state transitions (depending on the current state and the input) and the corresponding output signal
Example: (2,1,3)-NSC code with generators g1 = 7₈ and g2 = 5₈

[Figure: state diagram with states 00, 10, 01, 11; edges are labeled input/output, e.g. the self-loop 0/00 and the transition 1/11 leaving state 00]
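The full transition table of this Mealy machine can be enumerated mechanically; a small illustrative sketch (naming is ours):

```python
# Sketch: enumerate the state machine of the (2,1,3) example encoder.
# States are the register contents (u(l-1), u(l-2)).
from itertools import product

G = ((1, 1, 1), (1, 0, 1))   # g1 = 7 (octal), g2 = 5 (octal)

for state, bit in product(tuple(product((0, 1), repeat=2)), (0, 1)):
    taps = (bit,) + state                       # u(l), u(l-1), u(l-2)
    out = [sum(g * t for g, t in zip(gen, taps)) % 2 for gen in G]
    nxt = (bit,) + state[:-1]                   # shift register update
    print(f"state {state} --{bit}/{out[0]}{out[1]}--> {nxt}")
```

Running this reproduces exactly the eight labeled edges of the state diagram above.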


Graphical Presentation in the Trellis Diagram


The finite state diagram does not contain any timing information
Expanding the state diagram by a temporal component results in the trellis diagram
Example: (2,1,3)-NSC code with generators g1 = 7₈ and g2 = 5₈

[Figure: trellis segments for l = 0, …, 4 with states 00, 10, 01, 11; the branches carry the input/output labels of the state diagram, e.g. 0/00 and 1/11 from state 00]

Classification of Convolutional Codes


Non-recursive non-systematic convolutional encoders (NSC encoders)
Non-systematic encoders
No separation between information bits and parity bits within the code word
Higher performance than systematic encoders

Systematic convolutional encoders
Code word explicitly contains the information bits
Not relevant in practice due to lower performance
Exception: recursive systematic convolutional encoders for Turbo codes and Trellis Coded Modulation (TCM)

Recursive systematic convolutional encoders (RSC encoders)


Recursive Convolutional Encoders (RSC-Encoders) (1)


The consecutive state depends on
the current state,
the encoder input,
and the feedback structure of the encoder

Recursive encoders of practical interest are mostly systematic and can be derived from NSC codes
Starting from an NSC encoder, the generator polynomials are converted to obtain a systematic but recursive encoder
The generator polynomials of the NSC encoder are replaced by
$G_1(D) \;\rightarrow\; \tilde{G}_1(D) = 1 \qquad G_2(D) \;\rightarrow\; \tilde{G}_2(D) = \frac{G_2(D)}{G_1(D)}$

Recursive Convolutional Encoders (RSC-Encoders) (2)


The output of the systematic RSC encoder is given by
$X_1(D) = U(D)\,\tilde{G}_1(D) = U(D)$
$X_2(D) = U(D)\,\tilde{G}_2(D) = \frac{U(D)}{G_1(D)}\, G_2(D) = A(D) \cdot G_2(D)$
with
$A(D) = \frac{U(D)}{G_1(D)} \;\Leftrightarrow\; A(D) \sum_{i=0}^{m} g_{1,i}\, D^i = U(D)$

Using the delay operator D and $g_{1,0} = 1$, it follows that
$a(l) + \sum_{i=1}^{m} g_{1,i}\, a(l-i) = u(l) \;\Leftrightarrow\; a(l) = u(l) + \sum_{i=1}^{m} g_{1,i}\, a(l-i)$

a(l) can be regarded as the current content of the register; it depends on the current input u(l) and the old register contents a(l-1), …, a(l-m)
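The recursion for a(l) translates directly into code; a minimal sketch assuming our own helper names, with the feedback taps taken from g1 and the feed-forward taps from g2:

```python
# Sketch of a rate-1/2 systematic RSC encoder; illustrative only.
def rsc_encode(u, g_fb=(1, 1, 1), g_ff=(1, 0, 1)):
    """x1(l) = u(l) (systematic), x2(l) from the filtered state a(l)."""
    m = len(g_fb) - 1
    a_reg = [0] * m                          # a(l-1), ..., a(l-m)
    codewords = []
    for bit in u:
        # a(l) = u(l) + sum_{i=1..m} g_fb[i] * a(l-i)   (mod 2)
        a = (bit + sum(g_fb[i] * a_reg[i - 1] for i in range(1, m + 1))) % 2
        taps = [a] + a_reg                   # a(l), a(l-1), ..., a(l-m)
        x2 = sum(g * t for g, t in zip(g_ff, taps)) % 2
        codewords.append((bit, x2))          # systematic: x1(l) = u(l)
        a_reg = [a] + a_reg[:-1]
    return codewords

print(rsc_encode([1, 0, 0, 1, 1, 0, 0]))
```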

Recursive Convolutional Encoders (RSC-Encoders) (3)


Example: (2,1,3)-RSC encoder with generators g1 = 7₈ (used for the feedback) and g2 = 5₈
$X_1(D) = U(D)$
$X_2(D) = A(D)\,(1 + D^2)$ with $A(D) := \frac{U(D)}{1 + D + D^2}$
$a(l) = u(l) + a(l-1) + a(l-2)$

[Figure: RSC encoder with feedback taps g1,1 = g1,2 = 1 producing x1(l), x2(l), and its state diagram over the states 00, 10, 01, 11]

Recursive Convolutional Encoders (RSC-Encoders) (4)


Now the other polynomial is used for the feedback
Example: (2,1,3)-RSC encoder with generators g1 = 7₈ and g2 = 5₈ (used for the feedback)
$X_1(D) = U(D)$
$X_2(D) = A(D)\,(1 + D + D^2)$ with $A(D) := \frac{U(D)}{1 + D^2}$
$a(l) = u(l) + a(l-2)$

[Figure: RSC encoder with feedback polynomial $1 + D^2$ and its state diagram over the states 00, 10, 01, 11]

Catastrophic Convolutional Codes


Catastrophic convolutional codes can produce code sequences of infinite length and finite weight that do not return to the all-zero path
They may produce an infinite number of decoding errors for a finite number of transmission errors
Example: (2,1,3)-NSC encoder with generators g1 = 5₈ and g2 = 3₈

[Figure: encoder and state diagram; state 11 has the self-loop 1/00, so an input of infinite weight can produce only a finite output weight]

Truncated Convolutional Codes


Only sequences of finite length are considered in practice
For an information sequence u with arbitrary tail, the trellis can end in any state
→ the last state is not known by the decoder
→ the last bits are decoded with lower reliability → worse performance

Interpretation as a block code: description by the generator matrix
$\mathbf{G} = \begin{pmatrix} \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & & \\ & \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & \\ & & \ddots & \ddots & & \ddots \end{pmatrix} \quad\text{with}\quad \mathbf{G}_i = [\,g_{1,i}\;\; g_{2,i}\;\; \cdots\;\; g_{n,i}\,]$

Terminated Convolutional Codes


Appending tail bits to the information sequence u
→ the encoder stops in a predefined state (usually state 0)
→ reliable decoding of the last information bits
The number of tail bits equals the memory of the encoder; their values depend on the encoder:
NSC: appending m zeros
RSC: appending m tail bits whose values depend on the last state reached by the information bits

Adding tail bits reduces the code rate; for a sequence u with N information bits and a rate-1/n convolutional code:
$R_c^{\mathrm{Tail}} = \frac{N}{n\,(N+m)} = R_c \cdot \frac{N}{N+m}$

Generator matrix (N block rows, terminated after $\mathbf{G}_m$):
$\mathbf{G} = \begin{pmatrix} \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & & \\ & \ddots & \ddots & & \ddots & \\ & & \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m \end{pmatrix}$

Tailbiting Convolutional Codes


For small sequence lengths N, the addition of tail bits significantly reduces the code rate
Tailbiting convolutional codes: the last state corresponds to the first state
→ no tail bits are required
NSC: initialize the encoder with the last m bits of u

Generator matrix: block-circulant; each block row is a cyclic shift of $[\,\mathbf{G}_0\;\; \mathbf{G}_1\;\; \cdots\;\; \mathbf{G}_m\;\; \mathbf{0}\;\; \cdots\;\; \mathbf{0}\,]$, so the blocks that would extend beyond the last column wrap around to the beginning of the row

Distance Properties of Convolutional Codes (1)


As for block codes, the distance spectrum affects the performance of convolutional codes
The free distance df is the smallest Hamming distance between two code sequences
The free distance determines the asymptotic (Eb/N0 → ∞) performance; at moderate SNR, larger distances affect the performance as well

Distance spectrum:
Convolutional codes are linear → comparison with the all-zero sequence is sufficient
The Hamming weight of all sequences has to be calculated

Modified state diagram:
The self-loop in state 0 is eliminated; state 0 is split into a first state Sa and a last state Se
Placeholders at the state transitions: L = sequence length, W = weight of the uncoded input sequence, D = weight of the coded output sequence

Distance Properties of Convolutional Codes (2)


Example: distance spectrum for the (2,1,3)-NSC code with g1 = 7₈ and g2 = 5₈

Modified state diagram:
[Figure: states Sa (= 00), 10, 11, 01, Se (= 00) with branch labels, e.g. Sa → 10: WD²L; 10 → 01: DL; 01 → 10: WL; 11 → 11: WDL; 01 → Se: D²L]
L = sequence length, W = weight of the uncoded input sequence, D = weight of the coded output sequence

Linear equation system:
$S_{10} = WD^2L \cdot S_a + WL \cdot S_{01}$
$S_{01} = DL \cdot S_{10} + DL \cdot S_{11}$
$S_{11} = WDL \cdot S_{11} + WDL \cdot S_{10}$
$S_e = D^2L \cdot S_{01}$

Solution:
$T(W,D,L) := \frac{S_e}{S_a} = \frac{WD^5L^3}{1 - WDL - WDL^2}$

Distance Properties of Convolutional Codes (3)


Series expansion of T(W,D,L) yields
$T(W,D,L) = WD^5L^3 + W^2D^6L^4 + W^2D^6L^5 + W^3D^7L^5 + 2\,W^3D^7L^6 + W^3D^7L^7 + \cdots = \sum_{w}\sum_{d}\sum_{l} T_{w,d,l}\; W^w D^d L^l$

Interpretation:
1 sequence of length l = 3 with input weight w = 1 and output weight d = 5
1 sequence each of lengths l = 4 and l = 5 with input weight w = 2 and output weight d = 6
1 sequence each of lengths l = 5 and l = 7 with input weight w = 3 and output weight d = 7, plus 2 sequences of length l = 6
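Assuming sympy is available, this expansion can be verified symbolically; a verification sketch, not part of the lecture:

```python
# Sketch: expand the closed-form transfer function as a power series in W.
import sympy as sp

W, D, L = sp.symbols('W D L')
T = W * D**5 * L**3 / (1 - W*D*L*(1 + L))

# first three terms of the geometric series in W (up to W^3)
expansion = sp.expand(sp.series(T, W, 0, 4).removeO())
print(expansion)
# W*D**5*L**3 + W**2*D**6*L**4 + W**2*D**6*L**5
#   + W**3*D**7*L**5 + 2*W**3*D**7*L**6 + W**3*D**7*L**7
```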


Distance Properties of Convolutional Codes (4)


Example: (2,1,3)-NSC code with generators g1 = 7₈ and g2 = 5₈
All code sequences up to a maximum weight of d ≤ 7, shown in the trellis diagram

[Figure: trellis for l = 0, …, 7 highlighting the paths with (w=1, d=5, l=3), (w=2, d=6, l=4), (w=2, d=6, l=5), (w=3, d=7, l=5), two paths with (w=3, d=7, l=6), and (w=3, d=7, l=7)]

Distance Properties of Convolutional Codes (5)


General calculation:
$T(W,D,L) = \sum_{p=0}^{\infty} \mathbf{a}\, \mathbf{S}^p\, \mathbf{b}$
a: transitions from state 0 into all other states, with parameters W, D and L
S: transitions between the states S01, S10, S11 (without state 0)
b: transitions from all states into state 0

For the example (state order S01, S10, S11; row ν of S holds the transitions leaving state ν):
$\mathbf{a} = \begin{pmatrix} 0 & WD^2L & 0 \end{pmatrix}, \qquad \mathbf{S} = \begin{pmatrix} 0 & WL & 0 \\ DL & 0 & WDL \\ DL & 0 & WDL \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} D^2L \\ 0 \\ 0 \end{pmatrix}$

Distance Properties of Convolutional Codes (6)


For a sequence of length l, the exponent of S becomes p = l - 2:
$T_{l=3}(W,D,L) = \mathbf{a}\,\mathbf{S}\,\mathbf{b} = \begin{pmatrix} WD^3L^2 & 0 & W^2D^3L^2 \end{pmatrix} \mathbf{b} = WD^5L^3$
$T_{l=4}(W,D,L) = \mathbf{a}\,\mathbf{S}^2\,\mathbf{b} = \begin{pmatrix} W^2D^4L^3 & W^2D^3L^3 & W^3D^4L^3 \end{pmatrix} \mathbf{b} = W^2D^6L^4$
$T_{l=5}(W,D,L) = \mathbf{a}\,\mathbf{S}^3\,\mathbf{b} = W^2D^6L^5 + W^3D^7L^5$

$T_{l \le 5}(W,D,L) = WD^5L^3 + W^2D^6L^4 + W^2D^6L^5 + W^3D^7L^5$
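These partial sums follow mechanically from the matrix form; a sympy sketch (assuming sympy is installed):

```python
# Sketch: T_l = a * S^(l-2) * b with symbolic matrices.
import sympy as sp

W, D, L = sp.symbols('W D L')
a = sp.Matrix([[0, W*D**2*L, 0]])        # from Sa into S01, S10, S11
S = sp.Matrix([[0,   W*L, 0],            # from S01
               [D*L, 0,   W*D*L],        # from S10
               [D*L, 0,   W*D*L]])       # from S11
b = sp.Matrix([[D**2*L], [0], [0]])      # into Se

for l in range(3, 6):
    T_l = sp.expand((a * S**(l - 2) * b)[0, 0])
    print(f"l = {l}:", T_l)
# l = 3: W*D**5*L**3
# l = 4: W**2*D**6*L**4
# l = 5: W**2*D**6*L**5 + W**3*D**7*L**5
```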


Distance Properties of Convolutional Codes (7)


Number of sequences with Hamming weight d:
$a_d = \sum_{w}\sum_{l} T_{w,d,l}$

Number of information bits equal to one, accumulated over all sequences with Hamming weight d:
$\sum_d c_d\, D^d = \left. \frac{\partial\, T(W,D,L{=}1)}{\partial W} \right|_{W=1} = \sum_d \Big( \sum_{w}\sum_{l} w\, T_{w,d,l} \Big) D^d \;\Rightarrow\; c_d = \sum_{w}\sum_{l} w\, T_{w,d,l}$

Distance Spectra for NSC and RSC Encoders


[Figure: distance spectra, 10·log10(cd) versus d, for the non-recursive convolutional encoder and the recursive convolutional encoder RSC (2)]

Puncturing of Convolutional Codes (1)


Variable adjustment of the code rate by puncturing (cf. block codes)
→ single binary digits of the encoded sequence are not transmitted
Advantages of puncturing:
Flexible code rate without additional hardware effort
Possibly lower decoding complexity
Although the performance of the original code is decreased, the performance of the punctured code is in general as good as that of a non-punctured code of the same rate

Puncturing matrix of period LP:
$\mathbf{P} = \begin{pmatrix} p_{1,0} & p_{1,1} & \cdots & p_{1,L_P-1} \\ p_{2,0} & p_{2,1} & \cdots & p_{2,L_P-1} \\ \vdots & & & \vdots \\ p_{n,0} & p_{n,1} & \cdots & p_{n,L_P-1} \end{pmatrix} = \begin{pmatrix} \mathbf{p}_0 & \mathbf{p}_1 & \cdots & \mathbf{p}_{L_P-1} \end{pmatrix}$

Each column $\mathbf{p}_i$ of P contains the puncturing scheme of one code word and therefore consists of n elements $p_{i,j} \in \mathrm{GF}(2)$
($p_{i,j} = 0$ → the j-th bit is not transmitted; $p_{i,j} = 1$ → the j-th bit is transmitted)

Puncturing of Convolutional Codes (2)


Instead of transmitting $n \cdot L_P$ coded bits per period, only $l + L_P$ bits are transmitted, according to the puncturing scheme
The parameter l with $1 \le l \le (n-1) \cdot L_P$ adjusts the code rate in the range
$l = 1\!: \;\; R_C = \frac{L_P}{L_P + 1} \qquad \ldots \qquad l = (n-1)\,L_P\!: \;\; R_C = \frac{L_P}{L_P \cdot n} = \frac{1}{n}$

Puncturing affects the distance properties
→ the optimal puncturing scheme depends on the specific convolutional code
Attention: puncturing can produce a catastrophic convolutional code!

Puncturing of Convolutional Codes (3)


Example: the (2,1,3)-NSC code of code rate Rc = 1/2 is punctured to code rate Rc = 3/4 with puncturing period LP = 3
$\mathbf{P} = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}$

[Figure: encoder outputs x1(l), x2(l) multiplexed through the puncturing unit to the transmit stream x(l)]

Encoded sequence: x1(0), x2(0), x1(1), x2(1), x1(2), x2(2), x1(3), x2(3), …
Transmit sequence: x1(0), x2(0), x1(1), x2(2), x1(3), x2(3), …
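Applying P to a bit stream is a simple filter; a minimal sketch (helper name and interface are ours, not from the lecture):

```python
# Sketch: puncture an encoded stream with a periodic puncturing matrix P.
def puncture(codewords, P):
    """Keep bit j of code word l iff P[j][l mod LP] == 1."""
    LP = len(P[0])
    sent = []
    for l, word in enumerate(codewords):
        for j, bit in enumerate(word):
            if P[j][l % LP]:
                sent.append(bit)
    return sent

P = [[1, 1, 0],   # pattern for x1
     [1, 0, 1]]   # pattern for x2

# labels instead of bits, to make the selection visible
codewords = [(f'x1({l})', f'x2({l})') for l in range(4)]
print(puncture(codewords, P))
# ['x1(0)', 'x2(0)', 'x1(1)', 'x2(2)', 'x1(3)', 'x2(3)']
```

Per period, 3 information bits produce 4 transmitted bits, i.e. the rate 3/4 from the slide.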

Optimal Decoding (1)


The information sequence u contains the information bits; in each step, k of them are mapped onto a code word of n symbols, giving the code sequence
$\mathbf{x} = (\,\underbrace{x_1(0) \cdots x_n(0)}_{\mathbf{x}(0)}\;\; \cdots\;\; \underbrace{x_1(N-1) \cdots x_n(N-1)}_{\mathbf{x}(N-1)}\,)$
The received sequence is y, $\hat{\mathbf{x}}$ is the estimated code sequence, and a denotes an arbitrary code sequence

MAP decoding (maximum a-posteriori probability)
The optimum decoder calculates the sequence $\hat{\mathbf{x}}$ that maximizes $\Pr\{\hat{\mathbf{x}} \mid \mathbf{y}\}$:
$\Pr\{\hat{\mathbf{x}} \mid \mathbf{y}\} \ge \Pr\{\mathbf{a} \mid \mathbf{y}\} \;\Leftrightarrow\; p(\mathbf{y} \mid \hat{\mathbf{x}})\,\frac{\Pr\{\hat{\mathbf{x}}\}}{p(\mathbf{y})} \ge p(\mathbf{y} \mid \mathbf{a})\,\frac{\Pr\{\mathbf{a}\}}{p(\mathbf{y})} \;\Leftrightarrow\; p(\mathbf{y} \mid \hat{\mathbf{x}})\,\Pr\{\hat{\mathbf{x}}\} \ge p(\mathbf{y} \mid \mathbf{a})\,\Pr\{\mathbf{a}\}$
$\Rightarrow\; \hat{\mathbf{x}} = \arg\max_{\mathbf{a}}\; p(\mathbf{y} \mid \mathbf{a})\,\Pr\{\mathbf{a}\}$

Optimal Decoding (2)


Maximum likelihood decoding
If all sequences are equally likely, $\Pr\{\mathbf{a}\} = \Pr\{\hat{\mathbf{x}}\} = 2^{-k}$, or if the receiver does not know the statistics Pr{a} of the source, no a-priori information can be used
The decision criterion becomes
$p(\mathbf{y} \mid \hat{\mathbf{x}}) \ge p(\mathbf{y} \mid \mathbf{a}) \;\Rightarrow\; \hat{\mathbf{x}} = \arg\max_{\mathbf{a}}\; p(\mathbf{y} \mid \mathbf{a})$

For equally likely input sequences, the MAP and the ML criterion yield the identical (optimal) result
If the input sequences are not equally likely but the input statistics Pr{a} are not known by the receiver, the ML criterion is suboptimal

Optimal Decoding (3)


ML decoding for a memoryless channel: the joint probabilities can be factorized
$p(\mathbf{y} \mid \mathbf{a}) = \prod_{l=0}^{N-1} p\big(\mathbf{y}(l) \mid \mathbf{a}(l)\big) = \prod_{l=0}^{N-1} \prod_{i=1}^{n} p\big(y_i(l) \mid a_i(l)\big)$

As the logarithm is a strictly monotonically increasing function:
$\ln p(\mathbf{y} \mid \mathbf{a}) = \sum_{l=0}^{N-1} \ln p\big(\mathbf{y}(l) \mid \mathbf{a}(l)\big) = \sum_{l=0}^{N-1} \sum_{i=1}^{n} \ln p\big(y_i(l) \mid a_i(l)\big) = \sum_{l=0}^{N-1} \sum_{i=1}^{n} \gamma\big(y_i(l) \mid a_i(l)\big)$

The incremental metric $\gamma(y_i(l) \mid a_i(l)) = \ln p(y_i(l) \mid a_i(l))$ describes the transition probabilities of the channel

AWGN channel with antipodal symbols $x_i(l) = \pm\sqrt{E_s/T_s}$ and noise variance $N_0/2/T_s$:
$p\big(y_i(l) \mid a_i(l)\big) = \frac{1}{\sqrt{\pi N_0/T_s}}\; e^{-\frac{(y_i(l) - a_i(l))^2}{N_0/T_s}}$
[Figure: the two conditional pdfs $p(y \mid x = +\sqrt{E_s/T_s})$ and $p(y \mid x = -\sqrt{E_s/T_s})$]

Optimal Decoding (4)


Squared Euclidean distance
$\gamma\big(y_i(l) \mid a_i(l)\big) = \ln p\big(y_i(l) \mid a_i(l)\big) \;\hat{=}\; C - \frac{(y_i(l) - a_i(l))^2}{N_0/2/T_s}$
→ maximizing the metric means minimizing the squared Euclidean distance

Correlation metric: expanding the square,
$\gamma\big(y_i(l) \mid a_i(l)\big) = C - \frac{y_i(l)^2}{N_0/2/T_s} + \frac{4\, y_i(l)\, a_i(l)}{N_0/T_s} - 2\,\frac{E_s}{N_0}$
The term $y_i(l)^2$ does not depend on $a_i(l)$ and $a_i(l)^2 = E_s/T_s$ is constant, so an equivalent metric is
$\gamma\big(y_i(l) \mid a_i(l)\big) \;\hat{=}\; \frac{4}{N_0/T_s}\, y_i(l)\, a_i(l)$
→ maximize the correlation between the received and the hypothesized symbols

Viterbi Algorithm
1) Start the trellis in state 0
2) Calculate the incremental metrics γ(y(l)|a(l)) for the received word y(l) and all possible code words a(l)
3) Add the incremental path metrics to the old cumulative state metrics Mj(l-1), j = 0, …, 2^m - 1
4) For each state, select the path with the lowest Euclidean distance (largest correlation metric) and discard the other paths
→ the effort increases only linearly with the observation length, not exponentially
5) Return to step 2) unless all N received words have been processed
6) At the end of the trellis:
Terminated code (trellis ends in state 0): select the path with the best metric M0(N)
Truncated code: select the path with the overall best metric Mj(N)
7) Trace back the path selected in 6) (the survivor) and output the corresponding information bits; a sketch of the complete procedure follows below
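To make steps 1) to 7) concrete, here is a minimal Python sketch for the (2,1,3) example code with antipodal signalling (bit 0 → +1, bit 1 → -1); the function name and interface are our own illustration, not the lecture's reference implementation, and the correlation metric is used unnormalized since scaling does not change the argmax:

```python
# Sketch: Viterbi decoder for the rate-1/2 (2,1,3) NSC code, terminated in state 0.
def viterbi(y, generators=((1, 1, 1), (1, 0, 1))):
    """y: received soft values, two per trellis step (rate 1/2)."""
    m = len(generators[0]) - 1
    states = [tuple(int(b) for b in format(s, f'0{m}b')) for s in range(2 ** m)]
    metric = {s: (0.0 if s == states[0] else float('-inf')) for s in states}
    paths = {s: [] for s in states}
    for l in range(0, len(y), 2):
        new_metric = {s: float('-inf') for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            for bit in (0, 1):
                taps = (bit,) + s
                out = [sum(g * t for g, t in zip(gen, taps)) % 2
                       for gen in generators]
                # correlation branch metric with mapping 0 -> +1, 1 -> -1
                branch = sum(yi * (1 - 2 * o) for yi, o in zip(y[l:l + 2], out))
                nxt = (bit,) + s[:-1]
                if metric[s] + branch > new_metric[nxt]:
                    new_metric[nxt] = metric[s] + branch
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[states[0]]      # terminated code: survivor ending in state 0

# u = [1 0 0 1 | 0 0] encodes to 11 10 11 11 10 11; noise-free reception:
y = [-1, -1, -1, +1, -1, -1, -1, -1, -1, +1, -1, -1]
print(viterbi(y))                # -> [1, 0, 0, 1, 0, 0]
```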


Decoding of Convolutional Codes with Viterbi-Algorithm


Example: (2,1,3)-NSC code with generators g1 = 7₈ and g2 = 5₈
Information sequence: u = [1 0 0 1 | 0 0] (four information bits plus two tail bits)

[Figure: trellis for l = 0, …, 6 with the received antipodal values per step, the correlation branch metrics, and the cumulative state metrics; the surviving path runs through the states 00, 10, 01, 00, 10, 01, 00 and reproduces u]

Decoding with Viterbi Algorithm


Rule of thumb
For continuous data transmission (or very long data blocks), the decoding delay would become very high or even infinite
It has been found experimentally that a decision depth of 5·Lc results in a negligible performance degradation
Reason: if the decision depth is large enough, the beginnings of the different paths merge, and the decision for this part is reliable

Punctured codes
Placeholders for the punctured bits have to be inserted into the received sequence prior to decoding (zeros for antipodal transmission). As the distance properties are degraded by puncturing, the decision depth should be extended.


Error bounds

An error occurs if the conditional probability of the correct code sequence x is lower than that of another sequence a ≠ x
Probability of an error event:
$P_w = \Pr\big\{ \ln p(\mathbf{y} \mid \mathbf{x}) < \ln p(\mathbf{y} \mid \mathbf{a}) \big\} = \Pr\Big\{ \sum_{l=0}^{N-1} \gamma\big(\mathbf{y}(l) \mid \mathbf{x}(l)\big) < \sum_{l=0}^{N-1} \gamma\big(\mathbf{y}(l) \mid \mathbf{a}(l)\big) \Big\}$
$= \Pr\Big\{ \sum_{l=0}^{N-1} \sum_{i=1}^{n} \gamma\big(y_i(l) \mid x_i(l)\big) < \sum_{l=0}^{N-1} \sum_{i=1}^{n} \gamma\big(y_i(l) \mid a_i(l)\big) \Big\} = \Pr\Big\{ \sum_{l=0}^{N-1} \sum_{i=1}^{n} 2\,\big(a_i(l) - x_i(l)\big)\, y_i(l) > 0 \Big\}$
with $2\,(a_i(l) - x_i(l)) = \mp 4\sqrt{E_s/T_s}$ for all $a_i(l) \neq x_i(l)$ and $0$ else

Error bounds

Pairwise error probability $P_d$ of two sequences a and x with distance $d = d_H(\mathbf{a}, \mathbf{x})$:
$P_d = \Pr\Big\{ \sum_{l,\,i:\; x_i(l) \neq a_i(l)} y_i(l) > 0 \Big\}$

The sum over the d received symbols $y_i(l)$ is a Gaussian distributed random variable $Y = \sum y_i(l)$ with
mean $\mu_Y = d\,\sqrt{E_s/T_s}$ and variance $\sigma_Y^2 = d \cdot N_0/2/T_s$

The probability of mixing up two sequences with pairwise Hamming distance d becomes
$P_d = \frac{1}{2}\operatorname{erfc}\left(\sqrt{d\,\frac{E_s}{N_0}}\right) = \frac{1}{2}\operatorname{erfc}\left(\sqrt{d\,R_c\,\frac{E_b}{N_0}}\right)$

Estimation of the sequence error probability

Probability of the first error event in a sequence:
$P_w \le \sum_d a_d\, P_d = \sum_d \frac{a_d}{2}\, \operatorname{erfc}\left(\sqrt{d\,R_c\,\frac{E_b}{N_0}}\right)$

Estimation of the bit error probability:
$P_b \le \sum_d \frac{c_d}{2}\, \operatorname{erfc}\left(\sqrt{d\,R_c\,\frac{E_b}{N_0}}\right)$

Example (1)
Example: half-rate code with generators g1 = 7₈ and g2 = 5₈
Estimation of the sequence and bit error probability via the transfer function:
$T(W,D,L) = \sum_{d=5}^{\infty} W^{d-4}\, D^d\, L^{d-2}\, (1+L)^{d-5}$
$T(W{=}1, D, L{=}1) = \sum_{d=5}^{\infty} 2^{d-5}\, D^d \;\Rightarrow\; a_d = 2^{d-5}$
$\left. \frac{\partial\, T(W,D,L{=}1)}{\partial W} \right|_{W=1} = \sum_{d=5}^{\infty} (d-4)\, 2^{d-5}\, D^d \;\Rightarrow\; c_d = (d-4)\, 2^{d-5}$
$P_b \le \sum_{d=5}^{\infty} \frac{(d-4)\, 2^{d-5}}{2}\, \operatorname{erfc}\left(\sqrt{d\,R_c\,\frac{E_b}{N_0}}\right)$
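The truncated union bound can be evaluated numerically; a sketch assuming numpy and scipy are available, with an arbitrary truncation depth d_max (our choice):

```python
# Sketch: truncated union bound on the BER for the example code,
# using c_d = (d-4)*2^(d-5) for d >= 5.
import numpy as np
from scipy.special import erfc

def union_bound_ber(ebn0_db, rc=0.5, d_max=30):
    ebn0 = 10 ** (ebn0_db / 10)
    d = np.arange(5, d_max + 1)
    c_d = (d - 4) * 2.0 ** (d - 5)
    return np.sum(0.5 * c_d * erfc(np.sqrt(d * rc * ebn0)))

for snr in (4, 6, 8):
    print(f"Eb/N0 = {snr} dB: Pb <~ {union_bound_ber(snr):.2e}")
```

At low SNR the terms do not decay fast enough and the bound diverges, which matches the observation on the next slide.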

Example (2)
Comparison of simulation and analytical estimation

[Figure: BER versus Eb/N0 in dB, comparing the simulation with the union bound truncated at d = 5, 6, 7, and 20]

The asymptotic bit error rate (BER) is determined by the free distance df
For estimating the BER at moderate SNR, the whole distance spectrum is required
For large error rates or small signal-to-noise ratios, the union bound is very loose and may diverge

Performance of Convolutional Codes: Quantization


Influence of quantization

[Figure: BER versus Eb/N0 in dB without quantization and with q = 2 and q = 8 quantization levels]

By quantizing the received sequence before decoding, information is lost
Hard decision (q = 2): strong performance degradation
3-bit quantization (q = 8): only a small performance degradation compared to no quantization

Numerical Results for Convolutional Codes

Influence of the constraint length:
[Figure: BER versus Eb/N0 in dB for Lc = 3, 5, 7, 9]

Influence of the code rate:
[Figure: BER versus Eb/N0 in dB for Rc = 1/4, 1/3, 1/2, 2/3, 3/4]
