
Basic Digital Communication System

[Figure: block diagram of a basic digital communication system - Information source -> Pulse generator -> Transmit filter HT(f) -> Channel HC(f), with channel noise n(t) added -> Receiver filter HR(f) -> A/D -> Digital processing; X(t) is the transmit-filter output and Y(t) the receiver-filter output.]

Basic Communication System


[Figure: simplified block diagram - symbols Ak -> Transmit filter HT(f) -> Channel HC(f) -> Receiver filter HR(f) -> output Y(t), sampled at t = tm to give Y(tm).]

Y(t) = Σ_k A_k h_c(t - t_d - k T_b) + n_0(t)

The received signal is the transmitted symbol sequence convolved with the channel and corrupted by AWGN (neglecting H_Tx and H_Rx).

Sampling at t = t_m:

Y(t_m) = A_m h(0) + Σ_{k ≠ m} A_k h((m - k) T_b) + n(t_m)

The sum over k ≠ m is the ISI (Inter-Symbol Interference) term.

Band Limited Channels

Controlled ISI - Partial Response Signals

Signal Design with Controlled ISI (Partial Response Signals):
Relax the condition of zero ISI and allow a controlled amount of ISI.
Then we can achieve the maximum symbol rate of 2W symbols/sec.
The ISI we introduce is deterministic (controlled); hence it can be taken into account at the receiver.
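As a concrete illustration of controlled ISI, here is a minimal numpy sketch of duobinary partial-response signaling with the standard precoder (this example is not from the slides; bit values and lengths are arbitrary). The controlled ISI sample y_k = m_k + m_{k-1} takes values in {-2, 0, +2} and, thanks to the precoding, each bit can be recovered symbol by symbol:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 20)                 # information bits

# Precoding: p[k] = a[k] XOR p[k-1] (avoids error propagation at the receiver)
p = np.zeros(len(a), dtype=int)
prev = 0
for k, bit in enumerate(a):
    p[k] = bit ^ prev
    prev = p[k]

m = 2 * p - 1                              # antipodal levels {-1, +1}

# Controlled ISI: every sample is the sum of two adjacent symbols (duobinary)
y = m + np.concatenate(([-1], m[:-1]))     # assume the symbol before time 0 is -1

# Symbol-by-symbol decision: |y| == 2 -> bit 0, y == 0 -> bit 1
a_hat = (np.abs(y) != 2).astype(int)
assert np.array_equal(a_hat, a)
```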

Intersymbol Interference
The filtering effect of the bandlimited channel causes a spreading of the individual data symbols passing through it.
For consecutive symbols, this spreading causes part of the symbol energy to overlap with neighbouring symbols, causing intersymbol interference (ISI).

Explanation of ISI

[Figure: a single transmitted pulse and its Fourier transform, before and after the bandlimited channel; the channel output is spread over several symbol intervals (Tb, 2Tb, ..., 6Tb).]

Reasons for ISI

Channel is band limited in nature
  Physics, e.g. parasitic capacitance in twisted pairs
  => limited frequency response
  => unlimited time response
Channel has multi-path reflections
Tx filter might add ISI when channel spacing is crucial

Channel Model
Channel is unknown.
Channel is usually modeled as a tap-delay line (FIR):

[Figure: delay line with taps h(0), h(1), h(2), ..., h(N-1), h(N); the input x(n) is delayed, weighted by the taps and summed to give y(n).]
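A minimal numpy sketch of the tap-delay-line channel model above; the tap values and noise level are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=100)     # transmitted symbols A_k

h = np.array([1.0, 0.5, 0.2])                   # illustrative channel taps h(0), h(1), h(2)
noise = 0.05 * rng.standard_normal(len(symbols) + len(h) - 1)

# Tap-delay-line (FIR) channel: y(n) = sum_k h(k) x(n-k) + w(n)
y = np.convolve(symbols, h) + noise

# Each y(n) now contains contributions from neighbouring symbols, i.e. ISI.
```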

Example for Measured Channels

The variation of the amplitude of the channel taps is random (changing multipath) and is usually modeled with a Rayleigh distribution in typical urban areas.

Example for Channel Variation:

Equalizer: equalizes the channel so that the received signal appears as if it had passed through a delta (impulse) response:

|G_E(f)| = 1 / |G_C(f)|
arg(G_E(f)) = -arg(G_C(f))

so that |G_E(f)| |G_C(f)| = 1 and h_total(t) = δ(t).

Need For Equalization

Need for equalization: overcome ISI degradation.

Need for adaptive equalization: the channel changes in time.

=> Objective: find the inverse of the channel response, so that the cascade looks like a delta channel to the Rx.
* Applications or standards recommend the channel types the receiver has to cope with.

Zero forcing equalizers
(according to the Peak Distortion Criterion)

Tx -> Ch -> Eq

Force no ISI at the output samples:

q(mT) = Σ_n c_n x(mT - nτ) = 1 for m = 0, and 0 for m = ±1, ±2, ...

Equalizer taps
Example: 5-tap equalizer, 2/T sample rate:

x(mT - nT/2) is arranged as a matrix X whose (m, n) entry is the overall response sampled at mT - nT/2 (the slide lists entries such as x(0), x(±0.5T), x(±1T), x(±1.5T), x(±2T), x(2.5T), x(3T)).

C = (c_-2, c_-1, c_0, c_1, c_2)^T   - equalizer taps as a vector
q = (0, 0, 1, 0, 0)^T               - desired (no-ISI) response as a vector

X C = q   =>   C_opt = X^{-1} q
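A minimal numpy sketch of the same zero-forcing computation. The T/2-spaced samples x(.) below are invented for illustration (they are not the values from the slide); the point is only the construction of X and the solution C_opt = X^{-1} q:

```python
import numpy as np

# Illustrative T/2-spaced samples of the overall response x(t); values are made up.
x = {-2.0: 0.0, -1.5: 0.01, -1.0: -0.02, -0.5: 0.05, 0.0: 1.0,
      0.5: 0.15, 1.0: -0.10, 1.5: 0.05, 2.0: -0.02, 2.5: 0.01, 3.0: 0.0}

taps = range(-2, 3)                       # 5 taps, indices n = -2..2
# Row m holds x(mT - n*T/2); samples outside the table are taken as zero.
X = np.array([[x.get(m - 0.5 * n, 0.0) for n in taps] for m in taps])

q = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # force q(mT) = 1 at m = 0, 0 elsewhere
c_opt = np.linalg.solve(X, q)             # C_opt = X^{-1} q
print(c_opt)
```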

Disadvantages: ignores the presence of additive noise (noise enhancement).

MSE Criterion
Unknown parameter θ (the equalizer filter response), received signal x[n], desired signal h[n]:

J[θ] = Σ_{n=0}^{N-1} (x[n] - θ h[n])²

The mean square error between the received signal and the desired signal, filtered by the equalizer filter.

LS Algorithm

LMS Algorithm

LS
Least Squares method:
Unbiased estimator
Exhibits minimum variance (optimal)
No probabilistic assumptions (only a signal model)
Presented by Gauss (1795) in studies of planetary motions

LS - Theory

1. Signal model: s[n] = Σ_m θ[m] h[n - m]
2. In matrix form: s = Hθ
3. LS cost: J[θ] = Σ_{n=0}^{N-1} (x[n] - θ h[n])²
4. Setting the derivative with respect to θ to zero gives the LS estimate:

   θ̂ = ( Σ_{n=0}^{N-1} x[n] h[n] ) / ( Σ_{n=0}^{N-1} h²[n] )
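A tiny numpy check of the scalar LS estimate in step 4 (the signal, noise level, and the true θ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
h = rng.standard_normal(N)                          # known signal h[n]
theta_true = 0.7
x = theta_true * h + 0.1 * rng.standard_normal(N)   # observed x[n]

# theta_hat = sum(x[n] h[n]) / sum(h[n]^2)
theta_hat = np.dot(x, h) / np.dot(h, h)
print(theta_hat)                                    # close to 0.7
```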

Back-Up
The minimum LS error is obtained by substituting 4 into 3:

J_min = J[θ̂] = Σ_{n=0}^{N-1} (x[n] - θ̂ h[n])²
      = Σ_{n=0}^{N-1} x[n](x[n] - θ̂ h[n]) - θ̂ Σ_{n=0}^{N-1} h[n](x[n] - θ̂ h[n])

The second sum is 0 (by substituting θ̂), so

J_min = Σ_{n=0}^{N-1} x²[n] - θ̂ Σ_{n=0}^{N-1} x[n] h[n]
      = Σ_{n=0}^{N-1} x²[n] - ( Σ_{n=0}^{N-1} x[n] h[n] )² / Σ_{n=0}^{N-1} h²[n]

i.e. (energy of the original signal) - (energy of the fitted signal), with x[n] = signal + w[n].
If the noise is small enough (the SNR is large enough), J_min ≈ 0.

Finding the LS solution

s = Hθ, where H is the observation matrix (N x p) and s = (s[0], s[1], ..., s[N-1])^T:

J[θ] = Σ_{n=0}^{N-1} (x[n] - s[n])² = (x - Hθ)^T (x - Hθ)
     = x^T x - x^T Hθ - θ^T H^T x + θ^T H^T Hθ
     = x^T x - 2 x^T Hθ + θ^T H^T Hθ        (x^T Hθ is a scalar)

∂J(θ)/∂θ = -2 H^T x + 2 H^T H θ = 0

θ̂ = (H^T H)^{-1} H^T x
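A minimal numpy sketch of the vector LS solution; the observation matrix and true parameters are random illustrative choices. In practice np.linalg.lstsq is preferred over forming (H^T H)^{-1} explicitly:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 100, 4
H = rng.standard_normal((N, p))                     # observation matrix (N x p)
theta_true = np.array([0.5, -1.0, 0.3, 2.0])
x = H @ theta_true + 0.05 * rng.standard_normal(N)  # noisy observations

theta_hat = np.linalg.inv(H.T @ H) @ H.T @ x        # normal equations: (H^T H)^{-1} H^T x
theta_lsq, *_ = np.linalg.lstsq(H, x, rcond=None)   # numerically better-behaved equivalent
print(theta_hat, theta_lsq)
```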

LS: Pros & Cons

Advantages:
Optimal approximation for the channel; once calculated, it can feed the equalizer taps.

Disadvantages:
Heavy processing (due to the matrix inversion, which is itself a challenge).
Not adaptive (calculated only once in a while), so it is not good for fast-varying channels.

An adaptive equalizer is required when the channel is time variant (changes in time), in order to adjust the equalizer filter tap weights according to the instantaneous channel properties.

LEAST-MEAN-SQUARE ALGORITHM
Contents:
Introduction - approximating steepest-descent algorithm
Steepest descent method
Least-mean-square algorithm
LMS algorithm convergence and stability
Numerical example for channel equalization using LMS
Summary

INTRODUCTION
Introduced by Widrow & Hoff in 1959
Simple, no matrix calculations involved in the adaptation
In the family of stochastic gradient algorithms
Approximation of the steepest descent method
Based on the MMSE (Minimum Mean Square Error) criterion
Adaptive process containing two input signals:
1) Filtering process, producing an output signal
2) Desired signal (training sequence)
Adaptive process: recursive adjustment of the filter tap weights

NOTATIONS

Input signal (vector): u(n)

Autocorrelation matrix of input signal: Ruu = E[u(n)uH(n)]

Desired response: d(n)

Cross-correlation vector between u(n) and d(n): Pud = E[u(n)d*(n)]

Filter tap weights: w(n)

Filter output: y(n) = wH(n)u(n)

Estimation error: e(n) = d(n) - y(n)

Mean square error: J = E[|e(n)|²] = E[e(n) e*(n)]

SYSTEM BLOCK USING THE LMS

U[n] = input signal from the channel; d[n] = desired response
H[n] = some training sequence generator
e[n] = error feedback between:
  a) the desired response
  b) the equalizer FIR filter output
W = FIR filter using the tap-weights vector

STEEPEST DESCENT METHOD

The steepest descent algorithm is a gradient-based method which employs a recursive solution over the problem (cost function).
The current equalizer taps vector is W(n) and the next equalizer taps vector is W(n+1). We can estimate W(n+1) by the approximation:

W[n+1] = W[n] + 0.5 μ (-∇J[n])

The gradient ∇J[n] is a vector pointing in the direction of the change in filter coefficients that will cause the greatest increase in the error signal. Because the goal is to minimize the error, the filter coefficients are updated in the direction opposite to the gradient; that is why the gradient term is negated. The constant μ is a step size. After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge.

STEEPEST DESCENT EXAMPLE

Given the following function, we need to obtain the vector that gives the absolute minimum:

Y(c1, c2) = c1² + c2²

It is obvious that c1 = c2 = 0 gives the minimum.
Now let's find the solution by the steepest descent method.

We start by assuming (c1 = 5, c2 = 7).
We select the constant μ. If it is too big, we miss the minimum; if it is too small, it takes a long time to reach the minimum. Here we select μ = 0.1.

The gradient vector is:

∇Y = (dY/dc1, dY/dc2)^T = (2 c1, 2 c2)^T

So our iterative equation is:

(c1, c2)^T_[n+1] = (c1, c2)^T_[n] - 0.5 μ (2 c1, 2 c2)^T_[n] = 0.9 (c1, c2)^T_[n]
STEEPEST DESCENT EXAMPLE

Iteration 1:  (c1, c2) = (5, 7)          (initial guess)
Iteration 2:  (c1, c2) = (4.5, 6.3)
Iteration 3:  (c1, c2) = (4.05, 5.67)
...
Iteration 60: (c1, c2) ≈ (0.01, 0.013)

lim_{n→∞} (c1, c2)_[n] = (0, 0)          (the minimum)

As we can see, the vector (c1, c2) converges to the value which yields the function minimum; the speed of this convergence depends on μ.

[Figure: contour plot of Y(c1, c2) showing the trajectory from the initial guess to the minimum.]
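A minimal sketch reproducing the iteration above (59 updates after the initial guess, to match the slide's counting):

```python
import numpy as np

mu = 0.1
c = np.array([5.0, 7.0])                    # initial guess (c1, c2), iteration 1

for n in range(59):
    grad = 2 * c                            # gradient of Y(c1, c2) = c1^2 + c2^2
    c = c - 0.5 * mu * grad                 # W[n+1] = W[n] + 0.5*mu*(-grad), i.e. c = 0.9*c

print(c)                                    # approximately [0.010, 0.014]
```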

MMSE CRITERIA FOR THE LMS

MMSE - Minimum Mean Square Error

MSE = E{[d(k) - y(k)]²} = E{[d(k) - Σ_{n=-N}^{N} w(n) u(k-n)]²}
    = E{d²(k)} - 2 Σ_{n=-N}^{N} w(n) P_du(n) + Σ_{n=-N}^{N} Σ_{m=-N}^{N} w(n) w(m) R_uu(n-m)

where
P_du(n) = E{d(k) u(k-n)}         (cross-correlation)
R_uu(n-m) = E{u(k-m) u(k-n)}     (autocorrelation)

To obtain the MMSE we differentiate the MSE with respect to the weights and compare it to 0:

d(MSE)/dw(k) = d( E{d²(k)} - 2 Σ_n w(n) P_du(n) + Σ_n Σ_m w(n) w(m) R_uu(n-m) ) / dw(k)

MMSE CRITERION FOR THE LMS

And finally we get:

∇J(n) = d(MSE)/dw(k) = -2 P_du(k) + 2 Σ_{n=-N}^{N} w[n] R_uu(n-k),   k = 0, 1, 2, ...

By setting the derivative to zero we get the MMSE solution:

w_opt = R^{-1} P

This calculation is complicated for the DSP (it requires a matrix inversion) and can make the system unstable: if there are nulls in the spectrum, we can get very large values in the inverse matrix. Also, we do not always know the autocorrelation matrix of the input and the cross-correlation vector, so we would like to approximate them.
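A small numpy sketch of the exact w_opt = R^{-1} P computation (the thing LMS tries to avoid), using sample estimates of R and P over an illustrative FIR channel; the channel, equalizer length and noise level are assumptions for the example, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 2000
d = rng.choice([-1.0, 1.0], size=M)             # desired symbols d(k)
h = np.array([1.0, 0.4, 0.2])                   # illustrative channel
u = np.convolve(d, h)[:M] + 0.05 * rng.standard_normal(M)   # received u(k)

N = 7                                           # equalizer length
U = np.array([u[k - np.arange(N)] for k in range(N, M)])    # rows: u(k), u(k-1), ..., u(k-N+1)
R = U.T @ U / U.shape[0]                        # sample autocorrelation matrix
P = U.T @ d[N:M] / U.shape[0]                   # sample cross-correlation vector

w_opt = np.linalg.solve(R, P)                   # w_opt = R^{-1} P
```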

LMS APPROXIMATION OF THE STEEPEST DESCENT METHOD

W(n+1) = W(n) + 2μ[P - R W(n)]   <= according to the MMSE criterion

We make the following assumptions:
The input vectors u(n), u(n-1), ..., u(1) are statistically independent.
The input vector u(n) is statistically independent of the previous desired responses d(n-1), ..., d(1).
The input vector u(n) and the desired response d(n) are Gaussian-distributed random variables.
The environment is wide-sense stationary.

In LMS, the following instantaneous estimates are used:
R̂_uu = u(n) u^H(n)     (autocorrelation matrix of the input signal)
P̂_ud = u(n) d*(n)      (cross-correlation vector between u[n] and d[n])

*** Equivalently, we calculate the gradient of |e[n]|² instead of E{|e[n]|²}.

LMS ALGORITHM

W[n+1] = W[n] + μ {P̂ - R̂ W[n]}
       = W[n] + μ {u[n] d*[n] - u[n] u^H[n] W[n]}
       = W[n] + μ u[n] {d*[n] - y*[n]}

We get the final result:

W[n+1] = W[n] + μ u[n] e*[n]
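A minimal numpy sketch of the LMS update W[n+1] = W[n] + μ u[n] e*[n], run as an adaptive equalizer over the same illustrative channel (training length, μ and channel taps are assumptions; for real-valued signals e* = e):

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, mu = 5000, 7, 0.01
d = rng.choice([-1.0, 1.0], size=M)             # training symbols d[n]
h = np.array([1.0, 0.4, 0.2])                   # illustrative channel
u = np.convolve(d, h)[:M] + 0.05 * rng.standard_normal(M)

w = np.zeros(N)                                 # equalizer tap weights
for k in range(N, M):
    u_vec = u[k - np.arange(N)]                 # u[k], u[k-1], ..., u[k-N+1]
    y = np.dot(w, u_vec)                        # filter output y[n]
    e = d[k] - y                                # error against the training symbol
    w = w + mu * u_vec * e                      # LMS update

# After convergence w approximates the Wiener solution w_opt = R^{-1} P.
```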

LMS STABILITY
The step size μ determines the algorithm's convergence rate. Too small a step size makes the algorithm take many iterations; too large a step size prevents the weight taps from converging.

Rule of thumb:

μ ≤ 1 / (5 (2N + 1) P_R)

where N is the equalizer length and P_R is the received power (signal + noise), which can be estimated in the receiver.

LMS CONVERGENCE GRAPH

Example for an unknown channel of 2nd order (desired combination of taps).

This graph illustrates the LMS algorithm. First we start from a guess of the tap weights. Then we move opposite to the gradient vector to calculate the next taps, and so on, until we reach the MMSE, meaning the MSE is 0 or very close to it. (In practice we cannot get an error of exactly 0 because the noise is a random process; we can only decrease the error below a desired minimum.)

LMS convergence vs. μ: [figure]

LMS EQUALIZER EXAMPLE

Channel equalization example: average square error as a function of the number of iterations, for different channel transfer functions (change of W).

LMS advantages:
Simplicity of implementation
Does not neglect the noise, unlike the zero-forcing equalizer
Bypasses the need to calculate an inverse matrix

LMS disadvantages:
Slow convergence
Demands the use of a training sequence as a reference, thus decreasing the communication BW

Non linear equalization

Linear equalization (reminder):
Tap-delayed equalization; the output is a linear combination of the equalizer input.

G_E = 1 / G_C,    G_E(z) = Σ_i a_i z^{-i}    (an FIR filter)

C_E = Y(z) / X(z) = a_0 + a_1 z^{-1} + a_2 z^{-2} + ...

y(n) = a_0 x(n) + a_1 x(n-1) + a_2 x(n-2) + ...

Non linear equalization - DFE (Decision Feedback Equalization)

[Figure: DFE block diagram - the input is filtered by A(z); the detected output symbols are filtered by B(z) and subtracted at the input of the receiver detector.]

y(n) = Σ_i a_i x(n-i) - Σ_i b_i y(n-i)

G_E = Y(z) / X(z) = ( Σ_i a_i z^{-i} ) / ( 1 + Σ_i b_i z^{-i} )    (an IIR filter)

The decision feedback introduces poles in the z-domain.
Advantages: copes with larger ISI.
Disadvantages: danger of instability.

The nonlinearity is due to the detector (mapper) characteristics that are fed back.
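A minimal numpy sketch of the DFE structure under the assumptions above; the feed-forward taps a_i, feedback taps b_i, channel and sign detector are all illustrative choices, not values from the slides:

```python
import numpy as np

def dfe(x, a, b):
    """Decision-feedback equalizer: feed-forward taps a, feedback taps b,
    hard sign decisions fed back (the source of the nonlinearity)."""
    decisions = np.zeros(len(x))
    for n in range(len(x)):
        ff = sum(a[i] * x[n - i] for i in range(len(a)) if n - i >= 0)
        fb = sum(b[i] * decisions[n - i] for i in range(1, len(b)) if n - i >= 0)
        decisions[n] = np.sign(ff - fb)         # detector (mapper) output
    return decisions

# Toy example: one postcursor tap, cancelled exactly by the feedback filter
rng = np.random.default_rng(6)
d = rng.choice([-1.0, 1.0], size=50)
x = d + 0.4 * np.concatenate(([0.0], d[:-1]))   # x(n) = d(n) + 0.4 d(n-1)
out = dfe(x, a=[1.0], b=[0.0, 0.4])
print(np.all(out == d))                         # True in this noise-free example
```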


Blind Equalization

ZFE and MSE equalizers assume the availability of a training sequence for learning the channel. What happens when there is none? Blind equalization.

[Figure: the transmitted symbols In plus noise Vn enter an adaptive equalizer; its output Ĩn goes to a decision device producing dn, and the error en between the decision and the equalizer output adapts the equalizer with LMS.]

But it usually also employs:
Interleaving / de-interleaving
Advanced coding
ML criterion

Why? Blind equalization is hard and complicated enough! So if you are going to implement it, use the best blocks for decision (detection) and equalizing.
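A minimal sketch of the decision-directed idea implied by the diagram: with no training sequence, the detector's own decision replaces d[n] in the LMS update (channel, step size and initialization are illustrative). This only converges when the initial decisions are mostly correct, which is part of why blind equalization is hard:

```python
import numpy as np

rng = np.random.default_rng(7)
M, N, mu = 5000, 7, 0.005
d = rng.choice([-1.0, 1.0], size=M)             # unknown transmitted symbols
u = np.convolve(d, [1.0, 0.3, 0.1])[:M] + 0.02 * rng.standard_normal(M)

w = np.zeros(N)
w[0] = 1.0                                      # start from a pass-through equalizer
for k in range(N, M):
    u_vec = u[k - np.arange(N)]
    y = np.dot(w, u_vec)
    decision = np.sign(y)                       # detector output replaces the training symbol
    e = decision - y                            # decision-directed error
    w = w + mu * u_vec * e                      # same LMS update as before
```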

Turbo Equalization
Iterative:
Estimate
Equalize
Decode
Re-encode

The next iteration relies on a better estimate and therefore leads to more precise equalization.

It usually also employs:
Interleaving / de-interleaving
Turbo coding (an advanced iterative code)
MAP (based on the ML criterion)

Why? It is complicated enough! So if you are going to implement it, use the best blocks.

[Figure: turbo equalization block diagram - a channel estimator feeds a MAP equalizer; the MAP equalizer and a MAP decoder exchange extrinsic information L_e(c) and a-priori information L(c) through an interleaver/de-interleaver, and the decoder outputs L(d).]

Performance of turbo equalization vs. iterations: [figure]

ML criterion
MSE optimizes detection only up to 1st/2nd order statistics.
In Uri's class, optimum detection was:
Strongest survivor
Correlation (matched filter)
(These allow optimal performance for a delta channel and additive noise.)
Optimum detection maximizes the probability of detection (minimizes the error, i.e. the Euclidean distance in signal space).

Let's find the optimal detection criterion in the presence of a channel with memory (ISI).

ML criterion - Cont.
Maximum likelihood: maximizes the decision probability for the received trellis.
Example: BPSK (NRZI)

s_1 = -s_0 = √E_b        (E_b = energy per bit)

r_k = ±√E_b + n_k        (the received signal is corrupted by AWGN)

There are 2 possible transmitted signals.

Conditional PDF (the likelihood of r_k given that s_1 was transmitted):

p(r_k | s_1) = 1/√(2π σ_n²) exp( -(r_k - √E_b)² / (2 σ_n²) ),    σ_n² = N_0/2

p(r_k | s_0) = 1/√(2π σ_n²) exp( -(r_k + √E_b)² / (2 σ_n²) )

The optimal receiver maximizes the probability of a correct decision on the whole sequence of symbols:

p(r_1, r_2, ..., r_K | s^(m)) = Π_{k=1}^{K} p(r_k | s_k^(m))

where s^(m) is the transmitted sequence.

ML - Cont.
With a logarithm operation, it can be shown that this is equivalent to minimizing the Euclidean distance metric of the sequence (called the metric):

D(r, s^(m)) = Σ_{k=1}^{K} (r_k - s_k^(m))²

Looks similar?
While MSE minimizes the error (maximizes the probability) for the decision on a certain symbol,
MLSE minimizes the error (maximizes the probability) for the decision on a certain trellis of symbols.

How could this be used?

Viterbi Equalizer
Example for NRZI:
Transmitted symbols: ±√E_b
(0 = no change in the transmitted symbol, 1 = alter the symbol)

[Trellis diagram: two states S0 and S1 at times t = T, 2T, 3T, 4T; each branch is labelled input/output, e.g. 0/-√E_b for staying in S0 and 1/+√E_b for moving from S0 to S1.]

The metric (sum of Euclidean distances) is accumulated along each path, e.g. at t = 2T:

D_0(0,0) = (r_1 + √E_b)² + (r_2 + √E_b)²
D_0(1,1) = (r_1 - √E_b)² + (r_2 + √E_b)²
D_1(0,1) = (r_1 + √E_b)² + (r_2 - √E_b)²
D_1(1,0) = (r_1 - √E_b)² + (r_2 - √E_b)²

and extended recursively at t = 3T, e.g.:

D_0(0,0,0) = D_0(0,0) + (r_3 + √E_b)²
D_0(0,1,1) = D_1(0,1) + (r_3 + √E_b)²
D_1(0,0,1) = D_0(0,0) + (r_3 - √E_b)²
D_1(0,1,0) = D_1(0,1) + (r_3 - √E_b)²

(The subscript denotes the terminal state of the path; only the smaller metric entering each state survives.)
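A compact sketch of the Viterbi recursion over this two-state NRZI trellis (state 0 corresponds to the last transmitted amplitude -√E_b, state 1 to +√E_b; E_b, noise level and sequence length are illustrative):

```python
import numpy as np

def viterbi_nrzi(r, sqrt_eb):
    """MLSE for NRZI over AWGN: accumulate squared Euclidean branch metrics."""
    amp = [-sqrt_eb, +sqrt_eb]                  # amplitude transmitted in state 0 / state 1
    metric = [0.0, np.inf]                      # start in state 0
    paths = [[], []]                            # surviving information-bit sequences
    for rk in r:
        new_metric, new_paths = [0.0, 0.0], [None, None]
        for s_next in (0, 1):
            # info bit a moves s_prev -> s_next; for NRZI, a = s_prev XOR s_next
            cands = [(metric[s_prev] + (rk - amp[s_next]) ** 2, s_prev ^ s_next, s_prev)
                     for s_prev in (0, 1)]
            best, a, s_prev = min(cands)
            new_metric[s_next] = best
            new_paths[s_next] = paths[s_prev] + [a]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

# Illustrative use: NRZI-encode random bits, add noise, decode
rng = np.random.default_rng(8)
bits = rng.integers(0, 2, 12)
state, tx = 0, []
for a in bits:                                  # 1 alters the symbol, 0 keeps it
    state ^= a
    tx.append(1.0 if state else -1.0)
r = np.array(tx) + 0.3 * rng.standard_normal(len(tx))
print("decoded:", viterbi_nrzi(r, 1.0))
print("sent:   ", list(bits))
```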

