[Block diagram: information source → pulse generator X(t) → transmit filter H_T(f) → channel H_C(f), where channel noise n(t) is added → receiver filter H_R(f) → Y(t) → A/D → digital processing. The receiver filter output Y(t) is sampled at the instants t_m, giving Y(t_m).]
Y(t) = Σ_k A_k h_c(t − t_d − kT_b) + n_0(t)

The received signal is the transmitted signal convolved with the channel, plus AWGN (neglecting H_Tx and H_Rx). Sampling at t_m = t_d + mT_b:

Y(t_m) = A_m h_c(0) + Σ_{k≠m} A_k h_c((m − k)T_b) + n_0(t_m)

where the second term is the contribution of the neighbouring symbols.
Intersymbol Interference
The filtering effect of the bandlimited channel causes a spreading of the individual data symbols passing through it. For consecutive symbols, this spreading causes part of each symbol's energy to overlap with neighbouring symbols, causing intersymbol interference (ISI).
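The spreading described above can be reproduced numerically. A minimal sketch, where the pulse shape `h`, the symbol values, and the sampling phase are all illustrative assumptions (not from the slides): a channel longer than one symbol period makes each symbol-instant sample contain energy from its neighbours.

```python
import numpy as np

Tb_samples = 4                               # samples per symbol period
symbols = np.array([1, -1, 1, 1, -1])        # illustrative data symbols

# Impulse train: one weighted impulse per symbol period
x = np.zeros(len(symbols) * Tb_samples)
x[::Tb_samples] = symbols

# An assumed smearing channel, longer than one symbol period
h = np.array([0.5, 1.0, 0.7, 0.3, 0.1])
y = np.convolve(x, h)

# Sample at the symbol instants: each sample now mixes in energy
# from neighbouring symbols (ISI), not just its own symbol.
samples = y[::Tb_samples][:len(symbols)]
print(samples)
```

The sign pattern of the data survives here, but the amplitudes are perturbed by the overlapping tails of the neighbouring pulses.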
Explanation of ISI
[Figure: a pulse of width T_b and its Fourier transform; after the bandlimited channel, the pulse spreads in time over several symbol periods (T_b, 2T_b, ..., 6T_b), overlapping its neighbours.]
Channel Model
The channel is unknown.
The channel is usually modeled as a tap-delay line (FIR).
[Figure: tap-delay-line model — the input x(n) passes through a chain of delay elements D; the delayed samples are weighted by the taps h(0), h(1), ..., h(N) and summed to produce y(n) = Σ_k h(k) x(n − k).]
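The tap-delay-line channel model can be sketched directly from its structure: delayed copies of x(n), weighted by the taps h(k), are summed. The tap values below are illustrative placeholders; a real channel's taps are unknown and must be estimated.

```python
import numpy as np

def fir_channel(x, h):
    """Tap-delay-line (FIR) model: y(n) = sum_k h(k) x(n-k)."""
    delay_line = np.zeros(len(h))     # state of the delay elements "D"
    y = np.empty(len(x))
    for n, sample in enumerate(x):
        delay_line = np.roll(delay_line, 1)
        delay_line[0] = sample        # newest sample enters the line
        y[n] = delay_line @ h         # weighted sum of delayed inputs
    return y

x = np.array([1.0, 0.0, 0.0, 0.0])    # unit impulse
h = np.array([0.9, 0.4, 0.2])         # assumed channel taps
print(fir_channel(x, h))              # the impulse response is the taps
```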
|G_E(f)| = 1 / |G_C(f)|
arg(G_E(f)) = −arg(G_C(f))

so that |G_E(f)| · |G_C(f)| = 1 and h_total(t) = δ(t).

=> Objective: find the inverse of the channel response, so that the cascade presents a delta channel to the Rx.
Applications (or standards) recommend the channel types that the receiver has to cope with.

[Figure: channel (Ch) followed by equalizer (Eq).]
No-ISI forcing condition on the equalizer taps c_n:

q(mT) = Σ_{n=−2}^{2} c_n x(mT − nτ) = 1 for m = 0, and 0 for m = ±1, ±2, ...
Example: 5-tap equalizer, 2/T sample rate.
The channel samples x(mT − nT/2) are arranged as a matrix X, with rows indexed by the output instants m = −2, ..., 2 and columns by the taps n = −2, ..., 2. With the tap vector C = [c_{−2}, c_{−1}, c_0, c_1, c_2]^T and the target response q = [0, 0, 1, 0, 0]^T, the no-ISI condition reads

X C = q   =>   C_opt = X^{−1} q
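The zero-forcing solution C_opt = X^{−1} q can be sketched numerically. The channel pulse x(t) chosen below is an arbitrary illustrative assumption, not the one from the slides; the point is only the structure: build X from samples x(mT − nT/2), then solve X C = q.

```python
import numpy as np

# Assumed, illustrative channel pulse x(t): a decaying sinc
x = lambda t: np.sinc(t) * 0.9 ** abs(t)

m = np.arange(-2, 3)                         # output sample instants (rows)
n = np.arange(-2, 3)                         # tap indices (columns)
X = np.array([[x(mi - ni / 2) for ni in n] for mi in m])

q = np.array([0.0, 0.0, 1.0, 0.0, 0.0])     # force q(0)=1, q(+-T)=q(+-2T)=0
C_opt = np.linalg.solve(X, q)               # C_opt = X^{-1} q

# The equalized response at the symbol instants meets the no-ISI condition
print(X @ C_opt)
```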
MSE Criterion
Unknown parameter θ (the equalizer filter response), received signal x[n], model signal h[n]:

J[θ] = Σ_{n=0}^{N−1} (x[n] − θ h[n])²
LS Algorithm
LMS Algorithm
LS
Least squares method:
- Unbiased estimator
- Exhibits minimum variance (optimal)
- No probabilistic assumptions (only a signal model)
- Presented by Gauss (1795) in his studies of planetary motion
LS - Theory
1. J[θ] = Σ_{n=0}^{N−1} (x[n] − θ h[n])²
2. Take the derivative with respect to θ: dJ/dθ = −2 Σ_{n=0}^{N−1} h[n] (x[n] − θ h[n])
3. Set the derivative to zero.
4. θ̂ = Σ_{n=0}^{N−1} x[n] h[n] / Σ_{n=0}^{N−1} h²[n]
MSE (back-up):
The minimum LS error is obtained by substituting the estimate (4) back into the cost function:

J_min = Σ_{n=0}^{N−1} x²[n] − (Σ_{n=0}^{N−1} x[n] h[n])² / Σ_{n=0}^{N−1} h²[n]

The first term is the energy of the original signal; the subtracted term is the energy of the fitted signal. If the noise is small enough (SNR large enough), J_min ≈ 0.
In vector form, with model x = Hθ:

J[θ] = (x − Hθ)^T (x − Hθ)
     = x^T x − x^T Hθ − θ^T H^T x + θ^T H^T Hθ
     = x^T x − 2 x^T Hθ + θ^T H^T Hθ     (x^T Hθ is a scalar)

∂J(θ)/∂θ = −2 H^T x + 2 H^T Hθ = 0

θ̂ = (H^T H)^{−1} H^T x
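The closed-form estimate θ̂ = (H^T H)^{−1} H^T x can be checked numerically. H, the true parameter vector, and the noise level below are illustrative assumptions chosen only to exercise the formula.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])           # assumed true parameters
H = rng.standard_normal((50, 2))             # assumed observation matrix
x = H @ theta_true + 0.01 * rng.standard_normal(50)  # noisy observations

# Normal equations: theta_hat = (H^T H)^{-1} H^T x
theta_hat = np.linalg.inv(H.T @ H) @ H.T @ x
print(theta_hat)                             # close to theta_true
```

In practice `np.linalg.lstsq(H, x)` is preferred over forming the explicit inverse, but the line above mirrors the derivation.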
LEAST-MEAN-SQUARE ALGORITHM
Contents:
- Introduction: approximating the steepest-descent algorithm
- Steepest-descent method
- Least-mean-square algorithm
- LMS algorithm convergence and stability
- Numerical example of channel equalization using LMS
- Summary

INTRODUCTION
- Introduced by Widrow & Hoff in 1959
- Simple; no matrix calculations are involved in the adaptation
- In the family of stochastic gradient algorithms
- An approximation of the steepest-descent method
- Based on the MMSE criterion (Minimum Mean Square Error)
- An adaptive process with two parts: a filtering process producing the output signal, and an adaptation process updating the taps
NOTATIONS
Estimation error: e(n) = d(n) − y(n)
Cost function: J = E[e(n) e*(n)]
Tap update: W[n+1] = W[n] + 0.5 μ (−∇J[n])

The gradient ∇J is a vector pointing in the direction of the change in filter coefficients that would cause the greatest increase in the error signal. Because the goal is to minimize the error, the filter coefficients are updated in the direction opposite to the gradient; that is why the gradient term is negated. The constant μ is a step size. After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge.
It is obvious that C1 = C2 = 0 gives the minimum of J = C1² + C2²:

dJ/dC1 = 2 C1,  dJ/dC2 = 2 C2

Stepping opposite the gradient with step size μ = 0.05 scales both coefficients by 1 − 2μ = 0.9 per iteration:

Iteration 1 (initial guess): C1 = 5,    C2 = 7
Iteration 2:                 C1 = 4.5,  C2 = 6.3
Iteration 3:                 C1 = 4.05, C2 = 5.67
......
Iteration 60:                C1 ≈ 0.01, C2 ≈ 0.013

lim_{n→∞} C1[n] = 0,  lim_{n→∞} C2[n] = 0   (the minimum)
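The iterates quoted above can be reproduced by a short sketch. The step size μ = 0.05 is inferred from the quoted numbers (each update scales the coefficients by 1 − 2μ = 0.9, so 5 → 4.5 → 4.05 → ...); it is not stated explicitly on the slide.

```python
# Steepest descent on J = C1^2 + C2^2 (gradient: dJ/dC1 = 2*C1, dJ/dC2 = 2*C2)
mu = 0.05
C = [5.0, 7.0]                                   # initial guess
history = [list(C)]
for n in range(60):
    grad = [2 * C[0], 2 * C[1]]                  # gradient of J at C
    C = [C[0] - mu * grad[0], C[1] - mu * grad[1]]
    history.append(list(C))

print(history[1])   # ~ [4.5, 6.3]
print(history[2])   # ~ [4.05, 5.67]
print(C)            # ~ [0.009, 0.013], approaching the minimum at (0, 0)
```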
For a filter with taps w(n), n = −N, ..., N:

MSE = E{[d(k) − Σ_{n=−N}^{N} w(n) u(k − n)]²}
    = E{d²(k)} − 2 Σ_{n=−N}^{N} w(n) P(n) + Σ_{n=−N}^{N} Σ_{m=−N}^{N} w(n) w(m) R(n − m)

Setting d(MSE)/dW(k) = 0 for every k yields the Wiener solution

w_opt = R^{−1} P

where R is the autocorrelation matrix of the input and P is the cross-correlation vector with the desired signal.
This calculation is complicated for a DSP (it requires computing an inverse matrix), and it can make the system unstable: if there are nulls in the spectrum, the inverse matrix can contain very large values. Moreover, the autocorrelation matrix of the input and the cross-correlation vector are not always known, so we would like to approximate them.
LMS ALGORITHM
W[n+1] = W[n] + μ {P̂ − R̂ W[n]}
       = W[n] + μ {u[n] d*[n] − u[n] u^H[n] W[n]}
       = W[n] + μ u[n] {d*[n] − y*[n]} = W[n] + μ u[n] e*[n]
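A minimal real-valued sketch of the LMS update w(n+1) = w(n) + μ u(n) e(n), applied to equalizing a simple FIR channel with a known training sequence. The channel taps, noise level, training length, and μ below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.choice([-1.0, 1.0], size=2000)                 # training symbols
u = np.convolve(d, [1.0, 0.3], mode="full")[:len(d)]   # assumed channel output
u += 0.01 * rng.standard_normal(len(d))                # additive noise

N = 5                                # equalizer length (number of taps)
w = np.zeros(N)                      # initial tap weights
mu = 0.01                            # step size

for n in range(N, len(d)):
    un = u[n - N + 1:n + 1][::-1]    # current input vector u[n], u[n-1], ...
    e = d[n] - w @ un                # error vs. desired training symbol
    w = w + mu * un * e              # LMS tap update
print(w)                             # taps approximate the channel inverse
```

No matrix inversion appears anywhere: each symbol costs one inner product and one scaled vector addition, which is the practical appeal of LMS.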
LMS STABILITY
The step size determines the algorithm's convergence rate. Too small a step size makes the algorithm take many iterations; too large a step size keeps the tap weights from converging.

Rule of thumb:
μ = 1 / (5 (2N + 1) P_R)
where N determines the equalizer length (2N + 1 taps) and P_R is the received power (signal + noise), which can be estimated in the receiver.
This graph illustrates the LMS algorithm. We start from a guess of the tap weights, then repeatedly step opposite the gradient vector to compute the next taps, until we reach the MMSE, i.e. an MSE of zero or very close to it. (In practice the error never reaches exactly zero because the noise is a random process; we can only decrease the error below a desired minimum.)
LMS Convergence vs. μ
Channel equalization example:
LMS advantages:
- Simplicity of implementation
- Does not neglect the noise, unlike the zero-forcing equalizer
- Bypasses the need to calculate an inverse matrix

LMS disadvantages:
- Slow convergence
- Requires a training sequence as a reference, thus decreasing the communication BW
As FIR:
G_E(z) = Y(z)/X(z) = Σ_i a_i z^{−i} = a_0 + a_1 z^{−1} + a_2 z^{−2} + ...
y(n) = a_0 x(n) + a_1 x(n − 1) + a_2 x(n − 2) + ...

As IIR (decision feedback):
[Figure: feedforward filter A(z) ahead of the receiver detector; the detector output is fed back through B(z) and subtracted.]

y(n) = Σ_i a_i x(n − i) − Σ_i b_i y(n − i)
G_E(z) = Y(z)/X(z) = (Σ_i a_i z^{−i}) / (1 + Σ_i b_i z^{−i})
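The two difference equations can be sketched directly. The coefficients a_i and b_i below are illustrative placeholders; the point is the structural difference: the FIR form uses only delayed inputs, while the IIR form also feeds back delayed outputs.

```python
def fir_equalize(x, a):
    # y(n) = sum_i a_i x(n-i): feedforward only
    return [sum(a[i] * x[n - i] for i in range(len(a)) if n - i >= 0)
            for n in range(len(x))]

def iir_equalize(x, a, b):
    # y(n) = sum_i a_i x(n-i) - sum_{i>=1} b_i y(n-i): with feedback
    y = []
    for n in range(len(x)):
        acc = sum(a[i] * x[n - i] for i in range(len(a)) if n - i >= 0)
        acc -= sum(b[i] * y[n - i] for i in range(1, len(b)) if n - i >= 0)
        y.append(acc)
    return y

x = [1.0, 0.0, 0.0, 0.0]                     # unit impulse
print(fir_equalize(x, [1.0, -0.3]))          # response dies out after 2 taps
print(iir_equalize(x, [1.0], [0.0, -0.5]))   # response decays but never ends
```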
Blind Equalization
[Figure: adaptive equalizer loop — the input I_n plus noise V_n enters the adaptive equalizer; its output Ĩ_n feeds a decision device producing d_n; the error signal e_n = d_n − Ĩ_n drives the tap adaptation, e.g. with LMS.]
But usually it also employs:
- Interleaving / de-interleaving
- Advanced coding
- ML criterion
Turbo Equalization
Iterative: estimate → equalize → decode → re-encode.
[Figure: a channel estimator feeds a MAP equalizer; the equalizer and a MAP decoder exchange extrinsic information L_e(c) / L(c) through an interleaver and de-interleaver; the decoder outputs L(d).]
Performance of Turbo Eq vs. Iterations
ML criterion
- MSE optimizes detection only up to first/second-order statistics.
- In Uri's class, optimum detection meant the strongest survivor and correlation (matched filter), which allow optimal performance for a delta channel and additive noise.
- Optimum detection maximizes the probability of detection (minimizes the error, i.e. the Euclidean distance in signal space).
ML criterion (cont.)
Maximum likelihood: maximizes the decision probability for the received trellis.
Example: BPSK (NRZI), s_1 = −s_0 = √E_b, r_k = ±√E_b + n_k.
The received signal is corrupted by AWGN with variance σ_n² = N_0 / 2:

p(r_k | s_0) = (1 / √(2π σ_n²)) exp(−(r_k − √E_b)² / (2 σ_n²))

The optimal detector chooses the transmitted sequence s^(m) that maximizes

p(r_1, r_2, ..., r_K | s^(m)) = Π_{k=1}^{K} p(r_k | s_k^(m))
ML (cont.)
Taking the logarithm, it can be shown that this is equivalent to minimizing the Euclidean distance metric of the sequence (called the metric):

D(r, s^(m)) = Σ_{k=1}^{K} (r_k − s_k^(m))²

Looks similar? While MSE minimizes the error (maximizes the probability) for a decision on a single symbol, MLSE minimizes the error (maximizes the probability) for a decision on a whole trellis of symbols.
Viterbi Equalizer
Example for NRZI. Transmitted symbols: ±√E_b
(0 = no change in the transmitted symbol, 1 = alter the symbol)

[Figure: two-state trellis (states S0, S1) at t = T, 2T, 3T, 4T; each branch is labelled bit/symbol, e.g. 0/√E_b for "no change" and 1/−√E_b for "alter". The metric of a path is the accumulated Euclidean distance along its branches.]

For the first two received samples, the path metrics have the form

D_0(b_1, b_2) = (r_1 − s_1)² + (r_2 − s_2)²,  with s_k ∈ {±√E_b} determined by the bits,

e.g. D_0(0, 0) = (r_1 − √E_b)² + (r_2 − √E_b)².
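A brute-force sketch of the quantity the Viterbi equalizer minimizes: enumerate candidate NRZI bit sequences, map each to a symbol sequence s_k ∈ {+√E_b, −√E_b}, and pick the one with the smallest accumulated Euclidean distance. (A Viterbi search over the trellis finds the same minimum without full enumeration. The received samples and the starting level are illustrative assumptions.)

```python
import numpy as np
from itertools import product

Eb = 1.0
A = np.sqrt(Eb)

def nrzi_symbols(bits, start=+1):
    # NRZI mapping: bit 0 = no change in transmitted symbol, bit 1 = alter it
    level, out = start, []
    for b in bits:
        if b == 1:
            level = -level
        out.append(level * A)
    return out

r = np.array([0.8, -1.1, 0.9])          # illustrative received samples
best = min(product([0, 1], repeat=len(r)),
           key=lambda bits: np.sum((r - nrzi_symbols(bits)) ** 2))
print(best)                             # ML bit sequence for these samples
```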