Lecture 7
Least Mean Square (LMS) Adaptive Filtering
Dr. Tahir Zaidi
Chapter 5 (builds on the method of Steepest Descent)
Basic Idea

No expectations, instantaneous samples! Steepest descent (SD) uses the true gradient of the mean-square error $J(n) = E[|e(n)|^2]$, which requires the correlation matrix $R = E[u(n)u^H(n)]$ and the cross-correlation vector $p = E[u(n)d^*(n)]$. LMS replaces these expectations with instantaneous single-sample estimates:

$$\hat{R}(n) = u(n)u^H(n), \qquad \hat{p}(n) = u(n)d^*(n)$$

The resulting gradient estimate

$$\hat{\nabla}J(n) = -2\hat{p}(n) + 2\hat{R}(n)\hat{w}(n) = -2u(n)e^*(n)$$

is the gradient of the instantaneous squared error $|e(n)|^2$ instead of the mean-square error $E[|e(n)|^2]$ as in SD.
LMS Algorithm

The instantaneous estimates $\hat{R}(n)$ and $\hat{p}(n)$ are unbiased: $E[\hat{R}(n)] = R$ and $E[\hat{p}(n)] = p$. Since the expectations are omitted, however, the estimates will have a high variance. Therefore, the recursive computation of each tap weight in the LMS algorithm suffers from gradient noise.
Substituting the gradient estimate into the SD recursion gives the LMS algorithm, a three-step loop over $n = 0, 1, 2, \dots$:

1. Filtering: $y(n) = \hat{w}^H(n)\,u(n)$
2. Error estimation: $e(n) = d(n) - y(n)$
3. Tap-weight adaptation: $\hat{w}(n+1) = \hat{w}(n) + \mu\, u(n)\,e^*(n)$

with initialization $\hat{w}(0) = 0$ when no prior knowledge of $w_o$ is available.
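A minimal NumPy sketch of this three-step loop; the function name and signal layout are my own illustration, not from the slides:

```python
import numpy as np

def lms(u, d, M, mu):
    """Adapt an M-tap LMS filter so that w^H u(n) tracks d(n).

    Returns the error signal e and the final tap-weight vector w.
    """
    w = np.zeros(M, dtype=complex)              # w_hat(0) = 0
    e = np.zeros(len(u), dtype=complex)
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]          # tap-input vector u(n)
        y = np.vdot(w, u_n)                     # filtering: y = w^H u
        e[n] = d[n] - y                         # error estimation
        w = w + mu * u_n * np.conj(e[n])        # tap-weight adaptation
    return e, w
```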
Canonical Model

(Block diagrams: the canonical model of the complex LMS filter, in which the complex algorithm is realized as a cross-coupled set of real LMS filters acting on the real and imaginary parts of the signals.)
Although the filter is a linear combiner, the algorithm is highly nonlinear: it violates both superposition and homogeneity. If input $u_1(n)$ produces output $y_1(n)$ and input $u_2(n)$ produces output $y_2(n)$, then input $u_1(n) + u_2(n)$ does not, in general, produce output $y_1(n) + y_2(n)$, because the tap weights themselves depend on the input, as the sketch below illustrates.
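A small demonstration of the failure of superposition; the signals and parameters here are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, mu = 500, 4, 0.05

u1 = rng.standard_normal(N)
u2 = rng.standard_normal(N)
d = rng.standard_normal(N)                  # arbitrary desired response

def lms_output(u, d, M, mu):
    """Return the LMS filter output sequence y(n) for real signals."""
    w = np.zeros(M)
    y = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]
        y[n] = w @ u_n
        w = w + mu * u_n * (d[n] - y[n])
    return y

y1 = lms_output(u1, d, M, mu)
y2 = lms_output(u2, d, M, mu)
y12 = lms_output(u1 + u2, d, M, mu)
print(np.max(np.abs(y12 - (y1 + y2))))      # clearly nonzero: no superposition
```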
Statistical Analysis

Let $\epsilon(n) = \hat{w}(n) - w_o$ denote the weight-error vector, and model the desired response by the multiple regression model $d(n) = w_o^H u(n) + e_o(n)$, where $e_o(n)$ is the estimation error of the optimum Wiener filter. We have

$$\epsilon(n+1) = \left(I - \mu\, u(n)u^H(n)\right)\epsilon(n) + \mu\, u(n)\,e_o^*(n)$$

Assumption III: The input and the desired response are jointly Gaussian.
Rotating into the eigenvector basis of $R = Q\Lambda Q^H$, i.e. $v(n) = Q^H\epsilon(n)$, and invoking the small-step-size approximation, then we have

$$v_k(n+1) = (1 - \mu\lambda_k)\,v_k(n) + \phi_k(n), \qquad k = 1, \dots, M$$

where $\phi(n) = \mu\, Q^H u(n)\,e_o^*(n)$ plays the role of a stochastic force. Components of $v(n)$ are uncorrelated! Iterating the recursion splits the solution into two parts:

$$v_k(n) = \underbrace{(1 - \mu\lambda_k)^n\, v_k(0)}_{\text{natural component of } v(n)} \;+\; \underbrace{\sum_{i=0}^{n-1} (1 - \mu\lambda_k)^{n-1-i}\,\phi_k(i)}_{\text{forced component of } v(n)}$$
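A one-mode simulation of this recursion (step size, eigenvalue, and forcing noise are assumed values), separating the decaying natural component from the persistent forced component:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, lam, N = 0.05, 1.0, 400
phi = mu * rng.standard_normal(N)            # stand-in for the stochastic force

v = np.zeros(N)
v[0] = 1.0                                   # v_k(0)
for n in range(N - 1):
    v[n + 1] = (1 - mu * lam) * v[n] + phi[n]

natural = (1 - mu * lam) ** np.arange(N)     # (1 - mu*lambda)^n v_k(0)
forced = v - natural                         # remainder driven by phi
print(natural[-1], np.var(forced[N // 2:]))  # decay to ~0; steady fluctuation
```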
Learning Curves

The learning curve plots the mean-square error $J(n) = E[|e(n)|^2]$ against $n$. Under the assumptions above it decomposes as

$$J(n) = J_{min} + J_{ex}(n), \qquad J_{ex}(n) = \sum_{k=1}^{M} \lambda_k\, E\!\left[|v_k(n)|^2\right]$$

For small $\mu$, use $(1 - \mu\lambda_k)^2 \approx 1 - 2\mu\lambda_k$: each term of $J_{ex}(n)$ then decays exponentially with time constant $\tau_k \approx 1/(2\mu\lambda_k)$, while the forced component leaves behind the steady-state excess MSE

$$J_{ex}(\infty) = J_{min} \sum_{k=1}^{M} \frac{\mu\lambda_k}{2 - \mu\lambda_k}, \qquad \text{or} \qquad J_{ex}(\infty) \approx \frac{\mu}{2}\, J_{min}\, \mathrm{tr}(R) \ \text{ for small } \mu$$
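In practice the expectation is estimated by ensemble averaging. A sketch under an assumed regression model (weights, noise level, and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
runs, N, M, mu = 200, 1000, 4, 0.01
wo = np.array([0.5, -0.3, 0.2, 0.1])        # assumed true weight vector

J = np.zeros(N)
for _ in range(runs):
    u = rng.standard_normal(N)
    w = np.zeros(M)
    for n in range(M - 1, N):
        u_n = u[n - M + 1:n + 1][::-1]
        d = wo @ u_n + 0.1 * rng.standard_normal()   # Jmin = 0.01
        e = d - w @ u_n
        J[n] += e**2                         # accumulate squared error
        w = w + mu * u_n * e
J /= runs                                    # ensemble-averaged learning curve
print(J[-1])                                 # settles near Jmin + Jex(inf)
```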
Convergence

For small $\mu$, $J_{ex}(n)$ converges: every mode decays, and the learning curve settles at the steady-state value $J(\infty) = J_{min} + J_{ex}(\infty)$.
Misadjustment

Misadjustment is defined as

$$\mathcal{M} = \frac{J_{ex}(\infty)}{J_{min}}$$

or equivalently, for small $\mu$,

$$\mathcal{M} = \frac{\mu}{2} \sum_{k=1}^{M} \lambda_k$$

but $\sum_{k}\lambda_k = \mathrm{tr}(R) = M\,r(0) = M\,E[|u(n)|^2]$, then

$$\mathcal{M} = \frac{\mu}{2}\, M\, E[|u(n)|^2]$$

but the average time constant of the learning curve is $\tau_{mse,av} = 1/(2\mu\lambda_{av})$ with $\lambda_{av} = \frac{1}{M}\sum_k \lambda_k$, then

$$\mathcal{M} = \frac{M}{4\,\tau_{mse,av}}$$
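A quick numeric check that the two closed forms agree (the eigenvalues below are an arbitrary assumption):

```python
import numpy as np

mu = 0.01
lam = np.array([1.0, 0.8, 0.5, 0.2])        # assumed eigenvalues of R; M = 4

mis_trace = mu / 2 * lam.sum()              # M = (mu/2) tr(R)
tau_av = 1 / (2 * mu * lam.mean())          # average time constant
mis_tau = len(lam) / (4 * tau_av)           # M = M / (4 tau_mse,av)
print(mis_trace, mis_tau)                   # both 0.0125
```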
Observations

Misadjustment is
- directly proportional to the filter length M, for a fixed $\tau_{mse,av}$
- inversely proportional to the time constant $\tau_{mse,av}$

The time constant $\tau_{mse,av}$ is
- inversely proportional to the step size $\mu$
LMS vs. SD

- LMS and SD are iterative algorithms designed to find $w_o$.
- SD has direct access to the auto/cross-correlations (exact measurements): it requires $R$ and $p$, and it achieves the minimum value of the MSE, $J_{min}$.
- LMS works from noisy instantaneous estimates of $R$ and $p$, and therefore settles at $J(\infty) = J_{min}(1 + \mathcal{M}) > J_{min}$.

Learning curves:
- SD has a well-defined curve composed of decaying exponentials.
- A single LMS run produces a noisy curve; a smooth curve is obtained by ensemble averaging, as in the comparison sketch below.
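A side-by-side sketch under assumed statistics (white unit-variance input, so R = I and p = R wo are known exactly for SD):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, mu = 500, 4, 0.02
wo = np.array([0.5, -0.3, 0.2, 0.1])        # assumed true weights
R = np.eye(M)                               # white unit-variance input
p = R @ wo

w_sd = np.zeros(M)
w_lms = np.zeros(M)
u = rng.standard_normal(N)
for n in range(M - 1, N):
    u_n = u[n - M + 1:n + 1][::-1]
    d = wo @ u_n + 0.1 * rng.standard_normal()
    w_sd = w_sd + mu * (p - R @ w_sd)       # SD: exact gradient, smooth decay
    e = d - w_lms @ u_n
    w_lms = w_lms + mu * u_n * e            # LMS: noisy instantaneous gradient

print(np.linalg.norm(w_sd - wo))            # tends smoothly to 0
print(np.linalg.norm(w_lms - wo))           # fluctuates about 0
```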
LMS Limits

Because of gradient noise, LMS needs a tighter step-size bound than the SD condition $0 < \mu < 2/\lambda_{max}$; a commonly used rule for convergence in the mean square is

$$0 < \mu < \frac{2}{\mathrm{tr}(R)} = \frac{2}{M\,E[|u(n)|^2]}$$
LMS Example

(Worked example with simulation plots.)
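As a stand-in for the plotted example, here is a classic experiment in the spirit of this chapter, with all parameters assumed: a one-tap LMS predictor of an AR(1) process, whose optimum weight equals the AR coefficient:

```python
import numpy as np

rng = np.random.default_rng(4)
N, mu, a = 5000, 0.02, 0.8                  # assumed AR coefficient a = 0.8

u = np.zeros(N)                             # AR(1): u(n) = a u(n-1) + noise
for n in range(1, N):
    u[n] = a * u[n - 1] + rng.standard_normal()

w = 0.0                                     # predict u(n) from u(n-1)
for n in range(1, N):
    e = u[n] - w * u[n - 1]                 # prediction error
    w = w + mu * u[n - 1] * e               # LMS update
print(w)                                    # hovers near the optimum wo = a = 0.8
```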
H∞ Optimality of LMS

The LMS algorithm can also be derived with no statistical assumptions at all, as the solution to the optimisation of an H∞ criterion: it minimises the worst-case energy gain from the disturbances to the estimation errors.

Provided that the step-size parameter satisfies the limits given earlier, then, no matter how different the initial weight vector $\hat{w}(0)$ is from the unknown parameter vector $w_o$ of the multiple regression model, and irrespective of the value of the additive disturbance $\nu(n)$, the error energy produced at the output of the LMS filter will never exceed a certain level.
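That level can be checked empirically. The specific inequality below, $\sum_n |\xi(n)|^2 \le \mu^{-1}\|\hat{w}(0) - w_o\|^2 + \sum_n |\nu(n)|^2$ for $\mu < 1/\max_n \|u(n)\|^2$, with $\xi(n)$ the undisturbed a priori error, is the standard H∞ result from the literature, assumed here rather than taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 2000, 4
wo = rng.standard_normal(M)                 # unknown regression parameters

U = rng.standard_normal((N, M))             # regressors u(n)
nu = 0.1 * rng.standard_normal(N)           # additive disturbance nu(n)
d = U @ wo + nu                             # multiple regression model

mu = 0.5 / np.max(np.sum(U**2, axis=1))     # mu < 1 / max ||u(n)||^2
w = np.zeros(M)                             # start far from wo
xi_energy = 0.0
for n in range(N):
    xi_energy += ((wo - w) @ U[n])**2       # undisturbed a priori error energy
    e = d[n] - w @ U[n]
    w = w + mu * U[n] * e                   # LMS update

bound = wo @ wo / mu + nu @ nu              # the guaranteed energy level
print(xi_energy <= bound)                   # True: never exceeded
```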