Lecture 3: Real-Time Parameter Estimation

Least Squares and Recursive Computations

Estimating Parameters in Dynamical Systems

Experimental Conditions

Examples
© Leonid Freidovich, April 13, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 3.
Preliminary Comments

In adaptive controllers the observations (data) are obtained sequentially in real time. This property puts a time constraint on the computation of estimates.

To accommodate the constraint, one should

- simplify the algorithms, if possible :)
- reorganize the computation in such a way that
  - updates of the estimates are only done when new observations are obtained,
  - estimates from previous time steps are used as an input for the algorithm computing the updates.

These considerations make the idea of recursive computations very attractive!
What Do We Expect from Recursive Computations?

Ideally, the procedure should be as follows:

Time 0:      An initial guess for the parameters \hat{\theta}(0) is given.
. . .
Time t:      \hat{\theta}(t-1) is the estimate from the previous step.
             New data [ y(t), \varphi(t) ] are obtained.
             Compute an updated estimate:

                 \hat{\theta}(t) = F( \hat{\theta}(t-1), y(t), \varphi(t) ).

Time (t+1):  . . .
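A minimal sketch of this loop in Python (the concrete update function F and the data source are placeholders for illustration, not part of the lecture):

    def run_recursive_estimator(theta0, F, data_stream):
        """Generic real-time estimation loop: one update per new observation."""
        theta = theta0                    # time 0: initial guess theta_hat(0)
        for y_t, phi_t in data_stream:    # new data [y(t), phi(t)] arrive sequentially
            theta = F(theta, y_t, phi_t)  # theta_hat(t) = F(theta_hat(t-1), y(t), phi(t))
            yield theta                   # the current estimate is available in real time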
Basis for Recursive LS Computing

The LS estimate at time t is computed as

    \hat{\theta}(t) = ( \Phi^T \Phi )^{-1} \Phi^T Y
                    = [ \sum_{i=1}^{t} \varphi(i)\varphi(i)^T ]^{-1} [ \sum_{i=1}^{t} \varphi(i) y(i) ]
                    = [ \sum_{i=1}^{t-1} \varphi(i)\varphi(i)^T + \varphi(t)\varphi(t)^T ]^{-1}
                      [ \sum_{i=1}^{t-1} \varphi(i) y(i) + \varphi(t) y(t) ]

The LS estimate at time (t-1) was computed as

    \hat{\theta}(t-1) = [ \sum_{i=1}^{t-1} \varphi(i)\varphi(i)^T ]^{-1} [ \sum_{i=1}^{t-1} \varphi(i) y(i) ]
                      = P(t-1) [ \sum_{i=1}^{t-1} \varphi(i) y(i) ],

where P(t-1) := [ \sum_{i=1}^{t-1} \varphi(i)\varphi(i)^T ]^{-1}. Hence, with P(t)^{-1} = P(t-1)^{-1} + \varphi(t)\varphi(t)^T,

    \hat{\theta}(t) = [ P(t-1)^{-1} + \varphi(t)\varphi(t)^T ]^{-1} [ P(t-1)^{-1} \hat{\theta}(t-1) + \varphi(t) y(t) ]
                    = P(t) [ P(t-1)^{-1} \hat{\theta}(t-1) + \varphi(t) y(t) ]
                    = P(t) [ ( P(t)^{-1} - \varphi(t)\varphi(t)^T ) \hat{\theta}(t-1) + \varphi(t) y(t) ]
                    = \hat{\theta}(t-1) - P(t)\varphi(t)\varphi(t)^T \hat{\theta}(t-1) + P(t)\varphi(t) y(t)
                    = \hat{\theta}(t-1) + P(t)\varphi(t) [ y(t) - \varphi(t)^T \hat{\theta}(t-1) ]

How can the computations be simplified, especially the step P(t-1) -> P(t)?
Basis for Recursive LS Computing

To summarize, the update at time t can be computed as

    \hat{\theta}(t) = \hat{\theta}(t-1) + K(t) [ y(t) - \varphi(t)^T \hat{\theta}(t-1) ]

where

    K(t) = P(t)\varphi(t),
    P(t) = [ P(t-1)^{-1} + \varphi(t)\varphi(t)^T ]^{-1}.

Let us now try to simplify the last formula for the P(t) update:

    ( A + B D )^{-1} = ??
Lemma:

Given matrices A, B, D of dimensions n x n, n x m, and m x n, respectively, if the n x n matrix A and the m x m matrix ( I_m + D A^{-1} B ) are nonsingular, i.e.

    det A \ne 0,    det( I_m + D A^{-1} B ) \ne 0,

then

- the n x n matrix ( A + B D ) is nonsingular, i.e. det( A + B D ) \ne 0, and
- its inverse can be computed as follows:

    ( A + B D )^{-1} = A^{-1} - A^{-1} B ( I_m + D A^{-1} B )^{-1} D A^{-1}
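A quick numerical sanity check of the lemma (a sketch; the sizes n = 4, m = 2 and the diagonally shifted random A are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 4, 2
    A = rng.normal(size=(n, n)) + n * np.eye(n)   # shift makes A safely nonsingular
    B = rng.normal(size=(n, m))
    D = rng.normal(size=(m, n))

    Ainv = np.linalg.inv(A)
    lhs = np.linalg.inv(A + B @ D)
    rhs = Ainv - Ainv @ B @ np.linalg.inv(np.eye(m) + D @ Ainv @ B) @ D @ Ainv
    assert np.allclose(lhs, rhs)  # both expressions agree to machine precision

In the RLS application below the inner inverse is 1 x 1, which is the whole point of applying the lemma.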
Sketch of the Proof:

Direct computation shows that

    ( A + B D ) [ A^{-1} - A^{-1} B ( I_m + D A^{-1} B )^{-1} D A^{-1} ]
      = I_n + B D A^{-1} - B ( I_m + D A^{-1} B )^{-1} D A^{-1}
             - B D A^{-1} B ( I_m + D A^{-1} B )^{-1} D A^{-1}
      = I_n + B D A^{-1} - B ( I_m + D A^{-1} B ) ( I_m + D A^{-1} B )^{-1} D A^{-1}
      = I_n + B D A^{-1} - B D A^{-1}
      = I_n
The formula

    ( A + B D )^{-1} = A^{-1} - A^{-1} B ( I_m + D A^{-1} B )^{-1} D A^{-1}

should be applied to the expression

    P(t) = [ P(t-1)^{-1} + \varphi(t)\varphi(t)^T ]^{-1}.

With A = P(t-1)^{-1}, B = \varphi(t), D = \varphi(t)^T (writing \varphi for \varphi(t)), we obtain

    P(t) = P(t-1) - P(t-1)\varphi [ 1 + \varphi^T P(t-1)\varphi ]^{-1} \varphi^T P(t-1),

where the bracket [ 1 + \varphi^T P(t-1)\varphi ] is a scalar, so no matrix inversion is needed.

We can simplify the computation of the gain K(t):

    K(t) = P(t)\varphi(t) = P(t-1)\varphi(t) [ 1 + \varphi(t)^T P(t-1)\varphi(t) ]^{-1}

and then

    P(t) = [ I - K(t)\varphi(t)^T ] P(t-1).
Theorem (Recursive Least Squares):

Assume that for all t \ge t_0 the excitation condition is valid, i.e.

    \Phi(t)^T \Phi(t) > 0.

Given \hat{\theta}(t_0) and P(t_0) = ( \Phi(t_0)^T \Phi(t_0) )^{-1}, the LS estimate satisfies the recursive equations

    \hat{\theta}(t) = \hat{\theta}(t-1) + K(t) [ y(t) - \varphi(t)^T \hat{\theta}(t-1) ]
    K(t) = P(t-1)\varphi(t) / [ 1 + \varphi(t)^T P(t-1)\varphi(t) ]
    P(t) = [ I - K(t)\varphi(t)^T ] P(t-1)
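A minimal implementation sketch of these recursions (function and variable names are illustrative; no numerical safeguards such as covariance resetting are included):

    import numpy as np

    def rls_update(theta, P, y_t, phi_t):
        """One step of recursive least squares.

        theta : (n,)   previous estimate theta_hat(t-1)
        P     : (n, n) previous matrix P(t-1)
        y_t   : float  new measurement y(t)
        phi_t : (n,)   new regressor phi(t)
        """
        denom = 1.0 + phi_t @ P @ phi_t      # scalar 1 + phi^T P phi
        K = (P @ phi_t) / denom              # gain K(t)
        eps = y_t - phi_t @ theta            # prediction error
        theta = theta + K * eps              # estimate update
        P = P - np.outer(K, phi_t @ P)       # P(t) = (I - K phi^T) P(t-1)
        return theta, P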
Comments:

- The equation

      \hat{\theta}(t) = \hat{\theta}(t-1) + K(t) [ y(t) - \varphi(t)^T \hat{\theta}(t-1) ]

  can be seen as a procedure that changes the value of the estimate only when the current value fails to predict the output, i.e. when

      y(t) \ne \varphi(t)^T \hat{\theta}(t-1).

- The excitation condition

      \Phi(t)^T \Phi(t) > 0,    t \ge t_0,

  implies that one needs to wait a number of time steps in order to initialize the recursive computations properly. In this case the initial conditions are

      P(t_0) = [ \Phi(t_0)^T \Phi(t_0) ]^{-1},
      \hat{\theta}(t_0) = P(t_0) \Phi(t_0)^T Y(t_0).

What to do if you would like to start the recursive computations at t = 0?
Modification for the start-up

Can we start with P(0) = P_0 > 0 and \hat{\theta}(0) = \theta_0?

Consider the modified loss function to be minimized:

    V_N(\theta) = (1/2) \sum_{i=1}^{N} [ y(i) - \varphi(i)^T \theta ]^2
                + (1/2) ( \theta - \theta_0 )^T P_0^{-1} ( \theta - \theta_0 )

where

- \theta_0 is the initial guess, and
- P_0^{-1} is the measure of our confidence in this guess.

Following the computations done for the case P_0^{-1} = 0:

    \hat{\theta}(N) = [ \Phi(N)^T \Phi(N) + P_0^{-1} ]^{-1} [ \Phi(N)^T Y(N) + P_0^{-1} \theta_0 ]
Modification for the start-up (cont'd)

Introducing the notation

    P(t) = [ \Phi(t)^T \Phi(t) + P_0^{-1} ]^{-1} = [ \sum_{i=0}^{t} \varphi(i)\varphi(i)^T + P_0^{-1} ]^{-1}

we have P(t) = [ P(t-1)^{-1} + \varphi(t)\varphi(t)^T ]^{-1} and

    \hat{\theta}(t) = P(t) [ \sum_{i=0}^{t} \varphi(i) y(i) + P_0^{-1} \theta_0 ]
                    = P(t) [ \sum_{i=0}^{t-1} \varphi(i) y(i) + P_0^{-1} \theta_0 + \varphi(t) y(t) ]
                    = P(t) [ P(t-1)^{-1} \hat{\theta}(t-1) + \varphi(t) y(t) ].

This is the same as for the usual Recursive Least Squares! Hence, the only modification is in the initial values.
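In code, this start-up modification is only a choice of initial values for the recursion above (a sketch reusing rls_update from the earlier example; the large diagonal P_0 is one illustrative way to encode low confidence in the initial guess):

    import numpy as np

    n = 4                          # number of parameters (illustrative)
    theta = np.zeros(n)            # initial guess theta_0
    P = 100.0 * np.eye(n)          # P_0 > 0: large values mean little trust in theta_0

    # for y_t, phi_t in data_stream:          # then run the unmodified recursion
    #     theta, P = rls_update(theta, P, y_t, phi_t)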
Designing Kalman Filter:

Consider the dynamical system

    \theta_k = \theta_{k-1},    y_k = C_k^T \theta_k + e_k,    k = 1, 2, 3, . . .

Here (with y_k = y(i), C_k = \varphi(i), e_k = e(i))

- \theta_k is the state vector,
- y_k is the vector of measurements,
- e_k is the noise with

      E e_k = 0,    E e_k^2 = Q,    E ( e_k e_m ) = 0 for k \ne m,

- the initial condition \theta_0 is independent of e_k for all k, and

      E \theta_0 = \bar{\theta}_0,    E ( \theta_0 - \bar{\theta}_0 )( \theta_0 - \bar{\theta}_0 )^T = P_0 > 0.

Let us determine the minimum-variance recursive estimator (Kalman filter) for this system.
Designing Kalman Filter (cont'd):

Predicting Step:

    \hat{\theta}_{k|k-1} = \hat{\theta}_{k-1|k-1}    ( the copy of the dynamics \theta_k = \theta_{k-1} )

Updating Step:

Given new data [ y_k, C_k ], we can improve \hat{\theta}_{k|k-1} by

    \hat{\theta}_{k|k} = \hat{\theta}_{k|k-1} + L_k [ y_k - C_k^T \hat{\theta}_{k|k-1} ].

Here L_k is a matrix parameter to be defined.

In the Kalman filter, L_k is chosen so that the covariance of the estimation error

    P_{k|k} := E [ \theta_k - \hat{\theta}_{k|k} ] [ \theta_k - \hat{\theta}_{k|k} ]^T

is minimal in some sense.
Designing Kalman Filter (cont'd):

The estimate \hat{\theta}_{k|k} can be expressed as follows:

    \hat{\theta}_{k|k} = \hat{\theta}_{k|k-1} + L_k [ y_k - C_k^T \hat{\theta}_{k|k-1} ]
                       = \hat{\theta}_{k|k-1} + L_k [ ( C_k^T \theta_k + e_k ) - C_k^T \hat{\theta}_{k|k-1} ]
                       = [ I - L_k C_k^T ] \hat{\theta}_{k|k-1} + L_k C_k^T \theta_k + L_k e_k,

so that the estimation error is \theta_k - \hat{\theta}_{k|k} = z_k - L_k e_k, where

    z_k = [ I - L_k C_k^T ] ( \theta_k - \hat{\theta}_{k|k-1} ).

Then for any matrix L_k we have

    P_{k|k} = E [ \theta_k - \hat{\theta}_{k|k} ] [ \theta_k - \hat{\theta}_{k|k} ]^T
            = E [ z_k - L_k e_k ] [ z_k - L_k e_k ]^T
            = E z_k z_k^T + E L_k e_k ( L_k e_k )^T
            = E z_k z_k^T + L_k Q L_k^T
            = [ I - L_k C_k^T ] E ( \theta_k - \hat{\theta}_{k|k-1} )( \theta_k - \hat{\theta}_{k|k-1} )^T [ I - L_k C_k^T ]^T + L_k Q L_k^T
            = [ I - L_k C_k^T ] P_{k|k-1} [ I - L_k C_k^T ]^T + L_k Q L_k^T
            = P_{k|k-1} - L_k C_k^T P_{k|k-1} - P_{k|k-1} C_k L_k^T + L_k [ Q + C_k^T P_{k|k-1} C_k ] L_k^T
            = W_0 + L_k W_1 + W_1^T L_k^T + L_k W_2 L_k^T

with W_0 = P_{k|k-1}, W_1 = -C_k^T P_{k|k-1}, W_2 = Q + C_k^T P_{k|k-1} C_k.

Completing the square with W_2 = X X^T and Y^T = X^{-1} W_1, i.e. W_1 = X Y^T (for instance X = W_2^{1/2} and Y^T = W_2^{-1/2} W_1):

    P_{k|k} = W_0 + L_k X Y^T + Y X^T L_k^T + L_k X X^T L_k^T
            = W_0 + ( L_k X + Y )( L_k X + Y )^T - Y Y^T,

which is minimized by making the square term vanish:

    L_k^{opt} = -Y X^{-1} = -W_1^T W_2^{-1},    P_{k|k}( L_k^{opt} ) = W_0 - Y Y^T = W_0 - W_1^T W_2^{-1} W_1.
Designing Kalman Filter (summary):

The optimal gain L_k that ensures the minimal variance P_{k|k} of the updated estimate \hat{\theta}_{k|k} is

    L_k^{opt} = -W_1^T W_2^{-1} = P_{k|k-1} C_k [ Q + C_k^T P_{k|k-1} C_k ]^{-1}.

The corresponding variance is

    P_{k|k}^{opt} = W_0 - W_1^T W_2^{-1} W_1
                  = P_{k|k-1} - P_{k|k-1} C_k [ Q + C_k^T P_{k|k-1} C_k ]^{-1} C_k^T P_{k|k-1}.

These expressions coincide with the modified recursive computations of the least-squares estimate for the case when the variance of the noise Q equals 1.
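A sketch of the resulting filter for a constant parameter vector with scalar measurements (names are illustrative; with Q = 1 the step reproduces the RLS recursion above):

    import numpy as np

    def kalman_update(theta, P, y_k, c_k, Q=1.0):
        """Measurement update for the static system theta_k = theta_{k-1}.

        The prediction step is trivial here, since the dynamics are the
        identity: theta_{k|k-1} = theta_{k-1|k-1} and P_{k|k-1} = P_{k-1|k-1}.
        """
        S = Q + c_k @ P @ c_k              # innovation variance Q + C^T P C (scalar)
        L = (P @ c_k) / S                  # optimal gain L_k
        theta = theta + L * (y_k - c_k @ theta)
        P = P - np.outer(L, c_k @ P)       # P - P C (Q + C^T P C)^{-1} C^T P
        return theta, P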
Recursive Least Squares with exponential forgetting

Consider the modified loss function to be minimized:

    V_t(\theta) = (1/2) \sum_{i=1}^{t} \lambda^{t-i} [ y(i) - \varphi(i)^T \theta ]^2
                + ( \lambda^t / 2 ) ( \theta - \theta_0 )^T P_0^{-1} ( \theta - \theta_0 )

with 0 < \lambda \le 1. One can obtain the following:

    \hat{\theta}(t) = \hat{\theta}(t-1) + K(t) [ y(t) - \varphi(t)^T \hat{\theta}(t-1) ]
    K(t) = P(t-1)\varphi(t) / [ \lambda + \varphi(t)^T P(t-1)\varphi(t) ]
    P(t) = [ I - K(t)\varphi(t)^T ] P(t-1) / \lambda
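A sketch of the forgetting-factor recursion (lam = 1 recovers plain RLS; smaller values discount old data faster but can make P grow when the excitation is poor):

    import numpy as np

    def rls_forgetting_update(theta, P, y_t, phi_t, lam=0.98):
        """One step of RLS with exponential forgetting, 0 < lam <= 1."""
        denom = lam + phi_t @ P @ phi_t              # lambda + phi^T P phi
        K = (P @ phi_t) / denom                      # gain K(t)
        theta = theta + K * (y_t - phi_t @ theta)    # estimate update
        P = (P - np.outer(K, phi_t @ P)) / lam       # (I - K phi^T) P(t-1) / lambda
        return theta, P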
Projection algorithm

Given the model y(t) = \varphi(t)^T \theta, an estimate \hat{\theta}(t-1) for \theta, and the value of y(t), find

    \hat{\theta}(t) = arg min { || \hat{\theta}(t) - \hat{\theta}(t-1) || : y(t) = \varphi(t)^T \hat{\theta}(t) }.

To solve the problem, let us minimize (with a Lagrange multiplier \eta)

    V( \theta, \eta ) = (1/2) [ \theta - \hat{\theta}(t-1) ]^T [ \theta - \hat{\theta}(t-1) ]
                      + \eta [ y(t) - \varphi(t)^T \theta ].

The conditions for the minimum are

    grad_\theta V( \theta, \eta ) = \theta - \hat{\theta}(t-1) - \eta \varphi(t) = 0    for \theta = \hat{\theta}(t),
    \partial V( \theta, \eta ) / \partial \eta = y(t) - \varphi(t)^T \theta = 0         for \theta = \hat{\theta}(t),

i.e.

    \hat{\theta}(t) - \hat{\theta}(t-1) - \eta \varphi(t) = 0,
    y(t) - \varphi(t)^T \hat{\theta}(t) = 0.
Projection algorithm (cont'd) / Gradient algorithm

Substituting \hat{\theta}(t) = \hat{\theta}(t-1) + \eta \varphi(t) into y(t) = \varphi(t)^T \hat{\theta}(t) and solving for \eta:

    \eta = [ y(t) - \varphi(t)^T \hat{\theta}(t-1) ] / [ \varphi(t)^T \varphi(t) ],

and substituting back, we have the Projection algorithm:

    \hat{\theta}(t) = \hat{\theta}(t-1) + \varphi(t) [ y(t) - \varphi(t)^T \hat{\theta}(t-1) ] / [ \varphi(t)^T \varphi(t) ].

Modifying the algorithm to avoid possible divisions by zero, we obtain the Gradient algorithm:

    \hat{\theta}(t) = \hat{\theta}(t-1) + \gamma \varphi(t) [ y(t) - \varphi(t)^T \hat{\theta}(t-1) ] / [ \alpha + \varphi(t)^T \varphi(t) ]

with 2 > \gamma > 0 and \alpha > 0.
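A sketch of the gradient step (the tuning values are illustrative; gamma = 1 with a vanishing alpha approaches the pure projection step):

    import numpy as np

    def gradient_update(theta, y_t, phi_t, gamma=1.0, alpha=1e-3):
        """One step of the gradient algorithm, 0 < gamma < 2, alpha > 0."""
        eps = y_t - phi_t @ theta                 # prediction error
        return theta + (gamma / (alpha + phi_t @ phi_t)) * phi_t * eps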
Continuous-time Models

Consider the regression model

    y(\tau) = \varphi(\tau)^T \theta_0

with \varphi defined on [0, t].

To compute the estimate \hat{\theta}(t) of \theta_0, we minimize the function

    V_t(\theta) = \int_0^t ( e^{-\alpha (t-\tau)} / 2 ) [ y(\tau) - \varphi(\tau)^T \theta ]^2 d\tau
                + ( e^{-\alpha t} / 2 ) ( \theta - \theta_0 )^T P_0^{-1} ( \theta - \theta_0 ),

where y(\tau) - \varphi(\tau)^T \theta is the prediction error, the inverse of P_0 = P_0^T > 0 defines how much we trust the initial guess \theta_0, and \alpha \ge 0 is the forgetting factor.

The condition for a minimum at \theta = \hat{\theta}(t) is

    0 = grad_\theta V_t(\theta)
      = \int_0^t e^{-\alpha (t-\tau)} [ -\varphi(\tau) y(\tau) + \varphi(\tau)\varphi(\tau)^T \hat{\theta}(t) ] d\tau
        + e^{-\alpha t} P_0^{-1} [ \hat{\theta}(t) - \theta_0 ],

which, after multiplication by e^{\alpha t}, becomes

    [ \int_0^t e^{\alpha \tau} \varphi(\tau)\varphi(\tau)^T d\tau + P_0^{-1} ] \hat{\theta}(t)
      = \int_0^t e^{\alpha \tau} \varphi(\tau) y(\tau) d\tau + P_0^{-1} \theta_0.
Least Squares Algorithm for continuous-time

Introduce the following notation:

    R(t) = \int_0^t e^{\alpha \tau} \varphi(\tau)\varphi(\tau)^T d\tau + P_0^{-1}.

Then

    R(t) \hat{\theta}(t) = \int_0^t e^{\alpha \tau} \varphi(\tau) y(\tau) d\tau + P_0^{-1} \theta_0.

Differentiating with respect to t, we obtain the updating law

    [ d/dt R(t) ] \hat{\theta}(t) + R(t) [ d/dt \hat{\theta}(t) ] = e^{\alpha t} \varphi(t) y(t).

Solving for d/dt \hat{\theta}(t) and substituting d/dt R(t) = e^{\alpha t} \varphi(t)\varphi(t)^T:

    d/dt \hat{\theta}(t) = R(t)^{-1} [ e^{\alpha t} \varphi(t) y(t) - e^{\alpha t} \varphi(t)\varphi(t)^T \hat{\theta}(t) ].

Introducing P(t) = e^{\alpha t} R(t)^{-1}, this becomes

    d/dt \hat{\theta}(t) = P(t) \varphi(t) [ y(t) - \varphi(t)^T \hat{\theta}(t) ];    \hat{\theta}(0) = \theta_0.

Differentiating P(t) R(t) = e^{\alpha t} I:

    [ d/dt P(t) ] R(t) + P(t) [ d/dt R(t) ] = \alpha e^{\alpha t} I.

Substituting d/dt R(t) = e^{\alpha t} \varphi(t)\varphi(t)^T and R(t) = P(t)^{-1} e^{\alpha t}:

    [ d/dt P(t) ] P(t)^{-1} e^{\alpha t} + P(t) e^{\alpha t} \varphi(t)\varphi(t)^T = \alpha e^{\alpha t} I.

Canceling e^{\alpha t} \ne 0 and multiplying by P(t) from the right, we finally obtain

    d/dt P(t) = \alpha P(t) - P(t) \varphi(t)\varphi(t)^T P(t);    P(0) = P_0.
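A sketch of these two update laws integrated with a forward-Euler step (the step size dt and gain alpha are illustrative; a proper ODE solver may be preferable in practice):

    import numpy as np

    def continuous_ls_step(theta, P, y_t, phi_t, alpha=0.1, dt=1e-3):
        """One Euler step of the continuous-time least-squares update laws."""
        eps = y_t - phi_t @ theta                        # prediction error
        dtheta = P @ phi_t * eps                         # theta' = P phi (y - phi^T theta)
        dP = alpha * P - P @ np.outer(phi_t, phi_t) @ P  # P' = alpha P - P phi phi^T P
        return theta + dt * dtheta, P + dt * dP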
Continuous-time dynamical systems

Consider the input-output model with input u(t) and output y(t):

    d^n y / dt^n + a_1 d^{n-1} y / dt^{n-1} + . . . + a_n y
      = b_1 d^{m-1} u / dt^{m-1} + . . . + b_m u,

i.e. A(p) y(t) = B(p) u(t), where n \ge m and p is the differentiation operator: p y(t) = dy/dt.

How do we estimate the vector of parameters

    \theta_0 = [ a_1, . . . , a_n, b_1, . . . , b_m ]^T

from the measured data { [ y(\tau), u(\tau) ] : \tau \in [0, t] }?

It would be easy to rewrite the model in the standard regression form, but there is one problem: differentiation cannot be realized as a proper transfer function.
Regression for continuous-time dynamical systems

Trick: introduce a filter H_f(p):

    A(p) y(t) = B(p) u(t)    <=>    A(p) H_f(p) y(t) = B(p) H_f(p) u(t),

and define the filtered signals y_f(t) = H_f(p) y(t) and u_f(t) = H_f(p) u(t).

With a stable minimum-phase filter of relative degree n or higher, such as

    H_f(s) = 1 / ( s^n + \beta_1 s^{n-1} + . . . + \beta_n ),

we have

    A(p) y(t) = B(p) u(t)    <=>    A(p) y_f(t) = B(p) u_f(t),

where the derivatives of orders 1, . . . , n of y_f(t) and u_f(t) can be realized as proper transfer functions.
Now, since

    p^n y_f(t) = -a_1 p^{n-1} y_f(t) - . . . - a_n y_f(t)
                 + b_1 p^{m-1} u_f(t) + . . . + b_m u_f(t),

the regression model is

    p^n y_f(t) = \varphi(t)^T \theta_0,
    \varphi(t)^T = [ -p^{n-1} y_f(t), . . . , -y_f(t), p^{m-1} u_f(t), . . . , u_f(t) ],

and the estimation scheme is as before: any of the algorithms above, driven by the regressor \varphi(t) and the "output" p^n y_f(t), can be applied (see the sketch below).
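A sketch of how the filtered signals and their derivatives might be generated in discrete time, simulating each proper transfer function p^k H_f(p) with scipy (the coefficients, orders, and names are illustrative assumptions):

    import numpy as np
    from scipy.signal import lsim, lti

    def filtered_derivatives(sig, tgrid, den, n):
        """Return [H_f(p) sig, p H_f(p) sig, ..., p^n H_f(p) sig].

        den holds the coefficients of s^n + beta_1 s^{n-1} + ... + beta_n
        (highest power first); each p^k H_f(p) is proper since k <= n.
        """
        outs = []
        for k in range(n + 1):
            num = [1.0] + [0.0] * k                    # numerator s^k
            _, yk, _ = lsim(lti(num, den), sig, tgrid)
            outs.append(yk)
        return outs

The regressor \varphi(t) is then assembled from minus the filtered output derivatives of orders n-1, . . . , 0 and the filtered input derivatives of orders m-1, . . . , 0.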
Next Lecture / Assignments:

- Next lecture (April 14, 10:00-12:00, in A206Tekn):
  Convergence and Persistent Excitation.

- Derive the formulae for the recursive least squares algorithm with exponential forgetting.
Another homework problem:

Consider the discrete-time system y(t) = H(q) u(t) represented by the transfer function

    H(z) = ( b_1 z + b_2 ) / ( z^2 + a_1 z + a_2 ).

- Write a recursive least squares algorithm with exponential forgetting to estimate the parameters { a_1, a_2, b_1, b_2 }.
- Simulate your algorithm with the true parameters of the system a_1 = a_2 = 0.5, b_1 = 0, b_2 = 1. Study the performance of the algorithm for
  - different initial conditions for the parameter estimates (try zero initial conditions and at least one other choice),
  - different values of the forgetting factor \lambda (at least three values, including \lambda = 1),
  - a unit step input and a square wave of unit amplitude and a period of 10 samples.
- Discuss the simulation results.
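A possible starting point for the simulation part (a sketch only, reusing rls_forgetting_update from the earlier example; the regressor follows from the difference equation y(t) = -a_1 y(t-1) - a_2 y(t-2) + b_1 u(t-1) + b_2 u(t-2)):

    import numpy as np

    def simulate_and_estimate(u, a1=0.5, a2=0.5, b1=0.0, b2=1.0, lam=0.95):
        """Simulate y(t) = H(q) u(t) and run RLS with forgetting on the data."""
        N = len(u)
        y = np.zeros(N)
        theta = np.zeros(4)                  # estimates of [a1, a2, b1, b2]
        P = 100.0 * np.eye(4)
        history = np.zeros((N, 4))
        for t in range(2, N):
            y[t] = -a1 * y[t-1] - a2 * y[t-2] + b1 * u[t-1] + b2 * u[t-2]
            phi = np.array([-y[t-1], -y[t-2], u[t-1], u[t-2]])
            theta, P = rls_forgetting_update(theta, P, y[t], phi, lam)
            history[t] = theta
        return y, history

    # u = np.ones(200)                                               # unit step input
    # u = np.sign(np.sin(2 * np.pi * (np.arange(200) + 0.5) / 10))   # square wave, period 10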
