
## 13.4 Scalar Kalman Filter

### Data Model

To derive the Kalman filter we need the data model:

$$s[n] = a\,s[n-1] + u[n] \qquad \text{(State Equation)}$$

$$x[n] = s[n] + w[n] \qquad \text{(Observation Equation)}$$

Assumptions:

1. $u[n]$ is zero-mean, white, Gaussian, with $E\{u^2[n]\} = \sigma_u^2$
2. $w[n]$ is zero-mean, white, Gaussian, with $E\{w^2[n]\} = \sigma_n^2$ (can vary with time)
3. The initial state is $s[-1] \sim \mathcal{N}(\mu_s, \sigma_s^2)$
4. $u[n]$, $w[n]$, and $s[-1]$ are all independent of each other

To simplify the derivation, let $\mu_s = 0$ (we'll account for this later).
### Goal and Two Properties

Goal: recursively compute

$$\hat{s}[n|n] = E\{\,s[n] \mid x[0], x[1], \ldots, x[n]\,\}$$

Notation:

$$\mathbf{X}[n] = \big[x[0], x[1], \ldots, x[n]\big]^T$$

$\mathbf{X}[n]$ is the set of all observations; $x[n]$ is a single observation.

Two Properties We Need:

1. For the jointly Gaussian case, the MMSE estimator of a zero-mean $\theta$ based on two uncorrelated data vectors $\mathbf{x}_1$ and $\mathbf{x}_2$ is (see p. 350 of the text)

$$\hat{\theta} = E\{\theta \mid \mathbf{x}_1, \mathbf{x}_2\} = E\{\theta \mid \mathbf{x}_1\} + E\{\theta \mid \mathbf{x}_2\}$$

2. If $\theta = \theta_1 + \theta_2$, then the MMSE estimator is

$$\hat{\theta} = E\{\theta \mid \mathbf{x}\} = E\{\theta_1 + \theta_2 \mid \mathbf{x}\} = E\{\theta_1 \mid \mathbf{x}\} + E\{\theta_2 \mid \mathbf{x}\}$$

(a result of the linearity of the $E\{\cdot\}$ operator)
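Property #1 can be checked numerically from the closed-form zero-mean Gaussian MMSE estimator $E\{\theta\mid\mathbf{x}\} = \mathbf{C}_{\theta x}\mathbf{C}_{xx}^{-1}\mathbf{x}$: when the data pieces are uncorrelated, $\mathbf{C}_{xx}$ is block diagonal and the joint estimator splits into the two individual ones. A minimal sketch (the setup $\theta = x_1 + x_2 + v$ and all variable names are illustrative, not from the text):

```python
import numpy as np

# Zero-mean jointly Gaussian setup: theta = x1 + x2 + v, with x1, x2, v
# independent, so x1 and x2 are uncorrelated data and cov(theta, xi) = var(xi).
var_x1, var_x2 = 2.0, 3.0

# Joint estimator weights: E{theta | x1, x2} = C_theta_x C_xx^{-1} [x1, x2]^T.
C_theta_x = np.array([var_x1, var_x2])
C_xx = np.diag([var_x1, var_x2])          # uncorrelated data -> block diagonal
w_joint = C_theta_x @ np.linalg.inv(C_xx)

# Individual estimators: E{theta | x1} = (cov(theta, x1)/var(x1)) x1, same for x2.
w_sep = np.array([var_x1 / var_x1, var_x2 / var_x2])

# Property #1: the joint MMSE estimator is the sum of the two individual ones.
assert np.allclose(w_joint, w_sep)
```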
### Derivation of Scalar Kalman Filter

Innovation:

$$\tilde{x}[n] = x[n] - \hat{x}[n|n-1]$$

Recall from Section 12.6: $\hat{x}[n|n-1]$ is the MMSE estimate of $x[n]$ given $\mathbf{X}[n-1]$ (prediction!!).

By the MMSE Orthogonality Principle:

$$E\big\{\tilde{x}[n]\,\mathbf{X}[n-1]\big\} = \mathbf{0}$$

That is, $\tilde{x}[n]$ is the part of $x[n]$ that is uncorrelated with the previous data.

Now note: $\mathbf{X}[n]$ is equivalent to $\{\mathbf{X}[n-1], \tilde{x}[n]\}$. Why? Because we can get $\mathbf{X}[n]$ from it as follows:

$$\mathbf{X}[n] = \begin{bmatrix} \mathbf{X}[n-1] \\ x[n] \end{bmatrix} = \begin{bmatrix} \mathbf{X}[n-1] \\ \tilde{x}[n] + \hat{x}[n|n-1] \end{bmatrix}, \qquad \hat{x}[n|n-1] = \sum_{k=0}^{n-1} a_k\,x[k]$$
### What have we done so far?

We have shown that $\mathbf{X}[n] \Leftrightarrow \{\mathbf{X}[n-1], \tilde{x}[n]\}$.

We have split the current data set into two parts:

1. Old data
2. The uncorrelated part of the new data (just the "new facts")

Because of this,

$$\hat{s}[n|n] = E\{s[n] \mid \mathbf{X}[n]\} = E\{s[n] \mid \mathbf{X}[n-1], \tilde{x}[n]\}$$

So what??!! We can now exploit Property #1!!

$$\hat{s}[n|n] = \underbrace{E\{s[n] \mid \mathbf{X}[n-1]\}}_{\substack{\hat{s}[n|n-1]: \text{ prediction of } s[n] \\ \text{based on past data}}} + \underbrace{E\{s[n] \mid \tilde{x}[n]\}}_{\substack{\text{update based on innovation} \\ \text{part of new data}}}$$

Now we need to look more closely at each of these!
### Look at the Prediction Term $\hat{s}[n|n-1]$

Use the Dynamical Model: it is the key to prediction because it tells us how the state should progress from instant to instant.

$$\hat{s}[n|n-1] = E\{s[n] \mid \mathbf{X}[n-1]\} = E\{a\,s[n-1] + u[n] \mid \mathbf{X}[n-1]\}$$

Now use Property #2:

$$\hat{s}[n|n-1] = a\,\underbrace{E\{s[n-1] \mid \mathbf{X}[n-1]\}}_{=\,\hat{s}[n-1|n-1] \text{ by definition}} + \underbrace{E\{u[n] \mid \mathbf{X}[n-1]\}}_{=\,E\{u[n]\}\,=\,0}$$

The second term vanishes by the independence of $u[n]$ and $\mathbf{X}[n-1]$; see the bottom of p. 433 in the textbook.

$$\hat{s}[n|n-1] = a\,\hat{s}[n-1|n-1]$$

The Dynamical Model provides the update from estimate to prediction!!
### Look at the Update Term $E\{s[n] \mid \tilde{x}[n]\}$

Use the form for the Gaussian MMSE estimate:

$$E\{s[n] \mid \tilde{x}[n]\} = \underbrace{\frac{E\{s[n]\,\tilde{x}[n]\}}{E\{\tilde{x}^2[n]\}}}_{\triangleq\,k[n]}\;\tilde{x}[n], \qquad \tilde{x}[n] = x[n] - \hat{x}[n|n-1]$$

The prediction shows up again!!! By Property #2,

$$\hat{x}[n|n-1] = \hat{s}[n|n-1] + \underbrace{\hat{w}[n|n-1]}_{=\,0}$$

where $\hat{w}[n|n-1] = 0$ because $w[n]$ is independent of $\{x[0], \ldots, x[n-1]\}$. So

$$E\{s[n] \mid \tilde{x}[n]\} = k[n]\big(x[n] - \hat{s}[n|n-1]\big)$$

Put these results together:

$$\hat{s}[n|n] = \underbrace{\hat{s}[n|n-1]}_{=\,a\,\hat{s}[n-1|n-1]} + k[n]\big(x[n] - \hat{s}[n|n-1]\big)$$

This is the Kalman Filter! How do we get the gain?
### Look at the Gain Term

We need two properties.

**A.**

$$E\big\{s[n]\,(x[n] - \hat{s}[n|n-1])\big\} = E\big\{(s[n] - \hat{s}[n|n-1])(x[n] - \hat{s}[n|n-1])\big\}$$

Aside: $\langle x, y\rangle = \langle x + z, y\rangle$ for any $z \perp y$. Here $x[n] - \hat{s}[n|n-1] = x[n] - \hat{x}[n|n-1] = \tilde{x}[n]$ is the innovation, while $\hat{s}[n|n-1]$ is a linear combination of the past data and is therefore orthogonal to the innovation.

**B.**

$$E\big\{w[n]\,(s[n] - \hat{s}[n|n-1])\big\} = 0$$

Proof: $w[n]$ is the measurement noise and by assumption is independent of the dynamical driving noise $u[n]$ and of $s[-1]$. In other words, $w[n]$ is independent of everything dynamical, so $E\{w[n]\,s[n]\} = 0$. Also, $\hat{s}[n|n-1]$ is based on past data, which include $\{w[0], \ldots, w[n-1]\}$, and since the measurement noise has independent samples, $\hat{s}[n|n-1] \perp w[n]$.
So we start with the gain as defined above:

$$
\begin{aligned}
k[n] &= \frac{E\{s[n]\,\tilde{x}[n]\}}{E\{\tilde{x}^2[n]\}}
      = \frac{E\{s[n]\,(x[n] - \hat{s}[n|n-1])\}}{E\{(x[n] - \hat{s}[n|n-1])^2\}}
      && \text{plug in for the innovation} \\[4pt]
     &= \frac{E\{(s[n] - \hat{s}[n|n-1])(x[n] - \hat{s}[n|n-1])\}}{E\{(s[n] + w[n] - \hat{s}[n|n-1])^2\}}
      && \text{Prop. A in num.; } x[n] = s[n] + w[n] \text{ in den.} \quad (!) \\[4pt]
     &= \frac{E\{(s[n] - \hat{s}[n|n-1])(s[n] + w[n] - \hat{s}[n|n-1])\}}{E\{(s[n] - \hat{s}[n|n-1])^2\} + 2\,E\{(s[n] - \hat{s}[n|n-1])\,w[n]\} + E\{w^2[n]\}}
      && x[n] = s[n] + w[n] \text{ in num.; expand den.} \quad (!!) \\[4pt]
     &= \frac{E\{(s[n] - \hat{s}[n|n-1])^2\} + \overbrace{E\{(s[n] - \hat{s}[n|n-1])\,w[n]\}}^{=\,0 \text{ by Prop. B}}}{E\{(s[n] - \hat{s}[n|n-1])^2\} + \sigma_n^2} \\[4pt]
     &= \frac{M[n|n-1]}{M[n|n-1] + \sigma_n^2}
\end{aligned}
$$

where $M[n|n-1] \triangleq E\{(s[n] - \hat{s}[n|n-1])^2\}$ is the MSE when $s[n]$ is estimated by the 1-step prediction. (The cross-term in the denominator also vanishes by Prop. B.)
This gives a form for the gain:

$$k[n] = \frac{M[n|n-1]}{\sigma_n^2 + M[n|n-1]}$$

This balances the quality of the measured data against the quality of the predicted state.

In the Kalman filter the prediction acts like the prior information about the state at time $n$ before we observe the data at time $n$.
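The balancing behavior can be seen directly by evaluating the gain formula at the two extremes; a minimal sketch (the numerical values are illustrative):

```python
def kalman_gain(M_pred, var_w):
    """Scalar Kalman gain k[n] = M[n|n-1] / (sigma_n^2 + M[n|n-1])."""
    return M_pred / (var_w + M_pred)

M_pred = 1.0
# Noisy measurement (large sigma_n^2): small gain, lean on the prediction.
assert kalman_gain(M_pred, var_w=100.0) < 0.01
# Clean measurement (small sigma_n^2): gain near 1, lean on the new data.
assert kalman_gain(M_pred, var_w=1e-4) > 0.999
# For positive variances the gain lies strictly between 0 and 1, so the
# update is a convex combination of prediction and measurement.
assert 0.0 < kalman_gain(1.0, 1.0) < 1.0
```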
### Look at the Prediction MSE Term

But now we need to know how to find $M[n|n-1]$!!! Use the dynamical model and exploit the form of the prediction:

$$
\begin{aligned}
M[n|n-1] &= E\big\{(s[n] - \hat{s}[n|n-1])^2\big\} \\
         &= E\big\{(a\,s[n-1] + u[n] - a\,\hat{s}[n-1|n-1])^2\big\} \\
         &= E\big\{\big(a\,(s[n-1] - \hat{s}[n-1|n-1]) + u[n]\big)^2\big\}
\end{aligned}
$$

The cross-terms are zero, so

$$M[n|n-1] = a^2\,M[n-1|n-1] + \sigma_u^2$$

where $M[n-1|n-1]$ is the estimation error at the previous time.

Why are the cross-terms zero? Two parts:

1. $s[n-1]$ depends on $\{u[0], \ldots, u[n-1], s[-1]\}$, which are independent of $u[n]$.
2. $\hat{s}[n-1|n-1]$ depends on $\{s[0]+w[0], \ldots, s[n-1]+w[n-1]\}$, which are independent of $u[n]$.
### Look at a Recursion for the MSE Term M[n|n]

By definition,

$$M[n|n] = E\big\{(s[n] - \hat{s}[n|n])^2\big\} = E\Big\{\big(\underbrace{s[n] - \hat{s}[n|n-1]}_{\text{Term } A} - \underbrace{k[n]\,(x[n] - \hat{s}[n|n-1])}_{\text{Term } B}\big)^2\Big\}$$

Now we'll get three terms: $E\{A^2\}$, $E\{AB\}$, and $E\{B^2\}$.

$$E\{A^2\} = M[n|n-1]$$

$$2\,E\{AB\} = 2\,k[n]\,E\big\{(s[n] - \hat{s}[n|n-1])(x[n] - \hat{s}[n|n-1])\big\} = 2\,k[n]\,M[n|n-1]$$

(from (!!), the expectation here is the numerator of $k[n]$, which equals $M[n|n-1]$)

$$E\{B^2\} = k^2[n]\,E\big\{(x[n] - \hat{s}[n|n-1])^2\big\} = k^2[n]\,\big(\text{Den. of } k[n]\big) = k[n]\,\big(\text{Num. of } k[n]\big) = k[n]\,M[n|n-1]$$

(from (!), the expectation here is the denominator of $k[n]$; recall $k[n] = \dfrac{M[n|n-1]}{\sigma_n^2 + M[n|n-1]}$)
So this gives

$$M[n|n] = M[n|n-1] - 2\,k[n]\,M[n|n-1] + k[n]\,M[n|n-1]$$

$$M[n|n] = \big(1 - k[n]\big)\,M[n|n-1]$$

Putting all of these results together gives some very simple equations to iterate, called the Kalman Filter.

We have just derived the form for Scalar State & Scalar Observation. On the next three charts we give the Kalman Filter equations for:

- Scalar State & Scalar Observation
- Vector State & Scalar Observation
- Vector State & Vector Observation
### Kalman Filter: Scalar State & Scalar Observation

State Model: $s[n] = a\,s[n-1] + u[n]$, where $u[n]$ is WGN, WSS, $u[n] \sim \mathcal{N}(0, \sigma_u^2)$

Observation Model: $x[n] = s[n] + w[n]$, where $w[n]$ is WGN, $w[n] \sim \mathcal{N}(0, \sigma_n^2)$ (varies with $n$)

Must know: $\mu_s$, $\sigma_s^2$, $a$, $\sigma_u^2$, $\sigma_n^2$

Initialization:

$$\hat{s}[-1|-1] = E\{s[-1]\} = \mu_s, \qquad M[-1|-1] = E\big\{(s[-1] - \hat{s}[-1|-1])^2\big\} = \sigma_s^2$$

Prediction:

$$\hat{s}[n|n-1] = a\,\hat{s}[n-1|n-1]$$

Prediction MSE:

$$M[n|n-1] = a^2\,M[n-1|n-1] + \sigma_u^2$$

Kalman Gain:

$$K[n] = \frac{M[n|n-1]}{\sigma_n^2 + M[n|n-1]}$$

Update:

$$\hat{s}[n|n] = \hat{s}[n|n-1] + K[n]\big(x[n] - \hat{s}[n|n-1]\big)$$

Estimation MSE:

$$M[n|n] = \big(1 - K[n]\big)\,M[n|n-1]$$
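The five equations on this chart iterate directly. Here is a minimal sketch in Python; the simulation parameters are illustrative, not from the text:

```python
import numpy as np

def scalar_kalman_filter(x, a, var_u, var_w, mu_s, var_s):
    """Scalar-state / scalar-observation Kalman filter for the model
    s[n] = a*s[n-1] + u[n],  x[n] = s[n] + w[n].
    Returns the filtered estimates s_hat[n|n] and their MSEs M[n|n]."""
    s_est, M_est = mu_s, var_s                 # init: s_hat[-1|-1], M[-1|-1]
    s_hats, Ms = [], []
    for xn in x:
        s_pred = a * s_est                     # prediction
        M_pred = a**2 * M_est + var_u          # prediction MSE
        K = M_pred / (var_w + M_pred)          # Kalman gain
        s_est = s_pred + K * (xn - s_pred)     # update with the innovation
        M_est = (1.0 - K) * M_pred             # estimation MSE
        s_hats.append(s_est)
        Ms.append(M_est)
    return np.array(s_hats), np.array(Ms)

# Simulate the model and filter it (illustrative parameter values).
rng = np.random.default_rng(1)
a, var_u, var_w = 0.95, 0.1, 1.0
N = 200
s = np.zeros(N)
s_prev = rng.normal(0.0, 1.0)                  # s[-1] ~ N(0, 1)
for n in range(N):
    s_prev = a * s_prev + rng.normal(0.0, np.sqrt(var_u))
    s[n] = s_prev
x = s + rng.normal(0.0, np.sqrt(var_w), N)

s_hat, M = scalar_kalman_filter(x, a, var_u, var_w, mu_s=0.0, var_s=1.0)
# Filtering should beat using the raw measurements alone.
assert np.mean((s_hat - s)**2) < np.mean((x - s)**2)
```

Note that the MSE sequence `M` depends only on the model parameters, never on the data, so it can be computed (and the gain inspected) before any measurement arrives.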
### Kalman Filter: Vector State & Scalar Observation

State Model: $\mathbf{s}[n] = \mathbf{A}\,\mathbf{s}[n-1] + \mathbf{B}\,\mathbf{u}[n]$, with $\mathbf{s}[n]$: $p \times 1$; $\mathbf{A}$: $p \times p$; $\mathbf{B}$: $p \times r$; $\mathbf{u}[n] \sim \mathcal{N}(\mathbf{0}, \mathbf{Q})$: $r \times 1$

Observation Model: $x[n] = \mathbf{h}^T[n]\,\mathbf{s}[n] + w[n]$, with $\mathbf{h}[n]$: $p \times 1$; $w[n]$ WGN, $w[n] \sim \mathcal{N}(0, \sigma_n^2)$

Must know: $\boldsymbol{\mu}_s$, $\mathbf{C}_s$, $\mathbf{A}$, $\mathbf{B}$, $\mathbf{h}$, $\mathbf{Q}$, $\sigma_n^2$

Initialization:

$$\hat{\mathbf{s}}[-1|-1] = E\{\mathbf{s}[-1]\} = \boldsymbol{\mu}_s, \qquad \mathbf{M}[-1|-1] = E\big\{(\mathbf{s}[-1] - E\{\mathbf{s}[-1]\})(\mathbf{s}[-1] - E\{\mathbf{s}[-1]\})^T\big\} = \mathbf{C}_s$$

Prediction:

$$\hat{\mathbf{s}}[n|n-1] = \mathbf{A}\,\hat{\mathbf{s}}[n-1|n-1]$$

Prediction MSE ($p \times p$):

$$\mathbf{M}[n|n-1] = \mathbf{A}\,\mathbf{M}[n-1|n-1]\,\mathbf{A}^T + \mathbf{B}\,\mathbf{Q}\,\mathbf{B}^T$$

Kalman Gain ($p \times 1$):

$$\mathbf{K}[n] = \frac{\mathbf{M}[n|n-1]\,\mathbf{h}[n]}{\sigma_n^2 + \mathbf{h}^T[n]\,\mathbf{M}[n|n-1]\,\mathbf{h}[n]}$$

Update:

$$\hat{\mathbf{s}}[n|n] = \hat{\mathbf{s}}[n|n-1] + \mathbf{K}[n]\big(x[n] - \underbrace{\mathbf{h}^T[n]\,\hat{\mathbf{s}}[n|n-1]}_{\hat{x}[n|n-1]}\big)$$

(the term in parentheses is the innovation $\tilde{x}[n]$)

Estimation MSE ($p \times p$):

$$\mathbf{M}[n|n] = \big(\mathbf{I} - \mathbf{K}[n]\,\mathbf{h}^T[n]\big)\,\mathbf{M}[n|n-1]$$
### Kalman Filter: Vector State & Vector Observation

State Model: $\mathbf{s}[n] = \mathbf{A}\,\mathbf{s}[n-1] + \mathbf{B}\,\mathbf{u}[n]$, with $\mathbf{s}[n]$: $p \times 1$; $\mathbf{A}$: $p \times p$; $\mathbf{B}$: $p \times r$; $\mathbf{u}[n] \sim \mathcal{N}(\mathbf{0}, \mathbf{Q})$: $r \times 1$

Observation Model: $\mathbf{x}[n] = \mathbf{H}[n]\,\mathbf{s}[n] + \mathbf{w}[n]$, with $\mathbf{x}[n]$: $M \times 1$; $\mathbf{H}[n]$: $M \times p$; $\mathbf{w}[n] \sim \mathcal{N}(\mathbf{0}, \mathbf{C}[n])$: $M \times 1$

Must know: $\boldsymbol{\mu}_s$, $\mathbf{C}_s$, $\mathbf{A}$, $\mathbf{B}$, $\mathbf{H}$, $\mathbf{Q}$, $\mathbf{C}[n]$

Initialization:

$$\hat{\mathbf{s}}[-1|-1] = E\{\mathbf{s}[-1]\} = \boldsymbol{\mu}_s, \qquad \mathbf{M}[-1|-1] = E\big\{(\mathbf{s}[-1] - E\{\mathbf{s}[-1]\})(\mathbf{s}[-1] - E\{\mathbf{s}[-1]\})^T\big\} = \mathbf{C}_s$$

Prediction:

$$\hat{\mathbf{s}}[n|n-1] = \mathbf{A}\,\hat{\mathbf{s}}[n-1|n-1]$$

Prediction MSE ($p \times p$):

$$\mathbf{M}[n|n-1] = \mathbf{A}\,\mathbf{M}[n-1|n-1]\,\mathbf{A}^T + \mathbf{B}\,\mathbf{Q}\,\mathbf{B}^T$$

Kalman Gain ($p \times M$):

$$\mathbf{K}[n] = \mathbf{M}[n|n-1]\,\mathbf{H}^T[n]\,\underbrace{\big(\mathbf{C}[n] + \mathbf{H}[n]\,\mathbf{M}[n|n-1]\,\mathbf{H}^T[n]\big)^{-1}}_{M \times M}$$

Update:

$$\hat{\mathbf{s}}[n|n] = \hat{\mathbf{s}}[n|n-1] + \mathbf{K}[n]\big(\mathbf{x}[n] - \underbrace{\mathbf{H}[n]\,\hat{\mathbf{s}}[n|n-1]}_{\hat{\mathbf{x}}[n|n-1]}\big)$$

(the term in parentheses is the innovation $\tilde{\mathbf{x}}[n]$)

Estimation MSE ($p \times p$):

$$\mathbf{M}[n|n] = \big(\mathbf{I} - \mathbf{K}[n]\,\mathbf{H}[n]\big)\,\mathbf{M}[n|n-1]$$
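These vector equations translate line by line into code. A minimal sketch, assuming time-invariant $\mathbf{H}$ and $\mathbf{C}$ and an illustrative constant-velocity tracking model (none of the numbers come from the text):

```python
import numpy as np

def kalman_filter(x_seq, A, B, H, Q, C, mu_s, C_s):
    """Vector-state / vector-observation Kalman filter from the chart above
    (H and C taken as time-invariant here for simplicity)."""
    s_est, M_est = mu_s.astype(float), C_s.astype(float)
    I = np.eye(len(mu_s))
    out = []
    for xn in x_seq:
        s_pred = A @ s_est                                       # prediction
        M_pred = A @ M_est @ A.T + B @ Q @ B.T                   # prediction MSE
        K = M_pred @ H.T @ np.linalg.inv(C + H @ M_pred @ H.T)   # gain (p x M)
        s_est = s_pred + K @ (xn - H @ s_pred)                   # update w/ innovation
        M_est = (I - K @ H) @ M_pred                             # estimation MSE
        out.append(s_est)
    return np.array(out)

# Illustrative 2-state (position, velocity) model observed in position only.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])    # acceleration noise drives both states
Q = np.array([[0.01]])                 # driving-noise variance
H = np.array([[1.0, 0.0]])             # observe position
C = np.array([[0.5]])                  # measurement-noise variance

rng = np.random.default_rng(2)
N = 300
s_true = np.zeros((N, 2))
s_prev = np.array([0.0, 1.0])
for n in range(N):
    s_prev = A @ s_prev + B @ rng.normal(0.0, np.sqrt(Q[0, 0]), 1)
    s_true[n] = s_prev
x_seq = s_true @ H.T + rng.normal(0.0, np.sqrt(C[0, 0]), (N, 1))

est = kalman_filter(x_seq, A, B, H, Q, C, np.zeros(2), np.eye(2))
# The filtered position should beat the raw position measurements.
assert np.mean((est[:, 0] - s_true[:, 0])**2) < np.mean((x_seq[:, 0] - s_true[:, 0])**2)
```

Note the embedded dynamical model at work: the filter also tracks the velocity state even though only position is ever measured.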
### Kalman Filter Block Diagram

[Block diagram: the observation $\mathbf{x}[n]$ has the predicted observation $\hat{\mathbf{x}}[n|n-1]$ subtracted from it to form the innovation $\tilde{\mathbf{x}}[n]$; the innovation scaled by the gain $\mathbf{K}[n]$ is the estimated driving noise $\widehat{\mathbf{B}\mathbf{u}}[n]$, which is added to the predicted state $\hat{\mathbf{s}}[n|n-1]$ to produce the estimated state $\hat{\mathbf{s}}[n|n]$. The estimated state is fed back through $\mathbf{A}z^{-1}$ (the embedded Dynamical Model) to form the next predicted state, and the predicted state passes through $\mathbf{H}[n]$ (the embedded Observation Model) to form the predicted observation.]

Looks a lot like Sequential LS/MMSE except it has the Embedded Dynamical Model!!!
### Overview of MMSE Estimation

(General MMSE estimation uses the squared-error cost function.)

- Jointly Gaussian case: the optimal estimator is

$$\hat{\boldsymbol{\theta}} = E\{\boldsymbol{\theta} \mid \mathbf{x}\}$$

- Forcing a linear estimator (any PDF, known 2nd moments) gives the LMMSE estimator:

$$\hat{\boldsymbol{\theta}} = E\{\boldsymbol{\theta}\} + \mathbf{C}_{\theta x}\,\mathbf{C}_{xx}^{-1}\big(\mathbf{x} - E\{\mathbf{x}\}\big)$$

- For the Bayesian Linear Model this becomes

$$\hat{\boldsymbol{\theta}} = \boldsymbol{\mu}_\theta + \mathbf{C}_\theta\,\mathbf{H}^T\big(\mathbf{H}\,\mathbf{C}_\theta\,\mathbf{H}^T + \mathbf{C}_w\big)^{-1}\big(\mathbf{x} - \mathbf{H}\,\boldsymbol{\mu}_\theta\big)$$

- Processing sequentially with no dynamics gives the sequential filter:

$$\hat{\boldsymbol{\theta}}[n] = \hat{\boldsymbol{\theta}}[n-1] + \mathbf{k}[n]\big(x[n] - \mathbf{h}^T[n]\,\hat{\boldsymbol{\theta}}[n-1]\big)$$

- Adding a dynamical state model gives the Kalman filter:

$$\hat{\mathbf{s}}[n|n] = \mathbf{A}\,\hat{\mathbf{s}}[n-1|n-1] + \mathbf{K}[n]\big(\mathbf{x}[n] - \mathbf{H}[n]\,\mathbf{A}\,\hat{\mathbf{s}}[n-1|n-1]\big)$$

If we assume the jointly Gaussian case, these are the Optimal Seq. Filter (No Dynamics) and the Optimal Kalman Filter (w/ Dynamics); if we only force a linear structure with known 2nd moments, the same equations give the Linear Seq. Filter (No Dynamics) and the Linear Kalman Filter (w/ Dynamics).