
Kalman Smoothing

Jur van den Berg


Kalman Filtering vs. Smoothing

Dynamics and observation model:

$$X_{t+1} = A X_t + W_t, \quad W_t \sim N(0, Q)$$
$$Y_t = C X_t + V_t, \quad V_t \sim N(0, R)$$

Kalman Filter:
Compute $(X_t \mid Y_0 = y_0, \ldots, Y_t = y_t)$
Real-time, given data so far
Kalman Smoother:
Compute $(X_t \mid Y_0 = y_0, \ldots, Y_T = y_T)$ for $t < T$
Post-processing, given all data
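
To make the model concrete, here is a minimal simulation sketch of the linear-Gaussian model above. The 2D constant-velocity matrices A and C and the noise levels are illustrative assumptions, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2D state (position, velocity), scalar position observation.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # dynamics matrix (assumed)
C = np.array([[1.0, 0.0]])   # observation matrix (assumed)
Q = 0.01 * np.eye(2)         # process noise covariance (assumed)
R = np.array([[0.25]])       # observation noise covariance (assumed)

T = 50
x = np.zeros(2)              # initial state x_0
xs, ys = [], []
for t in range(T):
    # X_{t+1} = A X_t + W_t,  W_t ~ N(0, Q)
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    # Y_t = C X_t + V_t,  V_t ~ N(0, R)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)
    xs.append(x)
    ys.append(y)
```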
Kalman Filtering Recap

Time update:

$$X_{t+1|t} = A X_{t|t} + W_t$$

Measurement update:

$$Y_{t+1|t} = C X_{t+1|t} + V_{t+1}$$

Compute joint distribution $(X_{t+1|t}, Y_{t+1|t})$
Compute conditional $X_{t+1|t+1} = (X_{t+1|t} \mid Y_{t+1|t} = y_{t+1})$

[Figure: graphical model of the hidden Markov chain $X_0 \to X_1 \to \cdots \to X_5$ with observations $Y_1, \ldots, Y_5$.]
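
Spelling out the joint distribution that the measurement update conditions on (a standard computation implied by the recap, not written out on the slide):

$$\begin{pmatrix} X_{t+1|t} \\ Y_{t+1|t} \end{pmatrix} \sim N\!\left( \begin{pmatrix} \hat{x}_{t+1|t} \\ C \hat{x}_{t+1|t} \end{pmatrix}, \begin{pmatrix} P_{t+1|t} & P_{t+1|t} C^T \\ C P_{t+1|t} & C P_{t+1|t} C^T + R \end{pmatrix} \right)$$

Conditioning on $Y_{t+1|t} = y_{t+1}$ yields exactly the gain and update formulas on the next slide.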
Kalman filter summary

Model:

$$X_{t+1} = A X_t + W_t, \quad W_t \sim N(0, Q)$$
$$Y_t = C X_t + V_t, \quad V_t \sim N(0, R)$$

Algorithm: repeat
Time update:

$$\hat{x}_{t+1|t} = A \hat{x}_{t|t}$$
$$P_{t+1|t} = A P_{t|t} A^T + Q$$

Measurement update:

$$K_{t+1} = P_{t+1|t} C^T \left( C P_{t+1|t} C^T + R \right)^{-1}$$
$$\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} \left( y_{t+1} - C \hat{x}_{t+1|t} \right)$$
$$P_{t+1|t+1} = P_{t+1|t} - K_{t+1} C P_{t+1|t}$$
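
A direct transcription of these two updates into NumPy (a sketch; function and variable names are mine, and a numerically robust implementation would prefer a Cholesky or Joseph-form update over the plain inverse):

```python
import numpy as np

def kalman_filter_step(x, P, y, A, C, Q, R):
    """One time update followed by one measurement update."""
    # Time update: x_{t+1|t} = A x_{t|t},  P_{t+1|t} = A P_{t|t} A^T + Q
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Measurement update: K = P C^T (C P C^T + R)^{-1}
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = P_pred - K @ C @ P_pred
    return x_pred, P_pred, x_new, P_new
```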
Kalman Smoothing

Input: initial distribution $X_0$ and data $y_1, \ldots, y_T$

Algorithm: forward-backward pass
(Rauch-Tung-Striebel algorithm)
Forward pass:
Kalman filter: compute $X_{t+1|t}$ and $X_{t+1|t+1}$ for $0 \le t < T$
Backward pass:
Compute $X_{t|T}$ for $0 \le t < T$
Reverses the horizontal arrows in the graph
Backward Pass

Compute $X_{t|T}$ given $X_{t+1|T} = N(\hat{x}_{t+1|T}, P_{t+1|T})$
Reverse the arrow: $X_{t|t} \leftarrow X_{t+1|t}$
Same as incorporating a measurement in the filter:
1. Compute joint $(X_{t|t}, X_{t+1|t})$
2. Compute conditional $(X_{t|t} \mid X_{t+1|t} = x_{t+1})$
New: $x_{t+1}$ is not known, we only know its distribution: $x_{t+1} \sim X_{t+1|T}$
3. Uncondition on $x_{t+1}$ to compute $X_{t|T}$ using the laws of total expectation and variance
Backward pass. Step 1

Compute joint distribution of $X_{t|t}$ and $X_{t+1|t}$:

$$\begin{pmatrix} X_{t|t} \\ X_{t+1|t} \end{pmatrix} = N\left( \begin{pmatrix} \mathrm{E}(X_{t|t}) \\ \mathrm{E}(X_{t+1|t}) \end{pmatrix}, \begin{pmatrix} \mathrm{Var}(X_{t|t}) & \mathrm{Cov}(X_{t|t}, X_{t+1|t}) \\ \mathrm{Cov}(X_{t+1|t}, X_{t|t}) & \mathrm{Var}(X_{t+1|t}) \end{pmatrix} \right) = N\left( \begin{pmatrix} \hat{x}_{t|t} \\ \hat{x}_{t+1|t} \end{pmatrix}, \begin{pmatrix} P_{t|t} & P_{t|t} A^T \\ A P_{t|t} & P_{t+1|t} \end{pmatrix} \right)$$

where

$$\mathrm{Cov}(X_{t+1|t}, X_{t|t}) = \mathrm{Cov}(A X_{t|t} + W_t, X_{t|t}) = A\,\mathrm{Cov}(X_{t|t}, X_{t|t}) + \mathrm{Cov}(W_t, X_{t|t}) = A\,\mathrm{Var}(X_{t|t}) = A P_{t|t}$$
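
A quick Monte Carlo sanity check of the cross-covariance identity above, under assumed values for A, $P_{t|t}$, and Q (all names and numbers are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed dynamics matrix
P = np.array([[0.5, 0.1], [0.1, 0.3]])   # assumed Var(X_{t|t})
Q = 0.01 * np.eye(2)                     # assumed process noise covariance

n = 200_000
x_t = rng.multivariate_normal(np.zeros(2), P, size=n)  # samples of X_{t|t}
w_t = rng.multivariate_normal(np.zeros(2), Q, size=n)  # samples of W_t
x_next = x_t @ A.T + w_t                               # X_{t+1|t} = A X_{t|t} + W_t

# Empirical Cov(X_{t+1|t}, X_{t|t}) should be close to A P.
emp = (x_next - x_next.mean(0)).T @ (x_t - x_t.mean(0)) / (n - 1)
print(np.round(emp, 3))
print(np.round(A @ P, 3))
```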
Backward pass. Step 2

Recall that if

$$\begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} \sim N\left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \right)$$

then

$$(Z_1 \mid Z_2 = z_2) = N\left( \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (z_2 - \mu_2),\; \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \right)$$

Compute $(X_{t|t} \mid X_{t+1|t} = x_{t+1})$:

$$(X_{t|t} \mid X_{t+1|t} = x_{t+1}) = N\left( \hat{x}_{t|t} + P_{t|t} A^T P_{t+1|t}^{-1} (x_{t+1} - \hat{x}_{t+1|t}),\; P_{t|t} - P_{t|t} A^T P_{t+1|t}^{-1} A P_{t|t} \right)$$
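
The conditioning rule as a small generic helper (a sketch; names are mine, and the joint covariance is assumed symmetric so that $\Sigma_{21} = \Sigma_{12}^T$):

```python
import numpy as np

def condition_gaussian(mu1, mu2, S11, S12, S22, z2):
    """(Z1 | Z2 = z2) for a joint Gaussian with covariance blocks S11, S12, S22."""
    gain = S12 @ np.linalg.inv(S22)   # Sigma_12 Sigma_22^{-1}
    mean = mu1 + gain @ (z2 - mu2)    # conditional mean
    cov = S11 - gain @ S12.T          # Sigma_11 - Sigma_12 Sigma_22^{-1} Sigma_21
    return mean, cov
```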
Backward pass. Step 3

The conditional is only valid for a given $x_{t+1}$:

$$(X_{t|t} \mid X_{t+1|t} = x_{t+1}) = N\left( \hat{x}_{t|t} + L_t (x_{t+1} - \hat{x}_{t+1|t}),\; P_{t|t} - L_t P_{t+1|t} L_t^T \right)$$

where

$$L_t = P_{t|t} A^T P_{t+1|t}^{-1}$$

But we don't know its value, only its distribution: $x_{t+1} \sim X_{t+1|T}$
Uncondition on $x_{t+1}$ to compute $X_{t|T}$ using the law of total expectation and the law of total variance
Law of total expectation/variance

Law of total expectation:

$$\mathrm{E}(X) = \mathrm{E}_Z\left( \mathrm{E}(X \mid Y = Z) \right)$$

Law of total variance:

$$\mathrm{Var}(X) = \mathrm{E}_Z\left( \mathrm{Var}(X \mid Y = Z) \right) + \mathrm{Var}_Z\left( \mathrm{E}(X \mid Y = Z) \right)$$

Compute $X_{t|T} = N\left( \mathrm{E}(X_{t|T}), \mathrm{Var}(X_{t|T}) \right)$ where

$$\mathrm{E}(X_{t|T}) = \mathrm{E}_{X_{t+1|T}}\left( \mathrm{E}(X_{t|t} \mid X_{t+1|t} = X_{t+1|T}) \right)$$
$$\mathrm{Var}(X_{t|T}) = \mathrm{E}_{X_{t+1|T}}\left( \mathrm{Var}(X_{t|t} \mid X_{t+1|t} = X_{t+1|T}) \right) + \mathrm{Var}_{X_{t+1|T}}\left( \mathrm{E}(X_{t|t} \mid X_{t+1|t} = X_{t+1|T}) \right)$$
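
A toy numerical check of the two laws (my own example, not from the slides): take $Z \sim N(0, 1)$ and $(X \mid Z = z) \sim N(z, 1)$, so $\mathrm{E}(X) = 0$ and $\mathrm{Var}(X) = 1 + 1 = 2$:

```python
import numpy as np

rng = np.random.default_rng(2)

z = rng.normal(0.0, 1.0, size=1_000_000)  # samples of Z
x = rng.normal(z, 1.0)                    # sample X given each z

print(x.mean())  # approximately 0  (law of total expectation)
print(x.var())   # approximately 2  (law of total variance)
```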
Unconditioning

Recall from step 2 that

$$\mathrm{E}(X_{t|t} \mid X_{t+1|t} = x_{t+1}) = \hat{x}_{t|t} + L_t (x_{t+1} - \hat{x}_{t+1|t})$$
$$\mathrm{Var}(X_{t|t} \mid X_{t+1|t} = x_{t+1}) = P_{t|t} - L_t P_{t+1|t} L_t^T$$

So,

$$\mathrm{E}(X_{t|T}) = \mathrm{E}_{X_{t+1|T}}\left( \mathrm{E}(X_{t|t} \mid X_{t+1|t} = X_{t+1|T}) \right) = \hat{x}_{t|t} + L_t (\hat{x}_{t+1|T} - \hat{x}_{t+1|t})$$

$$\mathrm{Var}(X_{t|T}) = \mathrm{E}_{X_{t+1|T}}\left( \mathrm{Var}(X_{t|t} \mid X_{t+1|t} = X_{t+1|T}) \right) + \mathrm{Var}_{X_{t+1|T}}\left( \mathrm{E}(X_{t|t} \mid X_{t+1|t} = X_{t+1|T}) \right) = P_{t|t} - L_t P_{t+1|t} L_t^T + L_t P_{t+1|T} L_t^T = P_{t|t} + L_t (P_{t+1|T} - P_{t+1|t}) L_t^T$$
Backward pass

Summary:

$$L_t = P_{t|t} A^T P_{t+1|t}^{-1}$$
$$\hat{x}_{t|T} = \hat{x}_{t|t} + L_t (\hat{x}_{t+1|T} - \hat{x}_{t+1|t})$$
$$P_{t|T} = P_{t|t} + L_t (P_{t+1|T} - P_{t+1|t}) L_t^T$$
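
The summary as code, one backward step at a time (a sketch; function and argument names are mine):

```python
import numpy as np

def rts_backward_step(x_filt, P_filt, x_pred, P_pred,
                      x_smooth_next, P_smooth_next, A):
    """One Rauch-Tung-Striebel backward step: smooth time t from time t+1.

    x_filt, P_filt:               x_{t|t}, P_{t|t}
    x_pred, P_pred:               x_{t+1|t}, P_{t+1|t}
    x_smooth_next, P_smooth_next: x_{t+1|T}, P_{t+1|T}
    """
    L = P_filt @ A.T @ np.linalg.inv(P_pred)                # L_t
    x_smooth = x_filt + L @ (x_smooth_next - x_pred)        # x_{t|T}
    P_smooth = P_filt + L @ (P_smooth_next - P_pred) @ L.T  # P_{t|T}
    return x_smooth, P_smooth
```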
Kalman smoother algorithm

for (t = 0; t < T; ++t)  // Kalman filter

$$\hat{x}_{t+1|t} = A \hat{x}_{t|t}$$
$$P_{t+1|t} = A P_{t|t} A^T + Q$$
$$K_{t+1} = P_{t+1|t} C^T \left( C P_{t+1|t} C^T + R \right)^{-1}$$
$$\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} \left( y_{t+1} - C \hat{x}_{t+1|t} \right)$$
$$P_{t+1|t+1} = P_{t+1|t} - K_{t+1} C P_{t+1|t}$$

for (t = T - 1; t >= 0; --t)  // Backward pass

$$L_t = P_{t|t} A^T P_{t+1|t}^{-1}$$
$$\hat{x}_{t|T} = \hat{x}_{t|t} + L_t (\hat{x}_{t+1|T} - \hat{x}_{t+1|t})$$
$$P_{t|T} = P_{t|t} + L_t (P_{t+1|T} - P_{t+1|t}) L_t^T$$
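
Putting the two passes together, a minimal end-to-end implementation of the algorithm above (a sketch assuming NumPy; storage layout and names are mine):

```python
import numpy as np

def kalman_smoother(y, x0, P0, A, C, Q, R):
    """RTS smoother: forward Kalman filter, then backward pass.

    y: (T, m) array of observations y_1..y_T.
    Returns smoothed means x_{t|T} and covariances P_{t|T} for t = 0..T.
    """
    T, n = len(y), x0.shape[0]
    x_filt = np.zeros((T + 1, n)); P_filt = np.zeros((T + 1, n, n))
    x_pred = np.zeros((T + 1, n)); P_pred = np.zeros((T + 1, n, n))
    x_filt[0], P_filt[0] = x0, P0

    for t in range(T):  # forward pass (Kalman filter)
        x_pred[t + 1] = A @ x_filt[t]
        P_pred[t + 1] = A @ P_filt[t] @ A.T + Q
        S = C @ P_pred[t + 1] @ C.T + R
        K = P_pred[t + 1] @ C.T @ np.linalg.inv(S)
        x_filt[t + 1] = x_pred[t + 1] + K @ (y[t] - C @ x_pred[t + 1])
        P_filt[t + 1] = P_pred[t + 1] - K @ C @ P_pred[t + 1]

    x_smooth = x_filt.copy(); P_smooth = P_filt.copy()
    for t in range(T - 1, -1, -1):  # backward pass
        L = P_filt[t] @ A.T @ np.linalg.inv(P_pred[t + 1])
        x_smooth[t] = x_filt[t] + L @ (x_smooth[t + 1] - x_pred[t + 1])
        P_smooth[t] = P_filt[t] + L @ (P_smooth[t + 1] - P_pred[t + 1]) @ L.T
    return x_smooth, P_smooth
```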
Conclusion

The Kalman smoother can be used as a post-processing step.
Use the $\hat{x}_{t|T}$'s as the optimal estimates of the state at each time $t$,
and use $P_{t|T}$ as a measure of uncertainty.

Extensions

Automatic parameter (Q and R) fitting using the EM algorithm:
use the Kalman smoother on training data to learn Q and R (and A and C).
