
Lecture 11:

Kalman Filters
CS 344R: Robotics
Benjamin Kuipers

Up To Higher Dimensions
Our previous Kalman Filter discussion was
of a simple one-dimensional model.
Now we go up to higher dimensions:

  State vector:  x \in \Re^n
  Sense vector:  z \in \Re^m
  Motor vector:  u \in \Re^l
First, a little statistics.

Expectations
Let x be a random variable.
The expected value E[x] is the mean:

  E[x] = \int x \, p(x) \, dx \;\approx\; \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i

The probability-weighted mean of all possible
values. The sample mean approaches it.

The expected value of a vector x is taken by component:

  E[\mathbf{x}] = \bar{\mathbf{x}} = [\bar{x}_1, \ldots, \bar{x}_n]^T

Variance and Covariance

The variance is E[(x - E[x])^2]:

  \sigma^2 = E[(x - \bar{x})^2] \;\approx\; \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2

The covariance matrix is E[(\mathbf{x} - E[\mathbf{x}])(\mathbf{x} - E[\mathbf{x}])^T]:

  C_{ij} = \frac{1}{N} \sum_{k=1}^{N} (x_{ik} - \bar{x}_i)(x_{jk} - \bar{x}_j)

Divide by N-1 instead of N to make the sample variance an
unbiased estimator for the population variance.
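As a small sketch of these definitions, the sample covariance can be computed directly from the formula above and checked against the generating distribution; the particular mean and covariance values here are illustrative assumptions, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

# N samples of a 2-D random vector, stacked as rows.
N = 50_000
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[1.0, 0.6], [0.6, 2.0]], size=N)

# Sample covariance from the definition:
#   C_ij = (1/N) * sum_k (x_ik - xbar_i)(x_jk - xbar_j)
x_bar = X.mean(axis=0)
D = X - x_bar
C = D.T @ D / N        # divide by N-1 instead for the unbiased estimator

print(C)               # close to [[1.0, 0.6], [0.6, 2.0]] for large N
```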

Biased and Unbiased Estimators

Strictly speaking, the sample variance

  \hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2

is a biased estimate of the population
variance. An unbiased estimator is:

  s^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2

But: "If the difference between N and N-1
ever matters to you, then you are probably
up to no good anyway." [Press, et al.]

Covariance Matrix
Along the diagonal, C_ii are variances.
Off-diagonal C_ij are essentially correlations.

  C = \begin{bmatrix}
        C_{1,1} = \sigma_1^2 & C_{1,2}                & \cdots & C_{1,N} \\
        C_{2,1}              & C_{2,2} = \sigma_2^2   &        & \vdots  \\
        \vdots               &                        & \ddots &         \\
        C_{N,1}              & \cdots                 &        & C_{N,N} = \sigma_N^2
      \end{bmatrix}

Independent Variation
x and y are Gaussian random
variables (N = 100),
generated with \sigma_x = 1, \sigma_y = 3.
Covariance matrix:

  C_{xy} = \begin{bmatrix} 0.90 & 0.44 \\ 0.44 & 8.82 \end{bmatrix}
Dependent Variation
c and d are random variables,
generated with c = x + y, d = x - y.
Covariance matrix:

  C_{cd} = \begin{bmatrix} 10.62 & -7.93 \\ -7.93 & 8.84 \end{bmatrix}
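The two examples above can be reproduced numerically. With independent x and y, Var(c) = Var(d) = \sigma_x^2 + \sigma_y^2 and Cov(c, d) = \sigma_x^2 - \sigma_y^2, which is negative here since \sigma_y > \sigma_x. The sample size below is larger than the slides' N = 100 so the estimates are tighter; the seed is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Independent x, y with sigma_x = 1, sigma_y = 3, as on the slide.
N = 100_000
x = rng.normal(0.0, 1.0, N)
y = rng.normal(0.0, 3.0, N)

# Dependent c = x + y, d = x - y.
c, d = x + y, x - y

C = np.cov(c, d)   # 2x2 sample covariance matrix

# Analytically: Var(c) = Var(d) = 1 + 9 = 10,
#               Cov(c, d) = Var(x) - Var(y) = 1 - 9 = -8.
print(C)
```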

Discrete Kalman Filter

Estimate the state x \in \Re^n of a linear
stochastic difference equation

  x_k = A x_{k-1} + B u_k + w_{k-1}

where the process noise w is drawn from N(0, Q), with
covariance matrix Q,

with a measurement z \in \Re^m

  z_k = H x_k + v_k

where the measurement noise v is drawn from N(0, R), with
covariance matrix R.

A, Q are n×n. B is n×l. R is m×m. H is m×n.

Estimates and Errors

\hat{x}_k \in \Re^n is the estimated state at time-step k.
\hat{x}_k^- is the estimate after prediction, before observation.

Errors:

  e_k^- = x_k - \hat{x}_k^-
  e_k   = x_k - \hat{x}_k

Error covariance matrices:

  P_k^- = E[e_k^- (e_k^-)^T]
  P_k   = E[e_k e_k^T]

The Kalman Filter's task is to update \hat{x}_k and P_k.

Time Update (Predictor)

Update the expected value of x:

  \hat{x}_k^- = A \hat{x}_{k-1} + B u_k

Update the error covariance matrix P:

  P_k^- = A P_{k-1} A^T + Q

Previous statements were simplified,
one-dimensional versions of the same idea:

  \hat{x}^-(t_3) = \hat{x}(t_2) + u [t_3 - t_2]
  \sigma^{2-}(t_3) = \sigma^2(t_2) + \sigma_w^2 [t_3 - t_2]
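A minimal NumPy sketch of the predictor step (not the assignment code); the constant-velocity A matrix and the noise magnitude in Q are illustrative assumptions.

```python
import numpy as np

def predict(x, P, A, B, u, Q):
    """Kalman time update: propagate the state estimate and its covariance."""
    x_pred = A @ x + B @ u      # x_k^- = A x_{k-1} + B u_k
    P_pred = A @ P @ A.T + Q    # P_k^- = A P_{k-1} A^T + Q
    return x_pred, P_pred

# Constant-velocity model with state (position, velocity), dt = 1, B = 0.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.zeros((2, 1))
Q = 0.01 * np.eye(2)

x, P = np.array([0.0, 1.0]), np.eye(2)
x_pred, P_pred = predict(x, P, A, B, np.zeros(1), Q)
print(x_pred)   # [1. 1.]: position advanced by the velocity
```

Note how the uncertainty grows: P_k^- is always at least as large as Q, even if P_{k-1} were zero.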

Measurement Update (Corrector)

Update the expected value:

  \hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)

where the innovation is z_k - H \hat{x}_k^-.

Update the error covariance matrix:

  P_k = (I - K_k H) P_k^-

Compare with the previous one-dimensional form:

  \hat{x}(t_3) = \hat{x}^-(t_3) + K(t_3)(z_3 - \hat{x}^-(t_3))
  \sigma^2(t_3) = (1 - K(t_3)) \, \sigma^{2-}(t_3)
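A matching sketch of the corrector step, again as an assumption-laden illustration: H observes only the position of a (position, velocity) state, and the values of R and the prediction are made up for the example.

```python
import numpy as np

def correct(x_pred, P_pred, z, H, R):
    """Kalman measurement update: fold an observation into the prediction."""
    innovation = z - H @ x_pred              # z_k - H x_k^-
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ innovation              # x_k
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred   # P_k
    return x, P

# Observe only position of a (position, velocity) state.
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
x_pred, P_pred = np.array([1.0, 1.0]), np.eye(2)

x, P = correct(x_pred, P_pred, np.array([1.4]), H, R)
print(x)   # position pulled toward the measurement; velocity unchanged
```

The observed component's variance shrinks (P[0,0] drops from 1 to 1/3), while the unobserved velocity variance is untouched.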

The Kalman Gain

The optimal Kalman gain K_k is

  K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
      = \frac{P_k^- H^T}{H P_k^- H^T + R}

Compare with the previous one-dimensional form:

  K(t_3) = \frac{\sigma^{2-}(t_3)}{\sigma^{2-}(t_3) + \sigma_3^2}
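To see that the matrix gain really is the old scalar gain in disguise, evaluate both for 1×1 "matrices"; the numeric values here are arbitrary.

```python
import numpy as np

# For 1x1 matrices, K = P^- H^T (H P^- H^T + R)^{-1}
# reduces to the earlier scalar form K = s2_pred / (s2_pred + s2_z).
P_pred = np.array([[4.0]])   # predicted variance sigma^2- = 4
H = np.array([[1.0]])
R = np.array([[2.0]])        # measurement variance sigma_z^2 = 2

K_matrix = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
K_scalar = 4.0 / (4.0 + 2.0)

print(K_matrix[0, 0], K_scalar)   # both 2/3
```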

Extended Kalman Filter

Suppose the state-evolution and
measurement equations are non-linear:

  x_k = f(x_{k-1}, u_k) + w_{k-1}
  z_k = h(x_k) + v_k

The process noise w is drawn from N(0, Q), with
covariance matrix Q.
The measurement noise v is drawn from N(0, R),
with covariance matrix R.

The Jacobian Matrix

For a scalar function y = f(x),

  \Delta y = f'(x) \, \Delta x

For a vector function y = f(x),

  \Delta y = J \, \Delta x

where J is the Jacobian matrix:

  \begin{bmatrix} \Delta y_1 \\ \vdots \\ \Delta y_n \end{bmatrix}
  = \begin{bmatrix}
      \frac{\partial f_1}{\partial x_1}(x) & \cdots & \frac{\partial f_1}{\partial x_n}(x) \\
      \vdots & & \vdots \\
      \frac{\partial f_n}{\partial x_1}(x) & \cdots & \frac{\partial f_n}{\partial x_n}(x)
    \end{bmatrix}
    \begin{bmatrix} \Delta x_1 \\ \vdots \\ \Delta x_n \end{bmatrix}
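When a Jacobian is tedious to derive by hand, a finite-difference approximation is a common sanity check. The sketch below (names and the polar-measurement example are my own, not from the slides) approximates J_ij = ∂f_i/∂x_j column by column.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference approximation of J_ij = df_i/dx_j at x."""
    y0 = f(x)
    J = np.zeros((len(y0), len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - y0) / eps
    return J

# Example: a range-and-bearing measurement h(x) = (sqrt(x^2 + y^2), atan2(y, x)).
def h(x):
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

J = numerical_jacobian(h, np.array([3.0, 4.0]))
print(J)   # analytic values: [[0.6, 0.8], [-0.16, 0.12]]
```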

Linearize the Non-Linear

Let A be the Jacobian of f with respect to x:

  A_{ij} = \frac{\partial f_i}{\partial x_j}(x_{k-1}, u_k)

Let H be the Jacobian of h with respect to x:

  H_{ij} = \frac{\partial h_i}{\partial x_j}(x_k)

Then the Kalman Filter equations are
almost the same as before!

EKF Update Equations

Predictor step:

  \hat{x}_k^- = f(\hat{x}_{k-1}, u_k)
  P_k^- = A P_{k-1} A^T + Q

Kalman gain:

  K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}

Corrector step:

  \hat{x}_k = \hat{x}_k^- + K_k (z_k - h(\hat{x}_k^-))
  P_k = (I - K_k H) P_k^-
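The EKF equations above can be sketched as one predictor/corrector cycle. Everything here is an illustrative assumption: the function names, the toy scalar state, and the squared measurement z = x^2 (whose Jacobian is the 1×1 matrix [2x]); it is not the ball-catching model.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, A, H, Q, R):
    """One EKF cycle. A and H are Jacobian *functions* of f and h,
    evaluated at the current estimate rather than fixed matrices."""
    # Predictor step
    x_pred = f(x, u)
    A_k = A(x, u)
    P_pred = A_k @ P @ A_k.T + Q
    # Corrector step
    H_k = H(x_pred)
    K = P_pred @ H_k.T @ np.linalg.inv(H_k @ P_pred @ H_k.T + R)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Toy example: static scalar state, nonlinear measurement z = x^2 + v.
f = lambda x, u: x                        # state does not evolve
h = lambda x: x ** 2
A = lambda x, u: np.eye(1)
H = lambda x: np.array([[2.0 * x[0]]])    # Jacobian of x^2

x, P = np.array([2.0]), np.array([[1.0]])
x, P = ekf_step(x, P, None, np.array([4.41]), f, h, A, H,
                Q=np.zeros((1, 1)), R=np.array([[0.1]]))
print(x)   # pulled toward 2.1, since 2.1^2 = 4.41
```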

Catch The Ball Assignment

State evolution is linear (almost).
  What is A?
  B = 0.

Sensor equation is non-linear.
  What is y = h(x)?
  What is the Jacobian H(x) of h with respect to x?

Errors are treated as additive. Is this OK?
  What are the covariance matrices Q and R?

TTD
Intuitive explanations for A P A^T and H P H^T
in the update equations.
