Kalman Filters
CS 344R: Robotics
Benjamin Kuipers
Up To Higher Dimensions
Our previous Kalman Filter discussion was
of a simple one-dimensional model.
Now we go up to higher dimensions:
State vector: $x \in \mathbb{R}^n$
Sense vector: $z \in \mathbb{R}^m$
Motor vector: $u \in \mathbb{R}^l$
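A minimal sketch of these dimensions using numpy (the concrete sizes `n, m, l` below are arbitrary choices for illustration, not from the slides):

```python
import numpy as np

# Hypothetical dimensions: state in R^n, sense in R^m, motor in R^l.
n, m, l = 4, 2, 1

x = np.zeros(n)          # state vector
z = np.zeros(m)          # sense (measurement) vector
u = np.zeros(l)          # motor (control) vector

# A measurement matrix H maps state space R^n to sense space R^m.
H = np.zeros((m, n))
print((H @ x).shape)     # (2,) -- a predicted measurement lives in R^m
```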
Expectations
Let x be a random variable.
The expected value E[x] is the mean:
$$E[x] = \int x \, p(x) \, dx \;\approx\; \frac{1}{N}\sum_{i=1}^{N} x_i$$
The probability-weighted mean of all possible
values. The sample mean approaches it.
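A quick numerical check of this claim, assuming numpy (the seed, sample size, and distribution parameters below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw N samples from a Gaussian with known mean E[x] = 5.0.
# The sample mean (1/N) * sum(x_i) approaches E[x] as N grows.
N = 100_000
x = rng.normal(loc=5.0, scale=2.0, size=N)

sample_mean = x.sum() / N
print(abs(sample_mean - 5.0) < 0.05)   # True for this seed
```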
$$C_{ij} = \frac{1}{N-1}\sum_{k=1}^{N}\,(x_{ik}-\bar{x}_i)(x_{jk}-\bar{x}_j)$$
But: "If the difference between $N$ and $N-1$ ever matters to you, then you are probably up to no good anyway." [Press, et al.]
Covariance Matrix
Along the diagonal, Cii are variances.
Off-diagonal Cij are essentially correlations.
$$C = \begin{pmatrix}
C_{1,1} = \sigma_1^2 & C_{1,2} & \cdots & C_{1,N} \\
C_{2,1} & C_{2,2} = \sigma_2^2 & & \vdots \\
\vdots & & \ddots & \\
C_{N,1} & \cdots & & C_{N,N} = \sigma_N^2
\end{pmatrix}$$
Independent Variation
x and y are Gaussian random variables (N = 100), generated with $\sigma_x = 1$, $\sigma_y = 3$.
Covariance matrix:
$$C_{xy} = \begin{pmatrix} 0.90 & 0.44 \\ 0.44 & 8.82 \end{pmatrix}$$
Dependent Variation
c and d are random variables, generated with c = x + y, d = x − y.
Covariance matrix:
$$C_{cd} = \begin{pmatrix} 10.62 & -7.93 \\ -7.93 & 8.84 \end{pmatrix}$$
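The structure of this matrix follows from the theory: $\mathrm{Var}(x \pm y) = \sigma_x^2 + \sigma_y^2 = 10$ and $\mathrm{Cov}(x+y,\, x-y) = \sigma_x^2 - \sigma_y^2 = -8$, so c and d are correlated even though x and y are independent. A sketch reproducing this (seed and sample size are arbitrary choices, with a larger N than the slide's 100 to reduce sampling noise):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
x = rng.normal(scale=1.0, size=N)   # sigma_x = 1
y = rng.normal(scale=3.0, size=N)   # sigma_y = 3

c = x + y
d = x - y
C = np.cov(c, d)

# Theory: Var(c) = Var(d) = 1 + 9 = 10, Cov(c, d) = 1 - 9 = -8.
print(np.round(C, 1))
```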
Estimate the state $x \in \mathbb{R}^n$ of a linear stochastic process
$$x_k = A x_{k-1} + B u_k + w_{k-1}$$
with a measurement $z \in \mathbb{R}^m$:
$$z_k = H x_k + v_k$$
$\hat{x}_k \in \mathbb{R}^n$ is the estimated state; $\hat{x}_k^-$ is the (a priori) estimate after the prediction step, before the measurement is incorporated.
Errors:
$$e_k^- = x_k - \hat{x}_k^-, \qquad e_k = x_k - \hat{x}_k$$
Error covariance matrices:
$$P_k^- = E[e_k^- (e_k^-)^T], \qquad P_k = E[e_k e_k^T]$$
Time update (predictor):
$$\hat{x}_k^- = A \hat{x}_{k-1} + B u_k, \qquad P_k^- = A P_{k-1} A^T + Q$$
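A sketch of the predictor step in numpy (the function name and the constant-velocity example matrices are my illustrative choices, not from the slides):

```python
import numpy as np

def time_update(x_hat, P, A, B, u, Q):
    """Kalman filter predictor step:
       x_hat_minus = A x_hat + B u
       P_minus     = A P A^T + Q
    """
    x_hat_minus = A @ x_hat + B @ u
    P_minus = A @ P @ A.T + Q
    return x_hat_minus, P_minus

# Hypothetical constant-velocity model (position, velocity), dt = 1:
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.zeros((2, 1))
Q = 0.01 * np.eye(2)

x_hat, P = time_update(np.array([0.0, 1.0]), np.eye(2), A, B, np.zeros(1), Q)
print(x_hat)   # [1. 1.] -- position advanced by the velocity
```

Note how the uncertainty grows: $P^-$ picks up both the propagated covariance $APA^T$ and the process noise $Q$.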
Previous statements were simplified
versions of the same idea:
$$\hat{x}^-(t_3) = \hat{x}(t_2) + u\,[t_3 - t_2]$$
$$\sigma^{2-}(t_3) = \sigma^2(t_2) + \sigma_w^2\,[t_3 - t_2]$$
Measurement update (corrector):
$$\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$$
The innovation is $z_k - H \hat{x}_k^-$.
Update the error covariance matrix:
$$P_k = (I - K_k H) P_k^-$$
Compare with the previous one-dimensional form:
$$\hat{x}(t_3) = \hat{x}^-(t_3) + K(t_3)(z_3 - \hat{x}^-(t_3))$$
$$\sigma^2(t_3) = (1 - K(t_3))\,\sigma^{2-}(t_3)$$
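The scalar form is the one-dimensional special case of the matrix update. A quick check of that equivalence, assuming $H = [[1]]$ and hypothetical numbers:

```python
import numpy as np

# Scalar update: K = sigma2 / (sigma2 + r), x = x + K (z - x),
# sigma2 = (1 - K) sigma2.
x_prior, var_prior = 2.0, 4.0
z, r = 3.0, 1.0

K = var_prior / (var_prior + r)
x_scalar = x_prior + K * (z - x_prior)
var_scalar = (1 - K) * var_prior

# Same update with 1x1 matrices and H = [[1]]:
H = np.array([[1.0]])
P = np.array([[var_prior]])
R = np.array([[r]])
Kk = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
x_matrix = x_prior + (Kk @ (np.array([z]) - H @ np.array([x_prior])))[0]
P_post = (np.eye(1) - Kk @ H) @ P

assert np.isclose(x_scalar, x_matrix)        # both give 2.8
assert np.isclose(var_scalar, P_post[0, 0])  # both give 0.8
```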
The Kalman gain:
$$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$$
Compare with the previous form:
$$K(t_3) = \frac{\sigma^{2-}(t_3)}{\sigma^{2-}(t_3) + \sigma_3^2}$$
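Putting the predictor, gain, and corrector together gives one full linear Kalman filter cycle. A minimal sketch, assuming numpy; the function name and the constant-signal tracking example are my illustrative choices:

```python
import numpy as np

def kalman_step(x_hat, P, z, A, H, Q, R):
    """One full linear Kalman filter cycle: predict, gain, correct."""
    # Predict
    x_minus = A @ x_hat
    P_minus = A @ P @ A.T + Q
    # Gain: K = P- H^T (H P- H^T + R)^-1
    K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
    # Correct
    x_new = x_minus + K @ (z - H @ x_minus)
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_minus
    return x_new, P_new

# Track a constant scalar (A = H = [[1]]) from noisy measurements.
rng = np.random.default_rng(3)
A = H = np.eye(1)
Q = np.array([[1e-6]])
R = np.array([[1.0]])

x_hat, P = np.zeros(1), np.eye(1)
for _ in range(200):
    z = np.array([5.0]) + rng.normal(scale=1.0, size=1)
    x_hat, P = kalman_step(x_hat, P, z, A, H, Q, R)

print(x_hat[0], P[0, 0])   # estimate near 5.0, variance shrunk well below 1
```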
Suppose the state-evolution and measurement equations are nonlinear (the Extended Kalman Filter case):
$$x_k = f(x_{k-1}, u_k) + w_{k-1}$$
$$z_k = h(x_k) + v_k$$
The process noise $w$ is drawn from $N(0, Q)$, with covariance matrix $Q$.
The measurement noise $v$ is drawn from $N(0, R)$, with covariance matrix $R$.
The Jacobian matrix of $f$:
$$\frac{\partial f}{\partial x} = \begin{pmatrix}
\dfrac{\partial f_1}{\partial x_1}(x) & \cdots & \dfrac{\partial f_1}{\partial x_n}(x) \\
\vdots & & \vdots \\
\dfrac{\partial f_n}{\partial x_1}(x) & \cdots & \dfrac{\partial f_n}{\partial x_n}(x)
\end{pmatrix}$$
Linearize the measurement model about the predicted state:
$$H_{ij} = \frac{\partial h_i}{\partial x_j}(\hat{x}_k^-)$$
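When an analytic Jacobian is tedious to derive, it can be approximated numerically; this sketch (function names and the range-only sensor example are my own, hypothetical choices) checks a central-difference Jacobian against the analytic one:

```python
import numpy as np

def numerical_jacobian(h, x, eps=1e-6):
    """Approximate H_ij = d h_i / d x_j at x by central differences."""
    x = np.asarray(x, dtype=float)
    m = len(h(x))
    H = np.zeros((m, len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        H[:, j] = (h(x + dx) - h(x - dx)) / (2 * eps)
    return H

# Example: range-only measurement h(x) = [sqrt(x1^2 + x2^2)].
def h(x):
    return np.array([np.hypot(x[0], x[1])])

x0 = np.array([3.0, 4.0])
H = numerical_jacobian(h, x0)
# Analytic Jacobian: [x1/r, x2/r] = [0.6, 0.8] at (3, 4).
print(np.round(H, 3))   # [[0.6 0.8]]
```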
EKF update equations:
Predictor:
$$\hat{x}_k^- = f(\hat{x}_{k-1}, u_k), \qquad P_k^- = A P_{k-1} A^T + Q$$
Corrector:
$$\hat{x}_k = \hat{x}_k^- + K_k (z_k - h(\hat{x}_k^-)), \qquad P_k = (I - K_k H) P_k^-$$
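A sketch of one EKF cycle in numpy. Everything concrete here (function names, the identity dynamics, the range-only sensor) is a hypothetical example, not from the slides; the point is that the nonlinear $f$ and $h$ propagate the state, while their Jacobians $A$ and $H$ propagate the covariance:

```python
import numpy as np

def ekf_step(x_hat, P, u, z, f, h, A, H, Q, R):
    """One Extended Kalman Filter cycle. A and H are the Jacobians of f
    and h, each a function of the point where they are evaluated."""
    # Predict through the nonlinear process model
    x_minus = f(x_hat, u)
    Ak = A(x_hat, u)
    P_minus = Ak @ P @ Ak.T + Q
    # Linearize the measurement model at the predicted state
    Hk = H(x_minus)
    K = P_minus @ Hk.T @ np.linalg.inv(Hk @ P_minus @ Hk.T + R)
    # Correct using the nonlinear measurement prediction h(x_minus)
    x_new = x_minus + K @ (z - h(x_minus))
    P_new = (np.eye(len(x_hat)) - K @ Hk) @ P_minus
    return x_new, P_new

# Toy example: 2-D position state, identity dynamics, range-only sensor.
f = lambda x, u: x + u
A = lambda x, u: np.eye(2)
h = lambda x: np.array([np.hypot(x[0], x[1])])
H = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])

Q = 0.01 * np.eye(2)
R = np.array([[0.01]])

x_hat, P = np.array([2.5, 4.5]), np.eye(2)
true_range = 5.0    # e.g. the target really is at distance 5
x_hat, P = ekf_step(x_hat, P, np.zeros(2), np.array([true_range]),
                    f, h, A, H, Q, R)
```

A range-only sensor cannot pin down the bearing, so a single step pulls the estimate toward the measured range along the linearized direction rather than to a unique point.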
TTD:
Intuitive explanations for $A P A^T$ and $H P H^T$ in the update equations.