S. Lall, Stanford
2011.02.02.01
10 - MMSE Estimation
Estimation given a pdf
Minimizing the mean square error
The minimum mean square error (MMSE) estimator
The MMSE and the mean-variance decomposition
Example: uniform pdf on the triangle
Example: uniform pdf on an L-shaped region
Example: Gaussian
Posterior covariance
Bias
Estimating a linear function of the unknown
MMSE and MAP estimation
Estimation given a pdf

Suppose x is a random variable on R^n with pdf p_x. We would like to choose a constant estimate x̂ in R^n that minimizes the mean square error

    E‖x − x̂‖² = ∫ ‖x − x̂‖² p_x(x) dx
Minimizing the mean square error

We have

    E‖x − x̂‖² = E (x − x̂)ᵀ(x − x̂)
              = E(xᵀx) − 2 x̂ᵀ E x + x̂ᵀ x̂
              = E‖x‖² − 2 x̂ᵀ E x + x̂ᵀ x̂

This is a convex quadratic in x̂; setting its gradient −2 E x + 2 x̂ to zero shows the minimizer is x̂_mmse = E x, and the minimum value is

    E‖x − x̂_mmse‖² = E‖x − E x‖²
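The claim that the mean minimizes the mean square error is easy to check numerically. A minimal sketch in numpy; the distribution (a Gaussian cloud with mean [1, 2]) and the perturbations tried are arbitrary choices for illustration:

```python
import numpy as np

# Check numerically that x_hat = E[x] minimizes E||x - x_hat||^2.
rng = np.random.default_rng(0)
x = rng.normal(loc=[1.0, 2.0], scale=1.0, size=(100_000, 2))

def mse(x_hat):
    """Empirical E||x - x_hat||^2 for a constant estimate x_hat."""
    return np.mean(np.sum((x - x_hat) ** 2, axis=1))

x_mean = x.mean(axis=0)
# Any perturbation away from the sample mean increases the MSE.
for delta in [np.array([0.5, 0.0]), np.array([-0.3, 0.2])]:
    assert mse(x_mean) < mse(x_mean + delta)

# The minimum value is E||x - E x||^2 = trace cov(x).
print(mse(x_mean), np.trace(np.cov(x.T)))
```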
The MMSE and the mean-variance decomposition

For any random variable z,

    E‖z‖² = E‖z − E z‖² + ‖E z‖²

Applying this to z = x − x̂ gives

    E‖x − x̂‖² = E‖x − E x‖² + ‖x̂ − E x‖²

The second term is the (squared) bias of x̂; the best we can do is make this zero, by choosing x̂ = E x.
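The mean-variance decomposition holds exactly for empirical moments as well, so it can be checked on any sample. A minimal sketch (the sample size and distribution are arbitrary):

```python
import numpy as np

# Empirical check of the mean-variance decomposition
#   E||z||^2 = E||z - E z||^2 + ||E z||^2
rng = np.random.default_rng(1)
z = rng.normal(loc=[2.0, -1.0], scale=[1.0, 3.0], size=(50_000, 2))

lhs = np.mean(np.sum(z ** 2, axis=1))
z_mean = z.mean(axis=0)
rhs = np.mean(np.sum((z - z_mean) ** 2, axis=1)) + np.sum(z_mean ** 2)
# The identity is algebraic, so it holds to floating-point precision
# for the sample moments themselves.
assert abs(lhs - rhs) < 1e-8
```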
The minimum mean square error (MMSE) estimator

Now suppose x and y are jointly distributed random variables with joint pdf p(x, y), and we measure y. An estimator is a function φ : R^m → R^n; given the measurement, our estimate of x is x̂ = φ(y). We would like to choose φ to minimize

    J = E‖φ(y) − x‖²
Notation

We'll use the following notation.

p_y is the marginal or induced pdf of y:

    p_y(y) = ∫ p(x, y) dx

p_{x|y} is the conditional pdf of x given y:

    p_{x|y}(x | y) = p(x, y) / p_y(y)
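These definitions translate directly into grid computations when the joint pdf is discretized. A sketch, assuming numpy; the joint used here is the uniform density on the triangle { 0 ≤ y ≤ x ≤ 1 } (p = 2), chosen to match the triangle example later in these notes:

```python
import numpy as np

# Marginal p_y and conditional mean E(x | y) from a gridded joint pdf.
n = 400
xs = (np.arange(n) + 0.5) / n          # grid midpoints in (0, 1)
ys = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xs, ys, indexing="ij")
p = np.where(Y <= X, 2.0, 0.0)         # joint pdf p(x, y)

dx = dy = 1.0 / n
p_y = p.sum(axis=0) * dx                     # marginal: integrate over x
cond_mean = (X * p).sum(axis=0) * dx / p_y   # E(x|y) = ∫ x p(x,y) dx / p_y(y)

# For this triangle, p_y(y) = 2(1 - y) and E(x | y) = (1 + y)/2.
k = n // 4                             # grid point near y = 0.25
print(p_y[k], cond_mean[k])
```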
10 - 8 MMSE Estimation
k (y)
because
J=
=
Z Z
Z
k (y)
xk2 p(x, y) dx dy
py (y) econd(y) dy
Let h(w) = E( x | y = w ) denote the conditional mean of x given y = w. Applying the mean-variance decomposition to x conditioned on y = w gives

    e_cond(w) = E( ‖φ(y) − x‖² | y = w )
              = E( ‖x − h(w)‖² | y = w ) + ‖φ(w) − h(w)‖²

The second term is zero when φ(w) = h(w), so for every w the conditional error is minimized by choosing φ(w) = E( x | y = w ), and the minimum value is

    E( ‖x − h(w)‖² | y = w ) = trace cov( x | y = w )
The MMSE estimator is therefore

    φ_mmse(y_meas) = E( x | y = y_meas )

We often write x̂_mmse = φ_mmse(y_meas); the corresponding conditional error is

    e_cond(y_meas) = E( ‖x − x̂_mmse‖² | y = y_meas )
Example: uniform pdf on the triangle

Suppose (x, y) is uniform on the triangle { (x, y) : 0 ≤ y ≤ x ≤ 1 }, so p(x, y) = 2 on this region. Given y = y_meas, the conditional pdf of x is uniform on [y_meas, 1], so

    x̂_mmse = E( x | y = y_meas ) = (1 + y_meas) / 2

with conditional error

    e_cond(y_meas) = E( ‖x − x̂_mmse‖² | y = y_meas ) = (1/12) (y_meas − 1)²

The overall mean square error of the MMSE estimator is

    E‖φ_mmse(y) − x‖² = E e_cond(y)
                      = ∫_{x=0}^{1} ∫_{y=0}^{x} (1/6) (y − 1)² dy dx
                      = 1/24

where the integrand is p(x, y) e_cond(y).
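The 1/24 figure can be checked by Monte Carlo. A sketch, assuming numpy; sorting two independent uniforms is one standard way to sample uniformly on the triangle:

```python
import numpy as np

# Monte Carlo check of the triangle example: sample (x, y) uniformly on
# {0 <= y <= x <= 1}, apply the estimator (1 + y)/2, and compare the
# empirical mean square error with 1/24.
rng = np.random.default_rng(2)
u = rng.uniform(size=(200_000, 2))
x = np.maximum(u[:, 0], u[:, 1])   # sorted pair of iid uniforms is
y = np.minimum(u[:, 0], u[:, 1])   # uniform on the triangle y <= x

x_hat = (1.0 + y) / 2.0
mse = np.mean((x - x_hat) ** 2)
print(mse)    # close to 1/24
```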
Example: uniform pdf on an L-shaped region

Suppose (x, y) is uniform on the L-shaped region obtained by deleting the quarter { x < 1/2, y > 1/2 } from the unit square [0, 1]². The region has area 3/4, so p(x, y) = 4/3 on it. Given y = y_meas, x is uniform on [0, 1] when 0 ≤ y_meas ≤ 1/2, and uniform on [1/2, 1] when 1/2 < y_meas ≤ 1. Hence

    x̂_mmse = E( x | y = y_meas ) = 1/2   if 0 ≤ y_meas ≤ 1/2
                                    3/4   if 1/2 < y_meas ≤ 1

with conditional error

    e_cond(y_meas) = 1/12   if 0 ≤ y_meas ≤ 1/2
                     1/48   if 1/2 < y_meas ≤ 1

The overall mean square error is

    E‖φ_mmse(y) − x‖² = E e_cond(y)
      = (1/12) ∫_{x=0}^{1} ∫_{y=0}^{1/2} p(x, y) dy dx + (1/48) ∫_{x=1/2}^{1} ∫_{y=1/2}^{1} p(x, y) dy dx
      = (1/12)(2/3) + (1/48)(1/3)
      = 1/16
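As with the triangle, the 1/16 figure is easy to confirm by Monte Carlo. A sketch, assuming numpy, using rejection sampling from the unit square:

```python
import numpy as np

# Monte Carlo check of the L-shaped example: sample uniformly on the
# unit square, reject points in the deleted upper-left quarter
# (x < 1/2 and y > 1/2), then apply the piecewise MMSE estimator.
rng = np.random.default_rng(3)
pts = rng.uniform(size=(400_000, 2))
x, y = pts[:, 0], pts[:, 1]
keep = ~((x < 0.5) & (y > 0.5))        # L-shaped region, area 3/4
x, y = x[keep], y[keep]

x_hat = np.where(y <= 0.5, 0.5, 0.75)  # E(x | y) on each branch
mse = np.mean((x - x_hat) ** 2)
print(mse)    # close to 1/16
```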
Example: Gaussian

Suppose (x, y) is jointly Gaussian,

    [x; y] ~ N(μ, Σ),   μ = [μ_x; μ_y],   Σ = [Σ_x, Σ_xy; Σ_xyᵀ, Σ_y]

Then the conditional pdf of x given y = y_meas is Gaussian, with mean

    x̂_mmse = E( x | y = y_meas ) = μ_x + Σ_xy Σ_y⁻¹ (y_meas − μ_y)
The conditional error is

    e_cond(y_meas) = trace( Σ_x − Σ_xy Σ_y⁻¹ Σ_xyᵀ )

which does not depend on y_meas, so the overall mean square error is also

    E‖φ_mmse(y) − x‖² = trace( Σ_x − Σ_xy Σ_y⁻¹ Σ_xyᵀ )
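The Gaussian error formula can be checked by Monte Carlo. A sketch, assuming numpy; the joint covariance below (x in R², y in R¹) is a made-up positive definite example, not one from these notes:

```python
import numpy as np

# Monte Carlo check of
#   E||phi_mmse(y) - x||^2 = trace(Sigma_x - Sigma_xy Sigma_y^{-1} Sigma_xy^T)
rng = np.random.default_rng(5)
Sigma = np.array([[2.0, 0.5, 1.0],
                  [0.5, 1.5, 0.8],
                  [1.0, 0.8, 2.5]])    # blocks: x = first 2 coords, y = last
z = rng.multivariate_normal(np.zeros(3), Sigma, size=300_000)
x, y = z[:, :2], z[:, 2:]

Sxy = Sigma[:2, 2:]                     # Sigma_xy (2x1)
Sy = Sigma[2:, 2:]                      # Sigma_y  (1x1)
x_hat = y @ np.linalg.solve(Sy, Sxy.T)  # phi_mmse(y) = Sigma_xy Sigma_y^{-1} y
emp = np.mean(np.sum((x - x_hat) ** 2, axis=1))
theory = np.trace(Sigma[:2, :2] - Sxy @ np.linalg.solve(Sy, Sxy.T))
print(emp, theory)
```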
Posterior covariance
Let's look at the error z = φ_mmse(y) − x. In the Gaussian case above, the posterior covariance

    cov( x | y = y_meas ) = Σ_x − Σ_xy Σ_y⁻¹ Σ_xyᵀ

does not depend on the measured value y_meas.
Here

    [x; y] ~ N(0, Σ),   Σ = [2.8, 2.4; 2.4, 3]

We measure y_meas = 2. Then

    x̂_mmse = Σ₁₂ Σ₂₂⁻¹ y_meas = 0.8 y_meas = 1.6

and the posterior variance is

    Σ_x − Σ_xy Σ_y⁻¹ Σ_yx = 2.8 − (2.4)² / 3 = 0.88
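The numbers in this example can be computed directly from the formulas:

```python
import numpy as np

# The scalar Gaussian example: Sigma = [2.8, 2.4; 2.4, 3], y_meas = 2.
Sigma = np.array([[2.8, 2.4],
                  [2.4, 3.0]])
y_meas = 2.0
x_hat = Sigma[0, 1] / Sigma[1, 1] * y_meas              # 0.8 * y_meas
post_var = Sigma[0, 0] - Sigma[0, 1] ** 2 / Sigma[1, 1]
print(x_hat, post_var)    # approximately 1.6 and 0.88
```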
Bias

The MMSE estimator is unbiased; that is, the mean error is zero:

    E( φ_mmse(y) − x ) = 0
Estimating a linear function of the unknown

Suppose q = Cx is a random variable; we measure y and would like to estimate q. The optimal estimator is

    q̂_mmse = C E( x | y = y_meas )

because E( q | y = y_meas ) = C E( x | y = y_meas ).

The optimal estimate of Cx is C times the optimal estimate of x. This only works for linear functions of x.
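The identity E(q | y) = C E(x | y) can be illustrated empirically by conditioning on y falling in a small bin. A sketch, assuming numpy, using the scalar Gaussian pair from the example above and an arbitrary C mapping R¹ to R²:

```python
import numpy as np

# For q = C x, conditional means satisfy E(q | y) = C E(x | y).
rng = np.random.default_rng(6)
Sigma = np.array([[2.8, 2.4],
                  [2.4, 3.0]])
xy = rng.multivariate_normal(np.zeros(2), Sigma, size=400_000)
x, y = xy[:, 0], xy[:, 1]
C = np.array([2.0, -1.0])               # q = C x lies in R^2

sel = np.abs(y - 2.0) < 0.05            # samples with y near y_meas = 2
x_cond = x[sel].mean()                  # approx E(x | y = 2) = 1.6
q_cond = np.outer(C, x[sel]).mean(axis=1)   # approx E(q | y = 2)
print(x_cond, q_cond)                   # q_cond matches C * x_cond
```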