
Estimation and Detection (ET 4386)

Estimation Theory

Examples

1. Ranging
2. Localization
3. Communications
4. Speech
5. Imaging
6. ...

Estimation Theory
Example: estimation of the mean

Given a process (DC level in noise):

    x[n] = A + w[n] ,    n = 0, ..., N-1 ,

where w[n] is zero-mean noise. How can we estimate A?

Consider two possible estimators:

1. Choose Â1 = (1/N) Σ_{n=0}^{N-1} x[n]

2. Or choose Â2 = x[0]

Which estimator is better?
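As a rough illustration (not part of the slides), the sketch below implements both candidate estimators in Python on one simulated record; the values of A, sigma and N are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    A, sigma, N = 1.0, 0.5, 100                 # assumed true DC level, noise std, record length
    x = A + sigma * rng.standard_normal(N)      # x[n] = A + w[n], w[n] zero-mean white noise

    A_hat1 = x.mean()                           # estimator 1: sample mean (1/N) * sum of x[n]
    A_hat2 = x[0]                               # estimator 2: first sample only
    print(A_hat1, A_hat2)

A single record already suggests that the sample mean is closer to A, but one draw proves nothing; the next slide compares the estimators statistically.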

Introduction
Look at the mean:

    E(Â1) = E( (1/N) Σ_{n=0}^{N-1} x[n] ) = (1/N) Σ_{n=0}^{N-1} E(x[n]) = A

and also

    E(Â2) = E(x[0]) = A
Both estimators are unbiased.
Look at the variance; define σ² = var(w[n]):

    var(Â1) = var( (1/N) Σ_{n=0}^{N-1} x[n] ) = (1/N²) Σ_{n=0}^{N-1} var(x[n]) = σ²/N

whereas

    var(Â2) = var(x[0]) = σ²

Â1 has the smaller variance. Also, var(Â1) → 0 as N → ∞: the estimator is consistent.
Conclusions: (i) estimators are random variables; (ii) what are optimal estimators?
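A minimal Monte Carlo check of these claims (a sketch with assumed values, not part of the slides): both estimators should average to A, while their sample variances should be close to σ²/N and σ², respectively.

    import numpy as np

    rng = np.random.default_rng(1)
    A, sigma, N, trials = 1.0, 0.5, 100, 20000
    x = A + sigma * rng.standard_normal((trials, N))   # one simulated record per row

    A_hat1 = x.mean(axis=1)                            # sample-mean estimator per record
    A_hat2 = x[:, 0]                                   # first-sample estimator per record

    print(A_hat1.mean(), A_hat2.mean())                # both close to A = 1.0 (unbiased)
    print(A_hat1.var(), sigma**2 / N)                  # both close to 0.0025
    print(A_hat2.var(), sigma**2)                      # both close to 0.25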

Introduction
Problem Statement

Suppose we have an unknown scalar parameter θ that we want to estimate from an
observed stochastic vector x, which is related to θ through its pdf (the pdf is
parametrized by θ):

    x ~ p(x; θ)

The estimator is of the form

    θ̂ = g(x)

Note that θ̂ itself is a random variable.
Hence, the performance of the estimator should be described statistically.

Estimation Techniques

We can view the unknown parameter θ as a deterministic variable:


Minimum Variance Unbiased (MVU) Estimator
Best Linear Unbiased Estimator (BLUE)
Maximum Likelihood Estimator (MLE)
Least Squares Estimator (LSE)

The Bayesian philosophy: θ is viewed as a random variable, p(x, θ) = p(x|θ) p(θ)


Minimum Mean Square Error (MMSE) Estimator
Maximum A Posteriori (MAP) Estimator
Linear Minimum Mean Square Error (LMMSE) Estimator

Minimum Variance Unbiased Estimation

A natural criterion that comes to mind is the Mean Square Error (MSE):
    mse(θ̂) = E[ (θ̂ − θ)² ] = E[ ( (θ̂ − E(θ̂)) + (E(θ̂) − θ) )² ]
            = E[ (θ̂ − E(θ̂))² ] + (E(θ̂) − θ)² = var(θ̂) + (E(θ̂) − θ)²

(the cross term vanishes because E[θ̂ − E(θ̂)] = 0; the first term is the variance,
the second the squared bias)

The MSE depends not only on the variance but also on the bias, a function of the
unknown θ. This means that an estimator that tries to minimize the MSE will often
depend on the parameter θ, and is therefore unrealizable.
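The decomposition is easy to verify numerically. The sketch below (assumed values, not from the slides) uses a deliberately biased estimator Â = a·(sample mean) and checks that the empirical MSE equals the empirical variance plus the squared bias.

    import numpy as np

    rng = np.random.default_rng(2)
    A, sigma, N, trials, a = 1.0, 0.5, 20, 50000, 0.8   # a != 1 makes the estimator biased
    x = A + sigma * rng.standard_normal((trials, N))
    A_hat = a * x.mean(axis=1)

    mse = np.mean((A_hat - A) ** 2)                     # empirical mean square error
    var = A_hat.var()                                   # empirical variance
    bias = A_hat.mean() - A                             # empirical bias
    print(mse, var + bias ** 2)                         # agree up to Monte Carlo error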

Minimum Variance Unbiased Estimation


Example:

Consider the model (DC level in white noise)

    x[n] = A + w[n] ,    n = 0, ..., N-1 ,    w[n] ~ N(0, σ²)

Let us try an estimator of A as

    Â = a · (1/N) Σ_{n=0}^{N-1} x[n] ,    for some constant a

We want to find the a that results in the minimum MSE.

Since E(Â) = aA and var(Â) = a²σ²/N, we find

    mse(Â) = a²σ²/N + (a − 1)² A²

Differentiate the MSE with respect to a:

    d mse(Â)/da = 2aσ²/N + 2(a − 1) A²

Setting this derivative to zero yields

    a_opt = A² / (A² + σ²/N)

The optimal value of a depends on the unknown parameter A, therefore we cannot
realize this estimator.
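A quick numerical check (assumed values, not from the slides) that the closed-form a_opt indeed minimizes the analytic MSE a²σ²/N + (a − 1)²A²:

    import numpy as np

    A, sigma, N = 1.0, 0.5, 10
    a = np.linspace(0.0, 1.5, 10001)                    # grid of candidate gains
    mse = a**2 * sigma**2 / N + (a - 1)**2 * A**2       # analytic MSE for each a

    a_opt = A**2 / (A**2 + sigma**2 / N)
    print(a[np.argmin(mse)], a_opt)                     # grid minimizer vs. closed form

In the sketch the true A is known, which is precisely what is unavailable in practice.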

Minimum Variance Unbiased Estimation


In practice, we cannot compute minimum MSE estimators.

Solution: constrain the bias to zero and choose an estimator that minimizes the
variance. This leads to the so-called Minimum Variance Unbiased (MVU) estimator:

    unbiased:            E(θ̂) = θ              for all θ
    minimum variance:    var(θ̂) is minimal     for all θ

Remark: The MVU does not always exist and is generally difficult to find.

[Figure: var(θ̂) versus θ for three candidate estimators θ̂1, θ̂2, θ̂3.
Left: θ̂3 has the lowest variance for every θ and is the MVU estimator.
Right: no estimator has the lowest variance for all θ, so no MVU estimator exists.]

Minimum Variance Unbiased Estimation


Counter example

Consider two independent observations x and y:

    x ~ N(θ, 1) ,      y ~ N(θ, 1)  for θ ≥ 0
                       y ~ N(θ, 2)  for θ < 0

Consider two possible estimators for θ:

    θ̂1 = (x + y)/2 ,      θ̂2 = (2/3) x + (1/3) y

Both estimators are unbiased. Look at the variances:

    var(θ̂1) = (1/4)(var(x) + var(y)) = 18/36  for θ ≥ 0 ,    27/36  for θ < 0

    var(θ̂2) = (4/9) var(x) + (1/9) var(y) = 20/36  for θ ≥ 0 ,    24/36  for θ < 0

Neither estimator has the smaller variance for every θ: between these two
estimators, there is no MVU estimator.
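The variance table can be reproduced with a short Monte Carlo sketch (assumed seed and θ values, not from the slides):

    import numpy as np

    rng = np.random.default_rng(3)
    trials = 200_000

    def variances(theta):
        var_y = 1.0 if theta >= 0 else 2.0              # y ~ N(theta, 1) or N(theta, 2)
        x = theta + rng.standard_normal(trials)
        y = theta + np.sqrt(var_y) * rng.standard_normal(trials)
        t1 = 0.5 * (x + y)                              # theta_hat_1
        t2 = (2.0 / 3.0) * x + (1.0 / 3.0) * y          # theta_hat_2
        return t1.var(), t2.var()

    print(variances(+1.0))   # ~ (18/36, 20/36): theta_hat_1 is better for theta >= 0
    print(variances(-1.0))   # ~ (27/36, 24/36): theta_hat_2 is better for theta < 0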



Cramér-Rao Lower Bound


The CRLB is a lower bound on the variance of any unbiased estimator.

Assume the pdf p(x; θ) satisfies the regularity condition

    E[ ∂ ln p(x; θ)/∂θ ] = 0    for all θ

The variance of any unbiased estimator then satisfies

    var(θ̂) ≥ 1 / E[ (∂ ln p(x; θ)/∂θ)² ] = −1 / E[ ∂² ln p(x; θ)/∂θ² ]

An unbiased estimator that attains the bound for all θ can be found iff

    ∂ ln p(x; θ)/∂θ = I(θ) (g(x) − θ)

for some functions g and I. The estimator then is θ̂ = g(x), with mean E(θ̂) = θ
and variance var(θ̂) = 1/I(θ).
An estimator is called efficient if it meets the CRLB with equality. In that case it is
the MVU (the converse is not necessarily true).
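As a standard worked example (not spelled out on this slide), consider again the DC level in white Gaussian noise, x[n] = A + w[n] with w[n] ~ N(0, σ²) i.i.d. Then

    ln p(x; A) = −(N/2) ln(2πσ²) − (1/(2σ²)) Σ_{n=0}^{N-1} (x[n] − A)²

    ∂ ln p(x; A)/∂A = (1/σ²) Σ_{n=0}^{N-1} (x[n] − A) = (N/σ²) (x̄ − A) ,    x̄ = (1/N) Σ_{n=0}^{N-1} x[n]

which is exactly of the form I(A)(g(x) − A) with I(A) = N/σ² and g(x) = x̄, the sample mean. Hence the CRLB equals σ²/N and the sample mean is efficient; compare the variance σ²/N found in the introduction.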

Cramér-Rao Lower Bound


The CRLB can be proven starting from the unbiased assumption:

    ∫ (θ̂ − θ) p(x; θ) dx = 0    for all θ

Differentiating with respect to θ gives

    ∂/∂θ ∫ (θ̂ − θ) p(x; θ) dx = 0

    −∫ p(x; θ) dx + ∫ (θ̂ − θ) (∂p(x; θ)/∂θ) dx = 0

    ∫ (θ̂ − θ) (∂ ln p(x; θ)/∂θ) p(x; θ) dx = 1

    ∫ (θ̂ − θ)√p(x; θ) · (∂ ln p(x; θ)/∂θ)√p(x; θ) dx = 1

Let us now use the Cauchy-Schwarz inequality

    ( ∫ f(x) g(x) dx )² ≤ ∫ f²(x) dx · ∫ g²(x) dx

Then we obtain

    ∫ (θ̂ − θ)² p(x; θ) dx · ∫ (∂ ln p(x; θ)/∂θ)² p(x; θ) dx ≥ 1

    var(θ̂) ≥ 1 / E[ (∂ ln p(x; θ)/∂θ)² ]

Cramér-Rao Lower Bound

Let us now prove that

    E[ ∂² ln p(x; θ)/∂θ² ] = −E[ (∂ ln p(x; θ)/∂θ)² ]

Proof: From the regularity condition we obtain

    E[ ∂ ln p(x; θ)/∂θ ] = 0

    ∫ (∂ ln p(x; θ)/∂θ) p(x; θ) dx = 0

Differentiating with respect to θ,

    ∫ [ (∂² ln p(x; θ)/∂θ²) p(x; θ) + (∂ ln p(x; θ)/∂θ) (∂p(x; θ)/∂θ) ] dx = 0

and since ∂p(x; θ)/∂θ = (∂ ln p(x; θ)/∂θ) p(x; θ),

    ∫ (∂² ln p(x; θ)/∂θ²) p(x; θ) dx = −∫ (∂ ln p(x; θ)/∂θ)² p(x; θ) dx

    E[ ∂² ln p(x; θ)/∂θ² ] = −E[ (∂ ln p(x; θ)/∂θ)² ]
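A numerical sanity check of this identity for the DC-level example (assumed values, not from the slides): there the score is ∂ ln p(x; A)/∂A = N(x̄ − A)/σ² and ∂² ln p(x; A)/∂A² = −N/σ², so E[(∂ ln p/∂A)²] should equal N/σ².

    import numpy as np

    rng = np.random.default_rng(4)
    A, sigma, N, trials = 1.0, 0.5, 10, 200_000
    x = A + sigma * rng.standard_normal((trials, N))

    score = N * (x.mean(axis=1) - A) / sigma**2   # d ln p(x; A) / dA for each record
    print(np.mean(score))                         # regularity condition: close to 0
    print(np.mean(score**2), N / sigma**2)        # both close to 40 = -E[d^2 ln p / dA^2]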


Cramér-Rao Lower Bound


If

    ∂ ln p(x; θ)/∂θ = I(θ) (g(x) − θ) ,

it is easy to show that

    E(θ̂) = θ ,      var(θ̂) = E[ (∂ ln p(x; θ)/∂θ)² ] / I²(θ)

So to prove that var(θ̂) = 1/I(θ), it remains to be proven that

    I(θ) = E[ (∂ ln p(x; θ)/∂θ)² ] = −E[ ∂² ln p(x; θ)/∂θ² ]

Proof:

    ∂ ln p(x; θ)/∂θ = I(θ) (g(x) − θ)

    ∂² ln p(x; θ)/∂θ² = (∂I(θ)/∂θ) (g(x) − θ) − I(θ)

Taking expectations and using E[g(x) − θ] = 0,

    E[ ∂² ln p(x; θ)/∂θ² ] = −I(θ)
