
Unit 1: Minimum Variance Unbiased Estimator

Sau-Hsuan Wu
Institute of Communications Engineering, ECE, NCTU

What is estimation all about?

We want to know (or estimate) an unknown quantity θ

This quantity θ may not be obtainable directly, or it is a
notion (quantity) derived from a set of observations
In any event, we need to retrieve the value of θ from a set of
observations, x = {x[0], …, x[N-1]}, which are related to θ
We hope to determine θ according to x

How do we start?

Suppose we have some knowledge about the collected data


Examples:

x[n] = A1 + w[n],        n = 0, …, N-1
x[n] = A2 + B2·n + w[n], n = 0, …, N-1

Is the sample mean Â1 = (1/N) Σ x[n] a reasonable estimator of A1, and what about Â2?

An estimator may be thought of as a rule that assigns a
value to θ for each realization of x
The estimate of θ is the value of θ̂ for a given realization
of x
How do we assess the performance of estimation?

What are we concerned about most in the estimate?

How close will Â be to A?
Are there better estimators than the sample mean, for instance
Ǎ = x[0]?
What are the differences between Â1 and Â2?


Can we have some performance measure?

We model the data by their probability density function
(PDF), assuming that the data are inherently random

As an example, p(x[0]; θ) = (1/√(2πσ²)) exp[-(x[0] - θ)²/(2σ²)]

We have a class of PDFs where each one is different due to
a different value of θ, i.e. the PDFs are parameterized by θ
The parameter θ is assumed to be deterministic but
unknown
Given the PDF, we calculate some statistics of the estimates

E(Â1) = E(Â2)?
Var(Â1) = Var(Â2)?
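To make these statistics concrete, here is a minimal Monte Carlo sketch for the DC-level model x[n] = A + w[n] with w[n] ~ N(0, σ²); the values A = 1, σ² = 1, N = 20 are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma2, N, trials = 1.0, 1.0, 20, 100_000

# trials realizations of x[n] = A + w[n], n = 0, ..., N-1
x = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

A1 = x.mean(axis=1)  # sample-mean estimator
A2 = x[:, 0]         # single-sample estimator

for name, est in [("sample mean", A1), ("x[0]", A2)]:
    print(f"{name:12s} mean = {est.mean():.4f}, var = {est.var():.4f}")
# Both estimators are unbiased (mean ~ A), but the sample mean's variance
# is ~ sigma2/N while that of x[0] is ~ sigma2.
```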


Note: In an actual problem, we are not given a PDF but
must choose one that is not only consistent with the
problem constraints and any prior knowledge, but one
that is also mathematically tractable
Sometimes, we might want to constrain the estimator to
produce values in a certain range

To incorporate this prior knowledge, we can assume that θ
is no longer deterministic but a random variable having a
uniform distribution over the interval [-U, U], for instance
Assigning a PDF to θ, the data are then described by the joint
PDF p(x, θ) = p(x | θ) p(θ)
Any estimator that yields estimates according to the prior
knowledge of θ is termed a Bayesian estimator


We conclude for the time being that we are hoping to
have an estimator that gives

An unbiased mean for the estimate: E(θ̂) = θ

A minimum variance for the estimate: var(θ̂) as small as possible

Is an unbiased estimator always the optimal estimator?

Consider a widely used optimality criterion:
the minimum mean squared error (MSE) criterion,
mse(θ̂) = E[(θ̂ - θ)²]


As an example, consider the modified estimator Ǎ = a·(1/N) Σ x[n]

We attempt to find the a which yields the minimum MSE

Since E(Ǎ) = aA and var(Ǎ) = a²σ²/N, we have
mse(Ǎ) = a²σ²/N + (a-1)²A²
Taking the derivative w.r.t. a and setting it to zero leads to
a_opt = A²/(A² + σ²/N)
The optimal value of a depends upon the unknown parameter
A ⇒ the estimator is not realizable
How do we resolve this dilemma?

Since mse(θ̂) = var(θ̂) + b²(θ), where b(θ) = E(θ̂) - θ is the bias, as an alternative
we set b = 0 and search for the estimator that minimizes var(θ̂)
⇒ the minimum variance unbiased (MVU) estimator
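A quick numeric sketch of this dilemma, evaluating the analytic MSE above on a grid of a (the model values A, σ², N below are illustrative assumptions):

```python
import numpy as np

A, sigma2, N = 1.0, 2.0, 10            # illustrative values
a = np.linspace(0.0, 1.5, 301)

mse = a**2 * sigma2 / N + (a - 1)**2 * A**2   # mse(A_check) derived above
a_opt = A**2 / (A**2 + sigma2 / N)            # analytic minimizer

print(f"a_opt = {a_opt:.4f}, grid minimizer = {a[np.argmin(mse)]:.4f}")
# a_opt depends on the unknown A itself, so the minimum-MSE scaling
# cannot be implemented: this is the dilemma the MVU approach avoids.
```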


However, does an MVU estimator always exist for all θ?

Example:

x[0] ~ N(θ, 1)
x[1] ~ N(θ, 1) if θ ≥ 0
     ~ N(θ, 2) if θ < 0
Consider θ̂1 = (x[0] + x[1])/2 and θ̂2 = (2x[0] + x[1])/3
Both of the two estimators are unbiased

Therefore, checking the variances in the two regimes:
var(θ̂1) = 18/36 for θ ≥ 0 and 27/36 for θ < 0
var(θ̂2) = 20/36 for θ ≥ 0 and 24/36 for θ < 0
Neither estimator has a variance uniformly less than or
equal to 18/36 ⇒ an MVU estimator need not exist


Cramer-Rao Lower Bound


How do we measure the accuracy of estimation?

Consider a single sample of observation x[0] = θ + w[0]
with w[0] ~ N(0, σ²)

The unbiased estimator is θ̂ = x[0] and its variance is σ²

The accuracy of estimation improves as σ² decreases

We can see this from the likelihood function (LK) of θ,
p(x[0]; θ) viewed as a function of θ


The accuracy of estimation increases with the sharpness
of the LK, which is inversely proportional to σi² in this case
Actually, the variance σi² in this case is the negative inverse
of the second derivative of the logarithm of the likelihood
function (LLK):
σi² = -[∂² ln p(x[0]; θ)/∂θ²]⁻¹

Therefore, in a more rigorous definition, the sharpness of
the LK is measured by the curvature of the LLK, i.e.
-∂² ln p(x[0]; θ)/∂θ²

In general, the curvature depends on x[0] as well, thus a
more appropriate measure of curvature is the average
-E[∂² ln p(x[0]; θ)/∂θ²]
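For the single-sample Gaussian example, this curvature can be worked out directly (a standard computation, shown here as a worked step):

```latex
\ln p(x[0];\theta) = -\tfrac{1}{2}\ln(2\pi\sigma^2) - \frac{(x[0]-\theta)^2}{2\sigma^2}
\;\Rightarrow\;
\frac{\partial \ln p}{\partial \theta} = \frac{x[0]-\theta}{\sigma^2},
\qquad
-\frac{\partial^2 \ln p}{\partial \theta^2} = \frac{1}{\sigma^2}
```

Here the second derivative happens not to depend on x[0], but in general it does, hence the expectation.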


Now, we put our problem in a more rigorous statement

Define a scalar parameter α = g(θ)

Define a PDF of the observation, p(x; θ), which is
parameterized by θ
Consider all unbiased estimators α̂, namely E(α̂) = α
The variance of the estimation is given by var(α̂) = E[(α̂ - α)²]

We want to find a lower bound for var(α̂)

How do we start?

The hint from the previous example is that the variance is
inversely proportional to the average curvature of ln p(x; θ)


Therefore, we may mathematically associate the variance
with the average curvature like this:

var(α̂) ≥ (∂α/∂θ)² / ( -E[∂² ln p(x; θ)/∂θ²] )

The larger the curvature, the smaller the lower bound for the
variance of estimation

To prove it, start from the unbiasedness condition ∫ (α̂ - α) p(x; θ) dx = 0;
differentiating w.r.t. θ gives ∫ (α̂ - α) (∂ ln p/∂θ) p dx = ∂α/∂θ
By the Cauchy-Schwarz inequality, we have

(∂α/∂θ)² = [ ∫ (α̂ - α) (∂ ln p(x; θ)/∂θ) p(x; θ) dx ]²
         ≤ ∫ (α̂ - α)² p dx · ∫ (∂ ln p/∂θ)² p dx = var(α̂) · E[(∂ ln p/∂θ)²]


Then, can we relate the square of the first-order derivative
to the second-order derivative? I.e., the connection between
E[(∂ ln p/∂θ)²] and -E[∂² ln p/∂θ²]

Observe that, if the integral and the partial derivative are interchangeable,
∫ p(x; θ) dx = 1 ⇒ ∫ (∂ ln p/∂θ) p dx = E[∂ ln p(x; θ)/∂θ] = 0


Then, differentiating once more, we have
E[(∂ ln p/∂θ)²] = -E[∂² ln p/∂θ²]
supposing that the regularity condition E[∂ ln p(x; θ)/∂θ] = 0 holds

Substituting the results into the Cauchy-Schwarz inequality:


Therefore,

var(α̂) ≥ (∂α/∂θ)² / ( -E[∂² ln p(x; θ)/∂θ²] )

The equality holds if and only if
∂ ln p(x; θ)/∂θ = c(θ)(α̂ - α)
where c might be a function of θ but not of x

If α = g(θ) = θ, then ∂ ln p(x; θ)/∂θ = c(θ)(θ̂ - θ)


Taking the derivative w.r.t. θ on both sides leads to
∂² ln p/∂θ² = (∂c(θ)/∂θ)(θ̂ - θ) - c(θ)
Taking the expectation on both sides, for unbiased
estimators we have
E[∂² ln p/∂θ²] = -c(θ)

Thus, c(θ) = -E[∂² ln p/∂θ²] = I(θ), and var(θ̂) ≥ 1/I(θ)


Clearly, the Cramer-Rao bound (CRB)
var(θ̂) ≥ 1 / ( -E[∂² ln p(x; θ)/∂θ²] )
is valid if the regularity condition E[∂ ln p/∂θ] = 0 holds

Does the regularity condition always hold?

Refer to Leibniz's rule: the interchange of ∂/∂θ and the
integral fails, e.g., when the support of p(x; θ) depends on θ


The gradient, with respect to θ, of the log-likelihood
function (LLK) is called the score:
s(x; θ) = ∂ ln p(x; θ)/∂θ

Under the regularity condition, the expected value of the
score is zero, namely E[s(x; θ)] = 0

The Fisher information is the variance of the score:
I(θ) = var(s(x; θ)) = E[(∂ ln p(x; θ)/∂θ)²]
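A small Monte Carlo sketch of these two facts for N samples of the DC level in WGN, where the score is Σ(x[n] - θ)/σ² (parameter values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
A, sigma2, N, trials = 0.5, 1.0, 50, 200_000

x = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

# Score of the N-sample Gaussian log-likelihood, evaluated at the true theta = A
score = (x - A).sum(axis=1) / sigma2

print(f"E[score]   = {score.mean():+.4f}   (should be ~0)")
print(f"var(score) = {score.var():.2f}  vs  I(theta) = N/sigma2 = {N / sigma2:.2f}")
```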


By definition, I(θ) = E[(∂ ln p(x; θ)/∂θ)²]

If the regularity condition holds, I(θ) = -E[∂² ln p(x; θ)/∂θ²],
and the information is additive for independent x and y:
ln p(x, y; θ) = ln p(x; θ) + ln p(y; θ) ⇒ I_{x,y}(θ) = I_x(θ) + I_y(θ)

Therefore, it is referred to as the information of the data


Cramer-Rao lower bound for a vector parameter

Let the vector α = g(θ) and define

C_α̂ = ∫ (α̂ - α)(α̂ - α)ᵀ p(x; θ) dx

I(θ) = ∫ (∂ ln p(x; θ)/∂θ)(∂ ln p(x; θ)/∂θ)ᵀ p(x; θ) dx

The goal is to prove

C_α̂ - (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ ⪰ 0 (positive semidefinite)

Namely, for arbitrary a,

aᵀ [ C_α̂ - (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ ] a ≥ 0


Cauchy-Schwarz inequality:

[ aᵀ ∫ (α̂ - α)(∂ ln p(x; θ)/∂θ)ᵀ p(x; θ) dx b ]²
≤ [ aᵀ ∫ (α̂ - α)(α̂ - α)ᵀ p dx a ] · [ bᵀ ∫ (∂ ln p/∂θ)(∂ ln p/∂θ)ᵀ p dx b ]
= (aᵀ C_α̂ a)(bᵀ I(θ) b)

For the cross term, using E(α̂) = g(θ) and the regularity condition,

∫ (α̂ - α)(∂ ln p(x; θ)/∂θ)ᵀ p(x; θ) dx = ∫ α̂ (∂p(x; θ)/∂θ)ᵀ dx = ∂g(θ)/∂θ

so the left-hand side equals [ aᵀ (∂g(θ)/∂θ) b ]²


Therefore, by the Cauchy-Schwarz inequality, we have

[ aᵀ (∂g(θ)/∂θ) b ]² ≤ (aᵀ C_α̂ a)(bᵀ I(θ) b)

Besides, since b is arbitrary, we can define

b = I⁻¹(θ) (∂g(θ)/∂θ)ᵀ a

As a result,

[ aᵀ (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ a ]²
≤ (aᵀ C_α̂ a) · [ aᵀ (∂g(θ)/∂θ) I⁻¹(θ) I(θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ a ]
= (aᵀ C_α̂ a) · [ aᵀ (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ a ]


Since I(θ) is positive definite, so is I⁻¹(θ). Cancelling the common
nonnegative factor aᵀ (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ a on both sides gives

aᵀ (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ a ≤ aᵀ C_α̂ a

i.e. aᵀ [ C_α̂ - (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ ] a ≥ 0

Since a is arbitrary, thus

C_α̂ - (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ ⪰ 0 (positive semidefinite)


When g(θ) = θ, we have C_θ̂ - I⁻¹(θ) ⪰ 0

The conditions for equality in the Cauchy-Schwarz step are

aᵀ (α̂ - α) c(θ) = bᵀ ∂ ln p(x; θ)/∂θ, with b = I⁻¹(θ) (∂g(θ)/∂θ)ᵀ a

For arbitrary a, this requires

(α̂ - α) = (1/c(θ)) (∂g(θ)/∂θ) I⁻¹(θ) ∂ ln p(x; θ)/∂θ

Thus, for α = g(θ) = θ,

(θ̂ - θ) = (1/c(θ)) I⁻¹(θ) ∂ ln p(x; θ)/∂θ

Now take the derivative on both sides w.r.t. θ

Writing the equality condition as ∂ ln p(x; θ)/∂θ = c(θ) I(θ)(θ̂ - θ)
and denoting the i-th row of c(θ) I(θ) by c(θ) Iᵢ(θ), we have, row by row,

∂² ln p/∂θᵢ∂θ = ∂[c(θ) Iᵢ(θ)]/∂θ · (θ̂ - θ) - c(θ) Iᵢ(θ)

Taking the expectation removes the first term (unbiasedness), so

E[∂² ln p/∂θᵢ∂θ] = -c(θ) Iᵢ(θ)

Putting the row vectors into a matrix returns

I(θ) = -E[∂² ln p(x; θ)/∂θ∂θᵀ] = c(θ) I(θ) ⇒ c(θ) = 1

Eventually, we arrive at

∂ ln p(x; θ)/∂θ = I(θ)(θ̂ - θ)


For a positive semidefinite matrix A, [A]ii ≥ 0:

since xᵀAx ≥ 0 ∀x, taking x = ei gives eiᵀAei = [A]ii ≥ 0

Therefore, C_θ̂ - I⁻¹(θ) ⪰ 0 ⇒

var(θ̂i) = [C_θ̂]ii ≥ [I⁻¹(θ)]ii

Suppose E(θ̂) = θ and θ̂ achieves the CRLB, i.e. C_θ̂ = I⁻¹(θ)

Now, for a linear transformation α = g(θ) = Aθ + b, the estimator α̂ = Aθ̂ + b gives

E(α̂) = Aθ + b = α (unbiased), and

C_α̂ = A C_θ̂ Aᵀ = A I⁻¹(θ) Aᵀ = (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ ⇒ α̂ achieves the CRLB too


We revisit the line-fitting problem

x[n] = A + Bn + w[n], n = 0, …, N-1, w[n] ~ N(0, σ²)

Let x = [x[0], …, x[N-1]]ᵀ, θ = [A B]ᵀ and

H = [ 1 0; 1 1; … ; 1 N-1 ]   (an N×2 matrix)

∂ ln p(x; θ)/∂θ = Hᵀ(x - Hθ)/σ² = (HᵀH/σ²)[(HᵀH)⁻¹Hᵀx - θ]

Comparing with ∂ ln p/∂θ = I(θ)(θ̂ - θ):

I(θ) = HᵀH/σ²,  θ̂ = (HᵀH)⁻¹Hᵀx,  C_θ̂ = I⁻¹(θ)

An estimator which is unbiased and attains the CRLB
is said to be efficient, in that it efficiently uses the
data
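A minimal sketch of this efficient line-fitting estimator, comparing empirical variances with the CRLB diagonal (all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, sigma2, N, trials = 1.0, 0.5, 1.0, 25, 50_000

n = np.arange(N)
H = np.column_stack([np.ones(N), n])      # rows [1, n]
x = A + B * n + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

# theta_hat = (H^T H)^{-1} H^T x, evaluated for every realization at once
theta_hat = x @ H @ np.linalg.inv(H.T @ H)

crlb = sigma2 * np.linalg.inv(H.T @ H)    # C_theta_hat = I^{-1}(theta)
print("empirical var(A_hat, B_hat):", theta_hat.var(axis=0))
print("CRLB diagonal:              ", np.diag(crlb))
```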


Efficiency is maintained over a linear transformation

Let us take a closer look at I(θ) = HᵀH/σ²:

I(θ) = (1/σ²) [ N          N(N-1)/2
                N(N-1)/2   N(N-1)(2N-1)/6 ]

I⁻¹(θ) = [ 2(2N-1)σ²/(N(N+1))   -6σ²/(N(N+1))
           -6σ²/(N(N+1))         12σ²/(N(N²-1)) ]

Compared with x[n] = A + w[n], in which var(Â) = σ²/N, now

var(Â) = 2(2N-1)σ²/(N(N+1)) > σ²/N for N ≥ 2, and var(B̂) = 12σ²/(N(N²-1))

Thus, the CRLB always increases as we estimate more
parameters


Extension to the linear model

x = Hθ + w with w ~ N(0, C)
To derive the MVU estimator, we can use a whitening approach
Let C⁻¹ = DᵀD ⇒ D E{w wᵀ} Dᵀ = I
x' = Dx = D(Hθ + w) = H'θ + w' with H' = DH and w' ~ N(0, I)
Based on the previous result, we have

θ̂ = (H'ᵀH')⁻¹ H'ᵀ x' = (HᵀDᵀDH)⁻¹ HᵀDᵀD x = (HᵀC⁻¹H)⁻¹ HᵀC⁻¹ x

and

C_θ̂ = (H'ᵀH')⁻¹ = (HᵀDᵀDH)⁻¹ = (HᵀC⁻¹H)⁻¹
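A short sketch comparing the whitening route with the direct formula; the exponentially decaying covariance C below is an illustrative assumption, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
H = np.column_stack([np.ones(N), np.arange(N)])
theta = np.array([1.0, 0.5])

# Toy correlated-noise covariance: C[i, j] = 0.5^{|i-j|}
C = 0.5 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
x = H @ theta + rng.multivariate_normal(np.zeros(N), C)

# Whitening route: C^{-1} = D^T D with D the transposed Cholesky factor of C^{-1}
D = np.linalg.cholesky(np.linalg.inv(C)).T
Hp, xp = D @ H, D @ x
theta_white = np.linalg.solve(Hp.T @ Hp, Hp.T @ xp)

# Direct route: theta_hat = (H^T C^{-1} H)^{-1} H^T C^{-1} x
Ci = np.linalg.inv(C)
theta_direct = np.linalg.solve(H.T @ Ci @ H, H.T @ Ci @ x)

print(theta_white, theta_direct)  # identical up to numerical precision
```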


Ex. Estimation of the signal-to-noise ratio (SNR)

Given x[n] = A + w[n], n = 0, …, N-1, w[n] ~ N(0, σ²)

Let θ = [A σ²]ᵀ and α = g(θ) = θ1²/θ2 = A²/σ²
Then
∂g(θ)/∂θ = [ 2A/σ²   -A²/σ⁴ ]

Recall that
I(θ) = [ N/σ²   0
         0      N/(2σ⁴) ]

var(α̂) ≥ (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)ᵀ
       = (4A²/σ⁴)(σ²/N) + (A⁴/σ⁸)(2σ⁴/N)
       = 4A²/(Nσ²) + 2A⁴/(Nσ⁴) = (4α + 2α²)/N

The variance of estimation increases with the SNR α. Why?

The variance depends on the gradient of g(θ)


Sufficient Statistics


Is there a set of statistics T(x) of x that is sufficient for estimation?

What do we mean by the sufficiency of a set of statistics T(x)?

We want a set of statistics T(x) such that, given T(x), the
data x = [x[0], …, x[N-1]] carry no further information about A

Suppose Â1 = (1/N) Σ x[n]; then are the following sufficient?

S1 = {x[0], x[1], …, x[N-1]} — each element is a statistic
S2 = {x[0]+x[1], x[2], …, x[N-1]}
S3 = {Σ x[n]} — Σ x[n] is also a statistic

Then, what is the minimal set of sufficient statistics?

Given Σ x[n] = T0, do we still need the individual data?


We say the conditional PDF p(x | Σ x[n] = T0; A)
should not depend on A if the statistic Σ x[n] is sufficient
E.g., comparing two candidate conditional PDFs:

For (a), a value of A near A0 is more likely even given T0

For (b), however, p(x | Σ x[n] = T0; A) is constant in A


Now, we need to determine p(x | Σ x[n] = T0; A) to
show that Σ x[n] = T0 is sufficient
By Bayes' rule,
p(x | T(x) = T0; A) = p(x, T(x) = T0; A) / p(T(x) = T0; A)

Since T(x) is a direct function of x,
p(x, T(x) = T0; A) = p(x; A) when T(x) = T0, and 0 otherwise

Clearly, we have
p(x | T(x) = T0; A) = p(x; A) / p(T(x) = T0; A) on the set T(x) = T0


Thus, since T(x) = Σ x[n] ~ N(NA, Nσ²), carrying out the division gives,
for x such that Σ x[n] = T0,

p(x | T(x) = T0; A)
= (2πσ²)^(-N/2) exp[-(1/2σ²) Σ (x[n]-A)²] / { (2πNσ²)^(-1/2) exp[-(T0-NA)²/(2Nσ²)] }
= √N (2πσ²)^(-(N-1)/2) exp[-(1/2σ²)(Σ x²[n] - T0²/N)]

which does not depend on A ⇒ Σ x[n] is a sufficient statistic for A



In general, identifying potential sufficient statistics is
difficult
An efficient procedure for finding sufficient statistics
is to employ the Neyman-Fisher factorization theorem
Observe that
if we can factorize p(x; θ) into p(x; θ) = g(T(x), θ) h(x)
where

g is a function that depends on x only through T(x)
h is a function that depends only on x

then T(x) is a sufficient statistic for θ
The converse is also true:
if T(x) is a sufficient statistic ⇒ p(x; θ) = g(T(x), θ) h(x)
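As a quick check of the theorem on the DC level in WGN (completing the square in the exponent, with T(x) = Σ x[n]):

```latex
p(\mathbf{x};A)
= \frac{1}{(2\pi\sigma^2)^{N/2}}
  \exp\!\Big[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\Big]
= \underbrace{\exp\!\Big[-\frac{1}{2\sigma^2}\big(NA^2-2A\,T(\mathbf{x})\big)\Big]}_{g(T(\mathbf{x}),\,A)}
  \underbrace{\frac{1}{(2\pi\sigma^2)^{N/2}}\exp\!\Big[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}x^2[n]\Big]}_{h(\mathbf{x})}
```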


Recall the form of p(x; A) above

Now, we want to estimate σ² of y[n] = A + x[n]

Suppose A is given; then define x[n] = y[n] - A, so that
p(x; σ²) = (2πσ²)^(-N/2) exp[-(1/2σ²) Σ x²[n]]

Clearly, T(x) = Σ x²[n] is a sufficient statistic for σ²


Proof of the Neyman-Fisher factorization theorem

(⇒)

By assumption, p(x; θ) = g(T(x), θ) h(x)

Since p(T(x) = T0; θ) = ∫_{T(x)=T0} p(x; θ) dx = g(T0, θ) ∫_{T(x)=T0} h(x) dx

We have

p(x | T(x) = T0; θ) = p(x; θ) / p(T(x) = T0; θ)

Then

p(x | T(x) = T0; θ) = h(x) / ∫_{T(x)=T0} h(x) dx

which does not depend on θ

Hence T(x) is a sufficient statistic


(⇐)

Recall p(x; θ) = p(x | T(x) = T0; θ) p(T(x) = T0; θ) on the set T(x) = T0

Suppose T(x) is a sufficient statistic

Then p(x | T(x) = T0; θ) = p(x | T(x) = T0)  (not a function of θ)

We can define p(x | T(x) = T0) = w(x) δ(T(x) - T0) with
∫ w(x) δ(T(x) - T0) dx = 1

Therefore, we can let


As a result, we have

p(x; θ) = p(T(x) = T0; θ) w(x) δ(T(x) - T0)

This holds for arbitrary T0, resulting in
p(x; θ) = g(T(x), θ) h(x)
where g(T(x), θ) = p(T(x); θ) and h(x) = w(x)


Ex. we want to estimate the phase φ of a sinusoid

x[n] = A cos(2πf0 n + φ) + w[n], n = 0, 1, …, N-1, w[n] ~ N(0, σ²)

Suppose A and f0 are given

p(x; φ) = (2πσ²)^(-N/2) exp[-(1/2σ²) Σ (x[n] - A cos(2πf0 n + φ))²]

Expand the exponent:
Σ x²[n] - 2A cos φ Σ x[n] cos(2πf0 n) + 2A sin φ Σ x[n] sin(2πf0 n) + A² Σ cos²(2πf0 n + φ)


In this case, no single sufficient statistic exists; however, with
T1(x) = Σ x[n] cos(2πf0 n) and T2(x) = Σ x[n] sin(2πf0 n),
the PDF factors as p(x; φ) = g(T1(x), T2(x), φ) · h(x)


The r statistics T1(x), T2(x), …, Tr(x) are jointly sufficient
if p(x | T1(x), T2(x), …, Tr(x); θ) does not depend on θ
If p(x; θ) = g(T1(x), T2(x), …, Tr(x), θ) h(x)
⇒ {T1(x), T2(x), …, Tr(x)} are sufficient statistics for θ
Now, we know how to obtain the sufficient statistics
How do we apply them to help obtain the MVU estimator?
The Rao-Blackwell-Lehmann-Scheffe (RBLS) theorem:

If θ̌ is an unbiased estimator of θ and T(x) is a sufficient
statistic for θ, then θ̂ = E(θ̌ | T(x)) is

A valid estimator for θ (not dependent on θ)
Unbiased
Of lesser or equal variance than that of θ̌, for all θ

If T(x) is complete, then θ̂ is the MVU estimator
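A minimal simulation of the theorem's effect, assuming the DC-level model: start from the crude unbiased estimator θ̌ = x[0]; conditioning on the sufficient statistic T = Σ x[n] gives E(x[0] | T) = T/N (by symmetry of the i.i.d. samples), i.e. the sample mean:

```python
import numpy as np

rng = np.random.default_rng(4)
A, sigma2, N, trials = 1.0, 1.0, 20, 200_000

x = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

theta_check = x[:, 0]          # crude unbiased estimator x[0]
theta_hat = x.sum(axis=1) / N  # E(x[0] | T) = T/N, the Rao-Blackwellized version

print(f"var(theta_check) = {theta_check.var():.4f}")  # ~ sigma2
print(f"var(E(.|T))      = {theta_hat.var():.4f}")    # ~ sigma2/N, never larger
```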


Proof

1> Validity

By definition, θ̂ = E(θ̌ | T(x)) = ∫ θ̌(x) p(x | T(x); θ) dx; since
p(x | T(x); θ) is not a function of θ (sufficiency), after the
integration w.r.t. x the result is not a function of θ but of T
2> Unbiasedness

By assumption E(θ̌) = θ, so E(θ̂) = E[E(θ̌ | T(x))] = E(θ̌) = θ


3> Show var(θ̂) ≤ var(θ̌)

Recall that, by the law of total variance,
var(θ̌) = var(E(θ̌ | T)) + E[var(θ̌ | T)] = var(θ̂) + E[var(θ̌ | T)]
Since var(θ̌ | T) ≥ 0, var(θ̂) ≤ var(θ̌), with equality iff
θ̌ is solely a function of T(x)


Finally, a statistic is complete if there is only one function,
say g, of the statistic that is unbiased
θ̂ = E(θ̌ | T(x)) is solely a function of T(x)
If T(x) is complete ⇒ θ̂ is unique and unbiased
Since T(x) is sufficient, θ̂ is as good as the estimator built on
any other sufficient statistic T1(x)
Besides, var(θ̂) ≤ var(θ̌) for all θ and any unbiased estimator θ̌
Then, θ̂ must be the MVU estimator
In summary, the MVU estimator can be found by

Taking any unbiased θ̌ and carrying out θ̂ = E(θ̌ | T(x))

Alternatively, since there is only one function of T(x) that
leads to an unbiased estimator,
find the unique g(T(x)) that makes g(T(x)) unbiased


Completeness of a sufficient statistic

We know that for x[n] = A + w[n], with w[n] ~ N(0, σ²),
T(x) = Σ x[n] is sufficient and g(T(x)) = T(x)/N is unbiased
Suppose there were a second function h for which E{h(T(x))} = A
⇒ E{g(T(x)) - h(T(x))} = A - A = 0, ∀A
Since T ~ N(NA, Nσ²),

∫ v(T) (2πNσ²)^(-1/2) exp[-(T - NA)²/(2Nσ²)] dT = 0  ∀A

where v(T) = g(T(x)) - h(T(x))

Let ξ = T/N and v'(ξ) = v(Nξ); with w(ξ) the N(0, σ²/N) Gaussian pulse,
the condition becomes ∫ v'(ξ) w(A - ξ) dξ = 0  ∀A


which is a convolution of v'(ξ) and w(ξ), equal to 0 for all A ⇒ v'(ξ) = 0

Recall that a signal v(t) is zero iff its Fourier transform F{v(t)} = 0

F{v'(ξ) * w(ξ)} = V'(f) W(f) = 0, ∀f
Since W(f) is still Gaussian (nonzero for all f) ⇒ V'(f) = 0 ⇒ v'(ξ) = 0
⇒ g(T(x)) = h(T(x)); thus T(x) is complete


Incomplete sufficient statistic

Now consider x[0] = A + w[0], where w[0] ~ U[-1/2, 1/2]

T(x) = x[0] is sufficient. But is T(x) complete?
Let v(T) = g(T(x)) - h(T(x)); completeness requires that
∫ v(T) p(T; A) dT = 0 ∀A holds only for v = 0
However, x = x[0] = T, so that p(T; A) = 1 on [A - 1/2, A + 1/2], but


So that ∫_{A-1/2}^{A+1/2} v(T) dT = 0 must hold for all A

A nonzero v(T) = sin(2πT) will satisfy this condition, since its
integral over any interval of length one is zero


Hence, take v(T) = g(T) - h(T) = sin(2πT)

Let g(T) = T = x[0], since E{x[0]} = A
Then h(T) = T - sin(2πT)
= x[0] - sin(2πx[0]) is also an unbiased estimator of
A, using the statistic T = x[0]
⇒ x[0] is not complete,
so we are not sure whether Â = x[0] is an MVU estimator

To summarize, we say a sufficient statistic T is complete if
∫ v(T) p(T; θ) dT = 0 for all θ
is satisfied only by the zero function v(T) = 0, ∀T



As a summary, the RBLS method can be used to find the
MVU estimator even when an efficient estimator does not exist
The procedure we have learned so far to find an MVU estimator:

1> Find a sufficient statistic T(x) for θ
by the Neyman-Fisher factorization theorem
2> Determine whether T(x) is complete; if so,
3> Find a function g(T(x)) that yields an unbiased
estimate of θ, which is then the MVU estimator of θ
Alternatively, θ̂ = E(θ̌ | T(x)), where θ̌
is an unbiased estimator


Ex: Mean of uniform noise

x[n] = w[n], n = 0, 1, …, N-1, w[n] ~ U[0, β]

We want to find the MVU estimator for the mean θ = β/2
The initial approach of using the CRLB to find an efficient
estimator cannot even be tried, for this PDF does not satisfy the
regularity condition E{∂ ln p(x; θ)/∂θ} = 0 (the support depends on θ)
Is the sample mean θ̂ = (1/N) Σ x[n] the MVU estimator for this case?

It is unbiased, and its variance is var(θ̂) = β²/(12N) = θ²/(3N)


Now, we follow the procedure we have learned so far

Define the unit step u(x) = 1 for x > 0 and u(x) = 0 for x < 0

Then, where β = 2θ, the PDF is

p(x; β) = Π_{n=0}^{N-1} (1/β)[u(x[n]) - u(x[n] - β)]

or p(x; β) = 1/β^N if 0 < x[n] < β for all n, and 0 otherwise


Alternatively,

p(x; β) = (1/β^N) u(β - max x[n]) · u(min x[n])

so that g(T(x), β) = (1/β^N) u(β - max x[n]) and h(x) = u(min x[n]) ⇒
T(x) = max x[n] is a sufficient statistic
We need to determine a function g to make T(x) unbiased
To do so requires us to determine E{T(x)}


The CDF of T(x):

Pr{T ≤ ξ} = Pr{x[0] ≤ ξ, …, x[N-1] ≤ ξ} = Π_{n=0}^{N-1} Pr{x[n] ≤ ξ} = (Pr{x[0] ≤ ξ})^N

The PDF follows as

p_T(ξ) = d Pr{T ≤ ξ}/dξ = N (Pr{x[0] ≤ ξ})^{N-1} · d Pr{x[0] ≤ ξ}/dξ

But d Pr{x[n] ≤ ξ}/dξ is the PDF of x[n], i.e. 1/β on [0, β], so

p_T(ξ) = N ξ^{N-1}/β^N for 0 ≤ ξ ≤ β, and 0 otherwise

Integrating, we obtain

E{T} = ∫₀^β ξ · N ξ^{N-1}/β^N dξ


which finally yields E{T} = Nβ/(N+1) = 2Nθ/(N+1)

We now have a sufficient statistic whose mean is biased away from θ


To make it unbiased, we multiply T(x) by (N+1)/(2N):

θ̂ = ((N+1)/(2N)) max x[n]

which is the MVU estimator, whose corresponding
variance is var(θ̂) = θ²/(N(N+2))

For N > 1 this is less than θ²/(3N) = β²/(12N), the variance of the sample mean
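A sketch verifying both variance formulas by simulation (β and N below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
beta, N, trials = 2.0, 10, 200_000
theta = beta / 2

x = rng.uniform(0.0, beta, size=(trials, N))

theta_mean = x.mean(axis=1)                    # sample mean
theta_mvu = (N + 1) / (2 * N) * x.max(axis=1)  # (N+1)/(2N) * max x[n]

print(f"sample mean: var = {theta_mean.var():.5f} (theory {theta**2 / (3 * N):.5f})")
print(f"MVU:         var = {theta_mvu.var():.5f} (theory {theta**2 / (N * (N + 2)):.5f})")
```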


Extension to a vector parameter

We want to seek an unbiased vector estimator θ̂ such that
each element has the minimum variance
Similarly, a vector T(x) = [T1(x), T2(x), …, Tr(x)]ᵀ is said
to be sufficient for the estimation of θ

if p(x | T(x); θ) = p(x | T(x))

If p(x; θ) = g(T(x), θ) h(x) ⇒ T(x) is sufficient; we further
want T(x) to be of minimum dimension


Back to a similar example

x[n] = A cos(2πf0 n) + w[n], n = 0, 1, …, N-1
Now, θ = [A, f0, σ²]ᵀ is the unknown vector parameter
The PDF is
p(x; θ) = (2πσ²)^(-N/2) exp[-(1/2σ²) Σ (x[n] - A cos(2πf0 n))²]

Expanding the exponent, we obtain
Σ x²[n] - 2A Σ x[n] cos(2πf0 n) + A² Σ cos²(2πf0 n)

We cannot reduce the PDF to a fixed set of statistics because
Σ x[n] cos(2πf0 n) depends on the unknown f0


If f0 is known, then θ = [A, σ²]ᵀ

We are able to factorize the PDF, which gives

T(x) = [ T1(x) ] = [ Σ_{n=0}^{N-1} x[n] cos(2πf0 n) ]
       [ T2(x) ]   [ Σ_{n=0}^{N-1} x²[n]            ]


Now apply the above result to our previous discussion

x[n] = A + w[n], n = 0, 1, …, N-1
Again, θ = [A, σ²]ᵀ
Setting f0 = 0 in x[n] = A cos(2πf0 n) + w[n], we have

T(x) = [ T1(x) ] = [ Σ_{n=0}^{N-1} x[n]  ]
       [ T2(x) ]   [ Σ_{n=0}^{N-1} x²[n] ]

Taking the expected values produces

E{T(x)} = [ NA         ] = [ NA         ]
          [ N E{x²[n]} ]   [ N(σ² + A²) ]


So T2(x) only helps estimate the second moment, not the
variance
If we transform T(x) into

g(T(x)) = [ (1/N) T1(x)                  ] = [ x̄                  ]
          [ (1/N) T2(x) - ((1/N) T1(x))² ]   [ (1/N) Σ x²[n] - x̄² ]

Then, E{g(T(x))} gives

E{g(T(x))} = [ A                 ]
             [ σ² + A² - E{x̄²}  ]


Since x̄ ~ N(A, σ²/N), E{x̄²} = A² + σ²/N

Substituting this back into E{g(T(x))} yields

E{g(T(x))} = [ A                   ] = [ A            ]
             [ σ² + A² - A² - σ²/N ]   [ ((N-1)/N) σ² ]

Therefore, multiplying the second element by N/(N-1) yields
an unbiased estimator of σ²:

g(T(x)) = [ (1/N) T1(x)                      ] = [ x̄                          ]
          [ (1/(N-1)) (T2(x) - (1/N) T1²(x)) ]   [ (1/(N-1)) (Σ x²[n] - N x̄²) ]


Since

Σ_{n=0}^{N-1} (x[n] - x̄)² = Σ_{n=0}^{N-1} x²[n] - N x̄²

Eventually, we have

θ̂ = g(T(x)) = [ x̄                                   ]
              [ (1/(N-1)) Σ_{n=0}^{N-1} (x[n] - x̄)² ]

Is this the MVU estimator of θ = [A, σ²]ᵀ?

We have shown that for the Gaussian PDF, T(x) is complete
Actually, this is also true for the vector exponential
family of PDFs
Is θ̂ efficient?


In fact, x̄ and s² = (1/(N-1)) Σ (x[n] - x̄)² are independent, with
x̄ ~ N(A, σ²/N) and (N-1)s²/σ² ~ χ²_{N-1}

Therefore var(x̄) = σ²/N and var(s²) = 2σ⁴/(N-1)

While the CRLB is diag(σ²/N, 2σ⁴/N) ⇒ θ̂ is the MVU estimator,
but it is not efficient since var(s²) > 2σ⁴/N
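A final sketch checking E{s²} = σ², var(s²) = 2σ⁴/(N-1), and the gap to the CRLB 2σ⁴/N (parameter values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
A, sigma2, N, trials = 1.0, 2.0, 10, 300_000

x = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
s2 = x.var(axis=1, ddof=1)  # (1/(N-1)) * sum (x[n] - xbar)^2

print(f"E[s^2]   = {s2.mean():.4f}   (sigma^2 = {sigma2})")
print(f"var(s^2) = {s2.var():.4f}   vs 2 sigma^4/(N-1) = {2 * sigma2**2 / (N - 1):.4f}"
      f" vs CRLB 2 sigma^4/N = {2 * sigma2**2 / N:.4f}")
```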
