
Statistics 512 Notes 16: Efficiency of Estimators and the Asymptotic Efficiency of the MLE


Method of moments estimator: Let $X_1, \ldots, X_n$ be iid with pdf $f(x; \theta)$. Find $E(X_i) = h(\theta)$. The method of moments estimator is
$$\hat{\theta}_{MOM} = h^{-1}(\bar{X}).$$
Examples:
(1) $X_1, \ldots, X_n$ iid uniform$(0, \theta)$. Then
$$E(X_i) = \frac{\theta}{2}, \qquad \hat{\theta}_{MOM} = 2\bar{X}.$$
(2) $X_1, \ldots, X_n$ iid logistic, i.e.,
$$f(x; \theta) = \frac{\exp\{-(x - \theta)\}}{(1 + \exp\{-(x - \theta)\})^2}, \qquad -\infty < x < \infty, \quad -\infty < \theta < \infty.$$
Here $E(X_i) = \theta$, so $\hat{\theta}_{MOM} = \bar{X}$. The MLE solves
$$\sum_{i=1}^{n} \frac{\exp\{-(X_i - \theta)\}}{1 + \exp\{-(X_i - \theta)\}} = \frac{n}{2}.$$
The left-hand side is strictly increasing in $\theta$, so this equation has a unique root, but the root has no closed form and must be found numerically.
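To see where this equation comes from (a derivation step added here for completeness; the notes state the equation without it), write out the log likelihood and differentiate:
$$\ell(\theta) = \sum_{i=1}^{n} \left[ -(X_i - \theta) - 2 \log\left(1 + \exp\{-(X_i - \theta)\}\right) \right],$$
$$\ell'(\theta) = \sum_{i=1}^{n} \left[ 1 - \frac{2 \exp\{-(X_i - \theta)\}}{1 + \exp\{-(X_i - \theta)\}} \right] = n - 2 \sum_{i=1}^{n} \frac{\exp\{-(X_i - \theta)\}}{1 + \exp\{-(X_i - \theta)\}}.$$
Setting $\ell'(\theta) = 0$ gives the displayed score equation.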
Efficiency of estimators:
A good criterion for comparing estimators is the mean squared error:
$$MSE(\hat{\theta}) = E[(\hat{\theta} - \theta)^2] = \{Bias(\hat{\theta})\}^2 + Var(\hat{\theta}).$$
For unbiased estimators,
$$MSE(\hat{\theta}) = Var(\hat{\theta}).$$
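As a quick numerical illustration of this decomposition (an added sketch of mine, not part of the notes; the uniform model, sample size, and estimator are my choices), consider estimating $\theta$ in uniform$(0, \theta)$ by the sample maximum, which is biased:

# Empirical check of MSE = Bias^2 + Var for a biased estimator:
# the sample maximum as an estimator of theta in uniform(0, theta).
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 5.0, 20, 100_000
est = rng.uniform(0.0, theta, size=(reps, n)).max(axis=1)

mse = np.mean((est - theta) ** 2)
bias2 = (est.mean() - theta) ** 2
var = est.var()            # ddof=0 makes the decomposition exact
print(mse, bias2 + var)    # the two printed numbers agree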
Relative efficiency of two unbiased estimators:
Let $W_1$ and $W_2$ be two unbiased estimators for $\theta$ with variances $Var(W_1)$ and $Var(W_2)$ respectively. We will call $W_1$ more efficient than $W_2$ if $Var(W_1) < Var(W_2)$. Also, the relative efficiency of $W_1$ with respect to $W_2$ is $Var(W_2)/Var(W_1)$.
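For instance, in the uniform$(0, \theta)$ model both $W_1 = \frac{n+1}{n} \max_i X_i$ and $W_2 = 2\bar{X}$ are unbiased for $\theta$, and a direct calculation gives $Var(W_2)/Var(W_1) = (n+2)/3$. The following simulation sketch (my own illustration, not from the notes) estimates this relative efficiency:

# Relative efficiency of W1 = (n+1)/n * max(X) with respect to W2 = 2*Xbar
# in the uniform(0, theta) model; both estimators are unbiased for theta.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 5.0, 20, 100_000
x = rng.uniform(0.0, theta, size=(reps, n))

w1 = (n + 1) / n * x.max(axis=1)
w2 = 2.0 * x.mean(axis=1)
print(w2.var() / w1.var())  # close to (n+2)/3 = 7.33, so W1 is far more efficient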
Rao-Cramér Lower Bound:
The concept of relative efficiency provides a working criterion for choosing between two competing estimators, but it does not give us any assurance that even the better of $W_1$ and $W_2$ is any good. How do we know that there isn't an unbiased estimator $W_3$ which is better than both $W_1$ and $W_2$? The Rao-Cramér lower bound provides a partial answer to this question in the form of a lower bound on the variance of any unbiased estimator.
Theorem 6.2.1 (Rao-Cramér Lower Bound): Let $X_1, \ldots, X_n$ be iid with pdf $f(x; \theta)$ for $\theta \in \Omega$. Assume that the regularity conditions (R0)-(R4) hold. Let $Y = u(X_1, \ldots, X_n)$ be a statistic with mean $E(Y) = E[u(X_1, \ldots, X_n)] = k(\theta)$. Then
$$Var(Y) \geq \frac{[k'(\theta)]^2}{n I(\theta)}.$$
Note (Corollary 6.2.1): If $Y = u(X_1, \ldots, X_n)$ is an unbiased estimator of $\theta$, then $E(Y) = E[u(X_1, \ldots, X_n)] = k(\theta) = \theta$, so that $k'(\theta) = 1$. Thus for unbiased estimators $Y = u(X_1, \ldots, X_n)$, there is a lower bound on the variance:
$$Var(Y) \geq \frac{1}{n I(\theta)}.$$
Proof: The proof of this theorem is a clever application of the Cauchy-Schwarz inequality or, stated statistically, the fact that for any two random variables $V$ and $W$,
$$[Cov(V, W)]^2 \leq Var(V) \, Var(W). \qquad (*)$$
If we rearrange (*), we can get a lower bound on the variance of $V$:
$$Var(V) \geq \frac{[Cov(V, W)]^2}{Var(W)}. \qquad (**)$$
The cleverness in this theorem follows from choosing $V$ to be the estimator $u(X_1, \ldots, X_n)$ and $W$ to be the quantity $\frac{\partial}{\partial \theta} \log f(X_1, \ldots, X_n; \theta)$, and applying the Cauchy-Schwarz inequality.
First, we calculate $Cov\left(u(X_1, \ldots, X_n), \frac{\partial}{\partial \theta} \log f(X_1, \ldots, X_n; \theta)\right)$. We have
$$
\begin{aligned}
&E\left[u(X_1, \ldots, X_n) \frac{\partial}{\partial \theta} \log f(X_1, \ldots, X_n; \theta)\right] \\
&\quad= \int \cdots \int u(x_1, \ldots, x_n) \frac{\partial}{\partial \theta} \log f(x_1, \ldots, x_n; \theta) \, f(x_1; \theta) \cdots f(x_n; \theta) \, dx_1 \cdots dx_n \\
&\quad= \int \cdots \int u(x_1, \ldots, x_n) \frac{\frac{\partial}{\partial \theta} f(x_1, \ldots, x_n; \theta)}{f(x_1, \ldots, x_n; \theta)} \, f(x_1; \theta) \cdots f(x_n; \theta) \, dx_1 \cdots dx_n \\
&\quad= \int \cdots \int u(x_1, \ldots, x_n) \frac{\partial}{\partial \theta} f(x_1, \ldots, x_n; \theta) \, dx_1 \cdots dx_n \\
&\quad= \frac{\partial}{\partial \theta} E[u(X_1, \ldots, X_n)] = k'(\theta),
\end{aligned}
$$
where we used $f(x_1, \ldots, x_n; \theta) = f(x_1; \theta) \cdots f(x_n; \theta)$ and, in the last step, the regularity conditions to interchange differentiation and integration.
Also, we have
$$
\begin{aligned}
E\left[\frac{\partial}{\partial \theta} \log f(X_1, \ldots, X_n; \theta)\right]
&= \int \cdots \int \frac{\partial}{\partial \theta} \log f(x_1, \ldots, x_n; \theta) \, f(x_1, \ldots, x_n; \theta) \, dx_1 \cdots dx_n \\
&= \int \cdots \int \frac{\frac{\partial}{\partial \theta} f(x_1, \ldots, x_n; \theta)}{f(x_1, \ldots, x_n; \theta)} \, f(x_1, \ldots, x_n; \theta) \, dx_1 \cdots dx_n \\
&= \frac{\partial}{\partial \theta} \int \cdots \int f(x_1, \ldots, x_n; \theta) \, dx_1 \cdots dx_n = \frac{\partial}{\partial \theta} 1 = 0.
\end{aligned}
$$
Thus,
$$Cov\left(u(X_1, \ldots, X_n), \frac{\partial}{\partial \theta} \log f(X_1, \ldots, X_n; \theta)\right) = k'(\theta) - k(\theta) \cdot 0 = k'(\theta).$$
Finally, we calculate
$$
\begin{aligned}
Var\left(\frac{\partial}{\partial \theta} \log f(X_1, \ldots, X_n; \theta)\right)
&= Var\left(\sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log f(X_i; \theta)\right)
= \sum_{i=1}^{n} Var\left(\frac{\partial}{\partial \theta} \log f(X_i; \theta)\right) \\
&= n \left\{ E\left[\left(\frac{\partial}{\partial \theta} \log f(X; \theta)\right)^2\right] - \left(E\left[\frac{\partial}{\partial \theta} \log f(X; \theta)\right]\right)^2 \right\}
= n(I(\theta) - 0) = n I(\theta).
\end{aligned}
$$
Thus, using (**), we conclude that
$$Var(Y) \geq \frac{[k'(\theta)]^2}{n I(\theta)}.$$
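The two identities used in the proof, $Cov(u, \frac{\partial}{\partial \theta} \log f) = k'(\theta)$ and $Var(\frac{\partial}{\partial \theta} \log f) = n I(\theta)$, can be checked numerically. The sketch below (my own illustration; the Poisson family and the simulation sizes are assumptions, not from the notes) uses $u = \bar{X}$ in the Poisson($\lambda$) model, where $k(\lambda) = \lambda$, $k'(\lambda) = 1$, and $I(\lambda) = 1/\lambda$:

# Check Cov(Xbar, score) = k'(lambda) = 1 and Var(score) = n*I(lambda) = n/lambda
# by simulation in the Poisson(lambda) model, where the score of a sample is
#   d/dlambda log f(X_1,...,X_n; lambda) = sum_i (X_i/lambda - 1).
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 4.0, 30, 100_000
x = rng.poisson(lam, size=(reps, n))

xbar = x.mean(axis=1)
score = (x / lam - 1.0).sum(axis=1)

print(np.cov(xbar, score)[0, 1])  # close to k'(lambda) = 1
print(score.var())                # close to n*I(lambda) = n/lambda = 7.5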
Example: Let $X_1, \ldots, X_n$ be iid Poisson($\lambda$). On your homework, you should have found that
$$\hat{\lambda}_{MLE} = \bar{X}, \qquad I(\lambda) = \frac{1}{\lambda}.$$
From the properties of the Poisson distribution, we know that
$$E(\bar{X}) = \lambda, \qquad Var(\bar{X}) = \frac{\lambda}{n}.$$
The Rao-Cramér lower bound for the variance of an unbiased estimator is
$$Var(Y) \geq \frac{1}{n I(\lambda)} = \frac{1}{n \cdot \frac{1}{\lambda}} = \frac{\lambda}{n}.$$
Thus, the maximum likelihood estimator is efficient.
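As a numerical illustration (an added sketch, not from the notes; the choice of the competing estimator is mine): since a Poisson($\lambda$) distribution also has variance $\lambda$, the sample variance $S^2$ is another unbiased estimator of $\lambda$, and simulation shows that $\bar{X}$ attains the Rao-Cramér bound while $S^2$ does not:

# Compare two unbiased estimators of lambda in the Poisson model:
# Xbar attains the Rao-Cramer bound lambda/n; the sample variance S^2 does not.
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 4.0, 30, 100_000
x = rng.poisson(lam, size=(reps, n))

print(x.mean(axis=1).var())         # close to the bound lambda/n = 0.1333
print(x.var(axis=1, ddof=1).var())  # noticeably larger than the bound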
The Rao-Cramér lower bound might not be achieved by any unbiased estimator.
Asymptotic Optimality of MLE
The maximum likelihood estimator is consistent, so its bias converges to 0 as $n \to \infty$.
Example 6.2.4 shows that the maximum likelihood estimator may not achieve the Rao-Cramér lower bound for finite samples.
Under the regularity conditions assumed in Theorem 6.2.2,
$$\sqrt{n}\,(\hat{\theta}_{MLE} - \theta_0) \xrightarrow{D} N\left(0, \frac{1}{I(\theta_0)}\right).$$
Informally, Theorem 6.2.2 and its corollary say that the distribution of the MLE can be approximated by
$$\hat{\theta}_{MLE} \approx N\left(\theta_0, \frac{1}{n I(\theta_0)}\right).$$
Thus, the MLE is asymptotically unbiased and its asymptotic variance equals the Rao-Cramér lower bound. In this sense, the MLE is as efficient as any other estimator for large samples: for large enough samples, the MLE is essentially the optimal estimator.
Monte Carlo comparison of MSE for maximum likelihood vs. method of moments for the logistic distribution:
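A minimal sketch of such a comparison follows (my own implementation, not code from the notes; the sample size, number of replications, and the use of scipy's brentq to solve the score equation from Example (2) are all choices of mine). For the standard logistic, $Var(X_i) = \pi^2/3$ and $I(\theta) = 1/3$, so $MSE(\bar{X}) = \pi^2/(3n)$ while the asymptotic MSE of the MLE is $1/(n I(\theta)) = 3/n$:

# Monte Carlo comparison of MSE for the MLE vs. the method of moments
# estimator (Xbar) in the logistic location model with scale 1.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def score(theta, x):
    # sum_i exp{-(x_i - theta)}/(1 + exp{-(x_i - theta)}) - n/2
    e = np.exp(-(x - theta))
    return np.sum(e / (1.0 + e)) - x.size / 2.0

def mle(x):
    # The score is increasing in theta, so bracketing root-finding is safe.
    return brentq(score, x.min() - 10.0, x.max() + 10.0, args=(x,))

theta0, n, reps = 0.0, 50, 5000
mom = np.empty(reps)
ml = np.empty(reps)
for r in range(reps):
    x = rng.logistic(loc=theta0, scale=1.0, size=n)
    mom[r] = x.mean()   # method of moments estimator
    ml[r] = mle(x)      # maximum likelihood estimator

print("MSE(MOM):", np.mean((mom - theta0) ** 2), " theory:", np.pi**2 / (3 * n))
print("MSE(MLE):", np.mean((ml - theta0) ** 2), " asymptotic:", 3 / n)

The ratio of the two simulated MSEs should approach the asymptotic relative efficiency $(3/n) / (\pi^2/(3n)) = 9/\pi^2 \approx 0.91$, so the MLE is modestly but consistently more accurate here.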