Solutions to Homework 4
DeGroot & Schervish X.Y.Z means Exercise Z at the end of Section X.Y in our text,
Probability and Statistics (Fourth Edition) by Morris H. DeGroot and Mark J. Schervish.
1. Let $X$ be drawn from a discrete uniform distribution on $\{1, \dots, N\}$, where $N \ge 1$ is an unknown positive integer. The maximum likelihood estimator of $N$ is $\hat{N} = X$. (We have discussed this result before, so you do not need to show it.)

(a) Find the bias, variance, and mean squared error of $\hat{N}$ (as an estimator of $N$).

Note: The discrete uniform distribution on $\{1, \dots, N\}$ has mean $(N+1)/2$ and variance $(N^2 - 1)/12$. You may use these facts without proof.
Solution: We have
\[
\operatorname{Bias}_N(\hat{N}) = E_N(\hat{N}) - N = E_N(X) - N = \frac{N+1}{2} - N = -\Big(\frac{N-1}{2}\Big),
\]
\[
\operatorname{Var}_N(\hat{N}) = \operatorname{Var}(X) = \frac{N^2 - 1}{12},
\]
\[
\operatorname{MSE}_N(\hat{N}) = [\operatorname{Bias}_N(\hat{N})]^2 + \operatorname{Var}_N(\hat{N}) = \Big[-\Big(\frac{N-1}{2}\Big)\Big]^2 + \frac{N^2 - 1}{12} = \frac{N-1}{12}\,[3(N-1) + (N+1)] = \frac{(N-1)(2N-1)}{6},
\]
using the facts provided.
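As an illustrative sanity check of these three formulas (not part of the original solution), the following simulation draws $\hat{N} = X$ many times for an arbitrary choice of $N$ and compares Monte Carlo estimates against the closed forms:

```python
# Monte Carlo check of Bias, Var, and MSE of N_hat = X for the discrete
# uniform distribution on {1, ..., N}. N = 10 and the sample size are
# arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N = 10
x = rng.integers(1, N + 1, size=1_000_000)  # draws of N_hat = X

bias_mc = x.mean() - N
var_mc = x.var()
mse_mc = np.mean((x - N) ** 2)

bias_th = -(N - 1) / 2            # -(N-1)/2
var_th = (N**2 - 1) / 12          # (N^2 - 1)/12
mse_th = (N - 1) * (2 * N - 1) / 6  # (N-1)(2N-1)/6

print(bias_mc, bias_th)
print(var_mc, var_th)
print(mse_mc, mse_th)
```

For $N = 10$ the theoretical values are $-4.5$, $8.25$, and $28.5$, and the Monte Carlo estimates should land very close to them.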
2. Let $X \sim \mathrm{Bin}(n, p)$, where $0 < p < 1$ and $p$ is unknown. Let $\psi = 1/p$. Prove that no unbiased estimator of $\psi$ exists.

Hint: Here, an estimator is fully specified by the value it takes for each $x \in \{0, \dots, n\}$. Let $\hat\psi$ be any arbitrary estimator of $\psi$, and let $t_x$ be the value that $\hat\psi$ takes when $X = x$. Then look at the form of $E_p(\hat\psi)$ when $\hat\psi$ is specified in this way.
Solution: Let $\hat\psi$ be any estimator of $\psi$. Using the hint, we can write $E_p(\hat\psi)$ as
\[
E_p(\hat\psi) = \sum_{x=0}^{n} t_x\, P(X = x) = \sum_{x=0}^{n} t_x \binom{n}{x} p^x (1-p)^{n-x},
\]
which is some polynomial function of $p$. For $\hat\psi$ to be unbiased, this polynomial function of $p$ must equal $1/p$ for all $p \in (0, 1)$. However, this is impossible. (If it is not immediately clear why this is impossible, note that as $p \to 0$, the polynomial function tends to whatever finite value it takes at zero, whereas $1/p$ tends to $\infty$.)
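The boundedness argument can be made concrete with a small numerical sketch. The values $t_0, \dots, t_n$ below are arbitrary (any estimator would do); the point is that the resulting expectation stays near $t_0$ as $p \to 0$ while $1/p$ diverges:

```python
# Illustration: for an arbitrary (hypothetical) estimator with values
# t_0, ..., t_n, E_p(psi_hat) is a polynomial in p, hence bounded near
# p = 0, while 1/p blows up.
import math

n = 5
t = [3.0, 10.0, 5.0, 2.0, 1.0, 0.5]   # arbitrary values t_x, x = 0..n

def expectation(p):
    # E_p(psi_hat) = sum_x t_x * C(n, x) * p^x * (1-p)^(n-x)
    return sum(t[x] * math.comb(n, x) * p**x * (1 - p) ** (n - x)
               for x in range(n + 1))

for p in [0.1, 0.01, 0.001]:
    print(p, expectation(p), 1 / p)
# expectation(p) -> t[0] = 3.0 as p -> 0, but 1/p -> infinity
```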
We have
\[
\ell_X''(p) = \frac{\partial^2}{\partial p^2}\,[\log p + X \log(1-p)] = \frac{\partial}{\partial p}\Big(\frac{1}{p} - \frac{X}{1-p}\Big) = -\frac{1}{p^2} - \frac{X}{(1-p)^2},
\]
and thus
\[
I_1(p) = E_p[-\ell_X''(p)] = \frac{1}{p^2} + \frac{E_p(X)}{(1-p)^2} = \frac{1}{p^2} + \frac{1-p}{p} \cdot \frac{1}{(1-p)^2} = \frac{1}{p^2} + \frac{1}{p(1-p)} = \frac{1}{p^2(1-p)}.
\]
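The log-pmf $\log p + X \log(1-p)$ is that of a geometric count of failures before the first success (an assumption read off the surviving terms). Under that reading, $I_1(p)$ also equals the variance of the score $\ell_X'(p) = 1/p - X/(1-p)$, which the following simulation checks for an arbitrary $p$:

```python
# Numerical check (assumed model: geometric pmf p(1-p)^x on x = 0, 1, ...):
# Var of the score 1/p - X/(1-p) should match I_1(p) = 1/[p^2 (1-p)].
import numpy as np

rng = np.random.default_rng(1)
p = 0.3
x = rng.geometric(p, size=1_000_000) - 1   # failures before first success
score = 1 / p - x / (1 - p)

info_mc = score.var()
info_th = 1 / (p**2 * (1 - p))
print(score.mean(), info_mc, info_th)   # mean ~ 0, both variances ~ 15.87
```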
We have
\[
\ell_{X_1}''(\alpha) = \frac{\partial^2}{\partial \alpha^2}\,[\alpha \log \beta - \log \Gamma(\alpha) + (\alpha - 1) \log X_1 - \beta X_1] = \frac{\partial}{\partial \alpha}\Big[\log \beta - \frac{\Gamma'(\alpha)}{\Gamma(\alpha)} + \log X_1\Big] = -\frac{\Gamma''(\alpha)\Gamma(\alpha) - [\Gamma'(\alpha)]^2}{[\Gamma(\alpha)]^2},
\]
and thus
\[
I_1(\alpha) = E_\alpha[-\ell_{X_1}''(\alpha)] = E_\alpha\Big\{\frac{\Gamma''(\alpha)\Gamma(\alpha) - [\Gamma'(\alpha)]^2}{[\Gamma(\alpha)]^2}\Big\} = \frac{\Gamma''(\alpha)\Gamma(\alpha) - [\Gamma'(\alpha)]^2}{[\Gamma(\alpha)]^2}.
\]
The desired result then follows immediately by Theorem 7.2.4 of the lecture notes.
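If the model in this part is $\mathrm{Gamma}(\alpha, \beta)$ with $\beta$ known (an assumption based on the surviving terms), then $I_1(\alpha)$ above is the trigamma function $\psi_1(\alpha) = \frac{d}{d\alpha}\,[\Gamma'(\alpha)/\Gamma(\alpha)]$, which also equals $\operatorname{Var}(\log X_1)$ since the score is $\log \beta - \Gamma'(\alpha)/\Gamma(\alpha) + \log X_1$. A simulation can verify this numerically ($\alpha = 2$, $\beta = 1.5$ are arbitrary choices):

```python
# Hypothetical check, assuming a Gamma(alpha, beta) model with beta known:
# I_1(alpha) = [Gamma''(a)Gamma(a) - Gamma'(a)^2] / Gamma(a)^2 is the
# trigamma function psi_1(alpha), which equals Var(log X_1).
import numpy as np
from scipy.special import polygamma

rng = np.random.default_rng(2)
alpha, beta = 2.0, 1.5
x = rng.gamma(alpha, 1 / beta, size=1_000_000)

info_mc = np.log(x).var()             # Var(log X_1), Monte Carlo
info_th = float(polygamma(1, alpha))  # trigamma(alpha)
print(info_mc, info_th)
```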
We have
\[
\ell_{X_1}''(\theta) = \frac{\partial^2}{\partial \theta^2}\,[X_1 \log \theta + (1 - X_1) \log(1-\theta)] = \frac{\partial}{\partial \theta}\Big(\frac{X_1}{\theta} - \frac{1 - X_1}{1-\theta}\Big) = -\frac{X_1}{\theta^2} - \frac{1 - X_1}{(1-\theta)^2},
\]
and thus
\[
I_1(\theta) = E_\theta[-\ell_{X_1}''(\theta)] = E_\theta\Big[\frac{X_1}{\theta^2} + \frac{1 - X_1}{(1-\theta)^2}\Big] = \frac{1}{\theta} + \frac{1}{1-\theta} = \frac{1}{\theta(1-\theta)}.
\]
Then the Fisher information for the entire sample is $I(\theta) = n\,I_1(\theta) = n/[\theta(1-\theta)]$.
(b) We have shown before that the maximum likelihood estimator of $\theta$ is $\hat\theta = \frac{1}{n}\sum_{i=1}^{n} X_i$. Use your answer to part (a) to state the asymptotic distribution of $\hat\theta$. Does it agree with the result obtained by using the central limit theorem?
Solution: By Theorem 7.2.4 of the lecture notes,
\[
\sqrt{n}\,(\hat\theta - \theta) \to_D N\Big(0, \frac{1}{I_1(\theta)}\Big) = N(0, \theta(1-\theta)).
\]
This agrees with the central limit theorem, since $\operatorname{Var}_\theta(X_1) = \theta(1-\theta)$.
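The limiting variance $\theta(1-\theta)$ can be checked by a short simulation (the values $\theta = 0.4$, $n = 200$ are arbitrary illustrative choices):

```python
# Simulation sketch: the variance of sqrt(n)(theta_hat - theta) for
# Bernoulli(theta) samples should be close to theta(1 - theta).
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 0.4, 200, 20_000
x = rng.binomial(1, theta, size=(reps, n))
theta_hat = x.mean(axis=1)
z = np.sqrt(n) * (theta_hat - theta)

print(z.var(), theta * (1 - theta))   # both ~ 0.24
```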
\[
f(x) = \begin{cases} \dfrac{\theta k^\theta}{x^{\theta+1}} & \text{if } x \ge k, \\[4pt] 0 & \text{if } x < k. \end{cases}
\]
We have
\[
\ell_X'(\theta) = \frac{\partial}{\partial \theta}\Big[n \log \theta + n\theta \log k - (\theta + 1) \sum_{i=1}^{n} \log X_i\Big] = \frac{n}{\theta} + n \log k - \sum_{i=1}^{n} \log X_i = \frac{n}{\theta} - \sum_{i=1}^{n} \log\Big(\frac{X_i}{k}\Big),
\]
and setting $\ell_X'(\theta) = 0$ yields
\[
\hat\theta = \Big[\frac{1}{n} \sum_{i=1}^{n} \log\Big(\frac{X_i}{k}\Big)\Big]^{-1}.
\]
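The MLE formula can be sanity-checked by simulating Pareto draws via inversion, $X = k\,U^{-1/\theta}$ for $U \sim \mathrm{Unif}(0,1)$ (the values $\theta = 2$, $k = 1.5$ are arbitrary):

```python
# Illustrative check: simulate Pareto(k, theta) by inversion and verify
# that theta_hat = [ (1/n) sum log(X_i / k) ]^(-1) recovers theta.
import numpy as np

rng = np.random.default_rng(4)
theta, k, n = 2.0, 1.5, 1_000_000
u = rng.uniform(size=n)
x = k * u ** (-1 / theta)           # inverse-CDF sampling

theta_hat = 1 / np.mean(np.log(x / k))
print(theta_hat)   # ~ 2.0
```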
By part (a), the maximum likelihood estimator is $\hat\theta_n = \big[\frac{1}{n} \sum_{i=1}^{n} \log(X_i/k)\big]^{-1}$. We have
\[
\ell_{X_1}''(\theta) = \frac{\partial}{\partial \theta}\Big(\frac{1}{\theta} + \log k - \log X_1\Big) = -\frac{1}{\theta^2},
\]
and thus
\[
I_1(\theta) = E_\theta[-\ell_{X_1}''(\theta)] = E_\theta\Big(\frac{1}{\theta^2}\Big) = \frac{1}{\theta^2}.
\]
Then
\[
\sqrt{n}\,(\hat\theta_n - \theta) \to_D N(0, \theta^2).
\]
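This limit can also be checked numerically, using the fact that $\log(X_i/k)$ is exponential with rate $\theta$ under the Pareto model (parameter values below are arbitrary):

```python
# Sketch: variance of sqrt(n)(theta_hat_n - theta) for the Pareto MLE
# should approach theta^2; log(X_i/k) ~ Exponential(rate theta).
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 2.0, 500, 10_000
e = rng.exponential(1 / theta, size=(reps, n))   # e_i = log(X_i / k)
theta_hat = 1 / e.mean(axis=1)
z = np.sqrt(n) * (theta_hat - theta)

print(z.var(), theta**2)   # both ~ 4
```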
(c) Now suppose instead that $k > 0$ is unknown and $\theta > 0$ is known. Compute $E_k[\ell_X'(k)]$, and explain why your answer does not contradict Lemma 7.2.1 from the lecture notes.
Solution: The score function is now
\[
\ell_X'(k) = \frac{\partial}{\partial k}\Big[n \log \theta + n\theta \log k - (\theta + 1) \sum_{i=1}^{n} \log X_i\Big] = \frac{n\theta}{k},
\]
and thus
\[
E_k[\ell_X'(k)] = E_k\Big(\frac{n\theta}{k}\Big) = \frac{n\theta}{k} \neq 0.
\]
At first glance, it may appear that this result contradicts Lemma 7.2.1 of the lecture notes, which states that the expectation of the score is zero. However, there is no contradiction, since one of the regularity conditions of Section 7.4 is now violated, which means that Lemma 7.2.1 does not apply. Specifically, the set
\[
\mathcal{X} = \{x \in \mathbb{R} : f(x \mid k) > 0\} = \{x \in \mathbb{R} : x \ge k\} = [k, \infty)
\]
depends on the unknown parameter $k$, which is not permitted by the regularity conditions of Section 7.4. (Note that there would be no violation here if instead $k$ were known, as was the case in the previous question.)
7. Let $X_1, \dots, X_n$ be iid continuous random variables with a pdf $f(x)$ that is symmetric about $\theta$, where $\theta \in \mathbb{R}$ is unknown. Suppose that $\operatorname{Var}_\theta(X_1) = \sigma^2 < \infty$ is known, which implies that $E_\theta(X_1) = \theta$. Then $\theta$ is both the true mean and true median of the pdf $f(x)$, so it seems plausible that both the sample mean $\bar{X}_n$ and the sample median, which we will call $M_n$, could be good estimators of $\theta$.
(a) Use the central limit theorem to state the asymptotic distribution of the sample mean $\bar{X}_n$.

Solution: By the central limit theorem, $\sqrt{n}\,(\bar{X}_n - \theta) \to_D N(0, \sigma^2)$.

(b) The asymptotic distribution of the sample median is
\[
\sqrt{n}\,(M_n - \theta) \to_D N\Big(0, \frac{1}{4[f(\theta)]^2}\Big).
\]
(c) Suppose $f(x) = \frac{1}{2}\lambda \exp(-\lambda |x - \theta|)$, where $\lambda > 0$ is known. Find the asymptotic relative efficiency of the sample median compared to the sample mean, and use it to state which estimator performs better asymptotically.

Note: Under this pdf, $\operatorname{Var}_\theta(X_1) = 2/\lambda^2$. You may use this fact without proof.

Solution: $\operatorname{ARE}(M_n, \bar{X}_n) = \dfrac{4[f(\theta)]^2}{1/\sigma^2} = \dfrac{4(\frac{1}{2}\lambda)^2}{\frac{1}{2}\lambda^2} = 2$, so the sample median performs better asymptotically.
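This factor of 2 is easy to see in simulation: for Laplace data, $n$ times the variance of the sample mean should approach $2/\lambda^2$ while $n$ times the variance of the sample median approaches $1/\lambda^2$ (the values $\lambda = 1$, $\theta = 0$ below are arbitrary):

```python
# Illustrative simulation: for Laplace(theta, scale 1/lambda) data, the
# ratio of n*Var(sample mean) to n*Var(sample median) approximates
# ARE(M_n, X_bar_n) = 2.
import numpy as np

rng = np.random.default_rng(6)
theta, lam, n, reps = 0.0, 1.0, 1_000, 5_000
x = rng.laplace(theta, 1 / lam, size=(reps, n))

var_mean = n * x.mean(axis=1).var()          # ~ 2 / lam^2
var_median = n * np.median(x, axis=1).var()  # ~ 1 / lam^2
print(var_mean, var_median, var_mean / var_median)   # ratio ~ 2
```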