
STATS 200 (Stanford University, Summer 2015)

Solutions to Final Exam Sample Questions


This document contains some questions that are fairly representative of the content, style,
and difficulty of the questions that will appear on the final exam. Most of these questions
come from actual exams that I gave in previous editions of the course. Please keep the
following things in mind:
- This document is much longer than the actual final exam will be.
- All material covered in the lecture notes (up through the end of class on Monday, August 10) is eligible for inclusion on the final exam, regardless of whether it is covered by any of the sample questions below.
- The final exam will be cumulative, but material not covered on the midterm exam will be represented more heavily. The proportion of questions on the final exam that are drawn from pre-midterm material may not be exactly equal to the proportion of the sample questions below that are drawn from pre-midterm material.
1. Note: Parts (a)–(f) of this question already appeared in the midterm exam sample questions, but they are repeated here to maintain the original format of the overall question.
Let $X_1, \ldots, X_n \sim \text{iid } \mathrm{Beta}(1, \theta)$, where $\theta > 0$ is unknown.
Note: The $\mathrm{Beta}(1, \theta)$ distribution has pdf
$$f_\theta(x) = \begin{cases} \theta (1-x)^{\theta-1} & \text{if } 0 < x < 1, \\ 0 & \text{otherwise.} \end{cases}$$
Also,
$$E_\theta(X_1) = \frac{1}{1+\theta}, \qquad \mathrm{Var}_\theta(X_1) = \frac{\theta}{(1+\theta)^2 (2+\theta)}.$$
You may use these facts without proof.


(a) Find the maximum likelihood estimator $\hat{\theta}_n^{\mathrm{MLE}}$ of $\theta$.
Note: Recall that any logarithm of a number between 0 and 1 is negative.
Solution: Differentiating the log-likelihood yields
$$\ell_{X_n}'(\theta) = \frac{\partial}{\partial\theta} \sum_{i=1}^n [\log\theta + (\theta-1)\log(1-X_i)] = \frac{n}{\theta} + \sum_{i=1}^n \log(1-X_i) = 0 \iff \theta = -\frac{n}{\sum_{i=1}^n \log(1-X_i)}.$$
This point is the only critical point, and it can be seen from the form of the log-likelihood that $\ell_{X_n}(\theta) \to -\infty$ both as $\theta \to 0$ and as $\theta \to \infty$. Then the critical point is indeed the maximum, and
$$\hat{\theta}_n^{\mathrm{MLE}} = -\frac{n}{\sum_{i=1}^n \log(1-X_i)}.$$
(The justification for why the critical point is indeed the maximum is not required for full credit since this fact is fairly obvious by inspection of the log-likelihood.)
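
As a quick numerical check (not part of the exam solution), the closed-form MLE can be verified by simulation. Below is a minimal sketch assuming NumPy is available; the values of $\theta$, $n$, and the seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 2.5, 100_000

# Beta(1, theta) draws; log(1 - X_i) < 0, so the MLE below is positive.
x = rng.beta(1.0, theta, size=n)

# MLE derived above: -n / sum(log(1 - X_i)); log1p(-x) computes log(1 - x)
theta_mle = -n / np.log1p(-x).sum()
print(theta_mle)  # should be close to theta = 2.5
```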


(b) Do we know for certain that $\hat{\theta}_n^{\mathrm{MLE}}$ is an unbiased estimator of $\theta$?

Solution: No. There is no reason why it necessarily must be, and indeed it isn't. (An explanation is not required.)

(c) Find the asymptotic distribution of the maximum likelihood estimator $\hat{\theta}_n$.
Note: Your answer should be a formal probabilistic result involving convergence in distribution.
Solution: The second derivative of the log-likelihood is
$$\ell_{X_n}''(\theta) = \frac{\partial}{\partial\theta}\left[\frac{n}{\theta} + \sum_{i=1}^n \log(1-X_i)\right] = -\frac{n}{\theta^2},$$
so the Fisher information in the sample is $I_n(\theta) = n/\theta^2$. It follows that the Fisher information per observation is $I_1(\theta) = 1/\theta^2$, and so
$$\sqrt{n}\,(\hat{\theta}_n - \theta) \to_D N(0, \theta^2)$$
is the asymptotic distribution of the maximum likelihood estimator $\hat{\theta}_n$.

(d) Let $\mu = 1/(1+\theta) = E_\theta(X_1)$, and define
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i.$$
Do we know for certain that $\bar{X}_n$ is an unbiased estimator of $\mu$?

Solution: Yes, since $E_\theta(\bar{X}_n) = n^{-1}\sum_{i=1}^n E_\theta(X_1) = 1/(1+\theta) = \mu$. (An explanation is not required.)

(e) Define the estimator
$$\tilde{\theta}_n = \frac{1}{\bar{X}_n} - 1.$$
Do we know for certain that $\tilde{\theta}_n$ is an unbiased estimator of $\theta$?
Solution: No, since $E_\theta(1/\bar{X}_n) \neq 1/E_\theta(\bar{X}_n)$.

(f) Find the asymptotic distribution of $\tilde{\theta}_n$.
Note: Your answer should be a formal probabilistic result involving convergence in distribution.
Solution: By the central limit theorem,
$$\sqrt{n}\left(\bar{X}_n - \frac{1}{1+\theta}\right) \to_D N\!\left[0, \frac{\theta}{(1+\theta)^2(2+\theta)}\right].$$
Now let $g(t) = t^{-1} - 1$, so that $\theta = g[1/(1+\theta)]$ and $\tilde{\theta}_n = g(\bar{X}_n)$. Then $g'(t) = -t^{-2}$, and
$$g'\!\left(\frac{1}{1+\theta}\right) = -\frac{1}{[1/(1+\theta)]^2} = -(1+\theta)^2.$$
Next, observe that
$$\left[g'\!\left(\frac{1}{1+\theta}\right)\right]^2 \frac{\theta}{(1+\theta)^2(2+\theta)} = \frac{(1+\theta)^4\,\theta}{(1+\theta)^2(2+\theta)} = \frac{\theta(1+\theta)^2}{2+\theta}.$$
Then
$$\sqrt{n}\,(\tilde{\theta}_n - \theta) \to_D N\!\left[0, \frac{\theta(1+\theta)^2}{2+\theta}\right]$$
by the delta method.

(g) Find the asymptotic relative efficiency of $\hat{\theta}_n^{\mathrm{MLE}}$ compared to $\tilde{\theta}_n$.
Note: If you were unable to find the asymptotic distributions of $\hat{\theta}_n^{\mathrm{MLE}}$ and/or $\tilde{\theta}_n$, then call their asymptotic variances $v^{\mathrm{MLE}}(\theta)$ and/or $\tilde{v}(\theta)$ so that you can still answer this question.
Solution: The asymptotic relative efficiency of $\hat{\theta}_n^{\mathrm{MLE}}$ compared to $\tilde{\theta}_n$ is
$$\mathrm{ARE}_\theta(\hat{\theta}_n^{\mathrm{MLE}}, \tilde{\theta}_n) = \frac{1/v^{\mathrm{MLE}}(\theta)}{1/\tilde{v}(\theta)} = \frac{\theta(1+\theta)^2/(2+\theta)}{\theta^2} = \frac{(1+\theta)^2}{\theta(2+\theta)} = \frac{1+2\theta+\theta^2}{2\theta+\theta^2}.$$
(Simplification is not necessary for full credit.)
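
As a numerical illustration (again not part of the solution), the two asymptotic variances can be compared by Monte Carlo; the parameter value, sample size, and replication count below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 500, 2000

x = rng.beta(1.0, theta, size=(reps, n))
mle = -n / np.log1p(-x).sum(axis=1)   # theta-hat (MLE)
mom = 1.0 / x.mean(axis=1) - 1.0      # theta-tilde (moment estimator)

# Empirical variances scaled by n, versus the asymptotic variances above.
print(n * mle.var(), theta**2)
print(n * mom.var(), theta * (1 + theta) ** 2 / (2 + theta))
```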

2. Let $X_1, \ldots, X_n \sim \text{iid } \mathrm{Geometric}(\theta)$, where $\theta$ is unknown and $0 < \theta < 1$.

Note: The $\mathrm{Geometric}(\theta)$ distribution has pmf
$$f_\theta(x) = \begin{cases} \theta(1-\theta)^x & \text{if } x \in \{0, 1, 2, \ldots\}, \\ 0 & \text{otherwise,} \end{cases}$$
and it has mean $(1-\theta)/\theta$. Also, the maximum likelihood estimator of $\theta$ is
$$\hat{\theta}_n^{\mathrm{MLE}} = \frac{n}{n + \sum_{i=1}^n X_i}.$$
You may use these facts without proof.
(a) Find the Fisher information in the sample.
Solution: Differentiating the log-likelihood twice yields
$$\ell_{X_n}''(\theta) = \frac{\partial^2}{\partial\theta^2}\left[\sum_{i=1}^n X_i \log(1-\theta) + n\log\theta\right] = -\frac{1}{(1-\theta)^2}\sum_{i=1}^n X_i - \frac{n}{\theta^2}.$$
Then the Fisher information in the sample is
$$I_n(\theta) = -E_\theta[\ell_{X_n}''(\theta)] = \frac{n\,E_\theta(X_1)}{(1-\theta)^2} + \frac{n}{\theta^2} = \frac{n}{\theta(1-\theta)} + \frac{n}{\theta^2} = \frac{n}{\theta^2(1-\theta)}.$$
(Simplification is not necessary for full credit.)


(b) Let $\theta_0$ be fixed and known, where $0 < \theta_0 < 1$. Find the Wald test of $H_0: \theta = \theta_0$ versus $H_1: \theta \neq \theta_0$ and state how to choose the critical value to give the test approximate size $\alpha$, where $0 < \alpha < 1$.
Note: You may use either version of the Wald test.
Solution: Observe that
$$J_n = -\ell_{X_n}''(\hat{\theta}_n^{\mathrm{MLE}}) = \left(\frac{n + \sum_{i=1}^n X_i}{\sum_{i=1}^n X_i}\right)^2 \sum_{i=1}^n X_i + n\left(\frac{n + \sum_{i=1}^n X_i}{n}\right)^2 = \frac{(n + \sum_{i=1}^n X_i)^3}{n\sum_{i=1}^n X_i} = I_n(\hat{\theta}_n^{\mathrm{MLE}}),$$
so both versions of the Wald test reject $H_0$ if and only if
$$\sqrt{\frac{(n + \sum_{i=1}^n X_i)^3}{n\sum_{i=1}^n X_i}}\,\left|\frac{n}{n + \sum_{i=1}^n X_i} - \theta_0\right| \geq z_{\alpha/2},$$
where $z_{\alpha/2}$ is the number such that $P(Z \geq z_{\alpha/2}) = \alpha/2$ for a standard normal random variable $Z$.

(c) Let $\theta_0$ be fixed and known, where $0 < \theta_0 < 1$. Find the score test of $H_0: \theta = \theta_0$ versus $H_1: \theta \neq \theta_0$ and state how to choose the critical value to give the test approximate size $\alpha$, where $0 < \alpha < 1$.
Solution: The score function is
$$\ell_{X_n}'(\theta) = \frac{\partial}{\partial\theta}\left[\sum_{i=1}^n X_i \log(1-\theta) + n\log\theta\right] = \frac{n}{\theta} - \frac{1}{1-\theta}\sum_{i=1}^n X_i = \frac{n - \theta(n + \sum_{i=1}^n X_i)}{\theta(1-\theta)},$$
so the score test rejects $H_0$ if and only if
$$\frac{|\ell_{X_n}'(\theta_0)|}{\sqrt{I_n(\theta_0)}} = \sqrt{\frac{\theta_0^2(1-\theta_0)}{n}}\;\frac{|n - \theta_0(n + \sum_{i=1}^n X_i)|}{\theta_0(1-\theta_0)} \geq z_{\alpha/2},$$
where $z_{\alpha/2}$ is the number such that $P(Z \geq z_{\alpha/2}) = \alpha/2$ for a standard normal random variable $Z$.

(d) Find the Wald confidence interval for $\theta$ with approximate confidence level $1 - \alpha$, where $0 < \alpha < 1$.
Note: You may use either version of the Wald confidence interval.
Solution: Since $I_n(\hat{\theta}_n^{\mathrm{MLE}}) = J_n$, both versions of the Wald confidence interval are
$$\left\{\theta \in (0,1) : \frac{n}{n + \sum_{i=1}^n X_i} - z_{\alpha/2}\sqrt{\frac{n\sum_{i=1}^n X_i}{(n + \sum_{i=1}^n X_i)^3}} < \theta < \frac{n}{n + \sum_{i=1}^n X_i} + z_{\alpha/2}\sqrt{\frac{n\sum_{i=1}^n X_i}{(n + \sum_{i=1}^n X_i)^3}}\right\},$$
where $z_{\alpha/2}$ is the number such that $P(Z \geq z_{\alpha/2}) = \alpha/2$ for a standard normal random variable $Z$.
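
A minimal numerical sketch of parts (b)–(d), assuming NumPy and SciPy (the latter only for the normal quantile); $\theta$, $\theta_0$, $\alpha$, $n$, and the seed are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
theta, theta0, alpha, n = 0.3, 0.25, 0.05, 200

# numpy's geometric counts trials (support 1, 2, ...); subtract 1 for failures.
x = rng.geometric(theta, size=n) - 1
s = x.sum()

mle = n / (n + s)
z = stats.norm.ppf(1 - alpha / 2)

J = (n + s) ** 3 / (n * s)                       # J_n = I_n(theta-hat)
wald = np.sqrt(J) * abs(mle - theta0)            # part (b)
score = (np.sqrt(theta0**2 * (1 - theta0) / n)   # part (c)
         * abs(n - theta0 * (n + s)) / (theta0 * (1 - theta0)))
ci = (mle - z / np.sqrt(J), mle + z / np.sqrt(J))  # part (d)
print(wald >= z, score >= z, ci)
```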


3. Let $X_1, \ldots, X_n$ be iid random variables such that $E_{\mu,\sigma^2}(X_1) = \mu$ and $\mathrm{Var}_{\mu,\sigma^2}(X_1) = \sigma^2$ are both finite. However, suppose that $X_1, \ldots, X_n$ are not normally distributed. Now define
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i.$$

(a) Do we know for certain that $\bar{X}_n$ is a consistent estimator of $\mu$?

Solution: Yes, by the law of large numbers. (An explanation is not required.)
(b) Do we know for certain that the distribution of $\bar{X}_n$ is approximately normal for large $n$?
Solution: Yes, by the central limit theorem. (An explanation is not required.)
4. Let $X_1, \ldots, X_n$ be iid continuous random variables with pdf
$$f_\theta(x) = \begin{cases} 2\theta x \exp(-\theta x^2) & \text{if } x \geq 0, \\ 0 & \text{if } x < 0, \end{cases}$$
where $\theta > 0$ is unknown. Suppose we assign a $\mathrm{Gamma}(a, b)$ prior to $\theta$, where $a > 0$ and $b > 0$ are known.
Note: The $\mathrm{Gamma}(a, b)$ distribution has pdf
$$f(x) = \begin{cases} \dfrac{b^a}{\Gamma(a)}\, x^{a-1} \exp(-bx) & \text{if } x > 0, \\ 0 & \text{if } x \leq 0, \end{cases}$$
and its mean is $a/b$. You may use these facts without proof.
(a) Find the posterior distribution of $\theta$.
Solution: Ignoring terms that do not depend on $\theta$, the posterior is
$$\pi(\theta \mid x_n) \propto \theta^n \exp\!\left(-\theta \sum_{i=1}^n x_i^2\right) \theta^{a-1} \exp(-b\theta)\, 1_{(0,\infty)}(\theta) \propto \theta^{a+n-1} \exp\!\left[-\left(b + \sum_{i=1}^n x_i^2\right)\theta\right] 1_{(0,\infty)}(\theta),$$
which we recognize as the unnormalized pdf of a $\mathrm{Gamma}(a + n,\, b + \sum_{i=1}^n x_i^2)$ distribution. Thus, $\theta \mid x_n \sim \mathrm{Gamma}(a + n,\, b + \sum_{i=1}^n x_i^2)$.

(b) Find (or simply state) the posterior mean of $\theta$.

Solution: The posterior mean of $\theta$ is simply $E(\theta \mid x_n) = (a + n)/(b + \sum_{i=1}^n x_i^2)$.
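
A minimal sketch of this conjugate update, assuming NumPy. It uses the fact that if $Y \sim \mathrm{Exp}(\text{rate } \theta)$ then $X = \sqrt{Y}$ has the pdf above; the prior parameters, $\theta$, and $n$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, theta_true, n = 2.0, 1.0, 1.5, 500

# If Y ~ Exp(rate theta) then X = sqrt(Y) has pdf 2*theta*x*exp(-theta*x^2).
x = np.sqrt(rng.exponential(scale=1.0 / theta_true, size=n))

a_post = a + n
b_post = b + np.sum(x**2)
print(a_post / b_post)  # posterior mean; near theta_true for large n
```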


5. Let $X_1, \ldots, X_n \sim \text{iid } \mathrm{Poisson}(\lambda)$, where $\lambda > 0$ is unknown.

Note: The $\mathrm{Poisson}(\lambda)$ distribution has pmf
$$f_\lambda(x) = \begin{cases} \dfrac{\lambda^x \exp(-\lambda)}{x!} & \text{if } x \in \{0, 1, 2, \ldots\}, \\ 0 & \text{otherwise,} \end{cases}$$
and its mean and variance are both $\lambda$. Also, the maximum likelihood estimator of $\lambda$ is
$$\hat{\lambda}_n^{\mathrm{MLE}} = \frac{1}{n}\sum_{i=1}^n X_i.$$
You may use these facts without proof.
(a) Let $\lambda_0 > 0$ be fixed and known. Find the likelihood ratio test of $H_0: \lambda = \lambda_0$ versus $H_1: \lambda \neq \lambda_0$. (You do not need to state how to choose the critical value for this part of the question.)
Solution: Evaluating the likelihood at $\lambda = \lambda_0$ and at $\lambda = \hat{\lambda}_n^{\mathrm{MLE}}$ yields
$$L_{X_n}(\lambda_0) = \frac{\lambda_0^{\sum_{i=1}^n X_i} \exp(-n\lambda_0)}{\prod_{i=1}^n X_i!}, \qquad L_{X_n}(\hat{\lambda}_n^{\mathrm{MLE}}) = \frac{(n^{-1}\sum_{i=1}^n X_i)^{\sum_{i=1}^n X_i} \exp[-n(n^{-1}\sum_{i=1}^n X_i)]}{\prod_{i=1}^n X_i!}.$$
Then the likelihood ratio statistic is
$$\Lambda(X_n) = \frac{L_{X_n}(\lambda_0)}{L_{X_n}(\hat{\lambda}_n^{\mathrm{MLE}})} = \left(\frac{n\lambda_0}{\sum_{i=1}^n X_i}\right)^{\sum_{i=1}^n X_i} \exp\!\left(\sum_{i=1}^n X_i - n\lambda_0\right),$$
and the likelihood ratio test rejects $H_0$ if and only if $\Lambda(X_n) \leq k$ for some critical value $k \geq 0$. (Simplification is not necessary for full credit.)

(b) State how the critical value of the likelihood ratio test in part (a) can be chosen to give the test approximate size $\alpha$.
Solution: To give the likelihood ratio test in part (a) approximate size $\alpha$, we reject $H_0$ if and only if
$$-2\log\Lambda(X_n) \geq w_\alpha,$$
or equivalently, if and only if
$$\Lambda(X_n) \leq \exp\!\left(-\frac{w_\alpha}{2}\right),$$
where $w_\alpha$ is the number such that $P(W \geq w_\alpha) = \alpha$ for a $\chi_1^2$ random variable $W$.
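
A minimal numerical sketch of parts (a) and (b) together, assuming NumPy and SciPy (for the $\chi_1^2$ quantile). The data are simulated under $H_0$, so the test should reject with probability roughly $\alpha$; $\lambda_0$, $n$, and the seed are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
lam0, alpha, n = 3.0, 0.05, 100
x = rng.poisson(lam0, size=n)  # simulated under H0
s = x.sum()

# -2 log Lambda, using the closed form derived in part (a)
neg2loglam = -2 * (s * np.log(n * lam0 / s) + s - n * lam0)
w = stats.chi2.ppf(1 - alpha, df=1)
print(neg2loglam >= w)
```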


6. Let $X$ be a single continuous random variable with pdf
$$f_\theta(x) = \frac{\exp(\theta - x)}{[1 + \exp(\theta - x)]^2} = \frac{1}{4}\,\mathrm{sech}^2\!\left(\frac{\theta - x}{2}\right) = \frac{1}{4}\,\mathrm{sech}^2\!\left(\frac{x - \theta}{2}\right),$$
where $\mathrm{sech}$ is the hyperbolic secant function, and cdf
$$F_\theta(x) = \frac{1}{1 + \exp(\theta - x)},$$
where $\theta \in \mathbb{R}$ is unknown.
Note: The maximum likelihood estimator of $\theta$ is $\hat{\theta}^{\mathrm{MLE}} = X$. Also, $\mathrm{sech}(-t) = \mathrm{sech}(t)$ for all $t \in \mathbb{R}$, and $\mathrm{sech}(t)$ is a strictly decreasing function of $t$ for $t \geq 0$. You may use these facts without proof.
(a) Show that the likelihood ratio test of $H_0: \theta = 0$ versus $H_1: \theta \neq 0$ rejects $H_0$ if and only if $|X| \geq c$ for some critical value $c$. (You do not need to state how to choose the critical value for this part of the question.)
Solution: Evaluating the likelihood at $\theta = 0$ and at $\theta = \hat{\theta}^{\mathrm{MLE}} = X$ yields
$$L_X(0) = \frac{\exp(-X)}{[1 + \exp(-X)]^2} = \frac{1}{4}\,\mathrm{sech}^2\!\left(\frac{X}{2}\right), \qquad L_X(X) = \frac{1}{(1+1)^2} = \frac{1}{4},$$
so the likelihood ratio statistic is
$$\Lambda(X) = \frac{L_X(0)}{L_X(X)} = \frac{4\exp(-X)}{[1 + \exp(-X)]^2} = \mathrm{sech}^2\!\left(\frac{X}{2}\right),$$
and we reject $H_0$ if and only if $\Lambda(X) \leq k$ for some critical value $k \geq 0$. The note tells us that $\mathrm{sech}(-X/2) = \mathrm{sech}(X/2)$ and that $\mathrm{sech}(t)$ is strictly decreasing for $t \geq 0$, so rejecting $H_0$ if and only if $\mathrm{sech}^2(X/2) \leq k$ is equivalent to rejecting $H_0$ if and only if $|X| \geq c$ for some $c$.

(b) State how the critical value $c_\alpha$ of the likelihood ratio test in part (a) can be chosen to give the test size $\alpha$ (exactly, not just approximately), where $0 < \alpha < 1$.
Solution: The test has size $\alpha$ if and only if
$$\alpha = P_{\theta=0}(|X| \geq c_\alpha) = P_{\theta=0}(X \leq -c_\alpha) + P_{\theta=0}(X \geq c_\alpha) = F_0(-c_\alpha) + 1 - F_0(c_\alpha) = \frac{1}{1 + \exp(c_\alpha)} + 1 - \frac{1}{1 + \exp(-c_\alpha)} = \frac{2}{1 + \exp(c_\alpha)}.$$
Then we choose $c_\alpha = \log(2\alpha^{-1} - 1)$ to achieve size $\alpha$. (Simplifying and solving for $c_\alpha$ in terms of $\alpha$ are not necessary for full credit.)


(c) For the likelihood ratio test with size $\alpha$ in parts (a) and (b), find the probability of a type II error if the true value of $\theta$ happens to be $\theta^\star \neq 0$.
Solution: Since $\theta^\star \neq 0$,
$$P_{\theta^\star}(\text{type II error}) = P_{\theta^\star}(|X| < c_\alpha) = P_{\theta^\star}(-c_\alpha < X < c_\alpha) = F_{\theta^\star}(c_\alpha) - F_{\theta^\star}(-c_\alpha) = \frac{1}{1 + \exp(\theta^\star - c_\alpha)} - \frac{1}{1 + \exp(\theta^\star + c_\alpha)} = \frac{2\alpha^{-1} - 1}{(2\alpha^{-1} - 1) + \exp(\theta^\star)} - \frac{1}{1 + (2\alpha^{-1} - 1)\exp(\theta^\star)}.$$
(Inserting the value of $c_\alpha$ is not necessary for full credit.)

(d) Suppose we observe $X = x_{\mathrm{obs}}$. Find the p-value of the likelihood ratio test for the observed data $x_{\mathrm{obs}}$.
Note: Be sure your answer is correct for both positive and negative values of $x_{\mathrm{obs}}$.
Solution: The p-value is
$$p(x_{\mathrm{obs}}) = P_{\theta=0}(|X| \geq |x_{\mathrm{obs}}|) = F_0(-|x_{\mathrm{obs}}|) + 1 - F_0(|x_{\mathrm{obs}}|) = \frac{1}{1 + \exp(|x_{\mathrm{obs}}|)} + 1 - \frac{1}{1 + \exp(-|x_{\mathrm{obs}}|)} = \frac{2}{1 + \exp(|x_{\mathrm{obs}}|)}.$$
(Simplification is not necessary for full credit.)
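
As a quick check (not part of the solution), note that under $H_0$ the variable $X$ has a standard logistic distribution, which NumPy can sample directly; the sketch below verifies the size formula empirically and evaluates the p-value formula at an arbitrary $x_{\mathrm{obs}}$.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha = 0.05
c = np.log(2 / alpha - 1)  # c_alpha from part (b)

# Under H0 (theta = 0), X is standard logistic.
x = rng.logistic(loc=0.0, scale=1.0, size=1_000_000)
print(np.mean(np.abs(x) >= c))   # empirical size, ~0.05

x_obs = 1.7
print(2 / (1 + np.exp(abs(x_obs))))  # p-value from part (d)
```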

7. Suppose that we call a hypothesis test trivial if its rejection region is either the empty set or the entire sample space, i.e., a trivial test is a test that either never rejects $H_0$ or always rejects $H_0$. Now let $X \sim \mathrm{Bin}(n, \theta)$, where $\theta$ is unknown, and consider testing $H_0: \theta = 1/2$ versus $H_1: \theta \neq 1/2$. Find a necessary and sufficient condition (in terms of $n$) for the existence of a test of these hypotheses with level $\alpha = 0.05 = 1/20$ that is not trivial.
Solution: A test of these hypotheses with rejection region $R$ has level $0.05$ if and only if $P_{\theta=1/2}(X \in R) \leq 0.05$. For such a test to be nontrivial, the rejection region $R$ cannot be the empty set. Now observe that the points in the sample space with the smallest probability when $\theta = 1/2$ are $X = 0$ and $X = n$, each of which has probability $1/2^n$ when $\theta = 1/2$. Hence, there exists a nonempty rejection region $R$ with $P_{\theta=1/2}(X \in R) \leq 0.05$ if and only if $1/2^n \leq 0.05$. This inequality holds if and only if $n \geq 5$. Hence, there exists a nontrivial test of these hypotheses with level $\alpha = 0.05$ if and only if $n \geq 5$. Also, since it is clear that $n$ must be a positive integer for the question to make sense, any condition that is equivalent to $n \geq 5$ when applied to the positive integers, such as $n \geq (\log 20)/(\log 2)$, is also acceptable.
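
A quick numeric check of the boundary case:

```python
import math

# 1/2**n <= 0.05 first holds at n = 5, matching n >= log(20)/log(2) ~ 4.32.
for n in range(1, 8):
    print(n, 1 / 2**n, 1 / 2**n <= 0.05)
print(math.log(20) / math.log(2))
```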


8. Let $X_1, \ldots, X_n$ be iid $\mathrm{Exp}(1)$ random variables with pdf
$$f(x) = \begin{cases} \exp(-x) & \text{if } x \geq 0, \\ 0 & \text{if } x < 0. \end{cases}$$
Then let $Y_n = \max_{1 \leq i \leq n} X_i$. Find a sequence of constants $a_n$ such that $Y_n - a_n$ converges in distribution to a random variable with cdf $G(t) = \exp[-\exp(-t)]$. (This limiting distribution is called the Gumbel distribution.)
Hint: For any $c \in \mathbb{R}$, $(1 + n^{-1}c)^n \to \exp(c)$ as $n \to \infty$.
Solution: Let $F$ denote the cdf of the $\mathrm{Exp}(1)$ distribution, which is
$$F(x) = \begin{cases} 1 - \exp(-x) & \text{if } x \geq 0, \\ 0 & \text{if } x < 0. \end{cases}$$
Now let $G_n$ denote the cdf of the random variable $Y_n - a_n$. Then
$$G_n(t) = P(Y_n - a_n \leq t) = P(Y_n \leq t + a_n) = P\!\left(\max_{1 \leq i \leq n} X_i \leq t + a_n\right) = \prod_{i=1}^n P(X_i \leq t + a_n) = [F(t + a_n)]^n = \begin{cases} [1 - \exp(-t - a_n)]^n & \text{if } t + a_n \geq 0, \\ 0 & \text{if } t + a_n < 0. \end{cases}$$
Now let $a_n = \log n$. Then for every $t \in \mathbb{R}$, $t + a_n \geq 0$ for all sufficiently large $n$. Then for every $t \in \mathbb{R}$ and all sufficiently large $n$,
$$G_n(t) = [1 - \exp(-t - \log n)]^n = [1 - n^{-1}\exp(-t)]^n \to \exp[-\exp(-t)] = G(t).$$
Thus, $Y_n - \log n$ converges in distribution to a random variable with cdf $G$.
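
A simulation sketch of this convergence, assuming NumPy; $n$, the replication count, and the grid of $t$ values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 1000, 100_000

y = rng.exponential(size=(reps, n)).max(axis=1) - np.log(n)

# Empirical cdf of Y_n - log n versus the Gumbel cdf at a few points.
for t in (-1.0, 0.0, 1.0, 2.0):
    print(t, np.mean(y <= t), np.exp(-np.exp(-t)))
```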
9. Let $X_1, \ldots, X_n$ be iid continuous random variables with pdf
$$f_\theta(x) = \begin{cases} \dfrac{1}{\sqrt{2\pi x^3}} \exp\!\left[-\dfrac{(x-\theta)^2}{2\theta^2 x}\right] & \text{if } x > 0, \\ 0 & \text{if } x \leq 0, \end{cases}$$
where $\theta > 0$ is unknown.


(a) Find the maximum likelihood estimator of $\theta$.

Solution: Differentiating the log-likelihood yields
$$\ell_{x_n}'(\theta) = \frac{\partial}{\partial\theta}\left[-\frac{n}{2}\log(2\pi) - \frac{3}{2}\sum_{i=1}^n \log x_i - \frac{1}{2\theta^2}\sum_{i=1}^n x_i + \frac{n}{\theta} - \frac{1}{2}\sum_{i=1}^n \frac{1}{x_i}\right] = -\frac{n}{\theta^2} + \frac{1}{\theta^3}\sum_{i=1}^n x_i = 0 \iff \theta = \frac{1}{n}\sum_{i=1}^n x_i = \bar{x}_n.$$
Although it is not necessary to obtain full credit, we technically should now verify that this critical point does indeed maximize the likelihood since it is not immediately clear that this is the case. (In particular, it is not obvious what the likelihood does as $\theta \to \infty$.) However, the second derivative at the critical point is
$$\ell_{x_n}''(\bar{x}_n) = \frac{2n}{(\bar{x}_n)^3} - \frac{3n\bar{x}_n}{(\bar{x}_n)^4} = -\frac{n}{(\bar{x}_n)^3} < 0,$$
so this critical point is indeed a maximum. Hence, the MLE of $\theta$ is $\hat{\theta} = \bar{X}_n$.

(b) Let $\theta_0 > 0$ be fixed and known. Find the likelihood ratio test of $H_0: \theta = \theta_0$ versus $H_1: \theta \neq \theta_0$. (You do not need to state how to choose the critical value to give the test a particular size in this part of the question.)
Solution: The likelihood ratio statistic is
$$\Lambda(X_n) = \frac{L_{X_n}(\theta_0)}{L_{X_n}(\hat{\theta})} = \frac{\prod_{i=1}^n \frac{1}{\sqrt{2\pi X_i^3}} \exp\!\left[-\frac{(X_i - \theta_0)^2}{2\theta_0^2 X_i}\right]}{\prod_{i=1}^n \frac{1}{\sqrt{2\pi X_i^3}} \exp\!\left[-\frac{(X_i - \bar{X}_n)^2}{2(\bar{X}_n)^2 X_i}\right]} = \exp\!\left(-\frac{n\bar{X}_n}{2\theta_0^2} + \frac{n}{\theta_0} - \frac{n}{2\bar{X}_n}\right) = \exp\!\left[-\frac{n}{2\bar{X}_n}\left(\frac{\bar{X}_n}{\theta_0} - 1\right)^2\right],$$
and we reject $H_0$ if and only if $\Lambda(X_n) \leq k$ for some choice of $k$.

(c) State how the critical value of the likelihood ratio test in part (b) can be chosen to give the test approximate size $\alpha$.
Solution: To give the likelihood ratio test in part (b) approximate size $\alpha$, we reject $H_0$ if and only if
$$-2\log\Lambda(X_n) \geq w_\alpha,$$
or equivalently, if and only if
$$\Lambda(X_n) \leq \exp\!\left(-\frac{w_\alpha}{2}\right),$$
where $w_\alpha$ is the number such that $P(W \geq w_\alpha) = \alpha$ for a $\chi_1^2$ random variable $W$.


(d) Explain how the likelihood ratio test in part (b) would change if the alternative hypothesis were $H_1: \theta < \theta_0$ instead.
Solution: The MLE must now be found over the restricted parameter space $\theta \in (0, \theta_0]$ rather than over $\theta > 0$. Thus, if $\bar{X}_n > \theta_0$, the MLE is instead $\hat{\theta} = \theta_0$, which yields $\Lambda(X_n) = 1$. If instead $\bar{X}_n \leq \theta_0$, then there is no change.

10. Let $X_1, \ldots, X_n$ be iid continuous random variables with pdf
$$f_\theta(x) = \begin{cases} \dfrac{\theta(\theta+1)}{(\theta+x)^2} & \text{if } 0 \leq x \leq 1, \\ 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ is unknown. It can be shown that the Fisher information in the sample is
$$I_n(\theta) = \frac{n}{3\theta^2(\theta+1)^2}.$$
Use this fact to find (or simply state) the asymptotic distribution of the maximum likelihood estimator $\hat{\theta}_n$ of $\theta$.
Note: There is no need to actually find the form of $\hat{\theta}_n$ or to verify the result for the Fisher information. Also, you may assume that the regularity conditions of Section 7.4 of the notes hold.

Solution: Let $\hat{\theta}_n$ denote the MLE of $\theta$. Then $\sqrt{n}\,(\hat{\theta}_n - \theta) \to_D N[0, 3\theta^2(\theta+1)^2]$.


11. Let $X_1, \ldots, X_n$ be iid continuous random variables with pdf
$$f_\theta(x) = \begin{cases} \dfrac{\theta}{(\theta+x)^2} & \text{if } x \geq 0, \\ 0 & \text{if } x < 0, \end{cases}$$
where $\theta > 0$ is unknown. Let $\hat{\theta}_n$ denote the maximum likelihood estimator of $\theta$ (which you do not need to find).
Note: It can be shown by simple calculus that
$$E_\theta(X_1) = E_\theta\!\left(\frac{1}{X_1}\right) = \infty, \qquad E_\theta\!\left(\frac{1}{\theta + X_1}\right) = \frac{1}{2\theta}, \qquad E_\theta\!\left[\frac{1}{(\theta + X_1)^2}\right] = \frac{1}{3\theta^2},$$
so you may use any of these facts without proof. You may also assume that the relevant regularity conditions (i.e., those of Section 7.4 of the notes) are satisfied.


(a) Find the score function for the sample and show explicitly that its expectation is zero (i.e., do not simply cite the result from the notes that says that the expectation is zero).
Solution: The score function is (for $\theta > 0$)
$$\ell_{X_n}'(\theta) = \frac{\partial}{\partial\theta}\sum_{i=1}^n [\log\theta - 2\log(\theta + X_i)] = \frac{\partial}{\partial\theta}\left[n\log\theta - 2\sum_{i=1}^n \log(\theta + X_i)\right] = \frac{n}{\theta} - 2\sum_{i=1}^n \frac{1}{\theta + X_i}.$$
Then clearly $E_\theta[\ell_{X_n}'(\theta)] = 0$ since $E_\theta[1/(\theta + X_i)] = 1/(2\theta)$ for each $i \in \{1, \ldots, n\}$.

(b) Find the Fisher information for the sample.

Solution: We first find the Fisher information per observation. The second derivative of the log-likelihood for $X_1$ is (for $\theta > 0$)
$$\ell_{X_1}''(\theta) = \frac{\partial^2}{\partial\theta^2}[\log\theta - 2\log(\theta + X_1)] = \frac{\partial}{\partial\theta}\left(\frac{1}{\theta} - \frac{2}{\theta + X_1}\right) = -\frac{1}{\theta^2} + \frac{2}{(\theta + X_1)^2}.$$
Then
$$I_1(\theta) = -E_\theta[\ell_{X_1}''(\theta)] = \frac{1}{\theta^2} - 2\,E_\theta\!\left[\frac{1}{(\theta + X_1)^2}\right] = \frac{1}{\theta^2} - \frac{2}{3\theta^2} = \frac{1}{3\theta^2},$$
and thus
$$I_n(\theta) = n\,I_1(\theta) = \frac{n}{3\theta^2}$$
is the Fisher information for the sample.

(c) Find (or simply state) the asymptotic distribution of $\hat{\theta}_n$.

Note: Your answer should be a formal probabilistic result involving convergence in distribution.
Solution: Since the Fisher information per observation is $I_1(\theta) = 1/(3\theta^2)$, the asymptotic distribution of the maximum likelihood estimator $\hat{\theta}_n$ of $\theta$ is
$$\sqrt{n}\,(\hat{\theta}_n - \theta) \to_D N(0, 3\theta^2)$$
by the standard result from the notes.
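
The two conditional-moment facts used above are easy to check by simulation: since $F_\theta(x) = x/(\theta + x)$ for $x \geq 0$, inverse-cdf sampling gives draws from $f_\theta$. A minimal sketch assuming NumPy, with $\theta = 2$ arbitrary:

```python
import numpy as np

rng = np.random.default_rng(8)
theta, n = 2.0, 1_000_000

# Inverse-cdf sampling: F(x) = x / (theta + x), so x = theta * u / (1 - u).
u = rng.uniform(size=n)
x = theta * u / (1 - u)

print(np.mean(1 / (theta + x)), 1 / (2 * theta))           # ~ 1/(2 theta)
print(np.mean(1 / (theta + x) ** 2), 1 / (3 * theta**2))   # ~ 1/(3 theta^2)
```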


12. Let $X_1, \ldots, X_n \sim \text{iid } N(\mu, 1)$, where $\mu \in \mathbb{R}$ is unknown.

(a) State an estimator of $\mu$ that is consistent but not unbiased.
Solution: There exist many consistent estimators of $\mu$ that are not unbiased, such as $n^{-1} + \bar{X}_n$ or $(n-1)\bar{X}_n/n$. More generally, any estimator of the form $a_n + b_n \bar{X}_n$ is consistent if $a_n \to 0$ and $b_n \to 1$ and is biased (i.e., not unbiased) if $a_n \neq 0$ or $b_n \neq 1$. (However, there also exist estimators that meet the required criteria that are not of this form.)

(b) State an estimator of $\mu$ that is consistent but not asymptotically efficient.

Solution: Again, there exist many consistent estimators of $\mu$ that are not asymptotically efficient. A simple example is to take the estimator to be the mean of just the even-numbered observations among $X_1, \ldots, X_n$. (Taking the estimator to be the mean of any subset of $X_1, \ldots, X_n$ meets the required criteria if the fraction of observations in the subset tends to a limit strictly between 0 and 1, e.g., the fraction of even-numbered observations tends to 1/2.)

(c) Is $(\bar{X}_n)^2 = (n^{-1}\sum_{i=1}^n X_i)^2$ an unbiased estimator of $\mu^2$? Why or why not?

Solution: No, since $E_\mu[(\bar{X}_n)^2] = [E_\mu(\bar{X}_n)]^2 + \mathrm{Var}_\mu(\bar{X}_n) = \mu^2 + n^{-1} \neq \mu^2$.

(d) Is $(\bar{X}_n)^2 = (n^{-1}\sum_{i=1}^n X_i)^2$ a consistent estimator of $\mu^2$? Why or why not?

Solution: Yes, since $(\bar{X}_n)^2 \to_P \mu^2$ by the continuous mapping theorem, noting that $\bar{X}_n \to_P \mu$ by the weak law of large numbers.

13. Lemma 7.2.1 of the notes states that $E_\theta[\ell_{X_n}'(\theta)] = 0$ for all $\theta$ in the parameter space $\Theta$. This result uses the regularity condition that $X_n = (X_1, \ldots, X_n)$ is an iid sample. Now suppose that we were to remove the condition of independence while keeping all other regularity conditions in place. Explain why the result that $E_\theta[\ell_{X_n}'(\theta)] = 0$ for all $\theta \in \Theta$ would still be true.
Solution: If $X_1, \ldots, X_n$ are not independent, then the log-likelihood is
$$\ell_{X_n}(\theta) = \log[L_{X_1}(\theta)\, L_{X_2 \mid X_1}(\theta)\, L_{X_3 \mid X_1, X_2}(\theta) \cdots L_{X_n \mid X_1, \ldots, X_{n-1}}(\theta)] = \sum_{i=1}^n \ell_{X_i \mid X_1, \ldots, X_{i-1}}(\theta),$$
where $\ell_{X_i \mid X_1, \ldots, X_{i-1}}(\theta)$ is simply the log of the conditional pdf or pmf of $X_i$ given $X_1, \ldots, X_{i-1}$. This conditional pdf or pmf is still a valid pdf or pmf, so it still has the property that $E_\theta[\ell_{X_i \mid X_1, \ldots, X_{i-1}}'(\theta)] = 0$. Thus, we still obtain the result that $E_\theta[\ell_{X_n}'(\theta)] = 0$.


14. Let $X_1, \ldots, X_n$ be iid random variables from a distribution that depends on an unknown parameter $\theta \in \mathbb{R}$. This distribution has the following properties:
$$E_\theta(X_1) = 2\exp(\theta), \qquad \mathrm{Var}_\theta(X_1) = 12\exp(2\theta),$$
$$E_\theta(\log X_1) = \theta, \qquad \mathrm{Var}_\theta(\log X_1) = 2\log 2,$$
$$E_\theta(X_1^{-1}) = 2\exp(-\theta), \qquad \mathrm{Var}_\theta(X_1^{-1}) = 12\exp(-2\theta).$$
Now define the estimators
$$\hat{\theta}_n^{(1)} = \log\!\left(\frac{1}{2n}\sum_{i=1}^n X_i\right), \qquad \hat{\theta}_n^{(2)} = \log\!\left[\left(\prod_{i=1}^n X_i\right)^{1/n}\right].$$

(a) Find a function $v^{(1)}(\theta)$ such that
$$\sqrt{n}\,[\hat{\theta}_n^{(1)} - \theta] \to_D N[0, v^{(1)}(\theta)].$$
Solution to (a): By the central limit theorem,
$$\sqrt{n}\left[\frac{1}{n}\sum_{i=1}^n X_i - 2\exp(\theta)\right] \to_D N[0, 12\exp(2\theta)].$$
Now apply the delta method with the function $g(t) = \log(t/2)$, which has derivative $g'(t) = 1/t$. Then $g'[2\exp(\theta)] = \frac{1}{2}\exp(-\theta)$, and so
$$\sqrt{n}\,[\hat{\theta}_n^{(1)} - \theta] \to_D N(0, 3).$$
Thus, $v^{(1)}(\theta) = 3$ for all $\theta$.

(b) Find a function $v^{(2)}(\theta)$ such that
$$\sqrt{n}\,[\hat{\theta}_n^{(2)} - \theta] \to_D N[0, v^{(2)}(\theta)].$$
Solution to (b): Begin by rewriting $\hat{\theta}_n^{(2)}$ as
$$\hat{\theta}_n^{(2)} = \frac{1}{n}\sum_{i=1}^n \log X_i.$$
Then by the central limit theorem,
$$\sqrt{n}\,[\hat{\theta}_n^{(2)} - \theta] \to_D N(0, 2\log 2).$$
Thus, $v^{(2)}(\theta) = 2\log 2$ for all $\theta$.


(c) Find $\mathrm{ARE}_\theta[\hat{\theta}_n^{(1)}, \hat{\theta}_n^{(2)}]$, the asymptotic relative efficiency of $\hat{\theta}_n^{(1)}$ compared to $\hat{\theta}_n^{(2)}$, and use it to state which of the two estimators is better for large $n$.
Solution to (c): $\mathrm{ARE}_\theta[\hat{\theta}_n^{(1)}, \hat{\theta}_n^{(2)}] = v^{(2)}(\theta)/v^{(1)}(\theta) = (2\log 2)/3$ for all $\theta \in \mathbb{R}$. Note that $\log 2 < \log e = 1$, so $2\log 2 < 3$. Thus, $\hat{\theta}_n^{(2)}$ is better than $\hat{\theta}_n^{(1)}$ for large $n$.
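
Although no specific distribution is given, one distribution satisfying all six moment conditions above is the lognormal $X = \exp(\theta + Z)$ with $Z \sim N(0, 2\log 2)$: then $E(e^Z) = e^{\log 2} = 2$ and $\mathrm{Var}(e^Z) = e^{4\log 2} - e^{2\log 2} = 12$, and symmetrically for $e^{-Z}$. A simulation sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(9)
theta, n, reps = 0.7, 400, 5000

# X = exp(theta + Z), Z ~ N(0, 2 log 2), matches the stated moments.
z = rng.normal(0.0, np.sqrt(2 * np.log(2)), size=(reps, n))
x = np.exp(theta + z)

est1 = np.log(x.mean(axis=1) / 2)   # theta-hat (1)
est2 = np.log(x).mean(axis=1)       # theta-hat (2)

print(n * est1.var(), 3.0)              # ~ v1 = 3
print(n * est2.var(), 2 * np.log(2))    # ~ v2 = 2 log 2 ~ 1.386
```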

15. Let $X_1, \ldots, X_n$ be iid discrete random variables with pmf
$$p_\theta(x) = \begin{cases} \dfrac{(x+k-1)!}{x!\,(k-1)!}\, \theta^k (1-\theta)^x & \text{if } x \in \{0, 1, 2, \ldots\}, \\ 0 & \text{otherwise,} \end{cases}$$
where $k$ is a known positive integer, $\theta$ is unknown, and $0 < \theta < 1$.


(a) Find the maximum likelihood estimator $\hat{\theta}_n$ of $\theta$.
Note: For the purposes of this question, you can ignore any possible data values for which the MLE does not exist.
Solution: Differentiating the log-likelihood yields
$$\ell_{x_n}'(\theta) = \frac{\partial}{\partial\theta}\left\{nk\log\theta + \sum_{i=1}^n x_i \log(1-\theta) + \sum_{i=1}^n \log\!\left[\frac{(x_i+k-1)!}{x_i!\,(k-1)!}\right]\right\} = \frac{nk}{\theta} - \frac{1}{1-\theta}\sum_{i=1}^n x_i = 0 \iff (1-\theta)nk = \theta\sum_{i=1}^n x_i \iff \theta = \frac{nk}{nk + \sum_{i=1}^n x_i}.$$
It is clear from the form of the likelihood that this point is indeed a maximum. Thus, the maximum likelihood estimator of $\theta$ is
$$\hat{\theta}_n = \frac{nk}{nk + \sum_{i=1}^n X_i}$$
as long as $X_i > 0$ for some $i \in \{1, \ldots, n\}$. (Otherwise the MLE does not exist, but the question says we can ignore this possibility.)

(b) Find the asymptotic distribution of the maximum likelihood estimator $\hat{\theta}_n$ of $\theta$.

Note: You may use without proof the fact that $E_\theta(X_1) = (1-\theta)k/\theta$, and you may assume that the regularity conditions of Section 7.4 of the notes hold.
Solution: First,
$$\ell_{X_1}''(\theta) = \frac{\partial}{\partial\theta}\left(\frac{k}{\theta} - \frac{X_1}{1-\theta}\right) = -\frac{k}{\theta^2} - \frac{X_1}{(1-\theta)^2},$$
and thus
$$I_1(\theta) = -E_\theta[\ell_{X_1}''(\theta)] = \frac{k}{\theta^2} + \frac{E_\theta(X_1)}{(1-\theta)^2} = \frac{k}{\theta^2} + \frac{k}{\theta(1-\theta)} = \frac{k}{\theta^2(1-\theta)}.$$
Then
$$\sqrt{n}\,(\hat{\theta}_n - \theta) \to_D N\!\left[0, \frac{\theta^2(1-\theta)}{k}\right],$$
assuming the various regularity conditions.
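
This pmf is that of the negative binomial distribution (number of failures before the $k$-th success), so the MLE formula can be checked directly against NumPy's sampler; a minimal sketch with $\theta$, $k$, and $n$ arbitrary:

```python
import numpy as np

rng = np.random.default_rng(10)
theta, k, n = 0.4, 3, 50_000

# negative_binomial(k, p) counts failures before the k-th success.
x = rng.negative_binomial(k, theta, size=n)

mle = n * k / (n * k + x.sum())
print(mle)  # should be close to theta
```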


16. Let $X$ be a single continuous random variable with pdf
$$f_\theta(x) = \frac{1}{\pi[1 + (x-\theta)^2]},$$
where $\theta \in \mathbb{R}$ is unknown. Let $\theta_0 \in \mathbb{R}$ be fixed and known, and consider testing $H_0: \theta = \theta_0$ versus $H_1: \theta \neq \theta_0$.
Note: The cdf of $X$ is $F_\theta(x) = \frac{1}{\pi}\arctan(x-\theta) + \frac{1}{2}$, noting that the arctan function is strictly increasing and is also an odd function, i.e., $\arctan(-u) = -\arctan(u)$ for all $u \in \mathbb{R}$. Also, the $\pi$ that appears in the pdf and cdf is the usual mathematical constant, i.e., $\pi \approx 3.14$. You may use any of these facts without proof.
(a) Show that the likelihood ratio test of these hypotheses rejects $H_0$ if and only if $X \leq \theta_0 - c_\alpha$ or $X \geq \theta_0 + c_\alpha$, where $c_\alpha \geq 0$. (You do not need to state how to choose the critical value for this part of the question.)
Solution: The maximum likelihood estimator of $\theta$ is clearly $\hat{\theta} = X$. Then the likelihood ratio statistic is
$$\Lambda(X) = \frac{L_X(\theta_0)}{L_X(\hat{\theta})} = \frac{1}{1 + (X - \theta_0)^2},$$
and we reject $H_0$ if and only if $\Lambda(X) \leq k$ for some $k$. Now note that $\Lambda(X) \leq k$ is equivalent to $|X - \theta_0| \geq c$ for some $c \geq 0$.

(b) Let $0 < \alpha < 1$. Find the value of $c_\alpha$ such that the test in part (a) has size $\alpha$ (exactly).
Note: The inverse of the arctan function is simply the tan function.
Solution: The test has size $\alpha$ if and only if
$$\alpha = P_{\theta_0}(|X - \theta_0| \geq c_\alpha) = P_{\theta_0}(X \leq \theta_0 - c_\alpha) + P_{\theta_0}(X \geq \theta_0 + c_\alpha) = F_{\theta_0}(\theta_0 - c_\alpha) + 1 - F_{\theta_0}(\theta_0 + c_\alpha) = \frac{1}{\pi}\arctan(-c_\alpha) + \frac{1}{2} + 1 - \frac{1}{\pi}\arctan(c_\alpha) - \frac{1}{2} = 1 - \frac{2}{\pi}\arctan(c_\alpha).$$
Thus,
$$c_\alpha = \tan\!\left[(1-\alpha)\frac{\pi}{2}\right]$$
gives the test size $\alpha$ (exactly).
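
A quick Monte Carlo check of this size calculation (not required for the solution), assuming NumPy; $\alpha$, $\theta_0$, and the seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)
alpha, theta0 = 0.05, 1.0
c = np.tan((1 - alpha) * np.pi / 2)

# Under H0, X - theta0 is standard Cauchy.
x = theta0 + rng.standard_cauchy(size=1_000_000)
print(np.mean(np.abs(x - theta0) >= c))  # empirical size, ~0.05
```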


17. Let $X$ be a single random variable with a continuous uniform distribution on $[0, \theta]$, where $\theta > 0$ is unknown. Consider testing $H_0: \theta = 1$ versus $H_1: \theta \neq 1$.
Note: The maximum likelihood estimator of $\theta$ is $\hat{\theta} = X$. You may use this fact without proof.
(a) Let $0 < \alpha < 1$. Find the likelihood ratio test of these hypotheses, and find the critical value that gives the test size $\alpha$.
Solution: Evaluating the likelihood at $\theta = 1$ yields
$$L_X(1) = \begin{cases} 1 & \text{if } X \leq 1, \\ 0 & \text{if } X > 1, \end{cases}$$
while evaluating the likelihood at $\hat{\theta}$ yields
$$L_X(\hat{\theta}) = \frac{1}{\hat{\theta}} = \frac{1}{X}.$$
Then the likelihood ratio test statistic is
$$\Lambda(X) = \begin{cases} X & \text{if } X \leq 1, \\ 0 & \text{if } X > 1, \end{cases}$$
and we reject $H_0$ if and only if $\Lambda(X) \leq k$ for some critical value $k$. To give the test size $\alpha$, we must choose $k$ so that $\alpha = P_{\theta=1}[\Lambda(X) \leq k]$. Observe that if $\theta = 1$, then $X$ and $\Lambda(X)$ are both uniformly distributed on $[0, 1]$. Then $P_{\theta=1}[\Lambda(X) \leq k] = k$. Thus, we take $k = \alpha$ to give the test size $\alpha$.

(b) Find values $c_1 > 0$ and $c_2 > 0$ such that the likelihood ratio test with size $\alpha$ in part (a) takes the form "Reject $H_0$ if and only if either $X \leq c_1$ or $X > c_2$."
Solution: Observe that $\Lambda(X) \leq k = \alpha$ if and only if either $X \leq \alpha$ or $X > 1$. Thus, $c_1 = \alpha$ and $c_2 = 1$.

(c) Give two reasons why it would not be appropriate to choose the critical value for part (a) based on the result that $-2\log\Lambda$ has an approximate $\chi_1^2$ distribution.
Solution: First, this result depends on various regularity conditions that are not satisfied since the support of the distribution of $X$ depends on the unknown parameter $\theta$. Second, this approximation is based on an asymptotic result, which means it holds for large $n$. However, here $n = 1$.


18. Let $X_1, \ldots, X_n$ be iid $\mathrm{Poisson}(\lambda)$ random variables with pmf
$$p_\lambda(x) = \begin{cases} \dfrac{\lambda^x \exp(-\lambda)}{x!} & \text{if } x \in \{0, 1, 2, \ldots\}, \\ 0 & \text{otherwise,} \end{cases}$$
where $\lambda > 0$ is unknown. Then let $\lambda_0 > 0$ be fixed and known, and consider testing $H_0: \lambda = \lambda_0$ versus $H_1: \lambda \neq \lambda_0$.
Note: The MLE of $\lambda$ is $\hat{\lambda}_n = \bar{X}_n$, and $E_\lambda(X_1) = \lambda$. You may use these facts without proof.
(a) Find the Wald test of these hypotheses, and state how to choose the critical value to give the test approximate size $\alpha$. (You may use either version of the Wald test.)
Solution (Version I): First, observe that
$$\ell_{X_n}''(\lambda) = \frac{\partial^2}{\partial\lambda^2}\left[\sum_{i=1}^n X_i \log\lambda - n\lambda - \sum_{i=1}^n \log(X_i!)\right] = -\frac{n\bar{X}_n}{\lambda^2}.$$
Then
$$J_n = -\ell_{X_n}''(\hat{\lambda}_n) = \frac{n}{\bar{X}_n}.$$
Then the Wald test rejects $H_0$ if and only if
$$\sqrt{\frac{n}{\bar{X}_n}}\,|\bar{X}_n - \lambda_0| \geq z_{\alpha/2},$$
where $z_{\alpha/2}$ is the number such that $P(Z \geq z_{\alpha/2}) = \alpha/2$ for a standard normal random variable $Z$.

Solution (Version II): First, observe that
$$\ell_{X_n}''(\lambda) = \frac{\partial^2}{\partial\lambda^2}\left[\sum_{i=1}^n X_i \log\lambda - n\lambda - \sum_{i=1}^n \log(X_i!)\right] = -\frac{n\bar{X}_n}{\lambda^2}.$$
Then
$$I_n(\lambda) = -E_\lambda[\ell_{X_n}''(\lambda)] = \frac{n}{\lambda},$$
so
$$I_n(\hat{\lambda}_n) = \frac{n}{\bar{X}_n}.$$
Then the Wald test rejects $H_0$ if and only if
$$\sqrt{\frac{n}{\bar{X}_n}}\,|\bar{X}_n - \lambda_0| \geq z_{\alpha/2},$$
where $z_{\alpha/2}$ is the number such that $P(Z \geq z_{\alpha/2}) = \alpha/2$ for a standard normal random variable $Z$.


(b) Find the score test of these hypotheses, and state how to choose the critical value to give the test approximate size $\alpha$.
Solution: The score function is
$$\ell_{X_n}'(\lambda) = \frac{\partial}{\partial\lambda}\left[\sum_{i=1}^n X_i \log\lambda - n\lambda - \sum_{i=1}^n \log(X_i!)\right] = \frac{n\bar{X}_n}{\lambda} - n,$$
while the Fisher information is
$$I_n(\lambda) = -E_\lambda[\ell_{X_n}''(\lambda)] = \frac{n}{\lambda}.$$
Then the score test rejects $H_0$ if and only if
$$\sqrt{\frac{\lambda_0}{n}}\,\left|\frac{n\bar{X}_n}{\lambda_0} - n\right| \geq z_{\alpha/2},$$
i.e., if and only if
$$\sqrt{\frac{n}{\lambda_0}}\,|\bar{X}_n - \lambda_0| \geq z_{\alpha/2},$$
where $z_{\alpha/2}$ is the number such that $P(Z \geq z_{\alpha/2}) = \alpha/2$ for a standard normal random variable $Z$.

(c) Find the Wald confidence interval for $\lambda$ with approximate confidence level $1 - \alpha$. (You may use either version of the Wald confidence interval.)
Solution: Using results from part (a), the Wald confidence interval for $\lambda$ with approximate confidence level $1 - \alpha$ is
$$\left\{\lambda_0 : \bar{X}_n - z_{\alpha/2}\sqrt{\frac{\bar{X}_n}{n}} < \lambda_0 < \bar{X}_n + z_{\alpha/2}\sqrt{\frac{\bar{X}_n}{n}}\right\},$$
where $z_{\alpha/2}$ is the number such that $P(Z \geq z_{\alpha/2}) = \alpha/2$ for a standard normal random variable $Z$.
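
A minimal numerical sketch of parts (a)–(c), assuming NumPy and SciPy (for the normal quantile); $\lambda_0$, $\alpha$, $n$, and the seed are arbitrary, and the data are simulated under $H_0$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
lam0, alpha, n = 4.0, 0.05, 200

x = rng.poisson(lam0, size=n)
xbar = x.mean()
z = stats.norm.ppf(1 - alpha / 2)

wald = np.sqrt(n / xbar) * abs(xbar - lam0)    # part (a)
score = np.sqrt(n / lam0) * abs(xbar - lam0)   # part (b)
half = z * np.sqrt(xbar / n)
print(wald >= z, score >= z, (xbar - half, xbar + half))  # part (c)
```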
