0 otherwise. Also,

E_θ(X_1) = 1/(1 + θ),    Var_θ(X_1) = θ/[(1 + θ)²(2 + θ)].
Differentiating the log-likelihood and setting the result equal to zero yields

ℓ′_{X_n}(θ) = ∂/∂θ [n log θ + (θ − 1) ∑_{i=1}^n log(1 − X_i)] = n/θ + ∑_{i=1}^n log(1 − X_i) = 0  ⟺  θ = −n / ∑_{i=1}^n log(1 − X_i).

This point is the only critical point, and it can be seen from the form of the log-likelihood that ℓ_{X_n}(θ) → −∞ both as θ → 0 and as θ → ∞. Then the critical point is indeed the maximum, and

θ̂_n^{MLE} = −n / ∑_{i=1}^n log(1 − X_i).

(The justification for why the critical point is indeed the maximum is not required
for full credit since this fact is fairly obvious by inspection of the log-likelihood.)
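As a quick sanity check (not part of the original solution), the closed-form MLE can be verified by simulation. The sketch below assumes the pdf f(x) = θ(1 − x)^{θ−1} on [0, 1], whose CDF is F(x) = 1 − (1 − x)^θ, so inverse-CDF sampling gives X = 1 − U^{1/θ}; the parameter value and sample size are arbitrary choices for illustration.

```python
import math
import random

# Hypothetical numerical check: sample from f(x) = theta*(1-x)^(theta-1)
# on [0, 1] via the inverse CDF X = 1 - U^(1/theta), then evaluate the
# closed-form MLE derived above.
random.seed(0)
theta = 2.0
n = 20000
# 1 - random.random() lies in (0, 1], so log(1 - x) below is always defined
xs = [1.0 - (1.0 - random.random()) ** (1.0 / theta) for _ in range(n)]

# MLE derived above: theta_hat = -n / sum(log(1 - X_i))
theta_hat = -n / sum(math.log(1.0 - x) for x in xs)
print(theta_hat)  # should be close to theta = 2.0
```

The estimate should land within a few standard errors (roughly θ/√n) of the true value.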
Differentiating again yields

ℓ″_{X_n}(θ) = ∂/∂θ [n/θ + ∑_{i=1}^n log(1 − X_i)] = −n/θ²,

so the Fisher information in the sample is I_n(θ) = n/θ². It follows that the Fisher information per observation is I_1(θ) = 1/θ², and so

√n (θ̂_n − θ) →_D N(0, θ²)

is the asymptotic distribution of the maximum likelihood estimator θ̂_n.
Define X̄_n = (1/n) ∑_{i=1}^n X_i. By the central limit theorem,

√n (X̄_n − 1/(1 + θ)) →_D N[0, θ/((1 + θ)²(2 + θ))].

Now let g(m) = 1/m − 1, so that g(1/(1 + θ)) = θ and g′(m) = −1/m². Then the asymptotic variance of the resulting estimator is

[g′(1/(1 + θ))]² · θ/((1 + θ)²(2 + θ)) = (1 + θ)⁴ · θ/((1 + θ)²(2 + θ)) = θ(1 + θ)²/(2 + θ).

Then

√n (θ̃_n − θ) →_D N[0, θ(1 + θ)²/(2 + θ)]

by the delta method.
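The delta-method variance can be checked against simulation. The sketch below assumes the same distribution as above, the moment estimator θ̃ = 1/X̄ − 1, and hypothetical values θ = 2, n = 400; the acceptance bounds are loose because the variance of √n(θ̃ − θ) only matches θ(1 + θ)²/(2 + θ) asymptotically.

```python
import math
import random

# Hypothetical simulation check of the delta-method variance
# theta*(1+theta)^2/(2+theta) for the moment estimator 1/X_bar - 1.
random.seed(1)
theta = 2.0
n, reps = 400, 1500
scaled = []
for _ in range(reps):
    xs = [1.0 - (1.0 - random.random()) ** (1.0 / theta) for _ in range(n)]
    theta_tilde = 1.0 / (sum(xs) / n) - 1.0
    scaled.append(math.sqrt(n) * (theta_tilde - theta))

mean = sum(scaled) / reps
var = sum((s - mean) ** 2 for s in scaled) / (reps - 1)
target = theta * (1.0 + theta) ** 2 / (2.0 + theta)  # = 4.5 here
print(var, target)
```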
f(x) = θ(1 − θ)ˣ if x ∈ {0, 1, 2, . . .}, and f(x) = 0 otherwise,

and it has mean (1 − θ)/θ. Also, the maximum likelihood estimator of θ is

θ̂_n^{MLE} = n / (n + ∑_{i=1}^n X_i).

You may use these facts without proof.
(a) Find the Fisher information in the sample.
Solution: Differentiating the log-likelihood twice yields

ℓ″_{X_n}(θ) = ∂²/∂θ² [∑_{i=1}^n X_i log(1 − θ) + n log θ] = −∑_{i=1}^n X_i/(1 − θ)² − n/θ².

Then, using E_θ(X_1) = (1 − θ)/θ, the Fisher information in the sample is

I_n(θ) = −E_θ[ℓ″_{X_n}(θ)] = n E_θ(X_1)/(1 − θ)² + n/θ² = n/(θ(1 − θ)) + n/θ² = n/(θ²(1 − θ)).

(Simplification is not necessary for full credit.)
(b) Let θ_0 be fixed and known, where 0 < θ_0 < 1. Find the Wald test of H_0: θ = θ_0 versus H_1: θ ≠ θ_0 and state how to choose the critical value to give the test approximate size α, where 0 < α < 1.
Note: You may use either version of the Wald test.
Solution: Observe that

J_n = −ℓ″_{X_n}(θ̂_n^{MLE}) = ∑_{i=1}^n X_i/(1 − θ̂_n^{MLE})² + n/(θ̂_n^{MLE})² = (n + ∑_{i=1}^n X_i)²/∑_{i=1}^n X_i + (n + ∑_{i=1}^n X_i)²/n = (n + ∑_{i=1}^n X_i)³/(n ∑_{i=1}^n X_i) = I_n(θ̂_n^{MLE}),

so both versions of the Wald test coincide. The Wald test rejects H_0 if and only if

√[(n + ∑_{i=1}^n X_i)³/(n ∑_{i=1}^n X_i)] · |n/(n + ∑_{i=1}^n X_i) − θ_0| ≥ z_{α/2},

where z_{α/2} is the number such that P(Z ≥ z_{α/2}) = α/2 for a standard normal random variable Z.
(c) Let θ_0 be fixed and known, where 0 < θ_0 < 1. Find the score test of H_0: θ = θ_0 versus H_1: θ ≠ θ_0 and state how to choose the critical value to give the test approximate size α, where 0 < α < 1.
Solution: The score function is

ℓ′_{X_n}(θ) = ∂/∂θ [∑_{i=1}^n X_i log(1 − θ) + n log θ] = −∑_{i=1}^n X_i/(1 − θ) + n/θ = [n − θ(n + ∑_{i=1}^n X_i)]/(θ(1 − θ)),

and the score test rejects H_0 if and only if |ℓ′_{X_n}(θ_0)|/√I_n(θ_0) ≥ z_{α/2}, i.e., if and only if

√[θ_0²(1 − θ_0)/n] · |n − θ_0(n + ∑_{i=1}^n X_i)|/(θ_0(1 − θ_0)) ≥ z_{α/2},

where z_{α/2} is the number such that P(Z ≥ z_{α/2}) = α/2 for a standard normal random variable Z.
(d) Find the Wald confidence interval for θ with approximate confidence level 1 − α, where 0 < α < 1.
Note: You may use either version of the Wald confidence interval.
Solution: Since I_n(θ̂_n^{MLE}) = J_n, both versions of the Wald confidence interval are the same, namely

n/(n + ∑_{i=1}^n X_i) − z_{α/2} √[n ∑_{i=1}^n X_i/(n + ∑_{i=1}^n X_i)³] < θ < n/(n + ∑_{i=1}^n X_i) + z_{α/2} √[n ∑_{i=1}^n X_i/(n + ∑_{i=1}^n X_i)³],

intersected with the parameter space (0, 1), where z_{α/2} is the number such that P(Z ≥ z_{α/2}) = α/2 for a standard normal random variable Z.
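The algebraic identity J_n = (n + ∑Xᵢ)³/(n ∑Xᵢ) used above can be checked mechanically. The sketch below uses a small made-up data set; both forms of J_n should agree exactly, and the interval shown is the resulting Wald interval for a hypothetical α = 0.05.

```python
import math

# Hypothetical data; checks that the simplified expression
# (n + S)^3 / (n * S) agrees with J_n = sum(X_i)/(1-th)^2 + n/th^2
# evaluated at the MLE th = n/(n + S), and builds the Wald interval.
xs = [0, 2, 1, 4, 0, 3, 1, 1, 0, 2]
n, S = len(xs), sum(xs)
th = n / (n + S)                      # MLE
jn_direct = S / (1 - th) ** 2 + n / th ** 2
jn_closed = (n + S) ** 3 / (n * S)
z = 1.96                              # approx z_{alpha/2} for alpha = 0.05
half = z / math.sqrt(jn_closed)
ci = (th - half, th + half)
print(jn_direct, jn_closed, ci)
```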
3. Let X_1, . . . , X_n be iid random variables such that E_{μ,σ²}(X_1) = μ and Var_{μ,σ²}(X_1) = σ² are both finite. However, suppose that X_1, . . . , X_n are not normally distributed. Now define

X̄_n = (1/n) ∑_{i=1}^n X_i.
Let X_1, . . . , X_n be iid random variables with pdf

f(x) = 2θx exp(−θx²) if x ≥ 0, and f(x) = 0 if x < 0,

where θ > 0 is unknown. Suppose we assign a Gamma(a, b) prior to θ, where a > 0 and b > 0 are known.
Note: The Gamma(a, b) distribution has pdf

f(x) = [bᵃ/Γ(a)] x^{a−1} exp(−bx) if x > 0, and f(x) = 0 if x ≤ 0,

and its mean is a/b. You may use these facts without proof.
(a) Find the posterior distribution of θ.
Solution: Ignoring factors that do not depend on θ, the posterior is

π(θ | x_n) ∝ [∏_{i=1}^n 2θx_i exp(−θx_i²)] · θ^{a−1} exp(−bθ) ∝ θ^{a+n−1} exp[−(b + ∑_{i=1}^n x_i²)θ],

which we recognize as the unnormalized pdf of a Gamma(a + n, b + ∑_{i=1}^n x_i²) distribution. Thus, θ | x_n ∼ Gamma(a + n, b + ∑_{i=1}^n x_i²).
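The conjugate update can be exercised numerically. The sketch below assumes hypothetical prior parameters and a true θ chosen for illustration; since F(x) = 1 − exp(−θx²), inverse-CDF sampling gives X = √(−log(1 − U)/θ), and the posterior mean (a + n)/(b + ∑xᵢ²) should concentrate near the true θ as n grows.

```python
import math
import random

# Hypothetical simulation: draw from f(x) = 2*theta*x*exp(-theta*x^2)
# via X = sqrt(-log(1 - U)/theta), then form the Gamma posterior mean.
random.seed(2)
theta, a, b, n = 1.5, 2.0, 1.0, 5000
xs = [math.sqrt(-math.log(1.0 - random.random()) / theta) for _ in range(n)]
post_mean = (a + n) / (b + sum(x * x for x in xs))
print(post_mean)  # close to theta for large n
```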
Let X_1, . . . , X_n be iid random variables with pmf

f(x) = θˣ exp(−θ)/x! if x ∈ {0, 1, 2, . . .}, and f(x) = 0 otherwise,

and its mean and variance are both θ. Also, the maximum likelihood estimator of θ is

θ̂_n^{MLE} = (1/n) ∑_{i=1}^n X_i.

You may use these facts without proof.
(a) Let θ_0 > 0 be fixed and known. Find the likelihood ratio test of H_0: θ = θ_0 versus H_1: θ ≠ θ_0. (You do not need to state how to choose the critical value for this part of the question.)
Solution: Evaluating the likelihood at θ = θ_0 and at θ = θ̂_n^{MLE} yields

L_{X_n}(θ_0) = θ_0^{∑_{i=1}^n X_i} exp(−nθ_0) / ∏_{i=1}^n X_i!,

L_{X_n}(θ̂_n^{MLE}) = (n⁻¹ ∑_{i=1}^n X_i)^{∑_{i=1}^n X_i} exp[−n(n⁻¹ ∑_{i=1}^n X_i)] / ∏_{i=1}^n X_i!.

Then

Λ(X_n) = L_{X_n}(θ_0)/L_{X_n}(θ̂_n^{MLE}) = (nθ_0/∑_{i=1}^n X_i)^{∑_{i=1}^n X_i} exp(∑_{i=1}^n X_i − nθ_0),

and the likelihood ratio test rejects H_0 if and only if Λ(X_n) ≤ k for some critical
value k ≥ 0. (Simplification is not necessary for full credit.)
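The simplification of Λ(X_n) can be verified on made-up data by comparing it against the ratio of Poisson likelihoods computed directly (on the log scale, with `math.lgamma` standing in for the factorials). The data and θ_0 below are arbitrary illustrations.

```python
import math

# Hypothetical check: the simplified statistic
# (n*th0/S)^S * exp(S - n*th0) matches L(th0)/L(MLE) computed
# directly from the Poisson pmf (log scale, lgamma for factorials).
xs = [2, 0, 3, 1, 1, 4, 2, 2]
th0 = 1.5
n, S = len(xs), sum(xs)

def loglik(th):
    return sum(x * math.log(th) - th - math.lgamma(x + 1) for x in xs)

log_lambda_direct = loglik(th0) - loglik(S / n)
log_lambda_closed = S * math.log(n * th0 / S) + S - n * th0
print(log_lambda_direct, log_lambda_closed)
```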
(b) State how the critical value of the likelihood ratio test in part (a) can be chosen to give the test approximate size α.
Solution: To give the likelihood ratio test in part (a) approximate size α, we reject H_0 if and only if

−2 log Λ(X_n) ≥ w_α,

or equivalently, if and only if

Λ(X_n) ≤ exp(−w_α/2),

where w_α is the number such that P(W ≥ w_α) = α for a random variable W ∼ χ²₁.
Let X be a single random variable with pdf

f(x) = exp(−(x − θ)) / [1 + exp(−(x − θ))]² = (1/4) sech²((x − θ)/2)

and cdf

F_θ(x) = 1 / [1 + exp(−(x − θ))],

where θ ∈ R is unknown.
Note: The maximum likelihood estimator of θ is θ̂^{MLE} = X. Also, sech(−t) = sech(t) for all t ∈ R, and sech(t) is a strictly decreasing function of t for t ≥ 0. You may use these facts
without proof.
(a) Show that the likelihood ratio test of H_0: θ = 0 versus H_1: θ ≠ 0 rejects H_0 if and only if |X| ≥ c for some critical value c. (You do not need to state how to choose the critical value for this part of the question.)
Solution: Evaluating the likelihood at θ = 0 and at θ = θ̂^{MLE} yields

L_X(0) = exp(−X)/[1 + exp(−X)]² = (1/4) sech²(X/2),

L_X(X) = 1/(1 + 1)² = 1/4.

Then

Λ(X) = L_X(0)/L_X(X) = 4 exp(−X)/[1 + exp(−X)]² = sech²(X/2),

and we reject H_0 if and only if Λ(X) ≤ k for some critical value k ≥ 0. The note tells
us that sech(−X/2) = sech(X/2) and that sech(X/2) is a strictly decreasing function
of X/2 for X ≥ 0, so rejecting H_0 if and only if sech²(X/2) ≤ k is equivalent to rejecting H_0 if
and only if |X| ≥ c for some c.
(b) State how the critical value c of the likelihood ratio test in part (a) can be chosen to give the test size α (exactly, not just approximately), where 0 < α < 1.
Solution: The test has size α if and only if

α = P_{θ=0}(|X| ≥ c) = P_{θ=0}(X ≤ −c) + P_{θ=0}(X ≥ c) = F_0(−c) + 1 − F_0(c)
= 1/(1 + exp(c)) + 1 − 1/(1 + exp(−c))
= 2/(1 + exp(c)).

Then we choose c = log(2α⁻¹ − 1) to achieve size α. (Simplifying and solving for c in
terms of α are not necessary for full credit.)
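That c = log(2α⁻¹ − 1) delivers the exact size can be checked directly from the logistic cdf. The α below is a hypothetical choice for illustration.

```python
import math

# Hypothetical check that c = log(2/alpha - 1) gives exact size alpha:
# under theta = 0, P(|X| >= c) = F0(-c) + 1 - F0(c) with the logistic
# CDF F0(x) = 1/(1 + exp(-x)).
def F0(x):
    return 1.0 / (1.0 + math.exp(-x))

alpha = 0.05
c = math.log(2.0 / alpha - 1.0)
size = F0(-c) + 1.0 - F0(c)
print(size)  # equals alpha up to rounding
```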
(c) For the likelihood ratio test with size α in parts (a) and (b), find the probability of a type II error if the true value of θ happens to be θ ≠ 0.
Solution: Since θ ≠ 0,

P_θ(type II error) = P_θ(|X| < c) = P_θ(−c < X < c) = F_θ(c) − F_θ(−c)
= 1/(1 + exp(θ − c)) − 1/(1 + exp(θ + c))
= 1/[1 + (2α⁻¹ − 1)⁻¹ exp(θ)] − 1/[1 + (2α⁻¹ − 1) exp(θ)].

(Inserting the value of c is not necessary for full credit.)
(d) Suppose we observe X = xobs . Find the p-value of the likelihood ratio test for the
observed data xobs .
Note: Be sure your answer is correct for both positive and negative values of xobs .
Solution: The p-value is

p(x_obs) = P_{θ=0}(|X| ≥ |x_obs|) = P_{θ=0}(X ≤ −|x_obs|) + P_{θ=0}(X ≥ |x_obs|)
= F_0(−|x_obs|) + 1 − F_0(|x_obs|)
= 1/(1 + exp(|x_obs|)) + 1 − 1/(1 + exp(−|x_obs|))
= 2/(1 + exp(|x_obs|)).

(Simplification is not necessary for full credit.)
7. Suppose that we call a hypothesis test trivial if its rejection region is either the empty
set or the entire sample space, i.e., a trivial test is a test that either never rejects H0
or always rejects H_0. Now let X ∼ Bin(n, θ), where θ is unknown, and consider testing
H_0: θ = 1/2 versus H_1: θ ≠ 1/2. Find a necessary and sufficient condition (in terms of n)
for the existence of a test of these hypotheses with level α = 0.05 = 1/20 that is not trivial.
Solution: A test of these hypotheses with rejection region R has level 0.05 if and only
if P_{θ=1/2}(X ∈ R) ≤ 0.05. For such a test to be nontrivial, the rejection region R cannot
be the empty set. Now observe that the points in the sample space with the smallest
probability when θ = 1/2 are X = 0 and X = n, each of which has probability 1/2ⁿ when
θ = 1/2. Hence, there exists a nonempty rejection region R with P_{θ=1/2}(X ∈ R) ≤ 0.05 if
and only if 1/2ⁿ ≤ 0.05. This inequality holds if and only if n ≥ 5. Hence, there exists a
nontrivial test of these hypotheses with level α = 0.05 if and only if n ≥ 5. Also, since it is
clear that n must be a positive integer for the question to make sense, any condition that
is equivalent to n ≥ 5 when applied to the positive integers, such as n ≥ (log 20)/(log 2),
is also acceptable.
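The threshold condition is easy to tabulate. The sketch below simply evaluates 1/2ⁿ ≤ 0.05 for the first few positive integers; n = 5 is the first value where a nonempty level-0.05 rejection region becomes possible.

```python
# Check of the condition 1/2^n <= 0.05: the smallest rejection-region
# probability under theta = 1/2 is 1/2^n (attained at X = 0 or X = n),
# so a nontrivial level-0.05 test exists iff n >= 5.
results = {n: (0.5 ** n <= 0.05) for n in range(1, 11)}
print(results)
```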
Let X_1, . . . , X_n be iid random variables with pdf

f(x) = exp(−x) if x ≥ 0, and f(x) = 0 if x < 0,

and cdf

F(x) = 1 − exp(−x) if x ≥ 0, and F(x) = 0 if x < 0.

Then

P(max_{1≤i≤n} X_i − a_n ≤ t) = P(X_i ≤ t + a_n for all i = 1, . . . , n) = [P(X_1 ≤ t + a_n)]ⁿ = [F(t + a_n)]ⁿ = [1 − exp(−t − a_n)]ⁿ if t + a_n ≥ 0, and 0 if t + a_n < 0.
Now let a_n = log n. Then for every t ∈ R, t + a_n ≥ 0 for all sufficiently large n. Then for
every t ∈ R,

[F(t + log n)]ⁿ = [1 − exp(−t)/n]ⁿ → exp(−exp(−t)) as n → ∞.

Let X_1, . . . , X_n be iid random variables with pdf

f(x) = (2πx³)^{−1/2} exp[−(x − θ)²/(2θ²x)] if x > 0, and f(x) = 0 if x ≤ 0,

where θ > 0 is unknown.
Solution: The log-likelihood is

ℓ_{x_n}(θ) = ∑_{i=1}^n [−(1/2) log(2π) − (3/2) log x_i − (x_i − θ)²/(2θ²x_i)] = −(n/2) log(2π) − (3/2) ∑_{i=1}^n log x_i − (1/(2θ²)) ∑_{i=1}^n x_i + n/θ − (1/2) ∑_{i=1}^n 1/x_i.

Differentiating and setting the result equal to zero yields

ℓ′_{x_n}(θ) = −n/θ² + (1/θ³) ∑_{i=1}^n x_i = 0  ⟺  θ = (1/n) ∑_{i=1}^n x_i = x̄_n.
Although it is not necessary to obtain full credit, we technically should now verify
that this critical point does indeed maximize the likelihood since it is not immediately
clear that this is the case. (In particular, it is not obvious what the likelihood does
as θ → ∞.) However, the second derivative at the critical point is

ℓ″_{x_n}(x̄_n) = 2n/(x̄_n)³ − 3n x̄_n/(x̄_n)⁴ = −n/(x̄_n)³ < 0,

so the critical point is indeed a maximum.
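Both claims (the score vanishes at x̄ and the second derivative is negative there) can be confirmed on made-up data:

```python
# Hypothetical numerical check with made-up data: the score
# -n/th^2 + sum(x_i)/th^3 vanishes at th = x_bar, and the second
# derivative 2n/th^3 - 3*sum(x_i)/th^4 is negative there.
xs = [0.8, 1.3, 2.1, 0.5, 1.7, 1.1]
n, S = len(xs), sum(xs)
xbar = S / n

def score(th):
    return -n / th ** 2 + S / th ** 3

def second(th):
    return 2 * n / th ** 3 - 3 * S / th ** 4

print(score(xbar), second(xbar))  # score ~0, second derivative negative
```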
(b) Let θ_0 > 0 be fixed and known. Find the likelihood ratio test of H_0: θ = θ_0 versus H_1: θ ≠ θ_0. (You do not need to state how to choose the critical value to give the test a particular size in this part of the question.)
Solution: The likelihood ratio statistic is

Λ(X) = L_X(θ_0)/L_X(θ̂_n) = {∏_{i=1}^n (2πX_i³)^{−1/2} exp[−(X_i − θ_0)²/(2θ_0²X_i)]} / {∏_{i=1}^n (2πX_i³)^{−1/2} exp[−(X_i − X̄_n)²/(2X̄_n²X_i)]}
= exp[−nX̄_n/(2θ_0²) + n/θ_0 − n/(2X̄_n)] = exp[−(n/(2X̄_n))(X̄_n/θ_0 − 1)²],

and the likelihood ratio test rejects H_0 if and only if Λ(X) ≤ k for some critical value k ≥ 0.
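The final factoring step in the exponent can be checked numerically with arbitrary hypothetical values of n, X̄, and θ_0:

```python
# Hypothetical check that the two forms of the log likelihood ratio agree:
# -n*xbar/(2*th0^2) + n/th0 - n/(2*xbar)  vs  -(n/(2*xbar))*(xbar/th0 - 1)^2.
n, xbar, th0 = 12, 1.8, 1.2
form1 = -n * xbar / (2 * th0 ** 2) + n / th0 - n / (2 * xbar)
form2 = -(n / (2 * xbar)) * (xbar / th0 - 1) ** 2
print(form1, form2)
```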
(c) State how the critical value of the likelihood ratio test in part (b) can be chosen to give the test approximate size α.
Solution: To give the likelihood ratio test in part (b) approximate size α, we reject H_0 if and only if

−2 log Λ(X_n) ≥ w_α,

or equivalently, if and only if

Λ(X_n) ≤ exp(−w_α/2),

where w_α is the number such that P(W ≥ w_α) = α for a random variable W ∼ χ²₁.
(d) Explain how the likelihood ratio test in part (b) would change if the alternative hypothesis were H_1: θ < θ_0 instead.
Solution: The MLE must now be found over the restricted parameter space (0, θ_0] rather than over θ > 0. Thus, if X̄_n > θ_0, the MLE is instead θ̂ = θ_0, which yields Λ(X) = 1. If instead X̄_n ≤ θ_0, then there is no change.
Let X_1, . . . , X_n be iid random variables with pdf

f(x) = θ(θ + 1)/(θ + x)² if 0 ≤ x ≤ 1, and f(x) = 0 otherwise,

where θ > 0 is unknown. It can be shown that the Fisher information in the sample is

I_n(θ) = n/[3θ²(θ + 1)²].

Use this fact to find (or simply state) the asymptotic distribution of the maximum likelihood estimator θ̂_n of θ.
Note: There is no need to actually find the form of θ̂_n or to verify the result for the Fisher information. Also, you may assume that the regularity conditions of Section 7.4 of the notes hold.
Solution: By the standard asymptotic result for maximum likelihood estimators,

√n (θ̂_n − θ) →_D N(0, 1/I_1(θ)) = N(0, 3θ²(θ + 1)²).
Let X_1, . . . , X_n be iid random variables with pdf

f(x) = θ/(θ + x)² if x ≥ 0, and f(x) = 0 if x < 0,

where θ > 0 is unknown. Let θ̂_n denote the maximum likelihood estimator of θ (which
you do not need to find).
Note: It can be shown by simple calculus that

E_θ(X_1) = E_θ(1/X_1) = ∞,  E_θ[1/(θ + X_1)] = 1/(2θ),  E_θ[1/(θ + X_1)²] = 1/(3θ²),

so you may use any of these facts without proof. You may also assume that the relevant
regularity conditions (i.e., those of Section 7.4 of the notes) are satisfied.
(a) Find the score function for the sample and show explicitly that its expectation is
zero (i.e., do not simply cite the result from the notes that says that the expectation
is zero).
Solution: The score function is (for θ > 0)

ℓ′_{X_n}(θ) = ∂/∂θ ∑_{i=1}^n [log θ − 2 log(θ + X_i)] = n/θ − 2 ∑_{i=1}^n 1/(θ + X_i).

Then clearly E_θ[ℓ′_{X_n}(θ)] = n/θ − 2n/(2θ) = 0 since E_θ[1/(θ + X_i)] = 1/(2θ) for each i ∈ {1, . . . , n}.
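The zero-mean property of the score can also be seen by Monte Carlo. The sketch below assumes inverse-CDF sampling: since F(x) = 1 − θ/(θ + x), we get X = θU/(1 − U); the θ and sample size are hypothetical.

```python
import random

# Hypothetical Monte Carlo check: with F(x) = 1 - th/(th + x), inverse
# sampling gives X = th*U/(1 - U).  The per-observation score
# 1/th - 2/(th + X) should then average to roughly zero.
random.seed(3)
th = 2.0
n = 50000
scores = []
for _ in range(n):
    u = random.random()          # in [0, 1), so 1 - u > 0
    x = th * u / (1.0 - u)
    scores.append(1.0 / th - 2.0 / (th + x))
mean_score = sum(scores) / n
print(mean_score)  # near 0
```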
Differentiating again for a single observation yields

ℓ″_{X_1}(θ) = ∂²/∂θ² [log θ − 2 log(θ + X_1)] = −1/θ² + 2/(θ + X_1)².

Then

I_1(θ) = −E_θ[ℓ″_{X_1}(θ)] = 1/θ² − 2 E_θ[1/(θ + X_1)²] = 1/θ² − 2/(3θ²) = 1/(3θ²),

and thus

I_n(θ) = n I_1(θ) = n/(3θ²)

is the Fisher information for the sample. Then

√n (θ̂_n − θ) →_D N(0, 3θ²)

by the standard result from the notes.
13. Lemma 7.2.1 of the notes states that E_θ[ℓ′_{X_n}(θ)] = 0 for all θ in the parameter space Θ.
This result uses the regularity condition that X_n = (X_1, . . . , X_n) is an iid sample. Now
suppose that we were to remove the condition of independence while keeping all other
regularity conditions in place. Explain why the result that E_θ[ℓ′_{X_n}(θ)] = 0 for all θ ∈ Θ
would still be true.
Solution: If X_1, . . . , X_n are not independent, then the log-likelihood is

ℓ_{X_n}(θ) = log[L_{X_1}(θ) L_{X_2|X_1}(θ) L_{X_3|X_1,X_2}(θ) ⋯ L_{X_n|X_1,...,X_{n−1}}(θ)] = ∑_{i=1}^n ℓ_{X_i|X_1,...,X_{i−1}}(θ),

where L_{X_i|X_1,...,X_{i−1}}(θ) is simply the conditional pdf or pmf of X_i given X_1, . . . , X_{i−1}. This
is still a valid pdf or pmf, so its score still has the property that E_θ[ℓ′_{X_i|X_1,...,X_{i−1}}(θ)] = 0. Thus,
we still obtain the result that E_θ[ℓ′_{X_n}(θ)] = 0.
14. Let X_1, . . . , X_n be iid random variables from a distribution that depends on an unknown
parameter θ ∈ R. This distribution has the following properties:

E_θ(X_1) = 2 exp(θ),  E_θ(log X_1) = θ,  E_θ(X_1⁻¹) = 2 exp(−θ).

Now consider the estimator

θ̂_n^{(2)} = log[(∏_{i=1}^n X_i)^{1/n}] = (1/n) ∑_{i=1}^n log X_i.

Then by the central limit theorem,

√n [θ̂_n^{(2)} − θ] →_D N(0, 2 log 2).

Thus, v^{(2)}(θ) = 2 log 2 for all θ.
(c) Find ARE_θ[θ̂_n^{(1)}, θ̂_n^{(2)}], the asymptotic relative efficiency of θ̂_n^{(1)} compared to θ̂_n^{(2)}, and
use it to state which of the two estimators is better for large n.
Solution to (c): ARE_θ[θ̂_n^{(1)}, θ̂_n^{(2)}] = v^{(2)}(θ)/v^{(1)}(θ) = (2 log 2)/3 for all θ ∈ R.
Note that log 2 < log e = 1, so 2 log 2 < 2 < 3. Thus, θ̂_n^{(2)} is better than θ̂_n^{(1)} for large n.
Let X_1, . . . , X_n be iid random variables with pmf

p_θ(x) = [(x + k − 1)!/(x! (k − 1)!)] θᵏ (1 − θ)ˣ if x ∈ {0, 1, 2, . . .}, and p_θ(x) = 0 otherwise,

where 0 < θ < 1 is unknown. The log-likelihood is

ℓ_{x_n}(θ) = ∑_{i=1}^n {k log θ + x_i log(1 − θ) + log[(x_i + k − 1)!/(x_i! (k − 1)!)]},

and differentiating and setting the result equal to zero yields

ℓ′_{x_n}(θ) = nk/θ − (1/(1 − θ)) ∑_{i=1}^n x_i = 0  ⟺  (1 − θ)nk = θ ∑_{i=1}^n x_i  ⟺  θ = nk/(nk + ∑_{i=1}^n x_i).

It is clear from the form of the likelihood that this point is indeed a maximum. Thus,
the maximum likelihood estimator of θ is

θ̂_n = nk/(nk + ∑_{i=1}^n X_i)

as long as X_i > 0 for some i ∈ {1, . . . , n}. (Otherwise the MLE does not exist, but
the question says we can ignore this possibility.)
Differentiating twice for a single observation yields

ℓ″_{X_1}(θ) = ∂/∂θ [k/θ − X_1/(1 − θ)] = −k/θ² − X_1/(1 − θ)²,

and thus, using E_θ(X_1) = k(1 − θ)/θ,

I_1(θ) = −E_θ[ℓ″_{X_1}(θ)] = k/θ² + E_θ(X_1)/(1 − θ)² = k/θ² + k/(θ(1 − θ)) = k/(θ²(1 − θ)).

Then

√n (θ̂_n − θ) →_D N[0, θ²(1 − θ)/k],

assuming the various regularity conditions.
Let X be a single random variable with pdf

f(x) = 1/{π[1 + (x − θ)²]},

where θ ∈ R is unknown.
Solution: Since the MLE is θ̂ = X and L_X(X) = 1/π, the likelihood ratio statistic is

Λ(X) = L_X(θ_0)/L_X(θ̂) = 1/[1 + (X − θ_0)²],

and we reject H_0 if and only if Λ(X) ≤ k for some k. Now note that Λ(X) ≤ k is
equivalent to |X − θ_0| ≥ c for some c ≥ 0.
(b) Let 0 < α < 1. Find the value of c_α such that the test in part (a) has size α (exactly).
Note: The inverse of the arctan function is simply the tan function.
Solution: The test has size α if and only if

α = P_{θ_0}(|X − θ_0| ≥ c_α) = P_{θ_0}(X ≤ θ_0 − c_α) + P_{θ_0}(X ≥ θ_0 + c_α)
= F_{θ_0}(θ_0 − c_α) + 1 − F_{θ_0}(θ_0 + c_α)
= [1/2 − (1/π) arctan(c_α)] + 1 − [1/2 + (1/π) arctan(c_α)]
= 1 − (2/π) arctan(c_α).

Thus,

c_α = tan[(1 − α)π/2]

gives the test size α (exactly).
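The exactness of this critical value can be confirmed from the standard Cauchy cdf F(x) = 1/2 + arctan(x)/π. The α below is a hypothetical choice.

```python
import math

# Hypothetical check: with the standard Cauchy CDF
# F(x) = 1/2 + arctan(x)/pi, the choice c = tan((1 - alpha)*pi/2)
# makes P(|X - th0| >= c) equal to alpha exactly.
def F(x):
    return 0.5 + math.atan(x) / math.pi

alpha = 0.1
c = math.tan((1.0 - alpha) * math.pi / 2.0)
size = F(-c) + 1.0 - F(c)
print(size)  # equals alpha up to rounding
```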
17. Let X be a single random variable with a continuous uniform distribution on [0, θ], where
θ > 0 is unknown. Consider testing H_0: θ = 1 versus H_1: θ ≠ 1.
Note: The maximum likelihood estimator of θ is θ̂ = X. You may use this fact without
proof.
(a) Let 0 < α < 1. Find the likelihood ratio test of these hypotheses, and find the critical
value that gives the test size α.
Solution: Evaluating the likelihood at θ = 1 yields

L_X(1) = 1 if X ≤ 1, and L_X(1) = 0 if X > 1,

while L_X(θ̂) = L_X(X) = 1/X. Then

Λ(X) = X if X ≤ 1, and Λ(X) = 0 if X > 1,

and we reject H_0 if and only if Λ(X) ≤ k for some critical value k. To give the test
size α, we must choose k so that α = P_{θ=1}[Λ(X) ≤ k]. Observe that if θ = 1, then X
and Λ(X) are both uniformly distributed on [0, 1]. Then P_{θ=1}[Λ(X) ≤ k] = k. Thus,
we take k = α to give the test size α.
(b) Find values c_1 > 0 and c_2 > 0 such that the likelihood ratio test with size α in part (a)
takes the form "Reject H_0 if and only if either X ≤ c_1 or X > c_2."
Solution: Observe that Λ(X) ≤ k = α if and only if either X ≤ α or X > 1. Thus,
c_1 = α and c_2 = 1.
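Under θ = 1 the event X > 1 has probability zero, so the rejection probability reduces to P(X ≤ α) = α. A quick Monte Carlo confirmation with a hypothetical α:

```python
import random

# Hypothetical Monte Carlo check: under theta = 1, X ~ Uniform[0, 1],
# so rejecting when X <= alpha or X > 1 should occur with
# probability alpha.
random.seed(4)
alpha = 0.05
reps = 100000
rejections = sum(1 for _ in range(reps)
                 if (x := random.random()) <= alpha or x > 1.0)
rate = rejections / reps
print(rate)  # near alpha
```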
(c) Give two reasons why it would not be appropriate to choose the critical value for
part (a) based on the result that −2 log Λ has an approximate χ²₁ distribution.
Solution: First, this result depends on various regularity conditions that are
not satisfied since the support of the distribution of X depends on the unknown
parameter θ. Second, this approximation is based on an asymptotic result, which
means it holds for large n. However, here n = 1.
18. Let X_1, . . . , X_n be iid random variables with pmf

p_θ(x) = θˣ exp(−θ)/x! if x ∈ {0, 1, 2, . . .}, and p_θ(x) = 0 otherwise,

where θ > 0 is unknown. Then let θ_0 > 0 be fixed and known, and consider testing
H_0: θ = θ_0 versus H_1: θ ≠ θ_0.
Note: The MLE of θ is θ̂_n = X̄_n, and E_θ(X_1) = θ. You may use these facts without proof.
(a) Find the Wald test of these hypotheses, and state how to choose the critical value
to give the test approximate size . (You may use either version of the Wald test.)
Solution (Version I): First, observe that
`Xn () =
n
2 n
nX n
[
X
log
log(Xi !)] = 2 .
i
2
i=1
i=1
Then
n) =
Jn = `Xn (
n
.
Xn
n
X n 0 z/2 ,
Xn
where z/2 is the number such that P (Z z/2 ) = for a standard normal random
variable Z.
Solution (Version II): As in Version I,

ℓ″_{X_n}(θ) = ∂²/∂θ² [∑_{i=1}^n X_i log θ − nθ − ∑_{i=1}^n log(X_i!)] = −nX̄_n/θ².

Then

I_n(θ) = −E_θ[ℓ″_{X_n}(θ)] = n/θ,

so

I_n(θ̂_n) = n/X̄_n.

Then the Wald test rejects H_0 if and only if

√(n/X̄_n) |X̄_n − θ_0| ≥ z_{α/2},

where z_{α/2} is the number such that P(Z ≥ z_{α/2}) = α/2 for a standard normal random
variable Z.
(b) Find the score test of these hypotheses, and state how to choose the critical value to
give the test approximate size α.
Solution: The score function is

ℓ′_{X_n}(θ) = ∂/∂θ [∑_{i=1}^n X_i log θ − nθ − ∑_{i=1}^n log(X_i!)] = nX̄_n/θ − n,

and the Fisher information in the sample is I_n(θ) = n/θ. Then the score test rejects H_0 if and only if

(1/√(n/θ_0)) |nX̄_n/θ_0 − n| ≥ z_{α/2},

i.e., if and only if

√(n/θ_0) |X̄_n − θ_0| ≥ z_{α/2},

where z_{α/2} is the number such that P(Z ≥ z_{α/2}) = α/2 for a standard normal random
variable Z.
(c) Find the Wald confidence interval for θ with approximate confidence level 1 − α.
(You may use either version of the Wald confidence interval.)
Solution: Using results from part (a), the Wald confidence interval for θ with
approximate confidence level 1 − α is

X̄_n − z_{α/2} √(X̄_n/n) < θ < X̄_n + z_{α/2} √(X̄_n/n),

where z_{α/2} is the number such that P(Z ≥ z_{α/2}) = α/2 for a standard normal random
variable Z.
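The approximate coverage of this interval can be assessed by simulation. The sketch below uses Knuth's product-of-uniforms method to draw Poisson variates with the standard library only; the true θ, sample size, and number of replications are hypothetical choices, and the observed coverage should be near the nominal 95%.

```python
import math
import random

# Hypothetical coverage simulation for the interval
# xbar +/- z * sqrt(xbar/n), using Knuth's product-of-uniforms
# method to draw Poisson variates with stdlib only.
random.seed(5)

def poisson(lam):
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

theta, n, z, reps = 3.0, 200, 1.96, 500
covered = 0
for _ in range(reps):
    xbar = sum(poisson(theta) for _ in range(n)) / n
    half = z * math.sqrt(xbar / n)
    if xbar - half < theta < xbar + half:
        covered += 1
coverage = covered / reps
print(coverage)  # near 0.95
```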