
updated: 11/23/00, 1/12/03 (answer to Q7 of Section 1.3 added)
Hayashi Econometrics: Answers to Selected Review Questions
Chapter 1
Section 1.1
1. The intercept is increased by log(100).
2. Since $(\varepsilon_i, x_i)$ is independent of $(\varepsilon_j, x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n)$ for $i \neq j$, we have $E(\varepsilon_i \mid X, \varepsilon_j) = E(\varepsilon_i \mid x_i)$. So
\[
\begin{aligned}
E(\varepsilon_i \varepsilon_j \mid X)
&= E[E(\varepsilon_i \varepsilon_j \mid X, \varepsilon_j) \mid X] &&\text{(by the Law of Iterated Expectations)}\\
&= E[\varepsilon_j\, E(\varepsilon_i \mid X, \varepsilon_j) \mid X] &&\text{(by linearity of conditional expectations)}\\
&= E[\varepsilon_j\, E(\varepsilon_i \mid x_i) \mid X]\\
&= E(\varepsilon_i \mid x_i)\, E(\varepsilon_j \mid x_j).
\end{aligned}
\]
The last equality follows from the linearity of conditional expectations because $E(\varepsilon_i \mid x_i)$ is a function of $x_i$.
3.
\[
\begin{aligned}
E(y_i \mid X) &= E(x_i'\beta + \varepsilon_i \mid X) &&\text{(by Assumption 1.1)}\\
&= x_i'\beta + E(\varepsilon_i \mid X) &&\text{(since $x_i$ is included in $X$)}\\
&= x_i'\beta &&\text{(by Assumption 1.2).}
\end{aligned}
\]
Conversely, suppose $E(y_i \mid X) = x_i'\beta$ ($i = 1, 2, \ldots, n$). Define $\varepsilon_i \equiv y_i - E(y_i \mid X)$. Then by construction Assumption 1.1 is satisfied: $\varepsilon_i = y_i - x_i'\beta$. Assumption 1.2 is satisfied because
\[
\begin{aligned}
E(\varepsilon_i \mid X) &= E(y_i \mid X) - E[E(y_i \mid X) \mid X] &&\text{(by definition of $\varepsilon_i$ here)}\\
&= 0 &&\text{(since $E[E(y_i \mid X) \mid X] = E(y_i \mid X)$).}
\end{aligned}
\]
4. Because of the result in the previous review question, what needs to be verified is Assumption 1.4 and that $E(\mathrm{CON}_i \mid \mathrm{YD}_1, \ldots, \mathrm{YD}_n) = \beta_1 + \beta_2\,\mathrm{YD}_i$. That the latter holds is clear from the i.i.d. assumption and the hint. From the discussion in the text on random samples, Assumption 1.4 is equivalent to the condition that $E(\varepsilon_i^2 \mid \mathrm{YD}_i)$ is a constant, where $\varepsilon_i \equiv \mathrm{CON}_i - \beta_1 - \beta_2\,\mathrm{YD}_i$.
\[
\begin{aligned}
E(\varepsilon_i^2 \mid \mathrm{YD}_i) &= \operatorname{Var}(\varepsilon_i \mid \mathrm{YD}_i) &&\text{(since $E(\varepsilon_i \mid \mathrm{YD}_i) = 0$)}\\
&= \operatorname{Var}(\mathrm{CON}_i \mid \mathrm{YD}_i).
\end{aligned}
\]
This is a constant since $(\mathrm{CON}_i, \mathrm{YD}_i)$ is jointly normal.
5. If $x_{i2} = x_{j2}$ for all $i, j$, then the rank of $X$ would be one.
6. By the Law of Total Expectations, Assumption 1.4 implies
\[
E(\varepsilon_i^2) = E[E(\varepsilon_i^2 \mid X)] = E[\sigma^2] = \sigma^2.
\]
Similarly for $E(\varepsilon_i \varepsilon_j)$.
Section 1.2
5. (b)
\[
\begin{aligned}
e'e &= (M\varepsilon)'(M\varepsilon)\\
&= \varepsilon' M' M \varepsilon &&\text{(recall from matrix algebra that $(AB)' = B'A'$)}\\
&= \varepsilon' M M \varepsilon &&\text{(since $M$ is symmetric)}\\
&= \varepsilon' M \varepsilon &&\text{(since $M$ is idempotent).}
\end{aligned}
\]
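A quick numerical check (an illustration added here, not part of the original answer): with simulated data in which the true $\beta$ and $\varepsilon$ are known, the OLS residual vector satisfies $e = M\varepsilon$, and hence $e'e = \varepsilon'M\varepsilon$. A minimal sketch in Python/NumPy:

    # Sketch: verify e = M*eps and e'e = eps'*M*eps on simulated data.
    import numpy as np

    rng = np.random.default_rng(0)
    n, K = 50, 3
    X = rng.normal(size=(n, K))
    beta = np.array([1.0, -2.0, 0.5])          # true coefficients (chosen arbitrarily)
    eps = rng.normal(size=n)                   # true errors
    y = X @ beta + eps

    P = X @ np.linalg.inv(X.T @ X) @ X.T       # projection matrix
    M = np.eye(n) - P                          # annihilator
    b = np.linalg.solve(X.T @ X, X.T @ y)      # OLS estimate
    e = y - X @ b                              # OLS residuals

    print(np.allclose(e, M @ eps))             # e = M eps (since MX = 0)
    print(np.allclose(e @ e, eps @ M @ eps))   # e'e = eps' M eps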
6. A change in the unit of measurement for $y$ means that $y_i$ gets multiplied by some factor, say $\lambda$, for all $i$. The OLS formula shows that $b$ gets multiplied by $\lambda$. So $\hat{y}$ gets multiplied by the same factor $\lambda$, leaving $R^2$ unaffected. A change in the unit of measurement for regressors leaves $x_i'b$, and hence $R^2$, unaffected.
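The same point can be checked numerically. In this sketch (mine, not from the text), the scale factors 10 and 100 are arbitrary:

    # Sketch: R^2 is invariant to rescaling y or a regressor.
    import numpy as np

    def ols_r2(X, y):
        b = np.linalg.solve(X.T @ X, X.T @ y)
        e = y - X @ b
        r2 = 1.0 - (e @ e) / np.sum((y - y.mean()) ** 2)   # centered R^2
        return b, r2

    rng = np.random.default_rng(1)
    n = 100
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

    b, r2 = ols_r2(X, y)
    b_y, r2_y = ols_r2(X, 10.0 * y)                 # change units of y
    X2 = X.copy(); X2[:, 1] *= 100.0                # change units of the regressor
    b_x, r2_x = ols_r2(X2, y)

    print(np.allclose(b_y, 10.0 * b))               # b scales with y
    print(np.isclose(r2, r2_y), np.isclose(r2, r2_x))   # R^2 unchanged in both cases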
Section 1.3
4(a). Let $d \equiv \hat{\beta} - E(\hat{\beta} \mid X)$, $a \equiv \hat{\beta} - E(\hat{\beta})$, and $c \equiv E(\hat{\beta} \mid X) - E(\hat{\beta})$. Then $d = a - c$ and $dd' = aa' - ca' - ac' + cc'$. By taking unconditional expectations of both sides, we obtain
\[
E(dd') = E(aa') - E(ca') - E(ac') + E(cc').
\]
Now,
\[
\begin{aligned}
E(dd') &= E[E(dd' \mid X)] &&\text{(by Law of Total Expectations)}\\
&= E\bigl\{E[(\hat{\beta} - E(\hat{\beta} \mid X))(\hat{\beta} - E(\hat{\beta} \mid X))' \mid X]\bigr\}\\
&= E\bigl[\operatorname{Var}(\hat{\beta} \mid X)\bigr] &&\text{(by the first equation in the hint).}
\end{aligned}
\]
By definition of variance, $E(aa') = \operatorname{Var}(\hat{\beta})$. By the second equation in the hint, $E(cc') = \operatorname{Var}[E(\hat{\beta} \mid X)]$. For $E(ca')$, we have:
\[
\begin{aligned}
E(ca') &= E[E(ca' \mid X)]\\
&= E\bigl\{E[(E(\hat{\beta} \mid X) - E(\hat{\beta}))(\hat{\beta} - E(\hat{\beta}))' \mid X]\bigr\}\\
&= E\bigl\{(E(\hat{\beta} \mid X) - E(\hat{\beta}))\, E[(\hat{\beta} - E(\hat{\beta}))' \mid X]\bigr\}\\
&= E\bigl\{(E(\hat{\beta} \mid X) - E(\hat{\beta}))(E(\hat{\beta} \mid X) - E(\hat{\beta}))'\bigr\}\\
&= E(cc') = \operatorname{Var}[E(\hat{\beta} \mid X)].
\end{aligned}
\]
Similarly, $E(ac') = \operatorname{Var}[E(\hat{\beta} \mid X)]$. Substituting these into the displayed equation above gives
\[
E[\operatorname{Var}(\hat{\beta} \mid X)] = \operatorname{Var}(\hat{\beta}) - \operatorname{Var}[E(\hat{\beta} \mid X)],
\]
i.e., $\operatorname{Var}(\hat{\beta}) = E[\operatorname{Var}(\hat{\beta} \mid X)] + \operatorname{Var}[E(\hat{\beta} \mid X)]$.
4(b). Since by assumption $E(\hat{\beta} \mid X) = \beta$, we have $\operatorname{Var}[E(\hat{\beta} \mid X)] = 0$. So the equality in (a) for the unbiased estimator $\hat{\beta}$ becomes $\operatorname{Var}(\hat{\beta}) = E[\operatorname{Var}(\hat{\beta} \mid X)]$. Similarly, for the OLS estimator $b$, we have $\operatorname{Var}(b) = E[\operatorname{Var}(b \mid X)]$. As noted in the hint, $E[\operatorname{Var}(\hat{\beta} \mid X)] \geq E[\operatorname{Var}(b \mid X)]$.
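A small numerical illustration of the comparison (mine, not from the text; the alternative weight matrix $A$ is an arbitrary function of $X$, and the error variance is normalized to one): the conditional variance of the linear unbiased estimator $(A'X)^{-1}A'y$ exceeds that of OLS in the positive semi-definite sense.

    # Sketch: OLS vs. another linear unbiased estimator btilde = (A'X)^{-1} A'y.
    # Gauss-Markov: Var(btilde|X) - Var(b|X) is positive semi-definite.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 30
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    A = np.column_stack([np.ones(n), np.sign(X[:, 1])])   # arbitrary suboptimal weights (a function of X)

    sigma2 = 1.0                                          # assumed error variance
    V_ols = sigma2 * np.linalg.inv(X.T @ X)               # Var(b | X)
    C = np.linalg.inv(A.T @ X) @ A.T                      # btilde = C y
    V_alt = sigma2 * C @ C.T                              # Var(btilde | X)

    print(np.linalg.eigvalsh(V_alt - V_ols))              # all eigenvalues >= 0 (up to rounding)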


7. $p_i$ is the $i$-th diagonal element of the projection matrix $P$. Since $P$ is positive semi-definite, its diagonal elements are all non-negative. Hence $p_i \geq 0$. $\sum_{i=1}^{n} p_i = K$ because this sum equals the trace of $P$, which equals $K$. To show that $p_i \leq 1$, first note that $p_i$ can be written as $e_i' P e_i$, where $e_i$ is the $n$-dimensional $i$-th unit vector (so its $i$-th element is unity and the other elements are all zero). Now, recall that for the annihilator $M$, we have $M = I - P$ and $M$ is positive semi-definite. So
\[
\begin{aligned}
e_i' P e_i &= e_i' e_i - e_i' M e_i\\
&= 1 - e_i' M e_i &&\text{(since $e_i' e_i = 1$)}\\
&\leq 1 &&\text{(since $M$ is positive semi-definite).}
\end{aligned}
\]
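These three properties are easy to confirm numerically. The sketch below (an illustration added here, with an arbitrary simulated $X$) checks $0 \leq p_i \leq 1$ and $\sum_i p_i = K$:

    # Sketch: diagonal entries of P = X(X'X)^{-1}X' lie in [0, 1] and sum to K.
    import numpy as np

    rng = np.random.default_rng(2)
    n, K = 30, 4
    X = rng.normal(size=(n, K))

    P = X @ np.linalg.inv(X.T @ X) @ X.T
    p = np.diag(P)                                       # the p_i ("leverages")

    print(np.all(p >= -1e-12), np.all(p <= 1 + 1e-12))   # 0 <= p_i <= 1
    print(np.isclose(p.sum(), K))                        # trace(P) = K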
Section 1.4
6. As explained in the text, the overall significance increases with the number of restrictions to be tested if the $t$ test is applied to each restriction without adjusting the critical value.
Section 1.5
2. Since the cross derivative $\partial^2 \log L(\theta)/(\partial\beta\,\partial\sigma^2)$ has expectation zero, the information matrix $I(\theta)$ is block diagonal, with its first block corresponding to $\beta$ and the second corresponding to $\sigma^2$. The inverse is block diagonal, with its first block being the inverse of
\[
-E\!\left[\frac{\partial^2 \log L(\theta)}{\partial\beta\,\partial\beta'}\right].
\]
So the Cramér-Rao bound for $\beta$ is the negative of the inverse of the expected value of (1.5.2). The expectation, however, is over $y$ and $X$ because here the density is a joint density. Therefore, the Cramér-Rao bound for $\beta$ is $\sigma^2\,\{E[X'X]\}^{-1}$.
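For reference, the last step can be made explicit (my addition; I take (1.5.2) to be the Hessian of the normal log likelihood with respect to $\beta$, as the surrounding sentence suggests):
\[
\frac{\partial^2 \log L(\theta)}{\partial\beta\,\partial\beta'} = -\frac{1}{\sigma^2}\,X'X,
\qquad\text{so}\qquad
\left\{-E\!\left[\frac{\partial^2 \log L(\theta)}{\partial\beta\,\partial\beta'}\right]\right\}^{-1}
= \left\{\frac{1}{\sigma^2}\,E(X'X)\right\}^{-1}
= \sigma^2\,\{E(X'X)\}^{-1}.
\]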
Section 1.6
3. $\operatorname{Var}(b \mid X) = (X'X)^{-1} X' \operatorname{Var}(\varepsilon \mid X)\, X (X'X)^{-1}$.
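This sandwich expression can be illustrated by Monte Carlo. In the sketch below (my illustration; the heteroskedastic form of $\operatorname{Var}(\varepsilon \mid X)$ is an arbitrary assumption), the simulated covariance of $b$ across repeated draws of $\varepsilon$ with $X$ held fixed approaches the sandwich formula:

    # Sketch: Monte Carlo check of Var(b|X) = (X'X)^{-1} X' Omega X (X'X)^{-1}.
    import numpy as np

    rng = np.random.default_rng(3)
    n, reps = 40, 100_000
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    beta = np.array([1.0, 2.0])
    sig = 0.5 + np.abs(X[:, 1])                 # assumed conditional std. dev. of eps_i
    Omega = np.diag(sig ** 2)                   # Var(eps | X)

    XtX_inv = np.linalg.inv(X.T @ X)
    sandwich = XtX_inv @ X.T @ Omega @ X @ XtX_inv

    eps = sig * rng.normal(size=(reps, n))      # each row: one draw of the error vector
    Y = X @ beta + eps
    B = Y @ X @ XtX_inv                         # each row: the corresponding OLS b'

    print(np.cov(B, rowvar=False))              # should be close to...
    print(sandwich)                             # ...the sandwich formula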
Section 1.7
2. It just changes the intercept by $b_2$ times $\log(1000)$.
5. The restricted regression is
\[
\log\!\left(\frac{TC_i}{p_{i2}}\right) = \beta_1 + \beta_2 \log(Q_i) + \beta_3 \log\!\left(\frac{p_{i1}}{p_{i2}}\right) + \beta_5 \log\!\left(\frac{p_{i3}}{p_{i2}}\right) + \varepsilon_i. \tag{1}
\]
The OLS estimate of $(\beta_1, \ldots, \beta_5)$ from (1.7.8) is $(-4.7,\ 0.72,\ 0.59,\ -0.007,\ 0.42)$. The OLS estimate from the above restricted regression should yield the same point estimates and standard errors. The SSR should be the same, but $R^2$ should be different.
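The equivalence of the two deflated regressions is easy to see in a simulation. The sketch below uses made-up data (not Nerlove's) and arbitrary coefficients satisfying $\beta_3 + \beta_4 + \beta_5 = 1$; here I deflate by $p_3$ for the comparison regression, but whichever price (1.7.8) actually deflates by, the same logic applies: the implied coefficient estimates and the SSR coincide, while $R^2$ changes because the dependent variable changes.

    # Sketch: two ways of imposing beta3+beta4+beta5 = 1 give the same SSR
    # (and the same implied coefficients) but different R^2.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    logQ = rng.normal(3.0, 1.0, n)
    logp1, logp2, logp3 = (rng.normal(0.0, 0.3, n) for _ in range(3))
    beta = [-4.0, 0.7, 0.6, -0.1, 0.5]          # arbitrary; last three sum to 1
    logTC = (beta[0] + beta[1]*logQ + beta[2]*logp1 + beta[3]*logp2
             + beta[4]*logp3 + 0.1*rng.normal(size=n))

    def ols(X, y):
        b = np.linalg.solve(X.T @ X, X.T @ y)
        e = y - X @ b
        return b, e @ e, 1 - (e @ e) / np.sum((y - y.mean())**2)

    ones = np.ones(n)
    # one way to impose the restriction: deflate by p3
    ba, ssr_a, r2_a = ols(np.column_stack([ones, logQ, logp1-logp3, logp2-logp3]),
                          logTC - logp3)
    # the other way, as in equation (1) above: deflate by p2
    bb, ssr_b, r2_b = ols(np.column_stack([ones, logQ, logp1-logp2, logp3-logp2]),
                          logTC - logp2)

    print(np.isclose(ssr_a, ssr_b))   # True: same SSR
    print(ba, bb)                     # ba = (b1,b2,b3,b4), bb = (b1,b2,b3,b5); shared entries coincide
    print(r2_a, r2_b)                 # generally different R^2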
6. That's because the dependent variable in the restricted regression is different from that in the unrestricted regression. If the dependent variable were the same, then indeed the $R^2$ should be higher for the unrestricted model.
7(b) No, because when the price of capital is constant across firms we are forced to use the adding-up restriction $\alpha_1 + \alpha_2 + \alpha_3 = 1$ to calculate $\alpha_2$ (capital's contribution) from the OLS estimates of $\alpha_1$ and $\alpha_3$.
8. Because input choices can depend on $\varepsilon_i$, the regressors would not be orthogonal to the error term. Under the Cobb-Douglas technology, input shares do not depend on factor prices. Labor's share, for example, should be equal to $\alpha_1/(\alpha_1 + \alpha_2 + \alpha_3)$ for all firms. Under constant returns to scale, this share equals $\alpha_1$. So we can estimate the $\alpha$'s without sampling error.