[Figure: three-dimensional diagram with EARNINGS on the vertical axis, S and EXP on the horizontal axes, and intercept β1]
Specifically, we will look at an earnings function model where hourly earnings, EARNINGS,
depend on years of schooling (highest grade completed), S, and years of work experience,
EXP.
© Christopher Dougherty 1999–2006 2
MULTIPLE REGRESSION WITH TWO EXPLANATORY VARIABLES: EXAMPLE
The model has three dimensions, one each for EARNINGS, S, and EXP. The starting point
for investigating the determination of EARNINGS is the intercept, β1.
Literally, the intercept gives EARNINGS for those respondents who have no schooling and
no work experience. However, there were no respondents with less than 6 years of
schooling. Hence a literal interpretation of β1 would be unwise.
[Figure: the pure S effect, β1 + β2S, marked on the diagram]
The next term on the right side of the equation gives the effect of variations in S. A one-year
increase in S causes EARNINGS to increase by β2 dollars, holding EXP constant.
[Figure: the pure EXP effect, β1 + β3EXP, marked on the diagram]
Similarly, the third term gives the effect of variations in EXP. A one-year increase in EXP
causes EARNINGS to increase by β3 dollars, holding S constant.
[Figure: the plane EARNINGS = β1 + β2S + β3EXP, combining the pure S effect (β1 + β2S) and the pure EXP effect (β1 + β3EXP)]
Different combinations of S and EXP give rise to values of EARNINGS which lie on the
plane shown in the diagram, defined by the equation EARNINGS = β1 + β2S + β3EXP. This is
the nonstochastic (nonrandom) component of the model.
The final element of the model is the disturbance term, u. This causes the actual values of
EARNINGS to deviate from the plane. In this observation, u happens to have a positive
value.
A sample consists of a number of observations generated in this way. Note that the
interpretation of the model does not depend on whether S and EXP are correlated or not.
However we do assume that the effects of S and EXP on EARNINGS are additive. The
impact of a difference in S on EARNINGS is not affected by the value of EXP, or vice versa.
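The additivity assumption can be checked directly on the fitted plane: with any fixed coefficients, the effect of one extra year of S is the same at every level of EXP. A minimal sketch, using illustrative coefficient values only (not estimates from any particular data set):

```python
# Illustrative coefficient values for the plane beta1 + beta2*S + beta3*EXP.
beta1, beta2, beta3 = -26.49, 2.68, 0.56

def earnings(s, exp):
    # Nonstochastic component of the model: a plane in (S, EXP) space.
    return beta1 + beta2 * s + beta3 * exp

# Effect of one extra year of schooling at two very different EXP levels:
effect_at_low_exp = earnings(13, 5) - earnings(12, 5)
effect_at_high_exp = earnings(13, 30) - earnings(12, 30)
print(effect_at_low_exp, effect_at_high_exp)  # both equal beta2 (up to rounding)
```

Because the model is linear and contains no S×EXP interaction term, both differences equal β2 exactly.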
Yi = β1 + β2X2i + β3X3i + ui
Ŷi = b1 + b2X2i + b3X3i
The regression coefficients are derived using the same least squares principle used in
simple regression analysis. The fitted value of Y in observation i depends on our choice of
b1, b2, and b3.
Yi = β1 + β2X2i + β3X3i + ui
Ŷi = b1 + b2X2i + b3X3i
ei = Yi − Ŷi = Yi − b1 − b2X2i − b3X3i
The residual ei in observation i is the difference between the actual and fitted values of Y.
We define RSS, the sum of the squares of the residuals, and choose b1, b2, and b3 so as to
minimize it.
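The minimization can be illustrated numerically: on synthetic data (generated here purely for illustration), the least-squares coefficients yield a smaller RSS than any arbitrary alternative choice of b1, b2, b3. A sketch:

```python
import numpy as np

# Synthetic data, for illustration only.
rng = np.random.default_rng(2)
n = 50
x2, x3 = rng.uniform(6, 20, n), rng.uniform(0, 40, n)
y = -26.0 + 2.7 * x2 + 0.55 * x3 + rng.normal(0, 5, n)

def rss(b1, b2, b3):
    # Residual sum of squares for candidate coefficients.
    e = y - (b1 + b2 * x2 + b3 * x3)
    return e @ e

# The least-squares fit attains a lower RSS than an arbitrary guess:
X = np.column_stack([np.ones(n), x2, x3])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(rss(*b_ols) < rss(-20.0, 2.0, 1.0))  # True
```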
b1 = Ȳ − b2X̄2 − b3X̄3

b2 = [Σ(X2i − X̄2)(Yi − Ȳ) · Σ(X3i − X̄3)² − Σ(X3i − X̄3)(Yi − Ȳ) · Σ(X2i − X̄2)(X3i − X̄3)]
     / [Σ(X2i − X̄2)² · Σ(X3i − X̄3)² − (Σ(X2i − X̄2)(X3i − X̄3))²]
We thus obtain three equations in three unknowns. Solving for b1, b2, and b3, we obtain the
expressions shown above. (The expression for b3 is the same as that for b2, with the
subscripts 2 and 3 interchanged everywhere.)
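These closed-form expressions can be verified numerically. A sketch on synthetic data (illustrative only), checking b1, b2, and b3 against NumPy's least-squares solver:

```python
import numpy as np

# Synthetic data, for illustration only.
rng = np.random.default_rng(0)
n = 200
x2 = rng.uniform(6, 20, n)    # plays the role of X2 (e.g. schooling)
x3 = rng.uniform(0, 40, n)    # plays the role of X3 (e.g. experience)
y = -26.0 + 2.7 * x2 + 0.55 * x3 + rng.normal(0, 5, n)

# Deviations from sample means.
d2, d3, dy = x2 - x2.mean(), x3 - x3.mean(), y - y.mean()

# Closed-form b2; b3 is the same with subscripts 2 and 3 interchanged.
den = (d2 @ d2) * (d3 @ d3) - (d2 @ d3) ** 2
b2 = ((d2 @ dy) * (d3 @ d3) - (d3 @ dy) * (d2 @ d3)) / den
b3 = ((d3 @ dy) * (d2 @ d2) - (d2 @ dy) * (d2 @ d3)) / den
b1 = y.mean() - b2 * x2.mean() - b3 * x3.mean()

# Check against a general-purpose least-squares solver.
X = np.column_stack([np.ones(n), x2, x3])
b_check = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose([b1, b2, b3], b_check))  # True
```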
However, the expressions for the slope coefficients are considerably more complex than
that for the slope coefficient in simple regression analysis.
For the general case when there are many explanatory variables, ordinary algebra is
inadequate. It is necessary to switch to matrix algebra.
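In matrix notation the least-squares solution is b = (X′X)⁻¹X′y, which covers any number of explanatory variables. A minimal sketch on synthetic data (illustrative only):

```python
import numpy as np

# Synthetic data, for illustration only.
rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n),             # column for the intercept
                     rng.uniform(6, 20, n),  # X2
                     rng.uniform(0, 40, n)]) # X3
y = X @ np.array([-26.0, 2.7, 0.55]) + rng.normal(0, 5, n)

# Solve the normal equations (X'X) b = X'y.
# (Solving is preferred over forming the inverse explicitly.)
b = np.linalg.solve(X.T @ X, X.T @ y)
print(b)  # estimates of beta1, beta2, beta3
```

The same line of code works unchanged however many columns X has, which is why the matrix formulation replaces the increasingly unwieldy scalar expressions.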
------------------------------------------------------------------------------
EARNINGS | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
S | 2.678125 .2336497 11.46 0.000 2.219146 3.137105
EXP | .5624326 .1285136 4.38 0.000 .3099816 .8148837
_cons | -26.48501 4.27251 -6.20 0.000 -34.87789 -18.09213
------------------------------------------------------------------------------
Here is the regression output for the earnings function using Data Set 21.
It indicates that earnings increase by $2.68 for every extra year of schooling and by $0.56
for every extra year of work experience.
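Using the reported coefficients, the fitted equation is EARNINGS-hat = −26.485 + 2.678·S + 0.562·EXP (values rounded). A quick sketch of how a fitted value is computed, with example values of S and EXP chosen purely for illustration:

```python
# Coefficients as reported in the regression output above.
b_cons, b_s, b_exp = -26.48501, 2.678125, 0.5624326

def fitted_earnings(s, exp):
    # Fitted hourly earnings from the estimated equation.
    return b_cons + b_s * s + b_exp * exp

# e.g. 16 years of schooling and 10 years of work experience:
print(round(fitted_earnings(16, 10), 2))  # 21.99
```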
Literally, the intercept indicates that an individual who had no schooling or work experience
would have hourly earnings of –$26.49.
Obviously, this is impossible. The lowest value of S in the sample was 6. We have
obtained a nonsense estimate because we have extrapolated too far beyond the range of the data.