
1519T_c14 03/27/2006 07:28 AM Page 614

Chapter 14

Multiple Regression

[Chapter-opener photo credit: Jose Luis Pelaez Inc/Blend Images/Getty Images, Inc.]
14.1 Multiple Regression Analysis
14.2 Assumptions of the Multiple Regression Model
14.3 Standard Deviation of Errors
14.4 Coefficient of Multiple Determination
14.5 Computer Solution of Multiple Regression

In Chapter 13, we discussed simple linear regression and linear correlation. A simple regression model includes one independent and one dependent variable, and it presents a very simplified scenario of real-world situations. In the real world, a dependent variable is usually influenced by a number of independent variables. For example, the sales of a company's product may be determined by the price of that product, the quality of the product, and the advertising expenditure incurred by the company to promote that product. Therefore, it makes more sense to use a regression model that includes more than one independent variable. Such a model is called a multiple regression model. In this chapter we will discuss multiple regression models.

14.1 Multiple Regression Analysis


The simple linear regression model discussed in Chapter 13 was written as
y = A + Bx + ε

This model includes one independent variable, which is denoted by x, and one dependent variable, which is denoted by y. As we know from Chapter 13, the term represented by ε in the above model is called the random error.
Usually a dependent variable is affected by more than one independent variable. When we
include two or more independent variables in a regression model, it is called a multiple
regression model. Remember, whether it is a simple or a multiple regression model, it always
includes one and only one dependent variable.


A multiple regression model with y as a dependent variable and x1, x2, x3, ..., xk as independent variables is written as

y = A + B1x1 + B2x2 + B3x3 + ... + Bkxk + ε     (1)

where A represents the constant term, B1, B2, B3, ..., Bk are the regression coefficients of the independent variables x1, x2, x3, ..., xk, respectively, and ε represents the random error term. This model contains k independent variables x1, x2, x3, ..., and xk. From model (1), it would seem
that multiple regression models can only be used when the relationship between the depend-
ent variable and each independent variable is linear. Furthermore, it also appears as if there
can be no interaction between two or more of the independent variables. This is far from the
truth. In the real world, a multiple regression model can be much more complex. Discussion
of such models is outside the scope of this book. When each term contains a single inde-
pendent variable raised to the first power as in model (1), we call it a first-order multiple
regression model. This is the only type of multiple regression model we will discuss in this
chapter.
In regression model (1), A represents the constant term, which gives the value of y when all independent variables assume zero values. The coefficients B1, B2, B3, ..., and Bk are called the partial regression coefficients. For example, B1 is the partial regression coefficient of x1. It gives the change in y due to a one-unit change in x1 when all other independent variables included in the model are held constant. In other words, if we change x1 by one unit but keep x2, x3, ..., and xk unchanged, then the resulting change in y is measured by B1. Similarly, the value of B2 gives the change in y due to a one-unit change in x2 when all other independent variables are held constant. In model (1) above, A, B1, B2, B3, ..., and Bk are called the true regression coefficients or population parameters.
A positive value for a particular Bi in model (1) will indicate a positive relationship between
y and the corresponding xi variable. A negative value for a particular Bi in that model will in-
dicate a negative relationship between y and the corresponding xi variable.
Remember that in a first-order regression model such as model (1), the relationship between each xi and y is a straight-line relationship. In model (1), A + B1x1 + B2x2 + B3x3 + ... + Bkxk is called the deterministic portion and ε is the stochastic portion of the model.
When we use the t distribution to make inferences about a single parameter of a multiple regression model, the degrees of freedom are calculated as

df = n - k - 1
where n represents the sample size and k is the number of independent variables in the model.

Definition
Multiple Regression Model A regression model that includes two or more independent variables is called a multiple regression model. It is written as

y = A + B1x1 + B2x2 + B3x3 + ... + Bkxk + ε

where y is the dependent variable, x1, x2, x3, ..., xk are the k independent variables, and ε is the random error term. When each of the xi variables represents a single variable raised to the first power as in the above model, this model is referred to as a first-order multiple regression model. For such a model with a sample size of n and k independent variables, the degrees of freedom are:

df = n - k - 1

When a multiple regression model includes only two independent variables (with k = 2), model (1) reduces to

y = A + B1x1 + B2x2 + ε

A multiple regression model with three independent variables (with k = 3) is written as

y = A + B1x1 + B2x2 + B3x3 + ε

If model (1) is estimated using sample data, which is usually the case, the estimated re-
gression equation is written as
ŷ = a + b1x1 + b2x2 + b3x3 + ... + bkxk     (2)

In equation (2), a, b1, b2, b3, ..., and bk are the sample statistics, which are the point estimators of the population parameters A, B1, B2, B3, ..., and Bk, respectively.
In model (1), y denotes the actual values of the dependent variable for members of the sample. In the estimated model (2), ŷ denotes the predicted or estimated values of the dependent variable. The difference between any pair of y and ŷ values gives the error of prediction. For a multiple regression model,

SSE = Σ(y - ŷ)²

where SSE stands for the error sum of squares.


As in Chapter 13, the estimated regression equation (2) is obtained by minimizing the sum
of squared errors, that is,
Minimize Σ(y - ŷ)²
The estimated equation (2) obtained by minimizing the sum of squared errors is called the least
squares regression equation.
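The least squares idea above can be sketched directly in code. The following is a minimal illustration using NumPy's least squares routine rather than the formulas or MINITAB; the function name and the small data set are hypothetical, chosen so that y satisfies y = 4 + 2x1 - 3x2 exactly and the fit must recover those coefficients.

```python
import numpy as np

def fit_first_order(y, X):
    """Least squares fit of the first-order model
    y-hat = a + b1*x1 + ... + bk*xk.

    y is a length-n array; X is an n-by-k array whose columns are the
    independent variables. Returns (a, b) with b holding b1, ..., bk.
    """
    n = len(y)
    # Prepend a column of 1s so the constant term a is estimated too.
    design = np.column_stack([np.ones(n), X])
    # lstsq minimizes the sum of squared errors Σ(y - y-hat)².
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef[0], coef[1:]

# Hypothetical data constructed so that y = 4 + 2*x1 - 3*x2 exactly;
# the least squares estimates should therefore recover a=4, b1=2, b2=-3.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 5.0], [4.0, 2.0], [5.0, 7.0]])
y = 4 + 2 * X[:, 0] - 3 * X[:, 1]
a, b = fit_first_order(y, X)
```

Because the fabricated data contain no random error, SSE is zero here; with real data the same call returns the coefficients that minimize SSE.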
Usually the calculations in a multiple regression analysis are made by using statistical soft-
ware packages for computers, such as MINITAB, instead of using the formulas manually. Even
for a multiple regression equation with two independent variables, the formulas are complex
and manual calculations are time consuming. In this chapter we will perform the multiple re-
gression analysis using MINITAB. The solutions obtained by using other statistical software
packages such as JMP, SAS, S-Plus, or SPSS can be interpreted the same way. The TI-84 and
Excel do not have built-in procedures for the multiple regression model.

14.2 Assumptions of the Multiple Regression Model
Like a simple linear regression model, a multiple (linear) regression model is based on certain
assumptions. The following are the major assumptions for the multiple regression model (1).
Assumption 1: The mean of the probability distribution of ε is zero, that is,

E(ε) = 0

If we calculate errors for all measurements for a given set of values of independent variables for a population data set, the mean of these errors will be zero. In other words, while individual predictions will have some amount of error, on average our predictions will be correct. Under this assumption, the mean value of y is given by the deterministic part of regression model (1). Thus,

E(y) = A + B1x1 + B2x2 + B3x3 + ... + Bkxk

where E(y) is the expected or mean value of y for the population. This mean value of y is also denoted by μy|x1, x2, ..., xk.

Assumption 2: The errors associated with different sets of values of independent variables are independent. Furthermore, these errors are normally distributed and have a constant standard deviation, which is denoted by σε.

Assumption 3: The independent variables are not linearly related. However, they can have a
nonlinear relationship. When independent variables are highly linearly correlated, it is referred
to as multicollinearity. This assumption is about the nonexistence of the multicollinearity prob-
lem. For example, consider the following multiple regression model:
y = A + B1x1 + B2x2 + B3x3 + ε

All of the following linear relationships (and other such linear relationships) between x1, x2, and x3 should be invalid for this model.

x1 = x2 + 4x3
x2 = 5x1 + 2x3
x1 = 3.5x2

If any linear relationship exists, we can substitute one variable for another, which will reduce the number of independent variables to two. However, nonlinear relationships, such as x1 = 4x2² and x2 = 2x1 + 6x3², between x1, x2, and x3 are permissible.
In practice, multicollinearity is a major issue. Examining the correlation for each pair of
independent variables is a good way to determine if multicollinearity exists.
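The pairwise-correlation screen suggested above is easy to automate. Below is a minimal pure-Python sketch; the helper name and the data are hypothetical, with x2 built as an exact linear function of x1 so the pair is perfectly collinear.

```python
from math import sqrt

def pearson_r(x, z):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    sxz = sum((a - mx) * (b - mz) for a, b in zip(x, z))
    sxx = sum((a - mx) ** 2 for a in x)
    szz = sum((b - mz) ** 2 for b in z)
    return sxz / sqrt(sxx * szz)

# Hypothetical predictors: x2 = 2*x1 + 3 is an exact linear function
# of x1, so r(x1, x2) = 1 and the pair is perfectly collinear; x3 has
# no exact linear tie to x1.
x1 = [5, 14, 6, 10, 4]
x2 = [2 * v + 3 for v in x1]
x3 = [2, 0, 1, 3, 6]

r12 = pearson_r(x1, x2)  # 1.0: multicollinearity problem
r13 = pearson_r(x1, x3)  # moderate, negative
```

A correlation very close to ±1 for a pair of independent variables signals the multicollinearity problem described in Assumption 3; note that this screen only detects pairwise linear relationships, not linear combinations of three or more variables.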
Assumption 4: There is no linear association between the random error term ε and each independent variable xi.

14.3 Standard Deviation of Errors


The standard deviation of errors (also called the standard error of the estimate) for the multiple regression model (1) is denoted by σε, and it is a measure of variation among errors. However, when sample data are used to estimate multiple regression model (1), the standard deviation of errors is denoted by sε. The formula to calculate sε is as follows.

sε = √(SSE / (n - k - 1))     where SSE = Σ(y - ŷ)²

Note that here SSE is the error sum of squares. We will not use this formula to calculate sε manually. Rather, we will obtain it from the computer solution. Note that many software packages label sε as Root MSE, where MSE stands for mean square error.
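The formula above is a one-liner once SSE is known. A small sketch, using n = 12 and k = 2 as in this chapter's insurance example; the SSE value of 1327.81 is an assumption back-computed from the standard deviation of errors that the MINITAB output reports there, not a number printed in the text.

```python
from math import sqrt

def std_dev_of_errors(sse, n, k):
    """Standard deviation of errors: s = sqrt(SSE / (n - k - 1)),
    where k is the number of independent variables in the model."""
    return sqrt(sse / (n - k - 1))

# n = 12 observations, k = 2 predictors; SSE is an assumed value
# consistent with the MINITAB result discussed later in the chapter.
s_e = std_dev_of_errors(1327.81, 12, 2)  # about 12.146
```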

14.4 Coefficient of Multiple Determination


In Chapter 13, we denoted the coefficient of determination for a simple linear regression model by r² and defined it as the proportion of the total sum of squares SST that is explained by the regression model. The coefficient of determination for the multiple regression model, usually called the coefficient of multiple determination, is denoted by R² and is defined as the proportion of the total sum of squares SST that is explained by the multiple regression model. It tells us how good the multiple regression model is and how well the independent variables included in the model explain the dependent variable.
Like r², the value of the coefficient of multiple determination R² always lies in the range 0 to 1, that is,

0 ≤ R² ≤ 1
Just as in the case of the simple linear regression model, SST is the total sum of squares, SSR
is the regression sum of squares, and SSE is the error sum of squares. SST is always equal to
the sum of SSE and SSR. They are calculated as follows.
SSE = Σe² = Σ(y - ŷ)²
SST = SSyy = Σ(y - ȳ)²
SSR = Σ(ŷ - ȳ)²

SSR is the portion of SST that is explained by the use of the regression model, and SSE is the portion of SST that is not explained by the use of the regression model. The coefficient of multiple determination is given by the ratio of SSR and SST as follows.

R² = SSR / SST

The coefficient of multiple determination R² has one major shortcoming. The value of R² generally increases as we add more and more explanatory variables to the regression model (even if they do not belong in the model). Just because we can increase the value of R² does not imply that the regression equation with a higher value of R² does a better job of predicting the dependent variable. Such a value of R² will be misleading, and it will not represent the true explanatory power of the regression model. To eliminate this shortcoming of R², it is preferable to use the adjusted coefficient of multiple determination, which is denoted by R̄². Note that R̄² is the coefficient of multiple determination adjusted for degrees of freedom. The value of R̄² may increase, decrease, or stay the same as we add more explanatory variables to our regression model. If a new variable added to the regression model contributes significantly to explaining the variation in y, then R̄² increases; otherwise it decreases. The value of R̄² is calculated as follows.

R̄² = 1 - (1 - R²) · (n - 1)/(n - k - 1)     or, equivalently,     R̄² = 1 - (SSE/(n - k - 1)) / (SST/(n - 1))

Thus, if we know R², we can find the value of R̄². Almost all statistical software packages give the values of both R² and R̄² for a regression model.
Another property of R̄² to remember is that whereas R² can never be negative, R̄² can be negative.
While a general rule of thumb is that a higher value of R² implies that a specific set of independent variables does a better job of predicting a specific dependent variable, it is important to recognize that some dependent variables have a great deal more variability than others. Therefore, R² = .30 could imply that a specific model is not a very strong model, but it could be the best possible model in a certain scenario. Many good financial models have values of R² below .50.
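The quantities in this section can be collected into a short function. A minimal sketch; the function name, and the small y and ŷ vectors below, are hypothetical and serve only to exercise the formulas.

```python
def r_squared(y, y_hat, k):
    """Return (R2, adjusted R2) from observed values y and fitted
    values y_hat, where k is the number of independent variables."""
    n = len(y)
    y_bar = sum(y) / n
    sst = sum((v - y_bar) ** 2 for v in y)          # total sum of squares
    sse = sum((v - f) ** 2 for v, f in zip(y, y_hat))  # error sum of squares
    r2 = 1 - sse / sst            # equals SSR/SST for a least squares fit
    r2_adj = 1 - (sse / (n - k - 1)) / (sst / (n - 1))
    return r2, r2_adj

# Hypothetical observed and fitted values from some k = 2 model.
y = [10.0, 12.0, 15.0, 11.0, 18.0, 14.0]
y_hat = [10.5, 11.5, 14.0, 12.0, 17.5, 14.5]
r2, r2_adj = r_squared(y, y_hat, k=2)
```

Note that the adjusted value divides SSE and SST by their degrees of freedom, which is exactly how the adjustment penalizes extra explanatory variables.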

14.5 Computer Solution of Multiple Regression


In this section, we take an example of a multiple regression model, solve it using MINITAB,
interpret the solution, and make inferences about the population parameters of the regression model.

 EXAMPLE 14–1
Using MINITAB to find a multiple regression equation.

A researcher wanted to find the effect of driving experience and the number of driving violations on auto insurance premiums. A random sample of 12 drivers insured with the same company and having similar auto insurance policies was selected from a large city. Table 14.1 lists

Table 14.1

Number of
Monthly Premium Driving Experience Driving Violations
(dollars) (years) (past 3 years)
148 5 2
76 14 0
100 6 1
126 10 3
194 4 6
110 8 2
114 11 3
86 16 1
198 3 5
92 9 1
70 19 0
120 13 3

the monthly auto insurance premiums (in dollars) paid by these drivers, their driving experi-
ences (in years), and the numbers of driving violations committed by them during the past
three years.
Using MINITAB, find the regression equation of monthly premiums paid by drivers on the
driving experiences and the numbers of driving violations.

Solution Let

y = the monthly auto insurance premium (in dollars) paid by a driver
x1 = the driving experience (in years) of a driver
x2 = the number of driving violations committed by a driver during the past three years

We are to estimate the regression model

y = A + B1x1 + B2x2 + ε     (3)
The first step is to enter the data of Table 14.1 into the MINITAB spreadsheet as shown in Screen 14.1. Here we have entered the given data in columns C1, C2, and C3 and named them Monthly Premium, Driving Experience, and Driving Violations, respectively.

Screen 14.1

To obtain the estimated regression equation, select Stat > Regression > Regression. In the dialog box you obtain, enter Monthly Premium in the Response box, and Driving Experience and Driving Violations in the Predictors box as shown in Screen 14.2. Note that you can enter the column names C1, C2, and C3 instead of variable names in these boxes. Click OK to obtain the output, which is shown in Screen 14.3.
From the output given in Screen 14.3, the estimated regression equation is:

ŷ = 110 - 2.75x1 + 16.1x2
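If MINITAB is not available, the same least squares estimates can be reproduced with NumPy. A sketch using the Table 14.1 data; up to rounding, the coefficients should match the MINITAB output in Screen 14.3.

```python
import numpy as np

# Table 14.1: monthly premium y (dollars), driving experience x1
# (years), and driving violations x2 (past 3 years) for 12 drivers.
y = np.array([148, 76, 100, 126, 194, 110, 114, 86, 198, 92, 70, 120], float)
x1 = np.array([5, 14, 6, 10, 4, 8, 11, 16, 3, 9, 19, 13], float)
x2 = np.array([2, 0, 1, 3, 6, 2, 3, 1, 5, 1, 0, 3], float)

# Design matrix with a leading column of 1s for the constant term.
X = np.column_stack([np.ones(len(y)), x1, x2])
a, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
# a is about 110.28, b1 about -2.7473, b2 about 16.106,
# matching the Coef column of the MINITAB output.
```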

Screen 14.2

Screen 14.3 

14.5.1 Estimated Multiple Regression Model


Example 14–2 describes, among other things, how the coefficients of the multiple regression
model are interpreted.

 EXAMPLE 14–2
Interpreting parts of the MINITAB solution of multiple regression.

Refer to Example 14–1 and the MINITAB solution given in Screen 14.3.
(a) Explain the meaning of the estimated regression coefficients.
(b) What are the values of the standard deviation of errors, the coefficient of multiple determination, and the adjusted coefficient of multiple determination?
(c) What is the predicted auto insurance premium paid per month by a driver with seven years of driving experience and three driving violations committed in the past three years?
(d) What is the point estimate of the expected (or mean) auto insurance premium paid per month by all drivers with 12 years of driving experience and 4 driving violations committed in the past three years?

Solution
(a) From the portion of the MINITAB solution that is marked I in Screen 14.3, the esti-
mated regression equation is
ŷ = 110 - 2.75x1 + 16.1x2     (4)

From this equation,

a = 110, b1 = -2.75, and b2 = 16.1

We can also read the values of these coefficients from the column labeled Coef in the portion of the output marked II in the MINITAB solution of Screen 14.3. From this column we obtain

a = 110.28, b1 = -2.7473, and b2 = 16.106

Notice that in this column the coefficients of the regression equation appear with more digits after the decimal point. With these coefficient values, we can write the estimated regression equation as

ŷ = 110.28 - 2.7473x1 + 16.106x2     (5)

The value of a = 110.28 in the estimated regression equation (5) gives the value of ŷ for x1 = 0 and x2 = 0. Thus, a driver with no driving experience and no driving
violations committed in the past three years is expected to pay an auto insurance
premium of $110.28 per month. Again, this is the technical interpretation of a. In
reality, that may not be true because none of the drivers in our sample has both zero
experience and zero driving violations. As all of us know, some of the highest premiums are paid by teenagers just after obtaining their driver's licenses.
The value of b1 = -2.7473 in the estimated regression model gives the change in
ŷ for a one-unit change in x1 when x2 is held constant. Thus, we can state that a driver
with one extra year of experience but the same number of driving violations is ex-
pected to pay $2.7473 (or $2.75) less per month for the auto insurance premium. Note
that because b1 is negative, an increase in driving experience decreases the premium
paid. In other words, y and x1 have a negative relationship.
The value of b2 = 16.106 in the estimated regression model gives the change in ŷ
for a one-unit change in x2 when x1 is held constant. Thus, a driver with one extra
driving violation during the past three years but with the same years of driving expe-
rience is expected to pay $16.106 (or $16.11) more per month for the auto insurance
premium.

(b) The values of the standard deviation of errors, the coefficient of multiple determina-
tion, and the adjusted coefficient of multiple determination are given in part III of the
MINITAB solution of Screen 14.3. From this part of the solution,
sε = 12.1459, R² = 93.1%, and R̄² = 91.6%

Thus, the standard deviation of errors is 12.1459. The value of R² = 93.1% tells us that the two independent variables, years of driving experience and the number of driving violations, explain 93.1% of the variation in the auto insurance premiums. The value of R̄² = 91.6% is the value of the coefficient of multiple determination adjusted for degrees of freedom. It states that when adjusted for degrees of freedom, the two independent variables explain 91.6% of the variation in the dependent variable.
(c) To find the predicted auto insurance premium paid per month by a driver with seven years of driving experience and three driving violations during the past three years, we substitute x1 = 7 and x2 = 3 in the estimated regression model (5). Thus,

ŷ = 110.28 - 2.7473x1 + 16.106x2
  = 110.28 - 2.7473(7) + 16.106(3) = $139.37
Note that this value of ŷ is a point estimate of the predicted value of y, which is de-
noted by yp. The concept of the predicted value of y is the same as that for a simple
linear regression model discussed in Section 13.8.2 of Chapter 13.
(d) To obtain the point estimate of the expected (mean) auto insurance premium paid per
month by all drivers with 12 years of driving experience and four driving violations
during the past three years, we substitute x1 = 12 and x2 = 4 in the estimated regression equation (5). Thus,

ŷ = 110.28 - 2.7473x1 + 16.106x2
  = 110.28 - 2.7473(12) + 16.106(4) = $141.74

This value of ŷ is a point estimate of the mean value of y, which is denoted by E(y) or μy|x1, x2. The concept of the mean value of y is the same as that for a simple linear regression model discussed in Section 13.8.1 of Chapter 13. 
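Parts (c) and (d) both plug values into the same estimated equation (5); a one-line helper makes that explicit. The coefficients are taken from the MINITAB output; the function name is ours.

```python
def predict_premium(x1, x2):
    """Estimated equation (5): y-hat = 110.28 - 2.7473*x1 + 16.106*x2."""
    return 110.28 - 2.7473 * x1 + 16.106 * x2

point_prediction = predict_premium(7, 3)   # part (c): about $139.37
mean_estimate = predict_premium(12, 4)     # part (d): about $141.74
```

The arithmetic is identical in both parts; only the interpretation differs, as a predicted value for one driver versus a point estimate of the mean for all such drivers.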

14.5.2 Confidence Interval for an Individual Coefficient


The values of a, b1, b2, b3, ..., and bk obtained by estimating model (1) using sample data give the point estimates of A, B1, B2, B3, ..., and Bk, respectively, which are the population parameters. Using the values of the sample statistics a, b1, b2, b3, ..., and bk, we can make confidence intervals for the corresponding population parameters A, B1, B2, B3, ..., and Bk, respectively.
Because of the assumption that the errors are normally distributed, the sampling distribution of each bi is normal with its mean equal to Bi and standard deviation equal to σbi. For example, the sampling distribution of b1 is normal with its mean equal to B1 and standard deviation equal to σb1. However, usually σε is not known and, hence, we cannot find σbi. Consequently, we use sbi as an estimator of σbi and use the t distribution to determine a confidence interval for Bi.
The formula to obtain a confidence interval for a population parameter Bi is given below. This is the same formula we used to make a confidence interval for B in Section 13.5.2 of Chapter 13. The only difference is that to make a confidence interval for a particular Bi for a multiple regression model, the degrees of freedom are n - k - 1.

Confidence Interval for Bi The (1 - α)100% confidence interval for Bi is given by

bi ± t sbi

The value of t that is used in this formula is obtained from the t distribution table for α/2 area in the right tail of the t distribution curve and (n - k - 1) degrees of freedom. The values of bi and sbi are obtained from the computer solution.

Example 14–3 describes the procedure to make a confidence interval for an individ-
ual regression coefficient Bi.

 EXAMPLE 14–3
Making a confidence interval for an individual coefficient of a multiple regression model.

Determine a 95% confidence interval for B1 (the coefficient of experience) for the multiple regression of auto insurance premium on driving experience and the number of driving violations. Use the MINITAB solution of Screen 14.3.

Solution To make a confidence interval for B1, we use the portion marked II in the MINITAB
solution of Screen 14.3. From that portion of the MINITAB solution,

b1 = -2.7473 and sb1 = .9770


Note that the value of the standard deviation of b1, sb1 = .9770, is given in the column labeled SE Coef in part II of the MINITAB solution.
The confidence level is 95%. The area in each tail of the t distribution curve is obtained as
follows.
Area in each tail of the t distribution = (1 - .95)/2 = .025

The sample size is 12, which gives n = 12. Because there are two independent variables, k = 2. Therefore,

Degrees of freedom = n - k - 1 = 12 - 2 - 1 = 9

From the t distribution table (Table V of Appendix C), the value of t for .025 area in the right tail of the t distribution curve and 9 degrees of freedom is 2.262. Then, the 95% confidence interval for B1 is

b1 ± t sb1 = -2.7473 ± 2.262(.9770)
           = -2.7473 ± 2.2100 = -4.9573 to -.5373

Thus, the 95% confidence interval for B1 is -4.96 to -.54. That is, we can state with 95% confidence that for one extra year of driving experience, the monthly auto insurance premium changes by an amount between -$4.96 and -$.54. Note that since both limits of the confidence interval have negative signs, we can also state that for each extra year of driving experience, the monthly auto insurance premium decreases by an amount between $.54 and $4.96. 

By applying the procedure used in Example 14–3, we can make a confidence interval for any of the coefficients (including the constant term) of a multiple regression model, such as A and B2 in model (3). For example, the 95% confidence intervals for A and B2, respectively, are

a ± t sa = 110.28 ± 2.262(14.62) = 77.21 to 143.35
b2 ± t sb2 = 16.106 ± 2.262(2.613) = 10.20 to 22.02
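All three intervals above follow the same bi ± t·sbi recipe, so one helper covers them. The t value and the standard errors below are the ones quoted from the t table and Screen 14.3; only the function name is ours.

```python
def confidence_interval(coef, se_coef, t):
    """Return (lower, upper) for the interval coef +/- t * se_coef."""
    margin = t * se_coef
    return coef - margin, coef + margin

# t for .025 area in each tail with 9 degrees of freedom (from Table V).
t = 2.262

ci_b1 = confidence_interval(-2.7473, 0.9770, t)  # about (-4.96, -0.54)
ci_a = confidence_interval(110.28, 14.62, t)     # about (77.21, 143.35)
ci_b2 = confidence_interval(16.106, 2.613, t)    # about (10.20, 22.02)
```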

14.5.3 Testing a Hypothesis about an Individual Coefficient


We can perform a test of hypothesis about any of the Bi coefficients of the regression model (1) using the same procedure that we used to make a test of hypothesis about B for a simple regression model in Section 13.5.3 of Chapter 13. The only difference is that the degrees of freedom are equal to n - k - 1 for a multiple regression model.
Again, because of the assumption that the errors are normally distributed, the sampling distribution of each bi is normal with its mean equal to Bi and standard deviation equal to σbi. However, usually σε is not known and, hence, we cannot find σbi. Consequently, we use sbi as an estimator of σbi, and use the t distribution to perform the test.

Test Statistic for bi The value of the test statistic t for bi is calculated as

t = (bi - Bi) / sbi

The value of Bi is substituted from the null hypothesis. Usually, but not always, the null hypothesis is H0: Bi = 0. The MINITAB solution contains this value of the t statistic.

Example 14–4 illustrates the procedure for testing a hypothesis about a single coefficient.

 EXAMPLE 14–4
Testing a hypothesis about a coefficient of a multiple regression model.

Using the 2.5% significance level, can you conclude that the coefficient of the number of years of driving experience in regression model (3) is negative? Use the MINITAB output obtained in Example 14–1 and shown in Screen 14.3 to perform this test.

Solution From Example 14–1, our multiple regression model (3) is

y = A + B1x1 + B2x2 + ε

where y is the monthly auto insurance premium (in dollars) paid by a driver, x1 is the driving experience (in years), and x2 is the number of driving violations committed during the past three years. From the MINITAB solution, the estimated regression equation is

ŷ = 110.28 - 2.7473x1 + 16.106x2

To conduct a test of hypothesis about B1, we use the portion marked II in the MINITAB solution given in Screen 14.3. From that portion of the MINITAB solution,

b1 = -2.7473 and sb1 = .9770
Note that the value of the standard deviation of b1, sb1 = .9770, is given in the column labeled SE Coef in part II of the MINITAB solution.
To make a test of hypothesis about B1, we perform the following five steps.
Step 1. State the null and alternative hypotheses.
We are to test whether or not the coefficient of the number of years of driving experi-
ence in regression model (3) is negative, that is, whether or not B1 is negative. The two
hypotheses are
H0: B1 = 0
H1: B1 < 0

Note that we can also write the null hypothesis as H0: B1 ≥ 0, which states that the coefficient of the number of years of driving experience in regression model (3) is either zero or positive.
Step 2. Select the distribution to use.
The sample size is small (n < 30) and σε is not known. The sampling distribution of b1 is normal because the errors are assumed to be normally distributed. Hence, we use the t distribution to make a test of hypothesis about B1.
Step 3. Determine the rejection and nonrejection regions.
The significance level is .025. The < sign in the alternative hypothesis indicates that the test is left-tailed. Therefore, the area in the left tail of the t distribution curve is α = .025. The degrees of freedom are:

df = n - k - 1 = 12 - 2 - 1 = 9

From the t distribution table (Table V in Appendix C), the critical value of t for 9 degrees of freedom and .025 area in the left tail of the t distribution curve is -2.262, as shown in Figure 14.1.

Figure 14.1 The rejection region (α = .025) lies in the left tail of the t distribution curve, to the left of the critical value t = -2.262; H0 is not rejected for values of t to the right of -2.262.

Step 4. Calculate the value of the test statistic and p-value.


The value of the test statistic t for b1 can be obtained from the MINITAB solution given in Screen 14.3. This value is given in the column labeled T and the row named Driving Experience in the portion marked II in that MINITAB solution. Thus, the observed value of t is

t = (b1 - B1) / sb1 = -2.81

Also, in the same portion of the MINITAB solution, the p-value for this test is given in the column labeled P and the row named Driving Experience. This p-value is .020. However, MINITAB always gives the p-value for a two-tailed test. Because our test is one-tailed, the p-value for our test is

p-value = .020/2 = .010

Step 5. Make a decision.


The value of the test statistic, t = -2.81, is less than the critical value of t = -2.262, and it falls in the rejection region. Consequently, we reject the null hypothesis and conclude that the coefficient of x1 in regression model (3) is negative. That is, an increase in driving experience decreases the auto insurance premium.
Also, the p-value for the test is .010, which is less than the significance level of α = .025. Hence, based on this p-value also, we reject the null hypothesis and conclude that B1 is negative. 

Note that the observed value of t in Step 4 of Example 14–4 is obtained from the MINITAB solution only if the null hypothesis is H0: B1 = 0. However, if the null hypothesis is that B1 is equal to a number other than zero, then the t value obtained from the MINITAB solution is no longer valid. For example, suppose the null hypothesis in Example 14–4 is

H0: B1 = -2

and the alternative hypothesis is

H1: B1 < -2

In this case the observed value of t will be calculated as

t = (b1 - B1) / sb1 = (-2.7473 - (-2)) / .9770 = -.765

To calculate this value of t, the values of b1 and sb1 are obtained from the MINITAB solution of Screen 14.3. The value of B1 is substituted from H0.
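Both versions of the test statistic come from the same formula, with only the hypothesized value of B1 changing. A sketch using the values quoted from Screen 14.3; the function name is ours.

```python
def t_statistic(b, b_null, se_b):
    """Test statistic t = (b_i - B_i) / s_b_i, where B_i is the
    value of the coefficient stated in the null hypothesis."""
    return (b - b_null) / se_b

b1, se_b1 = -2.7473, 0.9770

t_zero = t_statistic(b1, 0.0, se_b1)   # H0: B1 = 0, about -2.81
t_neg2 = t_statistic(b1, -2.0, se_b1)  # H0: B1 = -2, about -.765
```

The first value matches the T column that MINITAB prints; the second must be computed by hand (or as above) because software tests against zero by default.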

EXERCISES
 CONCEPTS AND PROCEDURES
14.1 How are the coefficients of independent variables in a multiple regression model interpreted? Explain.
14.2 What are the degrees of freedom for a multiple regression model to make inferences about individual
parameters?

14.3 What kinds of relationships among independent variables are permissible and which ones are not
permissible in a linear multiple regression model?
14.4 Explain the meaning of the coefficient of multiple determination and the adjusted coefficient of mul-
tiple determination for a multiple regression model. What is the difference between the two?
14.5 What are the assumptions of a multiple regression model?
14.6 The following table gives data on variables y, x1, x2, and x3.

y x1 x2 x3

8 18 38 74
11 26 25 64
19 34 24 47
21 38 44 31
7 13 12 79
23 49 48 35
16 28 38 42
27 59 52 18
9 14 17 71
13 21 39 57

Using MINITAB, estimate the regression model

y = A + B1x1 + B2x2 + B3x3 + ε
Using the solution obtained, answer the following questions.
a. Write the estimated regression equation.
b. Explain the meaning of a, b1, b2, and b3 obtained by estimating the given regression model.
c. What are the values of the standard deviation of errors, the coefficient of multiple determination,
and the adjusted coefficient of multiple determination?
d. What is the predicted value of y for x1 = 35, x2 = 40, and x3 = 65?
e. What is the point estimate of the expected (mean) value of y for all elements given that x1 = 40, x2 = 30, and x3 = 55?
f. Construct a 95% confidence interval for the coefficient of x3.
g. Using the 2.5% significance level, test whether or not the coefficient of x1 is positive.
14.7 The following table gives data on variables y, x1, and x2.

y x1 x2

24 98 52
14 51 69
18 74 63
31 108 35
10 33 88
29 119 54
26 99 51
33 141 31
13 47 67
27 103 41
26 111 46

Using MINITAB, find the regression of y on x1 and x2. Using the solution obtained, answer the follow-
ing questions.
a. Write the estimated regression equation.
b. Explain the meaning of the estimated regression coefficients of the independent variables.
c. What are the values of the standard deviation of errors, the coefficient of multiple determination,
and the adjusted coefficient of multiple determination?
d. What is the predicted value of y for x1 = 87 and x2 = 54?
e. What is the point estimate of the expected (mean) value of y for all elements given that x1 = 95 and x2 = 49?
f. Construct a 99% confidence interval for the coefficient of x1.
g. Using the 1% significance level, test if the coefficient of x2 in the population regression model is
negative.
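Parts f of these exercises use the interval b ± t(α/2) · s_b with n - k - 1 degrees of freedom. A small sketch (the estimate and its standard error below are invented for illustration, not taken from this exercise's MINITAB run; the critical value is read from a t table):

```python
# Confidence interval for a single regression coefficient:
# b +/- t(alpha/2) * s_b, with n - k - 1 degrees of freedom.
# The estimate and standard error below are hypothetical values.
b, s_b = 0.2041, 0.0647
t_crit = 3.355  # t for a 99% interval with 8 df (n = 11, k = 2), from a t table

lower, upper = b - t_crit * s_b, b + t_crit * s_b
print("99% CI: (", round(lower, 4), ",", round(upper, 4), ")")
```

With real data, b and s_b come straight from the coefficient table of the regression output.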

 APPLICATIONS
14.8 The salaries of workers are expected to be dependent, among other factors, on the number of years
they have spent in school and their work experiences. The following table gives information on the annual
salaries (in thousands of dollars) for 12 persons, the number of years each of them spent in school, and
the total number of years of work experiences.

Salary 52 44 48 77 68 48 59 83 28 61 27 69
Schooling 16 12 13 20 18 16 14 18 12 16 12 16
Experience 6 10 15 8 11 2 12 4 6 9 2 18

Using MINITAB, find the regression of salary on schooling and experience. Using the solution obtained,
answer the following questions.
a. Write the estimated regression equation.
b. Explain the meaning of the estimates of the constant term and the regression coefficients of inde-
pendent variables.
c. What are the values of the standard deviation of errors, the coefficient of multiple determination,
and the adjusted coefficient of multiple determination?
d. How much salary is a person with 18 years of schooling and 7 years of work experience expected
to earn?
e. What is the point estimate of the expected (mean) salary for all people with 16 years of schooling
and 10 years of work experience?
f. Determine a 99% confidence interval for the coefficient of schooling.
g. Using the 1% significance level, test whether or not the coefficient of experience is positive.
14.9 The CTO Corporation has a large number of chain restaurants throughout the United States. The
research department at the company wanted to find if the sales of restaurants depend on the size of the
population within a certain area surrounding the restaurants and the mean income of households in those
areas. The company collected information on these variables for 11 restaurants. The following table gives
information on the weekly sales (in thousands of dollars) of these restaurants, the population (in thou-
sands) within five miles of the restaurants, and the mean annual income (in thousands of dollars) of the
households for those areas.

Sales 19 29 17 21 14 30 33 22 18 27 24
Population 21 15 32 18 47 69 29 43 75 39 53
Income 58 69 49 52 67 76 81 46 39 64 28

Using MINITAB, find the regression of sales on population and income. Using the solution obtained, answer
the following questions.
a. Write the estimated regression equation.
b. Explain the meaning of the estimates of the constant term and the regression coefficients of pop-
ulation and income.
c. What are the values of the standard deviation of errors, the coefficient of multiple determination,
and the adjusted coefficient of multiple determination?
d. What are the predicted sales for a restaurant with 50 thousand people living within a five-mile area surrounding it and $55 thousand mean annual income of households in that area?
e. What is the point estimate of the expected (mean) sales for all restaurants with 45 thousand people
living within a five-mile area surrounding them and $46 thousand mean annual income of house-
holds living in those areas?
f. Determine a 95% confidence interval for the coefficient of income.
g. Using the 1% significance level, test whether or not the coefficient of population is different from
zero.
USES AND MISUSES... ADDITIVE VERSUS MULTIPLICATIVE EFFECT

A first-order multiple regression model with (quantitative) independent variables is one of the simpler types of multiple regression models. However, there are many limitations of this model. A major limitation is that the independent variables have an additive effect on the dependent variable. What does additive mean here? Suppose we have the following estimated regression equation:

ŷ = 4 + 6x1 + 3x2

From this estimated regression equation, if x1 increases by 1 unit (with x2 held constant), our predicted value of y increases by 6 units. If x2 increases by 1 unit (with x1 held constant), our predicted value of y increases by 3 units. But what happens if x1 and x2 both increase by 1 unit each? From this equation, our predicted value of y will increase by 6 + 3 = 9 units. The total increase in ŷ is simply the sum of the two increases. This change in ŷ does not depend on the values of x1 and x2 prior to the increase. Since the total increase in the dependent variable is equal to the sum of the increases from the two individual parts (independent variables), we say that the effect is additive.

Now suppose we have the following equation:

ŷ = 4 + 6x1 + 3x2 + 5x1²x2

The important difference in this case is that the increase in the value of ŷ is no longer constant when x1 and x2 both increase by 1 unit each. Instead, it depends on the original values of x1 and x2. For example, consider the values of x1 and x2, and the changes in the value of ŷ shown in the following table.

x1    x2     ŷ     Change in ŷ (versus x1 = 2 and x2 = 3)
2     3     85
3     3    166     81
2     4    108     23
3     4    214    129

Unlike the previous example, here the total increase in ŷ is not equal to the sum of the increases from the individual parts. In this case, the effect is said to be multiplicative. It is important to recognize that the effect is multiplicative when the total increase does not equal the sum of the increases of the independent variables.

Pharmaceutical companies are always looking for multiplicative effects when creating new drugs. In many cases, a combination of two drugs might have a multiplicative effect on a certain condition. Simply stated, the two drugs provide greater relief when you take them together than if you take them separately so that only one drug is in your system at any time. Of course, the companies also have to look for multiplicative effects when it comes to side effects. Individual drugs may not have major side effects when taken separately, but could cause greater harm when taken together. One of the most noteworthy examples of this was the drug Fen-Phen, which was a combination of two drugs—Fenfluramine and Phentermine. Each of these two drugs had been approved for short-term (individual) control of obesity. However, the drugs used in combination became popular for long-term weight loss. Unfortunately, the combination, when associated with longtime use, resulted in severe side effects that were detailed in the following statement from the Food and Drug Administration in 1997:

    Thanks to the reporting of health care professionals, as of August 22, FDA has received reports of 82 cases (including Mayo's 24 cases) of cardiac valvular disease in patients—two of whom were men—on combination fenfluramine and phentermine. These reports have been from 23 different states. Severity of the cardiac valvular disease was graded as moderate or severe in over three-fourths of the cases, and two of the reports described deterioration from no detectable heart murmur to need for a valve replacement within one-and-a-half years. Sixteen of these 82 patients required surgery to repair their heart valves. At least one of these patients died following surgery to repair the valves. (The agency's findings, as of July 31, are described in more detail in the current issue of The New England Journal of Medicine, which also carries the Mayo study.) Source: http://www.fda.gov/cder/news/phen/fenphenupdate.htm
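The table in the box above can be verified directly by evaluating the interaction equation ŷ = 4 + 6x1 + 3x2 + 5x1²x2 at the four points:

```python
# The interaction model from the box: y-hat = 4 + 6*x1 + 3*x2 + 5*x1^2*x2.

def y_hat(x1, x2):
    """Prediction from the interaction (multiplicative-effect) equation."""
    return 4 + 6 * x1 + 3 * x2 + 5 * x1**2 * x2

base = y_hat(2, 3)
print(base)                    # 85
print(y_hat(3, 3) - base)      # 81: x1 raised by 1 unit
print(y_hat(2, 4) - base)      # 23: x2 raised by 1 unit
print(y_hat(3, 4) - base)      # 129: both raised; note 129 != 81 + 23
```

The last line is the whole point: raising both variables together changes ŷ by 129, not by the additive 81 + 23 = 104.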

Glossary

Adjusted coefficient of multiple determination Denoted by R̄2, it gives the proportion of SST that is explained by the multiple regression model and is adjusted for the degrees of freedom.

Coefficient of multiple determination Denoted by R2, it gives the proportion of SST that is explained by the multiple regression model.

First-order multiple regression model A regression model in which each term contains a single independent variable raised to the first power.

Least squares regression model The estimated regression model obtained by minimizing the sum of squared errors.

Multicollinearity When two or more independent variables in a regression model are highly correlated.

Multiple regression model A regression model that contains two or more independent variables.

Partial regression coefficients The coefficients of the independent variables in a multiple regression model; each gives the effect of the corresponding independent variable on the dependent variable when all other independent variables are held constant.

SSE (error sum of squares) The sum of the squared differences between the actual and predicted values of y. It is the portion of SST that is not explained by the regression model.

SSR (regression sum of squares) The portion of SST that is explained by the regression model.

SST (total sum of squares) The sum of squared differences between the actual y values and their mean, ȳ.

Standard deviation of errors Also called the standard deviation of estimate, it is a measure of the variation among errors.
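The two determination coefficients defined above are linked by a fixed formula: R̄2 = 1 - (1 - R2)(n - 1)/(n - k - 1). A small sketch (the sample values are made up for illustration):

```python
def adjusted_r2(r2, n, k):
    """Adjusted coefficient of multiple determination for n observations
    and k independent variables."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Hypothetical fit: R2 = .90 with n = 20 observations and k = 3 predictors.
print(round(adjusted_r2(0.90, 20, 3), 5))  # 0.88125
```

Because (n - 1)/(n - k - 1) is at least 1, the adjusted value never exceeds R2, and the gap widens as more predictors are added relative to the sample size.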

Self-Review Test
1. When using the t distribution to make inferences about a single parameter, the degrees of freedom for a multiple regression model with k independent variables and a sample size of n are equal to
a. n - k - 1    b. n + k - 1    c. n - k + 1
2. The value of R2 is always in the range
a. zero to 1    b. -1 to 1    c. -1 to zero
3. The value of R̄2 is
a. always positive b. always nonnegative c. can be positive, zero, or negative
4. What is the difference between the population multiple regression model and the estimated multiple
regression model?
5. Why are the regression coefficients in a multiple regression model called the partial regression
coefficients?
6. What is the difference between R2 and R̄2? Explain.
7. A real estate expert wanted to find the relationship between the sale price of houses and various
characteristics of the houses. She collected data on four variables, recorded in the table, for 13 houses that
were sold recently. The four variables are
Price = Sale price of a house in thousands of dollars
Lot size = Size of the lot in acres
Living area = Living area in square feet
Age = Age of a house in years

Price Lot Size Living Area Age


455 1.4 2500 8
278 .9 2250 12
463 1.8 2900 5
327 .7 1800 9
505 2.6 3200 4
264 1.2 2400 28
445 2.1 2700 9
346 1.1 2050 13
487 2.8 2850 7
289 1.6 2400 16
434 3.2 2600 5
411 1.7 2300 8
223 .5 1700 19

Using MINITAB, find the regression of price on lot size, living area, and age. Using the solution ob-
tained, answer the following questions.
a. Indicate whether you expect a positive or a negative relationship between the dependent variable
and each of the independent variables.
b. Write the estimated regression equation. Are the signs of the coefficients of independent vari-
ables obtained in the solution consistent with your expectations of part a?
c. Explain the meaning of the estimated regression coefficients of all independent variables.
d. What are the values of the standard deviation of errors, the coefficient of multiple determina-
tion, and the adjusted coefficient of multiple determination?
e. What is the predicted sale price of a house that has a lot size of 2.5 acres, a living area of 3000
square feet, and is 14 years old?
f. What is the point estimate of the mean sale price of all houses that have a lot size of 2.2 acres, a
living area of 2500 square feet, and are 7 years old?
g. Determine a 99% confidence interval for each of the coefficients of the independent variables.
h. Construct a 98% confidence interval for the constant term in the population regression model.
i. Using the 1% significance level, test whether or not the coefficient of lot size is positive.
j. At the 2.5% significance level, test if the coefficient of living area is positive.
k. At the 5% significance level, test if the coefficient of age is negative.

 Mini-Project 14–1
Refer to the McDonald’s data set explained in Appendix B and given on the Web site of this text. Use
MINITAB to estimate the following regression model for that data set.
y = A + B1x1 + B2x2 + B3x3 + ε
where y = calories
x1 = fat (measured in grams)
x2 = carbohydrate (measured in grams)
and x3 = protein (measured in grams)
Now research on the Internet or in a book to find the number of calories in one gram of fat, one gram of
carbohydrate, and one gram of protein.
a. Based on the information you obtain, write what the estimated regression equation should be.
b. Are the differences between your expectation in part a and the regression equation that you obtained from MINITAB small or large?
c. Since each gram of fat is worth a specific number of calories, and the same is true for a gram of
carbohydrate, and for a gram of protein, one would expect that the predicted and observed values
of y would be the same for each food item, but that is not the case. The quantities of fat, carbo-
hydrates, and protein are reported in whole numbers. Explain why this causes the differences dis-
cussed in part b.
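For part a of this mini-project, the widely published Atwater factors (an outside fact, not given in this text) are roughly 9 calories per gram of fat and 4 per gram of carbohydrate or protein, so the expected equation is approximately ŷ = 9x1 + 4x2 + 4x3 with a constant near zero. A sketch of that expectation (the food item is invented for illustration):

```python
# Atwater factors: approximate calories per gram of fat, carbohydrate,
# and protein. These are standard published values, not from this text.
FAT_CAL, CARB_CAL, PROTEIN_CAL = 9, 4, 4

def expected_calories(fat_g, carb_g, protein_g):
    """Calories implied by the Atwater factors: part a's expected equation."""
    return FAT_CAL * fat_g + CARB_CAL * carb_g + PROTEIN_CAL * protein_g

# Hypothetical item with 10 g fat, 30 g carbohydrate, 15 g protein.
print(expected_calories(10, 30, 15))  # 270
```

Because the published fat, carbohydrate, and protein quantities are rounded to whole grams, a fitted regression will not reproduce these coefficients or the listed calories exactly, which is the point of part c.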

DECIDE FOR YOURSELF

Dummy Variables

In Sanford & Son, a very popular TV show of the 1970s, Fred Sanford would often refer to other people as big dummies. So, if a statistics professor questions your work and mentions a dummy in the process, should you be offended? Obviously, context will help you to answer that question, but if the professor is referring to a dummy variable, then do not take it personally.

A dummy variable is the name given to a categorical independent variable used in a multiple regression model. The simplest version occurs when there are only two categories. In this case, we assign a value of 0 to one category and 1 to the other category of the variable.

Suppose you have the following first-order regression equation to predict the amount of tar inhaled (y) by smoking a cigarette based on the amount of tar in the cigarette (x1) and the presence of a filter (x2). Note that here x2 = 0 implies that a cigarette does not have a filter and x2 = 1 means that a filter exists.

ŷ = .94x1 - .45x2

Answer the following questions.

1. Does the presence of a filter increase or decrease the tar consumption? What part of the regression equation tells you this?

2. On average, what percentage of the tar in a cigarette is consumed if the cigarette is unfiltered? What if the cigarette is filtered?

3. Draw a graph of the above regression equation. (Hint: The graph consists of two different regression lines with two variables, not a plane.)
