
Lesson 4: Regression

LEARNING GOAL
Explain the simple linear regression model
Obtain and interpret the simple linear regression
equation for a set of data
Describe R2 as a measure of explanatory power of the
regression model
Use a regression equation for prediction
Copyright 2009 Pearson Education, Inc.

Definition
A correlation exists between two variables when
higher values of one variable consistently go with
higher values of another variable or when higher
values of one variable consistently go with lower
values of another variable.


Here are a few examples of correlations:

There is a correlation between the variables amount of smoking and likelihood of lung cancer; that is, heavier smokers are more likely to get lung cancer.

There is a correlation between the variables height and weight for people; that is, taller people tend to weigh more than shorter people.


Here are a few more examples of correlations:

There is a correlation between the variables demand for apples and price of apples; that is, demand tends to decrease as price increases.

There is a correlation between practice time and skill among piano players; that is, those who practice more tend to be more skilled.


Scatter Diagrams
Definition
A scatter diagram (or scatterplot) is a graph in
which each point represents the values of two
variables.


Types of Correlation
(Note: detailed descriptions of these graphs appear in the next few slides.)

Figure 7.3 Types of correlation seen on scatter diagrams.



Figure 7.3(a-c) Types of correlation seen on scatter diagrams.

Parts a to c of Figure 7.3 show positive correlations, in which the values of y tend to increase with increasing values of x. The correlation becomes stronger as we proceed from a to c. In fact, c shows a perfect positive correlation, in which all the points fall along a straight line.

Figure 7.3(d-f) Types of correlation seen on scatter diagrams.

Parts d to f of Figure 7.3 show negative correlations, in which the values of y tend to decrease with increasing values of x. The correlation becomes stronger as we proceed from d to f. In fact, f shows a perfect negative correlation, in which all the points fall along a straight line.

Figure 7.3(g) Types of correlation seen on scatter diagrams.

Part g of Figure 7.3 shows no correlation between x and y. In other words, values of x do not appear to be linked to values of y in any way.


Figure 7.3(h) Types of correlation seen on scatter diagrams.

Part h of Figure 7.3 shows a nonlinear relationship, in which x and y appear to be related but the relationship does not correspond to a straight line. (Linear means along a straight line, and nonlinear means not along a straight line.)

Types of Correlation
Positive correlation: Both variables tend to increase (or
decrease) together.
Negative correlation: The two variables tend to change
in opposite directions, with one increasing while the other
decreases.
No correlation: There is no apparent (linear) relationship
between the two variables.
Nonlinear relationship: The two variables are related,
but the relationship results in a scatter diagram that does
not follow a straight-line pattern.


Measuring the Strength of a Correlation

Statisticians measure the strength of a correlation with a number called the correlation coefficient, represented by the letter r.


Properties of the Correlation Coefficient, r

The correlation coefficient, r, is a measure of the strength of a correlation. Its value can range only from -1 to 1.
If there is no correlation, the points do not follow any ascending or descending straight-line pattern, and the value of r is close to 0.
If there is a positive correlation, the correlation coefficient is positive (0 < r ≤ 1): Both variables increase together. A perfect positive correlation (in which all the points on a scatter diagram lie on an ascending straight line) has a correlation coefficient r = 1. Values of r close to 1 mean a strong positive correlation, and positive values closer to 0 mean a weak positive correlation.

Properties of the Correlation Coefficient, r (cont.)

If there is a negative correlation, the correlation coefficient is negative (-1 ≤ r < 0): When one variable increases, the other decreases. A perfect negative correlation (in which all the points lie on a descending straight line) has a correlation coefficient r = -1. Values of r close to -1 mean a strong negative correlation, and negative values closer to 0 mean a weak negative correlation.


Calculating the Correlation Coefficient

The formula for the (linear) correlation coefficient r can be expressed in several different ways that are all algebraically equivalent, which means that they produce the same value. The following expression has the advantage of relating more directly to the underlying rationale for r.
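As a concrete illustration of that rationale, r can be computed as the sum of products of z-scores divided by n - 1: points whose x and y values deviate from their means in the same direction contribute positively, and points deviating in opposite directions contribute negatively. A minimal sketch with made-up data (this is one standard, algebraically equivalent form of r, not necessarily the exact expression the original slide displayed):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 6.0])

# Standardize each variable using the sample standard deviation (ddof=1).
zx = (x - x.mean()) / x.std(ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)

# r = sum of the products of z-scores, divided by n - 1.
r = np.sum(zx * zy) / (len(x) - 1)

# This agrees with NumPy's built-in correlation matrix.
assert np.isclose(r, np.corrcoef(x, y)[0, 1])
print(round(r, 4))  # 0.8528
```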


Beware of Outliers

If you calculate the correlation coefficient for these data, you'll find that it is a relatively high r = 0.880, suggesting a very strong correlation.

Figure 7.10

However, if you cover the data point in the upper right corner of Figure 7.10, the apparent correlation disappears. In fact, without this data point, the correlation coefficient is r = 0.
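The outlier effect is easy to reproduce numerically (hypothetical points, not the Figure 7.10 data): four points with exactly zero correlation, plus one distant point that single-handedly manufactures a near-perfect r.

```python
import numpy as np

# Four points arranged so that r is exactly 0.
x = np.array([1.0, 1.0, 2.0, 2.0])
y = np.array([1.0, 2.0, 1.0, 2.0])
r_clean = np.corrcoef(x, y)[0, 1]

# Add a single outlier far from the cluster.
r_outlier = np.corrcoef(np.append(x, 10.0), np.append(y, 10.0))[0, 1]

print(round(r_clean, 4), round(r_outlier, 4))  # 0.0 0.983
```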

EXAMPLE 1 Masked Correlation

You've conducted a study to determine how the number of calories a person consumes in a day correlates with time spent in vigorous bicycling. Your sample consisted of ten women cyclists, all of approximately the same height and weight. Over a period of two weeks, you asked each woman to record the amount of time she spent cycling each day and what she ate on each of those days. You used the eating records to calculate the calories consumed each day.
Figure 7.11 shows a scatter diagram with each woman's mean time spent cycling on the horizontal axis and mean caloric intake on the vertical axis. Do higher cycling times correspond to higher intake of calories?

Solution: If you look at the data as a whole, your eye will probably tell you that there is a positive correlation in which greater cycling time tends to go with higher caloric intake. But the correlation is very weak, with a correlation coefficient of r = 0.374.
However, notice that two points are outliers: one representing a cyclist who cycled about a half-hour per day and consumed more than 3,000 calories, and the other representing a cyclist who cycled more than 2 hours per day on only 1,200 calories.
It's difficult to explain the two outliers, given that all the women in the sample have similar heights and weights.


Solution: (cont.)
We might therefore suspect that these two women either recorded their data incorrectly or were not following their usual habits during the two-week study. If we can confirm this suspicion, then we would have reason to delete the two data points as invalid. Figure 7.12 shows that the correlation is quite strong without those two outlier points, and suggests that the number of calories consumed rises by a little more than 500 calories for each hour of cycling.

Figure 7.12 The data from Figure 7.11 without the two outliers.

Of course, we should not remove the outliers without confirming our suspicion that they were invalid data points, and we should report our reasons for leaving them out.

Beware of Inappropriate Grouping

Correlations can also be misinterpreted when data are grouped inappropriately. In some cases, grouping data hides correlations.
Consider a (hypothetical) study in which researchers seek a correlation between hours of TV watched per week and high school grade point average (GPA). They collect the 21 data pairs in Table 7.3.
The scatter diagram (Figure 7.13) shows virtually no correlation; the correlation coefficient for the data is about r = -0.063. The apparent conclusion is that TV viewing habits are unrelated to academic achievement.

Figure 7.13

However, one astute researcher realizes that some of the students watched mostly educational programs, while others tended to watch comedies, dramas, and movies. She therefore divides the data set into two groups, one for the students who watched mostly educational television and one for the other students.
Table 7.4 shows her results with the students divided into these two groups.


Now we find two very strong correlations (Figure 7.14): a strong positive correlation for the students who watched educational programs (r = 0.855) and a strong negative correlation for the other students (r = -0.951).

Figure 7.14 These scatter diagrams show the same data as Figure 7.13, separated into the two groups identified in Table 7.4.
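The grouping effect can be reproduced with made-up numbers (illustrative values, not the Table 7.4 data): two subgroups with perfect but opposite correlations cancel to zero when pooled.

```python
import numpy as np

# Group A: perfect positive correlation. Group B: perfect negative.
hours_a, gpa_a = np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0])
hours_b, gpa_b = np.array([1.0, 2.0, 3.0]), np.array([3.0, 2.0, 1.0])

r_a = np.corrcoef(hours_a, gpa_a)[0, 1]
r_b = np.corrcoef(hours_b, gpa_b)[0, 1]

# Pooling the two groups hides both relationships entirely.
r_pooled = np.corrcoef(np.concatenate([hours_a, hours_b]),
                       np.concatenate([gpa_a, gpa_b]))[0, 1]

print(round(r_a, 6), round(r_b, 6), round(r_pooled, 6))  # 1.0 -1.0 0.0
```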


In other cases, a data set may show a stronger correlation than actually exists among subgroups.
Figure 7.15 shows the scatter diagram of the (hypothetical) data collected by a consumer group studying the relationship between the weights and prices of cars.

Figure 7.15 Scatter diagram for the car weight and price data.

The data set as a whole shows a strong correlation, but there is no correlation within either cluster.

Correlation Does Not Imply Causality

Perhaps the most important caution about interpreting correlations is one we've already mentioned: Correlation does not necessarily imply causality.

Possible Explanations for a Correlation
1. The correlation may be a coincidence.
2. Both correlated variables might be directly influenced by some common underlying cause.
3. One of the correlated variables may actually be a cause of the other. But note that, even in this case, it may be just one of several causes.

Definition
The best-fit line (or regression line) on a scatter
diagram is a line that lies closer to the data points
than any other possible line (according to a
standard statistical measure of closeness).


Predictions with Best-Fit Lines

Cautions in Making Predictions from Best-Fit Lines
1. Don't expect a best-fit line to give a good prediction unless the correlation is strong and there are many data points. If the sample points lie very close to the best-fit line, the correlation is very strong and the prediction is more likely to be accurate. If the sample points lie away from the best-fit line by substantial amounts, the correlation is weak and predictions tend to be much less accurate.
2. Don't use a best-fit line to make predictions beyond the bounds of the data points to which the line was fit.


Cautions in Making Predictions from Best-Fit Lines (cont.)
3. A best-fit line based on past data is not necessarily valid now and might not result in valid predictions of the future.
4. Don't make predictions about a population that is different from the population from which the sample data were drawn.
5. Remember that a best-fit line is meaningless when there is no significant correlation or when the relationship is nonlinear.


EXAMPLE 1 Valid Predictions?

State whether the prediction (or implied prediction) should be trusted in each of the following cases, and explain why or why not.
a. You've found a best-fit line for a correlation between the number of hours per day that people exercise and the number of calories they consume each day. You've used this correlation to predict that a person who exercises 18 hours per day would consume 15,000 calories per day.
Solution:
a. No one exercises 18 hours per day on an ongoing basis, so this much exercise must be beyond the bounds of any data collected. Therefore, a prediction about someone who exercises 18 hours per day should not be trusted.

EXAMPLE 1 Valid Predictions? (cont.)

b. There is a well-known but weak correlation between SAT scores and college grades. You use this correlation to predict the college grades of your best friend from her SAT scores.
Solution:
b. The fact that the correlation between SAT scores and college grades is weak means there is much scatter in the data. As a result, we should not expect great accuracy if we use this weak correlation to make a prediction about a single individual.


Overview of Linear Models

An equation can be fit to show the best linear relationship between two variables:

Y = β0 + β1X

where Y is the dependent variable and X is the independent variable, β0 is the Y-intercept, and β1 is the slope.

Copyright 2010 Pearson Education, Inc. Publishing as Prentice Hall


Least Squares Regression

Estimates for the coefficients β0 and β1 are found using a Least Squares Regression technique.

The least-squares regression line, based on sample data, is

ŷ = b0 + b1x

where b1 is the slope of the line and b0 is the y-intercept:

b1 = Cov(x, y) / s²x        b0 = ȳ - b1x̄

Introduction to Regression Analysis

Regression analysis is used to:
Predict the value of a dependent variable based on the value of at least one independent variable
Explain the impact of changes in an independent variable on the dependent variable

Dependent variable: the variable we wish to explain (also called the endogenous/response variable)
Independent variable: the variable used to explain the dependent variable (also called the exogenous/explanatory variable)


Linear Regression Model

The relationship between X and Y is described by a linear function.
Changes in Y are assumed to be caused by changes in X.

Linear regression population equation model:

Yi = β0 + β1xi + εi

where β0 and β1 are the population model coefficients and ε is a random error term.


Simple Linear Regression Model

The population regression model:

Yi = β0 + β1Xi + εi

where Yi is the dependent variable, β0 is the population Y-intercept, β1 is the population slope coefficient, Xi is the independent variable, and εi is the random error term. β0 + β1Xi is the linear component, and εi is the random error component.

Simple Linear Regression Model (continued)

Yi = β0 + β1Xi + εi

On a scatter diagram of the population, the observed value of Y for Xi differs from the predicted value on the population line (intercept β0, slope β1) by the random error εi for that Xi value.

Simple Linear Regression Equation

The simple linear regression equation provides an estimate of the population regression line:

ŷi = b0 + b1xi

where ŷi is the estimated (or predicted) y value for observation i, b0 is the estimate of the regression intercept, b1 is the estimate of the regression slope, and xi is the value of x for observation i.

The individual random error terms ei have a mean of zero:

ei = (yi - ŷi) = yi - (b0 + b1xi)

Least Squares Estimators

b0 and b1 are obtained by finding the values of b0 and b1 that minimize the sum of the squared differences between y and ŷ:

min SSE = min Σ ei² = min Σ (yi - ŷi)² = min Σ [yi - (b0 + b1xi)]²

Differential calculus is used to obtain the coefficient estimators b0 and b1 that minimize SSE.

Least Squares Estimators (continued)

The slope coefficient estimator is

b1 = Σ (xi - x̄)(yi - ȳ) / Σ (xi - x̄)² = Cov(x, y) / s²x = rxy (sy / sx)

and the constant or y-intercept is

b0 = ȳ - b1x̄

The regression line always goes through the mean (x̄, ȳ).
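The estimators can be evaluated directly from these definitions; a minimal sketch with made-up numbers (note the fitted line passes through the point of means by construction):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 5.0, 8.0])

xbar, ybar = x.mean(), y.mean()

# Slope: sum of cross-deviations over sum of squared x-deviations.
b1 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)

# Intercept: forces the line through the point of means.
b0 = ybar - b1 * xbar

print(round(b1, 6), round(b0, 6))  # 1.9 0.0
# The line passes through (xbar, ybar):
assert np.isclose(b0 + b1 * xbar, ybar)
```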

Finding the Least Squares Equation

The coefficients b0 and b1, and other regression results in this chapter, will be found using a computer:
Hand calculations are tedious.
Statistical routines are built into Excel.
Other statistical analysis software can be used.

Linear Regression Model Assumptions

The true relationship form is linear (Y is a linear function of X, plus random error).
The error terms, εi, are independent of the x values.
The error terms are random variables with mean 0 and constant variance, σ² (the constant variance property is called homoscedasticity):

E[εi] = 0 and E[εi²] = σ²   for i = 1, …, n

The random error terms, εi, are not correlated with one another, so that

E[εi εj] = 0   for all i ≠ j

Interpretation of the Slope and the Intercept

b0 is the estimated average value of y when the value of x is zero (if x = 0 is in the range of observed x values).

b1 is the estimated change in the average value of y as a result of a one-unit change in x.

Simple Linear Regression Example

A real estate agent wishes to examine the relationship between the selling price of a home and its size (measured in square feet).
A random sample of 10 houses is selected.
Dependent variable (Y) = house price in $1000s
Independent variable (X) = square feet

Sample Data for House Price Model

House Price in $1000s (Y)    Square Feet (X)
245                          1400
312                          1600
279                          1700
308                          1875
199                          1100
219                          1550
405                          2350
324                          2450
319                          1425
255                          1700
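Fitting these ten pairs in code reproduces the headline numbers of the Excel regression output for this example:

```python
import numpy as np

sqft  = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700], dtype=float)
price = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255], dtype=float)  # $1000s

# Degree-1 least-squares fit: returns (slope, intercept).
b1, b0 = np.polyfit(sqft, price, 1)
r = np.corrcoef(sqft, price)[0, 1]

print(round(b1, 5))  # 0.10977  (slope)
print(round(b0, 5))  # 98.24833 (intercept)
print(round(r, 5))   # 0.76211  (Multiple R)
```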

Graphical Presentation

House price model: scatter plot


Regression Using Excel

Excel will be used to generate the coefficients and measures of goodness of fit for regression:
Data / Data Analysis / Regression

Regression Using Excel (continued)

Data / Data Analysis / Regression
Provide desired input in the dialog.


Excel Output

Regression Statistics
Multiple R           0.76211
R Square             0.58082
Adjusted R Square    0.52842
Standard Error       41.33032
Observations         10

The regression equation is:

house price = 98.24833 + 0.10977 (square feet)

ANOVA
              df    SS          MS          F        Significance F
Regression     1    18934.9348  18934.9348  11.0848  0.01039
Residual       8    13665.5652   1708.1957
Total          9    32600.5000

              Coefficients  Standard Error  t Stat   P-value  Lower 95%  Upper 95%
Intercept     98.24833      58.03348        1.69296  0.12892  -35.57720  232.07386
Square Feet    0.10977       0.03297        3.32938  0.01039    0.03374    0.18580

Graphical Presentation

House price model: scatter plot and regression line
Slope = 0.10977
Intercept = 98.248

house price = 98.24833 + 0.10977 (square feet)

Interpretation of the Intercept, b0

house price = 98.24833 + 0.10977 (square feet)

b0 is the estimated average value of Y when the value of X is zero (if X = 0 is in the range of observed X values).
Here, no houses had 0 square feet, so b0 = 98.24833 just indicates that, for houses within the range of sizes observed, $98,248.33 is the portion of the house price not explained by square feet.

Interpretation of the Slope Coefficient, b1

house price = 98.24833 + 0.10977 (square feet)

b1 measures the estimated change in the average value of Y as a result of a one-unit change in X.
Here, b1 = 0.10977 tells us that the average value of a house increases by 0.10977($1000) = $109.77, on average, for each additional one square foot of size.

Measures of Variation

Total variation is made up of two parts:

SST = SSR + SSE

SST: Total Sum of Squares         SST = Σ(yi - ȳ)²
SSR: Regression Sum of Squares    SSR = Σ(ŷi - ȳ)²
SSE: Error Sum of Squares         SSE = Σ(yi - ŷi)²

where:
ȳ = average value of the dependent variable
yi = observed values of the dependent variable
ŷi = predicted value of y for the given xi value

Measures of Variation (continued)

SST = total sum of squares
Measures the variation of the yi values around their mean, ȳ
SSR = regression sum of squares
Explained variation attributable to the linear relationship between x and y
SSE = error sum of squares
Variation attributable to factors other than the linear relationship between x and y

Measures of Variation (continued)

On the scatter diagram, for each observed point: SST sums the squared deviations (yi - ȳ)² of the data from their mean, SSR sums the squared deviations (ŷi - ȳ)² of the fitted line from the mean, and SSE sums the squared deviations (yi - ŷi)² of the data from the fitted line.

Coefficient of Determination, R²

The coefficient of determination is the portion of the total variation in the dependent variable that is explained by variation in the independent variable. The coefficient of determination is also called R-squared and is denoted as R²:

R² = SSR / SST = regression sum of squares / total sum of squares

note: 0 ≤ R² ≤ 1
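Computing the three sums of squares for the house-price data confirms both the decomposition SST = SSR + SSE and the R² that the Excel output reports:

```python
import numpy as np

sqft  = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700], dtype=float)
price = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255], dtype=float)

b1, b0 = np.polyfit(sqft, price, 1)
y_hat = b0 + b1 * sqft

sst = np.sum((price - price.mean()) ** 2)  # total variation
ssr = np.sum((y_hat - price.mean()) ** 2)  # explained variation
sse = np.sum((price - y_hat) ** 2)         # unexplained variation

assert np.isclose(sst, ssr + sse)
print(round(sst, 1))        # 32600.5
print(round(ssr / sst, 5))  # 0.58082
```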

Examples of Approximate r² Values

r² = 1: Perfect linear relationship between X and Y; 100% of the variation in Y is explained by variation in X.

Examples of Approximate r² Values (continued)

0 < r² < 1: Weaker linear relationships between X and Y; some but not all of the variation in Y is explained by variation in X.

Examples of Approximate r² Values (continued)

r² = 0: No linear relationship between X and Y; the value of Y does not depend on X (none of the variation in Y is explained by variation in X).

Excel Output (continued)

Regression Statistics
Multiple R           0.76211
R Square             0.58082
Adjusted R Square    0.52842
Standard Error       41.33032
Observations         10

R² = SSR / SST = 18934.9348 / 32600.5000 = 0.58082

58.08% of the variation in house prices is explained by variation in square feet.

Correlation and R²

The coefficient of determination, R², for a simple regression is equal to the simple correlation squared:

R² = r²xy

Estimation of Model Error Variance

An estimator for the variance of the population model error is

s²e = Σ e²i / (n - 2) = SSE / (n - 2)

Division by n - 2 instead of n - 1 is because the simple regression model uses two estimated parameters, b0 and b1, instead of one.

se = √s²e is called the standard error of the estimate.
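Applied to the house-price data, this estimator reproduces the Standard Error line of the Excel output:

```python
import numpy as np

sqft  = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700], dtype=float)
price = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255], dtype=float)

n = len(sqft)
b1, b0 = np.polyfit(sqft, price, 1)
residuals = price - (b0 + b1 * sqft)

# Divide SSE by n - 2: two parameters (b0, b1) were estimated.
s_e = np.sqrt(np.sum(residuals ** 2) / (n - 2))
print(round(s_e, 4))  # 41.3303
```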

Excel Output (continued)

In the Regression Statistics block of the same output, the Standard Error entry is the standard error of the estimate:

se = 41.33032

Comparing Standard Errors

se is a measure of the variation of observed y values from the regression line.

The magnitude of se should always be judged relative to the size of the y values in the sample data; e.g., se = $41.33K is moderately small relative to house prices in the $200K to $300K range.

Inferences About the Regression Model

The variance of the regression slope coefficient (b1) is estimated by

s²b1 = s²e / Σ(xi - x̄)² = s²e / ((n - 1) s²x)

where:
sb1 = estimate of the standard error of the least squares slope
se = √(SSE / (n - 2)) = standard error of the estimate

Excel Output (continued)

In the coefficients table of the same output, the Standard Error entry for Square Feet is the standard error of the slope:

sb1 = 0.03297

Comparing Standard Errors of the Slope

sb1 is a measure of the variation in the slope of regression lines from different possible samples.

Inference about the Slope: t Test

t test for a population slope: Is there a linear relationship between X and Y?

Null and alternative hypotheses:
H0: β1 = 0 (no linear relationship)
H1: β1 ≠ 0 (linear relationship does exist)

Test statistic:

t = (b1 - β1) / sb1,   d.f. = n - 2

where:
b1 = regression slope coefficient
β1 = hypothesized slope
sb1 = standard error of the slope

Inference about the Slope: t Test (continued)

(The data are the ten house price / square feet pairs shown earlier.)

Estimated Regression Equation:

house price = 98.25 + 0.1098 (sq.ft.)

The slope of this model is 0.1098. Does square footage of the house affect its sales price?

Inferences about the Slope: t Test Example

H0: β1 = 0
H1: β1 ≠ 0

From Excel output:
              Coefficients (b1)  Standard Error (sb1)  t Stat   P-value
Intercept     98.24833           58.03348              1.69296  0.12892
Square Feet    0.10977            0.03297              3.32938  0.01039

t = (b1 - β1) / sb1 = (0.10977 - 0) / 0.03297 = 3.32938
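The same statistic computed from scratch, with the two-tailed p-value from the t distribution with n - 2 = 8 degrees of freedom (scipy.stats is used only for the distribution):

```python
import numpy as np
from scipy import stats

sqft  = np.array([1400, 1600, 1700, 1875, 1100, 1550, 2350, 2450, 1425, 1700], dtype=float)
price = np.array([245, 312, 279, 308, 199, 219, 405, 324, 319, 255], dtype=float)

n = len(sqft)
b1, b0 = np.polyfit(sqft, price, 1)
resid = price - (b0 + b1 * sqft)

s_e = np.sqrt(np.sum(resid ** 2) / (n - 2))              # standard error of the estimate
s_b1 = s_e / np.sqrt(np.sum((sqft - sqft.mean()) ** 2))  # standard error of the slope

t_stat = (b1 - 0) / s_b1                         # test H0: beta1 = 0
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)  # two-tailed

print(round(t_stat, 4))   # 3.3294
print(round(p_value, 5))  # 0.01039
```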

Inferences about the Slope: t Test Example (continued)

H0: β1 = 0
H1: β1 ≠ 0

Test Statistic: t = 3.329
d.f. = 10 - 2 = 8
t8,.025 = 2.3060

With α/2 = .025 in each tail, reject H0 if t < -2.3060 or t > 2.3060; do not reject H0 otherwise. Here 3.329 > 2.3060.

Decision: Reject H0
Conclusion: There is sufficient evidence that square footage affects house price.

Inferences about the Slope: t Test Example (continued)

P-value = 0.01039

H0: β1 = 0
H1: β1 ≠ 0

This is a two-tail test, so the p-value is
P(t > 3.329) + P(t < -3.329) = 0.01039 (for 8 d.f.)

Decision: P-value < α, so reject H0
Conclusion: There is sufficient evidence that square footage affects house price.

Confidence Interval Estimate for the Slope

Confidence Interval Estimate of the Slope:

b1 - tn-2,α/2 sb1 < β1 < b1 + tn-2,α/2 sb1,   d.f. = n - 2

Excel Printout for House Prices:
              Coefficients  Standard Error  t Stat   P-value  Lower 95%  Upper 95%
Intercept     98.24833      58.03348        1.69296  0.12892  -35.57720  232.07386
Square Feet    0.10977       0.03297        3.32938  0.01039    0.03374    0.18580

At the 95% level of confidence, the confidence interval for the slope is (0.0337, 0.1858).
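The interval can be reproduced from b1, sb1, and the t critical value for 8 degrees of freedom:

```python
from scipy import stats

b1, s_b1, n = 0.10977, 0.03297, 10  # slope, its standard error, sample size (from the output)

t_crit = stats.t.ppf(0.975, df=n - 2)  # 95% two-tailed critical value, d.f. = 8
lower = b1 - t_crit * s_b1
upper = b1 + t_crit * s_b1

print(round(t_crit, 4))                  # 2.306
print(round(lower, 4), round(upper, 4))  # 0.0337 0.1858
```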

Confidence Interval Estimate for the Slope (continued)

              Coefficients  Standard Error  t Stat   P-value  Lower 95%  Upper 95%
Intercept     98.24833      58.03348        1.69296  0.12892  -35.57720  232.07386
Square Feet    0.10977       0.03297        3.32938  0.01039    0.03374    0.18580

Since the units of the house price variable are $1000s, we are 95% confident that the average impact on sales price is between $33.70 and $185.80 per square foot of house size.
This 95% confidence interval does not include 0.
Conclusion: There is a significant relationship between house price and square feet at the .05 level of significance.

Prediction

The regression equation can be used to predict a value for y, given a particular x.
For a specified value, xn+1, the predicted value is

ŷn+1 = b0 + b1xn+1

Predictions Using Regression Analysis

Predict the price for a house with 2000 square feet:

house price = 98.25 + 0.1098 (sq.ft.)
            = 98.25 + 0.1098(2000)
            = 317.85

The predicted price for a house with 2000 square feet is 317.85($1,000s) = $317,850
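The same prediction in code, using the rounded coefficients from the slide (the unrounded fit gives roughly 317.8 instead of 317.85):

```python
# Rounded coefficients from the fitted house-price model (price in $1000s).
b0, b1 = 98.25, 0.1098

def predict_price(sq_ft: float) -> float:
    """Predicted house price in $1000s for a house of the given size."""
    return b0 + b1 * sq_ft

print(round(predict_price(2000), 2))  # 317.85, i.e. $317,850
```

Note that 2000 square feet lies inside the observed range of sizes (1100 to 2450), so this is an interpolation rather than a risky extrapolation.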

Relevant Data Range

When using a regression model for prediction, only predict within the relevant range of data.
It is risky to try to extrapolate far beyond the range of observed Xs.

Correlation Analysis

Correlation analysis is used to measure the strength of the association (linear relationship) between two variables.
Correlation is only concerned with the strength of the relationship.
No causal effect is implied with correlation.
Correlation was first presented in Chapter 3.

Correlation Analysis (continued)

The population correlation coefficient is denoted ρ (the Greek letter rho).
The sample correlation coefficient is

r = sxy / (sx sy)

where

sxy = Σ(x - x̄)(y - ȳ) / (n - 1)

Hypothesis Test for Correlation

To test the null hypothesis of no linear association,

H0: ρ = 0

the test statistic follows the Student's t distribution with (n - 2) degrees of freedom:

t = r √(n - 2) / √(1 - r²)
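For the house-price data (r = 0.76211, n = 10) this statistic equals the slope t statistic from the regression output, as it must in simple regression:

```python
import math

r, n = 0.76211, 10  # sample correlation and sample size from the house-price example

t_stat = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
print(round(t_stat, 3))  # 3.329: same as the t statistic for the slope
```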

Decision Rules
Hypothesis Test for Correlation

Lower-tail test:   H0: ρ ≥ 0, H1: ρ < 0.   Reject H0 if t < -tn-2,α
Upper-tail test:   H0: ρ ≤ 0, H1: ρ > 0.   Reject H0 if t > tn-2,α
Two-tail test:     H0: ρ = 0, H1: ρ ≠ 0.   Reject H0 if t < -tn-2,α/2 or t > tn-2,α/2

where t = r √(n - 2) / √(1 - r²) has n - 2 d.f.
