
Weighted Least-Squares Regression

A technique for correcting the problem of heteroskedasticity by log-likelihood estimation of a weight that adjusts the errors of prediction

Weighted Least-Squares Regression: Charles M. Friel Ph.D., Criminal Justice Center, Sam Houston State University

Key Concepts

OLS parameter estimates as unbiased, efficient, BLUE
Theoretical sampling distribution of b
Standard error of b
Relationship between the standard error of b and: the variance of X, the residual sum of squares, and the sample size
Gauss-Markov Theorem
Assumptions about the errors (e) in regression analysis and the consequences of their violation:
- e is uncorrelated with X
- e has the same variance across all levels of X
- the values of e are independent of each other
- e is normally distributed
The concepts of homoskedasticity and heteroskedasticity of the error distributions
The concept of autocorrelation or serial correlation
Spurious, collinear, and intervening relationships
Techniques for identifying heteroskedasticity: graphic and statistical
White's test for heteroskedasticity
Residualizing a variable
Techniques for identifying WLS weights:
- theory, the literature, or prior experience
- regression of e² on X and transformation
- log-likelihood estimation of wi
SPSS weight estimation procedure
SPSS WLS>> procedure


Overview

Theoretical sampling distribution of b
Assumptions about errors in regression
Identifying heteroskedasticity
The concept of weighted least-squares regression
Methods for estimating weights: regressing e² on X; log-likelihood estimation of weights
Using the WLS>> command in SPSS


References
White, Halbert (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48:817-838.
Graybill, Franklin A. and Iyer, Hariharan K. (1994). Regression Analysis: Concepts and Applications. Duxbury Press, 571-592.
Freund, Rudolf J. and Wilson, William J. (1998). Regression Analysis: Statistical Modeling of a Response Variable. Academic Press, 378-382.
McClendon, McKee J. (1994). Multiple Regression and Causal Analysis. F. E. Peacock Publishers, Inc., 138-146, 174-181, 189-197.


Violation of OLS Regression Assumptions

Y = a + b1X1 + b2X2 + ... + bkXk

OLS regression makes various assumptions about the errors that result from a regression model. If these assumptions are met, one can assume that the estimates of the regression constant (a) and the regression coefficients (bk) are:

Unbiased: replications of the study will yield values of a and bk distributed on either side of their respective parameters α and βk.

Efficient: the standard errors of a and bk will neither over- nor underestimate their associated theoretical standard errors.

Violation of one or more of these assumptions may lead to biased and/or inefficient estimates.


Theoretical Sampling Distribution of b

[Figure: the theoretical sampling distribution of b for a population in which Y = α + βX. The distribution of b is centered on β; 68.26% of sample estimates of b fall within one theoretical standard error of b (σb) of β.]

σb = σε / (SX √n)

σε = √[ Σ(Y - Ŷ)² / N ]

SX = √[ Σ(X - X̄)² / N ]


The Theoretical Standard Error of b

σb = σε / (SX √n)

The standard error of b (σb) is directly related to the standard deviation of the errors produced by the model (σε): the greater the errors produced by the model, the greater the standard error of b.

The standard error of b (σb) is inversely related to the standard deviation of the predictor variable (SX): as the variability of X increases, the standard error of b decreases.


The standard error of b (σb) is inversely related to the sample size (n): as the sample size increases, the standard error of b decreases.

Estimation of the Theoretical Standard Error of b

The theoretical standard error of b (σb) is usually estimated from a single sample rather than from a sampling distribution of b.

SEb = Se / √TSSX


Se = √[ RSS / (n - k) ]

TSSX = Σ(X - X̄)²
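As an illustration of these estimates (not part of the original slides, which compute them in SPSS), here is a minimal Python/NumPy sketch of SEb for a simple regression; the function name and variables x and y are assumptions made for the example.

```python
import numpy as np

def standard_error_of_b(x, y):
    """Estimate SEb = Se / sqrt(TSSX) for the simple regression y = a + b*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, k = len(x), 2                      # k counts the estimated parameters (a and b)
    b, a = np.polyfit(x, y, 1)            # OLS slope and intercept
    rss = np.sum((y - (a + b * x)) ** 2)  # residual sum of squares
    se = np.sqrt(rss / (n - k))           # standard deviation of the residuals, Se
    tss_x = np.sum((x - x.mean()) ** 2)   # total sum of squares of X
    return se / np.sqrt(tss_x)
```

Note that with k counting both the constant and the slope, the divisor for a simple regression is n - 2, which matches the residual df of 68 reported later for the n = 70 example.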


Gauss-Markov Theorem

b is an unbiased estimate of β: on repeated estimates, the distribution of b will be centered around β. The sampling distribution of b will be normal if the samples are large and a sufficient number of samples are taken. OLS provides the best linear unbiased estimate of β (BLUE). Best means that OLS provides the most unbiased and efficient estimate of β. Efficiency refers to the size of the standard error of b (σb): neither too large nor too small.


The Four Assumptions About Regression Error

e = Y - Ŷ, the prediction error

1. e is uncorrelated with X (the independence assumption).
2. e has the same variance (Se²) across the different levels of X, i.e. the variance of e is homoskedastic rather than heteroskedastic.
3. The values of e are independent of each other, i.e. not autocorrelated or serially correlated.
4. e is normally distributed.


The Problem of the Correlation of e & X

Y = a + bX

Spurious relationship: e and X may be correlated because Z is a common cause of X and Y. In this case b is a biased estimate of β.
[Path diagram: Z is a common cause of both X and Y.]

Collinear relationship: if X2 is correlated with X1 and Y but is not the cause of either, b1 will be a biased estimate of β1.
[Path diagram: X1 and X2 are correlated; only X1 causes Y.]



Correlation of e with X (cont.)

Intervening relationship: X2 intervenes in the relationship between X1 and Y. In this case b1 will not be a biased estimate of β1, but it will reflect both the direct and indirect effects of X1 on Y.
[Path diagram: X1 causes X2, which in turn causes Y.]


Homoskedasticity of Errors (e) Over Levels of X


[Figure: four residual scatterplots; the dotted lines represent the pattern of the dispersion of the residuals. Panels: homoskedastic; heteroskedastic (+), with the correlation of X and Se² > 0; heteroskedastic (-), with the correlation of X and Se² < 0; heteroskedastic, hour-glass distributed.]


Consequences of Heteroskedasticity

b will be an unbiased estimate of β, but SEb will be inefficient, i.e. too large or too small.

SEb = √[ Σ(Y - Ŷ)² / (n - k) ] / √TSSX

If SEb is overestimated (correlation of X and Se² < 0), b will not be an efficient estimate of β and a Type II error may occur, since t = b / SEb.

If SEb is underestimated (correlation of X and Se² > 0), b will not be an efficient estimate of β and a Type I error may occur, since t = b / SEb.


Consequences of the Errors (e) Being Autocorrelated


An example of a time series:
[Figure: plot of Y against Time.]

e at time t will likely be related to e at time t-1, and so forth. b will remain an unbiased estimate of β, but SEb will be biased and not efficient, since t = b / SEb.


If SEb is overestimated, a Type II error may occur. If SEb is underestimated, a Type I error may occur.


The Distribution of the Errors

OLS regression assumes that the errors of prediction are normally distributed. This can be tested by saving the errors and plotting either a histogram or a normal probability plot of the errors.
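A minimal Python sketch of this check (the slides do it through SPSS): it draws a histogram and a normal probability (Q-Q) plot of saved residuals. The array `resid` is assumed to hold residuals from a previously fitted model.

```python
import matplotlib.pyplot as plt
from scipy import stats

def plot_residual_normality(resid):
    """Histogram and normal probability (Q-Q) plot of regression residuals."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.hist(resid, bins=15, edgecolor="black")
    ax1.set_title("Histogram of residuals")
    stats.probplot(resid, dist="norm", plot=ax2)  # normal probability plot
    ax2.set_title("Normal Q-Q plot of residuals")
    plt.tight_layout()
    plt.show()
```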

[Figure: errors as a function of predictions. Histogram legend: Std. Dev = 4.04, Mean = 22.9, N = 70; x-axis labeled Predictions, ticks 17.5 to 35.0.]

Distribution of Errors (cont.)

Normal probability plot of errors

[Figure: normal P-P plot of the errors; expected cumulative probability (0.00 to 1.00) plotted against observed cumulative probability (0.00 to 1.00).]

If the errors are non-normally distributed


b may still be unbiased and efficient if the homoskedasticity and independence assumptions are met and the sample is large.


If the sample is small, the use of the t distribution in determining the significance of b and its confidence interval will be biased.


Summary of Assumptions and The Consequences of Their Violation

Assumption violated and its consequences:

Errors correlated with X (spurious relationship): b is a biased estimate of β.
Errors correlated with X (collinear relationship): b is a biased estimate of β.
Errors correlated with X (intervening relationship): b is an unbiased estimate of β but reflects both direct and indirect effects.
Heteroskedasticity (correlation of X and Se² ≠ 0): b is unbiased but not efficient; SEb is too small or too large; a Type I or II error may result.
Autocorrelated errors: b is unbiased but not efficient; SEb is too small or too large; a Type I or II error may result.
Errors non-normally distributed: b may be unbiased if the homoskedasticity and independence assumptions are met and N is large; if N is small, the t distribution may be biased.


Heteroskedastic Errors and Weighted Least-Squares Regression

If the errors are heteroskedastically distributed, SEb may be inefficient, i.e. either too small or too large, which may lead to a Type I or II error.

Ways to detect heteroskedasticity:
- Scatterplot of X against Y (prior to analysis)
- Scatterplot of predictions against residuals, either unstandardized or standardized
- Scatterplot of X against the residuals
- Scatterplot of X against the absolute value of the residuals (|e|)
- Scatterplot of X against the squared residuals (e²)
- White's test for heteroskedasticity
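The slides produce these diagnostic plots in SPSS; the following Python sketch (an illustration under those assumptions, not the original procedure) shows the residuals-against-predictions version for a simple regression.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_resid_vs_fitted(x, y):
    """Fit y = a + b*x by OLS and plot residuals against fitted values."""
    b, a = np.polyfit(x, y, 1)            # slope, intercept
    fitted = a + b * np.asarray(x, float)
    resid = np.asarray(y, float) - fitted
    plt.scatter(fitted, resid)
    plt.axhline(0, color="grey", linewidth=1)
    plt.xlabel("Predicted value")
    plt.ylabel("Residual")
    plt.title("A fan or funnel shape suggests heteroskedastic errors")
    plt.show()
```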


Example Scatterplot of X Against Y

Sentence length (Y) as a function of drug dependency (X)

[Scatterplot: sentence length (y-axis, 0 to 30) against DR_SCORE (x-axis, 0 to 12), showing heteroskedasticity.]


As drug score increases, the variability in sentence increases


Example Scatterplot of Predictions Against Residuals

Sentence length (Y) as a function of drug dependency (X)

[Scatterplot: residuals (roughly -10 to 20) against the predicted value (2 to 9), showing heteroskedasticity.]


As predicted sentence becomes longer, variability in residuals becomes greater.


White's Test for Heteroskedasticity


White, Halbert (1980) A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48:817-838.

Step 1: Compute the regression equation Y = a + bX.
Step 2: Save the residuals (e = Y - Ŷ).
Step 3: Square the residuals (e²).
Step 4: Regress e² on X and record the R² from this analysis (i.e. the residualization of X).
Step 5: Calculate White's chi-square: χ² = n · R².


H0: the residuals are homoskedastic; n = number of cases; df = number of independent variables.
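A minimal Python sketch of the test as described above (the slides carry it out in SPSS). It regresses the squared residuals on X and forms χ² = n·R². statsmodels also offers a ready-made het_white function, but that version adds X² terms to the auxiliary regression, so its numbers can differ slightly from the simple form sketched here.

```python
import numpy as np
from scipy import stats

def white_test_simple(x, y):
    """White-style test: chi-square = n * R^2 from regressing e^2 on X."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b, a = np.polyfit(x, y, 1)
    e2 = (y - (a + b * x)) ** 2            # squared residuals
    r2 = np.corrcoef(x, e2)[0, 1] ** 2     # R^2 of the auxiliary regression of e^2 on X
    chi2 = n * r2
    df = 1                                  # one independent variable
    p = 1 - stats.chi2.cdf(chi2, df)
    return chi2, df, p
```

With the slide's numbers (n = 70 and an auxiliary R² of 0.06517), this gives χ² ≈ 4.56 on 1 df.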

White's Test for Heteroskedasticity (cont.)

Example: the regression of sentence on dr_score. The auxiliary regression of e² on dr_score gives R² = 0.06517.

χ² = n · R² = (70)(0.06517) = 4.56, df = 1, p < 0.05

Statistical decision: reject the H0 that the residuals are homoskedastic.


How Does One Correct the Problem of Heteroskedastic Errors?


Solution: weighted least-squares regression (WLS regression).

The logic of WLS regression: find a weight (wi) that can be used to modify the influence of large errors on the estimation of the best-fit values of the regression constant (a) and the regression coefficients (bk).

OLS is designed to minimize Σ(Y - Ŷ)². In WLS, values of a and bk are estimated which minimize RSS = Σ wi (Y - Ŷ)².


This process has the effect of minimizing the influence of a case with a large error on the estimation of a and bk, and maximizing the influence of a case with a small error on the estimation of a and bk.
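For reference (not part of the slides, which use SPSS), a weighted fit of this kind can be sketched in Python with statsmodels; the weight vector w stands for whatever wi one has chosen.

```python
import numpy as np
import statsmodels.api as sm

def fit_wls(x, y, w):
    """Estimate y = a + b*x by minimizing the weighted RSS = sum(w * (y - yhat)^2)."""
    X = sm.add_constant(np.asarray(x, float))   # column of 1s plus x
    model = sm.WLS(np.asarray(y, float), X, weights=np.asarray(w, float))
    return model.fit()                           # .params holds a and b

# Hypothetical usage: down-weight cases in proportion to the predicted squared
# residual (pred_e2 would come from regressing e^2 on x, as in the slides):
# results = fit_wls(x, y, w=1.0 / pred_e2)
```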


Techniques for Estimating a Suitable Value of a Weight wi

1. From theory, the literature, or experience gained in prior research. Rarely will this approach prove successful, except by trial and error.
2. Estimate wi by regressing e² on the offending independent variable X and transforming the values of X and Y. This is called residualizing the variable X.
3. Use log-likelihood estimation to determine a suitable value of wi. This can be done in SPSS using the regression weight estimation procedure coupled with the WLS>> procedure.


In the following case study both the residualizing and SPSS WLS>> procedures will be demonstrated.


An Example: The Relationship Between Drug Dependency & Length of Sentence

The model

Sentence = a + b (drug_score)

The results Sentence = 1.97 + 0.6438 (drug_score)

For this model to be BLUE, the residuals must be homoskedastic.


Q Are the residuals homoskedastic?

An Example (cont.)

Scatterplot of the residuals: notice how the residuals become larger as the degree of drug dependency increases. These are heteroskedastic residuals.

[Scatterplot: heteroskedastic residuals; standardized residuals (-2 to 5) against the standardized predicted value (-2.0 to 1.5).]


Solving the Problem of Heteroskedasticity


Solution: residualize the offending variable X.

Steps in the process:

1. Plot X against Y to determine the presence of heteroskedasticity.

2. Estimate the following regression equation and save the residuals (e = Y - Ŷ). In SPSS the residuals appear as res_1.


Y = a + bX

Solving the Problem of Heteroskedasticity (cont.)

3. Square the residuals

e2 = (res_1)2 = residsq

4. Regress residsq on X and save the predicted residsq, in SPSS this is called pre_2

Residsq = a + bX

5. Transform X and Y, and compute a weight wi called wtsqroot:

wtX = X / √pre_2
wtY = Y / √pre_2
wtsqroot = 1 / √pre_2

Solving the Problem of Heteroskedasticity (cont.)

6. Estimate the following weighted regression equation through the origin, i.e. with a regression constant equal to 0.0

wtY = a(wtsqroot) + b(wtX)
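A compact Python sketch of steps 2-6 (an illustration of the same logic; the slides do this through SPSS menus, and the helper name residualize_wls is invented here):

```python
import numpy as np
import statsmodels.api as sm

def residualize_wls(x, y):
    """Residualize X: regress e^2 on X, build weights from the predictions,
    and refit the transformed equation through the origin."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = sm.add_constant(x)
    ols = sm.OLS(y, X).fit()                # step 2: Y = a + bX, save residuals
    e2 = ols.resid ** 2                     # step 3: squared residuals
    aux = sm.OLS(e2, X).fit()               # step 4: regress e^2 on X
    pre2 = np.abs(aux.fittedvalues)         # predicted e^2 (absolute value, as in step 5)
    wt_y = y / np.sqrt(pre2)                # step 5: transform Y, X, and the weight
    wt_x = x / np.sqrt(pre2)
    wtsqroot = 1.0 / np.sqrt(pre2)
    Z = np.column_stack([wtsqroot, wt_x])   # step 6: regression through the origin
    return sm.OLS(wt_y, Z).fit()            # coefficient on wtsqroot plays the role of a
```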


Step 1
Sentence length as a function of drug dependency

SPSS scatterplot of sentence as a function of dr_score. This can only be done when there are two or fewer independent variables.

Sentence as a function of drug dependency


[Scatterplot: sentence (y-axis, 0 to 30) against drug dependency (x-axis, 0 to 12).]


Heteroskedastic: the variability in sentence length increases as the degree of drug dependency increases.


Step 2 Regress sentence length on drug dependency, save the residuals (res_1) and the predictions (pre_1)

sentence = 1.975 + 0.644 dr_score

R2 = 0.12 (F = 9.24, p = 0.003)

SPSS results for Step 2


Regression

Variables Entered/Removed (b), Model 1: Variables Entered: DR_SCORE (a); Variables Removed: none; Method: Enter.
a. All requested variables entered.  b. Dependent Variable: SENTENCE


Step 2 (cont.)

Model Summary (b)
Model 1: R = .346 (a), R Square = .120, Adjusted R Square = .107, Std. Error of the Estimate = 4.6816
a. Predictors: (Constant), DR_SCORE  b. Dependent Variable: SENTENCE

ANOVA (b), Model 1:
Regression: Sum of Squares 202.516, df 1, Mean Square 202.516, F 9.240, Sig. .003 (a)
Residual: Sum of Squares 1490.356, df 68, Mean Square 21.917
Total: Sum of Squares 1692.871, df 69
a. Predictors: (Constant), DR_SCORE  b. Dependent Variable: SENTENCE

Coefficients (a), Model 1:
(Constant): B 1.975, Std. Error 1.425, t 1.386, Sig. .170
DR_SCORE: B .644, Std. Error .212, Beta .346, t 3.040, Sig. .003
a. Dependent Variable: SENTENCE

Casewise Diagnostics (a)
Case Number 60: Std. Residual 3.956, SENTENCE 25.00
a. Dependent Variable: SENTENCE

Step 2 (cont.)

Residuals Statistics (a)
Predicted Value: Minimum 2.6185, Maximum 8.4128, Mean 5.9571, Std. Deviation 1.7132, N 70
Residual: Minimum -7.4128, Maximum 18.5186, Mean -7.61E-17, Std. Deviation 4.6475, N 70
Std. Predicted Value: Minimum -1.949, Maximum 1.433, Mean .000, Std. Deviation 1.000, N 70
Std. Residual: Minimum -1.583, Maximum 3.956, Mean .000, Std. Deviation .993, N 70
a. Dependent Variable: SENTENCE

[Scatterplot: heteroskedastic residuals; standardized residuals (-2 to 5) against the standardized predicted value (-2.0 to 1.5).]

N.B. The residuals are heteroskedastic. Compare this scatterplot with the scatterplot of sentence as a function of dr_score. Notice that the patterns are the same.


Step 3
Calculate the squared residuals

In SPSS, the unstandardized residuals are saved as res_1. Step 3 involves squaring the residuals by use of the data transformation procedure in SPSS.

squared residual = (res_1) 2 = residsq

The SPSS syntax for this transformation is as follows:

Residsq = res_1**2


The steps used in this transformation process are described in the case study associated with this module.

Step 3 (cont.)

SPSS results for Step 3

pre_1      res_1      residsq
7.76901   -6.76901    45.82
8.41282   -7.41282    54.95
8.41282   -7.41282    54.95
6.48139   -5.48139    30.05
7.76901   -5.76901    33.28
7.12520   -5.12520    26.27
7.76901   -5.76901    33.28
8.41282   -6.41282    41.12
5.83758   -2.83758     8.05
7.76901   -4.76901    22.74
2.61852    -.61852      .38
2.61852     .38148      .15
3.26233    1.73767     3.02
5.19377    1.80623     3.26
8.41282    -.41282      .17
7.76901    1.23099     1.52
7.12520    2.87480     8.26
6.48139    5.51861    30.46
7.12520    6.87480    47.26
7.12520    7.87480    62.01


Step 4
Regress the squared residuals on the independent variable dr_score and save the predictions as pre_2.

residsq = -7.587 + 4.6685 dr_score

R2 = 0.065 (F = 4.74, p = 0.0329) This process is called residualizing a variable. By OLS definition, the residuals (residsq) represent the variance in Y that is unrelated to X. Therefore, there should be no significant relationship between X and residsq. If there is, one or more OLS regression assumptions have been violated. In this case, the violated assumption is the homoskedasticity of the residuals.
Step 4 (cont.)

SPSS results for Step 4


Regression

Variables Entered/Removed (b), Model 1: Variables Entered: DR_SCORE (a); Variables Removed: none; Method: Enter.
a. All requested variables entered.  b. Dependent Variable: RESIDSQ

Model Summary (b)
Model 1: R = .255 (a), R Square = .065, Adjusted R Square = .051, Std. Error of the Estimate = 47.3953
a. Predictors: (Constant), DR_SCORE  b. Dependent Variable: RESIDSQ

ANOVA (b), Model 1:
Regression: Sum of Squares 10648.802, df 1, Mean Square 10648.802, F 4.741, Sig. .033 (a)
Residual: Sum of Squares 152749.4, df 68, Mean Square 2246.315
Total: Sum of Squares 163398.2, df 69
a. Predictors: (Constant), DR_SCORE  b. Dependent Variable: RESIDSQ

Coefficients (a), Model 1:
(Constant): B -7.587, Std. Error 14.422, t -.526, Sig. .601
DR_SCORE: B 4.669, Std. Error 2.144, Beta .255, t 2.177, Sig. .033
a. Dependent Variable: RESIDSQ

Casewise Diagnostics (a)
Case Number 60: Std. Residual 6.706, RESIDSQ 342.94
a. Dependent Variable: RESIDSQ

Step 4 (cont.)

Residuals Statistics (a)
Predicted Value: Minimum -2.9189, Maximum 39.0979, Mean 21.2908, Std. Deviation 12.4230, N 70
Residual: Minimum -38.9275, Maximum 317.8466, Mean 4.974E-15, Std. Deviation 47.0506, N 70
Std. Predicted Value: Minimum -1.949, Maximum 1.433, Mean .000, Std. Deviation 1.000, N 70
Std. Residual: Minimum -.821, Maximum 6.706, Mean .000, Std. Deviation .993, N 70
a. Dependent Variable: RESIDSQ


Step 5
Compute the absolute value of pre_2 (abspre_2) and three new variables: wtsent, wtdrug, and the weight wtsqroot.

wtsent = sentence / √abspre_2
wtdrug = dr_score / √abspre_2
wtsqroot = 1 / √abspre_2

pre_2 from the previous step is the information in the squared residuals (residsq) that is related to the IV dr_score.

Dividing sentence and dr_score by √abspre_2 reduces the influence of extreme values on the estimation of a and b.


Finally a third transformation is performed by creating the variable wtsqroot. This will serve as a weighting factor.

Step 5 (cont.)

SPSS results for Step 5

abspre_2   wtsent   wtdr_sco   wtsqroot
34.43       .17      1.53       .17
39.10       .16      1.60       .16
39.10       .16      1.60       .16
25.09       .20      1.40       .20
34.43       .34      1.53       .17
29.76       .37      1.47       .18
34.43       .34      1.53       .17
39.10       .32      1.60       .16
20.42       .66      1.33       .22
34.43       .51      1.53       .17
20.42       .66      1.33       .22
39.10      1.28      1.60       .16
34.43      1.53      1.53       .17
29.76      1.83      1.47       .18
25.09      2.40      1.40       .20
29.76      2.57      1.47       .18
29.76      2.75      1.47       .18


Step 6
Compute the WLS regression

wtsent = a(wtsqroot) + b (wtdrug)

Results: wtsent = 1.159 (wtsqroot) + 0.7833 (wtdrug) R2 = 0.674 (F = 70.29, p < 0.0001)

SPSS results for Step 6


Regression

Variables Entered/Removed (b, c), Model 1: Variables Entered: WTSQROOT, WTDR_SCO (a); Variables Removed: none; Method: Enter.
a. All requested variables entered.  b. Dependent Variable: WTSENT  c. Linear Regression through the Origin

Step 6 (cont.)

Model Summary (c, d)
Model 1: R = .821 (b), R Square = .674, Adjusted R Square = .664, Std. Error of the Estimate = .9655
a. For regression through the origin (the no-intercept model), R Square measures the proportion of the variability in the dependent variable about the origin explained by regression. This CANNOT be compared to R Square for models which include an intercept.
b. Predictors: WTSQROOT, WTDR_SCO  c. Dependent Variable: WTSENT  d. Linear Regression through the Origin

ANOVA (c, d), Model 1:
Regression: Sum of Squares 131.041, df 2, Mean Square 65.520, F 70.285, Sig. .000 (a)
Residual: Sum of Squares 63.390, df 68, Mean Square .932
Total: Sum of Squares 194.431 (b), df 70
a. Predictors: WTSQROOT, WTDR_SCO
b. This total sum of squares is not corrected for the constant because the constant is zero for regression through the origin.
c. Dependent Variable: WTSENT  d. Linear Regression through the Origin

Coefficients (a, b), Model 1:
WTDR_SCO: B .783, Std. Error .140, Beta .637, t 5.597, Sig. .000
WTSQROOT: B 1.159, Std. Error .605, Beta .218, t 1.917, Sig. .059
a. Dependent Variable: WTSENT  b. Linear Regression through the Origin

Step 6 (cont.)

Casewise Diagnostics (a, b)
Case Number 60: Std. Residual 3.796, WTSENT 4.99
a. Dependent Variable: WTSENT  b. Linear Regression through the Origin

Residuals Statistics (a, b)
Predicted Value: Minimum 1.1369, Maximum 2.0606, Mean 1.3580, Std. Deviation .1679, N 70
Residual: Minimum -1.3046, Maximum 3.6648, Mean 1.483E-03, Std. Deviation .9585, N 70
Std. Predicted Value: Minimum -1.317, Maximum 4.185, Mean .000, Std. Deviation 1.000, N 70
Std. Residual: Minimum -1.351, Maximum 3.796, Mean .002, Std. Deviation .993, N 70
a. Dependent Variable: WTSENT  b. Linear Regression through the Origin

Residuals of wtsent regressed on wtdrug

[Scatterplot: residuals (-2 to 4) against wtdrug (1.0 to 2.2).]

N.B. The heteroskedasticity has been reduced.


Comparison of the OLS vs. Residualized Regression Models


Comparing the scatterplots, notice the substantial reduction of heteroskedasticity.

Statistical results

Method         a       b       SEa     SEb     R²       p
OLS            1.975   0.644   1.425   0.212   0.1196   0.0034
Residualized   1.159   0.783   0.605   0.139   0.6740   0.0001

The residualized model is more efficient, SEs are smaller.

Comparison of 95% confidence intervals


Method         95% Confidence Interval   Difference
OLS            0.221 to 1.066            0.845
Residualized   0.504 to 1.062            0.558


N.B. The width of the residualized 95% confidence interval is less than that of the OLS interval.


An Alternative Procedure for Correcting Heteroskedasticity


Log-Likelihood Estimation of wi

If it can be assumed that the variance in the DV is proportional to the IV, log-likelihood estimation can be used to estimate wi. In this case it is assumed that

Sy² ∝ X^w  or  Sy² ∝ 1 / X^w

(∝ is read "proportional to"). In log-likelihood estimation of wi, the question is:

What power of X, i.e. wi, is most likely to have produced the proportional relationship between Sy² and X?
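A hedged illustration of the idea (not the SPSS weight estimation procedure itself): for each candidate power p, fit WLS with weights 1/X^p and evaluate the normal log-likelihood of a model in which Var(e) is proportional to X^p; keep the power with the largest log-likelihood. Function and variable names are invented for the sketch, and it assumes X > 0.

```python
import numpy as np
import statsmodels.api as sm

def estimate_power_weight(x, y, powers=np.arange(-3.0, 3.2, 0.2)):
    """Grid-search the power p in Var(e) = sigma^2 * x**p by maximum likelihood."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, X = len(x), sm.add_constant(x)
    best_p, best_ll = None, -np.inf
    for p in powers:
        w = 1.0 / x ** p                       # WLS weights for this power
        e = y - sm.WLS(y, X, weights=w).fit().fittedvalues
        sigma2 = np.sum(w * e ** 2) / n        # MLE of sigma^2 given p
        ll = (-0.5 * n * (np.log(2 * np.pi) + 1 + np.log(sigma2))
              - 0.5 * p * np.sum(np.log(x)))   # profile log-likelihood
        if ll > best_ll:
            best_p, best_ll = p, ll
    return best_p, best_ll
```

For the sentence/dr_score data, the slides report that the maximum occurs at a power of 1.8.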


SPSS Weight Estimation and WLS>> Regression Procedures


This procedure begins by using log-likelihood estimation to iteratively determine a weight wi, to be used in estimating the values of the regression constant (a) and the regression coefficient (b) such that the RSS is minimized:

RSS = Σ [ (1 / X^wi) (Y - Ŷ)² ]

This may solve the heteroskedasticity problem if:

Sy² ∝ X^w  or  Sy² ∝ 1 / X^w



Step 1
Estimation of the weight wi using the SPSS weight estimation procedure

The result: the most likely weight is 1.8. The variance in sentence is estimated to be

Sy² = (dr_score)^1.8

Regression equation

sentence = 0.94 + 0.83 (dr_score)


For a drug score of 6, the prediction would be sentence = 0.94 + 0.83 (6) = 5.92 years.

Step 1 (cont.)

Examination of weights for individual subjects


Subject   dr_score   Weight
Jones        10      1/(10)^1.8 = 0.01585
Smith         1      1/(1)^1.8 = 1.00

Step 1 (cont.)

SPSS results
Weighted Least Squares

MODEL: MOD_1. Dependent variable: SENTENCE. Source variable: DR_SCORE.

POWER value : Log-likelihood function
-3.000 : -247.704093    -2.800 : -244.711034    -2.600 : -241.746674
-2.400 : -238.811376    -2.200 : -235.905669    -2.000 : -233.030285
-1.800 : -230.186198    -1.600 : -227.374668    -1.400 : -224.597297
-1.200 : -221.856103    -1.000 : -219.153638     -.800 : -216.493167
 -.600 : -213.878940     -.400 : -211.316576     -.200 : -208.813565
  .000 : -206.379886      .200 : -204.028712      .400 : -201.777240
  .600 : -199.647653      .800 : -197.668318     1.000 : -195.875260
 1.200 : -194.313939     1.400 : -193.041226     1.600 : -192.127266
 1.800 : -191.656680     2.000 : -191.728189     2.200 : -192.451535
 2.400 : -193.940595     2.600 : -196.302375     2.800 : -199.623060
 3.000 : -203.954229

The value of POWER maximizing the log-likelihood function = 1.800

Log-likelihood estimated weight wi = 1.8

Step 1 (cont.)

Estimation of the weighted regression model


Source variable: DR_SCORE.  POWER value = 1.800.  Dependent variable: SENTENCE.  Listwise deletion of missing data.

Multiple R = .63648, R Square = .40510, Adjusted R Square = .39636, Standard Error = .84375

Analysis of Variance:
Regression: df 1, Sum of Squares 32.965466, Mean Square 32.965466
Residuals: df 68, Sum of Squares 48.409826, Mean Square .711909
F = 46.30572, Signif F = .0000

Variables in the Equation:
DR_SCORE: B .828470, SE B .121747, Beta .636478, T 6.805, Sig T .0000
(Constant): B .939977, SE B .394588, T 2.382, Sig T .0200

Log-likelihood Function = -191.656680
The following new variable is being created: WGT_1 (Weight for SENTENCE from WLS, MOD_1 DR_SCORE** -1.800)

Weighted equation: Sentence = 0.9399 + 0.8285 (dr_score)

Unweighted equation: Sentence = 1.97 + 0.6438 (dr_score)


Step 2
Plot the relationship between dr_score and the weight wi

Scatterplot of the weight-adjusted dr_score against dr_score, where weight-adjusted dr_score = 1/(dr_score)**1.8.


[Scatterplot: weight-adjusted dr_score (y-axis, 0.0 to 1.2) against DR_SCORE (x-axis, 0 to 12).]

The heteroskedasticity problem


Recall the previous scatterplot: as the value of drug scores increases, the variance in sentences increases as well.

The log-likelihood estimated weight is such that


As the value of drug score increases, the weight adjusted drug score (wgt_1) decreases.

The Effect of the Log-Likelihood Estimated Weight on the Regression Errors


OLS regression estimates the best-fit values of a and b by minimizing RSS = Σ(Y - Ŷ)². WLS regression estimates the best-fit values of a and b by minimizing RSS = Σ wi (Y - Ŷ)².

For offender Jones, with a drug score of 10:

weighted e² = (1 / 10^1.8) (Y - Ŷ)² = (0.01585) (Y - Ŷ)²

Since prediction errors increase as drug score increases, the weight of 1.8 reduces the effect of large errors on the RSS, providing a more efficient estimate of SEb.


Step 3
The SPSS WLS>> option in linear regression: if an appropriate weight wi is already known by other means, the WLS>> procedure in SPSS linear regression can be used instead of the SPSS weight estimation procedure.

The procedure: simply specify the regression model, enter the known weight variable under the WLS>> command, and estimate the model. In this case, the weight variable wgt_1 from Step 2 will be used.

Step 3 (cont.)

The results of the WLS>> analysis using the weight variable wgt_1, wi = 1.8, with regression through the origin:

R² = 0.668, F = 139.14, p = 0.0001; sentence = 1.036 (dr_score); SEb = 0.087

N.B. Since this model does not include a constant (a), the R2 and the other statistical results can not be compared with the associated values of a model that does use a constant (a).

Step 3 (cont.)

SPSS results
Regression
Variables Entered/Removed (b, c), Model 1: Variables Entered: DR_SCORE (a); Variables Removed: none; Method: Enter.
a. All requested variables entered.  b. Dependent Variable: SENTENCE
c. Weighted Least Squares Regression - Weighted by Weight for SENTENCE from WLS, MOD_1 DR_SCORE** -1.800

Model Summary (b, c)
Model 1: R = .636 (a), R Square = .405, Adjusted R Square = .396, Std. Error of the Estimate = .8437
a. Predictors: (Constant), DR_SCORE  b. Dependent Variable: SENTENCE
c. Weighted Least Squares Regression - Weighted by Weight for SENTENCE from WLS, MOD_1 DR_SCORE** -1.800

ANOVA (b, c), Model 1:
Regression: Sum of Squares 32.965, df 1, Mean Square 32.965, F 46.306, Sig. .000 (a)
Residual: Sum of Squares 48.410, df 68, Mean Square .712
Total: Sum of Squares 81.375, df 69
a. Predictors: (Constant), DR_SCORE  b. Dependent Variable: SENTENCE
c. Weighted Least Squares Regression - Weighted by Weight for SENTENCE from WLS, MOD_1 DR_SCORE** -1.800


Step 3 (cont)

Coefficients (a, b), Model 1:
(Constant): B .940, Std. Error .395, t 2.382, Sig. .020
DR_SCORE: B .828, Std. Error .122, Beta .636, t 6.805, Sig. .000
a. Dependent Variable: SENTENCE
b. Weighted Least Squares Regression - Weighted by Weight for SENTENCE from WLS, MOD_1 DR_SCORE** -1.800

Residuals Statistics (b, c)
Predicted Value: Minimum 1.7684, Maximum 9.2247, Mean 6.0647, Std. Deviation 2.2046, N 70
Residual: Minimum -8.2247, Maximum 18.2607, Mean -.1075, Std. Deviation 4.6734, N 70
Std. Predicted Value (a) and Std. Residual (a): not computed, N 0
a. Not computed for Weighted Least Squares regression.  b. Dependent Variable: SENTENCE
c. Weighted Least Squares Regression - Weighted by Weight for SENTENCE from WLS, MOD_1 DR_SCORE** -1.800

Step 3 (cont.)

Saved predicted & residual values, and the weighted values of dr_score (i.e. wgt_1)

wgt_1 = 1 / (dr_score)^1.8

wgt_1     pre_1     res_1
.01916   8.39621   -7.39621
.01585   9.22468   -8.22468
.01585   9.22468   -8.22468
.03012   6.73927   -5.73927
.01916   8.39621   -6.39621
.02368   7.56774   -5.56774
.01916   8.39621   -6.39621
.01585   9.22468   -7.22468
.03975   5.91080   -2.91080
.02368   7.56774    2.43226
.03012   6.73927    5.26073
.02368   7.56774    6.43226
.02368   7.56774    7.43226


Step 4 Variable transformations and plot of the residuals

Unfortunately, the weighted residuals and predictions produced by the SPSS weight estimation and WLS>> procedures cannot be graphed directly from the saved residuals and predictions. The residuals and the predictions must first be transformed as follows:

Transformed residual = (res_1)(wgt_1)^0.5


Transformed prediction = (pre_1)(wgt_1)^0.5
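A small Python sketch of this transformation (illustrative only; res, pre, and wgt stand for the saved residuals, predictions, and weights):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_weighted_residuals(res, pre, wgt):
    """Rescale saved WLS residuals and predictions by sqrt(weight) before plotting."""
    trans_res = np.asarray(res, float) * np.sqrt(np.asarray(wgt, float))  # transformed residual
    trans_pre = np.asarray(pre, float) * np.sqrt(np.asarray(wgt, float))  # transformed prediction
    plt.scatter(trans_pre, trans_res)
    plt.xlabel("Transformed weighted prediction")
    plt.ylabel("Transformed weighted residual")
    plt.show()
```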

Step 4 (cont.)

SPSS results

Scatterplot of the Transformed Residuals


[Scatterplot: transformed weighted residuals (-2 to 4) against transformed weighted predictions (1.0 to 1.4).]

Compare the degree of heteroskedasticity in this scatterplot with the plot of the residuals from the unweighted regression model.


Notice the substantial change in the degree of heteroskedasticity.

Step 4 (cont.)

Transformed variables transres and transpre:
transres = res_1 * sqrt(wgt_1)
transpre = pre_1 * sqrt(wgt_1)

transres   transpre
-1.02      1.16
-1.04      1.16
-1.04      1.16
-1.00      1.17
 -.89      1.16
 -.86      1.16
 -.89      1.16
 -.91      1.16
  .37      1.16
  .91      1.17
  .99      1.16
 1.14      1.16


Comparison of Results: OLS, Residualized, and Log-Likelihood Models

Method           a       b       SEa     SEb
OLS            1.975   0.644   1.425   0.212
Residualized   1.159   0.783   0.605   0.139
Log-likelihood 0.940   0.828   0.394   0.121

N.B. The standard errors of the residualized & log-likelihood models are lower than the OLS model.


The log-likelihood model produces smaller standard errors than the residualized model.

