
P1.T2.

Stock & Watson Chapters 4 & 5

Bionic Turtle FRM Video Tutorials

By: David Harper CFA, FRM, CIPM

Note: This tutorial is for paid members only. You know who you are.
Anybody else is using an illegal copy and also violates GARP’s ethical
standards.

Quantitative Agenda
• Introduction to Econometrics (Stock & Watson)
– Linear Regression with One Regressor (Ch. 4)
– Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals
(Ch. 5)

T2.Quantitative: Learning Spreadsheets

Workbook: T2.9.4
Worksheet: Single variable linear regression
Exam relevance (of the XLS, not the topic): Medium

Note: If you are unable to view the content within this document, we recommend the following:

MAC users: The built-in PDF reader will not display our non-standard fonts. Please use Adobe's PDF reader
(http://get.adobe.com/reader/otherversions/).
PC users: We recommend the Foxit PDF reader (http://www.foxitsoftware.com/Secure_PDF_Reader/) or Adobe's PDF
reader (http://get.adobe.com/reader/otherversions/).
Mobile and tablet users: We recommend the Foxit PDF reader app or the Adobe PDF reader app.

All of these products are free. We apologize for any inconvenience. If you have any additional problems, please email Suzanne at
suzanne@bionicturtle.com.
Chapter 4: Linear Regression with One Regressor

Explain how regression analysis in econometrics measures the relationship
between dependent and independent variables.

$TestScore = \beta_0 + \beta_{ClassSize} \cdot ClassSize + \text{other factors}$

$Y_i = \beta_0 + \beta_1 X_i + u_i$

Y is the dependent variable (the regressand); X is the independent variable
(the regressor).


Define and interpret a population regression function, regression coefficients,
parameters, slope and the intercept.

$Y_i = \beta_0 + \beta_1 X_i + u_i$

where $\beta_0$ is the intercept coefficient, $\beta_1$ is the slope
coefficient, and $u_i$ is the error term. The parameters (regression
coefficients) are $\beta_0$ and $\beta_1$. The error (u) in the PRF is
estimated by the residual (e) in the SRF.


Define and interpret the stochastic error term (or noise component).

• The error term contains all the other factors aside from X that determine the
value of the dependent variable Y for a specific observation.

$Y_i = \beta_0 + \beta_1 X_i + u_i$


Define and interpret a sample regression function, regression coefficients,
parameters, slope and the intercept.

There is one set of unknowable parameters ($\beta_0$, $\beta_1$).
Each sample → SRF → Estimator (statistic) → Estimate

Stochastic PRF: $Y_i = \beta_0 + \beta_1 X_i + u_i$
SRF: $\hat{Y}_i = b_0 + b_1 X_i$
Stochastic SRF: $Y_i = b_0 + b_1 X_i + e_i$
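A minimal sketch in Python of the PRF/SRF distinction (the data are simulated and the parameter values are illustrative, not from the reading): one sample is drawn from an assumed population regression function, and OLS fits the sample regression function, so $b_0$ and $b_1$ estimate $\beta_0$ and $\beta_1$ while the residuals $e_i$ stand in for the unobservable errors $u_i$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (in practice, unknowable) population parameters
beta0, beta1 = 700.0, -2.3

# One sample from the stochastic PRF: Y = beta0 + beta1*X + u
x = rng.uniform(10, 30, size=100)      # e.g., student-teacher ratios
u = rng.normal(0, 10, size=100)        # error term
y = beta0 + beta1 * x + u

# OLS estimates (the SRF): b1 = cov(X, Y) / var(X), b0 = ybar - b1*xbar
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()

y_hat = b0 + b1 * x                    # fitted values
e = y - y_hat                          # residuals estimate the errors u

print(f"b0 = {b0:.2f} (true {beta0}), b1 = {b1:.2f} (true {beta1})")
```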

Describe the key properties of a linear regression.

• Okay if non-linear in variables, but must be linear in parameters

$E(Y) = \beta_0 + \beta_1^2 X_i$: linear in the variable, nonlinear in the parameter

$E(Y) = \beta_0 + \beta_1 X_i^2$: nonlinear in the variable, linear in the parameter
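A quick illustration of the second case (a sketch with made-up data): a model that is nonlinear in the variable but linear in the parameters can still be estimated by ordinary linear regression after transforming the regressor.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 5, size=200)
y = 3.0 + 0.5 * x**2 + rng.normal(0, 1, size=200)   # E(Y) = B0 + B1*X^2

# Linear in the parameters: regress Y on the transformed variable Z = X^2
z = x**2
b1, b0 = np.polyfit(z, y, deg=1)                    # returns [slope, intercept]
print(f"b0 = {b0:.2f}, b1 = {b1:.2f}")              # close to 3.0 and 0.5
```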


Describe the method and assumptions of ordinary least squares for estimation of
parameters.

OLS regression is used to:
• Estimate the (conditional) mean of the dependent variable
• Test hypotheses about the nature of the dependence
• Forecast the mean value of the dependent variable
Note that correlation (dependence) is not causation.

The least squares assumptions:
• The conditional distribution of u(i) given X(i) has a mean of zero
• [X(i), Y(i)] are independent and identically distributed (i.i.d.)
• Large outliers are unlikely

Define and interpret the explained sum of squares, the total sum of squares, and
the residual sum of squares.

Total (TSS) = Explained (ESS) + Residual (SSR)

$TSS = \sum_{i=1}^{n} (Y_i - \bar{Y})^2$

$ESS = \sum_{i=1}^{n} (\hat{Y}_i - \bar{Y})^2$

$SSR = \sum_{i=1}^{n} \hat{u}_i^2$

$SER = \sqrt{\frac{SSR}{n-k-1}}$

$R^2 = \frac{ESS}{TSS} = 1 - \frac{SSR}{TSS}$

Define and interpret … the standard error of the regression (SER), and the
regression R².

Total (TSS) = Explained (ESS) + Residual (SSR)

In the case of one regressor:

$SER = \sqrt{\frac{SSR}{n-2}} = \sqrt{\frac{\sum e_i^2}{n-2}}$

In the general case of k = number of regressors (aka, independent variables):

$SER = \sqrt{\frac{SSR}{n-k-1}}$

$R^2 = \frac{ESS}{TSS} = 1 - \frac{SSR}{TSS}$
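A short sketch tying these definitions together (simulated data; the variable names are mine): it computes TSS, ESS, and SSR from an OLS fit, confirms TSS = ESS + SSR, and derives R² and the SER.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(10, 30, size=60)
y = 699 - 2.3 * x + rng.normal(0, 15, size=60)

b1, b0 = np.polyfit(x, y, deg=1)
y_hat = b0 + b1 * x
e = y - y_hat

n, k = len(y), 1                              # one regressor
tss = np.sum((y - y.mean())**2)
ess = np.sum((y_hat - y.mean())**2)
ssr = np.sum(e**2)

assert np.isclose(tss, ess + ssr)             # TSS = ESS + SSR
r2 = ess / tss                                # equals 1 - SSR/TSS
ser = np.sqrt(ssr / (n - k - 1))              # = sqrt(SSR/(n-2)) here
print(f"R^2 = {r2:.3f}, SER = {ser:.2f}")
```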

Interpret the results of an ordinary least squares regression.

LINEST() output          Key for LINEST()
 B(1)      B(0)          B(1)            B(0)
-2.28    698.93          slope           intercept
(0.48)    (9.47)         se (slope)      se (intercept)
 0.05     18.58          R^2             se (y estimate)
22.56       418          F               df
7,794   144,315          ESS             RSS
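The statistics Excel's LINEST() reports can be reproduced in Python. A sketch using scipy.stats.linregress (the data here are simulated stand-ins for the Stock & Watson district sample, and the intercept_stderr attribute assumes a reasonably recent SciPy):

```python
import numpy as np
from scipy import stats

# Hypothetical sample of 420 districts (illustrative, not the actual data)
rng = np.random.default_rng(3)
x = rng.uniform(14, 26, size=420)                   # student-teacher ratio
y = 698.9 - 2.28 * x + rng.normal(0, 18.6, size=420)

res = stats.linregress(x, y)
n = len(y)
df = n - 2                                          # n - (k + 1), with k = 1

y_hat = res.intercept + res.slope * x
ess = np.sum((y_hat - y.mean())**2)
rss = np.sum((y - y_hat)**2)
f_stat = (res.slope / res.stderr)**2                # F = t^2 with one regressor

print(f"slope={res.slope:.2f}  intercept={res.intercept:.2f}")
print(f"se(slope)={res.stderr:.2f}  se(intercept)={res.intercept_stderr:.2f}")
print(f"R^2={res.rvalue**2:.3f}  SER={np.sqrt(rss/df):.2f}")
print(f"F={f_stat:.1f}  df={df}  ESS={ess:.0f}  RSS={rss:.0f}")
```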

Test Scores versus Student-Teacher Ratio

$\widehat{TestScore} = 698.9 - 2.28 \times STR$
                       (9.47)   (0.48)

[Scatter plot: test scores (roughly 600 to 720) against the student-teacher
ratio (roughly 10 to 30).]


Practice Question
• 216.3. A five-year regression of monthly cotton price changes, such that
the number of observations (n) equals 60, against average temperature
changes produced a standard error of the regression (SER) of $1.20. If
the total sum of squares (TSS) was 90.625 squared dollars, what is the implied
correlation coefficient?


Answer: Because SER = SQRT[SSR/(n - 2)] with one regressor (the two estimated
coefficients consume two degrees of freedom), SSR = SER^2 * (n - 2).

In this case:
SSR = 1.20^2 * (60 - 2) = 83.52
R^2 = ESS/TSS = 1 - SSR/TSS = 1 - 83.52/90.625 = 0.07840
correlation = SQRT(0.07840) = 0.280
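A few lines to check the arithmetic (a sketch; inputs are taken from the question above):

```python
import math

n, ser, tss = 60, 1.20, 90.625
ssr = ser**2 * (n - 2)                 # 83.52
r2 = 1 - ssr / tss                     # 0.07840
print(f"SSR={ssr:.2f}, R^2={r2:.5f}, correlation={math.sqrt(r2):.3f}")
```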

Chapter 5: Regression with a Single Regressor:
Hypothesis Tests and Confidence Intervals

Define, calculate, and interpret confidence intervals for regression coefficients.
• Limit = coefficient ± [standard error × critical value @ c%]
• e.g., lower limit of the intercept at 95%: 680.4 = 698.9 – 9.47 × 1.96

95% Confidence Interval
              Coefficient      SE     Lower    Upper
Intercept         698.9       9.47    680.4    717.5
Slope (B1)        -2.28       0.48     -3.2     -1.3
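A minimal sketch that reproduces these intervals (coefficients and standard errors taken from the table above; SciPy's normal quantile supplies the 1.96):

```python
from scipy import stats

def conf_int(coef, se, level=0.95):
    """Normal-approximation confidence interval for a regression coefficient."""
    z = stats.norm.ppf(0.5 + level / 2)    # 1.96 at 95%
    return coef - z * se, coef + z * se

print(conf_int(698.9, 9.47))    # ~ (680.3, 717.5)
print(conf_int(-2.28, 0.48))    # ~ (-3.22, -1.34)
```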


Define and interpret hypothesis tests about regression coefficients.

$\widehat{TestScore} = 698.9 - 2.28 \times STR$
                       (9.47)   (0.48)

STR: t statistic = |(-2.28 – 0)/0.48| = 4.75
p-value (two-tailed) ≈ 0%
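The same test in a few lines (a sketch; the 418 degrees of freedom assume the n = 420 districts in the Stock & Watson sample):

```python
from scipy import stats

b1, se, df = -2.28, 0.48, 418
t = (b1 - 0) / se                          # H0: beta1 = 0
p = 2 * stats.t.sf(abs(t), df)             # two-tailed p-value
print(f"t = {t:.2f}, p = {p:.2e}")         # |t| ~ 4.75, p ~ 0
```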


Define and differentiate between homoskedasticity and heteroskedasticity.

• The error term u(i) is homoskedastic if the variance of the conditional
distribution of u(i) given X(i) is constant for i = 1, …, n and, in
particular, does not depend on X(i).
• Otherwise, the error term is heteroskedastic.


Describe the implications of homoskedasticity and heteroskedasticity.

• Whether the errors are homoskedastic or heteroskedastic, the OLS estimators
remain unbiased and asymptotically normal; under homoskedasticity, OLS is
also efficient and the usual (homoskedasticity-only) standard errors are valid.

• If the errors are heteroskedastic, use heteroskedasticity-robust standard errors.
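A sketch of robust standard errors with statsmodels (assuming statsmodels is installed; the data are simulated with error variance that grows with X, i.e. deliberately heteroskedastic):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.uniform(10, 30, size=500)
u = rng.normal(0, 1, size=500) * x           # error variance depends on X
y = 700 - 2.3 * x + u

X = sm.add_constant(x)
classic = sm.OLS(y, X).fit()                 # homoskedasticity-only SEs
robust = sm.OLS(y, X).fit(cov_type="HC1")    # heteroskedasticity-robust SEs

print("classic se:", classic.bse.round(3))
print("robust  se:", robust.bse.round(3))
```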


Explain the Gauss-Markov theorem and its limitations, and alternatives to OLS.

• The Gauss-Markov theorem provides a theoretical justification for using OLS,
but it has two key limitations:
1. Its conditions might not hold in practice. In particular, if the error term
is heteroskedastic, as it often is in economic applications, then the OLS
estimator is no longer BLUE. An alternative to OLS when there is
heteroskedasticity of a known form is the weighted least squares estimator.
2. Even if the conditions of the theorem hold, there are other candidate
estimators that are not linear and conditionally unbiased; under some
conditions, these other estimators are more efficient than OLS.
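A sketch of weighted least squares with statsmodels (assumptions: statsmodels is available, and the heteroskedasticity has the known form var(u|X) proportional to X², so the appropriate weights are 1/X²):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(10, 30, size=500)
y = 700 - 2.3 * x + rng.normal(0, 1, size=500) * x   # var(u|X) ~ X^2

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
wls = sm.WLS(y, X, weights=1.0 / x**2).fit()         # weight = 1 / var(u|X)

print("OLS se:", ols.bse.round(3))
print("WLS se:", wls.bse.round(3))                   # typically smaller
```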

End of P1.T2. Stock & Watson, Chapters 4 & 5

Visit us on the forum @ www.bionicturtle.com/forum
