
Econ 2280 Introductory Econometrics

Tutorial 4 Review Notes


The Multiple Regression Model (II)

October 12, 2018

1 Gauss-Markov Assumptions for MLR


Assumption 1 (MLR.1) Linear in parameters

The model in the population can be written as

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + u \qquad (1)$$

where $\beta_0, \beta_1, \dots, \beta_k$ are the unknown parameters (constants) of interest
and $u$ is an unobservable random error or disturbance.

Assumption 2 (MLR.2) Random Sampling

We have a random sample of $n$ observations, $\{(x_{i1}, x_{i2}, \dots, x_{ik}, y_i) : i = 1, 2, \dots, n\}$, following the population model in equation (1).

Assumption 3 (MLR.3) No perfect collinearity

In the sample (and therefore in the population), none of the independent variables is constant, and there are no exact linear relationships among the independent variables.

For the model

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u,$$

consider the following examples:

(a) $x_1 = inc$, $x_2 = inc^2$: no perfect collinearity, since $inc^2$ is not a linear function of $inc$.
(b) $x_1 = \log(inc)$, $x_2 = \log(inc^2)$: perfect collinearity, since $\log(inc^2) = 2\log(inc)$.
(c) $x_1 = inc$, $x_2 = 0.8\,inc - 1$: perfect collinearity, since $x_2$ is an exact linear function of $x_1$.
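As a quick numerical check (not part of the original notes; the income values and sample size below are made up), the following Python sketch shows that in example (c) the design matrix loses a column of rank, so the OLS estimates are not defined, while example (a) causes no problem.

import numpy as np

# Hypothetical income data; any nondegenerate sample would do.
rng = np.random.default_rng(0)
inc = rng.uniform(10.0, 100.0, size=50)

# Example (a): x2 = inc^2 is a nonlinear function of inc -> no perfect collinearity.
X_a = np.column_stack([np.ones(50), inc, inc ** 2])
# Example (c): x2 = 0.8*inc - 1 is an exact linear function of inc -> perfect collinearity.
X_c = np.column_stack([np.ones(50), inc, 0.8 * inc - 1.0])

print(np.linalg.matrix_rank(X_a))  # 3: X'X is invertible, OLS is well defined
print(np.linalg.matrix_rank(X_c))  # 2: X'X is singular, the OLS estimates do not exist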

Assumption 4 (MLR.4) Zero conditional mean

The error term $u$ has a zero expected value given any values of the explanatory variables. In other words,

$$E(u \mid x_1, x_2, \dots, x_k) = 0.$$

Remark 1 An endogenous variable is an explanatory variable that is correlated with the error term. Endogeneity is a violation of Assumption MLR.4.
An exogenous variable is an explanatory variable that is uncorrelated with the error term. Assumption MLR.4 holds if all explanatory variables are exogenous.

Assumption 5 (MLR.5) Homoskedasticity

The error $u$ has the same variance given any values of the explanatory variables, i.e.,

$$Var(u \mid x_1, x_2, \dots, x_k) = \sigma^2.$$
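A minimal simulation sketch (not from the notes; the distributions below are made up) contrasting an error that satisfies MLR.5 with one that violates it:

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x1 = rng.uniform(0.0, 2.0, size=n)

u_homo = rng.normal(scale=1.0, size=n)   # Var(u | x1) = 1 for every value of x1
u_hetero = rng.normal(scale=x1, size=n)  # Var(u | x1) = x1^2 depends on x1 -> heteroskedastic

for label, u in [("homoskedastic", u_homo), ("heteroskedastic", u_hetero)]:
    # Compare the error variance for small and large values of x1.
    print(label, round(u[x1 < 1.0].var(), 2), round(u[x1 >= 1.0].var(), 2))
# Homoskedastic case: both conditional variances are close to 1.
# Heteroskedastic case: the variance is much larger where x1 is larger.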

2 Interpretation of the OLS regression equation


In the MLR model, $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + u$. We can write

$$\beta_j = \frac{\partial y}{\partial x_j} \quad \text{for } j = 1, 2, \dots, k,$$
$$\beta_0 = E[y \mid x_1 = 0, x_2 = 0, \dots, x_k = 0].$$

Similarly, for $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2 + \dots + \hat{\beta}_k x_k$, we can write

$$\hat{\beta}_j = \frac{\partial \hat{y}}{\partial x_j} \quad \text{for } j = 1, 2, \dots, k,$$
$$\hat{\beta}_0 = \hat{y} \text{ when } x_i = 0 \text{ for } i = 1, 2, \dots, k.$$

We can interpret $\beta_j$, $\hat{\beta}_j$, $\beta_0$, and $\hat{\beta}_0$ as follows:

$\beta_j$: the change in the dependent variable when $x_j$ is increased by one unit, holding all other independent variables and the error term constant.

$\hat{\beta}_j$: the expected change in the dependent variable when $x_j$ is increased by one unit, holding all other independent variables constant.

$\beta_0$: the average value of the dependent variable for the observations in the population with all independent variables equal to zero.

$\hat{\beta}_0$: the expected value of the dependent variable when all independent variables equal zero.
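A minimal sketch (not from the notes; the data-generating process below is made up) illustrating the ceteris paribus interpretation of the OLS coefficients:

import numpy as np

rng = np.random.default_rng(2)
n = 1_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
u = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + u      # true beta_0 = 1, beta_1 = 2, beta_2 = -0.5

X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)                        # roughly [1, 2, -0.5]

# beta_1-hat is the change in y-hat when x1 rises by one unit, holding x2 fixed:
y_hat = X @ beta_hat
y_hat_shifted = np.column_stack([np.ones(n), x1 + 1.0, x2]) @ beta_hat
print(np.allclose(y_hat_shifted - y_hat, beta_hat[1]))   # True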

3 Omitted Variable Bias: Simple Case


Suppose $x_2$ is omitted from the model $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$.

Correct sample regression function:

$$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2.$$

Incorrect sample regression function (with $x_2$ omitted):

$$\tilde{y} = \tilde{\beta}_0 + \tilde{\beta}_1 x_1.$$

Relationship between the estimates:

$$\tilde{\beta}_1 = \hat{\beta}_1 + \hat{\beta}_2 \tilde{\delta}_1,$$

where $\tilde{\delta}_1$ is the slope coefficient from the simple regression of $x_2$ on $x_1$.

Bias in $\tilde{\beta}_1$ (conditional on the sample values of $x_1$ and $x_2$, and using the unbiasedness of $\hat{\beta}_1$ and $\hat{\beta}_2$ under MLR.1-MLR.4):

$$\mathrm{Bias}(\tilde{\beta}_1) = E[\tilde{\beta}_1] - \beta_1 = E[\hat{\beta}_1] + E[\hat{\beta}_2]\tilde{\delta}_1 - \beta_1 = \beta_1 + \beta_2 \tilde{\delta}_1 - \beta_1 = \beta_2 \tilde{\delta}_1.$$

Summary of the bias in $\tilde{\beta}_1$ when $x_2$ is omitted (the sign of the bias is the sign of $\beta_2 \tilde{\delta}_1$, and $\tilde{\delta}_1$ has the same sign as $corr(x_1, x_2)$):

                  $corr(x_1, x_2) > 0$                            $corr(x_1, x_2) < 0$
$\beta_2 > 0$     positive bias, $Bias(\tilde{\beta}_1) > 0$      negative bias, $Bias(\tilde{\beta}_1) < 0$
$\beta_2 < 0$     negative bias, $Bias(\tilde{\beta}_1) < 0$      positive bias, $Bias(\tilde{\beta}_1) > 0$
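A minimal simulation sketch (not from the notes; all parameter values below are made up) of the first cell of the table: here $\beta_2 > 0$ and $corr(x_1, x_2) > 0$, so $\tilde{\beta}_1$ should be biased upward.

import numpy as np

rng = np.random.default_rng(3)
n, reps = 500, 2_000
beta0, beta1, beta2 = 1.0, 1.0, 2.0
delta1 = 0.5                                   # population slope of x2 on x1

tilde_b1 = []
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = delta1 * x1 + rng.normal(size=n)      # corr(x1, x2) > 0
    y = beta0 + beta1 * x1 + beta2 * x2 + rng.normal(size=n)
    # "Incorrect" regression of y on x1 only, omitting x2.
    X = np.column_stack([np.ones(n), x1])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    tilde_b1.append(b[1])

# The theory predicts E[tilde_beta_1] ~= beta1 + beta2 * delta1 = 1 + 2 * 0.5 = 2,
# i.e. an upward bias relative to beta1 = 1.
print(np.mean(tilde_b1))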
