occur that disturb it. We then add a stochastic component, to account for the uncertainty
inherent in many economic variables: C = α + βY + ε, where ε is a random disturbance. When
we want to investigate such a theoretical relation, we estimate a model of the form:
yi = α + βxi + εi
where y is the dependent variable, x is the independent or explanatory variable, and i = 1, …, n is
the index of the n observations in our sample.
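As a minimal sketch of this model, we can simulate data from y = α + βx + ε with hypothetical true values α = 2 and β = 0.5 (chosen purely for illustration) and recover the coefficients by ordinary least squares:

```python
import numpy as np

# Simulate the simple linear model y_i = alpha + beta*x_i + eps_i.
# The "true" values alpha = 2.0, beta = 0.5 are illustrative assumptions.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
eps = rng.normal(0, 1, n)          # i.i.d. disturbances
y = 2.0 + 0.5 * x + eps

# OLS: regress y on a constant and x
X = np.column_stack([np.ones(n), x])
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
# a_hat and b_hat should land close to the true 2.0 and 0.5
```

Because the disturbances average out over the sample, the estimates approach the true parameters as n grows.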
To complete our model, on top of the linear hypothesis, we add some assumptions:
Nonautocorrelation: Cov(εi, εj) = 0 if i ≠ j
Uncorrelation between the regressor and the disturbance: Cov(xi, εj) = 0 for all i and j.
In graphic terms, when we run a regression we get something very close to the following picture:
Figure 1
So, in our regression model, the parameter α captures the intercept of the function that represents
the relationship between the dependent variable and the regressor, while β is the slope coefficient.
So, when x increases by one, y increases by β in the linear regression model: β captures the marginal
effect of x on y. The same concept holds true when we turn to a linear regression with multiple
explanatory variables:
yi = α + βxi + γzi + δwi + εi
Here the regressors are x, z, and w. The parameters β, γ, and δ capture the partial effects of each of
these regressors on y, holding the others constant.
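The idea of a partial effect can be sketched numerically: with hypothetical coefficients (α, β, γ, δ) = (1, 2, −1, 0.5), the multiple regression recovers each coefficient holding the other regressors constant.

```python
import numpy as np

# Simulate y_i = alpha + beta*x_i + gamma*z_i + delta*w_i + eps_i.
# The coefficient values (1.0, 2.0, -1.0, 0.5) are illustrative assumptions.
rng = np.random.default_rng(1)
n = 500
x, z, w = rng.normal(size=(3, n))
y = 1.0 + 2.0 * x - 1.0 * z + 0.5 * w + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x, z, w])
coefs = np.linalg.lstsq(X, y, rcond=None)[0]
# coefs = [a, b_x, b_z, b_w]; b_x estimates the partial effect
# of x on y, holding z and w constant
```

Each estimated coefficient measures the change in y for a one-unit change in its own regressor, with the remaining regressors fixed.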
Coefficients can be estimated, but, given the presence of disturbances, we cannot be sure that the
underlying true parameters have the same magnitude and sign as we have hypothesized in our
model. Think of Figure 1: the estimated coefficient may be close to one, but can we be sure that the
true β is indeed 1?
To test this, we employ statistical inference and the tools of hypothesis testing.
Start with β, the true coefficient of the relation we are studying. After estimation, we obtain b, the
estimated coefficient, and s², the estimated variance of the error terms ε. We can also get the
sample variation of x, defined as the sum of squared deviations Sxx = Σ(xi − x̄)², where x̄ is the
sample mean of x. We can then obtain the estimate of the variance of b as Var[b] = s²/Sxx. Taking
the square root of the estimated variance of b, we get sb, the standard error of the estimate b. So
sb = s/√Sxx.
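The quantities above can be computed directly from the formulas, as in this sketch (the simulated data, with an assumed true β = 0.5, are only for illustration):

```python
import numpy as np

# Compute b, s^2, S_xx, and the standard error s_b by hand,
# following the formulas in the text. True beta = 0.5 is an
# illustrative assumption used to generate the data.
rng = np.random.default_rng(2)
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)

x_bar = x.mean()
S_xx = np.sum((x - x_bar) ** 2)                    # sum of squared deviations
b = np.sum((x - x_bar) * (y - y.mean())) / S_xx    # OLS slope estimate
a = y.mean() - b * x_bar                           # OLS intercept estimate
resid = y - a - b * x
s2 = np.sum(resid ** 2) / (n - 2)                  # estimated error variance
s_b = np.sqrt(s2 / S_xx)                           # standard error of b
```

Note that s² divides the residual sum of squares by n − 2, the degrees of freedom after estimating two parameters, so that it is an unbiased estimate of the error variance.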
It can be shown that
b