
Lecture 3

Stephen G Hall

Dynamic Modelling

The process of dynamic modelling has become such a central part of Econometrics that it is worth treating it as a topic in its own right.

Dynamic modelling is a largely intuitive and simple process, but it has become surrounded by a specialised language: DGP, parsimonious encompassing, conditioning, marginalising, and so on.

This lecture attempts to explain this jargon and why it is useful.

Let x_t be a vector of observations on all variables in period t, and let X_{t-1} = (x_{t-1}, ..., x_0). Then the joint probability of the sample, the DGP, may be stated as

D(X_T ; \theta) = \prod_{t=1}^{T} D_t(x_t \mid X_{t-1} ; \theta)

where \theta is a vector of unknown parameters.

The philosophy underlying this approach is that all models are misspecified; the issue is to understand the misspecification and to build useful and adequate models.

The process of model reduction consists principally of the following four steps.

1. Marginalise the DGP. We select a set of 'variables of interest' and relegate all the rest of the variables to the set which are of no interest.

2. Conditioning assumptions. Given the choice of variables of interest we must now select a subset of these variables to be treated as endogenous.

3. Selection of functional form. The DGP is a completely general functional specification, and before any estimation can be undertaken a specific functional form must be assumed.

4. Estimation. The final stage involves assigning values to the unknown parameters of the system; this is the process of econometric estimation.

Given the general DGP it is possible to represent the first two stages in the model reduction process by the following factorisation.

D_t(x_t \mid X_{t-1}; \theta) = A_t(W_t \mid X_t; \lambda_1) \, B_t(Y_t \mid Y_{t-1}, Z_t; \lambda_2) \, C_t(Z_t \mid Y_{t-1}, Z_{t-1}; \lambda_3)

where W denotes the variables of no interest, Y the endogenous variables of interest and Z the conditioning variables.
These steps are all crucial in the formulation of an adequate model. If the marginalisation is incorrect then some important variable has been relegated to the set of variables of no interest. If the conditioning assumptions are incorrect then we have falsely assumed that an endogenous variable is exogenous. If the functional form or estimation is invalid then obvious bias results.

Exogeneity. Conditioning is basically about getting the determination of exogeneity right. There are three main concepts of exogeneity:

Weak exogeneity: Z is weakly exogenous if it is a function only of lagged Ys (not the current Y) and the parameters which determine Y are independent of those determining Z.

Strong exogeneity: in addition we assume that Z is not a function of lagged Y; this is weak exogeneity plus Granger non-causality.

Super exogeneity: in addition we assume that the parameters which determine Y are invariant to changes in the process which determines Z.

Weak exogeneity is needed for estimation. Strong exogeneity is needed for forecasting. Super exogeneity is needed for simulation and policy analysis.

Before the development of cointegration the dynamic modelling approach in practice began from a general statement of the DGP, suitably marginalised and conditioned,
Y_t = \beta_0 + \sum_{i=1}^{n} \beta_i Y_{t-i} + \sum_{k=1}^{m} \sum_{i=0}^{n} \gamma_{ki} X_{k,t-i} + u_t

This general form (the ADL) may be reparameterised into many different representations which are all either equivalent or are nested within it as restrictions, e.g. the Bewley transformation and the common factor restriction. A particularly useful form is the Error Correction Mechanism (ECM):
\Delta Y_t = \sum_{i=1}^{n-1} \beta_i^* \Delta Y_{t-i} + \sum_{k=1}^{m} \sum_{i=0}^{n-1} \gamma_{ki}^* \Delta X_{k,t-i} - \lambda \left( Y_{t-1} - \beta_0^* - \sum_{k=1}^{m} \phi_k X_{k,t-1} \right) + u_t
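As a minimal numerical sketch (not part of the original lecture; the data are simulated and the coefficient values are invented), the following shows that an ADL(1,1) and its ECM reparameterisation fitted to the same data produce identical residuals — the ECM is a reparameterisation, not a different model.

```python
# Sketch: an ADL(1,1) and its ECM reparameterisation fitted to the same
# simulated data give identical residuals.
import numpy as np

rng = np.random.default_rng(0)
T = 200
x = np.cumsum(rng.normal(size=T))            # a trending regressor
y = np.zeros(T)
for t in range(1, T):                        # generate y from an ADL(1,1)
    y[t] = 0.5 + 0.3 * x[t] + 0.2 * x[t - 1] + 0.7 * y[t - 1] + rng.normal()

def ols(y, X):
    """Return OLS coefficients and residuals."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, y - X @ b

# ADL(1,1): y_t on const, x_t, x_{t-1}, y_{t-1}
X_adl = np.column_stack([np.ones(T - 1), x[1:], x[:-1], y[:-1]])
b_adl, e_adl = ols(y[1:], X_adl)

# ECM form: dy_t on const, dx_t, x_{t-1}, y_{t-1}
dy, dx = np.diff(y), np.diff(x)
X_ecm = np.column_stack([np.ones(T - 1), dx, x[:-1], y[:-1]])
b_ecm, e_ecm = ols(dy, X_ecm)

print("max residual difference:", np.max(np.abs(e_adl - e_ecm)))  # ~0 up to rounding
```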

The basic idea in dynamic modelling is that the general model should be set up in such a way that it passes a broad range of tests; in particular, it should have constant parameters and a well-behaved error process.

The model is then reduced or simplified applying a broad range of tests at each stage to try and find an acceptable parsimonious (minimum number of parameters) representation.

This is the process of model reduction.

In practice the real issue is to understand the tests used.

The general F test. The general test used for testing a group of restrictions is the F-test; this tests any restricted model against a less restricted model.

F(m, T-k) = \frac{(RSS_2 - RSS_1)/m}{RSS_1/(T-k)}


T = sample size, k = number of parameters in the unrestricted model, m = number of restrictions, RSS_1 = residual sum of squares of the unrestricted model, RSS_2 = residual sum of squares of the restricted model.
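A small illustrative sketch of this F test in Python (not from the lecture; the simulated data and the particular restriction of dropping two regressors are invented for the example):

```python
# Sketch: general F test of m linear restrictions via restricted and
# unrestricted residual sums of squares.
import numpy as np
from scipy import stats

def f_test(rss_restricted, rss_unrestricted, m, T, k):
    """F = [(RSS_2 - RSS_1)/m] / [RSS_1/(T-k)], distributed F(m, T-k) under H0."""
    F = ((rss_restricted - rss_unrestricted) / m) / (rss_unrestricted / (T - k))
    return F, stats.f.sf(F, m, T - k)

def rss(y, X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ b) ** 2)

# Example: drop two regressors from a simulated model and test the restriction.
rng = np.random.default_rng(1)
T = 120
X = np.column_stack([np.ones(T), rng.normal(size=(T, 3))])
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=T)

F, p = f_test(rss(y, X[:, :2]), rss(y, X), m=2, T=T, k=X.shape[1])
print(F, p)
```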

The lagrange multiplier test for serial correlation

If u is the residual from an OLS regression, then perform the auxiliary regression

u_t = \sum_{i=1}^{m} \rho_i u_{t-i} + \delta' X_t + e_t

Then an LM test of the assumption that there is no serial correlation up to order m is given by LM(m) = T R^2, which is distributed as \chi^2(m).
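A sketch of this LM test in Python (illustrative only; it assumes u are the OLS residuals and X is the original regressor matrix, and it pads the initial lagged residuals with zeros, one common convention):

```python
# Sketch: LM test for serial correlation up to order m.
import numpy as np
from scipy import stats

def lm_serial_correlation(u, X, m):
    """Regress residuals on their own lags plus X; T*R^2 ~ chi2(m) under H0."""
    T = len(u)
    # Lagged residuals, with the pre-sample values set to zero.
    lags = np.column_stack([np.concatenate([np.zeros(i), u[:T - i]])
                            for i in range(1, m + 1)])
    Z = np.column_stack([X, lags])
    b, *_ = np.linalg.lstsq(Z, u, rcond=None)
    e = u - Z @ b
    r2 = 1.0 - (e @ e) / ((u - u.mean()) @ (u - u.mean()))
    lm = T * r2
    return lm, stats.chi2.sf(lm, m)
```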

Instrument validity test. When estimating an IV equation we should test that the instruments are weakly exogenous; this may be done by performing the following auxiliary regression

\hat{u}_t = W_t' \delta + e_t

where W is a set of variables which includes both the independent variables in the equation and the full set of instruments. The test statistic is (T-k)R^2, which is distributed as \chi^2(r), where r is the number of instruments minus the number of endogenous variables in the equation.
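A sketch of this instrument validity test (hypothetical inputs: u_iv is the residual vector from the IV estimation, W stacks the included regressors and all instruments, and k and r are supplied by the user as defined above):

```python
# Sketch: instrument validity test based on regressing IV residuals on W.
import numpy as np
from scipy import stats

def instrument_validity_test(u_iv, W, k, r):
    """(T-k)*R^2 from regressing IV residuals on W, compared with chi2(r)."""
    T = len(u_iv)
    b, *_ = np.linalg.lstsq(W, u_iv, rcond=None)
    e = u_iv - W @ b
    tss = (u_iv - u_iv.mean()) @ (u_iv - u_iv.mean())
    r2 = 1.0 - (e @ e) / tss
    stat = (T - k) * r2
    return stat, stats.chi2.sf(stat, r)
```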

The Box-Pierce and Ljung-Box test This is based on the correlogram and is a general test of mth order serial correlation

Q = T \sum_{i=1}^{m} r_i^2 \qquad \text{(Box-Pierce)}

Q^* = T(T+2) \sum_{i=1}^{m} \frac{r_i^2}{T-i} \qquad \text{(Ljung-Box)}

This is again a chi sq test with m degrees of freedom
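An illustrative Python sketch of both statistics (not from the lecture; r_i are the sample autocorrelations of the residuals):

```python
# Sketch: Box-Pierce Q and Ljung-Box Q* from the residual correlogram.
import numpy as np
from scipy import stats

def box_pierce_ljung_box(u, m):
    T = len(u)
    u = u - u.mean()
    denom = u @ u
    # Sample autocorrelations r_1, ..., r_m
    r = np.array([(u[i:] @ u[:-i]) / denom for i in range(1, m + 1)])
    Q  = T * np.sum(r ** 2)                                        # Box-Pierce
    Qs = T * (T + 2) * np.sum(r ** 2 / (T - np.arange(1, m + 1)))  # Ljung-Box
    return Q, Qs, stats.chi2.sf(Qs, m)
```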

ARCH test

\hat{u}_t^2 = \alpha_0 + \sum_{i=1}^{m} \alpha_i \hat{u}_{t-i}^2 + e_t

TR2 from this regression is a test of an autoregressive variance process of order m. It again is a chi sq test with m degrees of freedom
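A sketch of the ARCH LM test under the same illustrative assumptions (u are the OLS residuals; the auxiliary regression uses a constant and m lags of the squared residuals):

```python
# Sketch: LM test for ARCH of order m on the squared residuals.
import numpy as np
from scipy import stats

def arch_lm_test(u, m):
    """Regress u_t^2 on a constant and m lags of u^2; T*R^2 ~ chi2(m) under H0."""
    u2 = u ** 2
    y = u2[m:]
    X = np.column_stack([np.ones(len(y))] + [u2[m - i:-i] for i in range(1, m + 1)])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    r2 = 1.0 - (e @ e) / ((y - y.mean()) @ (y - y.mean()))
    stat = len(y) * r2
    return stat, stats.chi2.sf(stat, m)
```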

Parameter Stability. The Chow test is a test of parameter constancy which is a special form of the F test.

CHOW = \frac{(RSS - (RSS_1 + RSS_2))/k}{(RSS_1 + RSS_2)/(T - 2k)}

where RSS is the residual sum of squares from the whole sample and RSS_1, RSS_2 are the residual sums of squares from the two sub-samples,

which is distributed as F(k,T-2k).
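A sketch of the Chow test (illustrative; the break point is chosen by the user and both sub-samples are assumed to contain more than k observations):

```python
# Sketch: Chow test for parameter constancy at a chosen break point.
import numpy as np
from scipy import stats

def rss(y, X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ b) ** 2)

def chow_test(y, X, break_point):
    """F statistic from pooled and split-sample residual sums of squares."""
    T, k = X.shape
    rss_pooled = rss(y, X)
    rss1 = rss(y[:break_point], X[:break_point])
    rss2 = rss(y[break_point:], X[break_point:])
    F = ((rss_pooled - (rss1 + rss2)) / k) / ((rss1 + rss2) / (T - 2 * k))
    return F, stats.f.sf(F, k, T - 2 * k)
```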

In practice we tend to give greater weight to recursive estimation. This is a series of estimates where the sample size is increased by one period in each estimation. If we define \beta_t as the estimate of the vector of parameters based on the period 1 to t, then we can define the recursive residuals as

v_t = Y_t - \beta_{t-1}' X_t
we can then standardise these for the degrees of freedom so that

w_t = v_t / d_t \sim N(0, \sigma^2)

Now these have the same properties as the OLS residuals, except that they are not forced to sum to zero and they are much more sensitive to model misspecification.

Formal tests based on the recursive residuals are:

The CUSUM test,

CUSUM_t = \frac{1}{s} \sum_{i=k+1}^{t} w_i

where s is the full sample estimate of the standard error.

The CUSUMSQ test,

CUSUMSQ_t = \sum_{i=k+1}^{t} w_i^2 \Big/ \sum_{i=k+1}^{T} w_i^2

But in practice plots of the recursive residuals and parameters are often much more informative. THE IMPORTANCE OF GRAPHS IS CRUCIAL
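For concreteness, a simplified sketch of the recursive residuals and the CUSUM and CUSUMSQ series that would be plotted (not from the lecture; the full-sample standard error s is approximated here by the standard deviation of the recursive residuals):

```python
# Sketch: recursive residuals and the CUSUM / CUSUMSQ series built from them.
import numpy as np

def recursive_residuals(y, X):
    """Standardised recursive residuals w_t = v_t / d_t."""
    T, k = X.shape
    w = []
    for t in range(k, T):
        # beta_{t-1}: estimate using observations up to t-1
        b, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        pred_err = y[t] - X[t] @ b
        d = np.sqrt(1.0 + X[t] @ np.linalg.pinv(X[:t].T @ X[:t]) @ X[t])
        w.append(pred_err / d)
    return np.array(w)

def cusum_cusumsq(w):
    # s approximated by the standard deviation of the recursive residuals
    s = w.std(ddof=1)
    cusum = np.cumsum(w) / s
    cusumsq = np.cumsum(w ** 2) / np.sum(w ** 2)
    return cusum, cusumsq
```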

Testing Functional Form. The Ramsey RESET test checks for the possibility of omitted higher-order polynomial terms

\hat{u}_t = \delta' X_t + \sum_{i=1}^{p} \gamma_i \hat{Y}_t^{\,i+1} + e_t

Again this is an LM test based on T R^2 from the auxiliary regression.
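A sketch of the RESET test in this LM form (illustrative; powers 2 through max_power of the fitted values are added, and the degrees of freedom equal the number of added powers):

```python
# Sketch: Ramsey RESET test via an auxiliary regression of the residuals
# on X and powers of the fitted values.
import numpy as np
from scipy import stats

def reset_test(y, X, max_power=3):
    T = len(y)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted, u = X @ b, y - X @ b
    powers = np.column_stack([fitted ** p for p in range(2, max_power + 1)])
    Z = np.column_stack([X, powers])
    g, *_ = np.linalg.lstsq(Z, u, rcond=None)
    e = u - Z @ g
    r2 = 1.0 - (e @ e) / ((u - u.mean()) @ (u - u.mean()))
    stat = T * r2
    return stat, stats.chi2.sf(stat, max_power - 1)
```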

Testing for Normality. Normality of the residuals is an important property in terms of justifying the whole inference procedure; a typical test is the Bera-Jarque test

BJ = T \left( \frac{SK^2}{6} + \frac{EK^2}{24} \right) \sim \chi^2(2)

where SK is the measure of skewness and EK is the measure of excess kurtosis (kurtosis minus 3).
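A sketch of the Bera-Jarque statistic computed directly from the residual moments (illustrative; SK and EK are the sample skewness and excess kurtosis):

```python
# Sketch: Bera-Jarque normality test from residual skewness and excess kurtosis.
import numpy as np
from scipy import stats

def bera_jarque(u):
    T = len(u)
    e = u - u.mean()
    s2 = (e @ e) / T
    sk = np.mean(e ** 3) / s2 ** 1.5           # skewness
    ek = np.mean(e ** 4) / s2 ** 2 - 3.0       # excess kurtosis
    bj = T * (sk ** 2 / 6.0 + ek ** 2 / 24.0)
    return bj, stats.chi2.sf(bj, 2)
```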

Encompassing. This is a general principle of testing which allows us to interpret and construct tests of one model against another. A model (M1) encompasses another model (M2) if it can explain the results of that model. Many standard tests (e.g. F or LM) can be interpreted as encompassing tests. Variance encompassing: asymptotically a true model will always have a lower variance than a false model, so the finding of a smaller standard error is evidence of variance encompassing.

Parsimonious encompassing: a large model will always encompass a smaller nested model, so this is not interesting. If a smaller model contains all the important information of a larger model this is important, and we then say that it parsimoniously encompasses the larger model.

Why variance encompassing is better than using R^2: the R^2 statistic is not invariant to the way we write an equation.

Y_t = \gamma X_t + \beta Y_{t-1} + u_t

If Y is trended this equation will generally have a very high R^2.

\Delta Y_t = \gamma X_t + (\beta - 1) Y_{t-1} + u_t
Exactly the same equation, just a reparameterisation, exactly the same errors

BUT a completely different R2 as we have changed the dependent variable. The errors and the error variance are unchanged.
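A small simulation sketch of this point (not from the lecture; the data-generating process is invented): the levels and first-difference forms give identical residuals but very different R^2.

```python
# Sketch: the same equation written in levels and in first differences has
# identical residuals but different R^2, because the dependent variable changes.
import numpy as np

rng = np.random.default_rng(42)
T = 200
x = np.cumsum(rng.normal(size=T))                    # trended regressor
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * x[t] + 0.9 * y[t - 1] + rng.normal()

def fit(y, X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    r2 = 1.0 - (e @ e) / ((y - y.mean()) @ (y - y.mean()))
    return e, r2

X = np.column_stack([np.ones(T - 1), x[1:], y[:-1]])
e_levels, r2_levels = fit(y[1:], X)                  # Y_t on X_t, Y_{t-1}
e_diff, r2_diff = fit(np.diff(y), X)                 # dY_t on X_t, Y_{t-1}

print("same residuals:", np.allclose(e_levels, e_diff))
print("R2 levels:", round(r2_levels, 3), " R2 differences:", round(r2_diff, 3))
```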

Example
Davidson, Hendry, Srba and Yeo (DHSY, pronounced 'Daisy')

Note: strong seasonality, upward near-proportional trend

Note: Annual changes: consumption much smoother than income

NOTE: The APC is not constant but changes systematically

Note: The seasons are different, so seasonality is important

Note: notice the scale, changes in income much bigger than consumption

Note: the seasonal pattern is changing through time.

Start by considering the best existing models and what is wrong with them

Older Hendry model, very low long run MPC

LBS Ball model, Low long run MPC no seasonality

Wall model; no long run at all

Difference model or ECM, the first difference is only valid under testable restrictions

General to Specific: tests on an invalid model are themselves invalid

So start from a general model and nest down to a specific one

But the final model has no long run and fails to forecast

So set up an ECM to impose the long run proportionality

BUT: both fail to forecast so back to the beginning


Seasonally adjusted data

A possible missing variable; inflation may explain the movement in the APC

So start again, and eventually arrive at two models: one without a long run and one in ECM form

ECM passes the forecast test

Final validation, out of sample forecasting performance
