Outline

Vector Autoregressions
Based on the book "New Introduction to Multiple Time Series Analysis" by Helmut Lütkepohl
November 3, 2011
Vector autoregressions, University of Vienna and Institute for Advanced Studies Vienna

Introduction: Stable VAR Processes
A realization is a sequence of vectors y_t(ω), t ∈ Z, for a fixed ω. It is a function Z → R^K. A multiple time series is assumed to be a finite portion of a realization. Given such a realization, the underlying stochastic process is called the data generation process (DGP).

A vector autoregressive process (VAR) then satisfies the equation

  y_t = ν + A_1 y_{t-1} + ... + A_p y_{t-p} + u_t,

with u_t a sequence of independently identically distributed random K-vectors with zero mean (conditions relaxed later).
Assume y_t follows a VAR(p). Then, the forecast of y_{T+1} is given by

  ŷ_{T+1} = ν + A_1 y_T + ... + A_p y_{T-p+1},

i.e. the systematic part of the defining equation. Note that this also defines a forecast for each component of y_{T+1}.

[Flow chart: specification and estimation of the VAR model → model checking; if the model is rejected, return to specification; if the model is accepted, proceed to structural analysis and forecasting.]
The object of interest is the vector autoregressive process of order p that satisfies the equation

  y_t = ν + A_1 y_{t-1} + ... + A_p y_{t-p} + u_t,   t = 0, ±1, ±2, ...,

with u_t assumed to be K-dimensional white noise, i.e. E(u_t) = 0, E(u_s u_t') = 0 for s ≠ t, and E(u_t u_t') = Σ_u with Σ_u nonsingular (conditions relaxed later).

First we concentrate on the VAR(1) model

  y_t = ν + A_1 y_{t-1} + u_t.

Continuous substitution in the VAR(1) model yields

  y_1 = ν + A_1 y_0 + u_1,
  y_2 = (I_K + A_1)ν + A_1² y_0 + A_1 u_1 + u_2,
  ...
  y_t = (I_K + A_1 + ... + A_1^{t-1})ν + A_1^t y_0 + Σ_{j=0}^{t-1} A_1^j u_{t-j},

such that y_1, ..., y_t can be represented as a function of y_0, u_1, ..., u_t. All y_t, t ≥ 0, are a function of just one starting value and the errors.
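The substitution above can be checked numerically: iterating the VAR(1) recursion forward must reproduce the closed-form expression in y_0 and the errors. This is a minimal sketch; the coefficient values ν, A_1 and the shocks are hypothetical example numbers, not taken from the text.

```python
import numpy as np

# Hypothetical 2-variable VAR(1); nu, A1 and the shocks are made-up values.
rng = np.random.default_rng(0)
K, T = 2, 5
nu = np.array([1.0, 0.5])
A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
y0 = np.zeros(K)
u = rng.standard_normal((T + 1, K))  # u[1], ..., u[T] are used

# Recursive substitution: y_t = nu + A1 y_{t-1} + u_t
y = y0.copy()
for t in range(1, T + 1):
    y = nu + A1 @ y + u[t]

# Closed form: y_T = (I + A1 + ... + A1^{T-1}) nu + A1^T y0 + sum_j A1^j u_{T-j}
S = sum(np.linalg.matrix_power(A1, j) for j in range(T))
closed = S @ nu + np.linalg.matrix_power(A1, T) @ y0
closed += sum(np.linalg.matrix_power(A1, j) @ u[T - j] for j in range(T))

assert np.allclose(y, closed)
```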
The Wold representation of the VAR(1)

If all eigenvalues of A_1 have modulus less than one, substitution can be continued using the y_{-j}, j < 0, and the limit exists:

  y_t = (I_K − A_1)^{-1} ν + Σ_{j=0}^∞ A_1^j u_{t-j},   t = 0, ±1, ±2, ...,

and the constant portion (I_K − A_1)^{-1} ν can be denoted by μ. The matrix sequence converges according to linear algebra results. The random vector converges in mean square due to an important statistical lemma.

Convergence of sums of stochastically bounded processes

Theorem. Suppose (A_j) is an absolutely summable sequence of real (K × K) matrices and (z_t) is a sequence of K-dimensional random variables that are bounded by a common c ∈ R in the sense of E(z_t' z_t) ≤ c, t = 0, ±1, ±2, .... Then there exists a sequence of random variables (y_t) such that

  Σ_{j=−n}^{n} A_j z_{t-j} → y_t

as n → ∞, in quadratic mean. (y_t) is uniquely defined except on a set of probability 0.
A VAR(p) can be stacked into a companion VAR(1) form for Y_t = (y_t', ..., y_{t-p+1}')' with a (Kp × Kp) coefficient matrix A; the process is stable if all eigenvalues of A have modulus less than one. This condition is equivalent to the stability condition

  det(I_K − A_1 z − ... − A_p z^p) ≠ 0   for |z| ≤ 1,

which is usually more efficient to check. Equivalence follows from the determinant properties of partitioned matrices.

This is, however, still an inconvenient process of dimension Kp. Formally, the first K entries of the vector Y_t are obtained via the (K × Kp) matrix

  J = [I_K : 0 : ... : 0]

as y_t = J Y_t.
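The eigenvalue version of the stability check is easy to carry out in code: build the companion matrix and inspect its spectrum. A sketch, assuming hypothetical coefficient matrices A_1, A_2 for K = 2, p = 2:

```python
import numpy as np

# Hypothetical VAR(2) coefficients for a 2-variable system.
K, p = 2, 2
A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
A2 = np.array([[0.1, 0.0], [0.0, 0.1]])

# Companion ("big") matrix of dimension Kp x Kp.
A = np.zeros((K * p, K * p))
A[:K, :K] = A1
A[:K, K:] = A2
A[K:, :K] = np.eye(K * (p - 1))

# Stable iff all eigenvalues of A have modulus < 1, equivalently
# det(I_K - A1 z - A2 z^2) != 0 for |z| <= 1.
stable = np.all(np.abs(np.linalg.eigvals(A)) < 1)
print(stable)

# Selection matrix J = [I_K : 0 : ... : 0] recovers y_t = J Y_t.
J = np.hstack([np.eye(K), np.zeros((K, K * (p - 1)))])
```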
The Wold representation of the VAR(p)

Using J, it follows that

  y_t = J μ_Y + J Σ_{j=0}^∞ A^j U_{t-j}
      = μ + Σ_{j=0}^∞ J A^j J' J U_{t-j}
      = μ + Σ_{j=0}^∞ Φ_j u_{t-j}

for the stable and stationary VAR(p), a Wold representation with Φ_j = J A^j J'. This is the canonical or fundamental or prediction-error representation.

First and second moments of the VAR(p)

Applying the lemma to the MA representation yields E(y_t) = μ and

  Γ_y(h) = E(y_t − μ)(y_{t-h} − μ)'
         = E( Σ_{i=0}^{h-1} Φ_i u_{t-i} + Σ_{i=0}^∞ Φ_{h+i} u_{t-h-i} )( Σ_{j=0}^∞ Φ_j u_{t-h-j} )'
         = Σ_{i=0}^∞ Φ_{h+i} Σ_u Φ_i'.
The Wold-type representation with lag operators

Using the operator L defined by L y_t = y_{t-1} permits writing the VAR(p) model as

  y_t = ν + (A_1 L + ... + A_p L^p) y_t + u_t

or, with A(L) = I_K − A_1 L − ... − A_p L^p,

  A(L) y_t = ν + u_t.

Then, one may write Φ(L) = Σ_{j=0}^∞ Φ_j L^j and

  y_t = μ + Φ(L) u_t = A^{-1}(L)(ν + u_t),

thus formally A(L) Φ(L) = I or Φ(L) = A^{-1}(L). Note that A(L) is a polynomial and Φ(L) is a power series.

Remarks on the lag operator representation

- The property Φ(L) A(L) = I allows determining the Φ_j iteratively by comparing coefficient matrices;
- Note that μ = A^{-1}(L) ν = A^{-1}(1) ν and that A(1) = I_K − A_1 − ... − A_p;
- It is possible that A^{-1}(L) is a finite-order polynomial, while this is impossible for scalar processes;
- The MA representation exists iff the VAR(p) is stable, i.e. iff all zeros of det(A(z)) are outside the unit circle: A(L) is then called invertible.
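The coefficient comparison in Φ(L) A(L) = I gives the standard recursion Φ_0 = I_K and Φ_i = Σ_{j=1}^{min(i,p)} Φ_{i−j} A_j. A minimal sketch with hypothetical coefficient matrices:

```python
import numpy as np

# Compute MA coefficient matrices Phi_j from A(L)Phi(L) = I by comparing
# coefficient matrices. A1, A2 below are hypothetical example values.
K = 2
A = [np.array([[0.5, 0.1], [0.2, 0.3]]),   # A1
     np.array([[0.1, 0.0], [0.0, 0.1]])]   # A2
p = len(A)

n = 10                 # number of MA coefficients to compute
Phi = [np.eye(K)]      # Phi_0 = I_K
for i in range(1, n + 1):
    # Phi_i = sum_{j=1}^{min(i,p)} Phi_{i-j} A_j
    Phi.append(sum(Phi[i - j] @ A[j - 1] for j in range(1, min(i, p) + 1)))

# For a stable VAR the Phi_j die out geometrically.
print(np.linalg.norm(Phi[-1]))
```

Note that Φ_1 = A_1 drops out of the recursion immediately, which is a quick sanity check on the indexing.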
Formally, covariance stationarity of K-variate processes is defined by constancy of first moments, E(y_t) = μ for all t, and of second moments,

  E(y_t − μ)(y_{t-h} − μ)' = Γ_y(h)   for all t, h = 0, ±1, ±2, ....

Strict stationarity is defined by time invariance of all finite-dimensional joint distributions. Here, stationarity refers to covariance stationarity, for example in the proposition:

Proposition. A stable VAR(p) process y_t, t ∈ Z, is stationary.

Assume the VAR(1) is stable and stationary. The equation

  y_t − μ = A_1 (y_{t-1} − μ) + u_t

can be multiplied by (y_{t-h} − μ)' from the right. Application of expectation yields

  E(y_t − μ)(y_{t-h} − μ)' = A_1 E{(y_{t-1} − μ)(y_{t-h} − μ)'} + E u_t (y_{t-h} − μ)'

or

  Γ_y(h) = A_1 Γ_y(h − 1)

for h ≥ 1.
The system of Yule-Walker equations for VAR(1)

For the case h = 0, the last term is not 0:

  E(y_t − μ)(y_t − μ)' = A_1 E{(y_{t-1} − μ)(y_t − μ)'} + E u_t (y_t − μ)'

or

  Γ_y(0) = A_1 Γ_y(−1) + Σ_u = A_1 Γ_y(1)' + Σ_u,

which by substitution from the equation for h = 1 yields

  Γ_y(0) = A_1 Γ_y(0) A_1' + Σ_u,

which can be transformed to

  vec Γ_y(0) = (I_{K²} − A_1 ⊗ A_1)^{-1} vec Σ_u,

an explicit formula to obtain the process variance from given coefficient matrix and error variance.

How to use the Yule-Walker equations for VAR(1)

- For synthetic purposes, first evaluate Γ_y(0) from given A_1 and Σ_u;
- Then, the entire ACF is obtained from Γ_y(h) = A_1^h Γ_y(0);
- The big matrix in the h = 0 equation must be invertible, as the eigenvalues of A_1 ⊗ A_1 are products of pairs of eigenvalues of A_1, which all have modulus less than one;
- The same trick works for VAR(p), as they have a VAR(1) representation, but one has to invert ((Kp)² × (Kp)²) matrices;
- For analytic purposes, Â_1 = Γ̂_y(1) Γ̂_y(0)^{-1} can be used to estimate A_1 from the correlogram.
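The vec formula translates directly into a few lines of linear algebra. A sketch with hypothetical A_1 and Σ_u; the result is verified against the equation Γ_y(0) = A_1 Γ_y(0) A_1' + Σ_u it was derived from:

```python
import numpy as np

# Process variance of a VAR(1) via
#   vec Gamma_y(0) = (I_{K^2} - A1 (x) A1)^{-1} vec Sigma_u.
# A1 and Sigma_u are hypothetical example values.
K = 2
A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
Sigma_u = np.array([[1.0, 0.3], [0.3, 1.0]])

vec = lambda M: M.reshape(-1, order="F")   # column-stacking vec operator
g0 = np.linalg.solve(np.eye(K * K) - np.kron(A1, A1), vec(Sigma_u))
Gamma0 = g0.reshape(K, K, order="F")

# Check Gamma_y(0) = A1 Gamma_y(0) A1' + Sigma_u
assert np.allclose(Gamma0, A1 @ Gamma0 @ A1.T + Sigma_u)

# The whole ACF then follows from Gamma_y(h) = A1^h Gamma_y(0)
Gamma1 = A1 @ Gamma0
```

The Kronecker identity used here is vec(A X B') = (B ⊗ A) vec X with column-stacking vec, which is why `np.kron(A1, A1)` appears.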
Forecasting

The h-step predictor that minimizes the forecast MSE is the conditional expectation

  y_t(h) = E(y_{t+h} | y_s, s ≤ t).

Often, the casual notation E_t(y_{t+h}) is used. This property (proof constructive) also applies to vector processes and to VARs, where the MSE is defined by

  MSE(y_t(h)) = E{y_{t+h} − y_t(h)}{y_{t+h} − y_t(h)}'.

Proposition. Assume u_t is independent white noise (a martingale difference sequence with E(u_{t+1} | u_s, s ≤ t) = 0 suffices). Then, for a VAR(p),

  E_t(y_{t+1}) = ν + A_1 y_t + A_2 y_{t-1} + ... + A_p y_{t-p+1},

and, recursively,

  E_t(y_{t+2}) = ν + A_1 E_t(y_{t+1}) + A_2 y_t + ... + A_p y_{t-p+2},

etc., which allows the iterative evaluation of the forecast for all horizons.
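The recursive evaluation can be sketched as follows; ν, the A_j, and the sample values are hypothetical, and unknown future values are replaced by their own forecasts exactly as in the proposition:

```python
import numpy as np

# Recursive h-step forecast for a VAR(p); all numbers are hypothetical.
K, p = 2, 2
nu = np.array([1.0, 0.5])
A = [np.array([[0.5, 0.1], [0.2, 0.3]]),   # A1
     np.array([[0.1, 0.0], [0.0, 0.1]])]   # A2

def forecast(history, h):
    """Iterate E_t(y_{t+i}) = nu + sum_j A_j E_t(y_{t+i-j}),
    substituting forecasts for unknown future observations."""
    path = list(history)   # most recent observation last
    for _ in range(h):
        nxt = nu + sum(A[j] @ path[-1 - j] for j in range(p))
        path.append(nxt)
    return path[-1]

y_hist = [np.array([0.2, -0.1]), np.array([0.4, 0.3])]  # y_{t-1}, y_t
print(forecast(y_hist, 2))
```

For h = 1 this reduces to ν + A_1 y_t + A_2 y_{t-1}, the systematic part of the defining equation.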
Using the Wold-type MA representation y_t = μ + Σ_{j=0}^∞ Φ_j u_{t-j}, the forecast error variance is

  Σ_y(h) = MSE(y_t(h)) = Σ_{j=0}^{h-1} Φ_j Σ_u Φ_j',

which converges to Σ_y = Γ_y(0) for h → ∞. These MSE formulae can also be used to determine interval forecasts (confidence intervals).

A scheme analogous to p = 1 works for VAR(p) with p > 1, using J.

There are three (interdependent) approaches to the interpretation of VAR models:
1. Granger causality
2. Impulse response analysis
3. Forecast error variance decomposition (FEVD)
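For a VAR(1), where Φ_j = A_1^j, both the MSE formula and its limit are easy to check numerically. A sketch with hypothetical A_1 and Σ_u, verifying that Σ_y(h) approaches the process variance Γ_y(0) for large h:

```python
import numpy as np

# h-step forecast error variance Sigma_y(h) = sum_{j<h} Phi_j Sigma_u Phi_j'
# for a VAR(1) with Phi_j = A1^j. A1 and Sigma_u are hypothetical.
K = 2
A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
Sigma_u = np.array([[1.0, 0.3], [0.3, 1.0]])

def mse(h):
    return sum(np.linalg.matrix_power(A1, j) @ Sigma_u
               @ np.linalg.matrix_power(A1, j).T for j in range(h))

# Process variance Gamma_y(0) via the vec formula for comparison.
g0 = np.linalg.solve(np.eye(K * K) - np.kron(A1, A1),
                     Sigma_u.reshape(-1, order="F"))
Gamma0 = g0.reshape(K, K, order="F")

# As h grows, Sigma_y(h) converges to Gamma_y(0).
assert np.allclose(mse(50), Gamma0)
```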
Instantaneous causality and the non-unique MA representation

Consider the Cholesky factorization Σ_u = P P', with P lower triangular. Then, it holds that

  y_t = μ + Σ_{j=0}^∞ Φ_j P P^{-1} u_{t-j} = μ + Σ_{j=0}^∞ Θ_j w_{t-j},

with Θ_j = Φ_j P and w_t = P^{-1} u_t, such that Σ_w = P^{-1} Σ_u (P^{-1})' = I_K.

Impulse response analysis: the idea

The researcher wishes to add detail to the Granger-causality analysis and to quantify the effect of an impulse in a component variable y_{j,t} on another component variable y_{k,t}. The derivative ∂y_{k,t+h}/∂y_{j,t} cannot be determined from the VAR model. The derivative

  ∂y_{k,t+h}/∂u_{j,t},

however, is given by the (k, j) element of the MA coefficient matrix Φ_h.
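The orthogonalization step can be sketched in a few lines: factor Σ_u, rescale the MA matrices, and confirm that the transformed shocks w_t have identity covariance. Σ_u and the VAR(1) coefficients used for the Φ_j are hypothetical example values:

```python
import numpy as np

# Orthogonalized shocks via the Cholesky factor of Sigma_u.
# Sigma_u and A1 (used for Phi_j = A1^j in a VAR(1)) are hypothetical.
K = 2
A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
Sigma_u = np.array([[1.0, 0.3], [0.3, 1.0]])

P = np.linalg.cholesky(Sigma_u)   # Sigma_u = P P', P lower triangular
Theta = [np.linalg.matrix_power(A1, j) @ P for j in range(5)]  # Theta_j = Phi_j P

# w_t = P^{-1} u_t has identity covariance: P^{-1} Sigma_u P^{-1}' = I_K
Pinv = np.linalg.inv(P)
assert np.allclose(Pinv @ Sigma_u @ Pinv.T, np.eye(K))

# Orthogonalized response of variable k to shock j at horizon h: Theta[h][k, j]
print(Theta[1])
```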