

Vector autoregressions

Based on the book "New Introduction to Multiple Time Series Analysis" by Helmut Lütkepohl

Robert M. Kunst
robert.kunst@univie.ac.at

University of Vienna and Institute for Advanced Studies Vienna

November 3, 2011

Outline

Introduction

Stable VAR Processes
- Basic assumptions and properties
- Forecasting
- Structural VAR analysis

Introduction

Objectives of analyzing multiple time series

Main objectives of time series analysis may be:

1. Forecasting: prediction of the unknown future by looking at the known past:
   $$\hat{y}_{T+h} = f(y_T, y_{T-1}, \ldots)$$
   denotes the h-step prediction for the variable y;
2. Quantifying the dynamic response to an unexpected shock to a variable by the same variable h periods later and also by other related variables: impulse-response analysis;
3. Control: how to set a variable in order to achieve a given time path in another variable;
4. Description of system dynamics without further purpose.

Some basics: stochastic process

Assume a probability space $(\Omega, \mathcal{F}, \Pr)$. A (discrete) stochastic process is a real-valued function
$$y : Z \times \Omega \to \mathbb{R},$$
such that, for each fixed $t \in Z$, $y(t, \cdot)$ is a random variable. $Z$ is an index set that represents time, for example $Z = \mathbb{Z}$ or $Z = \mathbb{N}$.


Some basics: multivariate stochastic process

A (discrete) K-dimensional vector stochastic process is a real-valued function
$$y : Z \times \Omega \to \mathbb{R}^K,$$
such that, for each fixed $t \in Z$, $y(t, \cdot)$ is a K-dimensional random vector.

A realization is a sequence of vectors $y_t(\omega)$, $t \in Z$, for a fixed $\omega$. It is a function $Z \to \mathbb{R}^K$. A multiple time series is assumed to be a finite portion of a realization.

Given such a realization, the underlying stochastic process is called the data generation process (DGP).

Vector autoregressive processes

Let $y_t = (y_{1t}, \ldots, y_{Kt})'$, $\nu = (\nu_1, \ldots, \nu_K)'$, and
$$A_j = \begin{pmatrix} \alpha_{11,j} & \cdots & \alpha_{1K,j} \\ \vdots & \ddots & \vdots \\ \alpha_{K1,j} & \cdots & \alpha_{KK,j} \end{pmatrix}.$$
Then, a vector autoregressive process (VAR) satisfies the equation
$$y_t = \nu + A_1 y_{t-1} + \ldots + A_p y_{t-p} + u_t,$$
with $u_t$ a sequence of independently identically distributed random K-vectors with zero mean (conditions relaxed later).


Forecasting using a VAR

Assume $y_t$ follows a VAR(p). Then, the forecast $\hat{y}_{T+1}$ is given by
$$\hat{y}_{T+1} = \nu + A_1 y_T + \ldots + A_p y_{T-p+1},$$
i.e. the systematic part of the defining equation. Note that this also defines a forecast for each component of $y_{T+1}$.

A flowchart for VAR analysis

[Flowchart: specification and estimation of the VAR model, followed by model checking; if the model is rejected, return to specification and estimation; if the model is accepted, proceed to forecasting and structural analysis.]

Basic assumptions and properties

The VAR(p) model

The object of interest is the vector autoregressive process of order p that satisfies the equation
$$y_t = \nu + A_1 y_{t-1} + \ldots + A_p y_{t-p} + u_t, \quad t = 0, \pm 1, \pm 2, \ldots,$$
with $u_t$ assumed to be K-dimensional white noise, i.e. $\mathrm{E}(u_t) = 0$, $\mathrm{E}(u_s u_t') = 0$ for $s \neq t$, and $\mathrm{E}(u_t u_t') = \Sigma_u$ with $\Sigma_u$ nonsingular (conditions relaxed).

First we concentrate on the VAR(1) model
$$y_t = \nu + A_1 y_{t-1} + u_t.$$

Substituting in the VAR(1)

Continued substitution in the VAR(1) model yields
$$y_1 = \nu + A_1 y_0 + u_1,$$
$$y_2 = (I_K + A_1)\nu + A_1^2 y_0 + A_1 u_1 + u_2,$$
$$\vdots$$
$$y_t = (I_K + A_1 + \ldots + A_1^{t-1})\nu + A_1^t y_0 + \sum_{j=0}^{t-1} A_1^j u_{t-j},$$
such that $y_1, \ldots, y_t$ can be represented as a function of $y_0, u_1, \ldots, u_t$. All $y_t$, $t \geq 0$, are a function of just one starting value and the errors.
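The recursion and its closed form are easy to check numerically. Below is a minimal sketch, not part of the slides: the bivariate parameter values and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 2, 5
nu = np.array([1.0, 0.5])                      # intercept nu (assumed values)
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])                    # eigenvalues inside the unit circle
u = rng.normal(size=(T + 1, K))                # u_1, ..., u_T (row 0 unused)
y0 = np.zeros(K)

# Recursion y_t = nu + A1 y_{t-1} + u_t
y = [y0]
for t in range(1, T + 1):
    y.append(nu + A1 @ y[t - 1] + u[t])

# Closed form: y_t = (I + A1 + ... + A1^{t-1}) nu + A1^t y0 + sum_j A1^j u_{t-j}
P = [np.linalg.matrix_power(A1, j) for j in range(T + 1)]
closed = sum(P[j] for j in range(T)) @ nu + P[T] @ y0 \
         + sum(P[j] @ u[T - j] for j in range(T))

print(np.allclose(y[T], closed))               # True: both representations agree
```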


The Wold representation of the VAR(1)

If all eigenvalues of $A_1$ have modulus less than one, substitution can be continued using the $y_{-j}$, $j < 0$, and the limit exists:
$$y_t = (I_K - A_1)^{-1}\nu + \sum_{j=0}^{\infty} A_1^j u_{t-j}, \quad t = 0, \pm 1, \pm 2, \ldots,$$
and the constant portion $(I_K - A_1)^{-1}\nu$ can be denoted by $\mu$.

The matrix sequence converges according to linear algebra results. The random vector converges in mean square due to an important statistical lemma.

Convergence of sums of stochastically bounded processes

Theorem
Suppose $(A_j)$ is an absolutely summable sequence of real $(K \times K)$ matrices and $(z_t)$ is a sequence of K-dimensional random variables that are bounded by a common $c \in \mathbb{R}$ in the sense of
$$\mathrm{E}(z_t' z_t) \leq c, \quad t = 0, \pm 1, \pm 2, \ldots.$$
Then there exists a sequence of random variables $(y_t)$, such that
$$\sum_{j=-n}^{n} A_j z_{t-j} \to y_t,$$
as $n \to \infty$, in quadratic mean. $(y_t)$ is uniquely defined except on a set of probability 0.

Aspects of the convergent sum

The matrices converge geometrically and hence absolutely, and the theorem applies. The limit in the Wold representation is well defined.

- This is called a Wold representation, as Wold's Theorem provides an infinite-order moving-average representation for all univariate covariance-stationary processes.
- Note that the white-noise property was not used. The sum would even converge for time-dependent $u_t$.

Expectation of the stationary VAR(1)

The Wold-type representation implies
$$\mathrm{E}(y_t) = (I_K - A_1)^{-1}\nu = \mu.$$
This is due to the fact that $\mathrm{E}(u_t) = 0$ for the white-noise terms and a statistical theorem that permits exchanging the limit and expectation operations under the conditions of the lemma. Note that the white-noise property (uncorrelated sequence) is not used.


Second moments of the stationary VAR(1)

Lütkepohl presents a derivation of the cross-covariance function
$$\Gamma_y(h) = \mathrm{E}(y_t - \mu)(y_{t-h} - \mu)'$$
$$= \lim_{n \to \infty} \sum_{i=0}^{n} \sum_{j=0}^{n} A_1^i \, \mathrm{E}(u_{t-i} u_{t-h-j}')(A_1^j)'$$
$$= \lim_{n \to \infty} \sum_{i=0}^{n} A_1^{h+i} \Sigma_u (A_1^i)' = \sum_{i=0}^{\infty} A_1^{h+i} \Sigma_u (A_1^i)',$$
which uses $\mathrm{E}(u_t u_s') = 0$ for $s \neq t$, $\mathrm{E}(u_t u_t') = \Sigma_u$, and a corollary to the lemma that permits evaluation of second moments under the same conditions. Here, the white-noise property of $u_t$ is used.

The definition of a stable VAR(1)

Definition
A VAR(1) is called stable iff all eigenvalues of $A_1$ have modulus less than one. By a mathematical lemma, this condition is equivalent to
$$\det(I_K - A_1 z) \neq 0 \quad \text{for } |z| \leq 1.$$
No roots within or on the unit circle. Note that this definition differs from stability as defined by other authors. Stability is not equivalent to stationarity: a stable process started in $t = 1$ is not stationary; a backward-directed entirely unstable process is stationary.


Representation of VAR(p) as VAR(1)

All VAR(p) models of the form
$$y_t = \nu + A_1 y_{t-1} + \ldots + A_p y_{t-p} + u_t$$
can be written as VAR(1) models
$$Y_t = \boldsymbol{\nu} + \mathbf{A} Y_{t-1} + U_t,$$
with
$$\mathbf{A} = \begin{pmatrix} A_1 & A_2 & \cdots & A_{p-1} & A_p \\ I_K & 0 & \cdots & 0 & 0 \\ & \ddots & & & \vdots \\ 0 & 0 & \cdots & I_K & 0 \end{pmatrix}.$$

More on the state-space VAR(1) form

In the VAR(1) representation of a VAR(p), the vectors $Y_t$, $\boldsymbol{\nu}$, and $U_t$ have length Kp:
$$Y_t = \begin{pmatrix} y_t \\ y_{t-1} \\ \vdots \\ y_{t-p+1} \end{pmatrix}, \quad \boldsymbol{\nu} = \begin{pmatrix} \nu \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad U_t = \begin{pmatrix} u_t \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$
The big matrix $\mathbf{A}$ has dimension $Kp \times Kp$. This state-space form permits using all results from VAR(1) for the general VAR(p).
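Building the companion matrix is mechanical. A minimal sketch; the function name `companion` and the VAR(2) coefficient values are assumptions for illustration:

```python
import numpy as np

def companion(A_list):
    """Stack A_1, ..., A_p (each K x K) into the (Kp x Kp) companion matrix."""
    p, K = len(A_list), A_list[0].shape[0]
    A = np.zeros((K * p, K * p))
    A[:K, :] = np.hstack(A_list)          # first block row: A_1 ... A_p
    A[K:, :-K] = np.eye(K * (p - 1))      # identity blocks below the first row
    return A

A1 = np.array([[0.5, 0.1], [0.4, 0.5]])
A2 = np.array([[0.0, 0.0], [0.25, 0.0]])
A = companion([A1, A2])
# The VAR(2) is stable iff all eigenvalues of A lie inside the unit circle
print(np.max(np.abs(np.linalg.eigvals(A))) < 1)   # True
```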


Stability of the VAR(p)

Definition
A VAR(p) is called stable iff all eigenvalues of $\mathbf{A}$ have modulus less than one. By a mathematical lemma, this condition is equivalent to
$$\det(I_{Kp} - \mathbf{A} z) \neq 0 \quad \text{for } |z| \leq 1.$$
This condition is equivalent to the stability condition
$$\det(I_K - A_1 z - \ldots - A_p z^p) \neq 0 \quad \text{for } |z| \leq 1,$$
which is usually more efficient to check. Equivalence follows from the determinant properties of partitioned matrices.

The infinite-order MA representation of the VAR(p)

The stationary stable VAR(p) can be represented in the convergent infinite-order MA form
$$Y_t = \boldsymbol{\mu} + \sum_{j=0}^{\infty} \mathbf{A}^j U_{t-j}.$$
This is, however, still an inconvenient process of dimension Kp. Formally, the first K entries of the vector $Y_t$ are obtained via the $(K \times Kp)$ matrix
$$J = [I_K : 0 : \ldots : 0]$$
as $y_t = J Y_t$.


The Wold representation of the VAR(p)

Using $J$, it follows that
$$y_t = J\boldsymbol{\mu} + J \sum_{j=0}^{\infty} \mathbf{A}^j U_{t-j}$$
$$= \mu + \sum_{j=0}^{\infty} J \mathbf{A}^j J' \, J U_{t-j}$$
$$= \mu + \sum_{j=0}^{\infty} \Phi_j u_{t-j}$$
for the stable and stationary VAR(p), a Wold representation with $\Phi_j = J \mathbf{A}^j J'$. This is the canonical or fundamental or prediction-error representation.

First and second moments of the VAR(p)

Applying the lemma to the MA representation yields $\mathrm{E}(y_t) = \mu$ and
$$\Gamma_y(h) = \mathrm{E}(y_t - \mu)(y_{t-h} - \mu)'$$
$$= \mathrm{E}\left\{\left(\sum_{i=0}^{h-1} \Phi_i u_{t-i} + \sum_{i=0}^{\infty} \Phi_{h+i} u_{t-h-i}\right)\left(\sum_{j=0}^{\infty} \Phi_j u_{t-h-j}\right)'\right\}$$
$$= \sum_{i=0}^{\infty} \Phi_{h+i} \Sigma_u \Phi_i'.$$

The Wold-type representation with lag operators

Using the operator L defined by $L y_t = y_{t-1}$ permits writing the VAR(p) model as
$$y_t = \nu + (A_1 L + \ldots + A_p L^p) y_t + u_t$$
or, with $A(L) = I_K - A_1 L - \ldots - A_p L^p$,
$$A(L) y_t = \nu + u_t.$$
Then, one may write $\Phi(L) = \sum_{j=0}^{\infty} \Phi_j L^j$ and
$$y_t = \mu + \Phi(L) u_t = A^{-1}(L)(\nu + u_t),$$
thus formally $A(L)\Phi(L) = I_K$ or $\Phi(L) = A^{-1}(L)$. Note that $A(L)$ is a polynomial and $\Phi(L)$ is a power series.

Remarks on the lag operator representation

- The property $\Phi(L)A(L) = I_K$ allows determining the $\Phi_j$ iteratively by comparing coefficient matrices, as in the sketch below;
- Note that $\mu = A^{-1}(L)\nu = A^{-1}(1)\nu$ and that $A(1) = I_K - A_1 - \ldots - A_p$;
- It is possible that $A^{-1}(L)$ is a finite-order polynomial, while this is impossible for scalar processes;
- The MA representation exists iff the VAR(p) is stable, i.e. iff all zeros of $\det(A(z))$ are outside the unit circle: $A(L)$ is called invertible.
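Comparing coefficients in $\Phi(L)A(L) = I_K$ gives $\Phi_0 = I_K$ and $\Phi_i = \sum_{j=1}^{\min(i,p)} \Phi_{i-j} A_j$ for $i \geq 1$. A sketch of this recursion; the helper name `ma_coefficients` is hypothetical, and A1, A2 are the illustrative values from the companion-form sketch:

```python
import numpy as np

def ma_coefficients(A_list, n):
    """Phi_0 = I_K; Phi_i = sum_{j=1}^{min(i,p)} Phi_{i-j} A_j for i >= 1."""
    p, K = len(A_list), A_list[0].shape[0]
    Phi = [np.eye(K)]
    for i in range(1, n + 1):
        Phi.append(sum(Phi[i - j] @ A_list[j - 1]
                       for j in range(1, min(i, p) + 1)))
    return Phi

A1 = np.array([[0.5, 0.1], [0.4, 0.5]])
A2 = np.array([[0.0, 0.0], [0.25, 0.0]])
Phi = ma_coefficients([A1, A2], n=10)
print(np.allclose(Phi[1], A1))            # Phi_1 = A_1
print(np.allclose(Phi[2], A1 @ A1 + A2))  # Phi_2 = A_1^2 + A_2
```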

Remarks on stationarity

Formally, covariance stationarity of K-variate processes is defined by constancy of first moments, $\mathrm{E}(y_t) = \mu \; \forall t$, and of second moments,
$$\mathrm{E}(y_t - \mu)(y_{t-h} - \mu)' = \Gamma_y(h) = \Gamma_y(-h)' \quad \forall t, \; h = 0, \pm 1, \pm 2, \ldots$$
Strict stationarity is defined by time invariance of all finite-dimensional joint distributions. Here, stationarity refers to covariance stationarity, for example in the proposition:

Proposition
A stable VAR(p) process $y_t$, $t \in \mathbb{Z}$, is stationary.

Yule-Walker equations for VAR(1) processes

Assume the VAR(1) is stable and stationary. The equation
$$y_t - \mu = A_1 (y_{t-1} - \mu) + u_t$$
can be multiplied by $(y_{t-h} - \mu)'$ from the right. Application of expectation yields
$$\mathrm{E}(y_t - \mu)(y_{t-h} - \mu)' = A_1 \mathrm{E}\{(y_{t-1} - \mu)(y_{t-h} - \mu)'\} + \mathrm{E}\,u_t (y_{t-h} - \mu)'$$
or
$$\Gamma_y(h) = A_1 \Gamma_y(h-1)$$
for $h \geq 1$.


The system of Yule-Walker equations for VAR(1)

For the case h = 0, the last term is not 0:
$$\mathrm{E}(y_t - \mu)(y_t - \mu)' = A_1 \mathrm{E}\{(y_{t-1} - \mu)(y_t - \mu)'\} + \mathrm{E}\,u_t (y_t - \mu)'$$
or
$$\Gamma_y(0) = A_1 \Gamma_y(-1) + \Sigma_u = A_1 \Gamma_y(1)' + \Sigma_u,$$
which by substitution from the equation for h = 1 yields
$$\Gamma_y(0) = A_1 \Gamma_y(0) A_1' + \Sigma_u,$$
which can be transformed to
$$\mathrm{vec}\,\Gamma_y(0) = (I_{K^2} - A_1 \otimes A_1)^{-1} \mathrm{vec}\,\Sigma_u,$$
an explicit formula to obtain the process variance from given coefficient matrix and error variance.

How to use the Yule-Walker equations for VAR(1)

- For synthetic purposes, first evaluate $\Gamma_y(0)$ from given $A_1$ and $\Sigma_u$, as in the sketch below;
- Then, the entire ACF is obtained from $\Gamma_y(h) = A_1^h \Gamma_y(0)$;
- The big matrix in the h = 0 equation must be invertible, as the eigenvalues of $A_1 \otimes A_1$ are products of pairs of eigenvalues of $A_1$ and thus have modulus less than one;
- Sure, the same trick works for VAR(p), as they have a VAR(1) representation, but you have to invert $((Kp)^2 \times (Kp)^2)$ matrices;
- For analytic purposes, $A_1 = \Gamma_y(1)\,\Gamma_y(0)^{-1}$ can be used to estimate $A_1$ from the correlogram.
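A sketch of the synthetic use, with assumed values for $A_1$ and $\Sigma_u$; note that NumPy's column-major reshape (`order="F"`) matches the column-stacking vec operator:

```python
import numpy as np

A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.5]])
K = A1.shape[0]

# vec(Gamma_y(0)) = (I_{K^2} - A1 kron A1)^{-1} vec(Sigma_u)
vec_G0 = np.linalg.solve(np.eye(K * K) - np.kron(A1, A1),
                         Sigma_u.reshape(-1, order="F"))
Gamma0 = vec_G0.reshape(K, K, order="F")

# Check the fixed-point equation Gamma_y(0) = A1 Gamma_y(0) A1' + Sigma_u
print(np.allclose(Gamma0, A1 @ Gamma0 @ A1.T + Sigma_u))  # True

# Entire ACF: Gamma_y(h) = A1^h Gamma_y(0)
Gamma = [np.linalg.matrix_power(A1, h) @ Gamma0 for h in range(4)]
```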

Autocorrelations of stable VAR processes

Autocorrelations are often preferred to autocovariances. Formally, they are defined via
$$\rho_{ij}(h) = \frac{\gamma_{ij}(h)}{\sqrt{\gamma_{ii}(0)}\sqrt{\gamma_{jj}(0)}}$$
from the autocovariances for $i, j = 1, \ldots, K$ and $h \in \mathbb{Z}$. The matrix formula
$$R_y(h) = D^{-1} \Gamma_y(h) D^{-1}$$
with $D = \mathrm{diag}(\gamma_{11}(0)^{1/2}, \ldots, \gamma_{KK}(0)^{1/2})$ is given for completeness.

Forecasting

The forecasting problem

Based on an information set $\Omega_t \supseteq \{y_s, s \leq t\}$ available at t, the forecaster searches for an approximation $y_t(h)$ to the unknown $y_{t+h}$ that minimizes some expected loss or cost
$$\mathrm{E}\{g(y_{t+h} - y_t(h)) \mid \Omega_t\}.$$
The most common loss function $g(x) = x^2$ minimizes the forecast mean squared error (MSE). t is the forecast origin, h is the forecast horizon, $y_t(h)$ is an h-step predictor.


Conditional expectation

Proposition
The h-step predictor that minimizes the forecast MSE is the conditional expectation
$$y_t(h) = \mathrm{E}(y_{t+h} \mid y_s, s \leq t).$$
Often, the casual notation $\mathrm{E}_t(y_{t+h})$ is used.

This property (proof constructive) also applies to vector processes and to VARs, where the MSE is defined by
$$\mathrm{MSE}(y_t(h)) = \mathrm{E}\{y_{t+h} - y_t(h)\}\{y_{t+h} - y_t(h)\}'.$$

Conditional expectation in a VAR

Assume $u_t$ is independent white noise (a martingale difference sequence with $\mathrm{E}(u_{t+1} \mid u_s, s \leq t) = 0$ suffices); then for a VAR(p)
$$\mathrm{E}_t(y_{t+1}) = \nu + A_1 y_t + A_2 y_{t-1} + \ldots + A_p y_{t-p+1},$$
and, recursively,
$$\mathrm{E}_t(y_{t+2}) = \nu + A_1 \mathrm{E}_t(y_{t+1}) + A_2 y_t + \ldots + A_p y_{t-p+2},$$
etc., which allows the iterative evaluation for all horizons.
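A sketch of the iterative predictor; the function name `var_forecast`, the intercept, and the terminal observations are assumptions for illustration:

```python
import numpy as np

def var_forecast(y_last, nu, A_list, h):
    """Iterative forecasts E_t(y_{t+1}), ..., E_t(y_{t+h}) for a VAR(p).
    y_last: rows y_{t-p+1}, ..., y_t (most recent observation last)."""
    p = len(A_list)
    hist = [y_last[-1 - j] for j in range(p)]   # y_t, y_{t-1}, ..., y_{t-p+1}
    out = []
    for _ in range(h):
        f = nu + sum(A_list[j] @ hist[j] for j in range(p))
        out.append(f)
        hist = [f] + hist[:-1]                  # forecast replaces the oldest lag
    return np.array(out)

A1 = np.array([[0.5, 0.1], [0.4, 0.5]])
A2 = np.array([[0.0, 0.0], [0.25, 0.0]])
nu = np.array([0.1, 0.2])
y_last = np.array([[0.4, 0.3],     # y_{t-1}
                   [0.5, 0.1]])    # y_t
print(var_forecast(y_last, nu, [A1, A2], h=3))
```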


Larger horizons for a VAR(1)

By repeated insertion, the following formula is easily obtained:
$$\mathrm{E}_t(y_{t+h}) = (I_K + A_1 + \ldots + A_1^{h-1})\nu + A_1^h y_t,$$
which implies that the forecast tends to become trivial as h increases, given the geometric convergence in the last term.

Forecast MSE for VAR(1)

The MA representation $y_t = \mu + \sum_{j=0}^{\infty} A_1^j u_{t-j}$ clearly decomposes $y_{t+h}$ into the predictor known in t and the remaining error, such that
$$y_{t+h} - y_t(h) = \sum_{j=0}^{h-1} A_1^j u_{t+h-j},$$
and
$$\Sigma_y(h) = \mathrm{MSE}(y_t(h)) = \mathrm{E}\left\{\left(\sum_{j=0}^{h-1} A_1^j u_{t+h-j}\right)\left(\sum_{j=0}^{h-1} A_1^j u_{t+h-j}\right)'\right\}$$
$$= \sum_{j=0}^{h-1} A_1^j \Sigma_u (A_1^j)' = \mathrm{MSE}(y_t(h-1)) + A_1^{h-1} \Sigma_u (A_1^{h-1})',$$
such that the MSE increases in h.

Forecast MSE for general VAR(p)

Using the Wold-type MA representation $y_t = \mu + \sum_{j=0}^{\infty} \Phi_j u_{t-j}$, a scheme analogous to p = 1 works for VAR(p) with p > 1, using $J$. The forecast error variance is
$$\Sigma_y(h) = \mathrm{MSE}(y_t(h)) = \sum_{j=0}^{h-1} \Phi_j \Sigma_u \Phi_j',$$
which converges to $\Sigma_y = \Gamma_y(0)$ for $h \to \infty$.

These MSE formulae can also be used to determine interval forecasts (confidence intervals).

Structural VAR analysis

There are three (interdependent) approaches to the interpretation of VAR models:
1. Granger causality
2. Impulse response analysis
3. Forecast error variance decomposition (FEVD)
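A sketch of the MSE formula, reusing the hypothetical `ma_coefficients` helper and the illustrative A1, A2 from the earlier sketches:

```python
import numpy as np

def forecast_mse(A_list, Sigma_u, h):
    """Sigma_y(h) = sum_{j=0}^{h-1} Phi_j Sigma_u Phi_j'."""
    Phi = ma_coefficients(A_list, h - 1)     # helper from the earlier sketch
    return sum(Ph @ Sigma_u @ Ph.T for Ph in Phi)

Sigma_u = np.array([[1.0, 0.3], [0.3, 0.5]])
for h in (1, 2, 10, 50):
    print(h, np.diag(forecast_mse([A1, A2], Sigma_u, h)))
# The diagonal entries increase in h and converge to the process variances
```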


Granger causality

Assume two M- and N-dimensional sub-processes x and z of a K-dimensional process y, such that $y = (z', x')'$.

Definition
The process $x_t$ is said to cause $z_t$ in Granger's sense iff
$$\Sigma_z(h \mid \Omega_t) < \Sigma_z(h \mid \Omega_t \setminus \{x_s, s \leq t\})$$
for some t and h.

The set $\Omega_t$ is an information set containing $y_s$, $s \leq t$; the matrix inequality < is defined via positive definiteness of the difference; the correct interpretation of the $\setminus$ operator is doubtful.

The property is not antisymmetric: x may cause z and z may also cause x: feedback.

Instantaneous Granger causality

Again, assume two M- and N-dimensional sub-processes x and z of a K-dimensional process y.

Definition
There is instantaneous causality between processes $x_t$ and $z_t$ in Granger's sense iff
$$\Sigma_z(1 \mid \Omega_t \cup \{x_{t+1}\}) < \Sigma_z(1 \mid \Omega_t).$$

The property is symmetric: x and z can be exchanged in the definition: instantaneous causality knows no direction.

Granger causality in a MA model

A stationary stable VAR has an MA representation, so Granger causality can be checked on that one. Alternatively, consider the partitioned VAR.

Assume the representation
$$y_t = \begin{pmatrix} z_t \\ x_t \end{pmatrix} = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} + \begin{pmatrix} \Phi_{11}(L) & \Phi_{12}(L) \\ \Phi_{21}(L) & \Phi_{22}(L) \end{pmatrix} \begin{pmatrix} u_{1t} \\ u_{2t} \end{pmatrix}.$$
It is easily motivated that x does not cause z iff $\Phi_{12,j} = 0$ for all j.

Granger causality in a VAR

$$y_t = \begin{pmatrix} z_t \\ x_t \end{pmatrix} = \begin{pmatrix} \nu_1 \\ \nu_2 \end{pmatrix} + \sum_{j=1}^{p} \begin{pmatrix} A_{11,j} & A_{12,j} \\ A_{21,j} & A_{22,j} \end{pmatrix} \begin{pmatrix} z_{t-j} \\ x_{t-j} \end{pmatrix} + \begin{pmatrix} u_{1t} \\ u_{2t} \end{pmatrix}.$$
It is easily shown that x does not cause z iff $A_{12,j} = 0$, $j = 1, \ldots, p$ (block inverse of matrix). A numerical illustration follows below.
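A small numerical check of the block-zero characterization, with assumed coefficients whose upper-right entries $A_{12,j}$ are zero (reusing the hypothetical `ma_coefficients` helper): the MA coefficients then inherit $\Phi_{12,j} = 0$ for all j.

```python
import numpy as np

A1 = np.array([[0.5, 0.0],
               [0.3, 0.4]])      # A_{12,1} = 0: x does not enter the z equation
A2 = np.array([[0.2, 0.0],
               [0.1, 0.1]])      # A_{12,2} = 0
Phi = ma_coefficients([A1, A2], n=20)   # helper from the earlier sketch
print(all(np.isclose(Ph[0, 1], 0.0) for Ph in Phi))   # True: Phi_{12,j} = 0
```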


Remarks on testing for Granger causality in a VAR

- The property that $\Phi_{12,j} = 0$ characterizes non-causality is not restricted to VARs: it works for any process with a Wold-type MA representation;
- The property may also be generalized to x and z being two sub-vectors of y with M + N < K. Some extensions, however, do not work properly for the VAR representation, just for the MA representation;
- The definition is sometimes modified to h-step causality, meaning $x_t$ does not improve $z_t(j)$, $j < h$, but does improve $z_t(h)$: complications in naive testing for the VAR form, though not for the MA form.

Characterization of instantaneous causality

Proposition
Let $y_t$ be a VAR with nonsingular innovation variance matrix $\Sigma_u$. There is no instantaneous causality between $x_t$ and $z_t$ iff
$$\mathrm{E}(u_{1t} u_{2t}') = 0.$$

This condition is certainly symmetric.


Instantaneous causality and the non-unique MA representation

Consider the Cholesky factorization $\Sigma_u = P P'$, with P lower triangular. Then, it holds that
$$y_t = \mu + \sum_{j=0}^{\infty} \Phi_j P P^{-1} u_{t-j} = \mu + \sum_{j=0}^{\infty} \Theta_j w_{t-j},$$
with $\Theta_j = \Phi_j P$ and $w = P^{-1} u$ and
$$\Sigma_w = P^{-1} \Sigma_u (P^{-1})' = I_K.$$
In this form, instantaneous causality corresponds to $\Theta_{21,0} \neq 0$, which looks asymmetric. An analogous form and condition is achieved by exchanging x and z.

Impulse response analysis: the idea

The researcher wishes to add detail to the Granger-causality analysis and to quantify the effect of an impulse in a component variable $y_{j,t}$ on another component variable $y_{k,t}$.

The derivative $\partial y_{k,t+h} / \partial y_{j,t}$ cannot be determined from the VAR model. The derivative
$$\frac{\partial y_{k,t+h}}{\partial u_{j,t}}$$
corresponds to the (k, j) entry in the matrix $\Phi_h$ of the MA representation. It is not uniquely determined. The matrix of graphs of $\phi_{kj,h}$ versus h is called the impulse response function (IRF).

Impulse response analysis: general properties

- If $y_j$ does not Granger-cause $y_k$, the corresponding impulse response in (k, j) is constant zero;
- If the first p(K-1) values of an impulse response are 0, then all values are 0;
- If the VAR is stable, all impulse response functions must converge to 0 as $h \to \infty$;
- It is customary to scale the impulse responses by the standard deviation of the response variable, $\sqrt{\sigma_{kk}}$;
- The impulse response based on the canonical MA representation $\Phi_h$, $h \in \mathbb{N}$, ignores the correlation across the components of u in $\Sigma_u$ and may not correspond to the true reaction.

Orthogonal impulse response

Re-consider the alternative MA representation based on
$$\Sigma_u = P P', \quad \Theta_j = \Phi_j P, \quad w = P^{-1} u,$$
that is,
$$y_t = \mu + \sum_{j=0}^{\infty} \Theta_j w_{t-j}.$$
Because of $\Sigma_w = I_K$, shocks are orthogonal. Note that $w_j$ is a linear function of the $u_k$, $k \leq j$. The resulting matrix of graphs of $\theta_{kj,h}$ versus h is an orthogonal impulse response function (OIRF).
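A sketch of the orthogonalization, with an assumed $\Sigma_u$ and the coefficients and hypothetical `ma_coefficients` helper from the earlier sketches; `np.linalg.cholesky` returns the lower-triangular factor P:

```python
import numpy as np

Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.5]])
P = np.linalg.cholesky(Sigma_u)           # lower triangular, Sigma_u = P P'
Phi = ma_coefficients([A1, A2], n=20)     # helper from the earlier sketch
Theta = [Ph @ P for Ph in Phi]            # orthogonalized responses, Theta_0 = P

# Sigma_w = P^{-1} Sigma_u (P^{-1})' = I_K: the transformed shocks are orthogonal
Pinv = np.linalg.inv(P)
print(np.allclose(Pinv @ Sigma_u @ Pinv.T, np.eye(2)))   # True
```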


Orthogonal impulse response: properties

- Because of the multiplication by the matrix P, diagonal entries in $\Theta_0$ will not be ones. This problem can be remedied simply via a diagonal matrix, such that $\Theta_0$ has diagonal ones and $\Sigma_w$ is diagonal;
- The OIRF can be quite different from the IRF based on $\Phi_h$. If there is no instantaneous causality, both will coincide;
- The orthogonal IRF based on a re-ordering of variable components will differ from the correspondingly re-ordered OIRF. In addition to the permutations, a continuum of OIRF versions may be considered.

Ways out of the arbitrariness dilemma

- Some researchers suggest arranging the vector of components such that the a priori most exogenous variable appears first, etc.;
- The generalized impulse response function (GIRF) according to Pesaran summarizes the OIRF for each response variable suffering the maximum response (coming last in the vector). It is not an internally consistent IRF;
- So-called structural VARs attempt to identify the shocks from economic theory. They often use an additional matrix $A_0$ that permits an immediate reaction of a component $y_k$ to another $y_j$ and various identification restrictions. They may also be over-identified and restrict the basic VAR(p) model.


Decomposition of the h-step error

Starting from an orthogonal MA representation with $\Sigma_w = I_K$,
$$y_t = \mu + \sum_{i=0}^{\infty} \Theta_i w_{t-i},$$
the error of an h-step forecast is
$$y_{t+h} - y_t(h) = \sum_{i=0}^{h-1} \Theta_i w_{t+h-i},$$
and for the jth component
$$y_{j,t+h} - y_{j,t}(h) = \sum_{i=0}^{h-1} \sum_{k=1}^{K} \theta_{jk,i} w_{k,t+h-i}.$$
All hK terms are orthogonal, and this error can be decomposed into the K contributions from the component errors.

Forecast error variance decomposition

Consider the variance of the jth forecast component
$$\mathrm{MSE}(y_{j,t}(h)) = \sum_{i=0}^{h-1} \sum_{k=1}^{K} \theta_{jk,i}^2.$$
The share that is due to the kth component error,
$$\omega_{jk,h} = \frac{\sum_{i=0}^{h-1} \theta_{jk,i}^2}{\mathrm{MSE}(y_{j,t}(h))},$$
defines the forecast error variance decomposition (FEVD) and is often tabulated or plotted versus h for $j, k = 1, \ldots, K$.
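A sketch of the FEVD computation, reusing the hypothetical `ma_coefficients` helper and assumed parameter values; each row of the returned matrix sums to one:

```python
import numpy as np

def fevd(A_list, Sigma_u, h):
    """omega_{jk,h}: share of the h-step forecast error variance of
    component j attributable to the kth orthogonalized shock."""
    P = np.linalg.cholesky(Sigma_u)
    Theta = [Ph @ P for Ph in ma_coefficients(A_list, h - 1)]
    contrib = sum(Th ** 2 for Th in Theta)        # (j, k): sum_i theta_{jk,i}^2
    mse = contrib.sum(axis=1, keepdims=True)      # MSE(y_{j,t}(h)) row by row
    return contrib / mse                          # each row sums to one

A1 = np.array([[0.5, 0.1], [0.4, 0.5]])
A2 = np.array([[0.0, 0.0], [0.25, 0.0]])
Sigma_u = np.array([[1.0, 0.3], [0.3, 0.5]])
print(fevd([A1, A2], Sigma_u, h=4))
```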

Invariants and others in structural analysis

1. Granger causality is independent of the choice of Wold-type MA representation. It is there or it is not;
2. Impulse response functions depend on the chosen representation. OIRF may differ for distinct orderings of the component variables;
3. Forecast error variance decomposition inherits the problems of IRF analysis: unique only in the absence of instantaneous causality.

