
Cointegration Hypothesis testing and identification

Testing restrictions
We have so far seen estimation and other issues associated with a cointegrated VAR model.
But as in any empirical application, in a cointegrated VAR system too we can systematically test
many hypotheses of interest, mainly dictated by theoretical considerations. Though such restrictions
on the cointegrating vectors and the loadings matrix do not help us identify these vectors, they are
of interest in themselves. As an example, by testing for the exclusion of a particular variable from a
cointegrating vector, potentially irrelevant variables may be tested out of the system, thus reducing
the dimension of the analysis. Many other interesting propositions, like the concept of weak exogeneity,
can be tested on the loadings matrix too. In this module, we shall see how to formulate such
restrictions in two different ways: one in terms of free or unrestricted parameters, and the other in
terms of explicit restrictions, which is the more traditional way. Which way one follows is entirely a
matter of taste; but as a practice, we shall demonstrate the use of both.

Formulating hypotheses as restrictions on β:


Restrictions on the β vector can be imposed in terms of s_i free parameters or in terms of m_i
restrictions. We first specify β in terms of free parameters.
Let φ_i be the (s_i × 1) redefined coefficient vector and H_i a (N1 × s_i) design matrix of known
elements, for i = 1, . . . , r. Here N1 is the dimension of Z_t in the VAR model, that is, N plus any
deterministic variables and constant included in the VAR model. These are the notations for
formulating the hypotheses in terms of free parameters.
In terms of restrictions, we instead specify matrices R_i of size (N1 × m_i), where m_i = N1 - s_i
is the number of restrictions on β_i, such that R'_1 β_1 = 0, . . . , R'_r β_r = 0.
Let us illustrate these with the help of a vector of variables, Z_t = (m^r_t, y^r_t, Δp_t, R_m,t, R_b,t, D_s)'.
These variables are typically used in macro/monetary relations and D_s is a dummy variable. Let us
suppose there are 3 cointegrating relations.
Note that, when we estimate, a cointegrating relation will contain all the variables in the Z vector.
It may so happen that the coefficient attached to a particular variable is very near zero, but we may
like to check if it can be statistically considered to be so; this is exactly what we check in any
hypothesis test. Hence, the first cointegrating relation, β'_1 Z_t, should actually look like

β'_1 Z_t = β_11 m^r_t + β_12 y^r_t + β_13 Δp_t + β_14 R_m,t + β_15 R_b,t + β_16 D_s,   (Unrestricted).


But to illustrate hypothesis testing involving cointegrating vectors, let us suppose, for exposition's
sake, that the first cointegrating relation looks like

β'_1 Z_t = [(m^r_t - y^r_t) - b_1 (R_m,t - R_b,t) - b_2 D_s],   (Restricted).


Note here that the inflation rate has been omitted. Let us demonstrate how we arrived at the
restricted cointegrating vector from the unrestricted one, first by using only the free parameters and
then by using the restrictions only.
Treating (m^r_t - y^r_t) and (R_m,t - R_b,t) each as one variable, we may say that there are three free
parameters. Note also that in this expositional cointegrating relation, the coefficients are such that
β_12 = -β_11 and β_15 = -β_14. This simply means that the coefficients of m^r_t and y^r_t are equal in size
but opposite in sign; the same is true of the coefficients of R_m,t and R_b,t. Now, to differentiate
between the restricted and the unrestricted vectors, let us re-define the cointegrating vector β_1 such
that φ_11 = β_11 = -β_12, φ_12 = β_14 = -β_15 and φ_13 = β_16. With this, the re-defined coefficient
vector φ_1 is a (3 × 1) vector. Hence,

β_1 = H_1 φ_1 =

[  1   0   0 ]               [  φ_11 ]
[ -1   0   0 ]   [ φ_11 ]    [ -φ_11 ]
[  0   0   0 ]   [ φ_12 ]  = [   0   ]
[  0   1   0 ]   [ φ_13 ]    [  φ_12 ]
[  0  -1   0 ]               [ -φ_12 ]
[  0   0   1 ]               [  φ_13 ]
With this we can write the expositional cointegrating vector as φ_11 (m^r_t - y^r_t) + φ_12 (R_m,t - R_b,t) +
φ_13 D_s. Notice next that in the expositional cointegrating relation we have normalized on the first
variable, that is, we have set the coefficient of the first variable to 1, so that the normalized
cointegrating vector is (1, -1, 0, φ_12/φ_11, -φ_12/φ_11, φ_13/φ_11)'. We shall simplify further and
assume that b_1 = -φ_12/φ_11 and b_2 = -φ_13/φ_11, so that the normalized cointegrating vector is
(1, -1, 0, -b_1, b_1, -b_2)'. Thus, we get the first expositional restricted cointegrating relation as

(1, -1, 0, -b_1, b_1, -b_2) Z_t = [(m^r_t - y^r_t) - b_1 (R_m,t - R_b,t) - b_2 D_s].

We shall now demonstrate, for the same vector, how to arrive at the restricted cointegrating vector
using only the implied restrictions on the first cointegrating vector. First we fix the dimension
of the R_1 matrix, which is (6 × 3). In terms of restrictions, notice that β_11 = -β_12, so that
β_11 + β_12 = 0; β_13 = 0; and β_14 = -β_15, so that β_14 + β_15 = 0. With this we get the first restricted
cointegrating vector as

            [ 1  1  0  0  0  0 ]
R'_1 β_1 =  [ 0  0  1  0  0  0 ]  (β_11, β_12, β_13, β_14, β_15, β_16)'  =  0
            [ 0  0  0  1  1  0 ]

Using the same logic on the second restricted cointegrating vector,

β'_2 Z_t = y^r_t - b_3 (Δp_t - R_b,t),

we can write, using the free parameters: notice that there are only two free parameters, so that s_2 = 2
and H_2 is now a (6 × 2) matrix:

                [ 0   0 ]               [   0   ]
                [ 1   0 ]               [  φ_21 ]
β_2 = H_2 φ_2 = [ 0  -1 ]   [ φ_21 ]  = [ -φ_22 ]
                [ 0   0 ]   [ φ_22 ]    [   0   ]
                [ 0   1 ]               [  φ_22 ]
                [ 0   0 ]               [   0   ]
Similarly, in terms of restrictions, with the R_2 matrix now being a (6 × 4) matrix, we have

            [ 1  0  0  0  0  0 ]
R'_2 β_2 =  [ 0  0  1  0  1  0 ]  (β_21, β_22, β_23, β_24, β_25, β_26)'  =  0
            [ 0  0  0  1  0  0 ]
            [ 0  0  0  0  0  1 ]

And, for the third restricted cointegrating vector, given by

β'_3 Z_t = (R_m,t - R_b,t) + b_4 D_s,

one can show that, using free parameters and with H_3 being a (6 × 2) matrix,

                [ 0   0 ]               [   0   ]
                [ 0   0 ]               [   0   ]
β_3 = H_3 φ_3 = [ 0   0 ]   [ φ_31 ]  = [   0   ]
                [ 1   0 ]   [ φ_32 ]    [  φ_31 ]
                [-1   0 ]               [ -φ_31 ]
                [ 0   1 ]               [  φ_32 ]
And, in terms of restrictions, we have the R_3 matrix as a (6 × 4) matrix, arranged as

            [ 1  0  0  0  0  0 ]
R'_3 β_3 =  [ 0  1  0  0  0  0 ]  (β_31, β_32, β_33, β_34, β_35, β_36)'  =  0
            [ 0  0  1  0  0  0 ]
            [ 0  0  0  1  1  0 ]
Note the following:
R_i = H_{i,⊥}, that is, R'_i H_i = 0.
Since such testing is normally done after the rank has been determined, these restrictions are
null hypotheses on the stationary linear combinations of the variables.

Same restrictions on all cointegrating vectors


For some reason, we may be interested in testing for the exclusion of a particular variable from
all cointegrating vectors; this results in the same exclusion restriction on every cointegrating
vector. Or we may want to check whether some well-known economic relation is common to all relations.
To be more specific, suppose we want to test whether the relation (m^r_t - y^r_t) is common to all
cointegrating relations. How do we test this?
We continue with the same vector of variables Z_t as before and again assume that we have three
cointegrating relations, but we shall not refer back to the three expositional restricted cointegrating
vectors used earlier. For this set-up, the cointegrating relations are simply given as
β'_1 Z_t, β'_2 Z_t, β'_3 Z_t respectively. Writing out the first cointegrating relation explicitly, we have

β'_1 Z_t = β_11 m^r_t + β_12 y^r_t + β_13 Δp_t + β_14 R_m,t + β_15 R_b,t + β_16 D_s.

The restriction we want to test implies that this relation contains the term β_11 (m^r_t - y^r_t). Since
our aim is to check whether this is common to all cointegrating relations, the other two cointegrating
vectors must contain β_21 (m^r_t - y^r_t) and β_31 (m^r_t - y^r_t), where β_i1 denotes the coefficient of
m^r_t in the i-th vector. In other words, in every vector the coefficient of y^r_t equals minus the
coefficient of m^r_t. If we re-define the restricted cointegrating vectors in terms of free-parameter
vectors φ_1, φ_2 and φ_3, we can impose these restrictions either through the free parameters or
through the restrictions. Using the free parameters, with a (6 × 5) design matrix H, the restrictions
can be expressed compactly as β = Hφ:
         [  1  0  0  0  0 ]
         [ -1  0  0  0  0 ]   [ φ_11  φ_12  φ_13 ]
β = Hφ = [  0  1  0  0  0 ]   [ φ_21  φ_22  φ_23 ]
         [  0  0  1  0  0 ]   [ φ_31  φ_32  φ_33 ]
         [  0  0  0  1  0 ]   [ φ_41  φ_42  φ_43 ]
         [  0  0  0  0  1 ]   [ φ_51  φ_52  φ_53 ]

However, expressing this in terms of restrictions alone is easier, since the R matrix is of dimension
(N1 × m), where m is the number of restrictions in each vector. For the present case this simply
means

R'β = 0,   with R' = (1 1 0 0 0 0).

And the transformed data vector for this set-up becomes

H'Z_t = ( (m^r - y^r)_t, Δp_t, R_m,t, R_b,t, D_s )'.

Note that, if this restriction is accepted, an important consequence is that, instead of using the two
variables m^r_t and y^r_t separately, we use the relation (m^r_t - y^r_t) as one variable, so that the
dimension of the system is now 5. Such restrictions are common in economic theory. The relation (m^r - y^r)_t
is generally understood as measuring money income velocity; (R_m,t - R_b,t) is defined as the rate spread;
and (R - Δp) measures the real interest rate.
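The data transformation can be sketched numerically as follows (Python/NumPy; the observation values are hypothetical, chosen only to show how H'Z_t stacks (m^r - y^r)_t with the remaining variables):

```python
import numpy as np

# H imposes the common (m^r - y^r) restriction: beta = H*phi, with H of size (6 x 5).
H = np.array([[ 1, 0, 0, 0, 0],
              [-1, 0, 0, 0, 0],
              [ 0, 1, 0, 0, 0],
              [ 0, 0, 1, 0, 0],
              [ 0, 0, 0, 1, 0],
              [ 0, 0, 0, 0, 1]])
R = np.array([[1, 1, 0, 0, 0, 0]]).T   # the single restriction; note R'H = 0
assert np.all(R.T @ H == 0)

# A hypothetical observation Z_t = (m^r, y^r, dp, Rm, Rb, Ds)'.
Zt = np.array([5.0, 4.2, 0.03, 0.06, 0.08, 1.0])
print(H.T @ Zt)   # ((m^r - y^r), dp, Rm, Rb, Ds), close to [0.8, 0.03, 0.06, 0.08, 1.0]
```

Estimation then proceeds on the 5-dimensional transformed data vector H'Z_t rather than the original 6-dimensional Z_t.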
However, the following important points need to be noted.

- Note that the number of restrictions m that we can impose on the variables is constrained by
  the fact that N1 - m ≥ r, since the restricted vectors must still span an r-dimensional space.
  This means that here we can impose only one more restriction. For example, we can check if the
  interest rate spread (R_m - R_b) is common to all cointegrating relations. And if this is also
  accepted, then the transformed data vector becomes
  H'Z_t = ( (m^r - y^r)_t, Δp_t, (R_m - R_b)_t, D_s )'. This model will produce exactly three
  eigenvalues and is testable.
- Remember that the restricted model is not going to re-estimate the number of cointegrating
  relations. What we are interested in is the log-likelihood value of the restricted model, given
  that we have already estimated three cointegrating vectors in the unrestricted model, which
  identified these cointegrating relations with the full set of variables. So what we are
  basically looking at is restrictions within the estimated cointegrating vectors; the number of
  such vectors is not going to change in the restricted version, but the number of variables within
  each vector may change. For instance, if the above restriction that both the relation (m^r - y^r)_t
  and the interest rate spread (R_m - R_b)_t are common to all three cointegrating vectors
  has been accepted, then the number of variables in each cointegrating relation will be four
  instead of the original six. The cointegrating rank, however, will remain the same.

Estimation in restricted cointegrated systems
We have not yet mentioned anything about how we are going to estimate restricted models. Note
that in all cases of hypothesis testing involving the cointegrated VAR model, the statistical test
that is generally used is the likelihood ratio (LR) test. Thus, following the general practice, we shall
estimate both the restricted and the unrestricted versions and use the respective log-likelihood values
in the test.
We shall outline below the steps involved in the estimation of the restricted model, subject to
the condition that the same restriction is applied to all the cointegrating vectors. Let us take, for
example, that the relation (m^r - y^r)_t is common to all the vectors.

- Estimate the unrestricted model following the steps given on pages 22 through 25 and calculate
  the log-likelihood value using the first three eigenvalues.
- Before estimating the restricted model, note that the transformed VECM looks like

  ΔZ_t = α φ' H'Z_{t-1} + Γ_1 ΔZ_{t-1} + Γ_2 ΔZ_{t-2} + ... + Γ_{p-1} ΔZ_{t-p+1} + e_t,

  implying that the unrestricted cointegrating vector β has been replaced by the restricted
  cointegrating vector β^c = Hφ.
- We again go through the same steps outlined on pages 25 through 27, with the only difference
  that in step I.2 we regress H'Z_{t-1} on the short-run dynamics, and then go through the rest of
  the steps.
- Now we calculate the LR statistic as

  -2(ln L_A - ln L_0) = T { [ln(1 - λ̂^c_1) - ln(1 - λ̂_1)] + [ln(1 - λ̂^c_2) - ln(1 - λ̂_2)] + ... + [ln(1 - λ̂^c_r) - ln(1 - λ̂_r)] },

  where L_A and L_0 denote the restricted and unrestricted likelihoods, and λ̂^c_i and λ̂_i the
  corresponding eigenvalues.
- Recall that the entire restricted modelling is done on stationary cointegrating relations, and
  hence all statistical testing can be done using Gaussian properties. This statistic is therefore
  distributed as χ²(j), where j = r·m and m is the number of restrictions in each cointegrating
  vector; so there are j degrees of freedom. In the present case there was only one restriction
  per cointegrating vector; hence j = 3.
Suffice it to say that the steps mentioned above are generally those used to estimate
any restricted model, so we shall not outline the steps for every restricted model.
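The LR computation itself is a one-liner once the two sets of eigenvalues are in hand. A small sketch (the eigenvalues and sample size are made up for illustration; in practice they come from the unrestricted and restricted reduced-rank regressions described above):

```python
import numpy as np

# Hypothetical eigenvalues from the unrestricted and restricted models (r = 3).
lam   = np.array([0.40, 0.25, 0.12])   # unrestricted
lam_c = np.array([0.37, 0.23, 0.11])   # restricted (never larger than lam)
T = 100                                # sample size

# LR = T * sum_i [ ln(1 - lam_c_i) - ln(1 - lam_i) ], chi-square with r*m df.
LR = T * np.sum(np.log(1 - lam_c) - np.log(1 - lam))
df = 3 * 1                             # j = r * m, one restriction per vector
print(LR, df)
```

The statistic is then compared with the χ²(j) critical value; a large LR means the restricted eigenvalues are too far below the unrestricted ones, and the restriction is rejected.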

There are many other interesting restrictions that can be tested on the β vectors. For illustration,
we can conduct a joint test that the interest rate spread is common to all the relations and that the
dummy variable can be excluded from all the relations. One can also check whether a particular relation
is stationary. For example, one can check if Δp_t is stationary; in this case the cointegrating vector
is known a priori (a unit vector on Δp_t), and this known vector can be tested for its correctness. The
basic framework is the same as before. Hence, we shall not pursue these here. Interested readers can
refer to the book by K. Juselius on this subject.
We shall next examine the implications of testing restrictions on the α vector.

Formulating hypotheses as restrictions on α:
Restrictions on the α vector are closely associated with the concept of weak exogeneity. A test
of a zero row vector is equivalent to testing whether a particular variable is weakly exogenous for the
long-run parameters. When the null is accepted, the particular variable is singled out as a
common driving trend. One can also check for a known vector in α.
Testing for long-run weak exogeneity
The hypothesis that a particular variable influences, but at the same time is not influenced by,
the other variables in the system is called the no-levels-feedback hypothesis, or the concept of weak
exogeneity. The way to test is as before, both with free parameters and with restrictions. We express
the restricted α vector as

H_c : α = H ψ_c,

where α is a (N × r) matrix, H is a (N × s) matrix, s is the number of free parameters, and ψ_c
is a (s × r) matrix of non-zero coefficients. The equivalent form with the restrictions on the α vector
is

H_c : R'α = 0,   where R = H_⊥.
Note the following important point.

When the null of a zero row has been accepted, it means that the particular variable does not
adjust to deviations from the long-run relations, so the variable can be considered
a common stochastic trend. And since there cannot be more than N - r common trends
in a system with r cointegrating relations, the number of such zero-row restrictions can be at
most (N - r).

Empirical illustration
Let us work again with the same Z_t vector, where now Z_t = (m^r_t, y^r_t, Δp_t, R_m,t, R_b,t)'.
We want to test whether the bond rate R_b,t is weakly exogenous for the long-run parameters in
the data; that is, we want to test if α_51 = α_52 = α_53 = 0. So here s = 4 and m = 1. Our VECM
set-up will then be

[ Δm^r_t ]   [ α_11  α_12  α_13 ]
[ Δy^r_t ]   [ α_21  α_22  α_23 ]   [ β_11 Z_1,t-1 + β_12 Z_2,t-1 + β_13 Z_3,t-1 + β_14 Z_4,t-1 + β_15 Z_5,t-1 ]
[ Δ²p_t  ] = [ α_31  α_32  α_33 ]   [ β_21 Z_1,t-1 + β_22 Z_2,t-1 + β_23 Z_3,t-1 + β_24 Z_4,t-1 + β_25 Z_5,t-1 ] + ...
[ ΔR_m,t ]   [ α_41  α_42  α_43 ]   [ β_31 Z_1,t-1 + β_32 Z_2,t-1 + β_33 Z_3,t-1 + β_34 Z_4,t-1 + β_35 Z_5,t-1 ]
[ ΔR_b,t ]   [ α_51  α_52  α_53 ]
   ΔZ_t              α                                          β'Z_{t-1}

where "..." denotes the short-run terms and the error.
Since our interest is in the equation for R_b,t, we shall write it out explicitly as

ΔR_b,t = α_51 (β_11 Z_1,t-1 + β_12 Z_2,t-1 + β_13 Z_3,t-1 + β_14 Z_4,t-1 + β_15 Z_5,t-1) +
         α_52 (β_21 Z_1,t-1 + β_22 Z_2,t-1 + β_23 Z_3,t-1 + β_24 Z_4,t-1 + β_25 Z_5,t-1) +
         α_53 (β_31 Z_1,t-1 + β_32 Z_2,t-1 + β_33 Z_3,t-1 + β_34 Z_4,t-1 + β_35 Z_5,t-1) + ...

If the null of α_51 = α_52 = α_53 = 0 is accepted in the equation for R_b,t, then we consider
that the bond rate is weakly exogenous. Hence, we restrict α the following way:

             [ 1  0  0  0 ]
             [ 0  1  0  0 ]   [ ψ^c_11  ψ^c_12  ψ^c_13 ]   [ β'_1 Z_{t-1} ]
ΔZ_t = ... + [ 0  0  1  0 ]   [ ψ^c_21  ψ^c_22  ψ^c_23 ]   [ β'_2 Z_{t-1} ]
             [ 0  0  0  1 ]   [ ψ^c_31  ψ^c_32  ψ^c_33 ]   [ β'_3 Z_{t-1} ]
             [ 0  0  0  0 ]   [ ψ^c_41  ψ^c_42  ψ^c_43 ]

             [ ψ^c_11  ψ^c_12  ψ^c_13 ]
             [ ψ^c_21  ψ^c_22  ψ^c_23 ]   [ β'_1 Z_{t-1} ]
     = ... + [ ψ^c_31  ψ^c_32  ψ^c_33 ]   [ β'_2 Z_{t-1} ]
             [ ψ^c_41  ψ^c_42  ψ^c_43 ]   [ β'_3 Z_{t-1} ]
             [   0       0       0    ]

The same specification in terms of the R matrix is easily seen to be

R'α = 0,   with R' = [0, 0, 0, 0, 1].
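The restriction can be sketched numerically: with H = (I_4, 0)', the product H ψ^c reproduces any (4 × 3) loading matrix on top of a zero fifth row, so R'α = 0 holds by construction (the ψ^c values below are hypothetical):

```python
import numpy as np

# Weak exogeneity of R_b: alpha = H psi^c forces a zero fifth row on alpha.
H = np.vstack([np.eye(4), np.zeros((1, 4))])    # (5 x 4) design matrix
psi_c = 0.01 * np.arange(1, 13).reshape(4, 3)   # hypothetical (4 x 3) coefficients
alpha = H @ psi_c
print(alpha[4])        # fifth row -> [0. 0. 0.]

R = np.array([[0.0, 0.0, 0.0, 0.0, 1.0]]).T     # R = H_perp
print(R.T @ alpha)     # -> [[0. 0. 0.]]
```

Whatever values ψ^c takes, the bond-rate equation carries no loadings on the cointegrating relations, which is exactly the weak-exogeneity null.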

Some issues about the estimation of the VECM under this restriction are worth noting.
- Assuming that the bond rate is weakly exogenous implies that valid statistical inference on β
  can be obtained from the four-dimensional system consisting of all variables but the bond
  rate. Such an analysis is called partial system analysis; the model that uses those four
  variables is called the partial model, and the lone equation explaining the bond rate is called
  the marginal model. Thus, evidence of weak exogeneity gives us a condition under which a
  partial model can be used to estimate β efficiently, without loss of information. This argument
  is based on a partitioning of the density function into conditional and marginal densities. We
  shall explain this below.
- The implication of the above property is that, when there are m zero rows in the α
  matrix (note that m ≤ N - r), we can partition the N equations into N - m equations
  that exhibit levels feedback and m equations that do not. Since the m equations do not contain
  information about the long-run relations, one can estimate a system of N - m variables conditional
  on the m marginal models of the weakly exogenous variables.
- Note that, ironically, if we want to estimate β from a partial model, we have to estimate the
  full system first and test for weak exogeneity of a variable! If the null has been accepted, then
  it may be profitable to re-estimate the partial model conditioned on the weakly exogenous
  variable. Why re-estimate? Because re-estimating a partial system, after accepting the null of
  weak exogeneity of a variable, sometimes results in a more balanced model. This may be true
  especially if there are nonlinearities in the system or non-constant parameters.
- However, in many cases it may be of interest to estimate the partial model from the outset.
  In our systems we typically include variables which we know a priori to be weakly exogenous.
  For example, we know that US interest rates affect Indian rates, but we also know for sure that
  Indian rates do not influence US rates at all! A variable like the oil price affects all other
  macro variables in a system without being affected by them. If we have such variables in our
  model, it is profitable to go for the partial model right from the outset. This is normally the
  strategy adopted by researchers, especially when they have a large number of variables.
- However, if we go for the partial model on such a priori considerations from the outset, we have
  to refer to a different set of asymptotic critical values. Assuming that our initial classification
  of weak exogeneity is correct, we can refer to the tables calculated by Johansen et al. in the
  Journal of Business and Economic Statistics, 1998, pp. 388-399.
Now, just what is this partial model? How is it related to the concept of weak exogeneity?
To understand the link, we digress a bit and recall some basic statistical results. Details follow.

Weak exogeneity and partial models.


We recall some basic statistical results.
Marginal distributions:
Let X = (X1, X2) be a bivariate random vector with joint distribution function F(x1, x2).
The question that naturally arises is whether we can separate X1 and X2 and consider them as individual
random variables. The answer to this question leads to the concept of the marginal distribution. Given
that the probability model has been defined in terms of the joint density function, we need to
express it in terms of the marginal density functions. The marginal density functions of X1
and X2 are

f1(x1) = ∫ f(x1, x2) dx2,    f2(x2) = ∫ f(x1, x2) dx1.

Literally, this means the marginal density of Xi (i = 1, 2) is obtained by integrating, or "throwing
out", Xj (j ≠ i) from the joint density. The algebra behind this assertion is available in any
elementary book on statistics.
Conditional distributions
Another useful idea is to derive the density of a subset of random vectors by conditioning
on some other subset of random vectors, given the joint density; this leads to the concept of
conditional distributions. It is of great value in the context of the probability model because it
offers a way to decompose the joint density function. Formally, if we need the conditional density
of X2 given X1,

f(x1, x2) = f1(x1) f(x2|x1).

Needless to say, if X1 and X2 are independent,

f(x1, x2) = f1(x1) f2(x2).

Given the importance of these concepts, we shall define them in the context of the bivariate normal
density function, which takes the form f(x1, x2; μ, Σ), and we write X ~ N(μ, Σ), where

    [ μ1 ]        [ σ1²   σ12 ]   [ σ1²     ρσ1σ2 ]
μ = [ μ2 ] ;  Σ = [ σ21   σ2² ] = [ ρσ1σ2   σ2²   ] ,   with ρ = σ12 / (σ1 σ2),

det(Σ) = σ1² σ2² (1 - ρ²) > 0  ⟺  -1 < ρ < 1.

The marginal and conditional distributions in this case are

X1 ~ N(μ1, σ1²),   X2 ~ N(μ2, σ2²)                                   (1)

(X1|X2 = x2) ~ N( μ1 + ρ (σ1/σ2)(x2 - μ2),  σ1² (1 - ρ²) )           (2)

(X2|X1 = x1) ~ N( μ2 + ρ (σ2/σ1)(x1 - μ1),  σ2² (1 - ρ²) )           (3)

How does one retrieve the model implied by these distributions?

From (2) we can write the model for X1 given X2 as

X1 = a + b X2 + ε1,   where X2 = μ2 + ε2,  ε2 ~ N(0, σ2²).

Here a = μ1 - b μ2;  b = ρ σ1/σ2;  ε1 ~ N(0, σ²)  with  σ² = σ1² (1 - ρ²).

Similarly, from (3) we can write the model for X2 given X1 as

X2 = a* + b* X1 + ε2*,   where X1 = μ1 + ε1*,  ε1* ~ N(0, σ1²).

Here a* = μ2 - b* μ1;  b* = ρ σ2/σ1;  ε2* ~ N(0, σ*²)  with  σ*² = σ2² (1 - ρ²).
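These coefficient formulas are easy to confirm by simulation. The sketch below (illustrative parameter values) draws from a bivariate normal and checks that the OLS regression of X1 on X2 recovers b = ρσ1/σ2 and a = μ1 - bμ2:

```python
import numpy as np

# Simulate a bivariate normal and recover the conditional-model coefficients.
rng = np.random.default_rng(0)
mu1, mu2, s1, s2, rho = 1.0, 2.0, 2.0, 1.5, 0.6
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
x1, x2 = rng.multivariate_normal([mu1, mu2], cov, size=200_000).T

b_hat = np.cov(x1, x2)[0, 1] / np.var(x2, ddof=1)   # OLS slope of X1 on X2
a_hat = x1.mean() - b_hat * x2.mean()
print(b_hat)   # close to rho*s1/s2 = 0.8
print(a_hat)   # close to mu1 - b*mu2 = 1 - 1.6 = -0.6
```

The residual variance of this regression likewise approximates σ1²(1 - ρ²), which is the conditional variance in (2).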

Now we can generalize these points to the N-vector case to get the multivariate marginal and
conditional density functions. Let X ~ N(μ, Σ) and partition X into a (K × 1) subvector X1 and the
rest, X2:

    [ X1 ]        [ μ1 ]        [ Σ11  Σ12 ]
X = [ X2 ] ;  μ = [ μ2 ] ;  Σ = [ Σ21  Σ22 ]

The marginal distributions of X1 and X2 are easily seen to be

X1 ~ N(μ1, Σ11)  and  X2 ~ N(μ2, Σ22).

For the same partition, the conditional distributions are given by

(X1|X2) ~ N( μ1 + Σ12 Σ22^{-1} (X2 - μ2),  Σ11 - Σ12 Σ22^{-1} Σ21 )

and

(X2|X1) ~ N( μ2 + Σ21 Σ11^{-1} (X1 - μ1),  Σ22 - Σ21 Σ11^{-1} Σ12 ).
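A small helper makes the partitioned formulas concrete (a sketch; the numbers are arbitrary but Σ is a valid covariance matrix):

```python
import numpy as np

# Conditional moments of a partitioned normal: (X1 | X2 = x2) has mean
# mu1 + S12 S22^{-1} (x2 - mu2) and covariance S11 - S12 S22^{-1} S21.
def conditional_normal(mu, S, k, x2):
    """Moments of the first k components given that the rest equal x2."""
    mu1, mu2 = mu[:k], mu[k:]
    S11, S12 = S[:k, :k], S[:k, k:]
    S21, S22 = S[k:, :k], S[k:, k:]
    m = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)
    V = S11 - S12 @ np.linalg.solve(S22, S21)
    return m, V

mu = np.array([0.0, 0.0, 0.0])
S = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.0, 0.2],
              [0.3, 0.2, 1.5]])
m, V = conditional_normal(mu, S, 1, np.array([1.0, 1.0]))
print(m, V)   # conditional mean and (reduced) conditional variance of X1
```

Note that the conditional variance V never exceeds the marginal variance S11; conditioning can only reduce uncertainty.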

Now we shall use these concepts to establish how to derive the partial model in the cointegrated
VAR framework. Let Z_t = (Z'_1t, Z'_2t)' and partition

    [ α1 ]          [ Γ_1i ]         [ e_1t ]
α = [ α2 ] ;  Γ_i = [ Γ_2i ] ;  e_t = [ e_2t ]

With these partitions, the VECM splits as

ΔZ_1t = α1 β'Z_{t-1} + Σ_{i=1}^{p-1} Γ_1i ΔZ_{t-i} + e_1t

ΔZ_2t = α2 β'Z_{t-1} + Σ_{i=1}^{p-1} Γ_2i ΔZ_{t-i} + e_2t

For this partition scheme, we have

    [ Ω11  Ω12 ]
Ω = [ Ω21  Ω22 ] .

And

μ1 = E(ΔZ_1t | past) = α1 β'Z_{t-1} + Σ_{i=1}^{p-1} Γ_1i ΔZ_{t-i}

μ2 = E(ΔZ_2t | past) = α2 β'Z_{t-1} + Σ_{i=1}^{p-1} Γ_2i ΔZ_{t-i}

Mapping to the conditional density defined before, we have X1 = ΔZ_1t, X2 = ΔZ_2t
and Σ = Ω, so that, from the formula for the conditional density of (X1|X2), the conditional
model for ΔZ_1t, given ΔZ_2t and the past, is

ΔZ_1t = ω ΔZ_2t + (α1 - ω α2) β'Z_{t-1} + Σ_{i=1}^{p-1} Γ̃_1i ΔZ_{t-i} + ẽ_1t

where ω = Ω12 Ω22^{-1};  Γ̃_1i = Γ_1i - ω Γ_2i;  ẽ_1t = e_1t - ω e_2t,

and this partial model has variance

Ω_11.2 = Ω11 - Ω12 Ω22^{-1} Ω21.

Since β enters both the equations for ΔZ_1t and ΔZ_2t, we cannot analyse the conditional model
for ΔZ_1t alone unless α2 = 0. If we can show this, then

ΔZ_1t = ω ΔZ_2t + α1 β'Z_{t-1} + Σ_{i=1}^{p-1} Γ̃_1i ΔZ_{t-i} + ẽ_1t

ΔZ_2t = Σ_{i=1}^{p-1} Γ_2i ΔZ_{t-i} + e_2t

With this, a fully efficient estimate of β can be obtained from the partial model given by the
equation for ΔZ_1t. We estimate it by the usual method of concentrating out the short-run dynamics
as well as ΔZ_2t. Such an estimation delivers a total of (N - m) eigenvalues, from which we use the r
nonzero eigenvalues to determine the cointegrating vectors. More details are available in the
book by Johansen.
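The two ingredients of the conditional model, ω = Ω12 Ω22^{-1} and Ω_11.2, can be sketched numerically (a made-up positive-definite Ω with a 3 + 2 split is used for illustration):

```python
import numpy as np

# Compute omega = O12 O22^{-1} and the conditional error variance
# O11.2 = O11 - O12 O22^{-1} O21 for a 3+2 partition of a 5x5 Omega.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
Omega = A @ A.T + 5 * np.eye(5)            # a positive-definite covariance
O11, O12 = Omega[:3, :3], Omega[:3, 3:]
O21, O22 = Omega[3:, :3], Omega[3:, 3:]

omega = O12 @ np.linalg.inv(O22)           # loadings of dZ2 in the dZ1 equation
O11_2 = O11 - O12 @ np.linalg.inv(O22) @ O21

# O11.2 stays symmetric positive definite (it is a Schur complement).
assert np.allclose(O11_2, O11_2.T)
assert np.all(np.linalg.eigvalsh(O11_2) > 0)
print(omega.shape, O11_2.shape)            # -> (3, 2) (3, 3)
```

By construction, the conditional errors ẽ_1t = e_1t - ω e_2t are uncorrelated with e_2t, which is what makes the partial-model factorisation valid.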

Identification in cointegrated systems
With nonstationary data, cointegration is a real possibility. We have in the previous discussion
seen the issues connected with a cointegrated data set-up. Recall that one could get r < N
cointegrating relations from a vector of N variables. But in a cointegrated model we have both a
long-run structure, given by the cointegrating relations, and a short-run structure, given by the
equations in differences. The classical concept of identification is related to prior economic structure,
but here Johansen approaches identification as a purely statistical process and lists three
different meanings:

- generic identification, which is related to a statistical model;
- empirical identification, which is related to the estimated parameter values, so that we do not
  accept just any over-identifying restriction on the parameters; and
- economic identification, which is related to the economic interpretability of the estimated
  coefficients of an empirically identified structure.

Ideally all three must be fulfilled for an empirical model to be considered satisfactory.
We shall start with a VAR and the associated VECM. These being reduced-form models, how does
one retrieve the so-called structure behind them? Let us demonstrate this with the simplest of
VECM models:

ΔZ_t = α β'Z_{t-1} + Γ_1 ΔZ_{t-1} + e_t,   e_t ~ N(0, Ω).

A structural model is defined by the economic formulation of the problem and can, for instance,
be given by

B_0 ΔZ_t = B β'Z_{t-1} + B_1 ΔZ_{t-1} + v_t,   v_t ~ N(0, Σ), with

Γ_1 = B_0^{-1} B_1;   α = B_0^{-1} B;   e_t = B_0^{-1} v_t;   Ω = B_0^{-1} Σ B_0^{-1'}.

In a VECM, for a unique identification of the short-run structural parameters, given by the set
{B_0, B_1, B, Σ}, we normally have to impose N(N - 1) restrictions on the N equations. Note,
however, that the set of long-run parameters β is the same in both forms, implying that identification
of the long-run structure can be done in either form. In order to identify the long-run relations,
we formulate restrictions on the individual cointegrating relations. The problem of identifying the
long-run structure is similar to the one encountered in econometrics in connection with identifying
a simultaneous equations system. The classical result on identification of such a system is given
by a rank condition (see Goldberger, 1964, Econometric Theory), and this has been extended to the
VECM context by Johansen (1995, Journal of Econometrics, 69, 111-132). Just as in the classical case,
where we impose restrictions on the parameters such that the parameter matrix satisfies a rank
condition, in the VECM context we also have to impose (r - 1) restrictions on each cointegrating vector,
so that in general we need r(r - 1) just-identifying restrictions on β. Since Goldberger's
scheme of identification is based on parameters, which are generally unknown, Johansen defines the
rank conditions based on the known matrices H and R. The idea here is to choose these matrices
in such a way that the linear restrictions implied by them satisfy a rank condition.
Let us demonstrate this with the help of both free parameters and restrictions in a cointegrating
relation. Accordingly, let H_i = R_{i,⊥} be a (N1 × s_i) matrix of full rank, and let R_i be a (N1 × m_i)
matrix of full rank, with s_i + m_i = N1, so that R'_i H_i = 0. Thus, there are m_i restrictions and s_i
parameters to be estimated in the i-th relation. The cointegrating relations are thus assumed
to satisfy the restrictions R'_i β_i = 0 or, equivalently, β_i = H_i φ_i for some (s_i × 1) vector φ_i; that is,

β = (H_1 φ_1, . . . , H_r φ_r)

where the matrices H_1, . . . , H_r express some linear economic hypotheses to be tested against the
data. Herein we specify the condition for identification:
The first cointegrating relation is identified if and only if

rank(R'_1 β_2, . . . , R'_1 β_r) = rank(R'_1 H_2 φ_2, . . . , R'_1 H_r φ_r) = r - 1.


The intuitive meaning behind this is very simple. When applying the restrictions of one
cointegrating vector to the other cointegrating vectors, we get a matrix of rank r - 1. Hence it is not
possible to take a linear combination of β_2, . . . , β_r and construct a vector satisfying the same
restrictions as β_1, which could then be confused with β_1. Hence β_1 can be recognized among all linear
combinations of β_1, . . . , β_r as the only one that satisfies the restrictions R_1. But how does one
check this if the parameter values are unknown? And we can estimate the parameters only if the
restrictions are identifying. So, to make the condition operational, Johansen provides a condition to
check which of the cointegrating vectors are identified based only on the known coefficient matrices
R_i and H_i. The condition is: for all i, for k = 1, . . . , r - 1 and any set of indices
1 ≤ i_1 < . . . < i_k ≤ r not containing i, it holds that

rank (R'_i H_{i_1}, . . . , R'_i H_{i_k}) ≥ k.


If the condition is satisfied for a particular i, then the restrictions identify that particular
cointegrating vector. If all i vectors satisfy this rank condition, then the model is
generically identified. Basically, this is the first criterion that must be satisfied if one is interested
in identifying a particular cointegrating relation. As an example, consider r = 2, where the condition
that must be satisfied is

r_{i.j} = rank (R'_i H_j) ≥ 1,   i ≠ j.

For r = 3, we have the conditions

r_{i.j} = rank (R'_i H_j) ≥ 1,   i ≠ j

r_{i.jm} = rank (R'_i (H_j, H_m)) ≥ 2,   i, j, m all different.

So, if one is interested in identifying structures in a cointegrated model, the above rank condition
should be verified before one proceeds with model estimation. Note that this condition handles only
equation-by-equation restrictions; more specifically, only exclusion or zero restrictions are allowed.
Cross-equation restrictions or restrictions on the covariance matrix are not allowed.
We shall illustrate this with an example.
Let us suppose that we have the following set of variables: Z_t = (p_1, p_2, e_12, i_1, i_2)', where the
first two variables are prices in two different countries A and B, e_12 is the exchange rate between
the two countries, and the last two are the interest rates prevailing in the two countries. The
transformed vector (p_1 - p_2, Δp_1, e_12, i_1, i_2)' is found to be I(1). Let us suppose we have found
three cointegrating relations. We want to check whether these are identified by restrictions that reflect
some stylized facts, like the long-run PPP relation, (p_1 - p_2 - e_12), and the uncovered interest rate
parity (UIP) relation, (i_1 - i_2). It was found that, while PPP stationarity was accepted, UIP
stationarity was rejected. Next, our intention is to check whether a combination or modification of
these two hypotheses would give us stationary relations. Accordingly, let us impose the following
restrictions:

β = (H_1 φ_1, H_2 φ_2, H_3 φ_3), where

      [  1   0 ]         [ 1  0  0 ]         [ 0  0 ]
      [  0   0 ]         [ 0  0  0 ]         [ 1  0 ]
H_1 = [ -1   0 ] ,  H_2 = [ 0  1  0 ] ,  H_3 = [ 0  0 ] .
      [  0   1 ]         [ 0  0  0 ]         [ 0  1 ]
      [  0  -1 ]         [ 0  0  1 ]         [ 0  0 ]

The first describes the relation between the real exchange rate and the interest differential, the second
is a modified PPP relation, and the third describes a relation between price inflation and the nominal
interest rate. The rank condition will tell us whether these restrictions identify the parameters of the
long-run relations. We don't know the parameter values, and hence we go for generic identification.
We calculate the following matrices:

           [ 1  1  0 ]             [ 0  0 ]                      [ 1  1  0  0  0 ]
R'_1 H_2 = [ 0  0  1 ] ,  R'_1 H_3 = [ 0  1 ] ,  R'_1 (H_2 : H_3) = [ 0  0  1  0  1 ] ,  etc.,
           [ 0  0  0 ]             [ 1  0 ]                      [ 0  0  0  1  0 ]

and find that

rank(R'_1 H_2) = 2,  rank(R'_1 H_3) = 2,  rank(R'_1 (H_2 : H_3)) = 3
rank(R'_2 H_1) = 1,  rank(R'_2 H_3) = 2,  rank(R'_2 (H_1 : H_3)) = 2
rank(R'_3 H_1) = 2,  rank(R'_3 H_2) = 3,  rank(R'_3 (H_1 : H_2)) = 3
Thus we see that, for the proposed set of restrictions, all the cointegrating relations satisfy
the generic rank condition and are identified. Next one may proceed to the empirical and
economic identification of these relations.
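The rank computations above can be verified mechanically. The sketch below derives each R_i as an orthogonal-complement basis of H_i (via the SVD; the ranks do not depend on which basis is chosen) and reproduces the table:

```python
import numpy as np

# Generic identification check for the PPP/UIP example: compute
# rank(R_i' H_j) and rank(R_i' (H_j : H_m)) directly from the H matrices.
H1 = np.array([[1,0],[0,0],[-1,0],[0,1],[0,-1]], float)
H2 = np.array([[1,0,0],[0,0,0],[0,1,0],[0,0,0],[0,0,1]], float)
H3 = np.array([[0,0],[1,0],[0,0],[0,1],[0,0]], float)

def orth_complement(H):
    """Columns spanning the null space of H', i.e. R with R'H = 0."""
    U, s, _ = np.linalg.svd(H)
    return U[:, H.shape[1]:]

Hs = [H1, H2, H3]
Rs = [orth_complement(H) for H in Hs]
rank = np.linalg.matrix_rank

for i in range(3):
    others = [j for j in range(3) if j != i]
    singles = [rank(Rs[i].T @ Hs[j]) for j in others]
    pair = rank(Rs[i].T @ np.hstack([Hs[j] for j in others]))
    print(i + 1, singles, pair)
# Condition: every single rank >= 1 and every pairwise rank >= 2 -> identified.
```

Running this reproduces the ranks listed in the text: (2, 2; 3), (1, 2; 2) and (2, 3; 3) for i = 1, 2, 3.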
Just identification and normalisation
Johansen (1994, Journal of Econometrics, 63, 7-36) suggests a normalisation procedure for
cointegrating vectors that is just-identifying as well. In this case, the generic rank condition for
identification of the cointegrating vectors is automatically satisfied. However, one still has to justify
the restrictions imposed by the normalisation scheme as economically meaningful. If not, one can
impose restrictions that satisfy some economic theory; but in such cases, one may have to test whether
the restrictions satisfy the generic rank condition.
The necessity to impose restrictions to identify cointegrating relations arises because the
cointegrating linear combinations are not unique. For example, we can always rewrite αβ' as
αβ' = α Q Q^{-1} β' = α̃ β̃', where α̃ = αQ and β̃ = β(Q')^{-1}. We have to choose Q in such a way
that it imposes (r - 1) restrictions on each cointegrating vector, so that the rank condition for
generic identification is automatically satisfied. Johansen suggests Q = β'_1, where β'_1 is the (r × r)
nonsingular matrix defined by the partition β' = [β'_1 : β'_2]. In this case,

β̃' = (β'_1)^{-1} β' = (β'_1)^{-1} [β'_1 : β'_2] = [I_r : (β'_1)^{-1} β'_2].

For example, assume that β is a (5 × 3) matrix and let us partition it the following way:

    [ β_11  β_12  β_13 ]
    [ β_21  β_22  β_23 ]  } β_1
β = [ β_31  β_32  β_33 ]
    [ β_41  β_42  β_43 ]  } β_2
    [ β_51  β_52  β_53 ]

so that β̃ = β β_1^{-1} implies

     [  1     0     0   ]
     [  0     1     0   ]     [      I_3       ]
β̃ =  [  0     0     1   ]  =  [ β_2 β_1^{-1}   ]
     [ β̃_41  β̃_42  β̃_43 ]
     [ β̃_51  β̃_52  β̃_53 ]

Notice that our choice of Q = β'_1 in this example has in fact imposed two zero restrictions and
one normalisation on each cointegrating relation.
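The rotation is a two-line computation. The sketch below applies it to a randomly drawn, hypothetical (5 × 3) β and confirms both the identity block and that the cointegrating space is unchanged:

```python
import numpy as np

# Triangular normalisation beta_tilde = beta @ inv(beta_1): the top (r x r)
# block becomes I_r while span(beta_tilde) = span(beta).
rng = np.random.default_rng(2)
beta = rng.standard_normal((5, 3))   # hypothetical beta; beta_1 = top 3 rows
beta1, beta2 = beta[:3], beta[3:]

beta_tilde = beta @ np.linalg.inv(beta1)
print(np.round(beta_tilde[:3], 8))   # -> identity block I_3
# Same cointegrating space: stacking the two bases adds no new directions.
print(np.linalg.matrix_rank(np.hstack([beta, beta_tilde])))   # -> 3
```

Since β̃ is just β post-multiplied by a nonsingular matrix, β̃'Z_t spans exactly the same stationary relations as β'Z_t; only the labelling of the individual relations changes.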
We shall consider a just-identified structure describing long-run relationships involving the
endogenous variables real money, inflation and the short-term interest rate, and the exogenous
variables real income and the bond rate, corresponding to the following restrictions on β:

β = (H_1 φ_1, H_2 φ_2, H_3 φ_3),

where

      [ 1  0  0  0 ]         [ 0  0  0  0 ]         [ 0  0  0  0 ]
      [ 0  1  0  0 ]         [ 1  0  0  0 ]         [ 1  0  0  0 ]
H_1 = [ 0  0  0  0 ] ,  H_2 = [ 0  1  0  0 ] ,  H_3 = [ 0  0  0  0 ] .
      [ 0  0  0  0 ]         [ 0  0  0  0 ]         [ 0  1  0  0 ]
      [ 0  0  1  0 ]         [ 0  0  1  0 ]         [ 0  0  1  0 ]
      [ 0  0  0  1 ]         [ 0  0  0  1 ]         [ 0  0  0  1 ]

H_1 picks up real money, H_2 explains the inflation rate and H_3 explains the short rate, while the two
weakly exogenous variables enter all three relations. Note that this structure describes the long-run
reduced-form model for the endogenous variables in terms of the weakly exogenous variables. Note
also that no testing of the generic rank condition is involved in this case, because the r - 1 = 2
restrictions have been achieved by linear combinations of the unrestricted cointegrating relations,
that is, by rotating the cointegrating space.

