
Difference Equations

Weijie Chen
Department of Political and Economics Studies
University of Helsinki
16 Aug, 2011

Contents
1 First Order Difference Equation
  1.1 Iterative Method
  1.2 General Method
      1.2.1 One Example
2 Second-Order Difference Equation
  2.1 Complementary Solution
  2.2 Particular Solutions
  2.3 One Example
3 pth-Order Difference Equation
  3.1 Iterative Method
  3.2 Analytical Solution
      3.2.1 Distinct Real Eigenvalues
      3.2.2 Distinct Complex Eigenvalues
      3.2.3 Repeated Eigenvalues
4 Lag Operator
  4.1 pth-order difference equation with lag operator
5 Appendix
  5.1 MATLAB code

Abstract

Difference equations are close cousins of differential equations, and as you will soon find out they are remarkably similar; if you have already learned differential equations, you have a nice head start. Conventionally we study differential equations first and difference equations second, not simply because that order is chronologically natural, but mainly because difference equations have a naturally stronger bond with computational science: sometimes we even need to turn a differential equation into its discrete version, a difference equation, so that it can be simulated in MATLAB. Difference equations are also the first lesson of any advanced time series course, where they largely overshadow their econometric sibling, the lag operator; that is because a difference equation can be expressed in matrix form, which tremendously increases its power, as you will see in shockingly powerful applications such as state-space models and the Kalman filter.

1 First Order Difference Equation

Difference equations emerge because we need to deal with discrete-time models, which are more realistic when we study econometrics: time series datasets are by nature discrete. First we will discuss the iterative method, which is the topic of the first chapter of nearly every time series textbook. In the preface of Enders (2004)[5]: "In my experience, this material (difference equations) and a knowledge of regression analysis is sufficient to bring students to the point where they are able to read the professional journals and embark on a serious applied study." Although I do not fully agree with his optimism, I do concur that knowledge of difference equations is the key to all further study of time series and advanced macroeconomic theory. A simple difference equation is a dynamic model describing a time path of evolution, and it highly resembles a differential equation, so its solution should be a function of $t$, completely free of terms such as $y_{t+1}$ and $y_t$: a time index $t$ alone can exactly locate the position of the variable.

1.1 Iterative Method

This method is also called recursive substitution. Basically, if you know $y_0$ you know $y_1$, and the rest of the $y_t$ can be expressed by a recursive relation. We start with a simple example which appears in every time series textbook,
$$y_t = a y_{t-1} + w_t \tag{1}$$

where
$$\mathbf{w} = \begin{bmatrix} w_0 \\ w_1 \\ \vdots \\ w_t \end{bmatrix}.$$
Here $\mathbf{w}$ is a deterministic vector; later we will drop this assumption when we study stochastic difference equations, but for the time being and for simplicity we assume $\mathbf{w}$ is nonstochastic. Notice that
$$\begin{aligned}
y_0 &= a y_{-1} + w_0\\
y_1 &= a y_0 + w_1\\
y_2 &= a y_1 + w_2\\
&\ \vdots\\
y_t &= a y_{t-1} + w_t
\end{aligned}$$

Assume that we know $y_{-1}$ and $\mathbf{w}$. Substitute $y_0$ into $y_1$,
$$y_1 = a(a y_{-1} + w_0) + w_1 = a^2 y_{-1} + a w_0 + w_1.$$
Now we have $y_1$; following this procedure, substitute $y_1$ into $y_2$,
$$y_2 = a(a^2 y_{-1} + a w_0 + w_1) + w_2 = a^3 y_{-1} + a^2 w_0 + a w_1 + w_2.$$
Again, substitute $y_2$ into $y_3$,
$$y_3 = a(a^3 y_{-1} + a^2 w_0 + a w_1 + w_2) + w_3 = a^4 y_{-1} + a^3 w_0 + a^2 w_1 + a w_2 + w_3.$$

I trust you are able to perceive the pattern of the dynamics,
$$y_t = a^{t+1} y_{-1} + \sum_{i=0}^{t} a^i w_{t-i}.$$
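As a quick numerical sanity check, the recursion and the closed-form expression above should produce the same path. The following MATLAB sketch (the values of $a$, $y_{-1}$ and $\mathbf{w}$ are arbitrary choices for illustration) iterates equation (1) and compares it with the summation formula.

% Compare iteration of y_t = a*y_{t-1} + w_t with the closed form
a = 0.8;                    % arbitrary coefficient
yminus1 = 2;                % initial value y_{-1}
T = 10;
w = ones(1, T+1);           % deterministic forcing terms w_0, ..., w_T
y_iter = zeros(1, T+1);
y_iter(1) = a*yminus1 + w(1);            % y_0
for t = 2:T+1
    y_iter(t) = a*y_iter(t-1) + w(t);    % y_t = a*y_{t-1} + w_t
end
% closed form: y_t = a^(t+1)*y_{-1} + sum_{i=0}^{t} a^i * w_{t-i}
y_closed = zeros(1, T+1);
for t = 0:T
    y_closed(t+1) = a^(t+1)*yminus1 + sum(a.^(0:t) .* w(t+1:-1:1));
end
max(abs(y_iter - y_closed))   % should be numerically zero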

Actually what we have done is not the simplest version of a difference equation. To show you the outrageously simple version, let us run through the next example at lightning speed. A homogeneous difference equation¹,
$$m y_t - n y_{t-1} = 0.$$

¹ If this is the first time you hear about this, refer to my notes on differential equations [2].

To rearrange it in a familiar manner,
$$y_t = \frac{n}{m} y_{t-1}.$$
As the recursive method implies, if we have an initial value $y_0$², then
$$y_1 = \frac{n}{m} y_0, \qquad y_2 = \frac{n}{m} y_1, \qquad \ldots, \qquad y_t = \frac{n}{m} y_{t-1}.$$
Substitute $y_1$ into $y_2$,
$$y_2 = \frac{n}{m}\left(\frac{n}{m} y_0\right) = \left(\frac{n}{m}\right)^2 y_0.$$
Then substitute $y_2$ into $y_3$,
$$y_3 = \frac{n}{m}\left(\frac{n}{m}\right)^2 y_0 = \left(\frac{n}{m}\right)^3 y_0.$$
The pattern is clear, and the solution is
$$y_t = \left(\frac{n}{m}\right)^t y_0.$$

If we denote $(n/m)^t$ by $b^t$ and $y_0$ by $A$, the solution becomes $A b^t$; this is the counterpart of the solution $A e^{rt}$ of a first-order differential equation. Both play the fundamental role in solving differential or difference equations. Notice that the solution is a function of $t$: here only $t$ is a variable, while $y_0$ is an initial value, a known constant.

1.2 General Method

As you have no doubt guessed, the general solution of a difference equation is also the counterpart of its differential version,
$$y = y_p + y_c,$$
where $y_p$ is the particular solution and $y_c$ is the complementary solution³.

² For the sake of mathematical convenience, we assume an initial value $y_0$ rather than $y_{-1}$ this time, but the essence is identical.
³ All this terminology is fully explained in my notes on differential equations.

The process will be clear with the help of an example,
$$y_{t+1} + a y_t = c.$$
We follow the standard procedure and find the complementary solution first, which is the solution of the corresponding homogeneous difference equation,
$$y_{t+1} + a y_t = 0.$$
With the knowledge of the last section, we can try a solution of the form $A b^t$. When I say "try" it does not mean we randomly guess: we do not need to perform the crude iterative method every time, we can make use of the solution of the previously solved equation. So
$$y_{t+1} + a y_t = A b^{t+1} + a A b^t = 0.$$
Cancel the common factor $A b^t$,
$$b + a = 0 \quad\Rightarrow\quad b = -a.$$
If $b = -a$, this solution works, so the complementary solution is
$$y_c = A b^t = A(-a)^t.$$
The next step is to find a particular solution of $y_{t+1} + a y_t = c$. Now the most intriguing part comes: to make the equation hold, we can choose any $y_t$ that satisfies it. Perhaps to your surprise, we can even choose $y_t = k$ for $-\infty < t < \infty$, which is just a constant time series, every period taking the same value $k$. So
$$k + a k = c \quad\Rightarrow\quad k = y_p = \frac{c}{1+a}.$$

You can easily notice that for this solution to work we need $a \neq -1$. Then the question is simply: what if $a = -1$? Of course $c/(1+a)$ is not defined then, and we have to change the form of the trial solution. Here we use $y_t = kt$, similar to the trick we used for differential equations:
$$k(t+1) + a k t = c \quad\Rightarrow\quad k = \frac{c}{t + 1 + at},$$
and because $a = -1$, $k = c$. But this time $y_p = kt$, so $y_p = ct$, still a function of $t$. Adding $y_p$ and $y_c$ together,
$$y_t = A(-a)^t + \frac{c}{1+a} \qquad \text{if } a \neq -1, \tag{2}$$
$$y_t = A(-a)^t + ct = A + ct \qquad \text{if } a = -1. \tag{3}$$

Last, of course, you can solve for $A$ if you have an initial condition, say $y_0$. If $a \neq -1$,
$$y_0 = A + \frac{c}{1+a} \quad\Rightarrow\quad A = y_0 - \frac{c}{1+a}.$$
If $a = -1$,
$$y_0 = A(-a)^0 + 0 = A.$$
Then just plug them back into the corresponding solution (2) or (3).

1.2.1 One Example

Solve
$$y_{t+1} - 2y_t = 2.$$
First solve the complementary equation $y_{t+1} - 2y_t = 0$. Use $A b^t$:
$$A b^{t+1} - 2 A b^t = 0 \quad\Rightarrow\quad b - 2 = 0 \quad\Rightarrow\quad b = 2.$$
So the complementary solution is
$$y_c = A \cdot 2^t.$$
To find a particular solution, let $y_t = k$ for $-\infty < t < \infty$:
$$k - 2k = 2 \quad\Rightarrow\quad k = y_p = -2.$$
So the general solution is
$$y_t = y_p + y_c = -2 + A \cdot 2^t.$$

If we are given an initial value, $y_0 = 4$:
$$4 = -2 + 2^0 A \quad\Rightarrow\quad A = 6,$$
so the definite solution is
$$y_t = -2 + 6 \cdot 2^t.$$
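A brief MATLAB check (a sketch; the loop simply iterates the recursion) confirms that the closed form reproduces the path generated by $y_{t+1} = 2y_t + 2$ from $y_0 = 4$.

T = 10;
y = zeros(1, T+1);
y(1) = 4;                          % y_0 = 4
for t = 1:T
    y(t+1) = 2*y(t) + 2;           % y_{t+1} = 2*y_t + 2
end
t = 0:T;
y_closed = -2 + 6*2.^t;            % definite solution
max(abs(y - y_closed))             % should be 0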

2 Second-Order Difference Equation

A second-order difference equation is an equation involving $\Delta^2 y_t$:
$$\Delta^2 y_t = \Delta(\Delta y_t) = \Delta(y_{t+1} - y_t) = \Delta y_{t+1} - \Delta y_t = (y_{t+2} - y_{t+1}) - (y_{t+1} - y_t) = y_{t+2} - 2y_{t+1} + y_t.$$
We define a linear second-order difference equation as
$$y_{t+2} + a_1 y_{t+1} + a_2 y_t = c.$$
But we had better not use the iterative method to mess with it; trust me, it is more confusing than illuminating, so we come straight to the general solution. As we have studied so far, the general solution is the sum of the complementary solution and a particular solution; we discuss each in turn.

2.1 Complementary Solution

As with differential equations, we have several cases to discuss. First we try to solve the complementary equation (some textbooks call it the reduced equation); it is simply the homogeneous version of the original equation,
$$y_{t+2} + a_1 y_{t+1} + a_2 y_t = 0.$$
We learned from first-order difference equations that a homogeneous difference equation has a solution of the form $y_t = A b^t$, so we try it:
$$A b^{t+2} + a_1 A b^{t+1} + a_2 A b^t = 0$$
$$(b^2 + a_1 b + a_2) A b^t = 0.$$
Assuming $A b^t$ is nonzero,
$$b^2 + a_1 b + a_2 = 0.$$
This is our characteristic equation, the same as we saw for differential equations. High school math can sometimes bring us a little fun:
$$b = \frac{-a_1 \pm \sqrt{a_1^2 - 4a_2}}{2}.$$
I guess you are more familiar with its textbook form: for a quadratic equation $ax^2 + bx + c = 0$,
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$
As with second-order differential equations, we now have three cases to discuss.

Case I: $a_1^2 - 4a_2 > 0$. We have two distinct real roots $b_1$ and $b_2$, and the complementary solution is
$$y_c = A_1 b_1^t + A_2 b_2^t.$$
Note that $A_1 b_1^t$ and $A_2 b_2^t$ are linearly independent; we cannot use only one of them to represent the complementary solution, because we need two constants $A_1$ and $A_2$.

Case II: $a_1^2 - 4a_2 = 0$. Only one real root is available to us,
$$b = b_1 = b_2 = -\frac{a_1}{2}.$$
Then the complementary solution collapses,
$$y_c = A_1 b^t + A_2 b^t = (A_1 + A_2) b^t.$$
We just need another term to fill the position of the missing constant, say $A_4$:
$$y_c = A_3 b^t + A_4 t b^t,$$
where $A_3 = A_1 + A_2$, and $t b^t$ is the same old trick we used in differential equations, where we used $t e^{rt}$.

Case III: $a_1^2 - 4a_2 < 0$. Complex numbers are our old friends; we need to make use of them again here:
$$b_1 = h + vi, \qquad b_2 = h - vi,$$
where $h = -\dfrac{a_1}{2}$ and $v = \dfrac{\sqrt{4a_2 - a_1^2}}{2}$. Thus
$$y_c = A_1 (h + vi)^t + A_2 (h - vi)^t.$$
Here we simply make use of De Moivre's theorem,
$$y_c = A_1 |b_1|^t [\cos(\theta t) + i \sin(\theta t)] + A_2 |b_2|^t [\cos(\theta t) - i \sin(\theta t)],$$
where $|b_1| = |b_2| = \sqrt{h^2 + v^2} \equiv R$ and $\theta$ satisfies $\cos\theta = h/R$, $\sin\theta = v/R$. Thus
$$\begin{aligned}
y_c &= A_1 R^t [\cos(\theta t) + i \sin(\theta t)] + A_2 R^t [\cos(\theta t) - i \sin(\theta t)]\\
&= R^t \{A_1 [\cos(\theta t) + i \sin(\theta t)] + A_2 [\cos(\theta t) - i \sin(\theta t)]\}\\
&= R^t [(A_1 + A_2)\cos(\theta t) + (A_1 - A_2) i \sin(\theta t)]\\
&= R^t [A_5 \cos(\theta t) + A_6 \sin(\theta t)].
\end{aligned}$$
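A short MATLAB sketch of this case analysis (the coefficient values are arbitrary illustrations): given $a_1$ and $a_2$ it computes the roots of $b^2 + a_1 b + a_2 = 0$ and reports which of the three cases applies.

a1 = 1; a2 = 0.5;                 % arbitrary example coefficients
b  = roots([1 a1 a2]);            % roots of b^2 + a1*b + a2 = 0
disc = a1^2 - 4*a2;               % discriminant
if disc > 0
    fprintf('Case I: distinct real roots %.4f, %.4f\n', b(1), b(2));
elseif disc == 0
    fprintf('Case II: repeated real root %.4f\n', b(1));
else
    fprintf('Case III: complex roots, modulus %.4f\n', abs(b(1)));
end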

2.2 Particular Solutions

We pick any $y_p$ that satisfies
$$y_{t+2} + a_1 y_{t+1} + a_2 y_t = c.$$
The simplest choice is $y_{t+2} = y_{t+1} = y_t = k$, thus $k + a_1 k + a_2 k = c$ and
$$k = \frac{c}{1 + a_1 + a_2}.$$
But we have to make sure that $a_1 + a_2 \neq -1$. If $a_1 + a_2 = -1$ happens, we choose $y_p = kt$ instead, and following this pattern there are $kt^2$, $kt^3$, etc. to choose from, depending on the situation.

2.3 One Example

Solve
$$y_{t+2} - 4y_{t+1} + 4y_t = 7.$$

First, the complementary solution. The characteristic equation is
$$b^2 - 4b + 4 = 0 \quad\Rightarrow\quad (b-2)^2 = 0,$$
a repeated root $b = 2$, so we are in Case II and the complementary solution is
$$y_c = A_3 2^t + A_4 t\, 2^t.$$
For the particular solution we try $y_{t+2} = y_{t+1} = y_t = k$; then $k - 4k + 4k = 7$, which gives $k = y_p = 7$. The general solution is
$$y = y_c + y_p = A_3 2^t + A_4 t\, 2^t + 7.$$
We are given two initial conditions, $y_0 = 1$ and $y_1 = 3$:
$$y_0 = A_3 + 7 = 1, \qquad y_1 = 2A_3 + 2A_4 + 7 = 3.$$
Solving gives $A_3 = -6$ and $A_4 = 4$, so the definite solution is
$$y = (-6 + 4t)\, 2^t + 7.$$
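Since such factorizations are easy to get wrong, here is a brief MATLAB check (a sketch) that iterates the recursion $y_{t+2} = 4y_{t+1} - 4y_t + 7$ from $y_0 = 1$, $y_1 = 3$ and compares it with the closed form above.

T = 12;
y = zeros(1, T+1);
y(1) = 1;  y(2) = 3;                       % y_0 = 1, y_1 = 3
for t = 1:T-1
    y(t+2) = 4*y(t+1) - 4*y(t) + 7;        % y_{t+2} = 4y_{t+1} - 4y_t + 7
end
t = 0:T;
y_closed = (-6 + 4*t).*2.^t + 7;           % definite solution
max(abs(y - y_closed))                     % should be 0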

3 pth-Order Difference Equation

The previous sections simply taught you to solve low-order difference equations, without much insight or difficulty. From this section on, the difficulty increases significantly; for those of you whose linear algebra is not well prepared, it might not be a good idea to study this section in a hurry. Mathematics is built piece upon piece, and it is not really possible to progress in jumps. One warning: as the mathematics gets deeper, the notation also gets more complicated. But don't be dismayed, this is not rocket science. We generalize our difference equation to $p$th order,
$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_p y_{t-p} + \epsilon_t. \tag{4}$$

This is actually a generalization of (1); we introduce $\epsilon_t$ here in order to prepare for stochastic difference equations. It is conventional to difference backwards (i.e., to use lags) when dealing with high orders. For the time being we still take $\epsilon_t$ as deterministic. We cannot handle this difference equation in its present form, since it does not leave us much room to perform any useful algebraic operation. Even if you try the old method, its characteristic equation will be
$$b^t - a_1 b^{t-1} - a_2 b^{t-2} - \cdots - a_p b^{t-p} = 0;$$
factoring out $b^{t-p}$,
$$b^p - a_1 b^{p-1} - a_2 b^{p-2} - \cdots - a_p = 0.$$
If $p$ is high enough to give you a headache, that is a sign this is not the right way to advance. Everything will be reshaped into matrix form. Define
$$\xi_t = \begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ \vdots \\ y_{t-p+1} \end{bmatrix},$$
which is a $p \times 1$ vector, and define
$$F = \begin{bmatrix}
a_1 & a_2 & a_3 & \cdots & a_{p-1} & a_p\\
1 & 0 & 0 & \cdots & 0 & 0\\
0 & 1 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 1 & 0
\end{bmatrix}.$$
Finally, define
$$v_t = \begin{bmatrix} \epsilon_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$
Putting them together, we have a first-order difference equation in vector form,
$$\xi_t = F \xi_{t-1} + v_t.$$
You will see what $\xi_{t-1}$ is in the explicit form,
$$\begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ \vdots \\ y_{t-p+1} \end{bmatrix}
= \begin{bmatrix}
a_1 & a_2 & a_3 & \cdots & a_{p-1} & a_p\\
1 & 0 & 0 & \cdots & 0 & 0\\
0 & 1 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 1 & 0
\end{bmatrix}
\begin{bmatrix} y_{t-1} \\ y_{t-2} \\ y_{t-3} \\ \vdots \\ y_{t-p} \end{bmatrix}
+ \begin{bmatrix} \epsilon_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$
The first equation of this system is
$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_p y_{t-p} + \epsilon_t,$$
which is exactly (4). And it is quite obvious that the second through $p$th equations are simply identities of the form $y_i = y_i$. The reason for writing the system like this is not obvious yet, but one payoff is that we have reduced it to a first-order difference equation; although it is in matrix form, we can use our old methods to analyse it.

3.1 Iterative Method

We list the evolution of the difference equation as follows:
$$\begin{aligned}
\xi_0 &= F \xi_{-1} + v_0\\
\xi_1 &= F \xi_0 + v_1\\
\xi_2 &= F \xi_1 + v_2\\
&\ \vdots\\
\xi_t &= F \xi_{t-1} + v_t
\end{aligned}$$
We assume that $\xi_{-1}$ and $v_0$ are known. Then the old trick of recursive substitution:
$$\xi_1 = F(F \xi_{-1} + v_0) + v_1 = F^2 \xi_{-1} + F v_0 + v_1.$$
Again,
$$\xi_2 = F \xi_1 + v_2 = F(F^2 \xi_{-1} + F v_0 + v_1) + v_2 = F^3 \xi_{-1} + F^2 v_0 + F v_1 + v_2.$$
Up to step $t$,
$$\xi_t = F^{t+1} \xi_{-1} + F^t v_0 + F^{t-1} v_1 + \cdots + F v_{t-1} + v_t = F^{t+1} \xi_{-1} + \sum_{i=0}^{t} F^i v_{t-i},$$
which has the explicit form
$$\begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ \vdots \\ y_{t-p+1} \end{bmatrix}
= F^{t+1} \begin{bmatrix} y_{-1} \\ y_{-2} \\ y_{-3} \\ \vdots \\ y_{-p} \end{bmatrix}
+ F^t \begin{bmatrix} \epsilon_0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
+ F^{t-1} \begin{bmatrix} \epsilon_1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
+ \cdots
+ F \begin{bmatrix} \epsilon_{t-1} \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
+ \begin{bmatrix} \epsilon_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

Note how we use $v_i$ here. Unfortunately, more notation is needed. We denote the $(1,1)$ element of $F^t$ by $f^{(t)}_{11}$, the $(1,2)$ element by $f^{(t)}_{12}$, and so on. We introduce this notation in order to extract the first equation from the unwieldy system above:
$$y_t = f^{(t+1)}_{11} y_{-1} + f^{(t+1)}_{12} y_{-2} + f^{(t+1)}_{13} y_{-3} + \cdots + f^{(t+1)}_{1p} y_{-p} + f^{(t)}_{11} \epsilon_0 + f^{(t-1)}_{11} \epsilon_1 + \cdots + f^{(1)}_{11} \epsilon_{t-1} + \epsilon_t.$$
I have to admit, most of the time mathematics looks more difficult than it really is; notation is evil, and trust me, you have not seen the more evil stuff yet. However, keep telling yourself that this is just the first equation of the system, nothing more. Because the idea is the same in matrix form, we are given $\xi_{-1}$ and $v_0$ as initial values; here we just write them out explicitly: $y_t$ is a function of the initial values $y_{-1}$ through $y_{-p}$ and of the sequence of $\epsilon_i$. A scalar first-order difference equation needs one initial value, and a $p$th-order one needs $p$ initial values; but once we turn it into vector form it needs only one initial value again, namely $\xi_{-1}$.

If you want, you can even take a partial derivative to find the dynamic multiplier,
$$\frac{\partial y_t}{\partial \epsilon_0} = f^{(t)}_{11};$$
this measures the effect on $y_t$ of a one-unit increase in $\epsilon_0$, and is given by $f^{(t)}_{11}$.
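In MATLAB the dynamic multiplier is just the $(1,1)$ entry of the $t$th matrix power of $F$; a minimal sketch (coefficients again arbitrary):

a = [0.5 0.3 0.1];                     % arbitrary a_1, a_2, a_3
p = numel(a);
F = [a; eye(p-1), zeros(p-1,1)];
t = 8;
Ft = F^t;                              % matrix power
multiplier = Ft(1,1);                  % dy_t / d(eps_0) = f11^(t)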

3.2 Analytical Solution

We need to study the matrix $F$ further, because it is the core of the system; all the information is hidden in this matrix. We had better use a small-scale $F$ to study the general pattern. Let us set
$$F = \begin{bmatrix} a_1 & a_2 \\ 1 & 0 \end{bmatrix}.$$
From the first equation of the system we can write $y_t = a_1 y_{t-1} + a_2 y_{t-2} + \epsilon_t$, or, if you like,
$$y_t - a_1 y_{t-1} - a_2 y_{t-2} = \epsilon_t,$$
the familiar form of a nonhomogeneous second-order difference equation. Now calculate the eigenvalues of $F$:
$$\begin{vmatrix} a_1 - \lambda & a_2 \\ 1 & -\lambda \end{vmatrix} = 0.$$
Writing the determinant in algebraic form,
$$\lambda^2 - a_1 \lambda - a_2 = 0.$$
We finally reveal the mystery of why we always solve a characteristic equation first: because we are finding eigenvalues. In general the characteristic equation of a $p$th-order difference equation is
$$\lambda^p - a_1 \lambda^{p-1} - a_2 \lambda^{p-2} - \cdots - a_{p-1}\lambda - a_p = 0. \tag{5}$$
We could of course prove this by expanding $|F - \lambda I| = 0$, but the process looks rather messy; the basic idea is to perform row operations to turn it into an upper triangular matrix, whose determinant is just the product of the diagonal entries. Not much insight comes from it, so we omit the process here. For second-order differential or difference equations we usually discuss three cases: two distinct roots (now you know they are eigenvalues), one repeated root, and complex roots. We have the discriminant $b^2 - 4ac$ to categorize them, but that only works for second order; for higher orders there is nothing like it. This actually makes higher orders more interesting than low ones, because we can use linear algebra.

3.2.1 Distinct Real Eigenvalues

The distinct-eigenvalue category here corresponds to two distinct real roots in the second-order case. The following content is becoming hardcore; be sure you are well prepared in linear algebra. If $F$ has $p$ distinct eigenvalues you should be happy, because diagonalization is waving at you. Recall that $A = PDP^{-1}$, where $P$ is a nonsingular matrix; distinct eigenvalues assure us that we have linearly independent eigenvectors. We make a cosmetic change to suit our needs,
$$F = P \Lambda P^{-1}.$$
We use a capital $\Lambda$ to indicate that all the eigenvalues sit on the principal diagonal. You should respond instinctively that $F^t = P \Lambda^t P^{-1}$, simply because
$$F^2 = P \Lambda P^{-1} P \Lambda P^{-1} = P \Lambda \Lambda P^{-1} = P \Lambda^2 P^{-1}.$$
The diagonal matrix shows its convenience:
$$\Lambda^t = \begin{bmatrix}
\lambda_1^t & 0 & \cdots & 0\\
0 & \lambda_2^t & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \lambda_p^t
\end{bmatrix}.$$
Unfortunately, yet more notation is needed: we denote by $t_{ij}$ the $i$th-row, $j$th-column entry of $P$, and by $t^{ij}$ the $i$th-row, $j$th-column entry of $P^{-1}$. We can then write $F^t$ in explicit matrix form,
$$F^t = \begin{bmatrix}
t_{11} & t_{12} & \cdots & t_{1p}\\
t_{21} & t_{22} & \cdots & t_{2p}\\
\vdots & \vdots & \ddots & \vdots\\
t_{p1} & t_{p2} & \cdots & t_{pp}
\end{bmatrix}
\begin{bmatrix}
\lambda_1^t & 0 & \cdots & 0\\
0 & \lambda_2^t & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & \lambda_p^t
\end{bmatrix}
\begin{bmatrix}
t^{11} & t^{12} & \cdots & t^{1p}\\
t^{21} & t^{22} & \cdots & t^{2p}\\
\vdots & \vdots & \ddots & \vdots\\
t^{p1} & t^{p2} & \cdots & t^{pp}
\end{bmatrix}
= \begin{bmatrix}
t_{11}\lambda_1^t & t_{12}\lambda_2^t & \cdots & t_{1p}\lambda_p^t\\
t_{21}\lambda_1^t & t_{22}\lambda_2^t & \cdots & t_{2p}\lambda_p^t\\
\vdots & \vdots & \ddots & \vdots\\
t_{p1}\lambda_1^t & t_{p2}\lambda_2^t & \cdots & t_{pp}\lambda_p^t
\end{bmatrix}
\begin{bmatrix}
t^{11} & t^{12} & \cdots & t^{1p}\\
t^{21} & t^{22} & \cdots & t^{2p}\\
\vdots & \vdots & \ddots & \vdots\\
t^{p1} & t^{p2} & \cdots & t^{pp}
\end{bmatrix}.$$

Then $f^{(t)}_{11}$ is
$$f^{(t)}_{11} = t_{11} t^{11} \lambda_1^t + t_{12} t^{21} \lambda_2^t + \cdots + t_{1p} t^{p1} \lambda_p^t.$$
If we denote $c_i = t_{1i} t^{i1}$, then
$$f^{(t)}_{11} = c_1 \lambda_1^t + c_2 \lambda_2^t + \cdots + c_p \lambda_p^t.$$
If you pay attention to
$$c_1 + c_2 + \cdots + c_p = t_{11} t^{11} + t_{12} t^{21} + \cdots + t_{1p} t^{p1},$$
you will realize that it is a scalar product of the first row of $P$ and the first column of $P^{-1}$; in other words, it is the $(1,1)$ element of $P P^{-1}$. Magically, $P P^{-1}$ is the identity matrix, thus
$$c_1 + c_2 + \cdots + c_p = 1.$$
This time, if you want to calculate the dynamic multiplier,
$$\frac{\partial y_t}{\partial \epsilon_0} = f^{(t)}_{11} = c_1 \lambda_1^t + c_2 \lambda_2^t + \cdots + c_p \lambda_p^t,$$
so the dynamic multiplier is a weighted average of all the eigenvalues raised to the $t$th power. One problem must be solved before we move on: how can we find the $c_i$? Or is it just an expression without closed form, so that we have to stop here? What we do next might look very strange to you, but don't stop; finish it and you will get a sense of what we are preparing for. Set $t_i$ to be the eigenvector associated with the $i$th eigenvalue,
$$t_i = \begin{bmatrix} \lambda_i^{p-1} \\ \lambda_i^{p-2} \\ \vdots \\ \lambda_i \\ 1 \end{bmatrix}.$$
Then
$$F t_i = \begin{bmatrix}
a_1 & a_2 & a_3 & \cdots & a_{p-1} & a_p\\
1 & 0 & 0 & \cdots & 0 & 0\\
0 & 1 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 1 & 0
\end{bmatrix}
\begin{bmatrix} \lambda_i^{p-1} \\ \lambda_i^{p-2} \\ \vdots \\ \lambda_i \\ 1 \end{bmatrix}
= \begin{bmatrix} a_1\lambda_i^{p-1} + a_2\lambda_i^{p-2} + \cdots + a_{p-1}\lambda_i + a_p \\ \lambda_i^{p-1} \\ \lambda_i^{p-2} \\ \vdots \\ \lambda_i \end{bmatrix}.$$
Recall the characteristic equation of $p$th order (5),
$$\lambda^p - a_1\lambda^{p-1} - a_2\lambda^{p-2} - \cdots - a_{p-1}\lambda - a_p = 0.$$
Rearranging,
$$\lambda^p = a_1\lambda^{p-1} + a_2\lambda^{p-2} + \cdots + a_{p-1}\lambda + a_p. \tag{6}$$
Interestingly, we get what we want here: the right-hand side of the last equation is just the first element of $F t_i$, so
$$F t_i = \begin{bmatrix} \lambda_i^p \\ \lambda_i^{p-1} \\ \lambda_i^{p-2} \\ \vdots \\ \lambda_i \end{bmatrix}
= \lambda_i \begin{bmatrix} \lambda_i^{p-1} \\ \lambda_i^{p-2} \\ \vdots \\ \lambda_i \\ 1 \end{bmatrix}
= \lambda_i t_i,$$
where we factored out $\lambda_i$. We have successfully shown that $t_i$ is an eigenvector of $F$ associated with $\lambda_i$. Now we can set up $P$ with these eigenvectors as its columns,
$$P = \begin{bmatrix}
\lambda_1^{p-1} & \lambda_2^{p-1} & \cdots & \lambda_p^{p-1}\\
\lambda_1^{p-2} & \lambda_2^{p-2} & \cdots & \lambda_p^{p-2}\\
\vdots & \vdots & \ddots & \vdots\\
\lambda_1 & \lambda_2 & \cdots & \lambda_p\\
1 & 1 & \cdots & 1
\end{bmatrix}.$$

This is actually a transposed Vandermonde matrix, which is mainly used in signal processing and polynomial interpolation. We do not know $P^{-1}$ yet⁴, so we use the notation $t^{ij}$ and postmultiply $P$ by the first column of $P^{-1}$, which must give the first column of the identity matrix:
$$\begin{bmatrix}
\lambda_1^{p-1} & \lambda_2^{p-1} & \cdots & \lambda_p^{p-1}\\
\lambda_1^{p-2} & \lambda_2^{p-2} & \cdots & \lambda_p^{p-2}\\
\vdots & \vdots & \ddots & \vdots\\
\lambda_1 & \lambda_2 & \cdots & \lambda_p\\
1 & 1 & \cdots & 1
\end{bmatrix}
\begin{bmatrix} t^{11} \\ t^{21} \\ \vdots \\ t^{p1} \end{bmatrix}
= \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

⁴ Don't ever try to calculate the inverse matrix by the adjugate matrix or Gauss-Jordan elimination by hand; it is very inaccurate and time-consuming.

This is a linear equation system, and its solution looks like
$$\begin{aligned}
t^{11} &= \frac{1}{(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)\cdots(\lambda_1 - \lambda_p)}\\
t^{21} &= \frac{1}{(\lambda_2 - \lambda_1)(\lambda_2 - \lambda_3)\cdots(\lambda_2 - \lambda_p)}\\
&\ \vdots\\
t^{p1} &= \frac{1}{(\lambda_p - \lambda_1)(\lambda_p - \lambda_2)\cdots(\lambda_p - \lambda_{p-1})}.
\end{aligned}$$
Thus
$$\begin{aligned}
c_1 &= t_{11} t^{11} = \frac{\lambda_1^{p-1}}{(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)\cdots(\lambda_1 - \lambda_p)}\\
c_2 &= t_{12} t^{21} = \frac{\lambda_2^{p-1}}{(\lambda_2 - \lambda_1)(\lambda_2 - \lambda_3)\cdots(\lambda_2 - \lambda_p)}\\
&\ \vdots\\
c_p &= t_{1p} t^{p1} = \frac{\lambda_p^{p-1}}{(\lambda_p - \lambda_1)(\lambda_p - \lambda_2)\cdots(\lambda_p - \lambda_{p-1})}.
\end{aligned}$$

I know all this must be quite uncomfortable if you don't fancy mathematics too much; we had better go through a small example to get familiar with the artillery. Let's look at the second-order difference equation
$$y_t = 0.4 y_{t-1} + 0.7 y_{t-2} + \epsilon_t,$$
or, if you prefer,
$$y_{t+2} - 0.4 y_{t+1} - 0.7 y_t = \epsilon_{t+2}.$$
Calculate its characteristic equation from $|F - \lambda I| = 0$, where
$$F - \lambda I = \begin{bmatrix} 0.4 & 0.7 \\ 1 & 0 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0.4-\lambda & 0.7 \\ 1 & -\lambda \end{bmatrix}.$$
Now calculate its determinant,
$$|F - \lambda I| = \lambda^2 - 0.4\lambda - 0.7.$$


Figure 1: Dynamic multiplier as a function of $t$.

Use the root formula:
$$\lambda_1 = \frac{0.4 + \sqrt{(0.4)^2 + 4(0.7)}}{2} = 1.0602, \qquad
\lambda_2 = \frac{0.4 - \sqrt{(0.4)^2 + 4(0.7)}}{2} = -0.6602.$$
For the $c_i$:
$$c_1 = \frac{\lambda_1}{\lambda_1 - \lambda_2} = \frac{1.0602}{1.0602 + 0.6602} = 0.6163, \qquad
c_2 = \frac{\lambda_2}{\lambda_2 - \lambda_1} = \frac{-0.6602}{-0.6602 - 1.0602} = 0.3837.$$
Note that $0.6163 + 0.3837 = 1$. The dynamic multiplier is
$$\frac{\partial y_t}{\partial \epsilon_0} = c_1 \lambda_1^t + c_2 \lambda_2^t = 0.6163 \times 1.0602^t + 0.3837 \times (-0.6602)^t.$$
Figure 1 shows the dynamic multiplier as a function of $t$; the MATLAB code is in the appendix. The dynamic multiplier will explode as $t \to \infty$, because we have an eigenvalue $\lambda_1 > 1$.
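The same numbers can be recovered directly from the companion matrix in MATLAB; a quick sketch:

F = [0.4 0.7; 1 0];
lambda = eig(F);                              % eigenvalues, approx 1.0602 and -0.6602
lambda = sort(lambda, 'descend');
c1 = lambda(1)/(lambda(1) - lambda(2));       % approx 0.6163
c2 = lambda(2)/(lambda(2) - lambda(1));       % approx 0.3837
c1 + c2                                       % equals 1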

3.2.2 Distinct Complex Eigenvalues

We also need to talk about distinct complex eigenvalues. The example again uses a second-order difference equation, because we have a handy root formula; the pace might be a little fast, but it is easily understandable. Suppose we have two complex eigenvalues,
$$\lambda_1 = \alpha + \beta i, \qquad \lambda_2 = \alpha - \beta i.$$
The modulus of the conjugate pair is the same,
$$R = \sqrt{\alpha^2 + \beta^2}.$$
We rewrite the conjugate pair as
$$\lambda_1 = R(\cos\theta + i\sin\theta), \qquad \lambda_2 = R(\cos\theta - i\sin\theta).$$
According to De Moivre's theorem, raising a complex number to a power gives
$$\lambda_1^t = R^t(\cos\theta t + i\sin\theta t), \qquad \lambda_2^t = R^t(\cos\theta t - i\sin\theta t).$$
Back to the dynamic multiplier,
$$\begin{aligned}
\frac{\partial y_t}{\partial \epsilon_0} = c_1\lambda_1^t + c_2\lambda_2^t
&= c_1 R^t(\cos\theta t + i\sin\theta t) + c_2 R^t(\cos\theta t - i\sin\theta t)\\
&= (c_1 + c_2) R^t \cos\theta t + i(c_1 - c_2) R^t \sin\theta t.
\end{aligned}$$
However, the $c_i$ are calculated from the $\lambda_i$, so since the eigenvalues are complex numbers, so are the $c_i$. They also form a conjugate pair, which we can denote
$$c_1 = \gamma + \delta i, \qquad c_2 = \gamma - \delta i.$$
Thus
$$\begin{aligned}
c_1\lambda_1^t + c_2\lambda_2^t
&= [(\gamma + \delta i) + (\gamma - \delta i)] R^t\cos\theta t + i[(\gamma + \delta i) - (\gamma - \delta i)] R^t\sin\theta t\\
&= 2\gamma R^t\cos\theta t + i(2\delta i) R^t\sin\theta t\\
&= 2\gamma R^t\cos\theta t - 2\delta R^t\sin\theta t,
\end{aligned}$$
which turns out to be a real number again. The time path is determined by the modulus $R$: if $R > 1$ the dynamic multiplier will explode, and if $R < 1$ it follows either an oscillating or a non-oscillating decaying pattern.
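In MATLAB the modulus and angle of a complex eigenvalue pair come straight from abs and angle; a small sketch with illustrative coefficients chosen so that the roots are complex:

F = [0.5 -0.8; 1 0];              % a1 = 0.5, a2 = -0.8 gives complex eigenvalues
lambda = eig(F);
R     = abs(lambda(1));           % modulus, about 0.894, so the multiplier decays
theta = angle(lambda(1));         % frequency of the oscillation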

3.2.3 Repeated Eigenvalues

If we have repeated eigenvalues, diagonalization might not be available, since we might encounter a singular $P$. But we have enough mathematical artillery to switch to a more general decomposition, the Jordan decomposition. First a definition: an $h \times h$ Jordan block matrix is defined as
$$J_h(\lambda) = \begin{bmatrix}
\lambda & 1 & 0 & \cdots & 0\\
0 & \lambda & 1 & \cdots & 0\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 0 & \cdots & \lambda & 1\\
0 & 0 & \cdots & 0 & \lambda
\end{bmatrix}.$$
This is a Jordan canonical form⁵. The Jordan decomposition says that if we have an $m \times m$ matrix $A$, then there must be a nonsingular matrix $B$ such that
$$B^{-1} A B = J = \begin{bmatrix}
J_{h_1}(\lambda_1) & (0) & \cdots & (0)\\
(0) & J_{h_2}(\lambda_2) & \cdots & (0)\\
\vdots & \vdots & \ddots & \vdots\\
(0) & (0) & \cdots & J_{h_r}(\lambda_r)
\end{bmatrix}.$$
Cosmetically changed to our notation,
$$F = M J M^{-1},$$
and, as you can imagine,
$$F^t = M J^t M^{-1},$$
with
$$J^t = \begin{bmatrix}
J^t_{h_1}(\lambda_1) & (0) & \cdots & (0)\\
(0) & J^t_{h_2}(\lambda_2) & \cdots & (0)\\
\vdots & \vdots & \ddots & \vdots\\
(0) & (0) & \cdots & J^t_{h_r}(\lambda_r)
\end{bmatrix}.$$
All you need to do is calculate each $J^t_{h_i}(\lambda_i)$ one by one; of course we should leave this to the computer. Then you can calculate the dynamic multiplier as usual: once you figure out $F^t$, pick its $(1,1)$ entry $f^{(t)}_{11}$ and the rest follows.

⁵ Study my notes of Linear Algebra and Matrix Analysis II [1].
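A brief MATLAB illustration of leaving this to the computer (the jordan function requires the Symbolic Math Toolbox; the matrix below is an arbitrary example with a repeated eigenvalue):

F = [4 -4; 1 0];                 % a1 = 4, a2 = -4: repeated eigenvalue 2
[M, J] = jordan(sym(F));         % F = M*J*M^(-1), J has a 2x2 Jordan block
t  = 5;
Ft = M*J^t/M;                    % F^t = M*J^t*M^(-1)
double(Ft(1,1))                  % dynamic multiplier f11^(t)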

4 Lag Operator

In econometrics, the convention is to use the lag operator to do what we did with difference equations. Essentially, they are identical. But there is still a subtle conceptual discrepancy: a difference equation is a kind of equation, a balanced expression on both sides, while the lag operator represents a kind of operation, no different from addition, subtraction or multiplication. The lag operator turns a time series into a difference equation. We conventionally use $L$ to represent the lag operator, for instance,
$$L x_t = x_{t-1}, \qquad L^2 x_t = x_{t-2}.$$

4.1 pth-order difference equation with lag operator

Basically this section reproduces the crucial results we obtained with difference equations; we want to show that the same goal can be reached with the help of either difference equations or lag operators. Turn the $p$th-order difference equation
$$y_t - a_1 y_{t-1} - a_2 y_{t-2} - \cdots - a_p y_{t-p} = \epsilon_t$$
into lag operator form,
$$(1 - a_1 L - a_2 L^2 - \cdots - a_p L^p) y_t = \epsilon_t.$$
Because $L$ is an operation in the equation above, some of the steps below might not be strictly appropriate, but if we switch to
$$(1 - a_1 z - a_2 z^2 - \cdots - a_p z^p) y_t = \epsilon_t,$$
we can use ordinary algebraic operations to analyse it. Multiply both sides by $z^{-p}$:
$$z^{-p}(1 - a_1 z - a_2 z^2 - \cdots - a_p z^p) y_t = z^{-p}\epsilon_t$$
$$(z^{-p} - a_1 z^{1-p} - a_2 z^{2-p} - \cdots - a_p) y_t = z^{-p}\epsilon_t.$$
Define $\lambda = z^{-1}$ and set the left-hand-side polynomial to zero,
$$\lambda^p - a_1 \lambda^{p-1} - a_2 \lambda^{p-2} - \cdots - a_{p-1}\lambda - a_p = 0.$$
This is the characteristic equation, a reproduction of equation (5). The rest of the reproduction offers no further insight, only technical manipulation to arrive at the same results as the difference-equation approach, so we stop here.
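The equivalence is easy to check numerically: the reciprocals of the roots of the lag polynomial $1 - a_1 z - \cdots - a_p z^p$ are the eigenvalues of the companion matrix $F$. A MATLAB sketch using the coefficients of the earlier example:

a = [0.4 0.7];                         % a1, a2 from the earlier example
z = roots([-fliplr(a) 1]);             % roots of 1 - a1*z - a2*z^2
lambda_from_lag = sort(1./z);          % reciprocals of the roots
lambda_from_F   = sort(eig([a; 1 0])); % eigenvalues of the companion matrix
[lambda_from_lag, lambda_from_F]       % the two columns agree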

5 Appendix

5.1 MATLAB code

x = 1:20;
y = 0.6163*1.0602.^x + 0.3837*(-0.6602).^x;   % dynamic multiplier c1*lambda1^t + c2*lambda2^t
bar(x, y)


References

[1] Chen, W. (2011): Linear Algebra and Matrix Analysis II, study notes.
[2] Chen, W. (2011): Differential Equations, study notes.
[3] Chiang, A. C. and Wainwright, K. (2005): Fundamental Methods of Mathematical Economics, McGraw-Hill.
[4] Simon, C. and Blume, L. (1994): Mathematics for Economists, W. W. Norton & Company.
[5] Enders, W. (2004): Applied Econometric Time Series, John Wiley & Sons.

