John H. Cochrane¹

February 8, 2012

¹ University of Chicago Booth School of Business, 5807 S. Woodlawn, Chicago IL 60637, 773 702 3059, john.cochrane@chicagobooth.edu, http://faculty.chicagobooth.edu/john.cochrane/. I acknowledge research support from CRSP.
Contents

1 Introduction
2 Summary
3 Linear models and lag operators
  3.1 Discrete time reminder
  3.2 Continuous-time operators
  3.3 Laplace transforms
4 Moving average representation and moments
  4.1 Levels and differences
    4.1.1 Levels to differences in discrete time
    4.1.2 Levels to differences in continuous time
  4.2 Finding an integral representation; unit roots and random walks
    4.2.1 Discrete time
    4.2.2 Continuous time
5 Impulse-response function
6 Hansen-Sargent formulas
  6.1 Discrete time
    6.1.1 AR(1) example
    6.1.2 A prettier formula and the impact multiplier
    6.1.3 Derivation
  6.2 Continuous time
    6.2.1 Derivation
7 Autoregressive representations
  7.1 How not to define AR models
  7.2 Adding AR(1)s to obtain more complex processes
    7.2.1 A two-component example
8 Tricks
9 A continuous-time asset pricing economy
  9.1 Time-separable model
    9.1.1 Model solution
  9.2 A model with habits or durability
    9.2.1 Derivation
10 References
11 Lecture notes
12 Notes on Hansen Sargent
1 Introduction

Discrete-time linear ARMA processes and lag operator notation are very convenient for lots of calculations. Continuous-time models allow you to handle interesting nonlinear models as well. But treatments of those models typically don't mention how to adapt the discrete-time linear model and lag operator tricks to continuous time. Here I'll attempt that translation. The capstone is an explanation of the Hansen-Sargent (1980), (1991) prediction formulas for continuous-time linear models, and their use to solve a fairly complex linear-quadratic asset pricing model with habits and durability in consumption.

The point of this note is heuristic, to learn to use and intuit the techniques. I do not pretend there is anything new here. I also don't discuss the technicalities. Hansen and Sargent (1991) is a good reference for most of this material.
2 Summary

Basic operators:
$$L^{\tau}x_t = x_{t-\tau}; \qquad Dx_t = \frac{1}{dt}\,dx_t; \qquad L = e^{-D}; \quad D = -\log(L)$$

Lag polynomials and functions:
$$x_t = \sum_{j=0}^{\infty}b_j\,\varepsilon_{t-j} = Z_b(L)\,\varepsilon_t; \qquad Z_b(L) = \sum_{j=0}^{\infty}b_j L^j$$
$$x_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau} = L_b(D)\,DB_t; \qquad L_b(D) = \int_{\tau=0}^{\infty}e^{-D\tau}b(\tau)\,d\tau$$
Inverting the AR(1):
$$x_{t+1} = \rho x_t + \varepsilon_{t+1} \;\Rightarrow\; x_t = \sum_{j=0}^{\infty}\rho^j\,\varepsilon_{t-j}$$
$$dx_t = -\phi x_t\,dt + \sigma\,dB_t \;\Rightarrow\; x_t = \sigma\int_{\tau=0}^{\infty}e^{-\phi\tau}\,dB_{t-\tau}$$
$$(1-\rho L)\,x_t = \varepsilon_t \;\Rightarrow\; x_t = \frac{1}{1-\rho L}\,\varepsilon_t = \sum_{j=0}^{\infty}\rho^j L^j\,\varepsilon_t$$
$$(D+\phi)\,x_t = \sigma\,DB_t \;\Rightarrow\; x_t = \frac{\sigma}{\phi+D}\,DB_t = \sigma\left[\int_{\tau=0}^{\infty}e^{-\phi\tau}e^{-D\tau}\,d\tau\right]\frac{1}{dt}\,dB_t$$

Forward-looking operators. For $|\rho| > 1$:
$$\frac{1}{1-\rho L}\,\varepsilon_t = -\frac{\rho^{-1}L^{-1}}{1-\rho^{-1}L^{-1}}\,\varepsilon_t = -\sum_{j=1}^{\infty}\rho^{-j}L^{-j}\,\varepsilon_t = -\sum_{j=1}^{\infty}\rho^{-j}\,\varepsilon_{t+j}$$
For $r > 0$:
$$\frac{1}{r-D}\,DB_t = \left[\int_{\tau=0}^{\infty}e^{-r\tau}e^{+D\tau}\,d\tau\right]DB_t = \int_{\tau=0}^{\infty}e^{-r\tau}\,dB_{t+\tau}$$
Moving averages and moments, discrete time:
$$\sigma^2(x_t) = \sum_{j=0}^{\infty}b_j^2\,\sigma_\varepsilon^2; \qquad \operatorname{cov}(x_t,x_{t-k}) = \sum_{j=0}^{\infty}b_j b_{j+k}\,\sigma_\varepsilon^2$$
$$S_x(\omega) = \sum_{j=-\infty}^{\infty}e^{-i\omega j}\operatorname{cov}(x_t,x_{t-j}) = Z_b(e^{-i\omega})\,Z_b(e^{+i\omega})\,\sigma_\varepsilon^2$$
$$\operatorname{cov}(x_t,x_{t-j}) = \frac{1}{2\pi}\int_{-\pi}^{\pi}e^{+i\omega j}\,S_x(\omega)\,d\omega$$

Continuous time:
$$\sigma^2(x_t) = \int_{\tau=0}^{\infty}b^2(\tau)\,d\tau; \qquad \operatorname{cov}(x_t,x_{t-k}) = \int_{\tau=0}^{\infty}b(\tau)\,b(\tau+k)\,d\tau$$
$$S_x(\omega) = \int_{\tau=-\infty}^{\infty}e^{-i\omega\tau}\operatorname{cov}(x_t,x_{t-\tau})\,d\tau = L_b(i\omega)\,L_b(-i\omega)$$
$$\operatorname{cov}(x_t,x_{t-k}) = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{+i\omega k}\,S_x(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{+i\omega k}\,L_b(i\omega)\,L_b(-i\omega)\,d\omega$$
Levels to differences:
$$x_t - x_{t-1} = b_0\,\varepsilon_t + \sum_{j=1}^{\infty}(b_j-b_{j-1})\,\varepsilon_{t-j}$$
$$dx_t = \left[\int_{\tau=0}^{\infty}\frac{db(\tau)}{d\tau}\,dB_{t-\tau}\right]dt + b(0)\,dB_t$$
$$(1-L)\,x_t = (1-L)\,Z_b(L)\,\varepsilon_t$$
$$Dx_t = D\,L_b(D)\,DB_t = \left[b(0)+L_{b'}(D)\right]DB_t = \left[\lim_{D\to\infty}D\,L_b(D) + L_{b'}(D)\right]DB_t$$

Differences to levels, Beveridge-Nelson decompositions. In discrete time,
$$(1-L)\,x_t = Z_a(L)\,\varepsilon_t = \sum_{j=0}^{\infty}a_j L^j\,\varepsilon_t$$
implies
$$x_t = z_t - w_t;$$
$$(1-L)\,z_t = Z_a(1)\,\varepsilon_t$$
$$w_t = Z_b(L)\,\varepsilon_t; \qquad b_j = \sum_{k=j+1}^{\infty}a_k$$
or, equivalently,
$$Z_a(L) = Z_a(1) - (1-L)\,Z_b(L).$$
$z_t$ has the "trend" property
$$z_t = \lim_{j\to\infty}E_t\,x_{t+j} = x_t + E_t\sum_{j=1}^{\infty}\Delta x_{t+j}.$$
In continuous time,
$$dx_t = \left[\int_{\tau=0}^{\infty}a(\tau)\,dB_{t-\tau}\right]dt + \sigma\,dB_t \qquad Dx_t = \left[L_a(D)+\sigma\right]DB_t$$
implies
$$x_t = z_t - w_t;$$
$$dz_t = \left[\sigma+\int_{s=0}^{\infty}a(s)\,ds\right]dB_t = \left[\sigma+L_a(0)\right]dB_t$$
$$w_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau} = L_b(D)\,DB_t; \qquad b(\tau) = \int_{s=0}^{\infty}a(\tau+s)\,ds$$
or, equivalently,
$$L_a(D)+\sigma = \left[\sigma+L_a(0)\right] - D\,L_b(D).$$
$z_t$ has the "trend" property
$$z_t = \lim_{\tau\to\infty}E_t\,x_{t+\tau} = x_t + E_t\int_{\tau=0}^{\infty}dx_{t+\tau}.$$
Impulse-response functions and multipliers:
$$(E_t-E_{t-1})\,x_{t+j} = b_j\,\varepsilon_t \qquad \lim_{\Delta\to 0}(E_{t+\Delta}-E_t)\,x_{t+\tau} = b(\tau)\,dB_t$$
$$(E_t-E_{t-1})\,x_t = (E_t-E_{t-1})\,\Delta x_t = b_0\,\varepsilon_t = Z_b(0)\,\varepsilon_t$$
$$\lim_{\Delta\to 0}(E_{t+\Delta}-E_t)\,dx_t = b(0)\,dB_t = \left[\lim_{D\to\infty}D\,L_b(D)\right]dB_t$$
$$b_\infty = \lim_{j\to\infty}b_j \text{ should } = 0; \qquad b(\infty) = \lim_{D\to 0}D\,L_b(D) \text{ should } = 0.$$
Cumulative response of $x_t$:
$$Z_b(1) = \sum_{j=0}^{\infty}b_j \qquad L_b(0) = \int_{\tau=0}^{\infty}b(\tau)\,d\tau$$
For a MA representation of differences,
$$dx_t = \left[\int_{\tau=0}^{\infty}a(\tau)\,dB_{t-\tau}\right]dt + \sigma\,dB_t \qquad Dx_t = \left[L_a(D)+\sigma\right]DB_t,$$
the impact multiplier is
$$\lim_{\Delta\to 0}(E_{t+\Delta}-E_t)\,dx_t = \sigma\,dB_t$$
and the cumulative multiplier is
$$\lim_{\Delta\to 0}(E_{t+\Delta}-E_t)\int_{\tau=0}^{\infty}dx_{t+\tau} = \left[\sigma+\int_{s=0}^{\infty}a(s)\,ds\right]dB_t = \left[\sigma+L_a(0)\right]dB_t.$$
Hansen-Sargent prediction formulas:
$$E_t\sum_{j=0}^{\infty}\beta^j x_{t+j} = \frac{L\,Z_b(L)-\beta\,Z_b(\beta)}{L-\beta}\,\varepsilon_t$$
$$E_t\sum_{j=1}^{\infty}\beta^{j-1}x_{t+j} = \frac{Z_b(L)-Z_b(\beta)}{L-\beta}\,\varepsilon_t$$
$$(E_t-E_{t-1})\sum_{j=0}^{\infty}\beta^j x_{t+j} = Z_b(\beta)\,\varepsilon_t.$$
Continuous time:
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}x_{t+\tau}\,d\tau = \frac{L_b(D)-L_b(r)}{r-D}\,DB_t.$$
$$\lim_{\Delta\to 0}(E_{t+\Delta}-E_t)\int_{\tau=0}^{\infty}e^{-r\tau}x_{t+\tau}\,d\tau = L_b(r)\,dB_t.$$
If
$$dx_t = \left[\int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau}\right]dt + \sigma\,dB_t \qquad Dx_t = \left[\sigma+L_b(D)\right]DB_t$$
then also
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,dx_{t+\tau} = \frac{L_b(D)-L_b(r)}{r-D}\,DB_t.$$
Autoregressive processes,
$$dx_t = -\left[\int_{\tau=0}^{\infty}\alpha(\tau)\,dx_{t-\tau}\right]dt + \sigma\,dB_t \qquad \left(1+L_\alpha(D)\right)Dx_t = \sigma\,DB_t$$
or,
$$dx_t = -\left[\alpha(0)\,x_t + \int_{\tau=0}^{\infty}\alpha'(\tau)\,x_{t-\tau}\,d\tau\right]dt + \sigma\,dB_t \qquad \left[D+\alpha(0)+L_{\alpha'}(D)\right]x_t = \sigma\,DB_t.$$
Two-component example, "AR(2)":
$$x_t = \left[\frac{\alpha}{D+\lambda_1}+\frac{\beta}{D+\lambda_2}\right]DB_t \qquad \left[(D+\theta_2)-\frac{k}{D+\theta_1}\right]x_t = (\alpha+\beta)\,DB_t$$
where
$$\theta_1 = \frac{\alpha\lambda_2+\beta\lambda_1}{\alpha+\beta}; \quad \theta_2 = \frac{\alpha\lambda_1+\beta\lambda_2}{\alpha+\beta}; \quad k = \frac{\alpha\beta}{(\alpha+\beta)^2}\left(\lambda_1-\lambda_2\right)^2.$$
Conversely,
$$\left[(D+\lambda_1)-\frac{\theta}{D+\lambda_2}\right]x_t = DB_t \qquad x_t = \frac{1}{\rho-\delta}\left[\frac{\lambda_2-\delta}{D+\delta}+\frac{\lambda_1-\delta}{D+\rho}\right]DB_t$$
where
$$\rho,\delta = \frac{(\lambda_1+\lambda_2)\pm\sqrt{(\lambda_1-\lambda_2)^2+4\theta}}{2}.$$
3 Linear models and lag operators

I start by defining lag operators and the inversion formulas.

3.1 Discrete time reminder

As a reminder, discrete-time linear models can be written in the unique moving average or Wold representation
$$x_t = \sum_{j=0}^{\infty}b_j\,\varepsilon_{t-j} = Z_b(L)\,\varepsilon_t \quad (1)$$
where the operator $L$ is defined by
$$L\,\varepsilon_t = \varepsilon_{t-1}$$
and
$$Z_b(L) = \sum_{j=0}^{\infty}b_j L^j.$$
(It's common to write $b(L) = \sum_{j=0}^{\infty}b_j L^j$, but I will need to more clearly distinguish the function $Z_b(L)$ from the function $b(j) = b_j$ as we go to continuous time.)

The Wold representation and its error are defined from the autoregression
$$x_t = \sum_{j=1}^{\infty}a_j\,x_{t-j} + \varepsilon_t.$$
We can write this autoregressive representation in lag-operator form,
$$Z_a(L)\,x_t = \varepsilon_t; \qquad Z_a(L) = 1 - \sum_{j=1}^{\infty}a_j L^j. \quad (2)$$
We can connect the autoregressive and moving average representations by inversion,
$$Z_a(L) = Z_b(L)^{-1}; \qquad Z_b(L) = Z_a(L)^{-1}.$$
Now, that's pretty, but how do you actually construct $Z_a(L)^{-1}$? Here we use a power series interpretation. For example, from the definition $L\,\varepsilon_t = \varepsilon_{t-1}$ it's pretty clear that $L^{-1}\varepsilon_t = \varepsilon_{t+1}$.
But suppose we want to invert the AR(1), $(1-\rho L)x_t = \varepsilon_t$. What meaning can we give to $1/(1-\rho L)$? (How do we find a $Z_b(L)$ such that $Z_b(L)\,Z_a(L) = 1$?) To do that, we use the power series expansion,
$$\frac{1}{1-\rho L} = \sum_{j=0}^{\infty}\rho^j L^j$$
(assuming $|\rho| < 1$). Then we can use the lag operator notation to construct transformations from AR(1) to MA($\infty$) representations and back again,
$$(1-\rho L)\,x_t = \varepsilon_t \implies x_t = \frac{1}{1-\rho L}\,\varepsilon_t = \sum_{j=0}^{\infty}\rho^j L^j\,\varepsilon_t = \sum_{j=0}^{\infty}\rho^j\,\varepsilon_{t-j}.$$
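As a numerical aside (my own sketch, not part of the original argument; the parameter values are arbitrary), the power-series inversion can be checked by computing the same AR(1) path both from the recursion and from the truncated moving-average sum:

```python
# Check that inverting (1 - rho L) via the power series sum_j rho^j L^j
# reproduces the AR(1) recursion x_t = rho * x_{t-1} + eps_t.
import random

random.seed(0)
rho, T = 0.9, 400
eps = [random.gauss(0.0, 1.0) for _ in range(T)]

x_rec = [eps[0]]                      # AR(1) by recursion, x_0 = eps_0
for t in range(1, T):
    x_rec.append(rho * x_rec[t - 1] + eps[t])

def x_ma(t):
    # moving-average form: sum_{j=0}^{t} rho^j eps_{t-j}
    return sum(rho ** j * eps[t - j] for j in range(t + 1))

max_err = max(abs(x_rec[t] - x_ma(t)) for t in range(T))
print(max_err)
```

The two constructions agree up to floating-point rounding, which is the whole content of the operator identity $(1-\rho L)^{-1} = \sum_j \rho^j L^j$.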
A common confusion is especially important when we go to continuous time. The process $x_t$ may also have a nonlinear representation which allows greater predictability. For example, a random number generator is fully deterministic, $x_t = f(x_{t-1})$ with no error. The function $f$ is just so complex that when you run linear regressions of $x_t$ on its past, it looks unpredictable. A precise notation would use $E_{t-1}(x_t) = E(x_t \mid x_{t-1}, x_{t-2}, \ldots)$ to mean prediction using all linear and nonlinear functions, and give $E_{t-1}(x_t) = f(x_{t-1}) = x_t$ in this example. We would use a notation such as $\hat{E}(x_t \mid \{x_{t-j}\})$ to denote linear prediction. I will not be so careful, so I will use $E_{t-1}$ or $E(x_t \mid \{x_{t-j}\})$ to mean prediction given the linear models under consideration, and I'll write "expected" without further ado.

This point is important to address the common confusion that a linear model is not "right" if there is an underlying better nonlinear model. That criticism is incorrect. Even if there is a nonlinear model (say a square root process), there is nothing wrong with also studying its linear predictive representation. Similarly, just because there may be additional variables $y_t, z_t, \ldots$ that help to forecast $x_t$, there is nothing wrong with studying conditional (on $x$) moments that ignore this extra information. The only place that either conditioning-down assumption causes trouble is if you assume agents in a model only see the variables or information set that you the econometrician choose to model.
3.2 Continuous-time operators

We usually write continuous-time processes in differential or integral form. For example, the continuous-time AR(1) can be written in differential form,
$$dx_t = -\phi x_t\,dt + \sigma\,dB_t,$$
where $dB_t$ is the increment of a Brownian motion, or in integral form,
$$x_t = \sigma\int_{\tau=0}^{\infty}e^{-\phi\tau}\,dB_{t-\tau}.$$
The integral form is the obvious analogue to the moving-average form of the discrete-time representation. Our job is to think about and manipulate these expressions and how to transform one to the other with lag operators, with an eye to doing so for more complex processes.

The lag operator can simply be extended to real numbers from integers, i.e.
$$L^{\tau}x_t = x_{t-\tau}.$$
Since we write differential expressions $dx_t$ in continuous time, it's convenient to define the differential operator $D$, i.e.
$$Dx_t = \frac{1}{dt}\,dx_t,$$
where $dx_t$ is the familiar continuous-time forward-difference operator,
$$dx_t = \lim_{\Delta\to 0}\left(x_{t+\Delta}-x_t\right). \quad (3)$$
(This is not a limit in the usual $\epsilon$, $\delta$ sense, but I'll leave that to continuous-time math books and continue to abuse notation.)

$D$ and $L$ are related by
$$D = \lim_{\Delta\to 0}\frac{1-L^{\Delta}}{\Delta} = \lim_{\Delta\to 0}\frac{1-e^{\log(L)\Delta}}{\Delta} = \lim_{\Delta\to 0}\frac{-\log(L)\,e^{\log(L)\Delta}}{1} = -\log(L).$$
In sum, the integral and differential operators are related by
$$e^{-D} = L; \qquad D = -\log(L). \quad (4)$$
The obvious generalization: we write the general moving average as
$$x_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau}$$
and in lag operator notation
$$x_t = L_b(D)\,DB_t; \qquad L_b(D) = \int_{\tau=0}^{\infty}e^{-D\tau}\,b(\tau)\,d\tau.$$
It will usually be convenient to describe continuous-time lag functions in terms of $D$ rather than $L$. Thus, familiar quantities such as the impact multiplier $Z_b(L=0)$ and the cumulative multiplier $Z_b(L=1)$ will have counterparts corresponding to $L_b(D=\infty)$ and $L_b(D=0)$. (I find these below.)
For example, the continuous-time AR(1) process in differential form reads
$$dx_t + \phi x_t\,dt = \sigma\,dB_t \qquad (D+\phi)\,x_t = \sigma\,DB_t.$$
We can invert this formula by inverting the lag operator polynomial as we do in discrete time:¹
$$x_t = \sigma\,\frac{1}{D+\phi}\,DB_t = \sigma\left[\int_{\tau=0}^{\infty}e^{-\phi\tau}e^{-D\tau}\,d\tau\right]DB_t = \sigma\int_{\tau=0}^{\infty}e^{-\phi\tau}\,dB_{t-\tau}.$$
The second equality uses the formula for the integral of an exponential to interpret $1/(D+\phi)$, as we used the power series expansion $\sum_{j=0}^{\infty}\rho^j L^j$ to interpret $1/(1-\rho L)$.
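A numerical check of this inversion (my own sketch, with arbitrary parameters): simulate $dx_t = -\phi x_t\,dt + \sigma\,dB_t$ by Euler steps, and compare with the integrating-factor solution $x_t = e^{-\phi t}x_0 + \int_0^t e^{-\phi(t-s)}\sigma\,dB_s$ evaluated on the same Brownian increments.

```python
# Euler simulation of the continuous-time AR(1) versus its closed-form
# (integrating-factor) solution, evaluated on the same Brownian path.
import math
import random

random.seed(3)
phi, sigma, x0 = 0.8, 0.4, 1.0
dt, n = 1e-4, 20_000                        # simulate out to t = 2.0
dB = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]

x_euler = x0
integral = 0.0                              # int_0^t e^{phi s} sigma dB_s
for k in range(n):
    x_euler += -phi * x_euler * dt + sigma * dB[k]
    integral += math.exp(phi * k * dt) * sigma * dB[k]

t = n * dt
x_exact = math.exp(-phi * t) * (x0 + integral)
print(abs(x_euler - x_exact))               # small: discretization error only
```

The two paths differ only by the Euler discretization error, which vanishes as the step size shrinks.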
¹ In case you forgot, here's the standard way to go from differential to integral representation by solving the stochastic differential equation. You introduce a clever integrating factor $e^{\phi t}$ and note
$$d\left(e^{\phi t}x_t\right) = \phi e^{\phi t}x_t\,dt + e^{\phi t}\,dx_t = e^{\phi t}\left(dx_t + \phi x_t\,dt\right). \quad (5)$$
Then, start with the differential representation of the AR(1),
$$dx_t = -\phi x_t\,dt + \sigma\,dB_t.$$
3.3 Laplace transforms

Now, where does this all come from, really? If a process $y_t$ is generated from another $x_t$ by
$$y_t = \int_{\tau=0}^{\infty}b(\tau)\,x_{t-\tau}\,d\tau,$$
the Laplace transform of this operation is defined as
$$L_b(D) = \int_{\tau=0}^{\infty}e^{-D\tau}\,b(\tau)\,d\tau,$$
where $D$ is a complex number.

Given this definition, the Laplace transform of the lag operation is
$$y_t = x_{t-j} \iff L(D) = e^{-jD}.$$
This definition establishes the relationship between lag and differential operators (4) directly.

One difference in notation between discrete and continuous time is necessary. It's common to write the discrete-time lag polynomial
$$b(L) = \sum_{j=0}^{\infty}b_j L^j.$$
It would be nice to write
$$b(D) = \int_{\tau=0}^{\infty}e^{-D\tau}\,b(\tau)\,d\tau,$$
but we can't do that, since $b(\tau)$ is already a function. If in discrete time we had written $b_j = b(j)$, then $b(L)$ wouldn't have made any sense either. For this reason, we'll have to use a different letter. In deference to the Laplace transform I'll stick with the notation
$$L_b(D) \equiv \int_{\tau=0}^{\infty}e^{-D\tau}\,b(\tau)\,d\tau,$$
and for clarity I also write lag polynomial functions as
$$Z_b(L) = \sum_{j=0}^{\infty}b_j L^j$$
(Footnote 1, continued.) Multiplying both sides by $e^{\phi t}$:
$$e^{\phi t}\left(dx_t + \phi x_t\,dt\right) = e^{\phi t}\sigma\,dB_t$$
$$d\left(e^{\phi t}x_t\right) = e^{\phi t}\sigma\,dB_t$$
$$\int_{s=0}^{t}d\left(e^{\phi s}x_s\right) = \int_{s=0}^{t}e^{\phi s}\sigma\,dB_s$$
$$e^{\phi t}x_t - x_0 = \int_{s=0}^{t}e^{\phi s}\sigma\,dB_s$$
$$x_t = e^{-\phi t}x_0 + \int_{s=0}^{t}e^{-\phi(t-s)}\sigma\,dB_s.$$
rather than the more common $b(L)$. ($Z$ stands for "z-transform.")
To use a lag polynomial
$$Z_b(L) = \frac{1}{1-\rho L}: \qquad x_t = \sum_{j=0}^{\infty}\rho^j L^j\,\varepsilon_t = \sum_{j=0}^{\infty}\rho^j\,\varepsilon_{t-j},$$
we must have $|\rho| < 1$. In general, then, the poles $\{L : Z_b(L) = \infty\}$ and the roots $\{L : Z_a(L) = Z_b(L)^{-1} = 0\}$ must lie outside the unit circle. The domain of $Z_b(L)$ is $|L| < |\rho|^{-1}$, for which $|L| \leq 1$ will suffice.

When $\rho > 1$, or if the poles of $Z_b(L)$ are inside the unit circle, we "solve in the opposite direction":
$$\frac{1}{1-\rho L}\,\varepsilon_t = -\frac{\rho^{-1}L^{-1}}{1-\rho^{-1}L^{-1}}\,\varepsilon_t = -\sum_{j=1}^{\infty}\rho^{-j}L^{-j}\,\varepsilon_t = -\sum_{j=1}^{\infty}\rho^{-j}\,\varepsilon_{t+j}.$$
Here, the domain of $Z_b(L)$ must be $L$ outside the unit circle.

Similarly, when we interpret
$$L_b(D)\,DB_t = \frac{1}{r+D}\,DB_t = \left[\int_{\tau=0}^{\infty}e^{-r\tau}e^{-D\tau}\,d\tau\right]DB_t = \int_{\tau=0}^{\infty}e^{-r\tau}\,dB_{t-\tau},$$
we must have $r > 0$; and for the forward-looking version we need $\operatorname{Re}(D) < r$, so that
$$\frac{1}{r-D}\,DB_t = \left[\int_{\tau=0}^{\infty}e^{-r\tau}e^{+D\tau}\,d\tau\right]DB_t = \int_{\tau=0}^{\infty}e^{-r\tau}\,dB_{t+\tau}.$$

Lag operators (Laplace transforms) commute, so we can simplify expressions by taking them in any order that is convenient,
$$L_a(D)\,L_b(D) = L_b(D)\,L_a(D); \qquad Z_a(L)\,Z_b(L) = Z_b(L)\,Z_a(L).$$
4 Moving average representation and moments

The moving average representation
$$x_t = \sum_{j=0}^{\infty}b_j\,\varepsilon_{t-j} = Z_b(L)\,\varepsilon_t$$
is also a basis for all the second-moment statistical properties of the series: variance
$$\sigma^2(x_t) = \sum_{j=0}^{\infty}b_j^2\,\sigma_\varepsilon^2,$$
covariance
$$\operatorname{cov}(x_t,x_{t-k}) = \sum_{j=0}^{\infty}b_j b_{j+k}\,\sigma_\varepsilon^2,$$
and spectral density
$$S_x(\omega) = \sum_{j=-\infty}^{\infty}e^{-i\omega j}\operatorname{cov}(x_t,x_{t-j}) = Z_b(e^{-i\omega})\,Z_b(e^{+i\omega})\,\sigma_\varepsilon^2.$$
The inversion formula
$$\operatorname{cov}(x_t,x_{t-j}) = \frac{1}{2\pi}\int_{-\pi}^{\pi}e^{+i\omega j}\,S_x(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi}e^{+i\omega j}\,Z_b(e^{-i\omega})\,Z_b(e^{+i\omega})\,\sigma_\varepsilon^2\,d\omega$$
gives us a direct connection between the function $Z_b(e^{-i\omega})$ and the second moments of the series.

The continuous-time moving-average representation
$$x_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau} = L_b(D)\,DB_t$$
is also the basis for standard moment calculations,
$$\sigma^2(x_t) = \int_{\tau=0}^{\infty}b^2(\tau)\,d\tau,$$
$$\operatorname{cov}(x_t,x_{t-k}) = \int_{\tau=0}^{\infty}b(\tau)\,b(\tau+k)\,d\tau,$$
$$S_x(\omega) = L_b(i\omega)\,L_b(-i\omega),$$
and the inversion formula
$$\operatorname{cov}(x_t,x_{t-k}) = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{+i\omega k}\,S_x(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{+i\omega k}\,L_b(i\omega)\,L_b(-i\omega)\,d\omega.$$

For example, the AR(1),
$$x_t = \int_{\tau=0}^{\infty}e^{-\phi\tau}\sigma\,dB_{t-\tau} = \frac{\sigma}{\phi+D}\,DB_t$$
$$\sigma^2(x) = \sigma^2\int_{\tau=0}^{\infty}e^{-2\phi\tau}\,d\tau = \frac{\sigma^2}{2\phi}$$
$$\operatorname{cov}(x_t,x_{t-k}) = \sigma^2\int_{\tau=0}^{\infty}e^{-\phi\tau}e^{-\phi(\tau+k)}\,d\tau = \frac{\sigma^2}{2\phi}\,e^{-\phi k}$$
$$S_x(\omega) = L_b(i\omega)\,L_b(-i\omega) = \frac{\sigma}{\phi+i\omega}\cdot\frac{\sigma}{\phi-i\omega} = \frac{\sigma^2}{\phi^2+\omega^2}.$$
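These AR(1) moment formulas are easy to verify by direct quadrature. A minimal sketch (my own, with assumed parameter values and a simple midpoint rule):

```python
# Verify sigma^2(x) = sigma^2/(2 phi) and cov(x_t, x_{t-k}) = e^{-phi k} sigma^2/(2 phi)
# for the MA kernel b(tau) = sigma * exp(-phi * tau).
import math

phi, sigma = 0.5, 1.3        # arbitrary illustrative values
dt, T = 1e-3, 40.0           # quadrature step and truncation point
taus = [(i + 0.5) * dt for i in range(int(T / dt))]

def b(tau):
    return sigma * math.exp(-phi * tau)

var = sum(b(tau) ** 2 for tau in taus) * dt               # integral of b(tau)^2
cov2 = sum(b(tau) * b(tau + 2.0) for tau in taus) * dt    # lag k = 2

print(var, sigma ** 2 / (2 * phi))
print(cov2, math.exp(-phi * 2.0) * sigma ** 2 / (2 * phi))
```

The quadrature reproduces the closed forms up to the discretization and truncation error of the integral.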
4.1 Levels and differences

In discrete time, you usually choose to work with levels $x_t$ or differences $\Delta x_t$ depending on which is stationary. In continuous time, we often work with differences even though the series is stationary in levels, as we wrote the AR(1) as a model for differences, $dx_t = -\phi x_t\,dt + \sigma\,dB_t$. This fact accounts for the major difference between the look of continuous- and discrete-time formulas, especially in the autoregressive representation. Expressions such as $\int_{\tau=0}^{\infty}a(\tau)\,x_{t-\tau}\,d\tau = dB_t/dt$ don't make much sense (or require such wild $a(\tau)$ functions as to make them unusable).
4.1.1 Levels to differences in discrete time

First-differencing is simple in discrete time. We have
$$x_t = Z_b(L)\,\varepsilon_t.$$
We're looking for $Z_c(L)$ in
$$(1-L)\,x_t = Z_c(L)\,\varepsilon_t.$$
The answer is pretty obvious,
$$Z_c(L) = (1-L)\,Z_b(L). \quad (6)$$
We can also easily find the form of $Z_c(L)$,
$$x_t - x_{t-1} = b_0\,\varepsilon_t + \sum_{j=1}^{\infty}(b_j-b_{j-1})\,\varepsilon_{t-j} \quad (7)$$
or, suggestively,
$$c_0 = b_0; \qquad c_j = b_j - b_{j-1}.$$
So we can write
$$(1-L)\,x_t = \left[b_0 + \sum_{j=1}^{\infty}(b_j-b_{j-1})\,L^j\right]\varepsilon_t. \quad (8)$$
We can also find the impact multiplier $b_0$ from the operator function,
$$b_0 = Z_b(0).$$
This important quantity gives the response $(E_t-E_{t-1})\,x_t$ to a shock $\varepsilon_t$. In discrete time, this is also the response $(E_t-E_{t-1})(x_t-x_{t-1})$ to the shock $\varepsilon_t$.
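A tiny numerical illustration of the differencing rule $c_0 = b_0$, $c_j = b_j - b_{j-1}$ (my own sketch; the short MA is arbitrary, and because it is finite a trailing coefficient $-b_3$ closes the differenced filter):

```python
# Filtering eps with (b_j) and then first-differencing equals filtering with
# the differenced coefficients (c_j).
import random

random.seed(2)
b = [1.0, 0.8, 0.5, 0.2]                               # a short MA in levels
c = [b[0]] + [b[j] - b[j - 1] for j in range(1, len(b))] + [-b[-1]]
eps = [random.gauss(0.0, 1.0) for _ in range(50)]

def filt(w, t):
    return sum(w[j] * eps[t - j] for j in range(len(w)) if t - j >= 0)

errs = [abs((filt(b, t) - filt(b, t - 1)) - filt(c, t)) for t in range(1, 50)]
print(max(errs))
```

The difference of the level-filtered series and the series filtered with $(c_j)$ coincide exactly, term by term.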
4.1.2 Levels to differences in continuous time

Start with the moving average representation for levels,
$$x_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau}. \quad (9)$$
The same process in differences is
$$dx_t = \left[\int_{\tau=0}^{\infty}b'(\tau)\,dB_{t-\tau}\right]dt + b(0)\,dB_t. \quad (10)$$
This is the analogue to (7). This expression gives drift and diffusion terms. It gives us an impulse-response function or moving average representation for $dx_t$. And it shows instantly that $b(0)$ is the impact multiplier, and the meaning of that term; $b(0)$ tells us how a shock $dB_t$ affects $dx_t$.

The operator statement of this transformation is that
$$x_t = L_b(D)\,DB_t \quad (11)$$
implies
$$Dx_t = \left[b(0) + L_{b'}(D)\right]DB_t. \quad (12)$$
To derive (12) we use a handy property of Laplace transforms:
$$D\,L_b(D) = b(0) + L_{b'}(D). \quad (13)$$
This property follows by integrating by parts:
$$L_{b'}(D) = \int_{\tau=0}^{\infty}e^{-D\tau}\,\frac{db(\tau)}{d\tau}\,d\tau = \left[b(\tau)\,e^{-D\tau}\right]_{\tau=0}^{\infty} + \int_{\tau=0}^{\infty}D\,e^{-D\tau}\,b(\tau)\,d\tau = -b(0) + D\,L_b(D).$$

If it helps intuition, we can also obtain (10) by brute force:
$$x_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau}$$
$$x_{t+\Delta}-x_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t+\Delta-\tau} - \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau} = \int_{\tau=0}^{\Delta}b(\tau)\,dB_{t+\Delta-\tau} + \int_{\tau=0}^{\infty}\left[b(\tau+\Delta)-b(\tau)\right]dB_{t-\tau}$$
$$\approx b(0)\left(B_{t+\Delta}-B_t\right) + \Delta\int_{\tau=0}^{\infty}\frac{db(\tau)}{d\tau}\,dB_{t-\tau}$$
$$dx_t = \left[\int_{\tau=0}^{\infty}\frac{db(\tau)}{d\tau}\,dB_{t-\tau}\right]dt + b(0)\,dB_t.$$
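Property (13) is also easy to check numerically. A sketch (my own; the test function $b(\tau) = e^{-\phi\tau}$ and the quadrature settings are assumptions), where both sides of the identity should equal $D/(D+\phi)$:

```python
# Check the integration-by-parts identity D L_b(D) = b(0) + L_{b'}(D)
# for b(tau) = exp(-phi tau), by midpoint quadrature.
import math

phi, dt, T = 0.7, 1e-3, 40.0
grid = [(i + 0.5) * dt for i in range(int(T / dt))]

def laplace(f, s):
    return sum(math.exp(-s * tau) * f(tau) for tau in grid) * dt

b = lambda tau: math.exp(-phi * tau)
db = lambda tau: -phi * math.exp(-phi * tau)    # b'(tau)

rows = []
for s in (0.5, 1.0, 2.0):
    lhs = s * laplace(b, s)
    rhs = b(0.0) + laplace(db, s)
    rows.append((s, lhs, rhs))
    print(s, lhs, rhs)
```

Each row agrees to quadrature accuracy, and both sides match the closed form $s/(s+\phi)$.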
4.2 Finding an integral representation; unit roots and random walks

The converse operation: suppose you have a differential representation,
$$dx_t = \left[\int_{\tau=0}^{\infty}a(\tau)\,dB_{t-\tau}\right]dt + \sigma\,dB_t;$$
how do you find the integral representation
$$x_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau}\,?$$

4.2.1 Discrete time

We want to get from
$$(1-L)\,x_t = Z_a(L)\,\varepsilon_t$$
to
$$x_t = Z_b(L)\,\varepsilon_t.$$
Lag operator notation suggests
$$Z_b(L) = \frac{Z_a(L)}{1-L} = a_0 + (a_0+a_1)\,L + (a_0+a_1+a_2)\,L^2 + \ldots$$
This operation only produces a stationary process if $\sum_{j=0}^{\infty}a_j = Z_a(1) = 0$. That need not be the case. In general a process $(1-L)x_t = Z_a(L)\varepsilon_t$ is stationary in differences but not in levels.

A convenient way to handle the possibility is to decompose $x_t$ into stationary and random walk components via the Beveridge-Nelson decomposition,
$$x_t = z_t - w_t$$
$$(1-L)\,z_t = Z_a(1)\,\varepsilon_t$$
$$w_t = Z_b(L)\,\varepsilon_t$$
where
$$Z_b(L) = \sum_{j=0}^{\infty}\left[\sum_{k=j+1}^{\infty}a_k\right]L^j; \quad \text{i.e. } b_j = \sum_{k=j+1}^{\infty}a_k.$$
$z_t$ is a pure random walk, while $w_t$ is stationary in levels.

Equivalently, we express the given moving average representation as
$$Z_a(L) = Z_a(1) - (1-L)\,Z_b(L),$$
and then integrating is straightforward.

Now, if $Z_a(1) = 0$, we can find an integral representation for a stationary $x_t$. If not, there is an interesting stationary + random walk representation, and we get as much of a level-stationary component as possible.

Thus, finding a level representation is nearly the opposite "summing" of the differencing operation used to find a differential representation. Unsurprisingly, we have to worry about a "constant of integration."

The Beveridge-Nelson trend has the property
$$z_t = \lim_{j\to\infty}E_t\,x_{t+j}.$$
This property can be used to derive the decomposition:
$$z_t = x_t + \lim_{k\to\infty}\sum_{j=1}^{k}E_t\,\Delta x_{t+j} = x_t + E_t\,\Delta x_{t+1} + E_t\,\Delta x_{t+2} + \ldots$$
$$= x_t + \sum_{j=1}^{\infty}a_j\,\varepsilon_{t+1-j} + \sum_{j=2}^{\infty}a_j\,\varepsilon_{t+2-j} + \sum_{j=3}^{\infty}a_j\,\varepsilon_{t+3-j} + \ldots$$
$$= x_t + \left[\sum_{j=1}^{\infty}a_j\right]\varepsilon_t + \left[\sum_{j=2}^{\infty}a_j\right]\varepsilon_{t-1} + \ldots$$
We can also use the Hansen-Sargent prediction formula (17) with $\beta = 1$ to obtain this result without doing big sums:
$$E_t\sum_{j=1}^{\infty}\Delta x_{t+j} = \frac{Z_a(L)-Z_a(1)}{L-1}\,\varepsilon_t.$$
Define
$$w_t = \left[\sum_{j=1}^{\infty}a_j\right]\varepsilon_t + \left[\sum_{j=2}^{\infty}a_j\right]\varepsilon_{t-1} + \ldots = \sum_{j=0}^{\infty}b_j\,\varepsilon_{t-j}.$$
You can see that if the $\{a_j\}$ are well behaved, $\sum_{j=0}^{\infty}a_j^2 < \infty$, then so are the $\{b_j\}$, so $w_t$ is a stationary process.

So now we have defined
$$z_t = x_t + w_t$$
and $w_t$ is stationary. It remains to characterize $z_t$. Note that
$$(1-L)\,w_t = \left[\sum_{j=1}^{\infty}a_j\right]\varepsilon_t + \left[\sum_{j=2}^{\infty}a_j\right]\varepsilon_{t-1} + \ldots - \left[\sum_{j=1}^{\infty}a_j\right]\varepsilon_{t-1} - \left[\sum_{j=2}^{\infty}a_j\right]\varepsilon_{t-2} - \ldots$$
$$= \left[\sum_{j=1}^{\infty}a_j\right]\varepsilon_t - a_1\,\varepsilon_{t-1} - a_2\,\varepsilon_{t-2} - \ldots$$
$$= \sum_{j=0}^{\infty}a_j\,\varepsilon_t - \sum_{j=0}^{\infty}a_j\,\varepsilon_{t-j} = Z_a(1)\,\varepsilon_t - (1-L)\,x_t.$$
Examine the first difference:
$$(1-L)\,z_t = (1-L)\,x_t + (1-L)\,w_t = \left[\sum_{j=0}^{\infty}a_j\right]\varepsilon_t = Z_a(1)\,\varepsilon_t.$$
$z_t$ is a pure random walk.
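Here is a small numerical illustration (my own, with an assumed MA(1) difference process) of the decomposition:

```python
# Beveridge-Nelson decomposition for (1-L)x_t = eps_t + a1*eps_{t-1}:
# the trend z_t = lim_j E_t x_{t+j} = x_t + a1*eps_t is a random walk whose
# innovation is Z_a(1)*eps_t = (1 + a1)*eps_t.
import random

random.seed(1)
a1, T = 0.5, 2000
eps = [random.gauss(0.0, 1.0) for _ in range(T)]

x = [0.0] * T
for t in range(1, T):
    x[t] = x[t - 1] + eps[t] + a1 * eps[t - 1]

z = [x[t] + a1 * eps[t] for t in range(T)]          # BN trend z_t = x_t + w_t
dz = [z[t] - z[t - 1] for t in range(1, T)]
rw = [(1 + a1) * eps[t] for t in range(1, T)]       # Z_a(1) * eps_t

max_err = max(abs(d - r) for d, r in zip(dz, rw))
print(max_err)
```

For the MA(1), $w_t = a_1\varepsilon_t$ because only the first tail sum $\sum_{k\geq 1}a_k$ is nonzero, so the algebra can be checked exactly on a simulated path.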
4.2.2 Continuous time

Our objective is to start with a specification for differences,
$$dx_t = \left[\int_{\tau=0}^{\infty}a(\tau)\,dB_{t-\tau}\right]dt + \sigma\,dB_t \quad (14)$$
$$Dx_t = \left[L_a(D)+\sigma\right]DB_t,$$
and obtain the corresponding specification for levels,
$$x_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau} = L_b(D)\,DB_t.$$
The fly in the ointment, as in discrete time, is that the process may not be stationary in levels, $x_t$, so the latter integral doesn't make sense. As an extreme example, if you started with
$$dx_t = \sigma\,dB_t$$
you can't invert that to
$$x_t = \sigma\int_{\tau=0}^{\infty}dB_{t-\tau}$$
because the latter integral doesn't converge. We usually leave it as
$$x_t = x_0 + \sigma\int_{\tau=0}^{t}dB_{t-\tau}.$$
As in this example, one can still get the right answers by ignoring the nonstationarity, and thinking of a process that starts at time 0 with $dB_t = 0$ for all $t < 0$. (Hansen and Sargent (1993), last paragraph.)

Alternatively, we can handle this as in discrete time with the continuous-time Beveridge-Nelson decomposition. From (14), write
$$x_t = z_t - w_t$$
where $z_t$ is a pure Brownian motion,
$$dz_t = \left[\sigma+\int_{s=0}^{\infty}a(s)\,ds\right]dB_t \qquad Dz_t = \left[\sigma+L_a(0)\right]DB_t,$$
and $w_t$ is stationary,
$$w_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau} = L_b(D)\,DB_t; \qquad b(\tau) = \int_{s=0}^{\infty}a(\tau+s)\,ds.$$
Equivalently, we can express this result as a decomposition of the original operator,
$$L_a(D)+\sigma = \left[\sigma+L_a(0)\right] - D\,L_b(D).$$
This integration is almost exactly the opposite of the differentiation I used to find differences from levels in (10). As usual, there is a "constant of integration" to worry about.

Mirroring the Beveridge-Nelson construction, define
$$z_t = \lim_{T\to\infty}E_t\,x_{t+T} = x_t + \int_{s=0}^{\infty}E_t\,dx_{t+s} = x_t + \int_{s=0}^{\infty}\left[\int_{\tau=0}^{\infty}a(\tau+s)\,dB_{t-\tau}\right]ds = x_t + \int_{\tau=0}^{\infty}\left[\int_{s=0}^{\infty}a(\tau+s)\,ds\right]dB_{t-\tau}$$
or,
$$z_t = x_t + w_t$$
where
$$w_t \equiv \int_{\tau=0}^{\infty}\left[\int_{s=0}^{\infty}a(\tau+s)\,ds\right]dB_{t-\tau}.$$
Now, we find the innovations to $z_t$,
$$dz_t = dx_t + dw_t.$$
To evaluate the latter term, apply (10) to $w_t$:
$$dw_t = \left[\int_{\tau=0}^{\infty}\frac{d}{d\tau}\left(\int_{s=0}^{\infty}a(\tau+s)\,ds\right)dB_{t-\tau}\right]dt + \left[\int_{s=0}^{\infty}a(s)\,ds\right]dB_t$$
$$dw_t = -\left[\int_{\tau=0}^{\infty}a(\tau)\,dB_{t-\tau}\right]dt + \left[\int_{s=0}^{\infty}a(s)\,ds\right]dB_t.$$
Hence,
$$dz_t = \left[\int_{\tau=0}^{\infty}a(\tau)\,dB_{t-\tau}\right]dt + \sigma\,dB_t - \left[\int_{\tau=0}^{\infty}a(\tau)\,dB_{t-\tau}\right]dt + \left[\int_{s=0}^{\infty}a(s)\,ds\right]dB_t$$
$$dz_t = \left[\sigma+\int_{s=0}^{\infty}a(s)\,ds\right]dB_t.$$
We could also use the Hansen-Sargent formula (21),
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,dx_{t+\tau} = \frac{L_a(D)-L_a(r)}{r-D}\,DB_t,$$
with $r = 0$, to arrive at
$$E_t\int_{\tau=0}^{\infty}dx_{t+\tau} = \frac{L_a(0)-L_a(D)}{D}\,DB_t,$$
which also leads to
$$w_t = \int_{\tau=0}^{\infty}\left[\int_{s=0}^{\infty}a(\tau+s)\,ds\right]dB_{t-\tau}.$$
5 Impulse-response function

The discrete-time moving average representation is convenient because it is the impulse-response function. In
$$x_t = \sum_{j=0}^{\infty}b_j\,\varepsilon_{t-j} = Z_b(L)\,\varepsilon_t,$$
the terms $b_j$ measure the response of $x_{t+j}$ to a shock $\varepsilon_t$,
$$(E_t-E_{t-1})\,x_{t+j} = b_j\,\varepsilon_t.$$
In particular,
$$b_0 = Z_b(0)$$
is the impact multiplier, $x_t - E_{t-1}x_t = b_0\,\varepsilon_t$;
$$Z_b(1) = \sum_{j=0}^{\infty}b_j$$
gives the cumulative response, the response of $\sum_{j=0}^{\infty}x_{t+j}$ to a shock; and
$$b_\infty = \lim_{j\to\infty}b_j$$
gives the final response, which needs to be zero for a stationary process.

In continuous time,
$$x_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau};$$
$b(\tau)$ again gives an impulse-response function, namely how expectations at $t$ about $x_{t+\tau}$ are affected by the shock $dB_t$. The concept $\lim_{\Delta\to 0}(E_{t+\Delta}-E_t)$ is not precise, but the "response to shock" interpretation is fairly clear.

A more precise sense of $\lim_{\Delta\to 0}(E_{t+\Delta}-E_t)\,x_{t+k}$ can be obtained by defining
$$y_t = E_t(x_{t+k}) = \int_{\tau=0}^{\infty}b(\tau+k)\,dB_{t-\tau}.$$
Then, following the same logic as in (10),
$$dy_t = b(k)\,dB_t + \left[\int_{\tau=0}^{\infty}b'(\tau+k)\,dB_{t-\tau}\right]dt.$$
Here you see directly what it means to say that $b(k)$ is the shock to today's expectations of $x_{t+k}$.

Transforming to differences, (10) also gives a better sense of $b(0)$ as the impact response of $x_t$ to a shock,
$$dx_t = \left[\int_{\tau=0}^{\infty}b'(\tau)\,dB_{t-\tau}\right]dt + b(0)\,dB_t;$$
as in discrete time,
$$x_{t+1}-x_t = b_0\,\varepsilon_{t+1} + \sum_{j=0}^{\infty}(b_{j+1}-b_j)\,\varepsilon_{t-j},$$
the innovations in $x_t$ and $dx_t$ (i.e. in $x_{t+1}$ and $\Delta x_{t+1}$) are the same.

We can recover the impact multiplier from the lag operator function from
$$b(0) = \lim_{D\to\infty}D\,L_b(D). \quad (15)$$
This is the analogue to (8), $b_0 = Z_b(0)$, in discrete time. (This is the "initial value theorem" of Laplace transforms.) To derive this formula, take the limit of both sides of (13),
$$D\,L_b(D) = b(0) + L_{b'}(D),$$
and note that
$$\lim_{D\to\infty}L_{b'}(D) = \lim_{D\to\infty}\int_{\tau=0}^{\infty}e^{-D\tau}\,b'(\tau)\,d\tau = 0.$$
Informally, for very large $D$, the function $De^{-D\tau}$ drops off so quickly that only $b(0)$ survives,
$$\lim_{D\to\infty}D\,L_b(D) \approx \lim_{D\to\infty}b(0)\int_{\tau=0}^{\infty}De^{-D\tau}\,d\tau = b(0)\left[-e^{-D\tau}\right]_{\tau=0}^{\infty} = b(0).$$
One requirement for stationarity is that the moving average terms tail off,
$$\lim_{\tau\to\infty}b(\tau) = 0.$$
(Actually we need $\int_{\tau=0}^{\infty}b^2(\tau)\,d\tau < \infty$, which is stronger.) The "final value theorem" states
$$b(\infty) = \lim_{D\to 0}D\,L_b(D).$$
To see this result, simply examine
$$\int_{\tau=0}^{\infty}D\,e^{-D\tau}\,b(\tau)\,d\tau.$$
We also want the equivalent of $Z_b(1)$ in discrete time,
$$L_b(0) = \int_{\tau=0}^{\infty}b(\tau)\,d\tau.$$
In continuous time, the specification for differences and levels is not quite the same (in discrete time one just puts $\Delta x_t = \sum c_j\varepsilon_{t-j}$ in place of $x_t = \sum b_j\varepsilon_{t-j}$), so it's worth writing down the results for
$$dx_t = \left[\int_{\tau=0}^{\infty}a(\tau)\,dB_{t-\tau}\right]dt + \sigma\,dB_t \qquad Dx_t = \left[L_a(D)+\sigma\right]DB_t.$$
In this case, the impact multiplier is $\sigma$, and the cumulative multiplier is, from the Beveridge-Nelson decomposition,
$$\sigma + \int_{s=0}^{\infty}a(s)\,ds = \sigma + L_a(0).$$
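The initial- and final-value theorems can be sanity-checked with the exponential kernel used throughout. A sketch (my own; $\phi$ is an assumed value) for $b(\tau) = e^{-\phi\tau}$, whose transform is $L_b(D) = 1/(D+\phi)$:

```python
# D L_b(D) -> b(0) = 1 as D -> infinity (initial value theorem), and
# D L_b(D) -> b(infinity) = 0 as D -> 0 (final value theorem).
phi = 0.7
Lb = lambda D: 1.0 / (D + phi)
init_val = 1e6 * Lb(1e6)      # approximates b(0) = 1
final_val = 1e-6 * Lb(1e-6)   # approximates b(infinity) = 0
print(init_val, final_val)
```

Evaluating $D\,L_b(D)$ at a very large and a very small $D$ reproduces the two endpoint values of the kernel.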
6 Hansen-Sargent formulas

Here is one great use of the operator notation, and the application that drove me to figure all this out and write it up. Given a process $x_t$, how do you calculate
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau\,?$$
This is an operation we run into again and again in asset pricing.

6.1 Discrete time

Hansen and Sargent (1980) gave an elegant answer to this question in discrete time. You want to calculate $E_t\sum_{j=0}^{\infty}\beta^j x_{t+j}$. You are given a moving average representation,
$$x_t = Z_b(L)\,\varepsilon_t.$$
(Here and below, $\varepsilon_t$ can be a vector of shocks, which considerably generalizes the range of processes you can write down.) The answer is that the moving average representation of the expected discounted sum is
$$E_t\sum_{j=0}^{\infty}\beta^j x_{t+j} = \frac{L\,Z_b(L)-\beta\,Z_b(\beta)}{L-\beta}\,\varepsilon_t = \left(\frac{Z_b(L)-\beta L^{-1}\,Z_b(\beta)}{1-\beta L^{-1}}\right)\varepsilon_t. \quad (16)$$
Hansen and Sargent give the first form. The second form is a bit less pretty but shows a bit more clearly what you're doing. $Z_b(L)\,\varepsilon_t$ is just $x_t$. $\frac{1}{1-\beta L^{-1}} = \sum_{j=0}^{\infty}\beta^j L^{-j}$ takes the forward sum, so $\frac{1}{1-\beta L^{-1}}\,Z_b(L)\,\varepsilon_t$ is the actual, ex-post value whose expectation we seek. But that expression would leave you many terms in $\varepsilon_{t+j}$. The second term,
$$\frac{\beta L^{-1}}{1-\beta L^{-1}}\,Z_b(\beta),$$
subtracts off all the $\varepsilon_{t+j}$ terms, leaving only $\varepsilon_{t-j}$ terms, which thus is the conditional expectation. (Shown below.)
6.1.1 AR(1) example

We start with
$$x_t = Z_b(L)\,\varepsilon_t = (1-\rho L)^{-1}\,\varepsilon_t.$$
Then $E_t\sum_{j=0}^{\infty}\beta^j x_{t+j}$ follows
$$E_t\sum_{j=0}^{\infty}\beta^j x_{t+j} = \frac{L\,\frac{1}{1-\rho L}-\beta\,\frac{1}{1-\rho\beta}}{L-\beta}\,\varepsilon_t = \frac{1}{1-\rho\beta}\,\frac{1}{1-\rho L}\,\varepsilon_t = \frac{1}{1-\rho\beta}\sum_{j=0}^{\infty}\rho^j\,\varepsilon_{t-j} = \frac{1}{1-\rho\beta}\,x_t.$$
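A small check of this AR(1) result (my own sketch; the numbers are arbitrary): since $E_t x_{t+j} = \rho^j x_t$, the discounted sum can be computed directly and compared with $x_t/(1-\rho\beta)$.

```python
# Hansen-Sargent AR(1) result: E_t sum_j beta^j x_{t+j} = x_t / (1 - rho*beta).
rho, beta, x_t = 0.9, 0.95, 2.0
lhs = sum((beta * rho) ** j * x_t for j in range(10_000))   # truncated sum
rhs = x_t / (1 - rho * beta)
print(lhs, rhs)
```

The truncated geometric sum and the closed form agree to floating-point accuracy.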
6.1.2 A prettier formula and the impact multiplier

The formula is even prettier if we start one period ahead, as often happens in finance:
$$E_t\sum_{j=1}^{\infty}\beta^{j-1}x_{t+j} = \frac{Z_b(L)-Z_b(\beta)}{L-\beta}\,\varepsilon_t. \quad (17)$$
Just subtract $x_t = Z_b(L)\,\varepsilon_t$ from (16) and divide by $\beta$. This version turns out to look exactly like the continuous-time formula below.

We often want the impact multiplier: how much does a price react to a shock? If
$$y_t = Z_c(L)\,\varepsilon_t,$$
then the impact multiplier is
$$(E_t-E_{t-1})\,y_t = Z_c(0)\,\varepsilon_t = c_0\,\varepsilon_t.$$
Applying that idea to the Hansen-Sargent formula (16),
$$(E_t-E_{t-1})\sum_{j=0}^{\infty}\beta^j x_{t+j} = Z_b(\beta)\,\varepsilon_t. \quad (18)$$
This formula is particularly lovely because you don't have to construct, factor, or invert any lag polynomials. Suppose you start with an autoregressive representation
$$Z_a(L)\,x_t = \varepsilon_t.$$
Then, you can immediately find
$$(E_t-E_{t-1})\sum_{j=0}^{\infty}\beta^j x_{t+j} = \left[Z_a(\beta)\right]^{-1}\varepsilon_t\,!$$
6.1.3 Derivation

To keep this note self-contained, here is a verification of the Hansen-Sargent formula. I simply write out $\sum_{j=0}^{\infty}\beta^j x_{t+j}$, apply the "annihilation operator" to take conditional expectations by eliminating all $\varepsilon_{t+j}$, and then verify that Hansen and Sargent's operator yields the same expression.
$$\frac{Z_b(L)}{1-\beta L^{-1}}\,\varepsilon_t = \sum_{j=0}^{\infty}\beta^j x_{t+j} =$$
$$\phantom{+\beta^3 b_0\,\varepsilon_{t+3}} + b_0\,\varepsilon_t + b_1\,\varepsilon_{t-1} + b_2\,\varepsilon_{t-2} + b_3\,\varepsilon_{t-3} + \ldots$$
$$\phantom{+\beta^3 b_0\,\varepsilon_{t+3}} + \beta b_0\,\varepsilon_{t+1} + \beta b_1\,\varepsilon_t + \beta b_2\,\varepsilon_{t-1} + \beta b_3\,\varepsilon_{t-2} + \ldots$$
$$\phantom{+\beta^3 b_0\,\varepsilon_{t+3}} + \beta^2 b_0\,\varepsilon_{t+2} + \beta^2 b_1\,\varepsilon_{t+1} + \beta^2 b_2\,\varepsilon_t + \beta^2 b_3\,\varepsilon_{t-1} + \ldots$$
$$+ \beta^3 b_0\,\varepsilon_{t+3} + \beta^3 b_1\,\varepsilon_{t+2} + \beta^3 b_2\,\varepsilon_{t+1} + \ldots$$
Collecting terms,
$$= \ldots + \beta^3 Z_b(\beta)\,\varepsilon_{t+3} + \beta^2 Z_b(\beta)\,\varepsilon_{t+2} + \beta Z_b(\beta)\,\varepsilon_{t+1} + Z_b(\beta)\,\varepsilon_t + \left(b_1+\beta b_2+\beta^2 b_3+\ldots\right)\varepsilon_{t-1} + \ldots$$
Now,
$$\frac{\beta L^{-1}}{1-\beta L^{-1}}\,Z_b(\beta) = \left(\beta L^{-1}+\beta^2 L^{-2}+\beta^3 L^{-3}+\ldots\right)Z_b(\beta)$$
$$= \ldots + \beta^3 Z_b(\beta)\,\varepsilon_{t+3} + \beta^2 Z_b(\beta)\,\varepsilon_{t+2} + \beta Z_b(\beta)\,\varepsilon_{t+1}$$
neatly subtracts off all the forward terms $\varepsilon_{t+1}$, $\varepsilon_{t+2}$, etc.
6.2 Continuous time

Now, let's translate this idea to continuous time. Hansen and Sargent (1991) show that if we express a process in moving-average form,
$$x_t = \int_{\tau=0}^{\infty}b(\tau)\,dB_{t-\tau} = L_b(D)\,DB_t,$$
then we can find the moving average representation of the expected value by
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau = \frac{L_b(D)-L_b(r)}{r-D}\,DB_t. \quad (19)$$
You can see that the formula is almost exactly the same as (17).

The pieces work as in discrete time. The operator
$$\frac{1}{r-D} = \int_{\tau=0}^{\infty}e^{-r\tau}e^{+D\tau}\,d\tau$$
takes the discounted forward integral; subtracting off $L_b(r)$ removes all the terms by which the discounted sum depends on future realizations of $dB_{t+\tau}$, leaving an expression that only depends on the past and hence is the conditional expectation.

Let's do the AR(1) example in continuous time.
$$(D+\phi)\,x_t = \frac{1}{dt}\,dB_t$$
$$x_t = \frac{1}{D+\phi}\,\frac{1}{dt}\,dB_t$$
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}x_{t+\tau}\,d\tau = \frac{1}{r-D}\left[\frac{1}{D+\phi}-\frac{1}{r+\phi}\right]\frac{1}{dt}\,dB_t$$
$$= \frac{1}{r-D}\,\frac{r-D}{(D+\phi)(r+\phi)}\,\frac{1}{dt}\,dB_t$$
$$= \frac{1}{r+\phi}\,\frac{1}{D+\phi}\,\frac{1}{dt}\,dB_t = \frac{1}{r+\phi}\int_{\tau=0}^{\infty}e^{-\phi\tau}\,dB_{t-\tau} = \frac{1}{r+\phi}\,x_t.$$
So we recover the same result as in discrete time.
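A numeric check of this continuous-time AR(1) answer (my own sketch; the parameter values are assumptions): since $E_t\,x_{t+\tau} = e^{-\phi\tau}x_t$, the discounted integral can be evaluated by quadrature and compared with $x_t/(r+\phi)$.

```python
# E_t int_0^inf e^{-r tau} x_{t+tau} dtau = x_t / (r + phi) for the
# continuous-time AR(1), checked by midpoint quadrature.
import math

r, phi, x_t = 0.05, 0.8, 1.7     # arbitrary illustrative values
dt, T = 1e-3, 60.0               # quadrature step, truncation point
lhs = sum(math.exp(-(r + phi) * (k + 0.5) * dt) * x_t
          for k in range(int(T / dt))) * dt
rhs = x_t / (r + phi)
print(lhs, rhs)
```

The quadrature and the operator formula agree to the accuracy of the discretized integral.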
The impact or $(E_t-E_{t-1})$ counterpart to (18) is found as we found impact multipliers above. Start with
$$y_t = E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau = \frac{L_b(D)-L_b(r)}{r-D}\,DB_t.$$
The impact multiplier is
$$\lim_{D\to\infty}D\,\frac{L_b(D)-L_b(r)}{r-D} = L_b(r). \quad (20)$$
$D\,L_b(D)$ approaches a finite limit, so the first term is zero. Thus,
$$\lim_{\Delta\to 0}(E_{t+\Delta}-E_t)\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau = L_b(r)\,dB_t$$
or, more precisely,
$$dy_t = (\cdot)\,dt + L_b(r)\,dB_t.$$
As in discrete time, this is a lovely formula because one may be able to find $L_b(r)$ (say, from an autoregressive representation) without knowing the whole $L_b(D)$ function.

In continuous time we have to adapt a little more to levels and first differences. Suppose we start instead with
$$dx_t = \left[\int_{\tau=0}^{\infty}a(\tau)\,dB_{t-\tau}\right]dt + \sigma\,dB_t \qquad Dx_t = \left[\sigma+L_a(D)\right]DB_t.$$
Then, since $\sigma$ cancels from the larger operator $\left[\sigma+L_a(D)\right]$, (19) suggests again
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,dx_{t+\tau} = \frac{L_a(D)-L_a(r)}{r-D}\,DB_t. \quad (21)$$
Checking that this is correct: writing out
$$\int_{\tau=0}^{\infty}e^{-r\tau}\,dx_{t+\tau} = \int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{s=0}^{\infty}a(s)\,dB_{t+\tau-s}\right]d\tau + \int_{\tau=0}^{\infty}e^{-r\tau}\sigma\,dB_{t+\tau},$$
you can see that the last term vanishes in expectation, and we're back at the same expression as before.
6.2.1 Derivation
One can derive the Hansen-Sargent formulas mirroring the approach of discrete time. Express the ex-post forward-looking present value, then notice that the second half of the Hansen-Sargent formula neatly eliminates all the $dz_{t+\tau}$ terms:
$$\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau=\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{s=0}^{\infty}b(s)\,dz_{t+\tau-s}\right]d\tau=\frac{1}{r-D}\,L^b(D)\,Dz_t$$
$$=\int_{\tau=0}^{\infty}\int_{s=0}^{\infty}e^{-r\tau}\,b(s)\,dz_{t+\tau-s}\,d\tau$$
We transform to an integral over $u=\tau-s$ that counts each $dz_u$ once. Given $u=\tau-s<0$, looking at a $dz_{t+u}$ in the past, $\tau\geq0$ means $s\geq-u$. If $u=\tau-s\geq0$, then $s$ starts at 0. With
$$u=\tau-s,\qquad\tau=u+s,$$
$$=\int_{u=-\infty}^{\infty}\left[\int_{s=\max(0,-u)}^{\infty}e^{-ru}\,e^{-rs}\,b(s)\,ds\right]dz_{t+u}$$
$$=\int_{u=0}^{\infty}\left[\int_{s=0}^{\infty}e^{-ru}e^{-rs}\,b(s)\,ds\right]dz_{t+u}+\int_{u=-\infty}^{0}\left[\int_{s=-u}^{\infty}e^{-ru}e^{-rs}\,b(s)\,ds\right]dz_{t+u}$$
$$=\int_{u=0}^{\infty}e^{-ru}\left[\int_{s=0}^{\infty}e^{-rs}\,b(s)\,ds\right]dz_{t+u}+\int_{u=0}^{\infty}e^{ru}\left[\int_{s=u}^{\infty}e^{-rs}\,b(s)\,ds\right]dz_{t-u}$$
$$=\int_{u=0}^{\infty}e^{-ru}\left[\int_{s=0}^{\infty}e^{-rs}\,b(s)\,ds\right]dz_{t+u}+\int_{u=0}^{\infty}e^{ru}\left[\int_{s=0}^{\infty}e^{-ru}\,e^{-rs}\,b(s+u)\,ds\right]dz_{t-u}$$
$$=\left[\int_{s=0}^{\infty}e^{-rs}\,b(s)\,ds\right]\int_{u=0}^{\infty}e^{-ru}\,dz_{t+u}+\int_{u=0}^{\infty}\left[\int_{s=0}^{\infty}e^{-rs}\,b(s+u)\,ds\right]dz_{t-u}$$
Taking expectations, we just lose all the forward-looking $dz_{t+u}$ terms, so the last term is the expected value and we have
$$\frac{L^b(D)}{r-D}\,Dz_t=\frac{L^b(r)}{r-D}\,Dz_t+E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau.$$
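A small numerical check (my own, with arbitrary parameter values) that the operator result matches the direct forecast: for the AR(1), $E_t x_{t+\tau}=e^{-\alpha\tau}x_t$, so the left side integrates to $x_t/(r+\alpha)$, the answer obtained above.

```python
import math

# Check the continuous-time Hansen-Sargent result for the AR(1),
#   E_t int_0^inf e^{-r tau} x_{t+tau} d tau = x_t / (r + alpha),
# using E_t x_{t+tau} = exp(-alpha*tau) x_t and a Riemann sum.
# Parameter values are arbitrary illustrations.
alpha, r, x_t = 0.5, 0.04, 2.0
dt, T = 1e-3, 400.0

pv = sum(math.exp(-(r + alpha) * i * dt) * x_t * dt
         for i in range(int(T / dt)))

assert abs(pv - x_t / (r + alpha)) < 1e-2
```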
7 Autoregressive representations
ARMA models are convenient tractable forms in discrete time. Here, I develop a similar class of simple models in continuous time.
An autoregressive process in continuous time consists of a regression of $dx$ on lagged $dx$,
$$dx_t=\left[\int_{\tau=0}^{\infty}a(\tau)\,dx_{t-\tau}\right]dt+\sigma\,dz_t \tag{22}$$
$$\left[1-L^a(D)\right]Dx_t=\sigma\,Dz_t$$
or, using (13), a regression on lagged $x$ itself,
$$dx_t=\left[a(0)\,x_t+\int_{\tau=0}^{\infty}a'(\tau)\,x_{t-\tau}\,d\tau\right]dt+\sigma\,dz_t \tag{23}$$
$$\left[D-a(0)-L^{a'}(D)\right]x_t=\sigma\,Dz_t$$
This definition is important, for the Wold representation is defined by inverting an autoregression.
7.1 How not to define AR models
You might be tempted to invert
$$x_t=\int_{\tau=0}^{\infty}b(\tau)\,dz_{t-\tau}$$
to something like
$$\int_{\tau=0}^{\infty}a(\tau)\,x_{t-\tau}\,d\tau=dz_t$$
as we do in discrete time, but you can see right away that it won't work. In continuous time, we will have to forecast even stationary time series such as the AR(1) in overdifferenced form.
We have the AR(1) process as an example,
$$dx_t+\alpha\,x_t\,dt=dz_t$$
$$x_t=\int_{\tau=0}^{\infty}e^{-\alpha\tau}\,dz_{t-\tau}$$
$$(D+\alpha)\,x_t=Dz_t$$
$$x_t=\frac{1}{D+\alpha}\,Dz_t$$
You might be tempted to write an AR(2) as follows,
$$(D+\lambda)(D+\mu)\,x_t=\sigma\,\frac{1}{dt}\,dz_t$$
This would not work. Look at the MA representation,
$$\frac{1}{(D+\lambda)}\,\frac{1}{(D+\mu)}=\frac{1}{\mu-\lambda}\left[\frac{1}{D+\lambda}-\frac{1}{D+\mu}\right]$$
$$x_t=\frac{1}{\mu-\lambda}\int_{\tau=0}^{\infty}\left(e^{-\lambda\tau}-e^{-\mu\tau}\right)\sigma\,dz_{t-\tau}$$
$$dx_t=\frac{1}{\mu-\lambda}\,d\left[\int_{\tau=0}^{\infty}\left(e^{-\lambda\tau}-e^{-\mu\tau}\right)\sigma\,dz_{t-\tau}\right]$$
We have $b(0)=0$, so $dx_t$ is nonstochastic: it has no loading on $dz_t$. The $D^2$ operator takes a second derivative.
The problem is that as we go to continuous time, we do not want the second lag to contract towards the first. To derive the continuous-time analogue of the AR(2) we would start with
$$x_t=\rho_1\,x_{t-1}+\rho_2\,x_{t-2}+\varepsilon_t$$
$$x_t-x_{t-1}=-(1-\rho_1)\,x_{t-1}+\rho_2\,x_{t-2}+\varepsilon_t$$
Then, take the limit keeping the second lag fixed,
$$dx_t=\left(-\alpha\,x_t+\alpha_2\,x_{t-k}\right)dt+dz_t$$
$$\left[D+\alpha-\alpha_2\,e^{-kD}\right]x_t=Dz_t$$
We can do that, but the tractability is clearly lost, as inverting this lag operator will not be fun. Thus, as we go to continuous time, the tractability of simple AR or MA models (past the AR(1)) disappears. The finite length of AR or MA lag polynomials is no longer a simplification. Though we can write finite-length processes such as
$$dx_t=\left[\int_{\tau=0}^{k}a(\tau)\,x_{t-\tau}\,d\tau\right]dt+\sigma\,dz_t$$
$$x_t=\int_{\tau=0}^{k}b(\tau)\,dz_{t-\tau}$$
the finiteness $k$ of the AR or MA representation does not lead to easy inversion or manipulation as it does in discrete time.
7.2 Adding AR(1)s to obtain more complex processes
The obvious way to think about forming more complex yet tractable linear processes is to add AR(1)s together,
$$x_t=\left[\frac{\alpha}{D+\lambda_1}+\frac{\beta}{D+\lambda_2}\right]Dz_t=\int_{\tau=0}^{\infty}\left(\alpha\,e^{-\lambda_1\tau}+\beta\,e^{-\lambda_2\tau}\right)dz_{t-\tau} \tag{24}$$
So long as $\alpha\neq-\beta$, this process will still be a diffusion, i.e., $dx_t$ will load on $dz_t$. Sums of exponentials give convenient, though not finite, and flexible moving average representations.
The obvious way to achieve an autoregressive representation is to introduce additional state variables,
$$y_t=\frac{\alpha}{D+\lambda_1}\,Dz_t=\alpha\int_{\tau=0}^{\infty}e^{-\lambda_1\tau}\,dz_{t-\tau}$$
$$w_t=\frac{\beta}{D+\lambda_2}\,Dz_t=\beta\int_{\tau=0}^{\infty}e^{-\lambda_2\tau}\,dz_{t-\tau}$$
$$x_t=y_t+w_t$$
and thus
$$dy_t=-\lambda_1\,y_t\,dt+\alpha\,dz_t$$
$$dw_t=-\lambda_2\,w_t\,dt+\beta\,dz_t$$
$$x_t=y_t+w_t$$
This expression mirrors the way we often write an AR(2) in discrete time by adding $x_{t-1}$ as an additional state variable, and then treating the AR(2) as a vector AR(1).
However, by doing so we disguise the fact that $y_t$ and $w_t$ and the history of $dz_t$ are in the linear span of $\{x_t\}$. Also, it is often convenient to use the autoregressive lag polynomials, for example in the Hansen-Sargent formulas.
For that reason, I work out an autoregressive representation corresponding to (24). The basic idea is straightforward. By partial-fractions-type algebra, any process of the form
$$x_t=\left[\frac{\alpha_1}{D+\lambda_1}+\frac{\alpha_2}{D+\lambda_2}+\frac{\alpha_3}{D+\lambda_3}+\ldots\right]Dz_t$$
is equivalent to a system of the form
$$\left[D+\theta_1-\frac{b_2}{(D+\theta_2)}-\frac{b_3}{(D+\theta_3)}-\ldots\right]x_t=Dz_t.$$
7.2.1 A two-component example
To make the ideas concrete, let's invert
$$x_t=\left[\frac{\alpha}{D+\lambda_1}+\frac{\beta}{D+\lambda_2}\right]Dz_t$$
The answer, after some pleasant algebra, is
$$\left[D+\theta_2-\frac{b}{(D+\theta_1)}\right]x_t=(\alpha+\beta)\,Dz_t$$
where
$$\theta_1=\frac{\alpha\lambda_2+\beta\lambda_1}{\alpha+\beta};\quad\theta_2=\frac{\alpha\lambda_1+\beta\lambda_2}{\alpha+\beta};\quad b=\frac{\alpha\beta}{(\alpha+\beta)^2}\left(\lambda_1-\lambda_2\right)^2.$$
We start by forming the operator polynomial,
$$x_t=\frac{D+\frac{\alpha\lambda_2+\beta\lambda_1}{\alpha+\beta}}{(D+\lambda_1)(D+\lambda_2)}\,(\alpha+\beta)\,Dz_t$$
Denote
$$\theta_1=\frac{\alpha\lambda_2+\beta\lambda_1}{\alpha+\beta}.$$
We invert,
$$\frac{(D+\lambda_1)(D+\lambda_2)}{(D+\theta_1)}\,x_t=(\alpha+\beta)\,Dz_t$$
Now, we re-express the polynomial on the left into the form
$$\left[D+A+\frac{C}{D+\theta_1}\right]x_t=(\alpha+\beta)\,Dz_t$$
This will result in an autoregressive representation
$$dx_t=-\left[A\,x_t+C\int_{\tau=0}^{\infty}e^{-\theta_1\tau}\,x_{t-\tau}\,d\tau\right]dt+(\alpha+\beta)\,dz_t$$
Now lags of $x_t$ are useful in forecasting $dx_t$, just as they are in higher-order AR processes. We just have to find $A$ and $C$:
$$\frac{(D+\lambda_1)(D+\lambda_2)}{(D+\theta_1)}=\frac{D^2+(\lambda_1+\lambda_2)\,D+\lambda_1\lambda_2}{(D+\theta_1)}=D+\frac{(\lambda_1+\lambda_2-\theta_1)\,D+\lambda_1\lambda_2}{(D+\theta_1)}$$
$$=D+(\lambda_1+\lambda_2-\theta_1)+\frac{\lambda_1\lambda_2-(\lambda_1+\lambda_2-\theta_1)\,\theta_1}{(D+\theta_1)}=D+(\lambda_1+\lambda_2-\theta_1)+\frac{(\theta_1-\lambda_2)(\theta_1-\lambda_1)}{(D+\theta_1)}$$
In this case, I can make it a little prettier:
$$\lambda_1+\lambda_2-\theta_1=\lambda_1+\lambda_2-\frac{\alpha\lambda_2+\beta\lambda_1}{\alpha+\beta}=\frac{\beta\lambda_2+\alpha\lambda_1}{\alpha+\beta}=\theta_2$$
$$\theta_1-\lambda_2=\frac{\alpha\lambda_2+\beta\lambda_1}{\alpha+\beta}-\lambda_2=\frac{\beta\left(\lambda_1-\lambda_2\right)}{\alpha+\beta}$$
$$\theta_1-\lambda_1=\frac{\alpha\lambda_2+\beta\lambda_1}{\alpha+\beta}-\lambda_1=\frac{\alpha\left(\lambda_2-\lambda_1\right)}{\alpha+\beta}$$
$$D+\theta_2-\frac{\frac{\alpha\beta}{(\alpha+\beta)^2}\left(\lambda_1-\lambda_2\right)^2}{D+\theta_1}$$
Thus, writing out the result, we have the autoregressive process:
$$x_t=\left[\frac{\alpha}{D+\lambda_1}+\frac{\beta}{D+\lambda_2}\right]Dz_t\;\Longleftrightarrow\;\left[D+\theta_2-\frac{b}{(D+\theta_1)}\right]x_t=(\alpha+\beta)\,Dz_t$$
where
$$\theta_1=\frac{\alpha\lambda_2+\beta\lambda_1}{\alpha+\beta};\quad\theta_2=\frac{\alpha\lambda_1+\beta\lambda_2}{\alpha+\beta};\quad b=\frac{\alpha\beta}{(\alpha+\beta)^2}\left(\lambda_1-\lambda_2\right)^2$$
or,
$$dx_t=-\theta_2\,x_t\,dt+b\left[\int_{\tau=0}^{\infty}e^{-\theta_1\tau}\,x_{t-\tau}\,d\tau\right]dt+(\alpha+\beta)\,dz_t$$
with
$$\theta_1=\frac{\alpha\lambda_2+\beta\lambda_1}{\alpha+\beta};\quad\theta_2=\frac{\alpha\lambda_1+\beta\lambda_2}{\alpha+\beta}$$
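The $\theta_1$, $\theta_2$, $b$ algebra is easy to verify numerically. The following sketch (my own check; the values of $\alpha$, $\beta$, $\lambda_1$, $\lambda_2$ are arbitrary) confirms that the MA and AR operators are reciprocals as rational functions of $D$:

```python
# Verify the two-component inversion: for arbitrary alpha, beta,
# lambda1, lambda2 (illustrative values), the MA operator
#   alpha/(D+l1) + beta/(D+l2)
# should be the reciprocal of the AR operator
#   [D + theta2 - b/(D+theta1)] / (alpha+beta)
# at any test value of D.
a, B, l1, l2 = 1.3, 0.7, 0.2, 1.5
th1 = (a * l2 + B * l1) / (a + B)
th2 = (a * l1 + B * l2) / (a + B)
b = a * B * (l1 - l2) ** 2 / (a + B) ** 2

for D in (0.11, 1.7, 9.3):
    ma = a / (D + l1) + B / (D + l2)
    ar = (D + th2 - b / (D + th1)) / (a + B)
    assert abs(ma * ar - 1.0) < 1e-12
```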
The converse operation, AR to MA: suppose we start with
$$dx_t=\left[-\lambda_1\,x_t+\theta\int_{\tau=0}^{\infty}e^{-\lambda_2\tau}\,x_{t-\tau}\,d\tau\right]dt+dz_t$$
$$\left[D+\lambda_1-\frac{\theta}{D+\lambda_2}\right]x_t=Dz_t$$
$$\frac{(D+\lambda_1)(D+\lambda_2)-\theta}{D+\lambda_2}\,x_t=Dz_t$$
We invert,
$$x_t=\frac{D+\lambda_2}{(D+\lambda_1)(D+\lambda_2)-\theta}\,Dz_t$$
We simply expand, factor, and express the result in partial-fractions form. Defining $\mu$ and $\delta$ by
$$(D+\mu)(D+\delta)=(D+\lambda_1)(D+\lambda_2)-\theta$$
$$0=D^2+(\lambda_1+\lambda_2)\,D+(\lambda_1\lambda_2-\theta)$$
$$\mu,\delta=\frac{(\lambda_1+\lambda_2)\pm\sqrt{(\lambda_1-\lambda_2)^2+4\theta}}{2}$$
$$\mu+\delta=\lambda_1+\lambda_2\;\Rightarrow\;\lambda_2-\mu=\delta-\lambda_1,\quad\lambda_2-\delta=\mu-\lambda_1$$
then we have
$$x_t=\frac{D+\lambda_2}{(D+\mu)(D+\delta)}\,Dz_t$$
$$x_t=\frac{D+\lambda_2}{\mu-\delta}\left[\frac{1}{D+\delta}-\frac{1}{D+\mu}\right]Dz_t$$
$$x_t=\frac{1}{\mu-\delta}\left[\frac{D}{D+\delta}-\frac{D}{D+\mu}+\frac{\lambda_2}{D+\delta}-\frac{\lambda_2}{D+\mu}\right]Dz_t$$
$$x_t=\frac{1}{\mu-\delta}\left[\left(1-\frac{\delta}{D+\delta}\right)-\left(1-\frac{\mu}{D+\mu}\right)+\frac{\lambda_2}{D+\delta}-\frac{\lambda_2}{D+\mu}\right]Dz_t$$
$$x_t=\frac{1}{\mu-\delta}\left[\frac{\lambda_2-\delta}{D+\delta}+\frac{\mu-\lambda_2}{D+\mu}\right]Dz_t$$
or, finally,
$$x_t=\frac{1}{\mu-\delta}\left[\frac{\lambda_2-\delta}{D+\delta}+\frac{\lambda_1-\delta}{D+\mu}\right]Dz_t$$
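This inversion can also be verified numerically. A minimal sketch (my own; $\lambda_1$, $\lambda_2$, $\theta$ are arbitrary illustrations):

```python
import math

# Verify the AR-to-MA inversion: the partial-fractions MA operator
#   (1/(mu-delta)) * [(l2-delta)/(D+delta) + (l1-delta)/(D+mu)]
# should reproduce (D+l2)/((D+l1)(D+l2) - theta) at any D.
l1, l2, th = 0.4, 1.1, 0.15
disc = math.sqrt((l1 - l2) ** 2 + 4 * th)
mu = ((l1 + l2) + disc) / 2
de = ((l1 + l2) - disc) / 2

for D in (0.07, 0.9, 5.0):
    direct = (D + l2) / ((D + l1) * (D + l2) - th)
    partial = ((l2 - de) / (D + de) + (l1 - de) / (D + mu)) / (mu - de)
    assert abs(direct - partial) < 1e-12
```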
8 Tricks
To expand lag polynomials and make any sense out of them, you have to reduce them to known forms like $1/(D+\lambda)$. Here are some tricks.
Trick 1:
$$\frac{D}{D+\lambda}=1-\frac{\lambda}{D+\lambda}$$
How much easier than
$$d\left[\int_{\tau=0}^{\infty}e^{-\lambda\tau}\,x_{t-\tau}\,d\tau\right]=\left[x_t-\lambda\int_{\tau=0}^{\infty}e^{-\lambda\tau}\,x_{t-\tau}\,d\tau\right]dt$$
Similarly,
$$\frac{D+\alpha}{D+\mu}=1+\frac{\alpha-\mu}{D+\mu}$$
Trick 2: Reducing powers
$$\frac{(D+\alpha)(D+\lambda)}{(D+\mu)}=D+\frac{(\alpha+\lambda-\mu)\,D+\alpha\lambda}{(D+\mu)}$$
Trick 3: Partial fractions
$$\frac{1}{(D+\lambda)}\,\frac{1}{(D+\mu)}=\frac{1}{\mu-\lambda}\left[\frac{1}{D+\lambda}-\frac{1}{D+\mu}\right]$$
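The three tricks are easy to confirm numerically. A minimal sketch (my own; all values arbitrary):

```python
# Numerical check of the three operator tricks at arbitrary test
# values of D (illustrative, not from the text).
lam, mu, al = 0.8, 1.9, 0.3

for D in (0.05, 1.2, 7.0):
    # Trick 1
    assert abs(D / (D + lam) - (1 - lam / (D + lam))) < 1e-12
    assert abs((D + al) / (D + mu) - (1 + (al - mu) / (D + mu))) < 1e-12
    # Trick 2: reducing powers
    lhs = (D + al) * (D + lam) / (D + mu)
    rhs = D + ((al + lam - mu) * D + al * lam) / (D + mu)
    assert abs(lhs - rhs) < 1e-12
    # Trick 3: partial fractions
    lhs = 1 / ((D + lam) * (D + mu))
    rhs = (1 / (D + lam) - 1 / (D + mu)) / (mu - lam)
    assert abs(lhs - rhs) < 1e-12
```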
$$\max\;-\frac{1}{2}\,E\int_{\tau=0}^{\infty}e^{-r\tau}\left(c^*-c_{t+\tau}\right)^2d\tau\quad\text{s.t.} \tag{25}$$
$$dk_t=\left(rk_t+y_t-c_t\right)dt \tag{26}$$
$$dy_t=-\rho\,y_t\,dt+\sigma\,dz_t$$
I find the equilibrium consumption process in level and differenced form,
$$c_t=r\,k_t+\frac{r}{r+\rho}\,y_t$$
$$dc_t=\frac{r}{r+\rho}\,\sigma\,dz_t$$
Then, I find the price of the consumption stream,
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,\frac{c^*-c_{t+\tau}}{c^*-c_t}\,c_{t+\tau}\,d\tau.$$
The result is
$$p_t=\frac{c_t}{r}-\frac{1}{c^*-c_t}\,\frac{\sigma^2}{(r+\rho)^2}$$
or equivalently
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau-\frac{1}{c^*-c_t}\,\frac{\sigma^2}{(r+\rho)^2}$$
The first term is the risk-neutral term, as emphasized by its expansion in the last equality. The second term is a risk adjustment. The higher the variance, the lower the price. As $c_t$ approaches the bliss point $c^*$, marginal utility approaches zero and the risk discount explodes.
The price of the endowment stream is similarly
$$q_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,\frac{c^*-c_{t+\tau}}{c^*-c_t}\,y_{t+\tau}\,d\tau=\frac{y_t}{\rho+r}-\frac{1}{c^*-c_t}\,\frac{\sigma^2}{(r+\rho)^2}=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,y_{t+\tau}\,d\tau-\frac{1}{c^*-c_t}\,\frac{\sigma^2}{(r+\rho)^2}$$
9.1.1 Model solution
First, I show that the flow constraint (26) (together with limits on how fast capital can grow) implies the "present value" constraint
$$E_t\int_{0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau=E_t\int_{0}^{\infty}e^{-r\tau}\,y_{t+\tau}\,d\tau+k_t \tag{27}$$
Quotes on "present value" because $E_t\int_{0}^{\infty}e^{-r\tau}c_{t+\tau}\,d\tau$ is not the present value (price) of the risky consumption stream!
To show this, write the flow budget constraint (26) as
$$(D-r)\,k_t=y_t-c_t$$
$$k_t=\frac{1}{(D-r)}\,\left(y_t-c_t\right)$$
writing it out,
$$k_t=-\left[\int_{0}^{\infty}e^{-r\tau}\,y_{t+\tau}\,d\tau-\int_{0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau\right]$$
Applying $E_t$ to both sides, we obtain (27).
Second, since the rate of return $r$ equals the discount rate, the basic asset pricing first-order condition gives
$$u'_t=E_t\,u'_{t+\tau}$$
$$c^*-c_t=E_t\left(c^*-c_{t+\tau}\right)$$
$$c_t=E_t\left(c_{t+\tau}\right)$$
and hence
$$E_t\left(dc_t\right)=0.$$
Thus, we know that $dc_t=0\,dt+(\cdot)\,dz_t$, with the latter loading depending on the resource constraint.
Next, we substitute the first-order condition $c_t=E_t\,c_{t+\tau}$ and the income forecast $E_t\,y_{t+\tau}=e^{-\rho\tau}\,y_t$ into the resource constraint (27) to find the actual consumption process,
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_t\,d\tau=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,e^{-\rho\tau}\,y_t\,d\tau+k_t$$
$$\frac{c_t}{r}=\frac{y_t}{r+\rho}+k_t$$
$$c_t=r\,k_t+\frac{r}{r+\rho}\,y_t$$
This is the familiar permanent-income rule.
To find the random walk in differences, I just take differences,
$$dc_t=r\,dk_t+\frac{r}{r+\rho}\,dy_t$$
$$dc_t=r\left(rk_t+y_t-c_t\right)dt+\frac{r}{r+\rho}\left(-\rho\,y_t\,dt+\sigma\,dz_t\right)$$
$$=r\left[rk_t+y_t-rk_t-\frac{r}{r+\rho}\,y_t\right]dt+\frac{r}{r+\rho}\left(-\rho\,y_t\,dt+\sigma\,dz_t\right)$$
$$dc_t=\frac{r}{r+\rho}\,\sigma\,dz_t$$
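The drift cancellation that delivers this random walk is easy to confirm numerically. A minimal sketch (my own illustration; parameter and state values are arbitrary):

```python
# Check that consumption c = r*k + (r/(r+rho))*y has zero drift under
# dk = (r*k + y - c) dt and dy = -rho*y dt (shock terms set aside).
# Arbitrary illustrative values.
r, rho, k, y = 0.03, 0.5, 10.0, 1.4

c = r * k + r / (r + rho) * y
dk_drift = r * k + y - c
dy_drift = -rho * y
dc_drift = r * dk_drift + r / (r + rho) * dy_drift

assert abs(dc_drift) < 1e-12
```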
The price of the consumption stream:
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,\frac{c^*-c_{t+\tau}}{c^*-c_t}\,c_{t+\tau}\,d\tau$$
By brute force,
$$c_{t+\tau}=c_t+\frac{r}{r+\rho}\,\sigma\int_{s=0}^{\tau}dz_{t+s}$$
$$E_t\,c_{t+\tau}=c_t$$
$$E_t\,c^2_{t+\tau}=c^2_t+\left(\frac{r\sigma}{r+\rho}\right)^2\tau$$
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,\frac{c^*\,c_t-c^2_t-\left(\frac{r\sigma}{r+\rho}\right)^2\tau}{c^*-c_t}\,d\tau$$
$$p_t=\frac{1}{r}\,\frac{c_t\left(c^*-c_t\right)}{c^*-c_t}-\frac{1}{c^*-c_t}\left(\frac{r\sigma}{r+\rho}\right)^2\int_{0}^{\infty}\tau\,e^{-r\tau}\,d\tau$$
Using
$$\int_{0}^{\infty}\tau\,e^{-r\tau}\,d\tau=\left[-\tau\,\frac{e^{-r\tau}}{r}\right]_{0}^{\infty}+\int_{0}^{\infty}\frac{e^{-r\tau}}{r}\,d\tau=\frac{1}{r^2}$$
$$p_t=\frac{c_t}{r}-\frac{1}{c^*-c_t}\,\frac{\sigma^2}{(r+\rho)^2}$$
Similarly, we can find the price of the endowment stream,
$$q_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,\frac{c^*-c_{t+\tau}}{c^*-c_t}\,y_{t+\tau}\,d\tau$$
By brute force,
$$c_{t+\tau}=c_t+\frac{r}{r+\rho}\,\sigma\int_{s=0}^{\tau}dz_{t+s}$$
$$y_{t+\tau}=e^{-\rho\tau}\,y_t+\sigma\int_{s=0}^{\tau}e^{-\rho s}\,dz_{t+\tau-s}$$
$$E_t\,y_{t+\tau}=e^{-\rho\tau}\,y_t$$
$$E_t\left(c_{t+\tau}\,y_{t+\tau}\right)=e^{-\rho\tau}\,c_t\,y_t+\frac{r\sigma^2}{r+\rho}\int_{s=0}^{\tau}e^{-\rho s}\,ds=e^{-\rho\tau}\,c_t\,y_t+\frac{r\sigma^2}{r+\rho}\,\frac{1}{\rho}\left(1-e^{-\rho\tau}\right)$$
$$q_t=\int_{\tau=0}^{\infty}e^{-r\tau}\,\frac{c^*\,e^{-\rho\tau}\,y_t-e^{-\rho\tau}\,c_t\,y_t-\frac{r\sigma^2}{r+\rho}\,\frac{1}{\rho}\left(1-e^{-\rho\tau}\right)}{c^*-c_t}\,d\tau$$
$$q_t=y_t\int_{\tau=0}^{\infty}e^{-(r+\rho)\tau}\,d\tau-\frac{r\sigma^2}{(r+\rho)\,\rho}\,\frac{1}{c^*-c_t}\int_{\tau=0}^{\infty}e^{-r\tau}\left(1-e^{-\rho\tau}\right)d\tau$$
$$q_t=\frac{y_t}{\rho+r}-\frac{r\sigma^2}{(r+\rho)\,\rho}\,\frac{1}{c^*-c_t}\left[\frac{1}{r}-\frac{1}{r+\rho}\right]$$
$$q_t=\frac{y_t}{\rho+r}-\frac{1}{c^*-c_t}\,\frac{\sigma^2}{(r+\rho)^2}$$
9.2 A model with habits or durability
I use a utility function with a temporal nonseparability. The quantity dynamics are solved by Heaton (1993), section 3.1, though I hope the notation here is a bit simpler. The consumer faces a linear capital accumulation technology and an AR(1) income stream. The consumer's problem is
$$\max_{\{c_t\}}\;-\frac{1}{2}\,E\int_{t=0}^{\infty}e^{-rt}\left[c^*-c_t-\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau\right]^2dt\quad\text{s.t.}$$
$$dk_t=\left(rk_t+y_t-c_t\right)dt$$
$$dy_t=-\rho\,y_t\,dt+\sigma\,dz_t$$
We can write the objective in operator form,
$$-\frac{1}{2}\,E\int_{t=0}^{\infty}e^{-rt}\left(c^*-\left[1+L^b(D)\right]c_t\right)^2dt. \tag{28}$$
The present-value form of the resource constraint is, as before,
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau=k_t+E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,y_{t+\tau}\,d\tau=k_t+\frac{y_t}{r+\rho}. \tag{29}$$
We normally think of the nonseparability $b(\tau)$ as generating habit persistence or durability in consumption. If $b(\tau)>0$, then past consumption contributes positively to current utility, as past durable goods purchases do. If $b(\tau)<0$, then past consumption raises current marginal utility, as habit-forming goods do.
We can also use nonseparability for a different purpose: we can regard $x_t$ as shifting the bliss point, which controls risk aversion. A larger $x_t$ means current consumption $c_t$ is closer to the composite bliss point, so the consumer becomes more risk averse. Thus, temporal nonseparability allows us to control risk aversion.
Controlling risk aversion is useful for making the linear-quadratic model vaguely reasonable. One of its biggest problems is that higher consumption makes the consumer more risk averse, whereas in reality we think risk aversion is likely independent of wealth in the long run, and rises in consumption relative to the recent past may make people less risk averse, as in the Campbell-Cochrane model. By moving the bliss point, we can capture both ideas, and thus make the linear-quadratic model a more useful approximation.
As an example, I use a sum of two exponentials for the $b(\tau)$ function, so the problem is
$$\max_{\{c\}}\;-\frac{1}{2}\,E\int_{t=0}^{\infty}e^{-rt}\left(c^*-c_t-b_\lambda\,x_t-b_\delta\,w_t\right)^2dt \tag{30}$$
$$x_t=\lambda\int_{\tau=0}^{\infty}e^{-\lambda\tau}\,c_{t-\tau}\,d\tau \tag{31}$$
$$w_t=\delta\int_{\tau=0}^{\infty}e^{-\delta\tau}\,c_{t-\tau}\,d\tau \tag{32}$$
In the operator notation of (28),
$$L^b(D)=\frac{b_\lambda\,\lambda}{\lambda+D}+\frac{b_\delta\,\delta}{\delta+D}.$$
This formulation allows for both habit and durable effects. For example, consumption can be durable in the short run, but induce habits in the long run, with $b_\lambda<0$, $b_\delta>0$, $\lambda<\delta$. Having included two exponentials, the generalization to an arbitrary sum of exponentials is clear.
Here are the major results. The consumption process follows
$$D\left[1+L^b(D)\right]c_t=\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]Dz_t.$$
You can see the natural generalization from the time-separable case in which $L^b(D)=0$. In the double-exponential case,
$$dc_t=-\left[b_\lambda\lambda\left(c_t-x_t\right)+b_\delta\delta\left(c_t-w_t\right)\right]dt+\frac{r\sigma}{\rho+r}\left[1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}\right]dz_t \tag{33}$$
Without nonseparabilities, consumption follows the familiar random walk. With nonseparabilities, marginal utility is still a random walk, but consumption is not, and adjusts towards ($b_\lambda,b_\delta>0$) or away from ($b_\lambda,b_\delta<0$) its recent past.
The price of the consumption stream is
$$p_t=\frac{1}{\left[1+L^b(r)\right]}\left[1+\frac{r\,L^b(r)-D\,L^b(D)}{(r-D)}\right]\frac{c_t}{r}-\frac{\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\left(\frac{\sigma}{\rho+r}\right)^2 \tag{34}$$
The first term is just the risk-neutral present value of the consumption stream, i.e.
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau.$$
The second term in (34) is a discount for risk.
In the double-exponential case the price of the consumption stream is
$$p_t=\frac{1}{r}\,\frac{c_t+\frac{b_\lambda\lambda}{r+\lambda}\,x_t+\frac{b_\delta\delta}{r+\delta}\,w_t}{1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}}-\frac{1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}}{c^*-c_t-b_\lambda\,x_t-b_\delta\,w_t}\left(\frac{\sigma}{\rho+r}\right)^2$$
In either the general or the special formulas, the denominator in the second term is the interesting component. This term is still current marginal utility, and it shows us how risk premiums evolve over time. If $c_t$ rises in a comparative-static sense, we still see risk aversion and the price discount rise. But now risk aversion is determined by $c_t$ relative to the habit or durable stocks $x_t$ and $w_t$. This generalization can allow the model to produce more realistic time series.
The price of the endowment stream $\{y_t\}$ is similarly
$$q_t=\frac{y_t}{r+\rho}-\frac{\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,\frac{\sigma^2}{(r+\rho)^2}$$
In the double-exponential case,
$$q_t=\frac{y_t}{r+\rho}-\frac{1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}}{c^*-\left[c_t+b_\lambda\,x_t+b_\delta\,w_t\right]}\,\frac{\sigma^2}{(r+\rho)^2}$$
The first term is again the risk-neutral value,
$$\frac{y_t}{r+\rho}=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,y_{t+\tau}\,d\tau.$$
The second term reflects the same time-varying risk aversion as before, due to changing marginal utility at time $t$.
9.2.1 Derivation
First order conditions and consumption drift.
The consumer's first-order conditions state that marginal utility is a martingale,²
$$c^*-\left[c_t+\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau\right]=E_t\left\{c^*-\left[c_{t+\Delta}+\int_{\tau=0}^{\infty}b(\tau)\,c_{t+\Delta-\tau}\,d\tau\right]\right\} \tag{35}$$
Taking the limit, marginal utility follows a random walk,
$$E_t\,d\left[c_t+\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau\right]=0$$
I.e., we know that consumption follows a classic autoregressive process,³ written in either form (22) or (23),
$$dc_t=-\left[b(0)\,c_t+\int_{\tau=0}^{\infty}b'(\tau)\,c_{t-\tau}\,d\tau\right]dt+\Sigma\,dz_t$$
$$dc_t=-\left[\int_{\tau=0}^{\infty}b(\tau)\,dc_{t-\tau}\right]dt+\Sigma\,dz_t,$$
where $\Sigma$ denotes the as-yet-unknown loading on $dz_t$.
²This condition is the same whether the nonseparability is internal or external. If external, these are directly the first order conditions, in equilibrium where individual = aggregate consumption. If internal, the marginal utility of consumption today includes its effects on future utility,
$$\frac{\partial U}{\partial c_t}\propto\left[c^*-c_t-\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau\right]+E_t\int_{s=0}^{\infty}e^{-rs}\left[c^*-c_{t+s}-\int_{\tau=0}^{\infty}b(\tau)\,c_{t+s-\tau}\,d\tau\right]b(s)\,ds$$
But then, since the period marginal utility is a martingale, the forward-looking terms collapse,
$$\frac{\partial U}{\partial c_t}\propto\left[c^*-c_t-\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau\right]\left[1+\int_{s=0}^{\infty}e^{-rs}\,b(s)\,ds\right]$$
so the common factor $\left[1+\int_{0}^{\infty}e^{-rs}b(s)\,ds\right]$ cancels from both sides of $\partial U/\partial c_t=E_t\left(\partial U/\partial c_{t+\Delta}\right)$, and we are back to (35).
³A brute-force verification:
$$c_t+\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau=E_t\left[c_{t+\Delta}+\int_{\tau=0}^{\infty}b(\tau)\,c_{t+\Delta-\tau}\,d\tau\right]$$
$$\int_{\tau=\Delta}^{\infty}b(\tau)\,c_{t+\Delta-\tau}\,d\tau=\int_{\tau=0}^{\infty}b(\tau+\Delta)\,c_{t-\tau}\,d\tau$$
$$c_t+\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau=E_t\,c_{t+\Delta}+b(0)\,c_t\,\Delta+\int_{\tau=0}^{\infty}b(\tau+\Delta)\,c_{t-\tau}\,d\tau$$
$$E_t\left(c_{t+\Delta}-c_t\right)=-\Delta\left[b(0)\,c_t+\int_{\tau=0}^{\infty}\frac{db}{d\tau}(\tau)\,c_{t-\tau}\,d\tau\right]$$
$$E_t\,dc_t=-\left[b(0)\,c_t+\int_{\tau=0}^{\infty}b'(\tau)\,c_{t-\tau}\,d\tau\right]dt$$
In operator notation, both expressions are equivalent to
$$\left[1+L^b(D)\right]D\,c_t=\Sigma\,Dz_t \tag{36}$$
We don't know what $\Sigma$ is, which the resource constraint will tell us.
Resource constraint and consumption shocks.
Using the Hansen-Sargent response-to-shock formula (20), and the representation (36), we have
$$d\left[E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau\right]=(\cdot)\,dt+\frac{1}{r}\,\frac{\Sigma}{1+L^b(r)}\,dz_t$$
$$d\left[E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau\right]=dk_t+\frac{dy_t}{r+\rho}=\left[rk_t-c_t+\frac{r}{r+\rho}\,y_t\right]dt+\frac{\sigma}{r+\rho}\,dz_t$$
Comparing the two expressions, the impact multiplier satisfies
$$\frac{1}{r}\,\frac{\Sigma}{1+L^b(r)}=\frac{\sigma}{r+\rho}.$$
Therefore,
$$\Sigma=\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]$$
In sum, then, consumption follows the process
$$dc_t=-\left[b(0)\,c_t+\int_{\tau=0}^{\infty}b'(\tau)\,c_{t-\tau}\,d\tau\right]dt+\frac{r\sigma}{r+\rho}\left[1+\int_{\tau=0}^{\infty}e^{-r\tau}\,b(\tau)\,d\tau\right]dz_t$$
$$dc_t=-\left[\int_{\tau=0}^{\infty}b(\tau)\,dc_{t-\tau}\right]dt+\frac{r\sigma}{r+\rho}\left[1+\int_{\tau=0}^{\infty}e^{-r\tau}\,b(\tau)\,d\tau\right]dz_t$$
or, in operator notation,
$$\left[1+L^b(D)\right]D\,c_t=\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]Dz_t.$$
We can write the same result by characterizing the marginal utility process,
$$d\left[c_t+\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau\right]=\frac{r\sigma}{r+\rho}\left[1+\int_{\tau=0}^{\infty}e^{-r\tau}\,b(\tau)\,d\tau\right]dz_t \tag{37}$$
$$D\left[1+L^b(D)\right]c_t=\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]Dz_t \tag{38}$$
Consumption process, exponential case.
Expressing the state in terms of $x_t$ and $w_t$ is useful. Directly, (37)-(38) are
$$d\left(c_t+b_\lambda\,x_t+b_\delta\,w_t\right)=\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]dz_t \tag{39}$$
If we want to study consumption growth, we can substitute from (38)
$$D\left[1+\frac{b_\lambda\lambda}{\lambda+D}+\frac{b_\delta\delta}{\delta+D}\right]c_t=\frac{r\sigma}{r+\rho}\left[1+\frac{b_\lambda\lambda}{\lambda+r}+\frac{b_\delta\delta}{\delta+r}\right]Dz_t$$
$$\left[D+b_\lambda\lambda\left(1-\frac{\lambda}{\lambda+D}\right)+b_\delta\delta\left(1-\frac{\delta}{\delta+D}\right)\right]c_t=\frac{r\sigma}{r+\rho}\left[1+\frac{b_\lambda\lambda}{\lambda+r}+\frac{b_\delta\delta}{\delta+r}\right]Dz_t.$$
Then, rewriting this in terms of the state variables $x_t$ and $w_t$,
$$dc_t=-\left[b_\lambda\lambda\left(c_t-x_t\right)+b_\delta\delta\left(c_t-w_t\right)\right]dt+\frac{r\sigma}{\rho+r}\left[1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}\right]dz_t \tag{40}$$
Price of the consumption stream.
The price of the consumption stream is
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,\frac{c^*-\left[c_{t+\tau}+\int_{s=0}^{\infty}b(s)\,c_{t+\tau-s}\,ds\right]}{c^*-\left[c_t+\int_{s=0}^{\infty}b(s)\,c_{t-s}\,ds\right]}\,c_{t+\tau}\,d\tau$$
The formula simplifies if we recognize that marginal utility follows a random walk. Then
$$c_{t+\tau}+\int_{s=0}^{\infty}b(s)\,c_{t+\tau-s}\,ds=\left[c_t+\int_{s=0}^{\infty}b(s)\,c_{t-s}\,ds\right]+\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]\int_{u=0}^{\tau}dz_{t+u}$$
Substituting this result in the pricing formula, we have
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau-\frac{\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,E_t\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{u=0}^{\tau}dz_{t+u}\right]c_{t+\tau}\,d\tau \tag{41}$$
Let's work on the first term. Using the Hansen-Sargent prediction formula and the operator expression for the consumption process,
$$\left[1+L^b(D)\right]D\,c_t=\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]Dz_t,$$
we have
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau=\frac{r\sigma}{r+\rho}\,\frac{\frac{1+L^b(r)}{D\left[1+L^b(D)\right]}-\frac{1+L^b(r)}{r\left[1+L^b(r)\right]}}{r-D}\,Dz_t$$
$$=\frac{\sigma}{r+\rho}\,\frac{r\left[1+L^b(r)\right]-D\left[1+L^b(D)\right]}{(r-D)\,D\left[1+L^b(D)\right]}\,Dz_t$$
$$=\frac{r\left[1+L^b(r)\right]-D\left[1+L^b(D)\right]}{(r-D)\left[1+L^b(r)\right]}\,\frac{c_t}{r}$$
$$=\frac{1}{\left[1+L^b(r)\right]}\left[1+\frac{r\,L^b(r)-D\,L^b(D)}{(r-D)}\right]\frac{c_t}{r}$$
I can't get further in general, so using the double-exponential functional form,
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau=\frac{1}{\left[1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}\right]}\left[1+\frac{r\left(\frac{b_\lambda\lambda}{\lambda+r}+\frac{b_\delta\delta}{\delta+r}\right)-D\left(\frac{b_\lambda\lambda}{\lambda+D}+\frac{b_\delta\delta}{\delta+D}\right)}{(r-D)}\right]\frac{c_t}{r}$$
$$=\frac{1}{\left[1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}\right]}\left[1+\frac{b_\lambda\lambda^2}{(\lambda+r)(\lambda+D)}+\frac{b_\delta\delta^2}{(\delta+r)(\delta+D)}\right]\frac{c_t}{r}$$
$$=\frac{c_t+\frac{b_\lambda\lambda}{r+\lambda}\,x_t+\frac{b_\delta\delta}{r+\delta}\,w_t}{1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}}\,\frac{1}{r}$$
Now for the second part. Start with the hard-looking part,
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{u=0}^{\tau}dz_{t+u}\right]c_{t+\tau}\,d\tau.$$
Actually, this is easy, and the risk adjustment term is quite general. Start with any consumption process,
$$c_t=\int_{\tau=0}^{\infty}b_c(\tau)\,dz_{t-\tau}$$
$$E_t\left[c_{t+\tau}\int_{u=0}^{\tau}dz_{t+u}\right]=\int_{s=0}^{\tau}b_c(s)\,ds$$
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\left[\int_{u=0}^{\tau}dz_{t+u}\right]d\tau=\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{s=0}^{\tau}b_c(s)\,ds\right]d\tau$$
$$=\int_{s=0}^{\infty}b_c(s)\left[\int_{\tau=s}^{\infty}e^{-r\tau}\,d\tau\right]ds=\int_{s=0}^{\infty}b_c(s)\,\frac{1}{r}\,e^{-rs}\,ds=\frac{1}{r}\,L^{b_c}(r).$$
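The interchange of the order of integration here can be confirmed numerically. A minimal sketch (my own illustration; $r$ and $\lambda$ are arbitrary) for the kernel $b_c(s)=e^{-\lambda s}$, for which $L^{b_c}(r)=1/(r+\lambda)$:

```python
import math

# Numerically confirm
#   int_0^inf e^{-r tau} [int_0^tau b(s) ds] d tau = (1/r) L_b(r)
# for b(s) = exp(-lam*s), so int_0^tau b = (1 - e^{-lam tau})/lam
# and L_b(r) = 1/(r+lam). Arbitrary illustrative parameters.
r, lam = 0.05, 0.8
dt, T = 1e-3, 300.0

lhs = sum(math.exp(-r * i * dt) * (1 - math.exp(-lam * i * dt)) / lam * dt
          for i in range(int(T / dt)))
rhs = (1.0 / r) * 1.0 / (r + lam)

assert abs(lhs - rhs) < 1e-2
```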
Using our consumption process,
$$c_t=\frac{r\sigma}{r+\rho}\,\frac{\left[1+L^b(r)\right]}{D\left[1+L^b(D)\right]}\,Dz_t=\frac{r\sigma}{r+\rho}\,\frac{\left[1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}\right]}{D\left[1+\frac{b_\lambda\lambda}{\lambda+D}+\frac{b_\delta\delta}{\delta+D}\right]}\,Dz_t,$$
we have
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\left[\int_{u=0}^{\tau}dz_{t+u}\right]d\tau=\frac{1}{r}\,L^{b_c}(r)=\frac{\sigma}{r+\rho}\,\frac{\left[1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}\right]}{\left[1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}\right]}\,\frac{1}{r}=\frac{\sigma}{r+\rho}\,\frac{1}{r}$$
Now, adding back the other terms of (41),
$$-\frac{\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,E_t\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{u=0}^{\tau}dz_{t+u}\right]c_{t+\tau}\,d\tau=-\frac{\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,\frac{\sigma}{r+\rho}\,\frac{1}{r}=-\frac{\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\left(\frac{\sigma}{r+\rho}\right)^2$$
Price of the endowment stream.
The price of the endowment stream is
$$q_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,\frac{c^*-\left[c_{t+\tau}+\int_{s=0}^{\infty}b(s)\,c_{t+\tau-s}\,ds\right]}{c^*-\left[c_t+\int_{s=0}^{\infty}b(s)\,c_{t-s}\,ds\right]}\,y_{t+\tau}\,d\tau$$
Working analogously, we have
$$c_{t+\tau}+\int_{s=0}^{\infty}b(s)\,c_{t+\tau-s}\,ds=\left[c_t+\int_{s=0}^{\infty}b(s)\,c_{t-s}\,ds\right]+\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]\int_{u=0}^{\tau}dz_{t+u}$$
$$q_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,y_{t+\tau}\,d\tau-\frac{\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,E_t\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{u=0}^{\tau}dz_{t+u}\right]y_{t+\tau}\,d\tau$$
$$q_t=\left[\int_{\tau=0}^{\infty}e^{-r\tau}\,e^{-\rho\tau}\,d\tau\right]y_t-\frac{\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,E_t\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{u=0}^{\tau}dz_{t+u}\right]\left[\sigma\int_{u=0}^{\tau}e^{-\rho(\tau-u)}\,dz_{t+u}\right]d\tau$$
$$q_t=\frac{y_t}{r+\rho}-\frac{\frac{r\sigma^2}{r+\rho}\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{u=0}^{\tau}e^{-\rho u}\,du\right]d\tau$$
$$q_t=\frac{y_t}{r+\rho}-\frac{\frac{r\sigma^2}{r+\rho}\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,\frac{1}{\rho}\left[\frac{1}{r}-\frac{1}{r+\rho}\right]$$
$$q_t=\frac{y_t}{r+\rho}-\frac{\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,\frac{\sigma^2}{(r+\rho)^2}$$
In the double-exponential case,
$$q_t=\frac{y_t}{r+\rho}-\frac{1+\frac{b_\lambda\lambda}{r+\lambda}+\frac{b_\delta\delta}{r+\delta}}{c^*-\left[c_t+b_\lambda\,x_t+b_\delta\,w_t\right]}\,\frac{\sigma^2}{(r+\rho)^2}$$
10 References
Hansen, Lars Peter, and Thomas J. Sargent, 1991, "Prediction Formulas for Continuous-Time Linear Rational Expectations Models," Chapter 8 of Rational Expectations Econometrics, https://files.nyu.edu/ts43/public/books/TOMchpt.8.pdf
Hansen, Lars Peter, and Thomas J. Sargent, 1980, "Formulating and Estimating Dynamic Linear Rational-Expectations Models," Journal of Economic Dynamics and Control 2: 7-46.
Hansen, Lars Peter, and Thomas J. Sargent, 1981, "A Note On Wiener-Kolmogorov Prediction Formulas for Rational Expectations Models," Economics Letters 8(3): 255-260.
Heaton, John, 1993, "The Interaction Between Time-Nonseparable Preferences and Time Aggregation," Econometrica 61: 353-385, http://www.jstor.org/stable/2951555
11 Lecture notes
Goal: Translate the class of linear ARMA models and operator notation tricks to continuous time. Disclaimer: this is all in Hansen, Heaton, or so obvious to them they didn't bother to write it down.
Discrete time. AR(1):
$$x_t=\rho\,x_{t-1}+\varepsilon_t$$
$$x_t=\sum_{j=0}^{\infty}\rho^j\,\varepsilon_{t-j}.$$
Operators:
$$(1-\rho L)\,x_t=\varepsilon_t$$
$$x_t=\frac{1}{1-\rho L}\,\varepsilon_t=\sum_{j=0}^{\infty}\rho^j\,L^j\,\varepsilon_t=\sum_{j=0}^{\infty}\rho^j\,\varepsilon_{t-j}.$$
Our goal: this class of processes and operator tricks in continuous time. Linear processes.
Continuous time AR(1):
$$x_{t+\Delta}=(1-\alpha\Delta)\,x_t+\varepsilon_{t+\Delta}$$
$$x_{t+\Delta}-x_t=-\alpha\Delta\,x_t+\varepsilon_{t+\Delta}$$
$$dx_t=-\alpha\,x_t\,dt+\sigma\,dz_t$$
$$x_t=\sigma\int_{\tau=0}^{\infty}e^{-\alpha\tau}\,dz_{t-\tau}.$$
Task 1: learn to do the AR(1) with operators.
Basic operators:
$$L^{\Delta}\,x_t=x_{t-\Delta}$$
This would let us do a general MA,
$$x_t=\int_{\tau=0}^{\infty}b(\tau)\,dz_{t-\tau}=\int_{\tau=0}^{\infty}b(\tau)\,L^{\tau}\,dz_t$$
The derivative operator:
$$\lim_{\Delta\to0}\frac{e^{\Delta\log(L)}-1}{\Delta}=\lim_{\Delta\to0}\frac{\log(L)\,e^{\Delta\log(L)}}{1}=\log(L).$$
$$D\,x_t=\frac{1}{dt}\,dx_t;\qquad L=e^{-D};\qquad D=-\log(L)$$
Inverting the AR(1):
$$dx_t+\alpha\,x_t\,dt=dz_t$$
$$(D+\alpha)\,x_t=Dz_t$$
$$x_t=\frac{1}{\alpha+D}\,Dz_t$$
Interpret the last expression slowly:
$$x_t=\left[\int_{\tau=0}^{\infty}e^{-\alpha\tau}\,e^{-D\tau}\,d\tau\right]\frac{1}{dt}\,dz_t=\int_{\tau=0}^{\infty}e^{-\alpha\tau}\left[e^{-D\tau}\,\frac{1}{dt}\,dz_t\right]d\tau=\int_{\tau=0}^{\infty}e^{-\alpha\tau}\,dz_{t-\tau}$$
or
$$x_t=\left[\int_{\tau=0}^{\infty}e^{-\alpha\tau}\,e^{-D\tau}\,d\tau\right]Dz_t=\int_{\tau=0}^{\infty}e^{-\alpha\tau}\left[e^{-D\tau}\,Dz_t\right]d\tau=\int_{\tau=0}^{\infty}e^{-\alpha\tau}\,Dz_{t-\tau}\,d\tau=\int_{\tau=0}^{\infty}e^{-\alpha\tau}\,dz_{t-\tau}$$
Note we could use $L$, but it's easier to just stick with $D=-\log(L)$.
Lag polynomials, moving average representations, and functions more generally:
$$x_t=\sum_{j=0}^{\infty}b_j\,\varepsilon_{t-j}=Z^b(L)\,\varepsilon_t;\qquad Z^b(L)=\sum_{j=0}^{\infty}b_j\,L^j$$
$$x_t=\int_{\tau=0}^{\infty}b(\tau)\,dz_{t-\tau}=L^b(D)\,Dz_t;\qquad L^b(D)=\int_{\tau=0}^{\infty}e^{-D\tau}\,b(\tau)\,d\tau$$
Agenda:
$$Z^a(L)=\left[Z^b(L)\right]^{-1};\qquad Z^b(L)=\left[Z^a(L)\right]^{-1}$$
how to do that in continuous time....
Laplace transforms. For
$$y_t=\int_{\tau=0}^{\infty}b(\tau)\,x_{t-\tau}\,d\tau$$
the Laplace transform of this operation is defined as
$$L^b(D)=\int_{\tau=0}^{\infty}e^{-D\tau}\,b(\tau)\,d\tau$$
where $D$ is a complex number. The Laplace transform of the lag operation (the mysterious $L=e^{-D}$) is
$$y_t=x_{t-j}\;\Rightarrow\;L(D)=e^{-jD}.$$
Yes, they commute:
$$L^a(D)\,L^b(D)=L^b(D)\,L^a(D);\qquad Z^a(L)\,Z^b(L)=Z^b(L)\,Z^a(L)$$
Moving averages and moments, discrete time:
$$\sigma^2(x_t)=\sum_{j=0}^{\infty}b_j^2\,\sigma^2_{\varepsilon},\qquad cov(x_t,x_{t-k})=\sum_{j=0}^{\infty}b_j\,b_{j+k}\,\sigma^2_{\varepsilon}.$$
$$S_x(\omega)=\sum_{j=-\infty}^{\infty}e^{-i\omega j}\,cov(x_t,x_{t-j})=Z^b(e^{-i\omega})\,Z^b(e^{i\omega})\,\sigma^2_{\varepsilon}$$
$$cov(x_t,x_{t-j})=\frac{1}{2\pi}\int e^{i\omega j}\,S_x(\omega)\,d\omega$$
Continuous time:
$$\sigma^2(x_t)=\int_{\tau=0}^{\infty}b^2(\tau)\,d\tau,\qquad cov(x_t,x_{t-k})=\int_{\tau=0}^{\infty}b(\tau)\,b(\tau+k)\,d\tau$$
$$S_x(\omega)=\int_{\tau=-\infty}^{\infty}e^{-i\omega\tau}\,cov(x_t,x_{t-\tau})\,d\tau=L^b(i\omega)\,L^b(-i\omega)$$
$$cov(x_t,x_{t-k})=\frac{1}{2\pi}\int e^{i\omega k}\,S_x(\omega)\,d\omega=\frac{1}{2\pi}\int e^{i\omega k}\,L^b(i\omega)\,L^b(-i\omega)\,d\omega$$
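The continuous-time autocovariance formula can be spot-checked numerically. A minimal sketch (my own; $\alpha$ and $k$ are arbitrary) for the AR(1) kernel $b(\tau)=e^{-\alpha\tau}$, where the formula gives $cov(x_t,x_{t-k})=e^{-\alpha k}/(2\alpha)$:

```python
import math

# Check cov(x_t, x_{t-k}) = int_0^inf b(tau) b(tau+k) d tau
#                         = exp(-alpha*k) / (2*alpha)
# for b(tau) = exp(-alpha*tau). Arbitrary illustrative values.
alpha, k = 0.7, 1.5
dt, T = 1e-4, 40.0

cov = sum(math.exp(-alpha * (2 * i * dt + k)) * dt
          for i in range(int(T / dt)))

assert abs(cov - math.exp(-alpha * k) / (2 * alpha)) < 1e-3
```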
Levels to differences, moving average representations. (Continuous time uses $dx$ even when $x$ is stationary, as in the AR(1).)
$$x_t-x_{t-1}=b_0\,\varepsilon_t+\sum_{j=1}^{\infty}\left(b_j-b_{j-1}\right)\varepsilon_{t-j}$$
$$dx_t=\left[\int_{\tau=0}^{\infty}\frac{db(\tau)}{d\tau}\,dz_{t-\tau}\right]dt+b(0)\,dz_t$$
Operators:
$$(1-L)\,x_t=(1-L)\,Z^b(L)\,\varepsilon_t=\left[b_0+\sum_{j=1}^{\infty}\left(b_j-b_{j-1}\right)L^j\right]\varepsilon_t$$
$$D\,x_t=D\,L^b(D)\,Dz_t=\left[b(0)+L^{b'}(D)\right]Dz_t=\left[\lim_{D\to\infty}\left(D\,L^b(D)\right)+L^{b'}(D)\right]Dz_t$$
Part 1) An important trick:
$$D\,L^b(D)=b(0)+L^{b'}(D)$$
is a property of Laplace transforms. Integrate by parts:
$$L^{b'}(D)=\int_{\tau=0}^{\infty}e^{-D\tau}\,\frac{db(\tau)}{d\tau}\,d\tau=\left[b(\tau)\,e^{-D\tau}\right]_{0}^{\infty}+\int_{\tau=0}^{\infty}D\,e^{-D\tau}\,b(\tau)\,d\tau=-b(0)+D\,L^b(D).$$
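This transform identity is easy to confirm for a concrete kernel. A minimal sketch (my own; $\lambda$ arbitrary) with $b(\tau)=e^{-\lambda\tau}$, so $L^b(D)=1/(D+\lambda)$, $b(0)=1$, and $L^{b'}(D)=-\lambda/(D+\lambda)$:

```python
# Check D * L_b(D) = b(0) + L_{b'}(D) for b(tau) = exp(-lam*tau):
# D/(D+lam) should equal 1 - lam/(D+lam) at any D.
# Arbitrary illustrative values.
lam = 0.6
for D in (0.3, 2.0, 11.0):
    lhs = D / (D + lam)
    rhs = 1.0 - lam / (D + lam)
    assert abs(lhs - rhs) < 1e-12
```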
Part 2) recovering $b(0)$ from the lag polynomial. Wait just a minute.
Differences to levels, MA representation. Issue: is the series difference- or level-stationary? Operators with nonstationary series?
$$dx_t=dz_t\;\Rightarrow\;x_t-x_0=\int_{0}^{t}dz_{t-\tau}$$
$$D\,x_t=Dz_t\;\Rightarrow\;x_t=z_t?$$
$$x_t=\frac{1}{D}\,Dz_t=\int_{0}^{\infty}dz_{t-\tau}$$
ok if you think of $z=0$, $x=0$ for $t<0$.
How do we handle $dx$ where $x_t$ might have a unit root? Beveridge-Nelson decompositions. In discrete time (note $c$),
$$(1-L)\,x_t=Z^c(L)\,\varepsilon_t=\sum_{j=0}^{\infty}c_j\,L^j\,\varepsilon_t$$
implies
$$x_t=m_t-w_t;$$
$$(1-L)\,m_t=Z^c(1)\,\varepsilon_t$$
$$w_t=Z^b(L)\,\varepsilon_t;\qquad b_j=\sum_{k=j+1}^{\infty}c_k$$
or, equivalently,
$$Z^c(L)=Z^c(1)-(1-L)\,Z^b(L).$$
$m_t$ has the trend property
$$m_t=\lim_{j\to\infty}E_t\,x_{t+j}=x_t+E_t\sum_{j=1}^{\infty}\Delta x_{t+j}.$$
In continuous time,
$$dx_t=\left[\int_{\tau=0}^{\infty}c(\tau)\,dz_{t-\tau}\right]dt+\sigma\,dz_t$$
$$D\,x_t=\left[L^c(D)+\sigma\right]Dz_t$$
implies
$$x_t=m_t-w_t;$$
$$dm_t=\left[\sigma+\int_{s=0}^{\infty}c(s)\,ds\right]dz_t=\left[\sigma+L^c(0)\right]dz_t$$
$$w_t=\int_{\tau=0}^{\infty}b(\tau)\,dz_{t-\tau}=L^b(D)\,Dz_t;\qquad b(\tau)=\int_{s=0}^{\infty}c(\tau+s)\,ds$$
or, equivalently,
$$L^c(D)+\sigma=\left[\sigma+L^c(0)\right]-D\,L^b(D).$$
$m_t$ has the trend property
$$m_t=\lim_{\tau\to\infty}E_t\,x_{t+\tau}=x_t+E_t\int_{\tau=0}^{\infty}dx_{t+\tau}.$$
(Here $m_t$ denotes the trend or martingale component and $w_t$ the stationary component.) Also, $b(\tau)=\int_{s=0}^{\infty}c(\tau+s)\,ds$ is the obvious inverse of $c(\tau)=-b'(\tau)$.
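The discrete-time Beveridge-Nelson construction can be checked coefficient by coefficient. A minimal sketch (my own; the $c_j$ coefficients are arbitrary illustrations with a finite cutoff):

```python
# Check the discrete-time Beveridge-Nelson identity
#   Z_c(L) = Z_c(1) - (1-L) Z_b(L),  with  b_j = sum_{k>j} c_k,
# coefficient by coefficient, for an arbitrary truncated c polynomial.
c = [1.0, 0.5, -0.3, 0.2]           # illustrative MA coefficients of (1-L)x
Zc1 = sum(c)                         # Z_c(1)
b = [sum(c[k] for k in range(j + 1, len(c))) for j in range(len(c))]

# Coefficients of Z_c(1) - (1-L) Z_b(L): constant term Zc1 - b_0,
# then b_{j-1} - b_j at L^j.
recon = [Zc1 - b[0]] + [b[j - 1] - b[j] for j in range(1, len(c))]
assert all(abs(ci - ri) < 1e-12 for ci, ri in zip(c, recon))
```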
Impulse-response functions and multipliers.
1. Meaning:
$$dx_t=(\cdot)\,dt+\sigma\,dz_t$$
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)x_{t+\Delta}=\sigma\,dz_t$$
2. The MA representation is an impulse-response function:
$$\left(E_t-E_{t-1}\right)x_{t+j}=b_j\,\varepsilon_t$$
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)x_{t+\tau}=b(\tau)\,dz_t$$
meaning,
$$y_t=E_t\,x_{t+\tau};\qquad dy_t=(\cdot)\,dt+b(\tau)\,dz_t$$
3. You can read the impact multiplier off the lag polynomial:
$$\left(E_t-E_{t-1}\right)x_t=b_0\,\varepsilon_t=Z^b(0)\,\varepsilon_t.$$
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)dx_t=b(0)\,dz_t=\left[\lim_{D\to\infty}\left(D\,L^b(D)\right)\right]dz_t$$
meaning
$$dx_t=\left[\int_{\tau=0}^{\infty}\frac{db(\tau)}{d\tau}\,dz_{t-\tau}\right]dt+b(0)\,dz_t=\ldots+\left[\lim_{D\to\infty}\left(D\,L^b(D)\right)\right]dz_t$$
Derivation: take the limit of both sides of
$$D\,L^b(D)=b(0)+L^{b'}(D)$$
$$\lim_{D\to\infty}D\,L^b(D)=b(0)+\lim_{D\to\infty}L^{b'}(D)$$
and note that
$$\lim_{D\to\infty}L^{b'}(D)=\lim_{D\to\infty}\int_{\tau=0}^{\infty}e^{-D\tau}\,b'(\tau)\,d\tau=0.$$
Informally, for very large $D$, the function $D\,e^{-D\tau}$ drops off so quickly that only $b(0)$ survives,
$$\lim_{D\to\infty}D\,L^b(D)\approx\lim_{D\to\infty}b(0)\int_{0}^{\infty}D\,e^{-D\tau}\,d\tau=b(0)\left[-e^{-D\tau}\right]_{0}^{\infty}=b(0)$$
Weirdness I don't really understand: $L=e^{-D}$; $D=-\log(L)$, so I expected the $L=0$ limit, $b(0)=\lim_{D\to\infty}L^b(D)$, not the extra $D$.
4. Final multiplier:
$$b_{\infty}=\lim_{j\to\infty}b_j\quad\text{should}=0$$
$$b(\infty)=\lim_{D\to0}D\,L^b(D)=\lim_{D\to0}\int_{\tau=0}^{\infty}D\,e^{-D\tau}\,b(\tau)\,d\tau\quad\text{should}=0$$
5. Cumulative response of $x_t$:
$$Z^b(1)=\sum_{j=0}^{\infty}b_j;\qquad L^b(0)=\int_{\tau=0}^{\infty}b(\tau)\,d\tau$$
6. For a MA representation of differences,
$$dx_t=\left[\int_{\tau=0}^{\infty}c(\tau)\,dz_{t-\tau}\right]dt+\sigma\,dz_t$$
$$D\,x_t=\left[L^c(D)+\sigma\right]Dz_t$$
the impact multiplier is
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)dx_t=\sigma\,dz_t$$
the cumulative multiplier is
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)\int_{\tau=0}^{\infty}dx_{t+\tau}=\left[\sigma+\int_{s=0}^{\infty}c(s)\,ds\right]dz_t=\left[\sigma+L^c(0)\right]dz_t$$
Autoregressive processes. We have the AR(1) process as an example,
$$dx_t+\alpha\,x_t\,dt=dz_t$$
$$x_t=\int_{\tau=0}^{\infty}e^{-\alpha\tau}\,dz_{t-\tau}$$
$$(D+\alpha)\,x_t=Dz_t$$
$$x_t=\frac{1}{D+\alpha}\,Dz_t$$
This is what I generalize.
Autoregressive processes in continuous time:
$$dx_t=\left[\int_{\tau=0}^{\infty}a(\tau)\,dx_{t-\tau}\right]dt+\sigma\,dz_t$$
or,
$$dx_t=\left[a(0)\,x_t+\int_{\tau=0}^{\infty}a'(\tau)\,x_{t-\tau}\,d\tau\right]dt+\sigma\,dz_t$$
more obviously equivalent in lag notation,
$$\left[1-L^a(D)\right]D\,x_t=\sigma\,Dz_t$$
$$\left[D-a(0)-L^{a'}(D)\right]x_t=\sigma\,Dz_t.$$
Notice the difference between an autoregressive process and a moving-average-in-differences process,
$$dx_t=\left[\int_{\tau=0}^{\infty}c(\tau)\,dz_{t-\tau}\right]dt+\sigma\,dz_t,$$
as a representation.
How not to do AR:
1.
$$x_t=\int_{\tau=0}^{\infty}b(\tau)\,dz_{t-\tau}$$
$$\int_{\tau=0}^{\infty}a(\tau)\,x_{t-\tau}\,d\tau=dz_t$$
is too smooth.
2. You might be tempted to write an AR(2) as
$$(D+\lambda)(D+\mu)\,x_t=\sigma\,\frac{1}{dt}\,dz_t$$
This would not work. Look at the MA representation,
$$\frac{1}{(D+\lambda)}\,\frac{1}{(D+\mu)}=\frac{1}{\mu-\lambda}\left[\frac{1}{D+\lambda}-\frac{1}{D+\mu}\right]$$
$$x_t=\frac{1}{\mu-\lambda}\int_{\tau=0}^{\infty}\left(e^{-\lambda\tau}-e^{-\mu\tau}\right)\sigma\,dz_{t-\tau}$$
$$dx_t=\frac{1}{\mu-\lambda}\,d\left[\int_{\tau=0}^{\infty}\left(e^{-\lambda\tau}-e^{-\mu\tau}\right)\sigma\,dz_{t-\tau}\right]$$
We have $b(0)=0$ so $dx_t$ is nonstochastic: no loading on $dz_t$. The $D^2$ operator takes a second derivative.
3. We do not want the second lag to contract towards the first.
$$x_t=\rho_1\,x_{t-1}+\rho_2\,x_{t-2}+\varepsilon_t$$
$$x_t-x_{t-1}=-(1-\rho_1)\,x_{t-1}+\rho_2\,x_{t-2}+\varepsilon_t$$
We could take the limit keeping the second lag fixed,
$$dx_t=\left(-\alpha\,x_t+\alpha_2\,x_{t-k}\right)dt+dz_t$$
$$\left[D+\alpha-\alpha_2\,e^{-kD}\right]x_t=Dz_t$$
The tractability is clearly lost. Finite-length AR or MA lag polynomials are no longer a simplification:
$$dx_t=\left[\int_{\tau=0}^{k}a(\tau)\,x_{t-\tau}\,d\tau\right]dt+\sigma\,dz_t$$
$$x_t=\int_{\tau=0}^{k}b(\tau)\,dz_{t-\tau}$$
How yes to do ARs. Two-component example "AR(2)":
$$x_t=\left[\frac{\alpha}{D+\lambda_1}+\frac{\beta}{D+\lambda_2}\right]Dz_t\;\Longleftrightarrow\;\left[D+\theta_2-\frac{b}{(D+\theta_1)}\right]x_t=Dz_t$$
(where
$$\theta_1=\frac{\alpha\lambda_2+\beta\lambda_1}{\alpha+\beta};\quad\theta_2=\frac{\alpha\lambda_1+\beta\lambda_2}{\alpha+\beta};\quad b=\frac{\alpha\beta}{(\alpha+\beta)^2}\left(\lambda_1-\lambda_2\right)^2\,)$$
The key to avoiding the over-smoothness problem:
$$x_t=\frac{(D+\theta)}{(D+\lambda_1)(D+\lambda_2)}\,Dz_t$$
Keep the order of the polynomials!
Conversely,
$$\left[D+\lambda_1-\frac{\theta}{D+\lambda_2}\right]x_t=Dz_t$$
$$x_t=\frac{1}{\mu-\delta}\left[\frac{\lambda_2-\delta}{D+\delta}+\frac{\lambda_1-\delta}{D+\mu}\right]Dz_t$$
where
$$\mu,\delta=\frac{(\lambda_1+\lambda_2)\pm\sqrt{(\lambda_1-\lambda_2)^2+4\theta}}{2}$$
To Hansen-Sargent: forward-looking operators. Needed: $\|\rho\|<1$, $r>0$.
$\|\rho\|>1$:
$$\frac{1}{1-\rho L}\,\varepsilon_t=\left(\frac{-\rho^{-1}L^{-1}}{1-\rho^{-1}L^{-1}}\right)\varepsilon_t=-\sum_{j=1}^{\infty}\rho^{-j}\,L^{-j}\,\varepsilon_t=-\sum_{j=1}^{\infty}\rho^{-j}\,\varepsilon_{t+j}$$
$\|r\|>0$:
$$\frac{1}{r-D}\,Dz_t=\left[\int_{\tau=0}^{\infty}e^{-r\tau}\,e^{+D\tau}\,d\tau\right]Dz_t=\int_{\tau=0}^{\infty}e^{-r\tau}\,dz_{t+\tau}$$
Hansen-Sargent prediction formulas:
$$E_t\sum_{j=0}^{\infty}\beta^j\,x_{t+j}=\frac{L\,Z^b(L)-\beta\,Z^b(\beta)}{L-\beta}\,\varepsilon_t$$
$$E_t\sum_{j=1}^{\infty}\beta^{j-1}\,x_{t+j}=\frac{Z^b(L)-Z^b(\beta)}{L-\beta}\,\varepsilon_t.$$
$$L=0:\quad\left(E_t-E_{t-1}\right)\sum_{j=0}^{\infty}\beta^j\,x_{t+j}=Z^b(\beta)\,\varepsilon_t$$
Particularly handy: if
$$Z^a(L)\,x_t=\varepsilon_t$$
$$\left(E_t-E_{t-1}\right)\sum_{j=0}^{\infty}\beta^j\,x_{t+j}=\left[Z^a(\beta)\right]^{-1}\varepsilon_t$$
then no inverting lag polynomials!
Continuous time:
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau=\frac{L^b(D)-L^b(r)}{r-D}\,Dz_t.$$
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau=\left[\lim_{D\to\infty}D\,\frac{L^b(D)-L^b(r)}{r-D}\right]dz_t$$
$$\lim_{\Delta\to0}\left(E_{t+\Delta}-E_t\right)\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau=L^b(r)\,dz_t.$$
Again, you don't need the whole partial fractions, etc.
1. AR(1):
$$x_t=Z^b(L)\,\varepsilon_t=(1-\rho L)^{-1}\,\varepsilon_t$$
$$E_t\sum_{j=0}^{\infty}\beta^j\,x_{t+j}=\frac{\frac{L}{1-\rho L}-\frac{\beta}{1-\rho\beta}}{L-\beta}\,\varepsilon_t=\frac{1}{(1-\rho\beta)}\,\frac{1}{(1-\rho L)}\,\varepsilon_t=\frac{1}{(1-\rho\beta)}\sum_{j=0}^{\infty}\rho^j\,\varepsilon_{t-j}=\frac{1}{(1-\rho\beta)}\,x_t.$$
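This discrete-time result can be checked by direct truncated summation. A minimal sketch (my own; $\rho$, $\beta$, and $x_t$ are arbitrary illustrations), using $E_t x_{t+j}=\rho^j x_t$:

```python
# Verify E_t sum_j beta^j x_{t+j} = x_t / (1 - rho*beta)
# by direct truncated summation. Arbitrary illustrative values.
rho, beta, x_t = 0.9, 0.95, 1.7

pv = sum((beta * rho) ** j for j in range(2000)) * x_t
assert abs(pv - x_t / (1 - rho * beta)) < 1e-9
```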
2. Let's do the AR(1) example in continuous time.
$$(D+\alpha)\,x_t=\frac{1}{dt}\,dz_t$$
$$x_t=\frac{1}{(D+\alpha)}\,\frac{1}{dt}\,dz_t$$
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau=\frac{L^b(D)-L^b(r)}{r-D}\,Dz_t$$
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau=\left(\frac{\frac{1}{D+\alpha}-\frac{1}{r+\alpha}}{r-D}\right)Dz_t$$
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau=\frac{1}{(r-D)}\left[\frac{r-D}{(D+\alpha)(r+\alpha)}\right]\frac{1}{dt}\,dz_t$$
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,x_{t+\tau}\,d\tau=\frac{1}{r+\alpha}\,\frac{1}{D+\alpha}\,\frac{1}{dt}\,dz_t=\frac{1}{r+\alpha}\int_{\tau=0}^{\infty}e^{-\alpha\tau}\,dz_{t-\tau}=\frac{1}{r+\alpha}\,x_t$$
3. Differences: if
$$dx_t=\left[\int_{\tau=0}^{\infty}b(\tau)\,dz_{t-\tau}\right]dt+\sigma\,dz_t$$
$$D\,x_t=\left[\sigma+L^b(D)\right]Dz_t$$
then $\sigma$ enters $[\sigma+L^b(D)]$ and $[\sigma+L^b(r)]$ alike, so it cancels, and also
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,dx_{t+\tau}=\frac{L^b(D)-L^b(r)}{r-D}\,Dz_t.$$
Habit/durable problem:
$$\max_{\{c_t\}}\;-\frac{1}{2}\,E\int_{t=0}^{\infty}e^{-rt}\left[c^*-c_t-\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau\right]^2dt=-\frac{1}{2}\,E\int_{t=0}^{\infty}e^{-rt}\left(c^*-\left[1+L^b(D)\right]c_t\right)^2dt.$$
1. Specific example (the paper does a double exponential):
$$\max_{\{c_t\}}\;-\frac{1}{2}\,E\int_{t=0}^{\infty}e^{-rt}\left(c^*-c_t-b\,x_t\right)^2dt;\qquad x_t=\lambda\int_{\tau=0}^{\infty}e^{-\lambda\tau}\,c_{t-\tau}\,d\tau$$
$$L^b(D)=\frac{b\,\lambda}{\lambda+D}.$$
2. The resource constraint is
$$dk_t=\left(rk_t+y_t-c_t\right)dt$$
$$dy_t=-\rho\,y_t\,dt+\sigma\,dz_t$$
3. Present-value resource constraint:
$$(D-r)\,k_t=y_t-c_t\;\Rightarrow\;k_t=\frac{1}{(D-r)}\left(y_t-c_t\right)$$
$$k_t=-\left[\int_{0}^{\infty}e^{-r\tau}\,y_{t+\tau}\,d\tau-\int_{0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau\right]$$
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau=k_t+E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,y_{t+\tau}\,d\tau=k_t+\frac{y_t}{r+\rho}.$$
4. FOC:
$$u'(t)=E_t\,u'(t+\Delta)$$
$$c^*-c_t-\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau=E_t\left[c^*-c_{t+\Delta}-\int_{\tau=0}^{\infty}b(\tau)\,c_{t+\Delta-\tau}\,d\tau\right]$$
$$E_t\,d\left[c_t+\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau\right]=0$$
I.e., an autoregressive process
$$dc_t=-\left[\int_{\tau=0}^{\infty}b(\tau)\,dc_{t-\tau}\right]dt+\Sigma\,dz_t.$$
In operator notation,
$$\left[1+L^b(D)\right]D\,c_t=\Sigma\,Dz_t$$
5. The resource constraint determines $\Sigma$. Use the HS formula:
$$c_t=\frac{\Sigma}{D}\,\frac{1}{1+L^b(D)}\,Dz_t$$
$$d\left[E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau\right]=(\cdot)\,dt+\frac{1}{r}\,\frac{\Sigma}{1+L^b(r)}\,dz_t$$
$$d\left[E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau\right]=dk_t+\frac{dy_t}{r+\rho}=\left[rk_t-c_t+\frac{r}{r+\rho}\,y_t\right]dt+\frac{\sigma}{r+\rho}\,dz_t$$
Match the $dz$ terms:
$$\frac{1}{r}\,\frac{\Sigma}{1+L^b(r)}=\frac{\sigma}{r+\rho}\;\Rightarrow\;\Sigma=\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]$$
So, the consumption process:
$$d\left[c_t+\int_{\tau=0}^{\infty}b(\tau)\,c_{t-\tau}\,d\tau\right]=\frac{r\sigma}{r+\rho}\left[1+\int_{\tau=0}^{\infty}e^{-r\tau}\,b(\tau)\,d\tau\right]dz_t$$
$$D\left[1+L^b(D)\right]c_t=\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]Dz_t$$
...(algebra)
$$dc_t=-b\lambda\left(c_t-x_t\right)dt+\frac{r\sigma}{\rho+r}\left[1+\frac{b\lambda}{r+\lambda}\right]dz_t$$
Note that consumption drifts back toward a durables stock ($b>0$), and away from a habit stock ($b<0$).
6. Asset pricing. The price of the consumption stream is
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,\frac{c^*-\left[1+L^b(D)\right]c_{t+\tau}}{c^*-\left[1+L^b(D)\right]c_t}\,c_{t+\tau}\,d\tau$$
MU is a random walk:
$$\left[1+L^b(D)\right]c_{t+\tau}=\left[1+L^b(D)\right]c_t+\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]\int_{u=0}^{\tau}dz_{t+u}$$
so the pricing formula is
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,\frac{c^*-\left[1+L^b(D)\right]c_t-\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]\int_{u=0}^{\tau}dz_{t+u}}{c^*-\left[1+L^b(D)\right]c_t}\,c_{t+\tau}\,d\tau$$
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau-\frac{\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,E_t\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{u=0}^{\tau}dz_{t+u}\right]c_{t+\tau}\,d\tau$$
Now, for any consumption process,
$$c_t=\int_{\tau=0}^{\infty}b_c(\tau)\,dz_{t-\tau}$$
$$E_t\left[c_{t+\tau}\int_{u=0}^{\tau}dz_{t+u}\right]=\int_{s=0}^{\tau}b_c(s)\,ds$$
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\left[\int_{u=0}^{\tau}dz_{t+u}\right]d\tau=\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{s=0}^{\tau}b_c(s)\,ds\right]d\tau=\int_{s=0}^{\infty}b_c(s)\left[\int_{\tau=s}^{\infty}e^{-r\tau}\,d\tau\right]ds=\int_{s=0}^{\infty}b_c(s)\,\frac{1}{r}\,e^{-rs}\,ds=\frac{1}{r}\,L^{b_c}(r).$$
Using our consumption process,
$$c_t=\frac{r\sigma}{r+\rho}\,\frac{\left[1+L^b(r)\right]}{D\left[1+L^b(D)\right]}\,Dz_t$$
$$E_t\int_{\tau=0}^{\infty}e^{-r\tau}\left[\int_{u=0}^{\tau}dz_{t+u}\right]c_{t+\tau}\,d\tau=\frac{1}{r}\,\frac{r\sigma}{r+\rho}\,\frac{\left[1+L^b(r)\right]}{r\left[1+L^b(r)\right]}=\frac{1}{r}\,\frac{\sigma}{r+\rho}$$
so finally prices are
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau-\frac{\frac{r\sigma}{r+\rho}\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,\frac{1}{r}\,\frac{\sigma}{r+\rho}$$
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau-\frac{\left[1+L^b(r)\right]}{c^*-\left[1+L^b(D)\right]c_t}\,\frac{\sigma^2}{(r+\rho)^2}$$
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau-\frac{1+\frac{b\lambda}{r+\lambda}}{c^*-c_t-b\,x_t}\left(\frac{\sigma}{\rho+r}\right)^2$$
The price discount (expected return premium) rises as a function of consumption relative to the habit/durable stock, i.e.
$$p_t=E_t\int_{\tau=0}^{\infty}e^{-r\tau}\,c_{t+\tau}\,d\tau-\left[1+\frac{b\lambda}{r+\lambda}\right]\frac{1}{u'(c_t)}\left(\frac{\sigma}{\rho+r}\right)^2$$