
Macroeconomics Review Sheet: Laibson

Matthew Basilico
Spring, 2013

Course Outline:

1. Discrete Time Methods
   (a) Bellman Equation, Contraction Mapping Theorem, Blackwell's Sufficient Conditions, Numerical Methods
       i. Applications to growth, search, consumption, asset pricing

2. Continuous Time Methods
   (a) Bellman Equation, Brownian Motion, Ito Process, Ito's Lemma
       i. Applications to search, consumption, price-setting, investment, I.O., asset pricing

Lecture 1
Sequence Problem

What is the sequence problem?

Find $v(x_0)$ such that

$$v(x_0) = \sup_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \delta^t F(x_t, x_{t+1})$$

subject to $x_{t+1} \in \Gamma(x_t)$, with $x_0$ given.

Variable definitions:

- $x_t$ is the state vector at date $t$
- $F(x_t, x_{t+1})$ is the flow payoff at date $t$
  - the function $F$ is stationary
  - a.k.a. the flow utility function
- $\delta^t$ is the exponential discount function

Discount Variable Definitions

What are the definitions of the different discount variables?

- $\delta^t$ is the exponential discount function
- $\delta$ is referred to as the exponential discount factor
- $\rho$ is the discount rate, which is the rate of decline of the discount function, so

$$\rho = -\frac{\frac{d}{dt}\delta^t}{\delta^t} = -\ln \delta$$

so $\delta = e^{-\rho}$.

Bellman Equation

What is the Bellman Equation? What are its components?

The Bellman equation expresses the value function as a combination of a flow payoff and a discounted continuation payoff:

$$v(x) = \sup_{x_{+1} \in \Gamma(x)} \{F(x, x_{+1}) + \delta v(x_{+1})\}$$

Components:

- Flow payoff is $F(x, x_{+1})$
- Current value function is $v(x)$
- Continuation value function is $v(x_{+1})$
- Equation holds for all (feasible) values of $x$
- $v(\cdot)$ is called the solution to the Bellman Equation
  - Any old function won't solve it

Policy Function

What is a policy function? What is an optimal policy?

- A policy is a mapping from the state $x$ to the action space (which is equivalent to the choice variables)
- An optimal policy achieves payoff $v(x)$ for all feasible $x$

Relationship between Bellman and Sequence Problem

A solution to the Bellman Equation will also be a solution to the sequence problem, and vice versa:

1. A solution to the Sequence Problem is a solution to the Bellman Equation
2. A solution to the Bellman Equation is also a solution to the Sequence Problem

Notation with Sequence Problem and Bellman example: Optimal Growth with Cobb-Douglas Technology

How do we write optimal growth using log utility and Cobb-Douglas technology? How can this be translated into sequence problem and Bellman Equation notation?

Optimal growth:

$$\sup_{\{c_t\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \delta^t \ln(c_t)$$

subject to the constraints $c, k \geq 0$, $k^{\alpha} = c + k_{+1}$, and $k_0$ given.

Optimal growth in Sequence Problem notation: [$\infty$-horizon]

$$v(k_0) = \sup_{\{k_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \delta^t \ln(k_t^{\alpha} - k_{t+1})$$

such that $k_{t+1} \in [0, k_t^{\alpha}] \equiv \Gamma(k_t)$ and $k_0$ given.

Optimal growth in Bellman Equation notation: [2-period notation]

$$v(k) = \sup_{k_{+1} \in [0, k^{\alpha}]} \{\ln(k^{\alpha} - k_{+1}) + \delta v(k_{+1})\}$$

Methods for Solving the Bellman Equation

What are the 3 methods for solving the Bellman Equation?

1. Guess a solution
2. Iterate a functional operator analytically (this is really just for illustration)
3. Iterate a functional operator numerically (this is the way iterative methods are used in most cases)

FOC and Envelope

What are the FOC and envelope conditions?

First order condition (FOC): differentiate with respect to the choice variable $x_{+1}$:

$$0 = \frac{\partial F(x, x_{+1})}{\partial x_{+1}} + \delta v'(x_{+1})$$

Envelope Theorem: differentiate with respect to the state variable $x$:

$$v'(x) = \frac{\partial F(x, x_{+1})}{\partial x}$$

Optimal Stopping using G&C

How do we approach optimal stopping using Guess and Check?

1. Write the Bellman Equation
2. Propose Threshold Rule as Policy Function
3. Use Policy Function to Write Value Function [Bellman Guess]
4. Refine Value Function using continuity of $v(\cdot)$
5. Find value of threshold $x^*$ so that Value Function solves Bellman (using indifference)

Expanded:

Agent draws an offer $x$ from a uniform distribution with support in the unit interval.

1. Write the Bellman Equation

$$v(x) = \max\{x, \delta E v(x_{+1})\}$$

2. Propose Threshold Rule as Policy Function

(a) Stationary threshold rule, with threshold $x^*$:

$$\begin{cases} \text{Accept} & \text{if } x \geq x^* \\ \text{Reject} & \text{if } x < x^* \end{cases}$$

3. Use Policy Function to Write Value Function [Bellman Guess]

(a) Stationary threshold rule implies that there exists some constant $\bar{v}$ such that

$$v(x) = \begin{cases} x & \text{if } x \geq x^* \\ \bar{v} & \text{if } x < x^* \end{cases}$$

4. Refine Value Function using continuity of $v(\cdot)$

(a) By continuity of the Value Function, it follows that $\bar{v} = x^*$, so

$$v(x) = \begin{cases} x & \text{if } x \geq x^* \\ x^* & \text{if } x < x^* \end{cases}$$

5. Find value of threshold $x^*$ so that Value Function solves Bellman

(a) Use the indifference condition $v(x^*) = x^* = \delta E v(x_{+1})$, so

$$x^* = \delta \left[ \int_{x=0}^{x=x^*} x^* f(x)\,dx + \int_{x=x^*}^{x=1} x f(x)\,dx \right] = \delta \left[ (x^*)^2 + \frac{1}{2} - \frac{1}{2}(x^*)^2 \right] = \frac{\delta}{2}\left[(x^*)^2 + 1\right]$$

Giving solution

$$x^* = \delta^{-1}\left(1 - \sqrt{1 - \delta^2}\right)$$
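The closing step can be checked numerically: compute the closed-form threshold and confirm it satisfies the indifference condition. A minimal sketch (the value of $\delta$ is an arbitrary illustration):

```python
import math

delta = 0.9
# closed-form threshold: x* = (1 - sqrt(1 - delta^2)) / delta
x_star = (1 - math.sqrt(1 - delta**2)) / delta

# indifference condition: x* = delta * E[v(x')] = (delta/2) * ((x*)^2 + 1)
continuation = (delta / 2) * (x_star**2 + 1)
assert abs(x_star - continuation) < 1e-12
```

With $\delta = 0.9$ the agent rejects a bit more than the bottom 60% of offers; as $\delta \to 1$ the threshold approaches 1 (an arbitrarily patient agent waits for the best draw).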

Lecture 2
Bellman Operator

What is the Bellman operator? What are its properties as a functional operator?

The Bellman operator $B$, operating on a function $w$, is defined by

$$(Bw)(x) \equiv \sup_{x_{+1} \in \Gamma(x)} \{F(x, x_{+1}) + \delta w(x_{+1})\} \quad \forall x$$

Note: The Bellman operator is a contraction mapping that can be used to iterate until convergence to a fixed point, a function that is a solution to the Bellman Equation.

- Having $Bw$ on the LHS frees the RHS continuation value function $w$ from being the same function.
- Hence if $Bw(x) \neq w(x)$, keep iterating to get a better (closer) function, until reaching the fixed point where $Bw(x) = w(x)$.
- Notation: When you iterate, the RHS value function lags (one period). When you are not iterating, it doesn't lag.

Properties as a functional operator:

- Definition is expressed pointwise (for one value of $x$) but applies to all values of $x$
- Operator $B$ maps function $w$ to new function $Bw$
  - Hence $B$ is a functional operator since it maps functions

Properties at solution to Bellman Equation:

- If function $v$ is a solution to the Bellman Equation, then $v$ is a fixed point of $B$ ($B$ maps $v$ to $v$): $Bv = v$ (that is, $(Bv)(x) = v(x)\ \forall x$)
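Iterating the Bellman operator numerically can be illustrated on the optimal-growth example, where the optimal policy has the known closed form $k_{+1} = \alpha\delta k^{\alpha}$. A sketch of value function iteration on a discrete grid (parameter values and grid bounds are arbitrary choices for illustration):

```python
import numpy as np

alpha, delta = 0.3, 0.9
k = np.linspace(0.05, 0.5, 200)          # grid for the capital state
v = np.zeros_like(k)                     # initial guess v_0 = 0

# flow payoff F(k, k') = ln(k^alpha - k') for every (k, k') pair on the grid;
# infeasible choices (c <= 0) get payoff -inf so they are never selected
c = k[:, None] ** alpha - k[None, :]
u = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)

for _ in range(500):                     # apply the Bellman operator B repeatedly
    v_new = np.max(u + delta * v[None, :], axis=1)
    if np.max(np.abs(v_new - v)) < 1e-12:
        break                            # (approximate) fixed point reached: Bv = v
    v = v_new

policy = k[np.argmax(u + delta * v[None, :], axis=1)]
closed_form = alpha * delta * k ** alpha  # known analytic policy for this problem
assert np.max(np.abs(policy - closed_form)) < 0.01   # error limited by grid spacing
```

The contraction property guarantees the loop converges from the arbitrary starting guess $v_0 = 0$, at geometric rate $\delta$.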

Contraction Mapping

Why does $B^n w$ converge as $n \to \infty$? What is a contraction mapping? What is the contraction mapping theorem?

The reason $B^n w$ converges as $n \to \infty$ is that $B$ is a contraction mapping.

Definition: Let $(S, d)$ be a metric space and $B : S \to S$ be a function mapping $S$ into itself. $B$ is a contraction mapping if for some $\beta \in (0, 1)$, $d(Bf, Bg) \leq \beta\, d(f, g)$ for any two functions $f$ and $g$.

Intuition: $B$ is a contraction mapping if operating $B$ on any two functions moves them strictly closer together ($Bf$ and $Bg$ are strictly closer together than $f$ and $g$).

- A metric (distance function) is just a way of representing the distance between two functions (e.g. maximum pointwise gap between two functions).

Contraction Mapping Theorem:

If $(S, d)$ is a complete metric space and $B : S \to S$ is a contraction mapping, then:

1. $B$ has exactly one fixed point $v \in S$
2. for any $v_0 \in S$, $\lim_{n \to \infty} B^n v_0 = v$
3. $B^n v_0$ has an exponential convergence rate at least as great as $-\ln \beta$

Blackwell's Theorem

What is Blackwell's Theorem? How is it proved?

Blackwell's Sufficient Conditions: these are sufficient (but not necessary) for an operator to be a contraction mapping.

Let $X \subseteq \mathbb{R}^l$ and let $C(X)$ be a space of bounded functions $f : X \to \mathbb{R}$, with the sup-metric. Let $B : C(X) \to C(X)$ be an operator satisfying:

1. (Monotonicity) if $f, g \in C(X)$ and $f(x) \leq g(x)\ \forall x \in X$, then $(Bf)(x) \leq (Bg)(x)\ \forall x \in X$
2. (Discounting) there exists some $\beta \in (0, 1)$ such that $[B(f + a)](x) \leq (Bf)(x) + \beta a$, for all $f \in C(X)$, $a \geq 0$, $x \in X$
   [Note that $a$ is a constant, and $f + a$ is the function generated by adding a constant to the function $f$]

Then $B$ is a contraction with modulus $\beta$.

Proof:

For any $f, g \in C(X)$ we have $f \leq g + d(f, g)$. Properties 1 & 2 of the conditions imply

$$Bf \leq B(g + d(f, g)) \leq Bg + \beta\, d(f, g)$$
$$Bg \leq B(f + d(f, g)) \leq Bf + \beta\, d(f, g)$$

Combining the first and last terms:

$$Bf - Bg \leq \beta\, d(f, g)$$
$$Bg - Bf \leq \beta\, d(f, g)$$
$$|(Bf)(x) - (Bg)(x)| \leq \beta\, d(f, g) \quad \forall x$$
$$\sup_x |(Bf)(x) - (Bg)(x)| \leq \beta\, d(f, g)$$
$$d(Bf, Bg) \leq \beta\, d(f, g)$$

Checking Blackwell's Conditions

Ex 4.1: Check Blackwell conditions for a Bellman operator in a consumption problem.

Consumption problem with stochastic asset returns, stochastic labor income, and a liquidity constraint:

$$(Bf)(x) = \sup_{c \in [0, x]} \left\{ u(c) + \delta E f\!\left(\tilde{R}_{+1}(x - c) + \tilde{y}_{+1}\right) \right\}$$

1. Monotonicity:

Assume $f(x) \leq g(x)\ \forall x$. Suppose $c_f$ is the optimal policy when the continuation value function is $f$.

$$(Bf)(x) = \sup_{c \in [0, x]} \left\{ u(c) + \delta E f\!\left(\tilde{R}_{+1}(x - c) + \tilde{y}_{+1}\right) \right\} = u(c_f) + \delta E f\!\left(\tilde{R}_{+1}(x - c_f) + \tilde{y}_{+1}\right)$$

$$u(c_f) + \delta E f\!\left(\tilde{R}_{+1}(x - c_f) + \tilde{y}_{+1}\right) \leq u(c_f) + \delta E g\!\left(\tilde{R}_{+1}(x - c_f) + \tilde{y}_{+1}\right) \leq \sup_{c \in [0, x]} \left\{ u(c) + \delta E g\!\left(\tilde{R}_{+1}(x - c) + \tilde{y}_{+1}\right) \right\} = (Bg)(x)$$

[Note (in the top line) elimination of the sup by using the optimal policy $c_f$. In the bottom line, any policy is weakly dominated by the sup, so the sup can be added back in.]

2. Discounting:

Adding a constant ($\alpha$) to an optimization problem does not affect the optimal choice, so:

$$[B(f + \alpha)](x) = \sup_{c \in [0, x]} \left\{ u(c) + \delta E \left[ f\!\left(\tilde{R}_{+1}(x - c) + \tilde{y}_{+1}\right) + \alpha \right] \right\} = \sup_{c \in [0, x]} \left\{ u(c) + \delta E f\!\left(\tilde{R}_{+1}(x - c) + \tilde{y}_{+1}\right) \right\} + \delta\alpha = (Bf)(x) + \delta\alpha$$

Iteration Application: Search/Stopping

What is the general notation? Taking the lecture 1 example, can you iterate from an initial guess of $v_0(x) = 0$ for 2 steps? How about proving convergence?

Notation: Iteration of the Bellman operator $B$:

$$v_n(x) = (B^n v_0)(x) = B(B^{n-1} v_0)(x) = \max\left\{x,\ \delta E (B^{n-1} v_0)(x_{+1})\right\}$$

Let $x_n \equiv \delta E (B^{n-1} v_0)(x_{+1})$, which is the continuation payoff for $v_n(x) = (B^n v_0)(x)$.

- $x_n$ is a sufficient statistic for $v_n(x)$
- $x_n$ is also the cutoff threshold associated with $v_n(x) = (B^n v_0)(x)$

Bellman Operator for the Problem:

$$(Bw)(x) \equiv \max\{x, \delta E w(x_{+1})\}$$

*Note there is no lag here in the index since we are not iterating.

Iterating the Bellman Operator once on $v_0(x) = 0$:

$$v_1(x) = (Bv_0)(x) = \max\{x, \delta E v_0(x_{+1})\} = \max\{x, 0\} = x$$

Iterating again on $v_1(x) = x$:

$$v_2(x) = (B^2 v_0)(x) = (Bv_1)(x) = \max\{x, \delta E v_1(x_{+1})\} = \max\{x, \delta E x_{+1}\}$$

$$x_2 = \delta E x_{+1} = \frac{\delta}{2}$$

$$v_2 = \begin{cases} x & \text{if } x \geq x_2 \\ x_2 & \text{if } x < x_2 \end{cases}$$

Proof at limit:

We have:

$$(B^{n-1} v_0)(x) = \begin{cases} x & \text{if } x \geq x_{n-1} \\ x_{n-1} & \text{if } x < x_{n-1} \end{cases}$$

$$x_n = \delta E (B^{n-1} v_0)(x) = \delta \left[ \int_{x=0}^{x=x_{n-1}} x_{n-1} f(x)\,dx + \int_{x=x_{n-1}}^{x=1} x f(x)\,dx \right] = \frac{\delta}{2}\left(x_{n-1}^2 + 1\right)$$

We set $x_n = x_{n-1}$ to confirm that this sequence converges to

$$\lim_{n \to \infty} x_n = \delta^{-1}\left(1 - \sqrt{1 - \delta^2}\right)$$
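The cutoff recursion $x_n = \frac{\delta}{2}(x_{n-1}^2 + 1)$ can also be iterated directly from $x_0 = 0$ to watch the thresholds converge to the fixed point (the value of $\delta$ is arbitrary):

```python
import math

delta = 0.9
x = 0.0                               # x under the initial guess v_0 = 0
for _ in range(200):                  # iterate x_n = (delta/2) * (x_{n-1}^2 + 1)
    x = (delta / 2) * (x**2 + 1)

limit = (1 - math.sqrt(1 - delta**2)) / delta
assert abs(x - limit) < 1e-10         # matches the closed-form limit
```

Each pass is one application of the Bellman operator, so this is the same convergence the contraction mapping theorem guarantees, just tracked through the one-dimensional sufficient statistic $x_n$.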

Lecture 3
Classical Consumption Model

1. Consumption; 2. Linearization of the Euler Equation; 3. Empirical tests without precautionary savings effects

Classical Consumption

What is the classical consumption problem, in Sequence Problem and Bellman Equation notation?

1. Sequence Problem Representation

Find $v(x_0)$ such that

$$v(x_0) = \sup_{\{c_t\}_{t=0}^{\infty}} E_0 \sum_{t=0}^{\infty} \delta^t u(c_t)$$

subject to:

- a static budget constraint for consumption: $c_t \in C(x_t)$
- a dynamic budget constraint for assets: $x_{t+1} \in X\!\left(x_t, c_t, \tilde{R}_{t+1}, \tilde{y}_{t+1}, \ldots\right)$

Variables: $x$ is the vector of assets, $c$ is consumption, $\tilde{R}$ is the vector of financial asset returns, $\tilde{y}$ is the vector of labor income.

Common Example:

The only asset is cash on hand, and consumption is constrained to lie between 0 and $x_t$:

$$c_t \in C(x_t) \equiv [0, x_t]; \quad x_{t+1} = \tilde{R}_{t+1}(x_t - c_t) + \tilde{y}_{t+1}; \quad x_0 = y_0$$

Assumptions: $\tilde{y}$ is exogenous and iid; $u$ is concave; $\lim_{c \to 0} u'(c) = \infty$ (so $c > 0$ as long as $x > 0$).

2. Bellman Equation Representation

[It is more convenient to think about $c_t$ as the choice variable. The state variable, $x_{t+1}$, is stochastic, so it is not directly chosen (rather a distribution for $x_{t+1}$ is chosen at time $t$)]

$$v(x_t) = \sup_{c_t \in [0, x_t]} \{u(c_t) + \delta E_t v(x_{t+1})\} \quad \forall x$$

$$x_{t+1} = \tilde{R}_{t+1}(x_t - c_t) + \tilde{y}_{t+1}; \quad x_0 = y_0$$

Euler Equation 1: Optimization

How is the Euler equation derived from the Bellman Equation of the consumption problem above using optimization?

$$v(x_t) = \sup_{c_t \in [0, x_t]} \left\{ u(c_t) + \delta E_t v\!\left(\tilde{R}_{t+1}(x_t - c_t) + \tilde{y}_{t+1}\right) \right\} \quad \forall x$$

1. First Order Condition:

$$FOC_{c_t}: \quad 0 = u'(c_t) + \delta E_t \left[ v'(x_{t+1})\left(-\tilde{R}_{t+1}\right) \right] \quad \text{using the chain rule}$$

so

$$u'(c_t) = \delta E_t \tilde{R}_{t+1} v'(x_{t+1}) \quad \text{if } 0 < c_t < x_t \ \text{(interior)}$$
$$u'(c_t) \geq \delta E_t \tilde{R}_{t+1} v'(x_{t+1}) \quad \text{if } c_t = x_t \ \text{(boundary)}$$

2. Envelope Theorem: differentiate with respect to the state variable $x_t$:

$$v'(x_t) = u'(c_t)$$

[Note: holding $x_{t+1}$ fixed, $x_{t+1} = \tilde{R}_{t+1}(x_t - c_t) + \tilde{y}_{t+1}$ implies $c_t = x_t - \frac{x_{t+1} - \tilde{y}_{t+1}}{\tilde{R}_{t+1}}$, so $\partial c_t / \partial x_t = 1$.]

Putting the FOC and Envelope Condition together, and moving the Envelope Condition forward one period, we get the Euler Equation:

$$u'(c_t) = \delta E_t \tilde{R}_{t+1} u'(c_{t+1}) \quad \text{if } 0 < c_t < x_t \ \text{(interior)}$$
$$u'(c_t) \geq \delta E_t \tilde{R}_{t+1} u'(c_{t+1}) \quad \text{if } c_t = x_t \ \text{(boundary)}$$

Euler Equation 2: Perturbation

How is the Euler equation derived from the Bellman Equation of the consumption problem above using perturbation? [*Prefers that we know it this way*]

1. General Intuition

- Cost of consuming one dollar less today?
  - Utility loss today $= u'(c_t)$
- Value of saving an extra dollar? The discounted, expected utility gain of consuming $\tilde{R}_{t+1}$ more dollars tomorrow:
  - Utility gain tomorrow $= \delta E_t \tilde{R}_{t+1} u'(c_{t+1})$

2. Perturbation

**Idea: at an optimum, there cannot be a perturbation that improves the welfare of the agent. If there is such a feasible perturbation, then we've proven that whatever we wrote down is not an optimum.

Proof by contradiction: [Assume inequalities, and show they produce contradictions so cannot hold]

1. Suppose $u'(c_t) < \delta E_t \tilde{R}_{t+1} u'(c_{t+1})$.

[If $u'(c_t)$ is less than the discounted value of a dollar saved, then what should I do? Save more, consume less.]

(a) Then we can reduce $c_t$ by $\varepsilon$ and raise $c_{t+1}$ by $\tilde{R}_{t+1}\varepsilon$ to generate a net utility gain:

$$0 < E_t \left[ \delta \tilde{R}_{t+1} u'(c_{t+1}) - u'(c_t) \right] \varepsilon \implies \text{this perturbation raises welfare}$$

[Note the expression comes from simply moving $u'(c_t)$ to the RHS from above and multiplying by $\varepsilon$]

(b) If a perturbation that raises welfare is possible, then it can't be true that we started this analysis at an optimum.
   i. This perturbation is always possible along the equilibrium path.
   ii. Hence, at an optimum, we must have:

$$u'(c_t) \geq \delta E_t \tilde{R}_{t+1} u'(c_{t+1})$$

2. Suppose $u'(c_t) > \delta E_t \tilde{R}_{t+1} u'(c_{t+1})$.

[If $u'(c_t)$ is greater than the value of a dollar saved, then what should I do? Save less, consume more.]

(a) Then we can raise $c_t$ by $\varepsilon$ and reduce $c_{t+1}$ by $\tilde{R}_{t+1}\varepsilon$ to generate a net utility gain:

$$\left[ u'(c_t) - \delta E_t \tilde{R}_{t+1} u'(c_{t+1}) \right] \varepsilon > 0 \implies \text{this perturbation raises welfare}$$

(b) If a perturbation that raises welfare is possible, then it can't be true that we started this analysis at an optimum.
   i. This perturbation is always possible along the equilibrium path, as long as $c_t < x_t$ (liquidity constraint not binding).
   ii. Hence, at an optimum, we must have:

$$u'(c_t) \leq \delta E_t \tilde{R}_{t+1} u'(c_{t+1}) \quad \text{as long as } c_t < x_t$$

It follows that:

$$u'(c_t) = \delta E_t \tilde{R}_{t+1} u'(c_{t+1}) \quad \text{if } 0 < c_t < x_t \ \text{(interior)}$$
$$u'(c_t) \geq \delta E_t \tilde{R}_{t+1} u'(c_{t+1}) \quad \text{if } c_t = x_t \ \text{(boundary)}$$

Linearizing the Euler Equation

How do you linearize the Euler Equation?

Goal: Can we linearize this and make it operational without unrealistic assumptions? The equation tells us what I should expect consumption growth to do.

1. Assume $u$ is an isoelastic (i.e. CRRA) utility function:

$$u(c) = \frac{c^{1-\gamma} - 1}{1 - \gamma}$$

Note: $\lim_{\gamma \to 1} \frac{c^{1-\gamma} - 1}{1 - \gamma} = \ln c$. Important special case.

2. Assume $R_{t+1}$ is known at time $t$.

3. Rewrite the Euler Equation [using the functional form]:

$$c_t^{-\gamma} = \delta E_t R_{t+1} c_{t+1}^{-\gamma}$$

4. [Algebra] Divide by $c_t^{-\gamma}$:

$$1 = \delta E_t R_{t+1} \left(\frac{c_{t+1}}{c_t}\right)^{-\gamma}$$

5. [Algebra] Rearrange, using $\exp[\ln]$:

$$1 = E_t \exp\left[ \ln\left( \delta R_{t+1} \left(\frac{c_{t+1}}{c_t}\right)^{-\gamma} \right) \right]$$

6. [Algebra] Distribute the ln:

$$1 = E_t \exp\left[ -\rho + r_{t+1} + (-\gamma) \ln(c_{t+1}/c_t) \right]$$

[Note: $\ln \delta = -\rho$; $\ln R_{t+1} = r_{t+1}$]

7. [Algebra] Substitute notation for consumption growth:

$$1 = E_t \exp\left[ -\rho + r_{t+1} - \gamma\, \Delta \ln c_{t+1} \right]$$

[Note: $\ln(c_{t+1}/c_t) = \ln c_{t+1} - \ln c_t \equiv \Delta \ln c_{t+1}$]

8. [Stats] Assume $\Delta \ln c_{t+1}$ is conditionally normally distributed. Apply the expectation operator:

$$1 = \exp\left[ -\rho + E_t r_{t+1} - \gamma E_t \Delta \ln c_{t+1} + \frac{1}{2}\gamma^2 V_t \Delta \ln c_{t+1} \right]$$

Note: $E e^{\tilde{a}} = e^{E\tilde{a} + \frac{1}{2} Var\,\tilde{a}}$ where $\tilde{a}$ is a (normally distributed) random variable. Here we have $-\gamma \Delta \ln c_{t+1} \sim N\!\left(-\gamma E_t \Delta \ln c_{t+1},\ \gamma^2 V_t \Delta \ln c_{t+1}\right)$.

9. [Algebra] Take ln:

$$0 = -\rho + E_t r_{t+1} - \gamma E_t \Delta \ln c_{t+1} + \frac{1}{2}\gamma^2 V_t \Delta \ln c_{t+1}$$

10. [Algebra] Divide by $\gamma$ and rearrange:

$$E_t \Delta \ln c_{t+1} = \frac{1}{\gamma}\left( E_t r_{t+1} - \rho \right) + \frac{\gamma}{2} V_t \Delta \ln c_{t+1}$$

Terms of the linearized Euler Equation:

- $\Delta \ln c_{t+1}$ = consumption growth
- $V_t \Delta \ln c_{t+1}$ = conditional variance of consumption growth, conditional on information known at time $t$. (i.e. conditional on being in year 49, what is the variance in consumption growth between 49 and 50.) It is also known as the precautionary savings term, as seen in the regression analysis.

Why is $\Delta \ln c_{t+1}$ consumption growth?

$$\Delta \ln c_{t+1} = \ln \frac{c_{t+1}}{c_t} = \ln\left(1 + \frac{c_{t+1}}{c_t} - 1\right) = \ln\left(1 + \frac{c_{t+1} - c_t}{c_t}\right) \approx \frac{c_{t+1} - c_t}{c_t}$$

[the last step uses the log approximation, $x \approx \ln(1 + x)$]
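Step 8 relies on the lognormal-expectation identity $E e^{\tilde{a}} = e^{E\tilde{a} + \frac{1}{2} Var\,\tilde{a}}$, which can be checked by Monte Carlo (the mean and standard deviation here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.02, 0.10                       # mean and sd of the normal variable a
a = rng.normal(mu, sigma, size=1_000_000)

mc = np.exp(a).mean()                        # Monte Carlo estimate of E[e^a]
exact = np.exp(mu + 0.5 * sigma**2)          # lognormal identity
assert abs(mc - exact) < 1e-3
```

The $\frac{1}{2} Var$ correction is exactly where the precautionary term $\frac{\gamma}{2} V_t \Delta \ln c_{t+1}$ enters the linearized equation.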

Important Consumption Models

What are the two important consumption models?

In the 1970s, economists began recognizing that what is interesting is the change in consumption, as opposed to just levels of consumption. Since then we have stayed with this approach.

1. Life Cycle Hypothesis (Modigliani & Brumberg, 1954; Friedman) = Eat the Pie Problem

(a) Assumptions
   i. $\tilde{R}_t = R$
   ii. $\delta R = 1$ [$\delta < 1$; $R > 1$]
   iii. Perfect capital markets, no moral hazard, so future labor income can be exchanged for current capital

(b) Bellman Equation

$$v(x) = \sup_{c \leq x} \{u(c) + \delta E v(x_{+1})\} \quad \forall x$$

$$x_{+1} = R(x - c); \qquad x_0 = E \sum_{t=0}^{\infty} R^{-t} y_t$$

[At date 0, the agent sells all his labor income for a lump sum, discounted at rate $r$. This sum is his only source of wealth now]

[Eat the Pie, except every piece he leaves grows by a factor $R$]

(c) Euler Equation
   i. $u'(c_t) = \delta R u'(c_{t+1}) = 1 \cdot u'(c_{t+1}) = u'(c_{t+1})$
   ii. $\implies$ consumption is constant in all periods

(d) Budget Constraint

$$\sum_{t=0}^{\infty} R^{-t} c_t = E_0 \sum_{t=0}^{\infty} R^{-t} y_t$$

[(Discounted sum of his consumption) must be equal to (the expectation at date 0 of the discounted sum of his labor income)]

(e) Substitute Euler Equation

$$\sum_{t=0}^{\infty} R^{-t} c_0 = E_0 \sum_{t=0}^{\infty} R^{-t} y_t$$

   i. To give the final form: [Euler Equation + Budget Constraint]

$$c_0 = \left(1 - \frac{1}{R}\right) E_0 \sum_{t=0}^{\infty} R^{-t} y_t \qquad \text{Note: } \sum_{t=0}^{\infty} R^{-t} = \frac{1}{1 - \frac{1}{R}}$$

Intuition: Consumption is an annuity. Consumption is equal to the interest rate scaling my total net worth at date $t$.

- $1 - \frac{1}{R}$ is the annuity scaling term; approximately equal to the real net interest rate

2. Certainty Equivalence Model (Hall, 1978)

(a) Assumptions
   i. $\tilde{R}_t = R$
   ii. $\delta R = 1$ [$\delta < 1$; $R > 1$]
   iii. Can't sell claims to labor income
   iv. Quadratic utility: $u(c) = c - \frac{a}{2}c^2$
      A. Note: This admits negative consumption, and does not imply that $\lim_{c \to 0} u'(c) = \infty$
      B. [Hence it is possible to consume a negative amount]

(b) Bellman Equation

$$v(x) = \sup_{c} \{u(c) + \delta E v(x_{+1})\} \quad \forall x$$

$$x_{+1} = R(x - c) + y_{+1}; \qquad x_0 = y_0$$

(c) Euler Equation
   i. The Euler equation becomes $u'(c_t) = \delta E_t \tilde{R}_{t+1} u'(c_{t+1}) = E_t u'(c_{t+1})$ [since $\delta R = 1$]
   ii. With quadratic utility, $u'$ is linear in $c$, so $E_t u'(c_{t+1}) = u'(E_t c_{t+1})$ and
   $$c_t = E_t c_{t+1} = E_t c_{t+n}$$
   iii. So $c_t = E_t c_{t+1} = E_t c_{t+n}$ implies consumption is a random walk:
   $$c_{t+1} = c_t + \varepsilon_{t+1}$$
   So $\Delta c_{t+1}$ cannot be predicted by any information available at time $t$.
      A. If I know $c_t$, no other economic variable should have predictive power for $c_{t+1}$; any other information should be orthogonal.

(d) Budget Constraint

$$\sum_{t=0}^{\infty} R^{-t} c_t = \sum_{t=0}^{\infty} R^{-t} y_t$$

[Sum from $t$ to $\infty$ of discounted consumption is the discounted value of stochastic labor income]

   i. Budget Constraint at time $t$:

$$\sum_{s=0}^{\infty} R^{-s} c_{t+s} = x_t + \sum_{s=1}^{\infty} R^{-s} y_{t+s}$$

   [Discounted sum of remaining consumption = cash on hand (at date $t$) plus the discounted sum of remaining labor income]

   ii. Now applying expectations: [if it's true on every path, it's true in expectation]

$$E_t \sum_{s=0}^{\infty} R^{-s} c_{t+s} = x_t + E_t \sum_{s=1}^{\infty} R^{-s} y_{t+s}$$

(e) Substitute Euler Equation [$c_t = E_t c_{t+s}$]

$$\sum_{s=0}^{\infty} R^{-s} c_t = x_t + E_t \sum_{s=1}^{\infty} R^{-s} y_{t+s}$$

   i. To give the final form: [Euler Equation + Budget Constraint]

$$c_t = \left(1 - \frac{1}{R}\right)\left( x_t + E_t \sum_{s=1}^{\infty} R^{-s} y_{t+s} \right) \qquad \text{Note: } \sum_{t=0}^{\infty} R^{-t} = \frac{1}{1 - \frac{1}{R}}$$

Intuition: Just like in Modigliani, consumption is equal to the interest rate scaling my total net worth at date $t$ (but we get there just by assuming a quadratic utility function).

- $1 - \frac{1}{R}$ is the annuity scaling term (approximately equal to the real net interest rate)
- $x_t + E_t \sum_{s=1}^{\infty} R^{-s} y_{t+s}$ is total net worth at date $t$
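The annuity formula can be sanity-checked in a deterministic special case: with constant income, the present value of remaining income is $y/(R-1)$, and simulating the budget constraint forward should deliver a constant consumption path, as the Euler equation with $\delta R = 1$ requires. A minimal sketch (parameter values are arbitrary):

```python
R, y, T = 1.05, 1.0, 50
x = y                                  # x_0 = y_0: cash on hand at date 0
c_path = []
for _ in range(T):
    # c_t = (1 - 1/R) * (x_t + PV of future income); with constant y,
    # sum_{s>=1} R^{-s} y = y / (R - 1)
    c = (1 - 1 / R) * (x + y / (R - 1))
    c_path.append(c)
    x = R * (x - c) + y                # dynamic budget constraint

# consumption is (numerically) constant period after period
assert max(c_path) - min(c_path) < 1e-9
```

In this special case the path settles at $c_t = y$: with no uncertainty and no initial wealth beyond current income, the permanent-income consumer simply consumes his income every period.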

Empirical Tests of the Linearized Euler Equation: Without Precautionary Savings Effects

We want to test whether consumption growth is unpredictable between $t$ and $t+1$.

A. Testing the Linearized Euler Equation: Without Precautionary Savings Effects

1. Start with the Linearized Euler Equation, in regression form:

$$\Delta \ln c_{t+1} = \frac{1}{\gamma}\left( E_t r_{t+1} - \rho \right) + \frac{\gamma}{2} V_t \Delta \ln c_{t+1} + \varepsilon_{t+1}$$

[where $\varepsilon_{t+1}$ is orthogonal to any information known at time $t$]

2. Assume (counterfactually) that precautionary savings is constant

(a) The Euler equation reduces to:

$$\Delta \ln c_{t+1} = \text{constant} + \frac{1}{\gamma} E_t r_{t+1} + \varepsilon_{t+1}$$

[Note: when we replace the precautionary term with a constant, we are effectively ignoring its effect (since it is no longer separately identified from the other constant term)]

B. Estimating the Linearized Euler Equation: 2 Goals and Approaches

1. Goal 1: Estimate $\frac{1}{\gamma}$

(a) $\frac{1}{\gamma}$ is the EIS, the elasticity of intertemporal substitution [for this model, the EIS is the inverse of the CRRA coefficient; in other models the EIS is not $\frac{1}{\gamma}$]

(b) Estimate using variation in $E_t r_{t+1}$, with the equation

$$\Delta \ln c_{t+1} = \text{constant} + \frac{1}{\gamma} E_t r_{t+1} + \varepsilon_{t+1}$$

2. Goal 2: Test the orthogonality restriction $\varepsilon_{t+1} \perp$ information known at time $t$

(a) This means testing the restriction that information available at time $t$ does not predict consumption growth in the following regression [new $\lambda X_t$ term, for any variable $X_t$]:

$$\Delta \ln c_{t+1} = \text{constant} + \frac{1}{\gamma} E_t r_{t+1} + \lambda X_t + \varepsilon_{t+1}$$

(b) For example, does the date-$t$ expectation of income growth, $E_t \Delta \ln Y_{t+1}$, predict date-$t+1$ consumption growth [for the Shea, etc. tests below]?

$$\Delta \ln c_{t+1} = \text{constant} + \frac{1}{\gamma} E_t r_{t+1} + \lambda E_t \Delta \ln Y_{t+1} + \varepsilon_{t+1}$$

   i. [The model is trying to say that an expectation of rising income (i.e. being hired on the job market) doesn't translate into a rise in consumption. $\lambda$ should be 0.]

C. Summary of Results:

1. $\frac{1}{\gamma} \in [0, 0.2]$ (Hall, 1988)

(a) The EIS is very small: people are fairly unresponsive to expected movements in the interest rate; this doesn't seem to be a big driver of consumption growth
   i. the parameter that is scaling the expected value of the interest rate is estimated to be close to 0
(b) But the estimate may be messed up once you add in liquidity constraints. Someone who is very liquidity constrained has an EIS of 0.
(c) So what exactly are we learning when we see American consumers are not responsive to the interest rate? [Is it just about liquidity constraints, or does it mean that even in their absence, there is something deep about responsiveness to the interest rate?]

2. $\lambda \in [0.1, 0.8]$ (Campbell and Mankiw 1989, Shea 1995)

(a) $\implies$ expected income growth (predicted at date $t$) predicts consumption growth at date $t+1$
(b) $\implies$ the assumptions (1) the Euler Equation is true, (2) the utility function is CRRA, (3) the linearization is accurate, and (4) $V_t \Delta \ln c_{t+1}$ is constant, are jointly rejected
(c) Theories on why expected income predicts consumption growth:
   i. Leisure and consumption expenditure are substitutes
   ii. Work-based expenses
   iii. Households support lots of dependents in mid-life when income is highest
   iv. Households are liquidity-constrained and impatient
   v. Some consumers use rules of thumb: $c_{it} = Y_{it}$
   vi. Welfare costs of smoothing are second-order

Lecture 4
Precautionary Savings and Liquidity Constraints

1. Precautionary Savings Motives; 2. Liquidity Constraints; 3. Application: Numerical solution of a problem with liquidity constraints; 4. Comparison to the eat-the-pie problem

Precautionary Motives

Why do people save more in response to increased uncertainty?

1. Uncertainty and the Euler Equation

$$E_t \Delta \ln c_{t+1} = \frac{1}{\gamma}\left( E_t r_{t+1} - \rho \right) + \frac{\gamma}{2} V_t \Delta \ln c_{t+1}$$

where $V_t \Delta \ln c_{t+1} = E_t \left[ \Delta \ln c_{t+1} - E_t \Delta \ln c_{t+1} \right]^2$

(a) An increase in economic uncertainty raises $V_t \Delta \ln c_{t+1}$, raising $E_t \Delta \ln c_{t+1}$
   i. Note: this is not a general result (i.e. it doesn't apply to quadratic utility)

2. The key reason is the convexity of marginal utility

(a) An increase in uncertainty raises the expected value of marginal utility
   i. Convexity of the marginal utility curve raises marginal utility in expectation in the next period
      A. Euler equation (for CRRA): $u'(c_t) \neq u'(E_t c_{t+1})$ but rather $u'(c_t) = \delta R E_t u'(c_{t+1})$
      B. Since by Jensen's inequality, $E_t u'(c_{t+1}) > u'(E_t c_{t+1})$ when $u'$ is convex

(b) This increases the motive to save
   i. $\implies$ referred to as the precautionary savings effect

3. Definition: Precautionary Savings

(a) Precautionary saving is the reduction in consumption due to the fact that future labor income is uncertain instead of being fixed at its mean value.
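The Jensen's-inequality step can be made concrete with CRRA marginal utility and a mean-preserving spread (the numbers are arbitrary):

```python
gamma = 2.0

def mu(c):
    """CRRA marginal utility u'(c) = c^(-gamma), which is convex in c."""
    return c ** -gamma

low, high = 0.5, 1.5                        # mean-preserving spread around c = 1
expected_mu = 0.5 * (mu(low) + mu(high))    # E[u'(c)] under the spread
mu_of_mean = mu(0.5 * (low + high))         # u'(E[c]) with no uncertainty
assert expected_mu > mu_of_mean             # Jensen: uncertainty raises E[u'(c)]
```

Here $E[u'(c)] \approx 2.22$ versus $u'(E[c]) = 1$: the spread more than doubles expected marginal utility, which is exactly the force that pushes consumption down today (precautionary saving).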

Liquidity Constraints

Buffer Stock Models

What are the two key assumptions of buffer stock models? Qualitatively, what are some of their predictions?

Since the 1990s, consumption models have emphasized the role of liquidity constraints (Zeldes, Carroll, Deaton).

1. Consumers face a borrowing limit: e.g. $c_t \leq x_t$
   (a) This matters whether or not it actually binds in equilibrium (since the desire to preserve the buffer stock will affect marginal utility as you get close to the full-borrowing level)
2. Consumers are impatient: $\rho > r$

Predictions:

- Consumers accumulate a small stock of assets to buffer transitory income shocks
- Consumption weakly tracks income at high frequencies (even predictable income)
- Consumption strongly tracks income at low frequencies (even predictable income)


Eat the Pie Problem

A model in which the consumer can securitize her income stream. In this model, labor income can be transformed into a bond.

1. Assumptions

(a) If consumers have exogenous idiosyncratic labor income risk, then there is no risk premium and consumers can sell their labor income for

$$W_0 = E_0 \sum_{t=0}^{\infty} R^{-t} y_t$$

(b) Budget constraint

$$W_{+1} = R(W - c)$$

2. Bellman Equation

$$v(W) = \sup_{c \in [0, W]} \{u(c) + \delta E v(R(W - c))\} \quad \forall W$$

3. Solve Bellman: Guess Solution Form

$$v(W) = \begin{cases} \psi \frac{W^{1-\gamma}}{1-\gamma} & \text{if } \gamma \in [0, \infty),\ \gamma \neq 1 \\ \phi + \psi \ln W & \text{if } \gamma = 1 \end{cases}$$

4. Confirm that the solution works (problem set)

5. Derive the optimal policy rule (problem set)

$$c = \lambda W \qquad \text{with } \lambda = 1 - \left(\delta R^{1-\gamma}\right)^{\frac{1}{\gamma}}$$

Lecture 5
Non-stationary dynamic programming

Non-Stationary Dynamic Programming

What type of problems does non-stationary dynamic programming solve? What is different about the approach generally?

Non-Stationary Problems:

- So far we have assumed the problem is stationary, that is, the value function does not depend on time, only on the state variable
- Now we apply the tools to finite horizon problems
  - E.g. born at $t = 1$, terminate at $t = T = 40$
- We use backwards induction
- We index the value function, since each value function is now date-specific

Two changes for finite horizon problems:

1. $v_t(x)$ represents the value function for period $t$. E.g.,

$$v_t(x) = E_t \sum_{s=t}^{T} \delta^{s-t} u(c_s)$$

2. We don't use an arbitrary starting function for the Bellman operator
   - Instead, iterate the Bellman operator on $v_T(x)$, where $T$ is the last period in the problem
   - In most cases, the last period is easy to calculate (i.e. end of life with no bequest motive: $v_T(x) = u(x)$)

Backward Induction

How does backward induction proceed?

- Since you begin in the final period, the RHS equation is one period ahead of the LHS equation (opposite of previous notation with the operator)
  - Each iteration takes one more step back in time
- Concretely, we have the Bellman Equation

$$v_{t-1}(x) = \sup_{c \in [0, x]} \{u(c) + \delta E_t v_t(R(x - c) + y)\} \quad \forall x$$

- *To generate $v_{t-1}(x)$, apply the Bellman Operator:

$$v_{t-1}(x) = (Bv_t)(x) = \sup_{c \in [0, x]} \{u(c) + \delta E_t v_t(R(x - c) + y)\} \quad \forall x$$

  - (As noted above, generally start at the termination date $T$ and work backwards)
- Generally we have:

$$v_{T-n}(x) = (B^n v_T)(x)$$
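Backward induction can be carried out analytically in a cake-eating special case with log utility, $R = 1$, and no income (this specialization is an illustration, not the general problem above). Guessing $v_t(x) = A_t + B_t \ln x$, the FOC gives $c_t = x / B_t$ and the coefficient recursion $B_t = 1 + \delta B_{t+1}$, whose solution is the geometric sum $\sum_{s=0}^{T-t} \delta^s$:

```python
delta, T = 0.9, 40
B = [0.0] * (T + 1)
B[T] = 1.0                           # terminal condition: v_T(x) = u(x) = ln(x)
for t in range(T - 1, -1, -1):       # backward induction, one step per period
    B[t] = 1 + delta * B[t + 1]      # coefficient recursion B_t = 1 + delta * B_{t+1}

# closed form: B_t = sum_{s=0}^{T-t} delta^s = (1 - delta^(T-t+1)) / (1 - delta)
for t in range(T + 1):
    assert abs(B[t] - (1 - delta ** (T - t + 1)) / (1 - delta)) < 1e-9
```

The implied marginal propensity to consume $1/B_t$ rises toward 1 as $t \to T$: with less life remaining, the agent eats a larger fraction of the cake each period.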

Lecture 6
Hyperbolic Discounting

Context:

For $\beta < 1$:

$$U_t = u(c_t) + \beta\left[ \delta u(c_{t+1}) + \delta^2 u(c_{t+2}) + \delta^3 u(c_{t+3}) + \ldots \right]$$
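Quasi-hyperbolic weights ($1, \beta\delta, \beta\delta^2, \ldots$) generate preference reversals, which a few lines make concrete (the payoffs and parameter values are arbitrary):

```python
beta, delta = 0.7, 0.95        # present bias and long-run discount factor

def weight(t):
    """Quasi-hyperbolic discount weight on utility t periods ahead."""
    return 1.0 if t == 0 else beta * delta ** t

# Today vs. tomorrow: the agent takes the smaller, earlier reward...
assert weight(0) * 10 > weight(1) * 11
# ...but prefers the larger, later reward when the same choice is delayed 30 periods
assert weight(30) * 10 < weight(31) * 11
```

Between any two future dates the relative weight is just $\delta$, so the distant choice is patient; only the current period gets the extra $\beta$ penalty on everything later.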

Hyperbolic Bellman Equations

The Bellman Equation is set up in the following way. Define three equations where:

- $V$ is the continuation value function
- $W$ is the current value function
- $C$ is the consumption function

1. $V(x) = U(C(x)) + \delta E[V(R(x - C(x)) + y)]$
2. $W(x) = U(C(x)) + \beta \delta E[V(R(x - C(x)) + y)]$
3. $C(x) = \arg\max_c \left\{ U(c) + \beta \delta E[V(R(x - c) + y)] \right\}$

Three further identities:

- Identity linking $V$ and $W$: $\beta V(x) = W(x) - (1 - \beta) U(C(x))$
- Envelope Theorem: $W'(x) = U'(C(x))$
- FOC: $U'(C(x)) = \beta \delta R E[V'(R(x - C(x)) + y)]$

Hyperbolic Euler Equation

How do we derive an Euler equation for the hyperbolic scenario? What is the intuition behind the equation?

1. We have the FOC

$$u'(c_t) = \beta \delta R E_t[V'(x_{t+1})]$$

2. Use the identity linking $V$ and $W$ to substitute for $V'$ (differentiating the identity gives $\beta V'(x) = W'(x) - (1 - \beta)\, u'(C(x))\, C'(x)$):

$$u'(c_t) = \delta R E_t\left[ W'(x_{t+1}) - (1 - \beta)\, u'(c_{t+1}) \frac{dC_{t+1}}{dx_{t+1}} \right]$$

3. Use the envelope theorem

(a) [substitute $u'(c_{t+1})$ for $W'(x_{t+1})$]

$$u'(c_t) = \delta R E_t\left[ u'(c_{t+1}) - (1 - \beta)\, u'(c_{t+1}) \frac{dC_{t+1}}{dx_{t+1}} \right]$$

4. Distribute:

$$u'(c_t) = R E_t\left\{ \left[ \beta \delta \frac{dC_{t+1}}{dx_{t+1}} + \delta \left( 1 - \frac{dC_{t+1}}{dx_{t+1}} \right) \right] u'(c_{t+1}) \right\}$$

Interpretation of the Hyperbolic Euler Equation

First, note that $\frac{dC_{t+1}}{dx_{t+1}}$ is the marginal propensity to consume (MPC).

We then see that the main difference in this Euler equation is that it involves a weighted average of discount factors:

$$u'(c_t) = R E_t\Big\{ \Big[ \underbrace{\beta \delta \frac{dC_{t+1}}{dx_{t+1}}}_{\text{short run}} + \underbrace{\delta \left(1 - \frac{dC_{t+1}}{dx_{t+1}}\right)}_{\text{long run}} \Big]\, u'(c_{t+1}) \Big\}$$

- More cash on hand will shift weight to the long-run term (less hyperbolic discounting)

Sophisticated vs. Naive Agents

[From Section Notes]

Sophisticated Agents

- The above analysis has assumed sophisticated agents
- A sophisticated agent knows that in the future, he will continue to be quasi-hyperbolic (that is, he knows that he will continue to choose future consumption in such a way that he discounts by $\beta\delta$ between the current and next period, but by $\delta$ between future periods)
- This is reflected in the specification of the consumption policy function (as above)

Naive Agents

- A naive agent doesn't realize he will discount in a quasi-hyperbolic way in future periods
- He wrongly believes that, starting from the next period, he will discount by $\delta$ between all periods
  - (Another way of saying this: he wrongly believes that he will be willing to commit to the path that he currently sets)
- He will decide today, believing that he will choose from tomorrow forward by the exponential discount function ($e$ superscript); however, when he reaches the next date, he again reoptimizes:

1. Define

(a) Continuation value function, as given by the standard exponential discounting Bellman equation:

$$V_{t+1}^{e}(x) = \max_{c_{t+1}^{e} \in [0, x]} \left\{ u\!\left(c_{t+1}^{e}\right) + \delta E_{t+1} V_{t+2}^{e}\!\left( R\!\left(x - c_{t+1}^{e}\right) + y \right) \right\}$$

(b) Current value function, as given by the hyperbolic equation:

$$W_{t}^{n}(x) = \max_{c_{t}^{n} \in [0, x]} \left\{ u\!\left(c_{t}^{n}\right) + \beta \delta E_{t} V_{t+1}^{e}\!\left( R\!\left(x - c_{t}^{n}\right) + y \right) \right\}$$

2. Since he is not able to commit to the exponential continuation plan, the realized consumption path will be given by:

(a) Realized consumption path: $\{c^{n}(x_{\tau})\}_{\tau = t}^{\infty}$
(b) The solution at each time $t$ is characterized by

$$u'\!\left(c_{t}^{n}\right) = \beta \delta R E_t \left( V^{e} \right)'(x_{t+1}) = \beta \delta R E_t\, u'\!\left(c_{t+1}^{e}\right)$$

while the planned (never-realized) exponential continuation path satisfies

$$u'\!\left(c_{t+1}^{e}\right) = \delta R E_{t+1} u'\!\left(c_{t+2}^{e}\right)$$

Lecture 7
Asset Pricing
1. Equity Premium Puzzle; 2. Calibration of Risk Aversion; 3. Resolutions to the Equity Premium
Puzzle
Intuition

Classical economics says that you should take an arbitrarily small bet with any positive re-

as long as the payos of the bet are uncorrelated with your consumption path

turn...

(punchline)

Don't want to make a bet where you will give up resources in a low state of consumption (when
the marginal utility of consumption is higher) for resources in a high state of consumption (where
marginal utility of consumption is lower) [all because of curvature of the utility function]

Not variance-based risk, covariance based risk

Asset Pricing Equation


What is the asset pricing equation? How is it derived?

Asset Pricing Equation


General Form:

ij = ic jc

22

where

ij = ri rj

(the gap in rate of returns between assets

iand j )

ic = Cov( i i , ln c)
Risk-Free Asset:
Denition: we denote the risk free return:
When

i =equities

and

j =risk

Rtf .

Specically, we assume that at time

t 1, Rtf

is known.

free asset, we have the simple relationship

equity,f = equity,c
Intuition:

The equity premium


risk,

equity,f ,

is equal to the amount of risk,

equity,c ,

multiplied by the price of

Asset Notation and Statistics:


1. Allowing for many dierent types of assets:

1
2
i
I
Rt+1
, Rt+1
, . . . , Rt+1
, . . . , Rt+1

2. Assets have stochastic returns:

i
Rt+1
=e

it+1 has unit variance



2
i
rt+1
+ i it+1 21 [ i ]

where

it+1 has

unit variance

it+1 is a normally distributed random variable:

x N (, 2 ) Ax + B N (A + B, A2 2 )

E [exp (Ax + B)] = exp A + B + 21 A2 2
1
2
[Since x N (, ) E(exp(x)) = exp(E(x) + V ar(x));
2
1 2
= exp( + 2 )]

it+1 N (0, 1)

so if standard normal:

i
i
i
i it+1 + rt+1
12 [ i ]2 N ( i 0 + rt+1
12 [ i ]2 , i2 12 ) = N (rt+1
12 [ i ]2 , i2 )


i
E [Rit ] = E exp(rt+1
+ i it+1 21 [ i ]2 ) which as we've seen is now exp(a random variable)

Hence:

so

we can use

i
i
= exp(mean + 12 V ar) = exp(rt+1
12 [ i ]2 + 21 i2 ) = exp(rt+1
)
Finally we have that since for small

x: ln(1 + x) x

= 1 + x ex

i
exp(rti ) 1 + rt+1
.

Derivation of Asset Pricing Equation: π^{ij} = γσ_{ic} − γσ_{jc}

How is the Asset Pricing Equation derived?


Use Euler, Substitute, Take Expectation (Random Variable), Difference Two Equations, Apply Covariance Formula

1. Start with the Euler Equation

u′(c_t) = E_t[ R^i_{t+1} β u′(c_{t+1}) ]

2. Assume u is isoelastic (CRRA):

u(c) = (c^{1−γ} − 1)/(1 − γ)

3. Substitute into the Euler equation

(a) β = exp(−ρ)

(b) R^i_{t+1} = exp( r^i_{t+1} + σ^i ε^i_{t+1} − (1/2)[σ^i]² ), where ε^i_{t+1} has unit variance

(c) u′(c) = c^{−γ} (given CRRA)

c_t^{−γ} = E_t[ R^i_{t+1} e^{−ρ} c_{t+1}^{−γ} ]

1 = E_t[ exp( −ρ + r^i_{t+1} + σ^i ε^i_{t+1} − (1/2)[σ^i]² ) (c_{t+1}/c_t)^{−γ} ]

4. Re-write in terms of exp(·), using exp(ln ·)

1 = E_t[ exp( −ρ + r^i_{t+1} + σ^i ε^i_{t+1} − (1/2)[σ^i]² ) exp( −γ ln(c_{t+1}/c_t) ) ]

1 = E_t[ exp( −ρ + r^i_{t+1} + σ^i ε^i_{t+1} − (1/2)[σ^i]² − γΔln(c_{t+1}) ) ]

(a) Rearrange, and note which pieces are non-stochastic and which are random:

1 = E_t[ exp( −ρ + r^i_{t+1} − (1/2)[σ^i]²  [non-stochastic]  + σ^i ε^i_{t+1} − γΔln(c_{t+1})  [random variable] ) ]

5. Take Expectation, applying the formula for the expectation of (the exponential of) a normal Random Variable

Sx ∼ N(Sμ, S²σ²) ⟹ E[exp(Sx)] = exp( Sμ + (1/2)S²σ² )

For the random variable σ^i ε^i_{t+1} − γΔln(c_{t+1}):

That is, E[ exp( σ^i ε^i_{t+1} − γΔln(c_{t+1}) ) ] = exp( −γ E_t[Δln(c_{t+1})] + (1/2)V( σ^i ε^i_{t+1} − γΔln(c_{t+1}) ) )

(a) That is, use:

1 = exp( −ρ + r^i_{t+1} − (1/2)[σ^i]² − γ E_t[Δln(c_{t+1})] + (1/2)V( σ^i ε^i_{t+1} − γΔln(c_{t+1}) ) )

6. Take Ln

0 = −ρ + r^i_{t+1} − (1/2)[σ^i]² − γ E_t[Δln(c_{t+1})] + (1/2)V( σ^i ε^i_{t+1} − γΔln(c_{t+1}) )

7. Difference the Euler Equation for assets i and j, and re-write:

0 = r^i_{t+1} − r^j_{t+1} − (1/2)[ (σ^i)² − (σ^j)² ] + (1/2)[ V( σ^i ε^i_{t+1} − γΔln(c_{t+1}) ) − V( σ^j ε^j_{t+1} − γΔln(c_{t+1}) ) ]

(a) Note that ρ and γE_t[Δln(c_{t+1})] drop out of the equation

(b) Re-write with the interest rate difference on the LHS:

r^i_{t+1} − r^j_{t+1} = (1/2)[ (σ^i)² − (σ^j)² ] − (1/2)[ V( σ^i ε^i_{t+1} − γΔln(c_{t+1}) ) − V( σ^j ε^j_{t+1} − γΔln(c_{t+1}) ) ]

8. Apply formula for V(A + B)

(a) [V(A + B) = V(A) + V(B) + 2Cov(A, B)]

i. Also note 2Cov(A, −γB) = −2γCov(A, B)

Note, in our case:

V( σ^i ε^i_{t+1} − γΔln(c_{t+1}) ) = V(σ^i ε^i_{t+1}) + V(−γΔln(c_{t+1})) + 2Cov( σ^i ε^i_{t+1}, −γΔln(c_{t+1}) )

= (σ^i)² + γ²V(Δln(c_{t+1})) − 2γσ_{ic}

So:

r^i_{t+1} − r^j_{t+1} = (1/2)[ (σ^i)² − (σ^j)² ] − (1/2)[ (σ^i)² + γ²V(Δln(c_{t+1})) − 2γσ_{ic} ] + (1/2)[ (σ^j)² + γ²V(Δln(c_{t+1})) − 2γσ_{jc} ]

Just cancel terms: the (σ^i)², (σ^j)², and γ²V(Δln(c_{t+1})) terms all cancel, leaving

r^i_{t+1} − r^j_{t+1} = γσ_{ic} − γσ_{jc}

To get our asset pricing equation:

π^{ij} = r^i_{t+1} − r^j_{t+1} = γσ_{ic} − γσ_{jc}
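The whole chain can be sanity-checked numerically: pick a joint distribution for σ^i ε^i and Δln c, set r^i so that the log Euler equation in step 6 holds, and confirm both that the original expectation in step 4 is 1 and that the resulting premium over the risk-free rate equals γσ_{ic}. A minimal sketch, with all parameter values illustrative (not from the notes):

```python
import numpy as np

# With jointly normal sigma_i*eps and dlnc, step 6 implies
#   r_i = rho + gamma*g - 0.5*gamma**2*s**2 + gamma*sigma_ic
# and (sigma = 0) r_f = rho + gamma*g - 0.5*gamma**2*s**2, so r_i - r_f = gamma*sigma_ic.
rng = np.random.default_rng(1)
rho, gamma = 0.02, 2.0           # discount rate, risk aversion (assumed)
g, s = 0.02, 0.02                # mean and s.d. of Delta ln c (assumed)
sigma_i, corr = 0.16, 0.3        # equity volatility, correlation with Delta ln c
sigma_ic = sigma_i * s * corr    # Cov(sigma_i*eps, Delta ln c)

r_i = rho + gamma * g - 0.5 * gamma**2 * s**2 + gamma * sigma_ic
r_f = rho + gamma * g - 0.5 * gamma**2 * s**2

# Draw correlated shocks and evaluate the step-4 expectation directly.
cov = [[1.0, s * corr], [s * corr, s**2]]
eps, dlnc = rng.multivariate_normal([0.0, g], cov, size=1_000_000).T
euler = np.exp(-rho + r_i + sigma_i * eps - 0.5 * sigma_i**2 - gamma * dlnc).mean()
print(euler)                         # close to 1
print(r_i - r_f, gamma * sigma_ic)   # equal by construction
```

Note that only the covariance σ_{ic} prices the asset: changing σ_i with corr = 0 moves the asset's variance but leaves the premium at zero.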

Equity Premium Puzzle


What is the equity premium puzzle?

We can rearrange the asset pricing equation for the risk-free asset, π^{equity,f} = γσ_{equity,c}, to isolate γ:

γ = π^{equity,f} / σ_{equity,c}

Empirical Data: In the US post-war period

π^{equity,f} ≈ .06
σ_{equity,c} ≈ .0003

γ = .06/.0003 = 200
Lecture 8
Summary Equations

Brownian Motion; Discrete to Continuous

Δx ≡ x(t + Δt) − x(t) = +h with prob p; −h with prob q = 1 − p

E[x(t) − x(0)] = n((p − q)h) = (t/Δt)(p − q)h (*)

V[x(t) − x(0)] = n·4pqh² = (t/Δt)·4pqh² (*)

Calibration

h = σ√Δt
p = (1/2)[1 + (α/σ)√Δt], (p − q) = (α/σ)√Δt

E[x(t) − x(0)] = (t/Δt)(p − q)h = (t/Δt)·((α/σ)√Δt)·(σ√Δt) = αt [linear drift]

V[x(t) − x(0)] → σ²t [linear variance]

Wiener

1. Δz = ε√Δt where ε ∼ N(0, 1) [also stated Δz ∼ N(0, Δt)]

2. Non-overlapping increments of Δz are independent

Ito:

dx = a(x, t)dt + b(x, t)dz, where a(x, t)dt is the drift term and b(x, t) the standard deviation (scaling dz)

1. Random Walk: dx = αdt + σdz [with drift α and variance σ²]

2. Geometric Random Walk: dx = αxdt + σxdz [with proportional drift α and proportional variance σ²]

Ito's Lemma:

z(t) is Wiener, x(t) is Ito with dx = a(x, t)dt + b(x, t)dz.

Let V = V(x, t), then

dV = (∂V/∂t)dt + (∂V/∂x)dx + (1/2)(∂²V/∂x²)b(x, t)²dt
   = [∂V/∂t + (∂V/∂x)a(x, t) + (1/2)(∂²V/∂x²)b(x, t)²]dt + (∂V/∂x)b(x, t)dz

Value Function as Ito Process:

 Value Function (V) of an Ito Process (x) or a Wiener Process (z) is itself an Ito Process (need to check this)

dV = â(x, t)dt + b̂(x, t)dz = [∂V/∂t + (∂V/∂x)a(x, t) + (1/2)(∂²V/∂x²)b(x, t)²]dt + (∂V/∂x)b(x, t)dz

Proof of Ito's Lemma:

dV = (∂V/∂t)dt + (1/2)(∂²V/∂t²)(dt)² + (∂V/∂x)dx + (1/2)(∂²V/∂x²)(dx)² + (∂²V/∂x∂t)dxdt + h.o.t.

(dx)² = b(x, t)²(dz)² + h.o.t. = b(x, t)²dt + h.o.t.

Motivation: Discrete to Continuous Time


Brownian Motion Approximation with discrete intervals

Have a continuous time world, but use discrete time steps of length Δt. Want to see what happens as the step length goes to zero, that is Δt → 0.

Process with jumps at discrete intervals:

At every interval Δt, a process x(t) either goes up or down:

Δx ≡ x(t + Δt) − x(t) = +h with prob p; −h with prob q = 1 − p

Expectation of this process:

E(Δx) = ph + q(−h) = (p − q)h

Variance of this process:

Definition of Variance of a random variable: V(Δx) = E[Δx − EΔx]²

We have: V(Δx) = E[Δx − EΔx]² = E[(Δx)²] − [EΔx]² = 4pqh²

(Since E[(Δx)²] = ph² + q(−h)² = h², so V(Δx) = h² − (p − q)²h² = [1 − (p − q)²]h² = (2q)(2p)h² = 4pqh²)

Analysis of x(t) − x(0) in this process

Time span of length t implies n = t/Δt steps, so x(t) − x(0) is a binomial random variable

General form for probability that x(t) − x(0) is a certain combination of k, n and h:

 Probability that x(t) − x(0) = (k)(h) + (n − k)(−h) is C(n, k) p^k q^{n−k} (where C(n, k) = n!/(k!(n−k)!))

Expectation and Variance:


E[x(t) − x(0)] = n((p − q)h) = (t/Δt)(p − q)h (*)

(n times the individual step Δx. Since these are iid steps, the sum is n times each element in the sum)


V[x(t) − x(0)] = n·4pqh² = (t/Δt)·4pqh² (*)

(Since all steps are iid: when we look at a sum of random variables, where the random variables are all independent, the variance of the sum is the sum of the variances)

We have a binomial random variable. As we chop into smaller and smaller time steps, it will converge to a normal density.

So far: we've taken the discrete time process, discussed the individual steps Δx given a time step Δt, and then we've aggregated across many time steps to an interval of length t, during which time there are n = t/Δt steps, and we've observed that this random variable, x(t) − x(0), is a binomial random variable with mean (t/Δt)(p − q)h and variance (t/Δt)·4pqh². Our next job is to move from a discrete number of steps to smaller and smaller steps.

Calibrate h and p to give us the properties we want as we take Δt → 0: [Linear drift and variance]

Let:

h = σ√Δt (*)

p = (1/2)[1 + (α/σ)√Δt] (*)

 These Follow:

q = 1 − p = (1/2)[1 − (α/σ)√Δt]

(p − q) = (α/σ)√Δt

Where did this come from? (linking Δt to h, p, q) We are trying to get to a continuous time random walk with drift. A random walk has variance that increases linearly with the forecasting horizon t. We would like this property to emerge in continuous time.

 Consider the variance (of the aggregate time random variable): (t/Δt)·4pqh².

Δt is going to zero; p and q are numbers that are around .5 so we can think of them as constants; 4 is constant; t is the thing we'd like everything to be linear with respect to.

 So given all this, h must be proportional to √Δt [then put in the scaling parameter σ]

 But then, what must (p − q) be for drift to be linear in t (E[x(t) − x(0)] = (t/Δt)(p − q)h)?

(p − q) must also be proportional to √Δt, which (subject to affine transformations) pins down p, q and (p − q).

No other way to calibrate this for it to make sense, in the limit, as a continuous time random walk

Expectation and Variance

E[x(t) − x(0)] = (t/Δt)(p − q)h = (t/Δt)·((α/σ)√Δt)·(σ√Δt) = αt [linear drift]

V[x(t) − x(0)] = (t/Δt)·4pqh² = (t/Δt)·4·(1/2)[1 + (α/σ)√Δt]·(1/2)[1 − (α/σ)√Δt]·σ²Δt = σ²t·[1 − (α/σ)²Δt] → σ²t (as Δt → 0) [linear variance]
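The calibration can be simulated directly: build the binomial walk with h = σ√Δt and p = (1/2)[1 + (α/σ)√Δt] and check that the endpoints have mean ≈ αt and variance ≈ σ²t. A minimal sketch, with illustrative α, σ, t, Δt:

```python
import numpy as np

# Simulate the calibrated binomial process and confirm linear drift and variance.
rng = np.random.default_rng(2)
alpha, sigma, t, dt = 0.1, 0.3, 1.0, 1e-3  # illustrative values
n = int(t / dt)
h = sigma * np.sqrt(dt)
p = 0.5 * (1 + (alpha / sigma) * np.sqrt(dt))

steps = np.where(rng.random((20_000, n)) < p, h, -h)  # 20,000 sample paths
endpoints = steps.sum(axis=1)                         # x(t) - x(0) for each path
print(endpoints.mean(), alpha * t)     # both near 0.1
print(endpoints.var(), sigma**2 * t)   # both near 0.09
```

Trying h proportional to Δt instead of √Δt makes the variance collapse to zero as Δt shrinks, which is the point of the calibration argument above.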

Intuition as Δt → 0

As we take Δt to 0, we are mathematically converging to a continuous time random walk.

At a point t, we have a binomial random variable. It is converging to a normal random variable with mean αt and var σ²t.

[These are the four graphs simulating the process, which appear closer and closer to Brownian motion as Δt gets smaller]

Hence this Random Walk, a continuous time stochastic process, is the limit case of a family of stochastic processes

Properties of Limit Case (Random Walk):

1. Vertical movements proportional to √Δt (not Δt)

2. [x(t) − x(0)] →_D N(αt, σ²t), since Binomial →_D Normal

3. Length of curve during 1 time period → ∞, since length > nh = (1/Δt)·σ√Δt = σ/√Δt → ∞

(a) (Since length will be greater than the number of steps times the length of each step)
(b) Hence length of curve is infinity over any finite period of time

4. Δx/Δt ∝ √Δt/Δt = 1/√Δt → ∞

(a) Time derivative, dx/dt, doesn't exist
(b) So we can't talk about expectation of the derivative. However, we can talk about the following derivative-like objects (5, 6):

5. E(Δx)/Δt = (p − q)h/Δt = ((α/σ)√Δt)(σ√Δt)/Δt = α

(a) So we write E(dx) = αdt

i. That is, in a period dt, you expect this process to increase by αdt

6. V(Δx)/Δt = 4pqh²/Δt = 4(1/4)[1 − (α/σ)²Δt]σ²Δt/Δt → σ²

(a) So we write V(dx) = σ²dt

Finally we see that the process is everywhere continuous, nowhere differentiable

When we let Δt converge to zero, the limiting process is called a continuous time random walk with (instantaneous) drift α and (instantaneous) variance σ².

We generated this continuous-time stochastic process by building it up as a limit case.


We could have also just defined the process directly as opposed to converging to it:

Note that x above becomes z for the following (zero drift, unit variance):

Wiener Process

Wiener processes: Family of processes described before with zero drift, and unit variance per unit time

Definition

If a continuous time stochastic process z(t) is a Wiener Process, then any change in z, Δz, corresponding to a time interval Δt, satisfies the following conditions:

1. Δz = ε√Δt where ε ∼ N(0, 1)

(a) [often also stated Δz ∼ N(0, Δt)]

(b) ε is the gaussian piece, and √Δt is the scaling piece

2. Non-overlapping increments of Δz are independent

(a) A.K.A. If t1 ≤ t2 ≤ t3 ≤ t4 then E[(z(t2) − z(t1))(z(t4) − z(t3))] = 0

A process fitting these two properties is a Wiener process. It is the same process converged to in the first part of the lecture above.

Properties

A Wiener process is a continuous time random walk with zero drift and unit variance

z(t) has the Markov property: the current value of the process is a sufficient statistic for the distribution of future values

 History doesn't matter. All you need to know to predict Δt periods ahead is where you are right now.

z(t) ∼ N(z(0), t) so the variance of z(t) rises linearly with t

 (Zero drift case)

Note on terminology:
Wiener processes are a subset of Brownian motion (processes), which are a subset of Ito Processes.
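Both defining properties can be checked on simulated paths; a minimal sketch with an illustrative step size:

```python
import numpy as np

# Check the two Wiener properties: Var(z(t)) grows linearly in t, and
# non-overlapping increments are uncorrelated.
rng = np.random.default_rng(3)
dt, n, paths = 0.01, 400, 50_000
dz = np.sqrt(dt) * rng.standard_normal((paths, n))  # dz = eps * sqrt(dt)
z = dz.cumsum(axis=1)                               # z(t) along each path

print(z[:, 99].var(), z[:, 399].var())  # near t = 1.0 and t = 4.0
inc1 = z[:, 199] - z[:, 99]             # z(t2) - z(t1)
inc2 = z[:, 399] - z[:, 299]            # z(t4) - z(t3)
print(np.corrcoef(inc1, inc2)[0, 1])    # near 0
```

The sample variance at t = 4 is four times the variance at t = 1, the linear-in-t property, and the non-overlapping increments are uncorrelated as the definition requires.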

Ito Process

Now we generalize. Let drift (which we'll think of as lim_{Δt→0} EΔx/Δt) equal a function a(x, t) with arguments state x and time t.

[Let lim_{Δt→0} EΔx/Δt equal a function with those two arguments]

Similarly, generalize variance.

From Wiener to Ito Process


Let z(t) be a Wiener Process. Let x(t) be another continuous time stochastic process such that

lim_{Δt→0} EΔx/Δt = a(x, t) "drift", i.e. E(dx) = a(x, t)dt

lim_{Δt→0} VΔx/Δt = b(x, t)² "variance", i.e. V(dx) = b(x, t)²dt

Note that the i.e. equations are formed by simply bringing the dt to the RHS.

We summarize these properties by writing the expression for an Ito Process:

dx = a(x, t)dt + b(x, t)dz

This is not technically an equation, but a way of summarizing the two limit statements above.

Drift and Standard Deviation terms:

dx = a(x, t)dt [drift] + b(x, t)dz [standard deviation]

 We think of dz as increments of our Brownian motion from the Wiener process/limit case of the discrete time process.

 What is telling us about variance is whatever term is multiplying dz. The process changes as z — this background Wiener process — changes, and what you want to know is how a little change in z causes a change in x. In this case, an instantaneous change in z causes a change in x with scaling factor b.

Deterministic and Stochastic terms:

dx = a(x, t)dt [deterministic] + b(x, t)dz [stochastic]

Ito Process: 2 Key Examples

Random Walk

dx = αdt + σdz

 Random Walk with drift α and variance σ²

Geometric Random Walk

dx = αxdt + σxdz

 Geometric Random Walk with proportional drift α and proportional variance σ²

Drift and variance are now dependent on the state variable x

Can also think of it as dx/x = αdt + σdz, so the percent change in x is a random walk

 **Note on Geometric Random Walk: If it starts above zero, it will never become negative (or even become zero): even with strongly negative drift, given its proportionality to x, the process will be driven asymptotically to zero at the lowest.

 Laibson: Asset returns are well approximated by a Geometric Random Walk
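The two geometric-random-walk claims — paths stay strictly positive, and the mean grows at rate α — can be simulated; a minimal sketch with illustrative parameters, using the exact lognormal step rather than an Euler discretization:

```python
import numpy as np

# Simulate dx = alpha*x*dt + sigma*x*dz via the exact lognormal step
#   x(t+dt) = x(t) * exp((alpha - sigma^2/2)*dt + sigma*dz).
rng = np.random.default_rng(4)
alpha, sigma, x0, dt, n, paths = 0.05, 0.2, 1.0, 0.05, 100, 100_000
dz = np.sqrt(dt) * rng.standard_normal((paths, n))
log_steps = (alpha - 0.5 * sigma**2) * dt + sigma * dz
x_T = x0 * np.exp(log_steps.sum(axis=1))  # x at t = n*dt = 5.0

print(x_T.min() > 0)                          # True: never hits zero
print(x_T.mean(), x0 * np.exp(alpha * 5.0))   # both near exp(0.25) = 1.284
```

Positivity holds by construction (an exponential of a real number), which is the note above: proportionality to x keeps the process away from zero even with negative drift.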

Ito's Lemma

Motivation is to work with functions that take an Ito Process as an argument

Ito's Lemma

Basically just a Taylor expansion

Theorem: Let z(t) be a Wiener Process. Let x(t) be an Ito Process with dx = a(x, t)dt + b(x, t)dz.

Let V = V(x, t), then

dV = (∂V/∂t)dt + (∂V/∂x)dx + (1/2)(∂²V/∂x²)b(x, t)²dt

   = [∂V/∂t + (∂V/∂x)a(x, t) + (1/2)(∂²V/∂x²)b(x, t)²]dt + (∂V/∂x)b(x, t)dz

[the second line simply follows from expanding dx = a(x, t)dt + b(x, t)dz]

Motivation: Work with Value Functions (which will be Ito Processes) that take Ito Processes as Arguments

We are going to want to work with functions (value functions — the solutions of Bellman equations) that are going to take as an argument an Ito Process

Motivation is to know what these functions look like, that take x as an argument, where x is an Ito Process

 More specifically, we'd like to know what the Ito Process looks like that characterizes the value function V.

V is a 2nd Ito Process, which takes an Ito Process (x) as an argument

Not only want to talk about Ito Process x, but also about Ito Process V.

Since we are usually looking for a solution to a question: i.e. Oil Well

 Price of oil is given by x; V(x, t) is the value of the Oil Well, which depends on the price of oil and time

 The Ito Process dx = a(x, t)dt + b(x, t)dz describes the path of x, the price of oil (not the oil well)

e.g. price of oil is characterized by geometric Brownian motion: dx = αxdt + σxdz

Want to characterize the value function of the oil well, V(x, t), where x follows an Ito Process

Ito Process that characterizes value function over time: dV = â(V, x, t)dt + b̂(V, x, t)dz

 But we will see that the dependence of â and b̂ on V will drop out

 We want to write the stochastic process which describes the evolution of V:

dV = â(x, t)dt + b̂(x, t)dz

which is called the total differential of V. Note that the hat terms (â, b̂) are what we are looking for in this stochastic process, and are different than the terms in the Ito Process for x, (a, b).

Ito's Lemma gives the solution to this problem in the following form:

dV = (∂V/∂t)dt + (∂V/∂x)dx + (1/2)(∂²V/∂x²)b(x, t)²dt = [∂V/∂t + (∂V/∂x)a(x, t) + (1/2)(∂²V/∂x²)b(x, t)²]dt + (∂V/∂x)b(x, t)dz

so

dV = [∂V/∂t + (∂V/∂x)a(x, t) + (1/2)(∂²V/∂x²)b(x, t)²]dt + (∂V/∂x)b(x, t)dz

where the dt bracket is â(x, t) and the dz coefficient is b̂(x, t).

Proof of Ito's Lemma


Proof: Using a Taylor Expansion

dV = (∂V/∂t)dt + (1/2)(∂²V/∂t²)(dt)² + (∂V/∂x)dx + (1/2)(∂²V/∂x²)(dx)² + (∂²V/∂x∂t)dxdt + h.o.t.

Any term of order (dt)^{3/2} or higher is small relative to terms of order dt.

 [This is because (Δt)² is infinitely less important than (Δt)^{3/2}, which is infinitely less important than (Δt)]

Note that (dz)² = dt, since Δz = ε√Δt scales with √Δt. [there's a Law of Large Numbers making this true; this is math PhD land and not something Laibson knows or expects us to know]

 Hence we can eliminate:

(dt)² = h.o.t.
dxdt = a(x, t)(dt)² + b(x, t)dzdt = h.o.t.
(dx)² = b(x, t)²(dz)² + h.o.t. = b(x, t)²dt + h.o.t.

 Combining these results we have:

dV = (∂V/∂t)dt + (∂V/∂x)dx + (1/2)(∂²V/∂x²)b(x, t)²dt

dz is a Wiener process that depends only on t through random motion
dx is an Ito process that depends on x and t through random motion
V(x, t) is a value function (and an Ito Process) that depends on an Ito process dx and directly on t

Intuition: Drift in V through Curvature of V:

Even if we assume a(x, t) = 0 [no drift in the Ito Process] and assume ∂V/∂t = 0 [holding t fixed, V doesn't depend on t]

We still have E(dV) = (1/2)(∂²V/∂x²)b(x, t)²dt ≠ 0

 If V is concave, V is expected to fall due to variation in x [if convex, rise]

Ex: V(x) = ln x, hence V′ = 1/x and V″ = −1/x²; dx = αxdt + σxdz

dV = [∂V/∂t + (∂V/∂x)a(x, t) + (1/2)(∂²V/∂x²)b(x, t)²]dt + (∂V/∂x)b(x, t)dz
   = [0 + (1/x)αx − (1/2)(1/x²)(σx)²]dt + (1/x)σxdz
   = (α − (1/2)σ²)dt + σdz

Growth in V falls below α due to concavity of V

Intuition for proof: for Ito Processes, (dx)² behaves like b(x, t)²dt, so the effect of concavity is of order dt and can not be ignored when calculating the differential of V.

 Because of Jensen's inequality, if you are moving on the independent variable (x) axis randomly, expected value will go down [when V is concave]
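The ln x example above can be checked by brute force: simulate dx = αxdt + σxdz step by step (on x itself, not on ln x) and confirm that ln x drifts at α − σ²/2 rather than α. A minimal sketch with illustrative parameters:

```python
import numpy as np

# Check Ito's Lemma for V(x) = ln x under dx = alpha*x*dt + sigma*x*dz:
# E[ln x(t) - ln x(0)] = (alpha - sigma^2/2)*t, below alpha*t by the concavity term.
rng = np.random.default_rng(5)
alpha, sigma, dt, n, paths = 0.08, 0.3, 0.001, 1000, 50_000
x = np.full(paths, 1.0)
for _ in range(n):  # Euler-Maruyama steps on x itself
    dz = np.sqrt(dt) * rng.standard_normal(paths)
    x = x + alpha * x * dt + sigma * x * dz

t = n * dt  # = 1.0
print(np.log(x).mean(), (alpha - 0.5 * sigma**2) * t)  # both near 0.035
```

With α = 0.08 and σ = 0.3, ln x drifts at 0.035 per unit time, not 0.08: the −σ²/2 = −0.045 is exactly the Jensen's-inequality drag the proof isolates.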

Lecture Summary
Talked about a continuous time process (Brownian motion, a Wiener process, or its generalization as an Ito Process) that has these wonderful properties; it is the continuous time analog of what in discrete time are thought of as random walks, but it is even more general than this:
Has the perverse property that its path has infinite length over unit time, but when you look at how it changes over any discrete interval of time, it looks very natural.
Also learned how to study functions that take Ito Processes as arguments: the key tool here is Ito's Lemma. It uses a 2nd order Taylor expansion and throws out higher order terms. So we can characterize drift in such a function very simply and powerfully, which we put to use in the next lectures.

Lecture 9
Continuous Time Bellman Equation & Boundary Conditions

Continuous Time Bellman Equation


Final Form:

ρV(x, t)dt = max_u {w(u, x, t)dt + E[dV]}

Inserting Ito's Lemma

ρV(x, t)dt = max_u { w(u, x, t)dt + [∂V/∂t + (∂V/∂x)a + (1/2)(∂²V/∂x²)b(x, t)²]dt }


Next 1.5 lectures are about solving for V using this equation with Ito's Lemma. First, Merton. Then, a stopping problem.

Intuition

ρV(x, t)dt [required rate of return] = max_u { w(u, x, t)dt [instantaneous dividend] + E[dV] [instantaneous capital gain] }

 required rate of return = instantaneous return; instantaneous capital gain = instantaneous change in the program.

Derivation

 Towards Condensed Bellman

Let w(x, u, t) = instantaneous payoff function

Arguments: x is the state variable, u is the control variable, t is time

Let x′ = x + Δx and t′ = t + Δt. We'll end up thinking about what happens as Δt → 0

Just write down the Bellman equation as we did in discrete time:

V(x, t) = max_u { w(x, u, t)Δt + (1 + ρΔt)^{−1} EV(x′, t′) }

V(x, t) = max_u { w(x, u, t)Δt [payoff × time step] + (1 + ρΔt)^{−1} EV(x′, t′) [discounted value of continuation] }

3 simple steps of algebra with the (1 + ρΔt) term:

(1 + ρΔt)V(x, t) = max_u {(1 + ρΔt)w(x, u, t)Δt + EV(x′, t′)}

ρΔtV(x, t) = max_u {(1 + ρΔt)w(x, u, t)Δt + EV(x′, t′) − V(x, t)}

ρΔtV(x, t) = max_u { w(x, u, t)Δt + ρw(x, u, t)(Δt)² + EV(x′, t′) − V(x, t) }

Let Δt → 0. Terms of order (dt)² = 0.

ρV(x, t)dt = max_u {w(x, u, t)dt + E[dV]} (*)

 Substitute using Ito's Lemma

Substitute for E[dV] where

dV = [∂V/∂t + (∂V/∂x)a(x, u, t) + (1/2)(∂²V/∂x²)b(x, u, t)²]dt + (∂V/∂x)b(x, u, t)dz

since E[(∂V/∂x)b dz] = 0,

E[dV] = [∂V/∂t + (∂V/∂x)a + (1/2)(∂²V/∂x²)b²]dt

Substituting this expression into (*) we get

ρV(x, t)dt = max_u { w(u, x, t)dt + [∂V/∂t + (∂V/∂x)a + (1/2)(∂²V/∂x²)b(x, t)²]dt }

which is a partial differential equation (in x and t)

Interpretation

 Sequence Problem: V(x, t) = max E_t ∫_t^∞ e^{−ρ(τ−t)} w(x(τ), u(τ), τ)dτ

 Interpretation of Bellman (*) before substituting for Ito's Lemma:

ρV(x, t)dt [required rate of return] = max_u { w(x, u, t)dt [instantaneous dividend] + E[dV] [instantaneous capital gain] }

note that instantaneous capital gain ≡ instantaneous change in the program

And when we plug in Ito's Lemma:

Instantaneous capital gain = ∂V/∂t + (∂V/∂x)a + (1/2)(∂²V/∂x²)b(x, t)²

Application 1: Merton's Consumption

Merton's Consumption

 Consumer has 2 assets

Risk free: return r
Equity: return r + π with proportional variance σ²

 Invests share θ in equities, and consumes at rate c

Ito Process that characterizes the dynamics of x, her cash on hand:

dx = [rx + πθx − c]dt + σθxdz

 Intuition

a = [rx + πθx − c] and b = σθx

Gets rxdt every period on assets. A fraction θ is getting an additional equity premium (πθxdt). Consuming at rate c, so depletes assets at rate cdt.

Standard Deviation term: cost to holding equities is volatility. Volatility scales with the amount of assets in the risky category: σθxdz.

 aka "equation of motion for the state variable"

Distinguishing Variables

x is a stock (not in the sense of equity, but a quantity)

c is a flow (flow means rate).

Need to make sure units match (usually use annual in economics)

 i.e. spending $1000/48hrs is ~$180,000/year

He can easily consume at a rate that is above his total stock of wealth

θ is bounded between −∞ and ∞

Not between 0 and 1 as it was in discrete time

Bellman Equation

ρV(x, t)dt = max_{c,θ} { u(c)dt + [(∂V/∂x)[rx + πθx − c] + (1/2)(∂²V/∂x²)(σθx)²]dt }

 Note the drift term: earn [rx + πθx − c]

 So, maximization over both c and θ, so two FOCs

In the world of continuous time, these can be changed instantaneously.

(In principle, in every single instant, he could be picking a new consumption level c and asset allocation level θ)

It will turn out that in the CRRA world, he will prefer a constant asset allocation θ even though he is constantly changing his consumption

V doesn't depend directly on t, so the ∂V/∂t term was removed

If someone was finitely lived, then the value function would depend on t

 dt can be removed since notationally redundant

Solving with CRRA utility: u(c) = c^{1−γ}/(1−γ) [Lecture]

It turns out that in this problem, the value function inherits similar properties to the utility function:

If the given utility function is CRRA, u(c) = c^{1−γ}/(1−γ), then it can be proved that V(x) = φx^{1−γ}

 This is just a scaled version of the utility function (by φ)

 This gives us a single solution to the Bellman equation

Solving for Policy Functions [given value function V(x) = φx^{1−γ}]: find values for the policy functions that solve the Bellman Equation

Assume V(x) = φx^{1−γ}

 We need to know φ, θ and the optimal consumption policy rule for c

 The restriction V(x) = φx^{1−γ} allows us to do this

Taking V(x) = φx^{1−γ}, re-write the Bellman plugging in for u(c) = c^{1−γ}/(1−γ), ∂V/∂x, ∂²V/∂x²:

ρφx^{1−γ} = max_{c,θ} { c^{1−γ}/(1−γ) + (1−γ)φx^{−γ}[(r + πθ)x − c] − (1/2)γ(1−γ)φx^{−γ−1}(σθx)² }

Two FOCs, θ and c:

FOC_θ: (1−γ)φx^{−γ}πx − γ(1−γ)φx^{−γ−1}σ²θx² = 0

FOC_c: u′(c) = c^{−γ} = (1−γ)φx^{−γ}

Simplifying:

θ = π/(γσ²)

c = [(1−γ)φ]^{−1/γ} x

 Interpretation:
Nice result: Put more into equities the higher the equity premium π; with higher RRA γ and higher variability of equities σ², put less into equities

Why is the variance of equities replacing the familiar covariance term from the Asset Pricing lecture? In Merton's world, all the stochasticity is coming from equity returns.

Plugging back in, we get (the standard Merton consumption rule):

c = [ρ/γ + (1 − 1/γ)(r + π²/(2γσ²))] x

Special case: γ = 1 implies c = ρx

 So MPC ≈ .05 (Marginal Propensity to Consume)

 But we also get θ = 0.06/(1·(0.16)²) = 2.34

Saying to borrow money so as to hold 2.34 times wealth in equity [and therefore 1.34 times wealth in liabilities]
We don't know what's wrong here. This model doesn't match up with what wise souls think you should do.

Can we fix it by increasing γ?

 With γ = 5, θ = 0.06/(5·(0.16)²) = 0.47

Recommends 47% allocation into equities

Remember no one in the classroom had γ = 5

Typical retiree:
No liabilities; $600,000 of social security (bonds), $200,000 in house, less than $200,000 in 401(k)
401(k) usually 60% bonds and 40% stocks
Equity allocation is ≈ 8% of total assets
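The portfolio-share arithmetic in this section is easy to reproduce:

```python
# Merton's portfolio share theta = pi / (gamma * sigma^2) with the numbers above.
pi, sigma = 0.06, 0.16  # equity premium and equity volatility

for gamma in (1, 5):
    theta = pi / (gamma * sigma**2)
    print(gamma, round(theta, 2))  # gamma=1 -> 2.34, gamma=5 -> 0.47
```

Note the share is independent of wealth x: scaling wealth rescales both the premium earned and the variance cost by the same factor, so only γ, π, σ matter.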

Solving with Log utility: u(c) = ln(c) [PS5]

Given that the guess for the value function is V(x) = φ₀ + φ₁ln(x)

Set up is the same: ρV(x, t)dt = max_{c,θ} { u(c)dt + [(∂V/∂x)[rx + πθx − c] + (1/2)(∂²V/∂x²)(σθx)²]dt }

When we substitute for V(x) and u(c):

ρ[φ₀ + φ₁ln(x)] = max_{c,θ} { ln(c) + (φ₁/x)[(r + πθ)x − c] − (1/2)(φ₁/x²)(σθx)² }

FOCs:

FOC_c: 1/c = φ₁/x ⟹ c = x/φ₁

FOC_θ: θ = π/σ²

Plugging back into the Bellman:

ρφ₀ + ρφ₁ln(x) = ln(x) − ln(φ₁) − 1 + φ₁r + φ₁π²/(2σ²)

Observation to solve for parameters

 We can re-write the above with just constants on the RHS:

ρφ₁ln(x) − ln(x) = −ρφ₀ − ln(φ₁) − 1 + φ₁r + φ₁π²/(2σ²)

 Given that the RHS is constant, this will only hold if ρφ₁ = 1. [Otherwise the LHS would be increasing in x while the RHS won't increase, a contradiction]

Hence φ₁ = 1/ρ, and

ρφ₀ = ln(ρ) − 1 + r/ρ + π²/(2ρσ²)

 The Bellman equation is satisfied ⟺ φ₁ = 1/ρ and φ₀ solves the line above.

This means:

 Giving us φ₀ = (1/ρ)[ln(ρ) − 1 + r/ρ + π²/(2ρσ²)]

Optimal Policy

 Plugging back into the FOCs: c = ρx and θ = π/σ²
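The claimed solution can be verified by evaluating both sides of the stationary Bellman equation at several wealth levels; a minimal sketch, with illustrative values for ρ, r, π, σ (not from the notes):

```python
import numpy as np

# Check that V(x) = phi0 + (1/rho)*ln x, c = rho*x, theta = pi/sigma^2 satisfies
#   rho*V = ln(c) + V'(x)*[(r + pi*theta)*x - c] + 0.5*V''(x)*(sigma*theta*x)^2
rho, r, pi, sigma = 0.05, 0.02, 0.06, 0.16  # illustrative parameters

phi1 = 1.0 / rho
phi0 = (np.log(rho) - 1 + r / rho + pi**2 / (2 * rho * sigma**2)) / rho
theta = pi / sigma**2

for x in (0.5, 1.0, 10.0, 100.0):
    c = rho * x
    V = phi0 + phi1 * np.log(x)
    Vp, Vpp = phi1 / x, -phi1 / x**2
    rhs = np.log(c) + Vp * ((r + pi * theta) * x - c) + 0.5 * Vpp * (sigma * theta * x)**2
    print(abs(rho * V - rhs) < 1e-9)  # True at every x
```

The x-dependence cancels exactly, which is the matching argument above: the ln(x) coefficients force ρφ₁ = 1, and the constants pin down φ₀.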

General Continuous Time Problem


Most General Form

Equation of motion of the state variable: dx = a(x, u, t)dt + b(x, u, t)dz

 Note dx depends on the choice of control u

Using Ito's Lemma, Continuous Time Bellman Equation:

ρV(x, t) = w(x, u*, t) + ∂V/∂t + (∂V/∂x)a(x, u*, t) + (1/2)(∂²V/∂x²)b(x, u*, t)²

u* = u(x, t) = optimal value of the control variable

Properties: PDE and Boundary Conditions

The value function satisfies a 2nd order PDE

V is the 'dependent' variable; x and t are the 'independent' variables

The PDE will have a continuum of solutions

Need structure from economics to find the right solution to this equation — the solution that makes economic sense — from the infinite number of solutions that make mathematical sense.

These restrictions are called boundary conditions, which we need to solve the PDE

Need to use economic logic to derive boundary conditions

 Name of the game in these problems is to use economic logic to find the right solution, and eliminate all other solutions

 (Any policy generates a solution; we're not looking for any solution that corresponds to any policy, but for the solution that corresponds to the optimal policy)

[In Merton's problem, we exploited the 'known' functional form of the value function, which imposes boundary conditions]

Two key examples of boundary conditions in economics


1. Terminal Condition

Suppose the problem ends at date T

 i.e. have already agreed to sell the oil well on date T

Then we know that V(x, T) = Ω(x, T) ∀x [Note this is at date T]

 Where Ω(x, t) is a (known, exogenous) termination function. Note that it can be state dependent — i.e. change with the price of oil.

Can solve the problem using techniques analogous to backward induction

This restriction gives enough structure to pin down the solution to the PDE

2. Stationary ∞-horizon problem

Value function doesn't depend on t ⟹ becomes an ODE

ρV(x) = w(x, u*) + a(x, u*)V′ + (1/2)b(x, u*)²V″

Note the use of V′: since there is now only one argument for V, we can drop the ∂ notation

We will use Value Matching and Smooth Pasting

Motivating Example for Value Matching and Smooth Pasting:

Consider the optimal stopping problem:

V(x, t) = max { w(x, t)Δt + (1 + ρΔt)^{−1}EV(x′, t′), Ω(x, t) }

where Ω(x, t) is the termination payoff in the stopping problem

 the solution is characterized by the stopping rule: if x > x*(t) continue; if x ≤ x*(t) stop

Motivation: i.e. irreversibly closing a production facility

 Assume that dx = a(x, t)dt + b(x, t)dz

 In the continuation region, the value function is characterized using Ito's Lemma:

ρV(x, t)dt = w(x, t)dt + [∂V/∂t + (∂V/∂x)a(x, t) + (1/2)(∂²V/∂x²)b(x, t)²]dt

Value Matching

Value Matching: All continuous problems must have continuous value functions

V(x, t) = Ω(x, t) for all x, t such that x(t) ≤ x*(t) (at the boundary and to the left of the boundary)

 A discontinuity is not permitted, otherwise one could get a better payoff from not following the policy

 Intuition: an optimal value function has to be continuous at the stopping boundary.

If there was a discontinuity, it can't be optimal to quit (if V is above Ω) or to have continued so far (if V is below Ω)

 See graph [remember V on y axis, x on x-axis]

Smooth Pasting

Derivatives with respect to x must be equal at the boundary.

Using the problem above:

Smooth Pasting: V_x(x*(t), t) = Ω_x(x*(t), t)

 Intuition: V and Ω must have the same slope at the boundary.

Proof intuition: if the slopes are not equal at a convex kink, then one can show that the policy of stopping at the boundary is dominated by the policy of continuing for just another instant and then stopping. (It doesn't mean the latter is the optimal policy, but it shows that the initial policy of stopping at the boundary is sub-optimal)

If the value functions don't smooth paste at x*(t), then stopping at x*(t) can't be optimal:

Better to stop an instant earlier or later

If there is a (convex) kink at the boundary, then the gain from waiting is of order √Δt and the cost from waiting is of order Δt.

So there can't be a kink at the boundary.

Think of the graph [remember V on y axis, x on x-axis]

Will need a 3rd boundary condition from the following lecture

Lecture 10
Third Boundary Condition, Stopping Problem and ODE Strategies

Statement of Stopping Problem

Assume that a continuous time stochastic process x(t) is an Ito Process, dx = adt + bdz

 i.e. x is the price of a commodity produced by this firm

While in operation, the firm has flow profit w(x) = x

 (Or w(x) = x − c if there is a cost of production)

Assume the firm can costlessly (permanently) exit the industry and realize termination payoff Ω = 0

Intuitively, the firm will have a stationary stopping rule:

 if x > x* continue

 if x ≤ x* stop (exit)

Draw the Value Function (Ω = 0 in the below graph)

Characterizing Solution as an ODE

Overall Value Function

V(x) = max { xΔt + (1 + ρΔt)^{−1}EV(x′), 0 }

 Where x′ = x(t + Δt), and ρ is the interest rate

In the stopping region: V(x) = 0

In the continuation region:

 General: ρV(x)dt = xdt + E(dV)

 Since: V(x) = xΔt + (1 + ρΔt)^{−1}EV(x′)

⟹ (1 + ρΔt)V(x) = (1 + ρΔt)xΔt + EV(x′)

⟹ ρΔtV(x) = (1 + ρΔt)xΔt + EV(x′) − V(x).

Let Δt → 0 and multiply out. Terms of order (dt)² = 0.

 Substitute for E(dV) using Ito's Lemma

2nd Order ODE:

ρV = x + aV′ + (b²/2)V″

 Interpretation

Value function must satisfy this differential equation in the continuation region

We've derived a 2nd order ODE that restricts the form of the value function in the continuation region

2nd order because the highest order derivative is 2nd order; ordinary because V only depends on one variable (x)

ρV [required return] = x [dividend] + aV′ [drift in argument] + (b²/2)V″ [drift from Jensen's inequality]

the capital-gain terms aV′ + (b²/2)V″ = how V is instantly drifting

Term (b²/2)V″ is the drift resulting from Brownian motion 'jiggling and jaggling': it is negative if V is concave, and positive if V is convex (as in this case).

Intuition: a is almost always negative. [don't confuse Ω with x*: V(x*) = Ω = 0 is the value you get when stopping; but you can decide on some policy x* for when to stop]

 Why stay in even with negative drift? Option value — love the ability to take the chance at future profits. Even if drift is negative, the firm will still stay in, since through Brownian motion it may end up profitable again.

And now, we have a continuum of solutions to the ODE, so we need restrictions to get to a single solution

Third Boundary Condition


Boundary Conditions

*Three unknowns we need to pin down: two constants in the 2nd order ODE, and the free boundary (x*)

Value Matching: V(x*) = 0

Smooth Pasting: V′(x*) = 0

Third Boundary Condition for large x

 As x → ∞ the option value of exiting goes to zero

It is so unlikely that profits would become negative (and if it happened, it would likely be very heavily discounted) that the value of being able to shut down is close to zero

As x gets very large, the value function approaches that of a firm that does not have the option to shut down (which would mean it would theoretically need to keep running at major losses)

 Hence V converges to the value associated with the policy of never exiting the industry [equation derived later]:

lim_{x→∞} V(x) / [(1/ρ)(x + a/ρ)] = 1

The denominator is the value function of a firm that can't shut down

[Or can think of it as lim_{x→∞} V(x) = (1/ρ)(x + a/ρ)]

 We can also write: lim_{x→∞} V′(x) = 1/ρ

Intuition: at large x, how does the value function change with one more unit of price x?

Use the sequence problem:

 Having one more unit of price produces the same exact sequence shifted up by one unit all the way to infinity. What is the additional value of a firm with price one unit higher?

∫₀^∞ e^{−ρt}(x(t) + 1)dt [new oil well] − ∫₀^∞ e^{−ρt}x(t)dt [old oil well] = ∫₀^∞ e^{−ρt}·1·dt = 1/ρ

So if I raise price by 1, I raise the value of the firm by 1/ρ

Solving 2nd Order ODEs


Definitions

Goal: Solve for F(x), with independent variable x

Complete Equation

 Generic 2nd order ODE:

F″(x) + A(x)F′(x) + B(x)F(x) = C(x)

Reduced Equation

C(x) is replaced by 0:

F″(x) + A(x)F′(x) + B(x)F(x) = 0

Theorem 4.1

Any solution F(x) of the reduced equation can be expressed as a linear combination of any two linearly independent solutions of the reduced equation, F₁ and F₂:

F(x) = C₁F₁(x) + C₂F₂(x), provided F₁, F₂ are linearly independent

 Note that two solutions are linearly independent if there do not exist constants A₁ and A₂ (not both zero) such that A₁F₁(x) + A₂F₂(x) = 0

Theorem 4.2

The general solution of the complete equation is the sum of:

 any particular solution of the complete equation

 and the general solution of the reduced equation

Solving Stopping Problem


1. State Complete and Reduced Equation

Complete Equation

 Differential equation that characterizes the continuation region:

ρV = x + aV′ + (b²/2)V″

Reduced equation

0 = −ρV + aV′ + (b²/2)V″

2. Guess Form of Solution to find General Solution to Reduced Equation

First challenge is to nd solutions of the reduced equation

Consider class

erx

To conrm this is in fact a solution, dierentiate and plug in to nd:

0 = erx + arerx +

b2 2 rx
r e
2

 This implies (dividing through by erx )

0 = + ar +

b2 2
r
2

Apply quadratic formula

r=
r+

Let

General solution to reduced equation

represent the positive root and let

p
a2 + 2b2
b2

represent the negative root

 Any solution to the reduced equation can be expressed


+

C + er + C er
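A quick numerical check of the roots (a sketch with hypothetical parameters ρ, a, b, not values from the notes): compute r± from the quadratic formula and verify each satisfies the characteristic equation 0 = −ρ + ar + (b²/2)r².

```python
import math

# Hypothetical parameters (not from the notes).
rho, a, b = 0.05, 0.1, 0.3

disc = math.sqrt(a**2 + 2 * rho * b**2)
r_plus = (-a + disc) / b**2    # positive root
r_minus = (-a - disc) / b**2   # negative root

# Both roots must satisfy the characteristic equation 0 = -rho + a*r + (b^2/2)*r^2.
for r in (r_plus, r_minus):
    residual = -rho + a * r + (b**2 / 2) * r**2
    print(r, residual)   # residual ~ 0 for both roots
```

Since √(a² + 2ρb²) > |a| whenever ρ, b > 0, one root is always positive and one always negative, which is what lets the boundary conditions pin down the constants later.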

3. Find Particular Solution to Complete Equation

ρV = x + aV' + (b²/2)V''

Want a solution to the complete equation
- Consider a specific example: the payoff of the policy "never leave the industry"
- The value of this policy is

E ∫₀^∞ e^{−ρt} x(t) dt

In solving for the value of this policy without the expectation, we'll also need to know E[x(t)]:

E[x(t)] = E[x(0) + ∫₀^t dx(s)] = E[x(0) + ∫₀^t (a ds + b dz(s))] = E[x(0) + at + ∫₀^t b dz(s)]

[E ∫₀^t b dz(s) = 0, as it is the Brownian motion term]. Hence:

E[x(t)] = x(0) + at

The value of the policy in V(x) = ... form, without the expectation:

V(x) = E ∫₀^∞ e^{−ρt} x(t) dt = ∫₀^∞ e^{−ρt} [x(0) + at] dt
     = ∫₀^∞ e^{−ρt} x(0) dt + ∫₀^∞ e^{−ρt} at dt = x(0)/ρ + ∫₀^∞ e^{−ρt} at dt

- The remaining integral is derived by integration by parts: ∫u dv = uv − ∫v du
- Take dv = e^{−ρt} dt and u = at, so that v = −(1/ρ)e^{−ρt} and du = a dt. Then

∫₀^∞ e^{−ρt}(at) dt = [−at(1/ρ)e^{−ρt}]₀^∞ + ∫₀^∞ (1/ρ)e^{−ρt} a dt = 0 + a·(1/ρ²) = a/ρ²

[Note that [−at(1/ρ)e^{−ρt}]₀^∞ = 0, and ∫₀^∞ (1/ρ)e^{−ρt} a dt = [−(a/ρ²)e^{−ρt}]₀^∞ = a/ρ²]

Hence our candidate particular solution takes the form

V(x) = x/ρ + a/ρ²

Confirm that this is a solution to the complete equation ρV = x + aV' + (b²/2)V'':

ρ(x/ρ + a/ρ²) = x + a/ρ = x + a·(1/ρ) + (b²/2)·0     [since V'(x) = 1/ρ and V''(x) = 0]
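The confirmation step can be replicated numerically. A minimal sketch with hypothetical parameters (not values from the notes), checking ρV(x) = x + aV'(x) + (b²/2)V''(x) at several points:

```python
# Check that V(x) = x/rho + a/rho^2 solves rho*V = x + a*V' + (b^2/2)*V''.
# Hypothetical parameters (not from the notes).
rho, a, b = 0.05, 0.1, 0.3

def V(x):
    return x / rho + a / rho**2

V_prime = 1 / rho   # V'(x) = 1/rho for all x
V_double = 0.0      # V''(x) = 0

for x in (-5.0, 0.0, 1.0, 10.0):
    lhs = rho * V(x)
    rhs = x + a * V_prime + (b**2 / 2) * V_double
    print(lhs - rhs)   # ~ 0 at every x
```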

4. Put Pieces Together

General solution to the reduced equation

C⁺e^{r⁺x} + C⁻e^{r⁻x}

with roots

r⁺, r⁻ = (−a ± √(a² + 2ρb²)) / b²

Particular solution to the complete equation

x/ρ + a/ρ²

Hence the general solution to the complete equation is
- [Sum of particular solution to the complete equation and general solution to the reduced equation]

V(x) = x/ρ + a/ρ² + C⁺e^{r⁺x} + C⁻e^{r⁻x}

Now just need to apply boundary conditions

5. Apply Boundary Conditions

3 Conditions
- Value Matching: V(x*) = 0
- Smooth Pasting: V'(x*) = 0
- lim_{x→∞} V'(x) = 1/ρ

This gives us three conditions on the general solution

C⁺ = 0     [from lim_{x→∞} V'(x) = 1/ρ; otherwise the e^{r⁺x} term would explode for large x]

Which then implies:

V(x*) = x*/ρ + a/ρ² + C⁻e^{r⁻x*} = 0     [Value Matching] (1)
V'(x*) = 1/ρ + r⁻C⁻e^{r⁻x*} = 0          [Smooth Pasting] (2)

Simplifying from (1) and (2)
- Plugging (1) into (2)
  - (1) implies C⁻e^{r⁻x*} = −(1/ρ)(x* + a/ρ)
  - Plugging this into (2) we get 1/ρ − r⁻(1/ρ)(x* + a/ρ) = 0
  - Simplifying: 1 = r⁻(x* + a/ρ)
- Gives us

x* = 1/r⁻ − a/ρ

- Plugging in for r⁻ = (−a − √(a² + 2ρb²))/b²
- We get our solution for x*:

x* = −b²/(a + √(a² + 2ρb²)) − a/ρ < 0     (*)

The general solution to the complete equation can be expressed as: the sum of a particular solution to the complete differential equation and the general solution to the reduced equation

- Note: this is usually done in search problems by looking for the particular solution of the policy "never leave". Then you can get a general solution, and then use value matching and smooth pasting to come up with the threshold value.
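The whole construction can be checked end to end. A sketch with hypothetical parameters (not values from the notes): compute x* from the derived formula, back out C⁻ from value matching, and confirm that both value matching and smooth pasting then hold.

```python
import math

# Hypothetical parameters (not from the notes).
rho, a, b = 0.05, 0.1, 0.3

disc = math.sqrt(a**2 + 2 * rho * b**2)
r_minus = (-a - disc) / b**2          # negative root

# Exit threshold from the derivation: x* = 1/r_minus - a/rho
x_star = 1 / r_minus - a / rho

# Pin down C_minus from value matching, V(x*) = 0:
C_minus = -(x_star / rho + a / rho**2) / math.exp(r_minus * x_star)

def V(x):
    return x / rho + a / rho**2 + C_minus * math.exp(r_minus * x)

def V_prime(x):
    return 1 / rho + r_minus * C_minus * math.exp(r_minus * x)

print(x_star)           # negative: the firm tolerates some losses before exiting
print(V(x_star))        # ~ 0  (value matching)
print(V_prime(x_star))  # ~ 0  (smooth pasting)
```

That smooth pasting comes out to zero automatically, even though C⁻ was chosen only to satisfy value matching, is exactly what the formula for x* guarantees.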

Lecture 12
Discrete Adjustment - Lumpy Investment

Lumpy Investment Motivation

Plant-level data suggest individual establishments do not smooth adjustment of their capital stocks.

- Instead, investment is lumpy. Ex: firms build a new plant all at once, not a little bit each year

Doms & Dunne 1993, covering 1972-1989, examine the ratio

(largest investment episode, firm i) / (total investment, firm i)

- If investment were spread evenly, this ratio would be 1/18
- Instead the average value was 1/4. Hence, on average firms did 25% of their investment in 1 year over an 18-year period

We go from convex to affine cost functions:

- smooth convex cost functions assume small adjustments are costlessly reversible
- use affine functions, where small adjustments are costly to reverse

Lumpy Investment
Capital Adjustment Costs Notation

Capital adjustment has fixed and variable costs

- Upward Adjustment Cost: C_U + c_U·I, where C_U is the fixed cost and c_U is the variable cost
- Downward Adjustment Cost: C_D + c_D·|I|, where C_D is the fixed cost and c_D is the variable cost
  - C_D + c_D|I| is the general case. Here we usually assume c_D > 0 and I < 0, so we get C_D + c_D·(−I)

Firm's Problem

Firm loses profits if the actual capital stock deviates from the target capital stock

- Deviations (x − x*) generate an instantaneous negative payoff
- Functional form:

−(b/2)(x − x*)² = −(b/2)X²

- where X = (x − x*)

X is an Ito Process between adjustments:

dX = α dt + σ dz

- where dz are Brownian increments

Value Function Representation

Overall Value Function

V(X) = max E [ ∫_{τ=t}^∞ e^{−ρ(τ−t)} (−(b/2)X(τ)²) dτ  −  Σ_{n=1}^∞ e^{−ρ(τ(n)−t)} A(n) ]

A(n) = { C_U + c_U·I_n     if I_n > 0
       { C_D + c_D·|I_n|   if I_n < 0

- Notation: τ(n) = date of nth adjustment; A(n) = cost of nth adjustment; I_n = investment in nth adjustment

Adjustment policy:

Below x*: Adjust capital at X = U. Adjust to X = u.
Above x*: Adjust capital at X = D. Adjust to X = d.

Value Function in all 3 Regions

1. Between Adjustments:

- Bellman Equation

ρV(X) = −(b/2)X² + αV'(X) + (σ²/2)V''(X)

- Using Ito's Lemma: E[dV] = [∂V/∂t + (∂V/∂x)α + ½(∂²V/∂x²)σ²] dt
- Functional form solved for below [PS6 2.d]

2. Action Region Below

- [If X ≤ U]

V(X) = V(u) − [C_U + c_U(u − X)]

- This implies V'(X) = c_U for X ≤ U

3. Action Region Above

- [If X ≥ D]

V(X) = V(d) − [C_D + c_D(X − d)]

- This implies V'(X) = −c_D for X ≥ D

Boundary Conditions

1. Value Matching

lim_{X→U⁻} V(X) = V(u) − [C_U + c_U(u − U)] = V(U) = lim_{X→U⁺} V(X)

lim_{X→D⁺} V(X) = V(d) − [C_D + c_D(D − d)] = V(D) = lim_{X→D⁻} V(X)

2. Smooth Pasting

lim_{X→U⁻} V'(X) = c_U = V'(U) = lim_{X→U⁺} V'(X)

lim_{X→D⁺} V'(X) = −c_D = V'(D) = lim_{X→D⁻} V'(X)

First Order Conditions

Intuition: When making a capital adjustment, willing to move until the marginal value of moving = the marginal cost

- This gives us two important optimality conditions:

V'(u) = c_U
V'(d) = −c_D

Solving for Functional Form of V(X) in Continuation Region

PS6 2.d

Question: Show that

V(X) = −(b/2) [ X²/ρ + 2αX/ρ² + σ²/ρ² + 2α²/ρ³ ]

is a solution to the continuation region equation ρV(X) = −(b/2)X² + αV'(X) + (σ²/2)V''(X). Show that this is the expected present value of the firm's payoff stream assuming that adjustment costs are infinite.

This is guess and check, where we've been provided the guess:

V(X) = −(b/2) [ X²/ρ + 2αX/ρ² + σ²/ρ² + 2α²/ρ³ ]

Calculate V'(X) and V''(X):

V'(X) = −(b/2) [ 2X/ρ + 2α/ρ² ],     V''(X) = −b/ρ

We plug back into the continuous time Bellman equation ρV(X) = −(b/2)X² + αV'(X) + (σ²/2)V''(X):

−(b/2)X² + α·(−(b/2))[2X/ρ + 2α/ρ²] + (σ²/2)(−b/ρ)
  = −(b/2) [ X² + 2αX/ρ + 2α²/ρ² + σ²/ρ ]
  = ρ · (−(b/2)) [ X²/ρ + 2αX/ρ² + σ²/ρ² + 2α²/ρ³ ] = ρV(X)

So our guessed solution works as a solution to the continuous-time Bellman equation.

We also know that this works as the expected present value of the firm's payoff stream, given that it is the solution to the continuous time Bellman equation for the continuation region: the infinite adjustment costs mean the firm never adjusts, so the thresholds satisfy U = −∞ and D = +∞, hence the solution holds for all X ∈ ℝ.
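The guess-and-check step above can be replicated numerically. A sketch with hypothetical parameters (not values from the notes), verifying ρV(X) = −(b/2)X² + αV'(X) + (σ²/2)V''(X) at several points using the analytic derivatives:

```python
# Check the PS6 2.d guess
#   V(X) = -(b/2)[X^2/rho + 2*alpha*X/rho^2 + sigma^2/rho^2 + 2*alpha^2/rho^3]
# against the continuation-region Bellman equation.
# Hypothetical parameters (not from the notes).
rho, alpha, sigma, b = 0.05, 0.1, 0.2, 1.0

def V(X):
    return -(b / 2) * (X**2 / rho + 2 * alpha * X / rho**2
                       + sigma**2 / rho**2 + 2 * alpha**2 / rho**3)

def V_prime(X):
    return -(b / 2) * (2 * X / rho + 2 * alpha / rho**2)

V_double = -b / rho   # V'' is constant

for X in (-2.0, 0.0, 0.5, 3.0):
    lhs = rho * V(X)
    rhs = -(b / 2) * X**2 + alpha * V_prime(X) + (sigma**2 / 2) * V_double
    print(lhs - rhs)   # ~ 0 at every X
```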

Appendix: Math Reminders

Infinite Sum of a Geometric Series with |x| < 1:

Σ_{k=0}^∞ x^k = 1/(1 − x)

L'Hopital's Rule

If lim_{x→c} f(x) = lim_{x→c} g(x) = 0 or ±∞, and lim_{x→c} f'(x)/g'(x) exists, and g'(x) ≠ 0 near c, then

lim_{x→c} f(x)/g(x) = lim_{x→c} f'(x)/g'(x)
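A one-line numerical reminder of the geometric-series formula (with a hypothetical x = 0.9, not a value from the notes):

```python
# Partial sums of the geometric series converge to 1/(1-x) for |x| < 1.
x = 0.9
partial = sum(x**k for k in range(500))   # 0.9^500 is ~1e-23, so 500 terms suffice
exact = 1 / (1 - x)

print(partial, exact)   # both ~ 10.0
```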

Probability

Expectation of a lognormal RV: when ã is a normal random variable,

E[exp(ã)] = exp( E[ã] + ½ Var[ã] )

If x ~ N(μ, σ²), then Ax + B ~ N(Aμ + B, A²σ²), so

E[exp(Ax + B)] = exp( Aμ + B + ½ A²σ² )

[Since x ~ N(μ, σ²) implies E[exp(x)] = exp( E[x] + ½ Var[x] ) = exp( μ + ½σ² )]

In the case of asset pricing, R_t^i has stochastic returns

R_t^i = exp( r_t^i + σ^i ε_t^i − ½[σ^i]² )

where ε_t^i has unit variance. Hence ε_t^i is a normally distributed random variable: ε_t^i ~ N(0, 1)

From above we see that this implies:

r_t^i + σ^i ε_t^i − ½[σ^i]² ~ N( r_t^i − ½[σ^i]², [σ^i]² )

Then:

E[R_t^i] = E[ exp( r_t^i + σ^i ε_t^i − ½[σ^i]² ) ]

which, as we've seen, is the expectation of exp(a normal random variable), so we can use

E[R_t^i] = exp( mean + ½ Var ) = exp( r_t^i − ½[σ^i]² + ½[σ^i]² ) = exp( r_t^i )
Covariance:
If A and B are random variables, then

V (A + B) = V (A) + V (B) + 2Cov(A, B)


Log Approximation:

x:
ln(1 + x) x
1 + x ex
For small

