Martingales.
The second inequality in (5.2) follows from the fact that |x| is a convex function of x, and therefore |X_j| is a sub-martingale. In particular E[|X_n| | F_j] ≥ |X_j| a.e. P and E_j ∈ F_j. Summing up (5.2) over j = 1, ..., n we obtain the theorem.
Remark 5.6. We could have started with
\[
P(E_j) \le \frac{1}{\ell^p}\int_{E_j} |X_j|^p\,dP
\]
and obtained for p ≥ 1
\[
P(E_j) \le \frac{1}{\ell^p}\int_{E_j} |X_n|^p\,dP. \tag{5.3}
\]
Proof. Let us denote the tail probability by T(ℓ) = P{Y ≥ ℓ}. Then, with 1/p + 1/q = 1, i.e. (p − 1)q = p,
\[
\begin{aligned}
\int Y^p\,dP &= -\int_0^\infty y^p\,dT(y) = p\int_0^\infty y^{p-1}\,T(y)\,dy &&\text{(integrating by parts)}\\
&\le p\int_0^\infty y^{p-1}\Big[\int_{Y\ge y} X\,dP\Big]\,\frac{dy}{y} &&\text{(by assumption)}\\
&= p\int X\Big[\int_0^{Y} y^{p-2}\,dy\Big]\,dP &&\text{(by Fubini's theorem)}\\
&= \frac{p}{p-1}\int X\,Y^{p-1}\,dP\\
&\le \frac{p}{p-1}\Big[\int X^p\,dP\Big]^{\frac{1}{p}}\Big[\int Y^{q(p-1)}\,dP\Big]^{\frac{1}{q}} &&\text{(by Hölder's inequality)}\\
&= \frac{p}{p-1}\Big[\int X^p\,dP\Big]^{\frac{1}{p}}\Big[\int Y^{p}\,dP\Big]^{\frac{p-1}{p}}.
\end{aligned}
\]
This simplifies to
\[
\int Y^p\,dP \le \Big(\frac{p}{p-1}\Big)^{p}\int X^p\,dP
\]
provided ∫ Y^p dP is finite. In general, given Y, we can truncate it at level N to get Y_N = min(Y, N), and for 0 < ℓ ≤ N,
\[
P\{Y_N \ge \ell\} = P\{Y \ge \ell\} \le \frac{1}{\ell}\int_{Y\ge\ell} X\,dP = \frac{1}{\ell}\int_{Y_N\ge\ell} X\,dP
\]
with P{Y_N ≥ ℓ} = 0 for ℓ > N. This gives us uniform bounds on ∫ Y_N^p dP and we can pass to the limit. So we have the strong implication that the finiteness of ∫ X^p dP implies the finiteness of ∫ Y^p dP.
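As an illustration, the bound just derived can be checked numerically. The Python sketch below (with arbitrarily chosen parameters) estimates both sides of ∫ Y^p dP ≤ (p/(p−1))^p ∫ X^p dP for the simple ±1 random walk, taking X = |X_n| and Y = max_{1≤j≤n} |X_j|.

import numpy as np

# Numerical sanity check (illustrative parameters):
#   E[Y^p] <= (p/(p-1))^p E[X^p],  X = |X_n|,  Y = max_{1<=j<=n} |X_j|,
# for the simple +/-1 random walk martingale.
rng = np.random.default_rng(0)
p = 2.0
n_steps, n_paths = 100, 50_000

steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)      # X_1, ..., X_n for each path
Y = np.abs(paths).max(axis=1)         # running maximum of |X_j|
X = np.abs(paths[:, -1])              # |X_n|

lhs = (Y ** p).mean()
rhs = (p / (p - 1)) ** p * (X ** p).mean()
print(f"E[Y^p] ~ {lhs:.1f}  <=  (p/(p-1))^p E[X^p] ~ {rhs:.1f}")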
\[
E[X_n^2] = E[X_0^2] + \sum_{j=1}^{n} E[Y_j^2].
\]
If in addition sup_n E[X_n^2] < ∞, then show that there is a random variable X such that
\[
\lim_{n\to\infty} E\big[\,|X_n - X|^2\,\big] = 0.
\]
Proof. Suppose ‖X_n‖_p is uniformly bounded. For p > 1, since L_p is the dual of L_q with 1/p + 1/q = 1, bounded sets are weakly compact. See [7] or [3]. We can therefore choose a subsequence X_{n_j} that converges weakly in L_p to a limit in the weak topology. We call this limit X. Then consider A ∈ F_n for some fixed n. The function 1_A(ω) ∈ L_q, and
\[
\int_A X\,dP = \langle 1_A, X\rangle = \lim_j \langle 1_A, X_{n_j}\rangle = \lim_j \int_A X_{n_j}\,dP = \int_A X_n\,dP.
\]
The last equality follows from the fact that {X_n} is a martingale, A ∈ F_n and n_j ≥ n eventually. It now follows that X_n = E[X | F_n]. We can now apply the preceding theorem.
Exercise 5.3. For p = 1 the result is false. Example 5.1 gives us at the same time a counterexample: an L_1-bounded martingale that does not converge in L_1 and so cannot be represented as X_n = E[X | F_n].
We can show that the convergence in the preceding theorems is also valid almost everywhere.
Theorem 5.7. Let X ∈ L_p for some p ≥ 1. Then the martingale X_n = E[X | F_n] converges to X for almost all ω with respect to P.
Proof. From Hölder's inequality ‖X‖_1 ≤ ‖X‖_p. Clearly it is sufficient to prove the theorem for p = 1. Let us denote by M ⊂ L_1 the set of functions X ∈ L_1 for which the theorem is true. Clearly M is a linear subset of L_1. We will prove that it is closed in L_1 and that it is dense in L_1. If we denote by M_n the space of F_n-measurable functions in L_1, then M_n is a closed subspace of L_1. By standard approximation theorems ∪_n M_n is dense in L_1. Since it is obvious that M ⊃ M_n for every n, it follows that M is dense in L_1. Let Y_j ∈ M ⊂ L_1 and Y_j → X in L_1. Let us define Y_{n,j} = E[Y_j | F_n]. With X_n = E[X | F_n], by Doob's inequality (5.1) and Jensen's inequality (4.2),
\[
P\Big[\sup_{1\le n\le N}|X_n| \ge \ell\Big] \le \frac{1}{\ell}\int_{\{\omega:\,\sup_{1\le n\le N}|X_n|\ge\ell\}} |X_N|\,dP \le \frac{1}{\ell}\,E\big[\,|X_N|\,\big] \le \frac{1}{\ell}\,E\big[\,|X|\,\big].
\]
\[
\limsup_{n} X_n - \liminf_{n} X_n \le \big[\limsup_{n} Y_{n,j} - \liminf_{n} Y_{n,j}\big] + 2\sup_{n}|X_n - Y_{n,j}| = 2\sup_{n}|X_n - Y_{n,j}|.
\]
Here we have used the fact that Y_j ∈ M for every j and hence limsup_n Y_{n,j} = liminf_n Y_{n,j} almost everywhere. Finally, for any ε > 0,
\[
P\Big[\limsup_{n} X_n - \liminf_{n} X_n \ge \varepsilon\Big] \le P\Big[\sup_{n}|X_n - Y_{n,j}| \ge \frac{\varepsilon}{2}\Big] \le \frac{2}{\varepsilon}\,E\big[\,|X - Y_j|\,\big],
\]
and the left hand side is in fact 0, since it is independent of j and the term on the right of the second line tends to 0 as j → ∞.
1. (Y_n, F_n) is a martingale.
3. A_1 ≥ 0.
and
X_j = (Y_j + X_j) − Y_j
does it!
We can always assume that our nonnegative martingale has its expectation equal to 1 because we can always multiply by a suitable constant. Here is a way in which such martingales arise. Suppose we have a probability space (Ω, F, P) and an increasing family of sub-σ-fields F_n of F that generate F. Suppose Q is another probability measure on (Ω, F) which may or may not be absolutely continuous with respect to P on F. Let us suppose however that Q << P on each F_n, i.e. whenever A ∈ F_n and P(A) = 0, it follows that Q(A) = 0. Then the sequence of Radon-Nikodym derivatives
\[
X_n = \frac{dQ}{dP}\Big|_{F_n}
\]
is a nonnegative martingale with respect to (Ω, F_n, P), and E[X_n] = Q(Ω) = 1.
\[
\Omega = \prod_{j=1}^{\infty}\mathbb{R}\,;\qquad F_n = \sigma[x_1, \dots, x_n]\,;\qquad X_j(\omega) = x_j\,;\qquad F_m \subset F_n \ \text{if}\ m < n.
\]
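For a concrete instance (an assumed example with particular parameters), let P make the coordinates independent fair coin tosses and let Q give heads probability q; on F_n the derivative dQ/dP is the likelihood ratio of the first n tosses. The Python sketch below checks numerically that it is a nonnegative martingale under P with expectation 1.

import numpy as np

# Assumed setting: coin tosses, P = Bernoulli(p), Q = Bernoulli(q).
# On F_n the Radon-Nikodym derivative dQ/dP is the likelihood ratio
# of the first n tosses, a nonnegative martingale under P with E[X_n] = 1.
rng = np.random.default_rng(1)
p, q = 0.5, 0.6
n, n_paths = 50, 100_000

xi = rng.random((n_paths, n)) < p                 # tosses under P
ratios = np.where(xi, q / p, (1 - q) / (1 - p))   # per-toss likelihood ratio
X = np.cumprod(ratios, axis=1)                    # X_1, ..., X_n along each path

print("E[X_n] for n = 1, 10, 50:", X[:, [0, 9, 49]].mean(axis=0))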
Exercise 5.7. Show that if τ_1, τ_2 are stopping times, so are max(τ_1, τ_2) and min(τ_1, τ_2). In particular any stopping time τ is an increasing limit of bounded stopping times τ_n(ω) = min(τ(ω), n).
Exercise 5.8. Verify that for any stopping time τ, F_τ is indeed a sub-σ-field, i.e. is closed under countable unions and complementation. If τ(ω) ≡ k then F_τ = F_k. If τ_1 ≤ τ_2 are stopping times, F_{τ_1} ⊂ F_{τ_2}. Finally, if τ is a stopping time then it is F_τ-measurable.
E[X_{τ_2} | F_{τ_1}] = X_{τ_1} a.e.
\[
E\big[\,X_k\,|\,F_\tau\,\big] = X_\tau \quad\text{a.e.}\tag{5.6}
\]
X_n = ξ_1 + ξ_2 + ⋯ + ξ_n for n ≥ 1
Exercise 5.12. It does not mean that we can never consider stopping times that are unbounded. Let τ be an unbounded stopping time. For every k, τ_k = min(τ, k) is a bounded stopping time and E[X_{τ_k}] = 0 for every k. As k → ∞, τ_k → τ and X_{τ_k} → X_τ. If we can establish uniform integrability of X_{τ_k} we can pass to the limit. In particular if S(ω) = sup_{0≤n≤τ(ω)} |X_n(ω)| is integrable, then sup_k |X_{τ_k}(ω)| ≤ S(ω) and therefore E[X_τ] = 0.
E[X_{τ_2} | F_{τ_1}] = X_{τ_1} a.e.
Exercise 5.14. The previous exercise needs the fact that if τ_n ↑ τ are stopping times, then σ{∪_n F_{τ_n}} = F_τ. Prove it.
Exercise 5.15. Let us go back to the earlier exercise (Exercise 5.11) where we had X_n = ξ_1 + ⋯ + ξ_n as a sum of n independent random variables taking the values ±1 with probability 1/2. Show that if τ is a stopping time with E[τ] < ∞, then S(ω) = sup_{1≤n≤τ(ω)} |X_n(ω)| is square integrable and therefore E[X_τ] = 0. [Hint: Use the fact that X_n^2 − n is a martingale.]
which could very well have lots of 0's at the end. In any case the first few terms correspond to upcrossings and each term is at least (b − a) and there
are U(a, b) of them. Before the 0's begin there may be at most one nonzero term which is an incomplete upcrossing, i.e. when τ_{2ℓ−1} < n ≤ τ_{2ℓ} for some ℓ. It is then equal to (X_n − X_{τ_{2ℓ−1}}) ≥ X_n − a. If on the other hand we end in the middle of a downcrossing, i.e. τ_{2ℓ} < n ≤ τ_{2ℓ+1}, there is no incomplete upcrossing. Therefore
\[
D(\omega) \ge (b - a)\,U(a, b) + R_n(\omega), \qquad
R_n(\omega) = \begin{cases} 0 & \text{if } \tau_{2\ell} < n \le \tau_{2\ell+1}\\ (X_n - a) & \text{if } \tau_{2\ell-1} < n \le \tau_{2\ell}.\end{cases}
\]
\[
X^0_n = X^0_{n-1} + a_{n-1}\,Y_n, \qquad \text{for } n \ge 1
\]
where a_j is F_j-measurable, F_j being the σ-field generated by Y_1, ..., Y_j. Calculate E[(X^0_n)^2].
\[
V_N = V_0 + \sum_{j=0}^{N-1} a_j\,(X_{j+1} - X_j)
\]
at time N equals the claim f(X_N) under every conceivable behavior of the price movements X_1, X_2, ..., X_N. If the claim can be exactly replicated starting from an initial capital of V_0, then V_0 becomes the price of that option. Anyone could sell the option at that price, use the proceeds as capital and follow the strategy dictated by the coefficients a_0, ..., a_{N−1}, and have exactly enough to pay off the claim at time N. Here we are ignoring transaction costs as well as interest rates. It is not always true that a claim can be replicated.
Let us assume for simplicity that the stock prices are always some nonnegative integral multiples of some unit. The set of possible prices can then be taken to be the set of nonnegative integers. Let us make a crucial assumption that if the price on some day is x, the price on the next day is x ± 1. It has to move up or down a notch. It cannot jump two or more steps or even stay the same. When the stock price hits 0 we assume that the company goes bankrupt and the stock stays at 0 for ever. In all other cases, from day to day, it always moves either up or down a notch.
Let us value the claim f for one period. If the price at day N − 1 is x ≠ 0 and we have assets c on hand and invest in a shares, we will end up on day N with either assets of c + a and a claim of f(x + 1), or assets of c − a with a claim of f(x − 1). In order to make sure that we break even in either case, we need
\[
f(x + 1) = c + a\,;\qquad f(x - 1) = c - a
\]
or
\[
c(x) = \tfrac{1}{2}\big[f(x - 1) + f(x + 1)\big]\,;\qquad a(x) = \tfrac{1}{2}\big[f(x + 1) - f(x - 1)\big].
\]
for j ≥ 1 till we arrive at the value V_0(x) of the claim at time 0 and price x. The corresponding values of a = a_{j−1}(x) = ½[V_j(x + 1) − V_j(x − 1)] give us the number of shares to hold between day j − 1 and day j if the current price at time j − 1 equals x.
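The backward recursion and the replication argument can be summarized in a short Python sketch. This is only a minimal illustration; the claim f, the horizon N, the starting price and the truncation of the price grid are assumptions made for the example.

import numpy as np

def value_and_hedge(f, N, x_max):
    # Backward recursion V_{j-1}(x) = (1/2)[V_j(x-1) + V_j(x+1)] for x >= 1,
    # with the bankrupt state x = 0 absorbed; a_{j-1}(x) is the hedge.
    # Illustrative sketch: the price grid is truncated at x_max + N.
    grid = x_max + N
    V = np.zeros((N + 1, grid + 2))
    V[N, :] = [f(x) for x in range(grid + 2)]
    a = np.zeros((N, grid + 2))
    for j in range(N, 0, -1):
        for x in range(1, grid + 1):
            V[j - 1, x] = 0.5 * (V[j, x - 1] + V[j, x + 1])
            a[j - 1, x] = 0.5 * (V[j, x + 1] - V[j, x - 1])
        V[j - 1, 0] = V[j, 0]          # bankrupt: the price stays at 0
    return V, a

# Assumed example: claim f(x) = max(x - 5, 0), horizon N = 4, starting price x0 = 5.
f = lambda x: max(x - 5, 0)
N, x0 = 4, 5
V, a = value_and_hedge(f, N, x_max=x0)

# Replicate along one admissible path: +/-1 moves, absorbed at 0.
rng = np.random.default_rng(2)
x, capital = x0, V[0, x0]
for j in range(N):
    move = 0 if x == 0 else int(rng.choice([-1, 1]))
    capital += a[j, x] * move          # gain or loss from holding a_j(x) shares
    x += move
print("V_0(x0) =", V[0, x0], " final capital =", capital, " claim f(X_N) =", f(x))

Along every admissible path the final capital coincides with f(X_N), which is exactly the replication property described above.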
Remark 5.13. The important fact is that the value is determined by arbitrage and is unaffected by the actual movement of the price so long as it is compatible with the model.
Remark 5.14. The value does not depend on any statistical assumptions on the various probabilities of transitions of price levels between successive days.
Remark 5.15. However, the value can be interpreted as the expected value
\[
V_0(x) = E^{P_x}\big[f(X_N)\big]
\]
where P_x is the random walk starting at x, with probability 1/2 for transitions up or down a level, which is absorbed at 0.
Remark 5.16. P_x can be characterized as the unique probability distribution of (X_0, ..., X_N) such that P_x[X_0 = x] = 1, P_x[|X_j − X_{j−1}| = 1 | X_{j−1} ≥ 1] = 1 for 1 ≤ j ≤ N, and X_j is a martingale with respect to (Ω, F_j, P_x) where F_j is generated by X_0, ..., X_j.
Exercise 5.18. It is not necessary for the argument that the set of possible price levels be equally spaced. If we make the assumption that for each price level x > 0, the price on the following day can take only one of two possible values h(x) > x and l(x) < x, with a possible bankruptcy if the level 0 is reached, a similar analysis can be worked out. Carry it out.
or
\[
Z^f_j = f(X_j) - f(X_0) - \sum_{i=1}^{j} h_{i-1}(X_0, \dots, X_{i-1})
\]
then
\[
h(x) = \big([\Pi - I]f\big)(x).
\]
and
\[
h(x) = [\Pi - I]f(x) = \sum_{y} \big[f(y) - f(x)\big]\,\pi(x, y).
\]
and for every bounded measurable function f defined on the state space X
\[
f(x_n) - f(x_0) - \sum_{j=1}^{n} h(x_{j-1})
\]
is a martingale.
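As a numerical illustration (the transition matrix and the function f below are assumptions made for the example), one can check that subtracting the compensator built from h = (Π − I)f produces a process of constant (zero) expectation.

import numpy as np

# Assumed small example: a Markov chain with transition matrix pi on 3 states.
# With h = (Pi - I)f, the process  f(x_n) - f(x_0) - sum_{j=1}^n h(x_{j-1})
# is a martingale under P_x, so its expectation stays equal to 0.
rng = np.random.default_rng(3)
pi = np.array([[0.50, 0.50, 0.00],
               [0.25, 0.50, 0.25],
               [0.00, 0.50, 0.50]])
f = np.array([1.0, 4.0, 9.0])
h = pi @ f - f                         # h(x) = sum_y [f(y) - f(x)] pi(x, y)

n_steps, n_paths, x0 = 10, 20_000, 0
Z = np.zeros(n_paths)
for i in range(n_paths):
    x, comp = x0, 0.0
    for _ in range(n_steps):
        comp += h[x]                   # accumulate h(x_{j-1})
        x = rng.choice(3, p=pi[x])     # one step of the chain
    Z[i] = f[x] - f[x0] - comp
print("E[Z_n] ~", Z.mean(), "(should be near 0)")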
\[
(\Pi - I)V = 0 \ \text{on } A^c\,;\qquad V = 1 \ \text{on } A \tag{5.9}
\]
\[
(\Pi - I)V = 0 \ \text{on } A^c\,;\qquad V = f \ \text{on } A \tag{5.10}
\]
is equal to
\[
V(x) = E^{P_x}\big[f(x_{\tau_A})\big]. \tag{5.11}
\]
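A minimal computational sketch, under the assumption of a simple random walk on {0, ..., 10} with A = {0, 10}: solve the linear system from (5.10) and compare with a Monte Carlo estimate of the right side of (5.11).

import numpy as np

# Assumed example: simple random walk on {0,...,10}, A = {0, 10}.
M = 10
A = {0, M}
f_boundary = {0: 0.0, M: 1.0}

interior = [x for x in range(M + 1) if x not in A]
idx = {x: i for i, x in enumerate(interior)}
n = len(interior)
mat, rhs = np.eye(n), np.zeros(n)
for x in interior:                      # V(x) - (1/2)V(x-1) - (1/2)V(x+1) = 0
    for y, prob in ((x - 1, 0.5), (x + 1, 0.5)):
        if y in A:
            rhs[idx[x]] += prob * f_boundary[y]
        else:
            mat[idx[x], idx[y]] -= prob
V = np.linalg.solve(mat, rhs)

# Monte Carlo estimate of E^{P_x}[ f(x_{tau_A}) ] started at x0 = 3.
rng = np.random.default_rng(4)
x0, n_paths, total = 3, 20_000, 0.0
for _ in range(n_paths):
    x = x0
    while x not in A:
        x += int(rng.choice((-1, 1)))
    total += f_boundary[x]
print("linear system V(3) =", V[idx[x0]], "  Monte Carlo ~", total / n_paths)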
If λ < π/(2R), then cos λx ≥ cos λR > 0 in [−R, R]. Consider Z_n = e^{σn} cos λx_n with σ = −log cos λ.
\[
E^{P_x}\big[\,Z_n\,|\,F_{n-1}\,\big] = e^{\sigma n}\,(\Pi f)(x_{n-1}) = e^{\sigma n}\cos\lambda\,\cos\lambda x_{n-1} = Z_{n-1}.
\]
If τ_R is the exit time from the interval (−R, R), for any N, we have
\[
E^{P_x}\big[Z_{\tau_R\wedge N}\big] = E^{P_x}[Z_0] = \cos\lambda x.
\]
Since σ > 0 and cos λx ≥ cos λR > 0 for x ∈ [−R, R], if R is an integer, we can claim that
\[
E^{P_x}\Big[e^{\sigma[\tau_R\wedge N]}\Big] \le \frac{\cos\lambda x}{\cos\lambda R}.
\]
Since the estimate is uniform in N we can let N → ∞ to get the estimate
\[
E^{P_x}\Big[e^{\sigma\tau_R}\Big] \le \frac{\cos\lambda x}{\cos\lambda R}.
\]
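A quick simulation (with assumed values of R, λ and the starting point) is consistent with this bound; for integer R and this walk the bound is in fact attained.

import math
import numpy as np

# Check of  E^{P_x}[exp(sigma * tau_R)] <= cos(lambda x)/cos(lambda R),
# sigma = -log cos(lambda), for the simple +/-1 walk (assumed parameters).
rng = np.random.default_rng(5)
R, x0 = 5, 0
lam = 0.8 * math.pi / (2 * R)          # any lambda < pi / (2 R)
sigma = -math.log(math.cos(lam))

n_paths, acc = 20_000, 0.0
for _ in range(n_paths):
    x, t = x0, 0
    while abs(x) < R:                  # tau_R: first time |x_n| reaches R
        x += int(rng.choice((-1, 1)))
        t += 1
    acc += math.exp(sigma * t)
print("E[exp(sigma tau_R)] ~", acc / n_paths,
      "  bound:", math.cos(lam * x0) / math.cos(lam * R))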
so that we have slightly perturbed the random walk, with perhaps even a possible bias. Exact calculations like in Example 5.1 are of course no longer possible. Let us try to estimate again the exit time from a ball of radius R. For λ > 0 consider the function
\[
F(x) = \exp\Big[\lambda\sum_{i=1}^{d}|x_i|\Big]
\]
defined on Z^d. We can get an estimate of the form
\[
(\Pi F)(x_1, \dots, x_d) \ge \rho\, F(x_1, \dots, x_d)
\]
for some choices of λ > 0 and ρ > 1 that may depend on R. Now proceed as in Example 5.1.
Example 5.3. We can use these methods to show that the random walk is transient in dimension d ≥ 3.
For 0 < ε < d − 2 consider the function V(x) = |x|^{−ε} for x ≠ 0 with V(0) = 1. An approximate calculation of (ΠV)(x) yields, for sufficiently large |x| (i.e. |x| ≥ L for some L), the estimate
\[
(\Pi V)(x) - V(x) \le 0.
\]
If we start initially from an x with |x| > L and take τ_L to be the first entrance time into the ball of radius L, one gets by the stopping theorem the inequality
\[
E^{P_x}\big[V(x_{\tau_L\wedge N})\big] \le V(x).
\]
If τ_L ≤ N, then |x_{τ_L}| ≤ L. In any case V(x_{τ_L∧N}) ≥ 0. Therefore
\[
P_x\big[\tau_L \le N\big] \le \frac{V(x)}{\inf_{|y|\le L} V(y)}
\]
valid uniformly in N. Letting N → ∞,
\[
P_x\big[\tau_L < \infty\big] \le \frac{V(x)}{\inf_{|y|\le L} V(y)}.
\]
If we let |x| → ∞, keeping L fixed, we see the transience. Note that recurrence implies that P_x[τ_L < ∞] = 1 for all x. The proof of transience really only required a function V, defined for large |x|, that was strictly positive for each x, went to 0 as |x| → ∞ and had the property (ΠV)(x) ≤ V(x) for large values of |x|.
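The claimed inequality can be tested numerically; the sketch below (the dimension, the exponent ε and the sample points are assumptions) evaluates (ΠV)(x) − V(x) directly for the simple random walk on Z^3.

import numpy as np

# Check that (Pi V)(x) - V(x) <= 0 for V(x) = |x|^(-eps), 0 < eps < d - 2 (d = 3).
d, eps = 3, 0.5

def V(x):
    return np.linalg.norm(x) ** (-eps)

def pi_minus_I(x):
    # (Pi V)(x) - V(x) for the simple random walk on Z^d
    x = np.asarray(x, dtype=float)
    total = 0.0
    for i in range(d):
        for s in (-1.0, 1.0):
            y = x.copy()
            y[i] += s
            total += V(y)
    return total / (2 * d) - V(x)

for x in ([5, 0, 0], [10, 10, 10], [50, 3, 4]):
    print(x, pi_minus_I(x))            # each value should be <= 0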
Example 5.4. We will now show that the random walk is recurrent in d = 2. This is harder because the recurrence of random walk in d = 2 is right on the border. We want to construct a function V(x) → ∞ as |x| → ∞ that satisfies (ΠV)(x) ≤ V(x) for large |x|. If we succeed, then we can estimate by a stopping argument the probability that the chain starting from a point x in the annulus ℓ < |x| < L exits at the outer circle before getting inside the inner circle:
\[
P_x\big[\tau_L < \tau_\ell\big] \le \frac{V(x)}{\inf_{|y|\ge L} V(y)}.
\]
We also have, for every L,
\[
P_x\big[\tau_\ell \wedge \tau_L < \infty\big] = 1.
\]
This proves that P_x[τ_ℓ < ∞] = 1, thereby proving recurrence. The natural candidate is F(x) = log |x| for x ≠ 0. A computation yields
\[
(\Pi F)(x) \le F(x) + \frac{C}{|x|^4}
\]
which does not quite make it. On the other hand if U(x) = |x|^{−1}, for large values of |x|,
\[
(\Pi U)(x) \ge U(x) + \frac{c}{|x|^3}
\]
for some c > 0. The choice of V(x) = F(x) − CU(x) = log|x| − C/|x| works with any C > 0.
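Again this can be checked numerically. In the sketch below (the constant C and the test points are assumptions) the correction for F = log|x| is tiny and of either sign, while V = log|x| − C/|x| gives a nonpositive value at the test points.

import numpy as np

# d = 2: compare (Pi F)(x) - F(x) for F = log|x| with (Pi V)(x) - V(x)
# for V = log|x| - C/|x| (assumed C and test points).
C = 1.0

def F(x):
    return np.log(np.linalg.norm(x))

def V(x):
    return F(x) - C / np.linalg.norm(x)

def pi_minus_I(g, x):
    # (Pi g)(x) - g(x) for the simple random walk on Z^2
    x = np.asarray(x, dtype=float)
    nbrs = (x + np.array(e) for e in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    return sum(g(y) for y in nbrs) / 4 - g(x)

for x in ([10, 0], [7, 7], [30, 40]):
    print(x, " F:", pi_minus_I(F, x), "  V:", pi_minus_I(V, x))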
Example 5.5. We can use these methods for proving positive recurrence as well.
Suppose X is a countable set and we can find V ≥ 0, a finite set F and a constant C ≥ 0 such that
\[
(\Pi V)(x) - V(x) \le \begin{cases} -1 & \text{for } x \notin F\\ \ \ C & \text{for } x \in F.\end{cases}
\]
\[
\begin{aligned}
E^{P_x}\Big[\sum_{j=1}^{n} C\,1_F(x_{j-1}) - \sum_{j=1}^{n} 1_{F^c}(x_{j-1})\Big]
&= E^{P_x}\Big[\sum_{j=1}^{n}\big[-1 + (1 + C)\,1_F(x_{j-1})\big]\Big]\\
&= -n + (1 + C)\sum_{j=1}^{n}\sum_{y\in F} \pi_{j-1}(x, y)\\
&= -n + o(n) \quad\text{as } n \to \infty.
\end{aligned}
\]
\[
E\big[\,q^{X_{n+1}}\,\big|\,F_n\,\big] = \Big[\sum_j q^j p_j\Big]^{X_n} = \big[P(q)\big]^{X_n} = q^{X_n}
\]
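A small simulation (with an assumed offspring distribution) illustrates this: if q is the root of P(q) = q in (0, 1), then E[q^{X_n}] stays at q^{X_0}.

import numpy as np

# Branching process with offspring probabilities p_j (assumed example).
# If q solves P(q) = sum_j p_j q^j = q, then q^{X_n} is a martingale.
rng = np.random.default_rng(6)
p = np.array([0.25, 0.25, 0.5])        # p_0, p_1, p_2; mean offspring 1.25
q = 0.5                                # fixed point: 0.25 + 0.25 q + 0.5 q^2 = q
assert abs(np.polyval(p[::-1], q) - q) < 1e-12

n_gens, n_paths, X0 = 6, 50_000, 1
X = np.full(n_paths, X0)
for _ in range(n_gens):
    # every individual reproduces independently according to p
    X = np.array([rng.choice(3, size=k, p=p).sum() for k in X])
print("E[q^{X_n}] ~", np.mean(q ** X.astype(float)), "  q^{X_0} =", q ** X0)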