
A. N. Shiryaev
Steklov Mathematical Institute
and Lomonosov Moscow State University

Optimal stopping problems for
Brownian motion with drift and disorder;
application to mathematical finance and engineering
1. ESTIMATION of the DRIFT of BROWNIAN MOTION

INTRODUCTION

We consider two models of observed processes (X_t)_{t>=0} driven by
Brownian motion (B_t)_{t>=0}.

Model A (Part I):
    X_t = θt + B_t   or, in differentials,   dX_t = θ dt + dB_t,
where θ is a random parameter which does not depend on B.

Model B (Part II):
    X_t = μ(t − θ)^+ + B_t   or   dX_t = dB_t for t < θ,  dX_t = μ dt + dB_t for t >= θ,
where (θ, μ) are random parameters which do not depend on B.
Our presentation is based on the recent works:

U. Cetin, A. A. Novikov, A. Shiryaev. A Bayesian estimation
of drift of fractional Brownian motion
(Preprints, LSE, UTS)

A. Shiryaev, M. Zhitlukhin. A Bayesian sequential testing
problem of three hypotheses for Brownian motion
(Statistics & Risk Modeling, 2011, No. 3)

M. Zhitlukhin, A. Shiryaev. Bayesian disorder problems on
filtered probability spaces
(TPA, 2012, No. 3)

A. Aliev. Towards a problem of detection of a disorder which
depends on trajectories of the process (TPA, 2012, No. 3)

M. Zhitlukhin, A. Muravlev. Solution of a Chernoff problem
of testing hypotheses on drift of Brownian motion
(TPA, 2012, No. 4)

A. Shiryaev, M. Zhitlukhin. Optimal stopping problems for a
Brownian motion with a disorder on a finite interval
(TPA, 2013)
We consider some problems of financial economics which can be
solved by the methods of optimal stopping. The general problem
of this type can be formulated as follows: to find the value
function
    V(T) = sup_{τ<=T} E G_τ,
where τ is a stopping time and T is a finite horizon. Of course, it is
also interesting to find the optimal stopping time τ* for which
E G_{τ*} = V(T) (if this stopping time exists).

A lot of books have been written on optimal stopping; for example,
G. Peskir and A. Shiryaev,
Optimal stopping and free-boundary problems.

We would like to expose here our results obtained together with
several of our colleagues (A. Novikov, X. Y. Zhou, ...).
ESTIMATION of the DRIFT COEFFICIENT

We observe a process X = (X_t)_{t>=0},
    X_t = θt + B_t,
where θ is a random parameter which does not depend on B.

A decision rule based on F^X-observations (F^X = (F^X_t)_{t>=0},
F^X_t = σ(X_s, s <= t)) is a pair δ = (τ, d), where

• τ is an F^X-stopping time (i.e., {τ <= t} ∈ F^X_t for any t >= 0);

• d is an F^X_τ-measurable function (taking values in R).
The Bayesian risk which we consider is given by
    R = inf_{(τ,d)} E[cτ + W(θ, d)],
where

• E is the mean with respect to the measure generated by θ
  (independent) and B;

• W is a penalty function; Eτ < ∞.

Due to the representation
    E[cτ + W(θ, d)] = E[ E( cτ + W(θ, d) | F^X_τ ) ]
and the F^X_τ-measurability of τ and d, we need to find
    E[ W(θ, d) | F^X_τ ].
The conditional distribution of θ is determined by

    P(θ <= y | F^X_t) = ∫_{−∞}^{y} [dP(X^t_0 | θ = z)/dP(X^t_0 | θ = 0)] dP_θ(z)
                       / ∫_{−∞}^{∞} [dP(X^t_0 | θ = z)/dP(X^t_0 | θ = 0)] dP_θ(z),

with the Radon–Nikodym derivative
    dP(X^t_0 | θ = z) / dP(X^t_0 | θ = 0)
of the measure of the process X^t_0 = (X_s, s <= t) with θ = z w.r.t.
the measure of the process X^t_0 = (X_s, s <= t) with θ = 0.
Calculating explicitly the Radon–Nikodym derivative, we find

    P(θ <= y | F^X_t) = ∫_{−∞}^{y} e^{zX_t − z²t/2} dP_θ(z)
                        / ∫_{−∞}^{∞} e^{zX_t − z²t/2} dP_θ(z).

If P_θ(z) has a density, dP_θ(z) = p(z) dz, then the conditional
density admits the representation

    p(y, X_t; t) := dP(θ <= y | F^X_t)/dy
                  = e^{yX_t − y²t/2} p(y) / ∫ e^{zX_t − z²t/2} p(z) dz.
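The last formula is easy to check numerically. The following sketch (not from the talk; the grid and the parameter values m, s2, t, x_t are my own choices) computes p(y, X_t; t) for a Gaussian prior N(m, σ²) and verifies that the posterior mean equals the Kalman–Bucy expression (X_t + m/σ²)/(t + 1/σ²) used later in Example 1.

```python
import math

def posterior_density(prior_pdf, x_t, t, ys):
    """Bayes weights e^{y*X_t - y^2 t/2} * p(y) on a grid ys,
    normalized by trapezoidal quadrature."""
    w = [prior_pdf(y) * math.exp(y * x_t - 0.5 * y * y * t) for y in ys]
    h = ys[1] - ys[0]
    z = h * (sum(w) - 0.5 * (w[0] + w[-1]))
    return [wi / z for wi in w]

# hypothetical parameter values for the illustration
m, s2, t, x_t = 0.3, 2.0, 1.5, 1.2
prior = lambda y: math.exp(-(y - m) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
ys = [-10 + 20 * i / 4000 for i in range(4001)]
p = posterior_density(prior, x_t, t, ys)
h = ys[1] - ys[0]
mass = h * (sum(p) - 0.5 * (p[0] + p[-1]))       # normalization check
mean = h * sum(pi * yi for pi, yi in zip(p, ys))  # posterior mean
assert abs(mass - 1.0) < 1e-9
assert abs(mean - (x_t + m / s2) / (t + 1 / s2)) < 1e-6
```

The same routine works for any prior density with light tails; only the Gaussian case admits the closed-form mean used in the assertion.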
Thus, for d = d(τ) we have
    E[W(θ, d) | F^X_τ] = ∫_R W(y, d(τ)) p(y, X_τ; τ) dy.

If for each τ there exists an F^X_τ-measurable function d*(τ) such that
    inf_{d ∈ F^X_τ} ∫_R W(y, d) p(y, X_τ; τ) dy
        = ∫_R W(y, d*(τ)) p(y, X_τ; τ) dy   ( =: G(τ, X_τ) ),
then (with the notation p = Law(θ))
    inf_{(τ,d)} E[cτ + W(θ, d)] = inf_τ E[cτ + G(τ, X_τ)]   ( =: V(p) ).

If τ* is an optimal time for the right-hand side,
then (τ*, d*(τ*)) is an optimal solution of the initial problem.
EXAMPLE 1 (classical mean-square criterion)

    W(θ, d) = (θ − d)²   and   θ ~ N(m, σ²).

In this case
    V(p) = inf_τ E[cτ + v(τ)],   where v(t) = 1/(t + σ^{−2}).

The optimal time τ* is deterministic; moreover,

(a) if √c < σ², then τ* is the unique solution of the
    equation v(τ*) = √c, i.e., τ* = c^{−1/2} − σ^{−2};

(b) if √c >= σ², then τ* = 0.

The optimal d* coincides with the a posteriori mean E(θ | F^X_{τ*}):

(c) d* = √c X_{τ*} + m√c/σ²   if √c < σ²,
    d* = m                    if √c >= σ².
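A quick numerical sanity check of (a) and (b) (my own sketch, not part of the talk; the values of c and σ² are hypothetical): minimizing t ↦ ct + v(t) on a fine grid recovers the closed-form optimum.

```python
import math

def risk(t, c, sigma2):
    # E[c*tau + v(tau)] for deterministic tau = t, with v(t) = 1/(t + 1/sigma2)
    return c * t + 1.0 / (t + 1.0 / sigma2)

def argmin_grid(c, sigma2, t_max=20.0, n=200_000):
    # brute-force minimizer on a uniform grid of step t_max/n
    return min((t_max * k / n for k in range(n + 1)),
               key=lambda t: risk(t, c, sigma2))

c, sigma2 = 0.04, 1.0                 # sqrt(c) = 0.2 < sigma2: case (a)
t_star = c ** -0.5 - 1.0 / sigma2     # closed form: 4.0
t_num = argmin_grid(c, sigma2)
assert abs(t_num - t_star) < 1e-3

# sqrt(c) >= sigma2: the risk is increasing and t* = 0, as in (b)
assert argmin_grid(4.0, sigma2) == 0.0
```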
How can one get the representation
    V(p) = inf_τ E[cτ + v(τ)]   for v(t) = 1/(t + σ^{−2}) ?

Consider
    inf_{(τ,d)} E[cτ + (θ − d)²].

For a given τ the optimal d*(τ) is E(θ | F^X_τ):
    d*(τ) = ∫_R y p(y, X_τ; τ) dy.
It is interesting to observe that if we denote
    A(t, x) = ∫_R y p(y, x; t) dy,
then from the explicit form of p(y, x; t) we can see that
    A'_x(t, x) = ∫_R y² p(y, x; t) dy − A²(t, x).

So, A'_x(t, X_t) = E[ (θ − E(θ | F^X_t))² | F^X_t ]. Thus,
A'_x(t, X_t) is the variance of θ conditioned on F^X_t.

Consequently,
    V(p) = inf_τ E[cτ + A'_x(τ, X_τ)]   ( = inf_τ E[cτ + G(τ, X_τ)] ).
If θ ~ N(m, σ²), then the conditional variance has the form
    A'_x(t, X_t) = v(t),
where v(t) solves the Riccati equation (Kalman–Bucy filter)
    v'(t) = −v²(t),   v(0) = σ²,
i.e.,
    v(t) = 1/(t + σ^{−2}).

Thus,
    V(p) = inf_τ E[ cτ + 1/(τ + σ^{−2}) ],
which proves (a) and (b) for τ*.
Representation (c) for d* = E(θ | F^X_{τ*}) follows from the formula

    d*(τ) = ∫_R y p(y, X_τ; τ) dy
          = X_τ v(τ) + m exp( −∫_0^τ v(s) ds )
          = X_τ σ²/(1 + σ²τ) + m/(1 + σ²τ),

whence we find

    d* = √c X_{τ*} + m√c/σ²   if √c < σ²   (τ* = c^{−1/2} − σ^{−2}),
    d* = m                    if √c >= σ²  (τ* = 0).
EXAMPLE 2 (criterion connected with precise detection, when d* = mode)

    W(θ, d) = −δ_θ(d),

where δ_θ is a Dirac function. In this case

    ∫_R W(y, d) p(y, X_τ; τ) dy = −p(d, X_τ; τ)
        = −p(d) exp(X_τ d − d²τ/2) / ∫_R p(z) exp(X_τ z − z²τ/2) dz.

Thus, d*(τ) is a mode of the conditional density p(·, X_τ; τ) (i.e.,
any point of local maximum of p(·, X_τ; τ)).

If the support of p is R and the function p is differentiable,
then d*(τ) solves the equation
    p'(d)/p(d) − dτ = −X_τ.
In the normal case θ ~ N(m, σ²) the mode coincides with the
conditional mean (see Example 1):

    d*(τ) = √c X_τ + m√c/σ²   if √c < σ²   (τ = c^{−1/2} − σ^{−2}),
    d*(τ) = m                 if √c >= σ²  (τ = 0).

In this case
    G(τ, X_τ) = −p(τ, X_τ; d*(τ)) = −1/√(2πv(τ)).

Taking into account that E(cτ + G(τ, X_τ)) = E(cτ − 1/√(2πv(τ))),
we obtain that τ* = t*, where
    c − (1/2)√( v(t*)/(2π) ) = 0.

Consequently,
    t* = 1/(8πc²) − 1/σ²   if 8πc² < σ²,
    t* = 0                 if 8πc² >= σ².

The corresponding function d* is given by
    d* = v(τ*) X_{τ*} + m v(τ*)/σ² = 8πc² X_{τ*} + m·8πc²/σ².

Of great interest are problems where θ lies in a finite interval
[θ₁, θ₂] with, e.g., uniform distribution. In this case the optimal
time τ* is NOT deterministic.
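The Dirac-penalty optimum above can be checked the same way as in Example 1 (again a sketch of my own with hypothetical c, σ²): minimize ct − √((t + σ^{−2})/(2π)) on a grid and compare with t* = 1/(8πc²) − 1/σ².

```python
import math

def risk(t, c, sigma2):
    # c*t - 1/sqrt(2*pi*v(t)), where v(t) = 1/(t + 1/sigma2)
    return c * t - math.sqrt((t + 1.0 / sigma2) / (2 * math.pi))

c, sigma2 = 0.05, 1.0                       # 8*pi*c^2 ~ 0.063 < sigma2
t_star = 1.0 / (8 * math.pi * c ** 2) - 1.0 / sigma2
n, t_max = 400_000, 40.0
t_num = min((t_max * k / n for k in range(n + 1)),
            key=lambda t: risk(t, c, sigma2))
assert abs(t_num - t_star) < 1e-3
```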
2. Bayesian sequential estimation of the drift of
   fractional Brownian motion

We assume the observed process X = (X_t)_{t>=0} has the representation
    X_t = θt + B^H_t,
where B^H = (B^H_t)_{t>=0} is a fractional Brownian motion with
    B^H_0 = 0,  E B^H_t = 0,  E|B^H_t − B^H_s|² = |t − s|^{2H},  0 < H < 1.

In case H ≠ 1/2 the process B^H is not a semimartingale; in case
H = 1/2 the process B^{1/2} is a Brownian motion.
We consider the problem: to find a sequential optimal rule δ* = (τ*, d*),
    inf_{δ∈D} E[cτ + w(θ, d(τ))] = E[cτ* + w(θ, d*)],
where D is a class of rules with stopping times τ <= T < ∞ w.r.t.
the flow F^X_t = σ(X_s; s <= t); d(τ) is F^X_τ-measurable; w(θ, d) is
a penalty function.

In this talk we consider the penalty functions
    w(θ, d) = |θ − d|²   and   w(θ, d) = −δ(θ, d),
where δ(θ, d) is the Dirac delta function, which can be understood
as the distributional limit as ε ↓ 0 of
    δ(θ, d; ε) = 1/(2ε)  if d ∈ (θ − ε, θ + ε),
    δ(θ, d; ε) = 0       if d ∉ (θ − ε, θ + ε).
If E|w(θ, d)| < ∞, then E(cτ + w(θ, d(τ))) = E[cτ + E(w(θ, d(τ)) | F^X_τ)].

By the generalized Bayes formula, the conditional density
    p(y; X, t) = dP(θ <= y | F^X_t)/dy
has the following representation:
    p(y; X, t) = p(y) L_t(y, X) / ∫_E p(y) L_t(y, X) dy,
where p(y), y ∈ E, is a density of the distribution of θ and
L_t(y, X) is a Radon–Nikodym derivative of the measure generated
by X_u = yu + B^H_u (on (Ω, F, (F^X_t)_{t>=0}, P)) w.r.t. the measure of
X_u = B^H_u, u <= t.
LEMMA 1 (Norros, Valkeila, Virtamo [Bernoulli 5 (1999), 571–587]).
We have
    L_t(y, B^H) = exp( y M_t(B^H) − (y²/2) ⟨M(B^H)⟩_t ),
where M = (M_t(B^H))_{t>=0} is a fundamental Gaussian martingale
with independent increments such that
    ⟨M⟩_t = D(M_t(B^H)) = C₂² t^{2−2H},
    C₂² = Γ(3/2 − H) / [ 4H(1 − H) Γ(1/2 + H) Γ(2 − 2H) ],
    M_t(B^H) = ∫_0^t K(t, s) dB^H_s,   K(t, s) = C₁ (st − s²)^{1/2−H},  s ∈ (0, t),
    C₁ = [ 2H B(3/2 − H, 1/2 + H) ]^{−1},
where B(x, y) is a beta function (Euler integral of the first kind).
So,
    p(y; X, t) = p(y) exp{ y M_t(X) − (y²/2)⟨M⟩_t }
                 / ∫ p(y) exp{ y M_t(X) − (y²/2)⟨M⟩_t } dy,
and hence the optimal d* should be found from the following relation:
    inf_d E[w(θ, d) | F^X_τ] ≡ E[w(θ, d*) | F^X_τ] = ∫ w(y, d*) p(y; X, τ) dy.
The case of the QUADRATIC PENALTY FUNCTION

Here we consider the case w(θ, d) = |θ − d|². It is well known that
inf E(|θ − d(τ)|² | F^X_τ) is achieved for τ < ∞ with the decision
function
    d*(τ) = E(θ | F^X_τ) = ∫ y p(y; X, τ) dy.

LEMMA 2. Let θ ~ N(m, 1). Then for any t >= 0
    E(θ | F^X_t) = (m + M_t(X)) / (1 + ⟨M⟩_t)   and   D(θ | F^X_t) = 1/(1 + ⟨M⟩_t).

Direct calculations show that
    inf_D E[cτ + |θ − d(τ)|²] = inf_τ E[cτ + 1/(1 + ⟨M⟩_τ)] = inf_{t∈[0,T]} F_H(t),
where F_H(t) = ct + 1/(1 + C₂² t^{2−2H}).
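The function F_H(t) is explicit, so the deterministic optimum can be explored numerically. A sketch (mine, not from the talk; c and T are hypothetical), with C₂² taken from the formula in Lemma 1 via math.gamma; at H = 1/2 it must reduce to the Brownian case with minimizer c^{−1/2} − 1.

```python
import math

def C2_sq(H):
    """C_2^2 = Gamma(3/2-H) / (4H(1-H) Gamma(1/2+H) Gamma(2-2H))."""
    g = math.gamma
    return g(1.5 - H) / (4 * H * (1 - H) * g(0.5 + H) * g(2 - 2 * H))

def F(t, H, c):
    return c * t + 1.0 / (1.0 + C2_sq(H) * t ** (2 - 2 * H))

def argmin_on_grid(H, c, T, n=200_000):
    return min((T * k / n for k in range(n + 1)), key=lambda t: F(t, H, c))

assert abs(C2_sq(0.5) - 1.0) < 1e-12      # H = 1/2: C_2^2 = 1
c, T = 0.04, 20.0                          # hypothetical values
assert abs(argmin_on_grid(0.5, c, T) - (c ** -0.5 - 1.0)) < 1e-3
t_hi = argmin_on_grid(0.75, c, T)          # an H > 1/2 case
assert 0 < t_hi < T
```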
THEOREM 1. Let θ ~ N(m, 1), w(θ, d) = |θ − d|². In this case the
optimal stopping time τ* is deterministic and has the form:

1) if H > 1/2, then
    τ* = arg inf_{t∈[0,T]} F_H(t) = t*₁  if t*₁ < T,
                                  = T    if t*₁ >= T,
where t*₁ is a solution of the equation
    F'_H(t) ≡ c − 2(1 − H) C₂² t^{1−2H} / (1 + C₂² t^{2−2H})² = 0;

2) if H = 1/2, then
    τ* = arg inf_{t∈[0,T]} F_{1/2}(t) = 0             if c >= 1,
                                      = c^{−1/2} − 1  if c < 1 and T > c^{−1/2} − 1,
                                      = T             if c < 1 and T <= c^{−1/2} − 1;
here F_{1/2}(t) = ct + 1/(1 + t);
3) if H ∈ (0, 1/2), then one can easily find that the function F_H(t)
has a maximum at a point t₁ and a minimum at a point t₂ > t₁;
then the optimal time τ* is defined by the relation
    τ* = 0    if T < t₁, or t₁ < T < t₂ and F_H(T) >= 1,
    τ* = t₂   if T >= t₂ and F_H(t₂) < 1,
    τ* = T    if t₁ < T < t₂ and F_H(T) < 1.

The optimal decision function is
    d* = (m + M_{τ*}(X)) / (1 + C₂² (τ*)^{2−2H}).
The case of the Dirac-type penalty function w(θ, d) = −δ(θ, d)

In this case
    E[w(θ, d) | F^X_t] = −∫_E δ(y, d) p(y; X, t) dy = −p(d; X, t).

So, inf E(w(θ, d) | F^X_t) is achieved when d is a MODE of the a
posteriori density.

Assume further that p(y) is a differentiable function. Then the
optimal decision d* should be a root of the equation
    (d/dy) p(y; X, t) ∝ p'(y) e^{yM_t(X) − (y²/2)⟨M⟩_t}
        + p(y) [M_t(X) − y⟨M⟩_t] e^{yM_t(X) − (y²/2)⟨M⟩_t} = 0,
or, equivalently,
    p'(y)/p(y) + M_t(X) − y⟨M⟩_t = 0.
Assume θ ~ N(m, 1). Then p'(y)/p(y) = −(y − m); hence the optimal
decision d* at any stopping time τ satisfies
    −(d* − m) + M_τ(X) − d*⟨M⟩_τ = 0,   thus   d* = (m + M_τ(X)) / (1 + ⟨M⟩_τ).

Note that the optimal decision d* is the same as for the quadratic
penalty function, but the value of the penalty function is different:
direct calculations show that
    E[w(θ, d*) | F^X_τ] = −p(d*; X, τ)
        = −p(d*) e^{d* M_τ(X) − ((d*)²/2)⟨M⟩_τ}
          / ∫ p(y) e^{y M_τ(X) − (y²/2)⟨M⟩_τ} dy
        = −√(1 + ⟨M⟩_τ) / √(2π).

Thus
    inf_D E[cτ + w(θ, d)] = inf_{τ<=T} E[ cτ − √(1 + ⟨M⟩_τ)/√(2π) ] = inf_{t<=T} G_H(t),
where G_H(t) = ct − √(1 + C₂² t^{2−2H}) / √(2π).
If H > 1/2, then the unique minimum of the function G_H(t)
on (0, ∞) is achieved at the point s*₁, which is a positive root of
the equation
    G'_H(t) = c − C₂²(2 − 2H) t^{1−2H} / √( 8π(1 + C₂² t^{2−2H}) ) = 0.

Hence the optimal time τ* is defined by the relations
    τ* = s*₁  if s*₁ < T,
    τ* = T    if s*₁ >= T.

If H = 1/2, then G_{1/2}(t) = ct − √(1 + t)/√(2π), hence the optimal
τ* is defined by the relations
    τ* = 0              if c >= 1/√(8π),
    τ* = 1/(8πc²) − 1   if c < 1/√(8π) and T > 1/(8πc²) − 1,
    τ* = T              if c < 1/√(8π) and T <= 1/(8πc²) − 1.
If H ∈ (0, 1/2), then one can easily find that the function G_H(t)
has a maximum at a point s*₁ and then a minimum at a point
s*₂ > s*₁. Hence the optimal observation time τ* is defined
by the relations
    τ* = 0     if T < s*₁, or s*₁ < T < s*₂ and G_H(T) >= G_H(0),
    τ* = s*₂   if T >= s*₂ and G_H(s*₂) < G_H(0),
    τ* = T     if s*₁ < T < s*₂ and G_H(T) < G_H(0)
(here G_H(0) = −1/√(2π)).

The considerations presented above provide the proof of the following
theorem.
THEOREM 2. Let θ ~ N(m, 1) and w(θ, d) = −δ(θ, d). Then
the optimal stopping time τ* is deterministic and has the form
given above. The optimal decision function is
    d* = (m + M_{τ*}(X)) / (1 + C₂² (τ*)^{2−2H}).
CONCLUDING REMARK

Suppose X_t = θ ∫_0^t f(X, s) ds + W_t and w(θ, d) = (θ − d)². Here
    L_t(y, X) = exp( y ∫_0^t f(X, s) dX_s − (y²/2) ∫_0^t f²(X, s) ds )
(if, for example, E exp( (y²/2) ∫_0^T f²(W, s) ds ) < ∞). Thus we obtain
that for any stopping time τ < ∞
    d*(τ) = ( m + ∫_0^τ f(X, s) dX_s ) / ( 1 + ∫_0^τ f²(X, s) ds ).
It is easy to find that here
    inf_τ E[ c ∫_0^τ f²(X, s) ds + (θ − d)² ]
        = inf_τ E[ c ∫_0^τ f²(X, s) ds + 1/(1 + ∫_0^τ f²(X, s) ds) ].

Assume ∫_0^∞ f²(X, s) ds = ∞. Then the optimal stopping time is
    τ* = inf{ t >= 0 : ∫_0^t f²(X, s) ds = t*(c) },
where
    t*(c) = 0              if c >= 1,
    t*(c) = c^{−1/2} − 1   if c < 1.
INTERESTING PROBLEM

Let dX_t = θ dt + dB_t, where (B_t)_{t>=0} is a Brownian motion. Here
    X_t/t = θ + W_t/t   and   W_t/t → 0 (P-a.s.),  t → ∞.

So, it is interesting to find E₀ inf{ t : |W_s/s| <= ε, s >= t } =: E₀ τ(ε).

Since for each x, 0 < x < ∞, we have
    P( τ(ε) ε² <= x ) = P( sup_{0<=t<=1} |W_t| < √x ),
it follows that E₀ τ(ε) = c/ε² for some constant c.

PROBLEM: to find E₀ τ(ε) for the model
    dX_t = θ dt + dB^H_t,
where (B^H_t)_{t>=0} is a fractional Brownian motion.
3. Chernoff's problem

We observe a random process
    X_t = θt + B_t,
where θ ~ N(μ₀, σ₀²) does not depend on B.

Bayesian risk:
    R(τ, d) = E[ cτ + k|θ| I(d ≠ sgn θ) ],
where d is an F^X_τ-measurable function taking values ±1:

• if d = +1, then we accept the hypothesis H₊: θ > 0;
• if d = −1, then we accept the hypothesis H₋: θ <= 0.

The quantities c, k > 0 are given constants.
In a remarkable way, the Chernoff problem reduces to a problem
of optimal stopping of the absolute value of a Wiener process.

For fixed μ₀ and σ₀², introduce a process W = (W_t)_{t<=1},
    W_t = σ₀(1 − t) X_{t/(σ₀²(1−t))} − tμ₀/σ₀,
where W₁ is defined as the limit of W_t as t → 1.

One can prove that W is a Wiener process:
    E W_t = 0,  E W_t² = t  and  W₀ = 0.
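The two moment identities for W_t can be verified directly from the model X_s = θs + B_s with θ ~ N(μ₀, σ₀²). A small sketch of my own (the values of t, μ₀, σ₀ are arbitrary test points):

```python
# Check E W_t = 0 and E W_t^2 = t for
# W_t = sigma0*(1-t)*X_u - t*mu0/sigma0, u = t/(sigma0^2*(1-t)),
# using E X_u = mu0*u and Var X_u = sigma0^2*u^2 + u.
def w_moments(t, mu0, sigma0):
    u = t / (sigma0 ** 2 * (1 - t))
    a = sigma0 * (1 - t)
    mean_w = a * mu0 * u - t * mu0 / sigma0
    var_w = a ** 2 * (sigma0 ** 2 * u ** 2 + u)
    return mean_w, var_w

for t in (0.1, 0.5, 0.9):
    m_w, v_w = w_moments(t, mu0=0.7, sigma0=1.3)
    assert abs(m_w) < 1e-12
    assert abs(v_w - t) < 1e-12
```

Of course this only checks the first two moments; Gaussianity and independence of increments require the full argument of the talk.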
The theorem below shows that to find an optimal decision rule
in the initial problem
    inf_{(τ,d)} R(τ, d) = inf_{(τ,d)} E[ cτ + k|θ| I(d ≠ sgn θ) ]        (A)
it suffices to find
    V_{μ₀,σ₀} = inf_{τ<=1} E[ 2τ/(σ₀³(1 − τ)) − |W_τ + μ₀/σ₀| ].        (B)

(This V_{μ₀,σ₀}-problem was widely propagandized by L. Shepp
and A. N. Shiryaev as an interesting nonlinear optimal stopping
problem for Brownian motion, independently of Chernoff's problem.)

In the sequel we assume without loss of generality that c = k = 1.
THEOREM

1) Let τ*_B be an optimal time in problem (B).
Then the optimal decision rule (τ*_A, d*_A) in problem (A) has the form
    τ*_A = τ*_B / (σ₀²(1 − τ*_B)),   d*_A = sgn( X_{τ*_A} + μ₀/σ₀² ).

2) The optimal time τ*_B in problem (B) has the form
    τ*_B = inf{ 0 <= t <= 1 : |W_t + μ₀/σ₀| >= a*_{σ₀}(t) },
where a*_{σ₀}(t) is a nonincreasing function on [0, 1] such that
a*_{σ₀}(t) > 0 for t < 1 and a*_{σ₀}(1) = 0.
THEOREM (continued)

3) The function a*_{σ₀}(t) is the unique continuous solution of the
integral equation
    2 G(1 − t, a(t)) = ∫_t^1 [ 2/(σ₀³(1 − s)²) ]
        [ Φ( (a(s) − a(t))/√(s − t) ) − Φ( (−a(s) − a(t))/√(s − t) ) ] ds
in the class of functions a(t) such that a(t) >= 0 for t < 1 and
a(1) = 0.

Here the function G(t, x) is defined in the following way:
    G(t, x) = √t φ(x/√t) − |x| Φ(−|x|/√t),   t > 0, x ∈ R,
where φ(x), Φ(x) are the standard normal density and distribution
function.
REMARK

Chernoff has considered the process X°_t = X_{t − 1/σ₀²} + μ₀/σ₀²,
which satisfies the equation
    dX°_t = (X°_t / t) dt + dB°_t,   t >= 1/σ₀²,
with some Brownian motion B°.

Then the optimal decision rule in problem (A) is obtained by
finding the optimal time τ*_C in the problem
    V°(t, x) = inf_{τ>=t} E_{t,x}[ τ − G(τ, X°_τ) ]                      (C)
for t = 1/σ₀², x = μ₀/σ₀².

The optimal times τ*_A and τ*_C are connected by τ*_A = τ*_C − 1/σ₀².
The optimal d*_A equals sgn(X°_{τ*_C}).
REMARK (continued)

The optimal time τ*_C = τ*_C(t, x) in problem (C) is
    τ*_C = inf{ s >= t : |X°_s| >= γ(s) },
where γ(s) is a certain strictly positive function for t > 0 (which
does not depend on the parameters μ₀, σ₀).

From the construction of the processes W and X° we find that
    γ(t) = σ₀ t a*_{σ₀}( 1 − 1/(σ₀²t) ),   t >= 1/σ₀².
NUMERICAL SOLUTION

[Two plots: the boundary a*_{σ₀}(t) for σ₀ = √2 in Problem (B), and the
boundary γ(t) in Problem (C).]
PROOF of the THEOREM

Step 1 (reduction to a problem for a Wiener process).

It suffices to consider decision rules (τ, d) with Eτ < ∞. For any
such rule we have
    R(τ, d) = E[ τ + E(θ⁻ | F_τ) I{d = +1} + E(θ⁺ | F_τ) I{d = −1} ].

Thus, we need to find the time τ* which minimizes the value
    ℰ(τ) = E[ τ + min{ E(θ⁻ | F_τ), E(θ⁺ | F_τ) } ],
and to put
    d* = +1  if E(θ⁻ | F_τ) <= E(θ⁺ | F_τ),
    d* = −1  if E(θ⁻ | F_τ) > E(θ⁺ | F_τ).
By the normal correlation theorem,
    ℰ(τ) = E[ τ + G( τ + 1/σ₀², X_τ + μ₀/σ₀² ) ],
where G(t, x) is the function
    G(t, x) = √t φ(x/√t) − |x| Φ(−|x|/√t),
already introduced above.

The innovation representation for X implies
    dX_t = E(θ | F_t) dt + dB̄_t,
i.e.,
    dX_t = [ (X_t + μ₀/σ₀²) / (t + 1/σ₀²) ] dt + dB̄_t,
with the Brownian motion
    B̄_t = X_t − ∫_0^t E(θ | F_s) ds.

In particular, X is a Markov process.
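The identity behind the normal correlation step is that G(t, x) is the minimum of the two conditional losses: for Z ~ N(x, t) one has E Z⁺ = xΦ(x/√t) + √t φ(x/√t) and E Z⁻ = E Z⁺ − x, and G(t, x) = min(E Z⁺, E Z⁻). A sketch check (mine; the test points are arbitrary):

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def G(t, x):
    s = math.sqrt(t)
    return s * phi(x / s) - abs(x) * Phi(-abs(x) / s)

for t, x in [(0.5, 0.3), (2.0, -1.1), (1.0, 0.0)]:
    s = math.sqrt(t)
    ez_plus = x * Phi(x / s) + s * phi(x / s)   # E Z^+ for Z ~ N(x, t)
    ez_minus = ez_plus - x                       # E Z^- = E Z^+ - E Z
    assert abs(G(t, x) - min(ez_plus, ez_minus)) < 1e-12
```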
Direct calculations yield
    L_{t,x}[ G(t, x) + |x|/(2t) ] = 0,
where
    L_{t,x} = ∂/∂t + [ (x + μ₀/σ₀²) / (t + 1/σ₀²) ] ∂/∂x + (1/2) ∂²/∂x².

Then for any stopping time τ, Eτ < ∞, by applying the Itô
formula to the expression
    ℰ(τ) = E[ τ + G( τ + 1/σ₀², X_τ + μ₀/σ₀² ) ],
we find
    ℰ(τ) = E[ τ − |X_τ + μ₀/σ₀²| / (2(τ + 1/σ₀²)) ]
           + G( 1/σ₀², μ₀/σ₀² ) + |μ₀|/2.
Also by direct calculation we get that the process
    M_t = (X_t + μ₀/σ₀²) / (σ₀(t + 1/σ₀²)) − μ₀/σ₀
is a martingale.

Using a change of time, we find that the process
    W_t = M_{t/(σ₀²(1−t))}
is a Brownian motion.

Then for any stopping time τ such that Eτ < ∞ we have
    ℰ(τ) = (σ₀/2) E[ 2τ_B/(σ₀³(1 − τ_B)) − |W_{τ_B} + μ₀/σ₀| ] + (......),
where (......) is a deterministic part which does not depend on τ, and
τ_B is the stopping time associated with τ by the formula
    τ_B = σ₀²τ / (1 + σ₀²τ).
Thus, to find the optimal decision rule (τ*_A, d*_A) in the initial
problem of distinguishing between H₊ and H₋ it suffices to find the
optimal time τ*_B in the problem
    V_{μ₀,σ₀} = inf_{τ<=1} E[ 2τ/(σ₀³(1 − τ)) − |W_τ + μ₀/σ₀| ]        (B)
and to put
    τ*_A = τ*_B / (σ₀²(1 − τ*_B)),   d*_A = sgn( X_{τ*_A} + μ₀/σ₀² ).
Step 2 (analysis of the structure of the optimal time in problem (B)).

For the solution of problem (B) consider the value function
    V(t, x) = inf_{τ<=1−t} E[ 2/(σ₀³(1 − (τ + t))) − |W_τ + x| ] − 2/(σ₀³(1 − t)),
letting V(1, x) = 0 for all x.

One can prove that V(t, x) is continuous, and the optimal stopping
time has the form
    τ*(t, x) = inf{ s >= 0 : (s + t, W_s + x) ∉ C },
where C is the set of continuation of observation:
    C = { (t, x) : V(t, x) < −|x| }
(−|x| is the gain from instantaneous stopping).
Analyzing the structure of V(t, x), we establish that
    C = { (t, x) : t ∈ [0, 1), |x| < a(t) },
where a(t) is some nonincreasing function on [0, 1] such that
a(t) > 0 for t < 1 and a(1) = 0.
Moreover, one can prove that a(t) is continuous on [0, 1].
Step 3 (integral equation).

Using the general theory of optimal stopping, one can prove that
V(t, x) solves the following problem for the operator
L_{t,x} = ∂/∂t + (1/2)∂²/∂x²:
    L_{t,x} V(t, x) = −2/(σ₀³(1 − t)²),   |x| < a(t),
    V'_x(t, x) = −sgn(x),                 |x| = a(t),
    V(t, x) = −|x|,                       |x| >= a(t).

Applying the Itô formula gives
    E V(1, W_{1−t} + x) = V(t, x)
        + E ∫_t^1 [L_{t,x} V](s, W_{s−t} + x) I( |W_{s−t} + x| ≠ a(s) ) ds.
Using the equalities V(1, x) = −|x| for all x ∈ R and
L_{t,x} V(t, x) = 0 for |x| > a(t), we get
    V(t, x) = −E|W_{1−t} + x|
        + ∫_t^1 [ 2/(σ₀³(1 − s)²) ] P( |W_{s−t} + x| < a(s) ) ds.

Using the equality V(t, a(t)) = −a(t), we find
    E|W_{1−t} + a(t)| − a(t)
        = ∫_t^1 [ 2/(σ₀³(1 − s)²) ] P( |W_{s−t} + a(t)| < a(s) ) ds,
which, after calculation of E|...| and P(...), turns into the required
equation.
Step 4 (uniqueness of the solution of the integral equation).

The proof follows the method of:

P. V. Gapeev, G. Peskir. The Wiener disorder problem with finite
horizon (Stochastic Process. Appl. 116:2 (2006))

G. Peskir, A. N. Shiryaev. Optimal stopping and free-boundary problems
(Birkhäuser, 2006)
4. Distinguishing between three hypotheses

We observe a random process
    X_t = θt + B_t,
where θ is a random variable which does not depend on B and
takes values m₀, m₁, m₂ with probabilities π₀, π₁, π₂.

Bayesian risk:
    R(τ, d) = E[ cτ + W(θ, d) ],
where c > 0 is a constant and W(θ, d) is a penalty function:
    W(m_i, m_i) = 0,       i = 0, 1, 2,
    W(m_i, m_j) = a_{ij},  i, j = 0, 1, 2, i ≠ j,
with a_{ij} > 0.
For simplicity, let m₁ = −1, m₀ = 0, m₂ = 1, a_{ij} = 1, π_i = 1/3.

Introduce the processes of a posteriori probabilities π^i = (π^i_t)_{t>=0}:
    π^i_t = P( θ = m_i | F^X_t ),   i = 0, 1, 2.

Then for any decision rule (τ, d), R(τ, d) takes the form
    R(τ, d) = E[ cτ + 1 − Σ_i π^i_τ I{d = m_i} ].

Consequently, we must find a time τ* which minimizes
    E[ cτ + 1 − max{ π⁰_τ, π¹_τ, π²_τ } ]
and define d* by the formula
    d* = m_i,   where i = argmax_i π^i_{τ*}.
Our problem reduces to the problem of optimal stopping of
the observed process X.

From the innovation representation for X we obtain
    dX_t = E(θ | F^X_t) dt + dB̄_t,
where B̄_t = X_t − ∫_0^t E(θ | F^X_s) ds is a Brownian motion.

The properties of conditional expectation yield
    E(θ | F^X_t) = m₀π⁰_t + m₁π¹_t + m₂π²_t = π²_t − π¹_t.

Calculating π^i_t by means of the Bayes formula gives
    dX_t = [ e^{−t/2}(e^{X_t} − e^{−X_t}) / (1 + e^{−t/2}(e^{X_t} + e^{−X_t})) ] dt + dB̄_t.
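The drift formula above is just the Bayes posterior mean with likelihood weights e^{m_i x − m_i² t/2}, m_i ∈ {−1, 0, 1}, and equal priors. A small sketch of my own verifying the algebra at a few arbitrary (t, x) points:

```python
import math

def drift(t, x):
    # posterior weights (equal priors cancel) and E(theta | F_t) = pi2 - pi1
    w_m = math.exp(-x - t / 2)   # theta = -1
    w_0 = 1.0                    # theta = 0
    w_p = math.exp(x - t / 2)    # theta = +1
    return (w_p - w_m) / (w_m + w_0 + w_p)

def drift_closed_form(t, x):
    e = math.exp(-t / 2)
    return e * (math.exp(x) - math.exp(-x)) / (1 + e * (math.exp(x) + math.exp(-x)))

for t, x in [(0.5, 0.2), (3.0, -1.4), (10.0, 4.0)]:
    assert abs(drift(t, x) - drift_closed_form(t, x)) < 1e-12
```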
Thus, the problem
    inf_τ E[ cτ + G(π⁰_τ, π¹_τ, π²_τ) ]
with
    G(π⁰, π¹, π²) = min{ π¹ + π², π⁰ + π², π⁰ + π¹ }
is replaced by the problem
    inf_τ E[ cτ + G(τ, X_τ) ]
with
    G(t, x) = min( e^{−t/2}(e^x + e^{−x}), 1 + e^{−t/2}e^x, 1 + e^{−t/2}e^{−x} )
              / ( 1 + e^{−t/2}(e^x + e^{−x}) ).
Following the general theory, introduce the value function of the
problem
    V(t, x) = inf_τ E_{t,x}[ cτ + G(τ + t, X_{t+τ}) ].

The optimal stopping time is
    τ*(t, x) = inf{ s >= 0 : V(t + s, X_{t+s}) = G(t + s, X_{t+s}) }.

Now we characterize the set of continuation of observation
    C = { (t, x) : V(t, x) < G(t, x) }
for large t.
THEOREM 1 (qualitative behavior of the stopping boundaries)

There exist T₀ > 0 and functions f(t), g(t) such that the set
    C_{T₀} = { (t, x) ∈ C : t >= T₀ }
admits the representation
    C_{T₀} = { (t, x) : t >= T₀ and |x| ∈ (g(t), f(t)) }.

The functions f(t) and g(t) are such that
    f(t) = t/2 + b + O(e^{−t}),   g(t) = t/2 − b + O(e^{−t}),
where the constant b is the unique solution of the equation
    e^b − e^{−b} + 2b = 1/(2c).
OPTIMAL STOPPING BOUNDARIES

[Plot in the (t, x)-plane: the boundaries ±f(t), ±g(t) for t >= T₀, the
lines x = t/2 and x = −t/2, and the stopping regions H₀, H₁, H₂.]

The set of continuation of observation has the property
    C_{T₀} = { (t, x) : t >= T₀ and |x| ∈ (g(t), f(t)) }.
THEOREM 2 (integral equations)

For all t >= T₀ the stopping boundaries f(t), g(t) satisfy the system
of integral equations
    c ∫_t^∞ K₁(f(t), t, s, f(s), g(s)) ds = ∫_t^∞ K₂(f(t), t, s) ds,
    c ∫_t^∞ K₁(g(t), t, s, f(s), g(s)) ds = ∫_t^∞ K₂(g(t), t, s) ds,
where the functions K₁ and K₂ are defined by
    K₁(x, t, s, f, g) = Σ_i [ Φ_{s−t}(f − x − m_i(s − t)) − Φ_{s−t}(g − x − m_i(s − t)) ]
                        φ_t(x − m_i t) / Σ_j φ_t(x − m_j t),
    K₂(x, t, s) = Σ_i φ_{s−t}( m_i(s − t) − s/2 + x ) φ_t(x − m_i t)
                  / [ 2(2 + e^s) Σ_j φ_t(x − m_j t) ],
where
    φ_r(y) = (1/√(2πr)) e^{−y²/(2r)}   and   Φ_r(z) = ∫_{−∞}^z φ_r(y) dy.
5. Disorder problem on finite intervals

We observe a process X = (X_t)_{t>=0},
    X_t = μ(t − θ)^+ + B_t,
where θ is a random variable which does not depend on B and
is UNIFORMLY distributed on [0, 1].

We consider the following problems:
    V₁ = inf_{τ<=1} [ P(τ < θ) + c E(τ − θ)^+ ],
    V₂ = inf_{τ<=1} E|τ − θ|.
The key point in the solution of problems V₁ and V₂ is the reduction
to Markovian problems of optimal stopping.

Introduce the Shiryaev–Roberts statistic ψ = (ψ_t)_{t>=0}:
    ψ_t = e^{μX_t − μ²t/2} ∫_0^t e^{−μX_s + μ²s/2} ds,
or, in differentials,
    dψ_t = dt + μψ_t dX_t,   ψ₀ = 0.

The process ψ_t is related to the process of a posteriori probabilities
π_t = P(θ <= t | F^X_t) by the following formula:
    π_t = ψ_t / (ψ_t + (1 − t)).
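The closed form and the differential form of ψ are consistent on a discrete grid: the integral representation telescopes exactly into the one-step update ψ_{k} = e^{μΔX − μ²h/2}(ψ_{k−1} + h) when the integral is discretized with left endpoints. A sketch of my own (μ, step h, and the seed are arbitrary choices) computing ψ both ways along one simulated path:

```python
import math
import random

random.seed(7)
mu, h, n = 2.0, 1e-4, 10_000          # horizon 1, hypothetical mu
x = [0.0]
for _ in range(n):
    x.append(x[-1] + random.gauss(0.0, math.sqrt(h)))

# closed form with left-endpoint quadrature of the integral
integral, psi_cf = 0.0, [0.0]
for k in range(1, n + 1):
    s = (k - 1) * h
    integral += h * math.exp(-mu * x[k - 1] + mu * mu * s / 2)
    psi_cf.append(math.exp(mu * x[k] - mu * mu * k * h / 2) * integral)

# one-step update implied by the closed form
psi_rec = 0.0
for k in range(1, n + 1):
    dx = x[k] - x[k - 1]
    psi_rec = math.exp(mu * dx - mu * mu * h / 2) * (psi_rec + h)

assert abs(psi_rec - psi_cf[-1]) <= 1e-9 * max(1.0, abs(psi_cf[-1]))
```

Note that the recursion is an exact rewriting of the discretized integral (not an Euler scheme for the SDE), which is why the two results agree to rounding error.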
Lemma

The following representations hold:
    V₁ = inf_{τ<=1} E_∞[ ∫_0^τ (cψ_s − 1) ds ] + 1,
    V₂ = inf_{τ<=1} E_∞[ ∫_0^τ (ψ_s − (1 − s)) ds ] + 1/2,
where E_∞[·] stands for the expectation in the absence of disorder
(i.e., when X is a Brownian motion).

The proof is based on the following equalities:
    E(τ − θ)^+ = E_∞[ ∫_0^τ ψ_s ds ],
    P(τ < θ) = 1 − E_∞ τ,
    E(τ − θ)^− = E_∞ (1 − τ)²/2.
Proof of the lemma

1) Rewrite the average time of delay E(τ − θ)^+:
    E(τ − θ)^+ = ∫_0^1 E[ (τ − u)^+ | θ = u ] du
               = ∫_0^1 ∫_u^1 E[ I(s <= τ) | θ = u ] ds du
               = ∫_0^1 ∫_u^1 E_∞[ I(s <= τ) e^{μ(X_s − X_u) − μ²(s−u)/2} ] ds du
               = E_∞ ∫_0^τ ∫_0^s e^{μ(X_s − X_u) − μ²(s−u)/2} du ds
               = E_∞ ∫_0^τ ψ_s ds.
2) Rewrite the probability of a false alarm P(τ < θ):
    P(τ < θ) = ∫_0^1 P( τ < u | θ = u ) du
             = ∫_0^1 P_∞( τ < u ) du
             = 1 − E_∞ τ.

3) Rewrite the average time after a false alarm E(τ − θ)^−:
    E(τ − θ)^− = ∫_0^1 E[ (τ − u)^− | θ = u ] du
               = ∫_0^1 E_∞ (τ − u)^− du
               = E_∞ (1 − τ)²/2.
Thus, for the initial problems
    V₁ = inf_{τ<=1} [ P(τ < θ) + c E(τ − θ)^+ ],   V₂ = inf_{τ<=1} E|τ − θ|
we got the representations
    V₁ = inf_{τ<=1} E_∞[ ∫_0^τ (cψ_s − 1) ds ] + 1,
    V₂ = inf_{τ<=1} E_∞[ ∫_0^τ (ψ_s − (1 − s)) ds ] + 1/2,
where ψ has the differential
    dψ_t = dt + μψ_t dX_t,   ψ₀ = 0,
and X_t is a Brownian motion w.r.t. P_∞.
Introduce the functions f₁(t) = 1/c and f₂(t) = 1 − t.

Theorem

The optimal stopping times for V₁ and V₂ are
    τ*_i = inf{ t >= 0 : ψ_t >= a*_i(t) } ∧ 1,   i = 1, 2,
where a*_i(t) is the unique continuous solution of the equation
    ∫_t^1 E_∞[ (ψ_s − f_i(s)) I{ψ_s < a*_i(s)} | ψ_t = a*_i(t) ] ds = 0
satisfying the conditions
    a*_i(t) >= f_i(t) for t < 1,   a*_i(1) = f_i(1).
Theorem (continued)

The values V₁ and V₂ are given by
    V₁ = ∫_0^1 E_∞ (cψ_s − 1) I{ψ_s < a*₁(s)} ds + 1,
    V₂ = ∫_0^1 E_∞ [ ψ_s − (1 − s) ] I{ψ_s < a*₂(s)} ds + 1/2.
Proof of the theorem

For the solution of the problem, consider the value functions
    V_i(t, x) = inf_{τ<=1−t} E^∞_x[ ∫_0^τ (ψ_s − f_i(t + s)) ds ],   i = 1, 2,
where E^∞_x[·] stands for the expectation under the assumption ψ₀ = x.

One can prove that the V_i(t, x) are continuous, and the optimal stopping
times have the form
    τ*_i(t, x) = inf{ s >= 0 : (t + s, ψ_s) ∉ C_i },
where C_i is the set of continuation of observations:
    C_i = { (t, x) : V_i(t, x) < 0 }
(here 0 is the gain from instantaneous stopping).
Analyzing the structure of the functions V_i(t, x), we establish that
    C_i = { (t, x) : t ∈ [0, 1), x < a*_i(t) },
where the a*_i(t) are unknown nonincreasing functions on [0, 1],
with a*_i(t) >= f_i(t) for t < 1 and a*_i(1) = f_i(1).
One can prove that the a*_i(t) are continuous on [0, 1].
One can also prove that V_i(t, x) solves the free-boundary problem
    V'_t(t, x) + L_ψ V(t, x) = f_i(t) − x,   x < a*_i(t),
    V(t, x) = 0,                             x > a*_i(t),
    V(t, x) = 0,                             x = a*_i(t),
    V'_x(t, x) = 0,                          x = a*_i(t),
where
    L_ψ = (μ²x²/2) ∂²/∂x² + ∂/∂x.
Applying the Itô formula to V_i(s, ψ_s), we get
    E^∞_x V(1, ψ_{1−t}) = V(t, x)
        + E^∞_x ∫_0^{1−t} [V'_t + L_ψ V](t + s, ψ_s) I( ψ_s < a*_i(t + s) ) ds.

Since V_i(1, ·) ≡ 0 and V_i(t, x) = 0 for x = a*_i(t), we find
    V(t, x) = −E^∞_x ∫_0^{1−t} [V'_t + L_ψ V](t + s, ψ_s) I( ψ_s < a*_i(t + s) ) ds,
which gives, after the substitution [V'_t + L_ψ V](t, x) = f_i(t) − x, the
required equation.

The proof of uniqueness of the solution of the integral equations is
given in (Zhitlukhin, Shiryaev, TPA, 2012).
Numerical results

The integral equation
    ∫_t^1 E_∞[ (ψ_s − f_i(s)) I{ψ_s < a*_i(s)} | ψ_t = a*_i(t) ] ds = 0     (*)
can be solved numerically by backward induction:

1. Fix a partition 0 = t₀ < t₁ < ... < t_n = 1;

2. Take a*_i(t_n) = f_i(1) (by the theorem);

3. If a*_i(t_k), ..., a*_i(t_n) are calculated, then we find a*_i(t_{k−1}) by
   calculating the integral ∫_{t_{k−1}}^1 in (*) with the stepwise function
   equal to a*_i(·) in the points t_k, ..., t_n and solving the resulting
   algebraic equation w.r.t. a*_i(t_{k−1}).
Example

For μ = 4 consider the problem
    V₂ = inf_{τ<=1} E|τ − θ|.

[Two plots: a path of the process X_t with θ = 0.5, and the process ψ_t
together with the boundary a*₂(t).]
6. Disorder and Finance. I: Bubbles

We observe a Brownian motion with disorder (X_t)_{t>=0}:
    dX_t = [ μ₁ I(t < θ) + μ₂ I(t >= θ) ] dt + σ dB_t,
where θ ~ U[0, 1]; μ₁ > 0 > μ₂ (in the case of a long position) or
μ₁ < 0 < μ₂ (in the case of a short position); σ > 0 (the drift changes
from μ₁ to μ₂). We restrict our analysis to the case of a long position only.

Below we consider the problems of optimal stopping:
    H_I = sup_{τ<=1} E X_τ,   H_II = sup_{τ<=1} E exp(X_τ − σ²τ/2).

Earlier, problems of this type were considered in (Beibel, Lerche,
1997), (Shiryaev, Novikov, 2008), (Ekström, Lindberg, 2012).
Application in mathematical finance

Let the price of an asset be modeled by a geometric Brownian
motion with disorder, S_t = exp(X_t − σ²t/2):
    dS_t = [ μ₁ I(t < θ) + μ₂ I(t >= θ) ] S_t dt + σ S_t dB_t,   S₀ = 1,
i.e., the price on average grows up till the time θ, and falls down
after θ.

Problem H_I consists in the maximization of the logarithmic utility of
selling the asset:
    H_I = sup_{τ<=1} E(log S_τ)   [with μ_i replaced by λ_i = μ_i − σ²/2].

Problem H_II consists in the maximization of the linear utility of
selling the asset:
    H_II = sup_{τ<=1} E S_τ.
Solution of the problem H_I

Since X_t = μ₁t + (μ₂ − μ₁)(t − θ)^+ + σB_t, we have for any stopping
time τ <= 1
    E X_τ = E[ μ₁τ − (μ₁ − μ₂)(τ − θ)^+ ].

Denoting μ = (μ₁ − μ₂)/σ and X̃_t = (X_t − μ₁t)/σ, we find
    ψ_t = e^{−μX̃_t − μ²t/2} ∫_0^t e^{μX̃_s + μ²s/2} ds.

Analogously to the result above,
    H_I = sup_{τ<=1} E_∞[ ∫_0^τ ( μ₁ − (μ₁ − μ₂) ψ_s ) ds ],
where E_∞[·] stands for the expectation under the assumption that X̃ is
a standard Brownian motion.
Theorem

The optimal stopping time in problem H_I is
    τ*_l = inf{ t >= 0 : ψ_t >= a*_l(t) } ∧ 1,
where a*_l(t) is the unique continuous solution of the equation
    ∫_t^1 E_∞[ ( μ₁ − (μ₁ − μ₂) ψ_s ) I(ψ_s < a*_l(s)) | ψ_t = a*_l(t) ] ds = 0
satisfying the conditions
    a*_l(t) >= μ₁/(μ₁ − μ₂) for t < 1,   a*_l(1) = μ₁/(μ₁ − μ₂).

The value H_I = E X_{τ*_l} can be found by the formula
    H_I = ∫_0^1 E_∞[ μ₁ − (μ₁ − μ₂) ψ_s ] I(ψ_s < a*_l(s)) ds.
Solution of the problem H_II

We introduce a new measure P̃ such that
    (X̃_t − μt) is a P̃-Brownian motion,
where X̃_t = (X_t − μ₁t)/σ.

We establish that for any stopping time τ <= 1
    E_P S_τ = E_P̃[ S_τ dP/dP̃ ] = E_P̃[ e^{λ₁τ} (ψ_τ + 1 − τ) ],
where the process ψ has the differential
    dψ_t = [ 1 − (μ₁ − μ₂) ψ_t ] dt + μψ_t d(X̃_t − μt),   ψ₀ = 0.

Applying the Itô formula, we get
    E_P S_τ = E_P̃[ ∫_0^τ e^{λ₁s} ( λ₂ψ_s + λ₁(1 − s) ) ds ] + 1.
Theorem

The optimal stopping time in problem H_II is
    τ*_g = inf{ t >= 0 : ψ_t >= a*_g(t) } ∧ 1,
where a*_g(t) is the unique continuous solution of the equation
    ∫_t^1 E_P̃[ e^{λ₁s} ( λ₂ψ_s + λ₁(1 − s) ) I(ψ_s < a*_g(s)) | ψ_t = a*_g(t) ] ds = 0
satisfying the conditions
    a*_g(t) >= (λ₁/|λ₂|)(1 − t) for t < 1,   a*_g(1) = 0.

The value H_II = E S_{τ*_g} can be found by the formula
    H_II = ∫_0^1 E_P̃[ e^{λ₁s} ( λ₂ψ_s + λ₁(1 − s) ) I(ψ_s < a*_g(s)) ] ds + 1.
Example

Consider problems H_I and H_II for μ₁ = −μ₂ = 2, σ = 1.

[Two plots: a path of the price process S_t with θ = 0.5, and the
process ψ_t together with the boundaries a*_l(t), a*_g(t).]
III. When to sell Apple?

Let us apply our results to problems of mathematical finance
based on real asset prices.

Consider two bubbles on financial markets:

• the increase of prices of Apple assets from 2009 to 2012;

• the increase of prices of Internet companies' assets at the end of
  the 1990s.

The problem consists in choosing the optimal time of exit from a bubble
with maximum gain.
REMARK. The basic idea of bubbles is that there is a FAST
rate of growth in prices, then a PEAK, and then a fast DECLINE.

There are several papers of Robert Jarrow and Philip Protter
(see, e.g., SIAM J. Financial Math., 2 (2011), 839–865), where they
developed the martingale theory of bubbles. Their analysis is
based on the idea that prices of bubbles behave similarly to the
path behavior of a strict nonnegative continuous local
martingale. A typical path of such a process shoots up to a
high value and then quickly decreases to small values and remains
at them. Jarrow and Protter proposed some stochastic volatility
models, saying that the appearance of bubbles in prices is related to
an increase of the volatility.

Our analysis of bubbles is based on the idea of working with the drift
terms (increasing/decreasing).
Example 1. The increase of Apple asset prices

In 2009–2012 prices of Apple assets grew up almost 9 times.
The minimum equals $82.33 (6/03/09), the maximum equals $705.07
(21/09/12). However, already on 15/11/12 the price fell down to $522.62.

[Plot: Apple asset prices, 2009–2013.]

The fall at the end of 2012 was expected already
at the beginning of the year.
Setting of the problem of optimal exit from a bubble

Agents on the market might not be aware of the existence of a
probability-statistical model of the price evolution.
From their point of view, the question considered sounds as follows:

1. One observes a sequence of prices
       P₀, P₁, ..., P_N,
   where P₀ is the price on 6/03/09 and P_N is the price on 31/12/12.

2. One expects prices to fall down at the end of 2012.

3. For a given date n₀ < N of buying the asset, one wants to find a
   time of selling it which would maximize the gain.
Representation of the observed prices by a process with disorder
1. We project the dates n_0, . . . , N onto the interval [0, 1], so that
one market day has length Δt = 1/(N − n_0).
Assume that the prices are modeled by the process

dS_t = [μ_1 I(t < θ) + μ_2 I(θ ≤ t)] S_t dt + σ S_t dB_t,

where S_{kΔt} = P_k/P_0 and θ ~ U[0, 1].
2. The parameters μ_1 and σ are estimated from the data P_0, . . . , P_{n_0}.
The choice of μ_2 is subjective, but μ_2 = −μ_1 proves empirically
to be good (one can see it from other cases).
3. Then one applies the results on the solution of the problem of
maximizing E S_τ.
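For simulation experiments, the disorder model above can be generated directly. The following is a sketch (ours, not from the lectures; the function name and all parameter values are illustrative) that produces paths of S_t on [0, 1] with θ ~ U[0, 1]:

```python
import numpy as np

def simulate_disorder_paths(mu1, mu2, sigma, n_steps=252, n_paths=1000, seed=0):
    """Simulate dS_t = [mu1*I(t < theta) + mu2*I(theta <= t)] S_t dt + sigma S_t dB_t
    on [0, 1] with S_0 = 1 and theta ~ U[0, 1].  A log-Euler step is used:
    since the drift is piecewise constant, exponentiating the cumulated
    log-increments keeps every path strictly positive."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    t = np.arange(n_steps) * dt                  # left endpoint of each step
    theta = rng.uniform(0.0, 1.0, size=n_paths)  # disorder times, U[0, 1]
    # drift mu1 before the disorder, mu2 after (per path, per step)
    drift = np.where(t[None, :] < theta[:, None], mu1, mu2)
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    log_S = np.cumsum((drift - 0.5 * sigma**2) * dt + sigma * dB, axis=1)
    S = np.hstack([np.ones((n_paths, 1)), np.exp(log_S)])
    return S, theta
```

Such simulated paths can then be used to test any candidate selling rule before applying it to the real price record P_0, . . . , P_N.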
86
Results of choice of time for selling Apple
Buy Sell
3-Jan-11 ($ 329.57) 9-Oct-12 ($ 635.85)
1-Jul-11 ($ 343.26) 8-Oct-12 ($ 638.17)
3-Jan-12 ($ 411.23) 8-Oct-12 ($ 638.17)
1-May-12 ($ 582.13) 9-Oct-12 ($ 635.85)
3-Jul-12 ($ 599.41) 9-Oct-12 ($ 635.85)
1-Aug-12 ($ 606.81) 11-Oct-12 ($ 628.10)
87
Results of our method in the case when the assets were bought
on 3 January 2012.
On the left: the prices (the red point marks the time of selling).
On the right: the statistic and the optimal stopping boundary.
[Figure: left panel, Apple price from January 2012 to January 2013, scale $400–$700; right panel, the statistic (scale 0–5) and the optimal stopping boundary, January to December 2012.]
88
Example 2. The rise of the NASDAQ index
From the beginning of 1994 till March
2000, the NASDAQ-100 grew more
than 12 times, from 395.53 to 4816.35.
Then it fell 6 times, to 795.25,
by October 2002.
For example, the Soros Foundation
lost $5 bln of $12 bln.
[Figure: NASDAQ-100 index, 1994–2004; vertical scale from 1000 to 4000.]
89
Results of choice of time for selling NASDAQ-100
Buy Sell
2-Jul-98 ($ 1332.53) 12-Apr-00 ($ 3633.63)
4-Jan-99 ($ 1854.39) 13-Apr-00 ($ 3553.81)
1-Jul-99 ($ 2322.32) 13-Apr-00 ($ 3553.81)
1-Oct-99 ($ 2404.45) 14-Apr-00 ($ 3207.96)
3-Jun-00 ($ 3790.55) 14-Apr-00 ($ 3553.81)
The results are obtained under the assumption that prices begin
to fall before the end of 2001 (this was indeed expected by most
traders).
90
PROBLEM I. Let U = U(x) be a utility function (e.g., U(x) =
log x or U(x) = x). In the paper
A. Shiryaev, Z. Xu, X. Y. Zhou. Thou Shalt Buy and Hold
the following problem was considered:
To find an optimal stopping time τ* such that

E U(P_{τ*}/M_T) = sup_{τ ≤ T} E U(P_τ/M_T),

where P_t = S_t/B_t is the discounted price, M_T = max_{0 ≤ t ≤ T} P_t
is its running maximum, and

dB_t = r B_t dt, B_0 = 1,   dS_t = S_t(μ dt + σ dW_t), S_0 = 1.

The prices P_t solve the equation

dP_t = P_t((μ − r) dt + σ dW_t), P_0 = 1,

and P_t = exp(αt + σ W_t), where α = μ − r − σ²/2.
91
THEOREM I. For the linear function U(x) = x the optimal
stopping time is degenerate:

τ* = T, if α > 0;   τ* = 0, if α ≤ 0.   (∗)

(The cases α ≤ 0 and α ≥ σ²/2 were considered in the paper by
A. Shiryaev, Z. Xu, X. Y. Zhou; the case 0 < α < σ²/2 was studied
by J. du Toit, G. Peskir.)
The case of the logarithmic function U(x) = log x is simple:

sup_{τ ≤ T} E log(P_τ/M_T) = sup_{τ ≤ T} E[ατ + σW_τ − log M_T]
= sup_{τ ≤ T} E[ατ + σW_τ] − E log M_T = sup_{τ ≤ T} α Eτ − E log M_T
= αT − E log M_T, if α > 0;   = −E log M_T, if α ≤ 0.

So, in the logarithmic case the optimal stopping time is given by (∗).
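The dichotomy (∗) is trivial to implement once the model parameters are known; a minimal sketch (the function name is ours):

```python
def optimal_linear_exit(mu, r, sigma, T):
    """Rule (*) for U(x) = x: hold until T if alpha > 0, sell immediately
    otherwise, where alpha = mu - r - sigma**2/2 is the drift of log P_t."""
    alpha = mu - r - sigma ** 2 / 2
    return T if alpha > 0 else 0.0
```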
92
PROBLEM II. Now we consider the model

dS_t = S_t[(μ_1 I(t < θ) + μ_2 I(t ≥ θ)) dt + σ dW_t]

with μ_1 > μ_2 and

α_1 ≡ μ_1 − r − σ²/2 > 0,   α_2 ≡ μ_2 − r − σ²/2 < 0,

so that

μ_2 − σ²/2 < r < μ_1 − σ²/2.

If the value μ_1 remained unchanged on the whole interval [0, T],
then, since α_1 > 0, by the previous result (Problem I) we should
hold the stock until time t = T and sell it at this time.
But in fact the model admits that at a certain random time θ the
regime switches from μ_1 to μ_2, and since α_2 = μ_2 − r − σ²/2 < 0,
again by the previous problem we should
sell the stock at this time θ.
However, the time θ is unobservable, and so the time of selling must
depend on a correct estimation of the time θ.
93
Our second problem (Problem II) is the following:
To find a one-time rebalancing stopping time τ*_T such that

V_T = sup_{τ ≤ T} E U(P_τ/M_T),   P_τ = S_τ/B_τ.

We shall consider the case U(x) = log x, i.e.,

V_T = sup_{τ ≤ T} E log(P_τ/M_T) = sup_{τ ≤ T} E log P_τ − E log M_T.

Assume that the hidden parameter θ has an exponential distribution:

P(θ = 0) = π,   P(θ > t | θ > 0) = e^{−λt},

where λ > 0 is known and π ∈ [0, 1). The Brownian motion W and
the time θ in

dS_t = S_t[(μ_1 I(t < θ) + μ_2 I(t ≥ θ)) dt + σ dW_t]

are independent.
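This prior on θ (an atom at 0 of mass π plus an exponential tail) is easy to sample for simulation experiments; a sketch (ours, with illustrative parameter values):

```python
import numpy as np

def sample_theta(pi, lam, n, seed=0):
    """Draw n disorder times with P(theta = 0) = pi and
    theta | theta > 0 ~ Exp(lam), i.e. P(theta > t | theta > 0) = exp(-lam*t)."""
    rng = np.random.default_rng(seed)
    atom = rng.uniform(size=n) < pi          # theta = 0 with probability pi
    return np.where(atom, 0.0, rng.exponential(1.0 / lam, size=n))
```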
94
We see that P_t = S_t/B_t = exp X_t, where

X_t = ∫_0^t α(s, θ) ds + σ W_t,

α(s, θ) = μ(s, θ) − r − σ²/2,   μ(s, θ) = μ_1 I(s < θ) + μ_2 I(s ≥ θ).

LEMMA 1. For any stopping time τ ≤ T (< ∞)

E log P_τ = E X_τ = E ∫_0^τ [α_1 − (α_1 − α_2) π_s] ds,   (∗∗)

where π_s = P(θ ≤ s | F_s), F_s = σ(S_u, u ≤ s).
95
Proof. For X_t = ∫_0^t α(s, θ) ds + σ W_t we have the innovation
representation

X_t = ∫_0^t E[α(s, θ) | F_s] ds + σ W̄_t,

where W̄ = (W̄_t, F_t) is the innovation (Wiener) process.
Since E[α(s, θ) | F_s] = α_1(1 − π_s) + α_2 π_s = α_1 − (α_1 − α_2) π_s, we get
the representation (∗∗).
96
LEMMA 2. For (π_t)_{t ≥ 0} we have

dπ_t = λ(1 − π_t) dt + ((μ_2 − μ_1)/σ) π_t(1 − π_t) dW̄_t,

where

W̄_t = (1/σ)[X_t − ∫_0^t (α_1(1 − π_s) + α_2 π_s) ds].

The proof is well known and can be carried out in the following way.
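Lemma 2 also gives a recursive way to track π_t from the observed log-price increments dX_t. Below is an Euler-scheme sketch (ours, not from the lectures; the clipping of each step to [0, 1] is a purely numerical safeguard):

```python
import numpy as np

def filter_pi(dX, dt, alpha1, alpha2, lam, sigma, pi0=0.0):
    """Euler discretization of
        d pi = lam*(1 - pi) dt + ((mu2 - mu1)/sigma) * pi*(1 - pi) * dW_bar,
        dW_bar = (dX - (alpha1*(1 - pi) + alpha2*pi) dt) / sigma,
    where mu2 - mu1 = alpha2 - alpha1 and dX are increments of X_t = log P_t."""
    pi = pi0
    out = []
    for dx in dX:
        dW = (dx - (alpha1 * (1 - pi) + alpha2 * pi) * dt) / sigma
        pi = pi + lam * (1 - pi) * dt + ((alpha2 - alpha1) / sigma) * pi * (1 - pi) * dW
        pi = min(max(pi, 0.0), 1.0)   # keep the Euler step inside [0, 1]
        out.append(pi)
    return np.array(out)
```

Fed with real increments of log(P_k/P_0), this recursion produces the statistic that is compared with the stopping boundary in the examples above.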
97
Define

φ_t = π_t/(1 − π_t),   L_t = dP^0_t/dP^∞_t,   where P^s_t = Law(X_u, u ≤ t | θ = s).

Then

L_t = exp{((α_2 − α_1)/σ²) X_t − ((α_2² − α_1²)/(2σ²)) t},
dL_t = L_t ((α_2 − α_1)/σ²)(dX_t − α_1 dt).

By the Bayes formula,

φ_t = φ_0 e^{λt} dP^0_t/dP^∞_t + e^{λt} ∫_0^t λ e^{−λs} (dP^s_t/dP^∞_t) ds
    = φ_0 e^{λt} L_t + e^{λt} ∫_0^t λ e^{−λs} (L_t/L_s) ds,

where φ_0 = π/(1 − π) and we used the property

dP^s_t/dP^∞_t = L_t/L_s.
98
By the Itô formula,

dφ_t = [λ(1 + φ_t) − φ_t α_1 (α_2 − α_1)/σ²] dt + φ_t ((α_2 − α_1)/σ²) dX_t

with φ_0 = π/(1 − π). From π_t = φ_t/(1 + φ_t) it follows that

dπ_t = (1 − π_t)[λ − ((α_2 − α_1)α_1/σ²) π_t − ((α_2 − α_1)²/σ²) π_t²] dt
       + ((α_2 − α_1)/σ²) π_t(1 − π_t) dX_t,

where X_t = ∫_0^t [α_1 − (α_1 − α_2) π_s] ds + σ W̄_t. So,

dπ_t = λ(1 − π_t) dt + ((μ_2 − μ_1)/σ) π_t(1 − π_t) dW̄_t.

Since

E log P_τ = E X_τ = E[∫_0^τ [α_1 − (α_1 − α_2) π_s] ds + σ W̄_τ]

and E W̄_τ = 0 (τ ≤ T), we get the representation

E log P_τ = E ∫_0^τ [α_1 − (α_1 − α_2) π_t] dt.
99
REMARK. For P_t = e^{X_t} we obtain

E P_τ = E exp{∫_0^τ [(μ_1 − r) − (μ_1 − μ_2) π_s] ds},

where (π_t)_{t ≤ T} has the stochastic differential

dπ_t = (1 − π_t)[λ + π_t(μ_2 − μ_1)] dt + ((μ_2 − μ_1)/σ) π_t(1 − π_t) dW̄_t.

Return to the problem of finding

V_T = sup_{τ ≤ T} E log P_τ = sup_{τ ≤ T} E ∫_0^τ [α_1 − (α_1 − α_2) π_t] dt.
100
LEMMA 3. For V_T = V_T(π; λ) we have the representation

V_T(π; λ) = (α_1/λ)[(1 − π) − R_T(c; π)],

where c = |α_2|/α_1 and

R_T(c; π) = inf_{τ ≤ T} [P(τ < θ) + cλ E(τ − θ)^+].

Proof. From dπ_t = λ(1 − π_t) dt + ((μ_2 − μ_1)/σ) π_t(1 − π_t) dW̄_t we find

t = (π_t − π)/λ + ∫_0^t π_s ds − ((μ_2 − μ_1)/(λσ)) ∫_0^t π_s(1 − π_s) dW̄_s.
101
So,

α_1 t = (α_1/λ)(π_t − π) + α_1 ∫_0^t π_s ds − (α_1/λ)((μ_2 − μ_1)/σ) ∫_0^t π_s(1 − π_s) dW̄_s

and

E ∫_0^τ [α_1 − (α_1 − α_2) π_t] dt
= (α_1/λ) E[(π_τ − π) + (λ α_2/α_1) ∫_0^τ π_t dt]
= (α_1/λ) E[(π_τ − π) − (λ|α_2|/α_1) ∫_0^τ π_t dt]
= (α_1/λ)[(1 − π) − E((1 − π_τ) + (λ|α_2|/α_1) ∫_0^τ π_t dt)].
102
Note that P(τ < θ) = E I(τ < θ) = E E(I(τ < θ) | F^X_τ) = E(1 − π_τ)
and

E(τ − θ)^+ = E ∫_0^T I(θ ≤ s ≤ τ) ds = E ∫_0^T E[I(θ ≤ s) I(s ≤ τ) | F^X_s] ds
= E ∫_0^T I(s ≤ τ) E[I(θ ≤ s) | F^X_s] ds = E ∫_0^τ π_s ds.

So,

E[(1 − π_τ) + (λ|α_2|/α_1) ∫_0^τ π_t dt] = P(τ < θ) + (λ|α_2|/α_1) E(τ − θ)^+

and

E ∫_0^τ [α_1 − (α_1 − α_2) π_t] dt = (α_1/λ)[(1 − π) − (P(τ < θ) + (λ|α_2|/α_1) E(τ − θ)^+)].

Taking the infimum over τ ≤ T, we find the required formula

V_T(π; λ) = (α_1/λ)[(1 − π) − R_T(c; π)].
103
The solution of the problem

R_T(c; π) = inf_{τ ≤ T} [P(τ < θ) + cλ E(τ − θ)^+]

for the case T = ∞ was obtained by the author: the optimal
stopping time is given by

τ* = inf{t ≥ 0: π_t ≥ g*},   (∗∗∗)

with g* the unique root of the equation Ψ(g) = 1, where

Ψ(x) = (cλ/ρ) ∫_0^x exp{−(λ/ρ)[H(x) − H(y)]} dy/(y(1 − y)²)

with

c = |α_2|/α_1,   ρ = (μ_1 − μ_2)²/(2σ²),   H(x) = log(x/(1 − x)) − 1/x.
104
For the case T < ∞ the optimal stopping time is given by

τ*_T = inf{0 ≤ t ≤ T: π_t ≥ g*_T(t)},

where g* = g*_T(t), 0 ≤ t ≤ T, is the unique solution of the nonlinear
integral equation (Gapeev & Peskir)

E_{t,g(t)} π_T = g(t) + cλ ∫_0^{T−t} E_{t,g(t)}[π_{t+u} I(π_{t+u} < g(t + u))] du
+ λ ∫_0^{T−t} E_{t,g(t)}[(1 − π_{t+u}) I(π_{t+u} ≥ g(t + u))] du.
105
[Figure: a trajectory of the statistic π_t and the optimal stopping boundary g*_T(t) on [0, T]. The boundary decreases from g*_T(0) to the terminal value g*_T(T) = λ/(λ + cλ) = α_1/(α_1 + |α_2|); the constant level g* (defined on page 104) is the infinite-horizon threshold. One sells at τ* = inf{t ≤ T: π_t ≥ g*_T(t)}.]
106