
Answers.

1.1. Clearly
\[
f_n(X_1, X_2, \ldots, X_n) =
\begin{cases}
\frac{3}{4} & \text{if the $(n+1)$-th toss uses coin 1}\\[2pt]
\frac{1}{4} & \text{if the $(n+1)$-th toss uses coin 2}
\end{cases}
\]
For the $(n+1)$-th toss to use coin 1, an even number of coin changes must have occurred, i.e. $n - S_n$ must be even. Therefore
\[
f_n(X_1, X_2, \ldots, X_n) =
\begin{cases}
\frac{3}{4} & \text{if $n - S_n$ is even}\\[2pt]
\frac{1}{4} & \text{if $n - S_n$ is odd}
\end{cases}
\]
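This can be checked numerically. Below is a minimal Python sketch, assuming the usual setup behind this problem (coin 1 has heads probability 3/4, coin 2 has 1/4, one starts with coin 1 and switches coins after every tail; $X_i = 1$ for heads, $S_n = X_1 + \cdots + X_n$); the function name is illustrative:

    import random

    def estimate(n=10, trials=100000):
        # counts[parity of n - S_n] = [heads seen on toss n+1, number of trials]
        counts = {0: [0, 0], 1: [0, 0]}
        for _ in range(trials):
            coin, s = 1, 0
            for _ in range(n):
                x = 1 if random.random() < (0.75 if coin == 1 else 0.25) else 0
                s += x
                if x == 0:            # assumed rule: switch coins after every tail
                    coin = 3 - coin
            parity = (n - s) % 2      # n - S_n counts the tails so far
            x = 1 if random.random() < (0.75 if coin == 1 else 0.25) else 0
            counts[parity][0] += x
            counts[parity][1] += 1
        for parity in (0, 1):
            heads, total = counts[parity]
            print(parity, round(heads / total, 3))  # ~0.75 for even, ~0.25 for odd

    estimate()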
1.2.
\[
E[S_{n+1}^2 - S_n^2 \mid \mathcal{F}_n] = E[X_{n+1}^2 + 2X_{n+1}S_n \mid \mathcal{F}_n] = 1
\]
Hence $S_n^2 - n$ is a martingale. Let $\tau$ be the stopping time. Then for any $k$
\[
E[S_{\tau\wedge k}^2 - (\tau\wedge k)] = x^2 \quad\text{or}\quad E[S_{\tau\wedge k}^2] = E[\tau\wedge k] + x^2
\]
$S_n$ for $n \le \tau$ is bounded by $N$. Hence we can let $k \to \infty$ in the LHS, and by the monotone convergence theorem it is OK to let $k \to \infty$ in the RHS. Therefore
\[
E[S_\tau^2] = N^2\,P[S_\tau = N] + 0^2\,P[S_\tau = 0] = N^2\cdot\frac{x}{N} = Nx = E[\tau] + x^2
\]
or
\[
E[\tau] = Nx - x^2 = x(N-x)
\]
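A Monte Carlo sketch of this identity for the symmetric walk absorbed at $0$ and $N$ (names and parameters illustrative):

    import random

    def mean_exit_time(x, N, trials=20000):
        # simple symmetric random walk from x, stopped on hitting 0 or N
        total = 0
        for _ in range(trials):
            s, t = x, 0
            while 0 < s < N:
                s += random.choice((-1, 1))
                t += 1
            total += t
        return total / trials

    print(mean_exit_time(3, 10))  # theory: E[tau] = x(N - x) = 21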
2.1. Let $P(x)$ be the probability that $\xi_n \to \infty$ given that $\xi_0 = x$. Then
\[
P(x) = pP(x+1) + qP(x-1) \ \text{ for } x \ge 1; \qquad P(0) = P(1)
\]
This yields
\[
(P(x+1) - P(x))\,p = q\,(P(x) - P(x-1))
\]
Since $P(1) - P(0) = 0$ it follows that $P(x) \equiv c$. If we take $u(x) = \sigma^x$ then this will solve
\[
u(x) = pu(x+1) + qu(x-1)
\]
if $\sigma = p\sigma^2 + q$, or
\[
\sigma = \frac{1 \pm \sqrt{1-4pq}}{2p} = \frac{1 \pm (p-q)}{2p} \in \Big\{1,\ \frac{q}{p}\Big\}
\]
If we look at the entire set of integers and define $(\xi_n)$ as just a random walk, then $u(\xi_n)$ will be a martingale. If $\tau$ is the time of hitting $0$, there is no difference between the two walks before $\tau$. Hence
\[
u(x) = \Big(\frac{q}{p}\Big)^x
\]
is the probability of hitting $0$. Since a martingale that is bounded must have a limit, the only other possibility is going to $+\infty$. Thus
\[
1 - \Big(\frac{q}{p}\Big)^x = P[\xi_n \to \infty,\ \xi_n > 0\ \forall\, n \ge 0 \mid \xi_0 = x] \to 1
\]
as $x \to \infty$. Since $P(x) \equiv c$, it follows that $c = 1$ and $P[\tau < \infty \mid \xi_0 = x] = \big(\frac{q}{p}\big)^x$.
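A numerical sketch of the hitting probability; the cutoff `cap` standing in for "never returns to 0" is an artifact of the simulation:

    import random

    def hit_zero_prob(x, p, trials=5000, cap=2000):
        # walk from x > 0: +1 w.p. p, -1 w.p. q = 1 - p, with p > q;
        # paths surviving `cap` steps are treated as escaping to +infinity
        hits = 0
        for _ in range(trials):
            s = x
            for _ in range(cap):
                s += 1 if random.random() < p else -1
                if s == 0:
                    hits += 1
                    break
        return hits / trials

    print(hit_zero_prob(3, 0.6))  # theory: (q/p)^x = (2/3)**3 = 0.296...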
2.2. If $q > p$, then $\xi_n \to -\infty$ and so $P[\tau < \infty \mid \xi_0 = x] = 1$. Note that until it hits $0$ it is just a random walk. To calculate $E[\tau]$ we note that
\[
\xi_n - n(p-q)
\]
is a martingale. This yields
\[
E[\xi_\tau - \tau(p-q)] = x
\]
But $\xi_\tau = 0$. Therefore
\[
E[\tau] = \frac{x}{q-p}
\]
This needs a little justification. Stop at $N$ as well as at $0$; that is, define
\[
\tau_N = \inf\{t : \xi_t = 0 \text{ or } N\}
\]
Then, with $p_N(x)$ the probability of hitting $0$ before $N$,
\[
x = E[\xi_{\tau_N} - \tau_N(p-q)] = 0\cdot p_N(x) + (1 - p_N(x))N - (p-q)E_x[\tau_N]
\]
This simplifies to
\[
E_x[\tau_N] = \frac{x - N(1 - p_N(x))}{q-p}
\]
Since $1 - p_N(x) \le \big(\frac{p}{q}\big)^{N-x}$, $N(1 - p_N(x)) \to 0$, and $\tau_N \uparrow \tau$ as $N \to \infty$. This completes the proof.
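Again a short simulation sketch (parameters illustrative):

    import random

    def mean_hit_time(x, p, trials=10000):
        # walk from x with q = 1 - p > p, so it drifts down and hits 0 a.s.
        total = 0
        for _ in range(trials):
            s, t = x, 0
            while s > 0:
                s += 1 if random.random() < p else -1
                t += 1
            total += t
        return total / trials

    print(mean_hit_time(4, 0.4))  # theory: x/(q - p) = 4/0.2 = 20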
3.1. Let $\tau$ take values $\{s_j\}$. Let $A \in \mathcal{F}_\tau$. We need to show
\[
E[f(x(t_1+\tau) - x(\tau),\, x(t_2+\tau) - x(\tau),\, \ldots,\, x(t_n+\tau) - x(\tau))\,1_A(\omega)]
= P(A)\,E[f(x(t_1), x(t_2), \ldots, x(t_n))]
\]
where $P$ is the Brownian motion probability and $E$ is expectation with respect to $P$.
Let $E_j = \{\omega : \tau = s_j\}$. Then $E_j \in \mathcal{F}_{s_j}$. From the independence of increments for Brownian motion, the collection $\{x(s_j + t_i) - x(s_j)\}$ is independent of $\mathcal{F}_{s_j}$ and has the same distribution as $\{x(t_i)\}$ under $P$. Moreover $A \in \mathcal{F}_\tau$ means $A \cap \{\tau = s_j\} \in \mathcal{F}_{s_j}$. Hence
\begin{align*}
E[f(x(t_1+\tau) - x(\tau),\, \ldots,\, x(t_n+\tau) - x(\tau))\,1_A(\omega)]
&= \sum_j E[f(x(t_1+\tau) - x(\tau),\, \ldots,\, x(t_n+\tau) - x(\tau))\,1_{A\cap E_j}(\omega)]\\
&= \sum_j E[f(x(t_1+s_j) - x(s_j),\, \ldots,\, x(t_n+s_j) - x(s_j))\,1_{A\cap E_j}(\omega)]\\
&= \sum_j P(A\cap E_j)\,E[f(x(t_1), x(t_2), \ldots, x(t_n))]\\
&= P(A)\,E[f(x(t_1), x(t_2), \ldots, x(t_n))]
\end{align*}
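A numerical illustration of this strong Markov statement, taking $\tau$ to be the (grid-valued) first passage of a discretized path above a level $a$; all parameters are illustrative. The post-$\tau$ increment should again be centered Gaussian with variance $t$:

    import numpy as np

    rng = np.random.default_rng(0)
    dt, n_steps, a, t_after = 1e-3, 20000, 1.0, 0.5
    k_after = int(t_after / dt)
    samples = []
    for _ in range(2000):
        x = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n_steps))))
        hit = np.argmax(x >= a)            # first grid index with x >= a (0 if never)
        if x[hit] >= a and hit + k_after < len(x):
            samples.append(x[hit + k_after] - x[hit])
    samples = np.array(samples)
    print(samples.mean(), samples.var())   # expect ~0 and ~t_after = 0.5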
3.2. First note that $\tau_n = \frac{[n\tau]+1}{n} = \frac{j}{n}$ if $[n\tau] = j-1$, i.e. if $j-1 \le n\tau < j$, i.e. $\frac{j-1}{n} \le \tau < \frac{j}{n}$. Hence the set $\{\omega : \tau_n(\omega) = \frac{j}{n}\}$ is in $\mathcal{F}_{j/n}$ and $\tau_n$ is a stopping time. Because $\tau \le \tau_n$, $\mathcal{F}_\tau \subset \mathcal{F}_{\tau_n}$. If $A \in \mathcal{F}_\tau$, then $A \in \mathcal{F}_{\tau_n}$ and, by part 3.1,
\[
E[f(x(t_1+\tau_n) - x(\tau_n),\, x(t_2+\tau_n) - x(\tau_n),\, \ldots,\, x(t_k+\tau_n) - x(\tau_n))\,1_A(\omega)]
= P(A)\,E[f(x(t_1), x(t_2), \ldots, x(t_k))]
\]
Assuming $f$ to be continuous, we can let $n \to \infty$, so that $\tau_n \to \tau$, and obtain
\[
E[f(x(t_1+\tau) - x(\tau),\, x(t_2+\tau) - x(\tau),\, \ldots,\, x(t_k+\tau) - x(\tau))\,1_A(\omega)]
= P(A)\,E[f(x(t_1), x(t_2), \ldots, x(t_k))]
\]
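The dyadic approximation itself is elementary to check numerically:

    import math

    # tau_n = (floor(n*tau) + 1)/n takes only values j/n, dominates tau,
    # and converges to it from above at rate 1/n
    tau = 0.7373
    for n in (10, 100, 1000, 10000):
        tau_n = (math.floor(n * tau) + 1) / n
        print(n, tau_n, tau_n >= tau, tau_n - tau <= 1 / n)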
4.1. By Itô's formula, until time $\tau$,
\[
du(t, x(t)) = \Big[u_t(t, x(t)) + \tfrac{1}{2}u_{xx}(t, x(t))\Big]dt + u_x(t, x(t))\,dx(t)
\]
where $x(t)$ is Brownian motion starting from any $x$ with $|x| < 1$ at time $s$. In particular
\[
u(s, x) = E_x[u(\tau\wedge t, x(\tau\wedge t))]
\]
On the set $\tau \le t$, $u(\tau, x(\tau)) = u(\tau, \pm 1) = 0$. Hence if $u$ is bounded by $C$,
\[
u(s, x) \le C\,P[\tau > t]
\]
And, by the reflection principle,
\begin{align*}
P[\tau > t] &\le P_{s,x}\Big[\sup_{s\le\sigma\le t} x(\sigma) \le 1\Big]
\le 1 - 2P_{s,x}[x(t) \ge 1]\\
&= 1 - 2\,\frac{1}{\sqrt{2\pi(t-s)}}\int_1^\infty \exp\Big[-\frac{y^2}{2(t-s)}\Big]dy\\
&= 2\,\frac{1}{\sqrt{2\pi(t-s)}}\int_0^1 \exp\Big[-\frac{y^2}{2(t-s)}\Big]dy
\;\le\; \frac{2}{\sqrt{2\pi(t-s)}} \;\to\; 0
\end{align*}
as $t \to \infty$. As for the second part, one can construct a solution of the form $f(x)e^{-\lambda t}$ provided
\[
\lambda f + \tfrac{1}{2}f_{xx} = 0.
\]
$f(x) = \cos\frac{\pi x}{2}$ and $\lambda = \frac{\pi^2}{8}$ will do it.
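A rough numerical check of this decay rate; the constant $4/\pi$ used below is the leading Fourier coefficient of the exit probability at $x = 0$, a detail not needed above:

    import numpy as np

    # Monte Carlo tail of the exit time of (-1, 1) for BM started at 0;
    # time discretization misses brief excursions, so survival is slightly
    # overestimated
    rng = np.random.default_rng(1)
    dt, t, trials = 1e-3, 3.0, 10000
    n = int(t / dt)
    alive = sum(
        np.all(np.abs(np.cumsum(rng.normal(0, np.sqrt(dt), n))) < 1)
        for _ in range(trials)
    )
    print(alive / trials, 4 / np.pi * np.exp(-np.pi**2 * t / 8))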
4.2. We show that
\[
u(s, 0) = P[\tau < \infty \mid x(s) = 0] \to 0
\]
as $s \to \infty$. By symmetry
\[
P_{s,0}[\tau < \infty] \le 2P_{s,0}\Big[\sup_{t\ge s}\,[x(t) - t] \ge 0\Big]
\]
If $x(t)$ is Brownian motion starting from $0$ at time $s$, the process
\[
e^{x(t) - \frac{1}{2}(t-s)}
\]
is a martingale. By Doob's inequality
\[
P_{s,0}\Big[\sup_{t\ge s} e^{x(t) - \frac{1}{2}(t-s)} \ge \lambda\Big] \le \lambda^{-1}
\]
Take $\lambda = e^{s/2}$. Then, since $x(t) \ge t$ and $t \ge s$ imply $x(t) - \tfrac{1}{2}(t-s) \ge \tfrac{1}{2}(t+s) \ge \tfrac{s}{2}$,
\[
P_{s,0}\Big[\sup_{t\ge s}\,[x(t) - t] \ge 0\Big] = P_{s,0}\Big[\sup_{t\ge s} e^{x(t)-t} \ge 1\Big] \le P_{s,0}\Big[\sup_{t\ge s} e^{x(t) - \frac{1}{2}(t-s)} \ge e^{s/2}\Big] \le e^{-s/2}
\]
which is sufficient.
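For comparison, the exact probability of the last event is $e^{-2s}$ (the standard first-passage formula for Brownian motion with drift $-1$, an added detail), which a simulation sketch reproduces and which indeed sits below the bound $e^{-s/2}$:

    import numpy as np

    # with x(s) = 0, the event sup_{t>=s}[x(t) - t] >= 0 says a Brownian
    # motion with drift -1 ever climbs by s; discretization slightly
    # underestimates the supremum
    rng = np.random.default_rng(2)
    s, dt, horizon, trials = 1.5, 0.01, 30.0, 20000
    n = int(horizon / dt)
    count = sum(
        np.cumsum(rng.normal(-dt, np.sqrt(dt), n)).max() >= s
        for _ in range(trials)
    )
    print(count / trials, np.exp(-2 * s), np.exp(-s / 2))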
5.1. For smooth $f$, integration by parts gives
\[
I(f) = f(T)x(T) - x(0)f(0) - \int_0^T x(s)f'(s)\,ds = f(T)x(T) - \int_0^T x(s)f'(s)\,ds
\]
Clearly $I(f)$ is Gaussian, has mean $0$ and
\[
E\big[[I(f)]^2\big] = T[f(T)]^2 + \int_0^T\!\!\int_0^T f'(t)f'(s)\min(s,t)\,ds\,dt - 2\int_0^T f(T)f'(s)\min(T,s)\,ds
\]
This reduces to
\[
\int_0^T |f(t)|^2\,dt
\]
if we integrate by parts. Now we approximate $f \in L^2[0, T]$ by smooth $f_n$, and by linearity
\[
\lim_{m,n\to\infty} E\big[[I(f_n) - I(f_m)]^2\big] = \lim_{m,n\to\infty} E\big[[I(f_n - f_m)]^2\big] = \lim_{m,n\to\infty} \int_0^T |f_n(t) - f_m(t)|^2\,dt = 0
\]
$I(f_n)$ then has a limit in $L^2(P)$ and the limit $I(f)$ is clearly Gaussian with mean $0$ and variance $\int_0^T |f(t)|^2\,dt$.
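A sketch verifying the variance formula on a discretized path, for the arbitrary smooth choice $f(s) = s$ (so $f(T) = T$, $f' \equiv 1$, and the variance should be $\int_0^T s^2\,ds = T^3/3$):

    import numpy as np

    rng = np.random.default_rng(3)
    T, n, trials = 1.0, 1000, 20000
    dt = T / n
    grid = np.linspace(0, T, n + 1)
    vals = np.empty(trials)
    for k in range(trials):
        # discretized Brownian path, then I(f) = f(T)x(T) - int_0^T x f' ds
        x = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n))))
        vals[k] = T * x[-1] - np.trapz(x, grid)
    print(vals.mean(), vals.var(), T**3 / 3)  # mean ~0, variance ~1/3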
5.2. If $Z$ is a Gaussian random variable with mean $0$ and variance $\sigma^2$, we have
\[
E[|Z|] = c\sigma, \qquad E[|Z|^2] = \sigma^2, \qquad \mathrm{Var}(|Z|) = (1-c^2)\sigma^2
\]
Therefore
\[
E[V_n] = c\,2^n\,2^{-\frac{n}{2}} = c\,2^{\frac{n}{2}}, \qquad \mathrm{Var}(V_n) = (1-c^2)\,2^n\,2^{-n} = (1-c^2)
\]
By Chebyshev's inequality
\[
P\Big[V_n \le \frac{c}{2}\,2^{\frac{n}{2}}\Big] \le P\Big[|V_n - E(V_n)| \ge \frac{c}{2}\,2^{\frac{n}{2}}\Big] \le \frac{4(1-c^2)}{c^2\,2^n}
\]
The Borel--Cantelli lemma then shows that $V_n \to \infty$ with probability $1$. Here
\[
c = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty |z|\,e^{-\frac{z^2}{2}}\,dz = 2\,\frac{1}{\sqrt{2\pi}}\int_0^\infty z\,e^{-\frac{z^2}{2}}\,dz = \frac{2}{\sqrt{2\pi}} = \sqrt{\frac{2}{\pi}} < 1
\]
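A quick check of the growth $V_n \approx c\,2^{n/2}$; each level is sampled as fresh Brownian increments rather than by refining a single path, which matches the distribution of $V_n$:

    import numpy as np

    # V_n = sum of |increments| of BM on [0,1] over the dyadic partition of
    # mesh 2^-n; each increment is N(0, 2^-n), so E[V_n] = c 2^{n/2}
    rng = np.random.default_rng(4)
    c = np.sqrt(2 / np.pi)
    for n in range(2, 16, 3):
        incs = rng.normal(0, 2 ** (-n / 2), 2 ** n)
        print(n, np.abs(incs).sum(), c * 2 ** (n / 2))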