
Stat 150 Stochastic Processes Spring 2009

Lecture 20: Markov Chains: Examples


Lecturer: Jim Pitman
A nice formula:

$$ E_i(\text{number of hits on } j \text{ before } T_i) = \frac{\lambda_j}{\lambda_i} $$

Domain of truth:

$P$ is irreducible.

$\lambda$ is an invariant measure: $\lambda P = \lambda$, $\lambda_j \ge 0$ for all $j$, and $\sum_j \lambda_j > 0$.

Either $\sum_j \lambda_j < \infty$ or (weaker) $P_i(T_i < \infty) = 1$ (recurrent).

Positive recurrent case: if $E_i(T_i) < \infty$, then $\lambda$ can be normalized to a probability measure.

Null recurrent case: if $P_i(T_i < \infty) = 1$ but $E_i(T_i) = \infty$, then $\lambda$ cannot be normalized to a probability measure.
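As a quick sanity check on the definitions, here is a minimal sketch verifying the invariance condition $\lambda P = \lambda$ numerically; the two-state chain and the measure $\lambda$ below are hypothetical examples, not from the lecture.

```python
# Hypothetical 2-state irreducible chain (not from the lecture), used only
# to illustrate the invariant-measure condition lambda P = lambda.
P = [[0.50, 0.50],
     [0.25, 0.75]]

lam = [1.0, 2.0]  # candidate invariant measure (unnormalized)

# (lambda P)_j = sum_i lambda_i P(i, j)
lamP = [sum(lam[i] * P[i][j] for i in range(2)) for j in range(2)]
assert all(abs(a - b) < 1e-12 for a, b in zip(lamP, lam))

# Since sum_j lambda_j < infinity, normalizing gives the stationary
# probability distribution pi, here (1/3, 2/3).
pi = [x / sum(lam) for x in lam]
print(pi)
```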
The formula above is a refinement of the formula $E_i(T_i) = 1/\pi_i$ for a positive recurrent chain with invariant $\pi$ with $\sum_j \pi_j = 1$. To see this, observe that

$$ N_{ij} := \text{number of hits on } j \text{ before } T_i = \sum_{n=0}^{\infty} 1(X_n = j,\ n < T_i), $$

and, using $\sum_{j \in S} 1(X_n = j) = 1$,

$$ T_i = \sum_{n=0}^{\infty} 1(n < T_i) = \sum_{n=0}^{\infty} \Big( \sum_{j \in S} 1(X_n = j) \Big)\, 1(n < T_i) = \sum_{j \in S} \sum_{n=0}^{\infty} 1(X_n = j,\ n < T_i) = \sum_{j \in S} N_{ij}. $$
Then

$$ E_i T_i = E_i \sum_j N_{ij} = \sum_j E_i N_{ij} = \sum_j \frac{\pi_j}{\pi_i} = \frac{1}{\pi_i}, \qquad \text{with } \sum_j \pi_j = 1. $$

Strong Markov property: the chain refreshes at each visit to $i$, so the numbers of visits to $j$ in successive $i$-blocks are iid.
[Figure: a sample path of $X_n$ plotted against $n$, with the visits to $i$ marking off successive $i$-blocks and the visits to $j$ within each block highlighted.]
This gives a sequence $N^{(1)}_{ij}, N^{(2)}_{ij}, N^{(3)}_{ij}, \ldots$, where each $N^{(k)}_{ij}$ is an independent copy of $N_{ij} = N^{(1)}_{ij}$. So $E_i N_{ij}$ is the expected number of $j$'s in an $i$-block, and according to the law of large numbers,

$$ \frac{1}{m} \sum_{k=1}^{m} N^{(k)}_{ij} = \text{average number of visits to } j \text{ in the first } m \text{ } i\text{-blocks} \ \longrightarrow\ E_i N_{ij}. $$
Push this a bit further:

$$ \sum_{n=1}^{N} 1(X_n = j) = \text{number of } j\text{'s in the first } N \text{ steps}, \qquad \sum_{n=1}^{N} 1(X_n = i) = \text{number of } i\text{'s in the first } N \text{ steps}. $$

Previous discussion: in the long run, the number of $j$'s per visit to $i$ tends to $E_i N_{ij}$:

$$ \frac{\sum_{n=1}^{N} 1(X_n = j)}{\sum_{n=1}^{N} 1(X_n = i)} = \frac{\text{number of } j\text{'s in the first } N \text{ steps}}{\text{number of } i\text{'s in the first } N \text{ steps}} \ \longrightarrow\ E_i N_{ij}. $$
But also, in the positive recurrent case (for simplicity, assume the chain is aperiodic/regular),

$$ P_i(X_n = j) \to \pi_j $$

and

$$ E_i\Big( \frac{1}{N} \sum_{n=1}^{N} 1(X_n = j) \Big) = \frac{1}{N} \sum_{n=1}^{N} P^n(i, j) \ \to\ \pi_j, $$

because averages of a convergent sequence tend to the same limit.
What's happening, of course, is

$$ \frac{1}{N} \sum_{n=1}^{N} 1(X_n = j) \to \pi_j, \qquad \frac{1}{N} \sum_{n=1}^{N} 1(X_n = i) \to \pi_i, $$

so

$$ \frac{\sum_{n=1}^{N} 1(X_n = j)}{\sum_{n=1}^{N} 1(X_n = i)} \ \to\ \frac{\pi_j}{\pi_i}. $$

Hence $E_i N_{ij} = \pi_j / \pi_i$.
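The averaging step $\frac{1}{N}\sum_{n=1}^{N} P^n(i,j) \to \pi_j$ can also be seen numerically. The sketch below uses an assumed two-state chain (a hypothetical example) with $\pi = (1/3, 2/3)$ and computes the Cesàro average of the $n$-step transition probabilities by repeated matrix multiplication.

```python
# Cesaro averages of the n-step transition probabilities converge to pi_j.
# Hypothetical 2-state chain with stationary distribution pi = (1/3, 2/3).
P = [[0.50, 0.50],
     [0.25, 0.75]]

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

N = 200
Pn = P                               # current power P^n, starting at n = 1
avg = [[0.0, 0.0], [0.0, 0.0]]       # running Cesaro average
for _ in range(N):
    for r in range(2):
        for c in range(2):
            avg[r][c] += Pn[r][c] / N
    Pn = matmul(Pn, P)

print(avg[0][1])  # close to pi_1 = 2/3
```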
Example

The $p$-$q$ walk on $\{0, 1, 2, \ldots\}$ with partial reflection at $0$: fix $0 < p < 1$ and set $q = 1 - p$. The transition probabilities are

$$ P(0, 1) = p, \quad P(0, 0) = q, \qquad \text{and for } i = 1, 2, \ldots: \quad P(i, i+1) = p, \quad P(i, i-1) = q. $$

[Diagram: the transition graph on states $0, 1, 2, 3, \ldots$, with a $p$-arrow from each state to its right neighbor, a $q$-arrow from each state $i \ge 1$ to its left neighbor, and a $q$-loop at $0$.]
Analysis:

1) $p > q$: the walk is transient (it drifts off to infinity).

2) $p = q$: the walk is null recurrent (it keeps returning to $0$, but with infinite mean return time).

3) $p < q$: the walk is positive recurrent, and $\lim_n P_i(X_n = j) = \pi_j$ for $(\pi_j)$ a probability measure.

We will focus on 3) and the determination of $\pi$.
Appeal to a result from the previous class: if there is a stationary $\pi$ with $\sum_j \pi_j = 1$, then $P$ is positive recurrent and $E_i(T_i) = 1/\pi_i$.

Advice: always look for a reversible equilibrium. Say the equilibrium distribution is $(\pi_j)$.
REV EQ: $\pi_i P_{ij} = \pi_j P_{ji}$. Writing this out for each pair $i/j$ of neighboring states:

$$ 0/1: \ \pi_0 p = \pi_1 q, \qquad 1/2: \ \pi_1 p = \pi_2 q, \qquad 2/3: \ \pi_2 p = \pi_3 q, \qquad \ldots $$

$$ \Rightarrow \quad \pi_1 = \pi_0 (p/q), \quad \pi_2 = \pi_1 (p/q) = \pi_0 (p/q)^2, \quad \ldots, \quad \pi_n = \pi_0 (p/q)^n. $$
So there is a unique solution to REV EQ with $\pi_0 = 1$, namely $\pi_n = (p/q)^n$. If $p \ge q$ then $\sum_n \pi_n = \infty$, which shows there is no invariant probability distribution for this chain. If $p < q$ then $\sum_n \pi_n = 1/(1 - p/q) < \infty$, which gives the unique invariant probability distribution

$$ \pi_n = \frac{(p/q)^n}{\sum_{n=0}^{\infty} (p/q)^n} = (p/q)^n (1 - p/q), \qquad \text{i.e. Geometric}(1 - p/q) \text{ on } \{0, 1, 2, \ldots\}. $$

From previous theory, the existence of this invariant probability distribution proves that the chain is positive recurrent for $p < q$.
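The geometric answer is easy to check numerically against the balance equations $\pi P = \pi$ of the reflected walk; $p = 0.3$ below is an arbitrary choice with $p < q$.

```python
# Check pi_n = (p/q)^n (1 - p/q) against the balance equations pi P = pi
# for the p-q walk with partial reflection at 0.
p = 0.3          # arbitrary choice with p < q
q = 1 - p
r = p / q

def pi(n):
    return (1 - r) * r ** n

# State 0 receives mass via P(0, 0) = q and P(1, 0) = q:
assert abs(pi(0) * q + pi(1) * q - pi(0)) < 1e-12
# State j >= 1 receives mass via P(j-1, j) = p and P(j+1, j) = q:
for j in range(1, 50):
    assert abs(pi(j - 1) * p + pi(j + 1) * q - pi(j)) < 1e-12

print(sum(pi(n) for n in range(200)))  # total mass, essentially 1
```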
Example: Compute $E_1 T_0$ by two methods and compare.

Method 1: First step analysis. Let $m_{ij} = E_i T_j$. From $0$,

$$ m_{00} = 1 + p\, m_{10} \quad \Rightarrow \quad m_{10} = \frac{m_{00} - 1}{p} = \frac{\frac{1}{1 - p/q} - 1}{p} = \frac{1}{q - p}, $$

using $m_{00} = E_0 T_0 = 1/\pi_0 = 1/(1 - p/q)$.
Method 2: We can use Wald's identity for the coin-tossing walk, without concern about the boundary condition at $0$, to argue that $m_{10} = \frac{1}{q - p}$.
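Both methods are easy to confirm by simulation; $p = 0.3$ below is an arbitrary choice with $p < q$, for which the theory gives $m_{10} = 1/(q - p) = 2.5$.

```python
import random

random.seed(1)

p = 0.3          # arbitrary choice with p < q
q = 1 - p

def time_to_hit_0_from_1():
    # Run the walk from state 1 until it first hits 0; strictly above 0
    # the reflection never comes into play, matching the Wald argument.
    x, t = 1, 0
    while x != 0:
        x += 1 if random.random() < p else -1
        t += 1
    return t

m = 20000
est = sum(time_to_hit_0_from_1() for _ in range(m)) / m
print(est)  # theory: m_10 = 1/(q - p) = 2.5
```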
Example: For any particular probability distribution $p_1, p_2, \ldots$ on $\{1, 2, \ldots\}$, there is a Markov chain and a state $0$ whose return time $T_0$ has $P_0(T_0 = n) = p_n$.
[Figure: a sample path of the chain against $n$, a sawtooth that jumps up from $0$ and then counts down by $1$ each step until it returns to $0$.]
Start with $T_0^{(1)}, T_0^{(2)}, T_0^{(3)}, \ldots$ iid with probability distribution $(p_1, p_2, \ldots)$. Define the chain by

$$ P(0, j) = p_{j+1} \ \text{ for } j = 0, 1, 2, \ldots, \qquad P(i, j) = 1(j = i - 1) \ \text{ for } i > 0, $$

so $P(1, j) = 1(j = 0)$, $P(2, j) = 1(j = 1)$, and so on.
Describe the stationary distribution: solve $\lambda P = \lambda$. The chain is not reversible, so instead use

$$ \frac{\lambda_j}{\lambda_0} = E_0(\text{number of visits to } j \text{ before } T_0). $$

Notice the chain's dynamics: the number of hits on a state $j \ge 1$ before $T_0$ is either $1$ or $0$. It is $1$ if and only if the initial jump from $0$ is to $j$ or greater, and the probability of that is $p_{j+1} + p_{j+2} + \cdots$. Hence

$$ \frac{\lambda_j}{\lambda_0} = p_{j+1} + p_{j+2} + \cdots = P_0(T_0 > j). $$
Check that $1/\pi_0 = E_0(T_0)$:

$$ \frac{1}{\pi_0} = \sum_j \frac{\lambda_j}{\lambda_0} = \sum_{j=0}^{\infty} P_0(T_0 > j) = E_0(T_0). $$
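The construction is concrete enough to check directly. The sketch below uses an arbitrary example distribution $(p_1, \ldots, p_4)$ on $\{1, 2, 3, 4\}$ and verifies the tail-sum identity $\sum_{j \ge 0} P_0(T_0 > j) = E_0(T_0)$ that underlies the check above.

```python
# Chain realizing a prescribed return-time distribution: from 0, jump to j
# with probability p_{j+1} (so T_0 = j + 1), then count down to 0.
# The distribution below on {1, 2, 3, 4} is an arbitrary example.
p = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}

# lambda_j / lambda_0 = P_0(T_0 > j) = p_{j+1} + p_{j+2} + ...
tail = [sum(p[n] for n in p if n > j) for j in range(4)]
assert abs(tail[0] - 1.0) < 1e-12   # P_0(T_0 > 0) = 1

# Check 1/pi_0 = sum_j lambda_j/lambda_0 = sum_j P_0(T_0 > j) = E_0(T_0).
mean_T0 = sum(n * p[n] for n in p)
assert abs(sum(tail) - mean_T0) < 1e-12
print(mean_T0)  # E_0(T_0) = 2.0 for this example
```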