The conditional probabilities in the preceding follow by noting that they are equal to
the probability, in the gambler's ruin problem, that a gambler starting with 1 will
reach n before going broke when the gambler's win and loss probabilities are p and q.
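This gambler's ruin probability, (1 - q/p)/(1 - (q/p)^n) for p ≠ q and 1/n for p = q = 1/2, can be checked numerically; a minimal sketch (the function names and the value-iteration cross-check are mine):

```python
from fractions import Fraction

def reach_n_before_broke(p, n):
    """P(a gambler starting with 1 reaches n before 0), win prob p per bet."""
    q = 1 - p
    if p == q:
        return Fraction(1, n)
    r = q / p
    return (1 - r) / (1 - r**n)

def reach_by_iteration(p, n, sweeps=20000):
    """Cross-check: solve h_0 = 0, h_n = 1, h_i = p*h_{i+1} + q*h_{i-1}
    by repeated sweeps (Gauss-Seidel) and return h_1."""
    h = [0.0] * (n + 1)
    h[n] = 1.0
    for _ in range(sweeps):
        for i in range(1, n):
            h[i] = p * h[i + 1] + (1 - p) * h[i - 1]
    return h[1]

print(reach_n_before_broke(Fraction(1, 2), 10))  # 1/10
print(reach_n_before_broke(Fraction(2, 3), 3))   # 4/7
```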
(4) (a)
P_{i,i+1} = (m-i)^2/m^2,   P_{i,i-1} = i^2/m^2,   P_{i,i} = 2i(m-i)/m^2
(b) Since in the limit the set of m balls in urn 1 is equally likely to be any subset
of m balls, it is intuitively clear that
π_i = C(m,i)C(m,m-i)/C(2m,m) = C(m,i)^2/C(2m,m)
(c) We must verify that, with π_i given in (b), π_i P_{i,i+1} = π_{i+1} P_{i+1,i}.
It is easy to show that
π_i P_{i,i+1} = [C(m,i)^2/C(2m,m)] (m-i)^2/m^2 = (m!)^2 / [C(2m,m) m^2 (i!)^2 ((m-i-1)!)^2]
π_{i+1} P_{i+1,i} = [C(m,i+1)^2/C(2m,m)] (i+1)^2/m^2 = (m!)^2 / [C(2m,m) m^2 (i!)^2 ((m-i-1)!)^2]
so the two sides agree. With Σ_i π_i = 1, the Markov chain is time reversible.
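These identities can be checked mechanically in exact arithmetic for a small m; a sketch (the variable names and the choice m = 5 are mine):

```python
from fractions import Fraction
from math import comb

m = 5
states = range(m + 1)

def P(i, j):
    """Transition probabilities of the urn model from part (a)."""
    if j == i + 1:
        return Fraction((m - i) ** 2, m * m)
    if j == i - 1:
        return Fraction(i * i, m * m)
    if j == i:
        return Fraction(2 * i * (m - i), m * m)
    return Fraction(0)

# Stationary distribution from part (b)
pi = {i: Fraction(comb(m, i) ** 2, comb(2 * m, m)) for i in states}

assert sum(pi.values()) == 1
# Detailed balance from part (c): pi_i P_{i,i+1} == pi_{i+1} P_{i+1,i}
assert all(pi[i] * P(i, i + 1) == pi[i + 1] * P(i + 1, i) for i in range(m))
# Detailed balance implies stationarity: pi P == pi
assert all(sum(pi[i] * P(i, j) for i in states) == pi[j] for j in states)
print("checks pass for m =", m)
```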
(5) The rate at which transitions from i to j to k occur is π_i P_ij P_jk, whereas the
rate in the reverse order is π_k P_kj P_ji. So we must show
π_i P_ij P_jk = π_k P_kj P_ji
Now,
π_i P_ij P_jk = π_j P_ji P_jk   // time reversible Markov chain
             = π_j P_jk P_ji
             = π_k P_kj P_ji   // time reversible Markov chain
// by the result on page 60 of lecture note part I
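To illustrate the identity on a concrete time-reversible chain, one can take a random walk on a small weighted undirected graph (my own toy example), where π_i is proportional to the total edge weight at i:

```python
from fractions import Fraction

# Symmetric edge weights w[i][j] == w[j][i]; the walk moves to a neighbor
# with probability proportional to the edge weight, which makes it reversible.
w = {0: {1: 1, 2: 2}, 1: {0: 1, 2: 3}, 2: {0: 2, 1: 3}}
total = sum(sum(nb.values()) for nb in w.values())
pi = {i: Fraction(sum(w[i].values()), total) for i in w}
P = {i: {j: Fraction(w[i][j], sum(w[i].values())) for j in w[i]} for i in w}

# Reversibility: pi_i P_ij == pi_j P_ji
assert all(pi[i] * P[i][j] == pi[j] * P[j][i] for i in w for j in w[i])

# The rate i -> j -> k equals the reverse rate k -> j -> i
for i, j, k in [(0, 1, 2), (2, 1, 0), (1, 0, 2)]:
    assert pi[i] * P[i][j] * P[j][k] == pi[k] * P[k][j] * P[j][i]
print("triple-product identity holds")
```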
2. Questions in the lecture notes (Chapter 1)
(1)
There are 9 states. For every state S, define e_S = E[waiting time till HTHT or THTT |
current state is S]. In particular, e_HTHT = e_THTT = 0. We have the following linear system:
After solving it, we get e_∅ = 90/7 ≈ 12.86.
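The value e_∅ = 90/7 can be checked numerically. The sketch below uses a different, mechanical state encoding (my own, 15 states rather than the 9 above): since both patterns have length 4, remembering the last three flips is enough.

```python
from itertools import product

PATTERNS = ("HTHT", "THTT")

# States: the last min(#flips, 3) outcomes; with both patterns of length 4,
# three remembered flips determine whether the next flip completes a pattern.
states = [""] + ["".join(p) for n in (1, 2, 3) for p in product("HT", repeat=n)]

def step(s, c):
    """Successor state after flip c, or None if s+c completes a pattern."""
    if s + c in PATTERNS:            # only possible when len(s) == 3
        return None
    return (s + c)[-3:]

succ = {s: [step(s, c) for c in "HT"] for s in states}

# e[s] = expected flips remaining; absorbed states contribute 0.
# Solve e[s] = 1 + (e[succ_H] + e[succ_T]) / 2 by fixed-point iteration.
e = {s: 0.0 for s in states}
for _ in range(1000):
    e = {s: 1 + 0.5 * sum(0.0 if t is None else e[t] for t in succ[s])
         for s in states}

print(e[""])   # ≈ 12.857 (= 90/7)
```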
(2) Let i be a recurrent state and j a transient state.
Assume that j is accessible from i, i.e., P^m_ij > 0 for some m > 0.
Consider the two possible cases:
1) i is also accessible from j. Then i communicates with j, so they are in the same
class. Because all states in a class are either all recurrent or all transient,
this contradicts the assumption.
2) i is not accessible from j, i.e., P^n_ji = 0 for all n > 0.
Since P^m_ij > 0 for some m > 0, there is a positive probability that the Markov
chain enters state j from state i after m transitions, in which case the chain
will never return to state i no matter how many further transitions it takes.
Therefore state i must be transient in this case, which contradicts the
assumption.
Since both cases lead to a contradiction, the assumption cannot hold; therefore
a transient state cannot be accessible from a recurrent state.
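The conclusion can be sanity-checked on a concrete chain (a toy example of mine): below, states 0 and 1 form a recurrent class and state 2 is transient, and every power of P has zero entries from the recurrent states into state 2, while state 2 reaches the recurrent class with positive probability.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# States 0, 1: a recurrent class; state 2: transient (it leaks into {0, 1}).
P = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.25, 0.25, 0.5]]

Pn = P
for _ in range(20):
    # Entries of P^n from 0 or 1 to the transient state 2 stay exactly 0.
    assert Pn[0][2] == 0.0 and Pn[1][2] == 0.0
    assert Pn[2][0] > 0
    Pn = matmul(Pn, P)
print("recurrent -> transient entries remain 0")
```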
3. Other Questions
Denote α = P[will ever visit state 1 | X_0 = 0]. Then
α = P[will ever visit state 1 | X_0 = 0, X_1 = 1] P_01 + P[will ever visit state 1 | X_0 = 0, X_1 = -1] P_0,-1
  = 1 · P_01 + P[will ever visit state 1 | X_0 = -1] P_0,-1
  = 1/2 (1 + α^2)
// P[will ever visit state 1 | X_0 = 0] = P[will ever visit state 0 | X_0 = -1] = α
// P[will ever visit state 1 | X_0 = -1] = P[will ever visit state 0 | X_0 = -1] · P[will ever visit state 1 | X_0 = 0] = α^2
// P_01 = P_0,-1 = 1/2
Solving the equation, we get α = 1.
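Equivalently, α = (1 + α^2)/2 rearranges to (α - 1)^2 = 0, so α = 1 is the only root. A quick numerical sketch (the iteration count is arbitrary) shows the fixed-point iteration climbing toward 1:

```python
# Iterate a <- (1 + a^2)/2 starting from 0; a increases toward the unique
# fixed point a = 1 (the error shrinks roughly like 2/k after k steps).
a = 0.0
for _ in range(10**6):
    a = 0.5 * (1 + a * a)
print(a)   # close to 1
```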
Note:
P{X_t = i for some t > 0 | X_0 = i+2}
= Σ_{m=1}^∞ P{X_t = i for some t > 0 | X_0 = i+2, 1st occurrence of state i+1 at time m} · P{1st occurrence of state i+1 at time m | X_0 = i+2}
= Σ_{m=1}^∞ α · P{1st occurrence of state i+1 at time m | X_0 = i+2}
= α · Σ_{m=1}^∞ P{1st occurrence of state i+1 at time m | X_0 = i+2}
= α · α = α^2