
IERG 5300 Random Process

Suggested Solution of Assignment 1


1. Questions in Chapter 4 of the textbook (8th edition)
(1) Define 4 states in the Markov chain.
State 0: the last two trials were both successes;
State 1: the last two trials were a success followed by a failure;
State 2: the last two trials were a failure followed by a success;
State 3: the last two trials were both failures.
Then the transition matrix is

        [ .8  .2   0   0 ]
    P = [  0   0  .5  .5 ]
        [ .5  .5   0   0 ]
        [  0   0  .5  .5 ]
By solving the linear system

    (π_0, π_1, π_2, π_3) = (π_0, π_1, π_2, π_3) P,    π_0 + π_1 + π_2 + π_3 = 1,

we get (π_0, π_1, π_2, π_3) = (5/11, 2/11, 2/11, 2/11).
In the long run, the proportion of successful trials is

    .8 π_0 + .5 (π_1 + π_2 + π_3) = .8 (5/11) + .5 (6/11) = 7/11.

Remarks: In this question we are calculating the long-run proportion of successes. If we instead wanted the limiting probabilities, it would be necessary to check the ergodicity of the finite Markov chain.
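The stationary distribution and the long-run proportion above are easy to double-check numerically. A quick sketch (not part of the original solution) using NumPy:

```python
import numpy as np

# Transition matrix from the solution above
# (states 0: SS, 1: SF, 2: FS, 3: FF -- outcomes of the last two trials)
P = np.array([[0.8, 0.2, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5]])

# Solve pi = pi P together with sum(pi) = 1: stack the balance equations
# (P^T - I) pi = 0 with the normalization row 1^T pi = 1, then least-squares.
A = np.vstack([P.T - np.eye(4), np.ones((1, 4))])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

print(pi)                                # ~ (5/11, 2/11, 2/11, 2/11)
print(0.8 * pi[0] + 0.5 * pi[1:].sum())  # ~ 7/11
```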
(2) P_1: one class, {0, 1, 2}; recurrent.
P_2: one class, {0, 1, 2, 3}; recurrent.
P_3: three classes: {0, 2}, recurrent; {1}, transient; {3, 4}, recurrent.
P_4: four classes: {0, 1}, recurrent; {2}, recurrent; {3}, transient; {4}, transient.
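For a finite chain this classification can be done mechanically: two states communicate iff each is reachable from the other, and (for finite chains) a class is recurrent iff it is closed. Since the matrices P_1–P_4 are not reproduced here, the sketch below uses a made-up 5-state matrix with the same class structure as P_3; the `classify` helper is illustrative, not from the textbook.

```python
# Classify the states of a finite Markov chain given its transition matrix.
def classify(P):
    n = len(P)
    # reach[i][j]: can j be reached from i (in zero or more steps)?
    reach = [[i == j or P[i][j] > 0 for j in range(n)] for i in range(n)]
    for k in range(n):                      # Floyd-Warshall transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    seen, classes = set(), []
    for i in range(n):
        if i in seen:
            continue
        cls = {j for j in range(n) if reach[i][j] and reach[j][i]}
        seen |= cls
        # A class is closed iff no state in it can reach a state outside it.
        closed = all(not reach[j][k] for j in cls for k in range(n) if k not in cls)
        classes.append((sorted(cls), "recurrent" if closed else "transient"))
    return classes

# Illustrative matrix with classes {0,2} recurrent, {1} transient, {3,4} recurrent.
P = [[0.0, 0.0, 1.0, 0.0, 0.0],
     [0.5, 0.0, 0.0, 0.5, 0.0],
     [1.0, 0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.3, 0.7],
     [0.0, 0.0, 0.0, 0.6, 0.4]]
print(classify(P))
```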

(3) Let A be the event that all states have been visited by time T. Conditioning on the initial transition,

    P(A) = P(A | X_0 = 0, X_1 = 1) p + P(A | X_0 = 0, X_1 = -1) q

         = p (1 - q/p)/(1 - (q/p)^n) + q (1 - p/q)/(1 - (p/q)^n)   if p ≠ 1/2,
         = 1/n                                                      if p = 1/2.

The conditional probabilities in the preceding follow by noting that they are equal to the probability, in the gambler's ruin problem, that a gambler starting with 1 will reach n before going broke, when the gambler's win probability is p (first term) or q (second term).
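The gambler's-ruin probabilities used above can be cross-checked by solving the first-step equations h(k) = p·h(k+1) + q·h(k-1), h(0) = 0, h(n) = 1, directly. A sketch (the function names and the shooting method are mine, not the solution's):

```python
# Probability that a gambler starting with fortune 1 reaches n before 0,
# computed two ways: the closed form and a direct first-step-equation solve.
def ruin_closed_form(p, n):
    q = 1.0 - p
    if abs(p - 0.5) < 1e-12:
        return 1.0 / n
    r = q / p
    return (1.0 - r) / (1.0 - r ** n)

def ruin_first_step(p, n):
    q = 1.0 - p
    # Shooting method: by linearity h(k) = t*u(k), where u(0)=0, u(1)=1 and
    # u(k+1) = (u(k) - q*u(k-1)) / p; the boundary condition h(n)=1 fixes t.
    u = [0.0, 1.0]
    for k in range(1, n):
        u.append((u[k] - q * u[k - 1]) / p)
    return 1.0 / u[n]        # h(1)

for p in (0.3, 0.5, 0.7):
    print(p, ruin_closed_form(p, 10), ruin_first_step(p, 10))
```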

(4) (a)

    P_{i,i+1} = (m - i)^2 / m^2,    P_{i,i-1} = i^2 / m^2,    P_{i,i} = 2 i (m - i) / m^2.
(b) Since in the limit the set of m balls in urn 1 is equally likely to be any subset of m balls, it is intuitively clear that

    π_i = C(m, i) C(m, m - i) / C(2m, m) = C(m, i)^2 / C(2m, m).
(c) We must verify that, with π_i given in (b),

    π_i P_{i,i+1} = π_{i+1} P_{i+1,i}.

It is easy to show that

    π_i P_{i,i+1} = [C(m, i)^2 / C(2m, m)] · (m - i)^2 / m^2
                  = (m!)^2 / [(i!)^2 ((m - i - 1)!)^2 m^2 C(2m, m)],

    π_{i+1} P_{i+1,i} = [C(m, i+1)^2 / C(2m, m)] · (i + 1)^2 / m^2
                      = (m!)^2 / [(i!)^2 ((m - i - 1)!)^2 m^2 C(2m, m)],

so the two sides agree. With Σ_i π_i = 1, the Markov chain is time reversible.
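The detailed-balance identity in (c) is also easy to confirm numerically for a concrete m. A verification sketch (assumes Python 3.8+ for math.comb; not part of the original solution):

```python
from math import comb

# Check pi_i P_{i,i+1} = pi_{i+1} P_{i+1,i} for the urn model of (4),
# with pi_i = C(m,i)^2 / C(2m,m) and the transition probabilities of (a).
m = 6
pi = [comb(m, i) ** 2 / comb(2 * m, m) for i in range(m + 1)]
assert abs(sum(pi) - 1.0) < 1e-12      # Vandermonde: sum_i C(m,i)^2 = C(2m,m)

for i in range(m):
    up = ((m - i) / m) ** 2            # P_{i,i+1}
    down = ((i + 1) / m) ** 2          # P_{i+1,i}
    assert abs(pi[i] * up - pi[i + 1] * down) < 1e-12
print("detailed balance holds for m =", m)
```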

(5) The rate at which transitions i → j → k occur is π_i P_ij P_jk, whereas the rate in the reverse order is π_k P_kj P_ji. So we must show

    π_i P_ij P_jk = π_k P_kj P_ji.

Now,

    π_i P_ij P_jk = π_j P_ji P_jk    // time-reversible Markov chain
                  = π_j P_jk P_ji
                  = π_k P_kj P_ji    // time-reversible Markov chain
                                     // (by the result on page 60 of Lecture Notes Part I)
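The three-step identity can be checked on any concrete reversible chain. A random walk on a graph with symmetric edge weights w_ij is time reversible with π_i proportional to the total weight at state i; the weights below are made up for illustration (not from the assignment):

```python
# Verify pi_i P_ij P_jk = pi_k P_kj P_ji on a reversible chain:
# a random walk on a weighted graph with symmetric weight matrix w.
w = [[0, 2, 1, 0],
     [2, 0, 3, 1],
     [1, 3, 0, 2],
     [0, 1, 2, 0]]
n = len(w)
deg = [sum(row) for row in w]
total = sum(deg)
P = [[w[i][j] / deg[i] for j in range(n)] for i in range(n)]
pi = [deg[i] / total for i in range(n)]

for i in range(n):
    for j in range(n):
        for k in range(n):
            assert abs(pi[i] * P[i][j] * P[j][k]
                       - pi[k] * P[k][j] * P[j][i]) < 1e-12
print("forward and reverse i->j->k rates agree")
```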


2. Questions in the lecture notes (Chapter 1)
(1) There are 9 states: the 7 proper prefixes ∅, H, T, HT, TH, HTH, THT of the two patterns, plus the absorbing patterns HTHT and THTT themselves. For every state S, define e_S = E[waiting time till HTHT or THTT | current state is S]. In particular, e_HTHT = e_THTT = 0. We have the following linear system:

    e_∅   = 1 + (1/2) e_H   + (1/2) e_T
    e_H   = 1 + (1/2) e_H   + (1/2) e_HT
    e_T   = 1 + (1/2) e_TH  + (1/2) e_T
    e_HT  = 1 + (1/2) e_HTH + (1/2) e_T
    e_TH  = 1 + (1/2) e_H   + (1/2) e_THT
    e_HTH = 1 + (1/2) e_H
    e_THT = 1 + (1/2) e_HTH

After solving it, we get e_∅ = 90/7 ≈ 12.86.
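The value 90/7 can be reproduced exactly by building the prefix-state chain from the two patterns and solving the resulting system over the rationals. A verification sketch (the helper names are mine, not the lecture notes'):

```python
from fractions import Fraction

# Exact E[waiting time until HTHT or THTT] for a fair coin: build the
# prefix-state chain and solve e_S = 1 + (1/2) e_next(S,'H') + (1/2) e_next(S,'T').
patterns = ("HTHT", "THTT")
prefixes = sorted({p[:k] for p in patterns for k in range(len(p))}, key=len)

def step(state, c):
    s = state + c
    if any(s.endswith(p) for p in patterns):
        return None                        # absorbed: a pattern just occurred
    while not any(p.startswith(s) for p in patterns):
        s = s[1:]                          # longest suffix that is still a prefix
    return s

idx = {s: i for i, s in enumerate(prefixes)}
n = len(prefixes)
half = Fraction(1, 2)
# Augmented matrix for (I - Q) e = 1 over the rationals.
A = [[Fraction(int(i == j)) for j in range(n)] + [Fraction(1)] for i in range(n)]
for s, i in idx.items():
    for c in "HT":
        t = step(s, c)
        if t is not None:
            A[i][idx[t]] -= half

for col in range(n):                       # Gaussian elimination
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    pv = A[col][col]
    A[col] = [x / pv for x in A[col]]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]

print("e_empty =", A[idx[""]][n])          # -> 90/7
```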


(2) Let i be a recurrent state and j a transient state.
Assume that j is accessible from i, i.e. P^m_ij > 0 for some m > 0.
Consider the two possible cases:
1) i is also accessible from j. Then i communicates with j, so they are in the same class. Because the states in a class are either all recurrent or all transient, this contradicts the assumption.
2) i is not accessible from j, i.e. P^n_ji = 0 for all n > 0.
Since P^m_ij > 0 for some m > 0, there is a positive probability that the Markov chain enters state j from state i after m transitions, in which case it will never return to state i, no matter how many further transitions it takes. Therefore state i would be transient, which contradicts the assumption.
Since both cases lead to a contradiction, the assumption does not hold; therefore a transient state cannot be accessible from a recurrent state.

3. Other Questions
Denote α = P[will ever visit state 1 | X_0 = 0]. Conditioning on the first step,

    α = P[will ever visit state 1 | X_0 = 0, X_1 = 1] P_01
        + P[will ever visit state 1 | X_0 = 0, X_1 = -1] P_0,-1
      = 1 · P_01 + P[will ever visit state 1 | X_1 = -1] P_0,-1
      = (1/2)(1 + α^2)

// P_01 = P_0,-1 = 1/2
// P[will ever visit state 1 | X_0 = -1]
//   = P[will ever visit state 0 | X_0 = -1] · P[will ever visit state 1 | X_0 = 0] = α^2,
// since P[will ever visit state 1 | X_0 = 0] = P[will ever visit state 0 | X_0 = -1].

Solving the equation, i.e. (α - 1)^2 = 0, we get α = 1.

Note:

    P{X_t = i for some t > 0 | X_0 = i+2}
      = Σ_{m=1}^∞ P{X_t = i for some t > 0 | X_0 = i+2, 1st occurrence of state i+1 at time m}
                  · P{1st occurrence of state i+1 at time m | X_0 = i+2}
      = Σ_{m=1}^∞ α · P{1st occurrence of state i+1 at time m | X_0 = i+2}
      = α · Σ_{m=1}^∞ P{1st occurrence of state i+1 at time m | X_0 = i+2}
      = α · α = α^2
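The conclusion α = 1 (the symmetric walk started at 0 eventually visits state 1) can be illustrated numerically: the probability of hitting 1 within n steps, computed by dynamic programming over the walker's position, climbs toward 1 as n grows. A sketch, not part of the original solution:

```python
# P(hit state 1 within n steps) for a simple symmetric random walk from 0,
# by exact dynamic programming over the distribution of the position.
def prob_hit_one(n):
    dist = {0: 1.0}                 # position -> probability, not yet hit 1
    hit = 0.0
    for _ in range(n):
        nxt = {}
        for pos, pr in dist.items():
            for new in (pos - 1, pos + 1):
                if new == 1:
                    hit += 0.5 * pr # absorbed: state 1 reached
                else:
                    nxt[new] = nxt.get(new, 0.0) + 0.5 * pr
        dist = nxt
    return hit

for n in (10, 100, 1000):
    print(n, round(prob_hit_one(n), 4))   # increases toward 1
```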
