
MA3160a6 Markov Chain 1112B

1. Three white and three black balls are distributed in two urns in such a way that each contains three
balls. We say that the system is in state i, i = 0, 1, 2, 3, if the first urn contains i white balls. At each
step, we draw one ball from each urn and place the ball drawn from the first urn into the second,
and conversely with the ball from the second urn. Let X_n denote the state of the system after the nth
step. Explain why {X_n : n = 0, 1, ...} is a Markov chain and calculate its transition probability matrix.
Solution:
Each random variable X_n takes values in the same set {0, 1, 2, 3}. Given X_n = i, the composition of both urns is completely determined (the first urn holds i white and 3 − i black balls, the second holds 3 − i white and i black), so the outcome of the next draw depends on the history only through the present state:

P_ij = P(X_{n+1} = j | X_n = i, X_{n−1} = i_{n−1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i).

The state decreases by 1 if a white ball leaves urn 1 and a black ball enters, and increases by 1 in the opposite case, so P_{i,i−1} = (i/3)^2, P_{i,i+1} = ((3 − i)/3)^2 and P_{i,i} = 2i(3 − i)/9. Hence

         0     1     2     3
  0 [    0     1     0     0  ]
  1 [   1/9   4/9   4/9    0  ]
P =
  2 [    0    4/9   4/9   1/9 ]
  3 [    0     0     1     0  ]
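As a sanity check, the matrix can be rebuilt directly from the swap dynamics. A minimal Python sketch (assuming numpy is available):

    import numpy as np

    # State i = number of white balls in urn 1, so urn 1 holds i white and
    # 3 - i black balls, and urn 2 holds 3 - i white and i black balls.
    P = np.zeros((4, 4))
    for i in range(4):
        pw1 = i / 3          # P(white drawn from urn 1)
        pw2 = (3 - i) / 3    # P(white drawn from urn 2)
        if i > 0:
            P[i, i - 1] = pw1 * (1 - pw2)        # white leaves, black enters
        if i < 3:
            P[i, i + 1] = (1 - pw1) * pw2        # black leaves, white enters
        P[i, i] = pw1 * pw2 + (1 - pw1) * (1 - pw2)  # same colours exchanged

    print(P)   # rows (0,1,0,0), (1/9,4/9,4/9,0), (0,4/9,4/9,1/9), (0,0,1,0)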
2. Suppose that whether or not it rains today depends on previous weather conditions through the last
three days.
(a) Show how this system may be analysed by using a Markov chain. How many states are
needed?
(b) Suppose that if it has rained for the past three days, then it will rain today with probability 0.8;
if it did not rain for any of the past three days, then it will rain today with probability 0.2; and
in any other case the weather today will, with probability 0.6, be the same as the weather
yesterday. Determine the transition matrix P for this Markov chain.
Solution:
(a)
If we let the state at time n be determined only by whether or not it is raining at time n, the above model is
not a Markov chain, because the conditional probability of rain tomorrow depends on the weather during the
three preceding days, not merely on today's weather. However, we can transform the model into a Markov
chain by letting the state at any time record the weather conditions during both that day and the previous
two days. Writing R for rain and N for no rain, and encoding a state as (today, yesterday, day before
yesterday), the states are:

state 0 = (R, R, R), state 1 = (N, R, R), state 2 = (R, N, R), state 3 = (R, R, N),
state 4 = (R, N, N), state 5 = (N, R, N), state 6 = (N, N, R), state 7 = (N, N, N).

The preceding then represents an eight-state Markov chain.
(b)
From state (t, y, d) the chain moves to (t', t, y), where tomorrow's weather t' is rain with probability 0.8
from state 0, with probability 0.2 from state 7, with probability 0.6 if it rained today (t = R), and with
probability 0.4 otherwise. Hence

          0    1    2    3    4    5    6    7
  0 [   0.8  0.2   0    0    0    0    0    0  ]
  1 [    0    0   0.4   0    0    0   0.6   0  ]
  2 [    0    0    0   0.6   0   0.4   0    0  ]
P = 3 [ 0.6  0.4   0    0    0    0    0    0  ]
  4 [    0    0    0   0.6   0   0.4   0    0  ]
  5 [    0    0   0.4   0    0    0   0.6   0  ]
  6 [    0    0    0    0   0.4   0    0   0.6 ]
  7 [    0    0    0    0   0.2   0    0   0.8 ]
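This matrix can be generated mechanically from the stated rule; a short Python sketch (assuming numpy; states are encoded as (today, yesterday, day-before) triples with 1 = rain):

    import numpy as np

    order = [(1,1,1), (0,1,1), (1,0,1), (1,1,0),
             (1,0,0), (0,1,0), (0,0,1), (0,0,0)]   # states 0..7 as above
    idx = {s: k for k, s in enumerate(order)}

    P = np.zeros((8, 8))
    for (t, y, d), i in idx.items():
        if (t, y, d) == (1, 1, 1):
            p_rain = 0.8                     # rained three days running
        elif (t, y, d) == (0, 0, 0):
            p_rain = 0.2                     # dry three days running
        else:
            p_rain = 0.6 if t == 1 else 0.4  # repeats today's weather w.p. 0.6
        P[i, idx[(1, t, y)]] = p_rain        # tomorrow it rains
        P[i, idx[(0, t, y)]] = 1 - p_rain    # tomorrow it stays dry

    assert np.allclose(P.sum(axis=1), 1)
    print(P)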
3. Consider a process {X_n, n = 0, 1, ...} which takes on the values 0, 1, or 2. Suppose

P(X_{n+1} = j | X_n = i, X_{n−1} = i_{n−1}, ..., X_0 = i_0) = P^I_ij when n is even, and = P^II_ij when n is odd,

where Σ_{j=0}^{2} P^I_ij = Σ_{j=0}^{2} P^II_ij = 1, i = 0, 1, 2. Is {X_n, n = 0, 1, ...} a Markov chain? If not, then show how, by
enlarging the state space, we may transform it into a Markov chain.


Solution:
The one-step transition matrix P(n) depends on n: P(n) = (P^I_ij) if n is even and P(n) = (P^II_ij) if n is odd. Thus the
chain is not homogeneous, so {X_n, n = 0, 1, ...} is not a Markov chain in the time-homogeneous sense.

Let the state space be S = {0, 1, 2, 0̄, 1̄, 2̄}, where state i (respectively ī) signifies that the present value is i and the
present time is even (respectively odd). The homogeneous transition matrix is as follows:

           0        1        2       0̄        1̄        2̄
  0  [     0        0        0     P^I_00   P^I_01   P^I_02  ]
  1  [     0        0        0     P^I_10   P^I_11   P^I_12  ]
  2  [     0        0        0     P^I_20   P^I_21   P^I_22  ]
P =
  0̄  [  P^II_00  P^II_01  P^II_02     0        0        0   ]
  1̄  [  P^II_10  P^II_11  P^II_12     0        0        0   ]
  2̄  [  P^II_20  P^II_21  P^II_22     0        0        0   ]

i.e. in block form P = [ 0  P^I ; P^II  0 ].
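In numpy the enlargement is a block construction; in the sketch below the two 3 × 3 matrices are placeholder stochastic matrices chosen only for illustration:

    import numpy as np

    P_I = np.array([[0.5, 0.3, 0.2],     # used at even times (placeholder values)
                    [0.1, 0.6, 0.3],
                    [0.4, 0.4, 0.2]])
    P_II = np.array([[0.2, 0.5, 0.3],    # used at odd times (placeholder values)
                     [0.3, 0.3, 0.4],
                     [0.6, 0.2, 0.2]])

    Z = np.zeros((3, 3))
    P = np.block([[Z, P_I],              # even states 0,1,2 feed the odd copies
                  [P_II, Z]])            # odd states 0bar,1bar,2bar feed back
    assert np.allclose(P.sum(axis=1), 1)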
4. Let X_n denote the number of traffic accidents over a period of n weeks in a particular area and let
Y_i be the corresponding number of traffic accidents in the i-th week, so that X_n = Σ_{i=1}^{n} Y_i, where the Y_i
are assumed to be independent and identically distributed as a random variable Y with
q_k = P(Y = k), k = 0, 1, .... Show that {X_n, n ≥ 1} is a Markov chain and find the transition
probabilities.
Solution:
{X_1, X_2, ...} is a Markov chain with state space {0, 1, ...}: given the whole history, X_{n+1} = X_n + Y_{n+1}
depends on the past only through X_n, since Y_{n+1} is independent of X_1, ..., X_n. The transition probabilities are

P_ij = q_k  if j = i + k, k = 0, 1, ...
     = 0    otherwise,

i.e. P_{i,i+k} = q_k.
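The matrix therefore has the constant-diagonal (upper-triangular Toeplitz) structure P_{i,i+k} = q_k; a tiny Python sketch, with a placeholder distribution for Y and a truncated state space (both assumptions, for illustration only):

    import numpy as np

    q = [0.5, 0.3, 0.2]   # placeholder distribution q_k = P(Y = k)
    N = 6                 # truncation of the infinite state space {0, 1, ...}

    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for k, qk in enumerate(q):
            if i + k <= N:
                P[i, i + k] = qk    # P_{i,i+k} = q_k, zero below the diagonal
    print(P)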

5. Find the n-step transition matrix P^(n) for the Markov chain below. (Transition diagram omitted; the
transition matrix P is written out at the start of the solution.)

Solution:
          0    1    2
  0 [   1/2  1/2   0  ]
P = 1 [ 1/2  1/2   0  ]
  2 [   1/4  1/4  1/2 ]

det(λI − P) = det [ λ − 1/2    −1/2      0
                     −1/2    λ − 1/2     0
                     −1/4     −1/4    λ − 1/2 ]
            = (λ − 1/2)[(λ − 1/2)^2 − 1/4] = λ(λ − 1)(λ − 1/2).

For λ = 0 we have an eigenvector (1, −1, 0)^T. For λ = 1 we have an eigenvector (1, 1, 1)^T.
For λ = 1/2 we have an eigenvector (0, 0, 1)^T.

Let

A = [  1   1   0           D = diag(0, 1, 1/2).
      −1   1   0
       0   1   1 ],

Then PA = AD, so P = A D A^{−1}, where

A^{−1} = (1/2) [  1  −1   0
                  1   1   0
                 −1  −1   2 ].

Therefore

P^(n) = P^n = A D^n A^{−1} = A diag(0, 1, (1/2)^n) A^{−1}

      = [      1/2               1/2            0
               1/2               1/2            0
         1/2 − (1/2)^{n+1}  1/2 − (1/2)^{n+1}  (1/2)^n ].
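A quick numerical cross-check of this closed form (a Python sketch assuming numpy):

    import numpy as np

    P = np.array([[0.5, 0.5, 0.0],
                  [0.5, 0.5, 0.0],
                  [0.25, 0.25, 0.5]])

    def P_closed(n):
        # Closed form for P^n obtained from the diagonalization above.
        a = 0.5 - 0.5 ** (n + 1)
        return np.array([[0.5, 0.5, 0.0],
                         [0.5, 0.5, 0.0],
                         [a, a, 0.5 ** n]])

    for n in (1, 2, 5, 10):
        assert np.allclose(np.linalg.matrix_power(P, n), P_closed(n))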
6. Identify the communicating classes of the following transition matrix:
       1    2    3    4    5
  1 [ 1/2   0    0    0   1/2 ]
  2 [  0   1/2   0   1/2   0  ]
  3 [  0    0    1    0    0  ]
  4 [  0   1/4  1/4  1/4  1/4 ]
  5 [ 1/2   0    0    0   1/2 ]
Which classes are closed?
Solution:
C_1 = {1, 5}, C_2 = {2, 4}, C_3 = {3}. Both C_1 = {1, 5} and C_3 = {3} are closed; C_2 is not closed, since P_43 = 1/4 > 0 leads out of the class.
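Communicating classes are the strongly connected components of the transition digraph, so they can also be found programmatically; a sketch assuming scipy is available:

    import numpy as np
    from scipy.sparse.csgraph import connected_components

    P = np.array([[1/2, 0,   0,   0,   1/2],
                  [0,   1/2, 0,   1/2, 0],
                  [0,   0,   1,   0,   0],
                  [0,   1/4, 1/4, 1/4, 1/4],
                  [1/2, 0,   0,   0,   1/2]])

    # Strong connectivity of the digraph with an edge i -> j whenever P[i,j] > 0.
    n_classes, labels = connected_components(P > 0, directed=True,
                                             connection='strong')
    print(n_classes, labels)   # 3 classes; equal labels mean states communicate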
7. Classify the states of the discrete-time Markov chains with state space S = {1, 2, 3, 4} and transition
matrices

      [ 1/3  2/3   0    0  ]         [  0   1/2  1/2   0  ]
(a)   [ 1/2  1/2   0    0  ]    (b)  [ 1/3   0    0   2/3 ]
      [ 1/4   0   1/4  1/2 ]         [  1    0    0    0  ]
      [  0    0    0    1  ]         [  0    0    1    0  ]

In case (a), calculate f_34^(n), and deduce that the probability of ultimate absorption in state 4, starting
from 3, equals 2/3. Find the expected number of transitions required to enter state j when starting
from state j, j = 1, 2, 3, 4, in case (b).
Solution:
(a)
State 4 is absorbing. State 3 leads to 4, but 4 does not lead back to 3, so state 3 is transient. {1, 2} is finite,
closed and aperiodic, and hence states 1 and 2 are positive recurrent.

Starting from 3, the chain first enters 4 at step n exactly when it stays at 3 for n − 1 steps and then jumps to 4:

f_34^(n) = (1/4)^{n−1} (1/2),   f_34 = Σ_{n=1}^∞ (1/4)^{n−1} (1/2) = (1/2)/(1 − 1/4) = 2/3.

(b)
The chain is irreducible with period 2. All states are positive recurrent. Solving
(π_1, π_2, π_3, π_4) P = (π_1, π_2, π_3, π_4) together with π_1 + π_2 + π_3 + π_4 = 1 gives

π_1 = 3/8, π_2 = 3/16, π_3 = 5/16, π_4 = 1/8.

The expected return times are E(T_jj) = 1/π_j:

E(T_11) = 8/3, E(T_22) = 16/3, E(T_33) = 16/5, E(T_44) = 8.
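A numerical check of case (b): the stationary vector is the left eigenvector of P for eigenvalue 1, and the mean return times are its reciprocals (sketch assuming numpy):

    import numpy as np

    P = np.array([[0,   1/2, 1/2, 0],
                  [1/3, 0,   0,   2/3],
                  [1,   0,   0,   0],
                  [0,   0,   1,   0]])

    w, v = np.linalg.eig(P.T)                    # left eigenvectors of P
    pi = np.real(v[:, np.argmax(np.real(w))])    # eigenvector for eigenvalue 1
    pi /= pi.sum()
    print(pi)       # [3/8, 3/16, 5/16, 1/8]
    print(1 / pi)   # mean return times [8/3, 16/3, 16/5, 8]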
8. For a Markov chain {X_n, n ≥ 0} with state space S = {1, 2, 3}, let X_0 = 3 and T = min{n : n ≥ 1, X_n = 1}.

(a) Let the transition matrix be

        [  1    0    0  ]
    P = [ 1/3   0   2/3 ],
        [  0   1/3  2/3 ]

find E(X_2), E(X_2 | X_1), E(X_3 | X_2), and π_2i = P(X_2 = i), i ∈ S.

(b) Let the transition matrix be

        [ 1/3  2/3   0  ]
    P = [ 1/3   0   2/3 ],
        [  0   1/3  2/3 ]

find P(T = k | X_0 = 3), 1 ≤ k ≤ 3, and E[min{T, 4} | X_0 = 3].

(c) Let the transition matrix be

        [  0   1/3  2/3 ]
    P = [ 1/3   0   2/3 ],
        [ 1/3  2/3   0  ]

find the distribution of T_11 and E(T_11), where T_11 = min{n : n ≥ 1, X_n = 1} given X_0 = 1.
Solution:
(a)
It is given that X_0 = 3, i.e. P(X_0 = 3) = 1.

      [  1    0    0  ] [  1    0    0  ]   [  1    0    0  ]
P^2 = [ 1/3   0   2/3 ] [ 1/3   0   2/3 ] = [ 1/3  2/9  4/9 ].
      [  0   1/3  2/3 ] [  0   1/3  2/3 ]   [ 1/9  2/9  6/9 ]

The distribution of X_2 is (0, 0, 1) P^2 = (1/9, 2/9, 6/9). Therefore

π_21 = P(X_2 = 1) = 1/9, π_22 = P(X_2 = 2) = 2/9, π_23 = P(X_2 = 3) = 6/9,

E(X_2) = 1·P(X_2 = 1) + 2·P(X_2 = 2) + 3·P(X_2 = 3) = 1·(1/9) + 2·(2/9) + 3·(6/9) = (1 + 4 + 18)/9 = 23/9.

The distribution of X_1 is (0, 0, 1) P = (0, 1/3, 2/3).

Observe that E(X_2 | X_1) is a function of X_1:
For X_1 = 1, E(X_2 | X_1 = 1) = 1, since state 1 is absorbing.
For X_1 = 2, E(X_2 | X_1 = 2) = 1·P(X_2 = 1 | X_1 = 2) + 2·P(X_2 = 2 | X_1 = 2) + 3·P(X_2 = 3 | X_1 = 2) = 1·(1/3) + 3·(2/3) = 7/3.
For X_1 = 3, E(X_2 | X_1 = 3) = 2·(1/3) + 3·(2/3) = 8/3.

So E(X_2 | X_1) is the random variable taking the values 1, 7/3, 8/3 with probabilities 0, 1/3, 2/3 (the
distribution of X_1).

Similarly, E(X_3 | X_2) is the same function of X_2: it takes the values 1, 7/3, 8/3 with probabilities
1/9, 2/9, 6/9 (the distribution of X_2).

(b)
Here

    [ 1/3  2/3   0  ]
P = [ 1/3   0   2/3 ],  T = min{n : n ≥ 1, X_n = 1}.
    [  0   1/3  2/3 ]

P(T = 1 | X_0 = 3) = f_31^(1) = P(X_1 = 1 | X_0 = 3) = P_31 = 0.

P(T = 2 | X_0 = 3) = f_31^(2) = P(X_2 = 1, X_1 ≠ 1 | X_0 = 3) = P(X_2 = 1, X_1 = 2 or 3 | X_0 = 3)
= P(X_1 = 2 | X_0 = 3) P(X_2 = 1 | X_1 = 2) + P(X_1 = 3 | X_0 = 3) P(X_2 = 1 | X_1 = 3)
= P_32 P_21 + P_33 P_31 = (1/3)(1/3) + (2/3)(0) = 1/9.

P(T = 3 | X_0 = 3) = f_31^(3) = P(X_3 = 1, X_2 ≠ 1, X_1 ≠ 1 | X_0 = 3) = P_32 f_21^(2) + P_33 f_31^(2)
= P_32 (P_22 P_21 + P_23 P_31) + P_33 (P_32 P_21 + P_33 P_31) = P_33 P_32 P_21   (since P_22 = P_31 = 0)
= (2/3)(1/3)(1/3) = 2/27.

Observe that 1 ≤ min{T, 4} ≤ 4: min{T, 4} = T when T = 1, 2 or 3, and min{T, 4} = 4 when T = 4 or 5 or 6 or .... Hence

E[min{T, 4} | X_0 = 3]
= 1·P(T = 1 | X_0 = 3) + 2·P(T = 2 | X_0 = 3) + 3·P(T = 3 | X_0 = 3)
  + 4·[1 − P(T = 1 | X_0 = 3) − P(T = 2 | X_0 = 3) − P(T = 3 | X_0 = 3)]
= 1·0 + 2·(1/9) + 3·(2/27) + 4·(1 − 0 − 1/9 − 2/27)
= (6 + 6)/27 + 4·(22/27) = (12 + 88)/27 = 100/27.

(c)
Here

    [  0   1/3  2/3 ]
P = [ 1/3   0   2/3 ].
    [ 1/3  2/3   0  ]

(Digraph of the Markov chain omitted.)

A return to state 1 in an even number of steps shuttles between states 2 and 3 before coming back. Observe that

P(T_11 = 1 | X_0 = 1) = f_11^(1) = P_11 = 0,
P(T_11 = 2 | X_0 = 1) = f_11^(2) = P_12 P_21 + P_13 P_31,
P(T_11 = 4 | X_0 = 1) = f_11^(4) = P_12 P_23 P_32 P_21 + P_13 P_32 P_23 P_31 = (P_12 P_21 + P_13 P_31)(P_23 P_32),
P(T_11 = 6 | X_0 = 1) = f_11^(6) = (P_12 P_21 + P_13 P_31)(P_23 P_32)^2,
...
P(T_11 = 2n | X_0 = 1) = f_11^(2n) = (P_12 P_21 + P_13 P_31)(P_23 P_32)^{n−1} = (1/9 + 2/9)(4/9)^{n−1} = (1/3)(4/9)^{n−1}, n ≥ 1.

Likewise, an odd-length return ends with 2 → 3 → 1 or 3 → 2 → 1:

P(T_11 = 3 | X_0 = 1) = f_11^(3) = P_12 P_23 P_31 + P_13 P_32 P_21,
P(T_11 = 5 | X_0 = 1) = f_11^(5) = (P_12 P_23 P_31 + P_13 P_32 P_21)(P_23 P_32),
...
P(T_11 = 2n + 1 | X_0 = 1) = f_11^(2n+1) = (P_12 P_23 P_31 + P_13 P_32 P_21)(P_23 P_32)^{n−1}
= (2/27 + 4/27)(4/9)^{n−1} = (2/9)(4/9)^{n−1} = (1/2)(4/9)^n, n ≥ 1.

Observe that

f_11 = Σ_{n=1}^∞ f_11^(n) = Σ_{n=1}^∞ (1/3)(4/9)^{n−1} + Σ_{n=1}^∞ (1/2)(4/9)^n
     = (1/3)/(1 − 4/9) + (1/2)·(4/9)/(1 − 4/9) = 3/5 + 2/5 = 1,

so T_11 is finite with probability 1. Then

E(T_11) = Σ_{n=1}^∞ 2n f_11^(2n) + Σ_{n=1}^∞ (2n + 1) f_11^(2n+1) = S_1 + S_2,

where, using Σ_{n≥1} n x^{n−1} = 1/(1 − x)^2 and Σ_{n≥1} x^n = x/(1 − x) with x = 4/9,

S_1 = Σ_{n=1}^∞ 2n (1/3)(4/9)^{n−1} = (2/3) · 1/(1 − 4/9)^2 = (2/3)(81/25) = 54/25,

S_2 = Σ_{n=1}^∞ (2n + 1)(1/2)(4/9)^n = Σ_{n=1}^∞ n (4/9)^n + (1/2) Σ_{n=1}^∞ (4/9)^n
    = (4/9)(81/25) + (1/2)(4/5) = 36/25 + 10/25 = 46/25.

Therefore

E(T_11) = S_1 + S_2 = 54/25 + 46/25 = 100/25 = 4.
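A Monte Carlo cross-check of E(T_11) = 4 (a Python sketch assuming numpy):

    import numpy as np

    P = np.array([[0,   1/3, 2/3],
                  [1/3, 0,   2/3],
                  [1/3, 2/3, 0]])
    rng = np.random.default_rng(0)

    def return_time(start=0):
        # Number of steps until the chain, started at `start`, first revisits it.
        state, n = start, 0
        while True:
            state = rng.choice(3, p=P[state])
            n += 1
            if state == start:
                return n

    print(np.mean([return_time() for _ in range(200_000)]))   # approx. 4.0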
9. Consider the following random walk on the states 0, 1, ..., r:
P_{i,i+1} = p with 0 < p < 1,
P_{i,i−1} = q = 1 − p, for i = 1, 2, ..., r − 1,
P_{0,0} = P_{r,r} = 1.
Find d(k) = E(time to absorption into 0 or r | initial state is k).
Solution:
Let T_{i,0 or r} be the first passage time from state i to state 0 or r, and let M_i = E(T_{i,0 or r}), 0 ≤ i ≤ r.
Let Y = +1 if the particle moves to the right by one step and Y = −1 if it moves to the left, so that
P(Y = 1) = p and P(Y = −1) = q = 1 − p whenever the system is in a state 1 ≤ i ≤ r − 1. Conditioning on the
first step,

M_i = E[E(T_{i,0 or r} | Y)] = p[E(T_{i+1,0 or r}) + 1] + q[E(T_{i−1,0 or r}) + 1] = 1 + pM_{i+1} + qM_{i−1}.

So we have M_i = 1 + pM_{i+1} + qM_{i−1}, 1 ≤ i ≤ r − 1, with the boundary conditions M_0 = M_r = 0.

Suppose p ≠ q. Rearranging,

pM_{i+1} − pM_i = qM_i − qM_{i−1} − 1  ⟹  M_{i+1} − M_i = (q/p)(M_i − M_{i−1}) − 1/p.

Iterating this relation, and using M_0 = 0 together with (1/p)·(1 − (q/p)^{i−1})/(1 − q/p) = (1 − (q/p)^{i−1})/(p − q),

M_i − M_{i−1} = (q/p)^{i−1} M_1 − (1/p)[(q/p)^{i−2} + ... + (q/p) + 1]
             = (q/p)^{i−1} M_1 − (1 − (q/p)^{i−1})/(p − q).

Write S_i = 1 + (q/p) + ... + (q/p)^{i−1} = (1 − (q/p)^i)/(1 − q/p). Summing the differences from k = 1 to i,

M_i = Σ_{k=1}^{i} (M_k − M_{k−1}) = M_1 S_i − (i − S_i)/(p − q).

Setting i = r and using M_r = 0,

0 = M_1 S_r − (r − S_r)/(p − q)  ⟹  M_1 = r/((p − q) S_r) − 1/(p − q).

Substituting back,

M_i = r S_i /((p − q) S_r) − i/(p − q) = (1/(q − p)) [ i − r · (1 − (q/p)^i)/(1 − (q/p)^r) ],

since S_i/S_r = (1 − (q/p)^i)/(1 − (q/p)^r). Thus, for p ≠ 1/2,

d(k) = M_k = k/(q − p) − (r/(q − p)) · (1 − (q/p)^k)/(1 − (q/p)^r).

Suppose p = q = 1/2. Then M_{i+1} − M_i = (M_i − M_{i−1}) − 2, so

M_i − M_{i−1} = (M_1 − M_0) − 2(i − 1) = M_1 − 2(i − 1),

and summing, M_i = Σ_{k=1}^{i} [M_1 − 2(k − 1)] = iM_1 − i(i − 1).

Then M_r = 0 gives rM_1 − r(r − 1) = 0, so M_1 = r − 1, and thus

M_i = iM_1 − i(i − 1) = i(r − 1) − i(i − 1) = ir − i^2 = i(r − i).

So d(k) = k(r − k) when p = 1/2.
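The recurrence can also be solved numerically as a tridiagonal linear system, giving a direct check on the closed form (a Python sketch assuming numpy):

    import numpy as np

    def absorption_times(r, p):
        # Solve M_i = 1 + p*M_{i+1} + q*M_{i-1}, M_0 = M_r = 0,
        # as the linear system A m = 1 over the interior states 1..r-1.
        q = 1 - p
        A = np.eye(r - 1)
        for i in range(1, r):
            if i + 1 <= r - 1:
                A[i - 1, i] -= p        # coefficient of M_{i+1}
            if i - 1 >= 1:
                A[i - 1, i - 2] -= q    # coefficient of M_{i-1}
        m = np.linalg.solve(A, np.ones(r - 1))
        return np.concatenate(([0.0], m, [0.0]))

    def closed_form(r, p, i):
        if p == 0.5:
            return i * (r - i)
        q = 1 - p
        s = q / p
        return (i - r * (1 - s**i) / (1 - s**r)) / (q - p)

    r, p = 10, 0.6
    M = absorption_times(r, p)
    assert all(np.isclose(M[i], closed_form(r, p, i)) for i in range(r + 1))
    print(M)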
10. For each possible starting state i ∈ {0, 1, 2, 3, 4}, find the limiting state probabilities for the following
Markov chain. (Transition diagram omitted.)

Solution:
We could solve this problem by forming the 5 × 5 state transition matrix P and evaluating P^n, but a simpler
approach is to recognize the communicating classes C_1 = {0, 1} and C_2 = {3, 4}. Starting in state i ∈ C_1, the
system operates like a two-state chain with transition probabilities p = 3/4 and q = 1/4. Let
π_j^(1) = lim_{n→∞} P(X_n = j | X_0 ∈ C_1) denote the limiting state probabilities. The limiting state probabilities for this
embedded two-state chain are

(π_0^(1), π_1^(1)) = (q/(q + p), p/(q + p)) = (1/4, 3/4).

Since this two-state chain sits within the five-state chain, the complete set of limiting state probabilities is

(π_0^(1), π_1^(1), π_2^(1), π_3^(1), π_4^(1)) = (1/4, 3/4, 0, 0, 0).

When the system starts in state 3 or 4, let π_j^(2) = lim_{n→∞} P(X_n = j | X_0 ∈ C_2) denote the limiting state
probabilities. In this case, the system cannot leave class C_2, and the limiting state probabilities are the same as if
states 3 and 4 formed a two-state chain with p = 1/2 and q = 1/4. Starting in class C_2, the limiting state
probabilities are

(π_3^(2), π_4^(2)) = (q/(q + p), p/(q + p)) = (1/3, 2/3),

so the complete set is (π_0^(2), π_1^(2), π_2^(2), π_3^(2), π_4^(2)) = (0, 0, 0, 1/3, 2/3).

Starting in state 2, the limiting state behaviour depends on the first transition we make out of state 2. Let event
B_2k denote the event that our first transition is to a state in class C_k. Given B_21 the system enters state 1, and the
limiting probabilities are given by π^(1). Given B_22 the system enters state 3 and the limiting probabilities are
given by π^(2). Since P(B_21) = P(B_22) = 1/2, the limiting probabilities are

lim_{n→∞} P(X_n = j | X_0 = 2) = (1/2) π_j^(1) + (1/2) π_j^(2).
11. A particle moves along the real axis. Starting from a position (state) i = 1, 2, ..., it jumps at the next
time unit to state i + 1 with probability p and to state i − 1 with probability q = 1 − p. When the
particle arrives at state 0, it remains there for a further time unit with probability q or jumps to state
1 with probability p. Let X_n denote the position of the particle after the n-th jump (time unit).
Under which conditions does the Markov chain {X_0, X_1, ...} have a stationary distribution?
Solution:
Since P_00 = q, P_01 = p and P_{i,i+1} = p, P_{i,i−1} = q for i = 1, 2, ..., with q = 1 − p, the system of linear
equations is

π_0 = π_0 q + π_1 q,
π_i = π_{i−1} p + π_{i+1} q,  i = 1, 2, ....

Solving this system of equations recursively yields π_i = π_0 (p/q)^i, i = 0, 1, ....

To guarantee that Σ_{i=0}^∞ π_i = 1, the condition p < q, or, equivalently, p < 1/2, must hold. In this case

π_i = ((q − p)/q) (p/q)^i,  i = 0, 1, ....

The necessary condition p < 1/2 for the existence of a stationary distribution is intuitive, since otherwise
the particle would tend to drift to infinity. But then no time-invariant behaviour of the Markov chain can
be expected.
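A quick numerical check of π_i = ((q − p)/q)(p/q)^i on a truncated state space (sketch assuming numpy; capping the chain at a large N introduces only a negligible boundary error):

    import numpy as np

    p, N = 0.3, 200
    q = 1 - p

    P = np.zeros((N + 1, N + 1))
    P[0, 0], P[0, 1] = q, p
    for i in range(1, N):
        P[i, i - 1], P[i, i + 1] = q, p
    P[N, N - 1], P[N, N] = q, p   # reflecting cap at N (truncation artifact)

    pi = np.array([(q - p) / q * (p / q) ** i for i in range(N + 1)])
    print(np.abs(pi @ P - pi).max())   # ~0 up to truncation error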
12. Consider the Markov chain shown below. (Transition diagram omitted.)
(a) What is the period d of state 0?
(b) What are the stationary probabilities π_0, π_1, π_2, π_3?
(c) Given the system is in state 0 at time 0, what is the probability the system is in state 0 at time
nd, in the limit as n → ∞?
Solution:
(a)
By inspection, the number of transitions needed to return to state 0 is always a multiple of 2.
Thus the period of state 0 is d = 2.

(b)
To find the stationary probabilities we solve the system π = πP and Σ_i π_i = 1:

π_0 = (3/4) π_1 + (1/4) π_3
π_1 = (1/4) π_0 + (1/4) π_2
π_2 = (1/4) π_1 + (3/4) π_3
π_0 + π_1 + π_2 + π_3 = 1

Solving the second and third equations for π_2 and π_3 yields π_2 = 4π_1 − π_0 and π_3 = (4/3)π_2 − (1/3)π_1 = 5π_1 − (4/3)π_0.
Substituting π_3 back into the first equation yields π_0 = (3/4)π_1 + (1/4)(5π_1 − (4/3)π_0) = 2π_1 − (1/3)π_0.
This implies π_1 = (2/3)π_0. It then follows that π_2 = (5/3)π_0 and π_3 = 2π_0.
Lastly, we choose π_0 so the state probabilities sum to 1:

π_0 + π_1 + π_2 + π_3 = π_0 (1 + 2/3 + 5/3 + 2) = (16/3) π_0 = 1.

It follows that the state probabilities are π_0 = 3/16, π_1 = 2/16, π_2 = 5/16, π_3 = 6/16.

(c)
Since the system starts in state 0 at time 0 and state 0 has period d = 2, the limiting probability that the
system is in state 0 at time nd is

lim_{n→∞} P_00^(nd) = d π_0 = 2 · (3/16) = 3/8.
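The diagram itself is not reproduced here, but the balance equations in (b) determine the transition matrix; assuming that reconstruction, a numpy check of π and of the limit in (c):

    import numpy as np

    # Transition matrix implied by the balance equations of part (b)
    # (reconstructed, since the original diagram is not shown here).
    P = np.array([[0,   1/4, 0,   3/4],
                  [3/4, 0,   1/4, 0],
                  [0,   1/4, 0,   3/4],
                  [1/4, 0,   3/4, 0]])

    pi = np.array([3, 2, 5, 6]) / 16
    assert np.allclose(pi @ P, pi)

    print(np.linalg.matrix_power(P, 100)[0, 0])   # even power nd -> 3/8 = d*pi_0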
13. A packet voice communication system transmits digitized speech only when the speaker is talking. In
every 10 ms interval (referred to as a timeslot) the system decides whether the speaker is talking or
silent. When the speaker is talking, a speech packet is generated; otherwise no packet is generated. If
the speaker is silent in a slot, the speaker is talking in the next slot with probability p = 1/140. If the
speaker is talking in a slot, the speaker is silent in the next slot with probability q = 1/100. If states 0
and 1 represent silent and talking, what is the limiting state probability vector (π_0, π_1) = lim_{n→∞} π(n)?
Solution:
Let state S_0 mean the speaker is silent and S_1 mean the speaker is talking. Then

P_00 = P(X_{n+1} = 0 | X_n = 0) = 139/140,  P_01 = P(X_{n+1} = 1 | X_n = 0) = 1/140,
P_10 = P(X_{n+1} = 0 | X_n = 1) = 1/100,   P_11 = P(X_{n+1} = 1 | X_n = 1) = 99/100,

          S_0       S_1
T = S_0 [ 139/140   1/140  ]
    S_1 [  1/100   99/100  ]

Solving (α, β) = (α, β) T together with α + β = 1:

−(1/140) α + (1/100) β = 0  ⟹  α/140 = β/100  ⟹  α = (7/5) β,
α + β = 1  ⟹  (7/5) β + β = (12/5) β = 1  ⟹  β = 5/12, α = 7/12.

So (π_0, π_1) = lim_{n→∞} π(n) = (7/12, 5/12).
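For a general two-state chain with flip probabilities p (0 → 1) and q (1 → 0), the limiting vector is (q/(p+q), p/(p+q)); a quick check (assuming numpy):

    import numpy as np

    p, q = 1 / 140, 1 / 100
    T = np.array([[1 - p, p],
                  [q, 1 - q]])

    print(np.linalg.matrix_power(T, 10_000)[0])   # -> [7/12, 5/12]
    print(q / (p + q), p / (p + q))               # same limit in closed form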
14. Each of two switches is either on or off during a day. On day n, each switch will independently be
on with probability (1 + number of on switches during day n − 1)/4. For instance, if both switches
are on during day n − 1, then each will independently be on during day n with probability 3/4.
(a) Represent this process as a Markov chain with stationary transition probabilities by defining the
possible states and constructing the transition matrix P.
(b) Draw the digraph associated to the transition matrix P, identify the communicating classes of
the transition matrix P, and determine whether they are transient or recurrent.
(c) Determine lim_{n→∞} P^n.
(d) What fraction of days are both switches on? What fraction of days are both switches off?
Solution:
(a)
Let S_i denote that i switches are on, i = 0, 1, 2.

          S_0    S_1    S_2
    S_0 [ 9/16   6/16   1/16 ]
P = S_1 [ 1/4    2/4    1/4  ]
    S_2 [ 1/16   6/16   9/16 ]
(b)

P is a positive matrix, thus there is only one communicating class, which contains all states. Since the
chain is finite and irreducible, all states are recurrent.

(c)
Solve (π_0, π_1, π_2) P = (π_0, π_1, π_2) together with π_0 + π_1 + π_2 = 1:

(9/16) π_0 + (1/4) π_1 + (1/16) π_2 = π_0 ... (1)
(6/16) π_0 + (2/4) π_1 + (6/16) π_2 = π_1 ... (2)
(1/16) π_0 + (1/4) π_1 + (9/16) π_2 = π_2 ... (3)
π_0 + π_1 + π_2 = 1 ... (4)

From (1), (1/4) π_1 + (1/16) π_2 = (7/16) π_0 ... (5); from (2), (6/16) π_0 + (6/16) π_2 = (2/4) π_1 ... (6).
Computing 2·(5) + (6) gives (8/16) π_2 = (8/16) π_0, so π_2 = π_0.
Then (5) becomes (1/4) π_1 + (1/16) π_0 = (7/16) π_0, so π_1 = (3/2) π_0.
Thus

π_0 + π_1 + π_2 = π_0 (1 + 3/2 + 1) = (7/2) π_0 = 1  ⟹  π_0 = 2/7, π_1 = 3/7, π_2 = 2/7,

             [ 2/7  3/7  2/7 ]
lim P^n  =   [ 2/7  3/7  2/7 ]
n→∞          [ 2/7  3/7  2/7 ]

(d)
Both switches are on for π_2 = 2/7 of the days, and both switches are off for π_0 = 2/7 of the days.
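The matrix P can also be rebuilt from the switching rule, since the number of switches on tomorrow is Binomial(2, (1 + k)/4) given k switches on today; a sketch assuming numpy:

    import numpy as np
    from math import comb

    P = np.zeros((3, 3))
    for k in range(3):                # k switches on today
        s = (1 + k) / 4               # each switch on tomorrow w.p. s, independently
        for j in range(3):            # j on tomorrow ~ Binomial(2, s)
            P[k, j] = comb(2, j) * s**j * (1 - s) ** (2 - j)

    print(P)                                   # matches the matrix in (a)
    print(np.linalg.matrix_power(P, 100)[0])   # -> [2/7, 3/7, 2/7]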
15. Mr. Chen goes for a run each morning. When he leaves his house for his run he is equally likely to
go out through either the front or the back door, and similarly when he returns he is equally likely to
go through either the front or back door. Mr. Chen owns 2 pairs of running shoes, which he takes off
after the run at whichever door he happens to be. If there are no shoes at the door from which he
leaves to go running, he runs barefooted. We are interested in determining the proportion of time that
Mr. Chen runs barefooted.
(a) Represent this process as a Markov chain with stationary transition probabilities by defining the
possible states and constructing the transition matrix P.
(b) Draw the digraph associated to the transition matrix P, identify the communicating classes
and determine whether they are transient or recurrent. What is the period of each state i?
(c) Determine lim_{n→∞} P^n.
(d) Determine the proportion of days that Mr. Chen runs barefooted.
(e) Mr. Chen runs barefooted today, find the expected number of days required for Mr. Chen to
run barefooted again.
Solution:
(a)
Let i = 0, 1, 2 denote the number of pairs of running shoes at the door from which Mr. Chen leaves to go
running. The transition matrix is:

          0     1     2
  0 [   1/2    0    1/2 ]
P = 1 [ 1/4   1/2   1/4 ]
  2 [   1/4   1/2   1/4 ]

(b)
All states communicate (for instance 0 → 2 → 1 → 0 has positive probability), so there is a single
communicating class {0, 1, 2}, which is therefore recurrent. Since P_00 = 1/2 > 0, every state has period 1.

(c)
Solve (π_0, π_1, π_2) P = (π_0, π_1, π_2) together with π_0 + π_1 + π_2 = 1:

(1/2) π_0 + (1/4) π_1 + (1/4) π_2 = π_0
(1/2) π_1 + (1/2) π_2 = π_1
(1/2) π_0 + (1/4) π_1 + (1/4) π_2 = π_2

The second equation gives π_1 = π_2, and comparing the first and third equations gives π_0 = π_2.
(Equivalently, P is doubly stochastic, so the uniform distribution is stationary.) Hence

π_0 = π_1 = π_2 = 1/3,

             [ 1/3  1/3  1/3 ]
lim P^n  =   [ 1/3  1/3  1/3 ]
n→∞          [ 1/3  1/3  1/3 ]

(d)
Mr. Chen runs barefooted exactly when he is in state 0, so he runs barefooted on π_0 = 1/3 of the days.
(e)
E(T_00) = 1/π_0 = 1/(1/3) = 3 days.
16. We are given two boxes A and B containing a total of 3 labelled balls. At each step, a ball is
selected at random (all selections being equally likely) from among the 3 balls, and then a box is
selected at random: box A is selected with probability 2/3 and box B with probability 1/3,
independently of the ball selected. The selected ball is moved to the selected box, if the ball is not
already in it. Consider the Markov chain of the number X_n of balls in box A after the nth step.
(a) Define the possible states and construct the transition matrix P.
(b) Draw the digraph with probabilities as labels associated to the transition matrix P, identify the
communicating classes and determine whether they are transient or recurrent. What is the period
of each state i?
(c) Initially box A is empty; find the probability of box A being empty again after 5 steps taken,
that is, find P(X_5 = 0, X_k ≠ 0, k = 1, 2, 3, 4 | X_0 = 0).
(d) Determine lim_{n→∞} P^n.
(e) Determine the proportion of time that box B contains more balls than box A does.
(f) Initially box A is empty; find the expected number of steps required for box A to be empty
again, that is, find E(T_00).
Solution:
(a)
Let i = 0, 1, 2, 3 denote the number of balls in box A. A step increases i by 1 if one of the 3 − i balls in box B
is selected together with box A, and decreases i by 1 if one of the i balls in box A is selected together with
box B; hence P_{i,i+1} = (2/3)(3 − i)/3, P_{i,i−1} = (1/3)(i/3), and

         0     1     2     3
  0 [  1/3   2/3    0     0  ]
  1 [  1/9   4/9   4/9    0  ]
P =
  2 [   0    2/9   5/9   2/9 ]
  3 [   0     0    1/3   2/3 ]
(b)
There is one communicating class including all states; all states are positive recurrent and aperiodic.

(c)
Since X_5 = 0 requires X_4 = 1, and state 3 cannot reach state 1 in one step, the admissible paths are
0,1,1,1,1,0; 0,1,1,2,1,0; 0,1,2,1,1,0 and 0,1,2,2,1,0. Hence

P(X_5 = 0, X_k ≠ 0, k = 1, 2, 3, 4 | X_0 = 0)
= (2/3)(4/9)(4/9)(4/9)(1/9) + (2/3)(4/9)(4/9)(2/9)(1/9) + (2/3)(4/9)(2/9)(4/9)(1/9) + (2/3)(4/9)(5/9)(2/9)(1/9)
= (128 + 64 + 64 + 80)/19683 = 336/19683.
(d)
Solving (π_0, π_1, π_2, π_3) P = (π_0, π_1, π_2, π_3) with Σ_i π_i = 1 yields

π_0 = 1/27, π_1 = 6/27, π_2 = 12/27, π_3 = 8/27.

So

             [ 1/27  6/27  12/27  8/27 ]
lim P^n  =   [ 1/27  6/27  12/27  8/27 ]
n→∞          [ 1/27  6/27  12/27  8/27 ]
             [ 1/27  6/27  12/27  8/27 ]
(e)
Box B contains more balls than box A exactly when A holds 0 or 1 balls, so the proportion of time is
π_0 + π_1 = 1/27 + 6/27 = 7/27.
(f)
E(T_00) = 1/π_0 = 1/(1/27) = 27.
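The first-return probability in (c) can be cross-checked without enumerating paths, by zeroing out transitions into state 0 (a taboo-probability computation; Python sketch assuming numpy):

    import numpy as np

    P = np.array([[1/3, 2/3, 0,   0],
                  [1/9, 4/9, 4/9, 0],
                  [0,   2/9, 5/9, 2/9],
                  [0,   0,   1/3, 2/3]])

    Q = P.copy()
    Q[:, 0] = 0   # taboo: forbid visits to state 0 at intermediate steps
    f5 = np.linalg.matrix_power(Q, 4)[0] @ P[:, 0]   # first return at step 5
    print(f5, 336 / 19683)                           # both approx. 0.01707

    pi = np.array([1, 6, 12, 8]) / 27
    assert np.allclose(pi @ P, pi)   # stationary distribution from part (d)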
17. A spider hunting a fly moves between locations 1 and 2 according to a Markov chain with transition
matrix

[ 0.7  0.3 ]
[ 0.3  0.7 ]

The fly, unaware of the spider, moves according to a Markov chain with the same transition matrix

[ 0.7  0.3 ]
[ 0.3  0.7 ]

The spider catches the fly and the hunt ends whenever they meet in the same location.
(a) Show that the progress of the hunt, except for knowing the location where it ends, can be
described by a three-state Markov chain where one absorbing state represents "hunt ended"
and the other two that the spider and fly are at different locations. Obtain the transition
matrix T for this chain.
(b) Draw the digraph with probabilities as labels associated to the transition matrix T, identify the
communicating classes and determine whether they are transient or recurrent. What is the
period of each state?
(c) Suppose the spider starts at location 1 and the fly starts at location 2. Find the probability that
initially the spider is at location 1 and the fly at location 2; after 1 step the spider is at
location 1 and the fly at location 2; after 2 steps the spider is at location 2 and the fly at
location 1; and after 3 steps the spider is at location 1 and the fly at location 2.
(d) Initially the spider is at location 1 and the fly is at location 2; find the probability that the spider
and the fly are both at their initial locations for the first time again after 3 steps.

Let X_n, n = 0, 1, 2, ..., be the Markov chain with the transition matrix T found in (a).
Let p_n = P(X_n = s_1), n = 0, 1, ..., where s_1 denotes the event that the spider is at location 1 and the fly at
location 2.
Let q_n = P(X_n = s_2), n = 0, 1, ..., where s_2 denotes the event that the spider is at location 2 and the fly at
location 1.
(e) Find p_n, q_n in terms of p_{n−1}, q_{n−1}, respectively. As a consequence, find p_n + q_n in terms of
p_{n−1}, q_{n−1}.
(f) Find p_n + q_n in terms of p_0, q_0. Supposing the spider starts at location 1 and the fly starts at
location 2, find p_n + q_n.
(g) Find P(X_n = s_3), where s_3 denotes the event that the fly has been caught by the spider.
(h) Evaluate lim_{n→∞} P(X_n = s_3).
(i) Suppose the spider starts at location 1 and the fly starts at location 2; find the expected number
of steps until the fly is caught by the spider.
Solution:
(a)
Let X_n, n = 0, 1, 2, ..., be the Markov chain with states defined as follows:
s_1 ≡ the spider at location 1 and the fly at location 2; s_2 ≡ the spider at location 2 and the fly at location 1;
s_3 ≡ the spider and the fly both at location 1 or both at location 2.
Since the spider and fly move independently, P(both stay put) = 0.7 × 0.7 = 0.49, P(both switch) = 0.3 × 0.3 = 0.09
and P(they meet) = 0.7 × 0.3 + 0.3 × 0.7 = 0.42, so the transition matrix of X_n is

          s_1    s_2    s_3
T = s_1 [ 0.49   0.09   0.42 ]
    s_2 [ 0.09   0.49   0.42 ]
    s_3 [  0      0      1   ]

(b)
The communicating classes are {s_1, s_2} and {s_3}. States s_1 and s_2 are transient; s_3 is absorbing,
hence recurrent. All states are aperiodic.

(c)
P(X_0 = s_1, X_1 = s_1, X_2 = s_2, X_3 = s_1)
= P(X_0 = s_1) P(X_1 = s_1 | X_0 = s_1) P(X_2 = s_2 | X_1 = s_1) P(X_3 = s_1 | X_2 = s_2)
= 1 × 0.49 × 0.09 × 0.09 = 3.969 × 10^{−3}.

(d)
Since s_3 is absorbing, the only path that returns to s_1 at step 3 while avoiding s_1 at steps 1 and 2 is
s_1 → s_2 → s_2 → s_1:

P(X_3 = s_1, X_k ≠ s_1, k = 1, 2 | X_0 = s_1) = 0.09 × 0.49 × 0.09 = 3.969 × 10^{−3}.
(e)
pn = P ( X n = 1) = P ( X n 1 = 1) P ( X n = 1 X n 1 = 1) + P ( X n 1 = 2 ) P ( X n = 1 X n 1 = 2 )
= 0.49 pn 1 + 0.09qn 1

qn = P ( X n = 2 ) = P ( X n 1 = 1) P ( X n = 2 X n 1 = 1) + P ( X n 1 = 2 ) P ( X n = 2 X n 1 = 2 )
= 0.09 pn 1 + 0.49qn 1

pn + qn = 0.49 pn 1 + 0.09qn 1 + 0.09 pn 1 + 0.49qn 1 = 0.58 ( pn 1 + qn 1 ) .


(f)
By induction, we have ,
pn + qn = 0.58 ( pn 1 + qn 1 ) = ( 0.58 ) ( p0 + q0 ) .
n

And p0 = P ( X 0 = 1) = 1, q0 = P ( X 0 = 2 ) = 0 , so pn + qn = ( 0.58 ) ( p0 + q0 ) = ( 0.58)


n n

(g)
P ( X n = s3 ) = 1 P ( X n = s1 ) + P ( X n = s2 ) = 1 pn + qn = 1 ( 0.58 ) ( p0 + q0 )
n

(h)
lim P ( X n = s3 ) = lim 1 ( 0.58 ) ( p0 + q0 ) = 1 lim ( 0.58 ) ( p0 + q0 ) = 1
n n

n n n

(i)
Let C denote the number of steps required until the fly is caught by the spider, given that the spider starts at
location 1 and the fly starts at location 2. From (f), P(X_n ≠ s_3 | X_0 = s_1) = (0.58)^n, and from each of
s_1, s_2 the catch probability at the next step is 0.42, so

P(C = 1) = P(X_1 = s_3 | X_0 = s_1) = 0.42,
P(C = n) = P(X_n = s_3, X_{n−1} ≠ s_3 | X_0 = s_1) = P(X_{n−1} ≠ s_3 | X_0 = s_1) P(X_n = s_3 | X_{n−1} ≠ s_3)
         = (0.58)^{n−1} × 0.42, n ≥ 1.

Observe that Σ_{n=1}^∞ P(C = n) = Σ_{n=1}^∞ (0.58)^{n−1}(0.42) = 0.42/(1 − 0.58) = 1, so C is a proper
(geometrically distributed) discrete random variable. Its expectation is

E(C) = Σ_{n=1}^∞ n (0.58)^{n−1} (0.42) = 0.42/(1 − 0.58)^2 = 1/0.42 = 2.380952381.
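A direct simulation of the hunt as a cross-check on E(C) = 1/0.42 (a Python sketch assuming numpy):

    import numpy as np

    move = np.array([[0.7, 0.3],
                     [0.3, 0.7]])
    rng = np.random.default_rng(1)

    def hunt():
        spider, fly, steps = 0, 1, 0    # spider at location 1, fly at location 2
        while spider != fly:
            spider = rng.choice(2, p=move[spider])
            fly = rng.choice(2, p=move[fly])
            steps += 1
        return steps

    print(np.mean([hunt() for _ in range(100_000)]))   # approx. 2.381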
18. At all times, an urn contains N > 0 balls, some white and some black. At each stage, a
coin having probability p, 0 < p < 1, of landing heads is flipped. If heads appears, then a ball is
chosen at random from the urn and is replaced by a white ball; if tails appears, then a ball is chosen
at random from the urn and is replaced by a black ball. Let X_n, n ≥ 0, denote the number of white balls in the
urn after the nth stage. Observe that {X_n, n ≥ 0} is a Markov chain.
(a) Define the N + 1 possible states and let P_ij, 0 ≤ i, j ≤ N, be the transition probabilities.
Compute (i) P_ij, i = j; (ii) P_ij, i + 1 = j; (iii) P_ij, i = j + 1; (iv) P_ij, |i − j| ≥ 2.
(b) Identify the communicating classes and determine whether they are transient or recurrent.
What is the period of each state i?

Let N = 2, p = 1/3.
(c) Construct the transition matrix P. Draw the digraph associated to the transition matrix P with
probabilities as labels.
(d) Find P(X_0 = 0, X_1 = 0, X_2 = 1, X_3 = 2, X_4 = 1) in terms of P(X_0 = 0).
(e) Let T_02 be the first passage time from state 0 to state 2. Show by computation
that P(T_02 = 4) = 41/648, that is, show that P(X_4 = 2, X_k ≠ 2, k = 1, 2, 3 | X_0 = 0) = 41/648.
(f) Determine lim_{n→∞} P^n, where P is the transition matrix.
(g) Find the proportion of time during which the number of white balls in the urn is equal to the
number of black balls in the urn.
(h) Initially, the urn has 1 white ball. Find the expected number of stages required for the urn to
have 1 white ball again.
(i) Let T_ij be the first passage time from state i to state j. Find E(T_01), E(T_12), E(T_02).

Let N = 2, p = 1.
(j) Determine the transition matrix P. Draw the digraph associated to the transition matrix P with
probabilities as labels. Identify the communicating classes and determine whether they are
transient or recurrent. What is the period of each state i?
(k) Let f_ij = P(X_n = j for some n ≥ 1 | X_0 = i) denote the probability that, starting in state i, the
process will ever enter state j. Without computation, find f_01, f_12, justifying your findings by
providing an explanation.
(l) What is the expected number of stages required until there are only white balls in the urn if
initially there are no white balls?
Solution:
(a)
Let i = 0, 1, ..., N denote the number of white balls in the urn. Then

P_ij = P(X_{n+1} = j | X_n = i) =
  p (N − i)/N,                j = i + 1 (heads, and a black ball is chosen),
  (1 − p) i/N,                j = i − 1 (tails, and a white ball is chosen),
  p i/N + (1 − p)(N − i)/N,   j = i,
  0,                          |j − i| ≥ 2.

Indeed, each stage replaces exactly one ball, so |X_{n+1} − X_n| ≤ 1 and hence P_ij = 0 whenever |j − i| ≥ 2.

(b)
There is only one communicating class, namely {0, 1, 2, ..., N}.
All states of {0, 1, 2, ..., N} are positive recurrent and aperiodic.
(c)
          0     1     2
  0 [   2/3   1/3    0  ]
P = 1 [ 1/3   1/2   1/6 ]
  2 [    0    2/3   1/3 ]
(d)
P(X_0 = 0, X_1 = 0, X_2 = 1, X_3 = 2, X_4 = 1)
= P(X_0 = 0) P(X_1 = 0 | X_0 = 0) P(X_2 = 1 | X_1 = 0) P(X_3 = 2 | X_2 = 1) P(X_4 = 1 | X_3 = 2)
= P(X_0 = 0) P_00 P_01 P_12 P_21
= P(X_0 = 0) × (2/3) × (1/3) × (1/6) × (2/3) = (4/162) P(X_0 = 0) = (2/81) P(X_0 = 0).
(e)
f_02^(4) = P(X_4 = 2, X_k ≠ 2, k = 1, 2, 3 | X_0 = 0). The admissible paths are
0,0,0,1,2; 0,0,1,1,2; 0,1,0,1,2 and 0,1,1,1,2, so

f_02^(4) = (2/3)(2/3)(1/3)(1/6) + (2/3)(1/3)(1/2)(1/6) + (1/3)(1/3)(1/3)(1/6) + (1/3)(1/2)(1/2)(1/6)
        = 2/81 + 1/54 + 1/162 + 1/72 = (16 + 12 + 4 + 9)/648 = 41/648.
(f)
Solving (π_0, π_1, π_2) P = (π_0, π_1, π_2) with π_0 + π_1 + π_2 = 1:

(2/3) π_0 + (1/3) π_1 = π_0  ⟹  π_1 = π_0,
(1/6) π_1 + (1/3) π_2 = π_2  ⟹  π_2 = (1/4) π_1,
π_0 + π_1 + π_2 = (1 + 1 + 1/4) π_0 = 1  ⟹  π_0 = 4/9, π_1 = 4/9, π_2 = 1/9.

Hence

              [ 2/3  1/3   0  ]^n    [ π_0  π_1  π_2 ]   [ 4/9  4/9  1/9 ]
lim P^n = lim [ 1/3  1/2  1/6 ]    = [ π_0  π_1  π_2 ] = [ 4/9  4/9  1/9 ]
n→∞      n→∞  [  0   2/3  1/3 ]      [ π_0  π_1  π_2 ]   [ 4/9  4/9  1/9 ]
(g)
The proportion of time during which the number of white balls in the urn equals the number of black balls
is the proportion of time in state 1, namely π_1 = 4/9.
(h)
μ_1 = E(T_11) = Σ_{n=1}^∞ n f_11^(n) = 1/π_1 = 9/4.
(i)
Conditioning on the first step from state 0:

E(T_01) = (1/3) × 1 + (2/3) [E(T_01) + 1]  ⟹  (1/3) E(T_01) = 1  ⟹  E(T_01) = 3.

Conditioning on the first step from state 1, and noting that a drop to 0 adds E(T_02) = E(T_01) + E(T_12)
further steps:

E(T_12) = (1/3)[E(T_02) + 1] + (1/2)[E(T_12) + 1] + (1/6) × 1
        = (1/3)[E(T_01) + E(T_12)] + (1/2) E(T_12) + 1 = 2 + (5/6) E(T_12),

using E(T_01) = 3; hence (1/6) E(T_12) = 2 and E(T_12) = 12. Finally

E(T_02) = E(T_01) + E(T_12) = 3 + 12 = 15.
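Parts (e), (f) and (i) can all be cross-checked numerically (a Python sketch assuming numpy; the mean first-passage times to state 2 solve a small linear system):

    import numpy as np

    P = np.array([[2/3, 1/3, 0],
                  [1/3, 1/2, 1/6],
                  [0,   2/3, 1/3]])

    # (e) first passage 0 -> 2 at exactly step 4, via the taboo matrix Q.
    Q = P.copy()
    Q[:, 2] = 0
    print(np.linalg.matrix_power(Q, 3)[0] @ P[:, 2], 41 / 648)

    # (f) every row of P^n tends to (4/9, 4/9, 1/9).
    print(np.linalg.matrix_power(P, 100))

    # (i) m = (I - P restricted to {0,1})^{-1} 1 gives E(T_02), E(T_12).
    m = np.linalg.solve(np.eye(2) - P[:2, :2], np.ones(2))
    print(m)   # [15, 12], and E(T_01) = 15 - 12 = 3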


-End-
