
Markov chains: complements

01KRR - STOCHASTIC PROCESSES


October 2010
Absorbing states
A state $j \in S$ is said to be an absorbing state if

$$P\{X_{n+1} = j \mid X_n = j\} = 1.$$

Consider a MC $X = \{X_n,\ n \ge 0\}$ having state space $T \cup R$, where $T$ is a set of transient states and $R$ is a set of recurrent states. Let $i \in T$ and $j \in R$. A useful quantity one can be interested in is

$$f(i, j) = P\{X_n = j \text{ for some } n \ge 0 \mid X_0 = i\}$$

(the probability of ever entering $j$ given that the process starts in $i$). This can be calculated by means of the following.
Theorem
If $j$ is recurrent, then the set of probabilities $\{f(i,j),\ i \in T\}$ satisfies

$$f(i,j) = \sum_{k \in T} P_{ik}\, f(k,j) + \sum_{k \in \tilde R} P_{ik},$$

where $\tilde R$ denotes the set of states communicating with $j$ (in particular, $\tilde R = \{j\}$ if $j$ is an absorbing state).
Proof: Denote by $N_j$ the number of visits to state $j$ in an infinite time. Then

$$\begin{aligned}
f(i,j) &= P\{N_j > 0 \mid X_0 = i\} \\
&= \sum_{k} P\{N_j > 0 \mid X_0 = i,\ X_1 = k\}\, P_{ik} \\
&= \sum_{k \in T} f(k,j)\, P_{ik} + \sum_{k \in \tilde R} f(k,j)\, P_{ik} + \sum_{k \notin T,\ k \notin \tilde R} f(k,j)\, P_{ik} \\
&= \sum_{k \in T} f(k,j)\, P_{ik} + \sum_{k \in \tilde R} 1 \cdot P_{ik} + \sum_{k \notin T,\ k \notin \tilde R} 0 \cdot P_{ik} \\
&= \sum_{k \in T} P_{ik}\, f(k,j) + \sum_{k \in \tilde R} P_{ik}.
\end{aligned}$$
By making use of the above equations, together with the appropriate boundary conditions (i.e., $f(j,j) = 1$ for $j$ an absorbing state, and $f(j,i) = 0$ for $j$ an absorbing state and $i$ a recurrent state with $i \ne j$), one can easily find the quantities $f(i,j)$.

Another interesting quantity is the absorption time. Let $S = T \cup R$, where $T$ is the set of transient states and $R$ is a closed set of recurrent states. Let $T_i$, for $i \in T$, be the time of absorption given $X_0 = i$:

$$T_i = \inf\{n \ge 0 : X_n \in R \mid X_0 = i\}.$$

Reasoning as before (conditioning on the first visited state):

$$E[T_i] = 1 + \sum_{k \in S} P_{ik}\, E[T_k], \qquad i \in T.$$

Considering the boundary conditions $T_j = 0 = E[T_j]$ for $j \in R$, one can find the mean absorption times.
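These linear equations are easy to solve numerically. Below is a minimal sketch (the chain is a made-up example, not from the notes): restrict $P$ to the transient states and solve $(I - P_T)\,t = \mathbf{1}$ for the mean absorption times.

```python
import numpy as np

# Hypothetical chain: states 0..3, with 0 and 3 absorbing and
# 1, 2 transient (a symmetric random walk on {0, 1, 2, 3}).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

transient = [1, 2]
PT = P[np.ix_(transient, transient)]   # restriction of P to the transient states

# E[T_i] = 1 + sum_k P_ik E[T_k]  with  E[T_k] = 0 for k recurrent
# becomes the linear system (I - PT) t = 1.
t = np.linalg.solve(np.eye(len(transient)) - PT, np.ones(len(transient)))
print(dict(zip(transient, t)))   # mean absorption times for states 1 and 2
```

For this symmetric walk both transient states take on average 2 steps to be absorbed, matching the well-known formula $E[T_i] = i(N-i)$ for the fair case.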
Gambler's Ruin Problem

- A gambler starts to play with a fortune of $i$ euro. At each play the probability of winning 1 euro is $p$ and the probability of losing 1 euro is $q = 1 - p$.
- The successive plays are a sequence of binary independent random variables $Z_1, Z_2, \ldots$, iid $B(p)$.
- The random fortune at time $n$ is $X_n = X_{n-1} + (2Z_n - 1)$ until the fortune reaches the value 0 or $N$ euro, in which case the play stops, i.e. the fortune stays at that value forever.
- The gambler's fortune $X_n$, $n = 0, 1, \ldots$, is a MC. The set of states is $E = \{0, 1, \ldots, N\}$, the initial probability is $\pi_0(x) = \delta(x - i)$, $x \in E$, and the transition probabilities are

$$P_{00} = P_{NN} = 1, \qquad P_{i,i+1} = 1 - P_{i,i-1} = p, \quad i = 1, 2, \ldots, N-1.$$
GRP: Example

If $N = 4$, $i = 1$, then

$$P = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
1-p & 0 & p & 0 & 0 \\
0 & 1-p & 0 & p & 0 \\
0 & 0 & 1-p & 0 & p \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}, \qquad \pi_0 = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \end{pmatrix}.$$

- The classes are $\{0\}$, $\{4\}$, $\{1, 2, 3\}$.
- 0 and 4 are called absorbing states.

GRP: Recurrence

- Let $f(i,j) = P\{\bigcup_{n=1}^{\infty} \{X_n = j\} \mid X_0 = i\}$.
- Some values of $f(i,j)$ are known because of the absorption condition: $f(0,0) = f(N,N) = 1$, and otherwise $f(0,j) = f(N,j) = 0$. I.e. the absorbing states are recurrent.
- For $i \ne 0, N$, we derive an equation by conditioning on the result of the first play:

$$\begin{aligned}
f(i,j) &= P\{\cup_{n \ge 1}\{X_n = j\} \mid X_1 = i+1,\ X_0 = i\}\, p + P\{\cup_{n \ge 1}\{X_n = j\} \mid X_1 = i-1,\ X_0 = i\}\,(1-p) \\
&= P\{\cup_{n \ge 1}\{X_n = j\} \mid X_1 = i+1\}\, p + P\{\cup_{n \ge 1}\{X_n = j\} \mid X_1 = i-1\}\,(1-p) \\
&= \begin{cases}
p\, f(i+1,j) + (1-p)\, f(i-1,j) & \text{if } j \ne i+1,\ i-1 \\
p + (1-p)\, f(i-1,j) & \text{if } j = i+1 \\
p\, f(i+1,j) + (1-p) & \text{if } j = i-1
\end{cases}
\end{aligned}$$
GRP: Probability of winning (1)

- In particular, for $j = N$ and $i \in \{1, 2, \ldots, N-1\}$, set $P_i = f(i, N)$.
- Therefore $P_i = p P_i + (1-p) P_i = p P_{i+1} + (1-p) P_{i-1}$, and

$$P_{i+1} - P_i = \frac{1-p}{p}\,(P_i - P_{i-1}), \qquad i \in \{1, 2, \ldots, N-1\}.$$

- As $P_0 = 0$:

$$P_2 - P_1 = \frac{q}{p}\, P_1, \qquad P_3 - P_2 = \frac{q}{p}\,(P_2 - P_1) = \left(\frac{q}{p}\right)^{2} P_1, \quad \ldots$$

and

$$P_i - P_1 = P_1 \left[ \left(\frac{q}{p}\right) + \left(\frac{q}{p}\right)^{2} + \cdots + \left(\frac{q}{p}\right)^{i-1} \right],$$
so that

$$P_i = \begin{cases}
\dfrac{1 - (q/p)^{i}}{1 - (q/p)}\, P_1 & \text{if } q/p \ne 1 \\[2mm]
i\, P_1 & \text{if } q/p = 1
\end{cases}$$
GRP: Probability of winning (2)

- From the second boundary condition $P_N = 1$ we obtain the value of $P_1$:

$$P_1 = \begin{cases}
\dfrac{1 - q/p}{1 - (q/p)^{N}} & \text{if } p \ne \tfrac12 \\[2mm]
\dfrac{1}{N} & \text{if } p = \tfrac12
\end{cases}$$

- The final result is

$$P_i = \begin{cases}
\dfrac{1 - (q/p)^{i}}{1 - (q/p)^{N}} & \text{if } p \ne \tfrac12 \\[2mm]
\dfrac{i}{N} & \text{if } p = \tfrac12
\end{cases}$$

- Note what happens if $N \to \infty$.
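The limit invited above can be worked out directly from the closed form: for $p > \tfrac12$ one has $q/p < 1$, so $(q/p)^N \to 0$; for $p < \tfrac12$, $(q/p)^N \to \infty$; and for $p = \tfrac12$, $i/N \to 0$. Hence

```latex
\lim_{N \to \infty} P_i =
\begin{cases}
1 - \left(\dfrac{q}{p}\right)^{i} & \text{if } p > \tfrac12, \\[2mm]
0 & \text{if } p \le \tfrac12.
\end{cases}
```

In words: against an infinitely rich adversary, the gambler avoids ruin with positive probability only in the favorable case $p > \tfrac12$.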


GRP: absorption probability and transience

- The probability of being absorbed at 0 is obtained by the same computation applied to $Q_i = f(i, 0)$. In fact, by the exchange $0 \leftrightarrow N$, $p \leftrightarrow q$, $i \leftrightarrow N - i$, we compute the probability of winning of the other player.
- The final result is

$$Q_i = \begin{cases}
\dfrac{1 - (p/q)^{N-i}}{1 - (p/q)^{N}} & \text{if } q \ne \tfrac12 \\[2mm]
\dfrac{N-i}{N} & \text{if } q = \tfrac12
\end{cases}$$

- The probability of absorption is $P_i + Q_i = 1$. Therefore, the class $\{1, \ldots, N-1\}$ is transient.

- For $p = 18/37$ and various combinations of $i$ and $N$, we have

```
  i        1        1       100      100
  N        2        10      200      1000
  P_i (%)  48.649   7.747   0.447    0.000
```
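The table can be reproduced from the closed form for $P_i$; the sketch below does exactly that ($p = 18/37$ is the even-money bet at European roulette, as in the notes):

```python
# Reproduce the table of winning probabilities from the closed form.
p = 18 / 37
q = 1 - p
r = q / p

def win_prob(i, N):
    """P_i = (1 - (q/p)^i) / (1 - (q/p)^N), valid since p != 1/2."""
    return (1 - r**i) / (1 - r**N)

for i, N in [(1, 2), (1, 10), (100, 200), (100, 1000)]:
    print(f"i={i:4d}  N={N:5d}  P_i = {100 * win_prob(i, N):7.3f}%")
```

Even a tiny house edge makes long games essentially hopeless: with fortune 100 against a goal of 1000, the winning probability is numerically indistinguishable from zero.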
Branching processes
Consider a population. Each individual, at the end of its life, produces $j$ offspring with probability $P_j$, $j \ge 0$, independently of the number of offspring produced by the others. $X_0$: number of individuals at generation 0; $X_n$: size of the $n$-th generation. The process $X = \{X_n,\ n \ge 0\}$ is called a branching process.

$$X_{n+1} = \sum_{i=1}^{X_n} Z_i$$

where $Z_i$ is the number of offspring of the $i$-th individual of the $n$-th generation (iid). If $\mu = E[Z_i]$, then (for $X_0 = 1$)

$$E[X_n] = E[E[X_n \mid X_{n-1}]] = \mu\, E[X_{n-1}] = \ldots = \mu^{n}.$$
Let $\pi_0$ be the probability that, starting with a single individual, the population dies out:

$$\pi_0 = P\{\text{population dies out}\} = \sum_{j \ge 0} P\{\text{population dies out} \mid X_1 = j\}\, P_j = \sum_{j \ge 0} \pi_0^{j}\, P_j.$$

In particular, if $P_0 > 0$ and $P_0 + P_1 < 1$, then $\pi_0$ is the smallest positive number satisfying the equation above, and $\pi_0 = 1$ if and only if $\mu \le 1$ (see Theorem 4.5.1 in Ross for details).
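The smallest positive root of $\pi_0 = \sum_j \pi_0^j P_j$ can be found by iterating the offspring generating function $\varphi(s) = \sum_j P_j s^j$ from $s = 0$. A sketch with a made-up offspring distribution (not from the notes):

```python
# Hypothetical offspring distribution: P_0 = 1/4, P_1 = 1/4, P_2 = 1/2,
# so mu = 0.25 + 2 * 0.5 = 1.25 > 1 and extinction is not certain.
P = [0.25, 0.25, 0.50]

def phi(s):
    """Offspring generating function phi(s) = sum_j P_j s^j."""
    return sum(Pj * s**j for j, Pj in enumerate(P))

# Iterating s -> phi(s) from s = 0 converges monotonically to the
# smallest positive fixed point, i.e. the extinction probability pi_0.
s = 0.0
for _ in range(200):
    s = phi(s)
print(s)
```

Here the fixed-point equation reads $0.5\,\pi_0^2 - 0.75\,\pi_0 + 0.25 = 0$, whose roots are $1$ and $0.5$; the iteration converges to the smaller root, $\pi_0 = 0.5$.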
Reversing the direction of time in a MC
Let $X_n$, $n \in \mathbb{Z}_+$, be a MC with transition matrix $P$.

- If $A$ is an event depending on $\ldots, X_{n-2}, X_{n-1}$, $B$ an event depending on $X_n$, and $C$ an event depending on $X_{n+1}, X_{n+2}, \ldots$, then the Markov property is satisfied, i.e.

$$P\{A \mid B \cap C\} = P\{A \mid B\}$$
$$P\{C \mid B \cap A\} = P\{C \mid B\}$$
$$P\{A \cap C \mid B\} = P\{A \mid B\}\, P\{C \mid B\}$$

- A forward MC is a backward Markov process, and we can compute the backward transition as

$$P\{X_m = j \mid X_{m+1} = i\} = \frac{P\{X_m = j,\ X_{m+1} = i\}}{P\{X_{m+1} = i\}} = \frac{P_{ji}\, \pi_m(j)}{\pi_{m+1}(i)}$$

- The backward process is a MC if the transitions are stationary, i.e. if the original MC is stationary: $\pi_n = \pi$.
Backward SMC of a SMC
Let $X_n$, $n \in \mathbb{Z}_+$, be a MC with transition matrix $P$ and stationary initial probability $\pi$, $\pi P = \pi$. Then the forward MC is a stationary MC (SMC).

- The elements of the transition matrix of the backward MC are

$$Q_{ij} = P\{X_m = j \mid X_{m+1} = i\} = \frac{P\{X_m = j,\ X_{m+1} = i\}}{P\{X_{m+1} = i\}} = \frac{P_{ji}\, \pi(j)}{\pi(i)},$$

so the backward Markov process is a stationary MC.

- If $Q_{ij} = P_{ij}$, i.e.

$$\pi(i)\, P_{ij} = \pi(j)\, P_{ji} \qquad \forall\, i \ne j,$$

then the SMC is reversible. The rationale for the name is the equality

$$P\{X_n = i,\ X_{n+1} = j\} = P\{X_n = j,\ X_{n+1} = i\}.$$
Conditions for reversibility
Let $\eta : E \to \mathbb{R}_+$ be any nonnegative function of the states such that $K = \sum_i \eta(i) < +\infty$ and

$$\eta(i)\, P_{ij} = \eta(j)\, P_{ji} \qquad \forall\, i \ne j.$$

Then $\pi = \eta/K$ is a stationary probability and the SMC is reversible. In fact

$$\sum_{i \in E} \eta(i)\, P_{ij} = \eta(j) \sum_{i \in E} P_{ji} = \eta(j).$$
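A classical instance of this condition is the random walk on a weighted graph, where $\eta(i) = \sum_j w_{ij}$ satisfies the balance equations for $P_{ij} = w_{ij}/\eta(i)$. A sketch with a made-up weight matrix (not from the notes):

```python
import numpy as np

# Hypothetical symmetric weight matrix of a small graph (w_ij = w_ji).
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])

eta = W.sum(axis=1)          # eta(i) = sum_j w_ij
P = W / eta[:, None]         # random-walk transitions P_ij = w_ij / eta(i)
pi = eta / eta.sum()         # normalization: pi = eta / K

# Detailed balance pi(i) P_ij = pi(j) P_ji holds because
# pi(i) P_ij = w_ij / K is symmetric in i and j ...
balance = pi[:, None] * P
assert np.allclose(balance, balance.T)
# ... and detailed balance implies stationarity: pi P = pi.
assert np.allclose(pi @ P, pi)
print(pi)
```

The key point, mirrored in the proof above, is that detailed balance is a stronger, purely local condition from which stationarity follows by summing over $i$.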
Semi-Markov processes
A semi-Markov process changes states according to a MC, but takes a random amount of time between changes.
The process $Z = \{Z(t),\ t \ge 0\}$ is called a semi-Markov process if it has a finite or countable set of states and if, whenever it enters a state $i$:

- the next state is $j$ with probability $P_{ij}$;
- given that the next state is $j$, the time for the transition has distribution $F_{ij}$.

Semi-Markov processes do not possess the Markov property (why?).
Let $X = \{X_n,\ n \ge 0\}$, where $X_n$ denotes the $n$-th state visited by the process. $X$ is a MC, with transition probabilities $P_{ij}$, called the embedded Markov chain of $Z$. The semi-Markov process $Z$ is said to be irreducible if $X$ is.
Let $H_i$ be the distribution of the sojourn time in state $i$. By conditioning:

$$H_i(t) = \sum_j P_{ij}\, F_{ij}(t).$$

Let $T_{ii}$ be the time between transitions into state $i$, and $\mu_{ii} = E[T_{ii}]$. Let

$$\mu_i = \int_0^{\infty} x\, dH_i(x).$$

Theorem
If the semi-Markov process is irreducible and $T_{ii}$ is nonlattice with $\mu_{ii} < \infty$, then the long-run proportion of time spent in $i$ is

$$P_i = \lim_{t \to \infty} P\{Z(t) = i \mid Z(0) = j\} = \frac{\mu_i}{\mu_{ii}}.$$

In order to find $P_i$ one can calculate the stationary probabilities $\pi_j$ of the embedded chain and use the equality

$$P_i = \frac{\pi_i\, \mu_i}{\sum_j \pi_j\, \mu_j}.$$
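A minimal numerical sketch of the last formula, for a hypothetical two-state semi-Markov process that alternates between its states with mean sojourn times 1 and 3:

```python
import numpy as np

# Embedded chain: the process alternates deterministically between 0 and 1.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
mu = np.array([1.0, 3.0])    # mean sojourn times mu_i in each state

# Stationary probabilities of the embedded chain: pi P = pi, sum_i pi_i = 1,
# obtained here as the eigenvector of P^T for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.isclose(eigvals, 1.0))])
pi = pi / pi.sum()

# Long-run proportion of time in state i: P_i = pi_i mu_i / sum_j pi_j mu_j.
prop = pi * mu / (pi * mu).sum()
print(prop)
```

Both states are visited equally often ($\pi = (1/2, 1/2)$), but state 1 holds the process three times longer, so it occupies three quarters of the time.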