
Continuous-time Markov Chain

Consider a random process {X(t), t ≥ 0} whose state space V is either finite or countable.

X(t) is called a continuous-time Markov chain (CTMC) if, given time instants t_1 < t_2 < ... < t_n < s and t > 0, and states i_1, i_2, ..., i_n, i, j ∈ V, we have

P(X(t+s) = j | X(s) = i, X(t_k) = i_k, k = 1, 2, ..., n) = P(X(t+s) = j | X(s) = i)

Thus the conditional probability of a future state, given the states at the present and past instants, is independent of the past states.

The probability p_{ij}(s, t) = P(X(t+s) = j | X(s) = i) is called the transition probability.

If p_{ij}(s, t) is independent of s and depends only on the elapsed time t, the chain is called homogeneous.

If {X(t), t ≥ 0} is a homogeneous CTMC, then

p_{ij}(t) = P(X(t+s) = j | X(s) = i) = P(X(t) = j | X(0) = i)
Exponential distribution of the sojourn time (state holding time):

After the CTMC enters a state i, it spends some time there before making a transition out of that state. The time spent in state i is a continuous random variable T_i, called the holding time or sojourn time in state i. The distribution of T_i plays an important role in understanding the dynamics of a CTMC.

Theorem:
(a) T_i is memoryless; in other words,

P(T_i > t + s | T_i > s) = P(T_i > t)

(b) f_{T_i}(t) = ν_i e^{-ν_i t} u(t), where ν_i ≥ 0 is a constant and u(t) is the unit step function.
Proof:
(a)

P(T_i > s + t | T_i > s)
  = P(X(u) = i, u ∈ [0, s+t] | X(u) = i, u ∈ [0, s])
  = P(X(u) = i, u ∈ [s, s+t] | X(u) = i, u ∈ [0, s])
  = P(X(u) = i, u ∈ [s, s+t] | X(s) = i)     (using the Markov property)
  = P(X(u) = i, u ∈ [0, t] | X(0) = i)       (using the homogeneity property)
  = P(T_i > t)

Thus T_i is memoryless.
(b)

P(T_i > t + s)
  = P(T_i > t + s, T_i > s)
  = P(T_i > s) P(T_i > t + s | T_i > s)
  = P(T_i > s) P(T_i > t)     (using the memoryless property)

Using the notation of the complementary distribution function

F^c_{T_i}(x) = P(T_i > x)

we get

F^c_{T_i}(s + t) = F^c_{T_i}(s) F^c_{T_i}(t)

Taking the logarithm of both sides results in

log F^c_{T_i}(s + t) = log F^c_{T_i}(s) + log F^c_{T_i}(t)

The only function that satisfies the above relationship for arbitrary s and t is

log F^c_{T_i}(t) = -ν_i t

where ν_i ≥ 0 is a constant. The negative sign is used considering the non-positive value of log F^c_{T_i}(t). So

F^c_{T_i}(t) = e^{-ν_i t}

Taking the derivative with respect to t and using the property of the PDF, we get

f_{T_i}(t) = ν_i e^{-ν_i t} u(t)

where ν_i ≥ 0 is a constant. Thus the state holding time in state i is an exponential random variable with mean holding time 1/ν_i.
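The memoryless property is easy to check numerically. The sketch below (with an assumed rate ν_i = 2 and assumed thresholds s, t) estimates both sides of P(T_i > s + t | T_i > s) = P(T_i > t) from exponential samples:

```python
import math
import random

# Empirical check of the memoryless property for an assumed rate nu = 2.0:
# P(T > s + t | T > s) should match P(T > t) = e^{-nu t}.
rng = random.Random(0)
nu, s, t, n = 2.0, 0.3, 0.5, 200_000

samples = [rng.expovariate(nu) for _ in range(n)]
p_gt_t = sum(x > t for x in samples) / n               # P(T > t)
n_gt_s = sum(x > s for x in samples)                   # survivors past s
p_cond = sum(x > s + t for x in samples) / n_gt_s      # P(T > s+t | T > s)
theory = math.exp(-nu * t)

print(p_gt_t, p_cond, theory)  # all three should be close to 0.368
```

Any other exponential rate gives the same agreement; only the exponential distribution has this property among continuous holding-time distributions.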

Example 1: The Poisson Process

Consider the Poisson process {X(t), t ≥ 0} with rate parameter λ. The process has independent increments and is therefore Markov, with state space V = {0, 1, 2, ...}. The transition probability p_{ij}(s, t), j ≥ i, is given by

p_{ij}(s, t) = P(X(t+s) = j | X(s) = i)
  = P(X(t+s) = j, X(s) = i) / P(X(s) = i)
  = P(X(s) = i) P(X(t+s) - X(s) = j - i) / P(X(s) = i)     (independent increments)
  = P(X(t+s) - X(s) = j - i)
  = ((λt)^{j-i} / (j-i)!) e^{-λt}

Therefore,

p_{ij}(s, t) = { ((λt)^{j-i} / (j-i)!) e^{-λt},   j ≥ i
             { 0,                                otherwise

            = p_{ij}(t), independent of s.

Note that the state holding time in state i here is the interarrival time, with

f_{T_i}(t) = λ e^{-λt},   t ≥ 0
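The transition probabilities above can be tabulated directly. The following sketch (assumed rate λ = 1.5 and elapsed time t = 2) also confirms that each row of the transition matrix sums to 1:

```python
import math

# Poisson-process transition probability
# p_ij(t) = e^{-lam t} (lam t)^(j-i) / (j-i)!  for j >= i, 0 otherwise.
def p_ij(i, j, t, lam):
    if j < i:
        return 0.0
    k = j - i
    return math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)

lam, t = 1.5, 2.0                       # assumed rate and elapsed time
row = [p_ij(3, j, t, lam) for j in range(3, 60)]
print(sum(row))  # the truncated row sum is very close to 1
```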

Structure of a homogeneous CTMC

The operation of a CTMC can be described using the state holding time as follows:

1) Once the CTMC enters state i, it stays there for a time T_i ~ exp(ν_i).

2) When the CTMC leaves state i, it enters one of the states j ≠ i with transition probability P_{ij}, where Σ_{j≠i} P_{ij} = 1.

The process of holding in a state and then jumping to another state continues as the chain evolves. The sequence of jumps from state to state behaves like a discrete-time Markov chain; this process is sometimes called the embedded Markov chain. The two events, leaving state i and entering state j, are independent because of the Markovian assumption.
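This two-step structure translates directly into a simulation recipe: sample an exponential holding time, then sample the next state from the embedded chain. The sketch below uses a hypothetical three-state chain (the rates ν_i and jump probabilities P_{ij} are made-up values):

```python
import random

# Hypothetical 3-state CTMC: holding rates nu[i] and embedded-chain rows P[i]
# as lists of (next state, probability), with no self-transitions.
nu = {0: 1.0, 1: 2.0, 2: 0.5}
P = {0: [(1, 0.7), (2, 0.3)],
     1: [(0, 0.4), (2, 0.6)],
     2: [(0, 1.0)]}

def simulate_ctmc(state, horizon, rng):
    t, path = 0.0, [(state, 0.0)]
    while t < horizon:
        t += rng.expovariate(nu[state])     # 1) hold for T_i ~ exp(nu_i)
        u, acc = rng.random(), 0.0
        for nxt, prob in P[state]:          # 2) jump via the embedded chain
            acc += prob
            if u <= acc:
                break
        state = nxt
        path.append((state, t))
    return path

path = simulate_ctmc(0, 10.0, random.Random(42))
print(path[:4])
```

The returned path is a list of (state, jump time) pairs; strictly increasing jump times and no repeated consecutive states reflect the structure described above.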

Example 2: Poisson process continued

Suppose the Poisson process has entered state i at time 0. It remains in the same state until the next arrival, so T_i ~ exp(λ). Once an arrival takes place, the state becomes i + 1. Thus

P_{ij} = { 1,   j = i + 1
        { 0,   otherwise
Short-time behaviour of the chain in an interval (t, t + Δt)

p_{ii}(Δt) = P(T_i > Δt) + o(Δt)
          = e^{-ν_i Δt} + o(Δt)
          = 1 - ν_i Δt + o(Δt)

lim_{Δt→0} (p_{ii}(Δt) - 1)/Δt = -ν_i

p_{ij}(Δt) = P(T_i < Δt) P_{ij} + o(Δt)
          = (1 - e^{-ν_i Δt}) P_{ij} + o(Δt)
          = (ν_i Δt + o(Δt)) P_{ij} + o(Δt)
          = ν_i Δt P_{ij} + o(Δt)
          = q_{ij} Δt + o(Δt)

where q_{ij} = ν_i P_{ij} is a probability rate: it indicates the probability rate at which the chain enters state j from state i.

Note that q
j i
ij  vi  Pij
j i

If we denote vi   qii , then q


j
ij  0.

We have the following two lemmas for the limiting behaviour of p_{ii}(Δt) and p_{ij}(Δt).

Lemma 1: From p_{ii}(Δt) = 1 - ν_i Δt + o(Δt), we have

lim_{Δt→0} (1 - p_{ii}(Δt))/Δt = ν_i - lim_{Δt→0} o(Δt)/Δt = ν_i

Thus ν_i indicates the probability rate at which the chain leaves state i.

Lemma 2: Again, from p_{ij}(Δt) = q_{ij} Δt + o(Δt), we get for j ≠ i

lim_{Δt→0} p_{ij}(Δt)/Δt = q_{ij} + lim_{Δt→0} o(Δt)/Δt = q_{ij}

Chapman-Kolmogorov Equation for a CTMC

Recall that for a DTMC, p^{(n)}_{ij} = Σ_k p^{(m)}_{ik} p^{(n-m)}_{kj}. The same argument can be applied to derive

p_{ij}(s + t) = Σ_k p_{ik}(s) p_{kj}(t)

Since the chain is continuous in time, we have to characterize it at each instant. This is done by the so-called Kolmogorov backward and forward differential equations: we apply the Chapman-Kolmogorov equation to p_{ij}(t + Δt) and derive differential equations for the transition probabilities of the CTMC.

Kolmogorov Backward Equation

[Figure: timeline from 0 to t + Δt; the chain is in state i at time 0 and in state j at time t + Δt, with the intermediate state considered at time Δt.]

Suppose the chain is in state i at t = 0, as shown in the figure. Conditioning on the state k at the intermediate instant Δt,

p_{ij}(t + Δt) = P(X(t + Δt) = j | X(0) = i)
  = Σ_k p_{ik}(Δt) p_{kj}(t)
  = p_{ii}(Δt) p_{ij}(t) + Σ_{k≠i} p_{ik}(Δt) p_{kj}(t)
  = (1 - ν_i Δt + o(Δt)) p_{ij}(t) + Σ_{k≠i} q_{ik} Δt p_{kj}(t) + o(Δt)

Therefore

lim_{Δt→0} (p_{ij}(t + Δt) - p_{ij}(t))/Δt = -ν_i p_{ij}(t) + Σ_{k≠i} q_{ik} p_{kj}(t)

so that

p'_{ij}(t) = -ν_i p_{ij}(t) + Σ_{k≠i} q_{ik} p_{kj}(t)

Noting that q_{ii} = -ν_i, we have

p'_{ij}(t) = q_{ii} p_{ij}(t) + Σ_{k≠i} q_{ik} p_{kj}(t)

so that

p'_{ij}(t) = Σ_k q_{ik} p_{kj}(t)
Kolmogorov Forward Equation

[Figure: timeline from 0 to t + Δt; the chain is in state i at time 0 and in state j at time t + Δt, with the intermediate state considered at time t.]

Consider the figure shown above. Here, conditioning on the state k at the intermediate instant t,

p_{ij}(t + Δt) = Σ_k p_{ik}(t) p_{kj}(Δt)
  = p_{ij}(t) p_{jj}(Δt) + Σ_{k≠j} p_{ik}(t) p_{kj}(Δt)
  = (1 - ν_j Δt + o(Δt)) p_{ij}(t) + Σ_{k≠j} p_{ik}(t)(q_{kj} Δt + o(Δt))

so that

p'_{ij}(t) = -ν_j p_{ij}(t) + Σ_{k≠j} p_{ik}(t) q_{kj}

which is the Kolmogorov forward differential equation. Writing q_{jj} = -ν_j, we can rewrite it as

p'_{ij}(t) = Σ_k p_{ik}(t) q_{kj}

Matrix Form of Kolmogorov Equations

We can define the matrices

P(t) = [ p_{00}(t)  p_{01}(t)  ... ]
       [ p_{10}(t)  p_{11}(t)  ... ]
       [ ...        ...        ... ]

P'(t) = [ p'_{00}(t)  p'_{01}(t)  ... ]
        [ p'_{10}(t)  p'_{11}(t)  ... ]
        [ ...         ...         ... ]

and

Q = [ q_{00}  q_{01}  ... ]
    [ q_{10}  q_{11}  ... ]
    [ ...     ...     ... ]

Then the Kolmogorov backward and forward differential equations can be written as

P'(t) = Q P(t)

and

P'(t) = P(t) Q
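Both matrix equations are solved by P(t) = e^{Qt}. As a numerical sketch (pure Python, with an assumed 2-state generator), a truncated Taylor series for the matrix exponential lets us verify the backward form P'(t) = Q P(t) by finite differences:

```python
# P(t) = expm(Q t) via a truncated Taylor series (adequate for small Q t).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(Q, t, terms=40):
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = mat_mul(term, [[q * t / k for q in row] for row in Q])
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

Q = [[-2.0, 2.0], [1.0, -1.0]]          # an assumed generator matrix
P1 = expm(Q, 1.0)

# Finite-difference check of the backward equation P'(t) = Q P(t):
h = 1e-5
P2 = expm(Q, 1.0 + h)
dP = [[(P2[i][j] - P1[i][j]) / h for j in range(2)] for i in range(2)]
QP = mat_mul(Q, P1)
```

Since e^{Qt} commutes with Q, the same numbers also satisfy the forward form P'(t) = P(t) Q; the rows of P(t) sum to 1 because the rows of Q sum to 0.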

Example (Reliability study)

A certain system has two states: under repair (state 0) and under operation (state 1). The durations of operation and repair are exponential RVs with rate parameters λ and μ respectively.

Find P(t) = [ p_{00}(t)  p_{01}(t) ]
            [ p_{10}(t)  p_{11}(t) ]

Solution: The generator matrix of the chain is given by

Q = [ -μ   μ  ]
    [  λ  -λ  ]

The forward Kolmogorov equation P'(t) = P(t) Q gives

[ p'_{00}(t)  p'_{01}(t) ]   [ p_{00}(t)  p_{01}(t) ] [ -μ   μ  ]
[ p'_{10}(t)  p'_{11}(t) ] = [ p_{10}(t)  p_{11}(t) ] [  λ  -λ  ]

Thus,

p'_{00}(t) = -μ p_{00}(t) + λ p_{01}(t)
          = -μ p_{00}(t) + λ (1 - p_{00}(t))       (since p_{00}(t) + p_{01}(t) = 1)

so

p'_{00}(t) = -(λ + μ) p_{00}(t) + λ

We also note that

p_{00}(0) = P(X(0) = 0 | X(0) = 0) = 1

With this initial condition, we get the solution of the above differential equation as

p_{00}(t) = λ/(λ+μ) + (μ/(λ+μ)) e^{-(λ+μ)t}

Similarly,

p_{11}(t) = μ/(λ+μ) + (λ/(λ+μ)) e^{-(λ+μ)t}

Now,

p_{01}(t) = 1 - p_{00}(t) = μ/(λ+μ) - (μ/(λ+μ)) e^{-(λ+μ)t}

and

p_{10}(t) = 1 - p_{11}(t) = λ/(λ+μ) - (λ/(λ+μ)) e^{-(λ+μ)t}
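The closed form can be cross-checked numerically. The sketch below (assumed rates λ = 0.5, μ = 2.0) integrates the scalar forward equation p_{00}'(t) = -(λ+μ) p_{00}(t) + λ by Euler steps and compares against the formula p_{00}(t) = λ/(λ+μ) + (μ/(λ+μ)) e^{-(λ+μ)t}:

```python
import math

lam, mu = 0.5, 2.0                    # assumed failure and repair rates

def p00_closed(t):
    return lam / (lam + mu) + mu / (lam + mu) * math.exp(-(lam + mu) * t)

# Euler integration of p00'(t) = -(lam + mu) p00(t) + lam with p00(0) = 1
p, dt = 1.0, 1e-4
for _ in range(int(3.0 / dt)):        # integrate up to t = 3
    p += dt * (-(lam + mu) * p + lam)

print(abs(p - p00_closed(3.0)))  # small discretization error
```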

The limiting probabilities are

lim_{t→∞} p_{00}(t) = λ/(λ+μ),   lim_{t→∞} p_{01}(t) = μ/(λ+μ),

lim_{t→∞} p_{10}(t) = λ/(λ+μ)   and   lim_{t→∞} p_{11}(t) = μ/(λ+μ)

Thus at the steady state, the state transition probability matrix is given by

Π = [ λ/(λ+μ)  μ/(λ+μ) ] = [ π_0  π_1 ]   (say)
    [ λ/(λ+μ)  μ/(λ+μ) ]   [ π_0  π_1 ]

We also observe that, irrespective of the values of the initial state probabilities [p_0(0)  p_1(0)], the steady-state state probabilities are given by [π_0  π_1].

The above example illustrates a remarkable property of the CTMC, stated here without proof:

If lim_{t→∞} p_{ij}(t) exists, then lim_{t→∞} p_{ij}(t) = π_j independent of i, where π_j is the probability of state j at the steady state.

Birth-death processes

The birth-death process is a well-known example of a continuous-time Markov chain with many applications; the queuing system is one such example. The process has state space V = {0, 1, ...}. If the process is in state i, it can move only to state i+1 (a single birth) or state i-1 (a single death) at some random times. We associate two random times with state i:

B_i = the time until the next birth; B_i is exponentially distributed with rate parameter λ_i.

D_i = the time until the next death; D_i is exponentially distributed with rate parameter μ_i.

B_i and D_i are assumed to be independent. The state holding time T_i in a state i > 0 is given by T_i = min(B_i, D_i).

Theorem: The state holding time of a birth-death process in a state i > 0 is exponentially distributed with rate parameter (λ_i + μ_i).

Proof:

P(T_i > t) = P(min(B_i, D_i) > t)
          = P(B_i > t, D_i > t)
          = P(B_i > t) P(D_i > t)
          = e^{-(λ_i + μ_i)t}

so that 1 - F_{T_i}(t) = e^{-(λ_i + μ_i)t} and

f_{T_i}(t) = (λ_i + μ_i) e^{-(λ_i + μ_i)t}

giving ν_i = λ_i + μ_i.

We can also get the transition probabilities of the embedded Markov chain.

For i > 0,

P_{i,i+1} = P(B_i < D_i)
         = ∫_0^∞ λ_i e^{-λ_i u} ∫_u^∞ μ_i e^{-μ_i v} dv du
         = λ_i / (λ_i + μ_i)

Similarly,

P_{i,i-1} = μ_i / (λ_i + μ_i)

At i = 0,

μ_0 = 0 and P_{01} = 1

Thus the P matrix of the embedded Markov chain is given by

P = [ 0               1               0               ...               ]
    [ μ_1/(λ_1+μ_1)   0               λ_1/(λ_1+μ_1)   ...               ]
    [ ...                                                               ]
    [ 0 ...           μ_i/(λ_i+μ_i)   0               λ_i/(λ_i+μ_i) ... ]

and the generator matrix Q is given by

Q = [ -λ_0   λ_0          0            0     ... ]
    [ μ_1    -(λ_1+μ_1)   λ_1          0     ... ]
    [ 0      μ_2          -(λ_2+μ_2)   λ_2   ... ]
    [ ...                                        ]

Kolmogorov Equations

Suppose the process is in state i at an instant t. It can move to states i+1 and i-1 in an infinitesimal time Δt with the following transition probabilities:

p_{ij}(Δt) = { λ_i Δt + o(Δt)   for j = i + 1
           { μ_i Δt + o(Δt)   for j = i - 1
           { o(Δt)            for j ≠ i, i+1, i-1

where λ_i is called the birth rate and μ_i the death rate in state i. In queuing theory these are the arrival and departure rates of the customers.

From the above, the transition probability p_{ii}(Δt) is given by

p_{ii}(Δt) = 1 - λ_i Δt - μ_i Δt + o(Δt)

With these rates we can derive two differential equations to describe the transition probabilities at an instant t.

Suppose the chain is in state i at t = 0. We apply the Chapman-Kolmogorov equation to find the transition probability p_{ij}(t + Δt) by conditioning on the intermediate instant t:

p_{ij}(t + Δt) = Σ_k p_{ik}(t) p_{kj}(Δt)

Applying the assumptions of the birth-death process,

p_{ij}(t + Δt) = p_{i,j-1}(t) λ_{j-1} Δt + p_{i,j+1}(t) μ_{j+1} Δt + (1 - λ_j Δt - μ_j Δt) p_{ij}(t) + o(Δt)

Rearranging the terms and dividing by Δt, we get

(p_{ij}(t + Δt) - p_{ij}(t))/Δt = λ_{j-1} p_{i,j-1}(t) + μ_{j+1} p_{i,j+1}(t) - (λ_j + μ_j) p_{ij}(t) + o(Δt)/Δt

Taking the limit as Δt → 0 and noting that lim_{Δt→0} o(Δt)/Δt = 0, we get

dp_{ij}(t)/dt = λ_{j-1} p_{i,j-1}(t) + μ_{j+1} p_{i,j+1}(t) - (λ_j + μ_j) p_{ij}(t)

This differential equation is known as the forward Kolmogorov equation. If instead we apply the Chapman-Kolmogorov equation by conditioning on the first transition, in the interval (0, Δt), we get

p_{ij}(t + Δt) = p_{i+1,j}(t) λ_i Δt + p_{i-1,j}(t) μ_i Δt + (1 - λ_i Δt - μ_i Δt) p_{ij}(t) + o(Δt)

which leads to another differential equation

dp_{ij}(t)/dt = λ_i p_{i+1,j}(t) + μ_i p_{i-1,j}(t) - (λ_i + μ_i) p_{ij}(t)

This differential equation is known as the backward Kolmogorov differential equation.

Note the state-varying parameters λ_i and μ_i, which make the solution of the Kolmogorov equations difficult. We consider the special case when a steady-state solution exists.
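For constant rates λ_i = λ and μ_i = μ (an assumption for this sketch), the forward equations can be integrated numerically on a truncated state space {0, ..., N}; total probability is conserved and the solution relaxes toward a steady state:

```python
# Euler integration of
#   dp_j/dt = lam p_{j-1} + mu p_{j+1} - (lam + mu) p_j
# truncated to {0, ..., N}, starting from state 0.
N, lam, mu, dt = 30, 1.0, 2.0, 1e-3
p = [0.0] * (N + 1)
p[0] = 1.0
for _ in range(int(50.0 / dt)):                   # integrate up to t = 50
    q = p[:]
    for j in range(N + 1):
        inflow = (lam * q[j - 1] if j > 0 else 0.0) + \
                 (mu * q[j + 1] if j < N else 0.0)
        outrate = (lam if j < N else 0.0) + (mu if j > 0 else 0.0)
        p[j] = q[j] + dt * (inflow - outrate * q[j])
print(sum(p), p[0])  # total probability stays 1; p[0] approaches 1 - lam/mu
```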

Global balance equation

Suppose a steady-state solution of the Kolmogorov differential equations exists. Then as t → ∞, dp_{ij}(t)/dt → 0 and p_{ij}(t) → π_j independent of i. Note that π_j also denotes the probability of state j at the steady state.

Putting these results in the forward Kolmogorov equation, we get

λ_{j-1} π_{j-1} + μ_{j+1} π_{j+1} - (λ_j + μ_j) π_j = 0

or

λ_{j-1} π_{j-1} + μ_{j+1} π_{j+1} = (λ_j + μ_j) π_j

(the probability rate of leaving state j is balanced by the probability rate of entering state j)

This is a difference equation in j, known as the global balance equation. [Figure: state-transition-rate diagram showing the flows λ_{j-1}π_{j-1} and μ_{j+1}π_{j+1} into state j and (λ_j + μ_j)π_j out of it.]
This equation can easily be solved using the following information:

(1) Σ_{j≥0} π_j = 1

(2) At j = 0 there can be no death, so the balance equation reduces to

λ_0 π_0 = μ_1 π_1
⇒ π_1 = (λ_0/μ_1) π_0

Substituting the value of π_1 in the global balance equation at j = 1, we get

μ_1 π_1 + λ_1 π_1 = λ_0 π_0 + μ_2 π_2

so that

μ_2 π_2 = λ_1 π_1 + μ_1 π_1 - λ_0 π_0 = λ_1 π_1

⇒ π_2 = (λ_1/μ_2) π_1 = (λ_1 λ_0 / μ_2 μ_1) π_0

Continuing in a similar manner, we get for j = 1, 2, ...

π_j = ∏_{i=1}^{j} (λ_{i-1}/μ_i) π_0

Putting this result in (1), we get

π_0 (1 + Σ_{j≥1} ∏_{i=1}^{j} λ_{i-1}/μ_i) = 1

⇒ π_0 = 1 / (1 + Σ_{j≥1} ∏_{i=1}^{j} λ_{i-1}/μ_i)

so that

π_j = (∏_{i=1}^{j} λ_{i-1}/μ_i) / (1 + Σ_{j≥1} ∏_{i=1}^{j} λ_{i-1}/μ_i)

Example: M/M/1 queuing system

In the A/S/n/m notation for queuing systems, A denotes the arrival process, S the service process, n the number of servers and m the allowable queue length. In the M/M/1 queue both the arrival process and the service process are Markov, and there is one server.

The M/M/1 queue is an example of an application of the birth-death process. Suppose {X(t)} represents the number of jobs in the queuing system at time t. In an M/M/1 queue the arrival process is Poisson with rate λ_i = λ for i = 0, 1, .... The jobs are served on a first-come first-served basis by a single server, and the service times are independent and exponentially distributed, with μ_i = μ for i = 1, 2, .... The global balance equations are given by

 j 1   j 1        j , j  1, 2,...


and
 0    1
with the condition


j 0
j 1

If    , then the process will behave as symmetrical random walk process and X  t  will the null-
recurrent.

When λ < μ, π_j can be obtained as follows.

For j = 0, we have

λ π_0 = μ π_1
⇒ π_1 = (λ/μ) π_0

For j = 1, we have

λ π_0 + μ π_2 = (λ + μ) π_1

Substituting for π_1 from above, we get

π_2 = (λ/μ)^2 π_0

Continuing in the same manner, we get

π_j = (λ/μ)^j π_0

Now

Σ_{j≥0} (λ/μ)^j π_0 = 1

⇒ π_0 = 1 - λ/μ

so that

π_j = (1 - λ/μ)(λ/μ)^j,   j = 0, 1, ...

Thus π_j = (1 - ρ) ρ^j, where ρ = λ/μ is the utilization factor of the queue.

Suppose lim_{t→∞} X(t) = X is the number of jobs in the system at the steady state; note that this limit is in the probabilistic sense. Thus X is a geometric random variable with parameter ρ. The mean and variance are given by

E[X] = Σ_{j≥0} j π_j = ρ/(1 - ρ)

and

var(X) = ρ/(1 - ρ)^2
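A quick numerical check of these moments, with an assumed utilization ρ = 0.6 (so E[X] = 1.5 and var(X) = 3.75):

```python
# Steady-state pmf pi_j = (1 - rho) rho^j, truncated far into the tail.
rho = 0.6                              # assumed utilization lam/mu < 1
pi = [(1 - rho) * rho ** j for j in range(400)]
mean = sum(j * p for j, p in enumerate(pi))
var = sum(j * j * p for j, p in enumerate(pi)) - mean ** 2
print(mean, var)  # close to rho/(1-rho) = 1.5 and rho/(1-rho)^2 = 3.75
```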

Note that this average number includes both the jobs waiting for service and the job already in service. At any time, if j > 1 is the total number of jobs in the system, then j - 1 jobs are waiting; if j = 0 or 1, no job is waiting in the queue. Let L represent the number of jobs waiting for service. Then

E[L] = 0·π_0 + 0·π_1 + Σ_{j≥2} (j - 1) π_j = ρ^2/(1 - ρ)

Average waiting time

Each job has a service time T^D_i, independent and exponentially distributed with rate μ. The waiting time W of an arriving job is the sum of the service times of the jobs in the queue. If there are j jobs in the system, then the average waiting time is

E(W | X = j) = E(Σ_{i=1}^{j} T^D_i) = j E[T^D_i] = j/μ

Thus the average waiting time of a job is given by

E[W] = Σ_{j≥0} (j/μ) π_j
    = Σ_{j≥0} (j/μ)(1 - ρ) ρ^j
    = ρ/(μ(1 - ρ))
    = average queue length × average service time per customer

Average time for a job to be in the system

E[T] = average waiting time + the job's own average service time
    = ρ/(μ(1 - ρ)) + 1/μ
    = 1/(μ(1 - ρ))
    = 1/(μ - λ)

Example: M/M/1/K queuing system

The system can hold a maximum of K jobs. The global balance equations are given by

λ π_{j-1} + μ π_{j+1} = (λ + μ) π_j,   j = 1, 2, ..., K - 1

λ π_0 = μ π_1

λ π_{K-1} = μ π_K

We can recursively solve the above equations to get

π_j = π_0 ρ^j

so that using the condition Σ_{j=0}^{K} π_j = 1 results in

π_j = ((1 - ρ)/(1 - ρ^{K+1})) ρ^j

Example: M/M/∞ queuing system

The system can hold infinitely many jobs: when a customer arrives, he/she immediately goes into service. Thus if there are j customers in the system, the overall service rate is jμ. Here

λ_j = λ   and   μ_j = jμ

The global balance equations are given by

λ_{j-1} π_{j-1} + μ_{j+1} π_{j+1} = (λ_j + μ_j) π_j,   j = 1, 2, ...

λ π_0 = μ π_1

Applying the initial condition and mathematical induction, these give the recursion

λ π_j = (j + 1)μ π_{j+1}

which we can solve recursively to get

π_j = π_0 (λ/μ)^j / j!

The condition Σ_{j≥0} π_j = 1 gives

π_0 Σ_{j≥0} (λ/μ)^j / j! = 1

⇒ π_0 e^{λ/μ} = 1

Thus,

π_j = e^{-λ/μ} (λ/μ)^j / j!,   j = 0, 1, 2, ...
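The recursion λ π_j = (j+1)μ π_{j+1} indeed reproduces the Poisson pmf term by term, as the sketch below confirms for assumed rates λ = 3, μ = 2 (so the parameter is a = λ/μ = 1.5):

```python
import math

lam, mu = 3.0, 2.0                    # assumed arrival and per-job service rates
a = lam / mu

# Build pi_j from the balance recursion lam * pi_j = (j+1) * mu * pi_{j+1}
pi = [math.exp(-a)]
for j in range(60):
    pi.append(lam * pi[j] / ((j + 1) * mu))

# Compare against the Poisson pmf with parameter a
poisson = [math.exp(-a) * a ** j / math.factorial(j) for j in range(61)]
max_err = max(abs(x - y) for x, y in zip(pi, poisson))
print(max_err, sum(pi))  # essentially zero; probabilities sum to 1
```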
