A random process $\{X(t)\}$ is called a continuous-time Markov chain (CTMC) if, given time instants $t_1 < t_2 < \cdots < t_n < s$ and $t > 0$, and states $i_1, i_2, \ldots, i_n, i, j \in V$, we have
$$P\big(X(t+s)=j \mid X(s)=i,\; X(t_k)=i_k,\; k=1,2,\ldots,n\big) = P\big(X(t+s)=j \mid X(s)=i\big)$$
Thus the conditional probability of a future state, given the states at the present and past instants, is independent of the past states. For a time-homogeneous CTMC the transition probability depends only on the elapsed time:
$$p_{ij}(t) = P\big(X(t+s)=j \mid X(s)=i\big) = P\big(X(t)=j \mid X(0)=i\big)$$
Exponential distribution of the sojourn time (state holding time):
After the CTMC enters a state $i$, it spends some time there before making a transition out of that state. The time spent in state $i$ is a continuous random variable $T_i$, called the holding time or the sojourn time in state $i$. The distribution of $T_i$ plays an important role in understanding the dynamics of a CTMC.
Theorem:
(a) $T_i$ is memoryless; in other words, $P(T_i > t+s \mid T_i > s) = P(T_i > t)$.
(b) $f_{T_i}(t) = \nu_i e^{-\nu_i t}\, u(t)$, where $\nu_i > 0$ is a constant.
Proof:
(a)
$$\begin{aligned}
P(T_i > s+t \mid T_i > s) &= P\big(X(u)=i,\ u \in [0, s+t] \mid X(u)=i,\ u \in [0,s]\big) \\
&= P\big(X(u)=i,\ u \in [s, s+t] \mid X(u)=i,\ u \in [0,s]\big) \\
&= P\big(X(u)=i,\ u \in [s, s+t] \mid X(s)=i\big) \quad \text{(using the Markov property)} \\
&= P\big(X(u)=i,\ u \in [0, t] \mid X(0)=i\big) \quad \text{(using the homogeneity property)} \\
&= P(T_i > t)
\end{aligned}$$
Thus $T_i$ is memoryless.
(b) Using the memoryless property,
$$P(T_i > t+s) = P(T_i > t+s,\ T_i > s) = P(T_i > s)\, P(T_i > t+s \mid T_i > s) = P(T_i > s)\, P(T_i > t)$$
Writing $F^c_{T_i}(x) = P(T_i > x)$ for the complementary CDF, we get
$$F^c_{T_i}(t+s) = F^c_{T_i}(t)\, F^c_{T_i}(s)$$
The only function that satisfies the above relationship for arbitrary $t$ and $s$ is
$$\log F^c_{T_i}(t) = -\nu_i t$$
where $\nu_i > 0$ is a constant. The negative sign is used considering the non-positive value of $\log F^c_{T_i}(t)$, so that $F^c_{T_i}(t) = e^{-\nu_i t}$. Taking the derivative with respect to $t$ and using the properties of the PDF, we get $f_{T_i}(t) = \nu_i e^{-\nu_i t}\, u(t)$, where $\nu_i > 0$ is a constant. Thus the state holding time at state $i$ is an exponential random variable with mean holding time $1/\nu_i$.
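Both claims of the theorem can be sketched numerically; the rate $\nu = 2.0$ below is an arbitrary illustrative choice, not a value from the text:

```python
import random

# Empirical check that the holding time T_i ~ exp(nu) has mean 1/nu and is
# memoryless, for an assumed rate nu = 2.0.
random.seed(0)
nu = 2.0
samples = [random.expovariate(nu) for _ in range(200_000)]

mean_holding = sum(samples) / len(samples)          # should be close to 1/nu

# memorylessness: P(T > 1.0 | T > 0.5) should match P(T > 0.5)
p_uncond = sum(t > 0.5 for t in samples) / len(samples)
tail = [t for t in samples if t > 0.5]
p_cond = sum(t > 1.0 for t in tail) / len(tail)
```

With 200,000 samples the empirical mean lands very close to $1/\nu = 0.5$, and the conditional tail probability matches the unconditional one, as the theorem predicts.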
For the Poisson process with rate $\lambda$, regarded as a CTMC, the transition probabilities are
$$p_{ij}(t) = \begin{cases} e^{-\lambda t}\, \dfrac{(\lambda t)^{j-i}}{(j-i)!}, & j \ge i \\ 0, & \text{otherwise} \end{cases}$$
Note that the state holding time for state $i$ here is the interarrival time and is given by
$$f_{T_i}(t) = \lambda e^{-\lambda t}, \quad t \ge 0$$
The operation of a CTMC can be explained using the concept of the state holding time as follows:
1) Once the CTMC enters state $i$, it stays in that state for a time $T_i \sim \exp(\nu_i)$.
2) Once the CTMC leaves state $i$, it enters one of the states $j \ne i$ with transition probability $P_{i,j}$ such that $\sum_{j \ne i} P_{ij} = 1$.
The process of holding in a state and then jumping to another state continues as the chain evolves. The process of jumping from state $i$ to state $j$ is like a discrete-time Markov chain; this process is sometimes called an embedded Markov chain. The two events, leaving state $i$ and entering state $j$, are independent because of the Markovian assumption.
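The two-step operation above can be sketched directly as a simulation. The 3-state chain below, with its holding rates $\nu_i$ and embedded-chain probabilities $P_{ij}$, is a made-up illustrative example:

```python
import random

# Simulate a CTMC by alternating exponential holding times (step 1) and
# embedded-chain jumps (step 2). All rates and probabilities are made up.
random.seed(1)
nu = {0: 1.0, 1: 2.0, 2: 0.5}                  # holding-time rates nu_i
P = {0: {1: 0.7, 2: 0.3},                      # embedded-chain probabilities P_ij
     1: {0: 0.5, 2: 0.5},
     2: {0: 1.0}}

def simulate_ctmc(state, t_end):
    """Return the (time, state) jump points of the chain up to time t_end."""
    t, path = 0.0, [(0.0, state)]
    while True:
        t += random.expovariate(nu[state])     # step 1: hold for T_i ~ exp(nu_i)
        if t > t_end:
            break
        targets, probs = zip(*P[state].items())
        state = random.choices(targets, weights=probs)[0]  # step 2: jump to j != i
        path.append((t, state))
    return path

path = simulate_ctmc(0, 100.0)
```

The returned `path` records the embedded Markov chain together with the jump times; since $P_{ii} = 0$, consecutive states in the path always differ.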
Suppose the Poisson process has entered state $i$ at time 0. It will remain in the same state until the next arrival; thus $T_i \sim \exp(\lambda)$. Once an arrival takes place, the state becomes $i+1$. Thus, for $j \ne i$,
$$P_{i,j} = \begin{cases} 1, & j = i+1 \\ 0, & \text{otherwise} \end{cases}$$
Short-time behaviour of the chain in a time interval $(t, t+\Delta t)$:
We have the following two results for the limiting behaviour of $p_{ii}(\Delta t)$ and $p_{ij}(\Delta t)$. First,
$$p_{ii}(\Delta t) = P(T_i > \Delta t) + o(\Delta t) = e^{-\nu_i \Delta t} + o(\Delta t) = 1 - \nu_i \Delta t + o(\Delta t)$$
Therefore
$$\lim_{\Delta t \to 0} \frac{1 - p_{ii}(\Delta t)}{\Delta t} = \nu_i$$
Thus $\nu_i$ indicates the probability rate at which the chain leaves state $i$. Similarly, we get for $j \ne i$
$$\lim_{\Delta t \to 0} \frac{p_{ij}(\Delta t)}{\Delta t} = \lim_{\Delta t \to 0} \frac{p_{ij}(\Delta t) - o(\Delta t)}{\Delta t} = q_{ij}$$
where $q_{ij} = \nu_i P_{ij}$ is a probability rate function. It indicates the probability rate at which the chain enters state $j$ from state $i$. Note that
$$\sum_{j \ne i} q_{ij} = \nu_i \sum_{j \ne i} P_{ij} = \nu_i$$
Suppose the chain is at state $i$ at $t = 0$ and at state $j$ at $t + \Delta t$, and consider the intermediate instant $\Delta t$.
$$p_{ij}(t+\Delta t) = P\big(X(t+\Delta t)=j \mid X(0)=i\big) = \sum_k p_{ik}(\Delta t)\, p_{kj}(t) = p_{ii}(\Delta t)\, p_{ij}(t) + \sum_{k \ne i} p_{ik}(\Delta t)\, p_{kj}(t)$$
$$\lim_{\Delta t \to 0} \frac{p_{ij}(t+\Delta t) - p_{ij}(t)}{\Delta t} = -\nu_i\, p_{ij}(t) + \sum_{k \ne i} q_{ik}\, p_{kj}(t)$$
so that, writing $q_{ii} = -\nu_i$,
$$p_{ij}'(t) = \sum_k q_{ik}\, p_{kj}(t)$$
This is the Kolmogorov backward equation.
Kolmogorov Forward Equation
Considering instead the intermediate instant $t$ in the interval $(0, t+\Delta t)$,
$$p_{ij}(t+\Delta t) = \sum_k p_{ik}(t)\, p_{kj}(\Delta t) = p_{ij}(t)\, p_{jj}(\Delta t) + \sum_{k \ne j} p_{ik}(t)\, p_{kj}(\Delta t)$$
Writing $q_{jj} = -\nu_j$, we can rewrite the above as the differential equation
$$p_{ij}'(t) = \sum_k p_{ik}(t)\, q_{kj}$$
Then the Kolmogorov backward and forward differential equations can be written in matrix form as
$$P'(t) = Q\,P(t)$$
and
$$P'(t) = P(t)\,Q$$
respectively, where $Q = [q_{ij}]$ is the generator matrix.
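For a chain with finitely many states, both matrix equations are solved by the matrix exponential $P(t) = e^{Qt}$. A minimal numerical sketch, approximating the exponential by a truncated Taylor series, with a made-up $2 \times 2$ generator:

```python
import numpy as np

# P(t) = e^{Qt} solves both P'(t) = QP(t) and P'(t) = P(t)Q.
# The generator below is a made-up example (rows sum to zero).
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])

def transition_matrix(Q, t, terms=60):
    """Approximate P(t) = e^{Qt} = sum_k (Qt)^k / k! by a truncated series."""
    P = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, terms):
        term = term @ Q * (t / k)   # (Qt)^k / k! built up incrementally
        P = P + term
    return P

P1 = transition_matrix(Q, 1.0)
```

Each row of $P(t)$ is a probability distribution, and the Chapman-Kolmogorov relation appears here as the semigroup property $P(s+t) = P(s)\,P(t)$.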
Example: A certain system has two states, under operation (state 1) and under repair (state 0). The durations of operation and repair are exponential RVs with rate parameters $\lambda$ and $\mu$ respectively. Find
$$P(t) = \begin{bmatrix} p_{00}(t) & p_{01}(t) \\ p_{10}(t) & p_{11}(t) \end{bmatrix}$$
Here the generator matrix is
$$Q = \begin{bmatrix} -\mu & \mu \\ \lambda & -\lambda \end{bmatrix}$$
The forward equation $P'(t) = P(t)\,Q$ gives
$$\begin{aligned}
p_{00}'(t) &= -\mu\, p_{00}(t) + \lambda\, p_{01}(t) \\
p_{01}'(t) &= \mu\, p_{00}(t) - \lambda\, p_{01}(t) \\
p_{10}'(t) &= -\mu\, p_{10}(t) + \lambda\, p_{11}(t) \\
p_{11}'(t) &= \mu\, p_{10}(t) - \lambda\, p_{11}(t)
\end{aligned}$$
Thus,
$$p_{00}'(t) = -\mu\, p_{00}(t) + \lambda\, p_{01}(t)$$
Since $p_{00}(t) + p_{01}(t) = 1$,
$$p_{00}'(t) = -\mu\, p_{00}(t) + \lambda\,\big(1 - p_{00}(t)\big) = \lambda - (\lambda+\mu)\, p_{00}(t)$$
With the initial condition $p_{00}(0) = 1$, we get the solution of the above differential equation as
$$p_{00}(t) = \frac{\lambda}{\lambda+\mu} + \frac{\mu}{\lambda+\mu}\, e^{-(\lambda+\mu)t}$$
Similarly,
$$p_{11}(t) = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\, e^{-(\lambda+\mu)t}$$
Now,
$$p_{01}(t) = 1 - p_{00}(t) = \frac{\mu}{\lambda+\mu}\big(1 - e^{-(\lambda+\mu)t}\big)$$
and
$$p_{10}(t) = 1 - p_{11}(t) = \frac{\lambda}{\lambda+\mu}\big(1 - e^{-(\lambda+\mu)t}\big)$$
$$\lim_{t\to\infty} p_{00}(t) = \frac{\lambda}{\lambda+\mu}, \quad \lim_{t\to\infty} p_{01}(t) = \frac{\mu}{\lambda+\mu}, \quad \lim_{t\to\infty} p_{10}(t) = \frac{\lambda}{\lambda+\mu}, \quad \lim_{t\to\infty} p_{11}(t) = \frac{\mu}{\lambda+\mu}$$
Thus at the steady state, the state transition probability matrix is given by
$$\begin{bmatrix} \dfrac{\lambda}{\lambda+\mu} & \dfrac{\mu}{\lambda+\mu} \\[6pt] \dfrac{\lambda}{\lambda+\mu} & \dfrac{\mu}{\lambda+\mu} \end{bmatrix} = \Pi \ \text{(say)}$$
We also observe that, irrespective of the values of the initial state probabilities $p_0(0)$ and $p_1(0)$, the steady-state state probabilities are given by
$$\pi_0 = \frac{\lambda}{\lambda+\mu}, \qquad \pi_1 = \frac{\mu}{\lambda+\mu}$$
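The closed-form solution can be checked against a direct numerical integration of the differential equation; the rates $\lambda = 0.5$, $\mu = 1.5$ below are assumed for illustration:

```python
import math

# Check the closed-form p00(t) against Euler integration of
# p00'(t) = lam - (lam + mu) p00(t), p00(0) = 1, for assumed rates.
lam, mu = 0.5, 1.5

def p00_closed(t):
    return lam / (lam + mu) + mu / (lam + mu) * math.exp(-(lam + mu) * t)

dt, p00 = 1e-5, 1.0
for _ in range(200_000):                 # integrate to t = 2.0
    p00 += (lam - (lam + mu) * p00) * dt

steady = lam / (lam + mu)                # limiting value pi_0
```

The Euler solution at $t = 2$ agrees with the closed form, and for large $t$ the closed form settles to $\pi_0 = \lambda/(\lambda+\mu)$ regardless of the starting state.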
The above example illustrates a remarkable property of the CTMC (stated here without proof):
$$\lim_{t\to\infty} p_{i,j}(t) = \pi_j, \ \text{independent of } i,$$
where $\pi_j$ is the probability of state $j$ at the steady state.
Birth-death processes
The birth-death process is a well-known example of a continuous-time Markov chain. This process has many applications; the queuing system is one such example. The process has the state space $V = \{0, 1, \ldots\}$. If the process is at state $i$, it can move only to state $i+1$ (single birth) or state $i-1$ (single death) at some random times. We associate two times:
$B_i$ = random time till the next birth; $B_i$ is exponentially distributed with rate parameter $\lambda_i$.
$D_i$ = random time till the next death; $D_i$ is exponentially distributed with rate parameter $\mu_i$.
Theorem: The state holding time for a birth-death process at a state $i > 0$ is exponentially distributed with rate parameter $\lambda_i + \mu_i$.
Proof:
$$P(T_i > t) = P\big(\min(B_i, D_i) > t\big) = P(B_i > t,\ D_i > t) = P(B_i > t)\, P(D_i > t) = e^{-(\lambda_i + \mu_i)t}$$
so that
$$1 - F_{T_i}(t) = e^{-(\lambda_i + \mu_i)t}$$
For $i > 0$,
$$P_{i,i+1} = P(B_i < D_i) = \int_0^\infty \lambda_i e^{-\lambda_i u} \int_u^\infty \mu_i e^{-\mu_i v}\, dv\, du = \int_0^\infty \lambda_i e^{-(\lambda_i+\mu_i)u}\, du = \frac{\lambda_i}{\lambda_i + \mu_i}$$
Similarly,
$$P_{i,i-1} = \frac{\mu_i}{\lambda_i + \mu_i}$$
At $i = 0$, $\mu_0 = 0$ and $P_{01} = 1$. The transition probability matrix of the embedded chain is therefore
$$P = \begin{bmatrix}
0 & 1 & 0 & 0 & \cdots \\
\dfrac{\mu_1}{\lambda_1+\mu_1} & 0 & \dfrac{\lambda_1}{\lambda_1+\mu_1} & 0 & \cdots \\
\vdots & & \ddots & & \\
\cdots & \dfrac{\mu_i}{\lambda_i+\mu_i} & 0 & \dfrac{\lambda_i}{\lambda_i+\mu_i} & \cdots \\
\vdots & & & & \ddots
\end{bmatrix}$$
and the generator matrix $Q$ is given by
$$Q = \begin{bmatrix}
-\lambda_0 & \lambda_0 & 0 & 0 & \cdots \\
\mu_1 & -(\lambda_1+\mu_1) & \lambda_1 & 0 & \cdots \\
0 & \mu_2 & -(\lambda_2+\mu_2) & \lambda_2 & \cdots \\
\vdots & & & & \ddots
\end{bmatrix}$$
Kolmogorov Equations
Suppose the process is at a state $i$ at an instant $t$. It can move to the states $i+1$ and $i-1$ in an infinitesimal time $\Delta t$ with the following transition probabilities:
$$p_{ij}(\Delta t) = \begin{cases} \lambda_i \Delta t + o(\Delta t) & \text{for } j = i+1 \\ \mu_i \Delta t + o(\Delta t) & \text{for } j = i-1 \\ o(\Delta t) & \text{for } j \notin \{i-1,\ i,\ i+1\} \end{cases}$$
where $\lambda_i$ is called the birth rate and $\mu_i$ is called the death rate at state $i$. In queuing theory, these are the arrival and departure rates of the customers. Also,
$$p_{ii}(\Delta t) = 1 - \lambda_i \Delta t - \mu_i \Delta t + o(\Delta t)$$
With these rates we can get two differential equations describing the transition probability at an instant $t$.
Suppose the chain is at $i$ at $t = 0$. We can apply the Chapman-Kolmogorov (CK) equation to find the transition probability $p_{ij}(t+\Delta t)$ by considering the intermediate instant $t$.
$$p_{i,j}(t+\Delta t) = \sum_k p_{i,k}(t)\, p_{k,j}(\Delta t)$$
$$= p_{i,j-1}(t)\, \lambda_{j-1}\Delta t + p_{i,j+1}(t)\, \mu_{j+1}\Delta t + \big(1 - (\lambda_j + \mu_j)\Delta t\big)\, p_{i,j}(t) + o(\Delta t)$$
Taking the limit as $\Delta t \to 0$ and noting that $\lim_{\Delta t \to 0} \dfrac{o(\Delta t)}{\Delta t} = 0$, we get
$$\frac{dp_{i,j}(t)}{dt} = \lambda_{j-1}\, p_{i,j-1}(t) + \mu_{j+1}\, p_{i,j+1}(t) - (\lambda_j + \mu_j)\, p_{i,j}(t)$$
This differential equation is known as the forward Kolmogorov equation. If instead we apply the Chapman-Kolmogorov equation with the intermediate instant $\Delta t$, conditioning on the state reached in the first small interval,
$$p_{i,j}(t+\Delta t) = \sum_k p_{i,k}(\Delta t)\, p_{k,j}(t) = \lambda_i \Delta t\, p_{i+1,j}(t) + \mu_i \Delta t\, p_{i-1,j}(t) + \big(1-(\lambda_i+\mu_i)\Delta t\big)\, p_{i,j}(t) + o(\Delta t)$$
we get the backward Kolmogorov equation
$$\frac{dp_{i,j}(t)}{dt} = \lambda_i\, p_{i+1,j}(t) + \mu_i\, p_{i-1,j}(t) - (\lambda_i + \mu_i)\, p_{i,j}(t)$$
Note the state-varying parameters $\lambda_i$ and $\mu_i$, because of which the solution of the Kolmogorov equations is difficult in general. We consider the special case when the steady-state solution exists. Setting the derivatives to zero and writing $\pi_j = \lim_{t\to\infty} p_{i,j}(t)$,
$$\lambda_{j-1}\pi_{j-1} + \mu_{j+1}\pi_{j+1} - (\lambda_j + \mu_j)\pi_j = 0$$
or
$$\lambda_{j-1}\pi_{j-1} + \mu_{j+1}\pi_{j+1} = (\lambda_j + \mu_j)\pi_j$$
(the probability of leaving state $j$ is balanced by the probability of entering state $j$), which is a difference equation in terms of $\pi_j$ and is known as the global balance equation. [Figure: states $j-1$, $j$, $j+1$ with birth rates $\lambda_{j-1}, \lambda_j$ and death rates $\mu_j, \mu_{j+1}$ illustrating the balance of flows at state $j$.]
Using the normalization condition
$$\sum_{j=0}^{\infty} \pi_j = 1 \qquad (1)$$
we can solve the balance equations recursively. For $j = 0$, $\lambda_0 \pi_0 = \mu_1 \pi_1$, so that
$$\pi_1 = \frac{\lambda_0}{\mu_1}\, \pi_0$$
For $j = 1$, $\lambda_0 \pi_0 + \mu_2 \pi_2 = (\lambda_1 + \mu_1)\pi_1$, so that
$$\mu_2 \pi_2 = \lambda_1 \pi_1 + \mu_1 \pi_1 - \lambda_0 \pi_0 = \lambda_1 \pi_1$$
$$\pi_2 = \frac{\lambda_1}{\mu_2}\, \pi_1 = \frac{\lambda_1 \lambda_0}{\mu_2 \mu_1}\, \pi_0$$
Continuing in this way,
$$\pi_j = \pi_0 \prod_{i=1}^{j} \frac{\lambda_{i-1}}{\mu_i}$$
Substituting in (1),
$$\pi_0 \left(1 + \sum_{k=1}^{\infty} \prod_{i=1}^{k} \frac{\lambda_{i-1}}{\mu_i}\right) = 1$$
so that
$$\pi_0 = \frac{1}{1 + \displaystyle\sum_{k=1}^{\infty} \prod_{i=1}^{k} \frac{\lambda_{i-1}}{\mu_i}} \quad \text{and} \quad \pi_j = \frac{\displaystyle\prod_{i=1}^{j} \frac{\lambda_{i-1}}{\mu_i}}{1 + \displaystyle\sum_{k=1}^{\infty} \prod_{i=1}^{k} \frac{\lambda_{i-1}}{\mu_i}}$$
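The product formula can be checked against the balance condition $\pi Q = 0$ on a truncated birth-death chain; the state-dependent rates below are made-up illustrative values:

```python
import numpy as np

# Steady state of a birth-death chain truncated at level N, from the
# product formula pi_j ~ prod_{i=1}^{j} lam_{i-1}/mu_i. Rates are made up.
N = 30
lam = [1.0 + 0.1 * i for i in range(N)]                   # birth rates lambda_i
mu = [0.0] + [2.0 + 0.2 * i for i in range(1, N + 1)]     # death rates mu_i

w = [1.0]                                 # unnormalized weights, w[0] for state 0
for j in range(1, N + 1):
    w.append(w[-1] * lam[j - 1] / mu[j])
pi = np.array(w) / sum(w)                 # normalized steady-state distribution

# generator matrix of the truncated chain (no birth from state N)
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam[i]
    if i > 0:
        Q[i, i - 1] = mu[i]
    Q[i, i] = -Q[i].sum()
```

Because the product formula enforces the detailed balance $\lambda_i \pi_i = \mu_{i+1}\pi_{i+1}$, the resulting $\pi$ satisfies the global balance $\pi Q = 0$.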
The M/M/1 queue is an example of an application of the birth-death process. Suppose $\{X(t)\}$ represents the number of jobs in a queuing system at time $t$. In an M/M/1 queue, the arrival process is Poisson with rate $\lambda_i = \lambda$ for $i = 0, 1, \ldots$. The jobs are served on a first-come-first-served basis by a single server, and the service times are independent and identically exponentially distributed with rate $\mu_i = \mu$ for $i = 1, 2, \ldots$. The global balance equations are given by
$$\lambda \pi_{j-1} + \mu \pi_{j+1} = (\lambda + \mu)\pi_j, \quad j \ge 1, \qquad \lambda \pi_0 = \mu \pi_1$$
If $\lambda = \mu$, then the process will behave as a symmetric random walk and $X(t)$ will be null-recurrent.
For $j = 0$, we have
$$\lambda \pi_0 = \mu \pi_1 \;\Rightarrow\; \pi_1 = \frac{\lambda}{\mu}\, \pi_0$$
For $j = 1$, we have
$$\lambda \pi_0 + \mu \pi_2 = (\lambda + \mu)\pi_1 \;\Rightarrow\; \pi_2 = \left(\frac{\lambda}{\mu}\right)^2 \pi_0$$
Continuing,
$$\pi_j = \left(\frac{\lambda}{\mu}\right)^j \pi_0$$
Now, using $\sum_{j=0}^{\infty} \pi_j = 1$ with $\lambda/\mu < 1$,
$$\pi_0 \sum_{j=0}^{\infty} \left(\frac{\lambda}{\mu}\right)^j = \frac{\pi_0}{1 - \lambda/\mu} = 1 \;\Rightarrow\; \pi_0 = 1 - \frac{\lambda}{\mu}$$
Thus
$$\pi_j = (1 - \rho)\, \rho^j, \quad j = 0, 1, \ldots$$
where $\rho = \lambda/\mu$ is the utilization factor for the queue.
Suppose $\lim_{t\to\infty} X(t) = X$ is the number of jobs in the queue in the steady state. Note that this limit is in the probabilistic sense. Thus $X$ is a geometric random variable with parameter $\rho$. The mean and variance are given by
$$E[X] = \sum_{j=0}^{\infty} j\, \pi_j = \frac{\rho}{1-\rho}$$
and
$$\operatorname{var}[X] = \frac{\rho}{(1-\rho)^2}$$
Note that this average number includes the jobs waiting for service and the job already in service. At any time, if $j > 1$ is the total number of jobs in the system, then $j-1$ jobs are waiting; if $j = 0$ or $1$, no job is waiting in the queue. Let $L$ represent the number of jobs waiting for service. Then
$$E[L] = 0 \cdot \pi_0 + 0 \cdot \pi_1 + \sum_{j=2}^{\infty} (j-1)\, \pi_j = \frac{\rho^2}{1-\rho}$$
Each job has a service time $T_i^D$, which is identically and exponentially distributed with rate $\mu$. The waiting time $W$ is the sum of the service times of the jobs in the queue. If there are $j$ jobs in the queue, then the average waiting time is
$$E[W \mid X = j] = E\!\left[\sum_{i=1}^{j} T_i^D\right] = j\, E\big[T_i^D\big] = \frac{j}{\mu}$$
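The M/M/1 steady-state moments above can be sketched numerically; the utilization $\rho = 0.6$ is an assumed value and the infinite sums are truncated:

```python
# Numerical check of the M/M/1 results pi_j = (1 - rho) rho^j,
# E[X] = rho/(1-rho), var[X] = rho/(1-rho)^2, E[L] = rho^2/(1-rho).
rho = 0.6                                  # assumed utilization lambda/mu
N = 2000                                   # truncation for the infinite sums
pi = [(1 - rho) * rho ** j for j in range(N)]

mean_jobs = sum(j * pi[j] for j in range(N))               # E[X]
var_jobs = sum(j * j * pi[j] for j in range(N)) - mean_jobs ** 2
mean_waiting_jobs = sum((j - 1) * pi[j] for j in range(2, N))   # E[L]
```

For $\rho = 0.6$ this gives $E[X] = 1.5$, $\operatorname{var}[X] = 3.75$, and $E[L] = 0.9$, matching the closed forms.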
M/M/1/K queue: The system can hold a maximum of $K$ jobs. The global balance equations give, as before,
$$\pi_j = \rho^j\, \pi_0, \quad j = 0, 1, \ldots, K$$
so that using the condition $\sum_{j=0}^{K} \pi_j = 1$ results in
$$\pi_j = \frac{(1-\rho)\, \rho^j}{1 - \rho^{K+1}}, \quad j = 0, 1, \ldots, K$$
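A quick numerical sketch of the M/M/1/K distribution, with assumed values $\rho = 0.8$ and $K = 10$:

```python
# M/M/1/K steady-state distribution pi_j = (1-rho) rho^j / (1 - rho^{K+1}),
# for assumed rho = 0.8 and capacity K = 10.
rho, K = 0.8, 10
pi = [(1 - rho) * rho ** j / (1 - rho ** (K + 1)) for j in range(K + 1)]
p_full = pi[K]        # steady-state probability that the system is full
```

Unlike the plain M/M/1 case, the finite sum is normalizable for any $\rho$, and for $\rho < 1$ the probabilities decrease geometrically with $j$.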
M/M/$\infty$ queue: The system can hold an unlimited number of jobs. When a customer arrives, he/she immediately goes into service; thus if there are $j$ customers, the service rate will be $j\mu$. The global balance equations are given by
$$\lambda_{j-1}\pi_{j-1} + \mu_{j+1}\pi_{j+1} = (\lambda_j + \mu_j)\pi_j, \quad j = 1, 2, \ldots$$
$$\lambda_0 \pi_0 = \mu_1 \pi_1$$
We have $\lambda_j = \lambda$ and $\mu_j = j\mu$, so that
$$\pi_j = \frac{\lambda}{j\mu}\, \pi_{j-1} = \frac{(\lambda/\mu)^j}{j!}\, \pi_0$$
The condition $\sum_{j=0}^{\infty} \pi_j = 1$ gives
$$\pi_0 \sum_{j=0}^{\infty} \frac{(\lambda/\mu)^j}{j!} = \pi_0\, e^{\lambda/\mu} = 1 \;\Rightarrow\; \pi_0 = e^{-\lambda/\mu}$$
Thus,
$$\pi_j = e^{-\lambda/\mu}\, \frac{(\lambda/\mu)^j}{j!}, \quad j = 0, 1, 2, \ldots$$
i.e. the steady-state number of jobs is a Poisson random variable with mean $\lambda/\mu$.
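The Poisson form can be checked against the birth-death recursion $\pi_j = \dfrac{\lambda}{j\mu}\,\pi_{j-1}$; the rates $\lambda = 3.0$ and $\mu = 1.0$ are assumed values:

```python
import math

# M/M/inf steady state: build pi_j from the recursion pi_j = (lam/(j mu)) pi_{j-1}
# and normalize, then compare with the Poisson(lam/mu) form. Rates are assumed.
lam, mu = 3.0, 1.0
a = lam / mu
N = 60                                   # truncation level (tail is negligible)

w = [1.0]
for j in range(1, N + 1):
    w.append(w[-1] * lam / (j * mu))
total = sum(w)
pi = [x / total for x in w]

poisson = [math.exp(-a) * a ** j / math.factorial(j) for j in range(N + 1)]
mean_jobs = sum(j * p for j, p in enumerate(pi))   # should be close to lam/mu
```

The normalized recursion reproduces the Poisson probabilities, and the steady-state mean number of busy servers comes out as $\lambda/\mu$.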