
CONTINUOUS-TIME MARKOV CHAINS

QUEUEING SYSTEMS
• THE POISSON PROCESS
• The Poisson process is central to the modelling of physical processes and plays a
pivotal part in classical queueing theory.
• In most of the elementary queueing systems, the inter-arrival times and the
service times are assumed to be exponentially distributed, or, equivalently,
the arrival and service processes are assumed to be Poisson.
• The Poisson process is a counting process: the random variable X(t) that
counts the number of point events in a given time interval (0, t) has a
Poisson distribution if:

P[X(t) = k] = ((λt)^k / k!) · e^(−λt)

[Figure: Poisson point events scattered along the time axis between 0 and t]
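As a quick illustration (a sketch, with an assumed rate λ and interval t), the pmf above can be evaluated directly:

```matlab
% Evaluate P[X(t) = k] = (lambda*t)^k * exp(-lambda*t) / k! for k = 0..10.
lambda = 4; t = 0.5;                            % assumed rate and interval
k = 0:10;
p = (lambda*t).^k .* exp(-lambda*t) ./ factorial(k);
disp([k; p])                                    % the probabilities sum to ~1
```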

• THE POISSON PROCESS – SUPERPOSITION PROPERTY
• In the foregoing formula, λt is the mean of the Poisson variable: physically it
represents the average number of occurrences of the event in a time interval of
length t; hence λ can be interpreted as a rate.
• The superposition property states that if N independent Poisson processes
A1, A2, .., AN are combined into a single process A = A1 + A2 + .. + AN, then
process A is still Poisson, with rate λ equal to the sum of the individual rates
λi of the Ai.

[Figure: independent Poisson streams A1, .., AN merged (+) into a single Poisson stream A]
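A minimal simulation sketch of the superposition property (the rates and horizon below are assumed, not from the slides): merging two independent Poisson streams yields a stream whose rate is the sum of the individual rates.

```matlab
% Merge two simulated Poisson streams and check the rate of the result.
lambda1 = 2; lambda2 = 3; T = 1000;                  % assumed rates and horizon
t1 = cumsum(-log(rand(1, 2*lambda1*T))/lambda1);     % arrival epochs of A1
t2 = cumsum(-log(rand(1, 2*lambda2*T))/lambda2);     % arrival epochs of A2
A  = sort([t1(t1 < T), t2(t2 < T)]);                 % superposed stream A
rateA = numel(A)/T                                   % close to lambda1+lambda2 = 5
```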

• EXPONENTIALLY DISTRIBUTED INTERARRIVAL TIMES
• The exponential distribution and the Poisson distribution are closely related,
and mirror each other in the following sense:
• - If the inter-arrival times in a point process are exponentially distributed,
then the number of arrival points in the time interval (0,t) is given by a Poisson
distribution, and the process is a Poisson arrival process.
• - Conversely, if the number of arrival points k, in any interval (0,t), is a
Poisson random variable, the inter-arrival times are exponentially distributed,
and the arrival process is Poisson.
• Let τ be the inter-arrival time. Then:

P[τ ≤ t] = 1 − P[τ > t]

• But P[τ > t] is the probability that no arrivals occur in the interval (0, t). That is:

P[τ > t] = P0(t) = e^(−λt)

• Therefore we obtain the exponentially distributed inter-arrival time:

P[τ ≤ t] = 1 − e^(−λt)

• THE POISSON PROCESS – MEMORYLESS PROPERTY
• The memoryless property of a Poisson process means that if we observe the
process at a certain point in time, the distribution of the time until the next
arrival is not affected by how much time has passed since the last arrival.
• In other words, the process starts afresh at the time of observation and has no
memory of the past.
• The memoryless property states that the distribution of the remaining time
until the next arrival, given that t0 units of time have elapsed since the last
arrival, is identically equal to the unconditional distribution of inter-arrival
times.
• Assume that we start observing the process immediately after an arrival at time
t = 0. From the Poisson distribution, we see that the probability of no arrivals
in the time interval (0, t0) is given by:

P[0 arrivals in (0, t0)] = ((λt0)^0 / 0!) · e^(−λt0) = e^(−λt0)

• We now determine the conditional probability that the first arrival occurs in
(t0, t0 + t), given that a period t0 has elapsed with no arrivals. That is:

P[arrival in (t0, t0 + t) | no arrivals in (0, t0)]
    = (∫ from t0 to t0+t of λe^(−λτ) dτ) / e^(−λt0)
    = (e^(−λt0) − e^(−λ(t0 + t))) / e^(−λt0)
    = 1 − e^(−λt)

• But the probability of an arrival in (0, t) is also given by:

P[arrival in (0, t)] = 1 − P0(t) = 1 − e^(−λt)

• Therefore we see that the conditional distribution of inter-arrival times, given
that a certain time has elapsed, is the same as the unconditional distribution.
This confirms the Markovian, or memoryless, property of the Poisson counting
process and of the exponential distribution.
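A numerical sketch of this conclusion (all parameter values assumed): the conditional probability of an arrival within t, given that t0 has already elapsed, matches the unconditional probability.

```matlab
% Empirical check of the memoryless property of exponential inter-arrival
% times: P[tau <= t0 + t | tau > t0] should match P[tau <= t].
lambda = 1.5; t0 = 0.7; t = 0.5; n = 1e6;    % assumed values
tau = -log(rand(n,1))/lambda;                % exponential samples
uncond = mean(tau <= t)                      % unconditional probability
cond   = mean(tau(tau > t0) - t0 <= t)       % conditional, given t0 elapsed
```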

• STOCHASTIC PROCESSES
• A stochastic process is a mathematical model for describing an empirical
process that changes in time according to some probabilistic law.
• More specifically, a stochastic process is a family of random variables defined
on a given probability space, indexed by a time parameter t in an index set T:

{X(t), t ∈ T}

such that the probability that X(t) takes on a value, say i, is denoted as:

P[X(t) = i]
• Examples of stochastic processes are: the number of customers arriving at a
bank; the number of calls to an exchange; the price P(t) of a stock on the Stock
Exchange.
• Basically, three parameters characterize a stochastic process:
• A) State Space: The values assumed by the random variable X(t) are called
states, and the collection of all possible values forms the state space of the
process. If X(t) = i, we say that the process is in state i.

• B) Index Parameter: In the context of applied stochastic processes, the index
is always taken to be the time parameter.
• C) Statistical Dependency: The statistical dependency of a stochastic process
refers to the relationships between one random variable and other members of
the same family.
• Markov processes are stochastic processes that exhibit a particular kind of
dependency among the random variables.
• For a Markov process, its future probabilistic development is dependent only
on the most current state; how the process arrives at that state is irrelevant.
• In stochastic processes, we are interested in the probability that X(t) takes on a
value i at some future time t – that is:
P X (t )  i 
• This is because precise knowledge cannot be had about the state of the process
at future times.
• We are also interested in the steady-state probabilities, if these probabilities
converge.

• DISCRETE-TIME MARKOV CHAINS
• A Markov process is a stochastic process that is characterized by the so-called
Markov property – the memoryless property that is a characteristic of the
exponential distribution, and hence also of the inter-arrival times of the
Poisson process.
• The Poisson process is therefore a special case of the Markov process.
• A Markov process is a stochastic process such that, if its state at a given time
is known, its subsequent behaviour is independent of its past history.
• The Markov chain is a special type of Markov process.
• Discrete-time Markov chains have the characteristic that the states can be
numbered by the set of non-negative integers {0,1,2,..}; similarly, time is
discretized, with the time index n taking successive values {n = 0,1,2,3,..}.
• A stochastic sequence {Xk, k ∈ T} is said to be a Markov chain if:

P[X_(k+1) = j | X_0 = i_0, X_1 = i_1, X_2 = i_2, .., X_k = i]
    = P[X_(k+1) = j | X_k = i]
    = P_ij ,   ∀ i, j, k

• That is, the probability of the (k+1)th state conditioned on the entire preceding
history equals its probability conditioned on the kth state alone, k = 0,1,2,3,..
• In other words, the future probabilistic development of the chain depends only
on its current state. This memoryless property is commonly known as the
Markovian property.
• Define the one-step transition probability, Pij, as the probability of the chain
going from state i to state j.
• If Pij does not vary with time, the chain is called a time-homogeneous Markov
chain. We assume that all Markov chains considered here are time-homogeneous.
• The Markov chain is characterized by its transition probability matrix [P]:

        | P00  P01  P02  P03  .. |
        | P10  P11  P12  P13  .. |
[P] =   | P20  P21  P22  P23  .. | ,   with Σ_j P_ij = 1 for every row i
        | P30  P31  P32  P33  .. |
        | ..   ..   ..   ..   .. |
• An alternative way of representing the evolution of a Markov chain is a state-
transition diagram, shown below:

[Figure: state-transition diagram — states i and j, self-loops Pii and Pjj, arc Pij from i to j and arc Pji from j to i]

• STATE OF MARKOV CHAIN
• The states of a Markov chain are normally taken as the non-negative integers
{0,1,2,..}.
• When Xk = j, the chain is said to be in state j at time k; we therefore define the
probability of finding the chain in this state as:

π_j^(k) = P[X_k = j]
• The conditional probability Pij expresses only the dynamism (movement) of
the chain.
• To characterize the Markov chain completely, it is necessary to specify the
starting point of the chain – the initial probability distribution P[X_0 = i].
• With this, it is possible to calculate the probability of finding the chain in any
state at any time, using the following expression:

P[X_(k+1) = j] = Σ_(i=0..∞) P[X_k = i] · P[X_(k+1) = j | X_k = i]
             = Σ_(i=0..∞) π_i^(k) P_ij

• MATRIX FORMULATION OF STATE PROBABILITIES
• The transition probability matrix is [P] = {P_ij}, where P_ij is the transition
probability. If the number of states is n, then [P] is an n×n matrix.
• The sum of each row of [P] must equal unity.
• Similarly, express the state probabilities at each time step as a row vector:

π^(k) = [π_0^(k)  π_1^(k)  ....  π_n^(k)]

• It can thus be shown that:

π^(1) = π^(0) [P]
π^(2) = π^(1) [P]
...
π^(k) = π^(k−1) [P]

• Then we obtain:

π^(k) = π^(0) [P]^(k)

where [P]^(k) = [P] multiplied by itself k times = the k-step transition matrix.
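A sketch of this propagation, reusing the assumed 3-state chain from above:

```matlab
% Propagating state probabilities: pi^(k) = pi^(0) * [P]^k.
P  = [0.5 0.3 0.2; 0.1 0.6 0.3; 0.2 0.2 0.6];   % assumed chain from above
p0 = [1 0 0];                                    % start in state 0
p1 = p0 * P                                      % distribution after one step
p5 = p0 * P^5                                    % distribution after five steps
```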

• Define:

[P]^(0) = [I]

the identity matrix.

• The general method for calculating the state probabilities k steps into the chain
is then:

[P]^(k+1) = [P]^(k) × [P]

P_ij^(k+1) = Σ_(m=0..n) P_im^(k) P_mj

• These two equations are the Chapman-Kolmogorov forward equations.
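A quick numerical check of the matrix form of these equations, again on the assumed 3-state chain:

```matlab
% Chapman-Kolmogorov in matrix form: [P]^(m+n) = [P]^m * [P]^n.
P = [0.5 0.3 0.2; 0.1 0.6 0.3; 0.2 0.2 0.6];    % assumed chain from above
assert(norm(P^5 - P^2 * P^3) < 1e-12)           % 5-step = 2-step times 3-step
```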

• STEADY-STATE BEHAVIOUR OF A MARKOV CHAIN
• For the chains of interest here, the limiting values of the state probabilities
exist and are independent of the initial probability vector.
• We say that these chains have steady states, and the limiting values are called
stationary probabilities or stationary distributions. Then:

lim_(k→∞) π_j^(k) = lim_(k→∞) Σ_i π_i^(0) P_ij^(k) = π_j ,   for every initial π^(0)

• In general, a discrete Markov chain {Xk} that is ergodic has the limiting
probabilities:

π_j = lim_(k→∞) π_j^(k) = lim_(k→∞) P[X_k = j] ,   j = 0,1,2,..

π = lim_(k→∞) π^(k)

• The stationary probabilities are uniquely determined by:

Σ_j π_j = 1

π_j = Σ_i π_i P_ij
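These two conditions can be solved as a linear system; a sketch for the assumed 3-state chain:

```matlab
% Solving pi = pi*[P] together with sum(pi) = 1 for the assumed chain.
P = [0.5 0.3 0.2; 0.1 0.6 0.3; 0.2 0.2 0.6];
n = size(P,1);
A = [P' - eye(n); ones(1,n)];    % balance equations plus normalisation
b = [zeros(n,1); 1];
piStat = (A\b)'                  % stationary distribution (row vector)
```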

• CONTINUOUS-TIME MARKOV CHAINS
• The continuous-time Markov chain can be considered as the limiting case of
the discrete one.
• However, concerning mathematical formulation, there is a difference: since
transitions in the continuous Markov chain can take place at any instant, it is
necessary to specify how long the process has stayed in a particular state before
a transition takes place.
• The continuous-time Markov chain is a stochastic process {X(t)} in which the
future probabilistic development of the process depends only on the present
state and not on its past history. That is:

P[X(t_(k+1)) = j | X(t_1) = i_1, X(t_2) = i_2, .., X(t_k) = i_k] = P[X(t_(k+1)) = j | X(t_k) = i_k]

• For a continuous-time Markov chain that is currently in state i, the probability
that the chain will leave state i for state j in the next infinitesimal amount of
time Δt, no matter how long the chain has been in state i, is:

P_ij(t, t + Δt) = q_ij Δt

• where q_ij is the instantaneous transition rate of leaving state i for state j.
Therefore the total instantaneous rate at which the chain leaves state i is:

Σ_(j≠i) q_ij

• The time a process spends in a state, say state i, is called the sojourn time or
holding time of state i.
• By definition, the sojourn times of a Markov chain must be memoryless.
• It can be shown that the sojourn times of a continuous-time Markov chain are
exponentially distributed; since the exponential distribution is the only
continuous probability distribution that possesses the memoryless property, the
sojourn times are indeed memoryless.

• CONTINUOUS-TIME MARKOV CHAINS – STATE PROBABILITY
DISTRIBUTION
• Let us define:

π_j(t) = P[X(t) = j]

• Consider the probability change in an infinitesimal time Δt; then:

π_j(t + Δt) = Σ_(i≠j) π_i(t) q_ij Δt + π_j(t) (1 − Σ_(k≠j) q_jk Δt)
• The first term on the right is the probability that the chain is in state i at time t
and makes a transition to state j within Δt.
• The second term is the probability that the chain is in state j at time t and does
not make a transition to any other state within Δt. (The probability of the chain
being in state j at time t, less the probability of being in state j and leaving for
any other state within Δt, gives the probability of the chain remaining in state j
at t + Δt.)
• Rearranging the terms, dividing by Δt, and taking the limit, we have:

dπ_j(t)/dt = Σ_(i≠j) π_i(t) q_ij − π_j(t) Σ_(k≠j) q_jk

• If we define:

q_jj = −Σ_(k≠j) q_jk

• then the previous expression becomes:

dπ_j(t)/dt = Σ_i π_i(t) q_ij
• Then we define the three matrices:

π(t) = [π_1(t)  π_2(t)  ⋯  π_n(t)]

dπ(t)/dt = [dπ_1(t)/dt  dπ_2(t)/dt  ⋯  dπ_n(t)/dt]

                 | −Σ_(j≠1) q_1j    q_12            ⋯    q_1n           |
[Q] = {q_ij} =   |  q_21           −Σ_(j≠2) q_2j    ⋯    q_2n           |
                 |  ⋮               ⋮               ⋱    ⋮              |
                 |  q_n1            q_n2            ⋯   −Σ_(j≠n) q_nj   |

• Then we have the general matrix equation:
dπ(t)/dt = π(t) [Q]

• The matrix [Q] is known as the transition rate matrix, or infinitesimal
generator. The general solution of this equation is:

π(t) = π(0) e^([Q]t)

where e^([Q]t) = I + [Q]t + ([Q]t)^2/2! + .. + ([Q]t)^k/k! + .. = I + Σ_(k=1..∞) [Q]^k t^k / k!
• For an irreducible (ergodic) Markov chain, the limiting value exists and is
independent of the initial value of the chain. Then:

dπ(t)/dt → 0 ;   lim_(t→∞) π(t) = π

π [Q] = 0

• TRANSITION RATE & TRANSITION PROBABILITY MATRICES
• Consider the transition-rate matrix [Q], and compare it with the transition-
probability matrix, [P].
• These two matrices each characterize a Markov chain – [Q] the continuous
Markov chain, and [P] the discrete Markov chain.
• Thus all the transient and steady-state probabilities of the corresponding
chain can in principle be calculated from them in conjunction with the
initial probability vector.
• The main distinction between them lies in the fact that all entries in [Q] are
transition rates, whereas those in [P] are probabilities.
• To obtain probabilities from [Q], each entry needs to be multiplied by Δt,
giving q_ij Δt.
• Secondly, each row of [Q] sums to zero, while each row of [P] sums to one.
• The instantaneous transition rate of going back to the same state is not defined
in the continuous case.

• What precisely, then, is the transition rate matrix? What do the quantities qij
stand for?
• Assume the Markov chain begins in state i at time 0, and is in state j at time t;
that is:

P_ij(t) = P[X(t) = j | X(0) = i]

P_ii(0) = 1 ;   P_ik(0) = 0 , k ≠ i

• Then j and i are Pij(t) and Pik(t) above; then our equation becomes:
d
Pij (t )   Pik (t )qkj  Pij (t )  q jk
dt k j k j

• If we set t=0(equivalent to t0) we get:


• d
Pij (0)  qij
• Setting t=0 and j=I, we have: dt
d
Pii (0)    qik
dt k i

• BIRTH-DEATH PROCESS
• A birth-death process is a special case of a Markov chain in which the process
makes transitions only to the neighbouring states of its current position.
• This restriction simplifies treatment and makes it an excellent model for all
basic queueing systems – the so-called elementary queueing theory.
• The term birth-death implies that when the population is k at time t, the
process is said to be in state Ek at time t. A transition up from Ek to Ek+1
signifies a birth, and a transition down to Ek−1 signifies a death in the
population.
• Implicitly, there can be no simultaneous births or deaths in an incremental
time period Δt. The state-transition diagram is shown below:

[Figure: birth-death state-transition diagram — neighbouring states k−1, k, k+1]

• If we define λ_k to be the birth rate when the population size is k, and μ_k the
death rate when the population size is k, then we have:

λ_k = q_(k,k+1) ;   μ_k = q_(k,k−1)

• Substituting these into the transition-rate matrix [Q], we have:

        | −λ_0        λ_0           0            0           ⋯ |
        |  μ_1       −(μ_1+λ_1)     λ_1          0           ⋯ |
[Q] =   |  0          μ_2          −(μ_2+λ_2)    λ_2         ⋯ |
        |  0          0             μ_3         −(μ_3+λ_3)   ⋯ |
        |  ⋮          ⋮             ⋮            ⋮           ⋱ |

where q_jj = −Σ_(i≠j) q_ji , so that q_00 = −q_01 = −λ_0 ; q_11 = −(q_10 + q_12) = −(μ_1 + λ_1) ;
q_22 = −(q_21 + q_23) = −(μ_2 + λ_2) ; q_33 = −(q_32 + q_34) = −(μ_3 + λ_3) ; etc.

• Expanding, and using the more familiar notation, we have:

dP_k(t)/dt = −(λ_k + μ_k) P_k(t) + λ_(k−1) P_(k−1)(t) + μ_(k+1) P_(k+1)(t) ,   k ≥ 1

dP_0(t)/dt = −λ_0 P_0(t) + μ_1 P_1(t)
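A sketch of assembling such a generator for a finite population with assumed constant rates:

```matlab
% Assembling a finite birth-death generator with assumed constant rates
% lambda (birth) and mu (death) on states 0..N.
N = 5; lambda = 2; mu = 3;
Q = diag(lambda*ones(N,1), 1) + diag(mu*ones(N,1), -1);  % off-diagonals
Q = Q - diag(sum(Q,2));                                  % rows sum to zero
```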
[Figure: transition diagram of a birth-death process]
• When the state space is finite,

P(t) = e^(Qt) = Σ_(n=0..∞) (Qt)^n / n!

The term e^(Qt) is called a matrix exponential. MATLAB has a routine expm for
computing the matrix exponential for a finite t.
• Example
X(t) is a homogeneous continuous-time Markov chain with state space
I = {i0, i1, i2, i3} and generator matrix

      | −0.5   0.2   0     0.3 |
Q =   |  0.4  −1.0   0.4   0.2 |
      |  0     0.7  −0.8   0.1 |
      |  0.1   0.2   0    −0.3 |

(each row sums to zero). The distribution of X(0) is given in the following table:

X(0)        i0     i1     i2     i3
P{X(0)}     0.2    0.3    0.4    0.1

(1) Find the expression for P(t);
(2) Find P(t = 1) and P(t = 100);
(3) Find the distributions of X(t = 1) and X(t = 100);
(4) Find the joint probability P{X(0) = i0, X(1) = i1, X(100) = i3};
(5) Is the Markov chain ergodic? Explain your result.
Solution: (1)

P(t) = e^(Qt) = Σ_(n=0..∞) (Qt)^n / n!
(2) According to (1), we use the MATLAB routine expm to compute the matrices:

                    | 0.6405   0.1196   0.0215   0.2184 |
P(t=1) = e^(Q·1) =  | 0.2108   0.4559   0.1746   0.1586 |
                    | 0.0728   0.3110   0.5110   0.1053 |
                    | 0.0917   0.1196   0.0215   0.7672 |

                        | 0.2544   0.2105   0.1053   0.4298 |
P(t=100) = e^(Q·100) =  | 0.2544   0.2105   0.1053   0.4298 |
                        | 0.2544   0.2105   0.1053   0.4298 |
                        | 0.2544   0.2105   0.1053   0.4298 |
(3) S(t) = S(0) · P(t). According to the given condition,

S(0) = [0.2   0.3   0.4   0.1]

hence

S(t=1) = S(0) · P(t=1) = [0.22962   0.29703   0.26323   0.21012]

The distribution of X(1):

X(1)        i0         i1         i2         i3
P{X(1)}     0.22962    0.29703    0.26323    0.21012
Similarly,

S(t=100) = S(0) · P(t=100) = [0.2544   0.2105   0.1053   0.4298]

The distribution of X(100):

X(100)        i0       i1       i2       i3
P{X(100)}     0.2544   0.2105   0.1053   0.4298
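A sketch reproducing part (3):

```matlab
% Reproducing part (3): distributions of X(1) and X(100).
Q  = [-0.5 0.2 0 0.3; 0.4 -1.0 0.4 0.2; 0 0.7 -0.8 0.1; 0.1 0.2 0 -0.3];
s0 = [0.2 0.3 0.4 0.1];      % given distribution of X(0)
s1   = s0 * expm(Q*1)        % ~ [0.2296 0.2970 0.2632 0.2101]
s100 = s0 * expm(Q*100)      % ~ [0.2544 0.2105 0.1053 0.4298]
```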
(4) P{X(0) = i0, X(1) = i1, X(100) = i3}
  = P{X(0) = i0} · P{X(1) = i1 | X(0) = i0} · P{X(100) = i3 | X(1) = i1}
  = P{X(0) = i0} · p_(i0,i1)(1 − 0) · p_(i1,i3)(100 − 1)
  = P{X(0) = i0} · p_(i0,i1)(1) · p_(i1,i3)(99)
  = 0.2 × 0.1196 × 0.4298 ≈ 0.0103

where

                      | 0.2544   0.2105   0.1053   0.4298 |
P(t=99) = e^(Q·99) =  | 0.2544   0.2105   0.1053   0.4298 |
                      | 0.2544   0.2105   0.1053   0.4298 |
                      | 0.2544   0.2105   0.1053   0.4298 |

(5) The chain is ergodic: P(t) converges as t → ∞ to a matrix with identical
rows, so the limiting distribution π = [0.2544  0.2105  0.1053  0.4298] exists
and is independent of the initial state.
