
Contents

1 Generating Functions
  1.1 Probability Generating Function
2 Poisson Process
  2.1 Time Dependent Poisson Process
  2.2 Weighted Poisson Process
3 Birth Process
  3.1 Pure Birth Process
  3.2 Homogeneous Pure Birth Process
  3.3 Linear Birth Process (Yule Process)
  3.4 Time Dependent Linear Birth Process
4 Death Process
  4.1 Pure Death Process
  4.2 Linear Death Process (Homogeneous)
  4.3 Time Dependent Linear Death Process
5 The Generalized Birth and Death Process
  5.1 Limiting Behavior of Birth and Death Process
  5.2 Effect of Migration on Birth and Death Process
  5.3 The Effect of Immigration
  5.4 Limiting Size of Population When t \to \infty
6 Poisson Process and Related Distributions
  6.1 Interarrival Time
7 Properties of Poisson Process
  7.1 Additive property
  7.2 Difference of two independent Poisson processes
  7.3 Decomposition of a Poisson process

1 Generating Functions

In dealing with integral-valued random variables, it is often of great convenience to apply the powerful method of generating functions. Many stochastic processes that we come across involve integral-valued random variables, and quite often we can use generating functions in their study. The principal advantage of their use is that a single function may be used to represent a whole set of individual items.

Definition 1. Let $a_0, a_1, a_2, \ldots$ be a sequence of real numbers. Using a variable $s$, we may define a function

    A(s) = a_0 + a_1 s + a_2 s^2 + \cdots = \sum_{k=0}^{\infty} a_k s^k        (1)

If this power series converges in some interval $-s_0 < s < s_0$, then $A(s)$ is called the generating function of the sequence $a_0, a_1, a_2, \ldots$.

The variable $s$ itself has no particular significance. Here we assume $s$ to be real, but generating functions with a complex variable are also used sometimes. Differentiating (1) $k$ times, putting $s = 0$ and dividing by $k!$, we recover $a_k$, i.e.

    a_k = \frac{1}{k!} \left[ \frac{d^k A(s)}{ds^k} \right]_{s=0}        (2)

1.1 Probability Generating Function

Suppose $X$ is a random variable which assumes non-negative integral values $0, 1, 2, \ldots$ and that

    P[X = k] = p_k,  k = 0, 1, 2, \ldots,  with  \sum_{k=0}^{\infty} p_k = 1        (3)

If we take $a_k$ to be the probability $p_k$, $k = 0, 1, 2, \ldots$, then the corresponding generating function $P(s)$ of the sequence of probabilities $\{p_k\}$ is known as the probability generating function (p.g.f.) of the random variable $X$. It is sometimes also called the $s$-transform or geometric transform of $X$. Thus we have

    P(s) = \sum_{k=0}^{\infty} p_k s^k = E(s^X)        (4)

where $E(s^X)$ is the expectation of the function $s^X$ (a random variable) of the random variable $X$. The series $P(s)$ converges at least for $-1 \le s \le 1$.

The first two derivatives of $P(s)$ are given by

    P'(s) = \sum_{k=1}^{\infty} k\, p_k s^{k-1},   P''(s) = \sum_{k=2}^{\infty} k(k-1)\, p_k s^{k-2}        (5)

Now the expectation of $X$, i.e. $E(X)$, is given as

    E(X) = \sum_{k=1}^{\infty} k\, p_k = P'(1)        (6)

also

    E[X(X-1)] = \sum_{k=2}^{\infty} k(k-1)\, p_k = P''(1)        (7)

and

    E(X^2) = E[X(X-1)] + E(X) = P''(1) + P'(1)
    V(X) = E(X^2) - [E(X)]^2 = P''(1) + P'(1) - [P'(1)]^2

Thus the mean and variance can be obtained with a knowledge of the p.g.f. In fact moments, cumulants, etc. can all be expressed in terms of generating functions. The $k$th factorial moment of $X$ is given as

    E[X(X-1)\cdots(X-k+1)] = \left[ \frac{d^k P(s)}{ds^k} \right]_{s=1},  for k = 1, 2, \ldots

Also $P(e^t)$ is the moment generating function, since

    M_X(t) = E[e^{tX}] = \sum_{k=0}^{\infty} p_k\, e^{tk} = \sum_{k=0}^{\infty} p_k\, s^k,  where s = e^t

Example 1. Poisson Distribution

    p_k = \frac{e^{-\lambda} \lambda^k}{k!},  k = 0, 1, \ldots

    P(s) = \sum_{k=0}^{\infty} p_k s^k = \sum_{k=0}^{\infty} \frac{e^{-\lambda} \lambda^k}{k!} s^k = e^{-\lambda} \sum_{k=0}^{\infty} \frac{(\lambda s)^k}{k!} = e^{-\lambda} e^{\lambda s} = e^{\lambda(s-1)}

Now

    P'(s) = \lambda\, e^{\lambda(s-1)},  so  P'(1) = \lambda
    P''(s) = \lambda^2 e^{\lambda(s-1)},  so  P''(1) = \lambda^2

thus

    E(X) = \lambda,   V(X) = \lambda^2 + \lambda - \lambda^2 = \lambda
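These manipulations are easy to verify mechanically. Below is a minimal sympy sketch (symbol names are illustrative) that rebuilds $P(s)$ from the Poisson pmf and recovers the mean and variance from $P'(1)$ and $P''(1)$:

    import sympy as sp

    s, lam = sp.symbols('s lam', positive=True)
    k = sp.symbols('k', integer=True, nonnegative=True)

    # Build the p.g.f. of the Poisson distribution directly from its pmf.
    P = sp.summation(sp.exp(-lam) * lam**k / sp.factorial(k) * s**k, (k, 0, sp.oo))
    P = sp.simplify(P)                         # expected: exp(lam*(s - 1))

    mean = sp.diff(P, s).subs(s, 1)            # P'(1) -> lam
    fact2 = sp.diff(P, s, 2).subs(s, 1)        # P''(1) -> lam**2
    var = sp.simplify(fact2 + mean - mean**2)  # -> lam
    print(P, mean, var)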

Example 2. Geometric Distribution

    p_k = p\, q^k,  k = 0, 1, \ldots  and  p + q = 1

    P(s) = \sum_{k=0}^{\infty} p_k s^k = \sum_{k=0}^{\infty} p\, q^k s^k = p \sum_{k=0}^{\infty} (qs)^k = \frac{p}{1-qs}

We have

    P'(s) = \frac{pq}{(1-qs)^2},  so  P'(1) = \frac{q}{p}
    P''(s) = \frac{2pq^2}{(1-qs)^3},  so  P''(1) = \frac{2q^2}{p^2}

thus

    E(X) = \frac{q}{p}
    V(X) = P''(1) + P'(1) - [P'(1)]^2 = \frac{2q^2}{p^2} + \frac{q}{p} - \frac{q^2}{p^2} = \frac{q}{p^2}

Example 3. Binomial Distribution

    p_k = \binom{n}{k} p^k q^{n-k},  k = 0, 1, 2, \ldots, n

    P(s) = \sum_{k=0}^{n} p_k s^k = \sum_{k=0}^{n} \binom{n}{k} p^k q^{n-k} s^k = (q + sp)^n

    P'(s) = n(q+sp)^{n-1} p,  so  P'(1) = np
    P''(s) = n(n-1)(q+sp)^{n-2} p^2,  so  P''(1) = n(n-1)p^2

    E(X) = P'(1) = np
    V(X) = P''(1) + P'(1) - [P'(1)]^2 = n(n-1)p^2 + np - n^2 p^2 = npq

Example 4. Let $X$ be a random variable with p.g.f. $P_X(s)$. To find the p.g.f. of the random variable $Y = mX + n$, let $P_X(s)$ and $P_Y(s)$ be the p.g.f.s of $X$ and $Y$ respectively. We have

    P_Y(s) = E[s^Y] = E[s^{mX+n}] = E[s^{mX} s^n] = s^n E[(s^m)^X] = s^n P_X(s^m)

Similarly, if $X$ and $Y$ are independent random variables, then the p.g.f. of $Z = X + Y$ is

    P_Z(s) = E[s^Z] = E[s^{X+Y}] = E[s^X s^Y] = E[s^X]\, E[s^Y] = P_X(s)\, P_Y(s)

In the above examples we have discussed the problem of finding $P(s)$ for a given set of $p_k$'s.
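The identity $P_Z(s) = P_X(s) P_Y(s)$ is the generating-function form of the fact that the pmf of $Z = X + Y$ is the convolution of the pmfs of $X$ and $Y$. A small numeric sketch, with two arbitrary illustrative pmfs:

    import numpy as np

    # Two arbitrary pmfs on {0, 1, 2, 3}, purely for illustration
    px = np.array([0.1, 0.4, 0.3, 0.2])
    py = np.array([0.5, 0.25, 0.15, 0.1])

    # pmf of Z = X + Y is the convolution of the pmfs
    pz = np.convolve(px, py)

    # P_Z(s) = P_X(s) * P_Y(s), checked at an arbitrary point s = 0.7
    P = lambda p, s: sum(pk * s**k for k, pk in enumerate(p))
    assert np.isclose(P(pz, 0.7), P(px, 0.7) * P(py, 0.7))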

2 Poisson Process

A stochastic process $\{X(t), t \ge 0\}$ is called a Poisson process if $X(t)$ is a process with independent increments and the distribution of $X(t) - X(s)$, $t > s$, is given by

    P[X(t) - X(s) = k] = e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^k}{k!},  k = 0, 1, \ldots

Let $X(t)$ denote the number of events occurring in the time interval $(0, t)$. The basic assumptions underlying the Poisson process are as follows:

1. The probability that an event will occur in the time interval $(t, t+\Delta t)$ is $\lambda\,\Delta t + o(\Delta t)$, where $\lambda$ is independent of $t$ as well as of the number of events occurring in the interval $(0, t)$.

2. The probability that more than one event will occur in the interval $(t, t+\Delta t)$ is $o(\Delta t)$.

Hence the probability of no change in the interval $(t, t+\Delta t)$ is $1 - \lambda\,\Delta t - o(\Delta t)$. The probability of $k$ events in the time interval $(0, t)$ is denoted by $p_k(t)$. Let

    p_k(t) = P[X(t) = k],  k = 0, 1, 2, \ldots

We are interested in finding an expression for $p_k(t)$. For this purpose we extend the interval $(0, t)$ to the point $t + \Delta t$ and enumerate all the possible ways of computing the probability $p_k(t+\Delta t)$. The occurrence of $k$ events in the interval $(0, t+\Delta t)$ can happen in the following ways:

1. Exactly $k$ events occur in $(0, t)$ and no event occurs during $(t, t+\Delta t)$. The probability of this event is $p_k(t)[1 - \lambda\,\Delta t - o(\Delta t)]$.

2. Exactly $k-1$ events occur in the interval $(0, t)$ and one event occurs in the interval $(t, t+\Delta t)$; the probability of this event is $p_{k-1}(t)[\lambda\,\Delta t + o(\Delta t)]$.

3. Exactly $k-i$ events, $i \ge 2$, occur in the interval $(0, t)$ and $i$ events occur in the interval $(t, t+\Delta t)$; the probability of this event is $o(\Delta t)$.

Thus considering all the cases we get

    p_k(t+\Delta t) = p_k(t)[1 - \lambda\,\Delta t] + p_{k-1}(t)\,\lambda\,\Delta t + o(\Delta t)        (8)

thus

    \lim_{\Delta t \to 0} \frac{p_k(t+\Delta t) - p_k(t)}{\Delta t} = -\lambda\, p_k(t) + \lambda\, p_{k-1}(t)        (9)

    \frac{d}{dt} p_k(t) = -\lambda\, p_k(t) + \lambda\, p_{k-1}(t),  for k \ge 1        (10)

    \frac{d}{dt} p_0(t) = -\lambda\, p_0(t),  for k = 0        (11)

The initial conditions are $p_0(0) = 1$ and $p_k(0) = 0$, $k \ge 1$. Thus from (11) we get

    \frac{\frac{d}{dt} p_0(t)}{p_0(t)} = -\lambda,  i.e.  \frac{d}{dt}\left[\log p_0(t)\right] = -\lambda

Thus

    \log p_0(t) = -\lambda t + C,  so  p_0(t) = e^{-\lambda t + C} = C' e^{-\lambda t}

Putting the initial condition $p_0(0) = 1$, we get $C' = 1$. Thus

    p_0(t) = e^{-\lambda t}

Now from (10),

    \frac{d}{dt} p_1(t) = -\lambda\, p_1(t) + \lambda\, p_0(t)
    \frac{d}{dt} p_1(t) + \lambda\, p_1(t) = \lambda\, e^{-\lambda t}

Multiplying the above equation by $e^{\lambda t}$, we get

    e^{\lambda t} \frac{d}{dt} p_1(t) + \lambda\, e^{\lambda t} p_1(t) = \lambda
    \frac{d}{dt}\left[ e^{\lambda t} p_1(t) \right] = \lambda
    e^{\lambda t} p_1(t) = \lambda t + C

Now applying the condition $p_1(0) = 0$, we have $C = 0$. Thus

    e^{\lambda t} p_1(t) = \lambda t,  i.e.  p_1(t) = \lambda t\, e^{-\lambda t}

Proceeding in a similar manner, we get

    p_k(t) = e^{-\lambda t}\, \frac{(\lambda t)^k}{k!},  k = 0, 1, 2, \ldots        (12)

Since the parameter $\lambda$ is independent of $t$ and of the number of events occurring prior to $t$, we have

    P[X(t) - X(s) = k] = P[X(t-s) = k] = e^{-\lambda(t-s)}\, \frac{[\lambda(t-s)]^k}{k!},  k = 0, 1, 2, \ldots
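As a sanity check of (12), one can simulate sample paths and compare the empirical distribution of $X(t)$ with the Poisson law. The sketch below assumes the exponential-interarrival characterization of the Poisson process proved in Section 6; the parameter values are arbitrary:

    import numpy as np
    from math import exp, factorial

    rng = np.random.default_rng(0)
    lam, t, k, n_paths = 2.0, 1.5, 3, 200_000

    def count_events(rng, lam, t):
        # Count exponential(rate lam) interarrival gaps that fit in (0, t]
        total, n = 0.0, 0
        while True:
            total += rng.exponential(1.0 / lam)
            if total > t:
                return n
            n += 1

    counts = np.array([count_events(rng, lam, t) for _ in range(n_paths)])
    print((counts == k).mean())                         # empirical p_k(t)
    print(exp(-lam * t) * (lam * t)**k / factorial(k))  # formula (12)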

In many cases the reverse problem arises: to determine the $p_k$ from $P(s)$. Many situations arise in which it is easier to find the p.g.f. $P(s)$ of a variable than the probability distribution $\{p_k\}$. Even without finding $\{p_k\}$, we can find the moments of the distribution. $\{p_k\}$ can be obtained as

    p_k = \frac{1}{k!} \left[ \frac{d^k P(s)}{ds^k} \right]_{s=0},  k = 0, 1, 2, \ldots

Also $p_k$ can be obtained as the coefficient of $s^k$ in $P(s)$ as a power series in $s$.
Solution of $p_k(t)$ in the Poisson process with the help of the p.g.f.

Let $G_X(s, t)$ be the p.g.f. of $X(t)$; then

    G_X(s, t) = \sum_{k=0}^{\infty} s^k\, p_k(t)

Differentiating $G_X(s, t)$ with respect to $t$, we have

    \frac{\partial}{\partial t} G_X(s, t) = \sum_{k=0}^{\infty} s^k\, \frac{d}{dt} p_k(t)

Substituting the value of $\frac{d}{dt} p_k(t)$ from the relation

    \frac{d}{dt} p_k(t) = -\lambda\, p_k(t) + \lambda\, p_{k-1}(t)

we see

    \frac{\partial}{\partial t} G_X(s, t) = -\lambda \sum_{k=0}^{\infty} s^k p_k(t) + \lambda \sum_{k=1}^{\infty} p_{k-1}(t)\, s^k
    = -\lambda\, G_X(s, t) + \lambda s \sum_{k=1}^{\infty} s^{k-1} p_{k-1}(t)
    = -\lambda\, G_X(s, t) + \lambda s \sum_{r=0}^{\infty} s^r p_r(t)
    = -\lambda\, G_X(s, t) + \lambda s\, G_X(s, t)
    = -\lambda(1-s)\, G_X(s, t)

The initial condition is $G_X(s, 0) = 1$, since

    G_X(s, 0) = \sum_{k=0}^{\infty} s^k\, p_k(0) = s^0 = 1

Now

    \frac{1}{G_X(s, t)}\, \frac{\partial}{\partial t} G_X(s, t) = -\lambda(1-s)
    \frac{\partial}{\partial t} \log G_X(s, t) = -\lambda(1-s)
    \log G_X(s, t) = -\lambda(1-s)t + C

By the initial condition, $\log G_X(s, 0) = C$, i.e. $C = 0$. Then

    \log G_X(s, t) = -\lambda(1-s)t
    G_X(s, t) = e^{-\lambda t(1-s)}

This is the p.g.f. of a Poisson distribution with parameter $\lambda t$. Consequently

    p_k(t) = \frac{e^{-\lambda t} (\lambda t)^k}{k!},  k = 0, 1, 2, \ldots

2.1 Time Dependent Poisson Process

If we assume that $\lambda$ is a function of time in the assumptions of the Poisson process, then the differential equation for the probability generating function reduces to the form

    \frac{\partial}{\partial t} G_X(s, t) = -\lambda(t)(1-s)\, G_X(s, t)
    \frac{\partial}{\partial t} \log G_X(s, t) = -\lambda(t)(1-s)
    \log G_X(s, t) = -(1-s) \int_0^t \lambda(\tau)\, d\tau

From this we find

    G_X(s, t) = e^{-(1-s) \int_0^t \lambda(\tau)\, d\tau}

This is the p.g.f. of a random variable having a Poisson distribution with parameter $\int_0^t \lambda(\tau)\, d\tau$, and

    P[X(t) = k] = e^{-\int_0^t \lambda(\tau)\, d\tau}\, \frac{\left[\int_0^t \lambda(\tau)\, d\tau\right]^k}{k!},  k = 0, 1, 2, \ldots
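Such a time dependent (inhomogeneous) Poisson process can be simulated by thinning a homogeneous one. A sketch, under the assumption that the rate function is bounded by some lam_max on the interval (the rate $1 + \sin u$ used here is purely illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    T, lam_max = 10.0, 2.0
    lam = lambda u: 1.0 + np.sin(u)      # illustrative rate, bounded by lam_max

    def inhomogeneous_poisson(rng, lam, lam_max, T):
        # Generate a rate-lam_max process; keep a point at u with
        # probability lam(u) / lam_max.
        u, events = 0.0, []
        while True:
            u += rng.exponential(1.0 / lam_max)
            if u > T:
                return events
            if rng.random() < lam(u) / lam_max:
                events.append(u)

    mean_count = np.mean([len(inhomogeneous_poisson(rng, lam, lam_max, T))
                          for _ in range(20_000)])
    print(mean_count, T + 1 - np.cos(T))   # integral of lam over (0, T]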

2.2 Weighted Poisson Process

The Poisson process describes the frequency distribution of occurrences of an event to an individual with risk parameter $\lambda$. If we are sampling a population of individuals, then the variability of individuals with respect to this risk should be taken into account. For example, the risk of accident varies throughout the population according to a density function $f(\lambda)$. Then the probability

    p_k(t) = \frac{e^{-\lambda t} (\lambda t)^k}{k!},  k = 0, 1, 2, \ldots

must be interpreted as the conditional distribution for given $\lambda$, i.e. of $\{X(t) \mid \lambda\}$, and the probability that an individual chosen at random from the population will experience $k$ events in a time interval of length $t$ is

    p_k(t) = \int P[X(t) = k \mid \lambda]\, f(\lambda)\, d\lambda

(or $p_k(t) = \sum P[X(t) = k \mid \lambda]\, f(\lambda)$ when $\lambda$ is discrete).

Example 5. Suppose $\lambda$ has a type III (gamma) distribution, i.e.

    f(\lambda) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, e^{-\beta\lambda}\, \lambda^{\alpha-1},  \lambda > 0,\ \alpha > 0,\ \beta > 0

Then

    p_k(t) = \int_0^{\infty} \frac{e^{-\lambda t} (\lambda t)^k}{k!} \cdot \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, e^{-\beta\lambda}\, \lambda^{\alpha-1}\, d\lambda
    = \frac{\beta^{\alpha}}{\Gamma(\alpha)\, k!} \int_0^{\infty} e^{-\lambda t} (\lambda t)^k\, e^{-\beta\lambda}\, \lambda^{\alpha-1}\, d\lambda
    = \frac{t^k \beta^{\alpha}}{\Gamma(\alpha)\, k!} \int_0^{\infty} e^{-(t+\beta)\lambda}\, \lambda^{k+\alpha-1}\, d\lambda

Let $(t+\beta)\lambda = x$; then $d\lambda = dx/(t+\beta)$, so

    p_k(t) = \frac{t^k \beta^{\alpha}}{\Gamma(\alpha)\, k!} \int_0^{\infty} e^{-x}\, \frac{x^{k+\alpha-1}}{(t+\beta)^{k+\alpha}}\, dx
    = \frac{t^k \beta^{\alpha}}{k!} \cdot \frac{\Gamma(k+\alpha)}{\Gamma(\alpha)\, (t+\beta)^{\alpha+k}},  k = 0, 1, 2, \ldots
    = \frac{\Gamma(k+\alpha)}{\Gamma(\alpha)\, k!} \left( \frac{\beta}{t+\beta} \right)^{\alpha} \left( \frac{t}{t+\beta} \right)^{k}
    = \binom{\alpha+k-1}{k}\, p^{\alpha} q^{k},  k = 0, 1, 2, \ldots

where $p = \frac{\beta}{t+\beta}$ and $q = 1-p$. This is a negative binomial distribution. We know that the mean of the negative binomial distribution is $r\frac{q}{p}$ and the variance is $r\frac{q}{p^2}$; here $r = \alpha$.

Thus the mean of $X(t)$ is $\alpha \cdot \frac{t}{t+\beta} \cdot \frac{t+\beta}{\beta} = \frac{\alpha t}{\beta}$, and the variance of $X(t)$ is $\alpha \cdot \frac{t}{t+\beta} \cdot \frac{(t+\beta)^2}{\beta^2} = \frac{\alpha t(\beta+t)}{\beta^2}$.
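The mixture construction of Example 5 is easy to reproduce numerically: draw a risk $\lambda$ from the gamma density, then a Poisson($\lambda t$) count, and compare with the negative binomial law. The sketch below assumes scipy's nbinom uses the same $(r, p)$ convention as the example:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    alpha, beta, t = 3.0, 2.0, 1.0

    # Risk lam ~ Gamma(shape alpha, rate beta), then a Poisson(lam*t) count
    lam = rng.gamma(shape=alpha, scale=1.0 / beta, size=500_000)
    counts = rng.poisson(lam * t)

    p = beta / (t + beta)
    print(counts.mean(), alpha * t / beta)                  # means agree
    print(counts.var(), alpha * t * (beta + t) / beta**2)   # variances agree
    print((counts == 2).mean(), stats.nbinom.pmf(2, alpha, p))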

3 Birth Process

3.1 Pure Birth Process

In the study of some growth phenomena, a birth may be introduced as an event where the probability of occurrence of an event in $(t, t+\Delta t)$ depends on the number of parent events already in existence. For example:

1. It may refer to a literal birth.
2. It may refer to a new case in an epidemic.
3. It may refer to the appearance of a new tumor cell, etc.

Let $X(t)$ denote the number of births up to time $t$, given that initially there are $k_0$ births. Then we may be interested in computing

    p_k(t) = P[X(t) = k \mid X(0) = k_0]

where $X(0)$ represents the initial number of events (births) in existence.

Assumptions underlying the pure birth process:

1. Given $X(t) = k$, the probability that a new event will occur in the time interval $(t, t+\Delta t)$ is $\lambda_k(t)\,\Delta t + o(\Delta t)$, where $\lambda_k(t)$ is a function of $k$ and $t$.

2. The probability that more than one event occurs in the time interval $(t, t+\Delta t)$ is $o(\Delta t)$.

Hence the probability of no change in $(t, t+\Delta t)$ is

    1 - \lambda_k(t)\,\Delta t - o(\Delta t)

In order to derive the differential equation for $p_k(t)$, we extend the time interval $(0, t)$ to the point $t+\Delta t$ and enumerate all the possible ways in which $k$ events can happen in $(0, t+\Delta t)$ as follows:

          In time interval (0, t)       In time interval (t, t + \Delta t)
    (i)   k events                      no event
    (ii)  k - 1 events                  one event
    (iii) k - i events, i \ge 2         i events, i \ge 2

and these probabilities are

    (i)   p_k(t)\,[1 - \lambda_k(t)\,\Delta t - o(\Delta t)]
    (ii)  p_{k-1}(t)\,[\lambda_{k-1}(t)\,\Delta t + o(\Delta t)]
    (iii) o(\Delta t)

respectively. Combining all these we get

    p_k(t+\Delta t) = p_k(t)[1 - \lambda_k(t)\,\Delta t - o(\Delta t)] + p_{k-1}(t)\,\lambda_{k-1}(t)\,\Delta t + o(\Delta t)
    p_k(t+\Delta t) - p_k(t) = -p_k(t)\,\lambda_k(t)\,\Delta t + p_{k-1}(t)\,\lambda_{k-1}(t)\,\Delta t + o(\Delta t)
    \lim_{\Delta t \to 0} \frac{p_k(t+\Delta t) - p_k(t)}{\Delta t} = -p_k(t)\,\lambda_k(t) + p_{k-1}(t)\,\lambda_{k-1}(t)

i.e.

    \frac{d}{dt} p_k(t) = -\lambda_k(t)\, p_k(t) + \lambda_{k-1}(t)\, p_{k-1}(t)

Further, for the initial state $k_0$,

    \frac{d}{dt} p_{k_0}(t) = -\lambda_{k_0}(t)\, p_{k_0}(t)

The initial conditions are

    p_{k_0}(0) = 1,  p_k(0) = 0  for  k > k_0

if initially there are $k_0$ events.

3.2 Homogeneous Pure Birth Process

The pure birth process is said to be homogeneous if $\lambda_k(t)$ is independent of $t$, i.e.

    \lambda_k(t) = \lambda_k

Then

    \frac{d}{dt} p_{k_0}(t) = -\lambda_{k_0}\, p_{k_0}(t)        (13)

and

    \frac{d}{dt} p_k(t) = -\lambda_k\, p_k(t) + \lambda_{k-1}\, p_{k-1}(t),  for k > k_0        (14)

Initial conditions are $p_{k_0}(0) = 1$, $p_k(0) = 0$, $k > k_0$.

These equations can be solved successively, assuming that all the $\lambda$'s are distinct. The solution for $p_k(t)$ is

    p_k(t) = (-1)^{k-k_0}\,\lambda_{k_0}\lambda_{k_0+1}\lambda_{k_0+2}\cdots\lambda_{k-1} \sum_{i=k_0}^{k} \frac{e^{-\lambda_i t}}{\prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j)},  k = k_0, k_0+1, k_0+2, \ldots        (15)

The above result will be proved by induction. For this we make use of the identity

    \sum_{i=k_0}^{k} \frac{1}{\prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j)} = 0,  if the \lambda_i are distinct.

We will not prove the identity; however, as an example it can be verified. Let $k_0 = 2$, $k = 4$. We have

    \sum_{i=2}^{4} \frac{1}{\prod_{j=2,\, j\ne i}^{4} (\lambda_i-\lambda_j)}
    = \frac{1}{(\lambda_2-\lambda_3)(\lambda_2-\lambda_4)} + \frac{1}{(\lambda_3-\lambda_2)(\lambda_3-\lambda_4)} + \frac{1}{(\lambda_4-\lambda_2)(\lambda_4-\lambda_3)}
    = \frac{(\lambda_3-\lambda_4) - (\lambda_2-\lambda_4) + (\lambda_2-\lambda_3)}{(\lambda_2-\lambda_3)(\lambda_2-\lambda_4)(\lambda_3-\lambda_4)}
    = 0

Now, by solving equation (13), we have

    \frac{d}{dt} \log p_{k_0}(t) = -\lambda_{k_0}
    \log p_{k_0}(t) = -\lambda_{k_0} t + C

Since $p_{k_0}(0) = 1$, we have $C = 0$. Thus $p_{k_0}(t) = e^{-\lambda_{k_0} t}$.

Putting $k = k_0$ in (15) we get

    p_{k_0}(t) = e^{-\lambda_{k_0} t}

which is the same as the solution of (13). Thus we see that (15) is true for $k = k_0$.
Let us suppose that (15) holds for $k - 1$, i.e.

    p_{k-1}(t) = (-1)^{k-k_0-1}\,\lambda_{k_0}\lambda_{k_0+1}\cdots\lambda_{k-2} \sum_{i=k_0}^{k-1} \frac{e^{-\lambda_i t}}{\prod_{j=k_0,\, j\ne i}^{k-1} (\lambda_i-\lambda_j)}        (16)

Now multiplying both sides of (14) by $e^{\lambda_k t}$, we get

    \frac{d}{dt} p_k(t)\, e^{\lambda_k t} = -\lambda_k\, p_k(t)\, e^{\lambda_k t} + \lambda_{k-1}\, p_{k-1}(t)\, e^{\lambda_k t}
    \frac{d}{dt}\left[ e^{\lambda_k t}\, p_k(t) \right] = \lambda_{k-1}\, p_{k-1}(t)\, e^{\lambda_k t}

Substituting the value of $p_{k-1}(t)$ from (16), we get

    \frac{d}{dt}\left[ e^{\lambda_k t}\, p_k(t) \right] = (-1)^{k-k_0-1}\,\lambda_{k_0}\lambda_{k_0+1}\cdots\lambda_{k-2}\lambda_{k-1} \sum_{i=k_0}^{k-1} \frac{e^{(\lambda_k-\lambda_i)t}}{\prod_{j=k_0,\, j\ne i}^{k-1} (\lambda_i-\lambda_j)}
    = (-1)^{k-k_0-1}\,\lambda_{k_0}\cdots\lambda_{k-1} \sum_{i=k_0}^{k-1} \frac{\frac{d}{dt}\left[e^{(\lambda_k-\lambda_i)t}\right]}{\prod_{j=k_0,\, j\ne i}^{k-1} (\lambda_i-\lambda_j)\,\{-(\lambda_i-\lambda_k)\}}

because $\frac{d}{dt}\, e^{(\lambda_k-\lambda_i)t} = (\lambda_k-\lambda_i)\, e^{(\lambda_k-\lambda_i)t}$. Absorbing the factor $(\lambda_i-\lambda_k)$ into the product (which now runs up to $j = k$) and the sign into the leading factor,

    = (-1)^{k-k_0}\,\lambda_{k_0}\cdots\lambda_{k-1} \sum_{i=k_0}^{k-1} \frac{\frac{d}{dt}\left[e^{(\lambda_k-\lambda_i)t}\right]}{\prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j)}

Integrating both sides with respect to $t$, we get

    e^{\lambda_k t}\, p_k(t) = (-1)^{k-k_0}\,\lambda_{k_0}\cdots\lambda_{k-1} \left[ \sum_{i=k_0}^{k-1} \frac{e^{(\lambda_k-\lambda_i)t}}{\prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j)} + C \right]        (17)

Since $p_k(0) = 0$ (initial condition), we get

    0 = (-1)^{k-k_0}\,\lambda_{k_0}\cdots\lambda_{k-1} \left[ \sum_{i=k_0}^{k-1} \frac{1}{\prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j)} + C \right]

From this, using the identity $\sum_{i=k_0}^{k} \frac{1}{\prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j)} = 0$, we get

    C = \frac{1}{\prod_{j=k_0}^{k-1} (\lambda_k-\lambda_j)}

Substituting the value of $C$ in the above, we get

    e^{\lambda_k t}\, p_k(t) = (-1)^{k-k_0}\,\lambda_{k_0}\cdots\lambda_{k-1} \left[ \sum_{i=k_0}^{k-1} \frac{e^{(\lambda_k-\lambda_i)t}}{\prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j)} + \frac{e^{(\lambda_k-\lambda_k)t}}{\prod_{j=k_0}^{k-1} (\lambda_k-\lambda_j)} \right]

because $e^{(\lambda_k-\lambda_k)t} = e^0 = 1$. Thus we get

    p_k(t) = (-1)^{k-k_0}\,\lambda_{k_0}\cdots\lambda_{k-1} \sum_{i=k_0}^{k} \frac{e^{-\lambda_i t}}{\prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j)}

Thus if the result holds for $k - 1$, then it also holds for $k$. Since it holds for $k_0$, it holds for $k_0 + 1$, and so on. Thus the required solution is

    p_k(t) = (-1)^{k-k_0}\,\lambda_{k_0}\cdots\lambda_{k-1} \sum_{i=k_0}^{k} \frac{e^{-\lambda_i t}}{\prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j)},  for k = k_0, k_0+1, k_0+2, \ldots
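Formula (15) can be checked against direct numerical integration of the system (13)-(14); a sketch with an arbitrary choice of distinct rates $\lambda_k$:

    import numpy as np
    from scipy.integrate import solve_ivp

    k0, K, t = 2, 6, 1.0
    lam = {k: 0.5 + 0.3 * k for k in range(k0, K + 1)}   # distinct rates

    def pk_formula(k, t):
        # Formula (15)
        coef = (-1)**(k - k0) * np.prod([lam[j] for j in range(k0, k)])
        terms = [np.exp(-lam[i] * t) /
                 np.prod([lam[i] - lam[j] for j in range(k0, k + 1) if j != i])
                 for i in range(k0, k + 1)]
        return coef * sum(terms)

    def rhs(t, p):
        # Equations (13)-(14) for states k0..K
        dp = np.empty_like(p)
        for idx, k in enumerate(range(k0, K + 1)):
            inflow = lam[k - 1] * p[idx - 1] if idx > 0 else 0.0
            dp[idx] = -lam[k] * p[idx] + inflow
        return dp

    p0 = np.zeros(K - k0 + 1); p0[0] = 1.0
    sol = solve_ivp(rhs, (0.0, t), p0, rtol=1e-10, atol=1e-12)
    print(sol.y[:, -1])                                  # integrated p_k(t)
    print([pk_formula(k, t) for k in range(k0, K + 1)])  # formula (15)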

3.3 Linear Birth Process (Yule Process)

We consider a population of members which can (by splitting or otherwise) give birth to new members but cannot die. Assume that in any short interval of length $\Delta t$, each member has probability $\lambda\,\Delta t + o(\Delta t)$ of creating a new member. The constant $\lambda$ determines the rate of increase of the population. If there is no interaction among the members of the population and at time $t$ the population size is $k$, then the probability of the birth of a new individual in the population in the time interval $(t, t+\Delta t)$ is $k\lambda\,\Delta t + o(\Delta t)$, and the probability of more than one birth is $o(\Delta t)$.

Example 6. Suppose there are $k$ individuals at time $t$; then the probability that a given individual will give a birth (occurrence of a new event) in time $(t, t+\Delta t)$ is $\lambda\,\Delta t + o(\Delta t)$. So the probability of occurrence of $j$ events in the time interval $(t, t+\Delta t)$ is

    \binom{k}{j} [\lambda\,\Delta t + o(\Delta t)]^j\, [1 - \lambda\,\Delta t - o(\Delta t)]^{k-j}

So for $j = 0$, the probability is

    [1 - \lambda\,\Delta t - o(\Delta t)]^k = 1 - k\lambda\,\Delta t + o(\Delta t)

For $j = 1$, the probability is

    k\,[\lambda\,\Delta t + o(\Delta t)]\,[1 - \lambda\,\Delta t - o(\Delta t)]^{k-1}
    = k\,[\lambda\,\Delta t + o(\Delta t)]\,[1 - (k-1)\lambda\,\Delta t - o(\Delta t)]
    = k\lambda\,\Delta t + o(\Delta t)

For $j \ge 2$, the probability is $o(\Delta t)$.

Now the probability $p_k(t)$ satisfies, in general, the equations

    \frac{d}{dt} p_k(t) = -\lambda_k\, p_k(t) + \lambda_{k-1}\, p_{k-1}(t)
    \frac{d}{dt} p_{k_0}(t) = -\lambda_{k_0}\, p_{k_0}(t)

Here $\lambda_k = k\lambda$. Thus the equations are

    \frac{d}{dt} p_k(t) = -k\lambda\, p_k(t) + (k-1)\lambda\, p_{k-1}(t)        (18)
    \frac{d}{dt} p_{k_0}(t) = -k_0\lambda\, p_{k_0}(t)        (19)
The solution of the above equations can be obtained with the help of the solution of the homogeneous pure birth process. In the pure birth process we have

    p_k(t) = (-1)^{k-k_0}\,\lambda_{k_0}\cdots\lambda_{k-1} \sum_{i=k_0}^{k} \frac{e^{-\lambda_i t}}{\prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j)},  for k = k_0, k_0+1, k_0+2, \ldots

Now putting $\lambda_i = i\lambda$, we have

    (-1)^{k-k_0}\,\lambda_{k_0}\cdots\lambda_{k-1} = (-1)^{k-k_0}\, k_0\lambda\,(k_0+1)\lambda\cdots(k-1)\lambda
    = (-1)^{k-k_0}\,\lambda^{k-k_0}\, k_0(k_0+1)\cdots(k-1)
    = (-1)^{k-k_0}\,\lambda^{k-k_0}\, \frac{1\cdot 2\cdot 3\cdots(k_0-1)\, k_0\cdots(k-1)}{1\cdot 2\cdot 3\cdots(k_0-1)}
    = (-1)^{k-k_0}\,\lambda^{k-k_0}\, \frac{(k-1)!}{(k_0-1)!\,(k-k_0)!}\,(k-k_0)!

so

    (-1)^{k-k_0}\,\lambda_{k_0}\cdots\lambda_{k-1} = (-1)^{k-k_0}\,\lambda^{k-k_0}\,\binom{k-1}{k_0-1}\,(k-k_0)!        (20)

Now,

    \prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j) = \prod_{j=k_0,\, j\ne i}^{k} \lambda(i-j) = \lambda^{k-k_0} \prod_{j=k_0,\, j\ne i}^{k} (i-j)
    = \lambda^{k-k_0}\left[(i-k_0)(i-k_0-1)\cdots 3\cdot 2\cdot 1\cdot(-1)(-2)\cdots\{-(k-i)\}\right]

Example 7. The above expression seems somewhat complicated; we give one example here to clarify it. Let $i = 3$, $k_0 = 1$, $k = 5$. Then

    \prod_{j=1,\, j\ne 3}^{5} (3-j) = (3-1)(3-2)(3-4)(3-5) = (2)(1)(-1)(-2) = 2\cdot 1\cdot(-1)(-2)

Hence

    \prod_{j=k_0,\, j\ne i}^{k} (\lambda_i-\lambda_j) = \lambda^{k-k_0}\left[(i-k_0)!\,(-1)^{k-i}\,(k-i)!\right]
    = \lambda^{k-k_0}\,(-1)^{k-i}\, \frac{(k-i)!\,(i-k_0)!}{(k-k_0)!}\,(k-k_0)!
    = \lambda^{k-k_0}\,(-1)^{k-i}\,(k-k_0)!\,\binom{k-k_0}{i-k_0}^{-1}        (21)

Thus

    p_k(t) = (-1)^{k-k_0}\,\lambda^{k-k_0}\,\binom{k-1}{k_0-1}\,(k-k_0)! \sum_{i=k_0}^{k} \frac{e^{-i\lambda t}\,\binom{k-k_0}{i-k_0}}{\lambda^{k-k_0}\,(-1)^{k-i}\,(k-k_0)!}
    = \binom{k-1}{k_0-1} \sum_{i=k_0}^{k} (-1)^{i-k_0}\, e^{-i\lambda t}\, \binom{k-k_0}{i-k_0}
    = \binom{k-1}{k_0-1}\, e^{-k_0\lambda t} \sum_{i=k_0}^{k} \left(-e^{-\lambda t}\right)^{i-k_0} \binom{k-k_0}{i-k_0}        (22)

since $(-1)^{k-k_0}/(-1)^{k-i} = (-1)^{i-k_0}$. Now let us evaluate

    \sum_{i=k_0}^{k} \left(-e^{-\lambda t}\right)^{i-k_0} \binom{k-k_0}{i-k_0}

Putting $l = i - k_0$ in the above, we get

    \sum_{l=0}^{k-k_0} \left(-e^{-\lambda t}\right)^{l} \binom{k-k_0}{l}

It is known that in general

    \sum_{k=0}^{n} \binom{n}{k} (-x)^k = (1-x)^n

Hence the sum equals $(1-e^{-\lambda t})^{k-k_0}$. Thus

    p_k(t) = \binom{k-1}{k-k_0}\, e^{-k_0\lambda t}\, (1-e^{-\lambda t})^{k-k_0},  for k = k_0, k_0+1, k_0+2, \ldots

(note that $\binom{k-1}{k_0-1} = \binom{k-1}{k-k_0}$). From the above it can be shown that

    Y(t) = X(t) - k_0

has a negative binomial distribution with parameters $r = k_0$ and $p = e^{-\lambda t}$, because with $r = k - k_0$,

    P[X(t) = k] = P[Y(t) = k-k_0] = P[Y(t) = r]

    P[Y(t) = r] = \binom{k-1}{r}\, (e^{-\lambda t})^{k_0}\, (1-e^{-\lambda t})^{r}
    = \binom{k_0+r-1}{r}\, (e^{-\lambda t})^{k_0}\, (1-e^{-\lambda t})^{r},  r = 0, 1, 2, 3, \ldots  (i.e. k = k_0+r)

Thus $Y(t)$ has a negative binomial distribution with parameters $k_0$ and $e^{-\lambda t}$, and

    E[Y(t)] = k_0\, \frac{1-e^{-\lambda t}}{e^{-\lambda t}} = k_0\,(e^{\lambda t}-1)
    E[X(t)] = k_0 + E[Y(t)] = k_0 + k_0(e^{\lambda t}-1) = k_0\, e^{\lambda t}

Similarly

    V[X(t)] = V[Y(t)] = k_0\, \frac{1-e^{-\lambda t}}{e^{-2\lambda t}} = k_0\,[e^{2\lambda t} - e^{\lambda t}] = k_0\,[e^{\lambda t}-1]\, e^{\lambda t}
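A quick simulation confirms $E[X(t)] = k_0 e^{\lambda t}$ and the variance formula; it uses the fact that, from state $k$, the waiting time to the next birth is exponential with rate $k\lambda$ (parameter values below are arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    lam, k0, t = 1.0, 2, 1.0

    def yule_size(rng, lam, k0, t):
        # From size k the time to the next birth is exponential(k*lam)
        k, clock = k0, 0.0
        while True:
            clock += rng.exponential(1.0 / (k * lam))
            if clock > t:
                return k
            k += 1

    x = np.array([yule_size(rng, lam, k0, t) for _ in range(100_000)])
    print(x.mean(), k0 * np.exp(lam * t))
    print(x.var(), k0 * (np.exp(lam * t) - 1) * np.exp(lam * t))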
Another Method of Solution of the Linear Birth Process (Method of P.G.F.)

For the linear birth process we have the equations

    \frac{d}{dt} p_{k_0}(t) = -k_0\lambda\, p_{k_0}(t)        (23)
    \frac{d}{dt} p_k(t) = -k\lambda\, p_k(t) + (k-1)\lambda\, p_{k-1}(t),  k > k_0        (24)

subject to the initial conditions

    p_{k_0}(0) = 1,  p_k(0) = 0,  k > k_0        (25)

Let $G_X(s,t)$ be the p.g.f. of the random variable $X(t)$:

    G_X(s,t) = \sum_{k=k_0}^{\infty} p_k(t)\, s^k        (26)

    \frac{\partial}{\partial t} G_X(s,t) = \sum_{k=k_0}^{\infty} \frac{d}{dt} p_k(t)\, s^k
    = -\lambda \sum_{k=k_0}^{\infty} k\, p_k(t)\, s^k + \lambda \sum_{k=k_0+1}^{\infty} (k-1)\, p_{k-1}(t)\, s^k        (27)
    = -\lambda s \sum_{k=k_0}^{\infty} k\, p_k(t)\, s^{k-1} + \lambda s^2 \sum_{k=k_0+1}^{\infty} (k-1)\, s^{k-2}\, p_{k-1}(t)
    = -\lambda s\, \frac{\partial}{\partial s} G_X(s,t) + \lambda s^2\, \frac{\partial}{\partial s} G_X(s,t)

so

    \frac{\partial}{\partial t} G_X(s,t) + \lambda s(1-s)\, \frac{\partial}{\partial s} G_X(s,t) = 0        (28)

Suppose $z$ is a function of $x$ and $y$ and we have the equation

    P\, \frac{\partial z}{\partial x} + Q\, \frac{\partial z}{\partial y} = R

Then we have the auxiliary (Lagrange) equations

    \frac{dx}{P} = \frac{dy}{Q} = \frac{dz}{R}

If $u(x,y,z) = c_1$ and $v(x,y,z) = c_2$ are independent solutions, then the general solution is $u = \phi(v)$.

Now we obtain the solution of equation (28) using the above described technique. Considering the auxiliary equations

    \frac{dt}{1} = \frac{ds}{\lambda s(1-s)} = \frac{dG_X(s,t)}{0}        (29)

the pair $\frac{dt}{1} = \frac{dG_X(s,t)}{0}$ gives

    G_X(s,t) = C  (C is constant)

Also, considering the second auxiliary equation, we have

    \frac{dt}{1} = \frac{ds}{\lambda s(1-s)}

i.e.

    \lambda\, dt = \frac{ds}{s(1-s)} = \left( \frac{1}{s} + \frac{1}{1-s} \right) ds

After integration we get

    \lambda t = \log s - \log(1-s) + C_1 = \log \frac{s}{1-s} + C_1
    \log \frac{s}{1-s} = \lambda t - C_1
    \frac{s}{1-s} = e^{\lambda t - C_1} = C_2\, e^{\lambda t}
    \frac{s}{1-s}\, e^{-\lambda t} = C_2        (30)

which is the second solution obtained from above. Then the general solution will be

    G_X(s,t) = \phi\left( \frac{s}{1-s}\, e^{-\lambda t} \right)        (31)

where $\phi$ is an arbitrary function. To obtain the particular solution, we use the initial condition given in (25):

    G_X(s,0) = \phi\left( \frac{s}{1-s} \right),  but  G_X(s,0) = s^{k_0}

so

    s^{k_0} = \phi\left( \frac{s}{1-s} \right)

Let us write

    \frac{s}{1-s} = \theta
    s = \theta - s\theta
    s(1+\theta) = \theta
    s = \frac{\theta}{1+\theta}

Thus

    \phi(\theta) = \left( \frac{\theta}{1+\theta} \right)^{k_0}

Now

    G_X(s,t) = \phi\left( \frac{s\, e^{-\lambda t}}{1-s} \right) = \left[ \frac{\frac{s\, e^{-\lambda t}}{1-s}}{1 + \frac{s\, e^{-\lambda t}}{1-s}} \right]^{k_0}
    = \left[ \frac{s\, e^{-\lambda t}}{1 - s + s\, e^{-\lambda t}} \right]^{k_0}
    = \left[ \frac{s\, e^{-\lambda t}}{1 - s(1-e^{-\lambda t})} \right]^{k_0}
    = s^{k_0} \left[ \frac{e^{-\lambda t}}{1 - s(1-e^{-\lambda t})} \right]^{k_0}

Let us consider a new variable

    Y(t) = X(t) - k_0,  so that  X(t) = Y(t) + k_0

Now the p.g.f. of $X(t)$ is $s^{k_0}$ times the p.g.f. of $Y(t)$, and

    \left[ \frac{e^{-\lambda t}}{1 - s(1-e^{-\lambda t})} \right]^{k_0}

is the p.g.f. of a negative binomial distribution with parameters $r = k_0$ and $p = e^{-\lambda t}$. Now $P[X(t) = k] = P[Y(t) = k-k_0]$. We know that in the case of a negative binomial distribution with parameters $\alpha$ and $p$,

    p_x = \binom{x+\alpha-1}{x}\, p^{\alpha}\, q^{x}

so

    p_k(t) = \binom{k-k_0+k_0-1}{k-k_0}\, (e^{-\lambda t})^{k_0}\, (1-e^{-\lambda t})^{k-k_0}
    = \binom{k-1}{k-k_0}\, e^{-k_0\lambda t}\, (1-e^{-\lambda t})^{k-k_0},  for k = k_0, k_0+1, k_0+2, \ldots
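That this $G_X(s,t)$ actually solves (28) with the right initial condition can be confirmed symbolically; a minimal sketch with an arbitrary $k_0$ (sympy should reduce the residual to zero):

    import sympy as sp

    s, t, lam = sp.symbols('s t lam', positive=True)
    k0 = 2   # an arbitrary initial size for the check

    G = (s * sp.exp(-lam * t) / (1 - s * (1 - sp.exp(-lam * t))))**k0

    # G should satisfy (28) and the initial condition G(s, 0) = s**k0
    pde = sp.diff(G, t) + lam * s * (1 - s) * sp.diff(G, s)
    print(sp.simplify(pde))             # expected: 0
    print(sp.simplify(G.subs(t, 0)))    # expected: s**2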

3.4 Time Dependent Linear Birth Process

In the linear birth process, we assume that $\lambda_k = k\lambda$. However, if we assume $\lambda_k(t) = k\lambda(t)$, then we get the time dependent linear birth process. In this case the differential equations become

    \frac{d}{dt} p_{k_0}(t) = -k_0\,\lambda(t)\, p_{k_0}(t)
    \frac{d}{dt} p_k(t) = -k\,\lambda(t)\, p_k(t) + (k-1)\,\lambda(t)\, p_{k-1}(t)

In this case

    G_X(s,t) = s^{k_0} \left[ \frac{e^{-\int_0^t \lambda(\tau)\, d\tau}}{1 - s\left(1 - e^{-\int_0^t \lambda(\tau)\, d\tau}\right)} \right]^{k_0}

Thus the distribution of $X(t)$ is of the same form as discussed above, with $\lambda t$ replaced by $\int_0^t \lambda(\tau)\, d\tau$.

4 Death Process

4.1 Pure Death Process

Let $X(t)$ denote the number of individuals present at time $t$, given that initially there are $k_0$ individuals. The basic assumptions underlying the pure death process are as follows:

1. Given $X(t) = k$, the probability that there will be a death during the interval $(t, t+\Delta t)$ is $\mu_k(t)\,\Delta t + o(\Delta t)$. Here $\mu_k(t)$ is known as the force of mortality and is a function of $k$ as well as of $t$.

2. The probability that there will be more than one death in the interval $(t, t+\Delta t)$ is $o(\Delta t)$.

Hence the probability of no change is $1 - \mu_k(t)\,\Delta t - o(\Delta t)$.

In order to write the differential equation for $p_k(t)$, we extend the interval $(0, t)$ up to $(t+\Delta t)$ and consider all the possibilities which may lead to the presence of $k$ individuals at time $t+\Delta t$. Thus we have, for $k < k_0$,

    p_k(t+\Delta t) = p_{k+1}(t)\,[\mu_{k+1}(t)\,\Delta t + o(\Delta t)] + p_k(t)\,[1 - \mu_k(t)\,\Delta t - o(\Delta t)] + o(\Delta t)
    = \mu_{k+1}(t)\, p_{k+1}(t)\,\Delta t + p_k(t) - \mu_k(t)\, p_k(t)\,\Delta t + o(\Delta t)

and

    p_{k_0}(t+\Delta t) = [1 - \mu_{k_0}(t)\,\Delta t - o(\Delta t)]\, p_{k_0}(t)

Thus

    \frac{d}{dt} p_{k_0}(t) = -\mu_{k_0}(t)\, p_{k_0}(t)
    \frac{d}{dt} p_k(t) = \mu_{k+1}(t)\, p_{k+1}(t) - \mu_k(t)\, p_k(t),  k = k_0-1, k_0-2, \ldots, 0

The initial conditions are $p_{k_0}(0) = 1$, $p_k(0) = 0$ for $k < k_0$.

4.2 Linear Death Process (Homogeneous)

Here we assume

    \mu_k(t) = k\mu

Then the differential equations become

    \frac{d}{dt} p_{k_0}(t) = -k_0\mu\, p_{k_0}(t)        (32)
    \frac{d}{dt} p_k(t) = (k+1)\mu\, p_{k+1}(t) - k\mu\, p_k(t),  k < k_0        (33)

Let $G_X(s,t)$ be the p.g.f. of the random variable $X(t)$:

    G_X(s,t) = \sum_{k=0}^{k_0} s^k\, p_k(t)        (34)

Now

    \frac{\partial}{\partial t} G_X(s,t) = \sum_{k=0}^{k_0} s^k\, \frac{d}{dt} p_k(t)
    = \mu \sum_{k=0}^{k_0-1} (k+1)\, p_{k+1}(t)\, s^k - \mu \sum_{k=0}^{k_0} k\, p_k(t)\, s^k
    = \mu \sum_{k=0}^{k_0-1} p_{k+1}(t)\,(k+1)\, s^k - \mu s \sum_{k=0}^{k_0} p_k(t)\, k\, s^{k-1}
    = \mu\, \frac{\partial}{\partial s} G_X(s,t) - \mu s\, \frac{\partial}{\partial s} G_X(s,t)
    = -\mu(s-1)\, \frac{\partial}{\partial s} G_X(s,t)

Thus

    \frac{\partial}{\partial t} G_X(s,t) + \mu(s-1)\, \frac{\partial}{\partial s} G_X(s,t) = 0        (35)

The above equation is solved by the technique discussed earlier. The auxiliary equations are

    \frac{dt}{1} = \frac{ds}{\mu(s-1)} = \frac{dG_X(s,t)}{0}        (36)

Solving $\frac{dt}{1} = \frac{dG_X(s,t)}{0}$, we get

    G_X(s,t) = C        (37)

Also, considering

    \frac{dt}{1} = \frac{ds}{\mu(s-1)}
    \mu\, dt = \frac{ds}{s-1}
    \mu t = \log(s-1) + C_2
    (s-1) = C_2\, e^{\mu t}
    e^{-\mu t}(s-1) = C_2        (38)

Thus the general solution is

    G_X(s,t) = \phi\left( e^{-\mu t}(s-1) \right)        (39)

For the particular solution, set $t = 0$ in $G_X(s,t)$; we get

    G_X(s,0) = s^{k_0} = \phi(s-1)        (40)

Put $s - 1 = \theta$, so $s = 1+\theta$ and

    \phi(\theta) = (1+\theta)^{k_0}        (41)

Let $\theta = e^{-\mu t}(s-1)$; from (39) and (41) we get

    G_X(s,t) = \left[ 1 + e^{-\mu t}(s-1) \right]^{k_0} = \left[ 1 - e^{-\mu t} + s\, e^{-\mu t} \right]^{k_0}        (42)

This is of the form $(q+ps)^n$. Thus it is the p.g.f. of a binomial distribution with parameters $k_0$ and $e^{-\mu t}$. Thus

    p_k(t) = P[X(t) = k \mid X(0) = k_0] = \binom{k_0}{k}\, e^{-k\mu t}\, (1-e^{-\mu t})^{k_0-k},  k = 0, 1, 2, \ldots, k_0        (43)

    E[X(t)] = k_0\, e^{-\mu t}        (44)
    V[X(t)] = k_0\, e^{-\mu t}\, (1-e^{-\mu t})        (45)
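The binomial law (43) also follows from giving each of the $k_0$ initial individuals an independent exponential($\mu$) lifetime; a short simulation sketch with arbitrary parameter values:

    import numpy as np

    rng = np.random.default_rng(4)
    mu, k0, t = 0.7, 10, 1.2

    # Each initial individual lives an independent exponential(mu) lifetime;
    # X(t) counts survivors, hence X(t) ~ Binomial(k0, exp(-mu*t)).
    lifetimes = rng.exponential(1.0 / mu, size=(200_000, k0))
    alive = (lifetimes > t).sum(axis=1)

    print(alive.mean(), k0 * np.exp(-mu * t))
    print(alive.var(), k0 * np.exp(-mu * t) * (1 - np.exp(-mu * t)))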

4.3 Time Dependent Linear Death Process

Here $\mu_k(t) = k\,\mu(t)$. We have the relationship

    \frac{\partial}{\partial t} G_X(s,t) + (s-1)\,\mu(t)\, \frac{\partial}{\partial s} G_X(s,t) = 0

and

    p_k(t) = \binom{k_0}{k}\, e^{-k \int_0^t \mu(\tau)\, d\tau}\, \left[ 1 - e^{-\int_0^t \mu(\tau)\, d\tau} \right]^{k_0-k},  k = 0, 1, 2, \ldots, k_0

5 The Generalized Birth and Death Process

In the birth and death process, we make the following assumptions. Given $X(t) = k$:

1. The probability that a birth occurs in the interval $(t, t+\Delta t)$ is $\lambda_k(t)\,\Delta t + o(\Delta t)$.

2. The probability that a death occurs in the interval $(t, t+\Delta t)$ is $\mu_k(t)\,\Delta t + o(\Delta t)$.

3. The probability that more than one change will occur in $(t, t+\Delta t)$ is $o(\Delta t)$.

4. Hence the probability of no change is $1 - \lambda_k(t)\,\Delta t - \mu_k(t)\,\Delta t - o(\Delta t)$.

Consequently we can write

    p_k(t+\Delta t) = p_k(t)\,[1 - \lambda_k(t)\,\Delta t - \mu_k(t)\,\Delta t] + p_{k-1}(t)\,\lambda_{k-1}(t)\,\Delta t + p_{k+1}(t)\,\mu_{k+1}(t)\,\Delta t + o(\Delta t)

and

    p_0(t+\Delta t) = p_0(t)\,[1 - \lambda_0(t)\,\Delta t - \mu_0(t)\,\Delta t] + p_1(t)\,\mu_1(t)\,\Delta t + o(\Delta t)

Consequently we get

    \frac{d}{dt} p_k(t) = -[\lambda_k(t)+\mu_k(t)]\, p_k(t) + \lambda_{k-1}(t)\, p_{k-1}(t) + \mu_{k+1}(t)\, p_{k+1}(t)        (46)

and

    \frac{d}{dt} p_0(t) = -[\lambda_0(t)+\mu_0(t)]\, p_0(t) + \mu_1(t)\, p_1(t)        (47)

Initial conditions are

    p_{k_0}(0) = 1,  p_k(0) = 0,  k \ne k_0        (48)

It is quite difficult to obtain a general solution from these equations; however, some special cases may be considered.
The case of Linear Growth:

If $\lambda_k(t) = k\lambda$ and $\mu_k(t) = k\mu$, then the process is known as the linear birth and death process. Thus in this case the differential equations become

    \frac{d}{dt} p_0(t) = \mu\, p_1(t)        (49)
    \frac{d}{dt} p_k(t) = \lambda(k-1)\, p_{k-1}(t) + \mu(k+1)\, p_{k+1}(t) - k(\lambda+\mu)\, p_k(t)        (50)

Initial conditions:

    p_{k_0}(0) = 1,  p_k(0) = 0,  k \ne k_0        (51)

Let $G_X(s,t)$ be the p.g.f. of the random variable $X(t)$:

    G_X(s,t) = \sum_{k=0}^{\infty} p_k(t)\, s^k        (52)

    \frac{\partial}{\partial t} G_X(s,t) = \sum_{k=0}^{\infty} \frac{d}{dt} p_k(t)\, s^k
    = \lambda \sum_{k=1}^{\infty} (k-1)\, p_{k-1}(t)\, s^k + \mu \sum_{k=0}^{\infty} (k+1)\, p_{k+1}(t)\, s^k - (\lambda+\mu) \sum_{k=1}^{\infty} k\, p_k(t)\, s^k
    = \lambda s^2 \sum_{k=1}^{\infty} (k-1)\, p_{k-1}(t)\, s^{k-2} + \mu \sum_{k=0}^{\infty} (k+1)\, p_{k+1}(t)\, s^k - (\lambda+\mu)\, s \sum_{k=1}^{\infty} k\, p_k(t)\, s^{k-1}

    \frac{\partial}{\partial t} G_X(s,t) = \lambda s^2\, \frac{\partial}{\partial s} G_X(s,t) + \mu\, \frac{\partial}{\partial s} G_X(s,t) - (\lambda+\mu)\, s\, \frac{\partial}{\partial s} G_X(s,t)        (53)

so we see that the p.g.f. $G_X(s,t)$ satisfies the differential equation

    \frac{\partial}{\partial t} G_X(s,t) + (\lambda s - \mu)(1-s)\, \frac{\partial}{\partial s} G_X(s,t) = 0        (54)

The auxiliary equations are given by

    \frac{dt}{1} = \frac{ds}{(\lambda s-\mu)(1-s)} = \frac{dG_X(s,t)}{0}        (55)

Now $\frac{dt}{1} = \frac{dG_X(s,t)}{0}$ gives

    G_X(s,t) = C        (56)

Also from $\frac{dt}{1} = \frac{ds}{(\lambda s-\mu)(1-s)}$, in the case $\lambda \ne \mu$, using the method of partial fractions we get

    dt = \left[ \frac{\lambda}{(\lambda-\mu)(\lambda s-\mu)} + \frac{1}{(\lambda-\mu)(1-s)} \right] ds
    (\lambda-\mu)\, dt = \left[ \frac{\lambda}{\lambda s-\mu} + \frac{1}{1-s} \right] ds

After integration,

    (\lambda-\mu)\, t = \log(\lambda s-\mu) - \log(1-s) + C_2
    = \log \frac{\lambda s-\mu}{1-s} + C_2
    \frac{\lambda s-\mu}{1-s} = e^{(\lambda-\mu)t}\, C_3
    e^{-(\lambda-\mu)t}\, \frac{\lambda s-\mu}{1-s} = C_4        (57)

Thus the general solution is

    G_X(s,t) = \phi\left( \frac{1-s}{\lambda s-\mu}\, e^{(\lambda-\mu)t} \right)        (58)

where $\phi$ is an arbitrary differentiable function.

Now using the initial condition at $t = 0$, we see from (58) that

    G_X(s,0) = s^{k_0} = \phi\left( \frac{1-s}{\lambda s-\mu} \right)        (59)

which holds at least for all $s$ with $|s| < 1$. Put

    \theta = \frac{1-s}{\lambda s-\mu}
    \lambda s\theta - \mu\theta = 1-s
    s(1+\lambda\theta) = 1+\mu\theta
    s = \frac{1+\mu\theta}{1+\lambda\theta}

so

    \phi(\theta) = \left( \frac{1+\mu\theta}{1+\lambda\theta} \right)^{k_0}        (60)

Let

    \theta = \frac{1-s}{\lambda s-\mu}\, e^{(\lambda-\mu)t}        (61)

From (58) and (61) we get

    G_X(s,t) = \left[ \frac{1 + \mu\, \frac{1-s}{\lambda s-\mu}\, e^{(\lambda-\mu)t}}{1 + \lambda\, \frac{1-s}{\lambda s-\mu}\, e^{(\lambda-\mu)t}} \right]^{k_0}        (62)

    = \left[ \frac{(\lambda s-\mu) + \mu(1-s)\, e^{(\lambda-\mu)t}}{(\lambda s-\mu) + \lambda(1-s)\, e^{(\lambda-\mu)t}} \right]^{k_0}        (63)

Let us put

    \alpha(t) = \frac{\mu\,[e^{(\lambda-\mu)t}-1]}{\lambda\, e^{(\lambda-\mu)t}-\mu},   \beta(t) = \frac{\lambda\,[e^{(\lambda-\mu)t}-1]}{\lambda\, e^{(\lambda-\mu)t}-\mu} = \frac{\lambda}{\mu}\,\alpha(t)

Then

    G_X(s,t) = \left[ \frac{\alpha(t) + \{1-\alpha(t)-\beta(t)\}\, s}{1-\beta(t)\, s} \right]^{k_0}        (64)

Now

    1-\beta(t)\, s = 1 - \frac{\lambda\,[e^{(\lambda-\mu)t}-1]}{\lambda\, e^{(\lambda-\mu)t}-\mu}\, s
    = \frac{\lambda\, e^{(\lambda-\mu)t} - \mu - \lambda s\, e^{(\lambda-\mu)t} + \lambda s}{\lambda\, e^{(\lambda-\mu)t}-\mu}
    = \frac{(\lambda s-\mu) + \lambda(1-s)\, e^{(\lambda-\mu)t}}{\lambda\, e^{(\lambda-\mu)t}-\mu}

Also

    \alpha(t) + \{1-\alpha(t)-\beta(t)\}\, s = \alpha(t) + \frac{\lambda\, e^{(\lambda-\mu)t} - \mu - \mu\, e^{(\lambda-\mu)t} + \mu - \lambda\, e^{(\lambda-\mu)t} + \lambda}{\lambda\, e^{(\lambda-\mu)t}-\mu}\, s
    = \alpha(t) + \frac{\lambda - \mu\, e^{(\lambda-\mu)t}}{\lambda\, e^{(\lambda-\mu)t}-\mu}\, s
    = \frac{\mu\, e^{(\lambda-\mu)t} - \mu + \lambda s - \mu s\, e^{(\lambda-\mu)t}}{\lambda\, e^{(\lambda-\mu)t}-\mu}
    = \frac{(\lambda s-\mu) + \mu(1-s)\, e^{(\lambda-\mu)t}}{\lambda\, e^{(\lambda-\mu)t}-\mu}

so (64) agrees with (63). Now the numerator of (64) is

    \left[ \alpha(t) + \{1-\alpha(t)-\beta(t)\}\, s \right]^{k_0} = \sum_{j=0}^{k_0} \binom{k_0}{j}\, \{\alpha(t)\}^{k_0-j}\, \{1-\alpha(t)-\beta(t)\}^{j}\, s^{j}

and the denominator expands as

    [1-\beta(t)\, s]^{-k_0} = \sum_{i=0}^{\infty} \binom{-k_0}{i}\, (-1)^i\, [\beta(t)]^{i}\, s^{i} = \sum_{i=0}^{\infty} \binom{k_0+i-1}{i}\, [\beta(t)]^{i}\, s^{i}

Now in

    (a_0 + a_1 s + a_2 s^2 + \cdots + a_{k_0} s^{k_0})(b_0 + b_1 s + b_2 s^2 + \cdots)

the coefficient of $s^k$ is $\sum_{j=0}^{\min(k_0,k)} a_j\, b_{k-j}$.

Example 8. In

    (a_0 + a_1 s + a_2 s^2 + a_3 s^3)(b_0 + b_1 s + b_2 s^2 + b_3 s^3 + \cdots)

the coefficient of $s^2$ is $a_0 b_2 + a_1 b_1 + a_2 b_0$, and the coefficient of $s^5$ is $a_0 b_5 + a_1 b_4 + a_2 b_3 + a_3 b_2$.

Therefore

    p_k(t) = \sum_{j=0}^{\min(k_0,k)} \binom{k_0}{j} \binom{k_0+k-j-1}{k-j}\, [\alpha(t)]^{k_0-j}\, [\beta(t)]^{k-j}\, [1-\alpha(t)-\beta(t)]^{j}

and $p_0(t) = [\alpha(t)]^{k_0}$.

Now $E[X(t)] = \left[ \frac{\partial}{\partial s} G_X(s,t) \right]_{s=1}$:

    E[X(t)] = k_0 \left[ \frac{\alpha(t) + \{1-\alpha(t)-\beta(t)\}\, s}{1-\beta(t)\, s} \right]^{k_0-1} \cdot \frac{\{1-\beta(t)s\}\{1-\alpha(t)-\beta(t)\} + \{\alpha(t)+\{1-\alpha(t)-\beta(t)\}s\}\,\beta(t)}{\{1-\beta(t)s\}^2} \Bigg|_{s=1}

    = k_0 \left[ \frac{1-\beta(t)}{1-\beta(t)} \right]^{k_0-1} \cdot \frac{\{1-\beta(t)\}\{1-\alpha(t)-\beta(t)\} + \{1-\beta(t)\}\,\beta(t)}{\{1-\beta(t)\}^2}

    = k_0\, \frac{1-\alpha(t)-\beta(t)+\beta(t)}{1-\beta(t)} = k_0\, \frac{1-\alpha(t)}{1-\beta(t)}

Now

    \frac{1-\alpha(t)}{1-\beta(t)} = \frac{\lambda\, e^{(\lambda-\mu)t}-\mu - \mu[e^{(\lambda-\mu)t}-1]}{\lambda\, e^{(\lambda-\mu)t}-\mu - \lambda[e^{(\lambda-\mu)t}-1]} = \frac{(\lambda-\mu)\, e^{(\lambda-\mu)t}}{\lambda-\mu} = e^{(\lambda-\mu)t}

so

    E[X(t)] = k_0\, e^{(\lambda-\mu)t}

Further,

    V[X(t)] = \left[ \frac{\partial^2}{\partial s^2} G_X(s,t) \right]_{s=1} + \left[ \frac{\partial}{\partial s} G_X(s,t) \right]_{s=1} - \left( \left[ \frac{\partial}{\partial s} G_X(s,t) \right]_{s=1} \right)^2

It can be shown that

    V[X(t)] = \frac{k_0\, [1-\alpha(t)]\, [\alpha(t)+\beta(t)]}{[1-\beta(t)]^2}

Since $\frac{1-\alpha(t)}{1-\beta(t)} = e^{(\lambda-\mu)t}$,

    V[X(t)] = k_0\, e^{(\lambda-\mu)t}\, \frac{\alpha(t)+\beta(t)}{1-\beta(t)}
    = k_0\, e^{(\lambda-\mu)t}\, \frac{(\lambda+\mu)[e^{(\lambda-\mu)t}-1]/(\lambda\, e^{(\lambda-\mu)t}-\mu)}{(\lambda-\mu)/(\lambda\, e^{(\lambda-\mu)t}-\mu)}
    = k_0\, e^{(\lambda-\mu)t}\, \frac{(\lambda+\mu)\,[e^{(\lambda-\mu)t}-1]}{\lambda-\mu}

Thus

    E[X(t)] = k_0\, e^{(\lambda-\mu)t}
    V[X(t)] = k_0\, e^{(\lambda-\mu)t}\, \frac{(\lambda+\mu)\,[e^{(\lambda-\mu)t}-1]}{\lambda-\mu}
Now if $\lambda = \mu$, then

    E[X(t)] = k_0

Also, for finding $V[X(t)]$ in the case $\lambda = \mu$, we expand the exponentials:

    V[X(t)] = k_0 \left[ 1 + (\lambda-\mu)t + \frac{(\lambda-\mu)^2 t^2}{2!} + \cdots \right] (\lambda+\mu) \left[ t + \frac{(\lambda-\mu)t^2}{2!} + \cdots \right]

Letting $\lambda \to \mu$, this gives

    V[X(t)] = k_0 \cdot 2\lambda t = 2 k_0 \lambda t
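For $\lambda \ne \mu$, the mean and variance formulas can be checked by a Gillespie-type simulation of the linear birth and death process; a sketch with arbitrary parameter values:

    import numpy as np

    rng = np.random.default_rng(5)
    lam, mu, k0, t = 1.2, 0.8, 5, 1.0

    def bd_size(rng, lam, mu, k0, t):
        # From state k wait exponential(k*(lam+mu)); birth w.p. lam/(lam+mu)
        k, clock = k0, 0.0
        while k > 0:                   # state 0 is absorbing
            clock += rng.exponential(1.0 / (k * (lam + mu)))
            if clock > t:
                break
            k += 1 if rng.random() < lam / (lam + mu) else -1
        return k

    x = np.array([bd_size(rng, lam, mu, k0, t) for _ in range(100_000)])
    r = lam - mu
    print(x.mean(), k0 * np.exp(r * t))
    print(x.var(), k0 * np.exp(r * t) * (lam + mu) * (np.exp(r * t) - 1) / r)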

5.1 Limiting Behavior of Birth and Death Process

We have

    G_X(s,t) = \left[ \frac{(\lambda s-\mu) + \mu(1-s)\, e^{(\lambda-\mu)t}}{(\lambda s-\mu) + \lambda(1-s)\, e^{(\lambda-\mu)t}} \right]^{k_0}

The probability of ultimate extinction is $\lim_{t\to\infty} p_0(t)$, where $p_0(t) = \{\alpha(t)\}^{k_0}$. Thus

    \lim_{t\to\infty} p_0(t) = \left( \frac{\mu}{\lambda} \right)^{k_0}  if \lambda > \mu
    = 1  if \lambda \le \mu

(See page 125, Medhi.) Further, as $t \to \infty$,

    E[X(t)] \to \infty  if \lambda > \mu
    E[X(t)] = k_0  if \lambda = \mu
    E[X(t)] \to 0  if \lambda < \mu

Also

    V[X(t)] \to \infty  if \lambda > \mu
    V[X(t)] \to 0  if \lambda < \mu

5.2 Effect of Migration on Birth and Death Process

For migration there are two situations:

1. Emigration
2. Immigration

In emigration, individuals go out of the population; consequently the size of the population decreases due to emigration. In immigration, individuals come into the population from outside; consequently the population increases due to immigration.

It can be supposed that the probability that a person will emigrate from the population in the time interval $(t, t+\Delta t)$ depends on the size of the population. Thus this probability may be represented as

    \mu'_k(t)\,\Delta t + o(\Delta t)

Hence it can be absorbed into an adjusted force of mortality, and we can write

    \mu''_k(t) = \mu_k(t) + \mu'_k(t)

Thus in the birth and death process, the force of mortality $\mu_k(t)$ can be replaced by $\mu''_k(t)$. However, in the case of immigration the rate may not depend on the size of the population, and consequently the effect of immigration cannot be adjusted into $\lambda_k(t)$ in the same way.

5.3 The Effect of Immigration

In many biological populations, some form of migration is an essential characteristic; consequently we introduce this phenomenon into the birth and death process. So far as emigration is concerned, it is clear that it can be allowed for by a suitable adjustment of the death rate, since for deaths and emigration together we can take the chance of a single loss in the time interval $(t, t+\Delta t)$ as proportional to $X(t)$, where $X(t)$ denotes the size of the population at time $t$.

With immigration, on the other hand, the situation is different, for the simplest reasonable assumption about immigration is that it occurs randomly, without being affected by the size of the population at time $t$. It can be considered as a Poisson process independent of the population size.

We will consider a time homogeneous birth and death process with a random accession of immigrants (a Poisson process) with immigration rate $\nu$. We denote:

    X(t) = k : the size of the population at time t
    X(0) = k_0 : the initial population is k_0
    p_k(t) = P[X(t) = k \mid X(0) = k_0]

The basic assumptions underlying a time homogeneous linear birth and death process with the effect of immigration are: given $X(t) = k$,

1. The probability that there will be an increase in the population, either due to a birth or due to immigration, in the interval $(t, t+\Delta t)$ is $k\lambda\,\Delta t + \nu\,\Delta t + o(\Delta t)$.

2. The probability that the size of the population will decrease by one unit in the interval $(t, t+\Delta t)$ is $k\mu\,\Delta t + o(\Delta t)$.

3. The probability that more than one change will occur in the time interval $(t, t+\Delta t)$ is $o(\Delta t)$.

4. The probability of no change in the interval $(t, t+\Delta t)$ is $1 - (k\lambda+\nu)\,\Delta t - k\mu\,\Delta t - o(\Delta t)$.

Hence

    p_k(t+\Delta t) = p_k(t)\,[1 - (k\lambda+\nu)\,\Delta t - k\mu\,\Delta t] + p_{k+1}(t)\,(k+1)\mu\,\Delta t + p_{k-1}(t)\,[(k-1)\lambda\,\Delta t + \nu\,\Delta t] + o(\Delta t)
    p_0(t+\Delta t) = p_0(t)\,[1 - \nu\,\Delta t] + p_1(t)\,\mu\,\Delta t + o(\Delta t)

By transferring $p_k(t)$ to the LHS, dividing by $\Delta t$ and taking the limit as $\Delta t \to 0$, we have

    \frac{d}{dt} p_k(t) = -k(\lambda+\mu)\, p_k(t) + (k+1)\mu\, p_{k+1}(t) + (k-1)\lambda\, p_{k-1}(t) + \nu\, p_{k-1}(t) - \nu\, p_k(t)
    \frac{d}{dt} p_0(t) = \mu\, p_1(t) - \nu\, p_0(t)

with the initial conditions

    p_{k_0}(0) = 1,  p_k(0) = 0,  k \ne k_0
Let $G_X(s,t)$ be the p.g.f. of the random variable $X(t)$:

    G_X(s,t) = \sum_{k=0}^{\infty} p_k(t)\, s^k

    \frac{\partial}{\partial t} G_X(s,t) = \sum_{k=0}^{\infty} \frac{d}{dt} p_k(t)\, s^k
    = \sum_{k=0}^{\infty} \left[ -k(\lambda+\mu)\, p_k(t) + (k+1)\mu\, p_{k+1}(t) + (k-1)\lambda\, p_{k-1}(t) + \nu\, p_{k-1}(t) - \nu\, p_k(t) \right] s^k
    = -(\lambda+\mu)\, s \sum_{k=0}^{\infty} k\, p_k(t)\, s^{k-1} + \mu \sum_{k=0}^{\infty} (k+1)\, p_{k+1}(t)\, s^k + \lambda s^2 \sum_{k=1}^{\infty} (k-1)\, p_{k-1}(t)\, s^{k-2} + \nu s \sum_{k=1}^{\infty} p_{k-1}(t)\, s^{k-1} - \nu \sum_{k=0}^{\infty} p_k(t)\, s^k
    = -(\lambda+\mu)\, s\, \frac{\partial}{\partial s} G_X(s,t) + \mu\, \frac{\partial}{\partial s} G_X(s,t) + \lambda s^2\, \frac{\partial}{\partial s} G_X(s,t) + \nu s\, G_X(s,t) - \nu\, G_X(s,t)

Thus we see that $G_X(s,t)$ satisfies the differential equation

    \frac{\partial}{\partial t} G_X(s,t) + (\lambda s-\mu)(1-s)\, \frac{\partial}{\partial s} G_X(s,t) = \nu(s-1)\, G_X(s,t)
so the auxiliary equations for solving this equation will be

    \frac{dt}{1} = \frac{ds}{(\lambda s-\mu)(1-s)} = \frac{dG_X(s,t)}{\nu(s-1)\, G_X(s,t)}

Now considering

    \frac{dt}{1} = \frac{ds}{(\lambda s-\mu)(1-s)}

we get, as before,

    dt = \left[ \frac{\lambda}{(\lambda-\mu)(\lambda s-\mu)} + \frac{1}{(\lambda-\mu)(1-s)} \right] ds

so we have

    (\lambda-\mu)\, t + C = \log(\lambda s-\mu) - \log(1-s) = \log \frac{\lambda s-\mu}{1-s}
    \frac{\lambda s-\mu}{1-s} = e^{(\lambda-\mu)t+C}
    \frac{1-s}{\lambda s-\mu}\, e^{(\lambda-\mu)t} = C_1

Now considering

    \frac{ds}{(\lambda s-\mu)(1-s)} = \frac{dG_X(s,t)}{\nu(s-1)\, G_X(s,t)}

i.e.

    -\frac{\nu\, ds}{\lambda s-\mu} = \frac{dG_X(s,t)}{G_X(s,t)}

we get

    -\frac{\nu}{\lambda}\, \log(\lambda s-\mu) = \log G_X(s,t) + C_2

or

    \log(\lambda s-\mu)^{\nu/\lambda} + \log G_X(s,t) = C_3
    \log\left[ (\lambda s-\mu)^{\nu/\lambda}\, G_X(s,t) \right] = C_3
    (\lambda s-\mu)^{\nu/\lambda}\, G_X(s,t) = C_4

Consequently the most general solution is given as

    (\lambda s-\mu)^{\nu/\lambda}\, G_X(s,t) = \phi\left( \frac{1-s}{\lambda s-\mu}\, e^{(\lambda-\mu)t} \right)

where $\phi$ is an arbitrary function which is obtained from the initial condition, i.e. $G_X(s,0) = s^{k_0}$, i.e.

    (\lambda s-\mu)^{\nu/\lambda}\, s^{k_0} = \phi\left( \frac{1-s}{\lambda s-\mu} \right)
Let us put

    \frac{1-s}{\lambda s-\mu} = \theta
    1-s = \lambda s\theta - \mu\theta
    1+\mu\theta = s + \lambda s\theta = s(1+\lambda\theta)
    s = \frac{1+\mu\theta}{1+\lambda\theta}

so

    \phi(\theta) = \left( \lambda\, \frac{1+\mu\theta}{1+\lambda\theta} - \mu \right)^{\nu/\lambda} \left( \frac{1+\mu\theta}{1+\lambda\theta} \right)^{k_0}
    = \left( \frac{\lambda-\mu}{1+\lambda\theta} \right)^{\nu/\lambda} \left( \frac{1+\mu\theta}{1+\lambda\theta} \right)^{k_0}

Thus the general solution is

    (\lambda s-\mu)^{\nu/\lambda}\, G_X(s,t) = \left\{ \frac{\lambda-\mu}{1 + \lambda\, \frac{1-s}{\lambda s-\mu}\, e^{(\lambda-\mu)t}} \right\}^{\nu/\lambda} \left\{ \frac{1 + \mu\, \frac{1-s}{\lambda s-\mu}\, e^{(\lambda-\mu)t}}{1 + \lambda\, \frac{1-s}{\lambda s-\mu}\, e^{(\lambda-\mu)t}} \right\}^{k_0}

    (\lambda s-\mu)^{\nu/\lambda}\, G_X(s,t) = \frac{(\lambda-\mu)^{\nu/\lambda}\, \left[(\lambda s-\mu) + \mu(1-s)\, e^{(\lambda-\mu)t}\right]^{k_0}\, (\lambda s-\mu)^{\nu/\lambda}}{\left[(\lambda s-\mu) + \lambda(1-s)\, e^{(\lambda-\mu)t}\right]^{\nu/\lambda + k_0}}

so

    G_X(s,t) = \frac{(\lambda-\mu)^{\nu/\lambda}\, \left[(\lambda s-\mu) + \mu(1-s)\, e^{(\lambda-\mu)t}\right]^{k_0}}{\left[(\lambda s-\mu) + \lambda(1-s)\, e^{(\lambda-\mu)t}\right]^{k_0 + \nu/\lambda}}

Special Case: When $k_0 = 0$,

    G_X(s,t) = \frac{(\lambda-\mu)^{\nu/\lambda}}{\left[(\lambda s-\mu) + \lambda(1-s)\, e^{(\lambda-\mu)t}\right]^{\nu/\lambda}}
    = (\lambda-\mu)^{\nu/\lambda} \left[ (\lambda s-\mu) + \lambda(1-s)\, e^{(\lambda-\mu)t} \right]^{-\nu/\lambda}
    = (\lambda-\mu)^{\nu/\lambda} \left[ \lambda\, e^{(\lambda-\mu)t} - \mu - \lambda\,(e^{(\lambda-\mu)t}-1)\, s \right]^{-\nu/\lambda}
    = (\lambda-\mu)^{\nu/\lambda} \left[ \lambda\, e^{(\lambda-\mu)t} - \mu \right]^{-\nu/\lambda} \left[ 1 - \frac{\lambda\,(e^{(\lambda-\mu)t}-1)\, s}{\lambda\, e^{(\lambda-\mu)t} - \mu} \right]^{-\nu/\lambda}
    = \left[ \frac{\lambda-\mu}{\lambda\, e^{(\lambda-\mu)t} - \mu} \right]^{\nu/\lambda} \left[ 1 - \frac{\lambda\,(e^{(\lambda-\mu)t}-1)\, s}{\lambda\, e^{(\lambda-\mu)t} - \mu} \right]^{-\nu/\lambda}

which is the p.g.f. of a negative binomial distribution with $r = \nu/\lambda$ and

    p = \frac{\lambda-\mu}{\lambda\, e^{(\lambda-\mu)t} - \mu}

since the p.g.f. of a negative binomial distribution is $\left[ \frac{p}{1-qs} \right]^r$, or $p^r (1-qs)^{-r}$. Obviously, if

    p = \frac{\lambda-\mu}{\lambda\, e^{(\lambda-\mu)t} - \mu}

then

    q = 1 - \frac{\lambda-\mu}{\lambda\, e^{(\lambda-\mu)t} - \mu} = \frac{\lambda\,[e^{(\lambda-\mu)t}-1]}{\lambda\, e^{(\lambda-\mu)t} - \mu}

Then

    Mean = r\, \frac{q}{p} = \frac{\nu}{\lambda} \cdot \frac{\lambda\,[e^{(\lambda-\mu)t}-1]}{\lambda\, e^{(\lambda-\mu)t} - \mu} \cdot \frac{\lambda\, e^{(\lambda-\mu)t} - \mu}{\lambda-\mu} = \frac{\nu\,[e^{(\lambda-\mu)t}-1]}{\lambda-\mu}

and

    Variance = r\, \frac{q}{p^2} = \frac{\nu\,[e^{(\lambda-\mu)t}-1]}{\lambda-\mu} \cdot \frac{\lambda\, e^{(\lambda-\mu)t} - \mu}{\lambda-\mu} = \frac{\nu\,[e^{(\lambda-\mu)t}-1]\,[\lambda\, e^{(\lambda-\mu)t} - \mu]}{(\lambda-\mu)^2}

Expanding the mean,

    Mean = \frac{\nu}{\lambda-\mu} \left[ 1 + (\lambda-\mu)t + \frac{(\lambda-\mu)^2 t^2}{2!} + \frac{(\lambda-\mu)^3 t^3}{3!} + \cdots - 1 \right]
    = \nu \left[ t + \frac{(\lambda-\mu)t^2}{2!} + \frac{(\lambda-\mu)^2 t^3}{3!} + \cdots \right]

If $\lambda = \mu$, then Mean $= \nu t$.

5.4 Limiting Size of Population When t \to \infty

    E[X(t)] = \frac{\nu\,[e^{(\lambda-\mu)t}-1]}{\lambda-\mu}

If $\lambda < \mu$, then $e^{(\lambda-\mu)t} \to 0$ as $t \to \infty$, so

    \lim_{t\to\infty} E[X(t)] = \frac{\nu\,(0-1)}{\lambda-\mu} = \frac{\nu}{\mu-\lambda},  if \lambda < \mu

while

    \lim_{t\to\infty} E[X(t)] = \infty,  if \lambda > \mu

Also, for $\lambda = \mu$, $E[X(t)] = \nu t \to \infty$ as $t \to \infty$.

Further,

    V[X(t)] = \frac{\nu\,[e^{(\lambda-\mu)t}-1]\,[\lambda\, e^{(\lambda-\mu)t} - \mu]}{(\lambda-\mu)^2}

so

    \lim_{t\to\infty} V[X(t)] = \frac{\nu\,(0-1)(0-\mu)}{(\lambda-\mu)^2} = \frac{\nu\mu}{(\lambda-\mu)^2},  if \lambda < \mu
    \lim_{t\to\infty} V[X(t)] = \infty,  if \lambda > \mu

Also, when $\lambda = \mu$, expanding the exponentials gives

    V[X(t)] = \nu \left[ t + \frac{(\lambda-\mu)t^2}{2!} + \cdots \right] \left[ 1 + \lambda t + \cdots \right] = \nu t\,(1+\lambda t)

which also tends to $\infty$ as $t \to \infty$.

6 Poisson Process and Related Distributions

6.1 Interarrival Time

With a Poisson process $\{X(t), t \ge 0\}$, where $X(t)$ denotes the number of occurrences of an event $E$ by epoch $t$, there is associated a random variable: the interval $X$ between two successive occurrences of $E$. We proceed to show that $X$ has a negative exponential distribution.

Theorem 1. The interval between two successive occurrences of a Poisson process $\{X(t), t \ge 0\}$ having parameter $\lambda$ has a negative exponential distribution with mean $1/\lambda$.

Proof: Let $X$ be the random variable representing the interval between two successive occurrences of $\{X(t), t \ge 0\}$ and let $\Pr(X \le x) = F(x)$ be its distribution function. Let us denote two successive events by $E_i$ and $E_{i+1}$, and suppose that $E_i$ occurred at the instant $t_i$. Then

    P[X > x] = P[E_{i+1} did not occur in (t_i, t_i+x) given that E_i occurred at the instant t_i]
    = P[E_{i+1} did not occur in (t_i, t_i+x) \mid X(t_i) = i]  (because of the postulate of independence)
    = P[no occurrence takes place in an interval (t_i, t_i+x) of length x \mid X(t_i) = i]
    = P[X(x) = 0 \mid X(t_i) = i] = p_0(x) = e^{-\lambda x}

Since $i$ is arbitrary, we have for the interval $X$ between any two successive occurrences

    F(x) = P[X \le x] = 1 - P[X > x] = 1 - e^{-\lambda x},  x > 0

The density function is

    f(x) = \frac{d}{dx} F(x) = \lambda\, e^{-\lambda x},  x > 0

It can further be proved that if $X_i$ denotes the interval between $E_i$ and $E_{i+1}$, $i = 1, 2, \ldots$, then $X_1, X_2, \ldots$ are also independent.

Theorem 2. The intervals between successive occurrences (called interarrival times) of a Poisson process (with mean $\lambda t$) are identically and independently distributed random variables which follow the negative exponential law with mean $1/\lambda$.

The converse also holds; it is given in the next theorem below. (These two theorems give a characterisation of the Poisson process.)

Theorem 3. If the intervals between successive occurrences of an event $E$ are independently distributed with a common exponential distribution with mean $1/\lambda$, then the events $E$ form a Poisson process with mean $\lambda t$.

Proof: Let $Z_n$ denote the interval between the $(n-1)$th and $n$th occurrences of a process $\{X(t), t \ge 0\}$ and let the sequence $Z_1, Z_2, \ldots$ be independently and identically distributed random variables having the negative exponential distribution with mean $1/\lambda$. The sum $W_n = Z_1 + Z_2 + \cdots + Z_n$ is the waiting time up to the $n$th occurrence, i.e. the time from the origin to the $n$th subsequent occurrence. $W_n$ has a gamma distribution with parameters $\lambda$, $n$. The p.d.f. $g(x)$ and the distribution function $F_{W_n}$ are given respectively by

    g(x) = \frac{\lambda^n\, x^{n-1}\, e^{-\lambda x}}{\Gamma(n)},  x > 0

and

    F_{W_n}(t) = P[W_n \le t] = \int_0^t g(x)\, dx

The events $\{X(t) < n\}$ and $\{W_n = Z_1 + \cdots + Z_n > t\}$ are equivalent. Hence the distribution functions $F_{X(t)}$ and $F_{W_n}$ satisfy the relation

    F_{W_n}(t) = P[W_n \le t] = 1 - P[W_n > t]
    = 1 - P[X(t) < n] = 1 - P[X(t) \le n-1]
    = 1 - F_{X(t)}(n-1)

Hence the distribution function of $X(t)$ is given by

    F_{X(t)}(n-1) = 1 - F_{W_n}(t)
    = 1 - \int_0^t \frac{\lambda^n\, x^{n-1}\, e^{-\lambda x}}{\Gamma(n)}\, dx
    = 1 - \frac{1}{\Gamma(n)} \int_0^{\lambda t} y^{n-1}\, e^{-y}\, dy
    = \frac{1}{\Gamma(n)} \int_{\lambda t}^{\infty} y^{n-1}\, e^{-y}\, dy
    = \sum_{j=0}^{n-1} \frac{e^{-\lambda t}\, (\lambda t)^j}{j!}  (integrating by parts)

Thus the probability law of $X(t)$ is

    p_n(t) = P[X(t) = n] = F_{X(t)}(n) - F_{X(t)}(n-1)
    = \sum_{j=0}^{n} \frac{e^{-\lambda t}\, (\lambda t)^j}{j!} - \sum_{j=0}^{n-1} \frac{e^{-\lambda t}\, (\lambda t)^j}{j!}
    = \frac{e^{-\lambda t}\, (\lambda t)^n}{n!},  n = 0, 1, 2, \ldots

Thus the process $\{X(t), t \ge 0\}$ is a Poisson process with mean $\lambda t$.

Note: A Poisson process has independent, exponentially distributed interarrival times and gamma distributed waiting times.
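The equivalence of $\{X(t) < n\}$ and $\{W_n > t\}$ used in this proof can be checked numerically; a sketch using scipy's gamma and Poisson distributions (parameter values are arbitrary):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    lam, n, t = 1.5, 4, 2.0

    # W_n = Z_1 + ... + Z_n with Z_i iid exponential(lam): Gamma(n, 1/lam)
    W = rng.exponential(1.0 / lam, size=(300_000, n)).sum(axis=1)

    print((W <= t).mean())                           # P[W_n <= t], simulated
    print(stats.gamma.cdf(t, a=n, scale=1.0 / lam))  # gamma cdf
    print(1 - stats.poisson.cdf(n - 1, lam * t))     # 1 - P[X(t) <= n-1]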

7 Properties of Poisson Process

7.1 Additive property

The sum of two independent Poisson processes is a Poisson process.

Let $X_1(t)$ and $X_2(t)$ be two Poisson processes with parameters $\lambda_1$ and $\lambda_2$ respectively, and let

    X(t) = X_1(t) + X_2(t)

The p.g.f. of $X_i(t)$ $(i = 1, 2)$ is

    E[s^{X_i(t)}] = e^{\lambda_i (s-1) t}

The p.g.f. of $X(t)$ is

    E[s^{X(t)}] = E[s^{X_1(t)+X_2(t)}]

and because of the independence of $X_1(t)$ and $X_2(t)$, we have

    E[s^{X(t)}] = E[s^{X_1(t)}]\, E[s^{X_2(t)}]
    = \left[ e^{\lambda_1 (s-1) t} \right] \left[ e^{\lambda_2 (s-1) t} \right]
    = e^{(\lambda_1+\lambda_2)(s-1) t}

Thus $X(t)$ is a Poisson process with parameter $\lambda_1 + \lambda_2$. The result can also be proved as follows:

    P[X(t) = n] = \sum_{r=0}^{n} P[X_1(t) = r]\, P[X_2(t) = n-r]
    = \sum_{r=0}^{n} \frac{e^{-\lambda_1 t}\, (\lambda_1 t)^r}{r!} \cdot \frac{e^{-\lambda_2 t}\, (\lambda_2 t)^{n-r}}{(n-r)!}
    = \frac{e^{-(\lambda_1+\lambda_2)t}\, [(\lambda_1+\lambda_2)t]^n}{n!},  n \ge 0        (65)

Hence $X(t)$ is a Poisson process with parameter $\lambda_1 + \lambda_2$.
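A small numeric sketch of the additive property: the sum of two independent Poisson counts should be indistinguishable in distribution from a single Poisson count with the summed parameter (values below are arbitrary):

    import numpy as np

    rng = np.random.default_rng(7)
    lam1, lam2, t, n = 0.9, 1.6, 2.0, 300_000

    x = rng.poisson(lam1 * t, n) + rng.poisson(lam2 * t, n)  # X1(t) + X2(t)
    y = rng.poisson((lam1 + lam2) * t, n)                    # one Poisson

    for k in range(5):                  # the two empirical pmfs should agree
        print(k, (x == k).mean(), (y == k).mean())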

7.2 Difference of two independent Poisson processes

The probability distribution of $X(t) = X_1(t) - X_2(t)$ is given by

    P[X(t) = n] = e^{-(\lambda_1+\lambda_2)t} \left( \frac{\lambda_1}{\lambda_2} \right)^{n/2} I_{|n|}(2t\sqrt{\lambda_1\lambda_2}),  n = 0, \pm 1, \pm 2, \ldots        (66)

where

    I_n(x) = \sum_{r=0}^{\infty} \frac{(x/2)^{2r+n}}{r!\, \Gamma(r+n+1)}        (67)

is the modified Bessel function of order $n$ ($n \ge -1$).

Proof. (i) The p.g.f. of $X(t)$ is

    E[s^{X(t)}] = E[s^{X_1(t)-X_2(t)}] = E[s^{X_1(t)}]\, E[s^{-X_2(t)}]

because of the independence of $X_1(t)$ and $X_2(t)$. Thus

    E[s^{X(t)}] = E[s^{X_1(t)}]\, E[(1/s)^{X_2(t)}]
    = e^{\lambda_1 t(s-1)}\, e^{\lambda_2 t(s^{-1}-1)}
    = e^{-(\lambda_1+\lambda_2)t}\, e^{\lambda_1 ts + \lambda_2 t/s}        (68)

$P[X(t) = n]$ is given by the coefficient of $s^n$ in the expansion of the right hand side of (68) as a series in positive and negative powers of $s$.

(ii) $P[X(t) = n]$ can also be obtained directly as follows:

    P[X(t) = n] = \sum_{r=0}^{\infty} P[X_1(t) = n+r]\, P[X_2(t) = r]
    = \sum_{r=0}^{\infty} \frac{e^{-\lambda_1 t}\, (\lambda_1 t)^{n+r}}{(n+r)!} \cdot \frac{e^{-\lambda_2 t}\, (\lambda_2 t)^r}{r!}
    = e^{-(\lambda_1+\lambda_2)t} \sum_{r=0}^{\infty} \frac{(\lambda_1\lambda_2 t^2)^r\, (\lambda_1 t)^n}{(n+r)!\, r!}

Now,

    (\lambda_1\lambda_2 t^2)^r\, (\lambda_1 t)^n = (t\sqrt{\lambda_1\lambda_2})^{2r}\, (\lambda_1 t)^n
    = (t\sqrt{\lambda_1\lambda_2})^{2r+n} \cdot \frac{(\lambda_1 t)^n}{(t\sqrt{\lambda_1\lambda_2})^n}
    = (t\sqrt{\lambda_1\lambda_2})^{2r+n}\, \lambda_1^{n/2}\, \lambda_2^{-n/2}
    = \left( \frac{\lambda_1}{\lambda_2} \right)^{n/2} (t\sqrt{\lambda_1\lambda_2})^{2r+n}

Thus,

    P[X(t) = n] = e^{-(\lambda_1+\lambda_2)t} \left( \frac{\lambda_1}{\lambda_2} \right)^{n/2} \sum_{r=0}^{\infty} \frac{(t\sqrt{\lambda_1\lambda_2})^{2r+n}}{r!\,(r+n)!}
    = e^{-(\lambda_1+\lambda_2)t} \left( \frac{\lambda_1}{\lambda_2} \right)^{n/2} I_{|n|}(2t\sqrt{\lambda_1\lambda_2})

Thus it may be noted that:

1. The difference of two independent Poisson processes is not a Poisson process;

2. $I_{-n}(t) = I_n(t) = I_{|n|}(t)$, $n = 1, 2, 3, \ldots$;

3. The first two moments of $X(t)$ are given by

    E[X(t)] = (\lambda_1-\lambda_2)t  and  E[X^2(t)] = (\lambda_1+\lambda_2)t + (\lambda_1-\lambda_2)^2 t^2

   so also $V(X(t)) = (\lambda_1+\lambda_2)t$.

Example 9. If passengers arrive (singly) at a taxi stand in accordance with a Poisson process with parameter $\lambda_1$, and taxis arrive in accordance with a Poisson process with parameter $\lambda_2$, then $X(t) = X_1(t) - X_2(t)$ gives the excess of passengers over taxis in an interval $t$. The distribution of $X(t)$, i.e. $P[X(t) = n]$, $n = 0, \pm 1, \pm 2, \ldots$, is given by (66). The mean of $X(t)$ is $(\lambda_1-\lambda_2)t$, which is $\ge 0$ or $< 0$ according as $\lambda_1 \ge \lambda_2$ or $\lambda_1 < \lambda_2$; and $\mathrm{var}[X(t)] = (\lambda_1+\lambda_2)t$.
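The distribution (66) is known as the Skellam distribution, and scipy implements it directly, which gives a convenient numerical check; the sketch below assumes scipy.stats.skellam's two parameters correspond to $\lambda_1 t$ and $\lambda_2 t$ as used here:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    lam1, lam2, t, n = 1.2, 0.8, 1.0, 400_000

    d = rng.poisson(lam1 * t, n) - rng.poisson(lam2 * t, n)

    for k in (-2, -1, 0, 1, 2):
        print(k, (d == k).mean(), stats.skellam.pmf(k, lam1 * t, lam2 * t))
    print(d.mean(), (lam1 - lam2) * t)     # mean
    print(d.var(), (lam1 + lam2) * t)      # variance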

7.3 Decomposition of a Poisson process

A random selection from a Poisson process yields a Poisson process.

Suppose that $X(t)$, the number of occurrences of an event $E$ in an interval of length $t$, is a Poisson process with parameter $\lambda$. Suppose also that each occurrence of $E$ has a constant probability $p$ of being recorded, and that the recording of an occurrence is independent of that of other occurrences and also of $X(t)$. If $M(t)$ is the number of occurrences recorded in an interval of length $t$, then $M(t)$ is also a Poisson process with parameter $\lambda p$.

Proof: The event $\{M(t) = n\}$ can happen in the following mutually exclusive ways:

    A_r: E occurs (n+r) times by epoch t and exactly n out of the (n+r) occurrences are recorded, the probability of each occurrence being recorded being p  (r = 0, 1, 2, \ldots).

We have

    P[A_r] = P[E occurs (n+r) times by epoch t] \cdot P[n occurrences are recorded given that the number of occurrences is n+r]
    = \frac{e^{-\lambda t}\, (\lambda t)^{n+r}}{(n+r)!}\, \binom{n+r}{n}\, p^n q^r

Hence

    P[M(t) = n] = \sum_{r=0}^{\infty} P[A_r]
    = \sum_{r=0}^{\infty} \frac{e^{-\lambda t}\, (\lambda t)^{n+r}}{(n+r)!}\, \binom{n+r}{n}\, p^n q^r
    = e^{-\lambda t} \sum_{r=0}^{\infty} \frac{(p\lambda t)^n\, (q\lambda t)^r}{n!\, r!}
    = e^{-\lambda t}\, \frac{(p\lambda t)^n}{n!} \sum_{r=0}^{\infty} \frac{(q\lambda t)^r}{r!}
    = e^{-\lambda t}\, \frac{(p\lambda t)^n}{n!}\, e^{q\lambda t}
    = e^{-p\lambda t}\, \frac{(p\lambda t)^n}{n!}

We can interpret the above as follows: for a Poisson process $X(t)$, the probability of an occurrence in an infinitesimal interval $h$ is (approximately) proportional to the length of the interval $h$, the constant of proportionality being $\lambda$. Now for $M(t)$, the probability of a recording in the interval $h$ is proportional to the length $h$, the constant of proportionality being $\lambda p$. Thus $\{M(t), t \ge 0\}$ is a Poisson process with parameter $\lambda p$.
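A short simulation sketch of the decomposition property: binomial thinning of a Poisson count with recording probability $p$ reproduces a Poisson count with parameter $\lambda p t$ (parameter values are arbitrary):

    import numpy as np

    rng = np.random.default_rng(9)
    lam, p, t, n = 2.0, 0.3, 1.0, 400_000

    total = rng.poisson(lam * t, n)           # X(t)
    recorded = rng.binomial(total, p)         # M(t): keep each event w.p. p
    direct = rng.poisson(lam * p * t, n)      # Poisson(lam*p*t), for comparison

    for k in range(4):                        # the two pmfs should agree
        print(k, (recorded == k).mean(), (direct == k).mean())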
