Contents

1 Generating Functions
  1.1 Probability Generating Function

2 Poisson Process
  2.1 Time Dependent Poisson Process
  2.2 Weighted Poisson Process

3 Birth Process
  3.1 Pure Birth Process
  3.2 Homogeneous Pure Birth Process
  3.3 Linear Birth Process (Yule Process)
  3.4 Time Dependent Linear Birth Process

4 Death Process
  4.1 Pure Death Process
  4.2 Linear Death Process (Homogeneous)
  4.3 Time Dependent Linear Death Process

5 The Birth and Death Process
  5.1 Probability of Ultimate Extinction
  5.2 Effect of Emigration
  5.3 Effect of Immigration
  5.4 Limiting Behaviour of the Mean and Variance

6 Interarrival Times
  6.1 Distribution of the Interarrival Time

7 Poisson Process and Related Distributions
  7.1 Sum of Two Independent Poisson Processes
  7.2 Difference of Two Independent Poisson Processes
  7.3 Decomposition of a Poisson Process
1 Generating Functions

In dealing with integral-valued random variables, it is often of great convenience to apply the powerful method of generating functions. Many stochastic processes that we come across involve integral-valued random variables, and quite often we can use generating functions for their study. The principal advantage of its use is that a single function may be used to represent a whole set of individual items.

Definition 1. Let a_0, a_1, a_2, ... be a sequence of real numbers. Using a variable s, we may define a function

    A(s) = a_0 s^0 + a_1 s^1 + a_2 s^2 + ... = Σ_{k=0}^{∞} a_k s^k        (1)

If this power series converges in some interval −s_0 < s < s_0, then A(s) is called the generating function of the sequence a_0, a_1, a_2, ....

The variable s itself has no particular significance. Here we assume s to be real, but generating functions with a complex variable are also used sometimes.

Differentiating (1) k times, putting s = 0 and dividing by k!, we get a_k, i.e.

    a_k = (1/k!) d^k A(s)/ds^k |_{s=0}        (2)
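For instance, the rule (2) can be checked with a short SymPy sketch; the generating function 1/(1 − 2s), whose coefficients are a_k = 2^k, is an assumed illustrative example:

```python
# Recover the coefficients a_k of a known generating function by repeated
# differentiation at s = 0, as in equation (2). Illustrative sketch only.
import sympy as sp

s = sp.symbols('s')
A = 1 / (1 - 2*s)          # generating function of a_k = 2**k (assumed example)

for k in range(5):
    a_k = sp.diff(A, s, k).subs(s, 0) / sp.factorial(k)   # equation (2)
    print(k, a_k)          # prints 1, 2, 4, 8, 16
```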
1.1 Probability Generating Function

Let X be an integral-valued random variable with probability distribution

    P[X = k] = p_k,   k = 0, 1, 2, ...,   Σ_{k=0}^{∞} p_k = 1        (3)

The probability generating function (p.g.f.) of X is defined as

    P(s) = Σ_{k=0}^{∞} p_k s^k = E(s^X)        (4)

Differentiating,

    P'(s) = Σ_{k=1}^{∞} k p_k s^{k−1},   P''(s) = Σ_{k=1}^{∞} k(k−1) p_k s^{k−2}        (5)

and hence

    E(X) = Σ_{k=1}^{∞} k p_k = P'(1)        (6)

also

    E[X(X−1)] = Σ_{k=1}^{∞} k(k−1) p_k = P''(1)        (7)

and

    E(X²) = E[X(X−1)] + E(X) = P''(1) + P'(1)

    V(X) = E(X²) − [E(X)]² = P''(1) + P'(1) − [P'(1)]²

Thus the mean and variance can be obtained with a knowledge of the p.g.f. In fact moments, cumulants, etc. can all be expressed in terms of generating functions.
The k-th factorial moment of X is given by

    E[X(X−1) ... (X−k+1)] = d^k P(s)/ds^k |_{s=1},   for k = 1, 2, ...

Also, P(e^t) is the moment generating function, since

    M_X(t) = E[e^{tX}] = Σ_{k=0}^{∞} p_k e^{tk} = Σ_{k=0}^{∞} p_k s^k = P(s),   where s = e^t

Example 1. Let X have the Poisson distribution

    p_k = e^{−λ} λ^k / k!,   k = 0, 1, ...

Then

    P(s) = Σ_{k=0}^{∞} p_k s^k = Σ_{k=0}^{∞} e^{−λ} (λs)^k / k! = e^{−λ} e^{λs} = e^{λ(s−1)}

Now

    P'(s) = λ e^{−λ} e^{λs},   so   P'(1) = λ

    P''(s) = λ² e^{−λ} e^{λs},   so   P''(1) = λ²

Thus

    E(X) = λ,   V(X) = λ² + λ − λ² = λ
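A minimal symbolic check of these moment formulas for the Poisson p.g.f. just derived (an illustrative sketch using SymPy):

```python
# Verify E(X) = P'(1) and V(X) = P''(1) + P'(1) - [P'(1)]**2 for the
# Poisson p.g.f. P(s) = exp(lam*(s - 1)).
import sympy as sp

s, lam = sp.symbols('s lam', positive=True)
P = sp.exp(lam*(s - 1))                              # Poisson p.g.f.

mean = sp.diff(P, s).subs(s, 1)                      # P'(1)
var = sp.diff(P, s, 2).subs(s, 1) + mean - mean**2   # P''(1) + P'(1) - [P'(1)]^2
print(sp.simplify(mean), sp.simplify(var))           # lam, lam
```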
Example 2. Let X have the geometric distribution p_k = p q^k, k = 0, 1, 2, ... Then

    P(s) = Σ_{k=0}^{∞} p_k s^k = Σ_{k=0}^{∞} p q^k s^k = p Σ_{k=0}^{∞} (qs)^k = p / (1 − qs)

We have

    P'(s) = pq / (1 − qs)²,   so   P'(1) = pq/p² = q/p

    P''(s) = 2pq² / (1 − qs)³,   so   P''(1) = 2q²/p²

Thus

    E(X) = q/p

    V(X) = P''(1) + P'(1) − [P'(1)]² = 2q²/p² + q/p − q²/p² = q²/p² + q/p = q/p²
Example 3. Let X have the binomial distribution

    p_k = C(n, k) p^k q^{n−k},   k = 0, 1, 2, ..., n

Then

    P(s) = Σ_{k=0}^{n} p_k s^k = Σ_{k=0}^{n} C(n, k) (ps)^k q^{n−k} = (q + sp)^n

    P'(s) = n(q + sp)^{n−1} p,   so   P'(1) = np

    P''(s) = n(n−1)(q + sp)^{n−2} p²,   so   P''(1) = n(n−1)p²

    V(X) = P''(1) + P'(1) − [P'(1)]² = n(n−1)p² + np − n²p² = npq

Example 4. Let X be a random variable with p.g.f. P(s). To find the p.g.f. of the random variable Y = mX + n, note

    P_Y(s) = E[s^{mX+n}] = s^n E[(s^m)^X] = s^n P(s^m)

Similarly, if Z = X + Y where X and Y are independent with p.g.f.s P_X(s) and P_Y(s), then

    P_Z(s) = E[s^Z] = E[s^{X+Y}] = E[s^X s^Y] = E[s^X] E[s^Y] = P_X(s) P_Y(s)
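Numerically, the product rule for p.g.f.s can be checked by convolving two probability vectors; the two pmfs below are assumed toy examples:

```python
# Verify P_Z(s) = P_X(s) * P_Y(s): the pmf of Z = X + Y (X, Y independent)
# is the convolution of the two pmfs.
import numpy as np

p_x = np.array([0.2, 0.5, 0.3])          # pmf of X on {0, 1, 2} (assumed)
p_y = np.array([0.6, 0.4])               # pmf of Y on {0, 1} (assumed)
p_z = np.convolve(p_x, p_y)              # pmf of Z = X + Y

s = 0.7
pgf = lambda p, s: sum(pk * s**k for k, pk in enumerate(p))
print(pgf(p_z, s), pgf(p_x, s) * pgf(p_y, s))   # the two values agree
```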
2 Poisson Process

Let X(t) denote the number of events occurring in the time interval (0, t). The basic assumptions underlying the Poisson process are as follows:

1. The probability that an event will occur in the time interval (t, t + Δt) is λΔt + o(Δt), where λ is independent of t as well as of the number of events occurring in the interval (0, t).

2. The probability that more than one event will occur in the interval (t, t + Δt) is o(Δt).

Under these assumptions we shall show that the number of events in an interval of length t − s has the distribution

    P[X(t) − X(s) = k] = e^{−λ(t−s)} {λ(t−s)}^k / k!,   k = 0, 1, 2, ...

Writing p_k(t) = P[X(t) = k], we have for k ≥ 1

    p_k(t + Δt) = p_k(t)(1 − λΔt) + p_{k−1}(t) λΔt + o(Δt)        (8)

so that

    lim_{Δt→0} [p_k(t + Δt) − p_k(t)]/Δt = −λ p_k(t) + λ p_{k−1}(t)        (9)

i.e.

    d/dt p_k(t) = −λ p_k(t) + λ p_{k−1}(t)   for k ≥ 1        (10)

and for k = 0

    d/dt p_0(t) = −λ p_0(t)        (11)
Solving (11),

    d/dt [log p_0(t)] = −λ

Thus

    log p_0(t) = −λt + C,   p_0(t) = e^{−λt + C} = C' e^{−λt}

Putting the initial condition p_0(0) = 1, we get C' = 1. Thus

    p_0(t) = e^{−λt}

Now from (10),

    d/dt p_1(t) = −λ p_1(t) + λ p_0(t)

    d/dt p_1(t) + λ p_1(t) = λ e^{−λt}

Multiplying the above equation by e^{λt}, we get

    e^{λt} d/dt p_1(t) + λ e^{λt} p_1(t) = λ

    d/dt [e^{λt} p_1(t)] = λ

    e^{λt} p_1(t) = λt + C

Since p_1(0) = 0, C = 0, so p_1(t) = λt e^{−λt}. Proceeding in this way (or by induction), we obtain

    p_k(t) = e^{−λt} (λt)^k / k!,   k = 0, 1, 2, ...        (12)
Similarly, for the number of events in an interval of length t − s,

    P[X(t) − X(s) = k] = e^{−λ(t−s)} [λ(t−s)]^k / k!,   k = 0, 1, 2, ...

Alternative derivation (method of p.g.f.): define

    G_X(s, t) = Σ_{k=0}^{∞} s^k p_k(t)

so that

    ∂/∂t G_X(s, t) = Σ_{k=0}^{∞} s^k d/dt p_k(t)

Substituting the value of d/dt p_k(t) from (10),

    ∂/∂t G_X(s, t) = −λ Σ_{k=0}^{∞} s^k p_k(t) + λ Σ_{k=1}^{∞} p_{k−1}(t) s^k
        = −λ G_X(s, t) + λ s Σ_{r=0}^{∞} s^r p_r(t)
        = −λ(1 − s) G_X(s, t)

Hence

    (1/G_X(s, t)) ∂/∂t G_X(s, t) = −λ(1 − s)

    ∂/∂t log G_X(s, t) = −λ(1 − s)

    log G_X(s, t) = −λ(1 − s) t + c

By the initial condition p_0(0) = 1, G_X(s, 0) = Σ_k s^k p_k(0) = s^0 = 1, so

    log G_X(s, 0) = c,   i.e.   c = 0

Then

    log G_X(s, t) = −λ(1 − s) t

    G_X(s, t) = e^{−λt(1−s)}

This is the p.g.f. of a Poisson distribution with parameter λt. Consequently

    p_k(t) = e^{−λt} (λt)^k / k!,   k = 0, 1, 2, ...
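The assumptions above can be illustrated numerically: slicing (0, t] into many short intervals of length Δt, in each of which an event occurs with probability λΔt, the count X(t) is Binomial(steps, λΔt), which should approach (12). A simulation sketch with assumed parameters:

```python
# Poisson limit of the axioms of Section 2: many Bernoulli(lam*dt) slices.
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
lam, t, steps, reps = 2.0, 1.5, 3000, 200_000   # assumed parameters
dt = t / steps

# The sum of `steps` independent Bernoulli(lam*dt) indicators:
counts = rng.binomial(steps, lam * dt, size=reps)

for k in range(6):
    empirical = float(np.mean(counts == k))
    theory = exp(-lam * t) * (lam * t) ** k / factorial(k)   # equation (12)
    print(k, round(empirical, 4), round(theory, 4))
```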
2.1 Time Dependent Poisson Process

If the rate λ is allowed to depend on time, λ = λ(τ), the same argument gives

    log G_X(s, t) = −(1 − s) ∫_0^t λ(τ) dτ

and

    P[X(t) = k] = e^{−∫_0^t λ(τ) dτ} [∫_0^t λ(τ) dτ]^k / k!,   k = 0, 1, 2, ...
2.2 Weighted Poisson Process

Suppose now that the parameter λ of the Poisson process is itself a random variable with density function f(λ), while conditionally on λ

    P[X(t) = k | λ] = e^{−λt} (λt)^k / k!,   k = 0, 1, 2, ...

Then

    p_k(t) = ∫ P[X(t) = k | λ] f(λ) dλ

(with the integral replaced by the sum Σ_λ P[X(t) = k | λ] f(λ) when λ is discrete).

Example 5. Let λ have the gamma density

    f(λ) = (α^β / Γ(β)) e^{−αλ} λ^{β−1},   λ > 0

Then

    p_k(t) = ∫_0^∞ (e^{−λt} (λt)^k / k!) (α^β / Γ(β)) e^{−αλ} λ^{β−1} dλ
           = (t^k α^β / (k! Γ(β))) ∫_0^∞ e^{−λ(t+α)} λ^{k+β−1} dλ

Let λ(t + α) = x; then dλ = dx/(t + α), so

    p_k(t) = (t^k α^β / (k! Γ(β))) ∫_0^∞ e^{−x} (x^{k+β−1} / (t+α)^{k+β−1}) dx/(t+α)
           = (t^k α^β / (k! Γ(β))) Γ(k+β) / (t+α)^{β+k}
           = (Γ(β+k) / (Γ(β) k!)) (α/(t+α))^β (t/(t+α))^k
           = C(β+k−1, k) p^β q^k,   k = 0, 1, 2, ...

where p = α/(t+α) and q = 1 − p = t/(t+α).

This is a negative binomial distribution. We know that the mean of the negative binomial distribution is r q/p and the variance is r q/p²; here r = β. Thus the mean of X(t) is

    β (t/(t+α)) ((t+α)/α) = βt/α

and the variance of X(t) is

    β (t/(t+α)) ((t+α)²/α²) = βt(α+t)/α²
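A Monte Carlo sketch of this mixture, with parameters assumed for illustration: draw λ from the gamma density, then a Poisson count of mean λt, and compare with the negative binomial mean and variance just derived:

```python
# Gamma-mixed Poisson counts should have mean beta*t/alpha and variance
# beta*t*(alpha + t)/alpha**2.
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, t, reps = 2.0, 3.0, 1.5, 500_000    # assumed parameters

lam = rng.gamma(shape=beta, scale=1 / alpha, size=reps)  # rate alpha => scale 1/alpha
x = rng.poisson(lam * t)

print(x.mean(), beta * t / alpha)                    # ~2.25 vs 2.25
print(x.var(), beta * t * (alpha + t) / alpha**2)    # ~3.94 vs 3.9375
```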
3 Birth Process

3.1 Pure Birth Process

Let X(t) denote the size of the population at time t, with X(0) = k0. In the pure birth process, given X(t) = k, the probability of one birth in (t, t + Δt) is λ_k(t)Δt + o(Δt), the probability of more than one birth is o(Δt), and no deaths occur. This leads to the differential equations

    d/dt p_{k0}(t) = −λ_{k0}(t) p_{k0}(t)

and

    d/dt p_k(t) = −λ_k(t) p_k(t) + λ_{k−1}(t) p_{k−1}(t)   for k > k0

with the initial conditions p_{k0}(0) = 1, p_k(0) = 0 for k > k0.
3.2 Homogeneous Pure Birth Process

Here the birth rates do not depend on time: λ_k(t) = λ_k. Then

    d/dt p_{k0}(t) = −λ_{k0} p_{k0}(t)        (13)

and

    d/dt p_k(t) = −λ_k p_k(t) + λ_{k−1} p_{k−1}(t)   for k > k0        (14)

The solution is

    p_k(t) = (−1)^{k−k0} λ_{k0} λ_{k0+1} ... λ_{k−1} Σ_{i=k0}^{k} e^{−λ_i t} / Π_{j=k0, j≠i}^{k} (λ_i − λ_j),   k = k0, k0+1, k0+2, ...        (15)
The above result will be proved by induction. For this we make use of the identity

    Σ_{i=k0}^{k} 1 / Π_{j=k0, j≠i}^{k} (λ_i − λ_j) = 0,   if the λ_i's are distinct (k > k0).

For example, with the three rates λ_2, λ_3, λ_4,

    1/[(λ_2−λ_3)(λ_2−λ_4)] + 1/[(λ_3−λ_2)(λ_3−λ_4)] + 1/[(λ_4−λ_2)(λ_4−λ_3)]
        = [(λ_3−λ_4) − (λ_2−λ_4) + (λ_2−λ_3)] / [(λ_2−λ_3)(λ_2−λ_4)(λ_3−λ_4)]
        = 0
Assume that the result holds for k−1, i.e.

    p_{k−1}(t) = (−1)^{k−k0−1} λ_{k0} λ_{k0+1} ... λ_{k−2} Σ_{i=k0}^{k−1} e^{−λ_i t} / Π_{j=k0, j≠i}^{k−1} (λ_i − λ_j)        (16)

From (14), d/dt p_k(t) + λ_k p_k(t) = λ_{k−1} p_{k−1}(t), so, multiplying by e^{λ_k t},

    d/dt [e^{λ_k t} p_k(t)] = λ_{k−1} e^{λ_k t} p_{k−1}(t)
        = (−1)^{k−k0−1} λ_{k0} λ_{k0+1} ... λ_{k−1} Σ_{i=k0}^{k−1} e^{(λ_k − λ_i) t} / Π_{j=k0, j≠i}^{k−1} (λ_i − λ_j)

Integrating, and using ∫ e^{(λ_k − λ_i)t} dt = e^{(λ_k − λ_i)t} / (λ_k − λ_i), we get

    e^{λ_k t} p_k(t) = (−1)^{k−k0−1} λ_{k0} ... λ_{k−1} Σ_{i=k0}^{k−1} e^{(λ_k − λ_i)t} / [ Π_{j=k0, j≠i}^{k−1}(λ_i − λ_j) (λ_k − λ_i) ] + C
        = (−1)^{k−k0} λ_{k0} ... λ_{k−1} Σ_{i=k0}^{k−1} e^{(λ_k − λ_i)t} / Π_{j=k0, j≠i}^{k}(λ_i − λ_j) + C        (17)

since Π_{j=k0, j≠i}^{k}(λ_i − λ_j) = Π_{j=k0, j≠i}^{k−1}(λ_i − λ_j) {−(λ_k − λ_i)}.

The constant C is determined from the initial condition p_k(0) = 0:

    0 = (−1)^{k−k0} λ_{k0} ... λ_{k−1} Σ_{i=k0}^{k−1} 1 / Π_{j=k0, j≠i}^{k}(λ_i − λ_j) + C

Using the identity Σ_{i=k0}^{k} 1/Π_{j≠i}(λ_i − λ_j) = 0, the sum over i = k0, ..., k−1 equals −1/Π_{j=k0}^{k−1}(λ_k − λ_j), and we get

    C = (−1)^{k−k0} λ_{k0} ... λ_{k−1} / Π_{j=k0}^{k−1}(λ_k − λ_j)

Substituting back and multiplying by e^{−λ_k t},

    p_k(t) = (−1)^{k−k0} λ_{k0} ... λ_{k−1} [ Σ_{i=k0}^{k−1} e^{−λ_i t} / Π_{j=k0, j≠i}^{k}(λ_i − λ_j) + e^{−λ_k t} / Π_{j=k0}^{k−1}(λ_k − λ_j) ]
        = (−1)^{k−k0} λ_{k0} ... λ_{k−1} Σ_{i=k0}^{k} e^{−λ_i t} / Π_{j=k0, j≠i}^{k}(λ_i − λ_j)

Thus if the result holds for k−1, then it also holds for k. Since it holds for k0, it holds for k0+1, and so on. Thus the required solution is

    p_k(t) = (−1)^{k−k0} λ_{k0} λ_{k0+1} ... λ_{k−1} Σ_{i=k0}^{k} e^{−λ_i t} / Π_{j=k0, j≠i}^{k}(λ_i − λ_j),   for k = k0, k0+1, k0+2, ...        (18)
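Formula (18) can be sanity-checked against the differential equations (13)-(14); the sketch below (assumed rates λ_k = 1 + 0.5k, a small truncated system, SciPy's ODE integrator) does this numerically:

```python
# Integrate (13)-(14) and compare with the closed-form solution (18).
import numpy as np
from scipy.integrate import solve_ivp
from math import prod

k0, kmax, t = 2, 7, 0.8
lam = {k: 1.0 + 0.5 * k for k in range(k0, kmax + 1)}   # assumed distinct rates

def rhs(_, p):
    dp = np.zeros_like(p)
    for idx, k in enumerate(range(k0, kmax + 1)):
        dp[idx] = -lam[k] * p[idx] + (lam[k - 1] * p[idx - 1] if k > k0 else 0.0)
    return dp

p0 = np.zeros(kmax - k0 + 1); p0[0] = 1.0
num = solve_ivp(rhs, (0, t), p0, rtol=1e-10, atol=1e-12).y[:, -1]

def pk_closed(k, tt):                                   # equation (18)
    coef = (-1) ** (k - k0) * prod(lam[j] for j in range(k0, k))
    return coef * sum(
        np.exp(-lam[i] * tt) /
        prod(lam[i] - lam[j] for j in range(k0, k + 1) if j != i)
        for i in range(k0, k + 1))

print([round(float(x), 8) for x in num])
print([round(pk_closed(k, t), 8) for k in range(k0, kmax + 1)])  # same values
```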
3.3 Linear Birth Process (Yule Process)

Here each individual gives birth at rate λ, so λ_k = kλ and

    d/dt p_{k0}(t) = −k0 λ p_{k0}(t)        (19)

    d/dt p_k(t) = −kλ p_k(t) + (k−1)λ p_{k−1}(t),   k > k0

The solution of the above equations can be obtained with the help of the solution of the homogeneous pure birth process. In the pure birth process we have

    p_k(t) = (−1)^{k−k0} λ_{k0} ... λ_{k−1} Σ_{i=k0}^{k} e^{−λ_i t} / Π_{j=k0, j≠i}^{k}(λ_i − λ_j),   for k = k0, k0+1, k0+2, ...

Now, with λ_k = kλ,

    (−1)^{k−k0} λ_{k0} ... λ_{k−1} = (−1)^{k−k0} λ^{k−k0} k0 (k0+1) ... (k−1)
        = (−1)^{k−k0} λ^{k−k0} (k−1)! / (k0−1)!
        = (−1)^{k−k0} λ^{k−k0} C(k−1, k0−1) (k−k0)!        (20)

Also

    Π_{j=k0, j≠i}^{k} (λ_i − λ_j) = Π_{j=k0, j≠i}^{k} λ(i − j) = λ^{k−k0} Π_{j=k0, j≠i}^{k} (i − j)

and

    Π_{j=k0, j≠i}^{k} (i − j) = (i−k0)(i−k0−1) ... (1) · (−1)(−2) ... (−(k−i))

(for instance, with k0 = 2, k = 6, i = 4 this is (2)(1)(−1)(−2)). Hence

    Π_{j=k0, j≠i}^{k} (λ_i − λ_j) = λ^{k−k0} (i−k0)! (−1)^{k−i} (k−i)!        (21)

Thus

    p_k(t) = (−1)^{k−k0} C(k−1, k0−1) (k−k0)! λ^{k−k0} Σ_{i=k0}^{k} e^{−iλt} / [ λ^{k−k0} (i−k0)! (−1)^{k−i} (k−i)! ]
        = (−1)^{k−k0} C(k−1, k−k0) Σ_{i=k0}^{k} (−1)^{−(k−i)} C(k−k0, i−k0) e^{−iλt}
        = C(k−1, k−k0) e^{−k0λt} Σ_{i=k0}^{k} (−1)^{i−k0} C(k−k0, i−k0) (e^{−λt})^{i−k0}        (22)

using C(k−1, k0−1) = C(k−1, k−k0), (k−k0)!/[(i−k0)!(k−i)!] = C(k−k0, i−k0), and (−1)^{k−k0}(−1)^{−(k−i)} = (−1)^{i−k0}.
Now, putting l = i − k0,

    Σ_{i=k0}^{k} (−1)^{i−k0} C(k−k0, i−k0) (e^{−λt})^{i−k0} = Σ_{l=0}^{k−k0} C(k−k0, l) (−e^{−λt})^l = (1 − e^{−λt})^{k−k0}

Thus

    p_k(t) = C(k−1, k−k0) e^{−k0λt} (1 − e^{−λt})^{k−k0},   for k = k0, k0+1, k0+2, ...

Let Y(t) = X(t) − k0 denote the number of births up to time t. Then, with k = k0 + r,

    P[Y(t) = r] = C(k−1, r) (e^{−λt})^{k0} (1 − e^{−λt})^r
                = C(k0+r−1, r) (e^{−λt})^{k0} (1 − e^{−λt})^r,   r = 0, 1, 2, 3, ...

Thus Y(t) has a negative binomial distribution with parameters k0 and e^{−λt}.
Hence

    E[Y(t)] = k0 (1 − e^{−λt}) / e^{−λt} = k0 (e^{λt} − 1)

and

    E[X(t)] = k0 + E[Y(t)] = k0 + k0(e^{λt} − 1) = k0 e^{λt}

Similarly

    V[X(t)] = V[Y(t)] = k0 (1 − e^{−λt}) / e^{−2λt} = k0 [e^{2λt} − e^{λt}] = k0 [e^{λt} − 1] e^{λt}
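Since the holding time of this process in state k is exponential with rate kλ, the Yule process is easy to simulate; the following sketch, with assumed parameters, checks the mean and variance just obtained:

```python
# Event-driven simulation of the Yule process: in state k, wait an
# exponential time with rate k*lam, then jump to k+1.
import numpy as np

rng = np.random.default_rng(2)
lam, k0, t, reps = 1.0, 3, 1.0, 50_000      # assumed parameters

finals = np.empty(reps)
for i in range(reps):
    k, clock = k0, 0.0
    while True:
        clock += rng.exponential(1 / (k * lam))   # waiting time in state k
        if clock > t:
            break
        k += 1
    finals[i] = k

g = np.exp(lam * t)
print(finals.mean(), k0 * g)              # ~8.15 vs k0*e^{lam*t}
print(finals.var(), k0 * g * (g - 1))     # ~14.0 vs k0*e^{lam*t}(e^{lam*t}-1)
```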
Another Method for Solution of the Linear Birth Process (Method of P.G.F.):

For the linear birth process we have the equations

    d/dt p_{k0}(t) = −k0 λ p_{k0}(t)        (23)

    d/dt p_k(t) = −kλ p_k(t) + (k−1)λ p_{k−1}(t),   k > k0        (24)

subject to the initial conditions

    p_{k0}(0) = 1,   p_k(0) = 0 for k > k0        (25)

Define

    G_X(s, t) = Σ_{k=k0}^{∞} p_k(t) s^k        (26)

Then

    ∂/∂t G_X(s, t) = Σ_{k=k0}^{∞} (d/dt p_k(t)) s^k
        = −λ Σ_{k=k0}^{∞} k p_k(t) s^k + λ Σ_{k=k0+1}^{∞} (k−1) p_{k−1}(t) s^k        (27)
        = −λs Σ_{k=k0}^{∞} k p_k(t) s^{k−1} + λ s² Σ_{k=k0+1}^{∞} (k−1) p_{k−1}(t) s^{k−2}
        = −λs ∂/∂s G_X(s, t) + λ s² ∂/∂s G_X(s, t)        (28)
To solve (28) we use Lagrange's method for a linear partial differential equation of the form

    P ∂z/∂x + Q ∂z/∂y = R

whose auxiliary equations are

    dx/P = dy/Q = dz/R

If u(x, y, z) = c1 and v(x, y, z) = c2 are independent solutions of the auxiliary equations, then the general solution is u = Φ(v), where Φ is an arbitrary function.

Now we obtain the solution of equation (28) using the above described technique. Writing (28) as ∂G_X/∂t + λs(1−s) ∂G_X/∂s = 0, the auxiliary equations are

    dt/1 = ds/(λs(1−s)) = dG_X(s,t)/0

Considering dt/1 = dG_X(s,t)/0, we get

    G_X(s, t) = C   (C is constant)        (29)

Also

    λ dt = ds/(s(1−s)) = [1/s + 1/(1−s)] ds

which on integration gives (with C1 constant)

    λt = log(s/(1−s)) + C1
    log(s/(1−s)) = λt − C1
    s/(1−s) = e^{λt − C1} = C2 e^{λt}

i.e.

    (s/(1−s)) e^{−λt} = C2        (30)

which is the second solution obtained from the auxiliary equations. Then the general solution will be

    G_X(s, t) = Φ( (s/(1−s)) e^{−λt} )        (31)

where Φ is an arbitrary function. To obtain the particular solution, we use the initial condition G_X(s, 0) = s^{k0}.
At t = 0 this gives Φ( s/(1−s) ) = s^{k0}. Putting ξ = s/(1−s), i.e. s = ξ/(1+ξ),

    Φ(ξ) = ( ξ/(1+ξ) )^{k0}

Now

    G_X(s, t) = Φ( (s e^{−λt})/(1−s) ) = [ ((s e^{−λt})/(1−s)) / (1 + (s e^{−λt})/(1−s)) ]^{k0}

Thus

    G_X(s, t) = [ s e^{−λt} / (1 − s + s e^{−λt}) ]^{k0}
              = [ s e^{−λt} / (1 − s(1 − e^{−λt})) ]^{k0}
              = s^{k0} [ e^{−λt} / (1 − s(1 − e^{−λt})) ]^{k0}

Let us consider a new variable Y(t) = X(t) − k0, so that X(t) = Y(t) + k0. Then [ e^{−λt} / (1 − s(1 − e^{−λt})) ]^{k0} is the p.g.f. of Y(t), which is therefore a negative binomial distribution with parameters k0 and e^{−λt}, in agreement with the earlier solution.
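A symbolic sketch (with an assumed initial size k0 = 3) that the p.g.f. obtained above indeed satisfies (28) and the initial condition:

```python
# Check dG/dt = lam*s*(s-1)*dG/ds (equation (28), rearranged) and G(s,0) = s**k0.
import sympy as sp

s, t, lam = sp.symbols('s t lam', positive=True)
k0 = 3                                             # assumed initial size
G = (s*sp.exp(-lam*t) / (1 - s*(1 - sp.exp(-lam*t))))**k0

pde = sp.diff(G, t) - lam*s*(s - 1)*sp.diff(G, s)
print(sp.simplify(pde))                            # 0
print(sp.simplify(G.subs(t, 0)))                   # s**3
```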
3.4 Time Dependent Linear Birth Process

If the birth rate per individual is time dependent, λ = λ(τ), the same argument with λt replaced by ∫_0^t λ(τ) dτ gives

    G_X(s, t) = [ s e^{−∫_0^t λ(τ) dτ} / ( 1 − s(1 − e^{−∫_0^t λ(τ) dτ}) ) ]^{k0}
4 Death Process

4.1 Pure Death Process

Let X(t) denote the number of individuals present at time t, given that initially there are k0 individuals. The basic assumptions underlying the pure death process are analogous to those of the pure birth process: given X(t) = k, the probability of one death in (t, t + Δt) is μ_k(t)Δt + o(Δt), the probability of more than one death is o(Δt), and no births occur. The differential equations are

    d/dt p_{k0}(t) = −μ_{k0}(t) p_{k0}(t)

    d/dt p_k(t) = μ_{k+1}(t) p_{k+1}(t) − μ_k(t) p_k(t),   k < k0

with p_k(t) defined for k = k0, k0−1, k0−2, ..., 0.
4.2 Linear Death Process (Homogeneous)

Here we assume μ_k(t) = kμ for k ≤ k0. Then the differential equations become

    d/dt p_{k0}(t) = −k0 μ p_{k0}(t)        (32)

    d/dt p_k(t) = (k+1)μ p_{k+1}(t) − kμ p_k(t),   k < k0        (33)

Define

    G_X(s, t) = Σ_{k=0}^{k0} s^k p_k(t)        (34)

Now

    ∂/∂t G_X(s, t) = Σ_{k=0}^{k0} s^k (d/dt p_k(t))
        = μ Σ_{k=0}^{k0−1} (k+1) p_{k+1}(t) s^k − μ Σ_{k=0}^{k0} k p_k(t) s^k
        = μ Σ_{k=0}^{k0} p_k(t) k s^{k−1} − μs Σ_{k=0}^{k0} p_k(t) k s^{k−1}
        = μ ∂/∂s G_X(s, t) − μs ∂/∂s G_X(s, t)
        = −μ(s − 1) ∂/∂s G_X(s, t)        (35)

The auxiliary equations are

    dt/1 = ds/(μ(s−1)) = dG_X(s,t)/0        (36)

From dt/1 = dG_X(s,t)/0 we get

    G_X(s, t) = C        (37)

Also, considering dt/1 = ds/(μ(s−1)),

    μ dt = ds/(s−1)
    μt = log(s−1) + C1
    (s−1) = C2 e^{μt}        (38)

i.e.

    e^{−μt}(s−1) = C2        (39)

Hence the general solution is

    G_X(s, t) = Φ( e^{−μt}(s−1) )        (40)

The initial condition G_X(s, 0) = s^{k0} gives Φ(s−1) = s^{k0}; putting ξ = s−1, s = 1+ξ, so

    Φ(ξ) = (1+ξ)^{k0}        (41)

Thus

    G_X(s, t) = [ 1 + (s−1)e^{−μt} ]^{k0} = [ (1 − e^{−μt}) + s e^{−μt} ]^{k0}        (42)

which is the p.g.f. of a binomial distribution with parameters k0 and e^{−μt}. Consequently

    p_k(t) = C(k0, k) (e^{−μt})^k (1 − e^{−μt})^{k0−k},   k = 0, 1, ..., k0        (43)

    E[X(t)] = k0 e^{−μt}        (44)

    V[X(t)] = k0 e^{−μt}(1 − e^{−μt})        (45)
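Equivalently, each of the k0 initial individuals survives to time t independently with probability e^{−μt}; simulating exponential lifetimes (parameters assumed for illustration) reproduces the binomial law (43):

```python
# Each individual lives an exponential(rate mu) lifetime; X(t) counts survivors.
import numpy as np
from math import comb, exp

rng = np.random.default_rng(3)
mu, k0, t, reps = 0.7, 10, 1.2, 200_000     # assumed parameters

lifetimes = rng.exponential(1 / mu, size=(reps, k0))
x = (lifetimes > t).sum(axis=1)             # survivors at time t

p = exp(-mu * t)
for k in (0, 3, 6, 9):
    theory = comb(k0, k) * p**k * (1 - p)**(k0 - k)   # equation (43)
    print(k, round(float(np.mean(x == k)), 4), round(theory, 4))
print(x.mean(), k0 * p, x.var(), k0 * p * (1 - p))    # (44) and (45)
```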
4.3 Time Dependent Linear Death Process

If the death rate per individual depends on time, μ_k(t) = kμ(t), then exactly as before, with μt replaced by ∫_0^t μ(τ) dτ,

    p_k(t) = C(k0, k) ( e^{−∫_0^t μ(τ) dτ} )^k ( 1 − e^{−∫_0^t μ(τ) dτ} )^{k0−k},   k = 0, 1, 2, ..., k0

5 The Birth and Death Process

Let X(t) denote the size of the population at time t, with X(0) = k0. Given X(t) = k, we assume that in the interval (t, t + Δt) the probability of exactly one birth is λ_k(t)Δt + o(Δt), the probability of exactly one death is μ_k(t)Δt + o(Δt), and the probability of more than one change is o(Δt); consequently we get

    d/dt p_k(t) = −[λ_k(t) + μ_k(t)] p_k(t) + λ_{k−1}(t) p_{k−1}(t) + μ_{k+1}(t) p_{k+1}(t)        (46)

and

    d/dt p_0(t) = −[λ_0(t) + μ_0(t)] p_0(t) + μ_1(t) p_1(t)        (47)

The initial conditions are

    p_{k0}(0) = 1,   p_k(0) = 0 for k ≠ k0        (48)

For the time homogeneous linear birth and death process we take

    λ_k(t) = kλ,   μ_k(t) = kμ        (49)

so that

    d/dt p_k(t) = λ(k−1) p_{k−1}(t) + μ(k+1) p_{k+1}(t) − k(λ+μ) p_k(t)        (50)

with initial conditions

    p_{k0}(0) = 1,   p_k(0) = 0 for k ≠ k0        (51)

Define G_X(s, t) = Σ_{k=0}^{∞} p_k(t) s^k. Then

    ∂/∂t G_X(s, t) = Σ_{k=0}^{∞} (d/dt p_k(t)) s^k        (52)
Substituting (50),

    ∂/∂t G_X(s, t) = λ Σ_{k=1}^{∞} (k−1) p_{k−1}(t) s^k + μ Σ_{k=0}^{∞} (k+1) p_{k+1}(t) s^k − (λ+μ) Σ_{k=0}^{∞} k p_k(t) s^k
        = λ s² Σ_{k=1}^{∞} (k−1) p_{k−1}(t) s^{k−2} + μ Σ_{k=0}^{∞} (k+1) p_{k+1}(t) s^k − (λ+μ) s Σ_{k=1}^{∞} k p_k(t) s^{k−1}
        = λ s² ∂G_X/∂s + μ ∂G_X/∂s − (λ+μ) s ∂G_X/∂s
        = (λs − μ)(s − 1) ∂/∂s G_X(s, t) = λ(s − μ/λ)(s − 1) ∂/∂s G_X(s, t)        (53)

The auxiliary equations are

    dt/1 = ds/( λ(s − μ/λ)(1 − s) ) = dG_X(s,t)/0        (54)

From dt/1 = dG_X(s,t)/0 we get

    G_X(s, t) = C        (55)

Also, from dt/1 = ds/( λ(s − μ/λ)(1 − s) ), we treat the case λ ≠ μ by partial fractions:

    dt = [ 1/((λ−μ)(s − μ/λ)) + 1/((λ−μ)(1 − s)) ] ds        (56)

i.e.

    (λ−μ) dt = [ 1/(s − μ/λ) + 1/(1 − s) ] ds
After integration,

    (λ−μ)t = log(s − μ/λ) − log(1 − s) + C2 = log( (s − μ/λ)/(1 − s) ) + C2

so

    (s − μ/λ)/(1 − s) = e^{(λ−μ)t} C3

    e^{−(λ−μ)t} (s − μ/λ)/(1 − s) = C4        (57)

Hence the general solution is

    G_X(s, t) = Φ( ((1 − s)/(s − μ/λ)) e^{(λ−μ)t} )        (58)

By the initial condition G_X(s, 0) = s^{k0},

    Φ( (1 − s)/(s − μ/λ) ) = s^{k0}        (59)

Let ξ = (1 − s)/(s − μ/λ); then 1 − s = ξ(s − μ/λ), i.e. s = (1 + (μ/λ)ξ)/(1 + ξ), so

    Φ(ξ) = [ (1 + (μ/λ)ξ) / (1 + ξ) ]^{k0}        (60)

Thus, writing θ = ((1 − s)/(s − μ/λ)) e^{(λ−μ)t},

    G_X(s, t) = [ (1 + (μ/λ)θ) / (1 + θ) ]^{k0}        (61)

Multiplying numerator and denominator by (s − μ/λ),

    G_X(s, t) = [ ((s − μ/λ) + (μ/λ)(1 − s)e^{(λ−μ)t}) / ((s − μ/λ) + (1 − s)e^{(λ−μ)t}) ]^{k0}        (62)
              = [ (μ(1 − s)e^{(λ−μ)t} + (λs − μ)) / (λ(1 − s)e^{(λ−μ)t} + (λs − μ)) ]^{k0}        (63)
Let us put

    α(t) = μ(1 − e^{(λ−μ)t}) / (μ − λe^{(λ−μ)t})

    β(t) = (λ/μ) α(t) = λ(1 − e^{(λ−μ)t}) / (μ − λe^{(λ−μ)t})

Then

    G_X(s, t) = [ ( α(t) + {1 − α(t) − β(t)} s ) / ( 1 − β(t) s ) ]^{k0}        (64)

To verify this, note

    1 − β(t) s = [ (μ − λe^{(λ−μ)t}) − λ(1 − e^{(λ−μ)t}) s ] / (μ − λe^{(λ−μ)t})
               = [ μ − λe^{(λ−μ)t} − λs + λs e^{(λ−μ)t} ] / (μ − λe^{(λ−μ)t})
               = −[ (λs − μ) + λ(1 − s)e^{(λ−μ)t} ] / (μ − λe^{(λ−μ)t})

Also

    α(t) + {1 − α(t) − β(t)} s = α(t)(1 − s) + (1 − β(t)) s
        = [ μ(1 − s)(1 − e^{(λ−μ)t}) + s(μ − λ) ] / (μ − λe^{(λ−μ)t})
        = −[ (λs − μ) + μ(1 − s)e^{(λ−μ)t} ] / (μ − λe^{(λ−μ)t})

so the ratio in (64) reproduces (63), and (64) is verified.
To obtain p_k(t), expand (64). The numerator gives

    [ α(t) + {1 − α(t) − β(t)} s ]^{k0} = Σ_{j=0}^{k0} C(k0, j) [α(t)]^{k0−j} [1 − α(t) − β(t)]^j s^j

while the denominator gives

    [ 1 − β(t) s ]^{−k0} = Σ_{i=0}^{∞} C(k0+i−1, i) [β(t)]^i s^i

Now in a product

    (a_0 + a_1 s + a_2 s² + ... + a_{k0} s^{k0})(b_0 + b_1 s + b_2 s² + ...)

the coefficient of s^k is Σ_{j=0}^{Min(k0, k)} a_j b_{k−j}.

Example 8. In

    (a_0 + a_1 s + a_2 s² + a_3 s³)(b_0 + b_1 s + b_2 s² + b_3 s³ + ...)

the coefficient of s² is a_0 b_2 + a_1 b_1 + a_2 b_0, and the coefficient of s⁵ is a_0 b_5 + a_1 b_4 + a_2 b_3 + a_3 b_2.

Therefore

    p_k(t) = Σ_{j=0}^{Min(k0, k)} C(k0, j) [α(t)]^{k0−j} C(k0+k−j−1, k−j) [β(t)]^{k−j} [1 − α(t) − β(t)]^j
The mean is E[X(t)] = ∂G_X/∂s |_{s=1}. Differentiating (64),

    ∂G_X/∂s = k0 [ (α(t) + {1 − α(t) − β(t)}s) / (1 − β(t)s) ]^{k0−1}
              · [ {1 − β(t)s}{1 − α(t) − β(t)} + {α(t) + (1 − α(t) − β(t))s} β(t) ] / {1 − β(t)s}²

At s = 1, α(t) + {1 − α(t) − β(t)} = 1 − β(t), so

    E[X(t)] = k0 [ (1 − β(t))/(1 − β(t)) ]^{k0−1} [ {1 − β(t)}{1 − α(t) − β(t)} + {1 − β(t)} β(t) ] / {1 − β(t)}²
            = k0 [ 1 − α(t) − β(t) + β(t) ] / [ 1 − β(t) ]
            = k0 (1 − α(t)) / (1 − β(t))

Now

    1 − α(t) = [ (μ − λe^{(λ−μ)t}) − μ(1 − e^{(λ−μ)t}) ] / (μ − λe^{(λ−μ)t}) = (μ − λ)e^{(λ−μ)t} / (μ − λe^{(λ−μ)t})

    1 − β(t) = [ (μ − λe^{(λ−μ)t}) − λ(1 − e^{(λ−μ)t}) ] / (μ − λe^{(λ−μ)t}) = (μ − λ) / (μ − λe^{(λ−μ)t})

so that (1 − α(t))/(1 − β(t)) = e^{(λ−μ)t} and

    E[X(t)] = k0 e^{(λ−μ)t}
For the variance we use

    V[X(t)] = [ ∂²/∂s² G_X(s, t) + ∂/∂s G_X(s, t) − ( ∂/∂s G_X(s, t) )² ]_{s=1}

Carrying out the differentiation of (64) and putting s = 1 gives

    V[X(t)] = k0 [1 − α(t)] [α(t) + β(t)] / [1 − β(t)]²
            = k0 e^{(λ−μ)t} (α(t) + β(t)) / (1 − β(t))

since (1 − α(t))/(1 − β(t)) = e^{(λ−μ)t}. Now

    (α(t) + β(t)) / (1 − β(t)) = (μ + λ)(1 − e^{(λ−μ)t}) / (μ − λ) = (λ + μ)(e^{(λ−μ)t} − 1) / (λ − μ)

Thus

    E[X(t)] = k0 e^{(λ−μ)t}

    V[X(t)] = k0 e^{(λ−μ)t} (λ + μ)(e^{(λ−μ)t} − 1) / (λ − μ)

Now if λ = μ, then E[X(t)] = k0. Also, for finding V[X(t)] in the case λ = μ, we expand e^{(λ−μ)t} in powers of (λ−μ)t:

    V[X(t)] = k0 [ 1 + (λ−μ)t + (λ−μ)²t²/2! + ... ] (λ+μ) [ (λ−μ)t + (λ−μ)²t²/2! + ... ] / (λ−μ)
            = k0 [ 1 + (λ−μ)t + ... ] (λ+μ) [ t + (λ−μ)t²/2! + ... ]

Letting μ → λ, this gives

    V[X(t)] = k0 · 2λ · t = 2 k0 λ t
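A Gillespie-style simulation sketch (parameters assumed for illustration) of the linear birth and death process, checking the two formulas just derived:

```python
# In state k the total event rate is k*(lam + mu); each event is a birth
# with probability lam/(lam + mu), otherwise a death. State 0 is absorbing.
import numpy as np

rng = np.random.default_rng(4)
lam, mu, k0, t, reps = 1.0, 0.5, 5, 1.0, 50_000   # assumed parameters

finals = np.empty(reps)
for i in range(reps):
    k, clock = k0, 0.0
    while k > 0:
        clock += rng.exponential(1 / (k * (lam + mu)))
        if clock > t:
            break
        k += 1 if rng.random() < lam / (lam + mu) else -1
    finals[i] = k

g = np.exp((lam - mu) * t)
print(finals.mean(), k0 * g)                                     # E[X(t)]
print(finals.var(), k0 * g * (lam + mu) * (g - 1) / (lam - mu))  # V[X(t)]
```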
5.1 Probability of Ultimate Extinction

The probability that the population is extinct by time t is p_0(t) = G_X(0, t) = [α(t)]^{k0}, i.e.

    p_0(t) = [ μ(1 − e^{(λ−μ)t}) / (μ − λe^{(λ−μ)t}) ]^{k0}

Thus

    lim_{t→∞} p_0(t) = (μ/λ)^{k0}   if λ > μ
                     = 1             if λ ≤ μ

Also, from the formulas for the mean and the variance,

    lim_{t→∞} E[X(t)] = ∞   if λ > μ,   = k0   if λ = μ,   = 0   if λ < μ

and

    lim_{t→∞} V[X(t)] = ∞   if λ ≥ μ,   = 0   if λ < μ
5.2 Effect of Emigration

Emigration, like death, removes members at a rate that may be taken proportional to the size of the population. Thus in the birth and death process the force of mortality μ_k(t) can be replaced by a suitably enlarged rate μ'_k(t) that absorbs the emigration.

In the case of immigration, however, the accession of new members may not depend on the size of the population, and consequently the effect of immigration may not be adjusted within λ_k(t) in the same way.
5.3 Effect of Immigration

In many biological populations, some form of migration is an essential characteristic; consequently we introduce this phenomenon into the birth and death process. So far as emigration is concerned, it is clear that it can be allowed for by a suitable adjustment of the death rate, since for deaths and emigration together we can take the chance of a single loss in the time interval (t, t + Δt) as proportional to X(t), where X(t) denotes the size of the population at time t.

With immigration, on the other hand, the situation is different, for the simplest reasonable assumption about immigration is that it occurs randomly, without being affected by the size of the population at time t. It can be considered as a Poisson process independent of the population size.

We will consider a time homogeneous birth and death process with a random accession of immigrants (as a Poisson process) with immigration rate ν. We denote

    X(t) = k        size of the population at time t
    X(0) = k0       initial population is k0
    p_k(t) = P[X(t) = k | X(0) = k0]

The basic assumptions underlying a time homogeneous linear birth and death process with the effect of immigration are as follows. Given X(t) = k:

1. The probability that there will be an increase in the population by one, either due to a birth or due to immigration, in the interval (t, t + Δt) is kλΔt + νΔt + o(Δt).

2. The probability that the size of the population will decrease by one unit in the interval (t, t + Δt) is kμΔt + o(Δt).

3. The probability that more than one change will occur in the time interval (t, t + Δt) is o(Δt).

4. The probability of no change in the interval (t, t + Δt) is 1 − (kλ + ν)Δt − kμΔt − o(Δt).
From the above assumptions, with G_X(s, t) = Σ_{k=0}^{∞} p_k(t) s^k,

    ∂/∂t G_X(s, t) = Σ_{k=0}^{∞} (d/dt p_k(t)) s^k
        = −(λ+μ) Σ_{k=0}^{∞} k p_k(t) s^k + μ Σ_{k=0}^{∞} (k+1) p_{k+1}(t) s^k
          + λ Σ_{k=1}^{∞} (k−1) p_{k−1}(t) s^k + ν Σ_{k=1}^{∞} p_{k−1}(t) s^k − ν Σ_{k=0}^{∞} p_k(t) s^k
        = −(λ+μ) s Σ_{k=1}^{∞} k p_k(t) s^{k−1} + μ ∂G_X/∂s + λ s² ∂G_X/∂s + ν s G_X(s, t) − ν G_X(s, t)
        = (λs − μ)(s − 1) ∂/∂s G_X(s, t) + ν(s − 1) G_X(s, t)

The auxiliary equations are

    dt/1 = ds/( λ(s − μ/λ)(1 − s) ) = dG_X / ( ν(s − 1) G_X )

The first pair gives, as before,

    dt = [ 1/((λ−μ)(s − μ/λ)) + 1/((λ−μ)(1 − s)) ] ds

and hence e^{−(λ−μ)t} (s − μ/λ)/(1 − s) = constant. Combining the ds and dG_X members,

    dG_X/G_X = ν(s − 1) ds / ( λ(s − μ/λ)(1 − s) ) = −(ν/λ) ds/(s − μ/λ)

so that log G_X + (ν/λ) log(s − μ/λ) = constant, and so we have

    (s − μ/λ)^{ν/λ} G_X(s, t) = C4
Consequently the most general solution is given as

    (s − μ/λ)^{ν/λ} G_X(s, t) = Φ( ((1 − s)/(s − μ/λ)) e^{(λ−μ)t} )

where Φ is an arbitrary function, which is obtained from the initial condition

    G_X(s, 0) = s^{k0}

i.e.

    (s − μ/λ)^{ν/λ} s^{k0} = Φ( (1 − s)/(s − μ/λ) )

Let us put

    ξ = (1 − s)/(s − μ/λ)

Then 1 − s = ξ(s − μ/λ) = ξs − ξμ/λ, so 1 + ξμ/λ = (1 + ξ)s, i.e.

    s = (1 + (μ/λ)ξ) / (1 + ξ)

and

    s − μ/λ = (1 − μ/λ) / (1 + ξ)

so

    Φ(ξ) = [ (1 − μ/λ)/(1 + ξ) ]^{ν/λ} [ (1 + (μ/λ)ξ)/(1 + ξ) ]^{k0}

Thus the general solution is

    (s − μ/λ)^{ν/λ} G_X(s, t) = [ (1 − μ/λ)/(1 + θ) ]^{ν/λ} [ (1 + (μ/λ)θ)/(1 + θ) ]^{k0}

where θ = ((1 − s)/(s − μ/λ)) e^{(λ−μ)t}.
In particular, take k0 = 0, so that the population develops from immigration alone. Then

    G_X(s, t) = (s − μ/λ)^{−ν/λ} [ (1 − μ/λ)/(1 + θ) ]^{ν/λ}

and, since 1 + θ = [ (s − μ/λ) + (1 − s)e^{(λ−μ)t} ] / (s − μ/λ),

    G_X(s, t) = [ (1 − μ/λ) / ( (s − μ/λ) + (1 − s)e^{(λ−μ)t} ) ]^{ν/λ}
              = [ (λ − μ) / ( (λs − μ) + λ(1 − s)e^{(λ−μ)t} ) ]^{ν/λ}

Now

    (λs − μ) + λ(1 − s)e^{(λ−μ)t} = λe^{(λ−μ)t} − μ − λ(e^{(λ−μ)t} − 1)s
        = ( λe^{(λ−μ)t} − μ ) [ 1 − ( λ(e^{(λ−μ)t} − 1) / (λe^{(λ−μ)t} − μ) ) s ]

so that

    G_X(s, t) = [ (λ−μ)/(λe^{(λ−μ)t} − μ) ]^{ν/λ} [ 1 − ( λ(e^{(λ−μ)t} − 1)/(λe^{(λ−μ)t} − μ) ) s ]^{−ν/λ}

which is the p.g.f. of a negative binomial distribution with r = ν/λ and p = (λ−μ)/(λe^{(λ−μ)t} − μ), since the p.g.f. of a negative binomial distribution is [ p/(1 − qs) ]^r, or p^r (1 − qs)^{−r}. Obviously, if p = (λ−μ)/(λe^{(λ−μ)t} − μ), then

    q = 1 − p = ( λe^{(λ−μ)t} − μ − λ + μ ) / ( λe^{(λ−μ)t} − μ ) = λ( e^{(λ−μ)t} − 1 ) / ( λe^{(λ−μ)t} − μ )
Hence

    Mean = r q/p = (ν/λ) [ λ(e^{(λ−μ)t} − 1)/(λe^{(λ−μ)t} − μ) ] [ (λe^{(λ−μ)t} − μ)/(λ−μ) ]
         = ν (e^{(λ−μ)t} − 1) / (λ−μ)

and

    Variance = r q/p² = (ν/λ) λ(e^{(λ−μ)t} − 1)(λe^{(λ−μ)t} − μ) / (λ−μ)²
             = ν (e^{(λ−μ)t} − 1)(λe^{(λ−μ)t} − μ) / (λ−μ)²

For the case λ = μ we expand the exponential:

    Mean = ν/(λ−μ) [ (1 + (λ−μ)t + (λ−μ)²t²/2! + (λ−μ)³t³/3! + ...) − 1 ]
         = ν [ t + (λ−μ)t²/2! + (λ−μ)²t³/3! + ... ]

so if λ = μ, then

    Mean = νt
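The same event-by-event simulation used earlier extends to immigration; the sketch below (assumed parameters, X(0) = 0) checks the negative binomial mean and variance:

```python
# In state k the total event rate is k*(lam + mu) + nu; an event increases
# the population (birth or immigration) with probability (k*lam + nu)/rate.
import numpy as np

rng = np.random.default_rng(5)
lam, mu, nu, t, reps = 0.8, 0.5, 1.2, 1.0, 50_000   # assumed parameters

finals = np.empty(reps)
for i in range(reps):
    k, clock = 0, 0.0
    while True:
        rate = k * (lam + mu) + nu
        clock += rng.exponential(1 / rate)
        if clock > t:
            break
        k += 1 if rng.random() < (k * lam + nu) / rate else -1
    finals[i] = k

g = np.exp((lam - mu) * t)
print(finals.mean(), nu * (g - 1) / (lam - mu))                      # Mean
print(finals.var(), nu * (g - 1) * (lam * g - mu) / (lam - mu)**2)   # Variance
```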
5.4 Limiting Behaviour of the Mean and Variance

From E[X(t)] = ν(e^{(λ−μ)t} − 1)/(λ−μ):

if λ < μ, then e^{(λ−μ)t} → 0 as t → ∞, so e^{(λ−μ)t} − 1 → [0 − 1] and

    lim_{t→∞} E[X(t)] = −ν/(λ−μ) = ν/(μ−λ)   if λ < μ

while

    lim_{t→∞} E[X(t)] = ∞   if λ > μ

Also, for λ = μ, E[X(t)] = νt → ∞ as t → ∞.
Further, from V[X(t)] = ν(e^{(λ−μ)t} − 1)(λe^{(λ−μ)t} − μ)/(λ−μ)²:

if λ < μ, then (e^{(λ−μ)t} − 1)(λe^{(λ−μ)t} − μ) → (−1)(−μ) = μ as t → ∞, so

    lim_{t→∞} V[X(t)] = νμ/(μ−λ)²   if λ < μ

while

    lim_{t→∞} V[X(t)] = ∞   if λ > μ

Also, when λ = μ, writing (λe^{(λ−μ)t} − μ)/(λ−μ) = 1 + λ(e^{(λ−μ)t} − 1)/(λ−μ) and expanding,

    V[X(t)] = ν [ t + (λ−μ)t²/2! + ... ] [ 1 + λt + λ(λ−μ)t²/2! + ... ]

so, letting μ → λ,

    V[X(t)] = νt(1 + λt) = νt + νλt²

(for λ = μ = 0 this reduces to V[X(t)] = νt, the Poisson value, as it should).
6 Interarrival Times

6.1 Distribution of the Interarrival Time

With a Poisson process {X(t), t ≥ 0}, where X(t) denotes the number of occurrences of an event E by epoch t, there is associated a random variable: the interval X between two successive occurrences of E. We proceed to show that X has a negative exponential distribution.

Theorem 1. The interval between two successive occurrences of a Poisson process {X(t), t ≥ 0} having parameter λ has a negative exponential distribution with mean 1/λ.

Proof: Let X be the random variable representing the interval between two successive occurrences of {X(t), t ≥ 0} and let Pr(X ≤ x) = F(x) be its distribution function. Let us denote two successive events by E_i and E_{i+1}, and suppose that E_i occurred at the instant t_i. Then X > x means that no occurrence takes place in (t_i, t_i + x], so that

    Pr(X > x) = P[ X(t_i + x) − X(t_i) = 0 ] = e^{−λx}

Hence F(x) = 1 − e^{−λx}, x ≥ 0, and the density is

    d/dx F(x) = λ e^{−λx},   x > 0

which is the negative exponential density with mean 1/λ.
Theorem 2. The waiting time up to the n-th occurrence of a Poisson process with parameter λ has a gamma distribution with parameters λ and n.

Proof: Let Z_n denote the interval between the (n−1)-th and the n-th occurrence of a process {X(t), t ≥ 0}, so that the sequence Z_1, Z_2, ... consists of independently and identically distributed random variables having the negative exponential distribution with mean 1/λ. The sum W_n = Z_1 + Z_2 + ... + Z_n is the waiting time up to the n-th occurrence, i.e. the time from the origin to the n-th subsequent occurrence. W_n has a gamma distribution with parameters λ, n. The p.d.f. g(x) and the distribution function F_{W_n} are given respectively by

    g(x) = λ^n x^{n−1} e^{−λx} / Γ(n),   x > 0

and

    F_{W_n}(t) = P[W_n ≤ t] = ∫_0^t g(x) dx

Since W_n ≤ t holds if and only if X(t) ≥ n,

    F_{W_n}(t) = Σ_{j=n}^{∞} e^{−λt} (λt)^j / j! = 1 − Σ_{j=0}^{n−1} e^{−λt} (λt)^j / j!

Consequently

    P[X(t) = n] = Σ_{j=0}^{n} e^{−λt} (λt)^j / j! − Σ_{j=0}^{n−1} e^{−λt} (λt)^j / j! = e^{−λt} (λt)^n / n!,   n = 0, 1, 2, ...

recovering the Poisson law of X(t).
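A quick numerical check of Theorems 1 and 2 (parameters assumed for illustration): the simulated waiting time W_n matches both the gamma c.d.f. and the Poisson tail sum:

```python
# W_n = sum of n exponential(1/lam) gaps; P[W_n <= t] = P[X(t) >= n].
import numpy as np
from scipy.stats import gamma, poisson

rng = np.random.default_rng(6)
lam, n, t, reps = 2.0, 4, 1.5, 500_000    # assumed parameters

w_n = rng.exponential(1 / lam, size=(reps, n)).sum(axis=1)

print(float(np.mean(w_n <= t)))           # empirical P[W_n <= t]
print(gamma.cdf(t, a=n, scale=1 / lam))   # gamma(lam, n) c.d.f.
print(1 - poisson.cdf(n - 1, lam * t))    # 1 - sum_{j<n} e^{-lt}(lt)^j/j!
```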
7 Poisson Process and Related Distributions

7.1 Sum of Two Independent Poisson Processes

Let X_1(t) and X_2(t) be two independent Poisson processes with parameters λ_1 and λ_2 respectively, and let X(t) = X_1(t) + X_2(t). Then

    P[X(t) = n] = Σ_{r=0}^{n} P[X_1(t) = r] P[X_2(t) = n−r]
                = Σ_{r=0}^{n} ( e^{−λ_1 t} (λ_1 t)^r / r! ) ( e^{−λ_2 t} (λ_2 t)^{n−r} / (n−r)! )
                = e^{−(λ_1+λ_2)t} ((λ_1+λ_2)t)^n / n!,   n = 0, 1, 2, ...        (65)

i.e. X(t) is a Poisson process with parameter λ_1 + λ_2.
7.2 Difference of Two Independent Poisson Processes

Let X_1(t) and X_2(t) be two independent Poisson processes with parameters λ_1 and λ_2 respectively, and let X(t) = X_1(t) − X_2(t). Then

    P[X(t) = n] = e^{−(λ_1+λ_2)t} (λ_1/λ_2)^{n/2} I_{|n|}(2t √(λ_1 λ_2)),   n = 0, ±1, ±2, ...        (66)

where

    I_n(x) = Σ_{r=0}^{∞} (x/2)^{2r+n} / [ r! Γ(r+n+1) ]        (67)

is the modified Bessel function of order n.
For n ≥ 0,

    P[X(t) = n] = Σ_{r=0}^{∞} P[X_1(t) = n+r] P[X_2(t) = r]        (68)
                = Σ_{r=0}^{∞} ( e^{−λ_1 t} (λ_1 t)^{n+r} / (n+r)! ) ( e^{−λ_2 t} (λ_2 t)^r / r! )
                = e^{−(λ_1+λ_2)t} Σ_{r=0}^{∞} (λ_1 λ_2 t²)^r (λ_1 t)^n / [ (n+r)! r! ]

Now,

    (λ_1 λ_2 t²)^r (λ_1 t)^n = (t √(λ_1 λ_2))^{2r} (λ_1 t)^n
        = (t √(λ_1 λ_2))^{2r+n} λ_1^{n/2} / λ_2^{n/2}
        = (λ_1/λ_2)^{n/2} (t √(λ_1 λ_2))^{2r+n}

Thus,

    P[X(t) = n] = e^{−(λ_1+λ_2)t} (λ_1/λ_2)^{n/2} Σ_{r=0}^{∞} (t √(λ_1 λ_2))^{2r+n} / [ r! (r+n)! ]
                = e^{−(λ_1+λ_2)t} (λ_1/λ_2)^{n/2} I_{|n|}(2t √(λ_1 λ_2))

The case n < 0 follows similarly.
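Formula (66) is the Skellam distribution, which SciPy implements directly; a check sketch with assumed rates:

```python
# Compare (66), built from the modified Bessel function, with scipy's Skellam pmf.
import numpy as np
from scipy.stats import skellam
from scipy.special import iv          # modified Bessel function I_n

lam1, lam2, t = 2.0, 1.5, 1.3         # assumed rates and time
mu1, mu2 = lam1 * t, lam2 * t

for n in range(-3, 4):
    formula = np.exp(-(mu1 + mu2)) * (lam1 / lam2) ** (n / 2) \
              * iv(abs(n), 2 * t * np.sqrt(lam1 * lam2))
    print(n, formula, skellam.pmf(n, mu1, mu2))   # the two columns agree
```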
7.3 Decomposition of a Poisson Process

Suppose that each occurrence of a Poisson process {X(t)} with parameter λ is, independently of the others, of type 1 with probability p and of type 2 with probability q = 1 − p. Let M(t) denote the number of type-1 occurrences by time t, and let A_r be the event that X(t) = n + r and exactly n of these n + r occurrences are of type 1. Then

    P[M(t) = n] = Σ_{r=0}^{∞} P[A_r]
                = Σ_{r=0}^{∞} ( e^{−λt} (λt)^{n+r} / (n+r)! ) C(n+r, n) p^n q^r
                = e^{−λt} Σ_{r=0}^{∞} (λpt)^n (λqt)^r / (n! r!)
                = e^{−λt} ( (λpt)^n / n! ) Σ_{r=0}^{∞} (λqt)^r / r!
                = e^{−λt} ( (λpt)^n / n! ) e^{λqt}
                = e^{−λpt} (λpt)^n / n!

Thus M(t) is itself a Poisson process with parameter λp.
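A thinning sketch with assumed parameters: marking each occurrence as type 1 with probability p leaves a Poisson count with mean λpt:

```python
# Binomial thinning of a Poisson count is again Poisson, with mean lam*p*t.
import numpy as np

rng = np.random.default_rng(7)
lam, p, t, reps = 3.0, 0.4, 2.0, 500_000   # assumed parameters

x = rng.poisson(lam * t, size=reps)     # total number of occurrences by time t
m = rng.binomial(x, p)                  # each kept independently w.p. p

print(m.mean(), m.var(), lam * p * t)   # mean and variance both ~ 2.4
```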