
# Chapter III

Random Variable
Defn: Let S be the sample space of an experiment. A random variable X is a real-valued function defined on S. In other words, for every point s in S, X(s) is a real number:
X : S → R, s ↦ X(s).
Discrete & Continuous random variables
Discrete random variable: A random variable X is discrete if the values of X are finite or countably infinite, i.e. the range of X is finite or countably infinite.
The behavior of the random variable is described by:
(i) the probability density f(x),
(ii) the cumulative distribution function F(x).
Random variable

| Discrete r.v. | Continuous r.v. |
| --- | --- |
| Possible values are finite or countably infinite. | All possible values form an interval of positive length. |
| They can be arranged as a finite or infinite sequence. | They cannot be arranged as a sequence. |
Discrete Probability density

Definition: The probability density function of a discrete random variable X is the function f defined by f(x) = P(X = x) for all real x.

Rajiv
From the density, one can evaluate the probability of any event C of the sample space S, C ⊆ S. Let A = {X(s) : s ∈ C}. Then
P(C) = P(A) = Σ_{x ∈ A} f(x).
The necessary and sufficient conditions for a function f to be a discrete density function are:
(i) f(x) ≥ 0 for all x, and
(ii) Σ_{all x} f(x) = 1.
Cumulative distribution function F
The cumulative distribution function F of a discrete random variable X with density f is defined as
F(x) = P(X ≤ x)
for any real number x. Thus
F(x_0) = P(X ≤ x_0) = Σ_{x ≤ x_0} f(x)
for any real number x_0, where we sum over all values of X that occur with nonzero probability.
The density and the cumulative distribution function determine each other. If the random variable takes integer values, then f(n) = F(n) - F(n-1).
If F is the CDF (cumulative distribution function) of a discrete random variable X, then
P(a < X ≤ c) = P(X ≤ c) - P(X ≤ a) = F(c) - F(a),
since the event {X ≤ a} is a subset of the event {X ≤ c}.
In such a situation, the cumulative distribution function of a discrete random variable is a step function; its values change at the points where the density is positive.
F(x) is non-decreasing, and
lim_{x → ∞} F(x) = 1, lim_{x → -∞} F(x) = 0.
Tabular way of defining the density: tabulate the values of the density at the points where it is nonzero.
Tabular way of defining the cumulative distribution function: tabulate the values of F(x) where the steps change.
If a discrete random variable X takes values x_1, x_2, ..., x_m satisfying x_1 < x_2 < ... < x_m, and the CDF F(x) is known at each x_i, 1 ≤ i ≤ m, then
f(x_1) = F(x_1) and f(x_i) = F(x_i) - F(x_{i-1}) for 2 ≤ i ≤ m.
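This recipe can be sketched in a few lines of Python; the step points and CDF values below are hypothetical, chosen only for illustration.

```python
# Recover a discrete density f from a tabulated CDF F:
# f(x1) = F(x1) and f(xi) = F(xi) - F(x(i-1)) for 2 <= i <= m.
xs = [1, 2, 4, 7]             # hypothetical step points x1 < x2 < ... < xm
Fs = [0.1, 0.4, 0.8, 1.0]     # hypothetical CDF values F(xi)

fs = [Fs[0]] + [Fs[i] - Fs[i - 1] for i in range(1, len(Fs))]
# fs gives the density at the step points; it sums to F(xm) = 1
```

Because F is a step function, the differences between consecutive tabulated values are exactly the probability masses at the step points.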
Tested: The CDF F(x) of a random variable X is given in tabular form as

| x | -1 | 0 | 1 | 3 |
| --- | --- | --- | --- | --- |
| F(x) | 1/3 | 1/2 | 2/3 | 1 |

(i) Find the probability density function f(x) for all x.
(ii) Find P(2 < X ≤ 3) and P(2 ≤ X < 3).
(iii) Find F(-2) and F(4).
(iv) Find P(X < 3) and P(X > 0).
Mean or expectation of a discrete random variable
In many problems it is desirable to summarize some feature of a random variable by a single number that can be computed from its probability density function. The mean, or expectation, is one such number.
Defn: The mean or expectation of a discrete random variable X is
μ_X = E(X) = Σ_{all x} x f(x) = Σ_{all x} x P(X = x).
Remark: If f(x) is nonzero for only finitely many values of x, then E(X) always exists; otherwise E(X) exists provided Σ_{all x} |x| f(x) < ∞.
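As a quick sketch of the definition (the density values below are made up for illustration):

```python
# E[X] = sum of x * f(x) over all x with f(x) > 0.
density = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}   # hypothetical density table

assert abs(sum(density.values()) - 1.0) < 1e-12   # f is a valid density

mean = sum(x * f for x, f in density.items())
# mean = 0*0.1 + 1*0.3 + 2*0.4 + 3*0.2 = 1.7
```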
Expectation:
Def: Let X be a discrete random variable and g(X) a function of X. Then the expected value of g(X), denoted by E(g(X)), is defined by
E(g(X)) = Σ_{all x} g(x) f(x),
where f(x) is the density of X, provided Σ_{all x} |g(x)| f(x) is finite.
Properties of mean
Theorem: If X is a random variable and c is a real number, then E[cX] = c E[X].
Proof:
E(cX) = Σ_{all x} c x f(x) = c Σ_{all x} x f(x) = c E(X).
Thm: For real numbers a and c, E[aX + c] = a E[X] + c.
Proof:
E(aX + c) = Σ_{all x} (ax + c) f(x) = a Σ_{all x} x f(x) + c Σ_{all x} f(x) = a E(X) + c,
since Σ_{all x} f(x) = 1.
Variance of a discrete random variable
The mean of a random variable does not give us information about the variability of its values.
Variance and standard deviation
Def: If a discrete random variable X has mean μ, its variance Var(X), or σ^2, is defined by
Var(X) = E[(X - μ)^2].
Defn: The standard deviation σ is the nonnegative square root of Var(X).
Comment:
(i) Var(X) is always nonnegative, if it exists.
(ii) Variance measures the dispersion or variability of X. It is large if values of X far from μ have large probability, i.e. the values of X are likely to be spread out; this points to inconsistency or instability of the random variable.
In fact, if the variance is small, then the values of the random variable are close to the mean. Hence the variability or spread of a random variable can be measured with the help of the variance; essentially, variance is a measure of dispersion. If two random variables have the same mean, then to obtain additional information one may look into their variances.
Comment:
(i) Variances are often needed for comparative purposes, to distinguish between two random variables which may appear to be identical.
(ii) The standard deviation has the advantage of having the same unit as the original data.
Properties of variance
Theorem: Var[X] = E[X^2] - (E[X])^2.
Proof:
Var[X] = E[(X - μ)^2]
= E[X^2 - 2μX + μ^2]
= E[X^2] - 2μ(μ) + μ^2
= E[X^2] - μ^2 = E[X^2] - (E[X])^2.
Theorem: For real numbers a and c, Var[aX + c] = a^2 Var[X].
Proof: Let Y = aX + c. Then
Var(Y) = E[(Y - E[Y])^2]
= E[(aX + c - aE[X] - c)^2]
= E[(aX - aμ_X)^2] = E[a^2 (X - μ_X)^2]
= a^2 E[(X - μ_X)^2] = a^2 Var(X).
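Both variance properties can be checked numerically on a small density table (the values here are hypothetical, chosen only for illustration):

```python
# Check Var[X] = E[X^2] - (E[X])^2 and Var[aX + c] = a^2 Var[X].
density = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}   # hypothetical density

def moments(d):
    """Return (mean, variance) of a density table via the shortcut formula."""
    ex = sum(x * p for x, p in d.items())
    ex2 = sum(x * x * p for x, p in d.items())
    return ex, ex2 - ex * ex

a, c = 2, 5
mean_x, var_x = moments(density)
# density of Y = aX + c: same probabilities, shifted and scaled values
mean_y, var_y = moments({a * x + c: p for x, p in density.items()})

assert abs(var_y - a * a * var_x) < 1e-9       # Var[aX+c] = a^2 Var[X]
assert abs(mean_y - (a * mean_x + c)) < 1e-9   # E[aX+c] = a E[X] + c
```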
15: The density for X, the number of holes that can be drilled per bit while drilling into limestone, is given by a table over the values
x = 1, 2, 3, 4, 5, 6, 7, 8.
Find F(x), E[X], E[X^2], Var[X], and σ_X. What physical unit is attached to σ_X?
Ordinary moments: For any positive integer r, the r-th ordinary moment of a discrete random variable X with density f(x) is defined to be E[X^r]. Thus for r = 1 we get the mean.
Using the 1st and 2nd ordinary moments, we can evaluate the variance.
There is a tool, the moment generating function (m.g.f.), which helps to evaluate all ordinary moments in one go.
Moment generating function
Definition: Let X be any random variable with density f. The m.g.f. of X is denoted by m_X(t) and is given by
m_X(t) = E(e^{tX}),
provided the expectation is finite for all real numbers t in some open interval (-h, h). The m.g.f. is derived from the density and allows one to calculate the ordinary moments of the distribution easily. It provides a unique identifier for each distribution.
Theorem: If m_X(t) is the m.g.f. for a random variable X, then
d^r m_X(t)/dt^r |_{t=0} = E(X^r).
Proof:
e^{tX} = 1 + tX + t^2 X^2/2! + t^3 X^3/3! + ... + t^n X^n/n! + ...
Taking expectations term by term,
m_X(t) = 1 + t E[X] + t^2 E[X^2]/2! + t^3 E[X^3]/3! + ... + t^n E[X^n]/n! + ...
Differentiating once,
dm_X(t)/dt = E[X] + 2t E[X^2]/2! + t^2 E[X^3]/2! + ... + n t^{n-1} E[X^n]/n! + ...
so
dm_X(t)/dt |_{t=0} = E(X).
Differentiating again,
d^2 m_X(t)/dt^2 = 2 E[X^2]/2! + 2t E[X^3]/2! + ... + n(n-1) t^{n-2} E[X^n]/n! + ...
so
d^2 m_X(t)/dt^2 |_{t=0} = E(X^2).
In general,
d^r m_X(t)/dt^r = E[X^r] + t E[X^{r+1}] + ... + t^{n-r} E[X^n]/(n-r)! + ...
Now put t = 0 to get the result:
d^r m_X(t)/dt^r |_{t=0} = E(X^r).
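The theorem can be illustrated numerically: finite-difference approximations of the first two derivatives of the m.g.f. at t = 0 should land on E[X] and E[X^2]. The density table and the step size h below are arbitrary illustrative choices.

```python
import math

# Numerical check that (d^r/dt^r) m_X(t) at t = 0 equals E[X^r] (r = 1, 2).
density = {1: 0.2, 2: 0.5, 3: 0.3}   # hypothetical density

def mgf(t):
    # m_X(t) = E[e^{tX}] = sum of e^{tx} f(x)
    return sum(math.exp(t * x) * f for x, f in density.items())

h = 1e-4
d1 = (mgf(h) - mgf(-h)) / (2 * h)                # central difference ~ m'(0)
d2 = (mgf(h) - 2 * mgf(0.0) + mgf(-h)) / h**2    # second difference ~ m''(0)

EX  = sum(x * f for x, f in density.items())      # 2.1
EX2 = sum(x * x * f for x, f in density.items())  # 4.9
# d1 ≈ EX and d2 ≈ EX2 up to finite-difference error
```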
Geometric distribution
Suppose we perform a sequence of identical and independent Bernoulli trials, where the probability of success, p, remains the same from trial to trial. Then
X = the number of trials needed to get the first success
is a discrete random variable called a geometric random variable. Its probability distribution is called the geometric distribution.
For X a geometric random variable with 0 < p < 1 and q = 1 - p,
f(x) = q^{x-1} p for x = 1, 2, 3, ..., and f(x) = 0 otherwise.
CDF F(x):
F(x) = 0 for x < 1, and F(x) = 1 - q^{[x]} for x ≥ 1,
where [x] is the greatest integer ≤ x.
We also proved in the last class that E(X) = 1/p.
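A minimal sketch of the geometric density and CDF (p = 0.25 is an arbitrary illustrative value):

```python
import math

# Geometric: f(x) = q^(x-1) * p for x = 1, 2, 3, ... and
# F(x) = 1 - q^[x] for x >= 1 (0 for x < 1), with q = 1 - p.
p = 0.25   # hypothetical success probability
q = 1 - p

def f(x):
    return q ** (x - 1) * p if x >= 1 else 0.0

def F(x):
    return 1.0 - q ** math.floor(x) if x >= 1 else 0.0

# F(3) agrees with f(1) + f(2) + f(3)
```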
Theorem: The m.g.f. of a geometric random variable X with parameter p, 0 < p < 1, is
m_X(t) = p e^t / (1 - q e^t), for t < -ln q,
where q = 1 - p.
Proof: The density for X is given by
f(x) = q^{x-1} p, x = 1, 2, 3, ...
so
m_X(t) = E[e^{tX}] = Σ_{all x} e^{tx} f(x) = Σ_{x=1}^{∞} e^{tx} p q^{x-1} = p q^{-1} Σ_{x=1}^{∞} (q e^t)^x.
The summation on the right is a geometric series with first term q e^t and common ratio q e^t.

Thus,
m_X(t) = p q^{-1} [ q e^t / (1 - q e^t) ] = p e^t / (1 - q e^t),
provided |r| = |q e^t| < 1. Since the exponential function is nonnegative and 0 < q < 1, this restriction means q e^t < 1. The inequality for t is solved as follows: q e^t < 1 implies e^t < 1/q, so
ln(e^t) < ln(1/q) ⟹ t < ln 1 - ln q ⟹ t < -ln q.
Theorem: Let X be a geometric random variable with parameter p. Then
E[X] = 1/p and Var[X] = q/p^2.
Proof: m_X(t) = p e^t / (1 - q e^t), so
dm_X(t)/dt = p e^t / (1 - q e^t)^2,
and hence
E[X] = dm_X(t)/dt |_{t=0} = p / (1 - q)^2 = 1/p.
Writing dm_X(t)/dt = (p e^t)(1 - q e^t)^{-2},
d^2 m_X(t)/dt^2 = (p e^t)(1 - q e^t)^{-2} + (p e^t)(-2)(1 - q e^t)^{-3}(-q e^t),
so
E[X^2] = d^2 m_X(t)/dt^2 |_{t=0} = p(1 - q)^{-2} + 2pq(1 - q)^{-3} = 1/p + 2q/p^2.
Therefore
Var[X] = E[X^2] - (E[X])^2
= (1/p) + (2q/p^2) - (1/p^2)
= (p + 2q - 1)/p^2
= q/p^2.
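Both formulas can be checked against a long truncated series sum (p = 0.4 is an arbitrary illustrative value; the truncation at 2000 terms leaves a negligible tail):

```python
# Numeric check of E[X] = 1/p and Var[X] = q/p^2 for the geometric
# distribution, via a truncated series.
p = 0.4   # hypothetical success probability
q = 1 - p

EX  = sum(x * q ** (x - 1) * p for x in range(1, 2000))
EX2 = sum(x * x * q ** (x - 1) * p for x in range(1, 2000))
var = EX2 - EX * EX
# EX ≈ 1/p = 2.5 and var ≈ q/p^2 = 3.75
```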
79: In a video game the player attempts to capture a treasure lying behind one of five doors. The location of the treasure varies randomly, in such a way that at any given time it is just as likely to be behind one door as any other. When the player knocks on a given door, the treasure is his if it lies behind that door; otherwise he must return to his original starting point and approach the doors through a dangerous maze again. If the treasure is captured, the game ends. Let X be the number of trials needed to capture the treasure. Find the average number of trials needed to capture the treasure. Find P(X ≤ 3) and P(X > 3).
3.2.10: It is known that the probability of being able to log on to a computer from a remote terminal at any given time is 0.7. Let X be the number of attempts that must be made to gain access to the computer.
(i) Find the pdf f of X, and identify the random variable and its parameters.
(ii) Find the probability that at most 4 attempts are required to access the computer.
Bernoulli trials
A trial which has exactly two possible outcomes, success s and failure f, is called a Bernoulli trial.
For any random experiment, if we are only interested in the occurrence or non-occurrence of a particular event, we can treat it as a Bernoulli trial.
Thus if we roll a die and success is getting an even number, we can call that a Bernoulli trial.
Binomial random variable
Let an experiment consist of a fixed number n of Bernoulli trials. Assume all trials are identical and independent; thus p, the probability of success, is the same on every trial. Let
X = the number of successes in these n trials.
What is P(X = x)?
A discrete random variable X has a binomial distribution with parameters n and p, where n is a positive integer and 0 < p < 1, if its density function is
f(x) = P(X = x) = C(n, x) p^x (1 - p)^{n-x}, x = 0, 1, 2, ..., n,
where C(n, x) = n!/(x!(n - x)!) is the binomial coefficient.
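The binomial density is easy to sketch directly (n = 10 and p = 0.3 are arbitrary illustrative values):

```python
import math

# Binomial density f(x) = C(n, x) p^x (1-p)^(n-x), x = 0, 1, ..., n.
n, p = 10, 0.3   # hypothetical parameters

def f(x):
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

total = sum(f(x) for x in range(n + 1))   # the density sums to 1
```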
C.D.F. of the binomial distribution
It is difficult to write an explicit formula, so values are given in Table I, App. A, pp. 687-690.
From the C.D.F. F(x) we can recover the density:
f(x) = F(x) - F(x-1) for x = 1, 2, ..., n, and f(0) = F(0).
3.5.3: Let X represent the number of signals properly identified in a 30-minute time period in which 10 signals are received. Assume that any signal is identified with probability p = 0.3 and that identifications of signals are independent of each other.
(i) Find the probability that at most seven signals are identified correctly.
(ii) Find the probability that at most 7 and at least 2 signals are identified correctly.
Theorem: Let X be a binomial random variable with parameters n and p. Then
1) The m.g.f. of X is m_X(t) = (q + p e^t)^n, with q = 1 - p.
2) E[X] = np and Var[X] = npq.
Proof:
m_X(t) = E[e^{tX}] = Σ_{x=0}^{n} C(n, x) p^x (1 - p)^{n-x} e^{tx}
= Σ_{x=0}^{n} C(n, x) (1 - p)^{n-x} (p e^t)^x
= (q + p e^t)^n, where q = 1 - p.
Since m_X(t) = (q + p e^t)^n,
E[X] = dm_X(t)/dt |_{t=0} = n(q + p e^t)^{n-1} p e^t |_{t=0} = np(q + p)^{n-1} = np.
E[X^2] = d^2 m_X(t)/dt^2 |_{t=0} = d[ n p e^t (q + p e^t)^{n-1} ]/dt |_{t=0}
= [ n(n-1) p^2 e^{2t} (q + p e^t)^{n-2} + n p e^t (q + p e^t)^{n-1} ] |_{t=0}
= n(n-1) p^2 + np.
Thus
Var[X] = E[X^2] - (E[X])^2
= n^2 p^2 + np - n p^2 - n^2 p^2
= np(1 - p) = npq.
Variance of a binomial random variable (via E[X(X-1)]):
E(X(X-1)) = Σ_{x=0}^{n} x(x-1) C(n, x) p^x q^{n-x}
= Σ_{x=2}^{n} x(x-1) [ n!/(x!(n-x)!) ] p^x q^{n-x}
= Σ_{x=2}^{n} [ n!/((x-2)!(n-x)!) ] p^x q^{n-x}.
Let x = y + 2:
= Σ_{y=0}^{n-2} [ n!/(y!(n-y-2)!) ] p^{y+2} q^{n-y-2}
= n(n-1) p^2 Σ_{y=0}^{n-2} [ (n-2)!/(y!(n-y-2)!) ] p^y q^{n-y-2}
= n(n-1) p^2 (p + q)^{n-2}
= n(n-1) p^2.
That is, E(X(X-1)) = E(X^2) - E(X) = n(n-1) p^2, so
E(X^2) = n(n-1) p^2 + np,
and
Var(X) = n(n-1) p^2 + np - n^2 p^2 = npq.
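Either derivation can be verified numerically by summing over the density (n = 12 and p = 0.25 are arbitrary illustrative values):

```python
import math

# Numeric check of E[X] = np and Var[X] = npq for a binomial variable.
n, p = 12, 0.25   # hypothetical parameters
q = 1 - p
f = [math.comb(n, x) * p ** x * q ** (n - x) for x in range(n + 1)]

EX  = sum(x * fx for x, fx in enumerate(f))
EX2 = sum(x * x * fx for x, fx in enumerate(f))
var = EX2 - EX * EX
# EX ≈ np = 3 and var ≈ npq = 2.25
```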
3.4.25: The zinc phosphate coating on the threads of steel tubes used in oil and gas wells is critical to their performance. To monitor the coating process, an uncoated metal sample with known outside area is weighed and treated along with the lot of tubing. This sample is then stripped and reweighed. From this it is possible to determine whether or not the proper amount of coating was applied to the tubing.
Assume that the probability that a given lot is unacceptable is 0.05. Let X denote the number of runs conducted to produce an unacceptable lot. Assume that the runs are independent in the sense that the outcome of one run has no effect on that of any other.
(i) Is X geometric? What is success? p = ?
(ii) What are the density, E[X], E[X^2], σ^2, and the m.g.f.? Find the probability that the number of runs required to produce an unacceptable lot is at least 3.
Theorem: Let X be a geometric random variable with density function (0 < p < 1)
f(x) = q^{x-1} p for x = 1, 2, 3, ..., and f(x) = 0 otherwise.
Find E(X) and Var(X).

Since Σ_{x=0}^{∞} q^x = 1/(1 - q), differentiating with respect to q gives
Σ_{x=1}^{∞} x q^{x-1} = 1/(1 - q)^2  ...(i)
Hence
E(X) = Σ_{x=1}^{∞} x p q^{x-1} = p/(1 - q)^2 = 1/p.
Differentiating (i) again with respect to q,
d/dq [ Σ_{x=1}^{∞} x q^{x-1} ] = d/dq [ 1/(1 - q)^2 ],
so
Σ_{x=2}^{∞} x(x-1) q^{x-2} = 2/(1 - q)^3  ...(ii)
Multiplying (ii) by pq,
Σ_{x=1}^{∞} x(x-1) q^{x-1} p = 2pq/(1 - q)^3 = 2q/p^2 = E(X(X-1)).
Thus
E(X^2) - E(X) = 2q/p^2,
so
E(X^2) = 2q/p^2 + E(X) = 2q/p^2 + 1/p,
and
Var(X) = E(X^2) - (E(X))^2 = 2q/p^2 + 1/p - 1/p^2 = q/p^2.
Point binomial: Assume that an experiment is conducted and that the outcome is considered to be either a success or a failure. Let p denote the probability of success. Define X to be 1 if the experiment is a success and 0 if it is a failure. X is said to have a point binomial distribution (Bernoulli distribution) with parameter p.
3.4.24: The probability that a wildcat well will be productive is 1/13. Assume that a group is drilling wells in various parts of the country, so that the status of one well has no bearing on that of any other. Let X denote the number of wells drilled to obtain the first strike.
(i) Verify that X is geometric, and identify the value of the parameter p.
(The drilling of a well results in a strike, which we call success; no strike is failure. The trials are identical and independent with p = 1/13 for each well, and X = the number of trials needed to obtain the first success.)
(ii) What is the exact expression for the density of X?
(iii) What is the exact expression for the moment generating function of X?
(iv) What are the numerical values of E[X], E[X^2], σ^2, and σ?
Hypergeometric distribution
If we are choosing, without replacement, a sample of size n from N objects of which r are favorable, and X = the number of favorable objects in the sample, then
P[X = x] = C(r, x) C(N-r, n-x) / C(N, n)
if max[0, n-(N-r)] ≤ x ≤ min(n, r), and 0 otherwise.
Definition: A random variable X with integer values has a hypergeometric distribution with parameters N, n, r if its density is
f(x) = C(r, x) C(N-r, n-x) / C(N, n)
if max[0, n-(N-r)] ≤ x ≤ min(n, r).
Equivalently,
f(x) = C(r, x) C(N-r, n-x) / C(N, n) = P(X = x), x = 0, 1, ..., n,
if we adopt the convention that C(q, d) = 0 when d > q.
Theorem: If X is a hypergeometric random variable with parameters N, n, r, then
i. E[X] = n(r/N)
ii. Var[X] = n (r/N) [(N-r)/N] [(N-n)/(N-1)]
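Both formulas can be verified by summing directly over the density (N = 20, r = 7, n = 5 are arbitrary illustrative values):

```python
import math

# Hypergeometric density with a numeric check of the mean and variance
# formulas E[X] = n r/N and Var[X] = n (r/N)((N-r)/N)((N-n)/(N-1)).
N, r, n = 20, 7, 5   # hypothetical parameters

def f(x):
    return math.comb(r, x) * math.comb(N - r, n - x) / math.comb(N, n)

lo, hi = max(0, n - (N - r)), min(n, r)   # range of possible values
EX  = sum(x * f(x) for x in range(lo, hi + 1))
EX2 = sum(x * x * f(x) for x in range(lo, hi + 1))
var = EX2 - EX * EX
# EX ≈ n*r/N and var ≈ n*(r/N)*((N-r)/N)*((N-n)/(N-1))
```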
54: If X is hypergeometric with N = 20, r = 17, and n = 10, what are the possible values of X? Find E(X) and Var(X).
P(X = x) = C(17, x) C(3, 10-x) / C(20, 10), x = 7, 8, ..., 10.
Then use:
i. E[X] = n(r/N)
ii. Var[X] = n (r/N) [(N-r)/N] [(N-n)/(N-1)]
55: If X is hypergeometric with N = 20, r = 3, and n = 5, what are the possible values of X?
P(X = x) = C(3, x) C(17, 5-x) / C(20, 5), x = 0, 1, 2, 3.

We shall prove that
E(X) = Σ_{x=0}^{n} x C(r, x) C(N-r, n-x) / C(N, n) = n r/N,
taking help of
Σ_{x=0}^{n-1} C(r-1, x) C(N-r, n-1-x) / C(N-1, n-1) = 1 (how?)
and the identity
n C(N, n) = N C(N-1, n-1).
Indeed,
E(X) = Σ_{x=0}^{n} x C(r, x) C(N-r, n-x) / C(N, n)
= Σ_{x=1}^{n} [ r!/((x-1)!(r-x)!) ] C(N-r, n-x) / [ (N/n) C(N-1, n-1) ]
(using x C(r, x) = r!/((x-1)!(r-x)!) and C(N, n) = (N/n) C(N-1, n-1))
= Σ_{x=0}^{n-1} [ r (r-1)!/(x!(r-1-x)!) ] C(N-r, n-1-x) / [ (N/n) C(N-1, n-1) ]
(shifting the index x → x + 1)
= r (n/N) Σ_{x=0}^{n-1} C(r-1, x) C(N-r, n-1-x) / C(N-1, n-1)
= n r/N.
Hypergeometric → binomial
When the sample size n is small compared to the population size N, we can use the binomial distribution even when sampling is without replacement. This is done if n/N ≤ 0.05. The parameters are n and p = r/N.
If n is small relative to N, then the composition of the sampled group does not change much from trial to trial, even though we are not replacing the sampled items.
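The quality of this approximation can be seen by comparing the two densities in a case with n/N well under 0.05 (N = 1000, r = 300, n = 20 are arbitrary illustrative values, giving n/N = 0.02):

```python
import math

# Compare the exact hypergeometric density with its binomial
# approximation using p = r/N.
N, r, n = 1000, 300, 20   # hypothetical parameters, n/N = 0.02
p = r / N

def hyper(x):
    return math.comb(r, x) * math.comb(N - r, n - x) / math.comb(N, n)

def binom(x):
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

max_gap = max(abs(hyper(x) - binom(x)) for x in range(n + 1))
# max_gap stays small when n/N is small
```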
Q.60 (3.7): A random telephone poll is conducted to ascertain public opinion concerning the construction of a nuclear plant in a particular community. Assume there are 150,000 numbers listed for private individuals and that 90,000 of these would elicit a negative response if contacted. Let X be the number of negative responses obtained in 15 calls.
(i) Find the density for X. (ii) Find E(X) and Var(X).
(iii) Set up the calculations to find P(X ≤ 6).
(iv) Approximate P(X ≤ 6) through the binomial table.
Poisson Distribution
Let k > 0 be a constant and define
f(x) = e^{-k} k^x / x! for x = 0, 1, 2, ..., and f(x) = 0 otherwise.
Theorem: f is a density function.
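That f sums to 1 follows from Σ k^x/x! = e^k; a quick numeric check (k = 3 is an arbitrary illustrative value):

```python
import math

# Poisson density f(x) = e^(-k) k^x / x!; total mass is e^(-k) * e^k = 1.
k = 3.0   # hypothetical parameter

def f(x):
    return math.exp(-k) * k ** x / math.factorial(x)

total = sum(f(x) for x in range(100))   # truncated series, numerically ≈ 1
```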

Theorem: The m.g.f. of a Poisson random variable X with parameter k > 0 is
m_X(t) = e^{k(e^t - 1)},
and E[X] = k, Var[X] = k.
Proof:
m_X(t) = E(e^{tX}) = Σ_{x=0}^{∞} e^{tx} e^{-k} k^x / x!
= e^{-k} Σ_{x=0}^{∞} (k e^t)^x / x!
= e^{-k} e^{k e^t} = e^{k(e^t - 1)}.
Differentiating,
dm_X(t)/dt = e^{k(e^t - 1)} k e^t = k e^{k(e^t - 1) + t},
so
E(X) = dm_X(t)/dt |_{t=0} = k e^{k(e^0 - 1) + 0} = k.
Differentiating again,
d^2 m_X(t)/dt^2 = d/dt [ k e^{k(e^t - 1) + t} ] = k e^{k(e^t - 1) + t} (k e^t + 1),
so
E(X^2) = d^2 m_X(t)/dt^2 |_{t=0} = k e^0 (k + 1) = k(k + 1).
Hence
Var(X) = E(X^2) - (E(X))^2 = k(k + 1) - k^2 = k.
Poisson process: A process occurring discretely over a continuous interval of time, length, or space is called a Poisson process.
λ = the average number of occurrences of the event in one unit (of time, length, or space).
Examples of Poisson processes
1. Number of errors on a page in a book.
2. Number of customers entering a café in an hour.
3. Number of deaths in a given period of time among the policyholders of a life insurance company.
4. Number of electrons emitted from a heated source in a fixed time period.
Steps in solving a Poisson problem
1. Determine the basic unit of measurement.
2. Determine the average number of occurrences of the event per unit, i.e. λ.
3. Determine the length or size of the observation period, i.e. s.
4. The random variable X, the number of occurrences of the event in the interval of size s, has a Poisson distribution with parameter k = λs.
λ = the average number of occurrences of an event in one unit (of time, length, or space).
Let X = the number of times the discrete event occurs in a given interval of length s units, under the assumptions of a Poisson process. Then X has a Poisson distribution with parameter k = λs.
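These steps can be sketched end to end; the rate and window size below are hypothetical, chosen only for illustration.

```python
import math

# Poisson process sketch: lam = average occurrences per unit,
# s = size of the observation window, so X ~ Poisson(k) with k = lam * s.
# Hypothetical numbers: 2 errors per page on average, a 3-page window.
lam, s = 2.0, 3.0
k = lam * s                                   # k = 6

def f(x):
    return math.exp(-k) * k ** x / math.factorial(x)

p_at_most_1 = f(0) + f(1)                     # P(X <= 1) = e^(-6)(1 + 6)
```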
C.D.F. of the Poisson distribution
Provided by the table on p. 692. Values of k = λs, the parameter of the Poisson distribution, correspond to columns; values t of the random variable correspond to rows; and the values of the cdf F(t) are the entries inside the table.
3.8, Q 63: Geophysicists determine the age of a zircon by counting the number of uranium fission tracks on a polished surface. A particular zircon is of such an age that the average number of tracks per square centimeter is five. What is the probability that a 2-centimeter-square sample of this zircon will reveal at most 3 tracks?
71: A bacterium often found in the human digestive tract can mutate from being streptomycin sensitive to being streptomycin resistant, which can cause the individual involved to become resistant to the antibiotic streptomycin. Assume that there is an average of two streptomycin-resistant bacteria on cultures drawn from a particular patient. Each culture has an area of 80 square centimeters. Find the probability that a one-square-centimeter random sample from a single culture will contain at least one resistant bacterium.
75 (Review problems): A large microprocessor chip contains multiple copies of circuits. If a circuit fails, the chip knows how to select the proper logic to repair itself. The average number of defects per chip is 300. Find the probability that 10 or fewer defects will be found in a randomly selected region that comprises 5% of the total surface area.
What is the probability in question?
Poisson approximation to the binomial:
If a binomial random variable X has parameter p very small and n large, so that np = k is moderate, then X can be approximated by a Poisson random variable Y with parameter k.
Usually the language of the question will mention that the probability is to be found using the Poisson approximation.
Tested: An order of 3000 parts is received. The probability that a part is defective is 1/1000. Use the Poisson approximation to find the probability that there will be five or more defective parts.
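A sketch of the computation for this problem (np = 3, so the Poisson parameter is k = 3; the exact binomial value is computed alongside for comparison):

```python
import math

# Poisson approximation to the binomial: n = 3000, p = 1/1000, k = np = 3.
n, p = 3000, 1 / 1000
k = n * p

def pois(x):
    return math.exp(-k) * k ** x / math.factorial(x)

p_ge_5 = 1.0 - sum(pois(x) for x in range(5))          # P(X >= 5), Poisson

p_ge_5_exact = 1.0 - sum(math.comb(n, x) * p ** x * (1 - p) ** (n - x)
                         for x in range(5))            # exact binomial
# the two values agree closely since p is small and n is large
```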