
PROBABILITY THEORY

Random variables (discrete & continuous), probability density function, cumulative distribution
function. Probability distributions: binomial & Poisson distributions; exponential & normal
distributions.

Random variables
Introduction: In a random experiment the outcomes (results) are governed by a chance
mechanism, and the sample space S of such an experiment consists of all possible outcomes of
the experiment. When the outcomes of the sample space are non-numeric, they can be
quantified by assigning a real number to every event of the sample space. This assignment
rule is known as a random variable or stochastic variable. In other words, a random variable
is a function that assigns a real number to every sample point in the sample space of a
random experiment. Random variables are usually denoted by X, Y, Z, .... The set of all real
values taken by a random variable X is called the range of X.

Example 1: While tossing a coin, suppose the value 1 is associated with the outcome 'head'
and 0 with the outcome 'tail'. The sample space is S = {H, T}, and if X is the random variable
then
X(H) = 1 and X(T) = 0, so the range of X = {0, 1}

Example 2: A pair of fair dice is tossed. The sample space S consists of the 36 ordered pairs
(a, b), where a and b can be any integers between 1 and 6, that is S = {(1,1), (1,2), ..., (6,6)}.
Let X assign to each point (a, b) the maximum of its numbers, that is, X(a, b) = max(a, b). For
example X(1,1) = 1, X(3,4) = 4, X(5,2) = 5.

Then X is a random variable for which any number between 1 and 6 can occur, and no other
number can occur. Thus the range space of X = {1, 2, 3, 4, 5, 6}.

Let Y assign to each point (a, b) the sum of its numbers, that is Y(a, b) = a + b. For example
Y(1,1) = 2, Y(3,4) = 7, Y(6,3) = 9. Then Y is a random variable for which any number between 2
and 12 can occur, and no other number can occur. Thus the range space of
Y = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}.

Discrete random variables

Definition: A random variable is said to be a discrete random variable if its set of possible
outcomes, the sample space S, is countable (finite, or an unending sequence with as many
elements as there are whole numbers).
Example:
1) Tossing a coin and observing the outcome.
2) Tossing a coin and observing the number of heads turning up.
3) Throwing a die and observing the number on the face.

Continuous random variables

Definition: A random variable is said to be a continuous random variable if its sample space S
contains an infinite number of values (typically all values in an interval).

Example:
1) Weight of articles.
2) Length of nails produced by a machine.
3) Observing the pointer on a speedometer/voltmeter.
4) Conducting a survey on the life of electric bulbs.

Generally, counting problems correspond to discrete random variables and measuring
problems lead to continuous random variables.

PROBABILITY DISTRIBUTIONS
Probability distribution is the theoretical counterpart of frequency distribution, and plays an
important role in the theoretical study of populations.

Discrete probability distribution:

Definition: If for each value 𝑥ᵢ of a discrete random variable X we assign a number p(𝑥ᵢ) such
that
i) p(𝑥ᵢ) ≥ 0, ii) ∑ᵢ p(𝑥ᵢ) = 1, then the function p(𝑥) is called a probability function. If the
probability that X takes the value 𝑥ᵢ is 𝑝ᵢ, then P(X = 𝑥ᵢ) = 𝑝ᵢ or p(𝑥ᵢ). The set of pairs [𝑥ᵢ,
p(𝑥ᵢ)] is called the discrete probability distribution of the discrete random variable X. The
function p(𝑥) is called the probability density function (p.d.f.) or, more precisely for the
discrete case, the probability mass function (p.m.f.).

Cumulative distribution function


The distribution function f(𝑥) defined by f(𝑥) = P(X ≤ 𝑥) = ∑_{𝑥ᵢ ≤ 𝑥} p(𝑥ᵢ), the sum being taken
over all values 𝑥ᵢ not exceeding 𝑥, is called the cumulative distribution function (c.d.f.).

The mean and variance of the discrete probability distribution


Mean (𝜇) or expectation E(X) = ∑ᵢ 𝑥ᵢ p(𝑥ᵢ)
Variance (V) = ∑ᵢ (𝑥ᵢ − 𝜇)² p(𝑥ᵢ) = ∑ᵢ 𝑥ᵢ² p(𝑥ᵢ) − [∑ᵢ 𝑥ᵢ p(𝑥ᵢ)]² = ∑ᵢ 𝑥ᵢ² p(𝑥ᵢ) − 𝜇²
Standard deviation (𝜎) = √V

Problem 1: Determine the discrete probability distribution, expectation, variance and standard
deviation of the discrete random variable X which denotes the minimum of the two numbers
that appear when a pair of fair dice is thrown once.

Solution: The total number of cases is 6 × 6 = 36. The minimum number can be
1, 2, 3, 4, 5 or 6, i.e., X(s) = X(a, b) = min{a, b}. The number 6 appears as the minimum only in
one case (6,6), so
P(6) = P(X = 6) = P({(6,6)}) = 1/36

For minimum 5, the favorable cases are (5,5), (5,6), (6,5), so P(5) = P(X = 5) = 3/36.

For minimum 4, the favorable cases are (4,4), (4,5), (4,6), (5,4), (6,4), so P(4) = P(X = 4) = 5/36.

For minimum 3, the favorable cases are (3,3), (3,4), (3,5), (3,6), (4,3), (5,3), (6,3), so
P(3) = P(X = 3) = 7/36.

For minimum 2, the favorable cases are (2,2), (2,3), (2,4), (2,5), (2,6), (3,2), (4,2), (5,2), (6,2), so
P(2) = P(X = 2) = 9/36.

For minimum 1, the favorable cases are (1,1), (1,2), (1,3), (1,4), (1,5), (1,6), (2,1), (3,1), (4,1), (5,1), (6,1),
so P(1) = P(X = 1) = 11/36.

Thus required discrete probability distribution


X=𝑥𝑖 1 2 3 4 5 6
p(𝑥𝑖 ) 11/36 9/36 7/36 5/36 3/36 1/36

Mean = ∑ᵢ 𝑥ᵢ p(𝑥ᵢ) = 1×11/36 + 2×9/36 + 3×7/36 + 4×5/36 + 5×3/36 + 6×1/36 = 91/36 ≈ 2.53

Variance (V) = ∑ᵢ 𝑥ᵢ² p(𝑥ᵢ) − 𝜇²
= 1×11/36 + 4×9/36 + 9×7/36 + 16×5/36 + 25×3/36 + 36×1/36 − (2.53)²
≈ 1.97
Standard deviation = √V ≈ 1.40
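
The distribution above can be checked numerically. The following short Python sketch (not part of the original notes; it assumes only the standard library) enumerates the 36 outcomes and recomputes p(x), the mean, the variance and the standard deviation.

```python
# Verify the distribution of X = min(a, b) for two fair dice (a sketch,
# not part of the notes).
from fractions import Fraction
from math import sqrt

counts = {}
for a in range(1, 7):
    for b in range(1, 7):
        x = min(a, b)
        counts[x] = counts.get(x, 0) + 1

# p(x) as exact fractions of the 36 equally likely outcomes
p = {x: Fraction(c, 36) for x, c in sorted(counts.items())}

mean = sum(x * px for x, px in p.items())               # E(X) = sum x p(x)
var = sum(x**2 * px for x, px in p.items()) - mean**2   # V = E(X^2) - mu^2

print(p)                       # probabilities 11/36, 9/36, ..., 1/36 (in lowest terms)
print(float(mean))             # 2.527...  (= 91/36)
print(float(var), sqrt(var))   # 1.971..., 1.404...
```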

Example 2:The random variable X has the following probability mass function
X 0 1 2 3 4 5
P(X) K 3K 5K 7K 9K 11K
i) find K ii) find P(X<3) iii) find P(3<X≤ 5)(June-July2011)

Solution: If X is a discrete random variable then ∑𝑖 𝑃(𝑥𝑖 ) = 1


⇒ K+3K+5K+7K+9K+11K =1
⇒ 36K=1
⇒ K = 1/36

ii) P(X<3) = P(X=0)+P(X=1)+P(X=2)


= K+3K+5K = 9K = 9/36=1/4

iii) P(3<X≤ 5) = P(X=4)+P(X=5) =9K+11K= 20K= 20/36=5/9

Example 3: The probability distribution of a finite random variable X is given by the


following table:
𝑋𝑖 -2 -1 0 1 2 3
p(𝑋𝑖 ) 0.1 k 0.2 2k 0.3 k
i) Find the value of K and calculate the mean and variance.
ii) EvaluateP(X<1). (July2006)

Solution: If X is a discrete random variable then ∑𝑖 𝑃(𝑥𝑖 ) = 1


⇒0.1+K+0.2+2k+0.3+k =1
⇒ 0.6+4K=1
⇒ 4k=1-0.6=0.4
⇒ K = 0.1

Mean (𝜇) = ∑ᵢ 𝑥ᵢ p(𝑥ᵢ) = (−2)(0.1) + (−1)(0.1) + 0(0.2) + 1(0.2) + 2(0.3) + 3(0.1)
= 0.8
Variance (V) = ∑ᵢ 𝑥ᵢ² p(𝑥ᵢ) − 𝜇²
= 4(0.1) + 1(0.1) + 0(0.2) + 1(0.2) + 4(0.3) + 9(0.1) − (0.8)²
= 2.16

ii)P(X<1) = P(X=-2)+P(X=-1)+P(X=0)
=0.1+0.1+0.2
=0.4

Example 4: A random variable X has the following probability function for various values of x:

x      0   1   2    3    4    5    6     7
P(x)   0   k   2k   2k   3k   k²   2k²   7k²+k
i) Findk ii) Evaluate P(x<6) and P(3<x≤ 6) Also find the probability distribution and the
cumulative distribution function of X

Solution: If X is a discrete random variable then ∑ᵢ P(𝑥ᵢ) = 1 and P(x) ≥ 0

⇒ 0 + k + 2k + 2k + 3k + k² + 2k² + 7k² + k = 1
⇒ 10k² + 9k − 1 = 0
⇒ k = 1/10 or k = −1
If k = −1 the second condition fails, hence k ≠ −1; ∴ k = 1/10
Hence the probability distribution is as follows.
x 0 1 2 3 4 5 6 7
P(x) 0 0.1 0.2 0.2 0.3 0.01 0.02 0.17

P(x<6)=P(0)+P(1)+P(2)+P(3)+P(4)+P(5)
=0+0.1+0.2+0.2+0.3+0.01=0.81
P(3<x≤ 6)=P(4)+P(5)+P(6)
=0.3+0.01+0.02=0.33

Cumulative distribution function of X is as follows.

x      0   1    2    3    4    5     6     7
f(x)   0   0.1  0.3  0.5  0.8  0.81  0.83  1

Among discrete probability distributions we are going to study the geometric distribution & the
Poisson distribution.

GEOMETRIC DISTRIBUTION
Consider a sequence of trials, where each trial has only two possible outcomes (designated
failure and success). The probability of success is assumed to be the same for each trial. In
such a sequence of trials, the geometric distribution is useful to model the number of failures
before the first success. The distribution gives the probability that there are zero failures
before the first success, one failure before the first success, two failures before the first
success, and so on.

Examples:

1.A newly-wed couple plans to have children, and will continue until the first girl. What is
the probability that there are zero boys before the first girl, one boy before the first girl, two
boys before the first girl, and so on?

2.A doctor is seeking an anti-depressant for a newly diagnosed patient. Suppose that, of the
available anti-depressant drugs, the probability that any particular drug will be effective for a
particular patient is p=0.6. What is the probability that the first drug found to be effective for
this patient is the first drug tried, the second drug tried, and so on? What is the expected
number of drugs that will be tried to find one that is effective?
3.A patient is waiting for a suitable matching kidney donor for a transplant. If the probability
that a randomly selected donor is a suitable match is p=0.1, what is the expected number of
donors who will be tested before a matching donor is found?

Definition: If p is the probability of success and 𝑥 is the number of failures preceding the
first success, then the distribution is
p(𝑥) = qˣ p,  𝑥 = 0, 1, 2, 3, ...,  q = 1 − p

Obviously ∑_{x=0}^{∞} P(x) = p ∑_{x=0}^{∞} qˣ = p · 1/(1 − q) = 1

Hence P(x) is a probability function.

Mean and standard deviation of the Geometric distribution


Mean (𝜇) = ∑_{x=0}^{∞} x P(x)
= ∑_{x=0}^{∞} x qˣ p
= p ∑_{x=1}^{∞} x q^{x−1} · q
= pq ∑_{x=1}^{∞} d(qˣ)/dq
= pq d/dq [∑_{x=1}^{∞} qˣ]
= pq d/dq [q/(1 − q)]
= pq [1/(1 − q)²]
𝜇 = q/p

Variance (V) = ∑_{x=0}^{∞} x² P(x) − 𝜇² = q/p²

Problem 1: A marketing representative selects people at random; the probability that a selected
person attended the last home game is p = 0.2. What is the probability that the representative
must select more than 6 people before finding one who attended the last home game?
(Use the c.d.f. of a geometric random variable.)
Solution: P(x > 6) = 1 − P(x ≤ 6) = 1 − (1 − 0.8⁶) = 0.8⁶ = 0.262
There is about a 26% chance.
Problem 2: The probability that a selected person results in a success is 0.20, and let X
denote the number of persons selected up to and including the first success. What is the
probability that the marketing representative must select exactly 4 persons?
Solution: p = 0.20, 1 − p = 0.8, x = 4
P(x = 4) = 0.8³ × 0.20 = 0.1024
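
As a quick numerical check of the formulas and of the two problems above, here is a small Python sketch (an added illustration, not part of the original notes; only the standard library is assumed).

```python
# Geometric distribution p(x) = q^x * p, x = 0, 1, 2, ...  (failures before
# the first success).  A sketch, not part of the original notes.
p, q = 0.2, 0.8

# Mean and variance, truncating the infinite sums at a large cutoff
mean = sum(x * q**x * p for x in range(2000))
var = sum(x**2 * q**x * p for x in range(2000)) - mean**2
print(mean, q / p)        # both ~4.0
print(var, q / p**2)      # both ~20.0

# Problem 1: P(more than 6 selections are needed) = q^6
print(q**6)               # 0.262...

# Problem 2: P(first success on the 4th selection) = q^3 * p
print(q**3 * p)           # 0.1024
```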
POISSON DISTRIBUTION

The Poisson distribution is the discrete probability distribution of a discrete random variable X
which has no upper bound. It is defined for non-negative integer values of x as
P(x) = mˣ e^{−m} / x!,  x = 0, 1, 2, 3, ...
Here m > 0 is called the parameter of the distribution. In the binomial distribution the number
of successes out of a definite total number n of trials is counted, whereas in the Poisson
distribution the number of successes occurring at random points of time or space is counted.
The Poisson distribution is suitable for 'rare' events, for which the probability of occurrence p is
very small and the number of trials n is very large. The binomial distribution can be
approximated by the Poisson distribution when n → ∞ and p → 0 such that m = np remains
constant.

Example of rare events:

i) Number of printing mistakes per page.


ii) Number of accidents on a highway.
iii) Number of bad cheques at a bank.
iv) Number of defectives in a production center.
We have, in the case of the binomial distribution, the probability of x successes out of n trials:
P(x) = nCx pˣ q^{n−x}
= [n(n−1)(n−2) ... (n−(x−1)) / x!] pˣ q^{n−x}
= [n · n(1 − 1/n) · n(1 − 2/n) ... n(1 − (x−1)/n) / x!] pˣ q^{n−x}
= [nˣ (1 − 1/n)(1 − 2/n) ... (1 − (x−1)/n) / x!] pˣ q^{n−x}
= [(np)ˣ (1 − 1/n)(1 − 2/n) ... (1 − (x−1)/n) / (x! qˣ)] qⁿ
But np = m, and qⁿ = (1 − p)ⁿ = (1 − m/n)ⁿ = {(1 − m/n)^{−n/m}}^{−m}. Denoting −m/n = k,
we have qⁿ = {(1 + k)^{1/k}}^{−m} → e^{−m} as n → ∞ (i.e. k → 0).
Further, qˣ = (1 − p)ˣ → 1 for a fixed x as p → 0.
Also the factor (1 − 1/n)(1 − 2/n) ... (1 − (x−1)/n) tends to 1 as n → ∞.
Thus P(x) = mˣ e^{−m} / x!
This is known as the Poisson distribution of the random variable X. P(x) is called the Poisson
probability function and X is called a Poisson variate.
The distribution of probabilities for x = 0, 1, 2, 3, ... is as follows.

x      0        1             2              3              .....
P(x)   e^{−m}   m e^{−m}/1!   m² e^{−m}/2!   m³ e^{−m}/3!   .....

We have P(x) ≥ 0 and
∑_{x=0}^{∞} P(x) = e^{−m} + m e^{−m}/1! + m² e^{−m}/2! + m³ e^{−m}/3! + ...
= e^{−m} {1 + m/1! + m²/2! + m³/3! + ...}
= e^{−m} e^{m} = 1
Hence P(x) is a probability function.

Mean and standard deviation of the Poisson distribution


Mean (𝜇) = ∑_{x=0}^{∞} x P(x)
= ∑_{x=0}^{∞} x mˣ e^{−m} / x!
= ∑_{x=1}^{∞} mˣ e^{−m} / (x−1)!
= m e^{−m} ∑_{x=1}^{∞} m^{x−1}/(x−1)!
= m e^{−m} {1 + m/1! + m²/2! + m³/3! + ...}
= m e^{−m} e^{m}
Mean (𝜇) = m

Standard deviation (𝜎) = √V

Variance (V) = ∑_{x=0}^{∞} x² P(x) − 𝜇²  ......... (1)
Consider ∑_{x=0}^{∞} x² P(x) = ∑_{x=0}^{∞} [x(x−1) + x] mˣ e^{−m} / x!
= ∑_{x=2}^{∞} mˣ e^{−m}/(x−2)! + ∑_{x=1}^{∞} mˣ e^{−m}/(x−1)!
= m² e^{−m} ∑_{x=2}^{∞} m^{x−2}/(x−2)! + m
= m² e^{−m} {1 + m/1! + m²/2! + ...} + m
= m² e^{−m} e^{m} + m
∑_{x=0}^{∞} x² P(x) = m² + m

Equation (1) implies
Variance (V) = m² + m − m² = m
∴ Standard deviation (𝜎) = √m
Mean and variance are equal in the Poisson distribution.
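
The equality of the mean and the variance can also be confirmed numerically; the short Python sketch below (an added illustration, not from the notes) sums the series for an arbitrary parameter m.

```python
# Numerical check that the Poisson distribution P(x) = m^x e^(-m) / x!
# has mean = variance = m.  (Sketch only, not part of the notes.)
from math import exp, factorial

def poisson_pmf(x, m):
    return m**x * exp(-m) / factorial(x)

m = 3.7
xs = range(100)                       # 100 terms is ample for a small m
mean = sum(x * poisson_pmf(x, m) for x in xs)
var = sum(x**2 * poisson_pmf(x, m) for x in xs) - mean**2
print(mean, var)                      # both ~3.7
```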
Problem 1: The probability that an individual suffers a bad reaction from an injection is 0.001.
Find the probability that out of 2000 individuals i) more than 2, ii) exactly 3 will get a bad
reaction.
Solution: As the probability of occurrence is very small, this follows the Poisson distribution and
we have
P(x) = mˣ e^{−m} / x!
Mean m = np = 2000 × 0.001 = 2
i) P(x > 2) = 1 − P(x ≤ 2)
= 1 − [P(x=0) + P(x=1) + P(x=2)]
= 1 − e^{−m} {1 + m/1! + m²/2!}
= 1 − e^{−2} [1 + 2 + 2] = 0.3233
ii) P(x=3) = 2³ e^{−2} / 3! = 0.1804

Problem2:2% of the fuses manufactured by a firm are found to be defective. Find the
probability that a box containing 200 fuses contains i)no defective fuse ii)3 or more
defective fuses.(July-2007)

Solution: By data, the probability of a defective fuse = 2/100 = 0.02
Mean m = np = 200 × 0.02 = 4
Poisson distribution P(x) = mˣ e^{−m} / x! = 4ˣ e^{−4} / x!
i) P(x=0) = e^{−4} = 0.0183
ii) P(x ≥ 3) = 1 − P(x < 3)
= 1 − [P(x=0) + P(x=1) + P(x=2)]
= 1 − e^{−4} [1 + 4/1! + 4²/2!]
= 0.7619

Problem3:There is a chance that 5% of the pages of a book contain typographical errors. If


100 pages of the book are chosen at random, find the probability that 2 of the pages contain
typographical errors, using i)Binomial distribution ii)Poisson distribution.

Solution: i) Binomial distribution

The probability that a chosen page contains typographical errors is p = 5% = 0.05,
q = 1 − 0.05 = 0.95, n = 100
P(x) = nCx pˣ q^{n−x} = 100Cx (0.05)ˣ (0.95)^{100−x}
P(x=2) = 100C2 (0.05)² (0.95)⁹⁸ = 0.081

ii) Poisson distribution

Mean m = np = 100 × 0.05 = 5
P(x) = mˣ e^{−m}/x!
P(x=2) = 5² e^{−5}/2! = 0.084
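
The agreement between the exact binomial value and its Poisson approximation can be reproduced with a few lines of Python (added here as an illustration; only the standard library is assumed).

```python
# Problem 3 revisited: binomial probability of exactly 2 error pages in 100,
# and its Poisson approximation with m = np.  (Sketch, not from the notes.)
from math import comb, exp, factorial

n, p, x = 100, 0.05, 2

binom = comb(n, x) * p**x * (1 - p)**(n - x)
m = n * p
poisson = m**x * exp(-m) / factorial(x)

print(round(binom, 3))    # 0.081
print(round(poisson, 3))  # 0.084
```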

Problem4:If x is a Poisson variate such that P(x=1)=0.2P(x=2).find the mean & evaluate
P(x=0)

Solution: For the Poisson distribution, the p.d.f. is
P(x) = mˣ e^{−m}/x!
P(x=1) = m e^{−m}
P(x=2) = m² e^{−m}/2!
By data P(x=1) = 0.2 P(x=2)
⇒ m e^{−m} = 0.2 · m² e^{−m}/2!
This implies m = 10
∴ P(x) = 10ˣ e^{−10}/x!
P(x=0) = e^{−10}

Continuous probability distribution:

When the number of possible outcomes is infinitely large, the probability that a specific value
will occur is practically zero. For this reason continuous probability statements must be worded
somewhat differently from discrete ones: instead of finding the probability that x equals some
value, we find the probability of x falling in a small interval. In this context we need a
continuous probability function, which is defined as follows.

Definition: If for every x belonging to the range of a continuous random variable X we assign
a real number P(𝑥) satisfying the conditions
i) P(𝑥) ≥ 0
ii) ∫_{−∞}^{∞} P(𝑥) dx = 1,
then P(𝑥) is called a continuous probability function or probability density function (p.d.f.).
If (a, b) is a subinterval of the range space of X, then the probability that 𝑥 lies in (a, b) is
defined to be the integral of P(𝑥) between a and b, i.e.,
P(a ≤ x ≤ b) = ∫_a^b P(𝑥) dx

Cumulative distribution function

If X is a continuous random variable with probability density function P(𝑥), then the function
f(𝑥) defined by f(𝑥) = P(X ≤ 𝑥) = ∫_{−∞}^{𝑥} P(t) dt is called the cumulative distribution
function (c.d.f.) of X.

The mean and variance of the continuous probability distribution

Mean (𝜇) or expectation E(X) = ∫_{−∞}^{∞} x · p(x) dx
Variance (V) = ∫_{−∞}^{∞} (x − 𝜇)² · p(x) dx = ∫_{−∞}^{∞} x² · p(x) dx − 𝜇²

Example 1: A random variable X has the density function
P(x) = kx² for −3 ≤ x ≤ 3, and P(x) = 0 elsewhere. Find k. Also find P(x ≤ 2) and P(x > 1).

Solution: If X is a continuous random variable then i) P(𝑥) ≥ 0 and ii) ∫_{−∞}^{∞} P(𝑥) dx = 1.
That is, ∫_{−3}^{3} kx² dx = 1
⇒ [kx³/3]_{−3}^{3} = 1
⇒ 18k = 1 ⇒ k = 1/18
P(x ≤ 2) = ∫_{−3}^{2} x²/18 dx = (1/18)[x³/3]_{−3}^{2} = 35/54
P(x > 1) = ∫_{1}^{3} x²/18 dx = (1/18)[x³/3]_{1}^{3} = 26/54 = 13/27
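
A simple numerical integration confirms these values; the Python sketch below (an added illustration; the midpoint-rule helper is a hypothetical choice) is one way to check a continuous p.d.f.

```python
# Check Example 1 by numerical integration of P(x) = x^2 / 18 on [-3, 3].
# (Sketch only, not part of the notes.)
def integrate(f, a, b, n=100000):
    # midpoint rule on n subintervals
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

pdf = lambda x: x**2 / 18

print(integrate(pdf, -3, 3))   # ~1.0     (total probability)
print(integrate(pdf, -3, 2))   # ~0.6481  (= 35/54 = P(x <= 2))
print(integrate(pdf, 1, 3))    # ~0.4815  (= 13/27 = P(x > 1))
```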

Example 2: The daily consumption of electric power (in millions of kW-hours) is a random
variable having the p.d.f.
P(x) = (1/9) x e^{−x/3} for x > 0, and P(x) = 0 for x ≤ 0.
If the total production is 12 million kW-hours, determine the probability that there is a power
cut (shortage) on any given day.

Solution: The probability that the power consumed lies between 0 and 12 is
P(0 ≤ x ≤ 12) = ∫_0^{12} P(𝑥) dx = ∫_0^{12} (1/9) x e^{−x/3} dx = [−(x/3) e^{−x/3} − e^{−x/3}]_0^{12} = 1 − 5e^{−4}
The power supply is inadequate if the daily consumption exceeds 12 million kW-hours, i.e.,
P(x > 12) = 1 − P(0 ≤ x ≤ 12) = 1 − [1 − 5e^{−4}] = 5e^{−4} = 0.0916

Example 3: Find the mean and variance of the p.d.f. f(x) = (1/4) e^{−x/4} for x > 0, f(x) = 0
elsewhere.

Solution: Mean = ∫_{−∞}^{∞} x f(x) dx = ∫_0^{∞} (1/4) x e^{−x/4} dx
= (1/4)[−4x e^{−x/4} − 16 e^{−x/4}]_0^{∞} = (1/4)(16) = 4
Variance (V) = ∫_{−∞}^{∞} x² f(x) dx − 𝜇² = ∫_0^{∞} (1/4) x² e^{−x/4} dx − 16
= (1/4)[−4x² e^{−x/4} − 32x e^{−x/4} − 128 e^{−x/4}]_0^{∞} − 16
= 32 − 16 = 16

Among continuous probability distributions we study the normal & exponential distributions.

EXPONENTIAL DISTRIBUTION

Many scientific experiments involve the measurement of the duration of time X between an
initial point of time and the occurrence of some phenomenon of interest. For example, X may be
the lifetime of a light bulb which is turned on and left until it burns out. The continuous random
variable X having the probability density function
f(x) = 𝛼 e^{−𝛼x} for x > 0, and f(x) = 0 elsewhere, where 𝛼 > 0,
is said to follow the exponential distribution. Here 𝛼 is the only parameter of the distribution.

Example of random variables modeled as exponential are


i) Duration of telephone calls
ii) Time required for the repair of a component
iii) Service time at a server in a queue

Mean and standard deviation of the exponential distribution

Mean (𝜇) = ∫_{−∞}^{∞} x f(x) dx = ∫_0^{∞} x 𝛼 e^{−𝛼x} dx = 𝛼 ∫_0^{∞} x e^{−𝛼x} dx
= 𝛼 [x · e^{−𝛼x}/(−𝛼) − e^{−𝛼x}/𝛼²]_0^{∞}
= 𝛼 [0 − (1/𝛼²)(0 − 1)] = 1/𝛼
Mean (𝜇) = 1/𝛼
Standard deviation (𝜎) = √V

Variance (V) = ∫_{−∞}^{∞} (x − 𝜇)² f(x) dx
= 𝛼 ∫_0^{∞} (x − 𝜇)² e^{−𝛼x} dx
= 𝛼 [(x − 𝜇)² · e^{−𝛼x}/(−𝛼) − 2(x − 𝜇) · e^{−𝛼x}/𝛼² − 2 · e^{−𝛼x}/𝛼³]_0^{∞}
= 𝛼 [𝜇²/𝛼 − 2𝜇/𝛼² + 2/𝛼³]
= 𝛼 [1/𝛼³ − 2/𝛼³ + 2/𝛼³] = 1/𝛼²

Standard deviation (𝜎) = √V = √(1/𝛼²) = 1/𝛼
The mean and the standard deviation are equal in the exponential distribution.
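
These two results can be confirmed numerically for any particular α; the following Python sketch (added for illustration, using a simple midpoint rule) does so for α = 0.5.

```python
# Numerical check that f(x) = a*e^(-a*x), x > 0, has mean 1/a and s.d. 1/a.
# (Sketch only, not part of the notes.)
from math import exp, sqrt

a = 0.5
pdf = lambda x: a * exp(-a * x)

def integrate(f, lo, hi, n=200000):          # simple midpoint rule
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

mean = integrate(lambda x: x * pdf(x), 0, 200)            # 200 stands in for infinity
var = integrate(lambda x: (x - mean)**2 * pdf(x), 0, 200)
print(mean, 1 / a)          # both ~2.0
print(sqrt(var), 1 / a)     # both ~2.0
```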
Problem1: In a certain town the duration of a shower is exponentially distributed with mean
5 minutes. What is the probability that a shower will last for i) less than 10 minutes ii) 10
minutes or more iii) between 10minutes and 12 minutes (Dec.06/jan07)

Solution: The p.d.f. of the exponential distribution is given by
f(x) = αe^{−αx}, x > 0, and mean = 1/α
By data 1/α = 5, ∴ α = 1/5 and hence f(x) = (1/5) e^{−x/5}
i) P(x < 10) = ∫_0^{10} (1/5) e^{−x/5} dx = −[e^{−x/5}]_0^{10} = 1 − e^{−2} = 0.8647
ii) P(x ≥ 10) = ∫_{10}^{∞} (1/5) e^{−x/5} dx = −[e^{−x/5}]_{10}^{∞} = e^{−2} = 0.1353
iii) P(10 < x < 12) = ∫_{10}^{12} (1/5) e^{−x/5} dx = −[e^{−x/5}]_{10}^{12} = e^{−2} − e^{−12/5} = 0.0446

Problem 2: The sales per day in a shop are exponentially distributed, with the average sale
amounting to Rs. 100, and the net profit is 8%. Find the probability that the net profit exceeds
Rs. 30 on a given day.

Solution: Let x be the random variable of the daily sale in the shop. Since x is an exponential
variate, the p.d.f. is f(x) = αe^{−αx}, x > 0, with mean = 1/α = 100
⇒ α = 0.01, hence f(x) = 0.01 e^{−0.01x}, x > 0
Let A be the sale amount for which the profit (at 8%) is Rs. 30
⇒ A × 8/100 = 30, ∴ A = 375
Probability of the profit exceeding Rs. 30 = 1 − Prob(profit ≤ Rs. 30)
= 1 − Prob(sales ≤ Rs. 375)
= 1 − ∫_0^{375} 0.01 e^{−0.01x} dx
= 1 + [e^{−0.01x}]_0^{375} = e^{−3.75}
The probability that the net profit exceeds Rs. 30 on a day is e^{−3.75} ≈ 0.0235.

Problem 3: Let the mileage (in thousands of miles) of a particular tyre be a random variable x
having p.d.f. f(x) = (1/20) e^{−x/20} for x > 0, and f(x) = 0 elsewhere. Find the probability that
the tyre lasts i) at most 10,000 miles, ii) anywhere from 16,000 to 24,000 miles, iii) at least
30,000 miles. iv) Find the mean and the variance of the given p.d.f.

Solution: By data α = 1/20
i) P(x ≤ 10) = ∫_0^{10} f(x) dx = ∫_0^{10} (1/20) e^{−x/20} dx = −[e^{−x/20}]_0^{10} = 1 − e^{−1/2} = 0.3934
ii) P(16 ≤ x ≤ 24) = ∫_{16}^{24} (1/20) e^{−x/20} dx = −[e^{−x/20}]_{16}^{24} = e^{−4/5} − e^{−6/5} = 0.148
iii) P(x ≥ 30) = ∫_{30}^{∞} (1/20) e^{−x/20} dx = −[e^{−x/20}]_{30}^{∞} = e^{−3/2} = 0.2231
iv) Mean (𝜇) = 1/α = 20
Variance (V) = 1/α² = 20² = 400

NORMAL DISTRIBUTION

The normal distribution is the probability distribution of a continuous random variable X,
known as a normal random variable or normal variate. It is given by
f(x) = (1/(𝜎√(2𝜋))) e^{−(x−𝜇)²/2𝜎²},  −∞ < x < ∞, −∞ < 𝜇 < ∞, 𝜎 > 0,
and satisfies ∫_{−∞}^{∞} f(x) dx = (1/(𝜎√(2𝜋))) ∫_{−∞}^{∞} e^{−(x−𝜇)²/2𝜎²} dx = 1.

𝜇 & 𝜎 are the two parameters of the normal distribution, which is also known as the Gaussian
distribution.
This distribution is most important, simple and useful, and is the cornerstone of modern
statistics, because the sampling distributions 't', F, 𝜒² tend to be normal for large samples,
and it is applicable in statistical quality control in industry.

PROPERTIES OF NORMAL DISTRIBUTION

(i) The graph of the normal distribution y = f(x) in the XY-plane is known as the normal curve.
The normal curve is bell shaped and symmetric about the line x = 𝜇 (for the standard normal
curve, the y-axis); the mean, median & mode coincide and therefore the normal curve is
unimodal. The normal curve is asymptotic to the x-axis in both the positive & negative
directions.

(ii) The area under the curve is unity.

(iii) The probability that the continuous random variable lies between a & b is denoted by
P(a ≤ x ≤ b) and is given by ∫_a^b (1/(𝜎√(2𝜋))) e^{−(x−𝜇)²/2𝜎²} dx ......... (1)
Since (1) depends on the two parameters 𝜇 & 𝜎, we get different normal curves for different
values of 𝜇 & 𝜎, and it is an impracticable task to plot all such normal curves. Instead, by
introducing Z = (x − 𝜇)/𝜎, the integral in (1) becomes independent of the two parameters
𝜇 & 𝜎; here Z is known as the standard variate.

(iv) Change of scale from the x-axis to the z-axis

P(a ≤ x ≤ b) = ∫_a^b (1/(𝜎√(2𝜋))) e^{−(x−𝜇)²/2𝜎²} dx
With z = (x − 𝜇)/𝜎, dx = 𝜎 dz, this becomes
P(z₁ ≤ z ≤ z₂) = ∫_{z₁}^{z₂} (1/(𝜎√(2𝜋))) e^{−z²/2} 𝜎 dz
= ∫_{z₁}^{z₂} (1/√(2𝜋)) e^{−z²/2} dz .......... (2)
where z₁ = (a − 𝜇)/𝜎, z₂ = (b − 𝜇)/𝜎.

(v) The error function or probability integral is defined as ∅(z) = (1/√(2𝜋)) ∫_0^z e^{−t²/2} dt ..... (3)
Now (2) can be written, using (3), as
P(a ≤ x ≤ b) = P(z₁ ≤ z ≤ z₂) = ∫_{z₁}^{z₂} (1/√(2𝜋)) e^{−z²/2} dz ........... (4)
= ∅(z₂) − ∅(z₁)
The normal distribution f(x), transformed by the standard variate Z, is given by
F(z) = (1/√(2𝜋)) e^{−z²/2} with mean 0 & standard deviation 1; it is known as the standard
normal distribution and its curve as the standard normal curve. The probability integral (3) is
tabulated for values of z from 0 to 3.9 and is known as the normal table. The entries in the
normal table give the area under the standard normal curve between the ordinates z = 0 and z.
Since the normal curve is symmetric about z = 0, the area from 0 to −z is the same as the area
from 0 to z; for this reason the normal table is tabulated only for positive values of z.
The integral on the RHS of (4) geometrically represents the area bounded by the standard
normal curve F(z) between z = z₁ and z = z₂. In particular, if z₁ = 0 we have
∅(z) = (1/√(2𝜋)) ∫_0^z e^{−t²/2} dt, which represents the area under the standard normal curve
from z = 0 to z.

Normal probability table
z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.0000 0.0040 0.0080 0.0120 0.0160 0.0199 0.0239 0.0279 0.0319 0.0359
0.1 0.0398 0.0438 0.0478 0.0517 0.0557 0.0596 0.0636 0.0675 0.0714 0.0753
0.2 0.0793 0.0832 0.0871 0.0910 0.0948 0.0987 0.1026 0.1064 0.1103 0.1141
0.3 0.1179 0.1217 0.1255 0.1293 0.1331 0.1368 0.1406 0.1443 0.1480 0.1517
0.4 0.1554 0.1591 0.1628 0.1664 0.1700 0.1736 0.1772 0.1808 0.1844 0.1879
0.5 0.1915 0.1950 0.1985 0.2019 0.2054 0.2088 0.2123 0.2157 0.2190 0.2224
0.6 0.2257 0.2291 0.2324 0.2357 0.2389 0.2422 0.2454 0.2486 0.2517 0.2549
0.7 0.2580 0.2611 0.2642 0.2673 0.2704 0.2734 0.2764 0.2794 0.2823 0.2852
0.8 0.2881 0.2910 0.2939 0.2967 0.2995 0.3023 0.3051 0.3078 0.3106 0.3133
0.9 0.3159 0.3186 0.3212 0.3238 0.3264 0.3289 0.3315 0.3340 0.3365 0.3389
1.0 0.3413 0.3438 0.3461 0.3485 0.3508 0.3531 0.3554 0.3577 0.3599 0.3621
1.1 0.3643 0.3665 0.3686 0.3708 0.3729 0.3749 0.3770 0.3790 0.3810 0.3830
1.2 0.3849 0.3869 0.3888 0.3907 0.3925 0.3944 0.3962 0.3980 0.3997 0.4015
1.3 0.4032 0.4049 0.4066 0.4082 0.4099 0.4115 0.4131 0.4147 0.4162 0.4177
1.4 0.4192 0.4207 0.4222 0.4236 0.4251 0.4265 0.4279 0.4292 0.4306 0.4319
1.5 0.4332 0.4345 0.4357 0.4370 0.4382 0.4394 0.4406 0.4418 0.4429 0.4441
1.6 0.4452 0.4463 0.4474 0.4484 0.4495 0.4505 0.4515 0.4525 0.4535 0.4545
1.7 0.4554 0.4564 0.4573 0.4582 0.4591 0.4599 0.4608 0.4616 0.4625 0.4633
1.8 0.4641 0.4649 0.4656 0.4664 0.4671 0.4678 0.4686 0.4693 0.4699 0.4706
1.9 0.4713 0.4719 0.4726 0.4732 0.4738 0.4744 0.4750 0.4756 0.4761 0.4767
2.0 0.4772 0.4778 0.4783 0.4788 0.4793 0.4798 0.4803 0.4808 0.4812 0.4817
2.1 0.4821 0.4826 0.4830 0.4834 0.4838 0.4842 0.4846 0.4850 0.4854 0.4857
2.2 0.4861 0.4864 0.4868 0.4871 0.4875 0.4878 0.4881 0.4884 0.4887 0.4890
2.3 0.4893 0.4896 0.4898 0.4901 0.4904 0.4906 0.4909 0.4911 0.4913 0.4916
2.4 0.4918 0.4920 0.4922 0.4925 0.4927 0.4929 0.4931 0.4932 0.4934 0.4936
2.5 0.4938 0.4940 0.4941 0.4943 0.4945 0.4946 0.4948 0.4949 0.4951 0.4952
2.6 0.4953 0.4955 0.4956 0.4957 0.4959 0.4960 0.4961 0.4962 0.4963 0.4964
2.7 0.4965 0.4966 0.4967 0.4968 0.4969 0.4970 0.4971 0.4972 0.4973 0.4974
2.8 0.4974 0.4975 0.4976 0.4977 0.4977 0.4978 0.4979 0.4979 0.4980 0.4981
2.9 0.4981 0.4982 0.4982 0.4983 0.4984 0.4984 0.4985 0.4985 0.4986 0.4986
3.0 0.4987 0.4987 0.4987 0.4988 0.4988 0.4989 0.4989 0.4989 0.4990 0.4990
3.1 0.4990 0.4991 0.4991 0.4991 0.4992 0.4992 0.4992 0.4992 0.4993 0.4993
3.2 0.4993 0.4993 0.4994 0.4994 0.4994 0.4994 0.4994 0.4995 0.4995 0.4995
3.3 0.4995 0.4995 0.4995 0.4996 0.4996 0.4996 0.4996 0.4996 0.4996 0.4997
3.4 0.4997 0.4997 0.4997 0.4997 0.4997 0.4997 0.4997 0.4997 0.4997 0.4998
3.5 0.4998 0.4998 0.4998 0.4998 0.4998 0.4998 0.4998 0.4998 0.4998 0.4998
3.6 0.4998 0.4998 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999
3.7 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999
3.8 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999 0.4999
Note:
1. The total area under the standard normal curve is 1, i.e. P(−∞ < z < ∞) = 1
2. P(−∞ < z ≤ 0) = P(0 ≤ z < ∞) = 1/2
3. P(−∞ < z < z₁) = P(−∞ < z ≤ 0) + P(0 ≤ z < z₁) = 0.5 + ∅(z₁)
4. P(z > z₂) = 0.5 − ∅(z₂)
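
The entries of the normal table can also be reproduced programmatically. The Python sketch below (an added illustration; it assumes the standard-library function math.erf) defines ∅(z) and recovers a few tabulated values.

```python
# phi(z) = area under the standard normal curve from 0 to z, via the error
# function.  (Sketch, not part of the notes.)
from math import erf, sqrt

def phi(z):
    return 0.5 * erf(z / sqrt(2))    # P(0 <= Z <= z)

print(round(phi(0.5), 4))   # 0.1915
print(round(phi(1.1), 4))   # 0.3643
print(round(phi(2.25), 4))  # 0.4878
```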
MEAN & STANDARD DEVIATION OF THE NORMAL DISTRIBUTION

Mean = ∫_{−∞}^{∞} x f(x) dx
= (1/(𝜎√(2𝜋))) ∫_{−∞}^{∞} x e^{−(x−𝜇)²/2𝜎²} dx
Putting t = (x − 𝜇)/(𝜎√2), i.e. x = 𝜇 + 𝜎t√2, we have dx = 𝜎√2 dt, and t also varies from −∞ to ∞.
Mean = (1/(𝜎√(2𝜋))) ∫_{−∞}^{∞} (𝜇 + 𝜎t√2) e^{−t²} 𝜎√2 dt
= (𝜇/√𝜋) ∫_{−∞}^{∞} e^{−t²} dt + (𝜎√2/√𝜋) ∫_{−∞}^{∞} t e^{−t²} dt
= (2𝜇/√𝜋) ∫_0^{∞} e^{−t²} dt + (𝜎√2/√𝜋) ∫_{−∞}^{∞} t e^{−t²} dt
The second integral is 0 by a standard property, since t e^{−t²} is an odd function.
By the gamma function, ∫_0^{∞} e^{−t²} dt = √𝜋/2.
Hence mean = (2𝜇/√𝜋)(√𝜋/2) + 0 = 𝜇.
Hence the mean of the normal distribution is equal to the parameter 𝜇 of the distribution.
Standard deviation 𝜎 = √V

Variance (V) = ∫_{−∞}^{∞} (x − 𝜇)² f(x) dx
= (1/(𝜎√(2𝜋))) ∫_{−∞}^{∞} (x − 𝜇)² e^{−(x−𝜇)²/2𝜎²} dx
Substituting t = (x − 𝜇)/(𝜎√2), x = 𝜇 + 𝜎t√2, we have dx = 𝜎√2 dt, and t also varies from −∞ to ∞.
Variance (V) = (1/(𝜎√(2𝜋))) ∫_{−∞}^{∞} 2𝜎²t² e^{−t²} 𝜎√2 dt
= (2𝜎²/√𝜋) ∫_{−∞}^{∞} t² e^{−t²} dt
= (2𝜎²/√𝜋) · 2 ∫_0^{∞} t² e^{−t²} dt
= (2𝜎²/√𝜋) ∫_0^{∞} t (2t e^{−t²}) dt
Taking u = t, v = 2t e^{−t²} and integrating by parts (∫ uv dt = u ∫ v dt − ∫ (∫ v dt) u′ dt),
Variance (V) = (2𝜎²/√𝜋) {[−t e^{−t²}]_0^{∞} + ∫_0^{∞} e^{−t²} dt}
= (2𝜎²/√𝜋) [0 + ∫_0^{∞} e^{−t²} dt] = (2𝜎²/√𝜋)(√𝜋/2) = 𝜎²
Thus the variance of the normal distribution is 𝜎², i.e. its standard deviation is the
parameter 𝜎.
Exercises: Using the normal table, find
(i) the area under the standard normal curve to the left of z = 1.5,
(ii) the area under the standard normal curve to the right of z = −0.5,
(iii) the area under the standard normal curve between z = 0.8 and z = 3.

Problem 1: Find the following probabilities for the standard normal distribution with the help
of the normal probability table:
a) P(−0.5 ≤ z ≤ 1.1)  b) P(z ≥ 0.60)  c) P(z ≤ 0.75)  d) P(0.2 ≤ z ≤ 1.4)

Solution:
a) P(−0.5 ≤ z ≤ 1.1) = P(−0.5 ≤ z ≤ 0) + P(0 ≤ z ≤ 1.1)
= P(0 ≤ z ≤ 0.5) + P(0 ≤ z ≤ 1.1)
= ∅(0.5) + ∅(1.1)
= 0.1915 + 0.3643
= 0.5558
b) P(z ≥ 0.60) = P(z ≥ 0) − P(0 ≤ z ≤ 0.60)
= 0.5 − ∅(0.60)
= 0.5 − 0.2257 = 0.2743
c) P(z ≤ 0.75) = P(z ≤ 0) + P(0 ≤ z ≤ 0.75)
= 0.5 + ∅(0.75)
= 0.5 + 0.2734 = 0.7734
d) P(0.2 ≤ z ≤ 1.4) = P(0 ≤ z ≤ 1.4) − P(0 ≤ z ≤ 0.2)
= ∅(1.4) − ∅(0.2)
= 0.4192 − 0.0793
= 0.3399
Problem 2: Assuming that the diameters of 1000 brass plugs taken consecutively from a
machine form a normal distribution with mean 0.7515 cm & standard deviation 0.002 cm,
how many of the plugs are likely to be rejected if the approved diameter is 0.752 ± 0.004 cm?

Solution: Let x represent the diameter of a brass plug; by data the mean 𝜇 = 0.7515 cm &
S.D. 𝜎 = 0.002.
We have the standard normal variate (s.n.v.) z = (x − 𝜇)/𝜎 = (x − 0.7515)/0.002
Now 0.752 + 0.004 = 0.756
⇒ for x = 0.756, z = (0.756 − 0.7515)/0.002 = 2.25
P(z > 2.25) = P(0 ≤ z ≤ ∞) − P(0 ≤ z ≤ 2.25) = 0.5 − ∅(2.25) = 0.5 − 0.4878 = 0.0122 .......... (1)
Now 0.752 − 0.004 = 0.748
⇒ for x = 0.748, z = (0.748 − 0.7515)/0.002 = −1.75
P(z < −1.75) = P(z > 1.75) = 0.5 − ∅(1.75) = 0.5 − 0.4599 = 0.0401 ......... (2)

Adding (1) and (2), the probability of rejection is
P(z > 2.25) + P(z < −1.75) = 0.0122 + 0.0401 = 0.0523
For 1000 brass plugs, 1000 × 0.0523 = 52.3 ≈ 52
⇒ about 52 plugs are likely to be rejected.

Problem 3: x is a normal random variable with mean 30 & S.D. 5. Find the probabilities that
i) 26 ≤ x ≤ 40, ii) x ≥ 45, iii) |x − 30| ≤ 5 (May-June 2010)

Solution: We have the s.n.v. z = (x − 𝜇)/𝜎 = (x − 30)/5
i) 26 ≤ x ≤ 40
For x = 26, z = (26 − 30)/5 = −0.8; for x = 40, z = (40 − 30)/5 = 2
i.e., we shall find P(−0.8 ≤ z ≤ 2) = P(−0.8 ≤ z ≤ 0) + P(0 ≤ z ≤ 2)
= ∅(0.8) + ∅(2) = 0.2881 + 0.4772 = 0.7653
ii) x ≥ 45
z = (45 − 30)/5 = 3
i.e., we shall find P(z ≥ 3) = P(0 ≤ z ≤ ∞) − P(0 ≤ z ≤ 3)
= 0.5 − ∅(3) = 0.5 − 0.4987 = 0.0013
iii) |x − 30| ≤ 5 ⇒ −5 ≤ x − 30 ≤ 5 ⇒ 25 ≤ x ≤ 35
For x = 25, z = (25 − 30)/5 = −1; for x = 35, z = (35 − 30)/5 = 1
P(−1 ≤ z ≤ 1) = P(−1 ≤ z ≤ 0) + P(0 ≤ z ≤ 1) = ∅(1) + ∅(1) = 2∅(1) = 2 × 0.3413 = 0.6826

Problem 4: Find the mean and standard deviation of an examination in which grades 70 and
88 correspond to standard scores of −0.6 and 1.4 respectively.

Solution: s.n.v. z = (x − 𝜇)/𝜎
Hence −0.6 = (70 − 𝜇)/𝜎, so 𝜇 − 0.6𝜎 = 70
and 1.4 = (88 − 𝜇)/𝜎, so 𝜇 + 1.4𝜎 = 88
Solving, 𝜇 = 75.4 and 𝜎 = 9 are the mean and standard deviation.

Problem5:In a test of electric bulbs , it was found that the life time of a particular brand is
distributed normally with an average life of 2000 hours and S.D of 60 hours. If a firm
purchases 2500 bulbs find the number of bulbs that are likely to last for i) more than 2100
hours, ii) less than 1950 hours iii) between 1900 to 2100 hours.

Solution: By data 𝜇 = 2000, 𝜎 = 60
We have the s.n.v. z = (x − 𝜇)/𝜎 = (x − 2000)/60

i) To find P(x > 2100):
If x = 2100, z = (2100 − 2000)/60 = 1.67
P(z > 1.67) = P(z ≥ 0) − P(0 < z < 1.67) = 0.5 − ∅(1.67) = 0.5 − 0.4525 = 0.0475
∴ the number of bulbs likely to last for more than 2100 hours is
2500 × 0.0475 = 118.75 ≈ 119

ii) To find P(x < 1950):
If x = 1950, z = (1950 − 2000)/60 = −0.83
P(z < −0.83) = P(z > 0.83) = P(z ≥ 0) − P(0 < z < 0.83) = 0.5 − ∅(0.83) = 0.5 − 0.2967 = 0.2033
∴ the number of bulbs likely to last for less than 1950 hours is
2500 × 0.2033 = 508.25 ≈ 508

iii) To find P(1900 < x < 2100):
If x = 1900, z = (1900 − 2000)/60 = −1.67 and if x = 2100, z = (2100 − 2000)/60 = 1.67
P(−1.67 < z < 1.67) = 2P(0 < z < 1.67) = 2∅(1.67) = 2 × 0.4525 = 0.905
∴ the number of bulbs likely to last between 1900 and 2100 hours is
2500 × 0.905 = 2262.5 ≈ 2263
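
For comparison, the same problem can be solved with the exact standard normal c.d.f. instead of the two-decimal table; the Python sketch below (an added illustration assuming math.erf) gives almost the same bulb counts, with the small differences coming only from rounding z to two decimals above.

```python
# Problem 5 recomputed with the exact normal c.d.f.  (Sketch, not from notes.)
from math import erf, sqrt

mu, sigma, n_bulbs = 2000, 60, 2500
cdf = lambda x: 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

print(round(n_bulbs * (1 - cdf(2100))))          # 119   (table method: 119)
print(round(n_bulbs * cdf(1950)))                # 506   (table method: 508)
print(round(n_bulbs * (cdf(2100) - cdf(1900))))  # 2261  (table method: 2263)
```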

Sampling Theory
Population: A large collection of individuals or attributes or numerical data can be regarded
as population or universe.

A finite subset of the population is known as sample.

Size of the population N is the number of objects or observations in the population.


Population is said to be finite or infinite depending on the size N being finite or infinite.

Size of the sample is denoted by n. Sampling is the process of drawing samples from a given
population.

Large sampling: If n≥ 30, the sampling is said to be large sampling.

Small Sampling: If n<30, the sampling is said to be small sampling.

Examples:

1. The population of India is a population, whereas the population of Karnataka is a sample.

2. The cars produced in India are a population, whereas the cars produced by one particular
manufacturer in India are a sample.

The statistical constants of the population such as the mean (µ), standard deviation (𝜎) etc. are
called parameters. Similarly, the constants for a sample drawn from the given
population, i.e. the mean (x̄), standard deviation (S) etc., are called statistics.

Random sampling :

The selection of an item from the population in such a way that each has the same chance
of being selected is called random sampling.

Suppose we take a sample of size n from a finite population of size N; then we will have
NCn possible samples. Random sampling is a technique in which each of these NCn samples has
an equal chance of being selected.

Sampling where each member of a population may be chosen more than once is called
sampling with replacement while if a member cannot be chosen more than once it is called
sampling without replacement.

Sampling distribution :
Given a population, suppose we consider a set of samples of a certain size drawn from the
population. For each sample, suppose we compute a statistic such as the mean, standard
deviation etc.; these statistics will vary from one sample to another. Suppose we group these
statistics according to their frequencies and form a frequency distribution. The frequency
distribution so formed is called a sampling distribution.

The standard deviation of sampling distribution is called its Standard error.

The reciprocal of the standard errors is called precision.

Sampling distribution of Means:

Consider a population for which the mean is µ and the standard deviation is 𝜎. Suppose we
draw a set of samples of a certain size N from this population and find the mean x̄ of each
of these samples. The frequency distribution of these means is called the sampling
distribution of means. Let the mean and the standard deviation of the sampling distribution of
means be 𝝁_X̄ and 𝝈_X̄ respectively.

Suppose the population is finite with size N_P and the random sampling is without replacement,
i.e. the items are drawn one by one and are not put back into the population before the next
draw. In this case there will be N_P C_N samples and we have

𝝁_X̄ = 𝝁 and

𝝈²_X̄ = (𝝈²/N) [ (N_P − N) / (N_P − 1) ]

i.e. 𝝈²_X̄ = (𝝈²/N) c, where c = (N_P − N)/(N_P − 1) is called the finite population correction
factor. If N_P is very large, i.e. if the population is infinite or the sampling is done with
replacement, then c → 1 as N_P → ∞ and

𝝈²_X̄ = 𝝈²/N

The standard normal variate for the sampling distribution of means is given by
Z = (X̄ − 𝝁_X̄)/𝝈_X̄ = (X̄ − 𝝁)/𝝈_X̄

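A small numerical illustration of these formulas (added to the notes as a sketch; the numbers are made-up values) is given below.

```python
# Standard error of the sample mean, with and without the finite population
# correction.  (Illustrative values only, not from the notes.)
from math import sqrt

sigma = 5.0      # population standard deviation (assumed)
N = 25           # sample size
NP = 500         # population size for the finite case

se_infinite = sigma / sqrt(N)
c = (NP - N) / (NP - 1)                  # finite population correction factor
se_finite = sqrt(sigma**2 / N * c)

print(se_infinite)    # 1.0
print(se_finite)      # ~0.976
```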
Sampling distribution of Differences and Sums:

Let S₁ be the statistic obtained for all possible samples of size n₁ drawn from population A,
and let 𝜇_{S₁} and 𝜎_{S₁} be the mean and s.d. of the sampling distribution of the statistic S₁.
Let S₂ be the statistic obtained for all possible samples of size n₂ drawn from population B,
and let 𝜇_{S₂} and 𝜎_{S₂} be the mean and s.d. of the sampling distribution of the statistic S₂.

The mean 𝜇_{S₁−S₂} and the s.d. 𝜎_{S₁−S₂} of the sampling distribution of differences are
given by 𝝁_{S₁−S₂} = 𝝁_{S₁} − 𝝁_{S₂} and 𝝈_{S₁−S₂} = √(𝝈²_{S₁} + 𝝈²_{S₂})

Similarly, 𝝁_{S₁+S₂} = 𝝁_{S₁} + 𝝁_{S₂} and 𝝈_{S₁+S₂} = √(𝝈²_{S₁} + 𝝈²_{S₂})

For infinite populations, the mean and s.d. of the sampling distribution of sums and
differences of means are given by

𝝁_{X̄A+X̄B} = 𝝁_{X̄A} + 𝝁_{X̄B} and 𝝈_{X̄A+X̄B} = √(𝝈²_{X̄A} + 𝝈²_{X̄B}) = √(𝝈₁²/n₁ + 𝝈₂²/n₂)

and 𝝁_{X̄A−X̄B} = 𝝁_{X̄A} − 𝝁_{X̄B} and 𝝈_{X̄A−X̄B} = √(𝝈²_{X̄A} + 𝝈²_{X̄B}) = √(𝝈₁²/n₁ + 𝝈₂²/n₂)

In the particular case where the statistics S₁ and S₂ are the proportions of success 𝒫₁ and 𝒫₂,
𝝁_{𝒫₁−𝒫₂} = 𝝁_{𝒫₁} − 𝝁_{𝒫₂} = p₁ − p₂ and 𝝈_{𝒫₁−𝒫₂} = √(𝝈²_{𝒫₁} + 𝝈²_{𝒫₂}) = √(p₁q₁/n₁ + p₂q₂/n₂)

The normal variate for the difference in means is Z = [(X̄A − X̄B) − 𝝁_{X̄A−X̄B}] / 𝝈_{X̄A−X̄B}

and the normal variate for the sampling distribution of the difference of proportions is
Z = [(𝒫₁ − 𝒫₂) − 𝝁_{𝒫₁−𝒫₂}] / 𝝈_{𝒫₁−𝒫₂}

1. Two brands A and B of cables have mean breaking strengths of 4000 and 4500
and standard deviations of 300 and 200 respectively. If 100 cables of brand A
and 50 cables of brand B are tested, what is the probability that the mean breaking
strength of brand B will be
a. at least 600 more than that of A,
b. at least 450 more than that of A?

𝜇_{X̄A} = 4000, 𝜇_{X̄B} = 4500, 𝝈_A = 300, 𝝈_B = 200, n₁ = 100, n₂ = 50

𝝁_{X̄B−X̄A} = 𝝁_{X̄B} − 𝝁_{X̄A} = 4500 − 4000 = 500

𝝈_{X̄B−X̄A} = √(𝝈²_{X̄A} + 𝝈²_{X̄B}) = √(300²/100 + 200²/50) = √1700 = 41.23

Z = [(X̄B − X̄A) − 𝝁_{X̄B−X̄A}] / 𝝈_{X̄B−X̄A} = [(X̄B − X̄A) − 500] / 41.23

i) P((X̄B − X̄A) ≥ 600) = P(Z ≥ 2.43) = P(Z > 0) − ∅(2.43) = 0.5 − 0.4925 = 0.0075

ii) P((X̄B − X̄A) ≥ 450) = P(Z ≥ −1.21)
= P(−1.21 < Z < 0) + P(Z ≥ 0) = ∅(1.21) + 0.5 = 0.3869 + 0.5

= 0.8869
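
The same two probabilities follow directly from the normal c.d.f.; the Python sketch below (an added illustration assuming math.erf) avoids the table and so differs from the rounded values above only in the last decimal place.

```python
# Problem 1 (cable brands), recomputed without the table.  (Sketch only.)
from math import erf, sqrt

mu_diff = 4500 - 4000                         # mean of X̄_B − X̄_A
sd_diff = sqrt(300**2 / 100 + 200**2 / 50)    # ≈ 41.23

def p_at_least(d):
    z = (d - mu_diff) / sd_diff
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))   # P(Z >= z)

print(round(p_at_least(600), 4))   # 0.0076   (table method: 0.0075)
print(round(p_at_least(450), 4))   # 0.8874   (table method: 0.8869)
```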

2. Two friends A and B play a game of "heads and tails", each tossing a coin 50 times.
A will win the game if he tosses 3 or more heads than B, otherwise B wins.
Determine the probability that A wins.

𝒫_A and 𝒫_B are the proportions of heads obtained by A and B respectively. The probability
of getting a head is p = 0.5 for both A and B.

The number of tosses made by A & B are n_A = n_B = 50.

𝝁_{𝒫A−𝒫B} = 𝝁_{𝒫A} − 𝝁_{𝒫B} = p_A − p_B = 0

𝝈_{𝒫A−𝒫B} = √(𝝈²_{𝒫A} + 𝝈²_{𝒫B}) = √(p_A q_A / n_A + p_B q_B / n_B) = √(0.005 + 0.005) = 0.1

A wins if 𝒫_A − 𝒫_B ≥ 3/50, so

Z = [(𝒫_A − 𝒫_B) − 𝝁_{𝒫A−𝒫B}] / 𝝈_{𝒫A−𝒫B} = (3/50) / 0.1 = 0.6

P((𝒫_A − 𝒫_B) ≥ 3/50) = P(Z ≥ 0.6) = P(Z > 0) − P(0 < Z < 0.6)

= 0.5 − ∅(0.6) = 0.2743

Central Limit Theorem:

This is a very important theorem regarding the distribution of the mean of a sample when the
parent population is non-normal and the sample size is large.
If the variable X has a non-normal distribution with mean 𝜇 and standard deviation 𝜎, then
the limiting distribution of
Z = (x̄ − 𝜇) / (𝜎/√n), as n → ∞, is the standard normal distribution (i.e., with mean 0 and unit S.D.).

There is no restriction upon the distribution of X except that it has a finite mean and
variance. This theorem holds well for a sample of 25 or more, which is regarded as large.
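
The theorem is easy to see in a simulation. The Python sketch below (an added illustration using only the standard library) draws many samples from a uniform, hence non-normal, population and checks that the standardised sample means behave like a standard normal variate.

```python
# Central limit theorem, illustrated by simulation.  (Sketch, not from notes.)
import random
from math import sqrt
from statistics import mean, pstdev

random.seed(1)
n, trials = 36, 20000
sample_means = [mean(random.uniform(0, 1) for _ in range(n))
                for _ in range(trials)]

mu, sigma = 0.5, sqrt(1 / 12)        # mean and s.d. of Uniform(0, 1)
print(mean(sample_means))            # ~0.5     (≈ mu)
print(pstdev(sample_means))          # ~0.048   (≈ sigma/sqrt(n) = 0.0481)

# About 95% of standardised means should satisfy |Z| <= 1.96
z = [(m - mu) / (sigma / sqrt(n)) for m in sample_means]
print(sum(abs(v) <= 1.96 for v in z) / trials)   # ~0.95
```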

Statistical Estimation:

Statistical estimation is the method by which the parameters of a population are estimated
with the aid of the corresponding sample statistics. An estimate of the unknown true (exact)
value of a parameter, or an interval in which the parameter lies, is to be determined on the
basis of sample data drawn from the population.

1. Confidence interval:

Consider the sampling distribution of a statistic S, which may be regarded as a normal
distribution. Let µ_s and 𝜎_s be the mean and s.d. of this normal distribution.

a. The probability that 𝜇_s lies in the interval (s − 𝜎_s, s + 𝜎_s) is 68.26%.
b. The probability that 𝜇_s lies in the interval (s − 2𝜎_s, s + 2𝜎_s) is 95.44%.
c. The probability that 𝜇_s lies in the interval (s − 3𝜎_s, s + 3𝜎_s) is 99.74%.

The confidence intervals for 𝜇_s indicated above are of the form

(s − z_c 𝜎_s, s + z_c 𝜎_s)

with z_c = 1 at the 68.26% confidence level,

z_c = 2 at the 95.44% confidence level,

z_c = 3 at the 99.74% confidence level.

Z% Confidence interval:

The interval (s − z_c 𝜎_s, s + z_c 𝜎_s) is the Z% confidence interval for 𝜇_s if
P{s − z_c 𝜎_s ≤ 𝜇_s ≤ s + z_c 𝜎_s} = Z%, i.e.

Z/100 = P{ −z_c 𝜎_s ≤ 𝜇_s − s ≤ z_c 𝜎_s }

= P{ |s − 𝜇_s| / 𝜎_s ≤ z_c }

= P{ |Z| ≤ z_c }, where Z = (s − 𝜇_s)/𝜎_s is the normal variate associated with s

= P{ −z_c ≤ Z ≤ z_c }

= 2 P{ 0 ≤ Z ≤ z_c }

= 2 ∅(z_c)

Thus Z = 2 ∅(z_c) × 100, and the Z% confidence interval for 𝜇_s is the interval
(s − z_c 𝜎_s, s + z_c 𝜎_s), where z_c is a positive real number.

Confidence limits: If the interval (s − z_c 𝜎_s, s + z_c 𝜎_s) is the Z% confidence interval for 𝜇_s,
then the quantities s ± z_c 𝜎_s are called the Z% confidence limits. The number z_c is called
the corresponding confidence coefficient or critical value.

The length of the confidence interval (s − z_c 𝜎_s, s + z_c 𝜎_s) is 2l = 2 z_c 𝜎_s; the quantity
l = z_c 𝜎_s is called the error at that confidence level. The error at the 50% confidence level is
called the probable error.

Table for the confidence coefficients Zc for various values of Z

Z Zc Z Zc
50 .6745 90 1.645
55 .7639 95 1.96
60 .843 95.44 2
65 .9259 96 2.05
68.26 1 97 2.195
70 1.041 98 2.33
75 1.15 99 2.58
80 1.277 99.5 2.81
85 1.445 99.74 3

The confidence interval for the population mean is (X̄ − z_c s/√N, X̄ + z_c s/√N).

Confidence limits for proportions: We have 𝝁_𝒫 = p and 𝝈_𝒫 = √(pq/N), where q = 1 − p.

The confidence limits are given by p ± z_c 𝝈_𝒫 = p ± z_c √(pq/N) = p ± z_c √(p(1 − p)/N).

Confidence limits for differences:

Confidence limits for the difference of two population means are

(X̄₁ − X̄₂) ± z_c 𝜎_{X̄₁−X̄₂} = (X̄₁ − X̄₂) ± z_c √(𝜎₁²/N₁ + 𝜎₂²/N₂)

Generally the population standard deviations 𝜎₁ and 𝜎₂ are unknown; they are estimated
by the sample standard deviations s₁ and s₂, giving

(X̄₁ − X̄₂) ± z_c √(s₁²/N₁ + s₂²/N₂)

Confidence limits for the difference of two population proportions are

(p₁ − p₂) ± z_c 𝜎_{p₁−p₂} = (p₁ − p₂) ± z_c √(p₁q₁/N₁ + p₂q₂/N₂)

1. A random sample of size N = 100 is taken from a population with standard deviation
𝜎 = 5.1. Given that the sample mean is X̄ = 21.6, obtain the 95% confidence interval
for the population mean 𝜇.
N = 100, 𝜎 = 5.1, X̄ = 21.6

The confidence limits for the population mean are

X̄ ± z_c 𝜎/√N = 21.6 ± z_c (5.1/√100)

For the 95% confidence level, z_c = 1.96

∴ X̄ ± z_c 𝜎/√N = 21.6 ± (1.96)(5.1/√100) = 21.6 ± 0.9996

Hence the 95% confidence interval is (20.6, 22.6).
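
The same confidence limits can be computed directly; the short Python sketch below (added for illustration) mirrors the formula X̄ ± z_c σ/√N.

```python
# 95% confidence interval for the population mean (Problem 1).  Sketch only.
from math import sqrt

xbar, sigma, N = 21.6, 5.1, 100
zc = 1.96                                    # 95% confidence coefficient

half_width = zc * sigma / sqrt(N)
print(half_width)                            # 0.9996
print(xbar - half_width, xbar + half_width)  # ~ (20.60, 22.60)
```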

2. In a sample of 500 men it was found that 60% of them were overweight. Find the 99%
confidence limits for the proportion of overweight men in the population.

Probability of a person being overweight: p = 60% = 0.60, q = 40% = 0.40

For the 99% confidence level, z_c = 2.58

The confidence limits are p ± z_c 𝜎_𝒫 = p ± z_c √(pq/N) = 0.6 ± (2.58)√((0.6 × 0.4)/500)

= 0.6 ± 0.057

3. In a random sample of 400 men and 600 women, 100 men and 300 women
expressed their support for a political party. Obtain 99% confidence limits for the
difference in the proportions of all men and all women who support the party.

p₁ = 100/400 = 0.25, p₂ = 300/600 = 0.5, N₁ = 400, N₂ = 600, q₁ = 0.75, q₂ = 0.5
For the 99% confidence level, z_c = 2.58
Confidence limits for the difference in proportions are

(p₂ − p₁) ± z_c 𝜎_{p₂−p₁} = (p₂ − p₁) ± z_c √(p₁q₁/N₁ + p₂q₂/N₂)
= 0.25 ± (2.58)√((0.25 × 0.75)/400 + (0.5 × 0.5)/600) = 0.25 ± 0.077

4. In a sample of 200 items produced by a machine, 15 were found defective, while in a
sample of 100 items produced by another machine, 12 were found defective. Find
99% and 99.74% confidence limits for the difference in the proportions of defective
items produced by the two machines.
N₁ = 200, N₂ = 100
p₁ = proportion of defective items from the first machine = 15/200 = 0.075, q₁ = 0.925
Similarly, p₂ = proportion of defective items from the second machine = 12/100 = 0.12,
q₂ = 0.88

For the 99% confidence level, z_c = 2.58

Confidence limits for the difference in proportions are

(p₂ − p₁) ± z_c 𝜎_{p₂−p₁} = (p₂ − p₁) ± z_c √(p₁q₁/N₁ + p₂q₂/N₂)

= 0.045 ± 0.097

For the 99.74% confidence level, z_c = 3

Confidence limits for the difference in proportions are

(p₂ − p₁) ± z_c 𝜎_{p₂−p₁} = (p₂ − p₁) ± z_c √(p₁q₁/N₁ + p₂q₂/N₂)

= 0.045 ± 0.112

5. An unbiased coin is tossed n times; it is desired that the relative frequency of the
appearance of heads should lie between 0.49 and 0.51. Find the smallest value of n
that will ensure this result with 95% confidence.
p = probability of getting a head = 1/2, q = 1 − p = 1/2
𝜎_p = √(pq/n) = 1/(2√n)

At the 95% confidence level z_c = 1.96

The confidence limits are p ± z_c 𝜎_p = 0.5 ± (1.96)/(2√n)

These must equal 0.49 and 0.51, i.e.
0.5 − (1.96)/(2√n) = 0.49 and 0.5 + (1.96)/(2√n) = 0.51
⇒ (1.96)/(2√n) = 0.01 ⇒ √n = 98 ⇒ n = 9604
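
The required n follows from solving z_c/(2√n) = 0.01; a few lines of Python (added as an illustration) confirm the value.

```python
# Smallest n so that 0.5 ± 1.96/(2*sqrt(n)) lies within (0.49, 0.51).  Sketch.
zc = 1.96
half_width = 0.01                  # allowed deviation from 0.5

sqrt_n = zc / (2 * half_width)     # from zc/(2*sqrt(n)) = half_width
print(sqrt_n, round(sqrt_n**2))    # 98.0..., n = 9604
```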

Statistical Decision

Introduction:

For reaching statistical decisions, we start with some assumptions or guesses about the
populations involved. Such assumptions / guesses, which may or may not be true, are called
Statistical Hypotheses.

In many instances we formulate a statistical hypothesis for the sole purpose of


rejecting or nullifying it. Such Hypotheses are called Null hypotheses and are generally
denoted by H0 .

A statistical hypothesis which differs from a given hypothesis is called an alternative


hypothesis. A hypothesis alternative to the null hypothesis is generally denoted by H1 .

Example: i) Suppose we wish to reach the decision that a certain coin is biased (that is, the
coin shows more heads than tails or vice versa). To reach this decision, we start with the
hypothesis that the coin is fair (not biased), with the sole purpose of rejecting it (at the
end). This hypothesis is a null hypothesis.

ii) Consider the situation where the probability of an event is, say, 1/3, according
to some hypothesis. For arriving at some decision, if we make the hypothesis that the
probability is, say, 1/4, then the hypothesis we have made is an alternative hypothesis.

Tests of hypothesis and significance:

Procedures which enable us to decide whether to accept or reject a hypothesis or to


determine whether observed samples differ significantly from expected results are called tests
of hypothesis, tests of significance, or rules of decision.

By an error of judgement suppose we reject a hypothesis when it should have been


accepted. Then we say that an error of Type I has been made. On the other hand, suppose we
accept a hypothesis when it should be rejected; in this case, we say that an error of Type II
has been made.

Type I error:

In a hypothesis test, a type I error occurs when the null hypothesis is rejected when it is in
fact true; that is, H0 is wrongly rejected.

For example, in a clinical trial of a new drug, the null hypothesis might be that the new drug
is no better, on average, than the current drug; i.e.
H0: there is no difference between the two drugs on average.
A type I error would occur if we concluded that the two drugs produced different effects
when in fact there was no difference between them.
The following table gives a summary of possible results of any hypothesis test:

                        Decision
                Reject H0            Don't reject H0
Truth   H0      Type I error         Right decision
        H1      Right decision       Type II error

A type I error is often considered to be more serious, and therefore more important to
avoid, than a type II error. The hypothesis test procedure is therefore adjusted so that there is
a guaranteed 'low' probability of rejecting the null hypothesis wrongly; this probability is
never 0. This probability of a type I error can be precisely computed as
P(type I error) = significance level =𝛼

The exact probability of a type II error is generally unknown.

If we do not reject the null hypothesis, it may still be false (a type II error) as the sample may
not be big enough to identify the falseness of the null hypothesis (especially if the truth is
very close to hypothesis).

For any given set of data, type I and type II errors are inversely related; the smaller the risk of
one, the higher the risk of the other.

A type I error can also be referred to as an error of the first kind.

Type II Error:

In a hypothesis test, a type II error occurs when the null hypothesis H0, is not rejected
when it is in fact false. For example, in a clinical trial of a new drug, the null hypothesis
might be that the new drug is no better, on average, than the current drug; i.e.
H0: there is no difference between the two drugs on average.
A type II error would occur if it was concluded that the two drugs produced the same effect,
i.e. there is no difference between the two drugs on average, when in fact they produced
different ones.

A type II error is frequently due to sample sizes being too small.

The probability of a type II error is generally unknown, but is symbolised by 𝛽 and written
P(type II error) = 𝛽

A type II error can also be referred to as an error of the second kind.


Levels of significance:

Suppose that, under a given hypothesis H, the sampling distribution of a statistic S is a


normal distribution with mean 𝜇𝑠 and standard deviation 𝜎𝑠 . Then
z = (S − 𝜇_s)/𝜎_s ----------- (1)
is the standard normal variate associated with S, so that for the distribution of Z the mean is
zero and the standard deviation is 1.

Accordingly, for the distribution of z, the Z% confidence interval is (−z_c, z_c). This
means that we can be Z% confident that, if the hypothesis H is true, then the value of z will
lie between −z_c and z_c. This is equivalent to saying that there is a (100 − Z)% chance that the
hypothesis H is true but the value of Z lies outside the interval (−z_c, z_c). If we reject the
hypothesis H on the grounds that the value of Z lies outside the interval (−z_c, z_c), we would
be making a type I error, and the probability of making the error is (100 − Z)%. Here, we say
that the hypothesis is rejected at a (100 − Z)% level of significance. Thus, a level of
significance is the probability level below which we reject a hypothesis.

In practice, only two levels of significance are employed: one corresponding to Z = 95
and the other corresponding to Z = 99. The case Z = 95 yields 5% or 0.05 as the level of
significance with the corresponding z_c = 1.96, and the case Z = 99 yields 1% or 0.01 as the
level of significance with the corresponding z_c = 2.58.

The value of the normal variate Z, determined by using the formula (1), is usually
called the z-score of the statistic S. It is this score that determines the "fate" of a hypothesis
H, and it is called the test statistic.

Rule of decision:

“Reject a hypothesis H at a (100 – Z)% level of significance if the z – score of the statistic S,
determined on the basis of H, is outside the interval (-zc , zc). Do not reject the hypothesis
otherwise”.

Here the interval (−z_c, z_c) is called the interval of test. For the level of significance 0.05,
z_c = 1.96, and for the level 0.01, z_c = 2.58.

‘t’ - distribution
Thus far, we considered sampling distributions on the assumption that they are normal
or approximately normal. As mentioned before, this assumption is valid when the sample size
N is large. For small samples, this assumption is not generally valid. In this section, we
consider an important sampling distribution, called the ‘t-distribution’, which is used in
studies involving small samples.

Let N be the sample size, X̄ and µ be respectively the sample mean and the population mean,
and s be the sample standard deviation. Consider the statistic t defined by

t = ((X̄ − µ)/s) √(N − 1)

Suppose we obtain a frequency distribution of t by computing the value of t for each of a set
of samples of size N drawn from a normal or nearly normal population. The sampling
distribution so obtained is called the 'Student's t-distribution'.

Properties of the t Distribution

The t distribution has the following properties:

 The mean of the distribution is equal to 0 .


 The variance is always greater than 1, although it is close to 1 when there are many
degrees of freedom. With infinite degrees of freedom, the t distribution is the same as
the standard normal distribution.

Note:

Table of values of t_β(𝛾) at the β = 0.01 and 0.05 levels of significance, for values of 𝛾
from 1 to 30.

𝛾 𝑡0.01 (𝛾) 𝑡0.05 (𝛾)


1 63.66 12.71
2 9.92 4.30
3 5.84 3.18
4 4.60 2.78
5 4.03 2.57
6 3.71 2.45
7 3.50 2.36
8 3.36 2.31
9 3.25 2.26
10 3.17 2.23
11 3.11 2.20
12 3.06 2.18
13 3.01 2.16
14 2.98 2.14
15 2.95 2.13
16 2.92 2.12
17 2.90 2.11
18 2.88 2.10
19 2.86 2.09
20 2.84 2.09
21 2.83 2.08
22 2.82 2.07
23 2.81 2.07
24 2.80 2.06
25 2.79 2.06
26 2.78 2.06
27 2.77 2.05
28 2.76 2.05
29 2.76 2.04
30 2.75 2.04

Problems:
1. For a random sample of 16 values with mean 41.5 and the sum of the squares of the
deviations from the mean equal to 135 and drawn from a normal population, find the
95% confidence limits and the confidence interval, for the mean of the population.

Solution: Here, the sample size is N = 16, so that 𝛾 = 16 – 1 = 15. For 95%
confidence level (i,e. for 0.05 significance level), 𝑡0.05 (𝛾) = 𝑡0.05 (15) = 2.13.

𝑎𝑙𝑠𝑜 , 𝑡ℎ𝑒 𝑠𝑎𝑚𝑝𝑙𝑒 𝑚𝑒𝑎𝑛 𝑖𝑠 𝑋̅ = 41.5 and sample variance is


S² = (1/16) × 135 = 8.4375.

Therefore, the required confidence limits are

X̄ ± t₀.₀₅(𝛾) · s/√𝛾 = 41.5 ± [(√8.4375/√15) × 2.13] = 41.5 ± 1.5975 = 43.1, 39.9.

Therefore, the 95% confidence interval is (39.9, 43.1).

2. A sample of 12 measurements of the diameter of a metal ball gave the mean X̄ =
7.38 mm with the standard deviation s = 1.24 mm. Find (a) 95% and (b) 99% confidence
limits for the actual diameter.

Solution: The confidence limits for the mean are X̄ ± (s/√𝛾) t_c, where t_c = t₀.₀₅(𝛾) for the 95%
confidence level and t_c = t₀.₀₁(𝛾) for the 99% confidence level.
Here, we have N = 12. Therefore 𝛾 = N − 1 = 12 − 1 = 11. From the table, we find that, for 𝛾 =
11, t₀.₀₅ = 2.20 and t₀.₀₁ = 3.11. It is given that X̄ = 7.38 and s = 1.24. Accordingly, for the
actual diameter of the ball,

a) the 95% confidence limits are

X̄ ± (s/√𝛾) t₀.₀₅(11) = 7.38 ± (1.24/√11) × 2.20 = 7.38 ± 0.8225 = 8.2025, 6.5575.

b) the 99% confidence limits are

X̄ ± (s/√𝛾) t₀.₀₁(11) = 7.38 ± (1.24/√11) × 3.11 = 7.38 ± 1.1627 = 8.5427, 6.2173.

3. Find the Student's t for the following values in a sample of eight:

−4, −2, −2, 0, 2, 2, 3, 3

taking the mean of the population to be zero.

Solution: Here, N = 8. The sample mean is

X̄ = (1/8)(−4 − 2 − 2 + 0 + 2 + 2 + 3 + 3) = 0.25

Also, the sample variance is

S² = (1/8){(−4 − 0.25)² + (−2 − 0.25)² + (−2 − 0.25)² + (0 − 0.25)² + (2 − 0.25)² +
(2 − 0.25)² + (3 − 0.25)² + (3 − 0.25)²}

= (1/8){18.0625 + 5.0625 + 5.0625 + 0.0625 + 3.0625 + 3.0625 + 7.5625 + 7.5625}

= 6.1875

For the population mean µ = 0, we get

t = ((X̄ − µ)/s)√(N − 1) = ((0.25 − 0)/√6.1875) × √7 = 0.2659.

This is the required Student's t.
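
The computation of a t score is easily automated; the Python sketch below (an added illustration) repeats Problem 3 from the raw data.

```python
# Student's t for a small sample, t = ((X̄ − µ)/s)·sqrt(N − 1), where s uses
# divisor N as in the notes.  (Sketch, not part of the notes.)
from math import sqrt

data = [-4, -2, -2, 0, 2, 2, 3, 3]
mu = 0                                            # assumed population mean
N = len(data)

xbar = sum(data) / N                              # 0.25
s = sqrt(sum((x - xbar)**2 for x in data) / N)    # sqrt(6.1875) ≈ 2.487

t = (xbar - mu) / s * sqrt(N - 1)
print(round(t, 4))                                # 0.2659
```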


4. Consider the sample consisting of nine numbers 45, 47, 50, 52, 48, 47, 49, 53 and 51.The

sample is drawn from a population whose mean is 47.5. Find whether the sample mean differs

significantly from the population mean at 5% level of significance.

Solution: for the given sample, the size is N=9. Therefore its mean is

X̄ = (1/9)(45 + 47 + 50 + 52 + 48 + 47 + 49 + 53 + 51) = 49.11

And the variance is


S² = (1/9){(45 − 49.11)² + (47 − 49.11)² + (50 − 49.11)² + (52 − 49.11)² + (48 − 49.11)² +
(47 − 49.11)² + (49 − 49.11)² + (53 − 49.11)² + (51 − 49.11)²}

= 6.0988

So that the standard deviation is s = √6.0988 = 2.47.

Since N = 9, we have 𝛾 = 8 for which we find from the table that 𝑡0.05 = 2.31

With µ = 47.5, 𝑋̅= 49.11 and s= 2.47, we have


t = ((X̄ − µ)/s)√𝛾 = ((49.11 − 47.5)/2.47) × √8 = 1.844.

Thus, here the t- score is less than 𝑡0.05 (𝛾) = 2.31. Accordingly, the difference between the
sample mean and the population is not significant at 0.05 level of significance.

5. Eleven school boys were given a test in mathematics carrying a maximum of 25 marks.
They were given a month’s extra coaching and a second test of equal difficulty was held
thereafter. The following table gives the marks in the two tests.

Boy 1 2 3 4 5 6 7 8 9 10 11
I Test Marks 23 20 19 21 18 20 18 17 23 16 19
II Test Marks 24 19 22 18 20 22 20 20 23 20 17
Do the marks given evidence that the students have benefitted by extra coaching? Use 0.05
level of significance.

Solution: We first calculate the mean and the standard deviation in the difference in marks in
the two tests.

We note that the difference in marks(marks in II test – marks in I test) are

1, -1, 3, -3, 2, 2, 2, 3, 0, 4, -2.


The mean of these differences is

X̄ = (1/11)(1 − 1 + 3 − 3 + 2 + 2 + 2 + 3 + 0 + 4 − 2) = 1
And the variance is

s² = (1/11){(1 − 1)² + (−1 − 1)² + (3 − 1)² + (−3 − 1)² + (2 − 1)² + (2 − 1)²
+ (2 − 1)² + (3 − 1)² + (0 − 1)² + (4 − 1)² + (−2 − 1)²}
= (1/11)(0 + 4 + 4 + 16 + 1 + 1 + 1 + 4 + 1 + 9 + 9) = 50/11 = 4.545,

So that the standard deviation is 𝑠 = √4.545 = 2.13.

Since N = 11, we have 𝛾 = 10 for which we find from table that 𝑡0.05 = 2.23.

Now, we make hypothesis that the students have not been benefitted by extra coaching. That
is, the difference in mean marks 𝜇 is Zero. Under this Hypothesis, the t – score is
t = ((X̄ − 𝜇)/s)√𝛾 = ((1 − 0)/2.13) × √10 = 1.485.

We note that this t- score is less than 𝑡0.05 (𝛾) = 2.23. Hence, we do not reject the
hypothesis at 0.05 level of significance. This means that it is likely that the students have not
been benefitted by extra coaching.

Chi - square test


In practice, expected (theoretical) frequencies are computed on the basis of a hypothesis H0. If
under this hypothesis the value of 𝜒 2 computed with the use of the formula:
𝜒² = ∑_{k=1}^{n} (f_k − e_k)² / e_k

is greater than some critical value 𝜒²_c, we would conclude that the observed frequencies
differ significantly from the expected frequencies and would reject Ho at the corresponding
level of significance c. Otherwise; we would accept it or at least not reject it. This procedure
is called the Chi – square (𝜒 2 ) Test of hypothesis or significance.

Generally, the chi-square test is employed by taking c = 0.05 or 0.01. Tables giving the
values of 𝜒 2 𝑐 for different values of v are available. Below table gives the value of 𝜒 2 𝑐 at c
= 0.05 and 0.01 levels and for v=1,2,……10.

Table of values of 𝜒²_c(v) for c = 0.05 and 0.01


v     𝜒²₀.₀₅(v)     𝜒²₀.₀₁(v)

1 3.84 6.64
2 5.99 9.21
3 7.82 11.34
4 9.49 13.28
5 11.07 15.09
6 12.59 16.81
7 14.07 18.48
8 15.51 20.09
9 16.92 21.67
10 18.31 23.21

The number of degrees of freedom v is determined by using the formula v = n − m. Here, n is
the number of frequency pairs (fᵢ, eᵢ) used in the computation of 𝜒², and m is the number of
quantities that are needed (and used) in the calculation of the expected frequencies eᵢ. If
N = ∑ fᵢ is the only quantity used in the calculation of the eᵢ, then m = 1, so that v = n − 1.

Goodness of Fit
When a hypothesis H0 is accepted(or not rejected)on the basis of the Chi- square test,
we say that the expected frequencies calculated on the basis of H0 form a good fit for the
given frequencies. When H0 is rejected, we say that the corresponding expected frequencies
do not form a good fit.

It has been mentioned that the sampling distribution of 𝜒 2 is approximately identical


with the Chi-square distribution when the expected frequencies are at least equal to 5.
Therefore, the Chi- square test is applicable only if every expected frequency is ≥ 5. If some
expected frequencies are less than 5, then some frequencies are to be clubbed together so that
none of the expected frequency is less than 5.

Problems:

1. The following table gives the number of road accidents that occurred in a large city
during the various days of a week. Test the hypothesis that the accidents are uniformly
distributed over all the days of a week.

Day Sun Mon Tue Wed Thu Fri Sat Total


No. of 14 16 8 12 11 9 14 84
accidents
Solution: Under the hypothesis that the accidents are uniformly distributed over the week, the
expected number of accidents on each day is 12 (because a total of N = 84 accidents occurred
in 7 days). Thus, here, the expected frequencies are 12 each, and the observed frequencies are
the numbers of accidents shown in the given table. Using these, we find that
𝜒² = (14−12)²/12 + (16−12)²/12 + (8−12)²/12 + (12−12)²/12 + (11−12)²/12 + (9−12)²/12 + (14−12)²/12 = 50/12

= 4.17.

We note that n = 7 frequency pairs are used in the computation of 𝜒². Further, N = ∑ fᵢ = 84
is the only quantity used in the computation of the eᵢ. Therefore, the number of degrees of
freedom is v = 7 − 1 = 6. From the table we find that 𝜒²₀.₀₅(6) = 12.59 and 𝜒²₀.₀₁(6) = 16.81.
Since 𝜒² = 4.17 is much less than both 𝜒²₀.₀₅(6) and 𝜒²₀.₀₁(6), we do not reject the
hypothesis. This means that the accidents seem to be distributed uniformly over the week.
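
The χ² statistic for a goodness-of-fit problem like this one takes only a few lines of Python (added as an illustration; the critical values still come from the table above).

```python
# Chi-square statistic for the road-accident data (Problem 1).  Sketch only.
observed = [14, 16, 8, 12, 11, 9, 14]
expected = [sum(observed) / len(observed)] * len(observed)   # 12 per day

chi_sq = sum((f - e)**2 / e for f, e in zip(observed, expected))
print(round(chi_sq, 2))     # 4.17

# v = 7 - 1 = 6 degrees of freedom; chi^2_0.05(6) = 12.59 and
# chi^2_0.01(6) = 16.81, so the uniformity hypothesis is not rejected.
```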

2. Genetic theory states that children having one parent of blood type M and the other of the
blood type N will always be one of the three types M, MN, N and that the proportion of these
types will on an average be 1:2:1. A report states that out of 300 children having one M
parent and one N parent, 30% were found to be of type M, 45% of type MN and the
remainder of type N. Test the theory by 𝜒 2 test.

Solution: According to the hypothesis of the genetic theory, children with blood types M, MN
and N are in the proportions 1 : 2 : 1. This means that one child in four will have blood type M,
two children in four will have blood type MN and one child in four will have blood type N.
Therefore, out of 300 children, the expected number of children having

blood type M is (1/4) × 300 = 75 = e₁ (say),

blood type MN is (2/4) × 300 = 150 = e₂ (say),

blood type N is (1/4) × 300 = 75 = e₃ (say).

According to the report, the observed frequencies are (respectively)

f₁ = (30/100) × 300 = 90, f₂ = (45/100) × 300 = 135, f₃ = (25/100) × 300 = 75.

Therefore, the corresponding

𝜒² = (90−75)²/75 + (135−150)²/150 + (75−75)²/75 = 3 + 1.5 + 0 = 4.5

We note that the number of degrees of freedom is 3 − 1 = 2. For this number of degrees of
freedom, we have 𝜒²₀.₀₅(2) = 5.99 and 𝜒²₀.₀₁(2) = 9.21. Since 𝜒² = 4.5 is less than both
𝜒²₀.₀₅(2) and 𝜒²₀.₀₁(2), we do not reject the hypothesis. That is, the genetic theory seems to be
correct.

3. A set of five identical coins is tossed 320 times and the result is shown in the following
table.

No. of heads 0 1 2 3 4 5
Frequency 6 27 72 112 71 32

Test the hypothesis that the data follows a binomial distribution associated with a fair coin.

Solution: The probability that x of the 5 fair coins show a head in a single toss is given by the
binomial function
b(5, ½, x) = 5Cx (1/2)ˣ (1/2)^{5−x} = (5Cx)/2⁵ = (1/32)(5Cx) = b(x), say.

Accordingly, in 320 tosses the expected number of tosses in which x coins show a head is
320 × b(x). Hence the expected frequencies (i.e. the number of tosses in which
0, 1, 2, 3, 4, 5 coins show a head) are, respectively,

e₁ = 320 × b(0) = 320 × (1/32) × 5C0 = 10,

e₂ = 320 × b(1) = 320 × (1/32) × 5C1 = 50,

e₃ = 320 × b(2) = 320 × (1/32) × 5C2 = 100,

e₄ = 320 × b(3) = 320 × (1/32) × 5C3 = 100,

e₅ = 320 × b(4) = 320 × (1/32) × 5C4 = 50,

e₆ = 320 × b(5) = 320 × (1/32) × 5C5 = 10.

The corresponding observed frequencies are

𝑓1 = 6, 𝑓2 = 27, 𝑓3 = 72, 𝑓4 = 112, 𝑓5 = 71, 𝑓6 = 32

We find that
𝜒² = (6−10)²/10 + (27−50)²/50 + (72−100)²/100 + (112−100)²/100 + (71−50)²/50 + (32−10)²/10

= 16/10 + 529/50 + 784/100 + 144/100 + 441/50 + 484/10

= 78.68
We note that the number of degrees of freedom is 6 − 1 = 5. From the table we find that
𝜒²₀.₀₅(5) = 11.07 and 𝜒²₀.₀₁(5) = 15.09. We observe that 𝜒² = 78.68 is very much greater
than both 𝜒²₀.₀₅(5) and 𝜒²₀.₀₁(5). Therefore, we reject the hypothesis that the observed data
follows a binomial distribution associated with a fair coin.

4. A survey of 240families with 3 children each revealed the distribution shown in the
following table. Is the data consistent with the hypothesis that the male and female births are
equally probable? Use the 𝜒 2 test at 0.05 and 0.01 levels.

No. of children    3 boys, 0 girl    2 boys, 1 girl    1 boy, 2 girls    0 boy, 3 girls
No. of families    37                101               84                18

Solution: Let p be the probability of a male birth and q be the probability of a female birth.
Under the hypothesis that the male and female births are equally probable, we have p = ½, q
= 1-p = ½.then, among the three children, the probability that x children are boys is given by
the binomial probability function

b(3, ½, x) = 3Cx (1/2)ˣ (1/2)^{3−x} = (3Cx)/2³ = (1/8)(3Cx) = b(x), say,

from this, we find


b(0) = b(3) = 1/8, b(1) = b(2) = 3/8

Therefore, among 240 families with 3 children each, the expected number of families with

3 boys is 𝑒1 = 240 × b(3) = 30,

2 boys is 𝑒2 = 240 × b(2) = 90,

1 boys is 𝑒3 = 240 × b(1) = 90,

no boy is 𝑒4 = 240 × b(0) = 30,

the corresponding observed number of families are respectively(from what is given)

𝑓1 = 37, 𝑓2 = 101, 𝑓3 = 84, 𝑓4 = 18.

Hence, here,
𝜒² = (37−30)²/30 + (101−90)²/90 + (84−90)²/90 + (18−30)²/30

= 49/30 + 121/90 + 36/90 + 144/30 = 8.178

We note that the number of degrees of freedom is 4 − 1 = 3. From the table, we find that
𝜒²₀.₀₅(3) = 7.82 and 𝜒²₀.₀₁(3) = 11.34.

We observe that 𝜒² > 𝜒²₀.₀₅(3) and 𝜒² < 𝜒²₀.₀₁(3).

The Chi-square test therefore indicates that the data is not consistent with the given hypothesis
at the 0.05 level of significance, but is consistent with it at the 0.01 level.
