
UNIT-II

Probability I: Introductory Ideas


Probability: The Study of Odds and Ends

 Jacob Bernoulli (1654–1705), Abraham de Moivre (1667–1754), the Reverend Thomas Bayes (1702–61) and Joseph Lagrange (1736–1813) developed probability formulae and techniques.

 Probability was applied successfully at the gambling tables and, more relevantly, to social and economic problems.

 The mathematical theory of probability is the basis for statistical applications in both social and decision-making research.
Basic Terminology in Probability
 Two broad categories of decision-making problems:

Deterministic models
Probabilistic (random) models.
 Probability is the chance something will happen.
 Probabilities are expressed as fractions/decimals between zero and
one.
 Assigning a probability of zero means that something will never happen; a probability of one indicates that something will always happen.
 In probability theory, an event is one or more of the possible
outcomes of doing something.
Contd…
 The activity that produces such an event is referred
to in probability theory as an experiment.

 Events are said to be mutually exclusive if one and only one of them can take place at a time.

 When a list of the possible events that can result from an experiment includes every possible outcome, the list is said to be collectively exhaustive.
Random Experiment
A process of obtaining information through observation or measurement of a phenomenon whose outcome is subject to chance.

Properties:
(i) All possible outcomes can be listed in advance
(ii) Experiment can be repeated, and
(iii) The same outcome need not occur on different repetitions.

Random Variation
The variation among experimental outcomes due to
uncontrolled factors is called random variation.

A Simple Event
The most basic possible outcome of an experiment; it cannot be broken down into simpler outcomes.
 Sample Space
The set of all possible outcomes or simple events of an experiment.

Event
Any subset of outcomes of an experiment.

Compound Events
Two or more events are said to be compound events when these events occur simultaneously. These events may be (i) independent or (ii) dependent.

Equally Likely Events
Two or more events are said to be equally likely when each of them has an equal chance of occurring.

Complementary Events
If E is any subset of the sample space, then its complement, denoted by Ē, contains all the elements of the sample space that are not part of E. If S denotes the sample space, then

Ē = S − E = {all sample elements not in E}
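
A minimal sketch of the complement rule in Python, using a hypothetical fair-die example (not from the slides):

```python
# Complement of an event, illustrated with a fair six-sided die.
S = {1, 2, 3, 4, 5, 6}        # sample space
E = {2, 4, 6}                 # event E: an even number is rolled

E_complement = S - E          # Ē = S - E = {1, 3, 5}

# With equally likely outcomes, P(Ē) = 1 - P(E).
p_E = len(E) / len(S)
print(E_complement, 1 - p_E)  # {1, 3, 5} 0.5
```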


Three Types of Probability
 There are three basic ways of classifying probability.
These three represent rather different conceptual
approaches to the study of probability theory.

The three approaches:
 Classical Approach
 Relative Frequency Approach
 Subjective Approach
 Classical Approach
Defines probability as the ratio of favorable outcomes to total outcomes.
Also known as a priori probability.
It rests on a number of assumptions, hence it is the most restrictive approach and the least useful in real-life situations.
 Relative Frequency Approach
Defines probability as observed relative frequency of
an event in a very large number of trials.
It makes fewer assumptions, but requires that the event be capable of being repeated a large number of times.
 Subjective Probability
Deals with specific or unique situations typical of
the business or management world.
Based upon some belief or educated guess of the
decision maker.
Subjective assessment of probability permits the widest flexibility of the three concepts; it is also known as personal probability.
Probability Rules
 Most managers who use probabilities are concerned with
two conditions:
The case where one event or another will occur.
The situation where two or more events will both occur.

 A single probability refers to the chance that only one event will take place; it is called a marginal or unconditional probability.

 Addition rule for probability:
P(A or B) = P(A) + P(B) − P(A and B)
For mutually exclusive events:
P(A or B) = P(A) + P(B)
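
A short Python sketch of the addition rule, using assumed card-draw probabilities (hypothetical numbers, not from the slides):

```python
# Addition rule: P(A or B) = P(A) + P(B) - P(A and B).
p_a = 13 / 52        # P(A): drawing a heart
p_b = 4 / 52         # P(B): drawing a king
p_a_and_b = 1 / 52   # P(A and B): drawing the king of hearts

p_a_or_b = p_a + p_b - p_a_and_b
print(round(p_a_or_b, 4))  # 0.3077

# For mutually exclusive events P(A and B) = 0,
# so the rule reduces to P(A or B) = P(A) + P(B).
```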
Probabilities Under Conditions of Statistical
Independence
 Occurrence of one event has no effect on the probability
of the occurrence of any other event.
 There are three types of probabilities under statistical
independence
Marginal
 Probability of the occurrence of an event
Joint
P(AB) = P(A) X P(B)
Conditional
 P(A/B) = P(A) and P(B/A) = P(B)
 Under statistical independence, the assumption is that the events are not related.
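
A minimal sketch of the three probability types under independence, assuming two fair coin tosses (hypothetical example):

```python
# Two independent fair coin tosses.
p_a = 0.5            # marginal: P(A), heads on the first toss
p_b = 0.5            # marginal: P(B), heads on the second toss

p_joint = p_a * p_b  # joint: P(AB) = P(A) x P(B) = 0.25
p_a_given_b = p_a    # conditional: P(A/B) = P(A), since knowing B
                     # tells us nothing about A under independence
print(p_joint, p_a_given_b)  # 0.25 0.5
```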
Probabilities Under Conditions of Statistical
Dependence
 When the probability of some event is dependent on or affected
by the occurrence of some other event.

 As in the independent case, there are three types of probabilities under statistical dependence:
Conditional:
 P(A/B) = P(AB)/ P(B) and
 P(B/A) = P(AB)/P(A)
Joint:
 P(AB) = P(A/B) X P(B) and
 P(AB) = P(B/A) X P(A)
Marginal: Marginal probabilities under statistical dependence
are computed by summing up the probabilities of all the joint
events in which the simple event occurs.
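
A sketch of the same three probability types under dependence, starting from an assumed joint-probability table (hypothetical numbers):

```python
# Joint probabilities for two dependent events (assumed values).
p_ab = 0.20        # P(AB): both A and B occur
p_a_not_b = 0.30   # P(A and not-B)
p_not_a_b = 0.10   # P(not-A and B)

# Marginals: sum the joint probabilities in which the simple event occurs.
p_a = p_ab + p_a_not_b  # P(A) = 0.50
p_b = p_ab + p_not_a_b  # P(B) = 0.30

# Conditionals from the formulas above.
p_a_given_b = p_ab / p_b  # P(A/B) = P(AB)/P(B) ~ 0.667
p_b_given_a = p_ab / p_a  # P(B/A) = P(AB)/P(A) = 0.40
print(round(p_a_given_b, 3), p_b_given_a)
```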
Revising Prior Estimates of
Probabilities—Bayes’ Theorem
 Bayes’ theorem expresses how a subjective degree of
belief should rationally change to account for evidence.

 Probabilities can be revised as more (additional) information is gained. The new probability is known as the posterior probability.

 In the Bayesian interpretation, Bayes' theorem is fundamental to Bayesian statistics, and has applications in fields including science, engineering, medicine and law.
 In the Bayesian (or epistemological) interpretation,
probability measures a degree of belief. Bayes'
theorem then links the degree of belief in a
proposition before and after accounting for evidence.

 Bayes' theorem:

P(Ai/B) = P(AiB) / P(B) = [P(B/Ai) × P(Ai)] / [Σj P(B/Aj) × P(Aj)]
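
A minimal numerical sketch of revising a prior with Bayes' theorem, using hypothetical machine/defect figures (not from the slides):

```python
# Two machines produce the output; a defective item (event B) is found.
priors = [0.6, 0.4]         # P(A1), P(A2): share of output per machine
likelihoods = [0.02, 0.05]  # P(B/A1), P(B/A2): defect rate per machine

# Total probability: P(B) = sum of P(B/Aj) * P(Aj).
p_b = sum(l * p for l, p in zip(likelihoods, priors))

# Posterior P(A1/B): probability the defect came from machine 1.
posterior_a1 = likelihoods[0] * priors[0] / p_b
print(round(posterior_a1, 4))  # 0.375 (revised down from the 0.6 prior)
```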
Probability Distributions
 A probability distribution can be thought of as a theoretical frequency distribution, describing how outcomes are expected to vary in a random experiment.

 It is a listing of the probabilities of all possible outcomes that could result if the experiment were done.

 Classification of probability distributions:
Discrete: the variable can take on only a limited number of values or only integer values, which can be listed.
Continuous: the variable under consideration is allowed to take any value within a given range; all values cannot be listed.
Random Variables
 A variable is random, if it takes on different values as a
result of the outcomes of a random experiment.

 The values of a random variable are the numerical values corresponding to each possible outcome of the random experiment.

 Classification of random variables:
Discrete: the random variable is allowed to take only a limited number of values or integer values, which can be listed.
Continuous: the random variable is allowed to assume any value within a given range.
Expected Value and Its Role in Decision Making
 The expected value of a random variable is the
weighted average of all possible values that this
random variable can take on.

 The weights used in computing this average correspond to the probabilities in the case of a discrete random variable, or to probability densities in the case of a continuous random variable.

 Expected value can be used to make intelligent decisions under uncertain conditions by combining the probabilities that a random variable will take on certain values with the monetary gain or loss that results when it does take on those values.
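
A minimal sketch of an expected-monetary-value calculation for a discrete random variable, with assumed payoffs and probabilities:

```python
# Possible monetary outcomes and their probabilities (hypothetical).
outcomes = [100, 50, -20]        # gains/losses for each outcome
probabilities = [0.3, 0.5, 0.2]  # weights; must sum to 1

# Expected value: probability-weighted average of the outcomes.
expected_value = sum(x * p for x, p in zip(outcomes, probabilities))
print(expected_value)  # 100*0.3 + 50*0.5 + (-20)*0.2 = 51.0
```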
Binomial Distribution
 The binomial distribution describes discrete data, resulting from a
Bernoulli process experiment.

 Conditions/Requirements of Bernoulli process:


Each trial has only two possible (dichotomous) outcomes: either a success, whose probability is denoted by ‘p’, or a failure, whose probability is denoted by ‘q’.
The random experiment is performed under the same conditions for a
fixed and a finite number of times, say ‘n’. Each observation of the
random variable in a random experiment is called a trial.
Probability of the outcome of any trial remains fixed over time.
Trials are statistically independent.

 The binomial distribution is the discrete probability distribution of the number of successes in a sequence of n independent success/failure (dichotomous) experiments, each of which yields success with probability p.
Binomial Probability Function

In general, for a binomial random variable X, the probability of success (occurrence of the desired outcome) r number of times in n independent trials, regardless of their order of occurrence, is given by

P(X = r) = nCr p^r q^(n−r),  r = 0, 1, 2, …, n    (1)

where n = number of trials or sample size
p = probability of success
q = (1 − p), the probability of failure
X = discrete binomial random variable
r = number of successes in n trials

The term p^r q^(n−r) represents the probability of one sequence of outcomes, and nCr = n!/[r!(n−r)!] represents the number of possible sequences (combinations) of r successes that are possible out of n trials.
 The binomial distribution is frequently used to
model the number of successes in a sample of size
n drawn with replacement from a population of
size N.

 Parameters of binomial distribution: n (number of trials) and p (probability of success).

 Mean of a binomial distribution: μ = n.p

 Standard deviation of a binomial distribution: σ = √npq
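
A sketch of the binomial probability function from equation (1), plus the mean and standard deviation, using the Python standard library and assumed values for n and p:

```python
from math import comb, sqrt

def binomial_pmf(r, n, p):
    """P(X = r) = nCr * p^r * q^(n-r)"""
    q = 1 - p
    return comb(n, r) * p**r * q**(n - r)

n, p = 10, 0.3                                 # assumed example values
print(round(binomial_pmf(3, n, p), 4))         # P(X = 3) ~ 0.2668
print(n * p, round(sqrt(n * p * (1 - p)), 3))  # mean 3.0, sd ~ 1.449
```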
Poisson Distribution
 Poisson distribution is a discrete probability distribution that
expresses the probability of a given number of events occurring in
a fixed interval of time and/or space if these events occur with a
known average rate, and independent of the time since the last
event. Here, our concern is only upon the happening (success) of
an event.
 The Poisson distribution can also be used for the number of events
in other specified intervals such as distance, area or volume.
 Parameters of Poisson distribution: λ (Lambda: the mean number
of occurrences per interval of time).
 Mean of Poisson distribution = λ.
 Poisson as an approximation of the binomial distribution: applicable when n ≥ 20 and p ≤ 0.05, such that the product n.p = λ is finite.
For example, the random variable of interest might
be:

 The number of arrivals at a car wash in one hour.
 The number of repairs needed in 10 miles of highway.
 The number of leaks in 100 miles of pipeline.
 The number of defective items in a batch of material.
 The number of radioactive particles emitted.
Conditions for Poisson Process

 The outcomes in any interval of time occur randomly and independently of one another.

 The probability of occurrence of an outcome in a small interval of time is proportional to the length of the interval but is independent of the specific time interval.

 The probability of occurrence of more than one outcome in a small interval of time is negligible.

 The average number of occurrences of outcomes is constant in equal intervals of time.
 Poisson Probability Function

P(X = r) = (λ^r e^(−λ)) / r!,  r = 0, 1, 2, …

The typical application of the Poisson distribution is in analyzing queuing (or waiting-line) problems, in which customers arrive independently during an interval of time and the average or mean number of arrivals (or events) is proportional to the length of the time period.
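
A minimal sketch of the Poisson probability function above, with an assumed arrival rate for a queuing-style example:

```python
from math import exp, factorial

def poisson_pmf(r, lam):
    """P(X = r) = (lambda^r * e^(-lambda)) / r!"""
    return (lam**r * exp(-lam)) / factorial(r)

lam = 4                               # assumed mean arrivals per hour
print(round(poisson_pmf(2, lam), 4))  # P(exactly 2 arrivals) ~ 0.1465
```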
Normal Distribution
 The normal (or Gaussian) distribution is a continuous probability
distribution that has a bell-shaped probability density function,
known as the Gaussian function or informally the bell curve.

 There are two basic reasons why the normal distribution occupies such a prominent place in statistics. First, it has some properties that make it applicable to a great many situations in which it is necessary to make inferences by taking samples. Second, the normal distribution comes close to fitting the actual observed frequency distributions of many phenomena (the central limit theorem). Hence, it is also known as the “mother of all distributions”.
 Characteristics of normal probability
distribution (normal curve):
The curve has a single peak; thus it is unimodal.
It is a bell-shaped, symmetrical curve.
The mean of a normally distributed population lies at the centre of its normal curve.
The mean, median and mode coincide in a normal curve.
The two tails of the normal curve extend indefinitely and never touch the horizontal axis.

 Parameters of normal distribution: the mean (μ) and the standard deviation (σ).
 The table of the standard normal probability distribution (a normal distribution with μ = 0 and σ = 1) is used to find probabilities of a desired range (areas under any normal curve). With this table, we can determine the area (or probability) that a normally distributed random variable will lie within certain distances from the mean. These distances are defined in terms of standard deviations.
 Standardizing a normal random variable:
Z = (X- μ)/ σ
 Normal distribution as an approximation of the binomial distribution: applicable when n is large (> 20) and both n.p and n.q are at least 5.
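
A short sketch of standardizing a normal random variable and of the normal approximation to the binomial, using assumed parameters and SciPy’s standard normal CDF in place of a printed Z-table:

```python
from scipy.stats import norm

# Standardizing: Z = (X - mu) / sigma (assumed mu and sigma).
mu, sigma = 100, 15
z = (120 - mu) / sigma             # ~ 1.33
print(round(norm.cdf(z), 4))       # P(X <= 120) ~ 0.9088

# Normal approximation of a binomial with n = 40, p = 0.4:
# n > 20, and n*p = 16 and n*q = 24 are both at least 5.
n, p = 40, 0.4
mu_b = n * p
sigma_b = (n * p * (1 - p)) ** 0.5
z_b = (20.5 - mu_b) / sigma_b      # P(X <= 20), with continuity correction
print(round(norm.cdf(z_b), 4))
```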
