
An experiment is a situation involving chance or probability that leads to results

called outcomes.

An outcome is the result of a single trial of an experiment.

An event is one or more outcomes of an experiment.

Probability is the measure of how likely an event is.

Probability starts with logic. There is a set of N elements. We can define a subset
of n favorable elements, where n is less than or equal to N. Probability is
defined as the ratio of favorable cases to total cases, or calculated as:

P = n/N

This is the Fundamental Formula of Probability (FFPr), and everything in the theory of
probability is derived from it.
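As a quick sketch of the formula P = n/N, the following Python snippet computes a probability as an exact fraction (the helper name `probability` is chosen for illustration):

```python
from fractions import Fraction

def probability(favorable: int, total: int) -> Fraction:
    """Fundamental Formula of Probability: P = n/N."""
    if total <= 0 or not 0 <= favorable <= total:
        raise ValueError("need 0 <= n <= N and N > 0")
    return Fraction(favorable, total)

# Drawing one card from a standard 52-card deck:
# 13 favorable cases (hearts) out of 52 total cases.
p_heart = probability(13, 52)
print(p_heart)  # 1/4
```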

NOTES

An urn contains w white balls and b black balls (w > 0 and b > 0). The balls are
thoroughly mixed and two are drawn, one after the other, without replacement.
Let Wi and Bi denote the respective outcomes 'white on the ith draw' and 'black
on the ith draw,' for i = 1, 2.

P(W2) = P(W1) = w/(w + b). (This clearly implies the analogous identity for B2 and
B1.)

Furthermore, P(Wi) = w/(w + b) for any i not exceeding the total number of
balls, w + b.
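The claim that P(Wi) does not depend on i can be checked by brute force: enumerate every ordering of the balls and count those with a white ball in position i. A small Python sketch (keep w + b small, since all orderings are generated):

```python
from fractions import Fraction
from itertools import permutations

def p_white_on_draw(w: int, b: int, i: int) -> Fraction:
    """Exact P(Wi): enumerate all orderings of the w + b balls
    and count those with a white ball in position i (1-based)."""
    balls = ['W'] * w + ['B'] * b
    perms = list(permutations(balls))
    favorable = sum(1 for p in perms if p[i - 1] == 'W')
    return Fraction(favorable, len(perms))

# With 3 white and 2 black balls, both draws give w/(w + b) = 3/5.
print(p_white_on_draw(3, 2, 1), p_white_on_draw(3, 2, 2))  # 3/5 3/5
```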

In order to measure probabilities, mathematicians use the following formula for the
probability of an event E when all outcomes are equally likely:

P(E) = n(E)/n(S),

where n(E) is the number of outcomes in E and n(S) is the total number of outcomes
in the sample space S.
Combining Events

If E and F are events in an experiment, then:

E' is the event that E does not occur.

E ∪ F is the event that either E occurs or F occurs (or both).

E ∩ F is the event that both E and F occur.

E and F are said to be disjoint or mutually exclusive if E ∩ F is empty.
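These operations map directly onto Python's built-in set type, so a single die roll makes a convenient sketch (the event definitions are chosen for illustration):

```python
# Rolling one die: sample space and two events.
S = {1, 2, 3, 4, 5, 6}
E = {2, 4, 6}        # "roll an even number"
F = {5, 6}           # "roll at least 5"

E_complement = S - E          # E': E does not occur
union = E | F                 # E ∪ F: E or F (or both) occurs
intersection = E & F          # E ∩ F: both E and F occur
disjoint = (E & F) == set()   # mutually exclusive?

print(E_complement, union, intersection, disjoint)
# {1, 3, 5} {2, 4, 5, 6} {6} False
```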

Some Properties of Estimated Probability

Let S = {s1, s2, ..., sn} be a sample space and let P(si) be the estimated
probability of the event {si}. Then
(a) 0 ≤ P(si) ≤ 1
(b) P(s1) + P(s2) + ... + P(sn) = 1
(c) If E = {e1, e2, ..., er}, then P(E) = P(e1) + P(e2) + ... + P(er).

In words:

(a) The estimated probability of each outcome is a number between 0 and 1.
(b) The estimated probabilities of all the outcomes add up to 1.
(c) The estimated probability of an event E is the sum of the estimated
probabilities of the individual outcomes in E.
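Properties (a)–(c) can be checked directly on data. Below is a Python sketch using hypothetical frequencies from 100 rolls of a die (the numbers are invented sample data, not from the source):

```python
# Hypothetical frequencies from 100 rolls of a (possibly loaded) die.
freq = {1: 12, 2: 15, 3: 20, 4: 18, 5: 15, 6: 20}
n_trials = sum(freq.values())  # 100

# Estimated probability of each outcome: relative frequency.
P = {s: f / n_trials for s, f in freq.items()}

# (a) each P(s) lies in [0, 1]; (b) the P(s) sum to 1
assert all(0 <= p <= 1 for p in P.values())
assert abs(sum(P.values()) - 1) < 1e-9

# (c) P(E) is the sum over the outcomes in E = "roll an even number"
E = {2, 4, 6}
p_E = sum(P[s] for s in E)
print(p_E)  # ≈ 0.53
```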

Empirical Probability

The empirical probability, P(E), of an event E is a probability determined from
the nature of the experiment rather than through actual experimentation.

The estimated probability approaches the empirical probability as the number of
trials gets larger and larger.
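This convergence is easy to see in simulation. A minimal Python sketch, tossing a fair coin (empirical probability of heads is 0.5) and watching the relative frequency settle as trials grow:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Relative frequency of heads over more and more fair-coin tosses:
# the estimated probability drifts toward the empirical value 0.5.
freqs = {}
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    freqs[n] = heads / n
    print(n, freqs[n])
```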

Notes

1. We write P(E) for both estimated and empirical probability. Which one we are
referring to should always be clear from the context.
2. Empirical probability satisfies the same properties (shown above) as estimated
probability.

Abstract Probability

An abstract finite sample space is just a finite set S. An (abstract) probability
distribution is an assignment of a number P(si) to each outcome si in a sample
space S = {s1, s2, ..., sn} so that

(a) 0 ≤ P(si) ≤ 1
(b) P(s1) + P(s2) + ... + P(sn) = 1.

P(si) is called the (abstract) probability of si. Given a probability distribution,
we obtain the probability of an event E by adding up the probabilities of the
outcomes in E.

If P(E) = 0, we call E an impossible event. The empty event ∅ is always
impossible, since something must happen (so the event containing no outcomes
can never occur).

Notes

1. Abstract probability includes both estimated and empirical probability. Thus,
all properties of abstract probability are also properties of estimated and
empirical probability. As a consequence, everything we say about abstract
probability applies equally well to estimated and empirical probability.
2. From now on, we will speak only of "probability," meaning abstract
probability, thus covering both estimated and empirical probability, depending on
the context.

Addition Principle

Mutually Exclusive Events


If E and F are mutually exclusive events, then

P(E ∪ F) = P(E) + P(F).

This holds true also for more events: If E1, E2, . . . , En are mutually exclusive
events (that is, the intersection of any pair of them is empty) and E is the union
of E1, E2, . . . , En, then

P(E) = P(E1) + P(E2) + ... + P(En).

General Addition Principle


If E and F are any two events, then

P(E ∪ F) = P(E) + P(F) - P(E ∩ F).
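The general addition principle can be verified on a concrete example. A Python sketch with one roll of a fair die, using exact fractions (the events are chosen for illustration):

```python
from fractions import Fraction

# One roll of a fair die: six equally likely outcomes.
S = {1, 2, 3, 4, 5, 6}
E = {2, 4, 6}   # "roll an even number"
F = {4, 5, 6}   # "roll at least 4"

def P(event):
    """Equally likely outcomes: P(E) = n(E)/n(S)."""
    return Fraction(len(event), len(S))

lhs = P(E | F)                      # P(E ∪ F)
rhs = P(E) + P(F) - P(E & F)        # addition principle
print(lhs, rhs)  # 2/3 2/3
```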

Further Properties of Probability

The following are true for any sample space S and any event E.

P(S) = 1 (the probability of something happening is 1).

P(∅) = 0 (the probability of nothing happening is 0).

P(E') = 1 - P(E) (the probability of E not happening is 1 minus the probability of E).

Conditional Probability

If E and F are two events, then the conditional probability, P(E|F), is the
probability that E occurs, given that F occurs, and is defined by

P(E|F) = P(E ∩ F)/P(F).

We can rewrite this formula in a form known as the multiplication principle:

P(E∩F) = P(F)P(E|F).
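Both the definition and the multiplication principle can be checked on a small example. A Python sketch with two fair-coin tosses, using exact fractions (the events are chosen for illustration):

```python
from fractions import Fraction

# Two tosses of a fair coin: four equally likely outcomes.
S = {'HH', 'HT', 'TH', 'TT'}
E = {'HH'}         # "both tosses are heads"
F = {'HH', 'HT'}   # "first toss is heads"

def P(event):
    """Equally likely outcomes: P(E) = n(E)/n(S)."""
    return Fraction(len(event), len(S))

# Definition of conditional probability: P(E|F) = P(E ∩ F)/P(F)
p_E_given_F = P(E & F) / P(F)
print(p_E_given_F)  # 1/2

# Multiplication principle: P(E ∩ F) = P(F) P(E|F)
assert P(E & F) == P(F) * p_E_given_F
```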

Conditional Estimated Probability


If E and F are events and P is the estimated probability, then

P(E|F) = fr(E ∩ F)/fr(F),

where fr denotes the frequency of an event.

Conditional Probability for Equally Likely Outcomes


If all the outcomes in S are equally likely, then
P(E|F) = n(E ∩ F)/n(F).

Independent Events

The events E and F are independent if

P(E|F) = P(E)

or, equivalently (assuming that P(F) is not 0), we have the following:

Test for Independence

The events E and F are independent if and only if

P(E ∩ F) = P(E)P(F).

If two events E and F are not independent, then they are dependent.

Given any number of mutually independent events, the probability of their
intersection is the product of the probabilities of the individual events.
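The test for independence is easy to apply when outcomes can be enumerated. A Python sketch with two rolls of a fair die, showing one independent pair and one dependent pair (the events are chosen for illustration):

```python
from fractions import Fraction
from itertools import product

# Two rolls of a fair die: 36 equally likely outcomes.
S = set(product(range(1, 7), repeat=2))
E = {(a, b) for a, b in S if a == 6}       # "first roll is a 6"
F = {(a, b) for a, b in S if b == 6}       # "second roll is a 6"
G = {(a, b) for a, b in S if a + b == 12}  # "total is 12"

def P(event):
    """Equally likely outcomes: P(E) = n(E)/n(S)."""
    return Fraction(len(event), len(S))

print(P(E & F) == P(E) * P(F))  # True: E and F are independent
print(P(E & G) == P(E) * P(G))  # False: E and G are dependent
```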

Bayes' Theorem

The short form of Bayes' Theorem states that if E and F are events, then

P(F|E) = P(E|F)P(F) / [P(E|F)P(F) + P(E|F')P(F')].

We can often compute P(F|E) by instead constructing a probability tree.

An expanded form of Bayes' Theorem states that if E is an event, and if F1, F2,
and F3 are a partition of the sample space S, then

P(F1|E) = P(E|F1)P(F1) / [P(E|F1)P(F1) + P(E|F2)P(F2) + P(E|F3)P(F3)].

A similar formula works for a partition of S into four or more events.
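The short form of Bayes' Theorem can be sketched in Python with exact fractions. The numbers below are hypothetical, invented for illustration: a condition F with P(F) = 1/100, a test that detects it with P(E|F) = 99/100, and a false-positive rate P(E|F') = 5/100:

```python
from fractions import Fraction

# Hypothetical inputs (for illustration only).
p_F = Fraction(1, 100)              # P(F)
p_E_given_F = Fraction(99, 100)     # P(E|F)
p_E_given_notF = Fraction(5, 100)   # P(E|F'), with P(F') = 1 - P(F)

# Short form of Bayes' Theorem:
# P(F|E) = P(E|F)P(F) / [P(E|F)P(F) + P(E|F')P(F')]
p_F_given_E = (p_E_given_F * p_F) / (
    p_E_given_F * p_F + p_E_given_notF * (1 - p_F)
)
print(p_F_given_E)  # 1/6
```

Even with a very accurate test, the posterior probability P(F|E) is only 1/6 here, because the condition itself is rare.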
