
EN 53001

Communication Theory
3. Random processes and noise
3.1 Probability
3.1 Probability - Review
• Definitions
• Axioms
• Joint and Conditional Probability
• Total Probability
• Bayes’ theorem
• Independent Events
• Combined Experiments
• Bernoulli Trials
Probability Introduced through Sets and
Relative Frequency
“Probability is a measure of the expectation that an event will
occur or a statement is true. Probabilities are given a value
between 0 (will not occur) and 1 (will occur). The higher the
probability of an event, the more certain we are that the event
will occur.”
• Probability is defined in two ways:
1. Set theory and fundamental axioms
2. Relative frequency, based on common sense and
engineering/scientific observations
– From a physical experiment, a mathematical model is developed. A
single experiment is called a trial for which there is an outcome.
• Rather than a precise mathematical procedure for
defining an experiment, a simplified approach is to use
reasoning and examples
• Example 3.1-1: tossing a die and observing the
number that shows up
– if the die is unbiased, the likelihood of any number
occurring is 1/6 (this is called probability of the
outcome).
• The set of all possible outcomes in any experiment is
called the sample space S.
• There are three components in forming the mathematical
model of random experiments:
1. Sample space (discrete and continuous): Set of all possible
outcomes of a trial
2. Events: An event is defined as a subset of the sample space.
A sample space (set) with N elements can have as many as
2^N events (subsets)
3. Probability: Probability is a nonnegative number assigned to
each event on a sample space; it is a function of the event,
i.e., P(A) or P[A] is the probability of event A.
– Axiom 1: P(A) ≥ 0
– Axiom 2: P(S) = 1; therefore S is a certain event, and the null
set ∅ is an event with no elements, known as the impossible
event, with probability 0.
– Axiom 3: For N events An, n = 1, 2, …, N, where N could be
infinite, defined on a sample space S, with Am ∩ An = ∅ for
all m ≠ n:

P(A1 ∪ A2 ∪ … ∪ AN) = P(A1) + P(A2) + … + P(AN)

• The probability of an event that is the union of any
number of mutually exclusive events is equal to the
sum of the individual event probabilities.
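The three axioms can be checked directly on the die-tossing model of Example 3.1-1. A minimal sketch (assuming the uniform 1/6 assignment of the example):

```python
from fractions import Fraction

# Sample space for tossing a fair die (Example 3.1-1).
S = {1, 2, 3, 4, 5, 6}

# Probability assignment: each outcome carries 1/6, so an event's
# probability is (number of elements) / 6.
def P(event):
    return Fraction(len(event), len(S))

# Axiom 1: P(A) >= 0 for every event A.
assert all(P({x}) >= 0 for x in S)

# Axiom 2: the certain event S has probability 1.
assert P(S) == 1

# Axiom 3: for mutually exclusive events, the probability of the
# union equals the sum of the probabilities.
A, B = {1, 2}, {5}                 # A and B share no elements
assert P(A | B) == P(A) + P(B)
print(P(A | B))                    # 1/2
```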
Mathematical Model of Random Experiments

• The mathematical model of an experiment can
be formed in the following three steps:
1. Assignment of a sample space
2. Definition of events of interest
3. Making probability assignments to the events
such as to satisfy the axioms
Example 3.1-2: An experiment consists of observing the sum of the
numbers showing up when two dice are thrown. Develop a model
for this experiment.

Figure 3.1-1: Sample space applicable to Example 3.1-2.


Example 3.1-2 Continued
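Assuming each of the 36 outcomes in Figure 3.1-1 is equally likely, the three modeling steps for Example 3.1-2 can be sketched as:

```python
from fractions import Fraction
from itertools import product

# Step 1: sample space — all ordered pairs from two dice (Figure 3.1-1).
S = list(product(range(1, 7), repeat=2))

# Step 3: probability assignment — each of the 36 outcomes gets 1/36.
def P(event):
    return Fraction(sum(event(s) for s in S), len(S))

# Step 2: an event of interest — "the sum of the two numbers is 7".
p_sum7 = P(lambda s: s[0] + s[1] == 7)
print(len(S), p_sum7)              # 36 1/6
```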
Probability as a Relative Frequency
• The second way of forming the probability is by using
the relative frequency of occurrence of some events
by means of common sense and
engineering/scientific observations.
• For example, in flipping a fair coin, if heads (H) shows
up nH times out of n flips, the limit
lim (n→∞) nH/n = P(H)
can be interpreted as the probability of the event H. If
the coin is "fair", this limit is 1/2, i.e., P(H) = 1/2.
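The limiting behavior of nH/n can be illustrated with a short simulation; the seed and flip counts below are arbitrary choices:

```python
import random

random.seed(1)  # arbitrary seed so the run is repeatable

def relative_frequency(n):
    """Flip a simulated fair coin n times; return n_H / n."""
    n_H = sum(random.random() < 0.5 for _ in range(n))
    return n_H / n

# The relative frequency settles toward P(H) = 1/2 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```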
Joint and Conditional Probability

• In some experiments, some events are not mutually
exclusive because of common elements (see
Example 3.0-1)
• For two events A and B, the common elements form
the event A ∩ B. The probability P(A ∩ B) is called
the joint probability of the two events A and B,
which intersect in the sample space.
• It can be shown with a Venn diagram that:
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) ≤ P(A) + P(B)
or
P(A ∩ B) = P(A) + P(B) − P(A ∪ B)
• The probability of the union of two events
never exceeds the sum of the event
probabilities.
• Equality holds only for mutually exclusive
events, because then A ∩ B = ∅ and therefore
P(A ∩ B) = P(∅) = 0.
• For an event B with nonzero probability,
the conditional probability of an event A,
given B, is:
P(A|B) = P(A ∩ B) / P(B)
• For mutually exclusive events A and B,
A ∩ B = ∅; therefore P(A|B) = 0.
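For the two-dice sample space, conditional probability can be computed directly from the definition; the two events below are chosen purely for illustration:

```python
from fractions import Fraction
from itertools import product

S = list(product(range(1, 7), repeat=2))  # two-dice sample space

def P(event):
    return Fraction(sum(event(s) for s in S), len(S))

A = lambda s: s[0] + s[1] == 7     # "the sum is 7"
B = lambda s: s[0] == 3            # "the first die shows 3"

# Definition: P(A|B) = P(A ∩ B) / P(B)
P_A_given_B = P(lambda s: A(s) and B(s)) / P(B)
print(P_A_given_B)                 # 1/6
```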
Total Probability
Suppose we are given N mutually exclusive events
Bn, n = 1, 2, …, N, whose union equals S:

Bm ∩ Bn = ∅,  m ≠ n = 1, 2, …, N

B1 ∪ B2 ∪ … ∪ BN = S

• The total probability of an event A defined on S,

P(A) = Σ (n=1 to N) P(A|Bn) P(Bn)

is proven using Figure 3.1-2.

Figure 3.1-2: Venn diagram of N mutually exclusive events Bn
and another event A.
Bayes’ theorem
• Bayes’ theorem is given in equations (1.4-15) and (1.4-16) in
your text. See Examples 1.4-2, 1.4-3, and 1.4-4.
• If P(A) ≠ 0:
P(Bn|A) = P(Bn ∩ A) / P(A)
• If P(Bn) ≠ 0:
P(A|Bn) = P(A ∩ Bn) / P(Bn)
• Equating the two (since P(Bn ∩ A) = P(A ∩ Bn)), we get one
form of Bayes’ theorem:
P(Bn|A) = P(A|Bn) P(Bn) / P(A)
• Another form is obtained using the concept of total
probability:
P(Bn|A) = P(A|Bn) P(Bn) / Σ (n=1 to N) P(A|Bn) P(Bn)
Example 3.1-3: An elementary binary communication system consists of a
transmitter that sends one of two possible symbols (a 1 or a 0) over a
channel to a receiver. The channel occasionally causes errors, so that a 1
shows up at the receiver as a 0 and vice versa.

We denote the events B1 = "the symbol before the channel is 1" and B2 = "the
symbol before the channel is 0". The a priori probabilities P(B1) and P(B2)
are given.

The events A1 = "the symbol after the channel is 1" and A2 = "the symbol after
the channel is 0".
The conditional probabilities describing the effect of the channel are given.

Find P(A1) and P(A2), the total probabilities of receiving a 1 and a 0. Also find
the a posteriori probabilities P(B1|A1), P(B1|A2), P(B2|A1) and P(B2|A2).
Figure 3.1-3: Binary symmetric communication system, diagrammatical model.

The last two numbers are probabilities of system error, while P(B1|A1) and P(B2|A2) are probabilities
of correct system transmission of symbols.
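The slide's numerical values are not reproduced here, so the sketch below assumes illustrative a priori probabilities and a crossover probability of 0.1 for the binary symmetric channel of Figure 3.1-3:

```python
from fractions import Fraction

# Assumed values (hypothetical, for illustration only):
P_B1 = Fraction(6, 10)             # P("1 sent")
P_B2 = 1 - P_B1                    # P("0 sent")
p = Fraction(1, 10)                # channel crossover probability

P_A1_given_B1 = 1 - p              # 1 received given 1 sent
P_A1_given_B2 = p                  # 1 received given 0 sent

# Total probabilities of receiving a 1 and a 0.
P_A1 = P_A1_given_B1 * P_B1 + P_A1_given_B2 * P_B2
P_A2 = 1 - P_A1

# A posteriori probabilities via Bayes' theorem.
P_B1_given_A1 = P_A1_given_B1 * P_B1 / P_A1   # correct transmission of a 1
P_B2_given_A1 = 1 - P_B1_given_A1             # system error given a 1 received

print(P_A1, P_B1_given_A1, P_B2_given_A1)     # 29/50 27/29 2/29
```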
Independent Events
• Two events A and B are statistically independent if
the probability of occurrence of one event is not
affected by the occurrence of the other. This
statement can be written as:
P(A|B) = P(A) or P(B|A) = P(B)
which also means that the probability of the joint
occurrence of the two events must equal the product
of the two event probabilities:
P(A ∩ B) = P(A) P(B)
• This condition is used as a test of independence; it is
the necessary and sufficient condition for A and B to
be independent.
• For mutually exclusive events, the joint
probability is P(A ∩ B) = 0; therefore two events
cannot be both mutually exclusive and
statistically independent. For two events to be
independent, they must have an intersection:
A ∩ B ≠ ∅.
• When more than two events are involved, say
three events A, B and C, the following four
equations must all be satisfied for the events to
be considered independent:
1. P(A ∩ B) = P(A) P(B)
2. P(B ∩ C) = P(B) P(C)
3. P(A ∩ C) = P(A) P(C)
4. P(A ∩ B ∩ C) = P(A) P(B) P(C)
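The fourth equation is not implied by the first three. A classic counterexample (two fair coin flips) can be checked directly; the three events below are chosen for illustration:

```python
from fractions import Fraction
from itertools import product

S = list(product("HT", repeat=2))  # two fair coin flips

def P(event):
    return Fraction(sum(event(s) for s in S), len(S))

A = lambda s: s[0] == "H"          # first flip is heads
B = lambda s: s[1] == "H"          # second flip is heads
C = lambda s: s[0] == s[1]         # the two flips match

# Equations 1-3 (pairwise independence) all hold ...
assert P(lambda s: A(s) and B(s)) == P(A) * P(B)
assert P(lambda s: B(s) and C(s)) == P(B) * P(C)
assert P(lambda s: A(s) and C(s)) == P(A) * P(C)

# ... but equation 4 fails: P(A ∩ B ∩ C) = 1/4, not 1/8.
assert P(lambda s: A(s) and B(s) and C(s)) != P(A) * P(B) * P(C)
```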
Combined Experiments
• A combined experiment consists of forming a
single experiment by combining individual
experiments, which are called sub-experiments.
In this regard, we will have a combined sample
space, events defined on the combined space,
and probabilities assigned to those events.
Figure 3.1-4 A combined sample space for two subexperiments.
Permutations and Combinations
• Permutations arise in experiments involving
multiple trials, where outcomes are elements of a
finite sample space and are not replaced after
each trial. There are nPr = n!/(n−r)! permutations
of r objects selected from a total of n objects when
order matters.
• If the order of elements in a sequence is not
important, the possible sequences of r elements
taken from n elements without replacement are
called combinations. There are nCr = n!/[(n−r)! r!]
combinations in selecting r objects from a total of n.
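Both counts follow directly from the factorial formulas; for example, with n = 5 and r = 3:

```python
import math

n, r = 5, 3

# Permutations: ordered selections of r objects from n, no replacement.
nPr = math.factorial(n) // math.factorial(n - r)

# Combinations: the same selections with order ignored (divide by r!).
nCr = nPr // math.factorial(r)

# Python 3.8+ provides these counts directly.
assert nPr == math.perm(n, r)
assert nCr == math.comb(n, r)
print(nPr, nCr)                    # 60 10
```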
Bernoulli Trials
• Any experiment for which there are only two
possible outcomes in any trial (e.g., pass or fail,
true or false, 1 or 0), repeated independently,
constitutes a sequence of Bernoulli trials.
Example 3.1-4: Pulse Code Modulation (PCM) Repeater Error
Probability
In PCM, regenerative repeaters are used to detect pulses corrupted by noise
and retransmit new, clean pulses.

[Figure: a channel of n links in tandem (1st link, 2nd link, …, kth link, …, nth
link) between input (In) and output (Out).]

If Pe is the probability of error in detecting a pulse over one link, show that
PE, the probability of error in detecting a pulse over the entire channel (n
links in tandem), is PE ≈ nPe when nPe << 1.
Solution:
Probability of detecting a pulse correctly over one link = (1- Pe)
Probability of detecting a pulse correctly over entire channel (n links)= (1- PE)
A pulse can be detected correctly over the entire channel if either the pulse is
detected correctly over every link or errors are made over an even number of
links only.
PCM Repeater Error Probability (continued)
Probability of correctly detecting a pulse over the entire channel:
1 − PE = P(correct detection over all links)
+ P(erroneous detection over two links only)
+ P(erroneous detection over four links only) + …
+ P(erroneous detection over 2⌊n/2⌋ links only)
where ⌊a⌋ denotes the largest integer less than or equal to a.

1 − PE = (1 − Pe)^n + Σ (k = 2, 4, 6, …) [n!/(k!(n−k)!)] Pe^k (1 − Pe)^(n−k),  Pe << 1
≈ (1 − Pe)^n + [n(n−1)/2] Pe^2 (1 − Pe)^(n−2)
≈ (1 − Pe)^n   if nPe << 1
≈ 1 − nPe   since Pe << 1
and PE ≈ nPe. (Intuitive explanation?)
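The approximation can be checked against the exact Bernoulli-trial sum (a received pulse is in error iff an odd number of links flip it); the n and Pe values below are arbitrary illustrative choices satisfying nPe << 1:

```python
import math

def PE_exact(n, Pe):
    """Exact error probability over n links in tandem: the output pulse
    is wrong iff errors occur on an odd number of links."""
    return sum(math.comb(n, k) * Pe**k * (1 - Pe)**(n - k)
               for k in range(1, n + 1, 2))

n, Pe = 10, 1e-4                   # illustrative values with n*Pe << 1
print(PE_exact(n, Pe), n * Pe)     # the two agree closely
```

Intuitively, when nPe << 1 a single-link error dominates all multi-error patterns, and there are n links on which that one error can occur, giving PE ≈ nPe.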
