Communication Theory
3. Random processes and noise
3.1 Probability
3.1 Probability - Review
• Definitions
• Axioms
• Joint and Conditional Probability
• Total Probability
• Bayes’ theorem
• Independent Events
• Combined Experiments
• Bernoulli Trials
Probability Introduced through Sets and
Relative Frequency
“Probability is a measure of the expectation that an event will
occur or a statement is true. Probabilities are given a value
between 0 (will not occur) and 1 (will occur). The higher the
probability of an event, the more certain we are that the event
will occur.”
• Probability is defined in two ways:
1. Set theory and fundamental axioms
2. Relative frequency, based on common sense and
engineering/scientific observations
– From a physical experiment, a mathematical model is developed. A
single experiment is called a trial for which there is an outcome.
• Rather than a precise mathematical procedure for
defining an experiment, the simplified approach is to
use reasoning and examples
• Example 3.1-1: tossing a die and observing the
number that shows up
– if the die is unbiased, the likelihood of any number
occurring is 1/6 (this is called the probability of the
outcome).
• The set of all possible outcomes in any experiment is
called the sample space S.
• There are three components in forming the mathematical
model of random experiments:
1. Sample space (discrete and continuous): Set of all possible
outcomes of a trial
2. Events: An event is defined as a subset of the sample space.
A sample space (set) with N elements can have as many as
2^N events (subsets)
3. Probability: Probability is a nonnegative number assigned to
each event on a sample space; it is a function of the event,
i.e., P(A) or P[A] denotes the probability of event A.
– Axiom 1: P(A) ≥ 0
– Axiom 2: P(S) = 1, so S is the certain event; the null
set ∅, the event containing no elements, is the
impossible event and has probability 0.
– Axiom 3: For N events An, n = 1, 2, …, N (where N
may be infinite) defined on a sample space S, with
Am ∩ An = ∅ for all m ≠ n,
P(A1 ∪ A2 ∪ … ∪ AN) = P(A1) + P(A2) + … + P(AN)
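As a quick sanity check, the three axioms can be verified for the fair-die experiment of Example 3.1-1 (a sketch; the uniform assignment P({k}) = 1/6 is the unbiased-die model from the text):

```python
from fractions import Fraction

# Sample space for one toss of an unbiased die (Example 3.1-1)
S = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event (a subset of S) under the uniform model."""
    return Fraction(len(event), len(S))

# Axiom 1: P(A) >= 0 for every event (checked on a few events)
assert P({2, 4, 6}) >= 0 and P(set()) >= 0

# Axiom 2: P(S) = 1 (certain event); P(null set) = 0 (impossible event)
assert P(S) == 1 and P(set()) == 0

# Axiom 3: additivity for mutually exclusive (disjoint) events
A, B = {1, 2}, {5, 6}
assert A & B == set()                 # Am ∩ An = ∅
assert P(A | B) == P(A) + P(B)        # 1/3 + 1/3 = 2/3
```

Using exact fractions avoids floating-point round-off, so the additivity check holds with equality.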
We denote the events B1 = “the symbol before the channel is 1” and B2 = “the
symbol before the channel is 0”. The a priori probabilities P(B1) and P(B2)
are given.
The events are A1 = “the symbol after the channel is 1” and A2 = “the symbol
after the channel is 0”.
The conditional probabilities P(Aj|Bi) describing the effect of the channel are given.
Find P(A1) and P(A2), the total probabilities of receiving a 1 and a 0. Also find the
a posteriori probabilities P(B1|A1), P(B1|A2), P(B2|A1) and P(B2|A2).
Figure 3.1-3 Binary symmetric communication system: diagrammatic model.
P(B1|A2) and P(B2|A1) are the probabilities of system error, while P(B1|A1) and P(B2|A2)
are the probabilities of correct system transmission of the symbols.
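The numerical values from the slide's figure are not reproduced here, so the following sketch assumes illustrative values (P(B1) = 0.6 and crossover probabilities P(A2|B1) = 0.1, P(A1|B2) = 0.2 are assumptions, not the slide's numbers) to show how the total and a posteriori probabilities are computed:

```python
# Assumed a priori probabilities (illustrative, not from the slide)
P_B1, P_B2 = 0.6, 0.4

# Assumed channel transition (conditional) probabilities
P_A1_B1, P_A2_B1 = 0.9, 0.1   # 1 sent: received as 1, or flipped to 0
P_A2_B2, P_A1_B2 = 0.8, 0.2   # 0 sent: received as 0, or flipped to 1

# Total probability: P(A1) = P(A1|B1)P(B1) + P(A1|B2)P(B2)
P_A1 = P_A1_B1 * P_B1 + P_A1_B2 * P_B2   # ≈ 0.62
P_A2 = P_A2_B1 * P_B1 + P_A2_B2 * P_B2   # ≈ 0.38

# Bayes' theorem: P(Bi|Aj) = P(Aj|Bi)P(Bi) / P(Aj)
P_B1_A1 = P_A1_B1 * P_B1 / P_A1   # correct transmission of a 1
P_B2_A1 = P_A1_B2 * P_B2 / P_A1   # system error: 0 sent, 1 received
P_B1_A2 = P_A2_B1 * P_B1 / P_A2   # system error: 1 sent, 0 received
P_B2_A2 = P_A2_B2 * P_B2 / P_A2   # correct transmission of a 0
```

For any received symbol the posteriors sum to one, e.g. P(B1|A1) + P(B2|A1) = 1, which is a useful check on the arithmetic.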
Independent Events
• Two events A and B are statistically independent if
the probability of occurrence of one event is not
affected by the occurrence of the other event. This
statement can be written as:
P(A|B) = P(A) or P(B|A) = P(B), which also means
that the probability of the joint occurrence of the two
events must equal the product of the two event
probabilities:
P(AB) = P(A) P(B)
• This condition is used as a test of independence. It is
the necessary and sufficient condition for A and B to
be independent.
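As a concrete application of the product test, consider one toss of a fair die with A = "even number" and B = "number at most 4" (an illustrative choice, not from the text):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}                    # fair-die sample space
P = lambda E: Fraction(len(E), len(S))    # uniform probability

A = {2, 4, 6}        # even number shows up: P(A) = 1/2
B = {1, 2, 3, 4}     # number is at most 4: P(B) = 2/3

# Product test for independence: P(AB) = P(A) P(B)
assert P(A & B) == P(A) * P(B)   # 1/3 = (1/2)(2/3) -> independent

C = {1, 3, 5}        # odd number: mutually exclusive with A
assert P(A & C) == 0             # joint probability is zero
assert P(A & C) != P(A) * P(C)   # so A and C are NOT independent
```

The pair A, C illustrates the point above: mutually exclusive events with nonzero probabilities always fail the independence test.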
• For mutually exclusive events, the joint
probability is P(AB) = 0; therefore two events
cannot be both mutually exclusive and
statistically independent. For the two events
to be independent, they must have a
nonempty intersection, AB ≠ ∅.
• When more than two events are involved, say
three events A, B and C, the following four
equations must all be satisfied for the events
to be independent:
1. P(A∩B) = P(A) P(B)
2. P(B∩C) = P(B) P(C)
3. P(A∩C) = P(A) P(C)
4. P(A∩B∩C) = P(A) P(B) P(C)
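The fourth equation is not implied by the first three. A standard counterexample (two fair coin tosses; an illustrative choice, not from the text) takes A = "first toss heads", B = "second toss heads", C = "both tosses agree"; the events are pairwise independent, yet equation 4 fails:

```python
from fractions import Fraction
from itertools import product

S = set(product("HT", repeat=2))          # {('H','H'), ('H','T'), ...}
P = lambda E: Fraction(len(E), len(S))    # uniform probability on 4 outcomes

A = {s for s in S if s[0] == "H"}         # first toss heads, P(A) = 1/2
B = {s for s in S if s[1] == "H"}         # second toss heads, P(B) = 1/2
C = {s for s in S if s[0] == s[1]}        # tosses agree, P(C) = 1/2

# Equations 1-3 (pairwise independence) all hold:
assert P(A & B) == P(A) * P(B)
assert P(B & C) == P(B) * P(C)
assert P(A & C) == P(A) * P(C)

# Equation 4 fails, so A, B, C are not mutually independent:
assert P(A & B & C) != P(A) * P(B) * P(C)   # 1/4 vs 1/8
```

Here A ∩ B ∩ C is the single outcome ('H','H'), so its probability is 1/4, not the 1/8 that mutual independence would require.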
Combined Experiments
• A combined experiment forms a single
experiment by combining individual
experiments, which are called sub-
experiments. In this regard, we then have a
combined sample space, events on the
combined space, and probabilities
Figure 3.1-4 A combined sample space for two subexperiments.
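A minimal sketch of a combined sample space, assuming (as an illustration) a coin-toss sub-experiment and a die-toss sub-experiment:

```python
from itertools import product

S1 = {"H", "T"}                # sub-experiment 1: coin toss
S2 = {1, 2, 3, 4, 5, 6}        # sub-experiment 2: die toss

# Combined sample space: the Cartesian product S = S1 × S2
S = set(product(S1, S2))
assert len(S) == len(S1) * len(S2)   # 2 * 6 = 12 combined outcomes

# An event on the combined space: "heads and an even number"
E = {(c, d) for (c, d) in S if c == "H" and d % 2 == 0}
assert len(E) == 3                   # (H,2), (H,4), (H,6)
```

Each combined outcome is an ordered pair, one element from each sub-experiment, matching the product structure sketched in Figure 3.1-4.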
Permutations and Combinations
• Permutations arise when an experiment involves
multiple trials whose outcomes are elements of a
finite sample space and are not replaced after
each trial. There are nPr = n!/(n−r)! permutations
of r objects selected, in order, from a total of n objects.
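The formula can be checked by direct enumeration (a sketch using only Python's standard library; the values n = 5, r = 3 are an arbitrary illustration):

```python
from math import factorial
from itertools import permutations

n, r = 5, 3

# nPr = n!/(n-r)!: ordered selections of r objects from n, no replacement
nPr = factorial(n) // factorial(n - r)

# Direct enumeration of all ordered r-selections agrees with the formula
assert nPr == len(list(permutations(range(n), r))) == 60
```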
If Pe is the probability of error in detecting a pulse over one link, show that
PE, the probability of error in detecting a pulse over the entire channel (n
links in tandem), is PE ≈ nPe when nPe ≪ 1.
Solution:
Probability of detecting a pulse correctly over one link = (1- Pe)
Probability of detecting a pulse correctly over entire channel (n links)= (1- PE)
A pulse can be detected correctly over the entire channel if either the pulse is
detected correctly over every link or errors are made over an even number of
links only.
Bernoulli Trials
PCM Repeater Error Probability …
Probability of correctly detecting a pulse over entire channel:
1- PE = P(correct detection over all links)
+ P(erroneous detection over two links only)
+ P(erroneous detection over four links only)+…
+ P(erroneous detection over 2⌊n/2⌋ links only)
where ⌊a⌋ denotes the largest integer less than or equal to a.

1 − PE = (1 − Pe)^n + Σ_{k=2,4,6,…} [n!/(k!(n−k)!)] Pe^k (1 − Pe)^(n−k),  Pe ≪ 1

≈ (1 − Pe)^n + [n!/(2!(n−2)!)] Pe^2 (1 − Pe)^(n−2)
= (1 − Pe)^n + [n(n−1)/2] Pe^2 (1 − Pe)^(n−2)

≈ (1 − Pe)^n  if nPe ≪ 1

≈ 1 − nPe  since Pe ≪ 1

and PE ≈ nPe  (Intuitive explanation?)
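The approximation PE ≈ nPe can be checked against the exact even-error sum (a sketch; the values n = 10, Pe = 1e-4 are illustrative, chosen so that nPe ≪ 1):

```python
from math import comb

def prob_error(n, pe):
    """Exact PE: 1 minus the probability of an even number of link errors."""
    correct = sum(comb(n, k) * pe**k * (1 - pe)**(n - k)
                  for k in range(0, n + 1, 2))   # k = 0, 2, 4, ... errors
    return 1 - correct

n, pe = 10, 1e-4          # illustrative values with n*pe << 1
PE = prob_error(n, pe)

# PE agrees with n*pe to well under 1% when n*pe << 1
assert abs(PE - n * pe) / (n * pe) < 1e-2
```

Intuitively, with Pe tiny the dominant error mechanism is a single link error (probability ≈ nPe), while multi-link error patterns contribute only at order (nPe)^2.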