
Introduction | Chapter 3: Probability, Random Variables, Random Processes

EE456 Digital Communications


A First Course in Digital Communications
by Ha H. Nguyen and E. Shwedyk, Cambridge University Press,
March 2009
Department of Electrical & Computer Engineering
University of Saskatchewan
Saskatoon, SK, Canada, S7N 5A9
ha.nguyen@usask.ca
Fall 2008
Analog and Digital Amplitude Modulations
[Figure: four plots versus t over 0 to 5: an analog message, the corresponding AM signal, a digital message, and the corresponding BASK signal.]
What is Digital Communication?
[Figure: four panels (a)-(d) showing x(t) versus t, with the sampling period Ts indicated.]
Why Digital Communications?
[Figure: a transmitted AM signal and the noisy received AM signal, and a transmitted BASK signal and the noisy received BASK signal, each plotted versus t over 0 to 5.]
Why Digital Communications?
[Figure: the same transmitted AM and BASK signals, now with much stronger noise in the received waveforms (received amplitudes reaching roughly ±5 rather than ±2).]
Regenerative Repeater in Digital Communications
[Figure: a pulse at five stages along the propagation distance: (1) original pulse, (2) some distortion, (3) degraded, (4) severely degraded, (5) regenerated.]
Digital communications: Transmitted signals belong to a finite set of waveforms ⇒ the distorted signal can be recovered to its ideal shape, hence removing all the noise.
Analog communications: Transmitted signals are analog waveforms, which can take an infinite variety of shapes ⇒ once the analog signal is distorted, the distortion cannot be removed.
Block Diagram of a Communication System
[Figure: (a) Source (User) → Transmitter → Channel → Receiver → Sink (User), with synchronization between transmitter and receiver; (b) the transmitter expanded into Source Encoder → Channel Encoder → Modulator, and the receiver into Demodulator → Channel Decoder → Source Decoder.]
Digital vs. Analog
Advantages:
Digital signals are much easier to regenerate.
Digital circuits are less subject to distortion and interference.
Digital circuits are more reliable and can be produced at a lower cost than analog circuits.
It is more flexible to implement digital hardware than analog hardware.
Digital signals benefit from digital signal processing (DSP) techniques.
Disadvantages:
Heavy signal processing.
Synchronization is crucial.
Larger transmission bandwidth.
Non-graceful degradation.
Sample Space and Probability
Random experiment: its outcome, for some reason, cannot be predicted with certainty.
Examples: throwing a die, flipping a coin and drawing a card from a deck.
Sample space: the set of all possible outcomes, denoted by $\Omega$. Outcomes are denoted by $\omega$'s and each $\omega$ lies in $\Omega$, i.e., $\omega \in \Omega$.
A sample space can be discrete or continuous.
Events are subsets of the sample space for which measures of their occurrences, called probabilities, can be defined or determined.
Three Axioms of Probability
For a discrete sample space $\Omega$, define a probability measure $P$ on $\Omega$ as a set function that assigns nonnegative values to all events, denoted by $E$, in $\Omega$ such that the following conditions are satisfied:
Axiom 1: $0 \le P(E) \le 1$ for all $E \subset \Omega$ (on a % scale probability ranges from 0 to 100%. Despite popular sports lore, it is impossible to give more than 100%).
Axiom 2: $P(\Omega) = 1$ (when an experiment is conducted there has to be an outcome).
Axiom 3: For mutually exclusive events¹ $E_1, E_2, E_3, \ldots$ we have $P\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} P(E_i)$.
¹The events $E_1, E_2, E_3, \ldots$ are mutually exclusive if $E_i \cap E_j = \emptyset$ for all $i \ne j$, where $\emptyset$ is the null set.
Important Properties of the Probability Measure
1. $P(E^c) = 1 - P(E)$, where $E^c$ denotes the complement of $E$. This property implies that $P(E^c) + P(E) = 1$, i.e., something has to happen.
2. $P(\emptyset) = 0$ (again, something has to happen).
3. $P(E_1 \cup E_2) = P(E_1) + P(E_2) - P(E_1 \cap E_2)$. Note that if two events $E_1$ and $E_2$ are mutually exclusive then $P(E_1 \cup E_2) = P(E_1) + P(E_2)$; otherwise the nonzero common probability $P(E_1 \cap E_2)$ needs to be subtracted off.
4. If $E_1 \subset E_2$ then $P(E_1) \le P(E_2)$. This says that if event $E_1$ is contained in $E_2$, then the occurrence of $E_1$ means $E_2$ has occurred, but the converse is not true.
Conditional Probability
We observe or are told that event $E_1$ has occurred but are actually interested in event $E_2$: knowledge that $E_1$ has occurred changes the probability of $E_2$ occurring.
If it was $P(E_2)$ before, it now becomes $P(E_2|E_1)$, the probability of $E_2$ occurring given that event $E_1$ has occurred.
This conditional probability is given by
$$P(E_2|E_1) = \begin{cases} \dfrac{P(E_2 \cap E_1)}{P(E_1)}, & \text{if } P(E_1) \ne 0 \\ 0, & \text{otherwise.} \end{cases} \quad (1)$$
If $P(E_2|E_1) = P(E_2)$, or equivalently $P(E_2 \cap E_1) = P(E_1)P(E_2)$, then $E_1$ and $E_2$ are said to be statistically independent.
Bayes' rule:
$$P(E_2|E_1) = \frac{P(E_1|E_2)P(E_2)}{P(E_1)}. \quad (2)$$
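As a numerical sanity check of (1) and the independence condition, here is a minimal Python sketch; the two-dice experiment and the event choices are illustrative assumptions, not from the slides:

```python
from itertools import product
from fractions import Fraction

# Sample space: all ordered outcomes of rolling two fair dice (assumed example).
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    """P(E) on a uniform discrete sample space: |E| / |Omega|."""
    return Fraction(len(event), len(omega))

E1 = [w for w in omega if w[0] % 2 == 0]        # first die is even
E2 = [w for w in omega if w[0] + w[1] == 7]     # sum equals 7
both = [w for w in E1 if w in E2]               # intersection of E1 and E2

# Eq. (1): P(E2|E1) = P(E2 and E1) / P(E1).
print(prob(both) / prob(E1), prob(E2))          # 1/6 and 1/6: independent events
```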
Total Probability Theorem
The events $\{E_i\}_{i=1}^{n}$ partition the sample space $\Omega$ if:
(i) $\bigcup_{i=1}^{n} E_i = \Omega$ (3a)
(ii) $E_i \cap E_j = \emptyset$ for all $1 \le i, j \le n$ and $i \ne j$ (3b)
If for an event $A$ we have the conditional probabilities $\{P(A|E_i)\}_{i=1}^{n}$, $P(A)$ can be obtained as
$$P(A) = \sum_{i=1}^{n} P(E_i)P(A|E_i). \quad (4)$$
Bayes' rule:
$$P(E_i|A) = \frac{P(A|E_i)P(E_i)}{P(A)} = \frac{P(A|E_i)P(E_i)}{\sum_{j=1}^{n} P(A|E_j)P(E_j)}. \quad (5)$$
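Equations (4) and (5) in action for a two-event partition; a sketch using a binary source and a noisy channel (the prior and crossover probabilities below are assumed values):

```python
# Partition of the sample space: transmitted bit is 0 (E_0) or 1 (E_1).
prior = {0: 0.6, 1: 0.4}          # P(E_i), assumed priors
# Event A: "receiver observes a 1"; assumed channel crossover of 0.1.
likelihood = {0: 0.1, 1: 0.9}     # P(A|E_i)

# Total probability theorem, Eq. (4).
p_A = sum(prior[i] * likelihood[i] for i in prior)

# Bayes' rule, Eq. (5): posterior probability of each transmitted bit.
posterior = {i: likelihood[i] * prior[i] / p_A for i in prior}
print(p_A)        # 0.42
print(posterior)  # {0: 0.142..., 1: 0.857...}
```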
Random Variables
[Figure: a mapping from sample points ω_1, ω_2, ω_3, ω_4 in the sample space Ω to values x(ω_1), x(ω_2), x(ω_3), x(ω_4) on the real line R.]
A random variable is a mapping from the sample space $\Omega$ to the set of real numbers.
We shall denote random variables by boldface, i.e., $\mathbf{x}$, $\mathbf{y}$, etc., while individual or specific values of the mapping $\mathbf{x}$ are denoted by $\mathbf{x}(\omega)$.
Cumulative Distribution Function (cdf)
The cdf gives a complete description of the random variable. It is defined as:
$$F_{\mathbf{x}}(x) = P(\omega \in \Omega : \mathbf{x}(\omega) \le x) = P(\mathbf{x} \le x). \quad (6)$$
The cdf has the following properties:
1. $0 \le F_{\mathbf{x}}(x) \le 1$ (this follows from Axiom 1 of the probability measure).
2. $F_{\mathbf{x}}(x)$ is nondecreasing: $F_{\mathbf{x}}(x_1) \le F_{\mathbf{x}}(x_2)$ if $x_1 \le x_2$ (this is because the event $\{\mathbf{x}(\omega) \le x_1\}$ is contained in the event $\{\mathbf{x}(\omega) \le x_2\}$).
3. $F_{\mathbf{x}}(-\infty) = 0$ and $F_{\mathbf{x}}(+\infty) = 1$ ($\{\mathbf{x}(\omega) \le -\infty\}$ is the empty set, hence an impossible event, while $\{\mathbf{x}(\omega) \le \infty\}$ is the whole sample space, i.e., a certain event).
4. $P(a < \mathbf{x} \le b) = F_{\mathbf{x}}(b) - F_{\mathbf{x}}(a)$.
Typical Plots of cdf I
A random variable can be discrete, continuous or mixed.
[Figure: plot (a) of a cdf F_x(x) versus x, rising from 0 to 1.]
Typical Plots of cdf II
[Figure: plots (b) and (c) of cdfs F_x(x) versus x, each rising from 0 to 1.]
Probability Density Function (pdf)
The pdf is defined as the derivative of the cdf:
$$f_{\mathbf{x}}(x) = \frac{dF_{\mathbf{x}}(x)}{dx}. \quad (7)$$
It follows that:
$$P(x_1 \le \mathbf{x} \le x_2) = P(\mathbf{x} \le x_2) - P(\mathbf{x} \le x_1) = F_{\mathbf{x}}(x_2) - F_{\mathbf{x}}(x_1) = \int_{x_1}^{x_2} f_{\mathbf{x}}(x)\,dx. \quad (8)$$
Basic properties of the pdf:
1. $f_{\mathbf{x}}(x) \ge 0$.
2. $\int_{-\infty}^{\infty} f_{\mathbf{x}}(x)\,dx = 1$.
3. In general, $P(\mathbf{x} \in A) = \int_{A} f_{\mathbf{x}}(x)\,dx$.
For discrete random variables, it is more common to define the probability mass function (pmf): $p_i = P(\mathbf{x} = x_i)$. Note that, for all $i$, one has $p_i \ge 0$ and $\sum_i p_i = 1$.
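A small numerical check of the pdf properties and Eq. (8), using a Gaussian pdf as the example (a sketch; scipy is assumed to be available):

```python
import numpy as np
from scipy.stats import norm

rv = norm(loc=0.0, scale=1.0)   # standard Gaussian N(0, 1)

# Property 2: the pdf integrates to 1 (trapezoidal rule on a wide grid).
x = np.linspace(-10, 10, 200001)
print(np.trapz(rv.pdf(x), x))               # ~1.0

# Eq. (8): P(x1 <= x <= x2) equals the cdf difference.
x1, x2 = -1.0, 2.0
mask = (x >= x1) & (x <= x2)
print(np.trapz(rv.pdf(x[mask]), x[mask]))   # ~0.8186
print(rv.cdf(x2) - rv.cdf(x1))              # same value
```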
Bernoulli Random Variable
[Figure: the pdf f_x(x) of a Bernoulli random variable, impulses of weight (1 - p) at x = 0 and p at x = 1, and the corresponding staircase cdf F_x(x).]
A discrete random variable that takes two values 1 and 0 with probabilities $p$ and $1 - p$.
A good model for a binary data source whose output is 1 or 0.
Can also be used to model channel errors.
Binomial Random Variable
[Figure: the pdf f_x(x) of a binomial random variable: impulses at integer x (axis marked 0 to 6), with weights up to about 0.3.]
Also a discrete random variable, which gives the number of 1s in a sequence of $n$ independent Bernoulli trials.
$$f_{\mathbf{x}}(x) = \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k}\, \delta(x-k). \quad (9)$$
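The impulse weights in (9) are binomial probabilities and can be computed directly; a minimal sketch with assumed values n = 6 and p = 0.5:

```python
from math import comb

def binomial_pmf(n, p):
    """Weights of the impulses in Eq. (9): P(x = k) for k = 0..n."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

pmf = binomial_pmf(n=6, p=0.5)
print([round(v, 4) for v in pmf])  # [0.0156, 0.0938, 0.2344, 0.3125, 0.2344, 0.0938, 0.0156]
print(sum(pmf))                    # 1.0: a valid pmf
```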
Uniform Random Variable
[Figure: the pdf f_x(x) of a uniform random variable, constant at 1/(b - a) for a ≤ x ≤ b, and its cdf F_x(x), which rises linearly from 0 at x = a to 1 at x = b.]
A continuous random variable that takes values between $a$ and $b$ with equal probabilities over intervals of equal length.
The phase of a received sinusoidal carrier is usually modeled as a uniform random variable between 0 and $2\pi$. Quantization error is also typically modeled as uniform.
Gaussian (or Normal) Random Variable
[Figure: the bell-shaped Gaussian pdf f_x(x), with peak value 1/√(2πσ²) at x = μ, and the corresponding cdf F_x(x), which passes through 1/2 at x = μ.]
A continuous random variable whose pdf is:
$$f_{\mathbf{x}}(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \quad (10)$$
where $\mu$ and $\sigma^2$ are parameters. Usually denoted as $\mathcal{N}(\mu, \sigma^2)$.
The most important and frequently encountered random variable in communications.
Functions of A Random Variable
The function $\mathbf{y} = g(\mathbf{x})$ is itself a random variable.
From the definition, the cdf of $\mathbf{y}$ can be written as
$$F_{\mathbf{y}}(y) = P(\omega \in \Omega : g(\mathbf{x}(\omega)) \le y). \quad (11)$$
Assume that for all $y$, the equation $g(x) = y$ has a countable number of solutions and at each solution point, $dg(x)/dx$ exists and is nonzero. Then the pdf of $\mathbf{y} = g(\mathbf{x})$ is:
$$f_{\mathbf{y}}(y) = \sum_i \frac{f_{\mathbf{x}}(x_i)}{\left|\dfrac{dg(x)}{dx}\right|_{x=x_i}}, \quad (12)$$
where $\{x_i\}$ are the solutions of $g(x) = y$.
A linear function of a Gaussian random variable is itself a Gaussian random variable.
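As a concrete check of (12), take y = g(x) = x² with x ~ N(0, 1): the equation g(x) = y has the two solutions x = ±√y with |dg/dx| = 2√y, so f_y(y) = 2 f_x(√y)/(2√y). A simulation sketch (sizes and seed are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x**2                                     # y = g(x) = x^2

# Pdf predicted by Eq. (12): solutions x = +/-sqrt(y), |dg/dx| = 2*sqrt(y).
def f_y(y):
    f_x = np.exp(-y / 2) / np.sqrt(2 * np.pi)   # f_x(+/-sqrt(y)), since x^2 = y
    return 2 * f_x / (2 * np.sqrt(y))

# Empirical density of the simulated y over [0.05, 4].
counts, edges = np.histogram(y, bins=50, range=(0.05, 4.0))
centers = 0.5 * (edges[:-1] + edges[1:])
emp = counts / (len(y) * (edges[1] - edges[0]))
print(np.max(np.abs(emp - f_y(centers))))    # ~0.03: binning/sampling error only
```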
Expectation of Random Variables I
Statistical averages, or moments, play an important role in the characterization of the random variable.
The expected value (also called the mean value, or first moment) of the random variable $\mathbf{x}$ is defined as
$$m_{\mathbf{x}} = E\{\mathbf{x}\} \equiv \int_{-\infty}^{\infty} x f_{\mathbf{x}}(x)\,dx, \quad (13)$$
where $E$ denotes the statistical expectation operator.
In general, the $n$th moment of $\mathbf{x}$ is defined as
$$E\{\mathbf{x}^n\} \equiv \int_{-\infty}^{\infty} x^n f_{\mathbf{x}}(x)\,dx. \quad (14)$$
For $n = 2$, $E\{\mathbf{x}^2\}$ is known as the mean-squared value of the random variable.
Expectation of Random Variables II
The $n$th central moment of the random variable $\mathbf{x}$ is:
$$E\{\mathbf{y}\} = E\{(\mathbf{x} - m_{\mathbf{x}})^n\} = \int_{-\infty}^{\infty} (x - m_{\mathbf{x}})^n f_{\mathbf{x}}(x)\,dx. \quad (15)$$
When $n = 2$ the central moment is called the variance, commonly denoted as $\sigma_{\mathbf{x}}^2$:
$$\sigma_{\mathbf{x}}^2 = \mathrm{var}(\mathbf{x}) = E\{(\mathbf{x} - m_{\mathbf{x}})^2\} = \int_{-\infty}^{\infty} (x - m_{\mathbf{x}})^2 f_{\mathbf{x}}(x)\,dx. \quad (16)$$
The variance provides a measure of the variable's randomness.
The mean and variance of a random variable give a partial description of its pdf.
Expectation of Random Variables III
Relationship between the variance and the first and second moments:
$$\sigma_{\mathbf{x}}^2 = E\{\mathbf{x}^2\} - [E\{\mathbf{x}\}]^2 = E\{\mathbf{x}^2\} - m_{\mathbf{x}}^2. \quad (17)$$
An electrical engineering interpretation: the AC power equals the total power minus the DC power.
The square root of the variance is known as the standard deviation, and can be interpreted as the root-mean-squared (RMS) value of the AC component.
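Equation (17) can be confirmed on simulated data in two lines (a sketch; the uniform distribution below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 3.0, size=1_000_000)   # any distribution works here

# Eq. (17): var(x) = E{x^2} - (E{x})^2  ("AC power = total power - DC power").
lhs = np.var(x)
rhs = np.mean(x**2) - np.mean(x)**2
print(lhs, rhs)   # identical up to floating-point rounding
```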
The Gaussian Random Variable
[Figure (a): a muscle (EMG) signal: signal amplitude (volts) versus t (sec) from 0 to 1.]
[Figure (b): histogram of the signal amplitudes, f_x(x) (1/volts) versus x (volts), overlaid with a Gaussian fit and a Laplacian fit.]
$$f_{\mathbf{x}}(x) = \frac{1}{\sqrt{2\pi\sigma_{\mathbf{x}}^2}}\, e^{-\frac{(x - m_{\mathbf{x}})^2}{2\sigma_{\mathbf{x}}^2}} \quad \text{(Gaussian)} \quad (18)$$
$$f_{\mathbf{x}}(x) = \frac{a}{2}\, e^{-a|x|} \quad \text{(Laplacian)} \quad (19)$$
Gaussian Distribution (Univariate)
[Figure: zero-mean Gaussian pdfs f_x(x) for σ_x = 1, 2, 5, plotted for x from -15 to 15.]

Range (m_x ± kσ_x):                  k = 1    k = 2    k = 3    k = 4
P(m_x − kσ_x < x ≤ m_x + kσ_x):      0.683    0.955    0.997    0.999

Error probability:                   10^-3    10^-4    10^-6    10^-8
Distance from the mean (in σ_x):     3.09     3.72     4.75     5.61
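Both rows of this table can be reproduced from the standard Gaussian cdf; a short sketch (scipy assumed available):

```python
from scipy.stats import norm

# P(m - k*sigma < x <= m + k*sigma) for a Gaussian: Phi(k) - Phi(-k).
for k in (1, 2, 3, 4):
    print(k, round(norm.cdf(k) - norm.cdf(-k), 4))   # 0.6827, 0.9545, 0.9973, 0.9999

# Distance from the mean (in sigmas) giving a one-sided tail probability:
for p in (1e-3, 1e-4, 1e-6, 1e-8):
    print(p, round(norm.isf(p), 2))   # 3.09, 3.72, 4.75, 5.61
```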
Multiple Random Variables I
Often encountered when dealing with combined experiments or repeated trials of a single experiment.
Multiple random variables are basically multidimensional functions defined on a sample space of a combined experiment.
Let $\mathbf{x}$ and $\mathbf{y}$ be two random variables defined on the same sample space $\Omega$. The joint cumulative distribution function is defined as
$$F_{\mathbf{x},\mathbf{y}}(x, y) = P(\mathbf{x} \le x, \mathbf{y} \le y). \quad (20)$$
Similarly, the joint probability density function is:
$$f_{\mathbf{x},\mathbf{y}}(x, y) = \frac{\partial^2 F_{\mathbf{x},\mathbf{y}}(x, y)}{\partial x\, \partial y}. \quad (21)$$
Multiple Random Variables II
When the joint pdf is integrated over one of the variables, one obtains the pdf of the other variable, called the marginal pdf:
$$\int_{-\infty}^{\infty} f_{\mathbf{x},\mathbf{y}}(x, y)\,dx = f_{\mathbf{y}}(y), \quad (22)$$
$$\int_{-\infty}^{\infty} f_{\mathbf{x},\mathbf{y}}(x, y)\,dy = f_{\mathbf{x}}(x). \quad (23)$$
Note that:
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{\mathbf{x},\mathbf{y}}(x, y)\,dx\,dy = F_{\mathbf{x},\mathbf{y}}(\infty, \infty) = 1,$$
$$F_{\mathbf{x},\mathbf{y}}(-\infty, -\infty) = F_{\mathbf{x},\mathbf{y}}(-\infty, y) = F_{\mathbf{x},\mathbf{y}}(x, -\infty) = 0. \quad (24)$$
Multiple Random Variables III
The conditional pdf of the random variable $\mathbf{y}$, given that the value of the random variable $\mathbf{x}$ is equal to $x$, is defined as
$$f_{\mathbf{y}}(y|x) = \begin{cases} \dfrac{f_{\mathbf{x},\mathbf{y}}(x, y)}{f_{\mathbf{x}}(x)}, & f_{\mathbf{x}}(x) \ne 0 \\ 0, & \text{otherwise.} \end{cases} \quad (25)$$
Two random variables $\mathbf{x}$ and $\mathbf{y}$ are statistically independent if and only if
$$f_{\mathbf{y}}(y|x) = f_{\mathbf{y}}(y) \quad \text{or equivalently} \quad f_{\mathbf{x},\mathbf{y}}(x, y) = f_{\mathbf{x}}(x) f_{\mathbf{y}}(y). \quad (26)$$
The joint moment is defined as
$$E\{\mathbf{x}^j \mathbf{y}^k\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^j y^k f_{\mathbf{x},\mathbf{y}}(x, y)\,dx\,dy. \quad (27)$$
Multiple Random Variables IV
The joint central moment is
$$E\{(\mathbf{x} - m_{\mathbf{x}})^j (\mathbf{y} - m_{\mathbf{y}})^k\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - m_{\mathbf{x}})^j (y - m_{\mathbf{y}})^k f_{\mathbf{x},\mathbf{y}}(x, y)\,dx\,dy, \quad (28)$$
where $m_{\mathbf{x}} = E\{\mathbf{x}\}$ and $m_{\mathbf{y}} = E\{\mathbf{y}\}$.
The most important moments are
$$E\{\mathbf{x}\mathbf{y}\} \equiv \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f_{\mathbf{x},\mathbf{y}}(x, y)\,dx\,dy \quad \text{(correlation)} \quad (29)$$
$$\mathrm{cov}\{\mathbf{x}, \mathbf{y}\} \equiv E\{(\mathbf{x} - m_{\mathbf{x}})(\mathbf{y} - m_{\mathbf{y}})\} = E\{\mathbf{x}\mathbf{y}\} - m_{\mathbf{x}} m_{\mathbf{y}} \quad \text{(covariance)}. \quad (30)$$
Multiple Random Variables V
Let $\sigma_{\mathbf{x}}^2$ and $\sigma_{\mathbf{y}}^2$ be the variances of $\mathbf{x}$ and $\mathbf{y}$. The covariance normalized w.r.t. $\sigma_{\mathbf{x}}\sigma_{\mathbf{y}}$ is called the correlation coefficient:
$$\rho_{\mathbf{x},\mathbf{y}} = \frac{\mathrm{cov}\{\mathbf{x}, \mathbf{y}\}}{\sigma_{\mathbf{x}}\sigma_{\mathbf{y}}}. \quad (31)$$
$\rho_{\mathbf{x},\mathbf{y}}$ indicates the degree of linear dependence between two random variables.
It can be shown that $|\rho_{\mathbf{x},\mathbf{y}}| \le 1$.
$\rho_{\mathbf{x},\mathbf{y}} = \pm 1$ implies an increasing/decreasing linear relationship.
If $\rho_{\mathbf{x},\mathbf{y}} = 0$, $\mathbf{x}$ and $\mathbf{y}$ are said to be uncorrelated.
It is easy to verify that if $\mathbf{x}$ and $\mathbf{y}$ are independent, then $\rho_{\mathbf{x},\mathbf{y}} = 0$: independence implies lack of correlation.
However, lack of correlation (no linear relationship) does not in general imply statistical independence.
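A sketch estimating (31) from samples; the linear model y = 2x + noise is an assumed example, for which the exact value is ρ = 2/√5 ≈ 0.894:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(100_000)
y = 2.0 * x + rng.standard_normal(100_000)   # linearly related plus noise

# Sample version of Eq. (31): cov{x, y} / (sigma_x * sigma_y).
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
rho = cov_xy / (x.std() * y.std())
print(rho)                       # ~0.894
print(np.corrcoef(x, y)[0, 1])   # same estimate via the library routine
```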
Examples of Uncorrelated Dependent Random Variables
Example 1: Let $\mathbf{x}$ be a discrete random variable that takes on $\{-1, 0, 1\}$ with probabilities $\{\tfrac{1}{4}, \tfrac{1}{2}, \tfrac{1}{4}\}$, respectively. The random variables $\mathbf{y} = \mathbf{x}^3$ and $\mathbf{z} = \mathbf{x}^2$ are uncorrelated but dependent.
Example 2: Let $\mathbf{x}$ be a uniform random variable over $[-1, 1]$. Then the random variables $\mathbf{y} = \mathbf{x}$ and $\mathbf{z} = \mathbf{x}^2$ are uncorrelated but dependent.
Example 3: Let $\mathbf{x}$ be a Gaussian random variable with zero mean and unit variance (standard normal distribution). The random variables $\mathbf{y} = \mathbf{x}$ and $\mathbf{z} = |\mathbf{x}|$ are uncorrelated but dependent.
Example 4: Let $\mathbf{u}$ and $\mathbf{v}$ be two random variables (discrete or continuous) with the same probability density function. Then $\mathbf{x} = \mathbf{u} - \mathbf{v}$ and $\mathbf{y} = \mathbf{u} + \mathbf{v}$ are uncorrelated dependent random variables.
Example 1
$\mathbf{x} \in \{-1, 0, 1\}$ with probabilities $\{1/4, 1/2, 1/4\}$
$\mathbf{y} = \mathbf{x}^3 \in \{-1, 0, 1\}$ with probabilities $\{1/4, 1/2, 1/4\}$
$\mathbf{z} = \mathbf{x}^2 \in \{0, 1\}$ with probabilities $\{1/2, 1/2\}$
$m_{\mathbf{y}} = (-1)\tfrac{1}{4} + (0)\tfrac{1}{2} + (1)\tfrac{1}{4} = 0$; $m_{\mathbf{z}} = (0)\tfrac{1}{2} + (1)\tfrac{1}{2} = \tfrac{1}{2}$.
The joint pmf (similar to pdf) of $\mathbf{y}$ and $\mathbf{z}$:
$P(\mathbf{y} = -1, \mathbf{z} = 0) = 0$, $P(\mathbf{y} = -1, \mathbf{z} = 1) = P(\mathbf{x} = -1) = 1/4$
$P(\mathbf{y} = 0, \mathbf{z} = 0) = P(\mathbf{x} = 0) = 1/2$, $P(\mathbf{y} = 0, \mathbf{z} = 1) = 0$
$P(\mathbf{y} = 1, \mathbf{z} = 0) = 0$, $P(\mathbf{y} = 1, \mathbf{z} = 1) = P(\mathbf{x} = 1) = 1/4$
Therefore, $E\{\mathbf{y}\mathbf{z}\} = (-1)(1)\tfrac{1}{4} + (0)(0)\tfrac{1}{2} + (1)(1)\tfrac{1}{4} = 0$, and
$\mathrm{cov}\{\mathbf{y}, \mathbf{z}\} = E\{\mathbf{y}\mathbf{z}\} - m_{\mathbf{y}} m_{\mathbf{z}} = 0 - (0)\tfrac{1}{2} = 0$: $\mathbf{y}$ and $\mathbf{z}$ are uncorrelated!
Yet $\mathbf{y}$ and $\mathbf{z}$ are dependent, since, e.g., $P(\mathbf{y} = 1, \mathbf{z} = 0) = 0 \ne P(\mathbf{y} = 1)P(\mathbf{z} = 0) = \tfrac{1}{4}\cdot\tfrac{1}{2}$.
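The same bookkeeping by direct enumeration in Python (a sketch):

```python
from fractions import Fraction as F

# x takes -1, 0, 1 with probabilities 1/4, 1/2, 1/4.
px = {-1: F(1, 4), 0: F(1, 2), 1: F(1, 4)}

# Moments of y = x^3 and z = x^2 computed by enumeration.
E = lambda g: sum(p * g(x) for x, p in px.items())
m_y, m_z = E(lambda x: x**3), E(lambda x: x**2)
E_yz = E(lambda x: x**3 * x**2)   # E{yz} = E{x^5}

print(E_yz - m_y * m_z)           # 0 -> uncorrelated
# Dependence: P(y=1, z=0) = 0, but P(y=1) * P(z=0) = 1/4 * 1/2 = 1/8.
```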
Jointly Gaussian Distribution (Bivariate)
$$f_{\mathbf{x},\mathbf{y}}(x, y) = \frac{1}{2\pi\sigma_{\mathbf{x}}\sigma_{\mathbf{y}}\sqrt{1 - \rho_{\mathbf{x},\mathbf{y}}^2}} \exp\left\{-\frac{1}{2(1 - \rho_{\mathbf{x},\mathbf{y}}^2)}\left[\frac{(x - m_{\mathbf{x}})^2}{\sigma_{\mathbf{x}}^2} - \frac{2\rho_{\mathbf{x},\mathbf{y}}(x - m_{\mathbf{x}})(y - m_{\mathbf{y}})}{\sigma_{\mathbf{x}}\sigma_{\mathbf{y}}} + \frac{(y - m_{\mathbf{y}})^2}{\sigma_{\mathbf{y}}^2}\right]\right\}, \quad (32)$$
where $m_{\mathbf{x}}, m_{\mathbf{y}}, \sigma_{\mathbf{x}}^2, \sigma_{\mathbf{y}}^2$ are the means and variances.
$\rho_{\mathbf{x},\mathbf{y}}$ is indeed the correlation coefficient.
The marginal densities are Gaussian: $f_{\mathbf{x}}(x) \sim \mathcal{N}(m_{\mathbf{x}}, \sigma_{\mathbf{x}}^2)$ and $f_{\mathbf{y}}(y) \sim \mathcal{N}(m_{\mathbf{y}}, \sigma_{\mathbf{y}}^2)$.
When $\rho_{\mathbf{x},\mathbf{y}} = 0$ ⇒ $f_{\mathbf{x},\mathbf{y}}(x, y) = f_{\mathbf{x}}(x) f_{\mathbf{y}}(y)$ ⇒ the random variables $\mathbf{x}$ and $\mathbf{y}$ are statistically independent.
Uncorrelatedness means that jointly Gaussian random variables are statistically independent. The converse is not true.
A weighted sum of two jointly Gaussian random variables is also Gaussian.
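A sketch that generates jointly Gaussian pairs with a prescribed ρ (a Cholesky-style transform of two independent N(0, 1) variables) and spot-checks the properties above; all parameter values are assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n = 0.7, 500_000

# Transform two independent N(0,1) variables into a jointly Gaussian
# pair with unit variances and correlation coefficient rho.
u = rng.standard_normal(n)
v = rng.standard_normal(n)
x = u
y = rho * u + np.sqrt(1 - rho**2) * v

print(np.corrcoef(x, y)[0, 1])   # ~0.7, the target rho
print(x.std(), y.std())          # both ~1: Gaussian marginals N(0, 1)
# A weighted sum of jointly Gaussian variables is again Gaussian,
# e.g. 2x - 3y has variance 4 + 9 - 12*rho here.
print((2 * x - 3 * y).var(), 4 + 9 - 12 * rho)
```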
Joint pdf and Contours for σ_x = σ_y = 1 and ρ_x,y = 0
[Figure: surface plot of f_x,y(x, y) for ρ_x,y = 0 and its contours in the (x, y) plane, which are circles centered at the origin.]
Joint pdf and Contours for σ_x = σ_y = 1 and ρ_x,y = 0.3
[Figure: surface plot of f_x,y(x, y) for ρ_x,y = 0.30 (a cross-section is indicated) and its elliptical contours in the (x, y) plane.]
Joint pdf and Contours for σ_x = σ_y = 1 and ρ_x,y = 0.7
[Figure: surface plot of f_x,y(x, y) for ρ_x,y = 0.70 (a cross-section is indicated) and its elliptical contours, more elongated than for ρ_x,y = 0.30.]
Joint pdf and Contours for σ_x = σ_y = 1 and ρ_x,y = 0.95
[Figure: surface plot of f_x,y(x, y) for ρ_x,y = 0.95, with a sharp ridge and highly elongated elliptical contours.]
Multivariate Gaussian pdf
Define $\vec{x} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n]^{\top}$, the vector of the means $\vec{m} = [m_1, m_2, \ldots, m_n]^{\top}$, and the $n \times n$ covariance matrix $C$ with $C_{i,j} = \mathrm{cov}(\mathbf{x}_i, \mathbf{x}_j) = E\{(\mathbf{x}_i - m_i)(\mathbf{x}_j - m_j)\}$.
The random variables $\{\mathbf{x}_i\}_{i=1}^{n}$ are jointly Gaussian if:
$$f_{\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n}(x_1, x_2, \ldots, x_n) = \frac{1}{\sqrt{(2\pi)^n \det(C)}} \exp\left\{-\frac{1}{2}(\vec{x} - \vec{m})^{\top} C^{-1} (\vec{x} - \vec{m})\right\}. \quad (33)$$
If $C$ is diagonal (i.e., the random variables $\{\mathbf{x}_i\}_{i=1}^{n}$ are all uncorrelated), the joint pdf is a product of the marginal pdfs: uncorrelatedness implies statistical independence for multiple Gaussian random variables.
Random Processes I
[Figure: an ensemble of sample functions x_1(t, ω_1), x_2(t, ω_2), ..., x_M(t, ω_M) plotted against time; sampling all members at a time instant t_k yields the random variable x(t_k, ω), and likewise at instants t_1 and t_2.]
A mapping from a sample space to a set of time functions.
Random Processes II
Ensemble: the set of possible time functions that one sees. Denote this set by $\mathbf{x}(t)$, where the time functions $x_1(t, \omega_1)$, $x_2(t, \omega_2)$, $x_3(t, \omega_3), \ldots$ are specific members of the ensemble.
At any time instant, $t = t_k$, we have a random variable $\mathbf{x}(t_k)$.
At any two time instants, say $t_1$ and $t_2$, we have two different random variables $\mathbf{x}(t_1)$ and $\mathbf{x}(t_2)$. Any relationship between them is described by the joint pdf $f_{\mathbf{x}(t_1),\mathbf{x}(t_2)}(x_1, x_2; t_1, t_2)$.
A complete description of the random process is determined by the joint pdf $f_{\mathbf{x}(t_1),\mathbf{x}(t_2),\ldots,\mathbf{x}(t_N)}(x_1, x_2, \ldots, x_N; t_1, t_2, \ldots, t_N)$.
The most important joint pdfs are the first-order pdf $f_{\mathbf{x}(t)}(x; t)$ and the second-order pdf $f_{\mathbf{x}(t_1),\mathbf{x}(t_2)}(x_1, x_2; t_1, t_2)$.
Examples of Random Processes I
[Figure: sample functions versus t of (a) thermal noise and (b) a uniform-phase sinusoid.]
Examples of Random Processes II
[Figure: sample functions versus t of (c) a Rayleigh fading process and (d) binary random data taking values +V and -V with bit duration T_b.]
Classification of Random Processes
Based on whether its statistics change with time: the process is non-stationary or stationary.
Different levels of stationarity:
Strictly stationary: the joint pdf of any order is independent of a shift in time.
Nth-order stationarity: the joint pdf does not depend on the time shift, but depends on the time spacings:
$$f_{\mathbf{x}(t_1),\ldots,\mathbf{x}(t_N)}(x_1, \ldots, x_N; t_1, \ldots, t_N) = f_{\mathbf{x}(t_1+t),\ldots,\mathbf{x}(t_N+t)}(x_1, \ldots, x_N; t_1 + t, \ldots, t_N + t).$$
First- and second-order stationarity:
$$f_{\mathbf{x}(t_1)}(x; t_1) = f_{\mathbf{x}(t_1+t)}(x; t_1 + t) = f_{\mathbf{x}(t)}(x) \quad (34)$$
$$f_{\mathbf{x}(t_1),\mathbf{x}(t_2)}(x_1, x_2; t_1, t_2) = f_{\mathbf{x}(t_1+t),\mathbf{x}(t_2+t)}(x_1, x_2; t_1 + t, t_2 + t) = f_{\mathbf{x}(t_1),\mathbf{x}(t_2)}(x_1, x_2; \tau), \quad \tau = t_2 - t_1. \quad (35)$$
Statistical Averages or Joint Moments
Consider $N$ random variables $\mathbf{x}(t_1), \mathbf{x}(t_2), \ldots, \mathbf{x}(t_N)$. The joint moments of these random variables are
$$E\{\mathbf{x}^{k_1}(t_1)\,\mathbf{x}^{k_2}(t_2)\cdots\mathbf{x}^{k_N}(t_N)\} = \int_{x_1=-\infty}^{\infty}\cdots\int_{x_N=-\infty}^{\infty} x_1^{k_1} x_2^{k_2}\cdots x_N^{k_N}\, f_{\mathbf{x}(t_1),\ldots,\mathbf{x}(t_N)}(x_1, \ldots, x_N; t_1, \ldots, t_N)\, dx_1\,dx_2\ldots dx_N, \quad (36)$$
for all integers $k_j \ge 1$ and $N \ge 1$.
We shall only consider the first- and second-order moments, i.e., $E\{\mathbf{x}(t)\}$, $E\{\mathbf{x}^2(t)\}$ and $E\{\mathbf{x}(t_1)\mathbf{x}(t_2)\}$. They are the mean value, the mean-squared value and the (auto)correlation.
Mean Value or the First Moment
The mean value of the process at time $t$ is
$$m_{\mathbf{x}}(t) = E\{\mathbf{x}(t)\} = \int_{-\infty}^{\infty} x f_{\mathbf{x}(t)}(x; t)\,dx. \quad (37)$$
The average is across the ensemble, and if the pdf varies with time then the mean value is a (deterministic) function of time.
If the process is stationary then the mean is independent of $t$, i.e., a constant:
$$m_{\mathbf{x}} = E\{\mathbf{x}(t)\} = \int_{-\infty}^{\infty} x f_{\mathbf{x}}(x)\,dx. \quad (38)$$
Mean-Squared Value or the Second Moment
This is defined as
$$\mathrm{MSV}_{\mathbf{x}}(t) = E\{\mathbf{x}^2(t)\} = \int_{-\infty}^{\infty} x^2 f_{\mathbf{x}(t)}(x; t)\,dx \quad \text{(non-stationary)}, \quad (39)$$
$$\mathrm{MSV}_{\mathbf{x}} = E\{\mathbf{x}^2(t)\} = \int_{-\infty}^{\infty} x^2 f_{\mathbf{x}}(x)\,dx \quad \text{(stationary)}. \quad (40)$$
The second central moment (or the variance) is:
$$\sigma_{\mathbf{x}}^2(t) = E\{[\mathbf{x}(t) - m_{\mathbf{x}}(t)]^2\} = \mathrm{MSV}_{\mathbf{x}}(t) - m_{\mathbf{x}}^2(t) \quad \text{(non-stationary)}, \quad (41)$$
$$\sigma_{\mathbf{x}}^2 = E\{[\mathbf{x}(t) - m_{\mathbf{x}}]^2\} = \mathrm{MSV}_{\mathbf{x}} - m_{\mathbf{x}}^2 \quad \text{(stationary)}. \quad (42)$$
Correlation
The autocorrelation function completely describes the power spectral density of the random process.
It is defined as the correlation between the two random variables $\mathbf{x}_1 = \mathbf{x}(t_1)$ and $\mathbf{x}_2 = \mathbf{x}(t_2)$:
$$R_{\mathbf{x}}(t_1, t_2) = E\{\mathbf{x}(t_1)\mathbf{x}(t_2)\} = \int_{x_1=-\infty}^{\infty}\int_{x_2=-\infty}^{\infty} x_1 x_2 f_{\mathbf{x}_1,\mathbf{x}_2}(x_1, x_2; t_1, t_2)\,dx_1\,dx_2. \quad (43)$$
For a stationary process:
$$R_{\mathbf{x}}(\tau) = E\{\mathbf{x}(t)\mathbf{x}(t+\tau)\} = \int_{x_1=-\infty}^{\infty}\int_{x_2=-\infty}^{\infty} x_1 x_2 f_{\mathbf{x}_1,\mathbf{x}_2}(x_1, x_2; \tau)\,dx_1\,dx_2. \quad (44)$$
Wide-sense stationary (WSS) process: $E\{\mathbf{x}(t)\} = m_{\mathbf{x}}$ for any $t$, and $R_{\mathbf{x}}(t_1, t_2) = R_{\mathbf{x}}(\tau)$ for $\tau = t_2 - t_1$.
Properties of the Autocorrelation Function
1. $R_{\mathbf{x}}(\tau) = R_{\mathbf{x}}(-\tau)$. It is an even function of $\tau$ because the same set of product values is averaged across the ensemble, regardless of the direction of translation.
2. $|R_{\mathbf{x}}(\tau)| \le R_{\mathbf{x}}(0)$. The maximum always occurs at $\tau = 0$, though there may be other values of $\tau$ for which it is as big. Further, $R_{\mathbf{x}}(0)$ is the mean-squared value of the random process.
3. If for some $\tau_0$ we have $R_{\mathbf{x}}(\tau_0) = R_{\mathbf{x}}(0)$, then for all integers $k$, $R_{\mathbf{x}}(k\tau_0) = R_{\mathbf{x}}(0)$.
4. If $m_{\mathbf{x}} \ne 0$ then $R_{\mathbf{x}}(\tau)$ will have a constant component equal to $m_{\mathbf{x}}^2$.
5. Autocorrelation functions cannot have an arbitrary shape. The restriction on the shape arises from the fact that the Fourier transform of an autocorrelation function must be greater than or equal to zero, i.e., $\mathcal{F}\{R_{\mathbf{x}}(\tau)\} \ge 0$.
Power Spectral Density of a Random Process (I)
Taking the Fourier transform of the random process does not work.
[Figure: a time-domain ensemble x_1(t, ω_1), x_2(t, ω_2), ..., x_M(t, ω_M) and the corresponding frequency-domain ensemble of magnitude spectra |X_1(f, ω_1)|, |X_2(f, ω_2)|, ..., |X_M(f, ω_M)|; each realization has its own Fourier transform.]
Power Spectral Density of a Random Process (II)
Need to determine how the average power of the process is distributed in frequency.
Define a truncated process:
$$\mathbf{x}_T(t) = \begin{cases} \mathbf{x}(t), & -T \le t \le T \\ 0, & \text{otherwise.} \end{cases} \quad (45)$$
Consider the Fourier transform of this truncated process:
$$\mathbf{X}_T(f) = \int_{-\infty}^{\infty} \mathbf{x}_T(t)\, e^{-j2\pi f t}\,dt. \quad (46)$$
Average the energy over the total time, $2T$:
$$P = \frac{1}{2T}\int_{-T}^{T} \mathbf{x}_T^2(t)\,dt = \frac{1}{2T}\int_{-\infty}^{\infty} |\mathbf{X}_T(f)|^2\,df \quad \text{(watts)}. \quad (47)$$
Power Spectral Density of a Random Process (III)
Find the average value of $P$:
$$E\{P\} = E\left\{\frac{1}{2T}\int_{-T}^{T} \mathbf{x}_T^2(t)\,dt\right\} = E\left\{\frac{1}{2T}\int_{-\infty}^{\infty} |\mathbf{X}_T(f)|^2\,df\right\}. \quad (48)$$
Take the limit as $T \to \infty$:
$$\lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} E\{\mathbf{x}_T^2(t)\}\,dt = \lim_{T\to\infty} \frac{1}{2T}\int_{-\infty}^{\infty} E\{|\mathbf{X}_T(f)|^2\}\,df. \quad (49)$$
It follows that
$$\mathrm{MSV}_{\mathbf{x}} = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} E\{\mathbf{x}_T^2(t)\}\,dt = \int_{-\infty}^{\infty} \lim_{T\to\infty} \frac{E\{|\mathbf{X}_T(f)|^2\}}{2T}\,df \quad \text{(watts)}. \quad (50)$$
Power Spectral Density of a Random Process (IV)
Finally,
$$S_{\mathbf{x}}(f) = \lim_{T\to\infty} \frac{E\{|\mathbf{X}_T(f)|^2\}}{2T} \quad \text{(watts/Hz)} \quad (51)$$
is the power spectral density of the process.
It can be shown that the power spectral density and the autocorrelation function are a Fourier transform pair:
$$R_{\mathbf{x}}(\tau) \longleftrightarrow S_{\mathbf{x}}(f) = \int_{\tau=-\infty}^{\infty} R_{\mathbf{x}}(\tau)\, e^{-j2\pi f \tau}\,d\tau. \quad (52)$$
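A discrete-time sketch of (51) and (52): average |X_T(f)|² over an ensemble of realizations, normalize by the record length, and invert the result. Unit-variance white noise is used, so the PSD should come out flat at 1 and the autocorrelation should be a single spike at lag 0 (sizes and seed assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 2000, 1024            # ensemble members, samples per member

# Ensemble of truncated realizations of unit-variance white noise.
x = rng.standard_normal((M, N))

# Eq. (51), discrete form: S_x(f) ~ E{|X_T(f)|^2} / N.
X = np.fft.fft(x, axis=1)
S_est = np.mean(np.abs(X)**2, axis=0) / N
print(S_est.mean(), S_est.std())   # ~1.0 with small spread: a flat PSD

# Eq. (52): inverting the PSD recovers the autocorrelation, here a
# single spike at lag 0 (white noise samples are uncorrelated).
R_est = np.fft.ifft(S_est).real
print(R_est[0], np.abs(R_est[1:]).max())   # ~1.0 and ~0
```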
Time Averaging and Ergodicity
An ergodic process: a process where any member of the ensemble exhibits the same statistical behavior as that of the whole ensemble.
All time averages on a single ensemble member are equal to the corresponding ensemble average:
$$E\{\mathbf{x}^n(t)\} = \int_{-\infty}^{\infty} x^n f_{\mathbf{x}}(x)\,dx = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} [x_k(t, \omega_k)]^n\,dt, \quad \forall\, n, k. \quad (53)$$
For an ergodic process: to measure various statistical averages, it is sufficient to look at only one realization of the process and find the corresponding time average.
For a process to be ergodic it must be stationary. The converse is not true.
Examples of Random Processes
(Example 3.4) $\mathbf{x}(t) = A\cos(2\pi f_0 t + \boldsymbol{\Theta})$, where $\boldsymbol{\Theta}$ is a random variable uniformly distributed on $[0, 2\pi]$. This process is both stationary and ergodic.
(Example 3.5) $\mathbf{x}(t) = \mathbf{x}$, where $\mathbf{x}$ is a random variable uniformly distributed on $[-A, A]$, where $A > 0$. This process is WSS, but not ergodic.
(Example 3.6) $\mathbf{x}(t) = \mathbf{A}\cos(2\pi f_0 t + \boldsymbol{\Theta})$, where $\mathbf{A}$ is a zero-mean random variable with variance $\sigma_{\mathbf{A}}^2$, and $\boldsymbol{\Theta}$ is uniform in $[0, 2\pi]$. Furthermore, $\mathbf{A}$ and $\boldsymbol{\Theta}$ are statistically independent. This process is not ergodic, but strictly stationary.
Random Processes and LTI Systems
[Figure: an input x(t), characterized by m_x, R_x(τ) and S_x(f), drives a linear, time-invariant (LTI) system with impulse response h(t) and frequency response H(f); the output y(t) is characterized by m_y, R_y(τ), S_y(f) and the cross-correlation R_x,y(τ).]
$$m_{\mathbf{y}} = E\{\mathbf{y}(t)\} = E\left\{\int_{-\infty}^{\infty} h(\lambda)\mathbf{x}(t - \lambda)\,d\lambda\right\} = m_{\mathbf{x}} H(0) \quad (54)$$
$$S_{\mathbf{y}}(f) = |H(f)|^2 S_{\mathbf{x}}(f) \quad (55)$$
$$R_{\mathbf{y}}(\tau) = h(\tau) * h(-\tau) * R_{\mathbf{x}}(\tau). \quad (56)$$
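A discrete-time sketch verifying (55): pass white noise through a small FIR filter and compare the ensemble-averaged periodogram of the output with |H(f)|² S_x(f). The filter taps and sizes are assumed; filtering is done by circular convolution, which is adequate for this check:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 2000, 1024
h = np.array([0.5, 1.0, 0.5])          # assumed FIR impulse response
H = np.fft.fft(h, N)                   # frequency response on the FFT grid

# Input: unit-variance white noise, so S_x(f) = 1 at all frequencies.
x = rng.standard_normal((M, N))
# Filter each realization (circular convolution via the FFT).
y = np.fft.ifft(np.fft.fft(x, axis=1) * H, axis=1).real

# Ensemble-averaged periodogram of the output vs. Eq. (55).
S_y_est = np.mean(np.abs(np.fft.fft(y, axis=1))**2, axis=0) / N
S_y_theory = np.abs(H)**2 * 1.0        # |H(f)|^2 * S_x(f)
print(np.max(np.abs(S_y_est / S_y_theory - 1)))   # a few percent: estimation noise only
```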
Thermal Noise in Communication Systems
A natural noise source is thermal noise, whose amplitude statistics are well modeled as Gaussian with zero mean.
The autocorrelation and PSD are well modeled as:
$$R_{\mathbf{w}}(\tau) = k\theta G\, \frac{e^{-|\tau|/t_0}}{t_0} \quad \text{(watts)}, \quad (57)$$
$$S_{\mathbf{w}}(f) = \frac{2k\theta G}{1 + (2\pi f t_0)^2} \quad \text{(watts/Hz)}, \quad (58)$$
where $k = 1.38 \times 10^{-23}$ joule/°K is Boltzmann's constant, $G$ is the conductance of the resistor (mhos), $\theta$ is the temperature in degrees Kelvin, and $t_0$ is the statistical average of time intervals between collisions of free electrons in the resistor (on the order of $10^{-12}$ sec).
[Figure: (a) the power spectral density S_w(f) (watts/Hz) versus f (GHz), flat at N_0/2 for white noise and rolling off for thermal noise; (b) the autocorrelation R_w(τ) (watts) versus τ (picosec), an impulse (N_0/2)δ(τ) for white noise and a narrow pulse for thermal noise.]
The noise PSD is approximately flat over the frequency range of 0 to 10 GHz ⇒ let the spectrum be flat from 0 to ∞:
$$S_{\mathbf{w}}(f) = \frac{N_0}{2} \quad \text{(watts/Hz)}, \quad (59)$$
where $N_0 = 4k\theta G$ is a constant.
Noise that has a uniform spectrum over the entire frequency range is referred to as white noise.
The autocorrelation of white noise is
$$R_{\mathbf{w}}(\tau) = \frac{N_0}{2}\,\delta(\tau) \quad \text{(watts)}. \quad (60)$$
Since $R_{\mathbf{w}}(\tau) = 0$ for $\tau \ne 0$, any two different samples of white noise, no matter how close in time they are taken, are uncorrelated.
Since the noise samples of white noise are uncorrelated, if the noise is both white and Gaussian (for example, thermal noise) then the noise samples are also independent.
Example
Suppose that a (WSS) white noise process, x(t), of zero mean and power spectral density N_0/2 is applied to the input of the filter below.
(a) Find and sketch the power spectral density and autocorrelation function of the random process y(t) at the output of the filter.
(b) What are the mean and variance of the output process y(t)?
[Figure: x(t) drives a series inductor L; the output y(t) is taken across the resistor R.]
$$H(f) = \frac{R}{R + j2\pi f L} = \frac{1}{1 + j2\pi f L/R}. \quad (61)$$
$$S_{\mathbf{y}}(f) = \frac{N_0}{2}\,\frac{1}{1 + \left(\frac{2\pi L}{R}\right)^2 f^2}, \qquad R_{\mathbf{y}}(\tau) = \frac{N_0 R}{4L}\, e^{-(R/L)|\tau|}. \quad (62)$$
[Figure: S_y(f) (watts/Hz) versus f (Hz), a lowpass shape peaking at N_0/2 at f = 0, and R_y(τ) (watts) versus τ (sec), a double-sided exponential peaking at N_0 R/(4L).]
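For part (b): m_y = m_x H(0) = 0, and the variance of y(t) therefore equals the total output power R_y(0) = N_0 R/(4L). A numerical sketch that confirms this by integrating S_y(f) from (62) over frequency; the N_0, R, L values are assumed for illustration:

```python
import numpy as np

N0, R, L = 2.0, 50.0, 1e-3        # assumed values
f = np.linspace(-1e7, 1e7, 2_000_001)

# Output PSD from Eq. (62); integrating it over frequency gives the
# total power, which must equal R_y(0) = N0*R/(4L) since the mean is zero.
S_y = (N0 / 2) / (1 + (2 * np.pi * L / R)**2 * f**2)
var_y = np.trapz(S_y, f)
print(var_y, N0 * R / (4 * L))    # both ~2.5e4 (watts)
```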