OUTLINE
• 5.1 Introduction
• 5.2 Geometric Representation of Signals
• Gram-Schmidt Orthogonalization Procedure
• 5.3 Conversion of the AWGN into a Vector Channel
• 5.4 Maximum Likelihood Decoding
• 5.5 Correlation Receiver
• 5.6 Probability of Error
INTRODUCTION – THE MODEL
• We consider the following model of a generic transmission system with a digital message source:
• A message source transmits one symbol every T seconds
• Symbols belong to an alphabet of M symbols: m1, m2, …, mM
• Binary – the symbols are 0 and 1
• Quaternary PCM – the symbols are 00, 01, 10, 11
TRANSMITTER SIDE
• Symbol generation (the message) is probabilistic, with a priori probabilities p1, p2, …, pM, or
• Symbols are equally likely
• In the equally likely case, the probability that symbol mi will be emitted is:

pi = P(mi) = 1/M,  i = 1, 2, …, M   (5.1)
• The transmitter takes the symbol (data) mi (the digital message source output) and encodes it into a distinct signal si(t).
• The signal si(t) occupies the whole slot T allotted to symbol mi.
• si(t) is a real-valued energy signal (a signal with finite energy):

Ei = ∫_0^T si²(t) dt,  i = 1, 2, …, M   (5.2)
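As a numerical aside (not part of the original slides), Eq. (5.2) can be checked by sampling si(t) and approximating the integral; a unit-amplitude rectangular pulse of duration T has energy Ei = T. The pulse shape and sample count below are made-up illustration values:

```python
import numpy as np

T = 1.0                          # symbol duration (seconds)
N = 10_000                       # number of sample points
dt = T / N
t = np.arange(N) * dt

s = np.ones(N)                   # s_i(t): unit-amplitude rectangular pulse on [0, T)

# E_i = ∫_0^T s_i²(t) dt  (Eq. 5.2), approximated by a Riemann sum
E = np.sum(s**2) * dt
print(round(E, 6))               # 1.0  (equals T for a unit rectangular pulse)
```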
CHANNEL ASSUMPTIONS:
• Linear, with bandwidth wide enough to accommodate the signal si(t) with no (or negligible) distortion
• The channel noise w(t) is a zero-mean white Gaussian noise process – AWGN
• The noise is additive
• So the received signal may be expressed as:

x(t) = si(t) + w(t),  0 ≤ t ≤ T,  i = 1, 2, …, M   (5.3)
RECEIVER SIDE
• Observes the received signal x(t) for a duration of T seconds
• Makes an estimate of the transmitted signal si(t) (equivalently, of the symbol mi)
• The process is statistical because of
• the presence of noise
• errors
• So the receiver has to be designed to minimize the average probability of error (Pe):

Pe = Σ_{i=1}^{M} pi P(m̂ ≠ mi | mi)   (5.4)

where m̂ is the receiver's estimate of the transmitted symbol.
OUTLINE
• 5.1 Introduction
• 5.2 Geometric Representation of Signals
• Gram-Schmidt Orthogonalization Procedure
• 5.3 Conversion of the AWGN into a Vector Channel
• 5.4 Maximum Likelihood Decoding
• 5.5 Correlation Receiver
• 5.6 Probability of Error
5.2 GEOMETRIC REPRESENTATION OF SIGNALS
• Objective: to represent any set of M energy signals {si(t)} as linear combinations of N orthonormal basis functions, where N ≤ M
• For real-valued energy signals s1(t), s2(t), …, sM(t), each of duration T seconds:

si(t) = Σ_{j=1}^{N} sij φj(t),  0 ≤ t ≤ T,  i = 1, 2, …, M   (5.5)

where the φj(t) are the orthonormal basis functions and the sij are the coefficients of the expansion.
• The coefficients of the expansion of the energy signal si(t):

sij = ∫_0^T si(t) φj(t) dt,  i = 1, 2, …, M;  j = 1, 2, …, N   (5.6)
• The set of coefficients {sij} can be viewed as an N-dimensional vector, denoted by si
• The vector si bears a one-to-one relationship with the transmitted signal si(t)
FIGURE 5.3
(a) Synthesizer for generating the signal si(t). (b) Analyzer for generating the set of signal vectors si.
SO,
• Each signal in the set {si(t)} is completely determined by the vector of its coefficients:

si = [si1, si2, …, siN]^T,  i = 1, 2, …, M   (5.8)
FINALLY,
• The signal vector si concept can be extended to a 2D, 3D, …, N-dimensional Euclidean space
• Provides the mathematical basis for the geometric representation of energy signals that is used in noise analysis
• Allows the definition of
• the length of a vector (norm)
• angles between vectors
• the squared length (the inner product of si with itself, where ^T denotes matrix transposition):

||si||² = si^T si = Σ_{j=1}^{N} sij²,  i = 1, 2, …, M   (5.9)
FIGURE 5.4
Illustrating the geometric representation of signals for the case when N = 2 and M = 3 (two-dimensional space, three signals).
ALSO,
What is the relation between the vector representation
of a signal and its energy value?
• Substituting (5.5) into (5.2):

Ei = ∫_0^T [Σ_{j=1}^{N} sij φj(t)] [Σ_{k=1}^{N} sik φk(t)] dt

• After regrouping:

Ei = Σ_{j=1}^{N} Σ_{k=1}^{N} sij sik ∫_0^T φj(t) φk(t) dt   (5.11)

• The φj(t) form an orthonormal set, so the integral equals 1 for j = k and 0 otherwise; finally we have:

Ei = Σ_{j=1}^{N} sij² = ||si||²   (5.12)

The energy of a signal is equal to the squared length of its vector.
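A small numerical check (a sketch, not part of the original slides; the basis functions and coefficients are made up for illustration) that the energy obtained by integrating si²(t) matches the squared length of the coefficient vector:

```python
import numpy as np

T, N = 1.0, 4096
dt = T / N
t = np.arange(N) * dt

# Two orthonormal basis functions on [0, T): scaled rectangular pulses
phi1 = np.where(t < T / 2, np.sqrt(2 / T), 0.0)   # nonzero on [0, T/2)
phi2 = np.where(t >= T / 2, np.sqrt(2 / T), 0.0)  # nonzero on [T/2, T)

# A signal built from the basis with coefficient vector s_i = (3, -4)
s_vec = np.array([3.0, -4.0])
s = s_vec[0] * phi1 + s_vec[1] * phi2

E_integral = np.sum(s**2) * dt        # E_i = ∫ s_i²(t) dt       (Eq. 5.2)
E_vector = np.sum(s_vec**2)           # ||s_i||² = Σ s_ij²       (Eq. 5.12)
print(round(E_integral, 6), round(E_vector, 6))   # both ≈ 25.0
```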
FORMULAS FOR TWO SIGNALS
• Assume we have a pair of signals si(t) and sk(t), each represented by its vector
• Then their inner product is:

∫_0^T si(t) sk(t) dt = si^T sk   (5.13)
ANGLE BETWEEN TWO SIGNALS
• The cosine of the angle θik between two signal vectors si and sk is equal to the inner product of these two vectors, divided by the product of their norms:

cos θik = si^T sk / (||si|| ||sk||)   (5.15)

• So the two signal vectors are orthogonal if their inner product si^T sk is zero (cos θik = 0)
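Eq. (5.15) translates directly into a few lines of code (an illustrative sketch; the example vectors are made up):

```python
import numpy as np

def cos_angle(si, sk):
    """Cosine of the angle between two signal vectors (Eq. 5.15)."""
    return si @ sk / (np.linalg.norm(si) * np.linalg.norm(sk))

s1 = np.array([1.0, 0.0])
s2 = np.array([0.0, 2.0])
s3 = np.array([3.0, 3.0])

print(cos_angle(s1, s2))   # 0.0 -> the vectors are orthogonal
print(cos_angle(s1, s3))   # ≈ 0.7071 -> a 45-degree angle
```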
SCHWARTZ INEQUALITY

[∫ s1(t) s2(t) dt]² ≤ [∫ s1²(t) dt] [∫ s2²(t) dt]   (5.16)
OUTLINE
• 5.1 Introduction
• 5.2 Geometric Representation of Signals
• Gram-Schmidt Orthogonalization Procedure
• 5.3 Conversion of the AWGN into a Vector Channel
• 5.4 Maximum Likelihood Decoding
• 5.5 Correlation Receiver
• 5.6 Probability of Error
GRAM-SCHMIDT ORTHOGONALIZATION PROCEDURE
Assume a set of M energy signals denoted by s1(t), s2(t), .. , sM(t).
1. Define the first basis function, starting with s1(t), as (where E1 is the energy of the signal, based on (5.12)):

φ1(t) = s1(t) / √E1   (5.19)

2. Then express s1(t) using the basis function and an energy-related coefficient s11 = √E1:

s1(t) = √E1 φ1(t) = s11 φ1(t)   (5.20)
4. Introduce the intermediate function g2(t):

g2(t) = s2(t) − s21 φ1(t)   (5.22)

which is orthogonal to φ1(t); after normalizing g2(t) to obtain φ2(t):

∫_0^T φ1(t) φ2(t) dt = 0
AND SO ON FOR N-DIMENSIONAL SPACE…
• In general, a basis function can be defined using the following formula:

gi(t) = si(t) − Σ_{j=1}^{i−1} sij φj(t)   (5.25)
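The procedure above can be sketched numerically (an illustrative implementation assuming signals sampled on a uniform grid; the example pulses are made up, and this is not code from the slides):

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Build orthonormal basis functions from sampled energy signals.

    signals: list of 1-D arrays, the sampled s_i(t)
    dt: sample spacing, so that integrals become Riemann sums
    Returns the list of sampled basis functions phi_j(t).
    """
    basis = []
    for s in signals:
        g = s.astype(float)
        # Subtract projections onto the basis found so far (Eq. 5.25)
        for phi in basis:
            s_ij = np.sum(s * phi) * dt        # coefficient (Eq. 5.6)
            g = g - s_ij * phi
        energy = np.sum(g**2) * dt
        if energy > 1e-12:                     # skip linearly dependent signals
            basis.append(g / np.sqrt(energy))  # normalize (as in Eq. 5.19)
    return basis

# Example: two overlapping rectangular pulses on [0, 1)
T, N = 1.0, 1000
dt = T / N
t = np.arange(N) * dt
s1 = np.where(t < 0.5, 1.0, 0.0)
s2 = np.ones(N)

phis = gram_schmidt([s1, s2], dt)
# Check orthonormality: ∫ phi_j(t) phi_k(t) dt should be 1 if j = k, else 0
gram = np.array([[np.sum(p * q) * dt for q in phis] for p in phis])
print(np.round(gram, 6))   # ≈ identity matrix
```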
SPECIAL CASE:
• For the special case of i = 1, gi(t) reduces to si(t).
General case:
• Each basis function is obtained by normalizing the corresponding intermediate function:

φi(t) = gi(t) / √(∫_0^T gi²(t) dt),  i = 1, 2, …, N
• 5.1 Introduction
• 5.2 Geometric Representation of Signals
• Gram-Schmidt Orthogonalization Procedure
• 5.3 Conversion of the AWGN into a Vector Channel
• 5.4 Maximum Likelihood Decoding
• 5.5 Correlation Receiver
• 5.6 Probability of Error
CONVERSION OF THE CONTINUOUS AWGN CHANNEL INTO A VECTOR CHANNEL
• Suppose that si(t) is not an arbitrary signal, but specifically the signal at the receiver side, defined in accordance with the AWGN channel model:

x(t) = si(t) + w(t),  0 ≤ t ≤ T,  i = 1, 2, …, M   (5.28)
• Subtracting from x(t) its projections onto the N basis functions leaves a remainder:

x′(t) = w(t) − Σ_{j=1}^{N} wj φj(t) = w′(t)   (5.33)

which means that the remainder x′(t) depends only on the channel noise!
• The received signal can therefore be expressed as:

x(t) = Σ_{j=1}^{N} xj φj(t) + x′(t)
     = Σ_{j=1}^{N} xj φj(t) + w′(t)   (5.34)
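The correlator outputs xj = ∫ x(t) φj(t) dt can be simulated with sampled signals (a sketch; the basis functions, signal vector, and noise level are made-up illustration values):

```python
import numpy as np

rng = np.random.default_rng(0)

T, N = 1.0, 10_000
dt = T / N
t = np.arange(N) * dt

# Orthonormal basis: scaled rectangular pulses on the two half-intervals
phi = np.array([
    np.where(t < T / 2, np.sqrt(2 / T), 0.0),
    np.where(t >= T / 2, np.sqrt(2 / T), 0.0),
])

s_i = 3.0 * phi[0] - 4.0 * phi[1]             # transmitted signal, s_i = (3, -4)
N0 = 0.01
w = rng.normal(0.0, np.sqrt(N0 / 2 / dt), N)  # sampled white noise with PSD N0/2
x = s_i + w                                   # received signal (Eq. 5.28)

# Correlator bank: x_j = ∫ x(t) φ_j(t) dt = s_ij + w_j
x_vec = np.array([np.sum(x * p) * dt for p in phi])
print(np.round(x_vec, 2))                     # close to [ 3. -4.]
```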
MEAN VALUE
• Let Wj denote the random variable whose sample value wj is produced by the jth correlator in response to the Gaussian noise component w(t).
• It has zero mean (by the definition of the AWGN model), so the mean of the correlator output Xj = sij + Wj is sij:

μXj = E[Xj] = sij + E[Wj] = sij

• The variance of Xj equals that of Wj:

σ²Xj = E[Wj²] = E[ ∫_0^T ∫_0^T φj(t) φj(u) W(t) W(u) dt du ]   (5.37)
     = ∫_0^T ∫_0^T φj(t) φj(u) E[W(t) W(u)] dt du
     = ∫_0^T ∫_0^T φj(t) φj(u) Rw(t, u) dt du   (5.38)

where Rw(t, u) is the autocorrelation function of the noise process.
• Because the noise is stationary with a constant power spectral density, the autocorrelation function can be expressed as:

Rw(t, u) = (N0/2) δ(t − u)   (5.39)

• Substituting into (5.38) and using the orthonormality of the φj(t):

σ²Xj = (N0/2) ∫_0^T ∫_0^T φj(t) φj(u) δ(t − u) dt du = (N0/2) ∫_0^T φj²(t) dt = N0/2
• Define (construct) a vector X of N random variables X1, X2, …, XN whose elements are independent Gaussian random variables with mean values sij (the deterministic part of the correlator output, defined by the transmitted signal) and variance N0/2 (the random part: the noise added by the channel).
• Since the elements X1, X2, …, XN of X are statistically independent,
• we can express the conditional probability of X, given si(t) (correspondingly, symbol mi), as a product of the conditional density functions fXj of its individual elements.
NOTE: This is equivalent to finding an expression for the probability of a received symbol given that a specific symbol was sent, assuming a memoryless channel.
• …that is:

fX(x | mi) = Π_{j=1}^{N} fXj(xj | mi),  i = 1, 2, …, M   (5.44)
fX(x | mi) = Π_{j=1}^{N} fXj(xj | mi),  i = 1, 2, …, M   (5.44)

• The vector x is called the observation vector
• The scalar xj is called an observable element
• The vector x and the scalar xj are sample values of the random vector X and the random variable Xj
• Since each Xj is Gaussian with mean sij and variance N0/2:

fXj(xj | mi) = (πN0)^(−1/2) exp[−(1/N0)(xj − sij)²],  j = 1, 2, …, N;  i = 1, 2, …, M   (5.45)

fX(x | mi) = (πN0)^(−N/2) exp[−(1/N0) Σ_{j=1}^{N} (xj − sij)²],  i = 1, 2, …, M   (5.46)
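Eq. (5.46) can be evaluated directly. The sketch below (with a made-up, QPSK-like set of signal vectors) computes the likelihood of an observation under each candidate symbol and shows that the nearest one is the most likely:

```python
import numpy as np

def likelihood(x, s_i, N0):
    """Conditional density f_X(x | m_i) of Eq. (5.46)."""
    N = len(x)
    return (np.pi * N0) ** (-N / 2) * np.exp(-np.sum((x - s_i) ** 2) / N0)

N0 = 0.5
signals = [np.array([1.0, 1.0]), np.array([-1.0, 1.0]),
           np.array([-1.0, -1.0]), np.array([1.0, -1.0])]  # QPSK-like set

x = np.array([0.9, 1.2])       # noisy observation of the first signal
L = [likelihood(x, s, N0) for s in signals]
print(int(np.argmax(L)))       # 0 -> the first symbol is most likely
```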
• If we go back to the formulation of the received signal through an AWGN channel, Eq. (5.34):

x(t) = Σ_{j=1}^{N} xj φj(t) + x′(t)
     = Σ_{j=1}^{N} xj φj(t) + w′(t)   (5.34)
FINALLY,
• The AWGN channel is equivalent to an N-dimensional vector channel, described by the observation vector:

x = si + w,  i = 1, 2, …, M   (5.48)
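Eq. (5.48) means that simulating the whole continuous-time system reduces to adding an i.i.d. Gaussian vector (variance N0/2 per dimension) to the signal vector. A sketch, with a made-up constellation and noise level, using the minimum-distance decision that maximizes Eq. (5.46):

```python
import numpy as np

rng = np.random.default_rng(1)

signals = np.array([[1.0, 1.0], [-1.0, 1.0],
                    [-1.0, -1.0], [1.0, -1.0]])   # QPSK-like vectors s_i
N0 = 0.2
trials = 20_000

sent = rng.integers(0, len(signals), trials)
# Vector channel of Eq. (5.48): x = s_i + w, with w ~ N(0, (N0/2) I)
w = rng.normal(0.0, np.sqrt(N0 / 2), (trials, 2))
x = signals[sent] + w

# Minimum-distance decision (equivalent to maximizing Eq. 5.46)
d2 = ((x[:, None, :] - signals[None, :, :]) ** 2).sum(axis=2)
decided = d2.argmin(axis=1)

ser = np.mean(decided != sent)
print(ser)                        # small symbol error rate
```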
OUTLINE
• 5.1 Introduction
• 5.2 Geometric Representation of Signals
• Gram-Schmidt Orthogonalization Procedure
• 5.3 Conversion of the AWGN into a Vector Channel
• 5.4 Maximum Likelihood Decoding
• 5.5 Correlation Receiver
• 5.6 Probability of Error
MAXIMUM LIKELIHOOD DECODING