
OUTLINE

• 5.1 Introduction
• 5.2 Geometric Representation of Signals
• Gram-Schmidt Orthogonalization Procedure
• 5.3 Conversion of the AWGN into a Vector Channel
• 5.4 Maximum Likelihood Decoding
• 5.5 Correlation Receiver
• 5.6 Probability of Error
INTRODUCTION – THE MODEL
• We consider the following model of a generic transmission
system (digital source):
• A message source transmits one symbol every T seconds
• Symbols belong to an alphabet of M symbols: m1, m2, …, mM
• Binary: symbols are 0s and 1s
• Quaternary PCM: symbols are 00, 01, 10, 11
TRANSMITTER SIDE
• Symbol generation (message) is probabilistic, with a priori probabilities p1, p2, …, pM
• If the symbols are equally likely, the probability that symbol mi will be emitted is

$$p_i = P(m_i) = \frac{1}{M}, \qquad i = 1, 2, \ldots, M \tag{5.1}$$
• The transmitter takes the symbol mi (the digital message source output) and encodes it into a distinct signal si(t).
• The signal si(t) occupies the whole slot T allotted to symbol mi.
• si(t) is a real-valued energy signal (a signal with finite energy):

$$E_i = \int_0^T s_i^2(t)\,dt, \qquad i = 1, 2, \ldots, M \tag{5.2}$$
CHANNEL ASSUMPTIONS:
• Linear, with a bandwidth wide enough to accommodate the signal si(t) with no or negligible distortion
• The channel noise w(t) is a zero-mean white Gaussian noise process (AWGN)
• The noise is additive, so the received signal may be expressed as (see the sketch below):

$$x(t) = s_i(t) + w(t), \qquad 0 \le t \le T,\ \ i = 1, 2, \ldots, M \tag{5.3}$$
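To make the model concrete, here is a minimal NumPy sketch of (5.3), assuming an arbitrary rectangular pulse for si(t) and illustrative values for T, the sampling rate, and the noise level; none of these numbers come from the text.

```python
import numpy as np

# Sampled version of x(t) = s_i(t) + w(t), 0 <= t <= T  (eq. 5.3).
# T, fs, and N0 are illustrative choices, not values from the text.
T, fs, N0 = 1.0, 1000, 0.1

t = np.arange(0, T, 1 / fs)
s_i = np.ones_like(t)        # a rectangular pulse standing in for s_i(t)

# Sampled white Gaussian noise with two-sided PSD N0/2:
# each sample then has variance (N0/2) * fs.
w = np.random.normal(0.0, np.sqrt(N0 / 2 * fs), size=t.shape)

x = s_i + w                  # received signal, eq. (5.3)
```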
RECEIVER SIDE
• Observes the received signal x(t) for a duration of T sec
• Makes an estimate of the transmitted signal si(t) (equivalently, of the symbol mi)
• The process is statistical: the presence of noise causes errors
• So the receiver has to be designed to minimize the average probability of error (Pe):

$$P_e = \sum_{i=1}^{M} p_i\, P(\hat{m} \ne m_i \mid m_i) \tag{5.4}$$

where m̂ is the receiver's estimate of the transmitted symbol.
5.2. GEOMETRIC REPRESENTATION OF SIGNALS
• Objective: To represent any set of M energy signals {si(t)} as linear combinations of N orthonormal basis functions, where N ≤ M
• Real-valued energy signals s1(t), s2(t), …, sM(t), each of duration T sec:

$$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \qquad 0 \le t \le T,\ \ i = 1, 2, \ldots, M \tag{5.5}$$

where the φj(t) are the orthonormal basis functions and the sij are the coefficients of the expansion of the energy signal.
• Coefficients:

$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \qquad i = 1, 2, \ldots, M,\ \ j = 1, 2, \ldots, N \tag{5.6}$$

• The real-valued basis functions are orthonormal:

$$\int_0^T \phi_i(t)\,\phi_j(t)\,dt = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases} \tag{5.7}$$
• The set of coefficients can be viewed as an N-dimensional vector, denoted by si
• It bears a one-to-one relationship with the transmitted signal si(t)
FIGURE 5.3
(a) Synthesizer for generating the signal si(t). (b) Analyzer for generating the set of signal vectors si.
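The figure itself is not reproduced here, but both halves are easy to sketch in NumPy. The following is a minimal illustration, assuming two half-interval rectangular pulses as the orthonormal basis (N = 2) and made-up coefficients; the synthesizer implements (5.5) and the analyzer implements (5.6).

```python
import numpy as np

T, fs = 1.0, 1000
dt = 1 / fs
t = np.arange(0, T, dt)

# Assumed orthonormal basis (N = 2): two unit-energy half-interval pulses.
phi = np.vstack([np.where(t < T / 2, np.sqrt(2 / T), 0.0),
                 np.where(t >= T / 2, np.sqrt(2 / T), 0.0)])

# Check (5.7): the Gram matrix of the basis should be the identity.
assert np.allclose(phi @ phi.T * dt, np.eye(2), atol=1e-6)

# Synthesizer (Fig. 5.3a): build s_i(t) from its coefficients, eq. (5.5).
s_vec = np.array([3.0, -1.0])          # illustrative coefficients s_i1, s_i2
s_t = s_vec @ phi                      # s_i(t) = sum_j s_ij * phi_j(t)

# Analyzer (Fig. 5.3b): recover the coefficients by correlation, eq. (5.6).
s_hat = phi @ s_t * dt                 # s_ij = integral of s_i(t) phi_j(t) dt
print(s_hat)                           # ~ [ 3. -1.]
```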
SO,
• Each signal in the set {si(t)} is completely determined by the vector of its coefficients:

$$\mathbf{s}_i = \begin{bmatrix} s_{i1} \\ s_{i2} \\ \vdots \\ s_{iN} \end{bmatrix}, \qquad i = 1, 2, \ldots, M \tag{5.8}$$
FINALLY,
• The signal vector si concept can be extended to an N-dimensional Euclidean space (2D, 3D, etc.)
• Provides the mathematical basis for the geometric representation of energy signals that is used in noise analysis
• Allows the definition of:
• the length (norm, absolute value) of vectors
• the angles between vectors
• the squared length (the inner product of si with itself):

$$\|\mathbf{s}_i\|^2 = \mathbf{s}_i^T \mathbf{s}_i = \sum_{j=1}^{N} s_{ij}^2, \qquad i = 1, 2, \ldots, M \tag{5.9}$$

where the superscript T denotes matrix transposition.
FIGURE 5.4
Illustrating the geometric representation of signals for the case when N = 2 and M = 3 (two-dimensional space, three signals).
ALSO,
What is the relation between the vector representation of a signal and its energy value?
• Start with the definition of the energy of a signal:

$$E_i = \int_0^T s_i^2(t)\,dt \tag{5.10}$$

• where si(t) is as in (5.5):

$$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t)$$
• After substitution:

$$E_i = \int_0^T \left( \sum_{j=1}^{N} s_{ij}\,\phi_j(t) \right) \left( \sum_{k=1}^{N} s_{ik}\,\phi_k(t) \right) dt$$

• After regrouping:

$$E_i = \sum_{j=1}^{N} \sum_{k=1}^{N} s_{ij}\,s_{ik} \int_0^T \phi_j(t)\,\phi_k(t)\,dt \tag{5.11}$$

• The φj(t) are orthonormal, so finally we have:

$$E_i = \sum_{j=1}^{N} s_{ij}^2 = \|\mathbf{s}_i\|^2 \tag{5.12}$$

The energy of a signal is equal to the squared length of its vector, as checked numerically below.
FORMULAS FOR TWO SIGNALS
• Assume we have a pair of signals si(t) and sk(t), each represented by its vector
• Then their inner product over [0, T] is:

$$\int_0^T s_i(t)\,s_k(t)\,dt = \mathbf{s}_i^T \mathbf{s}_k \tag{5.13}$$

• The inner product of the signals is equal to the inner product of their vector representations
• The inner product is invariant to the selection of basis functions
EUCLIDEAN DISTANCE
• The Euclidean distance between two points represented by signal vectors is ||si − sk||, and its squared value is given by:

$$\|\mathbf{s}_i - \mathbf{s}_k\|^2 = \sum_{j=1}^{N} (s_{ij} - s_{kj})^2 = \int_0^T \big(s_i(t) - s_k(t)\big)^2\,dt \tag{5.14}$$
ANGLE BETWEEN TWO SIGNALS
• The cosine of the angle θik between two signal vectors si and sk is equal to the inner product of these two vectors divided by the product of their norms:

$$\cos\theta_{ik} = \frac{\mathbf{s}_i^T \mathbf{s}_k}{\|\mathbf{s}_i\|\,\|\mathbf{s}_k\|} \tag{5.15}$$

• So the two signal vectors are orthogonal if their inner product $\mathbf{s}_i^T \mathbf{s}_k$ is zero (cos θik = 0), as illustrated below
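A small sketch of (5.14) and (5.15) on two made-up signal vectors; it also confirms that the distance computed from the vectors matches the waveform-domain integral.

```python
import numpy as np

T, fs = 1.0, 1000
dt = 1 / fs
t = np.arange(0, T, dt)

# Assumed orthonormal basis and two arbitrary signal vectors.
phi = np.vstack([np.where(t < T / 2, np.sqrt(2 / T), 0.0),
                 np.where(t >= T / 2, np.sqrt(2 / T), 0.0)])
s_i, s_k = np.array([1.0, 2.0]), np.array([2.0, -1.0])

# Squared Euclidean distance, eq. (5.14), computed both ways.
d2_vec = np.sum((s_i - s_k) ** 2)
d2_time = np.sum((s_i @ phi - s_k @ phi) ** 2) * dt
print(d2_vec, d2_time)                     # both ~ 10.0

# Cosine of the angle between the vectors, eq. (5.15).
cos_ik = (s_i @ s_k) / (np.linalg.norm(s_i) * np.linalg.norm(s_k))
print(cos_ik)                              # 0.0: these two vectors are orthogonal
```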
SCHWARZ INEQUALITY
• For any pair of energy signals:

$$\left( \int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt \right)^2 \le \int_{-\infty}^{\infty} s_1^2(t)\,dt \int_{-\infty}^{\infty} s_2^2(t)\,dt \tag{5.16}$$
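A one-off numerical confirmation of (5.16) for two arbitrary finite-energy test signals (both choices are illustrative):

```python
import numpy as np

fs, T = 1000, 1.0
dt = 1 / fs
t = np.arange(0, T, dt)

s1 = np.sin(2 * np.pi * 3 * t)                 # arbitrary test signals
s2 = np.exp(-t) * np.cos(2 * np.pi * 5 * t)

lhs = (np.sum(s1 * s2) * dt) ** 2              # squared cross-correlation
rhs = (np.sum(s1 ** 2) * dt) * (np.sum(s2 ** 2) * dt)
print(lhs <= rhs)                              # True, as (5.16) requires
```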
GRAM-SCHMIDT ORTHOGONALIZATION PROCEDURE
Assume a set of M energy signals denoted by s1(t), s2(t), …, sM(t).

1. Define the first basis function starting with s1(t) as (where E1 is the energy of the signal, per 5.12):

$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} \tag{5.19}$$

2. Then express s1(t) using the basis function and an energy-related coefficient s11 as:

$$s_1(t) = \sqrt{E_1}\,\phi_1(t) = s_{11}\,\phi_1(t) \tag{5.20}$$

3. Then, using s2(t), define the coefficient s21 as:

$$s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt \tag{5.21}$$
4. Introduce the intermediate function g2(t), which is orthogonal to φ1(t):

$$g_2(t) = s_2(t) - s_{21}\,\phi_1(t) \tag{5.22}$$

5. Define the second basis function φ2(t) as:

$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}} \tag{5.23}$$

6. After substituting for g2(t) in terms of s1(t) and s2(t), this becomes:

$$\phi_2(t) = \frac{s_2(t) - s_{21}\,\phi_1(t)}{\sqrt{E_2 - s_{21}^2}} \tag{5.24}$$

• Note that φ1(t) and φ2(t) are orthonormal, that is:

$$\int_0^T \phi_2^2(t)\,dt = 1 \quad \text{(look at 5.23)}, \qquad \int_0^T \phi_1(t)\,\phi_2(t)\,dt = 0$$
AND SO ON FOR N-DIMENSIONAL SPACE…
• In general, the intermediate function gi(t) can be defined using the following formula:

$$g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t) \tag{5.25}$$

• where the coefficients are defined by:

$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \qquad j = 1, 2, \ldots, i-1 \tag{5.26}$$
SPECIAL CASE:
• For the special case of i = 1, gi(t) reduces to si(t).

General case:
• Given the functions gi(t), we can define a set of basis functions which form an orthonormal set, as:

$$\phi_i(t) = \frac{g_i(t)}{\sqrt{\int_0^T g_i^2(t)\,dt}}, \qquad i = 1, 2, \ldots, N \tag{5.27}$$

A runnable sketch of the whole procedure follows.
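Here is a compact sketch of the procedure for sampled signals, following (5.25) through (5.27). The function name and the test signals are illustrative; linearly dependent signals are skipped, which is why N can come out smaller than M.

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Orthonormal basis for sampled energy signals, eqs. (5.25)-(5.27).

    signals: array of shape (M, L), one sampled signal per row.
    Returns (basis, coeffs) with basis functions phi_j as rows and
    coeffs[i, j] = s_ij per eq. (5.26).
    """
    basis = []
    for s in signals:
        g = s.astype(float).copy()
        for phi_j in basis:                        # g_i = s_i - sum_j s_ij phi_j, (5.25)
            g -= (np.sum(s * phi_j) * dt) * phi_j
        energy = np.sum(g ** 2) * dt
        if energy > 1e-12:                         # dependent signal: no new dimension
            basis.append(g / np.sqrt(energy))      # normalize, eq. (5.27)
    basis = np.array(basis)
    return basis, signals @ basis.T * dt           # coefficients, eq. (5.26)

# Illustrative usage: three signals that span only two dimensions.
T, fs = 1.0, 1000
dt = 1 / fs
t = np.arange(0, T, dt)
signals = np.vstack([np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t),
                     np.sin(2 * np.pi * t) + np.cos(2 * np.pi * t)])
basis, coeffs = gram_schmidt(signals, dt)
print(basis.shape[0])                              # N = 2 although M = 3
```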
CONVERSION OF THE CONTINUOUS AWGN CHANNEL INTO A VECTOR CHANNEL
• Suppose that the input to the bank of correlators is not an arbitrary signal, but specifically the signal at the receiver side, defined in accordance with the AWGN channel:

$$x(t) = s_i(t) + w(t), \qquad 0 \le t \le T,\ \ i = 1, 2, \ldots, M \tag{5.28}$$

• Then the output of the jth correlator (Fig. 5.3b) is:

$$x_j = \int_0^T x(t)\,\phi_j(t)\,dt = s_{ij} + w_j, \qquad j = 1, 2, \ldots, N \tag{5.29}$$
• In (5.29),

$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt \tag{5.30}$$

is a deterministic quantity, contributed by the transmitted signal si(t), while

$$w_j = \int_0^T w(t)\,\phi_j(t)\,dt \tag{5.31}$$

is a random quantity, the sample value of the random variable Wj due to the noise; a short simulation of the correlator bank follows.
NOW,
• Consider a random process X′(t) whose sample function x′(t) is related to the received signal x(t) as follows:

$$x'(t) = x(t) - \sum_{j=1}^{N} x_j\,\phi_j(t) \tag{5.32}$$

• Using 5.28 and 5.29 and the expansion 5.5 we get:

$$x'(t) = s_i(t) + w(t) - \sum_{j=1}^{N} (s_{ij} + w_j)\,\phi_j(t) = w(t) - \sum_{j=1}^{N} w_j\,\phi_j(t) = w'(t) \tag{5.33}$$

which means that the sample function x′(t) depends only on the channel noise!
• The received signal can therefore be expressed as:

$$x(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + x'(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + w'(t) \tag{5.34}$$

NOTE: This is an expansion similar to the one in 5.5, but it is random, due to the additive noise.
STATISTICAL CHARACTERIZATION
• The received signal (the output of the correlators of Fig. 5.3b) is random. To describe it we need to use statistical methods: mean and variance.
• The assumptions are:
• X(t) denotes a random process, a sample function of which is represented by the received signal x(t).
• Xj denotes the random variable whose sample value is represented by the correlator output xj, j = 1, 2, …, N.
• We have assumed AWGN, so the noise is Gaussian; therefore X(t) is a Gaussian process, and Xj, being a Gaussian random variable, is described fully by its mean value and variance.
MEAN VALUE
• Let Wj denote the random variable, represented by its sample value wj, produced by the jth correlator in response to the Gaussian noise component w(t).
• It has zero mean (by definition of the AWGN model), so the mean of Xj depends only on sij:

$$\mu_{X_j} = E[X_j] = E[s_{ij} + W_j] = s_{ij} + E[W_j] = s_{ij} \tag{5.35}$$
VARIANCE
• Starting from the definition and substituting 5.29:

$$\sigma_{X_j}^2 = \operatorname{var}[X_j] = E\big[(X_j - s_{ij})^2\big] = E\big[W_j^2\big] \tag{5.36}$$

• Substituting 5.31 for Wj:

$$\sigma_{X_j}^2 = E\left[\int_0^T W(t)\,\phi_j(t)\,dt \int_0^T W(u)\,\phi_j(u)\,du\right] = E\left[\int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,W(t)\,W(u)\,dt\,du\right] \tag{5.37}$$

• Interchanging expectation and integration:

$$\sigma_{X_j}^2 = \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,E[W(t)W(u)]\,dt\,du = \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,R_W(t,u)\,dt\,du \tag{5.38}$$

where RW(t, u) is the autocorrelation function of the noise process.
• Because the noise is stationary with a constant power spectral density, RW can be expressed as:

$$R_W(t, u) = \frac{N_0}{2}\,\delta(t - u) \tag{5.39}$$

• After substitution, the variance becomes:

$$\sigma_{X_j}^2 = \frac{N_0}{2} \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,\delta(t - u)\,dt\,du = \frac{N_0}{2} \int_0^T \phi_j^2(t)\,dt \tag{5.40}$$

• And since φj(t) has unit energy, we finally have:

$$\sigma_{X_j}^2 = \frac{N_0}{2} \qquad \text{for all } j \tag{5.41}$$

• All correlator outputs Xj have variance equal to the power spectral density N0/2 of the noise process W(t); the Monte Carlo sketch below checks this.
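The zero mean (5.35) and the variance N0/2 (5.41) can be checked by Monte Carlo simulation of the correlator noise output (5.31); all parameters here are illustrative.

```python
import numpy as np

T, fs, N0 = 1.0, 200, 0.2                     # illustrative values
dt = 1 / fs
t = np.arange(0, T, dt)
phi_j = np.where(t < T / 2, np.sqrt(2 / T), 0.0)   # any unit-energy basis function

trials = 10000
# Sampled AWGN with two-sided PSD N0/2: per-sample variance (N0/2) * fs.
w = np.random.normal(0.0, np.sqrt(N0 / 2 * fs), size=(trials, t.size))
w_j = w @ phi_j * dt                          # correlator noise outputs, eq. (5.31)

print(w_j.mean())                             # ~ 0,          eq. (5.35)
print(w_j.var(), N0 / 2)                      # ~ 0.1 vs 0.1, eq. (5.41)
```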
PROPERTIES (WITHOUT PROOF)
• The Xj are mutually uncorrelated
• The Xj are statistically independent (follows from the above, because the Xj are Gaussian)
• Therefore, for a memoryless channel, the conditional density of the observation factors into a product of the individual densities, as stated in (5.44) below
• Define (construct) a vector X of N random variables X1, X2, …, XN, whose elements are independent Gaussian random variables with mean values sij (the deterministic part of the correlator output, defined by the transmitted signal) and variance N0/2 (the random part, the noise added by the channel).
• Then the elements X1, X2, …, XN of X are statistically independent.
• So, we can express the conditional probability density of X, given si(t) (correspondingly, symbol mi), as a product of the conditional densities of its individual elements.
NOTE: This amounts to finding an expression for the probability of a received symbol given that a specific symbol was sent, assuming a memoryless channel.
• …that is:

$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = \prod_{j=1}^{N} f_{X_j}(x_j \mid m_i), \qquad i = 1, 2, \ldots, M \tag{5.44}$$

• where the vector x and the scalar xj are sample values of the random vector X and the random variable Xj.
• The vector x is called the observation vector; the scalar xj is called an observable element.
• Since each Xj is Gaussian with mean sij and variance N0/2:

$$f_{X_j}(x_j \mid m_i) = \frac{1}{\sqrt{\pi N_0}} \exp\left[-\frac{1}{N_0}(x_j - s_{ij})^2\right], \qquad j = 1, 2, \ldots, N,\ \ i = 1, 2, \ldots, M \tag{5.45}$$

• we can substitute into 5.44 to get 5.46:

$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = (\pi N_0)^{-N/2} \exp\left[-\frac{1}{N_0} \sum_{j=1}^{N} (x_j - s_{ij})^2\right], \qquad i = 1, 2, \ldots, M \tag{5.46}$$
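A minimal sketch of evaluating (5.46), in log form for numerical stability, on a made-up two-dimensional example. Note that maximizing (5.46) over i amounts to minimizing the Euclidean distance ||x − si||², which anticipates the maximum likelihood decoding rule.

```python
import numpy as np

def log_likelihood(x, s_i, N0):
    """ln f_X(x | m_i) from eq. (5.46) for observation x and signal vector s_i."""
    N = len(x)
    return -(N / 2) * np.log(np.pi * N0) - np.sum((x - s_i) ** 2) / N0

# Illustrative 2-D example: two candidate signal vectors, one observation.
N0 = 0.5
s1, s2 = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
x = np.array([0.8, 1.3])

print(log_likelihood(x, s1, N0))   # larger: x is closer to s1
print(log_likelihood(x, s2, N0))
```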
• If we go back to the formulation of the received signal through an AWGN channel (5.34):

$$x(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + x'(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + w'(t) \tag{5.34}$$

• The observation vector that we have constructed fully defines the first part.
• Only the projections of the noise onto the basis functions of the signal set {si(t)}, i = 1, …, M, affect the statistics of the detection problem; the remainder w′(t) is irrelevant to the decision.
FINALLY,
• The AWGN channel is equivalent to an N-dimensional vector channel, described by the observation vector:

$$\mathbf{x} = \mathbf{s}_i + \mathbf{w}, \qquad i = 1, 2, \ldots, M \tag{5.48}$$

An end-to-end sketch of this vector channel follows.
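As a closing sketch, the vector channel (5.48) can be simulated end to end: draw equally likely symbols, add independent Gaussian noise of variance N0/2 per dimension, decode by minimum distance (equivalently, by maximizing 5.46), and estimate the average error probability of (5.4) by Monte Carlo. The 4-point constellation and N0 are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
N0 = 0.5                                       # illustrative noise level
S = np.array([[1.0, 1.0], [1.0, -1.0],
              [-1.0, 1.0], [-1.0, -1.0]])      # made-up constellation: M=4, N=2
M, N = S.shape

trials = 100_000
m = rng.integers(0, M, size=trials)            # equally likely symbols, eq. (5.1)
w = rng.normal(0.0, np.sqrt(N0 / 2), size=(trials, N))   # variance N0/2 per dim
x = S[m] + w                                   # observation vectors, eq. (5.48)

# Minimum-distance decision, equivalent to maximizing eq. (5.46).
d2 = ((x[:, None, :] - S[None, :, :]) ** 2).sum(axis=2)
m_hat = d2.argmin(axis=1)

print((m_hat != m).mean())                     # Monte Carlo estimate of P_e, eq. (5.4)
```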
MAXIMUM LIKELIHOOD DECODING