Lesson 16
Representation of Signals
Version 2 ECE IIT, Kharagpur
Gram-Schmidt Orthogonalization
The principle of Gram-Schmidt Orthogonalization (GSO) states that any set of M energy signals {s_i(t)}, 1 \le i \le M, can be expressed as linear combinations of N orthonormal basis functions, where N \le M. If s_1(t), s_2(t), ..., s_M(t) are real-valued energy signals, each of duration T sec, then
s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t); \quad 0 \le t \le T, \; i = 1, 2, \ldots, M        4.16.1

where,

s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt; \quad i = 1, 2, \ldots, M, \; j = 1, 2, \ldots, N        4.16.2
The \phi_j(t)-s are the basis functions and the s_{ij}-s are scalar coefficients. We will consider real-valued basis functions \phi_j(t) which are orthonormal to each other, i.e.,
\int_0^T \phi_i(t)\,\phi_j(t)\,dt = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \ne j \end{cases}        4.16.3
Note that each basis function has unit energy over the symbol duration T. Now, if the basis functions are known and the scalars are given, we can generate the energy signals by following Fig. 4.16.1. Alternatively, if we know the signals and the basis functions, we can determine the corresponding scalar coefficients (Fig. 4.16.2).
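The synthesizer/correlator pair described above can be sketched numerically. The following is a minimal illustration (the basis functions, coefficient values, and the trapezoidal approximation of the integrals are all assumptions made for this example, not part of the lesson):

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100_001)

# Two example orthonormal basis functions over [0, T] (each has unit energy):
phi1 = np.sqrt(2.0 / T) * np.cos(2.0 * np.pi * t / T)
phi2 = np.sqrt(2.0 / T) * np.sin(2.0 * np.pi * t / T)

def inner(x, y):
    """Inner product <x, y>: integral of x(t)*y(t) over [0, T], trapezoidal rule."""
    return np.trapz(x * y, t)

# Synthesizer (Fig. 4.16.1): build s_i(t) from chosen coefficients (Eq. 4.16.1).
si1, si2 = 0.7, -1.3       # arbitrary illustrative coefficients
si = si1 * phi1 + si2 * phi2

# Correlator (Fig. 4.16.2): recover the coefficients via Eq. 4.16.2.
r1 = inner(si, phi1)
r2 = inner(si, phi2)
print(r1, r2)  # approximately 0.7 and -1.3
```

The correlator recovers the synthesizer's coefficients exactly (up to numerical integration error) precisely because the basis is orthonormal, Eq. 4.16.3.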
Fig. 4.16.1 Synthesizer structure: the basis functions \phi_1(t), \phi_2(t), \ldots, \phi_N(t), scaled by the coefficients s_{i1}, s_{i2}, \ldots, s_{iN}, are summed to generate s_i(t), 0 \le t \le T.

Fig. 4.16.2 Correlator structure: the signal s_i(t) is multiplied by each basis function \phi_j(t) and integrated over [0, T] to recover the scalar coefficients s_{i1}, s_{i2}, \ldots, s_{iN}.
Verify that even if only two coefficients are non-zero, e.g. a_1 \ne 0 and a_3 \ne 0, then s_1(t) and s_3(t) are dependent signals. Let us arbitrarily assume a_M \ne 0. Then,

s_M(t) = -\frac{1}{a_M}\left[ a_1 s_1(t) + a_2 s_2(t) + \cdots + a_{M-1} s_{M-1}(t) \right] = -\frac{1}{a_M} \sum_{i=1}^{M-1} a_i s_i(t)        4.16.5
Eq. 4.16.5 shows that s_M(t) can be expressed as a linear combination of the other s_i(t)-s, i = 1, 2, ..., (M-1). Next, we consider the reduced set of (M-1) signals {s_i(t)}. This set may or may not be linearly independent. If not, there exists a set of scalars {b_i}, i = 1, 2, ..., (M-1), not all zero, such that

\sum_{i=1}^{M-1} b_i s_i(t) = 0, \quad 0 \le t < T        4.16.6
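Linear dependence of this kind is easy to test numerically: sample each signal on a dense grid and check the rank of the resulting matrix. The signals and frequency below are arbitrary illustrative choices, not taken from the lesson:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
f = 2.0  # arbitrary frequency for illustration

s1 = np.cos(2.0 * np.pi * f * t)
s2 = np.sin(2.0 * np.pi * f * t)
s3 = 2.0 * s1 - 0.5 * s2   # deliberately a linear combination of s1 and s2

# Stack the sampled signals into an M x (number of samples) matrix.
S = np.vstack([s1, s2, s3])
rank = np.linalg.matrix_rank(S)
print(rank)  # 2: only two of the three signals are linearly independent
```

The rank is the size N of the largest linearly independent subset, exactly the N \le M of the reduction procedure above.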
Again, arbitrarily assuming that b_{M-1} \ne 0, we may express s_{M-1}(t) as:

s_{M-1}(t) = -\frac{1}{b_{M-1}} \sum_{i=1}^{M-2} b_i s_i(t)        4.16.7

Now, repeating the above test of linear independence on the remaining signals, we will eventually end up with a subset of linearly independent signals. Let {s_i(t)}, i = 1, 2, ..., N \le M denote this subset.

Part II: We now show that it is possible to construct a set of N orthonormal basis functions \phi_1(t), \phi_2(t), ..., \phi_N(t) from {s_i(t)}, i = 1, 2, ..., N.

Let us choose the first basis function as \phi_1(t) = s_1(t)/\sqrt{E_1}, where E_1 denotes the energy of the first signal s_1(t), i.e.,

E_1 = \int_0^T s_1^2(t)\,dt        4.16.8

Let us define the scalar

s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt        4.16.9

and an intermediate function

g_2(t) = s_2(t) - s_{21}\,\phi_1(t); \quad 0 \le t < T        4.16.10

Note that,
\int_0^T g_2(t)\,\phi_1(t)\,dt = \int_0^T s_2(t)\,\phi_1(t)\,dt - s_{21}\int_0^T \phi_1^2(t)\,dt = s_{21} - s_{21} = 0

So we have verified that the function g_2(t) is orthogonal to the first basis function \phi_1(t). This gives us a clue for determining the second basis function. Now, the energy of g_2(t) is

\int_0^T g_2^2(t)\,dt = \int_0^T \left[ s_2(t) - s_{21}\,\phi_1(t) \right]^2 dt = E_2 - 2\,s_{21}\cdot s_{21} + s_{21}^2 = E_2 - s_{21}^2        4.16.11
The second basis function is the normalized form of g_2(t):

\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}} = \frac{s_2(t) - s_{21}\,\phi_1(t)}{\sqrt{E_2 - s_{21}^2}}        4.16.12

It is easy to verify that \int_0^T \phi_2^2(t)\,dt = 1 and \int_0^T \phi_1(t)\,\phi_2(t)\,dt = 0, i.e., \phi_2(t) has unit energy and is orthogonal to \phi_1(t).
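These two properties of \phi_2(t) can be checked numerically. In the sketch below, the two example signals (a rectangular pulse and a ramp) and the trapezoidal integration are illustrative assumptions:

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100_001)

def inner(x, y):
    # Inner product over [0, T] via the trapezoidal rule.
    return np.trapz(x * y, t)

# Two simple example signals (arbitrary choices for illustration):
s1 = np.ones_like(t)   # rectangular pulse
s2 = t.copy()          # ramp

E1 = inner(s1, s1)
phi1 = s1 / np.sqrt(E1)              # Eq. 4.16.8: first basis function

s21 = inner(s2, phi1)                # Eq. 4.16.9: projection of s2 on phi1
g2 = s2 - s21 * phi1                 # Eq. 4.16.10: intermediate function
phi2 = g2 / np.sqrt(inner(g2, g2))   # Eq. 4.16.12: second basis function

print(inner(g2, phi1))    # approximately 0: g2 is orthogonal to phi1
print(inner(phi2, phi2))  # approximately 1: phi2 has unit energy
```

One can also confirm Eq. 4.16.11 here: the energy of g_2(t) equals E_2 - s_{21}^2 (for these signals, 1/3 - 1/4 = 1/12).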
Proceeding in a similar manner, we can determine the third basis function \phi_3(t). For i = 3,

g_3(t) = s_3(t) - \sum_{j=1}^{2} s_{3j}\,\phi_j(t) = s_3(t) - \left[ s_{31}\,\phi_1(t) + s_{32}\,\phi_2(t) \right]; \quad 0 \le t < T

where,

s_{31} = \int_0^T s_3(t)\,\phi_1(t)\,dt \quad \text{and} \quad s_{32} = \int_0^T s_3(t)\,\phi_2(t)\,dt        4.16.13

The third basis function is then \phi_3(t) = g_3(t)\big/\sqrt{\int_0^T g_3^2(t)\,dt}.
Indeed, in general,

\phi_i(t) = \frac{g_i(t)}{\sqrt{E_{g_i}}} = \frac{g_i(t)}{\sqrt{\int_0^T g_i^2(t)\,dt}}        4.16.14

where

g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t)        4.16.15

and

s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt        4.16.16
for i = 1, 2, ..., N and j = 1, 2, ..., i - 1.

Let us summarize the steps for determining the orthonormal basis functions by the Gram-Schmidt Orthogonalization procedure. If the signal set {s_j(t)} is known for j = 1, 2, ..., M, 0 \le t < T:

1. Derive a subset of linearly independent energy signals {s_i(t)}, i = 1, 2, ..., N \le M.
2. Find the energy of s_1(t); this energy determines the first basis function \phi_1(t), which is a normalized form of the first signal. Note that the choice of this first signal is arbitrary.
3. Find the scalar s_{21}, the energy of the second signal (E_2), and the intermediate function g_2(t), which is orthogonal to the first basis function; then normalize g_2(t) to obtain the second orthonormal basis function \phi_2(t).
4. Follow the same procedure as for the second basis function to obtain the remaining basis functions.
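The whole procedure, Eqs. 4.16.14 through 4.16.16, can be sketched as one short routine. The function name, the sampled-signal representation, the tolerance, and the example signals are all assumptions made for this sketch:

```python
import numpy as np

def gram_schmidt(signals, t, tol=1e-8):
    """Orthonormal basis for a list of sampled energy signals (Eqs. 4.16.14-16).

    A signal that is linearly dependent on its predecessors yields an
    intermediate function g_i(t) of (near) zero energy and is skipped,
    so the returned basis has N <= M functions.
    """
    def inner(x, y):
        return np.trapz(x * y, t)   # integral over the grid t

    basis = []
    for s in signals:
        # g_i(t) = s_i(t) - sum_j s_ij * phi_j(t)   (Eqs. 4.16.15, 4.16.16)
        g = s - sum(inner(s, phi) * phi for phi in basis)
        Eg = inner(g, g)
        if Eg > tol:
            basis.append(g / np.sqrt(Eg))           # Eq. 4.16.14
    return basis

t = np.linspace(0.0, 1.0, 20_001)
s1 = np.ones_like(t)
s2 = t.copy()
s3 = 2.0 * s1 - 3.0 * s2   # linearly dependent on s1 and s2
phis = gram_schmidt([s1, s2, s3], t)
print(len(phis))  # 2 basis functions for 3 signals, i.e. N < M
```

Note that this folds the linear-independence test of Part I into the construction itself: a dependent signal simply contributes nothing new to the basis.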
Now, we can represent a signal s_i(t) as a column vector whose elements are the scalar coefficients s_{ij}, j = 1, 2, ..., N:

\mathbf{s}_i = \begin{bmatrix} s_{i1} \\ s_{i2} \\ \vdots \\ s_{iN} \end{bmatrix}_{N \times 1}; \quad i = 1, 2, \ldots, M        4.16.17
These M energy signals, or vectors, can be viewed as a set of M points in an N-dimensional Euclidean space, known as the signal space (Fig. 4.16.3). The signal constellation is the collection of the M signal points (or messages) in the signal space.
Fig. 4.16.3 A two-dimensional signal space showing three signal vectors \mathbf{s}_1, \mathbf{s}_2 and \mathbf{s}_3 with energies E_1, E_2 and E_3; the coordinates s_{11} and s_{12} of \mathbf{s}_1 are marked along the \phi_1(t) and \phi_2(t) axes.
Now, the length or norm of a vector is denoted as \|\mathbf{s}_i\|. The squared norm is the inner product of the vector with itself:

\|\mathbf{s}_i\|^2 = \langle \mathbf{s}_i, \mathbf{s}_i \rangle = \sum_{j=1}^{N} s_{ij}^2        4.16.18
The cosine of the angle between two vectors is defined as:

\cos\left( \text{angle between } \mathbf{s}_i \text{ and } \mathbf{s}_j \right) = \frac{\langle \mathbf{s}_i, \mathbf{s}_j \rangle}{\|\mathbf{s}_i\|\,\|\mathbf{s}_j\|}        4.16.19
Further, the energy of a signal is simply the squared norm of its vector:

E_i = \int_0^T s_i^2(t)\,dt = \sum_{j=1}^{N} s_{ij}^2 = \|\mathbf{s}_i\|^2        4.16.20
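Eqs. 4.16.18 through 4.16.20 reduce to ordinary vector arithmetic on the coefficient vectors. A small sketch, with arbitrary illustrative coefficients in an N = 2 dimensional signal space:

```python
import numpy as np

# Two signal vectors in an N = 2 dimensional signal space
# (the coefficient values are arbitrary illustrative choices):
s1 = np.array([1.0, 0.0])
s2 = np.array([0.6, 0.8])

E1 = float(np.dot(s1, s1))   # Eqs. 4.16.18 / 4.16.20: energy = squared norm
E2 = float(np.dot(s2, s2))

# Eq. 4.16.19: cosine of the angle between the two vectors.
cos_angle = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))

print(round(E1, 6), round(E2, 6), round(cos_angle, 6))  # 1.0 1.0 0.6
```

Here both signals have unit energy, so the cosine of the angle between them is just their inner product.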
It may now be guessed intuitively that we should choose s_i(t) and s_k(t) such that the Euclidean distance between them, \|\mathbf{s}_i - \mathbf{s}_k\|, is as large as possible, to ensure that their detection remains robust even in the presence of noise. For example, if s_1(t) and s_2(t) have the same energy E (i.e., they are equidistant from the origin), then an obvious choice for maximum separation is s_1(t) = -s_2(t).
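The antipodal choice above is easy to quantify: two equal-energy points can be at most 2\sqrt{E} apart, achieved when they sit on opposite sides of the origin. A minimal check, with an illustrative energy value:

```python
import numpy as np

E = 1.0                          # common signal energy (illustrative value)
s1 = np.array([np.sqrt(E)])      # a one-dimensional signal space suffices here
s2 = -s1                         # antipodal choice: s2(t) = -s1(t)

# Euclidean distance between the two signal points.
d = np.linalg.norm(s1 - s2)
print(d)  # 2.0, i.e. 2*sqrt(E), the maximum for two equal-energy signals
```

For comparison, two orthogonal equal-energy signals would be only \sqrt{2E} apart, which is why antipodal signaling is more robust against noise at the same energy.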
received signal point. This also helps us to identify a noise vector w(t). The detection problem can now be stated as: given an observation / received signal vector r, the receiver has to perform a mapping from r to an estimate \hat{m} of the transmitted symbol m_i in a way that minimizes the average probability of symbol error. The Maximum Likelihood Detection scheme provides a general solution to this problem when the noise is additive and Gaussian. We discuss this important detection scheme in Lesson #19.
Problems
Q4.16.1) Sketch two signals which are orthonormal to each other over 1 sec. Verify that Eq. 4.16.3 holds for them.

Q4.16.2) Let s_1(t) = \cos 2\pi f t, s_2(t) = \cos(2\pi f t + \pi/3) and s_3(t) = \sin 2\pi f t. Comment on whether the three signals are linearly independent.

Q4.16.3) Consider a binary random sequence of 1-s and 0-s. Draw a signal constellation for the same.