
Optical Communication Theory and Techniques

Part I: Communication Theory and Digital Transmission


February 5, 2014

1. Consider the signals $s_1(t)$, $s_2(t)$, $s_3(t)$, and $s_4(t)$ shown in the figure below.

[Figure: the four signals $s_1(t)$, $s_2(t)$, $s_3(t)$, $s_4(t)$, each piecewise constant over the unit intervals of $0 \le t \le 4$ with amplitudes between $-2$ and $2$.]

(a) Determine the dimensionality of the signals and find an orthonormal basis.
(b) Find the images $s_1$, $s_2$, $s_3$, and $s_4$ of the signals with respect to this basis.
(c) Determine the minimum distance of the signal constellation.

2. Consider the PAM signal
\[
x(t) = \sum_n a_n p(t - nT)
\]

with elementary pulse $p(t)$ and symbols $a_n$ such that
\[
a_n = u_n - u_{n-2}
\]
where $\{u_n\}$ is a sequence of uncorrelated binary random variables with values $\pm 1$ occurring with
equal probability. Denoting by $P(f)$ the Fourier transform of $p(t)$:

(a) Determine the autocorrelation function $R_a(n) = \mathrm{E}\{a_{k+n} a_k\}$ of the symbols $a_n$.


(b) Determine the power spectral density of $x(t)$.
(c) Repeat (b) if the possible values of the $u_n$ are 0 and 1.
Solution:

1. (a) Instead of directly applying the Gram-Schmidt procedure, it is quicker to first check
the linear independence of the signals. The signals are linearly independent if
$c_1 s_1(t) + c_2 s_2(t) + c_3 s_3(t) + c_4 s_4(t) = 0$ for all $t$ implies $c_1 = c_2 = c_3 = c_4 = 0$.
Given the particular shape of the signals, we can write the following set of 4 equations

\[
\begin{cases}
2c_1 + c_2 - 2c_3 + c_4 = 0, & 0 < t < 1\\
-c_1 - c_2 + c_3 - 2c_4 = 0, & 1 < t < 2\\
-c_1 + c_2 + c_3 - 2c_4 = 0, & 2 < t < 3\\
-c_1 - c_2 + 2c_4 = 0, & 3 < t < 4
\end{cases}
\]


i.e., in matrix form


 2 1 −2 1 c1  0
    
−1 −1 1 −2 c  0
  2  
−1 1 1 −2 c3  = 0


−1 −1 0 2 c4 0
     

Of course, $c_1 = c_2 = c_3 = c_4 = 0$ is a solution. However, if the determinant of the matrix
is nonzero, it is the only solution. Recalling that adding to a column (or row) a multiple of
another column (or row) leaves the determinant unchanged, we add the third column to the
first one, so that the determinant to be evaluated becomes
\[
\begin{vmatrix}
0 & 1 & -2 & 1\\
0 & -1 & 1 & -2\\
0 & 1 & 1 & -2\\
-1 & -1 & 0 & 2
\end{vmatrix}
=
\begin{vmatrix}
1 & -2 & 1\\
-1 & 1 & -2\\
1 & 1 & -2
\end{vmatrix}
\]

where we also used the Laplace expansion (the determinant $|A|$ of an $n \times n$ matrix $A$ with
elements $a_{ij}$ is given by either $|A| = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} M_{ij}$ for any fixed column $j$, or
$|A| = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij}$ for any fixed row $i$, where $M_{ij}$ is the $(i,j)$ minor, i.e., the
determinant of the matrix obtained by deleting the $i$-th row and $j$-th column of $A$). Multiplying
the second column by 2 and adding the result to the third column, we also have

\[
\begin{vmatrix}
1 & -2 & 1\\
-1 & 1 & -2\\
1 & 1 & -2
\end{vmatrix}
=
\begin{vmatrix}
1 & -2 & -3\\
-1 & 1 & 0\\
1 & 1 & 0
\end{vmatrix}
= -3 \begin{vmatrix} -1 & 1\\ 1 & 1 \end{vmatrix} = 6.
\]
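As a quick cross-check (not part of the original solution), the determinant and the rank of the coefficient matrix can be verified numerically; the sketch below assumes NumPy is available.

    import numpy as np

    # Columns correspond to s1..s4, rows to the unit intervals (0,1)...(3,4).
    A = np.array([[ 2,  1, -2,  1],
                  [-1, -1,  1, -2],
                  [-1,  1,  1, -2],
                  [-1, -1,  0,  2]], dtype=float)

    print(np.linalg.det(A))          # ~ 6.0: nonzero, so only the trivial solution
    print(np.linalg.matrix_rank(A))  # 4: the four signals are linearly independent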

The nonzero determinant means that there is no solution other than the trivial one. Thus, the signals
are linearly independent and their dimensionality is 4. Given the linear independence and the
particular shape of the signals, we need not apply the Gram-Schmidt procedure, because it is
apparent that the following set of signals is an orthonormal basis:
[Figure: the basis functions $\varphi_1(t)$, $\varphi_2(t)$, $\varphi_3(t)$, $\varphi_4(t)$, where $\varphi_i(t)$ is a unit-amplitude rectangular pulse on the interval $(i-1, i)$ and zero elsewhere.]
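For completeness, here is a minimal numerical sketch of the Gram-Schmidt route that the solution skips, implemented via a QR factorization of the sampled signals; NumPy, the grid step, and the variable names are illustrative assumptions, not part of the original solution.

    import numpy as np

    dt = 1e-3                                   # sampling step (illustrative choice)
    t = np.arange(0, 4, dt)
    # Signal values on the four unit intervals, read off the figure.
    vals = np.array([[ 2, -1, -1, -1],
                     [ 1, -1,  1, -1],
                     [-2,  1,  1,  0],
                     [ 1, -2, -2,  2]], dtype=float)
    S = vals[:, np.minimum(t.astype(int), 3)]   # 4 x len(t) samples of s1..s4

    # Orthonormalize the sampled signals (QR plays the role of Gram-Schmidt);
    # scaling by sqrt(dt) makes the Euclidean product approximate the integral one.
    Q, R = np.linalg.qr(S.T * np.sqrt(dt))
    print(np.abs(np.diag(R)))                   # all nonzero -> four independent signals
    print(np.allclose(Q.T @ Q, np.eye(4)))      # True: the columns of Q are orthonormal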

(b) The images of the signals with respect to the previous basis are readily found without any
computation
 2  1 −2  1
       
−1 −1  1 −2
s1 =   , s2 =   , s3 =   , s4 =  
−1  1  1 −2
−1 −1 0 2
(c) The distance $d_{ij}$ between any pair $s_i$, $s_j$ of signals is given by
\[
d_{ij} = \|s_i - s_j\| = \sqrt{(s_{i1} - s_{j1})^2 + (s_{i2} - s_{j2})^2 + (s_{i3} - s_{j3})^2 + (s_{i4} - s_{j4})^2}
\]

so that

\[
\begin{aligned}
d_{12} &= \sqrt{1^2 + 0^2 + (-2)^2 + 0^2} = \sqrt{5}\\
d_{13} &= \sqrt{4^2 + (-2)^2 + (-2)^2 + (-1)^2} = \sqrt{25}\\
d_{14} &= \sqrt{1^2 + 1^2 + 1^2 + (-3)^2} = \sqrt{12}\\
d_{23} &= \sqrt{3^2 + (-2)^2 + 0^2 + (-1)^2} = \sqrt{14}\\
d_{24} &= \sqrt{0^2 + 1^2 + 3^2 + (-3)^2} = \sqrt{19}\\
d_{34} &= \sqrt{(-3)^2 + 3^2 + 3^2 + (-2)^2} = \sqrt{31}
\end{aligned}
\]
Thus, the minimum distance is $d_{\min} = \sqrt{5}$.
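The pairwise distances, and hence $d_{\min}$, can also be checked with a short script; the following is only a verification sketch, not part of the original solution.

    import numpy as np
    from itertools import combinations

    s = {1: np.array([ 2, -1, -1, -1]),
         2: np.array([ 1, -1,  1, -1]),
         3: np.array([-2,  1,  1,  0]),
         4: np.array([ 1, -2, -2,  2])}

    d = {(i, j): np.linalg.norm(s[i] - s[j]) for i, j in combinations(s, 2)}
    for (i, j), dij in sorted(d.items()):
        print(f"d{i}{j} = sqrt({dij**2:.0f}) = {dij:.3f}")
    print("dmin =", min(d.values()))            # sqrt(5) ~ 2.236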

2. (a) Letting $R_u(n) = \mathrm{E}\{u_{k+n} u_k\}$, as $a_n = u_n - u_{n-2}$, we have

\[
\begin{aligned}
R_a(n) = \mathrm{E}\{a_{k+n} a_k\} &= \mathrm{E}\{(u_{k+n} - u_{k+n-2})(u_k - u_{k-2})\}\\
&= \mathrm{E}\{u_{k+n} u_k\} - \mathrm{E}\{u_{k+n} u_{k-2}\} - \mathrm{E}\{u_{k+n-2} u_k\} + \mathrm{E}\{u_{k+n-2} u_{k-2}\}\\
&= 2R_u(n) - R_u(n+2) - R_u(n-2)
\end{aligned}
\]

As the random variables $u_n$ are uncorrelated and can assume the values $\pm 1$ with equal
probability, we have that $\mathrm{E}\{u_n\} = 0$ and $\mathrm{E}\{u_n^2\} = 1$, so that
\[
R_u(n) = \mathrm{E}\{u_{k+n} u_k\} =
\begin{cases}
\mathrm{E}\{u_k^2\} = 1, & n = 0\\
\mathrm{E}\{u_{k+n}\}\,\mathrm{E}\{u_k\} = 0, & n \neq 0
\end{cases}
\]

Hence,
\[
R_a(n) =
\begin{cases}
2, & n = 0\\
-1, & n = \pm 2\\
0, & \text{otherwise}
\end{cases}
\]
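These values can be confirmed by a quick Monte-Carlo estimate of $R_a(n)$; the sketch below assumes NumPy and an arbitrary sequence length, and is not part of the original solution.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000
    u = rng.choice([-1, 1], size=N)             # uncorrelated +-1 symbols, equiprobable
    a = u[2:] - u[:-2]                          # a_n = u_n - u_{n-2}

    for n in range(5):
        Ra = np.mean(a[n:] * a[:len(a) - n])    # sample estimate of E{a_{k+n} a_k}
        print(n, round(Ra, 2))                  # ~ 2, 0, -1, 0, 0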
(b) The power spectral density $S_x(f)$ of a generic PAM signal $x(t)$ is given by
\[
S_x(f) = \frac{1}{T} |P(f)|^2 \sum_{n=-\infty}^{\infty} R_a(n) e^{-j2\pi f nT}
\]

Thus, replacing $R_a(n)$ and taking into account that $R_a(-2) = R_a(2)$,
\[
\begin{aligned}
S_x(f) &= \frac{1}{T} |P(f)|^2 \left( R_a(-2) e^{-j2\pi f(-2)T} + R_a(0) + R_a(2) e^{-j2\pi f 2T} \right)\\
&= \frac{1}{T} |P(f)|^2 \bigl( R_a(0) + 2R_a(2) \cos 4\pi f T \bigr)\\
&= \frac{1}{T} |P(f)|^2 \bigl( 2 - 2\cos 4\pi f T \bigr)\\
&= \frac{4}{T} |P(f)|^2 \sin^2 2\pi f T
\end{aligned}
\]
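The last algebraic step, $2 - 2\cos 4\pi f T = 4\sin^2 2\pi f T$, is easy to confirm numerically; in the sketch below $T = 1$ is an arbitrary choice and the common factor $|P(f)|^2/T$ is omitted since it multiplies both sides.

    import numpy as np

    T = 1.0
    f = np.linspace(-2 / T, 2 / T, 1001)
    lhs = 2 - 2 * np.cos(4 * np.pi * f * T)     # R_a(0) + 2 R_a(2) cos(4 pi f T)
    rhs = 4 * np.sin(2 * np.pi * f * T) ** 2
    print(np.max(np.abs(lhs - rhs)))            # ~ 1e-15: the two forms coincide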
(c) If the random variables $u_n$ are uncorrelated and can assume the values 0 and 1 with equal
probability, we have that $\mathrm{E}\{u_n\} = \mathrm{E}\{u_n^2\} = 1/2$. In this case, let us denote by a prime the
autocorrelation functions, so that $R'_u(n) = \mathrm{E}\{u_{k+n} u_k\}$ and $R'_a(n) = \mathrm{E}\{a_{k+n} a_k\}$.
Thus,
\[
R'_u(n) =
\begin{cases}
\mathrm{E}\{u_k^2\} = 1/2, & n = 0\\
\mathrm{E}\{u_{k+n}\}\,\mathrm{E}\{u_k\} = 1/4, & n \neq 0
\end{cases}
\]

and proceeding as in (a), we get

\[
R'_a(n) = 2R'_u(n) - R'_u(n+2) - R'_u(n-2).
\]
We now note that $R'_u(n) = \bigl(1 + R_u(n)\bigr)/4$, where $R_u(n)$ is as in (a), so that

\[
\begin{aligned}
R'_a(n) &= 2R'_u(n) - R'_u(n+2) - R'_u(n-2)\\
&= \frac{1}{4} \bigl( 2R_u(n) - R_u(n+2) - R_u(n-2) \bigr)\\
&= \frac{1}{4} R_a(n)
\end{aligned}
\]
Thus, in this case, the power spectral density is as in (b) but reduced by a factor of 4, i.e.,
\[
S_x(f) = \frac{1}{T} |P(f)|^2 \sin^2 2\pi f T
\]
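As in part (a), a short simulation (again only an illustrative sketch, not part of the original solution) confirms that with 0/1 symbols the symbol autocorrelation, and hence the power spectral density, is scaled by $1/4$.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 200_000
    u = rng.integers(0, 2, size=N)              # uncorrelated 0/1 symbols, equiprobable
    a = u[2:] - u[:-2]                          # a_n = u_n - u_{n-2}

    for n in range(5):
        Ra_p = np.mean(a[n:] * a[:len(a) - n])  # sample estimate of R'_a(n)
        print(n, round(Ra_p, 2))                # ~ 0.5, 0, -0.25, 0, 0 = R_a(n)/4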
