Contents

1 Converting a Digital Signal to an Analog Signal
13 Equalization
14 Non-Coherent Reception
where $I_n$ takes on one of the four possible values $\frac{1}{\sqrt{2}}(\pm 1 \pm j)$ with equal probability. The sequence of information symbols $\{I_n\}$ is statistically independent (i.i.d.).

(a) Determine the power density spectrum of $u(t)$ when
$$g(t) = \begin{cases} A, & 0 \le t \le T,\\ 0, & \text{otherwise.}\end{cases}$$
(b) Repeat (1a) when
$$g(t) = \begin{cases} A\sin(\pi t/T), & 0 \le t \le T,\\ 0, & \text{otherwise.}\end{cases}$$
(c) Compare the spectra obtained in (1a) and (1b) in terms of the 3 dB bandwidth and the bandwidth to the first spectral zero. Here you may find the frequency numerically.
Solution:
We have that
$$S_U(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty}C_I(m)e^{-j2\pi f mT},\qquad C_I(m) = \begin{cases}1, & m = 0,\\ 0, & m \neq 0,\end{cases}$$
therefore
$$\sum_{m=-\infty}^{\infty}C_I(m)e^{-j2\pi f mT} = 1 \;\Rightarrow\; S_U(f) = \frac{1}{T}|G(f)|^2.$$
For the rectangular pulse,
$$G(f) = AT\,\frac{\sin\pi fT}{\pi fT}\,e^{-j2\pi fT/2} \;\Rightarrow\; |G(f)|^2 = A^2T^2\,\frac{\sin^2\pi fT}{(\pi fT)^2},$$
where the factor $e^{-j2\pi fT/2}$ is due to the $T/2$ shift of the rectangular pulse from the center. Hence:
$$S_U(f) = A^2T\,\frac{\sin^2\pi fT}{(\pi fT)^2}.$$
(b) For the sinusoidal pulse: $G(f) = \int_0^T A\sin(\pi t/T)\exp(-j2\pi ft)\,dt$. By using the trigonometric identity $\sin x = \frac{\exp(jx)-\exp(-jx)}{2j}$ it is easily shown that:
$$G(f) = \frac{2AT}{\pi}\,\frac{\cos\pi Tf}{1-4T^2f^2}\,e^{-j\pi fT}.$$
Hence:
$$S_U(f) = \frac{4A^2T}{\pi^2}\,\frac{\cos^2\pi Tf}{(1-4T^2f^2)^2}.$$
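Part (1c) asks for a numerical comparison of the two spectra. The following short Python sketch (an illustration, not part of the original solution) evaluates the two PSDs derived above on a frequency grid and locates the 3 dB point and the first spectral zero; the normalization A = T = 1 is an assumption made only for convenience.

```python
import numpy as np

A, T = 1.0, 1.0                      # assumed normalization
f = np.linspace(1e-6, 4.0, 400000)   # positive frequencies, in units of 1/T

# PSDs derived above: rectangular pulse and half-sine pulse
S_rect = A**2 * T * (np.sin(np.pi * f * T) / (np.pi * f * T))**2
S_sine = 4 * A**2 * T / np.pi**2 * (np.cos(np.pi * T * f) / (1 - 4 * T**2 * f**2))**2

def bw_3db(S):
    """Lowest frequency at which the PSD drops 3 dB below its value near f = 0."""
    return f[np.argmax(S <= S[0] / 2)]

def first_null(S):
    """Lowest frequency at which the PSD (numerically) vanishes."""
    return f[np.argmax(S < S[0] * 1e-8)]

print("rectangular: 3dB bw = %.3f/T, first null = %.3f/T" % (bw_3db(S_rect), first_null(S_rect)))
print("half-sine:   3dB bw = %.3f/T, first null = %.3f/T" % (bw_3db(S_sine), first_null(S_sine)))
```

The rectangular pulse gives the narrower 3 dB bandwidth (about 0.44/T versus roughly 0.59/T), while the half-sine pulse pushes the first spectral null out from 1/T to 1.5/T but decays much faster beyond it.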
$$C_I(m) = \begin{cases}2, & m = 0,\\ -1, & m = \pm 2,\\ 0, & \text{otherwise},\end{cases}\;=\;2\delta(m) - \delta(m-2) - \delta(m+2),$$
since $E\{a_n a_m\} = 1$ for $n = m$ and $E\{a_n a_m\} = 0$ for $n \neq m$.

(b) $S_U(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty}C_I(m)e^{-j2\pi fmT}$, where
$$\sum_{m=-\infty}^{\infty}C_I(m)e^{-j2\pi fmT} = 2 - 2\cos 4\pi fT = 4\sin^2 2\pi fT,$$
and
$$|G(f)|^2 = (AT)^2\left(\frac{\sin\pi fT}{\pi fT}\right)^2.$$
Therefore:
$$S_U(f) = 4A^2T\left(\frac{\sin\pi fT}{\pi fT}\right)^2\sin^2 2\pi fT.$$

(c) If $\{a_n\}$ takes the values $(0,1)$ with equal probability then $E\{a_n\} = 1/2$. This results in:
$$C_I(m) = \frac{1}{2}\left[2\delta(m) - \delta(m-2) - \delta(m+2)\right] \;\Rightarrow\; \Phi_{ii}(f) = 2\sin^2 2\pi fT,$$
$$S_U(f) = 2A^2T\left(\frac{\sin\pi fT}{\pi fT}\right)^2\sin^2 2\pi fT.$$
Thus, we obtain the same result as in (2b) but the magnitude of the various quantities is reduced by a factor of 2.
3. [2, Problem 1.16].
A zero-mean stationary process x(t) is applied to a linear filter whose impulse response is defined by a truncated exponential:
$$h(t) = \begin{cases} ae^{-at}, & 0 \le t \le T,\\ 0, & \text{otherwise.}\end{cases}$$
Show that the power spectral density (PSD) of the filter output y(t) is defined by
$$S_Y(f) = \frac{a^2}{a^2 + 4\pi^2 f^2}\left(1 - 2\exp(-aT)\cos 2\pi fT + \exp(-2aT)\right)S_X(f).$$
$$H(f) = \int_0^T a\exp(-at)\exp(-j2\pi ft)\,dt = a\int_0^T\exp(-(a+j2\pi f)t)\,dt = \frac{a}{a+j2\pi f}\left[1 - e^{-aT}(\cos 2\pi fT - j\sin 2\pi fT)\right].$$
Hence
$$|H(f)|^2 = \frac{a^2}{a^2 + 4\pi^2 f^2}\left(1 - 2e^{-aT}\cos 2\pi fT + e^{-2aT}\right),$$
and therefore $S_Y(f) = |H(f)|^2 S_X(f)$, which is the required result.
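As a sanity check on the closed form above, the following minimal Python sketch (not part of the original solution; the values of a, T and the test frequency are arbitrary assumptions) compares a numerical Fourier transform of the truncated exponential with the derived expression for |H(f)|².

```python
import numpy as np

a, T = 2.0, 1.5          # assumed example values
f = 0.7                  # assumed test frequency [Hz]

# Numerical Fourier transform of h(t) = a*exp(-a*t) on 0 <= t <= T (Riemann sum)
t = np.linspace(0.0, T, 200001)
dt = t[1] - t[0]
H = np.sum(a * np.exp(-a * t) * np.exp(-2j * np.pi * f * t)) * dt

# Closed form derived above
H2_closed = a**2 / (a**2 + 4 * np.pi**2 * f**2) * \
            (1 - 2 * np.exp(-a * T) * np.cos(2 * np.pi * f * T) + np.exp(-2 * a * T))

print(abs(H)**2, H2_closed)   # the two values should agree closely
```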
(a) For an independent, equiprobable information sequence, $S_S(f) = \frac{1}{T}|G(f)|^2$. Using $1 - e^{-j\pi fT} = e^{-j\pi fT/2}\,2j\sin(\pi fT/2)$,
$$G(f) = \frac{(1 - e^{-j\pi fT})^2}{j2\pi f} = jT\,e^{-j\pi fT}\,\frac{\sin^2(\pi fT/2)}{\pi fT/2},$$
hence
$$|G(f)|^2 = T^2\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2,\qquad S_S(f) = \frac{1}{T}|G(f)|^2 = T\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2.$$
(b) For a non-independent information sequence the power spectrum of s(t) is given by
$$S_S(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT}.$$
$$C_B(m) = E\{b_{n+m}b_n\} = E\{a_{n+m}a_n\} + kE\{a_{n+m-1}a_n\} + kE\{a_{n+m}a_{n-1}\} + k^2E\{a_{n+m-1}a_{n-1}\} = \begin{cases}1+k^2, & m = 0,\\ k, & m = \pm 1,\\ 0, & \text{otherwise.}\end{cases}$$
Hence:
$$\sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT} = 1 + k^2 + 2k\cos 2\pi fT.$$
We want:
$$S_S(1/T) = 0 \;\Rightarrow\; \left.\sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT}\right|_{f=1/T} = 0 \;\Rightarrow\; 1 + k^2 + 2k = 0 \;\Rightarrow\; k = -1.$$
With $k = -1$ this gives $S_S(f) = 4T\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2\sin^2\pi fT$. When the correlated symbol is four periods away, i.e. $b_n = a_n + ka_{n-4}$,
$$C_B(m) = \begin{cases}1+k^2, & m = 0,\\ k, & m = \pm 4,\\ 0, & \text{otherwise},\end{cases}\qquad \sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT} = 1 + k^2 + 2k\cos(2\pi f\cdot 4T).$$
$$y(t) = \int_{t-T}^{t}x(\tau)\,d\tau,$$
where x(t) is the input, y(t) is the output, and T is the integration period. Both x(t) and y(t) are sample functions of stationary processes X(t) and Y(t), respectively. Show that the power spectral density (PSD) of the integrator output is related to that of the integrator input by
$$S_Y(f) = T^2\operatorname{sinc}^2(fT)\,S_X(f).$$
Remark 1. $\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$.
Solution:
First, we will find the impulse response, h(t), of the running integrator:
$$h(t) = \int_{t-T}^{t}\delta(\tau)\,d\tau = \begin{cases}1, & 0 \le t \le T,\\ 0, & \text{otherwise.}\end{cases}$$
The corresponding frequency response is
$$H(f) = \int_0^T e^{-j2\pi ft}\,dt = \frac{1 - e^{-j2\pi fT}}{j2\pi f} = T\operatorname{sinc}(fT)\,e^{-j\pi fT}.$$
Therefore
$$S_Y(f) = |H(f)|^2 S_X(f) = T^2\operatorname{sinc}^2(fT)\,S_X(f).$$
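The two equivalent forms of H(f) used above can be checked numerically. The sketch below (an illustration with an arbitrary assumed T and test frequencies, not part of the original solution) evaluates both expressions and confirms they coincide, which is all that is needed for the PSD relation.

```python
import numpy as np

T = 2.0                              # assumed integration period
f = np.array([0.1, 0.37, 1.3])       # assumed test frequencies

# Direct evaluation of H(f) = (1 - exp(-j*2*pi*f*T)) / (j*2*pi*f)
H_direct = (1 - np.exp(-2j * np.pi * f * T)) / (2j * np.pi * f)

# Closed form: T * sinc(fT) * exp(-j*pi*f*T); np.sinc(x) = sin(pi x)/(pi x)
H_closed = T * np.sinc(f * T) * np.exp(-1j * np.pi * f * T)

print(np.max(np.abs(H_direct - H_closed)))   # ~1e-16: the two forms coincide
```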
Remark 2. Hypothesis testing is another common name for a decision problem: one has to decide between two or more hypotheses, say H0, H1, H2, ..., where Hi can be interpreted as "the unknown parameter has value i". Decoding a constellation with K symbols can be interpreted as selecting the correct hypothesis from H0, H1, ..., H(K-1), where Hi is the hypothesis that Si was transmitted.
1. Consider an equal-probability binary source, p(0) = p(1) = 1/2, and a continuous-output channel:
$$f_{R|M}(r|1) = ae^{-ar},\quad r \ge 0,\qquad f_{R|M}(r|0) = be^{-br},\quad r \ge 0,\qquad b > a > 0.$$
(a) Find a constant K such that the optimal decision rule is: decide 1 if $r \ge K$ and 0 otherwise.
(b) Find the respective error probability.
Solution:
(a) The optimal (MAP) decision rule compares $p(0)f_{R|M}(r|0)$ with $p(1)f_{R|M}(r|1)$ and decides 0 whenever the former is larger. Using the defined channel distributions and $p(0) = p(1)$:
$$be^{-br} \ge ae^{-ar} \;\Leftrightarrow\; \ln\frac{b}{a} \ge (b-a)r \;\Leftrightarrow\; r \le \frac{\ln(b/a)}{b-a} \triangleq K.$$
Hence the receiver decides 1 when $r \ge K$ and 0 otherwise, with $K = \frac{\ln(b/a)}{b-a}$.
(b)
$$p(e) = \frac{1}{2}\Pr\{r > K\,|\,0\} + \frac{1}{2}\Pr\{r < K\,|\,1\} = \frac{1}{2}\int_K^{\infty}be^{-br}\,dr + \frac{1}{2}\int_0^{K}ae^{-ar}\,dr = \frac{1}{2}e^{-bK} + \frac{1}{2}\left(1 - e^{-aK}\right).$$
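A quick Monte Carlo experiment confirms the threshold and the error probability derived above. The sketch below is illustrative only; the rates a = 1, b = 3 are assumed values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 3.0                      # assumed rates, b > a > 0
K = np.log(b / a) / (b - a)          # threshold derived above

n = 1_000_000
bits = rng.integers(0, 2, n)         # equiprobable source
# r | bit=0 ~ Exp(rate b), r | bit=1 ~ Exp(rate a)
r = np.where(bits == 0, rng.exponential(1 / b, n), rng.exponential(1 / a, n))
decisions = (r >= K).astype(int)     # decide "1" when r exceeds K

p_sim = np.mean(decisions != bits)
p_theory = 0.5 * np.exp(-b * K) + 0.5 * (1 - np.exp(-a * K))
print(p_sim, p_theory)               # the two values should match closely
```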
2. Consider a binary source: Pr{x = 2} = 2/3, Pr{x = 1} = 1/3, and the following channel
y = A x,
where x and A are independent.
A N (1, 1)
(Y |1) N (1, 1)
1 1
(y 1)2
exp
3 2
2
(y + 2)2
8
3y(y 4)
(
2,
1,
x
(y)
(y 1)2
2
y < 0, 4 < y
otherwise.
(b)
Z 0
Z
Z
1
2 4
f (y| 2)dy +
f (y|1)dy +
f (y|1)dy
p(e) =
3 0
3
4
2
0+2
4+2
01
41
1
=
Q
Q
+ 1Q
Q
3
2
2
3
1
1
1
0.15821
= Q(1) Q(3) =
3
3. Decision rules for binary channels.
(a) The Binary Symmetric Channel (BSC) has binary (0 or 1) inputs and outputs. It outputs each bit correctly with probability 1 - p and incorrectly with probability p. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BSC when p < 1/2. How are the decision rules different when p > 1/2?
(b) The Binary Erasure Channel (BEC) has binary inputs as with the BSC. However, there are three possible outputs. Given an input of 0, the output is 0 with probability 1 - p1 and 2 with probability p1. Given an input of 1, the output is 1 with probability 1 - p2 and 2 with probability p2. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BEC when p1 < p2 < 1/2. How are the decision rules different when p2 < p1 < 1/2?
Solution:
(a) For equally likely inputs the MAP and ML decision rules are identical. In each case we wish to maximize $p_{y|x}(y|x_i)$ over the possible choices for $x_i$. The decision rules are shown below:
$$p < \tfrac{1}{2}:\quad \hat X = Y,\qquad\qquad p > \tfrac{1}{2}:\quad \hat X = 1 - Y.$$
(b) Again, since we have equiprobable signals, the MAP and ML decision rules are the same. The decision rules are as follows:
$$p_1 < p_2 < \tfrac{1}{2}:\quad \hat X = \begin{cases}Y, & Y = 0, 1,\\ 1, & Y = 2,\end{cases}\qquad\qquad p_2 < p_1 < \tfrac{1}{2}:\quad \hat X = \begin{cases}Y, & Y = 0, 1,\\ 0, & Y = 2.\end{cases}$$
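The rules above are easy to check by simulation. The following minimal sketch (illustrative only; the crossover and erasure probabilities are assumed example values) applies the stated decision rules to simulated BSC and BEC outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# --- BSC with crossover p < 1/2: MAP/ML rule is x_hat = y ---
p = 0.1
x = rng.integers(0, 2, n)
y = x ^ (rng.random(n) < p)                 # flip each bit with probability p
print("BSC error rate:", np.mean(y != x))   # ~p, as expected for x_hat = y

# --- BEC with erasure probabilities p1 (input 0) and p2 (input 1), p1 < p2 < 1/2 ---
p1, p2 = 0.1, 0.3
x = rng.integers(0, 2, n)
erase = rng.random(n) < np.where(x == 0, p1, p2)
y = np.where(erase, 2, x)                   # "2" marks an erasure
x_hat = np.where(y == 2, 1, y)              # ML: erasures decoded as "1" since p2 > p1
print("BEC error rate:", np.mean(x_hat != x))  # = 0.5*p1 (only erased zeros are wrong)
```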
4. In a binary hypothesis testing problem, the observation Z is Rayleigh distributed under both hypotheses, with different parameters, that is,
$$f(z|H_i) = \frac{z}{\sigma_i^2}\exp\left(-\frac{z^2}{2\sigma_i^2}\right),\qquad z \ge 0,\quad i = 0, 1.$$
You need to decide if the observed variable Z was generated with $\sigma_0^2$ or with $\sigma_1^2$, namely choose between H0 and H1.
(a) Obtain the decision rule for the minimum probability of error criterion. Assume that H0 and H1 are equiprobable.
(b) Extend your results to N independent observations, and derive the expressions for the resulting probability of error.
Note: If $R \sim$ Rayleigh($\sigma$) then $\sum_{i=1}^{N}R_i^2$ has a gamma distribution with parameters N and $2\sigma^2$: $Y = \sum_{i=1}^{N}R_i^2 \sim \Gamma(N, 2\sigma^2)$.
Solution:
(a) Taking logarithms of the likelihoods,
$$\log f(z|H_i) = \log z - \log\sigma_i^2 - \frac{z^2}{2\sigma_i^2},$$
so the log-likelihood ratio test becomes
$$\log\frac{f(z|H_1)}{f(z|H_0)} = \log\frac{\sigma_0^2}{\sigma_1^2} + z^2\left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right) \mathop{\gtrless}_{H_0}^{H_1} 0,$$
which (assuming $\sigma_1^2 > \sigma_0^2$) reduces to
$$z^2 \mathop{\gtrless}_{H_0}^{H_1} \frac{2\sigma_1^2\sigma_0^2}{\sigma_1^2 - \sigma_0^2}\log\frac{\sigma_1^2}{\sigma_0^2} \triangleq \gamma,\qquad \hat H = \begin{cases}H_1, & z \ge \sqrt{\gamma},\\ H_0, & z < \sqrt{\gamma}.\end{cases}$$
(b) For N independent observations the log-likelihood ratio test (LRT) is
$$\log\mathrm{LRT} = \sum_{n=0}^{N-1}\log\frac{f(z_n|H_1)}{f(z_n|H_0)} = N\log\frac{\sigma_0^2}{\sigma_1^2} + \left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right)\sum_{n=0}^{N-1}z_n^2 \mathop{\gtrless}_{H_0}^{H_1} 0,$$
i.e.,
$$\sum_{n=0}^{N-1}z_n^2 \mathop{\gtrless}_{H_0}^{H_1} 2N\frac{\sigma_1^2\sigma_0^2}{\sigma_1^2 - \sigma_0^2}\log\frac{\sigma_1^2}{\sigma_0^2} \triangleq \gamma.$$
Define $Y = \sum_{n=0}^{N-1}z_n^2$. Under $H_i$, $Y \sim \Gamma(N, 2\sigma_i^2)$, and therefore
$$P_M = \Pr\{\text{decoding }H_0\text{ if }H_1\text{ was transmitted}\} = 1 - \Pr\{Y > \gamma|H_1\} = \frac{\gamma(N, \gamma/2\sigma_1^2)}{\Gamma(N)},$$
$$P_{FA} = \Pr\{\text{decoding }H_1\text{ if }H_0\text{ was transmitted}\} = \Pr\{Y > \gamma|H_0\} = 1 - \frac{\gamma(N, \gamma/2\sigma_0^2)}{\Gamma(N)}.$$
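The error probabilities above are simple to evaluate with SciPy's gamma distribution, and a Monte Carlo run over Rayleigh samples confirms them. The sketch below is an illustration only; the variances, N and the SciPy dependency are assumptions made for the example.

```python
import numpy as np
from scipy.stats import gamma

sigma0_sq, sigma1_sq = 1.0, 4.0      # assumed variance parameters, sigma1^2 > sigma0^2
N = 8                                # assumed number of independent observations

# Threshold on Y = sum z_n^2 derived above
thr = 2 * N * sigma0_sq * sigma1_sq / (sigma1_sq - sigma0_sq) * np.log(sigma1_sq / sigma0_sq)

# Under H_i, Y ~ Gamma(shape=N, scale=2*sigma_i^2)
P_FA = gamma.sf(thr, a=N, scale=2 * sigma0_sq)    # Pr{Y > thr | H0}
P_M  = gamma.cdf(thr, a=N, scale=2 * sigma1_sq)   # Pr{Y < thr | H1}
print(P_FA, P_M)

# Monte Carlo sanity check under H0 (Rayleigh samples with parameter sigma0)
rng = np.random.default_rng(2)
z = rng.rayleigh(np.sqrt(sigma0_sq), size=(200_000, N))
print(np.mean(np.sum(z**2, axis=1) > thr))        # ~ P_FA
```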
[Figure: (a) Channel 1, (b) Channel 2.]
Here $\gamma(s, x) = \int_0^x t^{s-1}e^{-t}\,dt$ denotes the lower incomplete gamma function.
Solution:
(a) For a channel output y = 0:
1
(1 p)
3
Pr {x = 1} Pr {y = 0|x = 1} = 0.
Pr {x = 0} Pr {y = 0|x = 0} =
1
p.
3
Pr {x = 0} Pr {y = 0|x = 0} =
12
Since
1
2
1
2
2
p + q(1 ) = q + (p 2q).
3
3
3
3
(d) As p(e) is linear with the respect to , its minimal value is achieved in one of the edge points.
For 2q < p the minimal error probability, p(e) = 23 q, is achieved for = 0. For p 2q the
minimal error probability, p(e) = 31 p, is achieved for = 1.
13
1 |r|
e
2
(a) Obtain the decision rule for the minimum probability of error criterion and the correspondingly minimal probability of error.
0 2
(b) For the cost matrix C =
, obtain the optimal generalized decision rule and the error
0
probability.
Solution:
(a)
|r| > 1
: fR|M (r|M = 0) m
= 1.
|r| < 1
1
=0
|r| 0 m
0
1 |r| 1
2e
1
1
2
0
1
1 |r|
e
dr = [1 e1 ]
2
2
=
=
p(1) C01 C11
2
2
|r| > 1
fR|M (r|M = 0) = 0
|r| < 1
1 |r| 1
1
2e
1
2
2
0
1
|r| ln 2
0
(
1, |r| < ln 2, |r| > 1
0, ln 2 < |r| < 1.
Probability of error
Z
ln 2
1
dr = ln 2
2
Z ln 2
1
1
PM = Pr{m
= 0|m = 1} =
e|r| dr = e1
2
2
ln 2<|r|<1
1
1
p(e) = p(0)PF A + p(1)PM = [ln 2 + e1 ]
2
2
PF A
Pr{m
= 1|m = 0} =
y = m + N,
(a) Obtain the decision rule for the minimum probability of error criterion and the minimal
probability of error.
0
1
(b) For the cost matrix C =
, obtain the optimal Bayes decision rule and the error
100 0
probability.
Solution:
(a)
(
f (y|1)
0,
(
f (y| 1)
1
4,
0,
(
1
4,
1 < y < 3,
otherwise.
3 < y < 1,
otherwise.
1, 3 < y < 1,
1,
1 < y < 3.
1
dy = 0.05
4
p(1) 100
p(1)
1
p(1)f (y|1)
100p(1)f (y| 1)
(
1, 3 < y < 1,
1,
1 < y < 3.
1
dy = 0.45
4
3. A binary digital communication system transmits one of the following two symbols S0 = 0, S1 = A,
M consecutive times using a zero mean AWGN channel with variance 2 . The receiver decide which symbol was transmitted based on the corresponding M received symbols {ri }, i =
0)
1, 2, . . . , M . The symbols a-priori probabilities obey p(S
p(S1 ) = , while the receiver uses the cost
C00 C01
matrix C =
.
C10 C11
and find .
Solution:
(a)
fR|S (r|S0 ) =
M
Y
i=1
1
2 2
ri2
e 22 ,
fR|S (r|S1 ) =
M
Y
i=1
1
2 2
fR|S (r|S0 )
C C11
S0 01
S1
C10 C00
C C11
i=1
S0 01
M
S
X
A2 + 2ri A 1
C10 C00
ln
2 2
C01 C11
i=1
S0
M
S
X
C10 C00
ri A 1 M A2
+
ln
2
2 2
C01 C11
i=1
S0
M
S1
1 X
2
C10 C00
A
ri +
ln
.
M i=1
2
MA
C01 C11
S0
M
Y
A2 +2ri A
2 2
16
(ri A)2
2 2
0
r = si + n n = [n1 , n2 ], n N (0, n ), n = 1
0 22
The noise vector, n, and the messages mi are independent.
(a) Obtain the optimal decision rule using MAP criterion, and examine it for the following cases:
i. q = 21 , 1 = 2 .
ii. q = 12 , 12 = 222 .
iii. q = 31 , 12 = 222 .
(b) Derive the error probability for the obtained decision rule.
Solution:
(a) The conditional probability distribution function R|Si N (Si , n ):
1
1
(r si )T 1
(r
s
)
exp
f (r|si ) = p
i
n
2
(2)2 det n
The MAP optimal decision rule
p(m0 )f (r|s0 )
m0
m1
p(m1 )f (r|s1 )
1
(r s0 )T 1
(r
s
)
0
n
2
m0
m1
1q
p
exp
(2)2 det n
1
(r s0 )T 1
(r
s
)
0
n
2
m0
m1
(1 q) exp
T 1
(r s1 )T 1
n (r s1 ) (r s0 ) n (r s0 )
m0
m1
2 ln
q
p
(2)2 det n
exp
q exp
1
(r s1 )T 1
n (r s1 )
2
1
(r s1 )T 1
(r
s
)
1
n
2
1q
q
Assign rT = [x, y]
(x + 1)2
(y + 1)2
(x 1)2
(y 1)2
+
12
22
12
22
m
x
y 01 1 q
ln
+
12
22
2
q
m1
17
m0
m1
2 ln
1q
q
1
2
ln 1q
q , and define z =
x
12
Z|si N ((1)i
y
.
22
12 + 22 12 + 22
,
),
12 22
12 22
i = 0, 1
K + 12 +22
2 2
= Q q 2 1 22
1 +2
12 22
1 +2
12 22
q
2
For the case q = 12 , 1 = 2 the error probability equals Q
.
2
1
n = [n0 , n1 , n2 ]
where the elements of n are i.i.d with the following probability density function
fNK (nk ) =
18
1 |nk |
e
2
= fN (r si ) =
2
Y
fN (nk = rk sik )
k=0
1
1 |r0 si,0 | 1 |r1 si,1 | 1 |r2 si,2 |
e
e
e
= e[|r0 si,0 |+|r1 si,1 |+|r2 si,2 |]
2
2
2
8
H1
H0
p(H0 )
p(H1 )
H1
H0
2
(z1 + 2z2 )
2
H1
H0
z1 + 2z2
H1
H0
(b) Define X = Z1 + 2Z2 . Since V1 , V2 are independent Z1 , Z2 are independent as well. A linear
combination of Z1 , Z2 yia Gaussian R.V with the following parameters
E{X|H0 } = 2,
E{X|H1 } = 2,
V ar{X|H0 } = V ar{X|H1 } = 2 2
And the probability of error events
PF A
= H1 |H = H0 } =
Pr{H
f (x|H1 )dx,
Z
= H0 |H = H1 } = 1
Pr{H
f (x|H0 )dx
0
PM
n = [n0 , n1 ]T ,
where N0 N (0, 2 ) and fN1 (n1 ) = 2 e|n1 | . The noise vector elements, n0 , n1 , and the sources
are all mutually independent.
(a) Find the conditional PDFs fR|S (r|s0 ), fR|S (r|s1 ).
(b) Find the log likelihood ratio.
(c) Find and draw the optimal decision regions (in the (r0 , r1 ) plane) for =
1
2 2 .
Solution:
(a) The noise vector PDF is
fN (n) =
1
2 2
n2
0
e 22
|n1 |
,
e
2
1
2 2
(r0 1)2
2 2
|r1 1|
e
,
2
20
fR|S (r|s1 ) =
1
2 2
(r0 +1)2
2 2
|r1 +1|
e
.
2
fR|S (r|s1 )
1
2
2
(r
1)
(r
+
1)
(|r1 + 1| |r1 1|)
=
0
0
fR|S (r|s0 )
2 2
2r0
= 2 (|r1 + 1| |r1 1|) .
1
2 2 ,
s1
2r0
1
2 2 (|r1 + 1| |r1 1|) 0
2
s0
s1
|r1 + 1| + |r1 1| 4r0 0.
s0
For r1 1:
s1
r1 + 1 r1 + 1 4r0 0
s0
s0
2r0 1 0.
s1
For 1 r1 1:
s1
(r1 + 1) r1 + 1 4r0 0
s0
s0
2r0 + r1 0.
s1
s1
(r1 + 1) + r1 1 4r0 0
s0
s0
2r0 + 1 0.
s1
For 1 r1 :
21
1 m M, 0 t T
k=1
Show that the M signal waveforms {s0m (t)} have equal energy, given by
0 = (M 1)
mn =
1
M 1
Solution:
The energy of the signal waveform s0m (t) is:
2
M
1 X
dt =
sk (t) dt
sm (t) M
k=1
Z T
M Z T
M X
M Z
X
2 X T
1
2
sk (t)sl (t)dt
sm (t) + 2
sm (t)sk (t)dt
M
M
0
0
0
Z
2
|s0m (t)|
k=1 l=1
k=1
M M
2
1 XX
kl
+ 2
M
M
1
2
M 1
=
M
M
M
k=1 l=1
M
M
1 X
1 X
sm (t)
sk (t) sn (t)
sl (t) dt
M
M
0
k=1
l=1
Z
Z
M
M
M
T
1 XX
1 2 X T
sm (t)sn (t)dt + 2
sk (t)sl (t)dt 0
sm (t)sk (t)dt
M
M
0
0
1
0
1
0
Z
1
2
M2 M M
M 1
M
k=1 l=1
1
=
M 1
4 hs
j, k {1, 2, . . . , M }.
j (t), sk (t)i = 0, j 6= k,
5 The energy of the signal waveform s (t) is: =
m
|sm (t)|2 dt
k=1
f1 (t)
f2 (t)
f3 (t)
0 t < 2,
2,
21 , 2 t < 4,
0,
otherwise.
(
1
2 , 0 t < 4,
0, otherwise.
0 t < 1, 2 t < 3
2,
1
2 , 1 t < 2, 3 t < 4
0,
otherwise.
1, 0 < t < 1
1,
1t<3
x(t) =
1,
3t<4
0,
otherwise.
and if you can determine the weighting coefficients, otherwise explain.
Solution:
(a) To show that the waveforms fn (t), n = 1, 2, 3 are orthogonal we have to prove that:
Z
fn (t)fm (t)dt = 0. m 6= n
For n = 1, m = 2:
Z
Z
f1 (t)f2 (t)dt
f1 (t)f2 (t)dt
0
2
f1 (t)f2 (t)dt
1
4
=
For n = 1, m = 3:
Z
dt
0
1
4
dt = 0
2
Z
f1 (t)f3 (t)dt =
f1 (t)f2 (t)dt +
f1 (t)f3 (t)dt
0
=
For n = 2, m = 3:
Z
1
4
dt
0
f2 (t)f3 (t)dt =
1
4
dt
1
1
4
1
4
dt +
2
1
4
1
4
dt = 0
3
f2 (t)f3 (t)dt
0
1
4
dt
0
1
4
dt +
1
dt
2
dt = 0
3
Thus, the signals fn (t) are orthogonal. It is also straightforward to prove that the signals
have unit energy:
Z
2
|fn (t)| dt = 1, n = 1, 2, 3
Z
Z
1 1
1 2
=
x(t)f1 (t)dt =
dt +
dt
2 0
2 1
0
Z 4
Z 4
1
=
x(t)f2 (t)dt =
x(t)dt = 0
2
0
0
Z 1
Z
Z 4
1 2
1
dt
dt +
=
x(t)f1 (t)dt =
2 0
2 1
0
Z
x1
x2
x1
1
2
1
2
1
dt +
2
1
dt +
2
dt = 0
3
dt = 0
3
As is observed, x(t) is orthogonal to the signal waveforms fn(t), n = 1, 2, 3, and thus it cannot be represented as a linear combination of these functions.
3. [1, Problem 4.11].
Consider the following four waveforms
$$s_1(t) = \begin{cases}2, & 0 \le t < 1,\\ -1, & 1 \le t < 4,\\ 0, & \text{otherwise},\end{cases}\qquad s_2(t) = \begin{cases}-2, & 0 \le t < 1,\\ 1, & 1 \le t < 3,\\ 0, & \text{otherwise},\end{cases}$$
$$s_3(t) = \begin{cases}1, & 0 \le t < 1,\ 2 \le t < 3,\\ -1, & 1 \le t < 2,\ 3 \le t < 4,\\ 0, & \text{otherwise},\end{cases}\qquad s_4(t) = \begin{cases}1, & 0 \le t < 1,\\ -2, & 1 \le t < 3,\\ 2, & 3 \le t < 4,\\ 0, & \text{otherwise}.\end{cases}$$
(a) Determine the dimensionality of the waveforms and a set of basis functions.
(b) Use the basis functions to present the four waveforms by vectors s1 , s2 , s3 and s4 .
(c) Determine the minimum distance between any pair of vectors.
Solution:
(a) As an orthonormal basis set we take the four unit pulses
$$f_k(t) = \begin{cases}1, & k-1 \le t < k,\\ 0, & \text{otherwise},\end{cases}\qquad k = 1, 2, 3, 4.$$
With this basis the waveforms are expressed as
$$\begin{pmatrix}s_1(t)\\ s_2(t)\\ s_3(t)\\ s_4(t)\end{pmatrix} = \begin{pmatrix}2 & -1 & -1 & -1\\ -2 & 1 & 1 & 0\\ 1 & -1 & 1 & -1\\ 1 & -2 & -2 & 2\end{pmatrix}\begin{pmatrix}f_1(t)\\ f_2(t)\\ f_3(t)\\ f_4(t)\end{pmatrix}$$
Note that the rank of the transformation matrix is 4 and therefore the dimensionality of the waveforms is 4.
(b) The representation vectors are
$$s_1 = (2, -1, -1, -1),\quad s_2 = (-2, 1, 1, 0),\quad s_3 = (1, -1, 1, -1),\quad s_4 = (1, -2, -2, 2).$$
(c) The distance between the first and the second vector is:
$$d_{1,2} = \|s_1 - s_2\| = \sqrt{4^2 + 2^2 + 2^2 + 1^2} = \sqrt{25} = 5.$$
Similarly,
$$d_{1,3} = \|s_1 - s_3\| = \sqrt{1 + 0 + 4 + 0} = \sqrt{5},\qquad d_{1,4} = \|s_1 - s_4\| = \sqrt{1 + 1 + 1 + 9} = \sqrt{12},$$
$$d_{2,3} = \|s_2 - s_3\| = \sqrt{9 + 4 + 0 + 1} = \sqrt{14},\qquad d_{2,4} = \|s_2 - s_4\| = \sqrt{9 + 9 + 9 + 4} = \sqrt{31},$$
$$d_{3,4} = \|s_3 - s_4\| = \sqrt{0 + 1 + 9 + 9} = \sqrt{19}.$$
Thus, the minimum distance between any pair of vectors is $d_{\min} = d_{1,3} = \sqrt{5}$.
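The distances are straightforward to verify numerically; the following short Python sketch (an illustration, not part of the original solution) computes all pairwise distances from the representation vectors reconstructed in part (b).

```python
import numpy as np

# Representation vectors from part (b)
s = np.array([[ 2, -1, -1, -1],
              [-2,  1,  1,  0],
              [ 1, -1,  1, -1],
              [ 1, -2, -2,  2]], dtype=float)

for i in range(4):
    for j in range(i + 1, 4):
        d2 = np.sum((s[i] - s[j])**2)
        print(f"d_{i+1},{j+1} = sqrt({d2:.0f}) = {np.sqrt(d2):.3f}")
# minimum distance: d_1,3 = sqrt(5) ~ 2.236
```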
5.
Define
T
Z
s21
4 1dt = 4
(0
4, 1 t < 2,
= s2 (t) s21 1 (t) =
0,
otherwise.
s2 (t)1 (t)dt =
g2 (t)
2 (t) = qR
T
0
(
=
g22 (t)dt
1, 1 t < 2,
0,
otherwise.
Define
Z
s31
=
0
0
2T
3 1dt = 3
(
3, 2 t < 3,
= s3 (t) s31 1 (t) s32 2 (t) =
0, otherwise.
s3 (t)2 (t)dt =
g3 (t)
3 1dt = 3
s3 (t)1 (t)dt =
Z
s32
(b)
s1 (t)
21 (t)
s2 (t)
41 (t) + 42 (t)
s3 (t)
5. Optimum receiver.
Suppose one of M equiprobable signals xi (t), i = 0, . . . , M 1 is to be transmitted during a period
of time T over an AWGN channel. Moreover, each signal is identical to all others in the subinterval
[t1 , t2 ] where 0 < t1 < t2 < T .
(a) Show that the optimum receiver may ignore the subinterval [t1 , t2 ].
(b) Equivalently, show that if x0 , . . . , xM 1 all have the same projection in one dimension6 , then
this dimension may be ignored.
(c) Does this result necessarily hold true if the noise is Gaussian but not white? Explain.
Solution:
(a) The data signals xi (t) being equiprobable, the optimum decision rule is the Maximum Like2
lihood (ML) rule, given by, (in vector form) mini |y xi | . From the invariance of the inner
product, the ML rule is equivalent to,
Z T
2
min
|y(t) xi (t)| dt
i
6 xT
i
= xi1
xi2
...
26
t1
|y(t) xi (t)| dt
t2
Since the second integral over the interval [t1 , t2 ] is constant as a function of i, the optimum
decision rule reduces to,
Z t1
Z T
2
2
|y(t) xi (t)| dt +
|y(t) xi (t)| dt
min
i
t2
And therefore, the optimum receiver may ignore the interval [t1 , t2 ].
(b) In an appropriate orthonormal basis of dimension N M , the vectors xi and y are given
by,
xTi = xi1 xi2 . . . xiN
yT = y1 y2 . . . yN
Assume that xim = x1m for all i, the optimum decision rule becomes,
min
i
M
X
M
X
k=1
k=1,k6=m
Since |ym xim | is constant for all i, the optimum decision rule becomes,
min
i
M
X
|yk xik |
k=1,k6=m
(
3 (t) =
0,
3,
2
3
t < 1,
,
otherwise
3
3
3 (t)
s1 (t) = 1 (t) + 2 (t) +
4
4
3
3
3 (t)
s2 (t) = 1 (t) + 2 (t) +
4
4
3
3
s3 (t) = 2 (t)
3 (t).
4
4
Assume that these signals are used to transmit equiprobable symbols over an AWGN channel with
noise spectral density N20 .
27
(a) Show that optimal decisions (minimum probability of symbol error) can be obtained via the
outputs of two correlators (or sampled matched filters) and specify the waveforms used in
these correlators (or the impulse responses of the filters).
(b) Assume that p(e) is the resulting probability of symbol error when optimal demodulation
and detection is employed. Show that
r
r
2
2
Q
< p(e) < 2Q
.
N0
N0
Solution:
The three signals can be expressed in terms of two orthonormal basis waveforms 1 (t) and 2 (t).
These can be chosen, e.g., as
3
1
1 (t) = 1 (t), 2 (t) =
2 (t) + 3 (t).
2
2
The above choice gives
3
2 (t),
s1 (t) = 1 (t) +
2
3
s2 (t) = 1 (t) +
2 (t),
2
3
3
s1 = (1,
), s2 = (1,
),
2
2
s3 (t) =
3
2 (t),
2
s3 = (0,
3
),
2
28
2
N0
.
i = 0, 1,
where n(t) is a zero-mean AWGN with variance n2 . The demodulator cross-correlates the received
signal r(t) with si (t), i = 0, 1 and samples the output of the correlator at t = T .
(a) Determine the optimum detector for an AWGN channel and the optimum threshold, assuming
that the signals are equally probable.
(b) Determine the probability of error as a function of the SNR. How does on-off signalling
compare with antipodal signaling?
Solution:
(a) The correlation type demodulator employs a filter:
(
1 , 0 t < T,
T
f (t) =
0,
otherwise.
Hence, the sampled outputs of the cross-correlators are:
r = si + n,
i = 0, 1
where s0 = 0, s1 = A T and the noise term n is a zero-mean Gaussian random variable with
variance n2 = N20 . The probability density function for the sampled output is:
r2
1
f (r|s0 ) =
e N0
N0
1
2
A2 T =
1
2
(rA T )2
1
f (r|s1 ) =
e N0
N0
SNROn-Off =
1
2
A2 T
N0
2
A2 T
.
N0
s1
s0
s1
s0
1
A T
2
29
N0
2 .
=
=
1
2
1
2
1
2A
1
2A
r2
1
1
e N0 dr +
2
N0
1
2A
1
f (r|s0 )dr +
2
f (r|s1 )dr
1
2A
(rA T )2
1
e N0 dr
N0
Z 1q 2
1 2 N0 A T 1 x2
1 x2
e 2 dx
e 2 dx +
q
1
2
2
2
2
2
N0 A T
!
r
r
1
2
1
Q
A T =Q
SNROn-Off
2 N0
2
1
2
Thus, the on-off signaling requires a factor of two more energy to achieve the same probability
of error as the antipodal signaling.
2. [2, Problem 5.11].
Consider the optimal detection of the sinusoidal signal
8t
s(t) = sin
, 0tT
T
in additive white Gaussian noise.
(a) Determine the correlator output (at t = T ) assuming a noiseless input.
(b) Determine the corresponding matched filter output, assuming that the filter includes a delay T to make it causal.
(c) Hence show that these two outputs are the same at time instant t = T .
Solution:
For the noiseless case, the received signal r(t) = s(t), 0 t T .
(a) The correlator output is:
T
Z
y(T ) =
Z
r( )s( )d =
s ( )d =
0
T
2
sin
0
8
T
d =
T
2
(b) The matched filter is defined by the impulse response h(t) = s(T t). The matched filter
output is therefore:
Z
Z
y(t) =
r()h(t )d =
s()s(T t + )d
=
=
=
8(T t + )
8
sin
d
T
T
0
Z
Z
1 T
1 T
8(T t)
8(T t + )
cos
d
cos
d
2 0
T
2 0
T
T
8(t T )
T
8(T t)
T
8t
cos
sin
sin
.
2
T
16
T
16
T
sin
T
2
which is exactly the same as the correlator output determined in item (2a).
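The equality of the correlator output and the matched-filter output at t = T can also be seen numerically. The sketch below is illustrative only; the normalization T = 1 and the grid size are assumptions, and the matched filter is sampled at the grid index corresponding to t = T.

```python
import numpy as np

T = 1.0                               # assumed normalization
t = np.linspace(0.0, T, 4001)
dt = t[1] - t[0]
s = np.sin(8 * np.pi * t / T)         # signal s(t) of the problem

# Correlator output at t = T: integral of s(tau)^2 over [0, T]
y_corr = np.sum(s * s) * dt

# Matched filter h(t) = s(T - t); sample its output (convolution) at t = T
h = s[::-1]
y_mf = (np.convolve(s, h) * dt)[len(t) - 1]

print(y_corr, y_mf, T / 2)            # all three are approximately T/2
```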
3. SNR Maximization with a Matched Filter.
Prove the following theorem:
For the real system shown in Figure 4, the filter h(t) that maximizes the signal-to-noise ratio at
sample time Ts is given by the matched filter h(t) = x(Ts t).
[x(t) h(t)|t=Ts ]2
Z
2
=
x(t)h(Ts t)dt = [hx(t), h(Ts t)i]2
The sampled noise at the matched filter output has energy or mean-square
Z
Z
Noise Energy = E
n(t)h(Ts t)dt
n(s)h(Ts s)ds
Z Z
N0
=
(t s)h(Ts t)h(Ts s)dtds
2
Z
N0
=
h2 (Ts t)dt
2
N0
2
khk
=
2
The signal-to-noise ratio, defined as the ratio of the signal power in to the noise power, equals
SNR =
with equality if and only if x(t) = kh(Ts t) where k is some arbitrary constant. Thus, by
inspection, the SNR is maximized over all choices for h(t) when h(t) = x(Ts t). The filter h(t)
is matched to x(t), and the corresponding maximum SNR (for any k) is
SNRmax =
31
2
2
kxk
N0
(q
0 t < aT,
T,
q
2E
cos 2t
, 0 t < T,
T
T
s0 (t) = E , aT t < T, s1 (t) =
T
0,
otherwise.
0,
otherwise.
The observation, r(t), obeys
r(t)
E{n(t)n( )}
si (t) + n(t),
N0
=
(t ),
2
i = 0, 1
n(t) N (0,
N0
(t )).
2
(a) Find the optimal receiver for the above two signals, write the solution in terms of s0 (t) and
s1 (t).
(b) Find the error probability of the optimal receiver for equiprobable signals.
(c) Find the parameter a, which minimizes the error probability.
Solution:
(a) We will use a type II, which uses filters matched to the signals si (t), i = 0, 1. The optimal
receiver is depicted in Figure 5.
=
=
E
N0
E
N0
ln p0 [h1 (t) r(t)]t=T
ln p1 +
[h0 (t) r(t)]t=T +
2
2
2
2
N0 p0
ln
+ [(h0 (t) h1 (t)) r(t)]t=T
2
p1
Hence the optimal receiver can be implemented using one convolution operation instead of
two convolution operations, as depicted in Figure 6.
32
hs0 , s1 i
hs0 , s1 i
=
ks0 k ks1 k
E
2E 2 hs0 , s1 i
p
d =
2E(1 )
s
E(1 )
p(e) = Q
N0
=
s0 (t)s1 (t)dt
Z Tr r
Z aT r r
E 2E
2t
E 2E
2t
cos
dt
cos
dt
T
T
T
E
E
T
aT
0
E
E
sin 2a + 2
sin 2a
2
2
2
2
sin 2a
s
E(1 2 sin 2a)
Q
N0
0
=
=
=
p(e)
In order to minimize the probability of error, we will maximize the Q function argument:
sin 2a = 1
3
a =
4
33
0 t < T2 ,
T,
q
s2 (t) = E , T t < T,
T
2
0,
otherwise.
q
T
T, 0 t < 2,
q
E
T
s3 (t) =
2 t < T,
T,
0,
otherwise.
The observation, r(t), obeys
r(t)
E{n(t)n( )}
si (t) + n(t),
N0
=
(t ),
2
i = 0, 1, 2, 3
n(t) N (0,
N0
(t )).
2
(a) Find a signal space representation for the signals si (t), i = 0, 1, 2, 3, and draw the optimal
decision regions.
(b) Find the optimal receiver for the above four signals which comprises of at most two filters
(or two multipliers and integrators).
(c) Find the error probability of the optimal receiver.
Solution:
(a) The following two orthonormal functions comprise a basis to the signal space
1
0 (t) = s0 (t),
E
1
1 (t) = s2 (t).
E
It is easy to verify that h0 (t), 1 (t)i = 0, and that {0 (t), 1 (t)} spans the signal space.
Figure 7 depicts the signal space spanned by {0 (t), 1 (t)} and the optimal decision regions.
34
s (t), y > |y | ,
1
1
2
s(t) =
s
(t),
y
>
|y
|
2
2
1 ,
2E
=Q
2N0
!
E
.
N0
Let p(c) = (1 Q)2 denote the probability of a correct decision. Then, the error probability
is
p(e) = 1 p(c) = 1 (1 Q)2 = 2Q Q2 .
35
and compares U with a threshold A and a threshold A. If U > A the decision is made that s(t)
was sent. If U < A, the decision is made in favor of s(t). If A U A, the decision is made
in favor of 0.
(a) Determine the three conditional probabilities of error p(e|s(t)), p(e|0)) and p(e| s(t)).
(b) Determine the average probability of error p(e) as a function of the threshold A, assuming
that the three symbols are equally probable a priori.
(c) Determine the value of A that minimizes p(e).
Solution:
s(t) + z(t)
RT
s(t) + z(t)
(a) U = Re 0 r(t)s(t)dt , where r(t) =
depending on which signal was
z(t)
sent. If we assume that s(t) was sent:
Z
U = Re
Z
s(t)s (t)dt + Re
where E =
1
2
RT
0
z(t)s (t)dt = 2E + N
RT
0
z(t)s (t)dt is a Gaussian
random variable with zero mean and variance 2EN0 . Hence, given that s(t) was sent, the
probability of error is:
2E A
p1 (e) = Pr{N < A 2E} = Q
2EN0
When s(t) is transmitted: U = 2E + N , and the corresponding conditional error probability is:
2E A
p2 (e) = Pr{N > A + 2E} = Q
2EN0
and finally, when 0 is transmitted: U = N , and the corresponding error probability is:
A
p3 (e) = Pr{N > A or N < A} = 2Q
2EN0
(b)
1
2
2E A
A
p(e) = [p1 (e) + p2 (e) + p3 (e)] =
Q
+Q
3
3
2EN0
2EN0
36
R
x
t
1 e 2
2
r
4
E
p(e) = Q
3
2N0
2. [1, Problem 5.19].
Consider a signal detector with an input
r = A + n,
A>0
where +A and A occur with equal probability and the noise variable n is characterized by the
Laplacian p.d.f:
2|n|
1
f (n) = e
2
(a) Determine the probability of error as a function of the parameters A and .
(b) Determine the SNR required to achieve an error probability of 105 . How does the SNR
compare with the result for Gaussian p.d.f?
Solution:
(a) Let =
2
.
=
=
=
=
=
1
1
Pr{Error|A} + Pr{Error| A}
2
2
Z
Z
1 0
1
f (r|A)dr +
f (r| A)dr
2
2 0
Z
Z
1 |r+A|
1 0 |rA|
e
dr +
e
dr
2 2
2 0 2
Z
Z
A |x|
|x|
e
dx +
e
dx
4
4 A
1 A
1 2A
e
= e
2
2
A2
2
p(e) =
For p(e) = 105 we obtain:
p(e) = Q
SNR
5
where SNR
is the signal to noise ratio at the output of the matched filter. With p(e) = 10
we find SNR = 4.26 and therefore SNR = 12.594 dB. Thus the required signal to noise
ratio is 5 dB less when the additive noise is Gaussian.
Eb ck + nk ,
k = 1, 2, . . . , n
represents the output sequence of samples from a demodulator, where ck = 1 are elements of
one of two possible code words, C1 = [1 1 . . . 1] and C2 = [1 1 . . . 1 1 . . . 1]. The code word
C2 has w elements that are +1 and n w elements that are 1, where w is a positive integer.
The noise sequence {nk } is white Gaussian with variance 2 .
(a) What is the optimum ML detector for the two possible transmitted signals?
(b) Determine the probability of error as a function of the parameters 2 , Eb , w.
(c) What is the value of w that minimizes the the error probability?
Solution:
(a) The optimal ML detector selects the sequence Ci that minimizes the quantity:
D(r, Ci ) =
n
X
(rk
p
Eb cik )2
k=1
w
X
k=1
w
X
(rk
n
X
p
p
Eb )2 +
(rk Eb )2
k=w+1
n
X
p
p
(rk Eb )2 +
(rk + Eb )2
k=1
k=w+1
Since the first term of the right side is common for the two equations, we conclude that the
optimal ML detector can base its decisions only on the last n w received elements of r.
That is
w
X
(rk
w
X
Eb )2
(rk +
k=w+1
k=w+1
or equivalently
C1
rk 0
C2
k=w+1
w
X
38
C2
p
Eb )2 0
C1
(b) Since rk =
Pr
p
n
X
Eb (n w) +
nk < 0
k=w+1
Pr
X
n
p
nk < (n w) Eb
k=w+1
The R.V u =
Pn
k=w+1
1
Pr{Error|C1 } = p1 (e) = p
2u2
(nw) Eb
exp
r
x2
Eb (n w)
2 dx = Q
u
2
Similarly we find that Pr{Error|C1 } = Pr{Error|C2 } and since the two sequences are
equiprobable
r
Eb (n w)
p(e) = Q
2
(c) The probability of error p(e) is minimized when Eb (nw)
is maximized, that is for w = 0. This
2
implies that C1 = C2 and thus the distance between the two sequences is the maximum
possible.
4. Sub optimal receiver.
Consider a binary system transmitting the signals s0 (t), s1 (t) with equal probability.
(q
(q
2t
2E
2E
sin 2t
, 0 t T,
T
T
T cos T , 0 t T,
s0 (t) =
s1 (t) =
0,
otherwise.
0,
otherwise.
The observation, r(t), obeys
r(t) = si (t) + n(t),
i = 0, 1
N0
2 (t
).
(a) Sketch an optimal and efficient (in the sense of minimal number of filters) receiver. What is
the error probability when this receiver is used?
(b) What is the error probability of the following receiver?
Z
0
T
2
s0
r(t)dt 0
s1
aT
s0
r(t)dt K,
s1
0a1
R aT
where K is the optimal threshold for 0 r(t)dt. Find a which minimizes the probability of
error. Numerical solution may be used.
39
2t
2t
2E
2
= 2E d = 2E
d =
sin
cos
T
T
T
0
The receiver depicted in Figure 9 is equivalent to the the following (and more efficient)
receiver, depicted in Figure 10.
d
d
d
q
=Q
=Q
p(e) = Q
2
2N0
2 N20
where d, the distance between the signals, is given by
d = ks0 (t) s1 (t)k = ks0 s1 k
Hence, the probability of error is
r
E
d
p(e) = Q
p(e) = Q
N0
2N0
(b) Let us define the random variable, Y =
Z
T
2
r(t)dt. Y obeys
T
2
Y |s0 =
T
2
s0 (t)dt +
0
Z
Y |s1 =
n(t)dt
0
T
2
Z
s1 (t)dt +
T
2
n(t)dt
0
RT
Let us define the random variable N = 02 n(t)dt. N is a zero mean Gaussian random
variable, and variance
Z T Z T
Z T Z T
2
2
2
2 N
No T
0
Var{N } = E
n( )n()d d =
( )d d =
2
4
0
0
0
0
40
Y|si is a Gaussian random variable (note that Y itself is not Gaussian, but a Gaussian mixture!) with mean:
Z T2
2ET
E{Y |s0 } =
s0 (t)dt =
0
Z T2
s1 (t)dt = 0
E{Y |s1 } =
0
The variance of Y |si is identical under both cases, and equal to the variance of N . For the
given decision rule the error probability is:
p(e)
(c) We will use the same derivation procedure as in the previous item.
Define the random variables Y, N as follows:
Z
Y
aT
Z
r(t)dt,
N=
E{N }
E{Y |s0 }
E{Y |s1 }
Var{Y |s0 }
aT
n(t)dt
0
aT N0
0, Var{N } =
2
r
Z
2E aT
2ET
=
s0 (t)dt =
(1 cos 2a)
T 0
2
r
Z
2E aT
2ET
=
sin 2a
s1 (t)dt =
T 0
2
= Var{Y |s1 } = Var{N }
=
in an AWGN channel with spectral density N20 . The receiver for this problem is implemented as
a filter with the impulse response
(
eat , t 0,
, a 0.
h(t) =
0,
otherwise.
More precisely, letting yT denote the value of the output of the filter sampled at t = T , when fed
by the received signal, the decision is
yT < K s = s0 ,
yT K s = s1 ,
where K > 0 is a decision threshold. Assume that only one symbol is transmitted and answer the
following questions.
(a) Determine the resulting error probability p(e).
(b) Which value for the threshold b minimizes p(e)?
(c) With the optimal value for b (from the previous item), which value for the filter parameter,
a, minimizes p(e)? Numerical solution may be used.
Solution:
(a) The decision variable can be expressed as
yT = s + w,
where s is either zero, corresponding to s0 (t), or
Z
s=
s1 ( )h(T )d
r
Z T
2E
a(T )
sin
e
d
=
T
T
0
2ET 1 + eaT
=
,
a2 T 2 + 2
corresponding to s1 (t), and where W is zero mean Gaussian. The variance of W is
Z
N0 2
N0
Var{W } =
h (t)dt =
.
2 0
4a
We conclude that the conditional distributions of YT are
2ET 1 + eaT N0
N0
Y |s0 (t) N (0,
), Y |s1 (t) N (
).
,
4a
a2 T 2 + 2
4a
The error probability is
1
(Pr (YT > K|s0 (t)) + Pr (YT < K|s1 (t)))
2
s
!!
r
2
1 4aK 1
4a 2ET 1 + eaT
= Q
+ Q
K
.
2
N0
2
N0
a2 T 2 + 2
p(e) =
42
(b) As this is a binary decision problem in an AWGN channel, the probability of error, p(e), is
minimized when the decision threshold K is located in half-way between the two alternatives
for s, corresponding to
1 2ET 1 + eaT
,
K=
2
a2 T 2 + 2
giving
r
p(e) = Q
2ET a 1 + eaT
N0
a2 T 2 + 2
!
.
x (1 + ex )
,
x2 + 2
with respect to x = aT . The maximum is at x 1.1173, hence the optimal a is a
43
1.1173
.
T
Recall that $\gamma_s = \gamma_b\log_2 M$ and that the bit error probability can be approximated by $P_{e,\mathrm{bit}} \approx \frac{P_e}{\log_2 M}$.
For 8PSK:
$$P_e \approx 2Q\left(\sqrt{189.74}\,\sin(\pi/8)\right) = 1.355\times10^{-7}.$$
Using the approximation for $P_{e,\mathrm{bit}}$ we get
$$P_{e,\mathrm{bit}} \approx \frac{P_e}{3} = 4.52\times10^{-8}.$$
For 16PSK:
$$P_e \approx 2Q\left(\sqrt{252.98}\,\sin(\pi/16)\right) = 1.916\times10^{-3}.$$
Using the approximation for $P_{e,\mathrm{bit}}$ we get
$$P_{e,\mathrm{bit}} \approx \frac{P_e}{4} = 4.79\times10^{-4}.$$
Note that $P_{e,\mathrm{bit}}$ is much larger for 16PSK than for 8PSK for the same $\gamma_b$. This result is expected, since 16PSK packs more bits per symbol into a given constellation, so for a fixed energy per bit the minimum distance between constellation points will be smaller.
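The quoted numbers can be reproduced with a few lines of Python. The sketch below assumes γ_b = 15 dB (an inference from the arguments 189.74 and 252.98 above, not stated explicitly in the text) and uses the MPSK approximation P_e ≈ 2Q(√(2γ_s) sin(π/M)).

```python
from math import erfc, sin, pi, sqrt, log2

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

gamma_b = 10 ** (15 / 10)            # assumed SNR per bit (15 dB)
for M in (8, 16):
    gamma_s = gamma_b * log2(M)      # SNR per symbol
    Pe = 2 * Q(sqrt(2 * gamma_s) * sin(pi / M))
    print(f"{M}PSK: 2*gamma_s = {2*gamma_s:.2f}, Pe = {Pe:.3e}, "
          f"Pe_bit = {Pe/log2(M):.2e}")
```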
2. Bit error probability for rectangular constellation.
Let p0 (t) and p1 (t) be two orthonormal functions, different from zero in the time interval [0, T ].
The equiprobable signals defined in Figure 11 are transmitted through a zero-mean AWGN channel
with noise PSD equals N0 /2.
(a) Calculate Pe for the optimal receiver.
(b) Calculate Pe,bit for the optimal receiver (optimal in the sense of minimal Pe ).
q
(c) Approximate Pe,bit for high SNR ( d2 N20 ). Explain.
44
=
(a)
(b)
(c)
2
d/2
1Q p
N0 /2
Pr{correct decision |(100) was transmitted}
Pr{correct decision |(010) was transmitted}
P1 .
where (a), (b) and (c) are due to the constellation symmetry.
=
(a)
(b)
(c)
d/2
d/2
1Q p
1 2Q p
N0 /2
N0 /2
Pr{correct decision |(101) was transmitted}
Pr{correct decision |(011) was transmitted}
P2 .
where (a), (b) and (c) are due to the constellation symmetry.
45
Hence
Pc
Pe
Pe
1
2
2
!
d/2
d/2
d/2
+ 1Q p
1Q p
1 2Q p
N0 /2
N0 /2
N0 /2
1 Pc
2
1
d/2
d/2
5Q p
3Q p
.
2
N0 /2
N0 /2
(b) Let b0 denote the MSB, b2 denote the LSB and b1 denote the middle bit7 . Let bi (s), i = 0, 1, 2
denote the ith bit of the constellation point s.
Pr{error in b2 |(000) was transmitted}
Pr{
s was received|(000) was transmitted}
s:b2 (
s)6=0
d
5d
< N0 <
2
2
d
5d
Pr
< N0 <
2
2
d/2
5d/2
Q p
Q p
N0 /2
N0 /2
=
=
=
(a)
(b)
(c)
Pr
P1 .
where (a), (b) and (c) are due to the constellation symmetry.
Pr{
s was received|(001) was transmitted}
s:b2 (
s)6=1
3d
d
Pr N0 <
+ Pr
< N0
2
2
d/2
3d/2
= Q p
+Q p
N0 /2
N0 /2
= Pr{error in b2 |(101) was transmitted}
=
= P2 .
7 For
46
Using similar arguments we can calculate the bit error probability for b1
3d/2
Pr{error in b1 |(000) was transmitted} = Q p
N0 /2
= Pr{error in b1 |(100) was transmitted}
=
= P3 .
d/2
p
= Q
N0 /2
= Pr{error in b1 |(101) was transmitted}
=
= P4 .
The bit error probability for b0 equals
Pr{error in b0 |(000) was transmitted}
d/2
= Q p
N0 /2
= P5 .
Due to the constellation symmetry and the bits mapping, the bit error probability for b0 is
equal for all the constellation points.
Let Pe,bi , i = 0, 1, 2 denote the averaged (over all signals) bit error probability of the ith bit,
then
Pe,b0
= P5 .
1
(P3 + P4 ).
Pe,b1 =
2
1
Pe,b2 =
(P1 + P2 ).
2
The averaged bit error probability, Pe,bit , is given by
2
Pe,bit
=
=
(c) For
d
2
Note that
1X
Pe,bi
3 i=0
5
d/2
1
3d/2
1
5d/2
Q p
+ Q p
Q p
6
3
6
N0 /2
N0 /2
N0 /2
N0
2
Pe
log2 M
Pe,bit
Pe
Pe,bit
5
d/2
Q p
6
N0 /2
5
d/2
Q p
2
N0 /2
Pe
.
3
3. Octagon constellation.
Consider the signal constellation depicted in Figure 12.
Figure 13: Decision regions for the optimal receiver. Ik is the region for deciding on sk , k = 0, 1, . . . , 7.
Let bm , m = 0, 1, 2, be the received bits after detection. The averaged bit error probability is
Pe,bit =
3
1 X
Pr bm 6= bm .
3 m=1
Let r = (r0 , r1 ) be the received point. For b0 , assume b0 = 0 and note that b0 = 1 if the
received point is in the right half-plane, that is
n
o
Pr b0 6= 0|b0 = 0 = Pr {r0 > 0|si , i {0, 1, 2, 3} was transmitted}
=
1
1
1
1
Pr {r0 > 0|s0 } + Pr {r0 > 0|s1 } + Pr {r0 > 0|s2 } + Pr {r0 > 0|s3 } .
4
4
4
4
The distance between the constellation point s6 and the r1 axis is d2 . Referring to Figure 14,
observe that the distance between the constellation point s6 and the r0 axis is d2 +x = d2 + 2d.
Figure 14: Distance from the axis r0 and r1 for the constellation point s6 . (2d)2 = 2x2 x =
49
2d.
Therefore,
n
o 1
1
Pr b0 =
6 0|b0 = 0 = Pr {r0 > 0|s0 } + Pr {r0 > 0|s1 }
2
2
s
s
2
1
d 1
d2
= Q
+ Q
2 2+1
2
2N0
2
2N0
= A0 .
n
o
n
o
Due to symmetry, we get Pr b0 =
6 1|b0 = 1 = Pr b0 6= 0|b0 = 0 , hence
n
o
Pr b0 =
6 b0 = A0 .
n
o
n
o
For b1 it is easy to see that we get Pr b1 =
6 b1 = Pr b0 6= b0 , since assume b1 = 0, an
error occurs if r1 > 0, thus the situation is identical to the case of b0 (subject to 90 rotation).
Finally, let n = (n0 , n1 ) be the noise terms in the received signal. Consider the rotated noise
= (
terms, n
n0 , n
1 ), as depicted in Figure 158 .
Figure 15: Decision regions for the optimal receiver, and rotated noise terms.
Then assuming b2 = 0 we get
n
o
Pr b2 6= 0|b2 = 0 = Pr {si , i {1, 3, 5, 7} was received|sj , j {0, 2, 4, 6} was transmitted}
=
1
Pr{b2 = 1|s = s0 } + Pr{b2 = 1|s = s2 }
4
+ Pr{b2 = 1|s = s4 } + Pr{b2 = 1|s = s6 } .
have the same PDF as they are the projection of an AWGN on orthonormal basis.
that the vectors n and n
50
Thus,
n
o
n
o
Pr b2 6= 0|b2 = 0 = Pr b2 6= 0|s = s6
= Pr {s I7 I5 I1 I3 |s = s6 }
= Pr {s I7 I5 |s = s6 } + Pr {s I1 I3 |s = s6 }
(
(
! )
1+ 2
d + Pr n
0 < d, n
1 >
= Pr n
0 > d, n
1 <
2
!
!!
d
d
1+ 2
p
=Q p
1Q
+
2
N0 /2
N0 /2
!!
!
1+ 2
d
d
p
Q
1Q p
2
N0 /2
N0 /2
!
!
d
1+ 2
d
p
=Q p
+Q
2
N0 /2
N0 /2
!
!
d
1+ 2
d
p
2Q
Q p
2
N0 /2
N0 /2
! )
1+ 2
d
2
= A2 .
n
o
n
o
Note that due to the 90 symmetry of the problem Pr b2 6= 1|b2 = 1 = Pr b2 6= 0|b2 = 0 .
Hence,
1
2
Pe,bit = A0 + A2 .
3
3
bits
sec
Solution:
(a) The channel bandwidth is W = 3.4 kHz. The received signal-to-noise ratio is SNR = 10^3 (30 dB). Hence the channel capacity is
$$C = W\log_2(1 + \mathrm{SNR}) = 3.4\times10^3\log_2(1 + 10^3) = 33.9\times10^3\ \mathrm{bits/sec}.$$
(b) The required SNR is the solution of the following equation:
$$4800 = 3.4\times10^3\log_2(1 + \mathrm{SNR}) \;\Rightarrow\; \mathrm{SNR} = 1.66 \approx 2.2\ \mathrm{dB}.$$
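Both numbers follow directly from the Shannon capacity formula; the short Python sketch below (illustrative only) reproduces them.

```python
from math import log2, log10

W = 3.4e3                       # channel bandwidth [Hz]
snr = 1e3                       # 30 dB
C = W * log2(1 + snr)
print(f"C = {C/1e3:.1f} kbit/s")                 # ~33.9 kbit/s

R = 4800                        # target rate [bit/s]
snr_req = 2 ** (R / W) - 1
print(f"required SNR = {snr_req:.2f} ({10*log10(snr_req):.2f} dB)")
```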
2. [1, Problem 7.17].
Channel C1 is an additive white Gaussian noise channel with a bandwidth W , average transmitter
power P , and noise PSD N20 . Channel C2 is an additive white Gaussian noise channel with the
same bandwidth and average power as channel C1 but with noise PSD Sn (f ). It is further assumed
that the total noise power for both channels is the same; that is
Z W
Z W
N0
Sn (f )df =
df = N0 W.
W
W 2
Which channel do you think has larger capacity? Give an intuitive reasoning.
Solution:
The capacity of the additive white Gaussian channel is:
P
C = W log2 1 +
N0 W
For the nonwhite Gaussian noise channel, although the noise power is equal to the noise power in
the white Gaussian noise channel, the capacity is higher. The reason is that since noise samples
are correlated, knowledge of the previous noise samples provides partial information on the future
noise samples and therefore reduces their effective variance.
3. Capacity of ISI channel.
Consider a channel with Inter Symbol Interference (ISI) defined as follows
yk =
L1
X
hi xki + zk .
i=0
The channel input obeys an average power constraint E{x2k } P , and the noise zk is i.i.d
Gaussian distributed: zk N (0, z2 ). Assume that H(ej2f ) has no zeros and show that the
channel capacity is
(
+ )
Z
z2 /|H(ej2f )|2
1 W
C=
log 1 +
df ,
2 W
z2 /|H(ej2f )|2
52
z2
|H(ej2f )|2
+
df = P.
Y (ej2f )
H(ej2f )
Z(ej2f )
H(ej2f )
j2f
j2f ).
= X(e
) + Z(e
= X(ej2f ) +
This is a problem of colored Gaussian channel with no ISI. The channel PSD is
SZZ (ej2f ) =
z2
.
|H(ej2f )|2
z2
|H(ej2f )|2
+
df = P.
4. Final B, 2011.
Consider a communication system (denoted by system A) consists of two transmitters (Tx1 and
Tx2 ) capable of simultaneous transmission and one receiver.
Tx1 transmits a real signal one dimensioned with power constraint P [watts], using the
frequency band f1L f f1U , f1L = 0. In the frequency band f1L f f1U the channel
frequency response is constant, real, and equals A > 0 such
that
atts
the received signal power is
P A2 . The noise is an AWGN with spectral density N20 WHz
. Let W1 = f1U f1L denote
the channel bandwidth for Tx1 .
Tx2 transmits a real signal with power constraint P [watts], using the frequency band f2L
f f2U , f2L > 0. In the frequency band f2L f f2U the channel frequency response is
2
constant, real, and equals B > 0 such that
the received signal power is P B . The noise is an
N0 W atts
AWGN with spectral density 2
. Let W2 = f2U f2L denote the channel bandwidth
Hz
for Tx2 .
53
B2P
C2 = W2 log2 1 +
,
N0 W 2
(b) Using Theorem 2 presented in class, as each transmitter is independent, we find the capacity
for each transmitter separately. From Theorem 2
Z
C=
W
P (f )|H(f )|2
log2 1 +
df ,
N0
P (f ) =
N0
|H(f )|2
+
Z
,
P =
P (f )df .
W
f1U
C1 =
f1U
P1 (f )A2
log2 1 +
df ,
N0
P1 (f ) =
N0
A2
+
Z
,
f1U
P =
P1 (f )df .
f1U
+
0
The equation P1 (f ) = N
indicate that the power allocation is constant over 0
A2
|f | f1U and zero otherwise. Using the last equation with P1 (f ) = K for 0 |f | f1U
yields
P
2W1 K = P P1 (f ) = K =
,
2W1
and
C1 = 2W1 log2
A2 P
1+
2W1 N0
B2P
1+
2W2 N0
55
10
where {an } and {bn } are two sequences of statistically independent binary digits and g(t) is a
sinusoidal pulse defined as
(
t
, 0 t 2T,
sin 2T
g(t) =
0,
otherwise.
This type of signal is viewed as a four-phase PSK signal in which the pulse shape is one-half
cycle
of a sinusoid. Each of the information sequences {an } and
bits
{bn } is transmitted at a rate of
1 bits
1
and,
hence,
the
combined
transmission
rate
is
2T sec
T sec . The two sequences are staggered
in time by T seconds in transmission. Consequently, the signal u(t) is called staggered four-phase
PSK.
(a) Show that the envelope |u(t)| is a constant, independent of the information an on the inphase component and information bn on the quadrature component. In other words, the
amplitude of the carrier used in transmitting the signal is constant.
(b) Determine the power density spectrum of u(t).
(c) Compare the power density spectrum obtained from (1b) with the power density spectrum
of the MSK signal [1, 4.4.2]. What conclusion can you draw from this comparison?
Solution:
1
(a) Since the signaling rate is 2T
for each sequence and since g(t) has duration 2T , for any time
instant only g(t 2nT ) and g(t 2nT T ) or g(t 2nT + T ) will contribute to u(t). Hence,
for 2nT t 2nT + T :
|u(t)|
G(f )
SU (f )
56
4T cos(2T f ) j2f T
e
1 16T 2 f 2
16T cos2 (2T f )
2 (1 16T 2 f 2 )2
(c) The above power density spectrum is identical to that for the MSK signal. Therefore, the
MSK signal can be generated as a staggered four phase PSK signal with a half-period sinusoidal pulse for g(t).
2. [1, Problem 5.29].
In an MSK signal, the initial state for the phase is either 0 or rad. Determine the terminal
phase state for the following four input pairs of input data b0 , b1 : (a) 00; (b) 01; (c) 10; (d) 11.
Solution:
We assume that the input bits 0, 1 are mapped to the symbols -1 and 1 respectively. The terminal
phase of an MSK signal at time instant n is given by
$$\theta(n;\mathbf{s}) = \frac{\pi}{2}\sum_{k=0}^{n}s_k + \theta_0,$$
where $\theta_0$ is the initial phase and $s_k$ is $\pm 1$ depending on the input bit at time instant k. The following table shows $\theta(1;\mathbf{s})$ (modulo $2\pi$) for the two possible values of $\theta_0$ and the four input pairs of data:

theta_0   b0  b1   s0   s1   theta(1;s)
0         0   0    -1   -1   -pi
0         0   1    -1   +1   0
0         1   0    +1   -1   0
0         1   1    +1   +1   pi
pi        0   0    -1   -1   0
pi        0   1    -1   +1   pi
pi        1   0    +1   -1   pi
pi        1   1    +1   +1   2*pi (= 0)
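The table can be generated mechanically from the phase recursion. The sketch below is illustrative (the helper name terminal_phase is hypothetical, not from the text) and simply evaluates θ(1; s) modulo 2π for all input pairs and both initial phases.

```python
from math import pi

def terminal_phase(bits, theta0=0.0):
    """theta(n;s) = pi/2 * sum(s_k) + theta0, with bits 0/1 mapped to s = -1/+1."""
    s = [2 * b - 1 for b in bits]
    return (theta0 + (pi / 2) * sum(s)) % (2 * pi)

for theta0 in (0.0, pi):
    for b0 in (0, 1):
        for b1 in (0, 1):
            print(f"theta0={theta0:.2f}  bits=({b0},{b1})  "
                  f"terminal phase={terminal_phase([b0, b1], theta0):.2f}")
```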
0 t 2Tb
=
=
=
q
sc (t)2 + ss (t)2
s
2b
t
2b
t
cos2
+
sin2
Tb
2Tb
Tb
2Tb
r
2b
Tb
57
(b) The signal s(t) is equivalent to an MSK signal. Figure 17 depicts a block diagram of the
modulator for synthesizing the signal. In Figure 17 xe are the even pulse sequence and xo
are the odd pulse sequence.
1
2
and
Solution:
Since p = 2, m is odd (m = 1), L = 2 and M = 2, there are
Ns = 2pM = 8
phase states, which we denote as Qn = (n , sn1 ). The 2p = 4 phase states corresponding to n
are
3
,
= 0, , ,
2
2
9 Read
58
sn
(n + sn1 , sn ) = (n+1 , sn ).
(n , sn1 )
2
The trellis diagram is depicted in Figure 19. In Figure 19 solid lines denote transition corresponding to sn = 1, while dashed lines denote transitions corresponding to sn = 1.
59
11
1. Colored noise.
Consider the following four equiprobable signals
1
s0 (t) = cos(t),
1
s1 (t) = sin(t),
s2 (t) = s0 (t),
s3 (t) = s1 (t),
0 t 2
The received signal obeys r(t) = s(t) + n(t), where n(t) is a colored Gaussian noise with the
following PSD
rad
N0 2
,
=
SN () =
2 1 + 2
sec
the noise n(t) and the signal s(t) are independent.
(a) The optimal receiver for this scenario consists of a whitening filter, H(), followed by an
optimal receiver for the AWGN channel. What should be the whitening filter amplitude,
|H()|2 , so the noise at the filter output will be white?
(b) Find the above H() and the corresponding h(t) which can be composed of an adder and
an integrator.
(c) For a noise-free channel, what are the transmitted signals, si (t), i = 1, . . . , 3, at the output
of the whitening filter?
(d) Let si (t) , si (t) h(t), i = 1, . . . , 3, where h(t) is the impulse response of H(). Find a
set of real orthonormal basis functions which span the set S = (
s0 (t), . . . , s3 (t)). Find the
projection of each element in the set S on the basis functions.
(e) Sketch the optimal receiver.
Solution:
(a) The noise at the filter output will be white if
|H()|2
(b) Let the constant be
is
N0
2 ,
2 N0
= Constant.
1 + 2 2
hence |H()|2 =
1+ 2
2 .
H0 () =
1+ 2
2
1 + j
1
=1+
.
j
j
The impulse response of H0 () is h(t) = (t) + u(t) 21 , where u(t) denotes a step function.
Hence
Z t
Z
Z t
1
1
si (t) = si (t) [(t) + u(t) ] = si (t) +
si ( )d
si ( )d = si (t) +
si ( )d ,
2
2
i = 0, 1, 2, 3.
60
(c)
si (t)
Z min{2,t}
Z t
si ( )d
si ( )d = si (t) +
si (t) +
0
(
Rt
t<0
0,
Rt
si (t) + 0 si ( )d ,
si (t) + 0 si ( )d , 0 t 2 =
0,
R 2
si ( )d ,
t > 2
0
t [0, 2]
t
/ [0, 2]
s2 (t) =
s0 (t), s3 (t) =
s1 (t), t [0, 2]
(d) The functions 0 = sin(t) + cos(t) and 1 = sin(t) + 1 cos(t) are orthogonal at the range
[0, 2], hence establish a basis of the signal space. In order to have an orthonormal basis we
should normalize 0 and 1
2
k0 (t)k
k1 (t)k
1
1 (t) = (sin(t) + 1 cos(t))
4
s2 = s0 ,
61
s3 = s1 .
The whitening filter can be integrated into the match filters. Since cos(2 t) = cos(t) and
sin(2 t) = sin(t)
(
1 (cos(t) sin(t)), 0 t 2,
2
s0 (2 t) =
0,
otherwise.
(
1
(1 cos(t) sin(t)), 0 t 2,
s1 (2 t) =
0,
otherwise.
Hence
(t) + u(t) s0 (2 t)
(t) + u(t) s1 (2 t)
0,
f (t) , t,
2,
1
(2 cos(t) 1)
1
(f (t) 2 sin(t))
Figure 22 depicts an optimal receiver in which the whitening filter is integrated into the
matched filters.
Figure 22: Type-II receiver for ACGN with whitening filter integrated into the match filters.
2. Final A, 2011.
Consider the following two equiprobable signals
(q
E
T , 0 t T,
s0 (t) =
0,
otherwise.
s1 (t) =
( q
E
T , 0 t T,
0,
otherwise.
The above signals are transmitted through an additive colored Gaussian noise (ACGN) channel
with PSD SN (f ). The noise PSD, SN (f ), obeys
Z
ln SN (f )
df < .
1 + f2
62
Prove that the probability of error of the optimal receiver is given by p(e) = Q( E), where
,
sin(x)
x
2
1
SN
x
T
dx
Solution:
Since the PSD of the noise satisfies the Paley-Wiener condition, a minimum-phase whitening filter
H(f ) such that |H(f )|2 = N20 SN1(f ) exists. The optimal decoder then first whitens the noise and
then performs MAP decoding on the modified signals: s0 (t) = s0 (t) h(t) and s1 (t) = s1 (t) h(t),
assuming T is large enough. Since the signals are anti-podal and equiprobable, the probability of
error is given by
p(e) = Pr {s0 (t)} Pr {error|
s0 (t)} + Pr {s1 (t)} Pr {error|
s1 (t)} = Pr {error|
s0 (t)} .
Next, note that as after whitening we arrive at an AWGN with noise variance
dmin
Pr {error|
s0 (t)} = Q
,
2N0
N0
2
then
where dmin = ks1 s0 k, and s0 and s1 are e projections of s0 (t) and s1 (t), respectively, on an
appropriate orthonormal basis. From Parsevals theorem it follows that
dmin = ks1 s0 k
s
Z T
2
=
(
s1 (t) s0 (t)) dt
t=0
sZ
=
f =
sZ
2
S1 (f ) S0 (f ) df
=
f =
sZ
|H(f )| |S1 (f ) S0 (f )| df .
=
f =
Note that
s0 (t)ej2f t dt
S0 (f ) =
t=
r
=
r
=
E
T
ej2f t dt
t=0
E jf T sin (f T )
e
,
T
f
63
and S1 (f ) = S0 (f ). Hence
sZ
dmin
|H(f )| |S1 (f ) S0 (f )| df
=
f =
sZ
=
f =
sZ
=
x=f T
=
=
N0 1
2
|2S1 (f )| df
2 SN (f )
1
2
|S1 (f )| df
SN (f )
f =
sZ
r
2
1
sin (f T )
2N0 E
df
T
f
f = SN (f )
s
r
2
Z
1
2N0 E
1
sin (x)
dx
x
x
)
T
S
(
T
N
x=
T
T
s
2
Z
p
1
1
sin (x)
dx
2N0 E
x
x
x= SN ( T )
p
2N0 E .
2N0
Finally we arrive at
Pr {error|
s0 (t)} = Q
d
min
2N0
=Q
64
2N0 E
=Q
E .
2N0
12
In p(t nT ) + n(t)
n=
In x(t nT ) + v(t)
n=
where x(t) = p(t) p (t) and v(t) = n(t) p (t). If the sampling time is off by 10%,
1
)T . Assuming that
then the samples at the output of the correlator are taken at t = (m 10
1
t = (m 10 )T without loss of generality, then the sampled sequence is:
ym =
In x((m
n=
1
1
)T nT ) + v((m )T ).
10
10
P
1
If the signal pulse is rectangular with amplitude A and duration T , then n= In x((m 10
)T nT )
is nonzero only for n = m and n = m 1 and therefore, the sampled sequence is given by:
ym
1
1
1
T ) + Im1 x(T T ) + v((m )T )
10
10
10
9 2
1
1
A T Im + A2 T Im1 + v((m )T )
10
10
10
Im x(
N0 2
A T
2
9
10
2
2(A2 T )2
81 2A2 T
=
.
N0 A2 T
100 N0
81
100
(b) Recall from item (1a) that the sampled sequence is:
ym =
9 2
1
A T Im + A2 T Im1 + vm .
10
10
1
The term 10
A2 T Im1 expresses the ISI introduced to the system. If Im = 1 is transmitted,
then the probability of error is
Pr{e|Im = 1}
=
=
=
1
1
Pr{e|Im = 1, Im1 = 1} + Pr{e|Im = 1, Im1 = 1}
2
2
8
Z A2 T
Z 10
A2 T
2
2
1
1
N vA2 T
N vA2 T
0
0
e
e
dv
+
dv
2 N0 A2 T
2 N0 A2 T
s
s
!
2 2 !
1
2A T
1
2A2 T
8
+ Q
Q
2
N0
2
10
N0
Since the symbols of the binary PAM system are equiprobable the previous derived expression
is the probability of error when a symbol by symbol detector is employed. Comparing this
with the probability of error of a system with no ISI, we observe that there is an increase of
the probability of error by
s
s
!
!
2
1
8
2A2 T
2A2 T
1
Q
Q
.
2
10
N0
2
N0
2. [1, Problem 10.8].
A binary antipodal signal is transmitted over a nonideal band-limited channel, which introduces ISI over two adjacent symbols:
X
1
ym =
Im xkm + vm = Im + Im1 + vm .
4
k
X
k
1
Im xkm + vm = Im + Im1 + vm .
4
N0
xkj
2
N0
N0
x0 =
.
2
2
If a symbol by symbol detector is employed and we assume that the symbols Im = Im1 =
b have been transmitted, then the probability of error Pr{e|Im = Im1 = b } is:
b , Im1 = b } = Pr vm
3
<
b
4
3
=Q
4
!
2b
.
N0
log(P(e)
-3.5
-4
-4.5
-5
-5.5
-6
-6.5
-7
6
10
11
SNR/bit, dB
12
13
14
(a) Sketch the tree structure, showing the possible signal sequences for the received signals y1 , y2 ,
and y3 .
(b) Suppose the Viterbi algorithm is used to detect the information sequence. How many metrics
must be computed at each stage of the algorithm?
(c) How many surviving sequences are there in the Viterbi algorithm for this channel?
(d) Suppose that the received signals are
y1 = 0.5,
y3 = 1.0
y2 = 2.0,
Determine the surviving sequences through stage y3 and the corresponding metrics.
Solution:
(a) Figure 24 depicts part of the tree.
I1
I3
I2
3
1
-1
-3
3
3
1
-1
-3
3
1
1
-1
-3
3
-1
1
-1
-3
3
-3
1
-1
-3
0.8I
+
0.6I
)
,
k
> 1.
k
k
k1
k
68
1
3.61
0.09
1.69
8.41
I1
3
1
-1
-3
3
1
-1
-3
3
1
-1
-3
3
1
-1
-3
2 (I2 , I1 )
5.57
0.13
6.53
13.25
12.61
3.33
2.05
8.77
24.77
11.65
6.53
9.41
42.05
25.09
16.13
15.17
(I2 , I1 )
(I2 , I1 )
(I2 , I1 )
(I3 , I2 , I1 )
(I3 , I2 , I1 )
(I3 , I2 , I1 )
I3
3
3
3
3
1
1
1
1
-1
-1
-1
-1
-3
-3
-3
-3
I2
3
1
-1
-3
3
1
-1
-3
3
1
-1
-3
3
1
-1
-3
I1
1
-1
-1
-3
1
-1
-1
-3
1
-1
-1
-3
1
-1
-1
-3
3 (I3 , I2 , I1 )
2.69
9.89
22.53
42.21
0.13
7.81
12.29
28.13
2.69
2.69
7.17
19.17
10.37
2.69
7.17
15.33
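The Viterbi search through the trellis above can be reproduced with a few lines of code. The sketch below is a hedged illustration only: it assumes the channel model y_k = 0.8 I_k + 0.6 I_{k-1} with 4-PAM symbols {±1, ±3} as in the tree, takes y_1 = 0.8 I_1 for the first stage, and assumes the received samples (0.5, 2.0, -1.0); the sign of the last sample is an assumption.

```python
symbols = [-3, -1, 1, 3]                  # 4-PAM alphabet used in the tree above
y = [0.5, 2.0, -1.0]                      # received samples (sign of the last one assumed)

# Stage 1: y1 = 0.8*I1 (no ISI yet); the state is the most recent symbol
metric = {s: (y[0] - 0.8 * s) ** 2 for s in symbols}
path = {s: [s] for s in symbols}

# Stages k >= 2: y_k = 0.8*I_k + 0.6*I_{k-1}; one survivor is kept per state
for yk in y[1:]:
    new_metric, new_path = {}, {}
    for cur in symbols:
        cands = [(metric[prev] + (yk - 0.8 * cur - 0.6 * prev) ** 2, prev)
                 for prev in symbols]     # 4 branch metrics per state, 16 per stage
        best, prev = min(cands)
        new_metric[cur], new_path[cur] = best, path[prev] + [cur]
    metric, path = new_metric, new_path

best_state = min(metric, key=metric.get)
print("surviving sequences:", path)       # 4 survivors, one per state
print("best sequence:", path[best_state], "metric:", metric[best_state])
```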
1
1
sn + wn 2 , w.p. 4 ,
rn = sn + wn + 12 , w.p. 14 ,
sn + wn ,
w.p. 12 .
By symmetry p(e) = Pr{error|s = 1} = Pr{error|s = 1}, hence,
p(e) = Pr{error|s = 1}
1
3
1
1
1
= Pr{w 1 > 0} + Pr{w > 0} + Pr{w > 0}
2
4
2
4
2
1
1
1
1
1
= Pr{w > 1} + Pr{w > 1} + Pr{w + > 1}
2
4
2
2
4
1
1
1
3
1
1
= Q
+ Q
+ Q
.
2
w
4
2w
4
2w
70
13
Equalization
1,
0 |f | < 10KHz,
0,
otherwise.
The frequency response is symmetric in positive and negative frequencies. Assume an AWGN
channel with noise PSD N0 = 109 W/Hz.
(a) Find a ZF analog equalizer that completely removes the ISI introduced by H(f ).
(b) Find the total noise power at the output of the equalizer from item (1a).
(c) Assume a MMSE analog equalizer of the form Heq (f ) = H(f1)+ . Find the total noise power
at the output of this equalizer for an AWGN input with PSD N0 for = 0.5 and for = 1.
(d) Describe qualitatively two effects on a signal that is transmitted over channel H(f ) and
then passed through the MMSE equalizer Heq (f ) = H(f1)+ with > 0. What design
considerations should go into the choice of ?
(e) What happens to the total noise power for the MMSE equalizer in item (1c) as ?
What is the disadvantage of letting in this equalizer design?
Solution:
(a)
1,
2,
1
Hzf (f ) =
= 3,
H(f )
4,
5,
0 |f | < 10KHz,
10KHz |f | < 20KHz,
20KHz |f | < 30KHz,
30KHz |f | < 40KHz,
40KHz |f | < 50KHz.
(b) The noise spectrum at the output of the filter is given by N (f ) = N0 |Heq (f )|2 , and the noise
power is given by the integral of N (f ) from 50 kHz to 50 kHz:
Z 50KHz
Z 50KHz
N=
N (f )df = 2N0
|Heq (f )|2 df
50KHz
2N0 (1 + 4 + 9 + 16 + 25)(10KHz)
1.1mW.
0
(c) The noise spectrum at the output of the filter is given by N (f ) = (H(fN)+)
2 , and the noise
power is given by the integral of N (f ) from 50 kHz to 50 kHz. For = 0.5 we get
(d) As increases, the frequency response Heq (f ) decreases for all f . Thus, the noise power
decreases, but the signal power decreases as well. The factor should be chosen to balance
maximizing the SNR and minimizing distortion, which also depends on the spectrum of the
input signal (which is not given here).
(e) As , the noise power goes to 0 because Heq (f ) 0 for all f . However, the signal
power also goes to zero.
2. [1, Problem 10.10].
Binary PAM is used to transmit information over an unequalized linear filter channel. When
a = 1 is transmitted, the noise-free output of the demodulator is
0.3, m = 1,
0.9, m = 0,
xm =
0.3, m = 1,
0,
otherwise.
(a) Design a three-tap zero-forcing linear equalizer so that the output is
(
1, m = 0,
qm =
0, m = 1.
Remark 4. qm does not have to be causal.
(b) Determine qm for m = 2, 3, by convolving the impulse response of the equalizer with the
channel response.
Solution:
(a) If by {c_n} we denote the coefficients of the FIR equalizer, then the equalized signal is:
$$q_m = \sum_{n=-1}^{1}c_n x_{m-n},$$
which, for the desired response $q_{-1} = 0,\ q_0 = 1,\ q_1 = 0$, is written in matrix form as
$$\begin{pmatrix}0.9 & 0.3 & 0\\ 0.3 & 0.9 & 0.3\\ 0 & 0.3 & 0.9\end{pmatrix}\begin{pmatrix}c_{-1}\\ c_0\\ c_1\end{pmatrix} = \begin{pmatrix}0\\ 1\\ 0\end{pmatrix}.$$
The coefficients of the zero-forcing equalizer can be found by solving the above matrix equation. Thus
$$\begin{pmatrix}c_{-1}\\ c_0\\ c_1\end{pmatrix} = \begin{pmatrix}-0.4762\\ 1.4286\\ -0.4762\end{pmatrix}.$$
(b) The values of $q_m$ for $m = \pm2, \pm3$ are obtained by convolving the equalizer taps with the channel response:
$$q_2 = \sum_{n=-1}^{1}c_n x_{2-n} = c_1 x_1 = -0.1429,\qquad q_{-2} = \sum_{n=-1}^{1}c_n x_{-2-n} = c_{-1}x_{-1} = -0.1429,$$
$$q_3 = \sum_{n=-1}^{1}c_n x_{3-n} = 0,\qquad q_{-3} = \sum_{n=-1}^{1}c_n x_{-3-n} = 0.$$
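The tap values and residual ISI above are easy to verify with NumPy; the sketch below (illustrative only) solves the zero-forcing system and convolves the result with the channel samples x = (0.3, 0.9, 0.3).

```python
import numpy as np

# Channel samples x_{-1}, x_0, x_1 = 0.3, 0.9, 0.3 (from the problem statement)
X = np.array([[0.9, 0.3, 0.0],
              [0.3, 0.9, 0.3],
              [0.0, 0.3, 0.9]])
c = np.linalg.solve(X, np.array([0.0, 1.0, 0.0]))
print("taps c_{-1}, c_0, c_1 =", np.round(c, 4))    # [-0.4762  1.4286 -0.4762]

# Residual ISI: convolve the equalizer with the channel response
q = np.convolve(c, [0.3, 0.9, 0.3])
print("q_{-2..+2} =", np.round(q, 4))               # q_{+-2} = -0.1429, q_0 = 1
```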
0.9, m = 0,
xm = 0.3, m = 1,
0,
otherwise.
The noise k , at the output of the sampler, is a zero-mean Gaussian sequence with autocorrelation
function:
E{k l } = 2 xkl , |k l| 1.
If the Z-transform of the sequence {xm }, X(z), assumes the factorization:
X(z) = F (z)F (1/z )
then the filter 1/F (1/z ) can follow the sampler to white the noise sequence k . In this case the
output of the whitening filter, and input to the MSE equalizer, is the sequence
X
un =
Ik fnk + nk
k
where nk is zero mean white Gaussian with variance 2 . The optimum coefficients of the MSE
equalizer, ck , satisfy:
1
X
ck nk = k , k = 1, 0, 1
n=1
where
nk
(
xnk + 2 n,k , |n k| 1,
0,
otherwise.
(
fk , 1 k 0,
0,
otherwise.
73
With
X(z) = 0.3z + 0.9 + 0.3z 1 = (f0 + f1 z 1 )(f0 + f1 z)
we obtain the parameters f0 and f1 as:
(
0.7854
f0 =
0.1146
(
0.1146
f1 =
0.7854
The parameters f0 and f1 should have the same sign since f0 f1 = 0.3. To have a stable inverse system 1/F*(1/z*), we select f0 and f1 in such a way that the zero of the system F*(1/z*) = f0 + f1 z is inside the unit circle. Thus, we choose f0 = 0.1146 and f1 = 0.7854 and therefore,
the desired system for the equalizers coefficients is:
0.9 + 0.1
0.3
0
c1
0.7854
0.3
0.9 + 0.1
0.3 c0 = 0.1146
0
0.3
0.9 + 0.1
c1
0
Solving this system, we obtain
c1 = 0.8596,
c0 = 0.0886,
c1 = 0.0266.
cj lj = fl
,
j=K1
10 Read
74
K1 l 0,
where
lj =
l
X
fm
fm+lj + N0 lj ,
K1 l, j 0.
m=0
The tap coefficients of the feedback filter of the DFE are given in terms of the coefficients of
the feedforward section by the following equations:
ck =
0
X
cj fkj ,
1 k K2 ,
j=K1
1
1
c0
2 + N0
2
,
=
c1
N0
2 2N0
2 N02 + 32 N0 + 41
for N0 1
for N0 1.
(b)
Jmin (1) = 1
0
X
cj fj =
j=K1
2N02 + N0
2N0 ,
2 N02 + 32 N0 + 14
for N0 1
(c)
=
1 Jmin (1)
1 + 4N0
1
=
,
Jmin (1)
2N0 (1 + 2N0 )
2N0
for N0 1
(d) For the infinite tap DFE, we have from [1, Example 10.3.1]:
Jmin
2N
p 0
2N0 , for N0 1
1 + N0 + (1 + N0 )2 1
p
1 + N0 + (1 + N0 )2 1
1 Jmin
=
Jmin
2N0
Jmin = 0.128,
= 51 (17.1 dB)
The three-tap equalizer performs very well compared to the infinite-tap equalizer. The
difference in performance is 0.6 dB for N0 = 0.1 and 0.4 dB for N0 = 0.01.
75
5. Final C, 2011.
Let yn be the signal at the output of an ISI channel
yn =
xi sni + vn ,
i=
where {sn }n= is the transmitted symbol sequence. The symbols are selected in an i.i.d man
ner from a constellation A, with average energy EA . {vn }n= is a zero mean complex white
Gaussian noise with variance N0 . {xi }i= is the channel coefficients vector. Figure 25 depicts
The signal {yn }n= is filtered by a zero-forcing equalizer. Find the SNR per symbol at the
equalizer output as a function of EA and N0 .
Solution:
The z-transform of the received signal is
Y (z) = S(z)X(z) + V (z).
The frequency response depicted in Figure 25 indicate that X(z) is invertible. Thus, after filtering
1
with X(z)
we obtain
1
= S(z) + V (z),
Y (z) = S(z) + V (z)
X(z)
and the PSD of the filtered noise is
SV ej =
N0
2.
|X (ej )|
EA
v2
Z
1
=
S ej d
2 = V
Z
1
N0
=
d
=0 |X (ej )|2
9
= N0 .
12
=
12 EA
9 N0 .
76
14
Non-Coherent Reception
2E
T
cos(2fi t), 0 t T,
0,
i = 0, 1
otherwise.
T,
0
T
T cos(2f1 t + ), 0 t T,
s0 (t) =
, s1 (t) =
0,
otherwise.
0,
otherwise.
Find the minimal frequency difference required for the two signals to be orthogonal, for an unknown phase φ.
Solution:
We first solve for the general case, and then assign φ = 0 for item 1a.
$$\langle s_0(t), s_1(t)\rangle = \frac{2E}{T}\int_0^T\cos(2\pi f_0 t)\cos(2\pi f_1 t + \varphi)\,dt \approx \frac{E}{2\pi(f_0 - f_1)T}\left[\sin\left(2\pi(f_0 - f_1)T - \varphi\right) + \sin\varphi\right],$$
where the double-frequency term has been neglected ($f_0 + f_1 \gg 1/T$). For φ = 0 (the coherent case of item 1a) orthogonality requires $\sin(2\pi(f_0 - f_1)T) = 0$, so the minimal spacing is $|f_0 - f_1|_{\min} = \frac{1}{2T}$. For the result to be zero for any φ we must demand that $2\pi(f_0 - f_1)T$ be an integer multiple of $2\pi$. Hence, the minimal frequency difference for the non-coherent scenario is
$$|f_0 - f_1|_{\min} = \frac{1}{T}.$$
We conclude that for the non-coherent scenario a double bandwidth is required compared with the coherent scenario.
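The conclusion can be checked numerically by evaluating the normalized correlation of the two tones over a sweep of phases. The sketch below is illustrative only; T = 1 and the carrier f0 = 20/T are assumed values (chosen so that f0 T >> 1).

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100001)
dt = t[1] - t[0]
f0 = 20.0 / T                              # assumed carrier, f0*T >> 1

def corr(df, phi):
    s0 = np.cos(2 * np.pi * f0 * t)
    s1 = np.cos(2 * np.pi * (f0 + df) * t + phi)
    return 2.0 / T * np.sum(s0 * s1) * dt  # normalized so that <s0, s0> ~ 1

for df in (0.5 / T, 1.0 / T):
    worst = max(abs(corr(df, phi)) for phi in np.linspace(0, 2 * np.pi, 25))
    print(f"df*T = {df*T:.2f}: max |correlation| over phase = {worst:.3f}")
# df = 1/(2T) is orthogonal only for phi = 0; df = 1/T stays ~0 for every phase
```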
2. Non coherent receiver for M orthogonal signals.
Consider the following M orthogonal signals
r
2E
si (t) =
sin(i t), 0 t T,
T
i = 0, 1, . . . , M 1.
2E
sin(i t + ) + n(t)
T
N0
2 .
1
The set {rs,i , rc,i }M
i=0 is sufficient statistic for decoding r(t), where
r
r
Z T
Z T
2
2
rc,i =
r(t)
cos(i t)dt, rs,i =
r(t)
sin(i t)dt
T
T
0
0
In class it was obtained that the optimal receiver for equiprobable a-priori probabilities finds the
2
2
maximal ri2 = rc,i
+ rs,i
, and chooses the respective si (t).
The probability density functions (PDFs) of $r_0$ and $r_i$, $i = 1, \dots, M-1$, given that $s_0(t)$ was transmitted, are:
$$f(r_0|s_0) = \frac{2r_0}{N_0}e^{-\frac{r_0^2}{N_0}}e^{-\frac{E}{N_0}}I_0\!\left(\frac{2\sqrt{E}}{N_0}r_0\right),\qquad r_0 \ge 0,$$
$$f(r_i|s_0) = \frac{2r_i}{N_0}e^{-\frac{r_i^2}{N_0}},\qquad r_i \ge 0,\quad i = 1, \dots, M-1.$$
For equiprobable a-priori probabilities and M = 2, the error probability of the optimal receiver is
$$p(e) = \frac{1}{2}e^{-\frac{E}{2N_0}}.$$
Show that for equiprobable a-priori probabilities and general M, the error probability of the optimal receiver is
$$p(e) = \sum_{i=1}^{M-1}(-1)^{i+1}\binom{M-1}{i}\frac{1}{i+1}\,e^{-\frac{i}{i+1}\frac{E}{N_0}}.$$
Guideline: Let A, B and C be i.i.d RVs with PDF fY (y). Let X = max{A, B, C}. Derive the
PDF fX (x).
Solution:
78
Due to symmetry
p(e) =
M
1
X
i=0
n
(a)
Pr{ymax < y} = Pr{y1 , . . . , yn y} = FY (y)
n1
= n FY (y)
fY (y)
where (a) follows from the fact the the random variables are i.i.d.
In order to find f (rmax |s0 ) we need to find F (ri |s0 ):
Z ri
ri2
2t Nt2
F (ri |s0 ) =
e 0 dt = 1 e N0
N0
0
Hence
f (rmax |s0 ) = (M 1) 1 e
2
rmax
N0
2
M 2 2rmax rmax
e N0
N0
(M 1)
M
2
X
2
rmax
N0
j
j=0
M
2
X
(M 1)(1)j e
2
(j+1)rmax
N0
j=0
i=j+1
M
1
X
(1)i+1
i=1
2
2rmax rmax
M 2
e N0
j
N0
2rmax M 2
j
N0
2
M 1 irNmax 2rmax i
0
e
i
N0
In order to calculate p(e|s0 ) we need to integrate the whole region in which rmax > r0
Z
Z
p(e|s0 ) =
f (r0 |s0 )
f (rmax |s0 )drmax dr0
r0 =0
rmax =r0
rmax =r0
M
1
X
(1)i+1
i=1
Z ir2
max 2rmax i
M 1
drmax
e N0
i
N0
r0
|
{z
}
Rayleigh distribution
M
1
X
i=1
2
0
M 1 ir
(1)i+1
e N0
i
79
Hence
Z
p(e|s0 ) =
r0 =0
! M 1
ir2
X
0
2r0 Nr02 NE
2 E
i+1 M 1
0
0
e
e
I0
r0
(1)
e N0 dr0
i
N0
N0
i=1
Multiplying p(e|s0 ) by
E/(i+1)2
E/(i+1)2
i+1 N
e 0 /(i+1) e N0 /(i+1) = 1
i+1
M
1
X
E
i
1
M 1
e i+1 N0
(1)i+1
i
i+1
{z
}
i=1
p(e)
Z
0
!
p
2
r0
E/(i+1)2
2 E/(i + 1)2
2(i + 1)r0 N /(i+1)
N /(i+1)
e 0
I0
e 0
r0 dr0
N0
N0 /(i + 1)
{z
}
R
Rice distribution=1
M
1
X
i
E
1
M 1
=
e i+1 N0
(1)i+1
i
i+1
i=1
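The closed-form expression can be cross-checked against a Monte Carlo simulation of the square-law (envelope) receiver. The sketch below is an illustration only: the function names are hypothetical, N0 is normalized to 1, and the noise model follows the Rician/Rayleigh densities quoted in the problem.

```python
import numpy as np
from math import comb, exp

def pe_noncoherent(M, EsN0):
    """Closed-form error probability derived above for M orthogonal signals."""
    return sum((-1) ** (i + 1) * comb(M - 1, i) / (i + 1)
               * exp(-i / (i + 1) * EsN0) for i in range(1, M))

def pe_mc(M, EsN0, trials=200_000):
    """Monte Carlo: square-law receiver, signal on branch 0 with random phase."""
    rng = np.random.default_rng(3)
    E = EsN0                                        # take N0 = 1
    rc = rng.normal(0, np.sqrt(0.5), (trials, M))   # each quadrature has variance N0/2
    rs = rng.normal(0, np.sqrt(0.5), (trials, M))
    phi = rng.uniform(0, 2 * np.pi, trials)
    rc[:, 0] += np.sqrt(E) * np.cos(phi)
    rs[:, 0] += np.sqrt(E) * np.sin(phi)
    return np.mean(np.argmax(rc**2 + rs**2, axis=1) != 0)

for M in (2, 4):
    print(M, pe_noncoherent(M, 2.0), pe_mc(M, 2.0))
```

For M = 2 the formula reduces to (1/2)e^(-E/2N0), matching the binary result quoted earlier.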
3. [1, Problem 5.42].
In on-off keying of a carrier-modulated signal, the two possible signals are
s0 (t)
s1 (t)
0, 0 t T
r
2b
=
cos(2fc t + ),
T
0 t T.
r(t)
n(t), 0 t T
r
2b
cos(2fc t + ) + n(t),
T
0tT
b
N0
1, b 1, 2 =
b
)
2
b
2
re
r b
2
2
r
For
b
2
1: I0 (
For
b
2
1:
For
b
2
2 2 b
1
.
2 2
b
.
2
80
1
2
R VT
0
f (r) r2 I0
N0 11
.
2
b
2
dr:
Solution:
(a) Figure 26 depicts the noncoherent envelope detector for the on-off keying signal.
2r Nr2
r r22
2
=
e
e 0.
2
N0
r
r(t)
2
cos(2fc t)dt
T
r
Z T
2 b
2
=
cos(2fc t + ) cos(2fc t)dt +
n(t)
cos(2fc t)dt
T
T
0
0
=
b cos() + nc
Z
where nc is zero-mean Gaussian random variable with variance N20 . Similarly, for the quadrature component we have:
rs = b sin() + ns
p
p
The PDF of the random variable r = rc2 + rs2 = b + n2c + n2s follows the Rician distribution:
!
!
r b
2r b
2r r2N+b
r r2 +2 b
0 I
=
e
f (r|s1 (t)) = 2 e 2 I0
0
2
N0
N0
(c) For equiprobable signals the probability of error is given by
p(e) =
1
2
VT
p(r|s1 (t))dr +
81
1
2
p(r|s0 (t))dr
VT
Since r > 0 the expression for the probability of error takes the form
p(e)
=
=
1
2
1
2
VT
Z
1
p(r|s0 (t))dr
2 VT
Z
r b
r r2 +2 b
1 r r22
2
e
I
dr
+
e 2 dr
0
2
2
2 VT 2
p(r|s1 (t))dr +
0
VT
However, when Nb0 1 the optimum value is close to: 2 b and we will use this threshold
to simplify the analysis. The integral involving the Bessel function cannot be evaluated in
closed form. Instead of I0 (x) we will use the approximation:
ex
I0 (x)
2x
which is valid for large x, that is for high SNR. In this case:
1
2
VT
Z 2b r
2
2
r b
r r2 +2 b
1
r
e 2 I0
dr
e(r b ) /2 dr
2
2
2
2 0
2 b
This integral is further simplified if we observe that for high SNR, the integrand is dominant
in the vicinity of b and therefore, the lower limit can be substituted . Also
r
r
r
1
2 2 b
2 2
and therefore:
1
2
Z
0
b
2
r
2 2
e
b
(r b )2 /2 2
dr
1
2
b
2
Z
0
1
Q
2
1 (rb )2 /22
e
dr
2 2
!
b
2N0
Finally:
p(e)
1
Q
2
1
Q
2
b
2N0
b
2N0
82
1
2
b
2
2r Nr2
e 0 dr
N0
b
1
+ e 4N0 .
2
References
[1] J. G. Proakis, Digital Communications, 4th Edition, John Wiley and Sons, 2000.
[2] S. Haykin, Communication Systems, 4th Edition, John Wiley and Sons, 2000.
[3] A. Goldsmith, Wireless Communications, Cambridge University Press, 2006.
[4] J. G. Proakis and M. Salehi, Communication Systems Engineering, 2nd Edition, Prentice-Hall
Inc., 2002.