
Digital Communication Exercises

Contents
1  Converting a Digital Signal to an Analog Signal
2  Decision Criteria and Hypothesis Testing
3  Generalized Decision Criteria
4  Vector Communication Channels
5  Signal Space Representation
6  Optimal Receiver for the Waveform Channel
7  The Probability of Error
8  Bit Error Probability
9  Connection with the Concept of Capacity
10 Continuous Phase Modulations
11 Colored AGN Channel
12 ISI Channels and MLSE
13 Equalization
14 Non-Coherent Reception

Converting a Digital Signal to an Analog Signal


1. [1, Problem 4.15].
   Consider a four-phase PSK signal represented by the equivalent lowpass signal

       u(t) = Σ_n I_n g(t − nT)

   where I_n takes on one of the four possible values √(1/2)(±1 ± j) with equal probability. The
   sequence of information symbols {I_n} is statistically independent (i.i.d).
   (a) Determine the power density spectrum of u(t) when

           g(t) = A for 0 ≤ t ≤ T,  and g(t) = 0 otherwise.

   (b) Repeat (1a) when

           g(t) = A sin(πt/T) for 0 ≤ t ≤ T,  and g(t) = 0 otherwise.

   (c) Compare the spectra obtained in (1a) and (1b) in terms of the 3dB bandwidth and the
       bandwidth to the first spectral zero. Here you may find the frequency numerically.
   Solution:
   We have S_U(f) = (1/T)|G(f)|² Σ_{m=−∞}^{∞} C_I(m) e^{−j2πfmT}, with E(I_n) = 0 and E(|I_n|²) = 1, hence

       C_I(m) = 1 for m = 0,  and C_I(m) = 0 for m ≠ 0,

   therefore

       Σ_{m=−∞}^{∞} C_I(m) e^{−j2πfmT} = 1  ⟹  S_U(f) = (1/T)|G(f)|².

   (a) For the rectangular pulse:

           G(f) = AT (sin πfT / πfT) e^{−j2πfT/2},   |G(f)|² = A²T² sin²(πfT)/(πfT)²,

       where the factor e^{−j2πfT/2} is due to the T/2 shift of the rectangular pulse from the center.
       Hence:

           S_U(f) = A²T sin²(πfT)/(πfT)².

   (b) For the sinusoidal pulse: G(f) = ∫_0^T A sin(πt/T) exp(−j2πft) dt. By using the trigonometric
       identity sin x = (exp(jx) − exp(−jx))/(2j) it is easily shown that:

           G(f) = (2AT/π) (cos πTf / (1 − 4T²f²)) e^{−j2πfT/2},   |G(f)|² = (2AT/π)² cos²(πTf)/(1 − 4T²f²)².

       Hence:

           S_U(f) = (4A²T/π²) cos²(πTf)/(1 − 4T²f²)².

   (c) The 3dB frequency for (1a) solves

           sin²(πf_{3dB}T)/(πf_{3dB}T)² = 1/2  ⟹  f_{3dB} ≈ 0.44/T

       (where this solution is obtained graphically), while the 3dB frequency for the sinusoidal pulse
       in (1b) is f_{3dB} ≈ 0.59/T.
       The rectangular pulse spectrum has the first spectral null at f = 1/T, whereas the spectrum
       of the sinusoidal pulse has the first null at f = 3/(2T). Clearly the spectrum for the rectangular
       pulse has a narrower main lobe. However, it has higher sidelobes.
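   As a numeric sanity check of part (1c), the following Python sketch locates the 3dB frequencies
   of the two normalized spectra (T = A = 1 are assumed normalizations):

       import numpy as np

       # Normalized PSDs (T = A = 1) from parts (1a) and (1b); f is in units of 1/T.
       def psd_rect(f):
           return np.sinc(f) ** 2                      # np.sinc(x) = sin(pi x)/(pi x)

       def psd_sin(f):
           return (4 / np.pi**2) * np.cos(np.pi * f)**2 / (1 - 4 * f**2)**2

       f = np.linspace(1e-6, 1.0, 200_000)
       for name, psd in (("rect", psd_rect), ("sin", psd_sin)):
           half = psd(f[0]) / 2                        # -3 dB relative to the f = 0 peak
           f3db = f[np.argmin(np.abs(psd(f) - half))]
           print(f"{name}: f_3dB = {f3db:.2f}/T")      # ~0.44/T and ~0.59/T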
2. [1, Problem 4.21].
   The lowpass equivalent representation of a PAM signal is

       u(t) = Σ_n I_n g(t − nT).

   Suppose g(t) is a rectangular pulse and

       I_n = a_n − a_{n−2},

   where {a_n} is a sequence of uncorrelated¹ binary valued (1, −1) random variables that occur with
   equal probability.
   (a) Determine the autocorrelation function of the sequence {I_n}.
   (b) Determine the power density spectrum of u(t).
   (c) Repeat (2b) if the possible values of a_n are (0, 1).
   Solution:
   (a)
       C_I(m) = E{I_{n+m} I_n} = E{(a_{n+m} − a_{n+m−2})(a_n − a_{n−2})}
              = 2 for m = 0,  −1 for m = ±2,  0 otherwise
              = 2δ(m) − δ(m − 2) − δ(m + 2).

   (b) S_U(f) = (1/T)|G(f)|² Σ_{m=−∞}^{∞} C_I(m) e^{−j2πfmT}, where

           Σ_{m=−∞}^{∞} C_I(m) e^{−j2πfmT} = 2 − 2 cos 4πfT = 4 sin²(2πfT),

       and

           |G(f)|² = (AT)² (sin πfT / πfT)².

       Therefore:

           S_U(f) = 4A²T (sin πfT / πfT)² sin²(2πfT).

   (c) If {a_n} takes the values (0, 1) with equal probability then E{a_n} = 1/2. This results in:

           C_I(m) = (1/2)[2δ(m) − δ(m − 2) − δ(m + 2)],   Σ_{m=−∞}^{∞} C_I(m) e^{−j2πfmT} = 2 sin²(2πfT),

           S_U(f) = 2A²T (sin πfT / πfT)² sin²(2πfT).

       Thus, we obtain the same result as in (2b) but the magnitude of the various quantities is
       reduced by a factor of 2.

   ¹ E{a_n a_m} = 0 for n ≠ m.
3. [2, Problem 1.16].
   A zero mean stationary process x(t) is applied to a linear filter whose impulse response is defined
   by a truncated exponential:

       h(t) = a e^{−at} for 0 ≤ t ≤ T,  and 0 otherwise.

   Show that the power spectral density (PSD) of the filter output y(t) is defined by

       S_Y(f) = (a²/(a² + 4π²f²)) (1 − 2 exp(−aT) cos 2πfT + exp(−2aT)) S_X(f),

   where S_X(f) is the PSD of the filter input.
   Solution:
   The frequency response of the filter is:

       H(f) = ∫_{−∞}^{∞} h(t) exp(−j2πft) dt
            = ∫_0^T a exp(−at) exp(−j2πft) dt
            = a ∫_0^T exp(−(a + j2πf)t) dt
            = (a/(a + j2πf)) [1 − e^{−aT}(cos 2πfT − j sin 2πfT)].

   The squared magnitude response is:

       |H(f)|² = (a²/(a² + 4π²f²)) (1 − 2e^{−aT} cos 2πfT + e^{−2aT}),

   and the required PSD follows from S_Y(f) = |H(f)|² S_X(f).


4. [1, Problem 4.32].
   The information sequence {a_n} is a sequence of i.i.d random variables, each taking values +1 and
   −1 with equal probability. This sequence is to be transmitted at baseband by a biphase coding
   scheme, described by

       s(t) = Σ_n a_n g(t − nT),

   where g(t) is defined by

       g(t) = 1 for 0 ≤ t ≤ T/2,  −1 for T/2 < t ≤ T,  and 0 otherwise.

   (a) Find the PSD of s(t).
   (b) Assume that it is desirable to have a zero in the power spectrum at f = 1/T. To this end
       we use a precoding scheme by introducing b_n = a_n + k a_{n−1}, where k is some constant, and
       then transmit the {b_n} sequence using the same g(t). Is it possible to choose k to produce a
       frequency null at f = 1/T? If yes, what are the appropriate value and the resulting power
       spectrum?
   (c) Now assume we want to have zeros at all multiples of f_0 = 1/4T. Is it possible to have
       these zeros with an appropriate choice of k in the previous part? If not, then what kind of
       precoding do you suggest to result in the desired nulls?
   Solution:
   (a) Since μ_a = 0 and σ_a² = 1, we have S_S(f) = (1/T)|G(f)|², with

           G(f) = (T/2)(sin(πfT/2)/(πfT/2)) e^{−j2πfT/4} − (T/2)(sin(πfT/2)/(πfT/2)) e^{−j2πf·3T/4}
                = (T/2)(sin(πfT/2)/(πfT/2)) e^{−j2πfT/2} (2j sin(πfT/2))
                = jT (sin²(πfT/2)/(πfT/2)) e^{−j2πfT/2},

           |G(f)|² = T² (sin²(πfT/2)/(πfT/2))²,

           S_S(f) = T (sin²(πfT/2)/(πfT/2))².

   (b) For a non-independent information sequence the power spectrum of s(t) is given by
       S_S(f) = (1/T)|G(f)|² Σ_{m=−∞}^{∞} C_B(m) e^{−j2πfmT}, with

           C_B(m) = E{b_{n+m} b_n}
                  = E{a_{n+m} a_n} + k E{a_{n+m−1} a_n} + k E{a_{n+m} a_{n−1}} + k² E{a_{n+m−1} a_{n−1}}
                  = 1 + k² for m = 0,  k for m = ±1,  0 otherwise.

       Hence:

           Σ_{m=−∞}^{∞} C_B(m) e^{−j2πfmT} = 1 + k² + 2k cos 2πfT.

       We want:

           S_S(1/T) = 0  ⟹  1 + k² + 2k = 0  ⟹  k = −1,

       and the resulting power spectrum is:

           S_S(f) = 4T (sin²(πfT/2)/(πfT/2))² sin²(πfT).

   (c) The requirement for zeros at f = l/4T, l = ±1, ±2, . . . means 1 + k² + 2k cos(πl/2) = 0,
       which cannot be satisfied for all l. We can avoid that by using precoding of the form
       b_n = a_n + k a_{n−4}. Then

           C_B(m) = 1 + k² for m = 0,  k for m = ±4,  0 otherwise,

           Σ_{m=−∞}^{∞} C_B(m) e^{−j2πfmT} = 1 + k² + 2k cos(2πf·4T),

       and the value k = −1 will zero this spectrum at all multiples of 1/4T.


5. [1, Problem 4.29].
   Show that 16-QAM on {±1, ±3} × {±1, ±3} can be represented as a superposition of two 4-QAM
   signals where each component is amplified separately before summing, i.e., let

       s(t) = G[A_n cos 2πft + B_n sin 2πft] + [C_n cos 2πft + D_n sin 2πft],

   where {A_n}, {B_n}, {C_n} and {D_n} are statistically independent binary sequences with elements
   from the set {+1, −1}, and G is the amplifier gain. You need to show that s(t) can also be written as

       s(t) = I_n cos 2πft + Q_n sin 2πft

   and determine I_n and Q_n in terms of A_n, B_n, C_n and D_n.
   Solution:
   The 16-QAM signal is represented as s(t) = I_n cos 2πft + Q_n sin 2πft, where I_n ∈ {±1, ±3} and
   Q_n ∈ {±1, ±3}. A superposition of two 4-QAM signals is:

       s(t) = G[A_n cos 2πft + B_n sin 2πft] + [C_n cos 2πft + D_n sin 2πft],

   where A_n, B_n, C_n, D_n ∈ {±1}. Clearly I_n = G A_n + C_n and Q_n = G B_n + D_n. From these equations
   it is easy to see that G = 2 gives the required equivalence.
6. [2, Problem 1.15].
   A running integrator is defined by

       y(t) = ∫_{t−T}^{t} x(τ) dτ,

   where x(t) is the input, y(t) is the output, and T is the integration period. Both x(t) and y(t)
   are sample functions of stationary processes X(t) and Y(t), respectively. Show that the power
   spectral density (PSD) of the integrator output is related to that of the integrator input by

       S_Y(f) = T² sinc²(fT) S_X(f).

   Remark 1. sinc(x) = sin(πx)/(πx).
   Solution:
   First, we will find the impulse response, h(t), of the running integrator:

       h(t) = ∫_{t−T}^{t} δ(τ) dτ = 1 for 0 ≤ t ≤ T,  and 0 otherwise.

   Correspondingly, the frequency response of the running integrator is

       H(f) = ∫_{−∞}^{∞} h(t) e^{−j2πft} dt = ∫_0^T e^{−j2πft} dt
            = (1/(j2πf)) (1 − e^{−j2πfT})
            = T sinc(fT) e^{−jπfT}.

   Hence, the PSD S_Y(f) is defined in terms of the PSD S_X(f) as follows:

       S_Y(f) = |H(f)|² S_X(f) = T² sinc²(fT) S_X(f).
Decision Criteria and Hypothesis Testing

Remark 2. Hypothesis testing is another common name for a decision problem: you have to decide
between two or more hypotheses, say H_0, H_1, H_2, . . ., where H_i can be interpreted as "the unknown
parameter has value i". Decoding a constellation with K symbols can be interpreted as selecting the
correct hypothesis from H_0, H_1, . . . , H_{K−1}, where H_i is the hypothesis that S_i was transmitted.
1. Consider an equal probability binary source p(0) = p(1) = 1/2, and a continuous output channel:

       f_{R|M}(r|1) = a e^{−ar},  r ≥ 0,
       f_{R|M}(r|0) = b e^{−br},  r ≥ 0,   b > a > 0.

   (a) Find a constant K such that the optimal decision rule is r ≷ K (decide 1 if r > K, 0 otherwise).
   (b) Find the respective error probability.
   Solution:
   (a) Optimal decision rule:

           p(0) f_{R|M}(r|0) ≷ p(1) f_{R|M}(r|1)   (decide 0 if the left side is larger).

       Using the defined channel distributions:

           b e^{−br} ≷ a e^{−ar}
           1 ≷ (a/b) e^{(b−a)r}
           r ≷ ln(b/a)/(b − a) = K   (decide 1 if r > K).

   (b)
       p(e) = p(0) Pr{r > K|0} + p(1) Pr{r < K|1}
            = (1/2) [∫_K^∞ b e^{−bt} dt + ∫_0^K a e^{−at} dt]
            = (1/2) [e^{−bK} + 1 − e^{−aK}].
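   A short Monte Carlo sketch of this rule (a and b below are assumed example values; any
   b > a > 0 behaves the same), comparing the simulated error rate with the closed form:

       import numpy as np

       a, b, n = 1.0, 3.0, 1_000_000
       K = np.log(b / a) / (b - a)                       # optimal threshold

       rng = np.random.default_rng(0)
       bits = rng.integers(0, 2, n)                      # equiprobable source
       # r|m=1 ~ Exp(a), r|m=0 ~ Exp(b); numpy parameterizes by scale = 1/rate
       r = np.where(bits == 1, rng.exponential(1/a, n), rng.exponential(1/b, n))
       decisions = (r > K).astype(int)                   # decide 1 iff r > K

       print("simulated p(e):", np.mean(decisions != bits))
       print("analytic  p(e):", 0.5 * (np.exp(-b*K) + 1 - np.exp(-a*K)))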

2. Consider a binary source: Pr{x = −2} = 2/3, Pr{x = 1} = 1/3, and the following channel

       y = A · x,   A ~ N(1, 1),

   where x and A are independent.
   (a) Find the optimal decision rule.
   (b) Calculate the respective error probability.
   Solution:
   (a) First we will find the conditional distribution of y given x:

           (Y | −2) ~ N(−2, 4),   (Y | 1) ~ N(1, 1).

       Hence the decision rule will be:

           (2/3) (1/√(8π)) exp(−(y + 2)²/8) ≷ (1/3) (1/√(2π)) exp(−(y − 1)²/2)   (decide −2 if larger)
           −(y + 2)²/8 ≷ −(y − 1)²/2
           3y(y − 4) ≷ 0,

       i.e.,

           x̂(y) = −2 for y < 0 or y > 4,   x̂(y) = 1 otherwise.

   (b)
       p(e) = (2/3) ∫_0^4 f(y|−2) dy + (1/3) [∫_{−∞}^0 f(y|1) dy + ∫_4^∞ f(y|1) dy]
            = (2/3) [Q((0 + 2)/2) − Q((4 + 2)/2)] + (1/3) [Q(1) + Q(3)]
            = (2/3) [Q(1) − Q(3)] + (1/3) [Q(1) + Q(3)]
            = Q(1) − (1/3) Q(3) ≈ 0.15821.
3. Decision rules for binary channels.
   (a) The Binary Symmetric Channel (BSC) has binary (0 or 1) inputs and outputs. It
       outputs each bit correctly with probability 1 − p and incorrectly with probability p. Assume
       0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BSC when
       p < 1/2. How are the decision rules different when p > 1/2?
   (b) The Binary Erasure Channel (BEC) has binary inputs as with the BSC. However there
       are three possible outputs. Given an input of 0, the output is 0 with probability 1 − p_1 and 2
       with probability p_1. Given an input of 1, the output is 1 with probability 1 − p_2 and 2 with
       probability p_2. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision
       rules for the BEC when p_1 < p_2 < 1/2. How are the decision rules different when p_2 < p_1 < 1/2?
   Solution:
   (a) For equally likely inputs the MAP and ML decision rules are identical. In each case we wish
       to maximize p_{y|x}(y|x_i) over the possible choices of x_i. The decision rules are:

           p < 1/2:  X̂ = Y,
           p > 1/2:  X̂ = 1 − Y.

   (b) Again, since we have equiprobable signals, the MAP and ML decision rules are the same.
       The decision rules are as follows:

           p_1 < p_2 < 1/2:  X̂ = Y for Y = 0, 1;  X̂ = 1 for Y = 2,
           p_2 < p_1 < 1/2:  X̂ = Y for Y = 0, 1;  X̂ = 0 for Y = 2.
4. In a binary hypothesis testing problem, the observation Z is Rayleigh distributed under both
   hypotheses with a different parameter, that is,

       f(z|H_i) = (z/σ_i²) exp(−z²/(2σ_i²)),   z ≥ 0,  i = 0, 1.

   You need to decide if the observed variable Z was generated with σ_0² or with σ_1², namely choose
   between H_0 and H_1.
   (a) Obtain the decision rule for the minimum probability of error criterion. Assume that H_0
       and H_1 are equiprobable.
   (b) Extend your results to N independent observations, and derive the expressions for the
       resulting probability of error.
       Note: If R ~ Rayleigh(σ) then Σ_{i=1}^N R_i² has a gamma distribution with parameters N and
       2σ²: Y = Σ_{i=1}^N R_i² ~ Γ(N, 2σ²).
   Solution:
   (a)
       log f(z|H_i) = log z − log σ_i² − z²/(2σ_i²).

       Assuming σ_1 > σ_0, the test becomes

           log f(z|H_1) − log f(z|H_0) = log(σ_0²/σ_1²) + z² (1/(2σ_0²) − 1/(2σ_1²)) ≷ 0   (H_1 if larger)
           z² ≷ 2 log(σ_1²/σ_0²) σ_1²σ_0²/(σ_1² − σ_0²) ≜ η.

       Since z ≥ 0, the following decision rule is obtained:

           Ĥ = H_1 if z ≥ √η,   Ĥ = H_0 if z < √η.

   (b) Let f(z|H_1)/f(z|H_0) be denoted the Likelihood Ratio Test (LRT)², hence

           log LRT = log f(z|H_1) − log f(z|H_0).

       For N i.i.d observations:

           log f(z|H_i) = Σ_{n=0}^{N−1} log f(z_n|H_i) = −N log σ_i² + Σ_{n=0}^{N−1} (log z_n − z_n²/(2σ_i²)).

       The log LRT will be:

           log LRT = N log(σ_0²/σ_1²) + (1/(2σ_0²) − 1/(2σ_1²)) Σ_{n=0}^{N−1} z_n² ≷ 0   (H_1 if larger)
           Σ_{n=0}^{N−1} z_n² ≷ 2N log(σ_1²/σ_0²) σ_1²σ_0²/(σ_1² − σ_0²) ≜ η_N.

       Define Y = Σ_{n=0}^{N−1} z_n²; then Y|H_i ~ Γ(N, 2σ_i²), and

           P_M  = Pr{deciding H_0 when H_1 was transmitted} = 1 − Pr{Y > η_N|H_1} = γ(N, η_N/(2σ_1²))/Γ(N),
           P_FA = Pr{deciding H_1 when H_0 was transmitted} = Pr{Y > η_N|H_0} = 1 − γ(N, η_N/(2σ_0²))/Γ(N),

       where γ(s, x) is the lower incomplete gamma function³.

   ² LRT ≜ f(z|H_1)/f(z|H_0).
   ³ γ(s, x) ≜ ∫_0^x t^{s−1} e^{−t} dt.
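   The following sketch evaluates P_FA and P_M through the gamma distribution and cross-checks
   them by simulation (σ_0, σ_1 and N are assumed example values):

       import numpy as np
       from scipy.stats import gamma

       sigma0, sigma1, N = 1.0, 2.0, 8
       # Threshold eta_N on Y = sum z_n^2 from the log-LRT (sigma1 > sigma0):
       thr = 2 * N * np.log(sigma1**2 / sigma0**2) * \
             (sigma0**2 * sigma1**2) / (sigma1**2 - sigma0**2)

       # Under H_i, Y ~ Gamma(shape=N, scale=2*sigma_i^2)
       P_FA = gamma.sf(thr, N, scale=2 * sigma0**2)      # Pr{Y > thr | H0}
       P_M  = gamma.cdf(thr, N, scale=2 * sigma1**2)     # Pr{Y < thr | H1}

       rng = np.random.default_rng(1)
       z0 = rng.rayleigh(sigma0, (200_000, N))           # H0 samples
       z1 = rng.rayleigh(sigma1, (200_000, N))           # H1 samples
       print("P_FA:", P_FA, "vs", np.mean((z0**2).sum(axis=1) > thr))
       print("P_M :", P_M,  "vs", np.mean((z1**2).sum(axis=1) < thr))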


5. Consider a binary source: Pr{x = 0} = 1/3, Pr{x = 1} = 2/3, which can be transmitted using
   one of the channels depicted in Figure 1. It is given that p < 1/2, q < 1/2. Find the optimal decision
   rule and the respective error probability for the following cases:
   (a) The source is transmitted using channel 1.
   (b) The source is transmitted using channel 2.
   (c) The source is transmitted using channel 1 with probability α, and using channel 2 with
       probability 1 − α.
   (d) What is the value of α which achieves the minimal error probability (α as a function of p
       and q)? What is the respective error probability?

   (a) Channel 1          (b) Channel 2

   Figure 1: Two channels for the transmission of the source X.

Solution:
(a) For a channel output y = 0:

        Pr{x = 0} Pr{y = 0|x = 0} = (1/3)(1 − p),
        Pr{x = 1} Pr{y = 0|x = 1} = 0.

    Hence for y = 0 the decision is x̂ = 0. For a channel output y = 1:

        Pr{x = 0} Pr{y = 1|x = 0} = (1/3) p,
        Pr{x = 1} Pr{y = 1|x = 1} = 2/3.

    Hence for y = 1 the decision is x̂ = 1. The error probability is

        p(e) = Pr{x = 0} Pr{y = 1|x = 0} = (1/3) p.

(b) For a channel output y = 0:

        Pr{x = 0} Pr{y = 0|x = 0} = 1/3,
        Pr{x = 1} Pr{y = 0|x = 1} = (2/3) q.

    Hence for y = 0 the decision is x̂ = 0. Using the same arguments as in the previous item, for
    a channel output y = 1 the decision is x̂ = 1, and the error probability is p(e) = (2/3) q.

(c) The equivalent channel is depicted in Figure 2.

    Figure 2: The equivalent channel.

    For a channel output y = 0:

        Pr{x = 0} Pr{y = 0|x = 0} = (1/3)(1 − αp),
        Pr{x = 1} Pr{y = 0|x = 1} = (2/3) q(1 − α).

    Since (1 − α) ≤ (1 − αp) and q < 1/2, we get

        (1/3)(1 − αp) > (2/3) q(1 − α)  ⟹  x̂ = 0.

    For a channel output y = 1:

        Pr{x = 0} Pr{y = 1|x = 0} = (1/3) αp,
        Pr{x = 1} Pr{y = 1|x = 1} = (2/3)(1 − q(1 − α)).

    Since αp ≤ p < 1/2 and 1 − q(1 − α) > 1/2, the decision is x̂ = 1. The error probability is

        p(e) = (1/3) αp + (2/3) q(1 − α) = (2/3) q + (α/3)(p − 2q).

(d) As p(e) is linear with respect to α, its minimal value is achieved at one of the edge points.
    For 2q < p the minimal error probability, p(e) = (2/3) q, is achieved for α = 0. For p ≤ 2q the
    minimal error probability, p(e) = (1/3) p, is achieved for α = 1.

Generalized Decision Criteria

Remark 3. Vectors are denoted with boldface letters, e.g. x, y.


1. Bayes decision criteria.
   Consider an equiprobable binary symmetric source m ∈ {0, 1}. For the observation, R, the conditional
   probability density functions are

       f_{R|M}(r|M = 0) = 1/2 for |r| < 1,  and 0 otherwise,
       f_{R|M}(r|M = 1) = (1/2) e^{−|r|}.

   (a) Obtain the decision rule for the minimum probability of error criterion and the corresponding
       minimal probability of error.
   (b) For the cost matrix

           C = [ 0  2
                 1  0 ],

       obtain the optimal generalized decision rule and the error probability.
   Solution:
   (a) For |r| > 1: f_{R|M}(r|M = 0) = 0, hence m̂ = 1.
       For |r| < 1:

           (1/2) e^{−|r|} ≷ 1/2   (m̂ = 1 if larger),

       and since e^{−|r|} ≤ 1 for all r, the decision is m̂ = 0.
       The probability of error:

           p(e) = p(0) · 0 + p(1) ∫_{−1}^{1} (1/2) e^{−|r|} dr = (1/2)(1 − e^{−1}).

   (b) The decision rule:

           f_{R|M}(r|M = 1)/f_{R|M}(r|M = 0) ≷ (p(0)/p(1)) (C_{10} − C_{00})/(C_{01} − C_{11}) = 1/2   (m̂ = 1 if larger).

       For |r| > 1: f_{R|M}(r|M = 0) = 0, hence m̂ = 1.
       For |r| < 1:

           (1/2) e^{−|r|} ≷ (1/2)(1/2)  ⟹  |r| ≶ ln 2   (m̂ = 1 if smaller),

       hence

           m̂ = 1 for |r| < ln 2 or |r| > 1,   m̂ = 0 for ln 2 < |r| < 1.

       The probability of error:

           P_FA = Pr{m̂ = 1|m = 0} = ∫_{|r| < ln 2} (1/2) dr = ln 2,
           P_M  = Pr{m̂ = 0|m = 1} = ∫_{ln 2 < |r| < 1} (1/2) e^{−|r|} dr = 1/2 − e^{−1},
           p(e) = p(0) P_FA + p(1) P_M = (1/2)(ln 2 + 1/2 − e^{−1}).

2. Non Gaussian additive noise.
   Consider the source m ∈ {−1, 1}, Pr{m = 1} = 0.9, Pr{m = −1} = 0.1. The observation, y, obeys

       y = m + N,   N ~ U[−2, 2].

   (a) Obtain the decision rule for the minimum probability of error criterion and the minimal
       probability of error.
   (b) For the cost matrix

           C = [ 0    1
                 100  0 ],

       obtain the optimal Bayes decision rule and the error probability.
   Solution:
   (a)
       f(y|1) = 1/4 for −1 < y < 3,  and 0 otherwise,
       f(y|−1) = 1/4 for −3 < y < 1,  and 0 otherwise.

       In the overlap region −1 < y < 1 the more probable symbol wins (0.9 · 1/4 > 0.1 · 1/4), hence

           m̂ = −1 for −3 < y < −1,   m̂ = 1 for −1 < y < 3.

       The probability of error:

           p(e) = p(1) · 0 + p(−1) ∫_{−1}^{1} (1/4) dy = 0.05.

   (b) The decision rule:

           f(y|1)/f(y|−1) ≷ (p(−1)/p(1)) · (100/1)   (m̂ = 1 if larger),

       i.e., p(1) f(y|1) ≷ 100 p(−1) f(y|−1). In the overlap region the likelihood ratio equals
       1 < 100/9, hence

           m̂ = −1 for −3 < y < 1,   m̂ = 1 for 1 < y < 3.

       The probability of error:

           p(e) = p(−1) · 0 + p(1) ∫_{−1}^{1} (1/4) dy = 0.45.

3. A binary digital communication system transmits one of two symbols, S_0 = 0 or S_1 = A,
   M consecutive times using a zero mean AWGN channel with variance σ². The receiver decides
   which symbol was transmitted based on the corresponding M received symbols {r_i}, i =
   1, 2, . . . , M. The symbols' a-priori probabilities obey p(S_0)/p(S_1) = λ, while the receiver uses the
   cost matrix

       C = [ C_{00}  C_{01}
             C_{10}  C_{11} ].

   (a) Find the conditional PDFs f_{R|S}(r|S_0), f_{R|S}(r|S_1).
   (b) Use the Bayes criterion to show that the optimal decision criterion is

           (1/M) Σ_{i=1}^{M} r_i ≷ γ   (S_1 if larger),

       and find γ.
   Solution:
   (a)
       f_{R|S}(r|S_0) = Π_{i=1}^{M} (1/√(2πσ²)) e^{−r_i²/(2σ²)},
       f_{R|S}(r|S_1) = Π_{i=1}^{M} (1/√(2πσ²)) e^{−(r_i − A)²/(2σ²)}.

   (b) The Bayes optimal decision criterion is

           f_{R|S}(r|S_1)/f_{R|S}(r|S_0) ≷ λ (C_{10} − C_{00})/(C_{01} − C_{11})   (S_1 if larger).

       Substituting the conditional PDFs:

           Π_{i=1}^{M} e^{(−A² + 2r_i A)/(2σ²)} ≷ λ (C_{10} − C_{00})/(C_{01} − C_{11})
           Σ_{i=1}^{M} (−A² + 2r_i A)/(2σ²) ≷ ln[λ (C_{10} − C_{00})/(C_{01} − C_{11})]
           Σ_{i=1}^{M} r_i A/σ² − M A²/(2σ²) ≷ ln[λ (C_{10} − C_{00})/(C_{01} − C_{11})]
           (1/M) Σ_{i=1}^{M} r_i ≷ A/2 + (σ²/(M A)) ln[λ (C_{10} − C_{00})/(C_{01} − C_{11})] ≜ γ.
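   A minimal sketch of the resulting receiver (A, σ, M, λ = p(S_0)/p(S_1) and the costs below are
   assumed example values):

       import numpy as np

       A, sigma, M, lam = 1.0, 1.0, 10, 2.0
       C00, C01, C10, C11 = 0.0, 1.0, 1.0, 0.0

       gamma_thr = A / 2 + (sigma**2 / (M * A)) * np.log(lam * (C10 - C00) / (C01 - C11))

       def decide(r):
           """r: length-M vector of received samples; returns 0 for S0, 1 for S1."""
           return int(np.mean(r) > gamma_thr)

       rng = np.random.default_rng(2)
       print(decide(0 + sigma * rng.standard_normal(M)))  # S0 sent -> usually 0
       print(decide(A + sigma * rng.standard_normal(M)))  # S1 sent -> usually 1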

Vector Communication Channels


1. General Gaussian vector channel.
   Consider the Gaussian vector channel with the sources p(m_0) = q, p(m_1) = 1 − q, s_0 = [1, 1]^T,
   s_1 = [−1, −1]^T. For sending m_0 the transmitter sends s_0 and for sending m_1 the transmitter
   sends s_1. The observation, r, obeys

       r = s_i + n,   n = [n_1, n_2]^T,   n ~ N(0, Λ_n),   Λ_n = [ σ_1²  0
                                                                    0   σ_2² ].

   The noise vector, n, and the messages m_i are independent.
   (a) Obtain the optimal decision rule using the MAP criterion, and examine it for the following
       cases:
       i.   q = 1/2, σ_1 = σ_2.
       ii.  q = 1/2, σ_1² = 2σ_2².
       iii. q = 1/3, σ_1² = 2σ_2².
   (b) Derive the error probability for the obtained decision rule.
   Solution:
   (a) The conditional probability distribution function is R|s_i ~ N(s_i, Λ_n):

           f(r|s_i) = (1/√((2π)² det Λ_n)) exp(−(1/2)(r − s_i)^T Λ_n^{−1}(r − s_i)).

       The MAP optimal decision rule:

           p(m_0) f(r|s_0) ≷ p(m_1) f(r|s_1)   (m_0 if larger)
           q exp(−(1/2)(r − s_0)^T Λ_n^{−1}(r − s_0)) ≷ (1 − q) exp(−(1/2)(r − s_1)^T Λ_n^{−1}(r − s_1))
           (r − s_1)^T Λ_n^{−1}(r − s_1) − (r − s_0)^T Λ_n^{−1}(r − s_0) ≷ 2 ln((1 − q)/q).

       Assign r^T = [x, y]:

           ((x + 1)²/σ_1² + (y + 1)²/σ_2²) − ((x − 1)²/σ_1² + (y − 1)²/σ_2²) ≷ 2 ln((1 − q)/q)
           x/σ_1² + y/σ_2² ≷ (1/2) ln((1 − q)/q)   (m_0 if larger).

       i.   For the case q = 1/2, σ_1 = σ_2 the decision rule becomes x + y ≷ 0.
       ii.  For the case q = 1/2, σ_1² = 2σ_2² the decision rule becomes x + 2y ≷ 0.
       iii. For the case q = 1/3, σ_1² = 2σ_2² the decision rule becomes x + 2y ≷ σ_2² ln 2.

   (b) Denote K ≜ (1/2) ln((1 − q)/q), and define z = x/σ_1² + y/σ_2². The conditional distribution
       of Z is

           Z|s_i ~ N((−1)^i μ, μ),   μ ≜ (σ_1² + σ_2²)/(σ_1²σ_2²),   i = 0, 1.

       The decision rule in terms of z and K is

           z ≷ K   (m_0 if larger).

       The error probability:

           p(e) = p(m_0) Pr{z < K|m_0} + p(m_1) Pr{z > K|m_1}.

       Assigning the conditional distribution:

           Pr{z < K|m_0} = 1 − Q((K − μ)/√μ),   Pr{z > K|m_1} = Q((K + μ)/√μ).

       For the case q = 1/2, σ_1 = σ_2 the error probability equals Q(√2/σ_1).
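   A Monte Carlo sketch of case (i) (σ below is an assumed example value), confirming
   p(e) = Q(√2/σ):

       import numpy as np
       from scipy.stats import norm

       sigma, n = 1.0, 1_000_000
       rng = np.random.default_rng(3)

       msg = rng.integers(0, 2, n)                       # 0 -> s0=[1,1], 1 -> s1=[-1,-1]
       s = np.where(msg[:, None] == 0, 1.0, -1.0)        # transmitted vectors
       r = s + sigma * rng.standard_normal((n, 2))
       decisions = (r.sum(axis=1) < 0).astype(int)       # decide m1 iff x + y < 0

       print("simulated p(e):", np.mean(decisions != msg))
       print("analytic  p(e):", norm.sf(np.sqrt(2) / sigma))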

2. Non Gaussian additive vector channel.
   Consider a binary hypothesis testing problem in which the sources s_0 = [1, 2, 3], s_1 = [1, −1, −3]
   are equiprobable. The observation, r, obeys

       r = s_i + n,   n = [n_0, n_1, n_2],

   where the elements of n are i.i.d with the following probability density function:

       f_{N_k}(n_k) = (1/2) e^{−|n_k|}.

   Obtain the optimal decision rule using the MAP criterion.
   Solution:
   The optimal decision rule using the MAP criterion is

       p(s_0) f(r|s_0) ≷ p(s_1) f(r|s_1)   (s_0 if larger)
       f(r|s_0) ≷ f(r|s_1).

   The conditional probability distribution function is

       f(r|s_i) = f_N(r − s_i) = Π_{k=0}^{2} f_N(n_k = r_k − s_{i,k})
                = (1/2)e^{−|r_0 − s_{i,0}|} (1/2)e^{−|r_1 − s_{i,1}|} (1/2)e^{−|r_2 − s_{i,2}|}
                = (1/8) e^{−[|r_0 − s_{i,0}| + |r_1 − s_{i,1}| + |r_2 − s_{i,2}|]}.

   An assignment of the s_i elements yields

       |r_0 − 1| + |r_1 − 2| + |r_2 − 3| ≶ |r_0 − 1| + |r_1 + 1| + |r_2 + 3|   (s_0 if smaller)
       |r_1 − 2| + |r_2 − 3| ≶ |r_1 + 1| + |r_2 + 3|.

   Note that the above decision rule compares absolute (l_1) distances under the two hypotheses,
   unlike the Gaussian vector channel in which the Euclidean distance is compared.
3. Gaussian two-channel.
   Consider the following two-channel problem, in which the observations under the two hypotheses
   are

       H_0:  Z_1 = V_1 + 1,   Z_2 = (1/2)V_2 + 1/2,
       H_1:  Z_1 = V_1 − 1,   Z_2 = (1/2)V_2 − 1/2,

   where V_1 and V_2 are independent, zero-mean Gaussian variables with variance σ².
   (a) Find the minimum probability of error receiver if both hypotheses are equally likely. Simplify
       the receiver structure.
   (b) Find the minimum probability of error.
   Solution:
   Let Z = [Z_1, Z_2]^T. The conditional distribution of Z is

       Z|H_0 ~ N(μ_0, Λ),   Z|H_1 ~ N(μ_1, Λ),
       μ_0 = [1, 1/2]^T,   μ_1 = −μ_0,   Λ = σ² [ 1   0
                                                  0  1/4 ].

   (a) The decision rule:

           f(z|H_1)/f(z|H_0) ≷ p(H_0)/p(H_1) = 1   (H_1 if larger)
           log f(z|H_1) − log f(z|H_0) ≷ 0
           −(2/σ²)(z_1 + 2z_2) ≷ 0,

       i.e., the receiver decides H_0 if z_1 + 2z_2 > 0 and H_1 otherwise.

   (b) Define X = Z_1 + 2Z_2. Since V_1, V_2 are independent, Z_1, Z_2 are independent as well. A linear
       combination of Z_1, Z_2 yields a Gaussian R.V with the following parameters:

           E{X|H_0} = 2,   E{X|H_1} = −2,   Var{X|H_0} = Var{X|H_1} = 2σ².

       The probabilities of the error events are

           P_FA = Pr{Ĥ = H_1|H = H_0} = Pr{X < 0|H_0} = Q(2/√(2σ²)) = Q(√2/σ),
           P_M  = Pr{Ĥ = H_0|H = H_1} = Pr{X > 0|H_1} = Q(√2/σ),

       hence p(e) = Q(√2/σ).

4. Additive vector channel.
   Consider the additive vector channel with the equiprobable sources s_0 = [1, 1]^T, s_1 = [−1, −1]^T.
   The observation vector, r, obeys

       r = s_i + n,   n = [n_0, n_1]^T,

   where N_0 ~ N(0, σ²) and f_{N_1}(n_1) = (α/2) e^{−α|n_1|}. The noise vector elements, n_0, n_1, and the
   sources are all mutually independent.
   (a) Find the conditional PDFs f_{R|S}(r|s_0), f_{R|S}(r|s_1).
   (b) Find the log likelihood ratio.
   (c) Find and draw the optimal decision regions (in the (r_0, r_1) plane) for α = 1/(2σ²).
   Solution:
   (a) The noise vector PDF is

           f_N(n) = (1/√(2πσ²)) e^{−n_0²/(2σ²)} (α/2) e^{−α|n_1|},

       and the conditional PDFs are

           f_{R|S}(r|s_0) = (1/√(2πσ²)) e^{−(r_0 − 1)²/(2σ²)} (α/2) e^{−α|r_1 − 1|},
           f_{R|S}(r|s_1) = (1/√(2πσ²)) e^{−(r_0 + 1)²/(2σ²)} (α/2) e^{−α|r_1 + 1|}.

   (b) The log likelihood ratio is

           ln(f_{R|S}(r|s_1)/f_{R|S}(r|s_0)) = (1/(2σ²))[(r_0 − 1)² − (r_0 + 1)²] − α(|r_1 + 1| − |r_1 − 1|)
                                             = −2r_0/σ² − α(|r_1 + 1| − |r_1 − 1|).

   (c) As the a-priori probabilities are equal, and using α = 1/(2σ²), the decision rule is

           −2r_0/σ² − (1/(2σ²))(|r_1 + 1| − |r_1 − 1|) ≷ 0   (s_1 if larger)
           |r_1 + 1| − |r_1 − 1| + 4r_0 ≶ 0   (s_1 if smaller).

       For r_1 ≥ 1:      |r_1 + 1| − |r_1 − 1| = 2,    and the rule becomes 2r_0 + 1 ≶ 0 (s_1 if smaller).
       For −1 ≤ r_1 ≤ 1:  |r_1 + 1| − |r_1 − 1| = 2r_1, and the rule becomes 2r_0 + r_1 ≶ 0 (s_1 if smaller).
       For r_1 ≤ −1:     |r_1 + 1| − |r_1 − 1| = −2,   and the rule becomes 2r_0 − 1 ≶ 0 (s_1 if smaller).

       The optimal decision regions are depicted in Figure 3.

   Figure 3: Decision regions for Q4. I_k is the region for deciding on s_k, k = 0, 1.

Signal Space Representation


1. [1, Problem 4.9].
   Consider a set of M orthogonal⁴ signal waveforms s_m(t), 1 ≤ m ≤ M, 0 ≤ t ≤ T, all of which have
   the same energy⁵ ε. Define a new set of waveforms as

       s'_m(t) = s_m(t) − (1/M) Σ_{k=1}^{M} s_k(t),   1 ≤ m ≤ M,  0 ≤ t ≤ T.

   Show that the M signal waveforms {s'_m(t)} have equal energy, given by

       ε' = (M − 1)ε/M,

   and are equally correlated, with correlation coefficient

       ρ_mn = (1/ε') ∫_0^T s'_m(t) s'_n(t) dt = −1/(M − 1).

   Solution:
   The energy of the signal waveform s'_m(t) is:

       ε' = ∫_0^T |s'_m(t)|² dt = ∫_0^T (s_m(t) − (1/M) Σ_{k=1}^{M} s_k(t))² dt
          = ∫_0^T s_m²(t) dt − (2/M) Σ_{k=1}^{M} ∫_0^T s_m(t) s_k(t) dt + (1/M²) Σ_{k=1}^{M} Σ_{l=1}^{M} ∫_0^T s_k(t) s_l(t) dt
          = ε − (2/M) ε + (1/M²) Σ_{k=1}^{M} Σ_{l=1}^{M} ε δ_{kl}
          = ε − (2/M) ε + (1/M) ε = (M − 1)ε/M.

   The correlation coefficient is given by:

       ρ_mn = (1/ε') ∫_0^T s'_m(t) s'_n(t) dt
            = (1/ε') [∫_0^T s_m(t)s_n(t) dt − (1/M) Σ_{k=1}^{M} ∫_0^T s_m(t)s_k(t) dt
                      − (1/M) Σ_{l=1}^{M} ∫_0^T s_n(t)s_l(t) dt + (1/M²) Σ_{k=1}^{M} Σ_{l=1}^{M} ∫_0^T s_k(t)s_l(t) dt]
            = (1/ε') [0 − ε/M − ε/M + ε/M]
            = (−ε/M)/((M − 1)ε/M) = −1/(M − 1).

   ⁴ ⟨s_j(t), s_k(t)⟩ = 0 for j ≠ k,  j, k ∈ {1, 2, . . . , M}.
   ⁵ The energy of the signal waveform s_m(t) is ε = ∫_0^T |s_m(t)|² dt.
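   A quick numeric check of both claims, using M orthogonal coordinate vectors as the signal
   set (M and ε below are assumed example values):

       import numpy as np

       M, E = 5, 3.0
       S = np.sqrt(E) * np.eye(M)                 # M orthogonal signals, energy E each
       Sp = S - S.mean(axis=0)                    # s'_m = s_m - (1/M) sum_k s_k

       energies = (Sp**2).sum(axis=1)
       print("energy:", energies[0], "expected:", (M - 1) * E / M)

       rho = Sp[0] @ Sp[1] / energies[0]          # correlation of two distinct signals
       print("rho:", rho, "expected:", -1 / (M - 1))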

2. [1, Problem 4.10].
   Consider the following three waveforms:

       f_1(t) = 1/2 for 0 ≤ t < 2,  −1/2 for 2 ≤ t < 4,  0 otherwise,
       f_2(t) = 1/2 for 0 ≤ t < 4,  0 otherwise,
       f_3(t) = 1/2 for 0 ≤ t < 1 and 2 ≤ t < 3,  −1/2 for 1 ≤ t < 2 and 3 ≤ t < 4,  0 otherwise.

   (a) Show that these waveforms are orthonormal.
   (b) Check if you can express x(t) as a weighted linear combination of f_n(t), n = 1, 2, 3, if

           x(t) = 1 for 0 ≤ t < 1,  −1 for 1 ≤ t < 3,  1 for 3 ≤ t < 4,  0 otherwise,

       and if you can, determine the weighting coefficients; otherwise explain why not.
   Solution:
   (a) To show that the waveforms f_n(t), n = 1, 2, 3 are orthogonal we have to prove that

           ∫_{−∞}^{∞} f_n(t) f_m(t) dt = 0,   m ≠ n.

       For n = 1, m = 2:

           ∫ f_1(t) f_2(t) dt = ∫_0^2 (1/4) dt − ∫_2^4 (1/4) dt = 0.

       For n = 1, m = 3:

           ∫ f_1(t) f_3(t) dt = ∫_0^1 (1/4) dt − ∫_1^2 (1/4) dt − ∫_2^3 (1/4) dt + ∫_3^4 (1/4) dt = 0.

       For n = 2, m = 3:

           ∫ f_2(t) f_3(t) dt = ∫_0^1 (1/4) dt − ∫_1^2 (1/4) dt + ∫_2^3 (1/4) dt − ∫_3^4 (1/4) dt = 0.

       Thus, the signals f_n(t) are orthogonal. It is also straightforward to prove that the signals
       have unit energy:

           ∫ |f_n(t)|² dt = 1,   n = 1, 2, 3.

       Hence, they are orthonormal.
   (b) We first determine the weighting coefficients

           x_n = ∫_{−∞}^{∞} x(t) f_n(t) dt,   n = 1, 2, 3:

           x_1 = ∫_0^4 x(t) f_1(t) dt = (1/2)∫_0^1 dt − (1/2)∫_1^2 dt + (1/2)∫_2^3 dt − (1/2)∫_3^4 dt = 0,
           x_2 = ∫_0^4 x(t) f_2(t) dt = (1/2)∫_0^4 x(t) dt = 0,
           x_3 = ∫_0^4 x(t) f_3(t) dt = (1/2)∫_0^1 dt + (1/2)∫_1^2 dt − (1/2)∫_2^3 dt − (1/2)∫_3^4 dt = 0.

       As observed, x(t) is orthogonal to the signal waveforms f_n(t), n = 1, 2, 3, and thus it cannot
       be represented as a linear combination of these functions.
3. [1, Problem 4.11].
   Consider the following four waveforms:

       s_1(t) = 2 for 0 ≤ t < 1,  −1 for 1 ≤ t < 4,  0 otherwise,
       s_2(t) = −2 for 0 ≤ t < 1,  1 for 1 ≤ t < 3,  0 otherwise,
       s_3(t) = 1 for 0 ≤ t < 1 and 2 ≤ t < 3,  −1 for 1 ≤ t < 2 and 3 ≤ t < 4,  0 otherwise,
       s_4(t) = 1 for 0 ≤ t < 1,  −2 for 1 ≤ t < 3,  2 for 3 ≤ t < 4,  0 otherwise.

   (a) Determine the dimensionality of the waveforms and a set of basis functions.
   (b) Use the basis functions to represent the four waveforms by vectors s_1, s_2, s_3 and s_4.
   (c) Determine the minimum distance between any pair of vectors.
   Solution:
   (a) As an orthonormal set of basis functions we consider the set

           f_1(t) = 1 for 0 ≤ t < 1,   f_2(t) = 1 for 1 ≤ t < 2,
           f_3(t) = 1 for 2 ≤ t < 3,   f_4(t) = 1 for 3 ≤ t < 4,

       each zero elsewhere. In matrix notation, the four waveforms can be represented as

           [s_1(t)]   [ 2  −1  −1  −1] [f_1(t)]
           [s_2(t)] = [−2   1   1   0] [f_2(t)]
           [s_3(t)]   [ 1  −1   1  −1] [f_3(t)]
           [s_4(t)]   [ 1  −2  −2   2] [f_4(t)]

       Note that the rank of the transformation matrix is 4 and therefore the dimensionality of the
       waveforms is 4.
   (b) The representation vectors are

           s_1 = [2, −1, −1, −1],   s_2 = [−2, 1, 1, 0],   s_3 = [1, −1, 1, −1],   s_4 = [1, −2, −2, 2].

   (c) The distance between the first and the second vector is:

           d_{1,2} = |s_1 − s_2| = √(4² + (−2)² + (−2)² + (−1)²) = √25.

       Similarly we find that:

           d_{1,3} = |s_1 − s_3| = √(1² + 0² + (−2)² + 0²) = √5,
           d_{1,4} = |s_1 − s_4| = √(1² + 1² + 1² + (−3)²) = √12,
           d_{2,3} = |s_2 − s_3| = √((−3)² + 2² + 0² + 1²) = √14,
           d_{2,4} = |s_2 − s_4| = √((−3)² + 3² + 3² + (−2)²) = √31,
           d_{3,4} = |s_3 − s_4| = √(0² + 1² + 3² + (−3)²) = √19.

       Thus, the minimum distance between any pair of vectors is d_min = √5.

4. [2, Problem 5.4].
   (a) Using the Gram-Schmidt orthogonalization procedure, find a set of orthonormal basis
       functions to represent the following signals:

           s_1(t) = 2 for 0 ≤ t < 1,  0 otherwise,
           s_2(t) = 4 for 0 ≤ t < 2,  0 otherwise,
           s_3(t) = 3 for 0 ≤ t < 3,  0 otherwise.

   (b) Express each of the signals s_i(t), i = 1, 2, 3 in terms of the basis functions found in (4a).
   Solution:
   (a) The energy of s_1(t) and the first basis function are

           E_1 = ∫_0^1 |s_1(t)|² dt = ∫_0^1 2² dt = 4,
           φ_1(t) = s_1(t)/√E_1 = 1 for 0 ≤ t < 1,  0 otherwise.

       Define

           s_{21} = ∫_0^T s_2(t) φ_1(t) dt = ∫_0^1 4 · 1 dt = 4,
           g_2(t) = s_2(t) − s_{21} φ_1(t) = 4 for 1 ≤ t < 2,  0 otherwise.

       Hence, the second basis function is

           φ_2(t) = g_2(t)/√(∫_0^T g_2²(t) dt) = 1 for 1 ≤ t < 2,  0 otherwise.

       Define

           s_{31} = ∫_0^T s_3(t) φ_1(t) dt = ∫_0^1 3 · 1 dt = 3,
           s_{32} = ∫_0^T s_3(t) φ_2(t) dt = ∫_1^2 3 · 1 dt = 3,
           g_3(t) = s_3(t) − s_{31} φ_1(t) − s_{32} φ_2(t) = 3 for 2 ≤ t < 3,  0 otherwise.

       Hence, the third basis function is

           φ_3(t) = g_3(t)/√(∫_0^T g_3²(t) dt) = 1 for 2 ≤ t < 3,  0 otherwise.

   (b)
       s_1(t) = 2φ_1(t),
       s_2(t) = 4φ_1(t) + 4φ_2(t),
       s_3(t) = 3φ_1(t) + 3φ_2(t) + 3φ_3(t).
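   A minimal sampled-signal sketch of the same Gram-Schmidt procedure (the grid step dt is an
   assumed discretization):

       import numpy as np

       dt = 0.01
       t = np.arange(0, 3, dt)
       s1 = np.where(t < 1, 2.0, 0.0)
       s2 = np.where(t < 2, 4.0, 0.0)
       s3 = np.where(t < 3, 3.0, 0.0)

       def gram_schmidt(signals, dt):
           basis = []
           for s in signals:
               g = s - sum((s @ phi) * dt * phi for phi in basis)  # remove projections
               energy = (g @ g) * dt
               if energy > 1e-12:                                  # skip dependent signals
                   basis.append(g / np.sqrt(energy))
           return basis

       phi = gram_schmidt([s1, s2, s3], dt)
       # Coefficients of each signal in the orthonormal basis:
       for s in (s1, s2, s3):
           print([round((s @ p) * dt, 3) for p in phi])
       # Expected: [2, 0, 0], [4, 4, 0], [3, 3, 3]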

5. Optimum receiver.
   Suppose one of M equiprobable signals x_i(t), i = 0, . . . , M − 1 is to be transmitted during a period
   of time T over an AWGN channel. Moreover, each signal is identical to all others in the subinterval
   [t_1, t_2], where 0 < t_1 < t_2 < T.
   (a) Show that the optimum receiver may ignore the subinterval [t_1, t_2].
   (b) Equivalently, show that if x_0, . . . , x_{M−1} all have the same projection in one dimension⁶, then
       this dimension may be ignored.
   (c) Does this result necessarily hold true if the noise is Gaussian but not white? Explain.
   Solution:
   (a) The data signals x_i(t) being equiprobable, the optimum decision rule is the Maximum
       Likelihood (ML) rule, given by (in vector form) min_i |y − x_i|². From the invariance of the
       inner product, the ML rule is equivalent to

           min_i ∫_0^T |y(t) − x_i(t)|² dt.

       The integral is then written as a sum of three integrals:

           ∫_0^T |y(t) − x_i(t)|² dt = ∫_0^{t_1} |y(t) − x_i(t)|² dt + ∫_{t_1}^{t_2} |y(t) − x_i(t)|² dt
                                       + ∫_{t_2}^{T} |y(t) − x_i(t)|² dt.

       Since the second integral, over the interval [t_1, t_2], is constant as a function of i, the optimum
       decision rule reduces to

           min_i (∫_0^{t_1} |y(t) − x_i(t)|² dt + ∫_{t_2}^{T} |y(t) − x_i(t)|² dt),

       and therefore the optimum receiver may ignore the interval [t_1, t_2].
   (b) In an appropriate orthonormal basis of dimension N ≤ M, the vectors x_i and y are given by

           x_i^T = [x_{i1}, x_{i2}, . . . , x_{iN}],   y^T = [y_1, y_2, . . . , y_N].

       Assume that x_{im} = x_{1m} for all i; the optimum decision rule becomes

           min_i Σ_{k=1}^{N} |y_k − x_{ik}|² = min_i (Σ_{k=1, k≠m}^{N} |y_k − x_{ik}|² + |y_m − x_{im}|²).

       Since |y_m − x_{im}|² is constant for all i, the optimum decision rule becomes

           min_i Σ_{k=1, k≠m}^{N} |y_k − x_{ik}|².

       Therefore, the projection x_m may be ignored by the optimum receiver.
   (c) The result does not hold true if the noise is colored Gaussian noise. This is due to the fact
       that the noise along one component is correlated with the other components and hence might
       not be irrelevant. In such a case, all components turn out to be relevant. Equivalently, by
       duality, the same result holds in the time domain.

   ⁶ x_i^T = [x_{i1}, x_{i2}, . . . , x_{iN}] are vectors of length N, and there is a k such that x_{ik} = x_{jk} for all i, j ∈ {0, . . . , M − 1}.
6. Let three orthonormal waveforms be defined as

       ψ_1(t) = √3 for 0 ≤ t < 1/3,   ψ_2(t) = √3 for 1/3 ≤ t < 2/3,   ψ_3(t) = √3 for 2/3 ≤ t < 1,

   each zero elsewhere, and consider the three signal waveforms

       s_1(t) = ψ_1(t) + (3/4)ψ_2(t) + (√3/4)ψ_3(t),
       s_2(t) = −ψ_1(t) + (3/4)ψ_2(t) + (√3/4)ψ_3(t),
       s_3(t) = −(3/4)ψ_2(t) − (√3/4)ψ_3(t).

   Assume that these signals are used to transmit equiprobable symbols over an AWGN channel with
   noise spectral density N_0/2.
   (a) Show that optimal decisions (minimum probability of symbol error) can be obtained via the
       outputs of two correlators (or sampled matched filters) and specify the waveforms used in
       these correlators (or the impulse responses of the filters).
   (b) Assume that p(e) is the resulting probability of symbol error when optimal demodulation
       and detection is employed. Show that

           Q(√(2/N_0)) < p(e) < 2Q(√(2/N_0)).

   Solution:
   The three signals can be expressed in terms of two orthonormal basis waveforms φ_1(t) and φ_2(t).
   These can be chosen, e.g., as

       φ_1(t) = ψ_1(t),   φ_2(t) = (√3/2)ψ_2(t) + (1/2)ψ_3(t).

   The above choice gives

       s_1(t) = φ_1(t) + (√3/2)φ_2(t),   s_2(t) = −φ_1(t) + (√3/2)φ_2(t),   s_3(t) = −(√3/2)φ_2(t),

   corresponding to the vector representation

       s_1 = (1, √3/2),   s_2 = (−1, √3/2),   s_3 = (0, −√3/2),

   that is, the three corners of an equilateral triangle of side-length 2.
   (a) Use the two basis waveforms derived above to implement a correlation receiver.
   (b) Since the three signals are at pairwise distance 2, and the transmitted signals are
       equiprobable, the following holds:

           Pr{error|s_1(t)} = Pr{error|s_2(t)} = Pr{error|s_3(t)}.

       The union bound gives an upper bound by including, for each point, the other two:

           Pr{error|s_1(t)} = Pr{detect s_2(t) or s_3(t)|s_1(t)}
                            ≤ Pr{detect s_2(t)|s_1(t)} + Pr{detect s_3(t)|s_1(t)}
                            = 2Q(√(2/N_0)).

       The lower bound is obtained by only counting one neighbor:

           Pr{error|s_1(t)} ≥ Pr{detect s_2(t)|s_1(t)} = Q(√(2/N_0)).

Optimal Receiver for the Waveform Channel


1. [1, Problem 5.4].
   A binary digital communication system employs the signals

       s_0(t) = 0 for 0 ≤ t ≤ T,   s_1(t) = A for 0 ≤ t ≤ T,   A > 0,

   for transmitting the information. This is called on-off signaling. The received signal, r(t), obeys

       r(t) = s_i(t) + n(t),   i = 0, 1,

   where n(t) is a zero-mean AWGN process. The demodulator cross-correlates the received
   signal r(t) with s_i(t), i = 0, 1 and samples the output of the correlator at t = T.
   (a) Determine the optimum detector for an AWGN channel and the optimum threshold, assuming
       that the signals are equally probable.
   (b) Determine the probability of error as a function of the SNR. How does on-off signaling
       compare with antipodal signaling?
   Solution:
   (a) The correlation type demodulator employs a filter

           f(t) = 1/√T for 0 ≤ t ≤ T,  0 otherwise.

       Hence, the sampled outputs of the cross-correlator are

           r = s_i + n,   i = 0, 1,

       where s_0 = 0, s_1 = A√T, and the noise term n is a zero-mean Gaussian random variable with
       variance σ_n² = N_0/2. The probability density functions of the sampled output are

           f(r|s_0) = (1/√(πN_0)) e^{−r²/N_0},   f(r|s_1) = (1/√(πN_0)) e^{−(r − A√T)²/N_0}.

       The minimum-error decision rule is

           f(r|s_1)/f(r|s_0) ≷ 1   (s_1 if larger)   ⟺   r ≷ (1/2)A√T.

   (b) The average signal power is (1/2)·0 + (1/2)A²T = (1/2)A²T, and the noise power is N_0/2.
       Therefore, the SNR for on-off signaling is

           SNR_On-Off = ((1/2)A²T)/(N_0/2) = A²T/N_0.

       The average probability of error is:

           p(e) = (1/2) ∫_{(1/2)A√T}^{∞} f(r|s_0) dr + (1/2) ∫_{−∞}^{(1/2)A√T} f(r|s_1) dr
                = (1/2) ∫_{(1/2)A√T}^{∞} (1/√(πN_0)) e^{−r²/N_0} dr + (1/2) ∫_{−∞}^{(1/2)A√T} (1/√(πN_0)) e^{−(r−A√T)²/N_0} dr
                = Q((1/2)√(2/N_0) A√T) = Q(√((1/2) SNR_On-Off)).

       Thus, on-off signaling requires a factor of two more energy to achieve the same probability
       of error as antipodal signaling.
2. [2, Problem 5.11].
   Consider the optimal detection of the sinusoidal signal

       s(t) = sin(8πt/T),   0 ≤ t ≤ T,

   in additive white Gaussian noise.
   (a) Determine the correlator output (at t = T) assuming a noiseless input.
   (b) Determine the corresponding matched filter output, assuming that the filter includes a delay
       T to make it causal.
   (c) Hence show that these two outputs are the same at time instant t = T.
   Solution:
   For the noiseless case, the received signal is r(t) = s(t), 0 ≤ t ≤ T.
   (a) The correlator output is:

           y(T) = ∫_0^T r(τ) s(τ) dτ = ∫_0^T s²(τ) dτ = ∫_0^T sin²(8πτ/T) dτ = T/2.

   (b) The matched filter is defined by the impulse response h(t) = s(T − t). The matched filter
       output is therefore:

           y(t) = ∫ r(λ) h(t − λ) dλ = ∫_0^t s(λ) s(T − t + λ) dλ
                = ∫_0^t sin(8πλ/T) sin(8π(T − t + λ)/T) dλ
                = (1/2) ∫_0^t cos(8π(t − T)/T) dλ − (1/2) ∫_0^t cos(8π(2λ + T − t)/T) dλ
                = (t/2) cos(8π(t − T)/T) − (T/16π) sin(8πt/T).

   (c) When the matched filter output is sampled at t = T, we get

           y(T) = (T/2) cos 0 − (T/16π) sin 8π = T/2,

       which is exactly the same as the correlator output determined in item (2a).
3. SNR Maximization with a Matched Filter.
   Prove the following theorem:
   For the real system shown in Figure 4, the filter h(t) that maximizes the signal-to-noise ratio at
   sample time T_s is given by the matched filter h(t) = x(T_s − t).

   Figure 4: SNR maximization by matched filter.

   Solution:
   Compute the SNR at sample time t = T_s as follows:

       Signal Energy = [x(t) ∗ h(t)|_{t=T_s}]² = [∫ x(t) h(T_s − t) dt]² = [⟨x(t), h(T_s − t)⟩]².

   The sampled noise at the matched filter output has energy (mean-square value)

       Noise Energy = E{∫ n(t) h(T_s − t) dt ∫ n(s) h(T_s − s) ds}
                    = (N_0/2) ∫∫ δ(t − s) h(T_s − t) h(T_s − s) dt ds
                    = (N_0/2) ∫ h²(T_s − t) dt
                    = (N_0/2) ‖h‖².

   The signal-to-noise ratio, defined as the ratio of the signal power to the noise power, equals

       SNR = 2[⟨x(t), h(T_s − t)⟩]² / (N_0 ‖h‖²).

   The Cauchy-Schwarz Inequality states that

       [⟨x(t), h(T_s − t)⟩]² ≤ ‖x‖² ‖h‖²,

   with equality if and only if h(T_s − t) = k x(t), where k is some arbitrary constant. Thus, by
   inspection, the SNR is maximized over all choices of h(t) when h(t) = k x(T_s − t). The filter h(t)
   is matched to x(t), and the corresponding maximum SNR (for any k) is

       SNR_max = (2/N_0) ‖x‖².
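   A numeric illustration of the theorem on a sampled example pulse (the pulse x(t) and the
   constants below are assumptions; with sampling step dt, integrals become dot products times dt):

       import numpy as np

       dt, N0 = 0.01, 0.1
       t = np.arange(0, 1, dt)                            # Ts = 1
       x = np.sin(np.pi * t) ** 2                         # example pulse

       def snr(h):
           y = (x @ h[::-1]) * dt                         # (x * h)(Ts) = <x(t), h(Ts - t)>
           noise_var = (N0 / 2) * (h @ h) * dt            # variance of filtered noise at Ts
           return y**2 / noise_var

       rng = np.random.default_rng(4)
       print("matched filter:", snr(x[::-1]))             # h(t) = x(Ts - t)
       print("2||x||^2/N0   :", 2 * (x @ x) * dt / N0)
       print("best of 1000 random filters:",
             max(snr(rng.standard_normal(len(t))) for _ in range(1000)))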

4. The optimal receiver.
   Consider the signals s_0(t), s_1(t) with the respective probabilities p_0, p_1:

       s_0(t) = √(E/T) for 0 ≤ t < aT,  −√(E/T) for aT ≤ t < T,  0 otherwise,
       s_1(t) = √(2E/T) cos(2πt/T) for 0 ≤ t < T,  0 otherwise.

   The observation, r(t), obeys

       r(t) = s_i(t) + n(t),   i = 0, 1,   E{n(t)n(τ)} = (N_0/2)δ(t − τ).

   (a) Find the optimal receiver for the above two signals; write the solution in terms of s_0(t)
       and s_1(t).
   (b) Find the error probability of the optimal receiver for equiprobable signals.
   (c) Find the parameter a which minimizes the error probability.
   Solution:
   (a) We will use a type II receiver, which uses filters matched to the signals s_i(t), i = 0, 1. The
       optimal receiver is depicted in Figure 5, where h_0(t) = s_0(T − t), h_1(t) = s_1(T − t).

       Figure 5: Optimal receiver - II.

       The Max block in Figure 5 can be implemented as follows:

           y = y_0 − y_1 ≷ 0   (s_0 if larger),

       where the R.V. y obeys

           y = ((N_0/2) ln p_0 − E/2 + [h_0(t) ∗ r(t)]|_{t=T}) − ((N_0/2) ln p_1 − E/2 + [h_1(t) ∗ r(t)]|_{t=T})
             = (N_0/2) ln(p_0/p_1) + [(h_0(t) − h_1(t)) ∗ r(t)]|_{t=T}.

       Hence the optimal receiver can be implemented using one convolution operation instead of
       two convolution operations, as depicted in Figure 6.

       Figure 6: Optimal receiver - II.

   (b) For an equiprobable binary constellation, in an AWGN channel, the probability of error is
       given by

           p(e) = Q(d/(2σ)),   d = ‖s_0 − s_1‖,   d² = ‖s_0‖² + ‖s_1‖² − 2⟨s_0, s_1⟩,

       where σ² = N_0/2 is the noise variance.
       The correlation coefficient between the two signals, ρ, equals

           ρ = ⟨s_0, s_1⟩/(‖s_0‖ ‖s_1‖) = ⟨s_0, s_1⟩/E,

       and for equal energy signals

           d² = 2E − 2⟨s_0, s_1⟩ = 2E(1 − ρ)  ⟹  d = √(2E(1 − ρ)),

           p(e) = Q(√(E(1 − ρ)/N_0)).

   (c) ρ is the only parameter in p(e) affected by a. An explicit calculation of ρ yields

           ⟨s_0, s_1⟩ = ∫_{−∞}^{∞} s_0(t) s_1(t) dt
                     = ∫_0^{aT} √(E/T)√(2E/T) cos(2πt/T) dt − ∫_{aT}^{T} √(E/T)√(2E/T) cos(2πt/T) dt
                     = (√2 E/(2π)) sin 2πa + (√2 E/(2π)) sin 2πa
                     = (√2 E/π) sin 2πa,

           ρ = (√2/π) sin 2πa,

           p(e) = Q(√(E(1 − (√2/π) sin 2πa)/N_0)).

       In order to minimize the probability of error, we will maximize the Q function argument:

           sin 2πa = −1  ⟹  a = 3/4.

5. The optimal receiver - II.
   Consider the following equiprobable signals s_i(t), i = 0, 1, 2, 3:

       s_0(t) = √(2E/T) cos(2πt/T) for 0 ≤ t < T,  0 otherwise,
       s_1(t) = −√(2E/T) cos(2πt/T) for 0 ≤ t < T,  0 otherwise,
       s_2(t) = √(E/T) for 0 ≤ t < T/2,  −√(E/T) for T/2 ≤ t < T,  0 otherwise,
       s_3(t) = −√(E/T) for 0 ≤ t < T/2,  √(E/T) for T/2 ≤ t < T,  0 otherwise.

   The observation, r(t), obeys

       r(t) = s_i(t) + n(t),   i = 0, 1, 2, 3,   E{n(t)n(τ)} = (N_0/2)δ(t − τ).

   (a) Find a signal space representation for the signals s_i(t), i = 0, 1, 2, 3, and draw the optimal
       decision regions.
   (b) Find the optimal receiver for the above four signals which comprises at most two filters
       (or two multipliers and integrators).
   (c) Find the error probability of the optimal receiver.
   Solution:
   (a) The following two orthonormal functions comprise a basis of the signal space:

           φ_0(t) = (1/√E) s_0(t),   φ_1(t) = (1/√E) s_2(t).

       It is easy to verify that ⟨φ_0(t), φ_1(t)⟩ = 0, and that {φ_0(t), φ_1(t)} spans the signal space.
       Figure 7 depicts the signal space spanned by {φ_0(t), φ_1(t)} and the optimal decision regions.

       Figure 7: Decision regions for the optimal receiver.

   (b) Note that the signals s_i(t), i = 0, 1, 2, 3, are equiprobable and of equal energy. The optimal
       receiver is depicted in Figure 8.

       Figure 8: Optimal receiver.

       The decision block depicted in Figure 8 implements the following equation:

           ŝ(t) = s_0(t) if y_1 > |y_2|,
                  s_1(t) if −y_1 > |y_2|,
                  s_2(t) if y_2 > |y_1|,
                  s_3(t) if −y_2 > |y_1|.

   (c) Let Q be defined as follows:

           Q ≜ Q((√(2E)/2)/√(N_0/2)) = Q(√(E/N_0)),

       where √(2E) is the distance between adjacent signal points. Let p(c) = (1 − Q)² denote the
       probability of a correct decision. Then, the error probability is

           p(e) = 1 − p(c) = 1 − (1 − Q)² = 2Q − Q².

The Probability of Error


1. [1, Problem 5.10].
   A ternary communication system transmits one of three signals, s(t), 0, or −s(t), every T seconds.
   The received signal is either r(t) = s(t) + z(t), r(t) = z(t) or r(t) = −s(t) + z(t), where z(t) is
   white Gaussian noise with E{z(t)} = 0 and φ_zz(τ) = (1/2)E{z(t)z*(τ)} = N_0 δ(t − τ). The optimum
   receiver computes the correlation metric

       U = Re[∫_0^T r(t) s*(t) dt]

   and compares U with a threshold A and a threshold −A. If U > A the decision is made that s(t)
   was sent. If U < −A, the decision is made in favor of −s(t). If −A ≤ U ≤ A, the decision is made
   in favor of 0.
   (a) Determine the three conditional probabilities of error: p(e|s(t)), p(e|0) and p(e|−s(t)).
   (b) Determine the average probability of error p(e) as a function of the threshold A, assuming
       that the three symbols are equally probable a priori.
   (c) Determine the value of A that minimizes p(e).
   Solution:
   (a) U = Re[∫_0^T r(t)s*(t) dt], where r(t) = s(t) + z(t), z(t), or −s(t) + z(t), depending on which
       signal was sent. If we assume that s(t) was sent:

           U = Re[∫_0^T s(t)s*(t) dt] + Re[∫_0^T z(t)s*(t) dt] = 2E + N,

       where E = (1/2)∫_0^T s(t)s*(t) dt is a constant, and N = Re[∫_0^T z(t)s*(t) dt] is a Gaussian
       random variable with zero mean and variance 2EN_0. Hence, given that s(t) was sent, the
       probability of error is:

           p_1(e) = Pr{N < A − 2E} = Q((2E − A)/√(2EN_0)).

       When −s(t) is transmitted: U = −2E + N, and the corresponding conditional error probability
       is:

           p_2(e) = Pr{N > −A + 2E} = Q((2E − A)/√(2EN_0)),

       and finally, when 0 is transmitted: U = N, and the corresponding error probability is:

           p_3(e) = Pr{N > A or N < −A} = 2Q(A/√(2EN_0)).

   (b)
       p(e) = (1/3)[p_1(e) + p_2(e) + p_3(e)] = (2/3)[Q((2E − A)/√(2EN_0)) + Q(A/√(2EN_0))].

   (c) In order to minimize p(e):

           dp(e)/dA = 0  ⟹  A = E,

       where we differentiate Q(x) = ∫_x^∞ (1/√(2π)) e^{−t²/2} dt with respect to x using the Leibnitz
       rule: (d/dx) ∫_{f(x)}^∞ g(a) da = −(df/dx) g(f(x)).
       Using this threshold:

           p(e) = (4/3) Q(√(E/(2N_0))).
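   A numeric check that A = E minimizes p(e) (E and N_0 below are assumed example values):

       import numpy as np
       from scipy.stats import norm

       E, N0 = 1.0, 0.2
       Q = norm.sf

       def p_e(A):
           s = np.sqrt(2 * E * N0)                        # std of the metric noise N
           return (2/3) * (Q((2*E - A) / s) + Q(A / s))

       A = np.linspace(0.01, 2*E, 2000)
       print("argmin A ~", A[np.argmin(p_e(A))], "(expected E =", E, ")")
       print("p(e) at A=E:", (4/3) * Q(np.sqrt(E / (2*N0))))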
2. [1, Problem 5.19].
   Consider a signal detector with an input

       r = ±A + n,   A > 0,

   where +A and −A occur with equal probability and the noise variable n is characterized by the
   Laplacian p.d.f.

       f(n) = (1/(√2 σ)) e^{−√2|n|/σ}.

   (a) Determine the probability of error as a function of the parameters A and σ.
   (b) Determine the SNR required to achieve an error probability of 10^{−5}. How does the SNR
       compare with the result for a Gaussian p.d.f?
   Solution:
   (a) Let β = √2/σ. The optimal receiver uses the criterion

           f(r|A)/f(r|−A) = e^{−β[|r−A| − |r+A|]} ≷ 1   (A if larger)   ⟺   r ≷ 0.

       The average probability of error is:

           p(e) = (1/2) Pr{error|A} + (1/2) Pr{error|−A}
                = (1/2) ∫_{−∞}^{0} f(r|A) dr + (1/2) ∫_{0}^{∞} f(r|−A) dr
                = (1/2) ∫_{−∞}^{0} (β/2) e^{−β|r−A|} dr + (1/2) ∫_{0}^{∞} (β/2) e^{−β|r+A|} dr
                = (β/4) ∫_{−∞}^{−A} e^{−β|x|} dx + (β/4) ∫_{A}^{∞} e^{−β|x|} dx
                = (1/2) e^{−βA} = (1/2) e^{−√2 A/σ}.

   (b) The variance of the noise is σ², hence the SNR is

           SNR = A²/σ²,

       and the probability of error is given by:

           p(e) = (1/2) e^{−√(2 SNR)}.

       For p(e) = 10^{−5} we obtain:

           −ln(2 · 10^{−5}) = √(2 SNR)  ⟹  SNR ≈ 58.5 = 17.674 dB.

       If the noise were Gaussian, then the probability of error for antipodal signalling would be

           p(e) = Q(√SNR),

       where SNR is the signal to noise ratio at the output of the matched filter. With p(e) = 10^{−5}
       we find √SNR = 4.26 and therefore SNR ≈ 18.15 = 12.594 dB. Thus the required signal to
       noise ratio is 5 dB less when the additive noise is Gaussian.
3. [1, Problem 5.38].
   The discrete sequence

       r_k = √E_b c_k + n_k,   k = 1, 2, . . . , n,

   represents the output sequence of samples from a demodulator, where c_k = ±1 are elements of
   one of two possible code words, C_1 = [1 1 . . . 1] and C_2 = [1 1 . . . 1 −1 . . . −1]. The code word
   C_2 has w elements that are +1 and n − w elements that are −1, where w is a positive integer.
   The noise sequence {n_k} is white Gaussian with variance σ².
   (a) What is the optimum ML detector for the two possible transmitted signals?
   (b) Determine the probability of error as a function of the parameters σ², E_b, w.
   (c) What is the value of w that minimizes the error probability?
   Solution:
   (a) The optimal ML detector selects the sequence C_i that minimizes the quantity

           D(r, C_i) = Σ_{k=1}^{n} (r_k − √E_b c_{ik})².

       The metrics of the two possible transmitted sequences are

           D(r, C_1) = Σ_{k=1}^{w} (r_k − √E_b)² + Σ_{k=w+1}^{n} (r_k − √E_b)²,
           D(r, C_2) = Σ_{k=1}^{w} (r_k − √E_b)² + Σ_{k=w+1}^{n} (r_k + √E_b)².

       Since the first term of the right side is common for the two equations, we conclude that the
       optimal ML detector can base its decisions only on the last n − w received elements of r.
       That is,

           Σ_{k=w+1}^{n} (r_k − √E_b)² ≶ Σ_{k=w+1}^{n} (r_k + √E_b)²   (C_1 if smaller),

       or equivalently

           Σ_{k=w+1}^{n} r_k ≷ 0   (C_1 if larger).

   (b) Since r_k = √E_b c_{ik} + n_k, the probability of error Pr{error|C_1} is

           Pr{error|C_1} = Pr{√E_b (n − w) + Σ_{k=w+1}^{n} n_k < 0}
                         = Pr{Σ_{k=w+1}^{n} n_k < −(n − w)√E_b}.

       The R.V. u = Σ_{k=w+1}^{n} n_k is zero-mean Gaussian with variance σ_u² = (n − w)σ². Hence

           Pr{error|C_1} = (1/√(2πσ_u²)) ∫_{−∞}^{−(n−w)√E_b} exp(−x²/(2σ_u²)) dx = Q(√(E_b(n − w)/σ²)).

       Similarly we find that Pr{error|C_2} = Pr{error|C_1}, and since the two sequences are
       equiprobable,

           p(e) = Q(√(E_b(n − w)/σ²)).

   (c) The probability of error p(e) is minimized when E_b(n − w)/σ² is maximized, that is for
       w = 0. This implies that C_1 = −C_2, and thus the distance between the two sequences is the
       maximum possible.
4. Sub optimal receiver.
   Consider a binary system transmitting the signals s_0(t), s_1(t) with equal probability:

       s_0(t) = √(2E/T) sin(2πt/T) for 0 ≤ t ≤ T,  0 otherwise,
       s_1(t) = √(2E/T) cos(2πt/T) for 0 ≤ t ≤ T,  0 otherwise.

   The observation, r(t), obeys

       r(t) = s_i(t) + n(t),   i = 0, 1,

   where n(t) is white Gaussian noise with E{n(t)} = 0 and E{n(t)n(τ)} = (N_0/2)δ(t − τ).
   (a) Sketch an optimal and efficient (in the sense of minimal number of filters) receiver. What is
       the error probability when this receiver is used?
   (b) What is the error probability of the following receiver?

           ∫_0^{T/2} r(t) dt ≷ 0   (s_0 if larger).

   (c) Consider the following receiver:

           ∫_0^{aT} r(t) dt ≷ K   (s_0 if larger),   0 ≤ a ≤ 1,

       where K is the optimal threshold for ∫_0^{aT} r(t) dt. Find a which minimizes the probability
       of error. A numerical solution may be used.
   Solution:
   (a) The signals are equiprobable and have equal energy. We will use a type II receiver, depicted
       in Figure 9.

       Figure 9: Optimal receiver type II.

       The distance between the signals is

           d² = ∫_0^T (s_0(t) − s_1(t))² dt = (2E/T) ∫_0^T (sin(2πt/T) − cos(2πt/T))² dt = 2E  ⟹  d = √(2E).

       The receiver depicted in Figure 9 is equivalent to the following (and more efficient) receiver,
       depicted in Figure 10.

       Figure 10: Efficient optimal receiver.

       For a binary system with equiprobable signals s_0(t) and s_1(t) the probability of error is
       given by

           p(e) = Q((d/2)/√(N_0/2)) = Q(d/√(2N_0)),

       where d, the distance between the signals, is d = ‖s_0(t) − s_1(t)‖ = ‖s_0 − s_1‖. Hence, the
       probability of error is

           p(e) = Q(√(E/N_0)).

   (b) Let us define the random variable Y = ∫_0^{T/2} r(t) dt. Y obeys

           Y|s_0 = ∫_0^{T/2} s_0(t) dt + ∫_0^{T/2} n(t) dt,   Y|s_1 = ∫_0^{T/2} s_1(t) dt + ∫_0^{T/2} n(t) dt.

       Let us define the random variable N = ∫_0^{T/2} n(t) dt. N is a zero mean Gaussian random
       variable with variance

           Var{N} = E{∫_0^{T/2} ∫_0^{T/2} n(τ)n(λ) dτ dλ} = (N_0/2) ∫_0^{T/2} ∫_0^{T/2} δ(τ − λ) dτ dλ = N_0 T/4.

       Y|s_i is a Gaussian random variable (note that Y itself is not Gaussian, but a Gaussian
       mixture!) with mean

           E{Y|s_0} = ∫_0^{T/2} s_0(t) dt = √(2ET)/π,   E{Y|s_1} = ∫_0^{T/2} s_1(t) dt = 0.

       The variance of Y|s_i is identical under both cases, and equal to the variance of N. For the
       given decision rule the error probability is:

           p(e) = p(s_0) Pr{Y < 0|s_0} + p(s_1) Pr{Y > 0|s_1}
                = (1/2) Q((2/π)√(2E/N_0)) + 1/4.

   (c) We will use the same derivation procedure as in the previous item.
       Define the random variables Y, N as follows:

           Y = ∫_0^{aT} r(t) dt,   N = ∫_0^{aT} n(t) dt,

           E{N} = 0,   Var{N} = aT N_0/2,
           E{Y|s_0} = √(2E/T) ∫_0^{aT} sin(2πt/T) dt = (√(2ET)/(2π))(1 − cos 2πa),
           E{Y|s_1} = √(2E/T) ∫_0^{aT} cos(2πt/T) dt = (√(2ET)/(2π)) sin 2πa,
           Var{Y|s_0} = Var{Y|s_1} = Var{N}.

       The distance between the means of Y|s_0 and Y|s_1 equals

           d = (√(2ET)/(2π)) |1 − cos(2πa) − sin(2πa)|.

       For an optimal threshold the probability of error equals Q(d/(2σ)). Hence the probability
       of error equals

           p(e) = Q((1/2)√(E/N_0) (1/(π√a)) |1 − cos(2πa) − sin(2πa)|),

       which is minimized when (1/√a)|1 − cos 2πa − sin 2πa| is maximized.
       Let a_opt denote the a which maximizes the above expression. Numerical solution yields

           a_opt ≈ 0.5885.
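   The numeric maximization behind a_opt ≈ 0.5885, as a sketch:

       import numpy as np

       a = np.linspace(0.01, 1.0, 100_000)
       f = np.abs(1 - np.cos(2*np.pi*a) - np.sin(2*np.pi*a)) / np.sqrt(a)
       print("a_opt ~", a[np.argmax(f)])          # ~ 0.5885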
5. Non-matched receiver.
   Consider binary signaling with equiprobable waveforms

       s_0(t) = 0,   0 ≤ t ≤ T,
       s_1(t) = √(2E/T) sin(πt/T),   0 ≤ t ≤ T,

   in an AWGN channel with spectral density N_0/2. The receiver for this problem is implemented
   as a filter with the impulse response

       h(t) = e^{−at} for t ≥ 0,  0 otherwise,   a ≥ 0.

   More precisely, letting y_T denote the value of the output of the filter sampled at t = T when fed
   by the received signal, the decision is

       y_T < K  ⟹  ŝ = s_0,   y_T ≥ K  ⟹  ŝ = s_1,

   where K > 0 is a decision threshold. Assume that only one symbol is transmitted and answer
   the following questions.
   (a) Determine the resulting error probability p(e).
   (b) Which value of the threshold K minimizes p(e)?
   (c) With the optimal value for K (from the previous item), which value of the filter parameter
       a minimizes p(e)? A numerical solution may be used.
   Solution:
   (a) The decision variable can be expressed as

           y_T = s + w,

       where s is either zero, corresponding to s_0(t), or

           s = ∫ s_1(τ) h(T − τ) dτ = √(2E/T) ∫_0^T sin(πτ/T) e^{−a(T−τ)} dτ
             = √(2ET) π(1 + e^{−aT})/(a²T² + π²),

       corresponding to s_1(t), and where W is zero mean Gaussian. The variance of W is

           Var{W} = (N_0/2) ∫_0^∞ h²(t) dt = N_0/(4a).

       We conclude that the conditional distributions of Y_T are

           Y_T|s_0(t) ~ N(0, N_0/(4a)),   Y_T|s_1(t) ~ N(√(2ET) π(1 + e^{−aT})/(a²T² + π²), N_0/(4a)).

       The error probability is

           p(e) = (1/2)(Pr{Y_T > K|s_0(t)} + Pr{Y_T < K|s_1(t)})
                = (1/2) Q(√(4a/N_0) K) + (1/2) Q(√(4a/N_0) (√(2ET) π(1 + e^{−aT})/(a²T² + π²) − K)).

   (b) As this is a binary decision problem in an AWGN channel, the probability of error, p(e), is
       minimized when the decision threshold K is located half-way between the two alternatives
       for s, corresponding to

           K = (1/2) √(2ET) π(1 + e^{−aT})/(a²T² + π²),

       giving

           p(e) = Q(√(2ETa/N_0) π(1 + e^{−aT})/(a²T² + π²)).

   (c) Setting a to minimize p(e) corresponds to maximizing

           √x (1 + e^{−x})/(x² + π²)

       with respect to x = aT. The maximum is at x ≈ 1.1173, hence the optimal a is a ≈ 1.1173/T.
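   The numeric maximization behind x ≈ 1.1173, as a sketch:

       import numpy as np

       x = np.linspace(0.01, 5.0, 500_000)
       f = np.sqrt(x) * (1 + np.exp(-x)) / (x**2 + np.pi**2)
       print("x_opt ~", x[np.argmax(f)])          # ~ 1.1173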

Bit Error Probability


1. [3, Example 6.2].
   Compare the probability of bit error for 8PSK and 16PSK, in an AWGN channel, assuming
   γ_b = E_b/N_0 = 15 dB and equal a-priori probabilities. Use the following approximations:
   - The nearest neighbor approximation given in class.
   - γ_s = γ_b log_2 M.
   - The approximation for P_{e,bit} given in class.
   Solution:
   The nearest neighbor approximation for the probability of error, in an AWGN channel, for an
   M-PSK constellation is

       P_e ≈ 2Q(√(2γ_s) sin(π/M)).

   The approximation for P_{e,bit} (under Gray mapping at high enough SNR) is

       P_{e,bit} ≈ P_e / log_2 M.

   For 8PSK we have γ_s = (log_2 8) · 10^{15/10} = 94.87. Hence

       P_e ≈ 2Q(√189.74 sin(π/8)) = 1.355 · 10^{−7}.

   Using the approximation for P_{e,bit} we get

       P_{e,bit} = P_e/3 = 4.52 · 10^{−8}.

   For 16PSK we have γ_s = (log_2 16) · 10^{15/10} = 126.49. Hence

       P_e ≈ 2Q(√252.98 sin(π/16)) = 1.916 · 10^{−3}.

   Using the approximation for P_{e,bit} we get

       P_{e,bit} = P_e/4 = 4.79 · 10^{−4}.

   Note that P_{e,bit} is much larger for 16PSK than for 8PSK for the same γ_b. This result is expected,
   since 16PSK packs more bits per symbol into a given constellation, so for a fixed energy-per-bit
   the minimum distance between constellation points will be smaller.
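   The numbers above can be reproduced with a few lines (Q is expressed through the
   complementary error function):

       import math

       Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
       gamma_b = 10 ** (15 / 10)                          # 15 dB per bit

       for M in (8, 16):
           k = math.log2(M)
           gamma_s = k * gamma_b                          # gamma_s = gamma_b * log2(M)
           Pe = 2 * Q(math.sqrt(2 * gamma_s) * math.sin(math.pi / M))
           print(f"{M}PSK: Pe ~ {Pe:.3e}, Pe_bit ~ {Pe / k:.3e}")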
2. Bit error probability for rectangular constellation.
   Let p_0(t) and p_1(t) be two orthonormal functions, different from zero in the time interval [0, T].
   The equiprobable signals defined in Figure 11 are transmitted through a zero-mean AWGN
   channel with noise PSD equal to N_0/2.
   (a) Calculate P_e for the optimal receiver.
   (b) Calculate P_{e,bit} for the optimal receiver (optimal in the sense of minimal P_e).
   (c) Approximate P_{e,bit} for high SNR (d/2 ≫ √(N_0/2)). Explain.

   Figure 11: Eight signals in rectangular constellation.

   Solution:
   Let n_0 denote the noise projection on p_0(t) and n_1 the noise projection on p_1(t). Clearly
   n_i ~ N(0, N_0/2), i = 0, 1.
   (a) Let P_c denote the probability of a correct symbol decision; hence P_e = 1 − P_c. For the four
       corner points:

           Pr{correct decision|(000) was transmitted} = (1 − Q((d/2)/√(N_0/2)))²
               = Pr{correct decision|(100) was transmitted}
               = Pr{correct decision|(010) was transmitted}
               = Pr{correct decision|(110) was transmitted} ≜ P_1,

       where the equalities are due to the constellation symmetry. For the four inner points:

           Pr{correct decision|(001) was transmitted} = (1 − Q((d/2)/√(N_0/2)))(1 − 2Q((d/2)/√(N_0/2)))
               = Pr{correct decision|(101) was transmitted}
               = Pr{correct decision|(011) was transmitted}
               = Pr{correct decision|(111) was transmitted} ≜ P_2,

       where again the equalities are due to the constellation symmetry. Hence

           P_c = (1/2)[(1 − Q((d/2)/√(N_0/2)))² + (1 − Q((d/2)/√(N_0/2)))(1 − 2Q((d/2)/√(N_0/2)))],
           P_e = 1 − P_c = (1/2)[5Q((d/2)/√(N_0/2)) − 3Q²((d/2)/√(N_0/2))].

   (b) Let b_0 denote the MSB, b_2 the LSB and b_1 the middle bit⁷. Let b_i(s), i = 0, 1, 2 denote the
       i-th bit of the constellation point s. For the LSB:

           Pr{error in b_2|(000) was transmitted} = Σ_{ŝ: b_2(ŝ)≠0} Pr{ŝ was received|(000) was transmitted}
               = Pr{d/2 < n_0 < 5d/2}
               = Q((d/2)/√(N_0/2)) − Q((5d/2)/√(N_0/2))
               = Pr{error in b_2|(100) was transmitted}
               = Pr{error in b_2|(010) was transmitted}
               = Pr{error in b_2|(110) was transmitted} ≜ P_1',

       where the equalities are due to the constellation symmetry, and

           Pr{error in b_2|(001) was transmitted} = Σ_{ŝ: b_2(ŝ)≠1} Pr{ŝ was received|(001) was transmitted}
               = Pr{n_0 < −d/2} + Pr{3d/2 < n_0}
               = Q((d/2)/√(N_0/2)) + Q((3d/2)/√(N_0/2))
               = Pr{error in b_2|(101) was transmitted}
               = Pr{error in b_2|(011) was transmitted}
               = Pr{error in b_2|(111) was transmitted} ≜ P_2'.

       Using similar arguments we can calculate the bit error probability for b_1:

           Pr{error in b_1|(000) was transmitted} = Q((3d/2)/√(N_0/2))
               = Pr{error in b_1|(100), (010), (110) was transmitted} ≜ P_3,

           Pr{error in b_1|(001) was transmitted} = Q((d/2)/√(N_0/2))
               = Pr{error in b_1|(101), (011), (111) was transmitted} ≜ P_4.

       The bit error probability for b_0 equals

           Pr{error in b_0|(000) was transmitted} = Q((d/2)/√(N_0/2)) ≜ P_5.

       Due to the constellation symmetry and the bit mapping, the bit error probability for b_0 is
       equal for all the constellation points.
       Let P_{e,b_i}, i = 0, 1, 2 denote the averaged (over all signals) bit error probability of the i-th
       bit; then

           P_{e,b_0} = P_5,   P_{e,b_1} = (1/2)(P_3 + P_4),   P_{e,b_2} = (1/2)(P_1' + P_2').

       The averaged bit error probability, P_{e,bit}, is given by

           P_{e,bit} = (1/3) Σ_{i=0}^{2} P_{e,b_i}
                     = (5/6) Q((d/2)/√(N_0/2)) + (1/3) Q((3d/2)/√(N_0/2)) − (1/6) Q((5d/2)/√(N_0/2)).

   (c) For d/2 ≫ √(N_0/2):

           P_{e,bit} ≈ (5/6) Q((d/2)/√(N_0/2)),   P_e ≈ (5/2) Q((d/2)/√(N_0/2)),

       hence P_{e,bit} ≈ P_e/3. Note that P_e/log_2 M = P_e/3 is the lower bound for P_{e,bit}.

   ⁷ For the top left constellation point in Figure 11, (b_0, b_1, b_2) = (010).
3. Octagon constellation.
Consider the signal constellation depicted in Figure 12.

Figure 12: Octagonal constellation.


Each of the eight signal points carries three bits of information, with labeling as indicated in
Figure 12. The bits are equiprobable and independent. The signals are transmitted over an
AWGN channel with spectral density N20 and the receiver is the optimal (minimum symbol error
probability) receiver.
(a) Sketch the decision regions of the optimal receiver.
(b) Let p(e) denote the resulting symbol error probability. Show that






d
d
2d
Q
< p(e) < Q
+Q
.
2N0
2N0
2N0
(c) Determine an exact expression for the bit-error probability, in terms of the Q-function and
d2
the ratio N
.
0
Solution:
(a) As the noise is AWGN, the optimal decoding rule is minimum distance. The cprresponding
decision regions of the optimal receiver are depicted in Figure 13.
(b) The upper bound is the union bound. Note that each signal point has two neighbors, one at
distance d and one at distance 2d, and that only these two neighbors need to be included in
the bound to make it valid.
The lower bound is the nearest neighbor term in the union bound. Since there are more than
two signal points, including only this term will surely give a lower bound.
(c) Let b0 b1 b2 be the bits corresponding to one signal point, with b0 the MSB and b2 the LSB.
Let si denote the ith signal point, with
i = b0 4 + b1 2 + b2 .

48

Figure 13: Decision regions for the optimal receiver. Ik is the region for deciding on sk , k = 0, 1, . . . , 7.
Let bm , m = 0, 1, 2, be the received bits after detection. The averaged bit error probability is
Pe,bit =

3


1 X
Pr bm 6= bm .
3 m=1

Let r = (r0 , r1 ) be the received point. For b0 , assume b0 = 0 and note that b0 = 1 if the
received point is in the right half-plane, that is
n
o
Pr b0 6= 0|b0 = 0 = Pr {r0 > 0|si , i {0, 1, 2, 3} was transmitted}
=

1
1
1
1
Pr {r0 > 0|s0 } + Pr {r0 > 0|s1 } + Pr {r0 > 0|s2 } + Pr {r0 > 0|s3 } .
4
4
4
4

The distance between the constellation point s_6 and the r_1 axis is \frac{d}{2}. Referring to Figure 14,
observe that the distance between the constellation point s_6 and the r_0 axis is \frac{d}{2} + x = \frac{d}{2} + \sqrt{2} d.

Figure 14: Distances from the axes r_0 and r_1 for the constellation point s_6; (2d)^2 = 2x^2 \Rightarrow x = \sqrt{2} d.

Therefore,

Pr{ \hat{b}_0 \neq 0 | b_0 = 0 } = \frac{1}{2} Pr{r_0 > 0 | s_0} + \frac{1}{2} Pr{r_0 > 0 | s_1}
= \frac{1}{2} Q( \sqrt{ \frac{d^2}{2 N_0} } ) + \frac{1}{2} Q( (2\sqrt{2} + 1) \sqrt{ \frac{d^2}{2 N_0} } )
= A_0.

Due to symmetry, we get Pr{ \hat{b}_0 \neq 1 | b_0 = 1 } = Pr{ \hat{b}_0 \neq 0 | b_0 = 0 }, hence

Pr{ \hat{b}_0 \neq b_0 } = A_0.
For \hat{b}_1 it is easy to see that Pr{ \hat{b}_1 \neq b_1 } = Pr{ \hat{b}_0 \neq b_0 }: assuming b_1 = 0, an
error occurs if r_1 > 0, thus the situation is identical to the case of b_0 (subject to a 90° rotation).
Finally, let n = (n_0, n_1) be the noise terms in the received signal. Consider the rotated noise
terms, \tilde{n} = (\tilde{n}_0, \tilde{n}_1), as depicted in Figure 15 [footnote 8].

Figure 15: Decision regions for the optimal receiver, and rotated noise terms.
Then, assuming b_2 = 0, we get

Pr{ \hat{b}_2 \neq 0 | b_2 = 0 } = Pr{ s_i, i \in {1, 3, 5, 7} was received | s_j, j \in {0, 2, 4, 6} was transmitted }
= \frac{1}{4} ( Pr{\hat{b}_2 = 1 | s = s_0} + Pr{\hat{b}_2 = 1 | s = s_2} + Pr{\hat{b}_2 = 1 | s = s_4} + Pr{\hat{b}_2 = 1 | s = s_6} ).

Note that due to the 90° symmetry,

Pr{\hat{b}_2 = 1 | s = s_0} = Pr{\hat{b}_2 = 1 | s = s_2} = Pr{\hat{b}_2 = 1 | s = s_4} = Pr{\hat{b}_2 = 1 | s = s_6}.

Footnote 8: The vectors n and \tilde{n} have the same PDF, as they are the projections of an AWGN on orthonormal bases.


Thus,

Pr{ \hat{b}_2 \neq 0 | b_2 = 0 } = Pr{ \hat{b}_2 \neq 0 | s = s_6 }
= Pr{ s \in I_7 \cup I_5 \cup I_1 \cup I_3 | s = s_6 }
= Pr{ s \in I_7 \cup I_5 | s = s_6 } + Pr{ s \in I_1 \cup I_3 | s = s_6 }
= Pr{ \tilde{n}_0 > d, \tilde{n}_1 < \frac{1 + \sqrt{2}}{2} d } + Pr{ \tilde{n}_0 < d, \tilde{n}_1 > \frac{1 + \sqrt{2}}{2} d }
= Q( \frac{d}{\sqrt{N_0/2}} ) ( 1 - Q( \frac{(1 + \sqrt{2}) d / 2}{\sqrt{N_0/2}} ) ) + Q( \frac{(1 + \sqrt{2}) d / 2}{\sqrt{N_0/2}} ) ( 1 - Q( \frac{d}{\sqrt{N_0/2}} ) )
= Q( \frac{d}{\sqrt{N_0/2}} ) + Q( \frac{(1 + \sqrt{2}) d / 2}{\sqrt{N_0/2}} ) - 2 Q( \frac{d}{\sqrt{N_0/2}} ) Q( \frac{(1 + \sqrt{2}) d / 2}{\sqrt{N_0/2}} )
= A_2.
Note that due to the 90° symmetry of the problem, Pr{ \hat{b}_2 \neq 1 | b_2 = 1 } = Pr{ \hat{b}_2 \neq 0 | b_2 = 0 }.
Hence,

P_{e,bit} = \frac{2}{3} A_0 + \frac{1}{3} A_2.
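A minimal numerical sketch of parts (b) and (c), assuming the reconstructed expressions above (d and N_0 are illustrative assumptions): it evaluates A_0, A_2, the averaged bit error probability, and the union-bound sandwich on the symbol error probability.

```python
import numpy as np
from scipy.stats import norm

Q = norm.sf
d, N0 = 2.0, 1.0                      # illustrative values
s = np.sqrt(N0 / 2)                   # per-dimension noise std

# Part (c): bit-error terms
A0 = 0.5 * Q(np.sqrt(d**2 / (2*N0))) \
   + 0.5 * Q((2*np.sqrt(2) + 1) * np.sqrt(d**2 / (2*N0)))
q1 = Q(d / s)
q2 = Q((1 + np.sqrt(2)) * d / 2 / s)
A2 = q1 + q2 - 2 * q1 * q2
Pe_bit = (2/3) * A0 + (1/3) * A2

# Part (b): symbol-error bounds
lower = Q(d / np.sqrt(2 * N0))
upper = lower + Q(2 * d / np.sqrt(2 * N0))
print(f"A0={A0:.4e}, A2={A2:.4e}, Pe_bit={Pe_bit:.4e}")
print(f"{lower:.4e} < p(e) < {upper:.4e}")
```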


9 Connection with the Concept of Capacity


1. [2, Problem 9.29].
A voice-grade channel of the telephone network has a bandwidth of 3.4 KHz. Assume real-valued
symbols.
(a) Calculate the capacity of the telephone channel for signal-to-noise ratio of 30 dB.
(b) Calculate the minimum signal-to-noise ratio required to support information transmission
through the telephone channel at the rate of 4800 bits/sec.
Solution:
(a) The channel bandwidth is W = 3.4 KHz. The received signal-to-noise ratio is SNR = 10^3
(30 dB). Hence the channel capacity is

C = W \log_2(1 + SNR) = 3.4 \cdot 10^3 \log_2(1 + 10^3) = 33.9 \cdot 10^3 bits/sec.

(b) The required SNR is the solution of the following equation:

4800 = 3.4 \cdot 10^3 \log_2(1 + SNR) \Rightarrow SNR = 1.66 (2.2 dB).
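A short sketch reproducing both numbers (direct evaluation of C = W log2(1+SNR) and its inversion for the required SNR):

```python
import numpy as np

W = 3.4e3                          # channel bandwidth [Hz]
snr = 10 ** (30.0 / 10)            # 30 dB

C = W * np.log2(1 + snr)           # AWGN capacity [bits/sec]
print(f"C = {C/1e3:.1f} kbits/sec")          # ~33.9

R = 4800.0                         # target rate [bits/sec]
snr_min = 2 ** (R / W) - 1         # invert C = W log2(1 + SNR)
print(f"min SNR = {snr_min:.2f} ({10*np.log10(snr_min):.2f} dB)")  # ~1.66, ~2.2 dB
```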
2. [1, Problem 7.17].
Channel C1 is an additive white Gaussian noise channel with a bandwidth W , average transmitter
power P, and noise PSD \frac{N_0}{2}. Channel C2 is an additive white Gaussian noise channel with the
same bandwidth and average power as channel C1 but with noise PSD S_n(f). It is further assumed
that the total noise power for both channels is the same; that is,

\int_{-W}^{W} S_n(f) df = \int_{-W}^{W} \frac{N_0}{2} df = N_0 W.
Which channel do you think has larger capacity? Give an intuitive reasoning.
Solution:
The capacity of the additive white Gaussian channel is:


C = W \log_2( 1 + \frac{P}{N_0 W} ).
For the nonwhite Gaussian noise channel, although the noise power is equal to the noise power in
the white Gaussian noise channel, the capacity is higher. The reason is that since noise samples
are correlated, knowledge of the previous noise samples provides partial information on the future
noise samples and therefore reduces their effective variance.
3. Capacity of ISI channel.
Consider a channel with Inter Symbol Interference (ISI) defined as follows
y_k = \sum_{i=0}^{L-1} h_i x_{k-i} + z_k.

The channel input obeys an average power constraint E{x_k^2} \le P, and the noise z_k is i.i.d.
Gaussian distributed: z_k \sim N(0, \sigma_z^2). Assume that H(e^{j2\pi f}) has no zeros and show that the
channel capacity is

C = \frac{1}{2} \int_{-W}^{W} \log( 1 + \frac{ [\lambda - \sigma_z^2 / |H(e^{j2\pi f})|^2]^+ }{ \sigma_z^2 / |H(e^{j2\pi f})|^2 } ) df,

where \lambda is a constant selected such that

\int_{-W}^{W} [ \lambda - \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2} ]^+ df = P.

You may use the following theorem.

Theorem 1. Let the transmitter have a maximum average power constraint of P [Watts]. The
capacity of an additive Gaussian noise channel with noise power spectrum N(f) [Watts/Hz] is given by

C = \int \frac{1}{2} \log_2( 1 + \frac{ [\lambda - N(f)]^+ }{ N(f) } ) df  [bits/sec],

where \lambda is chosen so that \int [\lambda - N(f)]^+ df = P.
Solution:
Since H(e^{j2\pi f}) has no zeros, the ISI filter is invertible. Inverting the channel results in

\frac{Y(e^{j2\pi f})}{H(e^{j2\pi f})} = X(e^{j2\pi f}) + \frac{Z(e^{j2\pi f})}{H(e^{j2\pi f})} = X(e^{j2\pi f}) + \tilde{Z}(e^{j2\pi f}).

This is a problem of a colored Gaussian channel with no ISI. The noise PSD is

S_{\tilde{Z}\tilde{Z}}(e^{j2\pi f}) = \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2}.

The capacity of this channel, using Theorem 1, is given by

C = \frac{1}{2} \int_{-W}^{W} \log( 1 + \frac{ [\lambda - \sigma_z^2 / |H(e^{j2\pi f})|^2]^+ }{ \sigma_z^2 / |H(e^{j2\pi f})|^2 } ) df,

where \lambda is a constant selected such that

\int_{-W}^{W} [ \lambda - \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2} ]^+ df = P.
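To make the water-filling in Theorem 1 concrete, here is a small numerical sketch (the channel shape, power, and noise level are illustrative assumptions, not part of the exercise): it bisects on the water level \lambda for a discretized effective noise spectrum N(f) = \sigma_z^2 / |H(e^{j2\pi f})|^2 and evaluates the capacity integral.

```python
import numpy as np

# Illustrative discretized channel over |f| <= W (assumed values)
W, P, sigma2_z = 0.5, 1.0, 0.1
f = np.linspace(-W, W, 2001)
H2 = 1.0 + 0.8 * np.cos(2 * np.pi * f) ** 2   # assumed |H(e^{j2pi f})|^2
N = sigma2_z / H2                              # effective noise spectrum
df = f[1] - f[0]

def allocated_power(lam):
    return np.sum(np.maximum(lam - N, 0.0)) * df

# Bisect on lambda so the allocated power equals P
lo, hi = N.min(), N.max() + P / (2 * W)
for _ in range(60):
    lam = 0.5 * (lo + hi)
    lo, hi = (lam, hi) if allocated_power(lam) < P else (lo, lam)

Pf = np.maximum(lam - N, 0.0)                  # water-filling allocation
C = 0.5 * np.sum(np.log2(1 + Pf / N)) * df
print(f"lambda = {lam:.4f}, capacity = {C:.4f}")
```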

4. Final B, 2011.
Consider a communication system (denoted by system A) consisting of two transmitters (Tx1 and
Tx2) capable of simultaneous transmission and one receiver.
Tx1 transmits a real one-dimensional signal with power constraint P [Watts], using the
frequency band f_{1L} \le f \le f_{1U}, f_{1L} = 0. In the frequency band f_{1L} \le f \le f_{1U} the channel
frequency response is constant, real, and equals A > 0, such that the received signal power is
P A^2. The noise is an AWGN with spectral density \frac{N_0}{2} [Watts/Hz]. Let W_1 = f_{1U} - f_{1L} denote
the channel bandwidth for Tx1.
Tx2 transmits a real signal with power constraint P [Watts], using the frequency band f_{2L} \le
f \le f_{2U}, f_{2L} > 0. In the frequency band f_{2L} \le f \le f_{2U} the channel frequency response is
constant, real, and equals B > 0, such that the received signal power is P B^2. The noise is an
AWGN with spectral density \frac{N_0}{2} [Watts/Hz]. Let W_2 = f_{2U} - f_{2L} denote the channel bandwidth
for Tx2.

Figure 16: Frequency bands for Tx1 and Tx2 .


It is also given that the frequencies f1U and f2L obey f2L > f1U (Tx1 frequency band and
Tx2 frequency band do not overlap), see Figure 16.
Each transmitter operates separately (the transmitters cannot exchange transmission power).
The receiver receives the entire bandwidth f1L f f2U .
(a) Find the capacity of system A.
System B improves system A by transmitting a complex signal in place of the real signal. Similarly
to system A, each transmitter operates separately (the transmitters cannot exchange transmission
power).
(b) Find the capacity of system B.
System C improves system B by enabling transmission power exchange between the two transmitters, resulting in a single effective transmitter with transmission power constraint 2P [watts].
Note that this single transmitter is capable of simultaneous transmission in both frequency bands.
(c) For A = B, W1 = W2 , compare the capacity of system C with the capacity of system B.
Explicit capacity calculation is not needed. Explain your answer.
(d) For A < B, W1 = W2 , compare the capacity of system C with the capacity of system B.
Explicit capacity calculation is not needed. Explain your answer.
Solution:
(a) Let C_i, i = 1, 2, denote the capacity corresponding to Tx_i and W_i. Let C_A denote the capacity
of system A. Since the two transmitters are separate, C_A = C_1 + C_2. Using Theorem 1
presented in class, and the fact that in the frequency band of Tx1 the channel gain is A, C_1
equals

C_1 = W_1 \log_2( 1 + \frac{A^2 P}{N_0 W_1} ).

Similarly, C_2 equals

C_2 = W_2 \log_2( 1 + \frac{B^2 P}{N_0 W_2} ),

and the capacity of system A is

C_A = C_1 + C_2 = W_1 \log_2( 1 + \frac{A^2 P}{N_0 W_1} ) + W_2 \log_2( 1 + \frac{B^2 P}{N_0 W_2} ).

(b) Using Theorem 2 presented in class, as each transmitter is independent, we find the capacity
for each transmitter separately. From Theorem 2,

C = \int_W \log_2( 1 + \frac{P(f) |H(f)|^2}{N_0} ) df,   P(f) = [ \lambda - \frac{N_0}{|H(f)|^2} ]^+,   P = \int_W P(f) df.

For Tx1, |H(f)| = A, hence

C_1 = \int_{-f_{1U}}^{f_{1U}} \log_2( 1 + \frac{P_1(f) A^2}{N_0} ) df,   P_1(f) = [ \lambda - \frac{N_0}{A^2} ]^+,   P = \int_{-f_{1U}}^{f_{1U}} P_1(f) df.

The equation P_1(f) = [\lambda - \frac{N_0}{A^2}]^+ indicates that the power allocation is constant over
0 \le |f| \le f_{1U} and zero otherwise. Using the last equation with P_1(f) = K for 0 \le |f| \le f_{1U}
yields

2 W_1 K = P \Rightarrow P_1(f) = K = \frac{P}{2 W_1},

and

C_1 = 2 W_1 \log_2( 1 + \frac{A^2 P}{2 W_1 N_0} ).

Following similar arguments for Tx2 yields

C_2 = 2 W_2 \log_2( 1 + \frac{B^2 P}{2 W_2 N_0} ),

and the capacity of system B is

C_B = C_1 + C_2 = 2 W_1 \log_2( 1 + \frac{A^2 P}{2 W_1 N_0} ) + 2 W_2 \log_2( 1 + \frac{B^2 P}{2 W_2 N_0} ).
(c) When there is one transmitter that can transmit on both bands simultaneously, the capacity
is given by the water-filling theorem. Since A = B, W1 = W2 , the channel is fixed over both
bands and the same power is allocated to each band. Thus, P will be allocated to W1 and
P will be allocated to W2 , and the capacity of both systems will be identical.
(d) Since the channel is different in each band the water-filling allocation will not be constant in
frequency and the capacity will be higher for system C.
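A small numerical illustration of items (c) and (d) (all values are assumptions chosen for the example): it computes C_B with a fixed P-per-band split and C_C by water-filling the total 2P over the two flat bands, confirming C_C \ge C_B with equality when A = B.

```python
import numpy as np

A, B = 1.0, 2.0                  # assumed gains, A < B
W1 = W2 = 1.0e3                  # [Hz]
P, N0 = 1.0e-3, 1.0e-9           # [W], [W/Hz]

def cap(w, g2, p):
    # complex-signal band capacity: 2w log2(1 + g2*p/(2*w*N0))
    return 2 * w * np.log2(1 + g2 * p / (2 * w * N0))

CB = cap(W1, A**2, P) + cap(W2, B**2, P)

# System C: water-fill 2P over two flat bands with noise levels N0/A^2, N0/B^2
levels = np.array([N0 / A**2, N0 / B**2])
widths = np.array([2 * W1, 2 * W2])
lo, hi = levels.min(), levels.max() + 2 * P / widths.sum()
for _ in range(80):
    lam = 0.5 * (lo + hi)
    used = np.sum(widths * np.maximum(lam - levels, 0.0))
    lo, hi = (lam, hi) if used < 2 * P else (lo, lam)
CC = np.sum(widths * np.log2(1 + np.maximum(lam - levels, 0.0) / levels))
print(f"CB = {CB:.1f} bits/s, CC = {CC:.1f} bits/s")
```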

10 Continuous Phase Modulations

1. [1, Problem 4.14].


Consider an equivalent low-pass digitally modulated signal of the form

u(t) = \sum_n [ a_n g(t - 2nT) - j b_n g(t - 2nT - T) ],

where {a_n} and {b_n} are two sequences of statistically independent binary digits and g(t) is a
sinusoidal pulse defined as

g(t) = { \sin( \frac{\pi t}{2T} ), 0 \le t \le 2T;  0, otherwise }.
This type of signal is viewed as a four-phase PSK signal in which the pulse shape is one-half cycle
of a sinusoid. Each of the information sequences {a_n} and {b_n} is transmitted at a rate of
\frac{1}{2T} bits/sec and, hence, the combined transmission rate is \frac{1}{T} bits/sec. The two sequences
are staggered in time by T seconds in transmission. Consequently, the signal u(t) is called
staggered four-phase PSK.

(a) Show that the envelope |u(t)| is a constant, independent of the information a_n on the
in-phase component and the information b_n on the quadrature component. In other words, the
amplitude of the carrier used in transmitting the signal is constant.
(b) Determine the power density spectrum of u(t).
(c) Compare the power density spectrum obtained from (1b) with the power density spectrum
of the MSK signal [1, 4.4.2]. What conclusion can you draw from this comparison?
Solution:
(a) Since the signaling rate is \frac{1}{2T} for each sequence and since g(t) has duration 2T, for any time
instant only g(t - 2nT) and g(t - 2nT - T) or g(t - 2nT + T) will contribute to u(t). Hence,
for 2nT \le t \le 2nT + T:

|u(t)| = | a_n g(t - 2nT) - j b_n g(t - 2nT - T) |
= \sqrt{ a_n^2 g^2(t - 2nT) + b_n^2 g^2(t - 2nT - T) }
= \sqrt{ g^2(t - 2nT) + g^2(t - 2nT - T) }
= \sqrt{ \sin^2( \frac{\pi t}{2T} ) + \sin^2( \frac{\pi (t - T)}{2T} ) }
= \sqrt{ \sin^2( \frac{\pi t}{2T} ) + \cos^2( \frac{\pi t}{2T} ) } = 1,  for all t.

(b) The power density spectrum is

S_U(f) = \frac{1}{T} |G(f)|^2,

where G(f) = \int_{-\infty}^{\infty} g(t) e^{-j2\pi f t} dt = \int_0^{2T} \sin( \frac{\pi t}{2T} ) e^{-j2\pi f t} dt. By using Euler's
formula it is easily shown that

G(f) = \frac{4T \cos(2\pi T f)}{\pi (1 - 16 T^2 f^2)} e^{-j2\pi f T},

hence

S_U(f) = \frac{16 T \cos^2(2\pi T f)}{\pi^2 (1 - 16 T^2 f^2)^2}.

(c) The above power density spectrum is identical to that for the MSK signal. Therefore, the
MSK signal can be generated as a staggered four phase PSK signal with a half-period sinusoidal pulse for g(t).
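The constant-envelope claim of part (a) is easy to verify numerically; below is a minimal sketch (random data, T = 1 are illustrative choices) that synthesizes u(t) and checks that |u(t)| = 1 in the steady-state region where the staggered pulses fully overlap:

```python
import numpy as np

T = 1.0
t = np.linspace(0, 20 * T, 4000)
rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=32)
b = rng.choice([-1, 1], size=32)

def g(t):
    # half-cycle sinusoidal pulse of duration 2T
    return np.where((t >= 0) & (t <= 2 * T), np.sin(np.pi * t / (2 * T)), 0.0)

u = np.zeros_like(t, dtype=complex)
for n in range(len(a)):
    u += a[n] * g(t - 2 * n * T) - 1j * b[n] * g(t - 2 * n * T - T)

# Envelope should equal 1 wherever both pulse streams are active (t >= T)
env = np.abs(u[(t >= T) & (t <= t.max() - T)])
print(f"envelope: min={env.min():.6f}, max={env.max():.6f}")
```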
2. [1, Problem 5.29].
In an MSK signal, the initial state for the phase is either 0 or \pi rad. Determine the terminal
phase state for the following four pairs of input data b_0 b_1: (a) 00; (b) 01; (c) 10; (d) 11.
Solution:
We assume that the input bits 0, 1 are mapped to the symbols -1 and 1, respectively. The terminal
phase of an MSK signal at time instant n is given by

\theta(n; s) = \frac{\pi}{2} \sum_{k=0}^{n} s_k + \theta_0,

where \theta_0 is the initial phase and s_k is \pm 1 depending on the input bit at time instant k. The
following table shows \theta(1; s) for the two values of \theta_0 and the four input pairs of data:

\theta_0   b_0 b_1   s_0  s_1   \theta(1; s)
0          0   0     -1   -1    -\pi
0          0   1     -1    1     0
0          1   0      1   -1     0
0          1   1      1    1     \pi
\pi        0   0     -1   -1     0
\pi        0   1     -1    1     \pi
\pi        1   0      1   -1     \pi
\pi        1   1      1    1     2\pi (\equiv 0)
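The table follows directly from the phase recursion; a three-line sketch that reproduces it (phases reported modulo 2\pi, so -\pi prints as \pi — the same phase state):

```python
import numpy as np

def terminal_phase(bits, theta0):
    # theta(1;s) = theta0 + (pi/2) * (s0 + s1), with bit 0 -> -1, bit 1 -> +1
    s = np.array([1 if b else -1 for b in bits])
    return (theta0 + np.pi / 2 * s.sum()) % (2 * np.pi)

for theta0 in (0.0, np.pi):
    for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(f"theta0={theta0:.2f}, bits={bits}: {terminal_phase(bits, theta0):.2f}")
```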

3. [1, Problem 5.30].

A continuous-phase FSK signal with h = \frac{1}{2} is represented as

s(t) = \sqrt{ \frac{2 E_b}{T_b} } \cos( \frac{\pi t}{2 T_b} ) \cos(2\pi f_c t) \mp \sqrt{ \frac{2 E_b}{T_b} } \sin( \frac{\pi t}{2 T_b} ) \sin(2\pi f_c t),   0 \le t \le 2 T_b,

where the signs depend on the information bits transmitted.


(a) Show that this signal has constant amplitude.
(b) Sketch a block diagram of the modulator for synthesizing the signal from the input bit stream.
(c) Sketch a block diagram of the demodulator and detector for recovering the information bit
stream from the signal.
Solution:
(a) The envelope of the signal is

|s(t)| = \sqrt{ s_c^2(t) + s_s^2(t) }
= \sqrt{ \frac{2 E_b}{T_b} \cos^2( \frac{\pi t}{2 T_b} ) + \frac{2 E_b}{T_b} \sin^2( \frac{\pi t}{2 T_b} ) }
= \sqrt{ \frac{2 E_b}{T_b} },

which is constant.

(b) The signal s(t) is equivalent to an MSK signal. Figure 17 depicts a block diagram of the
modulator for synthesizing the signal. In Figure 17, x_e denotes the even pulse sequence and x_o
the odd pulse sequence.

Figure 17: Modulator block diagram.

(c) Figure 18 depicts a block diagram of the demodulator.

Figure 18: Demodulator block diagram.


4. [1, Problem 5.31] [footnote 9]
Sketch the state trellis and the state diagram for partial-response CPM with h = \frac{1}{2} and

g(t) = { \frac{1}{4T}, 0 \le t \le 2T;  0, otherwise }.

Footnote 9: Read [1, Subsection 4.3.3].

Solution:
Since p = 2, m is odd (m = 1), L = 2 and M = 2, there are

N_s = 2 p M = 8

phase states, which we denote as Q_n = (\theta_n, s_{n-1}). The 2p = 4 phase states corresponding to \theta_n
are

\theta = 0, \frac{\pi}{2}, \pi, \frac{3\pi}{2},

and therefore, the eight states Q_n are

(0, 1), (0, -1), (\frac{\pi}{2}, 1), (\frac{\pi}{2}, -1), (\pi, 1), (\pi, -1), (\frac{3\pi}{2}, 1), (\frac{3\pi}{2}, -1).

Having at our disposal the state (\theta_n, s_{n-1}) and the transmitted symbol s_n, we can find the new
phase state as

(\theta_n, s_{n-1}) \xrightarrow{s_n} (\theta_n + \frac{\pi}{2} s_{n-1}, s_n) = (\theta_{n+1}, s_n).

The trellis diagram is depicted in Figure 19. In Figure 19, solid lines denote transitions corresponding to s_n = 1, while dashed lines denote transitions corresponding to s_n = -1.

Figure 19: Trellis diagram.


The state diagram is depicted in Figure 20.

Figure 20: State diagram.

11 Colored AGN Channel

1. Colored noise.
Consider the following four equiprobable signals:

s_0(t) = \frac{1}{\sqrt{\pi}} \cos(t),   s_1(t) = \frac{1}{\sqrt{\pi}} \sin(t),   s_2(t) = -s_0(t),   s_3(t) = -s_1(t),   0 \le t \le 2\pi.

The received signal obeys r(t) = s(t) + n(t), where n(t) is a colored Gaussian noise with the
following PSD:

S_N(\omega) = \frac{N_0}{2} \frac{\omega^2}{1 + \omega^2},   \omega [rad/sec];
sec
the noise n(t) and the signal s(t) are independent.
(a) The optimal receiver for this scenario consists of a whitening filter, H(\omega), followed by an
optimal receiver for the AWGN channel. What should be the whitening filter amplitude,
|H(\omega)|^2, so that the noise at the filter output is white?
(b) Find the above H(\omega) and the corresponding h(t), which can be composed of an adder and
an integrator.
(c) For a noise-free channel, what are the transmitted signals, \tilde{s}_i(t), i = 0, \ldots, 3, at the output
of the whitening filter?
(d) Let \tilde{s}_i(t) \triangleq s_i(t) * h(t), i = 0, \ldots, 3, where h(t) is the impulse response of H(\omega). Find a
set of real orthonormal basis functions which span the set S = (\tilde{s}_0(t), \ldots, \tilde{s}_3(t)). Find the
projection of each element in the set S on the basis functions.
(e) Sketch the optimal receiver.
Solution:
(a) The noise at the filter output will be white if

|H(\omega)|^2 \frac{N_0}{2} \frac{\omega^2}{1 + \omega^2} = constant,

hence |H(\omega)|^2 = \frac{1 + \omega^2}{\omega^2}.

(b) Let the constant be \frac{N_0}{2}. One of the filters which obeys |H(\omega)|^2 = \frac{1 + \omega^2}{\omega^2} is

H_0(\omega) = \frac{1 + j\omega}{j\omega} = 1 + \frac{1}{j\omega}.

The impulse response of H_0(\omega) is h(t) = \delta(t) + u(t) - \frac{1}{2}, where u(t) denotes a step function.
Hence

\tilde{s}_i(t) = s_i(t) * [ \delta(t) + u(t) - \frac{1}{2} ] = s_i(t) + \int_{-\infty}^{t} s_i(\tau) d\tau - \frac{1}{2} \int_{-\infty}^{\infty} s_i(\tau) d\tau = s_i(t) + \int_{-\infty}^{t} s_i(\tau) d\tau,

where the last step follows from the fact that

\int_{-\infty}^{\infty} s_i(\tau) d\tau = \int_0^{2\pi} s_i(\tau) d\tau = 0,   i = 0, 1, 2, 3.

Therefore H_0(\omega) can be implemented using an adder and an integrator.

(c)

\tilde{s}_i(t) = s_i(t) + \int_0^{\min(2\pi, t)} s_i(\tau) d\tau
= { 0, t < 0;   s_i(t) + \int_0^t s_i(\tau) d\tau, 0 \le t \le 2\pi;   \int_0^{2\pi} s_i(\tau) d\tau = 0, t > 2\pi }.

Therefore the filtered signals are

\tilde{s}_0(t) = \frac{1}{\sqrt{\pi}} ( \cos(t) + \sin(t) ),   \tilde{s}_1(t) = \frac{1}{\sqrt{\pi}} ( \sin(t) + 1 - \cos(t) ),
\tilde{s}_2(t) = -\tilde{s}_0(t),   \tilde{s}_3(t) = -\tilde{s}_1(t),   t \in [0, 2\pi].
(d) The functions \phi_0 = \sin(t) + \cos(t) and \phi_1 = \sin(t) + 1 - \cos(t) are orthogonal over the range
[0, 2\pi], hence they establish a basis of the signal space. In order to have an orthonormal basis we
should normalize \phi_0 and \phi_1:

\|\phi_0(t)\|^2 = \langle \sin(t) + \cos(t), \sin(t) + \cos(t) \rangle = 2\pi,
\|\phi_1(t)\|^2 = \langle \sin(t) + 1 - \cos(t), \sin(t) + 1 - \cos(t) \rangle = 4\pi.

Hence the orthonormal basis is

\psi_0(t) = \frac{1}{\sqrt{2\pi}} ( \sin(t) + \cos(t) ),   \psi_1(t) = \frac{1}{\sqrt{4\pi}} ( \sin(t) + 1 - \cos(t) ).

The vectors of the whitened signals are

\tilde{s}_0 = ( \sqrt{2}, 0 ),   \tilde{s}_1 = ( 0, 2 ),   \tilde{s}_2 = -\tilde{s}_0,   \tilde{s}_3 = -\tilde{s}_1.

(e) A type-II receiver is depicted in Figure 21, where T = 2\pi.

Figure 21: Type-II receiver for ACGN.


The whitening filter can be integrated into the matched filters. Since \cos(2\pi - t) = \cos(t) and
\sin(2\pi - t) = -\sin(t),

\tilde{s}_0(2\pi - t) = { \frac{1}{\sqrt{\pi}} ( \cos(t) - \sin(t) ), 0 \le t \le 2\pi;  0, otherwise },
\tilde{s}_1(2\pi - t) = { \frac{1}{\sqrt{\pi}} ( 1 - \cos(t) - \sin(t) ), 0 \le t \le 2\pi;  0, otherwise }.

Hence

( \delta(t) + u(t) ) * \tilde{s}_0(2\pi - t) = \frac{1}{\sqrt{\pi}} ( 2\cos(t) - 1 ),
( \delta(t) + u(t) ) * \tilde{s}_1(2\pi - t) = \frac{1}{\sqrt{\pi}} ( f(t) - 2\sin(t) ),

where the functions \cos, \sin and 1 above are defined on 0 \le t \le 2\pi (and are zero elsewhere), and the
function f(t) is defined as follows:

f(t) \triangleq { 0, t < 0;  t, 0 \le t \le 2\pi;  2\pi, t > 2\pi }.

Figure 22 depicts an optimal receiver in which the whitening filter is integrated into the
matched filters.

Figure 22: Type-II receiver for ACGN with whitening filter integrated into the matched filters.
2. Final A, 2011.
Consider the following two equiprobable signals:

s_0(t) = { \sqrt{E/T}, 0 \le t \le T;  0, otherwise },   s_1(t) = { -\sqrt{E/T}, 0 \le t \le T;  0, otherwise }.

The above signals are transmitted through an additive colored Gaussian noise (ACGN) channel
with PSD S_N(f). The noise PSD, S_N(f), obeys

\int_{-\infty}^{\infty} \frac{ |\ln S_N(f)| }{ 1 + f^2 } df < \infty.



Prove that the probability of error of the optimal receiver is given by p(e) = Q( \sqrt{\kappa E} ), where

\kappa \triangleq \int_{-\infty}^{\infty} ( \frac{ \sin(\pi x) }{ \pi x } )^2 \frac{ 1 }{ S_N( \frac{x}{T} ) } dx.

Solution:
Since the PSD of the noise satisfies the Paley-Wiener condition, a minimum-phase whitening filter
H(f) such that |H(f)|^2 = \frac{N_0}{2} \frac{1}{S_N(f)} exists. The optimal decoder then first whitens the noise and
then performs MAP decoding on the modified signals \tilde{s}_0(t) = s_0(t) * h(t) and \tilde{s}_1(t) = s_1(t) * h(t),
assuming T is large enough. Since the signals are antipodal and equiprobable, the probability of
error is given by

p(e) = Pr{s_0(t)} Pr{error | \tilde{s}_0(t)} + Pr{s_1(t)} Pr{error | \tilde{s}_1(t)} = Pr{error | \tilde{s}_0(t)}.

Next, note that after whitening we arrive at an AWGN channel with noise variance \frac{N_0}{2}, so

Pr{error | \tilde{s}_0(t)} = Q( \frac{ d_{min} }{ \sqrt{2 N_0} } ),

where d_{min} = \| \tilde{s}_1 - \tilde{s}_0 \|, and \tilde{s}_0 and \tilde{s}_1 are the projections of \tilde{s}_0(t) and \tilde{s}_1(t), respectively, on an
appropriate orthonormal basis. From Parseval's theorem it follows that
d_{min} = \| \tilde{s}_1 - \tilde{s}_0 \|
= \sqrt{ \int_{t=0}^{T} ( \tilde{s}_1(t) - \tilde{s}_0(t) )^2 dt }
= \sqrt{ \int_{f=-\infty}^{\infty} | \tilde{S}_1(f) - \tilde{S}_0(f) |^2 df }
= \sqrt{ \int_{f=-\infty}^{\infty} | H(f) S_1(f) - H(f) S_0(f) |^2 df }
= \sqrt{ \int_{f=-\infty}^{\infty} |H(f)|^2 | S_1(f) - S_0(f) |^2 df }.

Note that

S_0(f) = \int_{t=-\infty}^{\infty} s_0(t) e^{-j2\pi f t} dt = \sqrt{ \frac{E}{T} } \int_{t=0}^{T} e^{-j2\pi f t} dt = \sqrt{ \frac{E}{T} } e^{-j\pi f T} \frac{ \sin(\pi f T) }{ \pi f },

and S_1(f) = -S_0(f). Hence

d_{min} = \sqrt{ \int_{f=-\infty}^{\infty} |H(f)|^2 | S_1(f) - S_0(f) |^2 df }
= \sqrt{ \int_{f=-\infty}^{\infty} \frac{N_0}{2} \frac{1}{S_N(f)} | 2 S_1(f) |^2 df }
= \sqrt{ 2 N_0 \frac{E}{T} \int_{f=-\infty}^{\infty} \frac{1}{S_N(f)} ( \frac{ \sin(\pi f T) }{ \pi f } )^2 df }
(substituting x = f T)
= \sqrt{ 2 N_0 E \int_{x=-\infty}^{\infty} \frac{1}{ S_N( \frac{x}{T} ) } ( \frac{ \sin(\pi x) }{ \pi x } )^2 dx }
= \sqrt{ 2 N_0 E \kappa }.

Finally we arrive at

Pr{error | \tilde{s}_0(t)} = Q( \frac{ d_{min} }{ \sqrt{2 N_0} } ) = Q( \frac{ \sqrt{ 2 N_0 E \kappa } }{ \sqrt{2 N_0} } ) = Q( \sqrt{ \kappa E } ).
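As a sanity check, \kappa can be evaluated numerically for a concrete noise PSD. A minimal sketch for the white-noise special case S_N(f) = N_0/2 (an assumption chosen because the answer is known: \kappa = 2/N_0, recovering the familiar antipodal result Q(\sqrt{2E/N_0})):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

N0, E, T = 1.0, 4.0, 1.0

def SN(f):
    return N0 / 2            # white-noise special case (assumed)

def integrand(x):
    s = np.sinc(x)           # np.sinc(x) = sin(pi x)/(pi x)
    return s**2 / SN(x / T)

kappa, _ = quad(integrand, -200, 200, limit=400)
print(f"kappa ~ {kappa:.3f} (expect 2/N0 = {2/N0})")
print(f"p(e) = {norm.sf(np.sqrt(kappa*E)):.3e}, "
      f"Q(sqrt(2E/N0)) = {norm.sf(np.sqrt(2*E/N0)):.3e}")
```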

12 ISI Channels and MLSE

1. [1, Problem 10.2].


In a binary PAM system, the clock that specifies the sampling of the correlator/matched-filter
output is offset from the optimum sampling time by 10%.

(a) If the signal pulse used is rectangular, p(t) = { A, 0 \le t < T;  0, otherwise }, determine the loss in SNR
of the desired signal component sampled at the output of the MF due to the mistiming.
(b) Determine the ISI coefficients, f_k, due to the mistiming and determine their effect on the
probability of error, assuming per-symbol decoding designed for binary PAM over AWGN
(no ISI) with equal a-priori probabilities.
Solution:
(a) If the transmitted signal is

r(t) = \sum_{n=-\infty}^{\infty} I_n p(t - nT) + n(t),

then the output of the receiving filter is

y(t) = \sum_{n=-\infty}^{\infty} I_n x(t - nT) + v(t),

where x(t) = p(t) * p^*(-t) and v(t) = n(t) * p^*(-t). If the sampling time is off by 10%,
then the samples at the output of the correlator are taken at t = (m \pm \frac{1}{10}) T. Assuming
t = (m - \frac{1}{10}) T without loss of generality, the sampled sequence is

y_m = \sum_{n=-\infty}^{\infty} I_n x( (m - \frac{1}{10}) T - nT ) + v( (m - \frac{1}{10}) T ).

If the signal pulse is rectangular with amplitude A and duration T, then x( (m - \frac{1}{10}) T - nT )
is nonzero only for n = m and n = m - 1, and therefore the sampled sequence is given by

y_m = I_m x( -\frac{1}{10} T ) + I_{m-1} x( T - \frac{1}{10} T ) + v( (m - \frac{1}{10}) T )
= \frac{9}{10} A^2 T I_m + \frac{1}{10} A^2 T I_{m-1} + v( (m - \frac{1}{10}) T ).

The variance of the noise is

\sigma_v^2 = \frac{N_0}{2} A^2 T,

and therefore the SNR is

SNR = ( \frac{9}{10} )^2 \frac{ 2 (A^2 T)^2 }{ N_0 A^2 T } = \frac{81}{100} \frac{ 2 A^2 T }{ N_0 }.

As observed, there is a loss of 10 \log_{10}( \frac{81}{100} ) = -0.915 dB due to the mistiming.

(b) Recall from item (1a) that the sampled sequence is

y_m = \frac{9}{10} A^2 T I_m + \frac{1}{10} A^2 T I_{m-1} + v_m.

The term \frac{1}{10} A^2 T I_{m-1} expresses the ISI introduced to the system. If I_m = 1 is transmitted,
then the probability of error is

Pr{e | I_m = 1} = \frac{1}{2} Pr{e | I_m = 1, I_{m-1} = 1} + \frac{1}{2} Pr{e | I_m = 1, I_{m-1} = -1}
= \frac{1}{2} \int_{-\infty}^{-A^2 T} \frac{1}{\sqrt{\pi N_0 A^2 T}} e^{ -\frac{v^2}{N_0 A^2 T} } dv + \frac{1}{2} \int_{-\infty}^{-\frac{8}{10} A^2 T} \frac{1}{\sqrt{\pi N_0 A^2 T}} e^{ -\frac{v^2}{N_0 A^2 T} } dv
= \frac{1}{2} Q( \sqrt{ \frac{2 A^2 T}{N_0} } ) + \frac{1}{2} Q( \frac{8}{10} \sqrt{ \frac{2 A^2 T}{N_0} } ).

Since the symbols of the binary PAM system are equiprobable, this is the probability of error
when a symbol-by-symbol detector is employed. Comparing this with the probability of error of
a system with no ISI, we observe that the probability of error increases by

\frac{1}{2} Q( \frac{8}{10} \sqrt{ \frac{2 A^2 T}{N_0} } ) - \frac{1}{2} Q( \sqrt{ \frac{2 A^2 T}{N_0} } ).
2. [1, Problem 10.8].
A binary antipodal signal is transmitted over a nonideal band-limited channel, which introduces
ISI over two adjacent symbols:

y_m = \sum_k I_k x_{m-k} + v_m = I_m + \frac{1}{4} I_{m-1} + v_m,
where vm is an additive noise.


(a) Determine the average probability of error, assuming equiprobable signals and the additive
noise is white and Gaussian, using decoder designed for antipodal signals over AWGN (no
ISI).
(b) By plotting the error probability obtained in (2a) and that for the case of no ISI, determine
the relative difference in SNR at an error probability of 10^{-6}.
Solution:
(a) The output of the matched filter at the time instant mT is

y_m = \sum_k I_k x_{m-k} + v_m = I_m + \frac{1}{4} I_{m-1} + v_m.

The autocorrelation function of the noise samples v_m is

E{ v_k v_j } = \frac{N_0}{2} x_{k-j},

thus the variance of the noise is

\sigma_v^2 = \frac{N_0}{2} x_0 = \frac{N_0}{2}.

If a symbol-by-symbol detector is employed and we assume that the symbols I_m = I_{m-1} = \sqrt{E_b}
have been transmitted, then the probability of error Pr{e | I_m = I_{m-1} = \sqrt{E_b}} is

Pr{e | I_m = I_{m-1} = \sqrt{E_b}} = Pr{ y_m < 0 | I_m = I_{m-1} = \sqrt{E_b} }
= Pr{ v_m < -\frac{5}{4} \sqrt{E_b} } = Q( \frac{5}{4} \sqrt{ \frac{2 E_b}{N_0} } ).

If however I_{m-1} = -\sqrt{E_b}, then

Pr{e | I_m = \sqrt{E_b}, I_{m-1} = -\sqrt{E_b}} = Pr{ v_m < -\frac{3}{4} \sqrt{E_b} } = Q( \frac{3}{4} \sqrt{ \frac{2 E_b}{N_0} } ).

Since the symbols are equiprobable, we conclude that

p(e) = \frac{1}{2} Q( \frac{5}{4} \sqrt{ \frac{2 E_b}{N_0} } ) + \frac{1}{2} Q( \frac{3}{4} \sqrt{ \frac{2 E_b}{N_0} } ).
(b) Figure 23 depicts the error probability obtained in item (2a) versus the SNR per bit, together
with the error probability for the case of no ISI. As observed, the relative difference in SNR at
an error probability of 10^{-6} is 2 dB.

Figure 23: Probability of error comparison (log p(e) versus SNR per bit in dB).


3. [1, Problem 10.24].
Consider a four-level PAM system with possible transmitted levels 3, 1, -1, and -3. The channel
through which the data are transmitted introduces intersymbol interference over two successive
symbols. The equivalent discrete-time channel model obeys

y_k = { 0.8 I_k + n_k, k = 1;   0.8 I_k - 0.6 I_{k-1} + n_k, k > 1 },

where {n_k} is a sequence of real-valued independent zero-mean Gaussian noise variables with
variance \sigma^2 = N_0.

(a) Sketch the tree structure, showing the possible signal sequences for the received signals y1 , y2 ,
and y3 .
(b) Suppose the Viterbi algorithm is used to detect the information sequence. How many metrics
must be computed at each stage of the algorithm?
(c) How many surviving sequences are there in the Viterbi algorithm for this channel?
(d) Suppose that the received signals are

y_1 = 0.5,   y_2 = 2.0,   y_3 = -1.0.

Determine the surviving sequences through stage y3 and the corresponding metrics.
Solution:
(a) Figure 24 depicts part of the tree: starting from I_1 \in {3, 1, -1, -3}, each node branches into
the four possible values of the next symbol.

Figure 24: Tree structure.


(b) There are four states in the trellis (corresponding to the four possible values of the symbol
I_{k-1}), and for each one there are four paths starting from it (corresponding to the four
possible values of the symbol I_k). Hence, 16 metrics must be computed at each stage of the
Viterbi algorithm.
(c) Since there are four states, the number of surviving sequences is also four.
(d) The accumulated metrics are

\mu_1(I_1) = (y_1 - 0.8 I_1)^2,
\mu_k = \mu_{k-1} + (y_k - 0.8 I_k + 0.6 I_{k-1})^2,   k > 1.

Table 1 details the metric for the first stage.

I_1:    3      1      -1     -3
mu_1:   3.61   0.09   1.69   8.41

Table 1: First stage metric.

Table 2 details the metric for the second stage.

mu_2(I_2, I_1):
            I_1 = 3   I_1 = 1   I_1 = -1   I_1 = -3
I_2 = 3     5.57      0.13      2.69       13.25
I_2 = 1     12.61     3.33      2.05       8.77
I_2 = -1    24.77     11.65     6.53       9.41
I_2 = -3    42.05     25.09     16.13      15.17

Table 2: Second stage metric.

The four surviving paths at this stage are given by min_{I_1} \mu_2(x, I_1), x = 3, 1, -1, -3:

(I_2, I_1) = (3, 1):     \mu_2(3, 1) = 0.13,
(I_2, I_1) = (1, -1):    \mu_2(1, -1) = 2.05,
(I_2, I_1) = (-1, -1):   \mu_2(-1, -1) = 6.53,
(I_2, I_1) = (-3, -3):   \mu_2(-3, -3) = 15.17.

Table 3 details the metric for the third stage; each entry adds the branch metric
(y_3 - 0.8 I_3 + 0.6 I_2)^2 to the surviving path metric into state I_2.

mu_3(I_3, I_2):
            I_2 = 3   I_2 = 1   I_2 = -1   I_2 = -3
I_3 = 3     2.69      9.89      22.53      42.21
I_3 = 1     0.13      3.49      12.29      28.13
I_3 = -1    2.69      2.21      7.17       19.17
I_3 = -3    10.37     6.05      7.17       15.33

Table 3: Third stage metric.

The four surviving paths at this stage are given by min_{I_2, I_1} \mu_3(x, I_2, I_1), x = 3, 1, -1, -3:

(I_3, I_2, I_1) = (3, 3, 1):     \mu_3(3, 3, 1) = 2.69,
(I_3, I_2, I_1) = (1, 3, 1):     \mu_3(1, 3, 1) = 0.13,
(I_3, I_2, I_1) = (-1, 1, -1):   \mu_3(-1, 1, -1) = 2.21,
(I_3, I_2, I_1) = (-3, 1, -1):   \mu_3(-3, 1, -1) = 6.05.
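A compact sketch of the Viterbi recursion for this channel; it reproduces the stage metrics and surviving paths above (the received sequence, including y_3 = -1.0, is taken from part (d)):

```python
import numpy as np

levels = [3, 1, -1, -3]
y = [0.5, 2.0, -1.0]                      # received sequence from part (d)

# Stage 1: branch metrics (y1 - 0.8*I1)^2
metric = {I1: (y[0] - 0.8 * I1) ** 2 for I1 in levels}
paths = {I1: [I1] for I1 in levels}

# Later stages: add (yk - 0.8*Ik + 0.6*I_prev)^2, keep best path per state
for yk in y[1:]:
    new_metric, new_paths = {}, {}
    for Ik in levels:
        cands = [(metric[Ip] + (yk - 0.8 * Ik + 0.6 * Ip) ** 2, Ip)
                 for Ip in levels]
        best, Ip = min(cands)
        new_metric[Ik], new_paths[Ik] = best, paths[Ip] + [Ik]
    metric, paths = new_metric, new_paths

for Ik in levels:
    print(f"state {Ik:+d}: path {paths[Ik]}, metric {metric[Ik]:.2f}")
```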

4. [4, Problem 8.2].

In a binary equiprobable PAM system the input to the detector is

r_n = s_n + w_n + b_n,

where s_n = \pm 1 is the desired signal, w_n is a zero-mean Gaussian random variable with variance
\sigma_w^2, and b_n represents the ISI due to the channel distortion. The ISI term is a random variable
which takes the values -\frac{1}{2}, 0, \frac{1}{2} with probabilities \frac{1}{4}, \frac{1}{2}, \frac{1}{4}, respectively. Determine the average
probability of error, p(e), of an optimal symbol-by-symbol detector as a function of \sigma_w^2.
Solution:
An optimal symbol-by-symbol detector for the binary PAM compares the received signal, r_n, to
zero. The received signal (detector input) is

r_n = { s_n + w_n - \frac{1}{2}, w.p. \frac{1}{4};   s_n + w_n + \frac{1}{2}, w.p. \frac{1}{4};   s_n + w_n, w.p. \frac{1}{2} }.

By symmetry, p(e) = Pr{error | s = 1} = Pr{error | s = -1}, hence

p(e) = Pr{error | s = 1}
= \frac{1}{4} Pr{ w + 1 - \frac{1}{2} < 0 } + \frac{1}{4} Pr{ w + 1 + \frac{1}{2} < 0 } + \frac{1}{2} Pr{ w + 1 < 0 }
= \frac{1}{4} Pr{ w > \frac{1}{2} } + \frac{1}{4} Pr{ w > \frac{3}{2} } + \frac{1}{2} Pr{ w > 1 }
= \frac{1}{4} Q( \frac{1}{2 \sigma_w} ) + \frac{1}{4} Q( \frac{3}{2 \sigma_w} ) + \frac{1}{2} Q( \frac{1}{\sigma_w} ).

13 Equalization

1. [3, Problem 11.13].


This problem illustrates the noise enhancement of zero-forcing equalizers, and how this
enhancement can be mitigated using an MMSE approach. Consider a frequency-selective fading channel
with baseband frequency response

H(f) = { 1, 0 \le |f| < 10 KHz;   1/2, 10 KHz \le |f| < 20 KHz;   1/3, 20 KHz \le |f| < 30 KHz;
         1/4, 30 KHz \le |f| < 40 KHz;   1/5, 40 KHz \le |f| < 50 KHz;   0, otherwise }.
The frequency response is symmetric in positive and negative frequencies. Assume an AWGN
channel with noise PSD N0 = 109 W/Hz.
(a) Find a ZF analog equalizer that completely removes the ISI introduced by H(f ).
(b) Find the total noise power at the output of the equalizer from item (1a).
(c) Assume an MMSE analog equalizer of the form H_{eq}(f) = \frac{1}{H(f) + \alpha}. Find the total noise power
at the output of this equalizer for an AWGN input with PSD N_0, for \alpha = 0.5 and for \alpha = 1.
(d) Describe qualitatively two effects on a signal that is transmitted over channel H(f) and
then passed through the MMSE equalizer H_{eq}(f) = \frac{1}{H(f) + \alpha} with \alpha > 0. What design
considerations should go into the choice of \alpha?
(e) What happens to the total noise power for the MMSE equalizer in item (1c) as \alpha \to \infty?
What is the disadvantage of letting \alpha \to \infty in this equalizer design?
Solution:
(a)

1,

2,

1
Hzf (f ) =
= 3,
H(f )

4,

5,

0 |f | < 10KHz,
10KHz |f | < 20KHz,
20KHz |f | < 30KHz,
30KHz |f | < 40KHz,
40KHz |f | < 50KHz.

(b) The noise spectrum at the output of the filter is given by N (f ) = N0 |Heq (f )|2 , and the noise
power is given by the integral of N (f ) from 50 kHz to 50 kHz:
Z 50KHz
Z 50KHz
N=
N (f )df = 2N0
|Heq (f )|2 df
50KHz

2N0 (1 + 4 + 9 + 16 + 25)(10KHz)

1.1mW.

(c) The noise spectrum at the output of the filter is given by N(f) = \frac{N_0}{(H(f) + \alpha)^2}, and the noise
power is given by the integral of N(f) from -50 KHz to 50 KHz. For \alpha = 0.5 we get

N = 2 N_0 (0.44 + 1 + 1.44 + 1.78 + 2.04)(10 KHz) = 0.134 mW.

For \alpha = 1 we get

N = 2 N_0 (0.25 + 0.44 + 0.56 + 0.64 + 0.69)(10 KHz) = 0.0516 mW.

(d) As \alpha increases, the frequency response H_{eq}(f) decreases for all f. Thus, the noise power
decreases, but the signal power decreases as well. The factor \alpha should be chosen to balance
maximizing the SNR and minimizing distortion, which also depends on the spectrum of the
input signal (which is not given here).
(e) As \alpha \to \infty, the noise power goes to 0 because H_{eq}(f) \to 0 for all f. However, the signal
power also goes to zero.
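The noise-power numbers in (b) and (c) are easy to reproduce; a minimal sketch over the five 10 kHz sub-bands:

```python
import numpy as np

N0 = 1e-9                                  # noise PSD [W/Hz]
H = np.array([1, 1/2, 1/3, 1/4, 1/5])      # channel gain per sub-band
BW = 10e3                                  # sub-band width [Hz]

def out_noise(alpha):
    # total output noise power with Heq(f) = 1/(H(f)+alpha), both f signs
    return 2 * N0 * BW * np.sum(1.0 / (H + alpha) ** 2)

print(f"ZF (alpha=0): {out_noise(0.0)*1e3:.4f} mW")   # ~1.1 mW
print(f"alpha=0.5   : {out_noise(0.5)*1e3:.4f} mW")   # ~0.134 mW
print(f"alpha=1.0   : {out_noise(1.0)*1e3:.4f} mW")   # ~0.0516 mW
```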
2. [1, Problem 10.10].
Binary PAM is used to transmit information over an unequalized linear filter channel. When
a = 1 is transmitted, the noise-free output of the demodulator is

x_m = { 0.3, m = -1;   0.9, m = 0;   0.3, m = 1;   0, otherwise }.

(a) Design a three-tap zero-forcing linear equalizer so that the output is

q_m = { 1, m = 0;   0, m = \pm 1 }.
Remark 4. qm does not have to be causal.
(b) Determine q_m for m = \pm 2, \pm 3, by convolving the impulse response of the equalizer with the
channel response.
Solution:
(a) If we denote the coefficients of the FIR equalizer by {c_n}, then the equalized signal is

q_m = \sum_{n=-1}^{1} c_n x_{m-n},

which in matrix notation is written as

[ 0.9  0.3  0   ] [ c_{-1} ]   [ 0 ]
[ 0.3  0.9  0.3 ] [ c_0    ] = [ 1 ]
[ 0    0.3  0.9 ] [ c_1    ]   [ 0 ].

The coefficients of the zero-forcing equalizer are found by solving the above matrix equation:

c_{-1} = -0.4762,   c_0 = 1.4286,   c_1 = -0.4762.

(b) The values of q_m for m = \pm 2, \pm 3 are given by

q_2 = \sum_{n=-1}^{1} c_n x_{2-n} = c_1 x_1 = -0.1429,
q_{-2} = \sum_{n=-1}^{1} c_n x_{-2-n} = c_{-1} x_{-1} = -0.1429,
q_3 = \sum_{n=-1}^{1} c_n x_{3-n} = 0,
q_{-3} = \sum_{n=-1}^{1} c_n x_{-3-n} = 0.
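A short numerical sketch of parts (a) and (b): solve the 3x3 zero-forcing system and convolve the taps with the channel to expose the residual ISI at m = \pm 2.

```python
import numpy as np

X = np.array([[0.9, 0.3, 0.0],
              [0.3, 0.9, 0.3],
              [0.0, 0.3, 0.9]])
c = np.linalg.solve(X, np.array([0.0, 1.0, 0.0]))
print("taps c_{-1}, c_0, c_1 =", np.round(c, 4))   # [-0.4762  1.4286 -0.4762]

# Equalized response q = c * x (discrete convolution), indices m = -2..2
q = np.convolve(c, [0.3, 0.9, 0.3])
print("q_{-2..2} =", np.round(q, 4))               # [-0.1429 0. 1. 0. -0.1429]
```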

3. [1, Problem 10.15].


Repeat problem (2) using the MMSE as the criterion for optimizing the tap coefficients. Assume
that the noise PSD is 0.1 W/Hz.
Solution:
A discrete-time transversal filter equivalent to the cascade of the transmitting filter g_T(t), the
channel c(t), the matched filter at the receiver g_R(t) and the sampler has tap gain coefficients
{x_m}, where

x_m = { 0.9, m = 0;   0.3, m = \pm 1;   0, otherwise }.

The noise \nu_k at the output of the sampler is a zero-mean Gaussian sequence with autocorrelation
function

E{ \nu_k \nu_l } = \sigma^2 x_{k-l},   |k - l| \le 1.

If the Z-transform of the sequence {x_m}, X(z), assumes the factorization

X(z) = F(z) F^*(1/z^*),

then the filter 1/F^*(1/z^*) can follow the sampler to whiten the noise sequence \nu_k. In this case the
output of the whitening filter, and input to the MSE equalizer, is the sequence

u_n = \sum_k I_k f_{n-k} + n_k,

where n_k is zero-mean white Gaussian with variance \sigma^2. The optimum coefficients of the MSE
equalizer, c_k, satisfy

\sum_{n=-1}^{1} c_n \Gamma_{nk} = f_{-k}^*,   k = -1, 0, 1,

where

\Gamma_{nk} = { x_{n-k} + \sigma^2 \delta_{n,k}, |n - k| \le 1;   0, otherwise },

and f_k \neq 0 only for k = 0, 1. With

X(z) = 0.3 z + 0.9 + 0.3 z^{-1} = (f_0 + f_1 z^{-1})(f_0 + f_1 z)

we obtain the parameters f_0 and f_1 as

f_0 = \pm\sqrt{0.7854} or \pm\sqrt{0.1146},   f_1 = \pm\sqrt{0.1146} or \pm\sqrt{0.7854}.

The parameters f_0 and f_1 should have the same sign, since f_0 f_1 = 0.3. To have a stable inverse
system 1/F^*(1/z^*), we select f_0 and f_1 in such a way that the zero of the system F^*(1/z^*) =
f_0 + f_1 z is inside the unit circle. Thus we choose f_0 = \sqrt{0.1146} and f_1 = \sqrt{0.7854}, and
therefore the desired system for the equalizer's coefficients is

[ 0.9 + 0.1   0.3         0         ] [ c_{-1} ]   [ \sqrt{0.7854} ]
[ 0.3         0.9 + 0.1   0.3       ] [ c_0    ] = [ \sqrt{0.1146} ]
[ 0           0.3         0.9 + 0.1 ] [ c_1    ]   [ 0             ].

Solving this system, we obtain

c_{-1} = 0.8596,   c_0 = 0.0886,   c_1 = -0.0266.
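The spectral factorization and the tap solution can be checked in a few lines (the quadratic t^2 - 0.9 t + 0.09 = 0 gives f_0^2 and f_1^2):

```python
import numpy as np

sigma2 = 0.1
t = np.roots([1, -0.9, 0.09])               # f0^2, f1^2 = 0.7854, 0.1146
f0, f1 = np.sqrt(t.min()), np.sqrt(t.max()) # minimum-phase choice: f0 < f1

G = np.array([[0.9 + sigma2, 0.3, 0.0],
              [0.3, 0.9 + sigma2, 0.3],
              [0.0, 0.3, 0.9 + sigma2]])
c = np.linalg.solve(G, np.array([f1, f0, 0.0]))
print(np.round(c, 4))                       # [ 0.8596  0.0886 -0.0266]
```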

4. [1, Problem 10.21] [footnote 10].

Consider the following channel:

y_n = \frac{1}{\sqrt{2}} I_n + \frac{1}{\sqrt{2}} I_{n-1} + \nu_n,

where {\nu_n} is a real-valued white Gaussian noise sequence with zero mean and variance N_0. Suppose
the channel is to be equalized by a DFE having a two-tap feedforward filter (c_{-1}, c_0) and a one-tap
feedback filter (c_1). The {c_i} are optimized using the MSE criterion.

(a) Determine exactly the optimum coefficients as a function of N_0 and approximate their values
for N_0 \ll 1.
(b) Determine the exact value of the minimum MSE and find a first-order approximation (in
terms of N_0) appropriate to the case N_0 \ll 1. Assume E{I_n^2} = 1.
(c) Determine the exact value of the output SNR for the three-tap equalizer as a function of N_0,
and find a first-order approximation appropriate to the case N_0 \ll 1.
(d) Compare the results in items (4b) and (4c) with the performance of the infinite-tap DFE.
(e) Evaluate and compare the exact values of the output SNR for the three-tap and infinite-tap
DFE in the special case where N0 = 0.1 and N0 = 0.01. Comment on how well the three-tap
equalizer performs relative to the infinite-tap equalizer.
Solution:
(a) The tap coefficients of the feedforward filter are given by the following equations:

\sum_{j=-K_1}^{0} c_j \Gamma_{lj} = f_{-l}^*,   -K_1 \le l \le 0,

where

\Gamma_{lj} = \sum_{m=0}^{-l} f_m^* f_{m+l-j} + N_0 \delta_{lj},   -K_1 \le l, j \le 0.

Footnote 10: Read [1, Sub-section 10.3.2] and [1, Example 10.3.1].

The tap coefficients of the feedback filter of the DFE are given in terms of the coefficients of
the feedforward section by

c_k = -\sum_{j=-K_1}^{0} c_j f_{k-j},   1 \le k \le K_2.

In this case, K_1 = 1, resulting in the following two equations:

\Gamma_{-1,-1} c_{-1} + \Gamma_{-1,0} c_0 = f_1^*,
\Gamma_{0,-1} c_{-1} + \Gamma_{0,0} c_0 = f_0^*.

From the definition of \Gamma_{lj}, with f_0 = f_1 = \frac{1}{\sqrt{2}}, the above system can be written as

[ 1 + N_0       \frac{1}{2}       ] [ c_{-1} ]   [ \frac{1}{\sqrt{2}} ]
[ \frac{1}{2}   \frac{1}{2} + N_0 ] [ c_0    ] = [ \frac{1}{\sqrt{2}} ],

so

c_{-1} = \frac{ \sqrt{2} N_0 }{ 2 ( N_0^2 + \frac{3}{2} N_0 + \frac{1}{4} ) } \approx 2\sqrt{2} N_0,
c_0 = \frac{ \sqrt{2} ( \frac{1}{2} + N_0 ) }{ 2 ( N_0^2 + \frac{3}{2} N_0 + \frac{1}{4} ) } \approx \sqrt{2},   for N_0 \ll 1.

The coefficient for the feedback section is

c_1 = -c_0 f_1 = -\frac{1}{\sqrt{2}} c_0 \approx -1,   for N_0 \ll 1.

(b)

J_{min}(1) = 1 - \sum_{j=-K_1}^{0} c_j f_{-j} = \frac{ 2 N_0^2 + N_0 }{ 2 ( N_0^2 + \frac{3}{2} N_0 + \frac{1}{4} ) } \approx 2 N_0,   for N_0 \ll 1.

(c)

\gamma = \frac{ 1 - J_{min}(1) }{ J_{min}(1) } = \frac{ 1 + 4 N_0 }{ 2 N_0 ( 1 + 2 N_0 ) } \approx \frac{1}{2 N_0},   for N_0 \ll 1.

(d) For the infinite-tap DFE, we have from [1, Example 10.3.1]:

J_{min} = \frac{ 2 N_0 }{ 1 + N_0 + \sqrt{ (1 + N_0)^2 - 1 } } \approx 2 N_0,   for N_0 \ll 1,

\gamma_{\infty} = \frac{ 1 - J_{min} }{ J_{min} } = \frac{ 1 + N_0 + \sqrt{ (1 + N_0)^2 - 1 } }{ 2 N_0 } - 1.

(e) For N_0 = 0.1 we have:

Three-tap:    J_{min}(1) = 0.146,    \gamma = 5.83 (7.66 dB);
Infinite-tap: J_{min} = 0.128,       \gamma = 6.8 (8.32 dB).

For N_0 = 0.01 we have:

Three-tap:    J_{min}(1) = 0.0193,   \gamma = 51 (17.1 dB);
Infinite-tap: J_{min} = 0.0174,      \gamma = 56.6 (17.5 dB).

The three-tap equalizer performs very well compared to the infinite-tap equalizer. The
difference in performance is 0.6 dB for N_0 = 0.1 and 0.4 dB for N_0 = 0.01.
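The numbers in item (e) follow from the closed forms above; a quick sketch:

```python
import numpy as np

def three_tap(N0):
    D = N0**2 + 1.5 * N0 + 0.25
    Jmin = (2 * N0**2 + N0) / (2 * D)
    return Jmin, (1 - Jmin) / Jmin

def infinite_tap(N0):
    Jmin = 2 * N0 / (1 + N0 + np.sqrt((1 + N0)**2 - 1))
    return Jmin, (1 - Jmin) / Jmin

for N0 in (0.1, 0.01):
    J3, g3 = three_tap(N0)
    Ji, gi = infinite_tap(N0)
    print(f"N0={N0}: 3-tap J={J3:.4f}, SNR={10*np.log10(g3):.2f} dB | "
          f"inf-tap J={Ji:.4f}, SNR={10*np.log10(gi):.2f} dB")
```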

5. Final C, 2011.
Let y_n be the signal at the output of an ISI channel:

y_n = \sum_{i=-\infty}^{\infty} x_i s_{n-i} + v_n,

where {s_n} is the transmitted symbol sequence. The symbols are selected in an i.i.d. manner
from a constellation A, with average energy E_A. {v_n} is a zero-mean complex white Gaussian
noise with variance N_0. {x_i} is the channel coefficients vector. Figure 25 depicts the frequency
response magnitude of {x_i}.

Figure 25: Frequency response magnitude of {x_i}.

The signal {y_n} is filtered by a zero-forcing equalizer. Find the SNR per symbol at the
equalizer output as a function of E_A and N_0.
Solution:
The z-transform of the received signal is

Y(z) = S(z) X(z) + V(z).

The frequency response depicted in Figure 25 indicates that X(z) is invertible. Thus, after filtering
with \frac{1}{X(z)} we obtain

\tilde{Y}(z) = Y(z) \frac{1}{X(z)} = S(z) + V(z) \frac{1}{X(z)} = S(z) + \tilde{V}(z),

and the PSD of the filtered noise is

S_{\tilde{V}}(e^{j\omega}) = \frac{N_0}{ |X(e^{j\omega})|^2 }.

Hence, the energy of the noise is

\sigma_{\tilde{v}}^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{\tilde{V}}(e^{j\omega}) d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{N_0}{ |X(e^{j\omega})|^2 } d\omega = \frac{9}{12} N_0.

Therefore, the SNR per symbol is

\frac{E_A}{\sigma_{\tilde{v}}^2} = \frac{12}{9} \frac{E_A}{N_0}.

14 Non-Coherent Reception

1. Minimal frequency difference for orthogonality.


(a) Consider the signals

s_i(t) = { \sqrt{ \frac{2E}{T} } \cos(2\pi f_i t), 0 \le t \le T;   0, otherwise },   i = 0, 1.

Both frequencies obey f_i T \gg 1, i = 0, 1. What is the minimal frequency difference, |f_0 - f_1|,
required for the two signals, s_0(t) and s_1(t), to be orthogonal?
(b) Now an unknown phase is added to one of the signals:

s_0(t) = { \sqrt{ \frac{2E}{T} } \cos(2\pi f_0 t), 0 \le t \le T;   0, otherwise },
s_1(t) = { \sqrt{ \frac{2E}{T} } \cos(2\pi f_1 t + \phi), 0 \le t \le T;   0, otherwise }.

Find the minimal frequency difference required for the two signals to be orthogonal, for an
unknown \phi.
Solution:
We first solve for the general case, and then assign \phi = 0 for item (1a):

\langle s_0(t), s_1(t) \rangle = \frac{2E}{T} \int_0^T \cos(2\pi f_0 t) \cos(2\pi f_1 t + \phi) dt
= \frac{E}{T} \int_0^T [ \cos( 2\pi (f_0 + f_1) t + \phi ) + \cos( 2\pi (f_0 - f_1) t - \phi ) ] dt
= E [ \frac{ \sin( 2\pi (f_0 + f_1) t + \phi ) }{ 2\pi (f_0 + f_1) T } + \frac{ \sin( 2\pi (f_0 - f_1) t - \phi ) }{ 2\pi (f_0 - f_1) T } ]_0^T.

The first term vanishes because f_i T \gg 1, so orthogonality requires

E \frac{ \sin( 2\pi (f_0 - f_1) t - \phi ) }{ 2\pi (f_0 - f_1) T } \Big|_0^T = 0.
We now consider the special cases.

(a) For \phi = 0:

\langle s_0(t), s_1(t) \rangle = 0 \Rightarrow \sin( 2\pi (f_0 - f_1) T ) = 0 \Rightarrow 2\pi (f_0 - f_1) T = \pi n,

where n is an integer, hence

|f_0 - f_1|_{min} = \frac{1}{2T}.

(b) For unknown \phi:

\langle s_0(t), s_1(t) \rangle = 0 \Rightarrow \sin( 2\pi (f_0 - f_1) t - \phi ) \Big|_0^T = 0 \Rightarrow \sin( 2\pi (f_0 - f_1) T - \phi ) + \sin(\phi) = 0,

where the last expression must equal zero for any \phi; hence we require 2\pi (f_0 - f_1) T = 2\pi n, where n
is an integer. The minimal frequency difference for the non-coherent scenario is therefore

|f_0 - f_1|_{min} = \frac{1}{T}.

We conclude that the non-coherent scenario requires double the bandwidth compared
to the coherent scenario.
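A minimal numerical check of both claims (f_0 and T are illustrative assumptions; residual terms of order 1/(f_0 T) remain because f_0 T is large but finite):

```python
import numpy as np
from scipy.integrate import quad

E, T = 1.0, 1.0
f0 = 50.0 / T                       # f0*T >> 1, as the problem assumes

def inner(df, phi):
    # <s0, s1> = (2E/T) * int_0^T cos(2pi f0 t) cos(2pi (f0+df) t + phi) dt
    func = lambda t: np.cos(2*np.pi*f0*t) * np.cos(2*np.pi*(f0+df)*t + phi)
    val, _ = quad(func, 0.0, T, limit=500)
    return 2 * E / T * val

print(f"df=1/(2T), phi=0  : {inner(0.5/T, 0.0):+.2e}")   # ~0 (coherent)
print(f"df=1/(2T), phi=1.0: {inner(0.5/T, 1.0):+.2e}")   # clearly nonzero
worst = max(abs(inner(1.0/T, p)) for p in np.linspace(0, 2*np.pi, 9))
print(f"df=1/T, worst phi : {worst:.2e}")                # ~0 for all phi
```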
2. Non-coherent receiver for M orthogonal signals.
Consider the following M orthogonal signals:

s_i(t) = \sqrt{ \frac{2E}{T} } \sin( \omega_i t ),   0 \le t \le T,   i = 0, 1, \ldots, M-1.

The received signal is

r(t) = \sqrt{ \frac{2E}{T} } \sin( \omega_i t + \phi ) + n(t),

where \phi \sim U[0, 2\pi) and n(t) is white Gaussian noise with PSD \frac{N_0}{2}.

The set { r_{s,i}, r_{c,i} }_{i=0}^{M-1} is a sufficient statistic for decoding r(t), where

r_{c,i} = \int_0^T r(t) \sqrt{ \frac{2}{T} } \cos( \omega_i t ) dt,   r_{s,i} = \int_0^T r(t) \sqrt{ \frac{2}{T} } \sin( \omega_i t ) dt.

In class it was obtained that the optimal receiver for equiprobable a-priori probabilities finds the
maximal r_i^2 = r_{c,i}^2 + r_{s,i}^2 and chooses the respective s_i(t).
The probability density functions (PDF) of r_0 and r_i, i = 1, \ldots, M-1, given that s_0(t) was
transmitted, are

f(r_0 | s_0) = \frac{2 r_0}{N_0} e^{ -\frac{r_0^2}{N_0} } e^{ -\frac{E}{N_0} } I_0( \frac{ 2\sqrt{E} }{ N_0 } r_0 ),   r_0 \ge 0,

f(r_i | s_0) = \frac{2 r_i}{N_0} e^{ -\frac{r_i^2}{N_0} },   r_i \ge 0,   i = 1, \ldots, M-1.

For equiprobable a-priori probabilities and M = 2, the error probability of the optimal receiver is

p(e) = \frac{1}{2} e^{ -\frac{E}{2 N_0} }.

Show that for equiprobable a-priori probabilities and general M, the error probability of the
optimal receiver is

p(e) = \sum_{i=1}^{M-1} (-1)^{i+1} \binom{M-1}{i} \frac{1}{i+1} e^{ -\frac{i}{i+1} \frac{E}{N_0} }.

Guideline: Let A, B and C be i.i.d. RVs with PDF f_Y(y). Let X = max{A, B, C}. Derive the
PDF f_X(x).
Solution:

Due to symmetry,

p(e) = \sum_{i=0}^{M-1} p(e | s_i) p(s_i) = p(e | s_0).

The probability of error given s_0(t) was transmitted obeys

p(e | s_0) = Pr{ r_{max} = max{ r_1, \ldots, r_{M-1} } > r_0 | s_0 }.

Note that the r_i, i = 1, \ldots, M-1, are i.i.d.
For i.i.d. random variables y_1, \ldots, y_n with PDF f_Y(y) and CDF F_Y(y), the CDF of
y_{max} = max{ y_1, \ldots, y_n } obeys

F_{Y_{max}}(y) = Pr{ y_{max} < y } = Pr{ y_1, \ldots, y_n \le y } \stackrel{(a)}{=} [ F_Y(y) ]^n,
f_{Y_{max}}(y) = n [ F_Y(y) ]^{n-1} f_Y(y),

where (a) follows from the fact that the random variables are i.i.d.
In order to find f(r_{max} | s_0) we need to find F(r_i | s_0):

F(r_i | s_0) = \int_0^{r_i} \frac{2t}{N_0} e^{ -\frac{t^2}{N_0} } dt = 1 - e^{ -\frac{r_i^2}{N_0} }.

Hence

f(r_{max} | s_0) = (M-1) ( 1 - e^{ -\frac{r_{max}^2}{N_0} } )^{M-2} \frac{ 2 r_{max} }{ N_0 } e^{ -\frac{r_{max}^2}{N_0} }.

f(r_{max} | s_0) can be expanded as follows:

f(r_{max} | s_0) = (M-1) \sum_{j=0}^{M-2} \binom{M-2}{j} (-1)^j e^{ -\frac{ (j+1) r_{max}^2 }{ N_0 } } \frac{ 2 r_{max} }{ N_0 }
\stackrel{i=j+1}{=} \sum_{i=1}^{M-1} (-1)^{i+1} \binom{M-1}{i} \frac{ 2 r_{max} i }{ N_0 } e^{ -\frac{ i r_{max}^2 }{ N_0 } }.

In order to calculate p(e | s_0) we need to integrate over the region in which r_{max} > r_0:

p(e | s_0) = \int_{r_0=0}^{\infty} f(r_0 | s_0) \int_{r_{max}=r_0}^{\infty} f(r_{max} | s_0) d r_{max} d r_0.

Assigning f(r_{max} | s_0) to the inner integral (each inner integrand is, up to a constant, a Rayleigh
distribution) yields

\int_{r_{max}=r_0}^{\infty} f(r_{max} | s_0) d r_{max} = \sum_{i=1}^{M-1} (-1)^{i+1} \binom{M-1}{i} \int_{r_0}^{\infty} \frac{ 2 r_{max} i }{ N_0 } e^{ -\frac{ i r_{max}^2 }{ N_0 } } d r_{max}
= \sum_{i=1}^{M-1} (-1)^{i+1} \binom{M-1}{i} e^{ -\frac{ i r_0^2 }{ N_0 } }.

Hence

p(e | s_0) = \int_{r_0=0}^{\infty} \frac{ 2 r_0 }{ N_0 } e^{ -\frac{ r_0^2 }{ N_0 } } e^{ -\frac{E}{N_0} } I_0( \frac{ 2\sqrt{E} }{ N_0 } r_0 ) \sum_{i=1}^{M-1} (-1)^{i+1} \binom{M-1}{i} e^{ -\frac{ i r_0^2 }{ N_0 } } d r_0.

Multiplying each summand of p(e | s_0) by

\frac{i+1}{i+1} e^{ -\frac{ E/(i+1)^2 }{ N_0/(i+1) } } e^{ \frac{ E/(i+1)^2 }{ N_0/(i+1) } } = 1

and rearranging the summation elements yields

p(e | s_0) = \sum_{i=1}^{M-1} (-1)^{i+1} \binom{M-1}{i} \frac{1}{i+1} e^{ -\frac{i}{i+1} \frac{E}{N_0} }
\cdot \int_0^{\infty} \frac{ 2 (i+1) r_0 }{ N_0 } e^{ -\frac{ r_0^2 + E/(i+1)^2 }{ N_0/(i+1) } } I_0( \frac{ 2 \sqrt{ E/(i+1)^2 } }{ N_0/(i+1) } r_0 ) d r_0,

where each integral equals 1, being the integral of a Rice distribution. Therefore

p(e) = \sum_{i=1}^{M-1} (-1)^{i+1} \binom{M-1}{i} \frac{1}{i+1} e^{ -\frac{i}{i+1} \frac{E}{N_0} }.
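The closed form is easy to validate by simulation, since r_0^2 is the squared envelope of signal-plus-noise and the other r_i^2 are squared Rayleigh envelopes. A minimal Monte Carlo sketch (M and E/N_0 are illustrative assumptions; the carrier phase is fixed to 0, which is valid because the envelope statistics do not depend on \phi):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
M, E, N0 = 4, 4.0, 1.0

# Closed-form expression derived above
pe = sum((-1)**(i+1) * comb(M-1, i) / (i+1) * np.exp(-i/(i+1) * E/N0)
         for i in range(1, M))

# Monte Carlo with envelope-squared decision variables
trials = 200_000
n = rng.normal(0.0, np.sqrt(N0/2), size=(trials, M, 2))  # (cos, sin) noise
n[:, 0, 0] += np.sqrt(E)        # transmitted signal s0 on the cosine branch
r2 = (n ** 2).sum(axis=2)
pe_mc = (r2[:, 1:].max(axis=1) > r2[:, 0]).mean()
print(f"closed form p(e) = {pe:.4f}, Monte Carlo = {pe_mc:.4f}")
```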
3. [1, Problem 5.42].
In on-off keying of a carrier-modulated signal, the two possible signals are

s_0(t) = 0,   0 \le t \le T,
s_1(t) = \sqrt{ \frac{2 E_b}{T} } \cos( 2\pi f_c t + \phi ),   0 \le t \le T.

The corresponding received signals are

r(t) = n(t),   0 \le t \le T,
r(t) = \sqrt{ \frac{2 E_b}{T} } \cos( 2\pi f_c t + \phi ) + n(t),   0 \le t \le T,

where \phi is the carrier phase and n(t) is AWGN.


(a) Sketch a block diagram of the receiver (demodulator and detector) that employs noncoherent
(envelope) detection.
(b) Determine the PDFs of the two possible decision variables at the detector corresponding to
the two possible received signals.
(c) Derive the detector error probability assuming \frac{E_b}{N_0} \gg 1 and \sigma^2 = \frac{N_0}{2} [footnote 11].

Footnote 11: You may use the following approximations for estimating the integral
\frac{1}{2} \int_0^{V_T} f(r | s_1) dr: for \frac{ \sqrt{E_b} r }{ \sigma^2 } \gg 1, I_0( \frac{ \sqrt{E_b} r }{ \sigma^2 } ) \approx e^{ \sqrt{E_b} r / \sigma^2 } / \sqrt{ 2\pi \sqrt{E_b} r / \sigma^2 };
and for \frac{E_b}{N_0} \gg 1 the optimum threshold is well approximated by V_T \approx \frac{ \sqrt{E_b} }{ 2 }.

Solution:
(a) Figure 26 depicts the noncoherent envelope detector for the on-off keying signal.

Figure 26: Envelope detector.

(b) If s_0(t) is sent, then the received signal is r(t) = n(t), and therefore the sampled outputs r_c,
r_s are zero-mean independent Gaussian random variables with variance \sigma^2 = \frac{N_0}{2}. Hence
the random variable r = \sqrt{ r_c^2 + r_s^2 } is Rayleigh distributed, and its PDF is given by

f(r | s_0(t)) = \frac{r}{\sigma^2} e^{ -\frac{r^2}{2\sigma^2} } = \frac{2r}{N_0} e^{ -\frac{r^2}{N_0} }.

If s_1(t) is transmitted, then the received signal is

r(t) = \sqrt{ \frac{2 E_b}{T} } \cos( 2\pi f_c t + \phi ) + n(t).
Crosscorrelating r(t) by T2 cos(2fc t) and sampling the output at t = T , results in
Z
rc

r
r(t)

2
cos(2fc t)dt
T

r
Z T

2 b
2
=
cos(2fc t + ) cos(2fc t)dt +
n(t)
cos(2fc t)dt
T
T
0
0

=
b cos() + nc
Z

where nc is zero-mean Gaussian random variable with variance N20 . Similarly, for the quadrature component we have:

rs = b sin() + ns
p
p
The PDF of the random variable r = rc2 + rs2 = b + n2c + n2s follows the Rician distribution:
!
!
r b
2r b
2r r2N+b
r r2 +2 b
0 I
=
e
f (r|s1 (t)) = 2 e 2 I0
0

2
N0
N0
(c) For equiprobable signals the probability of error is given by

p(e) = \frac{1}{2} \int_0^{V_T} p(r | s_1(t)) dr + \frac{1}{2} \int_{V_T}^{\infty} p(r | s_0(t)) dr.

Since r > 0, the expression for the probability of error takes the form

p(e) = \frac{1}{2} \int_0^{V_T} \frac{r}{\sigma^2} e^{ -\frac{ r^2 + E_b }{ 2\sigma^2 } } I_0( \frac{ \sqrt{E_b} r }{ \sigma^2 } ) dr + \frac{1}{2} \int_{V_T}^{\infty} \frac{r}{\sigma^2} e^{ -\frac{r^2}{2\sigma^2} } dr.

The optimum threshold level is the value of V_T that minimizes the probability of error.
However, when \frac{E_b}{N_0} \gg 1 the optimum value is close to \frac{ \sqrt{E_b} }{ 2 }, and we will use this
threshold to simplify the analysis. The integral involving the Bessel function cannot be evaluated
in closed form. Instead of I_0(x) we will use the approximation

I_0(x) \approx \frac{ e^x }{ \sqrt{ 2\pi x } },

which is valid for large x, that is, for high SNR. In this case:

\frac{1}{2} \int_0^{V_T} \frac{r}{\sigma^2} e^{ -\frac{ r^2 + E_b }{ 2\sigma^2 } } I_0( \frac{ \sqrt{E_b} r }{ \sigma^2 } ) dr \approx \frac{1}{2} \int_0^{ \sqrt{E_b}/2 } \sqrt{ \frac{ r }{ 2\pi \sigma^2 \sqrt{E_b} } } e^{ -( r - \sqrt{E_b} )^2 / 2\sigma^2 } dr.

This integral is further simplified if we observe that for high SNR the integrand is dominant
in the vicinity of \sqrt{E_b}; therefore the lower limit can be substituted by -\infty. Also,

\sqrt{ \frac{ r }{ 2\pi \sigma^2 \sqrt{E_b} } } \approx \frac{ 1 }{ \sqrt{ 2\pi \sigma^2 } },

and therefore

\frac{1}{2} \int_0^{ \sqrt{E_b}/2 } \sqrt{ \frac{ r }{ 2\pi \sigma^2 \sqrt{E_b} } } e^{ -( r - \sqrt{E_b} )^2 / 2\sigma^2 } dr \approx \frac{1}{2} \int_{-\infty}^{ \sqrt{E_b}/2 } \frac{ 1 }{ \sqrt{ 2\pi \sigma^2 } } e^{ -( r - \sqrt{E_b} )^2 / 2\sigma^2 } dr
= \frac{1}{2} Q( \sqrt{ \frac{ E_b }{ 2 N_0 } } ).

For the second term,

\frac{1}{2} \int_{ \sqrt{E_b}/2 }^{ \infty } \frac{2r}{N_0} e^{ -\frac{r^2}{N_0} } dr = \frac{1}{2} e^{ -\frac{E_b}{4 N_0} }.

Finally:

p(e) \approx \frac{1}{2} Q( \sqrt{ \frac{ E_b }{ 2 N_0 } } ) + \frac{1}{2} e^{ -\frac{E_b}{4 N_0} }.

References
[1] J. G. Proakis, Digital Communications, 4th Edition, John Wiley and Sons, 2000.
[2] S. Haykin, Communication Systems, 4th Edition, John Wiley and Sons, 2000.
[3] A. Goldsmith, Wireless Communications, Cambridge University Press, 2006.
[4] J. G. Proakis and M. Salehi, Communication Systems Engineering, 2nd Edition, Prentice-Hall
Inc., 2002.
