
Department of Electrical Engineering

EE 703: Digital Message Transmission (Autumn 2014)


Course Instructor: Prof. Abhay Karandikar
Problem Set 3
1. One of the two signals s0 = -1 and s1 = +1 is transmitted over the channel as shown
in Figure 1. The two noise random variables n1 and n2 are statistically independent
of each other and are independent of the signals. Their density functions are
$f_{N_1}(n) = f_{N_2}(n) = \frac{1}{2}e^{-|n|}$.
(a) Determine the optimum decision regions and sketch them.
(b) Determine an expression for the probability of error.
Figure 1: Channel model. The transmitted signal S is observed through two parallel paths, r1 = s + n1 and r2 = s + n2.
Solution:

According to the MAP rule, for equally likely messages, choose s0 if $f_R(r|s_0) \ge f_R(r|s_1)$:

$$\frac{1}{2}e^{-|r_1+1|}\,\frac{1}{2}e^{-|r_2+1|} \;\underset{s_1}{\overset{s_0}{\gtrless}}\; \frac{1}{2}e^{-|r_1-1|}\,\frac{1}{2}e^{-|r_2-1|}$$

Taking logarithms and rearranging,

$$|r_1+1| - |r_1-1| \;\underset{s_0}{\overset{s_1}{\gtrless}}\; |r_2-1| - |r_2+1| \qquad (1)$$

where

$$|r_1+1| - |r_1-1| = \begin{cases} 2, & r_1 \in [1,\infty) \\ 2r_1, & r_1 \in [-1,1] \\ -2, & r_1 \in (-\infty,-1] \end{cases}$$

$$|r_2-1| - |r_2+1| = \begin{cases} -2, & r_2 \in [1,\infty) \\ -2r_2, & r_2 \in [-1,1] \\ 2, & r_2 \in (-\infty,-1] \end{cases}$$

To obtain the decision regions, we divide the r1-r2 plane into the following nine
regions:

(i) $r_1 \in [1,\infty),\; r_2 \in [1,\infty)$: here $r_1 + r_2 > 2$, and (1) reduces to $2 \gtrless -2$. The decision is always in favour of s1.

(ii) $r_1 \in [1,\infty),\; r_2 \in (-1,1)$: here $r_1 + r_2 > 0$, and (1) reduces to $-1 \ge r_2$ for s0. Since $r_2$ cannot be less than $-1$, the decision is always in favour of s1.

(iii) $r_1 \in [1,\infty),\; r_2 \in (-\infty,-1]$: (1) reduces to $2 \gtrless 2$. We can decide either s0 or s1.

(iv) $r_1 \in (-1,1),\; r_2 \in [1,\infty)$: here $r_1 + r_2 > 0$, and (1) reduces to $r_1 \le -1$ for s0. As $r_1$ cannot be less than $-1$, the decision is always in favour of s1.

(v) $r_1 \in (-1,1),\; r_2 \in (-1,1)$: (1) reduces to $r_1 + r_2 \gtrless 0$. So the decision is in favour of s0 if $r_1 + r_2 < 0$ and in favour of s1 if $r_1 + r_2 > 0$.

(vi) $r_1 \in (-1,1),\; r_2 \in (-\infty,-1]$: here $r_1 + r_2 < 0$, and (1) reduces to $r_1 \le 1$ for s0. As $r_1 < 1$, the decision is always in favour of s0.

(vii) $r_1 \in (-\infty,-1],\; r_2 \in [1,\infty)$: (1) reduces to $-2 \gtrless -2$. We can decide either s0 or s1.

(viii) $r_1 \in (-\infty,-1],\; r_2 \in (-1,1)$: here $r_1 + r_2 < 0$, and (1) reduces to $r_2 \le 1$ for s0. As $r_2 < 1$, the decision is always in favour of s0.

(ix) $r_1 \in (-\infty,-1],\; r_2 \in (-\infty,-1]$: here $r_1 + r_2 < -2$, and (1) reduces to $-2 \gtrless 2$. The decision is always in favour of s0.
Finally, the decision regions are as shown in Figure 2.

Figure 2: Decision regions in the r1-r2 plane. Above the boundary (including r1 + r2 > 0 in the central square) the decision is s1; below it the decision is s0; in the two corner regions (r1 >= 1, r2 <= -1) and (r1 <= -1, r2 >= 1) either decision may be chosen.
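As a quick numerical check (our own sketch, not part of the original solution), the MAP comparison can be evaluated directly from the Laplacian log-likelihoods, and the error probability estimated by simulation; the variable names and sample size below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def map_decide(r1, r2):
    """MAP decision for equally likely s0 = -1, s1 = +1 under i.i.d.
    Laplacian noise: compare log-likelihoods, which reduces to (1)."""
    ll_s0 = -abs(r1 + 1) - abs(r2 + 1)   # log f(r|s0) up to constants
    ll_s1 = -abs(r1 - 1) - abs(r2 - 1)   # log f(r|s1) up to constants
    return 0 if ll_s0 > ll_s1 else 1     # ties broken in favour of s1

# Spot-check region (v): for r1, r2 in (-1, 1) the rule is the sign of r1 + r2
for r1, r2 in [(-0.3, 0.2), (0.5, 0.1), (-0.9, 0.4)]:
    assert map_decide(r1, r2) == (0 if r1 + r2 < 0 else 1)

# Monte Carlo estimate of the error probability
n = 200_000
s = rng.integers(0, 2, n)               # 0 -> s0 = -1, 1 -> s1 = +1
x = 2.0 * s - 1.0
n1 = rng.laplace(0, 1, n)               # density (1/2) e^{-|n|}
n2 = rng.laplace(0, 1, n)
dec = np.where(np.abs(x + n1 - 1) + np.abs(x + n2 - 1)
               < np.abs(x + n1 + 1) + np.abs(x + n2 + 1), 1, 0)
print("estimated Pe:", np.mean(dec != s))
```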

2. Consider the vector signal constellation shown in Figure 3. Assume all signals to be
equally likely and the signal vector to be disturbed by a zero-mean Gaussian random
vector with independent elements, each with unit variance.
(a) Determine the boundaries of the decision region.
(b) Determine the probability of error.
Solution:
As the messages are equiprobable, the decision regions are determined by the smallest
Euclidean distance. The decision regions are shown in Figure 4. Let the Gaussian
random vector be $n = (n_1\; n_2)^T$, where $f_{N_1}(n) = f_{N_2}(n) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-n^2/2\sigma^2}$
(the problem specifies unit variance, so $\sigma = 1$; we keep $\sigma$ general below). For
purposes of calculating Pe, the four corner signals s0, s2, s3 and s5 are symmetric;
the two inner signals s1 and s4 are also symmetric.

Figure 3: Six-point constellation: s0, s1 and s2 on the top row and s3, s4 and s5 on the bottom row, with horizontal spacing d and vertical offsets +d/2 and -d/2 about the horizontal axis.
Using signals s0 and s1 as representatives, we have

$$P_e = \frac{4}{6}\left(1 - P_{c|s_0}\right) + \frac{2}{6}\left(1 - P_{c|s_1}\right)$$

$$= 1 - \frac{2}{3}\int_{x=-\infty}^{-d/2}\int_{y=0}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(x+d)^2/2\sigma^2}\,\frac{1}{\sqrt{2\pi}\,\sigma}e^{-(y-d/2)^2/2\sigma^2}\,dy\,dx$$
$$\qquad - \frac{1}{3}\int_{x=-d/2}^{d/2}\int_{y=0}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}e^{-x^2/2\sigma^2}\,\frac{1}{\sqrt{2\pi}\,\sigma}e^{-(y-d/2)^2/2\sigma^2}\,dy\,dx$$

$$P_e = \frac{7}{3}\,Q\!\left(\frac{d}{2\sigma}\right) - \frac{4}{3}\left(Q\!\left(\frac{d}{2\sigma}\right)\right)^2$$
Figure 4: Decision regions for the constellation of Figure 3. The vertical boundaries x = -d/2 and x = d/2 together with the horizontal boundary y = 0 partition the plane into the six regions "Decide s0", ..., "Decide s5".
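The closed form can be cross-checked by simulation. The following Python sketch assumes the constellation geometry described above, with an arbitrary test value of d and unit noise variance:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d, sigma = 2.0, 1.0                    # d is an arbitrary test value; sigma = 1 per the problem

# Constellation of Figure 3: top row s0,s1,s2 at y = +d/2, bottom row s3,s4,s5 at y = -d/2
pts = np.array([(-d, d/2), (0, d/2), (d, d/2),
                (-d, -d/2), (0, -d/2), (d, -d/2)])

n = 200_000
idx = rng.integers(0, 6, n)
r = pts[idx] + rng.normal(0, sigma, (n, 2))

# Minimum-distance (ML) decision
dist2 = ((r[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
dec = dist2.argmin(axis=1)

Q = norm.sf(d / (2 * sigma))
print("Monte Carlo Pe   :", np.mean(dec != idx))
print("(7/3)Q - (4/3)Q^2:", 7/3 * Q - 4/3 * Q**2)
```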

3. Let 8 signal vectors be placed on the vertices of a 3-dimensional hypercube. Assume
the noise to be a zero-mean Gaussian random vector with independent elements, each
with unit variance. Determine an expression for the probability of error.
Solution:
Consider the hypercube to have sides of length d, centered at the origin. Then the
signal vectors are placed at $(\frac{d}{2}\;\frac{d}{2}\;\frac{d}{2})$, $(-\frac{d}{2}\;\frac{d}{2}\;\frac{d}{2})$, ..., $(-\frac{d}{2}\;-\frac{d}{2}\;-\frac{d}{2})$.
If r = (r1 r2 r3 ) represents the recieved vector, then the optimum decision rule is as
follows:

Decide ( d2 d2 d2 ) if r1 > 0, r2 > 0, r3 > 0.


dd
Decide ( d
) if r1 < 0, r2 > 0, r3 > 0.
2 22
...
d d
) if r1 < 0, r2 < 0, r3 < 0.
Decide ( d
2 2 2

If Pc represents the probability of correct decision, then the probability of error is
Pe = 1 - Pc. By symmetry,

$$P_c = P_{c\,|\,(\frac{d}{2}\,\frac{d}{2}\,\frac{d}{2})} = P(r_1 > 0,\, r_2 > 0,\, r_3 > 0) = P(r_1 > 0)\,P(r_2 > 0)\,P(r_3 > 0)$$

$$= P\!\left(n_1 < \frac{d}{2}\right)P\!\left(n_2 < \frac{d}{2}\right)P\!\left(n_3 < \frac{d}{2}\right) = \left(1 - Q\!\left(\frac{d}{2}\right)\right)^3$$

Therefore, $P_e = 1 - \left(1 - Q\!\left(\frac{d}{2}\right)\right)^3$.
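A short simulation check of this expression (a sketch; the side length d is an arbitrary test value):

```python
import numpy as np
from itertools import product
from scipy.stats import norm

rng = np.random.default_rng(0)
d = 2.0                                   # arbitrary test value

verts = np.array(list(product([d/2, -d/2], repeat=3)))  # 8 cube vertices

n = 200_000
idx = rng.integers(0, 8, n)
r = verts[idx] + rng.normal(0, 1, (n, 3))

# Optimum rule: pick the vertex whose signs match the received vector
correct = np.all(np.sign(r) == np.sign(verts[idx]), axis=1)

print("Monte Carlo Pe     :", 1 - np.mean(correct))
print("1 - (1 - Q(d/2))^3 :", 1 - (1 - norm.sf(d/2))**3)
```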
4. The random noise n is Gaussian with zero mean. One of two equally likely messages
is transmitted with s0 = -2 and s1 = +2, and an optimum receiver yields Pe = 0.001.

(a) Compute the variance $\sigma^2$ of n.
(b) Consider the following decision rule:
    If $r > \frac{d}{2}$, decide s1;
    if $r < -\frac{d}{2}$, decide s0;
    if $|r| < \frac{d}{2}$, declare an erasure.
Erasure is used to avoid a decision when the receiver thinks that it might
constitute an error. Compute d such that the erasure probability is $10^{-4}$.
Solution:

(a) If $r \in (-\infty, 0)$ or $r \in (0, \infty)$, we decide in favour of s0 or s1, respectively.

$$P_e = P(s_0)\,P(\text{decision favours } s_1 \mid s_0) + P(s_1)\,P(\text{decision favours } s_0 \mid s_1)$$
$$= P(s_0)\int_0^{\infty} f_R(r|s_0)\,dr + P(s_1)\int_{-\infty}^{0} f_R(r|s_1)\,dr$$
$$= \frac{1}{2}\int_0^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(r+2)^2/2\sigma^2}\,dr + \frac{1}{2}\int_{-\infty}^{0} \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(r-2)^2/2\sigma^2}\,dr$$
$$= Q\!\left(\frac{2}{\sigma}\right)$$

From standard erfc tables, we obtain $\frac{2}{\sigma} = 3.09$. The variance is $\sigma^2 = \left(\frac{2}{3.09}\right)^2 = 0.4189$.
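The table lookup can be reproduced with SciPy's inverse survival function (a minimal sketch, not part of the original solution):

```python
from scipy.stats import norm

# Q(2/sigma) = 0.001  =>  2/sigma = Q^{-1}(0.001)
x = norm.isf(0.001)          # inverse Q-function: x = 3.0902...
sigma = 2 / x
print(x, sigma**2)           # 3.0902..., 0.4189...
```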

(b)

$$P_{\text{erasure}} = P(s_0)\int_{-d/2}^{d/2} f_R(r|s_0)\,dr + P(s_1)\int_{-d/2}^{d/2} f_R(r|s_1)\,dr$$
$$= \frac{1}{2}\int_{-d/2}^{d/2} \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(r+2)^2/2\sigma^2}\,dr + \frac{1}{2}\int_{-d/2}^{d/2} \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(r-2)^2/2\sigma^2}\,dr$$

Upon solving, we get

$$P_{\text{erasure}} = \frac{1}{2}\left(Q\!\left(\frac{2-\frac{d}{2}}{\sigma}\right) - Q\!\left(\frac{2+\frac{d}{2}}{\sigma}\right)\right) + \frac{1}{2}\left(Q\!\left(\frac{2-\frac{d}{2}}{\sigma}\right) - Q\!\left(\frac{2+\frac{d}{2}}{\sigma}\right)\right)$$
$$= Q\!\left(\frac{2-\frac{d}{2}}{\sigma}\right) - Q\!\left(\frac{2+\frac{d}{2}}{\sigma}\right) \qquad (2)$$

$$0.0001 = Q(3.09 - 0.7725d) - Q(3.09 + 0.7725d) \qquad (3)$$

Q(x) is a measure of the tail distribution of a Gaussian random variable with zero mean
and unit variance, i.e., it equals the area under the density from x to $\infty$. The
difference between the two tail areas in (3), 0.0001, is small compared to
Q(3.09) = 0.001. Therefore $0.7725d \ll 3.09$ and we can use the Taylor series
truncated at the third term: $f(x+h) \approx f(x) + hf'(x) + \frac{h^2}{2!}f''(x)$. Using
the Leibniz integral rule, we obtain

$$Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}}e^{-y^2/2}\,dy$$
$$Q'(x) = -\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$$
$$Q''(x) = \frac{x}{\sqrt{2\pi}}e^{-x^2/2}$$

The second-order terms cancel in the difference, so (3) simplifies to

$$0.0001 = 2\,(0.7725d)\,\frac{1}{\sqrt{2\pi}}e^{-(3.09)^2/2}$$

Solving, we get d = 0.0192.
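Both the exact equation (3) and the Taylor approximation can be verified numerically; in this sketch the root bracket passed to brentq is an ad-hoc choice:

```python
from scipy.stats import norm
from scipy.optimize import brentq

sigma = 2 / norm.isf(0.001)                  # from part (a)

def erasure(d):
    """Equation (2): Q((2 - d/2)/sigma) - Q((2 + d/2)/sigma)."""
    return norm.sf((2 - d/2) / sigma) - norm.sf((2 + d/2) / sigma)

# Exact root of equation (3)
d_exact = brentq(lambda d: erasure(d) - 1e-4, 1e-6, 1.0)

# First-order Taylor approximation: 1e-4 = 2*(d/(2*sigma))*phi(3.09)
d_approx = 1e-4 / (2 * (0.5 / sigma) * norm.pdf(norm.isf(0.001)))

print(d_exact, d_approx)                     # both approximately 0.0192
```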

5. Consider binary communication to a receiver with K antennas. The transmitted
signal is $s = \pm a$. The received signal at the output of antenna k is given by

$$r_k = g_k s + n_k, \qquad 1 \le k \le K$$

where $g_k$ is the gain of antenna k and $n_k$ is zero-mean Gaussian noise with
variance $\sigma^2$ at antenna k.

(a) Let $r = \sum_{k=1}^{K}\alpha_k r_k$, where $\alpha_k$ is an arbitrary real number. What is the
optimum receiver given r?
(b) If detection is done separately on each $r_k$ rather than on r, is it possible to
achieve a lower probability of error?
(c) Minimize the probability of error derived in (a) over all choices of $(\alpha_1\ \alpha_2\ \dots\ \alpha_K)$.
Solution:
(a) All noises are independent. The combined received signal is

$$r = \sum_{k=1}^{K}\alpha_k r_k = \left(\sum_{k=1}^{K}\alpha_k g_k\right)s + \sum_{k=1}^{K}\alpha_k n_k$$

Here s could be either +a or -a. As the noises are all independent, the variance of the
noise in the combined received signal r is $\sigma^2\sum_{k=1}^{K}\alpha_k^2$. As the messages are
equally likely, by applying the ML rule we may determine the decision regions as

$$\hat{s} = \begin{cases} s_0, & \text{if } r \ge 0 \\ s_1, & \text{otherwise} \end{cases}$$

with $s_0 = +a$ and $s_1 = -a$. The probability of error is

$$P_e = P(\hat{s} = s_1 \mid s_0) = P(r < 0 \mid s_0) = Q\!\left(\frac{\text{distance between the two points in the constellation}}{2 \times \text{standard deviation of the noise}}\right)$$

$$= Q\!\left(\frac{a\sum_{k=1}^{K}\alpha_k g_k}{\sqrt{\sigma^2\sum_{k=1}^{K}\alpha_k^2}}\right) = Q\!\left(\frac{a\sum_{k=1}^{K}\alpha_k g_k}{\sigma\sqrt{\sum_{k=1}^{K}\alpha_k^2}}\right)$$
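The following sketch evaluates this expression and cross-checks it by simulation; the gains, weights, a and sigma are made-up test values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
g = np.array([1.0, 0.5, 2.0])            # hypothetical antenna gains
alpha = np.array([1.0, 1.0, 1.0])        # arbitrary combining weights
a, sigma = 1.0, 1.0

def pe_combined(alpha):
    """Pe = Q(a * sum(alpha_k g_k) / (sigma * sqrt(sum(alpha_k^2))))."""
    return norm.sf(a * (alpha @ g) / (sigma * np.sqrt(alpha @ alpha)))

# Monte Carlo cross-check: s0 = +a, s1 = -a, decide s0 when r >= 0
n = 200_000
s = np.where(rng.integers(0, 2, n) == 1, a, -a)
r = (s[:, None] * g + rng.normal(0, sigma, (n, len(g)))) @ alpha
dec = np.where(r >= 0, a, -a)
print(np.mean(dec != s), pe_combined(alpha))
```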
(b) Now we take a decision on each received signal $r_k$ separately. Denote each
decision by a random variable $X_k$. When s0 was sent,

$$X_k = \begin{cases} +a, & \text{if } r_k \ge 0, \text{ with probability } 1 - Q(a/\sigma) \\ -a, & \text{if } r_k < 0, \text{ with probability } Q(a/\sigma) \end{cases}$$

When s1 was sent,

$$X_k = \begin{cases} +a, & \text{if } r_k \ge 0, \text{ with probability } Q(a/\sigma) \\ -a, & \text{if } r_k < 0, \text{ with probability } 1 - Q(a/\sigma) \end{cases}$$

(Here the gains are taken as $g_k = 1$ for all k; with unequal gains the per-antenna
error probability becomes $Q(g_k a/\sigma)$ and the binomial form below no longer applies.)
All $X_k$ are independent. We sum these random variables to form the decision statistic

$$X = \sum_{k=1}^{K} X_k$$

and the final decision is

$$\hat{s} = \begin{cases} s_0, & \text{if } X \ge 0 \\ s_1, & \text{otherwise} \end{cases}$$

The probability of error may be found in the following way. $X \ge 0$ requires at least
$\lceil K/2 \rceil$ of the $X_k$ to equal $+a$, so

$$P(X \ge 0 \mid s_1) = P(X_1 + X_2 + \dots + X_K \ge 0 \mid s_1) = \sum_{k=\lceil K/2 \rceil}^{K} \binom{K}{k} Q\!\left(\frac{a}{\sigma}\right)^{k} \left(1 - Q\!\left(\frac{a}{\sigma}\right)\right)^{K-k}$$

By symmetry (ignoring ties, which can occur only for even K), we can write

$$P(X < 0 \mid s_0) = P(X_1 + X_2 + \dots + X_K < 0 \mid s_0) = \sum_{k=\lceil K/2 \rceil}^{K} \binom{K}{k} Q\!\left(\frac{a}{\sigma}\right)^{k} \left(1 - Q\!\left(\frac{a}{\sigma}\right)\right)^{K-k}$$

So the total probability of error is given by

$$P_e' = \frac{1}{2}P(X \ge 0 \mid s_1) + \frac{1}{2}P(X < 0 \mid s_0) = \sum_{k=\lceil K/2 \rceil}^{K} \binom{K}{k} Q\!\left(\frac{a}{\sigma}\right)^{k} \left(1 - Q\!\left(\frac{a}{\sigma}\right)\right)^{K-k}$$
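A sketch evaluating this majority-vote error probability under the equal-gain assumption noted above; K, a and sigma are arbitrary test values, with K odd so that ties cannot occur:

```python
from math import ceil, comb
from scipy.stats import norm

K, a, sigma = 5, 1.0, 1.0          # arbitrary test values
p = norm.sf(a / sigma)             # per-antenna error probability Q(a/sigma)

# Pe' = sum_{k = ceil(K/2)}^{K} C(K, k) p^k (1 - p)^(K - k)
pe_vote = sum(comb(K, k) * p**k * (1 - p)**(K - k)
              for k in range(ceil(K / 2), K + 1))
print(pe_vote)
```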

(c) From part (a) we obtained the probability of error

$$P_e = Q\!\left(\frac{a\sum_{k=1}^{K}\alpha_k g_k}{\sigma\sqrt{\sum_{k=1}^{K}\alpha_k^2}}\right)$$

Now define the vectors

$$g = [g_1\; g_2\; \dots\; g_K]^T \quad \text{and} \quad \bar{\alpha} = \left[\frac{\alpha_1}{\sqrt{\sum_{k=1}^{K}\alpha_k^2}}\;\; \frac{\alpha_2}{\sqrt{\sum_{k=1}^{K}\alpha_k^2}}\;\; \dots\;\; \frac{\alpha_K}{\sqrt{\sum_{k=1}^{K}\alpha_k^2}}\right]^T$$

The error will be minimum when the argument of the Q function is maximum (because the
Q function is a decreasing function), so we have to maximize
$\sum_{k=1}^{K}\alpha_k g_k \big/ \sqrt{\sum_{k=1}^{K}\alpha_k^2} = g^T\bar{\alpha}$.
From the Schwarz inequality,

$$(g^T\bar{\alpha})^2 \le (g^Tg)(\bar{\alpha}^T\bar{\alpha})$$

with equality exactly when g and $\bar{\alpha}$ are collinear. Hence the error will be
minimum when

$$\alpha_1 : \alpha_2 : \dots : \alpha_K = g_1 : g_2 : \dots : g_K$$

so the optimum choice of the $\alpha_k$ is $(\alpha_1\ \alpha_2\ \dots\ \alpha_K) = c\,(g_1\ g_2\ \dots\ g_K)$,
where c is an arbitrary positive constant.
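The Schwarz-inequality conclusion can be illustrated numerically: with matched weights alpha = g the Q-function argument is maximal, so no other weight vector yields a lower Pe. The gains below are made up:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
g = np.array([1.0, 0.5, 2.0])            # hypothetical antenna gains
a, sigma = 1.0, 1.0

def pe(alpha):
    """Pe from part (a) for combining weights alpha."""
    return norm.sf(a * (alpha @ g) / (sigma * np.sqrt(alpha @ alpha)))

pe_mrc = pe(g)                           # matched weights alpha = c*g (take c = 1)
print("MRC Pe:", pe_mrc)                 # equals Q(a*||g||/sigma)

for _ in range(1000):                    # no random weight vector does better
    assert pe_mrc <= pe(rng.normal(size=3)) + 1e-12
```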
