[Figure 1: the conditional densities fR(r|s0) and fR(r|s1)]

Solution:
The ML rule compares the likelihoods fR(r|s1) and fR(r|s0). Taking the logarithm of the likelihood ratio, the test reduces to: decide s1 if

|r1 + 1| − |r1 − 1| ≥ |r2 − 1| − |r2 + 1|,        (1)

and s0 otherwise, where the two sides are piecewise linear:

|r1 + 1| − |r1 − 1| = 2 for r1 ∈ [1, ∞);  2r1 for r1 ∈ [−1, 1];  −2 for r1 ∈ (−∞, −1],

|r2 − 1| − |r2 + 1| = −2 for r2 ∈ [1, ∞);  −2r2 for r2 ∈ [−1, 1];  2 for r2 ∈ (−∞, −1].
To obtain the decision regions, we divide the R1–R2 plane into the following nine
regions:
(i) r1 ∈ [1, ∞), r2 ∈ [1, ∞): (1) reads 2 ≥ −2, so the decision is always in favour of s1.
(ii) r1 ∈ [1, ∞), r2 ∈ (−1, 1): (1) reduces to 2 ≥ −2r2, which always holds; decision is always
in favour of s1.
(iii) r1 ∈ [1, ∞), r2 ∈ (−∞, −1]: (1) reduces to 2 ≥ 2. We can decide either s0 or s1.
(iv) r1 ∈ (−1, 1), r2 ∈ [1, ∞): (1) reduces to 2r1 ≥ −2, which always holds; decision is always
in favour of s1.
(v) r1 ∈ (−1, 1), r2 ∈ (−1, 1): (1) reduces to r1 + r2 ≥ 0; decide s1 if r1 + r2 ≥ 0, and s0
otherwise.
(vi) r1 ∈ (−1, 1), r2 ∈ (−∞, −1]: (1) reduces to r1 ≥ 1. As r1 < 1, decision
is always in favour of s0.
(vii) r1 ∈ (−∞, −1], r2 ∈ [1, ∞): (1) reduces to −2 ≥ −2. We can decide either s0 or s1.
(viii) r1 ∈ (−∞, −1], r2 ∈ (−1, 1): (1) reduces to r2 ≥ 1. As r2 < 1, decision is always
in favour of s0.
(ix) r1 ∈ (−∞, −1], r2 ∈ (−∞, −1]: (1) reads −2 ≥ 2, which never holds; decision is
always in favour of s0.
Finally, the decision region is as shown in Figure 2.
[Figure 2: decision regions — decide s1 above the boundary running from (−1, +1) along the line r1 + r2 = 0 to (+1, −1), decide s0 below it; in the corner regions r1 ≥ 1, r2 ≤ −1 and r1 ≤ −1, r2 ≥ 1 either decision may be chosen]
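The rule in (1) and the nine regions above can be sanity-checked numerically. A minimal sketch follows; the signal points s1 = (1, 1), s0 = (−1, −1) and unit-scale Laplacian noise are assumptions read off the derivation, not stated explicitly here.

```python
import random

def decide(r1, r2):
    # Rule (1): decide s1 iff |r1+1| - |r1-1| >= |r2-1| - |r2+1|.
    return "s1" if abs(r1 + 1) - abs(r1 - 1) >= abs(r2 - 1) - abs(r2 + 1) else "s0"

def loglik(r1, r2, s):
    # Log-likelihood for unit-scale Laplacian noise centred at s (assumed).
    return -abs(r1 - s[0]) - abs(r2 - s[1])

# Rule (1) must agree with the direct likelihood comparison everywhere.
random.seed(0)
for _ in range(10_000):
    r1, r2 = random.uniform(-3, 3), random.uniform(-3, 3)
    brute = "s1" if loglik(r1, r2, (1, 1)) >= loglik(r1, r2, (-1, -1)) else "s0"
    assert decide(r1, r2) == brute

# Spot-checks of the regions derived above:
assert decide(2, 2) == "s1"       # (i): always s1
assert decide(0.5, -2) == "s0"    # (vi): r1 < 1, so s0
assert decide(-2, 0.5) == "s0"    # (viii): r2 < 1, so s0
assert decide(0.3, -0.2) == "s1"  # (v): r1 + r2 >= 0
```

The exhaustive comparison against the brute-force likelihood test confirms that the piecewise reduction loses nothing.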
2. Consider the vector signal constellation in Figure 3. Assume all signals to be
equally likely and that the signal vector is disturbed by a zero-mean Gaussian random
vector with independent elements, each with unit variance.
(a) Determine the boundaries of the decision region.
(b) Determine the probability of error.
Solution:
As the messages are equiprobable, the decision regions are determined by the smallest
Euclidean distance. The decision regions are shown in Figure 4. Let the Gaussian
random vector be n = (n1, n2)^T, where

fN1(n) = fN2(n) = (1/(√(2π)σ)) e^(−n²/(2σ²)).

For purposes of calculating Pe, s1, s3, s4 and s6 are symmetric; s2 and s5 are also
symmetric.

[Figure 3: six-point constellation — two rows of three signals with horizontal spacing d and vertical spacing d, so each decision boundary lies d/2 from its nearest signals]

Using signals s1 and s2 as representatives, we have
Pe = (4/6)(1 − Pc|s1) + (2/6)(1 − Pc|s2)

   = 1 − (2/3) ∫_{x=−∞}^{−d/2} ∫_{y=0}^{∞} (1/(2πσ²)) e^(−(x+d)²/(2σ²)) e^(−(y−d/2)²/(2σ²)) dy dx

       − (1/3) ∫_{x=−d/2}^{d/2} ∫_{y=0}^{∞} (1/(2πσ²)) e^(−x²/(2σ²)) e^(−(y−d/2)²/(2σ²)) dy dx

   = (7/3) Q(d/(2σ)) − (4/3) [Q(d/(2σ))]²
[Figure 4: decision regions for the constellation of Figure 3 — six rectangular regions ("Decide s1" … "Decide s6") separated by the vertical lines x = ±d/2 and the horizontal line y = 0]
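The closed form above can be cross-checked numerically. This sketch assumes the 2×3 grid geometry of Figure 3 (every decision boundary at distance d/2 from its signal), so the per-region correct-decision probabilities are products of one-dimensional Gaussian tail terms:

```python
import math

def Q(x):
    # Gaussian tail probability via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_closed_form(d, sigma=1.0):
    # Pe = (7/3) Q(d/2s) - (4/3) Q(d/2s)^2, as derived above.
    q = Q(d / (2.0 * sigma))
    return (7.0 / 3.0) * q - (4.0 / 3.0) * q * q

def pe_from_regions(d, sigma=1.0):
    # Corner signals (4 of 6) see one boundary per axis at distance d/2;
    # middle signals (2 of 6) see two horizontal boundaries at d/2.
    q = Q(d / (2.0 * sigma))
    pc_corner = (1.0 - q) * (1.0 - q)        # e.g. s1
    pc_middle = (1.0 - 2.0 * q) * (1.0 - q)  # e.g. s2
    return (4.0 / 6.0) * (1.0 - pc_corner) + (2.0 / 6.0) * (1.0 - pc_middle)

for d in (0.5, 1.0, 2.0, 4.0):
    assert abs(pe_closed_form(d) - pe_from_regions(d)) < 1e-12
```

The algebraic identity (2/3)(2q − q²) + (1/3)(3q − 2q²) = (7/3)q − (4/3)q² holds for every d, which the loop confirms.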
If r > d/2, then decide s0;
if r < −d/2, then decide s1;
if |r| < d/2, then decide erasure.
Erasure is used to avoid a decision when the receiver thinks that it might
constitute an error. Compute d such that the erasure probability is 10^(−4).
Solution:
(a)

Pe = (1/2) ∫_{r=0}^{∞} (1/(√(2π)σ)) e^(−(r+2)²/(2σ²)) dr + (1/2) ∫_{r=−∞}^{0} (1/(√(2π)σ)) e^(−(r−2)²/(2σ²)) dr

   = Q(2/σ)
(b)

Perasure = P(s0) ∫_{−d/2}^{d/2} fR(r|s0) dr + P(s1) ∫_{−d/2}^{d/2} fR(r|s1) dr

         = (1/2) ∫_{−d/2}^{d/2} (1/(√(2π)σ)) e^(−(r+2)²/(2σ²)) dr + (1/2) ∫_{−d/2}^{d/2} (1/(√(2π)σ)) e^(−(r−2)²/(2σ²)) dr        (2)

         = Q((2 − d/2)/σ) − Q((2 + d/2)/σ)        (3)
Q(x) is a measure of the tail distribution of a Gaussian random variable with zero mean
and unit variance, i.e., it equals the area under the density from x to ∞. (3),
which represents the difference between the two tail areas, must equal 0.0001, which is
small compared to Q(2/σ) = Q(3.09) = 0.001. Therefore the offset d/(2σ) = 0.7725d is small
compared to 3.09, and we can use the Taylor series approximated till the third term,
f(x + h) ≈ f(x) + h f'(x) + (h²/2!) f''(x). Now, using

Q(x) = ∫_{x}^{∞} (1/√(2π)) e^(−y²/2) dy,

Q'(x) = −(1/√(2π)) e^(−x²/2),

Q''(x) = (x/√(2π)) e^(−x²/2),

the even-order terms cancel in the difference of the two tails, and (2) now simplifies to

0.0001 = (d/(√(2π)σ)) e^(−x²/2),  with x = 2/σ.
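The approximation can be checked numerically. A sketch follows; the value σ = 2/3.09 is an assumption inferred from the appearance of Q(3.09) = 0.001 above, and with it the first-order formula pins down d:

```python
import math

def Q(x):
    # Gaussian tail probability.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def p_erasure(d, sigma):
    # Exact erasure probability, equation (3).
    return Q((2.0 - d / 2.0) / sigma) - Q((2.0 + d / 2.0) / sigma)

def p_erasure_approx(d, sigma):
    # First-order Taylor term: even-order terms cancel in the
    # difference of the two tails.
    x = 2.0 / sigma
    return d / (math.sqrt(2.0 * math.pi) * sigma) * math.exp(-x * x / 2.0)

sigma = 2.0 / 3.09   # assumed, so that Q(2/sigma) = Q(3.09)
# Invert the first-order formula for the target 1e-4:
d = 1e-4 * math.sqrt(2.0 * math.pi) * sigma * math.exp(3.09**2 / 2.0)

assert abs(p_erasure_approx(d, sigma) - 1e-4) < 1e-18
assert abs(p_erasure(d, sigma) - 1e-4) < 1e-6   # approximation is tight
```

Under this assumed σ the inversion gives d on the order of 0.02, and the exact expression (3) agrees with the target 10⁻⁴ to well within the first-order error.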
rk = gk s + nk,  1 ≤ k ≤ K,

where gk is the gain of antenna k, s = ±a and nk is zero-mean Gaussian noise with
variance σ² at antenna k.
(a) Let r = Σ_{k=1}^{K} αk rk, where αk is an arbitrary real number. What is the optimum
receiver given r?
(b) If detection is done separately on each rk rather than on r, is it possible to
achieve a lower probability of error?
(c) Minimize the probability of error obtained in (a) over all choices of (α1, α2, …, αK).
Solution:
a) All noises are independent, so the final received signal is

r = Σ_{k=1}^{K} αk rk = (Σ_{k=1}^{K} αk gk) s + Σ_{k=1}^{K} αk nk.

Here s could be either +a or −a. As the noises are all independent, the variance of the
noise in the final received signal r is Σ_{k=1}^{K} αk² σ². As the messages are equally
likely, by applying the ML rule we may determine the decision regions as

ŝ = s0 if r ≥ 0, and ŝ = s1 otherwise.
The resulting probability of error is

Pe = Q( Σ_{k=1}^{K} αk gk a / √(Σ_{k=1}^{K} αk² σ²) ) = Q( a Σ_{k=1}^{K} αk gk / (σ √(Σ_{k=1}^{K} αk²)) ),

where the denominator is the standard deviation of the noise in r.
(b) Now we will take decisions based on each received signal rk separately. Suppose we
denote each decision by a random variable Xk. Given s = a was sent (writing Q(a/σ) below,
i.e., taking gk = 1 for simplicity), Xk is defined as

Xk = a, if rk ≥ 0, with probability 1 − Q(a/σ);  Xk = −a, if rk < 0, with probability Q(a/σ),

while given s = −a was sent,

Xk = a, if rk ≥ 0, with probability Q(a/σ);  Xk = −a, if rk < 0, with probability 1 − Q(a/σ).

The final decision is based on X = Σ_{k=1}^{K} Xk:

ŝ = s0 if X ≥ 0, and ŝ = s1 otherwise,

i.e., a majority vote over the K individual decisions. The resulting probability of error is

Pe' = Σ_{k=⌊K/2⌋+1}^{K} (K choose k) Q(a/σ)^k (1 − Q(a/σ))^(K−k),

whereas combining before detection gives, from part (a),

Pe = Q( a Σ_{k=1}^{K} αk gk / (σ √(Σ_{k=1}^{K} αk²)) ).

Since the detector of part (a) with the optimal weights of part (c) is the ML detector for
the full observation (r1, …, rK), no scheme of separate hard decisions can achieve a lower
probability of error.
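The comparison can be illustrated numerically. This is a sketch under stated assumptions: unit gains so the per-antenna error probabilities are identical, and K odd so the majority vote has no ties.

```python
import math

def Q(x):
    # Gaussian tail probability.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_combined(a, sigma, gains, alphas):
    # Part (a): error probability after linear combining r = sum(alpha_k r_k).
    num = a * sum(al * g for al, g in zip(alphas, gains))
    den = sigma * math.sqrt(sum(al * al for al in alphas))
    return Q(num / den)

def pe_majority(a, sigma, K):
    # Part (b): per-antenna hard decisions (unit gains assumed),
    # then a majority vote; an error needs more than K/2 flipped decisions.
    q = Q(a / sigma)
    return sum(math.comb(K, k) * q**k * (1.0 - q)**(K - k)
               for k in range(K // 2 + 1, K + 1))

K, a, sigma = 5, 1.0, 1.0
gains = [1.0] * K
p_soft = pe_combined(a, sigma, gains, gains)   # alpha = g
p_hard = pe_majority(a, sigma, K)
assert p_soft < p_hard   # hard decisions discard reliability information
```

With K = 5 and a/σ = 1, soft combining gives Q(√5) ≈ 0.013 while the majority vote gives roughly 0.031, illustrating why separate detection cannot do better.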
Now define the vectors

g = [g1 g2 … gK]^T and α = [α1 α2 … αK]^T,

so that

Pe = Q( a α^T g / (σ √(α^T α)) ) = Q( a Σ_{k=1}^{K} αk gk / (σ √(Σ_{k=1}^{K} αk²)) ).
The error will be minimum when the argument of the Q function is maximum (because the Q
function is a decreasing function). So we will have to maximize Σ_{k=1}^{K} αk gk / √(Σ_{k=1}^{K} αk²).
By the Cauchy–Schwarz inequality,

Σ_{k=1}^{K} αk gk / √(Σ_{k=1}^{K} αk²) ≤ √(Σ_{k=1}^{K} gk²),

with equality when αk is proportional to gk. Hence the optimum is α = c g for any c > 0
(maximal-ratio combining), and the minimum probability of error is

Pe = Q( a √(Σ_{k=1}^{K} gk²) / σ ).
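The Cauchy–Schwarz argument can be verified directly: with α = g (or any positive multiple of it) the argument of Q attains the bound a‖g‖/σ, and no other choice of weights exceeds it. The gain values below are hypothetical, chosen only for illustration.

```python
import math
import random

def q_argument(a, sigma, gains, alphas):
    # Argument of Q from part (a); the error is minimised by maximising it.
    num = a * sum(al * g for al, g in zip(alphas, gains))
    den = sigma * math.sqrt(sum(al * al for al in alphas))
    return num / den

a, sigma = 1.0, 1.0
gains = [0.5, 1.2, 0.8, 2.0]   # hypothetical antenna gains
bound = a * math.sqrt(sum(g * g for g in gains)) / sigma

# alpha proportional to g attains the Cauchy-Schwarz bound ...
assert abs(q_argument(a, sigma, gains, gains) - bound) < 1e-12
assert abs(q_argument(a, sigma, gains, [2.0 * g for g in gains]) - bound) < 1e-12

# ... and no other choice of weights can exceed it.
random.seed(1)
for _ in range(1_000):
    alphas = [random.uniform(-1.0, 1.0) for _ in gains]
    if sum(al * al for al in alphas) < 1e-12:
        continue  # skip near-zero weight vectors (undefined argument)
    assert q_argument(a, sigma, gains, alphas) <= bound + 1e-12
```

The scale invariance of the argument (α and cα give the same value) is why any c > 0 works in α = c g.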