$\sum_{l=1}^{n} h_{n,l}\, s_l$, and that due to noise is $\sum_{l=1}^{n} h_{n,l}\, N_l$. Thus the output SNR at time $n$ is
$$ \mathrm{SNR} = \frac{\left| \sum_{l=1}^{n} h_{n,l}\, s_l \right|^2}{E\left\{ \left( \sum_{l=1}^{n} h_{n,l}\, N_l \right)^2 \right\}} = \frac{|h_n^T s|^2}{h_n^T \Sigma_N h_n} \, , $$
where $h_n = (h_{n,1}, h_{n,2}, \ldots, h_{n,n})^T$.
Since $\Sigma_N > 0$, we can write $\Sigma_N = \Sigma_N^{1/2} \Sigma_N^{1/2}$, where $\Sigma_N^{1/2}$ is invertible and symmetric. Thus,
$$ \mathrm{SNR} = \frac{\left| \left( \Sigma_N^{1/2} h_n \right)^T \Sigma_N^{-1/2} s \right|^2}{\left\| \Sigma_N^{1/2} h_n \right\|^2} \, . $$
By the Schwarz Inequality ($|x^T y| \le \|x\| \, \|y\|$), we have
$$ \mathrm{SNR} \le \left\| \Sigma_N^{-1/2} s \right\|^2 , $$
with equality if and only if $\Sigma_N^{1/2} h_n = \alpha \Sigma_N^{-1/2} s$ for a constant $\alpha$. Thus, the maximum SNR occurs when $h_n = \alpha \Sigma_N^{-1} s$. The constant $\alpha$ is arbitrary (it does not affect the SNR), so we can take $\alpha = 1$, which gives the desired result.
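As a numerical sanity check (a sketch, not part of the original solution; the signal, dimension, and noise covariance below are arbitrary), one can verify with NumPy that $h_n = \Sigma_N^{-1} s$ attains the Schwarz bound $s^T \Sigma_N^{-1} s$ and that no other filter exceeds it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Hypothetical signal and a random positive-definite noise covariance.
s = rng.standard_normal(n)
A = rng.standard_normal((n, n))
Sigma_N = A @ A.T + n * np.eye(n)

def snr(h):
    """Output SNR |h^T s|^2 / (h^T Sigma_N h)."""
    return (h @ s) ** 2 / (h @ Sigma_N @ h)

h_opt = np.linalg.solve(Sigma_N, s)      # h = Sigma_N^{-1} s  (alpha = 1)
bound = s @ np.linalg.solve(Sigma_N, s)  # ||Sigma_N^{-1/2} s||^2 = s^T Sigma_N^{-1} s

assert np.isclose(snr(h_opt), bound)
# No randomly chosen filter should beat the Schwarz bound.
assert all(snr(rng.standard_normal(n)) <= bound + 1e-12 for _ in range(1000))
```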
Exercise 3:
a. From Exercise 15 of Chapter II, the optimum test here has critical regions
$$ \Gamma_k = \left\{ y \in \mathbb{R}^n \;\middle|\; p_k(y) = \max_{0 \le l \le M-1} p_l(y) \right\} . $$
Since $p_l$ is the $N(s_l, \sigma^2 I)$ density, this reduces to
$$ \Gamma_k = \left\{ y \in \mathbb{R}^n \;\middle|\; \|y - s_k\|^2 = \min_{0 \le l \le M-1} \|y - s_l\|^2 \right\} = \left\{ y \in \mathbb{R}^n \;\middle|\; s_k^T y = \max_{0 \le l \le M-1} s_l^T y \right\} . $$
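The equivalence of the minimum-distance and maximum-correlation rules holds whenever all $\|s_k\|$ are equal, as for the orthogonal equal-energy signals assumed here; a small sketch (the signal set and noise realization are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 16, 4

# Hypothetical equal-energy orthogonal signals: scaled standard basis vectors.
S = np.zeros((M, n))
for k in range(M):
    S[k, k] = 3.0                       # ||s_k||^2 = 9 for every k

y = S[2] + rng.standard_normal(n)       # observe signal 2 in unit-variance noise

k_dist = np.argmin([np.sum((y - s) ** 2) for s in S])  # minimum-distance rule
k_corr = np.argmax(S @ y)                              # maximum-correlation rule
assert k_dist == k_corr   # identical decisions when all ||s_k|| are equal
```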
b. We have
$$ P_e = \frac{1}{M} \sum_{k=0}^{M-1} P_k\!\left( \Gamma_k^c \right) , $$
and
$$ P_k\!\left( \Gamma_k^c \right) = 1 - P_k(\Gamma_k) = 1 - P_k\!\left( \max_{0 \le l \ne k \le M-1} s_l^T Y < s_k^T Y \right) . $$
Due to the assumed orthogonality of $s_1, \ldots, s_n$, it is straightforward to show that, under $H_k$, $s_1^T Y, s_2^T Y, \ldots, s_n^T Y$ are independent Gaussian random variables with variances $\sigma^2 \|s_1\|^2$, with mean zero for $l \ne k$ and mean $\|s_1\|^2$ for $l = k$. Thus
$$ P_k\!\left( \max_{0 \le l \ne k \le M-1} s_l^T Y < s_k^T Y \right) = \frac{1}{\sqrt{2\pi}\, \sigma \|s_1\|} \int_{-\infty}^{\infty} P_k\!\left( \max_{0 \le l \ne k \le M-1} s_l^T Y < z \right) e^{-\left( z - \|s_1\|^2 \right)^2 / 2 \sigma^2 \|s_1\|^2} \, dz \, . $$
Now
$$ P_k\!\left( \max_{0 \le l \ne k \le M-1} s_l^T Y < z \right) = P_k\!\left( \bigcap_{0 \le l \ne k \le M-1} \left\{ s_l^T Y < z \right\} \right) = \prod_{0 \le l \ne k \le M-1} P_k\!\left( s_l^T Y < z \right) = \left[ \Phi\!\left( \frac{z}{\sigma \|s_1\|} \right) \right]^{M-1} . $$
Combining the above and setting $x = z / \sigma \|s_1\|$ yields
$$ 1 - P_k(\Gamma_k) = 1 - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \left[ \Phi(x) \right]^{M-1} e^{-(x-d)^2/2} \, dx \, , \qquad k = 0, \ldots, M-1 \, , $$
where $d = \|s_1\| / \sigma$, and the desired expression for $P_e$ follows.
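This expression can be cross-checked by Monte Carlo (a sketch with hypothetical $M$ and $d$; by symmetry the error probability under $H_0$ equals $P_e$, and the simulation uses the normalized correlator outputs $x = z/\sigma\|s_1\|$ directly):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
M, d = 4, 2.0   # hypothetical: 4 orthogonal signals, per-signal SNR d = ||s_1||/sigma

Phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))

# P_e from the integral formula, by simple trapezoidal quadrature.
x = np.linspace(-10.0, 10.0, 20001)
integrand = Phi(x) ** (M - 1) * np.exp(-(x - d) ** 2 / 2) / sqrt(2 * np.pi)
Pe_formula = 1.0 - np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(x))

# Monte Carlo: normalized correlators are independent N(0,1), except N(d,1)
# for the true signal (taken to be index 0 here).
trials = 200_000
Z = rng.standard_normal((trials, M))
Z[:, 0] += d
Pe_mc = np.mean(np.argmax(Z, axis=1) != 0)

assert abs(Pe_formula - Pe_mc) < 0.01
```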
Exercise 6:
Since $Y \sim N(\mu, \Sigma)$, it follows that $\hat{Y}_k$ is linear in $Y_1, \ldots, Y_{k-1}$, and that $\sigma^2_{\tilde{Y}_k}$ (writing $\tilde{Y}_k \triangleq Y_k - \hat{Y}_k$) does not depend on $Y_1, \ldots, Y_{k-1}$. Thus, $I$ is a linear transformation of $Y$ and is Gaussian. We need only show that $E\{I\} = 0$ and $\mathrm{cov}(I) = I$.
We have
$$ E\{I_k\} = \frac{E\{Y_k\} - E\{\hat{Y}_k\}}{\sigma_{\tilde{Y}_k}} \, . $$
Since $\hat{Y}_k = E\{Y_k \,|\, Y_1, \ldots, Y_{k-1}\}$, $E\{\hat{Y}_k\}$ is an iterated expectation of $Y_k$; hence $E\{Y_k\} = E\{\hat{Y}_k\}$ and $E\{I_k\} = 0$, $k = 1, \ldots, n$. To see that $\mathrm{cov}(I) = I$, note first that
$$ \mathrm{Var}(I_k) = E\{I_k^2\} = \frac{E\left\{ (Y_k - \hat{Y}_k)^2 \right\}}{\sigma^2_{\tilde{Y}_k}} = \frac{\sigma^2_{\tilde{Y}_k}}{\sigma^2_{\tilde{Y}_k}} = 1 \, . $$
Now, for $l < k$, we have
$$ \mathrm{cov}(I_k, I_l) = E\{I_k I_l\} = \frac{E\left\{ (Y_k - \hat{Y}_k)(Y_l - \hat{Y}_l) \right\}}{\sigma_{\tilde{Y}_k} \sigma_{\tilde{Y}_l}} \, . $$
Noting that, for $l < k$, $Y_l - \hat{Y}_l$ is a function of $Y_1, \ldots, Y_{k-1}$ and hence factors out of the conditional expectation, we have
$$ E\left\{ (Y_k - \hat{Y}_k)(Y_l - \hat{Y}_l) \right\} = E\left\{ E\left\{ (Y_k - \hat{Y}_k)(Y_l - \hat{Y}_l) \,\middle|\, Y_1, \ldots, Y_{k-1} \right\} \right\} = E\left\{ \left( E\{Y_k \,|\, Y_1, \ldots, Y_{k-1}\} - \hat{Y}_k \right) (Y_l - \hat{Y}_l) \right\} = E\left\{ (\hat{Y}_k - \hat{Y}_k)(Y_l - \hat{Y}_l) \right\} = 0 \, , $$
so $\mathrm{cov}(I_k, I_l) = 0$ for $l < k$. By symmetry we also have $\mathrm{cov}(I_k, I_l) = 0$ for $l > k$, and the desired result follows.
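For a Gaussian vector the normalized innovations coincide with lower-triangular (Cholesky) whitening, since each $\hat{Y}_k$ is a linear function of $Y_1, \ldots, Y_{k-1}$. A numerical sketch (with arbitrary $\mu$ and $\Sigma$, not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

# Hypothetical Gaussian vector Y ~ N(mu, Sigma).
mu = rng.standard_normal(n)
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)

# Normalized innovations via the lower-triangular Cholesky whitener:
# I = L^{-1}(Y - mu), where Sigma = L L^T. Row k of L^{-1} combines only
# Y_1, ..., Y_k, matching I_k = (Y_k - Yhat_k)/sigma_k.
L = np.linalg.cholesky(Sigma)
Y = rng.multivariate_normal(mu, Sigma, size=100_000)
I = np.linalg.solve(L, (Y - mu).T).T

assert np.allclose(I.mean(axis=0), 0.0, atol=0.05)     # E{I} = 0
assert np.allclose(np.cov(I.T), np.eye(n), atol=0.05)  # cov(I) = identity
```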
Exercise 7:
a. The likelihood ratio is
$$ L(y) = \frac{1}{2} e^{s_1^T y - d^2/2} + \frac{1}{2} e^{-s_1^T y - d^2/2} = e^{-d^2/2} \cosh\!\left( s_1^T y \right) , $$
which is monotone increasing in the statistic
$$ T(y) \triangleq \left| s_1^T y \right| . $$
(Here, as usual, $d^2 = s_1^T s_1$.) Thus, the Neyman-Pearson test is of the form
$$ \delta_{NP}(y) = \begin{cases} 1 & \text{if } T(y) > \eta \\ \gamma & \text{if } T(y) = \eta \\ 0 & \text{if } T(y) < \eta \, . \end{cases} $$
To set the threshold $\eta$, we consider
$$ P_0(T(Y) > \eta) = 1 - P_0\!\left( -\eta \le s_1^T N \le \eta \right) = 1 - \Phi(\eta/d) + \Phi(-\eta/d) = 2\left[ 1 - \Phi(\eta/d) \right] , $$
where we have used the fact that $s_1^T N$ is Gaussian with zero mean and variance $d^2$. Thus, the threshold for size $\alpha$ is $\eta = d\, \Phi^{-1}(1 - \alpha/2)$. The randomization is unnecessary.
The detection probability is
$$ P_D(\delta_{NP}) = \frac{1}{2} P_1\!\left( T(Y) > \eta \,\middle|\, \theta = +1 \right) + \frac{1}{2} P_1\!\left( T(Y) > \eta \,\middle|\, \theta = -1 \right) $$
$$ = \frac{1}{2} \left[ 1 - P\!\left( -\eta \le d^2 + s_1^T N \le \eta \right) \right] + \frac{1}{2} \left[ 1 - P\!\left( -\eta \le -d^2 + s_1^T N \le \eta \right) \right] $$
$$ = 2 - \Phi\!\left( \Phi^{-1}(1 - \alpha/2) + d \right) - \Phi\!\left( \Phi^{-1}(1 - \alpha/2) - d \right) . $$
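Both the size and the detection probability can be cross-checked by simulation (a sketch with hypothetical $s_1$ and $\alpha$; $\Phi^{-1}$ is implemented by bisection to keep the example dependency-free):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)

# Hypothetical setup: unit-variance noise, signal theta*s1 with random sign.
n, alpha = 10, 0.05
s1 = rng.standard_normal(n)
d = np.linalg.norm(s1)

Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Standard normal quantile by bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return (lo + hi) / 2

eta = d * Phi_inv(1 - alpha / 2)

trials = 200_000
N = rng.standard_normal((trials, n))
theta = rng.choice([-1.0, 1.0], size=trials)

T0 = np.abs(N @ s1)                           # statistic under H0
T1 = np.abs((theta[:, None] * s1 + N) @ s1)   # statistic under H1

PD_formula = 2 - Phi(Phi_inv(1 - alpha / 2) + d) - Phi(Phi_inv(1 - alpha / 2) - d)
assert abs(np.mean(T0 > eta) - alpha) < 0.01        # size matches alpha
assert abs(np.mean(T1 > eta) - PD_formula) < 0.01   # P_D matches the formula
```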
b. Since the likelihood ratio is the average over the distribution of $\theta$ of the likelihood ratio conditioned on $\theta$, we have
$$ L(y) = \int_{-\infty}^{\infty} e^{\left( \theta s^T y - n \theta^2 \overline{s^2}/2 \right) / \sigma^2} \, \frac{1}{\sqrt{2\pi}} e^{-\theta^2/2} \, d\theta = k_1 e^{k_2 (s^T y)^2} \, \frac{1}{\sqrt{2\pi}\, v} \int_{-\infty}^{\infty} e^{-\left( \theta - \bar{\theta} \right)^2 / 2 v^2} \, d\theta = k_1 e^{k_2 (s^T y)^2} , $$
where
$$ v^2 = \frac{\sigma^2}{\sigma^2 + n \overline{s^2}} \, , \qquad \bar{\theta} = \frac{v^2 s^T y}{\sigma^2} \, , \qquad k_1 = v \, , \qquad \text{and} \qquad k_2 = \frac{v^2}{2 \sigma^4} \, . $$
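Assuming, as the $e^{-\theta^2/2}$ factor above suggests, that $\theta$ has a standard normal prior, and writing $\|s\|^2$ for $n\overline{s^2}$, the closed form can be checked against direct numerical integration (all values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical values; ||s||^2 plays the role of n * s-bar^2 above.
n, sigma = 6, 1.3
s = rng.standard_normal(n)
y = rng.standard_normal(n)
S2 = s @ s                          # ||s||^2

# Closed form: L(y) = k1 * exp(k2 * (s^T y)^2).
v2 = sigma**2 / (sigma**2 + S2)
k1, k2 = np.sqrt(v2), v2 / (2 * sigma**4)
L_closed = k1 * np.exp(k2 * (s @ y) ** 2)

# Direct trapezoidal integration over theta ~ N(0, 1).
theta = np.linspace(-12.0, 12.0, 400001)
integrand = (np.exp((theta * (s @ y) - theta**2 * S2 / 2) / sigma**2)
             * np.exp(-theta**2 / 2) / np.sqrt(2 * np.pi))
L_num = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(theta))

assert np.isclose(L_closed, L_num, rtol=1e-6)
```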
Exercise 13:
a. In this situation, the problem is that of detecting a Gaussian signal with zero mean and covariance matrix $\Sigma_S = \mathrm{diag}\{ A s_1^2, A s_2^2, \ldots, A s_n^2 \}$ in i.i.d. Gaussian noise with unit variance; thus the Neyman-Pearson test is based on the quadratic statistic
$$ T(y) = \sum_{k=1}^{n} \frac{A s_k^2}{A s_k^2 + 1} \, y_k^2 \, . $$
b. Assuming $s_k \ne 0$ for all $k$, a sufficient condition for a UMP test is that $s_k^2$ is constant. In this case, an equivalent test statistic is the radiometer $\sum_{k=1}^{n} y_k^2$, which can be given size $\alpha$ without knowledge of $A$.
c. From Eq. (III.B.110), we see that an LMP test can be based on the statistic
$$ T_{lo}(y) = \sum_{k=1}^{n} s_k^2 \, y_k^2 \, . $$
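A small sketch illustrating part (b): when $s_k^2$ is constant, the Neyman-Pearson statistic of part (a) is a fixed positive multiple of the radiometer for every $A > 0$, so the same threshold test (on the radiometer) serves all $A$ (the values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8
y = rng.standard_normal(n)

def T_np(y, s2, A):
    """NP quadratic statistic for the Gaussian-signal problem above."""
    w = A * s2 / (A * s2 + 1.0)
    return np.sum(w * y**2)

# Constant s_k^2: the NP statistic is c(A) times the radiometer.
s2_const = np.full(n, 2.0)
radiometer = np.sum(y**2)
for A in (0.1, 1.0, 10.0):
    c = A * 2.0 / (A * 2.0 + 1.0)
    assert np.isclose(T_np(y, s2_const, A), c * radiometer)
```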
Exercise 15:
Let $L_a$ denote the likelihood ratio conditioned on $A = a$. Then the unconditioned likelihood ratio is
$$ L(y) = \int_0^{\infty} L_a(y) \, p_A(a) \, da = \int_0^{\infty} e^{-n a^2 / 4 \sigma^2} \, I_0\!\left( a \tilde{r} / \sigma^2 \right) p_A(a) \, da \, , $$
with $\tilde{r} \triangleq r / A$, where $r = \sqrt{y_c^2 + y_s^2}$ as in Example III.B.5. Note that
$$ \tilde{r} = \sqrt{ \left( \sum_{k=1}^{n} b_k \cos\!\left( (k-1) \omega_c T_s \right) y_k \right)^2 + \left( \sum_{k=1}^{n} b_k \sin\!\left( (k-1) \omega_c T_s \right) y_k \right)^2 } \, , $$
which can be computed without knowledge of A. Note further that
$$ \frac{\partial L(y)}{\partial \tilde{r}} = \frac{1}{\sigma^2} \int_0^{\infty} e^{-n a^2 / 4 \sigma^2} \, a \, I_0'\!\left( a \tilde{r} / \sigma^2 \right) p_A(a) \, da > 0 \, , $$
where we have used the fact that $I_0$ is monotone increasing in its argument. Thus, $L(y)$ is monotone increasing in $\tilde{r}$, and the Neyman-Pearson test is of the form
$$ \delta_{NP}(y) = \begin{cases} 1 & \text{if } \tilde{r} > \tilde{\eta} \\ \gamma & \text{if } \tilde{r} = \tilde{\eta} \\ 0 & \text{if } \tilde{r} < \tilde{\eta} \, . \end{cases} $$
To get size $\alpha$ we choose $\tilde{\eta}$ so that
$$ P_0\!\left( \tilde{R} > \tilde{\eta} \right) = e^{-\tilde{\eta}^2 / n \sigma^2} = \alpha \, , $$
from which the desired size-$\alpha$ threshold is
$$ \tilde{\eta} = \sqrt{ -n \sigma^2 \log \alpha } \, . $$
The detection probability can be found by first conditioning on $A$ and then averaging the result over the distribution of $A$. (Note that we have not used the explicit form of the distribution of $A$ to derive any of the above results.) It follows from (III.B.74) that $P_1\!\left( \tilde{R} > \tilde{\eta} \,\middle|\, A = a \right) = Q(b, \eta_0)$ with $b^2 = n a^2 / 2 \sigma^2$ and $\eta_0 = \sqrt{2/n} \, \tilde{\eta} / \sigma = \sqrt{-2 \log \alpha}$. Thus,
$$ P_D = \int_0^{\infty} Q\!\left( a \sqrt{n/2} / \sigma, \, \eta_0 \right) p_A(a) \, da = \int_0^{\infty} \int_{\eta_0}^{\infty} x \, e^{-\left( x^2 + n a^2 / 2 \sigma^2 \right)/2} \, I_0\!\left( x a \sqrt{n/2} / \sigma \right) \frac{a}{A_0^2} \, e^{-a^2 / 2 A_0^2} \, dx \, da $$
$$ = \int_{\eta_0}^{\infty} x \, e^{-x^2/2} \int_0^{\infty} \frac{a}{A_0^2} \, e^{-a^2 / 2 a_0^2} \, I_0\!\left( x a \sqrt{n/2} / \sigma \right) da \, dx \, , $$
where $a_0^2 = \frac{2 A_0^2 \sigma^2}{n A_0^2 + 2 \sigma^2}$. On making the substitution $y = a / a_0$, this integral becomes
$$ P_D = \frac{a_0^2}{A_0^2} \int_{\eta_0}^{\infty} x \, e^{-x^2 \left( 1 - b_0^2 \right)/2} \int_0^{\infty} y \, e^{-\left( y^2 + b_0^2 x^2 \right)/2} \, I_0(b_0 x y) \, dy \, dx = \frac{a_0^2}{A_0^2} \int_{\eta_0}^{\infty} x \, e^{-x^2 \left( 1 - b_0^2 \right)/2} \, Q(b_0 x, 0) \, dx \, , $$
where $b_0^2 = n a_0^2 / 2 \sigma^2$. Since $Q(b, 0) = 1$ for any value of $b$, and since $1 - b_0^2 = a_0^2 / A_0^2$, the
detection probability becomes
$$ P_D = \frac{a_0^2}{A_0^2} \int_{\eta_0}^{\infty} x \, e^{-x^2 \left( 1 - b_0^2 \right)/2} \, dx = e^{-\eta_0^2 \left( 1 - b_0^2 \right)/2} = \exp\!\left( -\frac{\eta_0^2}{2 \left( 1 + n A_0^2 / 2 \sigma^2 \right)} \right) = \alpha^{x_0} \, , $$
where $x_0 = \frac{1}{1 + n A_0^2 / 2 \sigma^2}$.
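With $p_A$ the Rayleigh density $p_A(a) = (a / A_0^2) e^{-a^2 / 2 A_0^2}$ used above, the final expression $P_D = \alpha^{x_0}$ can be checked by Monte Carlo, since given $A = a$ the normalized envelope is Rician with parameter $b = a\sqrt{n/2}/\sigma$ (all parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical parameters for the Rayleigh-amplitude noncoherent problem above.
n, sigma, A0, alpha = 16, 1.0, 0.5, 0.1
eta0 = np.sqrt(-2 * np.log(alpha))
x0 = 1.0 / (1.0 + n * A0**2 / (2 * sigma**2))

trials = 400_000
a = A0 * np.sqrt(-2 * np.log(rng.random(trials)))  # Rayleigh(A0) amplitudes
b = a * np.sqrt(n / 2) / sigma                     # Rician parameter given A = a
# Normalized envelope given a: X = sqrt((b + Z1)^2 + Z2^2), Z_i i.i.d. N(0,1),
# so that P(X > eta0 | a) = Q(b, eta0).
Z1, Z2 = rng.standard_normal(trials), rng.standard_normal(trials)
X = np.sqrt((b + Z1) ** 2 + Z2**2)

PD_mc = np.mean(X > eta0)
assert abs(PD_mc - alpha**x0) < 0.01
```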
Exercise 16:
The right-hand side of the given equation is simply the likelihood ratio for detecting a $N(0, \Sigma_S)$ signal in independent $N(0, \sigma^2 I)$ noise. From Eq. (III.B.84), this is given by
$$ \exp\!\left( \frac{1}{2 \sigma^2} \, y^T \Sigma_S \left( \sigma^2 I + \Sigma_S \right)^{-1} y + \frac{1}{2} \log\!\left( \left| \sigma^2 I \right| / \left| \sigma^2 I + \Sigma_S \right| \right) \right) . $$
We thus are looking for a solution $s$ to the equation
$$ 2 s^T y - \|s\|^2 = y^T \Sigma_S \left( \sigma^2 I + \Sigma_S \right)^{-1} y + \sigma^2 \sum_{k=1}^{n} \log\!\left( \frac{\sigma^2}{\sigma^2 + \lambda_k} \right) , $$
where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $\Sigma_S$. On completing the square on the left-hand side of this equation, it can be rewritten as
$$ \|s - y\|^2 = \|y\|^2 - y^T \Sigma_S \left( \sigma^2 I + \Sigma_S \right)^{-1} y - \sigma^2 \sum_{k=1}^{n} \log\!\left( \frac{\sigma^2}{\sigma^2 + \lambda_k} \right) = \sigma^2 \left[ y^T \left( \sigma^2 I + \Sigma_S \right)^{-1} y - \sum_{k=1}^{n} \log\!\left( \frac{\sigma^2}{\sigma^2 + \lambda_k} \right) \right] , $$
which is solved by
$$ s = y + \frac{\sigma}{\|v\|} \left[ y^T \left( \sigma^2 I + \Sigma_S \right)^{-1} y - \sum_{k=1}^{n} \log\!\left( \frac{\sigma^2}{\sigma^2 + \lambda_k} \right) \right]^{1/2} v \, , $$
for any nonzero vector $v$.
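A numerical check of this construction (a sketch with a hypothetical $\Sigma_S$, observation $y$, and direction $v$): the coherent log-likelihood ratio at the constructed $s$ equals the Gaussian-signal log-likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(8)
n, sigma = 4, 1.1

# Hypothetical positive semidefinite signal covariance and an observation.
B = rng.standard_normal((n, n))
Sigma_S = B @ B.T
y = rng.standard_normal(n)

M = sigma**2 * np.eye(n) + Sigma_S
lam = np.linalg.eigvalsh(Sigma_S)
log_term = np.sum(np.log(sigma**2 / (sigma**2 + lam)))

# Squared radius of the sphere of solutions s, from the completed square.
rad2 = sigma**2 * (y @ np.linalg.solve(M, y) - log_term)
v = rng.standard_normal(n)
s = y + np.sqrt(rad2) * v / np.linalg.norm(v)

# Coherent (known-signal) log-LR equals the Gaussian-signal log-LR at this s.
lhs = (2 * (s @ y) - s @ s) / (2 * sigma**2)
rhs = (y @ (Sigma_S @ np.linalg.solve(M, y))) / (2 * sigma**2) + 0.5 * log_term
assert np.isclose(lhs, rhs)
```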