
ECE 534: Elements of Information Theory, Fall 2010

Homework: 8
Solutions
Exercise 8.9 (Johnson Jonaris GadElkarim)
Gaussian mutual information. Suppose that $(X, Y, Z)$ are jointly Gaussian and that $X \to Y \to Z$ forms a Markov chain. Let $X$ and $Y$ have correlation coefficient $\rho_1$ and let $Y$ and $Z$ have correlation coefficient $\rho_2$. Find $I(X; Z)$.
Solution
$$I(X; Z) = h(X) + h(Z) - h(X, Z)$$
Since $(X, Y, Z)$ are jointly Gaussian, $X$ and $Z$ are jointly Gaussian, and their covariance matrix is
$$K = \begin{pmatrix} \sigma_x^2 & \rho_{xz}\sigma_x\sigma_z \\ \rho_{xz}\sigma_x\sigma_z & \sigma_z^2 \end{pmatrix}$$
Hence
$$I(X; Z) = \tfrac{1}{2}\log(2\pi e\,\sigma_x^2) + \tfrac{1}{2}\log(2\pi e\,\sigma_z^2) - \tfrac{1}{2}\log\big((2\pi e)^2 |K|\big)$$
$$|K| = \sigma_x^2\sigma_z^2(1 - \rho_{xz}^2)$$
$$I(X; Z) = -\tfrac{1}{2}\log(1 - \rho_{xz}^2)$$
Now we need to compute $\rho_{xz}$. Using Markovity, $p(x, z \mid y) = p(x \mid y)\,p(z \mid y)$, we get
$$\rho_{xz} = \frac{E(XZ)}{\sigma_x \sigma_z} = \frac{E\big[E(XZ \mid Y)\big]}{\sigma_x \sigma_z} = \frac{E\big[E(X \mid Y)\,E(Z \mid Y)\big]}{\sigma_x \sigma_z}$$
Since $X$, $Y$, and $Z$ are jointly Gaussian (assume zero means without loss of generality), $E(X \mid Y) = \rho_{xy}\frac{\sigma_x}{\sigma_y}Y$, and the same holds for $E(Z \mid Y)$. Substituting and using $E(Y^2) = \sigma_y^2$,
$$\rho_{xz} = \rho_{xy}\,\rho_{zy} = \rho_1\rho_2$$
$$I(X; Z) = -\tfrac{1}{2}\log\big(1 - (\rho_1\rho_2)^2\big)$$
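As a quick numerical sanity check (our own illustration with arbitrary values of $\rho_1$ and $\rho_2$, not part of the problem), the following sketch builds the Markov chain explicitly and compares the empirical correlation and mutual information against the closed form:

```python
import numpy as np

# Build a Gaussian Markov chain X -> Y -> Z and check that
# corr(X, Z) = rho1 * rho2 and I(X; Z) = -1/2 log(1 - (rho1*rho2)^2).
rng = np.random.default_rng(0)
rho1, rho2 = 0.8, 0.6          # illustrative correlation coefficients
n = 1_000_000

y = rng.standard_normal(n)
x = rho1 * y + np.sqrt(1 - rho1**2) * rng.standard_normal(n)  # corr(X,Y)=rho1
z = rho2 * y + np.sqrt(1 - rho2**2) * rng.standard_normal(n)  # corr(Z,Y)=rho2

rho_xz = np.corrcoef(x, z)[0, 1]
print(rho_xz, rho1 * rho2)     # empirical vs. predicted correlation
print(-0.5 * np.log(1 - rho_xz**2),
      -0.5 * np.log(1 - (rho1 * rho2)**2))  # I(X; Z) in nats
```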
Exercise 9.2 (Johnson Jonaris GadElkarim)
Two-look Gaussian channel. Consider the ordinary Gaussian channel with two correlated looks at $X$, that is, $Y = (Y_1, Y_2)$, where
$$Y_1 = X + Z_1$$
$$Y_2 = X + Z_2$$
with a power constraint $P$ on $X$, and $(Z_1, Z_2) \sim \mathcal{N}_2(0, K)$, where
$$K = \begin{pmatrix} N & N\rho \\ N\rho & N \end{pmatrix}.$$
Find the capacity $C$ for
(a) $\rho = 1$
(b) $\rho = 0$
(c) $\rho = -1$
Solution. The capacity is
$$C = \max I(X; Y_1, Y_2)$$
$$I(X; Y_1, Y_2) = h(Y_1, Y_2) - h(Y_1, Y_2 \mid X) = h(Y_1, Y_2) - h(Z_1, Z_2)$$
$$h(Z_1, Z_2) = \tfrac{1}{2}\log\big((2\pi e)^2 |K|\big) = \tfrac{1}{2}\log\big((2\pi e)^2 N^2(1 - \rho^2)\big)$$
The mutual information is maximized when $Y_1, Y_2$ are jointly Gaussian, with covariance matrix
$$K_y = P\,\mathbf{1}\mathbf{1}^T + K_z,$$
where $\mathbf{1}\mathbf{1}^T$ is the $2 \times 2$ all-ones matrix (both looks contain the same $X$).
$$|K_y| = N^2(1 - \rho^2) + 2PN(1 - \rho)$$
Hence the capacity is
$$C = \tfrac{1}{2}\log\frac{|K_y|}{|K|} = \tfrac{1}{2}\log\Big(1 + \frac{2P}{N(1 + \rho)}\Big)$$
(a) $\rho = 1$: $C = \tfrac{1}{2}\log(1 + P/N)$
(b) $\rho = 0$: $C = \tfrac{1}{2}\log(1 + 2P/N)$
(c) $\rho = -1$: $C = \tfrac{1}{2}\log(1 + \infty) = \infty$. With $\rho = -1$ we have $Z_1 = -Z_2$, so $(Y_1 + Y_2)/2 = X$ exactly and the noise is cancelled completely.
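A small numerical check (our own sketch with arbitrary test values of $P$ and $N$) that the determinant identity and the capacity formula agree:

```python
import numpy as np

# Verify |K_y| = N^2(1-rho^2) + 2PN(1-rho) and evaluate
# C = 1/2 log2(1 + 2P/(N(1+rho))) for a few correlations.
P, N = 4.0, 1.0                 # illustrative test values
for rho in (1.0, 0.5, 0.0, -0.5):
    Kz = np.array([[N, rho * N], [rho * N, N]])
    Ky = P * np.ones((2, 2)) + Kz          # X is common to both looks
    det_direct = np.linalg.det(Ky)
    det_formula = N**2 * (1 - rho**2) + 2 * P * N * (1 - rho)
    C = 0.5 * np.log2(1 + 2 * P / (N * (1 + rho)))
    print(rho, det_direct, det_formula, C)
```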
Exercise 9.4 (Shu Wang)
Exponential noise channel. $Y_i = X_i + Z_i$, where $Z_i$ is i.i.d. exponentially distributed noise with mean $\mu$. Assume that we have a mean constraint on the signal (i.e., $E[X_i] \le \lambda$). Show that the capacity of such a channel is $C = \log\big(1 + \frac{\lambda}{\mu}\big)$.
Solution
From the textbook, we maximize the entropy $h(f)$ over all probability densities $f$ satisfying
1. $f(x) \ge 0$, with equality outside the support set $S$
2. $\int_S f(x)\,dx = 1$
3. $\int_S f(x)\,r_i(x)\,dx = \alpha_i$ for $1 \le i \le m$
In this problem the support set is $S = [0, \infty)$ and the constraint is on the mean. Over this support set a Gaussian density cannot satisfy $\int_S f(x)\,dx = 1$, so the Gaussian cannot maximize the entropy here. But an exponential density $f_e(x) = a e^{-ax}$ does satisfy $\int_S f_e(x)\,dx = 1$. So $h(X)$ is maximized when $f(x)$ is exponential, which we can prove as follows:
Let $f_e(x) = a e^{-ax}$ be the exponential density with the same mean as $f$. Then
$$0 \le D(f \,\|\, f_e) = \int f(x) \log\frac{f(x)}{a e^{-ax}}\,dx$$
$$= \int f(x)\big(\log f(x) - \log(a e^{-ax})\big)\,dx$$
$$= -h(X) - \int f(x)\log(a e^{-ax})\,dx$$
$$= -h(X) + h(X_e)$$
where the last step uses that $f$ and $f_e$ have the same mean, so $\int f \log f_e = \int f_e \log f_e = -h(X_e)$.
So $h(X_e) \ge h(X)$: among all densities on $[0, \infty)$ with a given mean, the exponential maximizes the entropy.
Because $E[X_i] \le \lambda$, $E[Y_i] = E[X_i + Z_i] = E[X_i] + E[Z_i] \le \lambda + \mu$. So
$$C = \max I(X_i; Y_i)$$
$$= \max\big[h(Y_i) - h(Y_i \mid X_i)\big]$$
$$= \max\big[h(Y_i) - h(X_i + Z_i \mid X_i)\big]$$
$$= \max\big[h(Y_i) - h(Z_i \mid X_i)\big]$$
$$= \max\big[h(Y_i) - h(Z_i)\big]$$
$$= h_e(Y_i) - h(Z_i)$$
where $h_e(Y_i)$ denotes the entropy of $Y_i$ when it is exponentially distributed, which by the maximum-entropy result above is the largest achievable.
Because $E[Z_i] = \mu$, $f(z) = \frac{1}{\mu} e^{-z/\mu}$. Writing $a = \frac{1}{\mu}$,
$$h(Z_i) = -\int a e^{-az} \log(a e^{-az})\,dz = \log_2\Big(\frac{e}{a}\Big) = \log_2(e\mu)$$
According to the analysis above, $h(Y_i) \le \log_2\big(e(\lambda + \mu)\big)$. So $C \le \log_2\big(1 + \frac{\lambda}{\mu}\big)$.
If we want $C = \log_2\big(1 + \frac{\lambda}{\mu}\big)$, we must find an input $X_i^*$ that makes $X_i + Z_i$ exponentially distributed with mean $\lambda + \mu$. We now show how to find it.
The characteristic function of $Z$ with mean $\mu$ is
$$\phi_Z(\omega) = E[e^{j\omega Z}] = \int_0^\infty e^{j\omega z}\,\frac{1}{\mu} e^{-z/\mu}\,dz = \frac{1}{\mu}\cdot\frac{1}{\frac{1}{\mu} - j\omega} = \frac{1}{1 - j\omega\mu}$$
Similarly, $\phi_Y(\omega) = \frac{1}{1 - j\omega(\lambda + \mu)}$. Because $\phi_Y(\omega) = \phi_X(\omega)\,\phi_Z(\omega)$, we get
$$\phi_X(\omega) = \frac{\phi_Y(\omega)}{\phi_Z(\omega)} = \frac{1 - j\omega\mu}{1 - j\omega(\lambda + \mu)}$$
$$= \frac{1}{1 - j\omega(\lambda + \mu)} - \frac{j\omega\mu}{1 - j\omega(\lambda + \mu)}$$
$$= \frac{1}{1 - j\omega(\lambda + \mu)} - \frac{\mu}{\lambda + \mu}\Big(\frac{1}{1 - j\omega(\lambda + \mu)} - 1\Big)$$
$$= \frac{\lambda}{\lambda + \mu}\cdot\frac{1}{1 - j\omega(\lambda + \mu)} + \frac{\mu}{\lambda + \mu}$$
Taking the inverse transform, we get
$$f(x) = \frac{\lambda}{\lambda + \mu}\cdot\frac{1}{\lambda + \mu}\,e^{-x/(\lambda + \mu)} + \frac{\mu}{\lambda + \mu}\,\delta(x), \qquad x \ge 0$$
So we can find an $X$ that achieves $C = \log_2\big(1 + \frac{\lambda}{\mu}\big)$.
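A simulation sketch (our own check, with arbitrary illustrative values of $\lambda$ and $\mu$) confirming that this mixture input makes $Y$ exponential with mean $\lambda + \mu$ while meeting the mean constraint:

```python
import numpy as np
from scipy import stats

# Sample X from the mixture found above: a point mass at 0 with weight
# mu/(lam+mu), plus an Exp(mean lam+mu) part with weight lam/(lam+mu).
rng = np.random.default_rng(0)
lam, mu, n = 3.0, 1.0, 200_000         # illustrative values

atom = rng.random(n) < mu / (lam + mu)
x = np.where(atom, 0.0, rng.exponential(lam + mu, n))
y = x + rng.exponential(mu, n)         # Z ~ Exp(mean mu)

print(x.mean())                        # ~ lam, so E[X] <= lam is met
# Kolmogorov-Smirnov test: Y should be Exp with scale (mean) lam + mu.
print(stats.kstest(y, 'expon', args=(0, lam + mu)))
```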
Exercise 9.8 (Johnson Jonaris GadElkarim)
Parallel Gaussian channels. Consider the following parallel Gaussian channel [see the figures in the book], where $Z_1 \sim \mathcal{N}(0, N_1)$ and $Z_2 \sim \mathcal{N}(0, N_2)$ are independent Gaussian random variables and $Y_i = X_i + Z_i$. We wish to allocate power to the two parallel channels. Let $\beta_1$ and $\beta_2$ be fixed. Consider a total cost constraint $\beta_1 P_1 + \beta_2 P_2 \le \beta$, where $P_i$ is the power allocated to the $i$th channel and $\beta_i$ is the cost per unit power in that channel. Thus, $P_1 \ge 0$ and $P_2 \ge 0$ can be chosen subject to the cost constraint $\beta$.
(a) For what value of $\beta$ does the channel stop acting like a single channel and start acting like a pair of channels?
(b) Evaluate the capacity and find $P_1$ and $P_2$ that achieve capacity for $\beta_1 = 1$, $\beta_2 = 2$, $N_1 = 3$, $N_2 = 2$, and $\beta = 10$.
Solution. (a) We have the power budget $\beta_1 P_1 + \beta_2 P_2 \le \beta$. The Lagrangian is
$$J(P_1, P_2) = \sum_{i=1}^{2} \tfrac{1}{2}\log(1 + P_i/N_i) + \lambda(\beta_1 P_1 + \beta_2 P_2)$$
Differentiating with respect to $P_i$, equating to zero, and noting that the powers must be nonnegative, we get
$$\beta_i P_i = (\nu - \beta_i N_i)^+$$
where the water level $\nu = -\frac{1}{2\lambda}$ is chosen to meet the cost constraint. For the two channels to act as a pair of channels, the budget must fill the gap between the levels $\beta_1 N_1$ and $\beta_2 N_2$, i.e.
$$\beta \ge |\beta_2 N_2 - \beta_1 N_1|$$
Below this threshold only the channel with the smaller $\beta_i N_i$ receives power (a single channel); at or above it, both channels are active (a pair of channels).
(b)
$$C = \tfrac{1}{2}\big[\log(1 + P_1/N_1) + \log(1 + P_2/N_2)\big]$$
$$(1)P_1 + (2)P_2 = 10$$
Since $\beta_1 N_1 = 3$ and $\beta_2 N_2 = 4$, we first fill the gap with 1 unit of budget (going to the first channel); the remaining 9 units of budget are then divided equally between the two channels, giving $\beta_1 P_1 = 1 + 4.5 = 5.5$ and $\beta_2 P_2 = 4.5$.
Hence $P_1 = 5.5$, $P_2 = 4.5/2 = 2.25$, and $C = \tfrac{1}{2}\big[\log_2(1 + 5.5/3) + \log_2(1 + 2.25/2)\big] \approx 1.30$ bits.
Exercise 9.9 (Matteo Carminati)
Vector Gaussian channel. Consider the vector Gaussian noise channel
$$Y = X + Z$$
where $X = (X_1, X_2, X_3)$, $Z = (Z_1, Z_2, Z_3)$, $Y = (Y_1, Y_2, Y_3)$, $E\|X\|^2 \le P$, and
$$Z \sim \mathcal{N}\left(0, \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 2 \end{pmatrix}\right).$$
Find the capacity. The answer may be surprising.
Solution
The channel presented in this exercise can be modelled as a group of three channels with colored Gaussian noise. To optimize the distribution of power among the channels, we would normally apply the water-filling algorithm described in the book. First, the covariance matrix of the noise must be diagonalized; to do that, its eigenvalues and eigenvectors must be computed:
$$p(x) = |K_Z - xI| = \begin{vmatrix} 1 - x & 0 & 1 \\ 0 & 1 - x & 1 \\ 1 & 1 & 2 - x \end{vmatrix} = (1 - x)\big((1 - x)(2 - x) - 1\big) - (1 - x) = (1 - x)\,x\,(x - 3)$$
Thus, the eigenvalues of this matrix are $\lambda_1 = 1$, $\lambda_2 = 0$, and $\lambda_3 = 3$. Since one of the eigenvalues is 0, and since the determinant of a square matrix equals the product of its eigenvalues, the determinant of $K_Z$ is 0. This means that the columns (or rows, since the matrix is symmetric) are linearly dependent: in particular, the last column (row) is the sum of the first two columns (rows).
Since $K_Z$ is a covariance matrix, this implies that $Z_3$ can be rewritten as a function of $Z_1$ and $Z_2$: $Z_3 = Z_1 + Z_2$. As seen in Exercise 9.2, this fact can be exploited to nullify the effect of the noise. In particular, if the same signal $X = X_1 = X_2 = X_3$ is sent over the three channels, its value can be recovered perfectly by subtracting $Y_3$ from the sum of $Y_1$ and $Y_2$:
$$Y_1 + Y_2 - Y_3 = X_1 + Z_1 + X_2 + Z_2 - X_3 - Z_3 = X + Z_1 + X + Z_2 - X - Z_1 - Z_2 = X$$
Thus, as in Exercise 9.2, the capacity of the channel can be considered infinite!
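A short numerical check (our own sketch) of the singular covariance and the noise-cancelling combination:

```python
import numpy as np

# Check that K_Z has eigenvalues {0, 1, 3} (so |K_Z| = 0) and that
# Y1 + Y2 - Y3 returns the common input X exactly.
Kz = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0],
               [1.0, 1.0, 2.0]])
print(np.linalg.eigvalsh(Kz))          # -> [0. 1. 3.]

rng = np.random.default_rng(0)
x = 1.234                              # same signal on all three inputs
z = rng.multivariate_normal(np.zeros(3), Kz, size=5)
y = x + z                              # Y_i = X + Z_i
print(y[:, 0] + y[:, 1] - y[:, 2])     # -> x (up to float error)
```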