
ECNG 6703 Principles of Communications Homework 3 Solutions

1. a) The Huffman tree is shown below.

The average codeword length is R̄ = 0.8·1 + 0.1·2 + 0.05·3 + (0.01 + 0.04)·4 = 1.35, and the minimum possible average codeword length, given by the entropy, is H(X) = -Σ_i p_i log2 p_i = 1.058. Obviously H(X) < R̄, as expected.

b) The capacity of the channel is C = 1 - H_b(0.3) = 0.1187. Since H(X) > C it is not possible to transmit the source reliably.

c) No, since R̄ > C.
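A quick Matlab check of these numbers (a sketch; the symbol probabilities and codeword lengths are the ones used in the computation above):

p = [0.8 0.1 0.05 0.01 0.04];            % source symbol probabilities
len = [1 2 3 4 4];                       % Huffman codeword lengths from the tree
Rbar = sum(p.*len)                       % average codeword length, 1.35
H = -sum(p.*log2(p))                     % source entropy, about 1.058 bits/symbol
C = 1 + 0.3*log2(0.3) + 0.7*log2(0.7)    % channel capacity 1 - Hb(0.3) = 0.1187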

2. a) Example Matlab code and the resulting plot are provided below.

% Gaussian rate-distortion function R(D) = (1/2)*log2(sigma^2/D), unit variance
D = 0.05:0.05:1;                            % normalized distortion D/sigma^2
figure; plot(D, 0.5*log2(1./D)); grid on;   % base-2 logarithm, since R(D) is in bits
xlabel('D/\sigma^2'); ylabel('R(D)');

b) Consider an analog source which emits a waveform x(t) that is a realization of a stochastic process X(t). For X(t) bandlimited, the sampling theorem allows representation of X(t) by a sequence of uniform samples taken at or above the Nyquist rate. While this allows perfect signal reconstruction in theory, it is impossible to represent the continuous amplitude with a finite number of bits (i.e., the entropy is infinite). In digital systems the continuous amplitude must be quantized to be represented with a reasonable number of bits, and through quantization some information about the signal amplitude is lost. Thus there is a tradeoff between the number of bits required for a given resolution and the amount of distortion introduced through quantization; put another way, a tradeoff between the data rate and the signal fidelity. This tradeoff is captured by the rate distortion function, which gives, for any analog source, the lower bound on the source rate achievable for a given distortion. With reference to the figure, the number of bits required for a given level of distortion decreases as the allowable distortion increases.

c) Solving (1/2) log2(σ²/D) = 1 - H_b(ε) we get D = σ² 2^(-2(1 - H_b(ε))).

d) In this case we solve (1/2) log2(σ²/D) = (1/2) log2(1 + P/σ_n²). Hence, D = σ²/(1 + P/σ_n²).
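A minimal Matlab sketch evaluating both expressions (the values of ε, P and the variances below are illustrative assumptions, not taken from the problem statement):

sigma2 = 1; epsi = 0.1; P = 4; sigman2 = 1;   % assumed example values
Hb = @(x) -x.*log2(x) - (1-x).*log2(1-x);     % binary entropy function
D_c = sigma2 * 2^(-2*(1 - Hb(epsi)))          % distortion over the BSC, part c)
D_d = sigma2 / (1 + P/sigman2)                % distortion over the AWGN channel, part d)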

3. a) For a Gaussian random variable of zero mean and variance σ², the rate-distortion function is given by R(D) = (1/2) log2(σ²/D). Hence, the upper bound is satisfied with equality. For the lower bound, recall that H(X) = (1/2) log2(2πe σ²). Thus, H(X) - (1/2) log2(2πe D) = (1/2) log2(2πe σ²) - (1/2) log2(2πe D) = (1/2) log2(2πe σ²/(2πe D)) = (1/2) log2(σ²/D) = R(D).

Thus the lower and upper bounds coincide.

b) The differential entropy of a Laplacian source with parameter λ is H(X) = 1 + ln(2λ). The variance of the Laplacian distribution is σ² = ∫ x² (1/(2λ)) e^(-|x|/λ) dx = 2λ². Hence, with σ² = 1, we obtain λ = 1/√2 and H(X) = 1 + ln(2λ) = 1 + ln(√2) = 1.3466 nats/symbol ≈ 1.94 bits/symbol. A plot of the lower and upper bound of R(D) is given next.
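A minimal Matlab sketch of how such a plot can be generated (assuming unit variance; part c) below is covered by substituting the corresponding differential entropy):

h = 1 + log(sqrt(2));                                 % Laplacian differential entropy in nats, sigma^2 = 1
D = 0.01:0.01:1;
upper = 0.5*log2(1./D);                               % Gaussian upper bound (1/2)log2(sigma^2/D)
lower = max(h/log(2) - 0.5*log2(2*pi*exp(1)*D), 0);   % Shannon lower bound, converted to bits
figure; plot(D, upper, D, lower); grid on;
xlabel('D'); ylabel('R(D) [bits]'); legend('upper bound','lower bound');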

c) The variance of the triangular distribution is given by

σ² = ∫_{-λ}^{0} x² (x + λ)/λ² dx + ∫_{0}^{λ} x² (λ - x)/λ² dx = λ²/6.

Hence, with σ² = 1, we obtain λ = √6 and H(X) = ln(6) - ln(√6) + 1/2 = 1.3959 nats ≈ 2.01 bits/source output. A plot of the lower and upper bound of R(D) is given next.

d) A similar process can be used for the uniform distribution.

4. The channels are illustrated in the next figure. For channel A, by symmetry of the A and B inputs, we may assume P(X = A) = P(X = B) = p and P(X = C) = 1 - 2p, for 0 ≤ p ≤ 0.5, and regardless of the value of p we will have H(Y) = 1. We note that H(Y|X = A) = H(Y|X = B) = 0 and H(Y|X = C) = 1, so H(Y|X) = 1 - 2p and I(X;Y) = H(Y) - H(Y|X) = 1 - (1 - 2p) = 2p, which is maximized for p = 0.5. Therefore we must choose A and B each with probability 0.5, and we should not use C. The capacity is 1 bit/channel use. For channel B, the two outputs B and C are identical and can be combined, resulting in a noiseless channel with capacity 1. For the cascade of the two channels the same arguments give C = 1.
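A small Matlab sketch of the channel-A argument; the sweep over p is only illustrative and confirms that I(X;Y) = 2p peaks at p = 0.5:

p = 0:0.01:0.5;                 % P(X=A) = P(X=B) = p, P(X=C) = 1-2p
HY = ones(size(p));             % H(Y) = 1 for every p, by symmetry
HYgivenX = 1 - 2*p;             % only the input C produces an uncertain output
I = HY - HYgivenX;              % mutual information I(X;Y) = 2p
[C, idx] = max(I);              % capacity 1, attained at p = 0.5
fprintf('C = %g at p = %g\n', C, p(idx));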
5. a) To have a distortion of 1 we have to maintain a rate of at least R(D) = (1/2) log2(σ²/D) with D = 1, or R = (1/2) log2(4) = 1 bit per sample, which is equivalent to 8000 bits per second. This means that the channel capacity must be at least 8000 bits per second, or 8000 = 4000 log2(1 + P/(N0 W)), resulting in P/(N0 W) = 3.

b) C = W log2(1 + Pav/(W N0)) = 25.9 kbps.
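A quick Matlab check of part a) (a sketch; the SNR for part b) depends on values not reproduced in the text above, so only part a) is verified):

W = 4000;                       % channel bandwidth in Hz (Nyquist rate 8000 samples/s)
R = 0.5*log2(4/1) * 8000        % required source rate: 1 bit/sample times 8000 samples/s
SNR = 2^(R/W) - 1               % P/(N0*W) needed so that W*log2(1+SNR) = 8000, equals 3
C = W*log2(1 + SNR)             % equals 8000 bits per second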

6. a) The entropy of the source is H(X) = H_b(0.3) = 0.8813 and the channel capacity is C = 1 - H_b(0.1) = 0.531. If the source is directly connected to the channel, then the probability of error at the receiver is P(error) = P(X = 0)P(Y = 1|X = 0) + P(X = 1)P(Y = 0|X = 1) = 0.3·0.1 + 0.7·0.1 = 0.1.

b) Since H(X) > C, some distortion at the output of the channel is inevitable. To find the minimum distortion we set R(D) = C. For a Bernoulli(p) source,

R(D) = H_b(p) - H_b(D),  0 ≤ D ≤ min(p, 1 - p)
R(D) = 0,                otherwise

and therefore R(D) = H_b(p) - H_b(D) = H_b(0.3) - H_b(D). If we let R(D) = C = 0.531, we obtain H_b(D) = 0.3503, so D = min(0.07, 0.93) = 0.07. The probability of error therefore satisfies P(error) ≥ D = 0.07.
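A minimal Matlab sketch of the last step, solving H_b(D) = H_b(0.3) - C numerically; fzero and the bracketing interval are illustrative choices:

Hb = @(x) -x.*log2(x) - (1-x).*log2(1-x);     % binary entropy function
C = 1 - Hb(0.1);                              % channel capacity, 0.531
target = Hb(0.3) - C;                         % required Hb(D), about 0.3503
D = fzero(@(d) Hb(d) - target, [1e-6 0.5])    % smaller root, about 0.07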

c) For reliable transmission we must have H(X) ≤ C = 1 - H_b(ε). Hence, with H(X) = 0.8813, we obtain 0.8813 ≤ 1 - H_b(ε), which gives ε < 0.0164 or ε > 0.984.

7. Channel 1: Noiseless. Channel 2: Lossless, noisy. Channel 3: Deterministic.

8. (a), (b) Channel 1: Noiseless. Channel 2: Noisy. Channel 3: Noisy.

(c) The channel capacity is given by C1 = max_{P(x)} [H(Y) - H(Y|X)]. But H(Y|X) = 0, so C1 = max_{P(x)} H(Y) = 1, which is achieved for P(0) = P(1) = 1/2.

(d) Let q be the probability of the input symbol 0, and thus (1 - q) the probability of the input symbol 1. Then H(Y|X) = Σ_x P(x) H(Y|X = x) = q H(Y|X = 0) + (1 - q) H(Y|X = 1) = (1 - q) H_b(0.5) = 1 - q. The PMF of the output symbols is given by
P(Y = y1) = q P(Y = y1|X = 0) + (1 - q) P(Y = y1|X = 1) = 0.5 + 0.5q,
P(Y = y2) = q P(Y = y2|X = 0) + (1 - q) P(Y = y2|X = 1) = 0.5 - 0.5q.
Hence, C2 = max_q [H_b(0.5 + 0.5q) - (1 - q)]. Set dC2/dq = 0 to determine the value of q that achieves the maximum. This gives q = 0.6, and the channel capacity is C2 = H_b(1/5) - 2/5 = 0.3219.

(e) The transition probability matrix of the third channel can be written as

Q = (1/2) Q1 + (1/2) Q2,

where Q1, Q2 are the transition probability matrices of channel 1 and channel 2 respectively. We have assumed that the output space of both channels has been augmented by adding two new symbols so that the matrices Q, Q1 and Q2 have the same size; the transition probabilities to these newly added output symbols are equal to zero. Using the fact that I(p; Q) is a convex function of Q, we obtain

C = max_p I(X;Y) = max_p I(p; Q) = max_p I(p; (1/2)Q1 + (1/2)Q2) ≤ (1/2) max_p I(p; Q1) + (1/2) max_p I(p; Q2) = (1/2)(C1 + C2).
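A small Matlab sketch that confirms C2 by a brute-force search over q (the grid resolution is arbitrary):

Hb = @(x) -x.*log2(x) - (1-x).*log2(1-x);    % binary entropy function
q = 0.001:0.001:0.999;                       % candidate input probabilities P(X=0)
I = Hb(0.5 + 0.5*q) - (1 - q);               % I(X;Y) for the second channel
[C2, idx] = max(I);                          % maximum is 0.3219 at q = 0.6
fprintf('C2 = %.4f at q = %.3f\n', C2, q(idx));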

9. a) The overall channel is a binary symmetric channel with crossover probability p. To find p, note that an error occurs if an odd number of channels produce an error. Thus

p = Σ_{k odd} (n choose k) ε^k (1 - ε)^(n-k).

It can be shown that p = (1/2)[1 - (1 - 2ε)^n]. Thus lim_{n→∞} p = 1/2. Given that C = 1 - H_b(p), it is thus seen that in the limit, C = 0.

b) You can evaluate the summation in part a) for various values of n: calculate p for increasing n, use this to determine the capacity, and then plot the capacity versus n. The results should confirm your analytical evaluation.
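A minimal Matlab sketch of part b); the value of ε and the range of n are illustrative assumptions:

eps0 = 0.01;                                  % assumed per-link crossover probability
n = 1:100;                                    % number of cascaded channels
p = 0.5*(1 - (1 - 2*eps0).^n);                % overall crossover probability
Hb = @(x) -x.*log2(x) - (1-x).*log2(1-x);     % binary entropy function
C = 1 - Hb(p);                                % capacity of the cascade
figure; plot(n, C); grid on;
xlabel('n'); ylabel('C [bits/channel use]');  % C decays toward 0 as n grows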
10. The capacity of the additive white Gaussian noise channel is given by C = (1/2) log2(1 + P/(N0 W)). For the nonwhite Gaussian noise channel, although the noise power is equal to the noise power in the white Gaussian noise channel, the capacity is higher. The reason is that since the noise samples are correlated, knowledge of the previous noise samples provides partial information about the future noise samples and therefore reduces their effective variance.
