
EECE4572 Communication Systems I

Summer 2010 Prof. Salehi

MIDTERM SOLUTIONS
Problem 1: The power spectral density of a WSS information source is shown below (the unit of power spectral density is Watts/Hz). The maximum amplitude of this signal is 200.

[Figure: S_X(f) in Watts/Hz, an even trapezoid: equal to 2 for |f| ≤ 2000 Hz, falling linearly to 0 at |f| = 3000 Hz.]

1. What is the power in this process?
2. Assume that this signal is transmitted using a uniform PCM system with 512 quantization levels. What is the resulting SQNR in decibels, and what is the minimum required transmission bandwidth if a guard band of 1 kHz is used in sampling the signal?
3. If the available transmission bandwidth is 47 kHz, design a PCM system that achieves the highest possible SQNR. What is the resulting SQNR and the guard band?

Solution:

1. P = ∫ S_X(f) df = 4000 × 2 + 2 × (1/2) × 2 × 1000 = 10000 Watts.

2. ν = log2 N = log2 512 = 9, then SQNR = 4.8 + 6ν + 10 log10(10000/200²) ≈ 52.8 dB, and B_T = ν(W + W_G/2) = 9 × (3000 + 500) = 31500 Hz.

3. We need ν(W + W_G/2) ≤ 47000, i.e., ν(3000 + W_G/2) ≤ 47000. The largest integer ν satisfying this is ν = 15, which gives W_G ≈ 266.7 Hz and SQNR = 4.8 + 6 × 15 + 10 log10(10000/200²) ≈ 88.8 dB.

Problem 2: 35 points
In the block diagram shown below, X(t) denotes a zero-mean, white, WSS (wide-sense stationary) random process with power spectral density S_X(f) = N_0/2.

[Block diagram: X(t) is applied to a system that forms Y(t) = X(t) + 2 dX(t)/dt (a direct path summed with a differentiator of gain 2); Y(t) then passes through the LPF with passband [−W, W], producing Z(t).]

The block denoted by LPF represents an ideal lowpass filter that passes all frequencies in the range −W to W and blocks all other frequencies. Answer the following questions; your answers will be in terms of N_0.

1. What are the power spectral density and the mean of Y(t)?
2. What is the power spectral density of Z(t)?
3. Is Z(t) a WSS random process? Why?
4. What is the variance of Z(t) if W = 4?
5. What is the power in Y(t)?

Solution:

1. h(t) = δ(t) + 2δ′(t), hence H(f) = F[h(t)] = 1 + j4πf. We have m_Y = m_X H(0) = 0 and S_Y(f) = S_X(f)|H(f)|² = (N_0/2)|1 + j4πf|² = (N_0/2)(1 + 16π²f²).

2. For the LPF, H_1(f) = Π(f/2W), hence S_Z(f) = S_Y(f)|H_1(f)|² = (N_0/2)(1 + 16π²f²) Π(f/2W).

3. Since the input is WSS and the system is LTI, the output is WSS.

4. E[Z²(t)] = E[Z(t)Z(t)] = R_Z(0) = ∫_{−4}^{4} (N_0/2)(1 + 16π²f²) df = N_0 (4 + 1024π²/3).

5. P_Y = ∫_{−∞}^{∞} S_Y(f) df = ∫_{−∞}^{∞} (N_0/2)(1 + 16π²f²) df = ∞.
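As a numerical sanity check of part 4, the integral of S_Y over [−4, 4] can be evaluated directly. This is a Python sketch (not part of the course materials); N_0 is set to 1 since the result scales linearly with N_0:

```python
import math

N0 = 1.0  # result scales linearly with N0
W = 4

def S_Y(f):
    # S_Y(f) = (N0/2)(1 + 16 pi^2 f^2), from part 1
    return (N0 / 2) * (1 + 16 * math.pi**2 * f**2)

# Var[Z] = integral of S_Y over [-W, W], via the trapezoidal rule
n = 100000
h = 2 * W / n
var_z = h * (0.5 * (S_Y(-W) + S_Y(W)) + sum(S_Y(-W + k * h) for k in range(1, n)))

closed_form = N0 * (W + 16 * math.pi**2 * W**3 / 3)  # N0(4 + 1024*pi^2/3) for W = 4
print(round(var_z, 2), round(closed_form, 2))        # both ≈ 3372.8 for N0 = 1
```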

Problem 3: 35 points
A discrete memoryless source X has the alphabet X = {x1, x2, x3, x4, x5, x6}, with corresponding probabilities 1/16, 1/4, 1/8, 1/4, 1/16, 1/4.

1. What is the entropy of this source?
2. Design a Huffman code for this source. What is the average codeword length of the Huffman code?
3. Can you design a more efficient Huffman code by using the second extension of this source (i.e., designing a Huffman code for sequences of two outputs)? Why?
4. If you are asked to assign new probabilities to X (different from the ones given above) such that the entropy is maximized, what probabilities would you assign? What is the resulting entropy?

Solution:

1. H(X) = −Σ_{i=1}^{6} p_i log2 p_i = −3 × (1/4) log2 (1/4) − (1/8) log2 (1/8) − 2 × (1/16) log2 (1/16) = 2.375 bits/symbol.

2. Designing the Huffman code gives codewords {1110, 00, 110, 01, 1111, 10} (or some equivalent code) with R̄ = Σ p_i l_i = 2.375 binary symbols/source symbol.

3. Since R̄ = H(X) already, no improvement is possible.

4. The equiprobable distribution has the highest entropy, hence p_i = 1/6 and H(X) = log2 6 ≈ 2.585.
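The Huffman construction in part 2 can be reproduced with a short script. This is our own Python sketch (`huffman_lengths` is a hypothetical helper, not the course's huffman.m); it merges the two least probable nodes repeatedly, adding one bit to every symbol inside a merged group:

```python
import heapq

def huffman_lengths(probs):
    """Return the codeword lengths of a binary Huffman code for the given pmf."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1          # each merge adds one bit to its members
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

p = [1/16, 1/4, 1/8, 1/4, 1/16, 1/4]
L = huffman_lengths(p)
avg = sum(pi * li for pi, li in zip(p, L))
print(avg)  # 2.375, equal to H(X) since the pmf is dyadic
```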


EECE4572 Communications I

Summer 2012 Prof. Salehi

Homework 1 Solution
Note: You need to use Fourier transform properties and the F.T. table in this HW. To prepare for this HW read Chapter 2.

Problem 1
1. Using the scaling, shift, and modulation properties of the F.T., determine the F.T. of x(t), where
x(t) = cos(πt) for 0 ≤ t ≤ 4, and 0 otherwise.
2. Derive the magnitude spectrum of this signal using Matlab and plot it (the magnitude spectrum is the magnitude of the F.T. of a signal, i.e., |X(f)|). In your HW solutions include both the Matlab code and the resulting plot. Is your plot symmetric (even)? Why? (Note: If you do not know how to use Matlab to find the F.T., look at Chapter 1 of the recommended Matlab book, in particular Illustrative Problem 1.5 on page 21. The Matlab fundamentals handout posted on the Blackboard site is also useful to refresh your memory.)
3. Now let x1(t) and x2(t) be defined as
x1(t) = cos(2π f0 t) for 0 ≤ t ≤ 4, and 0 otherwise;
x2(t) = cos(πt) for 0 ≤ t ≤ T, and 0 otherwise.
Note that in x1(t) the width of the pulse is kept constant at 4 but the frequency f0 can change. In x2(t) the width of the pulse can change but the frequency is kept at 1/2. Plot |X1(f)| for f0 = 1, 2, 4 and |X2(f)| for the pulse durations T = 8, 16, and explain how changing f0 and T changes the magnitude spectrum of the signal.

Solution
Since the highest frequency is 4, we choose the parameter fs to be 20, which is well above twice the highest frequency. The frequency resolution df determines how accurate you want your graph to be; we choose df = 0.001. The maximum time is 16; we choose the representation from −32 to 32 to represent the signal precisely. The listing of the program is given below.

df=0.001; fs=20; ts=1/fs;
t=[-32:ts:32];
x1=zeros(size(t)); x1(641:721)=cos(pi*t(641:721));
x2=zeros(size(t)); x2(641:721)=cos(2*pi*t(641:721));
x3=zeros(size(t)); x3(641:721)=cos(4*pi*t(641:721));
x4=zeros(size(t)); x4(641:721)=cos(8*pi*t(641:721));
x5=zeros(size(t)); x5(641:801)=cos(pi*t(641:801));
x6=zeros(size(t)); x6(641:961)=cos(pi*t(641:961));
[X1,x11,df1]=fftseq(x1,ts,df);
[X2,x21,df2]=fftseq(x2,ts,df);
[X3,x31,df3]=fftseq(x3,ts,df);
[X4,x41,df4]=fftseq(x4,ts,df);
[X5,x51,df5]=fftseq(x5,ts,df);
[X6,x61,df6]=fftseq(x6,ts,df);
X11=X1/fs; X21=X2/fs; X31=X3/fs; X41=X4/fs; X51=X5/fs; X61=X6/fs;
f=[0:df1:df1*(length(x11)-1)]-fs/2;
plot(f,fftshift(abs(X11)))
plot(f,fftshift(abs(X21)))
plot(f,fftshift(abs(X31)))
plot(f,fftshift(abs(X41)))
plot(f,fftshift(abs(X51)))
plot(f,fftshift(abs(X61)))

The plots denoted by X1 through X6 are shown on the next page. X1 is the original plot for T = 4 and f0 = 1/2. The following table gives T and f0 for the various X's.

X    T    f0
X1   4    1/2
X2   4    1
X3   4    2
X4   4    4
X5   8    1/2
X6   16   1/2

As seen, increasing f0 moves the peaks away from zero and locates them at the corresponding frequencies; increasing T makes the peaks of the spectra higher and more impulse-like (note the vertical scales).
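The same behavior can be seen without the book's fftseq helper by approximating the Fourier integral directly. This is a Python sketch of our own (an assumption-free Riemann-sum approximation, not the course code), evaluated for the original pulse x(t) = cos(πt) on [0, 4]:

```python
import cmath, math

ts = 1 / 20                     # sampling interval, fs = 20 as in the listing
N = int(4 / ts) + 1
t = [k * ts for k in range(N)]  # x(t) is nonzero only on [0, 4]

def X(f):
    # Riemann-sum approximation of the Fourier integral of x(t) = cos(pi t), 0 <= t <= 4
    return ts * sum(math.cos(math.pi * tk) * cmath.exp(-2j * math.pi * f * tk) for tk in t)

# |X(f)| peaks near f0 = 1/2, where |X(f0)| is about T/2 = 2; it is small elsewhere
for f in [0.0, 0.5, 1.0, 2.0]:
    print(f, round(abs(X(f)), 3))
```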

[Figure: six panels showing the magnitude spectra |X1(f)| through |X6(f)| versus f from −10 to 10 Hz; only axis ticks and panel labels survived extraction.]

Problem 2
Problem 2.10, parts 2, 3.
Solution
2) Using the time-shift theorem, we have
F[x(t)] = F[Π(t − 3) + Π(t + 3)] = sinc(f) e^{−j6πf} + sinc(f) e^{j6πf} = 2 sinc(f) cos(6πf)
3) Using the time-scaling and time-shift theorems, we have
F[x(t)] = F[Λ(2t + 3) + Λ(3t − 2)] = F[Λ(2(t + 3/2)) + Λ(3(t − 2/3))] = (1/2) sinc²(f/2) e^{j3πf} + (1/3) sinc²(f/3) e^{−j4πf/3}

Problem 3
Problem 2.12, parts a, b, e.
Solution
a) We can write x(t) as x(t) = 2Π(t/4) − 2Λ(t/2). Then
F[x(t)] = F[2Π(t/4)] − F[2Λ(t/2)] = 8 sinc(4f) − 4 sinc²(2f)
b) x(t) = 2Π(t/4) − Λ(t), hence F[x(t)] = 8 sinc(4f) − sinc²(f)
e) We can write x(t) as x(t) = Λ(t + 1) + Λ(t) + Λ(t − 1). Hence,
X(f) = sinc²(f)(1 + e^{j2πf} + e^{−j2πf}) = sinc²(f)(1 + 2 cos(2πf))
Another approach is to note that x(t) = 2Λ(t/2) − Λ(t), hence X(f) = 4 sinc²(2f) − sinc²(f).

Problem 4
Problem 2.26, part 5.
Solution
5) Using the convolution theorem we obtain Y(f) = Π(f)Λ(f). Y(f) is shown below.

[Figure: Y(f), piecewise linear with peak 1 at f = 0 and breakpoints at f = ±1/2.]

We notice that we can write Y(f) = (1/2)Λ(f) + (1/2)Λ(2f), and hence y(t) = (1/2) sinc²(t) + (1/4) sinc²(t/2).

Problem 5
Problem 2.17.
Solution
Convolution theorem: F[x(t) ∗ y(t)] = F[x(t)] F[y(t)] = X(f)Y(f). Thus
sinc(t) ∗ sinc(t) = F⁻¹[F[sinc(t) ∗ sinc(t)]] = F⁻¹[F[sinc(t)] F[sinc(t)]] = F⁻¹[Π(f)Π(f)] = F⁻¹[Π(f)] = sinc(t)

Problem 6
Problem 4.10, parts 1, 2.
Solution
1) The random variable X is Gaussian with zero mean and variance σ² = 10⁻⁸, so σ = 10⁻⁴. Thus p(X > x) = Q(x/σ) and
p(X > 10⁻⁴) = Q(10⁻⁴/10⁻⁴) = Q(1) = 0.159
p(X > 4 × 10⁻⁴) = Q(4 × 10⁻⁴/10⁻⁴) = Q(4) = 3.17 × 10⁻⁵
p(−2 × 10⁻⁴ < X ≤ 10⁻⁴) = 1 − Q(1) − Q(2) = 0.8182
2) p(X > 10⁻⁴ | X > 0) = p(X > 10⁻⁴, X > 0)/p(X > 0) = p(X > 10⁻⁴)/p(X > 0) = 0.159/0.5 = 0.318
Problem 7
X is a Gaussian random variable with mean 3 and variance 4. Find the following probabilities.
1. P(X > 0).
2. P(X < 8).
3. P(−2 < X < 2).
4. P(4 < (X + 1)² < 16).
Solution
Obviously m = 3 and σ = 2. We have
1. P(X > 0) = Q((0 − 3)/2) = Q(−1.5) = 1 − Q(1.5) = 1 − 0.0668 = 0.9332.
2. P(X < 8) = Q((3 − 8)/2) = Q(−2.5) = 1 − Q(2.5) = 1 − 0.00621 = 0.99379.
3. P(−2 < X < 2) = Q((3 − 2)/2) − Q((3 − (−2))/2) = Q(0.5) − Q(2.5) = 0.308 − 0.0062 = 0.3018.
4. P(4 < (X + 1)² < 16) = P(2 < |X + 1| < 4) = P(2 < X + 1 < 4) + P(−4 < X + 1 < −2) = P(1 < X < 3) + P(−5 < X < −3). But P(1 < X < 3) = Q((3 − 3)/2) − Q((3 − 1)/2) = Q(0) − Q(1) = 0.5 − 0.158 = 0.342 and P(−5 < X < −3) = Q((3 − (−3))/2) − Q((3 − (−5))/2) = Q(3) − Q(4) = 0.00135 − 0.0000316 ≈ 0.00132. Therefore, P(4 < (X + 1)² < 16) = 0.342 + 0.00132 = 0.3433.
EECE4572 Communication Systems

Summer 2012 Prof. Salehi

Homework 2 Solution
Problem 1
Problem 4.57.
Solution
1) Y(t) = X(t) ∗ (δ(t) − δ(t − T)). Hence,
S_Y(f) = S_X(f)|H(f)|² = S_X(f)|1 − e^{−j2πfT}|² = 2 S_X(f)(1 − cos(2πfT))
2) Y(t) = X(t) ∗ (δ′(t) − δ(t)). Hence,
S_Y(f) = S_X(f)|H(f)|² = S_X(f)|j2πf − 1|² = S_X(f)(1 + 4π²f²)
3) Y(t) = X(t) ∗ (δ′(t) − δ(t − T)). Hence,
S_Y(f) = S_X(f)|H(f)|² = S_X(f)|j2πf − e^{−j2πfT}|² = S_X(f)(1 + 4π²f² + 4πf sin(2πfT))
Problem 2
Problem 6.9
Solution
The marginal probabilities are given by
p(X = 0) = Σ_k p(X = 0, Y = k) = p(X = 0, Y = 0) + p(X = 0, Y = 1) = 2/3
p(X = 1) = Σ_k p(X = 1, Y = k) = p(X = 1, Y = 1) = 1/3
p(Y = 0) = Σ_k p(X = k, Y = 0) = p(X = 0, Y = 0) = 1/3
p(Y = 1) = Σ_k p(X = k, Y = 1) = p(X = 0, Y = 1) + p(X = 1, Y = 1) = 2/3
Hence,
H(X) = −Σ_i p_i log2 p_i = −(1/3) log2 (1/3) − (2/3) log2 (2/3) = 0.9183
H(Y) = −(1/3) log2 (1/3) − (2/3) log2 (2/3) = 0.9183
H(X, Y) = −3 × (1/3) log2 (1/3) = 1.5850
H(X|Y) = H(X, Y) − H(Y) = 1.5850 − 0.9183 = 0.6667
H(Y|X) = H(X, Y) − H(X) = 1.5850 − 0.9183 = 0.6667

Problem 3
Problem 6.11
Solution
1) H(X) = −(.05 log2 .05 + .1 log2 .1 + .1 log2 .1 + .15 log2 .15 + .05 log2 .05 + .25 log2 .25 + .3 log2 .3) = 2.5282
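Both entropy computations can be checked numerically. A Python sketch (our own; for Problem 6.9 the joint pmf is taken to be the three equiprobable pairs (0,0), (0,1), (1,1) used in that solution):

```python
from math import log2

def H(probs):
    # entropy in bits of a pmf, skipping zero-probability entries
    return -sum(p * log2(p) for p in probs if p > 0)

# Problem 6.9: three equiprobable (X, Y) pairs
joint = {(0, 0): 1/3, (0, 1): 1/3, (1, 1): 1/3}
px = [sum(p for (x, _), p in joint.items() if x == v) for v in (0, 1)]
HX, HXY = H(px), H(joint.values())
print(round(HX, 4), round(HXY, 4), round(HXY - HX, 4))  # 0.9183  1.585  0.6667

# Problem 6.11, part 1
p = [.05, .1, .1, .15, .05, .25, .3]
print(round(H(p), 4))  # ≈ 2.5282
```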

2) After quantization, the new alphabet is B = {−4, 0, 4} and the corresponding symbol probabilities are given by
p(−4) = p(−5) + p(−3) = .05 + .1 = .15
p(0) = p(−1) + p(0) + p(1) = .1 + .15 + .05 = .3
p(4) = p(3) + p(5) = .25 + .3 = .55
Hence, H(Q(X)) = 1.4060. As observed, quantization decreases the entropy of the source.

Problem 4
Problem 6.20
Solution
From the discussion at the beginning of Section 6.2 it follows that the number of typical sequences of length n of a binary DMS producing the symbols 0 and 1 with probabilities p and 1 − p is approximately 2^{nH(p)}. With p = 0.3 and n = 10000, we will observe sequences having np = 3000 zeros and n(1 − p) = 7000 ones. Therefore,
# sequences with 3000 zeros ≈ 2^{nH(0.3)} = 2^{8813}
Another approach to the problem is via Stirling's approximation. In general the number of binary sequences of length n with k zeros and n − k ones is the binomial coefficient
C(n, k) = n! / (k! (n − k)!)
To get an estimate when n and k are large numbers we can use Stirling's approximation n! ≈ √(2πn) (n/e)^n. Hence,
# sequences with 3000 zeros = 10000!/(3000! 7000!) ≈ (1/√(2π × 2100)) × 10^{10000}/(3^{3000} 7^{7000})

Problem 5
Problem 6.21
Solution
1) The total number of typical sequences is approximately 2^{nH(X)}, where n = 1000 and
H(X) = −Σ_i p_i log2 p_i = 1.4855
Hence, # typical sequences ≈ 2^{1485.5}.
2) The number of all sequences of length n is N^n = 3^{1000}. Hence,
# typical sequences / # non-typical sequences ≈ 2^{nH(X)} / (N^n − 2^{nH(X)}) ≈ 1.145 × 10^{−30}
3) The typical sequences are almost equiprobable. Thus,
p(X = x, x typical) ≈ 2^{−nH(X)} = 2^{−1485.5}
4) Since the total number of sequences is 3^{1000}, the number of bits required to represent all sequences is ⌈log2(3^{1000})⌉ = ⌈1000 log2 3⌉ = 1585.
5) Since the number of typical sequences is 2^{nH(X)}, the number of bits required to represent them is ⌈log2(2^{nH(X)})⌉ = ⌈1000 H(X)⌉ = 1486.
6) The most probable sequence is the one with all a3's, that is {a3, a3, ..., a3}. The probability of this sequence is
p({a3, a3, ..., a3}) = (1/2)^{1000}
7) The most probable sequence of the previous question is not a typical sequence. In general, in a typical sequence symbol a1 is repeated approximately 1000 p(a1) = 200 times, symbol a2 approximately 1000 p(a2) = 300 times, and symbol a3 approximately 1000 p(a3) = 500 times.

Problem 3
(If you have difficulty with Matlab, read the Matlab Primer posted on the web page and pages 134-139 of the Matlab book, in particular Illustrative Problem 4.2, before doing this problem. The file huffman.m, in the zip file available from the Bb site, designs a Huffman code.)
A ternary source has three outputs a1, a2, and a3 with probabilities 0.01, 0.05, and 0.94.
1. Design Huffman codes for this source and its nth extension (i.e., taking n letters at a time), for n = 2, 3, 4, 5, and find the average codeword lengths per single source output in each case.
2. Plot the average codeword length per single source output found in part 1 as a function of n. On the same plot indicate the entropy of the source.
3. Repeat parts 1 and 2 for a ternary source with probabilities 0.3, 0.35, and 0.35 and notice the difference with the first source.
Solution
A listing of the program follows:

p=[0.01 0.05 0.94];
p2=kron(p,p); p3=kron(p2,p); p4=kron(p3,p); p5=kron(p4,p);
[h,l5]=huffman(p5);
[h,l4]=huffman(p4);
[h,l3]=huffman(p3);
[h,l2]=huffman(p2);
[h,l1]=huffman(p);
l2=l2/2; l3=l3/3; l4=l4/4; l5=l5/5;
l=[l1 l2 l3 l4 l5];
e=entropy(p);
e=e*ones(size(l));
n=[1:5];
plot(n,l,n,e);

Similarly for p = [0.3 0.35 0.35]. Plots for the two cases are given below.

[Figure: two plots, one per source, showing the average codeword length per source output versus n = 1, ..., 5, with the source entropy drawn as a horizontal reference line.]

EECE4572 Communications Systems

Summer 2012 Prof. Salehi

Homework 3 Solution
Problem 1
Problem 6.22
Solution
1) The entropy of the source is
H(X) = −Σ_{i=1}^{4} p(a_i) log2 p(a_i) = 1.8464 bits/output
2) The average codeword length is lower bounded by the entropy of the source for error-free reconstruction. Hence, the minimum possible average codeword length is H(X) = 1.8464.
3) A Huffman code for the source assigns the codewords 0 (p = .4), 10 (p = .3), 110 (p = .2), and 111 (p = .1). The average codeword length is
R̄(X) = 1 × .4 + 2 × .3 + 3 × (.2 + .1) = 1.9
4) For the second extension of the source the alphabet becomes A² = {(a1, a1), (a1, a2), ..., (a4, a4)} and the probability of each pair is the product of the probabilities of its components, e.g., p((a1, a2)) = p(a1) p(a2) = .02. A Huffman code for this source is given in the next table. The average codeword length in bits per pair of source outputs is
R̄2(X) = 3 × .49 + 4 × .32 + 5 × .16 + 6 × .03 = 3.7300
The average codeword length in bits per source output is R̄1(X) = R̄2(X)/2 = 1.865.
5) Huffman coding of the original source requires 1.9 bits per source output letter, whereas Huffman coding of the second extension of the source requires 1.865 bits per source output letter, and thus it is more efficient.

Codeword   Pair        Probability
000        (a4, a4)    .16
010        (a4, a3)    .12
100        (a3, a4)    .12
110        (a3, a3)    .09
0010       (a4, a2)    .08
0011       (a2, a4)    .08
0110       (a3, a2)    .06
1010       (a2, a3)    .06
1110       (a4, a1)    .04
01110      (a2, a2)    .04
01111      (a1, a4)    .04
10110      (a3, a1)    .03
10111      (a1, a3)    .03
11110      (a2, a1)    .02
111110     (a1, a2)    .02
111111     (a1, a1)    .01

Problem 2
Problem 6.46 (assume the source is zero-mean, then P_X = E[X²] = σ²)
Solution
1) From Table 6.2 we find that for a unit-variance Gaussian process, the optimal level spacing for a 16-level uniform quantizer is 0.3352. This number has to be multiplied by σ to provide the optimal level spacing when the variance of the process is σ². In our case σ² = 10 and Δ = √10 × 0.3352 = 1.060. The quantization levels are
x̂1 = −x̂16 = −7.5 × 1.060 = −7.950
x̂2 = −x̂15 = −6.5 × 1.060 = −6.890
x̂3 = −x̂14 = −5.5 × 1.060 = −5.830
x̂4 = −x̂13 = −4.5 × 1.060 = −4.770
x̂5 = −x̂12 = −3.5 × 1.060 = −3.710
x̂6 = −x̂11 = −2.5 × 1.060 = −2.650
x̂7 = −x̂10 = −1.5 × 1.060 = −1.590
x̂8 = −x̂9 = −0.5 × 1.060 = −0.530
The boundaries of the quantization regions are given by
a1 = −a15 = −7 × 1.060 = −7.420
a2 = −a14 = −6 × 1.060 = −6.360
a3 = −a13 = −5 × 1.060 = −5.300
a4 = −a12 = −4 × 1.060 = −4.240
a5 = −a11 = −3 × 1.060 = −3.180
a6 = −a10 = −2 × 1.060 = −2.120
a7 = −a9 = −1 × 1.060 = −1.060
a8 = 0
2) The resulting distortion is D = σ² × 0.01154 = 0.1154.
3) The entropy is available from Table 6.2. Nevertheless, to show how these values are found, we rederive the result here. The probabilities of the 16 outputs are
p(x̂1) = p(x̂16) = Q(a15/√10) = 0.0094
p(x̂2) = p(x̂15) = Q(a14/√10) − Q(a15/√10) = 0.0127
p(x̂3) = p(x̂14) = Q(a13/√10) − Q(a14/√10) = 0.0248
p(x̂4) = p(x̂13) = Q(a12/√10) − Q(a13/√10) = 0.0431
p(x̂5) = p(x̂12) = Q(a11/√10) − Q(a12/√10) = 0.0674
p(x̂6) = p(x̂11) = Q(a10/√10) − Q(a11/√10) = 0.0940
p(x̂7) = p(x̂10) = Q(a9/√10) − Q(a10/√10) = 0.1175
p(x̂8) = p(x̂9) = Q(a8/√10) − Q(a9/√10) = 0.1311
Hence, the entropy of the quantized source is
H(X̂) = −Σ_{i=1}^{16} p(x̂_i) log2 p(x̂_i) = 3.6025
This is the minimum number of bits per source symbol required to represent the quantized source.
5) The distortion of the 16-level optimal quantizer is D16 = σ² × 0.01154, whereas that of the 8-level optimal quantizer is D8 = σ² × 0.03744. Hence, the amount of increase in SQNR is
10 log10(SQNR16/SQNR8) = 10 log10(0.03744/0.01154) = 5.111 dB

Problem 3
Problem 6.52
Solution
1) P_X = E[X²(t)] = R_X(τ)|_{τ=0} = A²/2. Hence,
SQNR|dB = 4.8 + 6ν + 10 log10(P_X/x²max) = 4.8 + 6ν + 10 log10((A²/2)/A²) = 1.8 + 6ν
With SQNR = 60 dB, we obtain 60 = 1.8 + 6ν, i.e., ν = 9.7. The smallest integer larger than this is 10, hence the required ν = 10.
2) The minimum bandwidth requirement for transmission of a binary PCM signal is B_T = νW. Since ν = 10, we have B_T = 10W.
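The quantizer output probabilities and the entropy 3.6025 in Problem 2 can be reproduced numerically, since each probability is a difference of Q-function values. A Python sketch (our own; Q is computed from erfc):

```python
from math import erfc, sqrt, log2

def Q(x):
    # Gaussian tail probability
    return 0.5 * erfc(x / sqrt(2))

sigma = sqrt(10)
delta = 0.3352 * sigma                      # optimal 16-level uniform spacing
bounds = [i * delta for i in range(-7, 8)]  # boundaries a1..a15 = -7*delta ... 7*delta

edges = [float("-inf")] + bounds + [float("inf")]
probs = [Q(lo / sigma) - Q(hi / sigma) for lo, hi in zip(edges, edges[1:])]

H = -sum(p * log2(p) for p in probs)
print(round(sum(probs), 6), round(H, 3))  # total probability 1, entropy ≈ 3.6 bits
```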

Problem 4
Problem 6.53
Solution
1) P_X = E[X²(t)] = ∫ x² f(x) dx = ∫_{−2}^{0} x² (x + 2)/4 dx + ∫_{0}^{2} x² (−x + 2)/4 dx
= (1/4)[x⁴/4 + 2x³/3]_{−2}^{0} + (1/4)[−x⁴/4 + 2x³/3]_{0}^{2} = 1/3 + 1/3 = 2/3
Hence, since ν = log2 N = log2 32 = 5,
SQNR|dB = 4.8 + 6ν + 10 log10(P_X/x²max) = 4.8 + 6 × 5 + 10 log10((2/3)/2²) = 34.8 + 10 log10(1/6) ≈ 27 dB
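The value P_X = 2/3 and the 27 dB figure can be checked numerically. A Python sketch (our own; the triangular pdf f(x) = (2 − |x|)/4 on [−2, 2] is the one assumed above):

```python
import math

def f(x):
    # triangular pdf on [-2, 2]: f(x) = (2 - |x|)/4
    return (2 - abs(x)) / 4

# P_X = E[X^2] via a Riemann sum of x^2 f(x) over [-2, 2] (f vanishes at the endpoints)
n = 100000
h = 4 / n
PX = sum(h * ((-2 + k * h) ** 2) * f(-2 + k * h) for k in range(n + 1))
print(round(PX, 4))  # 2/3 ≈ 0.6667

nu = 5  # log2(32) bits per sample
sqnr = 4.8 + 6 * nu + 10 * math.log10(PX / 2**2)  # x_max = 2
print(round(sqnr, 1))  # ≈ 27.0 dB
```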

The resulting bit rate is R = ν fs = 2νW = 2 × 5 × 5000 = 50000 bits/sec.
2) If the available bandwidth of the channel is B_T = 40 kHz, then the maximum rate of transmission is obtained by using B_T = νW, from which ν = 40/5 = 8. In this case the highest achievable SQNR is
SQNR|dB = 4.8 + 6 × 8 + 10 log10(1/6) ≈ 45 dB
3) In the case of a guard band of 2 kHz, the sampling rate is fs = 2W + 2000 = 12 kHz. We use the relation B_T = ν fs/2 to find the maximum possible ν. The highest achievable value is ν = 2B_T/fs = 6.6667, and since ν should be an integer we set ν = 6. Thus, since ν has dropped from 8 to 6, the resulting SQNR drops 12 dB, from 45 dB to 33 dB.

Problem 5
Problem 6.55
Solution
1) R_X(t + τ, t) = E[X(t + τ)X(t)] = E[Y² cos(2πf0(t + τ) + Θ) cos(2πf0 t + Θ)]
= (1/2) E[Y²] E[cos(2πf0 τ) + cos(2πf0(2t + τ) + 2Θ)]
and since
E[cos(2πf0(2t + τ) + 2Θ)] = (1/2π) ∫_0^{2π} cos(2πf0(2t + τ) + 2θ) dθ = 0
we conclude that
R_X(τ) = (1/2) E[Y²] cos(2πf0 τ) = (3/2) cos(2πf0 τ)
The PSD is the F.T. of R_X(τ), therefore S_X(f) = (3/4) δ(f − f0) + (3/4) δ(f + f0). Obviously, P_X = R_X(0) = 3/2.
2) The range of values of X(t) is equal to the range of values of Y, hence xmax = 3, and
SQNR|dB = 4.8 + 6ν + 10 log10(P_X/x²max) = 4.8 + 6ν + 10 log10((3/2)/9) ≥ 40
From which ν ≥ 43/6 = 7.16, hence we choose ν = 8. The bandwidth of the process is W = f0, so the minimum bandwidth requirement of the PCM system is B_T = 8 f0.
3) If SQNR = 64 dB, then 4.8 + 6ν + 10 log10((3/2)/9) ≥ 64, resulting in ν = 12 and B_T = 12 f0.

Problem 6
Problem 6.59
Solution
The sampling rate is fs = 44100, meaning that we take 44100 samples per second. Each sample is quantized using 16 bits, so the total number of bits per second is 44100 × 16. For a music piece of duration 50 min = 3000 sec, the resulting number of bits per channel (left and right) is
44100 × 16 × 3000 = 2.1168 × 10⁹
and the overall number of bits is 2.1168 × 10⁹ × 2 = 4.2336 × 10⁹.