Problem 9.1

The amount of information gained by the occurrence of an event of probability p is

    I = \log_2(1/p)  bits

I varies with p as shown below:

(Figure: I in bits plotted versus p over the interval 0 <= p <= 1.)

Problem 9.2

Let the event S = s_k denote the emission of symbol s_k by the source. Hence,

    I(s_k) = \log_2(1/p_k)  bits

    Symbol s_k      s_0     s_1     s_2     s_3
    p_k             0.4     0.3     0.2     0.1
    I(s_k), bits    1.322   1.737   2.322   3.322

Problem 9.3

The entropy of the source is

    H(S) = p_0 \log_2(1/p_0) + p_1 \log_2(1/p_1) + p_2 \log_2(1/p_2) + p_3 \log_2(1/p_3)
         = (1/3) \log_2 3 + (1/6) \log_2 6 + (1/4) \log_2 4 + (1/4) \log_2 4
         = 0.528 + 0.431 + 0.5 + 0.5
         = 1.959 bits

Problem 9.4

Let X denote the number showing on a single roll of a die. With a die having six faces, we note that p_X = 1/6 for every face. Hence, the entropy of X is

    H(X) = \sum_{k=1}^{6} p_X \log_2(1/p_X) = 6 \cdot (1/6) \log_2 6 = \log_2 6 = 2.585 bits

Problem 9.5

The entropy of the quantizer output is

    H = -\sum_{k=1}^{4} P(X_k) \log_2 P(X_k)

where X_k denotes a representation level of the quantizer. Since the quantizer input is Gaussian with zero mean, and a Gaussian density is symmetric about its mean, we find that

    P(X_1) = P(X_4)
    P(X_2) = P(X_3)

The representation level 1.5 corresponds to a quantizer input 1 < Y < \infty. Hence,

    P(X = 1.5) = \int_1^{\infty} f_Y(y) \, dy = 0.1611

The representation level 0.5 corresponds to the quantizer input 0 < Y < 1. Hence,

    P(X = 0.5) = \int_0^1 f_Y(y) \, dy = 0.3389

Accordingly, the entropy of the quantizer output is

    H = 2 [ 0.1611 \log_2(1/0.1611) + 0.3389 \log_2(1/0.3389) ] = 1.91 bits

Problem 9.6

(a) For a discrete memoryless source,

    P(\sigma_i) = P(s_{i_1}) P(s_{i_2}) \cdots P(s_{i_n})

Noting that M = K^n, we may therefore write

    \sum_{i=0}^{M-1} P(\sigma_i) = \sum_{i=0}^{M-1} P(s_{i_1}) P(s_{i_2}) \cdots P(s_{i_n})
        = \sum_{i_1=0}^{K-1} \sum_{i_2=0}^{K-1} \cdots \sum_{i_n=0}^{K-1} P(s_{i_1}) P(s_{i_2}) \cdots P(s_{i_n})
        = \Big[ \sum_{i_1=0}^{K-1} P(s_{i_1}) \Big] \Big[ \sum_{i_2=0}^{K-1} P(s_{i_2}) \Big] \cdots \Big[ \sum_{i_n=0}^{K-1} P(s_{i_n}) \Big]
        = 1

(b) For k = 1, 2, ..., n, we have

    \sum_{i=0}^{M-1} P(\sigma_i) \log_2 \frac{1}{P(s_{i_k})}
        = \sum_{i_1=0}^{K-1} \cdots \sum_{i_n=0}^{K-1} P(s_{i_1}) \cdots P(s_{i_n}) \log_2 \frac{1}{P(s_{i_k})}

For k = 1, say, we may thus write

    \sum_{i=0}^{M-1} P(\sigma_i) \log_2 \frac{1}{P(s_{i_1})}
        = \Big[ \sum_{i_1=0}^{K-1} P(s_{i_1}) \log_2 \frac{1}{P(s_{i_1})} \Big] \Big[ \sum_{i_2=0}^{K-1} P(s_{i_2}) \Big] \cdots \Big[ \sum_{i_n=0}^{K-1} P(s_{i_n}) \Big]
        = \sum_{i_1=0}^{K-1} P(s_{i_1}) \log_2 \frac{1}{P(s_{i_1})}
        = H(S)

Clearly, this result holds not only for k = 1 but also for k = 2, ..., n.

(c)

    H(S^n) = \sum_{i=0}^{M-1} P(\sigma_i) \log_2 \frac{1}{P(\sigma_i)}
           = \sum_{i=0}^{M-1} P(\sigma_i) \log_2 \frac{1}{P(s_{i_1}) P(s_{i_2}) \cdots P(s_{i_n})}
           = \sum_{k=1}^{n} \sum_{i=0}^{M-1} P(\sigma_i) \log_2 \frac{1}{P(s_{i_k})}

Using the result of part (b), we thus get

    H(S^n) = H(S) + H(S) + \cdots + H(S) = n H(S)

Problem 9.7

(a) The entropy of the source is

    H(S) = 0.7 \log_2(1/0.7) + 0.15 \log_2(1/0.15) + 0.15 \log_2(1/0.15)
         = 0.3602 + 0.4105 + 0.4105
         = 1.1813 bits

(b) The entropy of the second-order extension of the source is

    H(S^2) = 2 × 1.1813 = 2.3626 bits

Problem 9.8

The entropy of text is defined by the smallest number of bits that are required, on the average, to represent each letter. According to Lucky*, English text may be represented by less than 3 bits per character because of the redundancy built into the English language. However, the spoken equivalent of English text has much less redundancy; its entropy is correspondingly much higher than 3 bits. It therefore follows from the source coding theorem that the number of bits required to represent (store) text is much smaller than the number of bits required to represent (store) its spoken equivalent.

* R. W. Lucky, Silicon Dreams, p. 111 (St. Martin's Press, 1989).

Problem 9.9

(a) With K equiprobable symbols, the probability of symbol s_k is

    p_k = P(s_k) = 1/K

The average code-word length is

    \bar{L} = \sum_{k=0}^{K-1} p_k l_k = \frac{1}{K} \sum_{k=0}^{K-1} l_k

The choice of a fixed code-word length l_k = l_0 for all k yields the value \bar{L} = l_0. With K symbols in the code, any other choice for l_k yields a value for \bar{L} no less than l_0.

(b) The entropy of the source is

    H(S) = \sum_{k=0}^{K-1} p_k \log_2(1/p_k) = \sum_{k=0}^{K-1} \frac{1}{K} \log_2 K = \log_2 K

The coding efficiency is therefore

    \eta = \frac{H(S)}{\bar{L}} = \frac{\log_2 K}{l_0}

For \eta = 1, we require l_0 = \log_2 K. To satisfy this requirement, we choose

    K = 2^{l_0}

where l_0 is an integer.
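The entropies worked out in Problems 9.2, 9.3 and 9.7 are easy to verify numerically. The MATLAB fragment below is a minimal illustrative sketch; the probability vectors are simply those quoted in the problems above.

% Numerical check of Problems 9.2, 9.3 and 9.7
p92 = [0.4 0.3 0.2 0.1];            % source statistics of Problem 9.2
p93 = [1/3 1/6 1/4 1/4];            % source statistics of Problem 9.3
p97 = [0.7 0.15 0.15];              % source statistics of Problem 9.7
H   = @(p) sum(p .* log2(1./p));    % entropy in bits
I92 = log2(1./p92)                  % information per symbol: 1.322, 1.737, 2.322, 3.322
H93 = H(p93)                        % = 1.959 bits
H97 = H(p97)                        % = 1.1813 bits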
Problem 9.10

A prefix code is defined as a code in which no code word is the prefix of any other code word. By inspection, we see that codes I and IV are prefix codes, whereas codes II and III are not. To draw the decision tree for a prefix code, we simply begin from a starting node and extend branches forward until each symbol of the code is represented.

(Figure: decision trees for the two prefix codes, I and IV.)

Problem 9.11

We may construct two different Huffman codes by choosing to place a combined symbol as low or as high as possible when its probability is equal to that of another symbol.

We begin with the Huffman code generated by placing a combined symbol as low as possible:

(Figure: Huffman coding tree for the source probabilities 0.55, 0.15, 0.15, 0.10, 0.05 with combined symbols placed as low as possible.)

The source code is therefore

    s_0   0
    s_1   11
    s_2   100
    s_3   1010
    s_4   1011

The average code-word length is therefore

    \bar{L} = \sum_{k=0}^{4} p_k l_k
            = 0.55(1) + 0.15(2) + 0.15(3) + 0.1(4) + 0.05(4)
            = 1.9

The variance of the code-word length is

    \sigma^2 = \sum_{k=0}^{4} p_k (l_k - \bar{L})^2
             = 0.55(-0.9)^2 + 0.15(0.1)^2 + 0.15(1.1)^2 + 0.1(2.1)^2 + 0.05(2.1)^2
             = 1.29

Next, placing a combined symbol as high as possible, we obtain the second Huffman code:

(Figure: Huffman coding tree for the same source with combined symbols placed as high as possible.)

Correspondingly, the Huffman code is

    s_0   0
    s_1   100
    s_2   101
    s_3   110
    s_4   111

The average code-word length is

    \bar{L} = 0.55(1) + (0.15 + 0.15 + 0.1 + 0.05)(3) = 1.9

The variance of the code-word length is

    \sigma^2 = 0.55(-0.9)^2 + (0.15 + 0.15 + 0.1 + 0.05)(1.1)^2 = 0.99

The two Huffman codes described herein have the same average code-word length but different variances.

Problem 9.12

(Figure: Huffman coding tree for the source {s_0, ..., s_6} with probabilities 0.25, 0.25, 0.125, 0.125, 0.125, 0.0625, 0.0625.)

The Huffman code is therefore

    s_0   10
    s_1   11
    s_2   001
    s_3   010
    s_4   011
    s_5   0000
    s_6   0001

The average code-word length is

    \bar{L} = \sum_{k=0}^{6} p_k l_k = 0.25(2)(2) + 0.125(3)(3) + 0.0625(4)(2) = 2.625

The entropy of the source is

    H(S) = \sum_{k=0}^{6} p_k \log_2(1/p_k)
         = 0.25(2) \log_2(1/0.25) + 0.125(3) \log_2(1/0.125) + 0.0625(2) \log_2(1/0.0625)
         = 2.625 bits

The efficiency of the code is therefore

    \eta = H(S)/\bar{L} = 2.625/2.625 = 100%

We could have shown that the efficiency of the code is 100% by inspection, since

    \eta = \frac{\sum_k p_k \log_2(1/p_k)}{\sum_k p_k l_k}

and here l_k = \log_2(1/p_k) for every k.

Problem 9.13

(a)

(Figure: Huffman coding tree for the source with probabilities 0.7, 0.15, 0.15.)

The Huffman code is therefore

    s_0   0
    s_1   10
    s_2   11

The average code-word length is

    \bar{L} = 0.7(1) + 0.15(2) + 0.15(2) = 1.3

(b) For the second-order extension of the source we have

    Symbol        s_0s_0  s_0s_1  s_0s_2  s_1s_0  s_2s_0  s_1s_1  s_1s_2  s_2s_1  s_2s_2
    Probability   0.49    0.105   0.105   0.105   0.105   0.0225  0.0225  0.0225  0.0225

Applying the Huffman algorithm to the extended source, we obtain the following source code:

    s_0s_0   1
    s_0s_1   001
    s_0s_2   010
    s_1s_0   011
    s_2s_0   0000
    s_1s_1   000100
    s_1s_2   000101
    s_2s_1   000110
    s_2s_2   000111

The corresponding value of the average code-word length is

    \bar{L}_2 = 0.49(1) + 0.105(3)(3) + 0.105(4) + 0.0225(6)(4)
              = 2.395 bits/extended symbol

    \bar{L}_2 / 2 = 1.1975 bits/symbol

(c) The original source has entropy

    H(S) = 0.7 \log_2(1/0.7) + 0.15(2) \log_2(1/0.15) = 1.18 bits

According to Eq. (10.28),

    H(S) \le \frac{\bar{L}_n}{n} < H(S) + \frac{1}{n}

This is a condition which the extended code satisfies.
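As a quick numerical check of Problem 9.11, the following MATLAB fragment (a minimal sketch, with the probability and code-length vectors copied from the solution above) recomputes the mean and variance of the code-word length for the two Huffman codes.

% Mean and variance of the code-word lengths, Problem 9.11
p  = [0.55 0.15 0.15 0.10 0.05];
l1 = [1 2 3 4 4];                          % combined symbols placed as low as possible
l2 = [1 3 3 3 3];                          % combined symbols placed as high as possible
L1 = sum(p.*l1), v1 = sum(p.*(l1-L1).^2)   % L1 = 1.9, v1 = 1.29
L2 = sum(p.*l2), v2 = sum(p.*(l2-L2).^2)   % L2 = 1.9, v2 = 0.99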
Problem 9.14

    Symbol   Huffman code   Code-word length
    A        1              1
    B        011            3
    C        010            3
    D        001            3
    E        0001           4
    F        00001          5
    G        00000          5

Problem 9.15

(Figure: Huffman coding tree for the four instructions with probabilities 1/2, 1/4, 1/8, 1/8.)

    Computer code   Probability   Huffman code
    00              1/2           0
    11              1/4           10
    01              1/8           110
    10              1/8           111

The number of bits used for the instructions based on the computer code, in a probabilistic sense, is equal to

    2 (1/2 + 1/4 + 1/8 + 1/8) = 2 bits

On the other hand, the number of bits used for instructions based on the Huffman code is equal to

    1(1/2) + 2(1/4) + 3(1/8) + 3(1/8) = 1.75 bits

The percentage reduction in the number of bits used for instruction, realized by adopting the Huffman code, is therefore

    100 × (0.25/2) = 12.5%

Problem 9.16

Initial step
Subsequences stored: 0
Data to be parsed: 11101001100010110100...

Step 1
Subsequences stored: 0, 1, 11
Data to be parsed: 101001100010110100...

Step 2
Subsequences stored: 0, 1, 11, 10
Data to be parsed: 1001100010110100...

Step 3
Subsequences stored: 0, 1, 11, 10, 100
Data to be parsed: 1100010110100...

Step 4
Subsequences stored: 0, 1, 11, 10, 100, 110
Data to be parsed: 0010110100...

Step 5
Subsequences stored: 0, 1, 11, 10, 100, 110, 00
Data to be parsed: 10110100...

Step 6
Subsequences stored: 0, 1, 11, 10, 100, 110, 00, 101
Data to be parsed: 10100...

Step 7
Subsequences stored: 0, 1, 11, 10, 100, 110, 00, 101, 1010
Data to be parsed: 0

Now that we have gone as far as we can go with data parsing for the given sequence, we write

    Numerical positions:         1    2    3     4     5     6     7     8     9
    Subsequences:                0    1    11    10    100   110   00    101   1010
    Numerical representations:             22    21    41    31    11    42    81
    Binary encoded blocks:                  0101  0100  1000  0110  0010  1001  10000

Problem 9.17

(Figure: binary symmetric channel with transition probability p and a priori probabilities p(x_0) = p(x_1) = 1/2.)

    P(y_0) = (1 - p) p(x_0) + p \, p(x_1) = (1 - p)(1/2) + p(1/2) = 1/2

    P(y_1) = p \, p(x_0) + (1 - p) p(x_1) = p(1/2) + (1 - p)(1/2) = 1/2

Problem 9.18

Here the a priori probabilities are

    p(x_0) = 1/4,   p(x_1) = 3/4

Hence,

    P(y_0) = (1 - p) p(x_0) + p \, p(x_1) = (1 - p)(1/4) + p(3/4) = 1/4 + p/2

    P(y_1) = p \, p(x_0) + (1 - p) p(x_1) = p(1/4) + (1 - p)(3/4) = 3/4 - p/2

Problem 9.19

From Eq. (9.52) we may express the mutual information as

    I(X;Y) = \sum_j \sum_k p(x_j, y_k) \log_2 \frac{p(y_k | x_j)}{p(y_k)}

The joint probability p(x_j, y_k) has the following four possible values:

    p(x_0, y_0) = (1 - p) p_0
    p(x_0, y_1) = p \, p_0
    p(x_1, y_0) = p \, p_1
    p(x_1, y_1) = (1 - p) p_1

where p_0 = p(x_0) and p_1 = p(x_1). The mutual information is therefore

    I(X;Y) = (1-p) p_0 \log_2 \frac{1-p}{(1-p) p_0 + p p_1}
           + p \, p_0 \log_2 \frac{p}{p p_0 + (1-p) p_1}
           + p \, p_1 \log_2 \frac{p}{(1-p) p_0 + p p_1}
           + (1-p) p_1 \log_2 \frac{1-p}{p p_0 + (1-p) p_1}

Rearranging terms and expanding the logarithms, with p_0 = 1 - p_1:

    I(X;Y) = p \log_2 p + (1-p) \log_2(1-p)
           - [p_1(1-p) + (1-p_1) p] \log_2 [p_1(1-p) + (1-p_1) p]
           - [p_1 p + (1-p_1)(1-p)] \log_2 [p_1 p + (1-p_1)(1-p)]

Treating the transition probability p of the channel as a running parameter, we may carry out the following numerical evaluations:

p = 0:
    I(X;Y) = -p_1 \log_2 p_1 - (1 - p_1) \log_2(1 - p_1)
    p_1 = 0.5:  I(X;Y) = 1.0

p = 0.1:
    I(X;Y) = -0.469 - (0.1 + 0.8 p_1) \log_2(0.1 + 0.8 p_1) - (0.9 - 0.8 p_1) \log_2(0.9 - 0.8 p_1)
    p_1 = 0.5:  I(X;Y) = 0.531

p = 0.2:
    I(X;Y) = -0.7219 - (0.2 + 0.6 p_1) \log_2(0.2 + 0.6 p_1) - (0.8 - 0.6 p_1) \log_2(0.8 - 0.6 p_1)
    p_1 = 0.5:  I(X;Y) = 0.278

p = 0.3:
    I(X;Y) = -0.8813 - (0.3 + 0.4 p_1) \log_2(0.3 + 0.4 p_1) - (0.7 - 0.4 p_1) \log_2(0.7 - 0.4 p_1)
    p_1 = 0.5:  I(X;Y) = 0.1187

p = 0.5:
    I(X;Y) = 0

Thus, treating the a priori probability p_1 as a variable and the transition probability p as a running parameter, we get the following plots:

(Figure: I(X;Y) plotted versus p_1 for p = 0, 0.1, 0.2, 0.3 and 0.5; each curve attains its maximum at p_1 = 0.5.)

Problem 9.20

From the plots of I(X;Y) versus p_1 for p as a running parameter, presented in the solution to Problem 9.19, we observe that I(X;Y) attains its maximum value at p_1 = 0.5 for any p. Hence, evaluating the mutual information I(X;Y) at p_1 = 0.5, we get the channel capacity

    C = 1 + p \log_2 p + (1 - p) \log_2(1 - p) = 1 - H(p)

where H(p) is the entropy function of p.
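The curves of Problem 9.19 and the capacity value of Problem 9.20 can be reproduced with a few lines of MATLAB. The fragment below is a minimal illustrative sketch; it uses the closed form I(X;Y) = H(P(y_1)) - H(p) derived above.

% I(X;Y) of a binary symmetric channel versus the a priori probability p1,
% with the transition probability p as a running parameter (Problem 9.19)
Hb = @(q) -q.*log2(q) - (1-q).*log2(1-q);     % binary entropy function
p1 = 0.001:0.001:0.999;
for p = [0.1 0.2 0.3 0.5]
    Py1 = p1*(1-p) + (1-p1)*p;                % P(y1)
    plot(p1, Hb(Py1) - Hb(p)); hold on        % I(X;Y) = H(P(y1)) - H(p)
end
xlabel('p_1'), ylabel('I(X;Y)')
C = 1 - Hb(0.3)                               % capacity for p = 0.3 (Problem 9.20): 0.1187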
Problem 9.21

(a) Define

    z = p_1(1 - p) + (1 - p_1) p = (1 - p_0)(1 - p) + p_0 p

Hence,

    p_1 p + (1 - p_1)(1 - p) = 1 - p_1(1 - p) - (1 - p_1) p = 1 - z

Correspondingly, we may rewrite the expression for the mutual information I(X;Y) obtained in Problem 9.19 as

    I(X;Y) = H(z) - H(p)

where

    H(z) = -z \log_2 z - (1 - z) \log_2(1 - z)
    H(p) = -p \log_2 p - (1 - p) \log_2(1 - p)

(b) Differentiating I(X;Y) with respect to p_0 (or p_1) and setting the result equal to zero, we find that I(X;Y) attains its maximum value at p_0 = p_1 = 1/2.

(c) Setting p_0 = p_1 = 1/2 in the expression for the variable z, we get

    z = 1 - z = 1/2

Correspondingly, we have H(z) = 1. We thus get the following value for the channel capacity:

    C = I(X;Y) |_{p_0 = p_1 = 1/2} = 1 - H(p)

where H(p) is the entropy function of the channel transition probability p.

Problem 9.22

(Figure: cascade of two binary symmetric channels, each with transition probability p.)

From this diagram, we obtain (by inspection)

    P(y_0 | x_0) = (1 - p)^2 + p^2 = 1 - 2p(1 - p)
    P(y_0 | x_1) = p(1 - p) + (1 - p) p = 2p(1 - p)

Hence, the cascade of two binary symmetric channels with a transition probability p is equivalent to a single binary symmetric channel with a transition probability equal to 2p(1 - p), as shown below:

(Figure: equivalent single binary symmetric channel with transition probability 2p(1 - p).)

Correspondingly, the channel capacity of the cascade is

    C = 1 - H(2p(1 - p))
      = 1 + 2p(1 - p) \log_2[2p(1 - p)] + (1 - 2p + 2p^2) \log_2(1 - 2p + 2p^2)

Problem 9.23

(Figure: binary erasure channel with inputs x_0, x_1, outputs y_0, y_1 and an erasure output y_2; the erasure probability is p.)

    I(X;Y) = \sum_j \sum_k p(x_j, y_k) \log_2 \frac{p(y_k | x_j)}{p(y_k)}        (1)

The joint probabilities for the channel are

    p(x_0, y_0) = (1 - p) p_0        p(x_1, y_0) = 0
    p(x_1, y_1) = (1 - p) p_1        p(x_0, y_1) = 0
    p(x_0, y_2) = p \, p_0           p(x_1, y_2) = p \, p_1

where p_0 + p_1 = 1. Substituting these values in (1), we get

    I(X;Y) = (1 - p) [ p_0 \log_2(1/p_0) + p_1 \log_2(1/p_1) ] = (1 - p) H(p_0)

Since the erasure probability p is fixed, the mutual information I(X;Y) is maximized by choosing the a priori probability p_0 to maximize H(p_0). This maximization occurs at p_0 = 1/2, for which H(p_0) = 1. Hence, the channel capacity C of the erasure channel is

    C = 1 - p

Problem 9.24

(a) When each symbol is repeated three times, we have

    Messages:          000, 111
    Unused signals:    001, 010, 011, 100, 101, 110
    Channel outputs:   000, 001, 010, 011, 100, 101, 110, 111

We note the following:

1. The probability that no errors occur in the transmission of three 0s or three 1s is (1 - p)^3.
2. The probability of just one error occurring is 3p(1 - p)^2.
3. The probability of two errors occurring is 3p^2(1 - p).
4. The probability of receiving all three bits in error is p^3.

With the decision-making based on a majority vote, it is clear that contributions 3 and 4 lead to the probability of error

    P_e = 3p^2(1 - p) + p^3

(b) When each symbol is transmitted five times, we have

    Messages:          00000, 11111
    Unused signals:    00001, 00010, 00011, ..., 11110
    Channel outputs:   00000, 00001, 00010, 00011, ..., 11110, 11111

The probability of zero, one, two, three, four, or five bit errors in transmission is, respectively,

    (1 - p)^5,  5p(1 - p)^4,  10p^2(1 - p)^3,  10p^3(1 - p)^2,  5p^4(1 - p),  p^5

The last three contributions constitute the probability of error

    P_e = p^5 + 5p^4(1 - p) + 10p^3(1 - p)^2

(c) For the general case of n = 2m + 1, we note that the decision-making process (based on a majority vote) makes an error when m + 1 bits or more out of the n bits of a message are received in error. The probability of i message bits being received in error is

    \binom{n}{i} p^i (1 - p)^{n - i}

Hence, the probability of error is (in general)

    P_e = \sum_{i = m + 1}^{n} \binom{n}{i} p^i (1 - p)^{n - i}

The results derived in parts (a) and (b) for m = 1 and m = 2 are special cases of this general formula.
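The general majority-vote formula of Problem 9.24(c) is easy to evaluate numerically. The MATLAB fragment below is a minimal sketch (the transition probability p = 0.01 is an arbitrary example value) that compares the general formula for n = 3 and n = 5 with the closed-form results of parts (a) and (b).

% Repetition-code error probability, Problem 9.24
p  = 0.01;                                   % example transition probability
Pe = @(n,m) sum(arrayfun(@(i) nchoosek(n,i)*p^i*(1-p)^(n-i), m+1:n));
Pe3 = Pe(3,1), Pe3_check = 3*p^2*(1-p) + p^3                    % part (a)
Pe5 = Pe(5,2), Pe5_check = p^5 + 5*p^4*(1-p) + 10*p^3*(1-p)^2   % part (b)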
Problem 9.25

The differential entropy of a random variable is independent of its mean. To evaluate the differential entropy of a Gaussian vector X, consisting of the independent components X_1, X_2, ..., X_n, we may therefore simplify the task by setting the mean of X equal to zero. Under this condition, we may express the joint probability density function of the Gaussian vector X as

    f_X(x) = \frac{1}{(2\pi)^{n/2} \sigma_1 \sigma_2 \cdots \sigma_n}
             \exp\Big( -\frac{x_1^2}{2\sigma_1^2} - \frac{x_2^2}{2\sigma_2^2} - \cdots - \frac{x_n^2}{2\sigma_n^2} \Big)

The logarithm of f_X(x) is

    \log_2 f_X(x) = -\log_2[(2\pi)^{n/2} \sigma_1 \sigma_2 \cdots \sigma_n]
                    - (\log_2 e) \Big( \frac{x_1^2}{2\sigma_1^2} + \cdots + \frac{x_n^2}{2\sigma_n^2} \Big)

Hence, the differential entropy of X is

    h(X) = -\int \cdots \int f_X(x) \log_2 f_X(x) \, dx
         = \log_2[(2\pi)^{n/2} \sigma_1 \sigma_2 \cdots \sigma_n] \int \cdots \int f_X(x) \, dx
           + (\log_2 e) \int \cdots \int \Big( \sum_{i=1}^{n} \frac{x_i^2}{2\sigma_i^2} \Big) f_X(x) \, dx        (1)

We next note that

    \int \cdots \int f_X(x) \, dx = 1
    \int \cdots \int x_i^2 f_X(x) \, dx = \sigma_i^2,    i = 1, 2, ..., n

Hence, we may simplify (1) as

    h(X) = \log_2[(2\pi)^{n/2} \sigma_1 \sigma_2 \cdots \sigma_n] + \frac{n}{2} \log_2 e
         = \frac{1}{2} \log_2[(2\pi)^n \sigma_1^2 \sigma_2^2 \cdots \sigma_n^2] + \frac{1}{2} \log_2 e^n
         = \frac{1}{2} \log_2[(2\pi e)^n \sigma_1^2 \sigma_2^2 \cdots \sigma_n^2]

When the individual variances are equal, say \sigma_1^2 = \sigma_2^2 = \cdots = \sigma_n^2 = \sigma^2, the differential entropy of X reduces to

    h(X) = \frac{n}{2} \log_2(2\pi e \sigma^2)

Problem 9.26

(a) The differential entropy of a random variable X is

    h(X) = -\int_{-\infty}^{\infty} f_X(x) \log_2 f_X(x) \, dx

The constraint on the value x of the random variable X is

    |x| \le M

Using the method of Lagrange multipliers, we find that h(X), subject to this constraint, is maximized when

    \int_{-M}^{M} [ -f_X(x) \log_2 f_X(x) + \lambda f_X(x) ] \, dx

is stationary. Differentiating -f_X(x) \log_2 f_X(x) + \lambda f_X(x) with respect to f_X(x), and then setting the result equal to zero, we get

    \log_2 f_X(x) = \lambda - \log_2 e

This shows that f_X(x) is independent of x. Hence, for the differential entropy h(X) under the constraints |x| \le M and \int_{-M}^{M} f_X(x) dx = 1 to be maximum, the random variable X must be uniformly distributed:

    f_X(x) = 1/(2M)   for -M \le x \le M
           = 0        otherwise

... where S(f) is the power spectral density of the transmitted signal. For the NEXT-dominated channel described in the question, the capacity takes the form of an integral of \log_2(1 + SNR(f)) over the channel bandwidth, in which B, k, l and f_0 are all constants pertaining to the transmission medium. This formula for the capacity can only be evaluated numerically for prescribed values of these constants.

Problem 9.34

For k = 1, the SNR formula given in the problem reduces to

    10 \log_{10}(SNR) = 6 \log_2 N + C_1   dB        (1)

Expressing Eq. (2.33) in decibels, we have

    10 \log_{10}(SNR) = 6R + 10 \log_{10}\Big( \frac{3P}{m_{max}^2} \Big)        (2)

The number of bits per sample, R, is defined by

    R = \log_2 N

We thus see that Eqs. (1) and (2) are equivalent, with

    C_1 = 10 \log_{10}\Big( \frac{3P}{m_{max}^2} \Big)

Problem 9.35

The rate distortion function and the channel capacity theorem may be summed up diagrammatically as follows:

(Figure: a single axis with "min I(X;Y): data compression limit" at the left end and "max I(X;Y): data transmission limit" at the right end.)

According to rate distortion theory, the data compression limit, set by minimizing the mutual information I(X;Y), lies at the extreme left of this representation; here, the symbol Y represents the data-compressed form of the source symbol X. On the other hand, according to the channel capacity theorem, the data transmission limit is defined by maximizing the mutual information I(X;Y) between the channel input X and the channel output Y. This latter limit lies at the extreme right of the representation shown above.
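The results of Problems 9.25 and 9.26 can be compared numerically: for a fixed variance the Gaussian density has the largest differential entropy, while under a peak constraint |x| <= M the uniform density is the maximizer. The MATLAB fragment below is a minimal sketch of that comparison; the bound M = 2 is an arbitrary example value.

% Differential entropy: uniform density on [-M, M] versus a Gaussian
% density of equal variance (Problems 9.25 and 9.26)
M = 2;                                   % example amplitude bound
h_uniform = log2(2*M)                    % h(X) of the uniform density (Problem 9.26)
sigma2 = M^2/3;                          % variance of that uniform density
h_gauss = 0.5*log2(2*pi*exp(1)*sigma2)   % Problem 9.25 with n = 1; always the larger value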
Problem 9.36

Matlab codes

% Computer Problem in Chapter 9
% Figure: the minimum Eb/N0 needed for error-free communication with a
% rate R code, over an AWGN channel using binary signaling.
% This program calculates the minimum required Eb/N0 for BPSK signalling
% at unit power over an AWGN channel, given a rate and an allowed BER.
% Code is based on Brendan's C code.
% Ref: Brendan J. Frey, Graphical Models for Machine Learning and
% Digital Communication, The MIT Press.
% Mathini Sellathurai

EbNo = [7.85168, 7.42122, 6.99319, 6.58785, 6.14714, 5.7329, ...
        5.32711, 4.92926, 4.54106, 4.16588, 3.80312, 3.45317, ...
        3.11902, 2.7981, 2.49337, 2.20617, 1.93251, 1.67687, ...
        1.43313, 1.20671, 0.994633, 0.794801, 0.608808, 0.434862, ...
        0.273476, 0.123322, -0.0148204, -0.144486, -0.266247, -0.374365, ...
        -0.474747, -0.5708, -0.659038, -0.736526, -0.812523, -0.878333, ...
        -0.944802, -0.996262, -1.04468, -1.10054, -1.14925, -1.18536, ...
        -1.22298, -1.24746, -1.27394, -1.31061, -1.34588, -1.37178, ...
        -1.37904, -1.40388, -1.42563, -1.45221, -1.43447, -1.44302, ...
        -1.46129, -1.45001, -1.50485, -1.50654, -1.50192, -1.45507, ...
        -1.50877, -1.52716, -1.54448, -1.51713, -1.54378, -1.5684];

rate = [9.989372e-01, 9.980567e-01, 9.966180e-01, 9.945634e-01, ...
        9.914587e-01, 9.868898e-01, 9.804353e-01, 9.722413e-01, ...
        9.619767e-01, 9.490156e-01, 9.334680e-01, 9.185144e-01, ...
        8.946454e-01, 8.715918e-01, 8.459731e-01, 8.178003e-01, ...
        7.881055e-01, 7.565174e-01, 7.238745e-01, 6.900430e-01, ...
        6.556226e-01, 6.211661e-01, 5.866480e-01, 5.526132e-01, ...
        5.188620e-01, 4.860017e-01, 4.539652e-01, ...
        4.240000e-01, ...  % this entry is illegible in the printed listing; approximate value
        3.938277e-01, 3.653328e-01, 3.382965e-01, 3.129488e-01, ...
        2.889799e-01, 2.661871e-01, 2.451079e-01, 2.251691e-01, ...
        2.068837e-01, 1.894274e-01, 1.733225e-01, 1.588691e-01, ...
        1.453627e-01, 1.326278e-01, 1.210507e-01, 1.101604e-01, ...
        1.002778e-01, 9.150450e-02, 8.347174e-02, 7.508009e-02, ...
        6.886473e-02, 6.266875e-02, 5.698847e-02, 5.188306e-02, ...
        4.675437e-02, 4.230723e-02, 3.851637e-02, 3.476062e-02, ...
        3.185243e-02, 2.883246e-02, 2.606097e-02, 2.332790e-02, ...
        2.185325e-02, 1.981896e-02, 1.764122e-02, 1.586221e-02, ...
        1.444108e-02, 1.314112e-02];

N = length(rate);
b = 1e-6;                       % allowed BER
% Rate R (bits per channel use)
r = [1/32, 1/16, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.85, 0.95];
le = zeros(1, length(r));       % buffer for the minimum Eb/N0 values

for p = 1:length(r)
    c = r(p)*(1 + b*log2(b) + (1-b)*log2(1-b));   % required capacity per channel use
    % minimum Eb/N0 calculation: locate c in the (rate, EbNo) table
    i = N;
    while (i >= 1) && (c > rate(i))
        i = i - 1;
    end
    i = i + 1;
    if (i > 1) && (i <= N)
        % linear interpolation in the table
        e = EbNo(i) + (EbNo(i-1) - EbNo(i))*(c - rate(i))/(rate(i-1) - rate(i));
        le(p) = 10*log10(10^(e/10)*c/r(p));
        display(le)
    else
        display('values out of range')
    end
end

plot(10*log10(r), le, '-')
xlabel('Rate (dB)')
ylabel('Minimum E_b/N_0 (dB)')
axis([10*log10(1/32), 0, -2, 4])
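The low-rate end of the curve produced by this program can be checked against the unconstrained-input Shannon bound, Eb/N0 >= (2^(2R) - 1)/(2R) for a rate of R bits per (real) channel use, which tends to ln 2, i.e. about -1.59 dB, as R approaches 0. The fragment below is a small sketch of that check (the rate values are arbitrary examples); the BPSK-constrained values computed above lie at or slightly above this bound.

% Unconstrained-input Shannon bound on Eb/N0 for rate R (bits per channel use)
R = [1/32, 1/2, 0.95];                  % example code rates
EbNo_min = (2.^(2*R) - 1)./(2*R);       % Eb/N0 >= (2^(2R) - 1)/(2R)
EbNo_min_dB = 10*log10(EbNo_min)        % approx. -1.50 dB, 0 dB, 1.58 dB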
% Computer Experiment in Chapter 9
% Figure: the minimum achievable BER as a function of Eb/N0 for several
% different code rates using binary signaling.
% This program calculates the minimum required Eb/N0 for BPSK signalling
% at unit power over an AWGN channel, given a rate and an allowed BER.
% Code is based on Brendan's C code.
% Ref: Brendan J. Frey, Graphical Models for Machine Learning and
% Digital Communication, The MIT Press.
% Mathini Sellathurai

% The EbNo and rate tables are identical to those in the preceding program
% and are assumed to be present in the workspace.
N = length(rate);

b = 0.5:-1e-6:1e-6;                       % allowed BER (swept)
r = [0.99, 1/2, 1/3, 1/4, 1/5, 1/8];      % rate R (bits per channel use)
le = zeros(1, length(b));

for q = 1:length(r)
    for p = 1:length(b)
        c = r(q)*(1 + b(p)*log2(b(p)) + (1-b(p))*log2(1-b(p)));
        i = N;
        while (i >= 1) && (c > rate(i))
            i = i - 1;
        end
        i = i + 1;
        if (i > 1) && (i <= N)
            e = EbNo(i) + (EbNo(i-1) - EbNo(i))*(c - rate(i))/(rate(i-1) - rate(i));
            le(p) = 10*log10(10^(e/10)*c/r(q));
        else
            display('values out of range')
        end
    end
    plot(le, 10*log10(b), '-'); hold on
end
xlabel('E_b/N_0 (dB)')
ylabel('Minimum BER')
axis([-2 1 -50 -10])

Answer to Problem 9.36

Figure 1: The minimum Eb/N0 needed for error-free communication with a rate R code, over an AWGN channel using binary signaling. (Minimum Eb/N0 in dB plotted versus the code rate in dB.)

Figure 2: The minimum achievable BER as a function of Eb/N0 for several different code rates using binary signaling.
