
CHAPTER 9 Information and Coding

Chapter Outline
9.1 Measure of Information - Entropy
9.2 Source Coding
  9.2.1 Huffman Coding
  9.2.2 Lempel-Ziv-Welch Coding
  9.2.3 Source Coding vs. Channel Coding
9.3 Channel Model and Channel Capacity
9.4 Channel Coding
  9.4.1 Waveform Coding
  9.4.2 Linear Block Coding
  9.4.3 Cyclic Coding
  9.4.4 Convolutional Coding and Viterbi Decoding
  9.4.5 Trellis-Coded Modulation
  9.4.6 Turbo Coding
  9.4.7 Low-Density Parity-Check (LDPC) Coding
  9.4.8 Differential Space-Time Block Coding (DSTBC)
9.5 Coding Gain

9.1 MEASURE OF INFORMATION - ENTROPY


On the premise that the outputs from an information source such as data, speech, or audio/video can be regarded as a random process, information theory defines the self-information of an event x_i with probability P(x_i) as

I(x_i) = log2(1/P(x_i)) = -log2 P(x_i) [bits]   (9.1.1)

This measure of information is large/small for an event of lower/higher probability. Why is it defined like that? It can be intuitively justified/understood by the following:
- A message that there was or will be an earthquake in an area where earthquakes are very rare has the value of big news and therefore can be believed to have a lot of information. But another message that there will be an earthquake in an area where earthquakes are frequent is of much less value as news and accordingly can be regarded as having little information.
- For instance, the information contained in an event which happens with probability P(x_i) = 1 is computed to be zero according to this definition, which fits our intuition.
Source: MATLAB/Simulink for Digital Communication by Won Y. Yang et al. 2009 ( wyyang53@hanmail.net, http://wyyang53.com.ne.kr )

Why is it defined by using the logarithm? Because the logarithm makes the combined information of two independent events x_i and y_j (having joint probability P(x_i, y_j) = P(x_i)P(y_j)) equal to the sum of the informations of the events:

I(x_i, y_j) = log2(1/P(x_i, y_j)) = log2(1/(P(x_i)P(y_j))) = -log2 P(x_i) - log2 P(y_j) = I(x_i) + I(y_j)   (9.1.2)

What makes the logarithm have base 2? It is to make the information measured in bits. According to the definition (9.1.1), the information of a bit whose value x may be 0 or 1 with equal probability 1/2 is

I(x) = log2(1/(1/2)) = 1 [bit]   (9.1.3)

Now, suppose we have a random variable x whose value is taken to be x_i with probability P(x_i) from the universe X = {x_i | i = 1 to M}. Then the average information that can be obtained from observing its value is

H(x) = Sum_{x_i in X} P(x_i) I(x_i) = -Sum_{x_i in X} P(x_i) log2 P(x_i)   (9.1.4)

This is called the entropy of x, which is the amount of uncertainty (mystery) before the value is known and will be lost after the value of x is observed. Generally, the entropy defined by Eq. (9.1.4) is maximized when all the events in X are equiprobable (equally probable), i.e. with probability

P(x_i) = 1/M for all i = 1 to M   (9.1.5)

Especially for the case of a binary information source whose value is chosen from X = {x1, x2}, the entropy

H(x) = -p log2 p - (1-p) log2(1-p)  with p = P(x1), 1-p = P(x2)   (9.1.6)

is maximized when the two events x1 and x2 are equally likely so that

P(x1) = p = 1/2 and P(x2) = 1-p = 1/2   (9.1.7)

The entropy H(x) is depicted as a function of p in Fig. 9.1.
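As a quick numerical check, the following minimal MATLAB sketch (ours, not the book's routine) evaluates the binary entropy of Eq. (9.1.6) and reproduces the curve of Fig. 9.1, peaking at 1 bit for p = 1/2:

% A minimal sketch (ours) of the binary entropy curve, Eq.(9.1.6)
p = 0.001:0.001:0.999;              % P(x1), swept over (0,1)
H = -p.*log2(p) - (1-p).*log2(1-p); % binary entropy by Eq.(9.1.6)
plot(p,H), grid on
xlabel('p = P(x_1)'), ylabel('H(x) [bits]') % maximum H(1/2) = 1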

Problem 9.1 Entropy and Investment Value of a Date
Suppose you are dating someone who is interested in you. When is your value maximized so that he or she can give you the most valuable present? In connection with the concept of entropy introduced in Sec. 9.1, choose one among the following examples:
(1) When you sit on the fence so that he or she cannot guess whether you are interested in him or her.
(2) When you have shown him or her a negative sign.
(3) When you have shown him or her a positive sign.
In the same context, which fish would you offer the best bait to? Choose one among the following examples:
(1) The fish you are not sure whether you can catch or not.
(2) The fish you are sure you can catch (even with no bait).
(3) The fish you are sure you cannot catch (with any bait).


9.2 SOURCE CODING


Sometimes we want to find an efficient code using the fewest bits to represent the information. Shannon's source coding theorem (noiseless coding theorem) states that we need at least as many bits as the entropy in order to encode an information message in such a way that perfect decoding is possible. In this section we examine Huffman coding and LZW coding.

9.2.1 Huffman Coding

Huffman coding is a variable-length coding whose strategy is to minimize the average number of bits required for coding a set of symbols by assigning a shorter/longer codeword to a more/less probable symbol. It is performed in two steps, merging and splitting, as illustrated in Fig. 9.2:
(1) Merging: Repeat arranging the symbols in descending order of probability as in each column of Table 9.1 and merging the least probable two entries into a new entry with a probability equal to the sum of their probabilities in the next column, until only two entries remain.
(2) Splitting: Assign 0/1 to each of the two remaining entries to make the initial parts of their codewords. Then repeat splitting back the merged entries into the two (previous) entries and appending another 0/1 to each of their (unfinished) codewords.
The average codeword length of any code assigned to the set X of symbols cannot be less than the entropy H(x) defined by Eq. (9.1.4) and especially, the average codeword length L of the Huffman code constructed by this procedure is less than H(x)+1:

H(x) <= L = Sum_{x_i in X} P(x_i) l(c_{x_i}) < H(x) + 1   (9.2.1)

where l(c_{x_i}) is the length of the codeword for each symbol x_i.



function [h,L,H]=Huffman_code(p,opt)
% Huffman code generator gives a Huffman code matrix h,
% average codeword length L & entropy H
% for a source with probability vector p given as argin(1)
zero_one=['0'; '1'];
if nargin>1&&opt>0, zero_one=['1'; '0']; end
if abs(sum(p)-1)>1e-6
  fprintf('\n The probabilities in p do not add up to 1!');
end
M=length(p); N=M-1;
p=p(:); % Make p a column vector
h={zero_one(1),zero_one(2)};
if M>2
  pp(:,1)=p;
  for n=1:N
    % To sort in descending order
    [pp(1:M-n+1,n),o(1:M-n+1,n)]=sort(pp(1:M-n+1,n),1,'descend');
    if n==1, ord0=o; end % Original descending order
    if M-n>1, pp(1:M-n,n+1)=[pp(1:M-1-n,n); sum(pp(M-n:M-n+1,n))]; end
  end
  for n=N:-1:2
    tmp=N-n+2; oi=o(1:tmp,n);
    for i=1:tmp, h1{oi(i)}=h{i}; end
    h=h1; h{tmp+1}=h{tmp};
    h{tmp}=[h{tmp} zero_one(1)]; h{tmp+1}=[h{tmp+1} zero_one(2)];
  end
  for i=1:length(ord0), h1{ord0(i)}=h{i}; end
  h=h1;
end
L=0;
for n=1:M, L=L+p(n)*length(h{n}); end % Average codeword length
H=-sum(p.*log2(p)); % Entropy by Eq.(9.1.4)

(Example 9.1) Huffman Coding
Suppose there is an information source x which generates the symbols from X = {x1, x2, ..., x9} with the corresponding probability vector

P(x) = p = [0.2 0.15 0.13 0.12 0.1 0.09 0.08 0.07 0.06]   (E9.1.1)

The Huffman codewords together with their average length and the entropy of the information source x can be found by typing the following statements into the MATLAB Command Window:
>>p=[0.2 0.15 0.13 0.12 0.1 0.09 0.08 0.07 0.06];
>>[h,L,H]=Huffman_code(p) % Fig.9.2 and Eq.(9.1.4)
h = '11' '001' '010' '100' '101' '0000' '0001' '0110' '0111'
L = 3.1000, H = 3.0371

This satisfies Eq. (9.2.1), H(x) <= L = Sum_{x_i in X} P(x_i) l(c_{x_i}) < H(x) + 1, since 3.0371 <= 3.1 < 4.0371.


function coded_seq=source_coding(src,symbols,codewords)
% Encode a data sequence src based on the given (symbols,codewords).
no_of_symbols=length(symbols); coded_seq=[];
if length(codewords)<no_of_symbols
  error('The number of codewords must equal that of symbols');
end
for n=1:length(src)
  found=0;
  for i=1:no_of_symbols
    if src(n)==symbols(i), tmp=codewords{i}; found=1; break; end
  end
  if found==0, tmp='?'; end
  coded_seq=[coded_seq tmp];
end

function decoded_seq=source_decoding(coded_seq,codewords,symbols)
% Decode a coded_seq based on the given (codewords,symbols).
M=length(codewords); decoded_seq=[];
while ~isempty(coded_seq)
  lcs=length(coded_seq); found=0;
  for m=1:M
    codeword=codewords{m}; lc=length(codeword);
    % Short-circuit && avoids indexing past the end of coded_seq
    if lcs>=lc && isequal(codeword,coded_seq(1:lc))
      symbol=symbols(m); found=1; break;
    end
  end
  if found==0, symbol='?'; end % no codeword matches the prefix
  decoded_seq=[decoded_seq symbol];
  coded_seq=coded_seq(lc+1:end);
end

>>src='12345678987654321'; symbols='123456789';
>>coded_sequence=source_coding(src,symbols,h) % with the Huffman code h
coded_sequence = 11001010100101000000010110011101100001000010110001000111
>>decoded_sequence=source_decoding(coded_sequence,h,symbols)
decoded_sequence = 12345678987654321
>>length(src), length(coded_sequence)
ans = 17
ans = 56

It also turns out that, in comparison with the case where each of the nine different symbols {1,2,...,9} is represented by a binary number of 4 bits, Huffman coding compresses the source data from 17 x 4 = 68 bits to 56 bits.

9.2.2 Lempel-Ziv-Welch (LZW) Coding

Suppose we have a binary sequence '001011011000011011011000'. We can apply the LZW encoding/decoding procedure to get the following results. They can also be verified by the step-by-step procedure described in Figs. 9.3.1/9.3.2.

>>src='001011011000011011011000'
>>[coded_sequence,dictionary]=LZW_coding(src)
coded_sequence = 001341425a79
dictionary = '0' '1' '00' '01' '10' '011' '101' '11' '100' '000' '0110' '01101' '110'
>>[decoded_sequence,dictionary]=LZW_decoding(coded_sequence)
decoded_sequence = 001011011000011011011000 % agrees with the source
dictionary = '0' '1' '00' '01' '10' '011' '101' '11' '100' '000' '0110' '01101' '110'
>>length(src), length(coded_sequence)
ans = 24
ans = 12
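The book's LZW_coding/LZW_decoding routines are not listed here, but the encoding step can be sketched as follows (a minimal version of ours, not the book's code; it returns 0-based dictionary indices, which match the digits of the coded sequence above when indices of 10 or more are printed as letters):

function [codes,dict]=lzw_encode_sketch(src)
% Minimal LZW encoder sketch (ours); src is a string over {'0','1'}
dict={'0','1'}; % initial dictionary of the single symbols
w=''; codes=[];
for n=1:length(src)
  wc=[w src(n)];
  if any(strcmp(dict,wc))
    w=wc; % current phrase can still be extended
  else
    codes(end+1)=find(strcmp(dict,w))-1; % emit 0-based index of w
    dict{end+1}=wc; % register the new phrase
    w=src(n);
  end
end
codes(end+1)=find(strcmp(dict,w))-1; % emit the last phrase

For src='001011011000011011011000' this yields codes = [0 0 1 3 4 1 4 2 5 10 7 9] and the 13-entry dictionary shown above.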

9.2.3 Source Coding vs. Channel Coding

The simplified block diagram of a communication system depicted in Fig. 9.4 shows the position of the source encoder/decoder and the channel encoder/decoder.
- The purpose of source coding is to reduce the number of data bits that are to be transmitted over the channel for sending a message to the receiver,
- while the objective of channel coding is to make the detection and/or correction of transmission errors possible so that the error probability can be decreased.


9.3 CHANNEL MODEL AND CHANNEL CAPACITY

<Shannon-Hartley channel capacity theorem>

C = B log2(1 + S/N)   (9.3.7)
  = B log2(1 + S/((N0/2)*2B)) = B log2(1 + (Eb*R)/(N0*B)) [bits/sec]   (9.3.16)

where B [Hz]: the channel bandwidth, S [W]: the signal power, Eb [J/bit]: the signal energy per bit, R [bits/sec]: the data bit rate, N0/2 [W/Hz]: the noise power per unit frequency [Hz] in the passband, and N = N0*B [W]: the noise power.

The limits of the channel capacity as S/(N0*B) -> infinity or 0 are as follows:

lim_{S/(N0*B) -> infinity} C = B log2(1 + S/(N0*B)) ~ B log2(S/(N0*B)) = B*(10 log10(S/(N0*B)))/(10 log10 2) ~ 0.332*B*SNRdB   (9.3.17a)

lim_{S/(N0*B) -> 0} C = lim_{S/(N0*B) -> 0} B log2(1 + S/(N0*B)) = (S/N0) log2 e = 1.44*S/N0   (9.3.17b)

To taste an implication of this formula, suppose an ideal situation in which the data transmission rate R reaches its upper bound, the channel capacity C. The relationship between the bandwidth efficiency R/B and EbN0dB = 10 log10(Eb/N0) for such an ideal system can be obtained by substituting C = R into the left-hand side of Eq. (9.3.16):

R = B log2(1 + (Eb*R)/(N0*B))  ->  R/B = log2(1 + (Eb/N0)(R/B))

=> EbN0dB = 10 log10(Eb/N0) = 10 log10((2^(R/B) - 1)/(R/B)) [dB]   (9.3.18)

On the other hand, we have R (data transmission rate) <= C (channel capacity).   (9.3.8)

This relationship is depicted as the capacity boundary curve in Fig. 9.7, where the bandwidth efficiencies vs. SNR for the PSK/FSK/QAM signalings listed in Table 7.2 are plotted together. Note the following about Fig. 9.7:
- Only the lower-right region of the curve can be realized (toward error-free transmission).
- The figure shows us possible trade-offs among SNR, bandwidth efficiency, and error probability, which can be useful for a communication system design.
- However low and wide the data transmission rate and the channel bandwidth may be made, respectively, the SNR (EbN0dB) should be at least -1.6dB (Shannon limit) for reliable communication.
- The curve gives a rough estimate of the maximum possible coding gain, where the coding gain is defined as the amount of SNR reduction that (channel) coding allows while maintaining the BER.
- The Shannon limit can be found using the following MATLAB statements:
>>syms x, Shannon_limit=eval(limit(10*log10((2^x-1)/x),0))
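In the same vein, the capacity boundary curve of Fig. 9.7 itself can be regenerated from Eq. (9.3.18); the following statements are a sketch of ours, not the book's plotting script:

% A sketch (ours) of the capacity boundary of Fig. 9.7 from Eq.(9.3.18)
RB=logspace(-1,1,400); % bandwidth efficiency R/B [bits/sec/Hz]
EbN0dB=10*log10((2.^RB-1)./RB); % Eq.(9.3.18)
semilogy(EbN0dB,RB), grid on % approaches -1.59dB as R/B -> 0
xlabel('E_b/N_0 [dB]'), ylabel('R/B [bits/sec/Hz]')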


9.4 Channel Coding


In order to achieve reliable communication through a channel corrupted by noise, we might have to make the codewords different from each other conspicuously enough to reduce the probability of each symbol being taken for another symbol. The conversion of the message data with this aim is called channel coding. It may be accompanied by some side effects such as a decline of data transmission rate, an increase of required channel bandwidth, and an increase of complexity in the encoder/decoder. In this section, we will discuss waveform coding, which converts the data to distinguishable waveforms, and structured coding, which adds some redundant/extra parity bits for detection and/or correction of transmission errors. The structured coding is divided into two schemes: one is block coding, which converts each source data block independently of the previous data, and the other is convolutional coding, which converts the data dependently on the previous data.

9.4.1 Waveform Coding

There are various methods of waveform coding such as antipodal signaling, orthogonal signaling, bi-orthogonal signaling, etc., most of which were discussed in Chapters 5 and 7. Among those waveform codings, orthogonal signaling can be regarded as converting one bit of data or two bits of data according to its/their value(s) in the following way:
Data 0, 1 -> Codeword matrix H1 = [0 0
                                   0 1]

Data 00, 01, 10, 11 -> Codeword matrix H2 = [H1 H1; H1 H1^c] = [0 0 0 0
                                                                0 1 0 1
                                                                0 0 1 1
                                                                0 1 1 0]

(^c: bitwise complement). This conversion procedure is generalized for K bits of data into the form of a Hadamard matrix as

H_K = [H_{K-1} H_{K-1}; H_{K-1} H_{K-1}^c]   (9.4.1)

Here, we define the crosscorrelation between codeword i and codeword j as

z_ij = (Number of bits having the same values - Number of bits having different values) / (Total number of bits in a codeword)   (9.4.2)

This can be computed by changing every 0 (of both codewords) into -1 and dividing the inner product of the two bipolar codeword vectors by the codeword length. According to this definition, the crosscorrelation between any two different codewords turns out to be zero, implying the orthogonality among the codewords and their corresponding signal waveforms.
On the other hand, bi-orthogonal signaling can be regarded as converting two bits of data and, generally, K bits of data in the following way:

Data 00, 01, 10, 11 -> Codeword matrix B2 = [H1; H1^c] = [0 0
                                                          0 1
                                                          1 1
                                                          1 0]

K bits of data -> Codeword matrix B_K = [H_{K-1}; H_{K-1}^c]

Since the number of columns of the codeword matrix is the number of transmitted bits per symbol, the number of bits spent for transmission with bi-orthogonal signaling is half of that for orthogonal signaling and the required channel bandwidth is also half of that for orthogonal signaling, while still showing comparable BER performance. However, the codeword lengths of both signaling (coding) methods increase in geometric progression as the number K of bits per symbol increases; consequently, the data transmission rate R or the bandwidth B will suffer, resulting in a degradation of the bandwidth efficiency R/B.

9.4.2 Linear Block Coding

Block coding is a mapping of K-bit message vectors (symbols) into N-bit (N > K) code vectors by using an (N,K) block code which consists of 2^K codewords of length N. The simplest block coding uses a repetition code to assign an N-zero (N: odd) sequence or an N-one sequence to the binary message 0 or 1, respectively:

Bit message: 0 -> 00...0 (N zeros), N: an odd positive number
Bit message: 1 -> 11...1 (N ones)
A data sequence coded by this coding can be decoded simply by the majority rule, which decodes each N-bit subsequence into 0 or 1 depending on which of 0 or 1 occurs more often. In this coding-decoding scheme, a symbol error happens only with at least (N+1)/2 transmission bit errors in an N-bit sequence and therefore the symbol error probability in a BSC with crossover probability (i.e., channel bit transmission error probability) e can be computed as

p_e,s = Sum_{k=(N+1)/2}^{N} C(N,k) e^k (1-e)^(N-k)   (9.4.4)

With the crossover probability e = 0.01, the symbol error probability will be

p_e,s = Sum_{k=2}^{3} C(3,k) 0.01^k 0.99^(3-k) = 2.98x10^-4 for N = 3
p_e,s = Sum_{k=3}^{5} C(5,k) 0.01^k 0.99^(5-k) = 9.85x10^-6 for N = 5
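These two numbers are easy to reproduce numerically; the following lines are a small sketch of ours evaluating Eq. (9.4.4):

% A quick numerical check of Eq.(9.4.4) (a sketch of ours)
ep=0.01; % crossover probability
for N=[3 5]
  k=(N+1)/2:N; % numbers of bit errors causing a symbol error
  Ck=arrayfun(@(kk)nchoosek(N,kk),k); % binomial coefficients
  pes=sum(Ck.*ep.^k.*(1-ep).^(N-k)) % 2.98e-4 for N=3, 9.85e-6 for N=5
end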

This implies that the symbol error probability can be made close to zero just by increasing the number N of bits per symbol, at the cost of low bandwidth efficiency.

What is the linearity of a block code? A block code is said to be linear if the modulo-2 sum/difference of any two codewords in the block code is another codeword belonging to the block code. A linear block code is represented by its K x N generator matrix G, which is modulo-2 premultiplied by a K-bit message symbol vector m to yield an N-bit codeword c as

c = m G   (9.4.5)

Accordingly, the encoder and decoder in a linear block coding scheme use matrix multiplications with generator/parity-check matrices instead of using table lookup methods with the whole 2^K x N codeword matrices. This makes the encoding/decoding processes simple and efficient. Now, we define the minimum (Hamming) distance of a linear block code c as the minimum of the Hamming distances between two different codewords:

d_min(c) = Min_{i~=j} d_H(c_i, c_j)   (9.4.6)




Here, the Hamming distance between two different codewords c_i and c_j is the number of bit positions in which they differ, and it can be found as the weight of the modulo-2 sum/difference c_i + c_j of the two codewords:

d_H(c_i, c_j) = w(c_i + c_j)   (9.4.7)

where the Hamming weight w(c_k) of a codeword c_k is defined to be the number of its non-zero bits. Besides, since the modulo-2 sum/difference of any two codewords is another codeword in the linear block code c, the minimum distance of a linear block code is the same as the minimum among the nonzero codeword weights:

d_min(c) = Min_{k~=0} w(c_k)   (9.4.8)
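For instance (a sketch of ours, using two codewords of the code in Example 9.5 below):

% Eq.(9.4.7): Hamming distance as the weight of the modulo-2 sum
ci=[1 1 0 1 0 0 0]; cj=[0 1 1 0 1 0 0]; % two codewords of Example 9.5
dH=sum(rem(ci+cj,2)) % = w(ci+cj mod 2) = 4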

To describe the strength of a code c against transmission errors, the error-detecting/correcting capabilities d_d(c)/d_c(c) are defined to be the maximum numbers of detectable/correctable bit errors, respectively, and they can be obtained from the minimum distance as follows:

d_d(c) = d_min(c) - 1   (9.4.9)

d_c(c) = floor((d_min(c) - 1)/2)   (9.4.10)

where floor(x) is the greatest integer less than or equal to x.

In case the crossover probability of a channel, i.e., the probability of channel bit transmission error, is e and the RCVR corrects only the symbol errors caused by at most d_c(c) bit errors, the bit error probability after correction can be found roughly as

p_e,b ~ (1/N) Sum_{k=d_c+1}^{N} k C(N,k) e^k (1-e)^(N-k)   (9.4.11)

This approximation is implemented in the following routine:

function pemb_t=prob_err_msg_bit(et,N,No_of_correctable_error_bits)
% Theoretical message bit error probability by Eq.(9.4.11)
pemb_t=0;
for k=No_of_correctable_error_bits+1:N
  pemb_t= pemb_t +k*nchoosek(N,k)*et.^k.*(1-et).^(N-k)/N;
end
(Example 9.5) A Linear Block Code of Codeword (Block) Length 7 and Message Size 4
Find the codewords and the minimum distance of the (7,4) linear block code represented by the generator matrix

G = [1 1 0 1 0 0 0
     0 1 1 0 1 0 0
     1 1 1 0 0 1 0
     1 0 1 0 0 0 1]   (E9.5.1)

The codeword matrix consists of the 2^4 = 16 codewords c = m G (Eq. (9.4.5)) for all 4-bit messages m:

Codewords = [0 0 0 0 0 0 0
             1 0 1 0 0 0 1
             1 1 1 0 0 1 0
             0 1 0 0 0 1 1
             0 1 1 0 1 0 0
             1 1 0 0 1 0 1
             1 0 0 0 1 1 0
             0 0 1 0 1 1 1
             1 1 0 1 0 0 0
             0 1 1 1 0 0 1
             0 0 1 1 0 1 0
             1 0 0 1 0 1 1
             1 0 1 1 1 0 0
             0 0 0 1 1 0 1
             0 1 0 1 1 1 0
             1 1 1 1 1 1 1]   (E9.5.2)

%dc09e05.m: A Linear Block Code in Example 9.5
% Constructs the codeword matrix and finds its minimum distance.
clear
K=4; L=2^K; % Message size and number of codewords
for i=1:L, M(i,:)=deci2bin1(i-1,K); end
M % A series of K-bit binary numbers
% Generator matrix
G=[1 1 0 1 0 0 0; 0 1 1 0 1 0 0; 1 1 1 0 0 1 0; 1 0 1 0 0 0 1];
% To generate the codewords
Codewords=rem(M*G,2) % Modulo-2 multiplication, Eq.(9.4.5)
% Find the minimum distance by Eq.(9.4.8)
Minimum_distance=min(sum((Codewords(2:L,:))'))

function y=deci2bin1(x,l)
% Converts a given decimal number x into a binary number of l bits
% Equivalent to de2bi(x,l,'left-msb')
if x==0, y=0;
else
  y=[];
  while x>=1, y=[rem(x,2) y]; x=floor(x/2); end
end
if nargin>1, y=[zeros(size(x,1),l-size(y,2)) y]; end

>>dc09e05
Minimum_distance = 3

(cf) Note that every operation involved in adding/subtracting/multiplying the code vectors/matrices is not the ordinary arithmetic operation but the modulo-2 operation; consequently, the addition and the subtraction don't have to be distinguished.

<Construction of a Generator Matrix and the Corresponding Parity-Check Matrix>
A K x N generator matrix G representing an (N,K) block code can be constructed as

G_{K x N} = [P_{K x (N-K)} I_{K x K}] = [p_11 p_12 ... p_1,N-K  1 0 ... 0
                                         p_21 p_22 ... p_2,N-K  0 1 ... 0
                                         ..............................
                                         p_K1 p_K2 ... p_K,N-K  0 0 ... 1]   (9.4.12)

With this generator matrix, the N-bit code vector (codeword) c for a K-bit message (or source information) vector m is generated as

c = m G = [p_1 p_2 ... p_{N-K} m_1 m_2 ... m_K] = [p | m]   (9.4.13)

which consists of the first (N-K) parity bits and the last K message bits in a systematic structure. Note that if a block code is represented by a generator matrix containing an identity matrix, then the message vector appears (without being altered) in the code vector and the code is said to be systematic.

Correspondingly to this generator matrix G, the parity-check matrix H to be used for decoding the received signal vector is defined as

H_{(N-K) x N} = [I_{(N-K) x (N-K)} | P^T_{(N-K) x K}]   (9.4.14a)

H^T_{N x (N-K)} = [I_{(N-K) x (N-K)}
                   P_{K x (N-K)}]   (9.4.14b)

so that it satisfies

G H^T = [P_{K x (N-K)} I_{K x K}] [I_{(N-K) x (N-K)}; P_{K x (N-K)}] = P_{K x (N-K)} + P_{K x (N-K)} = O_{K x (N-K)} (a K x (N-K) zero matrix, modulo 2)   (9.4.15)

The following routine creates the error pattern matrix used in the decoding tables below:

function M=combis(N,i)
% Creates an error pattern matrix, each row of which is an N-dimensional
% vector having not more than i ones (representing bit errors)
M=[]; m1=0;
for n=1:i
  ind = combnk([1:N],n); % nchoosek([1:N],n);
  for m=1:size(ind,1), m1=m1+1; M(m1,ind(m,:))=1; end
end

>> M=combis(5,2) % 15x5 matrix of all 1- and 2-bit error patterns
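As a quick sanity check (a sketch of ours), the G of Example 9.5 and the corresponding H built by Eq. (9.4.14a) indeed satisfy Eq. (9.4.15):

% Verify G*H' = O (mod 2) for the (7,4) code of Example 9.5
P=[1 1 0; 0 1 1; 1 1 1; 1 0 1]; % parity part of G
G=[P eye(4)]; % Eq.(9.4.12)
H=[eye(3) P']; % Eq.(9.4.14a)
GHt=rem(G*H',2) % = 4x3 zero matrix by Eq.(9.4.15)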



For a received signal vector r = c + e containing an error vector e, the decoder at the RCVR computes

s = r H^T = (c + e) H^T = (m G + e) H^T = e H^T (by Eqs. (9.4.13) and (9.4.15))   (9.4.16)

which is called a syndrome vector for the reason that it has some (distinct) information about the possible symbol error like a pathological syndrome or symptom of disease. Then, the decoder finds the error pattern e corresponding to the syndrome vector s from such a table as shown in Fig. 9.8 and subtracts it from the received signal vector r to hopefully obtain the original code vector as

c = r + e (mod 2)   (9.4.17)

Finally, the RCVR accepts the last K bits of this corrected code vector as the message vector m (see Eq. (9.4.13)). Fig. 9.8 shows a table which contains the healthy (valid) codewords, the error patterns, the cosets of diseased codewords infected by each error pattern, and the corresponding syndromes for the linear block code given in Example 9.5.

For example, suppose the RCVR has received a signal, which is detected to be

r = [1 1 0 1 1 1 0]

The RCVR multiplies this received signal vector by the transposed parity-check matrix to get the syndrome by Eq. (9.4.16):

s = r H^T = [1 1 0 1 1 1 0] [1 0 0
                             0 1 0
                             0 0 1
                             1 1 0
                             0 1 1
                             1 1 1
                             1 0 1] = [1 0 0] (mod 2)

Then the error pattern e = [1 0 0 0 0 0 0] corresponding to this syndrome vector is added to the detected result to yield, by Eq. (9.4.17),

r + e = [1 1 0 1 1 1 0] + [1 0 0 0 0 0 0] = [0 1 0 1 1 1 0] (mod 2)

which is one of the healthy (valid) codewords. The last K = 4 bits of this corrected code vector are taken as the decoded message vector by reference to Eq. (9.4.13): c = [p_1 p_2 ... p_{N-K} | m].
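These steps can be replayed numerically (a sketch of ours, with H built from the P of Example 9.5 via Eq. (9.4.14a)):

% Replaying the syndrome decoding example (sketch)
P=[1 1 0; 0 1 1; 1 1 1; 1 0 1]; H=[eye(3) P']; % Eq.(9.4.14a)
r=[1 1 0 1 1 1 0]; % received vector
s=rem(r*H',2) % syndrome = [1 0 0] by Eq.(9.4.16)
e=[1 0 0 0 0 0 0]; % error pattern corresponding to this syndrome
c=rem(r+e,2) % corrected codeword = [0 1 0 1 1 1 0] by Eq.(9.4.17)
m=c(4:7) % decoded message = [1 1 1 0]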

The MATLAB routine do_Hamming_code74.m can be used to simulate a channel encoding/decoding scheme that is based on the block code represented by the generator matrix (E9.5.1). Actually, the block code is the (7,4) Hamming code and it can be constructed using the Communication Toolbox function Hammgen() (as done in the routine) or a subsidiary routine Hamm_gen() that will soon be introduced. Note the following about the routine:
- The randomly generated K-bit message vector is coded by the (7,4) Hamming code and then BPSK-modulated.
- For comparison with the uncoded case, Eq. (7.3.5) is used to compute the BER for BPSK signaling.
- The same equation (Eq. (7.3.5)) is also used to find the crossover probability e, but with the SNRb (SNR per bit) multiplied by the code rate Rc = K/N to keep the SNR per message bit the same for uncoded and coded messages. That is, the SNR per transmission bit should be decreased to Rc = K/N times for the same SNR per message bit, since the channel coding based on an (N,K) linear block code increases the number of transmission bits by 1/Rc = N/K times that with no channel coding.
- If there are many syndromes and corresponding error patterns, it is time-consuming to search the syndrome matrix S for a matching syndrome and find out the corresponding error pattern in the error pattern matrix E. In that case, it will make the search process more efficient to create and use an error pattern index vector that can easily be accessed by using the number converted from the syndrome vector as the index (see Sec. 9.4.3). This idea is analogous to that of an address decoding circuit that can be used to decode the syndrome into the address of the memory in which the corresponding error pattern is stored.

Note that the SNR per transmission bit must be decreased to Rc = K/N times that of the uncoded case for fair comparison.

function do_Hamming_code74(SNRbdB,MaxIter)
% (7,4) Hamming code
if nargin<2, MaxIter=1e6; end
n=3; N=2^n-1; K=2^n-1-n; % Codeword (block) length and message size
Rc=K/N; % Code rate
SNRb=10.^(SNRbdB/10); SNRbc=SNRb*Rc; sqrtSNRbc=sqrt(SNRbc);
pemb_uncoded=Q(sqrt(SNRb)); % Uncoded msg BER with BPSK by Eq.(7.3.5)
et=Q(sqrt(SNRbc)); % Crossover probability by Eq.(7.3.5)
L=2^K;
for i=1:L, M(i,:)=deci2bin1(i-1,K); end % All message vectors
[H,G]=Hammgen(n); % [H,G]=Hamm_gen(n): Eq.(9.4.12)&(9.4.14)
Hamming_code=rem(M*G,2); % Eq.(9.4.13)
Min_distance=min(sum(Hamming_code(2:L,:)')); % Eq.(9.4.8)
No_of_correctable_error_bits=floor((Min_distance-1)/2); % Eq.(9.4.10)
E= combis(N,No_of_correctable_error_bits); % Error patterns (Fig.9.8)
S= rem(E*H',2); NS=size(S,1); % The syndrome matrix
nombe=0;
for iter=1:MaxIter
  msg=randint(1,K); % Message vector
  coded=rem(msg*G,2); % Coded vector by Eq.(9.4.13)
  modulated=2*coded-1; % BPSK-modulated vector
  r= modulated +randn(1,N)/sqrtSNRbc; % Received vector with noise
  r_sliced=r>0; % Sliced
  r_c=r_sliced; % To be corrected only if it has a syndrome
  s= rem(r_sliced*H',2); % Syndrome by Eq.(9.4.16)
  for m=1:NS % Error correction depending on the syndrome
    if s==S(m,:), r_c=rem(r_sliced+E(m,:),2); break; end % Eq.(9.4.17)
  end
  nombe=nombe+sum(msg~=r_c(N-K+1:N));
  if nombe>100, break; end
end
pemb=nombe/(K*iter); % Message bit error probability
pemb_t=prob_err_msg_bit(et,N,No_of_correctable_error_bits); % Eq.(9.4.11)
fprintf('\n Uncoded Message Bit Error Probability=%8.6f',pemb_uncoded)
fprintf('\n Message Bit Error Probability=%8.6f (theoretically %8.6f)\n',pemb,pemb_t)

Now, it is time to run the routine and analyze the effect of channel coding on the BER performance. To this end, let us type the statements
>>do_Hamming_code74(5) % with SNR=5dB

into the MATLAB Command Window, which will make the following simulation results appear on the screen:
Uncoded Message Bit Error Probability=0.037679
Message Bit Error Probability=0.059133 (theoretically 0.038457)

What happened? There are a couple of observations to make:
- The BER (0.059) obtained from the simulation is unexpectedly higher than the theoretical value of BER (0.038) obtained from Eq. (9.4.11), where Eq. (7.3.5) with the SNR reduced Rc = K/N times is used to find the crossover probability e. Where does the big gap come from? It is because Eq. (9.4.11) does not consider the case where many transmitted bit errors, exceeding the error-correcting capability of the channel coding, result in more bit errors during the correction process because of wrong diagnosis (see Problem 9.5).
- It is much more surprising that even the theoretical value of BER is higher than the BER with no coding. Can you believe that we get worse BER performance for all the hardware complexity, bandwidth expansion, and/or data rate reduction paid for the encoding-decoding process? Do we still need the channel coding? Yes, certainly. Something wrong with the routine? Absolutely not. The reason behind the worse BER is just that the SNR is not high enough to let the coding-decoding scheme work properly. Even if the parity-check bits are added to the message vector, they may not play their error-checking role since they (the inspectors) are also subject to noise (corruption), possibly making wrong error detections/corrections that may yield more bit errors.
Both of these issues are naturally resolved by increasing the SNR. Let us rerun the routine:
>>do_Hamming_code74(12) % with SNR=12dB
Uncoded Message Bit Error Probability=0.000034
Message Bit Error Probability=0.000019 (theoretically 0.000010)

9.4.3 Cyclic Coding


A cyclic code is a linear block code having the property that a cyclic shift (rotation) of any codeword yields another codeword. Due to this additional property, the encoding and decoding processes can be implemented more efficiently using a feedback shift register. An ( N , K ) cyclic code can be described by an ( N  K ) th-degree generator polynomial

g(x) = g_0 + g_1 x + g_2 x^2 + ... + g_{N-K} x^{N-K}   (9.4.24)

The procedure of encoding a K-bit message vector m = [m_0 m_1 ... m_{K-1}], represented by a (K-1)th-degree polynomial

m(x) = m_0 + m_1 x + m_2 x^2 + ... + m_{K-1} x^{K-1}   (9.4.25)

into an N-bit codeword represented by an (N-1)th-degree polynomial is as follows:
1. Divide x^{N-K} m(x) by the generator polynomial g(x) to get the remainder polynomial rm(x).
2. Subtract the remainder polynomial rm(x) from x^{N-K} m(x) to obtain a codeword polynomial

c(x) = x^{N-K} m(x) + rm(x) = q(x) g(x)
     = r_0 + r_1 x + ... + r_{N-K-1} x^{N-K-1} + m_0 x^{N-K} + m_1 x^{N-K+1} + ... + m_{K-1} x^{N-1}   (9.4.26)

which has the generator polynomial g(x) as a (multiplying) factor. Then the first (N-K) coefficients constitute the parity vector and the remaining K coefficients make the message vector. Note that all the operations involved in the polynomial multiplication, division, addition, and subtraction are not the ordinary arithmetic ones, but the modulo-2 operations.

(Example 9.6) A Cyclic Code

With a (7,4) cyclic code represented by the generator polynomial

g(x) = g_0 + g_1 x + g_2 x^2 + g_3 x^3 = 1 + 1*x + 0*x^2 + 1*x^3   (E9.6.1)

find the codeword for a message vector m = [1 0 1 1]. Noting that N = 7, K = 4, and N-K = 3, we divide x^{N-K} m(x) by g(x) as

x^{N-K} m(x) = x^3 (1 + 0*x + 1*x^2 + 1*x^3) = x^3 + x^5 + x^6 = q(x) g(x) + rm(x) = (x^3 + x^2 + x + 1)(x^3 + x + 1) + 1   (E9.6.2)

to get the remainder polynomial rm(x) = 1 and add it to x^{N-K} m(x) = x^3 m(x) to make the codeword polynomial as

c(x) = rm(x) + x^3 m(x) = 1 + 0*x + 0*x^2 + 1*x^3 + 0*x^4 + 1*x^5 + 1*x^6 -> c = [1 0 0 | 1 0 1 1]   (E9.6.3)
                                                                                 (parity | message)

The codeword made in this way has the N-K = 3 parity bits and K = 4 message bits.
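The division in (E9.6.2) can be checked with the Communication Toolbox function gfdeconv(), which divides polynomials (given in ascending powers) over GF(p); this usage sketch is ours:

% Checking Example 9.6: remainder of x^3*m(x) divided by g(x) over GF(2)
xNKm=[0 0 0 1 0 1 1]; % x^3*m(x) = x^3 + x^5 + x^6 (ascending powers)
g=[1 1 0 1]; % g(x) = 1 + x + x^3
[q,rm]=gfdeconv(xNKm,g,2) % remainder rm = 1, as in (E9.6.2)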


Now, let us consider the procedure of decoding a cyclic coded vector. Suppose the RCVR has received a possibly corrupted code vector r ! c  e where c is a codeword and e is an error. Just as in the encoder, this received vector, being regarded as a polynomial, is divided by the generator polynomial g ( x )

r(x) = c(x) + e(x) = q(x) g(x) + e(x) = q~(x) g(x) + s(x)   (9.4.27)

to yield the remainder polynomial s(x). This remainder polynomial s may not be the same as the error vector e, but at least it is supposed to carry crucial information about e and therefore may well be called the syndrome. The RCVR will find the error pattern e corresponding to the syndrome s and subtract it from the received vector r to get, hopefully, the correct codeword

c = r + e (mod 2)   (9.4.28)

and accept only the last K bits (in ascending order) of this corrected codeword as a message. The polynomial operations involved in encoding/decoding every block of message/coded sequence seem to be an unbearable computational load. However, we fortunately have divider circuits which can perform such a modulo-2 polynomial operation. Fig. 9.9 illustrates the two divider circuits (consisting of linear feedback shift registers), each of which carries out the modulo-2 polynomial operations for encoding/decoding with the cyclic code given in Example 9.6. Note that the encoder/decoder circuits process the data sequences in descending order of polynomial degree.

[Fig. 9.9: Divider circuits for cyclic encoding/decoding with g(x) = 1 + x + 0*x^2 + x^3 (E9.6.1), which encode the message m = [1 0 1 1] into the codeword c = [1 0 0 | 1 0 1 1] (parity | message) (E9.6.3).]

The encoder/decoder circuits are cast into the MATLAB routines cyclic_encoder() and cyclic_decoder0(), respectively, and we make a program do_cyclic_code.m that uses the two routines cyclic_encoder() and cyclic_decoder() (including cyclic_decoder0()) to simulate the encoding/decoding process with the cyclic code given in Example 9.6. Note a couple of things about the decoding routine cyclic_decoder():
- It uses a table of error patterns in the matrix E, which has every correctable error pattern in its rows. The table is searched for a suitable error pattern by using an error pattern index vector epi, which is arranged by the decimal-coded syndrome and therefore can be addressed efficiently by a syndrome, just like a decoding hardware circuit.
- If the error pattern table E and error pattern index vector epi are not supplied from the calling program, it uses cyclic_decoder0() to supply itself with them.
function coded= cyclic_encoder(msg_seq,N,K,g)
% Cyclic (N,K) encoding of input msg_seq with generator polynomial g
Lmsg=length(msg_seq); Nmsg=ceil(Lmsg/K);
Msg= [msg_seq(:); zeros(Nmsg*K-Lmsg,1)];
Msg= reshape(Msg,K,Nmsg).';
coded= [];
for n=1:Nmsg
  msg= Msg(n,:);
  for i=1:N-K, x(i)=0; end
  for k=1:K
    tmp= rem(msg(K+1-k)+x(N-K),2); % msg(K+1-k)+g(N-K+1)*x(N-K)
    for i=N-K:-1:2, x(i)= rem(x(i-1)+g(i)*tmp,2); end
    x(1)=g(1)*tmp;
  end
  coded= [coded x msg]; % Eq.(9.4.26)
end

function x=cyclic_decoder0(r,N,K,g)
% Cyclic (N,K) decoding of an N-bit code r with generator polynomial g
for i=1:N-K, x(i)=r(i+K); end
for n=1:K
  tmp=x(N-K);
  for i=N-K:-1:2, x(i)=rem(x(i-1)+g(i)*tmp,2); end
  x(1)=rem(g(1)*tmp+r(K+1-n),2);
end

function [decodes,E,epi]=cyclic_decoder(code_seq,N,K,g,E,epi)
% Cyclic (N,K) decoding of received code_seq with generator polynml g
% E: Error pattern matrix or syndromes
% epi: error pattern index vector
%Copyleft: Won Y. Yang, wyyang53@hanmail.net, CAU for academic use only
if nargin<6
  nceb=ceil((N-K)/log2(N+1)); % Number of correctable error bits
  E=combis(N,nceb); % All error patterns consisting of 1,...,nceb errors
  for i=1:size(E,1)
    syndrome=cyclic_decoder0(E(i,:),N,K,g);
    synd_decimal=bin2deci(syndrome);
    epi(synd_decimal)=i; % Error pattern indices
  end
end
if (size(code_seq,2)==1), code_seq=code_seq.'; end
Lcode= length(code_seq); Ncode= ceil(Lcode/N);
Code_seq= [code_seq(:); zeros(Ncode*N-Lcode,1)];
Code_seq= reshape(Code_seq,N,Ncode).';
decodes=[]; syndromes=[];
for n=1:Ncode
  code= Code_seq(n,:);
  syndrome= cyclic_decoder0(code,N,K,g);
  si= bin2deci(syndrome); % Syndrome index
  if 0<si&si<=length(epi) % Syndrome index to error pattern index
    k=epi(si);
    if k>0, code=rem(code+E(k,:),2); end % Eq.(9.4.28)
  end
  decodes=[decodes code(N-K+1:N)];
  syndromes=[syndromes syndrome];
end
if nargout==2, E=syndromes; end

%do_cyclic_code.m : MATLAB script for cyclic coding
clear
N=7; K=4; % also try N=15; K=7; or N=31; K=16;
lm=1*K; msg= randint(1,lm);
nceb=ceil((N-K)/log2(N+1)); % Number of correctable error bits
g=cyclpoly(N,K); % Generator polynomial of an (N,K) cyclic code
%gBCH=bchgenpoly(N,K); % Galois vector representing an (N,K) BCH code
%g=double(gBCH.x); % Extracting the elements from a Galois array
coded = cyclic_encoder(msg,N,K,g); lc=length(coded);
Er = randerr(1,lc,nceb); % nceb random bit errors
% Error vectors for the (31,16) code discussed in the remark below:
%Er=[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0]; % fatal error vector for 'encode()'
%Er=[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1];
%Er=zeros(1,31); Er([4 29 31])=1; % queer error for the (31,16) cyclic code
r = rem(coded+Er,2);
[decoded0,E,epi] = cyclic_decoder(r,N,K,g); % Run only the first time - a bit time-consuming
decoded = cyclic_decoder(r,N,K,g,E,epi);
nobe=sum(decoded~=msg)
coded1 = encode(msg,N,K,'cyclic',g); lc=length(coded1);
[coded; coded1]
r1 = rem(coded1+Er,2);
decoded1 = decode(r1,N,K,'cyclic',g);
nobe1=sum(decoded1~=msg)
[H,G] = cyclgen(N,g); % Run only the first time
syndt = syndtable(H); % Run only the first time - time-consuming
decoded2 = decode(r1,N,K,'cyclic',g,syndt)
nobe2=sum(decoded2~=msg)

(Author's remark, from the original comments: for the big (31,16) code, syndtable()-based decode() is intolerably time-consuming; cyclic_decoder() works for any error pattern of not more than nceb bit errors, while certain 3-bit error patterns result in 5/4 bit errors after decode()/cyclic_decoder() because different error patterns can yield the same syndrome. Strangely, although the minimum distance among the codewords produced by encode() as well as cyclic_encoder() with N=31 and K=16 should be 7, it turns out to be dmin=5 and therefore dc=2.)

Table 9.1 Communication Toolbox functions for block coding

Block coding                      | Related Communication Toolbox functions and objects
Linear block                      | encode, decode, gen2par, syndtable
Cyclic                            | encode, decode, cyclpoly, cyclgen, gen2par, syndtable
BCH (Bose-Chaudhuri-Hocquenghem)  | bchenc, bchdec, bchgenpoly
LDPC (Low-Density Parity-Check)   | fec.ldpcenc, fec.ldpcdec
Hamming                           | encode, decode, hammgen, gen2par, syndtable
Reed-Solomon                      | rsenc, rsdec, rsgenpoly, rsencof, rsdecof

Note the following about the functions encode() and decode() (use MATLAB Help for details):
- They can be used for any linear block coding by putting the string 'linear' and a K x N generator matrix as the fourth and fifth input arguments, respectively.
- They can be used for Hamming coding by putting the string 'hamming' as the fourth input argument or by providing them with only the first three input arguments.
The following example illustrates the usages of encode()/decode() for cyclic coding and of rsenc()/rsdec() for Reed-Solomon coding. Note that the RS (Reed-Solomon) codes are nonbinary BCH codes, which have the largest possible minimum distance for any linear code with the same message size K and codeword length N, yielding the error correcting capability d_c = floor((N-K)/2).

%test_encode_decode.m : to try using encode()/decode()
N=7; K=4; % Codeword (block) length and message size
g=cyclpoly(N,K); % Generator polynomial for a cyclic (N,K) code
Nm=10; % Number of K-bit message vectors
msg = randint(Nm,K); % Nm x K message matrix
coded = encode(msg,N,K,'cyclic',g); % Encoding
% Add bit errors with transmitted BER potbe
potbe=0.3;
received = rem(coded+ randerr(Nm,N,[0 1;1-potbe potbe]),2);
decoded = decode(received,N,K,'cyclic',g); % Decoding
% Probability of message bit errors after decoding/correction
pobe=sum(sum(decoded~=msg))/(Nm*K) % BER
% Usage of rsenc()/rsdec()
M=3; % Galois field integer corresponding to the number of bits per symbol
N=2^M-1; K=3; dc=(N-K)/2; % Codeword length and message size
msg = gf(randint(Nm,K,2^M),M); % Nm x K GF(2^M) Galois field msg matrix
coded = rsenc(msg,N,K); % Encoding
noise = randerr(Nm,N,[1 dc+1]).*randint(Nm,N,2^M);
received = coded + noise; % Add noise
[decoded,numerr] = rsdec(received,N,K); % Decoding
[msg decoded], numerr
pose=sum(sum(decoded~=msg))/(Nm*K) % SER


9.4.4 Convolutional Coding and Viterbi Decoding


In the previous sections, we discussed block coding, which encodes every K-bit block of a message sequence independently of the previous message block (vector). In this section, we are going to see convolutional coding, which converts a K-bit message vector into an N-bit channel input sequence depending also on the previous (L-1) K-bit message vectors (L: constraint length). A convolutional encoder has the structure of a finite-state machine whose output depends not only on the input but also on the state. Fig. 9.10 shows a binary convolutional encoder with a K(=2)-bit input, an N(=3)-bit output, and L-1(=3) 2-bit registers, which can be described as a finite-state machine having 2^((L-1)K) = 2^(3x2) = 64 states. At each iteration, this encoder shifts the previous contents of every stage register except the right-most one into the register on its right and receives a new K-bit input to load the left-most register, sending an N-bit output to the channel for transmission, where the values of the output bits depend on the previous inputs stored in the L-1 registers as well as on the current input.


A binary convolutional code is also characterized by N generator sequences g_1, g_2, ..., g_N, each of which has length LK. For example, the convolutional code with the encoder depicted in Fig. 9.10 is represented by the N(=3) generator sequences

g_1 = [0 0 1 0 1 0 0 1]
g_2 = [0 0 0 0 0 0 0 1]   (9.4.29a)
g_3 = [1 0 0 0 0 0 0 1]

which constitute the generator (polynomial) matrix

G_{N x LK} = [g_1; g_2; g_3] = [0 0 1 0 1 0 0 1
                                0 0 0 0 0 0 0 1
                                1 0 0 0 0 0 0 1]   (9.4.29b)

where the value of the jth element of g_i is 1 or 0 depending on whether the jth one of the LK bits of the shift register is connected to the ith output combiner or not. The shift register is initialized to the all-zero state before the first bit of an input (message) sequence enters the encoder and is also finalized to the all-zero state by the (L-1)K zero bits padded onto the tail part of each input sequence. Besides, the length of each input sequence (to be processed at a time) is made to be MK (an integer M times K) by zero-padding if necessary. For the input sequence made in this way, with total length (M+L-1)K including the padded zeros, the length of the output sequence is (M+L-1)N; consequently, the code rate will be

Rc = MK/((M+L-1)N) -> K/N as M -> infinity (M >> L)   (9.4.30)
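For instance, for the encoder of Fig. 9.10 (K = 2, N = 3, L = 4) and a block of M = 100 input symbols (a numeric illustration of ours):

% Code rate of Eq.(9.4.30) for K=2, N=3, L=4
K=2; N=3; L=4; M=100;
Rc = M*K/((M+L-1)*N) % = 0.6472, approaching K/N = 2/3 as M grows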


Given a generator matrix G_{N x LK} together with the number K of input bits and a message sequence m, the MATLAB routine conv_encoder() below pads the input sequence with zeros as needed and then generates the output sequence of the convolutional encoder. The Communication Toolbox has a convolutional encoding function convenc(), whose usage will be explained together with that of the convolutional decoding function vitdec() at the end of this section.

With the state vector x = [x11 x12 x21 x22 x31 x32] (the contents of the three 2-bit registers of Fig. 9.10), the encoder is described by the modulo-2 state and output equations

x[n+1] = A x[n] + B u[n],  y[n] = C x[n] + D u[n] (mod 2)

where A = [zeros(K,(L-1)K); eye((L-1)K-K) zeros((L-1)K-K,K)], B = [eye(K); zeros((L-1)K-K,K)], C = G(:,K+1:LK), and D = G(:,1:K), which is exactly what the following routine implements:

function [nxb,yb]=state_eq(xb,u,G)
% To be used as a subroutine for conv_encoder()
K=length(u); LK=size(G,2); L1K=LK-K;
if isempty(xb), xb=zeros(1,L1K);
else
  N=length(xb); %(L-1)K
  if L1K~=N, error('Incompatible Dimension in state_eq()'); end
end
A=[zeros(K,L1K); eye(L1K-K) zeros(L1K-K,K)];
B=[eye(K); zeros(L1K-K,K)];
C=G(:,K+1:end); D=G(:,1:K);
nxb=rem(A*xb'+B*u',2)';
yb=rem(C*xb'+D*u',2)';

function [output,state]=conv_encoder(G,K,input,state,termmode)
% generates the output sequence of a binary convolutional encoder
% G : N x LK generator matrix of a convolutional code
% K : number of input bits entering the encoder at each clock cycle
% input: binary input sequence
% state: state of the convolutional encoder
% termmode='trunc' for no termination with all-0 state
%Copyleft: Won Y. Yang, wyyang53@hanmail.net, CAU for academic use only
if isempty(G), output=input; return; end
tmp= rem(length(input),K);
input= [input zeros(1,(K-tmp)*(tmp>0))];
[N,LK]=size(G);
if rem(LK,K)>0
  error('The number of columns of G must be a multiple of K!')
end
%L=LK/K;
if nargin<4|(nargin<5 & isnumeric(state))
  input= [input zeros(1,LK)]; %input= [input zeros(1,LK-K)];
end
if nargin<4|~isnumeric(state), state=zeros(1,LK-K); end
input_length= length(input);
N_msgsymbol= input_length/K;
input1= reshape(input,K,N_msgsymbol);
output= [];
for l=1:N_msgsymbol % Convolution output=G*input
  ub= input1(:,l).';
  [state,yb]= state_eq(state,ub,G);
  output= [output yb];
end

<Various Representations of a Convolutional Code>

<Viterbi Decoding of a Convolutional Coded Sequence>
Detector output or decoder input: [1 1 0 1 1 0 1 0 1 0 1 1 0 0]
Go back through the survivor paths to obtain the
Decoded result: [1 0 1 1 0 0 0]

Don't you wonder what the convolutional encoder outputs for the decoded result given as its input?

Decoded result: [1 0 1 1 0 0 0]
Encoder output: [1 1 0 1 0 0 1 0 1 0 1 1 0 0]

Let us compare this most likely encoder output with the detector output:

Detector output or decoder input: [1 1 0 1 1 0 1 0 1 0 1 1 0 0]

The error in the 5th bit might have been caused by the channel noise.
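This re-encoding can be checked by modulo-2 convolution with the generator sequences g1 = [1 0 1] and g2 = [1 1 1] used in do_vitdecoder.m below (a sketch of ours):

% Re-encode the decoded message and compare with the detector output
m=[1 0 1 1 0 0 0]; % decoded result
y1=mod(conv(m,[1 0 1]),2); y2=mod(conv(m,[1 1 1]),2); % two output streams
enc=reshape([y1(1:7); y2(1:7)],1,[]) % = [1 1 0 1 0 0 1 0 1 0 1 1 0 0]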

function decoded_seq=vit_decoder(G,K,detected,opmode,hard_or_soft)
% performs the Viterbi algorithm on detected to get the decoded_seq
% G: N x LK generator polynomial matrix
% K: number of encoder input bits
%Copyleft: Won Y. Yang, wyyang53@hanmail.net, CAU for academic use only
detected = detected(:).';
if nargin<5|hard_or_soft(1)=='h', detected=(detected>0.5); end
[N,LK]=size(G);
if rem(LK,K)~=0, error('Column size of G must be a multiple of K'); end
tmp= rem(length(detected),N);
if tmp>0, detected=[detected zeros(1,N-tmp)]; end
b=LK-K; % Number of bits representing the state
no_of_states=2^b;
N_msgsymbol=length(detected)/N;
for m=1:no_of_states
  for n=1:N_msgsymbol+1
    states(m,n)=0; % inactive in the trellis
    p_state(m,n)=0; n_state(m,n)=0; input(m,n)=0;
  end
end
states(1,1)=1; % make the initial state active
cost(1,1)=0; K2=2^K;
% To be continued ...


for n=1:N_msgsymbol
  y=detected((n-1)*N+1:n*N); % Received sequence
  n1=n+1;
  for m=1:no_of_states
    if states(m,n)==1 % active
      xb=deci2bin1(m-1,b);
      for m0=1:K2
        u=deci2bin1(m0-1,K);
        [nxb(m0,:),yb(m0,:)]=state_eq(xb,u,G);
        nxm0=bin2deci(nxb(m0,:))+1;
        states(nxm0,n1)=1;
        dif=sum(abs(y-yb(m0,:)));
        d(m0)=cost(m,n)+dif;
        if p_state(nxm0,n1)==0 % Unchecked state node?
          cost(nxm0,n1)=d(m0);
          p_state(nxm0,n1)=m; input(nxm0,n1)=m0-1;
        else
          [cost(nxm0,n1),i]=min([d(m0) cost(nxm0,n1)]);
          if i==1, p_state(nxm0,n1)=m; input(nxm0,n1)=m0-1; end
        end
      end
    end
  end
end
decoded_seq=[];
if nargin>3 & ~strncmp(opmode,'term',4)
  [min_dist,m]=min(cost(:,n1)); % Trace back from the best-metric state
else
  m=1; % Trace back from the all-0 state
end
for n=n1:-1:2
  decoded_seq= [deci2bin1(input(m,n),K) decoded_seq];
  m=p_state(m,n);
end

Given the generator polynomial matrix G together with the number K of input bits and the channel-DTR output sequence detected as its input arguments, the MATLAB routine vit_decoder(G,K,detected) constructs the trellis diagram and applies the Viterbi algorithm to find the maximum-likelihood decoded message sequence. The following MATLAB program do_vitdecoder.m uses the routine conv_encoder() to make a convolutional coded sequence for a message and uses vit_decoder() to decode it to recover the original message.
%do_vitdecoder.m
% Try using conv_encoder()/vit_decoder()
clear, clf
msg=[1 0 1 1 0 0 0]; % msg=randint(1,100)
lm=length(msg); % Message and its length
G=[1 0 1;1 1 1]; % N x LK generator polynomial matrix
K=1; N=size(G,1); % Size of encoder input/output
potbe=0.02; % Probability of transmitted bit error
% Use of conv_encoder()/vit_decoder()
ch_input=conv_encoder(G,K,msg) % Self-made convolutional encoder
notbe=ceil(potbe*length(ch_input));
error_bits=randerr(1,length(ch_input),notbe);
detected= rem(ch_input+error_bits,2); % Received/modulated/detected
decoded= vit_decoder(G,K,detected)
noe_vit_decoder=sum(msg~=decoded(1:lm))

The following program do_vitdecoder1.m uses the Communication Toolbox functions convenc() and vitdec() where vitdec() is used several times with different input argument values to show the readers its various usages. Now, it is time to see the usage of the function vitdec().

%do_vitdecoder1.m
% shows various uses of Communication Toolbox function convenc()
% with KxN code generator matrix Gc - octal polynomial representation
clear, clf
%msg=[1 0 1 1 0 0 0]
msg=randint(1,100); lm=length(msg); % Message and its length
potbe=0.02; % Probability of transmitted bit error
Gc=[5 7]; % 1 0 1 -> 5, 1 1 1 -> 7 (octal numbers)
Lc=3; % 1xK constraint length vector for each input stream
[K,N]=size(Gc); % Number of encoder input/output bits
trel=poly2trellis(Lc,Gc); % Trellis structure
ch_input1=convenc(msg,trel); % Convolutional encoder
notbe1=ceil(potbe*length(ch_input1));
error_bits1=randerr(1,length(ch_input1),notbe1);
detected1= rem(ch_input1+error_bits1,2); % Received/modulated/detected
% with hard decision
Tbdepth=max(Gc)*5; delay=K*Tbdepth; % Traceback depth and decoding delay
decoded1= vitdec(detected1,trel,Tbdepth,'trunc','hard')
noe_vitdec_trunc_hard=sum(msg~=decoded1(1:lm))
decoded2= vitdec(detected1,trel,Tbdepth,'cont','hard');
noe_vitdec_cont_hard=sum(msg(1:end-delay)~=decoded2(delay+1:end))
% with soft decision
ncode= [detected1+0.1*randn(1,length(detected1)) zeros(1,Tbdepth*N)];
quant_levels=[0.001,.1,.3,.5,.7,.9,.999];
NSDB=ceil(log2(length(quant_levels))); % Number of soft decision bits
qcode= quantiz(ncode,quant_levels); % Quantized
decoded3= vitdec(qcode,trel,Tbdepth,'trunc','soft',NSDB);
noe_vitdec_trunc_soft=sum(msg~=decoded3(1:lm))
decoded4= vitdec(qcode,trel,Tbdepth,'cont','soft',NSDB);
noe_vitdec_cont_soft=sum(msg~=decoded4(delay+1:end))

<Usage of the Viterbi Decoding Function vitdec() with convenc() and poly2trellis()>
To apply the MATLAB functions convenc()/vitdec(), we should first use poly2trellis() to build the trellis structure with an octal code generator describing the connections among the inputs, registers, and outputs. Fig. 9.13 illustrates how the octal code generator matrix Gc, as well as the binary generator matrix G and the constraint length vector Lc, is constructed for a given convolutional encoder. An example of using poly2trellis() to build the trellis structure for convenc()/vitdec() is as follows:
trellis=poly2trellis(Lc,Gc);

Here is a brief introduction of the usages of the Communication Toolbox functions convenc() and vitdec(). See the MATLAB Help manual or The Mathworks webpage for more details.

(1) coded=convenc(msg,trellis);
  msg: a message sequence to be encoded with the convolutional encoder described by trellis.
(2) decoded=vitdec(coded,trellis,tbdepth,opmode,dectype,NSDB);
  coded: a convolutional coded sequence, possibly corrupted by noise. It should consist of binary numbers (0/1), real numbers between +1 (logical zero) and -1 (logical one), or integers between 0 and 2^NSDB-1 (NSDB: the number of soft-decision bits given as the optional 6th input argument) corresponding to the quantization level, depending on which one of {'hard', 'unquant', 'soft'} is given as the value of the fifth input argument dectype (decision type).
  trellis: a trellis structure built using the MATLAB function poly2trellis().
  tbdepth: traceback depth (length), i.e., the number of trellis branches used to construct each traceback path. It should be given as a positive integer, say, about five times the constraint length. In case the fourth input argument opmode (operation mode) is 'cont' (continuous), it causes the decoding delay, i.e., the number of zero symbols preceding the first decoded symbol in the output decoded; as a consequence, the decoded result should be advanced by tbdepth*K, where K is the number of encoder input bits.
  opmode: operation mode of the decoding process. If it is set to 'cont' (continuous mode), the internal state of the decoder will be saved for use with the next frame. If it is set to 'trunc' (truncation mode), each frame will be processed independently, and the traceback path starts at the best-metric state and always ends in the all-zero state. If it is set to 'term' (termination mode), each frame is treated independently and the traceback path always starts and ends in the all-zero state. This mode is appropriate when the uncoded message signal has enough zeros, say, K*(max(Lc)-1) zeros at the end of each frame, to fill all memory registers of the encoder.

  dectype: decision type. It should be set to 'unquant', 'hard', or 'soft' depending on the characteristic of the input coded sequence (coded) as follows:
  - 'hard' (decision) when the coded sequence consists of binary numbers 0 or 1.
  - 'unquant' when the coded sequence consists of real numbers between -1 (logical 1) and +1 (logical 0).
  - 'soft' (decision) when the optional 6th input argument NSDB is given and the coded sequence consists of integers between 0 and 2^NSDB-1 corresponding to the quantization level.
  NSDB: number of soft decision bits used to represent the input coded sequence. It is needed and active only when dectype is set to 'soft'.
(3) [decoded,m,s,in]=vitdec(code,trellis,tbdepth,opmode,dectype,m,s,in)
  This format is used for repetitive use of vitdec() in the continuous operation mode, where the state metric m, traceback state s, and traceback input in are supposed to be initialized to empty sets at first and then handed over successively to the next iteration.


%do_vitdecoder1.m
% shows various uses of Communication Toolbox function convenc()
% with KxN code generator matrix Gc - octal polynomial representation
clear, clf
%msg=[1 0 1 1 0 0 0]
msg=randint(1,100); lm=length(msg); % Message and its length
potbe=0.02; % Probability of transmitted bit error
Gc=[5 7]; % 1 0 1 -> 5, 1 1 1 -> 7 (octal numbers)
Lc=3; % 1xK constraint length vector for each input stream
[K,N]=size(Gc); % Number of encoder input/output bits
trel=poly2trellis(Lc,Gc); % Trellis structure
Tbdepth=3; delay=Tbdepth*K; % Traceback depth and decoding delay
% Repetitive use of vitdec() to process the data block by block
% needs to initialize the message sequence, decoded sequence,
% state metric, traceback state/input, and encoder state.
msg_seq=[]; decoded_seq=[]; m=[]; s=[]; in=[]; encoder_state=[];
N_Iter=100;
for itr=1:N_Iter
  msg=randint(1,1000); % Generate the message sequence in a random way
  msg_seq= [msg_seq msg]; % Accumulate the message sequence
  if itr==N_Iter, msg=[msg zeros(1,delay)]; end % Append with zeros
  [coded,encoder_state]=convenc(msg,trel,encoder_state);
  [decoded,m,s,in]=vitdec(coded,trel,Tbdepth,'cont','hard',m,s,in);
  decoded_seq=[decoded_seq decoded];
end
lm=length(msg_seq);
noe_repeated_use=sum(msg_seq(1:lm)~=decoded_seq(delay+[1:lm]))


9.4.6 Turbo Coding


In order for a linear block code or a convolutional code to approach the theoretical limit imposed by Shannon's channel capacity (see Eq. (9.3.16) or Fig. 9.7) in terms of bandwidth/power efficiency, its codeword or constraint length would have to be increased to such an intolerable degree that maximum likelihood decoding becomes unrealizable. Possible solutions to this dilemma are two classes of powerful error correcting codes, called turbo codes and LDPC (low-density parity-check) codes, which can achieve a near-capacity (or near-Shannon-limit) performance with reasonable decoder complexity. The former is the topic of this section and the latter will be introduced in the next section.

Channel capacity:

   C = B log2(1 + S/N) = B log2(1 + S/((N0/2)·2B)) = B log2(1 + Eb·R/(N0·B)) [bits/sec]   (9.3.16)

where B [Hz]: the channel bandwidth, S [W]: the signal power, Eb [J/bit]: the signal energy per bit, R [bits/sec]: the data bit rate, N0/2 [W/Hz]: the noise power per unit frequency in the passband, and N = N0·B [W]: the noise power.

Setting the data rate R equal to the channel capacity C yields

   R = C = B log2(1 + Eb·R/(N0·B))  ->  R/B = log2(1 + (Eb/N0)(R/B))

so that the required SNR per bit is

   EbN0dB = 10 log10(Eb/N0) = 10 log10((2^(R/B) - 1)/(R/B)) [dB]   (9.3.18)
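Eq. (9.3.18) is easily checked numerically; the following sketch (the particular R/B values are arbitrary choices) shows that the required Eb/N0 approaches the ultimate Shannon limit 10·log10(ln 2) = -1.59 dB as R/B -> 0:

RB = [0.001 0.5 1 2 4]; % spectral efficiencies R/B
EbN0dB = 10*log10((2.^RB-1)./RB) % -1.59 -0.82 0 1.76 5.74 [dB]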

Fig. 9.15(a) shows a turbo encoder consisting of two recursive systematic convolutional (RSC) encoders and an interleaver, where the interleaver permutes the message bits in a random way before they are input to the second encoder. (Note that the modifier 'systematic' means that the uncoded message bits are embedded in the encoder output stream as they are.) The code rate will be 1/2 or 1/3 depending on whether or not puncturing is performed. (Note that puncturing omits transmitting some coded bits for the purpose of increasing the code rate beyond that resulting from the basic structure of the encoder.) Fig. 9.15(b) shows a demultiplexer, which classifies the coded bits into two groups, one from encoder 1 and the other from encoder 2, and applies each of them to the corresponding decoder.


Fig. 9.15(c) shows a turbo decoder consisting of two concatenated decoders separated by an interleaver, where each decoder processes the systematic (message) bit sequence y^s and its own parity bit sequence y^{1p}/y^{2p} together with the extrinsic information L_ej (provided by the other decoder) to produce the extrinsic information L_ei and provide it to the other decoder in an iterative manner. The turbo encoder and the demultiplexer are cast into the MATLAB routines encoderm() and demultiplex(), respectively. Now, let us see how the two types of decoder, implementing the Log-MAP (maximum a posteriori probability) algorithm and the SOVA (soft-output Viterbi algorithm), are cast into the MATLAB routines logmap() and sova(), respectively.


function x = rsc_encode(G,m,termination)
% encodes a binary data block m (0/1) with an RSC (recursive systematic
% convolutional) code defined by generator matrix G, returns the output
% in x (0/1), terminates the trellis with all-0 state if termination>0
if nargin<3, termination = 0; end
[N,L] = size(G); % Number of output bits, Constraint length
M = L-1; % Dimension of the state
lu = length(m)+(termination>0)*M; % Length of the input
lm = lu-M; % Length of the message
state = zeros(1,M); % Initialize the state vector
x = []; % To generate the codeword
for i = 1:lu
   if termination<=0 | (termination>0 & i<=lm)
      d_k = m(i);
   elseif termination>0 & i>lm
      d_k = rem(G(1,2:L)*state.',2); % Termination (flushing) bits
   end
   a_k = rem(G(1,:)*[d_k state].',2); % Feedback
   xp = rem(G(2,:)*[a_k state].',2); % 2nd output (parity) bits
   state = [a_k state(1:M-1)]; % Next state
   x = [x [d_k; xp]]; % Since systematic, the first output is the input bit
end

function x = encoderm(m,G,map,puncture)
% map: Interleaver mapping
% If puncture=0 (unpunctured), it operates with a code rate of 1/3.
% If puncture>0 (punctured), it operates with a code rate of 1/2.
% Multiplexer chooses odd/even-numbered parity bits from RSC1/RSC2.
[N,L] = size(G); % Number of output bits, Constraint length
M = L-1; % Dimension of the state
lm = length(m); % Length of the message (information) block
lu = lm + M; % Length of the input sequence
x1 = rsc_encode(G,m,1); % 1st RSC encoder output
mi = x1(1,map); % Interleave the input to the second encoder
x2 = rsc_encode(G,mi,0); % 2nd RSC encoder output
% Parallel-to-serial multiplex to get the output vector
x = [];
if puncture==0 % unpunctured, rate = 1/3
   for i=1:lu, x = [x x1(1,i) x1(2,i) x2(2,i)]; end
else % punctured into rate 1/2
   for i=1:lu
      if rem(i,2), x = [x x1(1,i) x1(2,i)]; % odd parity bits from RSC1
      else x = [x x1(1,i) x2(2,i)]; % even parity bits from RSC2
      end
   end
end

function y = demultiplex(r,map,puncture)
% Copyright 1998, Yufei Wu, MPRG lab, Virginia Tech, for academic use
% map: Interleaver mapping
Nb = 3-puncture;
lu = length(r)/Nb;
if puncture==0 % unpunctured
   for i=1:lu, y(:,2*i) = r(3*i-[1 0]).'; end
else % punctured
   for i=1:lu
      i2 = i*2;
      if rem(i,2)>0, y(:,i2)=[r(i2); 0];
      else y(:,i2)=[0; r(i2)];
      end
   end
end
sys_bit_seq = r(1,1:Nb:end); % the systematic bits for both decoders
y(:,1:2:lu*2) = [sys_bit_seq; sys_bit_seq(map)];


%turbo_code_demo.m
m = round(rand(1,lm)); % Information message bits
[temp,map] = sort(rand(1,lu)); % Random interleaver mapping
x = encoderm(m,G,map,puncture); % Encoder output x(+1/-1)
noise = sigma*randn(1,lu*(3-puncture));
r = a.*x + noise; % Received bits
y = demultiplex(r,map,puncture); % Input for decoders 1 and 2
Ly = 0.5*L_c*y; % Scale the received bits
for iter = 1:Niter
   if iter<2, Lu1=zeros(1,lu); % Initialize the extrinsic information for decoder 1
   else Lu1(map)=L_e2; % (Deinterleaved) a priori information
   end
   if dec_alg==0, L_A1=logmap(Ly(1,:),G,Lu1,1); % all information
   else L_A1=sova(Ly(1,:),G,Lu1,1); % all information
   end
   L_e1= L_A1-2*Ly(1,1:2:2*lu)-Lu1; % Eq.(9.4.47)
   Lu2 = L_e1(map); % (Interleaved) a priori information for decoder 2
   if dec_alg==0, L_A2=logmap(Ly(2,:),G,Lu2,2); % all information
   else L_A2=sova(Ly(2,:),G,Lu2,2); % all information
   end
   L_e2= L_A2-2*Ly(2,1:2:2*lu)-Lu2; % Eq.(9.4.47)
   mhat(map)=(sign(L_A2)+1)/2; % Estimate the message bits
   noe(iter)=sum(mhat(1:lu-M)~=m); % Number of bit errors
end % End of iter loop

<Log-MAP (Maximum a Posteriori Probability) Decoding cast into logmap()>


To understand the operation of the turbo decoder, let us begin with the definition of the a priori LLR (log-likelihood ratio), called the a priori L-value, which is a soft value measuring how high the probability of a binary random variable u being +1 is in comparison with that of u being -1:

   L_u(u) = ln( P_u(u=+1)/P_u(u=-1) )   (9.4.33)

   with P_u(u): the probability of the binary random variable u taking the value u

This is a priori information known before the result y caused by u becomes available. While the sign of the LLR

   u^ = sign{L_u(u)} = { +1 if P_u(u=+1) > P_u(u=-1)
                       { -1 if P_u(u=+1) < P_u(u=-1)    (9.4.34)

is a hard value denoting whether or not the probability of u being +1 is higher than that of u being -1, the magnitude of the LLR is a soft value describing the reliability of the decision u^. Conversely, P_u(u=+1) and P_u(u=-1) can be derived from L_u(u):

Combining P_u(u=+1) =(9.4.33)= e^{L(u)}·P_u(u=-1) with P_u(u=+1) + P_u(u=-1) = 1 yields

   P_u(u=+1) = e^{L(u)}/(1 + e^{L(u)})  and  P_u(u=-1) = 1/(1 + e^{L(u)})

This can be expressed as

   P_u(u) = { e^{L(u)}/(1 + e^{L(u)}) for u = +1
            { 1/(1 + e^{L(u)})        for u = -1    = e^{(u+1)L(u)/2}/(1 + e^{L(u)})   (9.4.35)



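The conversion between an L-value and the two probabilities is easily verified numerically (a trivial example of Eqs. (9.4.33) and (9.4.35) with an arbitrarily assumed L-value):

L = 2; % an assumed a priori L-value
P1 = exp(L)/(1+exp(L)) % P(u=+1) by Eq.(9.4.35): 0.8808
P0 = 1/(1+exp(L)) % P(u=-1) by Eq.(9.4.35): 0.1192
L_check = log(P1/P0) % recovers L=2 by Eq.(9.4.33)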

Also, we define the conditioned LLR, which is used to detect the value of u based on the value of another random variable y affected by u , as the LAPP (Log A Posteriori Probability):
   L_{u|y}(u|y) = ln( P_u(u=+1|y)/P_u(u=-1|y) )
    =(2.1.4)= ln( [P(y|u=+1)·P_u(u=+1)/P(y)] / [P(y|u=-1)·P_u(u=-1)/P(y)] )
             = ln( P(y|u=+1)/P(y|u=-1) ) + ln( P_u(u=+1)/P_u(u=-1) )   (9.4.36)

Now, let y be the output of a fading AWGN (additive white Gaussian noise) channel (with fading amplitude a and SNR per bit Eb / N0 ) given u as the input. Then, this equation for the conditioned LLR can be written as

   L_{u|y}(u|y) = ln( exp(-(Eb/N0)(y - a)^2) / exp(-(Eb/N0)(y + a)^2) ) + ln( P_u(u=+1)/P_u(u=-1) )
                = 4a(Eb/N0)·y + L_u(u) = L_c·y + L_u(u)   (9.4.37)

   with L_c = 4a·Eb/N0 : the channel reliability
The objective of the BCJR (Bahl-Cocke-Jelinek-Raviv) MAP (maximum a posteriori probability) algorithm proposed in [B-1] is to detect the value of the kth message bit u_k depending on the sign of the following LAPP function:

   L_A(u_k) = ln( P_u(u_k=+1|y)/P_u(u_k=-1|y) )
            = ln( Σ_{(s',s)∈S+} p(s_k=s', s_{k+1}=s, y)/p(y) / Σ_{(s',s)∈S-} p(s_k=s', s_{k+1}=s, y)/p(y) )
            = ln( Σ_{(s',s)∈S+} p(s_k=s', s_{k+1}=s, y) / Σ_{(s',s)∈S-} p(s_k=s', s_{k+1}=s, y) )   (9.4.38)

   with S+/S-: the set of all the encoder state transitions from s' to s caused by u_k = +1/-1, respectively

P and p denote the probability of a discrete-valued random variable and the probability density of a continuous-valued random variable, respectively. The numerator/denominator of this LAPP function is the sum of the probabilities that the channel output corresponding to u_k = +1/-1 will be y = {y_{j<k}, y_k, y_{j>k}} with the encoder state transition from s' to s, where each joint probability density p(s',s,y) = p(s_k=s', s_{k+1}=s, y) can be written as
   p(s',s,y) = p(s_k=s', s_{k+1}=s, y) = p(s', y_{j<k})·p(s, y_k|s')·p(y_{j>k}|s) = α_{k-1}(s')·γ_k(s',s)·β_k(s)   (9.4.39)

where α_{k-1}(s') = p(s', y_{j<k}) is the probability that the state s_k at the kth depth level (stage) in the trellis is s' with the output sequence y_{j<k} generated before the kth level, γ_k(s',s) = p(s, y_k|s') is the probability that the state transition from s_k = s' to s_{k+1} = s is made with the output y_k generated, and β_k(s) = p(y_{j>k}|s) is the probability that the state s_{k+1} is s with the output sequence y_{j>k} generated after the kth level. The first and third factors α_{k-1}(s')/β_k(s) can be computed in a forward/backward recursive way:

   α_k(s) = p(s, y_{j<k+1}) = Σ_{s'∈S} p(s', y_{j<k}, s, y_k) = Σ_{s'∈S} p(s', y_{j<k})·p(s, y_k|s')
          = Σ_{s'∈S} α_{k-1}(s')·γ_k(s',s)   with α_0(0) = 1, α_0(s) = 0 for s ≠ 0   (9.4.40)

   β_{k-1}(s') = p(y_{j>k-1}|s') = Σ_{s∈S} p(s', y_k, s, y_{j>k}|s') = Σ_{s∈S} p(s, y_k|s')·p(y_{j>k}|s)
               = Σ_{s∈S} γ_k(s',s)·β_k(s)   (9.4.41a)

   with β_K(0) = 1 and β_K(s) = 0 for all s ≠ 0   if terminated at the all-zero state
        β_K(s) = 1/N_s for all s                  otherwise   (9.4.41b)

where N_s = 2^{L-1} (L: the constraint length) is the number of states and K is the number of decoder input symbols.

The second factor γ_k(s',s) can be found as

   γ_k(s',s) = p(s, y_k|s') = p(s',s,y_k)/p(s') = [p(s',s)/p(s')]·p(y_k|s',s)
    =(2.1.4)= P(s_{k+1}=s|s_k=s')·p(y_k|s_k=s', s_{k+1}=s) = P(u_k)·p(y_k|u_k)
             = P(u_k)·p(y_k^s, y_k^p | u_k, x_k^p(u_k))
   =(9.4.35)= [e^{(u_k+1)L(u_k)/2}/(1 + e^{L(u_k)})]
              ·exp( -(Eb/N0)(y_k^s - a·u_k)^2 - (Eb/N0)(y_k^p - a·x_k^p(u_k))^2 )
              (AWGN channel with fading amplitude a and SNR per bit Eb/N0)
             = [e^{(u_k+1)L(u_k)/2}/(1 + e^{L(u_k)})]·A_k·exp( 2a(Eb/N0)(y_k^s·u_k + y_k^p·x_k^p(u_k)) )
             = [e^{(u_k+1)L(u_k)/2}/(1 + e^{L(u_k)})]·A_k·exp( (1/2)·L_c·[y_k^s y_k^p][u_k; x_k^p(u_k)] )
               where u_k = ±1   (9.4.42)

   with A_k = exp( -(Eb/N0)·{(y_k^s)^2 + a^2·u_k^2 + (y_k^p)^2 + a^2·(x_k^p(u_k))^2} )
   and L_c = 4a·Eb/N0 (channel reliability)

Note a couple of things about this equation:
- To compute γ_k, we need to know the channel fading amplitude a and the SNR per bit Eb/N0.
- A_k does not have to be computed since it will be substituted directly or via α_k (Eq. (9.4.40)) or β_k (Eq. (9.4.41)) into Eq. (9.4.39), then substituted into both the numerator and the denominator of Eq. (9.4.38), and finally cancelled.

The following MATLAB routine logmap(), corresponding to the block named Log-MAP or SOVA in Fig. 9.15(c), uses these equations to compute the LAPP function (9.4.38). Note that in the routine, the multiplications of the exponential terms are done by adding their exponents, which is why Alpha and Beta (representing the exponents of α_k and β_k) are initialized to a large negative number -Infty = -100 (corresponding to a nearly zero e^{-100} ≈ 0) under the assumption of an initial all-zero state and for the termination of decoder 1 in the all-zero state, respectively. (Q: Why is Beta initialized to -ln N_s (-log(Ns)) for the non-terminated decoder 2?)
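The following two lines, a minimal numeric aside, show why working with the exponents is preferable to multiplying the exponentials themselves:

p = exp(-400)*exp(-400) % underflows to 0 in double precision
lp = -400 + (-400) % = -800, no problem in the log (exponent) domain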
function L_A = logmap(Ly,G,Lu,ind_dec)
% Log-MAP algorithm using the straightforward method to compute branch cost
% Input: Ly = scaled received bits Ly=0.5*L_c*y=(2*a*rate*Eb/N0)*y
%        G  = code generator for the RSC code in binary matrix form
%        Lu = extrinsic information from the previous decoder
%        ind_dec = index of decoder=1/2 (assumed to be terminated/open)
% Output: L_A = ln(P(x=1|y)/P(x=-1|y)), i.e., Log-Likelihood Ratio
%         (soft value) of the estimated message input bit at each level
lu=length(Ly)/2; Infty=1e2; EPS=1e-50; % Number of input bits, etc.
[N,L] = size(G);
Ns = 2^(L-1); % Number of states in the trellis
Le1=-log(1+exp(Lu)); Le2=Lu+Le1; % ln(exp((u+1)/2*Lu)/(1+exp(Lu))) for u=-1/+1
% Set up the trellis
[nout,ns,pout,ps] = trellis(G);
% Initialization of Alpha and Beta
Alpha(1,2:Ns) = -Infty; % Eq.(9.4.40); Alpha(1,1)=0 by MATLAB's zero-filling
if ind_dec==1 % for decoder D1 with termination in the all-zero state
   Beta(lu+1,2:Ns) = -Infty; % Eq.(9.4.41b) (the final all-zero state)
else % for decoder D2 without termination
   Beta(lu+1,:) = -log(Ns)*ones(1,Ns); % Eq.(9.4.41b)
end
% Compute gamma at every depth level (stage) by Eq.(9.4.42)
for k = 2:lu+1
   Lyk = Ly(k*2-[3 2]); gam(:,:,k) = -Infty*ones(Ns,Ns);
   for s2 = 1:Ns % Eq.(9.4.42)
      gam(ps(s2,1),s2,k) = Lyk*[-1 pout(s2,2)].' +Le1(k-1);
      gam(ps(s2,2),s2,k) = Lyk*[+1 pout(s2,4)].' +Le2(k-1);
   end
end
% Compute Alpha in forward recursion by Eq.(9.4.40)
for k = 2:lu
   for s2 = 1:Ns
      alpha = sum(exp(gam(:,s2,k).'+Alpha(k-1,:))); % Eq.(9.4.40)
      if alpha<EPS, Alpha(k,s2)=-Infty; else Alpha(k,s2)=log(alpha); end
   end
   tempmax(k) = max(Alpha(k,:)); Alpha(k,:) = Alpha(k,:)-tempmax(k);
end
% Compute Beta in backward recursion by Eq.(9.4.41a)
for k = lu:-1:2
   for s1 = 1:Ns
      beta = sum(exp(gam(s1,:,k+1)+Beta(k+1,:))); % Eq.(9.4.41)
      if beta<EPS, Beta(k,s1)=-Infty; else Beta(k,s1)=log(beta); end
   end
   Beta(k,:) = Beta(k,:) - tempmax(k);
end
% Compute the soft output LLR for the estimated message input
for k = 1:lu
   for s2 = 1:Ns % Eq.(9.4.39)
      temp1(s2)=exp(gam(ps(s2,1),s2,k+1)+Alpha(k,ps(s2,1))+Beta(k+1,s2));
      temp2(s2)=exp(gam(ps(s2,2),s2,k+1)+Alpha(k,ps(s2,2))+Beta(k+1,s2));
   end
   L_A(k) = log(sum(temp2)+EPS) - log(sum(temp1)+EPS); % Eq.(9.4.38)
end

function [nout,nstate,pout,pstate] = trellis(G)
% Copyright 1998, Yufei Wu, MPRG lab, Virginia Tech, for academic use
% Set up the trellis with code generator G in binary matrix form.
% G: Generator matrix with feedback/feedforward connection in row 1/2
%    e.g. G=[1 1 1; 1 0 1] for the turbo encoder in Fig. 9.15(a)
% nout(i,1:2): next output [xs=m xp](-1/+1) for state=i, message in=0
% nout(i,3:4): next output [xs=m xp](-1/+1) for state=i, message in=1
% nstate(i,1): next state(1,...,2^M) for state=i, message input=0
% nstate(i,2): next state(1,...,2^M) for state=i, message input=1
% pout(i,1:2): previous out [xs=m xp](-1/+1) for state=i, message in=0
% pout(i,3:4): previous out [xs=m xp](-1/+1) for state=i, message in=1
% pstate(i,1): previous state having come to state i with message in=0
% pstate(i,2): previous state having come to state i with message in=1
% See Fig. 9.16 for the meanings of the output arguments.
[N,L] = size(G); % Number of output bits and Constraint length
M=L-1; Ns=2^M; % Number of bits per state and Number of states
% Set up next_out and next_state matrices for RSC code generator G
for state_i=1:Ns
   state_b = deci2bin1(state_i-1,M); % Binary state
   for input_bit=0:1
      d_k = input_bit;
      a_k = rem(G(1,:)*[d_k state_b]',2); % Feedback in Fig. 9.15(a)
      out(input_bit+1,:) = [d_k rem(G(2,:)*[a_k state_b]',2)]; % Forward
      state(input_bit+1,:) = [a_k state_b(1:M-1)]; % Shift register
   end
   nout(state_i,:) = 2*[out(1,:) out(2,:)]-1; % Bipolarize
   nstate(state_i,:) = [bin2deci(state(1,:)) bin2deci(state(2,:))]+1;
end
% Possible previous states having reached the present state
% with input_bit=0/1
for input_bit=0:1
   bN = input_bit*N; b1 = input_bit+1; % Number of output bits = 2
   for state_i=1:Ns
      pstate(nstate(state_i,b1),b1) = state_i;
      pout(nstate(state_i,b1),bN+[1:N]) = nout(state_i,bN+[1:N]);
   end
end
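For instance, the trellis of the RSC code of Fig. 9.15(a) can be inspected as follows (assuming the book's helper routines deci2bin1() and bin2deci() used by trellis() are on the path):

G = [1 1 1; 1 0 1]; % the RSC code generator of Fig. 9.15(a)
[nout,nstate,pout,pstate] = trellis(G);
nstate % next states (1..4) from each state for message input 0/1
nout % corresponding bipolar outputs [xs xp] in columns 1:2/3:4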

<SOVA (Soft-In/Soft-Output Viterbi Algorithm) Decoding cast into sova()> [H-2]
The objective of the SOVA decoding algorithm is to find the state sequence s^{(i)} and the corresponding input sequence u^{(i)} that maximize the following MAP (maximum a posteriori probability) function

   P(s^{(i)}|y) =(2.1.4)= p(y|s^{(i)})·P(s^{(i)})/p(y)  ∝  p(y|s^{(i)})·P(s^{(i)})  for given y   (9.4.43)

This probability could be found from the multiplication of the branch transition probabilities defined by Eq. (9.4.42). However, as is done in the routine logmap(), we will compute the path metric by accumulating the logarithms (exponents) of only the terms affected by u_k^{(i)} as follows:

   M_k(s^{(i)}) = M_{k-1}(s'^{(i)}) + (L(u_k)/2)·u_k^{(i)} + (1/2)·L_c·[y_k^s y_k^p][u_k^{(i)}; x_k^p(u_k^{(i)})]   (9.4.44)

The decoding algorithm cast into the routine sova(Ly,G,Lu,ind_dec) proceeds as follows:
(Step 0) Find the number of [y_k^s y_k^p] pairs in Ly given as the first input argument: lu = length(Ly)/2. Find the number N of output bits of the two encoders and the constraint length L from the row and column dimensions of the generator matrix G. Let the number of states be N_s = 2^{L-1}, the SOVA window size H = 30, and the depth level k = 0. Under the assumption of the all-zero state at the initial stage (depth level zero), initialize the path metric to M_k(s_0) = 0 = ln 1 (corresponding to probability 1) only for the all-zero state s_0 and to M_k(s_j) = -∞ = ln 0 (corresponding to probability 0) for the other states s_j (j ≠ 0).

(Step 1) Increment k by one and determine which one of the hypothetical encoder inputs (message bits) u_{k-1} = 0 or u_{k-1} = 1 would result in the larger path metric M_k(s_i) (computed by Eq. (9.4.44)) for every state s_i (i = 0 : N_s-1) at level k, and choose the corresponding path as the survivor path, storing the estimated value of u_{k-1} (into pinput(i,k)) and the relative path metric difference DM(i,k) of the survivor path over the other (non-surviving) path

   ΔM_k(s_i) = M_k(s_i | u_{k-1} = 0/1) - M_k(s_i | u_{k-1} = 1/0)   (9.4.45)

for every state at the stage. Repeat this step (in the forward direction) till k = lu.

(Step 2) Depending on the value of the fourth input argument ind_dec, determine the final state s^(k) (shat(k)) to be the all-zero state s_0 (for decoder 1) or any state belonging to the most likely path, i.e., the one with max M_k(s_i) (for decoder 2).

(Step 3) Find u ( k ) (uhat(k)) from pinput(i,k) (constructed at Step 1) and the corresponding previous state s ( k  1) (shat(k-1)) from the trellis structure. Decrement k by one. Repeat this step (in the backward direction) till k ! 0.
(Step 4) To find the reliability of u^(k), let LLR = ΔM_k(s^(k)). Trace back the non-surviving paths from the optimal states s^(k+i) (for i = 1 : H such that k+i ≤ lu) and find the nearly optimal inputs u^_i(k). If u^_i(k) ≠ u^(k) for some i, let LLR = min{LLR, ΔM_{k+i}(s^(k+i))}. In this way, find the LLR estimate and multiply it by the bipolarized value of u^(k) to determine the soft output or L-value:

   L_A(u^(k)) = (2·u^(k) - 1)·LLR   (9.4.46)

function L_A = sova(Ly,G,Lu,ind_dec)
% Copyright: Yufei Wu, 1998, MPRG lab, Virginia Tech, for academic use
% This implements the Soft-Output Viterbi Algorithm in traceback mode
% Input: Ly : Scaled received bits Ly=0.5*L_c*y=(2*a*rate*Eb/N0)*y
%        G  : Code generator for the RSC code in binary matrix form
%        Lu : Extrinsic information from the previous decoder
%        ind_dec: Index of decoder=1/2
%                 (assumed to be terminated in all-zero state/open)
% Output: L_A : Log-Likelihood Ratio (soft value) of the estimated
%               message input bit u(k) at each stage,
%               ln(P(u(k)=1|y)/P(u(k)=-1|y))
lu = length(Ly)/2; % Number of y=[ys yp] pairs in Ly
lu1 = lu+1; Infty = 1e2;
[N,L] = size(G); Ns = 2^(L-1); % Number of states
delta = 30; % SOVA window size
% Make decision after 'delta' delay. Tracing back from (k+delta) to k,
% decide bit k when received bits for bit (k+delta) are processed.
% Set up the trellis defined by G.
[nout,ns,pout,ps] = trellis(G);
% Initialize the path metrics to -Infty
Mk(1:Ns,1:lu1) = -Infty;
Mk(1,1) = 0; % Only the initial all-zero state is possible
% Trace forward to compute all the path metrics
for k = 1:lu
   Lyk = Ly(k*2-[1 0]); k1 = k+1;
   for s = 1:Ns % Eq.(9.4.44), Eq.(9.4.45)
      Mk0 = Lyk*pout(s,1:2).' -Lu(k)/2 +Mk(ps(s,1),k);
      Mk1 = Lyk*pout(s,3:4).' +Lu(k)/2 +Mk(ps(s,2),k);
      if Mk0>Mk1, Mk(s,k1)=Mk0; DM(s,k1)=Mk0-Mk1; pinput(s,k1)=0;
      else Mk(s,k1)=Mk1; DM(s,k1)=Mk1-Mk0; pinput(s,k1)=1;
      end
   end
end
% Trace back from the all-zero state (D1) or the most likely state (D2)
% to get the input estimates uhat(k) and the most likely path shat
if ind_dec==1, shat(lu1)=1; else [Max,shat(lu1)]=max(Mk(:,lu1)); end
for k = lu:-1:1
   uhat(k) = pinput(shat(k+1),k+1);
   shat(k) = ps(shat(k+1),uhat(k)+1);
end
% As the soft output, find the minimum DM over a competing path with a
% different information bit estimate, per Step 4 and Eq.(9.4.46).
% (The tail of this listing is reconstructed from the description of
% Step 4; the original page was cut off here.)
for k = 1:lu
   LLR = min(Infty,DM(shat(k+1),k+1));
   for i = 1:delta
      if k+i<=lu
         u_ = 1-pinput(shat(k+i+1),k+i+1); % diverging competing input
         s_ = ps(shat(k+i+1),u_+1); % competing state at stage k+i
         for j = i-1:-1:0 % trace the competing path back to stage k
            u_ = pinput(s_,k+j+1); s_ = ps(s_,u_+1);
         end
         if u_~=uhat(k), LLR = min(LLR,DM(shat(k+i+1),k+i+1)); end
      end
   end
   L_A(k) = (2*uhat(k)-1)*LLR; % Eq.(9.4.46)
end

Now, it is time to take a look at the main program turbo_code_demo.m, which uses the routine logmap() or sova() (corresponding to the block named Log-MAP or SOVA in Fig. 9.15(c)) as well as the routines encoderm() (corresponding to Fig. 9.15(a)), rsc_encode(), demultiplex() (corresponding to Fig. 9.15(b)), and trellis() to simulate the turbo coding system depicted in Fig. 9.15. All of the programs listed here in connection with turbo coding stem from the routines developed by Yufei Wu in the MPRG (Mobile/Portable Radio Research Group) of Virginia Tech (Polytechnic Institute and State University). The following should be noted:
- One thing to note is that the extrinsic information L_e to be presented to one decoder i by the other decoder j should contain only the intrinsic information of decoder j, which is obtained from its own parity bits that are not available to decoder i. Accordingly, one decoder should remove the information about y^s (available commonly to both decoders) and the a priori information L(u) (provided by the other decoder) from the overall information L_A to produce the information that will be presented to the other decoder. (Would your friend be glad if you gave his/her present back to him/her or presented him/her with what he/she already had?) To prepare an equation for this information processing job of each decoder, we extract only the terms affected by u_k = ±1 from Eqs. (9.4.44) and (9.4.42) (each providing the basis for the path metric (Eq. (9.4.45)) and the LLR (Eq. (9.4.38)), respectively) to write

   [ (L(u)/2)·u_k^{(i)} + (1/2)·L_c·y_k^s·u_k^{(i)} ]_{u_k^{(i)}=+1} - [ (L(u)/2)·u_k^{(i)} + (1/2)·L_c·y_k^s·u_k^{(i)} ]_{u_k^{(i)}=-1} = L(u) + L_c·y_k^s

which conforms with Eq. (9.4.37) for the conditioned LLR L_{u|y}(u|y). To prepare the extrinsic information for the other decoder, this information should be removed from the overall information L_A(u) produced by the routine logmap() or sova() as
   L_e(u) = L_A(u) - L(u) - L_c·y_k^s   (9.4.47)
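In turbo_code_demo.m this subtraction appears as L_e1 = L_A1 - 2*Ly(1,1:2:2*lu) - Lu1 (and likewise for decoder 2), since Ly = 0.5*L_c*y makes 2*Ly(1,1:2:2*lu) equal to L_c·y_k^s at the systematic bit positions.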

- Another thing to note is that, as shown in Fig. 9.15(c), the basis for the final decision about u is the deinterleaved overall information L_A2 attributed to decoder 2. Accordingly, the turbo decoder should know the pseudo-random sequence map (used for interleaving by the transmitter) as well as the fading amplitude and SNR of the channel.
- The trellis structure and the output arguments produced by the routine trellis() are illustrated in Fig. 9.16.
Interested readers are invited to run the program turbo_code_demo.m with the value of the control constant dec_alg set to 0/1 for the Log-MAP/SOVA decoding algorithm and see the BER becoming lower as the decoding iteration proceeds. How do turbo codes work? How do the two decoding algorithms, Log-MAP and SOVA, compare? Do turbo codes have any weak points, and what are the countermeasures, if any? Unfortunately, answering such questions is difficult for the authors and therefore beyond the scope of this book. As can be seen from the simulation results, turbo codes have an excellent BER performance close to the Shannon limit at low and medium SNRs. However, the decreasing rate of the BER curve of a turbo code can be very low at high SNR depending on the interleaver and the free distance of the code, which is called the error floor phenomenon. Besides, turbo codes need not only a large interleaver and block size but also many iterations to achieve such a good BER performance, which increases the complexity and latency (delay) of the decoder.


%turbo_code_demo.m
% simulates the classical turbo encoding-decoding system.
% The 1st encoder is terminated with tail bits. (lm+M) bits are scrambled
% and passed to the 2nd encoder, which is left open without termination.
clear
dec_alg = 1; % 0/1 for Log-MAP/SOVA
puncture = 1; % Puncture or not
rate = 1/(3-puncture); % Code rate
lu = 1000; % Frame size
Nframes = 100; % Number of frames
Niter = 4; % Number of iterations
EbN0dBs = 2.6; %[1 2 3];
N_EbN0dBs = length(EbN0dBs);
G = [1 1 1; 1 0 1]; % Code generator
a = 1; % Fading amplitude; a=1 in AWGN channel
[N,L]=size(G); M=L-1; lm=lu-M; % Length of message bit sequence
for nENDB = 1:N_EbN0dBs
   EbN0 = 10^(EbN0dBs(nENDB)/10); % Convert Eb/N0[dB] to a normal number
   L_c = 4*a*EbN0*rate; % Reliability value of the channel
   sigma = 1/sqrt(2*rate*EbN0); % Standard deviation of the AWGN noise
   noes(nENDB,:) = zeros(1,Niter);
   for nframe = 1:Nframes
      m = round(rand(1,lm)); % Information message bits
      [temp,map] = sort(rand(1,lu)); % Random interleaver mapping
      x = encoderm(m,G,map,puncture); % Encoder output x(+1/-1)
      noise = sigma*randn(1,lu*(3-puncture));
      r = a.*x + noise; % Received bits
      y = demultiplex(r,map,puncture); % Input for decoders 1 and 2
      Ly = 0.5*L_c*y; % Scale the received bits
      for iter = 1:Niter
         ... ... ... ... ... ... ... ...
         mhat(map)=(sign(L_A2)+1)/2; % Depolarize +1/-1 to 1/0 to estimate the message bits
         noe(iter)=sum(mhat(1:lu-M)~=m); % Number of bit errors
      end % End of iter loop
      % Total number of bit errors for all iterations
      noes(nENDB,:) = noes(nENDB,:) + noe;
      ber(nENDB,:) = noes(nENDB,:)/nframe/(lu-M); % Bit error rate
      for i=1:Niter, fprintf('%14.4e ', ber(nENDB,i)); end
   end % End of nframe loop
end % End of nENDB loop


%do_BCH_BPSK_sim.m
clear, clf
K=16; % Number of input bits to the BCH encoder (message length)
N=31; % Number of output bits from the BCH encoder (codeword length)
Rc=K/N; % Code rate to be multiplied with the SNR in AWGN channel block
b=1; M=2^b; % Number of bits per symbol and modulation order
T=0.001/K; Ts=b*T; % Sample time and Symbol time
EbN0dBs=[0:4:8]; SNRbdBs=EbN0dBs+3; % for simulated BER
% SNR=Eb/(N0/2): 10*log10(Eb/(N0/2))=10*log10(Eb/N0)+3 [dB]
EbN0dBs_t=0:0.1:10; EbN0s_t=10.^(EbN0dBs_t/10); % for theoretical BER
SNRbdBs_t=EbN0dBs_t+3;
for i=1:length(EbN0dBs)
   EbN0dB=EbN0dBs(i);
   sim('BCH_BPSK_sim'); % Run the Simulink model
   BERs(i)=BER(1); % just ber among {ber, # of errors, total # of bits}
   fprintf(' With EbN0dB=%4.1f, BER=%10.4e=%d/%d\n', EbN0dB,BER);
end
BER_theory= prob_error(SNRbdBs_t,'PSK',b,'BER');
SNRbcdB_t=SNRbdBs_t+10*log10(Rc);
et=prob_error(SNRbcdB_t,'PSK',b,'BER');
[g_BCH,No_of_correctable_error_bits] = bchgenpoly(N,K);
pemb_theory=prob_err_msg_bit(et,N,No_of_correctable_error_bits);

   p_{e,b} ≈ (1/N) Σ_{k=d_c+1}^{N} k·(N over k)·ε^k·(1-ε)^{N-k}   (9.4.11)

   (d_c: the number of correctable error bits, ε: the channel (coded) bit error probability)

semilogy(EbN0dBs,BERs,'r*', EbN0dBs_t,BER_theory,'k', ...
         EbN0dBs_t,pemb_theory,'b:')
xlabel('Eb/N0[dB]'); ylabel('BER'); title('BER of BCH code with BPSK');
legend('Simulation','Theoretical-No coding','Theoretical-BCH coding');
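For reference, Eq. (9.4.11) can be evaluated with a few lines like the following sketch (the function name prob_err_msg_bit_sketch is ours; the book's actual routine prob_err_msg_bit() used above may differ in details such as vectorization over the channel BER):

function pemb = prob_err_msg_bit_sketch(ep,N,dc)
% Eq.(9.4.11): message-bit error probability of a code that corrects
% up to dc bit errors per N-bit codeword, for channel (coded) BER ep
pemb = 0;
for k = dc+1:N
   pemb = pemb + k*nchoosek(N,k)*ep^k*(1-ep)^(N-k);
end
pemb = pemb/N;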


%dc09p07.m
% To practice using convenc() and vitdec() for channel coding
clear, clf
Gc=[4 5 11; 1 4 2]; % Octal code generator matrix
K=size(Gc,1); % Number of encoder input bits
% Constraint length vector
Gc_m=max(Gc.');
for i=1:length(Gc_m), Lc(i)=length(deci2bin1(oct2dec(Gc_m(i)))); end
trel=poly2trellis(Lc,Gc);
Tbdepth=sum(Lc)*5; delay=Tbdepth*K;
lm=1e5; msg=randint(1,lm);
transmission_ber=0.02;
notbe=round(transmission_ber*lm); % Number of transmitted bit errors
ch_input=convenc([msg zeros(1,delay)],trel);
% Received/modulated/detected signal
ch_output= rem(ch_input+randerr(1,length(ch_input),notbe),2);
decoded_trunc= vitdec(ch_output,trel,Tbdepth,'trunc','hard');
ber_trunc= sum(msg~=decoded_trunc(????))/lm;
decoded_cont= vitdec(ch_output,trel,Tbdepth,'cont','hard');
ber_cont=sum(msg~=decoded_cont(????????????))/lm;
% It is indispensable to use the delay for the decoding result
% obtained using vitdec(,,,'cont',)
nn=[0:100-1];
subplot(221), stem(nn,msg(nn+1)), title('Message sequence')
subplot(223), stem(nn,decoded_cont(nn+1)), hold on
stem(delay,0,'rx')
decoded_term= vitdec(ch_output,trel,Tbdepth,'term','hard');
ber_term=sum(msg~=decoded_term(????))/lm;
fprintf('\n   BER_trunc    BER_cont    BER_term')
fprintf('\n   %9.2e   %9.2e   %9.2e\n', ber_trunc,ber_cont,ber_term)



function [pemb,nombe,notmb]=Viterbi_QAM(Gc,b,SNRbdB,MaxIter)
if nargin<4, MaxIter=1e5; end
if nargin<3, SNRbdB=5; end
if nargin<2, b=4; end
[K,N]=size(Gc); Rc=K/N;
Gc_m=max(Gc.'); % Constraint length vector
for i=1:length(Gc_m), Lc(i)=length(deci2bin1(oct2dec(Gc_m(i)))); end
Nf=144; % Number of bits per frame
Nmod=Nf*N/K/b; % Number of QAM symbols per modulated frame
SNRb=10.^(SNRbdB/10); SNRbc=SNRb*Rc;
% Rc does not need to be multiplied since noise will be added per symbol.
sqrtSNR=sqrt(2*b*SNRb); % Complex noise for b-bit (coded) symbol
trel=poly2trellis(Lc,Gc);
Tbdepth=5; delay=Tbdepth*K;
nombe=0; Target_no_of_error=100;
for iter=1:MaxIter
   msg=randint(1,Nf); % Message vector
   coded= convenc(msg,trel); % Convolutional encoding
   modulated= QAM(coded,b); % 2^b-QAM modulation
   r= modulated +(randn(1,Nmod)+j*randn(1,Nmod))/sqrtSNR;
   demodulated= QAM_dem(r,b); % 2^b-QAM demodulation
   decoded= vitdec(demodulated,trel,Tbdepth,'trunc','hard');
   nombe = nombe+sum(msg~=decoded(1:Nf)); % Number of message bit errors
   if nombe>Target_no_of_error, break; end
end
notmb=Nf*iter; % Number of total message bits
pemb=nombe/notmb; % Message bit error probability

%do_Viterbi_QAM.m
clear, clf
Nf=144; Tf=0.001; Tb=Tf/Nf; % Frame size, Frame time, and Sample/Bit time
Gc=[133 171]; % Octal code generator matrix
[K,N]=size(Gc); Rc=K/N; % Message/Codeword length and Code rate
% Constraint length vector
Gc_m=max(Gc.');
for i=1:length(Gc_m)
   Lc(i)=length(deci2bin1(oct2dec(Gc_m(i))));
end
Tbdepth=sum(Lc)*5; delay=Tbdepth*K; % Traceback depth and decoding delay
b=4; M=2^b; % Number of bits per symbol and Modulation order
Ts=b*Rc*Tb; % Symbol time corresponding to b*Rc message bits
N_factor=sqrt(2*(M-1)/3); % Eq.(7.5.4a)
EbN0dBs=[3 6]; Target_no_of_error=50;
for i=1:length(EbN0dBs)
   EbN0dB=EbN0dBs(i); SNRbdB=EbN0dB+3;
   randn('state', 0);
   [pemb,nombe,notmb]=???????_QAM(Gc,b,SNRbdB,Target_no_of_error); % MATLAB
   pembs(i)=pemb;
   sim('Viterbi_QAM_sim'); pembs_sim(i)=BER(1); % Simulink
end
[pembs; pembs_sim] % Compare BERs obtained from MATLAB and Simulink
EbN0dBs_t=0:0.1:10; SNRbdBs_t=EbN0dBs_t+3;
BER_theory=prob_error(SNRbdBs_t,'QAM',b,'BER');
semilogy(EbN0dBs,pembs,'r*', EbN0dBs_t,BER_theory,'b')
xlabel('Eb/N0[dB]'); ylabel('BER');

function qamseq=QAM(bitseq,b)
bpsym = nextpow2(max(bitseq)); % Number of bits per symbol
if bpsym>0, bitseq = deci2bin(bitseq,bpsym); end
if b==1, qamseq=bitseq*2-1; return; end % BPSK modulation
% 2^b-QAM modulation
N0=length(bitseq); N=ceil(N0/b);
bitseq=bitseq(:).'; bitseq=[bitseq zeros(1,N*b-N0)];
b1=ceil(b/2); b2=b-b1; b12=2^b1; b22=2^b2;
g_code1=2*gray_code(b1)-b12+1; g_code2=2*gray_code(b2)-b22+1;
tmp1=sum([1:2:b12-1].^2)*b22; tmp2=sum([1:2:b22-1].^2)*b12;
M=2^b; Kmod=sqrt(2*(M-1)/3); % Normalization factor
%Kmod=sqrt((tmp1+tmp2)/2/(2^b/4)) % Normalization factor
qamseq=[];
for i=0:N-1
   bi=b*i;
   i_real=bin2deci(bitseq(bi+[1:b1]))+1;
   i_imag=bin2deci(bitseq(bi+[b1+1:b]))+1;
   qamseq=[qamseq (g_code1(i_real)+j*g_code2(i_imag))/Kmod];
end

function [g_code,b_code]=gray_code(b)
N=2^b; g_code=0:N-1;
if b>1, g_code=gray_code0(g_code); end
b_code=deci2bin(g_code);

function g_code=gray_code0(g_code)
N=length(g_code); N2=N/2;
if N>=4, g_code(N2+1:N)=fftshift(g_code(N2+1:N)); end
if N>4
   g_code=[gray_code0(g_code(1:N2)) gray_code0(g_code(N2+1:N))];
end

function bitseq=QAM_dem(qamseq,b,bpsym)
if b==1, bitseq=(qamseq>=0); return; end % BPSK demodulation
% 2^b-QAM demodulation
N=length(qamseq); b1=ceil(b/2); b2=b-b1;
g_code1=2*gray_code(b1)-2^b1+1; g_code2=2*gray_code(b2)-2^b2+1;
tmp1=sum([1:2:2^b1-1].^2)*2^b2; tmp2=sum([1:2:2^b2-1].^2)*2^b1;
Kmod=sqrt((tmp1+tmp2)/2/(2^b/4)); % Normalization factor
g_code1=g_code1/Kmod; g_code2=g_code2/Kmod;
bitseq=[];
for i=1:N
   [emin1,i1]=min(abs(real(qamseq(i))-g_code1));
   [emin2,i2]=min(abs(imag(qamseq(i))-g_code2));
   bitseq=[bitseq deci2bin1(i1-1,b1) deci2bin1(i2-1,b2)];
end
if (nargin>2)
   N = length(bitseq)/bpsym;
   bitmatrix = reshape(bitseq,bpsym,N).';
   for i=1:N, intseq(i)=bin2deci(bitmatrix(i,:)); end
   bitseq = intseq;
end
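A quick round-trip sanity check of the two routines above (assuming QAM.m, QAM_dem.m, and the book's helper routines deci2bin1()/bin2deci() are on the path):

b = 4; bits = randint(1,400); % 400 random bits for 16-QAM
s = QAM(bits,b); % modulate (unit average symbol energy)
bits_hat = QAM_dem(s,b); % noiseless demodulation
isequal(bits,bits_hat) % should return 1 (true)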

