Table of Contents
1.0. CONVOLUTIONAL CODES
1.1. Overview
1.2. Convolutional coder
1.3. Representation of Convolutional codes
1.4. Parity Equations
1.5. Decoding convolutional codes
Convolutional and Turbo codes
1.1. Overview
In block coding, the encoder accepts a k-bit message block and generates an n-bit codeword; codewords are thus produced on a block-by-block basis. Clearly, provision must be made in the encoder to buffer an entire message block before generating the associated codeword. There are applications, however, where the message bits arrive serially rather than in large blocks, in which case the use of a buffer may be undesirable.
Lc = (L/k)n
Alternative methods of describing a convolutional code are the tree diagram, the
trellis diagram, and the state diagram. The example will be used to explore these
alternative methods.
Since the first bit in the output sequence is the message bit, this particular convolutional code is systematic. As a result, v2 and v3 can be viewed as parity-check bits. The output sequence for an arbitrary input sequence is often determined by using a code tree. For example, the tree diagram for the above convolutional encoder is illustrated in Figure 1.3. Initially, the encoder is set to an all-zero state. The tree diagram shows that if the first message (input) bit is 0, the output sequence is 000, and if the first input bit is 1, the output sequence is 111. If the first input bit is 1 and the second bit is 0, the second set of 3 output bits is 001. Continuing through the tree, we can show that if the third input bit is 0, the output is 011, and if the third input bit is 1, the output is 100. Supposing that a particular sequence takes us to a particular node in the tree, the branching rule is to follow the upper branch if the next input bit is 0 and the lower branch if it is 1. Consequently, a particular path through the tree can be traced for a specific input sequence. The tree generated by this convolutional encoder (Figure 1.1) repeats its structure after the third stage; the tree diagram is therefore shown only up to the third stage, as in Figure 1.3. This behavior is consistent with the constraint length Lc = 3: the 3-bit output sequence at each stage is determined by the incoming input bit together with the 2 bits contained in the first two stages (r0, r1) of the shift register.
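The tree outputs quoted above are enough to reconstruct the encoder itself. From them, the two parity bits appear to be v2 = x XOR r1 and v3 = x XOR r0 XOR r1; these rules are inferred from the quoted outputs, not stated explicitly in the text. A minimal sketch:

```python
# Rate-1/3 systematic convolutional encoder with Lc = 3, reconstructed
# from the tree outputs quoted in the text. The parity rules
# v2 = x XOR r1 and v3 = x XOR r0 XOR r1 are inferred, not stated.

def encode(bits):
    r0 = r1 = 0                  # shift register, initially all-zero state
    out = []
    for x in bits:
        v1 = x                   # systematic output: the message bit itself
        v2 = x ^ r1              # first parity-check bit (inferred rule)
        v3 = x ^ r0 ^ r1         # second parity-check bit (inferred rule)
        out += [v1, v2, v3]
        r0, r1 = x, r0           # shift the new bit into the register
    return out
```

Tracing the inputs from the text reproduces the quoted branches: the input 1, 0, 0 yields 111, 001, 011, and the input 1, 0, 1 yields 111, 001, 100.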
Arrows represent the transitions from state to state. The states of the state diagram are labeled according to the states of the trellis diagram. The 3 bits shown next to each transition line represent the output bits. From the preceding discussion, we are in a position to draw the code tree, trellis diagram, and state diagram for the fixed rate-1/2 convolutional coder. The message sequence 10111 is used as input to the encoder in Figure 6.8, and the code tree of Figure 1.3 is drawn. The tree-drawing procedure is the same as described previously for the encoder in Figure 6.8: moving up at the first branching level, down at the second and third, and up again at the fourth level produces the outputs appended to the traversed branches. After the first three branches the structure becomes repetitive, a behavior that is consistent with the constraint length of the encoder, Lc = 3. From the
code tree, the trellis and state diagrams are drawn as shown in Figures 6.13 to 6.15, respectively. The preceding procedures can be generalized, without loss of generality, to code rate Rc = k/n. We have observed that the tree diagram will have 2^k branches originating from each branching node. Given that the effect of the constraint (memory) length Lc is the same, paths traced from emerging nodes of the same label in the tree diagram begin to remerge in groups of 2^k after the first Lc branches. This implies that all paths with k(Lc − 1) identical data bits merge together, producing a trellis of 2^(k(Lc − 1)) states with all branchings and mergings appearing in groups of 2^k branches. Also, the state diagram will have 2^(k(Lc − 1)) states, with each state having 2^k input branches coming into it. Thus, Lc can be said to represent the number of k-tuples stored in the shift register.
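As a quick numerical check of these counting rules (the helper below is ours, not from the text): for the binary case k = 1 with Lc = 3, they give 2^(1·2) = 4 states and 2 branches per state.

```python
# Counting states and branches for a rate k/n convolutional code with
# constraint length Lc (measured in k-tuples), per the rules above.

def trellis_dimensions(k, Lc):
    states = 2 ** (k * (Lc - 1))   # paths with k(Lc - 1) identical bits merge
    branches = 2 ** k              # branchings/mergings come in groups of 2^k
    return states, branches
```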
A convolutional code that produces r parity bits per window and slides the window forward one bit at a time has rate 1/r for long messages. The greater the value of r, the greater the resilience to bit errors, but at the cost of proportionally more communication bandwidth. In practice, r and the constraint length are chosen to be as small as possible while still allowing a low probability of bit error. The whole process is explained briefly below.
Example message stream: 0101100101100011
(the encoder slides a K-bit window across this stream)
The encoder looks at K bits at a time and produces r parity bits according to carefully chosen functions that operate over various subsets of those K bits. For the example here, with K = 3 and r = 2, the encoder uses the following functions to find the parity bits (these match the generators and the worked computation given below):

p0[n] = (x[n] + x[n − 1] + x[n − 2]) mod 2
p1[n] = (x[n] + x[n − 1]) mod 2

The encoder then sends out these bits, slides the window one bit to the right, and repeats the process.
We have seen the above parity equations used by the encoder to produce the parity bits. In general, each parity equation can be viewed as being produced by combining the message bits, x, with a generator polynomial, g.
We denote by gi the K-element generator polynomial for parity bit pi, giving pi as:

pi[n] = ( Σ(j = 0 to K − 1) gi[j] · x[n − j] ) mod 2

The form of this equation is a convolution of g and x, hence the term "convolutional code".
For the example above, the generator polynomials are:

g0 = (1, 1, 1)
g1 = (1, 1, 0)

For the message x = 1, 0, 1, 1, …, the parity bits are:

p0[0] = (1 + 0 + 0) mod 2 = 1
p1[0] = (1 + 0) mod 2 = 1
p0[1] = (0 + 1 + 0) mod 2 = 1
p1[1] = (0 + 1) mod 2 = 1
p0[2] = (1 + 0 + 1) mod 2 = 0
p1[2] = (1 + 0) mod 2 = 1
p0[3] = (1 + 1 + 0) mod 2 = 0
p1[3] = (1 + 1) mod 2 = 0

Therefore the parity bits sent over the channel are [1, 1, 1, 1, 0, 1, 0, 0, …].
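The same stream can be computed directly from the convolution formula. The sketch below (the helper name is ours) applies the generators g0 = (1, 1, 1) and g1 = (1, 1, 0) to the message 1, 0, 1, 1 and reproduces the hand computation:

```python
# Direct implementation of p_i[n] = (sum_j g_i[j] * x[n - j]) mod 2,
# interleaving p0 and p1 as they would be sent over the channel.

def parity_stream(x, generators):
    out = []
    for n in range(len(x)):
        for g in generators:
            # message bits before the start of the stream are taken as 0
            out.append(sum(g[j] * x[n - j] for j in range(len(g)) if n >= j) % 2)
    return out
```

With the text's generators and the message 1, 0, 1, 1 this yields [1, 1, 1, 1, 0, 1, 0, 0], the stream computed by hand above.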
Trellis Decoding:
The trellis is a structure derived from the state machine that will allow us to develop an efficient way to decode convolutional codes. The state machine view shows what happens at each instant when the sender has a message bit to process, but it doesn't show how the system evolves in time. The trellis is a structure that makes the time evolution explicit.
From the diagram we can see that each column of the trellis has the set of states; each state in a column is connected to two states in the next column, the same two states as in the state diagram. The top link from each state in a column of the trellis shows what gets transmitted on a "0", while the bottom shows what gets transmitted on a "1". The picture shows the links between states that are traversed in the trellis given the message 101100.
We can now think about what the decoder needs to do in terms of this trellis. It receives a sequence of parity bits and needs to determine the best path through the trellis, that is, the sequence of trellis states that best explains the observed, and possibly corrupted, sequence of received parity bits. The Viterbi decoder finds a maximum-likelihood path through the trellis.
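A minimal hard-decision Viterbi decoder for the K = 3, rate-1/2 example code (g0 = 111, g1 = 110) can be sketched directly from this trellis view. The implementation details below (integer state encoding, Hamming branch metric) are illustrative choices, not taken from the text:

```python
# Hard-decision Viterbi decoding for the rate-1/2, K = 3 example code
# with generators g0 = (1,1,1) and g1 = (1,1,0). The state is the pair
# (x[n-1], x[n-2]) packed into an integer; the branch metric is the
# Hamming distance between expected and received parity bits.

G = [(1, 1, 1), (1, 1, 0)]   # generator polynomials
K = 3                        # constraint length
NSTATES = 2 ** (K - 1)       # 4 trellis states

def branch_output(state, bit):
    """Parity pair emitted on input `bit` when leaving `state`."""
    window = (bit, (state >> 1) & 1, state & 1)   # x[n], x[n-1], x[n-2]
    return tuple(sum(g[j] * window[j] for j in range(K)) % 2 for g in G)

def viterbi(received):
    """received: list of (p0, p1) pairs; returns the most likely message."""
    INF = float("inf")
    cost = [0.0] + [INF] * (NSTATES - 1)          # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for pair in received:
        new_cost = [INF] * NSTATES
        new_paths = [[] for _ in range(NSTATES)]
        for s in range(NSTATES):
            if cost[s] == INF:
                continue
            for bit in (0, 1):
                nxt = (bit << 1) | (s >> 1)       # shift the new bit in
                metric = sum(a != b for a, b in zip(branch_output(s, bit), pair))
                if cost[s] + metric < new_cost[nxt]:
                    new_cost[nxt] = cost[s] + metric
                    new_paths[nxt] = paths[s] + [bit]
        cost, paths = new_cost, new_paths
    best = min(range(NSTATES), key=cost.__getitem__)  # cheapest survivor
    return paths[best]
```

Decoding the parity stream computed earlier, grouped as [(1,1), (1,1), (0,1), (0,0)], recovers the message 1, 0, 1, 1, and it still does so when one received bit is flipped.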
2.1. Introduction
code #2. It should be noted that the trellis structure and free distance for the Recursive Systematic Convolutional (RSC) and Nonsystematic Convolutional (NSC) codes are the same; the output sequences, however, are not the same for identical input sequences. The N-bit data block is first encoded by code #1; the same data block is also interleaved and encoded by code #2. The main purpose of the interleaver is to randomize bursty error patterns so that the block can be correctly decoded; it also helps to increase the minimum distance of the turbo code. An RSC code can be obtained from an NSC code by adding a feedback loop and setting one of the output bits equal to the input bits. The two RSC encoders can be further represented as follows, thus revealing their recursive nature.
In order to increase the transmission efficiency, puncturing can be used. Puncturing means removing certain bits from the output stream according to a fixed pattern given by a puncturing matrix.
2.3.1. Interleaver
2.3.2. Puncturing
Puncturing is a technique used to increase the code rate. A rate-1/3 encoder is converted to a rate-1/2 encoder by multiplexing the two coded streams: the multiplexer can choose the odd-indexed outputs from the upper RSC encoder and the even-indexed outputs from the lower one. In more complicated systems, puncturing tables are used. An important application of puncturing is to provide unequal error protection: for relatively unimportant bits, or under cleaner channel conditions, a higher code rate is obtained by puncturing more of the coded bits, while for more important bits, or under noisy channel conditions, a lower code rate (less puncturing) can be used.
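The odd/even multiplexing described above can be sketched as follows (the function name and sample bit streams are illustrative):

```python
# Puncturing sketch: rate 1/3 -> rate 1/2. Every systematic bit is kept;
# the parity bit sent alongside it alternates between the two encoders,
# odd-indexed positions from the upper one and even-indexed positions
# from the lower one, as described in the text.

def puncture(systematic, parity_upper, parity_lower):
    out = []
    for i, s in enumerate(systematic):
        out.append(s)                      # systematic bit is always kept
        out.append(parity_upper[i] if i % 2 == 1 else parity_lower[i])
    return out
```

Two output bits now leave the encoder per input bit, giving rate 1/2 instead of 1/3.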
In summary, a classic turbo encoder results from the combination of two (or more) encoders. These are often recursive systematic convolutional (RSC) coders, because their recursion provides interesting pseudo-random properties. Specifically, the turbo encoder will typically produce three outputs to be sent on the transmission channel (after any modulation):
1. the so-called systematic output, i.e., the input sequence of the encoder itself;
2. parity output 1: the output of the first encoder;
3. parity output 2: the output of the second encoder. The difference between these two outputs comes from the fact that the frame is interleaved ("mixed") before entering the second encoder.
This coding makes it possible to distribute the information carried by a bit of the frame over its neighbors (with parity encoding 1) and even over the entire length of
the transmitted frame (with parity encoding 2). Thus, if part of the message is
heavily damaged during transmission, the information can still be found elsewhere.
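The three outputs can be sketched end to end. The RSC encoder below (feedback taps 1 + D + D^2, feedforward taps 1 + D^2) and the fixed interleaver permutation are illustrative assumptions; the text does not specify either:

```python
# Turbo encoder sketch: systematic output plus two RSC parity streams,
# the second computed on the interleaved frame. The RSC generator
# (feedback 1 + D + D^2, feedforward 1 + D^2) is an assumed example.

def rsc_parity(bits):
    s0 = s1 = 0                          # two memory cells of the RSC
    out = []
    for x in bits:
        fb = x ^ s0 ^ s1                 # recursive feedback term
        out.append(fb ^ s1)              # feedforward taps 1 + D^2
        s0, s1 = fb, s0
    return out

def turbo_encode(frame, perm):
    interleaved = [frame[p] for p in perm]   # "mixed" copy of the frame
    systematic = list(frame)                 # output 1: the input itself
    parity1 = rsc_parity(frame)              # output 2: first encoder
    parity2 = rsc_parity(interleaved)        # output 3: second encoder
    return systematic, parity1, parity2
```

Because the second encoder sees a permuted frame, a burst of channel errors that wipes out part of parity 1 rarely wipes out the corresponding positions of parity 2.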
the confidence levels and thus obtain a more accurate estimate of the transmitted message. The purpose of the interleaver is the same as before, i.e., to de-correlate the error bursts. The output of DEC2 is fed back to DEC1, and the process is repeated several times depending on the BER required for the application.
The simulation results show that the Turbo code is a powerful error-correcting technique even in low-SNR environments, achieving near-Shannon-capacity performance. However, several factors need to be considered in Turbo code design. First, a trade-off between the BER and the number of iterations must be made: more iterations give a lower BER, but the decoding delay is also longer. Secondly, the effect of the frame size on the BER must be considered: although a Turbo code with a larger frame size has better performance, the output delay is also longer. Thirdly, the code rate is another factor to be considered: a lower code rate gives stronger protection but needs more bandwidth. From these results, the BER reaches 10^-5 at an Eb/N0 of only 2 dB. The Log-MAP algorithm outperforms the conventional Viterbi algorithm used with the convolutional codes in DVB systems.
Figure 3.1(a, b) shows the bit-error-rate curves for various values of Eb/N0. As Eb/N0 increases, the probability of error keeps decreasing, reaching a level of 10^-6 at a low value of Eb/N0. The Log-MAP algorithm is used for decoding the received data.
The original image shown in Figure 3.2(a) is encoded using Turbo codes and subjected to additive white Gaussian noise. A random interleaver and the Log-MAP decoding algorithm are used, and the generator polynomial is the Group Special Mobile (GSM) committee standard. The original image can almost be retrieved at the fifth iteration itself. It is clear from the images 3.2(a–e) that, as the number of iterations increases, the noise in the data is removed, and at the end the original image is recovered.