
UNIVERSITY OF BUEA

FACULTY OF ENGINEERING AND TECHNOLOGY


Department of Electrical and Electronic Engineering
Course Title: Mobile Telecommunication II
Course code: EEF 428

A REPORT ON CONVOLUTIONAL AND TURBO CODES

Presented by:

Gilbert Tanuie Achiri Tima FE15A087


Nestor Abiangang Abiawuh FE15A151

Course Instructor: Mr. Nkemeni Valery


Convolutional and Turbo codes

Table of Contents
1.0. CONVOLUTIONAL CODES
1.1. Overview
1.2. Convolutional coder
1.3. Representation of Convolutional codes
1.4. Parity Equations
1.5. Decoding convolutional codes

2.0. TURBO CODES


2.1. Introduction
2.2. Turbo Encoder
2.3. Turbo Encoding process
2.3.1. Interleaving
2.3.2. Puncturing
2.4. Turbo decoder and the decoding process
2.5. Case study simulation, results and conclusions
2.6. Applications and uses of Turbo codes
2.7. References


1.0. CONVOLUTIONAL CODES

1.1. Overview

In block coding, the encoder accepts a k-bit message block and generates an n-bit codeword. Thus, codewords are produced on a block-by-block basis, and provision must be made in the encoder to buffer an entire message block before generating the associated codeword. There are applications, however, where the message bits arrive serially rather than in large blocks, in which case the use of a buffer may be undesirable. In such situations convolutional coding, which operates on the incoming message sequence continuously in a serial manner, is preferred.

1.2. Convolutional Coder

A convolutional coder is a finite-memory system. The name convolutional refers to the fact that the added redundant bits are generated by mod-2 convolutions. A generalized convolutional encoder is shown in Figure 1.0. It consists of an L-stage shift register, n mod-2 adders, a commutator, and a network of feedback connections between the shift register and the adders. The number of bits in the input data stream is k, and the number of output bits for each k-bit sequence is n. Since n bits are produced at the output for each input of k bits, the code rate is still Rc = k/n. A very important parameter in the consideration of convolutional encoding is the constraint or memory length, defined as the number of shifts over which a single message bit can influence the encoder output. For example, if the input message data are fed into the L-stage shift register in groups of k bits, then the register can hold (L/k) groups. Given that each group produces n output bits, the constraint or memory length is


Lc = (L/k) n

A generalized structure of a convolutional code encoder is shown below

Fig 1.0. Convolutional code encoder

Fig 1.1. Rate ½ convolutional encoder

1.3. Representation of Convolutional codes

Alternative methods of describing a convolutional code are the tree diagram, the trellis diagram, and the state diagram. The following example will be used to explore these alternative methods.

Example: Consider the binary convolutional encoder of rate 1/3 shown in Figure 1.2. It is similar to the rate-1/2 encoder of Figure 1.1, except that the output v1 is fed directly from r0, so that for each message (input) bit the output sequence (v1 v2 v3) is generated.


Since the first bit in the output sequence is the message bit, this particular convolutional code is systematic. As a result, v2 and v3 can be viewed as parity-check bits. The output sequence for an arbitrary input sequence is often determined by using a code tree. For example, the tree diagram for this convolutional encoder is illustrated in Figure 1.3. Initially, the encoder is set to an all-zero state. The tree diagram shows that if the first message (input) bit is 0, the output sequence is 000, and if the first input bit is 1, the output sequence is 111. If the first input bit is 1 and the second bit is 0, the second set of 3 output bits will be 001. Continuing through the tree, if the third input bit is 0 the output will be 011, and if the third input bit is 1 the output will be 100. Supposing that a particular sequence takes us to a particular node in the tree, the branching rule allows us to follow the upper branch if the next input bit is 0 and the lower branch if it is 1. Consequently, a particular path through the tree can be traced for a specific input sequence. It can be observed that the tree generated by this convolutional encoder (Figure 1.2) repeats itself after the third stage; the tree diagram is therefore shown only up to the third stage, as in Figure 1.3. This behavior is consistent with the fact that the constraint length is 3 (Lc = 3): the 3-bit output sequence at each stage is determined by an input bit and the 2 bits contained in the first two stages (r0, r1) of the shift register.


Fig 1.2. Rate 1/3 convolutional Encoder

Fig 1.3. code tree diagram of a Rate 1/3 coder

It should be noted that the bit in the last stage (r2) of the register is shifted out and does not affect the output. In essence, the 3-bit output for each input bit is determined by the input bit and the current state, where the four possible states are labeled a, b, c, and d in Figure 1.3 and denoted, respectively, by 00, 01, 10, and 11. With this labeling, it can be observed in Figure 1.3 that, at the third stage, there are two nodes each with the label a, b, c, or d, and all branches originating from two nodes having the same label generate identical output sequences. This implies that two nodes having the same label can be merged. By merging two nodes having the same label in the code tree diagram of Figure 1.3, another diagram emerges, as shown in Figure 1.4. This diagram is called the trellis diagram. The dotted lines denote the output generated by the lower branch of the code tree with input bit 1, while the solid lines denote the output generated by the upper branch of the code tree with input bit 0. The completely repetitive structure of the trellis diagram in Figure 1.4 suggests that a further reduction is possible in the representation of the code: the state diagram. A state diagram, shown in Figure 1.5, is another way of representing the states (a, b, c, and d) and the transitions from one state to another.


Fig 1.4. Trellis diagram of fig 1.2

Fig 1.5. State diagram of Fig 1.2

Arrows represent the transitions from state to state. The states of the state diagram are labeled according to the states of the trellis diagram, and the 3 bits shown next to each transition line represent the output bits. From the preceding discussions, we are in a position to draw the code tree, trellis diagram, and state diagram for the fixed rate-1/2 convolutional coder. The message sequence 10111 is used as input to the rate-1/2 encoder of Figure 1.1, and the resulting code tree is shown in Figure 1.6. The tree-drawing procedure is the same as described previously: moving up at the first branching level, down at the second and third, and up again at the fourth level produces the outputs appended to the traversed branches. After the first three branches the structure becomes repetitive, a behavior that is consistent with the constraint length of the encoder, which is 3 (Lc = 3). From the code tree, the corresponding trellis and state diagrams can be drawn in the same way.

The preceding procedures can be generalized, without loss of generality, to a code rate Rc = k/n. The tree diagram will then have 2^k branches originating from each branching node. Given that the effect of the constraint or memory length Lc will be the same, paths traced from emerging nodes of the same label in the tree diagram will begin to remerge in groups of 2^k after the first Lc branches. This implies that all paths with k(Lc - 1) identical data bits will merge together, producing a trellis of 2^(k(Lc - 1)) states, with all branchings and mergings appearing in groups of 2^k branches. Likewise, the state diagram will have 2^(k(Lc - 1)) states, each with 2^k input branches coming into it. For example, with k = 1 and Lc = 3 the trellis and state diagram have 2^(1x(3-1)) = 4 states, as seen in Figures 1.4 and 1.5. Thus, Lc can be said to represent the number of k-tuples stored in the shift register.

Fig 1.6. code tree diagram of a ½ rate coder

A convolutional code that produces r parity bits per window and slides the window forward by one bit at a time has a rate of 1/r (when calculated over long messages). The greater the value of r, the greater the resilience to bit errors, but at the cost of a correspondingly larger communication bandwidth. In practice, r and the constraint length are chosen to be as small as possible while still giving a low probability of bit error. The whole process is briefly explained below.

Example bit stream: 0 1 0 1 1 0 0 1 0 1 1 0 0 0 1 1, scanned by a K-bit sliding window.

The encoder looks at K bits at a time and produces r parity bits according to carefully chosen functions that operate over various subsets of the K bits.

In the above example, K = 3 and r = 2.

The rate of this code is 1/r = 1/2.

The encoder uses the following functions to find the parity bits:

P0[n] = x[n] + x[n-1] + x[n-2]

P1[n] = x[n] + x[n-1]

The encoder then sends out these bits, slides the window one bit to the right, and repeats the process.
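To make the sliding-window operation concrete, the following Python sketch encodes a bit stream with the two parity functions given above. It is only an illustration; the function name and structure are our own and not part of any standard.

def conv_encode(x):
    # Rate-1/2 convolutional encoder with constraint length K = 3.
    # For each input bit x[n] it emits two parity bits:
    #   p0[n] = x[n] + x[n-1] + x[n-2]  (mod 2)
    #   p1[n] = x[n] + x[n-1]           (mod 2)
    # Bits before the start of the message are taken to be 0.
    out = []
    for n in range(len(x)):
        xm1 = x[n - 1] if n >= 1 else 0   # x[n-1]
        xm2 = x[n - 2] if n >= 2 else 0   # x[n-2]
        out.append((x[n] + xm1 + xm2) % 2)   # p0[n]
        out.append((x[n] + xm1) % 2)         # p1[n]
    return out

print(conv_encode([0, 1, 0, 1, 1, 0, 0, 1]))   # parity stream p0[0], p1[0], p0[1], ...

Each pass through the loop corresponds to one position of the sliding window; the encoder state is simply the previous two message bits.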

1.4. Parity Equations

We have seen the above parity equations used by the encoder to produce the parity bits. In general, each parity equation can be viewed as the result of combining the message bits, X, with a generator polynomial, g.

For example, the generator polynomial coefficients for

P0[n] = x[n] + x[n-1] + x[n-2]

P1[n] = x[n] + x[n-1]


are (1, 1, 1) and (1, 1, 0), while for:

P0[n] = x[n] + x[n-1] + x[n-2]

P1[n] = x[n] + x[n-1]

P2[n] = x[n] + x[n-2]

they are (1, 1, 1), (1, 1, 0), and (1, 0, 1).

We denote by gi the K-element generator polynomial for parity bit pi. The parity bit pi[n] is then given by

Pi[n] = ( gi[0]x[n] + gi[1]x[n-1] + ... + gi[K-1]x[n-K+1] ) mod 2

The form of this equation is a convolution of g and x, hence the term "convolutional code".

The number of generator polynomials is equal to the number of generated parity bits, r, in each sliding window.

Consider the two generator polynomials

g0 = (1, 1, 1)

g1 = (1, 1, 0)

If the message sequence is X = [1, 0, 1, 1, ...], with x[n] = 0 for n < 0, then:

P0[0] = (1+0+0) = 1

P1[0] = (1+0) = 1

P0[1] = (0+1+0) = 1

P1[1] = (0+1) = 1


P0[2] = (1+0+1) = 0

P1[2] = (1+0) = 1

P0[3] = (1+1+0) = 0

P1[3] = (1+1) = 0

Therefore, the parity bits sent over the channel are [1, 1, 1, 1, 0, 1, 0, 0, ...].
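The same computation can be written directly in terms of the generator polynomials. The short Python sketch below (illustrative only, with names of our own choosing) implements the general convolution pi[n] = (sum over j of gi[j] x[n-j]) mod 2 and reproduces the parity stream derived above:

def parity(x, g, n):
    # One parity bit: p[n] = ( sum_j g[j] * x[n-j] ) mod 2, with x[m] = 0 for m < 0.
    return sum(g[j] * (x[n - j] if n - j >= 0 else 0) for j in range(len(g))) % 2

def encode(x, generators):
    # Interleave the parity streams produced by all generator polynomials.
    out = []
    for n in range(len(x)):
        out += [parity(x, g, n) for g in generators]
    return out

g0, g1 = (1, 1, 1), (1, 1, 0)
print(encode([1, 0, 1, 1], [g0, g1]))   # -> [1, 1, 1, 1, 0, 1, 0, 0]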

1.5. Decoding Convolutional Codes

Trellis Decoding:

The trellis is a structure derived from the state machine that allows us to develop an efficient way to decode convolutional codes. The state machine view shows what happens at each instant when the sender has a message bit to process, but it does not show how the system evolves in time. The trellis is a structure that makes this time evolution explicit.

From the trellis diagram we can see that each column of the trellis has the full set of states; each state in a column is connected to two states in the next column, the same two states as in the state diagram. The top link from each state in a column of the trellis shows what gets transmitted on a "0", while the bottom link shows what gets transmitted on a "1". The picture shows the links between states that are traversed in the trellis given the message 101100.

We can now think about what the decoder needs to do in terms of this trellis. It receives a sequence of parity bits and needs to determine the best path through the trellis, that is, the sequence of states in the trellis that best explains the observed, and possibly corrupted, sequence of received parity bits. The Viterbi decoder finds a maximum-likelihood path through the trellis.
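As an illustration of maximum-likelihood decoding on the trellis, the following Python sketch implements a hard-decision Viterbi decoder for the rate-1/2 code with generators (1, 1, 1) and (1, 1, 0) used earlier. It is a minimal teaching sketch under those assumptions, not an optimized or general implementation:

from itertools import product

G = [(1, 1, 1), (1, 1, 0)]               # generator polynomials g0, g1

def branch_output(bit, state):
    # Parity bits produced when 'bit' is shifted in; state = (x[n-1], x[n-2]).
    window = (bit,) + state              # (x[n], x[n-1], x[n-2])
    return tuple(sum(g[j] * window[j] for j in range(3)) % 2 for g in G)

def viterbi_decode(received):
    # Hard-decision Viterbi decoding; 'received' is a list of (p0, p1) pairs.
    states = list(product((0, 1), repeat=2))
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}   # path metrics
    path = {s: [] for s in states}                                       # survivor paths
    for r in received:
        new_metric = {s: float("inf") for s in states}
        new_path = {}
        for s in states:
            if metric[s] == float("inf"):
                continue
            for bit in (0, 1):
                dist = sum(a != b for a, b in zip(branch_output(bit, s), r))
                nxt = (bit, s[0])        # next state = (x[n], x[n-1])
                if metric[s] + dist < new_metric[nxt]:
                    new_metric[nxt] = metric[s] + dist
                    new_path[nxt] = path[s] + [bit]
        metric, path = new_metric, new_path
    best = min(states, key=lambda s: metric[s])
    return path[best]

# Take the parity pairs of the worked example for message 1 0 1 1,
# flip one received bit, and check that the decoder still recovers the message.
received = [(1, 1), (0, 1), (0, 1), (0, 0)]    # (1,1) (1,1) (0,1) (0,0) with one error
print(viterbi_decode(received))                # -> [1, 0, 1, 1]

At each trellis column the decoder keeps, for every state, only the lowest-distance (survivor) path, which is what makes the search efficient compared with enumerating all 2^N message sequences.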


2.0. TURBO CODES

2.1. Introduction

Even with block codes and convolutional codes available as means of error correction for information transmitted across a channel, today's world thrives on information exchange, and the need of the day is that this information be protected well enough to be transmitted over a noisy environment. This is achieved by adding redundant bits to the information bit stream. If the purpose of adding redundancy is just to detect errors and ask the sender to retransmit the information, the scheme is known as automatic repeat request (ARQ). Forward error correction (FEC) is another way of adding redundancy to the information bit stream, so that errors can be detected and corrected, thus avoiding the need for retransmission. The price paid for adding such redundancy is a faster transmission rate, needed to send the same number of information bits per unit time, which implies a larger bandwidth requirement.
The advantage, however, is that the Signal-to-Noise Ratio (SNR) required for a given error rate can be reduced significantly (this reduction is referred to as coding gain). In wireless systems, one of the most important performance criteria is low transmit power, as that provides longer battery life and less co-channel interference. From coding theory, it is known that by increasing the codeword length or the encoder memory and using "good" codes, one can theoretically approach the limiting channel capacity. Turbo codes are a very powerful error-correcting technique that enables reliable communication with a Bit Error Rate (BER) close to the Shannon limit. It is this promise that motivated the research leading to the design and implementation of turbo codes. Turbo codes were proposed by a team of French researchers, Berrou, Glavieux, and Thitimajshima, who published two papers:


 "Near Shannon Limit Error Correcting Coding and Decoding: Turbo Codes," Proc. ICC '93, pp. 1064-1070, May 1993.
 "Near Optimum Error Correcting Coding and Decoding: Turbo-Codes," IEEE Trans. Comm., October 1996.

These two papers led to the adoption of turbo coding, in a number of communication standards, as a means of correcting errors introduced during the transmission of bit streams over a channel.

Turbo codes are in fact a parallel concatenation of two recursive systematic convolutional codes. A fundamental difference between convolutional codes and turbo codes is that, while for the former performance improves by increasing the constraint length, for turbo codes the constraint length has a small value that remains pretty much constant. Moreover, turbo codes achieve a significant coding gain at lower coding rates. An important factor in achieving this improvement is the "soft-input/soft-output" decoding algorithm, which produces soft decisions. Turbo codes enable reliable communication over power-constrained communication channels at close to Shannon's limit. However, a significant number of iterations is required to produce this result, leading to higher latency. The turbo encoder is described in Section 2.2. Interleaving and puncturing are explained in Sections 2.3.1 and 2.3.2, respectively. Section 2.4 gives an overview of the turbo decoder, and Section 2.5 presents the simulation results and conclusions.

2.2. The Turbo Encoder

The general structure of a turbo encoder is shown in Fig. 2.1. It consists of two rate-1/2 Recursive Systematic Convolutional (RSC) encoders, code #1 and code #2. It should be noted that the trellis structure and free distance for RSC and Non-Systematic Convolutional (NSC) codes are the same; the output sequences, however, are not the same for identical input sequences. The N-bit data block is first encoded by code #1. The same data block is also interleaved and encoded by code #2. The main purpose of the interleaver is to randomize bursty error patterns so that they can be correctly decoded; it also helps to increase the minimum distance of the turbo code.

Fig 2.1. General Turbo Encoder Structure

The two RSC encoders can be represented in more detail as follows, revealing their recursive nature.


Fig 2.2. Structure of the RSC encoders

An RSC code can be obtained from an NSC code by adding a feedback loop and setting one of the output bits equal to the input bit. In order to increase the transmission efficiency, puncturing can be used. Puncturing removes certain bits from the output stream according to a fixed pattern given by a puncturing matrix.
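To illustrate the recursive, systematic structure described above, the following Python sketch implements a simple rate-1/2 RSC encoder with a two-stage register. The choice of feedback polynomial 1 + D + D^2 and feedforward polynomial 1 + D^2 is ours, made purely for illustration:

def rsc_encode(bits):
    # Rate-1/2 recursive systematic convolutional encoder.
    # Feedback polynomial:    1 + D + D^2  (drives the shift register)
    # Feedforward polynomial: 1 + D^2      (forms the parity bit)
    s1 = s2 = 0                       # shift-register contents a[k-1], a[k-2]
    systematic, parity = [], []
    for x in bits:
        fb = (x + s1 + s2) % 2        # recursive feedback bit a[k]
        systematic.append(x)          # systematic output equals the input bit
        parity.append((fb + s2) % 2)  # parity = a[k] + a[k-2]
        s2, s1 = s1, fb               # shift the register
    return systematic, parity

print(rsc_encode([1, 0, 1, 1, 0]))

The feedback loop is what distinguishes the RSC from a non-systematic feedforward encoder: the register is driven by a[k] = x[k] + a[k-1] + a[k-2] rather than by the input bit directly.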

2.3. The Turbo Encoding Process


The turbo encoding process can be explained based on on the
components that makes up the turbo encoder as seen on the figures above. As seen
from the previous paragraph where the function functions of the two RSC encoders
were given. In addition to their functions, we will now discuss on the function of
the interleaver to give a clear picture of the turbo encoding process.

2.3.1. Interleaver

As mentioned earlier, the interleaver is a very important constituent of the turbo encoder. It spreads bursty error patterns and also increases the free distance. Thus, it allows the decoders to make uncorrelated estimates of the soft output values; the convergence of the iterative decoding algorithm improves as the correlation of the estimates decreases. The simplest interleaver is the "row-column" or "block" interleaver, in which the elements are written row-wise and read column-wise. The "helical" interleaver writes the data row-wise but reads it diagonally. There is also an "odd-even" interleaver, which has been shown to give very good results. In this interleaver the odd positions of the input bits are encoded first; a pseudo-random interleaving of the input sequence follows, and the even positions are then encoded. The output consists of the input sequence and a multiplexed sequence of odd- and even-positioned coded bits. The drawback of this scheme is that some information bits will have two coded bits associated with them while others won't have any, causing a non-uniform distribution of coding power across the input bit stream. A solution to this problem is to use an "odd-even" type of interleaver with an odd number of rows and columns, as shown in [5]. Another type of interleaver is the "simile" interleaver [6]. It places an additional restriction: after encoding the sequences of information and interleaved bits, both encoders must be in the same state, which allows a single sequence of tail bits to terminate both trellises. This is done by dividing the whole block of N information bits into v + 1 sequences, where v is the memory length of the code. It can be seen that the sequences to which the information bits belong, and not the order of the individual bits within each sequence, determine the final encoder state. Thus, as long as the interleaver does not change the sequence to which the original bits belong, both encoders will end in the same state, and only one tail will be required to drive both encoders to the all-zero state at the same instant.
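As a concrete example of the simplest case mentioned above, the row-column ("block") interleaver can be sketched in a few lines of Python. The sketch is illustrative; practical standards define their own interleaving patterns:

def block_interleave(bits, rows, cols):
    # Write the bits row-wise into a rows x cols array, then read them column-wise.
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    # Invert the interleaver: write column-wise, read row-wise.
    out = [0] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out

data = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0]
mixed = block_interleave(data, rows=3, cols=4)             # a burst in 'data' is spread out
print(mixed)                                               # [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(block_deinterleave(mixed, rows=3, cols=4) == data)   # True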

2.3.2. Puncturing

Puncturing is a technique used to increase the code rate. A rate-1/3 encoder is converted to a rate-1/2 encoder by multiplexing (and discarding part of) the two coded parity streams: the multiplexer can choose the odd-indexed outputs of the upper RSC encoder and the even-indexed outputs of the lower one. In more complicated systems, puncturing tables are used. An important application of puncturing is to provide unequal error protection: for relatively unimportant bits, or under cleaner channel conditions, more of the coded bits are punctured to give a higher-rate code, while for more important bits, or under noisier channel conditions, a lower-rate (more heavily protected) code can be used.
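A minimal Python sketch of this puncturing step follows. The pattern (odd-indexed bits from the first parity stream, even-indexed bits from the second) mirrors the description above; real systems specify the pattern through a puncturing matrix:

def puncture(parity1, parity2):
    # Keep one parity bit per information bit: the odd-indexed bits of parity1
    # and the even-indexed bits of parity2 are transmitted, the rest are dropped.
    assert len(parity1) == len(parity2)
    return [parity1[i] if i % 2 == 1 else parity2[i] for i in range(len(parity1))]

p1 = [1, 0, 1, 1, 0, 0]
p2 = [0, 1, 1, 0, 1, 0]
print(puncture(p1, p2))   # -> [0, 0, 1, 1, 1, 0]

Together with the systematic stream, two bits are now sent per information bit instead of three, turning the rate-1/3 turbo code into a rate-1/2 code.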

In summary, a classic turbo encoder results from the combination of two (or more) encoders. These are often recursive systematic convolutional coders (RSCs), because their recursion provides useful pseudo-random properties.

Specifically, the turbo encoder typically produces three outputs to be sent over the transmission channel (after any modulation):

1. the so-called systematic output, i.e., the very input of the encoder (the information sequence);
2. the parity output 1: the output of the first encoder;
3. the parity output 2: the output of the second encoder. The difference between these two parity outputs comes from the fact that the frame is interleaved ("scrambled") before entering the second encoder.

This coding makes it possible to distribute the information carried by one bit of the frame over its neighbors (through parity encoding 1) and even over the entire length of the transmitted frame (through parity encoding 2). Thus, if part of the message is heavily damaged during transmission, the information can still be recovered from elsewhere.
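Putting the pieces together, the data flow of a classic turbo encoder can be sketched by combining two copies of an RSC encoder with an interleaver. The sketch below reuses the illustrative rsc_encode and block_interleave functions defined earlier and simply exposes the three outputs listed above (systematic, parity 1, parity 2); it is a sketch of the structure, not of any particular standard:

def turbo_encode(bits, rows, cols):
    # Parallel concatenation of two RSC encoders.
    # Returns the systematic stream, the parity stream of encoder #1
    # (natural order) and the parity stream of encoder #2 (interleaved order).
    systematic, parity1 = rsc_encode(bits)
    _, parity2 = rsc_encode(block_interleave(bits, rows, cols))
    return systematic, parity1, parity2

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]        # 12 bits -> fits a 3 x 4 interleaver
sys_out, par1, par2 = turbo_encode(msg, rows=3, cols=4)
print(sys_out)    # identical to msg (systematic output)
print(par1)       # parity of the first RSC encoder
print(par2)       # parity of the second, interleaved, RSC encoder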

2.4. Turbo decoder and the decoding process

An iterative decoding scheme is used, which is basically a modification of the Bahl (BCJR) decoding algorithm; the modification is necessary because of the recursive nature of the encoders. The difference between this algorithm and the Viterbi algorithm [9] is that while the latter produces hard outputs, this one produces soft outputs. Thus, instead of outputting only 0 or 1, the output range is continuous and is a measure of the log-likelihood ratio of every bit estimate. The iterative feedback scheme is shown in Fig. 2.3 below.

Fig 2.3. General Turbo Decoder Structure


The inputs to the decoder, xk and yk, are the punctured encoder outputs Xk and Yk corrupted by two independent noise processes with the same variance. The demultiplexer routes y1k to DEC1 when the transmitted sequence is Y1k and y2k to DEC2 when the transmitted sequence is Y2k, and sets the corresponding input to zero when no parity bit was transmitted (punctured positions). The output of DEC1 (known as the extrinsic information of the decoder) is used by DEC2 to modify the confidence levels and thus obtain a more accurate estimate of the transmitted message. The purpose of the interleaver is the same as before, i.e., to de-correlate the error bursts. The output of DEC2 is fed back to DEC1 and the process is repeated several times, depending on the BER required by the application.
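The flow of extrinsic information described above can be summarised in the structural sketch below. The siso_decode function is only a placeholder for a full soft-input/soft-output (MAP or Log-MAP/BCJR) decoder, which is beyond the scope of this report; the sketch is meant to show how a priori and extrinsic log-likelihood ratios (LLRs) circulate between DEC1 and DEC2 through the interleaver and de-interleaver, and all names in it are illustrative assumptions:

def siso_decode(sys_llr, par_llr, apriori_llr):
    # Placeholder for a MAP / Log-MAP (BCJR) decoder over the RSC trellis.
    # A real SISO decoder would return the extrinsic LLR of each bit; the dummy
    # combination below only keeps the sketch runnable, its numbers mean nothing.
    return [0.5 * (s + p) + 0.1 * a for s, p, a in zip(sys_llr, par_llr, apriori_llr)]

def turbo_decode(sys_llr, par1_llr, par2_llr, interleave, deinterleave, iters=8):
    # Structural sketch of iterative turbo decoding with extrinsic feedback.
    n = len(sys_llr)
    extrinsic2 = [0.0] * n                           # a priori for DEC1, initially zero
    for _ in range(iters):
        # DEC1 works in natural order on the systematic and parity-1 LLRs
        extrinsic1 = siso_decode(sys_llr, par1_llr, extrinsic2)
        # DEC2 works in the interleaved domain, using DEC1's output as a priori
        ext2_int = siso_decode(interleave(sys_llr), par2_llr, interleave(extrinsic1))
        extrinsic2 = deinterleave(ext2_int)          # back to natural order for DEC1
    # Final decision from channel plus both extrinsic contributions
    # (assuming LLR = log P(bit = 1) / P(bit = 0)).
    total = [s + e1 + e2 for s, e1, e2 in zip(sys_llr, extrinsic1, extrinsic2)]
    return [1 if llr > 0 else 0 for llr in total]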

2.5. Case study simulation, results and conclusion

The simulation results show that the turbo code is a powerful error-correcting technique even in low-SNR environments, achieving performance close to the Shannon capacity limit. However, several factors need to be considered in the turbo code design. First, a trade-off between the BER and the number of iterations needs to be made: more iterations give a lower BER, but the decoding delay is also longer. Secondly, the effect of the frame size on the BER needs to be considered; although a turbo code with a larger frame size has better performance, the output delay is also longer. Thirdly, the code rate is another factor to be considered, since a lower coding rate requires more bandwidth. From these results, the BER reaches 10^-5 at an Eb/No of only 2 dB. The Log-MAP algorithm outperforms the conventional Viterbi algorithm used with the convolutional codes in DVB systems.


Figure 3.1(a, b) shows the Bit Error Rate curves for various values of Eb/No. As the value of Eb/No increases, the probability of error decreases, reaching a level of 10^-6 at a relatively low Eb/No. The Log-MAP algorithm is used for decoding the received data.


The original image shown in Figure 3.2(a) is encoded using turbo codes and subjected to Additive White Gaussian Noise. A random interleaver is used together with the Log-MAP decoding algorithm, and the generator polynomial follows the Group Special Mobile (GSM) committee standard. The original image can almost be fully retrieved by the fifth iteration. It is clear from the images in Figure 3.2(a-e) that, as the number of iterations increases, the noise in the data is removed, and in the end the original image is recovered.

2.6. Applications and Uses of Turbo Codes

Because of their excellent decoding performance and the moderate computational complexity of turbo decoders, turbo codes have been adopted by several organizations and integrated into their standards. NASA uses them in all its space probes built since 2003, and the European Space Agency (ESA) was the first space agency to use turbo codes, in the SMART-1 lunar probe. Turbo codes are also used in UMTS and ADSL2. Turbo codes work very well with OFDM and OFDMA modulation and multiplexing techniques because they benefit from the time and frequency spreading of the system; in the 4G LTE and LTE Advanced mobile networks, these two technologies (turbo codes and OFDMA) are combined.


2.7. References

[1]. C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," in Proc. ICC 1993, Geneva, Switzerland, pp. 1064-1070, May 1993.

[2]. C. Berrou and A. Glavieux, "Near Optimum Error Correcting Coding and Decoding: Turbo-Codes," IEEE Trans. on Communications, vol. 44, pp. 1261-1271, Oct. 1996.

[3]. S. A. Barbulescu and S. S. Pietrobon, "Interleaver Design for Turbo Codes," Electronics Letters, vol. 30, pp. 2107-2108, Dec. 1994.

[4]. L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Trans. on Information Theory, vol. IT-20, pp. 284-287, Mar. 1974.

[5]. J. Hagenauer, E. Offer, and L. Papke, "Iterative Decoding of Binary Block and Convolutional Codes," IEEE Trans. on Information Theory, vol. 42, pp. 429-445, Mar. 1996.

[6]. G. Caire and E. Biglieri, "Parallel Concatenated Codes with Unequal Error Protection," IEEE Trans. on Communications, vol. 46, no. 5, pp. 565-567, May 1998.

[7]. J. Bakus and A. K. Khandani, "Combined Source-Channel Coding Using Turbo Codes," Electronics Letters, vol. 33, no. 19, pp. 1613-1614, Sept. 1997.

[8]. Atousa H. S. Mohammadi and Weihua Zhuang, "Variance of the Turbo-Code Performance Bound Over the Interleavers," IEEE Trans. on Information Theory, vol. 48, no. 7, pp. 2078-2086, July 2002.

[9]. Atousa H. S. Mohammadi and A. K. Khandani, "Unequal Power Allocation to the Turbo-Encoder Output Bits with Application to CDMA Systems," IEEE Trans. on Communications, vol. 47, no. 11, pp. 1609-1611, Nov. 1999.

[10]. Atousa H. S. Mohammadi and A. K. Khandani, "Unequal Error Protection of Turbo-Encoder Output Bits," Electronics Letters, vol. 33, no. 4, pp. 273-274, Feb. 1997.

[11]. Atousa H. S. Mohammadi and Weihua Zhuang, "Combined Turbo-Code and Modulation for CDMA Wireless Communications," Proc. IEEE Vehicular Technology Conference, VTC '98, pp. 1920-1924, May 1998.

[12]. Atousa H. S. Mohammadi and A. K. Khandani, "Unequal Error Protection on the Turbo-Encoder Output Bits," Proc. 1997 IEEE International Conference on Communications, ICC '97, pp. 730-734, June 1997.
