
CHAPTER 1

INTRODUCTION

1.1. General
The information age began with two big discoveries in the late 1940s, one technological and
the other theoretical. John Bardeen, Walter Brattain and William Shockley invented the
transistor in 1947, while information and digital communication theory was founded by
Claude Elwood Shannon in 1948 [Shannon, 1948]. Today the communications industry takes
full advantage of these two discoveries.
The modern approach to error control in digital communications started with the
crucial work of Shannon, Hamming and Golay. Hamming and Golay developed the first
practical forward error control schemes, while Shannon formulated the fundamental limits
on the efficiency of communication systems. They provided the insight that, with clever
design, errors could be corrected or avoided. Shannon's theory indicated that large
improvements in the performance of communication systems were achievable; however, it
only stated a fundamental limit on efficiency and did not explain how to reach it.
The difference between the theoretical limit stated by Shannon and the practical
results of that time indicated that there was much room for improvement in communication
systems, which motivated researchers from all around the globe. The challenge issued by
Shannon could be met thanks to the rapid growth in the number of transistors on a single
silicon chip. A typical example of this is the invention of turbo codes and of iterative
decoding implemented in the receiver. Practical implementation of iterative decoding was
very challenging due to its highly complex decoding algorithms; thanks to the development
of silicon ICs, it could eventually be implemented in hardware.

Improving the error correcting capability of a code improves the quality of the
received information in the same proportion and enables the transmission system to operate
under more severe conditions. This is crucial for many applications: satellite systems require
savings in weight (hardware), while in mobile communication and other such wireless
applications the antenna size can be reduced as the signal quality increases.
In 1967, Andrew Viterbi was the first to present a famous algorithm for the decoding
of convolutional codes, although this algorithm [Viterbi, 1967] was initially impractical due
to excessive storage requirements. Nevertheless, it forms the basis of the general
understanding of convolutional codes and of iterative decoding for serially concatenated codes.
A paper published by Claude Berrou and co-authors at the ICC conference in 1993
revolutionised the field of forward error correction coding (FECC) and modern digital
communication systems [Berrou et al., 1993]. It described a method of creating much more
powerful error correcting codes using parallel concatenation of convolutional codes. Its main
features were two recursive convolutional encoders (RCE) interconnected via an interleaver.
Global iterative decoding was the main trick used to achieve near Shannon limit performance,
and a high gain in BER performance over the existing FEC codes of that time was obtained.
The invention of turbo codes drew much attention from researchers to the practical issues of
turbo and turbo-like codes. Unfortunately, this accuracy is paid for with a highly complex
recursive decoding scheme, which has so far limited the understanding of its methods and
hence its practical applications. The invention of Block Turbo Codes (BTC) closed much of
the remaining gap to capacity [A. Glavieux, 1994].
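To make the parallel concatenation concrete, the following minimal sketch encodes a block of information bits into a systematic stream and two parity streams (Python; the (1, 5/7) octal RSC generators and the random interleaver are illustrative assumptions, not the exact configuration of [Berrou et al., 1993]):

    import random

    def rsc_parity(bits):
        # Recursive systematic convolutional (RSC) encoder with generators
        # (1, 5/7) in octal: feedback 1 + D + D^2, forward 1 + D^2.
        s1 = s2 = 0
        parity = []
        for b in bits:
            fb = b ^ s1 ^ s2          # feedback bit
            parity.append(fb ^ s2)    # forward polynomial applied to feedback
            s1, s2 = fb, s1
        return parity

    def turbo_encode(bits, interleaver):
        # Rate-1/3 parallel concatenation: systematic bits plus the parity of
        # encoder 1 and the parity of encoder 2 fed with interleaved bits.
        p1 = rsc_parity(bits)
        p2 = rsc_parity([bits[i] for i in interleaver])
        return bits, p1, p2

    info = [random.randint(0, 1) for _ in range(16)]
    perm = random.sample(range(16), 16)   # a pseudo-random interleaver
    systematic, parity1, parity2 = turbo_encode(info, perm)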
A well known result of information theory is that a randomly chosen code of
sufficiently large block length is capable of approaching channel capacity. However, the
optimal decoding complexity increases exponentially with block length up to a point where
decoding becomes physically unrealisable [C. Berrou, 1996]. The goal of coding theorists is
to develop codes that have large equivalent block lengths, yet contain enough structure that
practical decoding is possible. However, with a standard code structure (such as a
convolutional code), the decoding complexity increases at a much faster rate than the
achievable coding gain. For high-speed applications, concatenation of standard codes has
proven effective in increasing the effective block length while keeping the complexity manageable.
Recently, a new class of error correcting codes called block turbo codes has been
developed [Li. Ping, 2001]. Due to the use of a pseudo-random interleaver, these codes
appear random to the channel and yet possess enough structure for decoding to be physically
realisable. The decoder complexity of such codes is greatly reduced, which makes practical
decoder implementation easy.
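As a simple illustration of the pseudo-random interleaving idea, the sketch below builds a seeded pseudo-random permutation and verifies that deinterleaving restores the original order (Python; the seed value is an assumption chosen only for reproducibility):

    import random

    def make_interleaver(length, seed=7):
        # A pseudo-random interleaver is a fixed permutation of bit positions.
        rng = random.Random(seed)
        perm = list(range(length))
        rng.shuffle(perm)
        return perm

    def interleave(bits, perm):
        return [bits[i] for i in perm]

    def deinterleave(bits, perm):
        out = [0] * len(perm)
        for j, i in enumerate(perm):
            out[i] = bits[j]          # undo the permutation
        return out

    perm = make_interleaver(8)
    data = [1, 0, 1, 1, 0, 0, 1, 0]
    assert deinterleave(interleave(data, perm), perm) == data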

1.2. Fundamentals of channel coding


The efficient design of a communication system that enables reliable high-speed
services is challenging. Efficient design refers to the efficient use of primary
communication resources such as power and bandwidth. The reliability of such systems is
usually measured by the required signal-to-noise ratio (SNR) to achieve a specific bit error
rate [John G. Proakis, 2001]. A bandwidth-efficient communication system with perfect
reliability, or one that is as reliable as possible at the lowest possible SNR, is desired.
Error correction coding (ECC) [C. Heegard, 1999] is a technique that improves the
reliability of communication over a noisy channel. The use of the appropriate ECC allows a
communication system to operate at very low error rates, using low to moderate SNR values,
enabling reliable high-speed communication over a noisy channel. Although there are
different types of ECC that can be used for channel coding, they all have one key objective in
common: achieving a high minimum Hamming distance, occurring for only a few codewords,
in order to improve the code performance.
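For a linear block code, the minimum Hamming distance equals the minimum weight of a nonzero codeword, so it can be checked by brute force for small codes. The sketch below does this for the (7,4) Hamming code, used here purely as a stand-in example (Python):

    from itertools import product

    # Generator matrix of the (7,4) Hamming code in systematic form.
    G = [[1, 0, 0, 0, 1, 1, 0],
         [0, 1, 0, 0, 1, 0, 1],
         [0, 0, 1, 0, 0, 1, 1],
         [0, 0, 0, 1, 1, 1, 1]]

    def encode(msg):
        # Codeword bit j is the mod-2 inner product of msg with column j of G.
        return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

    codewords = [encode(m) for m in product([0, 1], repeat=4)]
    d_min = min(sum(c) for c in codewords if any(c))
    print(d_min)   # 3 for the (7,4) Hamming code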

1.2.1 The Structure of a Digital Communication System


A typical digital communication system is shown below in Fig. 1.1 [John G.
Proakis, 2001]. The information source generates a message containing information that is to
be transmitted to the receiver. The information source can be an analog source that generates
analog signals such as audio or video. An analog communication system transmits such a
signal directly over the channel using analog modulation such as amplitude, frequency, or
phase modulation. The information source can also be a discrete source that generates a
sequence of symbols from a finite symbol alphabet, such as a teletype machine whose output
consists of the letters of an alphabet and the numbers 0 through 9.

Figure 1.1 Typical Digital Communication System [John G. Proakis, 2001]. (Block diagram:
information source, source encoder, encrypter, channel encoder, modulator, channel,
demodulator, channel decoder, decrypter, source decoder, information sink.)

In a digital communication system, shown in Fig. 1.1, the outputs of an analog or


discrete source are converted into a sequence of bits. This sequence of bits might contain
considerable redundancy, which is of no use for achieving high
reliability. Ideally, the source encoder removes redundancy and represents the source output
sequence with as few bits as possible. Note that the redundancy in the source is different
from the redundancy inserted intentionally by the error correcting code. The encrypter
encodes the data for security purposes. Encryption is the most effective way to achieve data
security. The three components, information source, source encoder and encrypter can be
seen as a single component called the source. The binary sequence d is the output of the
source.
The primary goal of the channel encoder is to increase the reliability of transmission
within the constraints of signal power, system bandwidth and computational complexity. This
can be achieved by introducing structured redundancy into transmitted signals [C. Clark,
1988]. Channel coding is used in digital communication systems to correct transmission
errors caused by noise, fading and interference. The channel encoder assigns to each
message of K information bits a longer message of N bits, called a codeword. This usually
results in either a lower data rate or an increased channel bandwidth relative to an un-coded
system. The data rate at the output of the channel encoder is given as:

R = Rb / Rc                                                        (1.1)

where Rb is the data rate at the output of the source and Rc is the code rate. The code rate Rc
is given as:

Rc = K / N                                                         (1.2)
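As a numerical illustration of Eqs. (1.1) and (1.2), assume a (7,4) code and a source data rate of 10 kbit/s (both figures are assumptions chosen only for the example):

    K, N = 4, 7       # information bits and coded bits per codeword
    Rc = K / N        # code rate, Eq. (1.2): about 0.571
    Rb = 10_000       # source data rate in bit/s (assumed)
    R = Rb / Rc       # channel encoder output rate, Eq. (1.1): 17500.0 bit/s
    print(Rc, R)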
To make the communication system less vulnerable to channel impairments, the
channel encoder generates codewords that are as different as possible from one another.
Since the transmission medium is a waveform medium, the sequence of bits generated by the
channel encoder cannot be transmitted directly through this medium. The main goal of
modulation is not only to match the signal to the transmission medium, but also to enable
simultaneous transmission of a number of signals over the same physical medium and to
increase the data rate. The modulation format can be any digital modulation format such as
phase shift keying (PSK), amplitude shift keying (ASK), frequency shift keying (FSK) and
combinations of these modulation formats.
A communication channel refers to the physical medium (copper wires, the radio
medium or optical fiber) over which the signal travels. In the channel, noise, fading and
interference corrupt the transmitted signal and cause errors in the received signal. This thesis
considers only AWGN-type channels, whose noise ultimately limits system performance.
At the receiving end of the communication system, the demodulator processes the
channel-corrupted waveform and makes a hard or soft decision on each symbol. If the
demodulator makes a hard decision, its output is a binary sequence and the subsequent
channel decoding process is called hard-decision decoding. A hard decision in the
demodulator results in some irreversible information loss. If the demodulator passes the soft
output of the matched filter to the decoder, the subsequent channel decoding process is called
soft-decision decoding.
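The difference between the two decisions can be sketched for BPSK over an AWGN channel (Python; the noise level is an arbitrary assumption). The hard decision keeps only the sign of the matched-filter output, whereas the soft decision keeps the reliability as well, in the form of a log-likelihood ratio (LLR):

    import random

    random.seed(1)
    sigma = 0.8                                     # assumed noise std. dev.
    bits = [random.randint(0, 1) for _ in range(8)]
    # BPSK mapping: bit 0 -> +1, bit 1 -> -1, plus Gaussian channel noise.
    rx = [(1 - 2 * b) + random.gauss(0.0, sigma) for b in bits]

    hard = [0 if r > 0 else 1 for r in rx]          # sign only: info is lost
    soft = [2 * r / sigma ** 2 for r in rx]         # LLR: sign and reliability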
The channel decoder works separately from the modulator/demodulator and has the
goal of estimating the output of the source encoder based on the encoder structure and a
decoding algorithm. In general, soft-decision decoding performs better than hard-decision decoding.
If encryption is used, the decrypter converts encrypted data back into its original
form. The source decoder transforms the sequence at its input based on the source encoding
rule into a sequence of data, which will be used by the information sink to construct an
estimate of the message. These three components, decrypter, source decoder and information
sink can be represented as a single component called the sink.

1.2.2 Shannon Limit


Both the communication channel and the signal that travels through it have their own
bandwidth. The bandwidth B of a communication channel defines the frequency limits of the
signals that it can carry. In order to transfer data very quickly, a large bandwidth is required.
Unfortunately, every communication channel has a limited bandwidth.
In 1948, Shannon's theory set the fundamental limits on the efficiency of
communication systems [Shannon, 1948]. It states that the probability of error in the
transmitted data can be made arbitrarily small, provided that the rate at which data is
transmitted through the channel does not exceed the channel capacity, formulated by
Shannon as:

C = W log2(1 + S/N)  bits/sec                                      (1.3)

with C being the channel capacity, i.e. the maximum number of bits that can be transmitted
through the channel per unit of time, W the bandwidth of the channel, and S/N the
signal-to-noise ratio (SNR) at the receiver.
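As a worked example of Eq. (1.3), the sketch below computes the capacity of a 3 kHz channel at 30 dB SNR (both figures are assumptions chosen only for illustration):

    import math

    def shannon_capacity(W_hz, snr_linear):
        # Eq. (1.3): C = W log2(1 + S/N), in bit/s.
        return W_hz * math.log2(1 + snr_linear)

    snr = 10 ** (30 / 10)                # 30 dB -> 1000 in linear terms
    print(shannon_capacity(3000, snr))   # about 29902 bit/s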
This theory went against the conventional methods of the time, which consisted of
lowering the probability of error by raising the SNR, i.e. by increasing the power of the
transmitted signal. Unfortunately, although Shannon's theorem sets down the fundamental
limitations on communication efficiency, it provides no methods through which
these limits can be reached.

1.3. Channel coding


The task of channel coding is to encode the information sent over a communication
channel in such a way that in the presence of channel noise, errors can be detected and/or
corrected. There are two types of channel coding techniques.

1.3.1 Backward Error Correction (BEC) Coding Technique


Backward error correction [John G. Proakis, 2001] is a technique used for error
detection only. At the receiver, errors can be detected using the redundancy bits added by the
encoder at the transmitter, but they cannot be corrected there. If an error is detected,
the sender is requested to retransmit the message. While this method is simple and places
lower requirements on the code's error-correcting properties, it requires duplex
communication and causes undesirable delays in transmission. This technique is therefore
used where delay in the transmission can be tolerated.
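A minimal sketch of this detect-and-retransmit idea is given below (Python; a single even-parity bit, which detects any odd number of bit errors, is used purely as an illustrative detection code):

    def bec_encode(bits):
        # Append one even-parity bit; the receiver can detect, but not
        # correct, any odd number of bit errors.
        return bits + [sum(bits) % 2]

    def bec_receive(word):
        # Detection only: on a failed parity check, request retransmission.
        return "ACK" if sum(word) % 2 == 0 else "NAK: retransmit"

    tx = bec_encode([1, 0, 1, 1])
    rx = tx.copy()
    rx[2] ^= 1                  # a single channel error
    print(bec_receive(rx))      # NAK: retransmit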

1.3.2 Forward Error Correction (FEC) Coding Technique


Forward error correction [John G. Proakis, 2001] is a technique used for error
detection as well as correction at the receiver end. In the FEC technique, redundancy bits are
added to the information bits by the channel encoder. If an error is detected at the receiver
end, the message is not retransmitted; instead, it is corrected using the redundancy bits added
by the channel encoder. In the FEC technique the channel decoder is capable of detecting as
well as correcting a certain number of errors, i.e. it is capable of locating the error positions.
The number of errors that can be corrected at the receiver end depends on the error
correcting code used by the channel encoder at the transmitter: a code with minimum
Hamming distance dmin can detect up to dmin - 1 errors and correct up to
t = (dmin - 1)/2 (rounded down) errors. Since FEC codes require
only simplex communication, they are especially attractive in wireless communication
systems. The FEC technique improves the energy efficiency of such systems. In the rest of
the thesis we deal with binary FEC codes only.
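The relation between minimum distance and error-handling capability can be stated compactly; the sketch below uses the (7,4) Hamming code (dmin = 3) as an assumed example:

    d_min = 3                   # minimum Hamming distance, e.g. (7,4) Hamming
    t = (d_min - 1) // 2        # correctable errors per codeword: 1
    detectable = d_min - 1      # detectable errors per codeword: 2
    print(t, detectable)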


1.4. Need for Better codes


Designing a channel code is always a trade-off between energy efficiency and
bandwidth efficiency. Codes with a lower rate (i.e. more redundancy) can usually correct
more errors [Molisch, 2011]. If more errors can be corrected, the communication system can
operate with a lower transmit power, transmit over longer distances, tolerate more
interference, use smaller antennas and transmit at a higher data rate. These properties make
the code energy efficient. On the other hand, low-rate codes have a large overhead and hence
consume more bandwidth. Also, decoding complexity grows exponentially with code length,
so long (low-rate) codes place high computational demands on conventional decoders.
There is a theoretical upper limit on the data transmission rate R, for which error-free
data transmission is possible. This limit is called the channel capacity, or Shannon capacity.
Although Shannon developed his theory in the 1940s, even several decades later code
designs were unable to come close to the theoretical limit because of decoder complexity.
Hence, new codes were sought that would allow for easier decoding. One way of
making the task of the decoder easier is to use a code with mostly high-weight code words.
High-weight code words, i.e. code words containing more ones and fewer zeros, can be
distinguished more easily. Another strategy involves combining simple codes in a parallel
fashion [Li. Ping, 2001], so that each part of the code can be decoded separately by less
complex decoders, with each decoder gaining from information exchange with the others.
This is called the divide-and-conquer strategy; a schematic sketch of such a decoder schedule
is given below. Turbo codes use the second method to achieve near Shannon limit performance.
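The sketch below shows only the information-exchange schedule (Python); dec1 and dec2 are hypothetical soft-input soft-output component decoders that must return extrinsic LLRs only, and many details of a real turbo decoder are deliberately omitted:

    def iterative_decode(llr_ch, dec1, dec2, perm, iterations=8):
        # Divide-and-conquer: each component decoder handles its own part of
        # the code and passes extrinsic information to the other one.
        n = len(llr_ch)
        ext1 = [0.0] * n
        ext2 = [0.0] * n
        for _ in range(iterations):
            # Decoder 1 sees channel LLRs plus decoder 2's extrinsic output.
            ext1 = dec1([llr_ch[k] + ext2[k] for k in range(n)])
            # Decoder 2 works on the interleaved bit order.
            ext2_i = dec2([llr_ch[perm[j]] + ext1[perm[j]] for j in range(n)])
            for j, k in enumerate(perm):        # de-interleave the result
                ext2[k] = ext2_i[j]
        # Final hard decision on the combined LLRs (positive LLR -> bit 0).
        return [0 if llr_ch[k] + ext1[k] + ext2[k] > 0 else 1 for k in range(n)]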

1.5. Literature Survey


The following literature has been surveyed for this research work.
Shannon (1948) proposed the theoretical limit for communication systems. The theoretical
limit on channel capacity for information theory and digital communication was formulated,
and the relationship between bandwidth and capacity was presented. The theory was
presented in two parts, the second of which described the maximum achievable transmission
rate through a channel.
Viterbi (1967) presented error bounds for convolutional codes. An optimal decoding
algorithm for convolutional codes was presented, along with soft-input hard-decision
decoding for serially concatenated convolutional codes.
Berrou et al. (1993) proposed turbo codes. Parallel concatenation of RSC codes was
presented for the first time. A soft-input soft-output iterative decoding scheme based on the
BCJR algorithm was proposed for decoding turbo codes. BER for multiple iterations was
presented, and it was shown that near Shannon limit performance was achieved.
S. Benedetto (1996) presented a decoding scheme for parallel concatenated codes. A method
to evaluate an upper bound on the bit error probability of parallel concatenated coding
schemes, averaged over all interleavers of a given length, was presented for PCBC and
PCCC. Optimal and suboptimal decoding schemes were compared.
Benedetto et al. (1998) presented a serial concatenation scheme for convolutional codes,
with an interleaver between the component codes. SISO iterative decoding for serially
concatenated codes was presented.
Forney (1973) reviewed the Viterbi algorithm for decoding convolutional codes. A
soft-input hard-output (SIHO) decoding was presented.
Berrou et al. (1996) presented near Shannon limit error correcting coding and decoding.
SISO decoding of parallel concatenated codes was presented, and the feedback decoding rule
was implemented as P pipelined identical decoders.
Hagenauer et al. (1989) presented the Viterbi algorithm with soft-decision output. The
Viterbi algorithm was modified to produce soft outputs instead of hard outputs, and
applications of the soft-output Viterbi algorithm (SOVA) were presented.
Pyndiah et al. (1994) proposed near-optimum decoding of product codes. The encoder and
decoder structure for serially concatenated product codes was presented, and a reduction of
decoder complexity was formulated.
Benedetto et al. (1998) presented binary convolutional codes and recursive convolutional
codes for the construction of turbo codes. The best RSC codes for turbo code construction,
in terms of average error rate probability, were presented.
Hokfelt et al. (1999) presented trellis termination strategies for convolutional turbo codes.
The BER performance of trellis-terminated and unterminated turbo codes was compared.
Robertson et al. (1995) presented optimal and suboptimal decoding techniques. Optimal and
suboptimal MAP decoding algorithms implemented in the log domain were presented, and
the two techniques were compared.
Papke et al. (1996) proposed an improved SOVA decoding algorithm. The improvement
over SOVA was presented for parallel concatenated turbo codes, with the SOVA algorithm
in the log domain used to improve BER performance. A comparison of the improved SOVA
with the original SOVA decoding algorithm was also presented.
Gnanasekaran and Aarthi (2010) proposed a new technique for decoding turbo codes: a
modified log-MAP decoding algorithm. The information exchanged between the two
decoders was scaled to improve the performance of the log-MAP algorithm, with the scaling
factor selected according to channel conditions. An optimized scaling factor was obtained,
and a mathematical relationship between the scaling factor and BER was derived.
Fossorier et al. (1998) studied the SOVA and max-log-MAP decoding algorithms. The
equivalence between the two techniques was presented, and iterative decoding of turbo
codes using the two techniques was shown together with a comparison of their BER performance.
Garello et al. (2001) presented free distance measurement for serially concatenated turbo
codes with interleavers. The effect of free distance on BER was analyzed, the free distance
of turbo codes with different generator polynomials was computed, and it was shown how
the interleaver affects the free distance properties of turbo codes.
Yuan et al. (1999) proposed a new interleaver design technique for turbo codes. The
performance of different interleaver types was compared, and the effect of interleaver length
on turbo code performance was shown.
Hokfelt et al. (1999) presented interleaver design for turbo codes based on the performance
of iterative decoding. It was shown that the s-random interleaver performs better than the
diagonal circular-shifting interleaver. Trellis termination for the interleaver was presented.
Feng et al. (2002) presented a new code-matched interleaver for turbo codes. Idle cycles
were eliminated in the code-matched interleaver to improve performance, and the effect of
block size was also considered in the design.
Crozier and Guinand (2001) proposed a new interleaver design. A high-performance,
low-memory interleaver bank was designed for turbo codes, and it was shown that the
memory requirement was reduced by nearly a factor of two in the proposed design.
Shah et al. (2010) presented the performance of convolutionally coded IS-95 and turbo
coded CDMA 2000 over a Rayleigh fading channel. It was shown that turbo codes perform
much better than convolutional codes due to their constructional capabilities.
Banerjee et al. (2005) presented non-systematic turbo codes and compared them with
conventional systematic turbo codes. The free distance of both codes was computed, and it
was shown that non-systematic turbo codes have a lower error floor due to their effective
free distance properties.
Ping (2001) proposed a technique based on single parity checks to replace puncturing for
rate adjustment. Decoder complexity was shown to be reduced with this new scheme, and
the error floor of turbo codes was improved by the proposed method.
Keying and Ping (2004) presented improved two-state single parity check codes. Decoder
complexity for the proposed method was shown to be reduced while achieving performance
similar to classical turbo codes; performance was compared with the (15,13)8 turbo codes
used in 3GPP.
Ping et al. (2001) presented a new FEC code called the zigzag code. The MLA decoding
algorithm was presented for decoding zigzag codes, and a union bound for BER analysis was
given. Decoder complexity, in terms of additions required per iteration per information bit,
was formulated, and BER performance was compared with turbo codes.
Xiaofu et al. (2004) studied zigzag codes and their concatenation schemes. Various
sum-product based algorithms were presented and compared, and it was shown that the
improved version of the sum-product algorithm exhibits a better error convergence rate
while maintaining an essentially parallel form.
Kschischang and Frey (1998) presented a new technique for decoding compound codes.
Iterative decoding of compound codes by probability propagation in graphical models was
proposed, and the decoding complexity for different iterations was formulated.
Ping et al. (1999) presented low-density parity-check codes with a semi-random parity
check matrix. A decoding method for these LDPC codes was proposed, and it was shown
that the performance of LDPC codes was improved with the semi-random matrix.

1.6. Motivation
The main motivation behind this research (the development of modified turbo codes)
is to achieve a sufficiently low decoding complexity. Although recent developments in the
field of microelectronics have made complex decoding algorithms practically realisable, the
speed and complexity of the decoding algorithm remain key parameters in system design.
Reduced decoder complexity lowers the hardware requirements for implementing the
decoder. Such a low complexity error correcting code can be used in a number of real-life
applications such as telecommunication, satellite communication and others. Today's digital
communication requires fast-processing decoders, and reducing decoder complexity reduces
the computational delay at the receiver. This is the
main motivation behind this research work.

1.7. Research Objectives


The main objectives of this thesis are flexibility and complexity reduction.
Complexity reduction is a natural requirement in a research environment, where several
possible schemes may be implemented and their results compared on a system in the
shortest time possible. Turbo coding is a relatively complex scheme: there are several
parameters that can be varied and several alternative techniques that can be used.
The objectives are as follows:
12

Chapter 1- Introduction

Design and implementation of LCHTC and ILCHTC.

To analyze the BER performance of CTC and MTC, and to compare them.

Computation of decoder complexity for LCHTC, ILCHTC and CTC.

1.8. Thesis Outline


In Chapter 2, we discuss convolutional turbo codes (CTC): their principle,
concatenation scheme, and encoder and decoder structures. The chapter also covers the
principle of interleaving, interleaver requirements, interleaver design for turbo codes and
interleaver types, as well as zigzag codes. Finally, it introduces Modified Turbo Codes
(MTC), of which two types are discussed: Low Complexity Hybrid Turbo Codes (LCHTC)
and Improved Low Complexity Hybrid Turbo Codes (ILCHTC).
In Chapter 3, we present the simulation setup and simulation results, with
discussion, for CTC and MTC of different rates, together with a BER performance
comparison of LCHTC, ILCHTC and CTC. The decoder complexities of MTC and CTC are
also discussed.
In Chapter 4, we present the conclusions of this research work and its future scope.

Publication:
Balraj and Sandeep K. Arya, "Multiple Concatenation of Zigzag Codes with
Convolutional Codes (Modified Turbo Codes)," AICTE Sponsored National Conference on
CCEP, JCDM College of Engineering, Sirsa, vol. 5, no. 5, pp. 2-3, May 20-22, 2012.

