INTRODUCTION
1.1. General
The information age began with two major discoveries in 1948, one technological and the other theoretical. John Bardeen, Walter Brattain and William Shockley invented the transistor, while Claude Elwood Shannon founded information and digital communication theory [Shannon, 1948]. Today the communications industry takes full advantage of both discoveries.
The modern approach to error control in digital communications started with the seminal work of Shannon, Hamming and Golay. Hamming and Golay developed the first practical forward error control schemes, while Shannon formulated the fundamental limits on the efficiency of communication systems. Their work established the idea that, with clever design, errors could be corrected or avoided. Shannon's theory indicated that large improvements in the performance of communication systems were achievable; however, while it set a fundamental limit on efficiency, it did not say how to reach that limit.
The gap between Shannon's theoretical limit and the practical results of the time showed that there was ample room for improvement in communication systems, which motivated researchers around the globe. The challenge issued by Shannon could eventually be met thanks to the rapid growth in the number of transistors on a single silicon chip. A typical example is the invention of turbo codes and the iterative decoding implemented in the receiver: practical implementation of iterative decoding was very challenging because of its highly complex decoding algorithms, and only the development of silicon ICs made it realisable in hardware.
Chapter 1- Introduction
Improving the error-correcting capability of a code improves the quality of the received information in the same proportion and enables the transmission system to operate under more severe conditions. This is crucial for many applications: satellite systems require savings in weight (hardware), and in mobile communications the antenna size can be reduced as signal quality increases, as in other wireless applications.
Andrew Viterbi was the first to present a practical algorithm for decoding convolutional codes, in 1967, although the algorithm [Viterbi, 1967] was initially impractical because of its excessive storage requirements. Nevertheless, it forms the basis of the general understanding of convolutional codes and of iterative decoding for serially concatenated codes.
A paper published by Claude Berrou and co-authors at the ICC conference in 1993 revolutionised the field of forward error correction coding (FECC) and modern digital communication systems [Berrou et al., 1993]. It described a method of creating much more powerful error-correcting codes using parallel concatenation of convolutional codes. Its main features were two recursive convolutional encoders (RCE) interconnected via an interleaver, and global iterative decoding was the key trick used to achieve near-Shannon-limit performance. A large gain in BER performance was achieved over the FEC codes existing at the time. The invention of turbo codes drew much attention from researchers towards solving the practical issues of turbo and turbo-like codes. Unfortunately, this accuracy is paid for with a highly complex recursive decoding scheme, which has so far limited the understanding of its methods and hence its practical applications. The invention of Block Turbo Codes (BTC) closed much of the remaining gap to capacity [A. Glavieux, 1994].
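The parallel concatenation described above can be sketched in a few lines of code. The following is a minimal illustration only, assuming a common textbook (1, 5/7) octal RSC generator and a toy 8-bit block; real turbo codes use much longer blocks, trellis termination and puncturing.

```python
import random

def rsc_parity(bits):
    # Rate-1/2 recursive systematic convolutional (RSC) encoder with
    # generators (1, 5/7) in octal, assumed here purely for illustration.
    # Returns the parity sequence; the systematic output is the input itself.
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2         # feedback polynomial 7 = 111
        parity.append(a ^ s2)   # feedforward polynomial 5 = 101
        s1, s2 = a, s1
    return parity

def turbo_encode(bits, perm):
    # Parallel concatenation: one RSC encoder sees the data in natural
    # order, the other sees it permuted by the interleaver `perm`.
    parity1 = rsc_parity(bits)
    parity2 = rsc_parity([bits[i] for i in perm])
    # Rate-1/3 output: systematic bit plus one parity bit per encoder.
    return [v for trio in zip(bits, parity1, parity2) for v in trio]

random.seed(1)                  # fixed seed so the example is reproducible
msg = [1, 0, 1, 1, 0, 0, 1, 0]
perm = random.sample(range(len(msg)), len(msg))  # pseudo-random interleaver
cw = turbo_encode(msg, perm)
print(len(cw))  # 24 coded bits for 8 information bits (rate 1/3)
```

The systematic bits appear unchanged in every third position of the codeword, which is what lets the iterative decoder exchange extrinsic information about the same data bits between the two constituent decoders.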
A well-known result of information theory is that a randomly chosen code of sufficiently large block length is capable of approaching channel capacity. However, optimal decoding complexity increases exponentially with block length, up to the point where decoding becomes physically unrealisable [C. Berrou, 1996]. The goal of coding theorists is therefore to develop codes that have large equivalent block lengths, yet contain enough structure to make practical decoding possible. With standard code structures (such as convolutional codes), however, decoding complexity increases at a much faster rate than the achievable coding gain. For high-speed applications, concatenation of standard codes has proven effective in increasing the effective block length while keeping the complexity manageable.
Recently a new class of error-correcting codes, called block turbo codes, has been developed [Li Ping, 2001]. Thanks to the use of a pseudo-random interleaver, these codes appear random to the channel and yet possess enough structure for decoding to be physically realisable. The decoder complexity of such codes is greatly reduced, which makes practical decoder implementation straightforward.
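The pseudo-random interleaver mentioned above is simply an invertible permutation of bit positions. A minimal sketch, with the block length and seed chosen arbitrarily for the example:

```python
import random

def make_interleaver(n, seed=42):
    # A pseudo-random permutation of indices 0..n-1; the seed is an
    # arbitrary choice made only so the example is reproducible.
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def interleave(bits, perm):
    return [bits[i] for i in perm]

def deinterleave(bits, perm):
    # Invert the permutation: position j came from position perm[j].
    out = [0] * len(perm)
    for j, i in enumerate(perm):
        out[i] = bits[j]
    return out

perm = make_interleaver(8)
data = [1, 0, 1, 1, 0, 0, 1, 0]
scrambled = interleave(data, perm)
assert deinterleave(scrambled, perm) == data  # round trip recovers the data
```

Because the permutation is deterministic once its seed is fixed, the receiver can rebuild the same interleaver and undo it exactly, which is what makes decoding physically realisable despite the code's random appearance to the channel.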
[Figure: block diagram showing the information source, source encoder, encrypter, channel encoder and channel decoder, among other blocks.]
Figure 1.1 Typical Digital Communication System [John G. Proakis, 2001].
R = Rb / Rc    (1.1)

where Rb is the information bit rate at the input of the channel encoder and Rc is the code rate of the channel encoder. For an encoder that maps K information bits onto N coded bits, the code rate is

Rc = K / N    (1.2)
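A quick numerical check of these two relations; the (7,4) code and the 9600 bit/s information rate are values assumed purely for the example:

```python
# Numerical illustration of Eqs. (1.1) and (1.2).
K, N = 4, 7             # K information bits mapped onto N coded bits
Rc = K / N              # code rate, Eq. (1.2)
Rb = 9600               # information bit rate in bits/s (assumed)
R = Rb / Rc             # transmission rate after encoding, Eq. (1.1)
print(round(R))         # 16800: encoding raises the bit rate by the factor N/K
```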
To make the communication system less vulnerable to channel impairments, the channel encoder generates codewords that are as different from one another as possible. Since the transmission medium is a waveform medium, the sequence of bits generated by the channel encoder cannot be transmitted through it directly. The goals of modulation are not only to match the signal to the transmission medium, but also to enable simultaneous transmission of a number of signals over the same physical medium and to increase the data rate. The modulation format can be any digital format, such as phase shift keying (PSK), amplitude shift keying (ASK), frequency shift keying (FSK), or combinations of these.
A communication channel refers to the physical medium (copper wires, the radio medium or optical fibre). In the channel, noise, fading and interference corrupt the transmitted signal and cause errors in the received signal. This thesis considers only AWGN channels, whose noise ultimately limits system performance.
At the receiving end of the communication system, the demodulator processes the
channel-corrupted waveform and makes a hard or soft decision on each symbol. If the
demodulator makes a hard decision, its output is a binary sequence and the subsequent
channel decoding process is called hard-decision decoding. A hard decision in the
demodulator results in some irreversible information loss. If the demodulator passes the soft
output of the matched filter to the decoder, the subsequent channel decoding process is called
soft-decision decoding.
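The distinction between hard and soft decisions can be shown with a toy BPSK link over AWGN; the mapping, block length and noise level are all assumed purely for illustration:

```python
import random

random.seed(0)  # fixed seed so the example is reproducible
bits = [random.randint(0, 1) for _ in range(10)]

tx = [1.0 if b else -1.0 for b in bits]        # BPSK mapping: 0 -> -1, 1 -> +1
rx = [x + random.gauss(0.0, 0.5) for x in tx]  # AWGN channel, sigma assumed

hard = [1 if y > 0 else 0 for y in rx]  # hard decision: one bit per symbol
soft = rx                               # soft decision: pass the matched-filter
                                        # output on to the channel decoder
print(hard)
```

Thresholding in the `hard` line discards the reliability information contained in the magnitude of each received sample; a soft-decision decoder keeps that magnitude, which is exactly the "irreversible information loss" avoided above.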
The channel decoder works separately from the modulator/demodulator and has the
goal of estimating the output of the source encoder based on the encoder structure and a
decoding algorithm. In general, soft-decision decoding performs better than hard-decision decoding.
If encryption is used, the decrypter converts encrypted data back into its original
form. The source decoder transforms the sequence at its input based on the source encoding
rule into a sequence of data, which will be used by the information sink to construct an
estimate of the message. These three components, decrypter, source decoder and information
sink can be represented as a single component called the sink.
C = W log2(1 + S/N)  bits/sec    (1.3)
with C being the channel capacity, the maximum number of bits that can be transmitted through the channel per unit of time, W the bandwidth of the channel, and S/N the signal-to-noise ratio (SNR) at the receiver.
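A worked example of this formula; the telephone-line-like bandwidth and SNR values are assumed only for illustration:

```python
import math

def capacity(w_hz, snr_linear):
    # Shannon capacity of a band-limited AWGN channel, Eq. (1.3)
    return w_hz * math.log2(1.0 + snr_linear)

w = 3000.0                  # bandwidth in Hz (assumed)
snr_db = 30.0               # SNR in dB (assumed)
snr = 10 ** (snr_db / 10)   # convert dB to a linear power ratio
print(round(capacity(w, snr)))  # 29902 bits/s, the limit regardless of coding
```

No coding scheme can exceed this rate with arbitrarily low error probability; the point of the chapter is that turbo-like codes approach it with practical complexity.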
This result went against the conventional methods of the time, which lowered the probability of error by raising the SNR, that is, by increasing the power of the transmitted signal. Unfortunately, although Shannon's theorem sets down the fundamental limitations on communication efficiency, it provides no method through which these limits can be reached.
presented in two parts; the second part described the maximum achievable transmission rate through a channel.
Viterbi (1967) presented error bounds for convolutional codes together with an optimal decoding algorithm for them. Soft-input hard-decision decoding was presented for serially concatenated convolutional codes.
Berrou et al. (1993) proposed turbo codes. Parallel concatenation of RSC codes was presented for the first time, together with a soft-input soft-output iterative decoding scheme based on the BCJR algorithm. BER results for multiple iterations showed that near-Shannon-limit performance was achieved.
S. Benedetto (1996) revealed a decoding scheme for parallel concatenated codes. A method to evaluate an upper bound on the bit error probability, averaged over all interleavers of a given length, was presented for PCBC and PCCC, and optimal and sub-optimal decoding schemes were compared.
Benedetto et al. (1998) presented a serial concatenation scheme for convolutional codes, using an interleaver between the constituent codes, and SISO iterative decoding for the serially concatenated codes.
Forney (1973) analysed the Viterbi algorithm for decoding convolutional codes; a soft-input hard-output (SIHO) decoding was presented.
Berrou et al. (1996) revealed near-Shannon-limit error-correcting coding and decoding. SISO decoding of parallel concatenated codes was presented, with a feedback decoding rule realised as p pipelined identical decoders.
Hagenauer et al. (1989) optimised the Viterbi algorithm with soft-decision output: the algorithm was modified to produce soft outputs instead of hard outputs, and applications of the soft-output Viterbi algorithm were presented.
Pyndiah et al. (1994) proposed near-optimum decoding for product codes. Encoder and decoder structures for serially concatenated product codes were presented, and a decoder complexity reduction was formulated.
Benedetto et al. (1998) presented binary convolutional codes and recursive convolutional codes for the construction of turbo codes. The best RSC codes, in terms of average error rate probability, were presented for turbo code construction.
Hokfelt et al. (1999) revealed trellis-termination strategies for convolutional turbo codes, and compared the BER performance of trellis-terminated and unterminated turbo codes.
Robertson et al. (1995) presented optimal and suboptimal MAP decoding algorithms implemented in the log domain, and a comparison of the two techniques was shown.
Papke et al. (1996) proposed an improved SOVA decoding algorithm for parallel concatenated turbo codes. A SOVA algorithm in the log domain was presented to improve BER performance, and the improved SOVA was compared with the standard SOVA decoding algorithm.
Gnanasekaran and Aarthi (2010) presented a new technique for decoding turbo codes: a modified log-MAP decoding algorithm. The information exchanged between the two decoders was scaled to improve the performance of the log-MAP algorithm, with the scaling factor selected according to channel conditions. An optimised scaling factor was obtained, and a mathematical relationship between the scaling factor and BER was derived.
Fossorier et al. (1998) studied the SOVA and max-log-MAP decoding algorithms and presented the equivalence between the two techniques. Iterative decoding of turbo codes using the two techniques, and a comparison of their BER performance, were shown.
Garello et al. (2001) revealed free-distance measurement for serially concatenated turbo codes with interleavers. The effect of free distance on BER was analysed, the free distance of turbo codes with different generator polynomials was formulated, and it was shown how the interleaver affects the free-distance properties of turbo codes.
Yuan et al. (1999) proposed a new interleaver design technique for turbo codes. The performance of different types of interleaver was compared, and the effect of interleaver length on turbo code performance was shown.
Hokfelt et al. (1999) revealed an interleaver design for turbo codes based on the performance of iterative decoding. It was shown that the s-random interleaver performs better than the diagonal circular-shifting interleaver, and trellis termination for the interleaver was presented.
Feng et al. (2002) presented a new code-matched interleaver for turbo codes. Idle cycles were eliminated in the code-matched interleaver to improve performance, and the effect of block size was also considered in the design.
Crozier and Guinand (2001) proposed a new interleaver design: a high-performance, low-memory interleaver bank for turbo codes. It was shown that the memory requirement was reduced nearly two-fold in the proposed design.
Shah et al. (2010) presented the performance of convolutionally coded IS-95 and turbo-coded CDMA2000 over a Rayleigh fading channel. It was shown that turbo codes perform much better than convolutional codes because of their constructional capabilities.
Banerjee et al. (2005) revealed non-systematic turbo codes and compared them with conventional systematic turbo codes. The free distance of both codes was computed, and it was shown that non-systematic turbo codes have a lower error floor because of their effective free-distance properties.
Ping (2001) proposed a technique based on single parity checks to replace puncturing for rate adjustment. Decoder complexity was shown to be reduced under this new scheme, and the error floor of turbo codes was improved by the proposed method.
Keying and Ping (2004) presented improved two-state single parity check codes. The decoder complexity of the proposed method was shown to be reduced while achieving performance similar to classical turbo codes; performance was compared with the (15,13)8 3GPP turbo code.
Ping et al. (2001) presented a new FEC code called the zigzag code. An MLA decoding algorithm was presented for decoding zigzag codes, a union bound was presented for BER analysis, the decoder complexity in terms of additions required per iteration per information bit was formulated, and BER performance was compared with turbo codes.
Xiaofu et al. (2004) proposed zigzag codes and their concatenation schemes. Various sum-product-based algorithms were presented and compared, and it was shown that the improved version of the sum-product algorithm exhibits a better error-convergence rate while maintaining an essentially parallel form.
Kschischang and Frey (1998) revealed a new technique for decoding compound codes: iterative decoding by probability propagation in graphical models. The decoding complexity for different numbers of iterations was formulated.
Ping et al. (1999) presented low-density parity-check codes with a semi-random parity check matrix, along with a decoding method for LDPC codes. It was shown that the performance of LDPC codes was improved with the semi-random matrix.
1.6. Motivation
The main motivation behind this research (the development of modified turbo codes) is to achieve sufficiently low decoding complexity. Although recent developments in microelectronics have made complex decoding algorithms practically realisable, the speed and complexity of the decoding algorithm remain key parameters in system design. Reduced decoder complexity reduces the hardware required to implement the decoder, so such a low-complexity error-correcting code can be used in many real-life applications, including telecommunications and satellite communication. Today's digital communications require fast-processing decoders, and reducing decoder complexity reduces the computational delay at the receiver. This is the main motivation behind this research work.
To analyse the BER performance of CTC and MTC and to compare their BER performance.
Publication:
Balraj and Sandeep K. Arya, "Multiple Concatenation of Zigzag Codes with Convolutional Codes (Modified Turbo Codes)", AICTE Sponsored National Conference on CCEP, JCDM College of Engineering, Sirsa, vol. 5, no. 5, pp. 2-3, May 20-22, 2012.