Abstract— Minimum Bit Error Rate (BER) is the aspiration of any communication system, so that the performance of the system is improved. To achieve this, the best way is the use of channel codes in the communication system. In this paper the Hamming code, Convolution code, RS code, Turbo code, LDPC code and Polar code with BPSK modulation are implemented in Matlab. The performance of these channel codes is compared in terms of BER.

Keywords— Communication, Error Correcting Codes (ECC), Binary Phase Shift Keying (BPSK) modulation, BER.

I. INTRODUCTION

A low Bit Error Rate (BER) and a high data rate are the requirements of today's advanced radio communication. Whenever data is transmitted over a wireless medium, it is affected by many factors such as atmospheric changes, fading, multipath effects, electromagnetic interference etc. Because of these unfavorable effects, the accomplishment of the required performance for any communication system is not possible. One of the key parameters is the BER, on whose value the performance level of any communication system depends; this in turn depends on various parameters such as bandwidth and transmitted power, which are fixed. So the only way to protect the data from errors is to apply some sort of coding to the data to be transmitted. The required BER and data rate can be obtained by the appropriate use of a channel code as well as a modulation scheme. The price is paid with redundancy and computation.

Channel coding techniques are of two types, namely Automatic Repeat Request (ARQ) and Forward Error Correction (FEC) [1]. In the ARQ method, the transmitter has to provide an alternative means when an error occurs: the error is detected and the data is retransmitted, but this method is costly and a retransmission mechanism is required. The FEC method is popular because retransmission of the data can be avoided, bandwidth can be saved, and the transmitter need not do any correction; this action is taken by the receiver. FEC has two categories, namely block codes and convolutional codes.

Block codes are called linear codes and they are cyclic in nature. They include many codes like the Hamming code, Turbo code, Reed-Solomon (RS) code, Low-Density Parity Check (LDPC) code, Polar codes etc. These are easily implementable codes. These codes correct all random errors (t).

The convolution codes are not block codes. They take the input as a stream of data and work with bursts of errors [4]. The encoder output depends on the current input state and also on the previous output state, so it needs a memory element for buffering and storage.

The performance of any communication system depends on the BER, i.e. the ratio of the number of error bits to the transmitted bits. Noise is the basic parameter which corrupts the signal when transmitted. The relation between the transmitted signal and the noise is called the Signal to Noise Ratio (SNR). The BER and SNR are inversely proportional to each other, and hence increasing the SNR makes the BER value low; but an increase in SNR means an increase in signal power, which is the major constraint for the communication. The intelligent way to handle these criteria is the use and choice of coding.

This paper is organized as follows: Section II describes the theoretical background of the various ECC codes like the Hamming code, convolutional code, Reed-Solomon code, Turbo code, LDPC code and Polar code. The simulation results are discussed in Section III. A comparative analysis is carried out in Section IV. Finally, the optimized code is briefed in the Conclusion, which is Section V.

II. THEORETICAL BACKGROUND

A. Hamming Code
Hamming codes are linear codes invented by Richard Hamming in 1950. They are usually represented as (2^m - 1, 2^m - m - 1), where 2^m - 1 is the code length (n) and 2^m - m - 1 is the number of information bits [2]. It is a perfect and simple code which can detect up to two errors and correct a 1-bit error, so it has a minimum distance of 3 and an error correcting capability of 1.

B. Convolution Code
The convolution code was introduced in 1955 by Peter Elias, and the recursive systematic convolutional code was invented by Claude Berrou [3]. It has a number of parameters: k denotes the input bit sequences, n denotes the encoded bit sequences, the constraint length is denoted by K = M + 1, and M is the number of shift registers.
978-1-5090-3704-9/17/$31.00 © 2017 IEEE
2017 2nd IEEE International Conference On Recent Trends in Electronics Information & Communication Technology (RTEICT), May 19-20, 2017, India
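The encoder structure described above can be sketched in code. The paper's implementation is in Matlab and its generator polynomials are not stated; the Python sketch below assumes, purely for illustration, a rate-1/2 encoder with constraint length K = 3 (M = 2 shift-register stages) and the common generator pair (7, 5) in octal.

```python
# Illustrative rate-1/2 convolutional encoder, constraint length K = 3.
# Generators (assumed, not from the paper): g0 = 1 + D + D^2 (111b = 7 octal),
# g1 = 1 + D^2 (101b = 5 octal). Two coded bits are emitted per message bit.
def conv_encode(bits):
    state = [0, 0]                           # M = K - 1 = 2 memory elements
    out = []
    for b in bits:
        out.append(b ^ state[0] ^ state[1])  # output of g0 = 1 + D + D^2
        out.append(b ^ state[1])             # output of g1 = 1 + D^2
        state = [b, state[0]]                # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))             # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

The memory element mentioned above appears here as the two-stage `state` register: each output bit depends on the current input and on the previous inputs still held in the register.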
The encoded bits are obtained by multiplying the message bits with the generator polynomial. The encoded bits are transmitted through the channel, received by the receiver, and decoding is done using the Viterbi decoder.

Figure 1. Block diagram of the Viterbi decoder [4]

Fig. 1 shows the block diagram of the Viterbi decoder. The encoder output is passed through the channel; at the receiver side, the Branch Metric Unit (BMU) receives the data and a metric is calculated for each input sequence, i.e. the Euclidean distance is calculated for the entire metric. The Add Compare and Select (ACS) unit uses the previous row values to calculate the minimum distance. The Survivor Memory (SM) unit provides previous state information. The maximum likelihood decision obtained from the BMU is stored in the Trace-back Unit (TBU). Using these four stages, the Viterbi decoder recovers the original data.

C. RS Code
... The decoding algorithm computes the error values. The correction of errors is done by adding a suitable error polynomial to the received polynomial.

D. Turbo Code
The Turbo code is a parallel concatenation code developed by C. Berrou in 1993. It is the parallel concatenation of two convolutional codes, achieved by interleaving, which makes it different from the convolutional code. The Turbo encoder is built using two convolutional codes separated by an interleaver [7]. Depending on the application, a suitable interleaver is selected from the various types. A variety of decoding algorithms is available for Turbo decoding, such as Maximum A Posteriori probability (MAP), Log-MAP and the Soft Output Viterbi Algorithm (SOVA).

E. LDPC Code
... channel. The LDPC encoder can be regular or irregular; generally the Gallager code, which is a regular one, is used. The LDPC decoder is also of different types, mainly the Message Passing Algorithm (MPA), the Belief Propagation Algorithm (BPA) and the Sum-Product Algorithm (SPA) [10].

F. Polar Code
... stage 3 performs the partial sum of the data obtained from stage 2 in order to get the actual decoded data [12].

III. SIMULATION RESULTS

... For efficient work of the Viterbi decoder, the errors must be randomly distributed; there are more chances of multiple errors in the received coded bits at lower SNR values, which the Viterbi algorithm is unable to recover. This can be optimized by using soft decision decoding, limiting the trace-back length etc. The convolutional code is easy to implement over a finite state register and there is no need to segment the data into blocks of fixed size. It requires memory, but this code has a more complex decoder, requires more time to decode and has less bandwidth efficiency.

... Finding the optimal parameters for the encoder and decoder is practically unsolvable.
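The BER figures discussed in this section come from Monte-Carlo simulation. The paper's simulations are in Matlab; as an illustration of the method, the Python/NumPy sketch below estimates the uncoded BPSK BER over an AWGN channel, i.e. the "no coding" baseline against which the coded systems are measured. The bit count and Eb/N0 values are arbitrary choices, not the paper's.

```python
import numpy as np

# Monte-Carlo estimate of the uncoded BPSK bit error rate over an AWGN
# channel (the "no coding" baseline curve). Illustrative sketch only.
rng = np.random.default_rng(0)

def ber_bpsk_awgn(ebn0_db, n_bits=200_000):
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                   # BPSK mapping: 0 -> +1, 1 -> -1
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (2 * ebn0))          # noise std for unit symbol energy
    received = symbols + sigma * rng.normal(size=n_bits)
    decided = (received < 0).astype(int)     # hard decision at threshold 0
    return np.mean(decided != bits)          # fraction of bits in error

for snr_db in [0, 2, 4, 6, 8]:
    print(f"Eb/N0 = {snr_db} dB  BER ~ {ber_bpsk_awgn(snr_db):.2e}")
```

The estimate falls as Eb/N0 rises, which is the inverse BER-SNR relation noted in the introduction; a coded system is simulated the same way, with the encoder before the mapper and the decoder after the decision stage.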
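The comparative analysis rests on the notion of coding gain: the difference between the SNR required without coding and with coding at the same target BER. A minimal sketch of that calculation, using the BER = 10^-9 values quoted in the text:

```python
# Coding gain = SNR needed without coding minus SNR needed with coding,
# at the same target BER. Values below are those quoted at BER = 1e-9.
snr_uncoded_db = 12.75                        # no channel coding
snr_db = {"Polar": 2.684, "RS": 3.067, "LDPC": 2.86}

for code, snr in snr_db.items():
    gain = snr_uncoded_db - snr
    print(f"{code}: coding gain = {gain:.3f} dB")
# Polar: 12.75 - 2.684 = 10.066 dB, the figure quoted in the conclusion.
```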
IV. COMPARATIVE ANALYSIS

Table 1. Comparative analysis of different coding techniques (all values in dB; each code column gives the SNR required with that channel coding)

BER     SNR (no coding)  Hamming  Convolution  Turbo  LDPC  RS    Polar
10^-2   6                4.33     0.56         -      0.32  1.28  1.28
10^-3   7.7              6.76     2.88         -      0.65  1.72  1.45
10^-4   9.25             8.39     4.42         0.01   0.98  2.05  1.62
10^-5   10               9.59     5.60         1.33   1.28  2.32  1.78
10^-6   10.73            10.52    6.53         2.50   1.58  2.50  1.05
10^-7   11.61            11.30    7.32         3.50   1.89  2.73  2.18
10^-8   12.23            11.97    7.99         4.38   2.32  2.91  2.43
10^-9   12.75            12.55    8.56         5.13   2.86  3.06  2.68
10^-10  13.24            13.05    9.06         5.78   3.37  3.21  2.93

Figure 12. BER plot of different channel codes

Table 1 and Fig. 12 show the BER analysis of the different error correcting coding techniques: Turbo coding, Hamming code, LDPC code, Convolution code, RS code and Polar code. The coding gain is the difference between the SNR values of the coded and uncoded signal at a particular BER. The higher the coding gain, the better the performance of the code and the lower the susceptibility to errors.

At a BER of 10^-9, the SNR value without channel coding is 12.75 dB, whereas the Polar code has an SNR value of 2.684 dB, the RS code 3.067 dB and the LDPC code 2.86 dB. So the Polar code provides the highest coding gain and yields the best performance at this rate.

V. CONCLUSION

In this paper the various channel coding schemes are compared. As the coding gain increases, the performance increases and the susceptibility to noise reduces. When the Polar, RS and LDPC codes are compared with the uncoded case, the channel coding gain is improved by 10.066 dB for Polar codes. With this coding gain the susceptibility to errors is lower; hence, using this coding technique, the data can be transmitted over longer distances.

REFERENCES

[1] B. Sklar, Digital Communications: Fundamentals and Applications, 2nd ed., Prentice Hall, 2001.
[2] W. Xiong and D. W. Matolak, "Performance of Hamming codes in systems employing different code symbol energies," in Proc. IEEE Wireless Communications and Networking Conference (WCNC), pp. 1055-1058.
[3] K. J. Hole, "Rate k/(k+1) minimal punctured convolutional encoders," IEEE Transactions on Information Theory, vol. 37, no. 3, May 1991.
[4] R. J. McEliece and W. Lin, "The trellis complexity of convolutional codes," IEEE Transactions on Information Theory, vol. 42, no. 6, November 1996.
[5] B. K. Mishra, S. Kaulgud and S. Save, "Design of RS code using Simulink platform," in Proc. International Conference & Workshop on Recent Trends in Technology (TCET), 2012, published in International Journal of Computer Applications (IJCA).
[6] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Transactions on Information Theory, vol. 45, pp. 399-431, 1999.
[7] C. Berrou, A. Glavieux and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," in Proc. ICC 1993, Geneva, Switzerland, May 1993, pp. 1064-1070.
[8] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low density parity check codes," Electronics Letters, vol. 33, no. 6, pp. 457-458, Mar. 1997.
[9] T. J. Richardson and M. A. Shokrollahi, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Transactions on Information Theory, vol. 47, no. 2, February 2001.
[10] E. Arikan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051-3073, July 2009.
[11] I. Tal and A. Vardy, "List decoding of polar codes," in Proc. IEEE Int. Symp. Inf. Theory (ISIT 2011), Aug. 2011, pp. 1-5.
[12] C. Leroux, A. J. Raymond, G. Sarkis and W. J. Gross, "A semi-parallel successive-cancellation decoder for polar codes," IEEE Transactions on Signal Processing, vol. 61, no. 2, pp. 289-299, Jan. 2013.
[13] H. Vangala, E. Viterbo and Y. Hong, "A comparative study of polar code constructions for the AWGN channel," arXiv:1501.02473 [cs.IT], 2015. Available: http://arxiv.org/abs/1501.02473.
[14] H. Vangala, Y. Hong and E. Viterbo, "Efficient algorithms for systematic polar encoding," IEEE Communication Letters, vol. 20, no. 1, pp. 17-20, Jan. 2016.