
2017 2nd IEEE International Conference On Recent Trends in Electronics Information & Communication Technology (RTEICT), May 19-20, 2017, India

Comparative Analysis of Channel Coding Using BPSK Modulation

Meghana M N, Department of Telecommunication Engineering, RVCE, meghamn73@gmail.com
Dr. B. Roja Reddy, Department of Telecommunication Engineering, RVCE, rojareddyb@gmail.com
B. Krishnam Prasad, Laboratory of Electro Optic System (LEOS-ISRO)

Abstract— Minimum bit error rate is the aspiration of any communication system, so that the performance of the system is improved. The best way to achieve this is the use of channel codes in the communication system. In this paper the Hamming code, Convolutional code, RS code, Turbo code, LDPC code and Polar code are implemented with BPSK modulation in Matlab, and their performance is compared in terms of BER.

Keywords— Communication, Error Correcting Codes (ECC), Binary Phase Shift Keying (BPSK) modulation, BER.

I. INTRODUCTION

A low Bit Error Rate (BER) and a high data rate are the requirements of today's advanced radio communication. Whenever data is transmitted over a wireless medium, it is affected by many factors such as atmospheric changes, fading, multipath effects, electromagnetic interference, etc. Because of these unfavorable effects, the required performance of a communication system cannot be accomplished. One of the key parameters is the BER, on whose value the performance level of any communication system depends; it in turn depends on parameters such as bandwidth and transmitted power, which are fixed. So the only way to protect the data from errors is to apply some sort of coding to the data to be transmitted. The required BER and data rate can be obtained by the appropriate choice of channel code as well as modulation scheme; the price is paid in redundancy and computation.

Channel coding techniques are of two types, namely Automatic Repeat Request (ARQ) and Forward Error Correction (FEC) [1]. In the ARQ method the transmitter has to provide an alternative means when an error occurs: the error is detected and the data is retransmitted, but this method is costly and a retransmission mechanism is required. The FEC method is popular because retransmission of the data is avoided, bandwidth usage is averaged, and the transmitter need not do any correction; this action is taken by the receiver. FEC has two categories, namely block codes and convolutional codes.

Block codes are linear codes and they are cyclic in nature. They include many codes such as the Hamming code, Turbo code, Reed-Solomon (RS) code, Low-Density Parity Check (LDPC) code, Polar codes, etc. These are easily implementable codes, which correct random errors up to their error-correcting capability t.

Convolutional codes are not block codes. They take the input as a stream of data and work with bursts of errors [4]. The encoder output depends on the current input state and also on the previous output state, so a memory element is needed for buffering and storage.

The performance of any communication system depends on the BER, i.e. the ratio of the number of error bits to the number of transmitted bits. Noise is the basic parameter which corrupts the signal during transmission. The relation between transmitted signal and noise is called the Signal to Noise Ratio (SNR). BER and SNR are inversely related: increasing the SNR lowers the BER, but an increase in SNR means an increase in signal power, which is the major constraint for communication. The intelligent way to handle these criteria is the use and choice of coding.

This paper is organized as follows. Section II describes the theoretical background of the various error control codes (ECC): Hamming code, Convolutional code, Reed-Solomon code, Turbo code, LDPC code and Polar code. The simulation results are discussed in Section III, and a comparative analysis is carried out in Section IV. Finally, the optimum code is briefed in the Conclusion, which is Section V.

II. THEORETICAL BACKGROUND

A. Hamming Code
Hamming codes are linear codes invented by Richard Hamming in 1950. They are usually represented as (2^m - 1, 2^m - m - 1), where 2^m - 1 is the code length (n) and 2^m - m - 1 is the number of information bits [2]. The Hamming code is a perfect and simple code which can detect up to two bit errors and correct a one-bit error, so it has a minimum distance of 3 and an error correcting capability of 1.

B. Convolutional Code
The convolutional code was introduced in 1955 by Peter Elias, and the recursive systematic convolutional code was invented by Claude Berrou [3]. It has a number of parameters: k denotes the input bit sequences, n denotes the encoded bit sequences, and the constraint length is K = M + 1, where M is the number of shift registers.
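Since every scheme in the paper is evaluated as BER versus SNR under BPSK, a minimal Monte-Carlo sketch of the uncoded BPSK-over-AWGN baseline may help make the metric concrete. The paper's simulations are in MATLAB; this is an illustrative Python version, and the function name, bit count and seed are choices made here, not taken from the paper.

```python
import math
import random

def bpsk_ber(eb_n0_db, n_bits=100_000, seed=1):
    """Monte-Carlo BER of uncoded BPSK over an AWGN channel."""
    rng = random.Random(seed)
    eb_n0 = 10 ** (eb_n0_db / 10)       # dB -> linear
    sigma = math.sqrt(1 / (2 * eb_n0))  # noise std dev for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0            # BPSK mapping: 0 -> -1, 1 -> +1
        received = symbol + rng.gauss(0, sigma)  # AWGN channel
        decided = 1 if received > 0 else 0       # hard decision at threshold 0
        errors += decided != bit
    return errors / n_bits               # BER = error bits / transmitted bits

if __name__ == "__main__":
    for snr_db in (0, 4, 8):
        print(snr_db, "dB ->", bpsk_ber(snr_db))
```

Raising Eb/N0 drives the measured BER down, which is exactly the BER-SNR trade-off the introduction describes; channel coding shifts this curve left.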
978-1-5090-3704-9/17/$31.00 © 2017 IEEE
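The convolutional encoding of Section II-B (each output stream is the mod-2 convolution of the message with a generator polynomial, using M = K - 1 buffered bits) can be sketched as follows. This is an illustrative Python version, not the paper's MATLAB code; the rate-1/2, K = 3 configuration with octal generators (7, 5) is a common textbook choice, not stated in the paper.

```python
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    """Rate-1/n convolutional encoder with constraint length K = M + 1.

    Each output bit is the XOR (mod-2 convolution) of the current input bit
    and the M = K - 1 shift-register bits, selected by one generator mask.
    """
    state = 0            # the M memory bits
    out = []
    for b in bits:
        window = (b << (K - 1)) | state          # current bit + memory
        for g in gens:                            # one output bit per generator
            out.append(bin(window & g).count("1") % 2)
        state = window >> 1                       # shift the register
    return out

# Classic example: input 1011 encodes to 11 10 00 01 with generators (7, 5).
print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Two generators produce two output bits per input bit, i.e. a rate-1/2 code, and the dependence on `state` is the memory requirement noted in the text.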

The encoded bits are obtained by multiplying the message bits with the generator polynomial. The encoded bits are transmitted through the channel, received by the receiver, and decoded using the Viterbi decoder.

Figure 1. Block diagram of the Viterbi decoder [4]

Fig 1 shows the block diagram of the Viterbi decoder. The encoder output is passed through the channel; at the receiver side, the Branch Metric Unit (BMU) receives the data and a metric is calculated for each input sequence, i.e. the Euclidean distance is calculated for the entire metric. The Add Compare and Select (ACS) unit uses the previous row values to calculate the minimum distance. The Survivor Memory (SM) unit provides previous state information. The maximum likelihood decision obtained from the BMU is stored in the Trace-back Unit (TBU). Using these 4 stages of the Viterbi decoder, the original data is decoded again.

C. Reed-Solomon Code
Irving S. Reed and Gustave Solomon developed the RS codes in 1960. These are non-binary cyclic codes in which sequences of m bits are grouped as one symbol [5]. The code has k input block symbols, n encoded block symbols and message symbol size m. RS coding is based on Galois field arithmetic: a generator polynomial g(x) is created, then g(x) is multiplied by the input symbols to obtain the encoded symbols. The message symbols are decoded in four steps, each implemented by a different algorithm.

Fig 2 represents the RS decoder. The generator polynomial always divides the encoded data; consequently, the elementary step of the decoding process is the division of the received polynomial by the generator polynomial, which brings forth the quotient and the remainder. The remainders resulting from the divisions are known as the syndromes. The following step is the construction of the error locator polynomial; the inverses of the roots of the error locator polynomial give the error locations. Either the Berlekamp–Massey algorithm or Euclid's algorithm is employed to obtain the coefficients of the error locator polynomial. Once the coefficients are known, the Chien search algorithm is used to find the error locations. Afterwards, the direct method or the Forney algorithm computes the error values. The error correction is done by adding a suitable error polynomial to the received polynomial.

Figure 2. RS decoder [5]

D. Turbo Code
The Turbo code, a parallel concatenated code, was developed by C. Berrou in 1993. It is the parallel concatenation of two convolutional codes, which is done by interleaving, and this makes it different from the convolutional code. The Turbo encoder is built using two convolutional codes separated by an interleaver [7]. Depending on the application, a suitable interleaver is selected from the various types. A variety of decoding algorithms are available for Turbo decoding, such as Maximum A posteriori Probability (MAP), Log-MAP, and the Soft Output Viterbi Algorithm (SOVA).

Figure 3. Iterative Turbo decoder [7]

Fig 3 shows the iterative decoding, which contains two decoder stages. Each decoder determines extrinsic information based on the log-likelihood ratio (LLR). First, the extrinsic information is determined by the BCJR algorithm of decoder 1; this information is interleaved and applied to decoder 2, and the process is continued for a certain number of iterations. The obtained extrinsic information is then de-interleaved to produce the LLR. The decoded data is estimated from the sign of the LLR.

E. Low Density Parity Check (LDPC) Code
The LDPC code was first developed by Gallager in 1963 but was impractical to implement; the work was rediscovered in 1996. This code uses a parity check matrix which contains few 1's compared to 0's, hence the name low density parity check code [6][8]. This is an advantage, as it provides performance very close to the capacity of the channel.

The LDPC encoder can be regular or irregular; generally the Gallager code, which is regular, is used. The LDPC decoder is also of different types, mainly the Message Passing Algorithm (MPA), the Belief Propagation Algorithm (BPA) and the Sum-Product Algorithm (SPA) [10].

Fig 4 shows the block diagram of the SPA decoder. It works in three steps:

• Initialization step: This is the first step of the SPA, where the probabilities at the variable nodes are computed and then transmitted to the check nodes.

• Horizontal step: The interchange of information between parity check nodes and symbol nodes is carried out in this step. In order to enhance the certainty of the probability of each bit and to perform a precise bit decision in the vertical step, all the coefficients are calculated in the horizontal step.

• Vertical step: This is the last step, where the posterior probabilities of the current iteration are calculated to approximate the decoded vector. Here the syndrome equation is analyzed to see whether the syndrome vector is zero or not. If it is zero, the original data is decoded depending on the probabilities; if it is not zero, the symbol nodes are updated, the parity check nodes are recalculated, and these values are used in the next iteration, which starts again from the horizontal step.

Figure 4: Block diagram of the SPA decoder (received data → initialization step → horizontal step → vertical step)

F. Polar Codes
Polar codes are a new type of FEC developed by Arikan in 2009. They work on the principle of channel polarization, which can attain the Shannon capacity with low complexity [9]. Channel polarization is accomplished by channel combining and channel splitting. GN represents the generator matrix of the polar code, which is taken as the Kronecker power of G2. The channels are measured by mutual information and Bhattacharyya parameters: some of the channels become noisy and some become noiseless, and the noiseless channels are used to transmit the input data. The decoders are of different types: the Successive Cancellation (SC) decoder, the List decoder and the Fast decoder [11].

Fig 5 shows the SC decoder with n = 8. This decoder mainly consists of 3 stages. The first stage estimates the received data using the LLR method. The LLR data is passed to stage 2, which contains check nodes (g) and variable nodes (f); here simple additions and products are performed through a feedback mechanism. Stage 3 performs the partial sum of the data obtained from stage 2 in order to get the actual decoded data [12].

Figure 5. Successive Cancellation decoder with n = 8 [10]

III. PERFORMANCE ANALYSIS OF DIFFERENT CODES

The simulation of the Hamming (7,4) code is shown in Fig. 6; the code has the potential to detect two-bit errors and correct one-bit errors, hence its minimum distance is 3. The graph shows that its bit error rate close to 10^-5 is very near to that of the uncoded signal; as a result, the SNR difference between the two signals at a BER of 10^-5 is very small, and hence the coding gain is also small. This code is easy to implement, but it can correct only one bit error per codeword. The coding gain is small, and the overhead is a 1.75-times increase in the required bandwidth, since 3 bits are added to each 4-bit input message, a 75% increase.

Figure 6: BER plot of the Hamming code

Fig 7 shows the simulation of the convolutional code (7,4) with constraint length K = 3. It is observed that the Viterbi-decoded signal has a higher bit error rate than the uncoded signal at lower SNR values.
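The single-error correction of the Hamming (7,4) code evaluated above can be verified directly: if the parity-check columns are ordered as the binary numbers 1 to 7, the syndrome read as an integer is the (1-based) position of a flipped bit. This is an illustrative Python sketch of that standard construction, not the paper's MATLAB code.

```python
# Columns of H are the numbers 1..7 in binary (row r holds bit r of each
# column), so a single-bit error yields a syndrome equal to its position.
H = [[(col >> r) & 1 for col in range(1, 8)] for r in range(3)]

def encode(d):
    """Encode 4 data bits into a Hamming (7,4) codeword:
    data at positions 3, 5, 6, 7; parity at positions 1, 2, 4 (1-based)."""
    w = [0] * 7
    w[2], w[4], w[5], w[6] = d
    w[0] = w[2] ^ w[4] ^ w[6]   # covers positions 1, 3, 5, 7
    w[1] = w[2] ^ w[5] ^ w[6]   # covers positions 2, 3, 6, 7
    w[3] = w[4] ^ w[5] ^ w[6]   # covers positions 4, 5, 6, 7
    return w

def correct(word):
    """Correct up to one bit error in a received 7-bit word."""
    s = sum((sum(H[r][i] * word[i] for i in range(7)) % 2) << r for r in range(3))
    if s:                        # non-zero syndrome -> flip the indicated bit
        word = word.copy()
        word[s - 1] ^= 1
    return word
```

Flipping any single bit of an encoded word and running `correct` restores the original codeword, which is exactly the t = 1 capability discussed above.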

This is because, for the Viterbi decoder to work efficiently, the errors must be randomly distributed; at lower SNR values there is a greater chance of multiple errors in the received coded bits, which the Viterbi algorithm is unable to recover. This can be optimized by using soft-decision decoding, limiting the trace-back length, etc. The convolutional code is easy to implement with a finite state register, and there is no need to segment the data into blocks of fixed size. It requires memory, however, and this code has a more complex decoder, takes more time to decode, and has lower bandwidth efficiency.

Figure 7: BER plot of the convolutional code

Fig 8 shows the error probability graph of different RS codes. Each code has a unique error correcting capability over a different range of SNR. Consider RS (255,223): there are 255 symbols and each symbol has 8 bits, i.e. m = 8, and since the error correcting capability is given by (n - k) = 2t, up to t = 16 error symbols can be corrected. The RS (255,223) code can correct more errors and gives a better BER than the other coding methods; its coding gain is high and its overhead is low. This code provides further advantages because, among linear codes with this encoder specification, it achieves the maximum possible minimum distance, and it is capable of correcting burst errors.

Figure 8. BER plot of the Reed-Solomon code

Fig 9 shows the comparison of the performance of the Turbo code for various frame sizes K. The bit error rate improves with increasing frame size; as a result, a lower BER can be achieved while keeping the SNR constant. Consider two frame sizes, K = 100 and K = 320. It can be seen in Fig 9 that for frame size K = 320 a BER of 8.4 x 10^-10 is achieved at an SNR of 5.8 dB, whereas a BER of 5.76 x 10^-7 is achieved at the same SNR for K = 100. Thus, the larger the frame size, the better the performance level that can be achieved, at the price of increased latency; moreover, setting the optimal parameters for the encoder and decoder is practically infeasible.

Figure 9: BER plot of the Turbo code

Fig 10 shows the MATLAB simulation of different regular LDPC codes. LDPC codes should have a number of 1's per column (wc) >= 3 so that the minimum distance grows linearly; thus a longer code length yields better coding gain. As the number of iterations increases, the error probability decreases. LDPC (2304,1152) provides good performance at 30 iterations. LDPC decoding algorithms have more parallelism, but more computational time is required, and linear independence of the parity check matrix is needed.

Figure 10: BER plot of the LDPC code

Fig 11 shows the advanced coding technique, the polar code, which provides good efficiency. The simulation results show that the polar code approaches the required capacity as the code length increases, and this holds for the Successive Cancellation (SC) decoder. However, by using the List decoder the performance can be improved without increasing the code length. The polar code (2048,1008) with List decoding and CRC-16 provides a lower BER than the SC polar code.

Figure 11: BER plot of the polar code
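The RS symbol arithmetic can be checked quickly: for an RS(n, k) code the correcting capability follows from n - k = 2t, and the coding gain is the SNR gap between the uncoded and coded curves at a fixed BER. A tiny Python sketch (the helper names are ours, not from the paper):

```python
def rs_capability(n, k):
    """Symbol-error-correcting capability t of an RS(n, k) code: n - k = 2t."""
    return (n - k) // 2

# RS(255, 223) carries 32 parity symbols, so t = 16 symbol errors are
# correctable; with 8-bit symbols one corrected symbol can absorb a burst
# of up to 8 bit errors, which is why RS codes handle burst errors well.
print(rs_capability(255, 223))   # -> 16

def coding_gain(snr_uncoded_db, snr_coded_db):
    """Coding gain in dB at a fixed BER: SNR saved by using the code."""
    return snr_uncoded_db - snr_coded_db
```

For example, at a BER of 10^-9 the uncoded SNR of 12.75 dB against the polar code's 2.684 dB gives a coding gain of about 10.066 dB, the figure reported in the comparative analysis.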


IV. COMPARATIVE ANALYSIS

Table 1. Comparative analysis of different coding techniques (SNR in dB, with and without channel coding)

BER       No coding   Hamming   Convolution   Turbo   LDPC   RS     Polar
10^-2     6           4.33      0.56          -       0.32   1.28   1.28
10^-3     7.7         6.76      2.88          -       0.65   1.72   1.45
10^-4     9.25        8.39      4.42          0.01    0.98   2.05   1.62
10^-5     10          9.59      5.60          1.33    1.28   2.32   1.78
10^-6     10.73       10.52     6.53          2.50    1.58   2.50   1.05
10^-7     11.61       11.30     7.32          3.50    1.89   2.73   2.18
10^-8     12.23       11.97     7.99          4.38    2.32   2.91   2.43
10^-9     12.75       12.55     8.56          5.13    2.86   3.06   2.68
10^-10    13.24       13.05     9.06          5.78    3.37   3.21   2.93

Figure 12: BER plot of the different channel codes

Table 1 and Fig 12 show the BER analysis of the different error correcting coding techniques: Turbo code, Hamming code, LDPC code, convolutional code, RS code and polar code. Coding gain is the difference between the SNR values of the coded and uncoded signals at a particular BER. The higher the coding gain, the better the performance of the code and the lower its susceptibility to errors.

At a BER of 10^-9, the SNR value without channel coding is 12.75 dB, whereas the polar code has an SNR value of 2.684 dB, the RS code 3.067 dB and the LDPC code 2.86 dB. So the polar code provides the highest coding gain and yields the best performance at this rate.

V. CONCLUSION

In this paper the various channel codes are compared. As the coding gain increases, the performance increases and the susceptibility to noise reduces. When the polar, RS and LDPC codes are compared against the uncoded case, the results show that the coding gain reaches 10.066 dB for polar codes. With this coding gain the susceptibility to error is lower; hence, using this coding technique, the data can be transmitted over longer distances.

REFERENCES

[1] B. Sklar, "Digital Communications: Fundamentals and Applications", Prentice Hall, 2nd edition, 2001.
[2] W. Xiong and D. W. Matolak, "Performance of Hamming Codes in Systems Employing Different Code Symbol Energies," Proc. IEEE Wireless Communications and Networking Conference (WCNC), pp. 1055–1058.
[3] K. J. Hole, "Rate k/(k+1) minimal punctured convolutional encoders", IEEE Transactions on Information Theory, vol. 37, no. 3, May 1991.
[4] R. J. McEliece and W. Lin, "The trellis complexity of convolutional codes", IEEE Transactions on Information Theory, vol. 42, no. 6, November 1996.
[5] B. K. Mishra, S. Kaulgud and S. Save, "Design of RS Code Using Simulink Platform", International Conference & Workshop on Recent Trends in Technology (TCET), 2012; proceedings published in the International Journal of Computer Applications (IJCA).
[6] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices", IEEE Transactions on Information Theory, vol. 45, pp. 399–431, 1999.
[7] C. Berrou, A. Glavieux and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes," Proceedings of ICC 1993, Geneva, Switzerland, pp. 1064–1070, May 1993.
[8] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low density parity check codes," Electronics Letters, vol. 33, no. 6, pp. 457–458, March 1997.
[9] T. J. Richardson and M. A. Shokrollahi, "Design of capacity-approaching irregular low-density parity-check codes", IEEE Transactions on Information Theory, vol. 47, no. 2, February 2001.
[10] E. Arıkan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, July 2009.
[11] I. Tal and A. Vardy, "List decoding of polar codes," Proc. IEEE International Symposium on Information Theory (ISIT 2011), pp. 1–5, August 2011.
[12] C. Leroux, A. J. Raymond, G. Sarkis and W. J. Gross, "A semi-parallel successive-cancellation decoder for polar codes," IEEE Transactions on Signal Processing, vol. 61, no. 2, pp. 289–299, January 2013.
[13] H. Vangala, E. Viterbo and Y. Hong, "A comparative study of polar code constructions for the AWGN channel," arXiv:1501.02473 [cs.IT], 2015. Available: http://arxiv.org/abs/1501.02473.
[14] H. Vangala, Y. Hong and E. Viterbo, "Efficient algorithms for systematic polar encoding," IEEE Communication Letters, vol. 20, no. 1, pp. 17–20, January 2016.

