BY
ADVISOR
JUNE 10, 2019
DILLA, ETHIOPIA
Abstract
Data communication does not always go well; errors sometimes occur when data is transmitted. Error-free transmission is a major concern in advanced electronic circuits, because errors introduced during transmission can lead to wrong information at reception. Error-correcting codes are commonly used to protect the information and registers in electronic circuits. The search for correcting codes led to the perfect 1-bit error-correcting Hamming code, and then to its extended form, which corrects 1-bit errors and detects 2-bit errors. In mathematical terms, Hamming codes are a class of binary linear codes: for each integer p >= 2, there is a code with p parity bits and (2^p - p - 1) data bits. The main aim of this Hamming encoder and decoder is to show how a code can be encoded and decoded using Hamming code. This project is implemented for Hamming-code single-bit error detection and single-bit error correction. For implementing this single-bit error detector and corrector, A and B were used. Hamming codes are widely used in computing, telecommunications, and other applications, but they can only detect and correct a limited number of bit errors; errors beyond that limit cannot be solved.
Acknowledgment
First of all, we are grateful to the Almighty God for enabling us to complete this semester project work. It is our pleasure to thank our advisor, Mr. Gebissa, for guiding and appreciating our work. We are thankful for his aspiring guidance, invaluably constructive criticism, and friendly advice during the project work. We are sincerely grateful to him for sharing his truthful and illuminating views on a number of issues related to the project. Finally, we take this opportunity to sincerely thank all the faculty members of Electrical and Computer Engineering for their help and encouragement in our educational endeavors.
CHAPTER 1
1 INTRODUCTION
1.1 Background
In modern digital communication and storage system design, information theory is becoming increasingly important. In recent years, there has been an increasing demand for efficient and
reliable digital data transmission. This demand has been accelerated by the emergence of large
scale, high speed data networks for the exchange and processing of digital information in the
military, governmental and private spheres. A major concern is control of errors, so that reliable
reproduction of data can be obtained. Recent developments have contributed toward achieving
the reliability required by today’s high speed digital systems and the use of coding for error
control has become an integral part in the design of modern communication systems and digital
computers.
When we are watching TV, enjoying music on a CD, or talking with a friend by cellular phone, we encounter problems due to the impact of a noisy environment. A CD player includes semiconductor memories, optical media, laser devices, disk files, and so on. Each of them is subject to various types of noise disturbance, which may come from switching impulse noise, thermal noise, dust on the disk surface, or even lightning. Error control coding (also referred to as channel coding) is used to detect, and often correct, symbols which are received in error on digital communications channels.
A typical digital communications system is shown below in Figure 1. The source information is usually composed of binary or decimal digits or alphabetic information, in either a continuous waveform or a sequence of discrete symbols. The encoder transforms these messages into a sequence of binary digits (bits) for the channel. The sequence enters the channel and is perturbed by noise. The output enters the decoder, which makes a decision concerning which message was sent and delivers this message to the sink. A communication system can be represented by the following block diagram:
Reliable reproduction of the information can be obtained at the output of the channel decoder, provided that the cost of implementing the encoder and decoder falls within acceptable limits.
The foundation of error control coding is Shannon's landmark paper published in
1948. Shannon demonstrated that by proper encoding of the information, errors induced by a
noisy channel or storage medium can be reduced to any desired level without sacrificing the rate
of information transmission or storage. Since Shannon’s work, a great deal of effort has been
expended on the problem of devising efficient encoding and decoding methods for error control
in a noisy environment.
Recent developments have contributed toward achieving the reliability required by today’s high
speed digital systems, and the use of coding for error control has become an integral part in the
design of modern communication systems and digital computers, aided by the dramatic decrease in the size and cost of semiconductor devices.
Types of errors
There are two types of errors that occur on digital communications channels:
I. Single (random) errors
II. Burst errors
Single errors: errors that occur independently on each bit of the transmitted sequence. That is, an error occurs on a transmitted bit independently of errors that occurred in the vicinity of that bit. A random-error channel is also called a memoryless channel. Good examples of random-error channels are the deep-space channel and many satellite channels. The codes devised for correcting random errors are called random-error-correcting codes.
Thermal noise. Noise can be divided into four separate categories: thermal noise, intermodulation noise, cross-talk, and impulse noise. However, this paper will focus on thermal noise, cross-talk, and burst (impulse) noise. Thermal noise, or white noise, is the result of the agitation of electrons by heat. This stimulation of electrons results in unwanted currents or voltages within an electronic component. Thermal noise is often referred to as white noise because it is uniformly distributed across the bandwidths used in communications systems. The
presence of thermal noise inhibits the receiving equipment’s ability to distinguish between
incoming signals. Because thermal noise cannot be ignored or eliminated, an upper bound is
placed on the performance of a communication system.
Burst noise. Burst noise, or impulse noise, is another significant source of signal corruption
within a communication channel. Burst noise is the result of sudden step-like transitions between
at least two discrete voltage or current levels. Impulse noise, unlike thermal noise, is
discontinuous. This type of noise results from irregular pulses or noise spikes of short duration
and relatively high amplitude. While impulse noise is not as impactful on analog data, such as
voice transmissions, it is a primary source of error for digital data communication. Since burst
noise is often the result of signals induced by external sources, such as lightning, switching
equipment, or heavy electrically operated machinery, it quite frequently interrupts surrounding
signals in communication channels.
1.2 Statement of the problems
In recent years, there has been an increasing demand for efficient and reliable data transmission.
This demand has been accelerated by the emergence of large-scale, high-speed data networks for the exchange and processing of digital information in the military, governmental, and private spheres. A major concern is control of errors so that reliable reproduction of data can be obtained. Nowadays, the design of digital computers and communication systems requires efficient error control coding techniques. It is a fact that every communication channel is vulnerable to noise unless we use proper error control mechanisms; this initiated us to do our project on detecting and correcting these errors. The error control encoder takes as input the information symbols from the source and adds redundant symbols to them, so that most of the errors introduced during transmission over a noisy channel can be detected and corrected. The decoder then receives the corrupted signal and detects the error locations, by which the errors can be corrected. Error control coding has its own error-detecting techniques and also has a means of correcting the errors that happened to our source information. Some of these techniques are Hamming, BCH, Reed-Solomon, turbo, and convolutional coding. Among these techniques we basically focused on the Hamming coding technique for this semester.
1.3 Objective of the project
1.3.1 General Objective
The goal of this project is the detection and correction of errors in digital information using Hamming coding techniques. Such errors almost inevitably occur during the transmission, storage, or processing of information in digital (mainly binary) form, because of noise and interference in communication channels or imperfections in storage media. Protecting digital information with a suitable error-control code enables the efficient detection and correction of any errors that may have occurred. This motivated us to do our project in this area.
1.3.2 Specified Objective
This project uses different coding techniques to decrease communication errors that occur during transmission and compares the performance of each coding technique when subjected to noise. Messages that are transmitted over a communication channel can be damaged: their bits can be masked or inverted by noise. Detecting and correcting these errors is important. This paper addresses a multiple-error-detecting and single-bit-error-correcting error control coding technique called Hamming codes.
1.4 Methodology
To accomplish this project we have used two stages. These are:
I. Mathematical model: in this section we implement different mathematical concepts that are applicable to Hamming error control coding techniques.
II. Simulation: in conducting our project we follow these basic steps:
1. Design of the encoder and decoder
2. Performance analysis using MATLAB
1.6 Scope of the project
We basically intend to test error control coding using the Hamming coding technique, and we also try to demonstrate the performance of Hamming coding techniques using a graphical user interface (GUI). The overall scope of our project can be seen from the experimental tool, the experimental basics, and the expected output of our project.
Experimental tool: we will do our project using the MATLAB software; for this purpose we will use a computer with a large hard disk and RAM (for demonstration purposes).
Experimental basics: here we will perform the following tasks.
First, we will look at the basic encoding and decoding techniques without the effect of noise in the channel.
Then, we demonstrate the effect of adding noise on the normal functioning of the
decoder.
Expected output: the final remark of this project is minimizing the errors that happen in transmission, by which we can have a reliable and fast transmission system. We can create an error-free transmission system, but this is done at the expense of bandwidth.
Finally, we will arrive at the point where we can choose a particular soft-decision decoding algorithm which can give an estimate of the performance that is achievable over a certain channel.
CHAPTER 2
ERROR CONTROL CODING
Error correction coding is the means whereby errors which may be introduced into digital data as a result of transmission through a communication channel can be corrected based upon received data. Error detection coding is the means whereby errors can be detected based upon received information. Collectively, error correction and error detection coding are error control coding. Error control coding can provide the difference between an operating communications system and a dysfunctional system. It has been a significant enabler in the telecommunication revolution, the Internet, digital recording, and space exploration.
Error control coding is in principle a collection of digital signal processing techniques aiming to
average the effects of channel noise over several transmitted signals. The amount of noise
suffered by a single transmitted symbol is much less predictable than that experienced over a
longer interval of time, so the noise margins built into the code are proportionally smaller than
those needed for uncoded symbols. An important part of error control coding is the incorporation
of redundancy into the transmitted sequences. The number of bits transmitted as a result of the
error correcting code is therefore greater than that needed to represent the information. Without
this, the code would not even allow us to detect the presence of errors and therefore would not
have any error controlling properties. This means that, in theory, any incomplete compression
carried out by a source encoder could be regarded as having error control capabilities. In
practice, however, it will be better to compress the source information as completely as possible
and then to re-introduce redundancy in a way that can be used to best effect by the error
correcting decoder.
The traditional role for error-control coding was to make a troublesome channel acceptable by
lowering the frequency of error events. The error events could be bit errors, message errors, or undetected errors. Coding's role has expanded tremendously, and today coding can do the following: [2]
Reduce the occurrence of undetected errors: this was one of the first uses of error control coding. Today's error detection codes are so effective that the occurrence of undetected errors is, for all practical purposes, eliminated.
Overcome jamming: error control coding is one of the most effective techniques for reducing the effects of an enemy's jamming.
Error detection is the process of verifying at the receiver end whether the received information is correct or not, without having any knowledge of the original message sent. On the sender side, some redundant bits are added to the original message based on some property of the message signal (i.e., parity bits), and on the receiver side, by scanning these redundant bits, errors in the message can be predicted.
Figure: error detection diagram
Often referred to as a parity check, the vertical redundancy check is the most commonly used and least expensive of the common redundancy checks. This technique involves the addition of a parity bit, or check bit, to every data unit. Adding parity bits ensures that the total number of ones in a data transmission is even. While some systems use an odd-parity check, the majority of systems implement an even-parity check. Through this technique, a vertical redundancy check can detect single-bit errors as well as any odd number of bit errors. Unfortunately, this check cannot detect errors which result in an even number of bits changed. Since a vertical redundancy check only checks whether or not the number of ones in a transmission is even, an error which results in an even number of ones in the data will be considered error-free.
There are two variants of parity bits: even parity bit and odd parity bit.
In the case of even parity, for a given set of bits, the occurrences of bits whose value is 1 are counted. If that count is odd, the parity bit value is set to 1, making the total count of occurrences of 1s in the whole set (including the parity bit) an even number. If the count of 1s in a given set of bits is already even, the parity bit's value is 0.
In the case of odd parity, the coding is reversed. For a given set of bits, if the count of bits with a
value of 1 is even, the parity bit value is set to 1 making the total count of 1s in the whole set
(including the parity bit) an odd number. If the count of bits with a value of 1 is odd, the count is
already odd so the parity bit's value is 0.
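As a minimal sketch of the two rules above (written in Python for illustration, since the project's MATLAB scripts are not shown here; the function name is our own):

```python
def parity_bit(bits, even=True):
    """Return the parity bit for a list of 0/1 bits.

    Even parity: the bit makes the total count of 1s even.
    Odd parity: the bit makes the total count of 1s odd.
    """
    ones = sum(bits)
    if even:
        return ones % 2        # 1 only if the count of 1s is odd
    return 1 - ones % 2        # 1 only if the count of 1s is even

# 1011001 contains four 1s, so its even-parity bit is 0
# and its odd-parity bit is 1.
data = [1, 0, 1, 1, 0, 0, 1]
even_p = parity_bit(data)
odd_p = parity_bit(data, even=False)
```

Note that the two rules are exact complements of each other, which is why the code only negates the even-parity result.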
Figure: flow diagram of Hamming code with parity bits
Error Correction
While error detection is helpful in data transmissions, error correction is essential to a successful
transmission. Error correction allows a distorted transmission to be returned to its original
transmitted form. With the introduction of redundancy, a receiver can detect some errors.
Additionally, these errors can be corrected to return a transmitted message to its original form.
During the process called error control coding, redundant information is added to a message that is to be transmitted. After the message is received, the redundant information is used to detect and potentially correct errors that occurred in the message during transmission. ARQ (automatic repeat request) is an error-detection system which checks a message for errors. If an error is found, the receiver informs the sender, and the section of the message that contains the error is resent. The receiver in an ARQ system performs error detection on all received messages. Consequently, all responsibility for detecting errors lies with the receiver. The ARQ must be simple enough for the receiver to handle, yet powerful and efficient enough that the receiver does not send erroneous messages to the user. Based on re-transmission, there are three basic ARQ schemes: stop-and-wait, go-back-N, and selective repeat.
Stop-and-wait: In the stop-and-wait scheme, the transmitter sends a code word to the receiver
and waits for acknowledgment. A positive acknowledgment means the transmitted code word
has been successfully received, and the transmitter sends the next code word in queue. When a
receiver sends a negative acknowledgment signal, this implies an error has been detected in a
code word. Once the transmitter receives this signal, it attempts to resend the invalid code word.
These re-transmissions will continue until the receiver sends a positive acknowledgment signal
to the sender.
While this scheme is very simple, data is transmitted in only one direction. This type of one-way
transmission does not meet the qualifications for the high-speed modern communication systems
of today.
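The stop-and-wait scheme can be illustrated with a toy simulation (a Python sketch with assumed names; the random corruption here stands in for a real channel model, and acknowledgments are modeled implicitly rather than as actual messages):

```python
import random

def stop_and_wait(codewords, error_rate=0.2, seed=1):
    """Toy stop-and-wait ARQ: send one code word, wait for the
    acknowledgment, and re-send the same word on a negative one."""
    random.seed(seed)
    delivered = []
    transmissions = 0
    for cw in codewords:
        while True:
            transmissions += 1
            # The channel corrupts the word with some probability; a
            # corrupted word draws a negative acknowledgment (NAK).
            if random.random() >= error_rate:
                delivered.append(cw)   # positive acknowledgment: next word
                break
            # NAK received: loop and re-send the invalid code word.
    return delivered, transmissions

msgs = ["0110011", "1100110"]
received, sends = stop_and_wait(msgs)
```

Every word is eventually delivered in order, but `sends` can exceed the number of words, which is the bandwidth cost of the scheme's simplicity.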
Go-back-N: In the go-back-N scheme, code words are continuously transmitted by the transmitter in order. N is the size of the "window" from which the transmitter can send code words. It is
important for N to be as large as possible within the bounds of the receiver’s ability to process
packets and the number of code words used to verify a transmission. The acknowledgment signal
for a code word typically arrives after a round-trip delay. A round-trip delay is defined as the
time between the transmission of a code word and the reception of an acknowledgment signal for
a code word. The transmitter continues to send N-1 code words during this interval. When the
sender receives a negative acknowledgment signal designating an erroneous code word, it goes
back to that code word and re-sends it and all N-1 following code words. A significant disadvantage of this method is that when a receiver detects an error, all following code words are discarded.
Selective repeat: The selective repeat method was developed to overcome the disadvantages of both the stop-and-wait and the go-back-N schemes. Similarly, code words are sent continuously;
however, if a code word is negatively acknowledged, only that code word is re-transmitted. In
this system, a buffer is needed to store the error free code words that follow the incorrect code
word. A buffer is a region in memory that is used to temporarily hold data while it is in the process of transmission. The buffer is necessary because data must generally be received in order. A size for the buffer must be chosen to be large enough that data overflow does not occur and code words are not lost. Complications with this technique can arise when a second
code word is found invalid while the first corrected code word has not yet been received by the
buffer. Proper buffering must be maintained by the transmitter which must have the necessary
logic to handle several out of sequence code words.
Linear codes
Block codes
Linear codes. A linear code, or binary linear code, implements linear combinations of code words as a means of encoding a transmitted bit stream. This code takes a group of K message bits and produces a binary code word that consists of N bits. The extra N-K parity-check bits, provided for redundancy, are determined by a set of N-K linear equations.
Block codes: Block codes organize a stream of bits in binary format, referred to as the message bits, into blocks of length K. Since each block of bits is of length K, there is a set of 2^K possible messages that can be transmitted. The encoder converts each of the 2^K possible blocks of K bits into a longer block of N bits. From the larger set of 2^N possible blocks of N bits, only 2^K are chosen as code words to be sent in the transmitted stream. The extra (N-K) bits, or parity-check bits, are the redundancy added to the bit stream before it is communicated. The final result is a code word which is transmitted, corrupted, and decoded separately from all other code words.
Linear block codes. A linear code is an error-correcting code in which any linear combination
of code words is also a code word. There are several block codes which also belong to the class
of linear codes, such as:
Hamming codes
Reed-Solomon codes
Hadamard codes
Expander codes
Golay codes
Reed-Muller codes
This set of codes is referred to as linear block codes because they belong to both classes of codes. In linear block coding, a linear encoding and decoding scheme translates a sequence of source bits into a transmitted sequence of bits. This scheme inserts extra bits, called parity-check bits, to add redundancy and improve reliability. A sequence of K information symbols is encoded in a block of N symbols, where N > K, and then transmitted over the channel. First, K information bits enter an encoder, and the encoder generates a sequence of coded symbols of length N to be transmitted over the channel. For this code word, or transmitted sequence, N must be greater than K to guarantee uniqueness between each code word and each of the possible 2^K messages. The most well-known example of a linear block code is the (7,4) Hamming code. For every 4 source bits transmitted, the Hamming code transmits an additional 3 parity-check bits. This redundancy guarantees that at most one error can be corrected.
HAMMING CODE
Hamming code is a linear error-correcting code named after its inventor, Richard Hamming.
Hamming codes can detect up to two simultaneous bit errors and correct single-bit errors. By contrast, the simple parity code cannot correct errors and can only detect an odd number of errors. In 1950 Hamming introduced the (7, 4) code. It encodes 4 data bits into 7 bits by adding three parity bits. Hamming (7, 4) can detect and correct single-bit errors. With the addition of an overall parity bit, it can also detect (but not correct) double-bit errors. Hamming code is an improvement on the parity check method. It can correct only 1 error bit [9].
Hamming code uses two methods (even parity and odd parity) for generating redundancy bits. The number of redundancy bits depends on the size of the information data bits, as shown below:
2^P >= N + P + 1 [1]
where N is the number of data bits and P is the number of parity bits.
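The smallest P satisfying 2^P >= N + P + 1 can be found by simple search (a hypothetical Python helper; the name is ours):

```python
def parity_bits_needed(n_data):
    """Smallest P such that 2**P >= n_data + P + 1."""
    p = 0
    while 2 ** p < n_data + p + 1:
        p += 1
    return p

# For 16 data bits: 2^5 = 32 >= 16 + 5 + 1 = 22, so P = 5.
p16 = parity_bits_needed(16)
```

This matches the statement that 5 parity bits are needed for 16 data bits, and reproduces the classic (7,4) case with `parity_bits_needed(4)`.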
According to [1], 5 parity bits are required for 16 input data bits. Hamming-based codes are widely used in memory systems for reliability improvement. The algorithm consists of two phases: encoding and decoding. Hamming encoding involves deriving a set of parity check bits over the data bits. These parity check bits are concatenated or merged with the data bits. We add these parity bits to the information data at the source end and remove them at the destination end. The presence of parity bits allows the receiver to detect or correct corrupted bits. The concept of including extra information in the transmission for error detection is a good one. But in place of repeating the entire data stream, a shorter group of bits may be added to the end of each unit. This technique is called parity because the extra bits are redundant to the information.
For detection and correction of errors, we need to add some parity bits (check bits) to a block of data bits. Parity bits are chosen so that the resulting bit sequence has a unique characteristic which enables error detection. Coding is the process of adding the parity bits. There are some terms relating to coding:
The block of data bits to which parity bits are added is called a data word.
The bigger block containing parity bits is called the code word.
The Hamming distance, or simply distance, between two code words is the number of disagreements between them. For example, the distance between the two words given below is 3.
The weight of a code word is the number of 1s in the code word, e.g. 11001100 has a weight of 4.
1 1 0 0 1 0 1 1
Hamming distance=3
1 0 0 1 0 0 1 1
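These two definitions are easy to compute directly (a Python sketch using the text's own examples; function names are ours):

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length code words disagree."""
    return sum(x != y for x, y in zip(a, b))

def weight(cw):
    """Number of 1s in a code word."""
    return cw.count("1")

# The two words shown above differ in 3 positions:
d = hamming_distance("11001011", "10010011")   # -> 3
w = weight("11001100")                         # -> 4
```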
IMPLEMENTATION
Overview
The Hamming algorithm is an error correction code that can be used to detect single- and double-bit errors and correct single-bit errors that can occur when binary data is transmitted from one device to another. This project was implemented for single-bit error detection and single-bit error correction, and it presents the design and development of a (21, 16, 1) code. Here, 21 is the total number of Hamming code bits in a transmittable unit comprising data bits and parity bits, 16 is the number of data bits, and 1 denotes the maximum number of correctable error bits in the transmittable unit.
Before transmitting the 16-bit data, parity is calculated for the associated bits. In the example below, the data bits have been copied into their proper positions in the output message and the parity bits have been calculated. Finally, the parity bits are placed in the power-of-two positions of the data unit together with the original data bits to form the (21, 16, 1) code word.
The procedure used by the sender to encode the message encompasses the following steps:
Even parity: the total number of 1s in each checked group is made even.
Odd parity: the total number of 1s in each checked group is made odd.
Once the parity bits are embedded within the message, it is sent to the receiver.
Each parity bit, pi, at position 2^(i-1), is calculated as the parity (generally even parity) of the bits it covers: all bit positions whose binary representation includes a 1 in the i-th place, except the position of pi itself. Thus:
p1 is the parity bit for all data bits in positions whose binary representation includes a 1 in the least significant place, excluding position 1 itself (3, 5, 7, 9, 11 and so on)
p2 is the parity bit for all data bits in positions whose binary representation includes a 1 in the second place from the right, excluding position 2 itself (3, 6, 7, 10, 11 and so on)
p3 is the parity bit for all data bits in positions whose binary representation includes a 1 in the third place from the right, excluding position 4 itself (5-7, 12-15, 20-23 and so on)
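The coverage rule above can be written as a small helper (a Python sketch; the function name is our own):

```python
def covered_positions(p_index, n_total):
    """Bit positions (1-based) checked by the parity bit at position
    2**(p_index - 1): all positions whose binary representation has a 1
    in that place, excluding the parity position itself."""
    p_pos = 2 ** (p_index - 1)
    return [pos for pos in range(1, n_total + 1)
            if pos & p_pos and pos != p_pos]

p1_covers = covered_positions(1, 11)   # -> [3, 5, 7, 9, 11]
```

Running it for p1, p2, and p3 reproduces exactly the position lists enumerated above, because the bitwise AND with the parity position tests the relevant binary digit.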
Decoding a message in Hamming Code
The receiver takes the transmission and recalculates the parity bits, using the same sets of bits used by the sender, plus the relevant parity bit for each set. It then assembles the recalculated parity values into a binary number in descending order of parity position. If a bit has changed, the resulting number (the syndrome) identifies the position of the error bit, and the receiver can complement its value to correct the error.
Once the receiver gets an incoming message, it performs recalculations to detect errors and
correct them. The steps for recalculation are −
2^P >= N + P + 1, where N is the number of data bits and P is the number of parity bits.
If the number of information bits is 16, then the number of parity bits P is determined by the following relationship:
2^P >= N + P + 1
Hence, 2^P >= 16 + P + 1
2^P >= 17 + P; let P = 5.
2^5 = 32 >= 22
Using even parity (in this case) for each parity bit, the 21-bit code word is formed. If an error inverted bit 5, the received code word would fail the parity checks as follows:
Parity bit 1 incorrect (positions 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21 contain 5 ones)
Parity bit 2 correct (positions 2, 3, 6, 7, 10, 11, 14, 15, 18, 19 contain 6 ones)
Parity bit 4 incorrect (positions 4, 5, 6, 7, 12, 13, 14, 15, 20, 21 contain 5 ones)
Parity bit 8 correct (positions 8, 9, 10, 11, 12, 13, 14, 15 contain 2 ones)
Parity bit 16 correct (positions 16, 17, 18, 19, 20, 21 contain 4 ones)
The failing parity bits sum to 1 + 4 = 5, so bit 5 is identified as the erroneous bit and can be complemented to correct the error.
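The full encode/recheck cycle for the (21, 16) code can be sketched as follows (a Python illustration with assumed function names; parity bits sit at positions 1, 2, 4, 8, 16, as in the text):

```python
def hamming_encode(data_bits):
    """Place 16 data bits in the non-power-of-two positions of a 21-bit
    word (1-based) and fill each parity position 1, 2, 4, 8, 16 with
    even parity over the positions it covers."""
    n = 21
    word = [0] * (n + 1)                  # index 0 unused
    parity_pos = [1, 2, 4, 8, 16]
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos not in parity_pos:
            word[pos] = next(it)
    for p in parity_pos:
        word[p] = sum(word[pos] for pos in range(1, n + 1)
                      if pos & p and pos != p) % 2
    return word[1:]

def hamming_correct(received):
    """Recompute each parity; the failing parities sum to the position
    of a single-bit error (the syndrome). A syndrome of 0 means no
    error was detected."""
    n = len(received)
    word = [0] + list(received)
    syndrome = 0
    for p in (1, 2, 4, 8, 16):
        if p > n:
            break
        if sum(word[pos] for pos in range(1, n + 1) if pos & p) % 2:
            syndrome += p
    if syndrome:
        word[syndrome] ^= 1               # flip the erroneous bit
    return word[1:], syndrome
```

Flipping bit 5 of an encoded word makes exactly parity bits 1 and 4 fail, so the syndrome 1 + 4 = 5 locates the error, mirroring the hand calculation above.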
Calculation of P(0)
The positions of all the data bits, which are given in the decimal system, are converted into the binary system. The binary location number of parity bit p0 has a 1 for its right-most digit. This
parity bit checks all bit locations, including itself, that have 1s in the same location in the binary
location numbers. Therefore, parity bit p0 checks bit locations 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21
and assigns p(0).
TABLE I
Calculation of P(1)
The binary location number of parity bit p1 has a 1 for its second right-most digit. This parity bit
checks all bit locations, including itself, that have 1s in the same location in the binary location
numbers. Therefore, parity bit p1 checks bit locations 2, 3, 6, 7, 10, 11, 14, 15, 18, 19 and assigns p(1).
TABLE II
Calculation of P(2)
The binary location number of parity bit p2 has a 1 for its third right-most digit. This parity bit
checks all bit locations, including itself, that have 1s in the same location in the binary location
numbers. Therefore, parity bit p2 checks bit locations 4, 5, 6, 7, 12, 13, 14, 15, 20, 21 and assigns
p(2).
TABLE III
Calculation of P(3)
The binary location number of parity bit p3 has a 1 for its second left-most digit. This parity bit
checks all bit locations, including itself, that have 1s in the same location in the binary location
numbers. Therefore, parity bit p3 checks bit locations 8, 9, 10, 11, 12, 13, 14, 15 and assigns p(3).
TABLE IV
Calculation of P(4)
The binary location number of parity bit p4 has a 1 for its left-most digit. This parity bit checks
all bit locations, including itself, that have 1s in the same location in the binary location
numbers. Therefore, parity bit p4 checks bit locations 16, 17, 18, 19, 20, 21 and assigns p(4).
TABLE V
Worked example for the (7,4) Hamming code:
Data word: (1101), read as D7 D6 D5 D3 = 1, 1, 0, 1.
The parity bits are:
P1 = D3 ^ D5 ^ D7 = 0
P2 = D3 ^ D6 ^ D7 = 1
P4 = D5 ^ D6 ^ D7 = 0
Code word (written D7 D6 D5 P4 D3 P2 P1): (1100110)
Received code :
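The worked example can be checked directly (a Python sketch mirroring the parity equations given above; `^` is bitwise XOR):

```python
# Data word (1101) read as D7 D6 D5 D3 = 1, 1, 0, 1.
D7, D6, D5, D3 = 1, 1, 0, 1

P1 = D3 ^ D5 ^ D7    # -> 0
P2 = D3 ^ D6 ^ D7    # -> 1
P4 = D5 ^ D6 ^ D7    # -> 0

# Written most-significant position first (D7 D6 D5 P4 D3 P2 P1):
codeword = f"{D7}{D6}{D5}{P4}{D3}{P2}{P1}"   # -> "1100110"
```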