
DILLA UNIVERSITY

COLLEGE OF ENGINEERING AND TECHNOLOGY


SCHOOL OF ELECTRICAL & COMPUTER ENGINEERING

IMPLEMENTATION OF 16-BIT HAMMING CODE ENCODER AND DECODER FOR


SINGLE BIT ERROR DETECTOR AND CORRECTOR

BY

YITBAREK G/HER 1842/15

YENENEH ASABU 1813/15

REDAHEGH TADESSE 0900/15

RABIYA BEDIRU 1053/15

ADVISOR

MR. GEBISSA (MSC.)

JUNE 10, 2019

DILLA, ETHIOPIA
Abstract

Data communication does not always go smoothly; errors sometimes occur while data are being
transmitted. Error-free transmission is a major concern in advanced electronic circuits, since
errors introduced during transmission can lead to wrong information at the receiver. Error
correction codes are commonly used to protect the information and registers in electronic
circuits. The search for such codes led first to the Hamming codes, perfect 1-bit error-correcting
codes, and then to the extended Hamming codes, which correct 1-bit errors and detect 2-bit
errors. In mathematical terms, Hamming codes are a class of binary linear codes. For each
integer p ≥ 2, there is a code with p parity bits and (2^p − p − 1) data bits. The main aim of this
Hamming encoder and decoder is to show how a message can be encoded and decoded using a
Hamming code. This project implements Hamming-code single-bit error detection and single-bit
error correction; the detector and corrector were implemented and simulated in MATLAB.
Hamming codes are widely used in computing, telecommunications and other applications,
although they can detect and correct only a limited number of erroneous bits.
Acknowledgment

First of all, we are grateful to the Almighty God for enabling us to complete this semester project
work. It is our pleasure to thank our advisor Mr. Gebissa for being there in guiding and
appreciating our work. We are thankful for his aspiring guidance, invaluably
constructive criticism and friendly advice during the project work. We are sincerely grateful to
him for sharing his truthful and illuminating views on a number of issues related to the
project. Finally, we take this opportunity to sincerely thank all the faculty members of
Electrical and Computer Engineering for their help and encouragement in our educational
endeavors.
CHAPTER 1
1 INTRODUCTION

1.1 Background

In modern digital communication and storage system design, information theory is becoming
increasingly important. In recent years, there has been an increasing demand for efficient and
reliable digital data transmission. This demand has been accelerated by the emergence of large
scale, high speed data networks for the exchange and processing of digital information in the
military, governmental and private spheres. A major concern is control of errors, so that reliable
reproduction of data can be obtained. Recent developments have contributed toward achieving
the reliability required by today’s high speed digital systems and the use of coding for error
control has become an integral part in the design of modern communication systems and digital
computers.

When we are watching TV, enjoying music on CD, talking with a friend by cellular phone, etc.,
we encounter problems due to the impact of the noisy environment. A CD player includes
semiconductor memories, optical media, laser devices, disk files, and so on. Each of them is
subject to various types of noise; the disturbance may come from switching impulse noise, thermal
noise, dust on the disk surface, or even lightning. Error control coding (also referred to as channel
Coding) is used to detect, and often correct, symbols which are received in error on digital
communications channels.
A typical digital communications system is shown below in Figure 1. The source information is
usually composed of binary or decimal digits or alphabetic information in either a continuous
waveform or a sequence of discrete symbols. The encoder transforms these messages into a
sequence of binary digits (bits) suitable for the channel. The sequence enters the channel and is perturbed
by noise. The output enters the decoder, which makes a decision concerning which message was
sent and delivers this message to the sink. A communication system can be represented by the
following block diagram:

Figure 1: A communication system model (source and channel encoders)


Each part of the communication system is discussed below:
Source encoder: transforms the source output into a sequence of binary digits (bits) called the
information/message sequence m. If the source is continuous, this involves analog-to-digital
(A/D) conversion.
Channel encoder: transforms the information sequence m into a discrete encoded sequence n
called a code word. The channel encoder must be designed and implemented in a way that
combats the noisy environment (channel).
Noisy channel: is a channel which is subjected to noise. A channel is always a medium
through which the information being transmitted can suffer from the effect of noise, which
produces errors, that is, changes of the values initially transmitted. In this sense there will be a
probability that a given transmitted symbol is converted into another symbol. Most of the time,
the noise that first comes to mind, and also the concern of this project, is the
Additive White Gaussian Noise (AWGN) channel. In communications, the AWGN channel
model is one in which the only impairment is the linear addition of wideband or white noise with
a constant spectral density (expressed as watts per hertz of bandwidth) and a Gaussian
distribution of amplitude. The model does not account for the phenomena of fading, frequency
selectivity, interference, nonlinearity or dispersion. However, it produces simple, tractable
mathematical models which are useful for gaining insight into the underlying behavior of a
system before these other phenomena are considered. AWGN is commonly used to simulate
background noise of the channel under study.
Channel decoder: transforms the received sequence r into a binary sequence m̂ called the
estimated sequence. The decoding strategy is based on the rules of channel encoding and the
noise characteristics of the channel.
Source decoder: transforms the estimated sequence into an estimate of the source output and
delivers this estimate to the destination. If the source is continuous, it involves digital to analog
(D/A) conversion.
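The AWGN channel model described above can be made concrete with a short simulation. The sketch below is purely illustrative (written in Python rather than the project's MATLAB; the function names, the fixed seed and the chosen SNR values are our own assumptions): bits are mapped to BPSK symbols, white Gaussian noise with a variance set by the signal-to-noise ratio is added, and the receiver makes hard decisions.

```python
import math
import random

def awgn_bpsk(bits, snr_db, seed=0):
    """Map bits to BPSK symbols (0 -> -1, 1 -> +1) and add white
    Gaussian noise whose variance is set by the SNR in dB."""
    rng = random.Random(seed)  # deterministic for the demonstration
    sigma = math.sqrt(1.0 / (2 * 10 ** (snr_db / 10.0)))
    return [(1.0 if b else -1.0) + rng.gauss(0.0, sigma) for b in bits]

def hard_decide(received):
    """Hard-decision demodulation: positive sample -> 1, else 0."""
    return [1 if r > 0 else 0 for r in received]
```

At high SNR almost every bit survives the hard decision, while at low SNR a substantial fraction of bits is flipped; the latter is exactly the situation the channel encoder and decoder must handle.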
The major error control coding problem is to design and implement the channel
encoder/decoder pair such that:
 Information can be transmitted (or recorded) in a noisy environment as fast as possible.

 Reliable reproduction of the information can be obtained at the output of the channel decoder.

 The cost of implementing the encoder and decoder falls within acceptable limits.
The foundation of error control coding is Shannon’s landmark paper published in
1948. Shannon demonstrated that by proper encoding of the information, errors induced by a
noisy channel or storage medium can be reduced to any desired level without sacrificing the rate
of information transmission or storage. Since Shannon’s work, a great deal of effort has been
expended on the problem of devising efficient encoding and decoding methods for error control
in a noisy environment.
These developments have been aided by the dramatic decrease in the size and cost of
semiconductor devices.
Types of errors
There are two types of errors that occur on digital communications channels:
I. Single errors
II. Burst errors
Single errors: - Errors that occur independently on each bit of the transmitted sequence. That is,
an error occurs on a transmitted bit independently of errors that occurred in the vicinity of that
bit. The random-error channel is also called a memoryless channel. Good examples of random-error
channels are the deep-space channel and many satellite channels. The codes devised for
correcting random errors are called random-error-correcting codes.

Figure 2: Single-bit error


Burst errors: - Errors that occur intensively within a period. That is to say, if an error occurs on a bit,
several consecutive bits are likely to be in error. The number of bits in this erroneous period,
from the first bit in error to the last bit in error, is called the burst length. A burst-error channel is
also called a channel with memory. Examples of burst-error channels are radio channels, where
the error bursts are caused by signal fading due to multipath transmission; wire and cable
transmission, which is affected by impulsive switching noise and crosstalk; and magnetic
recording, which is subject to tape dropouts due to surface defects and dust particles. The codes
devised for correcting burst errors are called burst-error-correcting codes. Actually, some channels
contain a combination of both random and burst errors. They are called compound channels, and
codes devised for correcting errors on these channels are called burst-and-random-error-correcting
codes.

Figure 3: Burst error of length 8


Sources of Errors

In electronic communications, noise is described as a random fluctuation in an electrical signal.


Since noise signals are random, they cannot be directly predicted. Instead, statistical models are
used to predict the average and variance of a signal. Within a communication system, noise is a
significant source of errors and produces disturbances of a signal in a communication channel.
Unfortunately, noise is unavoidable at any non-zero temperature. Since noise is both random and
unavoidable, it was necessary to create a way to limit or prevent the corruption of data through a
communication channel. Error control schemes have been created to help reduce the
unpredictable effect of noise on a signal.
Noise

Noise can be divided into four separate categories: thermal noise, inter-modulation noise,
cross-talk, and impulse noise. However, this paper will focus on thermal noise, cross-talk, and
burst (impulse) noise.

Thermal noise: Thermal noise, or white noise, is the result of the agitation of electrons by heat.
This stimulation of electrons results in unwanted currents or voltages within an electronic
component. Thermal noise is often referred to as white noise because it is uniformly distributed
across the bandwidths used in communications systems. The presence of thermal noise inhibits
the receiving equipment’s ability to distinguish between incoming signals. Because thermal
noise cannot be ignored or eliminated, an upper bound is placed on the performance of a
communication system.

Crosstalk: Cross-talk is a type of noise that occurs when an electrical signal is picked up by an
adjacent wire. It is the result of unwanted electrical coupling between signal paths. Typically,
crosstalk is recognizable as pieces of speech or signal tones leaking into a person’s telephone
connection during a conversation. One type of crosstalk, called near-end crosstalk, occurs when a
signal on the transmit pair is so strong that it radiates to the receive pair. A direct consequence of
this radiation is that the receiver cannot correctly interpret the real signal. Like thermal noise,
crosstalk has a reasonably predictable and constant magnitude. Therefore, it is possible to
construct a communication system which has the ability to cope with the noise. However, noise,
such as burst noise, is more difficult to handle because it is unpredictable and appears erratically.

Burst noise: Burst noise, or impulse noise, is another significant source of signal corruption
within a communication channel. Burst noise is the result of sudden step-like transitions between
at least two discrete voltage or current levels. Impulse noise, unlike thermal noise, is
discontinuous. This type of noise results from irregular pulses or noise spikes of short duration
and relatively high amplitude. While impulse noise is not as impactful on analog data, such as
voice transmissions, it is a primary source of error for digital data communication. Since burst
noise is often the result of signals induced by external sources, such as lightning, switching
equipment, or heavy electrically operated machinery, it quite frequently interrupts surrounding
signals in communication channels.
1.2 Statement of the problems
In recent years, there has been an increasing demand for efficient and reliable data transmission.
This demand has been accelerated by the emergence of large scale, high speed data networks for
the exchange and processing of digital information in the military, governmental and private
spheres. A major concern is the control of errors so that reliable reproduction of data can be
obtained. Nowadays, the design of digital computers and communication systems requires
efficient error control coding techniques. Every communication channel is vulnerable to noise
unless proper error control mechanisms are used; this motivated us to do our project on detecting
and correcting these errors. The error control coding encoder takes as input the information
symbols from the source and adds redundant symbols to them, so that most of the errors
introduced into the signal during transmission over a noisy channel can be detected. The decoder
then receives the corrupted signal and detects the error locations, from which the errors can be
corrected. Error control coding thus has its own error detecting techniques and also a means of
correcting the errors that occur in our source information. Some of these techniques are
Hamming, BCH, Reed-Solomon, Turbo, and convolutional coding. Among these techniques
we focused on the Hamming coding technique for this semester.
1.3 Objective of the project
1.3.1 General Objective
The goal of this project is the detection and correction of errors in digital information using
Hamming coding techniques. Such errors almost inevitably occur during the transmission, storage
or processing of information in digital (mainly binary) form, because of noise and interference
in communication channels, or imperfections in storage media. Protecting digital information
with a suitable error-control code enables the efficient detection and correction of any errors that
may have occurred. This is what motivated us to do our project in this area.
1.3.2 Specific Objectives
This project uses different coding techniques to decrease communication errors that occur
during transmission and compares the performance of each coding technique when subjected to
noise. Messages that are transmitted over a communication channel can be damaged: their bits
can be masked or inverted by noise. Detecting and correcting these errors is important. This
paper addresses an error control coding technique called the Hamming code, which detects up to
two bit errors and corrects single-bit errors.
1.4 Methodology
To accomplish this project we used two stages. These are:
I. Mathematical model: in this stage we apply the mathematical concepts that are applicable
to Hamming error control coding techniques.
II. Simulation: in conducting our project we follow these basic steps:
1. Design of the encoder and decoder
2. Performance analysis using MATLAB
1.5 Scope of the project
We basically intend to test error control coding using the Hamming coding technique, and
we also try to demonstrate the performance of Hamming coding techniques using a graphical
user interface (GUI). The overall scope of our project can be seen from the experimental tool,
the experimental basics and the expected output.
Experimental tool: we will do our project using the MATLAB software; for this purpose we will
use a computer with a large hard disk and sufficient RAM (for demonstration purposes).
Experimental basics: here we will perform the following tasks.
 First we will look at the basic encoding and decoding techniques without the effect of noise
in the channel.
 Then, we will demonstrate the effect of adding noise on the normal functioning of the
decoder.
Expected output: the final aim of this project is to minimize the errors that happen in
transmission, by which we can have a reliable and fast transmission system. An error-free
transmission system can be created, but this is done at the expense of bandwidth.
Finally we will arrive at the point where we can choose a particular soft-decision decoding
algorithm which can give an estimate of the performance that is achievable over a certain channel.

CHAPTER 2
ERROR CONTROL CODING
Error correction coding is the means whereby errors which may be introduced into digital data as
a result of transmission through a communication channel can be corrected based upon the
received data. Error detection coding is the means whereby errors can be detected based upon
received information. Collectively, error correction and error detection coding are error control
coding. Error control coding can provide the difference between an operating communications
system and a dysfunctional system. It has been a significant enabler in the telecommunication
revolution, the internet, digital recording, and space exploration.

Error control coding is in principle a collection of digital signal processing techniques aiming to
average the effects of channel noise over several transmitted signals. The amount of noise
suffered by a single transmitted symbol is much less predictable than that experienced over a
longer interval of time, so the noise margins built into the code are proportionally smaller than
those needed for uncoded symbols. An important part of error control coding is the incorporation
of redundancy into the transmitted sequences. The number of bits transmitted as a result of the
error correcting code is therefore greater than that needed to represent the information. Without
this, the code would not even allow us to detect the presence of errors and therefore would not
have any error controlling properties. This means that, in theory, any incomplete compression
carried out by a source encoder could be regarded as having error control capabilities. In
practice, however, it will be better to compress the source information as completely as possible
and then to re-introduce redundancy in a way that can be used to best effect by the error
correcting decoder.
The traditional role of error-control coding was to make a troublesome channel acceptable by
lowering the frequency of error events. The error events could be bit errors, message errors, or
undetected errors. Coding's role has expanded tremendously, and today coding can do the
following [2]:

 Reduce the occurrence of undetected errors:- This was one of the first uses of error control
coding. Today's error detection codes are so effective that the occurrence of undetected
errors is, for all practical purposes, eliminated.

 Reduce the cost of communications systems:- Transmitter power is expensive, especially on
satellite transponders. Coding can reduce the satellite's power needs because messages
received at close to the thermal noise level can still be recovered correctly.

 Overcome Jamming:- Error-control coding is one of the most effective techniques for
reducing the effects of the enemy's jamming.

 Eliminate interference:- As the electromagnetic spectrum becomes more crowded with
man-made signals, error-control coding will mitigate the effects of unintentional
interference.

Despite all the new uses of error control coding, there are limits to what coding can do. On
the Gaussian noise channel, for example, Shannon's capacity formula sets a lower limit on
the signal-to-noise ratio that we must achieve to maintain reliable communications.
Shannon’s lower limit depends on whether the channel is power-limited or bandwidth-limited.
The deep space channel is an example of a power-limited channel because bandwidth is an
abundant resource compared to transmitter power.

Error detection and correction


Error detection
Anything from the proper operation of the internet to talking to another person on the phone
would not be possible without error detection and correction codes. The goal of any digital
communication is to transmit and receive information without data loss or the corruption of data.
The fundamental idea behind error correction coding is to add parity to the transmitted signal.
The added parity will allow the receiver to detect and correct errors. There are three kinds of
parity checks: vertical, longitudinal, and cyclic. These checks are implemented by codes to detect
as many errors as possible within a transmission.

Error detection is the process of verifying at the receiver end whether the received information is
correct or not, without having any knowledge of the original message sent. On the sender side,
some redundant bits are added to the original message based on some property of the message
signal (i.e. parity bits), and on the receiver side, by scanning these redundant bits, errors in the
message can be detected.
Figure: Error detection diagram

Vertical Redundancy Check(parity bits)

Often referred to as a parity check, the vertical redundancy check is the most commonly used
and least expensive of the three checks. This technique involves the addition of a parity bit, or
check bit, to every data unit. Adding parity bits ensures that the total number of ones in a data
transmission is even. While some systems use an odd-parity check, the majority of systems
implement an even-parity check. Through this technique, a vertical redundancy check can detect
single-bit errors as well as any odd number of bit errors. Unfortunately, this check cannot
detect errors which result in an even number of bits changed. Since a vertical redundancy check
only checks whether or not the number of ones in a transmission is even, an error which
results in an even number of ones in the data will be considered error-free.

There are two variants of parity bits: even parity bit and odd parity bit.

In the case of even parity, for a given set of bits, the occurrences of bits whose value is 1 is
counted. If that count is odd, the parity bit value is set to 1, making the total count of occurrences
of 1s in the whole set (including the parity bit) an even number. If the count of 1s in a given set
of bits is already even, the parity bit's value is 0.

In the case of odd parity, the coding is reversed. For a given set of bits, if the count of bits with a
value of 1 is even, the parity bit value is set to 1 making the total count of 1s in the whole set
(including the parity bit) an odd number. If the count of bits with a value of 1 is odd, the count is
already odd so the parity bit's value is 0.
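The even- and odd-parity rules described above can be written down in a few lines. The following is a minimal illustrative sketch (the helper names are our own, not from any standard library):

```python
def even_parity_bit(bits):
    """Parity bit that makes the total count of 1s even."""
    return sum(bits) % 2

def odd_parity_bit(bits):
    """Parity bit that makes the total count of 1s odd."""
    return 1 - sum(bits) % 2

def vrc_check(code_word):
    """Even-parity check: True means no error was detected."""
    return sum(code_word) % 2 == 0
```

Flipping a single bit of an even-parity code word is detected by the check, while flipping two bits restores even parity and goes undetected, which is exactly the limitation of the vertical redundancy check noted above.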
Figure: Flow diagram of Hamming code with parity bits

Error Correction

While error detection is helpful in data transmissions, error correction is essential to a successful
transmission. Error correction allows a distorted transmission to be returned to its original
transmitted form. With the introduction of redundancy, a receiver can detect some errors.
Additionally, these errors can be corrected to return a transmitted message to its original form.

Error correction has two classifications:

 Automatic repeat request (ARQ)

 Forward error correction (FEC)

Automatic Repeat Request

During the process called error control coding, redundant information is added to a message that
is to be transmitted. After the message is received, the redundant information is used to
detect and potentially correct errors that occurred in the message during transmission. ARQ is an
error-detection system which checks a message for errors. If an error is found, the receiver will
inform the sender, and the section of the message that contains the error will be resent. The
receiver in an ARQ system performs error detection on all received messages. Consequently, all
responsibility for detecting errors lies with the receiver. The ARQ must be simple enough for the
receiver to handle, yet powerful and efficient enough so the receiver does not send erroneous
messages to the user. Based on re-transmission, there are three basic ARQ schemes:
stop-and-wait, go-back-N, and selective repeat.

Stop-and-wait: In the stop-and-wait scheme, the transmitter sends a code word to the receiver
and waits for acknowledgment. A positive acknowledgment means the transmitted code word
has been successfully received, and the transmitter sends the next code word in queue. When a
receiver sends a negative acknowledgment signal, this implies an error has been detected in a
code word. Once the transmitter receives this signal, it attempts to resend the invalid code word.
These re-transmissions will continue until the receiver sends a positive acknowledgment signal
to the sender.

While this scheme is very simple, data is transmitted in only one direction. This type of one-way
transmission does not meet the qualifications for the high-speed modern communication systems
of today.
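The stop-and-wait behaviour can be illustrated with a toy simulation. In the sketch below (our own illustration, not part of the project; the `error_rate` and `seed` parameters are hypothetical), the channel corrupts each frame with a fixed probability, the receiver negatively acknowledges corrupted frames, and the sender keeps retransmitting until it sees a positive acknowledgment:

```python
import random

def stop_and_wait(frames, error_rate=0.3, seed=1):
    """Toy stop-and-wait ARQ: every frame is retransmitted until the
    receiver acknowledges it. Returns (delivered frames, transmissions)."""
    rng = random.Random(seed)
    delivered = []
    transmissions = 0
    for frame in frames:
        while True:
            transmissions += 1
            if rng.random() >= error_rate:   # frame arrived intact -> ACK
                delivered.append(frame)
                break
            # otherwise the receiver detected an error -> NAK, resend
    return delivered, transmissions
```

The total number of transmissions always meets or exceeds the number of frames, which makes the bandwidth cost of the one-frame-at-a-time scheme visible.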

Go-back-N: In the go-back-N scheme, code words are continuously transmitted by the transmitter
in order. N is the size of the “window” from which the transmitter can send code words. It is
important for N to be as large as possible within the bounds of the receiver’s ability to process
packets and the number of code words used to verify a transmission. The acknowledgment signal
for a code word typically arrives after a round-trip delay. A round-trip delay is defined as the
time between the transmission of a code word and the reception of an acknowledgment signal for
that code word. The transmitter continues to send N-1 code words during this interval. When the
sender receives a negative acknowledgment signal designating an erroneous code word, it goes
back to that code word and re-sends it along with all N-1 following code words. A
significant disadvantage of this method is that when a receiver detects an error, all following code
words are discarded.

Selective repeat: The selective repeat method was developed to overcome the disadvantages of
both the stop-and-wait and the go-back-N schemes. Similarly, code words are sent continuously;
however, if a code word is negatively acknowledged, only that code word is re-transmitted. In
this system, a buffer is needed to store the error-free code words that follow the incorrect code
word. A buffer is a region in memory that is used to temporarily keep data from continuing while
it is in the process of transmission. The buffer is necessary because data must generally be
received in order. A size for the buffer must be chosen to be large enough so that data overflow does
not occur and code words are not lost. Complications with this technique can arise when a second
code word is found invalid while the first corrected code word has not yet been received by the
buffer. Proper buffering must be maintained by the transmitter, which must have the necessary
logic to handle several out-of-sequence code words.

Forward Error Correction


In addition to error detection, FEC is a method that attempts to correct the errors found. Extra bits
are added, according to a set algorithm, to the transmission of a message. These extra bits are
received and used for error detection and correction, eliminating the need to ask for a
re-transmission of data. One of the first codes to appear was created by Richard Hamming.
Hamming developed an infinite class of single-error-correcting binary linear codes. This code
was said to form an exhaustive partition of binary n-space.
Essentially, there are two kinds of error correction codes implemented in FEC:

 Linear codes

 Block codes

Linear codes: A linear code, or binary linear code, implements linear combinations of code
words as a means of encoding a transmitted bit stream. This code takes a group of K message
bits and produces a binary code word that consists of N bits. The extra N-K parity-check bits,
provided for redundancy, are determined by a set of N-K linear equations.

Block codes: Block codes organize a stream of bits in binary format, referred to as the message
bits, into blocks of length K. Since each block of bits is of length K, there is a set of 2^K possible
messages that can be transmitted. The encoder converts each of the 2^K possible blocks of
K bits into a longer block of N bits. From the larger set of 2^N possible words of length N, 2^K
are chosen as code words to be sent in the transmitted stream. The extra (N-K) bits, or
parity check bits, are the redundancy added to the bit stream before it is communicated. The
final result is a code word which is transmitted, corrupted, and decoded separately from all other
code words.

Linear block codes. A linear code is an error-correcting code in which any linear combination
of code words is also a code word. There are several block codes which also belong to the class
of linear codes, such as:

 Hamming codes

 Reed-Solomon codes

 Hadamard codes

 Expander codes

 Golay codes

 Reed-Muller codes

This set of codes is referred to as linear block codes because they belong to both classes of
codes. In linear block coding, a linear encoding and decoding scheme translates a sequence of
source bits into a transmitted sequence of bits. This scheme inserts extra bits, called parity-check
bits, to add redundancy and improve reliability. A sequence of K information symbols is encoded
in a block of N symbols, where N > K, and then transmitted over the channel. First, K
information bits enter an encoder, and the encoder generates a sequence of coded symbols of
length N to be transmitted over the channel. For this code word, or transmitted sequence, N must
be greater than K to guarantee uniqueness between each code word and each of the 2^K possible
messages. The most well-known example of a linear block code is the (7,4) Hamming code. For
every 4 source bits transmitted, the Hamming code transmits an additional 3 parity-check bits.
This redundancy guarantees that any single-bit error can be corrected.

HAMMING CODE

Hamming code is a linear error-correcting code named after its inventor, Richard Hamming.
Hamming codes can detect up to two simultaneous bit errors and correct single-bit errors. By
contrast, the simple parity code cannot correct errors, and can only detect an odd number of
errors. In 1950 Hamming introduced the (7, 4) code. It encodes 4 data bits into 7 bits by adding
three parity bits. Hamming (7, 4) can detect and correct single-bit errors. With the addition of an
overall parity bit, it can also detect (but not correct) double-bit errors. Hamming code is an
improvement on the parity check method, but it can correct only 1 error bit [9].
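The (7, 4) code just described can be sketched directly. The following is an illustrative implementation (our own sketch, assuming even parity and the conventional layout with parity bits at positions 1, 2 and 4): the three syndrome bits computed by the decoder read off the 1-based position of any single erroneous bit.

```python
def hamming74_encode(d):
    """Encode 4 data bits d1..d4 into a 7-bit (7,4) Hamming code word.
    Positions 1, 2, 4 hold even-parity bits; 3, 5, 6, 7 hold data."""
    d1, d2, d3, d4 = d
    p1 = (d1 + d2 + d4) % 2   # covers positions 3, 5, 7
    p2 = (d1 + d3 + d4) % 2   # covers positions 3, 6, 7
    p3 = (d2 + d3 + d4) % 2   # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one bit error and return the 4 data bits."""
    s1 = (c[0] + c[2] + c[4] + c[6]) % 2  # checks positions 1, 3, 5, 7
    s2 = (c[1] + c[2] + c[5] + c[6]) % 2  # checks positions 2, 3, 6, 7
    s3 = (c[3] + c[4] + c[5] + c[6]) % 2  # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3       # 1-based index of the bad bit
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1              # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]
```

A zero syndrome means no error was detected; any nonzero syndrome points at exactly one bit, so every single-bit error in the 7-bit word is corrected.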

Hamming code uses two methods (even parity and odd parity) for generating the redundancy
bits. The number of redundancy bits depends on the number of information data bits, as shown
below:

2^P ≥ N + P + 1 (1)

where N = number of information data bits and P = number of parity bits.

According to Eq. (1), 5 parity bits are required for 16 input data bits. Hamming-based codes are
widely used in memory systems for reliability improvements. The algorithm consists of two
phases: encoding and decoding. Hamming encoding involves deriving a set of parity check bits
over data bits. These parity check bits are concatenated or merged with the data bits. These extra
bits are called parity bits. We add these parity bits to the information data at the source end and
remove at destination end. Presence of parity bit allows the receiver to detect or correct corrupted
bits. The concept of including extra information in the transmission for error detection is a good
one. But in place of repeating the entire data stream, a shorter group of bits may be added to the
end of each unit. This technique is called parity because the extra bits are redundant to the
information.
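The number of parity bits required by the relation above can be found mechanically by searching for the smallest P with 2^P ≥ N + P + 1. A short sketch (the function name is our own):

```python
def parity_bits_needed(n):
    """Smallest P satisfying 2**P >= N + P + 1 for N data bits."""
    p = 0
    while 2 ** p < n + p + 1:
        p += 1
    return p
```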

Coding for detection and correction of errors

For detection and correction of errors, we need to add some parity bits (check bits) to a block of
data bits. The parity bits are chosen so that the resulting bit sequence has a unique characteristic which
enables error detection. Coding is the process of adding these parity bits. The following
terms relate to coding:

 The block of data bits to which the parity bits are added is called the data word.

 The bigger block containing the parity bits is called the code word.

 The Hamming distance (or simply distance) between two code words is the number of
positions in which they disagree. For example, the distance between the two words given below is
3 (see figure).

 The weight of a code word is the number of 1s it contains, e.g. 11001100 has a weight of
4.

1 1 0 0 1 0 1 1

Hamming distance=3

1 0 0 1 0 0 1 1

Figure: Hamming distance
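The two definitions above can be sketched in a few lines of Python (an illustrative model only, not part of the project's hardware implementation):

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length code words disagree."""
    return sum(x != y for x, y in zip(a, b))

def weight(codeword):
    """Number of 1s in the code word."""
    return codeword.count("1")

print(hamming_distance("11001011", "10010011"))  # 3, as in the figure
print(weight("11001100"))                        # 4
```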


CHAPTER 3

IMPLEMENTATION
Overview
The Hamming algorithm is an error-correcting code that can be used to detect single- and double-bit
errors and to correct single-bit errors that occur when binary data is transmitted from one
device to another. This project was implemented for single-bit error detection and single-bit
error correction, and it presents the design and development of a (21-bit, 16-bit, 1-bit) code. Here, 21-bit
corresponds to the total number of Hamming code bits in a transmittable unit comprising data
bits and parity bits, 16-bit is the number of data bits, and 1-bit denotes the maximum number of correctable error
bits in the transmittable unit.

Figure: Structure of Hamming code

Encoding a message by Hamming Code

Before transmitting the 16-bit data, parity is calculated over the associated bits. In the example
below, the data bits have been copied into their proper positions in the output message and the parity
bits have been calculated. Finally, the parity bits are combined with the
original data bits to form the (21-bit, 16-bit, 1-bit) code word.

The procedure used by the sender to encode the message encompasses the following steps −

Step 1 − Calculation of the number of parity bits.


Step 2 − Positioning the parity bits.

Step 3 − Calculating the values of each parity bit.

Once the parity bits are embedded within the message, the encoded message is sent to the receiver.

Step 1 − Calculation of the number of parity bits


If the message contains n data bits, p parity bits are added to it so that the code
is able to indicate at least (n + p + 1) different states. Here, (n + p) states indicate the location of an error
in each of the (n + p) bit positions, and one additional state indicates no error. Since p bits can
indicate 2^p states, 2^p must be at least equal to (n + p + 1). Thus the following relation should
hold: 2^p ≥ n + p + 1
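Step 1 can be sketched as a small search for the smallest p that satisfies the relation (an illustrative Python model, not the project's HDL code):

```python
def parity_bits_needed(n):
    """Smallest p such that 2**p >= n + p + 1."""
    p = 0
    while 2 ** p < n + p + 1:
        p += 1
    return p

print(parity_bits_needed(4))   # 3 -> the classic (7, 4) code
print(parity_bits_needed(16))  # 5 -> the (21, 16) code used in this project
```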

Step 2 − Positioning the parity bits


The p parity bits are placed at the bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc. They are
referred to in the rest of this text as p1 (at position 1), p2 (at position 2), p3 (at position 4), p4 (at position 8)
and so on.

Step 3 − Calculating the values of each parity bit


The parity bits are redundant bits. A parity bit is an extra bit that makes the number of 1s either
even or odd. The two types of parity are −

Even Parity − Here the total number of 1s in the message is made even.

Odd Parity − Here the total number of 1s in the message is made odd.

Each parity bit, pi, is calculated as the parity, generally even parity, of the bit positions it covers.
It covers all bit positions whose binary representation includes a 1 in the i-th place, except the
position of pi itself. Thus −

p1 is the parity bit for all data bits in positions whose binary representation includes a 1 in
the least significant place, excluding position 1 (3, 5, 7, 9, 11 and so on)

p2 is the parity bit for all data bits in positions whose binary representation includes a 1 in
the second place from the right, excluding position 2 (3, 6, 7, 10, 11 and so on)

p3 is the parity bit for all data bits in positions whose binary representation includes a 1 in
the third place from the right, excluding position 4 (5-7, 12-15, 20-23 and so on).
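The three encoding steps can be sketched together as a software model (illustrative Python, not the project's HDL implementation; positions are 1-indexed, with position 1 first in the returned list):

```python
def hamming_encode(data_bits):
    """Even-parity Hamming encoder.
    data_bits: list of 0/1 in position order (d3, d5, d6, d7, ...).
    Returns the code word as a list for positions 1..n+p."""
    n = len(data_bits)
    p = 0
    while 2 ** p < n + p + 1:          # Step 1: number of parity bits
        p += 1
    total = n + p
    code = [0] * (total + 1)           # index 0 unused; code[pos] = bit at position pos
    it = iter(data_bits)
    for pos in range(1, total + 1):    # Step 2: data fills the non-power-of-two slots
        if pos & (pos - 1):            # true when pos is NOT a power of two
            code[pos] = next(it)
    for i in range(p):                 # Step 3: parity bit at position 2**i covers
        mask = 1 << i                  # every position whose index has bit i set
        code[mask] = sum(code[pos] for pos in range(1, total + 1)
                         if pos & mask) % 2
    return code[1:]

# (7, 4) example: data word 1101 maps to d3=1, d5=0, d6=1, d7=1
print(hamming_encode([1, 0, 1, 1]))    # -> [0, 1, 1, 0, 0, 1, 1]
```

Read from position 7 down to position 1, this output is 1100110, the code word used in the single-bit error correction example later in this chapter.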
Decoding a message in Hamming Code
The receiver takes the transmission and recalculates the parity bits, using the same sets of bit
positions used by the sender together with the relevant parity bit of each set. It then assembles the
recalculated parity values into a binary number in descending order of parity position. If this
number is nonzero, it identifies the position of the bit that changed, and the receiver can complement
that bit's value to correct the error.

Once the receiver gets an incoming message, it performs recalculations to detect errors and
correct them. The steps for recalculation are −

Step 1 − Calculation of the number of parity bits.

Step 2 − Positioning the parity bits.

Step 3 − Parity checking.

Step 4 − Error detection and correction

Step 1 − Calculation of the number of parity bits


Using the same formula as in encoding, the number of parity bits is ascertained:

2^p ≥ n + p + 1, where n is the number of data bits and p is the number of parity bits.

Step 2 − Positioning the parity bits


The p parity bits are placed at the bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc.

Step 3 − Parity checking


The parity checks c1, c2, c3, c4, etc. are recalculated from the data bits and the parity bits,
using the same rule as during generation. Thus

c1 = parity(1, 3, 5, 7, 9, 11 and so on)

c2 = parity(2, 3, 6, 7, 10, 11 and so on)

c3 = parity(4-7, 12-15, 20-23 and so on)


Step 4 − Error detection and correction
The decimal equivalent of the binary number formed by the parity-check values is calculated. If it is 0, there is no error.
Otherwise, the decimal value gives the bit position which is in error. For example, if c4c3c2c1 =
1001, the bit at position 9 (the decimal equivalent of 1001) is in error. That bit is
flipped to get the correct message.
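The four decoding steps above can be sketched as a software model (illustrative Python, not the project's HDL code; positions are 1-indexed):

```python
def hamming_decode(code):
    """Detect and correct a single-bit error in an even-parity Hamming code word.
    code: list of 0/1 for positions 1..len(code).
    Returns (corrected code word, error position); position 0 means no error."""
    total = len(code)
    code = [0] + list(code)            # shift so code[pos] is the bit at position pos
    syndrome = 0
    i = 0
    while (1 << i) <= total:           # one check per parity bit (positions 1, 2, 4, 8, ...)
        mask = 1 << i
        # recompute parity over every position whose binary index has bit i set
        if sum(code[pos] for pos in range(1, total + 1) if pos & mask) % 2:
            syndrome += mask           # a failed check contributes 2**i
        i += 1
    if syndrome:
        code[syndrome] ^= 1            # flip the bit the syndrome points at
    return code[1:], syndrome

# (7, 4) code word 0,1,1,0,0,1,1 (positions 1..7) with bit 5 flipped:
print(hamming_decode([0, 1, 1, 0, 1, 1, 1]))   # error at position 5
```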

Number of parity bits

If the number of information bits is designated N, then the number of parity bits P is
determined by the following relationship:

2^P ≥ N + P + 1

This code is implemented for 16-bit input data.

Hence, 2^P ≥ 16 + P + 1, i.e. 2^P ≥ 17 + P. Let P = 5:

2^5 = 32 ≥ 22

This value of P satisfies the relationship.

16-bit error correction example

Consider the following 16-bit memory word:

1111 0000 1010 1110

Space out the digits, leaving gaps at the parity positions 1, 2, 4, 8 and 16:

__1_1 11_0 0001 01_0 1110

Using even parity (in this case) for each parity bit, the 21-bit code word, written with position 1 first, is:

0 0101 1100 0001 0110 1110

If an error inverts bit 5, the new code word becomes:

0 0100 1100 0001 0110 1110

Parity detection of error

Remembering that we used even parity for each parity bit:

 Parity bit 1 incorrect (positions 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21 contain 5 ones)
 Parity bit 2 correct (positions 2, 3, 6, 7, 10, 11, 14, 15, 18, 19 contain 6 ones)
 Parity bit 4 incorrect (positions 4, 5, 6, 7, 12, 13, 14, 15, 20, 21 contain 5 ones)
 Parity bit 8 correct (positions 8, 9, 10, 11, 12, 13, 14, 15 contain 2 ones)
 Parity bit 16 correct (positions 16, 17, 18, 19, 20, 21 contain 4 ones)

Parity bits 1 and 4 failed their checks, and 1 + 4 = 5, so bit 5 is in error; flipping it restores the original code word.
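The five parity checks above can be reproduced with a short script (illustrative Python; the received word is written with position 1 first, as in the example):

```python
# Corrupted 21-bit code word from the example (bit 5 inverted), position 1 first
received = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0]
code = [0] + received                  # code[pos] = bit at position pos
syndrome = 0
for i in range(5):                     # parity bits at positions 1, 2, 4, 8, 16
    mask = 1 << i
    ones = sum(code[pos] for pos in range(1, 22) if pos & mask)
    status = "incorrect" if ones % 2 else "correct"
    print(f"Parity bit {mask}: {status} ({ones} ones)")
    if ones % 2:
        syndrome += mask
print("Error at position:", syndrome)  # 1 + 4 = 5
code[syndrome] ^= 1                    # flipping bit 5 restores the code word
```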

Calculation of P(0)

The positions of all the bits, numbered in the decimal system, are converted into the
binary system. The binary location number of parity bit p0 has a 1 as its right-most digit. This
parity bit checks all bit locations, including itself, that have a 1 in the same place of their binary
location numbers. Therefore, parity bit p0 checks bit locations 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21
and assigns p(0).

TABLE I
Calculation of P(1)

The binary location number of parity bit p1 has a 1 as its second right-most digit. This parity bit
checks all bit locations, including itself, that have a 1 in the same place of their binary location
numbers. Therefore, parity bit p1 checks bit locations 2, 3, 6, 7, 10, 11, 14, 15, 18, 19 and assigns
p(1).

TABLE II

Calculation of P(2)

The binary location number of parity bit p2 has a 1 for its third right-most digit. This parity bit
checks all bit locations, including itself, that have 1s in the same location in the binary location
numbers. Therefore, parity bit p2 checks bit locations 4, 5, 6, 7, 12, 13, 14, 15, 20, 21 and assigns
p(2).

TABLE III

Calculation of P(3)

The binary location number of parity bit p3 has a 1 for its second left-most digit. This parity bit
checks all bit locations, including itself, that have 1s in the same location in the binary location
numbers. Therefore, parity bit p3 checks bit locations 8, 9, 10,11, 12, 13, 14,15 and assigns p(3).

TABLE IV
Calculation of P(4)

The binary location number of parity bit p4 has a 1 as its left-most digit. This parity bit checks
all bit locations, including itself, that have a 1 in the same place of their binary location
numbers. Therefore, parity bit p4 checks bit locations 16, 17, 18, 19, 20, 21 and assigns p(4).

TABLE V
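The coverage sets described for p(0) through p(4) follow directly from the binary location numbers; a short sketch that prints them (illustrative Python, assuming the 21-bit code of this project):

```python
# List the bit locations (1..21) checked by each parity bit p(0)..p(4)
coverage = {}
for i in range(5):
    mask = 1 << i                      # p(i) sits at position 2**i
    coverage[i] = [pos for pos in range(1, 22) if pos & mask]
    print(f"p({i}) at position {mask:2d} checks {coverage[i]}")
```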

Single Bit Error Correction


Consider that the generated code word was transmitted and, instead of receiving "1100110", we
received "1110110". Equations (1), (2) and (3) detect the position of the error in the data
as follows (^ denotes modulo-2 addition, i.e. XOR):

A = P4^D5^D6^D7 …………… (1)

B = P2^D3^D6^D7 …………… (2)

C = P1^D3^D5^D7 …………… (3)

Calculating ABC for the received code 1110110 gives A = 1, B = 0, C = 1. Thus ABC equals 101 in
binary, which is 5 in decimal. This indicates that bit D5 is corrupted; the decoder flips the bit at
this position and restores the original message.
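Equations (1)-(3) can be checked directly (illustrative Python; the received word 1110110 is written from position 7 down to position 1, i.e. D7 D6 D5 P4 D3 P2 P1):

```python
# Received (7, 4) code word with a single-bit error in D5
D7, D6, D5, P4, D3, P2, P1 = [int(b) for b in "1110110"]
A = P4 ^ D5 ^ D6 ^ D7      # equation (1): check over positions 4, 5, 6, 7
B = P2 ^ D3 ^ D6 ^ D7      # equation (2): check over positions 2, 3, 6, 7
C = P1 ^ D3 ^ D5 ^ D7      # equation (3): check over positions 1, 3, 5, 7
print(A, B, C)             # 1 0 1 -> binary 101 = decimal 5, so D5 is corrupted
print(4 * A + 2 * B + C)   # 5
```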

Case one:- (no error)

Consider that the data word is:

(1101)

The parity bits are :

P1=D3^ D5^D7=0;

P2=D3^ D6^D7=1;

P4=D5^ D6^D7=0;

Hence the code word becomes:

(1100110)

If the received code is :

(1100110)

The code at the reception side is checked as follows :

A = P4^D5^D6^D7 = (0^0^1^1) = 0;

B = P2^D3^D6^D7 = (1^1^1^1) = 0;

C = P1^D3^D5^D7 = (0^1^0^1) = 0;

ABC = (000) = 0, so there is no error.

Since there is no error, the data is accepted.

Case two:- (single-bit error)


If a single-bit error occurs, the Hamming code can detect it and determine the position of the
erroneous bit. It then corrects the error by flipping the value of that bit.

Consider the original data is :

(1101)

The parity bits are:

P1=D3^ D5^D7=0;

P2=D3^ D6^D7=1;

P4=D5^ D6^D7=0;

Code word :

(1100110)

Received code :

(0100110)

To check the code:

A = P4^D5^D6^D7 = (0^0^1^0) = 1;

B = P2^D3^D6^D7 = (1^1^1^0) = 1;

C = P1^D3^D5^D7 = (0^1^0^0) = 1;

ABC = (111) = 7, so the error is at bit position 7.


Hence, D7 is flipped from 0 to 1, restoring the original code word.
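Case two can be reproduced and corrected in the same illustrative style (bits written from D7 down to P1, as in the text):

```python
bits = [int(b) for b in "0100110"]   # received code word: D7 D6 D5 P4 D3 P2 P1
D7, D6, D5, P4, D3, P2, P1 = bits
A = P4 ^ D5 ^ D6 ^ D7
B = P2 ^ D3 ^ D6 ^ D7
C = P1 ^ D3 ^ D5 ^ D7
pos = 4 * A + 2 * B + C              # ABC = 111 -> error at position 7
bits[7 - pos] ^= 1                   # list index 0 holds position 7; flip D7
print("".join(map(str, bits)))       # 1100110, the original code word
```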
