
Asian Journal of Computer Science And Information Technology 2: 9 (2012) 272– 275.

Contents lists available at www.innovativejournal.in

Asian Journal of Computer Science and Information Technology

Journal homepage: http://www.innovativejournal.in/index.php/ajcsit

A FAST FRACTAL IMAGE COMPRESSION USING HUFFMAN CODING


D. Venkatasekhar*, P. Aruna

Dept. of Computer Science & Engg., Annamalai University, Annamalai Nagar, India.

ARTICLE INFO

Corresponding Author:
D. Venkatasekhar*
Dept. of Computer Science & Engg.,
Annamalai University, Annamalai Nagar, India

Keywords: Arithmetic coding, genetic algorithm, Huffman coding.

ABSTRACT

One of the methods used for compressing images, especially natural images, is to exploit their fractal features. Natural images have properties such as self-similarity that can be used in image compression. The basic approach in such compression methods is based on the fractal features and on searching for the best replacement block for each block of the original image. In this approach the best candidates are the neighborhood blocks, so the method tries to find the best neighbor block. Huffman coding can offer faster fractal compression than arithmetic coding: compared with arithmetic coding, Huffman coding is better suited for this compression, increasing the speed of compression and producing a high PSNR. This work saves many bits in image transmission, decreases the time needed to produce a compressed image, and increases the quality of the decompressed image. Overall, the genetic algorithm increases the speed of convergence towards the best block.

©2012, AJCSIT, All Right Reserved.


INTRODUCTION
In this paper we use fractal image compression: each natural image has sub-sections, and the pixels of each sub-section have great self-similarity to each other; this is the basis of the Partitioned Iterated Function System, or PIFS for short. Barnsley et al. (1993) showed that instead of using a fractal image directly, the conversion parameters of the image can be used effectively in compressing it; as a result, the redundancy in images can be removed, and compression increased, by such conversions. As mentioned, we are considering fractal image compression, and one of its significant benefits is a high compression rate, usually better than JPEG. The most important problem of fractal image compression, however, is the long time spent searching for a replacement conversion, caused by the time consumed in searching for the best replacement block for each block of the original image. Combined with schema theory, the genetic algorithm is a global search method that imitates the natural genetic process and can solve many complicated cases that have uneven search spaces; since most natural images have these features (a large search space, unevenness), using a GA for this purpose can be useful. The main motivation for using a schema genetic algorithm is that, following its natural analogue, a chromosome with high fitness is a good candidate for replacement: each block is represented by a chromosome, and the best chance of finding the best replacement block is among the adjacent blocks, which are covered by the crossover and mutation mechanisms that accompany schema theory; this keeps population diversity in the mechanism. In the following, the details of the work are presented. Finally, a Huffman coding step is used. Arithmetic coding nevertheless remains in wide use because of its simplicity and high speed; intuitively, however, Huffman coding can offer faster fractal compression than arithmetic coding, increasing the speed of compression and producing a high PSNR.

I. FRACTAL IMAGE COMPRESSION
Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes", which are used to recreate the encoded image.
1. RANGE BLOCKS: Usually, to keep a sample of all pixels of the original image, a small-size image is produced that has the general properties of the original image; all the non-overlapping, independent frames that make up this reduced-size image are called range blocks.
2. NEIGHBORHOODS: The frames of an image that are exactly adjacent to a particular frame are called the adjacent frames of that frame, and they are named first-layer neighbor frames; the frames that are adjacent at a distance of one pixel are called second-layer neighbors; third and fourth layers are defined the same way.
3. FRAME: A set of adjacent pixels in an image that has the same geometric structure, such as a square, rectangle, hexagon and so on, is called a frame.
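The paper gives no code for the block search; a minimal sketch of the neighborhood idea described above, under the assumption that an image is a plain 2-D list of grayscale values and that a block's best replacement is the first-layer neighbor with the lowest mean-squared error, might look like this (all names here are illustrative, not the authors'):

```python
# Sketch (not from the paper): pick the best replacement for a range block
# from its first-layer (directly adjacent) neighbor blocks, by MSE.

def block_at(img, r, c, size):
    """Extract the size x size block whose top-left corner is (r, c)."""
    return [row[c:c + size] for row in img[r:r + size]]

def mse(a, b):
    """Mean-squared error between two equally sized blocks."""
    n = len(a) * len(a[0])
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

def best_neighbor(img, r, c, size):
    """Among the first-layer neighbors of the range block at (r, c),
    return the (row, col) offset of the most similar block and its error."""
    target = block_at(img, r, c, size)
    h, w = len(img), len(img[0])
    best, best_err = None, float("inf")
    for dr in (-size, 0, size):
        for dc in (-size, 0, size):
            if dr == 0 and dc == 0:
                continue  # skip the range block itself
            nr, nc = r + dr, c + dc
            if 0 <= nr <= h - size and 0 <= nc <= w - size:
                err = mse(target, block_at(img, nr, nc, size))
                if err < best_err:
                    best, best_err = (nr, nc), err
    return best, best_err
```

The GA variant described in the paper would explore candidate offsets through chromosomes, crossover and mutation rather than the exhaustive loop used here, which is only for illustration.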

Venkatasekhar et.al/A Fast Fractal Image Compression Using Huffman Coding.

Fig. 1 Similar frames with different sizes in the Lena picture.

ADJACENCY APPROACH USED FOR FRACTAL IMAGE COMPRESSION:
As defined previously, adjacent pixels in an image form image frames. Adjacent pixels are usually very similar to each other; because of this, a proper replacement for a frame can be chosen from among the adjacent neighbors of that frame.

Fig. 2 Adjacency of layers 1 and 2 for a pixel.

II. METHODOLOGY
GENETIC ALGORITHM
The genetic algorithm is a biologically motivated search method mimicking natural selection and natural genetics, performing its work like a natural genetic process. John Holland first used the genetic algorithm as a search mechanism in the early 1970s. One of the significant advantages of the genetic algorithm is its ability to search uneven environments that have behavioral fluctuations in their response (Ming-Sheng et al., 2007; Mitra et al., 1998).
HUFFMAN CODING
Huffman coding is an entropy encoding algorithm used for lossless data compression. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file), where the variable-length code table has been derived in a particular way, based on the estimated probability of occurrence of each possible value of the source symbol.
Huffman coding uses a specific method for choosing the representation of each symbol, resulting in a prefix code (sometimes called a "prefix-free code"; that is, the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol) that expresses the most common source symbols using shorter strings of bits than are used for less common source symbols. Huffman was able to design the most efficient compression method of this type: no other mapping of individual source symbols to unique strings of bits will produce a smaller average output size when the actual symbol frequencies agree with those used to create the code. A method was later found to design a Huffman code in linear time if the input probabilities (also known as weights) are sorted. For a set of symbols with a uniform probability distribution and a number of members which is a power of two, Huffman coding is equivalent to simple binary block encoding, e.g., ASCII coding. Huffman coding is such a widespread method for creating prefix codes that the term "Huffman code" is widely used as a synonym for "prefix code" even when such a code is not produced by Huffman's algorithm.
A. COMPRESSION:
The technique works by creating a binary tree of nodes. These can be stored in a regular array, the size of which depends on the number of symbols. A node can be either a leaf node or an internal node. Initially, all nodes are leaf nodes, which contain the symbol itself, the weight (frequency of appearance) of the symbol and, optionally, a link to a parent node, which makes it easy to read the code (in reverse) starting from a leaf node. Internal nodes contain a weight, links to two child nodes and the optional link to a parent node. As a common convention, bit '0' represents following the left child and bit '1' represents following the right child.
A finished tree has leaf nodes and internal nodes. A Huffman tree that omits unused symbols produces the optimal code lengths. The process essentially begins with the leaf nodes containing the probabilities of the symbols they represent; then a new node whose children are the two nodes with the smallest probabilities is created, such that the new node's probability is equal to the sum of the children's probabilities. With the previous two nodes merged into one node, and with the new node now being considered, the procedure is repeated until only one node remains: the Huffman tree.
B. DECOMPRESSION:
Generally speaking, the process of decompression is simply a matter of translating the stream of prefix codes to individual byte values, usually by traversing the Huffman tree node by node as each bit is read from the input stream (reaching a leaf node necessarily terminates the search for that particular byte value). Before this can take place, however, the Huffman tree must somehow be reconstructed.
In the simplest case, where character frequencies are fairly predictable, the tree can be pre-constructed (and even statistically adjusted on each compression cycle) and thus reused every time, at the expense of at least some measure of compression efficiency. Otherwise, the information needed to reconstruct the tree must be sent a priori. A naive approach is to prepend the frequency count of each character to the compression stream. Unfortunately, the overhead in such a case could amount to several kilobytes, so this method has little practical use. If the data is compressed using canonical encoding, the compression model can be precisely reconstructed from just a few bits of information. Another method is to simply prepend the Huffman tree, bit by bit, to the output stream.
APPLICATION
Huffman coding today is often used as a "back-end" to some other compression methods. DEFLATE (PKZIP's algorithm) and multimedia codecs such as JPEG and MP3 have a front-end model and quantization followed by Huffman coding.
DISCRETE WAVELET TRANSFORM
The wavelet transform is a type of signal representation that can give the frequency content of a signal at a particular instant of time or spatial location. The Haar wavelet transform decomposes the image into different sub-band images: it splits each component into several frequency bands called sub-bands, namely the LL, LH, HL, and HH sub-bands.
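As a concrete illustration (not code from the paper), a single-level, unnormalised Haar decomposition of a grayscale image into these four sub-bands can be sketched as follows; the image is assumed to be a 2-D list with even height and width:

```python
def haar_subbands(img):
    """One level of the 2-D Haar transform: returns (LL, LH, HL, HH).
    Each output sub-band is half the size of the input in each dimension;
    coefficients are averages/differences over each 2x2 block."""
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = [], [], [], []
    for r in range(0, h, 2):
        ll, lh, hl, hh = [], [], [], []
        for c in range(0, w, 2):
            a, b = img[r][c], img[r][c + 1]
            d, e = img[r + 1][c], img[r + 1][c + 1]
            ll.append((a + b + d + e) / 4)  # LL: low pass both ways (approximation)
            lh.append((a + b - d - e) / 4)  # LH: horizontally low, vertically high
            hl.append((a - b + d - e) / 4)  # HL: horizontally high, vertically low
            hh.append((a - b - d + e) / 4)  # HH: high pass both ways (diagonal detail)
        LL.append(ll); LH.append(lh); HL.append(hl); HH.append(hh)
    return LL, LH, HL, HH
```

For a flat region all three detail sub-bands are zero, while edges show up in LH, HL or HH, matching the observation that the high-frequency sub-bands carry the edge information.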

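The tree building (section A) and prefix decoding (section B) described earlier can be sketched as follows. This is an illustrative Python version using a binary heap, not the paper's implementation; repeatedly merging the two lowest-weight nodes is exactly the procedure of section A:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table by repeatedly merging the two
    lowest-weight nodes until a single tree remains."""
    freq = Counter(data)
    # Heap entry: (weight, tie_breaker, {symbol: code_so_far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: one distinct symbol
        return {s: "0" for s in heap[0][2]}
    nxt = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}   # '0' = left child
        merged.update({s: "1" + c for s, c in right.items()})  # '1' = right child
        heapq.heappush(heap, (w1 + w2, nxt, merged))
        nxt += 1
    return heap[0][2]

def encode(data, codes):
    return "".join(codes[s] for s in data)

def decode(bits, codes):
    """Translate the stream of prefix codes back to symbols; emitting a
    symbol as soon as a code matches is equivalent to reaching a leaf."""
    inverse = {c: s for s, c in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:
            out.append(inverse[cur])
            cur = ""
    return "".join(out)
```

Because the code is prefix-free, the greedy decoder never has to back up: the moment the accumulated bits match a codeword, that codeword is the only possible parse.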

A high-frequency sub-band contains the edge information of the input image, and the LL sub-band contains the approximation (smooth) information about the image.
DWT SUB-BAND STRUCTURE

Fig. 3 DWT sub-band structure.

They are:
1. LL – horizontally and vertically low pass
2. LH – horizontally low pass and vertically high pass
3. HL – horizontally high pass and vertically low pass
4. HH – horizontally and vertically high pass
ADVANTAGES:
1. The DWT has a good localization property in both the time domain and the frequency domain.
2. The number of encoding bits is smaller compared to the existing method.
3. Lossless and lossy compression methods are combined to obtain a high compression ratio and a high-quality image.
APPLICATIONS:
1. Transmission and storage applications.
2. Multimedia applications.
III. IMPLEMENTATION
HUFFMAN CODING
1. INPUT IMAGE
   1. Select the input image for Huffman coding.
   2. Apply the histogram function.
2. ENCODED IMAGE
   1. Encode the image by the encoding technique.
   2. Get the compression ratio.
3. DECODED IMAGE
   1. Apply the DWT and decode the image by the decoding technique.
   2. Then apply the histogram function.
   3. Apply Huffman coding.
   4. Finally, validate and obtain the PSNR, decode time and encode time values.
GRAPH FOR HUFFMAN CODING
CONCLUSION
The genetic algorithm is used to find the best replacement block, so fractal image compression is done easily. Here the genetic algorithm with Huffman coding is used for fractal image compression. Intuitively, Huffman coding can offer faster fractal compression than arithmetic coding: compared with arithmetic coding, Huffman coding is better for compression, increasing the speed of compression and producing a high PSNR. Overall, the genetic algorithm increases the speed of convergence towards the best block.
REFERENCES
1. M. Hassaballah, M. M. Makky and Y. B. Mahdy, "A Fast Fractal Image Compression Method Based on Entropy", Electronic Letters on Computer Vision and Image Analysis, 5(1):30-40, 2005.
2. J. H. Friedman, J. L. Bentley, and R. A. Finkel, "An algorithm for finding best matches in logarithmic expected time," ACM Trans. Math. Softw., vol. 3, no. 3, pp. 209-226, 1977.
3. C. S. Tong and W. Man, "Adaptive Approximation Nearest Neighbor Search for Fractal Image Compression," IEEE Transactions on Image Processing, vol. 11, no. 6, pp. 605-615, 2007.
4. S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman and A. Wu, "An optimal algorithm for approximate nearest neighbor searching," Proc. 5th Annual ACM-SIAM Symposium on Discrete Algorithms, 1994, pp. 573-582.
5. R. G. Gallagher, "Variations on a theme by Huffman," IEEE Trans. Inform. Theory, vol. 24, no. 6, pp. 668-674, Nov. 1978.
6. D. E. Knuth, "Dynamic Huffman coding," J. of Algorithms, vol. 6, pp. 163-180, June 1985.


7. B. Bani-Eqbal, "Enhancing the speed of fractal image compression", Optical Engineering, vol. 34, no. 6, pp. 1705-1710, June 1995.
8. D. Bhandari, C. A. Murthy and S. K. Pal, "Genetic algorithm with elitist model and its convergence," Int. J. Pattern Recognit. Artif. Intell., vol. 10, pp. 731-747, 1996.

