

Electronics and Communication Department, NIEIT, Hootagalli,
Mysore, Karnataka, India 570018

Electronics and Communication Department, NIE, Manandavadi Road
Mysore, Karnataka, India 570008

ABSTRACT: This paper discusses the results of a survey of various lossy and lossless image
compression algorithms for gray-scale images. For each algorithm we use the performance metrics,
viz., compression ratio, compression/decompression time, and the picture quality measure PSNR.
Furthermore, we analyze the advantages and disadvantages of each algorithm. We also discuss a new
method using the NTT (number theoretic transform) and recommend developing an algorithm based on
the NTT in future. Recommendations on the further implementation and enhancement of image
compression algorithms are formulated in this paper.



Image data compression is concerned with minimizing the number of bits required to represent an
image. Perhaps the simplest and most dramatic form of image compression is the sampling of
band-limited images, where the infinite number of pixels per unit area is reduced to one sample
without any loss of information. Applications of image compression are primarily in image
transmission and storage. Image transmission applications include broadcast television, remote
sensing via satellite, military communications via aircraft, radar and sonar, teleconferencing,
computer communications, facsimile transmission, and the like. Image storage is required for
educational and business documents, medical images that arise in computed tomography, magnetic
resonance imaging, motion pictures, satellite images, and so on.

With the increasing use of multimedia and internet applications, image compression has received
growing interest. To address the needs and requirements of this growing field, many efficient
image compression techniques have recently been developed.

The objective of this research is to fulfill the following points:

1) Review of recent image compression algorithms
2) Selecting comparison measures
3) Choosing a set of test images
4) Implementing code for the algorithms
5) Obtaining numerical results and verification
6) Analysis based on the results, and conclusion


This paper reviews and gives a comparative analysis of popular image compression algorithms
based on JPEG/DCT (Gregory Wallace, 1991), WAVELET (Bryan Usevitch, 2001; Zixiang Xiong,
Kannan Ramchandran, Michael T. Orchard, and Ya-Qin Zhang, 1999), VQ (R. M. Gray, B. H. Juang,
2004; Y. Linde, A. Buzo and R. M. Gray, 2000), SPIHT (Amir Said, William Pearlman, 1996),
JPEG2000 (Majid Rabbani, Rajan Joshi, 2002), and FRACTAL COMPRESSION (B. Wohlberg and
G. de Jager, 1999). We also discuss and recommend an image compression algorithm based on the
NTT (Agarwal, H.C., and Burrus, C.S., 1974; C. M. Rader, 1972; I. S. Reed et al., 1977).


In order to evaluate the performance of image compression coding, it is necessary to define a
measurement that estimates the difference between the original image and the decoded image.
For comparison, the following characteristics have been considered:

i) Compression time ii) Decompression time iii) Compression ratio iv) Compression quality, which
is determined by calculating the PSNR (peak signal-to-noise ratio), measured in decibels, as

PSNR = 20 * log10(255 / square root (MSE))    (1)

where the value 255 is the maximum possible value that can be attained by the image signal.
The mean square error (MSE) is defined as

MSE = (1/(M*N)) * sum over x=0..M-1, y=0..N-1 of [f(x, y) - f'(x, y)]^2    (2)

where M * N is the size of the original image, f(x, y) is the pixel value of the original image,
and f'(x, y) is the pixel value of the decoded image. PSNR is measured in decibels (dB). It has
been shown that PSNR is not always an indicator of the subjective quality of the reconstructed
image.

Most image compression systems are designed to minimize the MSE and maximize the PSNR.
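Equations (1) and (2) can be checked with a short script. The following is an illustrative sketch in Python/NumPy (the survey itself used MATLAB); the test images here are synthetic arrays, not the Lena/Boat/Barbara images of the study:

```python
import numpy as np

def psnr(original, decoded):
    """PSNR in dB between an 8-bit original and decoded image,
    following equations (1) and (2)."""
    original = original.astype(np.float64)
    decoded = decoded.astype(np.float64)
    mse = np.mean((original - decoded) ** 2)   # equation (2)
    if mse == 0:
        return float('inf')                    # identical images
    return 20 * np.log10(255.0 / np.sqrt(mse)) # equation (1)

# A uniform error of 1 gray level gives MSE = 1, so PSNR = 20*log10(255)
a = np.full((512, 512), 100, dtype=np.uint8)
b = np.full((512, 512), 101, dtype=np.uint8)
print(round(psnr(a, b), 2))  # → 48.13
```

As the 40 dB threshold discussed later suggests, a PSNR this high corresponds to distortion that is essentially invisible to a human observer.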


Fig 1: Test images Lena (SFM = 14.019), Boat (18.157) and Barbara (16.16)

We have chosen three test images (512x512, 8 bits/pixel, Fig 1) with different spatial and
frequency characteristics. The characteristics of the test images are evaluated in the spatial
domain using the spatial frequency measure (SFM) (M. Eskicioglu, P. S. Fisher, 1995). SFM is
defined as follows:

SFM = square root (R^2 + C^2), where
R = square root ( (1/(M*N)) * sum over j=1..M, k=2..N of (f(j,k) - f(j,k-1))^2 )
C = square root ( (1/(M*N)) * sum over k=1..N, j=2..M of (f(j,k) - f(j-1,k))^2 )    (3)

where R is the row frequency, C is the column frequency, f(j,k) denotes the samples of the image,
and M and N are the row and column sizes of the image. If the input image is a color image, it can
be separated into R, G, B planes, each of which is a gray-scale image, and the algorithm may be
applied to each plane.
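Equation (3) amounts to summing squared differences between horizontally and vertically adjacent samples. A minimal Python/NumPy sketch (not from the original study; the stripe image is a synthetic example chosen to show that busier images score higher):

```python
import numpy as np

def sfm(img):
    """Spatial frequency measure of equation (3); img is a 2-D gray-scale array."""
    f = img.astype(np.float64)
    M, N = f.shape
    # Row frequency R: differences between horizontally adjacent samples (k >= 2)
    R = np.sqrt(np.sum((f[:, 1:] - f[:, :-1]) ** 2) / (M * N))
    # Column frequency C: differences between vertically adjacent samples (j >= 2)
    C = np.sqrt(np.sum((f[1:, :] - f[:-1, :]) ** 2) / (M * N))
    return np.sqrt(R ** 2 + C ** 2)

flat = np.zeros((8, 8))                   # no spatial activity at all
stripes = np.tile([0.0, 255.0], (8, 4))   # alternating columns: high row frequency
print(sfm(flat), sfm(stripes) > sfm(flat))  # a busier image scores higher
```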


We have used the MATLAB Image Processing Toolbox, which provides a comprehensive set of
reference-standard algorithms and graphical tools for analysis and algorithm development. The
SPIHT results were obtained using the Matlab files from .


The three test images are coded and decoded using each compression algorithm. For each test
image, nine bit rates are used: 0, 0.2, 0.4, 0.6, 0.8, …, 1.4, …, 2.0. Figure 2 presents the
compression time in seconds, Figure 3 the decompression time in seconds, and Figure 4 the
compression ratio.

Fig 2: Compression time in seconds

Fig 3: Decompression time in seconds

Figure 2 shows that the compression time for the fractal algorithm significantly exceeds the
duration of the compression procedure for the other algorithms. This means that the fractal
algorithm isn't appropriate for real-time applications. Such a long compression time is
attributed to the complexity of the procedure. The other algorithms (the two types of JPEG and
SPIHT) rely on computational operations (discrete cosine transform, fast Fourier transform,
wavelets, etc.) that are well optimized and therefore demand less time.
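To illustrate the kind of transform step these coders rely on, the following Python/NumPy sketch runs a block DCT through a quantize/dequantize round trip, the core operation of JPEG. It is not the full JPEG pipeline (no zig-zag scan or entropy coding), and the uniform step size q = 16 is an arbitrary choice for illustration:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as applied to 8x8 JPEG blocks."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def compress_block(block, q=16):
    """Transform, uniformly quantize, dequantize, and inverse transform one block."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T           # forward 2-D DCT
    quantized = np.round(coeffs / q)   # lossy step: most coefficients become 0
    return C.T @ (quantized * q) @ C   # dequantize and reconstruct

block = np.full((8, 8), 128.0)  # a flat block: only the DC coefficient survives
rec = compress_block(block)
print(np.allclose(rec, block))  # → True: a flat block survives the round trip exactly
```

Because quantization zeroes most coefficients of smooth blocks, the surviving few can be entropy-coded very compactly, which is where the actual bit-rate saving comes from.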

Fig 4: Compression ratio

Figure 3 shows that the decompression times are small for all algorithms and roughly equal. The
SPIHT algorithm looks the most attractive from the point of view of decompression time. The
analysis of the compression ratio values (Fig 4) shows that wavelet-based compression algorithms
yield the highest compression ratio and are highly efficient for image compression because they
organize the image data in a way that closely resembles the human visual system. Wavelet coding
is better than JPEG compression in terms of compression ratio. The fractal algorithm and SPIHT
can give a better compression ratio, but then the compression/decompression procedure demands
more time and the correlation coefficient is lower, i.e., we get worse quality.

Figure 5 shows the plot of PSNR of the different algorithms for the test image Lena.bmp at
different bit rates.

Fig 5: Plot of PSNR of Lena.bmp

A large PSNR indicates good performance in terms of fidelity. A PSNR of 40 dB or above means the
error between the original and decoded images is minimal and the two are virtually
indistinguishable to human observers. The JPEG2000 and WAVELET algorithms give the highest PSNR
values for all three test images; the fractal and VQ algorithms give the lowest. Thus, the
comparative study of the characteristics of the lossy algorithms leads us to conclude that the
fractal algorithm is the least practical because of its extremely long compression time. The
remaining algorithms are comparatively equal in the quality of both compression and
decompression. One more point to note is that some algorithms, such as JPEG, are hybrid. This
allows us to conclude that combining several compression procedures makes it possible to raise
the quality of compression/decompression.

Figure 6 gives the compressed images produced by the different algorithms.

Fig 6: Compressed images using Wavelet, JPEG, VQ, JPEG2000, Fractal, SPIHT (clockwise from top left).


During the last decade there have been a number of publications on the application of number
theoretic transforms (NTTs) to digital signal processing. These transforms have found their best
use in the calculation of convolutions and correlations, where they can eliminate the errors
that may otherwise arise due to the finite register lengths employed and, with dedicated VLSI
chips designed for the purpose, can provide high-speed operation (Towers, P. J., Pajayakrit, A.,
and Holt, A. G. J., 1987; Truong, T. K., Yeh, C. S., Reed, I. S., and Chang, J. J., 1993).

We have carried out an extensive study of NTTs and bring out the following points. NTTs are
discrete Fourier transforms defined over finite fields or rings. An NTT has the same form as the
DFT, with the exception that it is computed over a finite ring or field rather than over the
field of complex numbers. The NTT is also defined on the same geometry as the DFT and preserves
the circular convolution property (CCP) of the DFT, but allows error-free computation with the
promise of much faster hardware solutions. These transforms are defined modulo an integer and
hence are free from the rounding and/or truncation errors associated with other transforms. The
FNT (Fermat number transform) has an expanding nature, can give better compression performance,
and looks very attractive for algorithm implementation.
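The circular convolution property can be demonstrated with a toy Fermat number transform. The sketch below uses the Fermat prime 257 = 2^8 + 1 with root 2 and transform length 16; these parameters are chosen purely for illustration and are not taken from the cited papers. All arithmetic is modular, so the convolution is exact as long as its true values stay below the modulus:

```python
P = 257    # Fermat prime F_3 = 2^8 + 1
N = 16     # transform length
ALPHA = 2  # 2 has multiplicative order 16 mod 257, so it is a primitive N-th root of unity

def ntt(x, root=ALPHA):
    """Naive O(N^2) number theoretic transform over Z_257."""
    return [sum(x[n] * pow(root, k * n, P) for n in range(N)) % P for k in range(N)]

def intt(X):
    """Inverse NTT: use the inverse root and scale by N^{-1} mod P."""
    inv_root = pow(ALPHA, P - 2, P)  # alpha^{-1} via Fermat's little theorem
    inv_n = pow(N, P - 2, P)         # N^{-1} mod P
    return [(inv_n * v) % P for v in ntt(X, inv_root)]

def circular_convolution(a, b):
    """Exact circular convolution via the CCP: INTT(NTT(a) * NTT(b))."""
    A, B = ntt(a), ntt(b)
    return intt([(u * v) % P for u, v in zip(A, B)])

a = [1, 2, 3, 4] + [0] * 12
b = [5, 6, 7, 8] + [0] * 12
# With enough zero padding, circular convolution equals linear convolution:
print(circular_convolution(a, b))
# → [5, 16, 34, 60, 61, 52, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

Because every step is integer arithmetic modulo 257, the result is bit-exact: there is no rounding or truncation error of the kind a floating-point FFT would introduce.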


The research makes the following concluding points. The input images need to be properly
analyzed, and then a suitable compression algorithm should be employed. Hybrid algorithms, i.e.,
combinations of dictionary-based algorithms with transform-based algorithms, can be considered
the most promising. Having studied the properties of NTTs, we forecast that a number theoretic
transform will give a higher compression/decompression ratio, and that employing a suitable VLSI
architecture may also increase the speed. This algorithm is well worth developing and
researching in the area of image compression.


Agarwal, H.C., and Burrus, C.S. (1974), 'Number theoretic transforms to implement fast digital
convolution', Proceedings of the IEEE, 63(4), pp. 550-560.
Amir Said, William Pearlman (1996), 'A New Fast and Efficient Image Codec Based on Set
Partitioning in Hierarchical Trees', IEEE Transactions on Circuits and Systems for Video
Technology.
Bryan Usevitch (2001), 'A Tutorial on Modern Lossy Wavelet Image Compression: Foundations of
JPEG 2000', IEEE Signal Processing Magazine.
M. Eskicioglu, P. S. Fisher (1995), 'Image quality measures and their performance', IEEE
Transactions on Communications, 43(12), pp. 2959-2965.
R. M. Gray, B. H. Juang (2004), 'A classical tutorial on vector quantization', IEEE ASSP Mag.,
vol. 1, pp. 4-29.
Gregory Wallace (1991), 'The JPEG Still Picture Compression Standard', IEEE Transactions on
Consumer Electronics.
Y. Linde, A. Buzo and R. M. Gray (2000), 'An Algorithm for Vector Quantizer Design', IEEE
Transactions on Communications, 28(1), pp. 84-95.
Majid Rabbani, Rajan Joshi (2002), 'An overview of the JPEG2000 still image compression
standard', Signal Processing: Image Communication.
C. M. Rader (1972), 'The number theoretic DFT and exact discrete convolution', presented at the
IEEE Arden House Workshop on Digital Signal Processing.
I. S. Reed et al. (1977), 'Image Processing by Transforms Over a Finite Field', IEEE Trans.
Comp., C-26, pp. 874-881.
Towers, P. J., Pajayakrit, A., and Holt, A. G. J. (1987), 'Cascadable NMOS VLSI circuit for
implementing a fast convolver using Fermat number transforms', IEE Proc. G, Electronic Circuits
& Systems, 134(2), pp. 57-66.

Truong, T. K., Yeh, C. S., Reed, I. S., and Chang, J. J. (1993), 'VLSI design of number
theoretic transforms for a fast convolution', Proc. IEEE Int. Conf. on Computer Design: VLSI in
Computing (ICCD 1983), New York, U.S.A., 31 October-3 November 1983, pp. 200-203.
B. Wohlberg and G. de Jager (1999), 'A review of the fractal image coding literature', IEEE
Trans. on Image Processing, 8(12).
Zixiang Xiong, Kannan Ramchandran, Michael T. Orchard, and Ya-Qin Zhang (1999), 'A Comparative
Study of DCT- and Wavelet-Based Image Coding', IEEE Transactions on Circuits and Systems for
Video Technology, 9(5).