
International Journal of Computer Trends and Technology (IJCTT), Volume 4, Issue 6, June 2013. ISSN: 2231-2803, http://www.ijcttjournal.org



Sparse Transform Matrix at Low Complexity for
Color Image Compression

Dr. K. Kuppusamy, M.Sc., M.Phil., M.C.A., B.Ed., Ph.D., and R. Mehala (M.Phil. Research Scholar)
Department of Computer Science and Engineering,
Alagappa University, Karaikudi, INDIA


Abstract- Image processing is a powerful area of modern digital technology. Compression is the process of minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. In this paper, we discuss digital image compression at low complexity for still imagery and give a comparative study of several algorithms. In future work, we plan to reduce the computation required for the sparse matrix transform, apply entropy coding to various test images, and enable quality scalability by simply truncating the generated bit stream while preserving rate-distortion performance.

Keywords: image compression, sparse matrix, entropy coding, quality scalability, bit rate.
I. INTRODUCTION
A. Image
An image is essentially a 2-D signal processed by the human visual system. The signals representing images are usually in analog form. For processing, storage and transmission by computer applications, they are converted from analog to digital form.

B. Digital Image
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels. Pixel values typically represent gray levels, colors, heights, opacities, etc.
Digital Image Types
1. Binary Image
2. Color Image
3. Gray Scale Image
4. Indexed Image
Digital image processing focuses on two major tasks:
1. Improvement of pictorial information for human interpretation
2. Processing of image data for storage, transmission and representation for autonomous machine perception
There is some argument about where image processing ends and where fields such as image analysis and computer vision begin.
C. Image Compression
Compression is a process of reducing or eliminating redundant or irrelevant data. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The compressed image is not directly displayable; it must be decompressed before being input to a color monitor. Compression also reduces the time required for images to be sent over the Internet or downloaded from Web pages.
Basic data redundancies
1. Coding Redundancy
2. Interpixel Redundancy
3. Psychovisual Redundancy
Coding redundancy is present when less than optimal code words are used. Interpixel redundancy results from correlations between the pixels of an image. Psychovisual redundancy is due to data that is ignored by the human visual system. Image compression techniques reduce the number of bits required to represent an image by taking advantage of these redundancies. An inverse process called decompression (decoding) is applied to the compressed data to get the reconstructed image.
D. Basic Image Compression Model
(Figure: basic compression model. The input image f(x,y) passes through a mapper, a quantizer and a symbol coder to produce the compressed image; a symbol decoder and an inverse mapper reconstruct the approximation F(x,y).)
The JPEG compression process contains three primary parts, as shown in the JPEG encoding flow chart. To prepare for processing, the matrix representing the image is converted from RGB color space to YCbCr and undergoes the subsampling process. The partition process then divides the matrix into 8x8 blocks (a size that balanced image quality against the processing power available at the time) and passes them through the encoding process in chunks.
To reverse the compression and display a close approximation to the original image, the compressed data is fed into the reverse process, as shown in the JPEG decoding flow chart. These figures illustrate the special case of single-component (grayscale) image compression. Color image compression can then be approximately regarded as compression of multiple grayscale images, which are either compressed entirely one at a time, or compressed by alternately interleaving 8x8 sample blocks from each in turn.
(Figure: JPEG encoding flow chart.)
(Figure: JPEG decoding flow chart.)
II. TRANSFORMATION
A transformation is a reversible process that reduces redundancy and/or provides an image representation that is more amenable to the efficient extraction and coding of relevant information.
Examples
1. Block-based linear transformations,
e.g. Discrete Cosine Transform (DCT)
2. Wavelet decompositions.
3. Prediction/residual formation, e.g.
Differential Pulse Code Modulation
(DPCM)
4. Color space transformations, e.g. RGB to YCbCr
5. Model prediction/residual formation,
e.g. Fractals
A. Image Representation with DCT

DCT coefficients can be viewed as weighting functions that, when applied to the 64 cosine basis functions of various spatial frequencies (8 x 8 templates), will reconstruct the original block.
original block = y(0,0) * B(0,0) + y(1,0) * B(1,0) + ... + y(7,7) * B(7,7)

where y(u,v) is the DCT coefficient at position (u,v) and B(u,v) is the corresponding 8 x 8 cosine basis image.

(Figure: the original image block, the DC (flat) basis function and the 63 AC basis functions.)
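As an illustration (not from the paper), here is a minimal NumPy sketch of this reconstruction: it generates the orthonormal 8 x 8 cosine basis images B(u,v) and sums them, weighted by the coefficients y(u,v).

    import numpy as np

    def dct_basis(u, v, n=8):
        # (u, v)-th orthonormal n x n cosine basis image of the 2-D DCT
        x = np.arange(n)
        cu = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
        cv = np.sqrt(1.0 / n) if v == 0 else np.sqrt(2.0 / n)
        col = np.cos((2 * x + 1) * u * np.pi / (2 * n))
        row = np.cos((2 * x + 1) * v * np.pi / (2 * n))
        return cu * cv * np.outer(col, row)

    def reconstruct_block(y):
        # original block = sum over (u, v) of y(u, v) * B(u, v)
        n = y.shape[0]
        block = np.zeros((n, n))
        for u in range(n):
            for v in range(n):
                block += y[u, v] * dct_basis(u, v, n)
        return block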

B. Differential Pulse Code Modulation

- Lossless JPEG is based on differential pulse code modulation (DPCM).

- In DPCM, a combination of previously encoded pixels (A, B, C) is used as a prediction (P) for the current pixel (X).

- The difference between the actual value and the prediction (X - P) is encoded using Huffman coding.

- In lossy DPCM, the difference is quantized before it is encoded.
Properties
Low complexity
High quality (limited compression)
Low memory requirements
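A minimal sketch of the idea (our illustration, using a simple left-neighbour predictor rather than the three-pixel A, B, C predictors of lossless JPEG):

    def dpcm_encode(row):
        # Predict each pixel from its left neighbour; emit the residual,
        # which would then be entropy coded (e.g. with Huffman coding).
        residuals, prev = [], 0  # assume a zero prediction for the first pixel
        for x in row:
            residuals.append(x - prev)
            prev = x
        return residuals

    def dpcm_decode(residuals):
        # Undo the prediction by accumulating the residuals.
        pixels, prev = [], 0
        for e in residuals:
            prev += e
            pixels.append(prev)
        return pixels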
C. Color Space Transformation


The process of compression starts with the conversion of color space. We use the transform matrix below to convert the three-dimensional color matrix of the image from RGB to YCbCr, pixel by pixel.
[ Y ]   [  0.299    0.587    0.114  ] [ R ]   [ 0   ]
[ U ] = [ -0.1687  -0.3313   0.5    ] [ G ] + [ 0.5 ]
[ V ]   [  0.5     -0.4187  -0.0813 ] [ B ]   [ 0.5 ]

Color space conversion from RGB to YCbCr
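A small Python sketch of this conversion (our illustration; RGB values are assumed normalized to [0, 1], so the chroma offset is 0.5 rather than the 128 used for 8-bit samples):

    import numpy as np

    M = np.array([[ 0.299,   0.587,   0.114 ],
                  [-0.1687, -0.3313,  0.5   ],
                  [ 0.5,    -0.4187, -0.0813]])
    OFFSET = np.array([0.0, 0.5, 0.5])

    def rgb_to_ycbcr(pixel):
        # One normalized RGB pixel -> (Y, Cb, Cr)
        return M @ np.asarray(pixel) + OFFSET

    print(rgb_to_ycbcr([1.0, 0.0, 0.0]))  # pure red -> [0.299, 0.3313, 1.0]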


D. Spatial Transformation
A spatial transformation modifies the
spatial relationship between pixels in an
image, mapping pixel locations in an image
to new locations in an output image.
Typical image processing toolbox functions include:
- Resizing an image
- Rotating an image
- Cropping an image
- 2-D spatial transformations
- N-D spatial transformations

E. Histogram
A histogram is a graph indicating the number of times each gray level occurs in the image.

(Figure: original image and output image.)
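A histogram can be computed with one pass over the image, as in this small sketch (our illustration):

    def histogram(image, levels=256):
        # Count how many times each gray level occurs in the image (list of rows).
        counts = [0] * levels
        for row in image:
            for pixel in row:
                counts[pixel] += 1
        return counts

    print(histogram([[0, 1, 1], [255, 1, 0]]))  # counts[0]=2, counts[1]=3, counts[255]=1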
III. QUANTIZATION
Quantization refers to the process of
approximating the continuous set of values in
the image data with a finite set of values. The
input to a quantizer is the original data, and
the output is always one among a finite
number of levels. This is a process of
approximation, and a good quantizer is one
which represents the original signal with
minimum loss or distortion.
There are two types of quantization
1. Scalar Quantization
2. Vector Quantization.

In scalar quantization, each input
symbol is treated separately in producing the
output, while in vector quantization the input
symbols are clubbed together in groups
called vectors, and processed to give the
output. This clubbing of data and treating
them as a single unit increases the optimality
of the vector quantizer, but at the cost of
increased computational complexity.
A quantizer can be specified by its input partitions and output levels. If the input range is divided into levels of equal spacing, the quantizer is termed a uniform quantizer; otherwise it is a non-uniform quantizer. A uniform quantizer can be easily specified by its lower bound and the step size, and it is easier to implement than a non-uniform quantizer. Consider the uniform quantizer shown below: if the input falls between n*r and (n+1)*r, the quantizer outputs the symbol n.

(Figure: a uniform quantizer.)
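A minimal sketch of such a uniform quantizer with step size r (our illustration; the decoder reconstructs at the midpoint of each interval):

    def quantize(x, r):
        # If x falls between n*r and (n+1)*r, output the symbol n.
        return int(x // r)

    def dequantize(n, r):
        # Reconstruct at the midpoint of the decision interval.
        return (n + 0.5) * r

    print(quantize(7.3, 2.0))   # 3, since 3*2.0 <= 7.3 < 4*2.0
    print(dequantize(3, 2.0))   # 7.0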
Quantization is thus a many-to-one mapping that reduces the number of possible signal values at the cost of introducing errors. The simplest form of quantization (also used in all the compression standards) is scalar quantization (SQ), where each signal value is individually quantized.
The joint quantization of a block of
signal values is called vector quantization
(VQ). It has been theoretically shown that
the performance of VQ can get arbitrarily
close to the rate-distortion (R-D) bound by
increasing the block size.

IV. IMAGE COMPRESSION TECHNIQUES
Image compression techniques are broadly classified into two categories, depending on whether or not an exact replica of the original image can be reconstructed from the compressed image.
These are:
1. Lossless techniques
2. Lossy techniques
A. Lossless compression technique
In lossless compression techniques, the original image can be perfectly recovered from the compressed (encoded) image. These techniques are also called noiseless, since they do not add noise to the signal (image). Lossless compression is also known as entropy coding, since it uses statistical/decomposition techniques to eliminate or minimize redundancy. It is used only for a few applications with stringent requirements, such as medical imaging.

The following techniques are included in lossless compression:
1. Run length encoding
2. Huffman encoding
3. LZW coding
4. Area coding
1. Run Length Encoding
This is a very simple compression method used for sequential data. The technique replaces sequences of identical symbols (pixels), called runs, by shorter symbols. The run length code for a gray scale image is represented by a sequence {Vi, Ri}, where Vi is the intensity of a pixel and Ri is the number of consecutive pixels with the intensity Vi. For example:

82 82 82 82 82 89 89 89 89 90 90  ->  {82,5} {89,4} {90,2}

If both Vi and Ri are represented by one byte, this span of 11 pixels is coded using six bytes, yielding a compression ratio of about 1.8:1.
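A minimal sketch of run length encoding (our illustration), reproducing the {Vi, Ri} pairs above:

    def rle_encode(pixels):
        # Collapse each run of identical values into a (value, run length) pair.
        runs = []
        for p in pixels:
            if runs and runs[-1][0] == p:
                runs[-1][1] += 1
            else:
                runs.append([p, 1])
        return [tuple(run) for run in runs]

    print(rle_encode([82, 82, 82, 82, 82, 89, 89, 89, 89, 90, 90]))
    # [(82, 5), (89, 4), (90, 2)]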
2. Huffman Encoding
This is a general technique for coding
symbols based on their statistical occurrence
frequencies (probabilities). The pixels in the
image are treated as symbols. The symbols
that occur more frequently are assigned a
smaller number of bits, while the symbols
that occur less frequently are assigned a
relatively larger number of bits. Huffman
code is a prefix code. Most image coding
standards use lossy techniques in the earlier
stages of compression and use Huffman
coding as the final step.
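A compact sketch of Huffman code construction (our illustration; a real coder would also serialize the code table for the decoder):

    import heapq
    from collections import Counter

    def huffman_codes(symbols):
        # Repeatedly merge the two least frequent subtrees; each merge
        # prepends one bit, so frequent symbols end up with shorter codes.
        heap = [[freq, i, {sym: ""}]
                for i, (sym, freq) in enumerate(Counter(symbols).items())]
        heapq.heapify(heap)
        tie = len(heap)  # unique tie-breaker so dicts are never compared
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in c1.items()}
            merged.update({s: "1" + code for s, code in c2.items()})
            heapq.heappush(heap, [f1 + f2, tie, merged])
            tie += 1
        return heap[0][2]

    print(huffman_codes("aaaabbc"))  # {'c': '00', 'b': '01', 'a': '1'} (codes depend on tie-breaking)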

3. LZW Coding
LZW (Lempel-Ziv-Welch) is a dictionary-based coding. Dictionary-based coding can be static or dynamic. In static dictionary coding, the dictionary is fixed during the encoding and decoding processes. In dynamic dictionary coding, the dictionary is updated on the fly. LZW is widely used in the computer industry and is implemented as the compress command on UNIX.
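A minimal sketch of the dynamic-dictionary idea (our illustration; the dictionary starts with all single bytes and grows during encoding):

    def lzw_encode(data):
        # Dictionary starts with all single characters; longer strings are
        # added on the fly as they are encountered.
        dictionary = {chr(i): i for i in range(256)}
        w, out = "", []
        for ch in data:
            if w + ch in dictionary:
                w += ch
            else:
                out.append(dictionary[w])
                dictionary[w + ch] = len(dictionary)  # new dictionary entry
                w = ch
        if w:
            out.append(dictionary[w])
        return out

    print(lzw_encode("ABABABA"))  # [65, 66, 256, 258]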

4. Area Coding
Area coding is an enhanced form of run length coding, reflecting the two-dimensional character of images. This is a significant advance over the other lossless methods. The algorithms for area coding try to find rectangular regions with the same characteristics. These regions are coded in a descriptive form as an element with two points and a certain structure. This type of coding can be highly effective, but it bears the problem of being a nonlinear method, which cannot be implemented in hardware. Therefore, the performance in terms of compression time is not competitive, although the compression ratio is.

B. Lossy compression technique
Lossy schemes provide much higher compression ratios than lossless schemes. They are widely used, since the quality of the reconstructed images is adequate for most applications. With this scheme, the decompressed image is not identical to the original image, but is reasonably close to it.
Major performance considerations of a lossy compression scheme include:
1. Compression ratio
2. Signal-to-noise ratio
3. Speed of encoding and decoding
Lossy compression techniques include the following schemes:
1. Transformation coding
2. Vector quantization
3. Fractal coding
4. Block Truncation Coding
5. Sub band coding

1. Transformation Coding
In this coding scheme, transforms
such as DFT (Discrete Fourier Transform)
and DCT (Discrete Cosine Transform) are
used to change the pixels in the original
image into frequency domain coefficients
(called transform coefficients). The selected
coefficients are considered for further
quantization and entropy encoding. DCT
coding has been the most common approach
to transform coding. It is also adopted in the
JPEG image compression standard.

2. Vector Quantization
The basic idea in this technique is to develop a dictionary of fixed-size vectors, called code vectors. A vector is usually a block of pixel values. A given image is then partitioned into non-overlapping blocks (vectors) called image vectors. Then, for each image vector, the closest matching code vector in the dictionary is determined and its index in the dictionary is used as the encoding of the original image vector. Thus, each image is represented by a sequence of indices that can be further entropy coded.
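A sketch of the encoding step (our illustration; it assumes a codebook has already been trained, e.g. with the LBG/k-means algorithm, which is not shown):

    import numpy as np

    def vq_encode(image_vectors, codebook):
        # For each image vector, store the index of the nearest code vector.
        return [int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))
                for v in image_vectors]

    def vq_decode(indices, codebook):
        # The decoder is a simple table lookup.
        return [codebook[i] for i in indices]

    codebook = np.array([[0, 0, 0, 0], [128, 128, 128, 128], [255, 255, 255, 255]])
    print(vq_encode(np.array([[10, 5, 0, 7], [250, 240, 255, 251]]), codebook))  # [0, 2]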

3. Fractal Coding
The essential idea here is to decompose the image into segments by using standard image processing techniques such as color separation, edge detection, and spectrum and texture analysis, and then to look each segment up in a library of fractals. The library actually contains codes called iterated function system (IFS) codes, which are compact sets of numbers. This scheme is highly effective for compressing images that have good regularity and self-similarity.


4. Block truncation coding
In this scheme, the image is divided into non-overlapping blocks of pixels. For each block, a threshold and reconstruction values are determined. The threshold is usually the mean of the pixel values in the block. A bitmap of the block is then derived by replacing all pixels whose values are greater than or equal to the threshold by 1, and all others by 0. Then, for each segment (group of 1s or 0s) in the bitmap, the reconstruction value is determined as the average of the values of the corresponding pixels in the original block.
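A sketch of this scheme for a single block (our illustration of the simple mean-threshold variant; moment-preserving variants choose the two values differently):

    import numpy as np

    def btc_encode(block):
        # Threshold at the block mean; keep a bitmap plus two reconstruction
        # values: the means of the pixels above and below the threshold.
        threshold = block.mean()
        bitmap = block >= threshold
        high = block[bitmap].mean()
        low = block[~bitmap].mean() if (~bitmap).any() else high
        return bitmap, low, high

    def btc_decode(bitmap, low, high):
        return np.where(bitmap, high, low)

    block = np.array([[10, 200], [30, 220]])
    print(btc_decode(*btc_encode(block)))  # [[ 20. 210.] [ 20. 210.]]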




5. Sub band coding
In this scheme, the image is analyzed to produce components containing frequencies in well-defined bands, the sub bands. Subsequently, quantization and coding are applied to each of the bands. The advantage of this scheme is that the quantization and coding best suited to each sub band can be designed separately.
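A minimal sketch of a one-level, two-band split (our illustration, using the Haar filters; the signal length is assumed even):

    import numpy as np

    def haar_split(signal):
        # Low band: scaled pairwise averages; high band: scaled pairwise
        # differences. Each band can then be quantized and coded separately.
        s = np.asarray(signal, dtype=float)
        low = (s[0::2] + s[1::2]) / np.sqrt(2)
        high = (s[0::2] - s[1::2]) / np.sqrt(2)
        return low, high

    low, high = haar_split([10, 12, 100, 104])
    print(low, high)  # low ~ [15.56, 144.25], high ~ [-1.41, -2.83]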

V. APPLICATION TO COLOR IMAGE COMPRESSION
We apply the above transform matrix in a standard JPEG baseline encoder. The quantization operation is applied after transformation using the proposed matrix; the diagonal terms of the matrix can be merged into the quantizer.

VI. CONCLUSION
In this paper, we proposed a sparse matrix transform for color image compression, together with a fast algorithm for its computation. The proposed transform is based on integers, and the matrix is made sufficiently sparse. In future work, we plan to reduce the computation required for the sparse matrix transform, apply entropy coding to various test images, and enable quality scalability by simply truncating the generated bit stream while preserving rate-distortion performance. The transform is also suitable for fast VLSI implementation.

