
Abstract

In this project, an image compression technique is developed using both lossy
and lossless image compression. The lossy image compression technique used here
is the 2D-DWT technique. It uses the Haar wavelet for image compression. The level
of decomposition of the image is decided by the required compression ratio. The lossless
image compression technique used here is the FELICS algorithm. It uses adjusted
binary coding and Golomb-Rice coding for lossless image compression. It is a
VLSI-oriented algorithm.
The technique which we have developed has an encoder and a decoder. The
encoder takes the input image and performs the 2D-DWT on the image up to the
desired level, decided by the required compression ratio. The 2D-DWT block gives
only the LL, i.e. approximate, band to the FELICS encoder, which further compresses
the data block in a lossless manner. This compressed data block is then given to the
decoder. The decoder has a FELICS decoder and a 2D-IDWT block. The FELICS
decoder reconstructs the LL band and gives it to the 2D-IDWT block. The 2D-IDWT
block appends the other bands, which are set to zero, to the LL band and finally the
image is reconstructed.
The quality of images obtained using this technique is compared using various
image quality metrics such as Compression Ratio (CR), Mean Square Error
(MSE), Peak Signal to Noise Ratio (PSNR), Normalized Cross-Correlation (NCC),
Average Difference (AD), Structural Content (SC) and Normalized Absolute Error
(NAE). The study of the technique and analysis of results is done using MATLAB.
Acknowledgment
I am pleased to present this dissertation report entitled A Fast, Efficient,
Lossless Image Compression System to my college as part of academic activity.
I would like to express my deep sense of gratitude to my guide Prof. S. N.
Kore for his valuable guidance, encouragement and kind co-operation throughout
the dissertation work. I feel proud to present my dissertation under his guidance.
I am thankful to Dr. Mrs. S. S. Deshpande and Dr. Mrs. A. A. Aghashe for
their encouragement and support. I am also thankful to all the teaching staff and
non-teaching staff for their co-operation in completing my dissertation work. Last
but not least, I am very thankful to all my friends, my parents and those who
helped me directly or indirectly throughout this dissertation work.
Akshay Bhosale
WCE, Sangli
Contents
Abstract i
Acknowledgments ii
List of Figures vi
List of Tables vii
1 Introduction 7
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Problem Description . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Organization of Report . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 Literature Survey and Related Work 9
2.1 JPEG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 JPEG 2000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Lossless JPEG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 JPEG-LS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 FELICS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3 Wavelet Transform and its Application 15
3.1 Method of applying transform . . . . . . . . . . . . . . . . . . . . . 15
3.2 Definition: One level of the transform . . . . . . . . . . . . . . . 16
3.3 The Haar wavelet . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.4 Haar transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4 Background of FELICS Algorithm 19
4.1 Adjusted Binary Code . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2 Coding Example of Adjusted Binary Coding . . . . . . . . . . . . . 22
4.3 Golomb-Rice Code . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.4 Coding Example of Golomb-Rice Coding . . . . . . . . . . . . . . . 24
4.5 Coding Flow of FELICS Algorithm . . . . . . . . . . . . . . . . . . 25
4.6 General Example of Coding . . . . . . . . . . . . . . . . . . . . . . 25
5 2D-DWT - FELICS Algorithm 29
5.1 Encoder of Proposed Technique . . . . . . . . . . . . . . . . . . . . 29
5.1.1 2D-DWT Block . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.1.2 FELICS Encoder . . . . . . . . . . . . . . . . . . . . . . . . 31
5.2 Decoder of Proposed Technique . . . . . . . . . . . . . . . . . . . . 31
5.2.1 FELICS Decoder . . . . . . . . . . . . . . . . . . . . . . . . 31
5.2.2 2D-IDWT Block . . . . . . . . . . . . . . . . . . . . . . . . 32
5.3 Modified Image Template . . . . . . . . . . . . . . . . . . . . . 33
6 Results Analysis 35
6.1 Image Quality Parameters . . . . . . . . . . . . . . . . . . . . . . . 35
6.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
7 Conclusion and Future Scope 44
7.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.2 Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
References 46
Publication 49
List of Figures
1 General block diagram of Lossless image compression system . . . . 3
2 Illustration of prediction template in FELICS . . . . . . . . . . . . 4
3 Flowchart for the FELICS Algorithm . . . . . . . . . . . . . . . . . 4
4 Probability distribution model in FELICS . . . . . . . . . . . . . . 5
2.1 DPCM encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Block diagram of Lossless JPEG . . . . . . . . . . . . . . . . . . . . 12
2.3 Block diagram lossless image compression technique . . . . . . . . . 13
3.1 The Haar wavelet . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 (a) Original image (b) Image after 1-level decomposition (c) Image
after 2-level decomposition . . . . . . . . . . . . . . . . . . . . . . . 18
4.1 Illustration of prediction template used in FELICS . . . . . . . . . 19
4.2 Probability distribution model of intensity in FELICS . . . . . . . . 20
4.3 Main flowchart for FELICS Algorithm . . . . . . . . . . . . . . . . 20
4.4 Flowchart for Adjusted Binary Codes . . . . . . . . . . . . . . . . . 21
4.5 Reference pixels and Current pixel adjusted binary code example . . 22
4.6 Flowchart for Golomb-Rice Codes . . . . . . . . . . . . . . . . . . . 23
4.7 Reference pixels and Current pixel for Golomb-Rice code example . 24
4.8 Pixels in Case 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.9 Pixels in Case 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.10 Pixels in Case 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.11 Pixels in Case 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.1 Block Diagram of Encoder . . . . . . . . . . . . . . . . . . . . . . . 29
5.2 Block Diagram of 2D-DWT Block . . . . . . . . . . . . . . . . . . . 30
5.3 Block Diagram of FELICS Encoder . . . . . . . . . . . . . . . . . . 31
5.4 Block Diagram of Decoder . . . . . . . . . . . . . . . . . . . . . . . 31
5.5 Block Diagram of FELICS Decoder . . . . . . . . . . . . . . . . . . 32
5.6 Block Diagram of 2D-IDWT Block . . . . . . . . . . . . . . . . . . 32
5.7 Original prediction template used in FELICS . . . . . . . . . . . . . 33
5.8 Modified prediction template used in FELICS . . . . . . . . . . . . 33
6.1 Lenna image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS . . . . . . . . . . . . . . 38
6.2 Baboon image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS . . . . . . . . . . . . . . 39
6.3 Bridge image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS . . . . . . . . . . . . . . 40
6.4 Boat image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS . . . . . . . . . . . . . . 41
6.5 Medical image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS . . . . . . . . . . . . . . 42
6.6 Satellite image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS . . . . . . . . . . . . . . 43
List of Tables
4.1 Pixels and corresponding Codewords . . . . . . . . . . . . . . . . . 28
6.1 Results for Lenna image . . . . . . . . . . . . . . . . . . . . . . . . 38
6.2 Results for Baboon image . . . . . . . . . . . . . . . . . . . . . . . 39
6.3 Results for Bridge image . . . . . . . . . . . . . . . . . . . . . . . . 40
6.4 Results for Boat image . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.5 Results for Medical image . . . . . . . . . . . . . . . . . . . . . . . 42
6.6 Results for Satellite image . . . . . . . . . . . . . . . . . . . . . . . 43
1. Name of Student : Mr. Bhosale Akshay Gajanan
2. Name of Course : M.Tech. in Electronics Engineering
3. Date of Registration : August, 2010.
4. Name of Guide : Mr. S.N.Kore
5. Proposed Title of Dissertation : Fast, Efficient, Lossless
Image Compression System.
6. Synopsis of the Work :
A) Relevance and Problem Definition
Relevance
Due to the great innovation of display and information technology, the stringent
requirement of data capacity has drastically increased in human life. This trend
makes a significant impact on the evolution of storage and communication. The
data compression technique is extensively applied to offer an acceptable solution
for this scenario. Some images, like satellite images or medical images, have very
high resolution. Such high-resolution images have large file sizes, and the computation
time required to process such high-quality images is also large. Hence compression of
images and video has become the need of the hour. An image can be compressed using
lossy or lossless compression techniques. In the lossy image compression technique,
the reconstructed image is not exactly the same as the original image. Lossless
image compression can remove redundant information and guarantee that the
reconstructed image is without any loss with respect to the original image. Different
image compression techniques are suggested by researchers, but a technique with
high data compression and low loss is always preferred. Because of the advancement
of the Internet, the world has come very close and can afford and avail services
such as medicine, tourism, education etc. remotely. Data compression is the key to
providing such fast and efficient communication. It has made a large impact on the
service sector in providing the best services to all sections of society. High coding
efficiency is the measurement parameter for the performance of a data compression
system.
Problem Definition
Development of a Fast, Efficient, Lossless Image Compression System (FELICS)
for image data.
B) Present Theories
Using present techniques, we can compress an image either by using lossy or
lossless compression algorithms. For lossy compression, many sophisticated
standards have been intensively developed, such as JPEG [1] and JPEG 2000 [2] for
still images, and MPEG-4 and H.264 for multimedia communications and high-end
video applications, respectively. Many articles have put more effort into related VLSI
architecture designs [3]. Therefore, both algorithm and hardware implementation
have attracted massive research effort for the evolution of lossy compression
techniques. Lossless compression can remove redundant information and guarantee
that the reconstruction procedure is without any loss with respect to the original
information. This ensures that the decoded information is exactly identical to the
original information. According to the coding principle of a lossless compression
technique, it can be categorized into two fields: dictionary-based and prediction-based.
In dictionary-based methods, frequently occurring and repetitive patterns are assigned
a shorter codeword. The less efficient codewords are assigned to the others. Based
on this principle, the codeword table should be constructed to provide the fixed
mapping relationship. Many famous methods, including Huffman coding [4], run-length
coding, arithmetic coding, and LZW [5], have been widely developed, and
some of them are further applied in lossy compression standards.
C) Introduction
While the dictionary-based algorithm exploits an almost identical mapping
relationship, the prediction technique is utilized to improve coding efficiency.
Prediction-based algorithms apply a prediction technique to generate the residual,
and utilize an entropy coding tool to encode it. Many methods, including the fast,
efficient, lossless image compression system (FELICS) [6], context-based, adaptive,
lossless image coding (CALIC) [7] and JPEG-LS, have been extensively developed in
this field. Among these methods, JPEG-LS presents better performance and is
further adopted as the lossless/near-lossless standard, but it possesses serious data
dependency and a complex coding procedure that limit the hardware performance
in high-throughput applications. The fast, efficient, lossless image compression
system (FELICS) algorithm, which consists of the simplified adjusted binary code
and Golomb-Rice code with storage-less k parameter selection, is proposed to
provide a lossless compression method for high-throughput applications. The
simplified adjusted binary code reduces the number of arithmetic operations and
improves processing speed. According to theoretical analysis, the storage-less k
parameter selection applies a fixed k value in the Golomb-Rice code to remove data
dependency and the extra storage for the cumulation table. Besides, colour difference
pre-processing is also proposed to improve coding efficiency with simple arithmetic
operations. FELICS [6], proposed by P. G. Howard and J. S. Vitter in 1993,
is a lossless compression algorithm with the advantage of a fast and efficient coding
principle. Furthermore, FELICS presents competitive coding efficiency in comparison
with other sophisticated lossless compression algorithms.
D) The Proposed Work
Most lossless image compression methods consist of four main components, as
shown in Fig. 1: a selector, a predictor, an error modeller and a statistical coder.
Pixel Selector: A selector is used to choose the next pixel which is to be
encoded from the image data.
Figure 1: General block diagram of Lossless image compression system
Intensity Predictor: A predictor is used to estimate the intensity of the current
pixel depending on the intensities of the two reference pixels.
Error Modeller: It is used to estimate the distribution of the prediction error.
Statistical Coder: It is used to code the prediction error using the error
distribution. By using an appropriate pixel sequence we can obtain a progressive
encoding, and by using sophisticated prediction and error modelling techniques
in conjunction with arithmetic coding we can obtain state-of-the-art compression
efficiency. These techniques are computation intensive. FELICS is a simple
system for lossless image compression that runs very fast with only minimal loss
of compression efficiency [10]. In this algorithm raster-scan order is used, and a
pixel's two nearest neighbours are used to directly obtain an approximate probability
distribution for its intensity, in effect combining the prediction and error
modelling steps.
FELICS utilizes two reference pixels around the current pixel to yield the
prediction template as shown in Fig. 2, and it can be divided into four cases. In
case 1, since surrounding reference pixels are not available for the first two pixels,
P1 and P2, both current pixels are directly packed into the bit stream with their
original pixel intensity. For case 2, successive pixels, N1 and N2, are regarded as
reference pixels for the current pixel P5. For rows other than the first, cases 3 and 4
clearly define the relationship between the current pixel and the reference pixels.
Figure 2: Illustration of prediction template in FELICS
Between N1 and N2, the
smaller reference pixel is represented as L, and the other one as H. As in Fig. 4, the
intensity distribution model is exploited to predict the correlation between the
current pixel and the reference pixels. In this model, the intensity that occurs
between L and H is assumed to have an almost uniform distribution, and is regarded
as in-range. The intensities higher than H or smaller than L are regarded as above
range and below range, respectively. For in-range, the adjusted binary code is
adopted, and the Golomb-Rice code is used for both above range and below range [10].
Figure 3: Flowchart for the FELICS Algorithm
1) Adjusted Binary Code
Fig. 4 shows that the adjusted binary code is adopted for in-range, where
the intensity of the current pixel is between H and L. For in-range, the probability
distribution is slightly higher in the middle section and lower in both side sections.
Therefore, the adjusted binary code assigns the shorter codeword to the middle
section, and the longer one to both side sections.
Figure 4: Probability distribution model in FELICS
To describe the coding flow of the adjusted binary code, the coding parameters
should first be declared as follows:
delta = H - L
range = delta + 1
upper bound = ceil(log2(range))
lower bound = floor(log2(range))
threshold = 2^(upper bound) - range
shift number = (range - threshold) / 2
The adjusted binary code takes the sample P - L to be encoded, and range
indicates the number of possible samples to be encoded for a given delta.
The upper bound and lower bound denote the maximum and minimum number
of bits to represent the codeword for each sample, respectively. In particular, the
lower bound is identical to the upper bound when the range is exactly equal to a
power of two. The threshold and shift number are utilized to determine which
sample should be encoded with upper-bound bits or lower-bound bits.
Example: If delta = 4, the possible samples lie in [0, 4]. According to the coding
parameters, the required number of bits is 2 for the lower bound and 3 for the upper
bound. With the intensity distribution of in-range, 2 bits are allocated for the middle
section, including the samples [1, 2, 3], and 3 bits for the side section, including the
samples [0, 4]. For the possible samples of P - L with delta = 4:
range = delta + 1 = 4 + 1 = 5
upper bound = ceil(log2(range)) = ceil(log2(5)) = 3
lower bound = floor(log2(range)) = floor(log2(5)) = 2
threshold = 2^(upper bound) - range = 2^3 - 5 = 8 - 5 = 3
shift number = (range - threshold) / 2 = (5 - 3) / 2 = 1
2) Golomb-Rice Code
For both above range and below range, the probability distribution sharply
varies with an exponential decay rate, and the efficient codeword should be more
intensively assigned to the intensities with high probability. Therefore, the Golomb-Rice
code is adopted as the coding tool for both above range and below range.
With the Golomb-Rice code, the codeword of a sample x is partitioned into unary
and binary parts.
Golomb-Rice code:
Unary part: floor(x / 2^k)
Binary part: x mod 2^k
where k is a positive integer.
The entire codeword is the concatenation of the unary part and the binary part, and
one bit is inserted between both for identification. Therefore, the Golomb-Rice code
is a special case of the Golomb code whose divisor is exactly a power of two, which
is efficient for hardware implementation. The selection procedure of the k parameter
induces serious data dependency and consumes considerable storage capacity.
The resulting compressor runs about five times as fast as an implementation of
the lossless mode of the proposed JPEG standard while obtaining slightly better
compression on many images.
E) Facilities Available : Library, Computer Lab, Internet etc.
F) Estimated Cost : 10,000/-(Approx.)
G) Expected Date of Completion: July, 2012.
Akshay Gajanan Bhosale Prof. Mr. S.N.Kore
Student Guide
Dr.Mrs.S.S.Deshpande
H.O.D.
Electronics Department
Walchand College of Engg., Sangli.
Chapter 1
Introduction
1.1 Motivation
Due to the great innovation of display and information technology, the stringent
requirement of data capacity has drastically increased in human life. This trend
makes a significant impact on the evolution of storage and communication. The
data compression technique is extensively applied to offer an acceptable solution
for this scenario. Some images, like satellite images or medical images, have very
high resolution. Such high-resolution images have large file sizes, and the computation
time required to process such high-quality images is also large. Hence compression of
images and video has become the need of the hour. An image can be compressed using
lossy or lossless compression techniques. In the lossy image compression technique,
the reconstructed image is not exactly the same as the original image. Lossless
image compression can remove redundant information and guarantee that the
reconstructed image is without any loss with respect to the original image. Different
image compression techniques are suggested by researchers, but a technique with
high data compression and low loss is always preferred.
Because of the advancement of the Internet, the world has come very close and can
afford and avail services such as medicine, tourism, education etc. remotely.
Data compression is the key to providing such fast and efficient communication. It
has made a large impact on the service sector in providing the best services to all
sections of society. High coding efficiency is the measurement parameter for the
performance of a data compression system.
1.2 Problem Description
Development of an image compression technique which provides an improved
compression ratio and also maintains the quality of the image. This technique is to
be developed using a combination of both lossy and lossless techniques. The lossy
technique used here will be the 2D-DWT and the lossless technique will be the
FELICS algorithm.
1.3 Organization of Report
The dissertation report is organized into seven chapters. Chapter 1 consists of
the motivation behind the work and the problem description. Chapter 2 provides a
literature survey which describes the present techniques of both lossy and lossless
image compression.
Chapter 3 describes the lossy 2D-DWT technique and its scope in the proposed
technique. Chapter 4 describes the background of the lossless FELICS algorithm.
Chapter 5 describes the developed method using the 2D-DWT and the FELICS
algorithm for image compression. Chapter 6 includes the results analysis. Chapter 7
includes the conclusion, future scope and the references.
Chapter 2
Literature Survey and Related
Work
Various Methods for Image Compression
2.1 JPEG
The term JPEG is an acronym for the Joint Photographic Experts Group
which created the standard. The JPEG compression algorithm is at its best on
photographs and paintings of realistic scenes with smooth variations of tone and
color. For web usage, where the amount of data used for an image is important,
JPEG is very popular [1].
JPEG has been working to establish the first international compression standard
for continuous-tone still images, both grayscale and color. JPEG's proposed
standard aims to be generic, to support a wide variety of applications for
continuous-tone images. To meet the differing needs of many applications, the JPEG
standard includes two basic compression methods, each with various modes of
operation. A DCT-based method is specified for lossy compression, and a predictive
method for lossless compression. JPEG features a simple lossy technique known
as the Baseline method, a subset of the other DCT-based modes of operation.
As the typical use of JPEG is as a lossy compression method, which somewhat
reduces the image fidelity, it should not be used in scenarios where the exact
reproduction of the data is required (such as some scientific and medical imaging
applications and certain technical image processing work). JPEG is also not well
suited to files that will undergo multiple edits, as some image quality will usually
be lost each time the image is decompressed and recompressed, particularly if the
image is cropped or shifted, or if encoding parameters are changed. The compression
method is usually lossy, meaning that some original image information is lost
and cannot be restored, possibly affecting image quality.
JPEG is a commonly used method of lossy compression for digital photography
(images). The degree of compression can be adjusted, allowing a selectable trade-off
between storage size and image quality. JPEG typically achieves 10:1 compression
with little perceptible loss in image quality.
JPEG codec example: Although a JPEG file can be encoded in various ways,
most commonly it is done with JFIF encoding. The encoding process consists of
several steps:
1. The representation of the colors in the image is converted from RGB to
YCbCr, consisting of one luma component (Y), representing brightness,
and two chroma components (Cb and Cr), representing color. This step is
sometimes skipped.
2. The resolution of the chroma data is reduced, usually by a factor of 2. This
reflects the fact that the eye is less sensitive to fine color details than to fine
brightness details.
3. The image is split into blocks of 8×8 pixels, and for each block, each of the Y,
Cb, and Cr data undergoes a discrete cosine transform (DCT). A DCT is
similar to a Fourier transform in the sense that it produces a kind of spatial
frequency spectrum.
4. The amplitudes of the frequency components are quantized. Human vision
is much more sensitive to small variations in color or brightness over large
areas than to the strength of high-frequency brightness variations. Therefore,
the magnitudes of the high-frequency components are stored with a lower
accuracy than the low-frequency components. The quality setting of the
encoder affects to what extent the resolution of each frequency component
is reduced. If an excessively low quality setting is used, the high-frequency
components are discarded altogether.
5. The resulting data for all 8×8 blocks is further compressed with a lossless
algorithm, a variant of Huffman encoding.
The decoding process reverses these steps, except for the quantization, which
is irreversible.
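As a rough illustration of steps 3 and 4, the following Python sketch applies a 2-D
DCT to one 8×8 block and quantizes the coefficients. It is an illustration only, not
part of this work; the function name and the single uniform quantization step q_step
are assumptions, since the actual JPEG standard uses a full 8×8 quantization table.

    import numpy as np
    from scipy.fftpack import dct

    def jpeg_block_forward(block, q_step=16):
        # block: one 8x8 array of luminance samples in the range 0..255
        shifted = block.astype(float) - 128.0              # level shift
        # 2-D DCT, applied along rows and then columns (step 3)
        coeffs = dct(dct(shifted, type=2, norm='ortho', axis=0),
                     type=2, norm='ortho', axis=1)
        # simplified uniform quantization of the coefficients (step 4)
        return np.round(coeffs / q_step).astype(int)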
2.2 JPEG 2000
JPEG 2000 is an image compression standard and coding system. While there
is a modest increase in the compression performance of JPEG 2000 compared to
JPEG, the main advantage offered by JPEG 2000 is the significant flexibility of the
codestream [2]. The codestream obtained after compression of an image with JPEG
2000 is scalable in nature, meaning that it can be decoded in a number of ways;
for instance, by truncating the codestream at any point, one may obtain a
representation of the image at a lower resolution, or signal-to-noise ratio. By ordering
the codestream in various ways, applications can achieve significant performance
increases.
However, as a consequence of this flexibility, JPEG 2000 requires encoders and
decoders that are complex and computationally demanding. Another difference,
in comparison with JPEG, is in terms of visual artifacts: JPEG 2000 produces
ringing artifacts, manifested as blur and rings near edges in the image, while JPEG
produces both ringing artifacts and blocking artifacts, due to its 8×8 blocks.
Features
1. Superior compression performance: at high bit rates, where artifacts become
nearly imperceptible, JPEG 2000 has a small machine-measured fidelity
advantage over JPEG. At lower bit rates, JPEG 2000 has a much more
significant advantage over certain modes of JPEG: artifacts are less visible and
there is almost no blocking. The compression gains over JPEG are attributed
to the use of the DWT and a more sophisticated entropy encoding scheme.
2. Multiple resolution representation: JPEG 2000 decomposes the image into
a multiple resolution representation in the course of its compression process.
This representation can be put to use for other image presentation purposes
beyond compression as such.
3. Progressive transmission by pixel and resolution accuracy, commonly referred
to as progressive decoding and signal-to-noise ratio (SNR) scalability:
JPEG 2000 provides efficient code-stream organizations which are progressive
by pixel accuracy and by image resolution (or by image size). This
way, after a smaller part of the whole file has been received, the viewer can
see a lower quality version of the final picture. The quality then improves
progressively through downloading more data bits from the source.
4. Lossless and lossy compression: Like JPEG, the JPEG 2000 standard pro-
vides both lossless and lossy compression in a single compression architec-
ture. Lossless compression is provided by the use of a reversible integer
wavelet transform in JPEG 2000.
5. Random code-stream access and processing, also referred to as Region Of
Interest (ROI): JPEG 2000 code streams offer several mechanisms to support
spatial random access or region-of-interest access at varying degrees of
granularity. This way it is possible to store different parts of the same picture
at different quality.
6. Error resilience: JPEG 2000 is robust to bit errors introduced by noisy
communication channels, due to the coding of data in relatively small inde-
pendent blocks.
2.3 Lossless JPEG
Lossless JPEG was developed as a late addition to JPEG in 1993, using a
completely different technique from the lossy JPEG standard. It uses a predictive
scheme based on the three nearest (causal) neighbors (upper, left, and upper-left),
and entropy coding is used on the prediction error.
Lossless mode of operation: Lossless JPEG is actually a mode of operation of
JPEG. This mode exists because the Discrete Cosine Transform (DCT) based
form cannot guarantee that encoder input will exactly match decoder output,
since the inverse DCT is not rigorously defined. Unlike the lossy mode, which
is based on the DCT, the lossless coding process employs a simple predictive
coding model called differential pulse code modulation (DPCM). This is a model
in which predictions of the sample values are estimated from the neighboring
samples that are already coded in the image. Most predictors take the average
of the samples immediately above and to the left of the target sample. DPCM
encodes the differences between the predicted samples instead of encoding each
sample independently. The differences from one sample to the next are usually
close to zero.
Figure 2.1: DPCM encoder
A typical DPCM encoder is displayed in Fig. 2.1. The block in the figure acts
as storage for the current sample, which will later be a previous sample.
Figure 2.2: Block diagram of Lossless JPEG
The main steps of the lossless operation mode are depicted in Fig. 2.2. The three
neighboring samples must already have been coded. Once all the samples are
predicted, the differences between the samples can be obtained and entropy-coded
in a lossless fashion using Huffman coding or arithmetic coding.
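A minimal sketch of this DPCM prediction step is given below; it is illustrative
Python only, assuming the average-of-neighbours predictor described above. The
function name is ours and the border handling is simplified.

    import numpy as np

    def dpcm_residuals(img):
        x = img.astype(int)
        pred = np.zeros_like(x)
        # interior: average of the sample above and the sample to the left
        pred[1:, 1:] = (x[1:, :-1] + x[:-1, 1:]) // 2
        pred[0, 1:] = x[0, :-1]        # first row: predict from the left neighbour
        pred[1:, 0] = x[:-1, 0]        # first column: predict from the sample above
        return x - pred                # differences to be entropy-coded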
2.4 JPEG-LS
JPEG-LS is a simple and efficient baseline algorithm which consists of two
independent and distinct stages called modeling and encoding. JPEG-LS was
developed with the aim of providing a low-complexity lossless and near-lossless
image compression standard that could offer better compression efficiency than
lossless JPEG. It was developed because, at the time, the Huffman coding-based
JPEG lossless standard and other standards were limited in their compression
performance. Total decorrelation cannot be achieved by the first-order entropy of
the prediction residuals employed by these inferior standards. JPEG-LS, on the
other hand, can obtain good decorrelation. The core of JPEG-LS is based on the
LOCO-I algorithm, which relies on prediction, residual modeling and context-based
coding of the residuals. Most of the low complexity of this technique comes from
the assumption that prediction residuals follow a two-sided geometric distribution
(also called a discrete Laplace distribution) and from the use of Golomb-like codes,
which are known to be approximately optimal for geometric distributions. Besides
lossless compression, JPEG-LS also provides a lossy mode (near-lossless) where
the maximum absolute error can be controlled by the encoder. Compression for
JPEG-LS is generally much faster than JPEG 2000 and much better than the
original lossless JPEG standard.
2.5 FELICS
Most lossless image compression methods in the literature consist of four main
components, as shown in Fig. 2.3: a selector to choose the next pixel to be
encoded, a predictor to estimate the intensity of the pixel, an error modeler to
estimate the distribution of the prediction error, and a statistical coder to code
the prediction error using the error distribution. By using an appropriate pixel
sequence we can obtain a progressive encoding, and by using sophisticated prediction
and error modeling techniques in conjunction with arithmetic coding we can
obtain state-of-the-art compression efficiency. These techniques are computation
intensive.
Figure 2.3: Block diagram of a lossless image compression technique
A simpler system for lossless image compression that runs very fast with only
minimal loss of compression efficiency was developed. This technique is called
FELICS, for Fast, Efficient, Lossless Image Compression System [3]. Here,
raster-scan order is used, and a pixel's two nearest neighbors are used to directly
obtain an approximate probability distribution for its intensity, in effect combining the
prediction and error modeling steps. A novel technique to select the closest of a
set of error models, each corresponding to a simple prefix code, is used. Finally, the
intensity is encoded using the selected prefix code. FELICS uses the adjusted
binary codes and Golomb-Rice codes for encoding. The resulting compressor runs
about five times as fast as an implementation of the lossless mode of the proposed
JPEG standard while obtaining slightly better compression on many images.
Chapter 3
Wavelet Transform and its
Application
Wavelet compression is a form of data compression well suited for image compression
(and sometimes also video compression and audio compression). The goal is
to store image data in as little space as possible in a file. Wavelet compression
can be either lossless or lossy.
Using a wavelet transform, the wavelet compression methods are adequate for
representing transients, such as high-frequency components in two-dimensional
images, for example an image of stars on a night sky. This means that the transient
elements of a data signal can be represented by a smaller amount of information
than would be the case if some other transform, such as the more widespread
discrete cosine transform, had been used.
Wavelet compression is not good for all kinds of data: transient signal char-
acteristics mean good wavelet compression, while smooth, periodic signals are
better compressed by other methods, particularly traditional harmonic compres-
sion (frequency domain, as by Fourier transforms and related). Data statistically
indistinguishable from random noise is not compressible by any means.
3.1 Method of applying transform
When a wavelet transform is applied to an image, it is first applied row-wise
and then applied to the generated data column-wise. This produces as many
coefficients as there are pixels in the image (i.e. there is no compression yet, since
it is only a transform). These coefficients can then be compressed more easily
because the information is statistically concentrated in just a few coefficients. This
principle is called transform coding. After that, the coefficients are quantized
and the quantized values are entropy encoded and/or run-length encoded. In
numerical analysis and functional analysis, a discrete wavelet transform (DWT) is
any wavelet transform for which the wavelets are discretely sampled. As with other
wavelet transforms, a key advantage it has over Fourier transforms is temporal
resolution: it captures both frequency and location information (location in time).
3.2 Definition: One level of the transform
The DWT of a signal x is calculated by passing it through a series of filters.
First the samples are passed through a low-pass filter with impulse response g,
resulting in a convolution of the two:

y[n] = (x * g)[n] = sum_{k=-inf}^{+inf} x[k] g[n - k]    (3.1)
The signal is also decomposed simultaneously using a high-pass filter h. The
outputs give the detail coefficients (from the high-pass filter) and the approximation
coefficients (from the low-pass filter). It is important that the two filters are related
to each other; together they are known as a quadrature mirror filter.
However, since half the frequencies of the signal have now been removed, half
the samples can be discarded according to Nyquist's rule. The filter outputs are
then subsampled by 2 (in Mallat's notation, and commonly, the convention is the
opposite: g denotes the high-pass and h the low-pass filter):
y_low[n]  = sum_{k=-inf}^{+inf} x[k] g[2n - k]    (3.2)

y_high[n] = sum_{k=-inf}^{+inf} x[k] h[2n - k]    (3.3)
This decomposition has halved the time resolution since only half of each filter
output characterises the signal. However, each output has half the frequency band
of the input, so the frequency resolution has been doubled.
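For the Haar filters used later in this work, equations (3.2) and (3.3) reduce to the
short Python sketch below. This is only an illustration under the assumption of an
even-length input; the function name is ours.

    import numpy as np

    def haar_dwt_1d(x):
        g = np.array([1.0, 1.0]) / np.sqrt(2.0)    # low-pass impulse response
        h = np.array([1.0, -1.0]) / np.sqrt(2.0)   # high-pass impulse response
        # convolve as in (3.1), then keep every second sample (subsample by 2)
        approx = np.convolve(x, g)[1::2]           # approximation coefficients, eq. (3.2)
        detail = np.convolve(x, h)[1::2]           # detail coefficients, eq. (3.3)
        return approx, detail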
3.3 The Haar wavelet
The Haar sequence was proposed in 1909 by Alfréd Haar. Haar used these
functions to give an example of a countable orthonormal system for the space of
square-integrable functions on the real line. As a special case of the Daubechies
wavelet, the Haar wavelet is also known as D2.
In mathematics, the Haar wavelet is a sequence of rescaled square-shaped
functions which together form a wavelet family or basis. Wavelet analysis is similar
to Fourier analysis in that it allows a target function over an interval to be
represented in terms of an orthonormal function basis. The Haar sequence is now
recognised as the first known wavelet basis and is extensively used as a teaching
example.
The Haar wavelet is also the simplest possible wavelet. The technical disadvantage
of the Haar wavelet is that it is not continuous, and therefore not
differentiable. This property can, however, be an advantage for the analysis of
signals with sudden transitions, such as monitoring of tool failure in machines.
For an input represented by a list of 2^n numbers, the Haar wavelet transform may
be considered to simply pair up input values, storing the difference and passing
the sum. This process is repeated recursively, pairing up the sums to provide the
next scale, finally resulting in 2^(n-1) differences and one final sum. It is used for
signal coding, to represent a discrete signal in a more redundant form, often as a
preconditioning for data compression.
Figure 3.1: The Haar wavelet
The Haar wavelet's mother wavelet function psi(t) can be described as

psi(t) =  1   for 0 <= t < 1/2,
         -1   for 1/2 <= t < 1,
          0   otherwise.

Its scaling function phi(t) can be described as

phi(t) =  1   for 0 <= t < 1,
          0   otherwise.
Haar matrix
The 2×2 Haar matrix that is associated with the Haar wavelet is
H2 = [ 1   1
       1  -1 ]
Using the discrete wavelet transform, one can transform any sequence
(a_0, a_1, ..., a_{2n}, a_{2n+1}) of even length into a sequence of two-component
vectors ((a_0, a_1), ..., (a_{2n}, a_{2n+1})). If one right-multiplies each vector
with the matrix H2, one gets the result ((s_0, d_0), ..., (s_n, d_n)) of one stage
of the fast Haar wavelet transform. Usually one separates the sequences s and d
and continues with transforming the sequence s.
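The pairing described above can be written in a few lines of Python; this is a
sketch using the unnormalised H2 given above, and the function name is ours.

    import numpy as np

    def haar_step(a):
        # one stage of the fast Haar wavelet transform on an even-length sequence:
        # pair up (a_0, a_1), (a_2, a_3), ... and right-multiply each pair by H2
        pairs = np.asarray(a, dtype=float).reshape(-1, 2)
        s = pairs[:, 0] + pairs[:, 1]   # sums s_i
        d = pairs[:, 0] - pairs[:, 1]   # differences d_i
        return s, d                     # the transform continues recursively on s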
3.4 Haar transform
The Haar transform is the simplest of the wavelet transforms. This trans-
form cross-multiplies a function against the Haar wavelet with various shifts and
stretches, like the Fourier transform cross-multiplies a function against a sine wave
with two phases and many stretches.
The Haar transform can be thought of as a sampling process in which rows of
the transformation matrix act as samples of finer and finer resolution.
The resulting decomposition after 1-level and 2-level is visible in Fig. 3.2.
Figure 3.2: (a) Original image (b) Image after 1-level decomposition (c) Image
after 2-level decomposition
The LL1 block in Fig. 3.2(b) has the approximate information after 1-level
decomposition, while the HH1 block has the detail information after 1-level
decomposition. Similarly, the LL2 block in Fig. 3.2(c) has the approximate
information after 2-level decomposition, while the HH2 block has the detail
information after 2-level decomposition.
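A minimal sketch of one 2-D Haar decomposition level, written in Python with
numpy, is given below. The band naming follows Fig. 3.2; the orthonormal
1/sqrt(2) scaling and the function name are our assumptions, not a reproduction
of the MATLAB code used in this work.

    import numpy as np

    def haar_dwt2(img):
        # assumes even image dimensions
        x = img.astype(float)
        s = np.sqrt(2.0)
        # filter and downsample along the rows
        lo = (x[:, 0::2] + x[:, 1::2]) / s
        hi = (x[:, 0::2] - x[:, 1::2]) / s
        # then along the columns, giving the four half-size bands
        LL = (lo[0::2, :] + lo[1::2, :]) / s   # approximate band
        LH = (lo[0::2, :] - lo[1::2, :]) / s
        HL = (hi[0::2, :] + hi[1::2, :]) / s
        HH = (hi[0::2, :] - hi[1::2, :]) / s   # detail band
        return LL, LH, HL, HH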
Chapter 4
Background of FELICS
Algorithm
The FELICS algorithm, proposed by P. G. Howard and J. S. Vitter in 1993,
is a lossless compression algorithm with the advantage of a fast and efficient coding
principle. In FELICS, three primary techniques, namely the intensity distribution
model, the adjusted binary code and the Golomb-Rice code, are incorporated to
construct the complete coding flow.
Figure 4.1: Illustration of prediction template used in FELICS
As shown in Fig. 4.1, FELICS utilizes two reference pixels around the current
pixel to yield the prediction template, and it can be divided into four cases. In
case 1, since surrounding reference pixels are not available for the first two pixels,
P1 and P2, both current pixels are directly packed into the bitstream with their
original pixel intensity. For case 2, successive pixels, N1 and N2, are regarded as
reference pixels for the current pixel P5. For rows other than the first, cases 3 and 4
clearly define the relationship between the current pixel and the reference pixels.
Between N1 and N2, the smaller reference pixel is represented as L, and the other
one as H. As depicted in Fig. 4.2, the intensity distribution model is exploited to
predict the correlation between the current pixel and the reference pixels. In this
model, the intensity that occurs
between L and H is assumed to have an almost uniform distribution, and is regarded
as in-range. The intensity higher than H or smaller than L is regarded as above
range or below range, respectively. For in-range, the adjusted binary code is adopted,
and the Golomb-Rice code is used for both above range and below range [5].
Figure 4.2: Probability distribution model of intensity in FELICS
Figure 4.3: Main owchart for FELICS Algorithm
Fig. 4.3 shows the main flowchart of the FELICS algorithm. Here, the first
two pixels are directly encoded because no reference pixels are available to those
pixels. Then the next pixel is encoded, taking the previous two pixels as reference
pixels. The greater of the two pixels is assigned to H, while the smaller one is
assigned to L. Now, if the value of the current pixel is between H and L, then
adjusted binary coding is used for encoding that pixel, else that pixel is encoded
by Golomb-Rice coding.
4.1 Adjusted Binary Code
Fig. 4.2 shows that the adjusted binary code is adopted for in-range, where
the intensity of the current pixel is between H and L. For in-range, the probability
distribution is slightly higher in the middle section and lower in both side sections.
Therefore, the adjusted binary code assigns the shorter codeword to the middle
section, and the longer one to both side sections.
To describe the coding flow of the adjusted binary code, the coding parameters
should first be declared as follows:
upperbound = (log
2
(range))
lowerbound = (log
2
(range))
threshold = 2
upperbound
range
shiftnumber =
range threshold
2
Figure 4.4: Flowchart for Adjusted Binary Codes
The adjusted binary code takes the sample P - L to be encoded, and range
indicates the number of possible samples to be encoded for a given delta.
The upper bound and lower bound denote the maximum and minimum number of
bits to represent the codeword for each sample, respectively. In particular, the
lower bound is identical to the upper bound when the range is exactly equal to a
power of two. The threshold and shift number are utilized to determine which
sample should be encoded with upper-bound or lower-bound bits. After computing
the various parameters required for the adjusted binary codes, the samples
are available in the range 0 to delta. If the samples are kept as they are, i.e. in
ascending order starting from zero, then the number of bits required to represent
a particular sample goes on increasing as the value of the sample increases.
If the residual has a smaller sample value, it can be encoded using fewer bits,
but if the residual has a larger sample value, it requires more bits. Thus, if an
image has smaller residual values, the image will be compressed with a higher
compression ratio. On the other hand, if the image has higher residual values,
the codewords required to represent such residuals need more bits and hence, for
such an image, instead of being compressed, the image will get expanded.
To avoid such a situation, the residuals having smaller as well as larger sample
values are shifted towards the centre by an amount equal to the shift number
calculated previously.
If delta = 4, the possible samples lie in [0, 4]. The required number of bits is 2
for the lower bound and 3 for the upper bound. With the intensity distribution
of in-range, 2 bits are allocated for the middle section, including the samples
[1, 2, 3], and 3 bits for the side section, including the samples [0, 4].
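The coding parameters above can be computed with a few lines of Python; this is
an illustrative sketch and the function name is ours. For delta = 4 it returns
range = 5, upper bound = 3, lower bound = 2, threshold = 3 and shift number = 1,
matching the example above.

    import math

    def adjusted_binary_params(delta):
        rng = delta + 1                      # number of possible samples of P - L
        upper = math.ceil(math.log2(rng))    # maximum codeword length in bits
        lower = math.floor(math.log2(rng))   # minimum codeword length in bits
        threshold = 2 ** upper - rng
        shift = (rng - threshold) // 2
        return rng, upper, lower, threshold, shift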
4.2 Coding Example of Adjusted Binary Coding
Figure 4.5: Reference pixels and Current pixel adjusted binary code example
Fig. 4.5 shows a block of pixels from other than the first row and left edge of the
image template, i.e. case 4 of the image template. The current pixel has the value
46; its neighboring pixels 40 and 47 are considered as the reference pixels.
Since the value of the current pixel is between the values of the two reference
pixels, adjusted binary coding is used for encoding the current pixel.
So here,
P = 46, N1 = 40, N2 = 47
where P is the current pixel and N1, N2 are the reference pixels.
Now, H = max(N1, N2) and L = min(N1, N2), so we have H = max(40, 47) = 47
and L = min(40, 47) = 40. The parameters required are computed as follows:
delta = H - L = 7
range = delta + 1 = 7 + 1 = 8
upper bound = ceil(log2(range)) = ceil(log2(8)) = 3
lower bound = floor(log2(range)) = 2
threshold = 2^(upper bound) - range = 2^3 - 8 = 8 - 8 = 0
shift number = (range - threshold) / 2 = (8 - 0) / 2 = 4
Samples:                                    0 1 2 3 4 5 6 7
Samples shifted by the shift number:        4 5 6 7 0 1 2 3
Threshold added to samples >= threshold:    4 5 6 7 0 1 2 3
The residual of the pixel to be encoded is x = P - L = 46 - 40 = 6.
Now the sample 6 is encoded as 2, so we have the binary equivalent of 2, namely
10, as the codeword. To indicate that the current pixel is encoded using adjusted
binary coding, another 1 is prefixed to the codeword. So the codeword becomes
110. The pixel 46 is encoded as 110.
4.3 Golomb-Rice Code
Figure 4.6: Flowchart for Golomb-Rice Codes
For both above range and below range, the probability distribution sharply
varies with an exponential decay rate, and the efficient codeword should be more
intensively assigned to the intensities with high probability. Therefore, the
Golomb-Rice code is adopted as the coding tool for both above range and below range.
With the Golomb-Rice code, the codeword of a sample x is partitioned into unary
and binary parts:
Golomb-Rice code:
Unary part:  floor(x / 2^k)
Binary part: x mod 2^k          (4.1)
where x is the value of the residual and k is a positive integer called the
Golomb-Rice coding parameter. The entire codeword is the concatenation of the
unary part and the binary part, and two bits are prefixed for identification.
Therefore, the Golomb-Rice code is a special case of the Golomb code whose divisor
is exactly a power of two, which is efficient for hardware implementation.
4.4 Coding Example of Golomb-Rice Coding
Figure 4.7: Reference pixels and Current pixel for Golomb-Rice code example
Fig. 4.7 shows a block of pixels from other than the first row and left edge of the
image template, i.e. case 4 of the image template. The current pixel has the value
183; its neighboring pixels 177 and 179 are considered as the reference pixels.
Since the value of the current pixel is not in between the values of the two reference
pixels, Golomb-Rice coding is used for encoding the current pixel.
So here, P = 183, N1 = 177, N2 = 179
where P is the current pixel and N1, N2 are the reference pixels.
Now, H = max(N1, N2) and L = min(N1, N2), so we have H = max(177, 179) = 179
and L = min(177, 179) = 177. The current pixel is above range since it is greater
than H, so we have to apply Golomb-Rice coding to encode it. The residual for
above range is given by x = P - H - 1 = 183 - 179 - 1 = 3.
The Golomb-Rice code has three parts: a) prefix code, b) unary part, c) binary
part.
In the prefix code the first bit is always 0 and the second bit depends on whether
the current pixel is above range or below range. The second bit is 1 for above range
and 0 for below range.
The second part of the code is the unary part:
unary part = floor(x / 2^k) = floor(3 / 2^2) = 0; 0 in unary is encoded as 0.
The third part is the binary part:
binary part = x mod 2^k = 3 mod 2^2 = 3; 3 in binary is encoded as 11.
So the complete codeword for the current pixel 183 becomes 01011.
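For illustration, the unary and binary parts can be generated with the Python
sketch below. This is an assumption of ours, not the thesis implementation; the
binary part is written with exactly k bits here, whereas some of the worked examples
in Section 4.6 drop leading zeros. With x = 3 and k = 2 it returns 011, which
together with the prefix 01 gives the codeword 01011 derived above.

    def golomb_rice(x, k):
        q = x >> k                                        # floor(x / 2**k)
        unary = '1' * q + '0'                             # q ones followed by a terminating 0
        binary = format(x & ((1 << k) - 1), '0%db' % k)   # x mod 2**k written with k bits
        return unary + binary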
4.5 Coding Flow of FELICS Algorithm
Based on the intensity distribution model, the adjusted binary code and the
Golomb-Rice code, the FELICS coding flow can be summarized in the following steps.
1. The first two pixels of the first row are directly packed into the bitstream
without any encoding procedure.
2. According to the prediction template in Fig. 4.1, find the corresponding two
reference pixels, N1 and N2.
3. Assign L = min(N1, N2), H = max(N1, N2) and delta = H - L.
4. Apply the adjusted binary code to P - L for in-range, the Golomb-Rice code to
L - P - 1 for below range, and the Golomb-Rice code to P - H - 1 for above
range.
Except for the first two pixels of the first row, all other pixels go directly through
steps 2 to 4. The entire coding flow can be performed in reverse as the decoding
flow. A sketch of this flow is given below.
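The following Python sketch traces these four steps over a grayscale image in
raster-scan order. It is an illustration under our assumptions: a fixed k = 2 as in
the earlier examples, a simplified stand-in for the adjusted binary code, and no real
bit-packing of the output.

    import math

    def golomb_rice(x, k):
        # repeated from the sketch in section 4.4
        return '1' * (x >> k) + '0' + format(x & ((1 << k) - 1), '0%db' % k)

    def felics_encode(img, k=2):
        rows, cols = len(img), len(img[0])        # assumes at least 2 columns
        out = []
        for r in range(rows):
            for c in range(cols):
                P = int(img[r][c])
                if r == 0 and c < 2:              # step 1: first two pixels
                    out.append(format(P, '08b'))  # packed raw (assumes 8-bit values)
                    continue
                # step 2: reference pixels N1, N2 from the prediction template
                if r == 0:                        # case 2: first row
                    N1, N2 = int(img[r][c-1]), int(img[r][c-2])
                elif c == 0:                      # case 3: left edge
                    N1, N2 = int(img[r-1][c]), int(img[r-1][c+1])
                else:                             # case 4: elsewhere
                    N1, N2 = int(img[r][c-1]), int(img[r-1][c])
                L, H = min(N1, N2), max(N1, N2)   # step 3
                if L <= P <= H:                   # step 4: in-range
                    # simplified stand-in for the adjusted binary code: plain binary
                    # of P - L written with ceil(log2(delta + 1)) bits
                    width = max(1, math.ceil(math.log2(H - L + 1)))
                    out.append('1' + format(P - L, '0%db' % width))
                elif P < L:                       # below range
                    out.append('00' + golomb_rice(L - P - 1, k))
                else:                             # above range
                    out.append('01' + golomb_rice(P - H - 1, k))
        return out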
4.6 General Example of Coding
There are different cases in the image template. Consider I to be a general 4×4
matrix, taken from an image, which is to be encoded. Since the image is a grayscale
image of bit depth 8, the pixel values range from 0 to 255. A value of 0 indicates a
black pixel and 255 indicates a white pixel.
I = [ 254 251 245 220
      244 220 244 200
      250 210 220 238
      200 222 219 180 ]
Case 1
Figure 4.8: Pixels in Case 1
In case 1, the two pixels at the top left corner are selected. According to the image
template in Fig. 5.7, the first two pixels, i.e. 254 and 251, are directly encoded using
8 bits and are sent without any encoding procedure. So the codewords for 254 and
251 are 11111110 and 11111011 respectively.
Case 2
Now, according to case 2 of the image template in Fig. 5.7, 245 is the current
pixel P and the previous two pixels are the reference pixels N1 and N2. The circled
pixel is the current pixel and, since it is in the first row, it belongs to case 2; the
remaining pixels are the reference pixels for this case.
Figure 4.9: Pixels in Case 2
The greater of the two reference pixels is assigned to H and the smaller is assigned
to L, so H has the value 254 and L has the value 251. Since the current pixel 245 is
smaller than the lower reference pixel L, it is below range, and Golomb-Rice coding
is applied to encode it. The residual is x = L - P - 1 = 251 - 245 - 1 = 5; with
k = 2, as in the example of section 4.4, the unary part is 10 and the binary part is
01, and with the prefix 00 for below range the codeword for the current pixel
evaluates to 001001, as listed in Table 4.1.
Similarly, the next pixel 220 is also included in case 2. For this pixel the previous
two pixels, i.e. 251 and 245, are the reference pixels and the rest of the encoding
procedure is the same.
Case 3
In case 3, the current pixel is present at the left edge of the image matrix.
Now the next pixel which is to be encoded is 244. This pixel belongs to case 3,
according to Fig. 4.1.
Figure 4.10: Pixels in Case 3
For case 3, the reference pixels are the pixels to the top and top right of the current
pixel, so we have 254 and 251 as reference pixels, giving H = 254 and L = 251. Since
the current pixel is less than the lower reference pixel L, it is regarded as out of
range and hence it is encoded using Golomb-Rice coding. The Golomb-Rice code has
three parts. In the first part there are two bits; the first bit is always 0 to indicate
that the current pixel is encoded using Golomb-Rice coding. The second bit indicates
whether the current pixel is above range or below range. Since the current pixel is
below range, the second bit is 0, so the first part of the Golomb-Rice code is 00.
The second part of the Golomb-Rice code is the unary part. It is calculated
according to equation 4.1. A unary number, say u, is represented by u 1s followed
by a 0. Here the value of the residual is x = L - P - 1 = 251 - 244 - 1 = 6, so the
unary part is floor(6 / 2^2) = 1, which is represented as 10.
The third part of the Golomb-Rice code is the binary part. It is calculated
according to equation 4.1 as x mod 2^k = 6 mod 2^2 = 2, which is written with k
bits as 10.
Now the total codeword is formed by concatenating all three parts of the
Golomb-Rice code, so we get the codeword for 244 as 001010, as listed in Table 4.1.
Case 4
The next pixel which is to be encoded comes from the case 4 category of the image
template of Fig. 4.1. The current pixel value is 220. The reference pixels for this
current pixel are the pixels to the left and top of the current pixel, so we have 251
and 244 as reference pixels.
Figure 4.11: Pixels in Case 4
Since the current pixel is less than the lower reference pixel L, it is regarded as
out of range and hence it is encoded using Golomb-Rice coding. The Golomb-Rice
code has three parts. In the first part there are two bits; the first bit is always 0 to
indicate that the current pixel is encoded using Golomb-Rice coding. The second
bit indicates whether the current pixel is above range or below range. Since the
current pixel is below range, the second bit is 0, so the first part of the Golomb-Rice
code consists of 00.
The unary part is calculated according to equation 4.1. Here the value of the residual
is x = L - P - 1 = 244 - 220 - 1 = 23; for this the unary number is 5, so the unary
part is 111110.
The third part of the Golomb-Rice code is the binary part: the binary number is
23 mod 2^2 = 3, which is represented by 11. So the binary part is 11.
Now the total codeword is formed by concatenating all three parts of the
Golomb-Rice code, so we get the codeword for 220 as 0011111011.
Similarly, the remaining pixels are also encoded, and their codewords along
with the length of each codeword are sent to the decoder for reconstruction of the
compressed image. Table 4.1 shows the pixels from the image matrix and the
corresponding codeword for each particular pixel.
Table 4.1: Pixels and corresponding Codewords
Pixel Codeword
254 11111110
251 11111011
245 001001
220 00111111000
244 001010
220 0011111011
244 110100
200 001111011
250 011001
210 0011001
220 1111
238 011111001
200 0011001
222 0111011
219 0000
180 00111111111010
Chapter 5
2D-DWT - FELICS Algorithm
The proposed technique for image compression uses two techniques: first, the
2D-DWT, which provides lossy image compression, and second, the FELICS
algorithm, which provides lossless image compression. As discussed in chapter 3,
the wavelet decomposition is done using the Haar wavelet. The level of decomposition
depends on the desired image compression ratio or image quality. The higher the
level of decomposition, the higher the compression and the poorer the image quality.
The lossless FELICS algorithm uses adjusted binary codes and Golomb-Rice codes
for compression, as discussed in chapter 4. FELICS is a predictive coding algorithm.
It reduces the number of bits required for representing a pixel by prediction. The
codewords generated by the FELICS algorithm are of variable length. The minimum
codeword length is 2 bits and the maximum codeword length can be as long as 30
bits.
The proposed technique has two main blocks an encoder and a decoder. The
encoder compresses the image using a 2D-DWT and encodes it losslessly , while
decoder decodes it losslessly and then reconstructed using 2D-IDWT.
5.1 Encoder of Proposed Technique
Figure 5.1: Block Diagram of Encoder
The encoder has two main blocks: first, a 2D-DWT block and second, a FELICS encoder, as shown in Fig. 5.1.
5.1.1 2D-DWT Block
The image to be compressed is given as input to the 2D-DWT block. The 2D-DWT block performs the DWT on the image and decomposes it into 4 bands: LL, LH, HL and HH. The LL band is called the approximate band, since it carries most of the information of the image. It is obtained by passing the image through a low-pass filter; the image is filtered first row-wise and then column-wise. After single-level decomposition, all 4 bands have exactly half the dimensions of the original image. Out of these 4 bands, only the LL band is used in the proposed technique and the other bands are discarded, i.e. they are set to zero, so the input to the FELICS encoder is only the LL band. Similarly, if we apply 2-level or 3-level decomposition to the image, the dimensions of the LL band reduce accordingly.
Figure 5.2: Block Diagram of 2D-DWT Block
Suppose the input image has dimensions 512 × 512. After 1-level decomposition using the Haar wavelet, the image will have four bands, each with dimensions 256 × 256.
If further decomposition is required, only the LL block, i.e. the approximate band, is used. This 256 × 256 LL band is again decomposed using the 2D-DWT; this is called 2-level decomposition. The dimensions of the LL band after 2-level decomposition will be 128 × 128.
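As a sketch of this step, the following MATLAB fragment (assuming the Wavelet Toolbox function dwt2 is available; the file name and variable names are illustrative) performs a 2-level Haar decomposition and keeps only the approximate band:

% 2-level Haar decomposition keeping only the approximate (LL) band.
img = double(imread('lenna.png'));       % example 512x512 grayscale input
LL = img;
levels = 2;                              % decomposition level used in this work
for i = 1:levels
    [LL, LH, HL, HH] = dwt2(LL, 'haar'); % detail bands LH, HL and HH are discarded
end
% For a 512x512 input and levels = 2, LL is 128x128 and is passed to the FELICS encoder.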
Generally, wavelet decomposition up to 2 levels is used. If the image is decomposed further, the dimensions of the LL block keep reducing, but the image quality degrades drastically. So after 2-level decomposition, the LL band is given as input to the next block, the FELICS encoder. Only up to 2-level decomposition is normally preferred, unless the technique is used for surveillance-type applications, where only detection is required and not an exact image: with 3-level decomposition the compression increases drastically, but at the same time the image quality becomes poorer.
5.1.2 FELICS Encoder
The LL band obtained from the 2D-DWT block after the desired decomposition level contains both positive and negative non-integer values. So, before applying the FELICS encoding procedure, the values in the LL band are first rounded off to the nearest integers and the resulting matrix is used as the original image matrix. The values from this matrix are then encoded according to the four cases shown in Fig. 4.1.
Figure 5.3: Block Diagram of FELICS Encoder
The FELICS encoder further compresses the band in a lossless manner, applying adjusted binary coding and Golomb-Rice coding. After compression, the generated codewords are sent to the decoder along with the length of each codeword, since the codewords are of variable length. So the output of the encoder is the compressed image data, which consists of the codewords and their lengths.
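Since the codewords are of variable length, each codeword must be stored together with its bit length. A minimal sketch of this bookkeeping, with illustrative variable names:

% Store variable-length codewords together with their lengths (sketch).
codewords = {'11111110', '11111011', '001001'};  % e.g. the first entries of Table 4.1
lengths   = cellfun(@length, codewords);         % bit length of each codeword
bitstream = [codewords{:}];                      % concatenated data sent to the decoder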
5.2 Decoder of Proposed Technique
Figure 5.4: Block Diagram of Decoder
The decoder has two main blocks: first, a FELICS decoder and second, a 2D-IDWT block, as shown in Fig. 5.4. The compressed image data from the output of the encoder is given as input to the decoder.
5.2.1 FELICS Decoder
In the FELICS decoder, each codeword is taken from the codewords file and its corresponding length is taken from the lengths file. After getting the codeword and its length, the reverse coding flow of FELICS is applied. The first bit of the codeword indicates whether the current pixel was encoded using the adjusted binary code or the Golomb-Rice code: if the first bit is 1, the pixel was encoded using adjusted binary coding, and if it is 0, the pixel was encoded using Golomb-Rice coding.
Figure 5.5: Block Diagram of FELICS Decoder
If the pixel was encoded using Golomb-Rice coding, we must also know whether the pixel is above or below the range; for that, the second bit of the codeword is checked. If the second bit is 1, the current pixel is above the range, i.e. it is greater than the higher reference pixel H, and if the second bit is 0, the pixel is below the range, i.e. it is less than the lower reference pixel L.
After that, the unary and binary parts are extracted from the codeword and the corresponding residual is reconstructed. Finally, the pixel value is calculated: for a pixel below the range, the actual pixel value is L - 1 - residual, and for a pixel above the range, the pixel value is H + 1 + residual.
In this way the pixel values are reconstructed by the FELICS decoder, and the whole matrix is then given to the 2D-IDWT block.
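A small MATLAB sketch of this reverse mapping, assuming the prefix bit, the unary quotient q, the remainder r and the parameter k have already been parsed from the codeword (the function name is illustrative):

function P = decode_out_of_range(second_bit, q, r, k, L, H)
% Reconstruct an out-of-range pixel from its parsed Golomb-Rice fields (sketch).
residual = q * 2^k + r;                  % undo the quotient/remainder split
if second_bit == '1'
    P = H + 1 + residual;                % above range: pixel is greater than H
else
    P = L - 1 - residual;                % below range: pixel is less than L
end
end

For instance, decode_out_of_range('0', 5, 3, 2, 244, 251) returns 220, inverting the case 4 example of chapter 4.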
5.2.2 2D-IDWT Block
In the 2D-IDWT block, the actual image is reconstructed. The image matrix formed by the FELICS decoder is the LL band that was produced by 2-level decomposition of the image during encoding. The other bands, i.e. LH, HL and HH, are taken to be zero, so zero matrices with the same dimensions as the LL band are appended to the LL band, and the complete appended matrix is given as input to the 2D-IDWT block.
Figure 5.6: Block Diagram of 2D-IDWT Block
The reconstructed image has dimensions equal to those of the appended matrix given as input to the 2D-IDWT block. The LL band obtained at the output of the FELICS decoder is given as input to the 2D-IDWT block; at each level, the LL band of the next higher level is reconstructed by considering the other three bands as zero. Finally, at the output of the decoder, we get the reconstructed compressed image.
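A sketch of this reconstruction, assuming the Wavelet Toolbox function idwt2 and that LL_decoded is the matrix produced by the FELICS decoder (an illustrative variable name):

% Inverse transform with the detail bands set to zero.
rec = LL_decoded;                        % e.g. the 128x128 LL band after 2-level decomposition
levels = 2;                              % must match the decomposition level used by the encoder
for i = 1:levels
    Z = zeros(size(rec));                % LH, HL and HH bands are taken as zero
    rec = idwt2(rec, Z, Z, Z, 'haar');   % one level of inverse Haar transform
end
% After two levels, rec has the dimensions of the original 512x512 image.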
5.3 Modified Image Template
The image template used in the original FELICS algorithm has 4 cases, as shown in Fig. 5.7. Case 1 is for the first two pixels at the left corner of the image. Case 2 is for the pixels in the first row other than the first two pixels. Case 3 is for the pixels at the left edge of the image, and Case 4 is for the pixels which are neither in the first row nor at the left edge of the image.
Figure 5.7: Original prediction template used in FELICS
Figure 5.8: Modified prediction template used in FELICS
In the modified image template, shown in Fig. 5.8, one more case has been added to the template. The first two cases are the same as in the original template. In case 4 of the original template, two reference pixels were used, but in the new template 4 pixels are used as reference pixels: the highest of the four pixels is assigned to H and the lowest of the four pixels is assigned to L.
Case 5 was not included in the original template. Case 5 is for the pixels at the right edge of the image: for a pixel at the right edge, three reference pixels are taken, and the highest of those pixels is assigned to H and the lowest to L. The template is only used to decide the reference pixels; the encoding procedure is the same as in the original FELICS algorithm.
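A compact sketch of the modified reference selection is given below. The exact neighbour positions are defined by Fig. 5.8, so the offsets used here (left, upper-left, upper and upper-right neighbours for case 4; left, upper-left and upper for case 5) are an assumption made for illustration only:

function [H, L] = select_references(img, r, c)
% Reference pixels H and L for the modified template (sketch).
if c == size(img, 2)                     % case 5: pixel on the right edge, three references
    refs = [img(r, c-1), img(r-1, c-1), img(r-1, c)];
else                                     % case 4: pixel with four reference neighbours
    refs = [img(r, c-1), img(r-1, c-1), img(r-1, c), img(r-1, c+1)];
end
H = max(refs);                           % highest reference pixel
L = min(refs);                           % lowest reference pixel
end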
Chapter 6
Results Analysis
6.1 Image Quality Parameters
For comparing the images obtained from the three techniques we have considered various image quality parameters such as Compression Ratio (CR), Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Normalized Cross-Correlation (NCC), Average Difference (AD), Structural Content (SC) and Normalized Absolute Error (NAE) [7]. For calculating these image quality parameters, the original image matrix and the compressed image matrix are used: I_1(m, n) denotes an element of the original image matrix and I_2(m, n) denotes an element of the compressed image matrix, while M and N denote the number of rows and columns of the image matrix. For calculating the image quality parameters, the dimensions of the original and compressed images must be the same.
1. Compression Ratio (CR)
The compression ratio is calculated as the ratio of the file size of the original image to the file size of the compressed (reconstructed) image:

CR = \frac{\text{original image file size}}{\text{compressed image file size}}  (6.1)

A higher compression ratio indicates that the image is more strongly compressed; as the compression ratio increases, the image quality degrades.
2. Mean Square Error (MSE)
The Mean Square Error is defined as

MSE = \frac{\sum_{m,n} \left[ I_1(m,n) - I_2(m,n) \right]^2}{M \, N}  (6.2)

A large value of MSE indicates that the image is of poor quality.
3. Peak-Signal to Noise Ratio (PSNR)
Peak Signal to Noise Ratio (PSNR) is defined as

PSNR = 10 \log_{10} \left( \frac{255^2}{MSE} \right)  (6.3)

PSNR should be as high as possible; a low value of PSNR means the image quality is poor.
4. Normalized Cross-Correlation (NCC)
Normalized Cross-Correlation (NCC) is defined as

NCC = \frac{\sum_{m,n} I_1(m,n) \, I_2(m,n)}{\sum_{m,n} I_1(m,n) \, I_1(m,n)}  (6.4)

A value of NCC close to 1 means the image quality is good.
5. Structural Content (SC)
Structural Content is defined as

SC = \frac{\sum_{m,n} I_1(m,n) \, I_1(m,n)}{\sum_{m,n} I_2(m,n) \, I_2(m,n)}  (6.5)

A large value of Structural Content (SC) means that the image is of poor quality.
6. Average Dierence (AD)
Average Difference is defined as

AD = \frac{\sum_{m,n} \left[ I_1(m,n) - I_2(m,n) \right]}{M \, N}  (6.6)

A large value of AD means that the pixel values in the reconstructed image deviate more from the actual pixel values, i.e. the image is of poor quality.
7. Normalized Absolute Error (NAE)
Normalized Absolute Error (NAE) is defined as

NAE = \frac{\sum_{m,n} \left| I_1(m,n) - I_2(m,n) \right|}{\sum_{m,n} I_1(m,n)}  (6.7)

A value of NAE close to 0 means the image is of good quality.
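As a sketch, the metrics of equations (6.2) to (6.7) can be computed directly in MATLAB from two equally sized grayscale image matrices; the function and variable names are illustrative:

function [MSE, PSNR, NCC, SC, AD, NAE] = quality_metrics(I1, I2)
% Image quality metrics for an original image I1 and a compressed image I2 (sketch).
I1 = double(I1);  I2 = double(I2);
[M, N] = size(I1);
MSE  = sum((I1(:) - I2(:)).^2) / (M * N);           % equation (6.2)
PSNR = 10 * log10(255^2 / MSE);                     % equation (6.3)
NCC  = sum(I1(:) .* I2(:)) / sum(I1(:) .* I1(:));   % equation (6.4)
SC   = sum(I1(:) .* I1(:)) / sum(I2(:) .* I2(:));   % equation (6.5)
AD   = sum(I1(:) - I2(:)) / (M * N);                % equation (6.6)
NAE  = sum(abs(I1(:) - I2(:))) / sum(I1(:));        % equation (6.7)
end

The compression ratio of equation (6.1) is obtained separately from the stored file sizes of the original and compressed data.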
6.2 Results
Here, the same image is compressed by three different image compression techniques. The first technique is the FELICS algorithm alone, the second is JPEG, and the third is the proposed technique, which consists of the 2D-DWT followed by the FELICS algorithm. The third technique has two variants, one with 2-level 2D-DWT and the other with 3-level 2D-DWT.
This experiment is carried out on different classes of images, taken from the website http://sipi.usc.edu/database, and the performance of the techniques is compared on the basis of various image quality measures such as Compression Ratio (CR), Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Normalized Cross-Correlation (NCC), Average Difference (AD), Structural Content (SC) and Normalized Absolute Error (NAE).
Six different types of images are used: the standard Lenna, Baboon, Bridge and Boat images, a Medical image and a Satellite image. The Lenna image contains a good mixture of detail, flat regions, shading and texture, which makes it well suited for testing image processing algorithms. The Baboon image has fine detail and texture information. The Bridge and Boat images have a mixture of detail, edges and shading. The Medical image has flat regions and minute edges.
Figure 6.1: Lenna image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS
Table 6.1: Results for Lenna image
Parameter   FELICS   JPEG     2-level DWT+FELICS   3-level DWT+FELICS
CR          1.72     11.13    14.8                 33.38
MSE         0        17.16    132.26               279.806
PSNR        99       35.43    26.91                23.66
NCC         1        0.999    0.9947               0.9861
SC          1        1        1.003                1.012
AD          0        0.004    -0.3107              -0.2781
NAE         0        0.024    0.0509               0.0769
Figure 6.2: Baboon image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS
Table 6.2: Results for Baboon image
Parameter   FELICS   JPEG     2-level DWT+FELICS   3-level DWT+FELICS
CR          1.26     5.3      11.84                34.13
MSE         49.22    101.3    540.11               700
PSNR        31.2     27.17    20.8                 19.679
NCC         0.9988   0.9971   0.9732               0.964
SC          0.9995   1.0001   1.0249               1.0352
AD          0.0017   -0.0113  -0.3108              -0.2707
NAE         0.0414   0.0581   0.1268               0.1484
Figure 6.3: Bridge image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS
Table 6.3: Results for Bridge image
Parameter   FELICS   JPEG     2-level DWT+FELICS   3-level DWT+FELICS
CR          1.37     5.8      12.22                32.6
MSE         0        72.23    358.26               609.02
PSNR        99       29.54    22.27                20.28
NCC         1        0.996    0.9772               0.9629
SC          1        1.001    1.0217               1.0371
AD          0        0.0085   -0.2048              -0.1682
NAE         0        0.054    0.1192               0.1558
Figure 6.4: Boat image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS
Table 6.4: Results for Boat image
Parameter   FELICS   JPEG     2-level DWT+FELICS   3-level DWT+FELICS
CR          1.55     8.67     14.51                35.43
MSE         0        29.07    225.593              406.35
PSNR        99       33.49    24.59                22.041
NCC         1        0.9989   0.9902               0.9805
SC          1        1        1.0076               1.0177
AD          0        0.0097   -0.311               -0.2856
NAE         0        0.031    0.06885              0.096
Figure 6.5: Medical image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS
Table 6.5: Results for Medical image
Parameter   FELICS   JPEG     2-level DWT+FELICS   3-level DWT+FELICS
CR          2.32     18.19    21.09                52.84
MSE         1.44     2.82     73.006               130.17
PSNR        46.53    43.62    29.49                26.98
NCC         1.0003   0.9995   0.9976               0.9974
SC          0.9993   1        1.0027               1.0016
AD          -0.059   0.0685   0.3802               0.1901
NAE         0.0044   0.0064   0.0318               0.039
Figure 6.6: Satellite image compressed using (a) FELICS (b) JPEG (c) 2-level
DWT+FELICS (d) 3-level DWT+FELICS
Table 6.6: Results for Satellite image
Parameter   FELICS    JPEG      2-level DWT+FELICS   3-level DWT+FELICS
CR          0.99      3.52      12.145               33.61
MSE         0.2144    302.787   1002.5               1134.34
PSNR        54.8185   23.31     18.119               17.58
NCC         1         1.0078    0.99293              0.9202
SC          0.99      0.96      1.0796               1.0905
AD          -0.005    -0.15     1.0369               0.9991
NAE         0.0004    0.1147    0.198                0.2313
Chapter 7
Conclusion and Future Scope
7.1 Conclusion
The 2D-DWT (a lossy technique) and FELICS (a lossless technique) are combined in the proposed technique, which provides a high compression ratio while maintaining acceptable image quality. Since the 2D-DWT is lossy, it provides a high compression ratio with some loss in image quality; since FELICS is lossless, it provides a small compression ratio but preserves image quality. Thus, using this combination of lossy and lossless techniques, we obtain the high compression ratio desired for surveillance-type applications.
Different parameters are used to compare the images produced by the different techniques. Seven parameters, namely CR, MSE, PSNR, NCC, SC, AD and NAE, have been considered for different classes of images (COI), including Lenna, Boat, Baboon, Bridge, a Medical image and a Satellite image. The compression ratio for all these images is higher for the DWT+FELICS technique than for JPEG, and all the image parameter values are within an acceptable range for the DWT+FELICS technique. For the Baboon, Medical and Satellite images, even the FELICS algorithm cannot reconstruct the image without loss. In particular, for the Satellite image the original FELICS algorithm could not compress the image; instead the data expanded, with a CR of 0.99. After using the new modified image template, the FELICS algorithm was able to compress the Satellite image with a CR of 1.15. So the new image template increases the CR by a small amount.
7.2 Future Scope
The technique using the 2D-DWT and FELICS gives better compression ratios than the other techniques. The image quality and the compression ratio can be further improved by employing additional lossless encoding techniques such as Huffman coding or arithmetic coding; since these techniques are lossless, they can improve the compression ratio while the image quality is maintained.
References
[1] Information Technology - Digital Compression and Coding of Continuous-Tone Still Images, ISO/IEC 10918-1 and ITU-T Recommendation T.81, 1994.
[2] JPEG 2000 Part 1 Final Draft International Standard, ISO/IEC FDIS 15444-1, Dec. 2000.
[3] P.-C. Tseng, Y.-C. Chang, Y.-W. Huang, H.-C. Fang, C.-T. Huang, and L.-G. Chen, Advances in hardware architectures for image and video coding - a survey, Proc. IEEE, vol. 93, no. 1, pp. 184-197, Jan. 2005.
[4] D. A. Huffman, A method for the construction of minimum redundancy codes, Proc. IRE, vol. 40, pp. 1098-1101, Sep. 1952.
[5] T. A. Welch, A technique for high-performance data compression, IEEE Computer, vol. 17, no. 6, pp. 8-19, Jun. 1984.
[6] P. G. Howard and J. S. Vitter, Fast and efficient lossless image compression, Proc. IEEE Int. Conf. Data Compression, 1993, pp. 501-510.
[7] X. Wu and N. D. Memon, Context-based, adaptive, lossless image coding, IEEE Trans. Commun., vol. 45, no. 4, pp. 437-444, Apr. 1997.
[8] M. J. Weinberger, G. Seroussi, and G. Sapiro, The LOCO-I lossless image compression algorithm: Principles and standardization into JPEG-LS, IEEE Trans. Image Process., vol. 9, no. 8, pp. 1309-1324, Aug. 2000.
[9] Yi-Qiang Hu, Hung-Hseng Hsu, and Bing-Fei Wu, An Integrated Method to Image Compression Using the Discrete Wavelet Transform, IEEE International Symposium on Circuits and Systems, Hong Kong, Jun. 9-12, 1997.
[10] Tsung-Han Tsai, Yu-Hsuan Lee, and Yu-Yu Lee, Design and Analysis of High-Throughput Lossless Image Compression Engine using VLSI-Oriented FELICS Algorithm, IEEE Trans. on VLSI Systems, vol. 18, no. 1, Jan. 2010.
[11] Vellaiappan Elamaran and Angam Praveen, Comparison of DCT and Wavelets in Image Coding, 2012 International Conference on Computer Communication and Informatics (ICCCI-2012), Jan. 10-12, 2012, Coimbatore, India.
[12] V. S. Vora, A. C. Suthar, Y. N. Makwana, and S. J. Davda, Analysis of Compressed Image Quality Assessments, International Journal of Advanced Engineering and Application, Jan. 2010.
Books:
1. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Pearson Education, India, 2002.
Publication