
International Journal of Scientific Research Engineering & Technology (IJSRET), ISSN 2278-0882
Volume 5, Issue 2, February 2016

REVIEW OF LOSSLESS IMAGE COMPRESSION


Malkit Singh,
Department of Computer Science and Engineering,
Guru Kashi University, Talwandi Sabo, Bathinda, Punjab

ABSTRACT
This paper addresses the area of lossless image
compression as it applies to various fields of image
processing. In many applications of image processing,
such as satellite imaging, medical imaging and video,
the image or image-stream size is too large and requires
a large amount of storage space or high bandwidth for
communication in its original form; image compression
techniques can be used effectively in such applications.
Lossless image compression techniques preserve the
information so that exact reconstruction of the image is
possible from the compressed data. In this paper we
review the most popular lossless image compression
techniques.
Keywords: Compression, Image, Lossless, Run-length,
LZW, Storage.

1. INTRODUCTION
1.1 IMAGE
An image is essentially a 2-D signal processed by the
human visual system. The signals representing images
are usually in analog form. However, for processing,
storage and transmission by computer applications, they
are converted from analog to digital form. A digital
image is basically a 2-D array of pixels.
Images form a significant part of data, particularly in
remote sensing, biomedical and video conferencing
applications. As the use of and dependence on
information and computers continues to grow, so too
does our need for efficient ways of storing and
transmitting large amounts of data.
1.2 IMAGE COMPRESSION
Image compression addresses the problem of reducing
the amount of data required to represent a digital image.
It is a process intended to yield a compact representation
of an image, thereby reducing the image
storage/transmission requirements. Compression is
achieved by the removal of one or more of the three
basic data redundancies:
1. Coding Redundancy
2. Interpixel Redundancy
3. Psychovisual Redundancy [1]

There are basically two types of image compression:
1. Lossy compression
2. Lossless compression

2. LOSSLESS COMPRESSION
Lossless compression is a class of data compression
algorithms that allows the original data to be perfectly
reconstructed from the compressed data. By contrast,
lossy compression permits reconstruction only of an
approximation of the original data, though this usually
improves compression ratios (and therefore reduces file
sizes). Lossless data compression is used in many
applications.
Lossless compression is used in cases where it is
important that the original and the decompressed data be
identical, or where deviations from the original data
could be deleterious.
Typical examples are executable programs, text
documents, and source code. Some image file formats,
like PNG or GIF, use only lossless compression, while
others like TIFF and MNG may use either lossless or
lossy methods [2].
2.1 LOSSLESS COMPRESSION TECHNIQUES
In lossless compression techniques, the original image
can be perfectly recovered from the compressed
(encoded) image. These techniques are also called
noiseless, since they do not add noise to the signal
(image). They are also known as entropy coding, since
they use statistical/decomposition techniques to
eliminate or minimize redundancy.
The following techniques are included in lossless
compression:
1. Run-length encoding
2. Huffman encoding
3. LZW coding
4. Area coding [1]
2.1.1 Run-length Encoding
Run-length encoding (RLE) is a very simple form
of lossless data compression in which runs of data (that
is, sequences in which the same data value occurs in
many consecutive data elements) are stored as a single
data value and count, rather than as the original run. This
is most useful on data that contains many such runs.
Consider, for example, simple graphic images such as
icons, line drawings, and animations. It is not useful with
files that don't have many runs as it could greatly
increase the file size.
RLE may also be used to refer to an early graphics file
format supported by CompuServe for compressing black
and white images, which was widely supplanted by their
later Graphics Interchange Format. RLE also refers to a
little-used image format in Windows 3.x, with the
extension .rle, which is a run-length encoded bitmap
used to compress the Windows 3.x startup screen.
Typical applications of this encoding arise when the
source information comprises long substrings of the
same character or binary digit.
For example, consider a screen containing plain black
text on a solid white background. There will be many
long runs of white pixels in the blank space, and many
short runs of black pixels within the text. A
hypothetical scan line, with B representing a black pixel
and W representing white, might read as follows:
WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW
With a run-length encoding (RLE) data compression
algorithm applied to the above hypothetical scan line,
it can be rendered as follows [3]:
12W1B12W3B24W1B14W
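To make this concrete, here is a minimal Python sketch of such an encoder. The function name and count-then-symbol output format simply mirror the example above; a production RLE scheme would also define a decoder and a binary representation for the counts.

def rle_encode(line):
    """Encode a string as count+symbol pairs, e.g. 'WWWB' -> '3W1B'."""
    if not line:
        return ""
    out = []
    run_char, run_len = line[0], 1
    for ch in line[1:]:
        if ch == run_char:
            run_len += 1                          # extend the current run
        else:
            out.append(str(run_len) + run_char)   # emit the finished run
            run_char, run_len = ch, 1
    out.append(str(run_len) + run_char)
    return "".join(out)

scan_line = "W" * 12 + "B" + "W" * 12 + "B" * 3 + "W" * 24 + "B" + "W" * 14
print(rle_encode(scan_line))                      # 12W1B12W3B24W1B14W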
2.1.2 Huffman Encoding
A Huffman code is a particular type of optimal prefix
code that is commonly used for lossless data
compression. Huffman coding is an algorithm developed
by David A. Huffman while he was a Ph.D. student
at MIT and published in the 1952 paper "A Method for
the Construction of Minimum-Redundancy Codes".
Huffman coding is based on the frequency of occurrence
of a data item (a pixel, in images). The principle is to use
a smaller number of bits to encode the data that occur
more frequently. Codes are stored in a code book, which
may be constructed for each image or for a set of images.
In all cases the code book plus the encoded data must be
transmitted to enable decoding.
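As an illustration, here is a minimal Python sketch of building such a code book from symbol frequencies. This is an illustrative helper of ours, not a library routine; a real coder would also serialize the code book and handle the degenerate one-symbol input.

import heapq
from collections import Counter

def huffman_code_book(data):
    """Build a prefix-code book in which frequent symbols get shorter codes."""
    # Heap entries: (subtree frequency, tiebreaker, {symbol: code so far}).
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}          # left branch: 0
        merged.update({s: "1" + c for s, c in right.items()})   # right branch: 1
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

book = huffman_code_book("AAAABBBCCD")
print(book)   # the most frequent symbol gets the shortest code,
              # e.g. {'A': '0', 'B': '10', 'D': '110', 'C': '111'}
encoded = "".join(book[s] for s in "AAAABBBCCD")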
Adaptive Huffman coding
A variation called adaptive Huffman coding involves
calculating the probabilities dynamically, based on
recent actual frequencies in the sequence of source
symbols, and changing the coding-tree structure to
match the updated probability estimates. It is rarely used
in practice, since the cost of updating the tree makes it
slower than optimized adaptive arithmetic coding, which
is more flexible and achieves better compression [4].
2.1.3 Lempel-Ziv Coding
In 1984, Terry Welch published the LZW algorithm,
which became a popular technique for general-purpose
compression systems. It was used in programs such as
PKZIP as well as in hardware devices. Lempel-Ziv-Welch
is a variant of the LZ78 algorithm in which the
compressor never outputs a character; it always outputs
a code [5]. LZW (Lempel-Ziv-Welch) is a dictionary-based
coding. Dictionary-based coding can be static or
dynamic. In static dictionary coding, the dictionary is
fixed during the encoding and decoding processes. In
dynamic dictionary coding, the dictionary is updated on
the fly. LZW is widely used in the computer industry and
is implemented as the compress command on UNIX [1].
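A minimal Python sketch of the LZW encoder conveys the idea. Here the dictionary is seeded with the 256 single-byte strings; a real implementation would also bound the dictionary size and pack the codes into bits.

def lzw_encode(data):
    """LZW: grow a dictionary of seen substrings and output only codes."""
    dictionary = {chr(i): i for i in range(256)}   # single-character seeds
    next_code = 256
    w = ""
    codes = []
    for ch in data:
        wc = w + ch
        if wc in dictionary:
            w = wc                        # keep extending the current match
        else:
            codes.append(dictionary[w])   # emit code for the longest match
            dictionary[wc] = next_code    # learn the new string on the fly
            next_code += 1
            w = ch
    if w:
        codes.append(dictionary[w])
    return codes

# Codes below 256 are single characters; larger codes index learned strings.
print(lzw_encode("TOBEORNOTTOBEORTOBEORNOT"))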
2.1.4 Area coding
In the area coding technique, special codewords are used
to identify large areas of contiguous 1s or 0s. The whole
image is divided into blocks of size m*n pixels, which
are classified as blocks having only white pixels, blocks
having only black pixels, or blocks with mixed intensity.
The most frequently occurring category is assigned the
1-bit codeword 0, and the remaining two categories are
assigned the 2-bit codewords 10 and 11. The codeword
assigned to the mixed-intensity category is used as a
prefix, which is followed by the mn-bit pattern of the
block. Compression is achieved because the mn bits that
are normally used to represent each constant area are
replaced by a 1-bit or 2-bit codeword. When
predominantly white text documents are being
compressed, a slightly simpler approach called white
block skipping is used: solid white blocks are coded as 0,
and all other blocks, including the solid black blocks, are
coded as 1 followed by the bit pattern of the block (see
the sketch after this paragraph). This approach takes
advantage of the anticipated structural patterns of the
image to be compressed: since few solid black areas are
expected, they are grouped with the mixed regions,
allowing a 1-bit codeword to be used for the highly
probable white blocks [6].
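Here is a minimal Python sketch of white block skipping, assuming a binary NumPy image with 1 = white and 0 = black whose dimensions divide evenly into the chosen block size; the function name and interface are ours, written for illustration only.

import numpy as np

def wbs_encode(image, m=4, n=4):
    """White block skipping: an all-white m*n block costs 1 bit ('0');
    any other block costs 1 + m*n bits ('1' plus its bit pattern)."""
    h, w = image.shape
    bits = []
    for r in range(0, h, m):
        for c in range(0, w, n):
            block = image[r:r + m, c:c + n]
            if block.all():                                   # solid white
                bits.append("0")
            else:                                             # black or mixed
                bits.append("1" + "".join(str(b) for b in block.flatten()))
    return "".join(bits)

# A mostly white 8x8 page with a single black pixel: three of the four
# 4x4 blocks compress to a single '0' bit each.
page = np.ones((8, 8), dtype=int)
page[2, 3] = 0
print(wbs_encode(page))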

3. BENEFITS OF LOSSLESS COMPRESSION


A. Storage Space
Compressing data files allows one to store more files in
the available storage space. Lossless compression, as
used in zip file technology, will typically reduce a file to
about 50 percent of its original size. However, little
difference in file size is seen if the files being zipped are
already in a compressed format, such as MP3 audio files
or PDF (Portable Document Format) text-only files [7].
B. Bandwidth and Transfer Speed
The download process uses network bandwidth
whenever we download a file, such as an MP3 audio file,
from a server on the Internet. Bandwidth is the speed at
which the network transfers data, measured in Mbps
(megabits per second). Compressed files contain fewer
bits of data than uncompressed files and, as a
consequence, use less bandwidth when we download
them. This means that the transfer time, that is, the time
it takes to download a file, is shorter. For example, with
a bandwidth of 1 Mbps, a 10 Mb (megabit) file takes 10
seconds to download; if the same file is compressed to
5 Mb, it takes only 5 seconds [7].

C. Cost
The cost of storing data is reduced by compressing files
for storage, because more files can be stored in the
available storage space when they are compressed. If we
have 500 MB (megabytes) of uncompressed data and a
250 MB hard drive on which to store it, we will need to
buy a second 250 MB drive; if we compress the data
files to 50 percent of their uncompressed size, the extra
hard drive is unnecessary [7].
D. Accuracy
Compression also reduces the chance of transmission
errors, since fewer bits are transferred [8].
E. Security
Compression also provides a level of security against
illicit monitoring [8].

4. CONCLUSION
There are basically two types of compression
techniques: lossless compression and lossy compression.
Comparing the performance of compression techniques
is difficult unless identical data sets and performance
measures are used. Some techniques are well suited to
particular applications, such as security technologies,
while others perform well for certain classes of data and
poorly for others. Lossless compression reduces the
memory required to store an image without losing any
of its information.

REFERENCES
[1] A. Subramanya, "Image Compression Technique," IEEE Potentials, Vol. 20, Issue 1, pp. 19-23, Feb.-March 2001.
[2] https://en.wikipedia.org/wiki/Lossless_compression
[3] A. H. Robinson and C. Cherry, "Results of a prototype television bandwidth compression scheme," 1967.
[4] Mamta Sharma, "Compression Using Huffman Coding," IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 5, May 2010.
[5] Parminder Singh, Manoj Dupar, Priyanka, "Enhancing LZW Algorithm to Increase Overall Performance," Annual IEEE India Conference, 2006.
[6] http://cis.cs.technion.ac.il/Done_Projects/Projects_done/VisionClasses/DIP_1998/Lossless_Compression/node26
[7] Ajit Singh and Meenakshi Gahlawat, "Image Compression and its Various Techniques," Volume 3, Issue 6, June 2013.
[8] rimtengg.com/coit2007/proceedings/pdfs/43.pdf

