
Alexandria Engineering Journal (2019) xxx, xxx–xxx

Alexandria University

Alexandria Engineering Journal


www.elsevier.com/locate/aej
www.sciencedirect.com

ORIGINAL ARTICLE

Studying the effect of lossy compression and image fusion on image classification
Mohamed Elkholy, Mohamed M. Hosny *, Hossam M. Farid El-Habrouk

Transportation Engineering Department, Faculty of Engineering, Alexandria University, Alexandria, Egypt

Received 23 August 2017; revised 7 November 2017; accepted 19 December 2018

KEYWORDS: Lossy compression; MrSID; Overall accuracy; HPF; RMSE

Abstract: Nowadays, remotely sensed images are of huge sizes that require compression techniques so that they can be easily stored and transmitted over the internet. In addition, image fusion, the merging of panchromatic and multispectral images to generate a single image with high spatial and spectral resolutions, is required to increase the information in the resulting image. The purpose of this paper is to study the effect of image compression and fusion techniques on the classification accuracy. In this study, two pan and mul Geo-eye images covering an area of Cape Town, South Africa were registered and fused using different fusion techniques. The fused image with the superior accuracy was then compressed with various compression ratios ranging from 1:10 to 1:100. Then, the compressed fused images were classified using the Maximum Likelihood Classification and Artificial Neural Network Classification techniques. Finally, the confusion matrices of the classified images were generated and evaluated to determine the effect of the compression and fusion techniques on the accuracy of the classification process.
© 2018 Faculty of Engineering, Alexandria University. Production and hosting by Elsevier B.V. This is an
open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

One of the most important applications in remote sensing is image classification, which aims to label each pixel in the image with a certain class. Two classification techniques were applied in this research: the Maximum Likelihood Classification (MLC) and the Artificial Neural Network (ANN) classification, both supervised methods.

Before classifying the images, they undergo two major steps. The first is image fusion and the second is image compression.

Image fusion is widely used to integrate a panchromatic image, which contains the spatial information (the sharp details and clear edges of each feature), and a multispectral image, which contains the spectral information (the correct color for each feature), for full exploitation of the information in both images. Several image fusion techniques were applied to merge the pan and mul Geo-eye images, producing a high spatial resolution multispectral image (a fused image). These image fusion techniques are: Intensity Hue Saturation (IHS), Fast Intensity Hue Saturation with spectral adjustment (FIHS), Fast Intensity Hue Saturation with spectral adjustment area weighting coefficient, Principal Component Analysis (PCA), High Pass Filter (HPF), Gram Schmidt, Hyperspherical Color Space (HCS), Brovey Transform (BT), Wavelet + IHS, and Wavelet + PCA.

* Corresponding author.
E-mail address: mohamed_elkholy@alexu.edu.eg (M. Elkholy).
Peer review under responsibility of Faculty of Engineering, Alexandria University.
https://doi.org/10.1016/j.aej.2018.12.013
1110-0168 © 2018 Faculty of Engineering, Alexandria University. Production and hosting by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Please cite this article in press as: M. Elkholy et al., Studying the effect of lossy compression and image fusion on image classification, Alexandria Eng. J. (2019),
https://doi.org/10.1016/j.aej.2018.12.013

As the high pass filter fusion technique provided a fused image with the highest spatial and spectral accuracies with respect to the other fusion techniques, it was adopted in this research for the second step, which is image compression. Image compression aims to reduce the size of the fused image. In this study, the Discrete Wavelet Transformation (MrSID format) was applied to compress the fused image with different ratios ranging from 1:10 to 1:100.

Afterwards, the compressed fused images (10 images) were classified using both classification techniques, MLC and ANN. The PCI digital image processing software (Geomatica, Canada) and the ERDAS IMAGINE package were used to aid in performing the different processing steps of this research. Finally, the 10 classified images were evaluated by producing the error (confusion) matrix for each of them, and the overall accuracy of the classification process was considered to determine the effect of the image fusion and compression techniques on the classification accuracy. The following sections describe in detail the study area and the flow of the processing steps.

2. Study area

Two pan and mul Geo-eye images were used. The pan image is 14,464 × 14,532 pixels with a pixel size of 0.50 m, and the mul image is 3616 × 3633 pixels with a pixel size of 2.0 m and 4 spectral bands (Red, Green, Blue, and NIR), as shown in Fig. 1. The pan and mul Geo-eye images cover the same area of Cape Town, South Africa.

Fig. 1 Illustrates the original images (a) multispectral image, and (b) panchromatic image.

3. Image fusion: high pass filter (HPF)

A high-pass filter (HPF), an electronic filter, passes high-frequency signals but decreases the amplitude of signals with frequencies lower than the cutoff frequency. The actual amount of attenuation for each frequency varies from one filter to another. A high-pass filter is usually represented as a linear time-invariant system [1].

At first, the ratio between the panchromatic high spatial resolution image and the multispectral image is computed [2]. A high pass convolution filter kernel is formed and used, based on the computed ratio, to filter the panchromatic high spatial resolution input data. The HPF image is then added to each band of the multispectral image: after the HPF image is weighted relative to the global standard deviation of the multispectral bands, with the weighting factors again computed from the ratio, the summation is done [3]. In the final step, a linear stretch is applied to the new multispectral image to match the standard deviation and the mean values of the original input multispectral image. The method also gives good, acceptable results for multi-temporal and multi-sensor data, although the edges are sometimes stressed too much. Fig. 2 shows a part of the HPF fused image.

Fig. 2 Illustrates a part of the HPF fused image.

4. Image compression: image compression based on discrete wavelet transformation (MrSID format)

Image compression has recently become more significant in the remote sensing field because of the availability of high-resolution and hyper-spectral satellite imagery; the panchromatic and multispectral images need large space to be stored and served over the internet [4]. The Spatial Data Infrastructures (SDI) paradigm has improved over the years, which greatly facilitates the establishment of web data services, usually in terms of Open Geospatial Consortium (OGC) standards like the Web Map Service (WMS) and the Web Coverage Service (WCS). Nevertheless, interactive transmission strategies and compression are still necessary for the images in these web services, in order to facilitate the storage and transfer of very large images in environments with restricted bandwidth [5].

There are two broad types of image compression methods: lossy and lossless compression. These terms describe whether or not the original image loses any data when the compressed image is decompressed [6].

Lossless compression can be performed in different ways, through which the original data are kept without any loss. When the compressed image is decompressed, it matches the original image numerically, and is therefore favored in critical practical situations, such as the archival of business documents or medical images, where losing data and quality could cause incorrect analysis with a severe adverse influence [7]. However, there must be some redundancy in the original data for this type of compression to be efficient.

The other type of image compression is lossy compression. It minimizes the image size by permanently eliminating certain information, especially redundant information. After decompressing the image, only a part of the original information remains. This type of compression is chosen in applications where high compression ratios and smaller image sizes are needed; however, some of the spatial and spectral features of the image are lost, even though in some cases the visual impact of a lossy technique may be imperceptible [8,9].

Multiresolution Seamless Image Database (MrSID), designed by LizardTech, is a compression algorithm based on wavelet transformation. The improvements in MrSID include a memory-efficient implementation and the automatic inclusion of pyramid layers in every data set, both of which make MrSID a very suitable technique for the efficient retrieval and storage of very large digital images. The wavelet-based compression underlying MrSID yields high compression ratios while satisfying stringent image quality requirements. The compression method used in MrSID is a lossy one, in which the process of compression and decompression loses some of the source data pixel-for-pixel. The compression methods used by MrSID technology give both high performance and high quality imagery, while still meeting the industry's challenging workflows [10,11]. Fig. 3 shows three compressed HPF images with three different ratios, 1:10, 1:50, and 1:100, respectively.

5. Image classification

Image classification is to categorize the images into different classes by applying different methods, to make use of them in different fields, like map making and agriculture, through collecting spatial data. There are two different techniques for classification used in this research. The first one is called unsupervised classification, in which the image is divided into different categories by the software and the user just determines the number of categories in the image. The second and most used method is the supervised classification technique, in which the user determines the image categories. This research adopts the maximum likelihood supervised classification method and the neural network supervised classification method.

5.1. Maximum likelihood method

Maximum likelihood classification is one of the most common supervised classification techniques used with remote sensing image data, and was the first rigorous algorithm to be widely employed. It was developed in a statistically acceptable manner, and its traditional approach was sufficient for most remote sensing purposes.

This classifier considers both the variance and covariance of the category spectral response patterns when classifying an unknown pixel. To do this, an assumption is made that the distribution of a class sample is Gaussian (normally distributed). This assumption of normality is generally reasonable for common spectral response distributions. Under this assumption, a class can be characterized by its mean vector and covariance matrix. Given these two characteristics for each pixel value, the statistical probability is computed for each class to determine the land cover class to which the pixel belongs [12].

5.2. Artificial neural networks

Artificial Neural Networks (ANNs) are computer programs designed to simulate human learning processes through the establishment and reinforcement of linkages between input data and


Fig. 3 Illustrates three compressed fused images with compression ratios 1:10, 1:50, and 1:100.

output data. These linkages or pathways form the analogy with the human learning process: repeated associations between input and output during the training process reinforce the linkages, which can then be employed to link input and output in the absence of training data [9].

ANNs are models free from any assumptions, particularly about the frequency distribution of the training data. ANN approaches have a distinct advantage over statistical classification methods in that they are non-parametric and require little prior knowledge of the input data distribution. In addition to


this previous advantage, the neural networks attempt to find the best nonlinear function based on the network's complexity, without the constraint of linearity or a pre-specified nonlinearity.

6. Evaluating methods

6.1. Evaluation of the compressed images

Evaluating the compressed images depends on verifying the preservation of the spectral characteristics and the spatial resolution by computing the RMSE for each compressed image. The RMSE combines the differences between the standard deviation and the mean of the fused image and those of the original one. The best value is zero. The following equation shows how to calculate the RMSE:

RMSE = sqrt((std_i - std_f)^2 + (mean_i - mean_f)^2)    (1)

where std_i and mean_i are the standard deviation and mean of the original image, and std_f and mean_f are those of the fused image.

6.2. Accuracy assessment

The accuracy assessment is expressed by the error matrix, which is obtained by comparing pixels in the ground reference data and the classified image [13]. The overall accuracy can be calculated in different ways. The overall accuracy of the classified image illustrates to what extent the classified pixels match the actual land cover conditions obtained from the corresponding ground truth data. Producer's accuracy means how well the land cover could be classified, and it measures the omission errors. User's accuracy illustrates the comparison between the supervised classified pixel and its corresponding data in the real-world location [14].

7. Results and analysis

a. The fused image from the decompressed raw pan and mul images was compared with the 10 fused images from the compressed pan and mul images of the different compression ratios to get the RMSE, so as to study the effect of compression on image fusion, as shown in Table 1.

From Fig. 4, it is obvious that the compression ratio 1:10 gives the least RMSE, equal to 0.339, and the RMSE increases as the compression ratio increases from 1:20 to 1:100. The compression ratio 1:100 gives the highest RMSE.
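As a quick illustration, the criterion of Eq. (1) can be sketched in NumPy from the global statistics of two images; the arrays below are toy data, not the study's imagery:

```python
import numpy as np

def fusion_rmse(original, fused):
    # Eq. (1): combine the differences in global standard deviation and mean.
    std_diff = original.std() - fused.std()
    mean_diff = original.mean() - fused.mean()
    return float(np.sqrt(std_diff**2 + mean_diff**2))

# An image compared with itself gives the ideal value of zero.
img = np.arange(16.0).reshape(4, 4)
print(fusion_rmse(img, img))  # 0.0
# Shifting every pixel by 1 changes the mean by 1 but not the std.
print(fusion_rmse(img, img + 1.0))  # 1.0
```

Note that this is a global criterion: two visually different images with the same mean and standard deviation would also score zero.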

Table 1 Illustrates the RMSE for the fused image from the raw images and the 10 fused images from the compressed images.

Compression ratio   1:10      1:20      1:30      1:40      1:50
RMSE                0.339054  1.220939  2.517272  3.247004  4.847105

Compression ratio   1:60      1:70      1:80      1:90      1:100
RMSE                5.286009  6.847213  7.260970  7.954781  8.019851
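The trend described in the text can be checked directly against the values of Table 1 (copied verbatim into the dictionary below):

```python
# RMSE values copied from Table 1, keyed by compression ratio 1:N.
rmse_by_ratio = {
    10: 0.339054, 20: 1.220939, 30: 2.517272, 40: 3.247004, 50: 4.847105,
    60: 5.286009, 70: 6.847213, 80: 7.260970, 90: 7.954781, 100: 8.019851,
}

values = [rmse_by_ratio[r] for r in sorted(rmse_by_ratio)]
# The RMSE grows strictly with the compression ratio, as stated in the text.
assert all(a < b for a, b in zip(values, values[1:]))
print(values[0], values[-1])  # 0.339054 8.019851
```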

Fig. 4 Illustrates the RMSE for the fused image from the raw images and the 10 fused images from the compressed images.

Table 2 The overall accuracy (%) of the 10 decompressed images by the maximum likelihood classification method.

Compression ratio   1:10    1:20    1:30    1:40   1:50
Overall accuracy    54.667  50.667  46.667  54.00  45.33

Compression ratio   1:60   1:70   1:80   1:90   1:100
Overall accuracy    56.66  46.66  50.66  51.33  37.33


Table 3 The overall accuracy (%) of the 10 decompressed images by the neural network classification method.

Compression ratio   1:10    1:20    1:30    1:40    1:50
Overall accuracy    41.245  57.871  42.802  40.612  48.903

Compression ratio   1:60   1:70   1:80   1:90   1:100
Overall accuracy    48.50  50.33  48.98  46.83  41.57
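The overall accuracies in Tables 2 and 3 are derived from error (confusion) matrices as described in Section 6.2. A minimal sketch of the three accuracy measures, using a made-up 3-class matrix rather than the study's data:

```python
import numpy as np

# Rows: classified image; columns: ground reference data (hypothetical counts).
error_matrix = np.array([
    [48,  2,  0],
    [ 3, 44,  3],
    [ 1,  4, 45],
])

# Overall accuracy: correctly classified pixels over all reference pixels.
overall = np.trace(error_matrix) / error_matrix.sum()

# Producer's accuracy (per class, column-wise): measures omission errors.
producers = np.diag(error_matrix) / error_matrix.sum(axis=0)

# User's accuracy (per class, row-wise): how well a classified pixel
# matches its corresponding real-world location.
users = np.diag(error_matrix) / error_matrix.sum(axis=1)

print(round(overall, 4))  # 0.9133
```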

Fig. 5 Overall accuracy for the 10 decompressed images by the maximum likelihood and neural network classification methods, compared with the raw HPF image.
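The maximum likelihood decision rule of Section 5.1 can be sketched as follows; the two-band training samples are synthetic and purely illustrative (equal class priors are assumed), not the classes or data of this study:

```python
import numpy as np

def train_class_stats(samples):
    """Characterize each class by its mean vector and covariance matrix,
    per the Gaussian assumption of Section 5.1."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in samples.items()}

def classify(pixel, stats):
    """Assign the pixel to the class with the highest Gaussian
    log-likelihood (equal priors assumed)."""
    best, best_ll = None, -np.inf
    for c, (mu, cov) in stats.items():
        d = pixel - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)
        if ll > best_ll:
            best, best_ll = c, ll
    return best

rng = np.random.default_rng(1)
samples = {
    "water":      rng.normal([30, 40],  5, size=(50, 2)),
    "vegetation": rng.normal([90, 150], 5, size=(50, 2)),
}
stats = train_class_stats(samples)
print(classify(np.array([32.0, 41.0]), stats))   # water
print(classify(np.array([88.0, 148.0]), stats))  # vegetation
```

Because the covariance matrix differs per class, the decision boundary is quadratic rather than a simple nearest-mean rule.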

b. After compressing and decompressing the high pass filter fused image, the 10 decompressed images were classified by two methods: the maximum likelihood classification method and the neural network classification method. The overall accuracy was calculated for each decompressed image and compared with the overall accuracy of the original high pass filter fused image, which equals 50.667%, as shown in Tables 2 and 3; Fig. 5 also shows the overall accuracy for the 10 decompressed images.

It is found that the compression ratio 1:100 gives the least overall accuracy and the compression ratios 1:10 and 1:20 give the best overall accuracy, while the other compression ratios give overall accuracies close to each other.

8. Conclusion

– Image compression:
The results show that the compression ratio 1:10 is the best ratio, with little effect on image fusion, as it gives the lowest RMSE (0.339). The other compression ratios (1:20, 1:30, . . ., 1:100) give higher RMSE, which increases with the compression ratio; the ratio 1:100 gives the highest RMSE and has the worst effect on the image fusion.
– Image classification:
• From the previous results, it is found that the maximum likelihood classification method is better than the neural network classification method because it gives an overall accuracy better than the raw HPF image's accuracy at several compression ratios.
• The maximum likelihood classification method gives an overall accuracy higher than that of the HPF fused image at compression ratios 1:10, 1:40, and 1:60.
• The overall accuracies for the raw HPF image and the maximum likelihood classification method are equal at compression ratio 1:90.
• The neural network classification method gives a better overall accuracy only at compression ratio 1:20.
• Fig. 5 illustrates that to get an overall accuracy of 57.78%, the best classification method is the neural network with a compression ratio of 1:20, while the maximum likelihood classification method is acceptable with a compression ratio of 1:60.

References

[1] John Watkinson, The Art of Sound Reproduction, Focal Press, 1998, pp. 268, 479, ISBN 0-240-51512-9. Retrieved March 9, 2010.
[2] Bruce Main, Cut 'Em Off at the Pass: Effective Uses of High-Pass Filtering, February 16, Live Sound International


(Framingham, Massachusetts: ProSoundWeb, EH Publishing).
[3] Paul M. Mather, Computer Processing of Remotely Sensed Images: An Introduction, third ed., John Wiley and Sons, 2004, p. 181, ISBN 978-0-470-84919-4.
[4] W.L. Lau, Z.L. Li, K.W. Lam, Effects of JPEG compression on image classification, Int. J. Rem. Sens. 24 (7) (2003) 1535–1544.
[5] A. Zabala, X. Pons, Effects of lossy compression on remote sensing image classification of forest areas, Int. J. Appl. Earth Observ. Geoinform. 13 (2011) 43–51.
[6] L. Zhai, X.M. Tang, G. Zhang, X. Wu, Effects of JPEG2000 and SPIHT compression on image classification, in: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008, pp. 541–544.
[7] M. Mozammel Hoque Chowdhury, Amina Khatun, Image compression using discrete wavelet transform, (IJCSI) Int. J. Comput. Sci. Iss. 9 (4, No. 1) (2012) 327–330.
[8] B. Shrestha, G.C. O'Hara, H.N. Younan, JPEG2000: image quality metrics, ASPRS 2005 Annual Conference, Geospatial Goes Global: From Your Neighborhood to the Whole Planet, Baltimore, Maryland, 2005.
[9] J.B. Campbell, Introduction to Remote Sensing, second ed., Guilford, New York, 1996.
[10] ERDAS, ERDAS Field Guide, ERDAS, Inc., Norcross, GA, 2010.
[11] LizardTech, LizardTech's MrSID Technology, Celartem Inc. dba LizardTech, Seattle, Washington, USA, 2010.
[12] T.M. Lillesand, R.W. Kiefer, Remote Sensing and Image Interpretation, third ed., John Wiley & Sons, New York, 1994.
[13] J.R. Jensen, Introductory Digital Image Processing: A Remote Sensing Perspective, third ed., Pearson Prentice Hall, Upper Saddle River, NJ, 2005.
[14] R.G. Congalton, A review of assessing the accuracy of classifications of remotely sensed data, Rem. Sens. Environ. 37 (1) (1991) 35–46.
