The Zernike basis functions are defined over the unit disk as
V_nm(x, y) = V_nm(ρ, θ) = R_nm(ρ) e^{jmθ} (1)
where the radial polynomial R_nm(ρ) is given by
R_nm(ρ) = Σ_{s=0}^{(n−|m|)/2} [(−1)^s (n−s)! / (s! ((n+|m|)/2 − s)! ((n−|m|)/2 − s)!)] ρ^{n−2s} (2)
The Zernike moment of order n with repetition m for a continuous image f(x, y) is
Z_nm = ((n+1)/π) ∬_{x²+y²≤1} f(x, y) V*_nm(x, y) dx dy (3)
and for a digital image the integral is replaced by a summation:
Z_nm = ((n+1)/π) Σ_x Σ_y f(x, y) V*_nm(x, y), x² + y² ≤ 1 (4)
The features of an image I1 can be represented by a vector of selected Zernike moments Z(1) = (Z1(1), ..., Zn(1)), and the features of a second image I2 by a vector Z(2). The Euclidean metric is the distance measure applied to the feature vectors of the two images in the Zernike domain, as in the following equation:
d(Z(1), Z(2)) = sqrt( Σ_{i=1}^{n} (Zi(1) − Zi(2))² ) (5)
(6)
where FSM(I1, I2) represents the FSM similarity between the two images I1 and I2; usually, I1 represents the reference image and I2 represents a corrupted version of I1. FSIM(I1, I2) represents the feature similarity measure (FSIM) of [9], given by
FSIM = Σ_{x∈Ω} S_L(x) · PC_m(x) / Σ_{x∈Ω} PC_m(x) (7)
where Ω is the whole image spatial domain, S_L(x) is the local similarity and PC_m(x) is the phase-congruency weighting.
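The Zernike-domain distance step above can be sketched as follows (a minimal sketch; the Zernike feature extraction itself is omitted, and the moment values shown are illustrative, not taken from the paper):

```python
import math

def euclidean_distance(z1, z2):
    # Euclidean distance between two Zernike feature vectors,
    # as in equation (5).
    if len(z1) != len(z2):
        raise ValueError("feature vectors must have equal length")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(z1, z2)))

# Illustrative (made-up) moment magnitudes for a reference image
# and a slightly corrupted version of it:
z_ref = [0.91, 0.40, 0.13, 0.07]
z_test = [0.88, 0.42, 0.10, 0.09]
d = euclidean_distance(z_ref, z_test)  # small distance -> similar images
```

A smaller distance indicates greater similarity between the two images in the Zernike domain.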
___________________________________________________________________________________________________________________________________
IJIRIS: Impact Factor Value – SJIF: Innospace, Morocco (2016): 4.651
Indexcopernicus: (ICV 2016): 88.20
© 2014- 18, IJIRIS- All Rights Reserved Page -477
International Journal of Innovative Research in Information Security (IJIRIS) ISSN: 2349-7017
Issue 08, Volume 5 (October 2018) www.ijiris.com
(8)
The constants are chosen as a = 5, b = 3 and c = 7, while ε = 0.01 is added to balance the quotient and avoid division by zero. The function corr2 refers to the global 2D correlation between the images, as follows:
corr2(A, B) = Σ_m Σ_n (A_mn − Ā)(B_mn − B̄) / sqrt( (Σ_m Σ_n (A_mn − Ā)²) (Σ_m Σ_n (B_mn − B̄)²) ) (9)
where Ā and B̄ are the image global means. This function is applied here to the new images obtained by applying edge detection to the original images I1 and I2.
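The global 2D correlation of equation (9) is the same quantity computed by MATLAB's corr2; a minimal pure-Python sketch (the edge-detection step is omitted, and the images are assumed to be equal-size nested lists) is:

```python
def corr2(a, b):
    # Global 2D correlation coefficient between two equal-size images,
    # as in equation (9): the Pearson correlation over all pixels.
    n_rows, n_cols = len(a), len(a[0])
    n = n_rows * n_cols
    mean_a = sum(sum(row) for row in a) / n
    mean_b = sum(sum(row) for row in b) / n
    num = den_a = den_b = 0.0
    for i in range(n_rows):
        for j in range(n_cols):
            da = a[i][j] - mean_a
            db = b[i][j] - mean_b
            num += da * db
            den_a += da * da
            den_b += db * db
    return num / ((den_a * den_b) ** 0.5)
```

Identical images give corr2 = 1, and perfectly anti-correlated images give corr2 = −1.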
C. Zernike-Entropy Image Similarity Measure (Z-EISM)
Two main tools are employed in the introduced measure: Zernike moments, which have proven effective in the extraction of image features, and an information-theoretic approach represented by entropy, since information theory has a high capability to capture the relationship between image intensity values. In this paper, we use the Picard entropy [14], apply it to the joint histogram of the images treated as a probability distribution, and then combine the result with the Zernike approach. The proposed measure is given by the following formulas:
(10)
where:
or:
Now we apply the entropy to measure the information held in the joint histogram, which represents the joint probability of pixel co-occurrence. Note that both components range from 0 to 1. First, the Picard entropy measure is applied to obtain the Picard-Histogram Similarity Measure (PHS) as follows:
where the colon operator, as defined in MATLAB, reshapes the 2D joint histogram into a one-dimensional column vector whose length is the product of the original dimensions. Using other entropies could be more helpful; however, this is beyond the scope of this paper at the moment and will be investigated in future work. Now, after the detailed description of the entropy and the joint histogram, we introduce the final formula for the proposed measure.
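The joint-histogram step described above can be sketched as follows (a minimal sketch assuming grayscale images given as equal-size nested lists; Shannon entropy is used here as a stand-in, since the exact Picard entropy formula of [14] is not reproduced in this section):

```python
import math
from collections import Counter

def joint_histogram_flat(img_a, img_b):
    # Count co-occurring pixel pairs (a[i][j], b[i][j]) and return the
    # joint histogram flattened to a 1-D probability distribution,
    # mirroring the colon-operator reshape described in the text.
    pairs = [(pa, pb) for row_a, row_b in zip(img_a, img_b)
             for pa, pb in zip(row_a, row_b)]
    counts = Counter(pairs)
    total = len(pairs)
    return [c / total for c in counts.values()]

def histogram_entropy(probs):
    # Entropy of the flattened joint histogram, treated as a
    # probability distribution (Shannon form, in bits).
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

For identical images the mass concentrates on the diagonal pairs, so the entropy stays low relative to the case of two unrelated images, which spread mass over many co-occurrence bins.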
where Z-EISM(I1, I2) refers to the similarity between the two images I1 and I2; I1 always represents the reference image and I2 represents a corrupted version of it. ZM denotes the Zernike moments and PHS is the Picard entropy applied to the joint histogram as defined in equation (14).
Fig. 1 Eight TID2008 non-face reference images used for testing and comparing the image similarity measures.
IV. RESULTS AND DISCUSSION
In order to show that the proposed measure outperforms current measures, all measures must be tested under abnormal conditions, so that the resulting measure is robust under challenging conditions and usable in real time. To demonstrate the results obtained by the introduced measure against the FSM and ZM metrics, using MATLAB 2015a, four datasets were used to evaluate the performance of all measures in this paper: TID2008 [15], IVC [16], AT&T [17] and FEI [18], serving as non-face and face image databases for testing. The following figures show these sets respectively.
Fig. 2 Ten IVC non-face reference images used for testing and comparing the image similarity measures.
Fig. 3 Forty AT&T reference face images; each person has ten different poses, facial expressions and lighting conditions, used for testing and comparing the measures on image similarity for face recognition.
Fig. 4 Fifty FEI reference face images; each person has fourteen different poses, facial expressions and lighting conditions, used for testing and comparing the measures on image similarity for face recognition.
In this test we divided each dataset into two subgroups: a testing group and a training group. From the training group, we choose a random face image from the face database and a non-face image from the non-face database as the reference image; we then select, from the testing group, a different facial expression, distortion or pose of the same image as a challenge image to evaluate the performance of the measures in recognition and similarity.
(a) (b)
(c)
Fig. 5 Performance of the recognition measures using an original image and a distorted version of it: (a) the reference image; (b) the distorted version of it; (c) performance of FSM, ZSM and Z-EISM using a non-face image. Confidence in recognition for FSM, ZSM and Z-EISM is 0.8731, 0.5664 and 0.9951, respectively.
In these results, we have two kinds of tests: one for image similarity and the other for recognition (face and non-face), to evaluate and test the proposed Z-EISM against the Zernike moments and FSM; in addition, we compute the average similarity for each measure as a confidence score for the presented measure.
In the first test, we randomly selected two images (a reference image and a test image) from each of the two databases (TID2008 and IVC, respectively); other images also achieve comparably good results. Figure 5 has three panels: (a) the reference image, (b) the test image (a distorted version of the reference image), and (c) the performance of all measures in finding the similarity. In Figure 6 we repeat the process of the previous step with a different reference image. The remaining figures run the test on face images from the AT&T and FEI databases. The results show that the presented measure outperforms the current measures by giving a large separation between the best and second-best matches, while the other measures find the proper image but with low confidence in their decisions, because there are many cases of distrust (large similarities with wrong images) in their decisions. This is a serious challenge when these measures are employed in security recognition tasks. Even if a very different image shares some similar pixels, the use of the joint histogram in our proposed measure avoids such spurious matches; this is a key characteristic of the Zernike-Entropy Image Similarity Measure (Z-EISM).
(a) (b)
(c) [Plot: "Image Recognition Using Similarity Measures"; similarity with reference vs. image index p for FSM, ZSM and Z-EISM; maximum similarity at p = 9]
Fig. 6 Performance of the recognition measures using an original image and a distorted version of it: (a) the reference image; (b) the distorted version of it; (c) performance of FSM, ZSM and Z-EISM using a non-face image. Confidence in recognition for FSM, ZSM and Z-EISM is 0.7944, 0.3791 and 0.9937, respectively.
(a) (b)
(c)
Fig. 7 Performance of the recognition measures using an original image and a pose image: (a) the reference image; (b) a different pose and facial expression of the reference image; (c) performance of FSM, ZSM and Z-EISM using a face image. Confidence in recognition for FSM, ZSM and Z-EISM is 0.8004, 0.3511 and 0.9930, respectively.
(a) (b)
(c)
Fig. 8 Performance of the recognition measures using an original image and a pose image: (a) the reference image; (b) a different pose and facial expression of the reference image; (c) performance of FSM, ZSM and Z-EISM using a face image. Confidence in recognition for FSM, ZSM and Z-EISM is 0.6846, 0.4318 and 0.9939, respectively.
(a) (b)
(c)
Fig. 9 Performance of the recognition measures using an original image and a pose image: (a) the reference image; (b) a different pose and facial expression of the reference image; (c) performance of FSM, ZSM and Z-EISM using a face image. Confidence in recognition for FSM, ZSM and Z-EISM is 0.6998, 0.2295 and 0.9937, respectively.
(a) (b)
(c)
Fig. 10 Performance of the recognition measures using an original image and a pose image: (a) the reference image; (b) a different pose, facial expression and lighting of the reference image; (c) performance of FSM, ZSM and Z-EISM using a face image. Confidence in recognition for FSM, ZSM and Z-EISM is 0.6873, 0.4241 and 0.9950, respectively.
The difference in the peak values of each measure is a new feature showing the high performance of the proposed Z-EISM. If the distance between the highest match and the second-best match is larger, the measure has better performance, and vice versa; i.e., if the distance is small, the measure is easily confused in deciding the best match, giving a non-trivial similarity between different images. This recognition-confidence feature can be very useful in security systems over large databases. To show the real performance of the proposed measure, we computed an average similarity difference using every image as a reference and all images as test images. In this case, the similarity difference is (best match of the reference image) minus (second-best match among all other images).
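A small sketch of this similarity-difference confidence, assuming a list of similarity scores (one score per database image) for a given reference image; the scores below are illustrative, not values from the paper:

```python
def confidence_margin(similarities):
    # Difference between the best match and the second-best match in a
    # list of similarity scores; a larger margin means the measure is
    # more decisive in its recognition decision.
    ranked = sorted(similarities, reverse=True)
    return ranked[0] - ranked[1]

# Illustrative scores for one reference image against a small database:
scores = [0.41, 0.99, 0.37, 0.52]
margin = confidence_margin(scores)  # 0.99 - 0.52
```

Averaging these margins over every choice of reference image gives the global average similarity difference reported in Table 1.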
In this paper, we computed the average similarity as a confidence measure for every image in the TID2008 and IVC datasets. The global average can be obtained as the mean of all these sub-averages. Let C_i denote the similarity confidence when image i (together with its distorted versions) is used as the reference while recognizing it among all N images (original and distorted); let N_d denote the number of distorted images and N_o the number of original images. Then the global confidence average is taken as C = (1/N_o) Σ_{i=1}^{N_o} C_i. Table 1 shows the performance of the proposed Z-EISM versus the other methods. The preparation of a database suited to this approach (e.g., in security applications) should take into consideration important factors such as lighting, expression and viewpoint, while the reference image should be captured under the same factors.
TABLE I - THE GLOBAL AVERAGE SIMILARITY DIFFERENCE BETWEEN BEST MATCH AND SECOND-BEST MATCH WITHIN ALL IMAGES

Measure                  Z-EISM    Zernike    FSM
Average (all persons)    0.0704    0.0625     0.0279
V. CONCLUSION
The use of information theory in the field of image similarity assessment, or in facial recognition based on finding similarities between images, offers high accuracy, and we have demonstrated the strength and consistency of this method in our previous work. Applying entropy to the joint histogram when comparing images gives very accurate results in detecting similarities between images, especially within large image databases. Integrating more than one algorithm into a single similarity measure, where each algorithm contributes its best features, leads to stable and reliable results and can be applied in real time thanks to its efficiency in recognition.
One of the most widely used methods in face recognition is the Zernike moments, which were employed in the proposed measure by combining them with the joint histogram as in equation (15), yielding the Zernike-Entropy Image Similarity Measure (Z-EISM). The proposed measure was tested against state-of-the-art methods based on the statistical approach to finding similarities between images, and the results indicate that the information-theoretic method is more efficient than the statistical one. They show the effectiveness of the proposed approach, which we will develop in future work by adding other features that are stronger in extracting image characteristics.
REFERENCES
[1]. Chalom, Edmond, Eran Asa, and Elior Biton. "Measuring image similarity: an overview of some useful
applications." IEEE Instrumentation & Measurement Magazine 16.1 (2013): 24-28.
[2]. Chandler, Damon M. "Seven challenges in image quality assessment: past, present, and future
research." ISRN Signal Processing 2013 (2013).
[3]. Chandler, Damon M., Md Mushfiqul Alam, and Thien D. Phan. "Seven challenges for image quality
research." Human Vision and Electronic Imaging XIX. Vol. 9014. International Society for Optics and
Photonics, 2014.
[4]. Lajevardi, Seyed Mehdi, and Zahir M. Hussain. "Zernike moments for facial expression recognition." rn 2
(2009): 3.
[5]. Lajevardi, Seyed Mehdi, and Zahir M. Hussain. "Higher order orthogonal moments for invariant facial
expression recognition." Digital Signal Processing 20.6 (2010): 1771-1779.
[6]. Pass, Greg, and Ramin Zabih. "Comparing images using joint histograms." Multimedia systems 7.3 (1999):
234-240.
[7]. Shnain, Noor Abdalrazak, Zahir M. Hussain, and Song Feng Lu. "A feature-based structural measure: An
image similarity measure for face recognition." Applied Sciences 7.8 (2017): 786.
[8]. Wang, Zhou, et al. "Image quality assessment: from error visibility to structural similarity." IEEE
transactions on image processing 13.4 (2004): 600-612.
[9]. Zhang, Lin, et al. "FSIM: a feature similarity index for image quality assessment." IEEE Transactions on Image Processing 20.8 (2011): 2378-2386.
[10]. Aljanabi, Mohammed Abdulameer, Zahir M. Hussain, and Song Feng Lu. "An entropy-histogram approach
for image similarity and face recognition." Mathematical Problems in Engineering 2018 (2018).
[11]. Aljanabi, Mohammed Abdulameer, Noor Abdalrazak Shnain, and Song Feng Lu. "An image similarity
measure based on joint histogram—Entropy for face recognition." Computer and Communications (ICCC),
2017 3rd IEEE International Conference on. IEEE, 2017.
[12]. Hwang, Sun-Kyoo, and Whoi-Yul Kim. "A novel approach to the fast computation of Zernike
moments." Pattern Recognition 39.11 (2006): 2065-2076.
[13]. Canny, John. "A computational approach to edge detection." IEEE Transactions on pattern analysis and
machine intelligence 6 (1986): 679-698.
[14]. Picard, C. F. "The use of information theory in the study of the diversity of biological populations." Proc.
Fifth Berk. Symp. IV. 1979.
[15]. Ponomarenko, Nikolay, et al. "TID2008-a database for evaluation of full-reference visual quality assessment
metrics." Advances of Modern Radioelectronics 10.4 (2009): 30-45.
[16]. Ninassi, A., P. Le Callet, and F. Autrusseau. "Subjective quality assessment - IVC database." [Online]. http://www.irccyn.ec-nantes.fr/ivcdb (2006).
[17]. AT&T Laboratories Cambridge. "The Database of Faces." http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
[18]. "FEI Face Database." http://fei.edu.br/~cet/facedatabase.html