International Journal of Innovative Research in Information Security (IJIRIS) ISSN: 2349-7017

Issue 08, Volume 5 (October 2018) www.ijiris.com

ZERNIKE-ENTROPY IMAGE SIMILARITY MEASURE BASED ON JOINT HISTOGRAM FOR FACE RECOGNITION
Mohammed Abdulameer Aljanabi*
School of Computer Science and Technology,
Huazhong University of Science and Technology, China
mohammed@hust.edu.cn
Zahir M. Hussain
Faculty of Computer Science & Mathematics,
University of Kufa, Iraq
zmhussain@ieee.org
Noor Abd Alrazak Shnain
School of Computer Science and Technology,
Huazhong University of Science and Technology, China
nooraljanabi@hust.edu.cn
Song Feng Lu
School of Computer Science and Technology, Huazhong University of Science and Technology, China
Shenzhen Huazhong University of Science and Technology Research Institute, China
lusongfeng@hust.edu.cn
Manuscript History
Number: IJIRIS/RS/Vol.05/Issue08/OCIS10080
DOI: 10.26562/IJIRAE.2018.OCIS10080
Received: 04, October 2018
Final Correction: 10, October 2018
Final Accepted: 14, October 2018
Published: October 2018
Citation: Aljanabi, M. A., Hussain, Z. M., Shnain, N. A. & Lu, S. F. (2018). ZERNIKE-ENTROPY IMAGE SIMILARITY MEASURE BASED ON JOINT HISTOGRAM FOR FACE RECOGNITION. IJIRIS:: International Journal of Innovative Research in Information Security, Volume V, 475-485. doi: 10.26562/IJIRIS.2018.OCIS10080
Editor: Dr.A.Arul L.S, Chief Editor, IJIRIS, AM Publications, India
Copyright: ©2018 This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract— Image similarity for face recognition requires a combination of tools that are both powerful and stable under challenges such as varying illumination, different environments, and complex poses. In this paper, we combine two robust approaches to image similarity and face recognition, Zernike moments and information theory, into one proposed measure, the Zernike-Entropy Image Similarity Measure (Z-EISM). Z-EISM incorporates the concepts of Picard entropy and a modified one-dimensional version of the two-dimensional joint histogram of the two images under test. Four datasets have been used to test, compare, and show that the proposed Z-EISM has better performance than the existing measures.
Keywords: Zernike moment; Entropy; Joint Histogram; Feature-based structural measure; Statistical approaches;
I. INTRODUCTION
Digital image processing is a wide field, and image similarity and facial recognition are among its most interesting and significant topics because they handle sensitive cases in security systems. Comparing two images to find the similarity or dissimilarity between them with an image similarity metric serves several purposes; examples include image search on the internet, whether for personal use or for detecting copyright infringement. Another application of image similarity measures is reconstructing a large still image from a database of small images [1].
___________________________________________________________________________________________________________________________________
IJIRIS: Impact Factor Value – SJIF: Innospace, Morocco (2016): 4.651
Indexcopernicus: (ICV 2016): 88.20
© 2014- 18, IJIRIS- All Rights Reserved Page -475
Perhaps the most important application of image similarity is security: finding the similarity between two face images. That is the focus of this paper, and our proposed metric decides more confidently between the proper face image and a non-face image. Every technique poses challenges for the researchers and developers working on it, and the image similarity approach is no exception. Some of these challenges are fundamental and actively being researched; some are relatively resolved, and others remain largely unsolved. One challenge is designing a model of human visual processing that can cope with natural images; this requires improved models of the primary visual cortex, more ground-truth data on natural images, and models that incorporate processing by higher-level visual areas. Researchers also face the problem of designing an algorithm that can cope with the diversity of distortions that image similarity algorithms encounter. Distortion of an image's appearance is a particular challenge when designing an image similarity algorithm; in fact, there are differences between distortions perceived as additive and distortions that affect the image's objects, and there is a need for adaptive visual approaches and other high-level effects that humans use when judging quality. Images simultaneously distorted by multiple types of distortion are an interesting challenge for researchers who design and improve image similarity approaches, since multiple distortions affect image quality jointly and may interact perceptually. Geometric changes to images are another challenge faced by image similarity algorithms [2]. Further challenges stem from the lack of complete perceptual models for natural images, supra-threshold distortions, interactions between distortions and images, images containing multiple and non-traditional distortions, and images containing enhancements. The objective of this paper is not only to highlight the limitations in our current knowledge of image quality, but also to emphasize the need for additional fundamental research in similarity perception [3]. This paper follows the information-theoretic approach and includes the following sections: Section 2 describes related works that give high-performance measures; Section 3 presents the existing measures and the proposed measure; Section 4 shows simulation results and discussion; and Section 5 presents conclusions and future work.
II. RELATED WORKS
Many works use the same existing measures for comparison in image similarity or facial recognition because of the robustness of these measures, so this section reviews the works related to the one proposed in this paper. In general, the Zernike moment is well suited to face recognition tasks. This orthogonal moment outperforms other types of moments in terms of information redundancy and image representation, so Zernike moments are very popular in image analysis due to their feature representation capability with minimal information redundancy, their low dimension, and their enhanced discriminatory power even in noisy environments. In 2009, S. Lajevardi and Z. Hussain presented a facial expression recognition system using Zernike moments as a feature extractor and considered the challenges such a system faces, including illumination conditions, pose, rotation, and noise. The experimental results show that the presented system has high performance and obtains good results on images with noise and rotation, although its feature extraction time is slower than that of other methods [4]. In 2010, the same authors proposed a rotation- and noise-invariant face recognition system using an orthogonal invariant moment as a feature extractor and a naive Bayesian classifier. The system was tested under different facial expressions, illumination conditions, poses, rotations, noise, and other conditions; simulation results on different databases indicated the high performance of the proposed system [5]. The metric proposed in this paper is mainly based on the combination of the Zernike moment and entropy; it incorporates the concepts of Picard entropy and a modified 1D version of the 2D joint histogram of the two images under test. G. Pass and R. Zabih proposed the joint histogram as an alternative to the colour histogram that incorporates further information without losing the accuracy of the colour histogram. The joint histogram (JH) is constructed from a set of local pixel features, forming a multidimensional histogram; each entry in the JH contains the number of pixels in the image described by a particular combination of feature values. Evaluated on a database of over 210,000 images, joint histograms outperform colour histograms by an order of magnitude when used to find the similarity of images [6]. Overall, there are two directions in image similarity assessment: statistical and information-theoretic measures. The state-of-the-art statistical measure, called FSM, was proposed by N. A. Shnain et al. [7]; it uses the best features of the traditional structural similarity measure SSIM [8] and the feature similarity index measure FSIM [9] for image-quality assessment, while information-theoretic measures have been used for image similarity and face recognition by M. A. Aljanabi et al. [10], [11].

III. THE EXISTING MEASURES AND THE PROPOSED MEASURE


It is a good plan to combine more than one measure and balance them mathematically to obtain one algorithm that can use all the features of each constituent metric. Two existing image similarity and face recognition measures are addressed in this paper: one for combination and the other for comparison.
A. Zernike Moments (ZMs)
Zernike moments are a type of moment function and are considered efficient global image descriptors. They map an image onto a set of complex Zernike polynomials that are orthogonal to each other. Thanks to this orthogonality, Zernike moments can capture the properties of an image without any redundancy or overlap of information between the moments; this characteristic makes each moment unique in the information it carries about the image. Due to these properties, Zernike moments have been used as feature sets in applications such as pattern recognition, content-based image retrieval, and other image analysis systems. In this work we extract various Zernike moments (the Zernike domain) as face features, then define a similarity measure after imposing a distance measure in this domain; the distance measure is the Euclidean metric.
The ZMs of an image function f(x, y) with order n and repetition m are given as [12]:

Z_{nm} = \frac{n+1}{\pi} \iint_{x^2+y^2 \le 1} f(x, y)\, V_{nm}^{*}(x, y)\, dx\, dy    (1)

where V_{nm}(\rho, \theta) = R_{nm}(\rho)\, e^{jm\theta}. The radial polynomial is given by:

R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^{s}\,(n-s)!}{s!\,\big(\frac{n+|m|}{2}-s\big)!\,\big(\frac{n-|m|}{2}-s\big)!}\, \rho^{n-2s}    (2)

when |m| \le n and n - |m| is even.

The discrete approximation of equation (1) is defined as:

Z_{nm} = \frac{n+1}{\lambda_N} \sum_{x} \sum_{y} f(x, y)\, V_{nm}^{*}(x, y), \qquad x^2 + y^2 \le 1    (3)

where \lambda_N is a normalization factor and the polar coordinates of a pixel are given by

\rho = \sqrt{x^2 + y^2}, \qquad \theta = \tan^{-1}(y/x)    (4)

The features of an image x can be represented by a vector Z_x of selected Zernike moments, and the features of an image y by a vector Z_y. The Euclidean metric is the distance measure applied to the feature vectors of the two images in the Zernike domain, as in the following equation:

d(x, y) = \| Z_x - Z_y \| = \sqrt{\sum_{i} \big( Z_{x,i} - Z_{y,i} \big)^2}    (5)
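The pipeline of equations (1)-(5) can be sketched in Python as follows. This is a minimal illustration, not the paper's MATLAB code; the discrete normalization used in `zernike_moment` is one common choice and may differ from the authors' \lambda_N.

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_{n,m}(rho), Eq. (2)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """Discrete approximation of Z_{n,m} over the unit disk, Eq. (3)."""
    N = img.shape[0]                         # assume a square image
    y, x = np.mgrid[0:N, 0:N]
    x = (2 * x - N + 1) / (N - 1)            # map pixel grid to [-1, 1]
    y = (2 * y - N + 1) / (N - 1)
    rho = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0                        # keep pixels inside the unit disk
    V = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * V[mask]) / mask.sum()

def zernike_features(img, order=8):
    """Feature vector of |Z_{n,m}| for all valid (n, m) up to `order`."""
    feats = []
    for n in range(order + 1):
        for m in range(0, n + 1):
            if (n - m) % 2 == 0:             # repetition constraint of Eq. (2)
                feats.append(abs(zernike_moment(img, n, m)))
    return np.array(feats)

def zernike_distance(img_x, img_y, order=8):
    """Euclidean distance between Zernike feature vectors, Eq. (5)."""
    return np.linalg.norm(zernike_features(img_x, order)
                          - zernike_features(img_y, order))
```

Using the moment magnitudes makes the features rotation-invariant, which matches the use of Zernike moments for pose-robust face features described above.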

B. Feature-Based Structural Measure (FSM)
FSM is an efficient state-of-the-art similarity measure [7], based on combining the best features and statistical properties provided by SSIM and FSIM and trading off between their performances for similar and dissimilar images, while the Canny edge detector adds a distinctive structural feature: after processing by Canny's edge filter [13], two binary images, x_c and y_c, are obtained from the original two images x and y. FSM can be given by:

(6)

where FSM(x, y) represents the FSM similarity between two images x and y; usually, x represents the reference image and y represents a corrupted version of x. F(x, y) represents the feature similarity measure (FSIM), given by

FSIM(x, y) = \frac{\sum_{p \in \Omega} S_L(p)\, PC_m(p)}{\sum_{p \in \Omega} PC_m(p)}    (7)

where \Omega is the whole image spatial domain, S_L(p) is the overall local similarity at pixel p, and PC_m(p) is the maximum phase congruency of the two images at p.

and S(x, y) represents the structural similarity measure (SSIM), given by

SSIM(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}    (8)

where \mu_x, \mu_y are the image means, \sigma_x^2, \sigma_y^2 the variances, \sigma_{xy} the covariance, and C_1, C_2 small stabilizing constants. The constants in (6) are chosen as 5, 3 and 7, while 0.01 is added to balance the quotient and avoid division by zero. The function r(\cdot, \cdot) refers to the global 2D correlation between the images, as follows:

r(x, y) = \frac{\sum_{m}\sum_{n} (x_{mn} - \bar{x})(y_{mn} - \bar{y})}{\sqrt{\big(\sum_{m}\sum_{n} (x_{mn} - \bar{x})^2\big)\big(\sum_{m}\sum_{n} (y_{mn} - \bar{y})^2\big)}}    (9)

where \bar{x} and \bar{y} are the image global means. This function is applied here as r(x_c, y_c) on the new binary images obtained from the application of edge detection to the original images x and y.
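Equation (9) is the standard global 2D correlation coefficient (the quantity MATLAB's `corr2` computes); a minimal NumPy equivalent:

```python
import numpy as np

def corr2(x, y):
    """Global 2D correlation coefficient r(x, y) of Eq. (9):
    covariance of the two images divided by the product of their
    standard deviations, computed with the global image means."""
    x = x - x.mean()
    y = y - y.mean()
    denom = np.sqrt((x ** 2).sum() * (y ** 2).sum())
    return (x * y).sum() / denom
```

In FSM this is applied to the two Canny edge maps, so it scores how well the structural skeletons of the images line up.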
C. Zernike-Entropy Image Similarity Measure (Z-EISM)
In this paper, two main tools are employed in our proposed measure: Zernike moments, which are proven effective in extracting image features, and an information-theoretic approach represented by entropy, since information theory has a high capability to predict the relationship between image intensity values. We use the Picard entropy [14], apply it to the joint histogram of the images treated as a probability distribution, and then combine the result with the Zernike approach. The proposed measure is given by the following formulas.

(10)

where H_P is the Picard entropy of a discrete random variable X whose values have corresponding probabilities p_1, ..., p_n. After this definition of entropy, we apply the entropy to the joint histogram as follows:
A 2D joint histogram entry H_{xy}(i, j) for two images x and y represents the probability that a pixel intensity value i from image x co-occurs with a pixel intensity value j from image y. The normalized joint histogram for two images x and y of size M \times N is defined here as follows:

H_{xy}(i, j) = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \delta\big(x(m, n) - i,\; y(m, n) - j\big)    (11)

where:

\delta(a, b) = 1 \quad \text{if } a = 0 \text{ and } b = 0    (12)

or:

\delta(a, b) = 0 \quad \text{otherwise}    (13)
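The normalized joint histogram of Eq. (11) can be sketched as follows, assuming integer-valued grayscale images of equal size (`bins` would be 256 for 8-bit images):

```python
import numpy as np

def joint_histogram(x, y, bins=256):
    """Normalized 2D joint histogram H(i, j): the fraction of pixel
    positions at which image x has intensity i and image y has
    intensity j, so all entries sum to 1 (Eq. (11))."""
    assert x.shape == y.shape, "images must have the same size"
    H = np.zeros((bins, bins))
    # count co-occurring intensity pairs at the same pixel position
    np.add.at(H, (x.ravel(), y.ravel()), 1)
    return H / x.size
```

For identical images, all mass falls on the diagonal H(i, i); the more the two images differ, the more the mass spreads off the diagonal, which is what the entropy step then quantifies.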
Now we apply the entropy to measure the information held in the joint histogram, which represents the joint probability of pixel co-occurrence; note that both i and j range over the intensity levels of the images. First, the Picard entropy measure is applied to get the Picard-Histogram Similarity measure (PHS) as follows:

(14)

where reshape(H_{xy}, LK, 1) turns the L \times K 2D joint histogram into a one-dimensional column vector via the colon operator, as defined in MATLAB. Using other entropies could be more helpful; however, this is beyond the scope of this paper and will be investigated in future work. After this detailed description of the entropy and the joint histogram, we introduce the final formula of the proposed measure:

(15)

where Z-EISM(x, y) refers to the similarity between two images x and y; x always represents the reference image and y a corrupted version of it. ZM denotes the Zernike moments and PHS is the Picard entropy applied to the joint histogram as defined in equation (14).
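The flatten-then-measure pipeline of Eq. (14) can be illustrated with the sketch below. Note that Shannon entropy is used here purely as a stand-in to show the shape of the computation; the paper applies the Picard entropy of Eq. (10) at this step.

```python
import numpy as np

def phs_sketch(H):
    """Entropy of the flattened joint histogram (pipeline of Eq. (14)).
    NOTE: Shannon entropy is a stand-in here; the paper uses Picard
    entropy (Eq. (10)) on the same flattened probability vector."""
    p = H.ravel()              # reshape(H, L*K, 1) in the paper's notation
    p = p[p > 0]               # drop empty histogram bins (0 log 0 := 0)
    return -(p * np.log2(p)).sum()
```

A joint histogram concentrated on the diagonal (similar images) yields low entropy, while a spread-out histogram (dissimilar images) yields high entropy, which is the discriminative signal Z-EISM exploits.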


Fig. 1 The eight TID2008 non-face reference images used for testing and comparing the image similarity measures.
IV. RESULTS AND DISCUSSION
To show that the proposed measure improves on current measures, all measures must be tested under abnormal conditions so as to arrive at a measure that is robust, challenge-tolerant, and usable in real time. To demonstrate the results obtained after applying the introduced measure against the FSM and ZM metrics using MATLAB 2015a, four datasets were used to evaluate the performance of all measures in this paper: TID2008 [15], IVC [16], AT&T [17] and FEI [18], serving as face and non-face image databases for testing. The following figures show these sets respectively.

Fig. 2 The ten IVC non-face reference images used for testing and comparing the image similarity measures.

Fig. 3 The forty AT&T reference face images; each person has ten different poses, facial expressions and lighting conditions, used for testing and comparing the measures for image similarity and face recognition.

Fig. 4 The fifty FEI reference face images; each person has fourteen different poses, facial expressions and lighting conditions, used for testing and comparing the measures for image similarity and face recognition.
In this test we divided each dataset into two subgroups: a testing group and a training group. From the training group, we chose a random face image from the face database and a non-face image from the non-face database as reference images; we then selected a different facial expression, distortion, or pose of the same image from the testing group as a challenging image with which to evaluate the performance of the measures in recognition and similarity.

Fig. 5 Performance of the recognition measures using an original image and a distorted version of it: (a) the reference image; (b) the distorted version; (c) performance of FSM, ZSM and Z-EISM on a non-face image. Confidence in recognition for FSM, ZSM and Z-EISM is 0.8731, 0.5664 and 0.9951, respectively.

In these results, we have two kinds of tests: one for image similarity and the other for recognition (face and non-face), to evaluate the proposed Z-EISM against the Zernike moments and FSM; in addition, we compute the average similarity for each measure as a confidence figure for the presented measure.
In the first test, we randomly used two images (a reference image and a test image) from each of the two databases (TID2008 and IVC, respectively); note that other images also achieve good results with high performance. Figure 5 has three panels: (a) is the reference image, (b) is the test image (a distorted version of the reference image), and (c) shows the performance of all measures in finding the similarity. In Figure 6 we repeat the process of the previous step with a different reference image. Figures 7-10 run the test on face images from the AT&T and FEI databases. The results show that the presented measure outperforms the current measures by giving a large distance between the best and second-best matches; the others find the proper image, but with low confidence in their decisions, because there are many cases of distrust (large similarities with wrong images) in their similarity scores. This is a major challenge when these measures are employed in security recognition tasks. Even if there are some similar pixels in very different images, the use of the joint histogram in our proposed measure never gives such a result; this is the distinguishing characteristic of the Zernike-Entropy Image Similarity Measure (Z-EISM).

Fig. 6 Performance of the recognition measures using an original image and a distorted version of it: (a) the reference image; (b) the distorted version; (c) performance of FSM, ZSM and Z-EISM on a non-face image (maximum similarity at image index p = 9). Confidence in recognition for FSM, ZSM and Z-EISM is 0.7944, 0.3791 and 0.9937, respectively.

Fig. 7 Performance of the recognition measures using an original image and a pose image: (a) the reference image; (b) a different pose and facial expression of the reference image; (c) performance of FSM, ZSM and Z-EISM on a face image. Confidence in recognition for FSM, ZSM and Z-EISM is 0.8004, 0.3511 and 0.9930, respectively.

Fig. 8 Performance of the recognition measures using an original image and a pose image: (a) the reference image; (b) a different pose and facial expression of the reference image; (c) performance of FSM, ZSM and Z-EISM on a face image. Confidence in recognition for FSM, ZSM and Z-EISM is 0.6846, 0.4318 and 0.9939, respectively.

Fig. 9 Performance of the recognition measures using an original image and a pose image: (a) the reference image; (b) a different pose and facial expression of the reference image; (c) performance of FSM, ZSM and Z-EISM on a face image (maximum similarity at image index p = 11). Confidence in recognition for FSM, ZSM and Z-EISM is 0.6998, 0.2295 and 0.9937, respectively.

Fig. 10 Performance of the recognition measures using an original image and a pose image: (a) the reference image; (b) a different pose, facial expression and lighting of the reference image; (c) performance of FSM, ZSM and Z-EISM on a face image. Confidence in recognition for FSM, ZSM and Z-EISM is 0.6873, 0.4241 and 0.9950, respectively.
The difference between the peak values of each measure is a new feature showing the high performance of the proposed Z-EISM. If the distance between the highest match and the second-best match is large, the measure has better performance, and vice versa; i.e., if the distance is small, the measure has been confused in deciding the best match, giving a non-trivial similarity to different images. This recognition-confidence feature can be very useful in security systems with big databases. To show the real performance of the proposed measure, we computed an average similarity difference using all images as reference images and all images as test images. In this case, the similarity difference is (best match of the reference image) minus (second-best match among all other images).
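The similarity-difference confidence described above can be computed with a small helper (hypothetical, not from the paper): given a measure's scores against every candidate image, it returns the gap between the best and second-best matches.

```python
import numpy as np

def recognition_confidence(similarities):
    """Confidence = (best-match similarity) - (second-best similarity).
    `similarities` holds one score per candidate image; a larger gap
    means the measure separates the true match more decisively."""
    s = np.sort(np.asarray(similarities, dtype=float))
    return s[-1] - s[-2]
```

For example, a measure scoring [0.2, 0.9951, 0.3] over three candidates has confidence 0.6951, while a confused measure scoring [0.9, 0.95, 0.88] has confidence only 0.05, even though both pick the same best match.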
In this paper, we computed the average similarity difference as a confidence measure for every image in the TID2008 and IVC datasets; the global average can be obtained as the mean of all these sub-averages. Let C_k denote the similarity confidence when image k (together with its distorted versions) serves as the reference while it is recognized among all distorted images; the global confidence average is then the mean of C_k over all reference images. Table 1 shows the performance of the proposed Z-EISM versus the other methods. A database better suited to this approach (e.g., in security applications) should take into consideration important factors like lighting, expression, and viewpoint, and the reference image should consider the same factors.
TABLE I - GLOBAL AVERAGE SIMILARITY DIFFERENCE BETWEEN THE BEST MATCH AND THE SECOND-BEST MATCH WITHIN ALL IMAGES

Measure                  Z-EISM    Zernike    FSM
Average (all persons)    0.0704    0.0625     0.0279

V. CONCLUSION
The use of information theory in image similarity assessment, or in facial recognition via similarities between images, achieves high accuracy, and we have shown the strength and consistency of this approach in our previous work. Applying entropy to the joint histogram when comparing images gives very accurate results in detecting similarities, especially within large image databases. Integrating more than one algorithm for image similarity into one measure, where each algorithm contributes its best features, leads to stable and reliable results and can be applied in real time thanks to its efficiency in recognition.
One of the most widely used methods in face recognition is the Zernike moments, which were employed in the proposed measure by combining them with the joint histogram as in equation (15), yielding the Zernike-Entropy Image Similarity Measure (Z-EISM). The proposed measure was tested against the state-of-the-art statistical method for finding similarities between images, and the comparison showed that the information-theoretic method is more efficient than the statistical one. The results demonstrate the effectiveness of the proposed approach; in future work we will develop it further by adding other features with a stronger effect in extracting image features.
REFERENCES
[1]. Chalom, Edmond, Eran Asa, and Elior Biton. "Measuring image similarity: an overview of some useful
applications." IEEE Instrumentation & Measurement Magazine 16.1 (2013): 24-28.
[2]. Chandler, Damon M. "Seven challenges in image quality assessment: past, present, and future
research." ISRN Signal Processing 2013 (2013).
[3]. Chandler, Damon M., Md Mushfiqul Alam, and Thien D. Phan. "Seven challenges for image quality
research." Human Vision and Electronic Imaging XIX. Vol. 9014. International Society for Optics and
Photonics, 2014.
[4]. Lajevardi, Seyed Mehdi, and Zahir M. Hussain. "Zernike moments for facial expression recognition." rn 2
(2009): 3.
[5]. Lajevardi, Seyed Mehdi, and Zahir M. Hussain. "Higher order orthogonal moments for invariant facial
expression recognition." Digital Signal Processing 20.6 (2010): 1771-1779.
[6]. Pass, Greg, and Ramin Zabih. "Comparing images using joint histograms." Multimedia systems 7.3 (1999):
234-240.
[7]. Shnain, Noor Abdalrazak, Zahir M. Hussain, and Song Feng Lu. "A feature-based structural measure: An
image similarity measure for face recognition." Applied Sciences 7.8 (2017): 786.
[8]. Wang, Zhou, et al. "Image quality assessment: from error visibility to structural similarity." IEEE
transactions on image processing 13.4 (2004): 600-612.
[9]. Zhang, Lin, et al. "FSIM: a feature similarity index for image quality assessment." IEEE Transactions on Image Processing 20.8 (2011): 2378-2386.
[10]. Aljanabi, Mohammed Abdulameer, Zahir M. Hussain, and Song Feng Lu. "An entropy-histogram approach
for image similarity and face recognition." Mathematical Problems in Engineering 2018 (2018).
[11]. Aljanabi, Mohammed Abdulameer, Noor Abdalrazak Shnain, and Song Feng Lu. "An image similarity
measure based on joint histogram—Entropy for face recognition." Computer and Communications (ICCC),
2017 3rd IEEE International Conference on. IEEE, 2017.
[12]. Hwang, Sun-Kyoo, and Whoi-Yul Kim. "A novel approach to the fast computation of Zernike
moments." Pattern Recognition 39.11 (2006): 2065-2076.
[13]. Canny, John. "A computational approach to edge detection." IEEE Transactions on pattern analysis and
machine intelligence 6 (1986): 679-698.
[14]. Picard, C. F. "The use of information theory in the study of the diversity of biological populations." Proc.
Fifth Berk. Symp. IV. 1979.
[15]. Ponomarenko, Nikolay, et al. "TID2008-a database for evaluation of full-reference visual quality assessment
metrics." Advances of Modern Radioelectronics 10.4 (2009): 30-45.
[16]. Ninassi, A., P. Le Callet, and F. Autrusseau. "Subjective quality assessment - IVC database." [Online] http://www.irccyn.ec-nantes.fr/ivcdb (2006).
[17]. “Laboratories, A.T. The Database of Faces,”
http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
[18]. “FEI Face Database,” http://fei.edu.br/∼cet/facedatabase.html

