Introduction
The automatic recognition of an individual based on unique, stable, and non-invasive
characteristics such as freckles, coronas, stripes, crypts, and contractile lines within the iris
texture makes iris recognition a promising solution for security. The externally visible surface of
the multilayered iris contains two zones that often differ in color: an outer ciliary zone and an
inner pupillary zone, divided by the collarette, which appears as a zigzag pattern. The formation
of the unique patterns of the iris is random and not related to any genetic factors. Due to this
epigenetic nature, the two eyes of an individual contain completely independent iris patterns,
and even identical twins possess uncorrelated iris patterns.
(2) Iris liveness detection: Iris liveness detection ensures the trustworthiness of the biometric
system's security against spoofing. The main threats to iris-based systems are:
(a) Eye image: screen image, photograph, paper print, video signal;
(b) Artificial eye: glass/plastic;
(c) Natural eye (genuine user): forced use;
(d) Natural eye (impostor): eye removed from body, contact lens.
(3) Iris recognition: Personal identification using iris recognition is done by matching two iris
templates: one stored in the database during training (enrollment) and the other captured
during recognition (testing). Various methods have been proposed in the literature, the earliest
by J. Daugman, who encoded the visible texture of a person's iris into a compact sequence of
multi-scale quadrature 2-D Gabor wavelet coefficients, whose most significant bits comprise a
256-byte iris code. These iris codes are then matched to perform recognition.
Figure 1.2 (a) Image acquisition (b) Segmentation (c) Feature encoding (d) Matching.
Figure 2.2.1 (a) Eye with cosmetic lens. (b) Eye with non-cosmetic contact lens. (c) Eye with textured
contact lens
Transparent contact lenses can hamper the overall accuracy of an iris biometric system in the
following ways [2]:
1) Slight movement of the lens with respect to the iris produces a marginally different effect
on the iris texture at each presentation.
2) Contact lenses with visible markings such as 'L' or 'R' can confuse an iris recognition
system into registering different eyes as belonging to the same person.
3) The boundary between the support region and the corrective region of the lens can alter
the apparent iris texture.
Thus, it is important to determine whether a person wears a contact lens, since the presence of
either a textured or a transparent contact lens can severely hamper the efficacy of an iris
recognition system. Contact lens detection techniques can be broadly classified into two main
categories:
(1) Two-class problem
(2) Three-class problem.
iris images, their method showed promising results, with a correct classification rate of 100% on
one database and 94.1% on the other. Zhang et al. [5] adopted LBP to represent the texture
pattern of each image. The LBP code extracted for each pixel is then encoded as a SIFT
orientation histogram. Three statistics, namely the standard deviation of the w-LBP histogram
and the mean and standard deviation of the w-LBP map, are used for feature selection, and
finally an SVM is used as the classifier. On a database consisting only of textured-contact-lens
iris images and live iris images, they obtained a correct classification rate of 99.14%. However,
the absence of transparent contact lenses in the database lowers confidence in this technique
when all three types of lenses are present.
Doyle and Bowyer [6] used BSIF features in the form of normalized histograms of pixel
binary codes. They trained six different classifiers on these features, obtained under three
different segmentation conditions: first the whole image, then a 'best guess' image in which
the average iris and pupil radii are used for segmentation, and finally a properly segmented
image. They reported a correct classification rate of 100% for textured lenses and concluded
that textured lens detection is a solved problem. However, they also found that if a novel lens
type not present during training is included in testing, the correct classification rate drops
to 86%.
a previously unseen printed texture is presented or the iris image sample is acquired using an
unknown iris sensor.
Raghavendra et al. [9] used an ICA-based unsupervised scheme to extract BSIF features,
as normalized histograms of pixel binary codes, to characterize the texture of three regions:
eye, iris, and strip. An SVM was used to classify the images of all three regions, followed by
weighted majority voting to combine the decisions. A CCR of up to 87.5% was recorded on the
LG4000 database. Methods of this kind are impractical for real-time application, as three regions
must be processed and classified before a decision can be made. Gupta et al. [10] used three
different features, namely LBP, GIST, and HOG, as inputs to an SVM that classifies images into
different groups. They obtained accuracies of 93.79%, 98.69%, and 62.41% for LBP, GIST, and
HOG features respectively.
Gragnaniello et al. [11,12] used a real segmentation algorithm that excludes the eyelids,
avoids normalization, and considers the information coming from the iris and part of the sclera.
To extract discriminative features they used the rotation- and scale-invariant descriptor (SID),
and carried out classification with the bag-of-words (BoW) paradigm. Accuracy of up to 93.17%
was observed on the Notre Dame database.
3. Proposed Method
In an image of an eye with a contact lens, a faint boundary is always visible surrounding the iris
(fig. 3.1 (a)). The presence or absence of this boundary can be used to classify an image as with
or without a contact lens. The whole classification process can be grouped into four steps:
(1) Segmentation of the region of interest
(2) Feature point extraction from the region of interest
(3) Applying the RANSAC algorithm to find the best-fit circle
(4) Classifying the image into the lens or no-lens category.
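Steps (1)-(2) can be illustrated with a minimal sketch. Here a plain gradient-magnitude threshold stands in for the segmentation and Canny stages actually used in this work, and the synthetic disc image and all names are illustrative assumptions:

```python
import numpy as np

def edge_points(img, rel_thresh=0.5):
    """Candidate feature points: pixels whose gradient magnitude exceeds a
    fraction of the maximum (a simplified stand-in for the Canny detector)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > rel_thresh * mag.max())
    return np.column_stack([xs, ys])

# Synthetic stand-in for a segmented ROI: a bright disc of radius 40
# centred at (64, 64) on a dark background.
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
img = ((xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2).astype(float)

pts = edge_points(img)
# Every detected point should sit near the disc boundary (radius ~40),
# giving the point set that the circle-fitting stage consumes.
radii = np.hypot(pts[:, 0] - 64, pts[:, 1] - 64)
```

The point set `pts` is exactly the kind of input the RANSAC circle-fitting step expects.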
Fig. 3.3 (a) Dilated edge image (b) Boundary traced region excluding eyelids and eyebrows
Fig. 3.4 (a) Region of interest (b) Output of canny edge detection algorithm
Fig. 3.5 (a) Sobel operator (b) Circular operator
Fig. 3.6 (a) Output of Sobel and Circular operator filtering (b) After removing boundary pixels
3.3 RANSAC
Random Sample Consensus (RANSAC) is a paradigm for fitting a model to experimental
data, introduced by Martin A. Fischler and Robert C. Bolles in 1981. As stated by Fischler and
Bolles [13] "The RANSAC procedure is opposite to that of conventional smoothing techniques:
Rather than using as much of the data as possible to obtain an initial solution and then
attempting to eliminate the invalid data points, RANSAC uses as small an initial data set as
feasible and enlarges this set with consistent data when possible".
Objective: Robust fit of a model to a data set S which contains outliers
Algorithm: [14]
(1) Randomly select a sample of 's' data points from 'S' and instantiate the model from
this subset.
(2) Determine the set of data points 'Si' which is within a distance threshold 't' of the
model. The set 'Si', is the consensus set of the sample and defines the inliers of 'S'.
(3) If the size of 'Si' (the number of inliers) is greater than some threshold 'T', re-estimate
the model using all the points in 'Si' and terminate.
(4) If the size of 'Si' is less than 'T', select a new subset and repeat the above.
(5) After 'N' trials the largest consensus set 'Si' is selected, and the model is re-estimated
using all the points in the subset 'Si'.
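The steps above can be sketched as a compact, model-agnostic loop. As a simplification, this version always runs all N trials and keeps the largest consensus set (step 5) rather than exiting early at threshold T; the line-fitting demo and all names are illustrative:

```python
import random
import numpy as np

def ransac(points, fit, error, s, t, N):
    """Generic RANSAC loop following steps (1)-(5) above (no early exit)."""
    best_inliers = []
    for _ in range(N):
        sample = random.sample(points, s)              # step (1): random sample
        model = fit(sample)
        if model is None:                              # degenerate sample
            continue
        inliers = [p for p in points if error(model, p) <= t]  # step (2)
        if len(inliers) > len(best_inliers):           # keep largest consensus set
            best_inliers = inliers
    return fit(best_inliers), best_inliers             # step (5): re-estimate

# Demo: robust fit of a line y = a*x + b despite two gross outliers.
def fit_line(pts):
    xs, ys = zip(*pts)
    if len(set(xs)) < 2:                               # vertical/degenerate sample
        return None
    return tuple(np.polyfit(xs, ys, 1))                # (a, b)

def line_error(model, p):
    a, b = model
    return abs(a * p[0] + b - p[1])                    # vertical distance

random.seed(0)
points = [(x, 2 * x + 1) for x in range(20)] + [(5.0, 50.0), (10.0, -40.0)]
(a, b), inliers = ransac(points, fit_line, line_error, s=2, t=0.5, N=50)
```

Despite the two outliers, the re-estimated model recovers the underlying line y = 2x + 1.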
The circle parameters are chosen to minimize

    E = \sum_{i=1}^{n} \left( \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2} - r \right)^2        (1)
subject to the condition that none of the valid points deviates from the circle by more than 't'
units. Here (xc, yc) represent the coordinates of the center of the circle and 'r' is its radius. This is
actually two problems: a circle fit to the data, and a classification of the data into inliers (valid
points) and outliers.
Algorithm :
(1) We randomly select three points (s = 3) from the set of given points. Then we find
the radius and center of the circumscribed circle to the triangle formed by the three points.
With m_a = (y_2 - y_1)/(x_2 - x_1) and m_b = (y_3 - y_2)/(x_3 - x_2),

    x_c = \frac{m_a m_b (y_1 - y_3) + m_b (x_1 + x_2) - m_a (x_2 + x_3)}{2 (m_b - m_a)}        (2)

    y_c = -\frac{1}{m_a} \left( x_c - \frac{x_1 + x_2}{2} \right) + \frac{y_1 + y_2}{2}

    r = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}        (3)
Here m_a and m_b are the slopes of the chords joining (x1, y1) to (x2, y2) and (x2, y2) to (x3, y3)
respectively, (x1, y1), (x2, y2), (x3, y3) are the three randomly chosen points, and r can be
computed from any of the three points.
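The circumcircle computation of step (1) can be written directly from Eqs. (2)-(3); the guards for degenerate samples (vertical chords, collinear points) are our own additions:

```python
import math

def circle_from_3_points(p1, p2, p3):
    """Centre and radius of the circle through three points, following the
    slope-based formulas of Eqs. (2)-(3)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    if x2 == x1 or x3 == x2:            # vertical chord: slope undefined
        return None
    ma = (y2 - y1) / (x2 - x1)          # slope of chord p1-p2
    mb = (y3 - y2) / (x3 - x2)          # slope of chord p2-p3
    if ma == mb or ma == 0:             # collinear points / horizontal chord p1-p2
        return None
    xc = (ma * mb * (y1 - y3) + mb * (x1 + x2) - ma * (x2 + x3)) / (2 * (mb - ma))
    yc = -(xc - (x1 + x2) / 2) / ma + (y1 + y2) / 2
    return xc, yc, math.hypot(x1 - xc, y1 - yc)

# Three points on the circle of radius 5 centred at the origin.
result = circle_from_3_points((5, 0), (0, 5), (-5, 0))
```

For points that hit a degenerate case, a production implementation would simply draw a fresh random sample.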
(2) We computed the distance from all the other points to the circle and identified the
inliers and outliers, based on whether or not they lie within the distance threshold.
(3) Then we computed the size of the consensus set 'Si', the set of data points which lie
within the distance threshold 't' (fig. 3.7).
(4) After at most 'N' trials the largest consensus set 'Si' is selected, and the model
associated with 'Si' is taken as the solution.
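The four steps can be combined into a minimal circle-RANSAC sketch. All names and the synthetic demo data are ours; the circumcircle here is solved via the perpendicular-bisector linear system rather than the slope form of Eqs. (2)-(3), to sidestep vertical chords:

```python
import math
import random

def _circumcircle(p1, p2, p3):
    """Centre and radius of the circle through three points, via the
    perpendicular-bisector equations (robust to vertical chords)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1, c1 = 2 * (x2 - x1), 2 * (y2 - y1), x2**2 + y2**2 - x1**2 - y1**2
    a2, b2, c2 = 2 * (x3 - x1), 2 * (y3 - y1), x3**2 + y3**2 - x1**2 - y1**2
    d = a1 * b2 - a2 * b1
    if abs(d) < 1e-12:                       # collinear sample: no circle
        return None
    xc = (c1 * b2 - c2 * b1) / d
    yc = (a1 * c2 - a2 * c1) / d
    return xc, yc, math.hypot(x1 - xc, y1 - yc)

def ransac_circle(points, t=3.0, n_trials=200, seed=0):
    """Sample 3 points, build the circle, and keep the circle with the
    largest consensus set of inliers within distance t (steps 1-4)."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(n_trials):
        circle = _circumcircle(*rng.sample(points, 3))
        if circle is None:
            continue
        xc, yc, r = circle
        inliers = [(x, y) for (x, y) in points
                   if abs(math.hypot(x - xc, y - yc) - r) <= t]
        if len(inliers) > len(best_inliers):
            best, best_inliers = circle, inliers
    return best, best_inliers

# Demo: 60 points on a circle (centre (100, 80), radius 40) plus 15 outliers.
on_circle = [(100 + 40 * math.cos(2 * math.pi * k / 60),
              80 + 40 * math.sin(2 * math.pi * k / 60)) for k in range(60)]
noise = random.Random(1)
outliers = [(noise.uniform(0, 200), noise.uniform(0, 160)) for _ in range(15)]
(xc, yc, r), inliers = ransac_circle(on_circle + outliers)
```

A final least-squares refit of Eq. (1) over the winning consensus set could follow as the re-estimation step.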
3.4 Classification of image
The output of the RANSAC algorithm provides the centre and radius of the circle that fits
the given data points with minimum error. We use this output from each image to classify the
image as with or without lens, as per the following flowchart. First, we obtain the centre
coordinates of the iris (Ix, Iy) using the Hough transform, and the centre of the best-fit circle for
the left portion of the ROI (Lx, Ly). We calculate the Euclidean distance between them,
√((Ix − Lx)² + (Iy − Ly)²). Similarly, we calculate the distance between the iris centre and the
centre of the best-fit circle for the right portion of the ROI.
If the distance in either case falls below 40 pixels, we classify the image as with lens;
otherwise the image is classified as without lens. The threshold of 40 pixels is chosen because
our ROI lies between the circumference of the iris and a circle whose radius is 40 pixels greater.
Hence the radius of a valid circle representing a lens boundary must lie between the iris radius
and 40 pixels beyond it.
We give priority to the lens class because the RANSAC algorithm we use was modified to
always return a circle; hence, if even one of the two conditions is satisfied within the valid
radius range, there is a high probability that a lens boundary circle is present.
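The 40-pixel decision rule above can be sketched as follows; the function and variable names are ours, not from the flowchart:

```python
import math

def classify(iris_center, left_center, right_center, thresh=40):
    """Apply the 40-pixel rule: if either best-fit circle centre lies within
    `thresh` pixels of the iris centre, a lens boundary is assumed present."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    if dist(iris_center, left_center) < thresh or \
       dist(iris_center, right_center) < thresh:
        return "lens"
    return "no lens"

# A fitted centre 11.2 px from the iris centre triggers the lens label.
label = classify((100, 100), (110, 105), (400, 300))
```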
Database   No Lens   Contact Lens   Total
LG4000     320       380            700
dilation, the number of pixels decreases drastically, and at a certain level it becomes constant,
where it can be assumed that edges due to the eyebrow and eyelashes are no longer present.
Fig. 4.1. Impact of dilating square size on number of pixels in edge image
Boundary tracing was performed in the region between the iris boundary and the circular
arc to locate the region of interest. Since we were interested only in the region lying in the
sclera, the starting point of boundary tracing was the intersection of a horizontal line through
the iris centre with the iris boundary. We considered the region bounded by the iris boundary,
the arc, and the lines joining the centre of the iris to the terminal points of the arc as the region
of interest (region between red dots in fig. 4.2).
14
Then we removed the boundary pixels tracing the region of interest (boundary marked
red and blue in ROI 1 and green and blue in ROI 2, between the red dots). The edges present in
regions ROI 1 and ROI 2 were passed through a Sobel vertical filter and our own circularly
designed filter, as we are interested only in vertical, circularly oriented edges.
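The vertical-edge filtering can be illustrated with the standard 3x3 Sobel kernel; the circular filter's coefficients are specific to this work and are not reproduced here, so the sketch below, with its toy images, shows only the Sobel stage:

```python
import numpy as np

# Standard 3x3 vertical-edge Sobel kernel.
SOBEL_VERTICAL = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=float)

def filter2d(img, kernel):
    """Plain 'valid'-mode cross-correlation, enough for this illustration."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical step edge produces a strong response...
vertical_edge = np.zeros((8, 8)); vertical_edge[:, 4:] = 1.0
# ...while a horizontal step edge is suppressed entirely.
horizontal_edge = np.zeros((8, 8)); horizontal_edge[4:, :] = 1.0
v_resp = np.abs(filter2d(vertical_edge, SOBEL_VERTICAL))
h_resp = np.abs(filter2d(horizontal_edge, SOBEL_VERTICAL))
```

This selectivity is why horizontally oriented eyelash and eyelid edges are attenuated while the near-vertical lens-boundary edges survive.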
LENS      TOTAL   CORRECT CLASSIFICATION   FALSE CLASSIFICATION   CLASSIFICATION ACCURACY (%)
ROI 1     380     369                      11                     97.10
ROI 2     380     351                      29                     92.36

NO LENS   TOTAL   CORRECT CLASSIFICATION   FALSE CLASSIFICATION   CLASSIFICATION ACCURACY (%)
ROI 1     320     269                      51                     84.06
ROI 2     320     277                      43                     86.56
N (number of trials) = log(1 - p) / log(1 - w^s)
t (distance threshold) = 3
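The trial count N can be computed directly from the formula above, where p is the desired confidence that at least one sample is outlier-free and w is the expected inlier fraction. Only t = 3 is given in the text; the p and w values below are illustrative assumptions:

```python
import math

def ransac_trials(p, w, s):
    """N = log(1 - p) / log(1 - w**s): the number of random samples needed
    so that, with confidence p, at least one sample of size s is free of
    outliers when a fraction w of the data are inliers."""
    return math.ceil(math.log(1 - p) / math.log(1 - w ** s))

# Example (assumed values): p = 0.99, w = 0.5, s = 3 points per sample.
n = ransac_trials(0.99, 0.5, 3)   # 35 trials
```

A higher inlier fraction shrinks N quickly: at w = 0.9 only 4 trials are needed for the same confidence.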
          NUMBER   TRUE POSITIVE   TRUE NEGATIVE   FALSE NEGATIVE   FALSE POSITIVE   ACCURACY (%)
LENS      380      340             340             40               40               89.47
NO LENS   320      205             205             115              115              64.06
OVERALL   700      545             545             155              155              77.86
Accuracy = (TP + TN) / (TP + FP + FN + TN)
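The accuracy formula can be checked directly against the table entries; plugging in the lens row's counts reproduces the reported 89.47%:

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + FP + FN + TN), in percent."""
    return 100.0 * (tp + tn) / (tp + fp + fn + tn)

# Lens row of the table above: TP = TN = 340, FP = FN = 40.
lens_acc = round(accuracy(tp=340, tn=340, fp=40, fn=40), 2)   # 89.47
```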
          NUMBER   TRUE POSITIVE   TRUE NEGATIVE   FALSE NEGATIVE   FALSE POSITIVE   ACCURACY (%)
LENS      351      340             340             11               11               96.86
NO LENS   269      205             205             64               64               76.20
OVERALL   620      545             545             75               75               87.90
Correct classification rates (%) on the LG4000 database:

LG4000       [4]     [3]     [6]     SVM     SID+BoF+SVM [23]   ?       RANSAC   RANSAC
None-None    78.00   73.75   57.00   85.00   81.25              95.75   64.06    76.20
Lens-Lens    60.50   32.00   70.50   71.00   80.56              84.00   89.47    96.86
Total        66.72   46.62   65.88   75.58   80.98              89.87   77.85    87.90
5. Ongoing work
We will implement the algorithm on other databases (table 6) and compare the
results with pre-existing methods.
Table 6. Databases for the ongoing experiments.

Dataset        No Lens   Transparent   Textured   Total
LG4000         1400      1400          1400       4200
AD100          300       300           300        900
IIITD Cogent   1163      1143          1160       3466
IIITD Vista    1000      1010          1005       3025
Total          3863      3853          3865       11591
6. List of publication
1. Mohit Kumar, N. B. Puhan, "Iris Liveness Detection Using Texture Segmentation," IEEE
National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics
(NCVPRIPG), IIT Patna, December 2015.
References
1. Erdogan, Gizem, and Arun Ross. "Automatic detection of non-cosmetic soft contact lenses in ocular images." SPIE Defense, Security, and Sensing. International Society for Optics and Photonics, 2013.
2. Baker, Sarah E., et al. "Degradation of iris recognition performance due to non-cosmetic prescription contact lenses." Computer Vision and Image Understanding 114.9 (2010): 1030-1044.
3. Kywe, Wyne Wyne, Masashi Yoshida, and Kazuhito Murakami. "Contact lens extraction by using thermo-vision." Pattern Recognition, 2006. ICPR 2006. 18th International Conference on. Vol. 4. IEEE, 2006.
4. Wei, Zhuoshi, et al. "Counterfeit iris detection based on texture analysis." Pattern Recognition, 2008. ICPR 2008. 19th International Conference on. IEEE, 2008.
5. Zhang, Hui, Zhenan Sun, and Tieniu Tan. "Contact lens detection based on weighted LBP." Pattern Recognition (ICPR), 2010 20th International Conference on, pp. 4279-4282. IEEE, 2010.
6. Doyle, J. S., and Bowyer, K. W. "Robust detection of textured contact lenses in iris recognition using BSIF." IEEE Access 3 (2015): 1672-1683.
7. Yadav, Divakar, et al. "Unraveling the effect of textured contact lenses on iris recognition." IEEE Transactions on Information Forensics and Security 9.5 (2014): 851-862.
8. Doyle, James S., Patrick J. Flynn, and Kevin W. Bowyer. "Automated classification of contact lens type in iris images." Biometrics (ICB), 2013 International Conference on. IEEE, 2013.
9. Raghavendra, R., Kiran B. Raja, and Christoph Busch. "Ensemble of statistically independent filters for robust contact lens detection in iris images." Proceedings of the 2014 Indian Conference on Computer Vision, Graphics and Image Processing, p. 24. ACM, 2014.
10. Gupta, P., Behera, S., Vatsa, M., and Singh, R. "On iris spoofing using print attack." 2014 22nd International Conference on Pattern Recognition (ICPR), pp. 1681-1686. IEEE, 2014.
11. Gragnaniello, Diego, Giovanni Poggi, Carlo Sansone, and Luisa Verdoliva. "Contact lens detection and classification in iris images through scale invariant descriptor." Signal-Image Technology and Internet-Based Systems (SITIS), 2014 Tenth International Conference on, pp. 560-565. IEEE, 2014.
12. Gragnaniello, Diego, Giovanni Poggi, Carlo Sansone, and Luisa Verdoliva. "Using iris and sclera for detection and classification of contact lenses." Pattern Recognition Letters (2015).
13. Fischler, M. A., and Bolles, R. C. "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography." Communications of the ACM 24.6 (1981): 381-395.
14. Hartley, R., and Zisserman, A. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
15. http://www.cse.nd.edu/~cvrl/CVRL/Data_Sets.html