
1. Introduction
The automatic recognition of an individual based on unique, stable and non-invasive characteristics such as freckles, coronas, stripes, crypts and contractile lines within the iris texture makes iris recognition a promising solution to security. The externally visible surface of the multilayered iris contains two zones, which often differ in color: an outer ciliary zone and an inner pupillary zone, divided by the collarette, which appears as a zigzag pattern. Formation of the unique patterns of the iris is random and not related to any genetic factors. Due to the epigenetic nature of iris patterns, the two eyes of an individual contain completely independent iris patterns, and identical twins possess uncorrelated iris patterns.

Figure 1.1 (a) Diagram of eye (b) Characteristics of iris.

An iris recognition system consists of three main steps:


(1) Image acquisition: To capture the rich details of iris patterns, an imaging system should resolve a minimum of 70 pixels in iris radius. Monochrome CCD cameras (480 × 640) have been used because NIR illumination in the 700-900 nm band is required for imaging to be unobtrusive to humans.

(2) Iris liveness detection: Iris liveness detection ensures the trustworthiness of the biometric system against spoofing methods. The main threats for iris-based systems are:
(a) Eye image: screen image, photograph, paper print, video signal;
(b) Artificial eye: glass/plastic;
(c) Natural eye (genuine user): forced use;
(d) Natural eye (impostor): eye removed from body, contact lens.
(3) Iris recognition: Personal identification using iris recognition is done by matching two iris templates, one stored in the database during training and the other captured during recognition (testing). Various methods have been proposed in the literature, of which the earliest was given by J. Daugman, who encodes the visible texture of a person's iris into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most significant bits comprise a 256-byte iris code. These iris codes are then matched and recognition is done.

Figure 1.2 (a) Image acquisition (b) Segmentation (c) Feature encoding (d) Matching.
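To make the matching step concrete: Daugman-style recognition compares two iris codes by their normalized Hamming distance, accepting a pair whose distance falls below a fixed threshold. The sketch below is a minimal illustration under that assumption; it omits the occlusion masks used in practice.

```python
import numpy as np

def iris_code_distance(code_a: bytes, code_b: bytes) -> float:
    """Normalized Hamming distance between two 256-byte iris codes:
    the fraction of bits on which the codes disagree. Occlusion
    masking, used in real systems, is omitted in this sketch."""
    bits_a = np.unpackbits(np.frombuffer(code_a, dtype=np.uint8))
    bits_b = np.unpackbits(np.frombuffer(code_b, dtype=np.uint8))
    return float(np.mean(bits_a != bits_b))

# Two codes from the same eye give a distance near 0; codes from
# independent irises cluster around 0.5, so a threshold near 0.33
# separates genuine from impostor comparisons.
```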

2. Literature Survey (Contact Lens Detection)


This section provides the literature survey on detection of the presence of contact lenses. Based on the material used to manufacture them, contact lenses are classified into two main groups: soft contact lenses and rigid gas permeable (RGP) contact lenses. Soft contact lenses are made of soft, flexible, highly absorbent plastics called hydrogels. RGP contact lenses are made of more durable plastic than soft contact lenses [1].
Contact lenses are also classified as:

S.No.  Textured Lens                Transparent Lens                   Semi-transparent Lens
1.     Alter the appearance of      For correcting vision or for       For spoofing
       the eye                      therapeutic purposes
2.     Colored and opaque           Colorless and transparent          Transparent in the center but
                                                                       colored around the iris

Figure 2.2.1 (a) Eye with cosmetic lens. (b) Eye with non-cosmetic contact lens. (c) Eye with textured contact lens.

A transparent contact lens can hamper the overall accuracy of an iris biometric system in the following ways [2]:
(1) Slight movement of the lens with respect to the iris results in a marginally different effect on the iris texture at each presentation.
(2) Contact lenses with visible markings such as L or R can confuse an iris recognition system into registering different eyes as belonging to the same person.
(3) The boundary between the support region and the corrective region of the lens can alter the appearance of the iris texture.

Thus, it is important to decide whether a person wears a contact lens or not, as the presence of both textured and transparent contact lenses can severely hamper the efficacy of an iris recognition system. Contact lens detection techniques can be broadly classified into two main categories:
(1) Two class problem
(2) Three class problem

2.1.1 Two class problem


The main objective of this category is to classify the image into two categories: with or without contact lens. It does not say anything about the type of contact lens. The techniques used to detect fake iris patterns in textured contact lenses fail in the case of transparent contact lenses, as these do not obscure the iris. Kywe et al. [3] proposed to use a thermal camera to measure the temperature decrease on the eye surface due to evaporation of water during blinking. The magnitude of the temperature decrease was used to classify an eye as with or without contact lens. They correctly classified 26 of the 39 subjects in their dataset, an accuracy of 66.7%. However, their result was highly dependent on the humidity and temperature of the environment, and the dataset was not large enough to draw any generalized conclusion.
Erdogan et al. [1] used the principle that the boundary of a soft contact lens is partly visible against the sclera; the visible part is detected by examining the intensity profiles of the pixels located in the vicinity of the outer limbus boundary of the iris. The algorithm was tested on the ICE 2005 and MBGC Iris databases, giving overall classification accuracies of 76% and 66.8% respectively. However, the algorithm is database dependent, requiring two tunable parameters that were not generalized, and it fails under inconsistent illumination and defocus blur.
Wei et al. [4] extracted three features from each eye image and then used an SVM classifier on these feature vectors. The three proposed measures to detect fake irises are: measuring iris edge sharpness, applying the Iris-Texton feature for characterizing the visual primitives of iris textures, and using selected features based on the co-occurrence matrix (CM). On databases consisting only of textured contact lens iris images and live iris images, their method shows promising results, with correct classification rates of 100% on one database and 94.1% on the other. Zhang et al. [5] adopted LBP to represent the texture pattern of each image. The LBP extracted for each pixel is encoded as a SIFT orientation histogram. Three statistics, namely the standard deviation of the w-LBP histogram and the mean and standard deviation of the w-LBP map, are used for feature selection, and finally an SVM is used as the classifier. On a database consisting only of textured contact lens iris images and live iris images they obtained a correct classification rate of 99.14%. However, the absence of any transparent contact lenses in the database lowers the confidence in this technique when all three types of lenses are present.
Doyle Jr. [6] used BSIF features as a normalized histogram of pixels' binary codes. They trained six different classifiers on this feature, obtained from images under three different segmentation conditions: first using the whole image, then a best-guess image in which the average radius of iris and pupil is taken for segmentation, and finally a properly segmented image. They showed a correct classification rate of 100% in the case of textured lenses and concluded that textured lens detection is a solved problem. They also found that if a novel lens not present during training is included in testing, the correct classification rate reduces to 86%.

2.1.2 Three class problem


In this problem the image is classified as transparent contact lens, textured contact lens, or without lens. Yadav et al. [7] modified LBP to produce feature values corresponding to different regions of the eye, viz. pupil, iris and sclera. 17 different classifiers were explored to train models on these feature sets. The algorithm was tested on four datasets, IIITD Cogent, IIITD Vista, ND I and ND II, resulting in overall classification accuracies of 73.01%, 80.04%, 75.58% and 77.67% respectively. Doyle et al. [8] used a similar approach for classification using multi-resolution LBP. They obtained 71% correct detection of iris images on the NDCCL12 dataset. The overall accuracy of distinguishing between no-lens and clear-lens images was very low. The accuracy of both of the above classification methods decreases significantly if a cosmetic lens with a previously unseen printed texture is presented or if the iris image is acquired using an unknown iris sensor.
Raghavendra et al. [9] used an ICA-based unsupervised scheme to extract BSIF features, as a normalized histogram of pixels' binary codes, to characterize the texture component in three regions: eye, iris and strip. An SVM was used to classify the images of all three regions, followed by weighted majority voting to combine the decisions. A CCR of up to 87.5% was recorded on the LG4000 database. All of the above methods are unsuitable for real-time application, since three regions must be processed and classified before a decision is made. Gupta et al. [10] used three different descriptors, namely LBP, GIST and HOG, to extract features that were fed to an SVM to classify the images into different groups. They obtained accuracies of 93.79%, 98.69% and 62.41% respectively for LBP, GIST and HOG features.
Gragnaniello et al. [11, 12] used a real segmentation algorithm that excludes the eyelids, avoids normalization, and considers the information coming from the iris and part of the sclera. To extract discriminative features they used the rotation and scale invariant descriptor (SID), and carried out classification using the bag-of-words (BoW) paradigm. Accuracy of up to 93.17% was observed on the Notre Dame database.

3. Proposed Method
In an image of an eye with a contact lens, a faint boundary is always visible surrounding the iris (fig. 3.1 (a)). The presence or absence of this boundary can be used to classify an image as with or without contact lens. The whole classification process can be grouped into four steps:
(1) Segmentation of region of interest
(2) Feature point extraction from region of interest
(3) Applying the RANSAC algorithm to find the best-fit circle
(4) Classifying the image into lens or without-lens category

Fig. 3.1 (a) Transparent contact lens (b) Region of interest


3.1 Segmentation of region of interest
For the purpose of classification we are only interested in the region outside iris and
between the eyelids (fig.3.1 (b) ). The region of interest from which classification is done lies on
both side of iris, the region between red and blue circle on left and red and green circle on
right. In order to segment this region following method is followed: (1) Iris localization
(2) Dilation of eyelid boundary and extraction of region of interest

3.1.1 Iris localization


We find the center and radius of iris by using Hough transform on the image. Once iris is
localized we trace another circle with radius 40 pixels [1] greater than that of the iris. Since we
are interested in regions lying in sclera only, so we trace two horizontal lines one on the top
and other at the bottom of iris forming a closed area. Now the region between red points will
be only considered for further steps (fig. 3.2 ).

Fig. 3.2 Iris localized region in the image
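As an illustration of this step, iris localization can be sketched with OpenCV's circular Hough transform; the radius bounds and accumulator thresholds below are assumed placeholder values, not parameters taken from our implementation.

```python
import cv2
import numpy as np

def localize_iris(gray):
    """Return (cx, cy, r) of the strongest circle in a grayscale eye
    image, found with the Hough gradient method, or None."""
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)  # suppress spurious accumulator votes
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=gray.shape[0],   # expect a single iris
                               param1=100, param2=30,   # edge / accumulator thresholds (illustrative)
                               minRadius=60, maxRadius=150)
    if circles is None:
        return None
    cx, cy, r = np.round(circles[0, 0]).astype(int)
    return cx, cy, r
```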


3.1.2 Dilation of eyelid boundary
In the next step we apply the Canny edge detection algorithm in the considered region. In order to select the region lying only in the sclera we dilate the edge image obtained from the Canny output. Then we trace the boundary of the resultant image; the starting points for the tracing are the points at angles of zero degrees and 270 degrees on the circumference of the iris (fig. 3.3 (a)). The region obtained consists only of the sclera portion touching the iris and excludes the eyebrows and eyelids (fig. 3.3 (b)).

Fig. 3.3 (a) Dilated edge image (b) Boundary traced region excluding eyelids and eyebrows
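A minimal sketch of the edge-and-dilate step, assuming the grayscale eye image from above; the Canny thresholds are placeholders, while the 9 × 9 square follows the dilation size discussed in section 4.1. Boundary tracing from the 0- and 270-degree points on the iris circumference would then be run on this map.

```python
import cv2
import numpy as np

def dilated_edge_map(gray):
    """Canny edges dilated with a 9x9 square so that eyelash and
    eyebrow edges merge into solid blobs which the subsequent
    boundary tracing can exclude from the ROI."""
    edges = cv2.Canny(gray, 50, 150)  # thresholds are illustrative
    return cv2.dilate(edges, np.ones((9, 9), np.uint8))
```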

3.2 Feature point extraction from region of interest


Once the region of interest (fig. 3.4 (a)) is extracted, we apply Gaussian smoothing so that noise in this region is reduced. Applying the Canny edge detection algorithm on this region extracts all possible edges (fig. 3.4 (b)). However, along with the contact lens boundary, the algorithm produces many false edges. Since our objective is to detect only vertical circular arcs, we apply the Sobel vertical operator (fig. 3.5 (a)) and a self-designed circular operator (fig. 3.5 (b)) on this region to reduce the number of false edges.

Fig. 3.4 (a) Region of interest (b) Output of Canny edge detection algorithm

Fig. 3.5 (a) Sobel vertical operator (the standard 3 × 3 kernel [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) (b) Self-designed circular operator

Fig. 3.6 (a) Output of Sobel and circular operator filtering (b) After removing boundary pixels
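The directional filtering can be sketched as follows. Since only the Sobel kernel is fully specified above, the sketch applies the standard Sobel vertical operator and keeps pixels with a strong response; the relative threshold and the omission of the circular operator of fig. 3.5 (b) are assumptions.

```python
import cv2
import numpy as np

def vertical_edge_points(roi_edges, rel_thresh=0.5):
    """Return (x, y) coordinates of strongly vertical edges in the
    ROI edge image, to serve as feature points for the circle fit."""
    gx = cv2.Sobel(roi_edges.astype(np.float64), cv2.CV_64F, 1, 0, ksize=3)
    mag = np.abs(gx)                      # vertical-edge response magnitude
    ys, xs = np.nonzero(mag > rel_thresh * mag.max())
    return np.column_stack([xs, ys])
```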
3.3 RANSAC
Random Sample Consensus (RANSAC) is a paradigm for fitting a model to experimental
data, introduced by Martin A. Fischler and Robert C. Bolles in 1981. As stated by Fischler and
Bolles [13] "The RANSAC procedure is opposite to that of conventional smoothing techniques:
Rather than using as much of the data as possible to obtain an initial solution and then
attempting to eliminate the invalid data points, RANSAC uses as small an initial data set as
feasible and enlarges this set with consistent data when possible".
Objective: Robust fit of a model to a data set S which contains outliers
Algorithm: [14]
(1) Randomly select a sample of 's' data points from 'S' and instantiate the model from
this subset.
(2) Determine the set of data points 'Si' which is within a distance threshold 't' of the
model. The set 'Si', is the consensus set of the sample and defines the inliers of 'S'.
(3) If the size of 'Si' (the number of inliers) is greater than some threshold 'T', re-estimate
the model using all the points in 'Si' and terminate.
(4) If the size of 'Si' is less than 'T', select a new subset and repeat the above.
(5) After 'N' trials the largest consensus set 'Si' is selected, and the model is re-estimated
using all the points in the subset 'Si'.


3.3.1 RANSAC for fitting a circle to our feature points


The problem we need to solve is: given a set of 2D data points, find the circle which minimizes the sum of the squared distances of the points to the circle:

\epsilon^2 = \sum_{i=1}^{n} \left( \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2} - r \right)^2        (1)

subject to the condition that none of the valid points deviates from the circle by more than 't' units. Here (x_c, y_c) are the coordinates of the center of the circle and 'r' is its radius. This is actually two problems: a circle fit to the data, and a classification of the data into inliers (valid points) and outliers.
Algorithm :
(1) We randomly select three points (s = 3) from the set of given points. Then we find the radius and center of the circle circumscribed about the triangle formed by the three points:

m_a = \frac{y_2 - y_1}{x_2 - x_1}, \quad m_b = \frac{y_3 - y_2}{x_3 - x_2}

x_c = \frac{m_a m_b (y_1 - y_3) + m_b (x_1 + x_2) - m_a (x_2 + x_3)}{2(m_b - m_a)}        (2)

y_c = -\frac{1}{m_a} \left( x_c - \frac{x_1 + x_2}{2} \right) + \frac{y_1 + y_2}{2}        (3)

r = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}

Here m_a and m_b are the slopes of the chords joining (x_1, y_1) to (x_2, y_2) and (x_2, y_2) to (x_3, y_3) respectively; (x_1, y_1), (x_2, y_2), (x_3, y_3) are the three randomly chosen points, and (x_i, y_i) in the radius formula is any one of them.
(2) We compute the distance from all the other points to the circle and find the inliers and outliers, based on whether they lie within the distance threshold or not.

Fig.3.7. Example of application of RANSAC algorithm for circle fitting


(3) Then we compute the size of the consensus set 'Si', the set of data points which are within the distance threshold 't' (fig. 3.7).
(4) After at most 'N' trials the largest consensus set 'Si' is selected, and the model associated with 'Si' is taken as the solution.
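The following is a compact sketch of this circle-fitting RANSAC loop, combining the circumscribed-circle formulas (2)-(3) with the adaptive trial count N of section 4.2; the degenerate-sample checks and default parameter values are standard details assumed here, not taken verbatim from our implementation.

```python
import numpy as np

def circumcircle(p1, p2, p3):
    """Center and radius of the circle through three points, via the
    slope formulas (2)-(3); returns None for degenerate (collinear or
    axis-aligned) triples."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    if x2 == x1 or x3 == x2:
        return None
    ma = (y2 - y1) / (x2 - x1)
    mb = (y3 - y2) / (x3 - x2)
    if mb == ma or ma == 0:
        return None
    xc = (ma * mb * (y1 - y3) + mb * (x1 + x2) - ma * (x2 + x3)) / (2 * (mb - ma))
    yc = -(xc - (x1 + x2) / 2) / ma + (y1 + y2) / 2
    return xc, yc, float(np.hypot(x1 - xc, y1 - yc))

def ransac_circle(points, t=3.0, p=0.99, max_iters=1000):
    """RANSAC circle fit over an (n, 2) array of feature points:
    sample 3 points, build their circumcircle, count inliers within
    distance t, and keep the best candidate."""
    rng = np.random.default_rng()
    n = len(points)
    best, best_inliers, N, i = None, 0, max_iters, 0
    while i < min(N, max_iters):
        sample = points[rng.choice(n, size=3, replace=False)]
        circle = circumcircle(*sample)
        i += 1
        if circle is None:
            continue
        xc, yc, r = circle
        d = np.abs(np.hypot(points[:, 0] - xc, points[:, 1] - yc) - r)
        inliers = int(np.count_nonzero(d < t))
        if inliers > best_inliers:
            best, best_inliers = circle, inliers
            w = inliers / n                  # estimated inlier ratio
            if 0 < w < 1:                    # adaptive trial count (sec. 4.2)
                N = np.log(1 - p) / np.log(1 - w ** 3)
    return best, best_inliers
```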
3.4 Classification of image
The output of the RANSAC algorithm provides the center and radius of the circle which fits the given data points with minimum error. We use this output from each image to classify the image as with or without lens, as per the following flowchart. First we obtain the center coordinates of the iris (Ix, Iy) using the Hough transform and the center of the best-fit circle for the left portion of the ROI (Lx, Ly). We calculate the distance between them using the Euclidean distance formula d = \sqrt{(I_x - L_x)^2 + (I_y - L_y)^2}. Similarly we calculate the distance between the iris center and the center of the best-fit circle for the right portion of the ROI.
If the distance in either of the above cases falls below 40 pixels we consider the image as with lens; otherwise the image is classified as without lens. The value of 40 pixels is taken as the threshold because our ROI lies between the circumference of the iris and a circle of radius 40 pixels greater than it, so the radius of any valid circle representing a lens boundary lies between the iris radius and 40 pixels beyond it. We give priority to the lens class because the RANSAC algorithm we use was modified to always return a circle; hence, if even one of the two ROIs yields a circle within that radius range, there is a high probability that a lens boundary is present.
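The decision rule of Flowchart 1 then reduces to a few lines; the variable names below are illustrative.

```python
import math

def classify_image(iris_center, left_center, right_center, thresh=40):
    """Label the image 'lens' if the best-fit circle center in either
    ROI lies within `thresh` pixels of the iris center; a missing
    center (None) means RANSAC found no circle in that ROI."""
    ix, iy = iris_center
    for c in (left_center, right_center):
        if c is not None and math.hypot(c[0] - ix, c[1] - iy) < thresh:
            return "lens"
    return "no lens"
```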


Flowchart 1. Classification method to determine whether lens is present or absent.

4. Experimental Results and Discussion


We implemented the proposed algorithm on images of the Notre Dame Contact Lens Detection 2015 (NDCLD'13) dataset [15] (table 1).
Dataset    No Lens    Contact Lens    Total
LG4000     320        380             700

Table 1. Image distribution of dataset.


4.1 Segmentation and feature point extraction
The database provides the center coordinates and radius of the iris, which we used directly to find the boundary of the iris. Then we applied the Canny edge detection algorithm to the image. The threshold value used in the algorithm was doubled so that very weak edges due to noise are eliminated. After that, a circular arc with radius 40 pixels [1] greater than that of the iris was drawn on the image. Then we dilated the edge map output of the Canny algorithm with a square of size (9 × 9) such that the number of pixels in the region of interest is minimized (fig. 4.1). As is clear from the figure, when dilation is not used the number of pixels in the ROI is quite high, indicating the presence of edges due to eyebrows and eyelashes. On increasing the size of the dilation square the number of pixels decreases drastically, and at a certain level it becomes constant, where it can be assumed that edges due to eyebrows and eyelashes are no longer present.

Fig. 4.1. Impact of dilating square size on number of pixels in edge image
Boundary tracing was done in the region between the iris boundary and the circular arc to locate the region of interest. Since we were interested only in the region lying in the sclera, the starting point of boundary tracing was the intersection of a horizontal line through the iris center with the iris boundary. We considered the region between the iris boundary and the arc, bounded by the lines joining the center of the iris to the end points of the arc, as the region of interest (region between red dots in fig. 4.2).

Fig. 4.2 Region of interest of the considered image


Then we removed the boundary pixels tracing the region of interest (boundary marked red and blue in ROI 1 and green and blue in ROI 2, between the red dots). The edges present in ROI 1 and ROI 2 were passed through the Sobel vertical filter and our self-designed circular filter, as we are interested only in vertically oriented circular edges.
LENS     TOTAL    CORRECT CLASSIFICATION    FALSE CLASSIFICATION    CLASSIFICATION ACCURACY (%)
ROI 1    380      369                       11                      97.10
ROI 2    380      351                       29                      92.36

Table 2: Segmentation accuracy for lens images


NO LENS    TOTAL    CORRECT CLASSIFICATION    FALSE CLASSIFICATION    CLASSIFICATION ACCURACY (%)
ROI 1      320      269                       51                      84.06
ROI 2      320      277                       43                      86.56

Table 3: Segmentation accuracy for no lens images


From the above tables we can see that when a person is not wearing a lens the segmentation inaccuracy is higher. This is because in an image without a lens the visible region between the eyelids is in general smaller than in an image with a lens. So, during the dilation step of our segmentation method, the eyebrows and eyelids cover most of the sclera, and these sclera regions are not considered as ROI. Hence the number of images with no region of interest is higher in the case of images with no lens.
4.2 Calculation of parameters for RANSAC algorithm
s (number of random samples chosen initially) = 3
p (probability that at least one random sample is free from outliers) = 0.99
w (probability that any selected data point is an inlier) = (total number of possible inliers) / (total number of feature points)
N (number of iterations) = log(1 - p) / log(1 - w^s)
t (distance threshold) = 3
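For example, if half of the feature points are assumed to be inliers (w = 0.5), then N = log(1 - 0.99) / log(1 - 0.5^3) = log(0.01) / log(0.875) ≈ 35 iterations are sufficient.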

4.3. Classification into lens or no lens image


We applied the RANSAC algorithm to ROI 1 and ROI 2 separately, and recorded the center coordinates of the circles in both cases. As explained in Flowchart 1, we classified each image as with lens or without lens and obtained the following output (table 4).
IMAGE      NUMBER    TRUE POSITIVE    TRUE NEGATIVE    FALSE NEGATIVE    FALSE POSITIVE    ACCURACY (%)
LENS       380       340              340              40                40                89.47
NO LENS    320       205              205              115               115               64.06
OVERALL    700       545              545              155               155               77.86

Table 4. Output of classification method


True positive (TP): With lens correctly identified as with lens
True negative (TN): No lens correctly identified as no lens
False positive (FP): With lens incorrectly identified as no lens
False negative (FN): No lens incorrectly identified as with lens

Accuracy = (TP + TN) / (TP + FP + FN + TN)
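For instance, for the overall row of Table 4, Accuracy = (545 + 545) / (545 + 155 + 155 + 545) = 1090/1400 = 77.86%.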

4.4 Classification of segmentation error free images


IMAGE      NUMBER    TRUE POSITIVE    TRUE NEGATIVE    FALSE NEGATIVE    FALSE POSITIVE    ACCURACY (%)
LENS       351       340              340              11                11                96.86
NO LENS    269       205              205              64                64                76.20
OVERALL    620       545              545              75                75                87.90

Table 5. Output when segmentation error free images are considered


4.5 Comparison with previous methods


Classification accuracy (%) on the LG4000 database:

Method                                  None-None    Lens-Lens    Total
Textural Features [4]                   78.00        60.50        66.72
GLCM Features [3]                       73.75        32.00        46.62
Weighted LBP [6]                        57.00        70.50        65.88
mLBP [23]                               85.00        71.00        75.58
LBP+PHOG+SVM [23]                       81.25        80.56        80.98
SID+BOF+SVM                             95.75*       84.00        89.87*
RANSAC (proposed)                       64.06        89.47        77.85
RANSAC (proposed, error-free images)    76.20        96.86*       87.90

Table 6. Results of previous methods on the same database; the best result in each column is marked with *.

5. Ongoing work
We will implement the algorithm on other databases (table 7) and compare the results with pre-existing methods.
Dataset         No Lens    Transparent    Textured    Total
LG4000          1400       1400           1400        4200
AD100           300        300            300         900
IIITD Cogent    1163       1143           1160        3466
IIITD Vista     1000       1010           1005        3025
Total           3863       3853           3865        11591

Table 7. Image distribution of datasets for ongoing work.

6. List of publications
1. Mohit Kumar, N. B. Puhan, "Iris Liveness Detection Using Texture Segmentation," IEEE National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), IIT Patna, December 2015.



References
1. Erdogan, Gizem, and Arun Ross. "Automatic detection of non-cosmetic soft contact lenses in ocular images." SPIE Defense, Security, and Sensing. International Society for Optics and Photonics, 2013.
2. Baker, Sarah E., et al. "Degradation of iris recognition performance due to non-cosmetic prescription contact lenses." Computer Vision and Image Understanding 114.9 (2010): 1030-1044.
3. Kywe, Wyne Wyne, Masashi Yoshida, and Kazuhito Murakami. "Contact lens extraction by using thermo-vision." Pattern Recognition, 2006. ICPR 2006. 18th International Conference on. Vol. 4. IEEE, 2006.
4. Wei, Zhuoshi, et al. "Counterfeit iris detection based on texture analysis." Pattern Recognition, 2008. ICPR 2008. 19th International Conference on. IEEE, 2008.
5. Zhang, Hui, Zhenan Sun, and Tieniu Tan. "Contact lens detection based on weighted LBP." Pattern Recognition (ICPR), 2010 20th International Conference on, pp. 4279-4282. IEEE, 2010.
6. Doyle, J. S., and Bowyer, K. W. "Robust detection of textured contact lenses in iris recognition using BSIF." IEEE Access 3 (2015): 1672-1683.
7. Yadav, Divakar, et al. "Unraveling the effect of textured contact lenses on iris recognition." IEEE Transactions on Information Forensics and Security 9.5 (2014): 851-862.
8. Doyle, James S., Patrick J. Flynn, and Kevin W. Bowyer. "Automated classification of contact lens type in iris images." Biometrics (ICB), 2013 International Conference on. IEEE, 2013.
9. Raghavendra, R., Kiran B. Raja, and Christoph Busch. "Ensemble of statistically independent filters for robust contact lens detection in iris images." Proceedings of the 2014 Indian Conference on Computer Vision Graphics and Image Processing, p. 24. ACM, 2014.
10. Gupta, P., Behera, S., Vatsa, M., and Singh, R. "On iris spoofing using print attack." 2014 22nd International Conference on Pattern Recognition (ICPR), pp. 1681-1686. IEEE, 2014.
11. Gragnaniello, Diego, Giovanni Poggi, Carlo Sansone, and Luisa Verdoliva. "Contact lens detection and classification in iris images through scale invariant descriptor." Signal-Image Technology and Internet-Based Systems (SITIS), 2014 Tenth International Conference on, pp. 560-565. IEEE, 2014.
12. Gragnaniello, Diego, Giovanni Poggi, Carlo Sansone, and Luisa Verdoliva. "Using iris and sclera for detection and classification of contact lenses." Pattern Recognition Letters (2015).
13. Fischler, M. A., and Bolles, R. C. "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography." Communications of the ACM 24.6 (1981): 381-395.
14. Hartley, R., and Zisserman, A. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
15. http://www.cse.nd.edu/~cvrl/CVRL/Data_Sets.html

