Article · December 2014
DOI: 10.1109/BTAS.2014.6996226

Presentation Attack Detection on Visible Spectrum Iris Recognition by
Exploring Inherent Characteristics of Light Field Camera

R. Raghavendra Christoph Busch


Norwegian Biometric Laboratory, Gjøvik University College, Norway
Email: {raghavendra.ramachandra, christoph.busch}@hig.no

Abstract

Presentation (or spoof) attacks on biometric systems are a growing concern that has received substantial attention from both academia and industry. In this paper, we present a novel way of addressing Presentation Attack Detection (PAD) (or spoof detection) by exploiting the inherent characteristics of the Light Field Camera (LFC) for visible spectrum iris biometric systems. The proposed PAD algorithm captures the variation in depth (or focus) between the multiple depth images rendered by the LFC, which in turn can be used to reveal presentation attacks. To this end, we introduce a new presentation attack database comprising 52 subjects with 104 unique eye samples. The database is collected using an LFC by simulating attacks with visible spectrum iris biometric artefacts such as printed photos and electronic displays (using both an Apple iPad (4th generation) and a Samsung Galaxy Note 10.1 tablet). Extensive experiments carried out on this database reveal the efficacy of the proposed PAD algorithm, with a lowest Average Classification Error Rate of 0.5% when confronted with a diverse set of attacks on the visible spectrum iris biometric system.

1. Introduction

In recent years, the adoption of biometric systems for several security applications has seen not only rapid development but also large-scale deployment in real-life scenarios. Even though biometric systems can provide a reliable user identification scheme, these systems are vulnerable, particularly at the sensor level, to different kinds of presentation attacks. The goal of a presentation attack is to subvert a biometric system by presenting a biometric artefact that replicates the biometric characteristic of an enrolled data subject. With the evolving technology for creating biometric artefacts (or spoofs), the biometric system, irrespective of the adopted modality, is at risk.

Among the available biometric modalities, the visible spectrum iris biometric system is more vulnerable to presentation attacks. The easiest way one can attack the visible spectrum iris biometric system, especially at the sensor level, is by simply presenting an iris biometric artefact generated using a photograph of the eye image (or a video), or by presenting an electronic screen display using a tablet or mobile phone. The feasibility of these attacks is acknowledged by the number of available research works [10, 5, 4, 18, 6, 14, 7], which strongly underlines the importance of detecting these attacks. Further, various iris PAD competitions [1, 2] were organized recently, which also shows the importance of this problem.

Presentation Attack Detection (PAD) (or countermeasure, spoof detection, or anti-spoofing) algorithms at the sensor level can be broadly classified into two groups [4]: (1) hardware based, and (2) software based. The idea of the hardware based approach is to include an additional hardware component as a supporting device with the actual iris sensor that can perform the PAD. In [14], PAD is carried out by employing multiple light sources to measure the variation of reflectance between the iris and sclera regions. Attack detection based on Oculomotor Plant Characteristics (OPC), which captures saccadic eye movements using an eye tracker, is presented in [13]. A new hardware setup that captures a stereo image of the visible spectrum iris to detect attacks, especially those presenting a cosmetic contact lens, is introduced in [7].

In the case of the software based approach, presentation attacks are detected after the data has been captured by the sensor. The majority of the available state-of-the-art schemes are devoted to near infrared iris. Most of these available software based PAD schemes are based on analyzing the statistical characteristics of the observed iris pattern. The observed iris patterns are further quantified to reflect the quality of the captured image with the intent to possibly reveal the presence of a presentation attack. The potential of image quality assessment to reliably detect presentation attacks is presented in [4], in which 25 well established image quality measures are successfully employed to detect the photo print attack. The use of the frequency spectrum [10], time-frequency analysis using wavelet packets [6], combining different quality factors like contrast (both local and global), analysis of Purkinje reflection [15], analysis of pupil dilation [3], and statistical texture features obtained using the Gray-Level Co-occurrence Matrix [18] are also well explored.

(This work is funded by the EU 7th Framework Program under grant agreement no. 284862 for the large-scale integrated project FIDELITY.)

Figure 1. Block diagram of the proposed attack resistant visible spectrum iris recognition pipeline
In this work, we address presentation attacks on visible spectrum iris recognition by exploring the inherent characteristics of the light field camera (LFC). Adopting the LFC as a biometric capture device permits one to exploit its unique characteristic of rendering multiple focus images from a single capture. This property of the LFC is well explored in recent literature on both face [17] and iris recognition [20, 16, 12], which has shown the increased biometric performance of LFC based systems over conventional biometric sensors. In this work, we explore an intuitive way of exploiting these multiple depth images rendered by the LFC to extract information about the presence of an artefact (or spoof). To achieve this, we measure the variation of focus between the multiple depth images rendered by the LFC. The main contributions of this work are: (1) We present a unique perspective by exploring the variation of focus between multiple depth images rendered by the LFC to detect presentation attacks on visible spectrum iris biometric systems. This is the first work that explores this inherent characteristic of the LFC for PAD in visible spectrum iris biometric systems. (2) We contribute to the biometric community a new, relatively large scale database comprising both normal (or real) and artefact (or spoof) samples of visible spectrum iris collected from 52 subjects, constituting 104 unique eye patterns. The artefacts are generated using both high resolution captures of the eye image obtained with a Canon EOS 550D DSLR camera and the all-in-focus images obtained using the LFC. To date, this is the only artefact (or spoof) database collected using an LFC. (3) Extensive experiments carried out on our LFC database demonstrate the outstanding performance of the proposed PAD algorithm in mitigating presentation attacks.

The rest of the paper is organized as follows: Section 2 presents the proposed PAD using the LFC, Section 3 describes the details of our new visible spectrum iris artefact database and companion protocols, Section 4 discusses the experimental results, and Section 5 draws the conclusion.

Figure 2. Multiple depth (or focus) images rendered by the LFC on the normal (or real) image capture: (a) normal (or real) eye image (b) iris segmentation (c) iris normalization

2. Presentation attack resistant visible spectrum iris recognition

Figure 1 shows the block diagram of the presentation attack resistant visible spectrum iris recognition system based on the Light Field Camera (LFC). The proposed scheme consists of two main components, namely: (1) the proposed PAD algorithm, and (2) visible spectrum iris recognition (or baseline system).

2.1. Proposed PAD algorithm

In this section, we present our proposed scheme for robust presentation attack detection on visible spectrum iris recognition. We exploit the property of the LFC to generate multiple depth (or focus) images in a single capture attempt. Thus, our idea is to capture the variation of focus or depth exhibited between the multiple depth images, which can provide information about the presence of a presentation attack. In this work, we focus not only on cost effective and easy attacks, but also on the well adopted and accepted presentation attacks carried out using either a photo (printed using a laser printer) or an electronic screen (image displayed using a tablet). Hence, the artefacts used in our work appear to represent an in-plane object when compared to the normal (or real) eye images. Therefore, using the LFC as a biometric sensor to capture these visible spectrum eye artefacts will reflect less variation in depth than for normal (or real) eye images. Thus, it is our assertion that accurately capturing this variation in terms of focus along the multiple depth images rendered by the LFC should reveal the presence of an attack.

Figure 3. Multiple depth (or focus) images rendered by the LFC on the artefact image capture: (a) artefact eye image (b) iris segmentation (c) iris normalization

Figures 2 and 3 illustrate the multiple depth images rendered by the LFC on a normal (or real) and an artefact (photo print) eye image, respectively. The artefact image shown in Figure 3 is generated from the normal (or real) image shown in Figure 2 by printing it on good quality paper using a laser printer, which in turn is captured using the LFC. Here, one can observe two interesting facts: (1) a larger number of depth images can be obtained from the normal (or live captured) biometric characteristic capture process than from the artefact capture process; (2) more variation of the focus (or depth) information between multiple depth images is noted with the normal (or real) image than with the artefact image. These observations further justify our idea of capturing the variation of focus between multiple depth images to reliably carry out the PAD.

Figure 4. Block diagram of the proposed PAD algorithm

Figure 4 shows the block diagram of the proposed PAD algorithm based on capturing the variation of focus between multiple depth images. The proposed scheme can be structured in three important steps, namely: (1) pre-processing and focus estimation, (2) estimating the variation in focus, and (3) classification.

2.1.1 Pre-processing and focus estimation

Given the eye image I captured using the LFC, a series of pre-processing steps is carried out to obtain the normalized iris image on which the focus is determined. To this end, we employed the open source algorithm OSIRIS V4.1 [19] to carry out the iris segmentation and normalization. Figure 2 (b) & (c) show the qualitative results of iris segmentation and normalization obtained using OSIRIS V4.1 on a normal (or real) eye image. Similar results can also be visualized for the artefact eye image in Figure 3 (b) & (c). The use of the LFC to capture an eye image I results in multiple depth images such that $I = \{I_1, I_2, \ldots, I_k\}$, where $k$ corresponds to the number of depth images. We then perform the pre-processing (segmentation and normalization) on each of these depth images rendered by the LFC to obtain the corresponding normalized iris images as: $I_{Nor} = \{I_{Nor_1}, I_{Nor_2}, \ldots, I_{Nor_k}\}$.

In the next step, we measure the focus by computing the energy using the Discrete Wavelet Transform (DWT) with a Haar mother wavelet [11]. We choose this approach by considering its many advantages, which include: (1) a monotonic quantitative value with respect to the focus, so that a higher degree of focus in the image results in a greater value of the energy; (2) robustness to noise; (3) low computational cost [17]. Given the normalized iris depth image $I_{Nor_1}$, we first carry out the DWT to obtain the four different sub-images, namely: approximate ($Ia_{Nor_1}$), horizontal ($Ih_{Nor_1}$), vertical ($Iv_{Nor_1}$) and diagonal ($Id_{Nor_1}$). We then compute the energy corresponding to the three DWT detail sub-images $Ih_{Nor_1}$, $Iv_{Nor_1}$ and $Id_{Nor_1}$ as:

$Eh_{Nor_1} = \sum_{x=1}^{R} \sum_{y=1}^{C} \left( Ih_{Nor_1}(x, y) \right)^2$   (1)

$Ev_{Nor_1} = \sum_{x=1}^{R} \sum_{y=1}^{C} \left( Iv_{Nor_1}(x, y) \right)^2$   (2)

$Ed_{Nor_1} = \sum_{x=1}^{R} \sum_{y=1}^{C} \left( Id_{Nor_1}(x, y) \right)^2$   (3)

Finally, we compute the total energy that measures the focus of the normalized image $I_{Nor_1}$ as: $FE_{Nor_1} = Eh_{Nor_1} + Ev_{Nor_1} + Ed_{Nor_1}$. We repeat this procedure on the remaining normalized depth images to obtain their corresponding energy values as: $FE = \{FE_{Nor_1}, FE_{Nor_2}, \ldots, FE_{Nor_k}\}$.
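As an illustration, the focus measure of Eqs. (1)-(3) can be sketched in a few lines of NumPy. This is our sketch, not the authors' code: the single-level Haar transform is hand-rolled here rather than taken from a wavelet library, and the function name `focus_energy` is ours.

```python
import numpy as np

def focus_energy(iris_norm):
    """Focus measure of Eqs. (1)-(3): total energy of the three detail
    sub-bands of a single-level 2-D Haar DWT of a normalized iris image."""
    img = np.asarray(iris_norm, dtype=float)
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]  # even dims
    s = np.sqrt(2.0)
    lo_r = (img[0::2, :] + img[1::2, :]) / s   # low-pass along rows
    hi_r = (img[0::2, :] - img[1::2, :]) / s   # high-pass along rows
    # Split each row-filtered band along columns into low/high parts,
    # yielding the approximate band plus the three detail bands of the
    # standard single-level 2-D Haar decomposition.
    cH = (hi_r[:, 0::2] + hi_r[:, 1::2]) / s   # horizontal detail
    cV = (lo_r[:, 0::2] - lo_r[:, 1::2]) / s   # vertical detail
    cD = (hi_r[:, 0::2] - hi_r[:, 1::2]) / s   # diagonal detail
    # Eqs. (1)-(3): per-band sums of squared coefficients, then
    # FE = Eh + Ev + Ed (the approximate band is not used).
    return float(sum(np.sum(b ** 2) for b in (cH, cV, cD)))
```

A sharply focused region carries more detail-band energy than a defocused (flat) one, which is exactly the monotonicity property the measure relies on.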
2.1.2 Estimating variation in focus

After computing the energy values corresponding to the multiple depth images rendered by the LFC, we proceed to compute a quantitative value that reflects the variation in focus between these depth images. In this work, we first normalize the energy value corresponding to each depth image in $FE$ using sigmoid normalization [9] to obtain the corresponding normalized energies as: $NFE = \{NFE_{Nor_1}, NFE_{Nor_2}, \ldots, NFE_{Nor_k}\}$. In the next step, we sort the normalized energies $NFE$ in descending order as $SN = \{Sn_1, Sn_2, \ldots, Sn_k\}$. We then compute the variation in focus by taking the consecutive differences of these normalized and sorted energy values corresponding to the multiple depth images as follows:

$VF = Sn_1 - Sn_2 - \ldots - Sn_k$   (4)

where $VF$ represents the quantitative value that describes the variation in focus of the normalized iris image $I_{Nor}$.

2.1.3 Classification

In this work, we employed a simple yet accurate classifier based on an empirically determined threshold. The value of $VF$ is compared against the threshold value to classify $I_{Nor}$ as either an artefact (or spoof) or a normal (or real, live) image. The value of the threshold is obtained on the development database, as explained in Section 3.3.

2.2. Visible spectrum iris recognition system

The baseline visible spectrum iris recognition system employed in this work is based on Local Binary Patterns ($LBP^{u2}_{3 \times 3}$) and the Sparse Representation Classifier (SRC). We employed this algorithm by considering its improved performance over existing state-of-the-art algorithms on visible spectrum light field iris recognition [16]. Since the LFC renders multiple depth images, employing all of these images for recognition is not very useful from a computational perspective. In recent literature, two ways of combining the information from these depth images have been proposed, namely: (1) all-in-focus image construction [16]; (2) selection of the best focus image based on the highest energy [12]. Since the performance accuracies of these two schemes are quite similar, in this work we adopt the second scheme, which selects the best focus image corresponding to the highest energy. As we have already computed the energy on each of the focus images rendered by the LFC (see Section 2.1), we can select the best focus image without any additional computational cost. Thus, our final baseline visible spectrum iris recognition system is based on the best focus image, on which the combination of $LBP^{u2}_{3 \times 3}$ and SRC is carried out to obtain the comparison score.

Figure 5. Illustration of the data collection setup: (a) normal (or real) image acquisition setup (b) artefact image acquisition setup

3. Database construction

Our visible spectrum iris artefact database consists of both normal (or real) and artefact samples collected from 52 subjects, resulting in 104 unique eye samples. In the following, we present the data collection protocols adopted to collect both normal (or real) and artefact iris samples. Further, we also present the evaluation protocols that can be used to benchmark the effectiveness of the proposed PAD algorithm on visible spectrum iris recognition.

3.1. Normal (or real) visible spectrum iris data collection

The normal (or real) visible spectrum iris data collection is carried out at our laboratory in an indoor scenario that has both natural (sun) and artificial (room) lighting. The data collection is carried out over a period of 6 months in a single session. Figure 5 (a) shows the normal (or real) image acquisition setup. Each subject is asked to stand in front of the capture device at a distance of 9-15 inches from the camera. We then capture the samples using two different cameras, namely: (1) a Lytro LFC with 1.2 megapixels; (2) a Canon EOS 550D DSLR camera with 18.1 megapixels. For each subject, we collected 5 different samples using both the Lytro LFC and the Canon DSLR camera independently. Our idea in using the Canon DSLR camera is only to generate the high quality artefacts that are explained in the next section. The whole database consists of 520 (= 104 x 5) high resolution DSLR eye images and 4327 normal (or real) light field eye images (including multiple depth images).

3.2. Artefact visible spectrum iris data collection

The artefact visible spectrum iris database is captured using only the Lytro LFC, reflecting acquisition conditions similar to those followed while capturing the normal (or real) iris images. Figure 5 (b) illustrates the collection of an artefact image captured via the electronic screen attack. In this work, we generate the artefacts from both the high resolution DSLR camera samples and the light field camera samples. We are motivated to include attacks using high resolution eye samples because: (1) an attacker can easily obtain high resolution eye images (at least cropped from a face image) from widespread social media, or capture them in a non-intrusive way without the notice of the legitimate user; (2) such an attack is not only easy and cost effective but can also successfully subvert the visible spectrum iris biometric system.
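Returning briefly to the PAD algorithm of Section 2.1, the variation-of-focus statistic of Section 2.1.2 (Eq. (4)) and the threshold classifier of Section 2.1.3 can be sketched together as follows. This is only an illustration under stated assumptions: the exact sigmoid parameters (the paper cites [9] without giving them) and the decision polarity (larger VF taken to indicate a live capture) are our assumptions; in the paper the threshold itself is fixed at the development-set EER.

```python
import numpy as np

def variation_of_focus(focus_energies, center=0.0, scale=1.0):
    """Eq. (4): sigmoid-normalize the per-depth-image focus energies,
    sort them in descending order, and take VF = Sn1 - Sn2 - ... - Snk.
    The sigmoid's center/scale are illustrative assumptions."""
    fe = np.asarray(focus_energies, dtype=float)
    nfe = 1.0 / (1.0 + np.exp(-(fe - center) / scale))  # sigmoid normalization
    sn = np.sort(nfe)[::-1]                             # descending order
    return float(sn[0] - np.sum(sn[1:]))

def classify(focus_energies, threshold):
    """Sec. 2.1.3: compare VF against an empirically fixed threshold.
    We assume VF >= threshold indicates a normal (live) capture."""
    vf = variation_of_focus(focus_energies)
    return 'normal' if vf >= threshold else 'artefact'
```

In the full pipeline, `focus_energies` would be the values $FE = \{FE_{Nor_1}, \ldots, FE_{Nor_k}\}$ computed in Section 2.1.1 for the $k$ depth images of one capture.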
For the high resolution samples captured using the DSLR camera, we generate the artefacts using three different methods, namely: (1) Photo print artefact: every high quality sample captured using the DSLR camera is first printed on good quality A4 paper using a RICOH ATICIO MP C4502 laser printer and then recaptured using the Lytro LFC. This procedure resulted in a print artefact database with 1122 light field samples (each consisting of multiple depth images) corresponding to the 520 high resolution DSLR samples. (2) Screen artefact using iPad: we record the artefacts by displaying each DSLR captured image on an iPad (4th generation) with retina display to the Lytro LFC, as shown in Figure 5 (b). This artefact database consists of 1444 light field samples (each consisting of multiple depth images) corresponding to the 520 high resolution DSLR camera samples. (3) Screen artefact using Samsung Galaxy Note 10.1: in order to more effectively analyze the effect of electronic screen attacks, we also collect artefact samples using a Samsung Galaxy Note 10.1. Here, we capture the artefact samples by storing the high quality samples captured using the DSLR on the Samsung tablet, which in turn is presented to the LFC. This database comprises 1208 light field samples, including multiple depth images. Figure 6 shows examples of the artefact images captured using the iPad (Figure 6 (b)), Samsung tablet (Figure 6 (c)) and photo print (Figure 6 (d)), while Figure 6 (a) shows the corresponding normal (or real) image.

Figure 6. Illustration of artefacts generated from DSLR images: (a) normal (or real) image (b) iPad (c) Samsung tablet (d) photo print

In addition to the above mentioned artefact generation using high quality print-outs of the enrolled iris samples, we also consider generating artefacts from the normal (or real) samples collected using the LFC. This allows one to study the impact of a presentation attack in which an attacker successfully gains access to the enrolled sample of the legitimate user, and later the same sample is used to generate the artefact that subverts the biometric sensor (i.e. the LFC). Since our baseline system uses only the best focus image selected from the set of multiple focus images rendered by the LFC, we used the best focus image corresponding to each subject to generate the corresponding artefact sample. For simplicity, here we generate the artefact sample by presenting the best focus sample using both the iPad and the Samsung tablet, one at a time, to the Lytro LFC. The artefact image database collected using the iPad results in a total of 1860 light field samples, while the use of the Samsung tablet resulted in a total of 1973 light field samples. Figure 7 shows examples of the artefacts captured from the LFC using the iPad (Figure 7 (b)) and Samsung tablet (Figure 7 (c)).

Figure 7. Illustration of artefacts generated from best focus images obtained from the Lytro LFC: (a) normal (or real) image (b) iPad (c) Samsung tablet

3.3. Performance evaluation protocol

In this section, we describe the performance evaluation protocol used to effectively analyze the prominence of our generated visible spectrum iris artefacts and also to benchmark the performance of our proposed PAD algorithm. The whole database of 104 unique irises is divided into three independent, non-overlapping groups, namely: training, development and testing. The training set consists of 4 unique eyes, the development set consists of 20 unique eyes, and the testing set consists of 80 unique eyes. In this work, we employed the development set to tune the proposed PAD algorithm and also to fix the value of the threshold, which is further used for the PAD classification. The value of the threshold adopted in this work corresponds to the Equal Error Rate (EER) calculated on the development dataset. The test dataset is solely used to report the performance of the proposed PAD algorithm. Further, in order to evaluate the vulnerability of the visible spectrum iris recognition system to the artefacts collected in this work, we additionally divide both the development and testing datasets into reference and probe samples. As we collected 5 samples for each subject, we choose the first four samples as references and the last sample as the probe. With this setting of reference and probe, we report the baseline performance with normal (or real) visible spectrum iris biometric samples, while, to study the vulnerability of the system to the
presentation attacks, we employ the artefact samples (generated as described in Section 3.2) as the probe samples.

Figure 8. Overlapped comparison scores obtained on normal and artefact (photo print) samples (best viewed in color)

3.4. Availability of database

The database shall be distributed for academic and research purposes via: http://www.nislab.no/biometrics_lab/guc_lf_viar_db.

4. Experiments and results

The experimental results presented in this work are obtained according to the protocol presented in Section 3.3. The performance of the proposed PAD algorithm is measured using two kinds of errors [8], namely: (1) the Attack Presentation Classification Error Rate (APCER), which reports the proportion of attack presentations (with a fake or artefact) that are incorrectly classified as normal (real) presentations; (2) the Normal Presentation Classification Error Rate (NPCER), which reports the proportion of normal presentations incorrectly classified as attacks. Finally, the overall performance of the PAD algorithm is presented in terms of the Average Classification Error Rate (ACER), such that:

$ACER = \frac{APCER + NPCER}{2}$   (5)

The lower the value of the ACER, the better the performance. The values of APCER, NPCER and ACER are computed with the classifier threshold. The classifier threshold is calculated on the development dataset such that the threshold value corresponds to the point where APCER and NPCER are equal (i.e., the EER mentioned in Section 3.3) on the development dataset.

Figure 8 shows the distribution of the comparison scores obtained using the baseline visible spectrum iris system on both normal (or real) and artefact (photo print attack) samples. Here, one can observe the strong overlap of the artefact comparison scores with the genuine comparison scores obtained on the normal (or real) samples. In the situation illustrated in Figure 8, the baseline system has accepted 96.78% of the artefact samples as genuine samples. This illustrates not only the quality of the artefacts employed in this work but also the strong need for an efficient and robust PAD algorithm.

Figure 9. Verification performance of the proposed PAD on the photo print artefact corresponding to DSLR samples (best viewed in color)

Figure 9 shows the verification performance of the baseline visible spectrum iris system under attack (photo print). It can be observed that, despite the good performance of the baseline system on normal (or real) samples (EER = 2.26%, shown in green), when exposed to the attack the performance is heavily degraded (shown in red). This can be attributed to the strong overlap of genuine and artefact comparison scores shown in Figure 8. Further, it can also be observed from Figure 9 that applying the proposed PAD algorithm to mitigate the presentation attacks brings the baseline performance back to the normal level (indicated by the overlap of the green and blue lines in Figure 9). Similar results can also be observed for the iPad attack that uses Lytro samples, as shown in Figure 10. We illustrate these two cases for simplicity; however, similar observations can also be made for all five kinds of attacks addressed in this work.

Table 1. Performance of the proposed PAD algorithm

Presentation attacks                          | APCER (%) | NPCER (%) | ACER (%)
Artefacts from DSLR samples:  Photo Print     |   0.25    |   0.75    |   0.50
Artefacts from DSLR samples:  iPad            |   1.50    |   0.50    |   1.00
Artefacts from DSLR samples:  Samsung Tablet  |   0.75    |   0.50    |   0.62
Artefacts from Lytro samples: iPad            |   1.20    |   3.70    |   2.45
Artefacts from Lytro samples: Samsung Tablet  |   1.40    |   0.90    |   1.15

Table 1 presents the quantitative performance of the proposed PAD scheme on all five kinds of attacks discussed in this work. The quantitative results shown in Table 1 are obtained by performing 10-fold cross validation to partition the whole database into training, development and testing sets. It can be observed that the proposed PAD algorithm shows outstanding performance, especially on the print attack, with an ACER of 0.5%. Similar performance can also be acknowledged on the remaining four kinds of attacks. Thus, based on the above experiments, the proposed PAD algorithm, which explores the variation of focus between multiple depth images rendered by the LFC, provides a new dimension for biometric applications.

Figure 10. Verification performance of the proposed PAD on the iPad artefact corresponding to Lytro samples (best viewed in color)

In addition, the proposed PAD algorithm offers the following advantages: (1) since we are exploring an inherent characteristic of the LFC, the proposed PAD scheme works as an integral component rather than as the stand-alone unit normally used in the available state-of-the-art schemes; (2) it overcomes the need for additional feature extraction schemes based on LBP, quality defining parameters, and complex classifiers. Further, there is no need for additional hardware components, and the proposed scheme is highly computationally efficient.

5. Conclusion

In this work, we have presented a novel PAD algorithm that exploits the variation of depth between the multiple depth images rendered by the LFC. This work explores the inherent characteristics of the LFC that can reveal the presence of presentation attacks. To this end, we collected a new artefact visible spectrum iris database by simulating five different kinds of attacks. The artefacts are generated using both high quality visible spectrum iris samples obtained with a DSLR camera and the light field iris samples. The attacks are carried out by presenting these artefacts via photo print and electronic screens (using iPad and Samsung tablet) to the LFC, which is used as the biometric sensor for the visible spectrum iris recognition system. Extensive experiments carried out on our database show that the proposed PAD algorithm achieves a lowest ACER of 0.50% on the photo (printed using a laser printer) attack.

References

[1] LivDet-Iris competition. http://people.clarkson.edu/projects/biosal/iris/index.php.
[2] Mobile iris liveness detection competition (MobILive 2014). http://mobilive2014.inescporto.pt.
[3] A. Czajka, P. Strzelczyk, and A. Pacut. Making iris recognition more reliable and spoof resistant. In SPIE, 2007.
[4] J. Galbally, S. Marcel, and J. Fierrez. Image quality assessment for fake biometric detection: Application to iris, fingerprint, and face recognition. IEEE Transactions on Image Processing, 23(2):710–724, Feb 2014.
[5] X. He, Y. Lu, and P. Shi. A fake iris detection method based on FFT and quality assessment. In Chinese Conference on Pattern Recognition, pages 1–4, Oct 2008.
[6] X. He, Y. Lu, and P. Shi. A new fake iris detection method. In M. Tistarelli and M. Nixon, editors, Advances in Biometrics, volume 5558, pages 1132–1139. Springer Berlin Heidelberg, 2009.
[7] K. Hughes and K. W. Bowyer. Detection of contact-lens-based iris biometric spoofs using stereo imaging. In 46th Hawaii International Conference on System Sciences, pages 1763–1772, Jan 2013.
[8] ISO/IEC JTC1 SC37 Biometrics. ISO/IEC WD 30107-3:2014 Information Technology - presentation attack detection - Part 3: testing and reporting and classification of attacks. International Organization for Standardization, 2014.
[9] A. Jain, K. Nandakumar, and A. Ross. Score normalization in multimodal biometric systems. Pattern Recognition, 38(12):2270–2285, 2005.
[10] J. Daugman. Iris recognition and anti-spoofing countermeasures. In 7th International Biometrics Conference, 2004.
[11] J. Kautsky, J. Flusser, B. Zitová, and S. Šimberová. A new wavelet-based measure of image focus. Pattern Recognition Letters, 23(14):1785–1794, 2002.
[12] Kiran B. Raja, R. Raghavendra, F. A. Cheikh, B. Yang, and C. Busch. Robust iris recognition using light field camera. In The Colour and Visual Computing Symposium 2013, 2013.
[13] O. Komogortsev and A. Karpov. Liveness detection via oculomotor plant characteristics: Attack of mechanical replicas. In International Conference on Biometrics (ICB), pages 1–8, June 2013.
[14] S. J. Lee, K. R. Park, and J. Kim. Robust fake iris detection based on variation of the reflectance ratio between the iris and the sclera. In Biometric Consortium Conference, pages 1–6, Sept 2006.
[15] A. Pacut and A. Czajka. Aliveness detection for iris biometrics. In Carnahan Conferences Security Technology, Proceedings 2006 40th Annual IEEE International, pages 122–129, Oct 2006.
