JOURNAL OF COMPUTING, VOLUME 3, ISSUE 6, JUNE 2011, ISSN 2151-9617

HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/
WWW.JOURNALOFCOMPUTING.ORG 72

Iris Recognition Using Modified Fuzzy
Hyperline Segment Neural Network
S. S. Chowhan, U. V. Kulkarni and G. N. Shinde
Abstract—In this paper we describe iris recognition using the Modified Fuzzy Hyperline Segment Neural Network (MFHLSNN) with its learning algorithm, an extension of the Fuzzy Hyperline Segment Neural Network (FHLSNN) proposed by Kulkarni et al. The steps of iris recognition are iris segmentation, normalization, feature extraction and classification. The MFHLSNN utilizes fuzzy sets as pattern classes, in which each fuzzy set is a union of fuzzy hyperline segments. A fuzzy hyperline segment is an n-dimensional hyperline segment defined by two end points with a corresponding membership function. We have evaluated the performance of the MFHLSNN classifier using different distance measures, and observed that the Bhattacharyya distance is superior in terms of training and recall time as compared to the other distance measures. The feasibility of the MFHLSNN has been effectively evaluated on the CASIA database.

Index Terms—Bhattacharyya distance, Fuzzy Neural Network, Integro-differential operator, Iris Patterns.



1 INTRODUCTION
Of late, iris recognition has become a dynamic theme for security applications, with an emphasis on personal identification based on biometrics. Other biometric features include face, fingerprint, palmprints, retina, gait, hand geometry, etc. [1, 2]. Research has also affirmed that the iris is essentially stable over a person's life, and iris-based personal identification systems can be more noninvasive for the users [4, 5]. Iris boundaries can be approximated as two non-concentric circles, so we must determine the inner and outer boundaries with their relevant radii and centers. Iris segmentation locates the legitimate part of the iris, which is often partially occluded by eyelashes, eyelids and shadows. Other challenges for iris segmentation include motion blur, poor contrast and defocusing [4, 13]. In segmentation, it is desired to discriminate the iris texture from the rest of the image. An iris is normally segmented by detecting its inner (pupil) and outer (limbus) boundaries [3]; well-known methods such as the integro-differential operator, the Hough transform and active contour models have been successful in detecting these boundaries. In 1993, Daugman proposed an integro-differential operator to find both the iris inner and outer borders [5].
Wildes represented the iris texture with a Laplacian pyramid constructed at four different resolution levels and used normalized correlation to determine whether the input image and the model image are from the same class [6]. O. Byeon and T. Kim decomposed an iris image into four levels using the 2D Haar wavelet transform and quantized the fourth-level high-frequency information to form an 87-bit code; a modified competitive learning neural network (LVQ) was used for classification [7].
Kong and Zhang developed an occlusion detection model in which eyelashes are detected and verified based on predefined criteria, and reflections are verified by a statistical test [8]. L. Ma, Y. Wang, and T. Tan applied multichannel Gabor filtering to capture both global and local features from an iris [9]. J. Daugman used multiscale quadrature wavelets to extract texture phase structure information of the iris to generate a 2048-bit iris code, and compared pairs of iris representations by computing their Hamming distance [10, 21,
30]. Tisse et al. used a combination of the integro-differential operators with a Hough transform for localization, and the concept of instantaneous phase or emergent frequency for feature extraction. The iris code is generated by thresholding both the model of emergent frequency and the real and imaginary parts of the instantaneous phase [11]. The comparison between iris signatures produces a numeric dissimilarity value: if this value is higher than a threshold, the system outputs a non-match, meaning the two patterns belong to different irises; otherwise, the system outputs a match [15].
Tan et al. observed that efficient and robust segmentation of noisy iris images is one of the bottlenecks of non-cooperative iris recognition [23]. To overcome this problem they proposed a novel iris segmentation algorithm: a clustering-based coarse iris localization scheme is first performed to extract a rough position of the iris and to identify non-iris regions such as eyelashes and eyebrows. A novel integro-differential constellation is then constructed for the localization of the pupillary and limbic boundaries, which not only accelerates the traditional

- S. S. Chowhan is with the College of Computer Science and Information Technology, Latur, Maharashtra, India.
- U. V. Kulkarni is with the Department of Computer Science and Engineering, SGGS Institute of Engineering and Technology, Nanded, Maharashtra, India.
- G. N. Shinde is with the Dept. of Computer Science and Electronics, Indira Gandhi College, CIDCO, Nanded, Maharashtra, India.



integro-differential operator but also enhances its global
convergence. Ruggero Donida Labati et al. represented the detection of the iris center and boundaries using neural networks. Their algorithm starts from an initial random point in the input image and then processes a set of local image properties in a circular region of interest, searching for the peculiar transition patterns of the iris boundaries. A trained neural network processes the parameters associated with the extracted boundaries and estimates the offsets along the vertical and horizontal axes with respect to the estimated center [24].
2 IRIS SEGMENTATION
In segmentation, the goal is to determine the inner and outer boundaries of the pupil and iris with their relevant radii and centers. The most popular methods are the integro-differential operator, the Hough transform and active contours. In order to localize an iris, Daugman proposed the integro-differential operator [5], which assumes that the inner and outer boundaries (pupil and limbus) are circular contours and behaves as a circular edge detector. Detecting the upper and lower eyelids is also performed with the integro-differential operator by adjusting the contour search from a circle to an arc. The integro-differential operator is defined as:

$$\max_{(r,\,x_0,\,y_0)} \left|\, G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\, ds \,\right| \quad (1)$$

where I(x, y) is an image. The operator searches over the image domain (x, y) for the maximum in the blurred partial derivative, with respect to increasing radius r, of the normalized contour integral of I(x, y) along a circular arc ds of radius r and center coordinates (x_0, y_0). The symbol * denotes convolution and G_σ(r) is a smoothing function such as a Gaussian of scale σ. The operator behaves like a circular edge detector, blurred at a scale set by σ, which searches iteratively for the maximum contour integral derivative with increasing radius, at successively finer scales of analysis, through the three-parameter space of center coordinates and radius (x_0, y_0, r) defining a path of contour integration.
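The search in equation (1) can be sketched in a few lines of NumPy. This is a minimal illustration for a single candidate centre, not Daugman's implementation; the sampling density, Gaussian width and function names are illustrative choices.

```python
import numpy as np

def contour_mean(img, x0, y0, r, n_pts=64):
    # Mean intensity along a circle of radius r centred at (x0, y0):
    # the normalized contour integral of equation (1).
    angles = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    xs = np.clip((x0 + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, x0, y0, radii, sigma=2.0):
    # Blurred partial derivative, w.r.t. radius, of the contour
    # integral for one candidate centre; returns the radius with the
    # strongest circular edge and its score.
    means = np.array([contour_mean(img, x0, y0, r) for r in radii])
    deriv = np.abs(np.diff(means))              # d/dr of the integral
    kernel = np.exp(-np.arange(-3, 4) ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()                      # G_sigma(r)
    smooth = np.convolve(deriv, kernel, mode='same')
    best = smooth.argmax()
    return radii[best + 1], smooth[best]
```

A full localizer would repeat this search over a grid of candidate centres (x0, y0) and keep the global maximum.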


Once the inner and outer borders are located, the ROI can be isolated from the image and stored separately. The normalization algorithm always depends on the algorithm used for feature vector extraction and matching. Therefore, the iris texture should be made clear and the noise factors that lead to matching errors should be eliminated [12]. We have implemented Daugman's rubber sheet model, in which the annular part is transformed to a rectangular region, i.e. a Cartesian-to-polar transform that remaps each pixel in the iris area into a pair of polar coordinates [13]. Most normalization approaches based on the Cartesian-to-polar transformation unwrap the iris texture into a fixed-size rectangular block, with

$$\theta \in [0, 2\pi], \quad r \in [0, 1] \quad (2)$$

The transform maps the non-concentric circular iris region to the dimensionless polar coordinate system, as represented in equation (3).

$$I\big(x(r,\theta),\, y(r,\theta)\big) \rightarrow I(r, \theta) \quad (3)$$

where r lies in the unit interval [0, 1] and θ is the angle, with

$$x(r,\theta) = (1 - r)\, x_p(\theta) + r\, x_s(\theta) \quad (4)$$

$$y(r,\theta) = (1 - r)\, y_p(\theta) + r\, y_s(\theta) \quad (5)$$

where (x_p, y_p) and (x_s, y_s) are the coordinates of the inner and outer boundaries in the direction of θ.
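Equations (2)–(5) amount to a simple remapping loop. The sketch below assumes circular pupil and iris boundaries given as (xc, yc, radius) triples and nearest-neighbour sampling; a production system would use interpolation and mask out eyelid occlusions.

```python
import numpy as np

def rubber_sheet(img, pupil, iris, n_r=64, n_theta=256):
    # Daugman rubber-sheet model: remap the annular iris region into a
    # fixed-size rectangular block using equations (2)-(5).
    xp, yp, rp = pupil   # inner (pupil) boundary circle
    xs, ys, rs = iris    # outer (limbus) boundary circle
    out = np.zeros((n_r, n_theta))
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    for j, theta in enumerate(thetas):
        # Boundary points in the direction theta (equations (4)-(5)).
        x_p, y_p = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
        x_s, y_s = xs + rs * np.cos(theta), ys + rs * np.sin(theta)
        for i, r in enumerate(np.linspace(0, 1, n_r)):
            x = (1 - r) * x_p + r * x_s
            y = (1 - r) * y_p + r * y_s
            out[i, j] = img[int(round(y)) % img.shape[0],
                            int(round(x)) % img.shape[1]]
    return out
```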
3 FEATURE EXTRACTION

3.1 Spatial 2D Gabor filters
Gabor elementary functions are Gaussians modulated by oriented complex sinusoidal functions, used especially for texture analysis in fingerprint and face recognition [19]. Gabor filters are widely used in many applications such as texture segmentation, target detection, fractal dimension management, document analysis, edge detection, retina identification and image coding. Here, the filters are tuned to the characteristics of the iris texture to capture local details of the iris [1]. Gabor elementary functions can be defined as:


Fig. 2. ROI is extracted: (a) normalized image and (b) enhanced image.

Fig. 1. Samples of CASIA images with occlusion, where the inner and outer boundaries are detected.


$$G(x, y; f, \theta) = \exp\!\left[-\frac{1}{2}\left(\frac{x'^2}{\sigma_x^2} + \frac{y'^2}{\sigma_y^2}\right)\right] \cos(2\pi f x') \quad (6)$$

$$x' = x \cos\theta + y \sin\theta$$

$$y' = y \cos\theta - x \sin\theta$$

Here f is the frequency of the sinusoidal function along the direction θ, σ_x and σ_y are the space constants of the Gaussian envelope along the x and y axes, respectively, and θ denotes the orientation of the Gabor filter. For the defined filter, σ_x equals σ_y. In our experiments, the central frequencies used are 8, 10, 12, 14 and 16, and each filtering is performed at θ = 0, π/4, π/2, 3π/4.
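The even-symmetric filter of equation (6) is straightforward to construct. In the sketch below the kernel size and σ are illustrative, and the listed central frequencies are interpreted as wavelengths in pixels (f = 1/λ), which is an assumption since the paper does not state the units.

```python
import numpy as np

def gabor_kernel(f, theta, sigma, size=31):
    # Even-symmetric Gabor filter of equation (6): a Gaussian envelope
    # modulated by a cosine of spatial frequency f along orientation theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)   # x'
    y_r = y * np.cos(theta) - x * np.sin(theta)   # y'
    # sigma_x = sigma_y = sigma, as stated for the defined filter.
    envelope = np.exp(-0.5 * (x_r ** 2 + y_r ** 2) / sigma ** 2)
    return envelope * np.cos(2 * np.pi * f * x_r)

# The paper's bank: five central frequencies, four orientations
# (frequency-as-wavelength interpretation is an assumption).
bank = [gabor_kernel(1.0 / lam, t, sigma=4.0)
        for lam in (8, 10, 12, 14, 16)
        for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```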

3.2 Feature Vector
Using Gabor filters with different frequency responses over the entire ROI, we can generate more discriminating features. We generate 20 Gabor filters with different frequencies and orientations. Since different irises have distinct dominant frequencies, after filtering the ROI we extract feature values and convert them into a feature vector:

$$F = [f_1, f_2, f_3, \ldots, f_n]^T \quad (7)$$
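The paper does not name the statistic taken from each filtered image, so the sketch below assumes the mean absolute response per filter, which yields a 20-dimensional vector for a bank of 20 kernels; `filter_response` uses circular frequency-domain convolution for brevity.

```python
import numpy as np

def filter_response(img, kernel):
    # Circular (frequency-domain) convolution of the normalized iris
    # block with one Gabor kernel.
    kh, kw = kernel.shape
    pad = np.zeros_like(img)
    pad[:kh, :kw] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def feature_vector(img, bank):
    # Equation (7): one value per filtered image. The statistic (mean
    # absolute response) is an assumption; the paper does not specify it.
    return np.array([np.abs(filter_response(img, k)).mean() for k in bank])
```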
4 TOPOLOGY OF MFHLSNN
The architecture of the MFHLSNN consists of four layers, as shown in Fig. 4. The first, second, third and fourth layers are denoted F_R, F_E, F_D and F_C, respectively. The F_R layer accepts an input pattern and consists of n processing elements, one for each dimension of the pattern. The F_E layer consists of m processing nodes that are constructed during training. There are two connections from each F_R node to each F_E node: one connection represents one end point for that dimension and the other connection represents the other end point of that dimension, for a particular hyperline segment. Each F_E node represents a hyperline segment fuzzy set and is characterized by its transfer function. Let R_h = (r_h1, r_h2, ..., r_hn) represent the hth input pattern, V_j = (v_j1, v_j2, ..., v_jn) be one end point of the hyperline segment e_j, and W_j = (w_j1, w_j2, ..., w_jn) be the other end point of e_j. Then the membership function of the jth F_E node is defined as:

$$e_j(R_h, V_j, W_j) = 1 - f(x, \gamma, l) \quad (8)$$

in which x = l_1 + l_2, and the distances l_1, l_2 and l are defined, for the chosen distance measure D, as:

$$l_1 = D(W_j, R_h) \quad (9)$$

$$l_2 = D(V_j, R_h) \quad (10)$$

$$l = D(W_j, V_j) \quad (11)$$

Here we have used the Bhattacharyya distance measure [26] to determine l_1, l_2 and l, as defined in equations (9), (10) and (11). The earlier version, the FHLSNN, used the Euclidean













Fig.4. Modified Fuzzy hyperline segment neural network.






Fig. 3. (a) Real part, (b) imaginary part and (c) filtered image of the iris.

distance to determine l_1, l_2 and l [25]. The results are also compared with other distance measures, as depicted in Table 1.

$$f(x, \gamma, l) = 0, \quad \text{if } x = l \quad (12)$$

otherwise

$$f(x, \gamma, l) = \begin{cases} x\gamma & \text{if } 0 \le x\gamma \le 1 \\ 1 & \text{if } x\gamma > 1 \end{cases} \quad (13)$$

The modified fuzzy hyperline segment membership function for γ = 1 and with end points w = [0.5 0.3] and v = [0.5 0.7] is shown in Fig. 5. This membership function returns the highest membership value, equal to one, if the pattern R_h falls on the hyperline segment joined by the two end points V_j and W_j. The membership value is governed by the sensitivity parameter γ, which regulates how fast the membership value decreases as the distance between R_h and e_j increases. For a given input pattern R_h, the output value of e_j is computed using equation (8).
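Equations (8)–(13) can be collected into a single function. The sketch below defaults to Euclidean distances; the Bhattacharyya helper uses the standard form of that distance [26] and assumes nonnegative pattern components (the exact expression used in the paper is an assumption here).

```python
import numpy as np

def bhattacharyya(a, b):
    # Standard Bhattacharyya distance between two nonnegative vectors
    # (assumed normalized); the small constant guards against log(0).
    return -np.log(np.sum(np.sqrt(a * b)) + 1e-12)

def membership(r_h, v, w, gamma=1.0, dist=None):
    # Equations (8)-(13): membership of pattern r_h in the fuzzy
    # hyperline segment with end points v and w.
    dist = dist or (lambda p, q: float(np.linalg.norm(p - q)))
    l1 = dist(w, r_h)          # equation (9)
    l2 = dist(v, r_h)          # equation (10)
    l = dist(w, v)             # equation (11)
    x = l1 + l2
    if np.isclose(x, l):       # r_h falls on the segment: f = 0
        f = 0.0
    else:
        f = min(x * gamma, 1.0)   # equations (12)-(13)
    return 1.0 - f             # equation (8)
```

Passing `dist=bhattacharyya` reproduces the paper's modification; the default Euclidean distance reproduces the earlier FHLSNN behaviour.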
Each node of the F_C and F_D layers represents a class. The F_D layer gives a soft decision, and the output of the kth F_D node represents the degree to which the input pattern belongs to the class d_k. The weights assigned to the connections between the F_E and F_D layers are binary values, stored in the matrix U, and are defined as:

$$u_{jk} = \begin{cases} 1 & \text{if } e_j \text{ is a hyperline segment of class } d_k \\ 0 & \text{otherwise} \end{cases} \quad (14)$$
for k = 1, 2, ..., p and j = 1, 2, ..., m, where e_j is the jth F_E node and d_k is the kth F_D node.

The transfer function of each F_D node performs the union of the appropriate (same-class) hyperline segment fuzzy values, which is described as:

$$d_k = \max_{j = 1, \ldots, m} e_j\, u_{jk} \quad \text{for } k = 1, 2, \ldots, p \quad (15)$$
Each F_C node delivers a non-fuzzy output, which is described as:

$$c_k = \begin{cases} 0 & \text{if } d_k < T \\ 1 & \text{if } d_k = T \end{cases}, \quad \text{where } T = \max(d_k) \text{ for } k = 1 \text{ to } p \quad (16)$$
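The F_E → F_D → F_C forward pass of equations (14)–(16) then reduces to a max over same-class memberships followed by a hard decision. A minimal sketch, using Euclidean distances in the membership for brevity:

```python
import numpy as np

def _membership(r_h, v, w, gamma=1.0):
    # Fuzzy hyperline segment membership, equations (8)-(13),
    # with Euclidean distances for brevity.
    l1 = np.linalg.norm(w - r_h)
    l2 = np.linalg.norm(v - r_h)
    l = np.linalg.norm(w - v)
    x = l1 + l2
    f = 0.0 if np.isclose(x, l) else min(x * gamma, 1.0)
    return 1.0 - f

def classify(r_h, segments, seg_class, n_classes, gamma=1.0):
    # Forward pass through the F_E, F_D and F_C layers.
    # segments: list of (V_j, W_j) end-point pairs; seg_class[j] is the
    # class index of segment j (it encodes the binary U matrix of (14)).
    e = np.array([_membership(r_h, v, w, gamma) for v, w in segments])
    d = np.zeros(n_classes)                  # F_D: soft class scores
    for j, k in enumerate(seg_class):
        d[k] = max(d[k], e[j])               # equation (15): union (max)
    c = (d == d.max()).astype(int)           # equation (16): hard decision
    return c, d
```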

5 MFHLSNN LEARNING ALGORITHM
The supervised MFHLSNN learning algorithm for creating fuzzy hyperline segments in hyperspace consists of three steps: A. creation of hyperline segments, B. intersection test, and C. removing intersection. These steps are described below in detail.
5.1 Creation of hyperline segments:
The length of hyperline segment is bounded by the para-
meter. ,
m
. . s s 0 and
m
. depends on the dimen-
sion of feature vector. In the learning process appropriate
values of . is selected and hyperline segment is ex-
tended only when the length of hyperline segment after
extension is less than or equal to . . Assuming that the
training set defined as { } P h R R
h
,..., 2 , 1 | = e , the
learning starts by applying the patterns one by one from
the pattern set R. Given the th h training pairs ( , )
h h
R d ,
find all the hyperline segments belonging to the class
h
d .
After this following four sub steps are carried out sequen-
tially for possible inclusion of input patterns
h
R .

Sub-step 1: Determine whether the pattern R_h falls on any one of the hyperline segments. This can be verified using the membership function described in equation (8). If R_h falls on a hyperline segment then it is already included, so the remaining steps of the training process are skipped and training continues with the next pair.

Sub-step 2: If the pattern R_h falls on the hyperline passing through the two end points of a hyperline segment, then extend that segment to include the pattern. Suppose e_i is that hyperline segment with end points V_i and W_i; then l_1, l_2 and l are calculated using the Bhattacharyya distance measure as stated in equations (9), (10) and (11), where l_1 is the distance of R_h from the end point W_i, l_2 is the distance of R_h from the end point V_i, and l is the length of the hyperline segment.

2(a): If l_1 > l_2, test whether the point V_i falls on the hyperline segment formed by the points W_i and R_h. This condition can be tested using equation (8), i.e. if e_i(V_i, R_h, W_i) = 1, then the hyperline segment is extended by replacing the end point V_i with R_h, provided the extension criterion is satisfied. Hence

$$V_i^{new} = R_h \quad \text{and} \quad W_i^{new} = W_i \quad (17)$$













Fig.5. Modified Fuzzy Hyperline Segment membership function

2(b): If l_2 > l_1, test whether the point W_i falls on the hyperline segment formed by the points V_i and R_h. If e_i(W_i, R_h, V_i) = 1, the hyperline segment is extended by replacing the end point W_i with R_h, provided the extension criterion is satisfied. Hence

$$W_i^{new} = R_h \quad \text{and} \quad V_i^{new} = V_i \quad (18)$$

Sub-step 3: If the hyperline segment is a point, then extend it to include the pattern R_h, if the extension criterion is satisfied, as described by equation (17).

Sub-step 4: If the pattern R_h is not included by any of the above sub-steps, then a new hyperline segment is created for that class, described as

$$W^{new} = R_h \quad \text{and} \quad V^{new} = R_h \quad (19)$$
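The four sub-steps can be sketched as one pass over the segments of the pattern's own class. Euclidean distances are used for brevity, `zeta` is the length bound ζ, and the intersection handling of Sections 5.2–5.3 is omitted; this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def learn_pattern(r_h, segments, zeta):
    # One pass of Section 5.1: include r_h in an existing segment,
    # extend one (sub-steps 1-3), or create a new point segment
    # (sub-step 4). segments is a list of (V, W) end-point pairs.
    segments = list(segments)
    for i, (v, w) in enumerate(segments):
        l1 = np.linalg.norm(w - r_h)        # equation (9)
        l2 = np.linalg.norm(v - r_h)        # equation (10)
        l = np.linalg.norm(w - v)           # equation (11)
        if np.isclose(l1 + l2, l):          # sub-step 1: already included
            return segments
        if l1 > l2 and np.isclose(l2 + l, l1) and l1 <= zeta:
            segments[i] = (r_h, w)          # sub-step 2(a), equation (17)
            return segments
        if l2 > l1 and np.isclose(l1 + l, l2) and l2 <= zeta:
            segments[i] = (v, r_h)          # sub-step 2(b), equation (18)
            return segments
        if np.isclose(l, 0.0) and l1 <= zeta:
            segments[i] = (r_h, w)          # sub-step 3: point segment
            return segments
    segments.append((r_h, r_h))             # sub-step 4, equation (19)
    return segments
```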
5.2 Intersection test:
The learning algorithm allows intersection of hyperline segments from the same class and eliminates intersection between hyperline segments from separate classes. The intersection test is carried out as soon as a hyperline segment is extended by sub-step 2 or sub-step 3, or created in sub-step 4.

Let W_1 = [x_1, x_2, ..., x_n] and V_1 = [y_1, y_2, ..., y_n] represent the two end points of the extended or created hyperline segment, and W_2 = [x'_1, x'_2, ..., x'_n] and V_2 = [y'_1, y'_2, ..., y'_n] be the end points of a hyperline segment of the other class. First, test whether the hyperlines passing through the end points of the two hyperline segments intersect. This is described by the following equations. The equation of the hyperline passing through W_1 and V_1 is

$$r_1 = \frac{a_i - x_i}{y_i - x_i} \quad \text{for } i = 1, 2, \ldots, n \quad (20)$$

and the equation of the hyperline passing through W_2 and V_2 is

$$r_2 = \frac{b_i - x'_i}{y'_i - x'_i} \quad \text{for } i = 1, 2, \ldots, n \quad (21)$$

where r_1, r_2 are constants and a_i, b_i are variables. Equations (20) and (21) lead to a set of n simultaneous equations, described as

$$r_1 (y_i - x_i) + x_i = r_2 (y'_i - x'_i) + x'_i \quad \text{for } i = 1, 2, \ldots, n \quad (22)$$

The values of r_1 and r_2 can be calculated by solving any two of the simultaneous equations. If the remaining n − 2 equations are satisfied by the calculated values of r_1 and r_2, then the two hyperlines intersect and the point of intersection P_t is

$$P_t = \big(r_1(y_1 - x_1) + x_1, \; \ldots, \; r_1(y_n - x_n) + x_n\big) \quad (23)$$
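Solving equation (22) for r_1 and r_2 from two coordinates and checking the remaining n − 2 equations reduces to a 2×2 linear solve. A minimal sketch (end points as NumPy arrays; tolerances are illustrative):

```python
import numpy as np

def hyperlines_intersect(w1, v1, w2, v2, tol=1e-9):
    # Section 5.2: parameterize each hyperline as w + r * (v - w),
    # solve equation (22) for r1, r2 from the first two coordinates,
    # then verify the remaining n - 2 equations.
    d1, d2 = v1 - w1, v2 - w2
    A = np.column_stack([d1[:2], -d2[:2]])   # two equations, two unknowns
    b = (w2 - w1)[:2]
    if abs(np.linalg.det(A)) < tol:
        return False, None                   # parallel in the first two coords
    r1, r2 = np.linalg.solve(A, b)
    p1 = w1 + r1 * d1                        # equation (23)
    p2 = w2 + r2 * d2
    if np.allclose(p1, p2, atol=1e-6):       # remaining equations hold
        return True, p1
    return False, None
```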
5.3 Removing Intersection:
If sub-step 2(a) or sub-step 3 has created an intersection of hyperline segments from separate classes, the intersection is removed by restoring the end point V_i as V_i^new = V_i. If sub-step 2(b) has created an intersection, it is removed by restoring the end point W_i as W_i^new = W_i, and a new hyperline segment is created to include the input pattern R_h, as described by equation (19).

If sub-step 4 creates an intersection, it is removed by restoring the end points of the previous hyperline segment of the other class:

$$W^{new} = W_{n+1} \quad \text{and} \quad V^{new} = V_{n+1} \quad (24)$$

6 EXPERIMENTAL RESULTS
The MFHLSNN is implemented using MATLAB 7.0, and the results are compared with the FHLSNN. The timing analysis of training and recall is depicted in Table 1, and the recognition rates of the MFHLSNN and other classifiers used for iris patterns are depicted in Table 2.

TABLE 1
TIMING ANALYSIS

Classifier                                      Training time (s)   Testing time (s)
FHLSNN using Euclidean distance                 1.1648              1.811566
FHLSNN using correlation coefficient distance   1.9378              1.912649
FHLSNN using Mahalanobis distance               1.8202              1.980021
MFHLSNN using Bhattacharyya distance            0.1806              0.966453

TABLE 2
PERCENTAGE RECOGNITION RATE WITH MFHLSNN

Methodology   Recognition rate (%)   Classifier
Daugman       99.25                  HD, SVM
Wildes        97.43                  Normalized correlation
Y. Wang       99.57                  WED
Ali           95.20                  HD, WED
MFHLSNN       95.76                  MFHLSNN

The training and testing time of the MFHLSNN classifier is less than that of the FHLSNN classifier proposed by Kulkarni et al., as shown in Table 1.
7 CONCLUSIONS
In this paper, an effective iris recognition system based on the modified fuzzy hyperline segment neural network approach is presented. The proposed system applies the integro-differential operator and the Cartesian-to-polar coordinate transform for iris segmentation and normalization, and Gabor filters for iris feature extraction. The MFHLSNN with its learning algorithm was implemented for classification of iris patterns. It is observed that the Bhattacharyya distance measure is superior in terms of

training and recall time as compared to the Euclidean distance measure. Experimental results show that the performance of the proposed technique is satisfactory.
REFERENCES
[1] Li Ma, T Tan and D. Zhang and Y. Wang, Personal Identification
Based on Iris Texture Analysis, IEEE Trans. Pattern Anal. Machine Intell,
vol 25, no 12, 2003, pp. 1519-1533.
[2] Somnath Dey and Debasis Samanta, Improved Feature Processing for
Iris Biometric Authentication System, International Journal of Com-
puter Science and Engineering, vol 4, no 2, pp. 127-134.
[3] Zhaofeng He, T. Tan, Zhenan Sun and Xianchao Qui, Boosting Or-
dinal Features for Accurate and Fast Iris Recognition, IEEE Trans. Pat-
tern Anal. Machine Intell, 2008.
[4] Li Ma, Tieniu Tan, Yunhong Wang, and Dexin Zhang, Efficient Iris Recognition by Characterizing Key Local Variations, IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739-750, June 2004.
[5] J. Daugman, High Confidence Visual Recognition of Persons by a Test
of Statistical Independence, IEEE Trans. Pattern Anal. Machine Intell, vol.
15, no. 11, pp. 1148-1161, Nov. 1993.
[6] R. P. Wildes, Iris Recognition: An Emerging Biometric Technology, Proc. IEEE, vol. 85, pp. 1348-1363, 1997.
[7] S. Lim, K. Lee, O. Byeon and T. Kim, Efficient iris recognition through
improvement of feature vector and classifier, ETRI J., vol.23, no.2, 2001,
pp. 1-70.
[8] W. K. Kong and D. Zhang, Accurate Iris Segmentation Based on Novel Reflection and Eyelashes Detection Model, Proc. ISIMVSP, pp. 263-266, 2001.
[9] L. Ma, Y. Wang, and T. Tan, Iris Recognition Based on Multichannel
Gabor Filtering, Proc. Fifth Asian Conf. Computer Vision, vol. I, pp. 279-
283, 2002.
[10] J. G. Daugman, Demodulation by complex-valued wavelets for stochastic pattern recognition, International Journal of Wavelets, Multiresolution, and Information Processing, vol. 1, no. 1, pp. 1-17, 2003.
[11] C. L. Tisse and L. Michel Torres, Robert, Person Identification Tech-
nique Using Human Iris Recognition, Proceedings of the 15th Interna-
tional Conference on Vision Interface, 2002, pp. 294-299.
[12] S. S. Chowhan and G. N. Shinde, Evaluation of Statistical Feature
Encoding Techniques on Iris Images, Proc. CSIE-2009, pp. 71-75.
[13] J. G. Daugman, How Iris Recognition Works, Proc. of 2002 Internation-
al Conference on Image Processing, Vol. 1, 2002.
[14] W. W. Boles and B. Boashash, A Human Identification Technique Using Images of the Iris and Wavelet Transform, IEEE Trans. Signal Processing, vol. 46, no. 4, pp. 1185-1188, 1998.
[15] A. Poursaberi and B. N. Arrabi, Iris Recognition for Partially occluded
images Methodology and Sensitive Analysis, Hindawi Publishing corpo-
ration journal on Advances in Signal Processing, vol. 2007.
[16] Hugo Proença et al., Toward Noncooperative Iris Recognition: A Classification Approach Using Multiple Signatures, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 4, April 2007.
[17] Gonzalez, R. C. Woods, R. E. Digital Image Processing 2nd ed., Prentice
Hall (2002).
[18] J. Daugman, Uncertainty Relation for Resolution in Space, Spatial Frequency, and Orientation Optimized by Two-Dimensional Visual Cortical Filters, J. Optical Soc. of Am. A, vol. 2, pp. 1160-1169, 1985.
[19] A. Jain, S. Prabhakar, L. Hong, and S. Pankanti, Filterbank-Based,
Fingerprint Matching, IEEE Trans. Image Processing, vol. 9, no. 5, pp.
846-859, 2000.
[20] The centre of Biometrics and security Research, CASIA Iris image Da-
tabase, http://www.sinobiometrics.com
[21] Jafar M. H. Ali and Aboul Ella Hassanien, An Iris Recognition System
to Enhance E-Security Environment Based on wavelet Theory, AMO-
Advanced Modeling and Optimization journal, vol.5, no.2, pp. 93-104, 2003.
[22] Nicolaie Popescu-Bodorin, Exploring New Directions in Iris Recogni-
tion, Artificial Intelligence and Computational Logic Laboratory, De-
partment of Mathematics and Computer Science, Spiru Haret Universi-
ty of Bucharest, Bucharest, Romania.
[23] Tieniu Tan, Zhaofeng He and Zhenan Sun, Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition, Image and Vision Computing, vol. 28, pp. 223-230, 2010.
[24] Ruggero Donida Labati et. al, Neural-based Iterative Approach for Iris
Detection in Iris recognition systems, Proceedings of the 2009 IEEE Sym-
posium on Computational Intelligence in Security and Defense Applications
(CISDA 2009).
[25] U. V. Kulkarni, T. R. Sontakke and G. D. Randale, Fuzzy Hyperline Segment Neural Network for Rotation Invariant Handwritten Character Recognition, Proc. IJCNN '01, International Joint Conference on Neural Networks, Washington, DC, USA, vol. 4, pp. 2918-2923, 2001.
[26] Thomas Kailath, The Divergence and Bhattacharyya Distance Measures in Signal Selection, IEEE Transactions on Communication Technology, pp. 52-60, 1967.
[27] J. Daugman, Statistical richness of visual phase information: update on recognizing persons by iris patterns, International Journal of Computer Vision, vol. 45, no. 1, pp. 25-38, 2001.

Santosh S. Chowhan received the M.Sc. (CS) degree from Dr. BAM University, Aurangabad, Maharashtra, India in 2000, and the M.Phil. degree in Computer Science from Y.C.M.O. University, Nashik in 2008. He is currently working as a lecturer in the College of Computer Science and Information Technology, Latur, Maharashtra. His current research interests include various aspects of neural networks and fuzzy logic, pattern recognition and biometrics.

Uday V. Kulkarni received the Ph.D. degree in Electronics and Computer Science Engineering from S.R.T.M. University, Nanded in 2003. He is currently working as a professor in the Dept. of Computer Science and Engineering at SGGS Institute of Engineering and Technology, Nanded, Maharashtra, India.

Ganesh N. Shinde received the M.Sc. and Ph.D. degrees from Dr. B.A.M. University, Aurangabad. He is currently working as Principal of Indira Gandhi College, Nanded, Maharashtra, India. He was awarded the Benjongi Jalnawala award for securing the highest marks at B.Sc. He has published 27 papers in international journals and presented 15 papers at international conferences. He has also published one book, which serves as a reference book for different courses. He is a member of different academic and professional bodies such as ANAS (Jordan). He is on the reviewer panel for different journals such as IEEE Transactions on Neural Networks, International Journal of Physical Sciences (U.S.A.) and Journal of Electromagnetic Waves and Applications (JEMWA, U.S.A.). He was the Chairperson of the F-9 session of the International Conference on Computational and Experimental Science and Engineering held at Honolulu, U.S.A. He is a member of the Management Council and Senate of S.R.T.M. University, Nanded, India. His research interests include filters, image processing, pattern recognition, fuzzy systems, neural networks, and multimedia analysis and retrieval systems.
