
The 33rd Annual Conference of the IEEE Industrial Electronics Society (IECON), Nov. 5-8, 2007, Taipei, Taiwan

Iris Recognition Based on FFG


Hee-Sung Kim, Byung-Kyu Bae, Jun-Hee Cho Dept. of Computer Science, University of Seoul, Republic of Korea
Abstract- In this paper we present the FFG (Facet Function Gradient) as a metric for matching iris images. An FFG is the gradient of a facet function at the center of a small patch of a given image. The facet function is obtained by fitting to the intensity values of the pixels in the small patch of the iris image. The gradient indicates the magnitude and direction of the change of light intensity at a point in the image; FFGs are therefore suitable quantities for representing iris patterns. The FFGs obtained on all patches of an image are compared to those of the prototype iris image for the matching decision. Experimental results show that this metric is attractive for iris recognition, since the recognition rate is improved compared with other studies.

Keywords: Iris recognition, Iris, Pupils, FFG, Gradients

I. INTRODUCTION

For an iris recognition system, it is essential, as far as is currently known, to develop techniques for extracting the iris from images and for comparing irises. The steps we take to extract the iris region from an eye image are as follows.
1) Preprocessing of the eye image
2) Detection of the outline of the pupil
3) Detection of the outer boundary of the iris
4) Segregation of the iris region
5) Normalization of the iris region
The steps we take for iris comparison are as follows.
1) Generation of iris data for the prototype image
2) Generation of iris data for the image to be recognized
3) Matching decision through a series of calculations on the two iris data sets
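The steps above can be sketched as a small processing pipeline. The sketch below is only a minimal illustration: the 0.6 x mean threshold and the 100x360 normalization grid follow later sections of this paper, but the function names and the simplified bodies (centroid-based pupil localization, nearest-neighbor sampling) are stand-ins, not the authors' implementation.

```python
import numpy as np

def preprocess(eye):
    """Binarize the eye image with a global threshold
    (0.6 times the mean gray level, as in Section II)."""
    return (eye < 0.6 * eye.mean()).astype(np.uint8)  # dark pixels -> 1

def detect_pupil(binary):
    """Simplified stand-in for pupil detection: take the centroid of
    the dark mass and an area-equivalent radius."""
    ys, xs = np.nonzero(binary)
    center = (ys.mean(), xs.mean())
    radius = np.sqrt(binary.sum() / np.pi)
    return center, radius

def normalize(eye, center, r_pupil, r_iris, n_r=100, n_theta=360):
    """Unwrap the annular iris region onto a 100x360 rectangle
    (rows = radial position, columns = angle in degrees)."""
    out = np.zeros((n_r, n_theta), dtype=eye.dtype)
    for i in range(n_r):
        r = r_pupil + (r_iris - r_pupil) * i / (n_r - 1)
        for j in range(n_theta):
            t = np.deg2rad(j)
            y = int(round(center[0] + r * np.sin(t)))
            x = int(round(center[1] + r * np.cos(t)))
            if 0 <= y < eye.shape[0] and 0 <= x < eye.shape[1]:
                out[i, j] = eye[y, x]
    return out

# Synthetic example: a dark pupil (radius 30) on a brighter background.
eye = np.full((200, 200), 0.8)
yy, xx = np.ogrid[:200, :200]
eye[(yy - 100) ** 2 + (xx - 100) ** 2 <= 30 ** 2] = 0.1

center, r_pupil = detect_pupil(preprocess(eye))
strip = normalize(eye, center, r_pupil, r_pupil + 40)
print(center, r_pupil, strip.shape)
```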
In this paper, we propose the FFG (Facet Function Gradient) [9] as an element for expressing the iris pattern and, subsequently, as a metric for iris comparison. Algorithms for extracting the iris segments are also introduced. The FFG is obtained by fitting a facet function to the pixel intensity values over a patch of the image and differentiating the facet function in the horizontal and vertical directions. The x-component of the FFG is the derivative of the facet function in the row direction evaluated at the patch center, and the y-component is the derivative in the column direction.

The iris recognition system is acknowledged as an excellent technology in the field of personal identification and security. Its recognition accuracy is generally considered the best among systems that use a part of the human body as the object of recognition. Authentication by recognizing a part of the human body is preferred to authentication by reading ID cards, since cards are cumbersome to carry and may be lost. The recognition systems based on fingerprints, faces, and voices each have disadvantages. Fingerprint recognition is exposed to the problem that finger marks are easily copied, which reduces the credibility of the system. Face recognition is difficult to realize because of the variability of facial expressions, eyeglasses, and hair styles. Voice recognition systems are weak against environmental noise and voice alteration. In contrast, the iris patterns formed before three years of age do not change over a lifetime and are inherent to a person; even the left and right eyes of the same person differ. As far as is known, iris images cannot be copied. Moreover, the iris is located on the surface of the eyeball and is therefore not affected by eye congestion. The retina lies at the back of the eyeball, so a camera has to approach the eye closely to capture a retina image, whereas this is not necessary for an iris image. The pattern, shape, and color of the iris image are applicable to any system that requires a high precision of discrimination.

For segmentation of the iris region from an eye image, the boundary detection method [5], the Hough transform [10], the bisection method, and others have been used by some authors. The boundary detection technique seeks the points at which the rate of change of intensity is maximized along the radial direction of a circular area. The Hough transform is used to detect the boundary between the eyelid and the iris, which is modeled by a parabola; the boundary of the pupil and the left and right parts of the iris are modeled by circles. The iris region can be segmented this way, but the processing time is intractable since many variables must be determined. The bisection method is based on the property that the midpoints of chords whose end points lie on the circular boundary accumulate on lines through the center of the circle. Thus, the point at which the collected midpoints accumulate most at the end of the process is determined as the center of the circle.
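The bisection idea can be illustrated with a short sketch; this is a hypothetical illustration of the principle, not any cited author's code. For a circle, the midpoint of the dark run in each row lies on the vertical line through the center and the midpoint in each column lies on the horizontal line, so accumulating midpoints and taking their most frequent value (here approximated by the median) recovers the center:

```python
import numpy as np

def bisection_center(binary):
    """Estimate a circle's center from chord midpoints.
    binary: 2-D array, 1 inside the dark (pupil) region, 0 outside."""
    xs, ys = [], []
    for row in range(binary.shape[0]):        # horizontal chords
        cols = np.nonzero(binary[row])[0]
        if cols.size:
            xs.append((cols[0] + cols[-1]) / 2.0)
    for col in range(binary.shape[1]):        # vertical chords
        rows = np.nonzero(binary[:, col])[0]
        if rows.size:
            ys.append((rows[0] + rows[-1]) / 2.0)
    # the accumulated midpoints cluster at the center coordinates
    return float(np.median(ys)), float(np.median(xs))

# Synthetic test: a disk of radius 25 centered at (70, 90).
img = np.zeros((160, 160), dtype=np.uint8)
yy, xx = np.ogrid[:160, :160]
img[(yy - 70) ** 2 + (xx - 90) ** 2 <= 25 ** 2] = 1
print(bisection_center(img))   # close to (70.0, 90.0)
```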

1-4244-0783-4/07/$20.00 C 2007 IEEE


However, this method also requires considerable preprocessing time. Various methods have likewise been introduced for the iris comparison process. Lye et al. used a neural network and the Euclidean distance for iris discrimination [14], and Tisse computed iris codes by applying a Gabor filter to data transformed into Daugman's polar coordinate system [15]. Gupta did not use any filters; instead, co-occurrence matrices computed from the gray levels of the original images were used for comparison [16]. Onsy used the Gabor wavelet after histogram equalization [18]. The recognition rates of most of these methods remain below 90%.
II. IRIS SEGMENTATION

Most iris segmentation methods are concerned with detecting the outer boundaries of the pupil and the iris; the iris region is then the annular area between these two boundaries.

A. Preprocessing for pupil detection

Generally, the boundary of a pupil can be found with an edge detector or by a binarization process. As can be seen in Fig. 2, the pupil and the eyebrows are detected together when a binarization process with a single threshold value is applied. In our tests with the images of the CASIA database [1] (Fig. 1), there were few cases in which only the pupil was detected. Thus, the following steps are used to mitigate this problem.

Fig. 1. The original eye images

Step 1. Global threshold. A global threshold value is determined experimentally as 0.6 times the average gray level of the eye image. Fig. 2 shows the eye images obtained by binarizing the images of Fig. 1 with this global threshold.

Fig. 2. Eye images after binarization by the global threshold value

Step 2. Dilation operation. A dilation operation is introduced to eliminate the eyelids, eyebrows, noise, etc. as much as possible.

Step 3. Elimination of isolated points. The isolated points are eliminated using mask operations. The gray levels of the original images are then restored on the dark pixels of the images obtained after dilation and elimination. The resulting images are shown in Fig. 3.

Fig. 3. The iris images after the processes of dilation, elimination, and restoration

B. The boundary of a pupil

The algorithm for detecting the pupil boundary is as follows.
1) The search begins from a radial line whose angle is 0 degrees, as shown in Fig. 4.
2) Traveling inward along the radial line from a point on the circle over a predefined distance m (here a distance of 15 pixels is taken), the black pixels are counted and accumulated.
3) Step 2) is repeated on another radial line whose angle is n degrees (10 in our experiment) greater than the previous one. Up to 360 degrees, a total of 360/n spoke lines are searched and the counted black pixels are accumulated.
4) If the number of accumulated black pixels is more than 90% of the total pixels traveled, this region is included in the pupil region. Otherwise, the radius is reduced by the length m and the steps are repeated.

Fig. 4. Pupil search

Fig. 5. The outer boundary of the iris that is searched (dark curves)
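The spoke search of steps 1)-4) above can be sketched compactly, under the assumption that a rough center is already available (e.g. from the preprocessing stage); the helper name and the starting radius are illustrative, not from the paper:

```python
import numpy as np

def pupil_radius_by_spokes(binary, center, r_start=80, m=15, n_deg=10):
    """Search inward in annular bands of width m along 360/n_deg spokes;
    a band whose sampled pixels are at least 90% black is taken to lie
    on the pupil region, as in steps 2)-4) above."""
    cy, cx = center
    r = r_start
    while r > m:
        dark = total = 0
        for deg in range(0, 360, n_deg):          # the spoke lines
            t = np.deg2rad(deg)
            for d in range(m):                    # travel m pixels inward
                rr = r - d
                y = int(round(cy + rr * np.sin(t)))
                x = int(round(cx + rr * np.cos(t)))
                if 0 <= y < binary.shape[0] and 0 <= x < binary.shape[1]:
                    total += 1
                    dark += int(binary[y, x])
        if total and dark / total >= 0.9:
            return r               # outer radius of a pupil band
        r -= m                     # shrink the radius and repeat
    return None

# Synthetic test: dark pupil of radius 30 centered at (100, 100).
binary = np.zeros((200, 200), dtype=np.uint8)
yy, xx = np.ogrid[:200, :200]
binary[(yy - 100) ** 2 + (xx - 100) ** 2 <= 30 ** 2] = 1
r = pupil_radius_by_spokes(binary, (100, 100))
print(r)   # a radius lying inside the pupil
```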

C. The outer boundary of the iris

The outer boundary of the iris is searched for by the method described below. The gray levels in the iris region are relatively uneven, so it is not tractable to find the iris boundary with a simple threshold. Therefore, we use a difference method after enhancing the contrast by histogram equalization. The equation used for the difference method is as follows.

S(x,y) = Σ_{yi=-10}^{10} Σ_{xi=0}^{10} [ P(x - xi + 10, y + yi) - P(x - xi, y + yi) ]   (1)

S(x,y) is the function that determines whether the pixel (x,y) is outside or inside the area of the iris, and P(x,y) is the gray value of the pixel at position (x,y). Sl(x,y) is the function that determines whether the pixel (x,y) keeps its current gray level or changes to black in the left region of the iris (Fig. 7), and Sr(x,y) is the corresponding function for the right region of the iris (Fig. 7):

Sl(x,y) = 0 (black) if S(x,y) < Sb x 0.9, and P(x,y) if S(x,y) > Sb x 0.9   (2)

Sr(x,y) = 0 (black) if S(x,y) > Sb x 0.9, and P(x,y) if S(x,y) < Sb x 0.9   (3)

Sb is a threshold value used in the determination process; it is zero at the beginning and adapts to new values as shown in the equations. The variability of the gray levels in the iris region is relatively large compared to that in the pupil region, so it is difficult to find the iris region with a stationary threshold; we therefore devised the adaptive threshold shown in the equations above. The factor 0.9 was determined through experiments. The searched iris images are shown in Fig. 5 and Fig. 6.

Fig. 6. The iris images obtained by the suggested procedures

D. Normalization of the iris image

The round iris images are converted to rectangular images for convenience, since it is difficult to apply rectangular masks to them. The radial distance in an iris image is divided into 100 equal segments; this is done because the radial extent of the iris changes with the illumination and needs to be normalized. A two-dimensional array of 100x360 is prepared, where the row index corresponds to the divided radial distance and the column index to the angular degree. This process is called normalization of the iris image. The converted rectangular image is shown in Fig. 7, and it can be noticed that the iris region of the upper right quarter has a low possibility of containing the eyelid image. Thus, only this quarter is used for the matching.

Fig. 7. The upper left quarter of the normalized iris region

III. IRIS MATCHING USING FFGs

A facet function for the fitting on a patch of the normalized segments of the iris images is chosen as follows.

h(r,c) = k1 + k2 r + k3 c + k4 r^2 + k5 r c + k6 c^2 + k7 r^3 + k8 r^2 c + k9 r c^2 + k10 c^3   (4)


The variables r and c in (4) are the coordinates originating at the center of the patch. The least-mean-square error for the fitting is shown below.

e^2 = Σ_{r∈R} Σ_{c∈C} [ h(r,c) - f(r,c) ]^2   (5)

Here, f(r,c) is the gray level of the pixel at (r,c) in the patch concerned. Equation (5) is minimized and the fitted function is differentiated with respect to r and c at the center of the patch, giving k2 and k3. These are, obviously, the r and c components of the gradient of the facet function (4) on the patch. We call this gradient the FFG. The masks for obtaining the r and c components of the gradient over a patch region are denoted K2 and K3, respectively. The optimal size of the masks was found to be 9x9.

Let K2(x,y) and K3(x,y) be the values produced by the masks K2 and K3 over the patch region centered at position (x,y) in a normalized segment of the prototype iris image, and let K2'(x,y) and K3'(x,y) be the values computed in the same way for the test image. The matching operation is then performed by the following equation.

M = Σ_{y=1}^{50} Σ_{x=0}^{180} [ {K2(x,y) - K2'(x,y)}^2 + {K3(x,y) - K3'(x,y)}^2 ]   (6)

The indexes of the summation are chosen so that only the parts of the normalized segments that are not affected by the images of eyelids or eyebrows are used. The Hamming distance is a metric often used in image matching; in that case, a threshold value limiting the gradient value in each direction must be determined. However, there is always a positional difference between two matching images, even when they are images of the same iris taken at different times. When the difference of the two gradient values K2 and K3 is formed, it is therefore difficult to determine a proper threshold value, and the accumulated value of the differences is preferred as a matching metric. The squared value of the difference is taken so as to produce a more conspicuous value when the difference of the gradients is larger. The threshold value for the matching decision was chosen experimentally as 1.508. If M is greater than the threshold value, the two iris images are decided to be from different persons; otherwise, from the same one.

IV. EXPERIMENTAL RESULTS

The experiment was performed with the CASIA database [1] for the purpose of comparison with previous works. 105 iris images were used for the recognition experiments: 21 persons with 5 images per person. The accuracy of the recognition turned out to be 96.19%, the FAR (False Acceptance Rate) is 1.91%, and the FRR (False Rejection Rate) is 0.95%. The comparison with previous research is shown in Table I.

TABLE I
THE ACCURACY COMPARISON OF THE IRIS RECOGNITION

Method         FAR (%)   FRR (%)   Accuracy (%)
Qussay [12]       -          -        82
Zaim [13]        7.2       10.3       82.7
Lye [14]          -          -        83
Tisse [15]       1.84       8.79      89.37
Gupta [16]        -          -        90.68
Boles [17]        -          -        92.64
Onsy [18]         -          -        83.65
Basit [19]        -          -        95.91
Proposed         1.91       0.95      96.19

Lye et al. used a neural network to select the matched image, with the Euclidean distance as the metric. Tisse used a Gabor filter to obtain the iris code after normalization with Daugman's polar coordinate system. Gupta et al. did not use any filter but compared images with the help of a co-occurrence matrix. Onsy et al. used the Gabor wavelet after preprocessing by histogram equalization. The results above show that the accuracy of the iris recognition by the proposed method is the best, which indicates that the FFG is a proper metric for recognizing iris patterns. Considering that only a quarter segment of the normalized iris image is used for recognition, the recognition accuracy of the proposed FFG is conspicuous.
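The FFG computation of Section III and the matching metric of (6) can be sketched as follows. This is a minimal least-squares illustration under stated assumptions: the paper applies precomputed 9x9 masks K2 and K3, which are equivalent to evaluating the fit below at the patch center, and the small array sizes in the example are for demonstration only.

```python
import numpy as np

def ffg(patch):
    """Fit the cubic facet model of (4) to a square patch by least
    squares and return (k2, k3): the gradient of the fitted surface
    at the patch center, i.e. the FFG."""
    half = patch.shape[0] // 2
    r, c = np.meshgrid(np.arange(-half, half + 1),
                       np.arange(-half, half + 1), indexing="ij")
    r, c = r.ravel().astype(float), c.ravel().astype(float)
    # design-matrix columns: 1, r, c, r^2, rc, c^2, r^3, r^2 c, r c^2, c^3
    A = np.stack([np.ones_like(r), r, c, r * r, r * c, c * c,
                  r ** 3, r * r * c, r * c * c, c ** 3], axis=1)
    k, *_ = np.linalg.lstsq(A, patch.ravel().astype(float), rcond=None)
    return k[1], k[2]                       # k2 and k3 of equation (4)

def matching_distance(proto, test, patch=9):
    """Accumulated squared FFG difference, as in equation (6); values
    below the decision threshold (1.508 in the paper) indicate the
    same iris."""
    half = patch // 2
    M = 0.0
    for y in range(half, proto.shape[0] - half):
        for x in range(half, proto.shape[1] - half):
            k2p, k3p = ffg(proto[y - half:y + half + 1, x - half:x + half + 1])
            k2t, k3t = ffg(test[y - half:y + half + 1, x - half:x + half + 1])
            M += (k2p - k2t) ** 2 + (k3p - k3t) ** 2
    return M

# Sanity check: on the planar patch 5 + 2r + 3c the FFG is exactly (2, 3).
r, c = np.meshgrid(np.arange(-4, 5), np.arange(-4, 5), indexing="ij")
print(ffg(5 + 2 * r + 3 * c))
```

Identical normalized segments give M = 0, and M grows with the squared gradient differences, which is the behavior the accumulated metric above is designed to exploit.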


V. CONCLUSIONS

This paper shows that the gradient of the iris patterns obtained with the facet function model is useful for the identification of persons. The accuracy of the iris recognition is as high as 96.19%, the best among the methods compared. However, a few failures in the proper detection of pupils and irises occurred. For the outer boundary of the pupil, the circular boundary detection algorithm proposed by Daugman is used; the outer boundary of the iris is searched for by a method devised from an idea of Grabowski [8]. The information of the original image is not fully reflected in the binary image, so boundary detection on a binary image has a limitation. Thus, if a better method for segmenting irises from eye images is developed, the recognition accuracy will improve further.
REFERENCES
[1] CASIA Iris Image Database.
[2] Special issue on Biometrics, Proceedings of the IEEE, vol. 85, no. 9, Sept. 1997.
[3] P. J. Phillips, A. Martin, C. L. Wilson, and M. Przybocki, "An introduction to evaluating biometric systems," Computer, vol. 33, no. 2, pp. 56-63, 2000.
[4] H. Davison, The Eye, Academic Press, London, 1962.
[5] J. G. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148-1161, Nov. 1993.
[6] J. G. Daugman, "High confidence personal identification by rapid video analysis of iris texture," in Proc. IEEE Int. Carnahan Conf. on Security Technology, 1992, pp. 1-11.
[7] R. P. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proceedings of the IEEE, vol. 85, pp. 1348-1363, Sept. 1997.
[8] K. Grabowski, W. Sankowski, M. Zubert, and M. Napieralska, "Reliable Iris Localization Method with Application to Iris Recognition in Near Infrared Light," in Proc. MIXDES 2006, 22-24 June 2006, pp. 684-687.
[9] H. Kim, Image Recognition, Saengneung, pp. 146-155, 1998.
[10] D. H. Ballard, "Generalizing the Hough transform to detect arbitrary shapes," Pattern Recognition, vol. 13, no. 2, pp. 111-122, 1981.
[11] H. Sung, "Iris Recognition Using Collarette Boundary Localization," in Proc. 17th International Conference on Pattern Recognition (ICPR'04).
[12] Q. A. Salih and V. Dhandapani, "Iris Recognition based on Multi-Channel Feature Extraction using Gabor Filters," in Proc. 2nd IASTED International Conference on Advances in Computer Science and Technology, 2006, pp. 168-173.
[13] A. Zaim, A. Sawalha, M. Quweider, J. Iglesias, and R. Tang, "A New Method for Iris Recognition using Gray-Level Co-occurrence Matrix," in Proc. 2006 IEEE International Conference on Electro/Information Technology, May 2006, pp. 350-353.
[14] W. Lye, A. Chekima, L. C. Fan, and J. A. Dargham, "Iris recognition using self-organizing neural network," in Proc. Student Conference on Research and Development (SCOReD 2002), 16-17 July 2002, pp. 169-172.
[15] C. Tisse, L. Martin, L. Torres, and M. Robert, "Person Identification Technique Using Human Iris Recognition," in Proc. Vision Interface, 2002, pp. 294-299.
[16] G. Gupta and M. Agarwal, "Iris Recognition using Non Filter-based Technique," in Proc. Biometric Consortium Conference (BC 2005), 19-21 Sept. 2005, Arlington, VA, USA.
[17] W. W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform," IEEE Trans. on Signal Processing, vol. 46, pp. 1185-1188, April 1998.
[18] O. Abdel Alim and M. Sharkas, "Iris Recognition using Discrete Wavelet Transform and Artificial Neural Networks," in Proc. 46th IEEE International Midwest Symposium on Circuits and Systems (MWSCAS'03), vol. 1, 27-30 Dec. 2003, pp. 337-340.
[19] A. Basit, M. Y. Javed, and M. A. Anjum, "Iris Recognition using Single Feature Vector," in Proc. First International Conference on Information and Communication Technologies (ICICT 2005), 27-28 Aug. 2005, pp. 126-128.

