PROJECT REPORT
ON
IRIS RECOGNITION SYSTEM USING MATLAB
BACHELOR OF ENGINEERING
IN
ELECTRONICS & TELECOMMUNICATION
SUBMITTED BY
MOHD NASRULLAH KAZMI(14DET50)
MOHD FAIZ (12ET77)
ANJUMAN I-ISLAM’S
KALSEKAR TECHNICAL CAMPUS
2016-17
DEPARTMENT OF ELECTRONIC & TELECOMMUNICATION
ENGINEERING
Certificate
I declare that this written submission represents my ideas in my own words and where others' ideas or words have been included, I have adequately cited and referenced the original sources. I also declare that I have adhered to all principles of academic honesty and integrity and have not misrepresented or fabricated or falsified any idea/data/fact/source in my submission. I understand that any violation of the above will be cause for disciplinary action by the Institute and can also evoke penal action from the sources which have thus not been properly cited or from whom proper permission has not been taken when needed.
-----------------------------------------
(Mohd Nasrullah Kazmi 14DET50)
-----------------------------------------
(Patel Faiz Ayyub 12ET77)
Date:
Within the last decade, governments and organizations around the world have
invested heavily in biometric authentication for increased security at critical
access points, not only to determine who accesses a system and/or service, but
also to determine which privileges should be provided to each user. For such
identification, biometrics is emerging as a technology that provides a high level
of security while being convenient and comfortable for the citizen. For example,
the United Arab Emirates employs biometric systems to regulate the flow of
people across its borders. Subsequently, several biometric systems have attracted
much attention, such as facial recognition and iris recognition.
1.1 Literature Survey
Advantages:
The proposed method can also be used to detect other types of fake data (e.g.,
printed lenses) by selecting the subset of parameters that best adapts to the new
anti-spoofing problem.
Disadvantages:
It is not an easy task to generate a synthetic trait that possesses all the
measured quality-related features to the same degree as a real sample.
1.1.2 IRIS DETECTION BASED ON TEXTURE ANALYSIS
Each pixel can generate a 40-dimensional vector by Gabor filtering and concatenation,
and this vector is assigned to the bin representing the nearest texton. By mapping the
40-dimensional vectors to 64 bins, we obtain a histogram with frequent variations of
bins. With proper filters chosen, the Iris-Texton histogram denotes the richness of
micro-texture in the iris image, and is therefore sufficient to characterize the global
feature of the iris image. Textural features based on grey-level co-occurrence matrices
(GLCM) are the third measurement used to detect fake irises. These features are
generated on the image ROI, which is obtained by image preprocessing. Co-occurrence
matrices represent a second-order statistical measurement, as they characterize the
relationship between neighboring pixels.
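As a concrete illustration, the co-occurrence counting behind GLCM features can be sketched in a few lines. The report's implementation is MATLAB-based; this pure-Python sketch, with a toy 4-level image and the horizontal (0, 1) neighbour offset, is an illustrative assumption rather than the authors' code:

```python
# Illustrative sketch: grey-level co-occurrence matrix (GLCM) features
# for a small quantized image, using the horizontal neighbour offset (0, 1).

def glcm(image, levels):
    """Count co-occurrences of grey levels for horizontally adjacent pixels."""
    matrix = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            matrix[a][b] += 1
    return matrix

def glcm_features(matrix):
    """Second-order statistics commonly derived from a GLCM."""
    total = sum(sum(row) for row in matrix)
    contrast = sum(m * (i - j) ** 2 for i, row in enumerate(matrix)
                   for j, m in enumerate(row)) / total
    energy = sum((m / total) ** 2 for row in matrix for m in row)
    return contrast, energy

# A 4-level toy "texture": a smooth patch vs. a noisy patch.
smooth = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]]
noisy = [[0, 3, 0, 3], [3, 0, 3, 0], [0, 3, 0, 3]]

c_smooth, _ = glcm_features(glcm(smooth, 4))
c_noisy, _ = glcm_features(glcm(noisy, 4))
print(c_smooth < c_noisy)  # True: contrast is higher for the noisier texture
```

Contrast grows with abrupt grey-level changes between neighbours, which is why such second-order statistics help separate the printed or contact-lens texture from live iris texture.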
ADVANTAGES:
It can be employed in systems with speed requirements, given its low
computational cost, and its performance is comparable to the state of the art. It can
also be deployed by setting the threshold to a low FAR (False Acceptance Rate). The
CCR (Correct Classification Rate) is closely related to the type of contact lens, or the
technique used to make them.
DISADVANTAGES:
The proposed method depends highly on segmentation accuracy. In some datasets
it was more challenging to segment the iris regions exactly, and there is no
assurance that the iris will be segmented exactly for all real-time data. The
False Rejection Rate is also low.
Iris recognition has drawn much attention due to its convenience and security.
Compared with other biometric modalities, the iris pattern has been regarded as one
of the most accurate for its uniqueness, stability, and non-intrusiveness. However,
like other biometric systems, iris systems are also under threat of forged-iris
attacks, and efficient iris spoof detection can improve their security. Several
artifacts have been used to spoof iris recognition systems, such as paper-printed
irises, cosmetic contact lenses, and redisplayed videos. Spoofing by wearing a
cosmetic contact lens is particularly dangerous: it is easily accepted by the system
and hard to detect. Spoof detection is a critical function for iris recognition
because it reduces the risk of iris recognition systems being forged.
Despite various counterfeit artifacts, the cosmetic contact lens is one of the most
common and the most difficult to detect. In this paper, a novel fake-iris detection
algorithm based on improved LBP and statistical features is proposed. Firstly, a
simplified SIFT descriptor is extracted at each pixel of the image. Secondly, the
SIFT descriptor is used to rank the LBP encoding sequence. The proposed method
consists of four steps: iris image preprocessing; generating the scale space and
calculating SIFT-like descriptors; calculating weighted local binary patterns; and
extracting features for classification. The main preprocessing steps include iris
segmentation and de-noising. Iris segmentation finds the iris region by precisely
localizing its inner and outer boundaries. The bounding square block of the iris
circle is regarded as the region of interest for feature extraction, rather than
normalizing the iris to a polar coordinate system; this keeps the regular texture
pattern of the colorful contact lens in Cartesian coordinates and saves the time
needed for coordinate transformation. For de-noising, a low-pass filter and Total
Variation are chosen. To make the computation convenient, iris images are normalized
to the same size of 400 × 400. After smoothing, a simplified SIFT descriptor is used
to analyze the local structural characteristics. The SIFT descriptor is adopted
because it is largely invariant to changes of scale, illumination, and local affine
distortion, and is to a certain degree stable under view changes and noise. Applying
the SIFT descriptor enhances the stability and robustness of LBP. The first step is
to generate the scale space from the convolution of a variable-scale Gaussian
template with the image. The second step is to extract a simplified SIFT descriptor
for each pixel in its 5×5 neighborhood. Arrows denote the magnitude and orientation
at each image pixel, and the overlaid circle is a weighted Gaussian window. In order
to achieve orientation invariance, the coordinates of the descriptor and the gradient
orientations are rotated relative to the main orientation, which is determined by all
the gradient directions at every scale. The last step is to obtain a descending rank
of the orientation histogram. Thus, a 576-dimensional feature is obtained for each
image. Local binary patterns (LBP) are adopted to represent the texture patterns of
iris images. LBP has emerged as an effective texture descriptor and is widely used
in texture analysis.
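A minimal sketch of the plain (unweighted) LBP operator that the weighted variant above builds on; the report's code is in MATLAB, so this pure-Python version is illustrative only:

```python
# Illustrative sketch: the basic 3x3 local binary pattern (LBP) code,
# thresholding the eight neighbours of each interior pixel against its centre.

def lbp_code(image, r, c):
    """8-bit LBP code for pixel (r, c); neighbours read clockwise from top-left."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = image[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """Distribution of LBP codes over all interior pixels: a texture descriptor."""
    hist = [0] * 256
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            hist[lbp_code(image, r, c)] += 1
    return hist

patch = [[10, 10, 10],
         [10, 20, 10],
         [10, 10, 10]]
print(lbp_code(patch, 1, 1))  # all neighbours below the centre -> code 0
```

Each code captures one micro-structure (spot, edge, corner, flat region), and the histogram of codes over the region of interest is what the classifier sees.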
ADVANTAGES:
The proposed method is robust to the impact of glasses, including specular
reflections, occlusion by the glasses frame, and haziness and extra texture caused
by dirty optics. The combination of SIFT with LBP improves invariance to scale,
illumination, and local affine distortion, and makes the algorithm more robust to
camera view changes. The proposed method is robust in detecting contact lenses and
promising for cross-camera fake-iris detection.
DISADVANTAGES:
The proposed method works well for counterfeit iris images, but such samples are
time-consuming and costly to collect. The method has to be enhanced to work for
all types of images so that the process can be employed in real time.
1.1.4 Efficient Iris Spoof Detection via Boosted Local Binary Pattern
Although iris textures are commonly thought to be highly discriminative between
eyes, they (including contact-lens-wearing iris textures) still present several
desirable common properties. The radial distribution: even within an iris, the
scale of the iris micro-structures varies a lot along the radius; usually, the
larger the radius, the bigger the iris micro-structures. The angular
self-similarity: although different angular regions remain discriminative, their
texture patterns display a certain degree of consistency/correlation. These
properties suggest dividing the iris into multiple regions, each sub-region
containing a particular texture pattern. Via such division, a more specific
representation of the iris is obtained, which makes it easier to discriminate live
iris textures from counterfeit ones. Moreover, the upper and lower parts of the
iris are almost always occluded by eyelids or eyelashes; it is therefore
straightforward to exclude the upper and lower quarters from feature extraction for
a more concise representation. In order to achieve translation and scale
invariance, the iris is normalized to a rectangular block of a fixed size of
64×512. As can be seen, the whole iris image is divided into three annular sections
along the radial direction and two sectors along the angular direction, giving six
sub-regions in total. The textures of counterfeit and live iris images differ in
appearance and can be used for spoof detection. LBP is defined for each pixel by
thresholding its 3×3 neighborhood pixels with the center pixel value and
considering the result as a binary bit string. Each LBP code represents a type of
micro image structure, and their distribution can be used as a texture descriptor.
The original LBP was later extended to multi-scale LBP (denoted LBP(P, R)) and
uniform LBP. LBP(P, R) is calculated by thresholding P equally spaced points on a
circle of radius R with the center pixel value. An LBP code is called uniform if
its bit string contains at most two bitwise transitions from 0 to 1 or vice versa.
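The two-transition rule for uniform codes can be checked directly on the circular bit string; a small illustrative sketch (not taken from the report):

```python
# Illustrative sketch: a code is "uniform" if its circular bit string has at
# most two 0/1 transitions, as in the definition above.

def is_uniform(code, bits=8):
    """Count circular bitwise transitions of an LBP code."""
    transitions = 0
    for i in range(bits):
        a = (code >> i) & 1
        b = (code >> ((i + 1) % bits)) & 1
        transitions += a != b
    return transitions <= 2

print(is_uniform(0b00001111))  # True: one block of ones
print(is_uniform(0b01010101))  # False: alternating bits
```

Of the 256 possible 8-bit codes, exactly 58 are uniform, which is what makes the uniform-LBP histogram so much more compact than the full 256-bin one.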
Chapter 2
Biometric Security Systems
2.1 Introduction
A biometric identification system is one in which the user's "body" becomes the
password/PIN. Biometric characteristics of an individual are unique and can
therefore be used to authenticate a user's access to various systems.
2.2 Biometric
The word ‘biometric’ is a two-part term taken from Greek, in which ‘bio’ means
life and ‘metric’ means measure. Combining the two, ‘biometric’ can be defined as
the measure (study) of life, which includes humans, animals, and plants.
“Biometric technologies” are defined as automated methods of verifying or
recognizing the identity of a living person based on a physiological or
behavioral characteristic.
By this definition, there are two key words: “automated” and “person”. The
word “automated” differentiates biometrics from the larger field of human
identification science: biometric authentication techniques are performed
completely by machine, generally (but not always) a digital computer [36]. The
second key word is “person”. Statistical techniques, particularly using
fingerprint patterns, have been used to differentiate or connect groups of people
or to probabilistically link persons to groups, but biometrics is interested only
in recognizing people as individuals. All of the measures used contain both
physiological and behavioral components, both of which can vary widely or be
quite similar across a population of individuals. No technology is purely one or
the other, although some measures seem to be more behaviorally influenced and
some more physiologically influenced.
2.3 Biometric History
The science of using human measurements for the purpose of identification dates
back to 1875 and the measurement system of Alphonse Bertillon. Bertillon's system
of body measurements, including skull diameter and arm and foot length, was used
in the USA to identify prisoners until 1925. Meanwhile, William Herschel and Sir
Francis Galton proposed quantitative identification through fingerprint and facial
measurements in the 1880s. The development of digital signal processing techniques
in the 1960s led immediately to work on automating human identification. Speaker
and fingerprint recognition systems were among the first to be applied. The
potential for applying this technology to high-security access control, personal
locks, and financial transactions was recognized in the early 1960s. The 1970s saw
the development and deployment of hand geometry systems, the start of large-scale
testing, and increasing interest in government use of these automated personal
identification technologies. Retinal and signature verification systems came in
the 1980s, followed by face systems. Lastly, iris recognition systems were
developed in the 1990s.
2.4 Biometric Concepts
Universal: Every person must possess the characteristic/attribute. The attribute must
be one that is universal and seldom lost to accident or disease.
Invariance of properties: They should be constant over a long period of time. The
attribute should not be subject to significant differences based on age, or on
episodic or chronic disease.
Measurability: The properties should be suitable for capture without waiting time,
and it must be easy to gather the attribute data passively.
Singularity: Each expression of the attribute must be unique to the individual. The
characteristics should have sufficient unique properties to distinguish one person
from any other. Height, weight, hair and eye color are all attributes that are unique
assuming a particularly precise measure, but do not offer enough points of
differentiation to be useful for more than categorizing.
2.5 Operating Mode
In the identification mode, a database search is crucial and needed. A user
presents a not-necessarily-known sample of his/her biometrics to the system. This
sample is then compared with existing samples in a central database (one-to-many)
[48]. Identification is a critical component in negative recognition applications,
where the system establishes whether the person is who he/she (implicitly or
explicitly) denies to be. The purpose of negative recognition is to prevent a
single person from using multiple identities. Identification may also be used in
positive recognition for convenience (the user is not required to claim an
identity). While traditional methods of personal recognition such as passwords,
PINs, keys, and tokens may work for positive recognition, negative recognition can
only be established through biometrics.
Fig: Components of a Biometric System
A number of biometric characteristics exist and are in use. Each biometric has its
strengths and weaknesses, and the choice depends on the application; in other
words, no biometric is “optimal”. A brief introduction to the commonly used
biometrics is given below.
1. Facial, hand, and hand vein infrared thermogram: the pattern of heat radiated
from the human body is considered a characteristic of an individual. These
patterns can be captured by an infrared camera in an unobtrusive manner, like
a regular (visible-spectrum) photograph, and the technology could be used for
covert recognition.
2. Ear: many researchers have suggested that the shape of the ear is a
characteristic. The matching approaches are based on the distances of salient
points on the pinna from a landmark location on the ear. The features of an
ear are not expected to be very distinctive in establishing the identity of an
individual.
3. Retina: since the retina is protected in the eye itself, and since it is not
easy to change or replicate the retinal vasculature, this is one of the most
secure biometrics. Retinal recognition creates an eye signature from the
vascular configuration of the retina, which is supposed to be a characteristic
of each individual and each eye, respectively.
4. Gait: gait is the peculiar way one walks, and it is a complex spatio-temporal
biometric. This is one of the newer technologies and is yet to be researched
in more detail. Gait is a behavioral biometric and may not remain the same
over a long period of time, due to change in body weight or serious brain
damage. Acquisition of gait is similar to acquiring a facial picture and may
be an acceptable biometric.
The human eye has been called the most complex organ in our body. It's
amazing that something so small can have so many working parts. But when you
consider how difficult the task of providing vision really is, perhaps it's no
wonder after all. It is useful to consider briefly the eye anatomy, which is shown below:
We will move quickly over only some of the well-known parts of the human eye.
The cornea is a clear, transparent portion of the outer coat of the eyeball through
which light passes to the lens. The lens helps to focus light on the retina, which is the
innermost coat of the back of the eye, formed of light sensitive nerve endings that
carry the visual impulse to the optic nerve. The retina acts like the film of a
camera in its operation.
The iris is a thin circular ring that lies between the cornea and the lens of the
human eye. A front-on view of the iris is shown in the figure, in which the iris
encircles the pupil, the dark centered portion of the eye. The function of the
iris is to control the amount of light entering through the pupil; this is done by
the sphincter and dilator muscles, which adjust the size of the pupil.
The sphincter muscle lies around the very edge of the pupil. In bright light, the
sphincter contracts, causing the pupil to constrict. The dilator muscle runs radially
through the iris, like spokes on a wheel. This muscle dilates the eye in dim lighting.
The sclera, a white region of connective tissue and blood vessels, surrounds the
iris. The externally visible surface of the multi-layered iris contains two zones,
which often differ in color: an outer ciliary zone and an inner pupillary zone,
divided by the collarette, which appears as a zigzag pattern.
The iris is composed of several layers, and its visual appearance is a direct
result of this multilayered structure. Iris color results from the differential
absorption of light impinging on the pigmented cells in the anterior border layer
and posterior epithelium; light scattered as it passes through the stroma yields a
blue appearance, and progressive levels of anterior pigmentation lead to darker
colored irises.
The average diameter of the iris is nearly 11 mm, and the pupil radius can range
from 0.1 to 0.8 of the iris radius. The iris shares high-contrast boundaries with
the pupil but lower-contrast boundaries with the sclera.
3.2 Iris Recognition System
The idea of using the iris as a biometric is over 100 years old. However, the
idea of automating iris recognition is more recent. In 1987, Flom and Safir
obtained a patent for an unimplemented conceptual design of an automated iris
biometrics system.
Image processing techniques can be used to extract the unique iris pattern from a
digitized image of the eye and encode it into a biometric template, which can be
stored in a database. This biometric template contains an objective mathematical
representation of the unique information stored in the iris, and allows
comparisons to be made between templates.
Given that the iris is a relatively small (nearly 1 cm in diameter), dark object,
and that human operators are very sensitive about their eyes, this matter required
careful engineering. Some points should be taken into account: (i) acquiring
images of sufficient resolution and sharpness; (ii) good contrast in the interior
iris pattern without resorting to a level of illumination that annoys the
operator; (iii) the images should be well framed; and (iv) noise in the acquired
images should be eliminated as much as possible.
Chapter 4
Iris Database and Dataset
All current commercial iris biometrics systems still have constrained image
acquisition conditions. Near infrared illumination, in the 750–950 nm range, is
used to light the face, and the user is prompted with visual and/or auditory
feedback to position the eye so that it can be in focus and of sufficient size in the
image. In 2004, Daugman suggested that the iris should have a diameter of at least
140 pixels. The International Organization for Standardization (ISO) iris image
standard released in 2006 is more demanding, specifying a diameter of 200 pixels.
Experimental research on iris recognition system requires an iris image dataset.
Several datasets are discussed briefly in this chapter.
For a long time there was no public iris database, and this lack of iris data may
block research on iris recognition. To promote such research, the National
Laboratory of Pattern Recognition (NLPR), Institute of Automation (IA), Chinese
Academy of Sciences (CAS) provides an iris database freely to iris recognition
researchers. The table summarizes information on a number of well-known iris
datasets.
Table: well-known iris databases, listing for each database the camera used, the
number of irises, and the number of images.
4.2.1 CASIA Version 1
All images tested are taken from the Chinese Academy of Sciences Institute of
Automation (CASIA) iris database; apart from being the oldest, this database is
clearly the best known and the most widely used by researchers. Each sample begins
as a 320×280-pixel photograph of the eye taken from 4 cm away using a
near-infrared camera. The NIR spectrum (800 nm) emphasizes the texture patterns of
the iris, making the measurements taken during iris recognition more precise.
Noise factors are exclusively related to iris obstructions by eyelids and
eyelashes, as shown in the table above. The pupil regions of all iris images were
automatically detected and replaced with a circular region of constant intensity
to mask out the specular reflections from the NIR illuminators (8 illuminators)
before public release.
4.2.2 Iris Challenge Evaluation
The iris image datasets used in the Iris Challenge Evaluations (ICE) in 2005
and 2006 were acquired at the University of Notre Dame, and contain iris images of
a wide range of quality, including some off-axis images. The ICE 2005 database is
currently available, and the larger ICE 2006 database has also been released. One
unusual aspect of these images is that the intensity values are automatically
contrast-stretched by the LG 2200 to use 171 gray levels between 0 and 255.
Samples are shown in the figure.
4.2.3 Lions Eye Institute
The Lions Eye Institute (LEI) database consists of 120 greyscale eye images taken
using a slit lamp camera. Since the images were captured using natural light,
specular reflections are present on the iris, pupil, and cornea regions. Unlike
the CASIA database, the LEI database was not captured specifically for iris
recognition.
4.3 Database Used
In this project, the CASIA (version 1) iris image database is used for testing and
experimentation. The images, taken in almost perfect imaging conditions, are free
of noise sources such as photon and sensor noise, reflections, focus problems,
compression artifacts, and contrast and light-level variations.
Chapter 5
Image Preprocessing Algorithm
Individual recognition using the iris is now commonly employed in many places. In
the proposed approach, persons are recognized on the basis of iris images. Input
iris images were obtained from the CASIA database. The input images differed in
size and were resized to 256 × 256. The images were then filtered using a median
filter, which smooths out the noise in the input image. The images were divided
into small regions and the noisy pixels identified in each region; each identified
noisy pixel is then replaced by the average value of the overall region.
Segmentation of the iris regions was then performed based on edge detection and
gradient extraction.
For detecting the edge pixels in the images, the Canny edge detection process is
employed. The Canny edge detector is an edge detection operator that uses a
multi-stage algorithm to detect a wide range of edges in images. It uses a filter
based on a Gaussian, where the raw image is convolved with a Gaussian filter. The
gradient information is obtained from the images by gradient extraction from the
input image. Hysteresis thresholding is employed for the iris and pupil regions.
The circular Hough transform is applied to the iris images in order to identify
the iris and pupil regions.
The recognition of the iris is based on DWT features extracted from the iris
image. Features were extracted from the input images using the Discrete Wavelet
Transform. The input images were decomposed by the DWT into four regions, based on
combinations of low-pass and high-pass filters: the first transformation applies
two low-pass filters; the second applies a low-pass filter and then a high-pass
filter; the third applies a high-pass filter and then a low-pass filter; and the
final transformation applies two high-pass filters. The average value and standard
deviation were extracted from the obtained sub-bands and used as the features for
the process.
The extracted features were optimized using a genetic algorithm, which selects the
best features from the images; a genetic algorithm combined with the Fisher
criterion is employed for this process. Population initialization: the population
is initialized using the extracted features. The features were divided into a
particular range, and the divided regions were encoded and considered as
chromosomes. Selection operation: for the number of populations generated, the
probability of selecting a feature is calculated; for this probability
calculation, a fitness value is computed based on the Fisher criterion. Crossover
operation: new individuals are generated from the recombination of existing
individuals, and a probability is calculated for each individual. Mutation
operation: by inverting one bit in each part of an individual, a child is created,
and a probability is calculated for the newly generated child. Termination: the
stopping condition is the convergence of the algorithm, at which the maximum
fitness is obtained after a number of generations. The optimized features were
then used for matching based on the Euclidean distance.
Before feature extraction, preprocessing of the iris images was performed, based
on smoothing and edge detection. Applying the optimization technique helps reduce
the feature count. The performance of the process is measured using performance
metrics.
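The final matching step, a nearest-template search under the Euclidean distance, can be sketched as follows. The feature vectors, subject names, and acceptance threshold below are hypothetical values for illustration, not taken from the report:

```python
# Illustrative sketch: nearest-template matching with the Euclidean distance,
# as in the final matching step of the pipeline described above.
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def match(probe, templates, threshold):
    """Return the id of the closest enrolled template, or None if too far."""
    best_id, best_d = None, float("inf")
    for tid, tvec in templates.items():
        d = euclidean(probe, tvec)
        if d < best_d:
            best_id, best_d = tid, d
    return best_id if best_d <= threshold else None

# Hypothetical enrolled templates (e.g. optimized DWT statistics per subject).
enrolled = {"subject_a": [0.2, 0.8, 0.5], "subject_b": [0.9, 0.1, 0.4]}
print(match([0.25, 0.75, 0.5], enrolled, threshold=0.5))  # subject_a
print(match([5.0, 5.0, 5.0], enrolled, threshold=0.5))    # None (rejected)
```

Lowering the threshold trades a lower False Acceptance Rate for a higher False Rejection Rate, which is the tuning knob the performance metrics above would measure.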
5.2 Iris Preprocessing
The iris image is first resized to a particular size. Noise in the frames reduces
their quality; each frame is treated as an image, and in order to improve image
quality some filtering operation is normally employed. Here a median filter is
used. The median filter considers each pixel in the image in turn and looks at its
nearby neighbors to decide whether or not it is representative of its
surroundings; it then replaces the pixel value with the median of the neighboring
pixel values. The median is calculated by first sorting all the pixel values from
the surrounding neighborhood into numerical order and then replacing the pixel
being considered with the middle pixel value.
In image processing, it is often desirable to perform some kind of noise reduction
on an image or signal. The median filter is a nonlinear digital filtering
technique, often used to remove noise. Such noise reduction is a typical
pre-processing step to improve the results of later processing (for example, edge
detection on an image). Median filtering is very widely used in digital image
processing because, under certain conditions, it preserves edges while removing
noise. The main idea of the median filter is to run through the image entry by
entry, replacing each entry with the median of neighboring entries. The pattern of
neighbors is called the "window", which slides, entry by entry, over the entire
image. For 1D signals, the most obvious window is just the first few preceding and
following entries, whereas for 2D (or higher-dimensional) images, more complex
window patterns are possible (such as "box" or "cross" patterns). Note that if the
window has an odd number of entries, the median is simple to define: it is just
the middle value after all the entries in the window are sorted numerically. For
an even number of entries, there is more than one possible median.
The median filter is the nonlinear filter most used to remove impulsive noise from
an image. Furthermore, it is a more robust method than traditional linear
filtering, because it preserves sharp edges. Median filtering is a spatial
filtering operation, so it uses a 2-D mask that is applied to each pixel in the
input image. To apply the mask means to center it on a pixel, evaluating the
covered pixel brightnesses and determining which brightness value is the median.
Median filtering is one kind of smoothing technique, as is linear Gaussian
filtering. All smoothing techniques are effective at removing noise in smooth
patches or smooth regions of an image, but adversely affect edges; often, though,
at the same time as reducing the noise in an image, it is important to preserve
the edges, which are of critical importance to the visual appearance of images.
For small to moderate levels of (Gaussian) noise, the median filter is
demonstrably better than Gaussian blur at removing noise whilst preserving edges
for a given, fixed window size. However, its performance is not much better than
Gaussian blur for high levels of Gaussian noise, whereas for speckle noise and
salt-and-pepper noise (impulsive noise) it is particularly effective. Because of
this, median filtering is very widely used in digital image processing.
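A minimal pure-Python sketch of the 3×3 median filtering described above; border pixels are left unchanged here for brevity, whereas a real implementation (like MATLAB's `medfilt2`) pads the border:

```python
# Illustrative sketch: 3x3 median filtering, replacing each interior pixel
# with the median of its window.

def median_filter(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(image[r + dr][c + dc]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = window[4]  # middle of the 9 sorted values
    return out

# A flat patch with one salt-noise pixel: the impulse is removed entirely,
# which is exactly why the median filter suits impulsive noise.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # 10
```

Note that a 3×3 mean filter on the same patch would have smeared the impulse into its neighbourhood instead of removing it, which illustrates the edge- and detail-preserving advantage discussed above.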
5.3 Segmentation
The iris region and the pupil region are segmented, and the pixels inside the iris
and pupil are converted from Cartesian coordinates to polar coordinates. The
normalization process stretches the contrast in the particular place to the whole
image and reshapes the segmented portions to a particular range, mapping the
ring-shaped iris region into a unified (polar) coordinate system. Preprocessing
and normalization are the most common measures used in all iris recognition
systems. The first stage of iris recognition is to isolate the actual iris region
in a digital eye image. Iris segmentation localizes the iris's spatial extent in
the eye image by isolating it from other structures in its vicinity, such as the
sclera, pupil, eyelids, and eyelashes. Iris normalization invokes a geometric
normalization scheme to transform the segmented iris image from Cartesian
coordinates to polar coordinates. The iris region is approximated by two circles,
one for the iris/sclera boundary and another for the iris/pupil boundary. The
eyelids and eyelashes normally occlude the upper and lower parts of the iris
region; the parabolic Hough transform is used to detect the eyelids,
approximating the upper and lower eyelids with parabolic arcs.
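The Cartesian-to-polar unwrapping described above can be sketched as a sampling loop over a fixed radial × angular grid. The centre coordinates, radii, and grid sizes below are hypothetical inputs that segmentation would normally supply:

```python
# Illustrative sketch: "rubber sheet" style normalization, sampling the annular
# iris region between the pupil and iris boundaries into a fixed rectangle.
import math

def unwrap_iris(image, cx, cy, r_pupil, r_iris, radial=8, angular=16):
    """Map the ring between r_pupil and r_iris to a radial x angular grid."""
    block = []
    for i in range(radial):
        # Sample at the middle of each radial band between the two circles.
        r = r_pupil + (r_iris - r_pupil) * (i + 0.5) / radial
        row = []
        for j in range(angular):
            theta = 2 * math.pi * j / angular
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            row.append(image[y][x])
        block.append(row)
    return block

# A toy 32x32 "eye": intensity equals distance from the centre, so every
# row of the unwrapped block should be (nearly) constant.
size = 32
eye = [[int(math.hypot(x - 16, y - 16)) for x in range(size)] for y in range(size)]
strip = unwrap_iris(eye, 16, 16, 4, 12)
print(len(strip), len(strip[0]))  # 8 16
```

Because the grid always spans from the pupil boundary to the iris boundary, pupils of different sizes map to the same rectangle, which is the dimensionless remapping that makes matching invariant to pupil dilation.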
The normalization module uses an image registration technique to transform the
iris texture from Cartesian to polar coordinates. The process, often called iris
unwrapping, yields a rectangular entity that is used for subsequent processing.
Normalization has three advantages: it accounts for variations in pupil size due
to changes in external illumination that might influence iris size; it ensures
that the irises of different individuals are mapped onto a common image domain in
spite of the variations in pupil size across subjects; and it enables iris
registration during the matching stage through a simple translation operation
that can account for in-plane eye and head rotations. Associated with each
unwrapped iris is a binary mask that separates iris pixels (labeled "1") from
pixels that correspond to the eyelids and eyelashes (labeled "0") identified
during segmentation. After normalization, photometric transformations enhance the
unwrapped iris's textural structure.
Image registration proceeds in four steps. Feature detection: salient and
distinctive objects in both the reference and sensed images are detected. Feature
matching: the correspondence between the features in the reference and sensed
images is established. Transform model estimation: the type and parameters of the
so-called mapping functions, aligning the sensed image with the reference image,
are estimated. Image resampling and transformation: the sensed image is
transformed by means of the mapping functions.
The iris images were normalized, and with the help of the normalized images the
iris and pupil regions were identified. The circular Hough transform is employed
for detecting circular regions in the images. The Hough transform can be used to
determine the parameters of a circle when a number of points that fall on the
perimeter are known. A circle with radius R and center (a, b) can be described
with the parametric equations x = a + R cos(θ) and y = b + R sin(θ).
When the angle θ sweeps through the full 360-degree range, the points (x, y) trace
the perimeter of the circle. If an image contains many points, some of which fall on
the perimeters of circles, then the job of the search program is to find the parameter
triplets (a, b, R) that describe each circle. The fact that the parameter space is
three-dimensional makes a direct implementation of the Hough technique expensive
in both computer memory and time.
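The voting over the three-dimensional (a, b, R) accumulator can be sketched as follows. This is a minimal Python illustration of the circular Hough transform, not the report's MATLAB code; the number of vote angles and the function name are assumptions:

```python
import numpy as np

def hough_circle(edge_points, radii, shape):
    """Vote in a 3-D (row, col, radius) accumulator: each edge point votes
    for every centre that would place it on a circle of each candidate radius,
    then the best-supported (a, b, R) triplet is returned."""
    acc = np.zeros((shape[0], shape[1], len(radii)), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for (x, y) in edge_points:
        for k, r in enumerate(radii):
            a = np.round(x - r * np.cos(thetas)).astype(int)  # candidate centre columns
            b = np.round(y - r * np.sin(thetas)).astype(int)  # candidate centre rows
            ok = (a >= 0) & (a < shape[1]) & (b >= 0) & (b < shape[0])
            acc[b[ok], a[ok], k] += 1
    b, a, k = np.unravel_index(acc.argmax(), acc.shape)
    return a, b, radii[k]
```

Because every edge point votes for every radius, memory and time grow with the number of candidate radii, which is exactly the cost noted above.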
5.4 Feature Extraction
Features were extracted from the input images using the Discrete Wavelet
Transform (DWT). Each input image was decomposed by the DWT into four
sub-bands, obtained from combinations of low-pass and high-pass filters: the first
sub-band results from applying two low-pass filters; the second from a low-pass
filter followed by a high-pass filter; the third from a high-pass filter followed by a
low-pass filter; and the fourth from two high-pass filters. The mean and standard
deviation were computed from each of the obtained sub-bands and used as the
features for the process. The DWT is a linear transformation that operates on a data
vector whose length is an integer power of two, transforming it into a numerically
different vector of the same length. It is a tool that separates data into different
frequency components and then studies each component with a resolution matched
to its scale. The DWT is computed as a cascade of filtering steps, each followed by
a factor-2 subsampling.
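One level of this decomposition, with the mean/standard-deviation features, can be sketched using the Haar wavelet, the simplest low-pass/high-pass filter pair. The helper names are assumptions, and the report's MATLAB code may use a different wavelet:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: low-pass (average) and high-pass
    (difference) filtering with factor-2 subsampling, first along rows,
    then along columns, giving the LL, LH, HL, and HH sub-bands."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass + subsample
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass + subsample
    LL = (lo[0::2] + lo[1::2]) / 2.0           # low-low (approximation)
    LH = (lo[0::2] - lo[1::2]) / 2.0           # low-high detail
    HL = (hi[0::2] + hi[1::2]) / 2.0           # high-low detail
    HH = (hi[0::2] - hi[1::2]) / 2.0           # high-high detail
    return LL, LH, HL, HH

def dwt_features(img):
    """Mean and standard deviation of each sub-band -> 8-element feature vector."""
    return [f(band) for band in haar_dwt2(img) for f in (np.mean, np.std)]
```

For a flat image the three detail sub-bands are all zero, so only the LL statistics carry information; on a textured iris strip all eight values contribute.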
The input training set contains several iris images per person, and the best
values were selected from the obtained attributes. The input to the genetic algorithm
is the training set of the CASIA database, with the number of chromosomes equal to
the number of training samples. The first step of the genetic algorithm is the
selection process, in which an initial fitness value is calculated for each chromosome
from the fitness function.
The next step is the crossover operation, in which the crossover probability is
calculated; it represents the probability that a particular attribute is selected.
The next step is the mutation operation, in which the mutation probability is
calculated; it represents the probability that an attribute is selected while being
modified.
The final fitness value is the average of the crossover and mutation probabilities.
These steps were repeated until the difference between successive fitness values fell
below a threshold; in our process the termination value is 0.001. The attributes
remaining at the end are the best attributes for the proposed process.
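The selection/crossover/mutation loop with the 0.001 termination value can be sketched as below. This is a Python illustration rather than the report's MATLAB code; the population size, crossover and mutation rates, and the toy fitness in the usage example are assumptions (the report computes fitness from the training attributes):

```python
import random

def ga_select(n_features, fitness, pop_size=20, p_cross=0.8,
              p_mut=0.02, tol=0.001, max_gen=100):
    """Binary-mask GA: each chromosome marks which attributes are kept.
    Repeats selection, crossover, and mutation until the best fitness
    changes by less than `tol` (the report's termination value, 0.001)."""
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    best_prev = max(fitness(c) for c in pop)
    for _ in range(max_gen):
        pop = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]  # selection
        parents = list(pop)
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            if random.random() < p_cross:                 # single-point crossover
                cut = random.randrange(1, n_features)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [g ^ int(random.random() < p_mut) for g in child]  # mutation
            pop.append(child)
        best = max(fitness(c) for c in pop)
        if abs(best - best_prev) < tol:                   # 0.001 termination test
            break
        best_prev = best
    return max(pop, key=fitness)
```

Keeping the top half of each generation makes the search elitist, so the best fitness never decreases between generations.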
From the experimental results, our FIG method can bring more effective recognition
results at satisfactory computation cost, compared with a single type of feature and
the full set of original features. Furthermore, it brought slightly better recognition
performance and much better computational efficiency than the commonly used
genetic feature selection method based on classification accuracy rate. Another
advantage of FIG is that it is independent of the classifiers; it needs to be performed
only once to select the features suitable for all the considered classifiers.
Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the
evolutionary ideas of natural selection and genetics. As such, they represent an
intelligent exploitation of a random search used to solve optimization problems.
Although randomized, GAs are by no means random; instead, they exploit historical
information to direct the search into regions of better performance within the search
space. The genetic algorithm is a method for solving both constrained and
unconstrained optimization problems that is based on natural selection, the process
that drives biological evolution. The genetic algorithm repeatedly modifies a
population of individual solutions.
5.6 Matching
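The matching stage is described later in the report as a distance-based comparison between the test image features and the enrolled database features. A minimal nearest-neighbour sketch in Python is shown below; the gallery layout, the choice of Euclidean distance, and the threshold value are assumptions, not the report's MATLAB implementation:

```python
import numpy as np

def match_iris(probe, gallery, threshold=0.5):
    """Compare the probe feature vector with every enrolled vector and
    accept the closest identity if its distance is under the threshold;
    otherwise report no match (None)."""
    dists = {pid: float(np.linalg.norm(np.asarray(probe) - np.asarray(v)))
             for pid, v in gallery.items()}
    pid = min(dists, key=dists.get)
    return (pid, dists[pid]) if dists[pid] <= threshold else (None, dists[pid])
```

The threshold trades sensitivity against specificity: lowering it rejects more impostors but also more genuine users, which is why the report evaluates both metrics.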
5.7 Flow Diagrams
5.8 System Architecture
Preprocessing → Segmentation → Feature extraction → Feature selection →
Recognition → Performance measures
Chapter 6
Output Images
First Image Smoothening Second Image Smoothening
First Image Gradient Edge Second Image Gradient Edge
First Image Hysteresis Threshold Second Image Hysteresis Threshold
First Image Normalized Iris
Result
Chapter 7
Conclusion
7.1 Conclusion
The proposed system recognizes the iris of the persons in the dataset based on
the features extracted using the DWT; the extracted features are statistical features
computed from the image. The extracted features were optimized using a genetic
algorithm, and the recognition of the iris is done using distance metrics. The
proposed system gives higher accuracy than the existing algorithms, which shows
that misclassifications are reduced to a great extent. The input iris images were
taken from the CASIA database. Basic preprocessing steps such as resizing and
noise removal were employed; for noise removal, a median filter is used. Canny
edge detection is applied to the enhanced iris image, the image gradient is computed
for it, and hysteresis thresholding is applied to the iris and pupil regions. The
circular Hough transform is applied to the iris images in order to identify the iris
and pupil regions. Features were extracted from the images using the DWT and then
optimized with the genetic algorithm. Person identification is performed based on
the distance calculated between the test image features and the images in the
database. The performance of the process is measured with metrics such as the
Accuracy, Sensitivity, and Specificity of the iris recognition process. The system
can be enhanced by including additional feature extraction algorithms such as
SURF; other newly developed feature extraction methods can describe more
information about the iris image. The additional extracted features should help
overcome the problem of real-time implementation of the process. Unsupervised
classifiers can be used to develop the process further.
7.2 Future Suggestions
Chapter 8
References
[9]. Robert W. Ives et al., "Iris Recognition Using Histogram Analysis," in
Proceedings of the 38th Asilomar Conference on Signals, Systems, and Computers,
pp. 562-566, 2004.
[10]. Bimi Jain, M. K. Gupta, and Jyoti Bharti, "Efficient Iris Recognition
Algorithm Using Method of Moments," International Journal of Artificial
Intelligence & Applications, vol. 3, no. 5, pp. 93-105, September 2012.