
A

PROJECT REPORT
ON
IRIS RECOGNITION SYSTEM USING MATLAB

PROJECT WORK SUBMITTED IN PARTIAL


FULFILMENT OF THE REQUIREMENT
FOR THE AWARD OF DEGREE OF

BACHELOR OF ENGINEERING
IN
ELECTRONIC & TELECOMMUNICATION

SUBMITTED BY
MOHD NASRULLAH KAZMI (14DET50)
MOHD FAIZ (12ET77)

UNDER THE GUIDANCE OF


(Prof. BANDANAWAZ)

ANJUMAN I-ISLAM’S
KALSEKAR TECHNICAL CAMPUS
2016-17
DEPARTMENT OF ELECTRONIC & TELECOMMUNICATION
ENGINEERING

Certificate

This is to certify that the project entitled “IRIS RECOGNITION SYSTEM
USING MATLAB”, submitted on this day of 3rd November 2016 in partial fulfillment of the
requirement for the award of the Degree of Bachelor of Engineering in Electronic
& Telecommunication of the University of Mumbai from “ANJUMAN I-ISLAM’S
KALSEKAR TECHNICAL CAMPUS”, is the bonafide work carried out by “MOHD
NASRULLAH KAZMI and MOHD FAIZ” to the best of our knowledge and has not
previously formed the basis for the award of any degree, associateship, fellowship or any
other similar title.

Prof. Bandanawaz                                        (External Examiner)


(Internal Examiner)

Prof. Mujib Tamboli


(H.O.D)
Declaration

I declare that this written submission represents my ideas in my own
words and where others' ideas or words have been included, I have adequately cited
and referenced the original sources. I also declare that I have adhered to all principles of
academic honesty and integrity and have not misrepresented or fabricated or
falsified any idea/data/fact/source in my submission. I understand that any violation
of the above will be cause for disciplinary action by the Institute and can also evoke
penal action from the sources which have thus not been properly cited or from whom
proper permission has not been taken when needed.

-----------------------------------------
(Mohd Nasrullah Kazmi 14DET50)

-----------------------------------------
(Patel Faiz Ayyub 12ET77)

Date:

Place: New Panvel


ACKNOWLEDGEMENT

We sincerely acknowledge our thanks to Prof. Bandanawaz of EXTC


Engineering Department for guiding us and giving us his valuable time and advice.
He always enriched us with his knowledge and gave us the necessary input to carry
out this work. We are grateful to him for his extra efforts and for being patient with
us. We are grateful to Prof. Tamboli, Head of the Department of Electronic and
Telecommunication Engineering, for permitting us to make use of the facilities
available in the department to carry out the project successfully. Last but not
least, we express our sincere thanks to all the other staff members and our parents, who
have patiently extended all sorts of help in accomplishing this undertaking.
ABSTRACT

Iris-based recognition of individuals is now widely deployed. It requires special
cameras to capture the iris image, and the captured images are classified based on the
features extracted from them. Iris recognition is not limited to identifying a person;
it can be extended to many other processes. In the proposed approach, persons are
recognized from their iris images. Recognition is based on DWT features extracted
from the iris image; these are statistical features computed from the input images.
The extracted features are optimized using a genetic algorithm, which comprises
selection, crossover and mutation. The probability of selecting each feature is
estimated in the selection, crossover and mutation steps. The optimized features are
then used for matching based on the Euclidean distance measure. Before feature
extraction, the iris images are preprocessed by a smoothing process and an edge
detection process. The preprocessed iris image is then normalized. Normalization
correctly identifies the iris and pupil regions in the image and reshapes the identified
regions, which improves efficiency. Applying the optimization technique reduces the
feature count. The person to whom the input iris belongs is identified, and the matching
process is carried out with the help of the identified person. The performance of the
process is measured using performance metrics, and the measured results indicate
that the proposed approach improves on existing approaches to iris recognition.
INDEX

SR NO.   TITLE                              PAGE NO.
1        INTRODUCTION                       1
2        BIOMETRIC SECURITY SYSTEM          2
3        HUMAN VISUAL SYSTEM                15
4        IRIS DATABASE AND DATASET          18
5        IMAGE PREPROCESSING ALGORITHM      21
6        OUTPUT IMAGES                      31
7        CONCLUSION                         37
8        REFERENCE                          39
Chapter 1
Introduction

Security of computer systems plays a crucial role nowadays. Computer
systems require remembering passwords that may be forgotten or stolen. Thus
biometric systems, based on physiological or behavioral characteristics of a person,
are being considered for a growing number of applications. These characteristics are
unique for each person, and are more tightly bound to a person than a token object
or a secret, which can be lost or transferred. Therefore, touch-less automated
real-time biometric systems for user authentication, such as iris recognition, have
become a very attractive solution. Iris recognition has been successfully deployed in
several large-scale public applications, increasing reliability for users and reducing
identity fraud. This method of identification depends on relatively unchangeable
features and thus is more accurately defined as authentication.

Within the last decade, governments and organizations around the world have
invested heavily in biometric authentication for increased security at critical
access points, not only to determine who accesses a system and/or service, but
also to determine which privileges should be provided to each user. For such
identification, biometric technology is emerging as one that provides a high level of
security while being convenient and comfortable for the citizen. For example, the
United Arab Emirates employs biometric systems to regulate the flow of people
across its borders. Subsequently, several biometric systems have attracted much
attention, such as facial recognition and iris recognition.

The iris is the most commonly used trait in biometric authentication systems,
since authentication using the iris is more reliable: the minute architecture of the iris
exhibits variations in every human. The main processes in the recognition of an iris
image are essentially the extraction of features and the labeling of the iris images.
Iris images are acquired through a camera with subtle infrared illumination in order
to capture the detail-rich, intricate structures of the iris that are externally visible at
the front of the eye. Digital templates encoded from the identified iris patterns by
mathematical and statistical algorithms allow the identification of individuals. Iris
images obtained from the same person have similar patterns. The race of a person can
also be identified through the iris: Asian people mostly have brown or black irises,
while non-Asian people have irises in different shades and sometimes blue.

1.1 Literature Survey

1.1.1 IRIS DETECTION BASED ON QUALITY FEATURES

This approach proposes a new parameterization based on quality-related
measures, used in a global software-based solution for iris liveness detection. This
novel strategy has the clear advantage over previously proposed methods of needing
just one iris image (i.e., the same iris image used for access) to extract the necessary
features in order to determine whether the eye presented to the sensor is real or fake.
This shortens the acquisition process and reduces the inconvenience for the final user.
The test image is taken, and in the first step the iris is segmented from the background;
for this purpose, a circular Hough transform is used to detect the iris and pupil
boundaries. A printed iris image is a 2D surface, in contrast to the 3D volume of a real
eye for which acquisition devices are designed. Thus, it is expected that the focus of a
fake iris will differ from that of a genuine sample. Intuitively, an image with good
focus is a sharp image. Defocus primarily attenuates high spatial frequencies, which
means that almost all features estimating this property perform some measure of the
high-frequency content in the overall image or in the segmented iris region. It is also
expected that the degree of movement of an iris printed on a sheet of paper and held in
front of a sensor will differ from that of a real eye, where a steadier position can be
maintained, so that the small trembling usually observed in the first case should be
almost imperceptible in a real eye.
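To make the focus idea above concrete, the following MATLAB sketch computes a simple high-frequency (gradient-energy) sharpness score for an eye image. It is only an illustration under assumed conditions: the file name is a placeholder, the Image Processing Toolbox is assumed, and this is not the actual quality-feature set used in the cited work.

    % Minimal sketch of a sharpness/focus measure based on high-frequency content.
    I = imread('eye_image.bmp');                  % placeholder file name
    if ndims(I) == 3, I = rgb2gray(I); end        % ensure grayscale
    I = im2double(I);
    [Gx, Gy] = imgradientxy(I, 'sobel');          % horizontal/vertical gradients
    sharpness = mean(Gx(:).^2 + Gy(:).^2);        % mean gradient energy (defocus lowers it)
    fprintf('Sharpness score: %.4f\n', sharpness);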

Advantages:
The proposed method can also be used to detect other types of fake data (e.g.,
printed lenses) by selecting the subset of parameters that better adapts to the new
anti-spoofing problem.

Disadvantages:
It is not easy to generate a synthetic trait that possesses all the measured
quality-related features to the same degree as a real sample.

1.1.2 IRIS DETECTION BASED ON TEXTURE ANALYSIS

Personal identification using biometrics has been developing rapidly over
the past decade. Biometric systems have already been deployed for border control,
access to personal computers and airport control. However, biometric systems still
have vulnerabilities. Spoofing of a biometric system may occur at every step from
data acquisition to the decision level, including using fake biometrics, replaying
attacks, corrupting match scores and overriding the final decision. The iris pattern is
considered the most accurate and stable biometric modality; however, iris recognition
systems meet a new challenge in detecting counterfeit irises, as colored contact lenses
have recently become popular. Attackers wearing contact lenses with artificial
textures printed onto them may try to access the system without authorization, which
is one of the potential means of spoofing such systems. This work addresses the issue
of detecting a fake iris wearing a printed color contact lens, with the purpose of
making iris recognition systems more robust against spoofing. This is a liveness
detection problem in biometrics, which aims to ensure that the images acquired by the
camera are real patterns. Iris images are preprocessed into a normalized image before
feature extraction. The main preprocessing steps include iris segmentation and
normalization: iris segmentation finds the iris region by precisely localizing its inner
and outer boundaries, and iris normalization projects the iris from Cartesian to polar
coordinates using bilinear interpolation. Textures printed on a contact lens usually
distribute over the outer half of the iris region, especially on the iris edge (the
transition from sclera to iris region). From the appearance of a fake iris, it can be seen
that its iris edge is usually sharper than that of a live iris, so iris edge sharpness (IES)
is introduced as the first measure to detect a counterfeit iris. A texton refers to a
fundamental micro-structure in generic natural images and thus constitutes the basic
element of texture in early visual perception. Iris-Texton feature extraction includes
two steps. First, a small, finite vocabulary of visual words in iris images, called
Iris-Textons, is learned. Then the Iris-Texton histogram is used as a feature vector to
represent the global characteristics of iris images. An input image ROI, excluding
eyelid/eyelash noise, is sent into filter banks, where the image is characterized by its
responses to a set of orientation- and spatial-frequency-selective Gabor filters. The
filter response vectors are then clustered into a set of prototypes using a vector
quantization algorithm, e.g., K-means. The K centers found by K-means are the
Iris-Textons; 64 Iris-Textons are learned, forming the Iris-Texton vocabulary. After
that, the global feature of an iris image is represented by the Iris-Texton histogram.
Each Iris-Texton is represented by the mean of the vectors in its cluster and serves as
one bin in the histogram.

Each pixel generates a 40-dimensional vector by Gabor filtering and concatenation,
and this vector is assigned to the bin representing the nearest texton. By mapping the
40-dimensional vectors to 64 bins, we obtain a histogram of bin frequencies. With
proper filters chosen, the Iris-Texton histogram reflects the richness of micro-texture
in an iris image, and is therefore sufficient to characterize the global features of the
iris image. Textural features based on grey level co-occurrence matrices (GLCM) are
the third measurement used to detect fake irises. These features are generated on the
image ROI, which is obtained by image preprocessing. Co-occurrence matrices
represent a second-order statistical measurement, as they characterize the relationship
between neighboring pixels.
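As an illustration of the GLCM-based texture measurement described above, the following hedged MATLAB sketch computes a co-occurrence matrix over four orientations on an iris ROI and extracts a few second-order statistics. The file name, orientations and choice of properties are assumptions for illustration, not the exact settings of the cited work (Image Processing Toolbox assumed).

    % Sketch: grey level co-occurrence matrix (GLCM) features on an iris ROI.
    roi = imread('iris_roi.png');                     % placeholder ROI image
    if ndims(roi) == 3, roi = rgb2gray(roi); end
    offsets = [0 1; -1 1; -1 0; -1 -1];               % 0, 45, 90 and 135 degree neighbors
    glcm = graycomatrix(roi, 'Offset', offsets, 'Symmetric', true);
    stats = graycoprops(glcm, {'Contrast','Correlation','Energy','Homogeneity'});
    % Average each property over the four orientations to form a feature vector.
    featureVec = [mean(stats.Contrast), mean(stats.Correlation), ...
                  mean(stats.Energy),   mean(stats.Homogeneity)];
    disp(featureVec);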

ADVANTAGES:
It can be employed in systems with speed requirements given its low
computational cost, and its performance is comparable to the state of the art. It can
also be deployed by setting the threshold to a low FAR (False Acceptance Rate). The
CCR (Correct Classification Rate) is closely related to the type of contact lens, or to
the technique used to make it.

DISADVANTAGES:
The proposed method depends heavily on segmentation accuracy. In some datasets
it was more challenging to segment the iris regions exactly, and there is no assurance
that the iris will be segmented exactly for all real-time data. The False Rejection Rate
is also low.

1.1.3 CONTACT LENS DETECTION BASED ON WEIGHTED LBP

Iris recognition has drawn much attention due to its convenience and security.
Compared with other biometric modalities, the iris pattern has been regarded as one of
the most accurate for its uniqueness, stability and non-intrusiveness. However, like
other biometric systems, an iris system is also under threat of forged-iris attacks.
Efficient iris spoof detection can improve the security of iris recognition systems.
Several artifacts have been considered for spoofing iris recognition systems, such as
paper-printed irises, cosmetic contact lenses, and redisplayed videos. Spoofing caused
by wearing a cosmetic contact lens is particularly dangerous: it is easily accepted by
the system and hard to detect. Spoof detection is a critical function for iris recognition
because it reduces the risk of iris recognition systems being forged.

Despite the various counterfeit artifacts, the cosmetic contact lens is one of the most
common and difficult to detect. This work proposes a novel fake iris detection
algorithm based on an improved LBP and statistical features. First, a simplified SIFT
descriptor is extracted at each pixel of the image. Second, the SIFT descriptor is used
to rank the LBP encoding sequence. The proposed method consists of four steps: iris
image preprocessing; generating the scale space and calculating SIFT-like descriptors;
calculating weighted local binary patterns; and extracting features and classification.
The main preprocessing steps include iris segmentation and de-noising. Iris
segmentation finds the iris region by precisely localizing its inner and outer
boundaries. The bounding square block of the iris circle is regarded as the region of
interest for feature extraction, rather than normalizing the iris to a polar coordinate
system, in order to keep the regular texture pattern of the colored contact lens in
Cartesian coordinates and to save the time needed for coordinate transformation. For
de-noising, a low-pass filter and Total Variation are chosen as the de-noising methods.
To make the computation convenient, iris images are normalized to the same size of
400 × 400. After smoothing, a simplified SIFT descriptor is used to analyze the local
structural characteristics. The SIFT descriptor is adopted because it is largely
invariant to changes of scale, illumination, and local affine distortions, and is also
stable to a certain degree against view changes and noise. Application of the SIFT
descriptor enhances the stability and robustness of LBP. The first step is to generate
the scale space from the convolution of a variable-scale Gaussian template with an
image. The second step is to extract a simplified SIFT descriptor for each pixel in its
5×5 neighborhood. Arrows denote the magnitude and orientation at each image pixel,
and the overlaid circle is a weighted Gaussian window. In order to achieve orientation
invariance, the coordinates of the descriptor and the gradient orientations are rotated
relative to the main orientation, which is determined by all the gradient directions of
every scale. The last step is to obtain a descending rank of the orientation histogram.
Thus, a 576-dimensional feature is obtained for each image. Local binary patterns
(LBP) are adopted to represent the texture patterns of iris images. LBP has emerged as
an effective texture descriptor and is widely used in texture analysis.

ADVANTAGES:
The proposed method is robust to the effects of glasses, including specular
reflections, occlusion by the glasses frame, haziness caused by dirty optics, and extra
texture from dirty optics. The combination of SIFT with LBP improves invariance to
scale, illumination and local affine distortion, and makes the algorithm more robust to
camera view changes. The proposed method is robust in detecting contact lenses and
is promising for cross-camera fake iris detection.

DISADVANTAGES:
The proposed method works well for counterfeit iris images, but such counterfeit
iris samples are time-consuming and costly to collect. The method has to be enhanced
to make it work for all types of images so that the process can be employed in real
time.

1.1.4 Efficient Iris Spoof Detection via Boosted Local Binary Patterns

Author: Z. He, Z. Sun, T. Tan, and Z. Wei


Year: 2009

Although iris textures are commonly thought to be highly discriminative between
eyes, they (including contact-lens-covered iris textures) still present several desirable
common properties. Radial distribution: even within an iris, the scale of the iris
micro-structures varies a lot along the radius; usually, the larger the radius, the bigger
the iris micro-structures. Angular self-similarity: although different angular regions
remain discriminative, their texture patterns display a certain degree of
consistency/correlation. These properties suggest dividing the iris into multiple
regions, each sub-region containing a particular texture pattern. Via such a division, a
more specific representation of the iris can be obtained, which makes it easier to
discriminate live iris textures from counterfeit iris textures. Moreover, the upper and
lower parts of the iris are almost always occluded by eyelids or eyelashes, so it is
straightforward to exclude the upper and lower quarters from feature extraction for a
more concise representation. In order to achieve translation and scale invariance, the
iris is normalized to a rectangular block of a fixed size of 64×512. The whole iris
image is divided into three annular sections along the radial direction and two sectors
along the angular direction, giving six sub-regions in total. The textures of counterfeit
iris images and live iris images differ in appearance and can be used for spoof
detection. The LBP code is defined for each pixel by thresholding its 3×3
neighborhood pixels with the center pixel value and considering the result as a binary
bit string.

Each LBP code represents a type of micro image structure, and the distribution of
the codes can be used as a texture descriptor. The original LBP was later extended to
multi-scale LBP (denoted LBP(P, R)) and uniform LBP. LBP(P, R) is calculated by
thresholding P equally spaced points on a circle of radius R against the center pixel
value. An LBP code is called uniform if its bit string contains at most two bit-wise
transitions from 0 to 1 or vice versa.
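To make the basic LBP operator concrete, here is a minimal MATLAB sketch (not the authors' implementation) that thresholds the 3×3 neighborhood of every pixel against its center and accumulates the resulting 8-bit codes into a histogram; the input file name is a placeholder.

    % Sketch of the basic 3x3 LBP operator on a grayscale image.
    I = double(imread('iris_region.png'));        % placeholder grayscale image
    if ndims(I) == 3, I = mean(I, 3); end
    [rows, cols] = size(I);
    center = I(2:rows-1, 2:cols-1);
    codes  = zeros(rows-2, cols-2);
    offs = [-1 -1; -1 0; -1 1; 0 1; 1 1; 1 0; 1 -1; 0 -1];   % clockwise neighbors
    for k = 1:8
        nb = I(2+offs(k,1):rows-1+offs(k,1), 2+offs(k,2):cols-1+offs(k,2));
        codes = codes + (nb >= center) * 2^(k-1);  % set bit k when neighbor >= center
    end
    lbpHist = histcounts(codes(:), 0:256);         % 256-bin texture descriptor
    lbpHist = lbpHist / sum(lbpHist);              % normalize to a distribution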

LBP-based methods have proved to be successful in biometric texture
representation, such as for the face and the iris. Multi-resolution LBPs are adopted for
the texture representation of each sub-region. Each bin represents the frequency of
one type of micro image structure in one sub-region, and is considered a candidate
texture feature. A large pool of regional LBP features is therefore generated. This
feature pool contains much redundant information because of the redundancy between
different LBP features as well as between different sub-regions. To learn the most
discriminative regional LBP features from the redundant feature pool, the Adaboost
algorithm is used. Adaboost is a machine learning algorithm that can select a small set
of the most discriminative features from a candidate feature pool. It is efficient for
binary problems and is therefore suitable for selecting the best LBP features for iris
spoof detection. It begins by generating the feature pool on the training samples. After
that, Adaboost repeatedly learns component classifiers on re-weighted versions of the
training samples until the performance criterion is satisfied. There are three key
modules involved in Adaboost: the weak learner, the component classifier and the
re-weighting function. The weak learner is the criterion for choosing the best feature
on the weighted training set. The component classifier outputs a confidence score of a
sample being positive based on its value. The re-weighting function maintains a
distribution over the training samples and updates it in such a way that the subsequent
component classifier can concentrate on the hard samples, by giving higher weights to
the samples that were wrongly classified by the previous classifier. Among the various
Adaboost algorithms, confidence-rated Adaboost learning is chosen for its efficiency
and simplicity. In confidence-rated Adaboost, the weak learner tries to find the feature
that maximizes the criterion. The weak learner and the component classifier depend on
the density distributions of positive and negative samples, which are estimated by
histograms. More bins in the histogram give a more refined representation of the
feature density distribution. However, when the training samples of one class are
insufficient, the samples falling into each bin are not enough for a stable estimate of
the distribution of that class, only an ad-hoc one based on the current training samples,
and the classifier learned from the limited training samples will be ad-hoc.
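As a rough illustration of boosted selection over regional LBP features, the following hedged MATLAB sketch trains an AdaBoost ensemble of decision stumps and ranks features by importance. It uses the standard AdaBoostM1 from the Statistics and Machine Learning Toolbox rather than the confidence-rated variant described above, and the feature matrix, labels and parameter values are placeholders.

    % Sketch: AdaBoost over a pool of regional LBP histogram bins.
    X = rand(200, 354);                 % hypothetical N-by-D candidate feature pool
    y = [ones(100,1); zeros(100,1)];    % hypothetical labels: 1 = live, 0 = fake
    stump = templateTree('MaxNumSplits', 1);            % decision stumps as weak learners
    model = fitcensemble(X, y, 'Method', 'AdaBoostM1', ...
                         'Learners', stump, 'NumLearningCycles', 100);
    imp = predictorImportance(model);   % larger values ~ more discriminative bins
    [~, topFeatures] = maxk(imp, 20);   % indices of the 20 most useful LBP features
    labels = predict(model, X(1:5, :)); % classify a few samples with the boosted model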

Chapter 2
Biometric Security Systems

This chapter gives an overview of biometrics and their characteristics, the
system requirements and modes of operation. It then briefly addresses the commonly
used biometrics. Finally, the system performance and the types of system errors are
described.

2.1 Introduction

Today, biometric recognition is a common and reliable way to authenticate
the identity of a living person based on physiological or behavioral characteristics.

As services and technologies have developed in the modern world, human
activities and transactions in which rapid and reliable personal identification is
required have proliferated. Examples of applications include logging on to computers,
passing through airports, access control in laboratories and factories, bank Automatic
Teller Machine (ATM) and other transaction authorization, premises access control,
and security systems in general, wherever people need to verify their identities. All
such identification efforts share the common goals of speed, reliability and
automation.

Previously, the most popular methods of keeping information and resources
secure were password and user ID/PIN protection. These schemes require users to
authenticate themselves by entering a "secret" password that they had previously
created or been assigned. Such systems are prone to hacking, either through attempts
to crack the password or through passwords that are not unique. Moreover, passwords
can be forgotten, and identification cards can be lost or stolen.

A biometric identification system is one in which the user's "body" becomes the
password/PIN. Biometric characteristics of an individual are unique and can therefore
be used to authenticate a user's access to various systems.

2.2 Biometric

The word ‘biometric’ is a two-part term taken from the Greek, in which ‘bio’
means life and ‘metric’ means measure. Combining these two words, ‘biometric’ can
be defined as the measure (study) of life, which includes humans, animals, and plants.
“Biometric technologies” are defined as automated methods of verifying or
recognizing the identity of a living person based on a physiological or behavioral
characteristic.

By definition, there are two key words in it: “automated” and “person”. The
word “automated” differentiates biometrics from the larger field of human
identification science. Biometric authentication techniques are done completely by
machine, generally (but not always) a digital computer [36]. The second key word
is “person”. Statistical techniques, particularly using fingerprint patterns, have been
used to differentiate or connect groups of people or to probabilistically link persons
to groups, but biometrics is interested only in recognizing people as individuals. All
of the measures used contain both physiological and behavioral components, both
of which can vary widely or be quite similar across a population of individuals. No
technology is purely one or the other, although some measures seem to be more
behaviorally influenced and some more physiologically influenced.

A biometric system is essentially a pattern recognition system which makes a


personal identification by determining the authenticity of a specific physiological or
behavioral characteristic possessed by the user. Biometric technologies are thus
defined as the "automated methods of identifying or authenticating the identity of a
living person based on a physiological or behavioral characteristic". A biometric
system can be either an 'identification' system or a 'verification' (authentication)
system, which are defined below:
Identification - One to Many: Biometrics can be used to determine a person's
identity even without his knowledge or consent. For example, by scanning a crowd
with a camera and using face recognition technology, one can determine matches
against a known database.
Verification - One to One: Biometrics can also be used to verify a person's identity.
For example, one can grant physical access to a secure area in a building by using
finger scans, or grant access to a bank account at an ATM by using a retinal scan.
Biometric authentication requires comparing a registered or enrolled biometric
sample (biometric template or identifier) against a newly captured biometric sample
(for example, one captured during a login). This is a three-step process (Capture,
Process, Enroll) followed by a Verification or Identification process.
During the capture process, the raw biometric is captured by a sensing device such
as a fingerprint scanner or video camera. The second phase, processing, extracts the
distinguishing characteristics from the raw biometric sample and converts them into a
processed biometric identifier record (sometimes called a biometric sample or
biometric template). The next phase is enrollment: here the processed sample (a
mathematical representation of the biometric, not the original biometric sample) is
stored/registered in a storage medium for future comparison during authentication. In
many commercial applications, only the processed biometric sample needs to be
stored; the original biometric sample cannot be reconstructed from this identifier.
Due to their reliability and nearly perfect recognition rates, biometric methods
enable reliable and secure identification of people. Many biometric-based
identification systems have been proposed, such as fingerprint, face recognition, facial
expression, voice, and iris, as shown in Fig. 2.1. These methods, based on physical or
behavioral characteristics, are of interest because people cannot forget or lose their
physical characteristics.

Fig: Showing Biometric Characteristics

2.3 Biometric History

The science of using humans for the purpose of identification dates back to
1875 and the measurement system of Alphonse Bertillon. Bertillon’s system of
body measurements, including skull diameter and arm and foot length, was used in
the USA to identify prisoners until 1925. Before that, William Herschel and Sir
Francis Galton proposed quantitative identification through fingerprint and facial
measurements in the 1880s. The development of digital signal processing
techniques in the 1960s led immediately to work on automating human
identification. Speaker and fingerprint recognition systems were among the first to
be applied. The potential for applying this technology to high-security access
control, personal locks and financial transactions was recognized in the early 1960s.
The 1970s saw the development and deployment of hand geometry systems, the start
of large-scale testing and increasing interest in government use of these automated
personal identification technologies. Retinal and signature verification systems came
in the 1980s, followed by face systems. Lastly, iris recognition systems were
developed in the 1990s.

2.4 Biometric Concepts

A number of biometric characteristics may be captured in the first phase of
processing. However, automated capture and automated comparison with previously
stored data require that the biometric characteristics satisfy the following
requirements.

Universal: Every person must possess the characteristic/attribute. The attribute must
be one that is universal and seldom lost to accident or disease.
Invariance of properties: They should be constant over a long period of time. The
attribute should not be subject to significant differences based on age, or on episodic
or chronic disease.

Measurability: The properties should be suitable for capture without waiting time
and must be easy to gather the attribute data passively.

Singularity: Each expression of the attribute must be unique to the individual. The
characteristics should have sufficient unique properties to distinguish one person
from any other. Height, weight, hair and eye color are all attributes that are unique
assuming a particularly precise measure, but do not offer enough points of
differentiation to be useful for more than categorizing.

Acceptance: The capturing should be possible in a way acceptable to a large


percentage of the population. Excluded are particularly invasive technologies, i.e.
technologies which require a part of the human body to be taken or which
(apparently) impair the human body.
Reducibility: The captured data should be capable of being reduced to a file which
is easy to handle.

Reliability and tamper-resistance: The attribute should be impractical to mask or


manipulate. The process should ensure high reliability and reproducibility.
Privacy: The process should not violate the privacy of the person.
Comparable: Should be able to reduce the attribute to a state that makes it digitally
comparable to others. The less probabilistic the matching involved, the more
authoritative the identification.

Inimitable: The attribute must be irreproducible by other means. The less


reproducible the attribute, the more likely it will be authoritative.

2.5 Operating Mode

Depending on the application context, a biometric system may operate in
two modes: verification mode or identification mode. In verification mode, the
system verifies identity by comparing the presented biometric trait with a stored
biometric template in the system (one-to-one). If the similarity is sufficient
according to some similarity measure, the user is accepted by the system. In such a
system, an individual who desires to be recognized claims an identity, usually via a
Personal Identification Number (PIN), a user name, or a smart card, and the system
conducts a one-to-one comparison to determine whether the claim is true or not
(e.g., “Does this biometric data belong to person X?”). Identity verification is
typically used for positive recognition, where the aim is to prevent multiple people
from using the same identity.

In identification mode, a database search is required. A user presents a not
necessarily known sample of his/her biometrics to the system. This sample is then
compared with existing samples in a central database (one-to-many) [48].
Identification is a critical component in negative recognition applications, where the
system establishes whether the person is who he/she (implicitly or explicitly) denies
to be. The purpose of negative recognition is to prevent a single person from using
multiple identities. Identification may also be used in positive recognition for
convenience (the user is not required to claim an identity). While traditional
methods of personal recognition such as passwords, PINs, keys, and tokens may
work for positive recognition, negative recognition can only be established through
biometrics.

Fig: Components Of Biometric System

2.6 Commonly Used Biometric Characteristics

A number of biometric characteristics exist and are in use. Each biometric has its
strengths and weaknesses, and the choice depends on the application. In other words,
no biometric is “optimal”. A brief introduction to the commonly used biometrics is
given below.

2.6.1 Physiological Characteristic Biometrics

1. Facial, hand, and hand-vein infrared thermograms: the pattern of heat radiated
from the human body is considered a characteristic of the individual. These
patterns can be captured by an infrared camera in an unobtrusive manner, like
a regular (visible spectrum) photograph. The technology could be used for
covert recognition.

2. Ear: many researchers have suggested that the shape of the ear can be used as a
characteristic. The approaches are based on matching the distance of salient
points on the pinna from a landmark location on the ear. The features of an ear
are not expected to be very distinctive in establishing the identity of an
individual.

3. Fingerprint: a fingerprint is the pattern of ridges and valleys on the surface of
a fingertip, the formation of which is determined during the first seven months
of fetal development. Fingerprints of identical twins are different, and so are
the prints on each finger of the same person. Humans have used fingerprints
for personal identification for many centuries, and the matching accuracy
using fingerprints has been shown to be very high.

4. Retina: since the retina is protected within the eye itself, and since it is not
easy to change or replicate the retinal vasculature, this is one of the most
secure biometrics. Retinal recognition creates an eye signature from the
vascular configuration of the retina, which is supposed to be a characteristic of
each individual and each eye, respectively.

2.6.2 Behavioral Characteristic Biometrics

1. Gait: gait is the peculiar way one walks, and it is a complex spatio-temporal
biometric. This is one of the newer technologies and is yet to be researched in
more detail. Gait is a behavioral biometric and may not remain the same over a
long period of time, due to changes in body weight or serious brain damage.
Acquisition of gait is similar to acquiring a facial picture, so it may be an
acceptable biometric.

2. Keystroke: it has been observed that each person types on a keyboard in a
characteristic way. Keystroke dynamics is a behavioral biometric; for some
individuals, one may expect to observe large variations in typical typing
patterns.

Fig: Comparison Of Different Biometric Techniques


Chapter 3
Human Vision System

3.1 Eye Anatomy

The human eye has been called the most complex organ in our body. It is
amazing that something so small can have so many working parts. But when you
consider how difficult the task of providing vision really is, perhaps it is no wonder
after all. It is useful to consider briefly the eye anatomy, which is shown below:

Fig: Anatomy Of Human Eye

We will move quickly over only some of the well-known parts of the human eye.
The cornea is a clear, transparent portion of the outer coat of the eyeball through
which light passes to the lens. The lens helps to focus light on the retina, which is the
innermost coat of the back of the eye, formed of light-sensitive nerve endings that
carry the visual impulse to the optic nerve. The retina acts like the film of a camera in
its operation and tasks.

The iris is a thin circular ring that lies between the cornea and the lens of the human
eye. A front-on view of the iris is shown in the figure, in which the iris encircles the
pupil, the dark central portion of the eye. The function of the iris is to control the
amount of light entering through the pupil; this is done by the sphincter and dilator
muscles, which adjust the size of the pupil.

The sphincter muscle lies around the very edge of the pupil. In bright light, the
sphincter contracts, causing the pupil to constrict. The dilator muscle runs radially
through the iris, like spokes on a wheel, and dilates the pupil in dim lighting.

The sclera, a white region of connective tissue and blood vessels, surrounds the iris.
The externally visible surface of the multi-layered iris contains two zones, which
often differ in color: an outer ciliary zone and an inner pupillary zone. These two
zones are divided by the collarette, which appears as a zigzag pattern.

The iris is composed of several layers, and its visual appearance is a direct result of
this multilayered structure. Iris color results from the differential absorption of light
impinging on the pigmented cells in the anterior border layer and posterior epithelium;
light that is not absorbed is scattered as it passes through the stroma to yield a blue
appearance. Progressive levels of anterior pigmentation lead to darker colored irises.

The average diameter of the iris is nearly 11 mm, and the pupil radius can range
from 0.1 to 0.8 of the iris radius. The iris shares a high-contrast boundary with the
pupil but a lower-contrast boundary with the sclera.

3.2 Iris Recognition System
The idea of using the iris as a biometric is over 100 years old. However, the
idea of automating iris recognition is more recent. In 1987, Flom and Safir
obtained a patent for an unimplemented conceptual design of an automated iris
biometrics system.

Image processing techniques can be used to extract the unique iris pattern from a
digitized image of the eye and encode it into a biometric template, which can then be
stored in a database. This biometric template contains an objective mathematical
representation of the unique information stored in the iris, and allows comparisons to
be made between templates.

When a subject wishes to be identified by an iris recognition system, the eye is first
photographed (captured by a camera; this step is called the acquisition stage), and a
template is then created for the iris region (these stages are explained later). This
template is compared with the other templates stored in a database until either a
matching template is found and the subject is identified, or no match is found and the
subject remains unidentified. In addition, an iris recognition system works in the two
modes described earlier: verification and identification.

3.3 Iris System Challenges

One of the major challenges of automated iris recognition systems is to
capture a high-quality image of the iris while remaining noninvasive to the human
operator. Moreover, to capture the rich details of iris patterns, an imaging system
should resolve a minimum of 70 pixels in iris radius. In the field trials to date, a
resolved iris radius of 80-130 pixels has been more typical. Monochrome CCD
cameras (480×640) have been widely used because Near Infrared (NIR) illumination
in the 700-900 nm band is required for imaging to be non-intrusive to humans.
Some imaging platforms deployed a wide-angle camera for coarse localization of
eyes in faces, to steer the optics of a narrow-angle pan/tilt camera that acquired
higher-resolution images of the eyes.

Given that the iris is a relatively small (nearly 1 cm in diameter), dark object and that
human operators are very sensitive about their eyes, this matter requires careful
engineering. Some points should be taken into account: (i) acquiring images of
sufficient resolution and sharpness; (ii) good contrast in the interior iris pattern
without resorting to a level of illumination that annoys the operator; (iii) the
images should be well framed; and (iv) noise in the acquired images should be
eliminated as much as possible.
Chapter 4
Iris Database and Dataset

This chapter briefly describes the common datasets used, concentrating on the
Chinese Academy of Sciences Institute of Automation (CASIA 1) database that is
used in the current work. The database characteristics are illustrated through the
literature.

4.1 Iris Image Acquisition

All current commercial iris biometrics systems still have constrained image
acquisition conditions. Near infrared illumination, in the 750–950 nm range, is
used to light the face, and the user is prompted with visual and/or auditory
feedback to position the eye so that it can be in focus and of sufficient size in the
image. In 2004, Daugman suggested that the iris should have a diameter of at least
140 pixels. The International Standards Organization (ISO) Iris Image Standard
released in 2006 is more demanding, specifying a diameter of 200 pixels.
Experimental research on iris recognition system requires an iris image dataset.
Several datasets are discussed briefly in this chapter.

4.2 Brief Descriptions Of Some Datasets

For a long time there was no public iris database, and the lack of iris data was a
block to iris recognition research. To promote such research, the National Laboratory
of Pattern Recognition (NLPR), Institute of Automation (IA), Chinese Academy of
Sciences (CAS) provides an iris database freely to iris recognition researchers. The
table below summarizes information on a number of well-known iris datasets.

Database    No. of irises    No. of images    Camera used
CASIA 1     108              756              CASIA camera
CASIA 3     1500             22051            CASIA camera
ICE 2005    244              2953             LG2200
ICE 2006    480              60000            LG2200
MMU 1       90               450              LG IrisAccess
MMU 2       199              995              Panasonic BM-ET100US Authenticam
UBIRIS      241              1877             Nikon E5700
UPOL        128              384              SONY DXC-950P 3CCD

4.2.1 CASIA Version 1

All images tested are taken from the Chinese Academy of Sciences Institute of
Automation (CASIA) iris database. Apart from being the oldest, this database is
clearly the best known and the most widely used by researchers. Each image is a
320×280 pixel photograph of the eye taken from 4 cm away using a near-infrared
camera. The NIR spectrum (800 nm) emphasizes the texture patterns of the iris,
making the measurements taken during iris recognition more precise.

Usability is the largest bottleneck of current iris recognition. It is a trend to develop


long-range iris image acquisition systems for friendly user authentication. However,
iris images captured at a distance are more challenging than traditional close-up iris
images. Lack of long-range iris image data in the public domain has hindered the
research and development of next-generation iris recognition systems.

The noise factors in these images are exclusively related to iris obstructions by
eyelids and eyelashes. In addition, the pupil regions of all iris images were
automatically detected and replaced with a circular region of constant intensity to
mask out the specular reflections from the NIR illuminators (8 illuminators) before
public release.

4.2.2 Iris Challenge Evaluation

The iris image datasets used in the Iris Challenge Evaluations (ICE) in 2005
and 2006 were acquired at the University of Notre Dame and contain iris images of
a wide range of quality, including some off-axis images. The ICE 2005 database is
currently available, and the larger ICE 2006 database has also been released. One
unusual aspect of these images is that the intensity values are automatically
contrast-stretched by the LG 2200 to use 171 gray levels between 0 and 255.
Samples are shown in the figure.

Fig: ICE Iris Image Set

4.2.3 Lions Eye Institute

The Lions Eye Institute (LEI) database consists of 120 greyscale eye images
taken using a slit lamp camera. Since the images were captured using natural light,
specular reflections are present on the iris, pupil, and cornea regions. Unlike the
CASIA database, the LEI database was not captured specifically for iris recognition.

4.3 Database Used

In this project the CASIA (version 1) iris image database is used for testing
and experimentation. The images, taken in almost perfect imaging conditions, are
practically noise-free (with respect to photon noise, sensor and electronic noise,
reflections, focus, compression, contrast levels, and light levels).

In addition, using NIR illumination keeps the illumination controlled and
unintrusive to humans, and helps reveal the detailed structure of heavily pigmented
(dark) irises. A random subset of the database, containing different persons' eyes, is
selected for testing under unbiased conditions.

Chapter 5
Image Preprocessing Algorithm

This chapter presents our proposed system and its software implementation.
It discusses the image pre-processing methods and walks through the image
processing steps, the segmentation process based on modifications of the Wildes and
Daugman approaches, and the iris normalization and unwrapping method. The results
of each step are shown at the end of its section.

5.1 Proposed System

Individual recognition using the iris is now widely employed. In the proposed
approach, persons are recognized on the basis of their iris images. The input iris
images were obtained from the CASIA database. Since the input images differ in size,
they are first resized to 256 x 256 pixels and then filtered using a median filter, which
smooths out noise in the input image. The images are divided into small regions, the
noisy pixels in each region are identified, and each identified noisy pixel is replaced
with the help of the average value of the overall region. Segmentation of the iris
region is then performed based on an edge detection process and a gradient extraction
process.

For detection of the edge pixels in the images, the Canny edge detection process is
employed. The Canny edge detector is an edge detection operator that uses a
multi-stage algorithm to detect a wide range of edges in images. It uses a
Gaussian-based filter, in which the raw image is convolved with a Gaussian kernel.
The gradient information is obtained from the images through gradient extraction on
the input image. Hysteresis thresholding is employed for the iris and pupil regions,
and the circular Hough transform is applied to the iris images in order to identify the
iris and pupil regions. The recognition of the iris is done based on the DWT features
extracted from the iris image. Features are extracted from the input images using the
Discrete Wavelet Transform (DWT): the input images are decomposed by the DWT
into four regions, based on combinations of low-pass and high-pass filters. The first
sub-band is obtained by applying two low-pass filters, the second by applying a
low-pass filter and then a high-pass filter, the third by applying a high-pass filter and
then a low-pass filter, and the fourth by applying two high-pass filters. The average
value and the standard deviation are extracted from the obtained sub-bands and used
as the features for the process. The extracted features are then optimized using a
genetic algorithm, which selects the best features from the extracted ones. A genetic
algorithm combined with the Fisher criterion is employed for this process, with the
following steps:
Population Initialization: the population is initialized using the extracted features.
The features are divided into particular ranges, and the divided regions are encoded
and considered as chromosomes.
Selection Operation: for each generated population, the probability of selecting a
feature is calculated; for this calculation, a fitness value is computed based on the
Fisher criterion.
Crossover Operation: new individuals are generated from the recombination of
existing individuals, and a probability is calculated for each individual.
Mutation Operation: by inverting one bit in each part of an individual, a child is
created, and a probability is calculated for the newly generated child.
Termination: the stopping condition is the convergence of the algorithm, at which the
maximum fitness is obtained after a number of generations.
The optimized features are then used for the matching process based on the Euclidean
distance measurement.

Before feature extraction, the iris images are preprocessed by the smoothing process
and the edge detection process. Applying the optimization technique helps reduce the
feature count. The performance of the process is measured based on the performance
metrics.
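The following MATLAB sketch outlines the preprocessing, edge detection, circle localization and Euclidean-distance matching steps described in this section. It is a minimal illustration under assumed conditions: the Image Processing Toolbox is assumed, and the file name, radius ranges, feature vectors, stored templates and acceptance threshold are placeholders; the DWT and genetic-algorithm stages are sketched separately in Sections 5.4 and 5.5.

    % Sketch of the proposed pipeline: resize, median filter, Canny edges,
    % circular Hough transform, then Euclidean-distance matching.
    I = imread('casia_iris.bmp');                 % placeholder CASIA image
    if ndims(I) == 3, I = rgb2gray(I); end
    I = imresize(I, [256 256]);                   % fixed working size
    I = medfilt2(I, [3 3]);                       % median-filter smoothing
    edges = edge(I, 'canny');                     % Canny with hysteresis thresholding
    % Circular Hough transform for pupil and iris boundaries (assumed radius ranges).
    [pupilC, pupilR] = imfindcircles(I, [20 60], 'ObjectPolarity', 'dark');
    [irisC,  irisR ] = imfindcircles(edges, [60 120]);
    % Matching: compare an optimized feature vector against stored templates.
    featVec   = rand(1, 8);                       % placeholder optimized feature vector
    templates = rand(10, 8);                      % placeholder enrolled templates
    d = sqrt(sum((templates - featVec).^2, 2));   % Euclidean distance to each template
    [dmin, bestMatch] = min(d);
    if dmin < 0.5                                 % hypothetical acceptance threshold
        fprintf('Matched enrolled subject %d (distance %.3f)\n', bestMatch, dmin);
    else
        disp('No match found');
    end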

5.2 Iris Preprocessing

The iris image is first resized to a particular size. Noise reduces the quality of the
images, so to improve it we normally employ some filtering operation; here a median
filter is used. The median filter considers each pixel in the image in turn and looks at
its nearby neighbors to decide whether or not it is representative of its surroundings.
Instead of simply replacing the pixel value with the mean of the neighboring pixel
values, it replaces it with their median. The median is calculated by first sorting all
the pixel values from the surrounding neighborhood into numerical order and then
replacing the pixel being considered with the middle pixel value. In image processing,
it is often desirable to perform some kind of noise reduction on an image or signal.
The median filter is a nonlinear digital filtering technique, often used to remove
noise, and such noise reduction is a typical pre-processing step to improve the results
of later processing (for example, edge detection on an image). Median filtering is
very widely used in digital image processing because, under certain conditions, it
preserves edges while removing noise.

The main idea of the median filter is to run through the image entry by entry,
replacing each entry with the median of the neighboring entries. The pattern of
neighbors is called the "window", which slides, entry by entry, over the entire image.
For 1D signals, the most obvious window is just the first few preceding and following
entries, whereas for 2D (or higher-dimensional) data such as images, more complex
window patterns are possible (such as "box" or "cross" patterns). Note that if the
window has an odd number of entries, then the median is simple to define: it is just
the middle value after all the entries in the window are sorted numerically. For an
even number of entries, there is more than one possible median. The median filter is
the nonlinear filter most commonly used to remove impulsive noise from an image.
Furthermore, it is a more robust method than traditional linear filtering, because it
preserves sharp edges. Median filtering is a spatial filtering operation, so it uses a 2-D
mask that is applied to each pixel in the input image. Applying the mask means
centering it on a pixel, evaluating the covered pixel brightnesses and determining
which brightness value is the median. Median filtering is one kind of smoothing
technique, as is linear Gaussian filtering. All smoothing techniques are effective at
removing noise in smooth patches or smooth regions of an image, but adversely affect
edges. Often, though, at the same time as reducing the noise in an image, it is
important to preserve the edges, since edges are of critical importance to the visual
appearance of images. For small to moderate levels of (Gaussian) noise, the median
filter is demonstrably better than Gaussian blur at removing noise whilst preserving
edges for a given, fixed window size. However, its performance is not much better
than Gaussian blur for high levels of noise, whereas for speckle noise and
salt-and-pepper noise (impulsive noise) it is particularly effective. Because of this,
median filtering is very widely used in digital image processing.
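A minimal MATLAB sketch of this smoothing step is given below (Image Processing Toolbox assumed; the file name and the 3×3 window size are placeholders), with a Gaussian blur shown only for comparison of edge preservation.

    % Resize the iris image and suppress impulsive noise with a median filter.
    I = imread('casia_iris.bmp');        % placeholder input image
    if ndims(I) == 3, I = rgb2gray(I); end
    I = imresize(I, [256 256]);          % fixed working size
    Ismooth = medfilt2(I, [3 3]);        % 3x3 sliding-window median filter
    Igauss  = imgaussfilt(I, 1);         % Gaussian blur (sigma = 1) for comparison
    imshowpair(Ismooth, Igauss, 'montage');   % the median filter preserves edges better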

5.3 Segmentation

The iris region and the pupil region are segmented, and the pixels inside the iris
and pupil are converted from Cartesian coordinates to polar coordinates. The
normalization process stretches the contrast of the particular region over the whole
image and reshapes the segmented portions into a particular range, mapping the
ring-shaped iris region into a unified (polar) coordinate system. Preprocessing and
normalization are the most common measures used in all iris recognition systems.
The first stage of iris recognition is to isolate the actual iris region in a digital eye
image. Iris segmentation localizes the iris's spatial extent in the eye image by
isolating it from other structures in its vicinity, such as the sclera, pupil, eyelids, and
eyelashes. Iris normalization invokes a geometric normalization scheme to transform
the segmented iris image from Cartesian coordinates to polar coordinates. The iris
region is approximated by two circles, one for the iris/sclera boundary and another
for the iris/pupil boundary. The eyelids and eyelashes normally occlude the upper and
lower parts of the iris region; the parabolic Hough transform is used to detect the
eyelids, approximating the upper and lower eyelids with parabolic arcs.

The normalization module uses an image registration technique to transform the iris
texture from Cartesian to polar coordinates. The process, often called iris
unwrapping, yields a rectangular entity that is used for subsequent processing.
Normalization has three advantages: it accounts for variations in pupil size due to
changes in external illumination that might influence iris size; it ensures that the
irises of different individuals are mapped onto a common image domain in spite of
the variations in pupil size across subjects; and it enables iris registration during the
matching stage through a simple translation operation that can account for in-plane
eye and head rotations. Associated with each unwrapped iris is a binary mask that
separates iris pixels (labeled with a "1") from pixels that correspond to the eyelids
and eyelashes (labeled with a "0") identified during segmentation. After
normalization, photometric transformations enhance the unwrapped iris's textural
structure. The registration itself involves the following steps. Feature detection:
salient and distinctive objects in both the reference and sensed images are detected.
Feature matching: the correspondence between the features in the reference and
sensed images is established. Transform model estimation: the type and parameters
of the so-called mapping functions, aligning the sensed image with the reference
image, are estimated. Image resampling and transformation: the sensed image is
transformed by means of the mapping functions. The iris images are normalized, and
with the help of the normalized images the iris and pupil regions are identified. The
circular Hough transform is employed for the detection of circular regions in the
images. The Hough transform can be used to determine the parameters of a circle
when a number of points that fall on its perimeter are known. A circle with radius R
and center (a, b) can be described by the parametric equations x = a + R cos(θ) and
y = b + R sin(θ).

When the angle θ sweeps through the full 360 degree range, the points (x, y) trace the
perimeter of the circle. If an image contains many points, some of which fall on the
perimeters of circles, then the job of the search program is to find parameter triplets
(a, b, R) that describe each circle. The fact that the parameter space is 3D makes a
direct implementation of the Hough technique more expensive in computer memory
and time.
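The following hedged MATLAB sketch shows one way to implement the steps just described: detect the pupil and iris circles with a circular Hough transform and unwrap the ring between them into polar coordinates by bilinear interpolation. The radius ranges, sampling resolution and file name are illustrative assumptions, not the exact parameters of this project (Image Processing Toolbox assumed).

    % Sketch: circular Hough transform for pupil/iris boundaries, followed by
    % "rubber sheet" unwrapping into a rectangular polar image.
    I = imread('casia_iris.bmp');                   % placeholder eye image
    if ndims(I) == 3, I = rgb2gray(I); end
    I = im2double(I);
    [pc, pr] = imfindcircles(I, [20 60],  'ObjectPolarity', 'dark');  % pupil candidates
    [ic, ir] = imfindcircles(I, [60 120], 'ObjectPolarity', 'dark');  % iris candidates
    pc = pc(1,:); pr = pr(1); ic = ic(1,:); ir = ir(1);               % keep strongest circles
    nTheta = 256; nRad = 64;                        % polar sampling resolution
    theta  = linspace(0, 2*pi, nTheta);
    radius = linspace(0, 1, nRad)';                 % 0 = pupil boundary, 1 = iris boundary
    xp = pc(1) + pr*cos(theta);  yp = pc(2) + pr*sin(theta);          % pupil boundary points
    xi = ic(1) + ir*cos(theta);  yi = ic(2) + ir*sin(theta);          % iris boundary points
    X  = (1 - radius) * xp + radius * xi;           % nRad-by-nTheta sampling grid
    Y  = (1 - radius) * yp + radius * yi;
    unwrapped = interp2(I, X, Y, 'linear', 0);      % normalized (unwrapped) iris strip
    imshow(unwrapped, []);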
5.4 Feature Extraction

In machine learning, pattern recognition and image processing, feature extraction
starts from an initial set of measured data and builds derived values (features)
intended to be informative and non-redundant, facilitating the subsequent learning
and generalization steps, and in some cases leading to better human interpretation.
Feature extraction is related to dimensionality reduction. When the input data to an
algorithm is too large to be processed and is suspected to be redundant, it can be
transformed into a reduced set of features. Determining a subset of the initial features
is called feature selection. The selected features are expected to contain the relevant
information from the input data, so that the desired task can be performed using this
reduced representation instead of the complete initial data.

Features were extracted from the input images using the Discrete Wavelet Transform
(DWT). Each input image was decomposed by the DWT into four sub-bands obtained from
combinations of low-pass and high-pass filters: the first sub-band results from
applying two low-pass filters, the second from a low-pass filter followed by a
high-pass filter, the third from a high-pass filter followed by a low-pass filter, and
the fourth from two high-pass filters. The average value and the standard deviation of
each sub-band were then computed and used as the features for the process. The
discrete wavelet transform is a linear transformation that operates on a data vector
whose length is an integer power of two, transforming it into a numerically different
vector of the same length. It is a tool that separates data into different frequency
components and then studies each component with a resolution matched to its scale. The
DWT is computed with a cascade of filtering operations, each followed by subsampling
by a factor of two.
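A minimal MATLAB sketch of this step is given below, using the Wavelet Toolbox function dwt2 with a Haar wavelet; the choice of wavelet, the variable normIris, and the ordering of the features are assumptions made only for illustration, not necessarily those of the project code.

    % Sketch: single-level 2-D DWT and statistical features of each sub-band.
    % Assumption: 'normIris' is the normalized (unwrapped) iris image in double format.
    [cA, cH, cV, cD] = dwt2(normIris, 'haar');       % approximation and horizontal/vertical/diagonal detail
    subbands = {cA, cH, cV, cD};
    featVec = zeros(1, 8);
    for k = 1:4
        featVec(2*k-1) = mean2(subbands{k});         % average value of the sub-band
        featVec(2*k)   = std2(subbands{k});          % standard deviation of the sub-band
    end
    % 'featVec' (1-by-8) is the DWT feature vector passed on to the feature selection stage.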

The main feature of the DWT is the multiscale representation of a function: by using
wavelets, a given function can be analyzed at various levels of resolution. The DWT is
also invertible and can be orthogonal. Wavelets are effective for analyzing textures
recorded at different resolutions. This is important, for example, in NMR imaging,
where high-resolution images require long acquisition times and therefore suffer from
artifacts caused by patient movement; wavelet analysis offers a route to fast,
low-resolution medical diagnostics. The wavelet transform decomposes an image into a
set of basis functions called wavelets, which are obtained from a single prototype,
the mother wavelet, by dilation and shifting. The DWT has been established as a highly
efficient and flexible method for sub-band decomposition of signals; the 2-D DWT is
likewise a key multi-resolution operation in, for example, ECG signal processing,
where it decomposes the signal into wavelet coefficients and a scaling function.
Because signal energy concentrates in a small number of wavelet coefficients, this
characteristic is also useful for compression.
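To illustrate the multiscale aspect, the short sketch below performs a two-level decomposition with wavedec2; the two-level depth and the Haar wavelet are assumptions made only for this example.

    % Sketch: two-level 2-D wavelet decomposition of the iris texture.
    [C, S] = wavedec2(normIris, 2, 'haar');          % coefficient vector C and bookkeeping matrix S
    A2 = appcoef2(C, S, 'haar', 2);                  % coarse approximation at level 2
    [H1, V1, D1] = detcoef2('all', C, S, 1);         % detail sub-bands at level 1
    [H2, V2, D2] = detcoef2('all', C, S, 2);         % detail sub-bands at level 2
    % The mean and standard deviation of each sub-band give a multiscale feature set.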
5.5 Feature Selection

The input training set contains several iris images of each person, and the best
attribute values were selected from the obtained features. The input to the genetic
algorithm is the training set from the CASIA database, and the number of chromosomes
equals the number of training samples considered as input. The first step of the
genetic algorithm is the selection process, in which an initial fitness value is
calculated for each chromosome.

The next step is the crossover operation, in which the crossover probability is
calculated. The crossover probability represents the probability of selecting a
particular attribute.

The next step is the mutation operation, in which the mutation probability is
calculated. The mutation probability represents the probability of selecting an
attribute after it has been modified.

The final fitness value is the average of the crossover and mutation probabilities.
These steps were repeated until the difference between successive fitness values fell
below a given threshold; in our process the termination threshold is 0.001. The
attributes remaining at the end are taken as the best attributes for the proposed
process.
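The MATLAB sketch below outlines one possible realization of this selection-crossover-mutation loop over binary feature-selection masks. It follows the description above only loosely: the population size, crossover and mutation rates are illustrative assumptions, the 0.001 stopping rule is taken from the text, and fitnessFcn is a hypothetical function handle that must be supplied (for example, the Fisher-criterion score sketched further below).

    % Sketch: simple genetic search over binary feature-selection masks.
    % Assumptions: 'feats' is an N-by-D feature matrix, 'labels' an N-by-1 label vector,
    % and 'fitnessFcn(mask, feats, labels)' a user-supplied scoring function (hypothetical).
    popSize = 20; D = size(feats, 2);
    pop = rand(popSize, D) > 0.5;                         % random initial chromosomes (feature masks)
    prevBest = -Inf;
    for gen = 1:100
        fit = zeros(popSize, 1);
        for i = 1:popSize
            fit(i) = fitnessFcn(pop(i,:), feats, labels); % fitness of each chromosome
        end
        [best, idx] = max(fit);
        bestMask = pop(idx, :);                           % best feature mask found so far
        if abs(best - prevBest) < 0.001, break; end       % termination threshold from the text
        prevBest = best;
        % Selection: keep the better half of the population as parents.
        [~, order] = sort(fit, 'descend');
        parents = pop(order(1:popSize/2), :);
        % Crossover: single-point crossover between consecutive parent pairs.
        children = parents;
        for i = 1:2:size(parents,1)-1
            cut = randi(D-1);
            children(i,  :) = [parents(i,   1:cut), parents(i+1, cut+1:end)];
            children(i+1,:) = [parents(i+1, 1:cut), parents(i,   cut+1:end)];
        end
        % Mutation: flip each bit with a small probability.
        mut = rand(size(children)) < 0.02;
        children(mut) = ~children(mut);
        pop = [parents; children];                        % next generation
    end
    % 'bestMask' indicates the selected (optimized) feature subset.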

An optimization based on the Fisher criterion and genetic optimization, called FIG for
short, is presented. The Fisher criterion is applied to evaluate feature selection
results, and on this basis a genetic optimization algorithm is developed to find the
optimal feature subset from the candidate features. As demonstrated by the
experimental results, the FIG method yields more effective recognition results at
satisfactory computation cost compared with a single type of feature or the full set
of original features. Furthermore, it achieves slightly better recognition performance
and much better computational efficiency than the commonly used genetic feature
selection method based on classification accuracy. Another advantage of FIG is that it
is independent of the classifiers; it needs to be performed only once to select the
features suitable for all the considered classifiers.
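As a concrete example of such a fitness measure, the sketch below scores a candidate feature subset with the classical two-class Fisher criterion (between-class separation over within-class scatter). This is only a plausible reading of the FIG fitness, not the exact function used in the report; the function name fisherFitness and the two-class restriction are assumptions, and the function would be saved as fisherFitness.m.

    % Sketch: Fisher-criterion score of a feature subset (hypothetical fitness function).
    function score = fisherFitness(mask, feats, labels)
        X = feats(:, logical(mask));                      % keep only the selected features
        if isempty(X), score = 0; return; end
        c  = unique(labels);                              % assumes a two-class problem for simplicity
        X1 = X(labels == c(1), :);
        X2 = X(labels == c(2), :);
        num = sum((mean(X1,1) - mean(X2,1)).^2);          % between-class separation
        den = sum(var(X1,1,1)) + sum(var(X2,1,1)) + eps;  % within-class scatter (biased variance)
        score = num / den;                                % larger is better
    end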
Genetic algorithms (GAs) are adaptive heuristic search algorithms based on the
evolutionary ideas of natural selection and genetics. As such, they represent an
intelligent exploitation of random search for solving optimization problems. Although
randomized, GAs are by no means random; instead, they exploit historical information
to direct the search into regions of better performance within the search space. The
genetic algorithm is a method for solving both constrained and unconstrained
optimization problems that is based on natural selection, the process that drives
biological evolution, and it repeatedly modifies a population of individual solutions.

5.6 Matching

The matching process is based on measuring the distance between feature vectors using
the Euclidean distance. The outcome of each comparison is counted as one of the
following categories, which are used in the sketch after this list:

TP (true positive): correctly identified
TN (true negative): correctly rejected
FP (false positive): wrongly identified
FN (false negative): wrongly rejected
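The sketch below shows nearest-neighbour matching with the Euclidean distance and the standard definitions of the accuracy, sensitivity, and specificity reported later. The acceptance threshold thr and the variable names are assumptions made for illustration.

    % Sketch: Euclidean-distance matching and performance measures.
    % Assumptions: 'trainFeats' (N-by-D), 'trainIDs' (N-by-1), 'testFeat' (1-by-D),
    % and 'thr' is an acceptance threshold chosen on the training data.
    N = size(trainFeats, 1);
    d = zeros(N, 1);
    for i = 1:N
        d(i) = norm(trainFeats(i,:) - testFeat);     % Euclidean distance to each enrolled vector
    end
    [dmin, k] = min(d);
    accepted = dmin < thr;                           % accept only if the closest match is near enough
    matchID  = trainIDs(k);                          % identity of the nearest enrolled sample

    % Performance measures accumulated over many trials (TP, TN, FP, FN are counts):
    % accuracy    = (TP + TN) / (TP + TN + FP + FN);
    % sensitivity = TP / (TP + FN);
    % specificity = TN / (TN + FP);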

5.7 Flow Diagrams

5.8 System Architecture

(Block diagram: Preprocessing → Segmentation → Feature extraction → Feature selection
→ Recognition → Performance measures.)

Chapter 6
Output Images

(Output images for the first and second test eye images at each stage of the process:
resizing, smoothening, Canny edge detection, gradient edge, gamma correction,
hysteresis thresholding, circular Hough transform, and the normalized iris, followed
by the final matching result.)
Chapter 7

Conclusion

7.1 Conclusion

The proposed system recognizes the iris of the persons in the dataset based on
features extracted using the DWT; the extracted features are statistical features
computed from the wavelet sub-bands of the image. The extracted features were
optimized using a genetic algorithm, and recognition of the iris is performed using
distance metrics. The proposed system gives higher accuracy than the existing
algorithms, which shows that misclassifications are reduced to a great extent. The
input iris images were taken from the CASIA database. Basic preprocessing steps such
as resizing and noise removal were employed, with a median filter used for noise
removal. Canny edge detection is applied to the enhanced iris image, followed by the
image gradient. Hysteresis thresholding is employed for the iris and pupil regions,
and the circular Hough transform is applied to the iris images in order to identify
the iris and pupil regions. The features were extracted from the images using the DWT
process, and the extracted features were then optimized using the genetic algorithm.

Person identification is performed based on the distance calculated between the test
image features and the features of the images in the database. The performance of the
process is measured with metrics such as the accuracy, sensitivity, and specificity of
the iris recognition process. The system can be enhanced by including additional
feature extraction algorithms, such as SURF and other newer feature extraction
methods, which can describe more information about the iris image. These additional
features should help overcome the problems of a real-time implementation of the
process, and unsupervised classifiers can be used to develop the process further.

7.2 Future Suggestions

In order to increase both accuracy and robustness, a multimodal biometric system could
be used; this fusion may be a combination of iris and fingerprint biometrics. It
allows the integration of two or more types of biometric recognition and verification
systems in order to meet stringent performance requirements. Such systems are expected
to be more reliable due to the presence of multiple, independent pieces of evidence,
and they are also able to meet the strict performance requirements imposed by various
applications. A newer database from CASIA, or another database, could be used in the
tests, and a high-resolution real-time camera may also be employed. If real-time
acquisition, segmentation, and normalization are implemented, this prototype will
become a full iris recognition system.

Chapter 8

References

[1]. C. H. Daouk, L. A. El-Esber, F. D. Kammoun and M. A. Al Alaoui, "Iris
Recognition," Proceedings of the IEEE International Symposium on Signal Processing and
Information Technology, 2002, pp. 558-562.
[2]. Bhawna Chouhan and Shailja Shukla, "Iris Recognition System Using Canny Edge
Detection for Biometric Identification," International Journal of Engineering Science
and Technology, ISSN: 0975-5462, Vol. 3, No. 1, Jan. 2011.
[3]. B. Sabarigiri and T. Karthikeyan, "Acquisition of Iris Images, Iris Localization,
Normalization, and Quality Enhancement for Personal Identification," International
Journal of Emerging Trends & Technology in Computer Science, ISSN: 2278-6856, Vol. 1,
Issue 2, July-August 2012, pp. 274-275.
[4]. Hanene Guesmj et al., "Novel Iris Segmentation Method," Proceedings of the Third
International Conference on Multimedia Computing and Systems, 2012.
[5]. Yan Li, Wen Li and Yide Ma, "Accurate Iris Location Based on Region of Interest,"
Proceedings of the IEEE International Conference on Biomedical Engineering and
Biotechnology, Vol. 12, 2012, pp. 704-707.
[6]. Afsana Ahamed and Mohammed Imamul Hassan Bhuiyan, "Low Complexity Iris
Recognition Using Curvelet Transform," Proceedings of the International Conference on
Informatics, Electronics & Vision, 2012, pp. 548-553.
[7]. Sambita Dalal and Tapasmini Sahoo, "An Iris Matching Algorithm for Reliable
Person Identification Optimized Level of Decomposition," Proceedings of the
International Conference on Computing, Electronics and Electrical Technologies, 2012,
pp. 1073-1076.
[8]. Penny Khaw, "Iris Recognition Technology for Improved Authentication," SANS
Security Essentials Practical Assignment, Version 1.3, SANS Institute, 2002.
[9]. Robert W. Ives et al., "Iris Recognition Using Histogram Analysis," Proceedings
of the 38th Asilomar Conference on Signals, Systems, and Computers, 2004, pp. 562-566.
[10]. Bimi Jain, M. K. Gupta and Jyoti Bharti, "Efficient Iris Recognition Algorithm
Using Method of Moments," International Journal of Artificial Intelligence &
Applications, Vol. 3, No. 5, September 2012, pp. 93-105.
[11]. K. Bondalapati and V. K. Prasanna, "Reconfigurable Computing Systems,"
Proceedings of the IEEE, ISSN: 0018-9219, Vol. 90, Issue 7, July 2002, pp. 1201-1217.
[12]. Jang-Hee Yoo, Jong-Gook Ko, Sung-Uk Jung, Yun-Su Chung, Ki-Hyun Kim, Ki-Young
Moon and Kyoil Chung, "Design of an Embedded Multimodal Biometric System," Proceedings
of the Third International IEEE Conference on Signal-Image Technologies and
Internet-Based Systems (SITIS '07), Shanghai, Print ISBN: 978-0-7695-3122-9, 16-18
Dec. 2007, pp. 1058-1062.
[13]. D. Bariamis, D. Iakovidis, D. Maroulis et al., "An FPGA-Based Architecture for
Real Time Image Feature Extraction," Proceedings of the 17th International Conference
on Pattern Recognition (ICPR 2004), Print ISBN: 0-7695-2128-2, Vol. 1, 23-26 Aug.
2004, pp. 801-804.

