
CONTENTS

Cover Page
Certificate
Acknowledgement
Abstract

Chapter 1: INTRODUCTION
1. Introduction
1.1 Biometrics
1.2 Biometric System Performance
1.3 Biometric Technologies
1.3.1 Face Recognition
1.3.2 Fingerprints
1.3.3 Signature Recognition
1.3.4 Handwriting Recognition
1.3.5 Ear Recognition
1.3.6 Iris Recognition
1.3.7 Retina Recognition
1.3.8 Keystroke Dynamics
1.3.9 Voice Recognition
1.3.10 Gait Recognition
1.4 Motivation
1.5 Organization of the Thesis

Chapter 2: RELATED WORK
2. Related Work
2.1 Modules of a Biometric System

Chapter 3: PRE-PROCESSING
3.1 Noise Removal
3.2 Edge Detection

Chapter 4: FEATURE EXTRACTION
4.1 Finger Length
4.2 Finger Width
4.3 Perimeter

Chapter 5: MATCHING

Chapter 6: EXPERIMENTAL RESULTS

Chapter 7: CONCLUSION AND FUTURE WORK

BIBLIOGRAPHY

LIST OF FIGURES

1.1 Equal Error Rate
1.2 ROC Curve

2.1 Components of a Biometric System

3.1 Input grayscale image
3.2 Input image after binarization
3.3 The kernel to calculate the gradient along the X axis
3.4 The kernel to calculate the gradient along the Y axis
3.5 The edge-detected input image

4.1 The features extracted from the input image
4.2 Kernel for a horizontal line
4.3 Kernel for a vertical line
4.4 Kernel for a line at 45 degrees
4.5 Kernel for a line at 135 degrees
4.6 Probability matrix
4.7 Priority matrix
4.8 Mirror of the priority matrix along P

6.1 Right way of placing the finger on the scanner
6.2 Wrong way of placing the finger on the scanner
6.3 FAR-FRR curve
6.4 Image where the user is wearing rings
CHAPTER 1

1. INTRODUCTION:

With the development of ever-increasing technological systems that require authentication,
personal identification has become an absolute necessity. With decreasing personal contact
among people, the use of technical means for personal identification is increasing.
Everything from the bank ATM to the internet requires some form of password. Passwords,
however, have their own weaknesses: not only can weak passwords be easily guessed, but
even strong ones can be broken through. It is recommended that people not use the same
password for two different applications and that they change their passwords regularly. In
the modern world that would mean memorizing a large number of passwords. Even access
cards and identity cards can be easily stolen or forged, and more and more crimes related
to password and card theft are reported every day. These problems are not trivial, and new
means of authentication which are more user-friendly and less prone to duplication are
required. Biometric authentication is the ideal solution to all these requirements. Not only
is it much more user-friendly than remembering a number of passwords or carrying around
a card, but it relies on something that cannot be stolen or cracked. Biometric authentication
systems use human traits which are unique to the individual and can neither be stolen nor
duplicated. Biometric authentication is truly the future of personal identification.

1.1 BIOMETRICS:

Biometrics, which can be used to identify individuals based on their physical or
behavioral characteristics, has gained importance in today's society, where information
security is essential. Biometric features can be classified into physiological characteristics
and behavioral characteristics. The physiological characteristics generally used are the
face, iris, fingerprints, hand geometry and voice. The behavioral characteristics include
signature, handwriting, voice, keystroke pattern and gait. Not every physiological or
behavioral characteristic can be recognized as a biometric.
The qualities of a good biometric are:

Uniqueness: The trait should be as unique as possible, meaning the same feature does
not appear in any two different individuals.

Universality: The biometric trait should be present in as many different individuals as
possible.

Permanence: The trait should have little or no change with age.

Measurability: The trait should be measurable by relatively simple methods.

Collectability: The users of the biometric system should find it easy to present the biometric
for measurement.

A biometric system may provide either or both of two functions: identification and
verification. In identification, the individual presents the required biometric
characteristic and the biometric system associates an identity with that individual. In
verification (also called recognition), the person presents both the biometric
characteristic and an identity, and the system verifies whether that identity is associated
with the person's biometric characteristic. The proposed work aims to perform
verification.

1.2 BIOMETRIC SYSTEM PERFORMANCE:

The characteristic of the individual stored in the system is the basis for comparison in
the recognition and verification procedures. The characteristic that the individual provides
during these processes can be the face, a voice print, a hand print or a fingerprint. It can
never be exactly the same as the characteristic provided during registration. This may be
due to changes in the characteristic itself or in the environment: changes in the positioning
of the sensors and various noise elements make it impossible to duplicate the exact
environment of registration, and the characteristic itself may undergo changes, however
small, with time. As a result, no perfect match may be found for the individual in the
database, so the matching algorithm must return results which are near matches to the
given characteristic.
As only one result is desired, a match which is as close as possible to the original
characteristic is required, so the matching algorithm can be designed to simply return the
closest match. This, however, presents a problem: even when an individual is not registered
with the system, it will return the closest match to that individual. It may not be as near a
match as that of a registered individual, but it effectively renders the system useless, as
both registered and unregistered individuals are recognized. To prevent this, a threshold is
used: only matches which are above the threshold are considered valid, and the others are
rejected.
Even after using a threshold value to filter out false acceptances of unregistered
individuals, the system can give incorrect results. The performance of a biometric system
is measured in certain standard terms: the false acceptance rate (FAR), the false rejection
rate (FRR) and the equal error rate (EER), also called the crossover error rate (CER).
FAR is the ratio of the number of unauthorized (unregistered) users accepted by the
biometric system to the total number of identification attempts made. FRR is the ratio of
the number of authorized users rejected by the biometric system to the total number of
attempts made. The equal error rate is the point where FRR and FAR are the same.

Figure 1.1: Equal Error Rate

False acceptance poses a much more serious problem than false rejection. It is therefore
desirable that the biometric system keep the FAR as low as possible. This can be achieved
by setting a high threshold so that only very near matches are recognized and all others
are rejected; the higher the security requirement of the system, the higher the threshold
needed to maintain it. However, the FRR also depends on the threshold. As the threshold
increases, the FRR increases with it, because correct matches that fall below the threshold
due to noise or other factors will not be recognized. It is therefore desirable to maintain a
balance. Usually this balance point is the EER, where the FRR and the FAR are equal.
However, the security requirements of the system are the primary concern when deciding
the threshold value, and either the FAR or the FRR might be sacrificed for the other. For
a very high security system the threshold may be raised, while for a system where false
rejections are of more concern the threshold might be lowered.

Figure 1.2: ROC Curve

Thresholds, being system dependent, cannot be used to effectively compare different
biometric systems. The receiver operating characteristic (ROC) is used instead of
thresholds for this purpose. The ROC is a plot depicting the genuine acceptance rate along
the Y-axis and the false acceptance rate along the X-axis. Time is in some cases of crucial
importance to the performance of a biometric system. In an offline system it is not crucial,
but in the case of online systems it is important that the system works fast enough not to
cause the user unnecessary annoyance.
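Building the ROC described above amounts to sweeping the threshold over the observed scores and recording, at each threshold, the false acceptance rate (X-axis) and the genuine acceptance rate (Y-axis). A small sketch using the same hypothetical-score assumption as before:

```python
# One ROC point per candidate threshold: (FAR, GAR).
def roc_points(genuine, impostor):
    points = []
    for t in sorted(set(genuine + impostor)):
        gar = sum(s >= t for s in genuine) / len(genuine)    # genuine acceptance rate
        far = sum(s >= t for s in impostor) / len(impostor)  # false acceptance rate
        points.append((far, gar))
    return points

points = roc_points([0.9, 0.7], [0.3, 0.6])
# low thresholds accept everything (FAR = GAR = 1.0); raising the
# threshold moves the operating point toward the FAR = 0 axis
```

Because every point is a (FAR, GAR) pair rather than a raw threshold, curves from different systems can be compared directly, which is exactly why the ROC is preferred over thresholds here.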

1.3 BIOMETRIC TECHNOLOGIES:
The human body possesses several physiological characteristics that can serve as
biometric features, and a human being also develops several unique behavioral traits
which can serve as biometric features. The physiological characteristics generally used
are the face, iris, fingerprints, palm prints, hand geometry and voice.
The face is the biometric primarily used by human beings to recognize each other, which
makes it an obvious choice for a biometric. The difficulty, however, lies in the fact that
the biometric system has to rival the complexity of the human brain. Fingerprints have
also been used for quite some time and have established their value as a biometric; the
challenge now is to develop more advanced systems which can process partial prints and
speed up the matching process. The behavioral characteristics include signature,
handwriting, voice, keystroke pattern and gait. Signatures and handwriting have been
used extensively as biometrics, but only as offline biometrics, i.e. no data is collected
during the process of signing or writing. Automated biometric systems that include data
obtained during these processes vastly improve the accuracy of the system.

1.3.1 FACE RECOGNITION:

Face recognition is a biometric technology where an individual is recognized by one or
more images of the person's face. It is currently an area of interest because the person can
be completely oblivious to the biometric system being at work. Due to the increased threat
of terrorist activities, many airports around the globe employ these systems to keep tabs
on people entering and leaving the country without causing any inconvenience to the other
passengers. Though it is a robust biometric, changes in the facial features due to age, or
even makeup and facial expressions, may result in incorrect decisions. There are two major
categories of face recognition system. The recognition of passengers at an airport without
their knowledge or consent is an example of trying to identify an individual from a group
of individuals in a dynamic environment. The other type operates in a controlled
environment where the distance between the person and the sensor is fixed. The
performance of systems in a controlled environment is obviously better than that of those
in a more dynamic one.

1.3.2 FINGERPRINTS:

Fingerprints as biometrics have already gained widespread acceptance all over the world.
Almost every law enforcement agency in the world utilizes fingerprints as an accurate and
effective means of identification. Fingerprints have primarily been used as a method of
verification, but now, with law enforcement agencies building up electronic fingerprint
databases, they have also been very effective as an identification tool. Most fingerprint-
based biometric systems utilize the minutiae of the friction ridges in the curves of the
fingerprints, the global appearance, or both.

1.3.3 SIGNATURE RECOGNITION:

Signatures have been used for authentication for a long time in government, legal and
commercial transactions. The signature is a behavioral biometric. Automated systems can
be both offline and online. Offline systems have to work using only the characteristics of
the signature. Online systems, on the other hand, can utilize features obtained while the
sample is provided, such as the speed of writing, the total time for the signature, the pen
pressure, the pen inclination and the number of pen-ups and pen-downs, besides the
various characteristics of the signature itself.

1.3.4 HANDWRITING RECOGNITION:

Handwriting is a behavioral biometric. It finds applications in forensic document
examination for user recognition. A lot of research has been conducted to ascertain the
individuality of a person's handwriting, and several handwriting recognition systems,
both online and offline, have been proposed. Besides the shape and size of letters, pen
strokes, loops and crossed lines can be used to extract features from a handwritten
document.

1.3.5 EAR RECOGNITION:

This is a relatively new biometric trait. Although it may seem so, the human ear does not
have a completely random structure; it is unique enough for people to be identified using
features collected from their ears. Moreover, unlike the face, it does not suffer from
changes in expression or the effects of makeup. Hair present on the ear, however, may
cause problems for the biometric system, and a change in the brightness of the surrounding
environment will adversely affect it. For cases when the ear is covered by hair or clothing,
it has been proposed that a thermogram of the ear be used, as the temperature of the ear is
different from that of the hair or the clothing.

1.3.6 IRIS RECOGNITION:

The iris, the colored area surrounding the pupil, holds considerable detail and is
considered a very accurate biometric trait. Among all the biometric traits it is perhaps the
most promising, as it has so far been found to have more than 200 unique features,
compared to 50 to 60 for other biometrics. The probability of two individuals having the
same iris has been calculated to be infinitesimal. It is generally used in places where a very
high security biometric system is required, mostly by government agencies. The system
requires user cooperation and a controlled environment for operation.

1.3.7 RETINA RECOGNITION:


The retina is the layer of blood vessels at the back of the eye. The probability of two
individuals having the same retinal blood vessel pattern has been calculated to be
infinitesimal; even the eyes of identical twins are found to have different retinal patterns.
It is therefore utilized in situations where a high security biometric system is required.
The system requires user cooperation and a controlled environment for operation: the user
must position the eye at a fixed distance from the camera, look directly into the lens and
remain perfectly still while the retina is being scanned. The retinal pattern remains stable
for most human beings during their lifetime, though a few diseases like diabetes and high
blood pressure may affect the biometric.

1.3.8 KEYSTROKE DYNAMICS:

This is a behavioral biometric and can be used in combination with passwords. It analyzes
the patterns in the way the user types, such as the time required to find the keys, the total
speed, etc. Besides being considerably cheaper to implement than other biometrics, it is
also much more unobtrusive, as typing on a keyboard is much easier on the user than the
data collection methods of most other biometrics. It is, however, not very robust, as it is
susceptible to the user's mood and fatigue. The keyboard being used is also a major factor,
as a user will have different dynamics on keyboards with different layouts, and the actual
dynamics may change over time. One proposed system uses a text length of 683 characters
to achieve an FRR of 4% and an FAR of less than 0.01%. While the results are very
encouraging, the text length is prohibitive, so the system cannot be used easily.

1.3.9 VOICE RECOGNITION:

This is one of the biometrics most convenient for users. It works by analyzing the
waveform patterns and the air pressure patterns produced by an individual's speech. Voice
recognition systems may have the user read a predefined sequence of words and numbers,
or may request random input until the system can decide whether to authenticate or reject
the user. The systems developed so far suffer from poor accuracy, as an individual's voice
can vary with the person's condition. The systems have also not been very effective against
mimicry.

1.3.10 GAIT RECOGNITION:

This is a recent development in the biometric field. It works by analyzing the gait, the
walking style of an individual. The symmetry of walking is believed to possess the
qualities of a biometric. Data collection is not as convenient as for some of the other
biometrics like face and voice. Changes in gait due to the person's mood or an injury are
among the major concerns for this biometric: while biometrics like fingerprints and hand
prints are invariant to the individual's mood, gait is one of the biometrics that can be
hugely affected by it. Promising recognition rates of up to 95% have been achieved using
gait as the biometric.

1.4 MOTIVATION:

Hand geometry based systems can be useful in low to medium security applications.
Combined with fingerprints and palm prints in a multimodal system, hand geometry can
prove very useful in high security applications. The advantage of combining these features
lies in the fact that while capturing the data for hand geometry, the data for fingerprints
and palm prints can be collected simultaneously. There is no extra inconvenience to the
user, while the accuracy of the system may be greatly increased by the addition of several
more features. Systems currently available for high security applications which use the
fingerprints of all fingers do not consider features of the palm print or the geometry of the
finger. The systems available for palm prints alone also disregard the geometry of the
finger, though its features are available in the data collected for the palm print. Hand
geometry can thus prove very useful in multimodal systems. Most currently available
systems for hand geometry use pegs to fix the placement of the finger on the scanner. The
proposed system aims to eliminate this constraint by allowing the user to vary the
positioning of the finger on the scanner. This will make the integration of the system into
currently available systems for fingerprints and palm prints easy, as they do not use any
pegs.
1.5 ORGANIZATION OF THE THESIS:

The second chapter presents an overview of the research done in the field of hand
geometry. Various systems developed for hand geometry analysis are studied for their
advantages and deficiencies, and the different approaches used to develop these systems
are discussed. The third chapter presents the first phase of the proposed system, pre-
processing. In this phase the image is obtained and prepared for feature extraction.
The noise removal algorithm applied to eliminate the noise which creeps into the image
is discussed. Then an edge detection algorithm is applied to obtain the finger boundary
from which the geometric features of the fingers can be extracted.
The fourth chapter gives the details of the feature extraction process. It lists the different
features and shows how each of them is extracted from the image obtained after edge
detection. Primarily, the extraction of the lengths of the fingers, the widths of the fingers
and the perimeter of the entire hand is discussed.
The next chapter discusses how the obtained features are matched to the reference image
in order to obtain a match. An algorithm for calculating the degree of match is presented,
along with a thresholding scheme to accept or reject the user as authorized.
The sixth chapter presents the results obtained after testing the proposed system on a
database of images. The FAR and FRR are calculated from the results.
The final chapter presents the conclusions from the work, along with recommendations
for future additions that may increase the performance of the current system.

CHAPTER 2

RELATED WORK:

Various approaches have been proposed for hand geometry recognition. The proposed
approaches differ mostly in the way the features of the hand are extracted and
manipulated.
In [4], B-spline curves are used to represent the fingers. This enables the removal of the
fixed pegs utilized in most other systems. Identification is done by utilizing the differences
between the curves generated by various hand geometries, with the curves serving as a
signature for the individual. Only the fingers are represented by the curves; the thumb is
not part of the signature. On a database of 6 images each from 20 persons, the system has
a recognition rate of 97%; the error rate in verification on the same database is 5%.
Due to their interpolation property, implicit polynomial 2D curves and 3D surfaces have
been utilized for analyzing the hand print. This method works by finding an implicit
polynomial function to fit the hand print. To keep the change in the coefficients of the
function to a minimum under slight variations in the data, new methods such as 3L fitting
and Fourier fitting are tried instead of traditional least squares fitting. The identification
rate is found to be 95% and the verification rate 99%.
Efforts have also been made to utilize the vein patterns in the hand as a biometric. This is
done by taking a thermal image of the hand and obtaining the vein pattern from it. Using
the heat conduction law, several features can be extracted from each feature point of the
vein pattern. The FAR is found to be 3.5% while the FRR is 1.5%.
A palm print recognition method has been proposed based on the eigenpalm technique.
In this method the original palm prints are transformed into a set of features which are
the eigenvectors of the training set. A Euclidean distance classifier is used after extracting
the eigenvectors from a new palm print for recognition. On a set of about 200 people the
system works with an FRR of about 1% and an FAR of around 0.03%.
Bimodal systems, combining two biometrics such as hand geometry and palm prints, have
been realized as an effective way to improve the performance of systems using hand
geometry alone. A bimodal biometric system using fusion of the shape and texture of the
hand has been proposed. Palm print authentication is done using the discrete cosine
transform, and new hand shape features are also proposed. A score-level fusion of the
hand shape features and the palm print is done using the product rule. Using either the
hand shape or the palm print alone produces a high FRR and FAR, but when the two are
combined into a bimodal system both the FAR and the FRR are considerably reduced.
The combination is especially effective when the hand shapes of two different individuals
are very similar, as in such cases the palm prints increase the performance remarkably.
On a database of 100 users the FRR is found to be 0.6% and the FAR 0.43%.
Geometric classifiers have also been utilized in hand recognition. As in the proposed
system, a document scanner is used to collect hand data, and very few restrictions are
imposed on the positioning of the hand on the scanner. A total of 30 different features are
obtained from a hand. For each individual, 3 to 5 of the person's images are used as the
training set. In the 30-dimensional feature space a bounding box is found for each of these
training sets, and the distance of the query image to these bounding boxes is used as the
measure of similarity. The threshold is determined by experimentation on the database.
On a database of 714 images from 70 different people, an FRR of 6% is obtained while
the FAR is 1%.
3-D hand modeling and gesture recognition techniques have also been studied. Although
the hand can be modeled in aspects other than shape, such as dynamics and kinematic
structure, these are not discussed here, because the proposed system uses 2-D hand shape
modeling. A geometric hand shape model can be approximated using splines, as the hand
consists of complicated geometrical surfaces. The model can be made very accurate, but
this increases the complexity, as the number of parameters and control points grows.

2.1 MODULES OF A BIOMETRIC SYSTEM:

A biometric system comprises of three important modules.Preprocessing, Feature Extraction


and Matching. When the input data isfed into the biometric system it may be unsuitable for
feature extraction.
Input image → Pre-Processing → Feature Extraction → Matching → Result: Pass/Fail
Figure 2.1: Components of a Biometric System

This is due to the several noise elements which may creep into the data. Noise may be
the result of atmospheric conditions or the surroundings; it may also be introduced by the
equipment used to collect the data, and the users may introduce some noise inadvertently.
The primary job of the pre-processing module is to clean up the introduced noise so that
the features can be extracted correctly. The proposed system first runs the input image
through a noise removal algorithm for this purpose. The job of the pre-processing module
is to prepare the data for feature extraction.
Besides noise removal this may entail other work on the input data. In the proposed
system the data is entered in JPEG format, but a monochromatic image is required for
feature extraction. Furthermore, only a single bit is required to represent each pixel,
because the only two colors any pixel can take are white and black. In this work a pixel
with value one represents a white pixel and is referred to as a lit pixel; similarly, a pixel
with value zero represents a black pixel and is referred to as a dark pixel. The conversion
from JPEG format to a matrix containing a one-bit representation of each pixel is also
part of pre-processing. The monochromatic representation achieved using the one-bit
matrix is still unsuitable for feature extraction: an image which contains only the
boundaries of the fingers is required. The pre-processing module converts the image to
one containing only the boundaries using an edge detection algorithm. The output of the
pre-processing module is a matrix with rows and columns equal to the number of rows
and columns of pixels in the image respectively. Each element of this matrix is either a
one or a zero, representing a lit or a dark pixel respectively.
The feature extraction module is the most important module in a biometric system. The
function of this module is to extract and store features from the input data. In the proposed
system the lengths and widths of the fingers and the perimeter of the entire hand are
extracted from the input image. For each finger one measurement of length is taken, along
with two measurements of width at different positions. Finally, an approach to measure
the perimeter in one pass is discussed. The output of the feature extraction module is the
measure of these features.
The last module of the biometric system is matching. Here the features extracted in the
previous stage are matched against the features of that individual previously stored in the
database. The proposed system produces a match score based on a comparison scheme.
The match score represents the closeness of the current image to the one present in the
database; a higher score represents a higher closeness of the images. A threshold, a value
lying in the range of the match score, is decided based on experimentation. For any image,
if the match score is less than the threshold, the image is rejected as a match to the
specified database image.

CHAPTER 3

PREPROCESSING

The images are captured using a flatbed scanner. The input image is a grayscale image of
the right hand without any deformity, stored in JPEG format and shown in Figure 3.1. In
case of a deformity such as a missing finger, the system expresses its inability to process
the image. It is also critical that the fingers are separated from each other; however, it is
not required to stretch the fingers as far apart as possible. The hand should be placed in a
relaxed state with the fingers separated from each other. Since features such as length and
width, which depend on the image size and resolution, are being used, it is critical to have
a uniform image size.
The red, green and blue (RGB) values of each pixel are extracted. Since a monochromatic
image is required for the proposed system, a threshold is determined: all pixels with RGB
values above the threshold are considered white pixels, and all pixels below the threshold
are considered black pixels. Initially the threshold is set very low, very close to the RGB
value of a black pixel in the image. This produces an image with a completely white hand
on a black background, as shown in Figure 3.2. Features such as finger lengths, perimeter
and area can then be more easily extracted.

Fig 3.1: Input grayscale image        Fig 3.2: Input image after binarization

However, setting the threshold very low results in a lot of noise in the image. A good
threshold is therefore determined, and then noise removal algorithms are applied to the
image.
3.1 NOISE REMOVAL:
Ideally the scanned input image should contain no noise. However, dust and dirt on both
the hand and the scanner bed, even in minute quantities, may produce differences between
the actual image scanned and the hand print. These variations may also be the result of a
host of other factors, including the settings of the scanner, lighting effects, humidity in the
atmosphere, etc. Unless removed, these variations adversely affect the performance of the
system: the larger the degree of variation or noise, the less accurate the system. So before
extracting features from the image, noise is reduced as much as possible. However, most
noise removal algorithms also affect the actual features, so a balanced approach is taken
such that the features are undamaged after noise removal.
The noise removal algorithm in this case utilizes the fact that the required hand print is
present in only a portion of the total image, i.e. in the lower central part; the rest is simply
black space with some noise. It works by trying to find an entire row consisting only of
black pixels. This can be done using the binary search algorithm.
The algorithm starts from the center of the image and works its way upwards. Since binary
search is used, the number of rows searched is very low. When such a row is found, all
the pixels above that row are set to black, eliminating all the noise above this row. Using
the binary search algorithm again, a column is identified to the left of which nothing is
required for feature extraction. This is any column between the left margin and the center
of the image which has no lit pixels. All the pixels to the left of this column are set to
black. Similarly, a column to the right of which no features are to be extracted is
determined, and all the pixels to the right of this column are set to black. This reduces the
noise present in the image considerably without affecting the actual hand print.
The remaining noise lies between the fingers and inside the hand perimeter. A convolution
filter is applied which checks whether a white pixel is surrounded on all sides by black
pixels. If that is the case, the white pixel is considered to be noise and is converted to a
black pixel. The size of the convolution filter is variable: first the filter uses a 3x3 template,
then a 5x5 and finally a 7x7. This progressively removes larger and larger noise elements
from the image.
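The variable-size filter described above can be sketched as follows. Border handling is simplified here, and the neighbourhood test (a lit pixel with no other lit pixel anywhere in its k x k window is noise) is one interpretation of the "surrounded on all sides by black pixels" rule:

```python
# Clear isolated lit pixels: run with k = 3, then 5, then 7 to remove
# progressively larger specks, as the text describes.
def remove_isolated(img, k):
    h, w, r = len(img), len(img[0]), k // 2
    out = [row[:] for row in img]          # copy so tests are done on the original
    for y in range(r, h - r):
        for x in range(r, w - r):
            if img[y][x] == 1:
                # count lit neighbours in the k x k window, excluding the pixel itself
                neighbours = sum(img[y + dy][x + dx]
                                 for dy in range(-r, r + 1)
                                 for dx in range(-r, r + 1)) - 1
                if neighbours == 0:
                    out[y][x] = 0          # a lone speck is treated as noise
    return out

speck = [[0] * 5 for _ in range(5)]
speck[2][2] = 1                            # a single isolated lit pixel
cleaned = remove_isolated(speck, 3)
# → cleaned[2][2] == 0: the isolated speck is removed
```

Connected lit regions survive because every pixel in them has at least one lit neighbour, so the hand print itself is left intact.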

3.2 EDGE DETECTION:

The image obtained after the elimination of noise contains regions of black and white
pixels. In order to extract the geometric features of the hand it is required that the image
contain only edges. Consequently, the regions of white space must be converted to an
image containing only the boundary of the white pixels. This is achieved using an edge
detection algorithm. The algorithm converts all pixels, excluding those at the boundary of
black and white regions, to black pixels. The algorithm also has to ensure that the thickness
of this boundary is as low as possible, because a thick boundary will adversely affect the
accuracy of the feature detection algorithm.
It is critical for any edge detection algorithm not to miss any edges. It is also important
that no non-edges are recognized as edges. These two criteria define the error rate of the
edge detection filter. Besides a low error rate, there are two other qualities that a good
edge detection filter should possess: the distance between the actual edge and the edge
located by the filter should be as low as possible, and the filter should not give multiple
responses to a single edge. Canny's edge detection algorithm possesses these two qualities
in addition to having a low error rate.

Figure 3.3: The kernel to calculate the gradient along X axis

Figure 3.4: The kernel to calculate the gradient along Y axis

The Canny filter utilizes the convolution operation. A convolution operation is performed by sliding the convolution kernel over the input image. Starting at the top left corner, the kernel is moved till the end of the row, and this process is repeated for all the rows of the image. The kernel used is a 3×3 matrix. The output at any pixel location is the sum of the products of the values of each cell of the kernel with the values of the underlying pixels. Thus each movement of the kernel provides one pixel of the output image.
Producing an output image of a size greater than the input image leads to complications. This happens when the convolution kernel slides to a position where it no longer fits entirely within the image. To prevent this, the kernel is restricted to slide only to the point where it is completely within the image and no further. This, however, leaves the points at the edges unprocessed. To process them the image is enlarged at the beginning by inventing new input values for the points where the kernel slides off the image. These points are located at the right and at the bottom of the image, and the invented pixels are all assigned the value zero (black). Padding is not normally a preferred method as it distorts the output image, but in the case of a handprint it may even provide an added advantage: the pixels at the right already have the value zero (black), so adding another black column does not matter.
The kernels used in the edge detection algorithm are shown in Figures 3.3 and 3.4. These kernels calculate the gradients along the X and Y axes. Once the gradients are calculated, the direction of the edge is determined by taking the inverse tangent of the ratio of gradx to grady. The inverse tangent can be anything between 0 and 180 degrees. Since the next pixel composing the edge has to be at 0, 45, 90, 135 or 180 degrees from the current pixel, as these are the only possible traversals from any pixel, the angle is rounded off to one of these values.
Algorithm:
Step1: Determine gradx and grady, the values returned by the kernels.
Step2: Determine the angle of the edge, theta = tan⁻¹(gradx/grady).
Step3: Approximate theta to one of the values 0, 45, 90, 135 or 180.
Step4: Traverse along the edge in the direction of the approximated theta and set to 0 any pixel which is not along theta.
The approximation made in Step 3 is rather large. This is because a pixel has only 8 surrounding pixels and the edge has to proceed to one of these angles. A region encompassing 45 degrees is formed for each of the four angles 0, 45, 90 and 135. The theta value lies in one of these regions and is approximated to the angle in whose region it lies.
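Steps 2 and 3 can be sketched as follows. The tan⁻¹(gradx/grady) ratio is taken as the text states it (many Canny descriptions use grady/gradx instead); values are folded into [0, 180) and rounded to the nearest of the four traversal directions, with 180 treated the same as 0.

```python
import math

def edge_direction(gradx, grady):
    """Quantize the edge angle to 0, 45, 90 or 135 degrees."""
    if grady == 0:
        theta = 90.0                    # guard the division by zero
    else:
        theta = math.degrees(math.atan(gradx / grady))
    if theta < 0:
        theta += 180.0                  # bring the angle into [0, 180)
    # round to the nearest multiple of 45; 180 wraps around to 0
    return round(theta / 45.0) * 45 % 180
```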

Figure 3.5: The Edge Detected Input Image


CHAPTER 4

FEATURE EXTRACTION:

There are several features that can be extracted from the geometry of the hand. Each finger has three major lines running perpendicular to the length of the finger. The first feature that can be extracted is the length of a finger, defined as the distance between the tip of the finger and the third and bottommost line on the finger. The second major feature is the width of the finger. One or more measurements can be taken for the width at varying points along the finger. The length of the lines on the finger can also be used as a measure of finger width. Since the fingers may not have uniform width, usually two or more measurements are taken for each finger at different points. The thumb has only one clear line, which bisects it; the definition of the length of the thumb is thus a little vague and varies with the system used. The diameter of the largest circle that can be inscribed inside the hand and between the finger lines is also sometimes used as a feature. The perimeter of the entire hand and of individual fingers can also be used as features.

Figure 4.1: The Features Extracted from the Input Image

The proposed system extracts for each finger one measurement of length and two measurements of width. The thumb is not included in the feature extraction process. The perimeter of the entire hand is also measured. These constitute the features based on which the proposed system authenticates users. The most important geometric part of the hand for feature extraction is the fingers. For each finger one measurement of length and two measurements of width are taken, making a total of 12 features for the four fingers. Including the perimeter brings the total number of features to 13.
In each case the tip of the finger is identified first. Each finger is treated as two parallel lines with two further lines running almost perpendicular to the first two. This simplifies the problem into a line detection problem. The length of each of these lines is extracted. The length of the finger is the length of either of the longer lines, while the lengths of the smaller lines give the width. The features extracted from the hand are all shown in Figure 4.1.

4.1 FINGER LENGTH:

The first step is to determine the top position of each finger, starting from the little finger and moving up to the index finger. Since the image is of the right hand, the little finger is the leftmost. The algorithm that determines the tip of each subsequent finger starts by utilizing the previous tip found. Since the little finger is the first, the algorithm starts from (0, 0), the top left corner of the image. It then finds the first lit pixel traversing column-wise. This lit pixel is somewhere along the left boundary of the finger. The algorithm then traverses along lit pixels so as to reach the tip of the finger. During this traversal the value of the y co-ordinate constantly decreases while the value of the x co-ordinate may increase or decrease. The algorithm halts when the value of the y co-ordinate can no longer decrease, and assigns that point as the tip of the little finger (x1, y1). The system then finds the bottom of the finger (x2, y2) using a line detection algorithm. The length of the finger is calculated as the distance between the top and bottom points.
The algorithm for line detection uses a convolution based scheme similar to the one utilized in the Canny filter [6] used for edge detection. The convolution method is useful only for lines which are very thin, but since that is the case here it suits the needs of the system perfectly. Four separate convolution kernels are used, each to detect a line of unit pixel width at directions of 0, 45, 90 and 135 degrees. The kernels are shown in Figures 4.2 to 4.5.

Figure 4.2: Kernel for a Horizontal line

Figure 4.3: Kernel for a vertical line


The kernels are tuned to produce a high positive response for dark lines against a light background, which is the case in the system, as the edge detection algorithm leaves the hand outlines dark and the area inside and outside of the fingers light.

Figure 4.4: Kernel for line at 45 degrees

Figure 4.5: Kernel for line at 135 degrees

A light line against a dark background instead produces a high negative response. In case both types of lines are to be recognized, the absolute value returned by the kernel can be used.
The algorithm for determining the length of a given finger is given below:
Step1: Determine the tip of the finger (x1, y1) as described earlier.
Step2: Apply the line detection kernels at the current pixel and obtain the responses.
Step3: Pick the kernel with the highest response.
Step4: Move to the pixel indicated by the kernel with the highest response, subject to the condition that the value of y should always increase, as the length is being measured from top to bottom.
Step5: If no kernel shows a positive response for which the next y value will increase, then mark the point (x2, y2) and move to Step 7.
Step6: Repeat Step 2 to Step 5.
Step7: Obtain the length of the line as the distance between points (x1, y1) and (x2, y2).
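Step 7 is a plain Euclidean distance; a one-line sketch:

```python
import math

def finger_length(tip, bottom):
    """Step 7: length of the finger as the distance between the tip
    (x1, y1) and the bottom point (x2, y2)."""
    (x1, y1), (x2, y2) = tip, bottom
    return math.hypot(x2 - x1, y2 - y1)
```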
The same function can be applied to the little, ring and middle fingers, but the index finger has to be treated differently. This is because the line being measured is one of the two boundaries of the finger. Starting from the little finger it is the boundary closest to the next finger; for the index finger, however, the boundary closest to the previous finger is taken. This is done because if the valleys are not used as stop points, then the little and the index fingers may show much larger lengths than actual, the line being considered straight enough when it continues down the hand. So where the right boundary of the little, ring and middle fingers is considered while taking the length measurements, the left boundary of the index finger has to be considered.

The tip of the ring finger always lies above the tip of the little finger, so the algorithm searches only further away from the x co-ordinate of the top of the little finger and only up to the y co-ordinate of the top of the little finger. The middle finger and the ring finger have the same relation, as the middle finger is the longest of them all. The index finger, however, is shorter than the middle finger and usually longer than the little finger. Thus the search is done for x greater than that of the middle finger and y less than that of the little finger. If no tip is found, it is assumed that the little finger is longer than the index finger and the search is made below the tip of the little finger. Once the tips of the fingers are found they are passed on to the function which calculates the length of the fingers.

4.2 FINGER WIDTH:

For the finger widths two fixed points are found on the finger boundary and the width is taken at these fixed points. To find the two fixed points, the boundary is treated as a line and the two points which divide the line into three equal parts are found. The parameters of the required line are found during the calculation of the length of the finger: while the algorithm is traversing the pixels to calculate the length, enough information can be gathered from them to interpolate a straight line along these pixels. The least squares method is used for the interpolation. All these values are determined in only one pass through the finger, which may improve the system performance considerably.
From the algorithm for the determination of finger length, the values of the tip of the finger (x1, y1) and the bottom (x2, y2) are obtained. By dropping perpendicular lines from these points to the line obtained using the least squares method, the starting and ending points of the line are determined. With the starting and ending points known, the line can be divided into 3 equal parts, generating two more points (x3, y3) and (x4, y4). A line perpendicular to the interpolated line and starting at (x3, y3) is drawn towards the other boundary of the finger. The width is considered to be the distance between the starting point (x3, y3) and the point where the perpendicular meets the other boundary of the finger.
To determine the point where the perpendicular meets the other boundary of the finger, the algorithm traverses along a line parallel to the X axis using the point (x3, y3) as the starting point. The algorithm traverses till it encounters a lit pixel, which lies on the other boundary of the finger. The algorithm then attempts to find the pixel closest to the perpendicular line by considering all lit pixels within a certain distance from the current pixel, namely all pixels at a distance of two pixels or less. The algorithm iterates till the pixel being considered is at the minimum distance from the perpendicular line.
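A sketch of the least-squares construction: the boundary pixels collected during the length traversal are fitted with a straight line, and the fitted segment is divided into three equal parts to obtain the two width anchor points (x3, y3) and (x4, y4). `np.polyfit` here stands in for whatever least-squares routine the original system used.

```python
import numpy as np

def width_anchor_points(xs, ys):
    """Fit y = slope*x + intercept through the boundary pixels by least
    squares, then return the two points that divide the fitted segment
    into three equal parts."""
    slope, intercept = np.polyfit(xs, ys, 1)   # least-squares line fit
    xa, xb = min(xs), max(xs)
    x3 = xa + (xb - xa) / 3.0
    x4 = xa + 2.0 * (xb - xa) / 3.0
    return (x3, slope * x3 + intercept), (x4, slope * x4 + intercept)
```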
The distance between this pixel and (x3, y3) gives the width of the finger. The algorithm for determining the width of a given finger is shown below:
Step1: Draw a line perpendicular to the interpolated line starting at (x3, y3).
Step2: Traverse to the right parallel to the x axis until a lit pixel is encountered.
Step3: Take a 5×5 pixel section of the image with the pixel encountered in Step 2 as the center.
Step4: For each lit pixel calculate the distance from the perpendicular line.
Step5: The pixel for which the distance to the line is minimum is the new center for the 5×5 section.
Step6: Repeat Steps 4 and 5 till the minimum stops changing.
Step7: Obtain the finger width as the distance between the pixel at the center and (x3, y3).
This algorithm is repeated for (x4, y4), the other point on the finger at which the width is to be calculated. For the little, middle and ring fingers the algorithm traverses to the right in Step 2, because for these fingers the left boundary is considered while calculating the length. The case for the index finger is the reverse: the right boundary is considered while calculating the length, so the traversal in Step 2 is to the left. The rest of the algorithm remains unchanged for the index finger.
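Step 2 of the width algorithm can be sketched as below. The iterative 5×5 refinement toward the true perpendicular (Steps 3 to 6) is omitted for brevity, so this simplified version measures straight across, parallel to the x axis.

```python
def width_at(img, x3, y3, direction=1):
    """Scan from (x3, y3) parallel to the x axis until a lit pixel on
    the opposite boundary is met; return the distance covered.

    direction is +1 (rightwards) for the little, ring and middle
    fingers and -1 (leftwards) for the index finger, as the text
    explains.  img is a list of rows with 1 = lit, 0 = black.
    """
    x = x3 + direction
    while 0 <= x < len(img[0]):
        if img[y3][x] == 1:
            return abs(x - x3)
        x += direction
    return None                    # no opposite boundary found
```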

4.3 PERIMETER:

The perimeter of the hand is another significant feature. To determine the perimeter one has to move along the outer boundary of the hand, forming a closed loop; the length of the loop is the perimeter. The lines of the fingers pose a complication. The problem is to find the loop with the largest perimeter, as the lines on the fingers together with the outer boundary may form several different loops; the loop with the largest perimeter is the actual perimeter of the handprint. The number of loops rises with the number of inner lines considered. An algorithm which takes the path with the chance of returning the maximum perimeter is proposed here. While traversing to determine the perimeter, complications arise only at a fork, a place where more than one choice is available. This happens when two or more lines meet at a point; the boundary is here also considered a line. On a hand a fork would be between a line on the outer boundary and one of the inner lines. A fork may also occur between inner lines only, without the boundary. If however the algorithm begins from the boundary and always makes the correct choice at a fork, it will never encounter a fork where only inner lines meet. Utilizing the fact that the basic structure of the hand is constant, the probability of making the correct choice when encountering a fork is made unity. Thus only one pass through the entire outer boundary is sufficient to find the perimeter, since no detours are taken.

Figure 4.6: Probability Matrix

Since at any point a pixel is surrounded by eight pixels, the algorithm can pick any one of these eight as the next pixel. But one of these eight pixels is the one from which this pixel was reached; that pixel is eliminated from consideration, as moving back there would make the algorithm loop indefinitely. This leaves seven pixels from which to pick the next destination.

Figure 4.7: Priority Matrix

This is shown in Figure 4.6, where 'P' is the current position and '0/1' marks the positions that the algorithm can move to next; '0/1' is used because any of these positions can be either 0 or 1. The next position that the algorithm considers can only be a position containing 1. There must be a minimum of two such locations: the one from which position P was reached, i.e. the previous position of the algorithm, and the next position of P. In case there are only two 1's among the eight possibilities there is no conflict and the algorithm proceeds to the next position of P. If however more than two 1's exist among the eight possibilities, a conflict between two or more positions arises and the algorithm has to pick a single position as the next position of P.
If the traversal starts from the left boundary of the hand, the leftmost pixel is always chosen while traversing the perimeter; if it starts from the right boundary, the rightmost pixel is always selected. The proposed system starts from the left boundary and consequently chooses the leftmost pixel at each juncture where a choice is to be made. This choice is made by assigning priorities to each position as shown in Figure 4.7, where one depicts the highest priority and eight the lowest. If there is any conflict, the algorithm simply chooses the position with the highest priority. The priorities are assigned in such a way that they always provide the path which leads to the maximum perimeter.
The priority system works fine while the algorithm traces upwards. However, when the tip of a finger is reached the algorithm must trace downwards, toward the valley between two fingers. Here the priority matrix has to be modified to produce the correct path. This is exactly like turning around while walking on a road: after turning 180 degrees, what was initially on the left hand side is now on the right, and vice versa. The algorithm however has to determine when the turn is made so that the matrix can be changed accordingly. Whenever the algorithm is forced to choose a pixel with a priority greater than or equal to six, a turning point is recognized. The matrix is then reversed so as to still choose the leftmost pixel. This is done by taking a mirror image of the priority matrix along the central point P; the mirror of the original matrix is shown in Figure 4.8. This matrix is used until another turning point is reached, at which point the algorithm reverts to the original priority matrix.

Figure 4.8: Mirror of the Priority Matrix along P

Algorithm for determining the perimeter:

Step1: Starting from the left bottom, find the lower boundary of the left side of the hand (l).
Step2: Use the priority matrix to determine the next pixel.
Step3: Save the priority value used as value.
Step4: Set the value of the current pixel to 0 and move to the next pixel chosen by the matrix.
Step5: If value >= 6 then switch to the mirror of the priority matrix.
Step6: Repeat Step 2 to Step 5 till the rightmost end is reached.
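The priority-matrix bookkeeping can be sketched as follows. Figure 4.7's concrete ordering is not reproduced here, so the neighbour order below is hypothetical; the essential operations are picking the highest-priority lit neighbour, flagging a turning point when the chosen priority is six or more, and mirroring the matrix by reflecting every offset through the centre point P.

```python
# Hypothetical priority ordering of the eight neighbours of P,
# highest priority first (stand-in for Figure 4.7).
PRIORITY = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]

def mirror(priorities):
    """Mirror the priority matrix along the central point P:
    each offset (dy, dx) reflects to (-dy, -dx)."""
    return [(-dy, -dx) for dy, dx in priorities]

def next_pixel(img, y, x, priorities):
    """Choose the highest-priority lit neighbour of (y, x).

    Returns the chosen position and a flag telling whether a priority
    of six or more (1-based rank) was used, i.e. whether a turning
    point was reached and the mirrored matrix should be used next.
    """
    for rank, (dy, dx) in enumerate(priorities, start=1):
        ny, nx = y + dy, x + dx
        if 0 <= ny < len(img) and 0 <= nx < len(img[0]) and img[ny][nx]:
            return (ny, nx), rank >= 6
    return None, False
```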

CHAPTER 5

MATCHING:

The features obtained from the input image are matched against the images in the database. Even under the best of conditions it cannot be expected that the features obtained match exactly with the features of a previous image of the same individual. The extracted features are in the form of positive integers, referred to as the magnitudes of the features. To obtain a match, the difference between the features obtained from the input image and those of the image in the database is calculated, with the difference of each feature treated separately. The lower the magnitude of the feature, the more significant the difference becomes: for a feature of magnitude 1000 a difference of 2 units is negligible, but it becomes significant if the magnitude is 100. So to find the final match score two separate values are calculated.
The first is the difference between the features of the two images. The second is the sum of the magnitudes of the features. The significance, that is the relative importance, of the difference between the features of the two images must be seen in the context of the sum of the magnitudes of the features.

diff = magnitude of database image − magnitude of input image

sum = magnitude of database image + magnitude of input image

Using the value of sum, the difference between the features can be viewed in proper context. So the actual match of the feature is the difference normalised by the sum: match = diff/sum.
The next step is to assign a score to the value of match determined; the significance of assigning a score is explained in the next paragraph. This score is the match-score for the feature, and the summation of the match-scores of all the features gives the total match-score. It is the total match-score that is compared with the threshold to authenticate the user. The threshold is a match-score determined by testing: several negative test cases are tried and the match-scores noted, then several positive test cases are tried and again the match-scores are noted.
Using these results, a match-score which is less than the match-scores of the maximum number of positive cases and greater than those of the maximum number of negative cases is chosen and set as the threshold.
The value of match for each feature may be different. Suppose diff = 0 for one feature, so that it matches perfectly. If the algorithm simply added an unbounded match value for each feature, it could fail when diff = 0 for that feature while all the other features match very poorly: the poor matches indicate that the user is not authentic, yet the one perfect feature would make the other values irrelevant and the total match-score would always end up above the threshold. The value of match lies in the range [0, 1]. This range is broken into several smaller ranges such that the value of match can lie in only one of them. The ranges are chosen using a set of constants in decreasing order: if the constants are c1 > c2 > c3 > ... > cn, then the ranges run from [0, cn] to [c1, 1].

To obtain the score, match is divided by the set of constant values. Since the constants are in decreasing order, the division results in a set of values in increasing order. Let detrange = match/ci. The algorithm starts from c1 and continues till detrange > 1. Determining the value of i for which detrange first exceeds one provides the algorithm with the range in which match lies. Depending on this range a score is assigned to the feature. The score is an integer and may take both positive and negative values; it reflects how close the values of the feature from the two images are. The score is highest for differences which lie in the range [0, cn] and is lowest (negative) for differences in the range [c1, 1]. The score for [ci, ci+1] is nearly double that of the range [ci+1, ci+2], unless the score for the first range is negative, in which case the score for the second is nearly double that of the first.
Each of the features obtained is assigned a weight, which depends upon the importance of the feature. Length might be considered a more important feature than the width and thus be weighted accordingly.
So the formula for the calculation of the match-score is

Match-score = Σ (wi × scorei)    (5.1)

where wi is the weight of feature i and scorei its score.
The threshold is determined by testing various images. For any two images, if the match-score is more than the determined threshold the system accepts the user as an authorized user; if it is below the threshold the user is rejected as unauthorized. Ideally the match-score of all unauthorized users should be negative and that of authorized users should be a high positive number.
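The matching scheme can be sketched as below. The text's match formula is open to interpretation, but the stated range [0, 1] and the scoring ranges work out for the normalised difference diff/sum, so this sketch uses that reading; the constants, per-range scores and weights are illustrative values, not those of the actual system.

```python
def range_score(match, constants, scores):
    """Map a normalised difference in [0, 1] to an integer score.

    constants = [c1, ..., cn] in decreasing order; scores has n + 1
    entries, from scores[0] for the worst range (c1, 1] up to
    scores[n] for the best range [0, cn].
    """
    idx = sum(1 for c in constants if match <= c)
    return scores[idx]

def match_score(db_feats, in_feats, weights, constants, scores):
    """Weighted total of per-feature scores (equation 5.1)."""
    total = 0
    for db, inp, w in zip(db_feats, in_feats, weights):
        s = db + inp
        match = abs(db - inp) / s if s else 0.0   # difference in context of magnitude
        total += w * range_score(match, constants, scores)
    return total
```

A user would then be accepted when the total exceeds the threshold fixed by testing.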

CHAPTER 6

EXPERIMENTAL RESULTS:

The system has been tested on 100 images. These include images of the same individuals taken at different times. Since no pegs are used to align the position of the hand, the alignment obviously may vary between images of the same individual. Although a slight rotation is acceptable, the system is not completely rotation invariant. The users are instructed to place the hand on the scanner such that the arm forms almost a right angle with the hand, to reduce the chances of a part of the wrist being included in the image. For the same reason the users are instructed to place the hand on the bottom part of the scanner rather than at the top, so that most of the wrist falls outside the scanning range. If a large part of the wrist creeps into the image it may affect the calculation of the perimeter. The right way of placing the hand is shown in Figure 6.1, while the incorrect way is shown in Figure 6.2.
Initially an arbitrary threshold, roughly at the centre of the match-score spread, was chosen. After testing with the images the arbitrary threshold proved to be fairly good: in 200 tests for false acceptance there are a total of 22 false acceptances, giving the arbitrary threshold an FAR of 0.11.

Also, in 30 tests for false rejection there are 3 false rejects, giving an FRR of 0.10.
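The two error rates quoted above come straight from the counts:

```python
def error_rates(false_accepts, impostor_tests, false_rejects, genuine_tests):
    """FAR = false acceptances / impostor attempts,
    FRR = false rejections / genuine attempts."""
    return false_accepts / impostor_tests, false_rejects / genuine_tests

# The figures reported in the text: 22 false acceptances in 200
# impostor tests and 3 false rejections in 30 genuine tests.
far, frr = error_rates(22, 200, 3, 30)   # 0.11 and 0.10
```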
During these tests the match-score for each false acceptance and each false rejection has been noted. A comparison of these scores shows that the threshold can be raised so as to reduce the FAR to 0.05 without increasing the FRR for the tested set of images. As more testing is performed, the threshold can be narrowed down even further. The FAR-FRR curve is shown in Figure 6.3; the EER obtained from this curve is 6%. A few images with rings on the fingers, as shown in Figure 6.4, have also been tested.

The rings are such that they do not touch any finger other than the one on which they are worn. It is found that such rings do not produce results any different from the rest of the sample images. The system, however, works by traversing from the tip of the finger to the valley between the fingers, so if thick rings which change the shape of the valley by touching two or more fingers, or other sorts of ornaments, are worn, a loss of accuracy may result. Rings may also change the measured width of the fingers, so if a ring is worn while registering it should also be worn at all future authentication attempts.

MATLAB BASED IMPLEMENTATION FOR FINGERPRINT RECOGNITION

1. TRAIN DATA INSERT

2. TEST DATA INSERT

3. TRAIN DATA INCLUSION

6. FINGERPRINT RECOGNIZED

7. ACCESS DENIED / MIS-MATCHES

(Screenshots of the MATLAB interface for the steps above are not reproduced here.)

CHAPTER 7

CONCLUSION AND FUTURE WORK:

The value of biometrics as a reliable means of meeting the security concerns of today's information and network based society cannot be overstated. Biometrics is used all over the globe and is undergoing constant development. Hand geometry has proved to be a reliable biometric. The proposed work shows how to utilize the shape of the hand to extract features using very simple algorithms. The database consists of 100 different handprints, some of which belong to the same person, obtained by placing the hand at different alignments on the scanner.
The system showed promising results with an accuracy of around 95%. The FRR is found to be close to 0.1 and the FAR to be around 0.05.
The proposed work utilizes primarily the geometry of the hand. The finger creases and even the fingerprints can be extracted from the input image; combining all these biometrics would result in a multimodal system with very high accuracy. The image used is in grayscale format. If a coloured image were used, additional features such as the colour of the hand could also be exploited. For huge databases the search takes a long time, and colour is so distinct a feature that it could serve as an initial classifier to narrow the search space in the database considerably. A neural network based classifier trained on a larger database may result in a further improvement of the system accuracy.

BIBLIOGRAPHY:

[1] Individuality of handwriting: A validation study. In ICDAR '01: Proceedings of the Sixth International Conference on Document Analysis and Recognition, page 106, Washington, DC, USA, 2001. IEEE Computer Society.
[2] Francesco Bergadano, Daniele Gunetti, and Claudia Picardi. User authentication through keystroke dynamics. ACM Trans. Inf. Syst. Secur., 5(4):367–397, 2002.
[3] Michael M. Blane, Zhibin Lei, Hakan Civi, and David B. Cooper. The 3L algorithm for fitting implicit polynomial curves and surfaces to data. IEEE Trans. Pattern Anal. Mach. Intell., 22(3):298–313, 2000.
[4] Y. Bulatov, S. Jambawalikar, P. Kumar, and S. Sethia. Hand recognition using geometric classifiers. 1999.
[5] M. Burge and W. Burger. Ear biometrics. In Personal Identification in Networked Society. Kluwer Academic, Boston, 1999.
[6] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8:679–698, 1986.
[7] James B. Hayfron-Acquah, Mark S. Nixon, and John N. Carter. Automatic gait recognition by symmetry analysis. In AVBPA, pages 272–277, 2001.
[8] Anil K. Jain, Sharath Pankanti, Salil Prabhakar, and Arun Ross. Recent advances in fingerprint verification. In AVBPA, pages 182–191, 2001.
[9] A. Kholmatov. Biometric identity verification using on-line & off-line signature verification. Master's thesis, Sabanci University, 2003.
[10] Ajay Kumar and David Zhang. Integrating shape and texture for hand verification. In ICIG '04: Proceedings of the Third International Conference on Image and Graphics, pages 222–225, Washington, DC, USA, 2004. IEEE Computer Society.
[11] Chih-Lung Lin and Kuo-Chin Fan. Biometric verification using thermal images of palm-dorsa vein patterns. IEEE Trans. Circuits Syst. Video Techn., 14(2):199–213.
[12] Guangming Lu, David Zhang, and Kuanquan Wang. Palmprint recognition using eigenpalms features. Pattern Recogn. Lett., 24(9-10):1463–1467, 2003.
[13] YingLiang Ma, Frank Pollick, and W. Terry Hewitt. Using B-spline curves for hand recognition. In Proceedings of the International Conference on Pattern Recognition, 03:274–277, 2004.
[14] A. Malaviya. A fuzzy online handwriting recognition system. In 2nd International Conference on Fuzzy Set Theory and Technology.
[15] Hikaru Morita, D. Sakamoto, Tetsu Ohishi, Yoshimitsu Komiya, and Takashi Matsumoto. On-line signature verifier incorporating pen position, pen pressure, and pen inclination trajectories. In AVBPA, pages 318–323, 2001.
[16] Cenker Oden, Vedat Taylan Yildiz, Hikmet Kirmizitas, and Burak Buke. Hand recognition using implicit polynomials and geometric features. In AVBPA '01: Proceedings of the Third International Conference on Audio- and Video-Based Biometric Person Authentication, pages 336–341, London, UK, 2001. Springer-Verlag.
[17] S. Rieck, E. Schukat-Talamazzini, T. Kuhn, S. Kunzmann, and E. Nöth. Automatic transformation of speech databases for continuous speech recognition. In P. Laface and R. De Mori, editors, Speech Recognition and Understanding. Recent Advances, Trends, and Applications, NATO ASI Series F75, pages 181–186. Springer, 1992.
[18] Sait Sener and Mustafa Unel. Affine invariant fitting of algebraic curves using Fourier descriptors. Pattern Anal. Appl., 8(1):72–83, 2005.
[19] Y. Wu and T. Huang. Human hand modeling, analysis and animation in the context of HCI. 1999.
[20] Yong Zhu, Tieniu Tan, and Yunhong Wang. Biometric personal identification based on handwriting. In Proceedings of the International Conference on Pattern Recognition, 02:2797, 2000.