
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS, VOL. 41, NO. 5, SEPTEMBER 2011

Enhancing Identity Prediction Using a Novel Approach to Combining Hard- and Soft-Biometric Information

Marjory Cristiany Da Costa Abreu and Michael Fairhurst

Abstract—The effectiveness with which individual identity can be predicted in, for example, an antiterrorist scenario can benefit from seeking a broad base of identity evidence. The issue of improving performance can be addressed in a number of ways, but system configurations based on integrating different information sources (often involving more than one biometric modality) are a widely adopted means of achieving this. This paper presents a new approach to improving identification performance, where both direct biometric samples and soft-biometric knowledge are combined. Specifically, we propose a strategy based on an intelligent agent-based decision-making process, which predicts both absolute identity and other individual characteristics from biometric samples, as a basis for a more refined and enhanced overall identification decision based on flexible negotiation among class-related agents.

Index Terms—Agent, face, fingerprint, fusion, identity prediction, soft-biometric prediction (age and gender).

I. INTRODUCTION
ROBUST and reliable identification of individuals is clearly a most important tool in, for example, antiterrorism scenarios, yet many traditional approaches to identity determination and people monitoring have manifest flaws and weaknesses. Biometrics-based approaches provide a potentially very powerful option, offering the possibility of confident identity prediction in many cases, while also allowing a lower level monitoring function based on probabilistic assessments or confidence-based predictions in other circumstances.
Although formal biometric identification systems are not guaranteed in all circumstances to be effective in protecting against terrorist threat, and although establishing identity unequivocally is always challenging, a principal benefit of biometric technologies is their ability to bind activity to an individual, thereby increasing the challenges associated with assuming different identities, or masking identity across different activities, across physical boundaries, and even across international borders [1].

Manuscript received December 28, 2009; revised April 12, 2010; accepted June 26, 2010. Date of publication August 3, 2010; date of current version August 19, 2011. The work of M. C. C. Abreu was supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) (Brazilian Funding Agency) under Grant BEX4903-06-4. This paper was recommended by Associate Editor J. Tang.
The authors are with the Department of Electronics, University of Kent, Canterbury, Kent CT2 7NT, U.K. (e-mail: mcda2@kent.ac.uk; m.c.fairhurst@kent.ac.uk).
Digital Object Identifier 10.1109/TSMCC.2010.2056920

However, there is an argument to be made that it is necessary to be more imaginative about how to deploy biometric identification processing if the full potential of these technologies is to be most effectively harnessed in this type of context. For example, it is important to acknowledge that antiterrorist applications are by their very nature rarely well defined, and identifying, tracking, and predicting the behavior of individuals intent on avoiding detection is a major challenge, however issues of identity are approached. One way in which to creatively enhance the value of the processing methodologies that underpin biometric identification systems is to recognise that in many practical situations identity evidence in relation to any individual is likely to be fragmented and distributed, and thus, no single simple recognition algorithm is likely to be completely effective. It may, therefore, be advantageous to shift the focus from identification in an absolute sense to a process that evaluates multiple sources of identification evidence in order to predict, with a greater degree of confidence than would otherwise be possible, the identity of an individual whose activities are being monitored [2].
Biometrics is now a very well established and intensively researched subject area, yet it may still be considered to represent a group of technologies at a pivotal stage of development [3]. Despite the fact that the biometrics field has reached a level of maturity that can now support reliable practical applications (for example, in relation to travel documents, managing worker movements in the construction industry, etc.), there are still some important unresolved issues that need to be addressed and, indeed, the search for improved performance is a continuing issue [4].
In most applications, however, increasing the accuracy with which a biometric solution can identify an individual or verify claimed identity is a very important issue, especially since the search for improved accuracy also raises related questions about the possibility of tradeoffs between different types of error, how to configure the underpinning processing systems, and the implications across a range of performance factors of structuring an identification system in a particular way. Approaches to improving accuracy have focused principally on optimizing processing within chosen individual modalities (increasing the reliability of minutiae extraction from fingerprint images [5], for example, or developing alternative face matching algorithms [6]), extending existing modalities (for example, moving to 3-D facial imaging [7]), or seeking greater optimization of the generic classification engine deployed [8]. Beyond this, there has been an increasing interest in acquiring better and

1094-6977/$26.00 © 2010 IEEE


more extensive databases to give greater confidence in system design and to model more accurately the prevailing population statistics, while human–computer interaction (HCI) considerations can be a critical factor in generating data of a quality that supports improved performance [9].
However, an approach of particular importance and potential in the present context has been to seek solutions based on multisource processing and, especially, to develop methods for combining multiple modalities in making an identification decision. The multibiometrics approach has generally been shown to generate improved accuracy levels (i.e., reduced error rate performance [10]) and also offers greater resistance to spoofing attacks. It also provides the flexibility to allow both a degree of individual choice in system interaction and the ability to circumvent situations where an individual is not able to provide requested biometric data, offering additional benefits in most applications. Clearly, however, multibiometric solutions also increase implementation complexity and can impact significantly on general system usability, and some recent work [1] has explored in detail the relationship between multiclassifier solutions (which have minimal impact on usability) operating on a single modality and full multibiometric configurations. Interestingly, there has also been work reported on the use of biometric data to predict other individual characteristics (such as subject gender [11], for instance) from biometric samples. Such characteristics, which are often especially useful in forensic applications, are variously referred to as "soft biometrics" (the term we will adopt in this paper), "metabiometrics," or simply "nonbiometric features."
These latter two strands of work are often considered separately, but in this paper, we will propose and evaluate a strategy for integrating both standard biometric data and soft-biometric data as a means of improving the accuracy with which an individual can be identified. In particular, we will adopt an essentially multibiometric framework within which to develop an enhanced identification process, and we will implement our approach using an intelligent agent configuration, which provides efficiency and flexibility at the operational level, together with two established fusion methods. Overall, therefore, we will propose an approach to enhanced individual identification that combines an effective processing structure with a well-matched, powerful, and flexible implementation framework, collectively providing a means to improve the accuracy with which practical biometric identification systems can be designed and the effectiveness with which they can be deployed. Importantly, our approach exploits the idea of combining identity evidence from multiple sources in order to maximize identification potential or to increase confidence in identity prediction, particularly where a hard decision based on conventional identity information from a single conventional source is not possible.
II. IMPROVING ACCURACY IN IDENTITY PREDICTION USING
SOFT-BIOMETRIC INFORMATION
This paper presents a new approach to improving accuracy in person identification and confidence in identity prediction, which combines soft-biometric prediction classifiers with more conventional identity prediction classifiers in order to predict individual identity more reliably.

Fig. 1. Fusion-based system workflow.
The base classifiers (for identity prediction and soft-biometric prediction, respectively) each process the available input data to give their prediction. In pattern recognition terms, the identity prediction classifiers will classify any input sample as belonging to one of the enrolled users. The soft-biometric prediction classifiers, on the other hand, will classify any input sample as one of the possibilities for that specific information type. For example, if the soft biometric is gender, the classifiers will try to classify the input sample as either "male" or "female."
The information used by the fusion methods is designated the "confidence degree" (Conf). All the classifiers will normally produce outputs for all the possible classes, and the winner class (the one generating the maximum output) is considered the output of that classifier. The fusion techniques presented here will use all the outputs, or confidence degrees, for all the classes of the problem.
Critical to our proposed approach, however, is the way in which different data sources are integrated, and this is explained in detail in Sections II-A and II-B (both first defined in [12]) and Section II-C.
A. Majority Vote-Based Fusion Method
Majority voting [13] is a nonlinear fusion-based method that takes into account only the top outputs of the component experts. The outputs of the classifiers are represented in a winner-take-all form (for each classifier, the output of the winner is 1 and the remaining outputs are 0) and the weights for all the experts are equal to 1.
In this paper, we aim to use different types of information
for the overall decision making based on the fusion process.
A schematic of this fusion-based configuration can be seen in
Fig. 1. An adaptation of the general method is described as
follows.
1) All the classifiers generate their output for the given input
sample.
2) If all the identity prediction classifiers vote for the same predicted user identity and all the soft-biometric prediction classifiers also vote for the feature related to the same user, then this user's identity is assigned as the output of the system.


TABLE I
CONFIDENCES OF THE SAMPLE FOR IDENTITY AND GENDER PREDICTION

3) When the identity prediction classifiers disagree, the identity output of the system is the user identity that has received the most votes from the soft-biometric prediction classifiers and the identity prediction classifiers.
4) When there is a tie, the identity prediction classifier with the greatest confidence provides the output of the system.
As an example to illustrate the operation of this method, a hypothetical three-user (A, B, and C) identification task is considered, the inputs of which contain three features (fea1, fea2, and fea3). The proposed fusion system is composed of two identity prediction classifiers (Cl-I-1 and Cl-I-2) and two gender prediction classifiers (Cl-G-1 and Cl-G-2). The gender prediction classifiers can generate either "male" or "female" as an output. After the training process for these classifiers, the following hypothetical test pattern is presented: fea1: 0.7; fea2: 0.4; and fea3: 0.22. User A is male, and users B and C are female. The classifier outputs for this imagined situation can be seen in Table I, where the winner class for each classifier is italicized.
In this case, the following output will be generated by the
system.
1) User A: one vote (Cl-I-1).
2) User B: three votes (Cl-I-2, Cl-G-1, and Cl-G-2).
3) User C: no vote.
The predicted identity of the system would be user B, because
it has the majority of votes.
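The adapted majority-vote rule can be sketched as follows. This is our illustrative reading of steps 1)–4), not the authors' code: soft-biometric votes are credited to the identity candidates whose soft label matches, which reproduces the vote counts of the worked example; all names and the data layout are ours.

```python
# Sketch of the adapted majority-vote fusion (Section II-A); the
# confidences and votes are the hypothetical values of the worked
# example, and the vote-crediting rule is our interpretation.

def majority_vote(identity_votes, soft_votes, user_soft, winner_conf):
    """identity_votes: winner user per identity classifier.
    soft_votes: winner label per soft-biometric classifier.
    user_soft: soft-biometric label of each enrolled user.
    winner_conf: winner confidence per identity classifier (tie-break)."""
    candidates = set(identity_votes)
    tally = {u: identity_votes.count(u) for u in candidates}
    for label in soft_votes:                 # steps 2) and 3)
        for u in candidates:
            if user_soft[u] == label:
                tally[u] += 1
    best = max(tally.values())
    tied = [u for u in tally if tally[u] == best]
    if len(tied) > 1:                        # step 4): most confident wins
        return max(tied, key=lambda u: winner_conf.get(u, 0.0))
    return tied[0]

# Worked example: Cl-I-1 -> A, Cl-I-2 -> B; both gender classifiers -> female.
winner = majority_vote(["A", "B"], ["female", "female"],
                       {"A": "male", "B": "female", "C": "female"},
                       {"A": 0.90, "B": 0.87})
# winner is "B" (three votes against one)
```

Run on the example above, user B accumulates three votes (Cl-I-2 plus both gender classifiers) against one for user A, matching the outcome in the text.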
B. Sum-Based Fusion Method
Sum-based fusion [14] is a linear fusion-based method that takes into account the confidence degree for each class from each classifier. When an input pattern is presented to the base classifiers, the degrees of confidence for each class output are added to the other related outputs, giving a score to that class. The winner class, and hence the identity label of the system, is the class with the highest score.
The same general system structure as for the majority voting
approach is used, as shown in Fig. 1. We also need to adapt this
method to our current purpose as follows.
1) All the classifiers generate their output for the given input
sample.
2) The corresponding class confidence is added in a class
score.
3) In the event of a tie, the identity prediction classifier with
the greatest confidence provides the output of the system.

Fig. 2. Multiagent-based system workflow.

Using the same example defined in Section II-A, the system would give the following output.
1) User A: 2.70 (= 0.90 + 0.86 + 0.19 + 0.75).
2) User B: 2.32 (= 0.26 + 0.87 + 0.38 + 0.81).
3) User C: 1.78 (= 0.25 + 0.34 + 0.38 + 0.81).
The predicted identity of the system will, therefore, be user A, because this corresponds to the highest sum score.
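The sum rule can be sketched in the same way. The per-classifier confidences below are the hypothetical values implied by the sums in the worked example; the convention that each user inherits the confidence of that user's gender class is our reading of how gender scores enter the per-user totals.

```python
# Sum-rule fusion sketch (Section II-B); confidences are the worked
# example's hypothetical values, and all names are ours.

def sum_fusion(identity_scores, gender_scores, user_gender):
    """identity_scores: one {user: confidence} dict per identity classifier.
    gender_scores: one {"male"/"female": confidence} dict per gender classifier."""
    total = {u: 0.0 for u in user_gender}
    for scores in identity_scores:
        for u in total:
            total[u] += scores[u]
    for scores in gender_scores:          # each user inherits the score
        for u in total:                   # of that user's gender class
            total[u] += scores[user_gender[u]]
    winner = max(total, key=total.get)
    return winner, total

winner, total = sum_fusion(
    [{"A": 0.90, "B": 0.26, "C": 0.25},   # Cl-I-1
     {"A": 0.86, "B": 0.87, "C": 0.34}],  # Cl-I-2
    [{"male": 0.19, "female": 0.38},      # Cl-G-1
     {"male": 0.75, "female": 0.81}],     # Cl-G-2
    {"A": "male", "B": "female", "C": "female"})
# winner is "A" with score 2.70, as in the text
```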
C. Sensitivity-Based Negotiation Method
This section presents the last of our proposed techniques and
is based on an intelligent agent model. An intelligent agent is
a software-based computer system that has autonomy, social
ability, reactivity, and proactiveness [15]. Agents are entities,
which can communicate, cooperate, and work together to reach
a common goal. They interact using negotiation protocols.
We have implemented an adaptation of an agent-based negotiation method first proposed in [16]. Originally, this method was based only on agents that perform the same classification task. In our proposed adaptation, however, we incorporate soft-biometric classifiers to help the agents make their decision, and thus refine the decision-making process.
The basic idea underpinning our proposed method is that a decrease in the confidence level of the agents is considered through the use of a sensitivity analysis during the testing phase. This analysis can be achieved by excluding and/or varying the values of an input feature and analyzing the resulting variation in the performance of the classifier. The main aim of this analysis is to investigate the sensitivity of a classifier to a certain feature and to use this information in the negotiation process. The analysis is performed with respect to all features of the input patterns, both in the identity prediction classifier in the agents and in the soft-biometric prediction classifier.
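One concrete way to realise such a sensitivity analysis is sketched below: a feature is "excluded" by replacing it with its column mean, and the resulting drop in accuracy is taken as the classifier's sensitivity to that feature. The threshold classifier and toy data are stand-ins of ours, not the paper's models.

```python
# Feature-sensitivity sketch: accuracy drop when one feature is
# neutralised (replaced by its column mean). Classifier and data are
# illustrative stand-ins only.

def accuracy(classify, X, y):
    return sum(int(classify(x) == t) for x, t in zip(X, y)) / len(X)

def sensitivity(classify, X, y, i):
    """Accuracy drop when feature i is replaced by its column mean."""
    mean_i = sum(x[i] for x in X) / len(X)
    X_ablated = [x[:i] + [mean_i] + x[i + 1:] for x in X]
    return accuracy(classify, X, y) - accuracy(classify, X_ablated, y)

X = [[0.0, 7.0], [0.0, 8.0], [1.0, 7.0], [1.0, 8.0]]
y = [0, 0, 1, 1]
clf = lambda x: int(x[0] > 0.5)  # uses only feature 0
# sensitivity is 0.5 for feature 0 and 0.0 for the ignored feature 1
```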
All agents should allow their classifier module to be trained
and to negotiate a common result for a test pattern. A schematic
view of this multiagent-based system can be seen in Fig. 2.
Our proposed method for the production of an action plan is
described as follows.
1) Allow all the classifiers in the system to be trained. During
the training phase:
a) carry out the sensitivity analysis for all features in
each classifier;
b) calculate the training mean for all features of each
class.


2) Start the negotiation process by trying to show the other agents that their results are not good ones, using both information about the predicted identity and the soft-biometric information related to the predicted identity. The best way to do this is to suggest a decrease in their confidence level with respect to the predicted identity, which can be achieved in the following way.
a) Calculate the difference (or distance) between each input feature (test pattern) and the training mean of that feature for all classes (users' predicted identities and soft-biometric predicted information).
b) Rank the features in decreasing order of dissimilarity (i.e., from the least similar feature to the most similar) for each class.
c) For the first N features, do the following.
i) Choose an agent and let it choose another
agent to attack.
ii) Check the class assigned by the attacked agent
and the sensitivity of the corresponding classifier to this feature.
iii) Check the soft-biometric information related
to the predicted identity.
iv) Send a message to the other agent suggesting a
decrease (punishment) in the confidence level
of that agent.
3) After the negotiation process, the classifier agent with the
highest confidence level is assumed to be the most suitable
one to classify the test pattern and its output is considered
as the overall identity output.
It is important to emphasize that once one agent sends a suggestion to decrease the confidence level of the other agent, the second agent will also send a suggestion to punish the first agent. Each cycle within which all agents suggest punishment constitutes a round. This process proceeds until all M features have been seen or until only one of the agents has nonnegative confidence.
The principal idea behind this process is that the more distant
a particular feature is from the training mean, the higher is the
probability that a sensitive classifier is wrong. Also, if there is
evidence in the soft-biometric classifiers that the agent is wrong,
using the same idea of feature sensitivity, this information is
used in the punishment. This fact is used to suggest a decrease
in the confidence degree of an agent. The punishment value is
calculated in the following way:
\mathrm{Pun}_i = \frac{D_i\, S_i}{R_i\, C_a} + \sum_{j=1}^{n} \frac{D_{j,i}\, S_{j,i}}{R\mathrm{soft}_i\, C_{mb}} \qquad (1)

where
1) Di is the difference between the current feature i of the test pattern and its training mean for the identity classifier;
2) Dj,i is the difference between the current feature i of the test pattern and its training mean for the current winner class j of the soft-biometric classifier;
3) Si is the sensitivity of the identity classifier to the corresponding feature i of the chosen class;

TABLE II
SENSITIVITY ANALYSIS OF THE PREDICTION CLASSIFIERS

TABLE III
TRAINING MEAN OF ALL FEATURES

4) Sj,i is the sensitivity of the soft-biometric classifier to the corresponding feature i for the current winner class j;
5) Ri is the ranking of feature i in its difference from the training mean for the identity classifier;
6) Rsofti is the ranking of feature i in its difference from the training mean for the soft-biometric classifier;
7) Ca and Cmb are constants that define the intensity of the punishments;
8) n is the number of soft-biometric classifiers that do not agree with the attacked agent.
The sensitivity analysis and the training mean, along with
further environmental information related to the soft-biometric
prediction classifiers are transformed into rules and constitute
the overall domain knowledge base of the classifier agent.
An agent can be asked to change its result. This occurs when the punishment parameter is higher than a threshold during a set of rounds (features). The agent can then choose an alternative class (usually a classifier provides a decreasing list of possible classes to which a pattern belongs) or undertake a new decision-making process.
As an example to illustrate the operation of the action plan, the hypothetical scenario of Section II-A is again used. The confidences of the identity classifiers, which are part of the agents (Cl-I-1 is part of Ag1 and Cl-I-2 is part of Ag2), are as shown in Table I.
According to step 2 of the action plan, we now need to calculate the sensitivity and training mean for all features of all classes, which can be seen in Tables II and III.
Then, step 2a) requires that the (absolute) difference between the test pattern features and the training mean is calculated (see Table IV).
According to step 2b), the features then have to be ranked in decreasing order of difference, with the following result for this example.
1) Identity prediction in decreasing order of distance of the
features:
a) User A: fea1, fea2, fea3.
b) User B: fea1, fea2, fea3.
c) User C: fea1, fea3, fea2.


TABLE IV
ABSOLUTE DISTANCE OF THE TEST PATTERN AND ITS TRAINING MEAN

2) Gender prediction in decreasing order of distance of the features:
a) Female: fea3, fea2, fea1.
b) Male: fea3, fea1, fea2.
In the first round of steps 2c-i) and 2c-ii), let us imagine that agent Ag2 starts the negotiation, attempting to show the other agent (Ag1) that its result is not a good one. The chosen identity of Ag1 is A, and fea1 is the least similar feature. Also, the sensitivity of Ag1 to fea1 is 7.7% and its difference from the training mean of class A is 0.30. Furthermore, both gender prediction classifiers report that the user to whom the sample belongs is female, while user A is male. The punishment regime will reflect this information. Using (1) with constants C_a = C_{mb} = 10, the punishment is

\mathrm{Pun}_{Ag1,fea1} = \frac{0.30 \times 0.077}{3 \times 10} + \frac{0.21 \times 0.053}{3 \times 10} + \frac{0.21 \times 0.046}{3 \times 10} \approx 0.001.

This value is, therefore, suggested to be subtracted from the confidence of Ag1 about user A. Its new confidence for user A will consequently be 0.899 (= 0.90 − 0.001). The rounds progress until either all features have been analyzed or until only one of the agents has a nonnegative confidence for a number of rounds.
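The punishment arithmetic of (1) can be reproduced numerically. In the sketch below, the constants (C_a = C_{mb} = 10) and the denominators (R_i = Rsoft_i = 3) follow the paper's worked arithmetic; the function name and argument layout are ours.

```python
# Numerical sketch of the punishment rule of (1), reproducing the
# worked example of Section II-C; names are ours, numbers are the
# paper's hypothetical values.

def punishment(d_i, s_i, r_i, c_a, soft_terms, rsoft_i, c_mb):
    """soft_terms: one (D_ji, S_ji) pair per disagreeing
    soft-biometric classifier (the n summands in (1))."""
    pun = (d_i * s_i) / (r_i * c_a)          # identity-classifier term
    for d_ji, s_ji in soft_terms:            # soft-biometric terms
        pun += (d_ji * s_ji) / (rsoft_i * c_mb)
    return pun

pun = punishment(0.30, 0.077, 3, 10,
                 [(0.21, 0.053), (0.21, 0.046)], 3, 10)
# pun rounds to the quoted 0.001, and 0.90 - pun rounds to 0.899
```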
III. TOWARD THE EXPLOITATION OF MULTISOURCE
IDENTIFICATION INFORMATION
The use of soft-biometric information to improve identity
verification has not been as widely investigated as other options
for improving accuracy and reliability in biometric processing.
Nevertheless, it is possible to find some interesting relevant
work in the literature, of which some typical examples are the
following.
In [17] and [18], Jain et al. investigate characteristics, such as gender, ethnicity, and height, which are extracted automatically to provide some information about the user in a fingerprint verification task. This information is incorporated into the decision-making process of the primary biometric system, using the probability related to that specific soft biometric. The database adopted contains 160 users, each providing four samples of their left-index finger, left-middle finger, right-index finger, and right-middle finger, from which minutiae features were extracted for the biometrics-based processing. The experiments reported, using synthetically generated soft-biometric data based on known statistics, show results with 80.1% accuracy.
In [19], a hybrid biometric verification system that uses face and fingerprint as the primary characteristics and gender, ethnicity, and height as the soft characteristics is described. The fingerprint database used contains 160 users each providing four samples of their left-index finger, left-middle finger, right-index finger, and right-middle finger, where minutiae features were again extracted. The face database contains images of 263 users, with ten images per user, where linear discriminant analysis (LDA) features were extracted (score vector of length 8).
The experiments use an ethnicity classifier (Asian or non-Asian, with 96.3% accuracy), a gender classifier (89.6% accuracy), and the height information already available. As in [17], this information is incorporated into the decision-making process of the primary biometric systems, using the probability related to that specific soft-biometric characteristic. The results show that neither ethnicity nor gender improves the performance of the systems. However, the system that uses height, face, and fingerprint returns an accuracy of around 95.5%.
In [20], a framework for integrating the color of a human iris
within a multimodal biometric system is described, which combines fingerprint and iris in a verification task. Steerable pyramid
filters and multichannel log-Gabor filters are used for extracting features of the fingerprints and iris, respectively. Weighted
averaging and a Parzen classifier are used for fusion of these features. The DSP-AAST iris database and fingerprint verification
competition (FVC) fingerprint database are used. The Parzen
classifier generates the best results with an accuracy of around
97%.
In [21], a fingerprint verification system that also makes use of body-weight measurements is presented. Weighted sum of scores, support vector machines (SVM), and multilayer perceptrons (MLP), as well as logical OR and AND, are the methods employed in data fusion. A database containing fingerprints (minutiae were extracted), body weight, and fat percentage data for 62 individuals was collected for this study. The results reported show a decrease in the total error rate to 3.9% when incorporating the soft-biometric information.
Some important differences can, however, be noted between these typical approaches and the one we propose.
1) A main difference between the investigations reported and our paper is that we propose an application for an identity prediction task, whereas all the others address the less challenging task of identity verification.
2) The traditional approaches all use the soft-biometric information to narrow the search space, rather than operating specifically in the fusion phase.
3) Most of the work reported is multimodal, while here we present an approach which, as will be demonstrated in Section V, can produce very good performance in a less information-rich environment.
The main contribution of our approach is the use of intelligent agents that perform an automated identification without depending on a single fixed combination method, as is normally the case in fusion methods.
IV. EXPERIMENTAL METHODOLOGY
In the experimental work to be reported here, we use
two modalities (fingerprint and face) collected as part of a
multimodal database, where each of 79 users provided their


information in two sessions. The data used were collected in the Department of Electronics at the University of Kent in the U.K., as part of a Europe-wide project undertaken by the EU BioSecure Network of Excellence [9].
The fingerprint dataset from the BioSecure project contains
samples from sensors based on both thermal and optical fingerprint capture methods. In each case, samples corresponding to
right thumb, index and middle fingers and left thumb, index and
middle fingers are collected (two samples from each of these
fingers). In our working database, we extracted features based
on the minutiae of the fingerprint samples, as follows:
1) X-pixel coordinate;
2) Y-pixel coordinate;
3) Minutia type;
4) Direction;
5) Ridge curvature;
6) Ridge density.
The minutiae were extracted using the VeriFinger [22] software. As each fingerprint image generates a different number of detectable minutiae, whereas the classifiers adopted need a fixed number of inputs, it is necessary to normalize the number of minutiae. Here, we use a standard algorithm for core detection [23] and identify the 15 minutiae closest to the core to use as input to the classifier, this number having been chosen as optimal on the basis of extensive experimental testing. We use both optical and thermal samples of all six fingerprints.
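The minutiae-count normalisation can be sketched as follows; the core-detection step itself is assumed already done, and the tuple layout (x, y, followed by the remaining minutia attributes) is our assumption.

```python
import math

# Sketch: keep the 15 minutiae whose (x, y) positions are closest
# (Euclidean distance) to the detected core point.

def nearest_to_core(minutiae, core, k=15):
    """minutiae: iterable of (x, y, *rest) tuples; core: (x, y)."""
    cx, cy = core
    return sorted(minutiae,
                  key=lambda m: math.hypot(m[0] - cx, m[1] - cy))[:k]
```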
The face dataset from the BioSecure database contains eight
samples collected using a digital camera. Four of these samples
use a flash and the remaining four do not. Principal component
analysis (PCA) [24] was adopted using 15 eigenvectors.
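A minimal PCA feature extraction in the spirit described (centre the flattened image vectors and project onto the top k eigenvectors, with k = 15 as in the paper) can be sketched as below; this is a generic sketch, not the exact pipeline used for the face data.

```python
import numpy as np

# Generic PCA sketch: project centred image vectors onto the top-k
# principal directions.

def pca_features(X, k=15):
    """X: (n_samples, n_pixels) matrix of flattened face images."""
    Xc = X - X.mean(axis=0)
    # right singular vectors of the centred data are the eigenvectors
    # of its covariance matrix, ordered by decreasing variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```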
The database provides a limited set of soft-biometric information, from which we have selected age and gender as the two features to adopt in our experimentation for the purposes of illustration. For simplicity, we have partitioned the population into just three age groups (the age groups adopted are <25, 25–60, and >60). The gender feature naturally partitions the population into two groups, male and female.
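The three-band age partition amounts to a simple labelling function; the handling of the exact boundary ages below is our assumption, since the bands <25, 25–60, and >60 leave it implicit.

```python
# Age-group labelling for the soft-biometric partition; boundary
# handling (25 and 60 inclusive in the middle band) is our assumption.

def age_group(age):
    if age < 25:
        return "<25"
    if age <= 60:
        return "25-60"
    return ">60"
```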
In order to investigate and evaluate performance in our experimental study, a range of different individual classifiers were
selected and used for identity prediction and soft-biometric information prediction, which would also form the base components of the fusion methods used in the later part of our
experimentation. The base classifiers used in our study are as
follows:
1) Multilayer perceptron (MLP) [25];
2) Fuzzy MLP (FMLP) [26];
3) Radial-basis function neural network (RBF) [27];
4) Optimized incremental reduced error pruning (IREP)
(JRip) [28];
5) Support vector machines (SVM) [29];
6) Decision trees (DT) [30];
7) K-nearest neighbours (KNN) [31].
In order to improve robustness, we chose a tenfold cross-validation approach because of its relative simplicity, and because it has been shown to be statistically sound in evaluating the performance of classification tasks [32]. In tenfold cross-validation, the training set is equally divided into ten different

TABLE V
ERROR MEAN AND STANDARD DEVIATION OF ALL INDIVIDUAL PREDICTION
CLASSIFIERS USED WITH THE FINGERPRINT AND FACE DATABASES

subsets. Nine of the ten subsets are used to train the classifier and the tenth is used as the test set, producing an error measurement each time. The procedure is repeated ten times, with a different subset used as the test set in each case, and the average of these ten errors represents the error rate for that classifier.
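The cross-validation protocol just described can be sketched generically; the strided split and the callback signature below are our assumptions, not the paper's implementation.

```python
# Tenfold cross-validation sketch: each of ten equal parts serves once
# as the test set, and the ten error rates are averaged.

def kfold_error(train_and_test, X, y, k=10):
    """train_and_test(train_X, train_y, test_X, test_y) -> error rate."""
    n = len(X)
    folds = [list(range(i, n, k)) for i in range(k)]  # strided split
    errors = []
    for test_idx in folds:
        held_out = set(test_idx)
        train_idx = [i for i in range(n) if i not in held_out]
        errors.append(train_and_test(
            [X[i] for i in train_idx], [y[i] for i in train_idx],
            [X[i] for i in test_idx], [y[i] for i in test_idx]))
    return sum(errors) / k
```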
The comparison of two classification methods is accomplished by analyzing the statistical significance of the difference between the mean error rates obtained on independent test sets by the methods evaluated. In order to evaluate this, the p-value provided by the t-test [32] measures the degree of confidence in the result. In our case, we use a confidence level of 95%, where one sample is deemed to be statistically different from another only when the p-value is lower than 0.05.
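The decision rule can be sketched with a pooled two-sample t statistic on the per-fold error rates. The paper reports p-values; comparing |t| with the two-sided 5% critical value (about 2.101 for 18 degrees of freedom, i.e., two samples of ten folds with equal variances assumed) is the equivalent decision rule. This equal-variance pooling is our assumption.

```python
from statistics import mean, variance

# Two-sample t-test sketch on per-fold error rates; |t| > t_crit at
# the 5% level (df = 18) corresponds to p < 0.05.

def significantly_different(errs_a, errs_b, t_crit=2.101):
    na, nb = len(errs_a), len(errs_b)
    # pooled sample variance across the two sets of fold errors
    pooled = ((na - 1) * variance(errs_a)
              + (nb - 1) * variance(errs_b)) / (na + nb - 2)
    t = (mean(errs_a) - mean(errs_b)) / (pooled * (1 / na + 1 / nb)) ** 0.5
    return abs(t) > t_crit
```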
V. RESULTS AND DISCUSSION
In order to interpret and evaluate the results of the fusion process, it is important first to analyze the results of the individual
classifiers that will be used as the base elements in the fusionbased structure. Table V shows the results obtained using all
seven classifiers for each prediction task.
It can be seen in Table V that the identity prediction classifiers generate an error mean similar to that returned by the soft-biometric prediction process. This is unexpected, because in the identification task each user identity is considered to be a class, whereas in, for example, the age prediction process the classifier only needs to choose among just three classes, which makes the classification task much less challenging. Moreover, this result also demonstrates that soft-biometric prediction can be effectively realized.
Because our aim in this paper is to predict identity, it is important to know whether there is any statistical difference between the identity prediction classifiers. According to the interpretation of the t-test, comparing the best classifier (SVM) with the others, it can be asserted that the SVM classifier is statistically better than all other classifiers apart from the FMLP for the two databases, with p-values of 0.36 and 0.39 for face and fingerprint, respectively.


TABLE VI
ERROR MEAN AND STANDARD DEVIATION OF ALL FUSION AND THE
NEGOTIATION METHOD USED WITH THE FACE AND FINGERPRINT DATABASES

The soft-biometric prediction classifiers presented, in general, a good level of accuracy. In the fingerprint database, the age prediction classifiers generally return the best accuracies, despite having to choose among three classes. On the other hand, the gender prediction classifiers perform better than the age prediction classifiers in the face database. This suggests that age effects in fingerprints are greater than the differences between the genders, while gender differences have a greater impact in the face database than age.
Overall, the results returned by the individual classifiers are rather typical of others reported in the literature, but the fusion methods proposed are specifically aimed at combining the advantages of different classifiers to improve overall accuracy. In our experimental work, we used three classifiers in each prediction task considered. The particular structures considered are as follows:
1) Identity (MLP/Jrip/SVM) + Age (FMLP/DT/KNN) +
Gender (RBF/Jrip/SVM);
2) Identity (FMLP/DT/KNN) + Age (RBF/KNN/SVM) +
Gender (FMLP/SVM/Jrip);
3) Identity (RBF/KNN/SVM) + Age (FMLP/SVM/Jrip) +
Gender (MLP/Jrip/SVM);
4) Identity (FMLP/SVM/Jrip) + Age (MLP/Jrip/SVM) +
Gender (RBF/KNN/SVM).
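The voting and sum fusion rules evaluated below can be sketched as follows. This is a generic illustration of the two rules as they are usually defined, not the paper's own implementation, and the classifier outputs shown are invented:

```python
from collections import Counter

def vote_fusion(predictions):
    """Majority vote over the class labels output by each classifier."""
    return Counter(predictions).most_common(1)[0][0]

def sum_fusion(posteriors):
    """Sum rule: add per-class posterior vectors and take the argmax."""
    totals = {}
    for post in posteriors:
        for cls, p in post.items():
            totals[cls] = totals.get(cls, 0.0) + p
    return max(totals, key=totals.get)

# Three hypothetical identity classifiers (e.g., an MLP/Jrip/SVM triple):
labels = ['user7', 'user3', 'user7']
posts = [{'user7': 0.6, 'user3': 0.4},
         {'user7': 0.2, 'user3': 0.8},
         {'user7': 0.8, 'user3': 0.2}]
print(vote_fusion(labels), sum_fusion(posts))  # user7 user7
```

The vote rule discards the confidence information that the sum rule retains, which is one reason the two centralized methods can behave differently on the same ensemble.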
Table VI shows the accuracy achieved by the fusion (voting and sum) and the negotiation (sensitivity) methods. We used a value of ten for both constants (Ca and Cmb) in the negotiation method, chosen to provide the strong punishment level that these constants control. The fusion methods show an improvement of 50% on average when compared with the individual classifiers, which is clearly a significant gain in performance. However, the sensitivity-based method achieved on average an 80% improvement when compared with the individual base classifiers, showing a substantial further performance enhancement.
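The improvement figures quoted here are relative error reductions; as a small illustration of the arithmetic (the error values below are invented, not taken from Table VI):

```python
def relative_improvement(base_err, fused_err):
    """Percentage reduction in error of a combined method over a baseline."""
    return 100.0 * (base_err - fused_err) / base_err

# E.g., an individual classifier at 20% error and a fused method at 10%
# corresponds to the kind of 50% average improvement reported:
print(relative_improvement(0.20, 0.10))  # 50.0
```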
According to the outcome of the t-test, all these combination methods are statistically better than all the individual classifiers. This fact confirms that there are advantages in combining different methods to improve accuracy. It is also very interesting to see that the sensitivity-based method is always statistically better than the two centralized fusion methods. Again, this is an indication that the multiagent solution is especially promising.

Fig. 3. Partitioned error rates according to the age band.

TABLE VII
COMPARISON WITH RELATED WORK
Fig. 3 shows the partitioned error rates for the age bands in both the face and fingerprint databases. Several important observations can be made about these results.
1) The best overall error rate does not necessarily indicate the best method for a particular task.
2) Each approach demonstrates a different behavior, meaning that the system designer can retain flexibility in matching the adopted technique to specific target populations.
3) Face performs better overall than fingerprint, but not in all specific age bands; for instance, Sum-4 for fingerprint performs better than Sum-4 for face in the over-60 age group.
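Partitioned error rates such as those in Fig. 3 can be obtained by grouping test outcomes by age band. The following is a minimal sketch, with invented band labels and outcomes:

```python
def error_by_band(results):
    """results: list of (age_band, correct) pairs -> error rate per band."""
    totals, wrong = {}, {}
    for band, correct in results:
        totals[band] = totals.get(band, 0) + 1
        wrong[band] = wrong.get(band, 0) + (0 if correct else 1)
    return {band: wrong[band] / totals[band] for band in totals}

# Hypothetical per-sample identification outcomes tagged with an age band:
results = [('18-30', True), ('18-30', True), ('18-30', False),
           ('31-60', True), ('over60', False), ('over60', True)]
print(error_by_band(results))
```

Partitioning in this way is what reveals the band-specific reversals noted above, which an overall error rate alone would hide.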
Table VII shows the results of similar related work compared to our approach. It can be seen that our proposed method generates better performance even when compared with multimodal systems. It is also important to recall that all the related investigations cited are based on a verification approach, while our work addresses the more challenging identification problem.
Based on these promising results (even when compared with results from more complex databases), achieved using only one full modality in the system, it will be instructive in the future to apply this concept of combining soft-biometric and full-biometric predicted information in multimodal systems.
VI. CONCLUSION
We have introduced a new approach to improving the accuracy of biometric identification systems, which uses both biometric and soft-biometric information in an effort to enhance performance, robustness, and reliability. We have proposed substantial modifications to some established methods, principally seeking to integrate soft-biometric information sources into an enhanced identity prediction system. Our assertion is that this type of framework may be particularly advantageous in many applications, such as those encountered in antiterrorist activities.
Our aim has been to integrate different sources of identification data in a very general way, and to design an implementation structure built around an approach that embodies intelligence and flexibility in the processes invoked in reaching a decision about the identity of a user. This approach has a number of advantages: it provides a significant enhancement to the reliability of decision making compared to other approaches but, importantly, it also provides long-term extendability, since it is relatively easy to see how multiple different sources of identity evidence could be incorporated within the general structure proposed in order to support further future enhancements.
The key to our approach is a strategy that is the opposite of the one that has been dominant in multisource processing hitherto. Here, instead of incorporating additional soft biometrics directly into a conventional decision-making framework, we have adopted a procedure that uses predictions from the acquired biometric data to support a broader-based network of soft decisions, which are then subject to scrutiny and negotiation in order to reach an informed overall decision. This approach appears to be well suited to the nature of the generalized biometric identification problem, and one that can be readily modified to incorporate additional or novel sources of information not currently available.
In the particular experiments reported here, we have chosen seven different classification algorithms as base elements for the fusion methods, two soft-biometric information sources (age and gender), and fingerprint and face databases to validate our proposed approach empirically. In this way, we have demonstrated that our proposed new technique is particularly promising, offering a new approach to the design of accurate biometric-based person identification systems.
ACKNOWLEDGMENT
The authors would like to thank Dr. A. Canuto for her contribution to the face-feature extraction.
REFERENCES
[1] M. Fairhurst and M. Abreu, Balancing performance factors in multisource biometric processing platforms, IET Signal Process., vol. 3, no. 4, pp. 342–351, Jul. 2009.
[2] M. Abreu and M. Fairhurst, Analyzing the impact of non-biometric information on multiclassifier processing for signature recognition applications, in Proc. IEEE 2nd Int. Conf. Biometrics: Theory, Appl. Syst., Washington, DC, 2008, pp. 1–6.
[3] A. Jain, A. Ross, and S. Pankanti, Biometrics: A tool for information security, IEEE Trans. Inf. Forensics Secur., vol. 1, no. 2, pp. 125–143, Jun. 2006.
[4] S. Prabhakar, S. Pankanti, and A. Jain, Biometric recognition: Security and privacy concerns, IEEE Secur. Privacy, vol. 1, no. 2, pp. 33–42, Mar./Apr. 2003.
[5] A. Ross, J. Shah, and A. Jain, Towards reconstructing fingerprints from minutiae points, in Proc. SPIE Conf. Biometric Technol. Hum. Identification II, 2005, pp. 68–80.
[6] O. Boumbarov, S. Sokolov, and G. Gluhchev, Combined face recognition using wavelet packets and radial basis function neural network, in Proc. Int. Conf. Comput. Syst. Technol., New York, NY, 2007, pp. 1–7.
[7] I. Mpiperis, S. Malasiotis, and M. Strintzis, 3D face recognition by point signatures and iso-contours, in Proc. Fourth IASTED Int. Conf., Anaheim, CA, 2007, pp. 328–332.
[8] J. Fierrez-Aguilar, D. Garcia-Romero, J. Ortega-Garcia, and J. Gonzalez-Rodriguez, Exploiting general knowledge in user-dependent fusion strategies for multimodal biometric verification, in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., vol. 5, May 2004, pp. 617–620.
[9] J. Ortega-Garcia, F. Alonso-Fernandez, J. Fierrez-Aguilar, C. Garcia-Mateo, S. Salicetti, L. Allano, B. Ly-Van, and B. Dorizzi, Software tool and acquisition equipment recommendations for the three scenarios considered, Universidad Politecnica de Madrid, Tech. Rep. D6.2.1, Contract No. IST-2002-507634, Jun. 2006.
[10] A. Ross and A. Jain, Multimodal biometrics: An overview, in Proc. 12th European Signal Process. Conf., 2004, pp. 1221–1224.
[11] V. Thomas, N. Chawla, K. Bowyer, and P. Flynn, Learning to predict gender from iris images, in Proc. 1st IEEE Int. Conf. Biometrics: Theory, Appl., Syst., Sep. 2007, pp. 1–5.
[12] M. C. Fairhurst and M. C. C. Abreu, An investigation of predictive profiling from handwritten signature data, in Proc. 10th Int. Conf. Document Anal. Recognit., Barcelona, Spain, 2009, pp. 1305–1309.
[13] L. Kuncheva, Combining Pattern Classifiers: Methods and Algorithms. New York: Wiley-Interscience, 2004.
[14] J. Kittler and F. M. Alkoot, Sum versus vote fusion in multiple classifier systems, IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 1, pp. 110–115, Jan. 2003.
[15] M. Wooldridge, An Introduction to Multi-Agent Systems. New York: Wiley, 2002.
[16] M. Abreu, A. Canuto, and L. Santana, A comparative analysis of negotiation methods for a multi-neural agent system, in Proc. Fifth Int. Conf. Hybrid Intell. Syst., Washington, DC, 2005, pp. 451–456.
[17] A. Jain, S. Dass, and K. Nandakumar, Can soft biometric traits assist user recognition?, in Proc. 1st Int. Conf. Biometric Authentication, 2004, pp. 561–572.
[18] A. Jain, S. Dass, and K. Nandakumar, Soft biometric traits for personal recognition systems, in Proc. 1st Int. Conf. Biometric Authentication, 2004, pp. 731–738.
[19] A. Jain, K. Nandakumar, X. Lu, and U. Park, Integrating faces, fingerprints, and soft biometric traits for user recognition, in Proc. ECCV Workshop BioAW, 2004, pp. 259–269.
[20] R. Zewail, A. Elsafi, M. Saeb, and N. Hamdy, Soft and hard biometrics fusion for improved identity verification, in Proc. 47th Midwest Symp. Circuits Syst., Jul. 2004, vol. 1, pp. 225–228.
[21] H. Ailisto, E. Vildjiounaite, M. Lindholm, S. Makela, and J. Peltola, Soft biometrics: Combining body weight and fat measurements with fingerprint biometrics, Pattern Recognit. Lett., vol. 27, no. 5, pp. 325–334, 2006.
[22] Neurotechnologija, VeriFinger fingerprint identification SDK, 2008.
[23] N. Khan, M. Javed, N. Khattak, and U. Chang, Optimization of core point detection in fingerprints, in Proc. 9th Biennial Conf. Australian Pattern Recognit. Soc. Digit. Image Comput. Tech. Appl., Washington, DC, 2007, pp. 260–266.
[24] K. Pearson, On lines and planes of closest fit to systems of points in space, Philosoph. Mag., vol. 2, no. 6, pp. 559–572, 1901.
[25] S. Haykin, Neural networks: A comprehensive foundation, Knowl. Eng. Rev., vol. 13, no. 4, pp. 409–412, 1999.
[26] A. Canuto, Combining neural networks and fuzzy logic for applications in character recognition, Ph.D. dissertation, Dept. Electron., Univ. Kent, Canterbury, U.K., May 2001.
[27] M. D. Buhmann, Radial Basis Functions. New York, NY: Cambridge Univ. Press, 2003.
[28] J. Furnkranz and G. Widmer, Incremental reduced error pruning, in Proc. 11th Int. Conf. Machine Learning, New Brunswick, NJ, 1994, pp. 70–77.
[29] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, U.K.: Cambridge Univ. Press, 2000.
[30] J. Quinlan, C4.5: Programs for Machine Learning. San Francisco, CA:
Morgan Kaufmann, 1993.
[31] S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu, An optimal algorithm for approximate nearest neighbor searching in fixed dimensions, J. ACM, vol. 45, no. 6, pp. 891–923, 1998.
[32] T. Mitchell, Machine Learning. New York: McGraw-Hill, 1997.


Marjory Cristiany Da Costa Abreu graduated in computer science and received the M.Sc. degree in systems and computing from the Federal University of Rio Grande do Norte, Natal, Brazil, in 2004 and 2006, respectively. She is currently working toward the Ph.D. degree in the Department of Electronics, University of Kent, Canterbury, U.K.
Since 2001, she has been involved with computer science, generally in artificial intelligence. Her current research interests include machine learning, bioinformatics, intelligent agents, privacy and security, and applied biometrics.


Michael Fairhurst received the B.Sc. and Ph.D. degrees in 1969 and 1972, respectively.
He is currently with the Department of Electronics, University of Kent, Canterbury, U.K. He has authored or coauthored more than 350 papers in the scientific literature. His research interests include computational architectures and algorithms for image analysis and classification, and applications including handwritten text reading and document processing, medical image analysis, and, especially, security and biometrics.
Prof. Fairhurst has been a member of numerous conference, workshop, and
government-sponsored committees, and sits on the Editorial Boards of several
international journals. He is an Elected Fellow of the International Association
for Pattern Recognition in recognition of his contributions to the field.
