
CHAPTER 1


1. INTRODUCTION
Local facial features have played an important role in forensic applications for matching face images. These features include any salient skin region that appears on the face; scars, moles, and freckles are representative examples [2]. The use of local features has become more popular due to the development of higher resolution sensors, the increase in face image database sizes, and improvements in image processing and computer vision algorithms. Local features provide a unique capability to investigate, annotate and exploit face images in forensic applications by improving both the accuracy and the speed of face recognition systems. This information is also necessary for forensic experts who give testimony in courts of law, where they are expected to conclusively identify suspects [3]. Along with facial marks, demographic information (i.e., gender and ethnicity) can also be considered as ancillary information that is useful in matching face images.

Soft biometric traits are defined as characteristics that provide some information about the individual, but lack the distinctiveness and permanence to sufficiently differentiate any two individuals [4]. The use of soft biometric traits is expected to improve the face recognition performance when appropriately combined with a face matcher. On the other hand, when face images are occluded or partially damaged, soft biometric traits can be considered as an alternative means of face matching or retrieval. The gender and ethnicity of a person typically do not change over a lifetime, so they can be used to filter the database and narrow down the candidate face images. While some facial marks are not permanent, most of them appear to be temporally invariant, which makes them useful for face matching and retrieval. When face images are occluded or severely off-frontal, as is often the case in surveillance videos, the soft biometric traits may be the only reliable evidence for narrowing down the candidate face images. Some examples of facial marks are shown in Fig. 1.1. Most current photo based identifications in law enforcement units and related security organizations involve a manual verification stage. The identification task is performed by a victim, an untrained individual, or a trained forensics officer, and can involve verbal descriptions and hand drawn sketches [5]. Law enforcement agencies have improved the existing practices and established new techniques for gathering and exploiting evidence using information technology.

A large portion of this evidence is in the form of digital media, especially digital imagery. Spaun [2], [3] describes the facial examination process carried out in law enforcement agencies. One of the major examination steps involves identifying class and individual characteristics.

Fig 1.1 Examples of facial marks

The class characteristics include overall facial shape, hair color, presence of facial hair, shape of the nose, presence of freckles, etc. The individual characteristics include the number and location of freckles, scars, tattoos, chipped teeth, lip creases, the number and location of wrinkles, etc. in a face. While these examinations are currently performed manually by forensic experts, an automatic procedure will not only reduce the time consuming and subjective manual process, but is also likely to be more consistent and accurate. This manual examination procedure is referred to as ACE-V [7], which stands for Analyze, Compare, Evaluate, and Verify. However, there have been difficulties in standardizing this manual examination process, and the descriptions of local features are limited to only a small number of categories. Therefore, it is expected that computer aided automatic feature extraction and representation will help standardize the examination process and make it more efficient. The automatic extraction of facial marks and their representation in a face centered coordinate system will assist in a quantitative analysis. Inclusion of facial mark based matching will also enrich the description of the face matching evidence, from a mere match score to a combination of quantitative and descriptive evidence that can be used by forensic examiners in the courts. Of course, statistical studies on the distribution, frequency and stability of facial marks should precede any attempt to fully automate the matching process.

Conventional face recognition systems typically encode the face images by utilizing either local or global texture features. Local techniques first detect the individual components of the human face (i.e., eyes, nose, mouth, chin, ears) prior to encoding the textural content of each of these components (e.g., EBGM and LFA) [8], [9]. Global (or holistic) techniques, on the other hand, consider the entire face as a single entity during the encoding process (e.g., PCA, LDA, Laplacianfaces, etc.) [10], [11], [12]. However, these techniques do not explicitly utilize local marks (e.g., scars and moles) and usually expect the input to be a full face image.

The use and significance of soft biometric traits can be summarized into four major categories: (a) supplementing existing facial matchers to improve the identification accuracy, (b) enabling fast face image retrieval, (c) enabling matching or retrieval with partial or off-frontal face images and (d) providing more descriptive evidence about the similarity or dissimilarity between face images, which can be used in the courts. Since facial marks capture the individual characteristics embedded in a face image that are not explicitly utilized in conventional face recognition methods, a proper combination of a face matcher and a mark based matcher is expected to provide improved recognition accuracy. Fig. 1.2 shows a pair of images that could not be successfully matched by a state of the art face matcher; the presence of prominent facial marks in these two images strongly supports the fact that they are of the same subject. The mark based matcher helps in indexing each face image based on its facial marks (e.g., moles or scars). These indices enable fast retrieval and also the use of textual or keyword based queries (Fig. 1.3). Finally, marks can help in characterizing partial, occluded, or off-frontal face images, which assists in matching or retrieval tasks based on the partial face images often captured by surveillance cameras. Example face images that are occluded are shown in Fig. 1.4. These images cannot be automatically processed using commercial face recognition engines due to the occlusion of the eyes; state of the art matchers require eye locations to align two face images. Retrieval based on a facial mark, e.g., on the right side of the cheek, is possible for frontal, partial, and non-frontal face images.

Fig 1.2 Two face images of the same person from the FERET database

Fig 1.3 Examples of textual query and a schematic of face region segmentation.

Fig 1.4 Example images with facial occlusion

Face images with such distinctive marks can be more efficiently matched or retrieved. Early studies on facial features were conducted by Farkas and Munro [13] and Ekman [14]. Similar studies are also found in [15]. Some studies on facial features have been reported in the medical literature [16]. It has been acknowledged that facial landmarks are largely stable in position and appearance throughout life, since they are associated with the musculoskeletal system or other body organs. There have been only a few studies reported in the literature on utilizing facial marks. Lin et al. [17] first used the SIFT operator [18] to extract facial irregularities and then fused them with a global face matcher. Facial irregularities and skin texture were used as additional means of distinctiveness to achieve performance improvement. However, the individual types of facial marks were not explicitly defined; hence, their approach is not suitable for face database indexing. Pierrard et al. [19] proposed a method to extract moles using a normalized cross correlation method and a morphable model. They claimed that their method is pose and lighting invariant since it uses a 3D morphable model. However, besides moles, they did not consider other types of facial marks. Pamudurthy et al. [20] used the distribution of fine-scale skin mark patterns. The correlation coefficients calculated between registered skin patches are aggregated to obtain a matching score. Lee et al. [21] introduced scars, marks, and tattoos (SMT) in their tattoo image retrieval system. While tattoos can exist on any body part and are more descriptive, we are interested in marks appearing exclusively on the face, which typically show simple morphologies.

Kumar et al. [22] proposed to use a set of descriptive features (e.g., male, chubby, flash, etc.), but facial marks were not defined or considered. There have been a few approaches that use face annotation to efficiently manage digital photographs [23], [24], [25], [26], [27]. However, these methods or systems utilize either the presence or the identity of faces in photographs; local facial features are not used. Therefore, they are more suitable for multimedia applications than for the forensic applications of interest here. We use demographic information (e.g., gender and ethnicity) and facial marks as the soft biometric traits. We label the gender of each face image with one of three categories (male, female and unknown) and the ethnicity with one of three categories (Caucasian, African-American and unknown). We currently label gender and ethnicity manually, as practiced in law enforcement. We propose a fully automatic facial mark extraction system using global and local texture analysis methods. We first apply the Active Appearance Model (AAM) to detect the primary facial features such as eye brows, eyes, nose, and mouth; these primary facial features are subtracted from the face image. Then, the local irregularities are detected using the Laplacian-of-Gaussian (LoG) operator. The detected facial marks are used to calculate the facial similarity based on their morphology and color along with their location. The mark based matcher can be combined with a commercial face matcher in order to enhance the face matching accuracy, or used by itself when the commercial face matcher fails due to occlusion or unfavorable viewpoints. Our method differs significantly from the previous studies in the following aspects: (a) we use a number of soft biometric traits (i.e., gender, ethnicity, facial marks), (b) we extract all types of facial marks that are locally salient, (c) we focus on detecting facial marks and characterize each mark based on its morphology and color, and (d) we evaluate the performance using a state of the art face matcher on a large gallery with 10,213 subjects. The proposed soft biometric matching system will be especially useful to forensics and law enforcement agencies because it will (a) supplement existing facial matchers to improve the identification accuracy, (b) enable fast face image retrieval based on high level semantic queries, (c) enable matching or retrieval from partial or off-frontal face images and (d) help in discriminating identical twins. The rest of this report is organized as follows:

Section II provides statistics on the distribution and frequency of facial marks in ten different categories, Section III describes our mark detection process, Section IV presents the mark based matching scheme, and Section V provides experimental results and discussions. Section VI summarizes our contributions and lists some directions for future work.

CHAPTER 2


2. EXISTING SYSTEM

The biometric recognition system is divided into two subsystems. One subsystem, called the primary biometric system, is based on traditional biometric identifiers such as fingerprint, face and hand geometry. The second subsystem, referred to as the secondary biometric system, is based on soft biometric traits such as age, gender, and height. Such a personal identification system makes use of both primary and soft biometric measurements. Let $\omega_1, \omega_2, \ldots, \omega_n$ represent the $n$ users enrolled in the database. Let $\mathbf{x}$ be the feature vector corresponding to the primary biometric. Without loss of generality, let us assume that the output of the primary biometric system is of the form $P(\omega_i \mid \mathbf{x})$, $i = 1, 2, \ldots, n$, where $P(\omega_i \mid \mathbf{x})$ is the probability that the test user is $\omega_i$ given the feature vector $\mathbf{x}$. If the output of the primary biometric system is a matching score, it is converted into an a posteriori probability using an appropriate transformation. For the secondary biometric system, we can consider $P(\omega_i \mid \mathbf{x})$ as the prior probability of the test user being user $\omega_i$.

Let

$$\mathbf{y} = [\,y_1, \ldots, y_k, y_{k+1}, \ldots, y_m\,]$$

be the soft biometric feature vector, where $y_1$ through $y_k$ are continuous variables and $y_{k+1}$ through $y_m$ are discrete variables. The updated probability of user $\omega_i$ given the primary biometric feature vector $\mathbf{x}$ and the soft biometric feature vector $\mathbf{y}$, i.e., $P(\omega_i \mid \mathbf{x}, \mathbf{y})$, can be calculated using the Bayes rule as

$$P(\omega_i \mid \mathbf{x}, \mathbf{y}) = \frac{p(\mathbf{y} \mid \omega_i)\, P(\omega_i \mid \mathbf{x})}{\sum_{j=1}^{n} p(\mathbf{y} \mid \omega_j)\, P(\omega_j \mid \mathbf{x})} \tag{1}$$

If we assume that the soft biometric variables are independent, equation (1) can be rewritten as

$$P(\omega_i \mid \mathbf{x}, \mathbf{y}) = \frac{\prod_{j=1}^{k} p(y_j \mid \omega_i) \prod_{j=k+1}^{m} P(y_j \mid \omega_i)\; P(\omega_i \mid \mathbf{x})}{\sum_{r=1}^{n} \prod_{j=1}^{k} p(y_j \mid \omega_r) \prod_{j=k+1}^{m} P(y_j \mid \omega_r)\; P(\omega_r \mid \mathbf{x})} \tag{2}$$

In equation (2), $p(y_j \mid \omega_i)$, $j = 1, \ldots, k$, represents the conditional probability of the continuous variable $y_j$ given user $\omega_i$. This can be evaluated from the conditional density of the variable $y_j$ for user $\omega_i$. On the other hand, the discrete probabilities $P(y_j \mid \omega_i)$, $j = k+1, k+2, \ldots, m$, represent the probability that user $\omega_i$ is assigned to the class $y_j$. This is a measure of the accuracy of the classification module in assigning user $\omega_i$ to one of the distinct classes based on biometric indicator $y_j$. In order to simplify the problem, let us assume that the classification module performs equally well on all the users, so that the accuracy of the module is independent of the user. The logarithm of $P(\omega_i \mid \mathbf{x}, \mathbf{y})$ in equation (2) can then be expressed as

$$\log P(\omega_i \mid \mathbf{x}, \mathbf{y}) = \log P(\omega_i \mid \mathbf{x}) + \sum_{j=1}^{k} \log p(y_j \mid \omega_i) + \sum_{j=k+1}^{m} \log P(y_j \mid \omega_i) + \kappa \tag{3}$$

where $\kappa$ is a normalization constant that does not depend on $i$.

This formulation has two main drawbacks. The first problem is that all the $m$ soft biometric variables are weighed equally. In practice, some soft biometric variables may contain more information than others. For example, the height of a person may give more information about that person than gender. Therefore, we must introduce a weighting scheme for the soft biometric traits based on an index of distinctiveness and permanence, i.e., traits that have smaller variability and larger distinguishing capability will be given more weight in the computation of the final matching probabilities. Another potential pitfall is that an impostor could easily spoof the system if the soft characteristics had an equal say in the decision as the primary biometric trait. It is relatively easy to modify or hide one's soft biometric attributes by applying cosmetics and wearing accessories (like a mask or shoes with high heels). To avoid this problem, we assign smaller weights to the soft biometric traits than to the primary biometric traits. This differential weighting also has another implicit advantage. Even if a soft biometric trait of a user is measured incorrectly (e.g., a male user is identified as a female), there is only a small reduction in that user's a posteriori probability, and the user is not immediately rejected. In this case, if the primary biometric produces a good match, the user may still be accepted. Only if several soft biometric traits do not match is there a significant reduction in the a posteriori probability, so that the user could possibly be rejected. If the devices that measure the soft biometric traits are reasonably accurate, such a situation has a very low probability of occurrence. The introduction of the weighting scheme results in the following discriminant function for user $\omega_i$:

$$g_i(\mathbf{x}, \mathbf{y}) = a_0 \log P(\omega_i \mid \mathbf{x}) + \sum_{j=1}^{k} a_j \log p(y_j \mid \omega_i) + \sum_{j=k+1}^{m} a_j \log P(y_j \mid \omega_i) \tag{4}$$


where $\sum_{j=0}^{m} a_j = 1$ and $a_0 \gg a_j$, $j = 1, 2, \ldots, m$. Note that $a_j$, $j = 1, 2, \ldots, m$, are the weights assigned to the soft biometric traits and $a_0$ is the weight assigned to the primary biometric identifier. The weights $a_j$, $j = 1, 2, \ldots, m$, must be made small to prevent the soft biometric traits from dominating the primary biometric. On the other hand, they must be large enough so that the information content of the soft biometric traits is not lost. Hence, an optimum weighting scheme is required to maximize the performance gain.
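To make the weighted fusion concrete, the following is a minimal MATLAB sketch of evaluating the discriminant function of equation (4) for all enrolled users. It assumes the primary matcher already outputs posterior probabilities and that the soft biometric likelihoods are available as matrices; all variable and function names are illustrative, not part of any specific toolbox.

% Sketch of the weighted discriminant of equation (4).
% postPrimary : n x 1 vector of P(w_i | x) from the primary matcher
% softLik     : n x m matrix whose (i,j) entry is p(y_j | w_i)
% a           : 1 x (m+1) weight vector [a0, a1, ..., am], sum(a) = 1, a0 >> aj
function [bestUser, g] = fuseSoftBiometrics(postPrimary, softLik, a)
    tiny = 1e-12;                           % guard against log(0)
    g = a(1) * log(postPrimary + tiny) ...  % primary biometric term
      + log(softLik + tiny) * a(2:end)';    % weighted soft biometric terms
    [gmax, bestUser] = max(g);              % identity with the largest g_i
end

With, for example, a = [0.8, 0.1, 0.1], an incorrectly measured soft trait changes g_i only slightly, matching the robustness argument above.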



CHAPTER 3

3. PROPOSED SYSTEM
Facial marks appear as salient localized regions on the face image. Therefore, a blob detector based on the Difference of Gaussian (DoG) or Laplacian of Gaussian (LoG) operator can be used to detect the marks. However, a direct application of a blob detector on a face image will generate a large number of false positives due to the presence of primary facial features (e.g., eyes, eye brows, nose and mouth). Therefore, first localizing the primary facial features and then extracting facial marks in the rest of the face region is necessary for successful mark detection. We describe i) primary facial feature detection, ii) mapping to the mean shape and iii) mask construction in the following subsections.

3.1. STATISTICS OF FACIAL MARKS


To understand the role of facial marks in face recognition and retrieval, we analyzed the location and characteristics of facial marks in a forensic mugshot database with 1,225 images of 671 subjects by manually assigning ground truth labels to these marks. Each facial mark is represented with an enclosing rectangular bounding box. The ground truth labeling process is performed using the following ten categories provided by a forensics expert in [3]:

Freckle: small spots from concentrated melanin
Mole: growth on the skin (brown or black)
Scar: mark left from a cut or wound
Pockmark: crater-shaped scar
Acne: red region caused by a pimple or zit
Whitening: skin region that appears white
Dark skin: skin region that appears dark
Abrasion: wound (includes clots)
Wrinkle: fold, ridge or crease in the skin
Other: all other types of marks


Fig 3.1 Mark distribution

A freckle is a single dark spot or a set of dark spots. When there is a dense set of spots in a small region, we label each of the prominent dark spots rather than labeling the entire set with a single bounding box. A mole is an extruded region with typically dark skin color. In a 2D facial image, it is difficult to distinguish between a spot and a mole; a mole typically appears larger in size and darker in color than a spot. A scar is a discolored region of skin resulting from a cut or injury. A pockmark is a crater-shaped scar. Acne is a red region caused by pimples or zits and lasts from a few days to several months.

Whitening represents a skin region that appears brighter than the surrounding region; it is observed more often in dark skinned people. When a larger region of skin appears dark, it is labeled as dark skin. While an abrasion is not temporally invariant, it can later be related to the scars that are possibly caused by abrasions. We consider only large wrinkles and ignore small wrinkles, especially around the eyes and mouth. We ignore beards and facial hair in constructing the ground truth. All other facial marks that do not belong to the nine groups mentioned above are assigned to the other group.


Fig 3.2 Schematic of the mapping from the semantic mark categories to the morphology and color based categories

The statistics of mark location and frequency are shown in Figs. 3.1 and 3.3. Freckles are observed most often, followed by wrinkles and acne. The average number of marks in our database is about 7 per subject. About 97% of the subjects in our database show at least one mark, which suggests that facial marks can indeed be useful for recognition. Our earlier preliminary work [1] showed that automatically detected facial marks, when combined with a commercial matcher, improve the face recognition accuracy.

Fig 3.3 Mark frequency


Useful information about facial marks that should be utilized during matching is the class label of each mark. However, automatic classification of each mark into the ten categories defined above is extremely difficult because of the ambiguities between those categories. Therefore, to simplify the classification problem, we defined a small number of classes that are based on the morphology and color of the mark. We use three different morphologies (i.e., point, linear, irregular) and two color characteristics (i.e., dark or bright compared to the surrounding region). Fig. 3.2 shows how nine of the ten semantic categories can be mapped into the morphology and color based categories. Example marks of each type of morphology and color are shown in Figs. 3.4 and 3.5.

Fig 3.4 Example Marks in Semantic Categories

Fig 3.5 Example marks with Morphology and color based labels


3.2. FACIAL MARK DETECTION

3.2.1. Primary Facial Feature Detection


Fig 3.6 Schematic of automatic facial mark extraction process.

We use Support Vector Machines (SVMs) to automatically detect 133 landmarks that delineate the primary facial features: eyes, eye brows, nose, mouth, and face boundary. These primary facial features are disregarded in the subsequent facial mark detection process. Two tasks need to be performed for head pose estimation: constructing the pose estimators from face images with known pose information, and applying the estimators to a new face image. We adopt SVM regression to construct two pose estimators, one for tilt (elevation) and the other for yaw (azimuth). The input to the pose estimators is the PCA feature vector of the face image; the dimensionality of the PCA vectors can be reasonably small in our experiments (20, for example). The output is the pose angle in tilt or yaw. The SVM regression problem can be solved by maximizing

$$W(\alpha, \alpha^*) = -\frac{1}{2} \sum_{i,j=1}^{N} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)\, k(x_i, x_j) - \varepsilon \sum_{i=1}^{N} (\alpha_i + \alpha_i^*) + \sum_{i=1}^{N} y_i (\alpha_i - \alpha_i^*)$$

subject to $\sum_{i=1}^{N} (\alpha_i - \alpha_i^*) = 0$ and $0 \le \alpha_i, \alpha_i^* \le C$,

where $x$ is the PCA feature vector of a face image, $k$ is the kernel function used in the SVM pose estimator, $y_i$ is the ground-truth pose angle in yaw or tilt of pattern $x_i$, $C$ is the upper bound of the Lagrange multipliers $\alpha_i$ and $\alpha_i^*$, and $\varepsilon$ is the tolerance coefficient. More details about SVM regression can be found in Ref. [49]. Two pose estimators of the form $f(x) = \sum_i (\alpha_i - \alpha_i^*)\, k(x_i, x) + b$, $f_t$ for tilt and $f_y$ for yaw, are constructed. The Quadratic Programming problem is solved by a decomposition algorithm based on the LOQO algorithm [48]. The decomposition algorithm can be briefly described as follows: at each iteration, only a small set of training patterns is processed by the LOQO algorithm. The support vectors (SVs) and the patterns with the largest error from the previous iteration have higher priority for selection. The algorithm is stopped when no significant improvement is achieved. Compared with other learning methods, the SVM-based method has distinguishing properties: (1) no model structure design is needed, and the final decision function can be expressed by a set of important examples (SVs); (2) by introducing a kernel function, the decision function is implicitly defined by a linear combination of training examples in a high-dimensional feature space; (3) the problem can be solved as a Quadratic Programming problem, which is guaranteed to converge to the global optimum for the given training set.
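For illustration, pose estimators of this form can be trained with the fitrsvm function from the Statistics and Machine Learning Toolbox (available in MATLAB releases newer than the 7.8 listed in Chapter 5); the feature matrix, angle vectors, and parameter values below are assumptions for the sketch, not values from our experiments.

% Sketch: epsilon-SVR pose estimators for tilt and yaw.
% pcaFeats : N x d matrix of PCA coefficients of the training faces
% tiltDeg, yawDeg : N x 1 ground-truth pose angles (degrees)
ft = fitrsvm(pcaFeats, tiltDeg, 'KernelFunction', 'gaussian', ...
             'Epsilon', 2, 'BoxConstraint', 10);    % tilt estimator f_t
fy = fitrsvm(pcaFeats, yawDeg, 'KernelFunction', 'gaussian', ...
             'Epsilon', 2, 'BoxConstraint', 10);    % yaw estimator f_y

% Applying the estimators to the PCA vector of a new face image:
poseEstimate = [predict(ft, newPcaVec), predict(fy, newPcaVec)];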


3.2.2. Mapping to Mean Shape


Using the landmarks detected by the AAM, we map each face image to the mean shape to simplify mark detection, matching and retrieval. Let $S_i$, $i = 1, \ldots, N$, represent the shape of each of the $N$ face images in the database (gallery) based on the 133 landmarks. Then, the mean shape is simply defined as $\bar{S} = \frac{1}{N} \sum_{i=1}^{N} S_i$. Each face image, $S_i$, is mapped to the mean shape, $\bar{S}$, by using the Barycentric coordinate based texture mapping process. First, both $S_i$ and $\bar{S}$ are subdivided into a set of triangles. Given a triangle $T$ in $S_i$, its corresponding triangle $T'$ is found in $\bar{S}$. Let $r_1$, $r_2$ and $r_3$ ($r'_1$, $r'_2$ and $r'_3$) be the three vertices of $T$ ($T'$). Then, any point $p$ inside $T$ is expressed as $p = \alpha r_1 + \beta r_2 + \gamma r_3$, and the corresponding point $p'$ in $T'$ is similarly expressed as $p' = \alpha r'_1 + \beta r'_2 + \gamma r'_3$, where $\alpha + \beta + \gamma = 1$. This way, the pixel value at $p$ is mapped to $p'$. Fig. 3.7 shows the schematic of the Barycentric mapping process. By repeating this mapping process for all the points inside all triangles, the texture in $S_i$ is mapped to $\bar{S}$.

Fig 3.7 Schematic of texture mapping process using the triangular Barycentric coordinate system
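A minimal sketch of the mapping for one triangle pair follows; src and dst hold the vertices of T and T' as columns, and all names are illustrative.

% Barycentric mapping of a point p inside triangle T to triangle T'.
% src, dst : 2x3 matrices with vertices r1,r2,r3 and r1',r2',r3' as columns
% p        : 2x1 point inside T
function pPrime = mapBarycentric(src, dst, p)
    % Solve [r1 r2 r3; 1 1 1] * [alpha; beta; gamma] = [p; 1];
    % the last row enforces alpha + beta + gamma = 1.
    abg = [src; ones(1,3)] \ [p; 1];
    pPrime = dst * abg;   % same Barycentric coordinates applied in T'
end

Looping this over the pixels inside each triangle (found, e.g., with inpolygon) warps the texture of S_i into the mean shape.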

3.2.3. Mask Construction


We construct a mask from the mean shape, $\bar{S}$, to suppress false positives due to primary facial features in the blob detection process. The blob detection operator is applied to the face image mapped into the mean shape, and a mask constructed from $\bar{S}$ is used to suppress blob detection on the primary facial features. Let the mask constructed from the mean shape be denoted as $M_g$, a generic mask. However, the generic mask does not cover user specific facial features, such as a beard or small wrinkles around the eyes or mouth, that are likely to increase the false positives. Therefore, we also build a user specific mask, $M_s$, using the edge image. The edge image is obtained by using the conventional Sobel operator [32]; an example edge image is shown in Fig. 3.12(b). The user specific mask $M_s$, constructed as a sum of $M_g$ and the edges that are connected to $M_g$, helps in removing most of the false positives appearing around the beard or small wrinkles around the eyes or mouth.
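A sketch of the mask construction follows, assuming img is the face image warped to the mean shape and Mg is the precomputed generic mask as a logical image; it uses edge and imreconstruct from the Image Processing Toolbox.

% User specific mask: generic mask plus edges connected to it.
E = edge(img, 'sobel');                 % Sobel edge image (cf. Fig 3.12(b))
% Morphological reconstruction keeps exactly those edge components that
% touch the generic mask Mg:
connectedEdges = imreconstruct(Mg & E, E);
Ms = Mg | connectedEdges;               % user specific mask M_s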

Fig 3.8 Schematic of the mean face construction.

3.2.4 Blob Detection


Facial marks mostly appear as isolated blobs. Therefore, we use the well-known blob detector, the LoG operator, to detect facial mark candidates. We used LoG kernels with three different sizes (3×3, 7×7 and 9×9) and five different values of $\sigma$ ($\sqrt{2}$, $(\sqrt{2})^2$, $(\sqrt{2})^3$, $(\sqrt{2})^4$ and $(\sqrt{2})^5$). We chose the smallest LoG kernel (3×3 with $\sigma = \sqrt{2}$), which showed the best identification accuracy in our experiments. The LoG operator is usually applied at multiple scales to detect blobs of different sizes; however, we used a single scale LoG filter followed by a morphological operator (e.g., closing) to reduce the computation time. The LoG filtered image, with the user specific mask subtracted, undergoes a binarization process with a series of threshold values $t_i$, $i = 1, \ldots, K$, in decreasing order. The threshold values are successively applied until the resulting number of connected components is larger than a preset value ($t_n$). A brightness constraint ($t_{c0}$) is also applied to each of the connected components to suppress false positives due to weak blob responses. When the user specific mask does not effectively remove sources of false positives, true marks with lower contrast will be missed in the mark detection process. The overall procedure of facial mark detection is enumerated below; a code sketch follows the list.

1) Facial landmark detection (SVM)
2) Mapping to the mean shape
3) Construction of the user specific mask $M_s$ ($M_g$ is constructed only once and shared by all images)
4) Application of the LoG operator
5) Iterative thresholding: binarize and detect blobs $m_j$ such that $m_j$ does not overlap with $M_s$ and $m_j$ satisfies the brightness constraint $t_{c0}$; stop when the total number of blobs reaches $t_n$
6) Representation of each mark with a bounding box
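The following sketch covers steps 4 and 5, assuming img is the grayscale face image in the mean shape space and Ms is the user specific mask; the threshold schedule and parameter values are illustrative, and the brightness constraint is omitted for brevity.

% Steps 4-5: LoG filtering and iterative thresholding for blob detection.
h    = fspecial('log', [3 3], sqrt(2));       % 3x3 LoG kernel, sigma = sqrt(2)
resp = imfilter(double(img), h, 'replicate'); % LoG response
resp(Ms) = 0;                                 % suppress primary facial features

tn = 10;                                      % target number of blob candidates
for t = linspace(max(resp(:)), 0, 20)         % decreasing threshold series
    bw = imclose(resp > t, strel('disk', 1)); % binarize + morphological closing
    cc = bwconncomp(bw);                      % connected components = blobs
    if cc.NumObjects >= tn
        break;
    end
end
boxes = regionprops(cc, 'BoundingBox');       % step 6: one bounding box per mark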

Fig 3.9 Schematic of the iterative thresholding process for blob detection.

The overall mark detection process is shown in Fig. 3.6. Example ground truth and automatically detected marks are shown in Fig. 3.10. Mark detection accuracy is evaluated in terms of the precision-recall curve shown in Fig. 3.11. The precision and recall values of the mark detector for a range of brightness contrast thresholds $t_{c0}$ vary from (30%, 60%) to (54%, 16%) on the mugshot database. These results indicate that the automatic mark detection algorithm shows reasonable performance. In Section V, we evaluate whether this automatic mark detection method helps to improve the face recognition accuracy. The effect of the generic mask and the user specific mask on mark detection is shown in Fig. 3.12. The user specific mask helps in reducing both false positives and false negatives by disregarding small wrinkles and hairs.


Fig 3.10 Ground truth and automatically detected facial marks for four images

3.2.5. Blob Classification


For each detected blob (i.e., mark), we assign a bounding box tightly enclosing the blob. We then classify a mark in a hierarchical fashion: linear vs. all, followed by circular (point) vs. irregular. For the linearity classification of a blob, we use the two eigenvalues $\lambda_1$ and $\lambda_2$ obtained from the eigen decomposition of the $x$ and $y$ coordinates of the blob pixels. When $\lambda_1$ is significantly larger than $\lambda_2$, the mark is declared a linear blob. For the circularity detection, we calculate the second moment of the blob pixels, $M_2$. A circle, $R_{M_2}$, with radius $M_2$ will enclose most of the blob pixels if they are circularly distributed; therefore, a decision can be made based on the ratio of the number of pixels within and outside of $R_{M_2}$. The color of the blob is decided based on the ratio of the mean intensity of the pixels inside and outside of the blob ($e_{in}/e_{out}$). The classification process can be summarized as below.

i) Morphology classification

ii) Color classification

We have set $t_{l3} = 1$, since the color test simply compares the mean intensity of the blob with that of its surrounding region. We tried a range of values for $t_{l1}$ and $t_{l2}$ and selected the set that showed the best identification accuracy, recall and precision in the blob detection and classification process ($t_{l1} = 10$ and $t_{l2} = 0.6$). The schematic of the blob classification process is shown in Fig. 3.14. Fig. 3.13 shows five example images with ground truth and automatically extracted facial marks and their classes. A code sketch of the classification rules is given below.
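In the sketch, px is an n x 2 matrix of blob pixel coordinates and ein, eout are the mean intensities inside and outside the blob; the exact decision rules are an assumption reconstructed from the description above.

% Hierarchical blob classification: linear vs. (point vs. irregular) + color.
tl1 = 10; tl2 = 0.6; tl3 = 1;                    % thresholds from the text
lambda = sort(eig(cov(px)), 'descend');          % eigenvalues of pixel spread
if lambda(1) / lambda(2) > tl1
    morph = 'linear';                            % strongly elongated blob
else
    c  = mean(px, 1);                            % blob centroid
    d  = sqrt(sum(bsxfun(@minus, px, c).^2, 2)); % distances to the centroid
    M2 = sqrt(mean(d.^2));                       % second moment radius
    if mean(d <= M2) > tl2                       % fraction of pixels inside R_M2
        morph = 'point';                         % circular blob
    else
        morph = 'irregular';
    end
end
if ein / eout < tl3                              % darker than the surroundings
    colorLabel = 'dark';
else
    colorLabel = 'bright';
end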

3.3 FACIAL MARK BASED MATCHING


Given the demographic information and facial marks, we encode them into a 50-bin histogram. The first 48 bins represent the distribution, frequency, morphology and color of the facial marks.

The last two bins represent the gender and ethnicity. To encode the facial marks, the face image in the mean shape space is subdivided into eight different regions as shown in Fig. 1.3. Each mark is encoded by a six digit binary value representing its morphology and color (the six labels defined in Sec. 3.1). When there is more than one mark in the same region, a bit by bit summation is performed. The six bin values are concatenated for the eight different regions in the order shown in Fig. 1.3 to generate the 48-bin histogram. The gender (ethnicity) bin takes one of three distinct values, representing male, female or unknown (Caucasian, African-American or unknown). The histogram intersection method is used to calculate the matching scores.

The soft biometric traits based matcher can be used to quickly retrieve candidate images from a large database. Since the soft biometric based matcher also generates a matching score, it can be combined with any face matcher to improve the overall face recognition accuracy. We use the weighted score-sum rule with min-max normalization for the matcher combination [33]. The weights are chosen empirically to obtain the best recognition accuracy; a sketch of the score computation is given below.
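In the sketch, hProbe and hGallery are the 50-bin soft biometric histograms, sFace is the raw score of a (hypothetical) commercial face matcher, and the normalization bounds and weight value are illustrative assumptions.

% Mark based matching score and weighted score-sum fusion.
sMark = sum(min(hProbe, hGallery));          % histogram intersection

% Min-max normalization of both scores to [0, 1]:
nMark = (sMark - markMin) / (markMax - markMin);
nFace = (sFace - faceMin) / (faceMax - faceMin);

w = 0.2;                                     % soft biometric weight (empirical)
sFused = (1 - w) * nFace + w * nMark;        % fused similarity score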


Fig 3.11 Precision-recall curve of the automatic mark detection method.

Fig 3.12 Effects of generic and user specific masks on facial mark detection.

Fig 3.13 Example images with ground truth and automatically detected facial marks with their class.

Fig 3.14 Schematic of the morphology and color based mark classification.



CHAPTER 4

4.1. MATLAB INTRODUCTION


MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, and Fortran. Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities. An additional package, Simulink, adds graphical multi-domain simulation and Model-Based Design for dynamic and embedded systems. In 2004, MATLAB had around one million users across industry and academia.[2] MATLAB users come from various backgrounds of engineering, science, and economics. MATLAB is widely used in academic and research institutions as well as industrial enterprises.

4.2. History
MATLAB was created in the late 1970s by Cleve Moler, the chairman of the computer science department at the University of New Mexico.[3] He designed it to give his students access to LINPACK and EISPACK without having to learn Fortran. It soon spread to other universities and found a strong audience within the applied mathematics community. Jack Little, an engineer, was exposed to it during a visit Moler made to Stanford University in 1983. Recognizing its commercial potential, he joined with Moler and Steve Bangert. They rewrote MATLAB in C and founded MathWorks in 1984 to continue its development. These rewritten libraries were known as JACKPAC.[4] In 2000, MATLAB was rewritten to use a newer set of libraries for matrix manipulation, LAPACK.[5]

27

MATLAB was first adopted by control design engineers, Little's specialty, but quickly spread to many other domains. It is now also used in education, in particular the teaching of linear algebra and numerical analysis, and is popular amongst scientists involved with image processing.

4.3. Syntax
The MATLAB application is built around the MATLAB language. The simplest way to execute MATLAB code is to type it in the Command Window, which is one of the elements of the MATLAB Desktop. When code is entered in the Command Window, MATLAB can be used as an interactive mathematical shell. Sequences of commands can be saved in a text file, typically using the MATLAB Editor, as a script or encapsulated into a function, extending the commands available.

4.4. Variables
Variables are defined with the assignment operator, =. MATLAB is a weakly, dynamically typed programming language. It is a weakly typed language because types are implicitly converted.[7] It is a dynamically typed language because variables can be assigned without declaring their type, except if they are to be treated as symbolic objects,[8] and their type can change. Values can come from constants, from computation involving values of other variables, or from the output of a function. For example:

>> x = 17
x =
    17

>> x = 'hat'
x =
hat

>> y = x + 0
y =
   104    97   116

>> x = [3*4, pi/2]
x =
   12.0000    1.5708

>> y = 3*sin(x)
y =
   -1.6097    3.0000

MATLAB has several functions for rounding fractional values to integers:

round(X): round to nearest integer, trailing 5 rounds to the nearest integer away from zero. For example, round(2.5) returns 3; round(-2.5) returns -3.

fix(X): round to the nearest integer toward zero (truncate). For example, fix(2.7) returns 2; fix(-2.7) returns -2.

floor(X): round to the nearest integer toward minus infinity (round to the nearest integer less than or equal to X). For example, floor(2.7) returns 2; floor(-2.3) returns -3.

ceil(X): round to the nearest integer toward positive infinity (round to the nearest integer greater than or equal to X). For example, ceil(2.3) returns 3; ceil(-2.7) returns -2.

4.5. Vectors and Matrices


MATLAB is a "Matrix Laboratory", and as such it provides many convenient ways for creating vectors, matrices, and multi-dimensional arrays. In the MATLAB vernacular, a vector refers to a one dimensional (1N or N1) matrix, commonly referred to as an array in other programming languages. A matrix generally refers to a 2-dimensional array, i.e. an mn array where m and n are greater than or equal to 1. Arrays with more than two dimensions are referred to as multidimensional arrays. MATLAB provides a simple way to define simple arrays using the syntax:

init:increment:terminator. For instance:

>> array = 1:2:9
array =
     1     3     5     7     9

defines a variable named array (or assigns a new value to an existing variable with the name array) which is an array consisting of the values 1, 3, 5, 7, and 9. That is, the array starts at 1 (the init value), increments from the previous value by 2 (the increment value), and stops once it reaches (or to avoid exceeding) 9 (the terminator value).

>> array = 1:3:9
array =
     1     4     7

The increment value can actually be left out of this syntax (along with one of the colons), to use a default value of 1.

>> ari = 1:5
ari =
     1     2     3     4     5

assigns to the variable named ari an array with the values 1, 2, 3, 4, and 5, since the default value of 1 is used as the incrementer. Indexing is one-based,[9] which is the usual convention for matrices in mathematics, although not for some programming languages. Matrices can be defined by separating the elements of a row with blank space or a comma and using a semicolon to terminate each row. The list of elements should be surrounded by square brackets: []. Parentheses () are used to access elements and subarrays (they are also used to denote a function argument list).

>> A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]
A =
    16     3     2    13
     5    10    11     8
     9     6     7    12
     4    15    14     1

>> A(2,3)
ans =
    11

Sets of indices can be specified by expressions such as 2:4, which evaluates to [2, 3, 4]. For example, a submatrix taken from rows 2 through 4 and columns 3 through 4 can be written as:

>> A(2:4,3:4)
ans =
    11     8
     7    12
    14     1

A square identity matrix of size n can be generated using the function eye, and matrices of any size with zeros or ones can be generated with the functions zeros and ones, respectively.

>> eye(3)
ans =
     1     0     0
     0     1     0
     0     0     1

>> zeros(2,3)
ans =
     0     0     0
     0     0     0

>> ones(2,3)
ans =
     1     1     1
     1     1     1

Most MATLAB functions can accept matrices and will apply themselves to each element. For example, mod(2*J,n) will multiply every element in J by 2, and then reduce each element modulo n. MATLAB does include standard for and while loops, but using MATLAB's vectorized notation often produces code that is easier to read and faster to execute. This code, excerpted from the function magic.m, creates a magic square M for odd values of n (the MATLAB function meshgrid is used here to generate square matrices I and J containing 1:n):

[J,I] = meshgrid(1:n);
A = mod(I+J-(n+3)/2,n);
B = mod(I+2*J-2,n);
M = n*A + B + 1;

Semicolons

Unlike many other languages, where the semicolon is used to terminate commands, in MATLAB the semicolon serves to suppress the output of the line that it concludes (it serves a similar purpose in Mathematica).

4.6. Graphics
Function plot can be used to produce a graph from two vectors x and y. The code:

x = 0:pi/100:2*pi;
y = sin(x);
plot(x,y)

produces the following figure of the sine function:

Fig 4.1 Sine function

Three-dimensional graphics can be produced using the functions surf, plot3 or mesh. This code produces a wireframe 3D plot of the two-dimensional unnormalized sinc function:

[X,Y] = meshgrid(-10:0.25:10,-10:0.25:10);
f = sinc(sqrt((X/pi).^2+(Y/pi).^2));
mesh(X,Y,f);
axis([-10 10 -10 10 -0.3 1])
xlabel('{\bfx}')
ylabel('{\bfy}')
zlabel('{\bfsinc} ({\bfR})')
hidden off

This code produces a surface 3D plot of the same function:

[X,Y] = meshgrid(-10:0.25:10,-10:0.25:10);
f = sinc(sqrt((X/pi).^2+(Y/pi).^2));
surf(X,Y,f);
axis([-10 10 -10 10 -0.3 1])
xlabel('{\bfx}')
ylabel('{\bfy}')
zlabel('{\bfsinc} ({\bfR})')

Fig 4.2 wireframe 3D plot

Fig 4.3 surface 3D plot

4.7. Structures
MATLAB supports structure data types. Since all variables in MATLAB are arrays, a more adequate name is "structure array", where each element of the array has the same field names. In addition, MATLAB supports dynamic field names (field look-ups by name, field manipulations, etc.). Unfortunately, the MATLAB JIT compiler does not support MATLAB structures, so even a simple bundling of various variables into a structure will come at a cost.

4.8. Function Handles


MATLAB supports elements of lambda calculus by introducing function handles, or function references, which are implemented either in .m files or as anonymous/nested functions.

Secondary Programming

MATLAB also carries secondary programming, which incorporates the MATLAB standard code into a more user friendly way to represent a function or system.


4.9. Classes
MATLAB supports classes; however, the syntax and calling conventions are significantly different from those in other languages, because MATLAB does not have reference data types. For example, a call to a method

object.method();

cannot normally alter any variables of the object variable. To create the impression that the method alters the state of the variable, MATLAB toolboxes use the evalin() command, which has its own restrictions.

Object Oriented Programming

MATLAB's support for object-oriented programming includes classes, inheritance, virtual dispatch, packages, pass-by-value semantics, and pass-by-reference semantics.[10]

classdef hello
    methods
        function doit(this)
            disp('hello')
        end
    end
end

When put into a file named hello.m, this can be executed with the following commands:

>> x = hello;
>> x.doit;
hello

Interfacing with Languages

MATLAB can call functions and subroutines written in the C programming language or Fortran. A wrapper function is created allowing MATLAB data types to be passed and returned.


The dynamically loadable object files created by compiling such functions are termed "MEX-files" (for MATLAB executable).[11][12] Libraries written in Java, ActiveX or .NET can be directly called from MATLAB, and many MATLAB libraries (for example XML or SQL support) are implemented as wrappers around Java or ActiveX libraries. Calling MATLAB from Java is more complicated, but can be done with the MATLAB extension,[13] which is sold separately by MathWorks, or using an undocumented mechanism called JMI (Java-to-Matlab Interface),[14] which should not be confused with the unrelated Java Metadata Interface that is also called JMI. MATLAB has a direct node with modeFRONTIER, a multidisciplinary and multiobjective optimization and design environment, written to allow coupling to almost any computer aided engineering (CAE) tool. Once a result has been obtained using MATLAB, the data can be transferred to and stored in a modeFRONTIER workflow, and vice versa.

4.10. License

MATLAB is a proprietary product of MathWorks, so users are subject to vendor lock-in.[2][16] Although MATLAB Builder can deploy MATLAB functions as library files which can be used with .NET or Java application building environments, future development will still be tied to the MATLAB language.

File Extensions

.fig: MATLAB figure
.m: MATLAB function, script, or class
.mat: MATLAB binary file for storing variables
.mex: MATLAB executable (platform specific, e.g. ".mexmac" for the Mac, ".mexglx" for Linux, etc.)
.p: MATLAB content-obscured .m file (result of pcode())

4.11. MATLAB PROGRAMMING / IMAGE PROCESSING TOOLBOX


The core MATLAB package comes with several rudimentary functions (to be described later) that can be used to load, save, and perform custom functions on images. However, it is often necessary to perform more complicated operations on images. The image processing toolbox allows such manipulations as:

Direct visualization of images in MATLAB
Color space conversions (e.g. between RGB, HSV, L*a*b*, and so on)
Object grouping and data collection
Filtering and fast convolution
Fourier analysis of images
Image arithmetic
Morphological operations

and many others.

4.12. MATLAB Built-in Functions


To load an image into MATLAB, you can use the "import data" GUI (the same way as you would for a text file or similar), or you can use the "imread" function. To use "imread", you must have the directory where the image is located in your list of directories. Then use the syntax:

>> myimage = imread('myimage.extension');

Use of a semicolon is especially recommended here since images contain large amounts of data. If the image is a color image, MATLAB will (for most data formats that are compatible with it) convert the image data to the RGB color space by default. The separate channels are represented by the third dimension of the image. The following code separates the channels of the image and indicates the color of each channel.

>> redchannel = myimage(:,:,1);
>> greenchannel = myimage(:,:,2);
>> bluechannel = myimage(:,:,3);

Care must be taken when doing calculations with images because most image formats are imported as class uint8, not as class double, to save space. There are many operations which cannot be performed on uint8 arrays (or, for that matter, on arrays with more than two dimensions). Some of these can be circumvented by using MATLAB's image arithmetic functions, but sometimes it will be necessary to convert the image or a portion of it to class double to manipulate it. Another default function in MATLAB can be used to save images to a new file. The function, "imwrite", accepts two different syntaxes:

>> imwrite(myimage, 'nameoffile.extension')

or

>> imwrite(myimage, 'nameoffile', 'extension')

When using the second syntax, do not use a period before the extension. See the MATLAB documentation for supported file types for import and export.

MATLAB's Image Arithmetic Functions

MATLAB by default imports most images as class uint8, which is not supported by the native addition, subtraction, multiplication, or division functions. In order to overcome this, one option is to convert the images to class double. However, this will cause memory issues if a large number of images are to be used, or if the images are large. It is also very slow. Therefore, MATLAB provides a better option in its arithmetic functions. MATLAB can perform the following operations using its special arithmetic functions for integers (they also work if either A or B is a constant value):

imadd(A, B): add two images A and B (equivalent to A + B due to an operator overload, but that syntax will not work without the image analysis toolbox, so it is not recommended; it gives a more difficult error to track down in its absence than "function not found")

imsubtract(A, B): Subtract two images A and B (equivalent to A - B, but again not recommended since it won't work without the toolbox)

immultiply(A, B): Multiply two images A and B


imdivide(A, B): Divide two images A and B (equivalent to A./B, but the values are rounded to the nearest integer, not truncated like true integer arithmetic)

imlincomb(K1, A1, K2, A2, ..., C): Linear combination of images A1, A2, etc., yielding K1 * A1 + K2 * A2 + ... + C. This is more efficient than using, e.g., immultiply(K1,A1) + immultiply(K2,A2), because the latter rounds after every operation, leading to sometimes significant errors, while imlincomb rounds only once, at the end.
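For example, averaging two same-size uint8 images (illustrative names) without intermediate rounding:

C = imlincomb(0.5, A, 0.5, B);   % rounds only once, at the end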

4.13. Visualization of Images
To visualize an image or a section of an image, use the imshow function:

>> imshow(myimage);

This will show a figure (often scaled down) of the image. You should not attempt to copy this figure and paste it into another program, as the resulting RGB values will be different; use imwrite instead.

Color Space Changes and Cforms

Note that when converting from device-specific color spaces like RGB to non-device-specific spaces like LAB or XYZ, some inaccuracy will inevitably arise. One reason for this is that RGB values as given by MATLAB are often encoded with a gamma from the camera (and given another gamma by the monitor before being displayed on the screen), which makes the relationship between the RGB and LAB values dependent on the internals of the camera and also nonlinear (and difficult to determine accurately). As an approximation, one can often use a power-law function to arrive at the un-modified RGB values, which in theory should be linearly correlated with the LAB values of each pixel:

R' = (R/255)^(1/gamma)
G' = (G/255)^(1/gamma)
B' = (B/255)^(1/gamma)

Note that these equations assume R, G, and B are on the range from 0 to 255, and the computation requires that R, G, and B be converted to class double from the default class uint8.


Image Filtering

MATLAB uses one of two methods, correlation or convolution, to filter images. The two operations are identical except that correlation rotates the filter matrix 180° before filtering, while convolution keeps it the same.

Convolution

Direct convolution is implemented by both the conv2 function and the imfilter function. For images, since the filter is much smaller than the image, the direct method implemented in imfilter (which uses a MEX function to increase speed) is usually faster.

FFT Method

It can be shown that convolution in the space domain (as described earlier) is equivalent to multiplication in the Fourier domain:

A * B = F^(-1)( F(A) . F(B) )

where F is the Fourier transform operator. If A and B are nearly the same size, performing the Fourier transform of both A and B, multiplying, and inverting the result is often faster than performing the computation directly. In order to use this method, it is necessary to pad the smaller of A and B with zeros so that both are the same size. The method is actually fastest if both A and B are padded so that their lengths are a power of 2, since the FFT algorithm is optimized for these sizes. After forming the padded matrices A' and B', compute the FFT of both matrices (using the fft2 function), multiply them, and invert the result to find the full convolution (padded with many zeros). Following is some sample code implementing this method, returning the convolution equivalent to the 'same' parameter in MATLAB:
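The sample listing itself did not survive extraction, so the following is a minimal reconstruction under the stated approach, cropping the full FFT result to the size of A as conv2(A, B, 'same') does.

function C = fftconv2same(A, B)
% FFT-based 2D convolution, equivalent to conv2(A, B, 'same').
[ma, na] = size(A);
[mb, nb] = size(B);
M = ma + mb - 1;                                      % full convolution size
N = na + nb - 1;
Cfull = real(ifft2(fft2(A, M, N) .* fft2(B, M, N)));  % full convolution
r0 = floor(mb/2);                                     % crop offsets for 'same'
c0 = floor(nb/2);
C = Cfull(r0+1 : r0+ma, c0+1 : c0+na);

For additional speed, M and N can be rounded up to the next power of 2 with nextpow2 before the transforms, as described above.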

4.14. Using the Image Analysis Toolbox


Filters can be used to blur images in different ways, or to sharpen them. The function fspecial contains a number of pre-defined filters that you can use for these purposes. The syntax for this function is:

>> filter = fspecial(typeoffilter, parameters);


If no parameters are provided, the default size of the filter differs depending on the type of filter; see the MATLAB documentation page for details. Some types of filter also have other options, such as the standard deviation of a Gaussian filter. To filter an image, MATLAB replaces each pixel of the image with a weighted average of the surrounding pixels. The weights are determined by the values of the filter, and the number of surrounding pixels is determined by the size of the filter used. MATLAB includes a function to do the filtering: imfilter.

>> newimage = imfilter(myimage, filter, parameters);

There are several possible parameters that determine how MATLAB deals with pixels at the edges of an image (since the weighted average at the edges needs something to weigh beyond the image borders). These include padding with a constant (give imfilter a number in the parameters spot), reflecting the image across the border (parameters = 'symmetric'), assuming the image is periodic (parameters = 'circular'), or duplicating border pixel values (parameters = 'replicate'). The best method depends on the application, so it is best to experiment with them. Another parameter you can pass is the choice of correlation vs. convolution: correlation is used by default; pass 'conv' to imfilter to use convolution. Be aware that correlation will rotate your filter; be prepared for this if the filter is intended to have a directional effect on the image.
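For instance, a Gaussian blur with symmetric border handling (parameter values are illustrative):

g = fspecial('gaussian', [7 7], 1.5);        % 7x7 Gaussian filter, sigma = 1.5
blurred = imfilter(myimage, g, 'symmetric'); % reflect the image at the borders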



CHAPTER 5

5. SYSTEM SPECIFICATION

5.1 SOFTWARE REQUIREMENTS


MATLAB 7.8 (Image Processing Toolbox)

5.2 HARDWARE REQUIREMENTS

Processor: Pentium IV
Operating System: Windows XP
RAM: 1 GB
Hard Disc: 80 GB
Monitor: LCD



CHAPTER 6

6. SAMPLE SOURCE CODE

6.1 MAIN PROGRAM CODING


clear all
clc
close all

% You can customize and fix the initial directory paths
TrainDatabasePath = uigetdir('D:\Program Files\MATLAB\R2006a\work', 'Select training database path');
TestDatabasePath = uigetdir('D:\Program Files\MATLAB\R2006a\work', 'Select test database path');

% Select the test image
prompt = {'Enter test image name (a number between 1 to 10):'};
dlg_title = 'Input of PCA-Based Face Recognition System';
num_lines = 1;
def = {'1'};

TestImage = inputdlg(prompt, dlg_title, num_lines, def);
TestImage = strcat(TestDatabasePath, '\', char(TestImage), '.jpg');
im = imread(TestImage);

% Feature extraction for the trained images
T = CreateDatabase(TrainDatabasePath);
[m, A, Eigenfaces] = EigenfaceCore(T);

% Recognition step for the test image
OutputName = Recognition(TestImage, m, A, Eigenfaces);

SelectedImage = strcat(TrainDatabasePath, '\', OutputName);
SelectedImage = imread(SelectedImage);

imshow(im)
title('Test Image');
figure, imshow(SelectedImage);
title('Equivalent Image');

str = strcat('Matched image is : ', OutputName);
disp(str)

6.2 CODING FOR CREATEDATABASE

function T = CreateDatabase(TrainDatabasePath)
% Align a set of face images (the training set T1, T2, ..., TM)
%
% Description: This function reshapes all 2D images of the training database
% into 1D column vectors. Then, it puts these 1D column vectors in a row to
% construct the 2D matrix 'T'.
%
% Argument:
%    TrainDatabasePath - Path of the training database
%
% Returns:
%    T - A 2D matrix containing all 1D image vectors. Suppose all P images
%        in the training database have the same size of MxN. Then the length
%        of the 1D column vectors is M*N and 'T' will be an (M*N)xP 2D matrix.

%%%%%%%%%%%%%%%%%%%%%%%% File management
TrainFiles = dir(TrainDatabasePath);
Train_Number = 0;

for i = 1:size(TrainFiles,1)
    if not(strcmp(TrainFiles(i).name,'.') | strcmp(TrainFiles(i).name,'..') | strcmp(TrainFiles(i).name,'Thumbs.db'))
        Train_Number = Train_Number + 1; % Number of all images in the training database
    end
end

%%%%%%%%%%%%%%%%%%%%%%%% Construction of the 2D matrix from 1D image vectors
T = [];
for i = 1 : Train_Number
    % The name of each image in the database is chosen as a corresponding
    % number. However, it is not mandatory!
    str = int2str(i);
    str = strcat('\', str, '.jpg');
    str = strcat(TrainDatabasePath, str);

    img = imread(str);
    img = rgb2gray(img);

    [irow, icol] = size(img);

    temp = reshape(img', irow*icol, 1); % Reshaping 2D images into 1D image vectors
    T = [T temp]; % 'T' grows after each turn
end


6.3 CODING FOR EIGENFACECORE

function [m, A, Eigenfaces] = EigenfaceCore(T)
% Use Principal Component Analysis (PCA) to determine the most
% discriminating features between images of faces.
%
% Description: This function gets a 2D matrix containing all training image
% vectors and returns three outputs extracted from the training database.
%
% Argument:
%    T - A 2D matrix containing all 1D image vectors. Suppose all P images
%        in the training database have the same size of MxN. Then the length
%        of the 1D column vectors is M*N and 'T' will be an (M*N)xP 2D matrix.
%
% Returns:
%    m          - (M*N x 1) Mean of the training database
%    Eigenfaces - (M*N x (P-1)) Eigenvectors of the covariance matrix of the
%                 training database
%    A          - (M*N x P) Matrix of centered image vectors

%%%%%%%%%%%%%%%%%%%%%%%% Calculating the mean image
m = mean(T,2); % Average face image: m = (1/P)*sum(Tj), j = 1 : P
Train_Number = size(T,2);

%%%%%%%%%%%%%%%%%%%%%%%% Calculating the deviation of each image from the mean image
A = [];
for i = 1 : Train_Number
    temp = double(T(:,i)) - m; % Difference image for each training image: Ai = Ti - m
    A = [A temp]; % Merging all centered images
end

%%%%%%%%%%%%%%%%%%%%%%%% Snapshot method of the Eigenface method
L = A'*A; % L is the surrogate of the covariance matrix C = A*A'.
[V, D] = eig(L); % Diagonal elements of D are the eigenvalues of both L = A'*A and C = A*A'.

%%%%%%%%%%%%%%%%%%%%%%%% Sorting and eliminating eigenvalues
L_eig_vec = [];
for i = 1 : size(V,2)
    if ( D(i,i) > 1 )
        L_eig_vec = [L_eig_vec V(:,i)];
    end
end

%%%%%%%%%%%%%%%%%%%%%%%% Calculating the eigenvectors of the covariance matrix 'C'
Eigenfaces = A * L_eig_vec; % A: centered image vectors

6.4 CODING FOR RECOGNITION FUNCTION


function OutputName = Recognition(TestImage, m, A, Eigenfaces)
% Recognizing step....
%
% Description: This function compares two faces by projecting the images
% into facespace and measuring the Euclidean distance between them.
%
% Argument:
%    TestImage  - Path of the input test image
%    m          - (M*N x 1) Mean of the training database, which is the
%                 output of the 'EigenfaceCore' function.
%    A          - (M*N x P) Matrix of centered image vectors, which is the
%                 output of the 'EigenfaceCore' function.
%    Eigenfaces - (M*N x (P-1)) Eigenvectors of the covariance matrix of the
%                 training database, which is the output of the
%                 'EigenfaceCore' function.
%
% Returns:
%    OutputName - Name of the recognized image in the training database.

%%%%%%%%%%%%%%%%%%%%%%%% Projecting centered image vectors into facespace
% All centered images are projected into facespace by multiplying by the
% Eigenface basis. The projected vector of each face is its corresponding
% feature vector.
ProjectedImages = [];
Train_Number = size(Eigenfaces,2);
for i = 1 : Train_Number
    temp = Eigenfaces' * A(:,i); % Projection of centered images into facespace
    ProjectedImages = [ProjectedImages temp];
end

%%%%%%%%%%%%%%%%%%%%%%%% Extracting the PCA features from test image
InputImage = imread(TestImage);
temp = InputImage(:,:,1); % Take the first (red) channel of the test image

[irow icol] = size(temp);
InImage = reshape(temp',irow*icol,1);          % Reshaping the 2D test image into a 1D vector
Difference = double(InImage) - m;              % Centered test image
ProjectedTestImage = Eigenfaces' * Difference; % Test image feature vector

%%%%%%%%%%%%%%%%%%%%%%%% Calculating Euclidean distances
% The test image is assigned the identity of the training image whose
% projection lies nearest to it in facespace.
Euc_dist = [];
for i = 1 : Train_Number
    q = ProjectedImages(:,i);
    temp = ( norm( ProjectedTestImage - q ) )^2;
    Euc_dist = [Euc_dist temp];
end
[Euc_dist_min, Recognized_index] = min(Euc_dist);
OutputName = strcat(int2str(Recognized_index), '.jpg');
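A brief usage sketch of the recognition step, assuming the outputs m, A, and Eigenfaces from EigenfaceCore above; the test image path is a placeholder.

% Hypothetical usage; the path is a placeholder.
TestImage = 'C:\TestDatabase\1.jpg';
OutputName = Recognition(TestImage, m, A, Eigenfaces); % Name of the best-matching training image
fprintf('Matched training image: %s\n', OutputName);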



CHAPTER 7

7. SCREEN SHOTS

After running the main program, a browse menu opens as shown below; from it, the locations of the train database and the test database must be selected.

[Screenshot: browse menu for selecting the train and test database locations]

After selecting the train database and the test database, an input menu opens as shown below, in which a number (1 to 10) identifying the test image should be given as input.

[Screenshot: input menu for entering the test image number]

After the number is given as input, the test image is matched against the images in the train database, and the corresponding images are displayed as shown below.

[Screenshot: matched test and train database images displayed side by side]
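A minimal sketch of how the main script might tie the three functions of Chapter 6 together is given below. The dialog titles, the prompt text, and the display layout are assumptions for illustration, not the exact original script; the database-construction function of Section 6.2 is assumed to be named CreateDatabase.

% Minimal main-script sketch; dialog titles and prompt text are illustrative.
clear; clc; close all;

TrainDatabasePath = uigetdir(pwd, 'Select the train database folder');
TestDatabasePath = uigetdir(pwd, 'Select the test database folder');

TestImageIndex = input('Enter test image number (1 to 10): ');
TestImage = strcat(TestDatabasePath, '\', int2str(TestImageIndex), '.jpg');

T = CreateDatabase(TrainDatabasePath);                 % Section 6.2: build the (M*N)xP image matrix
[m, A, Eigenfaces] = EigenfaceCore(T);                 % Section 6.3: PCA on the training set
OutputName = Recognition(TestImage, m, A, Eigenfaces); % Section 6.4: nearest-neighbor match

SelectedImage = strcat(TrainDatabasePath, '\', OutputName);
figure;
subplot(1,2,1); imshow(imread(TestImage)); title('Test Image');
subplot(1,2,2); imshow(imread(SelectedImage)); title('Recognized Image');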


CHAPTER 8

8. APPLICATIONS

Security agencies make use of face recognition to catch suspects on the basis of their facial identity.

Law enforcement bodies also use this technology to catch criminals on the run. The airline industry is another field where this technology is deployed to prevent hijacking and other criminal activities.

Banks and government offices also use face recognition technology to prevent undesirable incidents.

Other application areas include:
Access Control
Face Databases
Face ID
HCI (Human-Computer Interaction)
Multimedia Management
Security
Smart Cards
Visual Surveillance
and others



CHAPTER 9

9. CONCLUSION

Conventional face matching systems generate only a numeric matching score as the similarity between two face images, whereas facial mark based matching provides specific and more meaningful evidence about the similarity of two face images. Thus, automatic extraction and representation of facial marks is becoming important in forensic applications. Facial marks can be used to support other evidence presented in courts of law, and may serve as strong evidence by themselves when other evidence is either unavailable or unreliable. We have developed a face matching system based on soft biometric traits. It uses gender and ethnicity information together with facial marks. This soft biometric matcher can be combined with any face matcher to improve the recognition accuracy, or used by itself when a face matcher fails because of face occlusion. We also show that facial marks can help in discriminating identical twins. With the proposed soft biometric matcher, users can issue semantic queries to retrieve images of interest from a large database; for example, a query could be of the form "Retrieve all face images with a mole on the left side of the lip." The proposed framework of using soft biometric traits in face matching is highly relevant to the goals of the FBI's Next Generation Identification (NGI) R&D effort [36]. The criminal history database maintained by the FBI includes photographic face evidence such as scars, marks, and tattoos as a useful component for the purpose of subject identification. The proposed work is an effort towards establishing the scientific basis and the quantitative analysis capability needed to support the NGI activity [37]. We believe this effort will help avoid the controversies that have beleaguered the fingerprint matching community in recent years [38].



CHAPTER 10

10. FUTURE ENHANCEMENT

Our ongoing work includes (i) improving the mark detection accuracy, (ii) extending the automatic mark detection to off-frontal face images, and (iii) studying the image resolution requirement for reliable mark extraction.


REFERENCES

[1] A. K. Jain and U. Park, "Facial marks: Soft biometric for face recognition," in Proc. IEEE ICIP, pp. 1–4, 2009.
[2] N. A. Spaun, "Forensic biometrics from images and video at the Federal Bureau of Investigation," in Proc. BTAS, pp. 1–3, 2007.
[3] N. A. Spaun, "Facial comparisons by subject matter experts: Their role in biometrics and their training," in Proc. ICB, pp. 161–168, 2009.
[4] A. K. Jain, S. C. Dass, and K. Nandakumar, "Soft biometric traits for personal recognition systems," in Proc. ICBA, LNCS 3072, pp. 731–738, 2004.
[5] B. Klare and A. K. Jain, "Sketch to photo matching: A feature-based approach," in Proc. SPIE, Biometric Technology for Human Identification VII, pp. 1–10, 2010.
[6] P. J. Phillips, H. Wechsler, J. S. Huang, and P. J. Rauss, "The FERET database and evaluation procedure for face recognition algorithms," Image and Vision Computing, vol. 16, no. 5, pp. 295–306, 1998.
[7] H. Tuthill and G. George, Individualization: Principles and Procedures in Criminalistics. Lightning Powder Company, Inc., 2002.
[8] L. Wiskott, J.-M. Fellous, N. Kruger, and C. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Trans. PAMI, vol. 19, no. 7, pp. 775–779, 1997.
[9] P. S. Penev and J. J. Atick, "Local feature analysis: A general statistical theory for object representation," Computation in Neural Systems, vol. 7, pp. 477–500, 1996.
[10] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Trans. PAMI, vol. 19, no. 7, pp. 711–720, 1997.
[11] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, "Face recognition using Laplacianfaces," IEEE Trans. PAMI, vol. 27, no. 3, pp. 328–340, 2005.
[12] F. Wang, J. Wang, C. Zhang, and J. T. Kwok, "Face recognition using spectral features," Pattern Recognition, vol. 40, no. 10, pp. 2786–2797, 2007.
[13] L. G. Farkas and I. R. Munro, Anthropometric Facial Proportions in Medicine. Thomas, 1987.
[14] P. Ekman, W. Friesen, and J. Hager, The Facial Action Coding System, 2nd ed. Weidenfeld & Nicholson, 2002.
[15] Craniofacial Anthropometry, http://www.plagiocephaly.info/faqs/anthropometry.htm.
[16] University of Michigan Medical Gross Anatomy/Head and Neck, http://anatomy.med.umich.edu/surface.
[17] D. Lin and X. Tang, "From macrocosm to microcosm," in Proc. CVPR, pp. 1355–1362, 2006.
[18] D. G. Lowe, "Distinctive image features from scale invariant keypoints," IJCV, vol. 60, no. 2, pp. 91–110, 2004.
[19] J. S. Pierrard and T. Vetter, "Skin detail analysis for face recognition," in Proc. CVPR, pp. 1–8, 2007.
[20] S. Pamudurthy, E. Guan, K. Mueller, and M. Rafailovich, "Dynamic approach for face recognition using digital image skin correlation," in Proc. Audio- and Video-based Biometric Person Authentication (AVBPA), pp. 1010–1017, 2005.
[21] J.-E. Lee, A. K. Jain, and R. Jin, "Scars, marks and tattoos (SMT): Soft biometric for suspect and victim identification," in Proc. Biometric Symposium, Biometric Consortium Conference, pp. 1–8, 2008.
[22] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar, "Attribute and simile classifiers for face verification," in Proc. IEEE International Conference on Computer Vision (ICCV), 2009.
[23] L. Zhang, Y. Hu, M. Li, W. Ma, and H. Zhang, "Efficient propagation for face annotation in family albums," in Proc. ACM International Conference on Multimedia, pp. 716–723, 2004.
[24] J. Y. Choi, S. Yang, Y. M. Ro, and K. N. Plataniotis, "Face annotation for personal photos using context-assisted face recognition," in Proc. ACM International Conference on Multimedia Information Retrieval, pp. 44–51, 2008.
[25] S.-W. Chu, M.-C. Yeh, and K.-T. Cheng, "A real-time, embedded face-annotation system," in Proc. ACM International Conference on Multimedia, pp. 989–990, 2008.
[26] Picasa, http://picasa.google.com.
[27] iPhoto '09, http://www.apple.com/support/iphoto.
[28] T. Lindeberg, "Feature detection with automatic scale selection," IJCV, vol. 30, no. 2, pp. 79–116, 1998.
[29] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," in Proc. ECCV, vol. 2, pp. 484–498, 1998.
[30] M. B. Stegmann, "The AAM-API: An open source active appearance model implementation," in Proc. MICCAI, pp. 951–952, 2003.
[31] C. J. Bradley, The Algebra of Geometry: Cartesian, Areal and Projective Co-ordinates. Bath: Highperception, 2007.
[32] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, 2nd ed. John Wiley and Sons, 1995.
[33] A. K. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270–2285, 2005.
[34] Z. Sun, A. Paulino, J. Feng, Z. Chai, T. Tan, and A. Jain, "A study of multibiometric traits of identical twins," in Proc. SPIE Defense, Security, and Sensing: Biometric Technology for Human Identification, pp. 1–12, 2010.
[35] FaceVACS Software Developer Kit, Cognitec Systems GmbH, http://www.cognitec-systems.de.
[36] Federal Bureau of Investigation, Next Generation Identification overview page, http://www.fbi.gov/hq/cjisd/ngi.htm.
[37] National Academy of Sciences, Strengthening Forensic Science in the United States: A Path Forward. The National Academies Press, 2009.
[38] S. L. Zabell, "Fingerprint evidence," Journal of Law and Policy, vol. 13, pp. 143–170, 2005.

