
Comparison of Dimensionality Reduction Techniques for Face Recognition

Berkay Topu 6950 Sabanci University

Outline

Motivation
Face Detection
Database (M2VTS)
Different Dimensionality Reduction Techniques: PCA, LDA, aPAC, Normalized PCA, Normalized LDA
Classification
Results
Conclusion

Motivation

Face recognition: an active research area concerned with recognizing faces in images or videos
Dimensionality reduction: representing and classifying the data with far less information
Focus here: linear transforms

Overall System
Face Detection → Dimension Reduction (e.g. PCA) → Classification

Dimensionality reduction and pattern recognition (classification) form the core stages of the pipeline.

Face Detection

Automatic face detection with the OpenCV library

Uses Haar-like features

Detected faces resized to 64x48 pixels

Database

M2VTS database, originally collected for audio-visual speech recognition and lip detection; also suitable for face recognition

Database

40 pictures of each of 37 subjects

32 pictures per subject for training & 8 for testing
It is unnecessary to use the whole image in the recognition system
Is it possible to represent a face with less information?

Each image is 64x48 = 3072 pixels

PCA (Principal Component Analysis)

Weaknesses:

Translation variant, scale variant, background variant, lighting variant

Advantages:

Fast and needs less memory

PCA (Principal Component Analysis)

Principal component analysis (PCA) seeks a computational model that best describes a face by extracting the most relevant information it contains. It finds a lower-dimensional subspace whose basis vectors correspond to the maximum-variance directions of the original image space.

The solution is given by the eigenvectors of the scatter matrix.
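The eigenvector computation described above can be sketched as follows. This is a minimal illustration, not the presenter's code; the function name, the random data, and the 100-image batch are assumptions (the real data would be the 64x48 = 3072-pixel M2VTS face images).

```python
import numpy as np

def pca(X, n_components):
    """PCA: top eigenvectors of the scatter (covariance) matrix.

    X: (n_samples, n_features) data matrix, one flattened face per row.
    Returns the projection matrix W (n_features, n_components) and the mean.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    scatter = Xc.T @ Xc                          # scatter matrix
    eigvals, eigvecs = np.linalg.eigh(scatter)   # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :n_components]       # keep maximum-variance directions
    return W, mean

# Usage: project synthetic "faces" into a 32-dimensional subspace
X = np.random.rand(100, 3072)   # 100 images of 64x48 = 3072 pixels (random stand-in)
W, mean = pca(X, 32)
Y = (X - mean) @ W              # reduced representation, shape (100, 32)
```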

LDA (Linear Discriminant Analysis)


Finds the vectors in the underlying space that best discriminate among classes. The goal is to maximize the between-class scatter (covariance) while minimizing the within-class scatter, i.e. to maximize the ratio

det(S_B) / det(S_W)

The solution is given by the eigenvectors of S_W^-1 S_B.
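A sketch of this eigenproblem, under the same caveats as before (function name and synthetic two-class data are my own, not from the slides):

```python
import numpy as np

def lda(X, y, n_components):
    """LDA: eigenvectors of inv(S_W) @ S_B with the largest eigenvalues."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    S_W = np.zeros((d, d))   # within-class scatter
    S_B = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        S_B += len(Xc) * diff @ diff.T
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    order = np.argsort(eigvals.real)[::-1]       # sort by discriminative power
    return eigvecs[:, order[:n_components]].real

# Usage: two well-separated Gaussian classes in 10 dimensions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(5, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
W = lda(X, y, 1)   # at most C - 1 = 1 useful direction for 2 classes
```

Note that for C classes only C-1 discriminant directions carry information, which is why LDA-based methods here reduce 37 subjects to at most 36 dimensions.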

aPAC (Approximate Pairwise Accuracy Criterion)

Drawbacks of LDA:

Because LDA maximizes the squared distances between pairs of classes, outliers dominate the eigenvalue decomposition, so LDA tends to over-weight the influence of classes that are already well separated.

The solution is a generalization of LDA that weights the contribution of each class pair according to the Mahalanobis distance between the classes.

aPAC (Approximate Pairwise Accuracy Criterion)

K-class LDA can be decomposed into K(K-1)/2

two-class LDA problems, which allows a weighting of the contributions of individual class pairs to the overall criterion. The weighting function depends on the Bayes error rate* between classes. Although it is a generalization of LDA, it adds no computational complexity.

* Bayes error rate: theoretical minimum to the error any classifier can make.
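The pairwise weighting can be sketched as below. The erf-based weight formula is an assumption drawn from the aPAC literature, not stated on the slides, and all names and the toy class means are my own:

```python
import numpy as np
from math import erf, sqrt

def apac_weight(delta):
    # Erf-based pair weight (assumed form): pairs that are already far
    # apart (large Mahalanobis distance delta) receive a small weight,
    # so well-separated classes no longer dominate the criterion.
    return erf(delta / (2 * sqrt(2))) / (2 * delta ** 2)

def weighted_between_scatter(means, priors, deltas):
    """Weighted between-class scatter: sum over class pairs (i, j) of
    p_i * p_j * w(delta_ij) * (m_i - m_j)(m_i - m_j)^T."""
    d = means.shape[1]
    S_B = np.zeros((d, d))
    K = len(means)
    for i in range(K):
        for j in range(i + 1, K):
            diff = (means[i] - means[j])[:, None]
            S_B += priors[i] * priors[j] * apac_weight(deltas[i, j]) * diff @ diff.T
    return S_B

# Usage: three toy class means; the far-away third class is down-weighted
means = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
priors = np.array([1 / 3, 1 / 3, 1 / 3])
deltas = np.linalg.norm(means[:, None] - means[None, :], axis=2)
S_B = weighted_between_scatter(means, priors, deltas)
```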

nPCA (Normalized PCA)

PCA computes the projection that best preserves pairwise distances in the projected space. nPCA weights this sum of squared distances by introducing symmetric pairwise dissimilarities. Proposed weights:

w_ij = 1 / dist_ij, where dist_ij is the Euclidean distance between samples i and j in the original space

nPCA (Normalized PCA)


The solution is given by the eigenvectors of X^T L_d X, where L_d is a matrix built from the pairwise dissimilarities.
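One possible reading of this construction, sketched below under explicit assumptions: I take L_d to be the graph Laplacian of the 1/dist_ij weights, which is a common way to encode pairwise dissimilarities; the slides do not spell out the exact form of L_d, and the function name and random data are mine.

```python
import numpy as np

def npca(X, n_components):
    """Normalized-PCA sketch: top eigenvectors of X^T L_d X, where L_d is
    assumed to be the Laplacian of the weights w_ij = 1 / dist_ij."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    W = np.zeros((n, n))
    mask = D > 0
    W[mask] = 1.0 / D[mask]            # dissimilarity weights, diagonal left at 0
    L = np.diag(W.sum(axis=1)) - W     # graph Laplacian of the weights
    M = X.T @ L @ X                    # symmetric, positive semi-definite
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, ::-1][:, :n_components]   # largest-eigenvalue directions

# Usage on small random data
X = np.random.rand(30, 8)
W = npca(X, 3)
```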

nLDA (Normalized LDA)

The drawbacks of LDA can be overcome by:

Appropriately chosen weights that reduce the dominance of large distances
Pairwise similarities used together with the pairwise dissimilarities
Attraction between elements of the same class and repulsion between elements of different classes

Classification (Training & Testing)

Classification in MATLAB with PRTools (Pattern Recognition Toolbox)
Nearest Mean Classifier (nmc) & Linear Classifier (ldc)
40 images from each of 37 subjects = 1480 images

32x37 = 1184 images for training, 8x37 = 296 images for testing
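For illustration, the nearest mean classifier can be sketched in a few lines. This mirrors the idea behind PRTools' nmc but is not its actual API; the class name and the synthetic two-subject data are assumptions.

```python
import numpy as np

class NearestMeanClassifier:
    """Assign each sample to the class with the nearest mean (Euclidean)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # one mean vector per class, estimated from the training images
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # distance from every sample to every class mean
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Usage: two synthetic "subjects", 32 training samples each (as in the slides)
rng = np.random.default_rng(1)
Xtr = np.vstack([rng.normal(0, 0.5, (32, 5)), rng.normal(4, 0.5, (32, 5))])
ytr = np.array([0] * 32 + [1] * 32)
clf = NearestMeanClassifier().fit(Xtr, ytr)
pred = clf.predict(np.array([[0.1] * 5, [3.9] * 5]))
```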

Training and Testing


Training

Detected faces from different people → Dimension Reduction → Classifier Training → Statistical data for face images

Testing

Unknown detected faces → Dimension Reduction → Score calculation for each method

Recognition Rates

Test Results

Reduced dimension = 32

      PCA(128)   LDA       aPAC      nPCA      nLDA
nmc   77.7 %     89.19 %   87.84 %   71.62 %   88.85 %
ldc   61.15 %    88.85 %   87.84 %   86.15 %   88.85 %

Reduced dimension = 16

      PCA(128)   LDA       aPAC      nPCA      nLDA
nmc   77.7 %     83.11 %   85.47 %   66.89 %   86.15 %
ldc   61.15 %    84.46 %   85.47 %   79.73 %   84.46 %

PCA(128) denotes PCA with 128 retained dimensions, hence its identical figures in both tables.

Recognition rate prior to dimension reduction (using all 3072 pixels) is 79.05 %.

Conclusion

Face recognition can be performed in a much lower-dimensional space
Several dimensionality reduction techniques improved the recognition rate over using all pixels
Further work:

Analysis of the low recognition rates in some cases
Block PCA and LDA
