
CHAPTER ONE

INTRODUCTION

1.1 BACKGROUND OF THE STUDY

The information age is quickly revolutionizing the way transactions are completed. Everyday actions are increasingly being handled electronically, instead of with pencil and paper or face to face. This growth in electronic transactions has resulted in a greater demand for fast and accurate user identification and authentication.

Access codes for buildings, bank accounts and computer systems often use PINs for identification and security clearance. Entering the correct PIN gains access, but the user of the PIN is not verified. When credit and ATM cards are lost or stolen, an unauthorized user can often come up with the correct personal codes. Despite warnings, many people continue to choose easily guessed PINs and passwords: birthdays, phone numbers and social security numbers. Recent cases of identity theft have heightened the need for methods to prove that someone is truly who he or she claims to be. Face recognition technology may solve this problem, since a face is undeniably connected to its owner, except in the case of identical twins, and it is non-transferable. The system can then compare scans to records stored in a central or local database or even on a smart card.

A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with those in a facial database (Bonsor, 2008). It is typically used in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems.

Instead of requiring people to place their hand on a reader (a process not acceptable in some cultures as well as being a source of illness transfer) or precisely position their eye in front of a scanner, face recognition systems unobtrusively take pictures of people's faces as they enter a defined area. There is no intrusion or delay, and in most cases the subjects are entirely unaware of the process. They do not feel "under surveillance" or that their privacy has been invaded.

The subject of face recognition is as old as computer vision, both because of the practical importance of the topic and because of theoretical interest from cognitive scientists. Despite the fact that other methods of identification (such as fingerprints or iris scans) can be more accurate, face recognition has always remained a major focus of research because of its non-invasive nature and because it is people's primary method of person identification.

Perhaps the most famous early example of a face recognition system is due to Kohonen, who demonstrated that a simple neural net could perform face recognition for aligned and normalized face images. The type of network he employed computed a face description by approximating the eigenvectors of the face image's autocorrelation matrix; these eigenvectors are now known as 'eigenfaces'.

Kohonen's system was not a practical success, however, because of the need for precise alignment and normalization. In the following years many researchers tried face recognition schemes based on edges, inter-feature distances, and other neural net approaches. While several were successful on small databases of aligned images, none successfully addressed the more realistic problem of large databases where the location and scale of the face is unknown.

Kirby and Sirovich (1989) later introduced an algebraic manipulation which made it easy to directly calculate the eigenfaces, and showed that fewer than 100 were required to accurately code carefully aligned and normalized face images. Turk and Pentland (1991) then demonstrated that the residual error when coding using the eigenfaces could be used both to detect faces in cluttered natural imagery, and to determine the precise location and scale of faces in an image.

They then demonstrated that by coupling this method for detecting and localizing faces with the eigenface recognition method, one could achieve reliable, real-time recognition of faces in a minimally constrained environment. This demonstration that simple, real-time pattern recognition techniques could be combined to create a useful system sparked an explosion of interest in the topic of face recognition.
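
To make the eigenface idea concrete, the sketch below (a minimal Python/NumPy illustration under assumed array shapes and variable names, not Turk and Pentland's actual implementation) computes eigenfaces from a set of aligned, flattened training images, projects images into the resulting face space, and uses the reconstruction (residual) error referred to above.

    import numpy as np

    def train_eigenfaces(faces, num_components=50):
        """faces: (n_images, height*width) array of aligned, flattened face images."""
        mean_face = faces.mean(axis=0)
        centered = faces - mean_face
        # Economy-size SVD: the rows of vt are the eigenvectors ("eigenfaces")
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        eigenfaces = vt[:num_components]            # (k, height*width)
        weights = centered @ eigenfaces.T           # training faces projected into face space
        return mean_face, eigenfaces, weights

    def residual_error(image, mean_face, eigenfaces):
        """Distance between an image and its reconstruction from face space.
        A small residual suggests the window contains a face."""
        centered = image - mean_face
        w = eigenfaces @ centered                   # coordinates in face space
        reconstruction = eigenfaces.T @ w
        return np.linalg.norm(centered - reconstruction)

    def identify(image, mean_face, eigenfaces, weights, labels):
        """Nearest-neighbour match in face space; returns the best-matching label."""
        w = eigenfaces @ (image - mean_face)
        distances = np.linalg.norm(weights - w, axis=1)
        return labels[int(np.argmin(distances))]

Scanning residual_error over image windows at several scales is, in outline, how the residual can localize faces before identify matches the detected face against known subjects.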

Face recognition works by using a computer to analyze a subject's facial structure. Face recognition software takes a number of points and measurements, including the distances between key characteristics such as eyes, nose and mouth, angles of key features such as the jaw and forehead, and lengths of various portions of the face. Using all of this information, the program creates a unique template incorporating all of the numerical data. This template may then be compared to enormous databases of facial images to identify the subject.
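
As a simplified illustration of such a measurement-based template (the landmark names, the three measurements and the plain Euclidean matching below are assumptions for the sketch; production systems use many more measurements and far more robust matching), a face can be reduced to a small numeric vector and compared against a database of enrolled templates:

    import numpy as np

    def build_template(landmarks):
        """landmarks: dict of named (x, y) facial points located upstream."""
        eyes = np.linalg.norm(np.subtract(landmarks["left_eye"], landmarks["right_eye"]))
        eye_nose = np.linalg.norm(np.subtract(landmarks["nose"], landmarks["left_eye"]))
        nose_mouth = np.linalg.norm(np.subtract(landmarks["mouth"], landmarks["nose"]))
        # Dividing by the inter-eye distance makes the template scale-invariant
        return np.array([1.0, eye_nose / eyes, nose_mouth / eyes])

    def best_match(template, database):
        """database: {identity: stored template}; returns closest identity and distance."""
        return min(((name, np.linalg.norm(template - stored))
                    for name, stored in database.items()),
                   key=lambda item: item[1])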

Face recognition was one of those brilliant but technically iffy and ethically tricky counterterrorism technologies deployed as a result of the September 11 attacks. The idea was to automatically pick out terrorists as they walked through security checkpoints, only it didn't work out that way: at a test in Tampa, for example, airport employees were correctly identified just 53 percent of the time. Civil-liberties groups also raised concerns about false positives, that is, people being mistakenly identified as terrorists, and possibly arrested, just because of their looks. And so, without a demonstrable benefit, face recognition largely dropped off the public's radar (Simson and Beth, 2009).

According to Simson and Beth (2009), many countries, including the United States, quietly revised their requirements for passport photos to make them friendlier to face-recognition software. The National Institute of Standards and Technology, which had been testing the technology since 1994, conducted large-scale face-recognition tests in 2002 and 2006. Oregon and some other states began using face recognition to detect when one person tries to obtain a license under different names. And all the while, the technology kept getting better. In order to have a functioning face-recognition system, a computer must first be able to detect the face; that is, given a photograph, it must be able to find the faces in it. Technically, this is easier and more reliable than identifying a particular person, and it was largely perfected soon after September 11, 2001. The result: face-detection systems began appearing in digital cameras and camcorders a few years ago. These algorithms generally work by searching for objects that look like eyes, a nose, and perhaps something that is roughly round. They identify boxes where faces are likely to be, and then tell the autofocus system what part of the photo needs to be in focus.
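
The face-detection step described above can be reproduced with off-the-shelf tools. The sketch below uses OpenCV's pre-trained Viola-Jones style Haar cascade (not the proprietary algorithms inside any particular camera; the image file name is hypothetical) to return the boxes where faces are likely to be:

    import cv2

    # Load OpenCV's bundled frontal-face Haar cascade.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    def detect_faces(image_path):
        """Return a list of (x, y, w, h) boxes where faces are likely to be."""
        image = cv2.imread(image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # scaleFactor and minNeighbors trade off speed against false detections
        return list(detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5, minSize=(30, 30)))

    if __name__ == "__main__":
        for (x, y, w, h) in detect_faces("group_photo.jpg"):   # hypothetical file
            print(f"face at x={x}, y={y}, width={w}, height={h}")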

1.2 AIMS AND OBJECTIVES OF THE STUDY

The aim of this seminar report is to present the principles and operation of Facial Recognition Technology. The objectives of this seminar work are as follows:

• To outline the uses and applications of the Facial Recognition System

• To discuss the future scope of the Facial Recognition System

• To present the effects and benefits of the Facial Recognition System

• To suggest possible ways of improving Facial Recognition technology.

1.3 SIGNIFICANCE OF THE STUDY

This study will be beneficial to the fields of computer science, software engineering and engineering at large, as it describes the characteristics of systems that use Facial Recognition technology, presents its applications, and highlights the challenges that may lie ahead. This research work will also suggest possible ways to improve Facial Recognition technology, and it will serve as a reference material for further research.

1.4 DEFINITION OF TERMS

Security: the state of being free from danger or threat.

Strengthen: make or become stronger.

Biometric: relating to or involving the application of statistical analysis to biological data.

Authentication: the process or action of proving or showing something to be true, genuine, or valid.

Recognition: the action or process of recognizing or being recognized.

CHAPTER TWO

LITERATURE REVIEW

2.1 GENERAL REVIEW OF FACE RECOGNITION

This chapter provides a detailed review of face recognition research. There are two underlying motivations for presenting this survey: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, existing face recognition techniques are categorized and detailed descriptions of representative methods within each category are presented. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered.

Automated face recognition was developed in the 1960s. The first semi-automated system for face recognition required the administrator to locate features such as the eyes, ears, nose and mouth on the photographs before calculating distances and ratios to a common reference point, which were then compared to reference data. The problem with these early solutions of the 1970s was that the measurements and locations were still computed manually. In 1990, Kirby and Sirovich applied Principal Component Analysis, a standard linear algebraic technique, to the face recognition problem; this was considered a milestone. They showed that fewer than one hundred values were required to accurately code a suitably aligned and normalized face image. As a result of many studies, scientists concluded that face recognition is not like other object recognition. Face recognition is one of the few biometric methods that possess the merits of both high accuracy and low intrusiveness: it has the accuracy of a physiological approach without being intrusive. For this reason, since the early seventies face recognition has drawn the attention of researchers in fields ranging from security and psychology to image processing.

Early face recognition algorithms used simple geometric models, but the recognition process has now matured into a science of sophisticated mathematical representations and matching processes. Since the early 1950s, when digital computers were born and the world gained significant processing power, computer scientists have endeavored to bring thought and the senses to the computer. During the 1980s, work on face recognition remained largely dormant: haunted by the fears expressed in George Orwell's 1984, most members of society were very concerned about the use of a computer system capable of recognizing them wherever they go. Since the 1990s, research interest in face recognition has grown significantly as a result of the following factors:

i The increased emphasis on civilian and commercial research projects, and the re-emergence of neural network classifiers with emphasis on real-time computation and adaptation.

ii The availability of real-time hardware.

iii The increasing need for surveillance-related applications due to terrorist and drug-trafficking activities, etc.

In 1991, Turk and Pentland discovered that, when using the eigenfaces technique, the residual error could be used to detect faces in images, a discovery that enabled reliable real-time automated face recognition systems. This demonstration initiated much-needed analysis of how to use the technology to support national needs while being considerate of the public's social and privacy concerns.

Critics of the technology complain that the London Borough of Newham scheme had, as of 2004, never recognized a single criminal, despite several criminals in the system's database living in the Borough and the system having been running for several years: "Not once, as far as the police know, has Newham's automatic facial recognition system spotted a live target." This information seems to conflict with claims that the system was credited with a 34% reduction in crime, claims which would better explain why the system was then rolled out to Birmingham as well.

In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge (FRGC). High-resolution face images, 3-D face scans and iris images were used in the tests. The results indicated that the new algorithms were ten times more accurate than the face recognition algorithms of 2002 and one hundred times more accurate than those of 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could even identify identical twins.

Tolba et al (2006) have reported an up-to-date review of major human face recognition research in "Face recognition: a literature review, methods and technologies of face recognition". A literature review of the most recent face recognition techniques is presented, and descriptions and limitations of the face databases used to test the performance of these face recognition algorithms are given.

The face recognition problem is made difficult by the great variability in head rotation and tilt, lighting intensity and angle, facial expression, aging, etc. Some earlier attempts at facial recognition by machine allowed for little or no variability in these quantities. Yet the method of correlation or pattern matching of unprocessed data, which is often used by some researchers, is certain to fail in cases where the variability is great. In particular, the correlation is very low between two pictures of the same person taken with two different head rotations. Modern face recognition has reached an identification rate greater than 90% under well-controlled pose and illumination conditions. The task of recognizing faces has attracted much attention both from neuroscientists and from computer vision scientists. While network security and access control are its most widely discussed applications, face recognition has also proven useful in other multimedia information processing areas.

2.2 REVIEW OF PCA AND LDA BASED FACE RECOGNITION

Numerous algorithms have been proposed for face recognition. Chellappa et al (1995), Zhang et al (1997) and Chan et al (1998) use face recognition techniques to browse video databases to find shots of particular people. Haibo Li et al (1993) code face images with a compact parameterized facial model for low-bandwidth communication applications such as videophone and teleconferencing. Recently, as the technology has matured, commercial products have appeared on the market.

Turk et al (1991) developed the Principal Component Analysis (PCA) technique for face recognition, which represents a set of faces in terms of eigenvectors (eigenfaces) and their corresponding eigenvalues. Rama Chellappa et al (2003) have dealt with feature-based methods using statistical, structural and neural classifiers for human and machine recognition of faces. Krishnaswamy et al (1998) proposed automatic face recognition using Linear Discriminant Analysis (LDA) of human faces.

Chengjun Liu and Harry Wechsler (2002) presented new coding schemes, the Probabilistic Reasoning Models (PRM) and the Enhanced Fisher linear discriminant Models (EFM), for indexing and retrieval from large image databases. Michael Bromby (2003) has presented a new form of forensic identification, facial biometrics, using computerized identification.

Joss Beveridge et al (2003) provided the PCA and LDA algorithms for face recognition. A detailed literature survey of face recognition and reconstruction techniques was given by Roger Zhang and Henry Chang (2005).

Vytautas Perlibakas (2004) has reported a method in "Face Recognition Using Principal Component Analysis and Wavelet Packet Decomposition" which allows PCA-based face recognition to be used with a large number of training images and performs training much faster than the traditional PCA-based method.

In "Face Recognition using Principle Component Analysis", Kyungnam Kim (1998) proposed using PCA to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is what is needed to describe the data economically. The original face is reconstructed with some error, since the dimensionality of the image space is much larger than that of the face space.
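
A brief sketch of this dimensionality reduction and of the reconstruction error it leaves behind (using scikit-learn and the publicly available Olivetti/ORL faces; the component counts shown are illustrative, not figures from Kim's paper):

    import numpy as np
    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.decomposition import PCA

    # The Olivetti (ORL) faces: 400 64x64 images, downloaded on first use.
    faces = fetch_olivetti_faces()              # faces.data has shape (400, 4096)

    pca = PCA(n_components=100).fit(faces.data)

    # How much of the data's variance do the leading eigenfaces capture?
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    print("50 components capture %.1f%% of the variance" % (100 * cumulative[49]))
    print("100 components capture %.1f%% of the variance" % (100 * cumulative[99]))

    # Code one face with 100 numbers, then reconstruct it; the residual is the
    # information lost by compressing 4096 pixels into 100 coefficients.
    coded = pca.transform(faces.data[:1])
    reconstructed = pca.inverse_transform(coded)
    print("reconstruction error:", np.linalg.norm(faces.data[0] - reconstructed[0]))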

Jun-Ying et al (2005) have combined the characteristics of PCA with LDA. This improved method is based on normalization of the within-class average face image, which has the advantage of enlarging the classification distance between samples from different classes. Experiments were done on the ORL (Olivetti Research Laboratory) face database. The results show that a correct recognition rate of 98% can be obtained and that better efficiency can be achieved with the improved PCA method.
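
For orientation, a plain PCA-then-LDA pipeline on the same ORL data can be written as below. This is a generic sketch of the combination, not the specific within-class normalization of Jun-Ying et al, so the printed rate depends on the random split rather than matching the 98% reported above.

    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    faces = fetch_olivetti_faces()
    X_train, X_test, y_train, y_test = train_test_split(
        faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

    # PCA first reduces dimensionality (avoiding singular scatter matrices),
    # then LDA finds directions that best separate the 40 subjects.
    model = make_pipeline(PCA(n_components=100), LinearDiscriminantAnalysis())
    model.fit(X_train, y_train)
    print("recognition rate: %.1f%%" % (100 * model.score(X_test, y_test)))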

El-Bakry (2007) has proposed a new PCA implementation for fast face detection in "New Fast Principal Component Analysis for Face Detection", based on cross-correlation in the frequency domain between the input image and the eigenvectors (weights). The search is realized using cross-correlation in the frequency domain between the entire input image and the eigenvectors, which increases detection speed over the normal PCA implementation in the spatial domain.
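
The speedup rests on the correlation theorem: correlating a template with every position of an image can be done by pointwise multiplication of their spectra. A minimal NumPy sketch of that core idea follows (not El-Bakry's full detector; the toy image and patch are synthetic):

    import numpy as np

    def cross_correlate_fft(image, template):
        """Correlate a 2-D template (e.g. an eigenvector reshaped to an image patch)
        with the whole image via the frequency domain."""
        shape = (image.shape[0] + template.shape[0] - 1,
                 image.shape[1] + template.shape[1] - 1)
        F_image = np.fft.rfft2(image, shape)
        F_template = np.fft.rfft2(template, shape)
        # Conjugating one spectrum turns convolution into correlation
        return np.fft.irfft2(F_image * np.conj(F_template), shape)

    # Toy usage: the peak of the correlation surface marks the best-matching position.
    rng = np.random.default_rng(0)
    image = rng.random((240, 320))
    template = image[100:120, 150:170]          # a patch cut from the image itself
    corr = cross_correlate_fft(image - image.mean(), template - template.mean())
    print("correlation peak near", np.unravel_index(np.argmax(corr), corr.shape))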

Wangmeng Zuo et al (2006) have described, in "Combination of two novel LDA-based methods for face recognition", the combination of two LDA-based methods which perform LDA on distinctly different subspaces, which may be effective in further improving recognition performance. Their Fisherface technique uses a 2D Gaussian filter to smooth the classical Fisherfaces.

Xiaoxun Zhang and Yunde Jia (2007) have explained, in "A linear discriminant analysis framework based on random subspace for face recognition" (Pattern Recognition), how the principal subspace, the optimal reduced dimension of the face samples, is used to construct a random subspace in which all the discriminative information in the face space is distributed in the two principal subspaces of the within-class and between-class matrices.

Moshe Butman and Jacob Goldberger (2008) have introduced a face recognition algorithm based on a linear subspace projection in "Face Recognition Using Classification-Based Linear Projections". The subspace is found by utilizing a variant of the neighborhood component analysis (NCA) algorithm, a recently introduced supervised dimensionality reduction method.

Changjun Zhou et al (2010) have introduced a feature fusion method for face recognition based on Fisher's Linear Discriminant (FLD) in "Features Fusion Based on FLD for Face Recognition". The method extracts features by employing Two-Dimensional Principal Component Analysis (2DPCA) and Gabor wavelets, and then fuses the two sets of features using FLD.

Hui Kong, Lei Wang et al (2005) have explained in their paper "Framework of 2D Fisher Discriminant Analysis: Application to Face Recognition with Small Number of Training Samples" that 2D Fisher Discriminant Analysis (2D-FDA), which comes in unilateral and bilateral forms, is different from the 1D-LDA based approaches: 2D-FDA operates on 2D image matrices rather than column vectors, so the image matrix does not need to be transformed into a long vector before feature extraction.

Yanwei Pang et al (2004) have proposed "A Novel Gabor-LDA Based Face Recognition Method", in which a face recognition method based on Gabor wavelets and Linear Discriminant Analysis (LDA) is presented. These are used to determine salient local features, whose positions are specified by the discriminant pixels. Because the number of discriminant pixels is much smaller than the number of pixels in the whole image, the number of Gabor wavelet coefficients is greatly reduced.
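
A small sketch of that idea, building a Gabor filter bank with OpenCV and keeping responses only at a handful of pixel positions (the kernel parameters and pixel positions below are arbitrary placeholders, not the discriminant pixels learned in the paper):

    import cv2
    import numpy as np

    def gabor_bank(ksize=31, wavelengths=(4, 8, 16), orientations=8):
        """Build a bank of Gabor kernels at several scales and orientations."""
        kernels = []
        for lambd in wavelengths:                      # wavelength of the sinusoid
            for k in range(orientations):
                theta = k * np.pi / orientations       # filter orientation
                kernels.append(cv2.getGaborKernel((ksize, ksize), sigma=lambd / 2.0,
                                                  theta=theta, lambd=lambd,
                                                  gamma=0.5, psi=0))
        return kernels

    def gabor_features(gray_face, pixels, kernels):
        """Filter the face with every kernel but keep responses only at the selected
        pixel positions, shrinking the feature vector drastically."""
        responses = [cv2.filter2D(gray_face, cv2.CV_32F, k) for k in kernels]
        return np.array([r[y, x] for r in responses for (y, x) in pixels])

    # 24 kernels sampled at 4 pixels gives 96 coefficients instead of 24 x 112 x 92.
    face = np.random.rand(112, 92).astype(np.float32)   # stand-in for an ORL-sized image
    pixels = [(30, 25), (30, 66), (55, 46), (80, 46)]   # hypothetical key positions
    print(gabor_features(face, pixels, gabor_bank()).shape)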

Xiang et al (2004) have reported in "Face Recognition using recursive Fisher Linear Discriminant with Gabor wavelet coding" that the constraint on the total number of features available from the Fisher Linear Discriminant (FLD) has seriously limited its application to a large class of problems. In order to overcome this disadvantage of FLD, a recursive procedure for calculating the discriminant features is suggested. Work is currently in progress to study the various design issues of face recognition, with the objective of achieving a 99% accuracy rate for identity recognition on all the widely used databases, and at least 80% accuracy for facial expression recognition on the Yale database.

Juwei Lu et al (2003) have shown in "Regularization Studies on LDA for Face Recognition" that the applicability of Linear Discriminant Analysis (LDA) to high-dimensional pattern classification tasks such as face recognition (FR) often suffers from the so-called small sample size (SSS) problem, arising from the small number of available training samples compared to the dimensionality of the sample space. The effectiveness of the proposed method has been demonstrated through experimentation using the FERET database.
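
The regularization scheme of Juwei Lu et al is not reproduced here, but the general remedy for the SSS problem, regularizing (shrinking) the poorly conditioned within-class scatter estimate, can be sketched with scikit-learn's shrinkage LDA on synthetic high-dimensional data:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Small-sample-size setting: far fewer samples per class than feature dimensions.
    rng = np.random.default_rng(0)
    n_classes, per_class, dim = 10, 3, 200
    means = rng.normal(scale=3.0, size=(n_classes, dim))
    X = np.vstack([rng.normal(means[c], 1.0, size=(per_class, dim))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), per_class)

    # Plain LDA would have to invert a (near-)singular within-class scatter matrix;
    # shrinkage blends it with a scaled identity matrix to keep it well conditioned.
    lda = LinearDiscriminantAnalysis(solver="eigen", shrinkage="auto")
    lda.fit(X, y)
    print("training accuracy with shrinkage LDA: %.2f" % lda.score(X, y))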

Chengjun Liu and Harry Wechsler (2002) have reported in "Gabor Feature Based Classification (GFC) using the Enhanced Fisher Linear Discriminant Model for Face Recognition" that the feasibility of the proposed GFC method has been successfully tested on face recognition using a data set from the FERET database, which is a standard testbed for face recognition technologies.

2.3 REVIEW OF NEURO AND FUZZY BASED FACE RECOGNITION

Rowley et al (1998) have provided a neural network-based upright frontal face detection system in "Neural Network-Based Face Detection". To collect negative examples, a bootstrap algorithm is used which adds false detections into the training set as training progresses.

Jianming Lu et al (2007) have presented a new method of face recognition based on fuzzy clustering and parallel neural networks, built on a neuro-fuzzy system. The face patterns are divided among several small-scale parallel neural networks based on fuzzy clustering, and their outputs are combined to obtain the recognition result.

Yu et al (2001) have discussed the combination of multiple Fisher classifiers for face recognition based on grouping AdaBoost-selected Gabor features. The key issue in using Gabor features is how to efficiently reduce their high dimensionality: a Gabor-based representation remains too high-dimensional even after being reduced by some feature selection methods. In order to increase the total dimension of the FDA subspace, the AdaBoosted Gabor features are regrouped into smaller feature subsets.

Hongzhou Zhang et al (2007) have implemented face recognition under different poses by reconstructing frontal-view features using a linear transformation in "Face Recognition Using Feature Transformation". Fei Zuo and Peter (2008) have introduced a fast face detector with an efficient architecture in "Cascaded face detection using neural network ensembles", based on a hierarchical cascade of neural network ensembles with which enhanced detection accuracy and efficiency are achieved.

Dmitry Bryliuk and Valery Starovoitov (2002), in "Access Control by Face Recognition Using Neural Networks", have considered a Multilayer Perceptron Neural Network (NN) for access control based on face image recognition. The robustness of NN classifiers with respect to False Acceptance and False Rejection errors is studied, and a new thresholding approach for rejecting unauthorized persons is proposed.

Keun-Chang Kwak et al (2007) employed Fisher-based fuzzy integral and wavelet decomposition methods for face recognition at the University of Alberta. Shiguang Shan et al (2004) dealt with the Gabor wavelet for face recognition from the angle of its robustness to misalignment. Jun Zhang et al (1997) have compared three recently proposed algorithms for face recognition, eigenfaces, auto-association and classification neural nets, and elastic matching, in "Face Recognition: Eigenfaces, Elastic Matching, and Neural Nets".

Smach et al (2005) have implemented a classifier based on Multi-Layer Perceptron (MLP) neural networks for face detection in "Design of a Neural Networks Classifier for Face Detection". The MLP is used to classify face and non-face patterns. A hardware implementation is then achieved using a VHDL-based methodology: the system was implemented in VHDL and synthesized using the Leonardo synthesis tool. The model's robustness was obtained with a back-propagation learning algorithm.
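
A software-only analogue of such a face/non-face MLP is sketched below (a hedged illustration, not Smach et al's VHDL design: the 19x19 patch size is a common choice in face-detection datasets, and random arrays stand in for real labelled patches):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Assume 19x19 grayscale patches flattened to 361-dimensional vectors,
    # labelled 1 (face) or 0 (non-face); random data stands in for a real set.
    rng = np.random.default_rng(0)
    X = rng.random((1000, 19 * 19))
    y = rng.integers(0, 2, size=1000)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A small multi-layer perceptron trained by back-propagation.
    mlp = MLPClassifier(hidden_layer_sizes=(64, 16), activation="relu",
                        max_iter=300, random_state=0)
    mlp.fit(X_train, y_train)
    print("held-out accuracy (chance level on this random stand-in data): %.2f"
          % mlp.score(X_test, y_test))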

According to Kakarwal et al (2009) in "Information Theory and Neural Network based Approach for Face Recognition", it is important to select invariant facial features, especially for faces with various pose and expression changes. The work presents novel feature extraction techniques such as entropy and mutual information. For classification, a feed-forward neural network is used, which performs better than traditional methods at accurately recognizing faces.

Gaile (1992) has explored the use of morphological operators for feature extraction in range images and curvature maps of the human face in "Application of Morphology to Feature Extraction for Face Recognition". The paper describes general procedures for locating features defined by the configuration of extrema in principal curvature. A novel connection technique based on the concept of a constrained skeleton was also introduced; because this technique is based on a proximity rule defined by a structuring element, it can be used successfully for a variety of applications.

Sushmita Mitra and Sankar (2005) have explained, in "Fuzzy sets in pattern recognition and machine intelligence", that fuzzy sets are well suited to modeling the different forms of uncertainty and ambiguity often encountered in real life. Fuzzy set theory is the oldest and most widely reported component of present-day soft computing, which deals with the design of flexible information processing systems.

Vonesch et al (2005) have illustrated the flexibility of the proposed design method in "Generalized bi-orthogonal Daubechies wavelets". Most importantly, it is possible to incorporate a priori knowledge of the characteristics of the signals to be analyzed into the approximation spaces, via the exponential parameters.

Alaa Eleyar and Hasan Demiral (2007) have proposed a PCA and LDA based neural network for human face recognition. Lekshmi and Sasikumar (2009) have analysed both global and local information for facial expression recognition.

Fatma et al (2008) have discussed in "Comparison between Haar and Daubechies Wavelet Transformations on FPGA Technology" that the Daubechies wavelet is more complicated than the Haar wavelet. Daubechies wavelets are continuous; thus, they are more computationally expensive to use than the Haar wavelet. This wavelet type has balanced frequency responses but non-linear phase responses. Daubechies wavelets use overlapping windows, so the high-frequency coefficient spectrum reflects all high-frequency changes.

Manjunathi and Ma (1996) have suggested, in "Texture Features for Browsing and Retrieval of Image Data", a novel adaptive filter selection strategy for Gabor wavelets to reduce the image processing computations while maintaining a reasonable level of retrieval performance.

Hossein Sahoolizadeh et al (2008) have proposed a new hybrid method of Gabor wavelet faces using an extended NFS classifier in "Face Detection using Gabor Wavelets and Neural Networks". Down-sampled Gabor wavelet transforms of face images are used as features for face recognition; this subspace approach is superior to the pixel-value approach.

Yousra Ben Jemaa and Sana Khanfir (2009) have discussed in "Automatic local Gabor features extraction for face recognition" that a face is represented by its own Gabor coefficients expressed at the fiducial points (points of the eyes, mouth and nose). Three feature vectors are formed: the first is composed of geometrical distances automatically extracted between the fiducial points, the second of the responses of Gabor wavelets applied at the fiducial points, and the third of the combined information from the previous two vectors.

From the literature review, it is found that in the 1970s typical pattern classification techniques were used, measuring attributes of and distances between features in faces or face profiles. During the eighties, work on face recognition remained largely dormant. Since the early nineties, research interest in Face Recognition Technology has grown very significantly.

Over the last ten years, increased activity has been seen in tackling problems such as segmentation and location of a face in a given image and extraction of features such as the eyes, mouth, etc. Also, numerous advancements have been made in the design of statistical and neural network classifiers for face recognition. Many methods have been proposed in the literature for the facial recognition task. However, all of them still have disadvantages, such as incompletely capturing face structure and face texture. Therefore, a combination of different algorithms that can integrate the complementary information should improve the efficiency of the entire system.

The development of face recognition over the past years allows an organization into three types of recognition algorithms, namely frontal, profile, and view-tolerant recognition, depending on the kind of images used and the recognition approach. While frontal recognition is certainly the classical approach, view-tolerant algorithms usually perform recognition in a more sophisticated fashion by taking into consideration some of the underlying physics, geometry and statistics. Profile schemes as stand-alone systems have a rather marginal significance for identification. However, they are very practical either for fast coarse pre-searches of large face databases to reduce the computational load for a subsequent, more sophisticated algorithm, or as part of a hybrid recognition scheme.

The following observations were made after surveying the research literature:

i Some features of the face or image subspace may be simultaneously invariant to all the variations that a face image may exhibit.

ii Given a larger number of training images, almost any technique will do better, and the number of test images will decide the performance.

These two factors are the major reasons why face recognition is not widely used in real-world applications.

The commercial applications range from static matching of photographs on credit cards, ATM cards, passports, driver's licenses and photo ID to real-time matching with still images or video image sequences for access control.

