Face Recognition using Neural Networks

P.Latha

plathamuthuraj@gmail.com
Selection Grade Lecturer,
Department of Electrical and Electronics Engineering,
Government College of Engineering,
Tirunelveli- 627007
Dr.L.Ganesan

Assistant Professor,
Head of Computer Science & Engineering department,
Alagappa Chettiar College of Engineering & Technology,
Karaikudi- 630004
Dr.S.Annadurai
Additional Director, Directorate of Technical Education
Chennai-600025
Abstract
Face recognition is a biometric method that identifies a given face image using the main
features of the face. In this paper, a neural-network-based algorithm is presented to
recognize frontal views of faces. The dimensionality of the face image is reduced by
Principal Component Analysis (PCA) and recognition is performed by a Back Propagation
Neural Network (BPNN). Here, 200 face images from the Yale database are taken, and
performance metrics such as acceptance ratio and execution time are calculated.
Neural-network-based face recognition is robust and achieves an acceptance ratio of more
than 90%.
Key words: Face Recognition, Principal Component Analysis, Back Propagation Neural Network,
Acceptance Ratio, Execution Time
1. INTRODUCTION
A face recognition system [6] is a computer vision application that automatically identifies
a human face from database images. The face recognition problem is challenging, as it needs
to account for all possible appearance variations caused by changes in illumination, facial
features, occlusions, etc. This paper presents a PCA- and neural-network-based algorithm for
efficient and robust face recognition.
Holistic, feature-based and hybrid approaches are some of the approaches to face recognition.
Here, a holistic approach is used, in which the whole face region is taken into account as
input data. It is based on the principal component analysis (PCA) technique, which is used to
simplify a dataset into a lower dimension while retaining the characteristics of the dataset.
Pre-processing, principal component analysis and the back propagation neural algorithm are
the major components of this paper. Pre-processing is done for two purposes:
(i) to reduce noise and possible convolute effects of interfering systems,
(ii) to transform the image into a different space where classification may prove easier by
exploitation of certain features.
PCA is a common statistical technique for finding patterns in high-dimensional data [1].
Feature extraction, also called dimensionality reduction, is performed by PCA for three main
purposes:
i) to reduce the dimension of the data to more tractable limits,
ii) to capture salient class-specific features of the data,
iii) to eliminate redundancy.
Here, recognition is performed by both PCA and Back Propagation Neural Networks [3].
The BPNN mathematically models the behaviour of the feature vectors by appropriate
descriptions and then exploits the statistical behaviour of the feature vectors to define
decision regions corresponding to different classes. Any new pattern can be classified
depending on which decision region it falls in. All these processes are implemented for face
recognition, based on the basic block diagram shown in Fig. 1.

Fig. 1: Basic block diagram (pre-processed input image → PCA → BPNN → classified output image)
The algorithm for face recognition using the neural classifier is as follows:
a) Pre-processing stage: images are made zero-mean and unit-variance (a minimal sketch of this
stage is given below).
b) Dimensionality reduction stage: the input data is reduced by PCA to a lower dimension to
facilitate classification.
c) Classification stage: the reduced vectors from PCA are applied to train the BPNN classifier
to obtain the recognized image.
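As an illustration of stage (a), the following is a minimal sketch of zero-mean, unit-variance
normalisation of flattened face images; the array layout and function name are assumptions for
illustration, not taken from the paper.

```python
import numpy as np

def preprocess(images):
    """Normalise each face image to zero mean and unit variance.

    images : array of shape (num_images, height * width), one flattened
             grayscale face per row (assumed layout).
    """
    images = images.astype(np.float64)
    mean = images.mean(axis=1, keepdims=True)        # per-image mean
    std = images.std(axis=1, keepdims=True) + 1e-8   # avoid division by zero
    return (images - mean) / std
```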
In this paper, Section 2 describes principal component analysis, Section 3 explains back
propagation neural networks, Section 4 presents the experiments and results, and the
subsequent sections give the conclusion and future development.
2. PRINCIPAL COMPONENT ANALYSIS
Principal component analysis (PCA) [2] involves a mathematical procedure that transforms a
number of possibly correlated variables into a smaller number of uncorrelated variables called
principal components. PCA is a popular technique for deriving a set of features for face
recognition.
Any particular face can be
(i) economically represented along the eigenpicture coordinate space, and
(ii) approximately reconstructed using a small collection of eigenpictures.
To do this, a face image is projected onto several face templates called eigenfaces, which can
be considered as a set of features that characterize the variation between face images. Once a
set of eigenfaces is computed, a face image can be approximately reconstructed using a
weighted combination of the eigenfaces. The projection weights form a feature vector for face
representation and recognition. When a new test image is given, the weights are computed by
projecting the image onto the eigenface vectors. Classification is then carried out by
comparing the distances between the weight vector of the test image and the weight vectors of
the images in the database. Conversely, using all of the eigenfaces extracted from the
original images, one can reconstruct the original image from the eigenfaces so that it matches
the original image exactly.
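As a hedged illustration of this reconstruction step, the sketch below rebuilds a face as the
mean face plus a weighted sum of eigenfaces; the variable names and shapes are assumptions.

```python
import numpy as np

def reconstruct_face(weights, eigenfaces, mean_face):
    """Approximate a face from its eigenface weights.

    weights    : (M',) projection coefficients of one face
    eigenfaces : (M', D) matrix whose rows are unit-norm eigenfaces
    mean_face  : (D,) average training face
    """
    return mean_face + weights @ eigenfaces
```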
2.1 PCA Algorithm
The algorithm used for principal component analysis is as follows.
(i) Acquire an initial set of M face images (the training set) and calculate the eigenfaces
from the training set, keeping only the M' eigenfaces that correspond to the highest
eigenvalues.
(ii) Calculate the corresponding distribution in M'-dimensional weight space for each known
individual, and calculate a set of weights based on the input image.
(iii) Classify the weight pattern as either a known person or unknown, according to its
distance to the closest weight vector of a known person.
Let the training set of face images be Γ_1, Γ_2, ..., Γ_M. The average face of the set is
defined by

\Psi = \frac{1}{M} \sum_{n=1}^{M} \Gamma_n \qquad (1)

Each face differs from the average by the vector

\Phi_i = \Gamma_i - \Psi \qquad (2)

The covariance matrix is formed by

C = \frac{1}{M} \sum_{n=1}^{M} \Phi_n \Phi_n^T = A A^T \qquad (3)

where the matrix A = [\Phi_1, \Phi_2, ..., \Phi_M].
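The construction in equations (1)-(3), together with the eigen-decomposition described next,
can be sketched as follows; the function and variable names are assumptions, and the usual
shortcut of diagonalising the small M x M matrix A^T A instead of the very large A A^T is
used.

```python
import numpy as np

def compute_eigenfaces(faces, num_components):
    """Compute the mean face and leading eigenfaces (equations (1)-(3)).

    faces : (M, D) array with one flattened training face per row.
    Returns the mean face Psi (D,) and num_components eigenfaces (num_components, D).
    """
    mean_face = faces.mean(axis=0)                       # equation (1): Psi
    phi = faces - mean_face                              # equation (2): rows are Phi_i
    # Equation (3): C = (1/M) sum Phi_n Phi_n^T = A A^T with A = [Phi_1 ... Phi_M].
    # Since C is D x D and D is large, diagonalise the much smaller M x M matrix
    # A^T A and map its eigenvectors back through A.
    small = phi @ phi.T / phi.shape[0]                   # A^T A / M, shape (M, M)
    eigvals, eigvecs = np.linalg.eigh(small)             # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:num_components]   # keep the largest eigenvalues
    eigenfaces = eigvecs[:, order].T @ phi               # map back to image space (u = A v)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces
```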
This set of large vectors is then subjected to principal component analysis, which seeks a set
of M orthonormal vectors u_1, ..., u_M. To obtain a weight vector Ω of contributions of the
individual eigenfaces to a facial image Γ, the face image is transformed into its eigenface
components projected onto the face space by the simple operation

\omega_k = u_k^T (\Gamma - \Psi) \qquad (4)
for k = 1, ..., M', where M' ≤ M is the number of eigenfaces used for the recognition. The
weights form a vector Ω = [ω_1, ω_2, ..., ω_{M'}] that describes the contribution of each
eigenface in representing the face image Γ, treating the eigenfaces as a basis set for face
images. The simplest method for determining which face provides the best description of an
unknown input facial image is to find the face k that minimizes the Euclidean distance ε_k

\epsilon_k = \lVert \Omega - \Omega_k \rVert^2 \qquad (5)
where Ω_k is the weight vector describing the kth face from the training set. A face is
classified as belonging to person k when ε_k is below some chosen threshold θ_ε; otherwise,
the face is classified as unknown.
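A hedged sketch of the projection of equation (4) and the nearest-neighbour test of equation
(5) is given below; it assumes the mean face and eigenfaces computed as in the earlier sketch,
and the threshold value is left to the caller.

```python
import numpy as np

def project_face(face, mean_face, eigenfaces):
    """Equation (4): omega_k = u_k^T (Gamma - Psi) for each eigenface u_k."""
    return eigenfaces @ (face - mean_face)

def classify_face(face, mean_face, eigenfaces, known_weights, threshold):
    """Equation (5): pick the known face whose weight vector is closest.

    known_weights : (num_known, M') weight vectors of the training faces.
    Returns the index of the closest known face, or -1 ('unknown') when the
    squared distance exceeds the chosen threshold.
    """
    omega = project_face(face, mean_face, eigenfaces)
    distances = np.sum((known_weights - omega) ** 2, axis=1)   # epsilon_k values
    best = int(np.argmin(distances))
    return best if distances[best] < threshold else -1
```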
The algorithm functions by projecting face images onto a feature space that spans the
significant variations among known face images. The projection operation characterizes an
individual face by a weighted sum of eigenface features, so to recognize a particular face it
is necessary only to compare these weights to those of known individuals. The input image is
matched to the subject from the training set whose feature vector is the closest within
acceptable thresholds. Eigenfaces have advantages over the other available techniques, such as
speed and efficiency. For the system to work well with PCA, the faces must be seen from a
frontal view under similar lighting.
3. NEURAL NETWORKS AND BACK PROPAGATION ALGORITHM
A successful face recognition methodology depends heavily on the particular choice of the
features used by the pattern classifier. Back-propagation is the best known and most widely
used learning algorithm for training multilayer perceptrons (MLP) [5]. An MLP refers to a
network consisting of a set of sensory units (source nodes) that constitute the input layer,
one or more hidden layers of computation nodes, and an output layer of computation nodes. The
input signal propagates through the network in a forward direction, from left to right, on a
layer-by-layer basis.
Back propagation is a multi-layer feed-forward, supervised learning network based on the
gradient descent learning rule. The BPNN provides a computationally efficient method for
changing the weights in a feed-forward network, with differentiable activation function units,
to learn a training set
of input-output data. Being a gradient descent method, it minimizes the total squared error of
the output computed by the net. The aim is to train the network to achieve a balance between
the ability to respond correctly to the input patterns that are used for training and the
ability to provide good responses to inputs that are similar.
3.1 Back Propagation Neural Network Algorithm
A typical back propagation network [4] with multi-layer, feed-forward, supervised learning is
shown in Figure 2. The learning process in back propagation requires pairs of input and target
vectors. The output vector 'o' is compared with the target vector 't'. In case of a difference
between the 'o' and 't' vectors, the weights are adjusted to minimize the difference.
Initially, random weights and thresholds are assigned to the network. These weights are
updated every iteration in order to minimize the mean square error between the output vector
and the target vector.
Fig. 2: Basic block of a back propagation neural network
The input to the hidden layer is given by

net_m = \sum_{z=1}^{n} x_z w_{mz} \qquad (6)

The units of the output vector of the hidden layer, after passing through the activation
function, are given by

h_m = \frac{1}{1 + \exp(-net_m)} \qquad (7)

In the same manner, the input to the output layer is given by

net_k = \sum_{z=1}^{m} h_z w_{kz} \qquad (8)

and the units of the output vector of the output layer are given by

o_k = \frac{1}{1 + \exp(-net_k)} \qquad (9)
For updating the weights, we need to calculate the error. This can be done by

E = \frac{1}{2} \sum_{i=1}^{k} (o_i - t_i)^2 \qquad (10)
where o_i and t_i represent the actual output and the target output at neuron i in the output
layer, respectively. If the error is smaller than a predefined limit, the training process
stops; otherwise the weights need to be updated. For the weights between the hidden layer and
the output layer, the change in weights is given by

\Delta w_{ij} = \alpha \, \delta_i \, h_j \qquad (11)
where α is a training rate coefficient restricted to the range [0.01, 1.0], h_j is the output
of neuron j in the hidden layer, and δ_i can be obtained by

\delta_i = o_i (1 - o_i)(t_i - o_i) \qquad (12)
Similarly, the change of the weights between the input layer and the hidden layer is given by

\Delta w_{ij} = \beta \, \delta_{Hi} \, x_j \qquad (13)

where β is a training rate coefficient restricted to the range [0.01, 1.0], x_j is the output
of neuron j in the input layer, and δ_{Hi} can be obtained by

\delta_{Hi} = x_i (1 - x_i) \sum_{j=1}^{k} \delta_j w_{ij} \qquad (14)

Here x_i is the output at neuron i in the input layer, and the summation term represents the
weighted sum of all the δ_j values corresponding to the neurons in the output layer obtained
above. After calculating the weight changes in all layers, the weights can simply be updated
by

w_{ij}(\mathrm{new}) = w_{ij}(\mathrm{old}) + \Delta w_{ij} \qquad (15)
This process is repeated until the error reaches a minimum value.
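To make equations (6)-(15) concrete, the following is a minimal, hedged NumPy sketch of one
forward pass and one weight update; the function and variable names are assumptions, and the
hidden-layer delta is computed from the hidden outputs h (the usual reading of equation (14))
rather than the raw inputs.

```python
import numpy as np

def sigmoid(x):
    """Activation used in equations (7) and (9)."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, w_output):
    """Forward pass of equations (6)-(9).

    x        : (n,) input feature vector (e.g. the PCA weights of a face)
    w_hidden : (m, n) input-to-hidden weights
    w_output : (k, m) hidden-to-output weights
    """
    h = sigmoid(w_hidden @ x)      # equations (6) and (7)
    o = sigmoid(w_output @ h)      # equations (8) and (9)
    return h, o

def squared_error(o, t):
    """Equation (10): E = 1/2 * sum_i (o_i - t_i)^2."""
    return 0.5 * np.sum((o - t) ** 2)

def backprop_step(x, h, o, t, w_hidden, w_output, alpha=0.4, beta=0.4):
    """One gradient-descent update following equations (11)-(15).

    alpha and beta are the training rate coefficients; 0.4 matches the learning
    rate listed later in the paper but is otherwise an assumed default.
    """
    delta_out = o * (1.0 - o) * (t - o)                      # equation (12)
    delta_hidden = h * (1.0 - h) * (w_output.T @ delta_out)  # equation (14), using h
    w_output = w_output + alpha * np.outer(delta_out, h)     # equations (11) and (15)
    w_hidden = w_hidden + beta * np.outer(delta_hidden, x)   # equations (13) and (15)
    return w_hidden, w_output
```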
3.2 Selection of Training Parameters
For the efficient operation of the back propagation network, the parameters used for training
must be selected appropriately.
Initial Weights
The initial weights influence whether the net reaches a global or a local minimum of the error
and, if so, how rapidly it converges. To get the best results, the initial weights are set to
random numbers between -1 and 1.
Training a Net
The motivation for applying a back propagation net is to achieve a balance between
memorization and generalization; it is not necessarily advantageous to continue training until
the error reaches a minimum value. The weight adjustments are based on the training patterns.
As long as the error for validation decreases, training continues. Whenever the error begins
to increase, the net is starting to memorize the training patterns, and at this point training
is terminated.
Number of Hidden Units
If the activation function can vary with the function, then it can be seen that an n-input,
m-output function requires at most 2n+1 hidden units. If more hidden layers are present, then
the calculation of the δ's is repeated for each additional hidden layer, summing all the δ's
for units in the previous layer that feed into the current layer for which δ is being
calculated.
Learning Rate
In a BPN, the weight change is in a direction that is a combination of the current gradient
and the previous gradient. A small learning rate is used to avoid major disruption of the
direction of learning when a very unusual pair of training patterns is presented.
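This combination of the current and previous gradient directions is essentially a momentum
term; a minimal sketch under that interpretation follows, with the momentum coefficient an
assumed illustrative value (the paper does not give one).

```python
def momentum_update(weight, gradient, previous_delta, learning_rate=0.4, momentum=0.9):
    """Combine the current gradient with the previous weight change.

    learning_rate follows the value listed below; momentum is an assumption.
    """
    delta = -learning_rate * gradient + momentum * previous_delta
    return weight + delta, delta
```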
The various parameters assumed for this algorithm are as follows (collected in the
configuration sketch below):
    No. of input units    = 1 feature matrix
    Accuracy              = 0.001
    Learning rate         = 0.4
    No. of epochs         = 400
    No. of hidden neurons = 70
    No. of output units   = 1
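The following is a hedged sketch of how these parameters might drive a training loop; it
reuses the forward, squared_error and backprop_step sketches above, and the structure and
names are illustrative rather than taken from the paper.

```python
import numpy as np

# Illustrative configuration mirroring the values listed above.
PARAMS = {
    "accuracy": 0.001,          # stop once the total squared error falls below this
    "learning_rate": 0.4,
    "num_epochs": 400,
    "num_hidden_neurons": 70,
}

def train_bpnn(features, targets, params=PARAMS, seed=0):
    """Minimal training loop over PCA feature vectors and target vectors.

    features : (N, n) PCA weight vectors, one per training face
    targets  : (N, k) target output vectors
    """
    rng = np.random.default_rng(seed)
    n, k = features.shape[1], targets.shape[1]
    m = params["num_hidden_neurons"]
    # Initial weights drawn from [-1, 1], as recommended in the text above.
    w_hidden = rng.uniform(-1.0, 1.0, size=(m, n))
    w_output = rng.uniform(-1.0, 1.0, size=(k, m))
    for _ in range(params["num_epochs"]):
        total_error = 0.0
        for x, t in zip(features, targets):
            h, o = forward(x, w_hidden, w_output)
            total_error += squared_error(o, t)
            w_hidden, w_output = backprop_step(
                x, h, o, t, w_hidden, w_output,
                alpha=params["learning_rate"], beta=params["learning_rate"])
        if total_error < params["accuracy"]:
            break
    return w_hidden, w_output
```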
 
The main advantage of this back propagation algorithm is that it can identify the given image
as a face image or a non-face image and then recognize the given input image. Thus, the back
propagation neural network classifies the input image as a recognized image.
4. EXPERIMENTATION AND RESULTS
In this paper, for experimentation, 200 images from the Yale database are taken; a sample of
20 face images is shown in Fig. 3. One of the images, shown in Fig. 4(a), is taken as the
input image. The mean image and the output image reconstructed by PCA are shown in Figs. 4(b)
and 4(c). For the BPNN, a training set of 50 images is shown in Fig. 5(a), and the eigenfaces
and the recognized output image are shown in Figs. 5(b) and 5(c).

Fig. 3: Sample Yale database images

Fig. 4: (a) Input image, (b) mean image, (c) image recognized by the PCA method

Fig. 5: (a) Training set, (b) eigenfaces, (c) image recognized by the BPNN method

Table 1 shows the comparison of acceptance ratio and execution time values for 40, 60, 120,
160 and 200 images of the Yale database. A graphical analysis of the same is shown in Fig. 6.
No. of Images | Acceptance ratio (%)      | Execution time (seconds)
              | PCA   | PCA with BPNN     | PCA   | PCA with BPNN
40            | 92.4  | 96.5              | 38    | 36
60            | 90.6  | 94.3              | 46    | 43
120           | 87.9  | 92.8              | 55    | 50
160           | 85.7  | 90.2              | 67    | 58
200           | 83.5  | 87.1              | 74    | 67

Table 1: Comparison of acceptance ratio and execution time for Yale database images


Fig. 6: Comparison of acceptance ratio and execution time (PCA vs. PCA with BPNN)
5. CONCLUSION
Face recognition has received substantial attention from researchers in the biometrics,
pattern recognition and computer vision communities. In this paper, face recognition using
eigenfaces has been shown to be accurate and fast. When the BPNN technique is combined with
PCA, non-linear face images can be recognized easily. Hence, it is concluded that this method
achieves an acceptance ratio of more than 90% and an execution time of only a few seconds.
Face recognition can be applied in security measures at airports, passport verification,
criminal-list verification in police departments, visa processing, verification of electoral
identification and card security at ATMs.
6. REFERENCES
[1] B.K. Gunturk, A.U. Batur and Y. Altunbasak, (2003) "Eigenface-domain super-resolution for
face recognition," IEEE Transactions on Image Processing, vol. 12, no. 5, pp. 597-606.
[2] M.A. Turk and A.P. Pentland, (1991) "Eigenfaces for Recognition," Journal of Cognitive
Neuroscience, vol. 3, pp. 71-86.
[3] T. Yahagi and H. Takano, (1994) "Face Recognition using neural networks with multiple
combinations of categories," International Journal of Electronics Information and
Communication Engineering, vol. J77-D-II, no. 11, pp. 2151-2159.
[4] S. Lawrence, C.L. Giles, A.C. Tsoi and A.D. Back, (1997) "Face Recognition: A
Convolutional Neural-Network Approach," IEEE Transactions on Neural Networks, vol. 8, no. 1,
pp. 98-113.
[5] C.M. Bishop, (1995) "Neural Networks for Pattern Recognition," London, U.K.: Oxford
University Press.
[6] Kailash J. Karande and Sanjay N. Talbar, "Independent Component Analysis of Edge
Information for Face Recognition," International Journal of Image Processing, Volume (3):
Issue (3), pp. 120-131.
