Chapter, June 2014
DOI: 10.1007/978-3-319-07617-1_22

A 3D Facial Recognition System
Using Structured Light Projection

Miguel A. Vázquez and Francisco J. Cuevas
Centro de Investigaciones en Óptica, A.C.
Loma del Bosque 115, Col. Lomas del Campestre, CP. 37150
León, Guanajuato, México
{mvazquez,fjcuevas}@cio.mx

Abstract. In this paper, a facial recognition system is described which carries out the classification process by analyzing 3D information of the face. The process begins with the acquisition of the 3D face using structured light projection and the phase shifting technique. The faces are aligned with respect to a facial profile, and the region of the forehead, eyes and nose is segmented. The descriptors are obtained using the eigenfaces approach and the classification is performed by linear discriminant analysis. The main contributions of this work are: a) the application of structured light projection techniques for the calculation of the cloud of points related to the face, b) the use of the phase of the signal to perform recognition with 97% reliability, c) the use of the profile of the face in the alignment process and d) the robustness of the recognition process in the presence of gestures and facial expressions.

Keywords: Biometrics, facial recognition, structured light projection, pattern recognition, artificial vision, 3D face, 3D recovery.

1 Introduction

Face recognition is one of the main means of personal identification in our everyday social interaction; people focus their visual attention on facial features and expressions. Humans are able to recognize hundreds of faces, including people they have not seen for a long time, or faces under different lighting conditions, poses, facial expressions and accessories (Turk & Pentland, 1991). The development of automatic facial recognition systems has been a challenge for several disciplines such as computer science, artificial vision, pattern recognition and biometrics. Under controlled conditions, automatic face recognition systems are fast, accurate, economical and non-invasive. Under non-controlled conditions, however, they fail due to problems such as variations in scale, orientation, facial expression, lighting conditions, occlusions, and the presence or absence of accessories, among others (Cabello Pardos, 2004; Chenghua, Yunhong, Tieniu, & Long, 2004; Hwanjong, Ukil, Sangyoun, & Kwanghoon, 2006; Xue, Jianming, & Takashi, 2005; Zhang, 2010).

M. Polycarpou et al. (Eds.): HAIS 2014, LNAI 8480, pp. 241-253, 2014.
© Springer International Publishing Switzerland 2014


Traditionally, there are several methods to carry out the automatic identification of people, such as passwords, personal identification numbers (PIN), radio frequency identification (RFID) cards, keys, passports and driving licenses, among others. The disadvantage of these methods is that they rely on resources that can be lost, forgotten, shared, manipulated or stolen, with consequences such as economic loss, illegal access or card cloning (Arun, Karthik, & Anil, 2006; Saeed & Nagashima, 2012). On the other hand, identification techniques based on biometrics offer a more robust solution, since they use physical or behavioral traits that are unique, permanent and non-transferable. Physical features can be extracted from the eyes (iris, retina), hands (fingerprints, hand geometry, vascular patterns) or face, and behavioral traits such as gait, speech, handwriting, signature and keystroke dynamics can also be used (Zhang, 2010; Jain, Flynn, & Ross, 2008; Wayman, 2011).

In this paper, the design of a facial recognition system that uses the depth information of the face as a biometric pattern is presented. It is made up of the modules of a typical pattern recognition system and implements techniques from different areas such as biometrics, optics, machine vision, pattern recognition, geometry and statistics (Woźniak, Graña, & Corchado, 2014).

2 Development of the Face Verification System

The proposed face verification system is composed of several modules that operate systematically on the three-dimensional information of the face, as described in Figure 1.

Fig. 1. Design of the facial recognition system

2.1 Data Acquisition Module

The data acquisition module records a set of four intensity images by binary fringe projection; the 3D face data is then recovered by the four-step phase shifting method (Fu & Luo, 2011; Siva Gorthi & Rastogi, 2009).

The optical arrangement of the structured light projection system is shown in Figure 2. It requires a digital camera, a multimedia projector and a reference plane (C, P and R, respectively). The optical axes of the camera and the projector are parallel, coplanar and normal to the reference plane.

Fig. 2. Structured light projection setup

The binary pattern profile, generated by computer, is described by Equation (1):

I_k(x, y) = 1 if mod(x + k·p/4, p) < p/2, and 0 otherwise    (1)

where k is the sequential number of the fringe pattern, p is the period of the signal, and mod(·, p) is the modulus with respect to the signal period. Fringe patterns generated by Equation (1) are consecutively projected on the reference plane and on the surface of the face, while they are recorded by a digital camera.
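The pattern generation can be sketched in a few lines. This is an illustrative implementation only: the image size, the period p = 16 pixels and the quarter-period shift per step are assumptions chosen for the example.

```python
import numpy as np

def binary_fringe_pattern(width, height, p=16, k=0):
    """Binary fringe pattern of period p pixels, shifted by k quarter-periods
    along x, following the form of Equation (1)."""
    x = np.arange(width)
    row = np.where(np.mod(x + k * p // 4, p) < p // 2, 1.0, 0.0)
    return np.tile(row, (height, 1))  # identical rows: vertical fringes

# the four patterns projected consecutively for four-step phase shifting
patterns = [binary_fringe_pattern(640, 480, p=16, k=k) for k in range(4)]
```

Each successive pattern is the previous one displaced by p/4 pixels, which after lens blurring yields the four quarter-period phase shifts required by the reconstruction module.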

(a)

(b)

(c)

(d)

Fig. 3. (a) Binary fringe pattern generated by computer, (b) computer-generated binary fringe pattern profile in row 60, (c) image acquired by a digital camera and (d) sinusoidal fringe profile acquired in row 60

The projected binary pattern has some advantages over the projection of a sinusoidal light pattern. A projected binary pattern is insensitive to intensity changes of the video projector, and the registered fringe profile adopts a sinusoidal shape due to the blurring effect introduced by the lenses of the capture system. Figure 3 shows (a) the binary pattern generated by computer, (b) the computer-generated binary pattern profile in row 60, (c) the image recorded by a digital camera and (d) the profile of the recorded image in row 60. It can be seen that the profile of the acquired pattern actually describes a sinusoidal function.
2.2 Three-Dimensional Reconstruction Module

The goal of this module is to obtain the three-dimensional model of the face (3D model). It is approximated by the fringe projection technique with the four-step phase-shifting method (Fu & Luo, 2011). Each image is described by Equation (2):

I_n(x, y) = a(x, y) + b(x, y) cos[φ(x, y) + nδ],  n = 0, 1, 2, 3    (2)

where a(x, y) is the background illumination, b(x, y) is the modulation factor, φ(x, y) is the phase associated with the shape of the face, δ = π/2 is the phase shift between consecutive fringe patterns and n is the sequential number of the phase shift and capture. The phase of the signal is demodulated to detect the face shape from Equation (3):

φ(x, y) = tan⁻¹[ (I_3(x, y) − I_1(x, y)) / (I_0(x, y) − I_2(x, y)) ]    (3)

The result of Equation (3) is wrapped, as can be seen in Figure 4(b), so it is necessary to apply a phase unwrapping algorithm to obtain the continuous phase. The phase unwrapping algorithm uses quality maps and discrete routes (Arevalillo-Herráez, Burton, Lalor, & Gdeisat, 2002).
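Numerically, the demodulation of Equation (3) is a single arctangent per pixel. The sketch below uses a synthetic phase map (an assumption made only for the example) and NumPy's simple line-by-line `np.unwrap` as a stand-in for the quality-guided unwrapping algorithm cited above:

```python
import numpy as np

def demodulate_four_step(i0, i1, i2, i3):
    """Wrapped phase from four images I_n = a + b*cos(phi + n*pi/2)  (Eqs. 2-3)."""
    return np.arctan2(i3 - i1, i0 - i2)

# synthetic test surface: a smooth phase map and its four shifted images
a, b = 0.5, 0.4
phi = np.linspace(-3.0, 3.0, 100).reshape(10, 10)
images = [a + b * np.cos(phi + n * np.pi / 2) for n in range(4)]

wrapped = demodulate_four_step(*images)          # values fall in (-pi, pi]
unwrapped = np.unwrap(np.unwrap(wrapped, axis=1), axis=0)
```

In the real system, the unwrapped phase of the reference plane would then be subtracted from the unwrapped phase of the face to obtain the 3D model.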

(a)

(b)

(c)

Fig. 4. (a) Images recorded with the phase shifting method, (b) wrapped phase associated with the surface of the face and (c) unwrapped phase associated with the surface of the face (3D model)

The phase of the face is denoted by φ_f(x, y) and the phase of the reference plane by φ_r(x, y). When the difference of the unwrapped phases, Δφ(x, y) = φ_f(x, y) − φ_r(x, y), is calculated, the three-dimensional model of the face is obtained (Figure 4(c)).
2.3 Alignment and Segmentation Module

The alignment process orients the position of the faces with respect to a reference face. This process is carried out by means of the ICP algorithm, which minimizes the distance between the face object and the face model through the iterative calculation of the transformation matrix (Besl & McKay, 1992). The transformation matrix is calculated from the vertical profile of a face model and the tear ducts of the eyes, instead of using all points on the surface of the face.

The tear ducts of the eyes are located from the analysis of the surface using the mean and Gaussian curvature classifier (Colombo, Cusano, & Schettini, 2006). The base and the domus of the nose are marked as samples in Figure 5(a). The straight line which passes through the domus and the base of the nose defines the upright profile of the face, as shown in Figures 5(b) and (c).

The face object is aligned with the face model by computing the transformation matrix with the ICP algorithm, applied to the facial profile. Figure 6 shows (a) the surface of a face before alignment and (b) the surface of the face after alignment.
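The Besl & McKay reference gives the full ICP derivation; the following is only a toy sketch of one possible implementation (brute-force nearest neighbours plus the SVD-based rigid transform), not the authors' implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (SVD/Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def icp(src, dst, iters=20):
    """Iteratively align the profile points src to the model profile dst."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]          # nearest neighbour of each point
        r, t = best_rigid_transform(cur, matched)
        cur = cur @ r.T + t
    return cur
```

Running ICP on the short profile instead of the full point cloud is what keeps the search for the transformation matrix cheap, as the text notes.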
Finally, the segmentation process is applied to isolate the forehead, the eyes and the nose using a binary mask. The process is done according to Equation (4):

z_seg(x, y) = z_ob(x, y) · m(x, y)    (4)

where z_seg(x, y) is the segmented face object, z_ob(x, y) is the face object and m(x, y) is the binary mask. The result is described in Figure 7.
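Equation (4) is an element-wise product; the sketch below applies it to a small synthetic depth map (the shapes and values are assumptions made only for the example):

```python
import numpy as np

def segment_face(z_ob, mask):
    """Eq. (4): z_seg = z_ob * m, keeping only the forehead/eyes/nose region."""
    return z_ob * mask

z_ob = np.arange(12.0).reshape(3, 4)     # toy depth map
mask = np.zeros((3, 4))
mask[1:, 1:3] = 1.0                      # toy binary mask
z_seg = segment_face(z_ob, mask)
```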

(a)

(b)

(c)

Fig. 5. (a) Landmarks over the face, (b) profile overlapped on the face and (c) depth of the face profile

(a)

(b)

Fig. 6. Example of a face surface (a) before alignment and (b) after alignment

(a)

(b)

(c)

Fig. 7. Face segmentation process: (a) binary mask, (b) 3D model before segmentation, (c) 3D model after segmentation

2.4 Feature Extraction Module

The descriptors of the face are obtained using the technique of Principal Component Analysis (PCA), or eigenfaces (Turk & Pentland, 1991; Chenghua, Yunhong, Tieniu, & Long, 2004; Xue, Jianming, & Takashi, 2005). It is based on the analysis of the variability of the depth information of the face, which reduces the dimension of the original data set. Initially, the 3D face data is stored in a vector Φ of M elements, and then PCA reduces these elements to a vector of N elements (N << M). The descriptor of a face is determined from Equation (5):

Ω = U^T Φ    (5)

where U contains the principal components of the covariance matrix of all the training faces and Φ is the data of the face (Turk & Pentland, 1991).

2.5 Classification/Recognition Module
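A minimal sketch of the eigenfaces computation via the SVD of the centred training data (the matrix sizes are assumptions for the example, and the function names are illustrative, not from the paper):

```python
import numpy as np

def fit_eigenfaces(train, n_components):
    """train: (num_faces, M) matrix with one flattened 3D face per row.
    Returns the mean face and U, whose columns are the top principal components."""
    mean = train.mean(axis=0)
    # SVD of the centred data yields the eigenvectors of the covariance matrix
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    u = vt[:n_components].T                      # shape (M, N), N << M
    return mean, u

def describe(face, mean, u):
    """Eq. (5): descriptor of a face, a vector of N elements."""
    return u.T @ (face - mean)

rng = np.random.default_rng(1)
train = rng.normal(size=(20, 50))                # 20 training faces of M = 50 points
mean, u = fit_eigenfaces(train, n_components=5)
omega = describe(train[0], mean, u)
```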

The classification and recognition processes are performed by the Linear Discriminant Analysis technique (Krzanowski, 1988; Seber, 1984; Gonzalez & Woods, 2008; Bow, 1984; Duda, Hart, & Stork, 2001; Andrews, 1972). It divides the feature space into mutually exclusive regions, where each region defines the area of influence of a class. The classification process identifies a set of discriminant functions from which decision functions are calculated. The descriptors are then classified by evaluating the decision functions.

The linear function that defines the decision surface between two adjacent classes is described by Equation (6):

d_ij(Ω) = d_i(Ω) − d_j(Ω) = Ω^T (m_i − m_j) − (1/2)(m_i^T m_i − m_j^T m_j)    (6)

where d_ij(Ω) is the decision surface between the classes ω_i and ω_j (i ≠ j), and m_i and m_j are the prototypes of the classes ω_i and ω_j, respectively. The descriptor Ω is evaluated for each of the functions and is assigned to the class ω_i if d_ij(Ω) > 0; otherwise it is assigned to the class ω_j (Bow, 1984; Gonzalez & Woods, 2008).
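Following the prototype formulation of Gonzalez & Woods that the section cites, a sketch of the pairwise decision rule (the prototype values are assumptions made only for the example):

```python
import numpy as np

def decision(omega, m):
    """d_i(omega) = omega^T m_i - 0.5 * m_i^T m_i for a class prototype m_i."""
    return float(omega @ m - 0.5 * m @ m)

def classify(omega, prototypes):
    """Assign omega to the class with the largest decision value, which is
    equivalent to requiring d_ij(omega) > 0 against every other class j (Eq. 6)."""
    return int(np.argmax([decision(omega, m) for m in prototypes]))

prototypes = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
```

Maximizing d_i is equivalent to choosing the nearest prototype, since d_i differs from the negative half squared distance to m_i only by a term independent of the class.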
2.6 Decision Process

The validation of the classification of the descriptors is carried out by comparing the mean squared error (ECM) against a fixed threshold. The threshold is defined from the calculation of the ECM of the classification of a set of 3D models that are not registered in the system. In this way, if the error is less than the set threshold, the identification process considers that the face belongs to the class ω_i:

ECM < threshold    (7)
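The decision step then reduces to a threshold comparison. A sketch, assuming the ECM is computed between the face descriptor and the prototype of its assigned class (the threshold value is the one reported in the Results section):

```python
import numpy as np

THRESHOLD = 0.3682   # rejection threshold reported in the Results section

def ecm(omega, prototype):
    """Mean squared error between a descriptor and a class prototype."""
    return float(np.mean((omega - prototype) ** 2))

def accept(omega, prototype, threshold=THRESHOLD):
    """Eq. (7): the claimed identity is accepted only when ECM < threshold."""
    return ecm(omega, prototype) < threshold
```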

3 Results

A total of 173 facial models were recorded in the facial verification system, corresponding to 47 users with an average of 4 faces per person. Figure 8 shows samples of the recorded 3D models; they are encoded in grey levels for viewing in 2D.

Fig. 8. Samples of three-dimensional models

Two sets were considered for the identification tests: set A, consisting of 9 users not registered in the system, and set B, consisting of all the users registered in the system. Table 1 summarizes the results.
Table 1. Results of the classification of users registered and unregistered in the facial recognition system

Set | Num. of users | Acceptance Num. (%) | Reject Num. (%) | False positive Num. (%) | False negative Num. (%)
A   | 9             | 0 (0)               | 9 (100)         | 0 (0)                   | 0 (0)
B   | 47            | 46 (97.87)          | 0 (0)           | 0 (0)                   | 1 (2.13)

The system is capable of correctly classifying people with different facial expressions, since the analysis of the face is only done in regions that suffer minimal variations under these changes, as shown in the next four examples. Images with facial expressions and of people wearing accessories such as glasses were included.
Example 1. User ID: 0047

(a)

(b)

Fig. 9. (a) Face object and (b) set of training faces

Example 2. User ID: 0022

(a)

(b)

Fig. 10. (a) Face object and (b) set of training faces

Example 3. User ID: 0031

(a)

(b)

Fig. 11. (a) Face object and (b) set of training faces

Example 4. User ID: 0004

(a)

(b)

Fig. 12. (a) Face object and (b) set of training faces


The results of the classification are presented in Table 2. It is worth mentioning that the error threshold for rejection is ECM = 0.3682. The threshold was computed from the classification of faces whose identity is not registered in the system.
Table 2. Results of the classification of examples 1-4

Example | ID_User | ID_find | ECM    | Result   | Num. of training faces
1       | 0022    | 0022    | 0.2547 | Positive | 20
2       | 0047    | 0047    | 0.3109 | Positive | 6
3       | 0031    | 0031    | 0.1809 | Positive | 2
4       | 0004    | 0004    | 0.2279 | Positive | 3

In example 1, it can be seen that one of the faces of the training set has reconstruction errors on the side of the cheek, jaw and lips; this is because the user moved during scanning, altering the fringe pattern. Even so, the face was classified correctly. In example 2, the important aspect to highlight is that one of the training faces has a slight rotation; the classification is positive. In examples 3 and 4, the users have only two and three training images, respectively; despite this, example 3 generated the lowest ECM. In example 4 the face was digitized with glasses, while the training faces were digitized without them; despite the combination of variations between the object face and the training faces, the classification is satisfactory.

4 Conclusions

A facial recognition system was introduced that assigns the user identity from the analysis of the variation of the depth information of the surface of the face, which is obtained by using structured light projection and the phase shifting technique. The facial recognition system has proven reliable and robust: it effectively identified 97.87% of the users registered in the database, while 2.13% resulted in a false negative error. It is important to emphasize that the system is able to assign the identity of persons with different facial expressions, because the analysis of the depth information is only done in the regions of the face that present minimal variation (Heseltine, Pears, & Austin, 2004). It is worth mentioning that the proposed alignment process reduces the computational load and the processing time needed to find the optimal transformation matrix. Applications of the developed system range, in principle, from entry/exit control in the business area and control of virtual access to computer resources, to control of physical access to areas restricted to authorized personnel.

In the last decade many systems related to 3D facial recognition have been developed, such as systems that analyze points, lines and regions of the face surface (Gordon, 1991), with verification rates of 83.3%-91.7%. On the other hand, systems that analyze the entire information of the face have also been developed, as in Chenghua et al., Russ et al., Yunqui et al. and Heseltine et al.; these systems can recognize between 69% and 100%, but they use the entire information of the face, which requires many computational resources. The system that we propose uses only a small region of the face, which optimizes time and computational resources.

References

Andrews, H.C.: Introduction to Mathematical Techniques in Pattern Recognition. Wiley-Interscience, Canada (1972)
Arevalillo-Herráez, M., Burton, D.R., Lalor, M.J., Gdeisat, M.A.: Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path. Applied Optics 41(35), 7437-7444 (2002)
Arun, A.R., Karthik, N., Anil, K.J.: Handbook of Multibiometrics. Springer, New York (2006)
Besl, P.J., McKay, N.D.: A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 14(2), 239-256 (1992)
Bow, S.-T.: Pattern Recognition: Application to Large Data-Set Problems. Electrical Engineering and Electronics, Pennsylvania (1984)
Cabello Pardos, E.: Técnicas de reconocimiento facial mediante redes neuronales. Departamento de Tecnología Fotónica, Facultad de Informática, Madrid (2004)
Chenghua, X., Yunhong, W., Tieniu, T., Long, Q.: A new attempt to face recognition using 3D eigenfaces. In: Proc. ACCV 2004, pp. 884-889 (2004)
Colombo, A., Cusano, C., Schettini, R.: 3D face detection using curvature analysis. Pattern Recognition 39(3), 445-455 (2006)
Duda, R., Hart, P., Stork, D.: Pattern Classification. Wiley-Interscience (2001)
Fu, Y., Luo, Q.: Fringe projection profilometry based on a novel phase shift method. Optics Express 19(22) (2011)
Gonzalez, R., Woods, R.: Digital Image Processing. Pearson Prentice Hall, New Jersey (2008)
Gordon, G.G.: Face recognition from depth maps and surface curvature. In: Conference on Geometric Methods in Computer Vision, pp. 234-247. SPIE, San Diego (1991)
Heseltine, T., Pears, N., Austin, J.: Three-dimensional face recognition: An eigensurface approach. In: International Conference on Image Processing. IEEE, Singapore (2004)
Jain, A., Flynn, P., Ross, A.: Handbook of Biometrics. Springer, New York (2008)
Krzanowski, W.J.: Principles of Multivariate Analysis: A User's Perspective. Oxford University Press, New York (1988)
Kyong, K.I., Bowyer, K.W., Flynn, P.J.: Multiple nose region matching for 3D face recognition under varying facial expression. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1695-1700 (2006)
Russ, T., Boehnen, C., Peters, T.: 3D face recognition using 3D alignment for PCA. In: Conference on Computer Vision and Pattern Recognition. IEEE Computer Society (2006)
Saeed, K., Nagashima, T.: Biometrics and Kansei Engineering. Springer, New York (2012)
Seber, G.: Multivariate Observations. John Wiley & Sons, Hoboken (1984)
Siva Gorthi, S., Rastogi, P.: Fringe projection techniques: Whither we are? Optics and Lasers in Engineering 48(2), 133-140 (2009)
Song, H., Yang, U., Lee, S., Sohn, K.: 3D face recognition based on facial shape indexes with dynamic programming. In: Zhang, D., Jain, A.K. (eds.) ICB 2005. LNCS, vol. 3832, pp. 99-105. Springer, Heidelberg (2005)
Turk, M., Pentland, A.: Eigenfaces for recognition. Journal of Cognitive Neuroscience 3(1), 71-86 (1991)
Wayman, J.: Introduction to Biometrics. Springer, New York (2011)
Woźniak, M., Graña, M., Corchado, E.: A survey of multiple classifier systems as hybrid systems. Information Fusion 16, 3-17 (2014)
Xue, Y., Jianming, L., Takashi, Y.: A method of 3D face recognition based on principal component analysis algorithm. In: IEEE International Symposium on Circuits and Systems, ISCAS 2005 (2005)
Yunqui, L., Haibin, L., Qingmin, L.: Geometric features of 3D face and recognition of it by PCA. Journal of Multimedia 6(2) (April 2002)
Zhang, C.: A survey of recent advances in face detection. Microsoft Corporation (2010)
