HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/
WWW.JOURNALOFCOMPUTING.ORG
Abstract—This paper presents an approach to enhance the performance of a face recognition system using a hybrid algorithm based on Artificial Neural Networks (ANN) optimized by a Genetic Algorithm (GA), providing robustness against rotational distortion. Although the traditional face recognition system is very sensitive to variations in face parameters, the proposed Neuro-Genetic hybrid system is found to be robust and performs well in improving the robustness and naturalness of human-computer interaction. In this work, we investigated two approaches to improve face recognition performance in the rotational face environment: the first extracts features from the face image using improved and efficient image pre-processing techniques, with a fuzzy membership function used for feature extraction. The second combines the extracted features in the Neuro-Genetic hybrid algorithm to improve the performance of the rotational face recognition system. Experimental results show the superiority of the proposed rotation-independent face recognition system across various orientations.
Index Terms— Human Computer Interaction, Rotational Face Recognition, Neuro-Genetic Hybrid Algorithm, Fuzzy
Membership Function, Facial Feature Extraction.
—————————— ——————————
1 INTRODUCTION

Fig. 1. Paradigm of the proposed rotation-independent face recognition system using the hybrid technique.

3 PROPOSED TECHNIQUE FOR THE FACE RECOGNITION SYSTEM

The first step in image pre-processing is image acquisition. To do so, an imaging sensor along with signal digitization capability has been used so that the captured image can be converted to digital form directly. After acquisition of the face image, the Stasm Active Shape Model (ASM) [25] has been used to detect the facial features. Then the binary image has been taken. The Region Of Interest (ROI) has been chosen according to the ROI selection [26], [27]. Lastly, the background noise has been eliminated [28] and finally the appearance-based facial features have been found. The facial image pre-processing steps are shown in Fig. 2.

Fig. 2. Facial image pre-processing for the proposed system (a)

BEGIN
Step 1. Start from the top-left corner and repeat for each column and row:
    If (sum of all black pixels) in the column/row > 0
        then save the column/row
        else do not save it (delete the column/row from the face image)
Step 2. Calculate and save the height and width of the reduced binary form of the face image
END

We calculate the center of the preprocessed image using the formula below:

    Center_x = (Width - 1) / 2 ;  Center_y = (Height - 1) / 2        (1)

Once we have got the center of the preprocessed face image, we have drawn circles with centroid (Center_x, Center_y) and various radii (r_max, ... , r_min) as shown in Fig. 3, and the sum of all black pixels, as well as of all black and white pixels (total pixels), belonging to each circle has been calculated. This sum forms a fuzzy set for the distorted face images. We have used fuzzy sets with a membership function as shown in Fig. 4, and we have calculated the membership values for each radius; those values are used by the neuro-genetic hybrid system for learning and classification. For this purpose we have used the following algorithm.

Algorithm for using the fuzzy membership function for feature extraction:

BEGIN
Step 1. For each black pixel (x, y), calculate the radius (r) using the following formula:

    r = sqrt((Center_x - x)^2 + (Center_y - y)^2)        (2)
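Assuming a NumPy binary image with 1 for black (foreground) pixels, the reduction algorithm, Eqs. (1)-(2), and the per-radius pixel sums described above can be sketched as follows. The one-pixel-wide rings and the triangular shape of the membership function are illustrative assumptions, since Figs. 3 and 4 are not reproduced here, and the dropped operator in Eq. (1) is taken to be a minus sign (the geometric center of a zero-indexed image):

```python
import numpy as np

def crop_to_content(binary):
    """Steps 1-2 of the reduction algorithm: keep only the rows and
    columns that contain at least one black (== 1) pixel."""
    rows = binary.sum(axis=1) > 0
    cols = binary.sum(axis=0) > 0
    return binary[np.ix_(rows, cols)]

def ring_features(binary, radii):
    """For each radius, gather the pixels whose distance from the image
    center (Eq. 1 / Eq. 2) lies in a one-pixel-wide ring, and return the
    fraction of black pixels among the total pixels of that ring."""
    height, width = binary.shape
    center_x = (width - 1) / 2.0           # Eq. (1)
    center_y = (height - 1) / 2.0
    ys, xs = np.indices(binary.shape)
    r = np.sqrt((center_x - xs) ** 2 + (center_y - ys) ** 2)  # Eq. (2)
    feats = []
    for radius in radii:
        ring = (r >= radius - 0.5) & (r < radius + 0.5)
        total = ring.sum()                 # black + white pixels in ring
        black = binary[ring].sum()         # black pixels in ring
        feats.append(black / total if total else 0.0)
    return np.array(feats)

def triangular_membership(values, peak=0.5):
    """Hypothetical triangular membership function standing in for the
    unavailable Fig. 4 curve: 1.0 at `peak`, falling linearly to 0.0 at
    black-pixel ratios of 0 and 1."""
    values = np.asarray(values, dtype=float)
    return np.where(values <= peak,
                    values / peak,
                    (1.0 - values) / (1.0 - peak))
```

A feature vector for the hybrid classifier would then be obtained as, e.g., `triangular_membership(ring_features(crop_to_content(img), range(1, r_max)))`, with `r_max` chosen from the size of the cropped image.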
TABLE 1
PERFORMANCE MEASUREMENT OF THE PROPOSED SYSTEM

8 CONCLUSION AND OBSERVATION

The experimental results show the versatility of the Neuro-Genetic hybrid algorithm based rotation-independent face recognition system. Critical parameters such as the gain term, speed factor, number of hidden-layer nodes, crossover rate, and number of generations have a great impact on the recognition performance of the proposed system. The optimum values of these parameters have been selected effectively to find the best performance. The highest recognition rates of the BPN and the GA have been achieved at 95% and 96%, respectively. For the VALID database, Table 1 shows the recognition accuracy achieved by the Neuro-Genetic hybrid system at different orientations. Therefore, the proposed system can be used for various security and access control purposes. The performance of the system can be further improved by using more efficient image pre-processing techniques. Finally, the performance of this proposed system can be evaluated against larger face recognition databases.

REFERENCES

[1] A. Jain, R. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society. Kluwer Academic Press, Boston, 1999.
[2] S. Akamatsu, "The research trend of face recognition by computer," The Journal of IEICE, vol. 80, no. 3, pp. 257-266, 1997.
[3] O. Hasegawa, S. Morisima, and M. Kameko, "Information processing of face," IEICE Trans. D-II, vol. 80, no. 8, pp. 2047-2065, 1997.
[4] M. Yachida, Robot Vision. Syokoudou Publishing, 1993.
[5] A. Samal and P.A. Iyengar, "Automatic recognition and analysis of human faces and facial expressions: a survey," Pattern Recognition, vol. 25, no. 1, pp. 65-77, 1992.
[6] D. Valentin, H. Abdi, A.J. O'Toole, and G.W. Cottrell, "Connectionist models of face processing: a survey," Pattern Recognition, vol. 27, no. 9, pp. 1209-1230, 1994.
[7] R. Chellappa, C.L. Wilson, and S. Sirohey, "Human and machine recognition of faces: a survey," Proc. IEEE, vol. 83, no. 5, pp. 705-740, 1995.
[8] J. Zhang, Y. Yan, and M. Lades, "Face recognition: eigenface, elastic matching, and neural nets," Proc. IEEE, vol. 85, no. 9, pp. 1423-1435, 1997.
[9] I. Craw, N. Costen, T. Kato, and S. Akamatsu, "How should we represent faces for automatic recognition?," IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 8, pp. 725-736, 1999.
[10] A.M. Burton, V. Bruce, and P.J.B. Hancock, "From pixels to people: a model of familiar face recognition," Cognitive Science, vol. 23, pp. 1-31, 1999.
[11] S. Kong, J. Heo, B. Abidi, J. Paik, and M. Abidi, "Recent advances in visual and infrared face recognition - a review," Computer Vision and Image Understanding, vol. 97, no. 1, pp. 103-135, 2005.
[12] K. Parimala Geetha, S. Sundaravadivelu, and N. Albert Singh, "Rotation invariant face recognition using optical neural networks," TENCON 2008 - 2008 IEEE Region 10 Conference, Hyderabad, India, 2009.
[13] K. Nakamura and H. Takano, "Rotation and size independent face recognition by the spreading associative neural network," International Joint Conference on Neural Networks, Vancouver, BC, Canada, 2006.
[14] S.H. Lin, S.Y. Kung, and L.J. Lin, "Face recognition/detection by probabilistic decision-based neural network," IEEE Trans. Neural Networks, Special Issue on Artificial Neural Networks and Pattern Recognition, vol. 8, no. 1, 1997.
[15] H.A. Rowley, S. Baluja, and T. Kanade, "Neural network based face detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 1, 1998.
[16] J. Zhu, M.I. Vai, and P.U. Mak, "Gabor wavelets transform and extended nearest feature space classifier for face recognition," Proc. Third IEEE International Conference on Image and Graphics (ICIG'04), 2004.
[17] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, 1990.
[18] J. Karhunen, E. Oja, L. Wang, R. Vigario, and J. Joutsensalo, "A class of neural networks for independent component analysis," IEEE Trans. Neural Networks, vol. 8, pp. 486-504, 1997.
[19] E. Osuna, R. Freund, and F. Girosi, "Training support vector machines: an application to face detection," Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 130-136, 1997.
[20] M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[21] Y.-S. Ryu and S.-Y. Oh, "Simple hybrid classifier for face recognition with adaptively generated virtual data," Pattern Recognition Letters, vol. 23, pp. 833-841, 2002.
[22] J. Bala, P. Pachowicz, and K. De Jong, "Multistrategy learning from engineering data by integrating inductive generalization and genetic algorithms," in Machine Learning: A Multistrategy Approach, vol. IV, R.S. Michalski and G. Tecuci (Eds.), Morgan Kaufmann, San Mateo, CA, pp. 121-138, 1994.
[23] F. Gruau and D. Whitley, "Adding learning to the cellular development of neural networks: evolution and the Baldwin effect," Evolutionary Computation, vol. 1, no. 3, pp. 213-234, 1993.
[24] H. Vafaie and K. De Jong, "Improving a rule induction system using genetic algorithms," in Machine Learning: A Multistrategy Approach, vol. IV, R.S. Michalski and G. Tecuci (Eds.), Morgan Kaufmann, San Mateo, CA, pp. 453-469, 1994.
[25] S. Milborrow and F. Nicolls, "Locating facial features with an extended active shape model," available at http://www.milbo.org/stasm-files/locating-facial-features-with-an-extended-asm.pdf.
[26] R. Herpers, G. Verghese, K. Derpanis, and R. McCready, "Detection and tracking of faces in real environments," IEEE Int. Workshop on Recognition, Analysis and Tracking of Face and Gesture in Real-Time Systems, Corfu, Greece, pp. 96-104, 1999.
[27] J. Daugman, "Face detection: a survey," Computer Vision and Image Understanding, vol. 83, no. 3, pp. 236-274, 2001.
[28] R.C. Gonzalez and R.E. Woods, Digital Image Processing. Addison-Wesley, 2002.
[29] K. Wong, K. Lam, and W. Siu, "An efficient algorithm for human face detection and facial feature extraction under different conditions," Pattern Recognition, vol. 34, 2001.
[30] K. Roy, A. Sasyeed Md. Sohail, Md. R. Islam, and A.H.M. Sarower Satter, "Image processing techniques for real time eye recognition," 2nd International Conference on Computer Science and Its Application, National University, San Diego, California, USA, 2004.
[31] M. Siddique and M. Tokhi, "Training neural networks: back propagation vs. genetic algorithms," Proc. International Joint Conference on Neural Networks, pp. 2673-2678, Washington, D.C., USA, 2001.
[32] D. Whitley, "Applying genetic algorithms to neural networks learning," Proc. Conference of the Society of Artificial Intelligence and Simulation of Behavior, pp. 137-144, Pitman Publishing, Sussex, England, 1989.
[33] D. Whitley, T. Starkweather, and C. Bogart, "Genetic algorithms and neural networks: optimizing connections and connectivity," Parallel Computing, vol. 14, pp. 347-361, 1990.
[34] K. Delac, M. Grgic, and M.S. Bartlett, Recent Advances in Face Recognition. I-Tech Education and Publishing KG, Vienna, Austria, pp. 223-246, 2008.
[35] S. Rajasekaran and G.A. Vijayalakshmi Pai, Neural Networks, Fuzzy Logic, and Genetic Algorithms: Synthesis and Applications. Prentice-Hall of India Private Limited, New Delhi, India, 2003.
[36] N.A. Fox, B.A. O'Mullane, and R.B. Reilly, "The realistic multi-modal VALID database and visual speaker identification comparison experiments," Proc. 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA-2005), New York, 2005.

Md. Fayzur Rahman was born in 1960 in Thakurgaon, Bangladesh. He received the B.Sc. Engineering degree in Electrical & Electronic Engineering from Rajshahi Engineering College, Bangladesh, in 1984 and the M.Tech. degree in Industrial Electronics from S.J. College of Engineering, Mysore, India, in 1992. He received the Ph.D. degree in energy and environment electromagnetics from Yeungnam University, South Korea, in 2000. Following his graduation he rejoined his previous position at BIT Rajshahi. Currently he is a Professor of Electrical & Electronic Engineering at Rajshahi University of Engineering & Technology (RUET). He is engaged in education in the areas of electronics & machine control and digital signal processing. He is a member of the Institution of Engineers (IEB), Bangladesh, the Korean Institute of Illuminating and Installation Engineers (KIIEE), and the Korean Institute of Electrical Engineers (KIEE), Korea.

JOURNAL OF COMPUTING, VOLUME 2, ISSUE 7, JULY 2010, ISSN 2151-9617