
DOOR LOCK SECURITY SYSTEM BASED ON

FACE IMAGE AS A KEY USING IMAGE


PROCESSING

FATIN SYAZWANI BINTI ZULKIFLI

UNIVERSITI MALAYSIA PAHANG


DOOR LOCK SECURITY SYSTEM BASED ON FACE IMAGE AS A KEY USING
IMAGE PROCESSING

FATIN SYAZWANI BINTI ZULKIFLI

This thesis is submitted in partial fulfillment of the requirements for the award of the degree of
Bachelor of Electrical Engineering (Hons.) (Electronics)

Faculty of Electrical & Electronics Engineering


Universiti Malaysia Pahang

JUNE 2016
UNIVERSITI MALAYSIA PAHANG

DECLARATION OF THESIS AND COPYRIGHT


Author’s Full Name : FATIN SYAZWANI BINTI ZULKIFLI

Identification Card No : 930521115008


Title : DOOR LOCK SECURITY SYSTEM BASED ON
FACE IMAGE AS A KEY USING IMAGE
PROCESSING
Academic Session : 2016 SEMESTER II

I declare that this thesis is classified as:

CONFIDENTIAL (Contains confidential information under the Official Secret Act 1972)

RESTRICTED (Contains restricted information as specified by the organization where research was done)*

/ OPEN ACCESS (I agree that my thesis is to be published as online open access (full text))

I acknowledge that Universiti Malaysia Pahang reserves the rights as follows:

1. The thesis is the property of Universiti Malaysia Pahang.
2. The Library of Universiti Malaysia Pahang has the right to make copies for the purpose
of research only.
3. The Library has the right to make copies of the thesis for academic exchange.

Certified by:

(Author’s Signature) (Supervisor’s Signature)


FATIN SYAZWANI BINTI ZULKIFLI ZULKIFLI BIN MUSA

Date: 02 JUNE 2016 Date: 02 JUNE 2016



SUPERVISOR’S DECLARATION

I hereby declare that I have checked this thesis and, in my opinion, this thesis is adequate
in terms of scope and quality for the award of the degree of Bachelor of Electrical
Engineering (Hons.) (Electronics).

Signature :

Name of Supervisor : ENCIK ZULKIFLI BIN MUSA


Position : LECTURER
Date : 02 JUNE 2016

STUDENT’S DECLARATION

I hereby declare that the work in this thesis is my own except for quotations and summaries
which have been duly acknowledged. The thesis has not been accepted for any degree and
is not concurrently submitted for the award of any other degree.

Signature :

Name : FATIN SYAZWANI BINTI ZULKIFLI


ID Number : ED12040
Date : 02 JUNE 2016

Dedicated, in thankful appreciation, to my beloved family; to my great supervisor
from beginning to end, Mr. Zulkifli bin Musa; to Mr. Azri, Mr. Ahmad Zainuddin,
Mr. Hendriawan and Mr. Toibullah; and to my friends, for being a constant source of love
and sacrifice throughout my journey in education.

ACKNOWLEDGMENTS

In the name of Allah Most Gracious and Most Merciful.


First and foremost, praise be to Allah for His love and strength, and for granting me
the knowledge and patience to accomplish my final year project entitled "Door lock
security system based on face image using image processing". I especially thank Him for
His blessings in my daily life, good health and a healthy mind, even though I had to go
through some difficulties along my journey.
I am highly indebted and thoroughly grateful to Mr. Zulkifli bin Musa, the final year
project coordinator, for his immense interest in my topic of research, for providing me with
material, MATLAB code and links that I could not possibly have discovered on my own,
and for kindly introducing me to Mr. Mohd Azri, who guided me on AutoCAD and gave
me the components I needed to finish my hardware. Mr. Din's work demonstrated and
helped me in building the door lock security system. I can never repay the debt that I owe
to you. I also take this opportunity to express my deep regards to Mr. Hendriawan and
Mr. Toibullah, who contributed to the completion of this project.
I would like to express my sincere gratitude and appreciation to my family
members for their concern, financial support and understanding. Not to forget, I
would also like to acknowledge with much appreciation my supportive friends for the
guidance given and various kinds of help.
Finally, thanks to those who have contributed directly or indirectly to the success
of this project whom I have not mentioned by name. Without them, this project would
not have been successful.

Thank You.

ABSTRACT

In recent years, security systems have become one of the most demanded systems
to secure our assets and protect our privacy. A more reliable security system should be
developed to avoid losses due to identity theft or fraud. Thus, much research has been
done to improve established security systems, especially systems that are based on human
identification. Face recognition is widely used in human identification because of its
capability to measure and subsequently identify a specific person, especially for security
purposes. In this thesis, several steps are proposed to implement a door lock security
system based on facial characteristics by using image processing. The proposed system
uses a personal computer running MATLAB 2015b as the main processing medium, with
the main focus on the Image Processing Toolbox. The image processing system detects
facial images and classifies them into two groups: the first group consists of authorized
individuals, while the second consists of unauthorized individuals. The detection system
locates five major points on a human face: the left eye, the right eye, the nose and the
mouth. Next, the classification system analyzes the standard deviation of the six distances
between these facial points. In addition, the classification system analyzes the individual's
eye area. If the standard deviation is between 13.25 and 14.57, and the eye area is between
2.85 and 3.15, the individual is classified as an authorized person. If either or both values
fall outside these ranges, the individual is classified as an unauthorized person. The door,
which is equipped with a drive system, will automatically open when the system detects
the presence of an authorized person; otherwise, the drive system keeps the door closed
when it detects an unauthorized person. The main components of the system are a camera,
a personal computer (PC), an Arduino, a DFRduino motor driver kit and a magnetic lock.

ABSTRAK

Nowadays, security systems have become one of the most challenging systems for
protecting our property and privacy. More efficient security systems need to be developed
to avoid losses due to identity theft or fraud. Therefore, many studies have been carried
out to improve the capability of security systems. One branch of security system
technology is systems based on human identification. Human face recognition systems
have been widely used because of their ability to measure and identify human identity.
In this study, we propose several steps to implement a door lock security system based
on face recognition using image processing. The proposed system uses a personal
computer with MATLAB 2015b software as the main processing medium, with the
main focus on the image processing toolbox. The image processing system detects and
classifies face images into two groups: the first group consists of authorized individuals,
while the second group consists of unauthorized individuals. The face detection system
detects 5 major points on the human face: the left eye, the right eye, the nose and the
mouth. Next, the classification system analyzes the standard deviation of the 6 distances
between the five face points. In addition, the classification system also analyzes the eye
area of the individual concerned. If the standard deviation is between 13.25 and 14.57,
and the eye area is between 2.85 and 3.15, the individual is classified as an authorized
individual. If either or both values do not match, the individual is classified as an
unauthorized individual. A door equipped with an automatic drive system will open when
the system detects the presence of an authorized individual. Conversely, the automatic
drive system will remain closed if it detects an unauthorized individual. The main
components used in this system are a camera, a personal computer (PC), an Arduino, a
DFRduino motor driver kit and a magnetic lock.

TABLE OF CONTENTS

Page
SUPERVISOR’S DECLARATION ii
STUDENT’S DECLARATION iii
DEDICATION vi
ACKNOWLEDGEMENT v
ABSTRACT vi
ABSTRAK vii
TABLE OF CONTENTS viii
LIST OF TABLES xi
LIST OF FIGURES xii
LIST OF ABBREVIATIONS xv

CHAPTER 1 INTRODUCTION

1.1 Project Introduction 1


1.2 Problem Statement And Background 2
1.3 Project Objectives 2
1.4 Project Scope 3

CHAPTER 2 LITERATURE REVIEW

2.1 General Knowledge Regarding Conventional Door Lock System 4
2.2 Automatic Door Lock System Technology 5
2.2.1 RFID smart card system 6
2.2.2 Biometric technology (fingerprint) 7
2.2.3 Data login (password/ keypad system) 8
2.2.4 Image processing 9
2.2.4.1 Face detection and recognition 9
2.2.4.2 Veins recognition 10
2.2.4.3 Iris scanner and recognition 11

2.2.4.4 Voice recognition 12

2.3 Revolution Of Face Detection And Recognition Technology 14

CHAPTER 3 METHODOLOGY

3.1 Introduction 16
3.2 Workstation 16
3.3 Flowchart Of Software 18
3.4 Overview Of Proposed System 19
3.5 Hardware Component Of The Face Recognition System 19
3.5.1 Power Supply 12V DC 20
3.5.2 Arduino UNO 21
3.5.3 Motor Driver DFRDuino 22
3.5.4 Personal Computer 23
3.5.5 Door Lock Actuator 23
3.5.6 Logitech C270 24
3.6 Assembly Circuit 25
3.7 Software Implementation 26
3.7.1 Image Acquisition 26
3.7.2 Face Detection 26
3.7.3 Face Filter 29
3.7.4 Feature Extraction 30
3.7.5 Standard Deviation (Xsd) and Maximum Area of Eyes (MAA) 33
3.7.6 Face classification 35
3.8 Conclusion 35

CHAPTER 4 RESULTS AND DISCUSSION

4.1 Introduction 36
4.2 Data Acquisition System (DAQ) 36
4.3 Face Detection 38
4.4 Face Filter 40
4.5 Feature Extraction 42
4.5.1 5 Bbox 42
4.5.2 4 Points 44
4.5.3 Distance Points On Face 44
4.6 Maximum Area Of Eyes (MAA) 44

4.7 Comparison Of Sample Data 45


4.8 Conclusion 49

CHAPTER 5 CONCLUSION AND RECOMMENDATION

5.1 Conclusion 50
5.2 Recommendations For Future Research 51

REFERENCES 52
APPENDICES
A Gantt Chart PSM 1 & PSM 2 55
B Cost Budget 57
C Specification of Equipment 58
D Coding 62
E Progress Working Flow 77

LIST OF TABLES

Table No. Title Page

4.1 The result of controlled parameters 39



LIST OF FIGURES

Figure No. Title Page


2.1 The example of digital door lock 4

2.2 Magnetic Sensor 5

2.3 The Example of Smart Card System 6

2.4(a) The example of fingerprint 7

2.4(b) Pattern of fingerprint 7

2.5 The example of data login (password/keypad system) 8

2.6 The example of face detection and recognition 9

2.7 The example of veins recognition 10

2.8 The example of iris scanner and recognition 12

2.9 The example of voice recognition 13

3.1 The workstation of initial step of face recognition 16

3.2 Full process of face recognition system 18

3.3 Overview of proposed system 19

3.4 Complete system of hardware development 19

3.5 12V dc supply 20

3.6 Arduino UNO 21

3.7 Motor Driver DFRDuino 22

3.8 Personal Computer 23

3.9 Door Lock Actuator 23

3.10 Logitech Camera C270 24

3.11 Assembly Circuit 25

3.12(a) 3rd kind of Haar Feature 27

3.12(b) 4th kind of Haar Feature 27



3.13 ii(x,y) = sum of image intensities in shaded area 27

3.14 The face detection algorithm flow based on several cascade classifiers 28

3.15 Flowchart Of Face Filter 29

3.16 Flow Chart Of Feature Extraction 30

3.17 5 Boundary Box 31

3.18 Centre Points 32

3.19 6 Distances Points 32

3.20 Flowchart to find standard deviation and maximum area of eye 33

4.1(a) The distance of the person from left and right with angle 60° 37

4.1(b) The distance of the person from the camera is 60 cm and both angles are 45° 37

4.1(c) Several samples of Face Image 37

4.2(a) Matched 38

4.2(b) Un-Matched 38

4.2(c) Un-Matched 38

4.3 Before Face Filter Is Applied 41

4.4 After Face Filter Is Applied 41

4.5 Sample of 5 box 43

4.6(a) Matched 43

4.6(b) Un-Matched 43

4.7(a) Matched 44

4.7(b) Un-Matched 44

4.8(a) Matched 44

4.8(b) Un-Matched 44

4.9 Distance point on face vs number of images (Person 1) 45

4.10 Distance point on face vs number of images (Person 2) 46

4.11 Distance point on face vs number of images (Person 3) 46

4.12 Distance point on face vs number of images (Person 4) 47

4.13 Distance point on face vs number of images (Person 5) 47

4.14 Standard Deviation of face vs Number of images 48

4.15 Maximum area of eyes vs Number of image 48



LIST OF ABBREVIATIONS

ID Identification Card
PCA Principal Component Analysis
ANN Artificial Neural Network
RFID Radio Frequency Identification
DNA Deoxyribonucleic acid
PWM Pulse Width Modulation
USB Universal Serial Bus
UART Universal Asynchronous Receiver Transmitter
TTL Transistor-transistor Logic Circuit
TM Intel Core
GHz Gigahertz
RAM Random Access Memory
MP Mega Pixels
fps Frame Per Second
3-D Three Dimensional
sqrt Square root
Xm Mean
Xdmp Difference from mean
Xv Variance
DAQ Data Acquisition System
MAA Maximum Area of eyes
IC Integrated Circuit
SRAM Static Random Access Memory
EEPROM Electrically Erasable Programmable Read-Only Memory
MHz Megahertz
FOV Field of View
SD Card Secure Digital Card
VFX Video Effects
V Voltage
DC Direct Current

PVC Polyvinyl Chloride


LED Light Emitting Diode
IR Infrared
S/O Socket Outlet
ICA Independent Component Analysis
CHAPTER 1

INTRODUCTION

1.1 PROJECT INTRODUCTION

Nowadays, technology is advancing fast. Security and peace of mind are essential
needs for a high quality of life. Thus, it is very important to have a reliable security
system that can secure our assets as well as protect our privacy.

Installing a home security system can be costly, but not installing one could cost
even more. Even if it is not the latest and greatest technology for personal properties, it
is still important to have a few basics set up around the units. For instance, a standard
alarm should be a fundamental necessity in any apartment or condo. If a break-in occurs,
people want to make sure that their families are safe and secure. An alarm system can
help us and give us peace of mind. With these tools, people can keep their families and
their valuables safe from any intruder who might enter their property.

The conventional security system requires a person to use a key, an identification (ID)
card or a password to access an area such as a home or office. However, the existing
security system has several weaknesses: it can be easily forged and stolen. These
problems have increased interest in biometric technology as a way to provide a higher
degree of security than the conventional security system.

Face recognition is one of the most popular authentication methods in biometric
technology. It is the most natural means of biometric verification compared to other
types of biometric verification such as fingerprint, iris and voice verification. Apart from
that, face recognition has many advantages:

i. It requires no physical interaction on behalf of the user.

ii. It is accurate and allows for high enrolment and verification rates.
iii. It does not require an expert to interpret the comparison result.
iv. It can use existing hardware infrastructure, such as existing cameras and image
capture equipment.
v. It is the only biometric that allows passive identification in one-to-many
environments.

1.2 PROBLEM STATEMENT & BACKGROUND

Based on statistics reported by the Royal Malaysia Police, 11,586 burglary cases
were reported from January 2013 until June 2013 [1]. The huge amount of burglary cas-
es lead to a huge amount of losses faced by the victims. Thus, the security for access
control is very important as huge amount of losses emphasized that the security system
should not be taken lightly.

Therefore, the security system for access control should be modernized to enhance
security. A more reliable security system should be developed to avoid greater losses.
Biometric technology can be implemented in access control security systems as it offers
a higher degree of security than conventional security systems. According to [2],
biometrics is the most secure and convenient authentication tool, since it cannot be
borrowed, stolen or forgotten, and forging one is practically impossible.

1.3 PROJECT OBJECTIVE

The objective of this project is to design a door lock security system using face
recognition. The specific objectives to be achieved are as follows:
1. To develop an automatic door lock security system
2. To detect and recognize the face image using image processing
3. To control the door based on the recognized face image

1.4 PROJECT SCOPE

The scopes of this project are:

i. MATLAB software is used for the face recognition process once the image is
captured by the camera.
ii. The Computer Vision Toolbox is used to determine the initial face recognition
database.
iii. The face recognition system is developed using the distances between 5 nodal
points on a person's face.
iv. The camera detection distance is set at 60 cm.
v. An Arduino UNO provides the output of the system for the door lock actuator.
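Pulling these scopes together, the classification rule reported in the abstract (an individual is authorized if the standard deviation of the six face distances lies within 13.25–14.57 and the eye area lies within 2.85–3.15) can be sketched as follows. This is only an illustrative Python sketch; the actual system is implemented in MATLAB, and the function name and constants' units here are assumptions:

```python
import statistics

# Thresholds reported in the abstract of this thesis.
SD_RANGE = (13.25, 14.57)    # standard deviation of the 6 face distances
EYE_RANGE = (2.85, 3.15)     # maximum area of eyes (MAA)

def is_authorized(distances, eye_area):
    """Classify a face from its six inter-point distances and eye area.

    Hypothetical helper: the thesis implements this logic in MATLAB.
    Both conditions must hold; otherwise the door stays locked.
    """
    sd = statistics.pstdev(distances)   # population standard deviation
    sd_ok = SD_RANGE[0] <= sd <= SD_RANGE[1]
    eye_ok = EYE_RANGE[0] <= eye_area <= EYE_RANGE[1]
    return sd_ok and eye_ok
```

When `is_authorized` returns true, the PC would signal the Arduino UNO over serial to drive the door lock actuator.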
CHAPTER 2

LITERATURE REVIEW

2.1 GENERAL KNOWLEDGE REGARDING CONVENTIONAL DOOR LOCK SYSTEM

In today's society, advances in technology have made life easier by providing us
with higher levels of knowledge through the invention of different devices. However, each
technological innovation harbors the potential of hidden threats to its users. One major
threat is the theft of private personal data and information. As digital data become more
prevalent, users try to secure their information with highly encrypted passwords and ID
cards. However, the misuse and theft of these security measures are also on the rise.
Taking advantage of security flaws in ID cards results in cards being duplicated or
counterfeited and misused. This increasing battle with cyber security has led to the
birth of biometric security systems.

Figure 2.1: The example of digital door lock



2.2 AUTOMATIC DOOR LOCK SYSTEM TECHNOLOGY

A door lock system restricts access to the house. It is important to have a security
system for the door lock in order to secure our assets and privacy. A magnetic sensor
can be used to upgrade the traditional door knob and increase the level of security; for
example, it can be used to detect the condition of the door (open/closed).

Figure 2.2: Magnetic Sensor

The magnetic door lock works using the concept of electromagnetism: it is composed
of an electromagnet and an armature plate. Typically, the electromagnet portion of the
lock is attached to the door frame and the mating armature plate is attached to the door,
allowing the two mechanisms to work efficiently together. When the door is closed, the
two components are in contact with each other.

The design is based on the principle that current flowing through the wire produces
magnetic flux, or magnetic power. The door is kept locked because the produced flux
provides the necessary strength to keep the door from being opened: the magnetic flux
attracts the armature plate to the electromagnet, creating a locking action that eventually
locks the door.

The operation methods of the magnetic door can be divided into three basic
operations. The first method involves the use of a keypad system, such as a password:
the system locks and unlocks with a numeric code. A smart card, such as a Radio
Frequency Identification (RFID) tag, is used in the second operation method, which
is typically used for business and commercial buildings. In the last operation method,
the magnetic door is operated using biometric technologies such as thumbprint and face
recognition.

2.2.1 RFID Smart Card System

A smart card allows the card owner to access the facility. A smart card can be
programmed to allow or deny access through specified doors or facilities. It stores
protected information and the person's privileges. There are two types of smart cards:
contact and contactless. A contactless smart card uses an electronic signal to transfer
data, while physical contact is used for communication in a contact-based card. However,
this approach has several weaknesses, as the card can easily be lost, stolen or damaged
if it is exposed to a high electromagnetic field.

Figure 2.3: The example of smart card system



2.2.2 Biometric Technology (Fingerprint)

Fingerprint recognition [3] is the technology that verifies the identity of a person
based on the fact that everyone has unique fingerprints. Fingerprints can be considered to
achieve the best balance among authentication performance, cost, device size and ease of
use. However, most fingerprint authentication devices have some problems to be solved.
One is that captured images are easily affected by the condition of the finger surface,
which can reduce authentication performance. Another is the problem of fake fingers,
which has been pointed out. Last but not least is the loss of privacy and security present
in all biometric systems, including fingerprint biometric systems.

Figure 2.4 (a): The example of fingerprint Figure 2.4 (b): Pattern of fingerprint

2.2.3 Data Login (Password/ Keypad System)

The most common form of system identification and authorization mechanism is
a password or keypad system. For higher assurance, the user needs to change passwords
frequently so that they cannot be guessed. The user can choose a password in
consideration of its practicality. The password should be generated properly and kept
secret. This can be considered a weak security system, as the password can easily be
forgotten or hacked. Thus, a password or keypad based system is not really reliable for
a door lock system.

Figure 2.5: The example of data login (password/keypad system)



2.2.4 Image Processing

2.2.4.1 Face Detection and Recognition

A face detection and recognition system is an application for automatically
identifying or verifying a person from a digital image or a video frame. One way to do
this is by comparing selected facial features from the image against a facial database. This
is a perfect way to empower web and desktop applications with face-based user
authentication, automatic face recognition, and identification. Biometric face recognition
systems collect data from the user's face and store it in a database for future use. They
measure the overall structure, shape and proportion of features on the user's face, such
as the distance between the eyes, nose, mouth, ears and jaw, and the size of the eyes and
mouth, among other features. Facial expressions, such as smiling, crying, and wrinkles on
the face, are also counted among the factors that change during a user's facial recognition
process [4]. Furthermore, such a system is easy to install and does not require any
expensive hardware. Facial recognition technology is used widely in a variety of security
systems such as physical access control or computer user accounts.

Figure 2.6: The example of face detection and recognition



2.2.4.2 Veins Recognition

One of the recent biometric technologies invented is the vein recognition system.
Veins are blood vessels that carry blood to the heart. Each person's veins have
unique physical and behavioral traits. Taking advantage of this, biometrics uses the unique
characteristics of the veins as a method to identify the user. Vein recognition systems
mainly focus on the veins in the user's hands. Each finger on a human hand has veins
which connect directly with the heart, and each has its own physical traits [5]. Unlike
other biometric features, the user's veins are located inside the human body. Therefore,
the recognition system captures images of the vein patterns inside the user's fingers
by applying light transmission to each finger. In more detail, the method works by
passing near-infrared light through the fingers so that a camera can record the vein
patterns.

Vein recognition systems are getting more attention from experts because they have
many functions which other biometric technologies do not have. They offer a higher
level of security, which can protect information or access control much better. The level
of accuracy of vein recognition systems, based on comparing the current data with the
recorded database, is very impressive and reliable. Furthermore, installation and
equipment costs are low, and the time taken to verify each individual is shorter than for
other methods (on average 1/2 second) [5].

Figure 2.7: The example of veins recognition



2.2.4.3 Iris Scanner and Recognition

The human iris is a thin circular structure in the eye which is responsible for
controlling the diameter and size of the pupil. It also controls the amount of light allowed
through to the retina in order to protect it. Iris color also varies from person to person,
depending on their genes; the iris color decides the eye color of each individual. There
are several iris colors, such as brown (the most common color for the iris), green, blue,
grey, hazel (a combination of brown, green and gold), violet and, in really rare cases,
pink. The iris also has its own patterns, differing from eye to eye and person to person,
which makes each individual unique [6].

Iris recognition systems scan the iris in different ways. They analyze over
200 points of the iris, including rings, furrows, freckles, the corona and other
characteristics. After recording data from each individual, the system saves the
information in a database for future use, comparing it every time a user wants to access
the system [6].

Iris recognition security systems are considered among the most accurate security
systems available today. The iris is unique and makes it easy to identify a user. Even
though the system requires installation equipment and expensive fees, it is still the easiest
and fastest method to identify a user. No physical contact between the user and the system
is needed during the verification process. If the user is wearing accessories such as glasses
or contact lenses during verification, the system works as normal, because they do not
change any characteristics of the user's iris. Theoretically, even if a user has eye surgery,
it has no effect on the iris characteristics of that individual [6].

Figure 2.8: The example of iris scanner and recognition

2.2.4.4 Voice Recognition

Voice recognition is the process of identifying an unknown speaker on the basis
of individual information contained in the speech signal. It lends itself well to a variety
of applications such as security access control, mobile banking and voice mail. There
are two main factors which make a person's voice unique. The first is the physiological
component, known as the voice tract; the second is a behavioral component, known as
the voice accent. Combining both of these factors, it is almost impossible to imitate
another person's voice exactly. Taking advantage of these characteristics, biometric
technology created voice recognition systems in order to verify each person's identity
using only their voice. Voice recognition mainly focuses on the vocal tract, because it
is a unique physiological trait. It works perfectly for physical access control [5].
Voice recognition systems are easy to install and require a minimal amount of
equipment, such as microphones, telephones and/or even PC microphones. However,
there are still some factors which can affect the quality of the system. Firstly, the
performance of users when they record their voice into the database is important. For
that reason, users are asked to repeat a short passphrase or a sequence of numbers
and/or sentences so that the system can analyze the users' voice more accurately. On the
other hand, unauthorized users can record authorized users' voices and run them through
the verification process in order to gain access to the system. To prevent the risk of
unauthorized access via recording devices, voice recognition systems ask users to
repeat random phrases provided by the system during the verification stage [5].

Figure 2.9: The example of voice recognition

In conclusion, there are many biometric security systems that can be used for
surveillance. However, face recognition is the easiest system, and the technology is much
cheaper than other biometric systems. Although the technology is less unique compared
to iris and DNA, the system is still the best choice, and the face recognition system can
be improved by doing more research and applying new technology to it.

2.3 REVOLUTION OF FACE DETECTION AND RECOGNITION TECHNOLOGY

The human face plays an important role in our social interaction, conveying people's
identity, but it is a dynamic object and has a high degree of variability in its appearance.
To overcome this variability, face detection and face recognition methods have been
introduced. In this project, feature extraction is used to analyze the face image. Some of
the feature extraction methods that can be used for this project are the Face Bunch Graph
(nodal points), Principal Component Analysis (PCA), the Gabor filter and Independent
Component Analysis (ICA).

In the face bunch graph approach, a facial recognition system analyzes the
characteristics of a person's image captured through a digital video camera. The overall
facial structures are measured, such as the distances between the eyes, nose, mouth, chin
and jaw edges. These measurements are stored in a database and used as a comparison
when a user stands before the camera [7]. A common representation of the face, the face
bunch graph, can be obtained from up to 70 nodal points. When a new image is given,
the same points are found and matched to the face bunch graph. The recognition process
requires only 14 to 22 points to be completed. Nodal points have many advantages: they
are easy to use, because in many cases recognition can be performed without a person
knowing; the cost to implement this biometric is much lower; and it is convenient and
socially acceptable, since only a picture is taken for face recognition. Meanwhile, the
disadvantage of nodal points is that the system cannot tell the difference between
identical points [8].
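The nodal-point measurements described above can be sketched in a few lines. This is a hypothetical Python sketch (the coordinates are invented and this thesis's implementation is in MATLAB); note that four feature centre points yield C(4, 2) = 6 pairwise distances, matching the six distances used later in this project:

```python
import math
from itertools import combinations

def pairwise_distances(points):
    """Euclidean distances between every pair of facial feature points."""
    return [math.dist(a, b) for a, b in combinations(points, 2)]

# Hypothetical (x, y) centre points taken from detector bounding boxes.
face_points = [
    (120, 150),   # left eye
    (180, 150),   # right eye
    (150, 190),   # nose
    (150, 230),   # mouth
]

distances = pairwise_distances(face_points)   # 6 values for 4 points
```

The standard deviation of these six values is what the classification stage later compares against its threshold range.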

Principal Component Analysis (PCA) is a method of projection to a subspace
and is widely used in pattern recognition [9]. PCA is used to re-express the original data
in a lower-dimensional basis. Therefore, the noise and redundancy of the data are
kept to a minimum and the data is described economically. Using pattern recognition
based on the Karhunen-Loeve expansion, Kirby and Sirovich [10, 11] have shown that
any particular face can be represented in terms of a best coordinate system termed
eigenfaces. In face recognition, PCA is used to calculate the eigenfaces and find the
vectors that best account for the distribution of face images within the entire image
space [12]. Typically, two phases are included in PCA algorithms: the training phase
and the classification phase. In the training phase, the eigenspace is established from
training samples and the training images are mapped to the eigenspace for classification.
During the classification phase, an input image is projected to the same eigenspace and
an appropriate classifier is used to classify it.

A Gabor filter is applied to images to extract features aligned at particular
orientations or angles. It possesses optimal localization properties in both the
spatial and frequency domains, and Gabor filters have been successfully used in many
pattern recognition applications [13]. A Gabor filter bank can capture the relevant
frequency spectrum in all directions. A Gabor filter is a complex exponential
modulated by a Gaussian function in the spatial domain. The input image is
pre-processed by histogram equalization for better contrast. The pre-processed image
is then convolved with the Gabor filters by multiplying the image with the filters in
the frequency domain [14]. Gabor filters are widely used in image analysis and
computer vision; the Gabor transform provides an effective way to extract information
jointly in space and frequency. The Gabor image representation is obtained by
computing the convolution of the original image with several Gabor wavelets.

Independent Component Analysis (ICA) is a technique for extracting statistically
independent variables from a mixture of them. The technique is quite new and
originated in the world of signal processing. In a task such as face recognition,
much of the important information may be contained in the high-order relationships
among the image pixels. ICA is a more powerful method than PCA for representing a
signal or image because it retains higher-order statistics, whereas PCA may lose
important information in the higher-order statistics of facial images. In the field
of feature extraction, ICA has shown greater capability than the eigenface approach
based on PCA. The ICA method is used to localize facial features such as the eyes and
mouth by extracting a statistically independent basis image set from the multiple
images presented.

In a nutshell, from the literature study of all the methods and techniques for
feature extraction, the Face Bunch Graph is the most suitable for this project
because its advantages fit the project's requirements.
CHAPTER 3

METHODOLOGY

3.1 INTRODUCTION

This chapter discusses the methodology used in developing the face recognition
system. It begins with an overall view of the complete system. The next section
covers the selection of the main components of the door lock system. The following
section then explains the operation of the system design and implementation. Finally,
the chapter explains the process of image acquisition using the hardware setup.

3.2 WORKSTATION

[Figure: workstation layout; the webcam captures the image frame and sends it to the
PC, whose Arduino circuit with the DFRDuino motor driver drives the door lock
actuator; the subject-to-camera distance is 60 cm]

Figure 3.1: The workstation for the initial step of face recognition



Data collection is needed before the project can proceed. In this project, front-view
face images were collected, using a Logitech C270 to record the face videos. The
samples were taken from 5 students, 3 female and 2 male. A 5-second video was
recorded for each person, and 40 frames were extracted from each video, giving a
total of 200 sample images. The distance from the camera was fixed at 60 cm. The
videos were recorded in the robotics lab to ensure the illumination was the same for
every face recording.

3.3 FLOWCHART OF SOFTWARE

The full process of the face recognition system is shown below.

[Flowchart: Start → Image Acquisition (data collection: capture images, transfer
images to computer) → Face Detection → Face Filter (keep largest face) → Feature
Extraction (Xsd, standard deviation; MAA, maximum area of eye) → Face Classification
→ End]

Figure 3.2: Full process of face recognition system



3.4 OVERVIEW OF PROPOSED SYSTEM

[Diagram: Webcam (records picture) → PC (processing) → lock/unlock decision → Door
Lock Actuator]

Figure 3.3: Overview of proposed system

3.5 HARDWARE COMPONENT OF THE FACE RECOGNITION SYSTEM

In general, the hardware system in this work consists of three major subsystems.
Figure 3.4 shows the complete system for hardware development.

[Diagram: Logitech Webcam C270 → Computer (image processing) → Arduino DFRDUINO UNO →
Motor driver → Door lock and unlock]

Figure 3.4: Complete system of hardware development.



The first subsystem was real-time face recognition, which uses the Logitech camera to
record the face image. The image was taken from an extracted frame to get the best
view. Secondly, MATLAB was used to analyse the data and produce a binary output. The
Arduino acts as a DAQ card to control the locking and unlocking of the door lock
actuator, depending on the output of the face recognition phase. The DFRDUINO motor
driver was used to control the motor direction and speed through the Arduino. By
simply addressing the Arduino pins, it makes it very easy to incorporate a motor into
the door lock system, and it is able to power the motor with a separate power supply
of up to 12 V. In the third subsystem, the door lock actuator is unlocked after the
recognition process, and remains locked if the system does not recognize an
authorized person at the distance of 60 cm from the camera.

The hardware components used in developing the overall system are discussed
in the following sub-sections.

3.5.1 Power Supply 12V DC

A 12V DC power supply was used to power the main components of the system. It
provides two DC outputs, +12V and -12V, and was used to supply the DFRDUINO motor
driver and the door lock actuator. Figure 3.5 shows the power component used.

Figure 3.5: 12V dc power supply



3.5.2 Arduino UNO

For hardware development, an Arduino UNO was used in the system. It controls the
locking and unlocking of the magnetic door lock in conjunction with the output from
the face recognition phase.

The open-source hardware board contains everything needed to support the
microcontroller, such as 14 digital input/output pins, a Universal Serial Bus (USB)
connection, a power jack and a reset button. It is based on the ATmega328 and can be
connected to a computer with the USB cable provided [15].

The board can be powered through the USB connection or an external power supply. The
ATmega328 provides a Universal Asynchronous Receiver Transmitter (UART) to
communicate with the computer using transistor-transistor logic (TTL) (5V) serial
communication, which is available on digital pins 0 and 1.

The board provides 14 digital input/output pins operating at 5 volts. By default, the
digital pins are configured as inputs, i.e. in a high-impedance state. Each pin has
an internal pull-up resistor of 20-50 kOhm, disconnected by default.

A DFRDuino motor driver is needed between the Arduino and the door lock actuator to
carry the lock and unlock signals.

Figure 3.6: Arduino UNO microcontroller



3.5.3 Motor driver DFRDuino

The DFRDuino motor driver is used to support the Arduino UNO by delivering the signal
for the door lock actuator to lock or unlock. The board is supported by thousands of
open-source code examples and can be easily extended with most Arduino shields. The
integrated 2-way DC motor driver and wireless socket provide a much easier way to
start a robotics project.

The board provides 6 PWM channels (Pin 11, Pin 10, Pin 9, Pin 6, Pin 5, Pin 3) and is
powered through the USB interface. The motor driver board supports male and female
pin headers [16].

Figure 3.7: Motor Driver DFRDuino



3.5.4 Personal Computer

After an image is captured, it is passed to a PC for processing. In this work, an
Asus computer with an Intel Core 2 processor (3 GHz) and 4 GB RAM was used. All the
image processing functions were implemented in MATLAB R2015b.

Figure 3.8: Personal Computer

3.5.5 Door Lock Actuator

The door lock actuator in this hardware development was used to control the door lock
and unlock process; it consists of a series of gears driven by a small motor. A rack
and pinion set converts the rotational motion into the vertical motion required to
physically lock or unlock the door.

Figure 3.9: Door Lock Actuator



3.5.6 Logitech Camera C270

In this study, a camera was used to capture the image. The camera plays a very
important role in capturing face images. The selection criteria were size,
resolution, brightness, simple handling and long life span. A Logitech C270 camera
(see Figure 3.10) with a high resolution of 1280 x 720 pixels, crisp 3 MP photo
technology and Hi-Speed USB 2.0 was chosen. It is capable of capturing up to 30 fps,
which was appropriate for this work. The images captured by the webcam were smooth
with no pixelation, and its price is lower than comparable cameras.

Figure 3.10: Logitech Camera C270



3.6 ASSEMBLY CIRCUIT

The complete circuit to control the door lock actuator and switch lamp is shown
below.

[Figure: assembly circuit; the PC (MATLAB + Arduino) connects over USB to the Arduino
UNO (5 V), whose pins 2 and 3 drive the DFRDuino motor driver (D4, D5) to run the
door lock actuator forward or reverse; a 12 Vdc supply powers the actuator, and a
switched lamp runs from the 50 Hz mains supply]

Figure 3.11: Assembly circuit



3.7 SOFTWARE IMPLEMENTATION

3.7.1 Image Acquisition

First, in the image acquisition process, the input face image was captured via the
integrated webcam. Once the input image is captured, the feature information is
extracted. The purpose of image acquisition is to seek and extract a region that
contains only the face information.

3.7.2 Face Detection

Detection of facial features such as the eyes, nose and mouth is an important step
for many subsequent facial image analysis tasks. For this project, we applied the
Viola-Jones face detection algorithm to identify a face image from the face's unique
features [17]. During detection, each window is assigned to the face class or the
background based on the distances to the approximated face class mode. The
Viola-Jones algorithm has four main stages:

a) Haar feature selection

The Viola-Jones face detection method uses combinations of simple Haar-like features
to classify faces. Haar-like features are rectangular digital image features that get
their name from their similarity to Haar wavelets. The value of a two-rectangle
feature is the difference between the sums of the pixels within two rectangular
regions. The regions have the same size and shape and are horizontally or vertically
adjacent (see Figure 3.12). A three-rectangle feature computes the sum within two
outside rectangles subtracted from the sum in a center rectangle. Finally, a
four-rectangle feature computes the difference between diagonal pairs of rectangles
[17].

The value of a rectangular feature is evaluated as

Value = Σ (pixels in black area) - Σ (pixels in white area)
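The rectangle-difference rule above can be expressed directly in code. The following is a minimal illustrative sketch (not the thesis's MATLAB implementation), assuming the white and black regions are the left and right halves of a grayscale patch:

```python
import numpy as np

def two_rect_feature(patch):
    """Value = sum(pixels in black area) - sum(pixels in white area)
    for two horizontally adjacent rectangles of equal size."""
    h, w = patch.shape
    half = w // 2
    white = patch[:, :half].sum()   # left rectangle
    black = patch[:, half:].sum()   # right rectangle
    return black - white

# A patch that is dark on the left and bright on the right gives a
# strongly positive response, as an edge-like Haar feature should.
patch = np.hstack([np.zeros((4, 4)), np.ones((4, 4))])
value = two_rect_feature(patch)   # 16.0
```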

Figure 3.12(a): Three-rectangle Haar feature. Figure 3.12(b): Four-rectangle Haar feature.

b) Creating the integral image

The integral image can be computed from an image using a few operations per pixel.
Once computed, any of these Haar-like features can be computed at any scale or
location in constant time. The integral image at location (x, y) contains the sum of
the pixels above and to the left of (x, y), inclusive:

ii(x, y) = Σ_{x′ ≤ x, y′ ≤ y} i(x′, y′)     (1)

where ii(x, y) is the integral image and i(x, y) is the original image intensity.

Figure 3.13: ii(x, y) = sum of image intensities in the shaded area
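The cumulative-sum construction and the constant-time rectangle sum described above can be sketched as follows; this is an illustrative NumPy version, not the thesis's MATLAB code, and the helper names are invented:

```python
import numpy as np

def integral_image(img):
    """ii(x, y) = sum of i(x', y') for x' <= x, y' <= y,
    computed with two cumulative sums (Equation 1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] in O(1) from the integral image,
    using the four-corner identity."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
s = rect_sum(ii, 1, 1, 2, 2)   # 5 + 6 + 9 + 10 = 30
```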

c) AdaBoost training algorithm

The object detection framework employs a variant of the AdaBoost learning algorithm
both to select the best features and to train the classifiers that use them. The
algorithm constructs a "strong" classifier as a linear combination of weighted simple
"weak" classifiers:

h(x) = sign( Σ_{j=1}^{M} α_j h_j(x) )     (2)

Each weak classifier h_j is a threshold function based on a feature f_j:

h_j(x) = s_j if f_j < θ_j, and −s_j otherwise     (3)

where the threshold θ_j and polarity s_j are determined during training, as are the
coefficients α_j.
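Equations (2) and (3) describe a weighted vote of decision stumps. A toy sketch under that reading is shown below; the stump weights and thresholds are made-up values for illustration, not trained parameters from the thesis:

```python
def weak(f, theta, s=1):
    """Decision stump (Equation 3): returns s if the feature value is
    below the threshold, otherwise -s."""
    return s if f < theta else -s

def strong(features, stumps):
    """Sign of the weighted sum of weak classifiers (Equation 2).
    stumps is a list of (alpha, feature_index, theta, s)."""
    vote = sum(a * weak(features[j], theta, s)
               for a, j, theta, s in stumps)
    return 1 if vote >= 0 else -1

# Two illustrative stumps: the first (heavier) one fires on a small
# first feature, the second on a small second feature.
stumps = [(0.7, 0, 5.0, 1), (0.3, 1, 2.0, 1)]
label = strong([4.0, 3.0], stumps)   # 0.7*(+1) + 0.3*(-1) = 0.4 -> +1
```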

d) Cascade classifiers

In general, a face detection algorithm based on AdaBoost may be divided into three
major parts. First, the integral image is used to extract the face's rectangle
features. Second, weak classifiers, each based on a single rectangle feature, are
formed and trained with the AdaBoost algorithm; several accurate features are then
combined to form a strong classifier that distinguishes more accurately between
"face" and "non-face". Third, in accordance with the principle of "first heavy, then
light", multiple strong classifiers are cascaded: the strong classifiers formed from
the most important features, which have the simplest structure, are placed at the
front. These filter out the numerous "non-face" sub-windows, so that the detection
effort is focused on the regions with a larger possibility of containing a human
face.

[Diagram: extract face rectangle features → form weak classifiers based on single
features → combine several weak classifiers into strong classifiers → cascade several
strong classifiers]

Figure 3.14: The face detection algorithm flow based on several cascaded classifiers
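The early-rejection behaviour of the cascade described above can be sketched as a chain of stage tests; the stage functions below are hypothetical placeholders standing in for trained strong classifiers:

```python
def cascade(window, stages):
    """Run the strong classifiers in order; reject (return False) at
    the first stage that votes non-face, accept only if all pass."""
    for stage in stages:
        if not stage(window):
            return False    # most non-face windows exit early here
    return True

# Hypothetical stages: the early ones are crude, cheap filters, the
# later ones stricter; a window here is a dict of toy measurements.
stages = [
    lambda w: w["contrast"] > 0.1,   # cheap filter, rejects flat areas
    lambda w: w["eye_score"] > 0.5,  # stricter, more expensive check
]
face_like = cascade({"contrast": 0.4, "eye_score": 0.8}, stages)  # True
flat_wall = cascade({"contrast": 0.0, "eye_score": 0.9}, stages)  # False
```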

3.7.3 Face Filter

A face filter is applied in this project to remove unwanted non-faces. Non-faces
occur because the face image is taken with some noise in the background, and more
than one face may appear in the image. Therefore, the largest detected area is
selected so that only one face remains after detection.

[Flowchart: face detection → if the number of rows > 1, keep the largest area;
otherwise continue]

Figure 3.15: Flowchart of the face filter

The filter code is shown below:

%select the largest area of bbox
Sbox = size(bbox);          % Sbox (size of box), bbox (boundary box)
Cbox = Sbox(1);             % Cbox: number of rows (detections)
if Cbox > 1
    FAC = zeros(1,Cbox);    % FAC (face area per candidate)
    for ii = 1:1:Cbox
        FA = bbox(ii,3)*bbox(ii,4); % area = width*height
        FAC(1,ii) = FA;
    end
    [M,I] = max(FAC);       % M is the maximum value, I its index
    bbox = bbox(I,:);       % keep only the largest box
end

3.7.4 FEATURE EXTRACTION

Feature extraction is the most important step in face detection and recognition. Its
purpose is to extract the feature vectors, i.e. the information that represents the
face. A Face Bunch Graph is applied in this project. The feature extraction consists
of:

 5 boundary boxes: face, left eye, right eye, nose, mouth
 4 centre points
 6 distances

Figure 3.16: Flowchart of feature extraction

a) 5 boundary boxes

The boundary box (bbox) is applied in this system using the Computer Vision Toolbox.
The boxes are set as below; this is needed to detect each part of the face, namely
the left eye, right eye, nose and mouth.

A = bbox(:,1:4);   % face
B = bbox(:,5:8);   % left eye
C = bbox(:,9:12);  % right eye
D = bbox(:,13:16); % mouth
E = bbox(:,17:20); % nose

Figure 3.17: 5 Boundary box

b) 4 centre points

Four centre points are created from the boundary boxes; they are measured from the
boxes of the eyes, nose and mouth.

% CB: centre of the left eye
Xcb = B(:,1)+(B(:,3)/2);
Ycb = B(:,2)+(B(:,4)/2);
CB = [Xcb,Ycb];

% CC: centre of the right eye
Xcc = C(:,1)+(C(:,3)/2);
Ycc = C(:,2)+(C(:,4)/2);
CC = [Xcc,Ycc];

% CD: centre of the mouth
Xcd = D(:,1)+(D(:,3)/2);
Ycd = D(:,2)+(D(:,4)/2);
CD = [Xcd,Ycd];

% CE: centre of the nose
Xce = E(:,1)+(E(:,3)/2);
Yce = E(:,2)+(E(:,4)/2);
CE = [Xce,Yce];

DataC = [CB;CC;CD;CE];

Figure 3.18: 4 Centre points

c) 6 distance points

From the centre points of the face, the Pythagorean theorem is applied to obtain the
6 distance points. The formula is:

d_i = √((x_a − x_b)² + (y_a − y_b)²),  where i = 1, 2, 3, 4, 5, 6

Figure 3.19: 6 Distance points
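With the four centre points (CB, CC, CD, CE) in hand, the six distances follow from the formula above, since four points give C(4,2) = 6 pairs. An illustrative Python sketch (the thesis computes this in MATLAB; the coordinate values below are hypothetical):

```python
from itertools import combinations
import math

def pairwise_distances(points):
    """Euclidean distance d = sqrt((x1-x2)^2 + (y1-y2)^2) for every
    pair of centre points; 4 points give C(4,2) = 6 distances."""
    return [math.hypot(x1 - x2, y1 - y2)
            for (x1, y1), (x2, y2) in combinations(points, 2)]

# Hypothetical centre points: left eye, right eye, mouth, nose.
centres = [(100, 120), (160, 120), (130, 200), (130, 160)]
dists = pairwise_distances(centres)
assert len(dists) == 6
```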



3.7.5 Standard Deviation (Xsd) and Maximum area of eyes (MAA)

Xsd and MAA were chosen because the standard deviation provides a "standard" way of
knowing what is normal and what is extra large or extra small [18].

[Flowchart: the door lock/unlock decision checks whether 13.25 < Xsd < 14.57 and
2.85 < MAA < 3.15; if both checks pass the door opens, with a 2-second pause,
otherwise the door stays closed]

Figure 3.20: Flowchart to find the standard deviation and the maximum area of eyes

The formulas for the mean, the difference from the mean and the variance are
calculated before the standard deviation is applied in this project, while the value
for the maximum area of the eyes is obtained by comparing the areas of the left eye
and the right eye and taking the maximum. Two filters are applied in this project to
increase security. The formulas are stated below:

mean: x̄ = (1/6) Σ_{i=1}^{6} d_i

difference from mean: e_i = d_i − x̄,  i = 1, 2, ..., 6

variance: s² = (1/6) Σ_{i=1}^{6} (d_i − x̄)²

standard deviation: Xsd = √( (1/6) Σ_{i=1}^{6} (d_i − x̄)² )
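The mean → variance → square-root chain above can be sketched in a few lines; the distance values below are illustrative, not measurements from the thesis:

```python
import math

def xsd(distances):
    """Standard deviation of the six face distances: mean, squared
    differences from the mean, variance, then square root."""
    n = len(distances)
    mean = sum(distances) / n
    variance = sum((d - mean) ** 2 for d in distances) / n
    return math.sqrt(variance)

# Hypothetical distances (pixels), spread evenly around a mean of 50.
value = xsd([40, 45, 50, 50, 55, 60])
```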

3.7.6 Face classification

The door will be opened if an authorized person is standing in front of the camera
and the value falls in the range 13.25 < Xsd < 14.57; otherwise, the door stays
closed. The weakness of this single filter is that it can still open the door if a
person who is not authorized happens to fall within the range. Hence, a second filter
was needed to strengthen the security system. Thus, if the owner is standing at a
distance of 60 cm and within the owner-specific range 2.80 < MAA < 3.30, the door is
automatically opened; for anyone other than the owner, even standing at the same
60 cm distance, the MAA range will not match the owner's and the door will not open.
In addition, the door lock actuator pauses for 2 seconds.

3.8 CONCLUSION

In conclusion, the system consists of two elements: software and hardware
development. The software development focuses on developing the face recognition
database. An Arduino microcontroller, a DFRDuino motor driver and a door lock
actuator were used as the main hardware in this project. The purpose of the hardware
development is to demonstrate the functionality of the face recognition security
system.
CHAPTER 4

RESULT AND DISCUSSION

4.1 INTRODUCTION

This chapter presents the experimental results of this research. Tables of results,
graphs and figures are included, together with detailed explanations. The data were
collected by recording the videos and converting them to images using FS studio. In
this project, the Minitab 16 software was used to obtain the graphical analysis and
the best values for the distance points on the face based on the calculation. This
software is very user-friendly and reliable.

4.2 DATA ACQUISITION SYSTEM (DAQ)

The test images used for this project were of 5 students from this university, 3
female and 2 male. The faces were recorded under the same illumination and in front
view. The time taken to capture each video was 5 seconds, and 40 frames were
extracted per person, so the total number of sample frames for the five persons is
200. The distance from the camera was fixed at 60 cm. This distance was chosen based
on the face detection algorithm: it can detect all the persons within the specified
angle of 60° to the left and right. If a person stands beyond 60°, the face image
cannot be recognized because half of the image data is distorted and the accuracy
suffers. The height of the PVC stand is 80 cm from the floor, and the camera
elevation and depression angles are both 45°. With the camera adjusted to these
angles, the setup suits persons of all identities and physiques, and the face is
captured within the frame. The recording was done in the robotics lab to ensure the
illumination is the same for every face recording.

Lighting level played an important role because it can significantly affect the
result; feature extraction is also difficult if the illumination of the image is too
high.

[Figure: camera setup geometry; subject offsets of 15 cm to the left and right,
camera height 50 cm, angles θ1 = θ2 = 60° and θ3 = θ4 = 45°]

Figure 4.1(a): The person's distance from the left and right, with a 60° angle.
Figure 4.1(b): The person's distance from the camera is 60 cm, and both elevation
angles are 45°.

It can be concluded that the camera is positioned 60 cm from the person. Four angles
were obtained from these experiments, θ1 = θ2 = 60° and θ3 = θ4 = 45°, and the camera
stand height is 50 cm from the floor.

Figure 4.1(c) shows several samples of the images used to obtain the parameter
results from a detailed study of the software usage.

Figure 4.1(c): Several samples of face images

4.3 FACE DETECTION

[Figure: face detection results after filtering, for three sample images]

Figure 4.2(a): Matched. Figure 4.2(b): Unmatched. Figure 4.2(c): Unmatched.

Figure 4.2(a) shows an image in good condition, while Figures 4.2(b) and 4.2(c) show
unmatched images. Those images are unmatched due to background noise, expression and
pose. The objective is to determine the accuracy rate of the system under the most
uncontrolled conditions. The accuracy rate of the system can be calculated from
Equation 4.1 below:

Accuracy = (number of matched images / total number of images) × 100%     (4.1)

The result of the controlled parameter analysis is tabulated in Table 4.1.
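Equation 4.1 is the usual matched-over-total ratio; with 37 of the 40 images in Table 4.1 matched, it gives the 92.5% reported in the analysis. A one-line sketch:

```python
def accuracy(statuses):
    """Accuracy (%) = matched images / total images * 100 (Eq. 4.1)."""
    matched = sum(1 for s in statuses if s == "Matched")
    return matched / len(statuses) * 100

# 37 matched out of 40, as in Table 4.1.
acc = accuracy(["Matched"] * 37 + ["Unmatched"] * 3)   # 92.5
```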

Table 4.1: The result of controlled parameters

Image Xsd MAA Status


1 15.27271 3.308564 Matched
2 15.2743 3.327972 Matched
3 15.19026 3.327972 Matched
4 14.85234 3.621903 Matched
5 15.1709 3.31639 Matched
6 15.22005 3.327972 Matched
7 15.65564 3.308564 Matched
8 15.37452 3.308564 Matched
9 15.16011 3.308564 Matched
10 15.29636 3.327972 Matched
11 23.6229 3.335658 Matched
12 15.43393 3.327972 Matched
13 15.20351 3.346939 Matched
14 15.2404 3.327972 Matched
15 15.08849 3.335658 Matched
16 15.06008 3.327972 Matched
17 15.1045 3.327972 Matched
18 15.27718 3.335658 Matched
19 15.0073 3.308564 Matched
20 15.00823 3.308564 Matched
21 15.00501 3.335658 Matched
22 15.00996 3.31639 Matched
23 15.13929 3.308564 Matched
24 15.49199 3.680154 Matched

25 15.06708 3.296665 Matched


26 14.43369 3.672467 Matched
27 29.12807 3.596817 Unmatched
28 15.38195 3.547529 Matched
29 15.16615 3.346939 Matched
30 15.23755 3.372912 Unmatched
31 15.12898 3.354493 Matched
32 15.48151 3.553519 Matched
33 14.97463 3.335658 Matched
34 27.95609 3.562293 Matched
35 15.25343 3.327972 Matched
36 15.32537 3.308564 Matched
37 15.14899 3.327972 Unmatched
38 15.25647 3.361917 Matched
39 14.54957 3.648848 Matched
40 15.00296 3.562293 Matched

From the analysis, the system gives 92.5% accuracy. This accuracy rate under
controlled conditions is suitable for access control applications, which require a
high accuracy rate since the system needs to differentiate between authorized and
unauthorized persons to grant access.

4.4 Face filter

The face filter is applied in this project to remove unwanted non-faces. Non-faces
occur because the face image is taken with some noise in the background, and more
than one face may appear in the image. Therefore, the largest area is selected so
that only one face remains after detection.

[Figure: two bounding boxes (rows 1 and 2 of bbox) with widths W1, W2 and heights H1,
H2]

Figure 4.3: Before the face filter is applied

[Figure: the single remaining bounding box with width W1 and height H1, centre
marked]

Figure 4.4: After the face filter is applied

Before the filter is applied there are 2 bbox rows, one detecting a face and one a
non-face; the bbox matrix is 2x21 (2 rows of data). The box is selected based on the
largest area, obtained from:

area1 = width1 × height1

area2 = width2 × height2

After that, the filtering process removes the non-face. From the calculation, area1
is the largest, so it is selected as the face. The bbox matrix is then 1x21: 1 row
and 21 columns of data.

4.5 Feature extraction

In machine learning, pattern recognition and image processing, feature extraction
starts from an initial set of measured data and builds derived values (features)
intended to be informative and non-redundant, facilitating the subsequent learning
and generalization steps, and in some cases leading to better human interpretations.
Feature extraction is related to dimensionality reduction.

4.5.1 5BBOX

The 5 boxes are extracted from bbox; the full matrix of data is 1x21. The boxes are
divided into five categories: head box, left eye box, right eye box, mouth box and
nose box. The 1x21 row splits into five 1x4 boxes:

Head box: [X1 Y1 W1 H1]
Right eye box: [X2 Y2 W2 H2]
Left eye box: [X3 Y3 W3 H3]
Mouth box: [X4 Y4 W4 H4]
Nose box: [X5 Y5 W5 H5]

The 1x21 result consists of the 5 boxes stated above. The detected images can be
divided into 2 conditions: matched and unmatched.

[Figure: the five boxes (X1..X5, Y1..Y5, W1..W5, H1..H5) drawn on a face image]

Figure 4.5: Sample of 5 boxes

Figure 4.6(a): Matched. Figure 4.6(b): Unmatched.

4.5.2 4 points

Four centre points are created from the boundary boxes, measured from the boxes of
the eyes, nose and mouth. The images produce matched and unmatched results.

[Figure: centre points marked with crosses on two face images]

Figure 4.7(a): Matched. Figure 4.7(b): Unmatched.

4.5.3 Distance points on face

Figure 4.8(a): Matched. Figure 4.8(b): Unmatched.

4.6 Maximum area of eyes (MAA)

Based on these graphs, the areas of the left eye (W1 × H1) and right eye (W2 × H2)
are compared, and the maximum of the two areas is chosen.

4.7 COMPARISON OF SAMPLE DATA

Plot analysis was performed to show the relationship between the distance points on
the face and the number of images. Figures 4.9 to 4.13 show the graphs plotted for
the 60 cm camera distance, after each video was converted into 40 frames of offline
data. Based on the graphs below, the standard deviation of the face and the maximum
area of the eyes are the best methods to apply for face recognition as the first and
second filters: the values obtained for these filters stay practically constant, with
only minor differences for the standard deviation compared to the other variables.

[Minitab plot: distance points on face (variables Xsd, left/right eye areas, MAA and
distances P1-P6) against the number of images, 4 to 40]

Figure 4.9: Distance point on face vs number of images (Person 1)



[Minitab plot: distance points on face (variables Xsd, left/right eye areas, MAA and
distances P1-P6) against the number of images, 4 to 40]

Figure 4.10: Distance point on face vs number of images (Person 2)

[Minitab plot: distance points on face (variables Xsd, left/right eye areas, MAA and
distances P1-P6) against the number of images, 4 to 40]

Figure 4.11: Distance point on face vs number of images (Person 3)



[Minitab plot: distance points on face (variables Xsd, left/right eye areas, MAA and
distances P1-P6) against the number of images, 4 to 40]

Figure 4.12: Distance point on face vs number of images (Person 4)

[Minitab plot: distance points on face (variables Xsd, left/right eye areas, MAA and
distances P1-P6) against the number of images, 4 to 40]

Figure 4.13: Distance point on face vs number of images (Person 5)



[Minitab plot: Xsd for the five subjects against the number of images, 4 to 40]

Figure 4.14: Standard Deviation of face vs Number of images

[Minitab plot: MAA for the five subjects (roughly 2.8 to 3.8) against the number of
images, 4 to 40]

Figure 4.15: Maximum area of eyes vs Number of images



4.8 CONCLUSION

In this system, the calculation was made based on the formulas applied to the system.
The standard deviation and maximum area were chosen based on the results obtained
from the graphs: the plots of both variables were consistently linear and smooth
compared to the others. The standard deviation range of the face for person number 1
was 13.25 < Xsd < 14.57, while the maximum area range was 2.80 < MAA < 3.30. The data
differ for each person once the second filter is applied to the system, which is
required to increase its security and reliability. Thus, it was concluded that the
system was successful due to these characteristics.
CHAPTER 5

CONCLUSION AND RECOMMENDATION

5.1 CONCLUSION

The main objectives of this project were to design and develop a security system
based on face recognition using MATLAB and a microcontroller as the main circuit. The
database was successfully developed using the Computer Vision Toolbox approach and a
face bunch graph (5 nodal points) in MATLAB. It involves two main modules: feature
extraction and feature matching. For the hardware development, an Arduino and a
DFRDuino motor driver were used in the main circuit to control the door lock
actuator. In addition, the second objective was to build a security system based on
the biometric concept to access the door, one that detects and recognizes human faces
using image processing. The system was successfully developed and is able to
distinguish facial images to grant special access to the owner. The analysis was
successfully conducted using a few variables and parameters. Therefore, all the
objectives were fulfilled.

5.2 RECOMMENDATIONS FOR FUTURE RESEARCH

The performance of this system is considerably good, and it can be improved into a
much better version. For future work, some recommendations are listed below, based on
the problems and limitations, to enhance the performance:

1. Algorithm of feature extraction and feature matching.

There are many modern algorithms with better performance. For example, pattern
recognition using an Artificial Neural Network is a popular technique, since such
systems are more user-friendly and the accuracy rate in facial detection is high.

2. The serial connection between MATLAB and the Arduino UNO microcontroller.

Communication errors are a common problem when interfacing MATLAB with the Arduino
UNO microcontroller. The Arduino UNO was used in this project because of its
simplicity, ease of programming and low cost. However, it is not suitable for a
continuous system, since it is not capable of running for a long time and only suits
a low-capacity processor. In future work, an Arduino with a higher specification is
recommended in order to minimize such errors.

3. Create a system that can recognize more images.



REFERENCES

[1] Z. b. Abdullah, "Official Portal of Royal Malaysia Police,Statistik Jenayah Pecah Rumah Jan-
Jun 2013," Royal Malaysia Police, 20 November 2013. [Online]. Available:
http://www.rmp.gov.my/.

[2] S. a. S. M. Liu, "A Practical Guide to Biometric Security Technology," A Practical Guide to
Biometric Security Technology, pp. 27-32, 2001.

[3] R. K. M. B. K. S. B. Sravya. V, "A Survey on Fingerprint Biometric System," International


Journal of Advanced Research in Computer Science and Software Engineering, vol. 2, no. 4,
pp. 307-313, April 2012.

[4] C. Le, "A survey of Biometric Security Systems," A survey of Biometric Security Systems, 28
November 2011.

[5] P. O'Neill, A. O'Neill, S. Winters and L. Kwiaton, "Biometrics security system," Biometrics
security system, 2011.

[6] Areza, "Biometricsnewportal 2011," SECON, 2011. [Online]. Available:


http://www.biometricnewsportal.com/.

[7] K. B. a. R. Johnson, "Face Recognition - Technology Overview," Ex-Sight.Com, 2009.


[Online].

[8] S. Modi, "Face recognition technology," Student, 25 October 2013. [Online]. Available:
http://www.slideshare.net/SiddharthModi1/face-recognition-technology-27574561.

[9] M. a. O. Carikci, "A Face Recognition System Based on Eigenfaces," A Face Recognition
System Based on Eigenfaces, pp. 118-123, 2012.

[10] Z. Z. Z. H. S. a. F. D. Chaoyang, "Comparison of Three Face Recognition
Algorithms," International Conference on Systems and Informatics (ICSAI 2012), pp.
1896-1900, May 19-20, 2012.

[11] M. a. S. L. Kirby, " Application of The Karhunen-Loeve Procedure forThe Characterization


of Human Faces," Application of The Karhunen-Loeve Procedure forThe Characterization of
Human Faces.IEEE Transaction on Pattern Analysis and Machine Intelligence. , vol. 12, no.
1, pp. 103-108, 1990.

[12] C. A. S. a. C.-M. T. Lih-Heng, "PCA,LDA and Neural Networkfor Face Identification,"


PCA,LDA and Neural Networkfor Face Identification.IEEE Conference on Industrial
Electronics and Applications., vol. ICIEA, no. 2009, pp. 1256-1259, May 25-27, 2009.
53

[13] R. T. D. V. K. Aruna Bhadu, "Facial Expression Recognition Using DCT, Gabor and Wavelet
Feature Extraction Techniques," Facial Expression Recognition Using DCT, Gabor and
Wavelet Feature Extraction Techniques, vol. 2, no. 1, July 2012.

[14] C. D. N. a. R. M, "Gabor Wavelets and Morphological Shared Weighted Neural Network


Based Automatic Recognition," Gabor Wavelets and Morphological Shared Weighted
Neural Network Based Automatic Recognition, vol. 4, no. 4, August 2013.

[15] J. Smith, "Arduino UNO Board," Disember 21, Malaysia, 2013.

[16] Jacky, "DFRduino Romeo-All in one Controller," 2013.

[17] S. S. Shaily Pandey, "An Optimistic Approach for Implementing Viola Jones Face Detection
Algorithm in Database System and in Real Time," in International Journal of Engineering
Research & Technology , Kanpur, India, July 2015.

[18] J. Stephen, "Math is fun," Math is fun, 2014. [Online]. Available:


http://www.mathsisfun.com/data/standard-deviation.html.

[19] Z. b. Abdullah, "Google," 20 November 2013. [Online]. Available:


http://www.rmp.gov.my/.

[20] S. S. Diwanji, "Biometrics Authentication System," Biometrics Authentication System, vol.


Vol. 3, no. Issue 2, pp. pp: (917-920), April - June 2015.

[21] A. K. R. A. a. P. S. Jain, "An Introduction to Biometric," An Introduction to Biometric, vol.


14(1) , pp. 4-20, 2004 .

[22] M. Faundez-Zanuy, "Biometric Security Technology," Biometric Security Technology.IEEE


Transaction on Aerospace and Electronic System., vol. 21, no. (6), pp. 15-26, 2006.

[23] C. o. H. (n.d.), "Face ID," Retrieved 27 October 2013. [Online]. Available:


http://www.hanvon.com/en/products/FaceID/downloads/FAQ.html.

[24] S. H. S. J. D. C. V. a. B. A. Kar, "A MultiAlgorithmic Face Recognition System," A


MultiAlgorithmic Face Recognition System. International Conference on Advanced, pp.
321-326, December 20-23 2006.

[25] M. A. H. J. N. a. K. M. Agarwal, "Face Recognition Using Principle Component Analysis,


Eigenface and Neural Network," Face Recognition Using Principle Component Analysis,
Eigenface and Neural Network, no. ICSAP 2010, pp. 310-314, February 9-10, 2010.

[26] C. a. N. P. Riddhi, "Details Study On 2D Face Recognition Technique," Indian Streams


Research Journal, vol. 3(2), pp. 1-13, 2013.
54

[27] S. G. a. P. W. M. Caifeng Shan, "Robust Facial Expression Recognition using Local Binary
Pattern," Robust Facial Expression Recognition using Local Binary Pattern.

[28] S. G. a. A. K. T. Bhaskar Gupta, "Face detection using Gabor Feauture Extraction and
Artificial Neural Network," Face detection using Gabor Feauture Extraction and Artificial
Neural Network.

[29] J. D., "Train a Cascade Object Detector," Mathwork, 2016. [Online]. Available:
http://www.mathworks.com/company/?s_tid=gn_co.

[30] C. X. Ng, "Math is Fun," Math is Fun, 2014. [Online]. Available:


http://www.mathsisfun.com/data/standard-deviation.html.

[31] S. &. E. A. KalaJames, "Real Time Smart Car Lock Security System Using Face Detection and
Recognition," Real Time Smart Car Lock Security System Using Face Detection and
Recognition, Jan.10 - 12, 2012.

APPENDIX A

A.1 GANTT CHART PSM 1

The chart spans 14 weeks, from the week of 9/9-11/9 to the week of 7/12-11/12. Planned activities:

PSM 1 briefing session
Find supervisor and project title
Register title and submit abstract
Research on project, cost, equipment, Gantt chart, project flow chart
Design sketch using AutoCAD
Final design
Proposal and presentation slide preparation
Submit proposal, presentation slide and evaluation form
PSM 1 seminar
Hardware testing
Hardware
Software
Report writing

A.2 GANTT CHART PSM 2



APPENDIX B

COST BUDGET

Cost estimation of face recognition security system.

No Item Quantity Unit Price Cost

1 Arduino Uno Board 1 RM 58.00 RM 58.00
2 DFRduino (Arduino shield) 1 RM 56.00 RM 56.00
3 Logitech C270 HD 720p Camera 1 RM 120.00 RM 120.00
4 Door Lock Actuator 1 RM 25.00 RM 25.00
5 12V DC Power Supply 1 RM 77.00 RM 77.00
6 Globe Door Lock 1 RM 30.00 RM 30.00
7 Single Core Cable 2 RM 1.50 RM 3.00
8 PVC Box 1 RM 20.00 RM 20.00
9 LED Lamp 1 RM 8.00 RM 8.00
Total RM 397.00

APPENDIX C

SPECIFICATION OF EQUIPMENT

C.1 ARDUINO UNO

ARDUINO UNO TECHNICAL SPECIFICATION

Microcontroller ATmega328P
Operating Voltage 5V
Input Voltage (recommended) 7-12V
Input Voltage (limit) 6-20V
Digital I/O Pins 14 (of which 6 provide PWM output)
PWM Digital I/O Pins 6
Analog Input Pins 6
DC Current per I/O Pin 20 mA
DC Current for 3.3V Pin 50 mA
Flash Memory 32 KB (ATmega328P)
of which 0.5 KB used by boot loader
SRAM 2 KB (ATmega328P)
EEPROM 1 KB (ATmega328P)
Clock Speed 16 MHz
Length 68.6 mm
Width 53.4 mm
Weight 25 g

C.2 CAMERA LOGITECH C270 HD

CAMERA LOGITECH C270 HD TECHNICAL SPECIFICATION

Type Corded USB

USB Type High Speed USB 2.0

USB VID_PID VID_046D&PID_081A

Microphone Built-in, Noise Suppression

Lens and Sensor Type Plastic

Focus Type Fixed

Field of View (FOV) 60°

Focal Length 4.0 mm

Optical Resolution (True) 1280 x 960 1.2MP

Image Capture (4:3 SD) 320x240, 640x480 1.2 MP, 3.0 MP

Image Capture (16:9 W) 360p, 480p, 720p

Video Capture (4:3 SD) 320x240, 640x480, 800x600

Video Capture (16:9 W) 360p, 480p, 720p,

Frame Rate (max) 30fps @ 640x480

Video Effects (VFX) N/A

Indicator Lights (LED) Activity/Power

Cable Length 5 Feet or 1.5 Meters



C.3 MOTOR DRIVER DFRDUINO

MOTOR DRIVER DFRDUINO SPECIFICATION

Microcontroller Atmega 168/328


Operating Voltage 5V
Input Voltage (recommended) 7-12V
Input Voltage (limit) 6-20V
Digital I/O Pins 14 (of which 6 provide PWM output)
PWM Digital I/O Pins 6 (Pin11,Pin10,Pin9,Pin6,Pin5,Pin3)
Analog Input Pins 6
DC Current per I/O Pin 20 mA
DC Current for 3.3V Pin 50 mA
Flash Memory 32 KB (ATmega328P)
of which 0.5 KB used by boot loader
SRAM 2 KB (ATmega328P)
EEPROM 1 KB (ATmega328P)
Clock Speed 16 MHz
Length 90 mm
Width 80 mm
Weight 25 g
Features  Support AREF
  Support Male and Female Pin Header
Function Auto sensing/switching power input
Display Serial Interface TTL Level

C.4 DOOR LOCK ACTUATOR

DOOR LOCK ACTUATOR TECHNICAL SPECIFICATION

Brand Part Express

Item Weight 2.4 ounces

Product Dimension 8.1 x 1.9 x 1 inches

Item model number PDL50

Manufacturer Part Number Door Lock 2 wire

Components Series of gears triggered by a small motor

Function Converting the rotational motion into vertical motion to lock and unlock

APPENDIX D

Code Sketch 1: Real time video


% delete MATLAB serial connection on COM3
delete(instrfind({'Port'},{'COM3'}));

%clear all value at workspace


clear all;

%clear all history at command window


clc;
close all;

% connect the board


a=arduino('COM3');

%specify pin mode as output (D4 drives the door lock, D5 the indicator)
configurePin(a,'D4','DigitalOutput');
configurePin(a,'D5','DigitalOutput');

%start online camera:


mycam = webcam('Logitech HD Webcam C270');
preview(mycam);

for j=1:inf
% filename = ['Video 18 ' num2str(j) '.jpg']
IM1 = snapshot(mycam);
detector = buildDetector();
[bbox bbimg faces bbfaces] = detectFaceParts(detector,IM1,2);

%select the largest area of bbox


Sbox = size(bbox);
Cbox = Sbox(1); % number of detection rows
if Cbox > 1
for ii = 1:1:Cbox
FAC(1,Cbox) = 0;
FA = bbox(ii,3)*bbox(ii,4);%get area=width*height
FAC(1,ii) = FA;
end
[M,I] = max(FAC); % M is the maximum value, I is its index
bbox = bbox(I,:);
end

A = bbox(:,1:4);% face
B = bbox(:,5:8);% left eye
C = bbox(:,9:12);% right eye

D = bbox(:,13:16);% mouth
E = bbox(:,17:20);% nose

%////////////////////////CB///////////////////////

Xcb = B(:,1)+(B(:,3)/2);
Ycb = B(:,2)+(B(:,4)/2);
CB = [Xcb,Ycb];
% CB

%///////////////////////CC///////////////////////

Xcc = C(:,1)+(C(:,3)/2);
Ycc = C(:,2)+(C(:,4)/2);
CC = [Xcc,Ycc];
% CC

%///////////////////////CD///////////////////////

Xcd = D(:,1)+(D(:,3)/2);
Ycd = D(:,2)+(D(:,4)/2);
CD = [Xcd,Ycd];
% CD

%///////////////////////CE///////////////////////

Xce = E(:,1)+(E(:,3)/2);
Yce = E(:,2)+(E(:,4)/2);
CE = [Xce,Yce];
% CE

% DataC=[CB;CC;CD;CE]
%///////////////////////DISTANCE P ON FACE///////////////////////////
if sum(B) == 0 | sum(E)== 0
P1=0;
else
P1=sqrt(((Xce-Xcb).^2)+((Yce-Ycb).^2));
end
%

if sum(C) == 0 | sum(E)== 0
P2=0;
else
P2=sqrt(((Xce-Xcc).^2)+((Yce-Ycc).^2));
end
%

if sum(D) == 0 | sum(E)== 0
P3=0;
else
P3=sqrt(((Xcd-Xce).^2)+((Ycd-Yce).^2));
end
%

if sum(B) == 0 | sum(C)== 0
P4=0;
else
P4=sqrt(((Xcc-Xcb).^2)+((Ycc-Ycb).^2));
end
%

if sum(B) == 0 | sum(D)== 0

P5=0;
else
P5=sqrt(((Xcd-Xcb).^2)+((Ycd-Ycb).^2));
end
%

if sum(C) == 0 | sum(D)== 0
P6=0;
else
P6=sqrt(((Xcd-Xcc).^2)+((Ycd-Ycc).^2));
end
%

% DataP=[P1,P2,P3,P4,P5,P6]
% %///////////////////////CALCULATE MEAN///////////////////////////
%
Xm = (P1+P2+P3+P4+P5+P6)./6;
%
% %/////////////CALCULATE DIFFERENCE FROM THE MEAN////////////////
%
Xdmp1 = P1-Xm;
Xdmp2 = P2-Xm;
Xdmp3 = P3-Xm;
Xdmp4 = P4-Xm;
Xdmp5 = P5-Xm;
Xdmp6 = P6-Xm;
%
%///////////////////////CALCULATE VARIANCE///////////////////////////
Xv = ((Xdmp1.^2)+(Xdmp2.^2)+(Xdmp3.^2)+(Xdmp4.^2)+(Xdmp5.^2)+(Xdmp6.^2))./6;

%//////////////////CALCULATE STANDARD DEVIATION//////////////////////
Xsd = sqrt(Xv);
Ale = B(:,3).*B(:,4);
Are = C(:,3).*C(:,4);
AA = [Ale;Are];
Maa = max (AA);
Out2 = [Xsd , Maa];
OpSize = size(Out2);

Out3 = [Xsd, Maa, OpSize(2), j]

if OpSize(2)==2 %face detected


A=1
if 13.25<Xsd && Xsd<14.57 % filter xsd
B=1
if 1700<Maa && Maa<2035 % filter Maa
C=1
writeDigitalPin(a,'D4',1); % open
writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',1); % open
writeDigitalPin(a,'D5',0); % inactive
disp('face detected');
else
C=0

writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',0); % inactive
disp('face undetected');
end
else
B=0

writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',0); % inactive
disp('face undetected');
end
else

A=0
writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',0); % inactive
disp('face undetected');
end

pause (2);
end

Code Sketch 2: Hardware Test

% delete MATLAB serial connection on COM3


delete(instrfind({'Port'},{'COM3'}));

%clear all value at workspace


clear all;

%clear all history at command window


clc;
close all;

% connect the board


a=arduino('COM3');

%specify pin mode as output (D4 drives the door lock, D5 the indicator)
configurePin(a,'D4','DigitalOutput');
configurePin(a,'D5','DigitalOutput');
for j=1:inf
if j<10
%yes
writeDigitalPin(a,'D4',0); % close door

writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',0); % close door
writeDigitalPin(a,'D5',0); % inactive
disp('door locked');
else
%no
writeDigitalPin(a,'D4',1); % open door
writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',1); % open door
writeDigitalPin(a,'D5',0); % inactive
disp('door unlocked');
end
j
pause (2);
end

Code Sketch 3: Analysis Threshold


%clear all value at workspace
clear all;

%clear all history at command window


clc;
close all;

%start online camera:


mycam = webcam('Logitech HD Webcam C270');
preview(mycam);

DataWani = zeros(50,2);

for j=1:50
% filename = ['Video 18 ' num2str(j) '.jpg']
IM1 = snapshot(mycam);
detector = buildDetector();
[bbox bbimg faces bbfaces] = detectFaceParts(detector,IM1,2);

%select the largest area of bbox


Sbox = size(bbox);
Cbox = Sbox(1); % number of detection rows
if Cbox > 1
for ii = 1:1:Cbox
FAC(1,Cbox) = 0;
FA = bbox(ii,3)*bbox(ii,4);%get area=width*height
FAC(1,ii) = FA;
end
[M,I] = max(FAC); % M is the maximum value, I is its index
bbox = bbox(I,:);
end

A = bbox(:,1:4);% face
B = bbox(:,5:8);% left eye

C = bbox(:,9:12);% right eye


D = bbox(:,13:16);% mouth
E = bbox(:,17:20);% nose

%////////////////////////CB///////////////////////

Xcb = B(:,1)+(B(:,3)/2);
Ycb = B(:,2)+(B(:,4)/2);
CB = [Xcb,Ycb];
% CB

%///////////////////////CC///////////////////////

Xcc = C(:,1)+(C(:,3)/2);
Ycc = C(:,2)+(C(:,4)/2);
CC = [Xcc,Ycc];
% CC

%///////////////////////CD///////////////////////

Xcd = D(:,1)+(D(:,3)/2);
Ycd = D(:,2)+(D(:,4)/2);
CD = [Xcd,Ycd];
% CD

%///////////////////////CE///////////////////////

Xce = E(:,1)+(E(:,3)/2);
Yce = E(:,2)+(E(:,4)/2);
CE = [Xce,Yce];
% CE

% DataC=[CB;CC;CD;CE]
%///////////////////////DISTANCE P ON FACE///////////////////////////
if sum(B) == 0 | sum(E)== 0
P1=0;
else
P1=sqrt(((Xce-Xcb).^2)+((Yce-Ycb).^2));
end
%

if sum(C) == 0 | sum(E)== 0
P2=0;
else
P2=sqrt(((Xce-Xcc).^2)+((Yce-Ycc).^2));
end
%

if sum(D) == 0 | sum(E)== 0
P3=0;
else
P3=sqrt(((Xcd-Xce).^2)+((Ycd-Yce).^2));
end
%

if sum(B) == 0 | sum(C)== 0

P4=0;
else
P4=sqrt(((Xcc-Xcb).^2)+((Ycc-Ycb).^2));
end
%

if sum(B) == 0 | sum(D)== 0
P5=0;
else
P5=sqrt(((Xcd-Xcb).^2)+((Ycd-Ycb).^2));
end
%

if sum(C) == 0 | sum(D)== 0
P6=0;
else
P6=sqrt(((Xcd-Xcc).^2)+((Ycd-Ycc).^2));
end
%

% DataP=[P1,P2,P3,P4,P5,P6]
% %///////////////////////CALCULATE MEAN///////////////////////////
%
Xm = (P1+P2+P3+P4+P5+P6)./6;
%
% %/////////////CALCULATE DIFFERENCE FROM THE MEAN////////////////
%
Xdmp1 = P1-Xm;
Xdmp2 = P2-Xm;
Xdmp3 = P3-Xm;
Xdmp4 = P4-Xm;
Xdmp5 = P5-Xm;
Xdmp6 = P6-Xm;
%///////////////////////CALCULATE VARIANCE///////////////////////////
Xv = ((Xdmp1.^2)+(Xdmp2.^2)+(Xdmp3.^2)+(Xdmp4.^2)+(Xdmp5.^2)+(Xdmp6.^2))./6;

%//////////////////CALCULATE STANDARD DEVIATION//////////////////////
Xsd = sqrt(Xv);
Ale = B(:,3).*B(:,4);
Are = C(:,3).*C(:,4);
AA = [Ale;Are];
Maa = max (AA);
Out2 = [Xsd , Maa];
DataWani(j,:)=Out2
pause (1);
end

Code Sketch 4: Test Loops

Xsd=12;
Maa=1800;
if OpSize==2 %face detected
A=1
if 13.25<Xsd && Xsd<14.57 % filter xsd
B=1
if 1700<Maa && Maa<2035 % filter Maa
C=1
else
C=0
end
else
B=0
end
else
A=0
end
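The nested checks in Code Sketch 4 can also be collapsed into a single boolean expression with the same thresholds. A minimal sketch (the Xsd, Maa and faceDetected values below are hypothetical samples, not measurements):

```matlab
% Compact equivalent of the nested threshold test in Code Sketch 4.
% Sample feature values (hypothetical, not from a real detection):
Xsd = 12;
Maa = 1800;
faceDetected = true;        % stands in for the OpSize == 2 check

accept = faceDetected && (13.25 < Xsd) && (Xsd < 14.57) ...
                      && (1700 < Maa) && (Maa < 2035);
% accept is false here: Xsd = 12 lies below the 13.25 lower bound.
```

Short-circuit evaluation gives the same accept/reject decision as the nested ifs, without the intermediate A, B and C flags.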

Code Sketch 5: Read video one by one to check the images are detected or not

clear all;
clc;

img = imread('Video 25 03.jpg');

detector = buildDetector();
[bbox bbimg faces bbfaces] = detectFaceParts(detector,img,2);
A = bbox(:,1:4)% face
B = bbox(:,5:8)% left eye
C = bbox(:,9:12)% right eye
D = bbox(:,13:16)% mouth
E = bbox(:,17:20)% nose

%////////////////////////CB///////////////////////

Xcb = B(:,1)+(B(:,3)/2);
Ycb = B(:,2)+(B(:,4)/2);
CB = [Xcb,Ycb];

%///////////////////////CC///////////////////////

Xcc = C(:,1)+(C(:,3)/2);
Ycc = C(:,2)+(C(:,4)/2);
CC = [Xcc,Ycc];

%///////////////////////CD///////////////////////

Xcd = D(:,1)+(D(:,3)/2);
Ycd = D(:,2)+(D(:,4)/2);
CD = [Xcd,Ycd];

%///////////////////////CE///////////////////////

Xce = E(:,1)+(E(:,3)/2);
Yce = E(:,2)+(E(:,4)/2);
CE = [Xce,Yce];

%///////////////////////DISTANCE P ON FACE///////////////////////////

if sum(B) == 0 | sum(E)== 0
P1=0
else
P1=sqrt(((Xce-Xcb).^2)+((Yce-Ycb).^2));
end

if sum(C) == 0 | sum(E)== 0
P2=0
else
P2=sqrt(((Xce-Xcc).^2)+((Yce-Ycc).^2))
end

if sum(D) == 0 | sum(E)== 0
P3=0
else
P3=sqrt(((Xcd-Xce).^2)+((Ycd-Yce).^2))
end

if sum(B) == 0 | sum(C)== 0
P4=0
else
P4=sqrt(((Xcc-Xcb).^2)+((Ycc-Ycb).^2))
end

if sum(B) == 0 | sum(D)== 0
P5=0
else
P5=sqrt(((Xcd-Xcb).^2)+((Ycd-Ycb).^2))
end

if sum(C) == 0 | sum(D)== 0
P6=0
else
P6=sqrt(((Xcd-Xcc).^2)+((Ycd-Ycc).^2))
end

%///////////////////////CALCULATE MEAN///////////////////////////

Xm = (P1+P2+P3+P4+P5+P6)./6

%/////////////CALCULATE DIFFERENCE FROM THE MEAN////////////////

Xdmp1 = P1-Xm
Xdmp2 = P2-Xm
Xdmp3 = P3-Xm
Xdmp4 = P4-Xm
Xdmp5 = P5-Xm
Xdmp6 = P6-Xm

%///////////////////////CALCULATE VARIANCE///////////////////////////

Xv =
((Xdmp1.^2)+(Xdmp2.^2)+(Xdmp3.^2)+(Xdmp4.^2)+(Xdmp5.^2)+(Xdmp6.^2))./6

%//////////////////CALCULATE STANDARD DEVIATION//////////////////////

Xsd = sqrt(Xv)

%//////////////CALCULATE AREA OF LEFT, RIGHT, MAX AREA OF EYE BOX///////////

Ale = log10(B(:,3).*B(:,4))
Are = log10(C(:,3).*C(:,4))
AA = [Ale;Are]
Maa = max (AA)

%///////////////////////////EXCEL OUTPUT ////////////////////////////

FR = [Xsd;Ale]
DRow = [Xsd,Ale,Are,Maa,P1,P2,P3,P4,P5,P6]
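As a quick numeric check of the mean, variance and standard-deviation steps used above, the same formulas can be run on assumed distance values (the six distances below are hypothetical, not measured from a face):

```matlab
% Numeric check of the Xsd feature computation in Code Sketch 5.
% P holds hypothetical pairwise distances P1..P6 (pixels) between
% face-part centres; these are assumed values, not real detections.
P = [55.2 54.8 47.1 60.3 71.5 70.9];

Xm  = sum(P)/6;             % mean of the six distances
Xv  = sum((P - Xm).^2)/6;   % population variance, as in the sketch
Xsd = sqrt(Xv);             % the Xsd feature used for thresholding
```

For these assumed values Xsd comes out at roughly 8.8, which would fall outside the 13.25-14.57 acceptance band applied in Code Sketch 1.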

Code Sketch 6: Read 40 images to ensure the smoothness of the program


clear all;
clc;
bilI=40;
for i=1:1:bilI
DRow (bilI,10)=0.0;
FD(2,bilI)=0.0;
fn = sprintf('Video 25 %02d.jpg',i);
img = imread(fn);

detector = buildDetector();
[bbox bbimg faces bbfaces] = detectFaceParts(detector,img,2);

%select the largest area of bbox


Sbox = size(bbox);
Cbox = Sbox(1); % number of detection rows
if Cbox > 1
for ii = 1:1:Cbox
FAC(1,Cbox) = 0;
FA = bbox(ii,3)*bbox(ii,4);%get area=width*height
FAC(1,ii) = FA;
end
[M,I] = max(FAC); % M is the maximum value, I is its index
bbox = bbox(I,:);
end

A = bbox(:,1:4);% face
B = bbox(:,5:8);% left eye
C = bbox(:,9:12);% right eye
D = bbox(:,13:16);% mouth
E = bbox(:,17:20);% nose

%////////////////////////CB///////////////////////

Xcb = B(:,1)+(B(:,3)/2);
Ycb = B(:,2)+(B(:,4)/2);
CB = [Xcb,Ycb];

%///////////////////////CC///////////////////////

Xcc = C(:,1)+(C(:,3)/2);
Ycc = C(:,2)+(C(:,4)/2);
CC = [Xcc,Ycc];

%///////////////////////CD///////////////////////

Xcd = D(:,1)+(D(:,3)/2);
Ycd = D(:,2)+(D(:,4)/2);
CD = [Xcd,Ycd];

%///////////////////////CE///////////////////////

Xce = E(:,1)+(E(:,3)/2);
Yce = E(:,2)+(E(:,4)/2);
CE = [Xce,Yce];

%///////////////////////DISTANCE P ON FACE///////////////////////////

if sum(B) == 0 | sum(E)== 0
P1=0
else
P1=sqrt(((Xce-Xcb).^2)+((Yce-Ycb).^2));
end

if sum(C) == 0 | sum(E)== 0
P2=0
else
P2=sqrt(((Xce-Xcc).^2)+((Yce-Ycc).^2))
end

if sum(D) == 0 | sum(E)== 0
P3=0
else
P3=sqrt(((Xcd-Xce).^2)+((Ycd-Yce).^2))
end

if sum(B) == 0 | sum(C)== 0
P4=0
else
P4=sqrt(((Xcc-Xcb).^2)+((Ycc-Ycb).^2))

end

if sum(B) == 0 | sum(D)== 0
P5=0
else
P5=sqrt(((Xcd-Xcb).^2)+((Ycd-Ycb).^2))
end

if sum(C) == 0 | sum(D)== 0
P6=0
else
P6=sqrt(((Xcd-Xcc).^2)+((Ycd-Ycc).^2))
end

%///////////////////////CALCULATE MEAN///////////////////////////

Xm = (P1+P2+P3+P4+P5+P6)./6;

%/////////////CALCULATE DIFFERENCE FROM THE MEAN////////////////

Xdmp1 = P1-Xm;
Xdmp2 = P2-Xm;
Xdmp3 = P3-Xm;
Xdmp4 = P4-Xm;
Xdmp5 = P5-Xm;
Xdmp6 = P6-Xm;

%///////////////////////CALCULATE VARIANCE///////////////////////////

Xv = ((Xdmp1.^2)+(Xdmp2.^2)+(Xdmp3.^2)+(Xdmp4.^2)+(Xdmp5.^2)+(Xdmp6.^2))./6;

%//////////////////CALCULATE STANDARD DEVIATION//////////////////////

Xsd = sqrt(Xv);

%//////////CALCULATE AREA OF LEFT AND RIGHT EYE BOXES///////////////

Ale = log10(B(:,3).*B(:,4));
Are = log10(C(:,3).*C(:,4));
AA = [Ale;Are]
Maa = max (AA);

%///////////////////////////EXCEL OUTPUT ////////////////////////////
i
FD(:,i)=[Xsd,P1];

DRow (i,:) = [Xsd,Ale,Are,Maa,P1,P2,P3,P4,P5,P6];

end

Code Sketch 7: Builds face parts detector object

% buildDetector: build face parts detector object


%
% detector = buildDetector( thresholdFace, thresholdParts, stdsize )
%
%Output parameter:
% detector: built detector object
%
%
%Input parameters:
% thresholdFace (optional): MergeThreshold for face detector (Default: 1)
% thresholdParts (optional): MergeThreshold for face parts detector (Default: 1)
% stdsize (optional): size of normalized face (Default: 176)
%
%
%Example:
% detector = buildDetector();
% img = imread('img.jpg');
% [bbox bbimg] = detectFaceParts(detector,img);
%

function detector = buildDetector( thresholdFace, thresholdParts, stdsize )

if( nargin < 1 )
    thresholdFace = 1;
end

if( nargin < 2 )
    thresholdParts = 1;
end

if( nargin < 3 )
    stdsize = 176;
end

nameDetector = {'LeftEye'; 'RightEye'; 'Mouth'; 'Nose'; };
mins = [[12 18]; [12 18]; [15 25]; [15 18]; ];

detector.stdsize = stdsize;
detector.detector = cell(5,1);
for k=1:4
minSize = int32([stdsize/5 stdsize/5]);
minSize = [max(minSize(1),mins(k,1)), max(minSize(2),mins(k,2))];
    detector.detector{k} = vision.CascadeObjectDetector(char(nameDetector(k)), 'MergeThreshold', thresholdParts, 'MinSize', minSize);
end

detector.detector{5} = vision.CascadeObjectDetector('FrontalFaceCART',
'MergeThreshold', thresholdFace);

APPENDIX E

PROGRESS WORKING FLOW
