This thesis is submitted as partial fulfillment of the requirements for the award of the
Bachelor of Electrical Engineering (Hons.) (Electronics)
JUNE 2016
UNIVERSITI MALAYSIA PAHANG
Certified by:
SUPERVISOR’S DECLARATION
I hereby declare that I have checked this thesis and, in my opinion, this thesis is adequate
in terms of scope and quality for the award of the degree of Bachelor of Electrical
Engineering (Hons.) (Electronics).
Signature :
STUDENT’S DECLARATION
I hereby declare that the work in this thesis is my own except for quotations and summaries,
which have been duly acknowledged. The thesis has not been accepted for any degree and
is not concurrently submitted for the award of any other degree.
Signature :
Dedicated, in thankful appreciation, to my beloved family; to my great supervisor,
Mr. Zulkifli bin Musa, for his guidance from beginning to end; to Mr. Azri,
Mr. Ahmad Zainuddin, Mr. Hendriawan and Mr. Toibullah; and to my friends, for being
a constant source of love and sacrifice throughout my journey in education.
ACKNOWLEDGMENTS
Thank You.
ABSTRACT
In recent years, security systems have become one of the most demanded systems for
securing our assets and protecting our privacy. More reliable security systems should be
developed to avoid losses due to identity theft or fraud. Thus, much research has been
done to improve established security systems, especially those based on human
identification. Face recognition is widely used in human identification because of its
capability to measure facial features and subsequently identify a specific person,
especially for security purposes. This thesis proposes several steps to implement a door
lock security system based on facial characteristics using image processing. The proposed
system uses a personal computer running MATLAB R2015b as the main processing medium,
with the work focused on the Image Processing Toolbox. The image processing system
detects and classifies facial images into two groups: the first group is authorized
individuals, while the second is individuals who are not allowed access. The detection
system locates five major regions of the human face: the face itself, the left eye, the
right eye, the nose and the mouth. Next, the classification system analyses the standard
deviation of the six distances between these facial points; in addition, it analyses the
individual's eye area. If the standard deviation lies between 13.25 and 14.57 and the
eye area lies between 2.85 and 3.15, the individual is classified as an authorized
person; if either value falls outside its range, the individual is classified as an
unauthorized person. A door equipped with the drive system will automatically open when
the system detects the presence of an authorized person, and will remain closed when it
detects an unauthorized person. The main components of the system are a camera, a
personal computer (PC), an Arduino, a DFRduino motor driver kit and a magnetic lock.
ABSTRAK
Pada masa kini, sistem keselamatan telah menjadi salah satu sistem yang paling
mencabar untuk melindungi harta benda dan privasi kami. Sistem keselamatan yang
lebih efisien perlu dibangunkan untuk mengelakkan kerugian akibat kecurian identiti
atau penipuan. Oleh itu, banyak kajian telah dilakukan untuk meningkatkan keupayaan
sistem keselamatan. Salah satu cabang teknologi sistem keselamatan ialah sistem yang
berdasarkan pengecaman manusia. Sistem pengecaman wajah manusia telah digunakan
secara meluas kerana keupayaannya dalam mengukur dan mengenal pasti identiti
manusia. Dalam kajian ini, kami mencadangkan beberapa langkah untuk melaksanakan
sistem keselamatan kunci pintu berdasarkan pengecaman muka dengan menggunakan
pemprosesan imej. Sistem yang dicadangkan ini menggunakan komputer peribadi
dengan perisian Matlab2015b sebagai medium pemprosesan utama. Manakala tumpuan
utama difokuskan kepada toolbox pemprosesan imej. Sistem pemprosesan imej akan
mengesan dan mengkelaskan imej wajah kepada dua golongan. Golongan pertama ialah
individu yang dibenarkan, manakala golongan kedua ialah individu yang tidak
dibenarkan. Sistem pengesanan wajah manusia akan mengesan 5 titik utama pada wajah
manusia; iaitu mata kiri, mata kanan, hidung dan mulut. Seterusnya, sistem pengkelasan
pula akan menganalisis nilai sisihan piawai bagi 6 jarak diantara kelima titik wajah
tadi. Sebagai tambahan sistem pengkelasan juga akan menganalisis luas kawasan mata
individu berkenaan. Sekiranya nilai sisihan piawai ialah di antara 13.25 sehingga 14.57,
dan luas mata ialah di antara 2.85 sehingga 3.15 maka individu berkenaan dikelaskan
kepada individu yang dibenarkan. Manakala, sekiranya kedua nilai atau salah satu nilai
tidak sama maka individu berkenaan dikelaskan kepada individu yang tidak dibenarkan.
Pintu yang dilengkapi dengan sistem pemacu automatik akan dibuka apabila sistem
mengesan kehadiran individu yang dibenarkan. Sebaliknya, sistem pemacu automatik
akan sentiasa tertutup sekiranya ia mengesan individu yang tidak dibenarkan.
Komponen utama yang digunakan dalam sistem ini ialah kamera, komputer peribadi
(PC), Arduino, kit pemacu motor DFRduino dan kunci magnetik.
TABLE OF CONTENTS
Page
SUPERVISOR’S DECLARATION ii
STUDENT’S DECLARATION iii
DEDICATION iv
ACKNOWLEDGEMENT v
ABSTRACT vi
ABSTRAK vii
TABLE OF CONTENTS viii
LIST OF TABLES xi
LIST OF FIGURES xii
LIST OF ABBREVIATIONS xv
CHAPTER 1 INTRODUCTION
CHAPTER 3 METHODOLOGY
3.1 Introduction 16
3.2 Workstation 16
3.3 Flowchart Of Software 18
3.4 Overview Of Proposed System 19
3.5 Hardware Components Of The Face Recognition System 19
3.5.1 Power Supply 12V DC 20
3.5.2 Arduino UNO 21
3.5.3 Motor Driver DFRDuino 22
3.5.4 Personal Computer 23
3.5.5 Door Lock Actuator 23
3.5.6 Logitech C270 24
3.6 Assembly Circuit 25
3.7 Software Implementation 26
3.7.1 Image Acquisition 26
3.7.2 Face Detection 26
3.7.3 Face Filter 29
3.7.4 Feature Extraction 30
3.7.5 Standard Deviation (Xsd) And Maximum Area Of Eyes (MAA) 33
3.7.6 Face classification 35
3.8 Conclusion 35
4.1 Introduction 36
4.2 Data Acquisition System (Daq) 36
4.3 Face Detection 38
4.4 Face Filter 40
4.5 Feature Extraction 42
4.5.1 5 Bbox 42
4.5.2 4 Points 44
4.5.3 Distance Points Of Face 44
4.6 Maximum Area Of Eyes (Maa) 44
5.1 Conclusion 50
5.2 Recommendations For Future Research 51
REFERENCES 52
APPENDICES
A Gantt Chart PSM 1 & PSM 2 55
B Cost Budget 57
C Specification Of Equipment 58
D Coding 62
E Progress Working Flow 77
LIST OF TABLES
LIST OF FIGURES
4.1(a) The distance of the person from left and right at an angle of 60°. 37
4.2(a) Matched 38
4.2(b) Un-Matched 38
4.2(c) Un-Matched 38
4.6(a) Matched 43
4.6(b) Un-Matched 43
4.7(a) Matched 44
4.7(b) Un-Matched 44
4.8(a) Matched 44
4.8(b) Un-Matched 44
LIST OF ABBREVIATIONS
ID Identification Card
PCA Principal Component Analysis
ANN Artificial Neural Network
RFID Radio Frequency Identification
DNA Deoxyribonucleic acid
PWM Pulse Width Modulation
USB Universal Serial Bus
UART Universal Asynchronous Receiver Transmitter
TTL Transistor-transistor Logic Circuit
TM Intel Core
GHz Gigahertz
RAM Random Access Memory
MP Mega Pixels
fps Frame Per Second
3-D Three Dimensional
sqrt Square root
Xm Mean
Xdmp Difference from mean
Xv Variance
DAQ Data Acquisition System
MAA Maximum Area of eyes
IC Integrated Circuit
SRAM Static Random Access Memory
EEPROM Electrically Erasable Programmable Read-Only Memory
MHz Megahertz
FOV Field of View
SD Card Secure Digital Card
VFX Video Effects
V Voltage
DC Direct Current
CHAPTER 1
INTRODUCTION
Nowadays, technology is advancing fast. Security and peace of mind are essential
needs for a high quality of life. Thus, it is very important to have a reliable security
system that can secure our assets and protect our privacy.
Installing a home security system can be costly, but not installing one could cost
even more. Even if it is not the latest and greatest technology, it is still important to
have a few basics set up around the home. For instance, a standard alarm should be a
fundamental necessity in any apartment or condo. If a break-in occurs, people want to be
sure that their families are safe and secure. An alarm system can help and give peace of
mind. With these tools, people can keep their families and their valuables safe from any
intruder who might enter their property.
A conventional security system requires a person to use a key, an identification (ID)
card or a password to access an area such as a home or office. However, such systems
have several weaknesses: keys and cards can easily be forged or stolen. These problems
have increased interest in biometric technology as a way to provide a higher degree of
security than conventional security systems.
Based on statistics reported by the Royal Malaysia Police, 11,586 burglary cases
were reported from January 2013 until June 2013 [1]. This large number of burglary cases
led to huge losses for the victims. Such losses emphasize that security for access
control is very important and should not be taken lightly.
Therefore, security systems for access control should be modernized. A more
reliable security system should be developed to avoid greater loss. Biometric technology
can be implemented in access control systems, as it offers a higher degree of security
than conventional systems. According to [2], biometrics is the most secure and
convenient authentication tool, since it cannot be borrowed, stolen or forgotten, and
forging one is practically impossible.
The objective of this project is to design a door lock security system using face
recognition. The specific objectives are as follows:
1. To develop an automatic door lock security system.
2. To detect and recognize facial features using image processing.
3. To control the door lock based on the recognized face image.
CHAPTER 2
LITERATURE REVIEW
A door lock system restricts access to the house. It is important to have a
security system for the door lock in order to secure our assets and privacy. A magnetic
sensor can be used to upgrade a traditional door knob and increase the level of security;
for example, it can detect whether the door is open or closed.
The magnetic door lock works on the principle of electromagnetism: it consists of
an electromagnet and an armature plate. Typically, the electromagnet portion of the lock
is attached to the door frame and the mating armature plate is attached to the door, so
that the two parts work efficiently together. When the door is closed, the two
components are in contact with each other.
The design exploits the fact that current flowing through the coil produces
magnetic flux. The door is kept locked because this flux provides the holding force that
keeps the door from being opened: the flux attracts the armature plate to the
electromagnet, creating the locking action.
The operation of magnetic door locks can be divided into three basic methods. The
first uses a keypad, where the system locks and unlocks with a numeric code (password).
In the second method, a smart card such as a Radio Frequency Identification (RFID) tag
is used, typically in business and commercial buildings. In the last method, the
magnetic door is operated using biometric technologies such as thumbprint and face
recognition.
A smart card allows the card owner to access the facility and can be programmed to
allow or deny access through specified doors. It stores protected information and the
person's privileges. There are two types of smart card: contact and contactless. A
contactless smart card uses an electromagnetic signal to transfer data, while a
contact-based card communicates through physical contact. However, smart cards have
several weaknesses: a card can easily be lost or stolen, and can be damaged if exposed
to a strong electromagnetic field.
Fingerprint recognition [3] is a technology that verifies the identity of a person
based on the fact that everyone has unique fingerprints. Fingerprints are popular
because they achieve a good balance among authentication performance, cost, device size
and ease of use. However, most fingerprint authentication devices still have problems to
be solved. One is that captured images are easily affected by the condition of the
finger surface, which can reduce authentication performance. Another is the problem of
fake fingers. Last but not least is the loss of privacy and security common to all
biometric systems, including fingerprint systems.
Figure 2.4 (a): The example of fingerprint Figure 2.4 (b): Pattern of fingerprint
One of the more recent biometric technologies is the vein recognition system.
Veins are blood vessels that carry blood to the heart, and each person's vein patterns
have unique physical traits. Biometric systems take advantage of this by using the
unique characteristics of the veins to identify the user. Vein recognition systems
mainly focus on the veins in the user's hands: each finger has veins that connect
directly to the heart, with its own physical traits [5]. Unlike other biometric
features, veins are located inside the human body, so the recognition system captures
images of the vein patterns inside the user's fingers by passing near-infrared light
through each finger so that a camera can record the vein patterns.
Vein recognition systems are receiving more attention from experts because they
offer functions that other biometric technologies do not. They provide a higher level of
security, protecting information and access control much better. Their accuracy,
achieved by comparing the recorded database with the current data, is impressive and
reliable. Furthermore, installation and equipment costs are low, and the time taken to
verify each individual is shorter than for other methods (on average about half a
second) [5].
The human iris is a thin circular structure in the eye that controls the diameter
and size of the pupil, and hence the amount of light allowed through to the retina. Iris
color varies from person to person depending on their genes and determines eye color.
Common iris colors include brown (the most common), green, blue, grey, hazel (a
combination of brown, green and gold), violet and, in very rare cases, pink. The iris
also has its own patterns, which differ from eye to eye and person to person, making it
unique to each individual [6].
Iris recognition systems scan the iris in different ways. They analyze over 200
points of the iris, including rings, furrows, freckles, the corona and other
characteristics. After recording data from each individual, the system saves the
information in a database for comparison every time a user wants to access the
system [6].
Iris recognition security systems are considered among the most accurate security
systems available today. The iris is unique and easy to use for identification. Although
the system requires expensive installation and equipment, it is still one of the easiest
and fastest methods of identifying a user, and no physical contact between the user and
the system is required during verification. If the user is wearing accessories such as
glasses or contact lenses, the system still works normally, because these do not change
any characteristics of the iris. Theoretically, even eye surgery has no effect on an
individual's iris characteristics [6].
To prevent unauthorized access via recording devices, voice recognition systems ask
users to repeat random phrases provided by the system during the verification stage [5].
In conclusion, there are many biometric security systems that can be used for
surveillance. However, face recognition is the easiest system to deploy, and the
technology is much cheaper than other biometric systems. Although it is less unique than
iris or DNA recognition, it remains a strong choice, and face recognition systems can be
further improved through continued research and the application of new technology.
The human face plays an important role in our social interaction, conveying a
person's identity, but it is a dynamic object with a high degree of variability in its
appearance. Face detection and face recognition methods have been introduced to overcome
this variability. In this project, feature extraction was used to analyse the face
image. Feature extraction methods that could be used include the Face Bunch Graph (nodal
points), Principal Component Analysis (PCA), Gabor filters and Independent Component
Analysis (ICA).
In the face bunch graph approach, the characteristics of a person's image, captured
through a digital video camera, are analysed by the facial recognition system. Overall
facial structures are measured, such as the distances between the eyes, nose, mouth,
chin and jaw edges. These measurements are stored in a database and used for comparison
when a user stands before the camera [7]. A common representation of the face, the face
bunch graph, is built from about 70 nodal points; when a new image is given, the
corresponding points are found and matched against the face bunch graph. Only 14 to 22
points are required to complete the recognition process. Nodal points have several
advantages: they are easy to use and, in many cases, recognition can be performed
without the person knowing; the cost of implementing the biometric is much lower; and it
is convenient and socially acceptable, because only a picture is taken for face
recognition. The disadvantage of nodal points is that the system cannot tell the
difference between identical points [8].
space [12]. PCA algorithms typically include two phases: a training phase and a
classification phase. In the training phase, the eigenspace is established from training
samples and the training images are mapped to the eigenspace for classification. During
the classification phase, the input is projected onto the same eigenspace and an
appropriate classifier is used to classify it.
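The two PCA phases described above can be sketched in Python. This is an illustrative NumPy sketch, not the thesis code: the training matrix layout, the nearest-neighbour classifier and all names are assumptions.

```python
import numpy as np

def pca_train(X, k):
    """Training phase: build a k-dimensional eigenspace from
    training images X (one flattened image per row)."""
    mean = X.mean(axis=0)
    Xc = X - mean                       # centre the data
    # Rows of Vt are the principal directions ("eigenfaces")
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]                      # keep the top-k components
    return mean, basis, Xc @ basis.T    # projected training set

def pca_classify(x, mean, basis, projections, labels):
    """Classification phase: project a new image into the same
    eigenspace and return the label of the nearest training sample."""
    p = (x - mean) @ basis.T
    dists = np.linalg.norm(projections - p, axis=1)
    return labels[int(np.argmin(dists))]
```

A nearest-neighbour rule in eigenspace is only one possible classifier; any classifier operating on the projected coordinates could be substituted.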
In a nutshell, from the literature study of feature extraction methods, the Face
Bunch Graph is the most suitable for this project, because its advantages fit the
project's requirements and limitations.
CHAPTER 3
METHODOLOGY
3.1 INTRODUCTION
This chapter discusses the methodology used in developing the face recognition
system. It begins with overall view for the complete system. The next section involves the
selection of main components in the process of door lock system. Then, the
following section will explain the operation of the system design and implementation.
Finally, the chapter explains the process of image acquisition using the hardware setup.
3.2 WORKSTATION
(Figure: workstation setup, showing the door lock actuator, the image frame and the
60 cm distance between camera and subject.)
Data collection was needed before the project could proceed. In this project,
front-view face images were collected, using a Logitech C270 to record each face. The
sample images were of 5 students: 3 female and 2 male. A 5-second video was recorded for
each person, from which 40 frames were extracted as sample images, giving a total of 200
images. The distance from the camera was fixed at 60 cm. The videos were recorded in the
robotics lab so that the illumination was the same for every face recording.
(Flowchart of the software: Start → Image Acquisition → Face Detection → Feature
Extraction → Face Classification → End.)
(Block diagram: webcam records the picture → PC processing (MATLAB + Arduino) →
lock/unlock signal → door lock actuator.)
In general, the hardware in this work consists of three major subsystems. Figure
3.1 shows the complete hardware system. The first subsystem is real-time face
recognition, which uses the Logitech camera to record the face image; the best frame is
extracted from the video. Secondly, MATLAB is used to analyse the data and produces a
binary output that is sent to the Arduino. The Arduino acts as a DAQ card to control the
locking and unlocking of the door lock actuator, depending on the output of the face
recognition phase. The DFRduino motor driver is used to control motor direction and
speed through the Arduino; by simply addressing Arduino pins, it is very easy to
incorporate a motor into the door lock system, and the driver can power a motor from a
separate supply of up to 12 V. The third subsystem is the door lock actuator itself,
which is unlocked after the recognition process succeeds and remains locked if the
system does not recognize an authorized person at a distance of 60 cm from the camera.
The hardware components used in developing the overall system are discussed
in the following sub-sections.
A 12 V DC power supply was used to power the main components of the system. It
provides two DC outputs, +12 V and −12 V, and supplies the DFRduino motor driver and the
door lock actuator. Figure 3.2 shows the power supply used.
For the hardware development, an Arduino UNO was used. It controls the locking and
unlocking of the magnetic door in conjunction with the output from the face recognition
phase.
The open source board contains everything needed to support the microcontroller:
14 digital input/output pins, a Universal Serial Bus (USB) connection, a power jack and
a reset button. It is based on the ATmega328 and can be connected to a computer with the
USB cable provided [15].
The board can be powered through the USB connection or an external power supply.
The ATmega328 provides a Universal Asynchronous Receiver Transmitter (UART) for
communicating with the computer, using transistor-transistor logic (TTL, 5 V) serial
communication on digital pins 0 and 1.
The board provides 14 digital input/output pins operating at 5 volts. By default,
the digital pins are configured as inputs, i.e. in a high-impedance state. Each pin has
an internal pull-up resistor of 20–50 kΩ, which is disconnected by default.
A DFRduino motor driver is needed so that the Arduino can switch the door lock
actuator between locked and unlocked without designing a separate driver circuit.
The DFRduino motor driver supports the Arduino UNO in delivering the signal that
locks or unlocks the door lock actuator. The board is supported by thousands of open
source codes and can easily be extended with most Arduino shields. Its integrated
two-way DC motor driver and wireless socket make it much easier to start a robotic
project. The board provides 6 PWM channels (pins 3, 5, 6, 9, 10 and 11), is powered
through the USB interface, and supports both male and female pin headers [16].
The door lock actuator used in this hardware development controls the door locking
and unlocking process and consists of a series of gears driven by a small motor. A
rack-and-pinion set converts the rotational motion into the vertical motion required to
physically lock or unlock the door.
In this study, a camera was used to capture the images; it plays a very important
role in capturing face images. The selection criteria for the camera were size,
resolution, brightness, simple handling and long life span. A Logitech C270 camera (see
Figure 3.7) was chosen, with a resolution of 1280 × 720 pixels, 3 MP photo capability
and Hi-Speed USB 2.0. It can capture up to 30 fps, which is appropriate for this work.
The images captured by the webcam were smooth, with no pixelation, and the camera is
cheaper than comparable models.
The complete circuit to control the door lock actuator and switched lamp is shown
below.
(Circuit diagram: the webcam connects to the PC (MATLAB + Arduino) via USB; a 12 V DC
power supply feeds the DFRduino motor driver (5 V logic, forward/reverse outputs on pins
D4, D5 and pin 3), which drives the door lock actuator and a switched lamp.)
First, in the image acquisition process, the input face image was captured via the
webcam. Once the input image is captured, the feature information is extracted. The
purpose of image acquisition is to find and extract a region that contains only the face
information.
Detection of facial features such as the eyes, nose and mouth is an important step
for many subsequent facial image analysis tasks. In this project, the Viola–Jones face
detection algorithm was applied to identify a face image from its unique features [17].
During detection, each window is assigned to the face class or the background class
based on its distance to the approximated face class mode. The Viola–Jones algorithm has
four main stages:
Figure 3.12(a): 3rd Haar feature    Figure 3.12(b): 4th Haar feature
The threshold and the coefficients of each weak classifier are determined during
training.
d) Cascade Classifiers
In general, a face detection algorithm based on AdaBoost may be divided into
three major parts. First, the integral image is used to extract the face's
rectangle features. Second, weak classifiers, each based on a single rectangle
feature, are formed and trained using the AdaBoost algorithm. Then several
accurate features are combined to form a strong classifier that distinguishes
more accurately between "face" and "non-face". Third, multiple strong
classifiers are cascaded according to the principle of "first heavy, then
light": the strong classifiers formed from the most important features, with
the simplest structure, are placed at the front. These filter out numerous
non-face sub-windows, so detection focuses on the regions most likely to
contain a human face.
Figure 3.14: The face detection algorithm flow based on several cascade Classifiers
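The "integral image" used in the first part above can be sketched in pure Python. This is illustrative only; the function names are assumptions, not the thesis implementation.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x].
    Lets any rectangular Haar-feature sum be read in O(1)."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, using 4 lookups."""
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total
```

With the table precomputed, every rectangle feature costs a constant number of array reads, which is what makes the cascade fast enough for real-time detection.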
A face filter is applied in this project to remove unwanted non-face detections.
Non-faces occur because the face image is taken with some noise in the background and
more than one face may appear in the image. Therefore, the detection with the largest
area is selected so that only one face is processed.
(Flowchart: face detection → if the number of bbox rows > 1, keep the largest area →
5 boxes (face, left eye, right eye, nose, mouth) → 4 centre points → 6 distances.)
a) 5 boundary boxes
The boundary boxes (bbox) are obtained using the Computer Vision Toolbox and
split as shown below. This step is needed to separate each part of the face:
the face itself, the left eye, the right eye, the mouth and the nose.
A = bbox(:,1:4);% face
B = bbox(:,5:8);% left eye
C = bbox(:,9:12);% right eye
D = bbox(:,13:16);% mouth
E = bbox(:,17:20);% nose
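For illustration, the same column slicing can be written in Python. The 20-column row layout follows the MATLAB snippet above (0-based indexing here); the function name is an assumption.

```python
def split_bbox(row):
    """Split one 20-element detection row into the five [x, y, w, h]
    boxes used above: face, left eye, right eye, mouth, nose."""
    names = ["face", "left_eye", "right_eye", "mouth", "nose"]
    return {name: row[4 * i: 4 * i + 4] for i, name in enumerate(names)}
```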
b) 4 centre points
Four centre points are computed from the boundary boxes; the distances
between the eyes, nose and mouth are measured from these points.
////////////////////////CB///////////////////////
Xcb = B(:,1)+(B(:,3)/2);
Ycb = B(:,2)+(B(:,4)/2);
CB = [Xcb,Ycb];
CB (Left Eye)
///////////////////////CC////////////////////////
Xcc = C(:,1)+(C(:,3)/2);
Ycc = C(:,2)+(C(:,4)/2);
CC = [Xcc,Ycc];
CC (Right Eye)
////////////////////////CD///////////////////////
Xcd = D(:,1)+(D(:,3)/2);
Ycd = D(:,2)+(D(:,4)/2);
CD = [Xcd,Ycd];
CD (Mouth)
///////////////////////CE////////////////////////
Xce = E(:,1)+(E(:,3)/2);
Yce = E(:,2)+(E(:,4)/2);
CE = [Xce,Yce];
CE (Nose)
DataC=[CB;CC;CD;CE]
c) 6 distance points
From the centre points of the face, the Pythagorean theorem is applied to
obtain the 6 pairwise distances. The formula is:

d_i = sqrt((x_a - x_b)^2 + (y_a - y_b)^2),  i = 1, 2, ..., 6

where (x_a, y_a) and (x_b, y_b) are a pair of centre points.
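The centre-point and distance computations can be sketched in Python. This is illustrative; the [x, y, w, h] box format follows the MATLAB code above, and the function names are assumptions.

```python
import math
from itertools import combinations

def centre(box):
    """Centre of an [x, y, w, h] bounding box."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def six_distances(boxes):
    """All pairwise distances between the 4 feature centres
    (left eye, right eye, mouth, nose) -> 6 values."""
    pts = [centre(b) for b in boxes]
    return [math.dist(p, q) for p, q in combinations(pts, 2)]
```

Four points give C(4, 2) = 6 pairs, which is where the "6 distances" come from.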
Xsd and MAA were chosen because the standard deviation gives a "standard" way of
knowing what is normal and what is unusually large or small [18].
(Flowchart: if 13.25 < Xsd < 14.57, the door unlocks and opens, then pauses for 2
seconds; otherwise the door stays closed.)
Figure 3.20: Flowchart of the standard deviation and maximum area of eyes classification
The mean, the difference from the mean and the variance are calculated before the
standard deviation is obtained. The maximum area of eyes is obtained by comparing the
areas of the left eye and the right eye and taking the larger value. Two filters are
applied in this project to increase security. The formulas are stated below:

X_m = (1/n) * sum(d_i),  i = 1, 2, ..., 6
X_dmp,i = d_i - X_m,  i = 1, 2, ..., 6
X_v = (1/n) * sum((X_dmp,i)^2)
X_sd = sqrt(X_v)

where n = 6 is the number of distances.
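Taken together, the four formulas amount to the population standard deviation of the six distances. An illustrative Python sketch (the function name is an assumption):

```python
import math

def xsd(distances):
    """Population standard deviation of the 6 face distances:
    mean -> differences from mean -> variance -> square root."""
    n = len(distances)
    x_m = sum(distances) / n                  # mean
    x_dmp = [d - x_m for d in distances]      # differences from the mean
    x_v = sum(e * e for e in x_dmp) / n       # variance
    return math.sqrt(x_v)                     # standard deviation
```

Note this divides by n (population convention), matching the formulas above; a sample convention would divide by n − 1 instead.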
The door will open if an authorized person is standing within the range
13.25 < Xsd < 14.57; otherwise, the door stays closed. A weakness of this single test is
that the door could still open for a person who is not authorized but whose value falls
within the range. Hence, a second filter was needed to strengthen the security system.
If the owner stands at a distance of 60 cm and the eye area is within the owner-specific
range 2.80 < MAA < 3.30, the door opens automatically; for anyone other than the owner
standing at the same 60 cm distance, the MAA range will not match and the door will not
open. In addition, the door lock actuator pauses for 2 seconds.
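The two-filter decision can be sketched as follows. The thresholds are taken from the text above; the function and its names are illustrative assumptions.

```python
def classify(xsd_value, maa_value):
    """Both filters must pass for the door to open:
    13.25 < Xsd < 14.57 and 2.80 < MAA < 3.30."""
    authorized = 13.25 < xsd_value < 14.57 and 2.80 < maa_value < 3.30
    return "open" if authorized else "closed"
```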
3.8 CONCLUSION
In conclusion, the system consists of two elements: software and hardware
development. The software development focuses on building the face recognition database
and processing, while the Arduino microcontroller, the DFRduino motor driver and the
door lock actuator form the main hardware of this project. The purpose of the hardware
development is to demonstrate the functionality of the face recognition security system.
CHAPTER 4
4.1 INTRODUCTION
This chapter presents the experimental results of this research, including tables,
graphs and figures, together with detailed explanations. The data were collected by
recording videos and converting them to images using FS Studio. In this project, Minitab
16 was used to produce the graphical analysis and determine the best values for the
distance points on the face; this software is user friendly and reliable.
The test images used in this project were of 5 students from this university, 3
female and 2 male. The faces were recorded with the same illumination and in front view.
Each video capture took 5 seconds and produced 40 sample frames per person, so the total
number of frames for the five persons is 200. The distance from the camera was fixed at
60 cm; this distance was chosen based on the face detection algorithm, which can detect
all the persons within the specified angle of 60° from left and right. If a person
stands beyond 60°, the face cannot be recognized, because half of the image data is
distorted and the accuracy drops. The height of the PVC stand is 80 cm from the floor,
and the camera elevation and depression angles are both 45°. With the camera adjusted to
these angles, the setup suits people of different identities and physiques, so the face
is captured within the frame. The recording was done in the robotics lab to keep the
illumination the same for every face recording.
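As a quick arithmetic check, 40 frames extracted from a 5-second video correspond to an effective rate of 8 frames per second, not 40 fps; the totals are:

```python
people = 5
seconds_per_video = 5
frames_per_person = 40

fps = frames_per_person / seconds_per_video   # effective extraction rate
total_samples = people * frames_per_person    # total frames in the data set
```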
Lighting level played an important role because it can significantly affect the result. Feature extraction also becomes difficult if the illumination of the image is too high.
[Figure 4.1: camera setup geometry — detection angles θ1 = θ2 = 60° (left/right), elevation/depression angles θ3 = θ4 = 45°, camera stand height 50 cm, camera-to-subject distance 60 cm, 15 cm offsets from the centre line]
It can be concluded that the camera is positioned 60 cm from the person. Four angles were obtained from the experiments, θ1 = θ2 = 60° and θ3 = θ4 = 45°, and the height of the camera stand is 50 cm from the floor.
Figure 4.1(c) shows several samples of images used to obtain the parameter results from a detailed study of the software usage.
[Figure: the face detection filter applied to Image 1 and Image 2]
Figure 4.2(a) shows the result for an image in good condition, while Figures 4.2(b) and 4.2(c) show unmatched images. Those images are unmatched due to background noise, expression, and pose. The objective is to determine the accuracy rate of the system under the most uncontrolled conditions. The accuracy rate of the system can be calculated from Equation 4.1 below.
Accuracy rate (%) = (Number of matched images / Total number of images) × 100    (4.1)
From the analysis, the system gives 92.5% accuracy. This accuracy rate under controlled conditions is suitable for the application of access control, which demands a high accuracy rate since the system must differentiate between authorized and unauthorized persons to grant access.
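The accuracy calculation can be sketched in Python (an illustrative example, not the thesis's MATLAB code; the 185-of-200 split is a hypothetical figure consistent with the reported 92.5%):

```python
def accuracy_rate(matched, total):
    """Accuracy rate (%) = matched images / total images * 100 (Equation 4.1)."""
    return matched / total * 100.0

# Hypothetical split consistent with the reported result: 185 of 200 frames matched
print(accuracy_rate(185, 200))  # 92.5
```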
A face filter is applied in this project to remove unwanted non-face detections. Non-face detections occur because the face image is taken with some noise in the background, or because more than one face appears in the image. Therefore, the detection with the largest area is selected so that only one face image remains after detection.
[Figure: bounding boxes before filtering — two bbox rows with widths W1, W2 and heights H1, H2 — and after filtering, a single face box of width W1 and height H1]
Before the filter is applied there are two bbox rows, one detecting a face and one a non-face, so the bbox matrix is 2×21 (two rows of data). The face bbox is selected as the one with the largest area, obtained from:

Area1 = W1 × H1
Area2 = W2 × H2

The filtering process then removes the non-face row. From the calculation, Area1 is the largest, so it is selected as the face; the bbox matrix becomes 1×21 (1 row and 21 columns of data).
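The largest-area selection can be sketched as follows (a minimal Python illustration of the filtering idea with hypothetical values; the thesis implements it in MATLAB on the bbox matrix):

```python
def keep_largest_face(bbox_rows):
    """Keep only the row whose box has the largest area W*H.

    Each row starts with [X, Y, W, H]; smaller detections are
    assumed to be non-face noise and are discarded.
    """
    return max(bbox_rows, key=lambda row: row[2] * row[3])

# Two detections: a face and a smaller non-face region (hypothetical values)
rows = [
    [10, 12, 120, 150] + [0] * 17,  # Area1 = W1*H1 = 18000 (face)
    [200, 40, 30, 25] + [0] * 17,   # Area2 = W2*H2 = 750 (non-face)
]
face = keep_largest_face(rows)      # only the largest-area row survives
```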
4.5.1 5 BBOX
Five boxes are extracted from the bbox; the matrix data totals 1×21. The boxes are divided into five categories: head (face) box, left eye box, right eye box, mouth box, and nose box. The step for feature extraction is shown below: each box is a 1×4 vector, for example [X5 Y5 W5 H5] (1×4) is the nose box.
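The extraction of the five part boxes from one detection row can be sketched as follows (a Python illustration; the column layout follows the MATLAB code in Appendix D):

```python
def split_boxes(bbox_row):
    """Split a detection row into five [X, Y, W, H] part boxes.

    Columns (per the Appendix D MATLAB code): face 1-4, left eye 5-8,
    right eye 9-12, mouth 13-16, nose 17-20.
    """
    names = ["face", "left_eye", "right_eye", "mouth", "nose"]
    return {name: bbox_row[4 * i:4 * i + 4] for i, name in enumerate(names)}

parts = split_boxes(list(range(1, 21)))  # dummy row holding the values 1..20
# parts["nose"] -> [17, 18, 19, 20]
```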
The resulting 1×21 row consists of 5 bboxes as stated below. The detected images can be divided into two conditions: matched and unmatched.
[Figure: the five bounding boxes with corner points (X1,Y1)–(X5,Y5), widths W1–W5, and heights H1–H5]
4.5.2 4 Points
Four centre points are created from the bounding boxes. Measurement is based on the distances between the eyes, the nose, and the mouth. The image then produces a matched or unmatched result.
[Figure: the centre points marked on the detected face]
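The centre points and the six pairwise distances P1–P6 can be sketched as follows (a Python illustration with hypothetical box values; the thesis computes the same quantities in MATLAB, Appendix D):

```python
import math

def centre(box):
    """Centre point of a [X, Y, W, H] bounding box."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def dist(p, q):
    """Euclidean distance between two centre points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical part boxes
cb = centre([40, 50, 20, 10])    # left eye centre  -> (50.0, 55.0)
cc = centre([90, 50, 20, 10])    # right eye centre -> (100.0, 55.0)
cd = centre([60, 100, 30, 15])   # mouth centre
ce = centre([65, 75, 15, 10])    # nose centre

# P1..P6: nose-left eye, nose-right eye, mouth-nose,
# right eye-left eye, mouth-left eye, mouth-right eye
P = [dist(ce, cb), dist(ce, cc), dist(cd, ce),
     dist(cc, cb), dist(cd, cb), dist(cd, cc)]
```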
Based on these graphs, the areas of the left eye (W1 × H1) and right eye (W2 × H2) are compared, and the maximum of the two areas is chosen.
Plot analysis was performed to show the relationship between the distance points on the face and the number of images. Figures 4.2 to 4.6 show the graphs plotted at a distance of 60 cm from the camera, with each video converted into 40 frames of images as offline data. Based on the graphs below, the standard deviation of the face distances and the maximum eye area are the best features to apply for face recognition as the first and second filters: the maximum area remained essentially fixed, while the standard deviation showed only minor differences compared to the other variables.
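The two features can be restated in Python (a sketch mirroring the MATLAB computation in Appendix D, where the eye areas are taken as log10 of W × H; box layout [X, Y, W, H] as above):

```python
import math

def features(P, left_eye_box, right_eye_box):
    """Return (Xsd, Maa): std deviation of P1..P6 and maximum eye area."""
    m = sum(P) / len(P)                              # mean of the six distances
    var = sum((p - m) ** 2 for p in P) / len(P)      # population variance
    xsd = math.sqrt(var)                             # standard deviation
    ale = math.log10(left_eye_box[2] * left_eye_box[3])    # log10 left eye area
    are = math.log10(right_eye_box[2] * right_eye_box[3])  # log10 right eye area
    return xsd, max(ale, are)
```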
[Figures 4.2–4.6: distance points P1–P6 plotted against the number of images (4–40 frames) for each person. Further plots show the standard deviation and the maximum eye area (range approximately 2.8–3.4) against the number of images.]
4.8 CONCLUSION
In this system, the calculations were made based on the formulas applied to the system. Standard deviation and maximum area were chosen based on the results obtained from the graphs: the plots of both variables were consistently linear and smooth compared to the others. The standard deviation range of the face for person number 1 was 13.25 < Xsd < 14.57, while the maximum area range was 2.80 < Maa < 3.30. The data began to differ between persons once the second filter was applied to the system, as required to increase its security and reliability. Thus, it was concluded that the system was successful owing to these characteristics.
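The two-stage filter for person number 1 can be sketched as follows (a Python illustration of the thresholds reported above; the deployed system performs the equivalent checks in MATLAB):

```python
def grant_access(xsd, maa):
    """Grant access only if both features fall within person 1's ranges."""
    return 13.25 < xsd < 14.57 and 2.80 < maa < 3.30

grant_access(14.0, 3.0)   # within both ranges -> True (door opens)
grant_access(12.0, 3.0)   # fails the first filter -> False (door stays locked)
```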
CHAPTER 5
5.1 CONCLUSION
The main objectives of this project were to design and develop a security system based on face recognition using Matlab with a microcontroller as the main circuit. The database was successfully developed using the Computer Vision Toolbox and a bunch face graph (5 nodal points) in Matlab, involving two main modules: feature extraction and feature matching. For the hardware development, an Arduino and a DFRduino motor driver were used in the main circuit to control the door lock actuator. In addition, the second objective was to build a biometric security system for door access that detects and recognizes human faces using image processing. This system was successfully developed and is able to distinguish facial images in order to grant special access to the owner. The analysis was successfully conducted using several variables and parameters. Therefore, all of these objectives were fulfilled.
APPENDIX A
Gantt chart for Weeks 1–14 (9/9–11/9 through 7/12–11/12):
PSM1 briefing session
Find Supervisor and
Project title
Register Title and Submit
Abstract
Research on project, cost,
equipment, Gantt chart,
project flow chart.
Design sketch using AutoCAD
Final design
Proposal and presentation slide preparation
Submit Proposal +
Presentation Slide +
Evaluation form
PSM 1 Seminar
Hardware Testing
Hardware
Software
Report writing
APPENDIX B
COST BUDGET
APPENDIX C
SPECIFICATION OF EQUIPMENTS
Microcontroller ATmega328P
Operating Voltage 5V
Input Voltage (recommended) 7-12V
Input Voltage (limit) 6-20V
Digital I/O Pins 14 (of which 6 provide PWM output)
PWM Digital I/O Pins 6
Analog Input Pins 6
DC Current per I/O Pin 20 mA
DC Current for 3.3V Pin 50 mA
Flash Memory 32 KB (ATmega328P), of which 0.5 KB is used by the boot loader
SRAM 2 KB (ATmega328P)
EEPROM 1 KB (ATmega328P)
Clock Speed 16 MHz
Length 68.6 mm
Width 53.4 mm
Weight 25 g
APPENDIX D
a = arduino();  % Arduino connection (required setup, omitted from the printed listing)
mycam = webcam; % camera connection (required setup, omitted from the printed listing)
configurePin(a,'D2','DigitalOutput');
configurePin(a,'D3','DigitalOutput');
for j=1:inf
% filename = ['Video 18 ' num2str(j) '.jpg']
IM1 = snapshot(mycam);
detector = buildDetector();
[bbox bbimg faces bbfaces] = detectFaceParts(detector,IM1,2);
A = bbox(:,1:4);% face
B = bbox(:,5:8);% left eye
C = bbox(:,9:12);% right eye
D = bbox(:,13:16);% mouth
E = bbox(:,17:20);% nose
%////////////////////////CB///////////////////////
Xcb = B(:,1)+(B(:,3)/2);
Ycb = B(:,2)+(B(:,4)/2);
CB = [Xcb,Ycb];
% CB
%///////////////////////CC///////////////////////
Xcc = C(:,1)+(C(:,3)/2);
Ycc = C(:,2)+(C(:,4)/2);
CC = [Xcc,Ycc];
% CC
%///////////////////////CD///////////////////////
Xcd = D(:,1)+(D(:,3)/2);
Ycd = D(:,2)+(D(:,4)/2);
CD = [Xcd,Ycd];
% CD
%///////////////////////CE///////////////////////
Xce = E(:,1)+(E(:,3)/2);
Yce = E(:,2)+(E(:,4)/2);
CE = [Xce,Yce];
% CE
% DataC=[CB;CC;CD;CE]
% % %///////////////////////DISTANCE P ON FACE///////////////////////////
% %
if sum(B) == 0 | sum(E)== 0
P1=0;
else
P1=sqrt(((Xce-Xcb).^2)+((Yce-Ycb).^2));
end
%
if sum(C) == 0 | sum(E)== 0
P2=0;
else
P2=sqrt(((Xce-Xcc).^2)+((Yce-Ycc).^2));
end
%
if sum(D) == 0 | sum(E)== 0
P3=0;
else
P3=sqrt(((Xcd-Xce).^2)+((Ycd-Yce).^2));
end
%
if sum(B) == 0 | sum(C)== 0
P4=0;
else
P4=sqrt(((Xcc-Xcb).^2)+((Ycc-Ycb).^2));
end
%
if sum(B) == 0 | sum(D)== 0
P5=0;
else
P5=sqrt(((Xcd-Xcb).^2)+((Ycd-Ycb).^2));
end
%
if sum(C) == 0 | sum(D)== 0
P6=0;
else
P6=sqrt(((Xcd-Xcc).^2)+((Ycd-Ycc).^2));
end
%
% DataP=[P1,P2,P3,P4,P5,P6]
% %///////////////////////CALCULATE MEAN///////////////////////////
%
Xm = (P1+P2+P3+P4+P5+P6)./6;
%
% %/////////////CALCULATE DIFFERENCE FROM THE MEAN////////////////
%
Xdmp1 = P1-Xm;
Xdmp2 = P2-Xm;
Xdmp3 = P3-Xm;
Xdmp4 = P4-Xm;
Xdmp5 = P5-Xm;
Xdmp6 = P6-Xm;
%
% %///////////////////////CALCULATE VARIANCE///////////////////////////
%
Xv = ((Xdmp1.^2)+(Xdmp2.^2)+(Xdmp3.^2)+(Xdmp4.^2)+(Xdmp5.^2)+(Xdmp6.^2))./6;
%
%
% %//////////////////CALCULATE STANDARD DEVIATION//////////////////////
%
Xsd = sqrt(Xv);
Ale = B(:,3).*B(:,4);
Are = C(:,3).*C(:,4);
AA = [Ale;Are];
Maa = max (AA);
Out2 = [Xsd , Maa];
OpSize = size(Out2);
writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',0); % inactive
sprintf('face undetected');
end
else
B=0
writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',0); % inactive
sprintf('face undetected');
end
else
A=0
writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',0); % close
writeDigitalPin(a,'D5',0); % inactive
sprintf('face undetected');
end
pause (2);
end
configurePin(a,'D2','DigitalOutput');
configurePin(a,'D3','DigitalOutput');
Xsd=14;
Maa=3.3;
for j=1:inf
if j<10
%yes
writeDigitalPin(a,'D4',0); % close door
writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',0); % close door
writeDigitalPin(a,'D5',0); % inactive
sprintf('face detected');
else
%no
writeDigitalPin(a,'D4',1); % open door
writeDigitalPin(a,'D5',1); % active
pause (1);
writeDigitalPin(a,'D4',1); % open door
writeDigitalPin(a,'D5',0); % inactive
sprintf('face detected');
end
j
pause (2);
end
DataWani = zeros(50,2);
for j=1:50
% filename = ['Video 18 ' num2str(j) '.jpg']
IM1 = snapshot(mycam);
detector = buildDetector();
[bbox bbimg faces bbfaces] = detectFaceParts(detector,IM1,2);
A = bbox(:,1:4);% face
B = bbox(:,5:8);% left eye
C = bbox(:,9:12);% right eye (missing from the printed listing; required below)
D = bbox(:,13:16);% mouth (missing from the printed listing; required below)
E = bbox(:,17:20);% nose (missing from the printed listing; required below)
%////////////////////////CB///////////////////////
Xcb = B(:,1)+(B(:,3)/2);
Ycb = B(:,2)+(B(:,4)/2);
CB = [Xcb,Ycb];
% CB
%///////////////////////CC///////////////////////
Xcc = C(:,1)+(C(:,3)/2);
Ycc = C(:,2)+(C(:,4)/2);
CC = [Xcc,Ycc];
% CC
%///////////////////////CD///////////////////////
Xcd = D(:,1)+(D(:,3)/2);
Ycd = D(:,2)+(D(:,4)/2);
CD = [Xcd,Ycd];
% CD
%///////////////////////CE///////////////////////
Xce = E(:,1)+(E(:,3)/2);
Yce = E(:,2)+(E(:,4)/2);
CE = [Xce,Yce];
% CE
% DataC=[CB;CC;CD;CE]
% % %///////////////////////DISTANCE P ON FACE///////////////////////////
% %
if sum(B) == 0 | sum(E)== 0
P1=0;
else
P1=sqrt(((Xce-Xcb).^2)+((Yce-Ycb).^2));
end
%
if sum(C) == 0 | sum(E)== 0
P2=0;
else
P2=sqrt(((Xce-Xcc).^2)+((Yce-Ycc).^2));
end
%
if sum(D) == 0 | sum(E)== 0
P3=0;
else
P3=sqrt(((Xcd-Xce).^2)+((Ycd-Yce).^2));
end
%
if sum(B) == 0 | sum(C)== 0
P4=0;
else
P4=sqrt(((Xcc-Xcb).^2)+((Ycc-Ycb).^2));
end
%
if sum(B) == 0 | sum(D)== 0
P5=0;
else
P5=sqrt(((Xcd-Xcb).^2)+((Ycd-Ycb).^2));
end
%
if sum(C) == 0 | sum(D)== 0
P6=0;
else
P6=sqrt(((Xcd-Xcc).^2)+((Ycd-Ycc).^2));
end
%
% DataP=[P1,P2,P3,P4,P5,P6]
% %///////////////////////CALCULATE MEAN///////////////////////////
%
Xm = (P1+P2+P3+P4+P5+P6)./6;
%
% %/////////////CALCULATE DIFFERENCE FROM THE MEAN////////////////
%
Xdmp1 = P1-Xm;
Xdmp2 = P2-Xm;
Xdmp3 = P3-Xm;
Xdmp4 = P4-Xm;
Xdmp5 = P5-Xm;
Xdmp6 = P6-Xm;
%
% %///////////////////////CALCULATE VARIANCE///////////////////////////
%
Xv = ((Xdmp1.^2)+(Xdmp2.^2)+(Xdmp3.^2)+(Xdmp4.^2)+(Xdmp5.^2)+(Xdmp6.^2))./6;
Xsd=12;
Maa=1800;
if OpSize==2 %face detected
A=1
if 13.25<Xsd && Xsd<14.57 % filter xsd
B=1
if 1700<Maa && Maa<2035 % filter Maa
C=1
else
C=0
end
else
B=0
end
else
A=0
end
Code Sketch 5: Read the video frames one by one to check whether the images are detected
clear all;
clc;
detector = buildDetector();
[bbox bbimg faces bbfaces] = detectFaceParts(detector,img,2);
A = bbox(:,1:4)% face
B = bbox(:,5:8)% left eye
C = bbox(:,9:12)% right eye
D = bbox(:,13:16)% mouth
E = bbox(:,17:20)% nose
%////////////////////////CB///////////////////////
Xcb = B(:,1)+(B(:,3)/2);
Ycb = B(:,2)+(B(:,4)/2);
CB = [Xcb,Ycb];
%///////////////////////CC///////////////////////
Xcc = C(:,1)+(C(:,3)/2);
Ycc = C(:,2)+(C(:,4)/2);
CC = [Xcc,Ycc];
%///////////////////////CD///////////////////////
Xcd = D(:,1)+(D(:,3)/2);
Ycd = D(:,2)+(D(:,4)/2);
CD = [Xcd,Ycd];
%///////////////////////CE///////////////////////
Xce = E(:,1)+(E(:,3)/2);
Yce = E(:,2)+(E(:,4)/2);
CE = [Xce,Yce];
%///////////////////////DISTANCE P ON FACE///////////////////////////
if sum(B) == 0 | sum(E)== 0
P1=0
else
P1=sqrt(((Xce-Xcb).^2)+((Yce-Ycb).^2));
end
if sum(C) == 0 | sum(E)== 0
P2=0
else
P2=sqrt(((Xce-Xcc).^2)+((Yce-Ycc).^2))
end
if sum(D) == 0 | sum(E)== 0
P3=0
else
P3=sqrt(((Xcd-Xce).^2)+((Ycd-Yce).^2))
end
if sum(B) == 0 | sum(C)== 0
P4=0
else
P4=sqrt(((Xcc-Xcb).^2)+((Ycc-Ycb).^2))
end
if sum(B) == 0 | sum(D)== 0
P5=0
else
P5=sqrt(((Xcd-Xcb).^2)+((Ycd-Ycb).^2))
end
if sum(C) == 0 | sum(D)== 0
P6=0
else
P6=sqrt(((Xcd-Xcc).^2)+((Ycd-Ycc).^2))
end
%///////////////////////CALCULATE MEAN///////////////////////////
Xm = (P1+P2+P3+P4+P5+P6)./6
Xdmp1 = P1-Xm
Xdmp2 = P2-Xm
Xdmp3 = P3-Xm
Xdmp4 = P4-Xm
Xdmp5 = P5-Xm
Xdmp6 = P6-Xm
%///////////////////////CALCULATE VARIANCE///////////////////////////
Xv = ((Xdmp1.^2)+(Xdmp2.^2)+(Xdmp3.^2)+(Xdmp4.^2)+(Xdmp5.^2)+(Xdmp6.^2))./6
Xsd = sqrt(Xv)
Ale = log10(B(:,3).*B(:,4))
Are = log10(C(:,3).*C(:,4))
AA = [Ale;Are]
Maa = max (AA)
FR = [Xsd;Ale]
DRow = [Xsd,Ale,Are,Maa,P1,P2,P3,P4,P5,P6]
detector = buildDetector();
[bbox bbimg faces bbfaces] = detectFaceParts(detector,img,2);
A = bbox(:,1:4);% face
B = bbox(:,5:8);% left eye
C = bbox(:,9:12);% right eye
D = bbox(:,13:16);% mouth
E = bbox(:,17:20);% nose
%////////////////////////CB///////////////////////
Xcb = B(:,1)+(B(:,3)/2);
Ycb = B(:,2)+(B(:,4)/2);
CB = [Xcb,Ycb];
%///////////////////////CC///////////////////////
Xcc = C(:,1)+(C(:,3)/2);
Ycc = C(:,2)+(C(:,4)/2);
CC = [Xcc,Ycc];
%///////////////////////CD///////////////////////
Xcd = D(:,1)+(D(:,3)/2);
Ycd = D(:,2)+(D(:,4)/2);
CD = [Xcd,Ycd];
%///////////////////////CE///////////////////////
Xce = E(:,1)+(E(:,3)/2);
Yce = E(:,2)+(E(:,4)/2);
CE = [Xce,Yce];
%///////////////////////DISTANCE P ON FACE///////////////////////////
if sum(B) == 0 | sum(E)== 0
P1=0
else
P1=sqrt(((Xce-Xcb).^2)+((Yce-Ycb).^2));
end
if sum(C) == 0 | sum(E)== 0
P2=0
else
P2=sqrt(((Xce-Xcc).^2)+((Yce-Ycc).^2))
end
if sum(D) == 0 | sum(E)== 0
P3=0
else
P3=sqrt(((Xcd-Xce).^2)+((Ycd-Yce).^2))
end
if sum(B) == 0 | sum(C)== 0
P4=0
else
P4=sqrt(((Xcc-Xcb).^2)+((Ycc-Ycb).^2))
end
if sum(B) == 0 | sum(D)== 0
P5=0
else
P5=sqrt(((Xcd-Xcb).^2)+((Ycd-Ycb).^2))
end
if sum(C) == 0 | sum(D)== 0
P6=0
else
P6=sqrt(((Xcd-Xcc).^2)+((Ycd-Ycc).^2))
end
%///////////////////////CALCULATE MEAN///////////////////////////
Xm = (P1+P2+P3+P4+P5+P6)./6;
Xdmp1 = P1-Xm;
Xdmp2 = P2-Xm;
Xdmp3 = P3-Xm;
Xdmp4 = P4-Xm;
Xdmp5 = P5-Xm;
Xdmp6 = P6-Xm;
%///////////////////////CALCULATE VARIANCE///////////////////////////
Xv = ((Xdmp1.^2)+(Xdmp2.^2)+(Xdmp3.^2)+(Xdmp4.^2)+(Xdmp5.^2)+(Xdmp6.^2))./6;
Xsd = sqrt(Xv);
Ale = log10(B(:,3).*B(:,4));
Are = log10(C(:,3).*C(:,4));
AA = [Ale;Are]
Maa = max (AA);
%///////////////////////////EXCEL OUTPUT ////////////////////////////
i
FD(:,i)=[Xsd,P1];
end
if sum(B) == 0 | sum(E)== 0
P1=0
else
P1=sqrt(((Xce-Xcb).^2)+((Yce-Ycb).^2));
end
if sum(C) == 0 | sum(E)== 0
P2=0
else
P2=sqrt(((Xce-Xcc).^2)+((Yce-Ycc).^2))
end
if sum(D) == 0 | sum(E)== 0
P3=0
else
P3=sqrt(((Xcd-Xce).^2)+((Ycd-Yce).^2))
end
if sum(B) == 0 | sum(C)== 0
P4=0
else
P4=sqrt(((Xcc-Xcb).^2)+((Ycc-Ycb).^2))
end
if sum(B) == 0 | sum(D)== 0
P5=0
else
P5=sqrt(((Xcd-Xcb).^2)+((Ycd-Ycb).^2))
end
if sum(C) == 0 | sum(D)== 0
P6=0
else
P6=sqrt(((Xcd-Xcc).^2)+((Ycd-Ycc).^2))
end
%///////////////////////CALCULATE MEAN///////////////////////////
Xm = (P1+P2+P3+P4+P5+P6)./6;
Xdmp1 = P1-Xm;
Xdmp2 = P2-Xm;
Xdmp3 = P3-Xm;
Xdmp4 = P4-Xm;
Xdmp5 = P5-Xm;
Xdmp6 = P6-Xm;
%///////////////////////CALCULATE VARIANCE///////////////////////////
Xv = ((Xdmp1.^2)+(Xdmp2.^2)+(Xdmp3.^2)+(Xdmp4.^2)+(Xdmp5.^2)+(Xdmp6.^2))./6;
Xsd = sqrt(Xv);
Ale = log10(B(:,3).*B(:,4));
Are = log10(C(:,3).*C(:,4));
AA = [Ale;Are]
Maa = max (AA);
%///////////////////////////EXCEL OUTPUT ////////////////////////////
i
FD(:,i)=[Xsd,P1];
end
detector.stdsize = stdsize;
detector.detector = cell(5,1);
for k=1:4
minSize = int32([stdsize/5 stdsize/5]);
minSize = [max(minSize(1),mins(k,1)), max(minSize(2),mins(k,2))];
detector.detector{k} = vision.CascadeObjectDetector(char(nameDetector(k)), 'MergeThreshold', thresholdParts, 'MinSize', minSize);
end
detector.detector{5} = vision.CascadeObjectDetector('FrontalFaceCART',
'MergeThreshold', thresholdFace);
APPENDIX E