
Face Emotion Detection By Using AI

Abstract
Computer vision is a research field that aims to perceive and represent 3D information about objects in the world. Its essence is to reconstruct the visual aspects of a 3D object by analyzing the 2D information extracted from it. Surface reconstruction and representation of 3D objects not only offer theoretical benefits but are also required by numerous applications.

Face detection is the process of analyzing an input image to determine the number, location, size, and orientation of any faces it contains. It is the basis for face tracking and face recognition, and its results directly affect the process and accuracy of the recognition stage. The common face detection methods are: knowledge-based approaches, statistics-based approaches, and hybrid approaches that combine different features or methods. Knowledge-based approaches can, to some extent, detect faces against complex backgrounds and achieve high detection speed, but they need additional features to further improve their adaptability. Statistics-based approaches detect faces by running a classifier over every candidate region of the image: the face region is treated as one class of patterns, and a large number of "face" and "non-face" training samples are used to construct the classifier.

Introduction
Face detection has been one of the hottest topics in computer vision for the past few years. The technology has been available for some time now and is used all over the place: from cameras that make sure faces are in focus before you take a picture, to Facebook, which tags people automatically once you upload a photo (remember when you did that manually?).

In short, this is how face detection and face recognition work together when unlocking your phone:

You look at your phone, and it extracts your face from the image (the nerdy name for this process is face detection). Then it compares the current face with the one it saved during enrollment and checks whether they match (the nerdy name for this step is face recognition); if they do, it unlocks itself.
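The two steps above can be sketched in code. This is only a toy illustration: the `embed` function below is a hypothetical stand-in (a coarse downsample of the pixel grid), not a real learned face embedding, and the 0.9 cosine-similarity threshold is an arbitrary assumption.

```python
import numpy as np

def embed(face_gray):
    """Hypothetical 'embedding': average-pool a 64x64 grayscale face down to
    8x8, flatten, and unit-normalize. A real system uses a learned model."""
    small = face_gray.reshape(8, 8, 8, 8).mean(axis=(1, 3))
    v = small.flatten().astype(float)
    return v / (np.linalg.norm(v) + 1e-9)

def is_same_person(enrolled, candidate, threshold=0.9):
    """Face recognition step: cosine similarity against the stored face."""
    return float(enrolled @ candidate) >= threshold

# Enroll a face once, then compare a new capture against it.
rng = np.random.default_rng(0)
stored = embed(rng.integers(0, 255, (64, 64)))
print(is_same_person(stored, stored))  # an identical face always matches
```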

People are getting interested in this technology because of its ample applications. ATMs with facial recognition and face detection software have been introduced for withdrawing money, and emotion analysis is gaining relevance for research purposes.

THEORY OF FACE DETECTION CLASSIFIERS

A computer program that decides whether an image is a positive image (a face) or a negative image (not a face) is called a classifier. A classifier is trained on hundreds of thousands of face and non-face images so that it learns to classify new images correctly. OpenCV provides us with two pre-trained classifiers ready to be used for face detection:

1. Haar Classifier
2. LBP Classifier

Both of these classifiers process images in grayscale, basically because we don't need color information to decide whether a picture contains a face (we'll talk more about this later on). As these classifiers come pre-trained in OpenCV, their learned knowledge files are bundled with it. To run a classifier, we first need to load its knowledge file; until then it has no knowledge, just like a newborn baby. Each file starts with the name of the classifier it belongs to, for example the Haar cascade classifier. These are the two types of classifiers we will use.
HAAR CLASSIFIER
The Haar classifier is a machine learning based approach, an algorithm created by Paul Viola and Michael Jones, which (as mentioned before) is trained on many, many positive images (with faces) and negative images (without faces).

LBP CASCADE CLASSIFIER


Like any other classifier, the Local Binary Patterns classifier, or LBP for short, also needs to be trained on hundreds of images. LBP is a visual/texture descriptor, and, thankfully, our faces are also composed of micro visual patterns.

PROJECT OVERVIEW
The statistics-based method has strong adaptability and robustness; however, its detection speed needs to be improved, because it must test every possible window by exhaustive search and therefore has high computational complexity. The boosted-cascade variant achieves real-time detection speed and high detection accuracy, but needs a long training time.

The digital image of a face is a representation of a two-dimensional image as a finite set of digital values called picture elements, or pixels. Pixel values typically represent gray levels, colours, heights, opacities, etc. Note that digitization implies that a digital image is only an approximation of a real scene.

Recently there has been tremendous growth in the field of computer vision. Converting this huge amount of low-level information into usable high-level information is the subject of computer vision: it deals with developing the theoretical and algorithmic basis by which useful information about the 3D world can be automatically extracted and analyzed from one or more 2D images of that world.

PROJECT DESCRIPTION
Object Detection using Haar feature-based cascade classifiers is an effective object
detection method proposed by Paul Viola and Michael Jones in their paper, “Rapid
Object Detection using a Boosted Cascade of Simple Features” in 2001. It is a
machine learning based approach where a cascade function is trained from a lot of
positive and negative images. It is then used to detect objects in other images.

Here we will work with face detection. Initially, the algorithm needs a lot of positive images (images of faces) and negative images (images without faces) to train the classifier. Then we need to extract features from them. For this, Haar features are used. They are just like our convolutional kernels: each feature is a single value obtained by subtracting the sum of pixels under the white rectangle from the sum of pixels under the black rectangle.
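The rectangle sums behind each Haar feature can be computed in constant time with an integral image (summed-area table). Below is a minimal NumPy sketch of a two-rectangle edge feature on a toy 4x4 patch; the patch values and rectangle layout are made up for illustration.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle with top-left corner (x, y), using four
    lookups into a zero-padded integral image (constant time per rectangle)."""
    p = np.pad(ii, ((1, 0), (1, 0)))  # pad so x=0 / y=0 lookups work
    return p[y + h, x + w] - p[y, x + w] - p[y + h, x] + p[y, x]

# Toy patch: bright left half (9s), dark right half (1s).
patch = np.array([[9, 9, 1, 1]] * 4)
ii = integral_image(patch)

# Two-rectangle edge feature: the difference between the two rectangle sums.
left = rect_sum(ii, 0, 0, 2, 4)    # 9 * 8 = 72
right = rect_sum(ii, 2, 0, 2, 4)   # 1 * 8 = 8
feature = left - right
print(feature)  # 64
```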

PROBLEM DEFINITION

Existing system

Computer vision is a research field that aims to perceive and represent 3D information about objects in the world. Its essence is to reconstruct the visual aspects of a 3D object by analyzing the 2D information extracted from it. Surface reconstruction and representation of 3D objects not only offer theoretical benefits but are also required by numerous applications.

Proposed system

In this project, we describe a system that can detect and track a human face and output the necessary information according to our training data.

ADVANTAGES OF PROPOSED SYSTEM:

This project can serve as a use case in biometrics, often as a part of (or together with) a facial recognition system. Face detection is also used in video surveillance, human-computer interfaces, and image database management. Some recent digital cameras use face detection for autofocus.

User Documentation


SYSTEM REQUIREMENTS:
● Anaconda software to be installed.

● Microsoft Windows 10/8/7/Vista/2003/XP.

● For Anaconda—Minimum 3 GB disk space to download and install.

● 4 GB RAM recommended.

● OpenCV will be downloaded and used.

Data flow diagrams


Code
SOFTWARE TESTING
Testing
Software testing is a critical element of software quality assurance and represents
the ultimate review of specification, design and code generation.

TESTING OBJECTIVES
• To ensure that during operation the system performs as per specification.
• To make sure that the system meets user requirements during operation.
• To make sure that during operation, incorrect input, processing, and output will be detected.
• To see that when correct inputs are fed to the system, the outputs are correct.
• To verify that the controls incorporated in the system work as intended.
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an as-yet-undiscovered error.

The software developed has been tested using the following testing strategies; any errors encountered are corrected, and the affected part of the program, procedure, or function is retested until all errors are removed. A successful test is one that uncovers an as-yet-undiscovered error.

System testing gives evidence that the system is working correctly, which builds confidence in the system designers and users and prevents frustration during the implementation process.

TEST CASE DESIGN:

White box testing


White box testing is a test case design method that uses the control structure of the procedural design to derive test cases: every independent path in a module is exercised at least once, all logical decisions are exercised on both sides, all loops are executed at their boundaries and within their operational bounds, and internal data structures are exercised to ensure their validity. Here, the customer is given three chances to enter a valid choice from the given menu, after which control exits the current menu.

Black Box Testing


Black box testing attempts to find errors in the following categories: incorrect or missing functions, interface errors, errors in data structures, performance errors, and initialization and termination errors. Here, all input data must match the expected data type to count as a valid entry.
The following are the different tests at various levels:

Unit Testing:
Unit testing is essentially for verification of the code produced during the coding phase; the goal is to test the internal logic of the module/program. In this project, unit testing is done during the coding phase of the data entry forms to check whether the functions work properly. In this phase, all the drivers are tested to confirm they are rightly connected.

Integration Testing:
All the tested modules are combined into subsystems, which are then tested. The goal is to see whether the modules are properly integrated, with the emphasis on testing the interfaces between modules. In the generic code, integration testing is done mainly on the table creation module and the insertion module.

Validation Testing
This testing concentrates on confirming that the software is error-free in all respects. All the specified validations are verified and the software is subjected to rigorous testing. It also aims at determining the degree to which the designed software deviates from the specification; deviations are listed and corrected.
System Testing

This testing is a series of different tests whose primary purpose is to fully exercise the computer-based system. This involves:
• Implementing the system in a simulated production environment and testing
it.
• Introducing errors and testing for error handling.

OUTPUT SCREENS
CONCLUSION
This project proposed a real-time face recognition system (RTFRS). RTFRS has been implemented in four variants (CPU Mono, CPU Parallel, Hybrid Mono, and Hybrid Parallel). The Fisherface algorithm is employed for the recognition phase and the Haar cascade algorithm for the detection phase. These implementations are based on industry-standard tools, including the Open Computer Vision (OpenCV) library, EmguCV (Windows universal CUDA 2.9.0.1922), and heterogeneous processing units. The experiment consists of defining, training, and recognizing 400 images of 40 persons' faces (10 images per person) on the four variants, all in the same environment. The speed-up factor is measured with respect to the CPU Mono implementation (the slowest of the four). The practical results demonstrate that the Hybrid Parallel variant is the fastest of all, giving the largest overall speed-up; the CPU Parallel variant also gives an overall speed-up, and the Hybrid Mono variant gives a small improvement (about 1.04). Thus, employing parallel processing on modern computer architectures can accelerate a face recognition system.
