13 www.erpublication.org
Music Player Based on Facial Expressions
This method also uses a facial-expression database such as the JAFFE database, and achieves an accuracy of 83% on that database.

Gabor Filter using AdaBoost and SVM

This method is described in the next section of the article. In feature selection by AdaBoost, each Gabor filter is treated as a weak classifier. AdaBoost picks the best of those classifiers, and then boosts the weights on the examples so that the errors count more heavily. The next filter is selected as the one that gives the best performance on the errors of the previous filter. At each step, the chosen filter can be shown to be uncorrelated with the output of the previous filters.
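The boosting loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: each Gabor filter response is reduced to a hypothetical thresholded scalar (a decision stump), and the threshold candidates, data layout, and function names are assumptions made for the sketch.

```python
import math

def stump_predict(x, feature, threshold):
    """Weak classifier: +1 if the chosen filter response exceeds a threshold."""
    return 1 if x[feature] > threshold else -1

def adaboost_select(samples, labels, n_features, rounds):
    """Greedily pick one weak classifier (filter) per round, reweighting
    the examples so each new filter is judged on the previous errors."""
    n = len(samples)
    weights = [1.0 / n] * n          # start with uniform example weights
    chosen = []
    for _ in range(rounds):
        # Pick the stump (filter/threshold pair) with lowest weighted error.
        best = None
        for f in range(n_features):
            for t in (0.0, 0.5):     # illustrative threshold candidates
                err = sum(w for x, y, w in zip(samples, labels, weights)
                          if stump_predict(x, f, t) != y)
                if best is None or err < best[0]:
                    best = (err, f, t)
        err, f, t = best
        err = min(max(err, 1e-10), 1 - 1e-10)           # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)          # classifier weight
        # Boost the weights of misclassified examples so the next filter
        # is selected by its performance on the errors of this one.
        weights = [w * math.exp(-alpha * y * stump_predict(x, f, t))
                   for x, y, w in zip(samples, labels, weights)]
        z = sum(weights)
        weights = [w / z for w in weights]
        chosen.append((f, t, alpha))
    return chosen

def strong_predict(chosen, x):
    """Weighted vote of the selected weak classifiers."""
    s = sum(a * stump_predict(x, f, t) for f, t, a in chosen)
    return 1 if s >= 0 else -1
```

For example, on toy two-feature responses where feature 0 separates the classes, two rounds of selection already yield a correct strong classifier.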
International Journal of Engineering and Technical Research (IJETR)
ISSN: 2321-0869, Volume-3, Issue-4, April 2015
V. SYSTEM IMPLEMENTATION

The environmental setup required for implementing the proposed system includes Ubuntu as the platform and OpenCV as the programming library.

Dependencies are as shown below:

CMake >= 2.8
Python >= 2.7, < 3.0
OpenCV >= 2.4.5

The procedure to compile on the Linux platform is:

1. mkdir build
2. cd build
3. cmake .. ; make ; make install - now the asset folder should be populated

For compilation of the system on Windows, the following steps are followed:

1. Using CMake or CMake GUI, select emotime as the source folder and configure.
2. If it complains about the variable OpenCV_DIR not being set, set it to the appropriate path so that:
   o C:/path/to/opencv/dir/ contains the libraries (*.lib)
   o C:/path/to/opencv/dir/include contains the include directories (OpenCV)
   o If the include directory is missing, the project will likely fail to compile due to missing references to OpenCV or similar.
3. Then generate the project and compile it. This was tested with Visual Studio 12, 64-bit.

3% were Asian or Latino. Videos were recorded in analog S-video using a camera located directly in front of the subject. Subjects were instructed by an experimenter to perform a series of 23 facial expressions. Subjects began and ended each display with a neutral face. Before performing each display, an experimenter described and modeled the desired display. Image sequences from neutral to target display were digitized into 640 by 480 pixel arrays with 8-bit precision for gray-scale values.

For our study, we selected the 313 sequences from the dataset that were labeled as one of the 7 basic emotions. The sequences came from 90 subjects, with 1 to 7 emotions per subject. The first and last frames (neutral and peak) were used as training images and for testing generalization to new subjects, for a total of 625 examples. The trained classifiers were later applied to the entire sequence.

VII. EXPECTED RESULT

The proposed model is expected to detect the facial expression and play music as per the mood shown by the expression. The results of applying the Cohn-Kanade database to the system are as follows:
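The expression-to-playback step described above can be sketched as follows. This is a minimal illustration assuming a classifier that already returns one of the basic emotion labels; the label names, track file names, and the mapping itself are hypothetical and not taken from the paper.

```python
# Hypothetical mapping from a detected emotion label to a playlist.
# A real system would populate this from the user's music library.
EMOTION_PLAYLISTS = {
    "happy":   ["upbeat_01.mp3", "upbeat_02.mp3"],
    "sad":     ["calm_01.mp3"],
    "anger":   ["soothing_01.mp3"],
    "neutral": ["ambient_01.mp3"],
}

def pick_track(emotion, playlists=EMOTION_PLAYLISTS):
    """Return the first track for the detected emotion, falling back to
    the neutral playlist for labels with no dedicated entry."""
    tracks = playlists.get(emotion, playlists["neutral"])
    return tracks[0]
```

A label such as "happy" selects the first upbeat track, while an unmapped label such as "surprise" falls back to the neutral playlist; the chosen file would then be handed to whatever audio backend the player uses.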