
Proceedings of the 10th INDIACom; INDIACom-2016; IEEE Conference ID: 37465
2016 3rd International Conference on "Computing for Sustainable Global Development", 16th-18th March, 2016
Bharati Vidyapeeth's Institute of Computer Applications and Management (BVICAM), New Delhi (INDIA)

EEG-based Emotion Recognition of Quran Listeners
Anas Fattouh, Ibrahim Albidewi, Bader Baterfi
Computer Science Department, Faculty of Computing and IT, King Abdulaziz University
Jeddah 21589, SAUDI ARABIA
Email ID: afattouh@kau.edu.sa

Abstract – In this paper, we investigate the possibility of recognizing the emotional state of a subject from his or her brainwave activity while listening to verses from the Quran. To this end, an experiment is designed and performed in which the electroencephalogram (EEG) brain activity of 17 participants is recorded, using the Emotiv EPOC neuroheadset, while they listen to selected verses from the Quran. At the end of each trial, the participant provides a self-assessed happiness-unhappiness feedback and, at the same time, valence-arousal values are calculated from the recorded EEGs using the fractal dimension method based on the Higuchi algorithm. These values are used to train a classifier that recognizes two emotional states, happy and unhappy. The classifier is trained using the random forest algorithm and the achieved recognition accuracy is 87.96%.

Keywords – Auditory Stimuli; Brain Computer Interface (BCI); Emotion Recognition; Emotiv EPOC Neuroheadset.

NOMENCLATURE
BCI Brain Computer Interface
EEG Electroencephalogram
FD Fractal Dimension
FN False Negative
FP False Positive
SAM Self-Assessment Manikin
TN True Negative
TP True Positive

I. INTRODUCTION
Emotions play an important role in our thinking and behavior. In order to understand emotions, it is important to understand their three components: the subjective component, the physiological component, and the expressive component [1]. The physiological component deals with our body's reaction to the emotion. It can be recognized from external signals such as text [2], facial expressions [3], speech [4], body gestures [5], or a combination of these [6,7]. In addition, internal signals such as heart rate [8] and brain waves [9] can be used to discriminate the emotional state.
Emotion recognition has many applications in different domains such as education [10], health [11], commerce [12], games [13], security [14], and others [15,16]. However, the most important application for computer scientists could be the natural language processing domain, where the machine can understand the user's emotion and react upon this understanding [17,18].
In order to build an emotion recognition system, a relationship between the targeted emotion and the emotion's measurement signals has to be identified. To this end, several databases developed in the literature contain standard labeled stimuli, i.e., the emotion induced by each stimulus [19]. Once appropriate stimuli are selected from a database for an emotion, an experiment is conducted in which the subject is exposed to the stimuli and the emotion's measurement signals are recorded. Then, an appropriate emotional model is selected and its parameters are identified from the recorded data [20]. Unfortunately, this approach cannot be applied to estimate the emotions evoked while listening to recited verses from the Quran, as no such databases exist for Quran audio clips. In this paper, an attempt is made to find a mapping between the recited verses from the Quran that a subject listens to and the evoked emotions. To this end, an experiment is designed in which the electroencephalogram (EEG) brainwaves are recorded while the subject is listening to recited verses from the Quran. The emotions, obtained from the recorded EEG signals and from the subject's feedback, are used to train an emotional model that discriminates between two emotions, happy and unhappy.

II. METHODOLOGY
The proposed emotional state estimation system consists of the subsystems shown in Fig. 1.

Fig. 1. Proposed emotional state recognition system

In this section, a brief explanation of each subsystem is given.

A. The Recording Device
The recording device has three units: a signal acquisition unit, an importer unit, and a recording unit. The signal acquisition unit used in the proposed system is the Emotiv EPOC neuroheadset shown in Fig. 2. The headset has fourteen electrodes, AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4, distributed over the 10-20 international system positions, in addition to two reference electrodes. The acquired signals are aligned, band-pass filtered, digitized at a sampling frequency of 128 Hz, and wirelessly transmitted to a Windows PC [21]. The importer unit is a Matlab®-based server that streams the EEG signals acquired by the Emotiv headset to a Simulink® model in real time [22]. The imported signal is recorded from inside Matlab® using a dedicated function [23].

Fig. 2. Recording device: (a) Emotiv neuroheadset, (b) Electrodes map [21] and (c) Signal importer [22]
B. The Emotional Model
In order to estimate the emotion of a subject, the latter should be quantified. Two approaches have been proposed in the literature to model emotions: discrete models and dimensional models [24]. The discrete view states that there are six to ten basic emotions, which determine emotional responses [25]. On the other hand, the dimensional view claims that emotions are subjective combinations of experiences of valence and arousal [26]. Valence is a personal feeling of pleasantness or unpleasantness, while arousal is a personal state of sensation being activated or deactivated [27] (see Fig. 3 [28]).

Fig. 3. Emotions conceptualizing: (a) Discrete view, (b) Dimensional view
C. The Feature Extractor
The feature extractor subsystem takes the recorded raw EEG signals and produces features appropriate to the adopted emotional model. As a two-dimensional arousal-valence model is adopted for this work, the feature extractor should produce the arousal and valence values from the raw EEG signals. To this end, the fractal dimension based on the Higuchi algorithm is used, as it has shown satisfactory performance in classifying emotions in a two-dimensional space [29,30]. Consider a finite time series $X(t)$, $t = 1, 2, \ldots, N$; a new time series can be constructed as follows [31]:

$$X_k^m : \; X(m),\, X(m+k),\, X(m+2k),\, \ldots,\, X\!\left(m + \left\lfloor \tfrac{N-m}{k} \right\rfloor k\right) \qquad (1)$$

for $m = 1, 2, \ldots, k$, where $\lfloor \cdot \rfloor$ denotes the Gauss notation (integer part), $m$ is the initial time and $k$ is the time interval. Let $L_m(k)$ be the length of the curve $X_k^m$, defined by:

$$L_m(k) = \frac{1}{k} \left[ \left( \sum_{i=1}^{\lfloor (N-m)/k \rfloor} \left| X(m+ik) - X\big(m+(i-1)k\big) \right| \right) \frac{N-1}{\lfloor (N-m)/k \rfloor \, k} \right] \qquad (2)$$

Let $L(k)$ be the average value of $L_m(k)$ over the $k$ sets. If $L(k) \propto k^{-D}$, then the curve is fractal with dimension $D$. In this case, the fractal dimension (FD) can be obtained as the slope of the best-fitting regression line of $\log\big(L(k)\big)$ against $\log(1/k)$ for different values of $k$, i.e.,

$$FD = \frac{\sum_{i=1}^{n} (x_i - \bar{X})(y_i - \bar{Y})}{\sum_{i=1}^{n} (x_i - \bar{X})^2} \qquad (3)$$

where $x_i = \log(1/k_i)$, $y_i = \log\big(L(k_i)\big)$, $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} x_i$, $\bar{Y} = \frac{1}{n}\sum_{i=1}^{n} y_i$, $i = 1, 2, \ldots, n$, and $n$ is the number of considered values of $k$.

The arousal value is calculated as the fractal dimension of the raw EEG signal recorded from electrode FC6, while the valence value is calculated as the difference between the fractal dimensions of the raw EEG signals recorded from electrodes AF3 and F4, respectively (see Fig. 2 for the positions of the electrodes on the scalp).
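To make Equations (1)-(3) concrete, the following is a minimal illustrative sketch of the Higuchi fractal dimension in Python/NumPy. The authors compute this feature inside their Matlab® pipeline; this code, the choice of k_max, and the white-noise check are assumptions added for illustration only.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D signal (Equations (1)-(3)); k_max is an assumed choice."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_inv_k, log_L = [], []
    for k in range(1, k_max + 1):
        Lm = []
        for m in range(1, k + 1):                      # initial times m = 1, ..., k
            n_steps = (N - m) // k                     # Gauss notation: floor((N - m) / k)
            if n_steps < 1:
                continue
            idx = m - 1 + np.arange(n_steps + 1) * k   # samples X(m), X(m+k), X(m+2k), ...
            length = np.sum(np.abs(np.diff(x[idx])))   # sum of |X(m+ik) - X(m+(i-1)k)|
            Lm.append(length * (N - 1) / (n_steps * k) / k)   # Equation (2)
        if not Lm:
            continue
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(Lm)))              # L(k): average over the k curves
    xs, ys = np.array(log_inv_k), np.array(log_L)
    # Equation (3): slope of the regression of log(L(k)) on log(1/k)
    return np.sum((xs - xs.mean()) * (ys - ys.mean())) / np.sum((xs - xs.mean()) ** 2)

# Quick sanity check on synthetic white noise, whose Higuchi FD is known to be close to 2
rng = np.random.default_rng(0)
print(higuchi_fd(rng.normal(size=1280)))
```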

D. The Classifier
The arousal-valence values obtained from the feature extractor subsystem need to be mapped to emotions in the two-dimensional emotion space. Different types of classifiers can be used for this objective; each one has its applications and limitations. Decision trees are among the most widely used methods for inductive inference, as they can efficiently handle noisy training data, deal with missing feature values, and process large amounts of training data, and the underlying classification process can be interpreted through explanatory variables. Moreover, to improve the accuracy of a single decision tree, an ensemble of decision trees, called a random forest, is deployed. Consider a forest of $T$ trees trained independently, each on a different bootstrap sample of the original data; a test point $v$ is simultaneously pushed through all trees until it reaches the corresponding leaves. The final class is given by [32]:

$$p(c \mid v) = \frac{1}{T} \sum_{t=1}^{T} p_t(c \mid v) \qquad (4)$$

where $p(c \mid v)$ is the probability that the test point $v$ belongs to class $c$ according to the whole forest, $p_t(c \mid v)$ is the probability that the test point $v$ belongs to class $c$ according to tree $t$, and $T$ is the number of trained trees.
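As a minimal sketch of Equation (4), the per-tree class probabilities can be averaged explicitly. With scikit-learn, RandomForestClassifier.predict_proba already performs exactly this averaging, so the loop below only makes the ensemble step visible. The feature matrix X, the labels y, and the test point are hypothetical placeholders, not data from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-ins for arousal-valence features and self-reported emotions
rng = np.random.default_rng(0)
X = rng.normal(size=(108, 2))                   # arousal-valence pairs
y = rng.choice(["Happy", "Unhappy"], size=108)  # reported emotions

forest = RandomForestClassifier(n_estimators=50, bootstrap=True, random_state=0)
forest.fit(X, y)

def forest_posterior(forest, v):
    """Average the per-tree class probabilities p_t(c|v), as in Equation (4)."""
    v = np.asarray(v, dtype=float).reshape(1, -1)
    per_tree = np.stack([tree.predict_proba(v)[0] for tree in forest.estimators_])
    return per_tree.mean(axis=0)                # p(c|v) = (1/T) * sum_t p_t(c|v)

p = forest_posterior(forest, [1.62, 0.05])      # hypothetical arousal-valence test point
print(forest.classes_[np.argmax(p)], p)         # matches forest.predict_proba on the same point
```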
III. EXPERIMENT

A. The Subjects
Seventeen male volunteers aged between 16 and 45 years participated in the experiment. The participants have different nationalities and no history of psychiatric or neurological disease. They also have no prior experience with brain computer interface (BCI) experiments. Each participant signed an informed consent form prepared in accordance with the regulations of the local ethics committee.

B. The Stimuli
Seventy-five verses recited by five different readers were selected by experts based on the meaning of these verses, with the expectation that they could evoke different emotions. Ten seconds of silence are added to the beginning of each verse to be used as a baseline for the recorded data.

C. The Experiment
The experiment starts with a pre-session in which the subject is informed about the experiment and the steps to follow in order to complete it successfully; a consent form is then signed by the subject. The experiment is performed in a calm room with low lighting and a comfortable ambience. The subject sits on an armchair in front of a PC. The Emotiv headset is mounted on the subject's head and the data acquisition program is started on the PC. After ensuring that the Emotiv electrodes are well connected to the program, the subject selects a verse and starts listening to it while focusing on its meaning. After each recording, the subject reports his emotion using the Self-Assessment Manikin (SAM) [33].

D. Data Processing and Feature Extraction
The following operations are applied in order to calculate the arousal and valence of the recorded EEG signals (a sketch of this pipeline is given after the list):
1. EEG signals are filtered by a Butterworth bandpass filter of order 8 with a lower cutoff frequency of 2 Hz and a higher cutoff frequency of 42 Hz.
2. The fractal dimension for arousal is calculated from channel FC6 using Equation (3).
3. The fractal dimension for valence is calculated from channels AF3 and F4 using Equation (3).
E. Classifier Training
The arousal-valence values obtained from the previous step are used as emotion features to train the classifier. The corresponding classes of these features are the emotions reported by the subjects after each trial. A 10-fold cross validation procedure is employed, in which the 108 feature vectors are randomly divided into 10 disjoint subgroups. Then, nine subgroups are used to train the 50-tree classifiers and the remaining subgroup is used to assess the performance of the trained classifiers. This process is repeated 10 times such that each time a different subgroup is left out for assessment.
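A sketch of this training and evaluation protocol is shown below, under the assumption that the 108 feature vectors and their reported labels are available as X and y (hypothetical arrays here). Only what the text states is taken as given (10 folds, 50 trees); the shuffling seed and other tree parameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

# Hypothetical placeholders for the 108 arousal-valence vectors and reported emotions
rng = np.random.default_rng(1)
X = rng.normal(size=(108, 2))
y = rng.choice(["Happy", "Unhappy"], size=108)

clf = RandomForestClassifier(n_estimators=50, random_state=1)    # 50-tree random forest
cv = KFold(n_splits=10, shuffle=True, random_state=1)            # 10 random disjoint subgroups
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")   # train on 9 folds, assess on 1, repeated 10 times
print(f"mean 10-fold accuracy: {scores.mean():.4f}")
```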
IV. RESULTS AND DISCUSSION
In order to explore the relationship between the emotions reported by the subjects and the selected verses and readers, the data are plotted as shown in Fig. 4. From Fig. 4, one can observe that the emotion cannot be inferred from the listened verse and/or the reader; it is subjective. This is a logical observation, as the same verse recited by the same reader could produce different emotions for different subjects.
The question now is whether the emotion can be inferred from the electroencephalogram (EEG) measurements. To this end, arousal-valence values are calculated from the recorded EEG using Equation (3). Fig. 5.a shows a scatter plot of the obtained arousal-valence values as a function of the emotion reported by the subject after each recording.
In order to build an emotion recognition system, 50-tree classifiers are trained with the 10-fold cross validation procedure. The classified emotions are shown in Fig. 5.b.
From Fig. 6, one can observe that four observations of the 69 "Unhappy" class were misclassified as "Happy" and nine observations of the 39 "Happy" class were misclassified as "Unhappy". The performance indicators of the trained classifier are given in the following equations:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} = 0.88$$

$$\text{Precision} = \frac{TP}{TP + FP} = 0.94$$

$$\text{Sensitivity (Recall)} = \frac{TP}{TP + FN} = 0.88$$

$$\text{Specificity} = \frac{TN}{FP + TN} = 0.88$$

$$\text{F-measure} = \frac{2\,TP}{2\,TP + FN + FP} = 0.91$$

where $TP = 65$ is the true positive count, $FP = 4$ is the false positive count, $TN = 30$ is the true negative count, and $FN = 9$ is the false negative count.
The obtained values are good compared with those of many pattern recognition applications.
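These values can be checked directly from the reported confusion counts; the short arithmetic sketch below uses only the counts TP = 65, FP = 4, TN = 30, FN = 9 quoted above.

```python
TP, FP, TN, FN = 65, 4, 30, 9

accuracy    = (TP + TN) / (TP + TN + FP + FN)   # 95/108  -> 0.8796 (the reported 87.96%)
precision   = TP / (TP + FP)                    # 65/69   -> 0.9420
recall      = TP / (TP + FN)                    # 65/74   -> 0.8784
specificity = TN / (FP + TN)                    # 30/34   -> 0.8824
f_measure   = 2 * TP / (2 * TP + FN + FP)       # 130/143 -> 0.9091
```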

Fig. 4. Scatter plot of data and its correlation matrix: (a) Emotions vs verses and readers, (b) Correlation between different factors

Fig. 5. Scatter plot of EEG features (arousal-valence values): (a) Real emotions, (b) Classified emotions

V. CONCLUSION
The arousal-valence model is a well-known model for classifying emotional states. This paper explored the use of the arousal-valence model to classify two emotional states of subjects while they are listening to verses from the Quran. The fractal dimension method based on the Higuchi algorithm was used to calculate the arousal-valence values from the recorded EEG brainwaves. These values are considered as features of the induced emotions, and they were used together with the subjects' reported emotions to train a classifier that can discriminate between two emotional states, happy and unhappy. The classifier was trained using the random forest algorithm and the achieved accuracy was 87.96%.
The proposed experiment will be expanded to include more emotions and it will be used with a larger number of subjects, both male and female. In addition, other methods for calculating the arousal-valence values from EEG brainwaves will be explored and compared.
REFERENCES
[1] K. Cherry, The Everything Psychology Book: Explore the human psyche and understand why we do the things we do. Everything Books, 2010.
[2] J. Li and F. Ren, "Emotion recognition from blog articles," in Natural Language Processing and Knowledge Engineering, 2008. NLP-KE'08. International Conference on, 2008, pp. 1-8.
[3] T. Partala, V. Surakka, and T. Vanhala, "Real-time estimation of emotional experiences from facial expressions," Interacting with Computers, vol. 18, pp. 208-226, 2006.
[4] M. El Ayadi, M. S. Kamel, and F. Karray, "Survey on speech emotion recognition: Features, classification schemes, and databases," Pattern Recognition, vol. 44, pp. 572-587, 2011.
[5] S. Piana, A. Staglianò, F. Odone, A. Verri, and A. Camurri, "Real-time automatic emotion recognition from body gestures," arXiv preprint arXiv:1402.5047, 2014.
[6] G. Caridakis, G. Castellano, L. Kessous, A. Raouzaiou, L. Malatesta, S. Asteriadis, et al., "Multimodal emotion recognition from expressive faces, body gestures and speech," in Artificial Intelligence and Innovations 2007: From Theory to Applications, Springer, 2007, pp. 375-388.
[7] G. Castellano, L. Kessous, and G. Caridakis, "Emotion recognition through multiple modalities: face, body gesture, speech," in Affect and Emotion in Human-Computer Interaction, Springer, 2008, pp. 92-103.
[8] D. S. Quintana, A. J. Guastella, T. Outhred, I. B. Hickie, and A. H. Kemp, "Heart rate variability is associated with emotion recognition: direct evidence for a relationship between the autonomic nervous system and social cognition," International Journal of Psychophysiology, vol. 86, pp. 168-172, 2012.
[9] Y. Liu, O. Sourina, and M. K. Nguyen, "Real-time EEG-based emotion recognition and its applications," in Transactions on Computational Science XII, Springer, 2011, pp. 256-277.
[10] L. Shen, M. Wang, and R. Shen, "Affective e-Learning: Using 'emotional' data to improve learning in pervasive learning environment," Educational Technology & Society, vol. 12, no. 2, pp. 176-189, 2009.
[11] L. Dennison, L. Morrison, G. Conway, and L. Yardley, "Opportunities and challenges for smartphone applications in supporting health behavior change: qualitative study," Journal of Medical Internet Research, vol. 15, 2013.
[12] F. Ren and C. Quan, "Linguistic-based emotion analysis and recognition for measuring consumer satisfaction: an application of affective computing," Information Technology and Management, vol. 13, pp. 321-332, 2012.
[13] S. Yildirim, S. Narayanan, and A. Potamianos, "Detecting emotional state of a child in a conversational computer game," Computer Speech & Language, vol. 25, pp. 29-44, 2011.
[14] E. Boldrini, A. Balahur Dobrescu, P. Martínez Barco, and A. Montoyo Guijarro, "EmotiBlog: a fine-grained model for emotion detection in non-traditional textual genres," 2009.
[15] T. Vogt, E. André, and N. Bee, "EmoVoice—A framework for online recognition of emotions from voice," in Perception in Multimodal Dialogue Systems, Springer, 2008, pp. 188-199.
[16] H. Yoon, et al., "Emotion recognition of serious game players using a simple brain computer interface," in ICT Convergence (ICTC), 2013 International Conference on, IEEE, 2013.
[17] N. Fragopanagos and J. G. Taylor, "Emotion recognition in human–computer interaction," Neural Networks, vol. 18, pp. 389-405, 2005.
[18] B. Klein, L. Gaedt, and G. Cook, "Emotional robots: Principles and experiences with Paro in Denmark, Germany, and the UK," GeroPsych: The Journal of Gerontopsychology and Geriatric Psychiatry, vol. 26, p. 89, 2013.
[19] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, "A survey of affect recognition methods: Audio, visual, and spontaneous expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, pp. 39-58, 2009.
[20] T. L. Nwe, S. W. Foo, and L. C. De Silva, "Speech emotion recognition using hidden Markov models," Speech Communication, vol. 41, pp. 603-623, 2003.
[21] Brain map image from: http://www.emotiv.com/bitrix/components/bitrix/forum.interface/show_file.php?fid=1529.
[22] EPOC Simulink EEG Importer: http://www.xcessity.at/products_epoc_simulink_eeg_importer.php.
[23] A. Fattouh, O. Horn, and G. Bourhis, "Emotional BCI control of a smart wheelchair," Int. J. Comput. Sci., vol. 10, pp. 32-36, 2013.
[24] T. Eerola and J. K. Vuoskoski, "A comparison of the discrete and dimensional models of emotion in music," Psychology of Music, 2010.
[25] J. L. Tracy and D. Randles, "Four models of basic emotions: a review of Ekman and Cordaro, Izard, Levenson, and Panksepp and Watt," Emotion Review, vol. 3, pp. 397-405, 2011.
[26] I. B. Mauss and M. D. Robinson, "Measures of emotion: A review," Cognition and Emotion, vol. 23, pp. 209-237, 2009.
[27] L. F. Barrett, "Discrete emotions or dimensions? The role of valence focus and arousal focus," Cognition & Emotion, vol. 12, pp. 579-599, 1998.
[28] R. M. Bagby, A. G. Ryder, D. Ben-Dat, J. Bacchiochi, and J. D. Parker, "Validation of the dimensional factor structure of the Personality Psychopathology Five in clinical and nonclinical samples," Journal of Personality Disorders, vol. 16, pp. 304-316, 2002.
[29] O. Sourina and Y. Liu, "A fractal-based algorithm of emotion recognition from EEG using arousal-valence model," in BIOSIGNALS, 2011, pp. 209-214.
[30] N. Thammasan, K.-i. Fukui, K. Moriyama, and M. Numao, "EEG-based emotion recognition during music listening," The 28th Annual Conference of the Japanese Society for Artificial Intelligence, vol. 28, pp. 1-3, 2014.
[31] T. Higuchi, "Approach to an irregular time series on the basis of the fractal theory," Physica D: Nonlinear Phenomena, vol. 31, pp. 277-283, 1988.
[32] A. Criminisi, J. Shotton, and E. Konukoglu, "Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning," Foundations and Trends in Computer Graphics and Vision, pp. 81-227, 2011.
[33] M. M. Bradley and P. J. Lang, "Measuring emotion: The self-assessment manikin and the semantic differential," Journal of Behavior Therapy & Experimental Psychiatry, vol. 25, no. 1, pp. 49-59, 1994.
