
Visualization of Affect in Faces based on

Context Appraisal

Defended by
Diana Di Lorenza Arellano Tavara

A thesis submitted to the Departament de Ciències Matemàtiques i Informàtica of the
University of the Balearic Islands in accordance with the requirements for the degree of
Doctor of Computer Science

Thesis Advisors
Dr. Francisco J. Perales
Dr. Javier Varona

2012
Dr. Francisco José Perales López.

Profesor Titular de Universidad.

Departamento de Matemáticas e Informática.

Universitat de les Illes Balears.

Dr. Javier Varona Gómez.

Profesor Contratado Doctor.

Departamento de Matemáticas e Informática.

Universitat de les Illes Balears.

HACEN CONSTAR:

Que la memoria titulada Visualization of Affect in Faces based on Context Appraisal ha sido
realizada por Diana Di Lorenza Arellano Tavara bajo nuestra dirección en el Departamento
de Matemáticas e Informática de la Universitat de les Illes Balears y constituye la tesis
para optar al Grado de Doctor en Informática.

Palma de Mallorca, Enero de 2012

Dr. Francisco José Perales López
Director de la tesis

Dr. Javier Varona Gómez
Director de la tesis

Diana Di Lorenza Arellano Tavara


Doctorando

To all who contributed to making this Thesis a reality.

Acknowledgments

I can hardly believe I am writing this page, because it means it is finished.


First of all I would like to thank God & Co. for giving me all the strength through the
hardest moments along these years, and for blessing me every day.
This part needs to be written in Spanish, otherwise they will not understand. A mis
padres maravillosos por demostrarme lo que es la valentía, la decisión y, sobre todo, la
mente positiva.
To my beloved brothers who have been there all the time cheering me up when I needed
it, and for making me feel as their little sister even when I am the oldest.
To Paco and Xavi, for being there from the very beginning of this journey. Although
it has not been always easy, here we are, at the end of one road... but at the beginning of
a new one. Muchas muchas gracias por todo.
To all those colleagues and people from academia who have helped me get to where
I am: Sandra, Eva, Elisabeth, Pere, Gloria, María José, Juan M. ... you literally offered
me a place, and now you are sharing this with me.
To ALL my Spanish, Venezuelan, Peruvian, Serbian ... and world-wide friends... you
have made my days. Cris, Jessi, Moncho, Rosita, Leslie, Gaby, Víctor, Simon, Isaac,
Carlitos, Mehdi, Pilar, Patricia, Yolanda, Angela, Carol, Mon, Tucko, Marija, Pedrn,
Caro, Ili, Eli... I should write another thesis just to thank you for being my friends.
To all those persons who day to day have helped, have laughed with me, have cried
with me, or just did not care. To all of you, wherever you are, thank you.
And finally to my Vladito, because without YOU, this would have been a different
story. Puno ti hvala! Volim te puno!

Muchísimas gracias

Abstract

Virtual Characters are more than avatars capable of expressing emotions and interacting
with users. Virtual Characters should be seen as a very reliable representation of a
human being, capable of expressing all the possible affective traits after the appraisal and
evaluation of what is happening around and inside them. They should feel and express
what they are feeling; they should convince you that they are real.
To achieve this level of believability, several researchers have proposed different
computational and affective models, as well as graphical techniques to simulate expressions,
gestures, behavior or voice. All this state of the art has provided us with sufficient data and
information to see what else needs to be done.
As a result, we propose a contextual and affective framework that allows the generation
of the context that surrounds the character, as well as the simulation of its psychological
characteristics, such as preferences, standards, personality, or admiration for other agents.
Moreover, the framework proposes novel, implementation-independent techniques for
the visualization of emotions and mood.
Through experimentation, we came up with a set of head-position/eye-gaze configurations
that are perceived as certain personality traits, we validated the generation of expressions
for moods, and we assessed the feasibility of the context generation through movie
scenes which, translated into our system, triggered the same emotions and elicited the same
facial expressions as in the movie.
This research is a step forward in the creation of more believable virtual characters, as it
points out other elements that should be considered when creating characters that can
be used in affective HCI applications, storytelling, or virtual worlds for entertainment (e.g.
videogames) or for therapy (e.g. therapies with autistic children).

Key words: Virtual Characters; Context Representation; Facial Expressions; Psychology.

Resumen

Hablar de personajes virtuales implica hablar de mucho más que avatares capaces de
expresar emociones e interactuar con los usuarios. Los personajes virtuales deberían ser vistos
como una representación fidedigna de los seres humanos, capaces de expresar un amplio
rango de rasgos afectivos después de haber analizado y evaluado qué ocurre fuera y dentro
de ellos. Deben sentir y expresar lo que sienten de tal forma que logren convencer de que son
reales.
Para alcanzar este nivel de credibilidad gran cantidad de investigadores han propuesto
diferentes modelos afectivos y computacionales, así como técnicas en gráficos para simular
expresiones, gestos, comportamientos y voz. Todo este trabajo previo nos ha permitido
obtener suficientes datos para analizar qué más se puede hacer en esta área.
Como resultado, proponemos una metodología que permite la generación automática del
contexto que rodea al personaje, así como la simulación de sus características psicológicas
como preferencias, estándares, personalidad, o admiración por otros agentes. Más aún, se
presentan novedosos algoritmos independientes de la implementación para la visualización
de emociones y humor.
Mediante experimentos y tests que miden el grado de percepción en los usuarios
asociamos un conjunto de configuraciones orientación de la cabeza/dirección de la mirada a
rasgos de personalidad, y validamos el método para generar expresiones de humor. También
evaluamos la fiabilidad de la generación de contexto usando escenas de películas, obteniendo
el mismo set de emociones y expresiones faciales que en dichas películas.
Finalmente, cabe destacar que este trabajo de investigación es un paso hacia adelante
en la creación de personajes más creíbles, ya que indica qué elementos deberían tomarse
en cuenta al momento de crear personajes virtuales que puedan ser usados en aplicaciones
de interacción persona-ordenador, cuentacuentos, o mundos virtuales destinados al
entretenimiento (videojuegos) o fines médicos (terapias con niños autistas).

Palabras clave: Personajes virtuales, Representación de contexto, Expresiones faciales,
Psicología.

Contents

1 Introduction 1
1.1 The nature of the problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Motivation - The Domain of Interest . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Aims of Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Research Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Significance and Potential Applications . . . . . . . . . . . . . . . . . . . . . 6
1.6 Thesis outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 Psychological Theories 9
2.1 Psychological Theories of Emotion . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.1 Categorical Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.2 Dimensional Models . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.3 Appraisal Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Psychological Theories of Personality . . . . . . . . . . . . . . . . . . . . . . 17
2.2.1 Eysenck Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.2 Five Factor Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.3 Circumplex Structures . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 Psychological Theories of Mood . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3.1 Ekman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3.2 Pleasure-Arousal-Dominance Space . . . . . . . . . . . . . . . . . . . 22
2.3.3 UWIST Mood Adjective Checklist . . . . . . . . . . . . . . . . . . . 23
2.3.4 Positive and Negative Affect . . . . . . . . . . . . . . . . . . . . . . 23
2.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24


3 Related Work in Affective Computing 25


3.1 Computational Models of Affect . . . . . . . . . . . . . . . . . . . . . . . . 26
3.1.1 Cathexis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.1.2 The Affective Reasoner . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1.3 Virtual Puppet Theater (VPT) . . . . . . . . . . . . . . . . . . . . . 28
3.1.4 Multi-layer personality model . . . . . . . . . . . . . . . . . . . . . . 29
3.1.5 Greta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.6 Generic Personality and Emotion Model . . . . . . . . . . . . . . . . 31
3.1.7 ALMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.1.8 FATIMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.1.9 WASABI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1.10 EMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1.11 Memory-based Emotion model . . . . . . . . . . . . . . . . . . . . . 36
3.1.12 OSSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.1.13 MARC system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.1.14 Comparison between models . . . . . . . . . . . . . . . . . . . . . . 39
3.2 Visual Perception of Affective Phenomena . . . . . . . . . . . . . . . . . . . 42
3.2.1 Visual Cues for Personality . . . . . . . . . . . . . . . . . . . . . . . 42
3.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4 Framework Overview 47
4.1 System Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.2 Semantic Layer: Context Representation . . . . . . . . . . . . . . . . . . . . 50
4.2.1 Context - Inner world of the character . . . . . . . . . . . . . . . . . 50
4.2.2 Context - Outer world of the character . . . . . . . . . . . . . . . . . 51
4.3 Affective Layer: Affective Model . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.1 Emotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.2 Mood . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.3 Personality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.4 Visualization Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

5 Context Representation 55
5.1 Context - An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55


5.1.1 Previous works on Context Representation . . . . . . . . . . . . . . 56


5.2 Semantic Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.3 Event ontology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.3.1 Action - (Fig. 5.3: A) . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.3.2 SpatialThing - (Fig. 5.3:B) . . . . . . . . . . . . . . . . . . . . . . . 62
5.3.3 AgentRole - (Fig. 5.3:C) . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.3.4 Temporal Entity - (Fig. 5.3:D) . . . . . . . . . . . . . . . . . . . . . 64
5.3.5 Contained Events - (Fig. 5.3:E) . . . . . . . . . . . . . . . . . . . . . 64
5.4 PersonalityEmotion ontology . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.4.1 EventRelation - (Fig. 5.4:A) . . . . . . . . . . . . . . . . . . . . . . 66
5.4.2 Goals - (Fig. 5.4:B) . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.4.3 PreferenceRelation - (Fig. 5.4:C,D) . . . . . . . . . . . . . . . . . . 66
5.4.4 AgentAdmiration - (Fig. 5.4:F,G) . . . . . . . . . . . . . . . . . . . 67
5.5 Emotion Elicitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.6 Guideline to use the ontologies . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.7 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.7.1 Ontology implementation . . . . . . . . . . . . . . . . . . . . . . . . 76
5.7.2 Interface implementation . . . . . . . . . . . . . . . . . . . . . . . . 82
5.8 Use Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.8.1 Movie Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

6 Affective Model for Mood Generation 89


6.1 Affective Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.1.1 PAD Space Pleasure, Arousal and Dominance . . . . . . . . . . . . 90
6.1.2 Reasons to use PAD in a Computational Model of Affect . . . . . . 91
6.2 Affective Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.2.1 Representation of Mood, Personality and Emotions . . . . . . . . . . 93
6.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

7 Visualization of Affect in Faces 101


7.1 Overview of the Visualization Module . . . . . . . . . . . . . . . . . . . . . 102
7.2 Expression Coding Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.2.1 MPEG-4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104


7.2.2 FACS - Facial Action Coding System . . . . . . . . . . . . . . . . . . 106


7.2.3 Reasons to use MPEG-4 and FACS . . . . . . . . . . . . . . . . . . . 107
7.3 Facial Animation Engines and Applications . . . . . . . . . . . . . . . . . . 108
7.3.1 Game Engine from the University of Augsburg . . . . . . . . . . . . 108
7.3.2 Xface Toolkit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.4 Visualization of Emotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.4.1 Universal (or Basic) Emotions . . . . . . . . . . . . . . . . . . . . . . 111
7.4.2 Intermediate Emotions . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.4.3 Generation of Intermediate Emotions . . . . . . . . . . . . . . . . . 114
7.5 Visualization of Mood . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.5.1 (1) Mapping of emotions into the PAD space . . . . . . . . . . . . . 121
7.5.2 (2) AUs analysis of Facial Expressions of Emotions . . . . . . . . . . 122
7.5.3 (3) AUs mapping into the PAD Space . . . . . . . . . . . . . . . . . 124
7.6 Visualization of Personality Traits . . . . . . . . . . . . . . . . . . . . . . . 136
7.6.1 Head Pose and Eye Gaze . . . . . . . . . . . . . . . . . . . . . . . . 136
7.6.2 Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
7.6.3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
7.6.4 Experimental Study . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.6.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.6.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
7.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

8 Evaluation 147
8.1 Objectives of Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
8.2 Experiment: Visualization of Emotions . . . . . . . . . . . . . . . . . . . . . 148
8.2.1 Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
8.2.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.2.3 Experimental Study . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.2.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
8.3 Experiment: Visualization of Moods . . . . . . . . . . . . . . . . . . . . . . 155
8.3.1 Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
8.3.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
8.3.3 Experimental Study . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
8.3.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158


8.3.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176


8.4 Experiment: Context Representation . . . . . . . . . . . . . . . . . . . . . . 178
8.4.1 Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
8.4.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
8.4.3 Experimental Study . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
8.4.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
8.4.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

9 Conclusions 187
9.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
9.3 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
9.4 Publications and contributions . . . . . . . . . . . . . . . . . . . . . . . . . 193
9.4.1 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
9.4.2 Proceedings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
9.4.3 Workshops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
9.4.4 Research placements . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

A Ontology Rules 211

B Mapping from FAPs to AUs 217


B.1 Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
B.2 Opposite AUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

C Emotion Values in the Activation-Evaluation Space 229

D Publications and contributions 233

Chapter 1

Introduction

If we knew what it was we were doing, it would not be called research, would it?
Albert Einstein.

Since the creation of the first virtual characters, a lot of research has been done to
provide them with realism, believability, and empathy.
Nowadays, thinking of virtual characters means thinking of virtual worlds and
videogames such as L.A. Noire, Call of Duty, or The Sims™, among many others. These
characters are endowed with great realism and believability thanks to advanced technologies in
Computer Graphics and Artificial Intelligence.
Nevertheless, these techniques might be computationally expensive and, therefore, not
very suitable for implementation in real-time interactive virtual worlds. From the AI point of
view, these characters should interact with the user or other inhabitants, and have affective reactions
to a series of events. From the physical point of view, they are human or animal
representations with gestures, voices, and facial expressions that show their affective states at
different instants of time.
These characters, or avatars (virtual representations of a person), are not only limited
to videogames or entertainment; they can be used in almost any interactive application, as
virtual presenters, educational tutors, instructors, or social network avatars.
Nevertheless, to engage an audience, the characters must be believable, especially when
it comes to their affective responses. In this regard, the field of Affective Computing has
made great advances in giving characters affective characteristics. Affective computing is a
term coined by Rosalind Picard [123] to denote computing that relates to, arises from, or
deliberately influences emotions. Affective computing includes implementing emotions, and
therefore helps to test different theories of emotions. It also includes giving a computer the
ability to recognize and express emotions.
Given that our goal is to develop a framework for the creation of believable characters,
capable of a wide range of facial expressions elicited as a consequence of their emotional
reactions to the events in a changing virtual world, Affective Computing is one of the key
fields in this research.
In this chapter, the nature of the problem, motivation, aims of research, and research
methodology are introduced. Also, the significance of the chosen approaches and possible
applications are identified. Finally, the thesis outline is summarized at the end of the
chapter.

1.1 The nature of the problem

One of the main problems to face in Affective Computing is the lack of consensus in the
answers to questions like "what are emotions?" or "what is personality?". As Picard stated,
there is open debate about these topics, and evidence is lacking on all sides of the debate.
Like her, we have based our work on relevant theories, and on how they have been used for
creating believable characters.
But what is believability? Paiva et al. [121] observed that believability is one of the most
debated properties of synthetic characters, and it has been the goal of researchers working
in this area for many years now. The term was introduced by Bates' team [11], relating to
characters that give the "illusion of life" and facilitate the user's suspension of disbelief.
Believability has been intensively explored in the literature, and it is still the Holy Grail of
the synthetic characters research area. Why are synthetic characters not believable? Is it too hard?
Moreover, the question "what makes a character behave in a believable way?" arises
from the appraisal of the situations and events that the characters experience. Therefore, a
precise and complete description of what surrounds the characters, and of how they perceive
it, is necessary to make them react accordingly and show the feelings they are experiencing.
Finally, whether the expression of affect in the character is believable can only be
measured empirically and subjectively with a significant sample of subjects. In this respect,
the obtained results will be valid as long as the formulated hypothesis proves to be true or
false, depending on the case to be evaluated.


1.2 Motivation - The Domain of Interest

The research presented in this thesis is motivated by the idea of having a platform where
one can create interactive virtual characters and situations automatically and straightforwardly.
To achieve this, integration is the key. Thus, the integration of semantics, affective
computing and computer graphics is the basis of a system that allows the representation
of what happens to the characters, how they feel, and how they express those feelings.
The first domain of interest is semantics, which helps us to define context. The main
reason for using context is stated by Kaiser and Wehrle [78] in the following paragraph:
"The current, concrete meaning of a facial expression can only be interpreted within the
whole temporal and situational context. In everyday interactions, we know the context
and we can use all information that is available to interpret the facial expression of another
person." Therefore, if we generate the context, we can generate accurate facial expressions
according to it.
The second domain of interest is driven by the context's affective approach, which
constitutes one of the novelties of our work. By defining the affective traits of the character
as part of the context, a more accurate affective state of the character, and thus more
accurate facial expressions, will be achieved.
The third domain of interest is focused on the character's facial expressions. Having a
character whose facial cues evoke the facial behavior of human beings can be of great help
to enrich the transmission of the context's affective message.
Therefore, we propose a three-layered model, where the first two layers, a semantic
layer and an affective layer, are the ones that deal with the context. The semantic layer
defines the context (at the character's internal and external level) and produces an affective
output that is interpreted by the affective layer. The affective layer provides the psychological
background to evaluate the emotions, mood and personality of the character, and
transforms them into a representation for facial expressions. The third layer deals with the
visualization of emotions and moods, which constitutes one of the novelties of the work.
So far, emotions have been the main affective trait shown through facial expressions.
Nevertheless, mood is also an important affective trait that can be manifested in a person's face.
Another affective trait that has been poorly researched when it comes to its facial
expression is personality. Personality, by definition, is stable; but as Linda Edelstein said,
"put a character in extraordinary circumstances, and certain traits come to the forefront
while others recede" [40]. Nevertheless, people tend to show the same traits when placed
in similar situations: a highly competitive man will likely show ambition in the office,
or when playing Monopoly with his family. The perception of personality based on observation
has long been a subject of research in behavioral psychology. It is only recently that
this research has focused on facial actions. For this reason, as part of the third domain of
interest and as a piece of novel research, we explore some facial cues to express personality.

1.3 Aims of Research


This research focuses on the development of a contextual and affective framework that
allows the creation of virtual characters, capable of expressing emotions, mood and per-
sonality through facial expressions.
The primary aims are to develop and implement:

1. A new approach for context representation in virtual worlds

   - A model that defines the concepts that are part of the context of the character
     (its outer (environment) and inner (psychological traits) world).
   - A methodology that allows the user to define and infer knowledge about the
     context, and to create new scenarios in a simpler way.
   - Psychology-based rules to produce emotional responses in the characters.

2. A model to appraise and elicit different affective traits in virtual characters

   - An affective model that uses psychological theories of affect to elicit new affective
     states in the character based on its felt emotions, personality and mood.
   - A mathematical representation of affective traits so that they are computationally
     tractable.

3. A visualization module for the novel representation of different affective traits
   through facial expressions

   - Generation of facial expressions of universal and intermediate emotions.
   - Generation of facial expressions of mood to visualize the output of the Affective
     Module.
   - An exploratory study of visual cues for personality traits to make characters
     more believable.


1.4 Research Methodology


The research methodology in this thesis is a combination of the analysis of previous works,
experimentation with theories and the implementation of new ideas to obtain a novel approach
for a Computational Affective Model. The fields on which we base the research are Semantic
Web and Ontologies, Psychology, Computer Graphics, and Artificial Intelligence.
The first step in our research is the study of psychological theories, computational models,
and frameworks for virtual characters, so that we know what has been done and can identify
the missing features for a more automatic generation of virtual characters.
This previous analysis leads to the formulation of the following research questions:

Context Representation

1. What is the impact of context in virtual worlds?


2. Which context factors need to be taken into account for virtual characters in a
virtual world?
3. How does context influence the emotional responses of virtual characters?
4. Which techniques have been used to simulate context in computational systems?

Affective Model

1. Which emotional theories should be addressed to represent emotions?


2. How can the mood of a character be represented?
3. How does personality affect moods and emotions? How do these three affective
traits interact with each other?

Visualization of Facial Expressions

1. How can we obtain facial expressions for universal and intermediate emotions?
2. Which facial cues should be considered when expressing mood and personality?
3. Do the physical characteristics of the face influence the recognition of an affective
trait?

By doing the corresponding research to answer the former questions we will be capable
of choosing the best techniques and methods to implement and validate the Computational
Affective Model proposed in this thesis.


1.5 Significance and Potential Applications


This research examines the relation between context, affect elicitation, and facial expres-
sions; all applied to virtual characters in virtual environments. The chosen approaches are
significant steps towards providing more believable characters, or agents, whose affective
behavior can be generated in a more automatic way. The approaches are the following:
(A) Simulating scenarios for interactive characters
By implementing a semantic representation of the context, we can translate daily sit-
uations into the computational model. Therefore, what surrounds the character can be
appraised and evaluated according to its internal configuration (psychological parameters)
and also according to a set of rules derived from the model. As a result, a set of emotional
responses will be elicited. In this way, a number of simulations for HCI applications can
be automatically generated, such as virtual students in virtual classrooms to train teachers,
or virtual agents that help to improve communication skills in children with autism or
Asperger syndrome.
(B) Fast storytelling with affective output visualization
Imagine a system that can tell the facial expression that a character will have based
on a set of events that are part of a story. With our computational affective model we
are able to represent those events, obtain the emotional responses to them, and visualize
the character's facial expressions. Therefore, story designers and character animators can
have a draft version of how the character should look in a certain situation and, moreover,
the corresponding facial parameters to control in order to achieve that facial expression.
(C) Visualizing mood and personality
Our study goes a step forward in the investigation of how mood and personality are
expressed through facial expressions. Until now, the study of emotions in the face has
reached a point where little is left to be done. But what the expression of a certain mood
or a certain personality trait looks like is still a field to be researched, one that can
contribute to creating more believable affective characters.

1.6 Thesis outline


The remainder of this thesis is organized as follows:
Chapter 2 reviews a number of psychological theories of emotions, mood and personality,
which we will use as a reference for our research.
Chapter 3 covers related work in Affective Computing, in particular computational models
of affect and other applications that deal with affective characters. The discussion focuses
on the techniques, theories, and results obtained by each previous computational model,
and how they serve as the basis for our own computational model.
Chapter 4 provides an overview of the framework for the computational affective module.
The discussion aims to give readers a general vision of the whole model and guide
them into the subsequent chapters of this thesis. In particular, the system architecture
is presented to briefly introduce the different modules to be developed.
Chapter 5 introduces the semantic model used to represent context. There we analyze
the motivation for using context and the previous works that have researched semantic
techniques. We also present our requirements and methods for context representation, and
how to use this model to create stories, define characters and their environment, and obtain
their emotional responses.
Chapter 6 explains the affective model used for the computation of the character's emotional
states from personality, mood and emotion values. It takes as input the character's
emotional responses generated by the semantic model, and produces a new mood using
the character's personality traits and previous mood. The chosen representation for these
affective traits is based on the psychological theories and affective models presented in
Chapters 2 and 3.
Chapter 7 describes the visualization module, which is used to generate facial expressions
for the affective state of the character. For the visualization of mood, which constitutes one
of the novelties of this work, an in-depth explanation is offered so that it can be replicated
by future research. Finally, our contribution to the research on facial expressions for
personality is described.
Chapter 8 reports the evaluation of the Computational Affective Model. It exposes the
obtained results, which validate not only the effectiveness of the computational model, but
also the correct visualization and perception of the elicited facial expressions.
Chapter 9 summarizes our work, provides an outlook on its potentials and implications,
analyses the limitations of the taken approaches and gives some directions for future work.

Chapter 2

Psychological Theories

Any emotion, if it's sincere, is involuntary.


Mark Twain

Psychology has been one of the base research fields in Affective Computing, because it
provides the affective models and theories to be used.
As our main goal is to create believable and affective virtual characters, in the following
we outline a selection of psychological theories focused on the representation of affective
components such as emotions, mood, and personality. This selection has been guided by the
importance and contribution of these works to the generation of virtual characters.

2.1 Psychological Theories of Emotion

The study of emotions is a challenging area, since emotions can be analyzed from different
perspectives. This has given rise to a number of theories and models that intend to explain
what emotions are, as well as how and why they are appraised and elicited. This section
attempts to classify and review some of the theories that have been used in the
computational field: Categorical models, Dimensional models, and Appraisal models.


2.1.1 Categorical Models


Categorical models claim the existence of historically evolved basic emotions, which are
universal and can be found in all cultures. In these models, or theories, emotions are
labeled and considered as families instead of individual emotions.

Darwin

Charles Darwin's work is highly relevant because he made major contributions to the study
of facial expressions in a way that had not been done before. In his book The Expression
of the Emotions in Man and Animals [33], he stated that facial expressions and involuntary
movements are based on three principles: serviceable associated habits (certain movements
are performed even when they are not necessary, e.g. scratching one's head when thinking
or when confused); antithesis (performing movements of a directly opposite nature when in
a directly opposite state of mind, e.g. moving the arms to wave away a person even if that
person is not close enough); and direct action of the nervous system (certain expressions
are influenced by physiological reactions).
By means of multiple observations in several countries, using as subjects infants, people
with dementia, Duchenne's studies, works of art, and people from different cultures and races,
he studied how people behave when experiencing different affective states: for example,
when suffering; in anxiety, pity or despair; when feeling happiness or devotion; and
so on. He observed, for instance, that laughter or a smile were expressions of the state "high
spirits" in a deaf and blind person, a normal person and "idiots" (a medical term of the time).
Darwin grouped emotions in categories according to shared characteristics and movements,
focusing primarily on the face. He grounded the idea that facial expressions of emotion
are universal while gestures are culture-specific conventions; such expressions are also seen
in other species, having evolved to serve particular functions (e.g. baring teeth in anger to
prepare for attack) and later becoming useful for communicating these emotions to others.

Ekman

Paul Ekman, inspired by Darwin's approach [46], studied the universality of emotional
expressions and developed a methodology to describe these expressions based on muscular
movements: the Facial Action Coding System (FACS) [48].
From his experiments, Ekman confirmed Darwin's theory of universality [45], claiming
that the same emotion might be elicited by different circumstances, but that its expression
could be found across cultures. He provided a set of characteristics, such as distinctive
universal signs, physiological responses, automatic appraisal, brief duration, and so on,
which distinguish emotions from other affective phenomena.
He proposed six basic emotions (anger, fear, joy, disgust, sadness and surprise) [42],
which were extended to fifteen in later works: amusement, anger, contempt, content-
ment, disgust, embarrassment, excitement, fear, guilt, pride in achievement, relief, sad-
ness/distress, satisfaction, sensory pleasure, and shame [44].

Plutchik

Robert Plutchik proposed a theory based on biological natural selection, distinguishing


eight basic prototype functional patterns of behavior, or primary emotions [125].
His model arranges emotions in a cone-structure, based on bipolarity and similarity. For
example, anger and fear are bipolar because anger leads to attack and fear to withdrawal.
Consequently, these two primary emotions lie on opposite sides of the emotion cone, as
shown in Figure 2.1 [12].

Figure 2.1: Plutchik's dimensions of emotions (Fig. 2.2 from [12])

Plutchik also accounts for emotions that are either combinations of two or three basic
emotions, or one basic emotion experienced at a greater or a milder intensity. He called such
a combination a dyad; e.g. joy and acceptance produce love. Although Plutchik was aware
that some combinations might never occur at all, he stated that his model covered all aspects
of emotional life. However, the model is questionable when trying to classify concepts such as
anticipation or surprise.
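
To illustrate how such a categorical scheme can be handled in code, the following minimal Python sketch stores dyads as a lookup table. Only the pairing joy + acceptance = love comes from the text above; the remaining entries are commonly cited Plutchik dyads and are included purely as an illustration, not as part of this thesis' model.

```python
from typing import Optional

# Plutchik-style dyads: two primary emotions combine into a secondary one.
# Only "joy + acceptance -> love" comes from the text; the other entries are
# commonly cited pairings, listed here purely for illustration.
DYADS = {
    frozenset({"joy", "acceptance"}): "love",
    frozenset({"anticipation", "joy"}): "optimism",
    frozenset({"fear", "surprise"}): "awe",
    frozenset({"sadness", "disgust"}): "remorse",
}

def combine(emotion_a: str, emotion_b: str) -> Optional[str]:
    """Return the dyad produced by two primary emotions, or None if undefined."""
    return DYADS.get(frozenset({emotion_a, emotion_b}))

print(combine("joy", "acceptance"))  # -> love
print(combine("anger", "surprise"))  # -> None (no dyad in this toy table)
```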


2.1.2 Dimensional Models


Dimensional models, or dimensional theories of emotions, assume the existence of two or
more major dimensions that are able to describe different emotions. The idea originated
from the observation that some emotions share characteristics that can be seen as different
degrees along these dimensions. Therefore, emotions do not need to be labeled and
categorized, which would otherwise constrain their study and measurement.

Whissell

Cynthia Whissell provided a list of emotional terms compiled in her Dictionary of Affect in
Language [153]. It includes approximately 4000 English words with affective connotations,
where each word is described along the dimension of Activation (or Arousal) and along the
dimension of Evaluation (or Pleasantness).
Whissell's work is used for measuring emotion and, despite its lower reliability in the
Activation dimension, it has proved to work better when applied to passages or lists,
because it allows the evaluation of the affective tone of the entire passage or list.
In practice, the Dictionary of Affect can be applied to both short-term and long-term
responses (mood description, personality description, reaction to immediate situations, and
analysis of texts or diaries). As words are rated along a two-dimensional space, Whissell
observed that the classification of words as emotional is related to their distance from the
origin.
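
As a small worked example of this idea (a sketch under our own assumptions: the coordinates below are invented and are not taken from the Dictionary of Affect in Language), a word's "emotionality" can be approximated by its Euclidean distance from the origin of the evaluation-activation plane:

```python
from math import hypot

# Hypothetical (evaluation, activation) coordinates; real values would come
# from Whissell's Dictionary of Affect in Language.
WORDS = {
    "serene": (1.2, -0.8),
    "furious": (-1.5, 1.4),
    "table": (0.1, 0.0),
}

def emotionality(word: str) -> float:
    """Distance from the origin of the evaluation-activation plane, used as a
    rough proxy for how emotional a word is."""
    evaluation, activation = WORDS[word]
    return hypot(evaluation, activation)

for w in WORDS:
    print(w, round(emotionality(w), 2))  # 'table' scores lowest, 'furious' highest
```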

Russell

Another theory based on a two-dimensional bipolar space is the one proposed by
Russell [135]. Building on previous experiments performed by colleagues, Russell also found
that there are three properties of the cognitive representation of affect: the pleasantness-
unpleasantness and arousal-sleep dimensions; the bipolarity of the dimensions that describe
affect; and the fact that any affect word can be defined as a combination of the pleasure and
arousal components.
As a result, a two-dimensional space was evaluated, with the horizontal dimension
corresponding to pleasure-displeasure and the vertical one corresponding to arousal-sleep. Russell
also observed the lack of need for a third or more dimensions, because extra dimensions
would only account for a tiny proportion of the variance and would be limited to subsamples
of emotion words. Figure 2.2 shows the eight affect concepts in a circular order.


Figure 2.2: Eight affect concepts [134]

In recent works, Russell [133] proposed a framework that weds bipolar dimensions to
discrete categories. He presented a set of concepts intended to re-arrange knowledge
from previous models in order to reach a consensus among them. The first set of concepts
comprises technical terms which define various emotion-related events: Core affect, Affective
quality, Attributed affect, Affect regulation and Object. The second set of concepts bridges the
gap between the technical terms and folk concepts, leading to a more familiar manner of
speaking: Mood, Empathy, Displeasure motive, Prototype, Emotional episode, Prototypical
emotional episode, Emotional meta-experience and Emotion regulation.

Cochrane

Thomas Cochrane [29] proposed an eight-dimensional model to map the conceptual space
of emotions as faithfully and efficiently as possible. We included this model due to its
potential to be used in computational applications given that it offers a useful tool for
researchers, regardless of the theory of emotions that they hold. It applies equally to what-
ever component of emotion (appraisals, emotion language, subjective feeling, physiological
changes, expressive behaviors, action tendencies or regulation strategies), integrating dif-
ferent approaches by capturing the meaning of the emotion at an abstract level.
The eight proposed dimensions are: Valence (attracted-repulsed), Personal Strength
(powerful-weak), Freedom (free-constrained), Probability (certain-uncertain), Intentional fo-
cus (generalized-focused), Temporal flow (future directed-current-past directed), Temporal
Duration (enduring-sudden), and Social connection (connected-disconnected).


After mapping emotional terms in his model, Cochrane showed that even terms related
to emotional subclasses can be differentiated, and that none of these differences would be
captured by the traditional valence or arousal dimensions.

2.1.3 Appraisal Models

According to the appraisal theory of emotions, emotional responses result from a
dynamic evaluation (appraisal) of needs, beliefs, goals, concerns, and environmental demands,
which might occur consciously or unconsciously. Therefore, this type of theory has become
one of the most active and attractive approaches in the domain of emotional psychology.

OCC Model

One of the most widely used models of emotions in the computational field is the one proposed
by Ortony, Clore and Collins [120], known as the OCC Model. This model is of a cognitive
nature, and it intends to explain people's perception of the world and how it causes them to
experience emotions.
For Ortony et al., emotions cannot be arranged in a low-dimensional space; rather, they
should be organized in groups. They found representative clusters identified by the eliciting
conditions under which emotions are triggered. Also, inside each emotion group, each
emotion type is seen as a family of closely related emotions.
The assumption of the model is that there are three major aspects of the world upon
which a person can focus: events, agents, or objects, which elicit different types of emotions.
When one focuses on events, it is because of their consequences; when one focuses on agents,
it is because of their actions; and when one focuses on objects, it is because of their aspects
or properties. The structure of the OCC model is shown in Figure 2.3, where individual
groups of emotion types are enclosed in boxes, with the group's name in the bottom part of the box.

Figure 2.3: OCC Model - Global structure of emotion types [120]
The intensity of the emotions in each group is given by a number of variables that depend
on the appraisal of the event, agents or objects. For instance, for the FORTUNES-OF-OTHERS
group, there are four variables that affect the intensity of its emotions: desirability-for-self,
desirability-for-other, deservingness, and liking.
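
To show how this focus-based branching can be carried into a computational model, here is a deliberately simplified sketch (our own illustration, not the OCC specification; the class, the field names and the single valence/intensity values are assumptions that collapse the model's full set of intensity variables):

```python
from dataclasses import dataclass

@dataclass
class Appraisal:
    focus: str        # "event", "agent" or "object"
    valence: float    # desirability / praiseworthiness / appealingness, in [-1, 1]
    intensity: float  # combined effect of the intensity variables, in [0, 1]

def emotion_family(a: Appraisal) -> str:
    """Coarse OCC-style mapping: consequences of events, actions of agents and
    aspects of objects elicit different emotion families."""
    if a.focus == "event":
        return "joy" if a.valence >= 0 else "distress"
    if a.focus == "agent":
        return "admiration" if a.valence >= 0 else "reproach"
    if a.focus == "object":
        return "liking" if a.valence >= 0 else "disliking"
    raise ValueError(f"unknown focus: {a.focus}")

print(emotion_family(Appraisal(focus="agent", valence=-0.7, intensity=0.6)))  # reproach
```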
Regarding the OCC model, Bartneck [9][10] reflected on its missing features (the extensive
amount of knowledge needed to categorize the affective response, a history function to keep
track of previous events, the extensive number of emotions to be represented) and its lack
of context handling when creating believable characters. Nonetheless, Ortony and colleagues
were aware of these issues, and they stated that the OCC model is a base model for defining
human emotions with a cognitive and individual approach. Therefore, the problems that
Bartneck describes are details that need to be handled separately or as an additional
component of the OCC model.

Scherer's Model

A model that does not deal with categories but with processes is the Component Process
Model (CPM), proposed by Klaus Scherer [140]. The CPM is a dimensional dynamic
model that defines emotions as adaptive reactions to events driven by processes of the
organism. It consists of five components corresponding to five distinct functions: (1)
Cognitive: evaluation of objects and events; (2) Peripheral efference: system regulation; (3)
Motivational: preparation and direction of action; (4) Motor expression: communication of
reaction and behavioral intention; and (5) Subjective feeling: monitoring of internal state
and organism-environment interaction.
The CPM also explains how emotional states can be differentiated as a result of a
sequence of specified stimulus evaluation (or appraisal) checks (SECs). SECs are based on
four appraisal objectives: (1) Relevance Detection: evaluates the stimulus according to
the event's probability of occurrence, level of pleasantness, and importance or relevance for
the organism's goals or needs; (2) Implication Assessment: evaluates the consequences
of the event for the self; (3) Coping Potential: determines the type of responses available
for an event, and their consequences; and (4) Normative Significance: evaluates how the
individual and society judge an action and the significance of an emotion-producing event.
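
As a rough illustration of such a sequential appraisal (a sketch under our own simplifying assumptions, not Scherer's formal specification; the Stimulus fields and thresholds are hypothetical), the four checks can be run one after another, each adding to the appraisal outcome:

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    # Hypothetical attributes standing in for the information each check inspects.
    probability: float     # how expected the event is, in [0, 1]
    pleasantness: float    # intrinsic (un)pleasantness, in [-1, 1]
    goal_relevance: float  # importance for the organism's goals, in [0, 1]
    coping: float          # perceived ability to deal with the event, in [0, 1]
    norm_compatible: bool  # compatibility with internal/external standards

def appraise(s: Stimulus) -> dict:
    """Run the four appraisal objectives in sequence, accumulating the outcome."""
    outcome = {}
    # (1) Relevance detection: pleasantness and goal relevance make the event matter.
    outcome["relevant"] = s.goal_relevance > 0.3 or abs(s.pleasantness) > 0.5
    # (2) Implication assessment: consequences of the event for the self.
    outcome["conducive"] = s.pleasantness >= 0.0 and s.probability >= 0.5
    # (3) Coping potential: can the organism deal with those consequences?
    outcome["in_control"] = s.coping >= 0.5
    # (4) Normative significance: does the event fit internal/external standards?
    outcome["norm_compatible"] = s.norm_compatible
    return outcome

print(appraise(Stimulus(0.8, -0.6, 0.9, 0.2, True)))
```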
The importance of Scherer's work is that it is a representation of human appraisal
that is not limited to objects, goals and other agents, as the OCC model is. The CPM
also gives the necessary information to visualize the emotional appraisal by providing a set
of cues (in the face, body, voice and internal systems) and parameters that can be manipulated
in a virtual character.

Frijda's Theory

Nico Frijda proposed an appraisal theory of emotions based on the term concern. According
to Frijda, a concern is what gives a particular event its emotional meaning. Emotions arise
in response to events that are important to the individual's goals, motives, or concerns [60].
For him, an emotion is defined by six characteristics that describe its function: (1)
Concern relevance detection, (2) Appraisal, (3) Control precedence, (4) Action readiness
changes, (5) Regulation, and (6) Social nature of the environment. On the other hand, the
emotion process can be described with three lines: the core process (which leads from the
stimulus event to the response), the regulation line (processes that intervene in the core
process) and the line of inputs other than the stimulus event. The outputs are the overt
response and physiological changes [59]. The process is depicted in Figure 2.4.
The advantage of Frijda's model is that it can be formalized in such a way that it forms
the basis of a computational model, as used in the architecture of a computer agent [109].


Figure 2.4: Frijda's emotion process. Adapted from Fig. 9.1 in [59].

2.2 Psychological Theories of Personality

Concerning personality, the existing theories differ greatly from one another. The state-of-the-art
theory is the Five Factor Model, or Big Five, which proposes five almost independent
dimensions that provide a very clear definition of personality. Nevertheless, it is not clear
(psychologists are still conducting studies) how these dimensions relate to each
other. Another theory based on the FFM factors is the AB5C model, proposed by Hofstee et al.
The advantage of this theory is that it allows the combination of two factors, obtaining all
the corresponding adjectives needed to define a character's mixed personality.


2.2.1 Eysenck Model


Hans Eysenck proposed one of the earliest personality theories. He called it temperament,
and he based it primarily on physiology and genetics, firmly believing that the most
fundamental personality traits are inherited. On the other hand, his theory also supported
the idea that environment determines behavior [21].
Eysenck's original research found two main dimensions of temperament: Neuroticism
(N) and Extraversion-Introversion (E). However, after factorial and other empirical studies,
a third dimension emerged, named Psychoticism (P), which was conceived as a
set of correlated behavior variables indicative of a predisposition to psychotic breakdown [52].
Neuroticism, or Emotionality, is a dimension that ranges from normal, fairly calm
people to people who tend to be quite nervous. It is characterized by high levels of negative
affect, such as depression and anxiety, originating in the sympathetic nervous system.
Extraversion-Introversion is a dimension found in everyone, produced by the balance of
inhibition and excitation in the brain. Extraversion is characterized by being outgoing,
talkative, high on positive affect (feeling good), and in need of external stimulation.
The Eysenck Personality Questionnaire (EPQ) [51] is a questionnaire designed to assess
the personality traits of a person, and it is still used by psychologists nowadays.

2.2.2 Five Factor Model


McCrae and Costa [97] proposed a hierarchical organization of personality traits in terms
of five basic dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and
Openness to Experience. The Five Factor Model (FFM), or Big Five, shares two traits
with Eysenck's model (Extraversion and Neuroticism). Table 2.1 presents a description
of each trait.
Factor              Description                                  Adjectives
Extraversion        Preference for social situations             talkative, energetic, sociable
Agreeableness       Interaction with others                      trustworthy, friendly, cooperative
Conscientiousness   Organized, persistent in achieving goals     methodic, organized, efficient
Neuroticism         Tendency for negative thoughts               insecure, emotionally unstable
Openness            Open, interest in culture                    imaginative, creative, explorer

Table 2.1: Five Factor Model traits [85]
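
To make the five-dimensional representation concrete, the sketch below stores a personality as five scores and summarizes it with the adjectives of the dominant trait from Table 2.1. The example scores and the [0, 1] scale are assumptions made only for this illustration.

```python
from dataclasses import dataclass
from typing import List

# Adjectives per factor, taken from Table 2.1.
ADJECTIVES = {
    "extraversion": ["talkative", "energetic", "sociable"],
    "agreeableness": ["trustworthy", "friendly", "cooperative"],
    "conscientiousness": ["methodic", "organized", "efficient"],
    "neuroticism": ["insecure", "emotionally unstable"],
    "openness": ["imaginative", "creative", "explorer"],
}

@dataclass
class BigFive:
    extraversion: float
    agreeableness: float
    conscientiousness: float
    neuroticism: float
    openness: float

    def dominant_adjectives(self) -> List[str]:
        """Adjectives of the highest-scoring trait, as a rough verbal summary."""
        scores = vars(self)
        dominant = max(scores, key=scores.get)
        return ADJECTIVES[dominant]

# A hypothetical outgoing, fairly stable character.
print(BigFive(0.85, 0.7, 0.35, 0.2, 0.6).dominant_adjectives())
```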

The traits were found in self-reports and ratings, in natural languages and in theoretically
based questionnaires, in children, college students, and older adults, in men and women,
and in English, Dutch, German, and Japanese samples. All five factors were shown to
have convergent and discriminant validity across instruments and observers, and to endure
across decades in adults [97]. This provides the model with two advantages: universality and
applicability.
Regarding universality, the FFM is strongly rooted in biology, and it has been found
that each of the five factors is heritable [98]. Regarding applicability, the FFM can be
used in different branches of psychology: industrial, organizational, clinical, educational,
forensic, and health psychology. Another advantage of the model is that any personality
type can be represented through the combination of the five traits, because they are found
to be independent from each other.
Although the FFM is the most widely used personality model to date, some psychologists
criticize its methodology and the number of traits. Some say that five factors are too
many, although studies demonstrate that five factors are just right. Others say that
five factors are insufficient to summarize all that we know about individual differences
in personality. In this respect, the authors reply that the five factors merely represent the
highest hierarchical level of trait description.

2.2.3 Circumplex Structures

The motivation for circumplex models is that they provide much more opportunity for
identifying clusters of traits that are semantically cohesive.

Wiggins Model

Wiggins et al. [155] reoriented the Interpersonal Circumplex, or IPC, which defines a broad
set of interpersonal traits that are directly related to affective and cognitive behavior.
The IPC originally had sixteen dimensions, which were reduced to eight, where each octant
is a combination of the dimensions Dominance/Passiveness and Affect/Hostility. Octants
that are adjacent to each other share attributes, and those that are opposed are inversely
related. Figure 2.5 shows the circumplex.
Traditionally, this circular structure has been used to define interpersonal relations
and to explain users' trustworthiness in collaborative virtual environments and
telemedicine [23]. The advantage of this model is that it allows a fine-grained definition
of personality traits.


Figure 2.5: Wiggins - Two factors: Dominance/Passiveness and Affect/Hostility (Fig. 2 from [154])

AB5C Model

The Abridged Big Five Dimensional Circumplex (AB5C) [74] taxonomy of personality
traits consists of 10 circumplexes obtained by the pairwise combination of FFM
traits. Hofstee et al. found that by blending the FFM traits in pairs, a much tighter
conceptual structure, one that seems to work in practice, was achieved. On the other hand,
the model is less restrictive than simple-structure models and two-dimensional circumplex
models, such as the Wiggins model [155]. Figure 2.6 shows one of the ten circumplexes,
which combines Extraversion or Surgency (Factor I) with Emotional Stability (Factor IV).

Virtue and Dynamism Dimensions

De Raad and Barelds [128] used two factors, Virtue and Dynamism, to organize the Big
Five variables in a circumplex model. The advantage of using this model is that the
positions of the trait-variables relative to each other become clear. This organization is
shown in Figure 2.7, where variables can be read focusing on two sets of opposite clusters.


Figure 2.6: AB5C - Extraversion (I) and Emotional Stability (IV) (Fig. 1. from [74])

Figure 2.7: Circumplex - Two-factors: Virtue and Dynamism (Fig. 8.4. from [127])

2.3 Psychological Theories of Mood


Mood represents the overall view of the internal state of an individual. The difference
between mood and emotion depends on three criteria: temporality, expression and cause.
Moods last longer than emotions, and they might not be associated with a specific expression
or cause. The main functional difference is that emotions modulate actions, while moods
modulate cognition [148].

2.3.1 Ekman

Ekman [43] distinguished mood from emotions in terms of their time course (moods last
for hours or days) and of what should be found in the neural circuitry that directs and
maintains each of these affective states.
Most of the time, laypeople use the same word to refer to a mood or to an emotion.
For instance, the word irritable may denote either low-intensity anger or a long-lasting state.
Another criterion is that moods seem to lower the threshold for arousing emotions, as if
the person were seeking an opportunity to indulge the emotion relevant to the mood.
Ekman also stated that moods do not have their own unique facial expressions, while
emotions do. Another characteristic of mood is that people usually cannot recall what
situation brought them to a certain mood, while they can do so with emotions. Internal
chemical changes can also alter mood, caused for instance by lack of sleep or food.

2.3.2 Pleasure-Arousal-Dominance Space

Albert Mehrabian proposed a framework for the definition and measurement of different emotional states, emotional traits, and personality traits in terms of three nearly orthogonal dimensions: Pleasure, Arousal, and Dominance, which define the PAD Space.
There are two PAD scales: one for the definition of emotional states, or emotions (the PAD Emotion Model), and the other for the definition of temperament (the PAD Temperament Model). Both the PAD Temperament Model and the PAD Emotion Model allow us to predict the correlation between any two traits (temperament) or states (emotions) for which PAD components have been experimentally identified. In this way, an agent can be infused with personality characteristics, or emotions, that appear to have a life-like quality. For
example, an agent that is configured to be neurotic would thus manifest related character-
istics (e.g. anxiety, proneness to binge eating, depression, or even panic disorder). On the


other hand, based on the correlation among traits, this neurotic agent would not be likely
to exhibit extroverted or nurturing traits [103].
During the past few years this model has been used in computational models for the representation of mood in virtual characters [4], [63], [80]. The reason for using a temperament model as a mood model is that, in the PAD space, different affective values are produced depending on the values of pleasure, arousal and dominance, which in turn change over time. Since the combination of these three dimensions produces eight different octants, these octants can be treated as moods.
In [105], Emotional States refer to transitory conditions of the organism (e.g. feeling
alert vs. tired, happy vs. unhappy), which can be seen as emotions and/or moods. Temperament refers to an individual's stable emotional characteristics (i.e. emotional traits
or generalized emotional predispositions). More precisely, temperament is an average of
the states of pleasure, arousal, and dominance across representative life situations.

2.3.3 UWIST Mood Adjective Checklist

The UWIST Mood Adjective Checklist (UMACL) is a tool for measuring mood. Matthews
et al. [96] defined mood as an emotion-like experience lasting for at least several minutes.
Some of the previous mood models they studied to obtain their final scale were the one proposed by Mehrabian and Russell [106], which used three bipolar factors: pleasure-displeasure, arousal, and dominance-submissiveness; Zevon and Tellegen's [157] two-factor model: positive affect and negative affect; Thayer's [145], which also obtained these two factors but labeled them energetic arousal and tense arousal; and Mackay et al.'s [90], which identified bipolar dimensions related to hedonic tone, or feelings of pleasantness-unpleasantness, and arousal.
In the end, Matthews et al. proposed a three-dimensional model of mood: energetic arousal, tense arousal, and hedonic tone. It is of great importance in clinical research because of its apparent ability to discriminate between depressed (low hedonic tone) and anxious (high tense arousal) mood states.

2.3.4 Positive and Negative Affect

David Watson [150] proposed an alternative mood model to the Pleasantness-Unpleasantness/Activation one. It focuses on the general dimensions of Negative and Positive Affect. The Negative Affect dimension represents different types of negative mood,
such as feelings of nervousness, sadness, irritation, and guilt. The Positive Affect dimension reflects the experience of some type of positive mood, such as feelings of joy, energy, enthusiasm, and alertness.
The model classifies positive and negative moods in four basic types: high positive/low
negative (e.g. feeling happy), high positive/high negative (e.g. mixture of fear and ex-
citement in a roller coaster), low positive/high negative (e.g. feeling depressed), and low
positive/low negative (e.g. disengaged state while watching television).
Watson and Clark [152] developed their own mood inventory named PANAS-X, which
is an extension of the original PANAS (Positive and Negative Affect Schedule). It consists
of 11 scales that assess specific types of affect: 4 basic negative affects (fear, sadness, guilt
and hostility), 3 basic positive affects (joviality, self-assurance and attentiveness), and 4
other affective states (shyness, fatigue, serenity and surprise).

2.4 Summary
In this chapter we have reviewed some psychological theories of emotions, mood and personality that are relevant to the field of Affective Computing. From Darwin, who studied the universality of facial expressions, to Thomas Cochrane, who proposed a novel theory for the implementation of appraisal, several psychologists have come up with different ways to study emotions. Categorical, dimensional, and appraisal models of emotions are the three types we have reviewed, with the OCC model being the most used in computational models to date. Regarding personality, the Five Factor Model, or Big Five, is still the state-of-the-art personality model. One reason is its replicability across different studies. Another reason is that its five dimensions allow the description of any type of personality. Finally, the study of mood is becoming more and more important in the Affective Computing field. Therefore, efforts are being directed to its representation in computational models, going from bi-dimensional representations (good mood and bad mood) to the eight-mood PAD Space.

Chapter 3

Related Work in Affective Computing

Dr. Walter Gibbs: Ha, ha. You've got to expect some static. After all, computers are just
machines; they can't think.
Alan Bradley: Some programs will be thinking soon.
Dr. Walter Gibbs: Won't that be grand? Computers and the programs will start thinking
and the people will stop.
TRON (1982).

Thanks to the efforts in the fields of Affective Computing, Artificial Intelligence, Computer
Graphics, and Cognitive Sciences, the creation of virtual characters has been improved and
enriched through the years.
Some researchers have proposed computational models based on psychological theories to elicit different affective traits and behaviors in their characters. Others have focused on studying which behaviors are perceived as manifestations of different affective traits. While the former aimed for a character that feels and reacts according to those feelings, the latter aimed for cues that make a character look as if it were feeling.
In the following we review previous works grouped into those that propose computational models of affect for Embodied Conversational Agents (ECAs), and those that study the perception of affect in the face and head. Finally, a summary of the chapter is provided.


3.1 Computational Models of Affect


According to Danny Hillis, vice president of Disney Imagineering, in 1997 there were four "holy grail" items concerning entertainment agents: a computable science of emotion, virtual actors, agent evolution, and computable storytelling [50].
To achieve these goals, many researchers have proposed and implemented computational models for the generation of affect, as can be seen in the state-of-the-art reports in [148] and [79]. In the next subsections we outline the objectives, main contributions and visualization methods of the most cited and relevant works in the Affective Computing field.

3.1.1 Cathexis
Juan Velasquez [147] presented Cathexis, one of the first distributed computational models that represented the dynamic nature of emotions, moods and temperaments, as well as their influence on the behavior of synthetic autonomous agents. The architecture of the model presented two components: the Emotion Generation System and the Behavior System.
The emotion generation system combined appraisal theories with other emotional theories based on physical reactions. The implementation was based on proto-specialist agents provided with sensors to monitor internal and external stimuli, allowing the elicitation of families of emotions (e.g. Fear, Fright, Terror, etc.). Emotions could be basic or blended/mixed (e.g. Grief, a combination of sadness and anger).
Moods were defined from a psychobiological perspective as levels of arousal that influence the activation of emotions. Temperaments were different threshold values that controlled the intensity and arousal of emotions. To compute the intensity of an emotion, Velasquez took into account its previous level of arousal, the contributions of each emotion elicitor for that emotion, and the interaction with other emotions.
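The intensity computation just described can be illustrated with the following small Python sketch. It is not Velasquez's actual formula; the decay factor, elicitor contributions and interaction weights are hypothetical values chosen only to show how the three ingredients (previous arousal, elicitor contributions, influence of other emotions) could be combined.

# Illustrative sketch of a Cathexis-style intensity update (hypothetical numbers).
# new intensity = decayed previous arousal + elicitor contributions
#                 + excitatory/inhibitory influence from other emotions.

def update_intensity(previous, elicitors, interactions, decay=0.8):
    """previous: last intensity of this emotion.
    elicitors: contributions of each emotion elicitor (neural, motivational, ...).
    interactions: signed influences from other active emotions."""
    raw = decay * previous + sum(elicitors) + sum(interactions)
    return max(0.0, min(1.0, raw))   # clamp to [0, 1]

# Example: fear with a small previous arousal, two elicitors, inhibited by happiness.
fear = update_intensity(previous=0.2, elicitors=[0.3, 0.1], interactions=[-0.15])
print(f"fear intensity = {fear:.2f}")   # 0.41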
The behavior system decided which behavior to display given the agent's emotional state. Each behavior contained two major components: one for the generation of prototypical facial expressions, body postures and vocal expressions; and the other for the identification of motivations for behaviors and action tendencies (e.g. fighting, insulting, biting, etc.).
The system was implemented in an object-oriented framework. The ECA was Simon, a synthetic agent representing a baby. Users interacted with Simon through an interface, providing external stimuli that caused him to react emotionally. Our model is similar to Cathexis in the sense that we also take internal and external stimuli into consideration. The difference is that our external stimuli are provided by events happening inside the

virtual world, and we do not consider physiological elements to elicit or manifest affective
phenomena. Figure 3.1 shows the facial expressions of Simon.

Figure 3.1: Simon's facial expressions (Fig. 3. from [147])

3.1.2 The Affective Reasoner


The Affective Reasoner (AR) [50] was a platform where agents were able to reason about events and other emotional episodes in other agents' lives, reacting with emotions and emotion-induced actions.
In the AR, situations were evaluated by the agents according to their individual concerns and affective state, producing different interpretations. Each interpretation elicited emotion types and some variable bindings, according to the emotion eliciting condition (EEC) theory proposed by Ortony et al. [120], which was represented in a separate database as a set of high-level emotion rules.
An agent's temperament was defined with respect to the agent's idiosyncrasy and response to a situation. Once emotions arose, temperament regulated how they were going to be manifested (from somatic responses like turning red to highly intentional responses) through processing modules that chose compatible action responses (expressions) or took into account the state of the world. Emotion intensities depended on variables such as the degree of importance to the agent, the surprisingness of the action, temporal proximity, and so forth. Finally, moods changed the thresholds for the interpretation of situations and altered the activation of expression channels.
The AR was found to be useful for the generation of stories with emotional content. It was also the first model to use all the OCC emotions, allowing agents to reason about one another's concerns. The virtual actors were talking heads (either computer or human)


expressing facial emotional content with speech and, in some cases, music [49]. The main difference with our model is that we do not represent physiological responses. Elliott modeled mood as the factor that changed thresholds for emotion activation, while we use the PAD model [105] to represent it. Finally, memory was a factor that Elliott took into consideration using databases; we rely on ontologies to reuse existing knowledge, but so far we do not deal with memory.

3.1.3 Virtual Puppet Theater (VPT)

Andre et al. [3] presented a model that integrated personality and emotions to create interactive virtual characters. In the Puppet project, children were intended to gain a basic understanding of how emotional states change, as well as to comprehend how physical and verbal actions in social interaction can induce emotions in others.

The architecture considered deliberative planning (goals) and reactive plans (intentions), built on a BDI framework. They considered a knowledge base (a database that contains the world model), a plan library (a collection of plans to be used by the agent to achieve its goals), an intention structure (an internal model of the current goals or desires, and instantiated plans or intentions), and an interpreter (which resolves conflicts, selects a plan and executes it). Events could be elicited from the virtual environment or from user input. They also introduced body states (hunger, fatigue, boredom).

The modeled emotions were anger, fear, happiness and sadness. These could be elicited through OCC rules, or by the child interacting with the system. Regarding personality, they considered two traits from the FFM: extraversion and agreeableness. Interaction could be performed in three ways: the child controlled one avatar and interacted with others, the child observed the interaction of the avatars, or the child acted as the director of the theater, controlling the behavior of all characters.

Visualization was done through 2D cartoon-like characters that form part of a farm: a farmer and animals. We chose to describe the Puppet project because, as it did, we consider a model of the world, also called a knowledge base, which in our case is modeled through ontologies. The main difference is the planning behavior, which we do not consider because we are interested in visualizing affect through facial expressions.


3.1.4 Multi-layer personality model


Kshirsagar and Magnenat-Thalmann [85] proposed a multi-layer personality model for the creation of affective characters. Instead of focusing on event appraisal, they enabled a complete design of personality that caused deliberative reactions that changed the mood, which in turn affected (and was affected by) momentary emotions.
For personality, they combined all the dimensions of the Five Factor Model (FFM) [99].
Regarding emotions, they used the categories proposed in the OCC model [120], but not
its cognitive processing. For visualization, they re-categorized the 22 OCC emotions, plus
Surprise and Disgust, into 6 expression groups corresponding to the six basic expressions
proposed by Ekman. Mood was the layer that linked personality with emotions, and it
could be good, bad or neutral.
To implement the architecture and to model the uncertainty of human behavior, Kshir-
sagar and Magnenat-Thalmann used Bayesian Belief Networks (BBN). They created one
BBN for each basic factor of personality, defining prior and transition probabilities for mood
change. Then, this probability of mood change was computed for each elicited emotion.
To implement the influence of mood on emotions, they defined matrices with probabilities
of transitions between emotional states.
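As a rough illustration of this probabilistic layer (the actual networks and probability tables in [85] are not reproduced here), the following Python sketch conditions a mood-transition table on the elicited emotion and samples the next mood; all probability values are hypothetical.

import random

# Hypothetical mood-transition probabilities conditioned on the elicited emotion:
# P(next mood | current mood, emotion). Values are illustrative only.
TRANSITIONS = {
    ("neutral", "joy"):     {"good": 0.7, "neutral": 0.25, "bad": 0.05},
    ("neutral", "sadness"): {"good": 0.05, "neutral": 0.35, "bad": 0.6},
    ("bad", "joy"):         {"good": 0.3, "neutral": 0.5, "bad": 0.2},
}

def next_mood(current, emotion):
    """Sample the next mood from the transition row for (current mood, emotion)."""
    row = TRANSITIONS.get((current, emotion), {current: 1.0})  # default: stay
    moods, probs = zip(*row.items())
    return random.choices(moods, weights=probs, k=1)[0]

random.seed(0)
print(next_mood("neutral", "joy"))  # most likely "good"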
The output of the model was facial expressions synchronized with speech. The main similarity of this model with our work is the consideration of mood as an intermediate layer between emotions and personality. Figure 3.2 shows a character with the resulting moods.

Figure 3.2: Facial Animation (Fig. 6. from [85])


3.1.5 Greta

Rosis et al. [35] designed and prototyped Greta, a 3D Embodied Conversational Agent (ECA) provided with a Mind and a Body to enhance in the user the impression of communicating with a specific person.
The mind of Greta was designed based on a BDI (Belief-Desire-Intention) model [130] considering temperament and personality, social context, the dynamics of the agent's state, response decay, and multiple emotions. This means that her mind has a representation of the beliefs and goals that elicit emotions and of the decision to display them or not. They also combined emotions and considered intensity changes over time, and how each emotion prevails according to the agent's personality and the social context of the conversation.
The body of Greta used a repertoire of signals to be employed during communication, such as facial expressions, head movements or gaze direction. In recent versions of Greta [93], the agent also produces gestures (arm and hand movements) and upper body movements.
To implement Greta's mind, they used Dynamic Belief Networks (DBNs). To keep Greta's mind independent of her body, they defined a markup language (Affective Presentation Markup Language - APML [34]) to associate semantics with the natural language utterances.
One of the advantages of this system was its multimodality and domain independence. By not using emotional and personality models they built a fine-grained cognitive structure, in which the appraisal of events was represented in terms of the agent's system of beliefs and goals. The problem arose with the use of DBNs, because the number of nodes increases considerably with the number of modeled emotions. The difference with our model lies in the implementation of the mind of our system, which is done through ontologies, and in the interaction between affective phenomena, which in our case is done using the PAD Space. Figure 3.3 shows the first attempts at creating facial expressions with Greta.

Figure 3.3: (A) Greta's Fear and (B) Greta's Joy (Fig. 36. and 37. from [122])


3.1.6 Generic Personality and Emotion Model


Egges et al. [41] described a personality, emotion and mood simulation model based on appraisal theories.
The model of appraisal they used was the OCC [120]. However, as it did not consider personality traits, they included them as selection criteria to indicate which and how many goals, structures and attitudes fit with the personality. For instance, Conscientiousness influenced how soon goals are abandoned and new goals are adopted [41].
Personality was represented through a vector with the intensities of each trait. It is worth noting that any personality model could be simulated. Emotions were considered as emotional states that changed over time, represented through vectors with the intensities of the 22 emotions of the OCC model. They also kept an emotional state history that recorded the emotional states over time. Finally, mood was represented as a bi-dimensional or n-dimensional vector. For the interrelation of these affective elements, they first defined matrices with the influence values of one element on another (Personality-Emotion, Emotion-Mood, Personality-Mood, Mood-Emotion). Then, they defined functions that used the values from these matrices to compute the changes in mood and emotions.
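To make the vector-and-matrix formulation more concrete, the following Python sketch updates a mood vector from a personality vector and an emotion vector through two influence matrices. The dimensions follow the description above, but all numeric values are invented for illustration and do not reproduce the calibrated matrices of [41].

# Sketch of an influence-matrix update in the spirit of Egges et al. [41].
# All numbers are illustrative only.

def matvec(matrix, vector):
    """Multiply a (rows x cols) matrix, stored as nested lists, by a vector."""
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

personality = [0.8, 0.2]            # e.g. (extraversion, neuroticism) in [0, 1]
emotions    = [0.6, 0.0, 0.1]       # e.g. (joy, fear, distress) intensities

# Influence of personality and of emotions on a 2-D mood (valence, arousal).
P_TO_MOOD = [[0.3, -0.4],
             [0.1,  0.5]]
E_TO_MOOD = [[0.5, -0.3, -0.4],
             [0.2,  0.6,  0.3]]

def update_mood(mood, dt=1.0):
    """New mood = old mood + personality bias + emotion push (then clamped)."""
    bias = matvec(P_TO_MOOD, personality)
    push = matvec(E_TO_MOOD, emotions)
    return [max(-1.0, min(1.0, m + dt * (b + p))) for m, b, p in zip(mood, bias, push)]

print(update_mood([0.0, 0.0]))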
As in the Multi-layer personality model (Section 3.1.4), the visualization was done through facial expressions and dialogs in virtual characters. The importance of this model is the addition of personality as a key element in the appraisal process, which we have followed in this thesis. Figure 3.4 shows the visual output of the model.

Figure 3.4: Facial Expressions for Anger, Surprise and Sadness (Fig. 7. from [41])


3.1.7 ALMA

Gebhard [63] proposed ALMA (A Layered Model of Affect) to simulate the interaction of
emotions, mood, and personality. The model used OCC emotions and the FFM personality
model. These elements were mapped into the PAD Space [104] to represent moods.
Elicited emotions were represented in the PAD space as vectors and were condensed into a center of mass, influencing the mood and making it jump from one octant of the PAD model to another. Personality was defined as a default mood using a set of equations provided by Mehrabian [105]. It influenced emotions by giving an initial offset to those related to the activated personality traits. Finally, mood updated personality, using the inverse of the equations employed to compute the default mood.
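A compact Python sketch of the two mappings mentioned above follows. The FFM-to-PAD coefficients are the ones commonly quoted from Mehrabian [105] and used in ALMA-like systems, but they are given here from memory of that literature and should be checked against the original source; the emotion PAD coordinates and the pull strength are purely illustrative.

# Sketch of ALMA-style mood handling. FFM traits are in [-1, 1].
# The regression weights below are those commonly quoted from Mehrabian [105];
# verify them against the original before reusing.

def default_mood(O, C, E, A, N):
    """Map Big Five scores to a default (P, A, D) mood point."""
    pleasure  = 0.21 * E + 0.59 * A + 0.19 * N
    arousal   = 0.15 * O + 0.30 * A - 0.57 * N
    dominance = 0.25 * O + 0.17 * C + 0.60 * E - 0.32 * A
    return (pleasure, arousal, dominance)

def pull_towards(mood, emotion_points, strength=0.25):
    """Move the current mood towards the center of mass of the active
    emotions' PAD points (a simplified version of ALMA's pull phase)."""
    if not emotion_points:
        return mood
    center = [sum(c) / len(emotion_points) for c in zip(*emotion_points)]
    return tuple(m + strength * (c - m) for m, c in zip(mood, center))

mood = default_mood(O=0.4, C=0.8, E=0.6, A=0.3, N=-0.4)
# Hypothetical PAD coordinates for two elicited emotions (e.g. joy, hope).
mood = pull_towards(mood, [(0.4, 0.2, 0.1), (0.2, 0.2, -0.1)])
print(tuple(round(m, 2) for m in mood))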
ALMA was implemented using AffectML, an XML-based affect modeling language. A character's personality profile consisted of the personality definition and subjective appraisal rules. At runtime, the affect computation periodically updated the affective profile of all characters, appraised relevant input for all characters, and output a set of emotion eliciting conditions. These were used to update a character's emotions and mood.
Regarding visualization, facial expressions, or visual cues such as the blush of shame, showed the experienced emotion. Idle behaviors represented the mood. Nevertheless, ALMA has been mainly used for dialog generation in virtual characters [64]. Our work is based on this model because it provides the framework to interrelate personality, mood and emotions in a novel and practical way. Figure 3.5 shows the PAD space where affective traits are mapped and the resulting behavior in a character.

Figure 3.5: (A) Character Behavior ([63]). (B) Shame representation over time ([64])

3.1.8 FATIMA

Inspired by the work of traditional character animators, Dias and Paiva [38] proposed an
architectural model to build autonomous characters whose behavior was influenced by their
emotional state and personality.
The architecture presented two layers: the reactive layer (hardwired reactions to emotions and events that must be rapidly triggered and performed after the appraisal process), and the deliberative layer for the agent's planful behavior. FATIMA generated the characters' behavior based on appraisal and coping processes. Appraisal focused on the goals of the character, triggering the emotions to be taken into account when preparing a plan. Coping depended on the emotional state and personality of the character. They considered two types of coping: problem-focused (a set of actions to achieve and execute a result) and emotion-focused (changing the agent's interpretation of the circumstances (importance of goals, effect probabilities), thus lowering strong negative emotions).

They used the OCC model for emotions, and also represented arousal (the degree of excitement of the character) and mood. Mood, represented as good or bad, was an overall valence of the character's emotional state and influenced the intensity of emotions. Personality was defined by a set of goals, a set of emotional reaction rules, the character's action tendencies, emotional thresholds, and decay rates for each emotion type.

FATIMA provided synthetic characters that were believable and empathic. They had cognitive capabilities, interacted with external users and were domain independent. This model was mainly goal-based, letting motivations and standards define the character's personality. Our model is a simplified version of a goal-based model: it takes goals into account to generate emotions, but also considers preferences and admiration for other agents to elicit other OCC emotions.

FATIMA was implemented in the computer application FearNot! [37], which was de-
veloped to tackle and eventually help to reduce bullying problems in schools. Figure 3.6
shows a screenshot of FearNot!.


Figure 3.6: (A) Characters in FearNot! and (B) Facial Expressions (Fig. 3 and 4 [121])

3.1.9 WASABI
Becker-Asano [12] presented WASABI (Affect Simulation for Agents with Believable Interactivity), a computational simulation of affect for embodied agents. The architecture of WASABI distinguished two layers, a physis layer and a cognition layer, and three affective states: mood, primary emotions, and secondary emotions.
In the physis layer, primary emotions were produced from the non-conscious appraisal of the stimuli. The set of primary emotions was anger, annoyance, boredom, concentration, depression, fear, happiness, sadness, and surprise.
In the cognition layer the agent used its BDI-based cognitive reasoning abilities to update its memory and generate expectations. From the conscious appraisal, secondary emotions were produced. They were first filtered in the PAD space, influencing the embodiment of the agent and the expression of these emotions. Secondary emotions corresponded to the prospect-based OCC emotion cluster: hope, fears-confirmed, and relief.
Mood was the background state that influenced the elicitation of emotions. In contrast to ALMA (Section 3.1.7), mood was not derived from the PAD space, but modeled as an agent's overall feeling of well-being on a bipolar scale of positive vs. negative valence.
The ECA on which WASABI was tested was Max. He was employed in a museum application, where he conducted multimodal small-talk conversations with visitors. Furthermore, WASABI has also been applied to a gaming scenario, in which secondary emotions were simulated in addition to primary ones [13]. The main difference between WASABI and our model is the differentiation between primary and secondary emotions at the appraisal level. We make this distinction when visualizing the facial expressions of our characters, and moreover, our secondary emotions are mixtures of primary emotions. Although we also use the PAD space, we consider mood traits as the octants of the space instead of limiting them to positive or negative. Figure 3.7 shows the virtual agent Max.

Figure 3.7: Virtual character Max (Fig. 4. from [13])

3.1.10 EMA

Marsella and Gratch [94], [95] proposed EMA (EMotion and Adaptation), a computational framework that represents the dynamics of appraisal, in which the appraised situation changes based on inferences and previous knowledge used to cope with it.
The agent's interpretation of the agent-environment relationship was called the causal interpretation. It provided an explicit representation of the agent's beliefs, desires, intentions, plans and probabilities used for the appraisal processes. This causal interpretation changed over time depending on the agent's future observations or inferences. Events were defined in terms of appraisal variables: Perspective, Desirability, Likelihood, Causal Attribution, Temporal Status, Controllability and Changeability.
Coping refers to how the agent reacts to the appraised events. Coping strategies in EMA used the cognitive operators of the appraisal process and decided on the most suitable actions to be performed. Strategies included planning, seeking instrumental support, procrastination, positive reinterpretation, acceptance, denial, mental disengagement, shifting blame, and seeking/suppressing information.
Like AR (Section 3.1.2) and FATIMA (Section 3.1.8), EMA followed Frijda's theory, so appraisal and mood elicited emotional states. Finally, the coping response was biased by this overall mood state. Regarding knowledge, it was represented through propositions, and SOAR [114] was used to model cognitive operators (e.g. update belief, output speech, listen to speaker, initiate action, and so on) [95].
The relevance of this work is its single-layered appraisal model, which turned out to be simple enough to be implemented in virtual characters, so that they can make congruent inferences about their world and cope accordingly. We also follow Frijda's background, because through our appraisal mechanism (ontologies) we elicit emotional states. We likewise use desirability, likelihood and temporal status as the variables to appraise the different situations.

3.1.11 Memory-based Emotion model

Kasap et al. [80] proposed a memory-based emotion model that intended to achieve a more natural interaction between the user and a virtual character. Through face recognition techniques, the character could remember a user's face and automatically adjust the current interaction on the basis of its existing relationship with the user.
To model emotions Kasap et al. used the reduced OCC model, which has 12 emotions
(six positive: joy, hope, relief, pride, gratitude, and love; and six negative: distress, fear,
disappointment, remorse, anger, and hate). In addition, they used another 4 user-related
emotions: happy-for, gloating, sorry-for, and resentment. Personality was modeled using
FFM and moods were represented in the PAD space, defined by equations proposed by
Mehrabian [105].
This architecture was similar to ALMA (Section 3.1.7), but Kasap et al. integrated the interpersonal-relationship concept, in which emotion, mood, personality, and social relationships affect each other. Long-term memory allowed the virtual character to store specific interaction sessions and then retrieve this information as needed.
At the beginning, during, and at the end of each interaction, emotions and moods were updated using the values provided in a matrix that related emotions (rows) with moods (columns). Relationships were framed on the dimensions of Dominance and Friendliness, following Argyle's model [5]. Long-term memory was represented through structures named episodic memory, which kept track of the people the character had interacted with and calculated relationship levels with them [91].
Their example application was Eva, a geography teacher who had a good and a bad interaction with two different students, showing the differences in the interactions' outputs. Figure 3.8 shows the facial expression of Eva during interaction.


This model is very similar to ours regarding the mapping of affective traits in the PAD space. While they consider the relationship during the calculation of the affective state, we take it into consideration during the appraisal of the event (in the ontology). Also, Kasap et al. only modeled facial expressions of emotions, while we map the mood's pleasure, arousal and dominance values onto the face.

Figure 3.8: Eva's response to events (Fig. 3. from [80])

3.1.12 OSSE
Ochs et al. [118] proposed a model of the dynamics of social relations, based on emotions, for Non-Player Characters (NPCs). NPCs are defined according to their personality traits and social roles (relations with other NPCs).
To represent the world surrounding the NPCs and its events, they used a formal representation that included: a vocabulary (entities with no reasoning capacity, and actions), NPCs' attitude functions (towards objects and other characters), praise functions (towards actions), events as the 4-tuple <agent, action, patient, degree of certainty>, emotion representation, emotion intensity (degree of desirability, praiseworthiness/blameworthiness), personality traits, the dynamics of social relations as the 4-tuple <liking, dominance, familiarity, solidarity>, and social roles (e.g. employee/manager, child/father).
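A possible encoding of these tuples is sketched below in Python; the class and field names are hypothetical and only mirror the 4-tuples listed above.

from dataclasses import dataclass

# Minimal, hypothetical encoding of the OSSE-style formal representation.

@dataclass
class Event:
    agent: str            # who performs the action
    action: str
    patient: str          # who or what the action is directed at
    certainty: float      # degree of certainty in [0, 1]

@dataclass
class SocialRelation:
    liking: float
    dominance: float
    familiarity: float
    solidarity: float

# Example: a manager praises an employee with full certainty.
event = Event(agent="manager", action="praise", patient="employee", certainty=1.0)
relation = SocialRelation(liking=0.4, dominance=-0.6, familiarity=0.7, solidarity=0.5)
print(event, relation)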
In the architecture, events were triggered by the scenario and appraised using OCC-based algorithms. This elicited emotions that were influenced by personality. In turn, emotions affected the NPC's emotional state and altered the social relations with other NPCs.
In the implementation of the architecture, emotion representation was done through the 10-emotion reduced OCC model [119], and they considered only the personality traits Neuroticism and Extraversion. For instance, a more extroverted NPC experiences higher values for the joy, hope, pride and relief emotions. Emotional decay functions and emotion thresholds depended on the personality. To test the model, they used interactive dialogs that simulated scenarios with different contexts (e.g. a police interrogation and a job interview).
This model is important because it focused on the dynamics of emotions, and not just on their representation. They considered most of the aspects that participate in emotion elicitation from a cognitive perspective, achieving a simple representation. Ochs et al.'s model conceptually resembles ours because actions, objects, agents, preferences, and events are related in a similar fashion. Nevertheless, we make use of ontologies to represent the knowledge that they implemented through functions. In this way, we reuse existing knowledge and perform inferences about the defined context. Moreover, they did not consider emotions such as happy-for (sorry-for), which occur when someone we appreciate undergoes desirable (undesirable) events, nor gloating (resentment). In our model we can elicit these emotions because the liking level for another agent is represented in the ontologies.

3.1.13 MARC system

Courgeon et al. [31] proposed a model based on Scherer's Componential Process Model (CPM) (Section 2.1.3). Their motivation was the appraisal of affective events in real time. For this reason, they used Scherer's model, which defines appraisal based on the evaluations an organism makes in order to survive.
MARC's architecture had an appraisal module that evaluated the elicited events and generated the parameters for facial animation. They used only 7 of the 10 appraisal sub-checks of the CPM: expectedness, unpleasantness, goal hindrance, external causation, coping potential, immorality, and self-consistency. For each sub-check they defined Gaussian curves based on the proposed discrete prediction values (open, very low, low, medium, high, and very high), because these provided a quantification of the discrete values. Therefore, they could evaluate the sub-checks on a continuous scale.
The process can be summarized as follows: when an event occurs, the relevant emotions for that event are evaluated. They used a set of four emotions: joy, sadness, anger, and guilt. For each emotion, they multiplied its relevance by the Gaussian values of all its sub-checks. Then, the final emotion is set with a value and the animation parameters are generated. Visualization was done using MARC, a virtual character that displayed sequential facial expressions. Figure 3.9 shows the system's architecture.
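The relevance-times-Gaussian computation can be sketched as follows in Python; the means, standard deviations and relevance values are invented, since [31] defines its own curves per sub-check and per emotion.

import math

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian in [0, 1]; peaks at 1 when x == mu."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Hypothetical predicted sub-check values for "anger" (mu) and their tolerance (sigma).
ANGER_PREDICTIONS = {
    "unpleasantness":      (0.8, 0.2),
    "goal_hindrance":      (0.9, 0.2),
    "external_causation":  (0.9, 0.3),
}

def emotion_value(relevance, appraised, predictions):
    """Multiply the emotion's relevance by the Gaussian fit of every sub-check."""
    value = relevance
    for check, (mu, sigma) in predictions.items():
        value *= gaussian(appraised.get(check, 0.0), mu, sigma)
    return value

appraised_event = {"unpleasantness": 0.7, "goal_hindrance": 0.85, "external_causation": 0.6}
print(round(emotion_value(1.0, appraised_event, ANGER_PREDICTIONS), 3))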

Figure 3.9: MARCs architecture (Fig. 1. from [31])

We decided to review this work because it is a novel appraisal model that uses the CPM in a simple manner. In essence, Courgeon et al.'s model follows our own framework organization: we have an event elicitor module that performs appraisal, then a module where the affective parameters are fine-tuned and the values for facial animation are generated, and finally a facial animation module.

3.1.14 Comparison between models


Table 3.1 presents the most important aspects we have taken into consideration when studying previous works. We have focused on the affective elements they use (number of emotions, mood and personality, and which theory they are based on), the psychological theory background (CPM, Belief-Desire-Intention (BDI) theory, cognitive appraisal, reactive or deliberative planning), the implementation techniques (Bayesian Belief Networks, OCC rules, quantitative functions), and the visualization method. Regarding this last aspect, we have biased our review of previous works towards those that use virtual characters and generate facial expressions.

Cathexis (1997)
- Affective elements: 6 basic emotions (Ekman); mixed emotions; mood (Roseman model); temperament (emotion thresholds)
- Psychological background: appraisal + physical
- Implementation: emotions through sensor agents and motivations (neural, sensorimotor, motivational, cognitive); behavior through motivations and external stimuli
- Visualization: basic facial expressions, body postures, vocal expressions
- ECA: Simon

Affective Reasoner (1998)
- Affective elements: 24 emotions (OCC model); mood
- Psychological background: BDI
- Implementation: databases for expectations, concerns-of-other, goals, standards and preferences (GPS); OCC rules, actions, conflict sets
- Visualization: facial expressions
- ECA: talking heads (either computer or human)

PUPPET (1999)
- Affective elements: 4 emotions (anger, fear, sadness, happiness); personality (extraversion and agreeableness)
- Psychological background: deliberative and reactive planning + BDI
- Implementation: OCC rules
- Visualization: facial expressions, sounds (e.g. cat purrs and hisses)
- ECA: farm characters (farmer, pigs, etc.)

Multilayer Personality Model (2002)
- Affective elements: 24 emotions (OCC model + surprise + disgust); personality (FFM); mood (good/bad/neutral)
- Psychological background: emotional reactive
- Implementation: Bayesian Belief Networks (BBNs) for personality; probability transition matrices for mood
- Visualization: basic facial expressions, vocal expressions
- ECA: virtual character

Greta (2003)
- Affective elements: 24 emotions (OCC model)
- Psychological background: BDI
- Implementation: Dynamic Belief Networks for emotion representation
- Visualization: facial expressions, body postures, head movements and gaze, vocal expressions
- ECA: Greta

Generic Personality and Emotion Model (2004)
- Affective elements: N emotions, P personalities, M moods
- Psychological background: appraisal (OCC model + personality)
- Implementation: matrices of influence; functions to compute affective trait changes
- Visualization: basic facial expressions, vocal expressions
- ECA: not found

FATIMA (2005)
- Affective elements: 24 emotions (OCC model); personality (goals, emotional reaction rules, action tendencies, emotional thresholds and decay rates); mood (good/bad)
- Psychological background: appraisal and coping
- Implementation: reactive and deliberative appraisal; problem-focused and emotion-focused coping
- Visualization: facial expressions, body postures
- ECA: FearNot! (John the victim, Martinha the neutral and Luke the bully)

ALMA (2005)
- Affective elements: 24 emotions (OCC model); personality (FFM); mood (PAD space)
- Psychological background: cognitive appraisal
- Implementation: appraisal rules: basic, act, emotion and mood display (AffectML)
- Visualization: facial expressions, body postures, written dialog
- ECA: Cyberella, Cross Talk, VirtualHuman, Eric, Virtual Poker-Character

WASABI (2008)
- Affective elements: primary emotions (prototypical emotions); secondary emotions (prospect-based emotions in the OCC cluster); mood (positive/negative)
- Psychological background: BDI + bodily dynamics
- Implementation: physis and cognition layers in PAD space
- Visualization: facial expressions, vocal expressions
- ECA: Max

EMA (2009)
- Affective elements: emotions (surprise, hope, joy, fear, sadness, anger and guilt); mood (aggregated emotional states)
- Psychological background: dynamic appraisal and coping
- Implementation: cognitive architecture (SOAR)
- Visualization: not found
- ECA: not found

Memory-based Emotion Model (2009)
- Affective elements: 12 emotions (reduced OCC); personality (FFM); mood (PAD space)
- Psychological background: cognitive appraisal + interpersonal relationship (dominance-friendliness) + long-term memory
- Implementation: matrices relating emotions and moods; probabilities of affective state change; episodic memory structure
- Visualization: facial expressions, body postures, written dialog
- ECA: Eva

OSSE (2009)
- Affective elements: 10 emotions (reduced OCC); personality (extraversion and neuroticism)
- Psychological background: dynamics of social relations
- Implementation: semiquantitative; numerical functions
- Visualization: written dialog
- ECA: not found

MARC (2009)
- Affective elements: 4 emotions (joy, anger, sadness, guilt)
- Psychological background: CPM (7 sub-checks)
- Implementation: Gaussian functions
- Visualization: facial expressions
- ECA: Marc

Our system
- Affective elements: 24 emotions (OCC model); personality (FFM); mood (PAD space)
- Psychological background: deliberative appraisal + knowledge
- Implementation: ontologies; inference
- Visualization: facial expressions
- ECA: Alice, Alfred

Table 3.1: Comparison between Computational Affective Models

3.2 Visual Perception of Affective Phenomena


When it comes to Personality, little has been done regarding its visual representation in a
virtual character. Nevertheless, some previous works have dealt with this topic, obtaining
the results that are presented in the following.

3.2.1 Visual Cues for Personality


Personality is something that people recognize and discuss about others every day, and it is a valuable piece of information about a person [77]. The following works have focused on personality as a component of affective models that modifies the resulting emotional state of the virtual character.
According to Mehrabian [102], facial pleasantness was associated with less dominant,
more neurotic, and more anxious tendencies of an individual. Thus, when greater facial
pleasantness or higher rates of smiling occur in what could be considered awkward situ-
ations, they indicate greater efforts by the speaker to relieve tension and discomfort by
placating the listener.
According to Keltner [81], Neuroticism is correlated with increased facial expressions
of anger, contempt, and fear. In a study, neurotic individuals showed increased facial
expressions of distress while watching a person making an embarrassing face. Following an
overpraise induction, Neuroticism was negatively correlated with participants' Duchenne smiles, which were the dominant response in the situation mentioned before. Extraversion
consistently predicted facial expressions associated with social approach. It was positively
correlated with increased Duchenne smiles of enjoyment and amusement, as well as with
increased facial expressions of sadness, which they viewed as a signal of increased social
approach.
Isbister and Nass [77] examined whether people would interpret and respond to verbal and non-verbal cues of personality in virtual characters as they do with other people. Their experiment used subjects who had previously taken personality tests, so they could determine whether people prefer to work with others that are similar to themselves. As a result, participants accurately identified the extroverted language and non-verbal cues as significantly more extroverted than the introverted language and postures. They also preferred characters with consistent cues, regardless of participant personality, showing that they would rather work with a character that was complementary to them than with one that was similar.


Krahmer et al. [83] investigated three potential personality cues: gaze, speech and eyebrow movements, which were assigned variants of Extraversion and Introversion. The novelty was the study of the perception of these cues combined in different personality profiles (e.g. introverted gaze, introverted speech and extraverted brows). They used a virtual talking head with the three variants of personality cues, which subjects had to rate while it read a poem in Dutch. The results showed that extravert brows were perceived as more extraverted when combined with extraverted gaze, and introvert brows were perceived as more introverted with introvert gaze. Concerning combinations of cues, they found that including an extravert feature in an introvert agent does not imply that subjects perceive that agent as more extraverted. They also observed that the personality profile of the character had no influence on the quality assessment, and they confirmed that speech influences the perception of personality.
Arya et al. [7] experimentally associated facial actions (head tilting, turning, and nod-
ding, eyebrow raising, blinking, and expression of emotions) and their frequency and du-
ration to the dimensions of Dominance and Affiliation, proposed by Wiggins [155]. Their
objective was to have an agent with certain facial actions that can be perceived by the
user as certain personality types, instead of establishing the relation between personality
types and facial actions. The results showed that joy was highly related to the Affiliation dimension and was also seen as more dominant. Contempt was also seen as very dominant but correlated negatively with Affiliation. Both slow and fast raising of both brows was seen as an Affiliation signal. Slowly turning the head was seen as very dominant, while slowly moving the head to the side was seen as very submissive. On the other hand, raising one brow quickly was seen as very dominant, while a quick averted gaze was seen as more submissive.
Zammitto et al. [156] tried to expand the personality model proposed in Arya et al. [7]
by using the AB5C model (Section 2.2.3), which provided the framework for going from
mapped circumplexes to meticulous use of labels. As a result they obtained adjectives for
identifying the Big Five personality factors, plus specific visual cues for two of them. Their
idea was to incorporate visual cues for the rest of the Big Five traits.
Bee et al. [15] analyzed how facial displays, eye gaze, and head tilts express social dominance. Using one-way ANOVAs and two-tailed t-tests, depending on what they wanted to obtain, they found that the emotional displays of joy, anger and disgust were perceived as highly dominant, while fear was perceived as less dominant. Regarding gaze, joy, disgust and the neutral display with direct gaze (looking straight at the user) were the most dominant, while fear was again the least dominant. Nevertheless, anger with averted gaze was perceived as highly dominant. Concerning the influence of head orientation, significant effects were only obtained for anger, sadness and the neutral display.
McRorie et al. [101] proposed a model to construct Sensitive Artificial Listeners (SALs), using Eysenck's model of personality to generate behavioral characteristics (e.g. visual cues) from it. The idea was to link verbal/non-verbal behavior with impressions of personality. In order to do that, they first defined the agents, specifying their characteristics and mental states: agree, disagree, accept, refuse, believe, disbelieve, interest, not interest, like, dislike, understand, and not understand. Then, they computed the agent's behavior when it was listening to the user, selecting the most appropriate signal. Finally, they determined the corresponding non-verbal behaviors taking into account the character's modality preference and expressiveness parameters. For example, extraverts tend to demonstrate more body movements, display greater levels of facial activity and gesturing, more frequent head nods and a higher general speed of movement, and direct facial posture and eye contact are more likely to be maintained. For instance, their character Spike, who is angry and argumentative, conveys negative communicative functions, in particular dislike, disagreement and lack of interest.
Cloud-Buckner et al. [28] conducted an experiment to investigate how people perceived personality in avatars based on actions, language, and behavior, and whether the race and gender of the avatar influenced this perception. The independent variables taken into consideration were race (dark or white), gender (male or female), and personality (P1: friendly, outgoing, self-sufficient, with a high activity level and some anger; P2: introverted, self-conscious, cooperative, orderly, modest, disciplined and sympathetic to others). Results showed that the race and gender of the avatar did play a role in the perception of personality. For example, for P1 and P2, when there was a difference in race, the dark-skinned avatar had a higher rating for every item except anger. When there was a difference in gender, the male had a higher score for P1, but the female had a higher score for P2.
Neff et al. [113] studied the impact of verbal and non-verbal factors on the perception of the Big Five trait of Extraversion. The non-verbal expression of Extraversion was given by the head and body attitude: gesture amplitude, direction, speed, and movement of body parts. Neff et al. performed an evaluation on videos of a female avatar making a restaurant recommendation, asking subjects to judge her extraversion level and naturalness. Results showed that linguistic extraversion has more influence on the perception of extraversion than gesture rate or gesture performance; that higher gesture rates are indeed perceived as more extraverted; that changes in gesture performance correlate with changes in extraversion perception; and that the combination of higher gesture rates with a more extraverted gesture performance increases the perception of extraversion.
Bee et al. [16] examined the interaction of verbal and non-verbal behavior to create
the impression of dominance in intelligent virtual agents. Non-verbal communication is achieved by gaze and head movement, which are associated with social dominance, and by gestures that convey friendliness and enthusiasm. Verbal communication is achieved
through linguistic behavior, which is associated with the personality traits of extraver-
sion and agreeableness. Their novel result demonstrated that linguistic personality traits
(extraversion and agreeableness) influence the perception of dominance. Also, gaze-based
dominance influences the perception of personality traits.

3.3 Summary
This chapter can be summarized as an overview of models to make characters feel and to make characters look as if they were feeling. In the first group we found the models of Velasquez, Andre et al., Kshirsagar et al., Egges et al., and Gebhard, which have become the state of the art in computational models of affect. Other works, such as those of Kasap et al., Ochs et al., and Courgeon et al., propose new models that intend to improve the creation of affective characters. Table 3.1 shows the highlights of the reviewed models as well as of our model.
In the second group we briefly described studies related to the perception of personality and its relation with different visual cues. These studies have mostly been performed by psychologists, but recently researchers in Computer Science like Arya and colleagues, Zammitto and colleagues, or Bee and colleagues have been experimenting with the implementation of these cues in virtual characters, exploring how they enhance the perception of emotions, mood and personality.

Chapter 4

Framework Overview

Da Vinci combined art and science and aesthetics and engineering, that kind of unity is
needed once again.
Ben Shneiderman

The combination of different fields and theories to achieve believable virtual characters is not an easy task. The previous chapters showed the amount of work devoted to giving characters affective traits. They also showed that context is a key element in the generation and recognition of emotions. On the other hand, little has been done regarding the expression of mood or personality, which could enhance the believability of the characters.
Therefore, we propose a framework that deals with the representation of context and the visualization of emotions, mood and personality, combining ontologies, psychological theories and previous computational models. Nevertheless, due to the novelty of this work, every piece of the framework needs to be evaluated to prove the research hypotheses established in the introductory chapter. Therefore, the main purpose of this framework is to establish the basis for a model that could help in the development of applications involving storytelling, interaction with virtual characters or fast visualization of affect.
This chapter gives an overview of how we implement the necessary elements to create
believable affective virtual characters immersed in contextualized virtual worlds.


4.1 System Framework


To represent daily events it is necessary to have a model that defines them, describes the concepts related to those events, and implements the rules of appraisal that will elicit affective responses.
Therefore, we propose a three-layered Computational Affective Model that consists
of a semantic layer (Context Representation Module), an affective layer (Affective
Module) and a visualization layer (Facial Expressions Visualization Module). Figure 4.1
presents the diagram of our model, where the relationships between the main modules are
shown.

Figure 4.1: General Conceptual Model

As can be seen, the world provides the elements to represent the context, which are appraised, eliciting emotions that are interpreted by the affective model. Then, the affective model combines these emotional responses with the mood and personality of the character to produce a final affective state, or mood. Finally, this mood is manifested by the agent through its facial expressions. Figure 4.2 presents a more detailed schema of the modules to be developed and the psychological theories (or models) that will be used.

Figure 4.2: Conceptual Model of the Framework
Starting from the top leftmost side of Figure 4.2, it can be seen that the World is defined in terms of the events that occur in it, and also in terms of the characters, or agents, who inhabit it. This categorization responds to the idea of differentiating context between things that happen outside the character (events) and inside the character (the character itself through its psychological-affective characteristics).
An event is composed of an action that is performed by, or affects, a character (agent) or an object; a time unit when it occurs; and a place where it occurs. As for the agent, it is defined in terms of its goals, preferences, social admiration for other agents (although this term uses the emotional word admiration, which has a positive connotation, it can also be negative), and personality.
The World elements are input to the Context Representation module, which is implemented using two ontologies: the Event Ontology, which describes all the concepts that form part of an event; and the PersonalityEmotion Ontology, which considers all the concepts that define a character from a psychological point of view (goals, preferences, social admiration for other agents, and personality). The output of this module is a set of emotions with their respective intensities.
These emotions are the input of the Affective Model, which computes the interrela-
tions between emotions, moods and personality, giving more accurate affective responses.
The output of this module is a set of moods expressed in terms of pleasure, arousal and
dominance values.
The Visualization module is responsible for the generation of facial expressions, provid-
ing a visual representation for the emotions elicited in the Context Representation module
and the moods elicited by the Affective Model. Regarding personality, it influences not only the computation level, but also the visual level. That is why we explore
how some visual cues influence the perception of personality traits.
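Read as a data flow, the three modules compose as in the following Python sketch. The function names, signatures and return values are placeholders for illustration only, not the actual interfaces of our implementation.

# Schematic of the three-layer pipeline (placeholder functions, not the real API).

def context_representation(event, agent):
    """Semantic layer: appraise the event against the agent's goals,
    preferences and admiration, returning emotions with intensities."""
    return {"joy": 0.6, "hope": 0.3}          # illustrative output

def affective_model(emotions, mood, personality):
    """Affective layer: combine emotions with mood and personality and
    return an updated mood as (pleasure, arousal, dominance)."""
    return (0.4, 0.3, 0.2)                    # illustrative output

def visualize(emotions, mood):
    """Visualization layer: turn emotions and mood into facial parameters."""
    return {"expression": max(emotions, key=emotions.get), "pad": mood}

emotions = context_representation(event="friend_arrives", agent="Alice")
mood = affective_model(emotions, mood=(0.0, 0.0, 0.0), personality="FFM profile")
print(visualize(emotions, mood))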
In the following sections we offer a brief description of the parameters used in each
layer of our computational affective model.

4.2 Semantic Layer: Context Representation

To represent the context in which the characters will be immersed we propose a Semantic
Model that will be briefly described in this section, and then explained in Chapter 5.

4.2.1 Context - Inner world of the character

The internal state of a character is the result of the context appraisal, which is performed
using information as the goals, preferences, admiration for other characters, and personality
traits. Many other aspects could be taken into consideration as social role, culture, religion,
social dynamics between characters, etc. Nevertheless, we consider the former elements
the basic ones to represent the interaction of the character with his environment. In the
following subsection we will make explicit our intended meanings for them.


Goals

In their simplest sense, goals define a group of states of affairs that an agent desires to happen. In this dissertation, they are related to the occurrence of an event that the agent wishes to happen and that produces benefits for it. A benefit is represented as the elicitation of one positive emotion, or a set of them. Goals are considered equivalent in their structure, which means that no difference is made between preservation goals, partially achieved goals, and so forth.

Preferences

We have defined them as the degree of liking, or disliking, that an agent feels for surrounding entities. They can be things around the character, such as a specific park, a certain street, flowers, chocolates, or other agents; or things that are perceived by the character, such as thoughts, ideas, standards, etc. For example, the agent may be a pacifist and love the idea of freedom, or dislike racism.

Interaction / Relation with other agents

In real life people like or dislike other people to a certain degree. This feeling influences
the affective output when two characters are participating in the event, or when the event
is evaluated from the perspective of an external agent (an agent that is not participating
in the event, but has witnessed it and might feel emotions because of it).

Social role

According to Edelstein [40], roles are a blend of personality traits and the work a person does (company president, hit man, or new mother). A role emphasizes different aspects of an individual. Although we have not explicitly defined roles in this way, we have established four roles that deal with the character being a protagonist or a spectator of the event.
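To make these inner-world elements concrete, a minimal Python data-structure sketch is shown below; all field names and example values are hypothetical and only mirror the concepts described in this subsection.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AgentProfile:
    """Inner-world description of a character (illustrative only)."""
    goals: List[str] = field(default_factory=list)                 # events the agent wants to happen
    preferences: Dict[str, float] = field(default_factory=dict)    # entity -> liking in [-1, 1]
    admiration: Dict[str, float] = field(default_factory=dict)     # other agent -> liking in [-1, 1]
    personality: Dict[str, float] = field(default_factory=dict)    # FFM trait -> value in [-1, 1]

alice = AgentProfile(
    goals=["win_the_contest"],
    preferences={"chocolate": 0.9, "racism": -1.0},
    admiration={"Alfred": 0.7},
    personality={"O": 0.4, "C": 0.6, "E": 0.8, "A": 0.5, "N": -0.2},
)
print(alice.admiration["Alfred"])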

4.2.2 Context - Outer world of the character


An Event is defined based on four questions that provide the main information about it: what, where, who and when. A minimal sketch of this structure is given after the list below.

what: references the action that happens in the event. An action is represented with
a verb, which indicates what occurs in the event. The action has complements that

complete it, giving sense to the event. Complements can be direct or indirect objects,
which are represented by Abstract Entities and Physical Entities, as overviewed
in Subsection 4.2.1.

who: indicates the role of the character with respect to the action in the event.

when: specifies the time of occurrence of the event.

where: provides a description of the place where the event occurs.

4.3 Affective Layer: Affective Model


The following theories are the ones used to represent emotions, mood and personality. The
implementation of the model is explained in a later chapter.

4.3.1 Emotions

To represent emotions in our system we chose the ones proposed in the OCC model, plus
the emotions of disgust and surprise, explained in Chapter 2. The reason for choosing it,
as other researchers have done before, is that it is comprehensible and precise enough
to allow a computational implementation of emotions. These emotions are obtained from the
contextual elements of the world and agent context representation, managed by the semantic layer.

4.3.2 Mood

For mood we chose the PAD Model, also explained in Chapter 2, because it links emotions,
mood and personality in the same 3D space. In PAD, each mood corresponds to an octant
of that space resulting in: Exuberant, Bored, Dependent, Disdainful, Relaxed, Anxious,
Docile and Hostile.

4.3.3 Personality

Finally, for personality, we use the traits of the Five Factor Model (FFM) [97] (also known
as Big Five) because of its proven universality after years of research using self-report ques-
tionnaires [74], and because this model can be mapped into the PAD space with equations
provided to this end. FFM is explained in detail in Chapter 2.


4.4 Visualization Layer


To generate the different expressions we use two standards: the MPEG-4 standard [58] and
the Facial Animation Coding System (FACS) [48]. MPEG-4 is considered a derivative of
FACS, and while the former has been mainly used for synthesis of facial expressions, the
latter has been mainly used for recognition of facial expressions.
The MPEG-4 standard provides facial animation parameters, which result in fast and
computationally inexpensive animations. We decided to use this standard because of its
simplicity and efficiency to define facial points that can be mapped to all areas of the
face (eyes, nose, mouth, head), to manipulate them through animation parameters, and
because of the satisfactory visual output that it produces. We use MPEG-4 for visualization
of intermediate emotions, obtained as the linear combination of basic ones.
On the other hand, FACS is a coding system that describes facial movements, based
on muscular activity. This movement description is done through action units (AUs),
e.g. AU1: inner brow raiser. We use FACS to map the values of Pleasure, Arousal and
Dominance into facial expressions, based on a database of facial movements described and
defined for a set of expressions associated with these dimensions.
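
To make the linear combination of basic expressions more concrete, the following is a minimal Java sketch of how FAP vectors could be blended. The class and method names, the FAP-vector representation and the assumption that the weights sum to one are ours, not part of the MPEG-4 standard or of our implementation.

/*
 * Illustrative sketch only: blends the FAP vectors of basic expressions into an
 * intermediate one. MPEG-4 defines 68 facial animation parameters (FAPs).
 */
public class FapBlender {

    static final int NUM_FAPS = 68;

    /** Weighted sum of basic-expression FAP vectors (weights are expected to sum to 1). */
    static double[] blend(double[][] basicExpressions, double[] weights) {
        double[] intermediate = new double[NUM_FAPS];
        for (int e = 0; e < basicExpressions.length; e++) {
            for (int f = 0; f < NUM_FAPS; f++) {
                intermediate[f] += weights[e] * basicExpressions[e][f];
            }
        }
        return intermediate;
    }
}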

4.5 Summary
In this chapter we provided a global view of our Computational Affective Model. It is
formed by three layers that allow the simulation of believable virtual characters.
The first layer is related to the semantics that define the context of the character
(inner context, based on a BDI theory plus personality traits; and outer context, which is
the world and its entities).
The second layer deals with the affective output produced as a result of the events
elicited by the semantic layer. In this layer emotions, mood, and personality are combined
to leave the character in an affective state defined by the values of Pleasure, Arousal and
Dominance.
The third layer is in charge of the visualization of the facial expressions that show the
affective state of the character. We have used MPEG-4 and FACS to show that our model
is independent of the visualization method.

Chapter 5

Context Representation

There are no facts, only interpretations.


Friedrich Nietzsche

Appraisal can be defined as the process of understanding and evaluating what is happening
around us, allowing the elicitation of emotions. Therefore, to create emotional characters
we need to simulate a context where a variety of situations occur and can be appraised.
This chapter is organized as follows. First, a review of previous works related with
generation of context and its representation is presented, followed by the explanation of
the implementation of our own conceptual model to represent context. Then, a practical
summary of the whole conceptual model is given, as well as a discussion about the imple-
mentation details. Afterwards, a practical example is provided to demonstrate the use of
the model. Finally, a summary concludes the chapter.

5.1 Context - An Overview


Dey et al. defined context as any information that can be used to characterize the situation
of an entity. An entity is a person, place, or object that is considered relevant to the inter-
action between a user and the application, including the user, the device and application
themselves. [36]
Similarly, Doyle offered another definition of context as a representation of certain
aspects of the state of the world... There may be several contexts active at one time, and
contexts may be nested within one another. [39].


Hence, how a character expresses its affective state significantly depends on the context.
Without it, it might become difficult to discern between the meaning of a facial expression,
to decide how to react in a certain situation, or how the course of a story/situation should
unfold. In this way, Kaiser and Wehrle [78] stated that facial expressions can only
be interpreted when they are inside a context (temporal or situational) that allows us to
generate them in an accurate way.
One of the problems when representing context is that aspects like culture, religion,
and other social rules and roles should also be taken into account, making the context's
representation and evaluation an unmanageable task.
A solution for this is to represent context from a child's point of view. When we
think of children, we realize that their context has the same set of events as the one of
adults, but their appraisal lacks social rules. Therefore, they react to those events in a
purer, more emotional way.
With this in mind, we decided to represent context as a child would do it, providing
a generic description of what is happening outside and inside the virtual characters, or
virtual agents (both terms will be used interchangeably). This is one of our main contributions,
because as it will be seen in the next subsection, no previous work has managed to represent
and elicit affect through ontologies.

5.1.1 Previous works on Context Representation

In the field of Computer Science, some areas that have worked on context representation are
Affective Computing, Ubiquitous Computing, and Artificial Intelligence.
Strang et al. [141] evaluated six of the most relevant existing approaches to model
context for ubiquitous computing: key-value models, markup scheme models, graphical
models, object-oriented models, logic based models, and ontology-based models. These
approaches are also useful for any field where context representation should be taken into
account. Indeed, Strang et al. concluded that the most promising assets for context
modeling can be found in ontologies.
Ontology has been a field of philosophy since Aristotle, characterized by the study of
existence: a compendium of all there is in the world. Nowadays, it has evolved to a great
extent in the computer science and artificial intelligence fields [89]. An ontology is defined
as an explicit specification of an abstract, simplified view of a world to represent. It specifies
both the concepts related to this view and their interrelations [117].


Krummenacher et al. [84] also stated that ontologies provide better modeling facilities
(intuitive notions of classes and properties), while being semi-structured and incorporating
a clear semantic model. They mentioned a number of upper ontologies for context defini-
tion, which are defined as the most general category of ontologies applicable across large
sets of domains. General upper ontologies are DOLCE (Descriptive Ontology for Linguistic
and Cognitive Engineering) [61], SUMO [115], Cyc [87], and SOUPA [26]. Nonetheless,
ontologies still have their drawbacks, especially in issues related to scalability and domain
specific reasoning.
Due to scalability and domain-related issues, other researchers have proposed their own
ontologies, like Lopez et al. [89], who proposed a generic Emotions Ontology.
The ontology differentiates the concepts of the physical world and the mental world.
By relating these two worlds they achieved a model for describing emotions and their
detection and expression in systems that work with contextual and multimodal elements.
To complete their specification they used DOLCE because it defined concepts as event,
process, or action, used to contextualize other concepts in the Emotions Ontology; and
FrameNet [139] because it is better suited for modeling context as situations. Although
this work resembles our research, they did not deal with concepts such as goals, likings of the
character, or relations with other agents.
Nakasone et al. [112] presented a generic storytelling ontology model, the Concept
Ontology. Their idea was to define a set of topics in which the story, or part of the story,
is based, and to link them through a pseudo-temporal relation that ensures a smooth
transition between them. As this ontology was for storytelling, the classes defined were
related to scenes, acts, relations, agents participating in the story and their roles. Its
main advantage is that it provided the elements for creating a story according to narrative
principles.
Figa et al. [56] used ontologies and other components to develop an architecture for
Virtual Interactive Storytelling Agents (VISTAs) capable of interacting with the user
through natural language. First, they generated AIML scripts from live audio, video
recordings of storytellers, or from online chats, which were analyzed using the WordNet
lexical database [55] enhanced with Prolog inference rules. Then, they used ontologies
to select a storys subset expressing the focus of interest of the user. These agents were
oriented to online teaching and shared virtual environments to support learning.
Swartjes et al. [142] implemented a multi-agent framework that generated stories based
on emergent narrative. Story generation was performed in two phases: simulation and
presentation. During simulation, the phase that concerns us the most, they used an ontology
to define the settings of the story world. They used the example of a pirates' world, where
the ontology defined concepts such as Sword or Pirate, and relations such as hate or owns.
Ontologies can also be used for the representation of the emotional output generated
by context. For instance, Obrenovic et al. [116] provided flexible definitions of emotional
cues at different levels of abstraction. Their Emotional Cues Ontology provided a language
to share and reuse information about emotions. The concepts handled by this ontology
were emotions, emotional cues (i.e. facial expressions, gestures, speech and voice), and
media (where the emotional cues are represented). Their main objective was to generate
or recognize emotional cues according to the media and the emotion felt. They did not
focus on the process of simulation of the environment that elicits the affective information.
Heckmann et al. [71] introduced the general user model ontology GUMO as part of a
framework for ubiquitous user modeling. What is interesting about this work is the model-
ing of Basic User Dimensions such as personality, characteristics, emotional state, physiological
state, mental state and facial expression. The principal use of these data is in assessing
the actual state of the user. Regarding facial expressions, if a user shows some expression,
they represented this information in their UserJournal without interpreting the current
emotional state of the user. This work is very similar to our work but from the Ubiquitous
Computing field. Nevertheless, their inference process is related to what will occur in the
environment (ambient intelligence) and not to what will be the affective state of the user.
Chang et al. [25] presented a three-layered framework for scenario construction, formed
by: a mind model, a concept model and a reality model. The concept model provided
characters with ontology-based environmental information, so they can use ontological
inference to associate objects in the world with their goals and build plans according to
the world where they are. Nonetheless, the concept layer did not take into consideration
affective elements, which were in the mind layer.
Benta et al. [18] proposed an application that combined affect, context, ontologies and
ubiquitous computing. They presented an ontology-based representation of affective states
for context aware applications. Using the Activation-Evaluation space proposed by Cowie
et al. [32], they extended the ontology SOCAM (CONON) [149] so it can define different
States: Affective, Mental and Physiological. CONON is an ontology for modeling context
in pervasive computing environments.
Gutierrez et al. [70] developed an ontology to provide a semantic layer for concepts
related to: human body modeling and analysis, animation of virtual humans, and interaction
of virtual humans with virtual objects. This ontology was more focused on defining a
character from a physical and behavioral point of view, which can be of great help when
deciding which actions to take depending on the context that surrounds the character.
Our intention with this overview of previous works was to show how context has been
represented before, and how ontologies have also been used to represent behavior in virtual
characters. Nevertheless, we found that former research lacks a structured definition
of the psychological characteristics that build a virtual character from an internal point
of view. Moreover, we think it is necessary to describe the process between situations
produced in a certain context and the elicitation of emotions, based on a belief-desire-intention
perspective. This process of event appraisal, carried out by the character, is what we have
intended to define using ontologies.

5.2 Semantic Model


In order to describe an agent's environment we propose a semantic framework based on
ontologies, as seen in Figure 5.1. The motivation behind it is the potential that ontologies
offer to represent knowledge in a certain domain, and to infer new knowledge from the already
existing one.

Figure 5.1: Context Representation Schema


The development of ontologies usually starts by defining their domain and scope. This means
that, given a motivating scenario, we should be able to answer certain questions related to
it. These questions are known as competency questions (CQs). Informal CQs are queries
which require the objects or constraints defined with the object. Formal CQs are defined as
an entailment or consistency problem with respect to the axioms in the ontology [69]. We
have focused only on informal CQs to evaluate the expressiveness of our ontologies: Outer
World CQs and Inner World CQs.
The Outer World Competency Questions are:

1. In a certain situation (e.g. a fragment of a story, an episode of daily life), which are
the occurring events?

2. How is an event described? Which properties does it have? Which is the main action
that gives meaning to that event?

3. What is affected by the action?

4. Where is the event happening? Can it happen in more than one location?

5. When is the event happening? How can we describe the duration of events? How
can we simulate the duration of an event?

6. Can events be organized temporally (event 1 occurs first, then event 2, and so on)?
Can they be organized automatically, so they produce some temporal linear story?

7. Who is performing the event? Or who is affected by the event? Are there other
agents involved? How are they related?

The Inner World Competency Questions are:

1. Which are the goals of an agent? What are the consequences of achieving a goal?

2. What does the agent like and dislike? Can they be physical objects or intangible
things?

3. How does the agent feel about other agents?

4. Which emotions can the agent feel? How are they elicited?

5. What is the personality of the agent? How does it influence its emotional state?


From the former analysis we decided to design and implement two different ontologies,
represented in Figure 5.2. The PersonalityEmotion Ontology considers all the concepts
that define a character (or agent) from a psychological-affective point of view: goals,
preferences, admiration for other agents, and personality (inner world). The Event
Ontology describes the environment that surrounds the character (outer world) based on
the events occurring in it. The following sections describe both ontologies in detail.

Figure 5.2: Semantic Model for Context Representation with Ontologies

5.3 Event ontology


According to Ortony et al. [120], events are the constructions people make about things
that happen, independently of any beliefs about their causes. As mentioned in Chapter 4,
an Event is defined based on four questions (what, where, who, and when), which are rep-
resented by the classes Action, SpatialLocation, AgentRole and TemporalEntity (Fig. 5.3).
Then, each class has its corresponding subclasses, as will be seen in the following.

5.3.1 Action - (Fig. 5.3: A)

An Action is represented with a verb. So, as in grammar a verb has complements (direct or
indirect objects) that give a meaning or complete the sentence, an Action has complements
represented by the classes AbstractEntity and PhysicalEntity.
AbstractEntity includes all intangible concepts as ideas, thoughts, dreams, or standards
(the notion of how to be socially correct).
PhysicalEntity defines all things that we can see and touch through the subclasses
SpatialLocation, MaterialThing and Agent. A PhysicalEntity has a Dimension formed by
the subclasses: Width, Height, and/or Depth. For instance, a building can be very wide
and tall, a pool can be 3 meters deep, or a floor-lamp can have a height of 1.2 meters.


Figure 5.3: Event Ontology Diagram

The relations that link the concepts Event and Action are what and onEvent. The
latter specifies that an action occurs in a certain event, making the relation between these
concepts cyclic. For instance, the action Eat can be linked to the physical entity Bread,
and it is part of a certain event. Therefore, the description logic (DL) representation is

Eat := Action
(thereAreIrrealThings(Eat, ) ∧ thereAreObjects(Eat, Bread))
what(Event1, Eat)

5.3.2 SpatialThing - (Fig. 5.3:B)

SpatialThing defines where the event occurs. Its subclasses are: HumanStructure (e.g.,
buildings), NaturalStructure (e.g., beach) and OpenStructure (e.g., street).
This concept has a characteristic represented by the class AmbientCondition. Its sub-
classes Temperature and Humidity describe how cold or hot, and how wet or dry, a place
is. From these two elements Weather can be inferred, thus defined as the state of the
atmosphere with respect to temperature, and wetness or dryness.


The relation that relates the event with its location is named where. For example, an
event that occurs in a Farm on a sunny day can be described as

Farm := HumanStructure
hasSuperClass(Farm, SpatialThing)
hasTemperature(Farm, 25)
hasWeather(Farm, Sunny)

5.3.3 AgentRole - (Fig. 5.3:C)

An agent, represented by the class Agent, is the person or animal around which the event
happens. Therefore, it extends the subclasses Character and Animal to differentiate be-
tween human-like agents and animal-like agents.
To know exactly the role of the agent in an event, we have created the class AgentRole
with its subclasses: Execute and Receive. The subclass Execute instances those agents that
perform the action of the event (executers), while Receive instances the agents affected by
that action (receivers).
However, to specify the role even further, we extended these two classes to represent the
participation of the different agents in the event. Thus, Execute presents the subclasses
ByMe and ByOther, and Receive extends the subclasses OnMe and OnOther.
- The role ByMe is taken by the agent who analyses the event and at the same time
executes the action.
- The role ByOther is taken by that agent who executes the action, but is not the one who
analyses the event.
- The role OnMe is taken by the agent who analyses the event and at the same time receives
the effects of the action.
- The role OnOther is taken by the agent that receives the action of the event, but this
agent is not the one who analyses the event.
Let us take the situation Rose buys flowers for Charlie. If Charlie is the one who
evaluates the event, then Rose has the role ByOther because she is executing the action
buy. But if Rose evaluates the event, her role would be ByMe and Charlie's role
would be OnOther because he is the passive subject.
The relation that relates the event with the agent, and specifically, with its role is who.
For example, using the previous event we can describe it as


Charlie := Character
hasSuperClass(Charlie, Agent)
hasRole(Charlie, Charlie_OnMe)
hasSuperClass(Charlie_OnMe, Receive)
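
The assignment of these four roles can also be summarized in a few lines of code. The following Java sketch is only illustrative (the class, enum and method names are ours) and simply restates the cases described above.

public class RoleResolver {

    enum AgentRole { BY_ME, BY_OTHER, ON_ME, ON_OTHER }

    /**
     * Resolves the role of an agent in an event from the point of view of the
     * agent who appraises (analyses) the event.
     */
    static AgentRole resolve(String agent, String appraiser, boolean executesAction) {
        boolean isAppraiser = agent.equals(appraiser);
        if (executesAction) {
            return isAppraiser ? AgentRole.BY_ME : AgentRole.BY_OTHER;
        }
        return isAppraiser ? AgentRole.ON_ME : AgentRole.ON_OTHER;
    }

    public static void main(String[] args) {
        // "Rose buys flowers for Charlie", appraised by Charlie:
        System.out.println(resolve("Rose", "Charlie", true));      // BY_OTHER
        System.out.println(resolve("Charlie", "Charlie", false));  // ON_ME
    }
}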

5.3.4 Temporal Entity - (Fig. 5.3:D)


The TemporalEntity class specifies the time of occurrence of the event through the rela-
tionship when. It has two subclasses: Instant and Interval. The first one defines the
specific point in time when the event occurs; the second one defines the period of time
when the event occurs. For instance, if Rose buys flowers for Charlie in the morning,
we can specify morning as a period of time from 06:00 until 12:00, then

morning := Interval
hasSuperClass(morning, TemporalEntity)
hasBeginning(morning, time0600)
hasEnd(morning, time1200)

5.3.5 Contained Events - (Fig. 5.3:E)


Another important aspect that needs to be considered is when an event describes the
occurrence of another event. These are called Contained Events and are represented
by the recursive relation contains. An example of this kind of relation can be seen in
the event: Rose sees Charlie stealing an expensive car. In this case, the main event
would be Rose sees something. But that something is another event: Charlie steals an
expensive car. Therefore, two events should be evaluated: from Roses perspective and
from Charlies perspective.
The level of recursion for contained events can be extensive. Nevertheless, we are
working with only two levels of recursion.


5.4 PersonalityEmotion ontology

How someone feels is the result of a process of context appraisal, which is performed using
information like one's goals, preferences, degree of admiration for others, and personality
traits.
To implement these concepts, explained in Chapter 4, we designed the PersonalityEmo-
tion Ontology, presented in Figure 5.4. It is worth noting that from now on we use the
term Character instead of Agent because we are considering the domain from a human
perspective. But as a Character is an Agent, both terms can also be used inter-
changeably. The following subsections explain the diagram according to the main classes
pointed out by letters.

Figure 5.4: Personality-Emotion Ontology Diagram


5.4.1 EventRelation - (Fig. 5.4:A)

The class EventRelation defines the relation established between an Event, a Character
and the level of satisfaction produced in that character specified by the class EventSat-
isfactionScale. The subclasses of EventSatisfactionScale are: SATISFACTORY (S), IN-
DIFFERENT (IA), and NOT SATISFACTORY (NS), and each one has instances whose
values range in the interval (0, 1).
For instance, in the event event001: Rose goes to the movies every Saturday, we can
deduce that the character Rose enjoys going to watch films every week, and we can describe
it as

event001_rel := EventRelation
hasEvent(event001_rel, event001)
hasEventSatisfaction(Rose, event001_rel)
hasSatisfactionValue(event001_rel, event001_s)

5.4.2 Goals - (Fig. 5.4:B)

The Goals of a character are events with a certain desirability degree. This degree indicates
how much a character wants that event to happen. If desirabilityDegree ≥ 0.7, then we
consider the event as a goal. The intensity of the emotions elicited due to the occurrence,
or achievement, of a goal is determined by the desirability degree.
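
As an illustration only (the 0.7 threshold comes from the text above, while the class and method names are assumptions of this example), the treatment of goals could be sketched in Java as follows.

public class Goal {

    final String event;              // e.g. "Rose gets to work early"
    final double desirabilityDegree; // value in (0, 1)

    Goal(String event, double desirabilityDegree) {
        this.event = event;
        this.desirabilityDegree = desirabilityDegree;
    }

    /** An event is treated as a goal when its desirability degree is high enough. */
    boolean isGoal() {
        return desirabilityDegree >= 0.7;
    }

    /** Intensity of the emotions elicited when the goal is (or is not) achieved. */
    double emotionIntensity() {
        return desirabilityDegree;
    }
}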

5.4.3 PreferenceRelation - (Fig. 5.4:C,D)

Preferences are set for physical or abstract entities. The class PreferenceRelation defines
the relation established between a Character, its Preferences and the emotional attachment
to them, expressed by the class EmotiveScale (Fig. 5.4:D).
The class Preferences groups all the instances independently of the character.
The class EmotiveScale has the subclasses: STRONGLYGOOD (SG), GOOD (G),
INDIFFERENT (IP), BAD (B), and STRONGLYBAD (SB), and each one determines the
emotion to be triggered: Love, Liking, No emotions, Disliking, Hate/Fear, respectively
(Fig. 5.4:E). The intensity of each emotion is determined by the degree of each instance
of these subclasses, where degree ∈ (0, 1). These emotions are taken from the OCC model
set based on attitudes or tastes [27].


PreferenceRelation allows us to model the idea of the OCC model, where attitudes of
the character towards aspects of objects lead to Liking or Disliking them; and, when
there is attraction to those objects, to emotions of Love or Hate. A set of logic rules helps
to discern between fear and hate.
For instance, the event rcomedy001: Rose enjoys watching romantic comedies can
be formally described as

rcomedy001_rel := PreferenceRelation
hasPreference(Rose, rcomedy001_rel)
hasPreferenceValue(rcomedy001_rel, romanticcomedy)
hasEmotiveScale(rcomedy001_rel, rcomedy001_sg)
hasSuperClass(rcomedy001_sg, StronglyGood)
hasEmotion(rcomedy001_sg, Love)
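
The mapping from EmotiveScale values to triggered emotions can also be written down compactly. The following Java enum is just a sketch of that mapping; the discrimination between Hate and Fear, handled by logic rules in our model, is only hinted at in a comment, and the constant and method names are ours.

public enum EmotiveScale {
    STRONGLY_GOOD("Love"),
    GOOD("Liking"),
    INDIFFERENT(null),        // no emotion is triggered
    BAD("Disliking"),
    STRONGLY_BAD("Hate");     // logic rules later discern between Hate and Fear

    private final String emotion;

    EmotiveScale(String emotion) {
        this.emotion = emotion;
    }

    /** Emotion triggered by a preference on this scale; its intensity is the scale degree. */
    public String triggeredEmotion() {
        return emotion;
    }
}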

5.4.4 AgentAdmiration - (Fig. 5.4:F,G)


The class AgentAdmiration (Fig. 5.4:F,G) defines the admiration of one agent for another,
based on their roles (specified by the class AgentRole). It also decides the set of produced
emotions. For example, receiving good news from a best friend awakens more positive emotions
than receiving them from someone one really dislikes.
AgentAdmiration has three subclasses that categorize the admiration: POSITIVE
(P), INDIFFERENT (IAg) or NEGATIVE (N), again with degree ∈ (0, 1).
The roles that are related to AgentAdmiration are onOther and byOther because
they depend on the appreciation of the main agent to generate the emotions corresponding
to the occurred event.

5.5 Emotion Elicitation


The OCC model provides a framework to elicit emotions according to three kinds of value
structures underlying perceptions of goodness and badness: goals, standards and attitudes.
Goals represent desirable events that can be achieved, or not, by the character.
Attitudes provide the basis for evaluating objects, which give rise to emotions of Love,
Liking, No emotions, Disliking, Hate/Fear, according to the Preference values.
Standards are considered in terms of how praiseworthy (or blameworthy) an agent is
in relation to the main agent evaluating a certain event.


Therefore, to generate emotions according to standards, three main aspects should be
taken into account: (a) what is the role of the agent to be evaluated (AgentRole), (b)
which is the level of satisfaction of the event (EventSatisfactionScale), and (c) which is the
admiration degree of the main agent of the event for the other agents (AgentAdmiration).
Emotions are represented as instances (individuals) of the class Emotion, and their at-
tributes are used to determine which ones will be elicited. These attributes are:

Type of emotion: Positive (P) and Negative (N)

Type of admiration: Positive (+) and Negative (-)

Figure 5.5 is an expansion of the red-dotted segment of Figure 5.4:F,G. It shows the
implementation of the different roles as individuals and how they are related to the Emotion
and AgentAdmiration individuals. An individual is an instance of an ontology class.

Figure 5.5: Agent Admiration at AgentRole level


onMe - (Fig. 5.5: , )

The subclass onMe indicates that the agent receives the effects of the event. According to
the OCC theory, the effects or consequences for the self can be relevant or irrelevant. If
irrelevant, elicited emotions are Joy or Sadness.
If consequences are relevant, then there are two individuals related to Hope or Fear.
So, to know which emotions to elicit, we need to follow these rules:
- If the event is hoped and it occurs (meaning it is confirmed), the emotion Satisfac-
tion arises; but if it is disconfirmed, the emotion Disappointment arises.
- If the event is feared and confirmed, the emotion Fear-Confirmed is elicited; but if
it is disconfirmed, the emotion Relief is elicited.
We have represented these consequences as the subclasses irrelevant and relevant. ir-
relevant has the individual irrelevant emotions (Fig. 5.5: ), which is linked to the indi-
viduals Sadness and Joy through the relation hasSelfEmotion (Fig. 5.5: G,E). relevant
has two instances or individuals: onMe fear and onMe hope. The first one relates the
agent with role onMe with the emotions Fear and Relief through the relation hasSelfEmo-
tion (Fig. 5.5: ,G,E). The second one relates the agent affected by the hoped event to
Satisfaction and Disappointment through the same relation hasSelfEmotion.
For example, in the case Rose has the role Rose OnMe, first we need to see if the event
in which she participates is relevant or irrelevant for her. If the event is a Goal, then it is
relevant, otherwise, it is irrelevant.

1. If relevant, then Rose OnMe should be equivalent to one of the individuals of the
class relevant according to the following cases:

If the goal is hoped, then Rose OnMe is associated with the individual
onMe hope, and thus associated with Satisfaction and Disappointment.
If the goal is feared, which means that the sentence describing the goal
has an opposite semantic meaning to the goal (e.g. a goal would be winning
the lottery, but the event is Not winning the lottery), then Rose OnMe is
associated with the individual onMe fear, and thus associated with Fear and
Relief.

The next step is to discern which emotion will be elicited. Then, we take a look at
the satisfaction level of the goal-event.


If Rose OnMe is associated with onMe hope and the event is Satisfactory, the
elicited emotion is Satisfaction; if the event is NotSatisfactory, the elicited emo-
tion is Disappointment.
If Rose OnMe is associated with onMe fear and the event is Satisfactory, the
elicited emotion is Relief; if the event is NotSatisfactory, then Fear is elicited.

2. If irrelevant, then Rose OnMe is equivalent to the individual irrelevant emotions.
Thus, we look at the level of satisfaction of the event. If the event is Satisfactory,
Joy is elicited. If the event is NotSatisfactory, the elicited emotion is Sadness.

byMe - (Fig. 5.5: )

According to the OCC theory, the action of the agent focused on the self can produce
Pride or Shame; and depending on the consequences of the event, produces Gratification
or Remorse. In our model, this is translated as:

If the event is Satisfactory, elicited emotions are positive: Pride and Gratification

If the event is NotSatisfactory, elicited emotions are negative: Shame and Remorse.

The subclass byMe represents the agents that perform the action of the event, and the
individuals are also related to Emotion through the relation hasSelfEmotion (Fig. 5.5: ).
Therefore, the individual ByMe Emotions is linked with the individuals Remorse, Shame,
Gratification and Pride.
For instance, if the individual Rose is the one performing an event, her role Rose ByMe
will be equivalent to the individual ByMe Emotions. Then, we need to see if the event in
which she participates is Satisfactory or NotSatisfactory, and the elicited emotions will be
the ones corresponding to the individuals previously enumerated.

onOther - (Fig. 5.5: , )

The OCC theory proposes the set of emotions that should be elicited regarding the conse-
quences or effects that an event has on other agents. If the event is desirable for others,
elicited emotions are Happy-for or Resentment, and if the event is undesirable for others,
elicited emotions are Gloating or Pity.


As mentioned in Subsection 5.4.4, emotions elicited due to individuals belonging to
the class onOther depend on the degree of AgentAdmiration for those individuals. The
individuals of the class onOther are: Desirable and Undesirable.
Desirable is linked with the individual AgentAdmiration onOtherDesirable of the
class AgentAdmiration through the relation hasAdmiration. In turn, AgentAdmira-
tion onOtherDesirable is linked with the individuals Resentment and Joy through the relation
hasOtherEmotion. (Fig. 5.5: , G).
Undesirable is linked with the individual AgentAdmiration onOtherUndesirable through
the relation hasAdmiration. In turn, AgentAdmiration onOtherUndesirable is linked with
the individuals Pity and Gloating through the relation hasOtherEmotion (Fig. 5.5: , G).
Let us take as example Roses role Rose onOther:

Rose onOther is equivalent to Desirable if the consequence of the event is desirable for
her, which means that the event is Satisfactory. To discern between elicited emotions,
we consider the AgentAdmiration degree:

1. If the main agent has a Positive admiration for Rose, the elicited emotion in
the main agent is positive, Joy
2. If the main agent has a Negative admiration for Rose, the elicited emotion in
the main agent is negative, Resentment

Rose onOther is equivalent to Undesirable if the consequence of the event is undesirable
for her, which means that the event is NotSatisfactory. To discern between elicited
emotions, we consider the AgentAdmiration degree

1. If the main agent has a Positive admiration for Rose, the elicited emotion in the
main agent is negative: Pity (the main agent feels sorry for the non-satisfactory
event on Rose)
2. If the main agent has a Negative admiration for Rose, the elicited emotion in
the main agent is positive: Gloating

byOther - (Fig. 5.5: )

The OCC theory states that the action of another agent can produce Admiration or
Reproach, and depending on the consequences of the event, it can produce Gratitude or
Anger towards that other agent.


In our model, the individual ByOther Emotions of the class byOther is linked to the
individual AgentAdmiration byOther of the class AgentAdmiration through the relation
hasAdmiration. In turn AgentAdmiration byOther is linked with the individuals Admira-
tion, Reproach, Anger and Gratitude (Fig. 5.5: , G).
Taking as example Roses role Rose byOther, which is equivalent to AgentAdmira-
tion byOther:

If the event is Satisfactory and the main agent has a Positive admiration for Rose,
the elicited emotion is Admiration

If the event is Satisfactory and the main agent has a Negative admiration for Rose,
the elicited emotion is Gratitude

If the event is NotSatisfactory and the main agent has a Positive admiration for
Rose, the elicited emotion is Reproach

If the event is NotSatisfactory and the main agent has a Negative admiration for
Rose, the elicited emotion is Anger

Figure 5.6 shows a diagram with all the elicited emotions according to the type of event
(Positive (P) or Negative (N)) and the type of agent admiration (Positive (+) or Negative
(-)), which summarizes what was explained above.

Figure 5.6: Categorization of Emotions.
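
The categorization of Figure 5.6 can be condensed into a single decision procedure. The Java sketch below is illustrative only: the enum and method names are assumptions, and for the role OnMe the flags relevant and feared encode whether the event is a goal and whether the goal is stated in a feared (semantically opposite) form, as explained in the previous subsections.

import java.util.ArrayList;
import java.util.List;

/** Illustrative only: names and structure are assumptions, not the thesis implementation. */
public class EmotionCategorizer {

    enum Role { ON_ME, BY_ME, ON_OTHER, BY_OTHER }
    enum Admiration { POSITIVE, NEGATIVE }

    /**
     * Emotions elicited in the appraising character, following Figure 5.6
     * (event satisfaction x agent role x agent admiration).
     */
    static List<String> elicit(Role role, boolean satisfactory, Admiration admiration,
                               boolean relevant, boolean feared) {
        List<String> emotions = new ArrayList<String>();
        switch (role) {
            case ON_ME:
                if (!relevant) {                       // event is not a goal
                    emotions.add(satisfactory ? "Joy" : "Sadness");
                } else if (feared) {                   // goal stated in a feared form
                    emotions.add(satisfactory ? "Relief" : "Fear");
                } else {                               // hoped goal
                    emotions.add(satisfactory ? "Satisfaction" : "Disappointment");
                }
                break;
            case BY_ME:
                if (satisfactory) { emotions.add("Pride"); emotions.add("Gratification"); }
                else              { emotions.add("Shame"); emotions.add("Remorse"); }
                break;
            case ON_OTHER:
                if (satisfactory) {
                    emotions.add(admiration == Admiration.POSITIVE ? "Joy" : "Resentment");
                } else {
                    emotions.add(admiration == Admiration.POSITIVE ? "Pity" : "Gloating");
                }
                break;
            case BY_OTHER:
                if (satisfactory) {
                    emotions.add(admiration == Admiration.POSITIVE ? "Admiration" : "Gratitude");
                } else {
                    emotions.add(admiration == Admiration.POSITIVE ? "Reproach" : "Anger");
                }
                break;
        }
        return emotions;
    }
}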


5.6 Guideline to use the ontologies

As seen in the previous sections, the Event ontology provides all the elements to represent
an event, while the PersonalityEmotion ontology provides the relations between the event
and the character, as well as all elements related to the character's internal aspects (goals,
preferences, admiration for other agents, personality) used in event appraisal.
The following is a guideline for representing a situation in a certain scenario:

1. Situation Identification. First of all we need to describe what the characters will
go through as if it were a story. In this way we can start identifying all the elements
of the ontologies. For example,

Rose started her day at 6 a.m. this morning. While she was having breakfast, she
heard about a train strike on the radio. This was very inconvenient, because she had an
early appointment with Charlie at the office and with the strike she won't get there.

2. Event Identification. From the whole situation to be represented, we extract each
relevant event that provokes a change in the character or in the environment, so we
can define the elements of the Event Ontology (Fig. 5.3).

Each event is seen as a sentence that includes a subject (agent), a verb (action) and
a predicate (ambient conditions, physical and abstract things - objects, other agents,
location). Therefore, once we define the instances of each class in the ontology, we
assign a level of Satisfaction to each event regarding the agent that appraises it.

For instance, from the situation Rose started her day at 6 a.m. this morning, we
can get the Event instance:

event001 := Rose wakes up at 6 a.m.

If it is difficult for Rose to wake up early, then the instance of EventSatisfactionScale is
event001_satScale := NotSatisfactory
satisfactionDegree(event001_satScale, 0.4)

3. Character Preferences and Goals definition. Once all elements that form the
event are identified, we assign values to them using the PersonalityEmotion Ontology
(Fig. 5.4:B,C). The steps to follow are:


(a) Define the instances of PreferenceRelation using the PhysicalEntities and
AbstractEntities identified in step 2, and assigning the corresponding Emo-
tiveScales. For example:

breakfast := Preferences
breakfast001_sg := StronglyGood
breakfast001_rel := PreferenceRelation
hasPreferences(Rose, breakfast001_rel)
hasPreferenceValue(breakfast001_rel, breakfast)
hasEmotiveScale(breakfast001_rel, breakfast001_sg)
(b) Define the instances of Goal by assigning a Desirability degree to each potential
event-goal. For example, Rose has the goal:
event002 := Rose gets to work early
but according to the defined situation, we find the following event:
event003 := Rose is not going to get to work early
As can be seen, event003 has an opposite semantic meaning to her goal
(event002). Therefore, desirabilityDegree(event003) = 0.2.

4. Agent Admiration for other agents. To define the admiration that the main
character (the one from whose point of view the event is evaluated) feels for other
agents, we need to define the instances of AgentAdmiration in the story. In the former
example the other agent is Charlie, so the event can be defined as:
event004 := Rose attends an appointment with Charlie
First of all, we need to define the roles of Rose and Charlie to proceed with the
identification of the corresponding instances. As this event might be ambiguous, we
have two options:
(1) Rose's role is Rose_byMe and Charlie's role is Charlie_onOther
(2) Rose's role is Rose_onMe and Charlie's role is Charlie_byOther
In case (1), we would need to define the event as desirable or undesirable for Charlie.
As this event is neither one nor the other for Charlie, then we try with case (2).
Thus, following Fig. 5.5:

(a) Associate Charlie byOther with the instance ByOther Emotions.
(b) Link ByOther Emotions with the instance AgentAdmiration byOther


(c) Define an instance of AgentAdmiration that can be used to discriminate among
the set of emotions associated to AgentAdmiration byOther. In this case, Rose
has Negative admiration for Charlie; therefore we define
negative001_aa := Negative
hasSuperClass(negative001_aa, AgentAdmiration)
hasAdmiration(Charlie_byOther, negative001_aa)
admirationDegree(negative001_aa, 0.9)

5. Personality definition. Using the FFM the personality traits for each character
are set. For instance, we can define Rose's personality as:
extraversion_rose := Personality
personalityDegree(extraversion_rose, 0.2)
agreeableness_rose := Personality
personalityDegree(agreeableness_rose, 0.3)
conscientiousness_rose := Personality
personalityDegree(conscientiousness_rose, 0.9)
openness_rose := Personality
personalityDegree(openness_rose, 0.1)
neuroticism_rose := Personality
personalityDegree(neuroticism_rose, 0.95)

6. Direct Emotion elicitation. As a result of these two ontologies, a set of positive
and negative emotions is produced. The intensities of these emotions depend on the
preferences and admiration level for other agents (Figure 5.6).
For instance, the preference breakfast001_sg := StronglyGood elicits the emotion
Love.
On the other hand, the instance AgentAdmiration byOther is linked to the emotions
Admiration, Reproach, Anger and Gratitude. Also, the instance of AgentAdmiration
that relates both agents, Rose and Charlie, is negative001_aa, and the event is Not-
Satisfactory; then, following the previous rules, the elicited emotion is Anger.

7. Rules definition. By defining a set of rules it is possible to handle specific events
and personalities to elicit different emotions in similar situations. Rules are of the
form IF-THEN-ELSE. For example, a rule for the event Rose wakes up very early
might be:


IF Action = wake up ∧ TemporalEntity = very-early
THEN Emotion = Hate

Appendix A shows a list of the rules implemented in this thesis.

8. Contained Events (Optional). In case there are contained events (given by the
relationship contains), the appraisal is done in the main event following the steps
described above, but taking into consideration the following issues:

(a) The EventSatisfactionScale is defined in relation to the contained event, and
from the main characters point of view (the leading agent in the main event).
(b) The AgentAdmiration is defined in relation to the agents in the contained
event. That is, how the main character feels about the agents in the contained
event.
(c) Preferences are not evaluated in the contained event.
(d) Personality is defined for all agents.
(e) Emotions are elicited according to Step 6
(f) Rules are created for refinement of emotions according to Step 7

5.7 Implementation
To implement the proposed Semantic Model, we divided the tasks into two main parts:

1. Implementation of the ontologies: using Protege version 4.0

2. Implementation of the interface: using JDK 1.6.0_12 and JENA libraries for ontology
support.

5.7.1 Ontology implementation

In order to implement our ontologies, we needed to consider various aspects like the lan-
guage to use, how to create inference rules, or which other ontologies could be useful to
complete the knowledge we wanted to represent.
The answers to these issues are explained in detail in the following subsections.


OWL (Ontology Web Language)

The W3C's Web Ontology Language (OWL) [68] is considered by many as the prospective
standard for creating ontologies on the Semantic Web.
OWL features two core elements: classes and properties. A class represents a set of
individuals, all of which also belong to any superclass of the class. A property specifies
the relation between an instance of the domain class to an instance of the target class [25].
Besides that, OWL provides individuals and data values that are stored as Semantic Web
documents.
OWL was designed to be used by applications that need to process the content of
information instead of just showing it to the user. OWL provides a better machine inter-
pretability of Web content than that offered by XML, RDF, and RDF Schema (RDF-S)
because it provides the formal semantics plus additional vocabulary [100].
To model the ontologies, we use OWL because it is the standard language and because
of its rich expressiveness and solid foundation for concept modeling and knowledge-based
inference. The ontology editor we have worked with is Protege version 4.0, a free and open
source ontology editor and knowledge-base framework [57]. Figures 5.7, 5.8, 5.9, 5.10, 5.11
and 5.12 show the ontology diagrams for the different concepts of the ontologies defined
using OWL and Protege.

Figure 5.7: Event defined in OWL


Figure 5.8: Spatial Thing concept defined in OWL

Figure 5.9: Action defined in OWL


Figure 5.10: Agent Preferences and Goals defined in OWL


Figure 5.11: Agent Admiration defined in OWL

Figure 5.12: Event Relation defined in OWL


External Ontologies

Many times the ontologies proposed for a certain domain are not enough, and one can find
other ontologies already defined by other researchers that help to describe all the required
knowledge.
In our case, we have used a Java library for processing the WordNet [107] ontology,
named RiTa.WordNet [75]. We use it to define verbs as instances of the class Action, nouns
as instances of the class Preferences, and adverbs to give semantic meaning to the instances
of Event.
RiTa.WordNet is the API that provides simple access to the WordNet ontology. It is
an easy-to-use natural language library that provides simple tools for experimenting with
generative literature.
The other external ontology we have used is the OWL-Time ontology, which is an
ontology of temporal concepts. It provides a vocabulary for expressing facts about topo-
logical relations among instants and intervals, together with information about durations,
and about datetime information [73].
We have used the OWL-Time ontology to specify the instances of TemporalEntity,
defined in our Event Ontology. Thus, we could represent the temporal characteristics of
an event: when it started, how long it lasted (minutes, hours), and so on.
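
As an illustration of how such a temporal instance could be created programmatically, the following sketch uses the JENA API (introduced later in this chapter) together with the OWL-Time vocabulary. The namespace of our own ontology, the instance names and the in-memory setup are assumptions made for the example.

import com.hp.hpl.jena.ontology.Individual;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;

/** Illustrative only: creates the "morning" interval of Section 5.3.4 with OWL-Time terms. */
public class MorningInterval {
    static final String TIME = "http://www.w3.org/2006/time#";
    static final String NS   = "http://example.org/event#"; // hypothetical namespace

    public static void main(String[] args) {
        OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);

        Individual morning = m.createIndividual(NS + "morning",
                m.createClass(TIME + "Interval"));

        // The interval begins at 06:00 and ends at 12:00.
        morning.addProperty(m.createProperty(TIME + "hasBeginning"),
                m.createIndividual(NS + "time0600", m.createClass(TIME + "Instant")));
        morning.addProperty(m.createProperty(TIME + "hasEnd"),
                m.createIndividual(NS + "time1200", m.createClass(TIME + "Instant")));

        m.write(System.out, "N-TRIPLE");
    }
}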

Inference Rules

Inference rules are useful to define particular scenarios, which can be obtained from data
previously defined. The following are the scenarios where rules are used:

In our model, most of the rules allow the elicitation of the emotion, or set of emotions,
depending on the level of satisfaction of the event (EventSatisfactionScale) and role of
the character (AgentRole). For instance, if we have an agent with the role byMe (e.g.
Rose ByMe), it can be associated with the already defined individual ByMe emotions,
and, through the rule, the emotions linked to this individual will be triggered.

Rules are also used when the level of admiration (AgentAdmiration) of one agent for
another needs to be taken into account. In the same way as with the agent role,
an individual of a subclass of AgentAdmiration (e.g. negative aa, which indicates a
Negative admiration) can be associated through inference rules with a certain agent
role (e.g. Rose ByMe), and therefore, only the corresponding negative emotions will
be elicited.

Finally, rules can be used to adjust the personality trait levels of the character,
according to the rules defined by Poznanski and Thagard [126]. They imposed limits
on how much personality can change due to the environment. Once this limit is
reached, the environment does not change the personality anymore. They considered
five types of situations: friendly, hostile, explore, persist and stressful. However, as
we have just two relevant types of events, or situations, we consider the rules
only for friendly (SATISFACTORY) and hostile/stressful (NOTSATISFACTORY) ones.
Table 5.1 presents the rules proposed by them, as adapted to our work; a plain-Java
sketch of these adjustments is given at the end of this subsection.

Situation                 Effects on personality

Satisfactory              more Extroverted AND more Agreeable
(Friendly)                less Introverted AND less Disagreeable
Not Satisfactory          IF Disagreeable THEN more Disagreeable
(Hostile/Stressful)       IF Introverted THEN more Introverted
                          ELSE randomly more Disagreeable OR more Introverted
                          ALSO less Extroverted AND less Agreeable
                          IF Neurotic THEN more Neurotic

Table 5.1: Personality change rules

Appendix A shows all the rules defined in Java using Jena libraries, which are triggered
at the beginning of the application execution.
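
Purely as an illustration of the adjustments in Table 5.1, the following plain-Java sketch applies them directly (the thesis itself encodes them as JENA rules, listed in Appendix A). The step size DELTA, the 0.5 midpoint used to decide whether a trait counts as introverted, disagreeable or neurotic, and the clamping to [0, 1] are assumptions of this example; the overall limits on personality change imposed by Poznanski and Thagard are not modeled here.

public class PersonalityAdjuster {

    static final double DELTA = 0.05;                   // assumed step size per event

    double extraversion, agreeableness, neuroticism;    // traits assumed in [0, 1]

    /** Friendly (Satisfactory) situation: more extroverted and more agreeable. */
    void onSatisfactoryEvent() {
        extraversion  = clamp(extraversion + DELTA);
        agreeableness = clamp(agreeableness + DELTA);
    }

    /** Hostile/stressful (Not Satisfactory) situation, following Table 5.1. */
    void onNotSatisfactoryEvent() {
        boolean disagreeable = agreeableness < 0.5;
        boolean introverted  = extraversion  < 0.5;
        if (disagreeable) agreeableness = clamp(agreeableness - DELTA);
        if (introverted)  extraversion  = clamp(extraversion  - DELTA);
        if (!disagreeable && !introverted) {            // ELSE: pick one at random
            if (Math.random() < 0.5) agreeableness = clamp(agreeableness - DELTA);
            else                     extraversion  = clamp(extraversion  - DELTA);
        }
        // ALSO less extroverted and less agreeable.
        extraversion  = clamp(extraversion  - DELTA);
        agreeableness = clamp(agreeableness - DELTA);
        // IF neurotic THEN more neurotic.
        if (neuroticism > 0.5) neuroticism = clamp(neuroticism + DELTA);
    }

    static double clamp(double v) {
        return Math.max(0.0, Math.min(1.0, v));
    }
}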

5.7.2 Interface implementation


The application prototype we developed allowed us to simulate different situations which
were automatically appraised. To achieve this, first we generated Java classes from the on-
tologies using Jastor, and then we manipulated them through JENA and a Java application
interface.

Jastor

Jastor is an open source Java code generator that emits Java Beans from Web Ontologies
(OWL), and generates Java interfaces, implementations, factories, and listeners based on
the properties and class hierarchies in the Web Ontologies [143].


The advantage of using Jastor is that it converts all the classes defined in the ontologies
to their equivalent Java classes, including individuals already defined in Protege. If one
needs to add more individuals, it just requires modifying the .java files. Nevertheless, to
rewrite the .owl files, it is necessary to do it through the data stored in a database, which is
done using JENA. Figure 5.13 shows an extract of the implementation of the class ByOther,
which was automatically generated by Jastor.

Figure 5.13: Implementation of the property hasOtherEmotion using Jastor

Once all the classes corresponding to the ontology entities were generated, these were
used and modified by the application through functions of the JENA library.

JENA (JAVA libraries)

With Java JDK 1.6.0_12 and the JENA library [72] we were able to perform operations on the
ontologies, as well as inference, which was done using JENA rules. JENA is a Java framework
for building Semantic Web applications. It provides a programmatic environment for RDF,
RDFS and OWL, SPARQL and includes a rule-based inference engine (JENA rules).
One of the advantages of JENA is that being developed in Java, it is applicable to
various environments. In addition, it is open source and strongly backed up by solid
documentation. Also, regardless of the schema and the data models used, JENA can
simultaneously work with multiple ontologies from different sources, and it can be bound
to SQL databases from different vendors.
Another powerful feature of the JENA framework is the inference API. It contains
several reasoner types, which efficiently conclude new relations in the knowledge graph.
Among the reasoners there are: RDF(S), OWL, Transitive and Generic reasoners. JENA
also works with third-party reasoners such as Pellet.
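
To make the use of JENA more concrete, here is a minimal sketch of how the ontologies and one inference rule could be loaded and run. The file names, the namespace and the property names inside the rule are hypothetical; only the JENA calls themselves (ModelFactory, GenericRuleReasoner, Rule.parseRules, PrintUtil.registerPrefix) are standard API.

import java.util.List;

import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;
import com.hp.hpl.jena.util.PrintUtil;

public class ContextReasoner {

    // Hypothetical namespace of the Event and PersonalityEmotion ontologies.
    static final String NS = "http://example.org/affect#";

    public static void main(String[] args) {
        // Load both ontologies into a single in-memory OWL model (paths are examples).
        OntModel base = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        base.read("file:ontologies/event.owl");
        base.read("file:ontologies/personalityEmotion.owl");

        // Register the prefix used inside the rule body.
        PrintUtil.registerPrefix("aff", NS);

        // One illustrative rule: a NotSatisfactory event whose executing agent (role
        // byOther) is negatively admired elicits Anger in the appraising character.
        String ruleSrc =
              "[angerRule: (?ev aff:hasSatisfaction aff:NotSatisfactory) "
            + "            (?ev aff:who ?role) "
            + "            (?role aff:hasAdmiration ?adm) "
            + "            (?adm rdf:type aff:Negative) "
            + "         -> (?role aff:hasOtherEmotion aff:Anger)]";
        List<Rule> rules = Rule.parseRules(ruleSrc);

        // Attach a rule-based reasoner and query the inferred model.
        GenericRuleReasoner reasoner = new GenericRuleReasoner(rules);
        InfModel inf = ModelFactory.createInfModel(reasoner, base);
        System.out.println("Statements after inference: " + inf.listStatements().toList().size());
    }
}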


Interface

To create the GUI that allowed us to manipulate the ontology classes, we used JFC/Swing.
The idea for creating an interface like this is that any user can create characters and
define for each of them the events, goals, preferences and admiration for other agents,
which will define their context. Then, all these data would be stored in a database (using the
JENA API), so they can be manipulated and reused to create new scenarios, or similar ones
with different emotional outputs.
In this way, just by assigning values for the concepts of the ontologies, the application
is capable of inferring and creating the relations between those concepts, giving as a result
a set of emotions which are felt by the character. As a final goal, a database of contextual
situations could be constructed, containing any kind of related data to be reused by the
defined ontologies. Figure 5.14 shows two windows of the GUI which correspond to the
main window and the physical entity window.

Figure 5.14: Interface of the Context Representation Application

5.8 Use Case


The semantic model we have proposed can be applied in different applications where having
a story or allowing interaction between the character and its environment is required. To
test it, we have used movie scenes, so that we have background information to validate against.


5.8.1 Movie Scenario


To test our model, following the guidelines of the previous section, we have chosen a scene
extracted from Robert Aldrich's film What Ever Happened to Baby Jane? [2]. Using
this scenario, we intended to show that with our computational model the evaluation of the
context is done automatically and the elicited emotions are the same as the ones expressed
by the characters in the selected scene. In this way we can guarantee coherence between
the context, the events and the emotional output.
What Ever Happened to Baby Jane? is a psychological thriller with some black
comedy. The specific scene we are taking as example of how to produce dynamic affective
states is the following (1):
Jane enters Blanche's bedroom with a closed food tray, and informs Blanche that the
maid has the day off. Blanche realizes that she is alone with Jane and she also knows that
there are rats in the cellar. At this moment of the scene, Blanche is very hungry, and when
she sees the tray she thinks of the rats in the basement. But then, she wants to believe that
Jane is not capable of putting a rat in the tray. Jane gives the tray to Blanche and leaves
the bedroom. When Blanche opens the tray, to her horror, she finds a rat lying there.
From this scene, six events can be elicited (2):
(1) Blanche is hungry.
(2) Jane enters Blanches bedroom with a closed tray.
(3) Blanche is alone with Jane in the house.
(4) Blanche does not believe that Jane is capable of putting a rat in the food.
(5) Blanche opens the tray and sees the rat.
(6) Jane hears Blanche opening the tray.
These events are a simplified version of the story, but they can help us to explain how
we added emotional content to them. Through the wizard of the computational model
(previously explained) the user can define the context for Jane and Blanche.
We will carefully examine the event (4) Blanche does not believe that Jane is capable
of putting a rat in the food.
In this case the main event is: Blanche does not believe something. That something
is considered as a CONTAINED event, which is: Jane is capable of putting a rat in the
food. The action is put (putting a rat in the food). Then we need to apply step 8 of the guideline.

Event Satisfaction: The contained event is evaluated from the perspective of the
character that performs the main event: Blanche. She considers this event as NOT
SATISFACTORY = 0.8, and it is not a goal.

Admiration Degree: If Blanche is the performer of the main event, then her role is
ByMe, and the role of Jane, who performs the contained event, is ByOther. The admiration
of Blanche for Jane is set to NEGATIVE = 0.6.

Personality: Blanche is a little bit extroverted (0.4), extremely agreeable (0.99), mod-
erately conscientious (0.8), not neurotic (0.2), and extremely open (0.99). On the
other hand, Jane is extremely extroverted (0.99), disagreeable (0.2), not conscientious
(0.2), extremely neurotic (0.99), and somewhat open (0.6).

Emotion Elicitation: Using Figure 5.6, we first elicit emotions using the contained-
character's role (Jane, ByOther) and the contained-event satisfaction (NOT SATIS-
FACTORY = 0.8), from the main-character's perspective (Blanche). The elicited emotion
is anger = 0.6.
Now we need to evaluate the event using the main-character's role and the contained-
event satisfaction. That is, the role of Blanche (ByMe) with event satisfaction
NOT SATISFACTORY = 0.8. The elicited emotions are shame = 0.8 and remorse = 0.8.
To consider the main event satisfaction, we need to use a logic rule to decide it.
The rule is as follows:
IF contained-event.SatisfactionScale = NOT SATISFACTORY
AND action = NOT believe
THEN main-event.SatisfactionScale = SATISFACTORY AND byMe.emotions =
pride = 0.8 OR gratification = 0.8
This means that, as Blanche does not think that a Not Satisfactory event will occur,
it becomes Satisfactory. As we can see, the elicited emotions for role ByMe are
different; therefore, we select the ones obtained from the rule.
The final set of emotions is: Anger = 0.6, Pride = 0.64, Gratification = 0.64. From
this set we choose the ones with the greatest values, as illustrated in the short sketch below.
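
As a quick check, the hypothetical EmotionCategorizer sketched in Section 5.5 reproduces the two partial results of this use case; the subsequent rule-based replacement of Shame/Remorse by Pride/Gratification is not covered by that sketch.

import java.util.List;

/** Usage example of the illustrative EmotionCategorizer sketch from Section 5.5. */
public class BabyJaneExample {
    public static void main(String[] args) {
        // Contained event, appraised by Blanche; Jane executes it (role byOther),
        // admiration NEGATIVE, event NOT SATISFACTORY  ->  [Anger]
        List<String> fromJanesRole = EmotionCategorizer.elicit(
                EmotionCategorizer.Role.BY_OTHER, false,
                EmotionCategorizer.Admiration.NEGATIVE, false, false);

        // Blanche's own role (byMe) with the same satisfaction  ->  [Shame, Remorse],
        // later replaced by Pride/Gratification through the "NOT believe" rule.
        List<String> fromBlanchesRole = EmotionCategorizer.elicit(
                EmotionCategorizer.Role.BY_ME, false, null, false, false);

        System.out.println(fromJanesRole + " " + fromBlanchesRole);
    }
}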

5.9 Summary
The novelty of our approach is the representation of context by means of a semantic model,
not only as events in the world, but also as the internal characteristics of the character which,
when related to the events, result in believable emotional responses. This chapter presented
an overview of previous work on context representation and how it has been related to the
creation of virtual characters, as well as the description, definition and implementation of
the ontologies that form part of the semantic model. Finally, a use case demonstrated not
only how to use these ontologies, but also the results obtained after the simulation of a set
of events.

Chapter 6

Affective Model for Mood Generation

Personality is to a man what perfume is to a flower.
Charles M. Schwab

It is known that behavior, social interaction and communication, action planning and
response to the environment are guided and regulated through emotions [17]. Nevertheless,
other traits like mood and personality also have a strong influence on emotions and
behaviors.
Hence, the achievement of more complex and richer affective characters, or agents, will
depend on the interrelation between emotions, mood and personality.
In the previous chapter we went through the process of context appraisal to obtain a set
of emotions that are experienced by the character in a given situation. In this chapter,
we explain how to enrich those emotions with mood and personality. As a result, we obtain
a set of pleasure, arousal and dominance values corresponding to the resulting mood of
the character.
This chapter is organized as follows. Section 6.1 introduces the affective layer used
for regulation and generation of moods, based on the PAD model proposed by Albert
Mehrabian [105]. Section 6.2.1 provides a detailed description of the affective model used
to process the emotions received from the semantic layer (Chapter 5) in order to generate
the values for different moods. Finally, Section 6.3 offers a summary of the chapter.


6.1 Affective Model


As explained in Chapter 4, the affective layer is the one that takes as input the emotions
produced in the semantic layer, and processes them to generate as output a temporal mood
in the character. Figure 6.1 shows an extract of the framework that corresponds to the
affective layer.

Figure 6.1: Affect Representation Schema

In our work emotions are generated based on the OCC model and personality traits
are chosen from the Five Factor Model (FFM). To relate emotions, moods and personality
in the same space, we use the Pleasure-Arousal-Dominance space (PAD) because it allows
us to obtain mood values in terms of pleasure, arousal and dominance dimensions.
The PAD space is considered a dimensional model, and the motivation to choose it for
mood representation is derived from the ideas explained by Cochrane [29]. One attraction
of a dimensional approach is that it respects the fundamental observation that emotions can
vary very smoothly and that the information they provide is typically dynamic in content.
The other idea relates to language: extracting dimensions from affective language allows one
to transcend its constraints to some extent and break down affective terms into descriptive
terms that convey the same meaning for different people, both within and across cultures.

6.1.1 PAD Space: Pleasure, Arousal and Dominance


The PAD model [105] is a framework for definition and measurement of different emo-
tional states, emotional traits, and personality traits in terms of three nearly orthogonal
dimensions: Pleasure, Arousal, and Dominance.


The notations +P and -P for pleasure and displeasure, +A and -A for arousal and
nonarousal, and +D and -D for dominance and submissiveness, respectively, are used
throughout this work.
Pleasure-displeasure distinguishes positive affective states from negative ones. Arousal-
nonarousal is defined in terms of a combination of mental alertness and physical activity.
Dominance-submissiveness is defined in terms of control versus lack of control over events,
one's surroundings, or other people. The resulting octants correspond to mood categories
defined by the various combinations of high versus low pleasure, arousal, and dominance [104].
These are indicated in Table 6.1.
Exuberant (+P +A +D) Bored (-P -A -D)
Docile (+P -A -D) Hostile (-P +A +D)
Dependent (+P +A -D) Disdainful (-P -A +D)
Relaxed (+P -A +D) Anxious (-P +A -D)

Table 6.1: Octants in the PAD space: Moods

6.1.2 Reasons to use PAD in a Computational Model of Affect


Some of the advantages of the PAD model are mentioned in the following:

Context representation. Some examples of the context we want to represent, specifically
how emotional states and some other psychological characteristics predispose a person
toward certain sets of behavior, were already exposed by Mehrabian. For instance:

A more pleasant emotional state predisposes a person to act in a friendly and sociable
manner with others; conversely, an unpleasant emotional state (e.g., a headache) tends to
heighten the chances that the individual will be unfriendly, inconsiderate, or even rude
with others.
A more pleasant and, for most tasks, a less aroused emotional state enhances one's desire
to work (so, if your office is uncomfortably cold or hot, you are less likely to want to do
the jobs that await your attention).

Personality models. The possibility of mapping personality, emotion and emotional state
parameters into the same space constitutes one of the main advantages for choosing this
model. For example, one can map the Big Five personality model, or the AB5C personality
model, into the PAD space and obtain the same set of temperaments, or moods [104].

Widely used. In the last few years this model has been one of the most used to represent
personality, emotions and mood. According to Garvin [62], referencing other authors,
the components within the PAD scale have been shown to have good reliability and
nomological validity through the many studies that have utilized them. Actually,
the PAD space has not only been used by researchers in Affective Computing like
Gebhard [63], Becker-Asano [12], Kessler et al. [82], Kasap et al. [80], Ben Moussa and
Thalmann [111], Courgeon et al. [30] among others, but also by researchers in areas
like computer vision to recognize facial expressions [24], in design of theme parks [108],
for integration of emotional strategies in spoken dialogues [19], or for marketing and
communications, product development and personnel management [103].

Mathematical description. When Mehrabian proposed the PAD space, he designed a set
of equations based on his numerous observations, which allow the mapping of personality
traits and emotions into the three dimensions of Pleasure, Arousal and Dominance. Thus,
having a dimensional model instead of a discrete model allows more variability in the
results and a wider range of mood responses. The other aspect to note about PAD is its
dynamic nature, which is seen in the variations that can occur in emotions and personality
when the P, A, and D values are changed over time.

Ease of assessment. To evaluate the affective output of the PAD space there is a non-verbal,
visually oriented questionnaire based on the three dimensions of pleasure, arousal and
dominance: the Self-Assessment Manikin (SAM) [22]. SAM depicts each PAD dimension
with a graphic character arranged on a linear scale, and subjects rate each input according
to this character, yielding fast evaluations and very accurate results.

For the aforementioned reasons, the PAD model provides us with the theoretical framework
to create more believable characters capable of reacting to different situations. The following
section explains how the emotions elicited in the Semantic Layer are mapped into the PAD
space and how, through the Affective Layer, moods are generated and expressed in terms
of pleasure, arousal and dominance.


6.2 Affective Layer


For the elicitation of moods we implement a module for the computation of affect, based on
the ALMA model [63]. The advantage of this model is that it is based on the PAD space
and it provides a mathematical representation of the influence that moods, emotions and
personality have on each other.
Figure 6.2 shows schematically each of the elements of the affective model that will
be further explained. As can be noticed, the CONTEXT supplies the output of the
semantic model, which in this case is the set of elicited emotions.

Figure 6.2: Schema of the affective model

6.2.1 Representation of Mood, Personality and Emotions


Mood, Default Mood and Emotion

A mood M is defined as a point in one of the eight octants of the PAD space. Therefore,
it is defined by the values of pleasure, arousal, and dominance as M = (P, A, D), where
P ∈ [-1, 1], A ∈ [-1, 1], D ∈ [-1, 1]. Figure 6.3 shows the mood in the space.
The mood intensity m is defined by the distance of M to the zero point of the PAD mood
space, and it is computed as the norm of the vector OM, that is, √(P² + A² + D²).
Because this three-dimensional space has maximum absolute values of 1.0, the longest
distance in a mood octant is √3.
As mentioned by Gebhard in his research [64], people have a better understanding of
descriptions when quantified words are used instead of numbers. Hence, we applied the
same categorization for mood intensities:

93
CHAPTER 6. AFFECTIVE MODEL FOR MOOD GENERATION

Figure 6.3: Mood in the PAD space


slightly, if m ∈ [0.0, (1/3)·√3]

moderate, if m ∈ ((1/3)·√3, (2/3)·√3]

highly, if m ∈ ((2/3)·√3, √3]

Personality can also be mapped into the PAD space as the ground, or default mood,
G, which is the mood start value. It is considered the normal state of the character
according to its personality. This concept is based on the relationship established by
Mehrabian between the Five Factor Model and the PAD traits, expressed in Eq. 6.1.

P = (0.21 · E) + (0.59 · A) + (0.19 · N)
A = (0.15 · O) + (0.30 · A) − (0.57 · N)          (6.1)
D = (0.25 · O) + (0.17 · C) + (0.60 · E) − (0.32 · A)

For example, Eq. 6.2 computes the default mood of a person with a very extroverted
and very agreeable personality (personality = (0.1, 0.2, 0.9, 0.9, 0.1)). Then,
G = (0.739, 0.228, 0.311), which translates into a moderate Exuberant mood.

P = (0.21 · 0.9) + (0.59 · 0.9) + (0.19 · 0.1) = 0.739
A = (0.15 · 0.1) + (0.30 · 0.9) − (0.57 · 0.1) = 0.228          (6.2)
D = (0.25 · 0.1) + (0.17 · 0.2) + (0.60 · 0.9) − (0.32 · 0.9) = 0.311
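A minimal Python sketch of this mapping is shown below. It applies Eq. 6.1 to FFM traits in [0, 1] and labels the resulting default mood with the octants of Table 6.1 and the intensity categories of Section 6.1.1; the helper names are ours, not part of the thesis software.

import math

# Sketch of Eq. 6.1: mapping FFM traits (O, C, E, A, N, each in [0, 1]) to the
# default mood G = (P, A, D), plus the octant labels of Table 6.1 and the
# intensity categories of Section 6.1.1.

def default_mood(O, C, E, A, N):
    P = 0.21 * E + 0.59 * A + 0.19 * N
    Ar = 0.15 * O + 0.30 * A - 0.57 * N
    D = 0.25 * O + 0.17 * C + 0.60 * E - 0.32 * A
    return (P, Ar, D)

OCTANTS = {  # Table 6.1
    (1, 1, 1): "Exuberant",  (-1, -1, -1): "Bored",
    (1, -1, -1): "Docile",   (-1, 1, 1): "Hostile",
    (1, 1, -1): "Dependent", (-1, -1, 1): "Disdainful",
    (1, -1, 1): "Relaxed",   (-1, 1, -1): "Anxious",
}

def describe(mood):
    P, A, D = mood
    octant = OCTANTS[tuple(1 if v >= 0 else -1 for v in (P, A, D))]
    m = math.sqrt(P * P + A * A + D * D)        # mood intensity
    if m <= math.sqrt(3) / 3:
        label = "slightly"
    elif m <= 2 * math.sqrt(3) / 3:
        label = "moderate"
    else:
        label = "highly"
    return label, octant

G = default_mood(O=0.1, C=0.2, E=0.9, A=0.9, N=0.1)
print(G, describe(G))   # approx (0.739, 0.228, 0.311) -> ('moderate', 'Exuberant')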


Finally, the emotions felt by the character at a certain moment are also mapped into the
PAD space. These emotions are the ones obtained from the semantic model and they influence
the change of mood in the character.
An emotion E is located in the space according to its values of pleasure, arousal and
dominance, which were obtained empirically by Mehrabian and completed by Gebhard [63,
65]. Table 6.2 shows these values, which indicate how much of pleasure, arousal and
dominance exists in each emotion. The intensity e of an emotion is given by the values
obtained in the semantic model. Figure 6.4 shows how the default mood and emotions are
represented.

Emotion P A D Mood
Admiration 0.5 0.3 -0.2 +P+A-D Dependent
Anger -0.51 0.59 0.25 -P+A+D Hostile
Disliking -0.4 -0.2 0.1 -P-A+D Disdainful
Disappointment -0.3 -0.4 -0.4 -P-A-D Bored
Sadness -0.5 -0.42 -0.23 -P-A-D Bored
Fear -0.64 0.60 -0.43 -P+A-D Anxious
Gloating 0.3 -0.3 -0.1 +P-A-D Docile
Gratification 0.6 0.5 0.4 +P+A+D Exuberant
Gratitude 0.4 0.2 -0.3 +P+A-D Dependent
Hate -0.6 0.6 0.3 -P+A+D Hostile
Hope 0.2 0.2 -0.1 +P+A-D Dependent
Joy 0.4 0.2 0.1 +P+A+D Exuberant
Liking 0.40 0.16 -0.24 +P+A-D Dependent
Love 0.3 0.1 0.2 +P+A+D Exuberant
Pity -0.4 -0.2 -0.5 -P-A-D Bored
Pride 0.52 0.22 0.61 +P+A+D Exuberant
Relief 0.2 -0.3 0.4 +P-A+D Relaxed
Resentment -0.3 0.1 -0.6 -P+A-D Anxious
Reproach -0.3 -0.1 0.4 -P-A+D Disdainful
Remorse -0.2 -0.3 -0.2 -P-A-D Bored
Satisfaction 0.50 0.42 0.47 +P+A+D Exuberant
Shame -0.3 0.1 -0.6 -P+A-D Anxious

Table 6.2: Mapping of Emotions into PAD space

Emotions and Emotional Center

When more than one emotion is active at the same time, the result is several points
located in the same or in different octants. Therefore, we need to compute another point
that represents all of them, which will induce the displacement of the mood. This point


Figure 6.4: Representation of Emotions

is named the emotional center E, and it is the center of mass of all the active emotions Ei,
weighted by their intensities. Eq. 6.3 depicts this computation; the intensity e of the
emotional center is computed as the norm of this vector from the origin of the PAD space.
Figure 6.5 shows how the emotional center is represented.

E = ( Σ_{i=1}^{n} (P_{Ei}, A_{Ei}, D_{Ei}) · e_i ) / ( Σ_{i=1}^{n} e_i ),   where e_i ∈ (0.0, 1.0]          (6.3)

Figure 6.5: Representation of the Emotional Center
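As an illustration, the following sketch computes the emotional center of Eq. 6.3 for the emotions elicited in the use case of Chapter 5, using the PAD values of Table 6.2; the helper name and the reduced PAD dictionary are ours.

# Sketch of Eq. 6.3: the emotional center as the intensity-weighted center of
# mass of the active emotions. PAD values are a subset of Table 6.2; the
# intensities e_i come from the semantic model.

PAD = {
    "anger": (-0.51, 0.59, 0.25),
    "pride": (0.52, 0.22, 0.61),
    "gratification": (0.6, 0.5, 0.4),
}

def emotional_center(active):
    """active: dict mapping emotion name -> intensity e_i in (0.0, 1.0]."""
    total = sum(active.values())
    return tuple(sum(PAD[name][k] * e for name, e in active.items()) / total
                 for k in range(3))

# Emotions elicited in the use case of Chapter 5:
print(emotional_center({"anger": 0.6, "pride": 0.64, "gratification": 0.64}))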


Current Mood

When the default mood G is changed by the emotional center E in an instant t, it becomes
the current mood of the character. The current mood, denoted by M(t), is computed
through the center of mass between G and E. Equation 6.4 shows this computation,
where M(t) is the current mood, G is the default mood, g its intensity, E is the emotional
center and e its intensity. Figure 6.6 shows the current mood M(t).

M(t) = (g · G + e · E) / (g + e)          (6.4)

Figure 6.6: Changing of mood in an instant t - Current Mood

New Mood

In a more realistic model, when the character experiences new emotions, these will not be
affected by the default mood G, but by the current mood M(t).
Then, a new mood at instant t + 1, M(t + 1), is the result of the emotions experienced
at instant t (represented by the emotional center E) and the current mood M(t). Figure 6.7
shows the generation of the new mood M(t + 1). Again, the new mood is computed as the
center of mass between the current mood and the emotional center.


Figure 6.7: Changing of mood in an instant t - New Mood

The new mood represents the change of mood of the character over time. It is worth
noting that these changes are progressive, as happens for people in their daily lives. The
character will not change its mood from one extreme of the PAD space to the other.
The reason is that, according to Watson [150], extremely high levels of one type of mood
are rare in everyday life. Instead, daily experiences show low to moderate intensity states.
For example, someone who is mildly nervous may be very enthusiastic.

Decay

Finally, an important aspect that has to be considered is decay. It occurs when the character
has no active emotions, which means that no new emotions have been elicited, and therefore
his or her current mood will tend to go back to the ground, or default, mood.
This change has to be progressive, which is why we compute the decayed new mood
M(t + 1) as the center of mass between the default mood G and the current mood M(t),
according to Eq. 6.5. By successive repetitions of this equation, the current mood will
eventually become the default mood, unless new emotions are experienced. Figure 6.8
shows how the vector M(t) is moved towards G in the PAD space.

M(t + 1) = (g · G + m · M(t)) / (g + m)          (6.5)


Figure 6.8: Mood Decay
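The sketch below ties Eqs. 6.4 and 6.5 together: the same center-of-mass update moves the mood toward the emotional center when new emotions arrive, and back toward the default mood G when none are active. Function names, the example emotional center and the stopping threshold are illustrative assumptions.

import math

# Sketch of Eqs. 6.4 and 6.5: the mood at the next instant is the center of
# mass between the previous mood and either the emotional center (when new
# emotions are active) or the default mood G (decay).

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def center_of_mass(v1, w1, v2, w2):
    return tuple((w1 * a + w2 * b) / (w1 + w2) for a, b in zip(v1, v2))

def update_mood(current, emotional_center=None, default=None):
    if emotional_center is not None:                 # Eq. 6.4 / new mood
        return center_of_mass(current, norm(current),
                              emotional_center, norm(emotional_center))
    return center_of_mass(default, norm(default),    # Eq. 6.5 / decay towards G
                          current, norm(current))

G = (0.739, 0.228, 0.311)                            # default mood from Eq. 6.2
M = update_mood(G, emotional_center=(0.2, 0.4, 0.4)) # mood after new emotions
while norm(tuple(m - g for m, g in zip(M, G))) > 0.01:
    M = update_mood(M, default=G)                    # successive decay steps
print(M)                                             # converges back towards G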

6.3 Summary
As can be seen, our model is a modified version of the ALMA model. We opted for
computations based on the center of mass, in contrast to the vectorial computations in
ALMA, because they allowed us to represent in a simplified and effective way the influence
of the points representing personality and emotions on the mood.
In this way, we obtained a model that served our purposes well enough, allowing us
to represent a character's mood and its changes over time depending on the personality.
The representation was expressed in terms of pleasure, arousal and dominance, providing a
novel way for mood simulation and visualization.

Chapter 7

Visualization of Affect in Faces

No man, for any considerable period, can wear one face to himself, and another to the
multitude, without finally getting bewildered as to which may be the true.
Nathaniel Hawthorne

Non-verbal expressions of affect can be performed using voice, gestures, body positions,
and facial expressions. For example, anger can be manifested through a raise in the tone
of the voice; but if the person in anger is introverted, then he or she would just frown.
In this work, the main focus is facial expressions, because they have been found to be
the richest source of information about emotions, and one of the richest and most accurate
ways of expressing affect [46].
One of the first scientists who studied facial expressions in men and animals was
Charles Darwin. All his observations were captured in his book The Expression of the
Emotions in Man and Animals [33]. He associated different facial movements with certain
emotions, which were observed in different persons in different countries under the same
emotional state. This led to the formulation of the principle of the universality of emotions.
In the previous chapters we went through the process of context appraisal to obtain a
set of emotions, and explained how these emotions are modulated and contribute to the
elicitation of moods. In this chapter, we explain how to visualize these affective traits.
This chapter is organized as follows. Section 7.1 gives an overview of previous works
on visualization of facial expressions. First the two most used standards for creating
parameterized facial expressions, MPEG-4 and FACS, are explained. Then, we present


the two facial animation engines and other applications used for generation of the facial
expressions. Afterwards, the algorithms and techniques used in the generation of emotion
and mood expressions, as well as personality cues are detailed. Finally, a summary of the
chapter is presented.

7.1 Overview of the Visualization Module


One of the novelties of this work is the visualization of moods using FACS based on the
Pleasure, Arousal and Dominance space; the other novelty is the visualization of personality
based on visual cues. To achieve this, the following requirements should be fulfilled:

Facial expressions should represent a strong emotion felt by the character at some
instant. We consider two types of emotions: universal ones (joy, anger, sadness,
disgust, surprise, and fear ) and intermediate emotions (the ones proposed in the
OCC model, e.g. disappointment, remorse). As there are no parameters established
for the intermediate ones, these should be obtained from already defined emotions.

Emotions should have different intensities. For example: does the lowering of the
eyebrow control the intensity of an angry face, and will a greater lowering of the
eyebrows make the person look angrier? [54].

Facial expressions should express the mood over an interval of time. Regarding mood,
Faigin poses the question: to what extent are day-to-day moods visible on the face?
To answer it, he cites the example of fear: if we are terrified enough, our response
will be etched on the face. But in the case of a long-term mood like anxiety, Faigin
says that it depends on coping: as well-socialized adults, we still go about our daily
life in spite of a disruptive inner mood of anxiety.

Personality can be expressed in terms of visual cues that enhance or attenuate
certain facial expressions. A visual cue is defined as an action or characteristic that
is associated with a personality trait.

Considering the requirements mentioned above, this section describes the last module
of our computational model, which concerns the generation and visualization of
facial expressions in virtual characters. Figure 7.1 shows an extract of the framework that
corresponds to the visualization layer.


Figure 7.1: Visualization Schema

As can be seen, two types of facial expressions are visualized: expressions for emotions
and expressions for moods. Expressions of emotions are generated using the output of the
Context Representation module, or Semantic Model. For their visualization we use the
MPEG-4 standard.
Expressions of moods are generated from the output of the Affective Model. The
obtained values are mapped into the Pleasure-Arousal-Dominance (PAD) space, which
allows any variation of mood to be visualized. In this case, only FACS is used.
Concerning personality, its visualization relies on visual cues, which in this work are
head pose and eye gaze. In this case, FACS is also the only standard used.

7.2 Expression Coding Systems

The visualization of facial expressions is considered one of the most challenging tasks in the
field of animation. On the one hand, the slightest inconsistency can be rapidly recognized
by the human eye and lead to a lack of believability; and on the other hand, the human
face is very complex, and to capture all its details can become a very difficult task.

103
CHAPTER 7. VISUALIZATION OF AFFECT IN FACES

The techniques used for generation of facial expressions can be grouped into: geometry-
based and image-based techniques.
Geometry-based techniques include key-framing with interpolation, direct parameter-
ization, pseudo-muscle-based approach and muscle-based approach. Image-based tech-
niques could be divided into morphing between photographic images, texture manipulation,
image blending and vascular expressions [129].
There is a third group, performance-driven methods, which capture real people's
movements and use them to animate virtual characters. To implement techniques
belonging to this last group, two standards or coding systems stand out as the most
important: the MPEG-4 standard and the Facial Action Coding System (FACS).

7.2.1 MPEG-4
MPEG-4 is an ISO/IEC standard developed by the MPEG (Moving Picture Experts
Group) [58], [144]. One of the objects provided by the standard is a facial object that
represents a human face, as well as facial data: Facial Definition Parameters (FDPs),
Facial Animation Parameters (FAPs), and Facial Animation Parameter Units (FAPUs).

Facial Definition Parameters (FDPs) are defined by the locations of feature points
and are used to customize a given face model to a particular face. There are 84 feature
points, which are used as reference to calculate the facial movement. Figure 7.2 shows
how FDPs are defined in a human face.

Facial Animation Parameters (FAPs) define the normalized displacement of par-


ticular feature points from their neutral position and are closely related to muscle
actions. There are 68 FAPs: 66 low-level parameters associated with the lips, jaw,
eyes, mouth, cheek, nose; and two high-level parameters (FAPs 1 and 2) associated
with expressions and visemes.
In our work all FAPs are coded, and for each group there is a mask that indicates which
specific FAPs are represented in the bitstream; a 1 indicates that the corresponding FAP
is present in the bitstream.

Facial Animation Parameter Units (FAPUs) are the units used to define FAP values.
A FAPU is computed from spatial distances between major facial features on the
model in its neutral state, and FAPUs are defined as fractions of distances between key
facial features (Figure 7.3). The measurement units are presented in Table 7.1.


Figure 7.2: FDPs groups

Feature   Description                               FAPU

IRISD0    Distance between upper and lower eyelid   IRISD = IRISD0/1024
ES0       Eye separation                            ES = ES0/1024
ENS0      Eye-nose separation                       ENS = ENS0/1024
MNS0      Mouth-nose separation                     MNS = MNS0/1024
MW0       Mouth width                               MW = MW0/1024
AU        Angle unit                                10^-5 rad

Table 7.1: FAPUs and their definitions

Figure 7.3: FAPUs


Some of the most renowned works that used MPEG-4 are:

Pasquariello and Pelachaud [122], who developed the Simple Facial Animation Engine
(SFAE) to animate Greta, a 3D facial model compliant with MPEG-4 specifications.

Courgeon et al. [31] developed MARC, an MPEG-4 based facial animation system
that also supports FACS action units based animation and BML (Behavior Markup
Language) external control.

Kasap et al. [80] had Eva, a virtual character whose facial movements for each type
of emotional response were encoded using the MPEG-4 standard.

Malatesta et al. [92] used MPEG-4 facial animation parameters to obtain facial ex-
pressions for predicted intermediate and final emotional states.

Raouzaiou et al. [131] proposed a framework based on the MPEG-4 standard [58] to
achieve two goals: modeling of primary expressions using MPEG-4's Facial Animation
Parameters (FAPs); and development of a rule-based technique for the analysis and
synthesis of intermediate facial expressions.

Arya et al. [6] proposed iFace, a general-purpose software framework that implements
the face multimedia object, which is built on top of the MPEG-4 standard.

Balci [8] developed XFace, a set of open source tools for the creation of embodied conver-
sational agents using MPEG-4 and keyframe-based rendering driven by the SMIL-Agent
scripting language.

7.2.2 FACS - Facial Action Coding System

The Facial Action Coding System (FACS) is a method introduced by Paul Ekman, Wallace
V. Friesen, and Joseph C. Hager to measure facial behaviors [48] and to systematically
categorize facial expressions. It was developed by determining how the contraction of each
facial muscle (singly and combined) changes the appearance of the face.
FACS measurement units are Action Units (AUs), not muscles. One of the reasons
for this is that some changes involve more than one muscle and are then mapped
into one AU. Another reason is that the changes produced by a single muscle are
sometimes separated into different AUs to describe the distinct actions of that muscle.


FACS consists of 56 AUs in total, 44 of those used to describe facial expressions and
the remaining 12 to describe head and eye movement. Appendix B has a list with all the
existent AUs.
There are AUs that are more accentuated on one side of the face or on both. From this
point of view, AUs can be unilateral or bilateral. For example, in a smile the AUs are
present on both sides of the face. Therefore, we can read: AU12 (lip corner puller). But in
the contempt expression, AU12 is present on just one side of the face. Therefore, we
read it as L/R12, meaning that it can be the left (L) or the right (R) side. Other locations
of the AUs are bottom (B) and top (T). Nevertheless, in our work, for ease of use,
we treat AUs as unilateral.
Regarding previous works, FACS has been mainly used in the field of facial expression
recognition. In the field of generation of facial expressions, FACS has been used for quite
some time by:

Platt and Badler [124], who designed the first model based on muscular actions using
FACS as basis for the control of facial expressions.

Grammer and Oberzaucher [67] modeled all AUs as a system of morph targets at
their maximum contraction directly on the head mesh in a modeling program.

Bee et al. [14] created a facial animation system based on Ekmans FACS, which
allows the creation and animation of facial expressions.

7.2.3 Reasons to use MPEG-4 and FACS


Like Mosmondor et al. [110], we preferred MPEG-4 to procedural approaches and the more
complex muscle-based models because it is very simple to implement, and therefore easy
to port to various platforms. It is also the state-of-the-art standard for parameterized
facial expressions in academia. Nevertheless, in the last few years MPEG-4 has seen
decreased use, probably due to the limited quality obtained in animations and the difficulty
of describing all kinds of facial movements.
On the other hand, FACS is the system developed to classify facial expressions through
the use of action units (AUs); therefore, it provides the necessary data to generate facial
expressions. Another reason is the independence that FACS provides from other animation
standards or methodologies. Instead of using a standard-specific compliant facial mesh in
order to generate and animate the expressions, one can have a 3D facial model with its


corresponding action unit deformations and obtain a variety of facial expressions. As
stated by Kaiser and Wehrle [78]:
The Facial Action Coding System (FACS) allows the reliable coding of any facial action
in terms of the smallest visible unit of muscular activity (Action Units), each referred to
by a numerical code. As a consequence, coding is independent of prior assumptions about
prototypical emotion expressions. Using FACS we can link facial expressions to emotions,
mood, and personality.
That is why many film studios are heading towards FACS in their animations of 3D
characters (Monster House, King Kong [137] and Avatar), which makes it a strong standard
to take into consideration not only for facial recognition, but also for facial animation.
As a data set for facial expressions of universal emotions, and some intermediate emo-
tions and moods, we used the Facial Expression Repertoire (FER) [20], developed by
the Filmakademie Baden-Wurttemberg. It provides an extensive data set with different
expressions and their corresponding AUs. These data will be used to create the required
expressions. Intensity for each AU is given by the output of the affective model and the
corresponding correlations with pleasure, arousal, and dominance dimensions.

7.3 Facial Animation Engines and Applications


In this section we outline the facial animation engines we have used to generate facial
expressions, based on FACS and MPEG-4.

7.3.1 Game Engine from the University of Augsburg

The game engine of the University of Augsburg is a facial animation system based on FACS,
designed to create facial expressions for animated agents in an easier, more intuitive and
faster way than conventional animation tools [14].
To implement FACS, morph targets (or blend shapes) were used. In the implementation
of the 3D model Alfred, each morph target corresponds to one AU. In total, 23 AUs were
implemented.
Regarding Alfred, he was modeled using Luxology modo and Autodesk 3D Studio
Max, through subdivision surfaces and sculpting techniques. Textures are either digitally
hand-painted (color & specular map) or baked from a high-resolution mesh (normal map).
The head has a polycount of about 21,000 triangles.


The created facial expressions can be saved and loaded in an XML format. The configura-
tion of the controllers, the basic expressions and their reduced controller sets are based
on data from the FER database. Figure 7.4 shows the development environment, the XML
file that is used to set up the AUs, and the resultant face.

Figure 7.4: Game Engine of the University of Augsburg: development environment, con-
figuration file and facial mesh

The reasons that led us to use this Game Engine are:

It provides the framework to work with FACS in a very straightforward manner.

The possibility to evaluate the influence of gaze orientation and head tilt in the
affective output, given that the engine allows head and eye movement.

The facial animation system can render the facial expressions in real time.

7.3.2 Xface Toolkit


Xface is an open source, platform-independent toolkit with four pieces of software for devel-
oping 3D embodied conversational agents compliant with the MPEG-4 standard. The core
Xface library allows developers to embed 3D facial animation into their applications. The

XfaceEd editor provides an easy to use interface to generate MPEG-4 ready meshes from
static 3D models. The XfacePlayer is a sample application that demonstrates the toolkit
in action; and the XfaceClient is used as the communication controller over network.
All the pieces in the toolkit are operating-system independent, and can be compiled
with any ANSI C++ standard compliant compiler. For animation, the toolkit relies on
OpenGL and is optimized enough to achieve satisfactory frame rates (a minimum of 25
frames per second is required for the FAP generating tool) with a high polygon count
(12000 polygons).
The Xface module we use is XfaceEd, where we import the facial mesh in VRML
(Virtual Reality Modeling Language) format. Once in the editor, we can assign values to
the FDP and FAP parameters. In one of the tabs of the editor we define how each FDP
will influence its neighboring vertices according to muscle models (Fig. 7.5).

Figure 7.5: Tab in XfaceEd where FDPs are assigned to the facial mesh

In the FAP previsualization tab, XfaceEd loads the facial mesh from a file with all
the FDP definitions. Then, it loads the FAP values, which are specified in a file of FAP
frames. Finally, it decodes these values into the face and the expression can be played.
XfaceEd also offers the possibility to change and visualize one FAP at a time. Through
a modification made in the Xface editor, we are able to change all the FAPs directly in
the model and write them into the FAP file.


7.4 Visualization of Emotions


The emotions that will be visualized in this work belong to one of two groups: universal
emotions (also named basic or primary) and intermediate emotions. The standard used
for the generation of emotional expressions is MPEG-4.

7.4.1 Universal (or Basic) Emotions

What a basic emotion is, and what emotions are, are topics whose answers still raise
debate among psychologists. In our research we use as universal emotions the ones
proposed by Ekman: Joy, Sadness, Anger, Disgust, Fear, and Surprise, mainly because
they are perceived as such in different cultures and because they are the state of the art
for universal or basic emotions.
When generating expressions using MPEG-4, each emotion has a set of FAPs. Let Ei
be the i-th emotion generated and Pi the set of activated FAPs for Ei .
Each FAP j in Pi has a range of variation Xi,j ∈ [minValue, maxValue], indicating
the minimum and maximum values along which the FAP can be displaced. As for j, it is
defined as j = {1, ..., 64}, given that the standard defines 64 FAPs.
The variation ranges Xi,j for universal emotions were obtained by manual manipulation
of FAPs to simulate the expressions of joy, sadness, fear, disgust, anger, surprise, and
neutral drawn by Faigin [54]. These models are seen in Figure 7.6, and the generated
universal facial expressions are shown in Figure 7.7.

7.4.2 Intermediate Emotions

As mentioned in previous chapters, intermediate emotions are taken from the OCC model,
except for happy-for and fear-confirmed because their expressions can be represented as
joy and fear, respectively.
Based on the work of Raouzaiou et al. [131] we generated a set of intermediate emotions
using the MPEG-4 standard. They identified eight fundamental emotions: acceptance,
fear, surprise, sadness, disgust, anger, anticipation and joy, which were the starting points
for interpolation. There were two different ways to generate new expressions: if the new
emotion En is very similar to the fundamental emotion Ei , i.e., if their facial expressions
differ mainly in strength of muscle contraction, then the new expression En can be com-
puted as a category of the expression Ei . If the new emotion En does not clearly belong


Figure 7.6: Faigins universal facial expressions. Up: Anger, Disgust, Fear. Down: Joy,
Sadness, Surprise

Figure 7.7: Universal facial expressions generated using MPEG-4. Up: low intensities of
Anger, Disgust, Fear, Joy, Sadness and Surprise. Down: high intensities of Anger, Disgust,
Fear, Joy, Sadness and Surprise


to a fundamental category, its facial expression is computed by interpolation between the


shifted expressions of the two emotions E1 and E2 that are closest to En .
In the same manner, we generate intermediate expressions either by categorizing a
universal emotion, or by mixing two universal emotions (Figure 7.8).

Figure 7.8: Combination of universal emotions to obtain intermediate emotions

Instead of combining the data from the Whissell and Plutchik [132] studies, as
Raouzaiou et al. did, we only used the data from Whissell. The reason to use only
Whissell's Dictionary of Emotions [153] is that it provides a complete list of terms
with affective connotations described in terms of their activation (or arousal) and evalua-
tion (or pleasantness). From these values, we can locate the emotions in a 2D space and
compute their angular distances, establishing similarity among them. Figure 7.9(a) shows
the emotions located into the activation-evaluation space.

Figure 7.9: Whissell Emotions: (a) Not centered values, (b) Centered values


For a better visualization of the angular distances, we center the emotions with respect
to the origin (0, 0) using the mean values of the activation and evaluation dimensions
(ā = 4.00 and ē = 4.00) obtained by Whissell. In this way, emotions can be spread along
the negative and positive axes, maintaining the angular proportion between them.
Equation 7.1 shows how to obtain the angle θ of an emotion from its centered activation
value a_center = a − ā and its centered evaluation value e_center = e − ē. Then, we use the rules
in Table 7.2 to locate the angle in its corresponding quadrant, given that the returned
θ is between 0° and 90°. Figure 7.9(b) shows the centered emotions.

θ = arctan(a_center / e_center)          (7.1)

if (a_center > 0 ∧ e_center < 0) then θ = 180° − θ
else if (a_center < 0 ∧ e_center < 0) then θ = 180° + θ
else if (a_center < 0 ∧ e_center > 0) then θ = 360° − θ

Table 7.2: Rules to locate the angles in the 2D activation-evaluation space
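For reference, a small Python sketch of Eq. 7.1 together with the quadrant rules of Table 7.2; the resulting angle is equivalent to atan2(a_center, e_center) taken modulo 360°.

import math

# Sketch of Eq. 7.1 and the rules of Table 7.2: angle of an emotion in the
# centered activation-evaluation space (Whissell means: 4.00 for both axes).

A_MEAN = E_MEAN = 4.00

def emotion_angle(activation, evaluation):
    a_c = activation - A_MEAN
    e_c = evaluation - E_MEAN                             # assumed non-zero here
    theta = math.degrees(math.atan(abs(a_c) / abs(e_c)))  # Eq. 7.1, 0..90 degrees
    if a_c > 0 and e_c < 0:                               # Table 7.2 quadrant rules
        theta = 180 - theta
    elif a_c < 0 and e_c < 0:
        theta = 180 + theta
    elif a_c < 0 and e_c > 0:
        theta = 360 - theta
    return theta                 # equals atan2(a_c, e_c) in degrees, modulo 360

print(round(emotion_angle(5.4, 6.1), 1))   # Joy     -> 33.7
print(round(emotion_angle(3.8, 2.4), 1))   # Sadness -> 187.1
print(round(emotion_angle(2.5, 5.5), 1))   # Relief  -> 315.0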

Once emotions have been properly located in the activation-evaluation space we can
decide if an intermediate emotion will be obtained as a category or as a combination of
universal emotions.
In this work, the universal emotions used to obtain the intermediate ones, as well as
their activation, evaluation and angular values are given in Table 7.3.
In the case of categorization, Method 1 is applied. It changes the intensity of a
universal emotion, resulting in an intermediate emotion that is a category of that universal
emotion. Another way to tell whether an emotion falls into a certain category is its closeness
to a universal emotion in the 2D activation-evaluation space.
In the case of mixing universal emotions, Method 2 combines two universal emotions,
resulting in an intermediate emotion that has features of the two original ones.

7.4.3 Generation of Intermediate Emotions


When working with MPEG-4, an emotion is defined by FAPs. Let E1, E2 and En be the
two universal emotions 1 and 2, and the intermediate emotion n; a1, a2, an, e1, e2, en
are the centered activation and evaluation values for each emotion. The set of activated
FAPs in each emotion is given by P1, P2, Pn. Each Pi has a range of variation Xi,j, such


Emotion Activation Evaluation Angle Universal emotions


Joy 5.4 6.1 33.7
Sadness 3.8 2.4 187.1
Disgust 5.0 3.2 128.6
Anger 5.5 3.3 171.3
Fear 4.9 3.4 115.0
Surprise 6.5 5.2 64.3
Admiration 4.9 5.0 41.9 joy + surprise
Disappointment 5.2 2.4 143.1 surprise + sadness or
disgust + sadness
Gloating 5.0 5.1 42.2 joy + surprise
Gratification 4.1 4.9 6.3 joy
Gratitude 4.7 5.4 26.5 joy without raised eyebrows
Hate 5.6 3.7 100.6 anger
Hope 4.7 5.2 30.25 joy
Liking 5.3 5.1 49.7 joy + surprise
Love 5.3 5.3 45.0 joy + surprise
Pity 4.5 3.1 150.9 sadness + fear
Pride 4.7 5.3 28.3 joy
Relief 2.5 5.5 315.0 joy
Remorse 3.1 2.2 206.6 sadness
Reproach 4.9 2.8 143.1 disgust
Resentment 5.1 3.0 132.2 sadness + disgust
Satisfaction 4.1 4.9 6.3 joy
Shame 3.2 2.3 154.8 sadness

Table 7.3: Whissell activation and evaluation values, angles and combinations of emotions

that Pi = {Xi,j}, where i is the emotion, j = {1, ..., 64} is the FAP number, and Xi,j ∈
[minValue, maxValue]. Also, minValue ∈ [−1000, 1000] and maxValue ∈ [−1000, 1000].

Method 1: Categorization of an Universal Emotion

The objective is to obtain the range of variation Xn,j of each FAP j in En . For this, a
proportionality constant A is computed using Equation 7.2, which is the proportion in
which the activation value of the universal emotion Ei will be present in the intermediate
emotion En . Then, A is multiplied by the range of variation Xi,j of the universal emotion
Ei to get Xn,j , according to Equation 7.3. Xn,j is the translated range of Xi,j due to the
activation value of En . Figure 7.10 shows an example of categorization.

A = an / ai          (7.2)


Xn,j = A · Xi,j          (7.3)

Figure 7.10: Intermediate emotions from one universal emotion
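A literal sketch of Method 1 (Eqs. 7.2 and 7.3) is shown below; the FAP numbers and range values are placeholders, not the thesis data, and the activation values are the centered values of joy and hope from Table 7.3.

# Sketch of Method 1 (Eqs. 7.2 and 7.3): an intermediate emotion obtained as a
# category of a universal emotion reuses the universal FAP ranges, scaled by
# the ratio of activation values.

def categorize(a_universal, a_intermediate, universal_ranges):
    """universal_ranges: dict FAP number j -> (minValue, maxValue)."""
    A = a_intermediate / a_universal                      # Eq. 7.2
    return {j: (A * lo, A * hi)                           # Eq. 7.3
            for j, (lo, hi) in universal_ranges.items()}

# Hope as a category of joy (centered activations 1.4 and 0.7, Table 7.3):
joy_ranges = {3: (100, 600), 5: (-300, -50)}              # placeholder ranges
print(categorize(a_universal=1.4, a_intermediate=0.7, universal_ranges=joy_ranges))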

Method 2: Combination of two Universal Emotions

The second method for generating En is applied when it is between two universal emotions
that can be considered close to each other. Then we have to combine the FAPs of both
universal emotions according to the following rules:

Rule I:
If FAP j is involved in the set of activated FAPs P1 , P2 with the same sign (which
represents the same direction of movement):
(a) First we compute the weighted translations of X1,j and X2,j , according to Equa-
tions 7.2 and 7.3. Let these translations be t(X1,j ) and t(X2,j ), which represent
sub-ranges of the FAPs of E1 and E2 . Figure 7.11 is a representation of how the
range is translated.

Figure 7.11: Example of a translated variation range


(b) Then, we compute the centers c1,j, c2,j and the lengths s1,j, s2,j of the translated
ranges t(X1,j) and t(X2,j). The center is the mean between the minValue and maxValue
of the translated range. The length is the number of elements in the range.
(c) In order to compute the center cn,j and length sn,j of En, we first need to compute
the angles θ1 and θ2 for E1 and E2 using Equation 7.1.
(d) Using the angles obtained in (c), we compute two constants Ω1 and Ω2. Ω1 indicates
the proportion that EnE1 represents within E1E2, and Ω2 the proportion that EnE2
represents within E1E2. This is done using Equations 7.4 and 7.5.

Ω1 = (θn − θ1) / (θ2 − θ1)          (7.4)

Ω2 = (θ2 − θn) / (θ2 − θ1)          (7.5)

Figure 7.12 shows that the values represented by Ω1 and Ω2 are the proportions by which
E1 needs to move to reach En.

Figure 7.12: Angles equivalence

(e) Finally, using Equations 7.6 and 7.7 we compute the center and length of the range
of variation of En, multiplying Ω1 and Ω2 by the centers and lengths of the translated
ranges of E1 and E2. With these values we can compute the final variation range for En.

sn,j = Ω1 · s1,j + Ω2 · s2,j          (7.6)

cn,j = Ω1 · c1,j + Ω2 · c2,j          (7.7)


Rule II:
If FAP j is involved in both P1 and P2 but with contradictory signs, then Xn,j is null,
because the FAPs cancel out given that an intersection between the ranges of E1 and
E2 is never found.

Rule III:
If FAP j is involved in only one of P1 or P2, then the range of variation Xn,j will be
averaged with the range of the neutral face, according to Equations 7.2 and 7.8.

Xn,j = A · (Xi,j / 2)          (7.8)
As a result of these three rules, we obtain a set of FAPs with a range of variation
that indicates all the possible intensities of the emotion. Figure 7.13 shows the expression
obtained by combining the universal emotions joy and surprise. Figure 7.14 shows some
facial expressions obtained for intermediate emotions.

Figure 7.13: Intermediate emotions from two universal ones
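The following sketch strings together Rule I of Method 2 (Eqs. 7.2-7.7) for a single FAP present in both universal emotions with the same sign. The range values are placeholders, the activation values and angles are the centered values of joy, surprise and love from Table 7.3, and the weights are paired with the two ranges as written in Eqs. 7.6-7.7 above.

# Sketch of Method 2, Rule I (Eqs. 7.2-7.7), for one FAP j present in both
# universal emotions with the same sign. Range values are placeholders.

def translate(a_universal, a_n, rng):          # weighted translation, Eqs. 7.2-7.3
    A = a_n / a_universal
    return (A * rng[0], A * rng[1])

def combine(range1, range2, a1, a2, an, th1, th2, thn):
    t1 = translate(a1, an, range1)
    t2 = translate(a2, an, range2)
    c1, s1 = (t1[0] + t1[1]) / 2.0, t1[1] - t1[0]   # centers and lengths (step b)
    c2, s2 = (t2[0] + t2[1]) / 2.0, t2[1] - t2[0]
    om1 = (thn - th1) / (th2 - th1)                 # Eq. 7.4
    om2 = (th2 - thn) / (th2 - th1)                 # Eq. 7.5
    s_n = om1 * s1 + om2 * s2                       # Eq. 7.6
    c_n = om1 * c1 + om2 * c2                       # Eq. 7.7
    return (c_n - s_n / 2.0, c_n + s_n / 2.0)       # final variation range of En

# Love obtained from joy (E1) and surprise (E2), placeholder ranges for FAP j:
print(combine(range1=(100, 600), range2=(200, 800),
              a1=1.4, a2=2.5, an=1.3, th1=33.7, th2=64.3, thn=45.0))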

Generalization of the generation of emotional expressions

The methods previously explained can also be applied to other standards, as demonstrated
by Albrecht et al. [1], who adapted the work of Raouzaiou et al., aimed at an MPEG-4
based model, to a physics-based facial animation system.
In the same fashion, and to prove that the algorithm can be implemented independently
of the animation system, we use FACS and the Game Engine of the University of Augsburg
to generate intermediate emotions from universal emotions.


Figure 7.14: Examples of Intermediate Emotions (left to right): Love, Disappointment,


Hate, Pity.

For the universal emotions, we followed the same methodology as with FAPs. In this
case, we use the Facial Expression Repertoire (FER) [20] to obtain the AUs corresponding
to each expression, as shown in Figures 7.15 and 7.16.

(a) Disgust (b) Fear (c) Joy

Figure 7.15: Basic emotions (I): disgust, fear, joy

To generate intermediate emotions using FACS we also use the two methods applied
with MPEG-4.
Let E1, E2 and En be the two universal emotions 1 and 2, and the intermediate emotion
n; a1, a2, an, e1, e2, en are the centered activation and evaluation values for each emotion.
The set of activated AUs in each emotion is given by P1, P2, Pn. Each Pi has a range of
variation Xi,j, such that Pi = {Xi,j}, where i is the emotion, j is the number that identifies
the AU according to the FACS manual [48], and Xi,j ∈ [0, 1]. The main difference with


(a) Sadness (b) Anger (c) Surprise

Figure 7.16: Basic emotions (II): sadness, anger, surprise

the methodology used with MPEG-4 is that in this case Xi,j is considered a single value,
the intensity of the AU j.
Figure 7.17 shows a set of facial expressions obtained for intermediate emotions.

(a) Love (b) Disappointment

(c) Hate (d) Pity

Figure 7.17: Upper row: Love, Disappointment. Lower row: Hate, Pity


7.5 Visualization of Mood


The motivation for the visualization of mood comes from the fact that there is almost no
literature explaining how to represent moods through facial expressions. That is why
we establish a correspondence between FACS Action Units (AUs) and a mood model,
so that it is possible to know which AUs describe each mood.
The mood model we have selected is the Pleasure-Arousal-Dominance model (PAD),
therefore we consider the PAD octants as the mood values: Exuberant, Bored, Disdain-
ful, Dependent, Docile, Hostile, Anxious and Relaxed. This model has been completely
explained in Chapter 2.
As seen in Figure 7.18, we know how to describe emotional expressions as a function of
action units (AUs), and we know how to map emotions into the PAD space. The only
missing element is the mapping of AUs into the PAD space.

Figure 7.18: Relation between Emotions, AUs and PAD Space

The methodology to map AUs into PAD is: (1) mapping of emotions into the PAD
space, (2) AU analysis of facial expressions of emotions using the Facial Expression Reper-
toire (FER), and (3) AU mapping into the PAD space.

7.5.1 (1) Mapping of emotions into the PAD space

The ALMA model [63] and the work of Russell and Mehrabian [136] provide us with the
PAD values for a set of emotions. These values are shown in Table 7.4 and some of them
are represented in Figure 7.19.


Emotion P A D Mood
Admiration 0.4 -0.49 -0.24 +P-A-D Docile
Anger -0.51 0.59 0.25 -P+A+D Hostile
Arrogance 0.0 0.34 0.5 -P+A+D Hostile
Confusion -0.53 0.27 -0.32 -P+A-D Anxious
Disliking -0.4 -0.2 0.1 -P-A+D Disdainful
Disappointment -0.3 -0.4 -0.4 -P-A-D Bored
Rage -0.44 0.72 0.32 -P+A+D Hostile
Sadness -0.5 -0.42 -0.23 -P-A-D Bored
Fear -0.64 0.60 -0.43 -P+A-D Anxious
Gloating 0.3 -0.3 -0.1 +P-A-D Docile
Gratification 0.6 -0.3 0.4 +P+A+D Exuberant
Gratitude 0.2 0.5 -0.3 +P+A-D Dependent
Hate -0.4 -0.2 0.4 -P-A+D Disdainful
Hope 0.2 0.2 -0.1 +P+A-D Dependent
Joy 0.5 0.42 0.23 +P+A+D Exuberant
Liking 0.40 0.16 -0.24 +P+A-D Dependent
Love 0.3 0.1 0.2 +P+A+D Exuberant
Pity -0.4 -0.2 -0.5 -P-A-D Bored
Pride 0.52 0.22 0.61 +P+A+D Exuberant
Relief 0.2 -0.3 0.4 +P-A+D Relaxed
Resentment -0.3 0.1 -0.6 -P+A-D Anxious
Reproach -0.3 -0.1 0.4 -P-A+D Disdainful
Remorse -0.2 -0.3 -0.2 -P-A-D Bored
Satisfaction 0.50 0.42 0.47 +P+A+D Exuberant
Shame -0.3 0.1 -0.6 -P+A-D Anxious
Terror -0.62 0.82 -0.43 -P+A-D Anxious
Worry (insecure) -0.57 0.14 -0.42 -P+A-D Anxious
Fatigue -0.18 -0.57 -0.29 -P-A-D Bored

Table 7.4: Mapping of Emotions into PAD space [64], [136]

7.5.2 (2) AUs analysis of Facial Expressions of Emotions

The AUs that describe the different emotional expressions are obtained from the Facial
Expression Repertoire (FER) [20]. It is an on-line database developed by the Filmakademie
Baden-Wurttemberg, which maps over 150 emotional expressions to FACS and explains in
detail which AUs must be activated for certain facial expressions. It provides an extensive
data set with different expressions and their corresponding AUs, that will be used to create
the required expressions.

The expressions we use for the analysis are presented in Table 7.5.


(a) Enraged (b) Embarrassment smile (c) Ingratiating Smile (d) Contempt

(e) Arrogance (f) Qualifier smile (g) Coy smile (h) Fear

(i) Terror (j) Worry (k) Rage (l) Surprise

(m) Disgust (n) Confused (o) Uproarious laughter (p) Yawning

(q) Tired (r) Disdain (s) Relief Happiness (t) Slight Sadness

Table 7.5: FER expressions for emotion analysis


Figure 7.19: Location of example emotions in PD space

7.5.3 (3) AUs mapping into the PAD Space

Given that an emotion is located in a certain PAD region, and that this emotion has a set of
AUs, we need to figure out the correspondence between a PAD region and all of the AUs.
The methodology consists in using the emotions in Table 7.4 and identifying the AUs
that describe their movements. This gives us a rough idea of where the AUs are mainly
active. The difficulty lies in deciding the PAD region where an AU should be active when
it belongs to emotions in different octants. For example, AU 1 is active in fear, which is in
the -P-D quadrant, but it is also active in other emotions in the +P-D quadrant.
The analysis is divided into: analysis of the AUs in the Pleasure-Dominance (PD) space,
and analysis of the AUs in the Arousal (A) space. One of the reasons for this division is
that it facilitates the analysis of the dimensions where the AUs could be activated. Another
reason is that there are AUs that can be easily associated with arousal, or activation, such
as the opening of the mouth or the eyes.
From these analyses we can formulate a function, dependent on the Pleasure-Dominance
or Arousal dimension, for each AU, which depicts the area where the AU is active.
As for the analyzed AUs, we consider a reduced set of AUs that is potentially sufficient
to express a set of facial expressions in a readily recognizable manner [53]. The final set of
AUs used in this research is presented in Table 7.6.


AU Facial Action Code Muscular Basis


1 Inner Brow Raiser Frontalis, Pars Medialis
2 Outer Brow Raiser Frontalis, pars lateralis
4 Brow Lowerer - Frown Depressor Glabellae, Corrugator, Depressor supercilii
5 Upper Lid Raiser Levator Palpebrae Superioris
7 Lid Tightener Orbicularis oculi, Pars palpebralis
10 Upper Lip Raiser Levator labii superioris
12 Lip Corner Puller Zygomaticus major
15 Lip Corner Depressor Triangularis
17 Chin Raiser Mentalis
25 Lips part Depressor labii inferioris OR Relaxation of mentalis OR Orbicularis oris
26 Jaw Drop (mouth only) Masseter, relaxed Temporalis and Internal Pterygoid
43 Eyes closed Relaxation of Levator palpebrae superioris

Table 7.6: Mood AUs configurations

In the following we present the emotions and methodology used in the analysis of each
AU. Table 7.8 contains all the equations to map each AU into the different dimensions.

AU 1 - Inner Brow Raiser


This AU is found to be activated in sadness, which according to Lance and
Marsella [86] is low in pleasure and dominance. It is also found in emotions of
liking and relief, which have positive pleasure.
For this reason, two analyses were performed: one in negative Pleasure (-P) and
one in positive Pleasure (+P), along the negative Dominance dimension (-D).
For the -P-D quadrant, we use sadness as the guide emotion. The FER expression
compliant with this emotion was slight sadness (Figure 7.5 (s)). For the +P-D
quadrant, the emotion we use for the analysis is relief. The FER expression that seemed
to be compliant with it is relief happiness (Figure 7.5 (t)).
After the AU analysis, we formulate a linear function that returns the area of the
-P-D quadrant where AU 1 is activated. We use the values of sadness (p = -0.5, d = -0.25)
according to Equation 7.9.

AU1_intensity = -4.0 · d    if d ∈ (-0.25, 0.0]
AU1_intensity = 1.0         if d ∈ [-1.0, -0.25]          (7.9)


To map AU 1 into the +P-D quadrant we use relief (p = 0.2 and d = -0.4), according
to Equation 7.10, which is similar to Equation 7.9 but with an offset of 0.4, because
relief is the point where AU 1 begins to change.

AU1_intensity = 0.0                if d ∈ [-0.4, 0.0)
AU1_intensity = -4.0 · (d + 0.4)   if d ∈ (-0.65, -0.4)
AU1_intensity = 1.0                if d ∈ [-1.0, -0.65]          (7.10)

Figure 7.20 shows the area where AU 1 is activated.

Figure 7.20: Mapping of AU 1 in PAD
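As an example of how these piecewise functions are used at run time, the sketch below implements the AU 1 activation of Eqs. 7.9 and 7.10 as a function of the mood's pleasure and dominance values; the function name is ours, and the remaining AUs of Table 7.8 follow the same pattern.

# Sketch of Eqs. 7.9 and 7.10: intensity of AU 1 (inner brow raiser) as a
# function of the pleasure (p) and dominance (d) components of the mood.

def au1_intensity(p, d):
    if d >= 0.0:
        return 0.0                       # AU 1 is only active for negative dominance
    if p < 0.0:                          # -P-D quadrant, Eq. 7.9
        return 1.0 if d <= -0.25 else -4.0 * d
    # +P-D quadrant, Eq. 7.10 (offset of 0.4 given by relief)
    if d > -0.4:
        return 0.0
    return 1.0 if d <= -0.65 else -4.0 * (d + 0.4)

print(au1_intensity(-0.5, -0.25))   # sadness   -> 1.0
print(au1_intensity(0.2, -0.4))     # relief    -> 0.0 (AU 1 just begins to change)
print(au1_intensity(0.2, -0.65))    # deep +P-D -> 1.0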

AU 2 - Outer Brow Raiser

This AU is also found to be activated in the negative Dominance dimension (-D),
along the Pleasure dimension. As with AU 1, two analyses were performed: one in
the negative Pleasure space (-P), and one in the positive Pleasure space (+P).
For the -P-D quadrant, we use sadness and fear to compute the area. The result is
a progression beginning at the location of sadness (p = -0.5 and d = -0.25), and
ending where fear is located (p = -0.65 and d = -0.45).

For the +P-D quadrant we used the same analysis as for AU 1. Figure 7.21 shows
the area where AU 2 is activated. The functions to obtain the areas are in Table 7.8.


Figure 7.21: Mapping of AU 2 in PAD

AU 4 - Brow Lowerer
AU 4 is found in the negative Pleasure region (-P), along the Dominance dimension.
To map this AU in the -P+D quadrant we use anger (p = -0.5 and d = 0.25),
because it is located in that quadrant. In FER, the expressions considered as anger
variants are enraged (compressed lips) and sternness, which have about the same
dominance value on the -P side (Figure 7.5).
For the -P-D quadrant we use sadness (p = -0.5 and d = -0.25) to compute
the intensity of AU 4. Figure 7.22 shows the area where AU 4 is activated, and the
equations to obtain this area are shown in Table 7.8.

Figure 7.22: Mapping of AU 4 in PAD


AU 6 - Cheek Raiser
The feature raised cheeks is a key element for joy and appears in displays of genuine
emotion, but is missing in fake smiles masking other feelings. Therefore, we map this
AU to the positive Pleasure (+P) along the Dominance dimension.
AU 6 is seen in the FER expressions for embarrassment smile and ingratiating smile
(Figure 7.5), which are representative for shame. Nevertheless, shame tends not to
be represented with a smile. Therefore, we concluded that the mentioned expressions
of smile are a sign of agreeableness towards the interlocutor. That is why we use
liking (p = 0.4 and d = -0.24) for the negative dominance region (+P-D).
For the positive dominance region (+P+D) we use the emotions joy (p = 0.5 and d = 0.25)
and satisfaction (p = 0.5 and d = 0.47). Figure 7.23 shows the area where it is
activated and Table 7.8 shows the equations to obtain the area of activation.

Figure 7.23: Mapping of AU 6 in PAD

AU 10 - Upper Lip Raiser


This AU corresponds to the positive Dominance and negative Pleasure dimen-
sions (-P+D), since it is a key feature of contempt. Similar emotions are disdain or
arrogance (Figure 7.5).
To compute its intensity we used the emotion arrogance (p = 0.0 and d = 0.5).
Table 7.8 presents the equation to compute the activation of this AU, and Figure 7.24
shows its area of activation.


Figure 7.24: Mapping of AU 10 in PAD

AU 12 - Lip Corner Puller


Based on previous studies, it is known that positive pleasure influences the lip cor-
ners. For joy they are pulled to the ears. Thus, AU 12 is mapped into the positive
Pleasure (+P) along the Dominance dimension.
Regarding FER expressions, on +P-D quadrant we can find the qualifier smile, the
coy smile and the embarrassment smile, all showing raised lip corners (Figure 7.5).
On the +P+D quadrant we can find the emotion joy and its many variants. Thus
joy is the emotion which is more representative for the activation of this AU.
Figure 7.25 shows where AU 12 is located in the space. Table 7.8 presents the equation
for its computation.

AU 14 - Dimpler
This AU appears in a smiling mouth. This marks a key feature for the positive
Pleasure dimension (+P).
One of the FER expressions described by this AU is enjoyable contempt (Figure 7.5),
which can be seen as the positive pleasure variety of arrogance.
Thus, AU 14 is mapped to the +P+D quadrant. However, since its region of activation
lies closer to the maximum dominance values and to lower positive pleasure values,
the computation of this AU takes both P and D into account. Figure 7.26 shows the
area where AU 14 is activated.


Figure 7.25: Mapping of AU 12 in PAD

Figure 7.26: Mapping of AU 14 in PAD


AU 15 - Lip Corner Depressor


This AU, like AU 12, describes a movement of the lip corners, but downwards,
as in sadness, fear and anger.
Therefore, AU 15 is mapped into the negative Pleasure dimension (-P) along the
Dominance axis. To compute its intensity, we take into consideration the values for
sadness. Figure 7.27 shows where AU 15 is located in the space.

Figure 7.27: Mapping of AU 15 in PAD

AU 5 - Upper Lid Raiser


This AU is found in emotions with a high arousal component such as fear (a = 0.6),
terror (a = 0.82) or worry (a = 0.14). That is why we only considered the positive
Arousal dimension (+A) to locate AU 5.
Regarding FER expressions, we used the ones corresponding to fear, terror and worry,
allowing us to define the AU intensity between terror and worry. Figure 7.28 shows
where the AU has values in the Arousal dimension.

Figure 7.28: Mapping of AU5 in the Activation dimension


AU 25 - Lips Part
According to FER, the AU 25 is found in emotions like rage, surprise, disgust, or
in expressions like dazzled smile, which would correspond to a confused expression
(Figure 7.5). All these emotions have distinctive values in the positive Arousal
dimension.
We use confusion (a = 0.27) and rage (a = 0.72) to compute the AU's activation.
Figure 7.29 shows how the AU increases its intensity only along the activation dimension.

Figure 7.29: Mapping of AU25 in the Activation dimension

AU 26 - Jaw Drop
AU 26 describes a similar movement to AU 25, but it involves movement of the jaw
bone, which may sometimes result in a more exaggerated opening of the mouth.
Nevertheless, AU 26 is also found in surprise, fear or disgust, among other FER
expressions like uproarious laughter and yawning (Figure 7.5).
We use the values of fear (a = 0.6) and disgust (a = 0.35) to compute this AU's
intensity. Figure 7.30 shows how the AU increases its intensity only along the activation
dimension.

Figure 7.30: Mapping of AU26 in the Activation dimension


AU 43 - Eye Closure
AU 43 describes a movement that is associated with emotions or states like tiredness,
disdain or relief happiness (Figure 7.5). Thus, it belongs to the negative Arousal
dimension (-A).
To compute the area where AU 43 is activated, we use the arousal value of fatigued
(a = -0.57). Figure 7.31 shows the location of this AU in the activation dimension.

Figure 7.31: Mapping of AU43 in the Activation dimension

As a result, the sets of AUs for each mood are shown in Table 7.7.

Mood (octant)         P-related AUs           A-related AUs    D-related AUs

Exuberant (+P+A+D)    +P: 1, 2, 6, 12, 14     +A: 5, 25, 26    +D: 4, 6, 10, 12, 14, 15
Bored (-P-A-D)        -P: 1, 2, 4, 10, 15     -A: 43           -D: 1, 2, 4, 6, 12, 15
Docile (+P-A-D)       +P: 1, 2, 6, 12, 14     -A: 43           -D: 1, 2, 4, 6, 12, 15
Hostile (-P+A+D)      -P: 1, 2, 4, 10, 15     +A: 5, 25, 26    +D: 4, 6, 10, 12, 14, 15
Anxious (-P+A-D)      -P: 1, 2, 4, 10, 15     +A: 5, 25, 26    -D: 1, 2, 4, 6, 12, 15
Relaxed (+P-A+D)      +P: 1, 2, 6, 12, 14     -A: 43           +D: 4, 6, 10, 12, 14, 15
Dependent (+P+A-D)    +P: 1, 2, 6, 12, 14     +A: 5, 25, 26    -D: 1, 2, 4, 6, 12, 15
Disdainful (-P-A+D)   -P: 1, 2, 4, 10, 15     -A: 43           +D: 4, 6, 10, 12, 14, 15

Table 7.7: Set of AUs for each mood in PAD space
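
As a simple illustration of how Table 7.7 can be used, the following Python sketch encodes the AU sets per PAD half-axis and collects the candidate AUs for a given octant. The dictionary and helper names are ours and are not part of the framework implementation; the actual intensity of each AU is still given by the functions in Table 7.8.

    # Sketch: AU sets per PAD half-axis, as summarized in Table 7.7.
    AU_SETS = {
        ('P', +1): [1, 2, 6, 12, 14],
        ('P', -1): [1, 2, 4, 10, 15],
        ('A', +1): [5, 25, 26],
        ('A', -1): [43],
        ('D', +1): [4, 6, 10, 12, 14, 15],
        ('D', -1): [1, 2, 4, 6, 12, 15],
    }

    def aus_for_mood(p_sign, a_sign, d_sign):
        """Return the candidate AUs for the PAD octant given by the three signs (+1/-1)."""
        aus = set()
        for dim, sign in zip('PAD', (p_sign, a_sign, d_sign)):
            aus.update(AU_SETS[(dim, sign)])
        return sorted(aus)

    # Example: Anxious corresponds to -P +A -D.
    print(aus_for_mood(-1, +1, -1))   # -> [1, 2, 4, 5, 6, 10, 12, 15, 25, 26]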

Table 7.8 is a summary of all the equations formulated with the previous analysis, and
it also contains the graphs that depict the areas where the AUs are activated in the PAD
space.

AU     Emotions              Equations and comments

AU 1   sadness               AU 1 = (-4.0)d    for d in (-0.25, 0.0],   p in [-1.0, 0.0)
                             AU 1 = 1.0        for d in [-1.0, -0.25],  p in [-1.0, 0.0)
                             Linear function from d = 0.0 to d = -0.25.

       relief                AU 1 = 0.0               for d in [-0.4, 0.0],    p in (0.0, 1.0]
                             AU 1 = (-4.0)(d + 0.4)   for d in (-0.65, -0.4),  p in (0.0, 1.0]
                             AU 1 = 1.0               for d in [-1.0, -0.65],  p in (0.0, 1.0]
                             Offset of -0.4 because AU 1 begins to change at relief.

AU 2   sadness, fear         AU 2 = 0.0        for d in (-0.25, 0.0],   p in [-1.0, 0.0)
                             AU 2 = (-4.0)d    for d in (-0.45, -0.25], p in [-1.0, 0.0)
                             AU 2 = 1.0        for d in [-1.0, -0.45),  p in [-1.0, 0.0)
                             Progression begins at sadness, ending at fear.

       relief                AU 2 = 0.0               for d in [-0.4, 0.0],    p in (0.0, 1.0]
                             AU 2 = (-4.0)(d + 0.4)   for d in (-0.65, -0.4),  p in (0.0, 1.0]
                             AU 2 = 1.0               for d in [-1.0, -0.65],  p in (0.0, 1.0]

AU 4   anger (+D side)       AU 4 = (4d)(-2p)  for p in (-0.5, 0.0],   d in (0.0, 0.25)
                             AU 4 = (-2.0)p    for p in (-0.5, 0.0],   d in [0.25, 1.0]
                             AU 4 = (4.0)d     for p in [-1.0, -0.5],  d in (0.0, 0.25)
                             AU 4 = 1.0        for p in [-1.0, -0.5],  d in [0.25, 1.0]
                             This AU is dependent on pleasure and dominance.

       sadness (-D side)     AU 4 = (-4d)(-2p) for p in (-0.5, 0.0],   d in (-0.25, 0.0]
                             AU 4 = (-2.0)p    for p in (-0.5, 0.0],   d in (-1.0, -0.25]
                             AU 4 = (-4.0)d    for p in [-1.0, -0.5],  d in (-0.25, 0.0]
                             AU 4 = 1.0        for p in [-1.0, -0.5],  d in [-1.0, -0.25]

AU 6   liking, joy, pride    AU 6 = (2.0)(d + 0.25)   for d in (-0.25, 0.25)
                             AU 6 = (4.0)(0.5 - d)    for d in [0.25, 0.5)
                             AU 6 = 1.0               for d in [0.5, 1.0]

AU 10  arrogance             AU 10 = (2.0)(d - 0.5)   for d in [0.5, 1.0]
                             AU 10 = 0.0              for d in [0.0, 0.5)

AU 12  joy                   AU 12 = 0.0       for p in [-1.0, 0.0)
                             AU 12 = (2.0)p    for p in [0.0, 0.5)
                             AU 12 = 1.0       for p in [0.5, 1.0]
                             Joy is representative along dominance.

AU 14  enjoyable contempt    AU 14 = 2(0.5 - p) 2(d - 0.5)   for p in [0.0, 0.5), d in (0.5, 1.0]
                             AU 14 = 0.0                      otherwise
                             Enjoyable contempt: arrogance with positive pleasure.

AU 15  sadness               AU 15 = 0.0       for p in (0.0, 1.0]
                             AU 15 = (-2.0)p   for p in (-0.5, 0.0]
                             AU 15 = 1.0       for p in [-1.0, -0.5]

AU 5   worry, terror         AU 5 = 0.0               for a in [-1.0, 0.1]
                             AU 5 = (a - 0.1) / 0.7   for a in (0.1, 0.8)
                             AU 5 = 1.0               for a in [0.8, 1.0]
                             Progression begins at worry, ending at terror.

AU 25  confusion, rage       AU 25 = 0.0               for a in [0.0, 0.3]
                             AU 25 = (a - 0.3) / 0.4   for a in (0.3, 0.7)
                             AU 25 = 1.0               for a in [0.7, 1.0]
                             Progression begins at confusion, ending at rage.

AU 26  disgust, fear         AU 26 = 0.0                 for a in [0.0, 0.35]
                             AU 26 = (a - 0.35) / 0.25   for a in (0.35, 0.6)
                             AU 26 = 1.0                 for a in [0.6, 1.0]
                             Progression begins at disgust, ending at fear.

AU 43  fatigue               AU 43 = 0.0        for a in [0.0, 1.0]
                             AU 43 = -a / 0.6   for a in (-0.6, 0.0)
                             AU 43 = 1.0        for a in [-1.0, -0.6]

Table 7.8: Equations to obtain the degree of activation of each AU in the PAD space

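To make the use of these piecewise functions concrete, the sketch below evaluates a few of the simpler activation functions from Table 7.8 for a given PAD triple. It is only a minimal illustration under the reconstructed equations above, with function names of our own choosing; the remaining AUs follow the same piecewise pattern.

    # Sketch: degree of activation of a few AUs as piecewise functions of
    # pleasure (p), arousal (a) and dominance (d), following Table 7.8.

    def au12(p):
        # Lip Corner Puller: driven by positive pleasure (joy).
        if p < 0.0:
            return 0.0
        return min(1.0, 2.0 * p)          # (2.0)p on [0.0, 0.5), saturating at 1.0

    def au15(p):
        # Lip Corner Depressor: driven by negative pleasure (sadness).
        if p > 0.0:
            return 0.0
        return min(1.0, -2.0 * p)         # (-2.0)p on (-0.5, 0.0], saturating at 1.0

    def au10(d):
        # Upper Lip Raiser: driven by high dominance (arrogance).
        return 0.0 if d < 0.5 else 2.0 * (d - 0.5)

    def au5(a):
        # Upper Lid Raiser: ramps from worry (a = 0.1) to terror (a = 0.8).
        if a <= 0.1:
            return 0.0
        return min(1.0, (a - 0.1) / 0.7)

    if __name__ == '__main__':
        p, a, d = -0.5, 0.6, -0.3          # an "anxious"-like PAD point
        print(au12(p), au15(p), au10(d), au5(a))
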

7.6 Visualization of Personality Traits


Our research in the visualization of personality is motivated by the observation that, over
time, people can make assumptions about, and even determine, the personality of another known
person based on certain characteristics. Therefore, we raise the question: is it possible to
perceive personality from the face of a character? This question, in turn, raises a second
one: how can personality be visualized in a character?
To give an answer to these questions, we explore through experimentation the charac-
teristics, or visual cues, that are taken into account when perceiving personality. Visual
cues are defined as those facial actions, movements or states that can be static or dynamic.
Examples of visual cues are age, gender or attractiveness.
Our premise is based on the work of Arya et al. [7], who state that personality
types should be able to affect all possible facial actions directly and independently of the
mood. The same applies to expressions of emotions. In this way, more believable characters
could be created, since their personality would be manifested in a much clearer manner.
We focus on the perception of the personality traits Extraversion, Agreeableness, and
Emotional Stability taken from the Five Factor model, when using two visual cues: head
orientation and eye gaze. These visual cues are given by the FACS Action Units: AU 51,
AU 52, AU 53, AU 54 for head orientation, and AU 61, AU 62, AU 63, AU 64 for eye gaze.
Extraversion can be defined by several components: venturesomeness, affiliation, positive
affectivity, energy, ascendance, and ambition [151]. On the other hand, people low in
extraversion are described as quiet, reserved, retiring, shy, silent, and withdrawn.
Neuroticism, or low Emotional Stability, represents individual differences in the ten-
dency to experience distress, and in the cognitive and behavioral styles that follow from
this tendency. In contrast, individuals low in Neuroticism can be described as
simply calm, relaxed, even-tempered, and unflappable.
The Agreeableness factor was taken into consideration because our aim is to generate
agents that users can interact with, and this factor measures the level of friendliness,
cooperation, and generosity, among other social or human-related characteristics.

7.6.1 Head Pose and Eye Gaze


As mentioned previously, the aim of this research is to explore the influence of static visual
cues on the perception of a character's personality. The idea of using these two visual
cues, eye gaze and head tilt, was obtained from the work of Bee et al. [15], who studied


their influence when a virtual agent is expected to express social dominance; and Arya et
al. [7] who defined a set of visual cues (among them head turn, head tilt, head down and
averted gaze) to map personality dimensions Dominance and Affiliation.
Finally, we also decided to study neuroticism, as it is the other personality trait that
is present in all personality models, from Eysenck [52] to the present day.

7.6.2 Hypothesis

In the experiment explained in the following, we assess the relationship
between these non-verbal facial cues and the perception of a character's
personality. We expect the following outcomes:

The perception of the three personality traits Extraversion, Agreeableness, and Emotional
Stability is not influenced by whether the virtual character's head points to the
left or to the right.

The perception of Extraversion, Agreeableness, and Emotional Stability is influenced
by the different directions in which the head is pointing. It makes a difference if the
virtual character, for example, is looking upwards-sideways or downwards-center.

Depending on the personality trait, direction plays a role in how these traits are
perceived. We expect that it makes a difference, for example, how Extraversion is
perceived in contrast to Agreeableness when the character is looking upwards.

Head orientation is not the only influence on the perception of the personality traits;
variations in eye gaze direction further influence how the personality traits of the
virtual character are perceived.

7.6.3 Methodology

The methodology consisted of:

1. Creating different static images with combined head poses (upwards, center, down-
wards, sideways) and different eye gaze (upwards, center, downwards, sideways)

2. Carrying out an online survey to measure the perception of personality on those images


3. Evaluating the results to associate visual cues with extraversion, agreeableness, or emotional stability.

7.6.4 Experimental Study

To carry out the experiment we used the virtual agent Alfred, from the Game Engine of
the University of Augsburg (Section 7.3.1).
The visual cues, head movement and gaze orientation, were obtained by varying
horizontal and vertical angles, each in three symmetric steps.
For both the vertical and the horizontal axis, the neutral center position remained at 0.0
degrees. The positions for turning the head sideways were set to 8.5 degrees for looking left,
from the agent's point of view, and -8.5 degrees for looking right. For tilting the head
vertically, the high target was located at 8.0 degrees, whereas the low one was at -8.0
degrees. Since the distance between the neck and eye joints weakened the visible effect of
the eye movement, the vertical angles had to be doubled for the eye targets.
All limits were chosen in regard to the goal that the pupils should remain visible, even
when the eyes look in the opposite direction of the head.
By combining these angles, nine different targets could be provided for the survey.
These were then converted to Cartesian coordinates using a fixed radius for all target
angles, and sent to the virtual agent's IK component for every combination of head pose
and eye gaze. The 81 resulting expressions were captured as screenshots.
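
The angle-to-target conversion can be sketched as follows. The step sizes are the ones reported above, but the coordinate convention, the fixed radius and the function names are assumptions for illustration only; the actual IK interface of the Augsburg engine is not reproduced here.

    import math
    from itertools import product

    # Sketch: build Cartesian head/gaze targets from horizontal and vertical
    # angles (degrees), using a fixed radius, as described in the text.
    HEAD_H = [-8.5, 0.0, 8.5]            # sideways steps (agent's point of view)
    HEAD_V = [-8.0, 0.0, 8.0]            # vertical steps
    EYE_V  = [2.0 * v for v in HEAD_V]   # vertical eye angles were doubled

    def target(h_deg, v_deg, radius=1.0):
        """Convert a (horizontal, vertical) angle pair into a Cartesian target point."""
        h, v = math.radians(h_deg), math.radians(v_deg)
        x = radius * math.cos(v) * math.sin(h)   # left/right
        y = radius * math.sin(v)                 # up/down
        z = radius * math.cos(v) * math.cos(h)   # forward
        return (x, y, z)

    head_targets = [target(h, v) for h, v in product(HEAD_H, HEAD_V)]   # 9 head poses
    eye_targets  = [target(h, v) for h, v in product(HEAD_H, EYE_V)]    # 9 eye poses
    print(len(head_targets) * len(eye_targets))                         # 81 combined expressions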
However, to ensure a sufficient number of votes per picture, the number of samples had
to be reduced and redundant combinations eliminated. To do this, preliminary observations
were carried out with a reduced group of users, which showed that the direction of
lateral head movements did not make much of a difference. Thus, we decided to merge
left- and right-looking images into one sideways category. To keep the natural
variation, about half of the required images were chosen randomly to gaze in one
or the other direction. The associated eye gaze targets were mirrored to keep the proper
relation between head and eye movements.
In the end, we worked with a reduced set of 54 images (6 head directions x 9 eye
directions) of Alfred. Combinations of orientations are written as vertical-horizontal, e.g.
upwards-center. Table 7.9 shows some samples of head orientations.


Head orientation samples: head neutral, head turned left, head turned right, head up, head down.

Table 7.9: Varying head orientation.

Questionnaire

133 subjects (47 female and 86 male) participated in the experiment through an online
questionnaire. The mean age was 26.6 (SD = 8.8). The questions were provided in
English, German or Spanish, depending on the subject's mother tongue.
The questionnaire consisted of 54 static images, where each image was judged at least 10
times. The images corresponded to a virtual character in which head orientation (upwards-
center, upwards-sideways, middle-center, middle-sideways, downwards-center, downwards-
sideways) and eye gaze (upwards-center, downwards-center, upwards-sideways, downwards-
sideways) were combined.
Then the experimental stimuli, which consisted of 15 images per user, were presented
one at a time, in random order. For each stimulus the participant had to answer six
items of the Ten-Item Personality Inventory (TIPI) [66], presented on a 7-point Likert scale,
where 1 corresponded to Disagree Strongly and 7 to Agree Strongly. Table 7.10 shows
the items presented for each image of the questionnaire.


Trait                  Items
Extraversion           Extraverted, enthusiastic / Reserved, quiet
Agreeableness          Critical, quarrelsome / Sympathetic, warm
Emotional Stability    Anxious, easily upset / Calm, emotionally stable

Table 7.10: Questionnaire items for perception of head orientation.
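
Each trait in Table 7.10 is measured by one positively keyed and one reverse-keyed item. A common way to score such TIPI pairs on a 7-point scale is to recode the reverse-keyed item as (8 - rating) and average the pair; the short sketch below illustrates this standard convention, which is not something specified in this thesis.

    # Sketch: scoring a TIPI item pair on a 7-point Likert scale.
    # The reverse-keyed item is recoded as (8 - rating) and the pair is averaged.

    def tipi_score(positive_rating, reverse_rating):
        """Trait score from one positively keyed and one reverse-keyed item (1-7)."""
        return (positive_rating + (8 - reverse_rating)) / 2.0

    # Example ratings for one stimulus image (invented values):
    extraversion = tipi_score(5, 3)   # "Extraverted, enthusiastic" / "Reserved, quiet"
    agreeableness = tipi_score(4, 2)  # "Sympathetic, warm" / "Critical, quarrelsome"
    stability = tipi_score(6, 2)      # "Calm, emotionally stable" / "Anxious, easily upset"
    print(extraversion, agreeableness, stability)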

7.6.5 Results

The following results show the mean values and standard deviations related to how
Alfred was perceived over all ratings:

Neither as extroverted nor as introverted: µ = 3.7, σ = 1.2

As neutral regarding agreeableness: µ = 3.8, σ = 1.4

As slightly emotionally stable: µ = 4.3, σ = 1.4.

Looking to the Right and to the Left

To study whether the side the agent is looking to has an influence on the perception
of personality, we assumed that, in general, there is no noticeable difference in the
personality traits whether the agent looks to the right or to the left.
The method was to apply a two-tailed independent t-test to the overall values for
Extraversion, Agreeableness, and Emotional Stability, depending on which side the virtual
agent is looking.
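
For illustration, such a comparison can be reproduced with an independent two-sample t-test and the effect size r = sqrt(t^2 / (t^2 + df)). The sketch below uses SciPy on random placeholder ratings, since the original survey data are not reproduced here.

    import numpy as np
    from scipy import stats

    # Sketch: two-tailed independent t-test plus effect size r for ratings of
    # one trait, split by whether the head looks left or right.
    # The arrays below are random placeholders, not the survey data.
    rng = np.random.default_rng(0)
    left_ratings = rng.normal(3.8, 1.2, size=165)    # e.g. Extraversion, head to the left
    right_ratings = rng.normal(3.9, 1.3, size=166)   # e.g. Extraversion, head to the right

    t, p = stats.ttest_ind(left_ratings, right_ratings)   # two-tailed by default
    df = len(left_ratings) + len(right_ratings) - 2       # 329 degrees of freedom here
    r = np.sqrt(t**2 / (t**2 + df))                        # effect size

    print(f"t({df}) = {t:.2f}, p = {p:.2f}, r = {r:.2f}")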
The results of the t-test with 329 degrees of freedom for the trait of Extraversion,
t(329) = 1.2, p = .25, r = .07, showed that there was no significant difference for
Extraversion between Alfred looking to the left (µ = 3.8, σ = 1.2) and looking to the right
(µ = 3.9, σ = 1.3), given that the obtained p-value is above the probability threshold.
Moreover, the effect size r has a value below .1, which demonstrates the weak relationship
between Extraversion and the side to which the character is looking. Therefore, we can
accept the assumption of no noticeable difference in Extraversion when the agent looks to
the left or to the right.


Agreeableness also did not show any significant differences for the virtual character
Alfred between looking to the left (µ = 3.8, σ = 1.5) and looking to the right (µ = 3.7,
σ = 1.4), t(329) = .62, p = .54, r = .03. Again, given that the obtained p-value is above
the threshold and the effect size r is below .1, there is only a weak relationship between
Agreeableness and the side to which the character is looking. Hence, we can also accept
the assumption of no noticeable difference.
Finally, Emotional Stability did not show any significant differences either between
looking to the left (µ = 4.4, σ = 1.4) and looking to the right (µ = 4.4, σ = 1.2),
t(329) = .35, p = .73, r = .02. Thus, as with Extraversion and Agreeableness, the
effect size r is below .1 for all three personality dimensions and can be interpreted
as not even a small effect.

Extraversion

Positioning the head upwards-sideways got the highest rating for Extraversion,
while directing the head downwards-center got the lowest rating for Extraversion
(Figures 7.32 and 7.33).

Figure 7.32: Mean values for Extraversion dependent on the head orientation.

The one-way ANOVA showed that there was a significant effect of the different head
orientations on the perception of Extraversion, F(5, 663) = 15.4, p < .001,
η2 = .10.
Tukey post hoc tests revealed several significant differences in the perception of
Extraversion depending on the head orientation. Table 7.11 shows these results, where each
row and column corresponds to a combination of vertical and horizontal positioning, e.g.
U-C means upwards-center.


Figure 7.33: The head orientation (downwards-center ) with the lowest rating (left) and the
one (upwards-sideways) with the highest rating (right) for Extraversion.

       U-S    U-C    M-S    M-C    D-S    D-C

U-S    -      n.s.   *      ***    ***    ***
U-C    n.s.   -      n.s.   +      *      ***
M-S    *      n.s.   -      n.s.   n.s.   ***
M-C    ***    +      n.s.   -      n.s.   *
D-S    ***    *      n.s.   n.s.   -      +
D-C    ***    ***    ***    *      +      -

Table 7.11: Post-hoc comparisons for Extraversion and the varying head orientations.
Vertical orientations: U = upwards, M = middle, D = downwards. Horizontal orientations:
S = sideways, C = center. + p < .1, * p < .05, *** p < .001, n.s. = not significant.
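
For reference, an omnibus test and pairwise comparisons of this kind can be computed with a one-way ANOVA followed by Tukey's HSD. The sketch below shows one possible way to do so with SciPy and statsmodels; the rating vectors are random placeholders generated around the reported means, not the survey data.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Sketch: one-way ANOVA over the six head orientations, followed by
    # Tukey post hoc comparisons. Ratings are random placeholders.
    rng = np.random.default_rng(1)
    orientations = ["U-S", "U-C", "M-S", "M-C", "D-S", "D-C"]
    means = [4.3, 3.9, 3.8, 3.5, 3.4, 3.0]                  # mean ratings reported in the text
    groups = [rng.normal(m, 1.2, size=110) for m in means]

    f_stat, p_value = stats.f_oneway(*groups)                # omnibus test
    print(f"F(5, {sum(map(len, groups)) - 6}) = {f_stat:.1f}, p = {p_value:.3g}")

    ratings = np.concatenate(groups)
    labels = np.repeat(orientations, [len(g) for g in groups])
    print(pairwise_tukeyhsd(ratings, labels, alpha=0.05))    # pairwise comparisons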

Alfred with its head pointing upwards-sideways (µ = 4.3, σ = 1.2) was perceived as
significantly more extraverted than with the head pointing to the middle-sideways (µ = 3.8,
σ = 1.3, p < .05), to the middle-center (µ = 3.5, σ = 1.2, p < .001), downwards-sideways
(µ = 3.4, σ = 1.1, p < .001), and downwards-center (µ = 3.0, σ = 1.1, p < .001).
An upwards-center head position (µ = 3.9, σ = 1.2) is perceived as more extraverted
than a head looking to the middle-center (µ = 3.5, σ = 1.2, p < .1), downwards-sideways
(µ = 3.4, σ = 1.1, p < .05), and downwards-center (µ = 3.0, σ = 1.1, p < .001).
A head directed middle-sideways (µ = 3.8, σ = 1.3) is perceived as more extraverted
than a head looking downwards-center (µ = 3.5, σ = 1.2, p < .001).
As we applied a two-tailed post hoc test, the significant results are also valid vice versa.
In general we can see that when Alfred has the head sideways, the perception of Ex-
traversion is increased independently of the vertical orientation (Figure 7.32). Nevertheless,
the vertical orientation also has an influence on the perception. A raised head is perceived as


more extraverted than a head oriented to the middle or downwards.


Regarding eye gaze, we could not find any significant differences among the nine eye gaze
directions within the six head orientations.

Agreeableness

The highest value for Agreeableness was achieved when the virtual character looked
downwards-center. The lowest value was achieved for a virtual character looking
upwards-center (Figures 7.34 and 7.35).

Figure 7.34: Mean values for Agreeableness dependent on the head orientation.

Figure 7.35: The head orientation (upwards-center ) with the lowest rating (left) and the
one (downwards-center ) with the highest rating (right) for Agreeableness.

There was a significant effect of the different head orientations on the perception of
Agreeableness, F(5, 663) = 14.4, p < .001, η2 = .09.
Table 7.12 shows the results of the Tukey post hoc tests, which revealed several
significant differences in the perception of Agreeableness depending on the head orientation.


       U-S    U-C    M-S    M-C    D-S    D-C

U-S    -      n.s.   ***    **     ***    ***
U-C    n.s.   -      ***    ***    ***    ***
M-S    ***    ***    -      n.s.   n.s.   n.s.
M-C    **     ***    n.s.   -      n.s.   n.s.
D-S    ***    ***    n.s.   n.s.   -      n.s.
D-C    ***    ***    n.s.   n.s.   n.s.   -

Table 7.12: Post-hoc comparisons for Agreeableness and varying head orientation. Vertical
orientations: U = upwards, M = middle, D = downwards. Horizontal orientations: S =
sideways, C = center. ** p < .01, *** p < .001, n.s. = not significant

The head directed upwards-sideways (µ = 3.3, σ = 1.3) was perceived as less agreeable
than a head directed to the middle-sideways (µ = 4.0, σ = 1.4, p < .001), a centered head
(µ = 3.9, σ = 1.3, p < .01), a head downwards-sideways (µ = 4.1, σ = 1.4, p < .001), or
downwards-center (µ = 4.3, σ = 1.2, p < .001).
Alfred with the head looking upwards-center (µ = 3.1, σ = 1.3) was perceived as less
agreeable than when looking middle-sideways (µ = 4.0, σ = 1.4, p < .001), to the
middle-center (µ = 3.9, σ = 1.3, p < .001), downwards-sideways (µ = 4.1, σ = 1.4, p < .001), and
downwards-center (µ = 4.3, σ = 1.2, p < .001).
The lowest values for Agreeableness were achieved for the virtual character looking
upwards. Higher values could be achieved for Alfred with centered head, and even slightly
higher values were achieved for looking downwards (Figure 7.34).
Regarding eye gaze, also for Agreeableness we could not find any significant differences
for the varying eye gaze directions dependent on the six head orientations.

Emotional Stability

Alfred looking middle-sideways achieved the highest ratings for Emotional Stability and
the upwards-center orientation achieved the lowest ratings (Figures 7.36 and 7.37).
There was a significant effect of the different head orientations on the perception of
Emotional Stability, F(5, 663) = 3.6, p < .01, η2 = .02.
Tukey post hoc tests revealed only one significant difference in the perception of
Emotional Stability depending on the head orientation.
Alfred's head directed to the middle-sideways (µ = 4.7, σ = 1.2) was perceived with
significantly higher Emotional Stability than when directed upwards-center (µ = 4.0, σ = 1.3)


Figure 7.36: Mean values for Emotional Stability dependent on the head orientation.

Figure 7.37: The head orientation (upwards-center ) with the lowest rating (left) and the
one (middle-sideways) with the highest rating (right) for Emotional Stability.

with p < .001.


Looking to the middle-sideways (µ = 4.7, σ = 1.2) was perceived as more emotionally
stable than looking downwards-center (µ = 4.2, σ = 1.5, p < .1).
For Emotional Stability, the highest values were perceived when Alfred's vertical head
orientation was directed to the middle, independently of whether it was centered or side-oriented.
Furthermore, looking upwards or downwards obtained, in general, the lowest values for Emotional
Stability (Figure 7.36).

7.6.6 Discussion

The obtained results provided us with data that could be used to improve the modeling
of personality in virtual agents and therefore, the interaction between real users and these
agents. An important aspect was the study of visual cues for certain personality traits that


have not been studied before, such as emotional stability and agreeableness.


With the experiment we concluded that, for the Alfred character, the upwards-sideways
head orientation is related to extraversion, the downwards-center head orientation
to agreeableness, and the middle-sideways head orientation to emotional stability. We also
found that the side to which the character is facing (left or right) and the eye gaze do not
influence the perception of personality traits.

7.7 Summary
This chapter described the algorithms and methods we used to visualize emotions, mood
and personality.
Emotions were visualized in an MPEG-4 based avatar, in which we implemented an
algorithm that, using expressions of universal emotions, is capable of generating expressions
for intermediate emotions, either by combining two universal emotions or by categorizing one
of them. As a result, we obtained a set of facial expressions that were correctly identified
by a group of subjects, demonstrating the validity of the algorithm.
One of the novelties of this work is the generation of facial expressions for moods. Our
main contribution is the development of a set of FACS-based functions which locate AUs
into the PAD Space model, allowing us to associate those AUs with the moods correspond-
ing to the PAD model. In this way, we can generate facial expressions for the moods using
the AUs activated in the corresponding octants of the space.
Finally, for personality we performed an experiment to explore how the visual cues
head pose and eye gaze influence the perception of three personality traits: extraversion,
agreeableness and emotional stability. As a result, we found that for the evaluated
character, extraversion was associated with a head in an upwards-sideways position, agreeableness
with the head in a downwards-center position, and emotional stability with a
middle-sideways position. Using this subjective information about how personality is
perceived, we can enhance facial expressions, achieving more believable characters.

Chapter 8

Evaluation

Not everything that can be counted counts, and not everything that counts can be counted.
Albert Einstein

The results of formal evaluations are presented in this chapter, aiming at assessing the
degree of users' perception, the believability of the characters, and the effectiveness of the
computational affective model we have presented.
To achieve our objective, we performed a number of individual experiments with dif-
ferent subjects. That was the most accurate way to evaluate each part of the framework
and guarantee its functionality, given that a quantitative evaluation of the whole system
was infeasible.
The chapter is organized as follows. First, we present the general objectives of this
evaluation and what we want to achieve. Then, the following subsections explain in detail
the subjects, apparatus and procedures used in each experiment. Finally, a summary of
the chapter is given.

8.1 Objectives of Evaluation


As seen in the previous chapters, the framework we have developed has three differen-
tiated modules, each one with a specific functionality. The first module corresponds to
the semantic module, which uses ontologies and logic rules for context representation and
elicitation of emotions given certain context. The second module is the affective module,


which takes the emotions elicited by the semantic model, maps them into a Pleasure-
Arousal-Dominance space (PAD) and processes them to obtain the mood of the character.
Finally, the third module is the visualization module, which implements the algorithms
and methods for the generation of facial expressions of emotions and mood.
Nevertheless, an evaluation of the whole framework all at once is a difficult task, because
there would be too many variables to take into consideration. That is why we broke down
the evaluation process and designed various experiments which validate our work and
give us hints on how to improve the generation of context and the visualization of
affective traits. The evaluation objectives are:

1. Validation of the expressions of basic and intermediate emotions, and their perception
as such by the subjects.

2. Validation of the expressions of mood elicited by the Affective Module, and their
perception as such by the subjects.

3. Evaluation of how the emotions elicited in different scenes of the Context Represen-
tation Module are perceived by the subjects.

In the following we explain how we achieve the former objectives through subjective
experiments.

8.2 Experiment: Visualization of Emotions


In this section we present an experiment that validates the facial expressions for universal
and intermediate emotions that were generated using the MPEG-4 standard.

8.2.1 Hypothesis
Before carrying out this experiment we formulated the following hypotheses:

Expressions of universal emotions are better perceived than expressions of intermediate emotions, obtaining higher success rates in static images.

Given that intermediate emotions are obtained from the combination of two universal
ones, subjects would be capable of identifying at least one of the universal emotions that
compose the intermediate one.


Regarding the second hypothesis, we did not ask for recognition of the specific inter-
mediate emotion because, as Ekman mentioned in [48], the name one can give to an
emotion depends a lot on context and the situation in which that emotion arises.

8.2.2 Methodology
To perform the evaluation, we first selected a group of images corresponding to a set of the
generated facial expressions. This selection was the result of previous observations, where
we realized that different emotions shared the same facial expression. Table 8.1 contains a
list of the emotions whose expressions were evaluated.

Nr. Emotion Nr. Emotion


1 Joy 9 Love
2 Sadness 10 Disappointment
3 Disgust 11 Satisfaction
4 Anger 12 Pity
5 Surprise 13 Admiration
6 Fear 14 Reproach
7 Gloating 15 Gratitude
8 Hate N Neutral

Table 8.1: Evaluated emotions

To evaluate the validity of the generated expressions, we carried out a subjective study
with the following methodology:

1. Completion of an in-situ paper survey to measure the perception of emotions in the generated images.

2. Evaluation of the results to validate the recognition of expressions of universal emotions, and to validate the algorithm for generation of expressions of intermediate emotions.

8.2.3 Experimental Study


To carry out the experiment we used the virtual agent Alice, modeled using the FaceGen
software [76] and animated using the Xface toolkit (Section 7.3.2).
75 students of the 2nd year of Computer Science at the Universitat de les Illes Balears,
Spain, with no previous knowledge about emotion recognition participated in the survey.


Their age ranged between 18 and 42 years old (mean 22).


The images were projected on a 120 x 120 cm screen, and the subjects had to fill
in the paper questionnaire in Spanish. The total evaluation time was 40 minutes.
The item we wanted to evaluate was the recognition of universal emotions in
images corresponding to 16 facial expressions. With this experiment we verified the
efficacy of the algorithm used for the generation of intermediate emotions.

Questionnaire

The experimental stimuli consisted of a set of 16 images: 15 images corresponding to
high-intensity emotions, and 1 image of the neutral expression.
Each image was projected for 30 seconds before continuing with the next one.
For each image, subjects had to choose which universal emotion was associated with the
projected image. The options were:

(CN): neutral expression
(A): high joy
(T): high sadness
(E): high anger
(D): high disgust
(S): high surprise
(M): high fear
(Other): specify another affective word to be associated to the expression.

Results

Table 8.2 shows the results of the recognition of universal emotions for each of the 16
expressions. The first column presents the emotion associated with the evaluated expression.
The second column presents the emotions that are combined to generate the corresponding
intermediate emotion. The following columns present the percentage of hits (%)
regarding recognition of the facial expressions.


Evaluated Expression   Universal Emotion(s)   Neutral   J   Sa   D   A   Su   F
Neutral 80 9 1 2 0 0 0
Joy J 1 93 0 1 0 2 0
Sadness Sa 0 0 87 8 2 0 0
Disgust D 1 0 2 60 28 2 0
Anger A 0 0 3 3 84 2 0
Surprise Su 0 1 0 2 5 70 15
Fear F 2 0 3 6 1 52 33
Gloating J/Su 16 42 1 2 2 26 6
Hate A 0 0 2 18 68 0 1
Love J/Su 2 59 0 0 0 27 1
Disappointment D/Sa 18 1 38 17 0 11 7
Satisfaction J 4 84 1 0 2 2 0
Pity Sa/F 1 0 24 21 1 1 43
Admiration J/Su 4 46 0 0 1 41 2
Reproach (H) D/A 4 1 0 54 33 1 0
Gratitude (H) J 5 83 0 2 1 1 1

Table 8.2: % of recognition of universal emotions in the 16 facial expressions. J: Joy, Sa:
Sadness, D: Disgust, A: Anger, Su: Surprise, F: Fear
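
The percentages in Table 8.2 are simply row-normalized response counts. A minimal sketch of this computation is given below; the list of answers is invented for illustration.

    from collections import Counter

    # Sketch: per-expression recognition percentages, as in Table 8.2.
    OPTIONS = ["Neutral", "J", "Sa", "D", "A", "Su", "F"]

    def recognition_percentages(answers):
        counts = Counter(answers)
        n = len(answers)
        return {opt: 100.0 * counts.get(opt, 0) / n for opt in OPTIONS}

    # Invented responses of 75 subjects for one projected image:
    answers_for_fear = ["F"] * 25 + ["Su"] * 39 + ["D"] * 5 + ["Sa"] * 2 + ["Neutral"] * 2 + ["A"] * 2
    print(recognition_percentages(answers_for_fear))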

Universal Emotions. 6 expressions of universal emotions were studied. Figure 8.1
depicts the recognition hit percentages presented in Table 8.2.

Figure 8.1: % of recognition success for Universal emotions.


As can be seen in Figure 8.1, in most of the cases the expressions of joy, sadness, anger
and surprise were recognized. On the other hand, disgust and fear obtained lower
success rates. Disgust tended to be confused with anger, and fear with surprise.
This last result has to do with the fact that our expressions for surprise and fear
share some facial configurations, such as raised eyebrows and an open mouth. Nevertheless,
this result was expected because both expressions are mainly differentiated through
the context in which the event takes place and through timing issues, which cannot be
represented in a still image.
Moreover, Ekman and Friesen [47] see fear differing from surprise in three ways:

1. Whilst surprise is not necessarily pleasant or unpleasant, even mild fear is un-
pleasant.
2. Something familiar can induce fear, but hardly surprise (for example, a visit to
the dentist).
3. Whilst surprise usually disappears as soon as it is clear what the surprising event
was, fear can last much longer, even if the event is known.

Intermediate Emotions. 9 emotions were evaluated: 3 categorized from one
universal emotion and 6 originated from the combination of two universal emotions. The
emotions categorized from one universal emotion are hate, satisfaction and gratitude.
The emotions obtained from the combination of two universal ones are disappointment,
gloating, love, pity, admiration and reproach.
From Figure 8.2 we can observe that all the expressions categorized from one universal
emotion were correctly recognized by the majority of subjects.
Regarding expressions for emotions generated from the combination of two universal
ones, Figure 8.3 depicts the success rates obtained when subjects evaluated
these images. It can be seen that, for all these expressions, the two universal emotions
used for generating the intermediate expression obtained the highest recognition
percentages.

8.2.4 Discussion
At the beginning of this section, two hypotheses were formulated. In the first hypothesis we
wanted to see if expressions of universal emotions were better perceived than expressions of


Figure 8.2: % of recognition success for Intermediate emotions categorized from 1 universal
emotion

Figure 8.3: % success for Intermediate emotions generated from combining 2 universal
emotions


intermediate emotions. From the results obtained we could observe that universal emotions
were indeed easily recognized. This fact contributed to the good results in the recognition of
intermediate expressions, given that the universal emotions used to generate them were
correctly assessed.
Regarding the second hypothesis, we wanted to know if subjects could identify the universal
emotion (or emotions) used for the generation of the expression of the intermediate
emotion. Indeed, the results showed that the great majority of the participants recognized,
with a high percentage, the universal emotion from which the intermediate one is obtained.
Moreover, the results showed that for intermediate expressions generated from two universal
ones, the two corresponding universal emotions obtained the highest recognition rates,
above the rest of the universal emotions.


8.3 Experiment: Visualization of Moods


The aim of this experiment is to validate the rules and functions formulated in Section 7.5,
and at the same time to generate expressions for each of the 8 moods defined by the PAD
model (Exuberant, Bored, Disdainful, Dependent, Docile, Hostile, Anxious and Relaxed ).
The set of facial expressions was obtained by combining different levels of pleasure
(P), arousal (A) and dominance (D), where each level corresponded to low (values close to
zero), medium, and high (values close to one) intensities. In this way, we obtained different
AU combinations that describe expressions that can be associated with different PAD
values.
Then, these expressions were subjectively evaluated by a group of subjects, shedding
light on the PAD intensity combinations that result in expressions corresponding
to a correct identification of a certain mood.

8.3.1 Hypothesis
The generation of facial expressions for moods is a topic that has been poorly addressed
by previous research. Therefore, this evaluation is a mix of experimentation and
validation, expecting to prove the following hypothesis:

All 8 moods have corresponding facial expressions, which are described by the AUs
activated in the PAD octant of that mood according to the rules formulated in Sec-
tion 7.5.

8.3.2 Methodology
The methodology of this experiment goes from the creation of the expressions to evaluate
to the analysis of the results. The methodology steps are:

1. Generation of a set of images for each mood. Given that each mood has 3 dimensions
(pleasure, arousal and dominance) and 3 intensity values per dimension (low, medium,
high), in total 8 x 3^3 = 216 images were generated.

2. Execution of an online survey using a validated tool to measure the perception of mood in those images.

3. Evaluation of the results to verify that expressions associated to each mood exist,
and to obtain the AUs that describe those expressions.


8.3.3 Experimental Study

As mentioned above, we generated for each of the 8 moods a total of 3^3 = 27 images of
expressions corresponding to the combinations of {low pleasure, medium pleasure, high
pleasure}, {low arousal, medium arousal, high arousal} and {low dominance, medium
dominance, high dominance}. To establish these limits, the degree low corresponded
to 0.1, medium to 0.5 and high to 1.0.
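
The stimulus set can be enumerated as sketched below, where the low/medium/high magnitudes are applied with the signs of each mood octant. The naming scheme and the data structures are ours; rendering the corresponding expression is of course specific to the character engine and is not shown.

    from itertools import product

    # Sketch: enumerating the 8 x 3^3 = 216 stimuli. For each mood octant the
    # low/medium/high magnitudes are applied with the octant's signs.
    MOODS = {
        "Exuberant": (+1, +1, +1), "Dependent": (+1, +1, -1),
        "Relaxed":   (+1, -1, +1), "Docile":    (+1, -1, -1),
        "Hostile":   (-1, +1, +1), "Anxious":   (-1, +1, -1),
        "Disdainful": (-1, -1, +1), "Bored":    (-1, -1, -1),
    }
    LEVELS = {"L": 0.1, "M": 0.5, "H": 1.0}

    stimuli = []
    for mood, (sp, sa, sd) in MOODS.items():
        for (lp, vp), (la, va), (ld, vd) in product(LEVELS.items(), repeat=3):
            pad = (sp * vp, sa * va, sd * vd)
            stimuli.append((f"{mood[:2]}-{lp}{la}{ld}", pad))   # e.g. "An-HHM"

    print(len(stimuli))   # 216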
This gave us a total of 216 images that were randomly evaluated through an online
survey. Expressions were generated using the virtual agent Alfred, from the Game Engine of
the University of Augsburg (Section 7.3.1). Some examples of the evaluated images are
presented in Table 8.3.

Table 8.3: Facial expressions in the mood quadrants of the PAD Space. Upper row:
Anxious, Bored, Dependent, Disdainful. Lower row: Docile, Exuberant, Hostile, Relaxed

We evaluated the expressions based on their PAD values and not on their adjectives,
because we wanted to do it in a simpler manner that could be understood across cultures,
avoiding the issue of translating each adjective while still maintaining its correct meaning.

Questionnaire

109 subjects (59 male and 50 female) between 19 and 55 years old, with a mean age of 29.2
(SD = 7.1) participated in the experiment through an online questionnaire.
To assess the images we used the Self-Assessment Manikin questionnaire, whose explanation
was provided in English and Spanish, so that we could get a larger sample of subjects.
The Self-Assessment Manikin (SAM) [22] is a non-verbal, graphic representation of the


three dimensions: Pleasure, Arousal and Dominance. It directly assesses the pleasure,
arousal and dominance associated with the response to an object or event. SAM ranges from
a smiling, happy figure to a frowning, unhappy figure when representing the pleasure
dimension; from an excited, wide-eyed figure to a relaxed, sleepy figure for the arousal
dimension; and from a small to a large figure that indicates maximum control in the
situation in the case of the dominance dimension.
The experimental stimuli consisted of 18 static images, randomly selected from the pool
of 216 images. Then, each subject had to rate each of these 18 images using SAM, which
was presented on a 5-point Likert scale, where 1 corresponded to the minimum value of
the dimension and 5 to the maximum. For analysis purposes, this scale was normalized
between -1 and 1. The questions were of the form:
(1) How is Alfred feeling? The possible answers corresponded to the SAM items for pleasure:
very displeased (-1.0), displeased (-0.5), neutral (0.0), pleased (0.5), very pleased (1.0).
(2) How energetic does Alfred seem? The possible answers corresponded to the SAM items for
arousal: very relaxed (-1.0), relaxed (-0.5), neutral (0.0), excited (0.5), very excited (1.0).
(3) How dominant is Alfred? The possible answers corresponded to the SAM items
for dominance: very submissive (-1.0), submissive (-0.5), neutral (0.0), dominant (0.5), very
dominant (1.0).
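
For reference, the mapping from the 5-point SAM answers to the normalized scale is a simple linear transform, as the short sketch below shows.

    # Sketch: mapping the 5-point SAM answers onto the normalized [-1, 1] scale.
    def normalize_sam(rating):            # rating in {1, 2, 3, 4, 5}
        return (rating - 3) / 2.0         # -> -1.0, -0.5, 0.0, 0.5, 1.0

    print([normalize_sam(r) for r in range(1, 6)])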
Figure 8.4 shows a page of the online questionnaire.

Figure 8.4: On-line questionnaire for measuring mood using SAM


The first two images presented to the subjects were for training purposes. They corre-
sponded to the moods Exuberant and Bored with their maximum values: +P+A+D and
-P-A-D, respectively. Thus the subject could evaluate these images using the SAM figures,
and then continue with the remaining 16 random images.

8.3.4 Results

The obtained results opened the possibility to associate facial expressions to moods defined
by the values of pleasure, arousal and dominance.
When analyzing the results, we could observe that each image was evaluated between
5 and 13 times (with a mean value of 10). The explanation for this variability is the
random selection of the images.
Table 8.4 presents the mean values and standard deviations after analyzing each PAD
dimension in all images corresponding to one of the eight moods. Figure 8.5 shows graphi-
cally the mean values for each dimension for each mood. Blue bars correspond to pleasure
(P) values, red bars to arousal (A) and green bars to dominance (D).

Mood Pleasure Arousal Dominance


Mean SD Mean SD Mean SD
Anxious -P+A-D -0.5 0.43 0.0 0.57 -0.3 0.61
Bored -P-A-D -0.4 0.50 -0.5 0.50 -0.5 0.50
Dependent +P+A-D 0.5 0.37 0.3 0.44 0.3 0.48
Disdainful -P-A+D -0.4 0.45 -0.1 0.61 -0.1 0.61
Docile +P-A-D 0.5 0.31 -0.1 0.49 0.0 0.56
Exuberant +P+A+D 0.4 0.44 0.4 0.44 0.4 0.48
Hostile -P+A+D -0.5 0.44 0.4 0.54 0.2 0.62
Relaxed +P-A+D 0.4 0.35 -0.2 0.52 0.1 0.52

Table 8.4: Mean analysis of recognition of facial expressions

Given that the three dimensions pleasure, arousal and dominance are independent, we
need to analyze each image to obtain the best combination of dimension intensities, and
hence the significant image(s) representing each mood.
In the following, a detailed analysis is presented, so that we know exactly which
expressions were recognized as representative of each mood.


Figure 8.5: Mean analysis of recognition of facial expressions

Mood: Anxious

This mood corresponds to the -P+A-D quadrant of the PAD model. Therefore, we expect
a set of expressions corresponding to this mood to be recognized as with negative pleasure
and dominance, and positive arousal.
Figure 8.6 presents the distribution of the mean recognition values for each of the 27
images corresponding to this mood.

Figure 8.6: Mean values of the analysis of facial expressions of mood anxious

Table 8.5 presents the mean values and standard deviations of the expressions
recognized as anxious. The mean values, representing the average recognition rate of each


dimension, and the standard deviations, representing its variability, showed that a set of six
images was identified as -P+A-D.

Image Pleasure Arousal Dominance


Mean SD Mean SD Mean SD
01-An-HHH -0.67 0.32 0.38 0.52 -0.04 0.54
02-An-HHM -0.83 0.35 0.44 0.30 -0.22 0.71
09-An-HLL -0.25 0.27 0.17 0.25 -0.33 0.51
10-An-MHH -0.56 0.39 0.22 0.61 -0.22 0.56
11-An-MHM -0.83 0.35 0.39 0.60 -0.06 0.84
19-An-LHH -0.41 0.37 0.09 0.53 -0.18 0.64

Table 8.5: Mean values and Standard Deviations for expressions of Anxiousness

The former results constrain our analysis of the images, whose validity we could then assess
through the recognition hit rates for each image. Table 8.6 shows the hit rates of the images
that obtained the highest scores and the number of subjects that evaluated each image. The
discarded expressions are those in which P, A or D was wrongly identified by more
than half of the subjects.

Image -P +A -D Number of Subjects


02-An-HHM 8 7 6 9
10-An-MHH 7 5 5 10
19-An-LHH 7 6 6 11

Table 8.6: Hit rates for expressions of anxious mood
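
The acceptance criterion can be summarized as follows: an image is kept only if each of its three dimensions was placed in the correct half of the scale by at least half of the raters (equivalently, it is discarded when more than half of them placed a dimension wrongly). A compact sketch of this check, applied to the hit counts of Table 8.6, is shown below.

    # Sketch: the acceptance criterion applied to the hit counts of Table 8.6.
    # Each entry: (hits for -P, hits for +A, hits for -D, number of subjects).
    HITS = {
        "02-An-HHM": (8, 7, 6, 9),
        "10-An-MHH": (7, 5, 5, 10),
        "19-An-LHH": (7, 6, 6, 11),
    }

    def accepted(hits_p, hits_a, hits_d, n):
        """Keep an image if each dimension was placed correctly by at least half of the raters."""
        return all(h >= n / 2.0 for h in (hits_p, hits_a, hits_d))

    for image, (hp, ha, hd, n) in HITS.items():
        print(image, accepted(hp, ha, hd, n))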

From the previous results we could observe that the images identified in the same
quadrant as anxious have:
- Negative Pleasure at any intensity level
- Positive Arousal intensity at its maximum level
- Negative Dominance intensity between its medium and maximum level

Using these results we can see which AUs were activated according to the in-
tensities of pleasure, arousal and dominance of the selected expressions, resulting in
AU 1, AU 2, AU 4, AU 15 for pleasure and dominance and AU 5, AU 25, AU 26 for arousal.
These AUs correspond to the ones in the quadrant of anxiousness. Figure 8.7 shows the
validated faces for this mood.


Figure 8.7: Facial expressions with their corresponding PAD intensities for anxious

Mood: Bored

This mood corresponds to the -P-A-D quadrant of the PAD model. Therefore, we expect
expressions corresponding to this mood to be recognized as with negative pleasure, arousal
and dominance.
Figure 8.8 presents the distribution of the mean recognition values of each of the 27
images corresponding to the mood bored.

Figure 8.8: Mean values of the analysis of facial expressions of mood bored

Table 8.7 presents the mean values and standard deviations of the expressions recog-
nized as bored. These results show that most of the images were identified as -P-A-D.
The former results constrain our analysis of the images, whose validity we could then assess
through the recognition hit rates for each image. Table 8.8 shows the hit rates of the images
that obtained higher scores and the number of subjects that evaluated each image. The


Image Pleasure Arousal Dominance


Mean SD Mean SD Mean SD
01-Bo-HHH -0.61 0.36 -0.77 0.39 -0.7 0.43
02-Bo-HHM -0.75 0.41 -0.92 0.20 -0.58 0.49
03-Bo-HHL -0.57 0.45 -0.50 0.50 -0.14 0.56
04-Bo-HMH -0.20 0.27 -0.10 0.22 -0.60 0.42
05-Bo-HMM -0.50 0.5 -0.57 0.45 -0.93 0.18
06-Bo-HML -0.61 0.33 -0.61 0.42 -0.50 0.43
07-Bo-HLH -0.75 0.35 -0.5 0.52 -0.75 0.26
08-Bo-HLM -0.57 0.35 -0.64 0.38 -0.50 0.29
09-Bo-HLL -0.58 0.38 -0.17 0.52 -0.08 0.20
10-Bo-MHH -0.83 0.25 -1.00 0.0 -0.75 0.27
11-Bo-MHM -0.65 0.41 -0.45 0.50 -0.55 0.64
12-Bo-MHL -0.75 0.27 -0.67 0.60 -0.58 0.58
13-Bo-MMH -1.00 0.0 -0.60 0.54 -0.70 0.44
14-Bo-MMM -0.66 0.35 -0.66 0.56 -0.77 0.26
15-Bo-MML -0.54 0.30 -0.57 0.38 -0.64 0.41
16-Bo-MLH -0.67 0.52 -0.33 0.61 -0.67 0.41
17-Bo-MLM -0.73 0.34 -0.45 0.61 -0.55 0.47
18-Bo-MLL -0.42 0.34 -0.35 0.43 -0.54 0.48
20-Bo-LHM 0.00 0.25 -0.40 0.55 -0.30 0.68
24-Bo-LML -0.07 0.45 -0.50 0.41 -0.50 0.71

Table 8.7: Mean values and Standard Deviations for expressions of Boredness

expressions discarded are those in which their P, A or D was wrongly identified by more
than half of the subjects.
From the previous results we observed that the images have these characteristics:
- Negative pleasure intensity is between high and medium
- Negative arousal and negative dominance intensity can have any degree (high, medium
or low)
It is worth noting that the images that were not recognized as bored present low negative
pleasure (almost neutral regarding pleasure), as can be seen in Figure 8.8. Also, the hit
rate analysis resulted in a reduced set of expressions compared to the ones obtained
through the mean values, mainly because the majority of subjects could not associate an
arousal or dominance value with the expression.
Using these results we can see which AUs were activated according to the intensities
of pleasure, arousal and dominance of the selected expressions, resulting in AU 1, AU 2
(activated only when p > 0.5), AU 4, AU 15 for pleasure and dominance and AU 43 for
arousal. These AUs correspond to the ones in the quadrant of bored. Figure 8.9 shows a
subset of the expressions that obtained highest recognition scores.


Image -P -A -D Number of Subjects


01-Bo-HHH 11 11 12 13
02-Bo-HHM 5 6 4 6
05-Bo-HMM 4 5 7 7
06-Bo-HML 8 7 6 9
07-Bo-HLH 9 7 10 10
08-Bo-HLM 6 6 6 7
10-Bo-MHH 6 6 6 6
11-Bo-MHM 8 7 8 10
12-Bo-MHL 6 5 5 6
13-Bo-MMH 5 3 4 5
14-Bo-MMM 8 7 9 9
15-Bo-MML 12 11 11 14
16-Bo-MLH 4 3 5 6
17-Bo-MLM 10 7 7 11
18-Bo-MLL 9 10 10 13

Table 8.8: Hit rates for expressions of bored mood

Figure 8.9: Facial expressions with their corresponding PAD intensities for bored


Mood: Dependent

This mood corresponds to the +P+A-D quadrant of the PAD model. Therefore, we
expect expressions corresponding to this mood to be recognized as with positive pleasure,
positive arousal and negative dominance. Figure 8.10 presents the distribution of the mean
recognition values of the 27 images for this mood.

Figure 8.10: Mean values of the analysis of facial expressions of mood dependent

Table 8.9 presents the mean and standard deviation values of the expressions that were
assessed as +P+A-D.

Image Pleasure Arousal Dominance


Mean SD Mean SD Mean SD
07-De-HLH 0.56 0.30 0.22 0.26 -0.06 0.39
10-De-MHH 0.50 0.38 0.25 0.71 0.00 0.53
22-De-LMH 0.33 0.25 0.22 0.36 0.00 0.35
23-De-LMM 0.50 0.00 0.25 0.29 0.00 0.00

Table 8.9: Mean values and Standard Deviations for expressions of Dependent

From the previous results, it can be seen that the value of dominance is very close to
zero, or zero. Regarding the remaining images, the dimensions pleasure and arousal were
correctly located in almost 90% of the cases. Nevertheless, dominance was assessed as
negative in only 14.8% of the cases.
These results show the difficulty of assessing dominance in the expressions of the
mood dependent. This issue is also seen when evaluating the recognition hit rates for each


image, where each one obtains significant values for pleasure and arousal, but values close
to zero for dominance, except for the expression 10-De-MHH. This expression is the only
one in which dominance was recognized by half of the subjects. Table 8.10 shows the hit
rates for the subset of the selected images.

Image +P +A -D Number of Subjects


07-De-HLH 8 4 3 9
10-De-MHH 6 5 4 8
22-De-LMH 6 5 2 9
23-De-LMM 4 2 0 4

Table 8.10: Hit rates for expressions of dependent mood

Figure 8.11 shows the expressions that could be assessed as being in +P+A-D. The
AUs that describe these expressions are AU 1, AU 2 and AU 12, which are activated only for
positive pleasure with medium to high intensity and theoretically for negative dominance
with high intensity; and AU 5, AU 25, AU 26 for positive arousal with any intensity.

Figure 8.11: Facial expressions with their corresponding PAD intensities for dependent

Mood: Disdainful

This mood corresponds to the -P-A+D quadrant of the PAD model. Therefore, we expect
expressions corresponding to this mood to be recognized as with negative pleasure, negative
arousal and positive dominance.
Figure 8.12 presents the distribution of the mean recognition values for expression
corresponding to this mood.


Figure 8.12: Mean values of the analysis of facial expressions of mood disdainful

Table 8.11 presents the mean and standard deviation values of the expressions recog-
nized as with -P-A+D.
Image Pleasure Arousal Dominance
Mean SD Mean SD Mean SD
15-Di-MML -0.56 0.32 -0.38 0.44 0.13 0.58
23-Di-LMM -0.17 0.25 -0.42 0.58 0.25 0.52

Table 8.11: Mean values and Standard Deviations for expressions of Disdainful

From the previous results it can be seen that two expressions were associated with the
dimensions of disdain. The standard deviation values show that there is great variability
among the subjects' ratings. Therefore, we constrain the analysis to assessing the hit rates for
these two images. Table 8.12 shows the hit rates for the subset of the selected images.

Image -P -A +D Number of Subjects


15-Di-MML 7 4 4 8
23-Di-LMM 2 4 3 6

Table 8.12: Hit rates for expressions of disdainful mood

The hit rate analysis demonstrates that, of the two expressions identified in the
disdain mood, the second one (23-Di-LMM) presents a low rating for pleasure. This is expected
given that the value for pleasure is low; therefore, this expression shares some characteristics
with the neutral expression, where the character has the lip corners slightly raised.


Thus the expressions recognized as disdainful shared the following characteristics:


- Negative pleasure between medium and low level
- Negative arousal in its medium level
- Positive dominance between medium and low level
These configurations correspond to the activated AU 4, AU 15 and AU 43, all of them
with medium to low intensities. Figure 8.13 shows the expressions that obtained highest
recognition scores.

Figure 8.13: Facial expressions with their corresponding PAD intensities for disdainful

Mood: Docile

This mood corresponds to the +P-A-D quadrant of the PAD model. Therefore, we
expect expressions corresponding to this mood to be recognized as with positive pleasure
and negative arousal and dominance.
Figure 8.14 presents the distribution of the mean recognition values of each of the 27
images corresponding to the mood docile.
Table 8.13 presents the numeric mean and standard deviation values of those expressions
that were indeed recognized as +P-A-D.
The former results constrain our analysis to these expressions, which were further studied
by obtaining the recognition hit rates for each dimension. An expression is considered
to correspond to the mood docile, and therefore to the +P-A-D quadrant, if more than half
of the subjects identified its three dimensions as such. Table 8.14 shows the hit rates for
the subset of the selected images.


Figure 8.14: Mean values of the analysis of facial expressions of mood docile

Image Pleasure Arousal Dominance


Mean SD Mean SD Mean SD
02-Do-HHM 0.75 0.26 -0.38 0.51 -0.19 0.65
03-Do-HHL 0.55 0.15 -0.25 0.48 -0.15 0.41
04-Do-HMH 0.69 0.37 -0.38 0.58 -0.63 0.44
11-Do-MHM 0.50 0.28 -0.50 0.57 -0.57 0.34
13-Do-MMH 0.58 0.19 -0.29 0.49 -0.29 0.39
14-Do-MMM 0.63 0.25 -0.25 0.5 -0.13 0.85
21-Do-LHL 0.33 0.24 -0.13 0.37 -0.17 0.61
22-Do-LMH 0.30 0.34 -0.35 0.41 -0.25 0.59

Table 8.13: Mean values and Standard Deviations for expressions of Docile

Image +P -A -D Number of Subjects


04-Do-HMH 7 4 6 8
11-Do-MHM 6 5 6 7
13-Do-MMH 12 7 7 12
22-Do-LMH 5 7 7 10

Table 8.14: Hit rates for expressions of docile mood

The hit rates analysis demonstrates that the pleasure dimension was recognized in
all the cases with a high percentage rate. Also, arousal and dominance were correctly
identified, although in some cases arousal obtained a lower success rate than dominance.
Regarding the expressions that were recognized as docile, they shared the following
characteristics:


- Positive pleasure with any intensity level


- Negative arousal between medium and high level
- Negative dominance between medium and high level

These configurations correspond to the activated AU 1, AU 2, AU 12 and AU 43. Fig-


ure 8.15 shows the expressions that obtained highest recognition scores.

Figure 8.15: Facial expressions with their corresponding PAD intensities for docile

Mood: Exuberant

This mood corresponds to the +P+A+D quadrant. Therefore, we expect expressions


corresponding to this mood to be recognized as with positive pleasure, arousal and domi-
nance.
Figure 8.16 presents the recognition values of each expression for this mood, while
Table 8.15 contains the mean and standard deviation values of those expressions
that were identified with positive pleasure, positive arousal and positive dominance.


Figure 8.16: Mean values of the analysis of facial expressions of mood exuberant

Image Pleasure Arousal Dominance


Mean SD Mean SD Mean SD
01-Ex-HHH 0.64 0.37 0.79 0.26 0.64 0.37
02-Ex-HHM 0.79 0.25 0.71 0.25 0.75 0.39
03-Ex-HHL 0.38 0.75 0.50 0.40 0.88 0.25
04-Ex-HMH 0.61 0.33 0.50 0.5 0.44 0.30
05-Ex-HMM 0.44 0.32 0.31 0.37 0.06 0.62
06-Ex-HML 0.65 0.24 0.55 0.36 0.45 0.36
07-Ex-HLH 0.44 0.49 0.44 0.32 0.25 0.46
08-Ex-HLM 0.69 0.37 0.44 0.17 0.44 0.17
09-Ex-HLL 0.38 0.23 0.13 0.44 0.38 0.44
10-Ex-MHH 0.33 0.40 0.42 0.20 0.33 0.25
11-Ex-MHM 0.50 0.35 0.56 0.39 0.56 0.39
12-Ex-MHL 0.63 0.47 0.63 0.25 0.50 0.40
13-Ex-MMH 0.58 0.20 0.67 0.25 0.58 0.37
14-Ex-MMM 0.50 0.35 0.50 0.61 0.60 0.22
15-Ex-MML 0.50 0.26 0.38 0.35 0.50 0.46
18-Ex-MLL 0.69 0.25 0.50 0.26 0.38 0.35
22-Ex-LMH 0.06 0.39 0.11 0.22 0.22 0.26
23-Ex-LMM 0.17 0.25 0.42 0.49 0.33 0.60
24-Ex-LML 0.25 0.37 0.19 0.37 0.25 0.46
26-Ex-LLM 0.15 0.31 0.04 0.43 0.08 0.53

Table 8.15: Mean values and Standard Deviations for expressions of Exuberant

The former results constrain our analysis to these expressions, which were further stud-
ied by obtaining the recognition hit rates for each dimension. Table 8.16 shows the hit
rates for the subset of the selected images.
The hit-rate analysis shows that expressions with low pleasure were discarded, leaving
those expressions with medium to high pleasure and any level of arousal and dominance.
From these results we concluded that the activated AUs are AU 6 and AU 12 for medium to
high positive pleasure values, and AU 5, AU 25 and AU 26 for arousal with any intensity level.


Image +P +A +D Number of Subjects


01-Ex-HHH 6 7 6 7
02-Ex-HHM 12 12 10 12
04-Ex-HMH 8 7 7 9
06-Ex-HML 10 8 7 10
07-Ex-HLH 6 6 4 8
08-Ex-HLM 7 7 7 8
09-Ex-HLL 6 4 6 8
10-Ex-MHH 3 5 4 6
11-Ex-MHM 7 7 7 9
12-Ex-MHL 3 4 3 4
13-Ex-MMH 6 6 5 6
14-Ex-MMM 4 4 5 5
15-Ex-MML 7 5 5 8
18-Ex-MLL 8 7 5 8

Table 8.16: Hit rates for expressions of exuberant mood

It is also interesting to note that expressions combining low +P with high +A and high,
medium or low +D were identified as the hostile mood. A possible explanation for these
results is that in the area close to low pleasure (positive and negative), AU 12, responsible
for pulling the lip corners, is not active. In combination with open eyes (a feature of
arousal), this produces an effect different from exuberance. Figure 8.17 shows a subset of the
expressions corresponding to these values.

Figure 8.17: Facial expressions with their corresponding PAD intensities for exuberant


Mood: Hostile

This mood corresponds to the -P+A+D octant of the PAD model. Figure 8.18 presents
the distribution of the mean recognition values of each of the 27 images corresponding to
the mood hostile.

Figure 8.18: Mean values of the analysis of facial expressions of mood hostile

Table 8.17 presents the mean and standard deviation values of the expressions whose
dimensions were identified as -P+A+D.

Image Pleasure Arousal Dominance


Mean SD Mean SD Mean SD
01-Ho-HHH -0.25 0.95 1.00 0 0.50 1
02-Ho-HHM -0.75 0.46 0.81 0.25 0.88 0.23
04-Ho-HMH -0.75 0.46 0.88 0.35 0.25 0.84
05-Ho-HMM -0.64 0.32 0.14 0.50 0.14 0.55
07-Ho-HLH -0.93 0.18 0.57 0.34 0.14 0.62
10-Ho-MHH -0.56 0.46 0.61 0.65 0.33 0.75
11-Ho-MHM -0.50 0.54 0.58 0.58 0.42 0.58
12-Ho-MHL -0.55 0.41 0.55 0.47 0.41 0.70
13-Ho-MMH -0.61 0.48 0.89 0.22 0.33 0.70
14-Ho-MMM -0.50 0.40 0.07 0.45 0.29 0.48
16-Ho-MLH -0.86 0.37 0.86 0.24 0.29 0.90
17-Ho-MLM -0.50 0.35 0.39 0.41 0.33 0.56
19-Ho-LHH -0.88 0.23 1.00 0 0.44 0.67
20-Ho-LHM -0.10 0.41 0.10 0.54 0.30 0.57
22-Ho-LMH -0.50 0.43 0.64 0.41 0.61 0.56
25-Ho-LLH -0.44 0.39 0.44 0.30 0.28 0.50

Table 8.17: Mean values and Standard Deviations for expressions of Hostile


The previous results allowed us to constrain our analysis to those expressions in which
subjects correctly recognized the -P+A+D dimensions. This left us with 59.2% of the
expressions, for which the recognition hit rates were obtained. Table 8.18 shows the hit
rates for the subset of the selected images.

Image -P +A +D Number of Subjects


01-Ho-HHH 2 4 3 4
02-Ho-HHM 6 8 8 8
04-Ho-HMH 6 7 5 8
07-Ho-HLH 7 6 3 6
10-Ho-MHH 6 8 5 10
11-Ho-MHM 3 5 4 6
12-Ho-MHL 8 9 7 11
13-Ho-MMH 6 9 6 9
16-Ho-MLH 6 7 5 7
17-Ho-MLM 7 7 6 9
19-Ho-LHH 8 8 6 8
22-Ho-LMH 9 11 12 14
25-Ho-LLH 6 7 6 9

Table 8.18: Hit rates for expressions of hostile mood

The previous analysis shows that the main issue with these expressions was
the Dominance dimension, which presented very high standard deviations. Nevertheless, a
considerable number of expressions were assessed as -P+A+D, from which we extracted the
following characteristics:
- Negative pleasure can take any intensity level
- Positive arousal can also take any intensity level
- Positive dominance is mainly assessed with intensity levels between medium and high

These results allow us to identify the AUs that are activated and describe the expressions
that can be associated with the mood hostile. The AUs are AU 4, AU 10 and AU 15 for
medium to high positive dominance values, and AU 5, AU 25 and AU 26 for arousal with any
intensity level.

Figure 8.19 shows a subset of the expressions corresponding to -P+A+D.


Figure 8.19: Facial expressions with their corresponding PAD intensities for hostile

Mood: Relaxed

This mood corresponds to the +P-A+D octant of the PAD model. Therefore, we
expect expressions corresponding to this mood to be recognized as having positive pleasure
and dominance, and negative arousal.
Figure 8.20 presents the distribution of the mean recognition values of each of the 27
images corresponding to the mood relaxed.
Table 8.19 presents the mean and standard deviation values of the expressions identified
as +P-A+D.
Image Pleasure Arousal Dominance
Mean SD Mean SD Mean SD
04-Re-HMH 0.71 0.26 -0.07 0.67 0.14 0.55
05-Re-HMM 0.64 0.24 -0.43 0.34 0.21 0.48
06-Re-HML 0.43 0.45 -0.50 0.40 0.29 0.48
10-Re-MHH 0.64 0.23 -0.18 0.51 0.05 0.47
25-Re-LLH 0.25 0.26 -0.19 0.25 0.13 0.35
26-Re-LLM 0.25 0.27 -0.17 0.25 0.08 0.49
27-Re-LLL 0.13 0.22 -0.04 0.25 0.04 0.39

Table 8.19: Mean values and Standard Deviations for expressions of Relaxed

Figure 8.20: Mean values of the analysis of facial expressions of mood relaxed

The previous results allowed us to constrain our analysis to those expressions in which
subjects correctly recognized the +P-A+D dimensions. Therefore, we assessed the hit
rates for these images. Table 8.20 shows the hit rates for the subset of the selected images.
Image +P -A +D Number of Subjects
04-Re-HMH 7 3 3 7
05-Re-HMM 7 5 3 7
06-Re-HML 6 5 4 7
10-Re-MHH 11 4 3 11

Table 8.20: Hit rates for expressions of relaxed mood

From the hit-rate analysis we observed that a reduced number of expressions were
indeed assessed as +P-A+D. The main issue was evaluating positive dominance,
which in all 27 expressions obtained mean values close to zero, meaning that to the
question "How dominant is Alfred?" the majority of the subjects answered "I don't know".
An example of this situation is the expression tagged 10-Re-MHH, whose pleasure
dimension obtained a 100% recognition rate, while arousal and dominance obtained less than 36%.
The expressions that in the end were associated with the relaxed mood shared the following
characteristics:
- Positive pleasure with high intensity values
- Negative arousal with medium intensity values
- Positive dominance independent of the other values
These results allow us to identify the AUs that are activated and describe the expressions that can be associated with the mood relaxed. The AUs are AU 6 and AU 12 for high positive
pleasure values, and AU 43 for arousal with medium intensity.
Figure 8.21 presents some of the expressions associated with the mood relaxed.

Figure 8.21: Facial expressions with their corresponding PAD intensities for mood relaxed

8.3.5 Discussion

The experiment presented here addressed the hypothesis formulated at the beginning
of this section, which stated that all moods have associated expressions
described by the AUs activated in their corresponding pleasure, arousal and
dominance dimensions.
After the analysis of the results, we found that indeed the expressions associated with
a mood are described by the AUs in the octant of that mood. Given that this was an
exploratory experiment to see if the functions to locate AUs in the PAD space were valid,
we did not establish any percentage of error or success. Therefore, we used mean values and
standard deviations to compute the average perception of each expression. From this, we
obtained subsets of expressions which should be further evaluated to exhaustively validate
them.
Regarding the ease of recognition of each dimension, the results showed that:
- Pleasure was correctly identified in the vast majority of cases, except for those in
which its values were close to zero, and it is seen as the dimension that gives meaning to
the expression.
- Arousal was also correctly identified in most cases, except in a few cases where arousal
was negative and most of the subjects did not know how to assess it.
- Dominance presented most of the perception problems. Subjects were not sure how to
measure it or what to take into consideration to do so.


These results are somewhat expected, given that pleasure and arousal are dimensions that
can be represented in the expression, while dominance is a dimension that is manifested
during interaction. Therefore, it was difficult to assess from a static image.
From the comments given by the majority of the subjects, the main issue with Dominance
is that it is difficult to assess in a static image because, as said before, it is
part of people's interactions. Nevertheless, the use of SAM icons helped considerably to
successfully complete the questionnaire, because subjects understood what they were being asked.
Regarding the final set of AUs that describe the expressions for the moods, Table 8.21
presents the AUs for each mood.

Mood (AUs)
Exuberant AU 6, AU 5, AU 12, AU 25, AU 26
Bored AU 1, AU 2, AU 4, AU 15, AU 43
Docile AU 1, AU 2, AU 12, AU 43
Hostile AU 4, AU 10, AU 5, AU 15, AU 25, AU 26
Anxious AU 1, AU 2, AU 4, AU 5, AU 15, AU 25, AU 26
Relaxed AU 6, AU 12, AU 43
Dependent AU 1, AU 2, AU 5, AU 12, AU 25, AU 26
Disdainful AU 4, AU 15, AU 43

Table 8.21: AUs describing expressions of mood
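As an illustration of how Table 8.21 can be used downstream (for instance, to decide which Action Units to activate when a mood is set), the following is a minimal Python sketch that simply transcribes the table; the function name is ours and illustrative only.

    # Transcription of Table 8.21: AUs describing the expression of each PAD mood.
    MOOD_AUS = {
        'exuberant':  [6, 5, 12, 25, 26],
        'bored':      [1, 2, 4, 15, 43],
        'docile':     [1, 2, 12, 43],
        'hostile':    [4, 10, 5, 15, 25, 26],
        'anxious':    [1, 2, 4, 5, 15, 25, 26],
        'relaxed':    [6, 12, 43],
        'dependent':  [1, 2, 5, 12, 25, 26],
        'disdainful': [4, 15, 43],
    }

    def aus_for_mood(mood):
        """Return the FACS Action Units for a mood, e.g. aus_for_mood('relaxed') -> [6, 12, 43]."""
        return MOOD_AUS[mood.lower()]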


8.4 Experiment: Context Representation

One of the objectives of our work was the representation of context in order to generate
believable emotions in the character. Moreover, from previous experiments we observed
that a character can have the same facial expression for different emotions, and the inter-
pretation of these expressions may fail if we do not know enough about that character, or
the context in which that expression is occurring. Therefore, it is important not only to
focus on creating believable expressions but also to associate some beliefs (given by the
context) to those expressions.

8.4.1 Hypothesis

We expect the following outcomes:

- Given an emotion E produced in the character due to a context C, a subject will
associate a facial expression of E with the context C.

- Given that there exists a real facial expression for C (e.g., of an actor or an ordinary
person), the facial expression of the character is similar to the real one under C.

8.4.2 Methodology

To validate the former hypotheses we used the following methodology to perform the experiment:

1. Selection and definition of scenes of a movie in the Context Representation Module.
The module gives as output a set of emotions, from which the one with the greatest value
is chosen. If two emotions are elicited with the same intensity, we mix them using
the algorithm to generate intermediate expressions (see the sketch after this list).

2. Execution of an online survey to measure the perception of facial expressions inside
a context.

3. Evaluation of the results.
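The selection step of item 1 can be sketched as follows (Python, illustrative; expression_of and blend_expressions stand in for the expression-generation and intermediate-expression algorithms of Chapter 7 and are not the actual function names).

    def select_expression(emotions, expression_of, blend_expressions):
        # emotions: dict mapping an emotion name to the intensity output by the
        # Context Representation Module, e.g. {'fear': 0.4, 'sadness': 0.4}.
        top = max(emotions.values())
        winners = [name for name, value in emotions.items() if value == top]
        if len(winners) == 1:
            return expression_of(winners[0], top)
        # Two emotions tie at the maximum: mix them into an intermediate expression.
        return blend_expressions(expression_of(winners[0], top),
                                 expression_of(winners[1], top))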


8.4.3 Experimental Study

For this experiment we chose two movies: Leon (1994, directed by Luc Besson) and Downfall (Der Untergang) (2004, directed by Oliver Hirschbiegel). From each movie we selected
three scenes where the facial expressions of the real actors could be clearly seen.
In Movie 1- Leon we selected the part where the main character Mathilda, a 12-year-old girl, enters her building and finds out that her family has been brutally murdered. In
Movie 2- Downfall we selected the part where Hitler, already tired and defeated, wants
to counterattack and his soldiers try to make him change his mind. In these scenes, we
focused on one of the soldiers and his emotional reactions.
From the scenes of Movie 1 and Movie 2, we set up the events, the characters' preferences,
descriptions of the locations, and admiration among characters, to be introduced in
the Context Representation module and elicit the corresponding emotions. Tables 8.22
and 8.23 contain the elements used for the context representation.

Character: Mathilda
Goals: Stay alive
Preferences: Gangsters (STRONGLY BAD = 1.0); Family's apartment (INDIFFERENT = 0.0)
Agent Admiration: Little brother (POSITIVE = 1.0)

Event 1: Mathilda comes home and notices a gangster in front of the family's apartment.
She acts as if nothing happens (NOT SATISFACTORY = 0.7).
  Role of character: Mathilda executes the event (ByMe). Emotion elicited: fear = 0.4
Event 2: Mathilda hears that her little brother has been killed (NOT SATISFACTORY = 1.0).
  Role of character: The brother receives the event (OnOther). Emotion elicited: pity = 1.0
Event 3: Mathilda rings the doorbell of Leon and waits while the gangster is outside (NOT SATISFACTORY = 1.0).
  Role of character: Mathilda receives the event (OnMe). Emotion elicited: sadness = 1.0

Table 8.22: Events and context details for Movie 1- Leon

The emotions presented in these tables, generated by each event, are the ones
that correspond to the script of the movie annotated in a previous work that used the
same set of scenarios [138]. This correspondence between the elicited emotion and the
emotion in the script is a hint that the rules of the Semantic Model produce the right
emotions for the events.
Using the emotions elicited, we generated the facial expressions of the characters.

Character: The Officer
Goals: Stop the war; Convince Hitler to change his mind
Preferences: War (STRONGLY BAD = 1.0); Counterattack consequences (STRONGLY BAD = 0.74)

Event 1: The Officer talks to Hitler about the negative consequences of a counterattack (NOT SATISFACTORY = 0.5).
  Role of character: The Officer executes the event (ByMe). Emotion elicited: fear = 0.5
Event 2: The Officer convinces Hitler about the negative consequences (NOT SATISFACTORY = 0.7).
  Role of character: The Officer executes the event (ByMe). Emotion elicited: discourage = 0.7 (fear + shame)
Event 3: The Officer cannot convince Hitler to change his mind about a counterattack (NOT SATISFACTORY = 0.9).
  Role of character: The Officer executes the event (ByMe). Emotion elicited: disappointment = 0.9 (disgust + sadness)

Table 8.23: Events and context details for Movie 2- Downfall
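The entries of Tables 8.22 and 8.23 can be thought of as simple records fed to the Context Representation module. Below is a minimal sketch in Python; the field names are illustrative and do not correspond to the module's actual ontology schema.

    from dataclasses import dataclass, field

    @dataclass
    class Event:
        description: str
        role: str                  # 'ByMe', 'OnMe' or 'OnOther'
        not_satisfactory: float    # degree in [0, 1], as in Tables 8.22 and 8.23

    @dataclass
    class CharacterContext:
        name: str
        goals: list
        preferences: dict          # object -> (appraisal label, intensity)
        admiration: dict = field(default_factory=dict)
        events: list = field(default_factory=list)

    mathilda = CharacterContext(
        name='Mathilda',
        goals=['Stay alive'],
        preferences={'Gangsters': ('STRONGLY BAD', 1.0),
                     "Family's apartment": ('INDIFFERENT', 0.0)},
        admiration={'Little brother': ('POSITIVE', 1.0)},
    )
    mathilda.events.append(Event('Mathilda hears that her little brother has been killed',
                                 role='OnOther', not_satisfactory=1.0))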

For the movie Leon we used the character Alice, who resembled Mathilda. For the movie Downfall
we used Alfred, who closely resembled The Officer. Table 8.24 shows the virtual characters
and the actors in the events described in Tables 8.22 and 8.23.
These images were evaluated through an online survey with the intention of finding out
(1) whether the facial expression of the virtual actor is believable in correspondence with the
event/scene, and (2) whether the expressions of the real actor and the virtual character were
perceived as similar by the subjects.

(a) Leon - Event (1): fear  (b) Leon - Event (2): pity  (c) Leon - Event (3): sadness
(d) Downfall - Event (1): fear  (e) Downfall - Event (2): discourage  (f) Downfall - Event (3): disappointment

Table 8.24: Virtual and real actors from the movies Leon and Downfall


Questionnaire

61 subjects (30 male and 31 female) between 18 and 55 years old (MEAN = 28, SD =
5.45) participated in the experiment through an online questionnaire.

The experimental stimuli consisted of 12 static images:


- 3 images of Alice showing fear, pity and sadness, respectively;
- 3 images of an actress showing fear, pity and sadness, respectively;
- 3 images of Alfred showing fear, discourage and disappointment, respectively;
- 3 images of an actor showing fear, discourage and disappointment, respectively.

Before evaluating the scenes of each movie, we gave a synopsis of what was happening to
situate the subject in the context. For Movie 1- Leon the description was: "Mathilda,
a 12-year-old girl, comes home from the grocery. When she finds that her family has been
brutally murdered she seeks help from Leon, whose apartment is down the hall."

For Movie 2- Downfall the description was: "During the last days of World War II,
Hitler is sick and exhausted; the defeat of the Third Reich is imminent. Hitler and his
officers are having a meeting in which he is planning impossible counterattacks and giving
orders to men who are already dead. His officers have serious doubts about their
leader but do not dare to speak up because of the reprimands."

Then, for each event shown in Tables 8.22 and 8.23, we asked the following questions:

- (Question 1) Facial expression elicited by context: "The following image is the facial
expression of <character name> after the event described above. Do you agree this
would be the facial expression?"
- (Question 2) Similarity between actors: "Are the avatar's expression (left) and the actor's
expression (right) similar to each other?"

For (Question 1) we used the images of Alice and Alfred shown in Table 8.24. For
(Question 2) we used the image pairs Alice-actress and Alfred-actor shown in the same
table.

Each question was rated using a 5-point Likert scale, where 1 corresponded to the
minimum degree (totally disagree or very different) and 5 to the maximum (totally agree or
very similar). Figure 8.22 shows a page of the online questionnaire. The full questionnaire
can be found in the following URL: http://dmi.uib.es/ ugiv/diana/contextmodel/.


Figure 8.22: Questionnaire for evaluation of the context representation

8.4.4 Results

The results obtained allow us to validate the proposed Context Representation Module, as
well as the generation of facial expressions in two avatars, one animated using MPEG-4
(Alice - Leon) and the other using FACS (Alfred - Downfall).
Table 8.25 presents the results for each question of each of the events taken from
Tables 8.22 and 8.23.


Movie        Event  Question                                    MEAN  SD    Hits per Likert item (1 2 3 4 5)
1- Leon      (1)    (Question 1) Facial expression in context   3.08  1.00  3 19 9 30 0
             (1)    (Question 2) Similarity between actors      2.65  1.12  8 28 2 23 0
             (2)    (Question 1)                                4.01  0.82  0 5 5 35 16
             (2)    (Question 2)                                3.93  1.09  2 8 2 29 20
             (3)    (Question 1)                                3.32  0.99  3 11 13 31 3
             (3)    (Question 2)                                3.11  1.13  4 20 5 29 3
2- Downfall  (1)    (Question 1)                                2.75  1.06  7 20 17 15 2
             (1)    (Question 2)                                2.33  1.19  18 22 5 15 1
             (2)    (Question 1)                                3.05  1.08  6 14 14 25 2
             (2)    (Question 2)                                2.68  1.24  12 20 7 19 3
             (3)    (Question 1)                                3.72  0.85  2 5 6 43 5
             (3)    (Question 2)                                3.65  1.10  4 9 0 39 9

Table 8.25: Results of the online questionnaire for each event of the movies Leon and Downfall

To analyze Question (1), we used the number of hits per item of the Likert scale. We
observe that over 50% of the subjects agreed that the three expressions of Alice in Leon
corresponded to the events that generated them.
As for Alfred in Downfall, the only expression for which 70.5% of the subjects
agreed that it matched the scene was the one in Event (3) (Table 8.24), which
was the one with the greatest emotional load (the emotion disappointment had an intensity of
0.9). In Event (1) subjects disagreed with the facial expression of the avatar, which
indicates that the expression did not transmit the emotion of the scene. In Event (2)
40.9% of the subjects agreed that the expression matched the situation, but 23% of the
subjects could not decide.
In the analysis of Question (2), subjects considered the avatars similar or
very similar to the actors in Event (2) of the movie Leon and in Event (3)
of the movie Downfall. In the remaining events, the opinions of the participants were divided,
with almost equal results (less than 50%) for agreement and disagreement between
real and virtual expressions.
The reason why we did not use the mean values for the analysis is that they were
around 3.0 with a standard deviation of approximately 1.0. Therefore, we could not draw any
conclusion based only on these values.
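The per-item percentages used in this analysis can be recovered directly from the hit counts in Table 8.25. A minimal Python sketch (illustrative); the example reproduces the 70.5% agreement quoted above for Event (3) of Downfall.

    def item_percentages(hits):
        # hits: number of answers for Likert items 1..5 of one question.
        total = sum(hits)
        return [round(100.0 * h / total, 1) for h in hits]

    # Downfall, Event (3), Question 1:
    # item_percentages([2, 5, 6, 43, 5]) -> [3.3, 8.2, 9.8, 70.5, 8.2]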


8.4.5 Discussion
Two hypotheses were formulated at the beginning of this section: (1) the generated facial
expressions of the avatar are in correspondence with the context that is represented; (2) there
is a similarity between the facial expressions generated in the avatar and those of the actors
when simulating the same context.
As for the first hypothesis we conclude that the facial expressions transmitted the
emotional content of the context, validating the output of our Context Representation
module. The cases where it failed corresponded to the expressions of Alfred, which were
not expressive enough due to the low intensity of the emotions in context.
As for the second hypothesis, although there was agreement on similarity, subjects
also focused on features such as sweating (in the movie Downfall the actor is sweating), the shape of
the mouth, and tears (in the case of Leon). Nevertheless, these observations gave us even
more hints about what participants take into consideration when perceiving an emotion in
the character, even when context is given.

Chapter 9

Conclusions

I am not young enough to know everything


Oscar Wilde

The work that has been presented is the answer to those questions formulated at the
beginning of this thesis with the aim of developing a contextual and affective framework
that allows the creation of virtual characters capable of expressing emotions, mood and
personality through facial expressions and visual cues.
Therefore, this thesis is the result of exhaustive research on different topics that
contribute to the representation and generation of context, as well as to the simulation
and visualization of affect. Among these topics are psychological theories of emotions,
mood and personality; computational models for the creation of interactive virtual charac-
ters; semantic web and ontologies; and visualization techniques for the generation of facial
expressions.
We begin this chapter by briefly reviewing each of the previous chapters, in order to
depict a general picture of the contributions and novelties presented in this thesis. Then,
a discussion section will present a more detailed view of the aspects that were considered
during this research, the results of the multiple experiments and evaluations which helped
to prove the stated hypotheses and shed light on new theories, and finally the tasks to perform
as future work.


9.1 Summary

This thesis started as a quest for the elements that are necessary to create believable
characters. Nevertheless, a large number of previous works have pursued this quest as
well, obtaining as a result computational models based on psychological theories which can,
up to a point, simulate believable characters.
But since human beings are complex, there are several other elements which also have
to be considered to enrich the believability of these characters. One of these elements is the
context that surrounds the character, not only from a physical point of view, but also from
an affective point of view. One needs to think of the different things that could happen to
the characters, of all the things that they could do, what they like or dislike, what they
aim for, and so on. In other words, what the world makes the character feel.
Another element that has not been so extensively researched is the expression of moods
and personality. Thus we aimed to find a way to manifest these two affective elements in
the face of the character, and to use them to enhance the expressions given by emotions,
making the character more believable.
Therefore, knowing the research objectives and the results we wanted to
obtain, we started a deep investigation of psychological theories that define emotions,
moods and personality (Chapter 2). We also researched the different computational
models that have been developed to simulate virtual characters (Chapter 3). From this state
of the art, we found that the majority of models share the same psychological background.
That is, they use the OCC model to represent emotions, the Five-Factor Model to represent
personality, and only recently have a few works used the PAD Space to represent
moods. On the other hand, the techniques implemented to simulate the behaviors of the
characters range from neural networks to mathematical-logical functions or scripting.
However, these questions remained unanswered: how to represent the context
of the character from a physical and affective point of view, and how to visualize mood
and personality. That is why we proposed a novel framework that comprises context
representation, affect processing and visualization of these affective traits (Chapter 4).
The context representation was achieved through the design and implementation of a
Semantic Model that, making use of ontologies, allowed us to define all those concepts that
are important to model the internal and external world of the characters (Chapter 5). The
novelty of our ontologies is that they do not just represent the concepts of the physical world
and the affective traits of the character, but also allow inference on them, obtaining emotional
responses to events individually appraised by each character of that world.


The affect processing, necessary to obtain the emotion, mood and personality parameters
to be visualized, was performed by an Affective Model capable of establishing the relations
between emotions, mood and personality. Hence, the characters not only feel emotions,
but also get into a certain mood according to these emotions and their personality traits
(Chapter 6).
Finally, to visualize these affective traits we decided to use facial expressions. The main
reason is that they are one of the richest and most accurate ways of expressing
affect [46]. Moreover, one of the main objectives of this research was to provide moods
with facial expressions.
Regarding emotions, we implemented an algorithm that combines expressions of universal
emotions to generate expressions of intermediate emotions, and that can be extended
to generate other expressions by combining intermediate emotions. Regarding mood, we
proposed a set of functions to move facial features (in this case, FACS Action Units [48])
so we could achieve facial expressions that are associated with all the moods defined in the PAD
model. Regarding personality, we performed subjective evaluations to analyze which visual
cues on a virtual character could be associated with personality traits. The visual cues
taken into account were head position and eye gaze. The visualization methods and their
implementations are presented in Chapter 7.
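As a schematic illustration of the kind of parameter-level combination described above (a sketch of the general idea, not the exact algorithm of Chapter 7), two expressions encoded as vectors of animation parameters, whether MPEG-4 FAPs or FACS AU intensities, can be mixed with a weighted blend:

    def blend_expressions(expr_a, expr_b, weight=0.5):
        # expr_a, expr_b: dicts mapping a parameter id (an AU or a FAP) to its intensity.
        # Parameters missing from one expression are treated as zero.
        keys = set(expr_a) | set(expr_b)
        return {k: weight * expr_a.get(k, 0.0) + (1.0 - weight) * expr_b.get(k, 0.0)
                for k in keys}

    # e.g. an expression halfway between fear and sadness:
    # blend_expressions(fear_parameters, sadness_parameters, weight=0.5)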

9.2 Discussion
The contextual and affective framework proposed in this thesis required an exhaustive
analysis and study of different fields that have been combined in order to create a complete
process of character generation. That is why through the different chapters psychology,
ontologies, affective computing, computer graphics and facial annotation standards are
interlaced and treated to a depth that is sufficient to achieve the objectives of this thesis.
In the case of ontologies, we designed and implemented two ontologies that took into
account external and internal aspects of the character and its virtual world. Our intention
was not to provide new semantics and formal validation for them. Our real aim was to
provide a tool that allows defining meta-concepts related to the character and its world,
making inferences about them, and therefore easily generating events that occur in that
world while automatically obtaining their influence on the character's emotional state. In this
way, a storyteller could just define the events of the world and the psychological profile of
the character, and stories would then be simulated by only re-using and combining this
information.
Regarding affective computing and psychology, our intention was not to create a new
paradigm or a new computational model for the creation of virtual characters. Our true
objective was to use previous works and theories and combine them in a new way that
permits building a bridge between the contextual representation and the visual representation
of affect. That is why we made use of the well-known OCC model, Five-Factor Model
and PAD Space for the generation of emotions, personality and mood, respectively, as well as
of the ALMA model as the basis of a system that simulates the interaction of these
affective traits.
Finally, this thesis did not deal with new computer graphics techniques; instead, we
used low-cost and efficient facial animation and facial action description standards such as
MPEG-4 and FACS to implement algorithms for the generation of novel and original
parameterizable facial expressions, like the expressions of all the intermediate emotions
of the OCC. Moreover, due to the advantages of parameterization, these algorithms are
independent of animation procedures, being extendible to other standards and visualization
techniques. Another novelty of this work is the formulation and implementation of
equations that allowed us to create the expressions for a very important affective trait,
namely mood.
This thesis is also strongly grounded in experimentation and evaluation. The several
hypotheses stated for the visualization of affect and the generation of context led to experiments
that not only validated our results, but also assessed the perception on the subjects' side,
giving hints and opening new ways to continue researching topics that could enormously
enhance the believability of characters.
In the case of the evaluation of expressions of emotions, the results demonstrated that
subjects could recognize the expressions used for the generation of the expressions of
intermediate emotions. Moreover, in some cases subjects rated the obtained intermediate
expression with the proper emotional term. Besides suggestions related to the morphology
of the character (e.g. hair, wrinkles), the numbers showed that expressions in general
obtained high recognition scores.
As for the moods, we performed a questionnaire where subjects could assess expressions
of an avatar according to its pleasure (P), arousal (A) and dominance (D) intensity values.
The results of this test conveyed two important conclusions. The first one is that all
the 8 moods defined by the PAD model have their corresponding facial expressions. Some


moods like exuberant, bored or hostile presented a large number of expressions, while other
moods like docile or dependent presented one or two. This difference was mainly due to the
difficulty of assessing Dominance in static images. The second important conclusion
is that the functions that were formulated to map AUs into the PAD Space were valid.
These results are of great significance because, knowing the AUs that describe the
expressions of a certain mood, other emotional facial expressions can be enhanced to truly
reflect the affective state of the character.
Another aspect that was studied from a purely experimental approach was the perception
of personality traits. In this thesis we presented the initial steps of a line of research that
can easily branch out, because personality is a component that is always expressed
but difficult to simulate in a procedural way.
To evaluate the perception of personality we produced a set of configurations for head
position and eye gaze, in such a way that they do not affect the facial features but somehow
enhance the emotional expression. The results were very encouraging because subjects
indeed associated extraversion, agreeableness and neuroticism with some configurations. For
the Alfred character, the "upwards-sideways" head orientation was related to extraversion,
the "downwards-center" head orientation to agreeableness, and the "center-sideways" head
orientation to emotional stability. However, eye gaze did not influence the perception of
personality traits.
All these previous findings represent a great advance in the area of visualization of
affective traits, because we have more information about all the elements that should be
considered when making a character look more believable from an affective point of view.
Finally, a very original experiment was the one to validate the Context Representation
module. The idea was to see whether subjects could better assess a facial expression if the
context of that expression was given.
The ground-truth data we used to validate the module were the scenes of two movies, so we
would have real facial expressions of real actors to compare with when the same context
is elicited. The movies were Leon and Downfall, first because they were scripted in
a way we could easily use in our module, and second because the characters Mathilda in
Leon and The Officer in Downfall resembled our characters Alice and Alfred,
respectively.
Once again, the results proved that the ontologies we defined and the contextual module
we implemented have the potential to feasibly reproduce a certain context and elicit suitable
emotions. For the movie Leon, the participants could recognize in the expressions of Alice
the emotions triggered by the events. For the movie Downfall, the results were not as
satisfactory, mainly because of the repressed and passive emotional content of the chosen
scenes and the lack of noticeable expressions in the actor and in the character.
In summary, the framework we propose and the experimental results we obtained can
be applied in different applications where a story, or the interaction between the character
and its environment, is required. For instance, educational applications, where the
student needs to learn about certain topics with the help of a virtual tutor or simulations
in a virtual world; story simulation (a kind of storytelling) for representing movies; or theme-oriented
situations where the story/script is basically generated by the interaction of the
character and its world.

9.3 Future Work


This thesis opened several research lines in different fields such as semantic web/ontologies, affect
visualization, or animation of facial expressions, to be used in different scenarios like affective
HCI applications, storytelling, or virtual worlds that could be used for entertainment
(e.g. videogames) or for therapy (e.g. therapies with autistic children).
Although the previous research showed us that we are on the right track and should
move forward in this direction, there is still work to be done. The following are some of
the tasks we expect to accomplish in the near future:

Study of the perception of other visual cues regarding personality and mood.
In this research we explored how two visual cues, head movement and eye gaze, were
associated with extraversion, agreeableness and emotional stability, as well as how
different FACS configurations described facial expressions for moods. In the future,
the idea would be to perform an in-depth study of how these visual cues are associated
with openness and conscientiousness, so that we have a theory of expressions of personality
for the five personality traits. Other factors, such as gender, age and nationality, could also
be taken into account when perceiving personality traits in visual cues, to observe whether
perception is the same in different groups of subjects.
Regarding mood, it is a broad topic in which active research is being performed.
Through a more elaborate study to evaluate the already obtained expressions of
mood, we could observe whether there are other sets of AUs which would also enhance the
emotional response of the character.


Generate expressions of emotions enhanced by the features of mood and personality.
The main objective would be to dynamically generate facial expressions of emotions
with characteristics of mood and personality. To achieve this, it is necessary to study
exactly how mood and personality affect emotion features from a physical point of view.
For instance, how an anxious mood with a given AU configuration would affect the AUs,
or other facial features, of an expression of happiness. Therefore, several tests should be
performed to evaluate the results of the different combinations.

Implementation of the current results in HCI applications.
One of the main goals is to provide a tool that complements therapies to improve
communication skills in autistic children and people with Asperger Syndrome. This is
motivated by the work we carried out with the Spanish organization Autismo Burgos
(http://www.autismoburgos.org/), who performed our paper questionnaire for the perception
of emotions in facial expressions with their autistic children and young adults.
Although the number of tests was limited due to the number of subjects they had at
the time of evaluation, it gave us hints about what is needed in a software application
to engage them (e.g. animations instead of static images; one single character to perform
the experiment; the Alice character was found engaging although some facial features,
like her hair, were not well accepted).
Therefore, having an application that implements the results obtained in this thesis,
together with the improvements from the previous evaluations, would be a great step
forward in the field of assistive technologies.

9.4 Publications and contributions


The following subsections present the journals, conferences and workshops where this re-
search work has been partially published. Appendix D contains the articles of these pub-
lications.

9.4.1 Journals
D. Arellano, J. Varona, F. J. Perales. Why do I feel like this? - The importance
of context representation for emotion elicitation. International Journal of Synthetic
Emotions (IJSE), 2(2), 2011. DOI: 10.4018/IJSE. ISSN: 1947-9093.


D. Arellano, J. Varona and F. J. Perales. Generation and visualization of emotional
states in virtual characters. Computer Animation and Virtual Worlds (CAVW),
19(3-4), pp. 259-270, 2008. DOI: 10.1002/cav.263.

9.4.2 Proceedings
D. Arellano, N. Bee, K. Janowski, E. Andre, J. Varona, F. J. Perales. Influence of
Head Orientation in Perception of Personality Traits in Virtual Agents. AAMAS
2011 (Short paper), Taiwan, May, 2011.

D. Arellano, I. Lera, J. Varona and F. J. Perales. Integration of a semantic and affective
model for realistic generation of emotional states in virtual characters. ACII09,
Amsterdam, Netherlands, September 2009.

I. Lera, D. Arellano, J. Varona, C. Juiz and R. Puigjaner. Semantic Model for Facial
Emotion to improve the human computer interaction in AML. In UCAmI 2008, Vol.
51/2009, Salamanca, Spain, pp. 139-148, 2008.

9.4.3 Workshops
D. Arellano, I. Lera, J. Varona and F. J. Perales. Generating Affective Characters for
Assistive Applications. EMOTIONS & MACHINES Workshop, Geneva, Switzerland,
August 2009.

D. Arellano, I. Lera, J. Varona and F. J. Perales. Virtual Characters with Emotional
States. 2nd PEACH Summer School. Dubrovnik, Croatia, 2008.

9.4.4 Research placements


Institut für Informatik, Lehrstuhl für Multimedia-Konzepte und Anwendungen, University
of Augsburg, Germany. From May 12th, 2010 until November 8th, 2010.

Bibliography

[1] I. Albrecht, M. Schroder, J. Haber, and H. Seidel. Mixed feelings: Expression of non-
basic emotions in a muscle-based talking head. Journal of Virtual Reality. Special
issue on Language, Speech and Gesture for Virtual Reality, 8(4):201212, 2005.

[2] R. D. Aldrich. What ever happened to baby jane? [film]. Film, 1962. Warner Bros.

[3] E. Andre, M. Klesen, P. Gebhard, S. Allen, and T. Rist. Integrating models of


personality and emotions into lifelike characters. In Affective Interactions, Lecture
Notes in Computer Science, pages 136149. Springer, 1999.

[4] D. Arellano, I. Lera, J. Varona, and F. J. Perales. Integration of a semantic and


affective model for realistic generation of emotional states in virtual characters. In
Proceedings of 2009 International Conference on Affective Computing & Intelligent
Interaction (ACII 2009), pages 642648, 2009.

[5] M. Argyle. Bodily communication. Methuen & Co. Ltd, 1988.

[6] A. Arya, S. DiPaola, L. Jefferies, and J. T. Enns. Socially communicative characters


for interactive applications. In WSCG2006, 2006.

[7] A. Arya, L. N. Jefferies, J. T. Enns, and S. DiPaola. Facial actions as visual cues
for personality. Journal of Visualization and Computer Animation, 17(3-4):371382,
2006.

[8] K. Balci. XfaceEd: Authoring tool for embodied conversational agents. In ICMI,
pages 208213, 2005.

[9] C. Bartneck. Integrating the occ model of emotions in embodied characters. In


Proceedings of the Workshop on Workshop on Virtual Conversational Characters:
Applications, Methods, and Research Challenges, 2002.


[10] C. Bartneck, M. J. Lyons, and M. Saerbeck. The relationship between emotion


models and artificial intelligence. In SAB2008 Workshop on The Role of Emotion in
Adaptive Behavior and Cognitive Robotics, Osaka, 2008.
[11] J. Bates. The nature of character in interactive worlds and the oz project. Technical
report cmu-cs-92-200, Carnegie Mellon University, 1992.
[12] C. Becker-Asano. WASABI: affect simulation for agents with believable interac-
tivity, volume 319 of Dissertationen zur künstlichen Intelligenz. IOS Press, 2008.
http://books.google.com/books?id=8ABvlwHBCQIC.
[13] C. Becker-Asano and I. Wachsmuth. Wasabi as a case study of how misattribution of
emotion can be modelled computationally. In A Blueprint for Affective Computing:
a Sourcebook and Manual, pages 179193. Oxford University Press, 2010.
[14] N. Bee, B. Falk, and E. Andre. Simplified facial animation control utilizing novel
input devices: A comparative study. In International Conference on Intelligent User
Interfaces (IUI 09), pages 197206, 2009.
[15] N. Bee, S. Franke, and E. Andre. Relations between facial display, eye gaze and
head tilt: Dominance perception variations of virtual agents. In Proceedings of 2009
International Conference on Affective Computing & Intelligent Interaction (ACII
2009), pages 628634, 2009.
[16] N. Bee, C. Pollock, E. Andre, and M. Walker. Bossy or wimpy: Expressing social
dominance by combining gaze and linguistic behaviors. In J. Allbeck, N. Badler,
T. Bickmore, C. Pelachaud, and A. Safonova, editors, Intelligent Virtual Agents,
volume 6356 of Lecture Notes in Computer Science, pages 265271. Springer Berlin
/ Heidelberg, 2010.
[17] K. L. Bellman. Emotions: Meaningful mappings between the individual and its
world. In Emotions in Humans and Artifacts, pages 149188. Cambridge, MA: MIT
Press, 2002.
[18] K.-I. Benta, A. Rarau, and M. Cremene. Ontology based affective context repre-
sentation. In Proceedings of the 2007 Euro American conference on Telematics and
information systems, EATIS 07, pages 46:146:9, New York, NY, USA, 2007. ACM.
[19] G. Bertrand, F. Nothdurft, S. Walter, A. Scheck, H. Kessler, and W. Minker. Towards
investigating effective affective dialogue strategies. In N. C. C. Chair), K. Choukri,
B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, M. Rosner, and D. Tapias, edi-


tors, Proceedings of the Seventh conference on International Language Resources and


Evaluation (LREC10). European Language Resources Association (ELRA), 2010.
[20] C. Biehn. Facial expression repertoire (fer), 2005.
http://research.animationsinstitut.de/.
[21] G. Boeree. Hans eysenck (1916 1997) (and other temperament theorists).
http://webspace.ship.edu/cgboer/eysenck.html, 1998.
[22] M. Bradley and P. Lang. Measuring emotion: The self-assessment manikin and the
semantic differential. Journal of Behavior Therapy and Experimental Psychiatry,
25:4959, 1994.
[23] H. G. Brown, M. S. Poole, L. Deng, and P. Forducey. Towards a sociability the-
ory of computer anxiety: An interpersonal circumplex perspective. In HICSS 05:
Proceedings of the Proceedings of the 38th Annual Hawaii International Conference
on System Sciences (HICSS05) - Track 6, page 151.1, USA, 2005. IEEE Computer
Society.
[24] J. Cao, H. Wang, P. Hu, and J. Miao. Pad model based facial expression analysis.
In G. Bebis, R. Boyle, B. Parvin, D. Koracin, P. Remagnino, F. Porikli, J. Peters,
J. Klosowski, L. Arns, Y. Chun, T.-M. Rhyne, and L. Monroe, editors, Advances in
Visual Computing, volume 5359 of Lecture Notes in Computer Science, pages 450
459. Springer Berlin / Heidelberg, 2008.
[25] P. H.-M. Chang, Y.-H. Chien, E. C.-C. Kao, and V.-W. Soo. A knowledge-based
scenario framework to support intelligent planning characters, pages 134145. Lecture
Notes in Computer Science. Springer-Verlag, London, UK, 2005.
[26] H. Chen, F. Perich, T. Finin, and A. Joshi. Soupa: Standard ontology for ubiquitous
and pervasive applications. In International Conference on Mobile and Ubiquitous
Systems: Networking and Services, pages 258267, Boston, MA, 2004.
[27] G. L. Clore and A. Ortony. Cognition in emotion: Always, sometimes, or never? In
I. R. D. L. . L. Nadel, editor, Cognitive Neuroscience of Emotion. Series in Affective
Science, 2000.
[28] J. Cloud-Buckner, M. Sellick, B. Sainathuni, B. Yang, and J. Gallimore. Expression of
personality through avatars: Analysis of effects of gender and race on perceptions of
personality. In Proceedings of the 13th International Conference on Human-Computer
Interaction. Part III, pages 248256, Berlin, Heidelberg, 2009. Springer-Verlag.


[29] T. Cochrane. 8 dimensions for the emotions. Social Science Information. Special
issue The language of emotion: conceptual and cultural issues, 48:379420, 2009.
[30] M. Courgeon, S. Buisine, and J.-C. Martin. Impact of expressive wrinkles on per-
ception of a virtual characters facial expressions of emotions. Proceedings of the 9th
International Conference on Intelligent Virtual Agents, pages 201214, 2009.
[31] M. Courgeon, C. Clavel, and J.-C. Martin. Appraising emotional events during a real-
time interactive game. In Proceedings of the International Workshop on Affective-
Aware Virtual Agents and Social Robots, AFFINE 09, pages 7:17:5, New York, NY,
USA, 2009. ACM.
[32] R. Cowie, E. DouglasCowie, S. Savvidou, E. McMahon, M. Sawey, and M. Schroder.
Feeltrace: An instrument for recording perceived emotion in real time. In Proceedings
of the ISCA Workshop on Speech and Emotion, pages 1924, 2000.
[33] C. Darwin. The Expression of Emotions in Man and Animals. London: Murray,
1873.
[34] B. De Carolis, V. Carofiglio, and C. Pelachaud. From discourse plans to believ-
able behaviour generation. In Proceedings of the International National Language
Generation Conference, New York, 2002.
[35] F. de Rosis, C. Pelachaud, I. Poggi, V. Carofiglio, and B. D. Carolis. From Greta's
mind to her face: modelling the dynamics of affective states in a conversational
embodied agent. International Journal of Human-Computer Studies, 59(1-2):81-118,
2003.
[36] A. K. Dey, G. D. Abowd, and D. Salber. A conceptual framework and a toolkit for
supporting the rapid prototyping of context-aware applications. Human-Computer
Interaction, 16:97166, December 2001.
[37] J. Dias. Fearnot!: Creating emotional autonomous synthetic characters for empathic
interactions. Master's thesis, Universidade Técnica de Lisboa, Instituto Superior
Técnico, 2005.
[38] J. Dias and A. Paiva. Feeling and reasoning: a computational model for emo-
tional agents. In Proceedings of 12th Portuguese Conference on Artificial Intelligence,
EPIA, pages 127140, 2005.
[39] P. Doyle. Believability through context using knowledge in the world to create
intelligent characters. In Proceedings of the First International Joint Conference on


Autonomous Agents and Multiagent Systems: part 1, AAMAS 02, pages 342349,
New York, NY, USA, 2002. ACM.
[40] L. Edelstein. The Writer's Guide to Character Traits. Writer's Digest Books, 2nd
edition, 2009.
[41] A. Egges, S. Kshirsagar, and N. Magnenat-Thalmann. Generic personality and emo-
tion simulation for conversational agents: Research articles. Comput. Animat. Virtual
Worlds, 15(1):113, 2004.
[42] P. Ekman. Emotion in the Human Face. Cambridge University Press, 1982.
[43] P. Ekman. Moods, emotions, traits. In P. E. . R. Davidson, editor, The nature of
emotion, pages 5658. New York: Oxford University Press, 1994.
[44] P. Ekman. Basic emotions. In T. Dalgleish and M. Power, editors, Handbook of
Cognition and Emotion, chapter 3. Sussex, U.K.: John Wiley & Sons, Ltd., 1999.
[45] P. Ekman. Facial expressions. In T. Dalgleish and M. Power, editors, Handbook of
Cognition and Emotion, chapter 16. New York: John Wiley & Sons Ltd, 1999.
[46] P. Ekman. Darwin's contributions to our understanding of emotional expressions.
Philosophical Transactions of the Royal Society Biological Sciences, 364(1535):3449-3451, 2009.
[47] P. Ekman and W. Friesen. Unmasking the Face. New Jersey: Prentice-Hall, 1975.
[48] P. Ekman, W. Friesen, and J. Hager. The Facial Action Coding System. Weidenfeld
& Nicolson, London, UK, 2nd edition, 2002.
[49] C. Elliott. I picked up catapia and other stories: a multimodal approach to expres-
sivity for emotionally intelligent agents. In Proceedings of the first international
conference on Autonomous agents, AGENTS 97, pages 451457, New York, NY,
USA, 1997. ACM.
[50] C. Elliott. Hunting for the holy grail with emotionally intelligent virtual actors.
SIGART Bull., 9(1):2028, 1998.
[51] H. J. Eysenck and S. B. G. Eysenck. Manual of the Eysenck Personality Question-
naire. London: Hodder and Stoughton, 1975.
[52] S. Eysenck. Citation classic. the measurement of psychoticism: a study of factor
stability and reliability. CC/SOC BEHAV SCI, 1(35):12, 1986.


[53] M. Fabri, D. J. Moore, and D. J. Hobbs. Designing avatars for social interactions. In
L. Canamero and R. A. (eds.), editors, Animating Expressive Characters for Social
Interaction, Advances in Consciousness Research Series. Benjamins Publishing, 2008.

[54] G. Faigin. The Artist's Complete Guide to Facial Expressions. Watson-Guptill, New
York, 1990.

[55] C. Felbaum. Wordnet, an Electronic Lexical Database for English. Cambridge: MIT
Press, 1998.

[56] E. Figa and P. Tarau. The vista architecture: experiencing stories through virtual
storytelling agents. SIGGROUP Bull., 23(2):2728, 2002.

[57] Stanford Center for Biomedical Informatics Research. The Protégé ontology editor
and knowledge acquisition system. Latest version available at
http://protege.stanford.edu/download/download.html. Last reviewed: 28/02/2011.

[58] R. Forchheimer and I. S. Pandzic. MPEG-4 facial animation The Standard, Imple-
mentation and Applications. John Wiley and Sons, West Sussex, UK, 1996.

[59] N. H. Frijda. The Emotions. Cambridge ; New York : Cambridge University Press;
Paris: Éditions de la Maison des sciences de l'homme, 1986.

[60] N. H. Frijda. The laws of emotion. In . N. S. Jennifer M. Jenkins, Keith Oatley, editor,
Human Emotions: A Reader, pages 271287. Malden, MA: Blackwell Publishers,
1998.

[61] A. Gangemi, N. Guarino, C. Masolo, A. Oltramari, and L. Schneider. Sweeten-


ing ontologies with dolce. In Proceedings of the 13th International Conference on
Knowledge Engineering and Knowledge Management. Ontologies and the Semantic
Web, EKAW 02, pages 166181, London, UK, 2002. Springer-Verlag.

[62] A. N. Garvin. Experiential retailing: extraordinary store environments and purchase


behavior. Masters thesis, Eastern Michigan University, 2009.

[63] P. Gebhard. ALMA: a layered model of affect. In AAMAS 05: Proceedings of the
fourth international joint conference on Autonomous agents and multiagent systems,
pages 2936, NY, USA, 2005. ACM.

[64] P. Gebhard. Emotionalisierung interaktiver Virtueller Charaktere. PhD thesis, Universität
des Saarlandes, 2007.


[65] P. Gebhard and K. H. Kipp. Are computer-generated emotions and moods plausible
to humans?. In IVA, volume 4133 of Lecture Notes in Computer Science, pages
343356. Springer, 2006.
[66] S. D. Gosling, P. J. Rentfrow, and W. B. S. Jr. A very brief measure of the big-five
personality domains. Journal of Research in Personality, 37(6):504528, 2003.
[67] K. Grammer and E. Oberzaucher. The reconstruction of facial expressions in em-
bodied systems: New approaches to an old problem. ZiF Mitteilungen, 2:1431,
2006.
[68] W. O. W. Group. Owl 2 web ontology language. document overview. w3c recommen-
dation 27 october 2009. Latest version available at http://www.w3.org/TR/owl2-
overview/. Last reviewed: 09/19/2010.
[69] M. Gruninger and M. Fox. Methodology for the design and evaluation of ontologies.
In IJCAI95, Workshop on Basic Ontological Issues in Knowledge Sharing, 1995.
[70] M. Gutiérrez, A. García-Rojas, D. Thalmann, F. Vexo, L. Moccozet, N. Magnenat-
Thalmann, M. Mortara, and M. Spagnuolo. An ontology of virtual humans: Incorporating
semantics into human shapes. Vis. Comput., 23(3):207-218, 2007.
[71] D. Heckmann, T. Schwartz, B. Brandherm, M. Schmitz, and M. von Wilamowitz-
Moellendorff. Gumo: The general user model ontology. User Modeling 2005, pages
428432, 2005.
[72] L. Hewlett-Packard Development Company. Jena: A Semantic Web Framework for
Java, 2008. version 2.5.6. http://jena.sourceforge.net/.
[73] J. R. Hobbs and F. Pan. Time ontology in owl. Latest version available at
http://www.w3.org/TR/owl-time/. Last reviewed: 25/06/2011.
[74] W. K. Hofstee, B. de Raad, and L. R. Goldberg. Integration of the big five and cir-
cumplex approaches to trait structure. Journal of Personality and Social Psychology,
63(1):146163, 1992.
[75] D. C. Howe. Rita.wordnet. http://www.rednoise.org/rita/wordnet/documentation/index.htm.
Last reviewed: 08052011.
[76] S. Inversions. http://www.facegen.com/modeller.htm.
[77] K. Isbister and C. Nass. Consistency of personality in interactive characters: ver-
bal cues, non-verbal cues, and user characteristics. Int. J. Hum.-Comput. Stud.,
53(2):251267, 2000.


[78] S. Kaiser and T. Wehrle. Facial expressions in social interactions: Beyond basic
emotions, volume Advances in Consciousness Research (Vol.74): Animating Expres-
sive Characters for Social Interactions, chapter 4, pages 5369. Amsterdam : John
Benjamins Publishing Company, 2008.

[79] Z. Kasap and N. Magnenat-Thalmann. Intelligent virtual humans with autonomy and
personality: State-of-the-art. Intelligent Decision Technologies, 1(1-2):315, 2007.

[80] Z. Kasap, M. B. Moussa, P. Chaudhuri, and N. Magnenat-Thalmann. Making them


remember - emotional virtual characters with memory. IEEE Computer Graphics
and Applications, 29(2):2029, 2009.

[81] D. Keltner. Facial expression, personality, and psychopathology. In P. Ekman and


E. L. Rosenberg, editors, What the face reveals: basic and applied studies of spon-
taneous expression using the facial action coding system(FACS), chapter 25, pages
548550. Oxford University Press, 1997.

[82] H. Kessler, A. Festini, H. C. Traue, S. Filipic, M. Weber, and H. Hoffmann. Sim-


plex - simulation of personal emotion experience. In K. V. (Ed.), editor, Affective
Computing: Emotion Modelling, Synthesis and Recognition. I-Tech Education and
Publishing, Vienna, 2008.

[83] E. Krahmer, S. van Buuren, Z. Ruttkay, and W. Wesselink. Audio-visual personality


cues for embodied agents: An experimental evaluation. In Proc. of the AAMAS03
Ws on Embodied Conversational Characters as Individuals, 2003.

[84] R. Krummenacher, H. Lausen, and T. Strang. Analyzing the modeling of context


with ontologies. In International Workshop on Context-Awareness for Self-Managing
Systems, pages 1122, 2007.

[85] S. Kshirsagar and N. Magnenat-Thalmann. A multilayer personality model. In


SMARTGRAPH 02: Proceedings of the 2nd international symposium on Smart
graphics, pages 107115, NY, USA, 2002. ACM.

[86] B. Lance and S. Marsella. Glances, glares, and glowering: how should a virtual
human express emotion through gaze? Autonomous Agents and Multi-Agent Systems,
20(1):5069, 2010.

[87] D. Lenat. Cyc: A large-scale investment in knowledge infrastructure. Communica-


tions of the ACM, 38(11):3338, 1995.


[88] Y.-L. Tian, T. Kanade, and J. F. Cohn. Recognizing action units for facial expression
analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23:97-115, 2001.
[89] J. M. Lopez, R. Gil, R. Garca, I. Cearreta, and N. Garay. Towards an ontology
for describing emotions. In Proceedings of the 1st world summit on The Knowledge
Society: Emerging Technologies and Information Systems for the Knowledge Society,
WSKS 08, pages 96104, Berlin, Heidelberg, 2008. Springer-Verlag.
[90] C. Mackay, T. Cox, G. Burrows, and T. Lazzerini. An inventory for the measure-
ment of self-reported stress and arousal. The British Journal of Social and Clinical
Psychology, 17(3):283284, 1978.
[91] N. Magnenat-Thalmann and Z. Kasap. Modelling socially intelligent virtual
humans. In VRCAI 09: Proceedings of the 8th International Conference
on Virtual Reality Continuum and its Applications in Industry. ACM, 2009.
http://www.vrcai2009.com/socially intelligentVH.pdf.
[92] L. Malatesta, A. Raouzaiou, K. Karpouzis, and S. Kollias. Towards modeling em-
bodied conversational agent character profiles using appraisal theory predictions in
expression synthesis. Applied Intelligence, 30(1):5864, 2009.
[93] M. Mancini and C. Pelachaud. Implementing distinctive behavior for conversational
agents. Gesture-Based Human-Computer Interaction and Simulation, Lecture Notes
in Computer Science, 5085/2009:163174, 2009.
[94] S. Marsella and J. Gratch. Ema: A computational model of appraisal dy-
namics. In European Meeting on Cybernetics and Systems Research, 2006.
http://www.ict.usc.edu/ marsella/publications/N Emcsr Marsella.pdf.
[95] S. Marsella and J. Gratch. Ema: A process model of appraisal dynamics. Journal of
Cognitive Systems Research, 10(1):7090, 2009.
[96] G. Matthews, D. Jones, and A. Chamberlain. Refining the measurement of mood:
the uwist mood adjetive checklist. British Journal of Psychology, 81:1742, 1990.
[97] R. McCrae and O. John. An introduction to the five-factor model and its applications.
Journal of Personality, 60(1):175215, 1992.
[98] R. R. McCrae. The five-factor model of personality traits: consensus and contro-
versy. In . G. M. P. J. Corr, editor, Cambridge Handbook of Personality Psychology,
chapter 9, pages 148161. Oxford: Oxford University Press, 2009.

[99] R. R. McCrae and P. T. Costa. Validation of a five-factor model of personality across instruments and observers. Journal of Personality and Social Psychology, 52(1):81–90, 1987.

[100] D. L. McGuinness and F. van Harmelen. OWL Web Ontology Language Overview. W3C Recommendation, 10 February 2004. Latest version available at http://www.w3.org/TR/2004/REC-owl-features-20040210/#s1.2. Last reviewed: 09/19/2010.

[101] M. McRorie, I. Sneddon, E. de Sevin, E. Bevacqua, and C. Pelachaud. A model of personality and emotional traits. In Z. Ruttkay, M. Kipp, A. Nijholt, and H. Vilhjalmsson, editors, Intelligent Virtual Agents, volume 5773 of Lecture Notes in Computer Science, pages 27–33. Springer Berlin / Heidelberg, 2009.

[102] A. Mehrabian. Nonverbal Communication. Chicago, IL: Aldine-Atherton, 1972.

[103] A. Mehrabian. Incorporating emotions and personality in artificial intelligence software, 1995. http://www.kaaj.com/psych/ai.html. Last access: 11/11/2010.

[104] A. Mehrabian. Analysis of the big-five personality factors in terms of the PAD temperament model. Australian Journal of Psychology, 48(2):86–92, 1996.

[105] A. Mehrabian. Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament. Current Psychology, 14(4):261–292, 1996.

[106] A. Mehrabian and J. A. Russell. An Approach to Environmental Psychology. Cambridge, MIT Press, 1974.

[107] G. Miller. WordNet: A lexical database for English. Communications of the ACM, 38(11), 1995.

[108] M. Mitrasinovic. Total landscape, theme parks, public space. Ashgate Publishing Limited, 2006.

[109] D. Moffat and N. H. Frijda. Where there's a will there's an agent. In M. J. Wooldridge and N. R. Jennings, editors, Intelligent Agents: ECAI-94 Workshop on Agent Theories, Architectures, and Languages, pages 245–260. Springer, Berlin, 1995.

[110] M. Mosmondor, T. Kosutic, and I. S. Pandzic. LiveMail: personalized avatars for mobile entertainment. In Proceedings of the 3rd International Conference on Mobile Systems, Applications, and Services, MobiSys '05, pages 15–23, New York, NY, USA, 2005. ACM.

[111] M. B. Moussa and N. Magnenat-Thalmann. Applying affect recognition in serious games: The PlayMancer project. Lecture Notes in Computer Science. Motion in Games, 5884:53–62, 2009.

[112] A. Nakasone and M. Ishizuka. Storytelling ontology model using RST. In IAT '06: Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology, pages 163–169, Washington, DC, USA, 2006. IEEE Computer Society.

[113] M. Neff, Y. Wang, R. Abbott, and M. Walker. Evaluating the effect of gesture and language on personality perception in conversational agents. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, and A. Safonova, editors, Intelligent Virtual Agents, volume 6356 of Lecture Notes in Computer Science, pages 222–235. Springer Berlin / Heidelberg, 2010.

[114] A. Newell. Unified Theories of Cognition. Cambridge, MA: Harvard University Press, 1990.

[115] I. Niles and A. Pease. Towards a standard upper ontology. In Proceedings of the International Conference on Formal Ontology in Information Systems - Volume 2001, FOIS '01, pages 2–9, New York, NY, USA, 2001. ACM.

[116] Z. Obrenovic, N. Garay, J. Lopez, I. Fajardo, and I. Cearreta. An ontology for description of emotional cues. In J. Tao, T. Tan, and R. Picard, editors, Affective Computing and Intelligent Interaction, volume 3784 of Lecture Notes in Computer Science, pages 505–512. Springer Berlin / Heidelberg, 2005.

[117] Z. Obrenovic, D. Starcevic, and V. Devedzic. Using ontologies in design of multimodal user interfaces. Human-Computer Interaction - INTERACT '03, pages 535–542, 2003.

[118] M. Ochs, N. Sabouret, and V. Corruble. Simulation of the dynamics of nonplayer characters' emotions and social relations in games. IEEE Transactions on Computational Intelligence and AI in Games, 1(4):281–297, 2009.

[119] A. Ortony. On making believable emotional agents believable. In Emotions in Humans and Artifacts, pages 189–211. Cambridge, MA: MIT Press, 2002.

[120] A. Ortony, G. Clore, and A. Collins. The Cognitive Structure of Emotions. Cambridge University Press, 1988.

[121] A. Paiva, J. Dias, D. Sobral, R. Aylett, P. Sobreperez, S. Woods, C. Zoll, and L. Hall. Caring for agents and agents that care: Building empathic relations with synthetic
agents. Autonomous Agents and Multiagent Systems, International Joint Conference on, 1:194–201, 2004.

[122] S. Pasquariello and C. Pelachaud. Greta: A simple facial animation engine. 6th Online World Conference on Soft Computing in Industrial Applications, Session on Soft Computing for Intelligent 3D Agents, 2001.

[123] R. W. Picard. Affective Computing. MIT Press, 1997.

[124] S. M. Platt and N. Badler. Animating facial expressions. Computer Graphics, 15(3):245–252, 1981.

[125] R. Plutchik. Emotions: A Psychoevolutionary Synthesis. Harper & Row, New York, 1980.

[126] M. Poznanski and P. Thagard. Changing personalities: towards realistic virtual characters. Journal of Experimental & Theoretical Artificial Intelligence, 17(3):221–241, 2005.

[127] B. D. Raad. Structural models of personality. In P. J. Corr and G. Matthews, editors, Cambridge Handbook of Personality Psychology, chapter 8, pages 127–147. Oxford: Oxford University Press, 2009.

[128] B. D. Raad and D. P. Barelds. A new taxonomy of Dutch personality traits based on a comprehensive and unrestricted list of descriptors. Journal of Personality and Social Psychology, 94(2):347–364, 2008.

[129] M. Radovan and L. Pretorius. Facial animation in a nutshell: past, present and future. In SAICSIT '06: Proceedings of the 2006 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries, 2006.

[130] A. S. Rao and M. P. Georgeff. Modeling rational agents within a BDI-architecture. In J. Allen, R. Fikes, and E. H. Sandewall, editors, Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR'91), pages 473–484, 1991.

[131] A. Raouzaiou, N. Tsapatsoulis, K. Karpouzis, and S. Kollias. Parameterized facial expression synthesis based on MPEG-4. EURASIP J. Appl. Signal Process., 2002(1):1021–1038, 2002.

[132] M. R. Raphalalani. Basic emotions in Tshivenda: a cognitive semantic analysis. Master's thesis, Stellenbosch University, 2006.

[133] J. Russell. Core affect and the psychological construction of emotion. Psychological Review, 110(1):145–172, 2003.

[134] J. A. Russell. A circumplex model of affect. Journal of Personality and Social Psychology, 39(6):1161–1178, 1980.

[135] J. A. Russell. Measures of emotion. In R. Plutchik and H. Kellerman, editors, Emotion: Theory, Research, and Experience. Volume 4: The Measurement of Emotions, chapter 4, pages 83–111. New York: Academic Press, 1989.

[136] J. A. Russell and A. Mehrabian. Evidence for a three-factor theory of emotions. Journal of Research in Personality, 11(3):273–294, 1977.

[137] M. Sagar. Facial performance capture and expressive translation for King Kong. In ACM SIGGRAPH 2006 Sketches, SIGGRAPH '06, New York, NY, USA, 2006. ACM.

[138] R. Schapp. Enhancing video game characters with emotion, mood and personality. Master's thesis, Delft University of Technology, 2009.

[139] J. Scheffczyk, C. F. Baker, and S. Narayanan. Ontology-based reasoning about lexical resources. Ontologies and Lexical Resources for Natural Language Processing, pages 1–8, 2006.

[140] K. R. Scherer. Appraisal considered as a process of multi-level sequential checking. In K. R. Scherer, A. Schorr, and T. Johnstone, editors, Appraisal Processes in Emotion: Theory, Methods, Research, pages 92–120. Oxford University Press, New York and Oxford, 2001.

[141] T. Strang and C. Linnhoff-Popien. A context modeling survey. In UbiComp 1st International Workshop on Advanced Context Modelling, Reasoning and Management, pages 31–41, 2004.

[142] I. Swartjes and M. Theune. The virtual storyteller: story generation by simulation. Twentieth Belgian-Netherlands Conference on Artificial Intelligence (BNAIC), 2008.

[143] B. Szekely and J. Betz. Jastor: Typesafe, ontology driven RDF access from Java. Latest version available at http://jastor.sourceforge.net/. Last reviewed: 25/06/2011.

[144] M. Tekalp and J. Ostermann. Face and 2-D mesh animation in MPEG-4. Signal Processing: Image Communication, 15(4-5):387–421, 2000.

[145] R. E. Thayer. Factor analytic and reliability studies on the activation-deactivation adjective check list. Psychological Reports, 42:747–756, 1978.

[146] Y. Tong, W. Liao, and Q. Ji. Inferring facial action units with causal relations. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 2006.

[147] J. D. Velasquez. Modeling emotions and other motivations in synthetic agents. In Proceedings of AAAI'97, 1997.

[148] V. Vinayagamoorthy, M. Gillies, A. Steed, E. Tanguy, X. Pan, C. Loscos, and M. Slater. Building expression into virtual characters. Eurographics State of the Art Reports (STARs), 2006.

[149] X. H. Wang, D. Q. Zhang, T. Gu, and H. K. Pung. Ontology based context modeling and reasoning using OWL. In Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops, PERCOMW '04, pages 18–22, Washington, DC, USA, 2004. IEEE Computer Society.

[150] D. Watson. Mood and Temperament. The Guilford Press, 2000.

[151] D. Watson and L. A. Clark. On traits and temperament: general and specific factors of emotional experience and their relation to the five-factor model. Journal of Personality, 60(2):441–476, 1992.

[152] D. Watson and L. A. Clark. The PANAS-X: Manual for the Positive and Negative Affect Schedule - Expanded Form. Available from http://www.psychology.uiowa.edu/Faculty/Watson/PANAS-X.pdf, 1994.

[153] C. Whissell. The dictionary of affect in language. In R. Plutchik and H. Kellerman, editors, Emotion: Theory, Research, and Experience. Volume 4: The Measurement of Emotions, chapter 5, pages 113–131. New York: Academic Press, 1989.

[154] J. S. Wiggins. A psychological taxonomy of trait-descriptive terms: The interpersonal domain. Journal of Personality and Social Psychology, 37(3):395–412, 1979.

[155] J. S. Wiggins, P. D. Trapnell, and N. Phillips. Psychometric and geometric characteristics of the revised interpersonal adjective scales (IAS-R). Multivariate Behavioral Research, pages 517–530, 1988.

[156] V. Zammitto, S. DiPaola, and A. Arya. A methodology for incorporating personality modeling in believable game characters. In 4th International Conference on Games Research and Development (CyberGames), 2008.

[157] M. A. Zevon and A. Tellegen. The structure of mood change: An idiographic/nomothetic analysis. Journal of Personality and Social Psychology, 43(1):111–122, 1982.

Appendix A

Ontology Rules

The following rules are specified for emotion elicitation in the JENA rule format, which is the way they were defined in our application. The format is:
name-of-the-rule: a1 a2 ... an = c1 c2 ... cn,
where the ai and ci are predicates of the form: subject, relation, object.
A subject is in most cases a variable taken from the database. To use it in a rule it is preceded by a question mark (?). For instance, the variable ?character refers to the character to whom the rules are applied.
A relation is the link between the subject and the object. It is taken from the ontology itself through a URI that indicates where it has been defined. In our case the defined URIs are:
- personalityOntology: http://dmi.uib.es/~ugiv/ontologies/personalityEmotion
- event: http://dmi.uib.es/~ugiv/ontologies/event
- rdf-syntax: http://www.w3.org/1999/02/22-rdf-syntax-ns (the definition of RDF concepts according to the standard)
An object can be a variable taken from the database (preceded by ?) or a type defined in the ontology (preceded by the URI).
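
To make the execution of these rules concrete, the following sketch shows how one of them could be loaded and run with the Jena framework cited in the references (Jena 2.x, com.hp.hpl.jena packages). It is a minimal illustration and not the code of the thesis: in Jena's own rule syntax the "=" separator becomes "->" and each rule is enclosed in brackets, the ontology URIs are the ones listed above (the tilde in the dmi.uib.es addresses is assumed), and the instance file name is hypothetical.

import java.util.List;

import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.RDFNode;
import com.hp.hpl.jena.rdf.model.StmtIterator;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;

public class EmotionRuleSketch {

    // Namespaces as listed above (the "~" is assumed).
    static final String PE  = "http://dmi.uib.es/~ugiv/ontologies/personalityEmotion#";
    static final String EV  = "http://dmi.uib.es/~ugiv/ontologies/event#";
    static final String RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#";

    // Rule SatisfactionByMeEmotions from Table A.1, rewritten in Jena rule syntax:
    // antecedent clauses, then "->", then consequent clauses.
    static final String RULES =
        "[SatisfactionByMeEmotions: " +
        "(?eventRelation <" + PE + "hasSatisfactionValue> ?eventSatScale), " +
        "(?eventSatScale <" + RDF + "type> <" + PE + "Satisfactory>), " +
        "(?character <" + PE + "hasEventSatisfaction> ?eventRelation), " +
        "(?character <" + EV + "hasRole> ?agentRole), " +
        "(?agentRole <" + RDF + "type> <" + EV + "ByMe>) " +
        "-> " +
        "(?agentRole <" + PE + "hasPositiveSelfEmotion> <" + PE + "Pride>), " +
        "(?agentRole <" + PE + "hasPositiveSelfEmotion> <" + PE + "Gratification>)]";

    public static void main(String[] args) {
        // Ontology individuals describing the character, the event and its appraisal.
        Model base = ModelFactory.createDefaultModel();
        base.read("file:characterInstances.owl");   // hypothetical instance file

        // Parse the rule and wrap it in a forward-chaining rule reasoner.
        List rules = Rule.parseRules(RULES);
        GenericRuleReasoner reasoner = new GenericRuleReasoner(rules);
        InfModel inferred = ModelFactory.createInfModel(reasoner, base);

        // The elicited emotions appear as inferred hasPositiveSelfEmotion statements.
        StmtIterator it = inferred.listStatements(
                null, inferred.getProperty(PE + "hasPositiveSelfEmotion"), (RDFNode) null);
        while (it.hasNext()) {
            System.out.println(it.nextStatement());
        }
    }
}

Running the reasoner over the instance data adds the Pride and Gratification statements to the agent role, which is exactly the consequent listed for the rule SatisfactionByMeEmotions in Table A.1.
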
Table A.1 presents the rules used to elicit emotions according to the Satisfaction of the Event (for agents with the roles ByMe and OnMe) and the AgentAdmiration (for agents with the roles ByOther and OnOther).

Rule: NotSatisfactionByMeEmotions
If the event is NotSatisfactory and the role is ByMe, then the rules are:
?eventRelation <personalityOntology#hasSatisfactionValue> ?eventSatScale
?eventSatScale <rdf-syntax#type><personalityOntology#NotSatisfactory>
?character <personalityOntology#hasEventSatisfaction> ?eventRelation
?character <event#hasRole> ?agentRole
?agentRole <rdf-syntax#type> <event#ByMe>
=
?agentRole <personalityEmotion#hasNegativeSelfEmotion><personalityEmotion#Shame>
?agentRole <personalityEmotion#hasNegativeSelfEmotion><personalityEmotion#Remorse>

Rule: SatisfactionByMeEmotions
If the event is Satisfactory and the role is ByMe, then the rules are:
?eventRelation <personalityEmotion#hasSatisfactionValue> ?eventSatScale
?eventSatScale <rdf-syntax#type> <personalityEmotion#Satisfactory>
?character <personalityEmotion#hasEventSatisfaction> ?eventRelation
?character <event#hasRole> ?agentRole
?agentRole <rdf-syntax#type><event#ByMe>
=
?agentRole <personalityEmotion#hasPositiveSelfEmotion><personalityEmotion#Pride>
?agentRole <personalityEmotion#hasPositiveSelfEmotion><personalityEmotion#Gratification>

Rule: IndifferentByMeEmotions
If the event does not have a level of satisfaction (i.e. it is Indifferent) and the role is ByMe, then the rules
are:
?eventRelation <personalityEmotion#hasSatisfactionValue> ?eventSatScale
?eventSatScale <rdf-syntax#type> <personalityEmotion#IndifferentE>
?character <personalityEmotion#hasEventSatisfaction> ?eventRelation
?character <event#hasRole> ?agentRole
?agentRole <rdf-syntax#type> <event#ByMe>
=
?agentRole <personalityEmotion#hasSelfEmotion><personalityEmotion#Neutral>

Rule: SatisfactionOnMeEmotionsGoal
If the event is Satisfactory, it means the realization of a goal, and the role is OnMe, then the rules are:
?eventRelation <personalityEmotion#hasEvent> ?event
?eventRelation <personalityEmotion#hasSatisfactionValue> ?satScale
?satScale <rdf-syntax#type> <personalityEmotion#Satisfactory>
?character <personalityEmotion#hasEventSatisfaction> ?eventRelation
?character <event#hasRole> ?agentRole
?agentRole <rdf-syntax#type> <event#OnMe>
?character <personalityEmotion#hasGoal>?event
=
?agentRole <personalityEmotion#hasPositiveSelfEmotion><personalityEmotion#Satisfaction>
?agentRole <personalityEmotion#hasPositiveSelfEmotion> <personalityEmotion#Relief>

Rule: SatisfactionOnMeEmotions
If the event is Satisfactory and the role is OnMe, then the rules are:

?eventRelation <personalityEmotion#hasEvent> ?event
?eventRelation <personalityEmotion#hasSatisfactionValue> ?satScale
?satScale <rdf-syntax#type><personalityEmotion#Satisfactory>
?character <personalityEmotion#hasEventSatisfaction> ?eventRelation
?character <event#hasRole> ?agentRole
?agentRole <rdf-syntax#type> <event#OnMe>
=
?agentRole <personalityEmotion#hasPositiveSelfEmotion> <personalityEmotion#Joy>

Rule: NotSatisfactionOnMeEmotionsGoal
If the event is NotSatisfactory, it is related to a goal, and the role is OnMe, then the rules are:
?eventRelation <personalityEmotion#hasEvent> ?event
?eventRelation <personalityEmotion#hasSatisfactionValue> ?satScale
?satScale <rdf-syntax#type> <personalityEmotion#NotSatisfactory>
?character <personalityEmotion#hasEventSatisfaction > ?eventRelation
?character <event#hasRole> ?agentRole
?agentRole <rdf-syntax#type> <event#OnMe>
?character <personalityEmotion#hasGoal> ?event
=
?agentRole <personalityEmotion#hasNegativeSelfEmotion> <personalityEmotion#Disappointment>
?agentRole <personalityEmotion#hasNegativeSelfEmotion> <personalityEmotion#Fear>

Rule: NotSatisfactionOnMeEmotions
If the event is NotSatisfactory and the role is OnMe, then the rules are:
?eventRelation <personalityEmotion#hasEvent> ?event
?eventRelation <personalityEmotion#hasSatisfactionValue> ?satScale
?satScale <rdf-syntax#type> <personalityEmotion#NotSatisfactory>
?character <personalityEmotion#hasEventSatisfaction> ?eventRelation
?character <event#hasRole> ?agentRole
?agentRole <rdf-syntax#type> <event#OnMe>
=
?agentRole <personalityEmotion#hasNegativeSelfEmotion> <personalityEmotion#Sadness>

Rule: PositiveAdmirationByOtherEmotions
The agent who has the role ByOther is positively admired by the agent who evaluates the event. Then the
rules are:
?agent <personalityEmotion#feels> ?admirationScale
?admirationScale <personalityEmotion#feelsFor> ?otherAgent
?admirationScale <rdf-syntax#type> <personalityEmotion#Positive>
?otherAgentRole <event#hasEmotionalAgent> ?otherAgent
?otherAgentRole <rdf-syntax#type><ontologies/event#ByOther>
=
?otherAgentRole <personalityEmotion#hasPositiveOtherEmotion> <personalityEmotion#Admiration>
?otherAgentRole <personalityEmotion#hasNegativeOtherEmotion><personalityEmotion#Reproach>

Rule: NegativeAdmirationByOtherEmotions
The agent who has the role ByOther is negatively admired by the agent who evaluates the event. Then the
rules are:

?agent <personalityEmotion#feels> ?admirationScale
?admirationScale <personalityEmotion#feelsFor> ?otherAgent
?admirationScale <rdf-syntax#type> <personalityEmotion#Negative>
?otherAgentRole <event#hasEmotionalAgent> ?otherAgent
?otherAgentRole <rdf-syntax#type> <event#ByOther>
=
?otherAgentRole <personalityEmotion#hasPositiveOtherEmotion> <personalityEmotion#Gratitude>
?otherAgentRole <personalityEmotion#hasNegativeOtherEmotion> <personalityEmotion#Anger>

Rule: PositiveAdmirationOnOtherEmotions
The agent who has the role OnOther is positively admired by the agent who evaluates the event. Then the
rules are:
?agent <personalityEmotion#feels> ?admirationScale
?admirationScale <personalityEmotion#feelsFor> ?otherAgent
?admirationScale <rdf-syntax#type> <personalityEmotion#Positive>
?otherAgentRole <event#hasEmotionalAgent> ?otherAgent
?otherAgentRole <rdf-syntax#type> <event#OnOther>
=
?otherAgentRole <personalityEmotion#hasPositiveOtherEmotion> <personalityEmotion#Joy>
?otherAgentRole <personalityEmotion#hasNegativeOtherEmotion> <personalityEmotion#Pity>

Rule: NegativeAdmirationOnOtherEmotions
The agent who has the role OnOther is negatively admired by the agent who evaluates the event. Then the
rules are:
?agent <personalityEmotion#feels> ?admirationScale
?admirationScale <personalityEmotion#feelsFor> ?otherAgent
?admirationScale <rdf-syntax#type> <personalityEmotion#Negative>
?otherAgentRole <event#hasEmotionalAgent> ?otherAgent
?otherAgentRole <rdf-syntax#type> <event#OnOther>
=
?otherAgentRole <personalityEmotion#hasPositiveOtherEmotion> <personalityEmotion#Gloating>
?otherAgentRole <personalityEmotion#hasNegativeOtherEmotion> <personalityEmotion#Resentment>

Rule: EmotiveScaleSGEmotions
These are the emotions elicited due to a StronglyGood event. Then the rules are:
?emotiveScale <rdf-syntax#type> <personalityEmotion#StronglyGood>
=
?emotiveScale <personalityEmotion#hasPositiveEmotion> <personalityEmotion#Love>
?emotiveScale <personalityEmotion#hasEmotion><personalityEmotion#Love>

Rule: EmotiveScaleGEmotions
These are the emotions elicited due to a Good event. Then the rules are:
?emotiveScale <rdf-syntax#type> <personalityEmotion#Good>
=
?emotiveScale <personalityEmotion#hasPositiveEmotion> <personalityEmotion#Liking>
?emotiveScale <personalityEmotion#hasEmotion><personalityEmotion#Liking>

Rule: EmotiveScaleIEmotions
These are the emotions elicited due to a Indifferent event. Then the rules are:
?emotiveScale <rdf-syntax#type> <personalityEmotion#Indifferent>
=

?emotiveScale <personalityEmotion#hasEmotion><personalityEmotion#Neutral>

Rule: EmotiveScaleBEmotions
These are the emotions elicited due to a Bad event. Then the rules are:
?emotiveScale <rdf-syntax#type> <personalityEmotion#Bad>
=
?emotiveScale <personalityEmotion#hasNegativeEmotion> <personalityEmotion#Disliking>
?emotiveScale <personalityEmotion#hasEmotion><personalityEmotion#Disliking>

Rule: EmotiveScaleSBEmotions
These are the emotions elicited due to a StronglyBad event. Then the rules are:
?emotiveScale <rdf-syntax#type> <personalityEmotion#StronglyBad>
=
?emotiveScale <personalityEmotion#hasNegativeEmotion> <personalityEmotion#Hate>
?emotiveScale <personalityEmotion#hasNegativeEmotion> <personalityEmotion#Fear>
?emotiveScale <personalityEmotion#hasEmotion><personalityEmotion#Hate>
?emotiveScale <personalityEmotion#hasEmotion><personalityEmotion#Fear>

Rule: IsGoal
Allows us to know whether the event is a Goal. Then the rules are:
?agent <personalityEmotion#hasGoal> ?event
=
?event <personalityEmotion#isGoalOf> ?agent

Table A.1: JENA Rules for the elicitation of emotions

Appendix B

Mapping from FAPs to AUs

B.1 Mapping
Table B.1 presents the mapping from Facial Animation Parameters (FAPs) to Action Units (AUs), based on the movements performed and the muscles involved.
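
For readers who want to use the table programmatically, the sketch below shows one possible in-memory representation of this mapping in Java: each AU stores the signed FAPs that realize it. This is only an illustration (the class and field names are ours, not the thesis implementation), seeded with two rows of Table B.1.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AuToFapMapping {

    // A FAP reference together with the direction (sign) in which it is driven.
    static class SignedFap {
        final int fap;        // MPEG-4 FAP number, e.g. 31 = raise_l_i_eyebrow
        final int direction;  // +1 or -1, as in the sign column of Table B.1
        SignedFap(int fap, int direction) { this.fap = fap; this.direction = direction; }
        public String toString() { return (direction > 0 ? "+" : "-") + fap; }
    }

    // AU number -> FAPs (with sign) used to realize it; two sample entries from Table B.1.
    static final Map<Integer, List<SignedFap>> AU_TO_FAPS = new HashMap<Integer, List<SignedFap>>();
    static {
        // AU1, Inner Brow Raiser: FAPs 31 and 32 moved in their positive direction.
        AU_TO_FAPS.put(1, Arrays.asList(new SignedFap(31, +1), new SignedFap(32, +1)));
        // AU12, Lip Corner Puller: FAPs 6, 7, 12 and 13 moved in their positive direction.
        AU_TO_FAPS.put(12, Arrays.asList(new SignedFap(6, +1), new SignedFap(7, +1),
                                         new SignedFap(12, +1), new SignedFap(13, +1)));
    }

    public static void main(String[] args) {
        for (Map.Entry<Integer, List<SignedFap>> entry : AU_TO_FAPS.entrySet()) {
            System.out.println("AU" + entry.getKey() + " -> " + entry.getValue());
        }
    }
}

The full mapping would populate the map with all the rows below; the direction field corresponds to the + and - signs shown next to each FAP in the table.
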

Table B.1: Mapping of Facial Animation Parameters (FAPs) into Action Units (AUs)

AU Description Facial Muscle FAP Description Picture


1 Inner Brow Frontalis, 31 + raise l i eyebrow
Raiser pars medialis 32 + raise r i eyebrow

2 Outer Brow Frontalis, 35 + raise l o eyebrow


Raiser pars lateralis 36 + raise r o eyebrow

4 Brow Lowerer Depressor Glabellae, 31 - raise l i eyebrow


- Frown Corrugator, 32 - raise r i eyebrow
Depressor supercilii 33 - raise l m eyebrow
34 - raise i m eyebrow
35 - raise l o eyebrow
36 - raise r o eyebrow
37 - squeeze l eyebrow
38 - squeeze r eyebrow
5 Upper Lid Levator palpebrae, 19 - close t l eyelid
Raiser 1 superioris 20 - close t r eyelid

Table B.1 (continued)


AU Description Facial Muscles FAP Description Picture
6 Cheek Raiser Orbicularis oculi, 41 + lift l cheek
pars orbitalis 42 + lift r cheek
21 + close b l eyelid
22 + close b r eyelid

7 Lid Tightnener Orbicularis oculi, 19 + close t l eyelid


pars palebralis 20 + close t r eyelid
21 + close b l eyelid
22 + close b r eyelid
9 Nose Wrinkler Levator labii 4- lower t midlip
superioris alaquae 61 + stretch l nose
nasi 62 + stretch r nose

10 Upper Lip Levator labii 4- lower t midlip


Raiser superioris 61 + stretch l nose
62 + stretch r nose
63 + raise nose

11 Nasolabial Zygomaticus minor 6+ stretch l cornerlip


Deepener 7+ stretch r cornerlip
61 + stretch l nose
62 + stretch r nose

12 Lip Corner Zygomaticus major 6+ stretch l cornerlip


Puller 7+ stretch r cornerlip
12 + raise l cornerlip
13 + raise r cornerlip

13 Cheek Puffer Levator anguli oris 39 + Puff l cheek


(a.k.a. Caninus) 40 + Puff r cheek

14 Dimpler Buccinnator 6- stretch l cornerlip


7- stretch r cornerlip
53 - stretch l cornerlip o
54 - stretch r cornerlip o

15 Lip Corner Triangularis 12 - Raise l cornerlip


Depressor 13 - Raise r cornerlip

Table B.1 (continued)


AU Description Facial Muscles FAP Description Picture

16 Lower Lip Depressor labii 5- Raise b midlip


Depressor 1 inferioris 10 - Raise b midlip lm
11 - Raise b midlip rm
57 - Raise b midlip lm o
58 - Raise b midlip rm o

17 Chin Raiser Mentalis 18 + Depress chin


10 + raise b midlip lm
11 + raise b midlip rm

18 Lip Puckerer Incisive labii 16 + Push b lip


superiori
Incisive labii 17 + Push t lip
inferioris

20 Lip Stretcher Risorius 6+ Stretch l cornerlip


Platysma 7+ Stretch r cornerlip
53 + Stretch l cornerlip o
54 + Stretch r cornerlip o
22 Lip Funneler Orbicularis oris 4- Lower t midlip
5- Raise b midlip
6- Stretch l cornerlip
7- Stretch r cornerlip o
8- Lower t lip lm
9- Lower t lip rm
10 - Raise b lip lm
11 - Raise b lip rm
23 Lip Tightener Orbicularis oris 5+ raise b midlip
6- Stretch l cornerlip
7- Stretch r cornerlip o

24 Lip Pressor Orbicularis oris 4+ Lower t midlip


5+ Raise b midlip
8+ Lower t lip lm
9+ Lower t lip rm
10 + Raise b lip lm
11 + Raise b lip rm
25 Lip Part Depressor labii 5- Raise b midlip
inferioris OR
Relaxation of 10 - Raise b lip lm
mentalis OR
Orbicularis oris 11 - Raise b lip rm

Table B.1 (continued)


AU Description Facial Muscles FAP Description Picture

26 Jaw Drops Masseter, 3+ Open jaw


relaxed Temporalis,
Internal Pterygoid

27 Mouth Stretch Pterygoids, 5- Raise b midlip


digastric 10 - Raise b lip lm
11 - Raise b lip rm
52 - Raise b midlip o
57 - Raise b lip lm o
58 - Raise b lip rm o
28 Lip Suck Orbicularis oris 16 - Push b lip
17 - Push t lip

AD 29 Jaw Thrust 4 Orbicularis oris 14 + Thrust jaw


17 - Push t lip

AD 30 Jaw Sideways 15 Shift jaw


AD 38 Nostril Dilator Nasalis 61 Stretch l nose
Pars Alaris 62 Stretch r nose
AD 39 Nostril Nasalis 61 Stretch l nose
Compressor Pars Transversa 62 Stretch r nose
Depressor Septi
Nasi
43b Lid droop Relaxation of 19 + Close t l eyelid
(f. 41) Levator palpebrae 20 + Close t r eyelid
superioris

43d Slit Orbicularis oculi 19 + Close t l eyelid


(f. 42) 20 + Close t r eyelid
21 + Close b l eyelid
22 + Close b r eyelid
43 Eyes closed Relaxation of 19 + Close t l eyelid
(f. 42) Levator palpebrae 20 + Close t r eyelid
superioris; 21 + Close b l eyelid
Orbicularis oculi, 22 + Close b r eyelid
pars palpebralis,
44 Squint Orbicularis oculi, 19 + Close t l eyelid
(f. 42) Pars palpebralis 20 + Close t r eyelid
21 + Close b l eyelid

Table B.1 (continued)


AU Description Facial Muscles FAP Description Picture
22 + Close b r eyelid
31 - Raise l i eyebrow
32 - Raise r i eyebrow

45 Blink Relaxation of, 19 Close t l eyelid


(f. 42) Levator palpebrae 20 Close t r eyelid
Occurs superioris; 21 Close b l eyelid
in a Orbicularis oculi, 22 Close b r eyelid
moment pars palpebralis
46 Wink Relaxation of, 19 or Close t l eyelid
(f. 42) Levator palpebrae 20 Close t r eyelid
Only superioris; 21 or Close b l eyelid
one eye Orbicularis oculi, 22 Close b r eyelid
pars palpebralis
51 Head turn 49 Head yaw
left

52 Head turn 49 Head yaw


right

53 Head up 48 Head pitch

54 Head down 48 Head pitch

55 Head tilt 48 Head roll


left

Table B.1 (continued)


AU Description Facial Muscles FAP Description Picture

56 Head tilt 48 Head roll


right

61 Eyes turn 23 Yaw l eyeball


left

62 Eyes turn 23 Yaw l eyeball


right

63 Eyes up 25 Pitch l eyeball


26 Pitch r eyeball

64 Eyes down 25 Pitch l eyeball


26 Pitch r eyeball

1 This mapping is valid for FAPs with negative intensities - upper movement.
2 Images for AUs 1, 2, 4, 5, 6, 7, 9, 10, 11, 13, 15, 17, 18, 22, 25, 26, 27, 43b, 43d, 43, 44, 51, 52, 53, 54, 55, 56 were taken from: http://www.cs.cmu.edu/~face/facs.htm
3 Images for AUs 12, 14, 16, 20, 23, 24, 28, 29, 46, 61, 62, 63, 64 were taken from: http://micromovimiento.com/?p=933#more-933
4 AD instead of AU stands for Action Descriptor.

B.2 Opposite AUs


As a result of observing the facial movements and the AUs involved, we concluded that the AUs shown in the white and gray cells of Tables B.2, B.3, B.4 and B.5 cannot be present at the same time in a facial expression. This helps us discern which FAPs should be activated, and in which direction.
For instance, Table B.2 indicates that AUs 12 and 15 are opposed because they share a set of muscles that move in different directions. These muscles are given by the FAPs present in both AUs: FAP12 and FAP13 move upwards in the movement described by AU12, but downwards in the movement described by AU15.

AU Description Facial Muscles FAP Description Picture


12 Lip Corner Zygomaticus major 6+ Stretch l cornerlip
Puller 7+ Stretch r cornerlip
12 + Raise l cornerlip
13 + Raise r cornerlip

15 Lip Corner Triangularis 12 - Raise l cornerlip


Depressor 13 - Raise r cornerlip

Table B.2: Opposite AUs: AU12 and AU15
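
To illustrate how these incompatibilities could be enforced in practice, the sketch below keeps the opposite pairs from Tables B.2 to B.5 in a small table and rejects any candidate set of AUs that contains both members of a pair. It is an assumed usage example, not code from the thesis.

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OppositeAuCheck {

    // Unordered AU pairs that cannot be active in the same expression (Tables B.2 to B.5).
    static final int[][] OPPOSITE_PAIRS = {
        {12, 15},                              // Table B.2
        {16, 17}, {25, 17}, {26, 17},          // Table B.3
        {23, 25}, {23, 26}, {23, 27},          // Table B.4
        {24, 25}, {24, 26}, {24, 27},
        {20, 22}, {20, 23}                     // Table B.5
    };

    // Returns true if the proposed set of AUs contains no opposite pair.
    static boolean isConsistent(List<Integer> aus) {
        Set<Integer> active = new HashSet<Integer>(aus);
        for (int[] pair : OPPOSITE_PAIRS) {
            if (active.contains(pair[0]) && active.contains(pair[1])) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isConsistent(Arrays.asList(1, 2, 5)));  // true: the fear-like brows of Table B.6
        System.out.println(isConsistent(Arrays.asList(12, 15)));   // false: opposed lip-corner movements
    }
}

A facial expression generator can run such a check before converting the surviving AUs into FAP values.
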

Table B.3 shows a set of AUs - 16, 25 and 26 - that are opposed to AU17. As can be seen in the table, AUs 16 and 25 describe a movement that involves FAPs 10 and 11 with a downward direction, while AU17 corresponds to these FAPs but with an upward direction. Regarding AU26, although it does not share FAPs with AU17, the movement described by the former is opposed to the one described by the latter.
Table B.4 shows two sets of AUs that are also opposed. On the one hand, AUs 23 and 24 describe movements that involve a closure of the mouth, although they do not involve the same set of FAPs. On the other hand, AUs 25, 26 and 27 describe movements that imply openness of the mouth. Here we can see that AU24 is related to FAPs 5, 10 and 11, which are oriented upwards, while AU25 and AU27 are related to this set of FAPs but downwards. Although AU26 is not related to these FAPs, it describes a movement that is opposed to the ones described by AUs 23 and 24.

AU Description Facial Muscles FAP Description Picture


16 Lower Lip Depressor labii 5- Raise b midlip
Depressor 1 inferioris 10 - Raise b midlip lm
11 - Raise b midlip rm
57 - Raise b midlip lm o
58 - Raise b midlip rm o

25 Lip Part Depressor labii 5- Raise b midlip


inferioris OR
Relaxation of 10 - Raise b lip lm
mentalis OR
Orbicularis oris 11 - Raise b lip rm

26 Jaw Drops Masseter, 3+ Open jaw


relaxed Temporalis,
Internal Pterygoid

17 Chin Raiser Mentalis 18 + Depress chin


10 + Raise b midlip lm
11 + Raise b midlip rm

Table B.3: Opposite AUs: AU16, AU25 and AU26 versus AU17

Finally, Table B.5 presents AUs related to stretching and tightening of the lips. AU20 is related to FAPs 6 and 7, oriented in a way that stretches the lips. Opposite to that are AUs 22 and 23, which are also related to FAPs 6 and 7, but oriented in a way that squashes the lips (tightening or funneling them).

AUs Combinations

Table B.6 shows a set of AU combinations, collected from different sources that work on facial expression recognition. It gives us a hint of how AUs can be combined to create different facial expressions.
On the other hand, Table B.7 shows a set of combinations that are not possible in human faces.

AU Description Facial Muscles FAP Description Picture


23 Lip Tightener Orbicularis oris 5+ raise b midlip
6- Stretch l cornerlip
7- Stretch r cornerlip o

24 Lip Pressor Orbicularis oris 4+ Lower t midlip


5+ Raise b midlip
8+ Lower t lip lm
9+ Lower t lip rm
10 + Raise b lip lm
11 + Raise b lip rm
25 Lip Part Depressor labii 5- Raise b midlip
inferioris OR
Relaxation of 10 - Raise b lip lm
mentalis OR
Orbicularis oris 11 - Raise b lip rm

26 Jaw Drops Masseter, 3+ Open jaw


relaxed Temporalis,
Internal Pterygoid

27 Mouth Stretch Pterygoids, 5- Raise b midlip


digastric 10 - Raise b lip lm
11 - Raise b lip rm
52 - Raise b midlip o
57 - Raise b lip lm o
58 - Raise b lip rm o

Table B.4: Opposite AUs: AU23 and AU24 versus AU25, AU26 and AU27

AU Description Facial Muscles FAP Description Picture


20 Lip Stretcher Risorius 6+ Stretch l cornerlip
Platysma 7+ Stretch r cornerlip
53 + Stretch l cornerlip o
54 + Stretch r cornerlip o
22 Lip Funneler Orbicularis oris 4- Lower t midlip
5- Raise b midlip
6- Stretch l cornerlip
7- Stretch r cornerlip o
8- Lower t lip lm
9- Lower t lip rm
10 - Raise b lip lm
11 - Raise b lip rm
23 Lip Tightener Orbicularis oris 5+ Raise b midlip
6- Stretch l cornerlip
7- Stretch r cornerlip o

Table B.5: Opposite AUs: AU20 versus AU22 and AU23

AUs Possible Meaning Source


1+2+5 Fear / Hot Anger (Novelty Sudden) [92]
9 + 10 + 15 + 35 Fear (Intrinsic Pleasant/Unpleasant) [92]
4+7 Fear (Expectation/Discrepant) / [92]
Hot Anger (Goal attainment/obstructive)
17 + 23 Fear (Goal attainment/obstructive) [92]
20 + 26 + 27 Fear (Power) [92]
4 + 7 + 10 + 17 + 24 Hot Anger (Control and power high) [92]
9 + 25 For analysis purposes, Donato et al. cited in [88]
10 + 25 they treated each combination as a new AU Donato et al. cited in [88]
16 + 25 Donato et al. cited in [88]
20 + 25 Lower Face Donato et al. cited in [88]
23 + 24 [88]
9 + 17 Lower Face [88]
9 + 25 Lower Face [88]
9 + 17 + 23 + 24 Lower Face [88]
10 + 17 Lower Face [88]
10 + 25 Lower Face [88]
15 + 17 Lower Face [88]
10 + 15 + 17 Lower Face [88]
12 + 25 Lower Face [88]
12 + 26 Lower Face [88]
17 + 23 + 24 Lower Face [88]
23 + 24 Lower Face [88]
1+6 Upper face [88]
4+5 Upper face [88]
6+7 Upper face [88]
1+2+4 Upper face [88]

Table B.6: AUs combinations

AUs Source
15 + 25 [146]
12 + 15 own observation
16 + 17 comparison using related FAPs
17 + 26 comparison using related FAPs
17 + 25 comparison using related FAPs
23 + 27 [146]
24 + 25 comparison using related FAPs
24 + 26 comparison using related FAPs
24 + 27 comparison using related FAPs
20 + 22 comparison using related FAPs
20 + 23 comparison using related FAPs

Table B.7: AUs that do not happen together

Appendix C

Emotion Values in the Activation-Evaluation Space

Table C.1: Emotion Values in the Activation-Evaluation Space

Emotion Activation Evaluation


Adventurous 4.3 5.9
Affectionate 4.7 5.4
Afraid 4.9 3.4
Aggressive 5.9 2.9
Agreeable 4.3 5.2
Amazed 5.9 5.5
Ambivalent 3.2 4.2
Amused 4.9 5.0
Angry 4.2 2.7
Annoyed 4.4 2.5
Antagonistic 5.3 2.5
Anticipatory 3.9 4.7
Anxious 6.0 2.3
Apathetic 3.0 4.3
Ashamed 3.2 2.3
Astonished 5.9 4.7
Attentive 5.3 4.3
Bashful 2.0 2.7
Bewildered 3.1 2.3
Bitter 6.6 4.0
Boastful 3.7 3.0
Bored 2.7 3.2
Calm 2.5 5.5

Table C.1 (continued)


Emotion Activation Evaluation
Cautious 3.3 4.9
Cheerful 5.2 5.0
Confused 4.8 3.0
Contemptuous 3.8 2.4
Content 4.8 5.5
Contrary 2.9 3.7
Cooperative 3.1 5.1
Critical 4.9 2.8
Curious 5.2 4.2
Daring 5.3 4.4
Defiant 4.4 2.8
Delighted 4.2 6.4
Demanding 5.3 4.0
Depressed 4.2 3.1
Despairing 4.1 2.0
Disagreeable 5.0 3.7
Disappointed 5.2 2.4
Discouraged 4.2 2.9
Disgusted 5.0 3.2
Disinterested 2.1 2.4
Dissatisfied 4.6 2.7
Distrustful 3.8 2.8
Eager 5.0 5.1
Ecstatic 5.2 5.5
Embarrassed 4.4 3.1
Empty 3.1 3.8
Enthusiastic 5.1 4.8
Envious 5.3 2.0
Furious 5.6 3.7
Gleeful 5.3 4.8
Gloomy 2.4 3.2
Greedy 4.9 3.4
Grouchy 4.4 2.9
Guilty 4.0 1.1
Happy 5.3 5.3
Helpless 3.5 2.8
Hopeful 4.7 5.2
Hopeless 4.0 3.1
Hostile 4.0 1.7
Impatient 3.4 3.2
Impulsive 3.1 4.8
Indecisive 3.4 2.7
Intolerant 3.1 2.7
Irritated 5.5 3.3

Table C.1 (continued)
Emotion Activation Evaluation
Jealous 6.1 3.4
Joyful 5.4 6.1
Loathful 3.5 2.9
Lonely 3.9 3.3
Meek 3.0 4.3
Nervous 5.9 3.1
Obedient 3.1 4.7
Obliging 2.7 3.0
Outraged 4.3 3.2
Panicky 5.4 3.6
Patient 3.3 3.8
Pensive 3.2 5.0
Pleased 5.3 5.1
Possessive 4.7 2.8
Proud 4.3 5.7
Puzzled 2.6 3.8
Quarrelsome 4.6 2.6
Rebellious 5.2 4.0
Rejected 5.0 2.9
Remorseful 3.1 2.2
Resentful 5.1 3.0
Sad 3.8 2.4
Sarcastic 4.8 2.7
Satisfied 4.1 4.9
Scornful 5.4 4.9
Self-controlled 4.4 5.5
Serene 4.3 4.9
Sociable 4.8 5.3
Sorrowful 4.5 3.1
Stubborn 4.9 3.1
Submissive 3.4 3.1
Surprised 6.5 5.2
Suspicious 6.5 5.2
Sympathetic 3.6 3.2
Terrified 6.3 3.4
Trusting 3.4 5.2
Unaffectionate 3.6 2.1
Unfriendly 4.3 1.6
Wondering 3.3 5.2
Worried 3.9 2.9

Appendix D

Publications and contributions

This appendix contains the articles that were presented and published in different conferences and journals, with the objective of presenting the partial results achieved during the realization of this thesis.
