Felicitous Computing Institute
[Figure: Valence–Arousal circumplex — Anger (high arousal, low valence), Joy (high arousal, high valence), Sadness (low arousal, low valence), Love (low arousal, high valence)]
Active Recognizers
Modalities: Handwriting, Speech, Gestures, Facial Features
- Users usually have to take part consciously
- Chances of suppressing emotive cues
- Chances of showing an inverted affective state
- Tend to be biased
Passive Recognizers
Modalities: Handwriting, Speech, Gestures, Facial Features
- Users do not need to take part actively
- Users have much less control over the results
- Less attention and control means less bias
Active → Passive Transition
- Over time, users tend to get familiarized
- The actions slowly get more passive
- The tendency to control decreases
- Faking of emotion decreases
Two other key factors for the transition:
1. Device invisibility
2. Sensor distance
Device Invisibility
Instead of a device the user must consciously operate, an invisible tracker/sensor sits in the background and does the work for the user (e.g. MoodScope).
Sensor Distance
Modes of Passive Recognizers
- Skin conductance
- Brain imagery
- Biochemical
- Heartbeat
- Eye & pupil
- Facial features
- Movement and gestures
Without Attachments
- No body attachments
- Large sensor distance
- Can be operated more passively
- Much more unconscious participation
- Bias can be minimized further
In many cases where:
1. the face is not visible,
2. there is no provision for attaching sensors to the body,
3. there is no speech input,
movement and gesture detection is a much more feasible way to detect affect.
An automated system that detects emotive states in such situations can even save lives.
HaiXiu
Microsoft Kinect is used for movement detection. Rather than discrete affective states, our target is to detect arousal and valence levels in a continuous space. This model of continuous affective-level detection can also be implemented with other continuous affective spaces, e.g. Plutchik's Emotion Wheel or the PAD model. Presently, HaiXiu detects only arousal levels; work is ongoing to include the valence level.
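Once both continuous levels are available, a point in the valence–arousal space maps onto the quadrants of the circumplex shown on the poster. A minimal sketch of that mapping, assuming both levels lie in [-1, +1]; the function name and thresholds are illustrative, not part of HaiXiu:

```python
# Hypothetical helper: map a continuous (valence, arousal) point to the
# nearest quadrant label of the poster's circumplex. Not HaiXiu code.

def quadrant_label(valence: float, arousal: float) -> str:
    """Both inputs are assumed to lie in [-1.0, +1.0]."""
    if arousal >= 0.0:
        # High-arousal half: Joy (positive valence) vs Anger (negative valence)
        return "Joy" if valence >= 0.0 else "Anger"
    # Low-arousal half: Love (positive valence) vs Sadness (negative valence)
    return "Love" if valence >= 0.0 else "Sadness"

print(quadrant_label(-0.4, 0.7))  # → Anger (high arousal, low valence)
```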
Detection
The ANN outputs one variable for the arousal level. The output ranges from -1 (totally relaxed) to +1 (very aroused).
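A bounded output like this is commonly obtained by squashing a single regression unit with tanh. A minimal sketch of such an output stage; the weights and topology here are illustrative assumptions, since the poster does not specify the actual HaiXiu network:

```python
import math

# Illustrative sketch (not the HaiXiu network): a single output unit whose
# activation is squashed with tanh, so the arousal estimate always falls in
# (-1, +1): -1 = totally relaxed, +1 = very aroused.

def arousal_output(hidden_activations, weights, bias=0.0):
    # Weighted sum of the last hidden layer, then tanh squashing.
    z = sum(h * w for h, w in zip(hidden_activations, weights)) + bias
    return math.tanh(z)

level = arousal_output([0.2, -0.5, 0.9], [0.7, 0.1, 0.4])
assert -1.0 <= level <= 1.0
```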
Challenges
1. Short working range of the Kinect: 0.8 m to 4.0 m
2. Shorter than the range needed in practical scenarios
3. Data not consistent enough for precise movement-feature calculation
4. Fault tolerance is needed for both recording and detection
5. The Kinect does not follow the BVH format, so available gesture databases in BVH cannot be used natively without a converter module (less efficiency)
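Challenge 1 suggests filtering frames before feature calculation. A hedged sketch, assuming joint depths are available per frame in metres; the frame format and function names are hypothetical, not HaiXiu's:

```python
# Hypothetical pre-filter for challenge 1: the Kinect's depth sensing is
# only reliable between roughly 0.8 m and 4.0 m, so frames with any joint
# outside that band are discarded before movement features are computed.

KINECT_MIN_M = 0.8
KINECT_MAX_M = 4.0

def frame_in_range(joint_depths_m):
    """Return True only if every tracked joint depth lies inside the band."""
    return all(KINECT_MIN_M <= d <= KINECT_MAX_M for d in joint_depths_m)

assert frame_in_range([1.2, 2.5, 3.9])      # all joints in the usable band
assert not frame_in_range([0.5, 2.0])       # subject too close to the sensor
```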
Next Step
1. Introducing the position coordinates
2. Fine-tuning the arousal-level recognizer
3. A robust gesture-recognition module
4. Building a valence-recognizer module
5. Getting more test data from a larger number of subjects
6. Multiple-Kinect integration for better recognition
1. Every mode of recognition has its merits
2. There is a plethora of existing facial-expression detectors, like Affectiva
3. Speech-based emotion recognition has also been done extensively
4. MoodScope has changed smartphone-based affect detection
5. Powerful tools like AmbientDynamix make it easy to integrate various sensor inputs for processing on small devices like a smartphone
Thank You