
Affective Computing

Affective computing aims at developing computers with understanding capabilities vastly beyond today's computer systems. Affective computing is computing that relates to, arises from, or deliberately influences emotion. It also involves giving machines skills of emotional intelligence: the ability to recognize and respond intelligently to emotion, the ability to appropriately express (or not express) emotion, and the ability to manage emotions. The latter ability involves handling both the emotions of others and the emotions within oneself.

Today, more than ever, the role of computers in interacting with people is important. Most computer users are not engineers and do not have the time or desire to learn, and stay up to date on, special skills for making use of a computer's assistance. The emotional abilities imparted to computers are intended to help address the problem of interacting with complex systems, leading to smoother interaction between the two. Emotional intelligence, that is, the ability to respond to one's own and others' emotions, is often viewed as more important than mathematical or other forms of intelligence. Equipping computer agents with such intelligence will be a keystone in the future of computer agents.

Emotions in people consist of a constellation of regulatory and biasing mechanisms, operating throughout the body and brain, modulating just about everything a person does. Emotion can affect the way you walk, talk, type, gesture, compose a sentence, or otherwise communicate. Thus, to infer a person's emotion, there are multiple signals you can sense and try to associate with an underlying affective state. Depending on which sensors are available (auditory, visual, textual, physiological, biochemical, etc.), one can look for different patterns of emotion's influence. The most active areas for machine emotion recognition have been automating facial expression recognition, vocal inflection recognition, and reasoning about emotion given text input about goals and actions. The signals are then processed using pattern recognition techniques such as hidden Markov models (HMMs), hidden decision trees, auto-regressive HMMs, support vector machines, and neural networks.
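To make the idea of mapping sensed signals to an affective state concrete, here is a minimal sketch using a nearest-centroid rule, one of the simplest pattern recognition techniques. The feature values and state centroids below are invented for illustration, not taken from any real dataset or from the systems described in this report.

```python
# Minimal sketch: classify an affective state from two sensed features
# (a GSR level and a heart rate) by finding the nearest state centroid.
# All numbers here are hypothetical.
import math

# Hypothetical per-state centroids: (mean GSR level, mean heart rate)
CENTROIDS = {
    "calm":   (2.0, 65.0),
    "stress": (8.0, 95.0),
    "joy":    (5.0, 80.0),
}

def classify(features):
    """Return the state whose centroid is nearest to the feature vector."""
    return min(CENTROIDS, key=lambda s: math.dist(features, CENTROIDS[s]))

print(classify((7.5, 92.0)))  # a high-arousal reading
```

A real system would use many more features and a trained model (an HMM or SVM, as the text notes), but the core step, sorting observations into labeled states, is the same.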

The response of such an affective system is also an important consideration. It could have a preset response to each user emotional state, or it could learn over time by trying out different strategies on the user to see which are most pleasing. Indeed, a core property of such learning systems is the ability to sense positive or negative affective feedback and incorporate it into the learning routine. A wide range of uses has been identified and implemented for such systems, ranging from systems that detect the stress level of car drivers to toys that sense the mood of a child and react accordingly.
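The learning-from-feedback idea above can be sketched as a simple value-tracking learner that keeps a running average of the affective feedback each response strategy has earned. The strategy names and reward values are hypothetical; a real system would also need an exploration policy.

```python
# Sketch of a system that learns which response strategy pleases the user,
# using positive/negative affective feedback (+1 / -1) as a reward signal.
class StrategyLearner:
    def __init__(self, strategies):
        self.values = {s: 0.0 for s in strategies}
        self.counts = {s: 0 for s in strategies}

    def choose(self):
        # Greedy choice; a real system would also explore occasionally.
        return max(self.values, key=self.values.get)

    def feedback(self, strategy, reward):
        # Incremental mean of observed affective feedback.
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (reward - self.values[strategy]) / n

learner = StrategyLearner(["humor", "apology", "silence"])
learner.feedback("humor", +1.0)
learner.feedback("apology", -1.0)
print(learner.choose())
```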

In the following report I will be exploring the different aspects of affective computing and the current research in the field.


A general system with users who have affect or emotion and the surrounding world can be represented by the following sketch. Each of the aspects shown in the figure is explained in brief afterwards.

The most important component of the system is the emotive user: any user or being who has emotions and whose actions and decisions are influenced by them. He forms the core of any affective system. He can communicate this affect to other humans, to computers, and to himself. Human-to-human affect communication is a widely studied branch of psychology and one of the foundational subjects explored when affective computing was first considered.


In this paper I will mainly be dealing with affective communication with computers and the stages involved in such communication. In general terms, a human user displays an emotion. This emotion is sensed by one of the interfaces to the affective application, which might be a wearable computer or any other device designed for inputting affective signals. A pattern recognition algorithm is then applied to recognize the affective state of the user. The affective state is now understood and modeled. This information is then passed to an affective application or an affective computer, which uses it to communicate back with the emotive user.

Research is also ongoing into synthesizing affect in computers, which will add a further dimension of originality to human-computer interaction. Each of the dimensions of affective interaction is discussed below.

To reproduce or to sense emotions in humans one has to have a basic understanding of human emotions. Research psychologists have been studying emotion for a long time; in fact it is one of the oldest areas of research. Several classic theories of emotion exist:

James-Lange Theory
According to this theory, actions precede emotions and the brain interprets said actions as emotions. A situation occurs and the brain interprets the situation, causing a characteristic physiological response. This may include any or all of the following: perspiration, heart rate elevation, facial and gestural expression. These reflexive responses occur before the person is aware that he is experiencing an emotion; only when the brain cognitively assesses the physiology is it labeled as an "emotion".

Cannon-Bard Theory
Cannon and Bard opposed the James-Lange theory by stating that the emotion is felt first, and actions follow from cognitive appraisal. In their view, the thalamus and amygdala play a central role, interpreting an emotion-provoking situation and simultaneously sending signals to the ANS (autonomic nervous system) and to the cerebral cortex, which interprets the situation cognitively.

Schachter-Singer Theory
Schachter and Singer agreed with James and Lange that the experience of emotions arises from the cognitive labeling of physiological sensation. However, they also believed that this was not enough to explain the subtler differences in emotion self-perception, e.g. the difference between anger and fear. Thus, they proposed that an individual gains information from the immediate situation (e.g. a danger is nearby) and uses it to qualitatively label the sensation.

There are many more theories than these, with new ones being proposed and refined continually. Current thinking is that emotion involves a dynamic state consisting of both cognitive and physical events. Emotions are commonly classified into two types.


Primary emotions
Human brains have many components that are evolutionarily old. Some are responsible for animal emotions, e.g. being startled, frozen with terror, sexually aroused, or nauseated. Information from perceptual systems fed to a fast pattern recognition mechanism can rapidly trigger massive global changes. Such mechanisms apparently include the brain stem and the limbic system. Engineers will appreciate the need for fast-acting, pattern-based global alarm mechanisms to ensure that an agent reacts appropriately to important risks and opportunities (Sloman 1998). Damasio (1994) calls these primary emotions.

These products of our evolutionary history are still often useful. Because they involve physiological reactions relevant to attacking, fleeing, freezing, and so on, sensors measuring physiological changes (including posture and facial expression) can detect such primary emotions.

Secondary emotions
Primary emotions may be less important for civilized social animals than certain semantically rich affective states generated by cognitive processes involving appraisal of perceived or imagined situations. Damasio refers to these as secondary emotions. They can arise only in an architecture with mechanisms for processes such as envisaging, recalling, planning and reasoning. Patterns in such processes can trigger learnt or innate associations in the alarm system, which cause rapid automatic evaluations to be performed.


Possible effects include:

1. Reactions in the primary emotion system, including physiological changes, e.g. muscular tension, weeping, flushing, and smiling. These can produce a characteristic feel, e.g. a flush of embarrassment or growing tension. (Try imagining a surgical operation on your eyeball.)

2. Rapid involuntary redirection of thought processes.

It is not always appreciated that effects of type (2) can occur without effects of type (1).

Two types of secondary emotions

Damasio conjectures that triggering by thought contents depends on somatic markers, which link patterns of thought contents with previously experienced pleasures, pains or other strong feelings. Such triggering enables secondary emotions to play an important role by directing and redirecting attention in dealing with complex decisions. It is also believed that secondary emotions always trigger primary mechanisms, producing sentic modulation. However, two subclasses of secondary emotions exist: central and peripheral secondary emotions.

Central secondary emotions

These involve involuntary redirection of ongoing cognitive processes like planning, reasoning, reminiscing, self-monitoring, etc. Such shifts of attention can occur entirely at the cognitive level without involving sentic modulation. An example might be guilt, which involves negative assessment of one's own motives, decisions or thoughts, and can produce thoughts about whether detection will occur, whether to confess, likely punishment, how to atone, how to avoid detection, etc. Other emotions (infatuation, anxiety, etc.) will have different effects on attention.

Peripheral secondary emotions

These occur when cognitive processes trigger states like primary emotions, without any disposition to redirect thought processes (e.g. the shudder produced by imagining scraping one's fingernails on a blackboard). A hybrid secondary emotion could involve a mixture of both types, e.g. guilt or embarrassment accompanied by sensed bodily changes.

Emotions of the first type are often important to novelists, playwrights, poets and garden-fence gossips. There need not be any overt expression, but when there is, it will typically be some sort of verbal utterance or intentional action. People usually do not label their emotions: like other animals and young children, even human adults may lack the sophistication to recognize and classify their own mental states. Rather, a central secondary emotion can be expressed involuntarily in choice of words, or in an extended thought or behavior pattern such as frequently returning to a theme, or always expressing disapproval of a certain person. Subtle patterns expressing anger, jealousy, pride, or infatuation may be clearly visible to others long before the subject notices the emotional state.



Affective communication is communicating with someone (or something) either with or about affect. A crying child, and a parent comforting that child, are both engaged in affective communication. An angry customer complaining to a customer service representative, and that representative trying to clear up the problem, are also both engaged in affective communication. We communicate through affective channels naturally every day. Indeed, most of us are experts in expressing, recognizing and dealing with emotions. However, affective communication that involves computers represents a vast but largely untapped research area. What role can computers play in affective communication? How can they assist us in putting emotional channels back into "lossy" communications technologies, such as email and online chat, where the emotional content is lost? How can computer technology support us in getting to know our own bodies, and our own emotions? What role, if any, can computers play in helping manage frustration, especially frustration that arises from using technology?

Researchers are beginning to investigate several key aspects of affective communication as it relates to computers. Affective communication may involve giving computers the ability to recognize emotional expressions as a step toward interpreting what the user might be feeling. However, the focus in this area is on communication that involves emotional expression. Expressions of emotion can be communicated to others without an intermediate step of recognition; they can simply be "transduced" into a form that can be digitally transmitted and re-presented at another location. Several devices are being investigated for facilitating this, under the name of Affective Mediation: using computers to help communicate emotions to other people through various media.

Affective Mediation
Technology supporting machine-mediated communication continues to grow and improve, but much of it still remains impoverished with respect to emotional expression. While much of the current research in the Affective Computing group focuses on sensing and understanding the emotional state of the user or on the development of affective interfaces, research in Affective Mediation explores ways to increase the "affective bandwidth" of computer-mediated communication through the use of graphical visualization: the representation of emotional information in an easy-to-understand, computer graphics format. Currently the focus is on physiological information, but it may also include behavioral information (such as whether someone is typing louder or faster than usual). Building on traditional representations of physiological signals, namely continuously updating line graphs, one approach is to represent the user's physiology in three-dimensional, real-time computer graphics, and to provide unique, innovative, and unobtrusive ways to collect the data. This research focuses on using displays and devices in ways that will help humans to communicate both with themselves and with one another in affect-enhanced ways.

Human-to-Human Communication
From email to full-body videoconferencing, virtual communication is growing rapidly in availability and complexity. Although this richness of communication options improves our ability to converse with others who are far away or not available at the precise moment that we are, the sense that something is missing continues to plague users of current methodologies. Affective Communication seeks to provide new devices and tools for supplementing person-to-person communications media. Specifically, through the use of graphical displays viewable by any or all members of a mediated conversation, researchers hope to provide an augmented experience of affective expression, one which supplements but also challenges traditional computer-mediated communication.

Human-to-Self (Reflexive) Communication

Digitized representation of affective responses creates new possibilities for our relationship to our own bodies and affective response patterns. Affective communication with oneself (reflexive communication) explores the exciting possibilities of giving people access to their own physiological patterns in ways previously unavailable, or available only to medical and research personnel with special, complex, or expensive equipment. The graphical approach creates new technologies with the express goal of allowing the user to gain information and insight about his or her own responses.

Computer expression of emotion

This work represents a controversial area of human-computer interaction, in part because attributing emotions and emotional understanding to machines has been identified as a philosophical problem: what does it mean for a machine to express emotions that it doesn't feel? What does it mean for humans to feel "empathized with" by machines that are simply unable to really "feel" what a person is going through? Currently, few computer systems have been designed specifically to interact on an emotional level. Eliza, Clark Elliot's Affective Reasoner, and interactive pets such as the Tamagotchi are all examples of systems built for affective human-computer communication. When the Tamagotchi "cries" for attention, this is an example of computer expression of emotion. Another example is the smile that Macintosh users are greeted with, indicating that "all is well" with the boot disk. If there is a problem with the boot disk, the machine displays the "sad Mac".

Humans are experts at interpreting facial expressions and tones of voice, and at making accurate inferences about others' internal states from these clues. Controversy rages over anthropomorphism: should researchers leverage this expertise in the service of computer interface design, given that attributing human characteristics to machines often means setting unrealistic and unfulfillable expectations about the machine's capabilities? Show a human face, and users may expect human capabilities that far outstrip the machine's.

Yet the fact remains that faces have been used effectively in media to represent a wide variety of internal states. And with careful design, researchers regard emotional expression via face and sound as a potentially effective means of communicating a wide array of information to computer users. As systems become more capable of emotional communication with users, researchers see systems needing more and more sophisticated emotionally-expressive capability.



Sensors are an important part of an affective computing system because they provide information about the wearer's physical state or behavior, and they can gather data continuously without having to interrupt the user. Many types of sensors are being developed to detect different types of emotions. A full system of sensors is described below along with some sample data. The system is UNIX-based and includes a Galvanic Skin Response (GSR) sensor, a Blood Volume Pulse (BVP) sensor, a respiration sensor and an electromyogram (EMG). This system was adopted for the following reasons: the sensors are available through a commercial manufacturer; the system is lightweight, portable and relatively robust to changes in user position; and the system resides entirely on the user, which helps maintain the privacy of the wearer's data.


The sensors all attach to the Thought Technology ProComp Encoder Unit, a device that receives the signals and translates them into digital form. The output from the ProComp unit then ports to a computer for processing and recognition.

The Galvanic Skin Response (GSR) Sensor

Galvanic Skin Response is a measure of the skin's conductance between two electrodes. Electrodes are small metal plates that apply a safe, imperceptibly tiny voltage across the skin. They are typically attached to the subject's fingers or toes using electrode cuffs (as shown on the left electrode in the diagram) or to any part of the body using a silver-chloride electrode patch such as that shown on the EMG. To measure the conductance, a small voltage is applied to the skin and the current conducted through the skin is measured.

Skin conductance is considered to be a function of the sweat gland activity and the skin's pore size. An individual's baseline skin conductance will vary for many reasons, including gender, diet, skin type and situation. Sweat gland activity is controlled in part by the sympathetic nervous system. When a subject is startled or experiences anxiety, there will be a fast increase in the skin's conductance (a period of seconds) due to increased activity in the sweat glands (unless the glands are saturated with sweat.)

After a startle, the skin's conductance will decrease naturally due to reabsorption. There is a saturation to the effect: when the duct of the sweat gland fills, there is no longer a possibility of further increasing skin conductance, and excess sweat pours out of the duct. Sweat gland activity increases the skin's capacity to conduct the current passing through it, and changes in the skin conductance reflect changes in the level of arousal in the sympathetic nervous system.
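The measurement and the startle response described above can be sketched in a few lines: conductance is the measured current divided by the applied voltage (the reciprocal of resistance), and a startle shows up as a fast rise in the conductance trace. The sample values and the rise threshold below are illustrative, not calibrated.

```python
# Sketch: skin conductance from applied voltage and measured current,
# plus a crude startle detector that flags fast rises in conductance.
def conductance(current_amps, voltage_volts):
    return current_amps / voltage_volts  # siemens

def detect_startle(samples, rise_threshold=0.5):
    """Flag indices where conductance jumps sharply between samples."""
    return [i for i in range(1, len(samples))
            if samples[i] - samples[i - 1] > rise_threshold]

gsr = [2.0, 2.1, 2.1, 3.5, 3.4, 3.2]  # microsiemens, hypothetical trace
print(detect_startle(gsr))  # → [3]
```

A real detector would also model the slow decay due to reabsorption and the saturation effect the text mentions.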

A graph of Galvanic Skin Response (GSR) skin conductance over a 27-minute period during an experiment. Increased GSR indicates heightened sympathetic nervous system arousal.

The Blood Volume Pulse Sensor

The Blood Volume Pulse sensor uses photoplethysmography to detect the blood volume pulse in the extremities. Photoplethysmography is a process of applying a light source and measuring the light reflected by the skin. At each contraction of the heart, blood is forced through the peripheral vessels, producing engorgement of the vessels under the light source, thereby modifying the amount of light reaching the photosensor. The resulting pressure waveform is recorded.
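One simple thing a system can compute from a BVP waveform is heart rate, by counting pulse peaks over time. The sketch below uses a synthetic waveform and a naive threshold-based peak finder; real BVP data would need filtering and a more robust detector.

```python
# Sketch: estimate heart rate from a BVP waveform by counting local
# maxima above a threshold. The waveform here is synthetic.
def find_peaks(signal, threshold=0.5):
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] > signal[i - 1]
            and signal[i] >= signal[i + 1]]

def heart_rate_bpm(signal, sample_rate_hz):
    beats = len(find_peaks(signal))
    duration_min = len(signal) / sample_rate_hz / 60.0
    return beats / duration_min

# Synthetic 3-beat pulse over 3 seconds at 10 Hz (about 60 bpm)
bvp = [0, 0.2, 1.0, 0.3, 0, 0, 0.1, 0, 0, 0,
       0, 0.2, 1.0, 0.3, 0, 0, 0.1, 0, 0, 0,
       0, 0.2, 1.0, 0.3, 0, 0, 0.1, 0, 0, 0]
print(round(heart_rate_bpm(bvp, 10)))
```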


A graph of Blood Volume Pulse (BVP) sensor output showing a waveform of human heart beats, measured over a 27-minute period during an experiment. Note how the envelope (the overall shape of the waveform) of the BVP changes as the subject is startled, at around the 16-18 minute mark.

Three typical heart beats, as measured in the fingertips by the Blood Volume Pulse sensor.

Since vasomotor activity (activity which controls the size of the blood vessels) is controlled by the sympathetic nervous system, the BVP measurements can display changes in sympathetic arousal. An increase in the BVP amplitude indicates decreased sympathetic arousal and greater blood flow to the fingertips.

The Respiration Sensor

The respiration sensor can be placed either over the sternum for thoracic monitoring or over the diaphragm for diaphragmatic monitoring. In all experiments so far we have used diaphragmatic monitoring. The sensor consists mainly of a large Velcro belt that extends around the chest cavity and a small elastic that stretches as the subject's chest cavity expands. The amount of stretch in the elastic is measured as a voltage change and recorded. From the waveform, the depth of the subject's breath and the subject's rate of respiration can be learned.

A typical respiration waveform reading over approximately 27 minutes.


The Electromyogram (EMG) Sensor

The electromyographic sensor measures the electromyographic activity of a muscle (the electrical activity produced by a muscle when it is contracted), amplifies the signal and sends it to the encoder, where a band-pass filter is applied. For all our experiments, the sensor has used the 0-400 microvolt range and the 20-500 Hz filter, which are the most commonly used settings.

An electromyogram of jaw clenching during an experiment
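A common processing step after the encoder's band-pass filtering is to rectify the EMG signal and smooth it into an amplitude envelope, so that muscle contractions (like the jaw clench shown) appear as sustained rises. The values below are synthetic and the moving-average window is an arbitrary choice for illustration.

```python
# Sketch: a rectify-and-smooth envelope for an EMG signal.
def emg_envelope(signal, window=4):
    rectified = [abs(s) for s in signal]
    env = []
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1):i + 1]
        env.append(sum(chunk) / len(chunk))  # trailing moving average
    return env

# Low-amplitude rest, then a burst of activity (a "clench"), then rest
emg = [0.1, -0.2, 0.1, -0.1, 3.0, -2.5, 2.8, -2.7, 0.2, -0.1]
env = emg_envelope(emg)
print(env[6] > env[1])  # envelope rises during the contraction
```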


The research work mainly involves efforts to understand the correlation between emotion and its primarily behavioral and physiological expressions, which can potentially be identified by a computer. Because we can measure physical events but cannot read a person's thoughts, research in recognizing emotion is limited to correlates of emotional expression that can be sensed by a computer, including such things as physiology, behavior, and even word selection when talking. Emotion modulates not just memory retrieval and decision-making (things that are hard for a computer to know), but also many sense-able actions such as the way you pick up a pencil or bang on a mouse (things a computer can begin to observe).

In assessing a user's emotion, one can also measure an individual's self-report of how they are feeling. However, self-report is often considered unreliable, since it varies with thoughts and situations such as "what is it appropriate to say I feel in the office?" and "how do I describe how I feel now anyhow?" Many people have difficulty recognizing and/or verbally expressing their emotions, especially when there is a mix of emotions or when the emotions are nondescript. In many situations it is also inappropriate to interrupt the user for a self-report. Nonetheless, researchers think it is important that if a user wants to tell a system verbally about their feelings, the system should facilitate this. Researchers are interested in emotional expression through verbal as well as non-verbal means: not just how something is said, but how word choice might reveal an underlying affective state.

Our focus begins by looking at physiological correlates, measured both during lab situations designed to arouse and elicit emotional response, and during ordinary (non-lab) situations, the latter via affective wearable computing.

Our first efforts toward affect recognition have focused on detecting patterns in the physiological data we receive from sensing devices. To this end, we are designing and conducting experiments to induce particular affective responses. One of our primary goals is to determine which signals are related to which emotional states; in other words, how to find the link between the user's emotional state and its corresponding physiological state. We hope to use, and build upon, some of the work done by others on coupling physiological information with affective states. Current efforts that use physiological sensing are focusing on:

GSR (Galvanic Skin Response), EKG (Electrocardiogram), EMG (Electromyogram), BVP (Blood Volume Pulse), Respiration, and Temperature.

Consider the case where a subject is given the following three tasks to perform:

Math (M): The subject performs simple arithmetic problems.
Verbal (V): The subject reads aloud a string of nonsense letters.
Startle (S): The subject listens to a 6-minute tape with recorded rings separated by intervals of 2 minutes each.

The following figure shows the GSR response of a test subject. Each of the tasks described above is followed by a period of rest before the subject is engaged in the next task.


Graph of Galvanic Skin Response (GSR).
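A session like this one can be summarized by comparing mean GSR level across the task segments, to see which task was most arousing. The readings and segment boundaries below are made up for illustration; they are not the actual experimental data behind the figure.

```python
# Sketch: compare mean GSR across the Math, Verbal and Startle segments
# of a session. All values are hypothetical.
def mean(xs):
    return sum(xs) / len(xs)

session = {
    "Math":    [4.2, 4.8, 5.1, 5.0],
    "Verbal":  [3.1, 3.0, 3.2, 3.1],
    "Startle": [6.0, 7.5, 7.9, 7.2],
}
most_arousing = max(session, key=lambda task: mean(session[task]))
print(most_arousing)
```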

Modeling Affect
There is no definitive model of emotions. Psychologists have been debating for years how to define them. The pattern recognition problem consists of sorting observed data into a set of states (classes or categories), which correspond to several distinct (but possibly overlapping, or "fuzzy") emotional states. Which tools are most suitable to accomplish this depends on the nature of the signals observed. One particular way we can model affective states is as a set of discrete states with defining characteristics that the user can transition to and from. The following diagram illustrates the idea:


A diagram of a Markov model for affective states.

Each emotional state in the diagram is defined by a set of features. Features may be just about anything we can measure or compute, e.g. the rise time of a response or the frequency range of a peak interval. Therefore, an important part of the pattern recognition process consists of identifying functions of these features that differentiate one state from another. Each state in this model is integrated into a larger scheme, which includes other affective states the user can move to and from. The transitions in this (Markov) model are defined by transition probabilities. For instance, if we believe that a user in the affective state labeled "Anger" is more likely to make a transition to a state of "Rage" than to a state of "Sadness", we need to adjust the conditional probabilities to reflect that. A model like this one is trained on observations of suitable sentic signals (physiological or other signals through which affective content is manifested) to characterize each state and estimate the transition probabilities.

Furthermore, the modeling of affective states can be adapted to reflect a particular user's affective map. Therefore, the notion of systems that learn from user interaction can be imported into the affective pattern recognition problem to develop robust systems. The following diagram offers an overview of the stages of the recognition process.

A diagram of the affect pattern recognition module.

Most current research is directed at the feature extraction stage, to determine which affective signal features are the relevant ones to submit to the learner. Since the learner operates on the extracted features and not on the raw signals, it is possible, for instance, to take learners developed for the visual domain and apply them to the processing of affective signals.
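As a concrete example of a feature-extraction function, here is a sketch of the "rise time of a response" feature mentioned above, computed as the time taken to climb from 10% to 90% of the peak value. The signal and the 10/90 convention are illustrative choices.

```python
# Sketch of one feature-extraction function: rise time of a response peak.
def rise_time(signal, sample_rate_hz):
    """Samples from 10% to 90% of the peak value, converted to seconds."""
    peak = max(signal)
    lo = next(i for i, s in enumerate(signal) if s >= 0.1 * peak)
    hi = next(i for i, s in enumerate(signal) if s >= 0.9 * peak)
    return (hi - lo) / sample_rate_hz

sig = [0.0, 0.5, 1.5, 3.0, 4.5, 5.0, 4.0, 2.0, 1.0]  # a single response
print(rise_time(sig, 10))  # seconds
```

A feature vector for the learner would stack many such numbers (rise times, amplitudes, frequency-band energies) extracted from each sensed signal.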



Once the Sensing and Recognition modules have made their best attempt to translate user signals into patterns that signify the user's emotional responses, the system may be said to be primitively aware of the user's immediate emotional state. But what can be done with this information? How will applications be able to make sense of this moment-to-moment update on the user's emotional state, and make use of it? The Affective Understanding module will use, process, and store this information to build and maintain a model of the user's emotional life at different levels of granularity, from quick, specific combinations of affective responses to meta-patterns of moods and other emotional responses. This module will communicate knowledge from this model to the other modules in the system.

The Affective Understanding module will eventually be able to incorporate contextual information about the user and his/her environment, to generate appropriate responses to the user that incorporate the user's emotional state, the user's cognitive abilities, and his/her environmental situation. The Affective Understanding module may:

Absorb information, by receiving a constant data stream on the user's current emotional state from the Recognition module.

Remember the information, by keeping track of the user's emotional responses via storage in short, medium, and long-term memory buffers.

Model the user's current mood, by detecting meta-patterns in the user's emotional responses over time, comparing these patterns to the user's previously defined moods, and possibly to canonical, universal or archetypal definitions of human moods previously modeled.

Model the user's emotional life. Recognize patterns in the way the user's emotional states may change over time, to generate a model of the user's emotional states: patterns in the typical types of emotional state that the user experiences, mood variation, degrees of valence (e.g. mildly put off vs. enraged), and pattern combinations (e.g. a tendency toward anger followed by depression).
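One simple stand-in for the mood meta-pattern tracking described here is an exponentially weighted average over the valence of recent emotional states: short bursts barely move the mood estimate, while a sustained run of negative states drags it down. The valence scale (-1 to +1), smoothing factor, and labels are all hypothetical.

```python
# Sketch: track mood as a smoothed average of recent emotional-state
# valences, a crude model of meta-patterns over time.
class MoodModel:
    def __init__(self, smoothing=0.2):
        self.smoothing = smoothing
        self.mood = 0.0  # start neutral

    def observe(self, valence):
        # Exponential moving average of valence in [-1, +1].
        self.mood += self.smoothing * (valence - self.mood)
        return self.mood

    def label(self):
        if self.mood > 0.3:
            return "positive"
        if self.mood < -0.3:
            return "negative"
        return "neutral"

model = MoodModel()
for v in [-0.8, -0.9, -0.7, -0.8, -0.9]:  # a run of negative states
    model.observe(v)
print(model.label())
```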

Apply the user affect model. This model may help the Affective Understanding module by informing the actions that the module decides to take: actions that use the Applications and Interface modules to customize the interaction between user and system, to predict user responses to system behavior, and eventually to make predictions about the user's interaction with environmental stimuli. The Affective Understanding module's actions may take the form of selecting an application to open, for example an application which might interactively query the user regarding an emotional state. Or the module might supply such an application with needed information, for instance enabling the application to offer the user assistance that the user might want or need. Alternatively, the Affective Understanding module might automatically respond with a pre-user-approved action for the given situation, such as playing uplifting music when the user is feeling depressed. The Affective Understanding module might even open an application that can carry on a therapeutically motivated conversation with its user via a discreet speech interface, perhaps discussing the user's feelings with the user after sensing that the user is alone and extremely upset.

Update the user affect model. This model must be inherently dynamic in order to reflect the user's changing response patterns over time. To this end, the system will be sensitive to changes in the user's meta-patterns as it begins to receive new kinds of data from the Recognition module. Similarly, the Understanding module's learning agents will receive feedback from both the Application and Interface modules that will inform changes to the user model. This feedback may consist of indications of the user's level of satisfaction, that is, whether the user liked or disliked the system's behavior. It may come either directly from the user via the interface, or indirectly by way of inference from how an application was used (e.g. the way that application X was used and then terminated may indicate that the user was frustrated with it). These user responses will help to modify the Understanding module's model of the user and, therefore, the recommendations for system behavior that the Understanding module makes to the rest of the system.
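A minimal sketch of such a feedback-driven update, assuming a simple exponential blending rule and +1/-1 satisfaction signals (both are my assumptions, not the system described):

```python
def update_preference(current, feedback, rate=0.2):
    """Blend a new satisfaction signal (+1 liked, -1 disliked) into the
    stored preference score, so the model tracks changing responses."""
    return (1 - rate) * current + rate * feedback

# Feedback may be direct (user says so) or inferred from app usage
score = 0.0
for fb in [+1, +1, -1]:
    score = update_preference(score, fb)
```

The learning rate controls how quickly old behavior is forgotten in favor of the user's most recent responses.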

Maintain a taxonomy of the user's explicitly stated preferences, for use in specific circumstances when interacting with the user: for example, instructions not to attempt to communicate with the user while she/he is extremely agitated, or requests for specific applications during certain moods, e.g. "Start playing melancholy music when I've been depressed for x number of hours, and then start playing upbeat music after this duration." This taxonomy may eventually be incorporated into the user model; ultimately, however, the user's explicit wishes should be able to override any modeled preference.

Feature two-way communication with the system's Recognition module. Not only will the Recognition module constantly send updates to the Understanding module, but the Understanding module will also send messages to the Recognition module. These messages may include alerting the Recognition module to "look out" for subsequent emotional responses that the Understanding module's model of the user's meta-patterns predicts. Other kinds of Understanding-to-Recognition messages may include assisting the Recognition module in fine-tuning its recognition engine by suggesting new combinations of affect response patterns that the user seems to be displaying. These novel combinations may in turn reveal patterns in the user's affect that would be beyond the scope of the Recognition engine to find on its own.

Eventually build and maintain a more complete model of the user's behavior. The more accurately the user's cognitive abilities and processes can be modeled, the better the system will be at predicting the user's behavior and providing accurate information to the other modules within the Affective Computing system.

Eventually model the user's context. The more information the system has about the user's outside environment, the more effective the interaction, and the greater the benefit to the user. A system that knows the user is in a conversation with someone else may not wish to interrupt to discuss the user's current affective response. Similarly, a system that can tell that the user has not slept in several days, is ill, starving, or under deadline pressure will be able to communicate with much more sensitivity to the user.

Provide a basis for the generation of synthetic system affect. A system that can display emotional responses of its own is a vast, distinct area of research. The Affective Understanding module described here may be able to inform the design of such systems, and, once built, such a system could be integrated into the Affective Understanding module to great effect. For example, a system able to display authentic empathy in its interaction with the user might prove even more effective in an Active Listening application than a system that shows only artificial empathy (behavior that looks like empathy to the user, though the machine doesn't really feel anything).

Ensure confidentiality and security. The Understanding module will build and maintain a working model and record of the user's emotional life; eventually, this model may also record other salient, contextual aspects of the user's life. Therefore, perhaps more so than any other part of the affective computing system, the Understanding module will house information that must be kept confidential.

The next step is synthesizing emotions in machines: building machines that not only appear to "have" emotions, but actually do have internal mechanisms analogous to human or animal emotions. In a machine (or software agent, or virtual creature) that "has" emotions, the synthesis model decides which emotional state the machine (or agent or creature) should be in. That emotional state is then used to influence subsequent behavior.

Some forms of synthesis act by reasoning about emotion generation. For example: if a person has a big exam tomorrow, and has encountered several delays today, then he/she might feel stressed and particularly intolerant of certain behaviors, such as interruptions not related to helping prepare for the exam. Such a synthesis model can reason about circumstances (exam, delays) and suggest which emotions are likely to be present (stress, annoyance).
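This kind of rule-based synthesis might be sketched as follows; the rule contents and condition names are purely illustrative:

```python
# Each rule maps a set of observed circumstances to the emotions
# they tend to generate (rule contents are invented for illustration).
RULES = [
    ({"big_exam_tomorrow", "several_delays_today"}, {"stress", "annoyance"}),
    ({"goal_achieved"}, {"joy"}),
]

def synthesize(circumstances):
    """Reason from observed circumstances to the emotions likely present."""
    emotions = set()
    for conditions, produced in RULES:
        if conditions <= circumstances:  # all conditions satisfied
            emotions |= produced
    return emotions

print(sorted(synthesize({"big_exam_tomorrow", "several_delays_today"})))
# ['annoyance', 'stress']
```

A real synthesis model would of course use far richer representations of goals and events, but the reasoning pattern is the same.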


The ability to synthesize emotions by reasoning about them, i.e. to know that certain conditions tend to produce certain affective states, is also important for emotion recognition. Recognition is often considered the "analysis" part of modeling something: analyzing what emotion is present. Synthesis is the inverse of analysis: constructing the emotion. The two can operate in a system of checks and balances: recognition can proceed by synthesizing several possible cases, then asking which case most closely resembles what is perceived. This approach to recognition is sometimes called "analysis by synthesis."
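A toy sketch of analysis by synthesis, assuming hypothetical two-dimensional signal profiles (say, vocal pitch and energy, scaled 0 to 1) synthesized for each candidate emotion:

```python
import math

# Illustrative synthesized profiles: (vocal pitch, energy), both 0-1
CANDIDATES = {
    "calm":  (0.2, 0.1),
    "anger": (0.9, 0.8),
    "joy":   (0.7, 0.3),
}

def recognize(observed):
    """Analysis by synthesis: synthesize each candidate emotion's expected
    signal profile, then return the candidate closest to what is perceived."""
    return min(CANDIDATES, key=lambda e: math.dist(CANDIDATES[e], observed))

print(recognize((0.85, 0.75)))  # anger
```

In practice the "synthesis" step would use a generative model rather than fixed profiles, but the compare-and-choose structure is the same.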

Synthesis models can also operate without explicit reasoning. Researchers are exploring the need for machines to "have" emotions in a bodily sense. The importance of this follows from the work of Damasio and others who have studied patients who essentially do not have "enough emotions" and consequently suffer from impaired rational decision making. The nature of their impairment is oddly similar to that of today's Boolean decision-making machines, and of AI's brittle expert systems. Recent findings suggest that in humans, emotions are essential for flexible and rational decision-making. Our hypothesis is that emotional mechanisms will be essential for machines to have flexible and rational decision making, as well as truly creative thought and a variety of other human-like cognitive capabilities.
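To illustrate the idea that affect can unblock decision-making (a loose sketch inspired by the Damasio findings above, not a model from the literature), an emotion-derived bias can break ties that a purely "cold" utility comparison cannot:

```python
def decide(options, utilities, affect_bias):
    """Add a small affect-derived bias (a 'gut feeling' learned from past
    outcomes) to cold utility, so near-ties do not stall the decision."""
    return max(options, key=lambda o: utilities[o] + affect_bias.get(o, 0.0))

options = ["deck_a", "deck_b"]
utilities = {"deck_a": 0.5, "deck_b": 0.5}   # rationally indistinguishable
bias = {"deck_a": -0.1, "deck_b": 0.05}      # past losses vs. past wins
print(decide(options, utilities, bias))  # deck_b
```

The option names and bias values here are invented; the point is only that a valenced signal resolves what Boolean reasoning leaves undecided.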


Once we begin to explore applications for affective systems, interface design challenges and novel strategies for human-computer interaction immediately suggest themselves. These design challenges concern both hardware and software. In terms of software, the human interface to applications can change with the increased sensitivity that the affective sensing/recognizing/understanding system will bring to the interaction. In terms of hardware, the design challenges present themselves even more immediately.

For example, various bio-sensors and other devices such as pressure sensors may be used as inputs to an affective computing system, perhaps by placing them in mice, keyboards, chairs, jewelry, or clothing, things a user is naturally in physical contact with. Sensing may also be done without contact, via cameras, microphones, or other remote sensors. How will these sensors fit into the user's daily life? Will they be embedded in the user's environment, or will they be part of one's personal belongings, perhaps part of a personal wearable computer system? In the latter case, how can we design these systems so that they are unobtrusive to the user and/or invisible to others? In either case, the user's privacy and other needs must be addressed. The design of the interface is thus critically important, since the interface will determine the system's utility more than its actual complexity will.

The most promising interfaces are wearable ones. Wearable computers are entire systems carried by the user, from the CPU and hard drive to the power supply and all input/output devices. The size and weight of these wearable hardware systems are dropping, even as their durability increases. Researchers are also designing clothing and accessories (such as watches, jewelry, etc.) into which these devices may be embedded, making them not only unobtrusive and comfortable to the user but also invisible to others.


Wearable computers allow researchers to create systems that go where the user goes, whether at the office, at home, or in line at the bank. More importantly, they provide a platform that can maintain constant contact with the user in the variety of ways the system may require; they provide computing power for all affective computing needs, from affect sensing to the applications that interpret, understand, and use the data; and they can store the applications and user input data in on-board memory. Finally, such systems can link to personal computers and to the Internet, providing the same versatility of communications and applications as most desktop computers.

A prototype affective computing system currently being developed at MIT, which uses a modified "Lizzy" wearable, is described below. Researchers plan to create a uniform set of affective computing hardware platforms, both to conduct affect sensing/recognizing experiments and to develop eventual end-user systems. An example of this hardware system is shown below. The computer module itself is five and a half inches square (about the length of a pen) by three inches deep. It runs the Linux operating system. The steel casing can protect the computer in falls from heights up to six feet, even onto hard surfaces like concrete. The system is durable enough to withstand occasional blows, knocks, even the user's accidentally sitting on various parts of it, without damage.


The wearable computer module used to develop the affective computing system. The strong steel case is five and a half inches square by three inches deep, shown with the cover on (left) and off (right).

Output devices

Three interface devices for wearables. The Private Eye (left) provides a tiny monitor display that only one eye can see, and may be mounted on a pair of safety glasses. The JABRA net (right) is an earphone device for listening to auditory output from the system; the green part fits in the ear, while a microphone on the exposed end picks up the sound the ear would normally hear without the earpiece. The PalmPilot (middle) is a PDA that can be used without obscuring the user's vision.


The Private Eye

Instead of an LCD screen attached to the computer (as with a laptop), the wearable computer uses more robust, personal interfaces for "hands free" operation, allowing the user to walk around freely with the computer operational at all times. Currently, the standard interface for the system is the "Private Eye" (see photo), a text-only display positioned in front of one of the user's eyes. This interface uses a row of LEDs (light-emitting diodes) and a rapidly spinning mirror to create the illusion of a full screen of text. The Private Eye is a very low-power device (one half watt, compared to 3.5 watts for typical VGA head-mounted devices), which means a much lighter (and therefore slower) drain on the battery. The PalmPilot is an example of a more socially acceptable interface. While it is uncommon to see someone wearing a head-mounted display or earpiece, it is fairly common to see someone using a PalmPilot. This interface allows the user to harness the full power of the wearable while remaining socially inconspicuous.

The JABRA net (as shown) is an example of a lightweight, auditory interface, and a candidate output device for the affective wearable computer. This interface paradigm leaves the user's eyes unobscured, and is barely noticeable to the casual observer. An auditory interface like the JABRA would serve well for a variety of applications, including those that are not vision intensive. Using this interface, the computer would use computer-generated speech to speak with the wearer.

Input devices
The user would be able to communicate with the system via several possible means:

A one-handed keyboard like the Twiddler (see below);

A PDA-style handwriting tablet or miniature keyboard;

Eventually, speaking directly to the system, using a speech recognition system in tandem with a microphone.

The Twiddler is currently the preferred input device for wearable computers. It is a lightweight, one-handed, "chordic" keyboard. A chordic keyboard, like those used by court stenographers, produces characters by pressing combinations of buttons. Two-handed chordic keyboards are capable of typing speeds much faster than traditional QWERTY keyboards; an experienced user of the Twiddler can exceed speeds of 50 wpm using only one hand. The Twiddler shown here has an attached "orthotic spacer" (the orange lump on the bottom side of the Twiddler in the photo), which makes operation more comfortable for some users. A chordic keyboard is very quiet, and offers the user a way to silently (and, in many cases, privately) communicate with the computer, a desirable option even after sophisticated speech recognition systems come of age.
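The chord-to-character idea can be sketched as follows; the chord table here is invented for illustration and does not reflect the Twiddler's actual layout:

```python
# Illustrative chord table: each frozenset of simultaneously pressed
# buttons maps to one character (real chordic layouts differ).
CHORDS = {
    frozenset({"thumb"}): "a",
    frozenset({"index"}): "e",
    frozenset({"thumb", "index"}): "t",
    frozenset({"thumb", "middle"}): "h",
}

def type_chords(presses):
    """Translate a sequence of button-press combinations into text."""
    return "".join(CHORDS.get(frozenset(p), "?") for p in presses)

print(type_chords([{"thumb", "index"}, {"thumb", "middle"}, {"index"}]))  # the
```

Because each chord yields a full character, the speed limit is how quickly the hand can form combinations, not how far fingers must travel.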


The PalmPilot works as an input device as well as an output device. It can be used for both functions simultaneously, or it can be used in conjunction with another device.

An alternative is a pair of wearable expression glasses that sense changes in facial muscles, such as furrowing the brow in confusion or interest. The glasses have a small point of contact with the brow, but otherwise can be considered less obtrusive than a camera: they offer privacy, robustness to lighting changes, and the ability to move around freely without having to stay in a fixed position relative to a camera. The expression glasses can be used while concentrating on a task, and can be activated either unconsciously or consciously. People are still free to make false expressions, or to keep a poker face to mask true confusion if they do not want to communicate their true feelings (which I think is good); but if they do want to communicate, the glasses offer a virtually effortless way to do so. Details on the design of the glasses and on experiments conducted with them are available in the paper by Scheirer et al.


Once the affective computing system has sensed the user's biosignals and recognized the patterns inherent in the signals, the system's Understanding module will assimilate the data into its model of the user's emotional experience. The system is now capable of communicating useful information about the user to applications that can use such data. What do affective computing applications look like? How would they differ from current software applications?


Perhaps the most fundamental application of affective computing will be next-generation human interfaces that are able to recognize, and respond to, the emotional states of their users. A user who is becoming frustrated or annoyed with a product would "send out signals" to the computer, at which point the application might respond in a variety of ways, ideally in ways that the user would see as "intuitive".

Beyond this quantum leap in the ability of software applications to respond with greater sensitivity to the user, the advent of affective computing will immediately lend itself to a host of applications, a number of which are described below.

Affective Medicine
Affective computing could be a great tool in the field of medicine. Stress is widely felt by all of us in a world of technology that forces us to keep a pace higher than we can handle, and stress is a big killer. Studies have indicated that stress is a major factor affecting health: people who are highly stressed have lower resistance to disease than others, and some of the most stressed people are those who use high-end technology. If computers and other devices were to interact with their users on an affective level, it might help to reduce and control stress and consequently improve users' health.

Another use of affective systems is to train autistic children. Autism is a complex disorder in which children tend to have difficulty with social-emotional cues. They tend to be poor at generalizing what they learn, and learn best from having huge numbers of examples, patiently provided. Many autistics have indicated that they like interacting with computers, and some have indicated that communicating on the web levels the playing field for them, since emotion communication is limited on the web for everyone. Current intervention techniques suggest that many autistic children can make progress in recognizing and understanding the emotional expressions of people if given lots of examples to learn from and extensive training with those examples.

MIT has developed a system, ASQ (Affective Social Quotient), aimed at helping young autistic children learn to associate emotions with expressions and with situations. The system plays videos of both natural and animated situations giving rise to emotions, and the child interacts with the system by picking up one or more stuffed dwarfs that represent the set of emotions under study and that communicate wirelessly with the computer. The system has been tested with autistic kids aged 3-7. Within the computer environment, several kids showed an improvement in their ability to recognize emotion. More extensive evaluation is needed in natural environments, but there are already encouraging signs that some of the training is carrying over, such as reports by parents that the kids asked more about emotions at home, and pointed out emotions in their interactions with others.

Another application that has been designed uses affective computers to collect details about the condition of patients when they visit a physician. Today, physicians usually have so little time with patients that they feel it is impossible to build rapport and communicate about anything except the most obviously significant medical issues. However, given findings such as those highlighted here, emotional factors such as stress, anxiety, depression, and anger can be highly significant medical factors, even when the patient might not mention them.

In some cases, patients prefer giving information to a computer instead of to a doctor, even when they know the doctor will see the information: computers can go more slowly if the patient wishes, asking questions at the patient's individual speed, not rushing, not appearing arrogant, offering reassurance and information, while allowing the physician more time to focus on other aspects of human interaction. Patients have also, in some cases, reported more accurate information to computers: those referred for assessment of alcohol-related illnesses admitted to a 42% higher consumption of alcohol when interviewed by computer than when interviewed for the same information by psychiatrists.

Affective Tutor
Another good application for affective computing is education. Computers are widely used to impart quality education to students, but most CBTs, or Computer Based Tutorials, are either linear, following a fixed course, or based on the student's ability as gauged from responses to test situations. Even such a response is very limited.

An affective tutor, on the other hand, would be able to gauge the student's understanding, as well as whether he or she is bored, confused, strained, or in any other psychological state that affects his or her studies, and consequently change its presentation or tempo to enable the student to adjust, just as a human teacher would. This would increase the student's grasp of the subject and give a better overall output from the system.

MIT has developed a computer piano tutor, which changes its pace and presentation based upon naturally expressed signals that the user is interested, bored, confused, or frustrated. Several tests conducted with the system have shown very good results.

Affective DJ
Another application that has been developed is a digital music delivery system that plays music based on the user's current mood and listening preferences. The system can detect that the user is currently experiencing a feeling of sorrow or loneliness, and consequently select a piece of music that it expects will help change that mood. It can also make changes to the current play list if it senses that the user is getting bored of it, or that the music has succeeded in shifting the user's affect to another state.
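A minimal sketch of such mood-based selection, with an invented track library and mood-to-goal mapping (none of this reflects the actual system's design):

```python
# Illustrative library: each track is tagged with the mood effect it
# tends to have on this (hypothetical) user.
LIBRARY = {
    "uplifting_song": "cheer_up",
    "calm_song": "relax",
    "energetic_song": "energize",
}

# Which effect to aim for, given the detected mood
GOALS = {"sorrow": "cheer_up", "loneliness": "cheer_up",
         "stress": "relax", "boredom": "energize"}

def pick_track(detected_mood):
    """Select a track intended to shift the user out of a negative mood."""
    goal = GOALS.get(detected_mood)
    for track, effect in LIBRARY.items():
        if effect == goal:
            return track
    return None

print(pick_track("sorrow"))  # uplifting_song
```

A real affective DJ would also learn these tags from the user's own responses rather than assume them in advance.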

Another promising development is a video retrieval system that might help identify not just scenes having a particular actor or setting, but scenes having a particular emotional content: fast-forward to the "most exciting" scenes. This would allow the user to watch scenes that feature his or her favorite actors and that also suit the user's current mood.


Affective Toys
In an age when robotic toys are the craze, affective toys will soon enter the toy world to fill a void: robots are unable to show or have emotions, which instead have to be attributed to them by an imaginative child. Affective toys, on the other hand, will have emotions of their own and will be able to exchange these emotions with the child, as another child would. The most famous affective toy is The Affective Tigger, a reactive expressive toy: the stuffed tiger reacts to a human playmate with a display of emotion, based on its perception of the mood of play.

Affective Avatars
Virtual reality avatars that accurately represent, in real time, the physical manifestations of their users' affective states are a dream of hard-core game players. Players would enjoy a game more, and feel more a part of it, if their avatars behaved just as they would in a similar scenario: scared when they are scared, angry when they are angry, and excited whenever they feel excited. Work has been progressing in this direction. For example, AffQuake is an attempt to incorporate signals that relate to a player's affect into id Software's Quake II in a way that alters game play. Several modifications have been made that cause the player's avatar within Quake to alter its behavior depending upon one of these signals; for example, in StartleQuake, when a player becomes startled, his or her avatar also becomes startled and jumps back.

Many other applications have arisen with continuing research in this field, and more possibilities are opening up every day.


In this seminar I have tried to provide a basic framework of the work done in the field of affective computing. Over the years, scientists have aimed to make machines that are intelligent and that help people use their native intelligence, yet they have almost completely neglected the role of emotion in intelligence. This does not mean that newer research should aim solely to increase the affective ability of computers: it is widely known that too much emotion is as bad as, and possibly worse than, no emotion. A great deal of research is therefore needed to learn how affect can be used in a balanced, respectful, and intelligent way; this should be the aim of affective computing as we develop new technologies that recognize and respond appropriately to human emotions.

The science is still very young but is showing great promise, and should contribute more to HCI than did the advent of the GUI and speech recognition. The research is promising, and affective computing is set to become an essential tool in the future.

References


J. Scheirer, R. Fernandez, J. Klein, and R. W. Picard (2002), "Frustrating the User on Purpose: A Step Toward Building an Affective Computer"

Jonathan Klein, Youngme Moon and Rosalind W. Picard (2002), "This Computer Responds to User Frustration"

Rosalind W. Picard, Jonathan Klein (2002), "Computers that Recognise and Respond to User Emotion: Theoretical and Practical Implications"

Rosalind W. Picard and Jocelyn Scheirer (2001), "The Galvactivator: A Glove that Senses and Communicates Skin Conductivity"

Carson Reynolds and Rosalind W. Picard (2001), "Designing for Affective Interactions"

Abstract

Affective computing is computing that relates to, arises from, or deliberately influences emotions. Neurological studies indicate that the role of emotions in human cognition is essential: emotions play a critical role in rational decision-making, perception, human interaction, and human intelligence. In view of the growth of human-computer interaction (HCI), it has become important that, for proper and full interaction between humans and computers, computers be able at least to recognize and react to different user emotional states.

Emotion is a difficult thing to classify and study fully, so replicating or detecting emotions in agents is a challenging task. In human-human interaction it is often easy to see whether a person is angry, happy, frustrated, etc.; it is not easy to replicate such an ability in an agent. In this seminar I deal with different aspects of affective computing, including a brief study of human emotions, theory and practice related to affective systems, challenges to affective computing, and systems that have been developed to support this type of interaction. I also make a foray into the ethics of this field, as well as the implications of computers that have emotions of their own.

Affective computing is an emerging, interdisciplinary area, addressing a variety of research, methodological, and technical issues pertaining to the integration of affect into human-computer interaction. Specific research areas include recognition of distinct affective states, user interface adaptation and function integration in response to changes in the user's affective state, and supporting technologies such as wearable computing for improved affective state detection and adaptation.

Acknowledgement

I thank God Almighty for the successful completion of my seminar. I express my sincere gratitude to Dr. M N Agnisharman Namboothiri, Head of the Department, Information Technology. I am deeply indebted to the staff in charge, Miss Sangeetha Jose and Mr. Biju, for their valuable advice and guidance. I am also grateful to all other members of the faculty of the Information Technology department for their cooperation.

Finally, I wish to thank all my dear friends, for their whole-hearted cooperation, support and encouragement.

Sanooj. A.A

Contents

1. Introduction
2. A General Overview
3. Human Emotions
4. Emotional or Affective Communication
5. Sensing Human Emotions
6. Recognizing Affective Input
7. Understanding the Affective Input
8. Synthesizing Emotions
9. Interfaces to Affective Computing
10. Applications of Affective Systems
11. Conclusion
12. References