
Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (FG'00), pp. 484-490, Grenoble, France.

Comprehensive Database for Facial Expression Analysis

Takeo Kanade
The Robotics Institute, Carnegie Mellon University
Pittsburgh, PA, USA 15213
tk@cs.cmu.edu
http://www.cs.cmu.edu/~face

Jeffrey F. Cohn
Department of Psychology, University of Pittsburgh
The Robotics Institute, Carnegie Mellon University
4015 O'Hara Street, Pittsburgh, PA, USA 15260
jeffcohn+@pitt.edu

Yingli Tian
The Robotics Institute, Carnegie Mellon University
Pittsburgh, PA, USA 15213
yltian@cs.cmu.edu

Abstract

Within the past decade, significant effort has occurred in developing methods of facial expression analysis. Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the problem space for facial expression analysis, which includes level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity, image characteristics, and relation to non-verbal behavior. We then present the CMU-Pittsburgh AU-Coded Face Expression Image Database, which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity, performing multiple tokens of most primary FACS action units. This database is the most comprehensive test-bed to date for comparative studies of facial expression analysis.

1. Introduction

Within the past decade, significant effort has occurred in developing methods of facial feature tracking and analysis. Analysis includes both measurement of facial motion and recognition of expression. Because most investigators have used relatively limited data sets, the generalizability of different approaches to facial expression analysis remains unknown. With few exceptions [10, 11], only relatively global facial expressions (e.g., joy or anger) have been considered, subjects have been few in number and homogeneous with respect to age and ethnic background, and recording conditions have been optimized. Approaches to facial expression analysis that have been developed in this way may transfer poorly to applications in which expressions, subjects, contexts, or image properties are more variable. In addition, no common data exist with which multiple laboratories may conduct comparative tests of their methods. In the absence of comparative tests on common data, the relative strengths and weaknesses of different approaches are difficult to determine. In the areas of face and speech recognition, comparative tests have proven valuable [e.g., 17], and similar benefits would likely accrue in the study of facial expression analysis. A large, representative test-bed is needed with which to evaluate different approaches.

We first describe the problem space for facial expression analysis. This space includes multiple dimensions: level of description, temporal organization, eliciting conditions, reliability of manually coded expression, individual differences in subjects, head orientation and scene complexity, image acquisition, and relation to non-facial behavior. We note that most work to date has been confined to a relatively restricted region of this space. We then describe the characteristics of databases that map onto this problem space, and evaluate Phase 1 of the CMU-Pittsburgh AU-Coded Facial Expression Database against these criteria. This database provides a large, representative test-bed for comparative studies of different approaches to facial expression analysis.

2. Problem space for face expression analysis

2.1 Level of description

Most of the current work in facial expression analysis attempts to recognize a small set of prototypic expressions. These prototypes occur relatively

