This is a 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground-truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both 2D and 3D domains.
Datasets Being Used/To Be Used in the Project
A collection of temporal and static facial expressions extracted from movie scenes.
A dataset created from selected frames in AFEW.
This dataset addresses the problem of the limited number of databases that capture behaviour together with the corresponding affect. To address this problem, the MMI-Facial Expression database was conceived in 2002 as a resource for building and evaluating facial expression recognition algorithms. The database addresses a number of key omissions in other databases of facial expressions. In particular, it contains recordings of the full temporal pattern of a facial expression, from neutral, through a series of onset, apex, and offset phases, and back again to a neutral face.
This database contains videos of facial action units recorded starting in autumn 2003 at the MPI for Biological Cybernetics in the Face and Object Recognition Group, in the department of Prof. Bülthoff, using the Videolab facilities created by Mario Kleiner and Christian Wallraven.
The SEMAINE database was collected for the SEMAINE project, which aims to build a SAL (Sensitive Artificial Listener): a multimodal dialogue system that can interact with humans through a virtual character, sustain an interaction with a user for some time, and react appropriately to the user's non-verbal behaviour.
Dataset for benchmarking action unit (AU) detection and discrete emotion recognition systems.
Dataset collected from selected SEMAINE and BP4D examples.
The dataset consists of 48x48 pixel grayscale images of faces. The faces have been automatically registered so that each face is more or less centered and occupies about the same amount of space in each image. The task is to categorize each face, based on the emotion shown in the facial expression, into one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral).
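As a minimal sketch, the seven-way labelling scheme above can be represented as a simple mapping (the dictionary and helper function names here are illustrative, not part of the dataset's own tooling):

```python
# Numeric labels used by the dataset, as listed in the description above.
EMOTIONS = {
    0: "Angry",
    1: "Disgust",
    2: "Fear",
    3: "Happy",
    4: "Sad",
    5: "Surprise",
    6: "Neutral",
}

def label_to_emotion(label: int) -> str:
    """Return the emotion name for a numeric class label (0-6)."""
    return EMOTIONS[label]
```

For example, `label_to_emotion(3)` returns `"Happy"`.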
The CMU Multi-PIE face database contains more than 750,000 images of 337 people recorded in up to four sessions over the span of five months. Subjects were imaged under 15 view points and 19 illumination conditions while displaying a range of facial expressions. In addition, high resolution frontal images were acquired as well. In total, the database contains more than 305 GB of face data.
The Cohn-Kanade AU-Coded Facial Expression Database is for research in automatic facial image analysis and synthesis and for perceptual studies.
Dataset used in the First Emotion Recognition in the Wild Challenge, organised at the ACM International Conference on Multimodal Interaction 2013, Sydney.
Database created from selected Helen and LFPW test-set examples.
Database constructed from annotated Flickr images.
AFW presents extensive results on standard face benchmarks, as well as a new "in the wild" annotated dataset, achieving strong results on face detection, pose estimation, and landmark estimation in real-world, cluttered images. The dataset was used extensively in Zhu and Ramanan's face detection work (X. Zhu, D. Ramanan. "Face detection, pose estimation and landmark localization in the wild").
811 faces downloaded from the web using simple text queries on sites such as google.com, flickr.com, and yahoo.com. Each image is annotated with 68 facial landmark points.