Unsupervised Features for Facial Expression Intensity Estimation over Time

Computer Vision and Pattern Recognition Workshops (CVPRW), June 2018.

Maren Awiszus, Stella Graßhof, Felix Kuhnke and Jörn Ostermann


Abstract

The diversity of facial shapes and motions among persons is one of the greatest challenges for automatic analysis of facial expressions. In this paper, we propose a feature describing expression intensity over time, while being invariant to the person and to the type of expression performed. Our feature is a weighted combination of the dynamics of multiple points, adapted to the overall expression trajectory. We evaluate our method on several tasks, all related to temporal analysis of facial expressions. The proposed feature is compared to a state-of-the-art method for expression intensity estimation, which it outperforms. We use our proposed feature to temporally align multiple sequences of recorded 3D facial expressions. Furthermore, we show how our feature can be used to reveal person-specific differences in performances of facial expressions. Additionally, we apply our feature to identify local changes in face video sequences based on action unit labels. In all experiments our feature proves to be robust against noise and outliers, making it applicable to a variety of applications for the analysis of facial movements.
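To illustrate the core idea of a per-frame intensity feature built as a weighted combination of point dynamics, here is a minimal sketch. It is a hypothetical simplification, not the paper's exact formulation: each tracked point's displacement from the first (assumed neutral) frame is weighted by how much that point moves over the whole sequence, so points that contribute most to the expression trajectory dominate the curve.

```python
import numpy as np

def intensity_curve(landmarks):
    """Sketch of a per-frame expression intensity from tracked facial points.

    landmarks: array of shape (T, P, D) -- T frames, P points, D coordinates.
    Hypothetical simplification: displacement of each point from the first
    (neutral) frame, weighted by that point's overall motion, summed per
    frame and normalised to [0, 1].
    """
    # Per-frame, per-point displacement magnitude relative to frame 0: (T, P)
    disp = np.linalg.norm(landmarks - landmarks[0], axis=2)
    # Weight each point by its peak motion over the sequence: (P,)
    weights = disp.max(axis=0)
    weights = weights / (weights.sum() + 1e-12)
    # Weighted combination per frame, normalised so the peak is 1: (T,)
    curve = disp @ weights
    return curve / (curve.max() + 1e-12)
```

A point that barely moves receives a near-zero weight, which gives the curve some robustness to jitter on static regions of the face; the actual weighting scheme in the paper differs.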

The paper can be found here.


Alignment for BU4DFE

The calculated temporal alignment data for the BU4DFE database can be downloaded here as a .zip file.
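Aligning two expression sequences amounts to mapping frames of one recording onto frames of another so that corresponding expression phases coincide. As a rough sketch of such a temporal alignment, the example below warps two 1-D intensity curves with standard dynamic time warping; this is a stand-in illustration, and the paper's actual alignment procedure may differ.

```python
import numpy as np

def dtw_path(a, b):
    """Dynamic time warping between two 1-D intensity curves.

    Returns a list of (i, j) index pairs mapping frames of sequence a
    onto frames of sequence b, minimising accumulated |a[i] - b[j]|.
    """
    n, m = len(a), len(b)
    # Accumulated-cost matrix with an extra border row/column.
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

Given two intensity curves of different lengths, the returned path can be used to resample one sequence onto the timeline of the other, which is the kind of frame correspondence the downloadable alignment data encodes.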

Example Videos from BU4DFE:

For each expression, two videos are shown: the full unaligned video and one aligned transition.
Anger
Disgust
Fear
Happiness
Sadness
Surprise

Video data is taken from:
"A High-Resolution 3D Dynamic Facial Expression Database" by Lijun Yin, Xiaochen Chen, Yi Sun, Tony Worm, and Michael Reale, The 8th International Conference on Automatic Face and Gesture Recognition, 17-19 September 2008 (Tracking Number: 66).
Copyright Research Foundation for The State University of New York, 2006-2014. BU-4DFE Database, as of 27 Jan 2015.


Project page

See our project page for further details about our work in analysis and synthesis of faces.