
Analysis and Synthesis of Human Faces

TNT members involved in this project:
Stella Graßhof, M.Sc.
Felix Kuhnke, M.Sc.
Prof. Dr.-Ing. Jörn Ostermann

At Institut für Informationsverarbeitung (TNT) we are interested in analyzing and synthesizing the human face. Our motivation is to provide tools for efficient interpretation, evaluation and creation of facial shapes, (inter)actions and virtual faces, thereby enabling natural human-computer interaction (HCI).

We analyze different kinds of face data, such as still images, videos, and 3D point clouds, and process and describe them with mathematical tools.
Deeper insight into the underlying structures, processes, and perception enables us to provide methods for further analysis and synthesis in various applications.

The following overview shows the main parts of our research:


Statistical Face Models have a long tradition in the field of facial analysis and synthesis. They provide a compact representation of facial geometry and attributes. You can find our research on 3D Face Models here.
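The compact representation such models provide is typically linear: a face is encoded as a mean shape plus a weighted sum of principal components learned from training data. A minimal sketch of this idea with NumPy (the random training data and the number of components are purely illustrative, not the group's actual model):

```python
import numpy as np

# Toy training set: 50 "faces", each a flattened vector of 30 3D vertices.
rng = np.random.default_rng(0)
n_faces, n_dims = 50, 90
faces = rng.normal(size=(n_faces, n_dims))

# Build the linear model: mean shape + principal components (PCA via SVD).
mean_shape = faces.mean(axis=0)
centered = faces - mean_shape
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

# Keep only a few components: this is the "compact representation".
k = 5
basis = components[:k]                      # shape (k, n_dims)

# Encode a face as k coefficients, then reconstruct an approximation of it.
coeffs = basis @ (faces[0] - mean_shape)    # project onto the basis
reconstruction = mean_shape + coeffs @ basis
print(coeffs.shape)                         # a 90-dim face stored as 5 numbers
```

The same scheme extends to attributes beyond geometry (e.g. texture) by stacking them into the per-face vector.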


One of our longstanding goals has been to produce Talking Heads (realistic talking virtual human faces) for Human-Computer Interaction. The major challenge in this field is to produce visuals indistinguishable from real faces. Besides the texture and geometry of the virtual head, dynamic features such as linguistically correct speech animation and realistic animation of facial expressions are important factors. Furthermore, the behavior and animation of the virtual face need to take the human dialog partner into account. An early prototype (from 2009) of a talking head can be found here.


In addition, using a talking head in dialog-based interaction requires an underlying dialog system, which handles the content of the spoken part of the interaction. It interprets the user's speech using speech recognition and natural language understanding, and produces natural speech output using natural language generation and text-to-speech synthesis.
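The stages of such a pipeline can be sketched as a chain of functions. All components below are hypothetical stand-ins for illustration only, not the group's system:

```python
# Minimal dialog-system pipeline sketch. The four stages mirror the text:
# speech recognition -> understanding -> generation -> speech synthesis.

def recognize_speech(audio: bytes) -> str:
    """ASR stand-in: turn the user's audio into text."""
    return "what time is it"

def understand(text: str) -> dict:
    """NLU stand-in: map the text to an intent."""
    return {"intent": "ask_time"} if "time" in text else {"intent": "unknown"}

def generate_reply(intent: dict) -> str:
    """NLG stand-in: produce a textual answer for the intent."""
    replies = {"ask_time": "It is noon.", "unknown": "Could you rephrase that?"}
    return replies[intent["intent"]]

def synthesize(text: str) -> bytes:
    """TTS stand-in: the talking head would speak (and animate) this text."""
    return text.encode("utf-8")

def dialog_turn(audio: bytes) -> bytes:
    """One turn: user audio in, system audio out."""
    return synthesize(generate_reply(understand(recognize_speech(audio))))

print(dialog_turn(b"...").decode())  # It is noon.
```

In a talking-head setting, the synthesis stage additionally drives the facial animation so that lip movements match the generated speech.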

  • Maren Awiszus, Stella Graßhof, Felix Kuhnke, Jörn Ostermann
    Unsupervised Features for Facial Expression Intensity Estimation over Time
    Computer Vision and Pattern Recognition Workshops (CVPRW), June 2018
  • Felix Kuhnke, Jörn Ostermann
    Visual Speech Synthesis From 3D Mesh Sequences Driven By Combined Speech Features
    Proc. of the IEEE International Conference on Multimedia and Expo (ICME), IEEE, Hong Kong, July 2017
  • Stella Graßhof, Hanno Ackermann, Felix Kuhnke, Jörn Ostermann, Sami Brandt
    Projective Structure from Facial Motion
    15th IAPR International Conference on Machine Vision Applications (MVA) (accepted), Nagoya (Japan), May 2017