
An Extended 3D Morphable Face Model with Applications in Experimental Psychology

Our faces and facial expressions are an important means of communication and social interaction. One goal of the behavioral sciences is to better understand how the features of the faces we look at influence our behavior. These include static features like facial proportions or the shape and color of certain parts of a face, which primarily constitute facial identity, as well as dynamic movements resulting from the activation of the mimic musculature. Experimental psychology provides an empirical approach to this endeavor. In experiments, participants are typically exposed to images or videos of realistic faces with specifically controlled features. By analyzing the reactions to such stimuli, conclusions can be drawn about the influence of facial features on the participants' behavior.

Psychologists today mostly generate face stimuli with the help of digital tools. Image editing with Photoshop is highly flexible, but also time-consuming and subjective. Tools like Psychomorph or Fantamorph are easier to use and more objective, but do not allow specific control over facial features. In contrast, stimulus generation with 3D Morphable Face Models (3DMMs) offers a better balance between objectivity, ease of use, and flexibility.

3DMMs are statistical models learned from 3D scans of real people's faces and facial expressions. After these training scans have been brought into correspondence, methods like principal component analysis (PCA) can be used to determine the major modes of variation of facial shape and texture in the data. Such modes typically vary the overall facial proportions, expressions, or skin color. They can be individually controlled and flexibly combined to generate new faces and facial expressions. The plausibility of the generated faces can be ensured by having the mode combinations follow the multivariate distribution of the training data.

Psychologists have mostly used 3DMMs to generate stimulus images of faces with neutral expression. Static and dynamic stimuli of facial expressions are also of great interest, but their generation with 3DMMs is less common. One problem is that the majority of current 3DMMs can only generate facial movements according to the six prototypic expressions of anger, disgust, fear, happiness, sadness, and surprise; more diverse or subtle expressions are often impossible to produce. Among other reasons, this is due to the difficulty of establishing accurate correspondence in the training data. Further, the modes of most 3DMMs were created by means of PCA. These modes often lack interpretability, fail to generate facial details, and rarely provide psychologists with specific control over identity or expression features. Some 3DMMs also generate subtle artifacts that might lead to undesired effects during face perception, and their faces are less realistic than those designed by artists for recent computer games and animated movies. Last but not least, current 3DMMs have probably not yet been used for interactive experiments in virtual reality (VR), for technical reasons.

Although 3DMMs offer many advantages beyond the generation of static and dynamic stimuli, these limitations have so far prevented their widespread use in experimental psychology. The goal of this dissertation is to foster the creation and usage of 3DMMs in this context. To this end, we make three major contributions.
First, we describe a matching method that establishes correspondence for 3D face scans with very high accuracy. Unlike the most commonly used methods, it transforms the facial features into a 2D intermediate representation so that they can be aligned to a reference using image registration. Experiments with a large database of 3D scans of faces and facial expressions show that our method outperforms previous approaches.

Second, the 3D scans brought into correspondence in this way are used to create a 3DMM whose resolution is an order of magnitude higher than that of most existing models. We learn a variety of meaningful modes that, for example, vary features only in specific regions of the face or relate to demographic factors such as ethnicity and age. Further, modes of local facial movements are established that can be flexibly combined into a large variety of expressions. We evaluate the quality of the new 3DMM in two experiments. The results show its advantages over previous models, especially the higher degree of realism of dynamic expression stimuli created with our model.

Third, we demonstrate that 3DMMs can be used for more than stimulus generation by developing two experimental methods that are readily applicable in experimental psychology. For the first, we use our 3DMM to create 3D avatar faces suitable for VR. They serve in a new open-source framework for virtual mirror experiments on self-face perception; a study demonstrates the advantages of the framework over previous methods. For the second, our 3DMM is used to build a method for improved control of facial asymmetry in existing stimulus photographs. We show that the method accounts for different dimensions of facial asymmetry and is less sensitive than previous approaches to extrinsic factors like head posture. The methods are evaluated in a study investigating the influence of facial asymmetry on ratings of attractiveness, femininity, and masculinity; the results indicate the benefits and validity of our method.
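To make the mode-based generation principle from the abstract concrete, the following is a minimal Python sketch of PCA-based face synthesis. The data layout, the dimensions, and the independent-Gaussian sampling of mode coefficients are illustrative assumptions, not the implementation from the thesis.

    import numpy as np

    # Minimal sketch: each training scan is assumed to be in dense
    # correspondence and flattened into one row of vertex coordinates.
    rng = np.random.default_rng(0)
    n_scans, n_vertices = 200, 5000
    X = rng.normal(size=(n_scans, 3 * n_vertices))  # placeholder training data

    mean_shape = X.mean(axis=0)
    Xc = X - mean_shape

    # PCA via SVD: rows of Vt are the modes of variation,
    # s / sqrt(n - 1) their standard deviations in the training data.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    stddev = s / np.sqrt(n_scans - 1)

    # Generate a new, plausible face: draw mode coefficients following
    # the (here: independent Gaussian) distribution of the training data.
    k = 40                                   # number of modes kept
    alpha = rng.normal(size=k) * stddev[:k]
    new_face = (mean_shape + alpha @ Vt[:k]).reshape(n_vertices, 3)

Scaling the coefficients by the per-mode standard deviations is what keeps sampled faces within the plausible range spanned by the training scans.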
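The first contribution rests on mapping 3D facial features to a 2D intermediate image and aligning it to a reference via image registration. The sketch below illustrates only the general idea, under strong simplifying assumptions: a plain cylindrical unwrapping and a translation-only phase-correlation alignment. The thesis method is considerably more sophisticated, and all function names here are hypothetical.

    import numpy as np

    def cylindrical_unwrap(points, resolution=(256, 256)):
        """Rasterize 3D points (N, 3) into a 2D depth image using a
        cylindrical projection around the vertical (y) axis."""
        x, y, z = points.T
        theta = np.arctan2(x, z)                 # angle around the head
        u = ((theta + np.pi) / (2 * np.pi) * (resolution[1] - 1)).astype(int)
        v = ((y - y.min()) / (np.ptp(y) + 1e-9) * (resolution[0] - 1)).astype(int)
        img = np.zeros(resolution)
        img[v, u] = np.sqrt(x**2 + z**2)         # radial depth as pixel value
        return img

    def register_translation(moving, reference):
        """Estimate the integer 2D shift aligning `moving` to `reference`
        by phase correlation (translation-only registration)."""
        f = np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))
        corr = np.abs(np.fft.ifft2(f / (np.abs(f) + 1e-9)))
        shift = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past the image center correspond to negative shifts.
        return tuple(int(s) - d if s > d // 2 else int(s)
                     for s, d in zip(shift, corr.shape))

    # Usage: dy, dx = register_translation(cylindrical_unwrap(scan_points),
    #                                      cylindrical_unwrap(reference_points))

Once the 2D images are aligned, the recovered mapping can be pulled back to the 3D surfaces to establish vertex-level correspondence.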

Metadata
Author: Martin Grewe
Document Type: Doctoral Thesis
Tags: 3D morphable face model; experimental psychology; facial analysis; facial modeling
Granting Institution: Technische Universität Berlin
Advisor: Stefan Zachow
Date of final exam: 2023/03/31
Year of first publication: 2023
Page Number: 187