Social interactions are critical to all societies. One of the most powerful tools for social interaction is the face – a complex system comprising variations of movement (expressions), morphology (shape/structure) and complexion (color/texture). Consequently, the face can elicit myriad rapid social judgments (e.g. of personality, emotion, group membership, age, health, social status) with significant consequences (e.g. sentencing and voting decisions, social isolation, job offers).

Yet little is known about how the complex, dynamic face transmits the myriad messages that regulate social interactions in different cultures, how these complex face signals map onto psychological processes (e.g. categorical/dimensional perception), or which signals facilitate or hinder cross-cultural communication. This is largely due to fragmented research on social concepts (mental states, personality, emotions), face signals (morphology, movements, complexion) and culture, which has consequently overlooked a possible latent algebraic, syntactical structure to social face signals across cultures. My own work hints at such a structure.

My ambitious program will unify these fragments to derive the first generative, algebraic and syntactical model of social face signals, using innovative methods that combine social/cultural psychology, 3D dynamic computer graphics, vision-science psychophysical methods and mathematical psychology. It will thus test and validate a new theoretical framework of social face signals that unites both categorical/dimensional and universal/culture-specific accounts of social face perception.

This framework is highly relevant in the context of globalization and cultural integration, where social communication using virtual agents is integral to modern society. It is thus imperative to equip digital agents with the tools to flexibly generate socially and culturally sophisticated face signals. FACESYNTAX will therefore transfer the generative model to social robotics.