Periodic Reporting for period 3 - CoAct (Communication in Action: Towards a model of Contextualized Action and Language Processing)
Reporting period: 2021-09-01 to 2023-02-28
So far, the project has produced a large corpus of casual conversations comprising audio, video and kinematic recordings. This corpus is currently being annotated for a range of behaviours of interest to the project, including manual and head gestures, facial signals, speech and prosody. These data will be analysed to address questions about how people communicate intentions multimodally in social interaction, and how these multimodal signals feed into the process of reaching mutual understanding and alignment. Further, the corpus data will serve to test specific hypotheses about multimodal intention communication in experimental settings, both in terms of how communicative acts (with a focus on speech acts/social actions) are produced and how they are processed at the cognitive and neural levels. Some experimental studies have already begun, employing cutting-edge methods such as EEG hyperscanning and motion tracking. Experimental studies using Virtual Reality stimuli motivated by the corpus analyses are currently in preparation.