CORDIS - EU research results

Human communication as joint epistemic engineering

Periodic Reporting for period 1 - MINDSHARING (Human communication as joint epistemic engineering)

Reporting period: 2023-01-01 to 2025-06-30

Imagine ordering a drink at a diner, pointing at an empty glass as an attentive waiter passes by. How did you select that particular gesture, and how could the waiter possibly interpret it as you intended? Like any other signal we use in daily communication, that gesture is highly ambiguous outside its context of use. How can human communication work by using referentially flexible and contextually dependent signals?
MINDSHARING argues that interlocutors are communicatively effective because they jointly control their interaction-specific shared context.
MINDSHARING integrates computational, developmental, and cognitive neuroscience to understand how that control is algorithmically defined, culturally acquired, and neurally implemented.
First, using context-sensitive neural networks, MINDSHARING identifies communicative control parameters during interactive multi-turn linguistic and non-verbal referential games, then assesses the value of those parameters as potential communicative universals across worldwide cultures. Second, using prospective longitudinal studies, MINDSHARING identifies the socio-cultural experiences that influence the acquisition of communicative abilities during ontogenetic development. Third, using concurrent brain stimulation and imaging in communicating dyads, MINDSHARING tracks and perturbs the neural dynamics of communicative control parameters as dyads continuously adjust their shared context to novel communicative challenges.
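The core logic of such a referential game can be illustrated with a minimal simulation (a hypothetical sketch, not the project's actual Pizzini Game or Tacit Communication Game): two agents start with no shared signal-referent mapping and converge on one purely by reinforcing the pairings that happen to succeed.

```python
import random

def play_signaling_game(n_referents=3, n_signals=3, n_rounds=2000, seed=0):
    """Toy Lewis-style signaling game (illustrative stand-in for a
    referential game). A sender and a receiver begin with uniform,
    unshared mappings and reinforce signal-referent pairings only when
    a round succeeds, so a shared code emerges from interaction alone."""
    rng = random.Random(seed)
    # Urn weights: sender[referent][signal] and receiver[signal][referent]
    sender = [[1.0] * n_signals for _ in range(n_referents)]
    receiver = [[1.0] * n_referents for _ in range(n_signals)]
    successes = 0
    for _ in range(n_rounds):
        referent = rng.randrange(n_referents)
        signal = rng.choices(range(n_signals), weights=sender[referent])[0]
        guess = rng.choices(range(n_referents), weights=receiver[signal])[0]
        if guess == referent:  # reinforce the successful mapping on both sides
            sender[referent][signal] += 1.0
            receiver[signal][referent] += 1.0
            successes += 1
    return successes / n_rounds

accuracy = play_signaling_game()
print(round(accuracy, 2))  # well above the 1/3 chance level
```

The point of the sketch is only that context-free signals acquire stable referents through the dyad's interaction history, which is what the project's richer tasks are designed to measure.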
MINDSHARING provides a novel causal account of a foundational element of human society: the ability to communicate with referentially flexible signals. MINDSHARING brings computational, cultural, and neurocognitive explanations into the inter-personal space where communication is used and where it is learned. MINDSHARING will deliver what existing accounts have not yet delivered: multi-level causal explanations of the multi-level human ability to communicate with referentially flexible and contextually dependent signals.
The MindSharing project has launched the three work packages outlined in the proposal.
In WP1, we have developed new computational architectures to identify and quantify control parameters of human communicative interactions, reflecting long-range dependencies within interlocutors’ interaction history, over and above interaction-invariant relations between signals. These computational architectures have already been applied to the two referential games developed in the MindSharing project, namely the Pizzini Game and the Tacit Communication Game.
In WP2, we have deployed a non-invasive location tracking technology (https://noldus.com/human-behavior-research) to quantify the structure of social interactions experienced by 3- to 4-year-old children in their pre-school environment during a standard morning class (8:30 to 12:30). We have already acquired data from more than 300 children. In the meantime, we have developed a familiarization procedure to allow for the acquisition of structural and functional magnetic resonance images in those children willing to participate in this part of the study.
In WP3, we have developed an experimental protocol and organized a dedicated lab for multi-modal tracking of communicative behaviors in pairs of interlocutors engaged in the two referential games developed in the MindSharing project (Pizzini Game, Tacit Communication Game). We have acquired exploratory datasets from several pairs of participants, monitoring performance, speech, and gaze behaviors. This phase of the project will soon lead to the selection of specific parameters and the estimation of effect sizes, to be tested in a pre-registered confirmatory portion of the study.
The most significant achievement of the MindSharing project so far has been the identification of a latent control parameter used by human interlocutors during referential communication. The project aims to understand how interlocutors regulate the referential process to effectively coordinate novel, context-dependent mappings in real-time interactions. To address this issue, we have combined two complementary approaches to characterizing interaction-specific mappings between signals and referents across communicative turns.

First, we employ an experimental semiotic task, the Tacit Communication Game (TCG), which amplifies natural generative demands by requiring participants to communicate without preexisting shared signal-referent mappings. In the TCG, dyads collaborate to arrange geometric shapes into designated configurations across multiple turns, minimizing reliance on conventional linguistic or gestural cues while enabling precise quantification of communicative behaviors across diverse referential challenges.

Second, we have developed a hierarchical Transformer model (HTM) to generate movement- and interaction-level embeddings of communicative behaviors (113 adult dyads). Unlike standard large language models, this approach captures dependencies not only between tokens but also between the actual referents of those tokens over the broader communicative exchange. By leveraging full access to both signal trajectories and referential spaces, we move beyond surface-level signal analysis to uncover how discrete behavioral sequences evolve into structured patterns of referential coordination over time.

We experimentally generated communicative variance by sampling from neurotypical (NT) and autistic (ASC) dyads engaged in the TCG, and identified changes in parameters that track the representational dimensionality of signals and referents as communication unfolds.
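The two-level structure of the analysis can be sketched as follows (a hypothetical illustration with untrained, parameter-free attention; the project's HTM is a trained Transformer, and all array shapes here are invented for the example): level 1 pools token vectors within each trial into a movement-level embedding, and level 2 attends across those embeddings to produce an interaction-level embedding that carries across-trial dependencies.

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention with no learned
    weights (queries = keys = values = x), for illustration only."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def hierarchical_embed(trials):
    """Two-level embedding in the spirit of a hierarchical model:
    within-trial attention yields one movement-level embedding per
    trial; attention across those yields an interaction-level embedding.
    (Sketch only; not the project's actual HTM architecture.)"""
    movement = np.stack([self_attention(t).mean(axis=0) for t in trials])
    interaction = self_attention(movement).mean(axis=0)
    return movement, interaction

rng = np.random.default_rng(0)
trials = [rng.normal(size=(5, 8)) for _ in range(10)]  # 10 trials, 5 tokens each
mov, inter = hierarchical_embed(trials)
print(mov.shape, inter.shape)  # (10, 8) (8,)
```

The design choice the sketch makes explicit is that movement-level embeddings see only within-trial context, whereas the interaction-level embedding integrates the dyad's whole interaction history.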
Movement-level embeddings (within-trial dependencies) could not differentiate the two groups, indicating comparable communicative behaviors. In contrast, interaction-level embeddings (across-trial dependencies) distinguished ASC from NT dyads with high accuracy. Crucially, representational complexity, i.e., dyadic alignment in the interaction-level intrinsic dimensionality used to encode communicative histories, tracked referential coordination demands, with greater misalignment in ASC dyads under referential volatility. We are currently testing the reproducibility and generalizability of these findings by applying the same methodology to data from linguistic and multimodal referential communication. We are also testing whether and how interlocutors track representational complexity during a dialogue.
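One common way to quantify the intrinsic dimensionality of a set of embeddings, and hence dyadic alignment in it, is the participation ratio of the covariance eigenvalue spectrum. The sketch below uses that estimator and a toy misalignment index; the project's actual estimator and alignment measure are not specified here, so treat both function names as hypothetical.

```python
import numpy as np

def participation_ratio(X):
    """Effective (intrinsic) dimensionality of an (n_samples, n_dims)
    embedding matrix via the participation ratio of its covariance
    eigenvalues: (sum of eigenvalues)^2 / sum of squared eigenvalues."""
    X = X - X.mean(axis=0)
    eig = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

def dimensional_misalignment(emb_a, emb_b):
    """Toy alignment index: absolute difference between the two
    interlocutors' intrinsic dimensionalities (hypothetical measure)."""
    return abs(participation_ratio(emb_a) - participation_ratio(emb_b))

rng = np.random.default_rng(0)
low = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 12))  # data on a 2-D subspace
high = rng.normal(size=(200, 12))                           # full-rank data
print(participation_ratio(low) < participation_ratio(high))  # True
```

Under this reading, "greater misalignment under referential volatility" corresponds to the two interlocutors' embedding clouds occupying subspaces of increasingly different effective dimensionality.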