Community Research and Development Information Service - CORDIS


The ease with which we speak to others may lead us to think that the cognitive and brain mechanisms at play in a conversation are rather simple. However, the speed and accuracy with which interlocutors switch back and forth between language production and comprehension suggest that a conversation is a complex process requiring these two systems to be perfectly orchestrated. Surprisingly, despite the social relevance of these speech acts, little is known about language processing in conversation. Language processing has predominantly been investigated separately for language production and language comprehension, and within the boundaries of individual brains.
The general objective of the research project entitled “Language as a joint action (LAJA)”, funded by a Marie Curie IEF fellowship (2014-2016), was to investigate the coupling between language production and comprehension during verbal communication. This objective was pursued by investigating two fundamental abilities presumably involved in the success of any verbal communication: 1) the ability to predict others’ upcoming actions and 2) the ability to monitor whether the observed action matches the predicted one. These issues were addressed as two different but related questions: Question 1) what is predicted from others’ speech and Question 2) how monitoring processes are engaged during verbal actions. In the following, the work carried out to achieve the project's objectives and the main results are concisely described.
Question 1
The goal of this part of the project was to examine how the language production system of a co-actor/listener is involved in predicting others’ to-be-generated words; specifically, whether listeners draw upon their own production system to predict others’ speech at different levels of representation (semantic, lexical, phonological). Three experiments were devoted to answering this question.
Experiment 1
The objective of Experiment 1 was to explore whether prediction processes (and their neural signatures) are modulated by the engagement of the production system. During the experiment, participants performed an object-color association task (priming paradigm) while their EEG (electroencephalogram) was recorded. Briefly, a prime word appeared on the screen (e.g., “lemon”), followed by the auditory presentation of a target color name that could match (“yellow”) or mismatch (“blue”) the color associated with the named object. In addition, within the match condition, words could be strongly associated with a color (“lemon”) or less clearly associated with a specific color (e.g., “butterfly”). In this way, prediction processes were explored at two temporal moments: time-locked to the visual presentation of the prime (“lemon/butterfly”) and time-locked to the auditory presentation of the congruent/incongruent target word (“yellow/blue”). Importantly, the experiment was divided into two blocks: one in which the participant had to speak in some trials and listen in others, and another in which the participant only had to listen. Of importance here were the listening trials, and specifically modulations of the N400 ERP component, taken in previous literature as an index of lexico-semantic prediction during language comprehension.
The results revealed that brain responses were clearly modulated by the involvement of language production in the task. Specifically, the classic and expected N400 effect appeared earlier and was more frontally distributed when participants were required to listen and speak (supporting the idea that prediction is production-based). In contrast, at the time of target processing, the results revealed that the involvement of speech production had no impact on integration processes in language comprehension.
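The N400 analysis described above rests on a standard ERP pipeline: average the single-trial EEG within each condition, then quantify the component as the mean amplitude in a fixed time window. The sketch below illustrates that logic on synthetic data; the sampling rate, window (300-500 ms), amplitudes, and trial counts are all hypothetical, not taken from the project.

```python
import math
import random

random.seed(0)

SFREQ = 500                                      # Hz (hypothetical)
TIMES = [i / SFREQ for i in range(-100, 400)]    # epoch: -200 ms to ~800 ms
N_TRIALS = 40                                    # hypothetical trial count

def make_erp(n400_amp):
    """Average synthetic single trials carrying a negative wave peaking ~400 ms."""
    erp = [0.0] * len(TIMES)
    for _ in range(N_TRIALS):
        for i, t in enumerate(TIMES):
            # Gaussian-shaped negativity plus trial noise (microvolts).
            wave = -n400_amp * math.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
            erp[i] += (wave + random.gauss(0, 2.0)) / N_TRIALS
    return erp

erp_incong = make_erp(5.0)   # e.g., prime "lemon", target "blue"
erp_cong = make_erp(1.0)     # e.g., prime "lemon", target "yellow"

def mean_amp(erp, lo=0.3, hi=0.5):
    """Mean amplitude in the 300-500 ms N400 window."""
    vals = [v for v, t in zip(erp, TIMES) if lo <= t <= hi]
    return sum(vals) / len(vals)

# More negative difference = larger N400 effect for incongruent targets.
n400_effect = mean_amp(erp_incong) - mean_amp(erp_cong)
print(f"N400 effect: {n400_effect:.2f} microvolts")
```

In practice such analyses are run with dedicated EEG software rather than hand-rolled loops, but the condition-average-then-window-mean structure is the same.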
Experiment 2
The objective of this experiment was to explore how prediction processes are modulated by the presence of an interlocutor. The experiment was the same as Experiment 1 with one difference: participants met the confederate (the person who recorded the spoken color words used in Experiment 1) at the beginning of the experiment. Participants were thus induced to believe that the confederate was performing the task jointly with them. To this end, specific instructions were given to the participant and the confederate, and we made sure that participants could hear the confederate's voice, so that during the experiment they would recognize that voice as belonging to their partner in the task.
The results revealed that interacting with others had an impact on the way we anticipate others’ upcoming speech. In addition, our results showed that interacting with others encourages prediction, as indicated by the differences between Experiments 1 and 2 during integration processes.

Experiment 3
The aim of this experiment was to identify the necessary preconditions under which shared predicted representations are formed. Specifically, we tested how task similarity modulates prediction processes. To do so, a joint picture processing task was conducted in turns between two participants. One participant was asked to respond manually when objects belonged to a specific semantic category (e.g., living things), while the other was asked to respond, also manually, only when the object’s name started with certain phonemes (e.g., vowels). This yielded stimuli to which only one participant had to respond (e.g., cat) and others to which both participants had to respond (e.g., elephant). The hypothesis was that if participants integrate the other's task, they should be affected by the other participant's response. To test this, reaction times were compared between trials where only one participant had to respond to the object and trials where both had to respond to the same object. The results revealed that participants’ responses were affected by the responses of their partners. Interestingly, this effect was modulated by the participant's own task: the other's response led to a facilitation effect only for participants performing the phonological task, not for those performing the semantic task. Within the framework of models of speech production, where semantic processing is assumed to occur earlier than phonological processing, these results suggest that task co-representation occurs only when representing the other's task does not interfere with the task at hand. This result places limits on models in the joint action literature that assume task co-representation occurs automatically between action partners.
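The key contrast in Experiment 3 is a reaction-time comparison: trials on which the partner also responds versus trials on which only the participant responds. The sketch below shows that comparison on invented single-subject data; the RT means, spread, and trial counts are illustrative assumptions, not the project's actual values.

```python
import random
import statistics

random.seed(1)

# Hypothetical RTs (ms) for a participant doing the phonological task.
# "Both respond" trials (e.g., "elephant") vs. "only self responds" (e.g., "cat").
rt_partner_also = [random.gauss(620, 60) for _ in range(50)]
rt_self_only = [random.gauss(660, 60) for _ in range(50)]

# Positive difference = facilitation when the partner also has to respond.
facilitation = statistics.mean(rt_self_only) - statistics.mean(rt_partner_also)
print(f"facilitation: {facilitation:.1f} ms")
```

The report's finding corresponds to this facilitation term being positive for the phonological-task participant and absent for the semantic-task participant.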

Question 2
The goal of this part of the project was to explore how monitoring processes are involved when an error is encountered in others’ speech.
Experiment 4
The objective of this experiment was to test whether brain signatures associated with error detection (i.e., the Error-Related Negativity, ERN) occur similarly for one's own and others’ verbal errors. To test this, a picture naming task was adapted to a social context: two partners (a participant and a confederate) were asked to name pictures in turns (according to the color in which the pictures were presented). EEG was recorded continuously from the participant throughout the experiment. Importantly, the confederate was required to produce verbal errors or unexpected responses on some trials (e.g., saying “bird” instead of “eagle”). At the end of the experiment, participants were asked to indicate whether they would have used the same name as the confederate or a different one. Responses that did not match between the participant and the confederate were considered errors, and their time course was analyzed. We thereby obtained brain responses both when the participant made an error (own errors) and when the confederate did (others’ errors).
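The trial-coding logic for "others' errors" in this design is simple: a confederate response counts as an error trial when it mismatches the name the participant later said they would have used. A minimal sketch, with entirely made-up trial records:

```python
# Hypothetical trial records: the confederate's spoken name vs. the name the
# participant reported they would have used for the same picture.
trials = [
    {"picture": "eagle", "confederate": "bird", "participant_choice": "eagle"},
    {"picture": "dog",   "confederate": "dog",  "participant_choice": "dog"},
    {"picture": "couch", "confederate": "sofa", "participant_choice": "sofa"},
]

def code_trial(trial):
    """Label a trial as an 'error' when the two names mismatch."""
    if trial["confederate"] != trial["participant_choice"]:
        return "error"
    return "correct"

labels = [code_trial(t) for t in trials]
print(labels)  # -> ['error', 'correct', 'correct']
```

Note that under this coding a response like "sofa" for a couch is not an error if the participant would have said "sofa" too; only mismatches with the participant's own preferred name enter the error-trial ERP averages.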
The results revealed that brain responses were similarly modulated for one's own and others’ errors. That is, an early negativity was observed for errors regardless of whether the error was made by the participant or by the confederate. Interestingly, only the participant's own errors elicited electrical modulations before the error was produced, supporting the existence of an internal speech monitoring mechanism that detects speech errors before they occur.

In sum, four experiments were conducted to investigate the coupling between language production and comprehension during verbal interactions. Overall, our results provide evidence on how language processing is modulated in conversation.
Despite the apparent simplicity of engaging in a conversation, many brain mechanisms must be orchestrated for interlocutors to successfully understand each other. This project was especially valuable in providing empirical evidence on language as essentially a joint action, which will be relevant not only to cognitive neuroscience but also to social neuroscience. At a more practical level, in a society that is becoming more individualistic, where social media and electronic interaction are replacing face-to-face encounters, our results are highly relevant in showing that verbal interactions shape the cognitive processes engaged by individuals in social contexts.
