
Neural Bases of Multimodal Integration in Children

Periodic Reporting for period 1 - ChildGesture (Neural Bases of Multimodal Integration in Children)

Reporting period: 2016-11-01 to 2018-10-31

[What is the problem/issue being addressed?]
Children typically learn their language in a multimodal environment, as their caregivers interact with them through a variety of modalities such as eye gaze, facial expressions and hand gestures. Hand gestures in particular are often used by caregivers to convey semantic information alongside speech, and are thus an important information medium for children to understand adults' messages. It is crucial to investigate how children process information from gesture and speech in comprehension, and to what extent children can benefit from gesture when speech is ambiguous.

[What are the overall objectives?]
Many behavioural studies have revealed that children and adults can process information from iconic gestures and speech. However, behavioural measures are off-line measures, which do not provide access to the underlying neurocognitive processing of speech and iconic gestures. Neurophysiological studies can provide more direct measures of the on-line cognitive processes underlying the comprehension of co-occurring multimodal semantic information from speech and gesture. So far, however, neurophysiological studies on speech-gesture integration have been restricted to adults. This project therefore examined the neurocognitive processing of semantic information from gesture and speech in children and adults, using both EEG and behavioural measures. Specifically, we investigated:
1. the online neurocognitive processing of gesture-speech integration in native Dutch-speaking 6-7-year-old children (Study 1);
2. to what extent gesture enhances comprehension when speech is degraded, examined with behavioural measures (Study 2) and EEG (Study 3).

[Why is it important for society?]
It is important to obtain neurobiological data on gesture-speech integration, because children receive multimodal input from their caregivers during language acquisition. Two contributions are expected from this project. On a theoretical level, the findings will elucidate the neurobiological mechanisms by which gesture and speech are processed in children and adults. On a practical level, the findings will provide useful information to parents and teachers about how to use gestures to foster children's language acquisition. The project will provide the missing link between adults' neurobiological data and children's behavioural data on the process of multimodal integration. The results will have an impact not only on developmental psychology, but also on broader scientific fields such as neurobiology, informatics, robotics, cognitive science, and pedagogy.
We conducted three experimental studies in this project. In this section, we summarise the research purpose and main results of each study. Note that data from the third study are still being analysed; for that study we therefore report the research purpose and the expected results.

First study: Neural integration of gesture and speech in children.
Using electrophysiological (EEG) measures that have previously provided evidence for adults' integration of gestures and speech, we examined the online neurocognitive processing of gesture-speech integration in native Dutch-speaking 6-7-year-old children. We focused on the N400 ERP component, known to be modulated by semantic integration load. We created short video clips in which a speaker uttered a spoken action verb while simultaneously producing an iconic gesture representing an action. Each clip had two versions, a matching and a mismatching gesture-speech combination, which manipulated the semantic integration load: in the matching condition the gesture and the verb conveyed the same information, whereas in the mismatching condition they conveyed different information. Participants were instructed to watch and listen to the clips attentively, and occasionally performed a word-monitoring task. The event-related potentials (ERPs) time-locked to speech onset showed a larger N400 amplitude in the mismatching condition than in the matching condition, as previously found for adults, and the effect showed a similar topography. This finding provides the first neural evidence that, at the ages of 6-7, children's online processing of multimodal semantic information is comparable to that of adults. This has implications for using gestures towards children in various educational and communicative settings.
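The core of this kind of ERP analysis, averaging EEG epochs time-locked to speech onset and comparing the mean amplitude in the N400 time window across conditions, can be sketched in a few lines. The sketch below is illustrative only and is not the project's actual analysis pipeline: it assumes a single channel and typical parameter values (epochs of -200 to 800 ms, a -200 to 0 ms baseline, and a 300-500 ms N400 window).

```python
import numpy as np

def n400_mean_amplitude(eeg, fs, onsets, window=(-0.2, 0.8),
                        baseline=(-0.2, 0.0), n400=(0.3, 0.5)):
    """Average single-channel EEG epochs time-locked to stimulus onsets
    and return the mean ERP amplitude in the N400 window.
    All window values are typical defaults, assumed for illustration."""
    pre = int(-window[0] * fs)          # samples before onset
    post = int(window[1] * fs)          # samples after onset
    # cut one epoch per onset (onsets are sample indices)
    epochs = np.stack([eeg[o - pre:o + post] for o in onsets])
    t = np.arange(-pre, post) / fs      # time axis relative to onset
    # baseline-correct each epoch using the pre-stimulus interval
    base = epochs[:, (t >= baseline[0]) & (t < baseline[1])]
    erp = (epochs - base.mean(axis=1, keepdims=True)).mean(axis=0)
    # mean amplitude in the N400 window (more negative = larger N400)
    return erp[(t >= n400[0]) & (t <= n400[1])].mean()
```

Computed separately for matching and mismatching trials, a more negative value in the mismatching condition would correspond to the N400 effect described above.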

Second study: The contribution of iconic gestures to degraded speech in children.
We often communicate in noisy situations such as classrooms, stations or living rooms. Research has shown that adult speakers often use hand gestures in such situations to convey their messages effectively, and that listeners can use information from gestures to understand degraded speech. However, little is known about the extent to which gestures enhance children's speech comprehension. To address this question, Dutch-speaking adults and children aged 6 and 7 years were presented with a series of video clips in which an actor produced a Dutch verb with or without an iconic gesture. The speech signal was either clear or 4-, 8-, or 10-band noise-vocoded speech. Results showed that children, like adults, can benefit from gestures to disambiguate degraded speech, but their performance did not yet reach adult level. For adults, the enhancement effect of gesture was greater in the 4-band condition than in the 8-band condition, whereas children showed the opposite pattern. We argue that as children develop their language and cognitive skills, they become better able to process degraded speech and to use gesture information to disambiguate it.
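Noise-vocoded speech of the kind described here divides the signal into frequency bands and replaces the fine structure in each band with envelope-modulated noise, so fewer bands means more degraded speech. The routine below is a generic sketch of band noise vocoding, not the project's actual stimulus-generation code; the band edges, filter orders, and 30 Hz envelope cutoff are common choices assumed for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_bands, f_lo=100.0, f_hi=7000.0, env_cutoff=30.0):
    """Noise-vocode a speech signal: keep only the slow amplitude envelope
    in each of n_bands log-spaced bands, carried by band-limited noise.
    Parameter values are common choices, assumed for illustration."""
    speech = np.asarray(speech, dtype=float)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(speech))
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band_speech = sosfiltfilt(band_sos, speech)
        band_noise = sosfiltfilt(band_sos, noise)
        # envelope: rectify, then low-pass below the envelope cutoff
        env = np.clip(sosfiltfilt(env_sos, np.abs(band_speech)), 0.0, None)
        out += env * band_noise
    # match overall RMS level to the original signal
    out *= np.sqrt(np.mean(speech ** 2) / np.mean(out ** 2))
    return out
```

With 4 bands the output is barely intelligible without context, while 8 or more bands restore much of the intelligibility, which is what makes this manipulation useful for graded-degradation designs like the one above.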

Third study: Neurophysiological evidence of gesture enhancement effect on degraded speech in children and adults.
Study 1 found that children aged 6 and 7 years can integrate gesture and speech. However, the stimuli used in that study had limited ecological validity, in the sense that we do not come across mismatching gesture-speech expressions in daily life. It is not clear whether children would show a similar brain response to stimuli with higher ecological validity. Study 2 found that children's accuracy scores with gestures were lower than adults', but that children showed a similar gesture enhancement effect for 8-band noise-vocoded speech. This third study therefore aimed to examine, by measuring EEG, whether children and adults show similar neural processing of gesture and degraded speech.
It is important to conduct these studies to obtain neurobiological data on gesture-speech integration, given that children receive multimodal input from their caregivers during language acquisition. Two contributions were obtained from this project. On a theoretical level, the findings elucidated the neurobiological mechanisms by which gesture and speech are processed in children and adults. On a practical level, the findings provide useful information to parents and teachers about how to use gestures to foster children's language acquisition. This project provided data that supply the missing link between adults' neurobiological data and children's behavioural data on the process of multimodal integration. The results have an impact not only on developmental psychology, but also on broader scientific fields such as neurobiology, informatics, robotics, cognitive science, and pedagogy.
[Image: advertisement of our study for parents]