Neural Bases of Multimodal Integration in Children

How can we grasp what children understand?

An EU-funded project set out to discover whether and how children comprehend and process the gestures and speech a speaker addresses to them.

Children learn language in a multimodal environment, as their caregivers interact with them through several modalities such as eye gaze, gesture and speech. Given that gestures often carry information relevant to the accompanying speech, they are an important medium through which children understand speakers’ messages.

The task at hand

This insight led to ChildGesture, a research project undertaken with the support of the Marie Skłodowska-Curie programme. It combined the efforts of Dr Kazuki Sekine, a postdoctoral Marie Skłodowska-Curie fellow, Professor Asli Ozyurek, project coordinator, and the Multimodal Language and Cognition lab. ChildGesture investigated whether children are able to understand and process gestures and speech together, and how they do so. Prof. Ozyurek explains: “We aimed to answer this not only through children’s behavioural responses but also by examining, for the first time, their neurocognitive processing of semantic information from gesture and speech using neuroimaging techniques such as electroencephalography (EEG).” While adult brains show a well-established response to semantic mismatches, known as the N400 effect, “there is a lack of understanding on brain signatures for multimodal semantic integration for children,” adds Prof. Ozyurek. The project also sought to uncover, behaviourally, whether and how children benefit from gestures when disambiguating noisy speech.

Key discoveries

First, 6- to 7-year-old Dutch children were presented with clearly spoken individual action words together with matching or mismatching iconic gestures while their EEG was recorded. Comparing mismatching with matching conditions revealed an N400 effect, showing that children integrate multimodal semantic information at the neural level just as adults do (a minimal sketch of this kind of difference-wave analysis appears at the end of this article). “In a follow-up experiment we asked if and how children benefit from iconic gestures in disambiguating noisy speech compared to adults,” explains Prof. Ozyurek. Participants were presented with action words at different noise levels in three conditions (speech only, speech plus gesture, and visual only) and were asked to say what they heard. Accuracy results showed that adults outperformed children in the degraded speech-only and visual-only conditions. But in the speech plus gesture condition, children’s comprehension of degraded speech reached adult levels. “Thus, in adverse listening conditions children ‘need’ multimodal input to reach adult levels of unimodal speech comprehension,” reports Prof. Ozyurek. Furthermore, both adults and children were faster to speak their responses in the multimodal condition. Gestures might thus provide a link between the comprehension and production systems.

Next steps

The project is currently recording EEGs from children listening to degraded speech with or without gestures. “This will give us insight into where and how a child’s brain is adult-like in combining multimodal signals and also how they maintain unimodal or multimodal information in their memory after watching the videos,” explains Prof. Ozyurek. Further, ChildGesture plans to replicate its studies with children with cochlear implants, who require more visual input than children with typical hearing, especially in noisy contexts. Another next step is to conduct the study with younger children, to see whether multimodal integration is a developing trait or an inborn bias of the brain. Finally, the team would also like to replicate the study with bilingual children. “More information on our studies can be found on the website Brain Imaging of Multimodal Communication in Development,” adds Prof. Ozyurek.
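
For readers who want to see what the N400 comparison described above looks like in practice, below is a minimal, illustrative sketch of a difference-wave analysis using the open-source MNE-Python library. The file name, event labels (“match”, “mismatch”) and electrode choices are hypothetical placeholders, assuming an already preprocessed, epoched EEG recording; this is a sketch of the general technique, not the project’s actual analysis pipeline.

```python
import mne

# Illustrative sketch only: the file name, event labels and channels below
# are hypothetical placeholders, not the ChildGesture pipeline.
epochs = mne.read_epochs("child_gesture_speech-epo.fif")

# Average trials within each condition to obtain event-related potentials.
evoked_match = epochs["match"].average()
evoked_mismatch = epochs["mismatch"].average()

# The N400 effect is the difference wave: mismatching minus matching
# speech-gesture pairs.
n400 = mne.combine_evoked([evoked_mismatch, evoked_match], weights=[1, -1])

# Quantify it as the mean amplitude in a typical N400 window (300-600 ms)
# over centro-parietal electrodes, where the effect is usually largest.
n400.pick(["Cz", "CPz", "Pz"]).crop(tmin=0.3, tmax=0.6)
mean_uv = n400.data.mean() * 1e6  # volts -> microvolts

# A reliably negative value is the classic N400 signature.
print(f"Mean N400 difference (mismatch - match): {mean_uv:.2f} µV")
```

The mismatch-minus-match subtraction isolates activity tied to semantic integration; a more negative deflection around 400 ms is the signature long reported for adults and, per the project’s findings, present in children as well.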

Keywords

ChildGesture, children, gesture, speech, multimodal, EEG, language, neurocognitive processing, electroencephalography, unimodal
