Children typically learn language in a multimodal environment as their caregivers interact with them through a variety of modalities. In particular, caregivers often use hand gestures alongside speech to convey semantic information, making gestures an important medium through which children understand speakers’ messages (McNeill, 1992). Using behavioural measures, previous studies have shown that children and adults can comprehend and integrate information from iconic gestures and speech. For example, the Researcher found that 3-year-olds have difficulty integrating information from speech and co-speech gestures, whereas 5-year-olds perform similarly to adults. However, behavioural measures do not provide access to the neurocognitive processing underlying the comprehension of speech and iconic gestures, whereas neuroimaging techniques offer more direct measures of the online cognitive processes underlying the comprehension of co-occurring multimodal semantic information. This project therefore examines the neurocognitive processing of semantic information from gesture and speech in children and adults using neuroimaging techniques, namely EEG and fMRI. Study 1 investigates whether iconic gesture and speech are processed independently or bidirectionally in 3- and 5-year-olds and in adults. Study 2 examines which brain areas are recruited for processing speech and gesture in children and adults. Two contributions are expected from this project. A practical contribution will be to inform caregivers and teachers about how they can use gestures to foster children’s language acquisition. A theoretical contribution will be to provide neurobiological data on how gesture and speech are processed in children and adults. Through this project, the Researcher will gain expertise in neuroimaging research, and his expertise in children’s communication research will in turn contribute to the Host institution.