
Defining an integrated model of the neural processing of speech in light of its multiscale dynamics


This interdisciplinary project will define an integrated model of speech processing by recording, modelling and manipulating neural oscillatory dynamics during the perception of speech, treated as a multiscale temporal signal. Dominant models of speech perception describe its underlying neural mechanisms at a static neuroanatomical level, neglecting the cognitive algorithmic and neural dynamic levels of description. The latter can only be investigated by considering the temporal dimension of speech, which is structured according to a hierarchy of linguistic timescales (phoneme, syllable, word, phrase). Recent advances in behavioural paradigms, computational modelling and neuroimaging data analysis now make it possible to investigate the cognitive algorithms and neural dynamics underpinning the processing of speech.

To define an integrated model of speech perception, this project seeks to:

1. record neural activity in humans with magnetoencephalography and intracranial recordings during the perception of continuous speech;
2. quantify the linguistic information at each timescale of speech with a computational model; and
3. estimate their respective and shared neural correlates with multivariate and directed connectivity analyses.

Feasibility is ensured by in-house access to neuroimaging and intracranial recordings, as exemplified by the data in Figure 1 of this proposal. This project will critically test whether neural oscillations play a fundamental role in the computational processes of perception and cognition. It will define the mapping between speech and neural timescales and reveal how information is transferred and combined along the linguistic computational processing hierarchy. Overall, it will specify, in terms of the nature of the information processed and of the dynamical hierarchical organization, the respective contributions of the left and right hemispheric ventral and dorsal auditory pathways to speech processing.
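A common way to quantify whether neural activity tracks a given linguistic timescale is spectral coherence between the speech envelope and a neural signal. The sketch below is only an illustration of that general idea on synthetic data, not the project's actual pipeline; the sampling rate, syllable rate (~4 Hz), lag and noise levels are all assumed for the example.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical illustration (not the project's method): speech-brain
# coherence at the syllable timescale (~4 Hz), one way to quantify how
# neural activity tracks a single linguistic timescale of speech.
fs = 200.0                      # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)    # 60 s of synthetic data
rng = np.random.default_rng(0)

# Synthetic speech envelope dominated by a 4 Hz (syllable-rate) rhythm.
envelope = np.sin(2 * np.pi * 4.0 * t) + 0.3 * rng.standard_normal(t.size)

# Synthetic "neural" signal: tracks the envelope with a lag, plus noise.
lag = int(0.05 * fs)            # 50 ms latency, assumed
neural = np.roll(envelope, lag) + 1.0 * rng.standard_normal(t.size)

# Magnitude-squared coherence between envelope and neural signal.
f, cxy = coherence(envelope, neural, fs=fs, nperseg=1024)

peak_freq = f[np.argmax(cxy)]
print(f"peak coherence at {peak_freq:.1f} Hz")  # expect a peak near 4 Hz
```

In a real analysis the envelope would come from recorded speech and the neural signal from MEG or intracranial channels, with coherence (or directed variants such as Granger-type measures) computed per channel and timescale.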


Net EU contribution
€ 1 861 100,00
Rue de Tolbiac 101
75654 Paris


Ile-de-France, Paris
Activity type
Research Organisations
Other funding
€ 0,00