
Oscillatory Rhythmic Entrainment and the Foundations of Language Acquisition

Periodic Reporting for period 4 - BABYRHYTHM (Oscillatory Rhythmic Entrainment and the Foundations of Language Acquisition)

Reporting period: 2021-03-01 to 2022-12-31

Language lies at the heart of our experience as humans, and disorders of language acquisition carry severe developmental costs. In the BabyRhythm project, we have been addressing the issue of how best to predict child language outcomes using neural measures taken pre-verbally in infancy. This is important for society because half of "late talkers", infants who are not yet speaking by 2 years of age, will go on to develop language impairments; the other half will not. Currently, we have no reliable means of identifying the late talkers who are at risk for developmental language disorders. Brain imaging offers potentially reliable markers of individual differences, enabling measurement of factors that operate automatically as part of natural speech processing mechanisms.

The BabyRhythm project used infant brain imaging (EEG) and psycho-acoustic measures to generate robust early neural and behavioral predictors of vocabulary, phonological and grammatical development. Across 8 brain imaging sessions during the first year of life, a series of neural markers of auditory, visual and motor responses to rhythmic language were collected from 113 infants, and we also measured the temporal precision of infants' rhythmic movements. From 12 months to 3.5 years, we measured a range of language outcomes. These data have enabled us to identify the most robust predictors of later language outcomes in the domains of gesture, vocabulary and phonology. The key predictors for continuous speech were the accuracy of cortical tracking of the speech signal and some specific cortical dynamics. The key predictors for rhythmic speech ("ba...ba....ba...") were measures of timing synchronisation. "Preferred phase" refers to the tendency for neural oscillations to occur around a particular angle (timepoint) relative to the rhythmic event against which they are measured (e.g. a rhythmically repeated sound). We found that the mean phase angle of the infant brain's response, to both audio-visual and visual-only speech, predicted later language outcomes.
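
To make the "preferred phase" measure concrete, the sketch below shows how a mean phase angle and resultant vector length can be computed from single-trial phase estimates. This is a minimal illustration, not the project's analysis pipeline; the variable names and the simulated distribution of example phases are assumptions.

```python
import numpy as np

def preferred_phase(trial_phases):
    """Circular mean phase angle and resultant vector length.

    trial_phases: per-trial phase angles (radians) of the EEG response
    at the stimulus rate, e.g. extracted via FFT or Hilbert transform.
    """
    # Represent each phase as a unit vector on the circle and average.
    mean_vector = np.mean(np.exp(1j * trial_phases))
    phase_angle = np.angle(mean_vector)   # the "preferred phase"
    vector_length = np.abs(mean_vector)   # 0 = no consistency, 1 = perfect
    return phase_angle, vector_length

# Hypothetical example: phases clustered near pi/4 yield a long vector.
rng = np.random.default_rng(0)
phases = rng.normal(loc=np.pi / 4, scale=0.3, size=200)
angle, length = preferred_phase(phases)
print(f"preferred phase = {angle:.2f} rad, vector length = {length:.2f}")
```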

The overall objective was to generate a coherent, theoretically driven dataset of cross-modal developmental neural and behavioral measures that would be applicable across European languages. This objective was met: all the neural predictors that we identified can be measured for any language.
Since the project began, we recruited a cohort of 122 infants, of whom 113 were retained five years later when language outcomes were analysed. We took electrophysiology (EEG) recordings at 2, 4, 5, 6, 7, 8, 9 and 11 months while infants listened to continuous speech (nursery rhymes) and other rhythms (syllable repetition, "ta..ta..ta", and a drumbeat).

Home visits began at 12 months (with visits at 12, 15, 18, 24 and 30 months), measuring vocabulary, phonology and grammar outcomes. Following the pandemic lockdown (COVID-19 restrictions began in the UK in March 2020), we converted the home visits to remote data collection via Zoom. This was successful for the vocabulary and phonology outcome measures, although less so for the grammar outcome measures. We also added a Zoom phonology session (recognition of rhymes) at 42 months.

A total of 113 infants completed both the brain imaging components and the language follow-up sessions during the project, with the last test session in May 2021. The intensive data collection protocols and the longitudinal research design meant that a period of scoring these data followed, so report writing commenced in late 2021. Given the disruption caused by COVID-19, the project was granted a no-cost extension to December 2022. By December 2022, we had 4 papers published, 6 papers in revision, and 4 more papers in preparation.

Attaheri, A., et al. (2022). Cortical tracking of sung speech in adults vs infants: A developmental analysis. Frontiers in Neuroscience, 16, 842447. https://doi.org/10.3389/fnins.2022.842447

Ní Choisdealbha, Á., et al. (2022). Neural detection of changes in amplitude rise time in infancy. Developmental Cognitive Neuroscience, 54, 101075. https://doi.org/10.1016/j.dcn.2022.101075

Attaheri, A., et al. (2022). Delta- and theta-band cortical tracking and phase-amplitude coupling to sung speech by infants. NeuroImage, 247, 118698. https://doi.org/10.1016/j.neuroimage.2021.118698

Gibbon, S., et al. (2021). Machine learning accurately classifies neural responses to rhythmic speech vs. non-speech from 8-week-old infant EEG. Brain and Language, 220, 104968. https://doi.org/10.1016/j.bandl.2021.104968

Rocha, S., et al. (2022). Infant sensorimotor synchronisation to speech and non-speech rhythms: A longitudinal study. PsyArXiv. https://doi.org/10.31234/osf.io/jbrga

Ní Choisdealbha, Á., et al. (2022). Cortical oscillations in pre-verbal infants track rhythmic speech and non-speech stimuli. PsyArXiv. https://doi.org/10.31234/osf.io/vjmf6

Rocha, S., et al. (2022). Language acquisition in the longitudinal BabyRhythm cohort. PsyArXiv. https://doi.org/10.31234/osf.io/28c35

Ní Choisdealbha, Á., et al. (2022). Oscillatory timing of neural responses to rhythm from 2 months linked to individual differences in language from 12 to 24 months. PsyArXiv. https://doi.org/10.31234/osf.io/kdezm

Attaheri, A., et al. (2022). Infant low-frequency EEG cortical power, cortical tracking and phase-amplitude coupling predicts language a year later. bioRxiv. https://doi.org/10.1101/2022.11.02.514963

Ní Choisdealbha, Á., et al. (2022). Cortical tracking of visual rhythmic speech by 5- and 8-month-old infants: Individual differences in phase angle relate to language outcomes up to 2 years. PsyArXiv. https://doi.org/10.31234/osf.io/ukqty

We conducted two kinds of data analysis of the infant electrophysiological brain responses that go beyond what was state of the art when the proposal was funded in 2016.

1. We used machine learning (AI) to investigate whether the rhythmic input heard by the infant brain can be classified as speech versus non-speech from the EEG signal alone at 2 months of age, using SVM and CNN approaches (fig. 1, CNN results). We achieved classification with over 87% accuracy (Gibbon et al., 2021, https://doi.org/10.1016/j.bandl.2021.104968). This is valuable as it suggests that our methods for identifying neural biomarkers were well chosen.
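
For readers unfamiliar with this kind of decoding analysis, the sketch below shows the general shape of an SVM classification of EEG epochs as speech versus non-speech. It is an illustration only: the feature representation (one vector per epoch), the data shapes and the classifier settings are assumptions, not the published pipeline of Gibbon et al. (2021).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: one feature vector per infant EEG epoch, e.g.
# low-frequency band power per channel; label 1 = speech, 0 = drumbeat.
rng = np.random.default_rng(0)
n_epochs, n_features = 400, 64
X = rng.normal(size=(n_epochs, n_features))
y = rng.integers(0, 2, size=n_epochs)

# Standardise features, then fit an RBF-kernel support vector machine.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Cross-validated classification accuracy (chance = 0.5 here, since
# the example labels are random).
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```

Cross-validation of this kind is the standard safeguard against overfitting when reporting decoding accuracies such as the 87% figure above.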

2. We also pioneered the use of phase (rhythmic timing) analyses for infant neural responses to speech. These analyses have generated perhaps the most exciting outcomes of the project, as no other research group in the world has achieved this. We were able to show that differences in the timing of the brain response to rhythmic language were associated with quite large differences in vocabulary outcomes (as large as 300 more known words). As this is a completely automatic neural mechanism over which there is no conscious control, these neural phase measures may be able to predict which infants are at risk of poorer language outcomes.
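
Because phase angle is a circular variable, relating it to a linear outcome such as vocabulary size calls for circular-linear statistics. The sketch below implements Mardia's circular-linear correlation coefficient as one standard way of quantifying such a relationship; the simulated data and effect size are illustrative assumptions, not project results.

```python
import numpy as np

def circ_linear_corr(theta, x):
    """Mardia's circular-linear correlation between phase angles
    theta (radians) and a linear variable x (e.g. vocabulary score)."""
    rcx = np.corrcoef(np.cos(theta), x)[0, 1]   # cos(theta) vs x
    rsx = np.corrcoef(np.sin(theta), x)[0, 1]   # sin(theta) vs x
    rcs = np.corrcoef(np.cos(theta), np.sin(theta))[0, 1]
    return np.sqrt((rcx**2 + rsx**2 - 2 * rcx * rsx * rcs) / (1 - rcs**2))

# Hypothetical example: vocabulary varies with the cosine of phase angle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=100)           # infant phase angles
vocab = 300 * np.cos(theta) + rng.normal(0, 50, 100)  # later vocabulary
print(f"circular-linear r = {circ_linear_corr(theta, vocab):.2f}")
```
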
[Figures: (1) Circular plots showing individual mean phase angles and vector length in blue and group means in red. (2) CNN results. (3) Cortical tracking of auditory rhythm across the first year: An EEG study. (4) Significant relationships between phase angle or vector length in visual speech or AV-V analyses, an…]