CORDIS - EU research results

Signal processing and Learning Applied to Brain data

Periodic Reporting for period 4 - SLAB (Signal processing and Learning Applied to Brain data)

Reporting period: 2020-04-01 to 2021-08-31

Understanding how the brain works in healthy and pathological conditions is considered one of the challenges of the 21st century. After the first electroencephalography (EEG) measurements in 1929, the 1990s saw the birth of modern functional brain imaging, with the first functional MRI (fMRI) and full-head magnetoencephalography (MEG) systems. By offering unique noninvasive insights into the living brain, imaging has revolutionized both clinical and cognitive neuroscience over the last twenty years.
After these pioneering breakthroughs in physics and engineering, the field of neuroscience faces two major challenges. First, dataset sizes keep growing, with ambitious projects such as the Human Connectome Project (HCP) releasing terabytes of data. Second, answers to current neuroscience questions are limited by the complexity of the observed signals: non-stationarity, high noise levels, heterogeneity of sensors, and the lack of accurate signal models.
SLAB contributed to the development of the next generation of statistical models and algorithms for mining electrophysiology signals, which offer a unique way to image the brain at a millisecond time scale. SLAB addressed topics such as non-invasive brain imaging from M/EEG signals, unsupervised learning from multivariate M/EEG time series, and the fusion of multimodal neuroimaging data. Importantly, SLAB contributed to major open source packages for processing M/EEG signals and for performing statistical learning on such data.
In SLAB, we developed dedicated machine learning and statistical signal processing methods and fostered the emergence of new challenges for these fields, focusing on five open problems:

1) source localization with M/EEG for brain imaging at high temporal resolution: we developed state-of-the-art optimization methods that exploit the sparsity of solutions. The intuition is that few brain regions are engaged at a given millisecond in one cognitive task. Active set techniques enabled significant speed-ups. New models taking heteroscedastic noise into account were proposed and, importantly, automatic model selection procedures were developed and made available to the neuroscience community via the MNE-Python software.
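The sparsity principle above can be illustrated with a minimal, self-contained sketch: a sparsity-promoting (L1-penalized) regression recovers a few active sources from a simulated gain matrix. This is not the project's actual solver (MNE-Python ships dedicated mixed-norm solvers); the matrix sizes and coefficients are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
G = rng.standard_normal((50, 200))      # simulated gain matrix: 50 sensors, 200 candidate sources
x_true = np.zeros(200)
x_true[[10, 90]] = [3.0, -2.0]          # only two sources active, mimicking a focal response
m = G @ x_true + 0.01 * rng.standard_normal(50)   # simulated sensor measurements

# The L1 penalty promotes focal (sparse) source estimates
lasso = Lasso(alpha=0.1, fit_intercept=False)
lasso.fit(G, m)
support = np.flatnonzero(np.abs(lasso.coef_) > 0.5)
print(support)
```

Here the estimated support concentrates on the two simulated sources; active set strategies speed such solvers up by iterating only over the few coordinates likely to be nonzero.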

2) unsupervised representation learning from multivariate (M/EEG) signals: state-of-the-art distributed algorithms for convolutional sparse coding (CSC) were proposed. While this technique is only beginning to attract interest in the neuroscience community, we also tackled the acceleration of widely used mining techniques, namely Independent Component Analysis (ICA), for which we proposed two major contributions that accelerate the inference of independent sources from EEG signals: the PICARD algorithm and the QNDIAG algorithm for joint diagonalization.
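To make the ICA problem concrete, here is a self-contained sketch using scikit-learn's FastICA on synthetic mixtures (the sources and mixing matrix are made up; the project's accelerated solver lives in the separate `picard` package, not shown here):

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))             # square wave: one independent non-Gaussian source
s2 = np.sin(7 * t)                      # sinusoid: another independent source
S = np.c_[s1, s2]

A = np.array([[1.0, 0.5],
              [0.4, 1.0]])              # unknown mixing (sensor gains)
X = S @ A.T                             # observed "sensor" signals

ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
S_hat = ica.fit_transform(X)

# each recovered component should correlate strongly with one true source
corr = np.abs(np.corrcoef(S_hat.T, S.T))[:2, 2:]
print(corr.max(axis=1))
```

ICA solvers spend most of their time in such iterative unmixing; PICARD accelerates this inference with quasi-Newton updates, and QNDIAG does the same for joint diagonalization.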

3) fusion of heterogeneous neuroimaging data: we explored the use of predictive stacking models to evaluate cognitive decline from combined anatomical MRI, functional MRI and MEG. This work, built on the concept of "brain age", was published in eLife and subsequently led to a public and reproducible benchmark on multiple EEG and MEG population datasets.
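The stacking idea, one first-level model per modality combined by a meta-learner, can be sketched with scikit-learn on synthetic data (the features, target, and model choices below are illustrative stand-ins, not the published pipeline):

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 300
# synthetic stand-ins for per-modality features (anatomical MRI, fMRI, MEG)
anat = rng.standard_normal((n, 5))
fmri = rng.standard_normal((n, 5))
meg = rng.standard_normal((n, 5))
age = 50 + 5 * anat[:, 0] + 3 * fmri[:, 0] + 2 * meg[:, 0] + rng.standard_normal(n)
X = np.hstack([anat, fmri, meg])

def modality_model(cols):
    # first-level learner restricted to one modality's columns
    return make_pipeline(
        ColumnTransformer([("sel", "passthrough", cols)]),
        RidgeCV(),
    )

stack = StackingRegressor(
    estimators=[
        ("anat", modality_model(slice(0, 5))),
        ("fmri", modality_model(slice(5, 10))),
        ("meg", modality_model(slice(10, 15))),
    ],
    final_estimator=RidgeCV(),   # meta-learner combines per-modality predictions
)
score = stack.fit(X[:200], age[:200]).score(X[200:], age[200:])
print(round(score, 2))
```

The meta-learner sees only each modality's out-of-fold predictions, which is what lets stacking weight modalities by their predictive value rather than their raw dimensionality.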

4) modelling of non-stationary spectral interactions to identify functional coupling between neural ensembles: to address this timely issue we proposed Driven Autoregressive (DAR) models, which offer more statistical power (they detect coupling from shorter signals) and come with a built-in automatic model selection procedure.
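The project's Pactools package provides the actual DAR implementation; the following self-contained NumPy sketch only illustrates the core idea on a toy first-order model, where an AR coefficient is linearly modulated by a slow driver (the driver, coefficients, and modulation form are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
x = np.sin(2 * np.pi * np.arange(T) / 100.0)   # slow driver (e.g. a low-frequency rhythm)

# simulate y with an AR(1) coefficient modulated by the driver:
# y_t = (0.5 + 0.3 * x_t) * y_{t-1} + noise
y = np.zeros(T)
eps = rng.standard_normal(T)
for t in range(1, T):
    y[t] = (0.5 + 0.3 * x[t]) * y[t - 1] + eps[t]

# DAR-style fit: regress y_t on [y_{t-1}, x_t * y_{t-1}] by least squares
Z = np.c_[y[:-1], x[1:] * y[:-1]]
coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
print(coef)   # should be close to the true values (0.5, 0.3)
```

A significantly nonzero driver term (`coef[1]`) is evidence of coupling between the slow driver and the fast signal's dynamics; the full DAR framework extends this to higher AR orders and polynomial driver expansions, with model orders chosen automatically.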

5) development of algorithms tractable on large datasets and easy to use by non-experts: we contributed to and extended numerous open-source Python packages, including scikit-learn and MNE-Python, as well as the new Pactools, Picard and Celer packages.

SLAB has strengthened the mathematical and computational foundations of neuroimaging data analysis. The methods developed have applications across fields (e.g. computational biology, astronomy, econometrics). Yet, the primary users of the technologies developed are cognitive and clinical neuroscientists.
The technologies developed during the SLAB project advanced the state of the art in three ways. First, computationally more efficient algorithms were proposed, reducing the computational burden for end users. This line of work also enabled the processing of large-scale datasets, leading to improved statistical power. Second, SLAB contributed on the modeling side, introducing novel statistical models that are more robust to noise and non-stationarities while offering strong mathematical guarantees, for example for parameter selection and quantitative evaluation. Finally, the SLAB project contributed heavily to open source packages widely used by applied data scientists, but also by neuroscience labs processing MEG or EEG data to address clinical or cognitive neuroscience questions.