Signal processing and Learning Applied to Brain data

Periodic Reporting for period 3 - SLAB (Signal processing and Learning Applied to Brain data)

Reporting period: 2018-10-01 to 2020-03-31

Understanding how the brain works in healthy and pathological conditions is considered one of the major challenges of the 21st century. After the first electroencephalography (EEG) measurements in 1929, the 1990s saw the birth of modern functional brain imaging, with the first functional MRI (fMRI) and full-head magnetoencephalography (MEG) systems. By offering unique noninvasive insights into the living brain, imaging has revolutionized both clinical and cognitive neuroscience over the last twenty years.
After pioneering breakthroughs in physics and engineering, the field of neuroscience now faces two major challenges. First, dataset sizes keep growing, with ambitious projects such as the Human Connectome Project (HCP) releasing terabytes of data. Second, answers to current neuroscience questions are limited by the complexity of the observed signals: non-stationarity, high noise levels, heterogeneity of sensors, and the lack of accurate signal models.
SLAB contributes to the development of the next generation of statistical models and algorithms for mining electrophysiology signals, which offer unique ways to image the brain at a millisecond time scale. In SLAB, we develop dedicated machine learning and statistical signal processing methods, fostering the emergence of new challenges for these fields while focusing on five open problems:

1) source localization with M/EEG for brain imaging at high temporal resolution, for which we develop fast optimization methods and dedicated models that can cope with complex noise;

2) representation learning from multivariate (M/EEG) signals using convolutional sparse coding (CSC) to reveal new insights into the morphology of brain signals;

3) fusion of heterogeneous, invasive and non-invasive, electromagnetic sensors to improve spatiotemporal resolution;

4) modelling of non-stationary spectral interactions to identify functional coupling between neural ensembles;

5) development of algorithms tractable on large datasets and easy to use by non-experts, namely in the scikit-learn and MNE-Python software packages.

SLAB strengthens the mathematical and computational foundations of neuroimaging data analysis. The methods developed have applications across fields (e.g. computational biology, astronomy, econometrics). Yet, the primary users of the technologies developed are cognitive and clinical neuroscientists.
So far we have made progress on several aspects of the project. In particular, we have significantly accelerated the optimization of so-called sparse models, which are of practical interest as they naturally select the few variables that explain the data. This was achieved through convex analysis theory and a deep understanding of computer implementation details.
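To give a flavor of how convex analysis yields sparsity, the sketch below solves a Lasso problem with plain proximal gradient descent (ISTA), where the soft-thresholding step is the proximal operator of the l1 penalty. This is a minimal illustration in NumPy, not the project's actual accelerated solvers; the synthetic data and parameters are chosen only for the demonstration.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrinks values toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_lasso(X, y, lam, n_iter=500):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by proximal gradient (ISTA)."""
    n_samples, n_features = X.shape
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient
    w = np.zeros(n_features)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - grad / L, lam / L)
    return w

# Synthetic regression where only 3 of 50 variables matter.
rng = np.random.RandomState(0)
X = rng.randn(100, 50)
w_true = np.zeros(50)
w_true[[3, 10, 27]] = [2.0, -3.0, 1.5]
y = X @ w_true + 0.01 * rng.randn(100)

w_hat = ista_lasso(X, y, lam=1.0)
print(np.flatnonzero(np.abs(w_hat) > 1e-3))  # the few selected variables
```

The soft-thresholding step is what drives most coefficients exactly to zero, which is why sparse models "naturally select a few variables" as described above.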

In addition, we have proposed novel non-stationary signal models able to capture an interesting phenomenon in neural signals known as phase-amplitude coupling. This highly interdisciplinary work has now been published and is starting to be adopted in neuroscience studies.
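Phase-amplitude coupling means the amplitude of a fast oscillation is modulated by the phase of a slow one. The sketch below illustrates a classical way to quantify it (the mean-vector-length modulation index, not the model-based approach developed in the project): band-pass the signal, extract slow-band phase and fast-band amplitude via the analytic signal, and average their product. All filtering here is a crude FFT mask, for illustration only.

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Crude FFT-domain band-pass filter (illustration only)."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec = np.fft.rfft(x)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, len(x))

def analytic(x):
    """Analytic signal via the FFT (a Hilbert transform), for even-length x."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def modulation_index(x, fs, slow=(4, 8), fast=(70, 90)):
    """Mean-vector-length index: large when fast-band amplitude follows slow-band phase."""
    phase = np.angle(analytic(bandpass_fft(x, fs, *slow)))
    amp = np.abs(analytic(bandpass_fft(x, fs, *fast)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# A 6 Hz rhythm whose phase modulates the amplitude of an 80 Hz oscillation.
fs = 1000.0
t = np.arange(0, 10, 1.0 / fs)
slow_wave = np.sin(2 * np.pi * 6 * t)
coupled = slow_wave + (1 + slow_wave) * np.sin(2 * np.pi * 80 * t)
uncoupled = slow_wave + np.sin(2 * np.pi * 80 * t)

print(modulation_index(coupled, fs) > modulation_index(uncoupled, fs))  # True
```

The coupled signal scores a much higher index because its 80 Hz amplitude rises and falls with the 6 Hz phase, whereas in the uncoupled signal the fast amplitude is constant.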

Another important contribution aims at learning, directly from the data, the shape of the signal waveforms produced by neural ensembles. Using a technique known as convolutional sparse coding (CSC), we can avoid relying on fixed signal bases such as Fourier or wavelets. There is presently vast interest in the neuroscience community in learning and quantifying the waveforms produced by various neural populations, as they are believed to be signatures or biomarkers of certain cognitive impairments.
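The core idea of CSC is to represent a signal as a few time-shifted, scaled occurrences of short waveforms (atoms). The sketch below shows only the sparse-coding half of the problem with a single known atom, solved by greedy convolutional matching pursuit; it is a toy illustration, not the project's actual CSC algorithms, which also learn the atoms themselves.

```python
import numpy as np

def conv_matching_pursuit(signal, atom, n_events):
    """Greedy convolutional sparse coding with one atom: repeatedly find
    the shift where the atom best correlates with the residual and
    subtract the scaled atom there."""
    atom = atom / np.linalg.norm(atom)
    residual = signal.copy()
    k = len(atom)
    activations = []  # (shift, amplitude) pairs
    for _ in range(n_events):
        corr = np.correlate(residual, atom, mode="valid")
        shift = int(np.argmax(np.abs(corr)))
        amp = corr[shift]
        residual[shift:shift + k] -= amp * atom
        activations.append((shift, amp))
    return activations, residual

# A toy oscillatory "waveform" placed at three locations in noise.
rng = np.random.RandomState(42)
atom = np.hanning(50) * np.sin(np.linspace(0, 6 * np.pi, 50))
atom /= np.linalg.norm(atom)
signal = 0.05 * rng.randn(1000)
for shift, amp in [(100, 2.0), (400, -1.5), (700, 3.0)]:
    signal[shift:shift + 50] += amp * atom

acts, res = conv_matching_pursuit(signal, atom, n_events=3)
print(sorted(s for s, _ in acts))  # recovered event locations
```

Because the atom is an arbitrary learned shape rather than a sinusoid or wavelet, the same machinery can capture the asymmetric, non-sinusoidal waveforms seen in real neural recordings.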

Finally, we have also developed a number of software tools that have had a massive impact in the machine learning and neuroscience communities. These packages are scikit-learn, used by about half a million data scientists, and MNE-Python, presently used by several startups and by dozens of labs around the world.
By the end of the project we aim to make further progress on algorithm scalability by offering a distributed implementation for the convolutional sparse coding problem. We also aim to leverage invasive recordings to further improve the spatiotemporal resolution of source localization techniques based on sparse regularization. Finally, we aim to transfer the results of the first period of the project into the scikit-learn and MNE-Python software to increase the project's impact.