
Hyperscanning 2.0 Analyses of Multimodal Neuroimaging Data: Concept, Methods and Applications

Final Report Summary - HYPERSCANNING 2.0 (Hyperscanning 2.0 Analyses of Multimodal Neuroimaging Data: Concept, Methods and Applications)

Hyperscanning 2.0 is a novel paradigm for analyzing neuroimaging data, which optimally extracts common brain activity from independent but temporally synchronized measurements. Typically, such measurements are obtained by exposing one or more participants to the same experimental stimulus, either concurrently or one after the other. Any correlation between the datasets must then relate to the common stimulus. This approach makes it possible to investigate neural processing of complex real-world stimuli such as movies without prior knowledge of the timing of specific events within the movie, and thereby to study human cognition in ways not accessible to traditional experimental paradigms, which rely on controlled but artificial stimuli.
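The core analytic idea, inter-subject correlation between independent recordings, can be sketched in a few lines of Python. All data below is simulated and all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: two participants watch the same movie; recordings are
# (n_channels, n_samples) arrays. The shared stimulus induces a common
# signal component; everything else is subject-specific noise.
n_channels, n_samples = 4, 1000
stimulus_driven = rng.standard_normal((n_channels, n_samples))
subject_a = stimulus_driven + rng.standard_normal((n_channels, n_samples))
subject_b = stimulus_driven + rng.standard_normal((n_channels, n_samples))

# Channel-wise inter-subject correlation: any correlation between the
# two independent recordings must stem from the common stimulus.
isc = np.array([
    np.corrcoef(subject_a[ch], subject_b[ch])[0, 1]
    for ch in range(n_channels)
])
```

With half of each recording's variance stimulus-driven, the channel-wise correlations come out close to 0.5; fully independent recordings would yield values near zero.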

The main goal of this project was to develop and apply Hyperscanning 2.0 techniques that are optimal for a large class of neural signals, namely the envelopes (instantaneous amplitudes) of neural oscillations. Oscillations are ubiquitous features of electromagnetic measurements such as electro- and magnetoencephalography (EEG/MEG) and electrocorticography (ECoG).
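To make the notion of an envelope concrete, the following sketch extracts the instantaneous amplitude of a simulated 10 Hz alpha-band oscillation using the Hilbert transform (sampling rate and frequencies are chosen purely for illustration):

```python
import numpy as np
from scipy.signal import hilbert

fs = 250.0                       # sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)      # 4 s of data

# A 10 Hz "alpha" oscillation whose amplitude is slowly modulated.
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
signal = modulation * np.sin(2 * np.pi * 10.0 * t)

# The envelope (instantaneous amplitude) is the magnitude of the
# analytic signal obtained via the Hilbert transform.
envelope = np.abs(hilbert(signal))
```

Away from the edges of the recording, `envelope` recovers the slow modulation while discarding the fast 10 Hz carrier; it is these envelopes, rather than the raw oscillations, that the techniques developed in this project operate on.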

The research was divided into three scientific work packages. WP 1 concerns the development, testing, and dissemination of a Hyperscanning 2.0 algorithm called canonical source power correlation analysis (cSPoC), which is capable of identifying brain oscillations exhibiting consistent power modulations across datasets, e.g., during complex stimulation or social interaction. WP 2 concerns the investigation of neural correlates of visual attention and engagement during movie viewing using Hyperscanning 2.0 techniques. WP 3 concerns the study of emotional processes in clinical populations. A fourth work package, WP 4, was mainly concerned with the dissemination of project results.

Within WP 1, we developed and tested the cSPoC algorithm and demonstrated its efficacy on real EEG data. In a realistic simulation study, cSPoC clearly outperformed the most relevant existing algorithms in terms of reconstructing amplitude-coupled EEG sources. Our analyses of real data moreover show that cSPoC is capable of reconstructing the generators of sensorimotor rhythms (SMR) in the left and right motor cortex without knowledge of the experimental paradigm. This finding has applications in brain-computer interfacing, where the modulation of SMR enables severely handicapped persons to communicate without physical movements. In a second study, we used cSPoC to investigate the relationship between the human alpha (10 Hz) and beta (20 Hz) rhythms, which had so far been an open question in neuroscience.

In the course of WP 2, we obtained EEG, ECoG, and functional magnetic resonance imaging (fMRI) data recorded while participants were exposed to the same movie stimulus. All three modalities carried stimulus-related information. For EEG, this information was found predominantly in the alpha frequency band, but not at frequencies above the beta band. For ECoG and fMRI, the stimulus-related information localized to the same brain structures. We also found significant correlations between all three modalities; most importantly, fMRI signals correlated negatively with EEG amplitudes at low frequencies and positively at high frequencies. These results contribute to a better understanding of the neural activity captured by the various neuroimaging modalities and of the relationships between them.

Within WP 3, we studied grieving participants using fMRI. The first study concerned participants grieving the loss of a pet. In the first part of the experiment, participants completed an Emotional Stroop task, in which the task of naming the colors of printed words conflicted with emotional arousal, as the words were either related to the pet or not. We trained a machine learning model to predict response time (a proxy for emotional engagement) from the neuroimaging data. The model was then applied in the second part of the experiment, in which participants were exposed to memories of the pet in the form of written sentences they had provided in advance. We found a negative association between post-hoc sadness and the output of the model, indicating that the model captured grief-related neural activity.
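The decoding step described above can be illustrated with a generic ridge-regression sketch; the data, dimensions, and regularization here are assumptions for the example, not the project's actual model or features.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated training data from the Stroop phase: trial-wise activation
# patterns (n_trials x n_voxels) and the response times to be predicted.
n_trials, n_voxels = 200, 50
true_w = rng.standard_normal(n_voxels)          # unknown "ground truth"
X_train = rng.standard_normal((n_trials, n_voxels))
rt_train = X_train @ true_w + 0.1 * rng.standard_normal(n_trials)

# Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y.
lam = 1.0
w_hat = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_voxels),
    X_train.T @ rt_train,
)

# Apply the trained model to data from the second experimental phase.
X_memory = rng.standard_normal((20, n_voxels))
predicted_engagement = X_memory @ w_hat
```

The model's output on the memory-exposure phase can then be related to post-hoc reports, analogous to the association with sadness found in the study.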

In a second study, participants grieving the loss of a close person to suicide were invited to an fMRI experiment. We trained multivariate machine learning models on the recorded fMRI data to capture the neural basis of their grief at the intersection of three cognitive modalities: viewing pictures of the deceased, reading short stories about the deceased, and thinking about the deceased. This analysis identified neural clusters associated with deceased-related stimuli across all modalities. The machine learning model trained on the activity of these brain regions was then applied to periods of unconstrained thinking, during which participants only sporadically reported the contents of their minds. The output of this model identified deceased-related, but not living- or self-related, thoughts, independently of grief severity and time since loss. This result was robust with respect to two control conditions and indicates that we found a neural pattern of the mental representation of a loss in grieving participants.

The combined results of the two studies robustly indicate a stimulus-modality- and subject-independent neural representation of the object of grief in the human insular cortex, the activity of which is predictive of clinically relevant variables such as avoidance of thoughts of the loss in complicated grief. These results may pave the way for a better understanding of the neural mechanisms underlying complicated as opposed to normal grief, and may ultimately lead to novel diagnostic measures of the success of coping with a loss. Even novel therapies based on neurofeedback are conceivable.

In summary, we have provided a novel analytic tool, cSPoC, for the optimal analysis of amplitude-coupled brain rhythms. We expect that the application of cSPoC in Hyperscanning settings will provide valuable insights into how the brain processes natural stimuli such as language and video. We have compared three common neuroimaging modalities in terms of the amount of stimulus-related information they contain, which will help to assess the suitability of each modality for Hyperscanning studies and to clarify the relationships between these modalities. Finally, we have studied rumination and complicated grief in two populations of grieving participants, and have provided clinically relevant, reproducible neural markers of grief. Overall, our efforts will increase our understanding of brain function in health and disease.