CORDIS - Results of EU-supported research

Non-invasive decoding of cortical patterns induced by goal directed movement intentions and artificial sensory feedback in humans

Periodic Reporting for period 4 - Feel your Reach (Non-invasive decoding of cortical patterns induced by goal directed movement intentions and artificial sensory feedback in humans)

Reporting period: 2020-11-01 to 2021-07-31

In Europe, an estimated 300,000 people are living with a spinal cord injury (SCI), and around 11,000 new injuries occur every year. The consequences of SCI affect both these individuals and society. The loss of arm motor function leads to a life-long dependency on caregivers and therefore to a dramatic decrease in quality of life. With the help of neuroprostheses, grasping and elbow function can be substantially improved. However, the remaining body movements often do not provide enough degrees of freedom for natural control. The ideal solution for natural control would be to record motor commands directly from cortical areas and convert them into control signals, thereby bypassing the interrupted spinal cord.
A brain-computer interface transforms voluntarily induced changes of brain signals into control signals and serves as a promising human-machine interface. In the last decade, we demonstrated first results in EEG-based control of a neuroprosthesis in several individuals with SCI; however, the control is not yet intuitive enough. The objective of FeelYourReach is to develop a novel control framework that incorporates goal-directed movement intention, movement decoding, error processing and sensory feedback processing to allow more natural control of a neuroprosthesis. We believe that such a framework would enable individuals with high SCI to move independently, improving their quality of life.
We carried out several studies on the investigation and detection of goal-directed movement intentions. We could detect self-paced movement imaginations of a reach-and-grasp based on low-frequency time-domain features. We also found that event-related cortical potentials differ depending on whether the target selection process was internally driven by the participant or externally cued by the paradigm. Furthermore, it is possible to asynchronously detect self-paced reach-and-grasp movements in a realistic scenario in which participants were allowed to perform saccades towards an internally driven target.
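To give an intuition for asynchronous detection from low-frequency time-domain features, here is a minimal sketch: a sliding window is downsampled to a few amplitude samples and compared against a movement-related template. All signals, window lengths and thresholds below are hypothetical stand-ins, not the project's actual pipeline.

```python
import numpy as np

def lowfreq_features(window, n_points=8):
    # Downsample a low-pass-filtered EEG window to a handful of
    # time-domain amplitude samples (the feature type described above).
    idx = np.linspace(0, len(window) - 1, n_points).astype(int)
    return window[idx]

def detect_onsets(signal, template, win_len=64, step=16, threshold=0.9):
    # Slide over the signal and flag windows whose features correlate
    # strongly with a movement-related template (hypothetical rule).
    onsets = []
    for start in range(0, len(signal) - win_len, step):
        r = np.corrcoef(lowfreq_features(signal[start:start + win_len]),
                        template)[0, 1]
        if r > threshold:
            onsets.append(start)
    return onsets

rng = np.random.default_rng(0)
signal = 0.1 * rng.standard_normal(1000)
mrcp = -5.0 * np.hanning(64)          # simulated slow negative deflection
signal[192:256] += mrcp               # embed one "movement attempt"
onsets = detect_onsets(signal, lowfreq_features(mrcp))
```

Because detection runs continuously over the data stream rather than on pre-cut trials, this kind of detector is "asynchronous" in the sense used above.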

We investigated the decoding of movement covariates (position and velocity) from low-frequency EEG signals in several tracking tasks, and found a better encoding of velocity in sensorimotor areas during the visuomotor task. In a first online study, we showed that directional information such as positions, velocities and accelerations could be decoded from EEG and successfully used to control a robotic arm. Although correlations were overall higher than chance, an amplitude mismatch between kinematics and decoded trajectories was found. In a follow-up, we found that including non-directional kinematics (e.g. distance, speed) could further improve the quality of decoding. In a second online study, a decoder integrating directional (positions and velocities) and non-directional (speed) information was used for online control. In addition, we studied and implemented several electrooculography (EOG) correction methods and the influence of EOG on movement decoding. Furthermore, we investigated the performance of hand trajectory decoding based on low-frequency EEG in the source space.
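Trajectory decoders of this kind are commonly linear regressions from lagged low-frequency EEG features to kinematics. The sketch below illustrates the general idea with closed-form ridge regression on synthetic data; the feature matrix, weights and noise level are hypothetical, not the project's actual decoder.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    # Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
n, d = 500, 16
X = rng.standard_normal((n, d))                  # stand-in for lagged EEG features
w_true = rng.standard_normal(d)                  # unknown "encoding" weights
vel = X @ w_true + 0.5 * rng.standard_normal(n)  # hypothetical hand velocity

w = fit_ridge(X[:400], vel[:400])                # calibrate on the first part
pred = X[400:] @ w                               # decode held-out samples
r = np.corrcoef(pred, vel[400:])[0, 1]           # accuracy as a correlation
```

Reporting the decoder's quality as a correlation between decoded and true kinematics, as here, also explains how an amplitude mismatch can coexist with high correlations: correlation is invariant to scaling.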
Finally, a new paradigm based on attempted movement was devised to tailor the decoding to actual end users with limited motor output. Over three sessions, we asked participants to track a moving target or trace depicted shapes on a screen. While no global learning effects were visible, attempted movement could be decoded above chance level.
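"Above chance level" for continuous decoding is typically established with a surrogate test: decoded trajectories are paired with trial-shuffled true trajectories and the real correlation is compared against the surrogate distribution. A minimal sketch on synthetic trajectories (the data, shuffling scheme and percentile are illustrative assumptions):

```python
import numpy as np

def trial_corr(a, b):
    # Mean correlation between paired trial trajectories
    return np.mean([np.corrcoef(a[i], b[i])[0, 1] for i in range(len(a))])

def chance_threshold(decoded, actual, n_perm=200, seed=0):
    # Pair decoded trajectories with trial-shuffled actual trajectories
    # and take the 95th percentile of the surrogate correlations.
    rng = np.random.default_rng(seed)
    rs = [trial_corr(decoded, actual[rng.permutation(len(actual))])
          for _ in range(n_perm)]
    return np.percentile(rs, 95)

rng = np.random.default_rng(2)
actual = np.cumsum(rng.standard_normal((20, 100)), axis=1)  # 20 trial trajectories
decoded = actual + 0.3 * actual.std() * rng.standard_normal((20, 100))
real_r = trial_corr(decoded, actual)
thresh = chance_threshold(decoded, actual)                  # decoding is "above
                                                            # chance" if real_r > thresh
```

Shuffling at the trial level (rather than sample level) preserves the autocorrelation of the trajectories, which keeps the chance estimate honest for slow signals like these.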

We also investigated the detection of error-related potentials (ErrPs) and the influence of feedback during continuous motor control. We could asynchronously detect erroneous events with high accuracy, and found that the type of feedback (jittered or smooth) does not influence ErrP detection. We next studied ErrP detection during the continuous control of a robot in an online scenario, i.e. one in which participants received real-time feedback regarding the detection of ErrPs. Finally, we successfully conducted a study evaluating a generic ErrP classifier (based on existing data) in participants with SCI during a calibration-free online experiment in which they continuously controlled a robotic arm.
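ErrP classifiers are often linear discriminants on downsampled epoch amplitudes, and a "generic" classifier simply means training on pooled existing recordings and applying the result to a new user without calibration. The following sketch shows that idea on synthetic epochs; the ErrP template, amplitudes and epoch length are hypothetical.

```python
import numpy as np

def fit_lda(X, y):
    # Two-class LDA with shared covariance: w = Sigma^(-1) (mu1 - mu0)
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Xc = np.vstack([X[y == 0] - mu0, X[y == 1] - mu1])
    Sigma = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    w = np.linalg.solve(Sigma, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2
    return w, b

def make_epochs(rng, n, errp_amp):
    # 10 downsampled epoch amplitudes; ErrP epochs carry a mid-epoch
    # deflection (a schematic stand-in for the real ErrP shape).
    template = np.array([0, 0, 0, 1, 2, -2, -1, 0, 0, 0.0])
    y = rng.integers(0, 2, n)
    X = rng.standard_normal((n, 10)) + errp_amp * np.outer(y, template)
    return X, y

rng = np.random.default_rng(3)
# "Generic" classifier: fit once on pooled existing epochs...
X_train, y_train = make_epochs(rng, 600, errp_amp=1.0)
w, b = fit_lda(X_train, y_train)
# ...then apply it, calibration-free, to a new "participant" whose
# ErrP is somewhat weaker than in the training pool.
X_new, y_new = make_epochs(rng, 200, errp_amp=0.8)
acc = np.mean((X_new @ w + b > 0).astype(int) == y_new)
```

The transfer works as long as the spatial/temporal shape of the ErrP is similar across people, which is the assumption behind calibration-free use.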

We explored the relation between neural and behavioural correlates of a large variety of grasping movements. EEG activity reflected different movement covariates at different stages of grasping. We classified EEG associated with the grasping movement types and the number of fingers involved, as well as with the grasped object's intrinsic properties. Object properties and grasp types can be decoded during both planning and execution. Moreover, we found that this preferential time-wise encoding allows object properties to be decoded already from the observation stage, while the grasp type can also be accurately decoded at the object-release stage.
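The notion of stage-wise decoding can be illustrated with a simple nearest-centroid classifier applied to features from two different stages: one where the class information is present and one where it is not yet encoded. Everything below (three grasp classes, feature dimensions, separations) is a synthetic illustration, not the project's data.

```python
import numpy as np

def fit_centroids(X, y):
    # Store one mean feature vector (centroid) per class
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(0) for c in classes])

def predict(X, classes, centroids):
    # Assign each trial to the class with the nearest centroid
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(1)]

rng = np.random.default_rng(4)
n_train, n_test, d = 90, 60, 8
y_tr = np.repeat([0, 1, 2], n_train // 3)   # three hypothetical grasp types
y_te = np.repeat([0, 1, 2], n_test // 3)
means = 2.0 * np.eye(3, d)                  # class-specific patterns

# Execution stage: grasp type is encoded in the features -> decodable
X_tr = means[y_tr] + rng.standard_normal((n_train, d))
X_te = means[y_te] + rng.standard_normal((n_test, d))
cls, cen = fit_centroids(X_tr, y_tr)
acc_exec = np.mean(predict(X_te, cls, cen) == y_te)

# Observation stage: no grasp-type information yet -> near chance (1/3)
X_tr_obs = rng.standard_normal((n_train, d))
X_te_obs = rng.standard_normal((n_test, d))
cls_o, cen_o = fit_centroids(X_tr_obs, y_tr)
acc_obs = np.mean(predict(X_te_obs, cls_o, cen_o) == y_te)
```

Running the same classifier stage by stage, as sketched here, is what reveals when in the grasping sequence a given covariate becomes decodable.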

We built a device to deliver kinesthetic feedback in real time and investigated different strategies for delivering it. We projected movement sensations onto the skin of the shoulder blade, and we conducted a study investigating several movement-related parameters during the execution of planar center-out movements. Movement trials could be classified against rest with accuracies significantly exceeding chance level, regardless of whether vibrotactile feedback was provided. When studying motor imagery with this kind of feedback, classification yielded better results in the condition with vibrotactile guidance. In a final EEG study, in which subjects performed attempted hand movements tracking 2-D trajectories while receiving concurrent vibrotactile kinesthetic feedback, we investigated the effect of this feedback on decoding performance.
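One simple way to project a 2-D movement onto a patch of skin is to drive an array of vibration motors with cosine tuning, so that the motors "facing" the movement direction vibrate strongest. The motor layout and encoding rule below are a hypothetical sketch, not the actual device's mapping.

```python
import numpy as np

def vibration_pattern(direction, motor_angles):
    # Map a 2-D movement direction onto per-motor vibration intensities
    # via cosine tuning; motors facing away are clipped to zero.
    d = np.asarray(direction, float)
    d = d / (np.linalg.norm(d) + 1e-12)
    gains = np.cos(motor_angles - np.arctan2(d[1], d[0]))
    return np.clip(gains, 0.0, 1.0)

# Four hypothetical motors on the shoulder blade at 0, 90, 180, 270 degrees
angles = np.deg2rad([0, 90, 180, 270])
right = vibration_pattern([1.0, 0.0], angles)   # rightward movement
up = vibration_pattern([0.0, 1.0], angles)      # upward movement
```

With such a mapping, intermediate directions activate two neighbouring motors with graded intensities, which is one way to convey continuous direction through a small number of actuators.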

Finally, we integrated our findings on i) movement decoding, ii) goal-directed movement intention detection, and iii) error processing into a single control framework. Preliminary results on healthy participants show the feasibility and efficacy of all three decoders. After conducting a larger study on healthy participants, we plan to test our framework on SCI patients. We also carried out a first case study with a participant with a complete spinal cord injury (AIS A, NLI C2), in which we tested parts of the framework, in particular the trajectory decoder and, implicitly, the robotic control.

We reported our results in more than 20 journal papers and 25 conference contributions. We conducted more than 320 measurements on healthy participants and 20 on persons with SCI. The significance of this project's output allows us to move further in our endeavor to help individuals with SCI regain independence in some of their daily tasks.
We could show, for the first time, that it is possible to control a robotic arm from non-invasively recorded EEG signals while participants perform or attempt continuous movements. A first single case study with a person with SCI demonstrated closed-loop control. Furthermore, we were the first to show in patients with SCI that error potentials can be detected in the EEG during continuous robot movement, even with a generic classifier. We also studied kinesthetic feedback delivered to participants during movement imagery. Finally, we merged our findings into one complete system that allows detecting the start of a movement, continuous movement decoding and error detection.