Boosting Brain-Computer Communication with high Quality User Training

Periodic Reporting for period 2 - BrainConquest (Boosting Brain-Computer Communication with high Quality User Training)

Reporting period: 2019-01-01 to 2020-06-30

Brain-Computer Interfaces (BCIs) are communication systems that enable users to send commands to computers through brain signals only, by measuring and processing these signals. By making computer control possible without any physical activity, BCIs promise to revolutionize many application areas, notably assistive technologies, e.g. for wheelchair control, and man-machine interaction. For instance, using a BCI, a tetraplegic user can move a cursor on a computer screen towards the left or right simply by imagining left or right hand movements, respectively. Despite this promising potential, BCIs are still barely used outside laboratories, due to their currently poor reliability. For instance, BCIs using only two imagined hand movements as mental commands recognize, on average, less than 80% of these commands correctly, while 10 to 30% of users cannot control a BCI at all.
A BCI should be considered a co-adaptive communication system: its users learn to perform mental commands using mental imagery (e.g. by imagining movements or doing mental maths), which the machine learns to recognize by processing the brain signals measured from the user. Most research efforts so far have been dedicated to improving how these brain signals are processed and analyzed. However, BCI control is a skill that users have to learn too. Unfortunately, how BCI users learn to produce clear and reliable mental commands, although essential, has barely been studied, i.e. fundamental knowledge about how users learn BCI control is lacking. Moreover, standard BCI user training approaches do not follow human learning principles or guidelines from educational psychology. Thus, poor BCI reliability is probably largely due to highly suboptimal user training.
In order to obtain a truly reliable BCI, we need to redefine user training approaches completely. To do so, the BrainConquest project proposes to study, understand and model how users learn to perform reliable BCI mental commands. Then, based on human learning principles and such models, the project aims to create a new generation of BCIs that ensure users learn how to control them successfully, hence making BCIs dramatically more reliable. Such a reliable BCI could change man-machine interaction in the way BCIs have promised but so far failed to deliver. We notably plan to use such reliable BCIs as assistive technologies for severely motor-impaired users, as new control tools for everyone (e.g. for gaming), and for post-stroke neuro-rehabilitation.
Since the beginning of the project, we have notably worked on understanding and modeling the factors (e.g. user skills or traits, such as their personality or cognitive abilities) influencing BCI user performance and learning. In other words, we have worked on identifying which factors influence BCI performance and learning the most, and how these factors interact with each other. So far, we have proposed new ways to measure users’ skills at BCI control that reflect the users’ actual skills better, independently of how good the machine is. We have also shown, for the first time, that BCI experimenters, i.e. the scientists training users to control a BCI, actually influence how users learn and perform. In particular, our first results, which still need to be confirmed, suggest that BCI users tend to learn better with women experimenters than with men experimenters. Finally, by using Artificial Intelligence (AI) techniques, we could also reveal how some personality traits of BCI users, notably how anxious they are, could influence their performance and learning.
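As an illustration of the idea behind such machine-independent skill measures, the sketch below computes a simple separability score between the EEG patterns of two mental commands: the more distinct the two classes of brain signals, the higher the user's score, regardless of which classifier is used. The band-power features and the Fisher-like ratio are illustrative assumptions only, not the actual metrics proposed in the project.

```python
# Hypothetical sketch: a machine-independent estimate of a user's BCI skill,
# measured as how distinct the EEG patterns of two mental commands are.
# Feature extraction and the exact formula are illustrative assumptions.
import numpy as np

def bandpower_features(trials, sfreq, band=(8.0, 30.0)):
    """Log band-power of each EEG channel in the given frequency band.

    trials: array of shape (n_trials, n_channels, n_samples)
    """
    spectra = np.abs(np.fft.rfft(trials, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(trials.shape[-1], d=1.0 / sfreq)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(spectra[..., mask].mean(axis=-1) + 1e-12)

def class_distinctiveness(trials_a, trials_b, sfreq=250.0):
    """Fisher-like ratio: distance between class means over within-class spread."""
    fa = bandpower_features(trials_a, sfreq)
    fb = bandpower_features(trials_b, sfreq)
    between = np.linalg.norm(fa.mean(axis=0) - fb.mean(axis=0))
    within = 0.5 * (fa.std(axis=0).mean() + fb.std(axis=0).mean())
    return between / (within + 1e-12)

# Example with simulated data: 40 trials per class, 16 channels, 2 s at 250 Hz
rng = np.random.default_rng(0)
left = rng.standard_normal((40, 16, 500))
right = rng.standard_normal((40, 16, 500)) * 1.3   # slightly different power
print("user skill score:", class_distinctiveness(left, right))
```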
In order to refine our models further, we also need a better understanding and estimation of the users’ mental states during BCI training, e.g. their motivation or mental effort. We thus worked on designing digital tools to estimate users’ cognitive, affective and motivational states from their brain (here, electroencephalography, EEG) and physiological (e.g. heart rate or sweat) signals. So far, we have designed new AI tools to recognize low or high mental effort, as well as negative or positive emotions, from EEG, with better reliability than existing methods. We also conducted experiments to induce various types of attention, e.g. sustained attention over a long period of time or attention split between audio and visual messages. We could show that the attention type a user is engaged in can be recognized from EEG signals. More recently, we have also studied curiosity, a mental state that is key to successful learning. By designing a suitable experiment that puts users in various states of curiosity (e.g. bored versus curious) and by using the AI tools developed above, we were able to discriminate low from high curiosity from EEG.
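To give a concrete, deliberately simplified picture of what such mental-state decoding involves, the sketch below classifies low versus high mental workload from EEG using a standard pipeline of band-power features and a shrinkage LDA classifier. The project's actual AI tools are more advanced; the data, frequency bands and parameters here are illustrative assumptions only.

```python
# Hypothetical sketch: decoding low vs. high mental workload from EEG with a
# generic machine-learning pipeline (band-power features + shrinkage LDA).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def bandpowers(trials, sfreq, bands=((4, 8), (8, 13), (13, 30))):
    """Log band-power per channel in the theta, alpha and beta bands."""
    spectra = np.abs(np.fft.rfft(trials, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(trials.shape[-1], d=1.0 / sfreq)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(spectra[..., mask].mean(axis=-1) + 1e-12))
    return np.concatenate(feats, axis=1)   # (n_trials, n_channels * n_bands)

# Simulated example: 60 low- and 60 high-workload trials (32 channels, 2 s @ 250 Hz)
rng = np.random.default_rng(42)
X_eeg = rng.standard_normal((120, 32, 500))
y = np.repeat([0, 1], 60)                  # 0 = low, 1 = high workload
X = bandpowers(X_eeg, sfreq=250.0)

clf = make_pipeline(StandardScaler(),
                    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f" % scores.mean())
```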
In addition to modeling BCI user training, we also need to optimize it. We notably aim to do so by optimizing the feedback given to BCI users, i.e. the information the BCI provides about what it has recognized, so that users can learn to control it more efficiently. So far, we have proposed and studied various BCI feedback types, including feedback combining vibrotactile stimulation and realistic visual feedback (moving 3D hands when an imagined hand movement is recognized) or, for the first time, social feedback. For the latter, we notably proposed, designed and studied the first artificial learning companion for BCI, i.e. a kind of small robot that provides social and emotional feedback to users by encouraging them or suggesting what to do depending on their performance and learning. We showed it could improve performance for users who prefer to work in groups rather than alone with a computer.
To further optimize BCI training, we have also designed and studied adaptive AI tools that adapt to the users’ changing EEG signals, both offline (by re-analyzing past EEG data) and online with a tetraplegic BCI user. Our studies suggested the superiority of these approaches in terms of BCI performance and user training. This work was notably performed as we trained a tetraplegic BCI user to control a multi-command BCI over three months, in order to participate in the Cybathlon BCI series in Graz in 2019. In this Cybathlon competition, paralyzed users competed in a BCI-controlled racing video game. While we did not win the competition, this experience enabled us to design the new adaptive BCI system mentioned above and provided us with many leads for future improvement.
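As a rough illustration of what an adaptive classifier does, the sketch below implements a generic adaptive linear discriminant whose class means are updated online with a forgetting factor, so that the decision boundary can follow slow drifts in the user's EEG features. This is a textbook adaptive-LDA scheme shown for explanation only, not the adaptive algorithm developed in the project; all names and parameters are assumptions.

```python
# Hypothetical sketch of an adaptive classifier for non-stationary EEG features:
# a two-class linear discriminant whose class means are updated online.
import numpy as np

class AdaptiveLDA:
    def __init__(self, n_features, forgetting=0.05):
        self.mu = np.zeros((2, n_features))   # running class means
        self.w = np.zeros(n_features)         # discriminant weights
        self.b = 0.0
        self.forgetting = forgetting

    def fit(self, X, y):
        """Initial calibration on labelled trials (features X, labels y in {0, 1})."""
        for c in (0, 1):
            self.mu[c] = X[y == c].mean(axis=0)
        cov = np.cov(X.T) + 1e-3 * np.eye(X.shape[1])   # regularised pooled covariance
        self._inv_cov = np.linalg.inv(cov)
        self._update_boundary()
        return self

    def _update_boundary(self):
        self.w = self._inv_cov @ (self.mu[1] - self.mu[0])
        self.b = -0.5 * self.w @ (self.mu[0] + self.mu[1])

    def predict(self, x):
        return int(self.w @ x + self.b > 0)

    def adapt(self, x, label):
        """Online update: pull the corresponding class mean towards the new trial."""
        lam = self.forgetting
        self.mu[label] = (1 - lam) * self.mu[label] + lam * x
        self._update_boundary()

# Usage example with simulated features: calibrate, then keep adapting during use.
rng = np.random.default_rng(1)
X0 = rng.standard_normal((50, 8))
X1 = rng.standard_normal((50, 8)) + 0.8
clf = AdaptiveLDA(n_features=8).fit(np.vstack([X0, X1]), np.repeat([0, 1], 50))
new_trial = rng.standard_normal(8) + 1.2    # features drift over the session
print("predicted command:", clf.predict(new_trial))
clf.adapt(new_trial, label=1)
```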
Finally, many of the digital tools that we have developed so far, e.g. the various AI tools mentioned above, have been shared for free as open-source software, as part of the OpenViBE BCI platform. As such, our results can be used for free by anyone, whether scientists or the general public.
As mentioned above, the BCI research field still lacks theories and models of BCI user training, which prevents us from training users optimally and can, in turn, lead to unreliable BCI systems. With this project, we are working towards building such models and theories of BCI user training. We are doing so by identifying the factors, i.e. users’ skills, mental states or traits, as well as BCI machine characteristics, that influence this learning, so that we can in turn influence and optimize the training. Overall, by the end of this project we expect to build a full model of BCI user training, which we could use to propose a completely new BCI training procedure that ensures users can effectively and efficiently learn to control BCIs.