CORDIS - EU research results

Multifunctional, adaptive and interactive AI system for Acting in multiple contexts

Periodic Reporting for period 2 - MAIA (Multifunctional, adaptive and interactive AI system for Acting in multiple contexts)

Reporting period: 2022-07-01 to 2024-06-30

The problem addressed, the societal impact, and the overall objectives of MAIA have not changed since the start of the project.
Our interactions with objects in the environment are essential for life. They are made possible by the coordination of eye, hand and body movements, which represent our main interfaces with the external world.
Despite alarming data on disability, no short-term solution can provide complete recovery based on the regeneration of damaged nerve tissue. Therefore, in the non-acute phase of a disabling disease, enhancing functional activities could represent a gainful solution. In recent years, neuroscientists and engineers have shown the promising possibility of using cortical recordings from the human brain to drive Human Centric Artificial Intelligence (AI) controllers integrated into robotic devices, allowing interactions with the environment. However, much work remains to be done to achieve human-acceptable levels of control and to mimic the adaptive nature of human motor behaviour. Recent research on the healthy brain has shown that a large part of this job is performed by brain areas classified as high-order cognitive areas in the posterior parietal cortex (PPC).
Although the PPC contains all the necessary information, it is extremely difficult to decode this large cortical structure all at once, whereas efficient assistance demands successful real-time neuroprosthetic control. Supplementing neural data with overt behavioral indicators of intention and attention, such as eye movements, can reduce the complexity of the problem and improve the precision of the AI controller's decoding procedures.
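To illustrate how gaze can reduce decoding complexity, the minimal sketch below uses the current fixation to shortlist gaze-plausible targets before a neural classifier chooses among them. This is not the project's actual pipeline: the function names, the fixed radius heuristic, and the sklearn-style classifier interface are all assumptions for illustration.

```python
import numpy as np

def shortlist_targets(gaze_xy, targets_xy, radius=0.15):
    """Keep only targets within `radius` (normalized units) of the fixation.

    Gaze typically lands on or near the intended object before the hand
    moves, so the fixation prunes the decoder's hypothesis space.
    """
    dists = np.linalg.norm(targets_xy - gaze_xy, axis=1)
    idx = np.flatnonzero(dists < radius)
    return idx if idx.size else np.arange(len(targets_xy))  # fall back to all targets

def decode_intention(neural_features, gaze_xy, targets_xy, classifier):
    """Restrict a pretrained, sklearn-style classifier to gaze-plausible targets."""
    candidates = shortlist_targets(np.asarray(gaze_xy), np.asarray(targets_xy))
    probs = classifier.predict_proba(np.asarray(neural_features).reshape(1, -1))[0]
    return candidates[np.argmax(probs[candidates])]  # best gaze-consistent target
```

Restricting the classifier's output space this way shrinks the effective number of hypotheses the neural decoder must discriminate, which is the essence of the complexity reduction described above.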
We will apply the predictive processing approach to further improve the efficiency and precision of the AI controller.
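The core predictive-processing idea can be stated compactly: the system predicts upcoming input from its current belief about the user's goal, and the prediction error drives the belief update. The toy sketch below is entirely illustrative; the linear forward model, learning rate, and variable names are assumptions, not MAIA's model.

```python
import numpy as np

def predictive_update(belief, observation, forward_model, lr=0.1):
    """One predictive-coding step: predict, compare, correct.

    belief        : current estimate of the latent intention state
    observation   : newly observed sensory/behavioral sample
    forward_model : matrix mapping belief -> predicted observation
    """
    prediction = forward_model @ belief              # top-down prediction
    error = observation - prediction                 # bottom-up prediction error
    belief = belief + lr * forward_model.T @ error   # gradient-style correction
    return belief, error

# toy usage: a 2-D intention state observed through a 3-D sensory channel
F = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
belief = np.zeros(2)
for obs in np.random.default_rng(0).normal([1.0, -1.0, 0.0], 0.1, (50, 3)):
    belief, err = predictive_update(belief, obs, F)
```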
The MAIA project aims to develop a Human Centric AI that exploits neural signals in combination with behavioural signals and can be integrated into different types of assistive devices, such as robotic arms, wheelchairs and exoskeletons, through an approach guided by the real needs and expectations of end users. MAIA also aims to establish a European innovation ecosystem that can potentially span from healthcare to industry and space exploration.
During the second reporting period of the project (July 2022-June 2024), we achieved the following main results:
- In WP1, WWU-ZEISS conducted several studies to advance the decoding of action intention from eye movements and to design a bidirectional interactive AI system. Specifically, WWU-ZEISS established how eye movements can be used for intention decoding, efficient and automatic error communication, and trust-building.
- In WP2, UNIBO acquired new neural data in non-human primates (NHP) from the parietal and frontal cortices during different visuomotor tasks in real environments and in VR. In parallel, further transcranial magnetic stimulation (TMS) experiments were conducted in human volunteers to characterize the causal relationship between brain activity and behavior.
CNR conducted several studies aimed, first, at defining and expanding the theoretical basis and details of an innovative approach to multidomain intention decoding from neural and behavioral data based on predictive coding and, second, at investigating and critically comparing the posterior parietal cortex (PPC) and the primary motor cortex (M1) as potential sources of information for timely intention decoding.
Finally, TEC took the initial step in applying the neurocomputational models developed within the MAIA context to the end users of this technology: human subjects.
- In WP3, UNIBO-PSI and IRCCS completed the definition of end-user requirements through interviews and focus groups. Additionally, preliminary studies on healthy participants were performed to build machine learning algorithms that can predict action intention from multicomponent signals in reach-to-grasp actions and navigation (a sketch of this multicomponent approach appears after this list). WP3 also started testing the paradigms best suited to evaluating embodiment (e.g. the sidedness task and the rubber hand illusion) and the factors favouring such embodiment in special populations using prostheses in their daily lives.
- In WP4, TEC conducted several studies to develop and implement neurocomputational models exploiting existing intracortical neural data from a human stroke patient, and also explored alternative sources of non-invasive signals (electroencephalography, electromyography) to develop and test enriched real-time control systems in humans. Finally, TEC and STAM have been developing a hardware prototype consisting of a motorized wheelchair equipped with a robotic arm to support a real-world use-case scenario.
- In WP5, STAM continuously supported the creation of an ecosystem and a vibrant stakeholder community that is not only aware of the potential of the MAIA approach but may also contribute to the overall human-centric approach in several fields and disciplines beyond the biomedical one.
- In WP6, project results have been continuously disseminated through the website, scientific publications in international journals, and participation in lectures, conferences and workshops.
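As referenced in the WP3 item above, intention prediction from multicomponent signals can be made concrete with a short sketch. The feature choices, window shapes, and the logistic-regression model below are illustrative assumptions on synthetic data, not the project's actual algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def trial_features(kinematics, gaze, eeg):
    """Concatenate simple summary features from one pre-movement window.

    kinematics : (T, 3) wrist position samples
    gaze       : (T, 2) gaze coordinates
    eeg        : (T, C) band-power samples per channel
    """
    return np.concatenate([
        kinematics.mean(axis=0), kinematics.std(axis=0),  # posture + micro-motion
        gaze.mean(axis=0), gaze.std(axis=0),              # fixation locus + stability
        eeg.mean(axis=0),                                 # average band power per channel
    ])

# toy training run on synthetic trials (labels = intended target id)
rng = np.random.default_rng(1)
X = np.stack([trial_features(rng.normal(size=(100, 3)),
                             rng.normal(size=(100, 2)),
                             rng.normal(size=(100, 8)))
              for _ in range(200)])
y = rng.integers(0, 3, size=200)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)
```

The design point is the fusion itself: kinematic, oculomotor, and neural features enter one classifier, so evidence from any channel can compensate for noise in the others.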
In the MAIA conception, future prosthetic and assistive devices will be much more proactive than current human-machine interfaces, exploiting brain signals and other currently unconventional means of communication, such as gaze and gestures, to perceive human intentions and convey feedback. Such rich bidirectional communication will require AI-based decoding in which both the device and the assisted person will likely need to learn and adapt to each other. The MAIA project develops techniques for AI-based decoding of motor intentions with bidirectional feedback between the user and the AI device that supports mutual learning. The intention decoder takes input from neural (posterior parietal and motor cortices) as well as behavioral (kinematics and gaze) signals to learn to infer the desired human intentions. A lighter, non-invasive version of the interface is built around intuitive and natural communication through gaze movements, which typically precede any manual or bodily action. Identified targets of the intended actions are highlighted when gaze shifts onto the target. The user's oculomotor reactions to this signaling serve as a means of conveying a mismatch to the decoder and thereby improving the decoding. A testbed scenario is navigating an electric wheelchair with obstacle avoidance and reaching objects with a robotic arm mounted on the wheelchair.
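A minimal sketch of that bidirectional loop, under heavy assumptions (the dwell and veto thresholds, the "quick look-away means mismatch" heuristic, and all function names are invented for illustration): the device highlights the target it believes the user intends, a rapid gaze departure from the highlight demotes that hypothesis, and a sustained fixation confirms it.

```python
import numpy as np

def run_feedback_loop(gaze_stream, targets_xy, scores,
                      dwell_s=0.3, veto_s=0.2, dt=0.02):
    """Gaze-based confirm/veto loop over intention hypotheses.

    gaze_stream : iterable of (x, y) gaze samples at 1/dt Hz
    targets_xy  : (N, 2) candidate target positions
    scores      : decoder confidence per target, demoted on veto
    """
    highlighted, t_on = None, 0.0
    for i, gaze in enumerate(gaze_stream):
        t = i * dt
        nearest = int(np.argmin(np.linalg.norm(targets_xy - np.asarray(gaze), axis=1)))
        if highlighted is None:
            # a gaze shift onto the decoder's current best guess triggers a highlight
            if nearest == int(np.argmax(scores)):
                highlighted, t_on = nearest, t
        elif nearest != highlighted and t - t_on < veto_s:
            scores[highlighted] *= 0.5   # rapid look-away read as a mismatch signal
            highlighted = None           # re-propose from the updated scores
        elif t - t_on >= dwell_s:
            return highlighted           # sustained fixation confirms the intention
    return None
```

The demotion step is where mutual learning enters in this toy version: the user's natural oculomotor reaction corrects the decoder's ranking without any explicit command.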
The current MAIA results may have several scientific and technological impacts: 1) strong engagement of companies working in the machine learning field; 2) the creation of an interactive human-centric AI system, which can lead to higher acceptance of and trust in the technology; 3) greater awareness of the possibility of improving the accuracy of AI systems in providing motor responses, for example in medical environments; 4) shedding light on new scientific and industrial realities in the field of human-AI cooperation; 5) filling the social gaps generated by disabling pathologies.
Figure: MAIA concept
Figure: MAIA main results