Multifunctional, adaptive and interactive AI system for Acting in multiple contexts

Periodic Reporting for period 3 - MAIA (Multifunctional, adaptive and interactive AI system for Acting in multiple contexts)

Reporting period: 2024-07-01 to 2025-06-30

The MAIA project set out to address the critical challenge of restoring autonomy and improving quality of life for individuals who have lost motor functions due to stroke, tumor surgery, or accidents. This problem is of growing importance for society: demographic changes and advances in healthcare have increased the number of people living with chronic motor impairments, creating an urgent need for assistive technologies that are not only effective but also acceptable and trustworthy for users.

MAIA’s overall objective was to develop a radically new, bio-inspired neurocomputing paradigm integrated into an adaptive AI system capable of decoding action intentions from neural and behavioral signals and translating them into the control of semi-autonomous devices. From a societal perspective, the project aimed to demonstrate how such human-centric AI can enhance autonomy, foster inclusion, and contribute to an innovation ecosystem in healthcare and beyond.

By the final period, MAIA had achieved these objectives and delivered concrete innovations with strong exploitation potential, such as the locomotion intention decoder. Beyond its technological outcomes, MAIA has substantially advanced basic scientific knowledge. It provided new insights into the neural mechanisms of visuomotor transformations, intention decoding, and parieto-frontal connectivity in both humans and non-human primates, and demonstrated how these findings can be translated into functional AI models for real-world applications. This scientific progress not only underpins the development of innovative assistive devices but also enriches the broader fields of neuroscience, cognitive science, and AI.
In conclusion, MAIA has demonstrated that bio-inspired, human-centric AI is not only technically feasible but also socially meaningful. By advancing neuroscience-informed intention decoding, validating adaptive AI in both virtual and real-world conditions, strengthening the knowledge base of basic science, and building pathways for exploitation and technology transfer, the project has laid a solid foundation for the next generation of neuroprosthetic and assistive technologies. These outcomes hold the potential to substantially improve the independence and quality of life of individuals with motor impairments while fostering innovation across healthcare, industry, and beyond.
From the beginning of the project to the end of the final reporting period, MAIA partners advanced both basic science and applied technologies across the project’s six work packages.
In WP1, ZEISS and WWU developed and validated gaze-based selection paradigms and anomaly detection models, delivering a VR-based assistive simulation for object selection and wheelchair navigation. The simulation improved user trust and control accuracy and has since been integrated into an augmented reality framework.
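To make the idea of gaze-based selection concrete, the following is a minimal Python sketch of one common selection rule, dwell-time selection: an object is chosen once the user fixates it long enough. The normalized coordinate convention, radius, dwell threshold, and all names here are illustrative assumptions, not the project’s actual paradigm.

# Minimal sketch of a dwell-time gaze selection rule; thresholds and
# helper names are illustrative, not MAIA's actual implementation.
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float          # horizontal gaze position (normalized 0..1)
    y: float          # vertical gaze position (normalized 0..1)
    t: float          # timestamp in seconds

def select_by_dwell(samples, targets, radius=0.05, dwell_s=0.8):
    """Return the first target fixated continuously for `dwell_s` seconds.

    `targets` maps a target name to its (x, y) center. A selection fires
    once gaze stays within `radius` of the same target for long enough.
    """
    current, since = None, None
    for s in samples:
        hit = next((name for name, (tx, ty) in targets.items()
                    if (s.x - tx) ** 2 + (s.y - ty) ** 2 <= radius ** 2), None)
        if hit != current:                 # gaze moved to a new target (or away)
            current, since = hit, s.t
        elif hit is not None and s.t - since >= dwell_s:
            return hit                     # dwell threshold reached: select
    return None

Run over a gaze stream with, for example, targets = {"cup": (0.3, 0.5), "door": (0.8, 0.5)}, this returns the name of the first object fixated long enough, which can then trigger the corresponding assistive action.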
In WP2, UNIBO and CNR provided novel insights into the encoding of reach direction and depth in parieto-frontal circuits of macaques, complemented by human TMS studies that revealed the functional specialization and plasticity of medial PPC subregions. CNR further advanced the active inference framework, extending its application from motor control to higher-order cognitive intentions.
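As a concrete illustration of the active inference perspective on intention decoding, the sketch below implements only the perception step of a discrete active inference model: beliefs over candidate intentions are updated from observations by minimizing variational free energy, which in this simple discrete case reduces to Bayes’ rule. The intentions, observation categories, and likelihood table are illustrative assumptions, not CNR’s actual model.

# Perception step of a discrete active inference model: belief updating
# over candidate intentions. All numbers are illustrative placeholders.
import numpy as np

intentions = ["reach_left", "reach_right", "rest"]
prior = np.array([1/3, 1/3, 1/3])

# Likelihood p(observation | intention): rows = intentions,
# columns = observed gaze region ("left", "right", "center").
likelihood = np.array([
    [0.7, 0.1, 0.2],   # reach_left
    [0.1, 0.7, 0.2],   # reach_right
    [0.2, 0.2, 0.6],   # rest
])

def update_belief(belief, obs_idx):
    """Posterior over intentions after one observation (Bayes' rule)."""
    post = belief * likelihood[:, obs_idx]
    return post / post.sum()

belief = prior
for obs in [0, 0, 2]:          # gaze observed: left, left, center
    belief = update_belief(belief, obs)
print(dict(zip(intentions, belief.round(3))))

A full active inference agent would additionally select actions by minimizing expected free energy; this fragment covers belief updating only.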
In WP3, UNIBO demonstrated that the MAIA AI intention decoder generalizes well to wheelchair users with motor impairments, maintaining high accuracy and consistent gaze-based intention decoding, and showed that users experienced an enhanced sense of embodiment of the MAIA wheelchair prototype.
In WP4, TEC implemented multimodal neurocomputational algorithms and adaptive BMI paradigms on the ISMORE exoskeleton, successfully testing them with both healthy participants and stroke patients.
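To illustrate what an adaptive BMI paradigm can look like in code, here is a minimal sketch in which a linear decoder over EEG features is corrected after each mistaken trial, so the interface tracks slow drifts in the user’s signals. The feature dimensionality, decoder form, and synthetic data are assumptions for illustration, not the algorithms deployed on the ISMORE exoskeleton.

# Sketch of an adaptive BMI update loop with a simple linear decoder.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(8)                      # linear decoder weights over 8 EEG features
lr = 0.05                            # learning rate for online adaptation

def decode(features):
    """Binary movement-intention decision from a linear score."""
    return 1 if features @ w > 0 else 0

for trial in range(200):
    features = rng.normal(size=8)                        # stand-in for per-trial EEG features
    label = 1 if features[0] + features[1] > 0 else 0    # stand-in ground-truth intention
    pred = decode(features)
    # Perceptron-style correction: adapt only when the decoder is wrong.
    if pred != label:
        w += lr * (label - pred) * features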
In WP5, partners fostered an interdisciplinary ecosystem, engaging stakeholders, delivering training, and grounding technology development in standards, ensuring that MAIA outcomes are both technically innovative and socially responsible.
Finally, WP6 ensured broad dissemination of results through high-impact publications, conferences, workshops, and the project website.
In the final period, MAIA not only consolidated its scientific achievements but also delivered innovations with strong exploitation potential. The locomotion intention decoder, integrated into the wheelchair demonstrator, proved both practically viable and commercially promising, leading to a German patent application and paving the way for a European extension. These results underline MAIA’s dual impact: on the one hand, it significantly expanded basic scientific knowledge on visuomotor transformations, intention decoding, and adaptive neuro-AI integration; on the other, it laid the foundation for tangible technological applications with the capacity to enhance autonomy and quality of life for individuals with motor impairments. Dissemination activities ensured that both the scientific community and broader society are aware of these advances, reinforcing MAIA’s role as a catalyst for innovation at the intersection of neuroscience, AI, and assistive technology.
In the conception of MAIA, future prosthetic and assistive devices are envisioned to be far more proactive than current human–machine interfaces, exploiting brain, gaze, and behavioral signals to anticipate human intentions and provide meaningful feedback. Over the course of the project, MAIA has gone beyond this vision by delivering concrete innovations and scientific advances. At the basic science level, MAIA has deepened knowledge of how the parietal and frontal cortices encode reach direction and depth, how intention signals can be decoded in both humans and non-human primates, and how plasticity within parieto-frontal circuits can be harnessed for adaptive control. These findings significantly advance the state of the art in neuroscience and AI. At the technological level, MAIA has demonstrated robust intention decoding across user populations, including individuals with motor impairments, and developed adaptive AI systems integrating multimodal inputs (EEG, EMG, EOG, gaze) for bidirectional communication. Notably, the locomotion intention decoder, tested with patients and integrated into the wheelchair demonstrator, represents a novel and exploitable solution with strong commercial promise.
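As a hedged sketch of how multimodal inputs such as EEG, EMG, EOG, and gaze can be combined for intention decoding, the code below applies weighted log-linear (late) fusion of per-modality probability estimates. The modality weights, candidate intentions, and probability values are illustrative assumptions, not MAIA’s actual decoder.

# Late fusion of per-modality intention probabilities (illustrative).
import numpy as np

def fuse(prob_by_modality, weights):
    """Weighted log-linear fusion of per-modality intention probabilities.

    `prob_by_modality` maps a modality name to a probability vector over
    the same set of candidate intentions.
    """
    log_p = sum(w * np.log(np.clip(prob_by_modality[m], 1e-9, 1.0))
                for m, w in weights.items())
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

# Example: three candidate intentions (go_forward, turn_left, stop).
probs = {
    "eeg":  np.array([0.5, 0.3, 0.2]),
    "emg":  np.array([0.4, 0.4, 0.2]),
    "eog":  np.array([0.6, 0.2, 0.2]),
    "gaze": np.array([0.7, 0.2, 0.1]),
}
weights = {"eeg": 1.0, "emg": 0.5, "eog": 0.5, "gaze": 1.5}
print(fuse(probs, weights))    # fused belief over intentions

Weighting modalities in log space lets more reliable channels (here, hypothetically, gaze) dominate the fused decision while still letting weaker channels veto implausible intentions.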
Scientifically, MAIA provides a foundation for next-generation neuro-AI models that are transparent, adaptive, and biologically inspired, contributing to neuroscience, machine learning, and cognitive science. Technologically, the project demonstrates that gaze-based and multimodal decoders can enable intuitive, non-invasive, and reliable communication channels between humans and assistive devices, creating pathways for integration into neuroprosthetics, rehabilitation robotics, and mobility systems. Socio-economically, MAIA has laid the groundwork for an innovation ecosystem that bridges academia, industry, and clinical partners, fostering new opportunities in healthcare markets and beyond. Societally, the project addresses urgent needs by empowering individuals with motor impairments, reducing social and functional barriers, and enhancing autonomy and quality of life. By promoting human-centric AI that is trustworthy, acceptable, and scalable to broader domains such as industry and space exploration, MAIA contributes to shaping a future where technology complements human agency rather than replacing it.
Figure: the MAIA concept