CORDIS - EU research results

TRAnsparent InterpretabLe robots

Periodic Reporting for period 1 - TRAIL (TRAnsparent InterpretabLe robots)

Reporting period: 2023-03-01 to 2025-02-28

After several years of positive projections, some special-purpose domestic robots have finally reached the market: the IFR Report on Service Robots shows an increase of 25% for domestic and 22% for entertainment robots. This growth is largely driven by simple devices such as lawnmowers and vacuum cleaners, but several reports predict massive market growth for other companion robots in the coming years, opened up by new markets in healthcare, rehabilitation, and logistics and fuelled by advances in the field of Artificial Intelligence (AI). These projections are challenged by findings from industry (IBM4): while 82% of enterprises are considering AI for their products, 60% fear liability issues due to a lack of transparency in decision-making (the so-called “black box” problem). If companies want to sell intelligent robots, novel research solutions that make those robots interpretable and transparent to human users will be key to acceptance. Moreover, lay users often have trouble interpreting even the current state of a robot on a simple level, e.g. knowing whether a robot is listening or how it is processing the last request. An increasingly important issue for the acceptance of robots in human homes is therefore the transparent interpretability of the robot's behaviour and its underlying decision-making processes.
TRAIL - TRAnsparent, InterpretabLe Robots strategically focuses on a novel, highly interdisciplinary and cross-sectoral research and training programme for a better understanding of transparency in deep learning, artificial intelligence, and robotics systems.
To train a new generation of researchers to become experts in the design and implementation of transparent, interpretable neural systems and robots, we have built a highly interdisciplinary consortium of expert partners with long-standing expertise in cutting-edge artificial intelligence and robotics, including deep neural networks, computer science, mathematics, social robotics, human-robot interaction and psychology. To build transparent robotic systems, these new researchers, the doctoral candidates (DCs), need to learn about the theory and practice of the principles of (1) internal decision understanding and (2) external transparent behaviour. Since interpreting complex robotic systems requires highly interdisciplinary knowledge, we start at the decision level by interpreting deep neural learning and analysing what knowledge can be efficiently extracted. At the same time, on the behaviour level, the disciplines of human-robot interaction and psychology are key to understanding how to present the extracted knowledge as behaviour intuitively and naturally to a human user, integrating the robot into a cooperative human-robot interaction. A scaffolded training curriculum guarantees that the DCs not only gain a deep understanding of both research areas but also receive optimal skills training to be fully prepared for a successful research career in academia and industry.
We have implemented a complementary training programme for the 10 recruited doctoral candidates (DCs), which has covered interdisciplinary topics and scientific work to prepare them for later work in industry and academia. So far, this training programme has consisted of 4 network workshops (NET), where the DCs received feedback on their current research and were trained by expert speakers, and 4 learning circle activities (LCAs), in which the DCs learned about scientific work in a peer-supported setting.
The DCs are progressing well in their research and have published journal articles and papers at renowned conferences to disseminate their results to the scientific community. Additionally, we have organized and participated in a workshop on the topic of "Explainable AI in Human-Robot Interaction" at the International Conference on Artificial Neural Networks in 2024. The DCs presented their research in the context of this workshop to an international scientific audience. Another workshop is planned for RO-MAN 2025 later this year.
The DCs have been working on model-agnostic algorithms and frameworks to increase transparency in deep learning in both unimodal and multimodal settings, which will impact how AI is used in many other research areas and will contribute to new standards of transparent AI. Their research will also contribute to the development of transparent social robot companions for older people, due to the strong connection to industry applications in the training programme, and the collaboration with users in both healthcare and multi-generational settings.
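To illustrate what a model-agnostic transparency method looks like in practice, the sketch below implements permutation feature importance: a standard technique that treats the model as a black box and measures how much its performance degrades when each input feature is shuffled. This is a minimal, generic example for illustration only; the function names and the toy model are our own assumptions, not the DCs' actual algorithms or frameworks.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic importance: shuffle one feature at a time and
    measure how much the model's score drops relative to the baseline."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
            drops.append(baseline - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy "black box" whose output depends only on feature 0 (hypothetical example)
model = lambda X: X[:, 0] * 2.0
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = model(X)
neg_mse = lambda y_true, y_pred: -np.mean((y_true - y_pred) ** 2)

imp = permutation_importance(model, X, y, neg_mse)
# imp[0] is large; imp[1] and imp[2] are ~0, exposing which input drives the decision
```

Because the method only queries the model's inputs and outputs, it applies unchanged to any predictor, which is what makes such approaches attractive for unimodal and multimodal settings alike.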
We focus on an interdisciplinary training schedule, with expert speakers at every workshop and the inclusion of the DCs in the preparation of project deliverables. This will prepare the DCs for a successful career in industry or academia, equipping them with the transferable skills required for research alongside their scientific skills.
The consortium at the International Conference on Artificial Neural Networks 2024.