The first aim of the project’s work was the development of detailed requirements and specifications for the toolkit design. Afterwards, work progressed on developing DL methodologies that fulfilled the requirements of robotics applications. Significant progress was made in all the directions OpenDR targeted, i.e. deep human-centric and environment perception and cognition methods, deep robot action and decision making, as well as appropriate simulation environments and the compilation of datasets to support the training of advanced DL models for robotics. More specifically, significant progress was made on all tasks of human-centric perception, which include deep person/face/body part active detection/recognition and pose estimation, deep person/face/body part tracking, human activity recognition, social signal (facial expression, gesture, posture, etc.) analysis and recognition, deep speech and biosignal analysis and recognition, as well as multi-modal human-centric perception and cognition. To this end, a multitude of state-of-the-art methodologies and models from the targeted applications were implemented and evaluated, in addition to novel work in a wide range of domains, e.g. adaptive inference models, active perception methodologies, and continual learning and inference. Significant progress was also made in the task of deep environment perception and cognition, where several novel state-of-the-art methods were developed. Significant results were also obtained in all tasks of deep robot action and decision making, i.e. deep planning, navigation, action and control, and human-robot interaction. OpenDR partners also worked on developing simulation tools for training efficient DL algorithms.
More specifically, the Webots simulator was extended by improving its simulation capabilities, adjusting the simulation environment to make it highly compatible with the ROS framework, and consequently with the corresponding real robotic systems, as well as preparing the infrastructure to run simulations on the web to give high visibility to the OpenDR results. The project also developed 15 open datasets and software modules for data generation. The OpenDR tools were extensively evaluated, and the results of the evaluation, along with the computing requirements and potential limitations, were carefully documented. Furthermore, the OpenDR toolkit was successfully integrated and demonstrated in the three targeted use cases, i.e. healthcare, agriculture, and agile production. Finally, the consortium has released three major versions of the toolkit. The latest one was publicly released in December 2023 and integrates more than 30 tools for various tasks (activity recognition, face detection and recognition, human pose estimation, object detection and tracking, semantic and panoptic segmentation, facial emotion recognition, hand gesture recognition, etc.), updates to support more recent software versions (e.g. CUDA and ROS2), improved and more efficient implementations of several tools, agile integration and testing pipelines, as well as detailed documentation and usage examples. The OpenDR team will continue supporting the toolkit in the years to come. The toolkit’s reception by the robotics / deep learning / computer vision community is already very encouraging: so far, the GitHub repository has received more than 590 stars, and the toolkit (as a whole or as individual tools) has been downloaded more than 17,000 times. Moreover, the project results have been disseminated in 33 journal articles, 66 conference papers, and 1 edited book.