AI-Powered Manipulation System for Advanced Robotic Service, Manufacturing and Prosthetics

Periodic Reporting for period 2 - IntelliMan (AI-Powered Manipulation System for Advanced Robotic Service, Manufacturing and Prosthetics)

Reporting period: 2023-10-01 to 2024-12-31

Advancements in intelligent robotics have significantly improved autonomous manipulation capabilities, yet robots still struggle to interact with the world as efficiently and adaptively as humans. A major challenge lies in enabling robots to perceive, learn, and execute complex tasks in diverse and unstructured environments, requiring a seamless blend of perception, decision-making, and control.
The IntelliMan project addresses these challenges by developing an AI-powered manipulation system with persistent learning capabilities. Unlike conventional automation, IntelliMan focuses on adaptive, high-performance manipulation, integrating multi-modal perception, human demonstration learning, and real-time autonomy arbitration. The goal is to create a system that continuously improves its manipulation skills, dynamically adapting to new tasks, objects, and environments while ensuring safety, efficiency, and human trust.
A core innovation of IntelliMan is its ability to transfer knowledge across different robotic platforms and applications, ranging from prosthetics and assistive robotics to household service robots, flexible manufacturing, and fresh food handling. By leveraging sensor fusion and machine learning, the system can autonomously interpret its surroundings, detect task execution failures, and refine its actions through human interaction and environmental feedback.
Moreover, IntelliMan focuses not only on technological advancement but also investigates how users perceive AI-powered manipulation systems, aiming to enhance acceptability, trust, and cooperation. Through research in shared autonomy, intent recognition, and human-robot interaction, the project contributes to the broader goal of seamlessly integrating intelligent robots into everyday applications.
Key achievements in this reporting period include:
- Developed and tested a Unity-MuJoCo VR simulation integration to assess control strategies for prosthetic embodiment;
- Integrated flexible tactile sensors into the Hannes and AR10 prosthetic hands for improved grasping force measurement;
- Integrated the TIAGo service robot with ROS2 modules for kitchen-related manipulation tasks, incorporating UNIGE’s tactile sensors for precise grasping;
- Implemented a deep learning-based object pose estimation system to enhance object recognition in cluttered environments;
- Established a robotic cell for wire grasping and insertion tasks, with improved task success rates through CNN-based wire position estimation;
- Developed a dual-camera vision system for high-precision grasping in industrial environments;
- Integrated a KUKA iiwa robotic arm with modular multi-fingered grippers and Deep Object Pose Estimation (DOPE) for object recognition, and investigated multi-object grasping strategies to enhance efficiency in food logistics;
- Developed a geometric algebra-based approach to control fusion, applied to optimal control and force sensing, and released an open-source C++ library with tutorials for researchers and developers;
- Designed learning approaches based on Signed Distance Fields (SDFs) and multi-resolution shape encoding for collision avoidance and whole-body manipulation (a minimal SDF sketch follows this list);
- Introduced ergodic control methods for active environmental sensing and Sim-to-Real reinforcement learning policies for grasping;
- Developed shared autonomy models for robotic hand control, integrating sEMG signals and vibrotactile feedback for adaptive grasp strength regulation;
- Created an ontology-driven adaptive manipulation framework using Large Language Models (LLMs) for knowledge representation;
- Investigated incremental learning paradigms for myocontrol applications, demonstrating improvements in prosthetic user control.
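To make the SDF idea above concrete, the following minimal Python sketch shows how a signed distance query and its gradient can add a repulsive term to a task-space velocity. The spherical obstacle, the margin, and the gain are illustrative assumptions, not the project's multi-resolution shape encoding.

```python
# Minimal sketch: using a Signed Distance Field (SDF) for collision avoidance.
# The analytic sphere SDF and the gradient-based repulsion are illustrative
# assumptions, not the project's actual learned shape encoding.
import numpy as np

def scene_sdf(p, center=np.array([0.5, 0.0, 0.3]), radius=0.15):
    """Signed distance from point p to a spherical obstacle (negative inside)."""
    return np.linalg.norm(p - center) - radius

def sdf_gradient(p, eps=1e-4):
    """Central-difference gradient of the SDF; points away from the obstacle."""
    grad = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        grad[i] = (scene_sdf(p + d) - scene_sdf(p - d)) / (2 * eps)
    return grad

def avoid_collision(p, v_task, margin=0.05, gain=1.0):
    """Blend a task-space velocity with a repulsive term when too close to the obstacle."""
    dist = scene_sdf(p)
    if dist < margin:
        v_repulse = gain * (margin - dist) * sdf_gradient(p)
        return v_task + v_repulse
    return v_task

p = np.array([0.45, 0.05, 0.35])            # a point on the robot body
v = avoid_collision(p, v_task=np.array([0.1, 0.0, 0.0]))
print("commanded velocity:", v)
```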
UNIBO and INAIL developed a Hidden Markov Model (HMM)-based shared autonomy framework for robotic hand control, enabling adaptive grasp strength regulation using sEMG signals and vibrotactile feedback. This innovation improves precision and responsiveness in prosthetic and assistive robotic applications.
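As a rough illustration of how such a framework can infer the grasp phase from muscle activity, the sketch below runs a hidden Markov model forward filter over discretized sEMG intensities. All transition and observation probabilities are illustrative placeholders, not IntelliMan's trained model.

```python
# Minimal sketch of HMM-based grasp-phase inference from sEMG, assuming three
# hidden phases (rest, closing, holding) and a discretized sEMG intensity
# observation (low / medium / high). All probabilities are illustrative.
import numpy as np

states = ["rest", "closing", "holding"]
A = np.array([[0.90, 0.10, 0.00],      # transition probabilities
              [0.05, 0.80, 0.15],
              [0.05, 0.10, 0.85]])
B = np.array([[0.80, 0.15, 0.05],      # P(observation | state), obs = low/med/high sEMG
              [0.10, 0.50, 0.40],
              [0.20, 0.50, 0.30]])
pi = np.array([1.0, 0.0, 0.0])         # start in "rest"

def forward_filter(observations):
    """Return the filtered belief over grasp phases after each sEMG observation."""
    belief = pi.copy()
    beliefs = []
    for obs in observations:
        belief = B[:, obs] * (A.T @ belief)
        belief /= belief.sum()
        beliefs.append(belief.copy())
    return beliefs

# Example: rising sEMG intensity suggests the user is closing, then holding.
for b in forward_filter([0, 1, 2, 2, 1]):
    print({s: round(p, 2) for s, p in zip(states, b)})
```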
UPC introduced an ontology-based reasoning framework that integrates Large Language Models (LLMs) for knowledge representation in robotic manipulation. This approach enhances human-robot collaboration by allowing robots to interpret human instructions more effectively and adapt their actions accordingly.
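The sketch below illustrates the general idea of grounding an LLM-proposed plan against an ontology of known objects and affordances. The tiny ontology, the query_llm stub, and the action names are hypothetical stand-ins for UPC's actual framework.

```python
# Sketch of ontology-assisted instruction grounding with an LLM. The tiny
# ontology, the query_llm stub, and the action names are all hypothetical;
# they only illustrate constraining LLM output to known concepts.
ONTOLOGY = {
    "cup":    {"type": "container", "affordances": ["grasp", "pour", "place"]},
    "sponge": {"type": "tool",      "affordances": ["grasp", "wipe", "place"]},
    "table":  {"type": "surface",   "affordances": ["wipe"]},
}

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; here it just returns a canned answer."""
    return "grasp(sponge); wipe(table); place(sponge)"

def ground_instruction(instruction: str) -> list[str]:
    """Ask the LLM for an action plan, then keep only actions the ontology allows."""
    prompt = (f"Known objects and affordances: {ONTOLOGY}\n"
              f"Instruction: {instruction}\n"
              "Reply as 'action(object); ...' using only known affordances.")
    plan = query_llm(prompt)
    valid = []
    for step in plan.split(";"):
        step = step.strip()
        action, obj = step.rstrip(")").split("(")
        if obj in ONTOLOGY and action in ONTOLOGY[obj]["affordances"]:
            valid.append(step)
    return valid

print(ground_instruction("Please clean the table."))
```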
UNIBO and UCLV developed a deep-learning-based vision system for 6D pose estimation using endoscopic cameras and stereo vision, significantly improving the accuracy of robotic handling of thin, deformable objects such as wires and food items.
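A central step in such a system is triangulating matched keypoints from two calibrated views. The minimal sketch below shows this step with OpenCV's triangulatePoints; the projection matrices and pixel coordinates are made up for illustration, and a full 6D pose pipeline would also estimate orientation.

```python
# Sketch of the stereo triangulation step used when estimating the 3D position
# of a thin-object keypoint (e.g. on a wire) from two calibrated views. The
# camera projection matrices and matched pixel coordinates are illustrative.
import numpy as np
import cv2

# 3x4 projection matrices of a rectified stereo pair (P = K [R | t]).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P_left  = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])  # 6 cm baseline

# Matched keypoint (e.g. from a CNN detector) in left and right images, shape (2, N).
pts_left  = np.array([[350.0], [260.0]])
pts_right = np.array([[302.0], [260.0]])

# Triangulate to homogeneous 3D coordinates and dehomogenize.
X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
X = (X_h[:3] / X_h[3]).ravel()
print("3D point in left-camera frame [m]:", X)
```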
EUT and UNIBO introduced an RL-based grasping approach, leveraging Sim-to-Real transfer across multiple robotic platforms.
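A key ingredient of Sim-to-Real transfer is domain randomization: physics parameters are resampled each episode so the policy does not overfit one simulator configuration. The sketch below illustrates this with a stubbed rollout and a toy update rule standing in for a real RL algorithm; it does not reflect the partners' actual training setup.

```python
# Sketch of the domain-randomization idea behind Sim-to-Real grasping policies.
# The environment rollout and the "policy update" are stubs; a real setup would
# use a physics simulator and an RL library.
import numpy as np

rng = np.random.default_rng(0)

def sample_sim_params():
    """Randomize parameters that differ between simulation and the real robot."""
    return {
        "object_mass":   rng.uniform(0.05, 0.5),   # kg
        "friction":      rng.uniform(0.4, 1.2),
        "sensor_noise":  rng.uniform(0.0, 0.02),   # would perturb observations
        "action_delay":  rng.integers(0, 3),       # control steps
    }

def run_episode(policy_params, sim_params):
    """Stub rollout (only uses friction here); replace with a simulator rollout."""
    return -abs(policy_params - sim_params["friction"]) + rng.normal(0, 0.05)

policy = 0.0   # a single scalar "policy parameter" for illustration
for episode in range(200):
    params = sample_sim_params()
    ret = run_episode(policy, params)
    # Toy hill-climbing update standing in for a real RL algorithm (e.g. PPO).
    candidate = policy + rng.normal(0, 0.1)
    if run_episode(candidate, params) > ret:
        policy = candidate

print("trained policy parameter:", round(policy, 3))
```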
FAU developed incremental myocontrol paradigms for simultaneous and proportional control of prosthetic hands, demonstrating improved usability for individuals with upper-limb amputations.
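One common way to realize incremental, simultaneous-and-proportional myocontrol is to refine a linear sEMG-to-activation decoder online, for example with recursive least squares. The sketch below shows such an update; the feature dimension, DoF count, and simulated calibration data are illustrative assumptions rather than FAU's method.

```python
# Sketch of incremental myocontrol: a linear map from sEMG features to hand
# degrees of freedom is refined online with recursive least squares (RLS)
# whenever a new calibration sample arrives. Dimensions are illustrative.
import numpy as np

n_features, n_dof = 8, 2          # e.g. 8 sEMG channels -> wrist + grasp activation

W = np.zeros((n_features, n_dof)) # linear decoder, refined incrementally
P = np.eye(n_features) * 1e3      # inverse covariance estimate
lam = 0.99                        # forgetting factor

def rls_update(x, y):
    """Update decoder W with one (sEMG features, target activation) pair."""
    global W, P
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)          # gain vector
    err = y - (W.T @ x).ravel()              # prediction error on this sample
    W += k @ err.reshape(1, -1)
    P = (P - k @ x.T @ P) / lam

def decode(x):
    """Map current sEMG features to proportional DoF activations."""
    return np.clip(W.T @ x, 0.0, 1.0)

# Simulated calibration: user performs known activations while sEMG is recorded.
rng = np.random.default_rng(1)
true_map = rng.uniform(0, 0.3, size=(n_features, n_dof))
for _ in range(300):
    emg = rng.uniform(0, 1, n_features)
    rls_update(emg, true_map.T @ emg)

print("decoded activations:", decode(rng.uniform(0, 1, n_features)))
```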
IDIAP introduced a novel ergodic control method that enhances robotic exploration and environmental sensing, optimizing surface coverage in manipulation tasks.
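Ergodic control steers a robot so that the time its trajectory spends in each region matches a target coverage distribution. The one-dimensional sketch below follows the classic spectral (Fourier-coefficient) formulation; the target distribution, gains, and weights are illustrative and do not reproduce IDIAP's method.

```python
# 1D sketch of spectral ergodic control: the robot's time-averaged trajectory
# statistics are driven toward a target coverage distribution by comparing
# their cosine-basis coefficients. All parameters are illustrative.
import numpy as np

L, K, dt, u_max = 1.0, 10, 0.01, 0.5
ks = np.arange(K)

def basis(x):
    """Cosine basis functions on [0, L] (unnormalized is fine for a sketch)."""
    return np.cos(ks * np.pi * x / L)

def basis_grad(x):
    return -(ks * np.pi / L) * np.sin(ks * np.pi * x / L)

# Basis coefficients of a target distribution concentrated around x = 0.7.
xs = np.linspace(0, L, 500)
target = np.exp(-((xs - 0.7) ** 2) / (2 * 0.05 ** 2))
target /= np.trapz(target, xs)
phi_k = np.array([np.trapz(target * np.cos(k * np.pi * xs / L), xs) for k in ks])

Lambda = 1.0 / (1.0 + ks ** 2)     # weight low frequencies more
x, c_k = 0.1, np.zeros(K)          # robot position and running trajectory coefficients

for step in range(1, 5000):
    c_k += (basis(x) - c_k) / step                      # time-averaged statistics
    grad = np.sum(Lambda * (c_k - phi_k) * basis_grad(x))
    x = np.clip(x - u_max * np.sign(grad) * dt, 0, L)   # move to reduce the mismatch

print("final position:", round(float(x), 2),
      "largest coefficient error:", round(float(np.max(np.abs(c_k - phi_k))), 3))
```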
FAU conducted a study using eye-tracking, muscle activity, and kinematic data to assess user trust and sense of control in AI-powered prosthetic systems.
EUT developed a method using Large Language Models (LLMs) and Genetic Programming (GP) to automatically generate executable Behavior Trees (BTs) for robotic task execution, making robot programming more accessible to non-expert users.
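The sketch below shows the executable end of such a pipeline: a flat task specification, of the kind an LLM or a GP search might propose, is turned into a small behavior tree and ticked. The node classes, skill stubs, and specification format are illustrative assumptions, not EUT's implementation.

```python
# Minimal behavior tree sketch: a flat spec (as an LLM/GP pipeline might output)
# is converted into executable nodes. Node set and spec format are illustrative.
from typing import Callable, List

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    def __init__(self, name: str, fn: Callable[[], bool]):
        self.name, self.fn = name, fn
    def tick(self) -> str:
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, children: List):
        self.children = children
    def tick(self) -> str:
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

# Skills available on the robot (stubs standing in for real controllers).
SKILLS = {
    "locate_object": lambda: True,
    "grasp_object":  lambda: True,
    "place_object":  lambda: True,
}

def build_tree(spec: List[str]) -> Sequence:
    """Turn a flat list of skill names (e.g. proposed by an LLM) into a Sequence."""
    return Sequence([Action(name, SKILLS[name]) for name in spec if name in SKILLS])

tree = build_tree(["locate_object", "grasp_object", "place_object"])
print("tree result:", tree.tick())
```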
UNIBO designed a Bluetooth-enabled vibrotactile bracelet to provide real-time force feedback to prosthetic users, enhancing their perception of grip strength.
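A typical building block of such a device is the mapping from measured grip force to vibration intensity. The sketch below uses a logarithmic mapping and a stubbed transmission function; the force range, the mapping, and send_to_bracelet are assumptions, not the bracelet's actual firmware or protocol.

```python
# Sketch of mapping a measured grip force to a vibrotactile intensity command.
# The force range, the logarithmic mapping, and send_to_bracelet are
# illustrative; a real device would receive the command over Bluetooth LE.
import math

F_MIN, F_MAX = 0.5, 20.0     # N: below F_MIN no vibration, above F_MAX saturate

def force_to_intensity(force_n: float) -> int:
    """Map grip force [N] to a 0-255 vibration intensity, log-scaled so small
    force changes are easier to perceive at low grip strengths."""
    if force_n <= F_MIN:
        return 0
    f = min(force_n, F_MAX)
    scale = math.log(f / F_MIN) / math.log(F_MAX / F_MIN)
    return int(round(255 * scale))

def send_to_bracelet(intensity: int) -> None:
    """Stub for the Bluetooth write that would drive the bracelet's motor."""
    print(f"vibration intensity -> {intensity}")

for force in [0.2, 1.0, 5.0, 12.0, 25.0]:
    send_to_bracelet(force_to_intensity(force))
```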