CORDIS - EU research results

Next generation of AI-powered robotics for agile production

Periodic Reporting for period 2 - AGIMUS (Next generation of AI-powered robotics for agile production)

Reporting period: 2023-10-01 to 2025-03-31

Context and motivation – European manufacturing is shifting to small-batch, high-mix production, yet today’s industrial robots remain single-purpose, floor-fixed and slow to reprogram. This rigidity blocks the robotization of sectors such as aircraft, satellite and lift manufacturing, as well as customized packaging lines, where every product variant demands new motions and tool paths. The bottleneck is limited robot perception, planning and learning: motion planners cannot sequence multi-step manipulations, reinforcement-learning policies are fragile and data-hungry, and vision systems fail on unseen objects. The result is costly downtime, material waste and lost competitiveness.

AGIMUS aims to deliver an open-source, AI-powered framework that transforms existing mobile manipulators into general-purpose, high-performance shop-floor assistants capable of adapting to dynamic manufacturing environments. The project’s overall objectives are structured around four core strategic goals.
The first objective focuses on advancing perception, planning, and control capabilities. AGIMUS targets sub-2 mm accuracy in marker-less object tracking, the ability to perform motion planning based on online video demonstrations, and the execution of adaptive control policies operating at feedback rates of at least 500 Hz.
The second objective addresses the integration of offline policy training with real-time adaptation. To this end, AGIMUS employs cloud-based differentiable-physics simulations to build a reusable “memory of motion,” which in turn guides model predictive control (MPC) running directly on the robot.
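The “memory of motion” idea can be illustrated with a minimal sketch: trajectories optimized offline are stored keyed by task parameters, and the nearest stored motion seeds the on-robot MPC so it starts close to a good solution. The class, its methods and the nearest-neighbour lookup below are illustrative assumptions, not AGIMUS APIs.

```python
import numpy as np

class MotionMemory:
    """Toy 'memory of motion': stores trajectories optimized offline,
    keyed by task parameters, and returns the nearest one as an MPC
    warm start. Illustrative sketch only, not the AGIMUS implementation."""

    def __init__(self):
        self.keys, self.trajs = [], []

    def add(self, task_param, trajectory):
        # Offline phase: record a solver result under its task parameters
        self.keys.append(np.asarray(task_param, float))
        self.trajs.append(np.asarray(trajectory, float))

    def warm_start(self, task_param):
        # Online phase: nearest-neighbour lookup in task-parameter space
        d = [np.linalg.norm(k - task_param) for k in self.keys]
        return self.trajs[int(np.argmin(d))]

# Offline: populate the memory from (here: placeholder) solver runs
mem = MotionMemory()
mem.add([0.0, 0.0], np.linspace(0, 1, 5))
mem.add([1.0, 0.0], np.linspace(0, 2, 5))

# Online: the MPC seeds its first iterate from the closest stored motion
guess = mem.warm_start(np.array([0.9, 0.1]))  # → the [1.0, 0.0] trajectory
```

In practice the lookup would use a learned or structured index rather than brute-force distances, but the warm-start principle is the same.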
The third objective is to enhance robot autonomy through ancillary services. These include trajectory optimization and a cloud/edge computing split enabled by 5G connectivity, together yielding 3–5% lower energy consumption and a 5% increase in battery autonomy.
Finally, AGIMUS will demonstrate and validate its framework through rigorous stress-testing in three dedicated testing zones and real-world deployment across three industrial pilot sites. These demonstrations, covering more than six diverse use cases, aim to deliver measurable productivity improvements of at least 15% and material waste reductions of up to 20%.

Project pathway to impact
The AGIMUS project follows a structured and impact-driven pathway designed to ensure that its outcomes address real industrial needs while maximizing scientific, economic, and societal benefits.
The foundation of the project is laid in WP1 through extensive stakeholder interviews, value chain mapping, and the integration of ethics-by-design principles. This ensures that the developed solution is aligned with actual shop-floor requirements while embedding safety, privacy, and trustworthiness from the outset.
The second stage, spanning WPs 2 to 4, focuses on the creation of advanced technological components. AGIMUS develops and refines state-of-the-art AI models for perception, planning, and control, supported by a robust cloud-edge architecture that leverages 5G to enable fast and adaptive response times in real-world conditions.
In WP5, the project shifts to system integration and stress-testing. Here, the developed components are embedded into mobile manipulators and tested under demanding conditions across three cooperative testing zones, allowing for iterative refinement and validation.
The fourth stage, carried out in WP6, involves full deployment in industrial pilot environments, including aerospace, elevator manufacturing, and packaging lines. These pilots demonstrate the framework’s versatility and economic viability, generating quantitative evidence to support the creation of best-practice guidelines for use by standardization bodies and policymakers.
Finally, in WP7, the project’s dissemination and exploitation activities ensure long-term impact. The majority of software and datasets are released under open-source licenses, with long-term stewardship guaranteed by an INRIA-hosted consortium. At the same time, a dedicated IP strategy ensures that commercially valuable results are protected and further developed.
The scale and significance of AGIMUS’ expected impacts are substantial. Technologically, it positions Europe as a first mover in the domain of versatile, AI-powered mobile manipulators, in alignment with Horizon-CL4 strategic goals. Economically, the industrial pilots aim to deliver at least a 15% increase in productivity and up to a 20% reduction in material waste, strengthening SME competitiveness. From a sovereignty perspective, the project reduces Europe’s dependency on non-EU robotic software by developing fully open and European-controlled solutions. Socially, AGIMUS emphasizes participatory design and workforce engagement, aiming for over 75% acceptance among shop-floor workers—a key factor for long-term adoption and impact.
During the second reporting period (months 13–30) the consortium moved from individual algorithm design to an integrated, shop-floor-ready stack. Work began with two “coding weeks” in Toulouse and Prague, where the partners jointly produced the first ROS 1/ROS 2 drafts of the AGIMUS architecture and embedded the ethics-by-design grid in every software module. In parallel, the Motion Solver Toolkit delivered a differentiable-physics simulator that reaches 115,000 contact samples per second and a new sparse MPC optimizer, Aligator, which cut the Shelf-1 planning task from five seconds to one, a five-fold speed-up. The Offline Policy Training team introduced vision-guided task-and-motion planning that learns directly from monocular video; using the HappyPose tracker, they achieved 92% pose-tracking accuracy within a 2 cm tolerance and reliable six-step plans across nine novel objects. On-robot adaptation progressed with a 1 kHz visual MPC loop demonstrated on a Franka arm and accepted for publication at ICRA 2025. Finally, the architecture was deployed in three cooperative testing zones, where six generic skills (pick-and-place, screw-driving, deburring and others) were validated on Panda manipulators, fully specifying the six industrial pilot cases for the next phase.
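A pose-tracking figure of the “accuracy within a tolerance” kind can be computed as the fraction of frames whose estimated object position lies within a distance threshold of ground truth. The sketch below shows that simple translation-error criterion; the actual HappyPose/AGIMUS evaluation protocol, and the data used, are not specified here and the example values are invented.

```python
import numpy as np

def pose_tracking_accuracy(est_positions, gt_positions, tol=0.02):
    """Fraction of frames whose estimated object position lies within
    `tol` metres of ground truth (simple translation-error criterion;
    illustrative, not the project's exact evaluation protocol)."""
    err = np.linalg.norm(
        np.asarray(est_positions) - np.asarray(gt_positions), axis=1)
    return float(np.mean(err < tol))

# Synthetic example: 3 of 4 frames fall within the 2 cm tolerance
gt  = np.zeros((4, 3))
est = np.array([[0.005, 0.0,   0.0],
                [0.010, 0.010, 0.0],
                [0.000, 0.019, 0.0],
                [0.050, 0.0,   0.0]])
print(pose_tracking_accuracy(est, gt))  # → 0.75
```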
The differentiable-physics engine and unified contact solver constitute the first European platform able to supply analytical gradients for rigid- and compliant-contact manipulation at speeds 100 times faster than automatic-differentiation baselines, opening the door to gradient-based reinforcement learning and whole-body MPC for contact-rich tasks. The Aligator/ProxDDP toolchain then turns those gradients into real-time, collision-aware trajectories, cutting robot set-up time from weeks to hours. Vision-guided task-and-motion planning from video demonstrations overcomes the programming barrier for high-mix factories, while the 1 kHz HappyPose-driven visual MPC removes the need for fiducial markers in precision pick-and-place. The majority of components are released as open-source libraries (Pinocchio, Crocoddyl, Aligator, HappyPose) and paired with public datasets, establishing a European reference stack for agile robotics. Industrial pilots are expected to raise productivity by 15% and reduce scrap by 20%, with deployment costs for new product variants halved.
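What “analytical gradients” of a contact model means can be made concrete with a toy example: a 1-D point mass over a compliant (spring-damper) ground. A differentiable engine returns the Jacobian of the simulation step in closed form, rather than by finite differences or automatic differentiation. This is a deliberately minimal sketch of the idea, not the AGIMUS solver; all constants are invented.

```python
import numpy as np

K, D, DT, M, G = 1000.0, 10.0, 1e-3, 1.0, 9.81  # stiffness, damping, dt, mass, gravity

def step(x):
    """Semi-implicit Euler step for a 1-D point mass with a compliant
    (spring-damper) ground contact at height 0. State x = [height, velocity]."""
    h, v = x
    pen = max(0.0, -h)                          # penetration depth
    f = K * pen - D * v if pen > 0 else 0.0     # compliant contact force
    v_new = v + DT * (f / M - G)
    h_new = h + DT * v_new
    return np.array([h_new, v_new])

def step_jac(x):
    """Analytic Jacobian d(step)/dx — what a differentiable engine supplies
    in closed form instead of via finite differences or autodiff."""
    h, _ = x
    in_contact = h < 0.0
    dfdh = -K if in_contact else 0.0            # ∂f/∂h inside contact
    dfdv = -D if in_contact else 0.0            # ∂f/∂v inside contact
    dv_dh = DT * dfdh / M
    dv_dv = 1.0 + DT * dfdv / M
    return np.array([[1.0 + DT * dv_dh, DT * dv_dv],
                     [dv_dh,            dv_dv]])
```

A gradient-based planner or policy-learning loop backpropagates through `step_jac` across the horizon; the closed form is what makes that cheap relative to differentiating through a black-box simulator.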