Next generation of AI-powered robotics for agile production

Periodic Reporting for period 1 - AGIMUS (Next generation of AI-powered robotics for agile production)

Reporting period: 2022-10-01 to 2023-09-30

AGIMUS aims to deliver an open-source breakthrough innovation in AI-powered agile production, introducing solutions that push the limits of perception, planning, and control in robotics, enabling general-purpose robots to be quick to set up, autonomous, and able to easily adapt to changes in the manufacturing process. To achieve such agile production, AGIMUS leverages cutting-edge technologies and goes beyond the state of the art to equip current mobile manipulators with a combination of (i) an advanced task and motion planner that can learn from video demonstrations available online; (ii) optimal control policies obtained from advances in reinforcement learning based on efficient differentiable physics simulations of the manufacturing process; and (iii) advanced perception algorithms able to handle objects and situations unseen during initial training. The AGIMUS solutions and their impact will be demonstrated and thoroughly stress-tested in 3 testing zones, as well as 3 industrial pilots in Europe, under numerous diverse real-world case studies and scenarios (different tools, environments, processes, etc.):

1) tooling tasks with a mobile manipulator robot on an aircraft assembly line, in a human-compatible environment.

2) assembly of customized products in the elevator manufacturing process.

3) customized packaging on the THIMM pack’n’display shop floor.

In every step, and from the very beginning, AGIMUS will go beyond current norms and involve a wide range of stakeholders, starting from the production line itself, to identify the essential ethical-by-design principles and guidelines that can maximize acceptance and impact.

AGIMUS is a collaborative project that seeks a breakthrough in the use of mobile multi-arm manipulator robots in industrial small-batch manufacturing contexts. The consortium is advancing several concepts centered on optimization and model-based predictive control of complex robots, to achieve generic tasks with minimal programming and a high level of autonomy. The impact of our methods is emphasized by demonstrating the technology in 3 relevant industrial environments of the 3 industrial end users of the project: AIRBUS in France, KLEEMANN in Greece and THIMM in the Czech Republic. The objectives of the project require combining expertise in motion planning (brought by LAAS-CNRS, Toulouse, France), computer vision (brought by CTU, Prague, Czech Republic), optimization and machine learning (brought by INRIA, Paris, France), robot software architecture and deployment (brought by Toward, Toulouse, France) and robot hardware design (brought by PAL, Barcelona, Spain), supported by expertise in project management (brought by Q-PLAN, Thessaloniki, Greece).
Benefiting from a quickly assembled team, the first year has been the time to release the first scientific prototypes and to define the structure in which they will be implemented, integrated, evaluated, and demonstrated. The first year has also focused on setting up the consortium and management structure, and on refining the activity planning, in particular the work architecture, the experiment and evaluation set-up, and the definition of the scenarios of the industrial pilots. Several achievements have already led us to publish papers and software for peer evaluation.

We have proposed a first implementation of an efficient differentiable simulator based on differentiable collision detection, benchmarked it, and shown it to be more efficient than the best implementations in the literature. We have proposed a new algorithm for efficiently solving optimal control problems with hard constraints. This algorithm, along with other tools based on vision learning, has allowed us to implement a clean and extensible human gesture tracker, which we have used to perform task-and-motion planning guided by human demonstration on a real manipulator robot. Our object pose estimator MegaPose, published in open source in the HappyPose toolbox, received an award at the BOP Challenge of the major vision conference ICCV. We used it to implement a robust and efficient vision-guided model predictive controller on the real robot and extended the method to bridge the gap between optimal control and reinforcement learning.
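To make the role of a differentiable simulator in optimal control concrete, the sketch below shows the general principle: roll out a simulation whose every operation is differentiable, then improve a control sequence by following the gradient of a task cost through the simulation itself. This is a minimal illustration only, written in JAX with an invented 2D point-mass model; it does not reflect the project's actual simulator, its collision-detection machinery, or its API.

# Minimal sketch of optimal control through a differentiable simulator.
# Illustration of the principle only; the dynamics and all names here
# are invented for the example, not taken from the AGIMUS software.
import jax
import jax.numpy as jnp

DT, HORIZON = 0.05, 40  # integration step and number of control steps

def rollout(controls, x0):
    """Simulate a 2D point mass with explicit Euler steps.

    State is (position, velocity); controls are accelerations.
    Every operation is a JAX primitive, so the whole rollout is
    differentiable with respect to the control sequence.
    """
    def step(state, u):
        pos, vel = state
        vel = vel + DT * u
        pos = pos + DT * vel
        return (pos, vel), pos
    (_, _), trajectory = jax.lax.scan(step, x0, controls)
    return trajectory

def cost(controls, x0, goal):
    traj = rollout(controls, x0)
    # Terminal goal error plus a small control-effort penalty.
    return jnp.sum((traj[-1] - goal) ** 2) + 1e-3 * jnp.sum(controls ** 2)

x0 = (jnp.zeros(2), jnp.zeros(2))     # start at rest at the origin
goal = jnp.array([1.0, 0.5])
controls = jnp.zeros((HORIZON, 2))
grad_fn = jax.jit(jax.grad(cost))     # gradients flow through the simulation
for _ in range(200):
    controls -= 0.5 * grad_fn(controls, x0, goal)

print("final position:", rollout(controls, x0)[-1])

The same pattern scales to contact-rich manipulation once collision detection is itself differentiable, which is precisely what makes the simulator's efficiency matter for trajectory optimization and reinforcement learning.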

Based on the analysis of the requirements, the deployment architecture established in the same period will now allow us to integrate the first prototypes into a complete robot framework for evaluation in three experimental scenarios, whose structure and benchmarks have also been precisely defined. The next period will see this first experimentation and the refinement of the scientific prototypes, in preparation for the final deployment of the industrial pilots in the last phase of the project.
During the first period, we have produced a new method for object pose estimation, MegaPose, which does not require knowledge of the object model at training time. Implemented in the new HappyPose toolbox, it received the award for the best open-source method in the BOP Challenge at ICCV.
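For intuition, model-based pose estimators of this family refine a pose by rendering the known object model under a candidate pose and minimizing the discrepancy with the observation (render-and-compare). The toy sketch below illustrates that idea in 2D with gradient-based refinement in JAX; the planar setup and all names are invented for the example and are not the HappyPose or MegaPose API.

# Toy illustration of the render-and-compare idea behind model-based
# pose refinement: "render" the known object model under a candidate
# pose, compare with the observation, and descend the discrepancy.
import jax
import jax.numpy as jnp

model = jnp.array([[0.0, 0.0], [1.0, 0.0], [1.0, 0.5], [0.0, 0.5]])  # object points

def render(pose, points):
    """'Render' = transform the model points by a planar pose (angle, tx, ty)."""
    theta, t = pose[0], pose[1:]
    R = jnp.array([[jnp.cos(theta), -jnp.sin(theta)],
                   [jnp.sin(theta),  jnp.cos(theta)]])
    return points @ R.T + t

def discrepancy(pose, observation):
    return jnp.sum((render(pose, model) - observation) ** 2)

true_pose = jnp.array([0.4, 0.3, -0.2])
observation = render(true_pose, model)   # synthetic "image" of the object
pose = jnp.zeros(3)                      # coarse initial estimate
refine = jax.jit(jax.grad(discrepancy))
for _ in range(300):
    pose -= 0.05 * refine(pose, observation)

print("estimated pose:", pose)           # approaches true_pose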

The project has produced several papers during the first year, which are now under peer review. If accepted, each will correspond to significant progress beyond the state of the art in robot simulation, trajectory optimization, motion learning, and control.