
AUTOnomous co-operative machines for highly RECONfigurable assembly operations of the future

Final Report Summary - AUTORECON (AUTOnomous co-operative machines for highly RECONfigurable assembly operations of the future)

Executive Summary:
Several industrial sectors today still rely on linear sequences of operations in which the same manual and automated tasks are repeated in each cycle. This paradigm is efficient when the line runs at maximum capacity with no breakdowns, but very inefficient when the line is de-saturated.

AUTORECON proposes the enablement, development and introduction of:
• Autonomous, exchangeable and mobile production units which can change tasks and position on the shop floor,
• Highly interactive robotic structures, and
• Random production flow,
with all of these integrated under a common and open architecture.

The factory of the future as envisaged by AUTORECON encompasses:
• Reconfigurable, transformer-like tools that enable autonomous assembly equipment to adapt the production process to process disturbances and market variations. The concept integrates novel actuators, end effectors with multiple connection points and advanced sensing capabilities, as well as mobile robotic units, fostering efficient multi-variant production.
• Intelligent Control & Monitoring systems enabling enhanced performance and high reconfiguration abilities using distributed controls. The AUTORECON unit control will fuse data coming from a peripheral sensing network to make resources aware of disturbances.

At the line control level, a service oriented architecture will enable:
• Autonomous communication between resources for deciding on adaptation actions;
• The effortless integration of new equipment.

A demonstration in the automotive industry involves the live reconfiguration of an assembly line in the case of a simulated breakdown. The AUTORECON intelligent control can evaluate alternative actions (e.g. use of mobile robotic units etc.) and select the optimal one. A second demonstration in the consumer goods industry case uses the AUTORECON reconfigurable grippers and sensing network in order to pick randomly placed components and route them by using cooperative robot handling.

Project Context and Objectives:
AUTORECON addresses the fact that assembly lines today are still organized as fixed linear sequences of operations, where manual and automated tasks are repeated in the same way in each cycle, optimized for that single sequence. Accordingly, the control systems of such assembly lines are organized in a hierarchical architecture. This paradigm is efficient when production is set to maximum capacity and there are no halts due to technical problems. However, complexity increases dramatically when assembly operations for different models of the same product, or even for different products, are mixed on the same line.

AUTORECON aims at developing solutions that enable the sequence of operations in a plant to be changed. This is achieved by introducing autonomous production/handling units which can change task and position, cooperating among themselves, based on the current process sequence. One of the key competences is the ability to recover from failures of any robot or tool by switching position or job and automatically reconfiguring the tools and the line. The described approach allows a quick response to production stops, thus reducing losses as much as possible. Based on the above, AUTORECON aims to develop technology that overcomes the bottlenecks imposed by fixed control logic and rigid flow-line structures with their expensive, single-purpose transportation equipment and complex programming methods. AUTORECON will achieve its objectives through the following developments:
• Reconfigurable tools that enable autonomous flexible assembly equipment to easily adapt the production process to process disturbances and market variations, integrating novel actuators and wireless sensor networks. Towards this concept, within AUTORECON handling operations are carried out using end effectors equipped with multiple connection points, which allow the exchange of both the part and the end-effector during the process. To achieve this functionality, innovative actuation and clamping devices are being developed, allowing the manipulation of different parts, fostering efficient multi-variant production and demonstrating highly dexterous behavior.
• A mobile robot architecture to enhance plant/line/cell auto-reconfigurability. The concept allows the use of mobile robotic units that can easily be transferred around the shop floor and automatically take up tasks in cooperation with the robots already installed in the assembly line.
• Intelligent Control & Monitoring systems enabling enhanced performance and high level reconfigurability of production processes using a distributed control architecture. AUTORECON aspires to utilize a sensor driven approach to empower a decentralized control framework based on a Service Oriented Architecture. The sensing network will be responsible for identifying and capturing signals that will be used to produce reaction commands in real time. For this purpose methods for the systematic and automated generation of alternative reaction scenarios are under development in the project.
• Integration & communication architecture to allow easier integration and networking of the control systems utilizing agent-based, web-service and ontology technologies. The communication architecture will involve all the mechanisms for:
o Easily plugging in new components/robots, allowing their automatic set-up and operation;
o Enabling robot-to-robot co-operation, allowing robot-based transfer of product parts without the use of traditional transfer lines or fixed on-ground tooling.

The main concepts underlying the proposed architecture concern the following 3 pillars: 1) Reconfigurable/mobile, "transformer"-like tools; 2) Intelligent Monitoring & Control Systems; 3) Open Architecture for Manufacturing Systems. In more detail, the project moves:
From: expensive, rigid flow lines; single-purpose transportation equipment. To: a mobile robot architecture; robot-based transfer of product parts; tools equipped with smart sensing and actuating capabilities.
From: centralized decision making with fixed and rigid control logic; systems that cannot evolve over their lifecycle. To: service-oriented control; a peripheral sensing network enabling reaction and modification of operation in real time; random production flow.
From: complex programming methods; large inflexible software packages; control applications expensive to implement and maintain. To: an open architecture; easy plugging of resources; automatic co-operative operation.

The project aims to achieve the following industry-driven objectives:
1. Reduction of reconfiguration time
- Increased cost-efficiency by minimizing reconfiguration time from 3 weeks to 3 weekends, and by reducing occupied floor space by 30%, fostering a zero-loss launch;
- Reduction of programming effort by up to 50% (from 800 to 400 man-hours) by developing the open communication and integration architecture, ontology technology, and standardized production items;
- Line reconfigurability improved from 3 calendar weeks (moving from on-ground geo-fixturing to robot-mounted end-effectors) to 3 consecutive weekends.
2. Enablement of random production flows:
• Increase reconfigurability at line level from fixed flow to random product flow by developing mobile production units and tools to enable random routing of parts within the plant.
• Minimize the impact imposed on production by the launch of a new model through enhanced reaction capabilities (reconfiguration using mobile units and flexible end-effectors).
• Reduce the cost of introducing a new model by up to 70%.
3. Enablement of flexible and reusable tooling
• Increased reconfigurability by increasing handling flexibility, thus allowing to handle multiple product models/variants considering logistics constraints;
• Increased productivity and reduced cycle time (minimization of picking/dropping operations) thanks to flexible robot-to-robot production (handling/welding) and the elimination of stationary tooling;
• Increased reliability/availability of the flexible end-effectors (and hence of the production line) introducing auto-calibration to ensure expected tolerances;
• Reduce the number of model-specific end effectors at line level, from the 150 currently used for single-model lines to 75, and from 250 to 75 for lines with 4 models (considering 60 jobs per hour).
• Reduction of module specific investment (as % of total investment) from 60% to 40%.
• Minimization of environmental impact by reducing energy consumption by 25% per year and eliminating unnecessary waste (reduced consumables and scrap material, better utilization of compressed air).

Project Results:
Automotive industry Demonstrator main results
Component Architecture and Design
Dexterous Gripper

The dexterous gripper has the ability to grasp, lift and manipulate several metal automotive parts. Its dexterity is clearly visible, as it can adapt and reconfigure itself to manipulate 9 different parts. The dexterous gripper is powered through the tool changer by the COMAU Robot. Its embedded PC controls the motion of its fingers and exposes its ROS interface. Through this interface, the gripper communicates with all the other AUTORECON resources to perform the operations assigned to it.
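The gripper's service interface can be illustrated with a plain-Python stand-in for its ROS services. This is a sketch only: the class, the finger-configuration model and the return values are assumptions; of the service names, only acquire_part appears in the report, while the reconfigure and release behaviors are described in prose.

```python
# Toy stand-in for the dexterous gripper's embedded-PC service layer.
# In the real system these would be ROS services exposed over the network.

class DexterousGripperStub:
    def __init__(self):
        self.configuration = "park"   # current finger configuration
        self.holding = None           # part currently locked, if any

    def reconfigure(self, part_id):
        # Move the fingers to the preset positions for the given part
        # so they can enter the part's holes (9 presets in the demo).
        self.configuration = part_id
        return True

    def acquire_part(self, part_id):
        # Lock the part only if the fingers are already configured for it.
        if self.configuration != part_id:
            return False
        self.holding = part_id
        return True

    def release_part(self):
        self.holding = None
        return True

gripper = DexterousGripperStub()
gripper.reconfigure("part_3")
assert gripper.acquire_part("part_3")
```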

Flexible Gripper
The flexible gripper is used to grasp big parts of different size and weight that can be either symmetric or non-symmetric. For this reason, the gripper has the flexibility to reconfigure itself according to the piece it is going to grasp. The development of the double tool changer on the gripper increases the flexibility of the production line: the gripper, along with the work piece, can be exchanged between two Robots, reducing the work-piece exchange time. Through the tool changer, the flexible gripper can be powered directly from the COMAU Robot.
The embedded PC of the gripper is placed inside the control unit. Using an external Wi-Fi adapter, it connects to the AUTORECON network and exposes its ROS services. The flexible gripper's ROS interface is used for communication with the other AUTORECON resources to control its arms during the execution phase.

Mobile Platform
The Mobile Platform, together with the COMAU Robot mounted on it, constitutes the Mobile Unit. The Mobile Unit can move inside the factory according to the needs of the production: in the case of an emergency breakdown of another Robot, it can navigate to a docking station close to it, take over the stopped operation by picking up the tool in use (gripper, welding gun, etc.) and continue the operation.
High-precision visual servoing is used for docking the platform at the docking stations, and an autonomous multi-component navigation system is used for moving the platform between docking stations. The mobile platform has its own ROS interface, which is used for controlling the various operations of the platform as well as for communicating with the other AUTORECON resources and the control software.

Vision System
For the AUTORECON project, the Recognisense vision system was used. Recognisense is developed by COMAU and is integrated with COMAU Robots. A Baumer camera and an external PC were used, all on the same network as the Robot. The Recognisense software ran on the external PC, correcting the Robot's position.
Recognisense was used on two occasions during the automotive demonstrator:
• After docking for updating the user frame of the Robot
• During the gripper exchange for correctly entering the free flange of the gripper.

AUTORECON Control Software
AUTORECON Control Software is a multi-level software package that is used to control a flexible manufacturing line of a factory that is based on the previously described components. Several different modules are part of this software.
1. The Jena Server module is a web application that hosts and manages the Ontology repository of the manufacturing line. An OWL file is loaded into the Jena Server and can be edited through the GUI.
2. The AUTORECON GUI has been developed using the Java programming language and the Spring framework. It connects to the JenaServer over HTTP. The user can modify the Ontology data model according to the factory's needs.
3. The Ontology Service is developed as a ROS package that aims to register all the AUTORECON resources and connect them with the Ontology repository for accessing and modifying information during the execution phase.
4. The Robot Service is also a ROS package. It connects to the Gripper ROS interfaces, calling their services whenever needed. Furthermore, it connects over a TCP socket to the Robot Controller in order to send it the operations to be performed by the Robot. Finally, it connects to the Ontology Service for registering, retrieving the schedule, and reporting breakdowns or task completions.
5. The Gripper Services run on the gripper embedded PCs and expose the ROS services for reconfiguring, grasping and releasing the parts, and preparing to be exchanged or released on the tool stand.
6. The AUTORECON planning module is responsible for calling the scheduler in order to assign the tasks efficiently to the available resources. The planning module connects to the JenaServer over HTTP in order to read from and write to the Ontology repository.
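The interaction between the planning module and the Ontology repository can be sketched with a dict-backed stand-in. The resource and task names, and the naive round-robin policy, are assumptions for illustration; the real system reads and writes an OWL model on the Jena Server over HTTP.

```python
# Toy stand-in for the Ontology repository and the planning module's
# read/assign/write cycle. Real AUTORECON stores this in an OWL model.

class OntologyRepo:
    def __init__(self):
        self.tasks = {}        # task_id -> {"resource": ..., "done": bool}
        self.resources = set()

    def register(self, resource):
        self.resources.add(resource)

    def write_assignment(self, task_id, resource):
        self.tasks[task_id] = {"resource": resource, "done": False}

    def schedule_for(self, resource):
        # What a resource retrieves when told to fetch its schedule.
        return [t for t, v in self.tasks.items()
                if v["resource"] == resource and not v["done"]]

def plan(repo, task_ids):
    # Illustrative policy: round-robin the pending tasks over resources.
    resources = sorted(repo.resources)
    for i, task in enumerate(task_ids):
        repo.write_assignment(task, resources[i % len(resources)])

repo = OntologyRepo()
repo.register("FixedRobot")
repo.register("MobileRobot")
plan(repo, ["load_part_1", "load_part_2", "weld_tunnel"])
```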

Physical Demonstrator Analysis
The AUTOMOTIVE demonstrator was created to prove the feasibility of an autonomous and flexible production line able to reconfigure itself dynamically according to the current situation. The demonstrator took place at TOFAS premises, and all the partners took part in the final tuning and integration of the AUTORECON components so that they could operate without problems in the specific cell.

Components
The components that were installed at the demo area and used for the final demonstration were the following:
• A Smart5 NJ 220 – 2.7 COMAU Robot;
• A Mobile Unit (Tecnalia’s Mobile Platform + Smart5 NJ4 170 – 2.5 COMAU Robot);
• A Flexible Gripper;
• A Dexterous Gripper;
• A Welding Robot (Smart NH4 COMAU Robot + COMAU Welding Gun);
• Various automotive parts (9 single parts and 2 subassemblies) placed on especially designed racks;
• 1 mobile fixture especially designed for moving the Tunnel subassembly from the loading area to the welding area;
• The AUTORECON-Master pc where the AUTORECON Control Software was running.

Network Setup
For the needs of the demo, a wireless network was set up (SSID: AUTORECON), to which all the AUTORECON resources were connected. Two wireless access points were used: one next to the AUTORECON-Master pc and one on the Mobile Platform. The access points were configured to connect to each other, and each was connected to a switch, so every device connected to either switch was automatically part of the AUTORECON network.
• The AUTORECON-Master pc, the Fixed Handling Robot and the Welding Robot were connected by cable to the switch.
• Two cameras were also connected to the switch in order to enable remote access to the demo area.
• All the Mobile Platform PCs and devices, as well as the COMAU Robot mounted on the platform, were connected to the platform switch.
• The dexterous and flexible grippers used wireless adaptors to connect wirelessly to the AUTORECON network.

To enable remote access to the demo area, the AUTORECON-Master pc had internet access through the TOFAS-Guest network, and TeamViewer was installed with a fixed username and password.

1st Part: Loading parts on the fixture
During the first phase of the demonstrator, the Mobile Robot moves to the loading area to load 9 automotive parts onto a mobile fixture. On arrival at the loading area, the Robot on the mobile platform is in the fold-down position and the dexterous gripper is on the tool stand. To calibrate its frame, the COMAU Robot uses the Recognisense vision system: the marker pictures are stored on the Recognisense pc and, using the camera and some built-in PDL routines, the Robot makes corrections and eliminates the deviation. For greater precision, the Recognisense procedure is repeated several times.

When the demo begins, the COMAU Robot corrects its user frame using the Recognisense vision system. Afterwards, it moves, approaches the dexterous gripper on the tool stand, enters into the gripper flange (tool changer) and activates the pneumatic actuator in order to grasp the gripper. Using the dexterous gripper, it has to load 9 parts on the mobile fixture.
Before approaching each part, the gripper must reconfigure itself so that its fingers are in the right position to enter the holes and lock the part. After entering the holes, the gripper locks the part with the acquire_part service call. Next, the Robot moves to the mobile fixture and places the part at the right position.

The sequence of operations for the loading area can be summarized as follows:
1. Mobile platform is already docked. Mobile (COMAU) Robot updates the user frame using the Recognisense vision system.
2. Mobile Robot approaches dexterous gripper
3. Mobile Robot enters the gripper flange (tool changer)
4. Mobile Robot locks the gripper using a pneumatic actuator
5. Mobile Robot moves away from the tool stand with the gripper
6. Mobile Robot approaches the part
7. Dexterous gripper reconfigures
8. Mobile Robot with gripper enters the holes of the part
9. Dexterous gripper acquires part
10. Mobile Robot moves to the fixture
11. Dexterous gripper releases part
12. Mobile Robot moves to home position
13. Mobile Robot moves to tool stand
14. Dexterous gripper changes to park configuration
15. Mobile Robot unlocks and releases the dexterous gripper
Operations 6-11 are repeated for each of the nine parts. The tasks of the Mobile Robot are stored in the Ontology and loaded into the Robot's Robot Service module. This module reads each operation and sends it to the appropriate resource: it sends all the movement commands and the pneumatic-actuator activation commands to the Robot controller, while it calls the gripper ROS services for reconfiguring, acquiring or releasing a part.
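The dispatch logic described above can be sketched as follows. The operation records and the two log lists are illustrative stand-ins for the TCP socket to the Robot controller and the ROS service calls to the gripper.

```python
# Sketch of the Robot Service dispatch step: each stored operation is routed
# either to the robot controller (motion / pneumatic-actuator commands) or
# to the gripper's ROS services. Operation names are illustrative.

ROBOT_OPS = {"move", "lock_tool", "unlock_tool"}
GRIPPER_OPS = {"reconfigure", "acquire_part", "release_part"}

def dispatch(operations, robot_log, gripper_log):
    for op in operations:
        kind = op["type"]
        if kind in ROBOT_OPS:
            robot_log.append(op)       # would go over the TCP socket
        elif kind in GRIPPER_OPS:
            gripper_log.append(op)     # would be a ROS service call
        else:
            raise ValueError(f"unknown operation: {kind}")

# One pick-and-place cycle, mirroring steps 6-11 of the loading sequence:
cycle = [
    {"type": "move", "target": "part_1"},
    {"type": "reconfigure", "part": "part_1"},
    {"type": "move", "target": "part_1_holes"},
    {"type": "acquire_part", "part": "part_1"},
    {"type": "move", "target": "fixture"},
    {"type": "release_part"},
]
robot_log, gripper_log = [], []
dispatch(cycle, robot_log, gripper_log)
```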
After completing the tasks at the loading area, the software forces the execution to pause, and the mobile fixture, with the parts loaded on it, is moved to the welding area.

2nd part: Welding operations
After the fixture is moved to the welding area, a resume command is sent (using the AUTORECON Control Software) to all the resources which enter into execution phase and continue their tasks.
Initially, the welding Robot performs some tack welding (geometry spots) on the parts placed on the fixture. This welding task consists of several “move” and “weld” operations that are also stored in the Ontology repository and sent by the Robot Service to the Robot controller of the Welding Robot.
This task has as a post-condition the task of the Fixed Robot attaching the gripper and grasping the tunnel from the fixture; therefore, the Fixed Robot moves to the Flexible Gripper tool stand, enters the flange (tool changer) and grasps the gripper. With the gripper attached, it moves towards the fixture. Before approaching the tunnel, the gripper reconfigures itself in order to grasp it correctly. After the Robot moves close to the pick-up position of the tunnel, the gripper closes its clamps and the Robot is ready to handle the part for the remaining welding operations.

The sequence of operations in the welding phase of the demo can be summarized as follows:
1. Welding Robot performs tack (geometry) welding on the mobile fixture
2. Fixed Handling Robot moves close to the tool stand and grabs the flexible gripper
3. Fixed Handling Robot moves with the gripper towards the fixture
4. Flexible Gripper reconfigures itself
5. Fixed Handling Robot approaches the tunnel to the pickup position
6. Flexible Gripper closes clamps
7. Fixed Handling Robot moves away from the fixture with the Tunnel subassembly attached to the gripper
8. Fixed Handling Robot handles the tunnel
9. Welding Robot continues welding the tunnel that is grabbed by the Handling Robot
While executing the above, a breakdown event of the handling Robot is simulated. This Robot cannot continue the tasks assigned to it and the production line must reconfigure itself in order to take care of this unexpected event.

Gripper exchange
The Fixed Robot moves to a position that is reachable by the Mobile Robot and simulates a breakdown event. A ROS message is transmitted informing all the resources about the breakdown. All the resources then stop their work immediately after completing their current operation. Afterwards, they negotiate among themselves and one of them is chosen to call the AUTORECON planning module in order to perform a rescheduling.
The AUTORECON planning module re-reads the remaining tasks and operations from the Ontology and, given that the Fixed Robot is not available, assigns its tasks to the only resource that can take its place: the Mobile Robot. After writing the new assignments to the Ontology repository, the resources are informed to retrieve their schedules and perform their work.
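The rescheduling step can be sketched as follows, assuming a simple capability model. The resource names, task names and capability sets are illustrative; the real planner invokes a scheduler over the Ontology repository.

```python
# Sketch of breakdown rescheduling: tasks of the failed resource are handed
# to any remaining resource that declares the needed capability.

def reschedule(assignments, capabilities, broken):
    """assignments: task -> resource; capabilities: resource -> set of tasks."""
    new = dict(assignments)
    for task, resource in assignments.items():
        if resource == broken:
            candidates = [r for r, caps in capabilities.items()
                          if r != broken and task in caps]
            if not candidates:
                raise RuntimeError(f"no resource can take over {task}")
            new[task] = candidates[0]
    return new

assignments = {"handle_tunnel": "FixedRobot", "weld_tunnel": "WeldingRobot"}
capabilities = {
    "FixedRobot": {"handle_tunnel"},
    "MobileRobot": {"handle_tunnel"},   # the only possible substitute
    "WeldingRobot": {"weld_tunnel"},
}
new_plan = reschedule(assignments, capabilities, broken="FixedRobot")
```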
The Mobile Robot must now perform the job of the Fixed Robot using the Flexible Gripper. After querying the Ontology about the place where the job should be performed and about the tool that it needs, it undocks from the loading area docking plate, navigates to the welding area and docks to the docking plate at the welding area.
Now the gripper exchange procedure must take place. The Mobile (COMAU) Robot on the Mobile Platform moves close to the Flexible Gripper's free flange (tool changer) and uses the Recognisense vision system for the fine positioning needed to enter it correctly. Before entering, it notifies the Fixed Handling Robot to enable soft-servo mode and then enter the drive-off state. After entering the flange (tool changer), the Mobile Robot locks the gripper and informs the Fixed Robot, via a ROS service call, to unlock the gripper. After the Fixed Robot unlocks the gripper, the Mobile Robot moves away and continues the handling task.

The sequence of the operations performed in this phase is as follows:
1. Mobile platform undocks from the loading area
2. Mobile platform navigates to the welding area
3. Mobile platform docks at welding area
4. Mobile Robot approaches the Flexible Gripper tool changer flange
5. Fixed Handling Robot enables soft servo
6. Fixed Handling Robot enters drive off mode
7. Mobile Robot uses Recognisense vision system in order to calibrate and correctly enter the tool changer flange of the flexible gripper
8. Mobile Robot enters the gripper’s tool changer flange
9. Mobile Robot locks the gripper
10. Fixed Handling Robot enters drive-on mode
11. Fixed Handling Robot disables soft servo
12. Fixed Handling Robot unlocks the gripper
13. Mobile Robot moves away
14. Mobile Robot continues handling the tunnel
15. Welding Robot performs welding with the Mobile Robot
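The exchange above only works if the receiving robot locks the gripper (step 9) before the giving robot unlocks it (step 12), so the tool is always held by at least one flange. A minimal sketch of that invariant, with illustrative state names:

```python
# Toy model of the gripper-exchange handshake: the gripper may never end up
# unlocked by both robots, otherwise it would be dropped.

class ExchangeProtocol:
    def __init__(self):
        self.locked_by = {"giver"}     # gripper initially held by the giver

    def lock(self, robot):
        self.locked_by.add(robot)

    def unlock(self, robot):
        if self.locked_by == {robot}:
            raise RuntimeError("refusing to unlock: gripper would be dropped")
        self.locked_by.discard(robot)

ex = ExchangeProtocol()
ex.lock("receiver")      # step 9: Mobile Robot locks
ex.unlock("giver")       # step 12: Fixed Robot unlocks
```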

Flexible gripper new reconfiguration
Finally, in order to demonstrate the flexibility of the Flexible Gripper and its capability to change its configuration dynamically according to the geometry of the part to be grasped, the Fixed Robot re-attaches the gripper and attempts to pick up a second automotive subassembly called the vano di carico.
The Mobile Robot now needs to release the tunnel on the rack and then “give” the Flexible Gripper back to the Fixed Robot, so a new gripper exchange takes place. This time the Mobile Robot moves to a position that is known and reachable by the Fixed Robot, enables soft-servo and drives off. The Fixed Robot enters the free flange of the tool changer of the Flexible Gripper, locks the gripper and notifies the Mobile Robot to unlock. Now the Fixed Robot has control of the Flexible Gripper and is ready to perform its next tasks. Before reaching the vano di carico, the gripper is instructed to change configuration using the “reconfigure” ROS service call: its clamps change position in preparation for the new part. Finally, the Robot approaches the vano di carico, the gripper closes its clamps (with the “close_clamps” service call) and the Fixed Robot moves off with the part grasped.

The sequence of the operations for this final phase of the demo is as follows:
1. Mobile Robot moves close to the rack where the tunnel should be released
2. Flexible Gripper opens its clamps and releases the part (“open_clamps” ROS service)
3. Mobile Robot moves away
4. Mobile Robot moves to a position close to the Fixed Robot.
5. Mobile Robot enables soft servo and drives off.
6. Fixed Robot approaches Flexible Gripper free flange.
7. Fixed Robot enters the free flange and locks.
8. Mobile Robot drives on, disables soft servo and unlocks the Flexible Gripper.
9. Fixed Robot moves away
10. Fixed Robot moves near vano di carico
11. Flexible Gripper reconfigures for vano di carico
12. Fixed Robot moves to grasping position
13. Flexible gripper closes its clamps
14. Fixed Robot moves away

Consumer goods industry Demo
Component Architecture and Design
Dexterous Gripper

The dexterous gripper has the ability to grasp, rotate and manipulate parts; here it is used to grasp and rotate the shaver handles. Two identical grippers were used. For its operation, the gripper needs a 24 V DC supply at about 10 A. The motor is controlled by the embedded PC of each gripper. The embedded PC was connected, over a local Ethernet network, with the two robot arms; it runs Ubuntu Linux, and a ROS interface was implemented for the gripper. Using this interface, the gripper can communicate with all the other AUTORECON resources to perform the operations assigned to it. The gripper is connected to the other modules through Ethernet cables.

Vision System
The vision system's role in this scenario was the detection of the shaver handles in the specified field of view on the belt conveyor, as well as the measurement of their features, in order to calculate the real-world coordinates of the handles. The developed vision system is capable of working at the high frequency of this scenario. Like all the other modules in this scenario, the vision system modules have a ROS interface capable of communicating with the grippers and the other modules of the scenario.
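The conversion from image to real-world coordinates can be sketched under the simplifying assumption of a camera mounted perpendicular to the belt with a known scale (mm per pixel) and a known world origin. A real system would use a full camera calibration; all numbers here are illustrative.

```python
# Sketch of the pixel-to-world mapping the vision module performs for each
# detected handle, under a fixed-scale, perpendicular-camera assumption.

def pixel_to_world(u, v, mm_per_px, origin_mm):
    """Map an image point (u, v) in pixels to belt coordinates in mm."""
    x0, y0 = origin_mm
    return (x0 + u * mm_per_px, y0 + v * mm_per_px)

# A handle detected at pixel (320, 240) with 0.5 mm/px and origin (100, 50):
x, y = pixel_to_world(320, 240, mm_per_px=0.5, origin_mm=(100.0, 50.0))
```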

Camera characteristics
The camera used in the demonstrator is a Basler Ace GigE camera (Gigabit Ethernet), with a small form factor matching that of many analog cameras (29 mm x 29 mm). The camera is powered via Power over Ethernet (PoE). To control the camera, the driver provided by the manufacturer was used; it offers a C++ interface.

Physical Demonstrator Analysis
Physical demonstration of the consumer goods scenario

During the demonstration two scenarios were demonstrated, namely the packaging and the feeding scenarios.
With these scenarios it was proved that the different components can be integrated and work at the relatively high speed needed by the real application.
The specification for the output rate of the feeding conveyor, according to the BIC pilot case, is 90 handles/minute, i.e. 1.5 handles/second.

Packaging Scenario
This scenario is part of the packaging stage, where the assembled shavers delivered from the assembly machine are oriented and accurately positioned on the feeding conveyor so that the Smart Six Robots can pick them up and place them in the loading trays.
The setup of this scenario includes two Smart Six Robots, Robot bases, Feeding conveyor, Track conveyor, Loading trays, and Camera setup.

The basic steps for running this scenario are listed below.
1) Preparation of the scenario
a. Activate the Robot controller and start the Robot agent for each Robot. The Robot agents are the programs running on the Robot controller that receive motion commands and execute them. Communication with the other modules is based on the TCP/IP protocol.
b. Power up the grippers and their embedded PCs. After each embedded PC has booted, the applications that implement the ROS interface for each gripper are started.
c. Activate the Vision module. The Vision Module is the software that connects to the camera and is responsible for the acquisition of the image and for the processing of this image to extract the information about the handles.
d. Activate the program running on the main PC, which is responsible for communicating with the Robots and issuing commands.
2) Running the scenario
a. The shaver handles arrive from the machine on the feeding conveyor. The handles are placed well oriented on the conveyor belt and need to be put on the tray.
b. The conveyor belt moves and the handles pass through the field of view of the camera. While the handles are visible to the camera, the vision module calculates the data needed to describe their position and orientation.
c. The module responsible for controlling the Robots uses the data calculated by the vision system. It identifies the handles nearest to the Robot TCP and generates the motion commands for the Robot to grasp those handles. This software is also responsible for calling the appropriate ROS function of the gripper. Two basic functions are used in this scenario: grasping the handles and reconfiguring the handle. The first closes the fingers so the gripper can grasp the handle; with the second, the gripper rotates the handles into a position from which they can be released on the trays.
d. The gripper module is responsible for controlling the gripper. The basic functions essential for this scenario were closing the fingers to grasp the handles, opening the fingers to release them, and rotating the handles 90 degrees so that it is easier for the gripper to release them on the trays.
e. The Robot agents were responsible for receiving and executing the motion commands. The most challenging task was the pick-up from the conveyor belt: during this task the handle is moving on the belt and the coordinates of the motion command must be continuously updated. The main steps of this module were the following:
1. Wait for task to be assigned
2. Pick up the handles from the conveyor belt
3. Release the handles in the slot
4. Again wait for task stage
f. When the trays are filled up, the Robots paused until an operator replaced the full handle tray with an empty one and gave the signal (by pressing a key on the main PC keyboard)
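The continuous update of the pick-up coordinates for a handle moving on the belt can be sketched as follows, assuming a belt moving at constant speed along one axis; all values are illustrative.

```python
# Sketch of the moving-belt pick-up: advance the detected handle position
# by the belt travel between detection time and grasp time.

def grasp_target(detected_xy, belt_speed_mm_s, t_detect, t_grasp):
    """Predict where the handle will be at grasp time (belt moves along +x)."""
    x, y = detected_xy
    dt = t_grasp - t_detect
    return (x + belt_speed_mm_s * dt, y)

# Handle detected at (260, 170) mm; belt at 200 mm/s; grasp 0.4 s later:
target = grasp_target((260.0, 170.0), belt_speed_mm_s=200.0,
                      t_detect=0.0, t_grasp=0.4)
```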
Comments
The gripper is capable of picking up and rotating two handles at the same time. This was not demonstrated in the scenario: a human was responsible for placing the handles on the conveyor belt and could not place all the handles at the same spacing, so it was not possible for the gripper to pick up two handles at the same time.
During the experiment we found that even an error of a few millimetres in the placement could cause the rotation of the handle to fail.

Feeding Scenario
The other scenario where the vision system was used is part feeding, where the Smart Six Robots pick up the randomly positioned and oriented handles from the feeding conveyor and place them on the track conveyor.
The setup of this scenario includes two Smart Six Robots, Robot bases, a feeding conveyor, a feeding tower, and a camera.

Here is a list with the basic steps for running this scenario:
1) Preparation step
This step is similar to the preparation step for the packaging case
2) Running the scenario
a. Feeding the conveyor with handles. The handles are supplied by the machine and lie randomly on the conveyor belt. Given the geometry of the handles, there are three poses that a handle can assume while lying on the belt.
b. As the conveyor belt moved, the handles passed through the field of view of the camera. While a handle was visible to the camera, the vision module calculated the data describing its position, orientation and pose. Unlike in the packaging scenario, identification of the handle pose was also required here.
c. The module responsible for controlling the Robots was the same as in the previous scenario, but now, after pick-up, the handles had to be rotated to the correct orientation. The gripper was responsible for this rotation, and which gripper function was called depended on the pose of the handles.
d. The gripper module is responsible for controlling the gripper. In this case additional gripper functions were used in order to rotate the handles.
e. The Robot agent was similar to the one used in the previous scenario, but the execution included one additional step, rotating the handle so that it could be released onto the tower:
1. Wait for task to be assigned
2. Pick up the handles from the conveyor belt
3. Rotate the handles according to the detected pose
4. Release the handles in the tower
5. Wait for the next task to be assigned
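The pose-dependent rotation in step 3 can be sketched as a lookup from the three possible belt poses to a gripper rotation. The pose labels and angles below are illustrative assumptions, not the project's actual values, and the gripper interface is hypothetical.

```python
# The vision module reports one of the three poses a handle can assume
# while lying on the belt (see step a above); each pose requires a
# different rotation before release onto the feeding tower.
# Pose labels and angles are illustrative assumptions.
POSE_TO_ROTATION_DEG = {
    "flat_up": 0,      # already in release orientation
    "flat_down": 180,  # flip the handle over
    "on_side": 90,     # stand the handle upright
}

def rotation_for_pose(pose):
    """Select the gripper rotation (degrees) for a detected handle pose."""
    try:
        return POSE_TO_ROTATION_DEG[pose]
    except KeyError:
        raise ValueError(f"unknown handle pose: {pose!r}")

def feeding_cycle(handle, gripper):
    """Steps 2-4 of the feeding agent: pick, rotate by pose, release."""
    gripper.close_fingers()
    deg = rotation_for_pose(handle["pose"])
    if deg:
        gripper.rotate_handles(deg)
    gripper.open_fingers()
    return deg
```

Keeping the pose-to-rotation mapping in one table makes it easy to extend if a new handle geometry introduces a fourth stable pose.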
Potential Impact:
The following information can be found also under deliverable D7.1 (“Exploitation and dissemination report”) submitted by the consortium.
The exploitable results achieved by the AUTORECON project are the following:
• Lockable joint mechanisms
• Flexible gripper as a whole product
• Dexterous gripper-automotive (HW+SW)
• "Pino" pins for grasping parts via holes (TOFAS dex gripper)
• Dexterous gripper-consumer goods (HW+SW)
• AGV with autonomous Navigation system
• Mobile platform as a whole product (including Robot with self-calibration algorithm, AGV, Navigation)
• Line level control software
• Unit level control package Automotive
• Unit level control package Consumer Goods

Moreover, chapter 4.2 (“Use and dissemination of foreground” - see attached PDF) reports the main dissemination activities that were performed, summarized in 2 separate tables:
- Table 1 (Template A1): List of all scientific (peer reviewed) publications relating to the foreground of the project.
- Table 2 (Template A2): List of all dissemination activities (publications, conferences, workshops, web sites/applications, press releases, flyers, articles published in the popular press, videos, media briefings, presentations, exhibitions, thesis, interviews, films, TV clips, posters).

In terms of impact, the two demonstrators showed the following:

“The research efforts demonstrate the feasibility and technological advantage of the new European factories of the future in traditional and emerging industrial sectors.”
AUTORECON delivered both cooperative hardware components and a holistic framework for decentralized control of the factory. The technology that AUTORECON builds on has been under investigation and has been applied in a few industrial cases; AUTORECON brought this technology to its limits and further extended it by generating new knowledge. The application of agent-based and service-oriented control technology in a highly flexible, highly cooperative environment that comprises mobile units and highly flexible handling tools, for manufacturing highly complex products with variable properties and geometries in a diversity of industrial sectors such as the automotive and consumer goods industries, ensures that the feasibility of the technology is demonstrated efficiently.

“Results stimulate important innovations in production technology and enhance industrial work environments, especially in traditional sectors including food and agro-industries.”
The results of AUTORECON were achieved by generating new knowledge, especially in the fields of device control, open device connectivity, wireless sensor network integration and flexible handling technology. Significant innovations were achieved, originating from the need to enable flexible production in two distinct industrial sectors.
The automotive industry has been an ideal demonstration environment for such technology, due to the highly dynamic nature of the production environment, which is imposed by the need for multi-variant production in a random flow. The demonstration in the consumer goods industry brought the technology to its limits, due to the large number of subcomponents that need to be handled at high speed and the high quality requirements that impose high precision in handling and control.

“The technology developed drastically improves the international position of European manufacturers with respect to their openness to adopt new manufacturing processes and product innovations.”
The basis of AUTORECON is the utilization and development of open technology. Today's main challenges in adopting new technologies are the lack of open and robust software and the lack of standardized interfaces towards sophisticated sensors. This is one of the main reasons why the adoption of flexible robotic cells is not widespread in industries outside the machine, vehicle and chemical production industries (such as food, industrial machinery, communications, consumer goods, medical etc.), as estimated by the International Federation of Robotics (IFR) (www.ifr.org). The highly reconfigurable solutions demonstrated by AUTORECON, integrated under the open AUTORECON architecture, provide the means to overcome these challenges, especially thanks to the reduced effort and complexity that they offer during the introduction of new equipment. The latest Europe-wide report on the robotics market outlook reveals that professional and service robots are the segments with the greatest potential for growth, with particularly significant opportunities in health/medical and field robots. AUTORECON, thanks to the openness that it promotes, plays a crucial role in accelerating the introduction of robotic technology and the development of specialized solutions for each industrial sector. Openness was achieved in two dimensions, hardware and software.
• At the hardware level, highly interoperable end effectors can be exchanged among production units/Robots according to standardized interfaces. Together with the mobile nature of the production units, this ensures an open and standardized hardware architecture that enables highly cooperative production equipment.
• At the software level, the control technology utilizes a service oriented architecture that standardizes the structure of the control software. In addition, this control architecture enables a standardized mechanism of communication among production units based on the ROS framework, resulting in a production environment of extreme cooperative potential, so that European manufacturers will be able to operate more openly.
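As a rough illustration of the service-oriented idea, and not the project's actual ROS interfaces, production units can expose their capabilities as named services that a line controller discovers and invokes; all names below are hypothetical.

```python
class ServiceRegistry:
    """Minimal service-oriented sketch: units register named services,
    and the line controller discovers and invokes them. The AUTORECON
    implementation was built on ROS; this pure-Python version only
    illustrates the pattern, with invented unit and service names."""

    def __init__(self):
        self._services = {}

    def register(self, unit, service, handler):
        """A production unit announces a capability, e.g. 'pick_handle'."""
        self._services.setdefault(service, []).append((unit, handler))

    def providers(self, service):
        """Discovery: which units can currently perform this service?"""
        return [unit for unit, _ in self._services.get(service, [])]

    def invoke(self, service, **kwargs):
        """Call the first available provider of the service."""
        unit, handler = self._services[service][0]
        return unit, handler(**kwargs)

# Hypothetical usage: a mobile robotic unit offers a pick service,
# which the line controller can invoke after a simulated breakdown.
registry = ServiceRegistry()
registry.register("mobile_unit_1", "pick_handle",
                  lambda x, y: f"picked at ({x}, {y})")
```

Because new equipment only has to register its services, the line controller needs no code changes when a unit is added, which is the "effortless integration" the architecture aims for.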

List of Websites:
http://www.autorecon.eu/ (the list of all beneficiaries with the corresponding contact names can be found there)