Robotic Manipulation Planning for Human-Robot Collaboration on Forceful Manufacturing Tasks

Periodic Reporting for period 1 - HumRobManip (Robotic Manipulation Planning for Human-Robot Collaboration on Forceful Manufacturing Tasks)

Reporting period: 2017-05-01 to 2019-04-30

This project focused on robotic manipulation planning for human-robot interaction during forceful collaboration. Take the example where a human is drilling holes into a board and then cutting a piece out of it. As the human performs these operations, the robot grasps the board for the human, occasionally changing its grasp on the object, always trying to maximize the stability and smoothness of the interaction.
The project addressed several technical challenges associated with this problem. First, it addressed the algorithmic challenge of planning a sequence of grasps for multiple manipulators (e.g. a robot with two arms). Second, it addressed the problem of modelling the human forces applied during the interaction. Third, it addressed the problem of modelling human body posture during the interaction. Finally, it addressed the problem of integrating these algorithms into a real robot system.
The project resulted in algorithms and systems that go beyond the state of the art in addressing these challenges. In particular, to the best of our knowledge, this project produced the first algorithms in the literature that take a manipulation-planning approach to human-robot forceful collaboration. The project also created new collaborations between myself and the School of Biomedical Sciences at the University of Leeds.
In today’s factory automation, robots and humans work in separate areas, making it impossible for them to collaborate and complement each other’s skills. This project developed algorithms which enable human-robot collaboration in manufacturing environments, a key manufacturing technology for the future of Europe.
The first work package was aimed at developing a planning algorithm which, given the description of a manufacturing task, plans the actions of all robots in a human-robot team to perform the task. Below are the three main tasks, which correspond to different versions of the planner:
1. Planning stable grasps: The first task in this work package was the development of an algorithm that can plan stable grasps individually for a sequence of operations. The algorithm particularly addressed the problem of stability checking: given a multi-manipulator grasp on the object and an external force, decide whether the grasp keeps the object stable. This basic algorithm was presented at the 2nd UK Robot Manipulation Workshop, and the associated abstract is attached in the final report.
2. Planning to transfer between grasps: The second task in this work package was to improve the planning algorithm, to enable the robot to smoothly transition between sequential forceful operations. The paper presenting this algorithm was submitted, accepted, and presented at the IEEE/RSJ IROS 2018 conference. This paper is attached in the final report.
3. Using environmental surfaces: Finally, the planning algorithm was extended to work not only using the grippers of the robot manipulators, but also using environmental contacts, e.g. a shared table surface between the robot and the human. This extended algorithm has been submitted to the IEEE/RSJ IROS 2019 conference and is currently under review. The submitted manuscript is attached to the final report.
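To illustrate the kind of stability check described above, here is a hypothetical sketch, not the project's actual implementation: stability against a known external wrench can be posed as a linear feasibility problem, where the columns of `W` are the wrenches the grasp contacts can apply (friction cones linearized into edge wrenches) and we ask whether bounded, non-negative contact intensities can balance the external wrench. The function name and the `f_max` bound are assumptions for this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def can_resist(W, w_ext, f_max=50.0):
    """Return True if the grasp can balance the external wrench w_ext.

    W     : (d, m) matrix whose columns are the wrenches the grasp
            contacts can apply (friction cones linearized into edges).
    w_ext : external wrench of dimension d.
    f_max : upper bound on each contact intensity (assumed actuator limit).

    Feasibility LP: find a with 0 <= a <= f_max and W @ a = -w_ext.
    """
    W = np.asarray(W, dtype=float)
    m = W.shape[1]
    res = linprog(c=np.zeros(m),           # pure feasibility: any objective
                  A_eq=W, b_eq=-np.asarray(w_ext, dtype=float),
                  bounds=[(0.0, f_max)] * m,
                  method="highs")
    return res.status == 0                 # status 0 = feasible optimum found
```

For example, with `W` holding the four planar unit forces along ±x and ±y, an external force of (3, −4) is resisted, while (3, 0) stops being resisted once `f_max` falls below 3.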

The second work package in this project was about the development of models that represent the human choices/characteristics:
1. We developed a way to measure the amount of space between the human and the robot during the collaboration, and we provided an optimization framework to maximize this distance while still holding the object at a pose appropriate for the collaboration. The results were submitted, accepted, and presented at the IEEE-RAS Humanoids 2018 conference. This paper is attached to the final report.
2. We developed a model to capture the forces applied by the human during the forceful operations. Take the example of drilling and cutting, where the actual force direction can deviate from the nominal tool direction as the human applies these operations. Our planner must generate robot grasps that can resist all possible forces and torques within this distribution. We proposed a conic model: if stability is verified at every edge of a polyhedral cone of wrenches, then stability is guaranteed for every wrench inside the cone. This model is part of a journal paper that we are finalizing.
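To make the grasp-placement trade-off in item 1 concrete, here is a toy planar sketch under assumed geometry; the positions, offset, and reach limit below are invented for illustration and are not the paper's formulation. A discretized search over candidate grasp angles keeps only reachable grasps and maximizes the human-gripper clearance:

```python
import numpy as np

# Assumed toy geometry: the forceful operation happens at the origin, the
# human's hand is at `human`, and the robot may grasp the board anywhere on
# a circle of radius `grasp_offset` around the operation point.
human = np.array([0.6, 0.0])
grasp_offset = 0.4
arm_reach = 0.9
robot_base = np.array([-0.2, 0.0])

def grasp_point(theta):
    """Candidate gripper position at angle theta around the operation point."""
    return grasp_offset * np.array([np.cos(theta), np.sin(theta)])

def feasible(g):
    """A grasp is feasible if it lies within the robot's reach."""
    return np.linalg.norm(g - robot_base) <= arm_reach

# Discretized search: among reachable grasps, maximize human-gripper distance.
thetas = np.linspace(-np.pi, np.pi, 361)
candidates = [(t, grasp_point(t)) for t in thetas if feasible(grasp_point(t))]
best_theta, best_g = max(candidates,
                         key=lambda tg: np.linalg.norm(tg[1] - human))
best_clearance = np.linalg.norm(best_g - human)
```

In this toy setup the search places the gripper on the far side of the operation point from the human, which is the qualitative behaviour the optimization framework above is after.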
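The edge-checking guarantee in item 2 rests on convexity: the set of wrenches a fixed grasp can resist is closed under non-negative (conic) combinations, so verifying the edges of a polyhedral cone covers its interior. Here is a hypothetical sketch of enumerating the edges of an n-sided pyramidal cone about a nominal tool direction; the function names and the pyramid approximation are assumptions of this sketch:

```python
import numpy as np

def cone_edges(axis, half_angle, n=8):
    """Edge directions of an n-sided polyhedral cone approximating a
    circular cone of the given half-angle about `axis` (a 3-vector)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # Build an orthonormal pair (u, v) perpendicular to the axis.
    tmp = np.array([1.0, 0.0, 0.0])
    if abs(axis @ tmp) > 0.9:              # avoid a near-parallel helper
        tmp = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, tmp)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    s, c = np.sin(half_angle), np.cos(half_angle)
    return [c * axis + s * (np.cos(t) * u + np.sin(t) * v)
            for t in np.linspace(0.0, 2 * np.pi, n, endpoint=False)]

def stable_for_cone(resists, axis, half_angle, n=8):
    """Stability over the whole cone follows from stability at its edges,
    because the set of resistible wrenches is convex."""
    return all(resists(e) for e in cone_edges(axis, half_angle, n))
```

Here `resists` stands in for whatever per-wrench stability check the planner uses; checking only the `n` edges, rather than sampling the cone's interior, is exactly the saving the conic model buys.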

The final work package focused on integrating the planning algorithm with a real robot system:
1. The first task was to bring together the planning algorithm, implemented in the Python programming language; the kinematic robot modelling and planning software OpenRAVE (http://openrave.org/); a Baxter robot with two manipulators and grippers; and a three-dimensional planning model (a URDF model) of the robot. We made use of external libraries such as or_urdf (https://github.com/personalrobotics/or_urdf) to instantiate a Baxter model within OpenRAVE, and we used the Robot Operating System (ROS) to tie these components into a complete system. We used this system for our experiments and for demonstrations to visitors to our laboratory. A video of this system can be seen at: https://www.youtube.com/watch?v=IHti307yGFY
2. To perform the force modelling task, we needed to collect accurate time-based force-torque data during human-robot interaction. The Baxter robot we used as the main system did not provide such sensory accuracy. Therefore we used an existing UR5 robot arm that we fitted with a Robotiq FT100 force-torque sensor. We connected these components with the rest of the system using ROS, which enabled us to collect accurate force-torque measurements in real time at 100 Hz. This data was then used in building the actual conic force models mentioned above in Work Package 2.
3. The final task in this work package was the development of a graphical user interface. The graphical interface enabled a user to approach the robot, select new operations on an object, and send these to the planner for the robot to then execute, streamlining the complete interaction between the human and the robot.
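As an illustration of how recorded force-torque data of the kind described in item 2 can feed the conic force model, here is a hypothetical sketch, not the project's actual pipeline: fit a circular cone to a batch of measured force vectors by taking the mean direction as the axis and the largest observed deviation as the half-angle. The function name and fitting rule are assumptions of this sketch.

```python
import numpy as np

def fit_force_cone(forces):
    """Fit a circular cone to measured force vectors (one sample per row).

    Axis       = normalized mean of the sample directions.
    Half-angle = largest angular deviation of any sample from that axis.
    """
    F = np.asarray(forces, dtype=float)
    dirs = F / np.linalg.norm(F, axis=1, keepdims=True)  # unit directions
    axis = dirs.mean(axis=0)
    axis /= np.linalg.norm(axis)
    # Angle of each sample from the axis; clip guards against rounding.
    half_angle = np.arccos(np.clip(dirs @ axis, -1.0, 1.0)).max()
    return axis, half_angle
```

The resulting axis and half-angle are exactly the parameters the cone-edge stability check consumes, closing the loop from sensing to planning.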
The project resulted in novel algorithms beyond the state of the art, as evidenced by multiple publications in prestigious robotics conferences. In particular, the planning algorithm presented at IROS 2018, and later extended in our IROS 2019 submission, is to the best of our knowledge the first algorithm in the literature to solve the problem of manipulation planning under changing external forces.
In terms of wider societal impact, I used this project to create various links with industry, particularly in the UK. Visitors to our lab used the Interactive Demonstration GUI developed within the project, together with the robot system, to understand the system's capabilities. On 9-10 April 2019, I organized a UK Robot Manipulation Workshop at the University of Leeds. Researchers from leading UK universities and companies such as Dyson, Ocado Technologies, Amazon Robotics, and Google DeepMind visited and observed the system developed.
Being an individual fellowship, an important impact goal of this project was its contribution to my career as a researcher. This project enabled me to establish myself as a leading robotic manipulation researcher in the UK and Europe, through the publications I produced and the events (workshops and competitions) I organized and took part in. These publications and events are detailed in the final report.