CORDIS - EU research results

Autonomous Robotic Surgery

Periodic Reporting for period 4 - ARS (Autonomous Robotic Surgery)

Reporting period: 2022-04-01 to 2023-09-30

Robotic autonomy would be an essential feature for providing support, assistance, and guidance to humans in critical situations such as search and rescue operations, manufacturing, and health care. As the current COVID-19 emergency shows, the workforce in hospitals and healthcare organizations is not sized to address a sudden emergency. It could effectively use robotic help, possibly shared among institutions and nations, to support daily activities. To achieve this goal, robots must have sophisticated reasoning capabilities, be aware of their surroundings, and be capable of cognitive interaction with their human counterparts.

To address the challenges of developing an autonomous robot, we focused on robotic surgery because it is one of the leading research and commercialization areas of robotics, has a direct impact on people's lives, and can count on the support of surgeons who could use robots to improve their skills and reduce their workload. We proposed the ARS (Autonomous Robotic Surgery) project to develop some of the scientific theories and basic technologies necessary to achieve an autonomous surgical robot. Of course, we do not expect to reach human trials of any of our technologies, and our main objective is to demonstrate an exemplary autonomous robotic surgery on a phantom. Specifically, the scientific objectives of the ARS project are:
1. To fully analyze and formally represent real surgical interventions with abstract models that integrate a priori knowledge with the results of surgical data analysis.
2. To develop methods to plan an intervention for a specific anatomy.
3. To develop methods for the real time control of the surgical instruments.
4. To develop situation awareness and reasoning methods to identify the current situation and re-plan the task, if necessary.
5. To demonstrate the autonomous execution of a representative surgical intervention using the dVRK setup and a physical phantom.

We were awarded an ERC Proof of Concept (PoC) project to demonstrate some of the ARS technologies in the execution of a prostate biopsy.
We were awarded a second ERC PoC grant to expand the reach of the technologies for prostate biopsies by using the OCT technology.
The research carried out during this Reporting Period has evolved from individual research lines into an integrated effort, leading to demonstrations of increasing surgical relevance. We have achieved the following results:

1. Knowledge modelling and representation. We have clarified the distinction between the top-down approach (models derived from books and expertise) and the bottom-up approach (models derived from data). In particular:

a. Top-down approach. We made progress in developing methods to extract declarative and procedural knowledge about surgical procedures from robotic surgery textbooks. We are also designing methods to interface the natural-language description of a procedure extracted from the textbooks with the Answer Set Programming (ASP) framework used to plan the experiments demonstrated in this period.

b. Bottom-up approach. The approach followed in this research line has been to assume that each surgical procedure can be represented by a hybrid dynamic system consisting of several discrete states, each corresponding to a specific action (e.g. approach, cutting, lifting), and of a dynamic model representing the motion within each discrete state. Under these assumptions, data modelling consists of identifying the temporal segments corresponding to each discrete state of the hybrid model and of modelling the continuous motion within each state. This research led to the successful conclusion of Michele Ginesi's doctoral work.

c. Simulation. Simulation allows us to test the task plan and the assumptions about the biomechanical properties of the tissues involved in the intervention. We are using simulation in different ways: to test a procedure plan, to keep an updated image of the anatomical environment, and as the background knowledge against which we compare the results of the real actions. Since reality is always right, discrepancies between the simulation and the data acquired from the field trigger a revision of the model to make it consistent with the observed anatomical behavior. To simulate the anatomical behavior in real time, we use a Deep Neural Network (DNN), called BA-Net, which is trained with synthetic data generated by an off-line FEM simulator. This research led to the successful conclusion of Eleonora Tagliabue's doctoral work.
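
As a purely illustrative sketch of this training scheme, the snippet below fits a small fully connected network to stand-in data from an offline simulator; the actual BA-Net architecture, the FEM data format, and all names and tensor shapes used here are assumptions made for the example, not the project's implementation.

```python
# Minimal, illustrative sketch: train a fully connected surrogate on synthetic
# data standing in for an offline FEM simulator. The real BA-Net architecture,
# inputs, and FEM pipeline are not described here; all shapes are assumptions.
import torch
import torch.nn as nn

N_NODES = 64           # assumed number of mesh nodes describing the tissue surface
IN_DIM = 3             # assumed input: tool displacement at the grasping point (x, y, z)
OUT_DIM = N_NODES * 3  # predicted displacement of every mesh node


def fake_fem_samples(n):
    """Stand-in for the offline FEM simulator: maps a tool action to nodal displacements."""
    actions = torch.randn(n, IN_DIM)
    weights = torch.linspace(0.1, 1.0, OUT_DIM).unsqueeze(0)  # smooth synthetic response
    displacements = torch.tanh(actions.sum(dim=1, keepdim=True)) * weights
    return actions, displacements


surrogate = nn.Sequential(            # generic MLP, used here in place of BA-Net
    nn.Linear(IN_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, OUT_DIM),
)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    actions, target = fake_fem_samples(256)
    loss = loss_fn(surrogate(actions), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At run time the trained surrogate replaces the slow FEM solver in the loop;
# large errors against intraoperative data would trigger a model revision.
with torch.no_grad():
    print(surrogate(torch.zeros(1, IN_DIM)).shape)  # torch.Size([1, 192])
```

Once trained, a surrogate of this kind can be queried in milliseconds inside the control loop, which is what makes the real-time comparison between simulation and intraoperative data feasible.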

2. Planning. Thanks to the interaction between perception and planning, the plan variables (fluents) are instantiated at run time using sensor feedback; when the nominal execution encounters an unexpected event, the fluents' values are updated, new logic rules are triggered, and the plan sequence is modified to achieve task completion. This research led to the successful conclusion of Daniele Meli's doctoral work.
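
The re-planning loop can be illustrated with a minimal sketch. The toy peg-transfer domain below, and the use of the clingo Python API to solve it, are assumptions chosen for this example and are far simpler than the ASP task models developed in the project; the point is only to show how a fluent reported by perception changes the action sequence returned by the solver.

```python
# Minimal, illustrative sketch of sensing-driven re-planning with Answer Set
# Programming. The toy peg-transfer domain and the use of the clingo Python API
# are assumptions for this example, not the project's actual task encoding.
import clingo

DOMAIN = r"""
#const horizon = 3.
step(0..horizon-1).
action(grasp; move_to_peg; release).

% exactly one action per time step
1 { occurs(A, S) : action(A) } 1 :- step(S).

% action effects on fluents
holds(holding, S+1) :- occurs(grasp, S), step(S).
holds(at_peg, S+1)  :- occurs(move_to_peg, S), holds(holding, S), step(S).
holds(placed, S+1)  :- occurs(release, S), holds(at_peg, S), step(S).

% simplified frame axiom: achieved fluents persist
holds(F, S+1) :- holds(F, S), step(S).

% goal: the ring is placed by the end of the horizon
:- not holds(placed, horizon).
#show occurs/2.
"""


def plan(observed_fluents):
    """Ground and solve the domain with the initial fluents reported by perception."""
    ctl = clingo.Control()
    facts = "".join(f"holds({f}, 0)." for f in observed_fluents)
    ctl.add("base", [], DOMAIN + facts)
    ctl.ground([("base", [])])

    atoms = []

    def on_model(model):
        atoms.extend(model.symbols(shown=True))
        return False                      # stop at the first answer set

    ctl.solve(on_model=on_model)
    atoms.sort(key=lambda a: a.arguments[1].number)  # order actions by time step
    return [str(a) for a in atoms]


# Nominal execution: perception reports that the ring is already grasped.
print(plan(["holding"]))
# Unexpected event: the ring was dropped, the fluent is no longer observed,
# and the solver now has to insert a grasp action before moving and releasing.
print(plan([]))
```

In the first call the ring is already held and the returned plan moves and releases it; in the second call the missing fluent forces the solver to plan a grasp before the move and release, which is the essence of the re-planning behavior described above.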

3. Execution. Demonstrations and experiments are carried out with the da Vinci Research Kit (dVRK) of the robotics laboratory, although the system has started showing operational problems and random failures that affect its availability. The dVRK is based on the Standard (Classic) model of the da Vinci Surgical System and has been operational for about 20 years.

4. Monitoring. Perception in robotic surgery is key to successfully completing a task. We have expanded the sensor suite developed in the 2nd Reporting Period with new sensors and algorithms to extract geometrical and physical properties of the anatomical environment. The sensor that measures the electrical impedance of the tissues has been integrated with an exploration algorithm that uses two instruments instead of the original one. The measurements are calibrated with the approach developed in the previous Reporting Period. However, in a surgical environment the anatomy is not steady, and we are therefore developing the so-called "anatomical SLAM", which includes a semantic map to distinguish between steady and moving organs. A new effort has started towards the development of force sensors for surgical instruments and towards the identification of the forces involved in a surgical procedure. To enhance perception, echography images are processed with a new method based on a Deep Neural Network, called PROST-Net.
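
As a toy illustration of the semantic-map idea, the sketch below labels tracked anatomical landmarks as steady or moving by thresholding their displacement between two registered frames; the landmark names, the threshold value, and the data layout are hypothetical and do not reflect the actual anatomical-SLAM pipeline.

```python
# Illustrative sketch of the semantic-map idea behind "anatomical SLAM":
# tracked landmarks are labelled steady or moving by comparing their registered
# positions across frames. Names, threshold, and layout are assumptions.
import numpy as np

MOTION_THRESHOLD_MM = 2.0   # assumed displacement above which a structure counts as moving


def update_semantic_map(previous, current, threshold=MOTION_THRESHOLD_MM):
    """Label each landmark given its position (in mm) in two registered frames."""
    labels = {}
    for name, prev_pos in previous.items():
        displacement = np.linalg.norm(np.asarray(current[name]) - np.asarray(prev_pos))
        labels[name] = "moving" if displacement > threshold else "steady"
    return labels


# Hypothetical landmarks on a kidney phantom.
frame_t0 = {"kidney_lower_pole": [10.0, 42.0, 7.0], "renal_artery": [18.0, 40.0, 9.0]}
frame_t1 = {"kidney_lower_pole": [10.3, 42.1, 7.1], "renal_artery": [21.5, 44.0, 9.8]}

print(update_semantic_map(frame_t0, frame_t1))
# {'kidney_lower_pole': 'steady', 'renal_artery': 'moving'}
```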

5. Verification. After the successful completion of the Peg and Ring and of the Tissue Lifting tasks, we are now focusing on improving these technologies to make them suitable for the simulation of a kidney intervention.
We have made significant progress beyond the state of the art in all the research lines addressed by the project. The focus of this Reporting Period has been to integrate the technologies developed in the previous Reporting Periods and, in fact, all the publications involve multiple research topics.

See the publication list for the full description of the publications.

In the last Reporting Period of the project we are focusing on clinically relevant experiments, leading to the execution of a simulated partial nephrectomy. We have started a detailed analysis of the task, identifying many individual subtasks that are not explicitly described in surgical textbooks, and implementing them with a combination of the ARS technologies.
Simulation and execution of autonomous tissue lifting
Experimental set-up