CORDIS - EU research results
Content archived on 2024-06-18

HUMAN-IN-THE-LOOP TELEPRESENCE CONTROL FOR ROBOT-ASSISTED SURGERY

Final Report Summary - TELEPRESENCE SURGERY (HUMAN-IN-THE-LOOP TELEPRESENCE CONTROL FOR ROBOT-ASSISTED SURGERY)

Objectives: In this project we aim to develop new bilateral teleoperation control methods for surgery that consider biomechanical and neurological models of the operator. In particular, we aim to push the boundary of the stability-transparency trade-off and maintain stable system performance while providing useful force feedback to the operator. We ground the development of these methods in recent results in human sensorimotor control and learning, and apply them to robot-assisted surgery research platforms (da Vinci Research Kit and Raven II) in surgically relevant tasks. We collaborate with surgeons to identify the key aspects of perception and action that are relevant for successful surgery, and extend the perceptuomotor transparency approach into the realm of robot-assisted surgery by defining specific performance and perception measures for surgery.
Specifically, we wish to address the following research aims:
Aim 1. Theoretical analysis of stability in human-in-the-loop teleoperation.
1.1. Build a simple model that captures the salient characteristics of human motor control in surgical context.
1.2. Build a simple model that captures the salient characteristics of a living tissue.
1.3. Analyze the stability of the overall system: human operator, teleoperation control channel, and tissue.
Aim 2. Define transparency in robot assisted surgery (RAS).
2.1. Derive conditions for motor transparency in surgery.
2.2. Derive conditions for perceptual transparency in surgery.
Aim 3. Design, implementation, and validation of a human-in-the-loop controller for RAS.
3.1. Design and implementation of the controller on a clinical RAS system.
3.2. Comparison of surgical performance of our system with standard RAS, standard MIS, and direct operation in an artificial surgical environment.
Work description:
To model the movement of surgeons and novice users, we used a clinical version of the da Vinci Si system, equipped with pose trackers, a force sensor, and a monitor placed on the surgical table, which allowed us to study the teleoperated and freehand movements of surgeons with very similar setups (Fig. 1A). We demonstrated that manipulator dynamics affect movement trajectories, and that these effects depend on expertise and on the direction of movement, suggesting that they can be modeled as the result of an interplay between the dynamics of the master manipulator, the arm of the user, and neural control strategies. We also found that experienced surgeons coordinate the variability of their joint angles to stabilize hand movements more than novices do, and that the effect of teleoperation on this coordination depends on experience: experts increase teleoperated stabilization relative to freehand movement, whereas novices decrease it.
We used the da Vinci Research Kit (Fig. 1B), a custom research version of the da Vinci Surgical System, to compare the teleoperated and open needle-driving movements of experienced da Vinci surgeons and novices. The experimental protocol consisted of structured but unconstrained needle-driving trials repeated 80 times to allow for computational modeling of movement coordination and learning. Kinematic analysis showed that teleoperation increases trial time but reduces path length, and that the trial times and path lengths of experienced surgeons are smaller than those of novices. In addition, there are significant differences in learning between experienced surgeons and novice users, and the differences between the two groups vanished at the end of the learning curve. We also developed a new metric that quantifies the integral of the change in instrument orientation normalized by the path length traveled by the instrument tip. Despite statistically significant learning, the novices did not reach expert performance according to this metric. Such novel metrics may improve the quantification of surgical skill.
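The orientation metric described above can be sketched as follows. This is a minimal illustration, assuming the instrument pose is logged as tip positions and unit quaternions; the function and variable names are ours, not from the project's code:

```python
import numpy as np

def orientation_per_path_length(positions, quaternions):
    """Integral of instrument-orientation change, normalized by tip path length.

    positions   : (N, 3) array of instrument-tip positions
    quaternions : (N, 4) array of unit quaternions (w, x, y, z)
    """
    # Path length: sum of Euclidean distances between consecutive tip positions.
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))

    # Orientation change: rotation angle between consecutive quaternions,
    # 2 * arccos(|<q_i, q_{i+1}>|), accumulated along the trajectory.
    dots = np.abs(np.sum(quaternions[:-1] * quaternions[1:], axis=1))
    angles = 2.0 * np.arccos(np.clip(dots, -1.0, 1.0))

    return np.sum(angles) / path_length
```

A trajectory with much instrument re-orientation per unit of tip travel scores high; skilled, economical needle driving would be expected to score lower.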
We extended the analysis of the perceptual and motor transparency framework previously formulated by the researcher. We proved analytically that, for a teleoperation channel with position and force scaling and a constant transmission delay, in a palpation and stiffness-perception task, it is possible to find gains that ensure perfect perceptual and remote motor transparency while maintaining stability, and we showed that stability depends on the operator maintaining sufficient arm impedance relative to the environment impedance and the delay. In the current project, we did not intend to explore delayed teleoperation. However, delay causes an interesting gap between perception and action, which may be relevant for robot-assisted surgery because of the nonlinear properties of live tissue.
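The role of the scaling gains can be illustrated with a minimal static sketch (our own notation, ignoring the delay and the dynamics that the full analysis handles):

```latex
% Position scaling \alpha maps master to slave motion, x_s = \alpha x_m;
% force scaling \beta maps slave to master force, f_m = \beta f_s;
% a linear environment gives f_s = k_e x_s. The stiffness felt by the
% operator is then
\hat{k} \;=\; \frac{f_m}{x_m}
        \;=\; \frac{\beta f_s}{x_m}
        \;=\; \frac{\beta\, k_e\, \alpha\, x_m}{x_m}
        \;=\; \alpha \beta\, k_e ,
% so the perceived stiffness equals the true k_e whenever \alpha\beta = 1.
```

Under delay and tissue nonlinearity the transmitted impedance is no longer this simple product, which is where the trade-off between transparency and stability arises.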
During daily tool-mediated interaction with objects, we use visual and force information to form an internal representation of their size and mechanical properties that serves us for perception and for guiding actions, such as in precision grip, where grip force is modulated with the predicted load forces. This is particularly important for surgery, because many maneuvers include grasping and manipulating tissue with various forms of grippers, and teleoperation distorts the relation between the fingers' grip and the instrument's grip. At the newly established Biomedical Robotics Lab at the Department of Biomedical Engineering, Ben-Gurion University of the Negev, we built a robot-assisted surgery research platform composed of a Raven II surgical robot teleoperated with a pair of Sigma 7 haptic devices and instrumented with a 3D visualization system (Fig. 1C). We studied the effect of gripper aperture scaling on telegrasping transparency. We asked participants to perform actual or pantomimed telegrasping: either actually grasp very small objects or show the intent of how they would grasp them. Our preliminary results suggest that grip aperture and movement trajectories in telegrasping are similar to those in natural grasping, and that scaling up the mapping between finger and instrument gripper apertures improves transparency.
In a virtual reality study, we explored the relationship between grip force adjustment and perception of stiffness during interaction with an elastic force field. We found that although participants underestimated the stiffness of the delayed field compared with the non-delayed one, grip force characteristics were not affected by the delay. Both the amplitude of the grip force modulation and the temporal lag between it and the load force, in the last probe of the elastic force fields before participants gave their answer, were similar between delayed and non-delayed force fields. These results suggest that an accurate internal representation of both environment stiffness and time delay was used to adjust the grip force. However, the representation that contributed to the generation of a correct grip force did not help eliminate the bias in stiffness perception. Together, these findings suggest that during a perceptual task based on proprioceptive feedback, separate neural mechanisms are responsible for perception-related and action-related computations in the brain.
One of the major problems with using force feedback in RAS relates to the safety of high-gain force feedback, which may move the surgeon's hands from their intended path. Therefore, we explored a novel haptic feedback mechanism that stretches the skin of the user's finger pad to convey force feedback information without applying a gross movement to the hand. We showed that this device is capable of augmenting and substituting force feedback information in stiffness-perception tasks. In both studies, users received skin stretch feedback with magnitude proportional to their penetration depth into a virtual wall; in the augmentation study, this was in addition to kinesthetic force feedback. We developed a computational model that explains the sensory augmentation of stiffness perception by skin stretch cues. In addition, the device was used in a teleoperated palpation task in which users' ability to identify a rigid structure inside softer tissue was quantified with force feedback and various forms of sensory substitution, showing that skin stretch provides the best substitution for force feedback when compared with vibration feedback and visual substitution.
Using a custom-designed 3-DOF device for rendering skin deformation information, which can be attached to the master manipulator of the da Vinci surgical system, we compared the performance of users in a task of identifying the location of a contoured hole using force and skin deformation information provided in one or three degrees of freedom. This task represents finding a suitable location for inserting a trocar through an intercostal space, or other cases in which anatomy needs to be identified by its shape in the absence of visual information. We showed that users identified the location of the hole faster and more accurately when three-degrees-of-freedom information was available, indicating that, similarly to kinesthetic force feedback, skin stretch/deformation information is interpreted intuitively, and that the availability of rich information in many degrees of freedom improves task performance. In the augmentation experiment, participants performed a 3D path guidance task. This may be relevant for conveying preoperative planning about a desired instrument path through tissue, or for providing guidance from a senior surgeon during training. Augmenting force feedback with skin deformation feedback reduced path-following error. Therefore, skin stretch feedback is a promising method for conveying force information in RAS, and may be used in the attempt to design a stable and transparent teleoperation controller.
We studied how users control the movement of a virtual cursor using an isometric force input device. State-of-the-art RAS systems use the movement of the surgeon as the sole input; however, there might be advantages in combining force and movement inputs in an attempt to mimic natural interaction with soft tissue, which may involve the control of movements as well as interaction forces. We studied how users adapt to a visuo-motor rotation of the movement of a cursor when their force input controls the position or velocity of the cursor. Such rotation is relevant for RAS when the endoscopic camera is rotated. We found that users are able to adapt in both position- and velocity-based cursor control, and that the time course of adaptation resembles that of movement adaptation. Interestingly, the generalization of adaptation differs between movement and force inputs, indicating that the rotation is represented in different reference frames: in the case of movement, it is represented in visual/hand space, whereas in the case of isometric force input, it is represented in joint space.
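The two cursor-control mappings above can be sketched as follows (a minimal illustration with hypothetical names; the actual experimental software and parameters are not specified here):

```python
import numpy as np

def rotate(v, theta):
    """Apply a visuo-motor rotation of theta radians to a 2-D vector."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ v

def cursor_trajectory(forces, theta, mode="position", gain=1.0, dt=0.01):
    """Cursor positions for a sequence of 2-D isometric force samples.

    mode="position": the rotated, scaled force sets the cursor position.
    mode="velocity": the rotated, scaled force sets the cursor velocity,
                     which is integrated over time.
    """
    traj = []
    pos = np.zeros(2)
    for f in forces:
        if mode == "position":
            pos = gain * rotate(f, theta)
        else:
            pos = pos + dt * gain * rotate(f, theta)
        traj.append(pos.copy())
    return np.array(traj)
```

With theta set to the camera rotation, pressing "forward" initially drives the cursor off-target in both modes; adaptation consists of learning to rotate the force input to compensate.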
In many human-in-the-loop robotic applications, such as robot-assisted surgery and remote teleoperation, predicting the intended motion of the human operator may be useful for the successful implementation of shared control, guidance virtual fixtures, and predictive control. We developed a stochastic optimal control framework for modeling human reaching movements in the presence of obstacles. It consists of probabilistic collision avoidance constraints in addition to a cost function that trades off effort against end-state variance in the presence of the signal-dependent noise associated with human motion. Preliminary experiments were performed to validate our computational framework.
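The structure of such a framework can be sketched as follows (illustrative notation and form, not the exact formulation used in the project):

```latex
% u is the control, \Sigma(T) the end-state covariance, \mathcal{O} the
% obstacle region, and \lambda trades effort against terminal variance:
\min_{u}\;
  \mathbb{E}\!\left[\int_{0}^{T} \lVert u(t)\rVert^{2}\,\mathrm{d}t\right]
  \;+\; \lambda\,\operatorname{tr}\Sigma(T)
\quad \text{subject to} \quad
\Pr\!\left(x(t)\in\mathcal{O}\right) \le \delta \quad \forall t \in [0,T],
% with signal-dependent noise entering through the control channel, e.g.
\mathrm{d}x = (Ax + Bu)\,\mathrm{d}t + c\,\lVert u \rVert\,\mathrm{d}W .
```

The chance constraint keeps the probability of hitting an obstacle below a tolerance delta, while the signal-dependent noise makes large control signals inherently more variable, as observed in human motion.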
In conclusion, we have made significant progress towards all the planned aims of the study. We opened a new avenue for the incorporation of human sensorimotor control in robot-assisted surgery system design and in the training of new surgeons. The researcher gained extensive training in the field of surgical robotics during her outgoing phase at Stanford University, and established a new lab at the return host institution, Ben-Gurion University of the Negev. In the new Biomedical Robotics Lab we apply neuroscience theories about human sensorimotor control, perception, adaptation, learning, and skill acquisition to the development of human-operated medical and surgical robotic systems. We also use robots, haptic devices, and other mechatronic devices as platforms to understand the human sensorimotor system in real-life tasks such as surgery, and in virtual tasks such as virtual reality games or surgical simulation. We hope that this research will improve the quality of treatment for patients, facilitate better training of surgeons, advance the technology of teleoperation and haptics, and advance our understanding of the brain. The new lab already employs a postdoctoral fellow, two PhD students, two MSc students, and six undergraduate researchers, and has won its first research grant, from the Israel Science Foundation, to continue developing a new generation of human-centered force-reflecting controllers for robot-assisted surgery.
Major findings:
1. Characterizing the effect of the da Vinci Si master manipulator on the kinematics of hand movements, and on the stabilization of hand movements through arm-joint coordination, in experienced surgeons and novice users.
2. Implementing a simple teleoperation control architecture on a da Vinci Research Kit @Stanford, developing a novel needle-driving task, using both to characterize teleoperated and open needle driving by experienced surgeons and novice users, and developing novel surgical skill evaluation metrics that quantify changes in the orientation of the surgical instrument.
3. Implementing a simple teleoperation control architecture on a robot-assisted surgery research platform composed of a Raven II surgical robot teleoperated with a pair of Sigma 7 haptic devices and instrumented with a 3D visualization system @BGU, and developing a new protocol for studying the effect of gripper aperture scaling on telegrasping transparency. Preliminary results suggest that scaling up the mapping between finger and instrument gripper apertures improves transparency.
4. Deriving analytical conditions for perceptual and motor transparency and stability of a simple teleoperation channel in a palpation and stiffness-perception task under constant delay, assuming a linear approximation of the user, a linear environment, and an ideal teleoperation channel that includes force and position scaling.
5. Modeling the effect of a novel skin stretch feedback device in augmentation and substitution of force feedback in perception of stiffness task.
6. Testing the ability of users to intuitively interpret information conveyed by a novel tactile skin stretch/deformation device in a contoured hole localization and path following task.
7. Proving that skin stretch substitution is useful in a teleoperated palpation task, and that participants perform better with this new sensory substitution than with other forms such as vision and vibration.
8. Finding that during exploratory palpation of elastic force fields with force feedback delay, an accurate internal representation of both environment stiffness and time delay is used for adjusting the grip force, while perception of stiffness, as measured by verbal judgment of the relative stiffness of elastic force fields, is biased.
9. Showing that adaptation to visuo-motor rotation in isometric force control of a virtual cursor is similar in rate to adaptation in movement control, but that, unlike in movement control, the rotation is represented in a joint-based reference frame.
10. Developing a stochastic optimal control framework for modeling human reaching movements in the presence of obstacles.


Figure 1: final1-ramis-systems.png