
Hands for Autonomous aNd Dexterous grasping

Periodic Reporting for period 1 - HAND (Hands for Autonomous aNd Dexterous grasping)

Reporting period: 2022-04-01 to 2024-03-31

The human hand is an incredibly complex system with a huge spectrum of functionality. The loss of a hand is a traumatic experience, usually followed by psychological and rehabilitation challenges. Engineering and science have long joined efforts to restore the functionality of a lost limb. The most advanced prosthetic hands currently available are operated via myoelectric signals. Here, the human-machine interface (HMI) usually relies on electromyography (EMG) sensors placed on the surface of the stump. Signals picked up by these sensors drive the prosthesis in a simplistic one-muscle-one-movement approach (e.g. flex the biceps to close the hand). Despite several major and well-known challenges related to surface EMG acquisition, this simple control approach proved functional with basic prosthetic grippers, but failed to translate properly when modern multi-articulated prosthetic hands reached the market. For these prostheses, the user must learn cumbersome contraction patterns to switch between the different grasps (e.g. flex the wrist three times to enable the pinch grasp), which kills the intuitiveness of control. That is why, over the last decades, researchers have spent considerable effort trying to relieve amputee users of the burden of a complicated HMI, moving the learning to the machine instead via artificial intelligence algorithms. Thanks to these efforts, it is now widely accepted that machine learning algorithms can indeed assist the user in operating prosthetic hands with more than one degree of freedom, as confirmed by the commercial interest behind this solution.
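
For readers unfamiliar with myoelectric control, the following minimal sketch illustrates the one-muscle-one-movement approach described above: the raw EMG is rectified and low-pass filtered into an envelope, and a simple threshold decides whether the hand should close. All parameter values, and the scipy-based implementation itself, are illustrative assumptions, not the project's code.

    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 1000          # sampling rate in Hz (assumed)
    THRESHOLD = 0.15   # activation threshold on the normalized envelope (illustrative)

    def emg_envelope(emg, fs=FS):
        """Rectify the raw EMG and low-pass filter it into a linear envelope."""
        rectified = np.abs(emg - np.mean(emg))       # remove offset, full-wave rectify
        b, a = butter(2, 5 / (fs / 2), btype="low")  # 2nd-order, 5 Hz low-pass
        envelope = filtfilt(b, a, rectified)
        return envelope / (np.max(envelope) + 1e-9)  # normalize to [0, 1]

    def hand_command(emg):
        """One-muscle-one-movement rule: contraction above threshold closes the hand."""
        return "close" if emg_envelope(emg)[-1] > THRESHOLD else "open"

    # Example: a burst of simulated muscle activity in the second half of the window.
    t = np.arange(0, 2, 1 / FS)
    emg = 0.5 * np.random.randn(t.size) * (t > 1.0)
    print(hand_command(emg))  # expected: "close"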

But if the problem is solved and the solution is already available on the market, why do 40% of users still reject these devices? Why are the dexterity and functionality of hand prostheses still far from comparable to those of a biological limb? Certainly, there are remaining challenges related to the acquisition of EMG from the surface of the skin, and to the total lack of sensory feedback; challenges that upcoming implanted solutions are successfully overcoming. However, these solutions are still under clinical investigation and will not reach the mass market for at least a decade; most importantly, they cannot provide a full answer to the complex problem of restoring human hand functionality. Such a multifaceted problem must be addressed in parallel from different directions: more intelligent hardware must be developed for the HMI as much as for the robotic prostheses. Unfortunately, the efforts spent so far on more intelligent and autonomous robotic hardware are far from satisfactory. Today we have the sensor technology, the artificial intelligence algorithms and the portable processing capabilities required to considerably improve the inherent ability of a robotic hand to take independent decisions. Semi-autonomous prosthetic hands can be a game changer, ultimately converting the conventional view of a prosthetic hand from a tool into a more complex device that interacts intelligently with the user and the surrounding environment.

In an effort to contribute in this direction, the project addresses two main scientific and technological challenges:
1) explore the autonomous selection of the hand grasp by processing data from exteroceptive sensors. Modern proximity sensors can tell us a lot about the material and shape of the target object we intend to grasp, and this information can be used to predispose the robotic hand for the upcoming human-object interaction. Moreover, the same information could also be used to improve the safety of robot-human interactions.
2) explore the autonomous execution of the hand grasp by processing inertial data available from the stump. Patterns of hand acceleration and digit closure velocity during the reach-to-grasp phase can be exploited to simply replicate the biology of human-object interactions. Much information about the user's motor volition can be extracted from the reach-to-grasp movement of the stump, greatly reducing the dependence on conventional, noisy EMG sensors (a minimal sketch of this idea follows below).
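
The following sketch illustrates challenge (2): forearm acceleration is integrated into hand-transport progress, and a time-independent closure profile maps that progress to a digit closure command. The reach distance and the sigmoid profile are illustrative placeholders, not values from the project.

    import numpy as np

    REACH_DISTANCE = 0.2   # assumed total reach length in metres (illustrative)

    def digit_closure(progress):
        """Time-independent closure profile: digits stay open early in the reach,
        then close smoothly during the final approach (illustrative sigmoid)."""
        return 1.0 / (1.0 + np.exp(-12.0 * (progress - 0.7)))

    def control_loop(accel_stream, dt=0.01):
        """Integrate forearm acceleration into displacement and map the resulting
        reach progress to a digit closure command at every time step."""
        velocity, displacement, commands = 0.0, 0.0, []
        for a in accel_stream:
            velocity += a * dt
            displacement += velocity * dt
            progress = min(max(displacement / REACH_DISTANCE, 0.0), 1.0)
            commands.append(digit_closure(progress))
        return commands

    # Example: a bell-shaped acceleration profile typical of point-to-point reaching.
    t = np.linspace(0.0, 1.0, 100)
    accel = 1.4 * np.sin(2 * np.pi * t)       # accelerate, then decelerate
    print(round(control_loop(accel)[-1], 2))  # digits nearly closed at reach end
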
The project has achieved 4 out of 5 of its methodology objectives (Fig. 2), namely:
1. design a heavily instrumented glove to allow the acquisition of diverse information such as hand kinematics, proximity to objects, and inertial and tactile events;
2. use this glove to acquire an exhaustive dataset of interactions of able-bodied volunteers with objects;
3. analyse this dataset to develop object recognition and kinematics modelling;
4. port these automatisms into an instrumented hand prosthesis.

(1) The instrumented glove was based on the CyberGlove, further instrumented with a low-power pulse-coherent radar, a depth camera, and an inertial sensor.
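
As a rough illustration of what one synchronized sample from such a glove could look like, the data-structure sketch below combines the four sensing modalities just listed. All field names and array shapes are assumptions for illustration; they do not reflect the actual HANDdata layout.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class GloveSample:
        timestamp: float          # seconds since the start of the recording
        joint_angles: np.ndarray  # CyberGlove hand kinematics, e.g. shape (22,)
        radar_frame: np.ndarray   # pulse-coherent radar sweep, e.g. shape (n_bins,)
        depth_frame: np.ndarray   # depth camera image, e.g. shape (H, W)
        accel: np.ndarray         # inertial acceleration, shape (3,)
        gyro: np.ndarray          # inertial angular rate, shape (3,)

    # Example: an empty sample with plausible (but assumed) array shapes.
    sample = GloveSample(
        timestamp=0.0,
        joint_angles=np.zeros(22),
        radar_frame=np.zeros(128),
        depth_frame=np.zeros((120, 160)),
        accel=np.zeros(3),
        gyro=np.zeros(3),
    )
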
(2) This instrumented glove was used by able-bodied volunteers to collect an exhaustive dataset of human-object interactions, named HANDdata. The interactions were recorded from a first-person perspective and organized in different scenarios with increasing levels of complexity. The HANDdata dataset and methods are publicly available.
(3) The dataset was extensively analysed towards two goals: object recognition via proximity vision and deep learning, and digit closure trajectory estimation via modelling of the forearm inertia during hand transport.
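
To make the first of these goals concrete, the sketch below shows the general shape of a deep-learning pipeline mapping a single depth frame to a grasp-relevant object class. The tiny PyTorch architecture and the class count are illustrative assumptions, not the model developed in the project.

    import torch
    import torch.nn as nn

    N_CLASSES = 5  # number of grasp-relevant object classes (assumed)

    class DepthGraspNet(nn.Module):
        """Tiny CNN mapping a single-channel depth frame to object-class logits."""
        def __init__(self, n_classes=N_CLASSES):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # global average pooling
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, depth):
            return self.classifier(self.features(depth).flatten(1))

    # Example: one fake 96x96 depth frame produces a vector of 5 class logits.
    logits = DepthGraspNet()(torch.rand(1, 1, 96, 96))
    print(logits.shape)  # torch.Size([1, 5])
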
(4) These models were ported onto a prosthetic hand for verification and validation with able-bodied volunteers.

The project is now addressing the fifth and last methodology objective: validating the autonomous grasping strategy with individuals with amputation.
The HAND project can already claim several results intended to promote the development of semi-autonomous prosthetic hands. A time-independent, one-to-one relationship between the kinematics of hand transport towards the target object and digit closure around the target was hypothesized and demonstrated. To this end, an extensive human-object interaction dataset was collected, publicly shared and analysed. For the first time, the results suggest not only that such a relationship exists, but also that it is quite resilient to the different properties of the target objects. This finding opens up a new set of opportunities for autonomous object grasping, simply by complementing the standard-of-care EMG interface with inertial sensors. Moreover, new evidence was reported on the possibility of integrating or delegating the robotic hand grasp selection with/to exteroceptive sensors such as modern radars or simpler depth cameras. These sensors proved reliable and also carry a notably reduced data-privacy risk. Low-power radars allowed, for the first time, differentiation between biological tissue and inanimate objects, inspiring simple solutions to improve robot-human safety.
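
To illustrate what "time-independent" means in practice, the following sketch re-parameterizes two synthetic reach trials of different durations by transport progress instead of time; under the hypothesized one-to-one relationship, their digit closure curves coincide. The data and the closure law are synthetic, for illustration only.

    import numpy as np

    def closure_vs_progress(displacement, closure, n=50):
        """Resample a digit closure trace onto a normalized transport-progress axis."""
        progress = displacement / displacement[-1]
        return np.interp(np.linspace(0.0, 1.0, n), progress, closure)

    # Two synthetic trials: the same reach executed slowly (1.0 s) and quickly (0.4 s).
    for duration in (1.0, 0.4):
        t = np.linspace(0.0, duration, 200)
        displacement = (1 - np.cos(np.pi * t / duration)) / 2  # transport, 0 -> 1
        closure = displacement ** 2                            # synthetic closure law
        print(closure_vs_progress(displacement, closure)[::10].round(2))
    # Identical rows: the closure-progress curve does not depend on movement speed.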

However, given the preliminary and exploratory purpose of the HAND project, it is too early to meaningfully evaluate its socio-economic impact. Nevertheless, the project has provided the momentum for an important research path that will be carried out in the coming years. This path will also carry an important message, namely its underlying call for better and "more intelligent" prosthetic hardware. In the era of artificial intelligence and robotics, it is mandatory to steer some of the gigantic potential ahead of us towards ethical solutions for healthcare and assistive technology.
Figure: HAND project research methodology objectives