CORDIS - EU research results


Clinical validation of artificial intelligence (AI) solutions for treatment and care


Applying trustworthy AI[[High Level Group on Artificial Intelligence, set up by the European Commission, Ethics Guidelines for Trustworthy AI, document made public on 8 April 2019.]] in healthcare contexts generates a multitude of benefits, including more effective disease management through optimised personalised treatments and the assessment of health outcomes.

Based on existing (pre)clinical evidence, proposals should focus on implementing clinical studies to validate AI-based solutions, comparing their benefits with those of standard-of-care treatments in non-communicable diseases. Proposals should pay special attention to the usability, performance and safety of the AI-based solutions developed, and above all to their clinical evaluation and (cost-)effectiveness, in view of their inclusion in current clinical guidelines for personalised treatments in line with the current EU regulatory framework.

Proposals should address all of the following:

  • Supporting the clinical development, testing and validation of AI-assisted treatment and care options, thereby assisting in clinical decision-making;
  • Timely end-user inclusion (e.g. patients, caregivers and healthcare professionals) throughout the clinical development of the AI-based solutions and the clinical validation process, considering the potential of social innovation approaches to support inclusion and dialogue between patients, carers and healthcare professionals;
  • Enhancing accurate prognosis for and response to a specific personalised treatment, thereby providing a solid risk assessment (e.g. potential adverse events, side effects, expected treatment compliance and adherence over time compared to standard care);
  • Inclusion of sex and gender aspects, age, socio-economic, lifestyle and behavioural factors and other social determinants of health, from the earliest possible stages/phases of development;
  • Assessing potential human or automated biases with a view to large-scale uptake;
  • Integration of an extensive information and communication package about AI-assisted treatment options, highlighting their relevance for patients and carers;
  • Measuring the (cost-)effectiveness of the AI-assisted development of therapeutic strategies and of their implementation in clinical practice.

Proposals should describe a pathway for establishing standard operating procedures for the integration of AI in health care (e.g. for supporting clinical decision-making in treatment and care). Proposals are encouraged to consider multidisciplinary approaches and allow for intersectoral representation. Proposals have to ensure that the resulting data comply with the FAIR[[FAIR data are data which meet principles of findability, accessibility, interoperability, and reusability.]] principles and that data generated by the AI-based solutions are in line with established international standards.

Integration of ethics and health humanities perspectives is essential to ensure an ethical approach to the development of robust, fair and trustworthy AI solutions in health care, taking into account underrepresented patient populations. In relation to the use and interpretation of data, special attention should be paid to systematic discrimination or bias (e.g. due to gender or ethnicity) when developing and using AI solutions. Proposals should also focus on the traceability, transparency and auditability of AI algorithms in health. The international perspective should be taken into account, preferably through international collaboration, to ensure the comprehensiveness, interoperability and transferability of the developed solutions.

Where relevant, applicants are highly encouraged to deliver a plan for the regulatory acceptability of their technologies and to interact at an early stage with the relevant regulatory bodies. SME participation is encouraged.