CORDIS - EU research results

Trustworthy and Inclusive MLOps (Machine Learning Operations) made in Europe

Periodic Reporting for period 1 - TIME (Trustworthy and Inclusive MLOps (Machine Learning Operations) made in Europe)

Reporting period: 2022-06-01 to 2022-11-30

Trustworthy and Inclusive MLOps made in Europe (TIME) aims to accelerate the realisation of the strategic ambition of Clearbox AI, a woman-led Italian deeptech startup headed by Dr. Shalini Kurapati. Our current product, the AI Control Room, enables companies to harness the power of AI efficiently and robustly, following the Trustworthy AI principles.
The AI Control Room is a software platform, available both cloud-based and on-premises, built on a patented implementation of generative models derived from ten years of research and development. It implements the principles of trustworthy AI, such as data assessment, technical robustness checks, model validation, monitoring and reproducibility, in a viable and easy-to-use technical environment.
The objective of Clearbox AI is to validate a sustainable business model while creating a positive and lasting impact on society through AI. Through this call, the main activities we plan to implement towards this objective are to:
1. Implement the bias and discrimination assessment and mitigation module within our product offering and release it open source.
2. Accelerate communication and dissemination of the value-based propositions of trust, bias mitigation, and inclusion.
Beyond our commercial success and positive social impact, our ambition is to continue to challenge the technology competitiveness gap in AI in Europe (enterprise/civilian AI adoption lags behind the US, China, and Israel) and the global women leadership gap (only 8% of AI leadership roles are held by women).
The Women TechEU call can help us accelerate our path to creating this impact and support Clearbox AI and its woman leader in competing with larger players on the platform of global EU leadership in Trustworthy AI.
We published our software on detecting bias, improving data quality, and implementing fairness metrics as the following software modules:

Data quality assessment and profiling module
Bias detection, mitigation, and fairness metrics module

You can find the detailed documentation in the README files of each of the two software modules.
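To illustrate the kind of check a bias detection and fairness metrics module performs on structured data, the sketch below computes two widely used group-fairness metrics, demographic parity difference and disparate impact ratio. The function names and toy data are illustrative assumptions and do not reflect the actual API of the released modules; see their README files for the real interfaces.

```python
# Illustrative sketch only: these helpers are NOT the module's actual API.
from collections import defaultdict

def selection_rates(outcomes, groups):
    """Positive-outcome rate per protected group."""
    pos, tot = defaultdict(int), defaultdict(int)
    for y, g in zip(outcomes, groups):
        tot[g] += 1
        pos[g] += y
    return {g: pos[g] / tot[g] for g in tot}

def demographic_parity_difference(outcomes, groups):
    """Largest gap in positive-outcome rates between groups (0 = parity)."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(outcomes, groups):
    """Ratio of the lowest to the highest group rate (1 = parity)."""
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical binary decisions (e.g. loan approvals) for two groups:
# group "a" is approved 3/4 of the time, group "b" only 1/4.
y = [1, 1, 0, 1, 0, 0, 1, 0]
g = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y, g))  # 0.5
print(disparate_impact_ratio(y, g))         # ≈ 0.333
```

A disparate impact ratio well below 1 (a common rule of thumb flags values under 0.8) would signal that the dataset or model warrants a mitigation step.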

We have also documented the practical use of the two modules with a step-by-step guide and report that is publicly available in the Zenodo public repository, citing the EU funding and grant number: https://zenodo.org/record/7266913
In this project we made the first steps towards bias detection and mitigation with structured data. Bias mitigation is a complex and multi-disciplinary endeavour, since it involves a mix of social, human and computational bias. There are no universally accepted metrics for fairness, since fairness is context- and culture-dependent. We would like to continue our work on bias mitigation and improving fairness starting from the data side, and to extend our capabilities to other types of datasets, such as time series and images.
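One common way to mitigate bias "from the data side", as described above, is reweighing (Kamiran & Calders, 2012): each record gets a weight that makes the protected group and the label statistically independent in the weighted data. The sketch below is a minimal, self-contained illustration of that idea; it is not the implementation used in the released modules.

```python
# Minimal sketch of the reweighing pre-processing technique; the data
# and function name are illustrative, not the released module's API.
from collections import Counter

def reweighing_weights(groups, labels):
    """Weight each record by P(group) * P(label) / P(group, label),
    so that group and label are independent in the weighted data."""
    n = len(labels)
    p_g = Counter(groups)            # marginal counts per group
    p_y = Counter(labels)            # marginal counts per label
    p_gy = Counter(zip(groups, labels))  # joint counts
    return [
        (p_g[g] / n) * (p_y[y] / n) / (p_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy structured data: group "a" has a 3/4 positive rate, group "b" 1/4.
g = ["a", "a", "a", "a", "b", "b", "b", "b"]
y = [1, 1, 0, 1, 0, 0, 1, 0]
w = reweighing_weights(g, y)
# Under-represented (group, label) pairs get weights above 1, so the
# weighted positive rates of the two groups become equal.
```

After reweighing, a downstream model trained with these sample weights sees a dataset in which the protected attribute no longer predicts the label.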