CORDIS - EU research results

Trusted AI for Transparent Public Governance fostering Democratic Values

Periodic Reporting for period 1 - AI4Gov (Trusted AI for Transparent Public Governance fostering Democratic Values)

Reporting period: 2023-01-01 to 2023-12-31

AI4Gov is a joint effort of policymakers, public institutions and organisations, legal experts, Social Sciences and Humanities researchers, and Big Data/AI experts to unveil the potential of AI and Big Data technologies for developing evidence-based innovations, policies, and policy recommendations that harness the public sphere, political power, and economic power for democratic purposes. The project will also uphold the fundamental rights and values of individuals in the use of AI and Big Data technologies. It thus aims to contribute to the promising research landscape addressing ethics, trust, discrimination, and bias by providing an in-depth analysis of, and solutions to, the challenges that stakeholders in modern democracies face when trying to mitigate the negative implications of Big Data and AI. In this direction, the project will introduce solutions and frameworks with a twofold aim: to support policymakers in making automated, informed, and evidence-based decisions, and to increase citizens' trust in democratic processes and institutions. Moreover, the project will leverage state-of-the-art tools for providing unbiased, discrimination-free, fair, and trusted AI. These tools will be validated in terms of their ability to provide technical and/or organisational measures, causal models for bias and discrimination, and standardised methodologies for achieving fairness in AI.
The AI4Gov project has achieved a series of objectives, in line with the foreseen schedule and to the foreseen degree:

1. It designed a reference framework for an ethical and democratic AI, mapped into the 1st version of the Holistic Regulatory Framework (HRF) and the holistic AI governance model for AI Ethics by Design, both of which build on a concrete research methodology for identifying the risks and threats of AI.
2. It designed and developed AI fairness monitoring and bias mitigation tools, released as the 1st versions of the Virtualized Unbiasing Framework (VUF) for AI & Big Data and the Bias Detector toolkit for AI/ML models.
3. It developed trusted AI techniques for explaining the decisions of AI systems to policymakers, citizens, and other stakeholders, released as the 1st versions of the XAI algorithms/models, the SAX models and SAX/XAI library, and the FAIRification of data.
4. It boosted the regulatory compliance (GDPR, AI Act) of AI-based models supporting democratic processes through the 1st version of the Data Management Framework, which assesses the ethical issues of the AI and data collected in the context of the project, and initiated the validation of the HRF and VUF in seven real-life use cases.
5. It designed and implemented a reference AI platform for regulating the use of AI and Big Data, released as the 1st versions of the Blockchain-based Data Access Regulator and the AI algorithms/models/tools for policymaking.
6. It started training stakeholders and educating citizens in the use of the platform and the equally important elements of a democratic AI, preparing training materials and already delivering two training workshops.
7. It progressed with the co-creation, deployment, and validation of the legal and organisational blueprints of the HRF, showcasing AI4Gov innovations, frameworks, and tools through various use cases in different application sectors.
8. It progressed in building a vibrant community of interested and committed stakeholders around the project platform, creating strong liaisons and collaborations with AI initiatives and communities, including clustering with the projects funded under the same call.
AI4Gov takes a step forward towards identifying bias that seriously affects decisions based on AI algorithms. Bias is a cognitive phenomenon that significantly shapes individuals' perceptions, judgments, and decision-making across diverse situations. AI bias emerges when algorithms generate systematically prejudiced outputs due to biased assumptions made during development or present in the training data. For this purpose, AI4Gov has developed a Bias Detector Toolkit, a holistic application focused on explaining AI bias and equipping developers with an easy-to-navigate and visually organised catalogue.
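To make the notion of "systematically prejudiced outputs" concrete, the sketch below computes the demographic-parity gap, one of the standard fairness metrics a bias-detection toolkit of this kind might surface. This is an illustrative example, not code from the AI4Gov Bias Detector; the function name and data are hypothetical.

```python
# Minimal sketch of one fairness metric a bias detector could report:
# the demographic-parity gap (difference in positive-prediction rates
# between demographic groups; 0.0 means perfect parity).
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# A visibly biased classifier: group "a" receives far more positive outcomes.
preds  = [1, 1, 1, 0, 0, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 (group a: 3/4, group b: 0/4)
```

In practice such a gap would be computed per protected attribute (gender, age band, etc.) and flagged against a tolerance threshold chosen by the policy owner.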

In addition, explainability, interpretability, and transparency are essential in the modern policymaking domain for the provision of citizen-centred and democratic decisions. The AI4Gov project seeks to leverage state-of-the-art eXplainable AI (XAI) techniques and approaches, such as LIME and SHAP, which are particularly useful for ranking feature importance in the explanation stage. However, the project also goes beyond these techniques by introducing Situation-Aware eXplainability (SAX) techniques: evolutionary XAI techniques applied to business processes. They aim to tackle the shortcomings of contemporary XAI techniques in this setting, such as their inability to express business process model constraints, to capture the richness of contextual situations that affect process outcomes, and to present explanations in a human-interpretable form that eases understanding. A situation-aware explanation is a causally sound explanation that takes into account the process context in which the explanandum occurred, including relevant background knowledge, constraints, and goals. Within this scope, Large Language Models (LLMs), a subset of foundation models that can perform a variety of natural language processing (NLP) tasks, are also utilised to generate narratives for improved process-outcome explanations. In addition, a SAX4BPM library was developed to provide a set of services supporting the different aspects of SAX explanations, taking into account contextual information classified into three types: completeness, soundness, and synthesis.
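The additive-attribution idea behind SHAP can be shown on a toy model by computing exact Shapley values, which SHAP approximates efficiently for real models. The sketch below is illustrative only (the model and function names are hypothetical, and the exact enumeration is exponential in the number of features, hence practical only for tiny inputs).

```python
# Hedged sketch: exact Shapley values for a toy model, illustrating the
# attribution principle behind SHAP-style explanations. Each feature's
# value is its weighted average marginal contribution over all subsets.
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Attribute model(x) - model(baseline) across features of x."""
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [f for f in features if f != i]
        for k in range(len(others) + 1):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                present = set(subset)
                with_i = [x[f] if f in present or f == i else baseline[f] for f in features]
                without_i = [x[f] if f in present else baseline[f] for f in features]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Toy linear "policy score": attributions recover each term's contribution.
score = lambda v: 2 * v[0] + 3 * v[1] + v[2]
print(shapley_values(score, [1, 1, 1], [0, 0, 0]))  # approx [2.0, 3.0, 1.0]
```

For a linear model the Shapley value of each feature reduces to its coefficient times its deviation from the baseline, which makes the toy output easy to verify by hand.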

Complementary to these novel approaches, a decentralised blockchain infrastructure has been designed and initially implemented within this first reporting period to enhance the transparency and traceability of data and policy storage and of business logic execution. In more detail, off-chain policies are expected to govern core operational characteristics of the network and enforce data policies that are not expected or allowed to change in the future. On-chain policies, on the other hand, involve the whole consortium, both the AI4Gov pilots and future adopters, and allow collaboration on changes to the implemented business logic and/or the data flow scenarios. Furthermore, considerations regarding GDPR's accountability guidelines, the right of individuals to control their data, the right to be forgotten, and its compatibility with the immutable nature of decentralised ledgers were taken into account, contributing to the introduction of a set of novel and validated decentralised business models. These models further enable public and private organisations to monetise their assets and boost the trustworthiness of the AI-based policy development process for citizens, businesses, and public administrations.
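One common way to reconcile the right to be forgotten with an immutable ledger is to keep personal data off-chain and anchor only a salted hash on-chain; erasing the off-chain record (and its salt) leaves the on-chain digest unlinkable to any individual. The sketch below illustrates this general pattern under assumed names; it is not AI4Gov's actual implementation.

```python
# Illustrative pattern: personal data stays off-chain; only an opaque,
# salted SHA-256 digest is anchored on the immutable ledger. GDPR erasure
# deletes the off-chain data and salt, so the digest proves nothing afterwards.
import hashlib
import os

class OffChainStore:
    def __init__(self):
        self._records = {}  # record_id -> (salt, personal_data)

    def add(self, record_id, personal_data):
        salt = os.urandom(16)
        self._records[record_id] = (salt, personal_data)
        # Only this digest would ever be written to the ledger.
        return hashlib.sha256(salt + personal_data.encode()).hexdigest()

    def verify(self, record_id, on_chain_digest):
        if record_id not in self._records:
            return False  # erased: the digest can no longer be matched
        salt, data = self._records[record_id]
        return hashlib.sha256(salt + data.encode()).hexdigest() == on_chain_digest

    def forget(self, record_id):
        # Right to be forgotten: the immutable digest stays on-chain,
        # but without data and salt it is unlinkable to the individual.
        self._records.pop(record_id, None)

store = OffChainStore()
digest = store.add("citizen-42", "name=Jane Doe")
print(store.verify("citizen-42", digest))  # True
store.forget("citizen-42")
print(store.verify("citizen-42", digest))  # False
```

The random per-record salt matters: without it, a third party who guesses the plaintext could re-hash it and re-link the ledger entry to the person.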