CORDIS - Results of EU-supported research

Value-Aware Artificial Intelligence

Periodic Reporting for period 1 - VALAWAI (Value-Aware Artificial Intelligence)

Reporting period: 2022-10-01 to 2023-09-30

Current AI systems are not value-aware. Yet users often attribute unintended intentionality and value-awareness to them, which is problematic, confusing, and in some cases dangerous. We argue that AI systems should be morally capable, possessing the self-awareness to reflect on and justify their behavior in moral terms. Simply aligning AI systems with curated data is insufficient. The project addresses this gap by developing computational models of awareness, focusing on three key aspects: developing a generic architecture for awareness, testing it for value awareness, and applying it to a variety of application domains.

The three application domains targeted by VALAWAI—social media, social robots, and medical decision-making—have high innovation potential. The project aims to provide support in these domains by implementing guardrails in social media, constraining robot behavior within ethical norms, and aiding medical decision-making through the support of value-aware AI. These practical applications demonstrate the project's commitment to providing innovative tools for enhancing value awareness in a variety of AI applications.

To enhance value awareness in AI applications, the VALAWAI consortium outlines five objectives: constructing and implementing a computational model for value-awareness that we refer to as the Reflective Global Neuronal Workspace model (RGNW), developing a framework for value-aware situation analysis and decision-making, demonstrating the functional adequacy of RGNW in three different domains, showcasing how value-aware AI can mitigate negative side effects, and prototyping a toolbox for value-aware AI.
1. Construct and Implement the RGNW Model:
• Significant strides have been made in defining the Reflective Global Neuronal Workspace (RGNW) model, which is presented in Deliverable 1.5.
• Initial tools contributing to the RGNW toolbox have been presented in Deliverable 1.2. The model lays the foundation for computational formalization and operationalization through the ongoing RGNW toolbox development and prototyping.

2. Develop a Framework for Value-Aware Situation Analysis and Decision-Making:
• A framework has been crafted with a specific focus on Quantitative Measures for Awareness within the RGNW architecture.
• The measures, with potential applicability to general AI systems, primarily focus on evaluating the performance of the VALAWAI components with respect to enabling value awareness.
• The defined set of measures includes Component Integration using Conditional Mutual Information (CMI), Component Contribution to Awareness through CMI and functional connectivity analysis, Value Alignment of Normative Systems to assess moral alignment in multi-agent systems, Large Language Models for Assessing Values in social robots, and Users' Perception of Value Awareness in AI systems through user experiments.
• We have provided a common terminology that allows us and other projects to study and share information about the value-awareness of AI systems.
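The Component Integration measure above is based on conditional mutual information. As a rough, hypothetical illustration of the underlying quantity (not the project's actual implementation), I(X; Y | Z) can be estimated for discrete variables directly from sample counts:

```python
# Hypothetical sketch: estimating conditional mutual information I(X; Y | Z)
# for discrete variables from sample counts. Function name and data are
# illustrative assumptions; VALAWAI's component-integration measure may differ.
from collections import Counter
from math import log2

def conditional_mutual_information(samples):
    """samples: list of (x, y, z) tuples. Returns I(X; Y | Z) in bits."""
    n = len(samples)
    xyz = Counter(samples)                              # joint counts p(x,y,z)
    xz = Counter((x, z) for x, _, z in samples)         # marginal p(x,z)
    yz = Counter((y, z) for _, y, z in samples)         # marginal p(y,z)
    zc = Counter(z for _, _, z in samples)              # marginal p(z)
    cmi = 0.0
    for (x, y, z), c in xyz.items():
        p_xyz = c / n
        # I(X;Y|Z) = sum p(x,y,z) * log2( p(x,y,z) p(z) / (p(x,z) p(y,z)) )
        cmi += p_xyz * log2((p_xyz * (zc[z] / n)) /
                            ((xz[(x, z)] / n) * (yz[(y, z)] / n)))
    return cmi

# Toy data: Y is a noisy copy of X, Z is independent, so I(X;Y|Z) > 0.
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1),
        (0, 1, 0), (1, 0, 1), (0, 0, 0), (1, 1, 1)]
print(round(conditional_mutual_information(data), 3))  # → 0.311
```

In this framing, a higher I(X; Y | Z) between two components' signals, conditioned on the rest of the system, would indicate stronger integration between them.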

3. Demonstrate Functional Adequacy of RGNW in Three Application Domains:
• The project has progressed in setting up three use cases: social media observatories, domestic social robots, and social decision-making in medical environments.
• Initial data collection, stakeholder engagement, and focus groups have commenced for the three use cases, contributing to the ongoing development of the RGNW architecture.

4. Tackle Adverse Effects of AI Technologies:
• Ethical considerations have been meticulously addressed, and approval has been received from the Ethics Committee.
• We are setting up the experiments planned to assess the negative impacts of existing AI technologies in the three use cases.
• We have worked closely with our Ethical Advisor to check that the VALAWAI approach does not present ethical risks (Deliverables 7.1, 7.2, and 7.3).

5. Prototype, Engineer, and Release a Toolbox for Value-Aware AI:
• Progress has been made in defining the toolbox's design and software architecture, with an initial software architecture proposed in Deliverable 1.2.

At the moment we foresee six main results:

1. Quantitative Measures for Awareness:
• Assess component integration using Conditional Mutual Information (CMI).
• Evaluate the moral alignment of norms in multi-agent systems.
• Measure users' perceptions of AI systems' value-awareness.

2. Contract-Based Model of Moral Cognition:
• Focuses on the justifiability of actions to others.
• Enhances future AI decision-making—notable applications in human-robot interactions.
• Aims to foster harmonious coexistence between humans and AI in the moral domain.

3. Versatile VALAWAI Architecture:
• The VALAWAI architecture could serve as a versatile platform with components that can be utilized by third parties to build applications leveraging moral cognition tools.
• The Toolbox is driven by the demands of the three use cases covering various domains.
• This architecture is expected to impact emerging research on values in AI and the development of value-aware AI.

4. Advancements in Social Media:
• The social media use case tackles information dynamics issues in social media, such as polarisation, echo chambers, and toxic interactions, advancing statistical analysis tools and network science by incorporating moral values information.
• We showed that expressing moral values provides a more refined separation of public discourse viewpoints on social media for a divisive topic, such as immigration.
• This result will enable the recommendation of content from outgroup viewpoints based on the proximity to the user's moral profile.
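As a hedged sketch of how such a recommendation could work (the function names, moral-foundation dimensions, and data here are illustrative assumptions, not VALAWAI's implementation), content from outgroup viewpoints might be ranked by the cosine similarity between the user's moral profile and each post's moral profile:

```python
# Hypothetical sketch: recommending outgroup content by moral-profile
# proximity. Profiles are vectors over moral-foundation dimensions; all
# names and data are illustrative assumptions.
import math

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "purity"]

def cosine(a, b):
    """Cosine similarity between two non-zero profile vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend_outgroup(user_profile, user_group, posts, k=2):
    """Rank posts from *other* viewpoint groups by moral-profile similarity."""
    candidates = [p for p in posts if p["group"] != user_group]
    return sorted(candidates,
                  key=lambda p: cosine(user_profile, p["profile"]),
                  reverse=True)[:k]

user = [0.9, 0.8, 0.2, 0.1, 0.1]   # leans on care/fairness
posts = [
    {"id": "a", "group": "pro",  "profile": [0.8, 0.7, 0.3, 0.2, 0.1]},
    {"id": "b", "group": "anti", "profile": [0.7, 0.9, 0.2, 0.1, 0.2]},
    {"id": "c", "group": "anti", "profile": [0.1, 0.2, 0.9, 0.8, 0.7]},
]
print([p["id"] for p in recommend_outgroup(user, "pro", posts)])  # → ['b', 'c']
```

The design intuition from the finding above: a post from the opposing viewpoint group whose moral framing is close to the user's own profile (post "b" here) is more likely to be received constructively than one framed in distant moral terms.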

5. Innovations in Social Robots:
• The social robots use case integrates vision capabilities into conversational agents, utilizing LLMs to enhance text-based prompts with real-time visual input.
• The work in the first year provides an initial implementation of a dialogue manager that realizes this integration.
• Our paper reports on six interactions with a Furhat robot powered by this system, illustrating and discussing the results obtained.
• This vision-enabled dialogue system enables more contextually aware interactions, where conversational agents seamlessly blend textual and visual modalities.

6. Novel Approaches in Social Decision-Making:
• The social decision-making use case introduces methodologies, algorithms, and tools that push the state of the art in the medical field by incorporating reasoning about values in medical protocols.
• This is novel for medical professionals, who are usually not used to reasoning about values in their everyday decision-making process.
• The use case is advancing the design of AI tools so that they provide information about the alignment of specific actions with values, and offer feedback on the alignment of entire medical protocols, helping in the design of such protocols.

The consortium will continue engaging with stakeholders, conducting demonstrations, and assessing the proposed solutions.
Towards an AI infused with human values