Periodic Reporting for period 1 - AIITL (AI IN THE LEAD? WHEN, WHY, AND HOW AI LEADERSHIP WILL (NOT) WORK)
Reporting period: 2022-11-15 to 2024-11-14
The project "AI IN THE LEAD? WHEN, WHY, AND HOW AI LEADERSHIP WILL (NOT) WORK" examined whether, when, and how AI systems can take on leadership functions—such as motivating teams, building relationships, and influencing performance—and under which circumstances doing so is helpful or harmful. As AI moves into roles traditionally associated with human judgment and social interaction, understanding its leadership potential is essential for navigating the future of work, not only for researchers and businesses but also for educators, policymakers, and designers of future workplace technologies.
The project focused on three main goals:
1. To study how different AI leadership styles (such as charismatic or relationship-oriented) affect employees’ motivation, satisfaction, and performance.
2. To examine how these effects change based on the type of task and how long people interact with the AI.
3. To identify individual and situational differences—such as trust in AI, worker age, or experience—that shape how people respond to AI-led leadership.
Rooted in psychology, organizational behavior, management, and computer science, the project connects directly to EU strategic priorities under “A Europe Fit for the Digital Age” and “An Economy that Works for People.” It aims to support the ethical, human-centered use of AI in leadership, ensuring future technologies promote trust, fairness, and well-being at work.
The work focused on two research areas. First, the project studied how people respond to charismatic AI leaders who use inspirational and visionary language. The AI chatbot leader used in these studies was perceived as charismatic, but its effects on motivation and performance were mixed: being seen as a charismatic leader does not always lead to better outcomes when the leader is an AI rather than a human.
Second, the project tested relationship-oriented AI leadership, which focuses on empathy, support, and interpersonal care. This style led to more positive emotional reactions in certain settings, especially during simple, short tasks. However, there was little impact on performance, and some people felt they performed worse under AI guidance.
Leadership styles were tested across different tasks (e.g., repetitive, complex, creative) and timeframes (short vs. extended). Individual factors such as trust in AI and prior experience were explored but did not consistently explain reactions to AI leadership.
A major achievement was the creation of ResearchChatAI (available at researchchatai.com), a free online tool that helps researchers and practitioners design and test AI leaders, followers, teammates, or other types of AI agents. The platform supports scalable, ethics-aware testing of human-AI interactions in leadership and teamwork, broadens access to studying these interactions in organizational settings, and has already attracted interest from academic institutions across the EU as a practical tool for training, experimentation, and future research.
The findings show that people do respond to AI leaders with positive emotions. However, these responses do not always lead to better outcomes. In some cases, AI leadership had no effect, or even negative effects, on how people viewed their own performance. These results challenge the assumption that human leadership behaviors will work the same way when performed by AI, and they highlight the need for caution when introducing AI into sensitive roles such as leading people. This points to a key policy and scientific insight: AI is not just a technical tool but a social actor that must be evaluated in terms of trust, expectations, and workplace culture.
The ResearchChatAI platform also contributes to open science: as a free, open-source tool for creating and testing AI leaders, it helps researchers explore human-AI interaction more easily and supports transparent, replicable research.
To build on these findings, more real-world testing in long-term work settings is needed. Collaborations with businesses, additional funding for demonstration projects, and alignment with ethical and legal AI standards will be key to ensuring that AI leadership is used in responsible and effective ways.