Music and Artificial Intelligence: Building Critical Interdisciplinary Studies

Periodic Reporting for period 2 - MusAI (Music and Artificial Intelligence: Building Critical Interdisciplinary Studies)

Reporting period: 2023-04-01 to 2024-09-30

The MusAI research program is the first large-scale attempt to critically illuminate the impacts that artificial intelligence (AI) is having in the realm of culture. Given the long history of AI's application to music, and music's status as one of the earliest media to be profoundly transformed by digitisation and the internet, MusAI researches this impact through the lens of music. In its first three years (2021-24), the program comprises nine research projects that lay the basis for a field of critical interdisciplinary studies of music and AI.

The program's importance for society lies in addressing AI's escalating impacts on culture and music, and on human experiences of both. Some of these impacts are clearly problematic: from generative AI music production, which threatens to replace human composition of, for example, the music accompanying film, television and advertising, to the way the consumption of music is being cumulatively shaped at population scale by opaque recommendation systems built into global commercial music streaming services.

The overall objectives of MusAI are, in the first phase (years 1-3), to build a body of research that throws critical light on key dimensions of AI's impact on culture via music. In the second phase (2025-26), we will translate that research into a new, critical interdisciplinary AI pedagogy, led by an expert in this field, which goes far beyond existing paradigms intended to mitigate the potential harms of AI: ethics applied to AI, and responsible AI. Instead, we aim to prototype a new interdisciplinary education for AI students in computer science and engineering that will extend their contextual understanding of the conditions within which AI is developing, and provide them with a range of critical thinking tools they can carry forward as they enter the professional and academic AI fields. This is a development that a number of leading universities worldwide have called for, and it responds to the widespread sense that applying ethics to AI is insufficient to transform the culture of AI development. We intend to produce a model of a new critical, radically interdisciplinary AI pedagogy that offers positive models for training AI engineers. Our overarching aim is to assist in transforming the culture of AI development, within industry and academia, and to foster a future generation of critical and reflexive AI engineers and designers.
MusAI’s research achievements are concentrated in four areas:

A) The political economy of AI music: WPs 2a, 2b and 4a:
Three studies examine the structure of the commercial AI music industry, using historical, ethnographic and auto-ethnographic methods. Two analyse this industry, particularly start-up companies, in the global North, comparing the start-ups' product visions, innovation strategies, and concepts of value and risk. The third examines the 'regionalisation' of AI-driven music streaming platforms in the global South (UAE, Saudi Arabia and Egypt), providing important comparative insight.

B) Social, cultural and material analyses of AI music: WPs 1a, 1b and 3b:
Three studies develop critical cultural, social, material and philosophical approaches to the analysis of music AI technologies, paradigms and practices. One probes how musical genres are modelled by data science, critiquing existing models and developing subtler models based on social scientific and humanistic theories of genre. The second study, interdisciplinary across philosophy and anthropology, examines how music recommender systems are influencing the development of aesthetic experience at mass scale. The third probes how human listening is modelled by AI-based ‘machine listening’.

C) Creative practices and artistic critique: WPs 3a and 3c:
Two studies address creative artistic practices using AI, while illuminating the varied forms taken by artistic critiques of AI. The first takes a historical approach to composers Maryanne Amacher and David Tudor, who pioneered critical engagements with AI, contrasting their work with that of present-day online communities using AI. In the second study, composers Artemi Gioti and Aaron Einbond reflect critically on their compositional practice using machine learning.

D) Interdisciplinary AI research between computer science (CS) and social sciences and humanities (SSH): WPs 1b, 4b and 5:
Three projects entail unprecedented interdisciplinary collaborations between AI computer scientists and engineers and team members from SSH. One probes how musical genres are now modelled by AI, and develops subtler models, with profound methodological and epistemological implications. A second critiques music recommender systems (RS) and, on the basis of public interest principles, has designed an alternative RS via an innovative metric called ‘commonality’. The third, described above, will prototype a radically interdisciplinary education for AI students.
The MusAI program has made significant innovations beyond the state of the art, as intended, by initiating a field of critical interdisciplinary studies of AI's impact on music and culture. But it has also gone far beyond expectations. Technologically, WP4b has innovated by developing a new evaluation metric called 'commonality', based on a critique of commercial recommender systems, and by employing public service media principles to design an alternative, public interest paradigm for recommender systems. The project entails sustained, deep interdisciplinary collaboration between AI computer scientists and PI Born (representing SSH). Our approach has affinities with 'ethics and AI' and 'responsible AI', but differs in taking inspiration from public interest approaches in the media industries and adapting them to AI. This is a 'positive', design-oriented approach that contrasts with the 'negative' or constraining approaches of both 'ethics and AI' and 'responsible AI'; it involves changing the very culture of AI research and design.

The 'commonality' metric departs radically from the personalisation paradigm pervasive in commercial AI and specifically in recommender systems. It draws attention to the wider, cumulative cultural and social effects of recommender systems and similar AI applications. Adapted from the public service media principle of universality, 'commonality' measures the extent to which a given intervention in a recommender system – our example is increasing diversity in the recommendation of music, film and books – is experienced in common across a population of users. For public service media organisations this is a valuable measure, and the huge impact of our work is evident in collaborations with two major public media organisations, the BBC and Radio Canada, to trial the 'commonality' metric on their recommender systems – a second stage of research not anticipated at the outset. We foresee also adapting the metric to the recommendation of news, an area of immense concern given the tendency towards filter bubbles.

But the implications of 'commonality' are not confined to public media: our argument is that the wider, cumulative cultural and social effects of AI should be at the top of the list of concerns of governments, regulators, the AI industry, and those training future AI professionals. To this end, we are active in public and AI professional forums to disseminate these messages and encourage changes to the culture of AI research and design.
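To make the idea concrete, the sketch below is a toy illustration of how one might quantify whether an intervention – here, exposure to a curated set of 'diverse' recommended items – is experienced in common across a population of users. It is a deliberately simplified assumption, not the formulation of the 'commonality' metric developed in WP4b; the function name, the pair-counting approach, and the example data are all hypothetical.

```python
# Toy illustration (NOT the published WP4b metric): estimate how widely an
# intervention -- exposure to a curated set of "diverse" items -- is shared
# in common across a user population.
from itertools import combinations

def shared_exposure(recommendations: dict[str, set[str]],
                    curated_items: set[str]) -> float:
    """Fraction of user pairs in which BOTH users receive at least one
    curated item in their recommendation lists. A value of 1.0 means the
    intervention is experienced in common by the whole population."""
    users = list(recommendations)
    if len(users) < 2:
        return 0.0
    exposed = {u for u in users if recommendations[u] & curated_items}
    pairs = list(combinations(users, 2))
    both = sum(1 for a, b in pairs if a in exposed and b in exposed)
    return both / len(pairs)

# Hypothetical example: three listeners, two of whom receive a curated
# "diverse" track in their top-N recommendations.
recs = {
    "user_a": {"track_1", "track_7"},
    "user_b": {"track_2", "track_7"},
    "user_c": {"track_3", "track_4"},
}
curated = {"track_7"}
print(f"shared exposure: {shared_exposure(recs, curated):.2f}")  # 0.33
```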