Community Research and Development Information Service - CORDIS

H2020

MUSICAL-MOODS Report Summary

Project ID: 659434
Funded under: H2020-EU.1.3.2.

Periodic Reporting for period 1 - MUSICAL-MOODS (A mood-indexed database of scores, lyrics, musical excerpts, vector-based 3D animations, and dance video recordings)

Reporting period: 2015-12-01 to 2017-11-30

Summary of the context and overall objectives of the project

The Musical-Moods project aims to develop a mood-indexed, multimodal database for next-generation interactive music systems. These systems have uses in a variety of applications, such as user and database profiling for the creative and media industries, improved access to musical heritage for citizens and researchers, audio-on-demand services, education and training, music therapy, and music making.
The research approach draws on multidisciplinary tools and methods from a broad range of disciplines, including the sciences (cognitive science, human-computer interaction, machine learning, natural language processing and signal processing), the arts (music, dance, motion capture and 3D animation) and the humanities (musicology, history of music, philosophy).

OBJ: Development of an online database of scores, lyrics and musical excerpts, vector-based 3D animations, and dance video recordings, indexed by mood.
- Develop a taxonomy of relations between the musical, linguistic and motion domains for interactive music systems. The taxonomy is implemented as a software configuration of the interactive music system VIVO and applied in the multimedia dance acquisitions.
- Develop a game with a purpose to collect language annotations for multimedia. Game development is complete and deployment is imminent.
- Realize a multimedia corpus. We expect over 300 minutes of multimedia captures of intermedia improvisation between dancers and the interactive music system VIVO. The media include audio (music and speech), video, motion capture data, and language annotations for mood.
- Music mood classification from audio and metadata will be realized through a cross-modal validation approach that draws on data from the language and dance motion domains (a minimal sketch follows this list).
- Internet annotations will be collected through the online game with a purpose already developed in the project.
- An online call for artists to use the database in music making or sound generation will aim to extend the evaluation further.
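
The cross-modal validation approach is not specified in detail in this summary. The following is a minimal sketch only, assuming per-clip audio and motion feature vectors and crowd-sourced mood labels; all names, feature sizes and data below are illustrative, not taken from the project. Comparing a classifier trained on audio alone against one fused with motion features is one simple way to check agreement across modalities.

```python
# Minimal sketch of cross-modal mood classification; the features,
# mood labels and data are synthetic placeholders, not project data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_clips = 200
moods = np.array(["happy", "sad", "tense", "calm"])

# Hypothetical per-clip features: audio descriptors (e.g. MFCC statistics)
# and motion-capture descriptors (e.g. joint velocity statistics).
audio_feats = rng.normal(size=(n_clips, 26))
motion_feats = rng.normal(size=(n_clips, 18))
labels = rng.choice(moods, size=n_clips)   # mood labels from language annotations

# Train a mood classifier on audio alone, then on audio + motion features,
# and compare cross-validated accuracy as a simple cross-modal check.
clf = LogisticRegression(max_iter=1000)
acc_audio = cross_val_score(clf, audio_feats, labels, cv=5).mean()
acc_fused = cross_val_score(clf, np.hstack([audio_feats, motion_feats]),
                            labels, cv=5).mean()
print(f"audio only: {acc_audio:.2f}, audio + motion: {acc_fused:.2f}")
```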

Work performed from the beginning of the project to the end of the period covered by the report and main results achieved so far

The fellow conducted research in natural language processing and games with a purpose, and directed undergraduate and graduate research at the Computation of Language Laboratory, Department of Cognitive Sciences, University of California, Irvine (http://www.socsci.uci.edu/~lpearl/CoLaLab/), in order to accomplish the project goals on language data and to facilitate the transfer of knowledge. Computational methods have been identified and tested on preliminary corpus data.
The fellow conducted research in multimodal classification and directed graduate and postgraduate research at the Department of Electronic Engineering, University of Rome Tor Vergata. Computational methods have been identified and tested on preliminary corpus data.
The fellow engaged in numerous activities, comprising classes, courses, seminars, workshops, concerts, talks, conferences and other dissemination activities carried out as part of the project. These activities were also used as part of the methodology to define, together with domain experts, a multimodal taxonomy of relations between the musical, linguistic and motion domains.
The fellow is currently realizing a multimedia corpus from solo dance sessions in improvisation with the interactive music system VIVO. This corpus will be shaped into the final database, alongside the definition of a computational model for multimodal mood classification.

List of major deliverables delivered
- Multimedia Game With A Purpose (MGWAP). The annotation game has been developed (http://www.musicalmoods2020.org/mgwap) and will be used to generate language annotations for the multimodal database.
- Conference article published (2017, Paolizzo, F. Enabling Embodied Analogies in Interactive Music Systems. In: Proc. of A Body of Knowledge: Embodied Cognition and the Arts. Irvine: University of California. arXiv:1712.00334 [cs.HC]).
- Journal article (2017, Paolizzo, F. and Johnson, C. G. Autonomy in the Interactive Music System VIVO. arXiv:1711.11319 [cs.HC]; submitted to the Journal of New Music Research).

Dissemination activities
- Talk at ICIT within the Colloquium Series. January 12, 2016. (http://music.arts.uci.edu/icit/icit-colloquium-fabio-paolizzo/).
- Intermedia concert at the Theatre of Tor Bella Monaca in Rome and the University of Rome Tor Vergata. September 20, 2016. The event “Sempre Libera / Always Free” was organized in collaboration with UC Irvine.
- Talk at “A Body of Knowledge: Embodied Cognition and the Arts Conference”. December 10, 2016. The talk was part of the panel “Where is the body in Code?” with Dr. Chris Salter.
- JamXchange. April 1, 2017. The fellow performed in the concert with Prof. Sharon Wray and Hip Hop with Cyrian Reed, along with other guest presenters Sakina Ibrahim, Darlisa Wajid-Ali, Erin Landry and Moncell Ill Kozby Durden.
- Music Director for JazzXchange. May 5 and May 6, 2017. The fellow collaborated as Musical Director and performing artist for JazzXchange: The House that America Built – Part II.
- The fellow organized and performed in “Sempre Libera 2, Embodied”. The event was organized in collaboration with the Master in Sonic Arts of the University of Rome Tor Vergata (June 25, 2017).

Progress beyond the state of the art and expected potential impact (including the socio-economic impact and the wider societal implications of the project so far)

Following the definition of a mood-based taxonomy of relations between music, music lyrics, dance and video, a capture system based on this taxonomy has been used to define the mood-oriented structure of the project database. The taxonomy is currently being implemented in the interactive music system VIVO.
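
As an illustration only, a cross-domain taxonomy of this kind could be encoded as a mapping from mood categories to expected attributes in each domain; the mood names and attributes below are hypothetical and do not reproduce the project's actual taxonomy.

```python
# Hypothetical encoding of a mood-indexed taxonomy of relations between
# the musical, linguistic (lyrics) and motion domains; illustrative only.
MOOD_TAXONOMY = {
    "joyful": {
        "music":  {"mode": "major", "tempo": "fast"},
        "lyrics": {"sentiment": "positive"},
        "motion": {"energy": "high", "openness": "expanded"},
    },
    "melancholic": {
        "music":  {"mode": "minor", "tempo": "slow"},
        "lyrics": {"sentiment": "negative"},
        "motion": {"energy": "low", "openness": "contracted"},
    },
}

def moods_matching(domain, attributes):
    """Return moods whose expected attributes in one domain match the input."""
    return [mood for mood, domains in MOOD_TAXONOMY.items()
            if all(domains[domain].get(k) == v for k, v in attributes.items())]

print(moods_matching("motion", {"energy": "high"}))   # ['joyful']
```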

Development of a multimedia game with a purpose (MGWAP), currently at the pre-release stage. The MGWAP will collect mood annotations on music and text, exploiting the wisdom-of-the-crowd phenomenon to provide data for machine learning algorithms operating on music and language.
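
How the MGWAP aggregates player annotations is not described in this summary. The following is a minimal sketch of one common wisdom-of-the-crowd strategy, majority voting with an agreement threshold; the item identifiers, labels and thresholds are assumptions for illustration.

```python
# Minimal sketch of aggregating crowd mood annotations by majority vote;
# the items, votes and thresholds below are made up for illustration.
from collections import Counter

votes = {
    "clip_001": ["happy", "happy", "calm", "happy"],
    "clip_002": ["sad", "tense", "sad"],
    "clip_003": ["calm", "calm"],
}

MIN_VOTES = 3          # assumed minimum number of votes per item
MIN_AGREEMENT = 0.6    # assumed fraction of votes the top label must reach

def aggregate(votes):
    """Return a mood label per item when enough players agree, else None."""
    result = {}
    for item, labels in votes.items():
        top_label, top_count = Counter(labels).most_common(1)[0]
        if len(labels) >= MIN_VOTES and top_count / len(labels) >= MIN_AGREEMENT:
            result[item] = top_label
        else:
            result[item] = None   # needs more annotations
    return result

print(aggregate(votes))
# {'clip_001': 'happy', 'clip_002': 'sad', 'clip_003': None}
```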

Generation of a large database of music, video, motion capture and speech data from dance and interview sessions with professional student dancers and the interactive music system VIVO. Notably, the sessions were captured with a gold-standard motion capture system (VICON) in a green screen studio. Different solutions for the temporal segmentation and multimodal mood classification of the data are currently being tested (a minimal sketch follows).
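
As a rough illustration of the temporal segmentation step only, the sketch below splits a motion-capture stream into overlapping fixed-length windows prior to classification; the frame rate, marker count, window length and hop size are assumptions, not project parameters.

```python
# Minimal sketch of fixed-length temporal segmentation of motion-capture
# data before mood classification; all parameters are assumed values.
import numpy as np

FPS = 120                    # assumed capture rate (frames per second)
WINDOW_S, HOP_S = 4.0, 2.0   # assumed window length and hop, in seconds

def segment(frames, fps=FPS, window_s=WINDOW_S, hop_s=HOP_S):
    """Split a (n_frames, n_markers * 3) array into overlapping windows."""
    win, hop = int(window_s * fps), int(hop_s * fps)
    return [frames[start:start + win]
            for start in range(0, len(frames) - win + 1, hop)]

mocap = np.random.randn(120 * 60, 39 * 3)   # one minute of fake marker data
segments = segment(mocap)
print(len(segments), segments[0].shape)      # 29 (480, 117)
```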
