The project aims to develop an online database of scores, lyrics and musical excerpts, vector-based 3D animations, and dance video recordings, indexed by mood. This taxonomy of relations between the musical, linguistic and motion domains is intended for use in interactive music systems and music making. To build the database, digital scores including lyrics will be gathered from collections of music in the public domain. Music mood classification based on audio and metadata will aim to capture sophisticated features without relying on explicit domain-specific knowledge about mental states. The datasets will be produced through a cross-modal approach. The model will be validated by combining results from an online game-with-a-purpose, open to Internet users, with intermedia case studies involving selected dancers. In further case studies, musical works will be realised, including by invited artists, to evaluate the database in interactive music making; an online call for artists to use the database in music making or sound generation will extend this evaluation further. The final database will be made available online for further exploitation. The research will generate new knowledge for next-generation systems of interactive music and music emotion recognition, and will also contribute to extending investigation in the broader areas of music making, computational creativity and information retrieval.
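To make the mood-indexing idea concrete, the following is a minimal illustrative sketch of audio-based mood classification, not the project's actual method. It assumes two hypothetical mood categories ("calm", "energetic") and hand-picked centroids over two coarse audio descriptors (RMS energy and zero-crossing rate); a real system would learn richer features and categories from data.

```python
import numpy as np

def extract_features(signal):
    """Compute two coarse audio descriptors: RMS energy and zero-crossing rate."""
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2  # fraction of sign flips
    return np.array([rms, zcr])

# Hypothetical mood centroids in (RMS, ZCR) space -- illustrative values only.
MOOD_CENTROIDS = {
    "calm":      np.array([0.05, 0.02]),
    "energetic": np.array([0.40, 0.30]),
}

def classify_mood(signal):
    """Assign the mood whose centroid is nearest to the signal's features."""
    feats = extract_features(signal)
    return min(MOOD_CENTROIDS, key=lambda m: np.linalg.norm(feats - MOOD_CENTROIDS[m]))

# Example signals: a loud, rapidly oscillating square wave vs. a soft low sine.
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
loud = 0.5 * np.sign(np.sin(2 * np.pi * 3000 * t))   # high energy, high ZCR
quiet = 0.05 * np.sin(2 * np.pi * 100 * t)           # low energy, low ZCR
```

A nearest-centroid rule is used here only because it keeps the sketch self-contained; the abstract's stated goal of avoiding explicit domain-specific knowledge suggests the project would instead learn such mappings from audio and metadata.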
Field of science
- /natural sciences/computer and information sciences/artificial intelligence/computational creativity
Funding Scheme: MSCA-IF-GF - Global Fellowships
94607 Oakland, CA