MUSMAP addressed methodological and technological challenges in the development of sound synthesis techniques and of multi-modal data acquisition and analysis methods for the construction of auditory-motor patterning schemes that enable the modelling of instrumental playing technique in music performance. The case study was violin playing: motion and sound analysis techniques were used to identify and describe auditory and motor sequences from a purposely constructed database of classical violin performances.

The research carried out in MUSMAP was driven by three main objectives. First, to utilize multi-modal data acquisition and analysis methods to develop and validate auditory-motor pattern representation models by creating and systematically studying an extensive set of multi-modal recordings of violin performances by trained musicians. Second, to apply auditory-motor pattern representations to design a system for automatic control of violin sound synthesis, presenting an architecture through which a motor sequencing component is updated in response to perceptual features of the synthesized sound. Third, to apply auditory-motor pattern representations to propose novel, high-quality sound processing technologies through which meaningful manipulations of audio perceptual attributes are driven by motor primitives, enabling auditory-motor remapping of recorded or real-time violin performances.
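The second objective describes a closed loop in which a motor parameter is adjusted according to a perceptual feature extracted from the synthesized sound. A minimal sketch of such a loop is shown below, assuming a single motor parameter (bow velocity) and the spectral centroid as a brightness proxy; both choices, and the simple proportional update rule, are illustrative assumptions rather than the project's actual architecture.

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Spectral centroid of an audio frame, a simple brightness proxy."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)

def control_loop(synthesize, target_brightness, sr=44100, steps=50, gain=1e-4):
    """Iteratively adjust a motor parameter so that the synthesized
    sound approaches a target perceptual feature value.

    synthesize: callable mapping a motor parameter to an audio frame.
    """
    bow_velocity = 0.2  # initial motor parameter (arbitrary units)
    for _ in range(steps):
        frame = synthesize(bow_velocity)            # motor command -> audio
        brightness = spectral_centroid(frame, sr)   # audio -> perceptual feature
        # proportional update of the motor command from the perceptual error
        bow_velocity += gain * (target_brightness - brightness)
    return bow_velocity
```

In a full system the scalar parameter would be replaced by a motor sequence and the update rule by a learned model, but the audio-in-the-loop structure is the same.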

The results achieved during MUSMAP include work in the following areas:
1 - Development of a framework for non-intrusive capture of motor control and audio signals relevant for violin performance technique.
2 - Design and implementation of a bowed string physical modeling synthesis system suitable for automatic control from motor control signals.
3 - Study and simulation of the relationship between bowing controls and violin sound attributes by multi-modal analysis of recorded violin performance.
4 - Design and implementation of a sound analysis / processing / transformation system capable of motor-driven manipulation of violin audio perceptual attributes, both off-line and real-time.
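The relationship between bowing controls and sound (area 3) is classically summarized by Schelleng's bow-force limits, which bound the region of bow force and bow-bridge distance where a sustained Helmholtz motion is playable. The sketch below computes such a playability map from the classic limit formulas; the parameter values (string impedance, bridge resistance, friction coefficients) are illustrative defaults, not measurements from the project.

```python
import numpy as np

def schelleng_limits(beta, v_bow, Z=0.55, R=50.0, mu_s=0.8, mu_d=0.3):
    """Classic Schelleng bow-force limits for sustained Helmholtz motion.

    beta : bow-bridge distance as a fraction of string length
    v_bow: bow velocity (m/s)
    Z    : string characteristic impedance; R: equivalent bridge resistance
    mu_s, mu_d: static and dynamic friction coefficients
    """
    f_max = 2.0 * Z * v_bow / (beta * (mu_s - mu_d))
    f_min = Z**2 * v_bow / (2.0 * beta**2 * R * (mu_s - mu_d))
    return f_min, f_max

def playability_map(betas, forces, v_bow=0.1):
    """Boolean grid: True where (beta, force) lies inside the playable region."""
    grid = np.zeros((len(forces), len(betas)), dtype=bool)
    for j, b in enumerate(betas):
        f_min, f_max = schelleng_limits(b, v_bow)
        for i, f in enumerate(forces):
            grid[i, j] = f_min <= f <= f_max
    return grid
```

Maps like this, derived from either analytical limits or simulation, give the synthesis controller a chart of which bowing-parameter combinations produce stable tone.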

These results, which have led to 14 publications in international journals and conferences, with further publications in preparation, represent a valuable contribution to a number of fields related to upcoming applications of virtual and augmented reality technologies in music analysis, learning, and synthesis.

First, advances were made in multi-modal data acquisition and processing for the analysis of musical instrument playing as a form of human expression, an area with great potential to change the way we learn to play a musical instrument (an example is the recently funded TELMI EU project, led by the host institution).

Second, there were significant advances in musical instrument sound synthesis techniques based on efficient physical models capable of running in real-time while producing realistic sound and offering flexible control mechanisms. The developed methods for modal analysis/synthesis of musical instrument resonators have the potential to revitalize digital waveguide physical modeling synthesis technologies and bring them back to the forefront, at a time when full-scale numerical models still require very long simulation times (e.g., the NESS EU project).
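In modal analysis/synthesis, a resonator such as a violin body is represented as a set of damped modes, each with a frequency, decay time, and gain, fitted from measured impulse responses. A minimal additive sketch of the synthesis side is given below; the mode parameters are arbitrary placeholders, and a real-time implementation would typically use a bank of recursive two-pole resonators rather than explicit sinusoids.

```python
import numpy as np

def modal_impulse_response(freqs, decays, gains, sr=44100, dur=1.0):
    """Impulse response of a resonator modeled as parallel damped modes:

        y(t) = sum_k  g_k * exp(-t / tau_k) * sin(2*pi*f_k*t)

    freqs : mode frequencies in Hz
    decays: mode decay time constants tau_k in seconds
    gains : mode amplitudes
    """
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, tau, g in zip(freqs, decays, gains):
        out += g * np.exp(-t / tau) * np.sin(2.0 * np.pi * f * t)
    return out
```

Convolving the output of an efficient string model with such a body response (or filtering it through the equivalent resonator bank) separates the cheap excitation model from the perceptually critical resonator, which is what makes the hybrid waveguide/modal approach attractive for real-time use.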

Third, new techniques were proposed for generating playability maps and for automatic control of physical models via motor primitives (patterns) derived from multi-modal (auditory-motor) data gathered from recordings or synthesized during simulations. These first steps in input-output (motor-audio) processing lay a solid foundation for the use of deep learning techniques to infer models that simulate the sensory-motor synchronization process itself. The idea revolves around devising reinforcement learning schemes, based on a combination of recurrent and convolutional deep neural networks, to learn instrumental playing, and then studying the structure of the resulting networks with the aim of identifying salient mechanisms behind sensory-motor integration, which could be used to build simple simulations of brain function with the potential to help us understand how the brain works.

Finally, regarding research reproducibility, significant progress was made during MUSMAP in the development of Repovizz. The Repovizz system comprises a remote hosting platform and a data archival protocol through which data of different modalities can be stored, visualized, annotated, and selectively retrieved via a web interface and a dedicated web application programming interface. Today, Repovizz is used in a number of large-scale European research projects (including the EU TELMI project). This could position the host institution as a world-class research group in data-driven research, providing future ground for the development of technical means for exchanging, managing, or disseminating heterogeneous data and results through versatile, web-ready platforms that enable collaborative and reproducible research.