The EU-funded MUSMAP project used motion and sound analysis techniques to identify and describe auditory and motor sequences in a database of classical violin performances created for this purpose. Researchers developed techniques for multimodal data acquisition and analysis in order to construct auditory-motor patterning schemes that model instrumental playing technique from a computational perspective. The aim was to develop and test auditory-motor pattern representation models by studying recordings of trained musicians.

Scientists used these models to design a system for the automatic control of violin sound synthesis: a closed-loop, feed-forward architecture in which a motor sequencing component is updated in response to perceptual features of the synthesised sound. This provided a framework for computer-aided music production, while also serving as an ideal test bed for validating the auditory-motor representation models developed. Finally, researchers developed novel, high-quality sound processing technologies enabling the auditory-motor remapping of recorded or real-time violin performances.

MUSMAP contributed to significant advances in musical instrument sound synthesis based on physical models that run in real time and produce realistic sound. In addition, new techniques were proposed for generating playability maps and for the automatic control of physical models from parameters taken from recordings or synthesised during simulations. These initial steps in input-output processing lay a solid foundation for applying deep learning to models that simulate the motor synchronisation process itself. Deep learning networks trained to play an instrument could help uncover the mechanisms behind sensory-motor integration, providing simple simulations of brain function that shed light on how our brains work.
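The closed-loop idea above can be sketched in a few lines: synthesise a sound from a motor parameter, measure a perceptual feature of the output, and feed the error back to update the motor command. The following is a minimal, hedged illustration only, not the project's actual bowed-string physical model or control system: the "physical model" here is a toy harmonic series whose brightness grows with a hypothetical bow-force parameter, and the perceptual feature is the spectral centroid, a standard proxy for brightness.

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def synthesize(bow_force, n=2048, f0=440.0):
    """Toy stand-in for a bowed-string physical model: a harmonic
    series whose upper partials strengthen with bow force, so the
    output grows brighter as force increases."""
    t = np.arange(n) / SR
    sound = np.zeros(n)
    for k in range(1, 9):
        amp = bow_force ** (k - 1) / k  # more force -> stronger upper partials
        sound += amp * np.sin(2 * np.pi * k * f0 * t)
    return sound

def spectral_centroid(x):
    """Perceptual feature of the synthesised sound: the
    magnitude-weighted mean frequency (spectral centroid)."""
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1 / SR)
    return float(np.sum(freqs * mag) / np.sum(mag))

def closed_loop_control(target_centroid, steps=80, gain=2e-4):
    """Closed loop: synthesise, measure the perceptual feature,
    and update the motor parameter (bow force) until the output
    brightness matches the target."""
    force = 0.5  # initial motor command
    for _ in range(steps):
        feature = spectral_centroid(synthesize(force))
        force += gain * (target_centroid - feature)  # feedback update
        force = float(np.clip(force, 0.1, 0.95))
    return force
```

For example, `closed_loop_control(900.0)` iterates until it finds a bow-force value whose synthesised output has a spectral centroid near 900 Hz. The same loop structure applies when the toy synthesiser is replaced by a genuine physical model and the single scalar by a full motor sequence.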
MUSMAP also helped to develop the Repovizz system, a remote hosting platform and a data archival protocol via which data of different modalities can be stored, visualised, annotated and selectively retrieved.
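The core idea behind such a multimodal archive, named, time-stamped streams of different modalities that can be selectively retrieved over a time range, can be sketched as follows. This is a generic illustration under assumed names (`MultimodalRecording`, `add_stream`, `retrieve`), not Repovizz's actual data model or API:

```python
import numpy as np

class MultimodalRecording:
    """Minimal sketch of a multimodal archive: named, time-stamped
    streams (e.g. audio features, motion capture, annotations) that
    support selective retrieval by time range and by modality.
    Illustrative only -- not the actual Repovizz protocol."""

    def __init__(self):
        self.streams = {}  # name -> (times, values, modality)

    def add_stream(self, name, times, values, modality):
        """Store one time-stamped stream under a modality label."""
        self.streams[name] = (np.asarray(times), np.asarray(values), modality)

    def retrieve(self, name, t_start, t_end):
        """Selectively retrieve one stream's samples within [t_start, t_end]."""
        times, values, _ = self.streams[name]
        mask = (times >= t_start) & (times <= t_end)
        return times[mask], values[mask]

    def by_modality(self, modality):
        """List the stream names of a given modality (e.g. 'motion')."""
        return [n for n, (_, _, m) in self.streams.items() if m == modality]
```

A recording session might store a 100 Hz bow-velocity stream alongside audio descriptors, then pull out only the motion data for a passage of interest, which is the kind of selective, modality-aware access the summary describes.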
Keywords: Auditory-motor interactions, violin music, MUSMAP, multimodal data, sound synthesis