
Fluid Corpus Manipulation: Creative Research in Musical Mining of Large Sound/Gesture Datasets through Foundational Access to the Latest Advances of Signal Decomposition.

Periodic Reporting for period 2 - FluCoMa (Fluid Corpus Manipulation: Creative Research in Musical Mining of Large Sound/Gesture Datasets through Foundational Access to the Latest Advances of Signal Decomposition.)

Reporting period: 2019-03-01 to 2020-08-31

Cutting-edge musical composition has always been dependent on, critical of, and subversive of the latest advances in technology. This dependency enables a reciprocal enrichment: creative research makes critical and subversive use of the latest technological advances, which in turn feeds new, divergent ideas back into technological research. Unfortunately, aesthetic research in computer composition faces an inherent contemporary challenge: an ever-expanding gap between advances in digital signal processing (DSP) and their availability to musical investigators.

On the one hand, techno-fluent researchers are now well established in the field, and portable, affordable computing devices are faster than ever. Moreover, never before has the creative community been so thriving: the age of digital sound production and its web-based dissemination has helped many communities grow around creative coding software in a decentralised way. This thriving research community regards computer programming, albeit at a high level, as part of its creative research workflow for exploring novel sonic worlds. On the other hand, the same conditions have helped scientists push the boundaries of what is possible within the realm of DSP, both in breadth and in depth. The world of telecommunications is still at the forefront of this research, but everyday uses of such technology barely scratch the surface of what is creatively possible. We are surrounded by these algorithms, in our telephones, web browsers, and other technology, yet digital artists and researchers struggle to obtain access deep enough to support wider creative research use.

One such advance is signal decomposition: a sound can now be separated into its transient, pitched, and residual constituents. These potent algorithms are partially available in closed software, or in laboratories, but not at a suitable level of modularity within the coding environments used by creative researchers (Max, PureData, and SuperCollider) to allow ground-breaking sonic research into a rich, unexploited area: the manipulation of large sound corpora. Indeed, with access to, genesis of, and storage of large sound banks now commonplace, novel ways of abstracting and manipulating them are needed to mine their inherent potential.
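To make the idea concrete, the sketch below performs such a three-way split in Python, using librosa's harmonic-percussive separation as a stand-in for the decomposition algorithms described above; the input file name is hypothetical, and FluCoMa's own objects in Max, PureData, and SuperCollider expose this class of processing natively rather than through this API:

```python
# Minimal sketch: separating a sound into "pitched" (harmonic),
# "transient" (percussive) and residual layers, using librosa's HPSS
# as a stand-in for the decomposition algorithms described above.
import librosa
import soundfile as sf

y, sr = librosa.load("input.wav", sr=None)  # hypothetical input file
D = librosa.stft(y)

# Margins > 1 make the harmonic/percussive masks stricter, so material
# that fits neither layer cleanly is left over as a residual.
H, P = librosa.decompose.hpss(D, margin=(2.0, 2.0))
R = D - (H + P)

for name, S in [("pitched", H), ("transient", P), ("residual", R)]:
    sf.write(f"{name}.wav", librosa.istft(S, length=len(y)), sr)
```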

The Fluid Corpus Manipulation project proposes to tackle this issue by bridging the gap between DSP and music research, empowering techno-fluent aesthetic investigators with a toolset for signal decomposition within the software environments they have mastered, in order to experiment with the new sound and gesture design untapped in large corpora. The three degrees of manipulation to be explored are (1) expressive browsing and descriptor-based taxonomy; (2) remixing, component replacement, and hybridisation by concatenation; and (3) pattern recognition at component level, with potential for interpolation and variation-making. This research is therefore carried out by an interdisciplinary team of composers and programmers, who seek further ideas from a steering group of scientists and composers, in order to propose new tools and approaches to the exploration of large corpora. By bringing some of the tools of big data to relatively small datasets, the project enables the creative researchers involved, and subsequently a whole community, to develop novel ways to manipulate, abstract, and hybridise their personal datasets of sounds and gestures, and to discover new sounds and gestures through novel approaches and solutions to old problems, whether acoustic limits, browsing concerns, classification, or variation-making.
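As an illustration of the first degree of manipulation, descriptor-based browsing, the following Python sketch summarises each sound in a small corpus as one descriptor vector and queries for the nearest-sounding entries. The file paths, the choice of MFCCs as descriptors, and the parameters are illustrative assumptions, not the project's interface:

```python
# Minimal sketch of descriptor-based corpus browsing: summarise each
# sound as a mean-MFCC vector, then query the corpus for the entries
# nearest to a target sound. File names are hypothetical.
import glob
import numpy as np
import librosa
from sklearn.neighbors import KDTree

def describe(path):
    """One descriptor vector per sound: the mean of its MFCC frames."""
    y, sr = librosa.load(path, sr=None, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

corpus = sorted(glob.glob("corpus/*.wav"))
tree = KDTree(np.vstack([describe(p) for p in corpus]))

# "Give me the three corpus entries closest to this target sound."
_, idx = tree.query(describe("target.wav").reshape(1, -1), k=3)
print([corpus[i] for i in idx[0]])
```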

The timeliness of the project could hardly be better: the convergence of a community of eager techno-fluent composers, the maturity of the DSP algorithms, the proliferation of rich corpora, and the available computing power provide the perfect ground for the project to propose progressive sonic research possibilities. Bridging the gap between the communities of creative and DSP researchers is a major challenge, yet FluCoMa proposes a foundational approach and methodological insight to tackle it: providing tools at the right level to creative researchers, and direct, critical, subversive feedback to the scientists.
The work so far follows the iterative design plan proposed in the original grant proposal, which consists of finding the best interface to empower techno-fluent composers with deeper knowledge of, and tools for, sound separation. We are working in concentric circles: hypotheses of the core research team are first tested locally, then refined with the feedback and research of the commissioned composers and the other plenary participants, before the tools are finally released publicly to the various communities of practice. The first toolset has now reached that last stage, and the second toolset, focused on manipulation and hybridisation, is in its first design iteration.

The project has, so far:
- published the first toolset online as extensions for the leading creative coding environments (Max, PureData, SuperCollider) and as a command-line interface, on the 3 leading operating systems (macOS, Windows 10, Linux), as well as the underlying code architecture that allows such a breadth of interdisciplinary research in algorithm and interface interaction;
- released the prototype of the community-building tools: an online learning resource to empower composers with the knowledge required to master and subvert their new tools (learn.flucoma.org) and an online forum for discussing the various uses of the tools (discourse.flucoma.org);
- published 14 papers, of which 12 are peer-reviewed;
- made public 22 videos drawn from the key elements of the first 3 plenaries;
- produced one concert of 5 works showcasing research made with the first toolbox, at a world-leading festival, parts of which were broadcast by the BBC with short interviews explaining the research aspects of the works, with all the works then made public on YouTube;
- supported the aesthetic research of 7 other pieces in which elements of the toolset were partially tested;
- released 4 other source-code projects, plus the 3 codebases from the papers above;
- supported the emergence of the first two independent coding projects to come out of the community, which in turn fed back ideas for interface research.
The project is on track to provide new affordances in sound manipulation, in real time and in deferred time, and, most importantly, integrated at the right level of granularity in creative coding researchers’ workflows, yielding new works, new sounds, and new questions about taxonomies, hybridisations, and interpolations. The second cycle is underway and will produce:
- another toolbox, focused on data manipulation, providing a curated set of ‘big data’ tools for small datasets (see the sketch after this list), again for all the platforms defined above;
- the underlying C++ infrastructure code, with a rationale for how it constitutes a potentially reusable codebase for such interdisciplinary research;
- two more plenaries, which should yield 8 more videos;
- one more concert, consisting of five works and their videos;
- five other pieces from the main team with later versions of the toolboxes;
- various other papers on DSP research, interface advances, and the creative processes of techno-fluent composers;
- a finished, improved version of the community-building tools, with discourse.flucoma.org going fully public and learn.flucoma.org fully developed and populated.
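Since the second toolbox is still in design, the following sketch only illustrates, on assumed names and synthetic data, the flavour of ‘big data’ tools applied to a small dataset of per-sound descriptors: standardising the table, projecting it to two dimensions for browsing, and clustering it into a rough taxonomy. It is not the toolbox's API:

```python
# Illustrative sketch of "big data" tools on a small dataset of
# per-sound descriptors (rows = sounds, columns = descriptor values).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(200, 13))         # stand-in for a real corpus

X = StandardScaler().fit_transform(descriptors)  # put columns on comparable scales
xy = PCA(n_components=2).fit_transform(X)        # 2-D map for visual browsing
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)  # rough taxonomy

print(xy[:3], labels[:10])
```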