Periodic Reporting for period 4 - FluCoMa (Fluid Corpus Manipulation: Creative Research in Musical Mining of Large Sound/Gesture Datasets through Foundational Access to the Latest Advances of Signal Decomposition.)
Reporting period: 2022-03-01 to 2023-02-28
On the one hand, techno-fluent researchers are now well established in the field, and portable, affordable computing devices are faster than ever. Moreover, never before has the creative community been so thriving: the age of digital sound production and its virtual, web-based dissemination has helped many communities grow around creative coding software in a decentralised way. This thriving research community regards computer programming, albeit at a high level, as part of its creative research workflow to explore novel sonic worlds. On the other hand, the same conditions have helped scientists push the boundaries of what is possible within the realm of DSP, both in breadth and in depth. The world of telecommunications is still at the forefront of this research, but everyday uses of such technology barely scratch the surface of what would be creatively possible to achieve. We are surrounded by these algorithms, in our telephones, web browsers, and other technology, yet digital artists and researchers struggle to obtain the deeper access to them that would facilitate wider creative research use.
One such advance is signal decomposition: a sound can now be separated into its transient, pitched, and residual constituents. These potent algorithms are partially available in closed software, or in laboratories, but not at a suitable level of modularity within the coding environments used by creative researchers (Max, PureData, and SuperCollider) to allow ground-breaking sonic research into a rich, unexploited area: the manipulation of large sound corpora. Indeed, with the access to, creation of, and storage of large sound banks now commonplace, novel ways of abstracting and manipulating them are needed to mine their inherent potential.
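The layered decomposition described above can be illustrated, in much-simplified form, by median-filtering source separation in the spirit of Fitzgerald's harmonic/percussive technique. This is a generic sketch under assumed parameters, not the FluCoMa implementation; the function name and defaults here are illustrative only.

```python
import numpy as np
from scipy.signal import medfilt2d, stft, istft

def decompose(signal, fs=44100, nperseg=1024, kernel=17):
    """Split a signal into rough 'pitched', 'transient', and residual layers.

    Illustrative only: real toolsets expose far finer control.
    """
    f, t, spec = stft(signal, fs=fs, nperseg=nperseg)
    mag = np.abs(spec)
    # Median-filter the magnitude spectrogram along time to keep steady
    # (pitched) energy, and along frequency to keep vertical transients.
    harm = medfilt2d(mag, (1, kernel))   # smooth across time frames
    perc = medfilt2d(mag, (kernel, 1))   # smooth across frequency bins
    total = harm + perc + 1e-12
    # Soft masks distribute each bin's energy between the two layers.
    pitched = istft(spec * harm / total, fs=fs, nperseg=nperseg)[1]
    transient = istft(spec * perc / total, fs=fs, nperseg=nperseg)[1]
    n = len(signal)
    # Whatever neither mask captures is, by construction, the residual.
    residual = signal - pitched[:n] - transient[:n]
    return pitched[:n], transient[:n], residual
```

Because the residual is defined as the remainder, the three layers always sum back to the input exactly, which is one desirable property of such decompositions.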
The Fluid Corpus Manipulation project tackles this issue by bridging the gap between DSP and music research, empowering techno-fluent aesthetic investigators with a toolset for signal decomposition and data manipulation within the software environments they have mastered, so that they can experiment with the new sound and gesture design untapped in large corpora. This research is therefore carried out by an interdisciplinary team of composers and programmers, proposing new tools and approaches to the exploration of large corpora. By bringing some of the tools of big data to relatively small datasets, it enables the creative researchers in this project, and subsequently a whole community, to develop novel ways to manipulate, abstract, and hybridise their personal datasets of sounds and gestures, and to discover new sounds and gestures through novel approaches to old problems, whether acoustic limits, browsing concerns, classification, or variation making, using machine listening and machine learning to think holistically about these problems of taxonomy and hybridisation.
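The corpus-browsing side of this workflow can likewise be sketched generically: given one feature vector per sound, nearest-neighbour search over standardised features is one minimal way to navigate a small personal corpus. The corpus, feature names, and values below are hypothetical, and this is not FluCoMa's API.

```python
import numpy as np

def nearest_sounds(corpus, query, k=3):
    """Return the k corpus entries closest to a query feature vector."""
    names = list(corpus)
    feats = np.array([corpus[n] for n in names], dtype=float)
    # Standardise each feature so no single dimension dominates the metric.
    mu, sd = feats.mean(axis=0), feats.std(axis=0) + 1e-12
    z = (feats - mu) / sd
    q = (np.asarray(query, dtype=float) - mu) / sd
    order = np.argsort(np.linalg.norm(z - q, axis=1))
    return [names[i] for i in order[:k]]

# Hypothetical corpus: per-sound (spectral centroid in Hz, loudness in dB).
corpus = {
    "bell":  (3200.0, -18.0),
    "bass":  ( 180.0, -10.0),
    "snare": (2500.0,  -8.0),
    "drone": ( 220.0, -25.0),
}
```

For example, querying with a low centroid and moderate loudness, e.g. `nearest_sounds(corpus, (200.0, -11.0), k=1)`, returns `["bass"]`; scaling the features first keeps the Hz-valued centroid from swamping the dB-valued loudness.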
The timeliness of the project could hardly be better: the convergence of a community of eager techno-fluent composers, the maturity of the DSP algorithms, the proliferation of rich corpora, and the available computing power provide the perfect ground for this project to propose progressive sonic research possibilities. Bridging the gap between the communities of creative and DSP researchers is a major challenge, yet FluCoMa proposes a foundational approach and methodological insight to tackle it, providing tools at the right level to creative researchers and direct, critical, subversive feedback to the scientists.
The project's contribution is multifaceted, anchored around the ecosystem of technical means for fluid corpus manipulation: 1) a set of software extensions for the three leading creative coding environments used for musicking; 2) an online knowledge base made of help files, tutorials, analyses of works, interviews, and workshop material; 3) a discussion forum providing the emerging community with a platform for exchanging ideas around critical practice research in creative coding and the programmatic mining of personal sound banks.
The project has:
- published many iterations of the toolset online as extensions for the leading creative coding environments (Max, PureData, SuperCollider) and as a command-line interface, on the three leading operating systems (macOS, Windows 10, Linux), along with the underlying code architecture that allows such a breadth of interdisciplinary research into algorithm and interface interaction; the toolset is now in the public domain and at its sixth update;
- released three community-building tools: a third version of an online learning resource (learn.flucoma.org) that empowers musicians with the knowledge required to master and subvert their new tools, including help, tutorials, in-depth articles, and video podcasts with leading practice-researchers in the field of creative coding; an online forum for discussing the various uses of the tools (discourse.flucoma.org); and a guide to codebase maintenance (develop.flucoma.org);
- published 25 papers, of which 22 are peer-reviewed;
- produced two concerts totalling 13 works showcasing research made with the two toolboxes, both at a world-leading festival. The first concert was broadcast in part by the BBC, with short interviews explaining the research aspects of the works. All works were made public on YouTube;
- made public 75 videos covering the key elements of the plenaries, tutorials, pieces, and keynotes;
- supported the aesthetic research of 9 other pieces in which elements of the toolset were partially tested;
- released 4 other code sources, plus the 5 codebases from the papers above;
- supported several independent coding projects emerging from the community, which in turn fed ideas back into the interface research.