Conclusion

The written component of this PhD has contextualised and reflected upon a portfolio of five creative projects that trace the development of my computer-aided compositional practice with content-aware programs over the last four years. For each project, I have documented the underlying compositional philosophy and process, traced the evolution of associated technological tools and creative-coding practice, and recorded the growth of musical ideas specific to that project. From the tools presented alongside the varying musical outcomes, I hope to have answered the question: How can I use content-aware programs to compose with digital samples in a studio-based computer-aided compositional practice?, albeit perhaps not in a way that admits of a concise statement.

Two sub-questions emerged from my initial question: What compositional procedures do I delegate to either human or computer, and what is the balance between the two? and How do those delegated procedures support and interact with my decision making in composition? Reflecting on the portfolio, I see a clear delineation between the first three projects, Stitch/Strata, Annealing Strategies and Refracted Touch, and the last two, Reconstruction Error and Interferences, in how they grapple with these questions. With the benefit of hindsight, I consider the first three projects as almost “blind” experimentation: an attempt to consolidate which capabilities of the computer I deemed valuable in computer-aided composition, as well as which forms of content-awareness were interesting and useful to me. As a result, each of these initial projects explored an entirely individual approach, and imbued the computer with different decision-making capabilities, without necessarily working towards the consolidation of technological forces in my practice.

Stitch/Strata revealed to me that interacting with the computer by creating automated procedures for organising atomic units of sound is incompatible with the level of intervention and control that I want to have over those processes. Furthermore, high-level structures are complex to generate in this manner because, for me, they are predicated on the existence of a knowable model that can describe the interaction between multiple, or indeed all, formal levels in a composition. My initial goal for Stitch/Strata was to make the computer responsible for many fundamental aspects of the work by using a model that represents coherent form across many hierarchical levels. Frustrated with the lack of satisfying results emerging from this almost complete deference to the computer, I reined its agency back, forming a workflow in which I was offered solutions to ad-hoc musical questions and tasks. This was revelatory, and accelerated the compositional process once I was able to “have my say” with the aid of the computer. An important facet of this experience was how the dialogical nature of my interactions with the computer changed from that point on. These dialogues were situated in manoeuvres between listening, coding, and responding to the computer’s perceptually relevant organisations of the sound materials.

On the other hand, Annealing Strategies involved an almost complete cognitive offloading of compositional decision-making to the computer. I had few opportunities to intervene in either the generative process undertaken by the simulated annealing algorithm or, as a result, the outcome of the music it produced. Instead, my control was exercised through the management of constraints and parameters, as well as curatorial control over which “iteration” of this process would become the final work. This balance of agency was successful to an extent, despite requiring me to relinquish my ability to intervene almost entirely. However, I did not feel that this workflow was generalisable, nor that it could be applied to many different future compositional projects.

Refracted Touch explored agent behaviours in terms of machine listening and the ability of the computer to interact and react in real time according to a set of musical and parametric constraints. In the reflection on this project, I outline a number of issues pertaining to my workflow within the paradigm of live electronics, illustrating the complexities of simultaneously managing improvisation, programming and the clarification of my musical intentions. Ceding agency to the computer in this way offered little in return, and thus I view it as the least successful piece in the portfolio.

The last two projects in the portfolio are characterised by a much more refined and focused technological approach than the first three, using FTIS, ReaCoMa and the DAW as creative interfaces to the computer. Thus, the technological efforts of these works were invested less in creating new, project-specific tools and more in incrementally improving an interconnected framework of systems for analysis (FTIS/Python), manipulation (ReaCoMa) and representation (REAPER).

This combination of technologies influenced the procedures and compositional thinking that I deferred to the computer, centring its function on organising sample-based materials according to perceptual similarity. Through this, the computer evolved to become a co-listener and a facilitator of my engagement with these materials, structuring different pathways to the solution of ad-hoc compositional problems. This approach helped higher-level musical thinking reach internal consistency through feedback and through experimentation with the computational outputs.

Rather than the computer being responsible for generating certain compositional aspects itself, the form of the works and the organisation of materials within those forms emerged through a dialogue between me and the computer. In this configuration, I retained ultimate control over decision-making and used the computer as a cognitive resource to influence me. In comparison to projects at the start of this PhD research, particularly Annealing Strategies, I more stringently limited the agency of the computer and constrained what it could concretely contribute to the sonic aspects of a composition. In these last two projects it never created fixed sections of the music, nor was it designed to perform low-level musical control. Instead, it suggested what could be done with materials by presenting them in different relationships, arrangements and structures through machine listening. Thus, the last two projects embody a human-computer configuration based on the notion of querying, as a way both of retrieving answers to compositional questions and of helping to clarify and shape new and connected queries.

Contribution to Knowledge

This PhD demonstrates a number of bespoke workflows using content-aware programs to inform and guide compositional decision-making in a computer-aided practice. The focus of the research has been on the mechanisms by which a content-aware machine can function as a heuristic for compositional ideas at various stages of conceptual and aesthetic development. In my engagement with listening machines, I create a dialogue between workflow, creative coding, listening, taste and compositional decision-making in a computer-aided, studio-based practice.

The primary contributions of this research are my workflows, which are novel in the current landscape of computer-aided composition. Currently, there is little research that involves using the DAW alongside bespoke lower-level technologies to support computer-led exploration of sample-based materials. I asserted in [2. Preoccupations] that much contemporary computer-aided research is influenced by the models implicit in instrumental score-writing. This PhD research diverges from that approach, and exposes some of the ways in which human-computer authored compositions can be mediated through the hybridisation of human listening and machine listening, rather than through generative or procedural strategies.

I also demonstrate how machine listening and learning can be synthesised into a creative practice on a technical level. A large portion of my compositional process is devoted to building combinations of processes so that the computer can produce representations of corpora and their internal relationships, based on perceptual similarity. While I am mostly concerned with the discovery of sound-sets through their commonalities, there is naturally room to explore the opposite, as well as the spectrum between the two. As such, the techniques and technologies I have employed here for sound searching, analysis and machine learning might present a model from which other practitioners can derive their own approaches, perhaps because they share conceptual or aesthetic goals similar to mine, or because they desire specialised ways of curating sample-based materials.
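By way of illustration, the following minimal sketch (in Python, with librosa and scikit-learn standing in for the analysis and learning stages of my actual pipeline) groups the files of a hypothetical corpus folder by a crude proxy for timbral similarity. The corpus path, the choice of descriptor and summary statistic, and the number of clusters are all illustrative assumptions rather than the settings used in the portfolio.

```python
# A minimal sketch of descriptor-based corpus clustering.
# librosa and scikit-learn stand in for the analysis stages of my
# pipeline; the corpus path and parameters are illustrative assumptions.
from pathlib import Path

import librosa
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

corpus = sorted(Path("corpus").glob("*.wav"))  # hypothetical corpus folder

# Summarise each sample as the mean of its MFCC frames:
# a crude but workable proxy for perceptual (timbral) similarity.
features = []
for sample in corpus:
    y, sr = librosa.load(sample, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    features.append(mfcc.mean(axis=1))

X = StandardScaler().fit_transform(np.array(features))

# Group perceptually similar samples; five clusters is an arbitrary
# starting point from which to begin listening and refining.
labels = AgglomerativeClustering(n_clusters=5).fit_predict(X)

for sample, label in zip(corpus, labels):
    print(label, sample.name)
```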

Elements of my creative coding can be assimilated into other research and creative practice. The tools and technology that I have made are open source, and can be hacked and extended by other techno-fluent musicians. ReaCoMa, in particular, has gained considerable public popularity and use, bridging ways of thinking about computer-led sound decomposition with the approachable and familiar environment of the DAW. Amongst several “high-traffic” internet communities, ReaCoMa has been well received, and its adoption into other artists’ toolsets is evident in the questions and discussions spawned by this shared engagement. Such discussions can be found in this Fluid Corpus Manipulation (FluCoMa) discourse thread, this lines forum discourse thread, and on the REAPER forums.

In addition to this, individual composers have incorporated ReaCoMa into their compositional workflows and contributed their own suggestions and improvements to the project. Hans Tutschku has provided a significant amount of feedback and continually returns to ReaCoMa as a tool. This engagement has taken place in our personal communications, as well as in several different threads of discussion on the FluCoMa discourse, such as “Transient slice question”. Natasha Barrett and Martin Parker have both contributed to ReaCoMa’s development by filing technical issues discovered in their experimentation with it. I have also serendipitously discovered composers incorporating ReaCoMa into their practice: the album pulse machine xor moves (2021) by Crank Satori was made, in part, using ReaCoMa for mixing and post-processing on tracks 1, 8 and 11.

FTIS also contributes to the wider landscape of computer-aided composition software, offering combined and integrated implementations of machine-listening, machine-learning and machine-analysis technologies that other software does not. A major benefit of this unification of technologies is that it allows the outputs of these processes to be readily incorporated into well-established paradigms of composition. In this thesis, I demonstrate how FTIS can be used programmatically to construct REAPER sessions in response to data, and how its JSON outputs can be utilised in Max for audition processes. This interoperability of FTIS is novel, and supports merging the complexities of machine listening and learning with my compositional method, as well as potentially that of others. Given its open architecture, as well as its low-level scripting and command-line interface, FTIS can readily be embedded in other environments or languages, and can be extended to incorporate other ways of working and different types of meaningful output. Other machine-listening algorithms could be added as analysers, or additional environmental adapters could be created so that FTIS can operate fluently with other software or be embedded into existing practices.
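As a hedged illustration of this interoperability, the sketch below converts clustered analysis data into a skeletal REAPER project, one track per cluster. The JSON layout is a hypothetical stand-in for FTIS output, and the .RPP chunks are pared down to a bare minimum, on the assumption that REAPER will supply defaults for the many fields a full project file would normally carry.

```python
# A minimal sketch of turning clustered analysis data into a REAPER
# session: one track per cluster, items laid end to end. The JSON
# layout ({"path/to/file.wav": cluster_label, ...}) is a hypothetical
# stand-in for FTIS output, not its actual format.
import json
from collections import defaultdict

import soundfile as sf  # used only to read each sample's duration

with open("clusters.json") as f:  # hypothetical FTIS output file
    clusters = json.load(f)

by_cluster = defaultdict(list)
for path, label in clusters.items():
    by_cluster[label].append(path)

# Assemble pared-down .RPP chunks; real project files carry many
# more fields, which REAPER is assumed to fill in with defaults.
chunks = ['<REAPER_PROJECT 0.1 "6.0" 0']
for label, paths in sorted(by_cluster.items()):
    chunks.append(f'  <TRACK\n    NAME "cluster-{label}"')
    position = 0.0
    for path in paths:
        length = sf.info(path).duration
        chunks.append(
            f'    <ITEM\n'
            f'      POSITION {position}\n'
            f'      LENGTH {length}\n'
            f'      <SOURCE WAVE\n        FILE "{path}"\n      >\n'
            f'    >'
        )
        position += length
    chunks.append('  >')
chunks.append('>')

with open("clusters.rpp", "w") as f:
    f.write("\n".join(chunks))
```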

At the time of submitting this thesis, there are several groups of artists and creative coders interested in and researching the artistic affordances of technologies similar to those deployed in my practice. MIMIC, FluCoMa and AI Music Creativity (the joining of MuMe and CSMC) are some key examples of projects whose research aims overlap, to varying degrees, with my own. My writing, code and creative works have been disseminated to this interconnected community of researchers through published journal articles and conference presentations at the International Computer Music Conference and the AI Music Creativity conference (see Bradbury (2017), Bradbury (2020)), as well as formal presentations at key research events, such as the Fluid Corpus Manipulation plenary in November 2019.

Further Research and the Future of My Practice

Over the course of this PhD, developing various technological tools has furnished me with a sophisticated creative-coding practice that is deeply intertwined with my musical thinking and compositional decision-making. At the start of the PhD I was limited in my programming abilities and used only Max. As I became interested in machine learning and machine listening, I naturally had to expand my knowledge of other languages, libraries and ecosystems, such as Python and scikit-learn, to meet the demands of my changing and evolving interests. This ultimately gave rise to the skills upon which FTIS, which draws on several of these technologies, could be built.

In the future, developing FTIS will be essential to the organic evolution of my overall artistic practice, as opposed to the creation of new tools on a project-by-project basis. While the start of this PhD saw several technological shifts between projects, I have now found an extensible framework and stable methodology for drawing technology into my artistic thinking that does not require me to “reinvent the wheel” in response to new musical ideas and aims. I have already put in place the necessary architectural work allowing FTIS to be extended programmatically and with new analysers, without having to modify the underlying system and framework itself. This is described in [5.3.3 Architecture]. Referring back to the graph of connected compositional actions presented in [2.1.2.3 The Feedback Loop], it seems restrictive to bind this network of interactions that characterise the human-computer dialogue to individual projects. Moving forward, I intend that my creative-coding practice will cross-pollinate different aesthetic and compositional ideas as projects are undertaken, opening dialogues between pieces and forming a web of interconnected works situated around FTIS.
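To give a sense of the shape of this extensibility without restating [5.3.3 Architecture], the schematic sketch below shows the general pattern rather than FTIS’s actual internals: new analysers subclass a small base class and are chained together, so the framework itself never needs modification. The class and parameter names are hypothetical.

```python
# A schematic sketch of an extensible analyser chain, in the spirit
# of (but not identical to) the architecture described in [5.3.3
# Architecture]: new analysers are added by subclassing, never by
# modifying the framework itself.
from abc import ABC, abstractmethod


class Analyser(ABC):
    """Base class: each analyser transforms the output of the last."""

    @abstractmethod
    def process(self, data: dict) -> dict:
        ...


class Chain:
    """Runs analysers in sequence, feeding each one's output onward."""

    def __init__(self, *analysers: Analyser):
        self.analysers = analysers

    def run(self, data: dict) -> dict:
        for analyser in self.analysers:
            data = analyser.process(data)
        return data


# A hypothetical user-defined analyser slots in with no framework changes.
class LoudnessFilter(Analyser):
    def __init__(self, threshold: float):
        self.threshold = threshold

    def process(self, data: dict) -> dict:
        # Keep only samples at or above the loudness threshold (dB).
        return {k: v for k, v in data.items() if v >= self.threshold}


result = Chain(LoudnessFilter(threshold=-30.0)).run({"a.wav": -12.0, "b.wav": -45.0})
print(result)  # {'a.wav': -12.0}
```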

One development of FTIS not explored in depth in this PhD is visualisation and visual interaction, which could open up novel ways of engaging with processing pipelines and corpora for composition. As it currently stands, the only way to interact with the outputs of FTIS is to draw them into other environments, such as Max or REAPER, in order to inspect the results aurally. This has been an effective workflow so far, because it situates my engagement with complex data within the realm of listening and soundful experimentation. That said, I have in some instances visualised data, such as the UMAP results described in [4.4.2.4 Dimension Reduction], and this has been useful in dealing with the scale and complexity of audio-descriptor analysis and machine-learning processes. I anticipate that this could be explored more thoroughly, perhaps by creating a “front-end” interface to FTIS which allows both the data and a representation of a corpus to be portrayed visually, and even operated on through a set of intuitive tactile metaphors such as cutting, drawing or reshaping, as one would a physical material. Such an interface would allow me graphically to transform and curate samples from a corpus within the immediate frame of visual feedback, enabling a more rapid and intuitive response to the outputs of machine listening and learning than is available with the current version of FTIS.
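As a pointer to what such visual exploration might involve, the sketch below projects a descriptor matrix to two dimensions with umap-learn and renders it as a scatter plot. The matrix and cluster labels here are random placeholders standing in for the outputs of an analysis pass such as the one sketched earlier.

```python
# A minimal sketch of visualising a corpus: project a descriptor
# matrix X to 2D with UMAP and scatter-plot it, colouring points
# by cluster label. X and labels are random placeholders for the
# outputs of a real analysis pass.
import matplotlib.pyplot as plt
import numpy as np
import umap

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))         # placeholder descriptor matrix
labels = rng.integers(0, 5, size=200)  # placeholder cluster labels

embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="tab10", s=12)
plt.title("Corpus in descriptor space (UMAP)")
plt.show()
```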

In addition to this, I anticipate that a front-end interface could support more rapid transfer of computer outputs between different creative-coding and composition environments than is currently possible. Instead of having to rely on scripts, such as when I imported cluster data into REAPER, I imagine a visual workflow based on rapidly moving data between FTIS and REAPER, for example. A potential innovation would be to have FTIS automatically and interactively update a REAPER or Max session, providing a tighter feedback loop between scripting, visualisation and audition.
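One plausible mechanism for such a loop, sketched speculatively below, is to watch an FTIS output file and notify a listening environment whenever it changes. The watchdog and python-osc libraries, the file name and the OSC port are all assumptions standing in for whatever transport a real implementation would use; the receiving Max patch is assumed to listen for OSC on port 7400 and reload the file on request.

```python
# A speculative sketch of the tighter feedback loop described above:
# watch a (hypothetical) FTIS output file and, whenever it changes,
# notify a Max patch assumed to be listening for OSC on port 7400.
import time

from pythonosc.udp_client import SimpleUDPClient
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCHED = "clusters.json"  # hypothetical FTIS output file
client = SimpleUDPClient("127.0.0.1", 7400)


class NotifyMax(FileSystemEventHandler):
    def on_modified(self, event):
        # Tell the patch to reload whenever the watched file is rewritten.
        if event.src_path.endswith(WATCHED):
            client.send_message("/ftis/reload", WATCHED)


observer = Observer()
observer.schedule(NotifyMax(), path=".", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```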

This PhD has catalysed a prolonged phase of personal aesthetic, technological and philosophical development. I did not imagine that undertaking this research would shift the focus of my practice in the way that it did, or that its technicity would develop toward the use of such low-level programming tools. Indeed, the conceptual and poetic aspects of my practice evolved alongside my desire to harness content-aware programs and to fuse their agency with mine, leading me to become proficient in languages and paradigms that were new to me. I began this research as a technical novice, limited by a lack of the confidence and knowledge that I possess now, at the end of this PhD. The notion of “hacking” other people’s code, or writing my own from scratch to create bespoke software for composition, now feels like second nature, and exploring the intricacies of my compositional thinking is a mindset that I cannot separate from the creation of content-aware programs. I aim to cultivate this transformative experience further, embracing the entangled nature of coding and sonic exploration in my work. I am excited to see where this intertwined process will take me, and to observe how my composing with the computer will develop in the future.