List of Interactive Elements

This list contains references to the interactive elements of this thesis. It does not include elements that link to external content such as YouTube or Vimeo; that information is instead provided in the [References], which comprise a list of hyperlinks and videos as well as a discography of the works mentioned.

List of Audio Elements

Opening gesture of Stitch/Strata
3:31 to 4:04 in "segmenoittet" from Reconstruction Error
AUDIO 4.1.1
Stitch Strata (Stereo Mixdown)
AUDIO 4.1.2
Adversarial searching
AUDIO 4.1.3
A single output from the accum_phrase process
AUDIO 4.1.4
A single output from the jank process (1)
AUDIO 4.1.5
A single output from the jank process (2)
AUDIO 4.1.6
A single output from the search_small process (1)
AUDIO 4.1.7
A single output from the search_small process (2)
AUDIO 4.1.8
Glitch-like and robotic phrases produced by the jank process
AUDIO 4.1.9
Constrained and tense textures created with the jank process
AUDIO 4.1.10
search_small outputs interspersed
AUDIO 4.1.11
Several accum_phrase outputs concatenated together
AUDIO 4.1.12
Dynamic descriptor query outputs (1)
AUDIO 4.1.13
Dynamic descriptor query outputs (2)
AUDIO 4.1.14
Dynamic descriptor query outputs (3)
AUDIO 4.2.1
Annealing Strategies (738_noise.simplex_529135023)
AUDIO 4.2.2
AUDIO 4.2.3
AUDIO 4.2.4
AUDIO 4.2.5
Annealing Strategies (738_noise.simplex_529135023)
AUDIO 4.3.1
Refracted Touch
AUDIO 4.3.2
Pre-composed gestures generated by the rt.bits module group, isolated in the performance recording.
AUDIO 4.4.1
Reconstruction Error
AUDIO 4.4.2
ibuffer~ databending
AUDIO 4.4.3
Samples generated using the mosh command line tool
AUDIO 4.4.4
libLLVMAMDGPUDesc.a with my manual segmentation
AUDIO 4.4.5
libLLVMAMDGPUDesc.a segmented with the fluid.noveltyslice~ algorithm
AUDIO 4.4.6
AUDIO 4.4.7
Cluster 131 .maxwave sounds
AUDIO 4.4.9
AUDIO 4.4.10
Material from cluster 165 processed with the transient extraction algorithm in three different ways. 2:07 in X86Desc.a.
AUDIO 4.4.11
AUDIO 4.4.12
A short section of composed music using various clusters of mechanical material.
AUDIO 4.4.13
AUDIO 4.5.1
AUDIO 4.5.2
Induction recording of e-reader while switching the aeroplane mode on and off
AUDIO 4.5.3
Induction recording of mobile phone without user interaction
AUDIO 4.5.4
Shorter induction recording taken from a game controller
AUDIO 4.5.5
Several classification results
AUDIO 4.5.6
Kindle_04_08.wav segment
AUDIO 4.5.7
Short draft of a piece composed with sounds derived by corpus filtering
AUDIO 4.5.8
Various loop experiments using clusters such as 1, 2, and 37
AUDIO 4.5.9
P 08_19
AUDIO 4.5.10
A sketch created from the output of
AUDIO 4.5.11
Central query sample for
AUDIO 4.5.12
Sketch using a combination of sounds from initial sketch (AUDIO 10) and from output.
AUDIO 4.5.13
Three "anchor" sounds
AUDIO 4.5.14
AUDIO 4.5.15
E-reader sample section displaying active and static states (06-Kindle Off-200513_1547.wav)
AUDIO 4.5.16
ReaCoMa sorting applied to small segments of active gesture
AUDIO 4.5.17
Foundational layers
AUDIO 4.5.18
Intuitively composed rhythmic constructs
AUDIO 5.3.1
Noise-based layer created by chaining ReaCoMa scripts for interactive decomposition in REAPER.

List of Code Elements

CODE 2.1
JSON output describing computationally generated clusters of perceptually similar audio sample segments.
CODE 3.1
A configuration example for AudioGuide
CODE 4.1.1
CODE 4.1.3
CODE 4.3.1
Cubic Non-Linear Distortion
CODE 4.3.2
Patch comments in the first version of the Refracted Touch patch
CODE 4.4.1
Meta-analysis data describing the shared membership of clusters between the 250-cluster and 500-cluster analyses.
CODE 4.4.2
Meta-analysis data describing the shared membership of clusters between the 500-cluster and 1600-cluster analyses.
CODE 4.4.3
Lua script for importing a cluster of samples as a track of contiguous samples into REAPER.
CODE 4.5.1
Corpus filtering by loudness example using the loudness() method of a Corpus() object.
CODE 4.5.2
Adding two FTIS Corpus objects together using operator overloading. The variable multi_corpus is a new corpus resulting from adding corpus_one to corpus_two.
CODE 5.1.1
Invocation of mosh to process a single file
CODE 5.1.2
Command line invocation with bitDepth, numChans and sampRate arguments
CODE 5.2.1
Didactic FTIS example for segmenting a corpus
CODE 5.2.2
UMAP Analyser
CODE 5.2.3
Creating a FTIS World() and defining the sink
CODE 5.2.4
Connecting analysers together with the >> operator and building the chain
CODE 5.2.5
Calling the run() method of the World() instance
CODE 5.2.6
An example of automatically produced metadata
CODE 5.2.7
Pseudocode example connecting three analysers together
CODE 5.2.8
Pseudocode example where the Median() is replaced by the Average() analyser
CODE 5.2.9
Hash creation function in FTIS
CODE 5.2.10
The create_identity() function

List of Demo Elements

DEMO 2.1
A network of actions and outputs. Observing different outputs can lead to compositional decision-making, or lead back to further computational exploration and the creation of new outputs.
DEMO 3.1
Interactive audition of harmonic-percussive source separation
DEMO 4.1.1
Random recombination of voice segments
DEMO 4.1.2
z12 algorithm interactive example
DEMO 4.4.1
The effect of UMAP parameters on the projection characteristics
DEMO 4.4.2
Agglomerative Clustering with different levels of granularity
DEMO 4.5.1
k-d tree sample searching
DEMO 5.2.1
The effect of UMAP parameters on the projection characteristics
DEMO 5.3.1
Transient Extraction in sys.ji_.
DEMO 5.3.2
Transient Extraction in X86Desc.a.
DEMO 5.3.3
Harmonic-percussive source separation in _.dotmaxwave.
DEMO 5.3.4
Non-negative matrix factorisation decomposition in P 08_19.

List of Image Elements

A screenshot demonstrating the dada.kaleido object and its geometric interface.
Diagram from McLean and Wiggins (2010) depicting the bricolage programming feedback loop.
A screenshot showing computer-generated outputs of concatenated phonetic segments. The letter of each file represents a different concatenation strategy, with multiple variations for each one.
Samples projected into a two-dimensional space using dimension reduction on mel-frequency cepstral coefficient analysis. This data was used in the Reconstruction Error project.
A preview of the texture map found at
An example of a breakpoint interpolation scheme between keyframes.
IMAGE 4.1.1
Odessa subsumption architecture taken from Linson et al. (2015). The boxes represent modules belonging to one of the three behavioural layers: "Play", "Diverge" and "Adapt".
IMAGE 4.2.1
A visual depiction of the interconnected Fourses oscillators.
IMAGE 4.2.2
A flowchart describing the steps of the simulated annealing algorithm.
IMAGE 4.2.3
The highest data point in weather data being located through simulated annealing.
IMAGE 4.2.4
The travelling salesman problem being solved by simulated annealing.
IMAGE 4.3.1
Adaptive granular synthesis module.
IMAGE 4.3.2
Daryl's web-based interface for receiving real-time information about patch states. Text-based prompts can be seen in the middle/top-half of the screen. A large progress bar occupies the centre showing his progress through a state.
IMAGE 4.3.3
Electronics, microphone and amplification configuration for the first Refracted Touch performance.
IMAGE 4.3.4
The feedback module part of the rt.low module group.
IMAGE 4.4.1
Recurrence matrix visualisation for libLLVMAMDGPUDesc.a.wav.
IMAGE 4.4.2
Owen Green answering my question in the FluCoMa discourse.
IMAGE 4.4.3
Example of a novelty curve, taken from Foote (2000, p. 454).
IMAGE 4.4.4
HDBSCAN minimum spanning tree visualisation.
IMAGE 4.4.5
HDBSCAN cluster hierarchy visualisation. Taken from
IMAGE 4.4.6
REAPER session for segmnoittet. The tracks displayed on the left side of the screen contain various clusters and their respective "sub-clusters".
IMAGE 4.5.1
Visual depiction of "cluster segmentation" algorithm.
IMAGE 4.5.2
Clustered segmentation results rendered as a REAPER session.
IMAGE 5.3.1
The user interface for parameter selection is shown after selecting and running one of the Lua scripts.
IMAGE 5.3.2
New takes containing the harmonic and percussive components are appended to the source media item.
IMAGE 5.3.3
fluid-noveltyslice.lua is used with default parameters, rendering new takes at each slice point.
IMAGE 5.3.4
Each item has been tagged using tag-loudness.lua. The results can be quickly viewed by hovering over the note.

List of Video Elements

VIDEO 4.1.2
Querying the corpus of voice segments using dynamic audio descriptor queries.
VIDEO 4.1.3
Gestures created by dynamically querying a corpus of vocal segments based on audio descriptors.
VIDEO 4.1.4
A self-contained static musical behaviour is formed by creating an "anchor" in the descriptor space and momentarily departing and then returning to it.
VIDEO 4.2.2
A screen capture demonstrating the single input interface in practice.
VIDEO 4.3.7
Demonstration of the harmonic-percussive separator process using Daryl's playing as an input signal.
VIDEO 4.3.8
Demonstration of the Short Resonators module.
VIDEO 4.3.10
Demonstration of the granular synthesis module using percussive separation as a source.
VIDEO 4.3.11
Cubic non-linear distortion module with two different samples from Daryl.
VIDEO 4.3.12
Example of accumulation with a sample as input. The original audio is played back through the left channel, and a single sample impulse is triggered in the right channel every time the accumulator resets.
VIDEO 4.3.13
State recognition with the mubu.gmm object.
VIDEO 4.4.10
Using Max and the meta-analysis data to explore the corpus in a structured manner.
VIDEO 4.4.11
Using the ReaScript API to create tracks of samples from cluster data.
VIDEO 4.4.12
Interactive item spacing with Lua accessing the ReaScript API.
VIDEO 4.5.4
Exploring segmented and clustered active material.
VIDEO 4.5.5
Auditioning the output of the script.
VIDEO 4.5.6
Using ReaCoMa to segment and then proliferate the active gesture.
VIDEO 5.3.1
Chaining ReaCoMa processes for interactive decomposition in REAPER.