Hearing from Within a Crossfade by Lewis Wolstanholme

This installation explores spatialised deconstructions of timbre, using various techniques to create immersive transitions between different sonic materials. Primarily, this work employs the Joint Time-Frequency Scattering transform, alongside other techniques for neural audio synthesis. This spatialisation process creates an atmospheric sonic landscape which highlights the modulatory and transitory characteristics of a sound at distinct locations within a space.


Presented by: Lewis Wolstanholme

Biography

This installation explores spatialised deconstructions of timbre, using various techniques to create immersive transitions between different sonic materials. Primarily, this work employs the Joint Time-Frequency Scattering (JTFS) transform, alongside other techniques for neural audio synthesis. The JTFS transform produces a multi-dimensional representation of audio, analysing the spectrotemporal modulations present within a sound, such as frequency and amplitude modulations. The JTFS transform has been shown to closely model how our brains interpret modulatory changes in sonic materials. Using this technique, it is possible to design an iterative resynthesis algorithm, built upon machine learning techniques and gradient descent, which can be used to distort and reshape the form of musical and sonic materials during composition. Similarly, this procedural resynthesis technique has also been applied using the Griffin-Lim algorithm, which utilises Fourier transforms during the resynthesis process.
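As a rough illustration of the iterative resynthesis idea described above, the sketch below runs gradient descent on a waveform so that a chosen feature of it comes to match that of a target sound. The DFT magnitude stands in here for the JTFS coefficients used in the installation (which would typically be matched via automatic differentiation); the function name, learning rate and iteration count are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def resynthesise(target, n_iter=100, seed=0):
    """Gradient-descent resynthesis sketch: start from noise and
    iteratively nudge a waveform so that a feature of it (here the
    DFT magnitude, standing in for JTFS coefficients) matches the
    feature of a target sound."""
    rng = np.random.default_rng(seed)
    N = len(target)
    M = np.abs(np.fft.fft(target))        # target feature
    x = 0.01 * rng.standard_normal(N)     # initialise from quiet noise
    lr = 0.5 / N                          # step size (illustrative)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        mag = np.abs(X) + 1e-12
        # analytic gradient of sum((|X| - M)**2) with respect to x;
        # ifft(g * X) is real because g is symmetric and x is real
        g = (mag - M) / mag
        grad = 2 * N * np.real(np.fft.ifft(g * X))
        x -= lr * grad
    return x
```

Rendering `x` at intermediate iterations, rather than only at convergence, is what exposes the "inner workings" of the process that the installation foregrounds.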

For this work, various recordings and musical fragments have been stitched together to create an immersive, textural and seamlessly evolving sonic palette. To achieve this, the JTFS and Griffin-Lim resynthesis techniques have been used to artificially create long-form passages of audio that transition from one sonic fragment to another. By rendering the audio produced at every step of the gradient descent process, it is possible to portray the inner workings of these resynthesis techniques and to create passages of audio that emphasise the transitory process from one source material to another. The products of these resynthesis techniques are then spatialised according to the characteristic transform used; in the case of the JTFS transform, audio fragments are spatialised relative to their underlying spectrotemporal modulations. This spatialisation process creates an atmospheric sonic landscape which highlights the modulatory characteristics of a sound at distinct locations within a space.
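The "render every step" idea can be caricatured in a few lines: interpolate between the analysed representations of two sources and resynthesise each intermediate state, concatenating the renders into one long-form transition. The sketch below does this with plain STFT magnitudes and borrowed phase rather than JTFS or Griffin-Lim resynthesis, so it is a toy stand-in for the technique described above; all names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def render_transition(x_a, x_b, steps=8, nperseg=256):
    """Concatenate resynthesised intermediate states between two
    equal-length sources, crossfading in the STFT-magnitude domain."""
    _, _, Za = stft(x_a, nperseg=nperseg)
    _, _, Zb = stft(x_b, nperseg=nperseg)
    renders = []
    for k in range(steps):
        a = k / (steps - 1)               # interpolation position, 0 -> 1
        mag = (1 - a) * np.abs(Za) + a * np.abs(Zb)
        # borrow phase from whichever source dominates this step
        phase = np.angle(Za if a < 0.5 else Zb)
        _, x = istft(mag * np.exp(1j * phase), nperseg=nperseg)
        renders.append(x)
    return np.concatenate(renders)
```

In the installation, each intermediate state comes from an optimisation step of the resynthesis itself, so the transition traces the algorithm's trajectory rather than a fixed interpolation schedule.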

This work has been produced in collaboration with the technologist Christopher Mitcheltree, who has been developing a new approach to employing the JTFS transform during the creative process. Christopher is a PhD researcher at the Centre for Digital Music at Queen Mary University of London, and has also worked on commercial projects such as Neutone. This work also builds upon many of the techniques originally presented in the 2023 AES paper ‘Hearing from Within a Sound’.